Tuesday 30 May 2017

Cowbird moms choosy when selecting foster parents for their young


Date:
May 23, 2017
Source:
University of Illinois at Urbana-Champaign
Summary:
Despite their reputation as uncaring, absentee moms, cowbird mothers are capable of making sophisticated choices among potential nests in order to give their offspring a better chance of thriving, a new study shows.

Cowbird moms pay attention to the size of eggs in the nests they choose for egg-laying, a new study finds. Inset: Two cowbird eggs in the nest of a northern cardinal, with two (larger) eggs of its own.
Credit: Photo by Loren Merrill; Inset photo by Scott Chiavacci
Brown-headed cowbirds are unconventional mothers. Rather than building nests and nurturing their chicks, they lay their eggs in the nests of other species, leaving their young ones to compete for resources with the foster parents' own hatchlings. Despite their reputation as uncaring, absentee moms, cowbird mothers are capable of making sophisticated choices among potential nests in order to give their offspring a better chance of thriving, a new study shows.

Brown-headed cowbirds are known to lay their eggs in the nests of more than 200 other bird species of varying sizes, and typically do so after the host bird has laid her own eggs. The new study, led by a team at the University of Illinois, found that when cowbird mothers chose the nest of a larger host bird, they preferred those that held smaller-than-average eggs for that species. Smaller host eggs give the cowbird eggs a better chance of being successfully incubated; smaller host hatchlings mean the cowbird chicks face less competition for food and nest space.
"It implies a level of resolution in cowbird decision-making that people hadn't seen before," said Loren Merrill, a postdoctoral researcher at the Illinois Natural History Survey who conducted the study with INHS scientists Scott Chiavacci and Thomas Benson and Illinois State University researcher Ryan Paitz.
"Scientists originally saw cowbirds as egg dumpers that would put their eggs in any nest they found," Merrill said. "And while that may be the case in some areas, or for some birds, it doesn't tell the whole story. In fact, the more people have looked at cowbird behavior, the more our understanding has evolved of exactly how discriminating cowbirds can be."

The findings are reported in the journal Oecologia.
From April through August for five seasons ending in 2015, the researchers hunted through 16 shrubland sites across Illinois, looking for cowbirds and nests in which cowbirds might place their eggs.
"Nest searching is really fun fieldwork, except when you're trekking through poison ivy- and hawthorn-infested lands," Merrill said. "You either get really lucky and see nesting material in a bird's mouth, and you watch until they lead you to a nest, or you listen and watch for the vocalizations and behaviors the adults use when you are close to a nest."
Cowbirds use similar tactics to scout for host nests.
"They do a lot of skulking around the underbrush," Merrill said. "They'll also perch in an inconspicuous place and just watch. They are cuing in on the behavior of hosts that are building their nests or taking food back to nests."

The research team found nearly 3,000 nests of 34 bird species and checked each nest roughly every three days. Cowbirds placed eggs in more than 400 of these nests. In addition to weighing and measuring cowbird and host eggs in more than 180 nests, the researchers also tested the composition of some cowbird eggs to determine whether female cowbirds give some eggs an added boost, such as a greater proportion of yolk or higher levels of yolk hormones like testosterone.
"The cowbird could adjust allocation to the egg based on the perceived value of that egg. If she thinks the egg is in a good environment, she can invest more," Merrill said. "Or, if the host isn't appropriate and she doesn't think the environment is favorable, she could invest less."
The researchers found no variation in egg investment based on the differences among host species or size variations within a host species, however.

"We didn't see evidence that female cowbirds were adjusting resources in that respect, but that's not to say it isn't happening," Merrill said.
Tracking individual female cowbirds would help scientists better understand how mother cowbirds try to help their offspring.
"It's really hard to do," he said. "But being able to track the individual females over time and identify where they are depositing eggs would provide a lot of insight into how much of what we see is an adaptive allocation strategy, or whether the mother's health and other constraints are driving her behavior."

Story Source:
Materials provided by University of Illinois at Urbana-Champaign. Original written by Trish L. Barker. Note: Content may be edited for style and length.

Ancient DNA evidence shows hunter-gatherers and farmers were intimately linked


Date:
May 25, 2017
Source:
Cell Press
Summary:
In human history, the transition from hunting and gathering to farming is a significant one. As such, hunter-gatherers and farmers are usually thought about as two entirely different sets of people. But researchers reporting new ancient DNA evidence show that in the area we now recognize as Romania, at least, hunter-gatherers and farmers were living side by side, intermixing with each other, and having children.

The skeleton of Burial M95-2 from Schela Cladovei (SC-1).
Credit: Copyright Clive Bonsall
 
In human history, the transition from hunting and gathering to farming is a significant one. As such, hunter-gatherers and farmers are usually thought about as two entirely different sets of people. But researchers reporting new ancient DNA evidence in Current Biology on May 25 show that in the area we now recognize as Romania, at least, hunter-gatherers and farmers were living side by side, intermixing with each other, and having children.
"We expected some level of mixing between farmers and hunter-gatherers, given the archaeological evidence for contact among these communities," says Michael Hofreiter of University of Potsdam in Germany. "However, we were fascinated by the high levels of integration between the two communities as reconstructed from our ancient DNA data."

The findings add evidence to a longstanding debate about how the Neolithic transition, when people gave up hunting and gathering for farming, actually occurred, the researchers say. In those debates, the question has often been about whether the movement of people or the movement of ideas drove the transition.
Earlier evidence suggested that the Neolithic transition in Western Europe occurred mostly through the movement of people, whereas cultural diffusion played a larger role to the east, in Latvia and Ukraine. The researchers in the new study were interested in Romania because it lies between these two areas, presenting some of the most compelling archaeological evidence for contact between incoming farmers and local hunter-gatherers.

Indeed, the new findings show that the relationship between hunter-gatherers and farmers in the Danube basin was more nuanced and complex than either explanation alone allows. The movement of people and the spread of culture aren't mutually exclusive ideas, the researchers say, "but merely the ends of a continuum."
The researchers came to this conclusion after recovering four ancient human genomes from Romania spanning a time transect between 8.8 thousand and 5.4 thousand years ago. The researchers also analyzed two Mesolithic (hunter-gatherer) genomes from Spain to provide further context.
The DNA revealed that the Romanian genomes from thousands of years ago had significant ancestry from Western hunter-gatherers. However, they also had a lesser but still sizeable contribution from Anatolian farmers, suggesting multiple admixture events between hunter-gatherers and farmers. An analysis of the bones also showed they ate a varied diet, with a combination of terrestrial and aquatic sources.
"Our study shows that such contacts between hunter-gatherers and farmers went beyond the exchange of food and artefacts," Hofreiter says. "As data from different regions accumulate, we see a gradient across Europe, with increasing mixing of hunter-gatherers and farmers as we go east and north. Whilst we still do not know the drivers of this gradient, we can speculate that, as farmers encountered more challenging climatic conditions, they started interacting more with local hunter-gatherers. These increased contacts, which are also evident in the archaeological record, led to genetic mixing, implying a high level of integration between very different people."

The findings are a reminder that the relationships within and among people in different places and at different times aren't simple. It's often said that farmers moved in and outcompeted hunter-gatherers with little interaction between the two. But the truth is surely much richer and more varied than that. In some places, as the new evidence shows, incoming farmers and local hunter-gatherers interacted and mixed to a great extent. They lived together, despite large cultural differences.
Understanding why the interactions between these different people led to such varied outcomes, Hofreiter says, is the next big step. The researchers say they now hope to use ancient DNA evidence to add more chapters to the story as they explore the Neolithic transition as it occurred in other parts of the world, outside of Europe.

Story Source:
Materials provided by Cell Press. Note: Content may be edited for style and length.

Why the Sumatra earthquake was so severe


Date:
May 25, 2017
Source:
University of Southampton
Summary:
An international team of scientists has found evidence suggesting the dehydration of minerals deep below the ocean floor influenced the severity of the Sumatra earthquake, which took place on Dec. 26, 2004.

A 'free-fall funnel', part of the drilling process.
Credit: Tim Fulton, IODP / JRSO
 
An international team of scientists has found evidence suggesting the dehydration of minerals deep below the ocean floor influenced the severity of the Sumatra earthquake, which took place on December 26, 2004.
The magnitude-9.2 earthquake and the subsequent tsunami devastated coastal communities around the Indian Ocean, killing more than 250,000 people.
Research into the earthquake was conducted during a scientific ocean drilling expedition to the region in 2016, as part of the International Ocean Discovery Program (IODP), led by scientists from the University of Southampton and Colorado School of Mines.

During the expedition on board the research vessel JOIDES Resolution, the researchers sampled, for the first time, sediments and rocks from the oceanic tectonic plate which feeds the Sumatra subduction zone. A subduction zone is an area where two of the Earth's tectonic plates converge, one sliding beneath the other, generating the largest earthquakes on Earth, many with destructive tsunamis.
Findings of a study on sediment samples recovered from far below the seabed are detailed in a new paper led by Dr Andre Hüpers of the MARUM-Center for Marine Environmental Sciences at the University of Bremen, published in the journal Science.

Expedition co-leader Professor Lisa McNeill, of the University of Southampton, says: "The 2004 Indian Ocean tsunami was triggered by an unusually strong earthquake with an extensive rupture area. We wanted to find out what caused such a large earthquake and tsunami and what this might mean for other regions with similar geological properties."
The scientists concentrated their research on a process of dehydration of sedimentary minerals deep below the ground, which usually occurs within the subduction zone. It is believed this dehydration process, which is influenced by the temperature and composition of the sediments, normally controls the location and extent of slip between the plates, and therefore the severity of an earthquake.

In Sumatra, the team used the latest advances in ocean drilling to extract samples from 1.5 km below the seabed. They then took measurements of sediment composition and chemical, thermal, and physical properties and ran simulations to calculate how the sediments and rock would behave once they had travelled 250 km to the east towards the subduction zone, and been buried significantly deeper, reaching higher temperatures.
The researchers found that the sediments on the ocean floor, eroded from the Himalayan mountain range and Tibetan Plateau and transported thousands of kilometres by rivers on land and in the ocean, are thick enough to reach high temperatures and to drive the dehydration process to completion before the sediments reach the subduction zone. This creates unusually strong material, allowing earthquake slip at the subduction fault surface to shallower depths and over a larger fault area - causing the exceptionally strong earthquake seen in 2004.

Dr Andre Hüpers of the University of Bremen says: "Our findings explain the extent of the large rupture area, which was a feature of the 2004 earthquake, and suggest that other subduction zones with thick and hotter sediment and rocks, could also experience this phenomenon.
"This will be particularly important for subduction zones with limited or no historic subduction earthquakes, where the hazard potential is not well known. Subduction zone earthquakes typically have a return time of a few hundred to a thousand years. Therefore our knowledge of previous earthquakes in some subduction zones can be very limited."

Similar subduction zones exist in the Caribbean (Lesser Antilles), off Iran and Pakistan (Makran), and off western USA and Canada (Cascadia). The team will continue research on the samples and data obtained from the Sumatra drilling expedition over the next few years, including laboratory experiments and further numerical simulations, and they will use their results to assess the potential future hazards both in Sumatra and at these comparable subduction zones.

Story Source:
Materials provided by University of Southampton. Note: Content may be edited for style and length.

Tiny shells indicate big changes to global carbon cycle


Future conditions not only stress marine creatures but also may throw off ocean carbon balance

Date:
May 25, 2017
Source:
University of California - Davis
Summary:
Experiments with tiny, shelled organisms in the ocean suggest big changes to the global carbon cycle are underway, according to a new study.

This is live foraminifera in culture.
Credit: UC Davis Bodega Marine Laboratory
 
Experiments with tiny, shelled organisms in the ocean suggest big changes to the global carbon cycle are underway, according to a study from the University of California, Davis.
For the study, published in the journal Scientific Reports, scientists raised foraminifera -- single-celled organisms about the size of a grain of sand -- at the UC Davis Bodega Marine Laboratory under future, high CO2 conditions.
These tiny organisms, commonly called "forams," are ubiquitous in marine environments and play a key role in food webs and the ocean carbon cycle.

Stressed Under Future Conditions
After exposing them to a range of acidity levels, UC Davis scientists found that under high CO2, or more acidic, conditions, the foraminifera had trouble building their shells and making spines, an important feature of their shells.
They also showed signs of physiological stress, reducing their metabolism and slowing their respiration to undetectable levels.
This is the first study of its kind to show the combined impact of shell building, spine repair, and physiological stress in foraminifera under high CO2 conditions. The study suggests that stressed and impaired foraminifera could indicate a larger scale disruption of carbon cycling in the ocean.

Off Balance
As marine calcifiers, foraminifera use calcium carbonate to build their shells, a process that plays an integral part in balancing the carbon cycle.
Normally, healthy foraminifera calcify their shells and sink to the ocean floor after they die, taking the calcite with them. This moves alkalinity, which helps neutralize acidity, to the seafloor.
When foraminifera calcify less, their ability to neutralize acidity also lessens, making the deep ocean more acidic.
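The chemistry underneath is the standard marine calcification reaction (our addition for context; the study itself concerns the organisms, not this equation):

    \mathrm{Ca^{2+} + 2\,HCO_3^- \;\rightleftharpoons\; CaCO_3 + CO_2 + H_2O}

Read left to right, this is shell building near the surface; read right to left, it is dissolution at depth, which consumes CO2 and releases alkalinity back into the water. Less calcification at the surface therefore means less alkalinity delivered downward, which is the imbalance described above.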
But what happens in the deep ocean doesn't stay in the deep ocean.

Impacts for Thousands of Years
"It's not out-of-sight, out-of-mind," said lead author Catherine Davis, a Ph.D. student at UC Davis during the study and currently a postdoctoral associate at the University of South Carolina. "That acidified water from the deep will rise again. If we do something that acidifies the deep ocean, that affects atmospheric and ocean carbon dioxide concentrations on time scales of thousands of years."
Davis said the geologic record shows that such imbalances have occurred in the world's oceans before, but only during times of major change.
"This points to one of the longer time-scale effects of anthropogenic climate change that we don't understand yet," Davis said.

Upwelling Brings 'Future' to Surface
One way acidified water returns to the surface is through upwelling, when strong winds periodically push nutrient-rich water from the deep ocean up to the surface. Upwelling supports some of the planet's most productive fisheries and ecosystems. But additional anthropogenic, or human-caused, CO2 in the system is expected to impact fisheries and coastal ecosystems.
UC Davis' Bodega Marine Laboratory in Northern California is near one of the world's most intense coastal upwelling areas. At times, it experiences conditions most of the ocean isn't expected to experience for decades or hundreds of years.
"Seasonal upwelling means that we have an opportunity to study organisms in high CO2, acidic waters today -- a window into how the ocean may look more often in the future," said co-author Tessa Hill, an associate professor in earth and planetary sciences at UC Davis. "We might have expected that a species of foraminifera well-adapted to Northern California wouldn't respond negatively to high CO2 conditions, but that expectation was wrong. This study provides insight into how an important marine calcifier may respond to future conditions, and send ripple effects through food webs and carbon cycling."

Story Source:
Materials provided by University of California - Davis. Original written by Kat Kerlin. Note: Content may be edited for style and length.

Scientists borrow from electronics to build circuits in living cells


Date:
May 25, 2017
Source:
University of Washington
Summary:
Synthetic biology researchers have demonstrated a new method for digital information processing in living cells, analogous to the logic gates used in electronic circuits. The circuits are the largest published to date in eukaryotic cells and a key step in harnessing the potential of cells as living computers that can respond to disease, efficiently produce biofuels or develop plant-based chemicals.

This is an artist's impression of connected CRISPR-dCas9 NOR gates.
Credit: University of Washington
 
Living cells must constantly process information to keep track of the changing world around them and arrive at an appropriate response.
Through billions of years of trial and error, evolution has arrived at a mode of information processing at the cellular level. In the microchips that run our computers, information processing capabilities reduce data to unambiguous zeros and ones. In cells, it's not that simple. DNA, proteins, lipids and sugars are arranged in complex and compartmentalized structures.
But scientists -- who want to harness the potential of cells as living computers that can respond to disease, efficiently produce biofuels or develop plant-based chemicals -- don't want to wait for evolution to craft their desired cellular system.

In a new paper published May 25 in Nature Communications, a team of UW synthetic biology researchers has demonstrated a new method for digital information processing in living cells, analogous to the logic gates used in electronic circuits. They built a set of synthetic genes that function in cells like NOR gates, commonly used in electronics, which each take two inputs and pass on a positive signal only if both inputs are negative. NOR gates are functionally complete, meaning one can assemble them in different arrangements to make any kind of information processing circuit.
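The logic is easy to see in software. Here is a minimal sketch (ours, in plain Python, not the authors' genetic constructs) of a NOR gate, and of NOT, OR and AND gates wired entirely from NOR gates, which is what "functionally complete" means in practice:

    def NOR(a, b):
        # A NOR gate outputs high only when both inputs are low.
        return not (a or b)

    # NOT, OR and AND assembled purely from NOR gates.
    def NOT(a):
        return NOR(a, a)

    def OR(a, b):
        return NOT(NOR(a, b))

    def AND(a, b):
        return NOR(NOT(a), NOT(b))

    # Truth-table check.
    for a in (False, True):
        for b in (False, True):
            print(a, b, NOR(a, b), OR(a, b), AND(a, b))

In the cellular version, the inputs and outputs are gene-expression signals rather than voltages, and "wiring" one gate to another means directing one gate's output to act on another gate's input sequence, as described below.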
The UW engineers did all this using DNA instead of silicon and solder, and inside yeast cells instead of at an electronics workbench. The circuits the researchers built are the largest published to date in eukaryotic cells, which, like human cells, contain a nucleus and other structures that enable complex behaviors.
"While implementing simple programs in cells will never rival the speed or accuracy of computation in silicon, genetic programs can interact with the cell's environment directly," said senior author and UW electrical engineering professor Eric Klavins. "For example, reprogrammed cells in a patient could make targeted, therapeutic decisions in the most relevant tissues, obviating the need for complex diagnostics and broad spectrum approaches to treatment."

Each cellular NOR gate consists of a gene with three programmable stretches of DNA -- two to act as inputs, and one to be the output. The authors then took advantage of a relatively new technology known as CRISPR-Cas9 to target those specific DNA sequences inside a cell. The Cas9 protein acts like a molecular gatekeeper in the circuit, sitting on the DNA and determining if a particular gate will be active or not.
If a gate is active, it expresses a signal that directs the Cas9 to deactivate another gate within the circuit. In this way, the researchers can "wire" together the gates to create logical programs in the cell.
What sets the study apart from previous work, researchers said, is the scale and complexity of the circuits successfully assembled -- which included up to seven NOR gates assembled in series or parallel.
At this size, circuits can begin to execute really useful behaviors by taking in information from different environmental sensors and performing calculations to decide on the correct response. Imagined applications include engineered immune cells that can sense and respond to cancer markers or cellular biosensors that can easily diagnose infectious disease in patient tissue.
These large DNA circuits inside cells are a major step toward an ability to program living cells, the researchers said. They provide a framework where logical programs can be easily implemented to control cellular function and state.

Story Source:
Materials provided by University of Washington. Note: Content may be edited for style and length.

Magnetic switch turns strange quantum property on and off


Date:
May 25, 2017
Source:
National Institute of Standards and Technology (NIST)
Summary:
A research team has developed the first switch that turns on and off a quantum behavior called the Berry phase. The discovery promises to provide new insight into the fundamentals of quantum theory and may lead to new quantum electronic devices.

These images show the orbital paths of electrons trapped within a circular region of graphene. In the classical orbit (top image), an electron that travels in a complete circuit has the same physical state as when it started on the path. However, when an applied magnetic field reaches a critical value (bottom image), an electron completing a circuit has a physical state different from its original one. The change is called a Berry phase, and the magnetic field acts as a switch to turn the Berry phase on. The result is that the electron is raised to a higher energy level.
Credit: Christopher Gutiérrez, Daniel Walkup/NIST
 
When a ballerina pirouettes, twirling a full revolution, she looks just as she did when she started. But for electrons and other subatomic particles, which follow the rules of quantum theory, that's not necessarily so. When an electron moves around a closed path, ending up where it began, its physical state may or may not be the same as when it left.
Now, there is a way to control the outcome, thanks to an international research group led by scientists at the National Institute of Standards and Technology (NIST). The team has developed the first switch that turns on and off this mysterious quantum behavior. The discovery promises to provide new insight into the fundamentals of quantum theory and may lead to new quantum electronic devices.
To study this quantum property, NIST physicist and fellow Joseph A. Stroscio and his colleagues studied electrons corralled in special orbits within a nanometer-sized region of graphene -- an ultrastrong, single layer of tightly packed carbon atoms. The corralled electrons orbit the center of the graphene sample just as electrons orbit the center of an atom. The orbiting electrons ordinarily retain the same exact physical properties after traveling a complete circuit in the graphene. But when an applied magnetic field reaches a critical value, it acts as a switch, altering the shape of the orbits and causing the electrons to possess different physical properties after completing a full circuit.

The researchers report their findings in the May 26, 2017, issue of Science.
The newly developed quantum switch relies on a geometric property called the Berry phase, named after English physicist Sir Michael Berry, who developed the theory of this quantum phenomenon in 1983. The Berry phase is associated with the wave function of a particle, which in quantum theory describes a particle's physical state. The wave function -- think of an ocean wave -- has both an amplitude (the height of the wave) and a phase (the location of a peak or trough relative to the start of the wave cycle).
When an electron makes a complete circuit around a closed loop so that it returns to its initial location, the phase of its wave function may shift instead of returning to its original value. This phase shift, the Berry phase, is a kind of memory of a quantum system's travel and does not depend on time, only on the geometry of the system -- the shape of the path. Moreover, the shift has observable consequences in a wide range of quantum systems.
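For readers who want the mathematics, the standard textbook expression (our addition; it does not appear in the NIST release) writes the Berry phase picked up by a state carried around a closed loop C in parameter space as a line integral that depends only on the loop's geometry:

    \gamma(C) = i \oint_C \langle \psi(\mathbf{R}) \mid \nabla_{\mathbf{R}} \psi(\mathbf{R}) \rangle \cdot d\mathbf{R}

Nothing in the integral refers to how fast the loop is traversed, only to its shape, which is why the Berry phase behaves as the geometric "memory" described above.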

Although the Berry phase is a purely quantum phenomenon, it has an analog in non-quantum systems. Consider the motion of a Foucault pendulum, which was used to demonstrate Earth's rotation in the 19th century. The suspended pendulum simply swings back and forth in the same vertical plane, but appears to slowly rotate during each swing -- a kind of phase shift -- due to the rotation of Earth beneath it.
Since the mid-1980s, experiments have shown that several types of quantum systems have a Berry phase associated with them. But until the current study, no one had constructed a switch that could turn the Berry phase on and off at will. The switch developed by the team, controlled by a tiny change in an applied magnetic field, gives electrons a sudden and large increase in energy.
Several members of the current research team -- based at the Massachusetts Institute of Technology and Harvard University -- developed the theory for the Berry phase switch.
To study the Berry phase and create the switch, NIST team member Fereshte Ghahari built a high-quality graphene device to study the energy levels and the Berry phase of electrons corralled within the graphene.
First, the team confined the electrons to occupy certain orbits and energy levels. To keep the electrons penned in, team member Daniel Walkup created a quantum version of an electric fence by using ionized impurities in the insulating layer beneath the graphene. This enabled a scanning tunneling microscope at NIST's nanotechnology user facility, the Center for Nanoscale Science and Technology, to probe the quantum energy levels and Berry phase of the confined electrons.

The team then applied a weak magnetic field directed into the graphene sheet. For electrons moving in the clockwise direction, the magnetic field created tighter, more compact orbits. But for electrons moving in counterclockwise orbits, the magnetic field had the opposite effect, pulling the electrons into wider orbits. At a critical magnetic field strength, the field acted as a Berry phase switch. It twisted the counterclockwise orbits of the electrons, causing the charged particles to execute clockwise pirouettes near the boundary of the electric fence.
Ordinarily, these pirouettes would have little consequence. However, says team member Christopher Gutiérrez, "the electrons in graphene possess a special Berry phase, which switches on when these magnetically induced pirouettes are triggered."
When the Berry phase is switched on, orbiting electrons abruptly jump to a higher energy level. The quantum switch provides a rich scientific tool box that will help scientists exploit ideas for new quantum devices, which have no analog in conventional semiconductor systems, says Stroscio.

Story Source:
Materials provided by National Institute of Standards and Technology (NIST). Note: Content may be edited for style and length.

The big star that couldn't become a supernova


One star's 'massive fail' could help solve a mystery

Date:
May 25, 2017
Source:
Ohio State University
Summary:
For the first time in history, astronomers have been able to watch as a dying star was reborn as a black hole. It went out with a whimper instead of a bang.

In the failed supernova of a red supergiant, the envelope of the star is ejected and expands, producing a cold, red transient source surrounding the newly formed black hole, as illustrated by the expanding shell (left to right). Some residual material may fall onto the black hole, as illustrated by the stream and the disk, potentially powering some optical and infrared emissions years after the collapse.
Credit: NASA, ESA, P. Jeffries (STScI)
 
For the first time in history, astronomers have been able to watch as a dying star was reborn as a black hole.
It went out with a whimper instead of a bang.
The star, which was 25 times as massive as our sun, should have exploded in a very bright supernova. Instead, it fizzled out -- and then left behind a black hole.
"Massive fails" like this one in a nearby galaxy could explain why astronomers rarely see supernovae from the most massive stars, said Christopher Kochanek, professor of astronomy at The Ohio State University and the Ohio Eminent Scholar in Observational Cosmology.
As many as 30 percent of such stars, it seems, may quietly collapse into black holes -- no supernova required.

"The typical view is that a star can form a black hole only after it goes supernova," Kochanek explained. "If a star can fall short of a supernova and still make a black hole, that would help to explain why we don't see supernovae from the most massive stars."
He leads a team of astronomers who have been using the Large Binocular Telescope (LBT) to look for failed supernovae in other galaxies. They published their latest results in the Monthly Notices of the Royal Astronomical Society.
Among the galaxies they've been watching is NGC 6946, a galaxy 22 million light-years away that is nicknamed the "Fireworks Galaxy" because supernovae frequently happen there -- indeed, SN 2017eaw, discovered on May 14th, is shining near maximum brightness now. Starting in 2009, one particular star in the Fireworks Galaxy, named N6946-BH1, began to brighten weakly. By 2015, it appeared to have winked out of existence.

The astronomers aimed the Hubble Space Telescope at the star's location to see if it was still there but merely dimmed. They also used the Spitzer Space Telescope to search for any infrared radiation emanating from the spot. That would have been a sign that the star was still present, but perhaps just hidden behind a dust cloud.
All the tests came up negative. The star was no longer there. By a careful process of elimination, the researchers eventually concluded that the star must have become a black hole. It's too early in the project to know for sure how often stars experience massive fails, but Scott Adams, a former Ohio State student who recently earned his Ph.D. doing this work, was able to make a preliminary estimate.
"N6946-BH1 is the only likely failed supernova that we found in the first seven years of our survey. During this period, six normal supernovae have occurred within the galaxies we've been monitoring, suggesting that 10 to 30 percent of massive stars die as failed supernovae," he said.
"This is just the fraction that would explain the very problem that motivated us to start the survey."
For study co-author Krzysztof Stanek, the really interesting part of the discovery is the implications it holds for the origins of very massive black holes -- the kind that the LIGO experiment detected via gravitational waves. (LIGO is the Laser Interferometer Gravitational-Wave Observatory.)

It doesn't necessarily make sense, said Stanek, professor of astronomy at Ohio State, that a massive star could undergo a supernova -- a process which entails blowing off much of its outer layers -- and still have enough mass left over to form a massive black hole on the scale of those that LIGO detected.
"I suspect it's much easier to make a very massive black hole if there is no supernova," he concluded.

Story Source:
Materials provided by Ohio State University. Original written by Pam Frost Gorder. Note: Content may be edited for style and length.

Nuclear spent fuel fire could force millions of people to relocate


Date:
May 25, 2017
Source:
Princeton University, Woodrow Wilson School of Public and International Affairs
Summary:
The US Nuclear Regulatory Commission relied on faulty analysis to justify its refusal to adopt a critical measure for protecting Americans from nuclear-waste fires at dozens of reactor sites around the country, according to a recent article. Radioactivity from such a fire could force approximately 8 million people to relocate and result in $2 trillion in damages.

This image captures the spread of radioactivity from a hypothetical fire in a high-density spent-fuel pool at the Peach Bottom Nuclear Power Plant in Pennsylvania. Based on the guidance from the US Environmental Protection Agency and the experience from the Chernobyl and Fukushima accidents, populations in the red and orange areas would have to be relocated for many years, and many in the yellow area would relocate voluntarily. In this scenario, which is based on real weather patterns that occurred in July 2015, four major cities would be contaminated (New York City, Philadelphia, Baltimore and Washington, D.C.), resulting in the displacement of millions of people.
Credit: Photo courtesy of Michael Schoeppner, Princeton University, Program on Science and Global Security
The U.S. Nuclear Regulatory Commission (NRC) relied on faulty analysis to justify its refusal to adopt a critical measure for protecting Americans from the occurrence of a catastrophic nuclear-waste fire at any one of dozens of reactor sites around the country, according to an article in the May 26 issue of Science magazine. Fallout from such a fire could be considerably larger than the radioactive emissions from the 2011 Fukushima accident in Japan.
Published by researchers from Princeton University and the Union of Concerned Scientists, the article argues that NRC inaction leaves the public at high risk from fires in spent-nuclear-fuel cooling pools at reactor sites. The pools -- water-filled basins that store and cool used radioactive fuel rods -- are so densely packed with nuclear waste that a fire could release enough radioactive material to contaminate an area twice the size of New Jersey. On average, radioactivity from such an accident could force approximately 8 million people to relocate and result in $2 trillion in damages.
These catastrophic consequences, which could be triggered by a large earthquake or a terrorist attack, could be largely avoided by regulatory measures that the NRC refuses to implement. Using a biased regulatory analysis, the agency excluded the possibility of an act of terrorism as well as the potential for damage from a fire beyond 50 miles of a plant. Failing to account for these and other factors led the NRC to significantly underestimate the destruction such a disaster could cause.

"The NRC has been pressured by the nuclear industry, directly and through Congress, to low-ball the potential consequences of a fire because of concerns that increased costs could result in shutting down more nuclear power plants," said paper co-author Frank von Hippel, a senior research physicist at Princeton's Program on Science and Global Security (SGS), based at the Woodrow Wilson School of Public and International Affairs. "Unfortunately, if there is no public outcry about this dangerous situation, the NRC will continue to bend to the industry's wishes."
Von Hippel's co-authors are Michael Schoeppner, a former postdoctoral researcher at Princeton's SGS, and Edwin Lyman, a senior scientist at the Union of Concerned Scientists.
Spent-fuel pools were brought into the spotlight following the March 2011 nuclear disaster in Fukushima, Japan. A 9.0-magnitude earthquake caused a tsunami that struck the Fukushima Daiichi nuclear power plant, disabling the electrical systems necessary for cooling the reactor cores. This led to core meltdowns at three of the six reactors at the facility, hydrogen explosions, and a release of radioactive material.
"The Fukushima accident could have been a hundred times worse had there been a loss of the water covering the spent fuel in pools associated with each reactor," von Hippel said. "That almost happened at Fukushima in Unit 4."

In the aftermath of the Fukushima disaster, the NRC considered proposals for new safety requirements at U.S. plants. One was a measure prohibiting plant owners from densely packing spent-fuel pools, requiring them to expedite transfer of all spent fuel that has cooled in pools for at least five years to dry storage casks, which are inherently safer. Densely packed pools are highly vulnerable to catching fire and releasing huge amounts of radioactive material into the atmosphere.
The NRC analysis found that a fire in a spent-fuel pool at an average nuclear reactor site would cause $125 billion in damages, while expedited transfer of spent fuel to dry casks could reduce radioactive releases from pool fires by 99 percent. However, the agency decided the possibility of such a fire is so unlikely that it could not justify requiring plant owners to pay the estimated cost of $50 million per pool.
The NRC cost-benefit analysis assumed there would be no consequences from radioactive contamination beyond 50 miles from a fire. It also assumed that all contaminated areas could be effectively cleaned up within a year. Both of these assumptions are inconsistent with experience after the Chernobyl and Fukushima accidents.
In two previous articles, von Hippel and Schoeppner released figures that correct for these and other errors and omissions. They found that millions of residents in surrounding communities would have to relocate for years, resulting in total damages of $2 trillion -- nearly 20 times the NRC's result. Because the nuclear industry is legally liable for only $13.6 billion under the Price-Anderson Act of 1957, U.S. taxpayers would have to cover the remaining costs.
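The arithmetic behind those comparisons can be checked directly from the figures quoted in this article (a sketch; all dollar values as given here):

    # Figures quoted in the article, in dollars.
    nrc_estimate  = 125e9     # NRC estimate for an average pool fire
    revised_total = 2e12      # von Hippel and Schoeppner's corrected total
    liability_cap = 13.6e9    # industry liability under Price-Anderson

    print(revised_total / nrc_estimate)    # 16.0 -- "nearly 20 times"
    print(revised_total - liability_cap)   # ~1.99e12 would fall to taxpayers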

The authors point out that if the NRC does not take action to reduce this danger, Congress has the authority to fix the problem. Moreover, the authors suggest that states that provide subsidies to uneconomical nuclear reactors within their borders could also play a constructive role by making those subsidies available only for plants that agreed to carry out expedited transfer of spent fuel.
"In far too many instances, the NRC has used flawed analysis to justify inaction, leaving millions of Americans at risk of a radiological release that could contaminate their homes and destroy their livelihoods," said Lyman. "It is time for the NRC to employ sound science and common-sense policy judgments in its decision-making process."

Story Source:
Materials provided by Princeton University, Woodrow Wilson School of Public and International Affairs. Original written by B. Rose Kelly. Note: Content may be edited for style and length.

Monday 29 May 2017

New genetic roots for intelligence discovered


Date:
May 23, 2017
Source:
Vrije Universiteit Amsterdam
Summary:
Scientists have made a major advance in understanding the genetic underpinnings of intelligence. Using a large dataset of more than 78,000 individuals with information on DNA genotypes and intelligence scores, the team discovered novel genes and biological routes for intelligence.

Brain (stock image).
Credit: © adimas / Fotolia
 
Intelligence is one of the most investigated traits in humans and higher intelligence is associated with important economic and health-related life outcomes. Despite high heritability estimates of 45% in childhood and 80% in adulthood, only a handful of genes had previously been associated with intelligence and for most of these genes the findings were not reliable. The study, published in the journal Nature Genetics, uncovered 52 genes for intelligence, of which 40 were completely new discoveries. Most of these genes are predominantly expressed in brain tissue.

"These results are very exciting as they provide very robust associations with intelligence. The genes we detect are involved in the regulation of cell development, and are specifically important in synapse formation, axon guidance and neuronal differentiation. These findings for the first time provide clear clues towards the underlying biological mechanisms of intelligence," says Danielle Posthuma, Principal Investigator of the study.
The study also showed that the genetic influences on intelligence are highly correlated with genetic influences on educational attainment, and also, albeit less strongly, with smoking cessation, intracranial volume, head circumference in infancy, autism spectrum disorder and height. Inverse genetic correlations were reported with Alzheimer's disease, depressive symptoms, smoking history, schizophrenia, waist-to-hip ratio, body mass index, and waist circumference.

"These genetic correlations shed light on common biological pathways for intelligence and other traits. Seven genes for intelligence are also associated with schizophrenia; nine genes also with body mass index, and four genes were also associated with obesity. These three traits show a negative correlation with intelligence," says Suzanne Sniekers, first author of the study and postdoc in the lab of Posthuma. "So, a variant of gene with a positive effect on intelligence, has a negative effect on schizophrenia, body mass index or obesity."
Future studies will need to clarify the exact role of these genes in intelligence in order to obtain a more complete picture of how genetic differences lead to differences in intelligence. "The current genetic results explain up to 5% of the total variance in intelligence. Although this is quite a large amount of variance for a trait such as intelligence, there is still a long road to go: given the high heritability of intelligence, many more genetic effects are expected to be important, and these can only be detected in even larger samples," says Danielle Posthuma.

Story Source:
Materials provided by Vrije Universiteit Amsterdam. Note: Content may be edited for style and length.

Two-headed worms 'coded': Bioelectric code behind worm self-healing manipulated


Altering bioelectric networks reveals a 'hidden code' that controls the number of heads produced after injury even in normal-appearing flatworms

Date:
May 23, 2017
Source:
Tufts University
Summary:
Researchers have succeeded in permanently rewriting flatworms' regenerative body shape by resetting their internal bioelectric pattern memory, causing even normal-appearing flatworms to harbor the 'code' to regenerate as two-headed worms.

After being treated with an inhibitor of electrical synapses, about 25 percent of wild-type (WT) flatworms regenerated into double-headed forms (DH), while 72 percent regenerated as seemingly normal one-headed worms (CRPT). But further analysis showed that the normal-appearing flatworms in fact contained a hidden, double-headed pattern memory stored in a bioelectric network, which causes fragments to continue to regenerate at the same 25/72 percent ratio when cut in plain water in subsequent rounds of regeneration.
Credit: Fallon Durant, Allen Discovery Center at Tufts University
 
Researchers have succeeded in permanently rewriting flatworms' regenerative body shape by resetting their internal bioelectric pattern memory, causing even normal-appearing flatworms to harbor the "code" to regenerate as two-headed worms. The findings, published today in Biophysical Journal (Cell Press), suggest an alternative to genomic editing for large-scale regenerative control, according to the authors.

The research, led by scientists at Tufts University's Allen Discovery Center and its Department of Biology, addresses the forces that determine the shape to which an animal's body regenerates when severely damaged, shows that it is possible to permanently alter the target morphology of an animal with a wild-type genomic sequence, and reveals that alternative body patterns can be encoded within animals with normal anatomy and histology. The research also provides clues about why certain individuals have different biological outcomes when exposed to the same treatment as others in their group.
"With this work, we now know that bioelectric properties can permanently override the default body shape called for by a genome, that regenerative target morphology can be edited to diverge from the current anatomy, and that bioelectric networks can be a control point for investigating cryptic, previously-unobservable phenotypes," said the paper's corresponding author, Michael Levin, Ph.D., Vannevar Bush professor of biology and director of the Allen Discovery Center at Tufts and the Tufts Center for Regenerative and Developmental Biology.

The findings are important because advances in regenerative medicine require an understanding of the mechanisms by which some organisms repair damage to their bodies, said Levin. "Bioelectricity has a powerful instructive role as a mediator in the reprogramming of anatomical structure, with many implications for understanding the evolution of form and the path to regenerative therapies," he added.
Researchers worked with planaria (Dugesia japonica) -- flatworms that are known for their regenerative capacity. When cut into pieces, each fragment of flatworm regenerates what it is missing to complete its anatomy. Normally, regeneration produces an exact copy of the original, standard worm.
Building on previous work in which Levin and colleagues demonstrated it was possible to cause flatworms to grow heads and brains of another species of flatworm by altering their bioelectric circuits, the researchers briefly interrupted the flatworms' bioelectric networks. They did so by using octanol (8-OH) to temporarily interrupt gap junctions, which are protein channels -- electrical synapses -- that enable cells to communicate with each other locally and by forming networks across long distances, passing electrical signals back and forth.

Twenty-five percent of the amputated trunk fragments regenerated into two-headed flatworms, while 72 percent regenerated into normal-appearing, one-headed worms; approximately 3 percent of the trunk fragments did not develop properly. At first, the researchers assumed that the single-headed treated flatworms had simply not been affected by the treatment, since some fraction of animals typically remains unaffected when the function of a biological system is altered by an experimental treatment or environmental event. However, when the flatworms with normal body shape were then amputated repeatedly over several months in normal spring water, they produced the same ratio of two-headed worms to one-headed worms.

The flatworms' pattern memory had been altered, although this was not apparent in their intact state and was revealed only upon regeneration. The research showed that the altered target morphology -- the shape to which the worms regenerate upon damage -- was encoded not in their histology, molecular marker expression, or stem cell distribution, but rather in a bioelectric pattern that instructs one of two possible anatomical outcomes after subsequent damage.

"The altered regenerative body plan is stored in the bioelectric networks in the cells of seemingly normal planaria, and the body-wide bioelectric gradients serve as a kind of pattern memory," said Fallon Durant, the paper's first author and a Ph.D. student in the Integrative Graduate Education and Research Traineeship (IGERT) program at the Department of Biology and the Allen Discovery Center at Tufts University. "Bioelectric signals can act as a switch that not only can change body plan anatomy but also undo those changes when reversed."

Story Source:
Materials provided by Tufts University. Note: Content may be edited for style and length.

New species of bus-sized fossil marine reptile unearthed in Russia


Date:
May 25, 2017
Source:
University of Liege
Summary:
A new species of fossil pliosaur (a large predatory marine reptile from the 'Age of Dinosaurs') has been found in Russia, and it profoundly changes how we understand the evolution of the group, says an international team of scientists.

This is an artistic reconstruction of Luskhan itilensis.
Credit: Copyright Andrey Atuchin, 2017
 
A new species of fossil pliosaur (a large predatory marine reptile from the 'Age of Dinosaurs') has been found in Russia, and it profoundly changes how we understand the evolution of the group, says an international team of scientists.
Spanning more than 135 Ma during the 'Age of Dinosaurs', plesiosaur marine reptiles represent one of the longest-lived radiations of aquatic tetrapods and certainly the most diverse one. Plesiosaurs possess an unusual body shape not seen in other marine vertebrates, with four large flippers, a stiff trunk, and a highly varying neck length. Pliosaurs are a special kind of plesiosaur, characterized by a large, 2-meter-long skull, enormous teeth and extremely powerful jaws, making them the top predators of the oceans during the 'Age of Dinosaurs'.

In a new study published today in the journal Current Biology, the team reports a new, exceptionally well-preserved and highly unusual pliosaur from the Cretaceous of Russia (about 130 million years ago). It was found in autumn 2002 on the right bank of the Volga River, close to the city of Ulyanovsk, by Gleb N. Uspensky (Ulyanovsk State University), one of the co-authors of the paper. The skull of the new species, dubbed "Luskhan itilensis," meaning the Master Spirit from the Volga River, is 1.5m in length, indicating a large animal. But its rostrum is extremely slender, resembling that of fish-eating aquatic animals such as gharials or some species of river dolphins. "This is the most striking feature, as it suggests that pliosaurs colonized a much wider range of ecological niches than previously assumed," said Valentin Fischer, lecturer at the Université de Liège (Belgium) and lead author of the study.

By analysing two new and comprehensive datasets that describe the anatomy and ecomorphology of plesiosaurs with cutting-edge techniques, the team revealed that several evolutionary convergences (a biological phenomenon where distantly related species evolve to resemble one another because they occupy similar roles, for example similar feeding strategies and prey types, in an ecosystem) took place during the evolution of plesiosaurs, notably after an important extinction event at the end of the Jurassic (145 million years ago). The new findings also have ramifications for the final extinction of pliosaurs, which took place several tens of millions of years before that of all dinosaurs (except some bird lineages). Indeed, the new results suggest that pliosaurs were able to bounce back after the latest Jurassic extinction, but then faced another extinction that would -- this time -- wipe them from the depths of the ancient oceans, forever.

Story Source:
Materials provided by University of Liege. Note: Content may be edited for style and length.

Juno mission to Jupiter delivers first science results


King of the planets even more exotic than expected

Date:
May 25, 2017
Source:
Southwest Research Institute
Summary:
NASA's Juno mission is rewriting what scientists thought they knew about Jupiter specifically, and gas giants in general, according to a pair of Science papers released today. The Juno spacecraft has been in orbit around Jupiter since July 2016, passing within 3,000 miles of the equatorial cloudtops.

JunoCam image of Jupiter. The SwRI-led Juno mission discovered that Jupiter's signature bands disappear near its poles. This JunoCam image, processed by citizen scientist Bruce Lemons, shows a chaotic scene of swirling storms up to the size of Mars against a bluish backdrop.
Credit: Image Courtesy of NASA/Bruce Lemons
 
NASA's Juno mission, led by Southwest Research Institute's Dr. Scott Bolton, is rewriting what scientists thought they knew about Jupiter specifically, and gas giants in general, according to a pair of Science papers released today. The Juno spacecraft has been in orbit around Jupiter since July 2016, passing within 3,000 miles of the equatorial cloudtops.
"What we've learned so far is earth-shattering. Or should I say, Jupiter-shattering," said Bolton, Juno's principal investigator. "Discoveries about its core, composition, magnetosphere, and poles are as stunning as the photographs the mission is generating."

The solar-powered spacecraft's eight scientific instruments are designed to study Jupiter's interior structure, atmosphere, and magnetosphere. Two instruments developed and led by SwRI are working in concert to study Jupiter's auroras, the greatest light show in the solar system. The Jovian Auroral Distributions Experiment (JADE) is a set of sensors detecting the electrons and ions associated with Jupiter's auroras. The Ultraviolet Imaging Spectrograph (UVS) examines the auroras in UV light to study Jupiter's upper atmosphere and the particles that collide with it. Scientists expected to find similarities to Earth's auroras, but Jovian auroral processes are proving puzzling.

"Although many of the observations have terrestrial analogs, it appears that different processes are at work creating the auroras," said SwRI's Dr. Phil Valek, JADE instrument lead. "With JADE we've observed plasmas upwelling from the upper atmosphere to help populate Jupiter's magnetosphere. However, the energetic particles associated with Jovian auroras are very different from those that power the most intense auroral emissions at Earth."
Also surprising, Jupiter's signature bands disappear near its poles. JunoCam images show a chaotic scene of swirling storms up to the size of Mars towering above a bluish backdrop. Since the first observations of these belts and zones many decades ago, scientists have wondered how far beneath the gas giant's swirling façade these features persist. Juno's microwave sounding instrument reveals that these weather phenomena extend deep below the cloudtops, to pressures of 100 bars, 100 times Earth's air pressure at sea level.
"However, there's a north-south asymmetry. The depths of the bands are distributed unequally," Bolton said. "We've observed a narrow ammonia-rich plume at the equator. It resembles a deeper, wider version of the air currents that rise from Earth's equator and generate the trade winds."

Juno is mapping Jupiter's gravitational and magnetic fields to better understand the planet's interior structure and measure the mass of the core. Scientists think a dynamo -- a rotating, convecting, electrically conducting fluid in a planet's outer core -- is the mechanism for generating the planetary magnetic fields.
"Juno's gravity field measurements differ significantly from what we expected, which has implications for the distribution of heavy elements in the interior, including the existence and mass of Jupiter's core," Bolton said. The magnitude of the observed magnetic field was 7.766 Gauss, significantly stronger than expected. But the real surprise was the dramatic spatial variation in the field, which was significantly higher than expected in some locations, and markedly lower in others. "We characterized the field to estimate the depth of the dynamo region, suggesting that it may occur in a molecular hydrogen layer above the pressure-induced transition to the metallic state."

These preliminary science results were published in two papers in a special edition of Science. Bolton is lead author of "Jupiter's interior and deep atmosphere: The initial pole-to-pole passes with the Juno spacecraft." SwRI's Dr. Frederic Allegrini, Dr. Randy Gladstone, and Valek are co-authors of "Jupiter's magnetosphere and aurorae observed by the Juno spacecraft during its first polar orbits"; lead author is Dr. John Connerney of the Space Research Corporation.
Juno is the second mission developed under NASA's New Frontiers Program. The first was the SwRI-led New Horizons mission, which provided a historic first look at the Pluto system in July 2015 and is now on its way to a new target in the Kuiper Belt. NASA's Jet Propulsion Laboratory in Pasadena, Calif., manages the Juno mission for the principal investigator, SwRI's Bolton. Lockheed Martin of Denver built the spacecraft. The Italian Space Agency contributed an infrared spectrometer instrument and a portion of the radio science experiment.

Story Source:
Materials provided by Southwest Research Institute. Note: Content may be edited for style and length.

Mind-controlled device helps stroke patients retrain brains to move paralyzed hands


Device reads brain signals, converts them into motion

Date:
May 26, 2017
Source:
Washington University School of Medicine
Summary:
Stroke patients who learned to use their minds to open and close a plastic brace fitted over their paralyzed hands gained some ability to control their own hands when they were not wearing the brace, according to a new study. The participants, all of whom had moderate to severe paralysis, showed significant improvement in grasping objects.
Share:
FULL STORY

Medical resident Jarod Roland, MD, tries out a device that detects electrical activity in his brain and causes his hand to open and close in response to brain signals. A new study shows that this device can help chronic stroke patients recover some control over their paralyzed limbs.
Credit: Leuthardt lab
 
Stroke patients who learned to use their minds to open and close a device fitted over their paralyzed hands gained some control over their hands, according to a new study from Washington University School of Medicine in St. Louis.
By mentally controlling the device with the help of a brain-computer interface, participants trained the uninjured parts of their brains to take over functions previously performed by injured areas of the brain, the researchers said.
"We have shown that a brain-computer interface using the uninjured hemisphere can achieve meaningful recovery in chronic stroke patients," said Eric Leuthardt, MD, a professor of neurosurgery, of neuroscience, of biomedical engineering, and of mechanical engineering & applied science, and the study's co-senior author.

The study is published May 26 in the journal Stroke.
Stroke is the leading cause of acquired disability among adults. About 700,000 people in the United States experience a stroke every year, and 7 million are living with the aftermath.
In the first weeks after a stroke, people rapidly recover some abilities, but their progress typically plateaus after about three months.
"We chose to evaluate the device in patients who had their first stroke six months or more in the past because not a lot of gains are happening by that point," said co-senior author Thy Huskey, MD, an associate professor of neurology at the School of Medicine and program director of the Stroke Rehabilitation Center of Excellence at The Rehabilitation Institute of St. Louis. "Some lose motivation. But we need to continue working on finding technology to help this neglected patient population."
David Bundy, PhD, the study's first author and a former graduate student in Leuthardt's lab, worked to take advantage of a quirk in how the brain controls movement of the limbs. In general, areas of the brain that control movement are on the opposite side of the body from the limbs they control. But about a decade ago, Leuthardt and Bundy, who is now a postdoctoral researcher at University of Kansas Medical Center, discovered that a small area of the brain played a role in planning movement on the same side of the body.
To move the left hand, they realized, specific electrical signals indicating movement planning first appear in a motor area on the left side of the brain. Within milliseconds, the right-sided motor areas become active, and the movement intention is translated into actual contraction of muscles in the hand.

A person whose left hand and arm are paralyzed has sustained damage to the motor areas on the right side of the brain. But the left side of the person's brain is frequently intact, meaning many stroke patients can still generate the electrical signal that indicates an intention to move. The signal, however, goes nowhere since the area that executes the movement plan is out of commission.
"The idea is that if you can couple those motor signals that are associated with moving the same-sided limb with the actual movements of the hand, new connections will be made in your brain that allow the uninjured areas of your brain to take over control of the paralyzed hand," Leuthardt said.
That's where the Ipsihand, a device developed by Washington University scientists, comes in. The Ipsihand comprises a cap that contains electrodes to detect electrical signals in the brain, a computer that amplifies the signals, and a movable brace that fits over the paralyzed hand. The device detects the wearer's intention to open or close the paralyzed hand, and moves the hand in a pincer-like grip, with the second and third fingers bending to meet the thumb.
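The study does not publish the device's software, but the control loop it describes -- detect a movement-related brain signal, decide whether the wearer intends to move, drive the brace -- can be sketched in a few lines. The following Python sketch is purely illustrative: the EEG source, actuator calls, channel choice, frequency band and threshold are all assumptions, not the Ipsihand's actual design.

    import numpy as np

    SAMPLE_RATE_HZ = 250      # assumed EEG sampling rate
    WINDOW_S = 1.0            # length of each analysis window
    MU_BAND = (8.0, 13.0)     # sensorimotor "mu" rhythm, a common BCI feature

    def mu_band_power(window):
        """Average power in the mu band for one EEG channel, via FFT."""
        freqs = np.fft.rfftfreq(window.size, d=1.0 / SAMPLE_RATE_HZ)
        power = np.abs(np.fft.rfft(window)) ** 2
        mask = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
        return power[mask].mean()

    def control_loop(eeg_source, brace, threshold):
        """Mu-band power drops when movement is intended, so low power is
        read as 'open the hand'. A real system would calibrate the
        threshold per user and smooth decisions over several windows."""
        n_samples = int(SAMPLE_RATE_HZ * WINDOW_S)
        while True:
            window = eeg_source.read(n_samples)   # hypothetical EEG API
            if mu_band_power(window) < threshold:
                brace.open_hand()                 # hypothetical actuator API
            else:
                brace.close_hand()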

"Of course, there's a lot more to using your arms and hands than this, but being able to grasp and use your opposable thumb is very valuable," Huskey said. "Just because your arm isn't moving exactly as it was before, it's not worthless. We can still interact with the world with the weakened arm."
Leuthardt played a key role in elucidating the basic science, and he worked with Daniel Moran, PhD, a professor of biomedical engineering at Washington University School of Engineering & Applied Science, to develop the technology behind the Ipsihand. He and Moran co-founded the company Neurolutions Inc. to continue developing the Ipsihand, and Leuthardt serves on the company's board of directors. Neurolutions funded this study.

To test the Ipsihand, Huskey recruited moderately to severely impaired stroke patients and trained them to use the device at home. The participants were encouraged to use the device at least five days a week, for 10 minutes to two hours a day. Thirteen patients began therapy, but three dropped out due to unrelated health issues, poor fit of the device or inability to comply with the time commitment. Ten patients completed the study.
Participants underwent a standard motor skills evaluation at the start of the study and every two weeks throughout. The test measured their ability to grasp, grip and pinch with their hands, and to make large motions with their arms. Among other things, participants were asked to pick up a block and place it atop a tower, fit a tube around a smaller tube, and move their hands to their mouths. Higher scores indicated better function.

After 12 weeks of using the device, the patients' scores increased an average of 6.2 points on a 57-point scale.
"An increase of six points represents a meaningful improvement in quality of life," Leuthardt said. "For some people, this represents the difference between being unable to put on their pants by themselves and being able to do so."
Each participant also rated his or her ability to use the affected arm and his or her satisfaction with the skills. Self-reported abilities and satisfaction significantly improved over the course of the study.
How much each patient improved varied, and the degree of improvement did not correlate with time spent using the device. Rather, it correlated with how well the device read brain signals and converted them into hand movements.
"As the technology to pick up brain signals gets better, I'm sure the device will be even more effective at helping stroke patients recover some function," Huskey said.

Story Source:
Materials provided by Washington University School of Medicine. Original written by Tamara Bhandari. Note: Content may be edited for style and length.

Wednesday 24 May 2017

Himalayan powerhouses: How Sherpas have evolved superhuman energy efficiency


Date:
May 23, 2017
Source:
University of Cambridge
Summary:
Sherpas have evolved to become superhuman mountain climbers, extremely efficient at producing the energy to power their bodies even when oxygen is scarce, suggests new research published today in the Proceedings of the National Academy of Sciences (PNAS).
Share:
FULL STORY

Island Peak (Imja Tse) climbing, Everest region, Nepal.
Credit: © ykumsri / Fotolia
 
Sherpas have evolved to become superhuman mountain climbers, extremely efficient at producing the energy to power their bodies even when oxygen is scarce, suggests new research published today in the Proceedings of the National Academy of Sciences (PNAS).
The findings could help scientists develop new ways of treating hypoxia -- lack of oxygen -- in patients. A significant proportion of patients in intensive care units (ICUs) experience potentially life-threatening hypoxia, a complication associated with conditions from haemorrhage to sepsis.
When oxygen is scarce, the body is forced to work harder to ensure that the brain and muscles receive enough of this essential nutrient. One of the most commonly observed ways the body has of compensating for a lack of oxygen is to produce more red blood cells, which are responsible for carrying oxygen around the body to our organs. This makes the blood thicker, however, so it flows more slowly and is more likely to clog up blood vessels.

Mountain climbers are often exposed to low levels of oxygen, particularly at high altitudes. This is why they often have to take time during long ascents to acclimatise to their surroundings, giving the body enough time to adapt itself and prevent altitude sickness. In addition, they may take oxygen supplies to supplement the thin air.
Scientists have known for some time that people have different responses to high altitudes. While most climbers require additional oxygen to scale Mount Everest, whose peak is 8,848m above sea level, a handful of climbers have managed to do so without. Most notably, Sherpas, an ethnic group from the mountain regions of Nepal, are able to live at high altitude with no apparent consequences to their health -- as a result, many act as guides to support expeditions in the Himalayas, and two Sherpas are known to have reached the summit of Everest an incredible 21 times.

Previous studies have suggested differences between Sherpas and people living in non-high altitude areas, known collectively as 'lowlanders', including fewer red blood cells in Sherpas at altitude, but higher levels of nitric oxide, a chemical that opens up blood vessels and keeps blood flowing.
Evidence suggests that the first humans were present on the Tibetan Plateau around 30,000 years ago, with the first permanent settlers appearing between 6,000 and 9,000 years ago. This raises the possibility that they have evolved to adapt to the extreme environment. This is supported by recent DNA studies, which have found clear genetic differences between Sherpa and Tibetan populations on the one hand and lowlanders on the other. Some of these differences were in their mitochondrial DNA -- the genetic code that programmes mitochondria, the body's 'batteries' that generate our energy.
To understand the biological differences between the Sherpas and lowlanders, a team of researchers led by scientists at the University of Cambridge followed two groups as they made a gradual ascent up to Everest Base Camp at an elevation of 5,300m.

The study was part of Xtreme Everest, a project that aims to improve outcomes for people who become critically ill by understanding how our bodies respond to the extreme altitude on the world's highest mountain. This year marks 10 years since the group's first expedition to Everest.
The lowlanders group comprised 10 investigators selected to operate the Everest Base Camp laboratory, where the mitochondrial studies were carried out by James Horscroft and Aleks Kotwica, two PhD students at the University of Cambridge. They took samples, including blood and muscle biopsies, in London to give a baseline measurement, then again when they first arrived at Base Camp and a third time after two months at Base Camp. These samples were compared with those taken from 15 Sherpas, all of whom were living in relatively low-lying areas, rather than being the 'elite' high altitude climbers. The Sherpas' baseline measurements were taken at Kathmandu, Nepal.
The researchers found that even at baseline, the Sherpas' mitochondria were more efficient at using oxygen to produce ATP, the energy that powers our bodies.

As predicted from the genetic differences, they also found lower levels of fat oxidation in the Sherpas. Muscles have two ways to get energy -- from sugars, such as glucose, or from burning fat (fat oxidation). The majority of the time we get our energy from the latter source; however, fat yields less ATP for each unit of oxygen consumed, so at times of physical stress, such as when exercising, we take our energy from sugars. The low levels of fat oxidation again suggest that the Sherpas are more efficient at generating energy.
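To see why fat oxidation is the less oxygen-efficient route, compare approximate textbook yields (exact ATP counts are debated, and these are standard biochemistry figures rather than measurements from this study):

    \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \to 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}:
        \quad \frac{\approx 30\ \mathrm{ATP}}{12\ \mathrm{O\ atoms}} \approx 2.5\ \mathrm{ATP/O}

    \mathrm{C_{16}H_{32}O_2} + 23\,\mathrm{O_2} \to 16\,\mathrm{CO_2} + 16\,\mathrm{H_2O}:
        \quad \frac{\approx 106\ \mathrm{ATP}}{46\ \mathrm{O\ atoms}} \approx 2.3\ \mathrm{ATP/O}

Every unit of oxygen spent burning fat thus buys roughly 8 per cent less ATP than one spent on sugar -- a margin that matters when oxygen itself is the scarce resource.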
The measurements taken at altitude rarely changed from the baseline measurement in the Sherpas, suggesting that they were born with such differences. However, for lowlanders, measurements tended to change after time spent at altitude, suggesting that their bodies were acclimatising and beginning to mimic the Sherpas'.
One of the key differences, however, was in phosphocreatine levels. Phosphocreatine is an energy reserve that buffers muscle contraction by rapidly regenerating ATP when supplies run low. In lowlanders, phosphocreatine levels crash after two months at high altitude, whereas in Sherpas levels actually increase.
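The buffering works through the standard creatine kinase reaction, which runs forward when ATP is consumed faster than the mitochondria can replace it:

    \mathrm{PCr} + \mathrm{ADP} + \mathrm{H^+}
        \overset{\text{creatine kinase}}{\rightleftharpoons}
        \mathrm{Cr} + \mathrm{ATP}

A shrinking phosphocreatine pool is therefore a sign that demand is outrunning supply; the Sherpas' rising levels point the other way.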
In addition, the team found that while levels of free radicals increase rapidly at high altitude, at least initially, levels in Sherpas are very low. Free radicals are molecules created by a lack of oxygen that can be potentially damaging to cells and tissue.
"Sherpas have spent thousands of years living at high altitudes, so it should be unsurprising that they have adapted to become more efficient at using oxygen and generating energy," says Dr Andrew Murray from the University of Cambridge, the study's senior author. "When those of us from lower-lying countries spend time at high altitude, our bodies adapt to some extent to become more 'Sherpa-like', but we are no match for their efficiency."
The team say the findings could provide valuable insights into why some people suffering from hypoxia fare much worse in emergency situations than others.
"Although lack of oxygen might be viewed as an occupational hazard for mountain climbers, for people in intensive care units it can be life threatening," explains Professor Mike Grocott, Chair of Xtreme Everest from the University of Southampton. "One in five people admitted to intensive care in the UK each year die and even those that survive might never regain their previous quality of life.
"By understanding how Sherpas are able to survive with low levels of oxygen, we can get clues to help us identify those at greatest risk in ICUs and inform the development of better treatments to help in their recovery."

The 10th anniversary of the original Caudwell Xtreme Everest expedition will be marked this month by a conference at the Royal Society of Medicine, and an event open to the public on the evening of 23rd May at the Royal Geographical Society entitled A Celebration of Six Decades of Medicine on Everest.
The research was part-funded by the British Heart Foundation.

Story Source:
Materials provided by University of Cambridge. The original story is licensed under a Creative Commons License. Note: Content may be edited for style and length.

Computer code that Volkswagen used to cheat emissions tests uncovered

International team of researchers uncovered the system inside cars' onboard computers

Date:
May 23, 2017
Source:
University of California - San Diego
Summary:
An international team of researchers has uncovered the mechanism that allowed Volkswagen to circumvent US and European emission tests over at least six years before the Environmental Protection Agency put the company on notice in 2015 for violating the Clean Air Act. During a year-long investigation, researchers found code that allowed a car's onboard computer to determine that the vehicle was undergoing an emissions test.
Share:
FULL STORY

Diagnostic sensor applied to the exhaust of a car.
Credit: © uleiber / Fotolia
 
An international team of researchers has uncovered the mechanism that allowed Volkswagen to circumvent U.S. and European emission tests over at least six years before the Environmental Protection Agency put the company on notice in 2015 for violating the Clean Air Act. During a year-long investigation, researchers found code that allowed a car's onboard computer to determine that the vehicle was undergoing an emissions test. The computer then activated the car's emission-curbing systems, reducing the amount of pollutants emitted. Once the computer determined that the test was over, these systems were deactivated.
When the emissions curbing system wasn't running, cars emitted up to 40 times the amount of nitrogen oxides allowed under EPA regulations.

The team, led by Kirill Levchenko, a computer scientist at the University of California San Diego, will present their findings at the 38th IEEE Symposium on Security and Privacy in the San Francisco Bay Area, May 22 to 24, 2017.
"We were able to find the smoking gun," Levchenko said. "We found the system and how it was used."
Computer scientists obtained copies of the code running on Volkswagen onboard computers from the company's own maintenance website and from forums run by car enthusiasts. The code was running on a wide range of models, including the Jetta, Golf and Passat, as well as Audi's A and Q series.
"We found evidence of the fraud right there in public view," Levchenko said.
During emissions standards tests, cars are placed on a chassis equipped with a dynamometer, which measures the power output of the engine. The vehicle follows a precisely defined speed profile that tries to mimic real driving on an urban route with frequent stops. The conditions of the test are both standardized and public, which makes it possible for manufacturers to intentionally alter the behavior of their vehicles during the test cycle. The code found in Volkswagen vehicles checks for a number of conditions associated with a driving test, such as distance, speed and even the position of the steering wheel. When those conditions are met, the code directs the onboard computer to activate the emissions-curbing mechanisms.
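The paper describes this check as a comparison of the vehicle's recent driving history against stored test-cycle profiles. A heavily simplified Python sketch of that logic might look like the following; every name and number here is an invented stand-in, since the real defeat device was C code inside the Bosch engine control unit:

    # Illustrative sketch only -- profile values and field names are invented.
    TEST_PROFILES = [
        # each profile: (elapsed_seconds, min_km, max_km) checkpoints
        # approximating a standardized cycle's distance-over-time curve
        [(300, 1.8, 2.4), (600, 3.9, 4.7), (1180, 8.0, 9.2)],
        # ... the real code held as many as 10 such profiles
    ]

    def matches_profile(profile, history):
        """history maps elapsed seconds -> cumulative distance in km,
        sampled at the profile's checkpoint times."""
        return all(lo <= history.get(t, -1.0) <= hi for t, lo, hi in profile)

    def select_calibration(history, steering_angle_deg):
        # On a dynamometer the wheels spin but nobody steers, and the
        # distance-vs-time trace follows the published cycle.
        on_test = abs(steering_angle_deg) < 1.0 and any(
            matches_profile(p, history) for p in TEST_PROFILES)
        return "curb_emissions" if on_test else "normal_operation"

With these invented checkpoints, select_calibration({300: 2.1, 600: 4.2, 1180: 8.6}, 0.0) returns "curb_emissions", while any real steering input or an off-script distance trace falls back to "normal_operation" -- the dirtier calibration.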

A year-long investigation
It all started when computer scientists at Ruhr University, working with independent researcher Felix Domke, teamed up with Levchenko and the research group of computer science professor Stefan Savage at the Jacobs School of Engineering at UC San Diego.
Savage, Levchenko and their team have extensive experience analyzing embedded systems, such as cars' onboard computers, known as Engine Control Units, for vulnerabilities. The team examined 900 versions of the code and found that 400 of those included information to circumvent emissions tests.
A specific piece of code was labeled as the "acoustic condition" -- ostensibly, a way to control the sound the engine makes. But in reality, the label became a euphemism for conditions occurring during an emissions test. The code allowed for as many as 10 different profiles for potential tests. When the computer determined the car was undergoing a test, it activated emissions-curbing systems, which reduced the amount of nitrogen oxide emitted.
"The Volkswagen defeat device is arguably the most complex in automotive history," Levchenko said.
Researchers found a less sophisticated circumventing ploy for the Fiat 500X. That car's onboard computer simply allows its emissions-curbing system to run for the first 26 minutes and 40 seconds after the engine starts -- roughly the duration of many emissions tests.
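The Fiat check needs no profiles at all: 26 minutes and 40 seconds is exactly 1,600 seconds, so, per the paper's description, a single timer comparison suffices. A sketch, again with invented names:

    def fiat_style_check(seconds_since_engine_start):
        # 26 min 40 s = 1,600 s: curb emissions for roughly the duration
        # of a typical test, then revert to normal operation.
        if seconds_since_engine_start <= 1600:
            return "curb_emissions"
        return "normal_operation"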
Researchers note that for both Volkswagen and Fiat, the vehicles' Engine Control Unit is manufactured by automotive component giant Robert Bosch. Car manufacturers then enable the code by entering specific parameters.

Diesel engines pose special challenges for automobile manufacturers because their combustion process produces more particulates and nitrogen oxides than gasoline engines. To curb emissions from these engines, the vehicle's onboard computer must sometimes sacrifice performance or efficiency for compliance.
The study draws attention to the regulatory challenges of verifying software-controlled systems that may try to hide their behavior and calls for a new breed of techniques that work in an adversarial setting.
"Dynamometer testing is just not enough anymore," Levchenko said.
The article is entitled "How They Did It: An Analysis of Emission Defeat Devices in Modern Automobiles."
The authors are Guo Li, Kirill Levchenko and Stefan Savage of UC San Diego; Moritz Contag, Andre Pawlowski and Thorsten Holz of Ruhr University; and independent researcher Felix Domke.
This work was supported by the European Research Council and by the U.S. National Science Foundation (NSF).

Story Source:
Materials provided by University of California - San Diego. Original written by Ioana Patringenaru. Note: Content may be edited for style and length.