Jan 20, 2018

Cystic fibrosis bacterial burden begins during first years of life

This is a microscopic image of Staphylococcus aureus, a problematic pathogen that infects the airways of children with cystic fibrosis.
Cystic fibrosis (CF) shortens life by making the lungs prone to repeated bacterial infections and associated inflammation. UNC School of Medicine researchers have now shown for the first time that the lungs' bacterial population changes in the first few years of life as respiratory infections and inflammation set in.

The study, published in PLoS Pathogens, offers a way to predict the onset of lung disease in children with CF and suggests a larger role for preventive therapies, such as hypertonic saline.

"Lung symptoms in kids with CF are likely due to an increased burden of bacteria," said study senior author Matthew Wolfgang, PhD, associate professor of microbiology and immunology. "This implies there's an opportunity for early intervention that could dramatically increase the quality of life for these kids."

CF affects about 70,000 people globally, and is most common in children of Northern European ethnicity -- about one of every 2,500 births. The disease is caused by a dysfunctional version of the CFTR gene that encodes the CFTR protein. In the absence of this protein, mucus becomes dehydrated and thick -- a sanctuary for bacteria -- leading to repeated infections, inflammation, and eventually structural damage to lungs and upper airway tissues. The life expectancy of CF patients is about 40 years.

Most CF studies have been done in adults and older children, and thus relatively little has been known about how and when inflammation, bacterial infections, and lung damage begin. To shed more light on that question, Wolfgang and colleagues analyzed bacterial DNA in samples of lung-lining fluid gathered from young children as part of an ongoing Australian project called AREST CF.

"It's challenging and rare to get access to such samples," said Wolfgang, member of the UNC Marsico Lung Institute. "Here in the United States, we don't perform bronchoscopies on children diagnosed with CF if they don't yet have clinical symptoms."

The UNC scientists found that in most of the samples from CF children who were less than a year old, there was little or no sign of bacteria. "If there was no significant evidence of bacteria, there was also no sign of inflammation, and the child generally appeared healthy," Wolfgang said.

In the children between ages one and two, the pattern was different: many samples contained a significant amount of bacterial DNA -- from the same bacterial species that normally populate the mouth and throat. These bacteria are not typically regarded as lung pathogens.

"We can't go so far as to say that these kids have active infections, but clearly there's a significant increase in the bacterial burden in their lungs, and we know these bacteria provoke inflammation," Wolfgang said.

In children ages three to five, the samples contained increasing evidence of more worrisome bacteria, particularly Pseudomonas aeruginosa, Staphylococcus aureus, and Haemophilus influenzae, which are commonly found in older CF patients with more severe lung disease. As the bacterial burden worsened, molecular signs of inflammation increased, and X-ray studies of the children's lungs revealed mounting signs of structural lung disease.

"This tells us lung bacterial infections start much earlier than we had expected in children with CF, and these infections are likely the earliest drivers of structural lung disease," Wolfgang said.

Many of the bacterial species in the young children with CF, he noted, were "anaerobic" microbes that thrive in conditions of very low oxygen. This finding suggests that the dehydrated, thickened CF lung mucus creates pockets of low oxygen in lung tissues.

"Therapies aimed at breaking up mucus very early in life might be very beneficial to these kids," Wolfgang said. "These therapies could postpone the increase in bacterial burden, including the shift towards the more pathogenic species."

Doctors already give preventive antibiotics to young children in Australia, Germany, and the UK. However, Wolfgang noted that the children in the AREST CF study, who were treated with antibiotics until the age of two, still showed a clear progression of bacterial burden and inflammation. "It could be that other therapeutic strategies, such as thinning mucus, may be more successful," he said.

Wolfgang and colleagues at the UNC Marsico Lung Institute now hope to do a similar, long-term study analyzing the lung bacteria of individual children and the changes in these "bacteriomes" over several years. The researchers want to evaluate the effectiveness of an early mucus-thinning intervention, for example hypertonic saline -- salt water delivered via inhaler -- which is already used to hydrate mucus in older CF patients.

The co-first authors of the study were Marianne S. Muhlebach, MD, professor of pediatrics at the UNC School of Medicine, and Bryan T. Zorn, a UNC research specialist.

Read more at Science Daily

Flu may be spread just by breathing

A study participant sits in the Gesundheit II machine, which is used to capture and analyze influenza virus in exhaled breath at the University of Maryland School of Public Health.
It is easier to spread the influenza virus (flu) than previously thought, according to a new University of Maryland-led study released today. People commonly believe that they can catch the flu by exposure to droplets from an infected person's coughs or sneezes or by touching contaminated surfaces. But, new information about flu transmission reveals that we may pass the flu to others just by breathing.

The study, "Infectious virus in exhaled breath of symptomatic seasonal influenza cases from a college community," published in the Proceedings of the National Academy of Sciences, provides new evidence for the potential importance of airborne transmission, given the large quantities of infectious virus the researchers found in the exhaled breath of people suffering from flu.

"We found that flu cases contaminated the air around them with infectious virus just by breathing, without coughing or sneezing," explained Dr. Milton, M.D., MPH, professor of environmental health in the University of Maryland School of Public Health and lead researcher of this study. "People with flu generate infectious aerosols (tiny droplets that stay suspended in the air for a long time) even when they are not coughing, and especially during the first days of illness. So when someone is coming down with influenza, they should go home and not remain in the workplace and infect others."

Researchers from the University of Maryland, San Jose State University, Missouri Western State University and University of California, Berkeley contributed to this study funded by the Centers for Disease Control and Prevention (CDC) and the National Institutes of Health.

Dr. Milton and his research team captured and characterized influenza virus in exhaled breath from 142 people with confirmed influenza during natural breathing, prompted speech, spontaneous coughing, and sneezing, and assessed the infectivity of naturally occurring influenza aerosols. The participants provided 218 nasopharyngeal swabs and 218 30-minute samples of exhaled breath, spontaneous coughing, and sneezing on the first, second, and third days after the onset of symptoms.

The analysis of the infectious virus recovered from these samples showed that a significant number of flu patients routinely shed infectious virus, not merely detectable RNA, into aerosol particles small enough to present a risk for airborne transmission.

Surprisingly, 11 (48%) of the 23 fine aerosol samples acquired in the absence of coughing had detectable viral RNA and 8 of these 11 contained infectious virus, suggesting that coughing was not necessary for infectious aerosol generation in the fine aerosol droplets. In addition, the few sneezes observed were not associated with greater viral RNA copy numbers in either coarse or fine aerosols, suggesting that sneezing does not make an important contribution to influenza virus shedding in aerosols.
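
The reported percentages follow directly from these counts; as a quick sanity check, the short sketch below simply recomputes them (a minimal illustration using only the numbers quoted above):

```python
# Recompute the fine-aerosol proportions quoted in the text above.
fine_aerosol_no_cough = 23   # fine aerosol samples acquired without coughing
rna_positive = 11            # of those, samples with detectable viral RNA
infectious = 8               # of the RNA-positive samples, those with infectious virus

print(f"RNA-positive fraction: {rna_positive / fine_aerosol_no_cough:.0%}")  # -> 48%
print(f"Infectious among RNA-positive: {infectious / rna_positive:.0%}")     # -> 73%
```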

"The study findings suggest that keeping surfaces clean, washing our hands all the time, and avoiding people who are coughing does not provide complete protection from getting the flu," said Sheryl Ehrman, Don Beall Dean of the Charles W. Davidson College of Engineering at San José State University. "Staying home and out of public spaces could make a difference in the spread of the influenza virus."

Read more at Science Daily

Jan 19, 2018

Why animals diversified on Earth: Cancer research provides clues

Trilobite fossil
Can tumors teach us about animal evolution on Earth? Researchers believe so, and now present a novel hypothesis for why animal diversity increased dramatically on Earth about half a billion years ago. A biological innovation may have been key.

A transdisciplinary and international team from Lund University in Sweden and the University of Southern Denmark presents its findings in Nature Ecology and Evolution.

The new hypothesis holds that the dramatic diversification of animals resulted from a revolution within the animals' own biology, rather than in the surrounding chemistry on Earth's surface.

Life on Earth was dominated by microbes for roughly 4 billion years before multicellular life -- in the form of animals in robust ecosystems -- suddenly made a vigorous entry. Why animals diversified so late and so dramatically has remained unresolved and is a matter of hot debate.

The diversification of animals occurred over a geologically short period of time and is known as the Cambrian explosion. Many geologists have assumed that the Cambrian explosion was triggered by an increase of atmospheric oxygen.

However, a causal relationship between the Cambrian explosion and increasing atmospheric oxygen lacks convincing evidence.

Historic focus on high oxygen

Indeed, research over recent years has weakened the support for a correlation between the Cambrian explosion and increasing atmospheric oxygen. For example, dramatic changes in atmospheric oxygen are noted both before and after the Cambrian, but not specifically when animal diversification took off.

Furthermore, simple animals have been shown to require surprisingly low oxygen levels -- levels that would have been met well before the Cambrian.

"A heated hunt for the geochemical evidence that oxygen increased when animals diversified goes on but, after decades of discussion, it seems worthwhile to consider the development of multicellularity also from other angles," says geobiologist Emma Hammarlund, PhD and researcher at the division for translational cancer research at Lund University and guest researcher at the Nordic Center for Earth Evolution at the University of Southern Denmark.

Tumors are successful versions of multicellularity, even at low oxygen

In order to understand more about the conditions for multicellular life, Emma Hammarlund contacted tumor biologist, Professor Sven Påhlman at the Department of Laboratory Medicine at Lund University, who has explored the importance of low oxygen concentrations, or so-called hypoxia, in the tumor setting for nearly two decades.

"I wanted to learn what tumor scientists observe on a daily basis, in terms of tissue growth and how it relates to oxygen. Tumours are after all, and unfortunately, successful versions of multicellularity," explains Emma Hammarlund.

The team, including also tumor biologist Dr. Kristoffer von Stedingk at Lund University's Paediatrics division, tackled the historic question of why animals developed so late and dramatically with novel clues from the field of tumour biology.

A shared success factor

Specifically, they tested whether the same molecular tools exploited by many tumors -- to maintain stem cell properties -- could also be relevant to the success of animals in the Cambrian explosion.

Cells with stem cell properties are vital for all multicellular life in order to regenerate tissue. For example, cells in the wall of the human small intestine are replaced every 2-4 days through the division of stem cells.

"Hypoxia is generally seen as a threat, but we forget that oxygen shortage in precise periods and settings also is a prerequisite for multicellular life. Our stem cells are the ones that form new tissue, and they are extremely sensitive to oxygen. The stem cells therefore have various systems for dealing with the effects of both oxygen and oxygen shortage, which is clear in the case of tumors," explains Sven Påhlman.

These systems involve a protein that can 'fool' cells into acting as if the setting were hypoxic, which can also push cells toward stem cell-like properties.

Tumor cells cope with oxygen

By studying the ability of tumor cells to imitate the properties of stem cells, Sven Påhlman's team has observed how tumor cells can hijack specific mechanisms that evade the negative effects high oxygen has on stem cells. As a consequence, the tumor cells are able to maintain stem cell properties despite being surrounded by the high oxygen concentrations present in the body.

This same ability, according to the authors, is one of the keys that also made animals so successful.

"The ability to construct stem cell properties despite high oxygen levels, so called 'pseudohypoxia', is present also in our normal vertebrate tissue. Therefore, we flip the perspective on the oxic setting: While low oxygen is generally unproblematic for animal cells, the oxic settings pose a fundamental challenge for complex multicellularity. Without additional tools, the oxic setting makes tissue-specific stem cells mature too early," says Sven Påhlman.

A biological revolution

The new hypothesis, which credits a biological innovation with triggering animal diversification, is in line with how we think of other biological innovations that changed life in the past. The very presence of free oxygen, after all, is the result of some microbes finding a way of using sunlight to get energy -- also a biological event.

This view fits with other geobiological observations, such as the fact that environments with 'enough' oxygen had been present on Earth since long before the Cambrian explosion.

The hypothesis also has implications for how animals may have varying capacities to live in oxygenated environments, and perhaps even for how we see cancer as an evolutionary consequence of our ability to live in oxygenated niches.

Bringing geobiology and cancer research together

Taking an evolutionary approach is unusual for cancer researchers, even though the development of tumors is generally seen as an evolutionary process.

Similarly, geobiological research rarely applies the cellular perspective. But having combined their expertise, both Emma Hammarlund and Sven Påhlman are surprised that we have not previously wondered about our paradoxical ability to renew tissue in the oxic setting.

Read more at Science Daily

Method uses DNA, nanoparticles and lithography to make optically active structures

Northwestern University researchers have developed a new method to precisely arrange nanoparticles of different sizes and shapes in two and three dimensions, resulting in optically active superlattices.
Northwestern University researchers have developed a first-of-its-kind technique for creating entirely new classes of optical materials and devices that could lead to light bending and cloaking devices -- news to make the ears of Star Trek's Spock perk up.

Using DNA as a key tool, the interdisciplinary team took gold nanoparticles of different sizes and shapes and arranged them in two and three dimensions to form optically active superlattices. Structures with specific configurations could be programmed, through the choice of particle type and of DNA pattern and sequence, to exhibit almost any color across the visible spectrum, the scientists report.

"Architecture is everything when designing new materials, and we now have a new way to precisely control particle architectures over large areas," said Chad A. Mirkin, the George B. Rathmann Professor of Chemistry in the Weinberg College of Arts and Sciences at Northwestern. "Chemists and physicists will be able to build an almost infinite number of new structures with all sorts of interesting properties. These structures cannot be made by any known technique."

The technique combines an old fabrication method -- top-down lithography, the same method used to make computer chips -- with a new one -- programmable self-assembly driven by DNA. The Northwestern team is the first to combine the two to achieve individual particle control in three dimensions.

The study was published online by the journal Science today (Jan. 18). Mirkin and Vinayak P. Dravid and Koray Aydin, both professors in Northwestern's McCormick School of Engineering, are co-corresponding authors.

Scientists will be able to use the powerful and flexible technique to build metamaterials -- materials not found in nature -- for a range of applications including sensors for medical and environmental uses.

The researchers used a combination of numerical simulations and optical spectroscopy techniques to identify particular nanoparticle superlattices that absorb specific wavelengths of visible light. The DNA-modified nanoparticles -- gold in this case -- are positioned on a pre-patterned template made of complementary DNA. Stacks of structures can be made by introducing a second and then a third DNA-modified particle with DNA that is complementary to the subsequent layers.

In addition to being unusual architectures, these materials are stimuli-responsive: the DNA strands that hold them together change in length when exposed to new environments, such as solutions of ethanol that vary in concentration. The change in DNA length, the researchers found, resulted in a change of color from black to red to green, providing extreme tunability of optical properties.

"Tuning the optical properties of metamaterials is a significant challenge, and our study achieves one of the highest tunability ranges achieved to date in optical metamaterials," said Aydin, assistant professor of electrical engineering and computer science at McCormick.

"Our novel metamaterial platform -- enabled by precise and extreme control of gold nanoparticle shape, size and spacing -- holds significant promise for next-generation optical metamaterials and metasurfaces," Aydin said.

The study describes a new way to organize nanoparticles in two and three dimensions. The researchers used lithography methods to drill tiny holes -- only one nanoparticle wide -- in a polymer resist, creating "landing pads" for nanoparticle components modified with strands of DNA. The landing pads are essential, Mirkin said, since they keep the structures that are grown vertical.

The nanoscopic landing pads are modified with one sequence of DNA, and the gold nanoparticles are modified with complementary DNA. By alternating nanoparticles with complementary DNA, the researchers built nanoparticle stacks with tremendous positional control and over a large area. The particles can be different sizes and shapes (spheres, cubes and disks, for example).

"This approach can be used to build periodic lattices from optically active particles, such as gold, silver and any other material that can be modified with DNA, with extraordinary nanoscale precision," said Mirkin, director of Northwestern's International Institute for Nanotechnology.

Mirkin also is a professor of medicine at Northwestern University Feinberg School of Medicine and professor of chemical and biological engineering, biomedical engineering and materials science and engineering in the McCormick School.

The success of the reported DNA programmable assembly required expertise with hybrid (soft-hard) materials and exquisite nanopatterning and lithographic capabilities to achieve the requisite spatial resolution, definition and fidelity across large substrate areas. The project team turned to Dravid, a longtime collaborator of Mirkin's who specializes in nanopatterning, advanced microscopy and characterization of soft, hard and hybrid nanostructures.

Read more at Science Daily

Long-term warming trend continued in 2017: NASA, NOAA

This map shows Earth's average global temperature from 2013 to 2017, as compared to a baseline average from 1951 to 1980, according to an analysis by NASA's Goddard Institute for Space Studies. Yellows, oranges, and reds show regions warmer than the baseline.
Earth's global surface temperatures in 2017 ranked as the second warmest since 1880, according to an analysis by NASA.

Continuing the planet's long-term warming trend, globally averaged temperatures in 2017 were 1.62 degrees Fahrenheit (0.90 degrees Celsius) warmer than the 1951 to 1980 mean, according to scientists at NASA's Goddard Institute for Space Studies (GISS) in New York. That is second only to global temperatures in 2016.

In a separate, independent analysis, scientists at the National Oceanic and Atmospheric Administration (NOAA) concluded that 2017 was the third-warmest year in their record. The minor difference in rankings is due to the different methods used by the two agencies to analyze global temperatures, although over the long-term the agencies' records remain in strong agreement. Both analyses show that the five warmest years on record all have taken place since 2010.

Because weather station locations and measurement practices change over time, there are uncertainties in the interpretation of specific year-to-year global mean temperature differences. Taking this into account, NASA estimates that 2017's global mean change is accurate to within 0.1 degree Fahrenheit, with a 95 percent certainty level.

"Despite colder than average temperatures in any one part of the world, temperatures over the planet as a whole continue the rapid warming trend we've seen over the last 40 years," said GISS Director Gavin Schmidt.

The planet's average surface temperature has risen about 2 degrees Fahrenheit (a little more than 1 degree Celsius) during the last century or so, a change driven largely by increased carbon dioxide and other human-made emissions into the atmosphere. Last year was the third consecutive year in which global temperatures were more than 1.8 degrees Fahrenheit (1 degree Celsius) above late nineteenth-century levels.

Phenomena such as El Niño or La Niña, which warm or cool the upper tropical Pacific Ocean and cause corresponding variations in global wind and weather patterns, contribute to short-term variations in global average temperature. A warming El Niño event was in effect for most of 2015 and the first third of 2016. Even without an El Niño event -- and with a La Niña starting in the later months of 2017 -- last year's temperatures ranked between 2015 and 2016 in NASA's records.

In an analysis where the effects of the recent El Niño and La Niña patterns were statistically removed from the record, 2017 would have been the warmest year on record.

Weather dynamics often affect regional temperatures, so not every region on Earth experienced similar amounts of warming. NOAA found the 2017 annual mean temperature for the contiguous 48 United States was the third warmest on record.

Warming trends are strongest in the Arctic regions, where 2017 saw the continued loss of sea ice.

NASA's temperature analyses incorporate surface temperature measurements from 6,300 weather stations, ship- and buoy-based observations of sea surface temperatures, and temperature measurements from Antarctic research stations.

These raw measurements are analyzed using an algorithm that considers the varied spacing of temperature stations around the globe and urban heating effects that could skew the conclusions. These calculations produce the global average temperature deviations from the baseline period of 1951 to 1980.
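
To illustrate the kind of spatial weighting such an algorithm must perform, here is a minimal sketch of a latitude-weighted average of station anomalies. This is not NASA's actual GISTEMP code -- the real analysis also interpolates between stations and corrects for urban heating -- and the station values below are invented for illustration:

```python
import math

# Hypothetical (latitude in degrees, anomaly vs. 1951-1980 baseline in deg C) pairs.
station_anomalies = [
    (70.0, 1.8),    # Arctic stations tend to show the strongest warming
    (40.0, 0.9),
    (0.0, 0.6),
    (-35.0, 0.7),
]

# Weight each station by the cosine of its latitude, a proxy for the area of
# the latitude band it represents (equal-angle grid cells shrink toward the poles).
weights = [math.cos(math.radians(lat)) for lat, _ in station_anomalies]
weighted_sum = sum(w * anom for w, (_, anom) in zip(weights, station_anomalies))
global_mean_anomaly = weighted_sum / sum(weights)

print(f"Area-weighted global mean anomaly: {global_mean_anomaly:.2f} deg C")
```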

NOAA scientists used much of the same raw temperature data, but with a different baseline period, and different methods to analyze Earth's polar regions and global temperatures.

The full 2017 surface temperature data set and the complete methodology used to make the temperature calculation are available at: https://data.giss.nasa.gov/gistemp

Read more at Science Daily

First evidence of sub-Saharan Africa glassmaking

Photo of glass beads.
Scholars from Rice University, University College London and the Field Museum have found the first direct evidence that glass was produced in sub-Saharan Africa centuries before the arrival of Europeans, a finding that the researchers said represents a "new chapter in the history of glass technology."

The discovery is discussed in "Chemical Analysis of Glass Beads from Igbo Olokun, Ile-Ife (SW Nigeria): New Light on Raw Materials, Production and Interregional Interactions," which will appear in an upcoming volume of the Journal of Archaeological Science.

Lead author Abidemi Babatunde Babalola, a recent graduate of Rice with a Ph.D. in anthropology and a visiting fellow at Harvard University, came across evidence of early glassmaking during archaeological excavations at Igbo Olokun, located on the northern periphery of Ile-Ife in southwestern Nigeria. He recovered more than 12,000 glass beads and several kilograms of glass-working debris.

"This area has been recognized as a glass-working workshop for more than a century," Babalola said. "The glass-encrusted containers and beads that have been uncovered there were viewed for many years as evidence that imported glass was remelted and reworked."

However, 10 years ago this idea was challenged when analyses of glass beads attributed to Ile-Ife showed that some had a chemical composition very different from that of known glass production areas. Researchers raised the possibility of local production in Ife, although direct evidence for glassmaking and its chronology was lacking.

"The Igbo Olokun excavations have provided that evidence," Babalola said.

The researchers' analysis of 52 glass beads from the excavated assemblage revealed that none matched the chemical composition of any other known glass-production area in the Old World, including Egypt, the eastern Mediterranean, the Middle East and Asia. Rather, the beads have a high-lime, high-alumina (HLHA) composition that reflects local geology and raw materials, the researchers said. The excavations provided evidence that glass production at Igbo Olokun dates to the 11th through 15th centuries A.D., well before the arrival of Europeans along the coast of West Africa.

Read more at Science Daily

Jan 18, 2018

How massive can neutron stars be?

Emission of gravitational waves during a neutron star merger.
Astrophysicists at Goethe University Frankfurt set a new limit for the maximum mass of neutron stars: It cannot exceed 2.16 solar masses.

Since their discovery in the 1960s, scientists have sought to answer an important question: How massive can neutron stars actually become? In contrast to black holes, these stars cannot gain mass arbitrarily; past a certain limit, there is no physical force in nature that can counter their enormous gravitational pull. For the first time, astrophysicists at Goethe University Frankfurt have succeeded in calculating a strict upper limit for the maximum mass of neutron stars.

With a radius of about twelve kilometres and a mass that can be twice as large as that of the sun, neutron stars are amongst the densest objects in the Universe, producing gravitational fields comparable to those of black holes. Whilst most neutron stars have a mass of around 1.4 times that of the sun, massive examples are also known, such as the pulsar PSR J0348+0432 with 2.01 solar masses.

The density of these stars is enormous, as if the entire Himalayas were compressed into a beer mug. However, there are indications that a neutron star with a maximum mass would collapse to a black hole if even just a single neutron were added.
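
A back-of-the-envelope calculation with the figures quoted above (a roughly twelve-kilometre radius and about two solar masses) gives a feel for that density. This is only an order-of-magnitude sketch, not a number from the paper:

```python
import math

M_SUN = 1.989e30           # solar mass in kg
mass = 2.0 * M_SUN         # a heavy neutron star, per the text
radius = 12.0e3            # ~12 km radius, in metres

volume = (4.0 / 3.0) * math.pi * radius ** 3
density = mass / volume    # mean density in kg per cubic metre

print(f"Mean density: {density:.1e} kg/m^3")       # ~5e17 kg/m^3
print(f"Mass of 1 cm^3: {density * 1e-6:.1e} kg")  # hundreds of millions of tonnes
```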

Together with his students Elias Most and Lukas Weih, Professor Luciano Rezzolla, physicist, senior fellow at the Frankfurt Institute for Advanced Studies (FIAS) and professor of Theoretical Astrophysics at Goethe University Frankfurt, has now solved the problem that had remained unanswered for 40 years: With an accuracy of a few percent, the maximum mass of non-rotating neutron stars cannot exceed 2.16 solar masses.

The basis for this result was the "universal relations" approach developed in Frankfurt a few years ago. The existence of "universal relations" implies that practically all neutron stars "look alike," meaning that their properties can be expressed in terms of dimensionless quantities. The researchers combined these "universal relations" with data on gravitational-wave signals and the subsequent electromagnetic radiation (kilonova) obtained during the observation last year of two merging neutron stars in the framework of the LIGO experiment. This simplifies calculations tremendously because it makes them independent of the equation of state. This equation is a theoretical model for describing dense matter inside a star that provides information on its composition at various depths in the star. Such a universal relation therefore played an essential role in defining the new maximum mass.
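
Schematically, the chain of reasoning can be written in two steps: the universal relation ties the maximum mass of a uniformly rotating neutron star to that of a non-rotating one, and assuming the GW170817 remnant collapsed close to the rotating limit then bounds the non-rotating maximum. The coefficient below is the approximate value from the Frankfurt group's earlier universal-relations work; this is a sketch of the logic, not the paper's full calculation:

```latex
M_{\mathrm{max}}^{\mathrm{rot}} \approx 1.20\, M_{\mathrm{TOV}}
\qquad\Longrightarrow\qquad
M_{\mathrm{TOV}} \lesssim \frac{M_{\mathrm{max}}^{\mathrm{rot}}}{1.20} \approx 2.16\, M_{\odot}
```

Here M_TOV denotes the maximum mass of a non-rotating (Tolman-Oppenheimer-Volkoff) configuration.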

The result is a good example of the interaction between theoretical and experimental research. "The beauty of theoretical research is that it can make predictions. Theory, however, desperately needs experiments to narrow down some of its uncertainties," says Professor Rezzolla. "It's therefore quite remarkable that the observation of a single binary neutron star merger that occurred millions of light years away combined with the universal relations discovered through our theoretical work have allowed us to solve a riddle that has seen so much speculation in the past."

The research results were published as a Letter in The Astrophysical Journal. Just a few days later, research groups from the USA and Japan confirmed the findings, despite having so far followed different and independent approaches.

Read more at Science Daily

New technique for finding life on Mars

Co-author I. Altshuler sampling permafrost terrain near the McGill Arctic research station, Canadian high Arctic.
Researchers demonstrate for the first time the potential of existing technology to directly detect and characterize life on Mars and other planets. The study, published in Frontiers in Microbiology, used miniaturized scientific instruments and new microbiology techniques to identify and examine microorganisms in the Canadian high Arctic -- one of the closest analogs to Mars on Earth. By avoiding delays that come with having to return samples to a laboratory for analysis, the methodology could also be used on Earth to detect and identify pathogens during epidemics in remote areas.

"The search for life is a major focus of planetary exploration, but there hasn't been direct life detection instrumentation on a mission since the 70s, during the Viking missions to Mars," explains Dr Jacqueline Goordial, one of the study's authors. "We wanted to show a proof-of-concept that microbial life can be directly detected and identified using very portable, low-weight, and low-energy tools."

At present, most instruments on astrobiology missions look for habitable conditions, small organic molecules and other "biosignatures" that generally could not be formed without life. However, these provide only indirect evidence of life. Moreover, current instruments are relatively large and heavy with high energy requirements. This makes them unsuitable for missions to Europa and Enceladus -- moons of Jupiter and Saturn which, along with Mars, are the primary targets in the search for life in our solar system.

Dr Goordial, together with Professor Lyle Whyte and other scientists from Canada's McGill University, took a different approach: the use of multiple, miniature instruments to directly detect and analyze life. Using existing low-cost and low-weight technology in new ways, the team created a modular "life detection platform" able to culture microorganisms from soil samples, assess microbial activity, and sequence DNA and RNA.

To detect and characterize life on Mars, Europa and Enceladus, the platform would need to work in environments with extremely cold temperatures. The team therefore tested it at a remote site in one of the closest analogs on Earth: the polar regions.

"Mars is a very cold and dry planet, with a permafrost terrain that looks a lot like what we find in the Canadian high Arctic," says Dr Goordial. "For this reason, we chose a site about 900 km from the North Pole as a Mars analog to take samples and test our methods."

Using a portable, miniature DNA sequencing device (Oxford Nanopore MinION), the researchers show for the first time not only that the tool can be used for examining environmental samples in extreme and remote settings, but that it can be combined with other methodology to detect active microbial life in the field. The researchers were able to isolate extremophilic microorganisms that have never been cultured before, detect microbial activity, and sequence DNA from the active microbes.

"Successful detection of nucleic acids in Martian permafrost samples would provide unambiguous evidence of life on another world," says Dr Goordial. "However, the presence of DNA alone doesn't tell you much about the state of an organism -- it could be dormant or dead, for example. By using the DNA sequencer with the other methodology in our platform, we were able to first find active life, and then identify it and analyze its genomic potential, that is, the kinds of functional genes it has."

While the team showed that such a platform could theoretically be used to detect life on other planets, it is not ready for a space mission just yet. "Humans were required to carry out much of the experimentation in this study, while life detection missions on other planets will need to be robotic," says Dr Goordial. "The DNA sequencer also needs higher accuracy and durability to withstand the long timescales required for planetary missions."

Nevertheless, Dr Goordial and her team hope this study will act as a starting point for future development of life detection tools.

In the meantime, the platform has potential applications here on Earth. "The types of analyses performed by our platform are typically carried out in the laboratory, after shipping samples back from the field. We show that microbial ecology studies can now be done in real time, directly on site -- including in extreme environments like the Arctic and Antarctic," says Dr Goordial.

Read more at Science Daily

Warming Arctic climate constrains life in cold-adapted mammals

Cold-adapted species, like the muskoxen, are feeling the heat, according to wildlife biologist Joel Berger.
Despite the growth in knowledge about the effects of a warming Arctic on its cold-adapted species, how these changes affect animal populations is poorly understood. Research efforts have been hindered by the area's remoteness and complex logistics required to gain access.

A new study led by Joel Berger, professor in the Department of Fish, Wildlife and Conservation Biology at Colorado State University, has uncovered previously unknown effects of rain-on-snow events, winter precipitation and ice tidal surges on the Arctic's largest land mammal, the muskox.

The warmer climate is stressing mothers and young muskoxen, said Berger, also a senior scientist with the Wildlife Conservation Society. Rain-on-snow events occurring in the winter -- when muskoxen gestate -- and unusually dry winter conditions have also led to underdeveloped skeletal growth in juvenile muskoxen. This effect can be traced back to their pregnant mothers.

"When rain-on-snow events occur in the Arctic, due to warming temperatures, and the snow freezes again, this leads to mothers not being able to access food for adequate nutrition," said Berger. "The babies then, unfortunately, pay the price."

The smaller size observed in juvenile and young adult muskoxen is associated with poorer health and fitness, due to delayed puberty, and increased mortality, according to the research team.

In addition, the scientists documented a mass mortality event due to a one-time extreme ice event caused by a tidal surge. In February 2011, a historically high tidal surge resulted in at least 52 muskoxen being submerged on the northern coast of the Bering Land Bridge peninsula.

Researchers also found historical records documenting winter deaths due to rapid freezing and thawing -- 170 beluga whales and 150 narwhals at single sites, and sea otters along the Aleutian Islands.

"Unlike polar bears, which are on the world's stage, no one really knows about muskoxen or cares," said Berger. "They roamed with wooly mammoths but still survive. Muskoxen are feeling the heat, just as we humans are feeling the extremes of climate. These wild weather swings have massive impacts on us. Solutions are clear, but we fail to respond by changing our consumptive ways."

Measuring a muskox, tracking temperatures

The research team analyzed head size of juvenile muskoxen using digital photo data over the span of seven years and at three sites in Alaska and Russia. They also compiled winter weather data for the Alaskan sites from the closest weather stations maintained by the National Oceanic and Atmospheric Administration. Data used to calculate rain-on-snow events on Wrangel Island, located in the Arctic Ocean, between the Chukchi Sea and East Siberian Sea, were from the Federal Hydro-meteorological Service of Russia, whose records date back to 1926.

Berger acknowledged the role his research plays "in a challenging political era." His collaborators include Russian scientists in Asia and the Alaskan Arctic. Berger is not only a researcher, but he also serves as a diplomat of sorts, working closely with Russian research counterparts and the government. He praised them for their active engagement and willingness to share data.

Read more at Science Daily

World’s oldest known oxygen oasis discovered

Rock layers in the Pongola Basin, South Africa.
In the Earth's early history, several billion years ago, only traces of oxygen existed in the atmosphere and the oceans. Today's air-breathing organisms could not have existed under those conditions. The change was caused by photosynthesizing bacteria, which created oxygen as a by-product -- in vast amounts. 2.5-billion-year-old rock layers on several continents have yielded indications that the first big increase in the proportion of oxygen in the atmosphere took place then.

Now, working with international colleagues, Dr. Benjamin Eickmann and Professor Ronny Schönberg, isotope geochemists from the University of Tübingen, have discovered layers in South Africa's Pongola Basin which bear witness to oxygen production by bacteria as early as 2.97 billion years ago. That makes the basin the earliest known home to oxygen-producing organisms -- known as an oxygen oasis. The study has been published in the latest Nature Geoscience.

Conditions on Earth some three billion years ago were inhospitable, to say the least. The atmosphere contained only one hundred-thousandth of the oxygen it has today. The primeval oceans contained hardly any sulfate, but they did contain large amounts of ferrous iron. When bacteria started producing oxygen, it could initially bond with other elements, but it began to enrich the atmosphere in a massive oxygen emission event around 2.5 billion years ago.

"We can see that in the disappearance of reduced minerals in the sediments on the continents. Certain sulfur signatures which can only be formed in a low-oxygen atmosphere are no longer to be found," says Benjamin Eickmann, the study's lead author. This event, which could be described as global environmental pollution, went down in the Earth's history as the Great Oxygenation Event. It was a disaster for the early bacteria types which had evolved under low-oxygen conditions; the oxygen poisoned them. "However, after the first big rise, the atmosphere only contained 0.2 percent oxygen; today it's around 21 percent," Eickmann explains. Exposed to an atmosphere which contained increasing amounts of oxygen, the continents were subject to enhanced erosion. That led to more trace elements entering the oceans. The improved supply of nutrients in turn led to more life forms in the seas.

Sulfur signatures as an archive of Earth history

In their current study, the researchers investigated the 2.97-billion-year-old sediments deposited in the Pongola Basin in what is now South Africa. From the proportions of sulfur isotopes (particularly the 34S/32S ratio) in the sediments, the researchers are able to conclude that the bacteria used the sulfate in the primeval seas as a source of energy, reducing it chemically.
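
For reference, such isotope ratios are conventionally reported in delta notation -- the per-mil deviation of a sample's ratio from that of a standard (for sulfur, Vienna Canyon Diablo Troilite, V-CDT). The same form applies to the 33S/32S signature discussed below:

```latex
\delta^{34}\mathrm{S} =
\left(
  \frac{\left(^{34}\mathrm{S}/^{32}\mathrm{S}\right)_{\mathrm{sample}}}
       {\left(^{34}\mathrm{S}/^{32}\mathrm{S}\right)_{\mathrm{V\text{-}CDT}}}
  - 1
\right) \times 1000~\text{‰}
```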

"Sulfate is a form of oxidized sulfur. A higher concentration of sulfate in the water indicates that sufficient free oxygen must have been present in the shallow sea of the Pongola Basin," Ronny Schönberg says. This free oxygen must have been produced by other, photosynthesizing bacteria. At the same time, another sulfur isotope signature (the 33S/32S ratio) in these sediments indicates a continued reduced, very low-oxygen atmosphere.

Read more at Science Daily

Ancient DNA results end 4,000-year-old Egyptian mummy mystery

The mummies of the 'Two Brothers,' Khnum-nakht and Nakht-ankh, dating to around 1800 BC, in the Manchester Museum.
Using 'next generation' DNA sequencing, scientists have found that the famous 'Two Brothers' mummies of the Manchester Museum had different fathers and so are, in fact, half-brothers.

The Two Brothers are the Museum's oldest mummies and amongst the best-known human remains in its Egyptology collection. They are the mummies of two elite men -- Khnum-nakht and Nakht-ankh -- dating to around 1800 BC.

However, ever since their discovery in 1907, there has been some debate amongst Egyptologists as to whether the two were actually related at all. So, in 2015, 'ancient DNA' was extracted from their teeth to solve the mystery.

But how did the mystery start? The pair's joint burial site, later dubbed The Tomb of The Two Brothers, was discovered at Deir Rifeh, a village 250 miles south of Cairo.

They were found by Egyptian workmen directed by the early 20th-century Egyptologists Flinders Petrie and Ernest Mackay. Hieroglyphic inscriptions on the coffins indicated that both men were the sons of an unnamed local governor and had mothers with the same name, Khnum-aa. It was then that the men became known as the Two Brothers.

The complete contents of the tomb were shipped to Manchester in 1908, and the mummies of both men were unwrapped by the UK's first professional female Egyptologist, Dr Margaret Murray. Her team concluded that the skeletal morphologies were quite different, suggesting an absence of family relationship. Based on contemporary inscriptional evidence, it was proposed that one of the Brothers was adopted.

Therefore, in 2015, the DNA was extracted from the teeth and, following hybridization capture of the mitochondrial and Y chromosome fractions, sequenced by a next generation method. Analysis showed that both Nakht-Ankh and Khnum-Nakht belonged to mitochondrial haplotype M1a1, suggesting a maternal relationship. The Y chromosome sequences were less complete but showed variations between the two mummies, indicating that Nakht-Ankh and Khnum-Nakht had different fathers, and were thus very likely to have been half-brothers.

Dr Konstantina Drosou, of the School of Earth and Environmental Sciences at the University of Manchester, who conducted the DNA sequencing, said: "It was a long and exhausting journey to the results but we are finally here. I am very grateful we were able to add a small but very important piece to the big history puzzle and I am sure the brothers would be very proud of us. These moments are what make us believe in ancient DNA."

The study, which is being published in the Journal of Archaeological Science, is the first to successfully use the typing of both mitochondrial and Y chromosomal DNA in Egyptian mummies.

Read more at Science Daily

Jan 17, 2018

Not just for Christmas: Study sheds new light on ancient human-turkey relationship

Wild turkey.
For the first time, research has uncovered the origins of the earliest domestic turkeys in ancient Mexico.

The study also suggests turkeys weren't only prized for their meat -- demand for the birds soared among the Mayans and Aztecs because of the turkeys' cultural significance in rituals and sacrifices.

In an international collaboration, researchers from the University of York, the Institute of Anthropology and History in Mexico, Washington State University and Simon Fraser University studied the remains of 55 turkeys which lived between 300 BC and 1500 AD and had been discovered in Mesoamerica -- an area stretching from central Mexico to northern Costa Rica within which pre-Columbian societies such as the Mayans and Aztecs flourished.

Analysing the ancient DNA of the birds, the researchers were able to confirm that modern European turkeys are descended from Mexican ancestors.

The team also measured carbon isotope ratios in the turkey bones to reconstruct the birds' diets. They found that the turkeys were gobbling human-cultivated crops such as corn in increasing amounts, particularly in the centuries leading up to Spanish exploration, implying more intensive farming of the birds.
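
The method rests on standard stable-isotope diet reconstruction: corn (maize) is a C4 plant, and C4 plants are distinctly less depleted in 13C than the C3 wild plants the birds would otherwise have foraged, so a rising 13C/12C ratio in bone tracks a growing share of corn in the diet. In conventional delta notation, relative to the V-PDB standard (the endmember values below are typical textbook figures, not numbers from this study):

```latex
\delta^{13}\mathrm{C} =
\left(
  \frac{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{sample}}}
       {\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{V\text{-}PDB}}}
  - 1
\right) \times 1000~\text{‰},
\qquad
\delta^{13}\mathrm{C}_{\mathrm{C3}} \approx -26\,\text{‰},
\quad
\delta^{13}\mathrm{C}_{\mathrm{C4}} \approx -12\,\text{‰}
```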

Interestingly, the gradual intensification of turkey farming does not directly correlate to an increase in human population size, a link you would expect to see if turkeys were reared simply as a source of nutrition.

Lead author of the paper and Marie Skłodowska-Curie Fellow in the Department of Archaeology at the University of York, Dr Aurélie Manin, said: "Turkey bones are rarely found in domestic refuse in Mesoamerica and most of the turkeys we studied had not been eaten -- some were found buried in temples and human graves, perhaps as companions for the afterlife. This fits with what we know about the iconography of the period, where we see turkeys depicted as gods and appearing as symbols in the calendar.

"The archaeological evidence suggests that meat from deer and rabbit was a more popular meal choice for people in pre-Columbian societies; turkeys are likely to have also been kept for their increasingly important symbolic and cultural role."

The fact that some of the turkey bones were uncovered outside of the natural range of the species also suggests that there was a thriving turkey trade in live birds along Mesoamerica's expanding trade routes.

Senior author of the paper from the Department of Archaeology at the University of York, Dr Camilla Speller, said: "Even though humans in this part of the world had been practicing agriculture for around 10,000 years, the turkey was the first animal, other than the dog, people in Mesoamerica started to take under their control.

"Turkeys would have made a good choice for domestication as there were not many other animals of suitable temperament available and turkeys would have been drawn to human settlements searching for scraps"

Some of the remains the researchers analysed were from a cousin of the common turkey -- the brightly plumed ocellated turkey. In a strange twist, the researchers found that the diets of these more ornate birds remained largely composed of wild plants and insects, suggesting that they were left to roam free and never domesticated.

Read more at Science Daily

Why don't turtles still have tail spikes?

We're all familiar with those awesome armored giants of the Jurassic and Cretaceous periods -- Stegosaurus and Ankylosaurus -- and their amazing, weaponized tails. But why aren't similar weaponized tails found in animals living today? In a study covering 300 million years of evolutionary history, researchers from North Carolina State University and the North Carolina Museum of Natural Sciences found four necessary components to tail weapon development: size, armor, herbivory and thoracic stiffness.

"Weapons like tail clubs and bony spikes are found only in a few extinct animals -- such as ankylosaurs, glyptodonts (large extinct armadillos) and in some ancient turtle species," says Victoria Arbour, former postdoctoral student at NC State, current postdoctoral fellow at the Royal Ontario Museum and corresponding author of a paper describing the research. "These same weapons just don't occur in modern-day animals, and we wanted to know why they were so rare even in the fossil record."

Study co-author Lindsay Zanno, professor of biological sciences at NC State and head of paleontology at the NC Museum of Natural Sciences, agrees, "We kicked off this study with a simple observation: most animal weapons used for combat are located on the most critical part of the body for survival, the head, as opposed to more expendable ones such as the tail. Why, we asked, wasn't evolution producing more animals with weaponized tails, when this would seem to be far less dangerous?"

To answer this question, Arbour and Zanno looked at a data set of 286 amniote species, both living and extinct, to see if there were patterns that pointed to the evolution of three specific types of tail weapons: bony spikes, a stiff tail or a bony knob at the tip of the tail. Amniotes are backboned, four-limbed animals -- reptiles and mammals, as well as birds.

In the case of bony tail weaponry, the researchers found the animals had four things in common. First, they were usually large, weighing over 220 pounds (100 kilograms) -- about the weight of the glyptodonts that used to roam South America or a living mountain goat -- or were over three feet (a meter) long.

Second, armor was key. Ancient turtles, armadillos and armored dinosaurs were covered in some sort of hard carapace or bony plated armor. Thoracic stiffness -- referring to a body that doesn't bend side to side easily, perhaps so that it could easily counteract the forces needed to swing a large clubbed or spiked tail -- was also important. Finally, every animal in the fossil record that developed elaborate tail weaponry was an herbivore, or vegetarian.

"It's rare for large herbivores to have lots of bony armor to begin with," Arbour says, "and even rarer to see armored species with elaborate head or tail ornamentation because of the energy cost to the animal. The evolution of tail weaponry in Ankylosaurus and Stegosaurus required a 'perfect storm' of traits that aren't seen in living animals, and this unique combination explains why tail weaponry is rare even in the fossil record."

Read more at Science Daily

Recording a thought's fleeting trip through the brain

Brain activity.
University of California, Berkeley neuroscientists have tracked the progress of a thought through the brain, showing clearly how the prefrontal cortex at the front of the brain coordinates activity to help us act in response to a perception.

Recording the electrical activity of neurons directly from the surface of the brain, the scientists found that for a simple task, such as repeating a word presented visually or aurally, the visual and auditory cortices reacted first to perceive the word. The prefrontal cortex then kicked in to interpret the meaning, followed by activation of the motor cortex in preparation for a response. During the half-second between stimulus and response, the prefrontal cortex remained active to coordinate all the other brain areas.

For a particularly hard task, like determining the antonym of a word, the brain required several seconds to respond, during which the prefrontal cortex recruited other areas of the brain, including presumably memory networks not actually visible. Only then did the prefrontal cortex hand off to the motor cortex to generate a spoken response. The quicker the brain's handoff, the faster people responded.

Interestingly, the researchers found that the brain began to prepare the motor areas to respond very early, during initial stimulus presentation, suggesting that we get ready to respond even before we know what the response will be.

"This might explain why people sometimes say things before they think," said Avgusta Shestyuk, a senior researcher in UC Berkeley's Helen Wills Neuroscience Institute and lead author of a paper reporting the results in the current issue of Nature Human Behavior.

The findings, including the key role played by the prefrontal cortex in coordinating all the activated regions of the brain, are in line with what neuroscientists have pieced together over the past decades from studies in monkeys and humans.

"These very selective studies have found that the frontal cortex is the orchestrator, linking things together for a final output," said co-author Robert Knight, a UC Berkeley professor of psychology and neuroscience and a professor of neurology and neurosurgery at UCSF. "Here we have eight different experiments, some where the patients have to talk and others where they have to push a button, where some are visual and others auditory, and all found a universal signature of activity centered in the prefrontal lobe that links perception and action. It's the glue of cognition."

While other neuroscientists have used functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) to record activity in the thinking brain, the UC Berkeley scientists employed a much more precise technique, electrocorticography (ECoG), which records from several hundred electrodes placed on the brain surface and detects activity in the thin outer region, the cortex, where thinking occurs. ECoG provides better time resolution than fMRI and better spatial resolution than EEG, but requires access to epilepsy patients undergoing highly invasive surgery involving opening the skull to pinpoint the location of seizures.

Clues from epilepsy patients

The current study involved 16 epilepsy patients who agreed to participate in experiments while undergoing epilepsy surgery at UC San Francisco and California Pacific Medical Center in San Francisco, Stanford University in Palo Alto and Johns Hopkins University in Baltimore.

"This is the first step in looking at how people think and how people come up with different decisions; how people basically behave," said Shestyuk, who recorded from the first patient 10 years ago. "We are trying to look at that little window of time between when things happen in the environment and us behaving in response to it."

Once the electrodes were placed on the brains of each patient, Shestyuk and her colleagues conducted a series of eight tasks that included visual and auditory stimuli. The tasks ranged from simple, such as repeating a word or identifying the gender of a face or a voice, to complex, such as determining a facial emotion, uttering the antonym of a word or assessing whether an adjective describes the patient's personality.

During these tasks, the brain showed four different types of neural activity. Initially, sensory areas of the auditory and visual cortex activate to process audible or visual cues. Subsequently, areas primarily in the sensory and prefrontal cortices activate to extract the meaning of the stimulus. The prefrontal cortex is continuously active throughout these processes, coordinating input from different areas of the brain. Finally, the prefrontal cortex stands down as the motor cortex activates to generate a spoken response or an action, such as pushing a button.

"This persistent activity, primarily seen in the prefrontal cortex, is a multitasking activity," Shestyuk said. "fMRI studies often find that when a task gets progressively harder, we see more activity in the brain, and the prefrontal cortex in particular. Here, we are able to see that this is not because the neurons are working really, really hard and firing all the time, but rather, more areas of the cortex are getting recruited."

Read more at Science Daily

Tracking the impact of early abuse and neglect

Maltreatment experienced before age 5 can have negative effects that continue to be seen nearly three decades later.
Children who experience abuse and neglect early in life are more likely to have problems in social relationships and underachieve academically as adults.

Maltreatment experienced before age 5 can have negative effects that continue to be seen nearly three decades later, according to a new study led by Lee Raby, an assistant professor of psychology at the University of Utah.

"It is not a controversial statement to say abuse and neglect can have harmful consequences," Raby said. "This study adds to that by showing that these effects are long term and don't weaken with time. They persist from childhood across adolescence and into adulthood."

The journal Child Development published the study. Co-authors are: Glenn I. Roisman and Madelyn H. Labella, Institute of Child Development, University of Minnesota; Jodi Martin, Department of Psychology, York University; R. Chris Fraley, Department of Psychology, University of Illinois at Urbana-Champaign; and Jeffry A. Simpson, Department of Psychology, University of Minnesota.

Raby said his team wanted to know two things: Does maltreatment early in life have long-term effects that extend into adulthood? And do those effects remain stable or weaken over time?

The researchers used data from the Minnesota Longitudinal Study of Risk and Adaptation, which has followed participants since their births in the mid-1970s. The University of Utah study looked at data on 267 individuals who had reached ages 32 to 34.

Information about the participants' exposure to physical abuse, sexual abuse and neglect was gathered from multiple sources during two age periods: 0-5 years and 6-17.5 years. Throughout childhood and adolescence, teachers reported on the children's functioning with peers. The children also completed standardized tests on academic achievement. The participants were interviewed again during their 20s and 30s, during which they discussed romantic experiences and educational attainment.

Unlike studies based on adults' retrospective accounts of their childhood experiences, the data used here were collected in real time. In addition, because data on the participants have been collected throughout their lifetimes, the researchers were able to disentangle the effects of maltreatment that occurred in their early years from experiences of abuse and neglect during later childhood.

"The design allows us to ask our two questions in a way no other study has before," Raby said.

Raby said the findings showed those who experienced abuse or neglect early in life consistently were less successful in their social relationships and academic performance during childhood, adolescence and even during adulthood. The effects of maltreatment did not weaken as the participants got older.

"The harmful effect of early abuse and neglect was just as important when we were looking at outcomes at age 32 years as when we looked at outcomes at age 5," he said.

The researchers found abuse and neglect in later childhood also impacted these competencies in adulthood, but that later maltreatment did not fully account for the persistent, long-term influences attributed to abuse and neglect experienced in early childhood. They also found long-term difficulties with social functioning -- but not academic achievement -- occurred independently of such factors as gender, ethnicity and early socioeconomic status.
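
In analysis terms, "disentangling" means estimating the early-maltreatment effect while holding later maltreatment and background factors constant. A minimal sketch of that kind of model on simulated data follows; the variables and effect sizes are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 267  # matches the sample size reported above

# Simulated predictors: early (0-5 yrs) and later (6-17.5 yrs) maltreatment,
# plus a background covariate standing in for early socioeconomic status.
early = rng.integers(0, 2, n).astype(float)
later = rng.integers(0, 2, n).astype(float)
ses = rng.standard_normal(n)

# Simulated adult social-competence outcome with a persistent early effect.
outcome = -0.5 * early - 0.2 * later + 0.3 * ses + rng.standard_normal(n)

# Ordinary least squares: the coefficient on `early` estimates the effect
# of early maltreatment after adjusting for later maltreatment and SES.
X = np.column_stack([np.ones(n), early, later, ses])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(dict(zip(["intercept", "early", "later", "ses"], coef.round(2))))
```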

Read more at Science Daily

Hubble weighs in on mass of 3 million billion suns

In 2014, astronomers using the NASA/ESA Hubble Space Telescope found that this enormous galaxy cluster contains the mass of a staggering three million billion suns -- so it's little wonder that it has earned the nickname of "El Gordo" ("the Fat One" in Spanish)! Known officially as ACT-CLJ0102-4915, it is the largest, hottest, and brightest X-ray galaxy cluster ever discovered in the distant Universe.
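
For scale, "three million billion" is 3 x 10^15, so a quick back-of-the-envelope conversion to kilograms looks like this (using the standard solar mass value):

```python
SOLAR_MASS_KG = 1.989e30              # standard solar mass in kilograms
el_gordo_kg = 3e15 * SOLAR_MASS_KG    # "three million billion suns"
print(f"El Gordo's mass is roughly {el_gordo_kg:.1e} kg")  # ~6.0e45 kg
```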

Galaxy clusters are the largest objects in the Universe that are bound together by gravity. They form over billions of years as smaller groups of galaxies slowly come together. In 2012, observations from ESO's Very Large Telescope, NASA's Chandra X-ray Observatory and the Atacama Cosmology Telescope showed that El Gordo is actually composed of two galaxy clusters colliding at millions of kilometers per hour.

The formation of galaxy clusters depends heavily on dark matter and dark energy; studying such clusters can therefore help shed light on these elusive phenomena. In 2014, Hubble found that most of El Gordo's mass is concealed in the form of dark matter. Evidence suggests that El Gordo's "normal" matter -- largely composed of hot gas that is bright in the X-ray wavelength domain -- is being torn from the dark matter in the collision. The hot gas is slowing down, while the dark matter is not.

This image was taken by Hubble's Advanced Camera for Surveys and Wide-Field Camera 3 as part of an observing program called RELICS (Reionization Lensing Cluster Survey). RELICS imaged 41 massive galaxy clusters with the aim of finding the brightest distant galaxies for the forthcoming James Webb Space Telescope to study.

From Science Daily

Jan 16, 2018

Genes that aid spinal cord healing in lamprey also present in humans, researchers discover

Jennifer Morgan and Ona Bloom with juvenile lamprey in the MBL Whitman Center.
Many of the genes involved in natural repair of the injured spinal cord of the lamprey are also active in the repair of the peripheral nervous system in mammals, according to a study by a collaborative group of scientists at the Marine Biological Laboratory (MBL) and other institutions. This is consistent with the possibility that in the long term, the same or similar genes may be harnessed to improve spinal cord injury treatments.

"We found a large overlap with the hub of transcription factors that are driving regeneration in the mammalian peripheral nervous system," says Jennifer Morgan, director of the MBL's Eugene Bell Center for Regenerative Biology and Tissue Engineering, one of the authors of the study published this week in Scientific Reports.

Lampreys are jawless, eel-like fish that shared a common ancestor with humans about 550 million years ago. This study arose from the observation that a lamprey can fully recover from a severed spinal cord without medication or other treatment.

"They can go from paralysis to full swimming behaviors in 10 to 12 weeks," says Morgan.

"Scientists have known for many years that the lamprey achieves spontaneous recovery from spinal cord injury, but we have not known the molecular recipe that accompanies and supports this remarkable capacity," says Ona Bloom of the Feinstein Institute for Medical Research and the Zucker School of Medicine at Hofstra/Northwell, a former MBL Whitman Center Fellow who collaborated on the project.

"In this study, we have determined all the genes that change during the time course of recovery and now that we have that information, we can use it to test if specific pathways are actually essential to the process," Bloom says.

The researchers followed the lampreys' healing process and took samples from the brains and spinal cords at multiple points in time, from the first hours after injury until three months later when they were healed. They analyzed the material to determine which genes and signaling pathways were activated as compared to a non-injured lamprey.
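
That comparison boils down to asking, gene by gene, whether expression in injured animals differs from uninjured controls. Here is a toy sketch of such a screen; the counts are simulated, and real RNA-seq analyses add proper normalization and multiple-testing correction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_genes, n_injured, n_control = 100, 4, 4

# Stand-in expression matrices: rows are genes, columns are animals.
control = rng.normal(10.0, 1.0, (n_genes, n_control))
injured = rng.normal(10.0, 1.0, (n_genes, n_injured))
injured[:5] += 3.0  # pretend the first five genes are injury-induced

# Per gene: log2 fold change and a two-sample t-test across animals.
log2_fc = np.log2(injured.mean(axis=1) / control.mean(axis=1))
t, p = stats.ttest_ind(injured, control, axis=1)

hits = np.where((np.abs(log2_fc) > 0.3) & (p < 0.01))[0]
print("candidate injury-responsive genes:", hits)
```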

As expected, they found many genes in the spinal cord that change over time with recovery. Somewhat unexpectedly, they also discovered a number of injury-induced gene expression changes in the brain. "This reinforces the idea that the brain changes a lot after a spinal cord injury," says Morgan. "Most people are thinking, 'What can you do to treat the spinal cord itself?' but our data really support the idea that there's also a lot going on in the brain."

They also found that many of the genes associated with spinal cord healing are part of the Wnt signaling pathway, which plays a role in tissue development. "Furthermore, when we treated the animals with a drug that inhibits the Wnt signaling pathway, the animals never recovered their ability to swim," says Morgan. Future research will explore why the Wnt pathway seems particularly important in the healing process.

The paper is the result of a collaboration between Morgan, Bloom and other scientists, including Jeramiah Smith of the University of Kentucky and Joseph Buxbaum of the Icahn School of Medicine at Mount Sinai, both former Whitman Center Fellows. The collaboration was made possible by the MBL Whitman Center Fellowship program.

Read more at Science Daily

No evidence to support link between violent video games and behavior

Playing video games.
Researchers at the University of York have found no evidence to support the theory that video games make players more violent.

In a series of experiments with more than 3,000 participants, the team demonstrated that video game concepts do not 'prime' players to behave in certain ways and that increasing the realism of violent video games does not necessarily increase aggression in game players.

The dominant model of learning in games is built on the idea that exposing players to concepts, such as violence in a game, makes those concepts easier to use in 'real life'.

This is known as 'priming', and is thought to lead to changes in behaviour. Previous experiments on this effect, however, have so far provided mixed conclusions.

Researchers at the University of York ran experiments with far larger participant pools than previous studies and compared different types of gaming realism to explore whether more conclusive evidence could be found.

In one study, participants played a game where they had to either be a car avoiding collisions with trucks or a mouse avoiding being caught by a cat. Following the game, the players were shown various images, such as a bus or a dog, and asked to label them as either a vehicle or an animal.

Dr David Zendle, from the University's Department of Computer Science, said: "If players are 'primed' through immersing themselves in the concepts of the game, they should be able to categorise the objects associated with this game more quickly in the real world once the game had concluded.

"Across the two games we didn't find this to be the case. Participants who played a car-themed game were no quicker at categorising vehicle images, and indeed in some cases their reaction time was significantly slower."

In a separate, but connected study, the team investigated whether realism influenced the aggression of game players. Research in the past has suggested that the greater the realism of the game the more primed players are by violent concepts, leading to antisocial effects in the real world.

Dr Zendle said: "There are several experiments looking at graphic realism in video games, but they have returned mixed results. There are, however, other ways that violent games can be realistic, besides looking like the 'real world', such as the way characters behave for example.

"Our experiment looked at the use of 'ragdoll physics' in game design, which creates characters that move and react in the same way that they would in real life. Human characters are modelled on the movement of the human skeleton and how that skeleton would fall if it was injured."

The experiment compared player reactions to two combat games, one that used 'ragdoll physics' to create realistic character behaviour and one that did not, in an animated world that nevertheless looked real.

Following the game, the players were asked to complete word puzzles called 'word fragment completion tasks', in which the researchers expected more violent word associations to be chosen by those who played the game that employed more realistic behaviours.
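
Scoring a word fragment completion task amounts to counting how many of a player's completions are violent words. A toy version of that scoring is sketched below; the fragments and the scoring lexicon are invented for illustration.

```python
# Hypothetical scoring of a word fragment completion task.
VIOLENT = {"kill", "stab", "gun"}  # invented scoring lexicon

def violent_score(completions):
    """Fraction of a participant's completions that are violent words."""
    return sum(w.lower() in VIOLENT for w in completions) / len(completions)

ragdoll_player = ["kill", "kiss", "gut", "sun"]  # e.g. completed "ki__" as "kill"
control_player = ["kiln", "kiss", "gut", "sun"]
print(violent_score(ragdoll_player), violent_score(control_player))
```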

They compared the results of this experiment with another test of game realism, in which a single bespoke war game was modified to form two different games. In one of these games, enemy characters used realistic soldier behaviours, whilst in the other they did not.

Dr Zendle said: "We found that the priming of violent concepts, as measured by how many violent concepts appeared in the word fragment completion task, was not detectable. There was no difference in priming between the game that employed 'ragdoll physics' and the game that didn't, as well as no significant difference between the games that used 'real' and 'unreal' solider tactics.

"The findings suggest that there is no link between these kinds of realism in games and the kind of effects that video games are commonly thought to have on their players.

Read more at Science Daily

'Rainbow' dinosaur had iridescent feathers like a hummingbird

This is the holotype fossil of Caihong juji, including a line drawing of the fossil skeleton.
Scientists discovered a dinosaur fossil with feathers so well preserved that they were able to see the feathers' microscopic color-bearing structures. By comparing the shapes of those feather structures with the structures in modern bird feathers, they were able to infer that the new dino, Caihong juji ('rainbow with the big crest'), had iridescent rainbow feathers like a hummingbird.

Birds are the last remaining dinosaurs. They're also some of the most vibrantly colored animals on Earth. A new study in Nature Communications reveals that iridescent feathers go way back -- a newly discovered species of dinosaur from 161 million years ago had rainbow coloring.

Caihong juji was tiny, about the size of a duck, with a bony crest on its head and long, ribbon-like feathers. And, based on analysis of its fossilized feathers, the feathers on its head, wings, and tail were probably iridescent, with colors that shimmered and shifted in the light. Its name reflects its appearance -- in Mandarin, it means "rainbow with the big crest." The new species, which was first discovered by a farmer in northeastern China, was described by an international team of scientists led by Dongyu Hu, a professor in the College of Paleontology at the Shenyang Normal University in China.

"When you look at the fossil record, you normally only see hard parts like bone, but every once in a while, soft parts like feathers are preserved, and you get a glimpse into the past," says Chad Eliason, a postdoctoral researcher at The Field Museum and one of the study's authors. Eliason, who began work on the project as a postdoctoral fellow at the University of Texas at Austin, added, "The preservation of this dinosaur is incredible, we were really excited when we realized the level of detail we were able to see on the feathers."

When the scientists examined the feathers under powerful microscopes, they could see the imprints of melanosomes, the parts of cells that contain pigment. For the most part, the pigment that was once present was long gone, but the physical structure of the melanosomes remained. As it turns out, that was enough for scientists to be able to tell what color the feathers were.

That's because color isn't only determined by pigment, but by the structure of the melanosomes containing that pigment. Differently shaped melanosomes reflect light in different colors. "Hummingbirds have bright, iridescent feathers, but if you took a hummingbird feather and smashed it into tiny pieces, you'd only see black dust. The pigment in the feathers is black, but the shapes of the melanosomes that produce that pigment are what make the colors in hummingbird feathers that we see," explains Eliason.

The scientists were able to match the shapes of the pancake-shaped melanosomes in Caihong with the shapes of melanosomes in birds alive today. By finding birds with similarly shaped melanosomes, they were able to determine what kinds of colors Caihong may have flashed. The best matches: hummingbirds.
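
Conceptually, the matching step is a nearest-neighbour search in shape space: measure the fossil melanosomes, then find the modern bird whose melanosomes are closest. A sketch with invented measurements is below; real analyses use many shape descriptors and a database of thousands of samples.

```python
import numpy as np

# Invented database: melanosome (length, width) in microns for modern birds.
database = {
    "hummingbird (iridescent)": (1.1, 0.9),   # flat, platelet-like
    "crow (black)":             (1.3, 0.4),   # rod-like
    "duck (brown)":             (0.7, 0.5),   # rounder
}

fossil = np.array([1.0, 0.85])  # hypothetical Caihong measurement

# Nearest neighbour by Euclidean distance in (length, width) space.
best = min(database, key=lambda k: np.linalg.norm(fossil - np.array(database[k])))
print("closest modern analogue:", best)
```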

Colorful plumage is used in modern birds to attract mates -- the rainbow feathers of Caihong might be a prehistoric version of a peacock's iridescent tail. Caihong is the oldest known example of platelet-shaped melanosomes typically found in bright iridescent feathers.

It's also the earliest known animal with asymmetrical feathers -- a feature used by modern birds to steer when flying. Caihong couldn't fly, though -- its feathers were probably primarily used to attract mates and keep warm. While modern birds' asymmetrical feathers are on their wingtips, Caihong's were on its tail. "The tail feathers are asymmetrical but wing feathers not, a bizarre feature previously unknown among dinosaurs including birds," said co-author Xing Xu of the Chinese Academy of Science. "This suggests that controlling [flight] might have been first evolved with tail feathers during some kind of aerial locomotion."

But while Caihong's feathers were a first, it had other traits associated with much earlier species of dinosaurs, including the bony crest on its head. "This combination of traits is rather unusual," says co-author Julia Clarke of the University of Texas at Austin. "It has a velociraptor-type skull on the body of this very avian, fully feathered, fluffy kind of form."

This combination of old and new traits, says Eliason, is evidence of mosaic evolution, the concept of different traits evolving independently from each other. "This discovery gives us insight into the tempo of how fast these features were evolving," he adds.

For Eliason, the study also illuminates the value of big data. "To find the color of Caihong's feathers, we compared its melanosomes with a growing database of thousands of measurements of melanosomes found in modern birds," he says. It's also broadened his own research interests.

Read more at Science Daily

Possible cause of early colonial-era Mexican epidemic identified

Excavated structure at the northern edge of the Grand Plaza at Teposcolula-Yucundaa. Architectural investigations of the Grand Plaza resulted in the unexpected discovery of a large epidemic cemetery associated with the 1545-1550 cocoliztli epidemic. The cemetery was found to contain numerous mass burials, attesting to the catastrophic nature of the epidemic.
An international team, led by researchers from the Max Planck Institute for the Science of Human History (MPI-SHH), Harvard University and the Mexican National Institute of Anthropology and History (INAH), has used ancient DNA and a new data processing program to identify the possible cause of a colonial-era epidemic in Mexico. Many large-scale epidemics spread through the New World during the 16th century but their biological causes are difficult to determine based on symptoms described in contemporaneous historical accounts. In this study, published in Nature Ecology and Evolution, scientists made use of new methods in ancient DNA research to identify Salmonella enterica Paratyphi C, a pathogen that causes enteric fever, in the skeletons of victims of the 1545-1550 cocoliztli epidemic in Mexico.

After European contact, dozens of epidemics swept through the Americas, devastating New World populations. Although many first-hand accounts of these epidemics were recorded, in most cases it has been difficult, if not impossible, for researchers to definitively identify their causes based on historical descriptions of their symptoms alone. In some cases, for example, the symptoms caused by infection of different bacteria or viruses might be very similar, or the symptoms presented by certain diseases may have changed over the past 500 years. Consequently, researchers have hoped that advancements in ancient DNA analysis and other such approaches might provide a breakthrough in identifying the unknown causes of past epidemics.

The first direct evidence for one of the potential causes of the 1545-1550 cocoliztli epidemic

Of all the colonial New World epidemics, the unidentified 1545-1550 "cocoliztli" epidemic was among the most devastating, affecting large parts of Mexico and Guatemala, including the Mixtec town of Teposcolula-Yucundaa, located in Oaxaca, Mexico. Archaeological excavations at the site have unearthed the only known cemetery linked to this particular outbreak to date. "Given the historical and archaeological context of Teposcolula-Yucundaa, it provided us with a unique opportunity to address the question regarding the unknown microbial causes responsible for this epidemic," explains Åshild J. Vågene of the MPI-SHH, co-first author of the study. After the epidemic, the city of Teposcolula-Yucundaa was relocated from the top of a mountain to the neighboring valley, leaving the epidemic cemetery essentially untouched prior to recent archaeological excavations. These circumstances made Teposcolula-Yucundaa an ideal site to test a new method to search for direct evidence of the cause of the disease.

The scientists analyzed ancient DNA extracted from 29 skeletons excavated at the site, and used a new computational program to characterize the ancient bacterial DNA. This technique allowed the scientists to search for all bacterial DNA present in their samples, without having to specify a particular target beforehand. This screening method revealed promising evidence of S. enterica DNA traces in 10 of their samples. Subsequent to this initial finding, a DNA enrichment method specifically designed for this study was applied. With this, the scientists were able to reconstruct full S. enterica genomes, and 10 of the individuals were found to contain a subspecies of S. enterica that causes enteric fever.

This is the first time scientists have recovered molecular evidence of a microbial infection from this bacterium using ancient material from the New World. Enteric fever, of which typhoid fever is the best known variety today, causes high fevers, dehydration, and gastro-intestinal complications. Today, the disease is considered a major health threat around the world, having caused an estimated 27 million illnesses in the year 2000 alone. However, little is known about its past severity or worldwide prevalence.
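
Stripped to its essentials, the screening idea is to compare every sequenced fragment against many candidate references at once instead of one pre-chosen target. A toy k-mer version of that logic follows; the sequences are invented, and the actual study used a dedicated ancient-DNA analysis pipeline.

```python
# Toy reference panel: short stand-in sequences for candidate pathogens.
REFS = {
    "Salmonella enterica": "ATGGCTAGCTAGGATCCGATT",
    "Mycobacterium sp.":   "TTGACCGGTATCCATGGAAGC",
}
K = 8

def kmers(seq, k=K):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

ref_kmers = {name: kmers(seq) for name, seq in REFS.items()}

def screen(read):
    """Count k-mer hits of one ancient-DNA read against every reference."""
    rk = kmers(read)
    return {name: len(rk & ks) for name, ks in ref_kmers.items()}

print(screen("GCTAGCTAGGATCC"))  # most hits land on the Salmonella reference
```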

A new tool in discovering past diseases

"A key result of this study is that we were successful in recovering information about a microbial infection that was circulating in this population, and we did not need to specify a particular target in advance," explains Alexander Herbig, also of the MPI-SHH and co-first author of the study. In the past, scientists usually targeted a particular pathogen or a small set of pathogens, for which they had prior indication.

Read more at Science Daily

Jan 15, 2018

Evolution acceptance in children linked to aptitude, not belief

In contrast to adults, acceptance of evolution in schoolchildren in the UK is linked to their scientific aptitude rather than conflicts with belief systems, say scientists at the Milner Centre for Evolution at the University of Bath.

Previous studies in the USA have shown that adults who strongly reject evolution are often highly educated but reject the scientific consensus owing to conflicts with their belief systems. This phenomenon is also seen for other emotive subjects such as climate change and vaccination, where some people reject the scientific consensus despite the large body of evidence supporting it.

Does the same clash of beliefs and evidence prevent effective learning in the classroom? Scientists at the Milner Centre for Evolution found that, surprisingly, this was not the case for UK schoolchildren. They conducted a large controlled trial of 1,200 students aged 14-16 in 70 classes from secondary schools across the south and south west of the UK, in which students were tested for acceptance and understanding of evolution and, as a control subject, genetics.

They found that non-acceptors of evolution tended to be in the foundation science classes where students' understanding of science generally was weak, their understanding of evolution being just one part of that.

The study also asked whether the non-acceptors' ability to improve their understanding of evolution through teaching was any weaker than their ability to improve their understanding of the less emotive but related topic of basic genetics.

The non-acceptor students had lower prior understanding of both evolution and genetics, and they responded poorly not only to the teaching of evolution but, importantly, also to genetics. This indicates they were less likely to accept evolution because they struggled to understand the science, rather than because of psychological conflicts with their beliefs.
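
The study's logic hinges on comparing improvement on evolution with improvement on the control topic, genetics, within each group: if non-acceptors lag on both, the deficit looks like general aptitude rather than belief conflict. Schematically, with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(4)

def gains(pre_mean, post_mean, n=100):
    """Simulated per-student improvement between pre- and post-tests (0-100)."""
    return rng.normal(post_mean, 8, n) - rng.normal(pre_mean, 8, n)

groups = {
    "acceptors":     {"evolution": gains(55, 70), "genetics": gains(55, 70)},
    "non-acceptors": {"evolution": gains(40, 48), "genetics": gains(40, 48)},
}

# If non-acceptors improve equally poorly on *both* topics, the deficit
# looks like general scientific aptitude rather than a belief conflict
# specific to evolution.
for group, topics in groups.items():
    print(group, {t: round(v.mean(), 1) for t, v in topics.items()})
```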

The researchers concluded that the current system of science teaching was not optimal for the lower aptitude students.

Professor Laurence Hurst, Director of the Milner Centre for Evolution, led the study. He said: "Previous studies in the USA found strong rejecters of evolution were often highly intelligent and understood concepts but were able to pick holes in the data to match their belief systems.

"So we were surprised to find that in UK schoolchildren there was no evidence of psychological conflict in the low acceptors -- it was simply that they were unlikely to accept evolution if they were struggling to understand the concepts.

"It's unclear as to why our study on children showed contrasting results to previous studies on adults.

"It could be that there is no psychological conflict because younger people's belief systems are not yet fully formed, or alternatively the students avoid the conflicts by the taking the attitude that religious and scientific acceptance are compatible. We found some evidence for the latter.

"Also there are different cultural demographics in UK compared with the USA in terms of religious beliefs and acceptance of science. People tend to adopt the same mindset of folks around them. In the UK this is mostly secular and accepting of the importance of evidence."

Dr Rebecca Mead, a former teacher and first author of the paper, added: "Our findings tell us we need to teach science differently -- the way we are currently teaching science is leaving some students behind.

"Perhaps students should instead be taught according to learning styles rather than ability, to help all students understand the basic concepts of science."

The study included schools from both the state and private systems and comprised a large breadth of social, religious and economic demographics.

Read more at Science Daily

Swiss archaeologist discovers the earliest tomb of a Scythian prince

View of the burial mound Tunnug 1 (Arzhan 0). While the other kurgans in the region were constructed on a terrace, Tunnug 1 (Arzhan 0) is located deep in a swamp.
Deep in a swamp in the Russian republic of Tuva, SNSF-funded archaeologist Gino Caspari has discovered an undisturbed Scythian burial mound. All the evidence suggests that this is not only the largest Scythian princely tomb in South Siberia, but also the earliest -- and that it may be harbouring some outstandingly well-preserved treasures.

Gino Caspari made the most significant find in his career to date not with a shovel, but at a computer. A recipient of Swiss National Science Foundation (SNSF) funding, the archaeologist discovered a circular structure on high-resolution satellite images of the Uyuk River valley in Siberia. An initial trial dig carried out this summer by the Bern University scientist together with the Russian Academy of Sciences and the Hermitage Museum confirmed his suspicion: the structure is a kurgan, a Scythian princely tomb.
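
Spotting a circular structure in overhead imagery is a classic computer-vision task. As a purely illustrative sketch, here is OpenCV's Hough circle transform run on a synthetic stand-in image; the image and all parameters are placeholders, not anything Caspari used.

```python
import cv2
import numpy as np

# Synthetic "satellite image": a dark field with one bright ring drawn on it.
img = np.zeros((400, 400), dtype=np.uint8)
cv2.circle(img, (200, 200), 80, 255, thickness=5)
img = cv2.GaussianBlur(img, (9, 9), 2)

# Hough circle transform: accumulates votes for circle centres and radii.
circles = cv2.HoughCircles(
    img, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
    param1=100, param2=30, minRadius=40, maxRadius=120,
)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"circular structure at ({x}, {y}), radius ~{r} px")
```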

Looking back at the beginnings

Working with a Swiss-Russian team, Caspari was able to prove that the burial mound -- referred to as Tunnug 1 (or Arzhan 0) -- was similar in construction to the kurgan Arzhan 1 located only ten kilometres away to the northeast. Arzhan 1 had long been regarded as the earliest Scythian princely tomb in the region, which is also known as the "Siberian Valley of Kings" owing to the numerous kurgans found there. The earliest princely tombs consist of a stone packing with a circular arrangement of chambers. The walls of the chambers are made of larch logs. Scythian burial objects typically include weapons, horses' harnesses and objects decorated in the so-called animal style.

Wooden beams found by Caspari during the test excavation date back to the 9th century BC, predating Arzhan 1, which was built at the turn of the 9th to the 8th century BC and excavated in the 1970s. "We have a great opportunity here," says a delighted Caspari, commenting on the results of the trial dig published in the current issue of Archaeological Research in Asia.

"Archaeological methods have become considerably more sophisticated since the 1970s. Today we have completely different ways of examining material to find out more about the transition from the Late Bronze Age to the Iron Age," remarks the SNSF-funded researcher. He also stresses that the way we look at prehistoric times is changing radically thanks to genetics, isotope analysis and geophysical methods as well as developments in geographic information systems and remote sensing.

Protective armour of ice

The Arzhan 0 burial mound is in an inaccessible location amid swampy terrain, which also makes it harder for grave robbers to reach. "The kurgan is five arduous hours by off-road vehicle from the nearest settlement," Caspari points out. As it may never have been disturbed, it could contain similar treasures to Arzhan 2. Between 2001 and 2004, a German team of archaeologists discovered an undisturbed burial chamber in Arzhan 2 containing the richest collection of burial artefacts ever found in the Eurasian steppe. Over a thousand gold objects had been placed with the two corpses in the tomb's main chamber, in addition to magnificently adorned weapons, pots and horses with exquisite harnesses. Made of solid gold, the necklace of the Scythian prince from Arzhan 2 alone weighs two kilos. But the date of that burial is put at the 7th century BC, i.e. well into the Iron Age.

Read more at Science Daily