Dec 16, 2017

Mapping the evolutionary history of a sugar gene

Beef cattle in pasture
Around two million years ago, a genetic change occurred that differentiated humans from most other primates, one that protected humans from certain diseases yet made red meat a health risk.

At that point in human evolution, the gene known as CMAH, which allows for the synthesis of a sugar called Neu5Gc, became inactive. This sugar is present in red meats, some fish and dairy products. When humans consume an animal that still carries the gene, the body mounts an immune reaction to the foreign sugar, which can cause inflammation, arthritis, and cancer.

University of Nevada, Reno researchers, led by College of Science Assistant Professor David Alvarez-Ponce, analyzed 322 animal genome sequences from the National Center for Biotechnology Information, looking for animals with active CMAH genes. They placed the data from the 322 genomes onto an evolutionary "tree" to determine when in each lineage's history the CMAH gene became inactive, or "turned off." This helps explain why certain species have an active CMAH gene while closely related species don't.
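
The tree-placement step can be illustrated with a short sketch. The Python code below is a hypothetical toy example, not the lab's actual pipeline: given a miniature phylogeny and a made-up list of species with an active CMAH gene, it applies a simple Dollo-parsimony rule (a gene can be lost along a branch but not regained) to flag the branches where the loss most likely occurred.

    # Toy sketch (hypothetical data, not the study's pipeline): place CMAH-loss
    # events on a small phylogeny under Dollo parsimony, i.e. the gene can be
    # lost along a branch but never regained.

    tree = ("amniotes",
            ("mammals", ("human",), ("cow",)),
            ("reptiles_and_birds", ("chicken",), ("anole_lizard",)))

    # Species assumed (for illustration only) to carry an active CMAH gene.
    active = {"cow", "anole_lizard"}

    def place_losses(node, losses):
        """Return True if any leaf below `node` carries the active gene, and
        record a loss on each branch leading to a clade with no active leaves
        whose parent clade does contain active leaves."""
        name, *children = node
        if not children:                          # leaf node
            return name in active
        child_states = [place_losses(child, losses) for child in children]
        if any(child_states):
            for child, state in zip(children, child_states):
                if not state:                     # loss inferred on this branch
                    losses.append(f"{name} -> {child[0]}")
            return True
        return False

    losses = []
    place_losses(tree, losses)
    print(losses)   # ['mammals -> human', 'reptiles_and_birds -> chicken']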

The Alvarez-Ponce lab specializes in studying the evolution of genes and genomes using bioinformatics. In the few fish investigated so far, concentrations of the Neu5Gc sugar are too low to be a medical concern, but concentrations are significantly higher in fish eggs, better known as caviar. As master's student Sateesh Peri puts it, "one of the most expensive foods is among the ones with the highest amount of toxic Neu5Gc."

Birds also lack active CMAH genes, implying that consuming animals like chicken, turkey, and geese does not carry the harmful side effects. Reptiles, too, lack the gene, with the exception of one species of lizard. The presence of the gene in this lizard challenges earlier proposals that the gene was lost in a common ancestor of reptiles and birds.

Aside from dietary issues, the CMAH gene is also a major factor in whether a transplanted organ from an animal will be accepted by a human. When organs from an animal carrying the CMAH gene are transplanted into a human, who lacks the gene, the body may reject the organ because of the presence of the Neu5Gc sugar.

"Inactivation of CMAH during human evolution might have freed humans from a number of pathogens," Alvarez-Ponce said. "For instance, there's this certain type of malaria that requires Neu5Gc for infection -- other primates are susceptible to it, but not humans."

The presence of the CMAH gene helps determine which animals humans should avoid eating (or eat only in moderation) and which animals' bites humans should avoid. If an animal has the CMAH gene, its meat contains Neu5Gc and, eaten in excess, could promote inflammation, arthritis and cancer, though consuming red meat in moderation is considered fine. If an animal lacks the CMAH gene, it is more likely to carry pathogens that attach to Neu5Ac (the precursor of Neu5Gc) and that can therefore also affect humans.

Read more at Science Daily

Engineers program tiny robots to move, think like insects

RoboBees manufactured by the Harvard Microrobotics Lab have a 3 centimeter wingspan and weigh only 80 milligrams. Cornell engineers are developing new programming that will make them more autonomous and adaptable to complex environments.
While engineers have had success building tiny, insect-like robots, programming them to behave autonomously like real insects continues to present technical challenges. A group of Cornell engineers has been experimenting with a new type of programming that mimics the way an insect's brain works, which could soon have people wondering if that fly on the wall is actually a fly.

The computer processing power needed for a robot to sense a gust of wind using tiny hair-like metal probes embedded in its wings, adjust its flight accordingly, and plan its path as it attempts to land on a swaying flower would require it to carry a desktop-size computer on its back. Silvia Ferrari, professor of mechanical and aerospace engineering and director of the Laboratory for Intelligent Systems and Controls, sees the emergence of neuromorphic computer chips as a way to shrink a robot's payload.

Unlike traditional chips that process combinations of 0s and 1s as binary code, neuromorphic chips process spikes of electrical current that fire in complex combinations, similar to how neurons fire inside a brain. Ferrari's lab is developing a new class of "event-based" sensing and control algorithms that mimic neural activity and can be implemented on neuromorphic chips. Because the chips require significantly less power than traditional processors, they allow engineers to pack more computation into the same payload.
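
As a rough illustration of what "event-based" means here, the Python sketch below (a toy model with assumed parameters, not Ferrari's algorithms) simulates a single leaky integrate-and-fire neuron: rather than streaming a sensor value at every time step, it emits sparse spike events only when the accumulated input crosses a threshold, which is what lets downstream control logic stay idle most of the time.

    # Toy leaky integrate-and-fire neuron (illustrative parameters only): the
    # output is a sparse train of spike events rather than a dense stream of
    # numbers, the property that makes neuromorphic chips power-efficient.
    def lif_spikes(inputs, leak=0.9, threshold=1.0):
        v = 0.0                       # membrane potential
        events = []
        for t, x in enumerate(inputs):
            v = leak * v + x          # leaky integration of the input current
            if v >= threshold:        # potential crosses the firing threshold
                events.append(t)      # emit a spike event at this time step
                v = 0.0               # reset after firing
        return events

    # Simulated "wind gust" sensor signal: quiet baseline with a brief burst.
    signal = [0.05] * 50 + [0.6] * 10 + [0.05] * 50
    spikes = lif_spikes(signal)
    print(f"{len(spikes)} spike events at steps {spikes}")
    # A controller would react only at these event times, not at every step.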

Ferrari's lab has teamed up with the Harvard Microrobotics Laboratory, which has developed an 80-milligram flying RoboBee outfitted with a number of vision, optical flow and motion sensors. While the robot currently remains tethered to a power source, Harvard researchers are working on eliminating the restraint with the development of new power sources. The Cornell algorithms will help make RoboBee more autonomous and adaptable to complex environments without significantly increasing its weight.

"Getting hit by a wind gust or a swinging door would cause these small robots to lose control. We're developing sensors and algorithms to allow RoboBee to avoid the crash, or if crashing, survive and still fly," said Ferrari. "You can't really rely on prior modeling of the robot to do this, so we want to develop learning controllers that can adapt to any situation."

To speed development of the event-based algorithms, a virtual simulator was created by Taylor Clawson, a doctoral student in Ferrari's lab. The physics-based simulator models the RoboBee and the instantaneous aerodynamic forces it faces during each wing stroke. As a result, the model can accurately predict RoboBee's motions during flights through complex environments.
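
A much-simplified sense of what such a simulator computes is given by the sketch below: a quasi-steady toy model in Python (my own illustrative constants, not the Cornell simulator) in which the instantaneous aerodynamic force scales with the square of the wing's flapping speed and the body's vertical motion is integrated from it.

    import math

    # Toy quasi-steady flapping-flight model (illustrative constants, not the
    # Cornell simulator): lift at each instant scales with the square of the
    # wing's angular speed, and the body's vertical motion is integrated from it.
    MASS = 80e-6      # kg, RoboBee-scale body mass
    G = 9.81          # m/s^2, gravity
    FREQ = 120.0      # Hz, assumed wingbeat frequency
    K_AERO = 2.8e-9   # lumped aerodynamic coefficient, tuned so mean lift ~ weight
    DT = 1e-5         # s, integration time step

    def simulate(duration=0.05):
        z, vz, t = 0.0, 0.0, 0.0              # altitude, vertical speed, time
        while t < duration:
            stroke_rate = 2 * math.pi * FREQ * math.cos(2 * math.pi * FREQ * t)
            lift = K_AERO * stroke_rate ** 2  # quasi-steady: F ~ (wing speed)^2
            vz += (lift / MASS - G) * DT      # integrate net vertical acceleration
            z += vz * DT
            t += DT
        return z, vz

    z, vz = simulate()
    print(f"altitude after 50 ms: {z * 1000:.2f} mm, vertical speed: {vz:.3f} m/s")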

"The simulation is used both in testing the algorithms and in designing them," said Clawson, who helped has successfully developed an autonomous flight controller for the robot using biologically inspired programming that functions as a neural network. "This network is capable of learning in real time to account for irregularities in the robot introduced during manufacturing, which make the robot significantly more challenging to control."

Aside from greater autonomy and resiliency, Ferrari said her lab plans to help outfit RoboBee with new micro devices such as a camera, expanded antennae for tactile feedback, contact sensors on the robot's feet and airflow sensors that look like tiny hairs.

"We're using RoboBee as a benchmark robot because it's so challenging, but we think other robots that are already untethered would greatly benefit from this development because they have the same issues in terms of power," said Ferrari.

One robot that is already benefiting is the Harvard Ambulatory Microrobot, a four-legged machine just 17 millimeters long and weighing less than 3 grams. It can scamper at a speed of 0.44 meters per second, but Ferrari's lab is developing event-based algorithms that will help complement the robot's speed with agility.

Read more at Science Daily

Dec 15, 2017

To sleep or not: Researchers explore complex genetic network behind sleep duration

Fruit fly
Scientists have identified differences in a group of genes they say might help explain why some people need a lot more sleep -- and others less -- than most. The study, conducted using fruit fly populations bred to model natural variations in human sleep patterns, provides new clues to how genes for sleep duration are linked to a wide variety of biological processes.

Researchers say a better understanding of these processes could lead to new ways to treat sleep disorders such as insomnia and narcolepsy. Led by scientists with the National Heart, Lung, and Blood Institute (NHLBI), part of the National Institutes of Health (NIH), the study will be published on Dec. 14 in PLOS Genetics.

"This study is an important step toward solving one of the biggest mysteries in biology: the need to sleep," says study leader Susan Harbison, Ph.D., an investigator in the Laboratory of Systems Genetics at NHLBI. "The involvement of highly diverse biological processes in sleep duration may help explain why the purpose of sleep has been so elusive." Scientists have known for some time that, in addition to our biological clocks, genes play a key role in sleep and that sleep patterns can vary widely. But the exact genes controlling the duration of sleep and the biological processes that are linked to these genes have remained unclear.

To learn more, scientists artificially bred 13 generations of wild fruit flies to produce flies that were either long sleepers (sleeping 18 hours each day) or short sleepers (sleeping 3 hours each day). The scientists then compared genetic data between the long and short sleepers and identified 126 differences among 80 genes that appear to be associated with sleep duration. They found that these genetic differences were tied to several important developmental and cell signaling pathways. Some of the genes identified have known functions in brain development, as well as roles in learning and memory, the researchers said.

"What is particularly interesting about this study is that we created long- and short-sleeping flies using the genetic material present in nature, as opposed to the engineered mutations or transgenic flies that many researchers in this field are using," Harbison said. "Until now, whether sleep at such extreme long or short duration could exist in natural populations was unknown."

The researchers also found that the lifespan of the naturally long and short sleepers did not differ significantly from that of flies with normal sleep patterns. This suggests that there are few physiological consequences -- whether ill effects or benefits -- of being an extreme long or short sleeper, they said.

Read more at Science Daily

Engineers create plants that glow

Illumination of a book ('Paradise Lost,' by John Milton) by nanobionic light-emitting plants (two 3.5-week-old watercress plants). The book and the plants were placed in front of reflective paper to direct more of the plants' light onto the book pages.
Imagine that instead of switching on a lamp when it gets dark, you could read by the light of a glowing plant on your desk.

MIT engineers have taken a critical first step toward making that vision a reality. By embedding specialized nanoparticles into the leaves of a watercress plant, they induced the plants to give off dim light for nearly four hours. They believe that, with further optimization, such plants will one day be bright enough to illuminate a workspace.

"The vision is to make a plant that will function as a desk lamp -- a lamp that you don't have to plug in. The light is ultimately powered by the energy metabolism of the plant itself," says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT and the senior author of the study

This technology could also be used to provide low-intensity indoor lighting, or to transform trees into self-powered streetlights, the researchers say.

MIT postdoc Seon-Yeong Kwak is the lead author of the study, which appears in the journal Nano Letters.

Nanobionic plants


Plant nanobionics, a new research area pioneered by Strano's lab, aims to give plants novel features by embedding them with different types of nanoparticles. The group's goal is to engineer plants to take over many of the functions now performed by electrical devices. The researchers have previously designed plants that can detect explosives and communicate that information to a smartphone, as well as plants that can monitor drought conditions.

Lighting, which accounts for about 20 percent of worldwide energy consumption, seemed like a logical next target. "Plants can self-repair, they have their own energy, and they are already adapted to the outdoor environment," Strano says. "We think this is an idea whose time has come. It's a perfect problem for plant nanobionics."

To create their glowing plants, the MIT team turned to luciferase, the enzyme that gives fireflies their glow. Luciferase acts on a molecule called luciferin, causing it to emit light. Another molecule called co-enzyme A helps the process along by removing a reaction byproduct that can inhibit luciferase activity.

The MIT team packaged each of these three components into a different type of nanoparticle carrier. The nanoparticles, which are all made of materials that the U.S. Food and Drug Administration classifies as "generally recognized as safe," help each component get to the right part of the plant. They also prevent the components from reaching concentrations that could be toxic to the plants.

The researchers used silica nanoparticles about 10 nanometers in diameter to carry luciferase, and they used slightly larger particles of the polymers PLGA and chitosan to carry luciferin and coenzyme A, respectively. To get the particles into plant leaves, the researchers first suspended the particles in a solution. Plants were immersed in the solution and then exposed to high pressure, allowing the particles to enter the leaves through tiny pores called stomata.

Particles releasing luciferin and coenzyme A were designed to accumulate in the extracellular space of the mesophyll, an inner layer of the leaf, while the smaller particles carrying luciferase enter the cells that make up the mesophyll. The PLGA particles gradually release luciferin, which then enters the plant cells, where luciferase performs the chemical reaction that makes luciferin glow.

The researchers' early efforts at the start of the project yielded plants that could glow for about 45 minutes, which they have since improved to 3.5 hours. The light generated by one 10-centimeter watercress seedling is currently about one-thousandth of the amount needed to read by, but the researchers believe they can boost the light emitted, as well as the duration of light, by further optimizing the concentration and release rates of the components.

Plant transformation

Previous efforts to create light-emitting plants have relied on genetically engineering plants to express the gene for luciferase, but this is a laborious process that yields extremely dim light. Those studies were performed on tobacco plants and Arabidopsis thaliana, which are commonly used for plant genetic studies. However, the method developed by Strano's lab could be used on any type of plant. So far, they have demonstrated it with arugula, kale, and spinach, in addition to watercress.

For future versions of this technology, the researchers hope to develop a way to paint or spray the nanoparticles onto plant leaves, which could make it possible to transform trees and other large plants into light sources.

"Our target is to perform one treatment when the plant is a seedling or a mature plant, and have it last for the lifetime of the plant," Strano says. "Our work very seriously opens up the doorway to streetlamps that are nothing but treated trees, and to indirect lighting around homes."

Read more at Science Daily

Hope for one of the world's rarest primates: First census of Zanzibar Red Colobus monkey

A team of WCS scientists recently completed the first-ever range-wide population census of the Zanzibar red colobus monkey (Piliocolobus kirkii), an endangered primate found only on the Zanzibar archipelago off the coast of East Africa.

The good news: there are more than three times as many Zanzibar red colobus monkeys (more than 5,800 individual animals) as previously thought, and many more monkeys living within protected areas than outside of them. The bad news: survivorship of young animals is very low, the species is now extinct in four areas, the forest habitat on which the primates and other species depend is rapidly being cleared for agriculture and tourism development projects, and hunting is common.

The paper titled "Zanzibar's endemic red colobus Piliocolobus kirkii: first systematic and total assessment of population, demography and distribution" has been published in the online version of the journal Oryx. The authors are: Tim R.B. Davenport; Said A. Fakih; Sylvanos P. Kimiti; Lydia U. Kleine; Lara S. Foley; and Daniela W. De Luca.

"Scientists have known about the Zanzibar red colobus monkey for 150 years, yet this is the first systematic study of this poorly understood species across its entire range," said Dr. Tim Davenport, Director of WCS's Tanzania Country Program and the lead author of the study. "The systematic assessment redefines almost everything we know about this amazing animal, and is now guiding effective management strategies for this species."

Seeking to gain a better understanding of the status and ecological needs of the Zanzibar red colobus monkey, the WCS team of researchers spent two years (4,725 hours spent in the field) searching for and observing the arboreal primates. The surveys occurred both within and outside of protected areas on the main Zanzibar island of Unguja, and the scientists employed a new sweep census technique to collect data on group sizes and structures, demographics, and locations with the help of GPS devices.

The results of the study provided researchers with proof that Zanzibar's protected areas are, to some extent, working. Some 69 percent of the population of Zanzibar red colobus monkeys live inside Unguja's protected area network, and monkey groups found within protected areas boasted both higher average group sizes and more females per group.

Conversely, the assessment also highlighted challenges for conservation, especially for the more than 30 percent of the monkey population living outside of protected areas. The scientists discovered that four of the forests previously known to contain Zanzibar red colobus monkeys no longer do. Four other locations were found to contain only a single family group each, and such isolated groups are unlikely to survive.

One of the largest threats to the Zanzibar red colobus monkey is deforestation. Forests on Zanzibar's main island of Unguja are being lost at a rate of more than 19 square kilometers per year due to agricultural activities, residential development, and human population growth. The hunting of monkeys for food and retaliation for crop raiding is also a concern.

The authors recommend creating a new protected area to further safeguard the Zanzibar red colobus monkey as well as increasing primate and forest tourism operations. The team has also suggested making the primate the official national animal of Zanzibar.

"The Zanzibar red colobus monkey is unique to Zanzibar and could be a wonderful example of how conservation efforts can succeed in protecting both wildlife and habitat, which in turn benefits communities" added Davenport, who recently presented the study's results to the Zanzibar government. "The species could serve as a fitting symbol for both Zanzibar and the government's foresight in wildlife management."

Read more at Science Daily

Effects of climate change could accelerate by mid-century

Climate change effects are expected to become more apparent by the middle of the century.
Nature lovers, beware: environmental models used by researchers at the University of New Hampshire show that the effects of climate change could be much stronger by the middle of the 21st century, and a number of ecosystem and weather conditions could decline even further beyond that. If carbon dioxide emissions continue at the current rate, the researchers report, future scenarios include not only a significant decrease in snow days, but also an increase in the number of summer days over 90 degrees and a drastic decline in stream habitat, with 40 percent no longer suitable for cold-water fish.

"While this research was applied to New Hampshire, the approach can be generally applied, and a number of things that people care about will worsen due to climate change," said Wilfred Wollheim, associate professor in the department of natural resources and the environment and one of the study's authors. "For example, right now the average number of snow days is 60 per year, but in 20 to 30 years the models show that the number of snow days could be as low as 18 days per year."

The research, published recently in the journal Ecology and Society, used models benchmarked to field measurements to evaluate the Merrimack River watershed in New Hampshire. Along with a decrease in winter snow cover, other potential impacts include up to 70 hot summer days per year with temperatures of 90 degrees or more by the end of the century, a greater probability of flooding, a considerable loss of cold-water fish habitat, and accelerated nitrogen inputs to coastal areas that could lead to eutrophication -- an excess of nutrients that can pollute the water and deplete fish species. The researchers say the biggest impacts will be around urban areas, near where people live.

"Land use and population growth interacting with climate change are also important drivers," said Wollheim. "These models can help guide efforts to make plans to adapt to the changing climate. Alterations in land use policy could reduce these impacts. In particular, prevention of sprawl and investment in storm and waste water infrastructure would further maintain more ecosystem services. Implementing policies that reduce greenhouse gas emissions are essential to limit even further changes."

Read more at Science Daily

Dec 14, 2017

Ancient genetic mutation helps explain origin of some human organs

DNA
A neutral genetic mutation -- a fluke of the evolutionary process with no apparent biological purpose -- that appeared more than 700 million years ago could help explain the origin of complex organs and structures in human beings and other vertebrates, according to an article published in Nature Communications by a team led by CRG group leader Manuel Irimia, university professor Jordi García-Fernàndez of the Faculty of Biology and the Institute of Biomedicine of the University of Barcelona (IBUB), and Maria Ina Arnone (Anton Dohrn Zoological Station, Italy).

Specifically, this mutation, which likely occurred very early in evolution after the separation of our group from that of sea anemones, affected a gene of the Fgfr (fibroblast growth factor receptors) family. Curiously, this genetic change triggered, millions of years later, the connection between two gene regulatory networks (those controlled by ESRP and by Fgfr), which became key for the origin of many vertebrate organs and structures (lungs, forelimbs and inner ear).

The Nature Communications article, whose lead author is Demian Burguera (CRG and UB-IBUB), takes its approach from the field of evolutionary developmental biology (evo-devo). This is a relatively new paradigm in the study of evolution, which compares the embryonic development of different living beings to understand how their adult forms have changed, giving rise to new species.

From chance mutation to formation of organs in vertebrates

A gene can code for different proteins -- with diverse functions -- through the genetic mechanism of alternative splicing (the cutting and rejoining of RNA transcripts). In some human cell types, this process is controlled by a family of regulatory proteins called ESRP. They act as a molecular switch: when these regulatory proteins are present, a group of genes involved in morphogenesis and cell-cell interactions generates specific protein variants; when they are absent, different protein variants are produced. This molecular switch controls how cells behave and interact with their neighbors during embryonic development. However, the evolutionary importance of this mechanism was unknown.

"We have studied the functions of ESRP genes during the embryogenesis of various animals. Our results suggest that these genes were part of an ancient genetic machinery, shared by animals as diverse as fish, sea urchins and ourselves, that controls the integration of certain cells into the linings of developing organs. This is a fundamental step in the formation of some organs, and it is the reverse of a process that is central to cancer metastasis, by which cells leave the tumor to colonize other parts of the body" explains Manuel Irimia, group leader at the Centre for Genomic Regulation (CRG).

The article published in Nature Communications shows how the same regulatory genes have been used to generate different organs and biological structures in living beings during the evolutionary process. In the same vein, the article describes how a chance "mistake" -- an apparently meaningless mutation that took place over 700 million years ago -- became the molecular driver for complex morphological developments in a number of vertebrates (including the human species).

"Clearly, the most exceptional result of the work is the proof of how important serendipity is for evolution. It is surprising to find that a single gene (ESRP), through its ancestral biological role (cell adherence and motility) has been used throughout the animal scale for very different purposes: from the immune system of an echinoderm to the lips, lungs or inner ears of humans," states professor Jordi Garcia-Fernàndez, of the University of Barcelona's Department of Genetics, Microbiology and Statistics and the IBUB.

Read more at Science Daily

Spaghetti-like, DNA 'noodle origami' the new shape of things to come for nanotechnology

A DNA origami with an emoji-like smiley face.
For the past few decades, scientists have been inspired by the blueprint of life, DNA, as the shape of things to come for nanotechnology.

This burgeoning field is called DNA origami. Scientists borrowed its moniker from the paper artists who conjure up birds, flowers and planes by imaginatively folding a single sheet of paper.

Similarly, DNA origami scientists are dreaming up a variety of shapes -- at a scale one thousand times smaller than a human hair -- that they hope will one day revolutionize computing, electronics and medicine.

Now, a team of Arizona State and Harvard scientists has made a major new advance in DNA nanotechnology. Dubbed "single-stranded origami," their new strategy uses one long, thin, noodle-like strand of DNA, or its chemical cousin RNA, that can self-fold -- without even a single knot -- into the largest, most complex structures to date.

And the strands forming these structures can be made inside living cells or with enzymes in a test tube, giving scientists the potential to plug and play with new designs and functions for nanomedicine -- like tiny nanobots playing doctor and delivering drugs within cells to the site of injury.

"I think this is an exciting breakthrough, and a great opportunity for synthetic biology as well," said Hao Yan, a co-inventor of the technology, director of the ASU Biodesign Institute's Center for Molecular Design and Biomimetics, and the Milton Glick Professor in the School of Molecular Sciences.

"We are always inspired by nature's designs to make information-carrying molecules that can self-fold into the nanoscale shapes we want to make,"

As proof of concept, they've pushed the envelope to make emoji-like smiley faces, hearts and triangles -- 18 shapes in total -- that significantly expand the design space and material scalability of so-called "bottom-up" nanotechnology.

Size matters

To date, DNA nanotechnology scientists have had to rely on two main methods for making spatially addressable structures with finite dimensions.

The first was molecular bricks: small, short pieces of DNA that fold together to make a single structure. The second method was scaffolded DNA, where a single strand is shaped into a structure using helper strands of DNA that staple the structure into place.

"These two methods are not very scalable in terms of synthesis," said Fei Zhang, a senior co-author on the paper. "When you have so many short pieces of DNA, you can't replicate it using biological systems. One way around this is to engineer one long strand that could fold itself into any design or architecture."

Furthermore, each method has been limited because as the size of the structure increases, the ability to fold correctly becomes more challenging.

Now, there is a new third way.

For Yan and his team to make their breakthrough, they had to go back to the drawing board, which meant looking at nature again for inspiration. They found what they were looking for in a chemical cousin of DNA: complex RNA structures.

The complex RNA structures discovered to date contain single-stranded RNA molecules that self-fold into structures without any topological knots. Could this trick work again for single-stranded DNA or RNA origami?

They were able to crack the code of how RNA makes structures to develop a fully programmable single-stranded origami architecture.

"The key innovation of our study is to use DNA and RNA to construct a structurally complex yet knot-free structure that can be folded smoothly from a single strand," Yan said. "This gave us a design strategy to allow us to fold one long strand into complex architecture."

"With help from a computer scientist in the team, we could also codify the design process as a mathematically rigorous formal algorithm and automate the design by developing a user-friendly software tool," said Yan.

The algorithm and software were validated by the automated design and experimental construction of six distinct DNA ssOrigami structures (four rhombuses and two heart shapes).

Form and function

It's one thing to make crafty patterns and smiley faces with DNA, but critics of DNA origami have been wondering when the practical applications would come about.

Now, these are possible. "I think we are much closer to real practical applications of the technology," said Yan. "We are actively looking at the first nanomedicine applications with our ssOrigami technology."

They were also able to demonstrate that a folded ssOrigami structure can be melted and used as a template for amplification by DNA copying enzymes in a test tube and that the ssOrigami strand can be replicated and amplified via clonal production in living cells.

"Single-stranded DNA nanostructures formed via self-folding offer greater potential of being amplifiable, replicable, and clonable, and hence the opportunity for cost-efficient, large-scale production using enzymatic and biological replication, as well as the possibility for using in vitro evolution to produce sophisticated phenotypes and functionalities," said Yan.

These same design rules could be used for DNA's chemical cousin, RNA.

A key design feature of single-stranded origami (ssOrigami) is that the strand can be made and copied in the lab and in living cells and subsequently folded into designer structures by heating and cooling the DNA.

To make it in the lab, they used PCR, the molecular photocopier of DNA sequences, to replicate and produce the ssDNA.

To produce it inside living cells, they first placed the sequence into a plasmid -- a carrier used in molecular cloning -- which was then introduced into E. coli, a common lab bacterium. When they treated the bacteria with enzymes to free up the ssDNA, they could isolate it and then fold it into its target structure.

"Because plasmid DNA can be easily replicated in E. coli, the production can be scaled up by growing a large volume of E. coli cells with low cost," said Yan. This gets around the constraint of having to synthesize all of the DNA in the lab from scratch, which is far more expensive.

It also moves them in a direction now, where they can potentially make the structures inside of cells.

"Here we show bacteria to make the strand, but still need to do thermal annealing outside the bacteria to form the structure," said Yan. "The ideal situation would be to design an RNA sequence that can get transcribed inside the bacteria, and fold inside the bacteria so we can use bacteria as a nanofactory to produce the material."

Here, they demonstrated a framework to design and synthesize a single DNA or RNA strand to efficiently self-fold into an unknotted compact ssOrigami structure that approximates any arbitrary user-prescribed target shape.

"Its single-strandedness enabled the demonstration of facile replication of the strand in vitro and in living cells, and its programmability allowed us to codify the design process and develop a simple web-based automated design tool."

A new design school

In the software, developed through a collaboration with the BioNano Research Group at Autodesk Research, the user first selects a target shape, which is converted into a pixelated representation. The user can upload a 2D image or draw a shape using a 2D pixel design editor.

The user can optionally add DNA hairpins or loops, which can serve as surface markers or handles for attaching external entities. The pixels are converted into DNA helical domains and locking domains that carry out the folding. The software then generates the ssOrigami structure and its sequence, which the user can inspect with an embedded molecular viewer. Finally, the DNA sequence is assigned to the single cycle strand, and the expected folded structure is manufactured in the lab and visually confirmed under the eyes of nanotechnology: the atomic force microscope, or AFM.
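
The pixel-to-strand step can be pictured with a small sketch. The Python toy below is my own illustration (not the published ssOrigami algorithm or the Autodesk tool): it rasterizes a target shape onto a grid of pixels and then routes one continuous path through every filled pixel in serpentine order, a stand-in for folding a single strand through each "pixel" of the design.

    # Toy illustration (not the published ssOrigami algorithm or the Autodesk
    # tool): rasterize a small target shape onto a pixel grid, then route one
    # continuous path through the filled pixels row by row in serpentine order.

    shape = [
        "..##..",
        ".####.",
        "######",
        ".####.",
        "..##..",
    ]
    grid = [[ch == "#" for ch in row] for row in shape]

    def serpentine_route(grid):
        """Visit every filled pixel once, alternating direction on successive
        rows so the single path stays roughly contiguous."""
        path = []
        for r, row in enumerate(grid):
            cols = range(len(row)) if r % 2 == 0 else range(len(row) - 1, -1, -1)
            path.extend((r, c) for c in cols if row[c])
        return path

    route = serpentine_route(grid)
    print(f"{len(route)} pixels routed, start={route[0]}, end={route[-1]}")
    # In the real design tool each visited pixel corresponds to a DNA helical
    # domain, with locking domains pairing the strand back onto itself.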

Read more at Science Daily

Artificial intelligence, NASA data used to discover eighth planet circling distant star

With the discovery of an eighth planet, the Kepler-90 system is the first to tie with our solar system in number of planets. Artist's concept.
Our solar system is now tied for the most planets known around a single star, following the recent discovery of an eighth planet circling Kepler-90, a Sun-like star 2,545 light-years from Earth. The planet was discovered in data from NASA's Kepler Space Telescope.

The newly-discovered Kepler-90i -- a sizzling hot, rocky planet that orbits its star once every 14.4 days -- was found using machine learning from Google. Machine learning is an approach to artificial intelligence in which computers "learn." In this case, computers learned to identify planets by finding in Kepler data instances where the telescope recorded changes in starlight caused by planets beyond our solar system, known as exoplanets.

"Just as we expected, there are exciting discoveries lurking in our archived Kepler data, waiting for the right tool or technology to unearth them," said Paul Hertz, director of NASA's Astrophysics Division in Washington. "This finding shows that our data will be a treasure trove available to innovative researchers for years to come."

The discovery came about after researchers Christopher Shallue and Andrew Vanderburg trained a computer to learn how to identify exoplanets in the light readings recorded by Kepler -- the minuscule change in brightness captured when a planet passed in front of, or transited, a star. Inspired by the way neurons connect in the human brain, this artificial "neural network" sifted through Kepler data and found weak transit signals from a previously-missed eighth planet orbiting Kepler-90, in the constellation Draco.

Machine learning has previously been used in searches of the Kepler database, and this continuing research demonstrates that neural networks are a promising tool in finding some of the weakest signals of distant worlds.

Other planetary systems probably hold more promise for life than Kepler-90. About 30 percent larger than Earth, Kepler-90i is so close to its star that its average surface temperature is believed to exceed 800 degrees Fahrenheit, on par with Mercury. Its outermost planet, Kepler-90h, orbits at a similar distance to its star as Earth does to the Sun.

"The Kepler-90 star system is like a mini version of our solar system. You have small planets inside and big planets outside, but everything is scrunched in much closer," said Vanderburg, a NASA Sagan Postdoctoral Fellow and astronomer at the University of Texas at Austin.

Shallue, a senior software engineer with Google's research team Google AI, came up with the idea to apply a neural network to Kepler data. He became interested in exoplanet discovery after learning that astronomy, like other branches of science, is rapidly being inundated with data as the technology for data collection from space advances.

"In my spare time, I started Googling for 'finding exoplanets with large data sets' and found out about the Kepler mission and the huge data set available," said Shallue. "Machine learning really shines in situations where there is so much data that humans can't search it for themselves."

Kepler's four-year dataset consists of 35,000 possible planetary signals. Automated tests, and sometimes human eyes, are used to verify the most promising signals in the data. However, the weakest signals often are missed using these methods. Shallue and Vanderburg thought there could be more interesting exoplanet discoveries faintly lurking in the data.

First, they trained the neural network to identify transiting exoplanets using a set of 15,000 previously vetted signals from the Kepler exoplanet catalogue. In the test set, the neural network correctly identified true planets and false positives 96 percent of the time. Then, with the neural network having "learned" to detect the pattern of a transiting exoplanet, the researchers directed their model to search for weaker signals in 670 star systems that already had multiple known planets. Their assumption was that multiple-planet systems would be the best places to look for more exoplanets.
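
The underlying classification task can be sketched in a few lines. The Python toy below is not the Shallue-Vanderburg neural network; it simulates light curves with and without a shallow transit dip, extracts a single depth feature, and fits a one-neuron logistic classifier to it, just to show how a learned model can separate transit-like dips from noise.

    import numpy as np

    # Toy stand-in for the transit-classification task (simulated data and a
    # one-neuron logistic model, not the Shallue-Vanderburg network).
    rng = np.random.default_rng(0)
    N_POINTS, BOX = 200, 10        # light-curve length and assumed transit width

    def light_curve(has_planet):
        """Simulated normalized flux: flat with noise, plus a shallow dip if a
        planet transits (illustrative numbers, not real Kepler photometry)."""
        flux = 1.0 + rng.normal(0, 0.003, N_POINTS)
        if has_planet:
            start = rng.integers(0, N_POINTS - BOX)
            flux[start:start + BOX] -= 0.01        # ~1 percent transit depth
        return flux

    def depth_feature(flux):
        """Deepest box-averaged dip relative to the median flux."""
        smoothed = np.convolve(flux, np.ones(BOX) / BOX, mode="valid")
        return np.median(flux) - smoothed.min()

    # Labelled training set, standardized feature, plain gradient descent.
    y_train = rng.integers(0, 2, 400)
    x_train = np.array([depth_feature(light_curve(bool(y))) for y in y_train])
    mu, sigma = x_train.mean(), x_train.std()
    x_train = (x_train - mu) / sigma

    w, b = 0.0, 0.0
    for _ in range(500):
        p = 1 / (1 + np.exp(-(w * x_train + b)))   # predicted transit probability
        w -= 0.1 * np.mean((p - y_train) * x_train)
        b -= 0.1 * np.mean(p - y_train)

    y_test = rng.integers(0, 2, 200)
    x_test = (np.array([depth_feature(light_curve(bool(y))) for y in y_test]) - mu) / sigma
    pred = 1 / (1 + np.exp(-(w * x_test + b))) > 0.5
    print("toy accuracy:", np.mean(pred == y_test))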

"We got lots of false positives of planets, but also potentially more real planets," said Vanderburg. "It's like sifting through rocks to find jewels. If you have a finer sieve then you will catch more rocks but you might catch more jewels, as well."

Kepler-90i wasn't the only jewel this neural network sifted out. In the Kepler-80 system, they found a sixth planet. This one, the Earth-sized Kepler-80g, and four of its neighboring planets form what is called a resonant chain -- where planets are locked by their mutual gravity in a rhythmic orbital dance. The result is an extremely stable system, similar to the seven planets in the TRAPPIST-1 system.

Their research paper reporting these findings has been accepted for publication in The Astronomical Journal. Shallue and Vanderburg plan to apply their neural network to Kepler's full set of more than 150,000 stars.

Kepler has produced an unprecedented data set for exoplanet hunting. After gazing at one patch of space for four years, the spacecraft now is operating on an extended mission and switches its field of view every 80 days.

"These results demonstrate the enduring value of Kepler's mission," said Jessie Dotson, Kepler's project scientist at NASA's Ames Research Center in California's Silicon Valley. "New ways of looking at the data -- such as this early-stage research to apply machine learning algorithms -- promise to continue to yield significant advances in our understanding of planetary systems around other stars. I'm sure there are more firsts in the data waiting for people to find them."

Read more at Science Daily

New Data From Mars Further Dampens Prospects for Life Around Red Dwarf Stars

The illustration depicts charged particles from a solar storm stripping away charged particles of Mars' atmosphere, one of the processes of Martian atmosphere loss studied by NASA's MAVEN mission, beginning in 2014.
Could a Mars-like planet be habitable if it was orbiting a red dwarf star that is cooler and less bright than our sun? New calculations — based on data from a NASA Mars orbiter — suggest the window for life would be short.

The activity on these red dwarf stars could shorten the time during which habitable conditions exist on a planet by a factor of about 5 to 20, the researchers said. In the worst case, extremely active stars could shorten habitability by at least a factor of 1,000 — barely any time for life to establish itself, let alone thrive.

The news comes in the wake of a separate study of Proxima Centauri b, a rocky planet that orbits a red dwarf star just four light-years from Earth. When the planet was discovered in 2016, researchers were initially excited by Proxima Centauri b, a planet orbiting in its star’s habitable zone. However, a new study concluded that habitability would be tough to come by because the red dwarf that Proxima Centauri b orbits likely stripped away the planet's atmosphere over time.

Together, the two studies indicate that the idea of habitability — usually defined as the zone around a star in which liquid water can exist on a rocky planet — needs revisiting, as this zone can be greatly altered by factors such as a planet's atmosphere or a star's activity.

"Habitability is one of the biggest topics in astronomy, and these estimates demonstrate one way to leverage what we know about Mars and the Sun to help determine the factors that control whether planets in other systems might be suitable for life," said Bruce Jakosky, the principal investigator of MAVEN, the Mars Atmosphere and Volatile Evolution mission.

MAVEN has been orbiting Mars since September 2014, studying the rate of atmospheric loss from the Red Planet. Mars doesn't have a global magnetic field. This means that over time charged particles from the sun can strip away lighter molecules in the atmosphere.

For Mars, that process might have had devastating consequences for life. Geological evidence shows that water used to flow on the Martian surface billions of years ago. Since water is a key ingredient of habitability, some researchers suggest that life could have flourished in the past, but only briefly until the atmosphere became too thin to support water and the surface permanently dried up.

A new study, presented Dec. 13 by MAVEN co-investigator David Brain at the fall meeting of the American Geophysical Union, uses data from MAVEN to better understand habitable rocky planets orbiting other stars.

Since MAVEN arrived at Mars, the sun's activity has varied — sometimes experiencing peaks such as solar storms, solar flares, and coronal mass ejections (ejections of solar particles into space). MAVEN monitored how quickly molecules from the Martian atmosphere escaped.

Brain and his colleagues then applied the data to a theoretical Martian-sized planet at the edge of the habitable zone of a red dwarf star, the most common type of star in our galaxy.

A planet orbiting a red dwarf star needs to huddle closer to that star (compared to our own sun) to get enough light and warmth for water to run on its surface, they found. But this comes at a price: The hypothetical planet would receive 5 to 10 times more ultraviolet radiation than Mars does. This radiation would in turn fuel atmospheric escape at a much greater rate than Mars experiences — 3 to 5 times as many charged particles, and 5 to 10 times more neutral particles.

The charged particles would also contribute to a separate atmospheric loss phenomenon called sputtering, which occurs when energetic particles crash into the atmosphere and disturb molecules, propelling some of them out into space.

But the planet would experience only about the same amount of thermal escape, which happens for lighter molecules like hydrogen. Thermal escape happens at the top of the atmosphere. On the theoretical Mars-like planet at a red dwarf star, the rate of thermal escape only increases if UV radiation moves more hydrogen to the top of the atmosphere, the researchers found.

The researchers added, however, that a planet might have other geological processes that could fight back against atmospheric loss. Perhaps it has a stronger magnetic field, unlike Mars, or active geology that could replenish the atmosphere. Also, planets larger than Mars could better maintain their atmospheres because of stronger gravity.

Read more at Seeker

Dec 13, 2017

Dinosaur parasites trapped in 100-million-year-old amber tell blood-sucking story

Hard tick grasping a dinosaur feather preserved in 99 million-year-old Burmese amber. Modified from the open access article published in Nature Communications: 'Ticks parasitised feathered dinosaurs as revealed by Cretaceous amber assemblages.'
Fossilised ticks discovered trapped and preserved in amber show that these parasites sucked the blood of feathered dinosaurs almost 100 million years ago, according to a new article published in Nature Communications today.

Sealed inside a piece of 99 million-year-old Burmese amber, researchers found a so-called hard tick grasping a feather. The discovery is remarkable because fossils of parasitic, blood-feeding creatures directly associated with remains of their host are exceedingly scarce, and the new specimen is the oldest known to date.

The scenario may echo the famous mosquito-in-amber premise of Jurassic Park, although the newly-discovered tick dates from the Cretaceous period (145-66 million years ago) and will not be yielding any dinosaur-building DNA: all attempts to extract DNA from amber specimens have proven unsuccessful due to the short life of this complex molecule.

"Ticks are infamous blood-sucking, parasitic organisms, having a tremendous impact on the health of humans, livestock, pets, and even wildlife, but until now clear evidence of their role in deep time has been lacking," says Enrique Peñalver from the Spanish Geological Survey (IGME) and leading author of the work.

Cretaceous amber provides a window into the world of the feathered dinosaurs, some of which evolved into modern-day birds. The studied amber feather with the grasping tick is similar in structure to modern-day bird feathers, and it offers the first direct evidence of an early parasite-host relationship between ticks and feathered dinosaurs.

"The fossil record tells us that feathers like the one we have studied were already present on a wide range of theropod dinosaurs, a group which included ground-running forms without flying ability, as well as bird-like dinosaurs capable of powered flight," explains Dr Ricardo Pérez-de la Fuente, a research fellow at Oxford University Museum of Natural History and one of the authors of the study.

"So although we can't be sure what kind of dinosaur the tick was feeding on, the mid-Cretaceous age of the Burmese amber confirms that the feather certainly did not belong to a modern bird, as these appeared much later in theropod evolution according to current fossil and molecular evidence."

The researchers found further, indirect evidence of ticks parasitising dinosaurs in Deinocroton draculi, or "Dracula's terrible tick," belonging to a newly-described extinct group of ticks. This new species was also found sealed inside Burmese amber, with one specimen remarkably engorged with blood, increasing its volume approximately eight times over non-engorged forms. Despite this, it has not been possible to directly determine its host animal.

"Assessing the composition of the blood meal inside the bloated tick is not feasible because, unfortunately, the tick did not become fully immersed in resin and so its contents were altered by mineral deposition," explains Dr Xavier Delclòs, an author of the study from the University of Barcelona and IRBio.

But indirect evidence of the likely host for these novel ticks was found in the form of hair-like structures, or setae, from the larvae of skin beetles (dermestids), found attached to two Deinocroton ticks preserved together. Today, skin beetles feed in nests, consuming feathers, skin and hair from the nest's occupants. And as no mammal hairs have yet been found in Cretaceous amber, the presence of skin beetle setae on the two Deinocroton draculi specimens suggests that the ticks' host was a feathered dinosaur.

"The simultaneous entrapment of two external parasites -- the ticks -- is extraordinary, and can be best explained if they had a nest-inhabiting ecology as some modern ticks do, living in the host's nest or in their own nest nearby," says Dr David Grimaldi of the American Museum of Natural History and an author of the work.

Read more at Science Daily

Humans can feel molecular differences between nearly identical surfaces

Humans can differentiate between surfaces that differ by just a single layer of molecules.
How sensitive is the human sense of touch? Sensitive enough to feel the difference between surfaces that differ by just a single layer of molecules, a team of researchers at the University of California San Diego has shown.

"This is the greatest tactile sensitivity that has ever been shown in humans," said Darren Lipomi, a professor of nanoengineering and member of the Center for Wearable Sensors at the UC San Diego Jacobs School of Engineering, who led the interdisciplinary project with V. S. Ramachandran, director of the Center for Brain and Cognition and distinguished professor in the Department of Psychology at UC San Diego.

Humans can easily feel the difference between many everyday surfaces such as glass, metal, wood and plastic. That's because these surfaces have different textures or draw heat away from the finger at different rates. But UC San Diego researchers wondered, if they kept all these large-scale effects equal and changed only the topmost layer of molecules, could humans still detect the difference using their sense of touch? And if so, how?

Researchers say this fundamental knowledge will be useful for developing electronic skin, prosthetics that can feel, advanced haptic technology for virtual and augmented reality and more.

Unsophisticated haptic technologies exist in the form of rumble packs in video game controllers or smartphones that shake, Lipomi added. "But reproducing realistic tactile sensations is difficult because we don't yet fully understand the basic ways in which materials interact with the sense of touch."

"Today's technologies allow us to see and hear what's happening, but we can't feel it," said Cody Carpenter, a nanoengineering Ph.D. student at UC San Diego and co-first author of the study. "We have state-of-the-art speakers, phones and high-resolution screens that are visually and aurally engaging, but what's missing is the sense of touch. Adding that ingredient is a driving force behind this work."

This study is the first to combine materials science and psychophysics to understand how humans perceive touch. "Receptors processing sensations from our skin are phylogenetically the most ancient, but far from being primitive they have had time to evolve extraordinarily subtle strategies for discerning surfaces -- whether a lover's caress or a tickle or the raw tactile feel of metal, wood, paper, etc. This study is one of the first to demonstrate the range of sophistication and exquisite sensitivity of tactile sensations. It paves the way, perhaps, for a whole new approach to tactile psychophysics," Ramachandran said.

Super-sensitive touch

In a paper published in Materials Horizons, UC San Diego researchers tested whether human subjects could distinguish -- by dragging or tapping a finger across the surface -- between smooth silicon wafers that differed only in their single topmost layer of molecules. One surface was a single oxidized layer made mostly of oxygen atoms. The other was a single Teflon-like layer made of fluorine and carbon atoms. Both surfaces looked identical and felt similar enough that some subjects could not differentiate between them at all.

According to the researchers, human subjects can feel these differences because of a phenomenon known as stick-slip friction, which is the jerking motion that occurs when two objects at rest start to slide against each other. This phenomenon is responsible for the musical notes played by running a wet finger along the rim of a wine glass, the sound of a squeaky door hinge or the noise of a stopping train. In this case, each surface has a different stick-slip frequency due to the identity of the molecules in the topmost layer.

In one test, 15 subjects were tasked with feeling three surfaces and identifying the one surface that differed from the other two. Subjects correctly identified the differences 71 percent of the time.

In another test, subjects were given three different strips of silicon wafer, each strip containing a different sequence of 8 patches of oxidized and Teflon-like surfaces. Each sequence represented an 8-digit string of 0s and 1s, which encoded for a particular letter in the ASCII alphabet. Subjects were asked to "read" these sequences by dragging a finger from one end of the strip to the other and noting which patches in the sequence were the oxidized surfaces and which were the Teflon-like surfaces. In this experiment, 10 out of 11 subjects decoded the bits needed to spell the word "Lab" (with the correct upper and lowercase letters) more than 50 percent of the time. Subjects spent an average of 4.5 minutes to decode each letter.
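
The decoding itself is ordinary binary-to-ASCII conversion. Below is a minimal Python sketch of the scheme described above (the 0/1 assignment of oxidized versus Teflon-like patches is my assumption, not stated by the authors).

    # Minimal sketch of the strip encoding: each strip carries 8 patches read as
    # bits (oxidized = 0, Teflon-like = 1 is an assumed convention), and the
    # 8 bits spell one ASCII character.

    def encode(text):
        """Turn text into one 8-bit string per character."""
        return [format(ord(ch), "08b") for ch in text]

    def decode(bit_strings):
        """Turn 8-bit strings, as read off the strips, back into text."""
        return "".join(chr(int(bits, 2)) for bits in bit_strings)

    strips = encode("Lab")
    print(strips)           # ['01001100', '01100001', '01100010']
    print(decode(strips))   # 'Lab'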

"A human may be slower than a nanobit per second in terms of reading digital information, but this experiment shows a potentially neat way to do chemical communications using our sense of touch instead of sight," Lipomi said.

Basic model of touch

The researchers also found that these surfaces can be differentiated depending on how fast the finger drags and how much force it applies across the surface. The researchers modeled the touch experiments using a "mock finger," a finger-like device made of an organic polymer that's connected by a spring to a force sensor. The mock finger was dragged across the different surfaces using multiple combinations of force and swiping velocity. The researchers plotted the data and found that the surfaces could be distinguished given certain combinations of velocity and force. Meanwhile, other combinations made the surfaces indistinguishable from each other.

"Our results reveal a remarkable human ability to quickly home in on the right combinations of forces and swiping velocities required to feel the difference between these surfaces. They don't need to reconstruct an entire matrix of data points one by one as we did in our experiments," Lipomi said.

Read more at Science Daily

175 years on, study finds where you live still determines your life expectancy

Cigar factory in Liverpool. Date: 1858
Researchers at the University of Liverpool revisited a study carried out 175 years ago which compared the health and life expectancy of people in different parts of the United Kingdom, including Liverpool, to see if its findings still held true.

They found that stark differences still exist and that people living in Liverpool still had lower life expectancy than those living in the rural area of Rutland.

The original study into sanitation conditions by Edwin Chadwick in 1842 charted the average age at death by occupational group for five areas in the UK -- Liverpool, Leeds, Manchester, Bolton and Rutland, a rural county in eastern England.

The findings revealed a strong correlation between where you lived, what your job was and the age you lived to. This was the first study to demonstrate the huge geographical differences in health and life circumstances. It showed that a labourer in Rutland could expect to live a longer life than a professional tradesman in Liverpool.

Researchers led by the University's Department of Geography & Planning undertook a similar analysis using data from the Office for National Statistics and the 2011 census to see if the same pattern still persisted.

They found that whilst the level of inequality wasn't on the scale it had been between the two areas in 1842, individuals in the middle social class bracket in Rutland still lived longer than those in the highest social class in Liverpool.

Dr Mark Green, who conducted the study, said: "On the 175th anniversary of this report, which was ground-breaking at the time, we wanted to see if in the 21st century your geography -- that is where you live -- still determined your health and life expectancy.

"We found that whilst life expectancy has nearly doubled since Chadwick's report, there is still a link between where you live, your social class and how long you live to.

"It is remarkable that after 175 years, mortality rates in Liverpool are still higher than in Rutland within each occupational group. What this demonstrates is that living in certain locations offers very different life chances and health outcomes for people within the same occupational groups."

From Science Daily

Oldest ice core ever drilled outside the polar regions

A joint research team from the United States and China ventured to the Guliya Ice Cap in Tibet in 2015. They drilled an ice core more than 1,000 feet long, the bottom of which dates back more than half a million years.
The oldest ice core ever drilled outside the polar regions may contain ice that formed during the Stone Age -- more than 600,000 years ago, long before modern humans appeared.

Researchers from the United States and China are now studying the core -- nearly as long as the Empire State Building is tall -- to assemble one of the longest-ever records of Earth's climate history.

What they've found so far provides dramatic evidence of a recent and rapid temperature rise at some of the highest, coldest mountain peaks in the world.

At the American Geophysical Union meeting on Thursday, Dec. 14, they report that there has been a persistent increase in both temperature and precipitation in Tibet's Kunlun Mountains over the last few centuries. The change is most noticeable on the Guliya Ice Cap, where they drilled the latest ice core. In this region, the average temperature has risen 1.5 degrees Celsius (2.7 degrees Fahrenheit) in the last 50 years and the average precipitation has risen by 2.1 inches per year over the past 25 years.

Lonnie Thompson, Distinguished University Professor in the School of Earth Sciences at The Ohio State University and co-leader of the international research team, said that the new data lend support to computer models of projected climate changes.

"The ice cores actually demonstrate that warming is happening, and is already having detrimental effects on Earth's freshwater ice stores," Thompson said.

Earth's largest supply of freshwater ice outside of the Arctic and Antarctica resides in Tibet -- a place that was off limits to American glaciologists until 20 years ago, when Ohio State's Byrd Polar and Climate Research Center (BPCRC) began a collaboration with China's Institute of Tibetan Plateau Research. There, glaciologist Yao Tandong secured funding for a series of joint expeditions from the Chinese Academy of Sciences.

"The water issues created by melting ice on the Third Pole, along with that from the Arctic and Antarctica, have been recognized as important contributors to the rise in global sea level. Continued warming in these regions will result in even more ice melt with the likelihood of catastrophic environmental consequences," Yao noted.

The name "Third Pole" refers to high mountain glaciers located on the Tibetan Plateau and in the Himalaya, in the Andes in South America, on Kilimanjaro in Africa, and in Papua, Indonesia -- all of which have been studied by the Ohio State research team.

Of particular interest to the researchers is a projection from the Intergovernmental Panel on Climate Change that future temperatures on the planet will rise faster at high altitudes than they will at sea level. The warming at sea level is expected to reach 3 degrees Celsius by the year 2100, and possibly double that, or 6 degrees Celsius, at the highest mountain peaks in the low latitudes.

"The stable isotopic records that we've obtained from five ice cores drilled across the Third Pole document climate changes over the last 1,000 years, and contribute to a growing body of evidence that environmental conditions on the Third Pole, along with the rest of the world, have changed significantly in the last century," Thompson said. "Generally, the higher the elevation, the greater the rate of warming that's taking place."

Around the world, hundreds of millions of people depend on high-altitude glaciers for their water supply. The Guliya Ice Cap is one of many Tibetan Plateau ice caches that provide fresh water to Central, South, and Southeast Asia.

"There are over 46,000 mountain glaciers in that part of the world, and they are the water source for major rivers," Thompson said.

In September and October of 2015, the team ventured to Guliya and drilled through the ice cap until they hit bedrock. They recovered five ice cores, one of which is more than 1,000 feet long.

The cores are composed of compressed layers of snow and ice that settled on the western Kunlun Mountains year after year. In each layer, the ice captured chemicals from the air and precipitation during wet and dry seasons. Today, researchers analyze the chemistry of the different layers to measure historical changes in climate.

Based on dating of radioactive elements measured by scientists at the Swiss research center ETH Zurich, the ice at the base of the core may be at least 600,000 years old.

The oldest ice core drilled in the Northern Hemisphere was found in Greenland in 2004 by the North Greenland Ice Core Project and was dated to roughly 120,000 years, while the oldest continuous ice core record recovered on Earth to date is from Antarctica, and extends back 800,000 years.

Over the next few months, the American and Chinese research teams will analyze the chemistry of the core in detail. They will look for evidence of temperature changes caused by ocean circulation patterns in both the North Atlantic and tropical Pacific Oceans, which drive precipitation in Tibet as well as the Indian monsoons. For instance, one important driver of global temperatures, El Niño, leaves its chemical mark in the snow that falls on tropical glaciers.

Read more at Science Daily

Fossil orphans reunited with their parents after half a billion years

This is an image of Pseudooides.
Everyone wants to be with their family over the holidays, but spare a thought for a group of orphan fossils that have been separated from their parents since the dawn of animal evolution, over half a billion years ago.

For decades, paleontologists have puzzled over the microscopic fossils of Pseudooides, which are smaller than sand grains.

The resemblance of the fossils to animal embryos inspired their name, which means 'false egg'.

The fossils preserve stages of embryonic development frozen in time by miraculous processes of fossilisation, which turned their squishy cells into stone.

Pseudooides fossils have a segmented middle like the embryos of segmented animals, such as insects, inspiring grand theories on how complex segmented animals may have evolved.

A team of paleontologists from the University of Bristol's School of Earth Sciences and Peking University have now peered inside the Pseudooides embryos using X-rays and found features that link them to the adult stages of another fossil group.

It turns out that these adult stages were right under the scientists' noses all along: they had been found long ago in the same rocks as Pseudooides.

Surprisingly, these long-lost family members are not complex segmented animals at all, but ancestors of modern jellyfish.

Dr Kelly Vargas from the University of Bristol said: "It seems that, in trying to classify these fossils, we've previously been barking up the wrong branch of the animal family tree."

Professor Philip Donoghue, also from the University of Bristol, co-led the research with Professor Xiping Dong of Peking University.

Professor Donoghue added: "We couldn't have reunited these ancient family members without the amazing technology which allowed us to see inside the fossilized bodies of the embryos and adults."

The team used the Swiss Light Source, a gigantic particle accelerator near Zurich, Switzerland, to supply the X-rays used to image the inside of the fossils.

This showed the details of segmentation in the Pseudooides embryos to be nothing more than the folded edge of an opening, which developed into the rim of the cone-shaped skeleton that once housed the anemone-like stage in the life cycle of the ancient jellyfish.

Luis Porras, who helped make the discovery while still a student at the University of Bristol, said: "Pseudooides fossils may not tell us about how complex animals evolved, but they provide insights into how the embryology of animals itself has evolved.

"The embryos of living jellyfish usually develop into bizarre alien-like larvae which metamorphose into anemone-like adults before the final jellyfish (or 'medusa') phase.

"Pseudooides did things differently and more efficiently, developing directly from embryo to adult. Perhaps living jellyfish are a poor guide to ancestral animals."

Professor Donoghue added: "It is amazing that these organisms were fossilised at all.

"Jellyfish are made up of little more than goo and yet they've been turned to stone before they had any chance to rot: a mechanism which some scientists refer to as the 'Medusa effect', named after the gorgon of Greek mythology who turned into stone anyone that laid eyes upon her."

Read more at Science Daily

Dec 12, 2017

Action games expand the brain's cognitive abilities, study suggests

This study focuses on one specific video game genre: action video (war or shooter) games, which have long been considered mind-numbing.
The human brain is malleable -- it learns and adapts. Numerous research studies have focused on the impact of action video games on the brain by measuring cognitive abilities, such as perception, attention and reaction time. An international team of psychologists, led by the University of Geneva (UNIGE), Switzerland, has assembled data from the last fifteen years to quantify how action video games impact cognition. The research has resulted in two meta-analyses, published in the journal Psychological Bulletin, which reveal a significant improvement in the cognitive abilities of gamers.

Psychologists have been studying the impact of video games on the brain ever since the late 80s, when Pac-Man and arcade games first took root. The present study focuses on one specific video game genre: action video (war or shooter) games, which have long been considered mind-numbing. Do they influence the cognitive skills of players?

"We decided to assemble all the relevant data from 2000 to 2015 in an attempt to answer this question, as it was the only way to have a proper overview of the real impact of action video games," explains Daphné Bavelier, professor in the Psychology Section at UNIGE's Faculty of Psychology and Educational Sciences (FPSE). Psychologists from UNIGE and the universities of Columbia, Santa Barbara and Wisconsin dissected the published literature (articles, theses and conference abstracts) over the course of a year. In addition, they contacted over sixty professors, asking them for any unpublished data that might throw light on the role of action video games. Two meta-analyses emerged from the research.

Profile of action gamers

A total of 8,970 individuals between the ages of 6 and 40, including action gamers and non-gamers, took a number of psychometric tests in studies conducted by laboratories across the world with the aim of evaluating their cognitive abilities. The assessments included spatial attention (e.g. quickly detecting a dog in a herd of animals), as well as skills in managing multiple tasks simultaneously and changing plans according to pre-determined rules. It was found that the cognition of gamers was better by one-half of a standard deviation compared to non-gamers.
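The "one-half of a standard deviation" figure is a standardized mean difference (a Cohen's d of about 0.5), the usual effect-size measure in such meta-analyses. Below is a minimal sketch of how that measure is computed, using made-up test scores rather than data from the study.

    # Standardized mean difference (Cohen's d) between two groups; illustrative numbers only.
    import statistics

    gamers = [98, 102, 106]       # hypothetical test scores
    non_gamers = [96, 100, 104]   # hypothetical test scores

    def cohens_d(a, b):
        na, nb = len(a), len(b)
        pooled_var = ((na - 1) * statistics.variance(a) +
                      (nb - 1) * statistics.variance(b)) / (na + nb - 2)
        return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

    print(round(cohens_d(gamers, non_gamers), 2))  # -> 0.5, i.e. half a standard deviation

The one-third-of-a-standard-deviation training effect reported later in this article is the same kind of measure, which is what makes the two figures directly comparable.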

However, this first meta-analysis failed to answer a crucial question. "We needed to think about what the typical gamer profile is," points out Benoit Bediou, researcher in the FPSE Psychology Section. "Do they play action-type video games because they already have certain cognitive skills that make them good players? Or, on the contrary, are their high cognitive abilities actually developed by playing games?"

Training your brain by playing action video games

The psychologists proceeded to analyze intervention studies as part of the second meta-analysis. 2,883 people (men and women) who played for a maximum of one hour a week were first tested for their cognitive abilities and then randomly divided into two groups: one played action games (war or shooter games), the other played control games (SIMS, Puzzle, Tetris). Both groups played for at least 8 hours over a week and up to 50 hours over 12 weeks. At the end of the training, participants underwent cognitive testing to measure any changes in their cognitive abilities. "The aim was to find out whether the effects of action gaming on the brain are causal," continues Bavelier, adding: "That's why these intervention studies always compare and contrast a group that is obliged to play an action game with one obliged to play a video control game, where the mechanics are very different. This active control group ensures that the effects resulting from playing action games really do result from the nature of this kind of game. In other words, they are not due to being part of a group that is asked to undertake an engrossing task or that is the centre of scientific attention (placebo effect)."

The results were beyond dispute: individuals playing action video games improved their cognition more than those playing the control games, with the difference in cognitive abilities between the two training groups being one-third of a standard deviation. "The research, which was carried out over several years all over the world, proves the real effects of action video games on the brain and paves the way for using action video games to expand cognitive abilities," argues Bediou.

Despite the good news for avid gamers, it is worth highlighting that these beneficial effects were observed in studies that asked individuals to space their game play out over a period of many weeks to months, rather than engaging in a large amount of gaming in a single sitting. As is true of any learning activity, short bouts of repeated practice are much preferred over bingeing!

Read more at Science Daily

Telescopes team up to study giant galaxy

The giant radio galaxy Centaurus A as observed by the Murchison Widefield Array telescope.
Astronomers have used two Australian radio telescopes and several optical telescopes to study complex mechanisms that are fuelling jets of material blasting away from a black hole 55 million times more massive than the Sun.

In research published today, the international team of scientists used the telescopes to observe a nearby radio galaxy known as Centaurus A.

"As the closest radio galaxy to Earth, Centaurus A is the perfect 'cosmic laboratory' to study the physical processes responsible for moving material and energy away from the galaxy's core," said Dr Ben McKinley from the International Centre for Radio Astronomy Research (ICRAR) and Curtin University in Perth, Western Australia.

Centaurus A is 12 million light-years away from Earth -- just down the road in astronomical terms -- and is a popular target for amateur and professional astronomers in the Southern Hemisphere due to its size, elegant dust lanes, and prominent plumes of material.

"Being so close to Earth and so big actually makes studying this galaxy a real challenge because most of the telescopes capable of resolving the detail we need for this type of work have fields of view that are smaller than the area of sky Centaurus A takes up," said Dr McKinley.

"We used the Murchison Widefield Array (MWA) and Parkes -- these radio telescopes both have large fields of view, allowing them to image a large portion of sky and see all of Centaurus A at once. The MWA also has superb sensitivity allowing the large-scale structure of Centaurus A to be imaged in great detail," he said.

The MWA is a low frequency radio telescope located at the Murchison Radio-astronomy Observatory in Western Australia's Mid West, operated by Curtin University on behalf of an international consortium. The Parkes Observatory is a 64-metre radio telescope, commonly known as "the Dish," located in New South Wales and operated by CSIRO.

Observations from several optical telescopes were also used for this work -- the Magellan Telescope in Chile, Terroux Observatory in Canberra, and High View Observatory in Auckland.

"If we can figure out what's going in Centaurus A, we can apply this knowledge to our theories and simulations for how galaxies evolve throughout the entire Universe," said co-author Professor Steven Tingay from Curtin University and ICRAR.

"As well as the plasma that's fuelling the large plumes of material the galaxy is famous for, we found evidence of a galactic wind that's never been seen -- this is basically a high speed stream of particles moving away from the galaxy's core, taking energy and material with it as it impacts the surrounding environment," he said.

Read more at Science Daily

Electrical and chemical coupling between Saturn and its rings

Spacecraft Cassini with an instrument from the Swedish Institute of Space Physics on board (in red circle) passed through Saturn’s atmosphere.
A Langmuir probe, developed in Sweden and flown to Saturn on the Cassini spacecraft, has made exciting discoveries in the atmosphere of the planet. Jan-Erik Wahlund at the Swedish Institute of Space Physics in Uppsala and his colleagues show that there is a strong coupling, both chemically and electrically, between the atmosphere of Saturn and its rings. These research results have now been published in the journal Science.

In April the American space agency NASA put the Cassini spacecraft into an orbit that took it right through the narrow gap between the innermost visible ring (the D-ring) and the planet itself, so close that it passed through the outer parts of Saturn's atmosphere. Cassini made 22 such orbits, and on 15 September, according to plan, Cassini was sent down into the gas masses of Saturn and burned up. During all of these orbits most of the instruments on board made detailed measurements.

Now the scientific results are starting to take shape, and the results from the Swedish instrument are the first to be published in the well-known journal Science. The instrument, a so-called Langmuir probe, was developed at the Swedish Institute of Space Physics in Uppsala. The upper atmosphere of Saturn is charged and consists primarily of hydrogen and hydrogen ions. The Langmuir probe can be compared with a weather station for electrically charged gas: it measures the gas's density, temperature and velocity. It also measures the particles' energy and gives a rough estimate of what the gas consists of.
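As a rough illustration of how such a probe yields a temperature (a textbook relation, not the Cassini team's actual analysis pipeline): in the electron-retardation part of a voltage sweep, the collected current grows roughly as exp(e(V - V_plasma)/(kB * Te)), so the slope of ln(I) against bias voltage V gives the electron temperature. A minimal sketch with synthetic numbers:

    # Toy electron-temperature estimate from a Langmuir-probe I-V sweep.
    # Assumes the exponential (electron-retardation) region, where ln(I) is linear in V
    # with slope e/(kB*Te).
    import math

    e, kB = 1.602e-19, 1.381e-23            # electron charge (C), Boltzmann constant (J/K)
    volts = [-2.0, -1.5, -1.0, -0.5, 0.0]   # hypothetical bias voltages (V)
    Te_true = 1.0 * e / kB                  # pretend plasma with a 1 eV electron temperature
    amps = [1e-6 * math.exp(e * v / (kB * Te_true)) for v in volts]  # synthetic currents

    # Slope of ln(I) vs V via a simple least-squares fit:
    n = len(volts)
    mean_v = sum(volts) / n
    mean_lnI = sum(math.log(i) for i in amps) / n
    slope = (sum((v - mean_v) * (math.log(i) - mean_lnI) for v, i in zip(volts, amps))
             / sum((v - mean_v) ** 2 for v in volts))
    Te_kelvin = e / (kB * slope)
    print(f"Electron temperature ~ {Te_kelvin:.0f} K (~{Te_kelvin * kB / e:.1f} eV)")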

"The first results are surprising," says Jan-Erik Wahlund, IRF, principle investigator and responsible for the Langmuir probe on Cassini.

Strong variations in density indicate that the electrically charged part of Saturn's atmosphere (the so-called ionosphere) has a strong coupling to the visible rings that consist primarily of ice particles. The ice particles are also electrically charged.

"It is as though the small ice particles in the D-ring suck up electrons from the ionosphere," says Jan-Erik Wahlund. "As a result of the coupling, electrical flows of gas to and from the rings along the magnetic field of Saturn cause the greatest variations in density."

Read more at Science Daily

How much can 252-million-year-old ecosystems tell us about modern Earth? A lot

One of the study's authors, Brandon Peecook, doing fieldwork.
A whopping 252 million years ago, Earth was crawling with bizarre animals, including dinosaur cousins resembling Komodo dragons and bulky early mammal-relatives, millions of years before dinosaurs even existed. New research shows us that the Permian equator was both a literal and figurative hotspot: it was, for the most part, a scorching hot desert, on top of having a concentration of unique animals. Here, you could find some of the first tetrapods to emerge from the water and live on land, living right next to newly evolved, dinosaur- and crocodile-like reptiles. Many of these species were wiped out in an extinction which changed life on the planet forever.

In a paper published in Earth-Science Reviews, paleontologists studied fossil sites all over the world from the late Permian to get an idea of what lived where. They found an unusual assortment of species near the equator, and one that is comparable to the modern tropics -- except that the array of large, carnivorous reptiles would look very out of place anywhere on Earth today.

"The tropics act as a diversity center -- stuff that has gone extinct elsewhere is still alive there, and there's new stuff evolving," explains Postdoctoral Researcher Brandon Peecook, co-author of the paper. While it makes sense that the warm, wet rainforests we see now have incredible diversity, it seems counterintuitive that these fiery, hot deserts were home to an exceptional range of species, especially because diversity at the equator fluctuates so much historically.

These findings about the late Permian raise the question, "Why are we seeing so much biodiversity at the equator?" This is something scientists have yet to answer, but it shows us that biodiversity in the tropics isn't intuitive, and isn't consistent. What scientists know for sure is that regardless of desert or rainforest, climate change negatively impacts living things.

This unequaled comparison of Permian climate and species distribution to modern events shows us that while many changes are natural and we see them throughout our planet's history, drastic changes like this can be triggered by something much larger -- volcanic activity likely caused this in the Permian, and human activity is the suspected culprit today. After the Permian extinction, "it was almost as though the slate had been wiped clean, and all the ecosystems had to rebuild," says Peecook. This event altered life permanently and while new animals evolved and thrived, the process of recovery took millions of years, and the animals that were lost never returned.

Read more at Science Daily

Life's building blocks observed in spacelike environment

Low-energy electron impact mediates the creation of new complex organic molecules, such as ethanol, in astrophysical/planetary model ices containing methane and oxygen; while some of the new species desorb as ions, many remain in the surface ices.
Where do the molecules required for life originate? It may be that small organic molecules first appeared on Earth and were later combined into larger molecules, such as proteins and carbohydrates. But a second possibility is that they originated in space, possibly within our solar system. A new study, published this week in the Journal of Chemical Physics, from AIP Publishing, shows that a number of small organic molecules can form in a cold, spacelike environment full of radiation.

Investigators at the University of Sherbrooke in Canada have created simulated space environments in which thin films of ice containing methane and oxygen are irradiated by electron beams. When electrons or other forms of radiation impinge on so-called molecular ices, chemical reactions occur and new molecules are formed. This study used several advanced techniques including electron stimulated desorption (ESD), X-ray photoelectron spectroscopy (XPS) and temperature programmed desorption (TPD).

The experiments were carried out under vacuum conditions, which are both required for the analysis techniques employed and mimic the high-vacuum conditions of outer space. Frozen films containing methane and oxygen used in these experiments further mimic a spacelike environment, since various types of ice (not just frozen water) form around dust grains in the dense and cold molecular clouds that exist in the interstellar medium. These types of icy environments also exist on objects in the solar system, such as comets, asteroids and moons.

All of these icy surfaces in space are subjected to multiple forms of radiation, often in the presence of magnetic fields, which accelerate charged particles from the stellar (solar) wind toward these frozen objects. Previous studies investigated chemical reactions that might occur in space environments through the use of ultraviolet or other types of radiation, but this is a first detailed look at the role of secondary electrons.

Copious amounts of secondary electrons are produced when high-energy radiation, such as X-rays or heavy particles, interacts with matter. These electrons, also known as low-energy electrons, or LEEs, are still energetic enough to induce further chemistry. The work reported this week investigated LEEs interacting with icy films. Earlier studies by this group considered positively charged reaction products ejected from ices irradiated by LEEs, while the work reported this week extended the study to include ejected negative ions and new molecules that form but remain embedded in the film.

The research group found that a variety of small organic molecules were produced in icy films subjected to LEEs. Propylene, ethane and acetylene were all formed in films of frozen methane. When a frozen mixture of methane and oxygen was irradiated with LEEs, they found direct evidence that ethanol was formed.

Read more at Science Daily

Dec 11, 2017

Why meteoroids explode before they reach Earth

Our atmosphere is a better shield from meteoroids than researchers thought, according to a new paper published in Meteoritics & Planetary Science.

When a meteor comes hurtling toward Earth, the high-pressure air in front of it seeps into its pores and cracks, pushing the body of the meteor apart and causing it to explode.

"There's a big gradient between high-pressure air in front of the meteor and the vacuum of air behind it," said Jay Melosh, a professor of Earth, Atmospheric and Planetary Sciences at Purdue University and co-author of the paper. "If the air can move through the passages in the meteorite, it can easily get inside and blow off pieces."

Researchers knew that meteoroids often blow up before they reach Earth's surface, but they didn't know why. Melosh's team looked to the 2013 Chelyabinsk event, when a meteoroid exploded over Chelyabinsk, Russia, to explain the phenomenon.

The explosion came as a surprise, releasing energy comparable to a small nuclear weapon. When it entered Earth's atmosphere, it created a bright fireball. Minutes later, a shock wave blasted out nearby windows, injuring hundreds of people.

The meteoroid weighed around 10,000 tons, but only about 2,000 tons of debris were recovered, which meant something happened in the upper atmosphere that caused it to disintegrate. To solve the puzzle, the researchers used a unique computer code that allows both solid material from the meteor body and air to exist in any part of the calculation.

"I've been looking for something like this for a while," Melosh said. "Most of the computer codes we use for simulating impacts can tolerate multiple materials in a cell, but they average everything together. Different materials in the cell use their individual identity, which is not appropriate for this kind of calculation."

This new code allowed the researchers to push air into the meteoroid and let it percolate, which lowered the strength of the meteoroid significantly, even if it had been moderately strong to begin with.

While this mechanism may protect Earth's inhabitants from small meteoroids, large ones likely won't be bothered by it, he said. Iron meteoroids are much smaller and denser, and even relatively small ones tend to reach the surface.

From Science Daily