It’s not enough to design new drugs.

For drugs to be effective, they have to be delivered safely and intact to affected areas of the body. And drug delivery, much like drug design, is an immensely complex task. Cutting-edge research and development like that conducted at the U.S. Department of Energy’s Oak Ridge National Laboratory can help solve some of the challenges associated with drug delivery.

In fact, ORNL researchers and collaborators at Wayne State University recently used a unique combination of experimentation and simulation to shed light on the design principles for improved delivery of RNA drugs, which are promising candidates in the treatment of a number of medical conditions including cancers and genetic disorders. Specifically, the research team discovered that the motions of a tRNA (or transfer RNA) model system can be enhanced when coupled with nanodiamonds, or diamond nanoparticles approximately 5 to 10 nanometers in size.

Nanodiamonds are good delivery candidates due to their spherical shape, biocompatibility and low toxicity. And because their surfaces can be easily tailored to facilitate the attachment of various medicinal molecules, nanodiamonds have tremendous potential for the delivery of a vast range of therapies.

The discovery involved ORNL’s Spallation Neutron Source, which provides the most intense pulsed neutron beams in the world for scientific research and industrial development, and ORNL’s Titan supercomputer, the nation’s most powerful for open science — a one-two punch for illuminating the physical properties of potential drugs that inform new design principles for safer, improved delivery platforms.

By comparing the SNS neutron scattering data with the data from the team’s molecular dynamics simulations on Titan, the researchers confirmed that nanodiamonds enhance the dynamics of tRNA in the presence of water. This cross-disciplinary research was published in The Journal of Physical Chemistry B.

The best of both worlds

The project began when ORNL’s P. Ganesh and Xiang-Qiang Chu of Wayne State University wondered how the water-phobic surfaces of nanoparticles alter the dynamics of biomolecules coated with water, and if it might be something that they could eventually control. They then formed a team including Gurpreet Dhindsa, Hugh O’Neill, Debsindhu Bhowmik and Eugene Mamontov of ORNL and Liang Hong of Shanghai Jiao Tong University in China to observe the motions of hydrogen atoms from the model system, tRNA, in water using SNS’s BASIS neutron backscattering spectrometer, SNS beam line 2.

Hydration is essential for biomolecules to function, and neutrons are excellent at distinguishing between the motions of hydration water molecules and the biomolecule they are surrounding. Therefore, by measuring the atoms’ neutron scattering signals, the team was able to discern the movement of tRNA in water, providing valuable insight into how the large molecule relaxes in different environmental conditions.
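
To give a rough sense of how such measurements connect to simulation, the sketch below (written for this article, not taken from the team’s code) estimates the incoherent intermediate scattering function of hydrogen atoms from a toy molecular dynamics trajectory; a backscattering spectrometer such as BASIS effectively probes the Fourier transform of this quantity. The trajectory is synthetic random-walk data and the function names are illustrative.

```python
# Hedged sketch (not the team's actual analysis code): estimate the incoherent
# intermediate scattering function I(Q, t) for hydrogen atoms from an MD
# trajectory, the quantity whose time Fourier transform a backscattering
# spectrometer such as BASIS measures as S(Q, w).  Positions here are synthetic;
# a real study would read them from an MD engine's output.
import numpy as np

rng = np.random.default_rng(0)

n_atoms, n_frames, dt_ps = 200, 500, 1.0      # toy trajectory dimensions
# Fake hydrogen trajectories: a random walk mimicking diffusive motion (nm).
steps = rng.normal(scale=0.01, size=(n_frames, n_atoms, 3))
positions = np.cumsum(steps, axis=0)

def intermediate_scattering(positions, q_mag, max_lag):
    """Isotropically averaged incoherent I(Q, t) ~ <sin(Q r)/(Q r)> over atoms and time origins."""
    lags = np.arange(1, max_lag)
    iqt = np.empty(len(lags))
    for i, lag in enumerate(lags):
        disp = positions[lag:] - positions[:-lag]        # displacements over this time lag
        r = np.linalg.norm(disp, axis=-1)                # |r(t0 + t) - r(t0)|
        iqt[i] = np.mean(np.sinc(q_mag * r / np.pi))     # np.sinc(x) = sin(pi x)/(pi x)
    return lags, iqt

lags, iqt = intermediate_scattering(positions, q_mag=7.0, max_lag=200)   # Q in nm^-1
for lag, value in zip(lags[::50], iqt[::50]):
    print(f"t = {lag * dt_ps:6.1f} ps   I(Q,t) = {value:.3f}")
```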

After the team compared the results for the individual atoms, it was clear that the nanodiamonds were having a profound effect on their companion RNA molecules. The results were somewhat baffling because similar experiments had shown that companion solid materials (such as nanodiamonds) tended to dampen biomolecule dynamics. Surprisingly, however, the nanodiamonds did the opposite for tRNA.

“Scientists are always interested in the bio-nano interactions,” said Chu. “While the interfacial layer of the bio-nano systems has very distinctive properties, it is very hard to study this mysterious zone without neutron scattering, which only sees hydrogen.”

To realize the potential of nanodiamonds in the delivery of biomolecules using tRNA as a model, the team turned to Titan to shed a much-needed light on the underlying physics.

“Molecular dynamics simulation can really tell those stories that current experimental advancement might not be able to,” said Bhowmik of ORNL’s Computational Science and Engineering Division, who set up and conducted the simulations alongside Monojoy Goswami of the laboratory’s Computer Science and Mathematics Division and Hong of Shanghai Jiao Tong University. “By combining these two techniques, you can enter a whole new world.”

These simulations revealed that the “weak dynamic heterogeneity” of RNA molecules in the presence of nanodiamonds was responsible for the enhanced effect. In other words, the interactions among the nanodiamonds, water and the RNA molecule form a water layer on the nanodiamond surface, which then blocks strong contact between the RNA and the nanodiamond.

Since RNA is hydrophilic, or “likes water,” the RNA molecules at the nanodiamond surface swell with excess hydration, which weakens their heterogeneous dynamics.

“You can fine-tune these dynamics with chemical functionalization on the nanodiamond surface, further enhancing its effectiveness,” said Goswami.

The findings will likely guide future studies not only on the potential of nanodiamonds in drug delivery but also on fighting bacteria and treating viral diseases.

Building the bridge

Using simulation to confirm and gain insight into experiments is nothing new. But mimicking large-scale systems precisely is often a challenge, and the lack of quantitative consistency between the two disciplines makes data comparison difficult and answers more elusive to researchers.

This lack of precision, and by extension lack of consistency, is largely driven by the uncertainty surrounding force-field parameters or the interaction criteria between different particles. The exact parameters are scarce for many macromolecules, often forcing researchers to use parameters that closely, but not exactly, match the experiment.

Misjudging the precision of these parameters can have major consequences for how the experimental results are interpreted.

To ensure the calculations were correct, Goswami worked with Jose Borreguero and Vickie Lynch, both of ORNL’s Neutron Data Analysis and Visualization Division and Center for Accelerated Materials Modeling, to develop an optimization workflow built on the Pegasus workflow management system. The workflow compares molecular dynamics simulations with neutron scattering data and refines the simulation parameters until the results match the experiment within its precision.

“Using the Pegasus workflow to run simulations sampling the force-field parameter space saved time and eliminated input errors,” said Lynch.
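
The sketch below illustrates the general shape of such a parameter-refinement loop, not the actual Pegasus/BEAM pipeline: it sweeps a hypothetical force-field parameter, generates a simulated spectrum for each value with a stand-in run_simulation() function, and keeps the value that best matches a synthetic “measured” spectrum.

```python
# Hedged sketch of the general idea behind such a workflow (not the actual
# Pegasus/BEAM pipeline): sweep a force-field parameter, generate a simulated
# observable for each value, and keep the parameter that best matches the
# measured neutron spectrum.  run_simulation() stands in for a real MD engine;
# the "experimental" curve here is synthetic.
import numpy as np

energies = np.linspace(-1.0, 1.0, 101)                   # energy-transfer grid (arbitrary units)

def run_simulation(epsilon):
    """Hypothetical stand-in: return a simulated S(Q, w) for one parameter value."""
    width = 0.1 + 0.2 * epsilon                           # linewidth grows with interaction strength
    return width / (np.pi * (energies**2 + width**2))     # simple Lorentzian line shape

def chi_square(simulated, measured, sigma):
    return np.sum(((simulated - measured) / sigma) ** 2)

# Pretend the experiment corresponds to epsilon = 0.55, plus noise.
rng = np.random.default_rng(1)
measured = run_simulation(0.55) + rng.normal(scale=0.02, size=energies.size)
sigma = np.full_like(measured, 0.02)

candidates = np.linspace(0.1, 1.0, 19)                    # force-field parameter space to sample
scores = [chi_square(run_simulation(eps), measured, sigma) for eps in candidates]
best = candidates[int(np.argmin(scores))]
print(f"best-fit interaction parameter: {best:.2f}")
```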

These parameters also helped researchers better characterize the nanodiamond-water interactions and tRNA dynamics in the presence of nanodiamonds.

The researchers then developed an automated system capable of optimizing parameters across a wide spectrum of simulation systems and neutron experiments, an effort that will be of great worth to similar experiments going forward. This new workflow is also compatible with the laboratory’s Compute and Data Environment for Science (CADES), which assists experimentalists with the analysis of vast quantities of data.

“Users of the CADES infrastructure can carry out the optimization of the simulations within the Bellerophon Environment for the Analysis of Materials, in active development at ORNL,” said Borreguero. The Bellerophon Environment for the Analysis of Materials (BEAM) is an end-to-end workflow software system, developed at ORNL, that gives users friendly, remote access to robust data storage and compute capabilities at CADES and the Oak Ridge Leadership Computing Facility, home of Titan, for scalable data analysis and modeling.

It’s these in-house resources that make ORNL a world leader in experimentation, modeling and the nexus in between and that make discoveries like this possible.

, ,

There has been much recent talk about how to target the rising tide of antibiotic resistance across the world, one of the biggest threats to global health today.

While there is no doubting the size of the problem facing scientists, healthcare professionals and the pharmaceutical industry, there are innovative ways we can target antibiotic resistance in the short term, which are discussed in three articles published in Essays in Biochemistry.

With only a few antibiotics in development and a long drug development process (often 10-15 years), there is concern that what is being done to combat antibiotic resistance may be ‘too little, too late’.

“If bacteria continue developing resistance to multiple antibiotics at the present rate, at the same time as the antibiotic pipeline continues to dry up, there could be catastrophic costs to healthcare and society globally,” said senior co-author on one of the articles, Dr Tony Velkov, an Australian National Health and Medical Research Council (NHMRC) Career Development Fellow from Monash University, Victoria, Australia.

While any antimicrobial resistance is concerning, the increasing incidence of antibiotic-resistant Gram-negative bacteria has become a particular problem: strains resistant to multiple antibiotics are becoming common, and no new drugs to treat these infections (e.g., carbapenem-resistant Enterobacteriaceae) will be available in the near future. These Gram-negative bacteria are considered the most critical priority in the list, recently released by the World Health Organization, of the 12 families of bacteria that pose the greatest threat to human health.

The reasons for the high levels of antimicrobial resistance observed in these critical Gram-negative organisms are explained in another paper in the same issue written by the Guest Editor of the journal, Dr Rietie Venter, University of South Australia, Adelaide, and colleagues. According to the authors, one of the main contributing factors to the increased resistance observed in Gram-negative bacteria is the permeability barrier caused by their additional outer membrane.

An innovative strategy that is gaining momentum is the synergistic use of antibiotics with FDA-approved non-antibiotics. In this approach, an FDA-approved non-antibiotic drug is combined with a specific antibiotic; the non-antibiotic helps the antibiotic breach the outer membrane barrier and so restores its activity. The Monash University authors discuss how combining antibiotics with other non-antibiotic drugs or compounds can boost their effectiveness against Gram-negative ‘superbugs’.

For example, loperamide, an anti-diarrheal medication sold in most pharmacies, enhances the effectiveness of eight different antibiotics (all in the tetracycline class). In particular, when added to the tetracycline antibiotic minocycline, along with the Parkinson’s disease drug benserazide, it significantly increased antibiotic activity against multi-drug resistant Pseudomonas aeruginosa, a causative agent in hospital-acquired infections such as ventilator-associated pneumonia.

Polymyxins are a class of antibiotics that target Gram-negative bacterial infections and have traditionally been used as a last resort to treat serious infections such as those caused by the Gram-negative ‘superbugs’ Klebsiella pneumoniae, P. aeruginosa and Acinetobacter baumannii. Resistance to polymyxins is not common, but in late 2015 the first transferable colistin (polymyxin E) resistance gene, the plasmid-borne mcr-1 gene, was discovered. This caused significant concern, because once resistance to polymyxins is established, often no other treatments are available.

A number of researchers, including the team based at Monash University, have been testing different combinations of drugs or compounds with polymyxins to try and improve their effectiveness against these bacterial ‘superbugs’.
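
One standard way such combinations are judged, shown in the hedged sketch below with invented MIC values, is the fractional inhibitory concentration (FIC) index from a checkerboard assay; an index of 0.5 or less is conventionally read as synergy. This is a general illustration, not necessarily the exact analysis used by the Monash team.

```python
# Hedged sketch of a standard way (not necessarily the authors' exact method)
# to quantify whether two agents act synergistically: the fractional inhibitory
# concentration (FIC) index from a checkerboard assay.  MIC values below are
# invented for illustration (mg/L).
def fic_index(mic_a_alone, mic_a_combo, mic_b_alone, mic_b_combo):
    """FIC index = FIC(A) + FIC(B); a value of 0.5 or less is conventionally read as synergy."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

# Example: a polymyxin paired with a hypothetical non-antibiotic adjuvant.
index = fic_index(mic_a_alone=8.0, mic_a_combo=1.0,     # antibiotic MIC drops 8-fold in combination
                  mic_b_alone=256.0, mic_b_combo=32.0)  # adjuvant "MIC" also drops 8-fold
print(f"FIC index = {index:.2f} -> {'synergy' if index <= 0.5 else 'no synergy'}")
```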

“Without new antibiotics in the near future, we must explore innovative approaches to preserve the clinical utility of important last-line antibiotics such as the polymyxins,” commented senior co-author on the paper, Professor Jian Li, Head of the Laboratory of Antimicrobial Systems Pharmacology at Monash University, Victoria, Australia.

Some interesting findings have ensued, with a number of different combinations having a beneficial effect. Some notable examples that increased antibiotic activity when combined with polymyxin B include: ivacaftor and lumacaftor, two new drugs used to treat cystic fibrosis; and closantel, a drug used to treat parasitic worm infections.

Another interesting combination that has shown promise against methicillin-resistant Staphylococcus aureus (MRSA), according to Schneider and co-authors, is combining the antibiotics ampicillin or oxacillin with berberine. Berberine is extracted from the roots, stems and bark of plants such as barberry.

In another paper in the same issue of Essays in Biochemistry, Dr Mark Blaskovich, Program Coordinator of the Community for Open Antimicrobial Drug Discovery, and colleagues from the University of Queensland, Brisbane, Australia, describe the key ways they believe antimicrobial resistance can be targeted.

“In the short term, the greatest potential for reducing further development of antimicrobial resistance lies in developing a rapid test that can quickly tell whether or not you have a bacterial infection (as opposed to a viral cold or flu), and whether you really need an antibiotic,” commented Blaskovich.

“Even better if the test could say what type of bacteria, and what types of antibiotics it is resistant to. You could then treat an infection immediately with the appropriate antibiotic, rather than the trial and error method now used. These tests could be ready within the next 5 years, and would have a huge impact on reducing unnecessary antibiotic use, preserving our existing antibiotics and reducing the spread of antibiotic resistance.”

Regarding antibiotics in particular, Blaskovich and colleagues describe a number of possible strategies to pursue. The first is to improve existing antibiotics. For example, the authors recently created a modified version of the antibiotic vancomycin to increase its potency and reduce its toxic side effects.

Another option is to rediscover ‘old’ antibiotics. In the 1950s and 60s many potential antibiotic drugs were described in the scientific literature, but due to so many choices being available at the time, only some were developed for human use. An example of this is octapeptins, which are newly rediscovered antibiotics that are now being developed to combat Gram-negative ‘superbugs’.

Repurposing drugs originally developed and approved for other uses has also had some success. In 2005, the Drugs for Neglected Diseases initiative identified fexinidazole as a potential treatment for sleeping sickness, and it is now undergoing a Phase III trial. The drug had been developed as an antimicrobial in the 1970s, but only reached pre-clinical development.

In addition to the above, researchers are looking for new, untested sources of antimicrobial activity to try to develop new drugs. A recent success in this area was teixobactin, a new antibiotic developed by NovoBiotic Pharmaceuticals and discovered by using an ‘iChip’ to culture and isolate soil bacteria in situ.

A final option, mentioned by Blaskovich and colleagues, is crowdsourcing new antibiotics. Using this approach, the Community for Open Antimicrobial Drug Discovery is searching for new chemical diversity by screening compounds sourced from academic chemists around the world.

“It’s hard to predict which one of these methods will be the most successful in the future, but we really need to be trying all of them to have any chance of overcoming antibiotic resistance,” said Blaskovich.

“Non-antibiotic strategies are just as important, such as developing vaccines or probiotic therapies to prevent infections, as they can help to reduce the overuse of antibiotics. They will never completely replace antibiotics, but can help to preserve our existing antibiotics so they still work when needed.”

Overall, these articles and others in the new antimicrobial resistance themed issue of Essays in Biochemistry give us hope that there are viable solutions being developed to this seemingly insurmountable global problem. It is important that all possible avenues are considered, as some less obvious approaches may end up being sources of future success.

Dr Derry Mercer, Principal Scientist at NovaBiotics Ltd, a company that specialises in developing new antimicrobials, commented: “Research and development into new antimicrobials remains a vitally important pursuit for combatting the problem of antibiotic resistance, but alternative approaches to this problem are also urgently needed.”

He added: “Such methods include those described in the papers in the latest issue of Essays in Biochemistry, as well as vaccine development and bacteriophage therapy, to name a few. Approaches that target microbial virulence, for example targeting biofilms and/or quorum sensing, rather than more traditional directly antimicrobial drugs should also be urgently examined.”

, ,

In movies and TV shows, dolphins are often portrayed as heroes who save humans through remarkable feats of strength and tenacity. Now dolphins could save the day for humans in real life, too — with the help of emerging technology that can measure thousands of proteins and an improved database full of genetic data.

“Dolphins and humans are very, very similar creatures,” said NIST’s Ben Neely, a member of the Marine Biochemical Sciences Group and the lead on a new project at the Hollings Marine Laboratory, a research facility in Charleston, South Carolina, that includes the National Institute of Standards and Technology (NIST) as one of its partner institutions. “As mammals, we share a number of proteins and our bodies function in many similar ways, even though we are terrestrial and dolphins live in the water all their lives.”

Neely and his colleagues have just finished creating a detailed, searchable index of all the proteins encoded by the bottlenose dolphin genome. A genome is the complete set of genetic material present in an organism. Neely’s project is built on years of marine mammal research and aims to provide a new level of bioanalytical measurements. The results of this work will aid wildlife biologists, veterinary professionals and biomedical researchers.

Protein Maps Could Help Dolphins and Humans

Although a detailed map of the bottlenose dolphin (Tursiops truncatus) genome was first compiled in 2008, recent technological breakthroughs enabled the creation of a new, more exhaustive map of all of the proteins produced by the dolphins’ DNA.

Neely led the process to generate the new genome with the help of colleagues at the Hollings Marine Laboratory. For this project, the initial genomic sequencing and assembly were completed by Dovetail Genomics, a private U.S.-based company. Next, the genome was annotated by the National Center for Biotechnology Information at the National Library of Medicine (NCBI) using previously deposited data generated in large part by the National Oceanic and Atmospheric Administration’s National Centers for Coastal Ocean Science Marine Genomics Core.

“Once you can identify all of the proteins and know their amounts as expressed by the genome,” Neely explained, “you can figure out what’s going on in the bottlenose dolphin’s biological systems in this really detailed manner.”
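
The toy example below, with an invented protein sequence and rounded residue masses, illustrates the core idea behind identifying proteins from a genome: digest the genome-predicted sequences into tryptic peptides in silico and match a measured peptide mass against them. It is an illustration of the principle, not NIST’s analysis pipeline.

```python
# Hedged illustration (not NIST's pipeline) of protein identification:
# digest a genome-predicted protein sequence into tryptic peptides in silico,
# then match a measured peptide mass against them.  The sequence is invented
# and the residue masses are rounded monoisotopic values (Da).
RESIDUE_MASS = {
    "G": 57.02, "A": 71.04, "S": 87.03, "P": 97.05, "V": 99.07,
    "T": 101.05, "L": 113.08, "D": 115.03, "K": 128.09, "E": 129.04,
    "M": 131.04, "F": 147.07, "R": 156.10, "Y": 163.06,
}
WATER = 18.01   # mass of water added when the peptide is released by hydrolysis

def tryptic_digest(sequence):
    """Cut after K or R (ignoring the proline rule for simplicity)."""
    peptides, current = [], ""
    for residue in sequence:
        current += residue
        if residue in "KR":
            peptides.append(current)
            current = ""
    if current:
        peptides.append(current)
    return peptides

def peptide_mass(peptide):
    return sum(RESIDUE_MASS[r] for r in peptide) + WATER

protein = "MASKGLDVERFAYTTPLK"    # invented toy "genome-predicted" protein
measured_mass = 687.4             # pretend mass reported by the mass spectrometer
for pep in tryptic_digest(protein):
    if abs(peptide_mass(pep) - measured_mass) < 0.5:
        print(f"peptide {pep} matches the measured mass of {measured_mass} Da")
```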

Neely’s study is part of an emerging field called proteomics. In the case of dolphins, proteomic work has a wide variety of potential applications.

The zoo and aquarium industry, which generates revenues of approximately $16 billion a year, could use it to improve the care of bottlenose dolphins.

In addition, improved dolphin proteomics could improve assessments of wild dolphin populations, and provide an immense amount of data on environmental contaminants and the safety and health of the world’s oceanic food web.

Comparing the proteins of humans and these other mammals is already providing researchers with a wealth of new information about how the human body works. Those findings could eventually be used to develop new, more precise treatment methods for common medical problems.

As marine mammals descend, they shut off the blood flow to many of their organs, which has long puzzled and intrigued biologists. In contrast, if blood stops flowing to the organs of a human’s body for even a few seconds, the result can be a stroke, kidney failure, or even death.

Studies have recently revealed that lesser-known proteins in the blood of marine mammals may be playing a big role in the dives by protecting bottlenose dolphins’ kidneys and hearts from damage when blood flow and oxygen flow start and stop repeatedly during those underwater forays.

One of these proteins is known as vanin-1. Humans produce vanin-1, but in much smaller amounts. Researchers would like to gather more information on whether or not elevating levels of vanin-1 may offer protection to kidneys.

“There’s this gap in the knowledge about genes and the proteins they make. We are missing a huge piece of the puzzle in how these animals do what they do,” said Mike Janech from the Medical University of South Carolina. His group has been researching vanin-1 and has identified numerous other potential biomedical applications for the dolphin genome just created by NIST.

“Genes carry the information of life,” Janech said. “But proteins execute the functions.”

From Macro to Micro

Vanin-1 is just one example of how genomic information about this mammalian cousin might prove useful. There may be hundreds of other similar applications, including some related to the treatment of high blood pressure and diabetes.

This represents another avenue for biomimicry, which seeks solutions to human problems by examining and imitating nature’s patterns and strategies. In the past, biomimicry focused mainly on the structural aspects of animal body parts, such as arms and legs, or on functional patterns, such as how noses sniff. But as the study of DNA has evolved, so too has our ability to examine what is happening at the most minute levels within another mammal’s body.

“We are now entering what could be called the post-model-organism era,” Neely said. Instead of looking only for a structure to model, imitate or learn from, scientists are looking at the complete molecular landscape of genes and proteins of these creatures for model processes, too. “With abundant genomic resources it is now possible to study non-model organisms with similar molecular machinery in order to tackle difficult biomedical problems.”

Data, New Technology and High-Quality Tissue Samples

To gather the needed protein information, Neely and his team used a specimen provided by the National Marine Mammal Tissue Bank (NMMTB), the longest running project of NIST’s Marine Environmental Specimen Bank. Half of the approximately 4,000 marine mammal specimens in the NMMTB are collected as a part of the Marine Mammal Health and Stranding Response Program. The specimen provided for Neely’s study was known to originate very close to the Hollings Marine Lab.

The new, state-of-the-art genome immediately began providing new biochemical insights. Studies at NIST are ongoing to validate the updated protein maps using an ultra-high-resolution tribrid mass spectrometer, which is the most powerful tool available to identify and quantify proteins.

Other Mammal Proteins Seem Promising, Too

Neely said the results demonstrate the utility of re-mapping genomes with the improved bioanalytical capabilities provided by new genomic sequencing technology coupled to high-resolution mass spectrometers. The data from this project will also be available in the public domain so that the results will be easy for others to access and use for diverse applications and research.

This is the first of many such projects planned by the Charleston group in which new analytical techniques will be applied to marine animals. Studying other diving marine mammals can improve our understanding of the molecular mechanisms involved in diving. Sea lion proteins may also have much to tell us about metastatic cancer, a prospect that especially intrigues Neely and his colleagues.

As a research chemist, Neely says he had not spent much time observing marine mammals as part of his work before now. He does encounter dolphins when he goes out surfing along the Carolina coastline, though.

“It’s amazing to think that we are at a point where cutting-edge research in marine mammals can directly advance human biomedical discoveries,” he said.

, ,

Early this year, about 30 neuroscientists and computer programmers got together to improve their ability to read the human mind.

The hackathon was one of several that researchers from Princeton University and Intel, the largest maker of computer processors, organized to build software that can tell what a person is thinking in real time, while the person is thinking it.

The collaboration between researchers at Princeton and Intel has enabled rapid progress on the ability to decode digital brain data, scanned using functional magnetic resonance imaging (fMRI), to reveal how neural activity gives rise to learning, memory and other cognitive functions.

A review of computational advances toward decoding brain scans appears in the journal Nature Neuroscience, authored by researchers at the Princeton Neuroscience Institute and Princeton’s departments of computer science and electrical engineering, together with colleagues at Intel Labs, a research arm of Intel.

“The capacity to monitor the brain in real time has tremendous potential for improving the diagnosis and treatment of brain disorders as well as for basic research on how the mind works,” said Jonathan Cohen, the Robert Bendheim and Lynn Bendheim Thoman Professor in Neuroscience, co-director of the Princeton Neuroscience Institute, and one of the founding members of the collaboration with Intel.

Since the collaboration’s inception two years ago, the researchers have whittled the time it takes to extract thoughts from brain scans from days down to less than a second, said Cohen, who is also a professor of psychology.

One type of experiment that is benefiting from real-time decoding of thoughts occurred during the hackathon. The study, designed by J. Benjamin Hutchinson, a former postdoctoral researcher in the Princeton Neuroscience Institute who is now an assistant professor at Northeastern University, aimed to explore activity in the brain when a person is paying attention to the environment, versus when his or her attention wanders to other thoughts or memories.

In the experiment, Hutchinson asked a research volunteer — a graduate student lying in the fMRI scanner — to look at a detail-filled picture of people in a crowded café. From his computer in the console room, Hutchinson could tell in real time whether the graduate student was paying attention to the picture or whether her mind was drifting to internal thoughts. Hutchinson could then give the graduate student feedback on how well she was paying attention by making the picture clearer and stronger in color when her mind was focused on the picture, and fading the picture when her attention drifted.
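
The sketch below, which uses synthetic data rather than anything from the study, shows the basic shape of such real-time decoding: train a classifier on brain volumes with known attention labels, then score each new volume as it arrives and use the resulting probability to drive feedback.

```python
# Hedged sketch (with synthetic data, not the study's code) of real-time
# attention decoding: train a classifier on labeled brain volumes, then score
# each incoming volume to estimate whether attention is on the picture or
# wandering, which could in turn drive the feedback (fading the picture in/out).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 1000

def fake_volume(attending):
    """Synthetic fMRI volume: a small mean shift in a subset of 'attention' voxels."""
    signal = np.zeros(n_voxels)
    signal[:50] = 0.5 if attending else -0.5
    return signal + rng.normal(size=n_voxels)

# Training run: volumes with known attention labels (e.g., from instructed task blocks).
labels = rng.integers(0, 2, size=200)
train = np.stack([fake_volume(bool(y)) for y in labels])
clf = LogisticRegression(max_iter=1000).fit(train, labels)

# "Real-time" phase: score each incoming volume as it arrives.
for t in range(5):
    truth = bool(rng.integers(0, 2))
    p_attend = clf.predict_proba(fake_volume(truth)[None, :])[0, 1]
    print(f"TR {t}: P(attending) = {p_attend:.2f}  (true state: {'attending' if truth else 'wandering'})")
```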

The ongoing collaboration has benefited neuroscientists who want to learn more about the brain and computer scientists who want to design more efficient computer algorithms and processing methods to rapidly sort through large data sets, according to Theodore Willke, a senior principal engineer at Intel Labs in Hillsboro, Oregon, and head of Intel’s Mind’s Eye Lab. Willke directs Intel’s part of the collaborative team.

“Intel was interested in working on emerging applications for high-performance computing, and the collaboration with Princeton provided us with new challenges,” Willke said. “We also hope to export what we learn from studies of human intelligence and cognition to machine learning and artificial intelligence, with the goal of advancing other important objectives, such as safer autonomous driving, quicker drug discovery and earlier detection of cancer.”

Since the invention of fMRI two decades ago, researchers have been improving the ability to sift through the enormous amounts of data in each scan. An fMRI scanner captures signals from changes in blood flow that happen in the brain from moment to moment as we are thinking. But reading from these measurements the actual thoughts a person is having is a challenge, and doing it in real time is even more challenging.

A number of techniques for processing these data have been developed at Princeton and other institutions. For example, work by Peter Ramadge, the Gordon Y.S. Wu Professor of Engineering and professor of electrical engineering at Princeton, has enabled researchers to identify brain activity patterns that correlate to thoughts by combining data from brain scans from multiple people. Designing computerized instructions, or algorithms, to carry out these analyses continues to be a major area of research.
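
As a simplified illustration of cross-subject alignment (much simpler than the shared-response methods actually used, and run on synthetic data), the sketch below learns an orthogonal rotation that maps one “subject’s” voxel space onto another’s so their responses to the same stimulus can be compared.

```python
# Hedged sketch of one simple form of cross-subject functional alignment
# (orthogonal Procrustes), conceptually related to -- but much simpler than --
# the methods used for combining brain scans across people.  Data are synthetic:
# subject 2's voxel space is a rotated, noisy version of subject 1's.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 300, 50

subject1 = rng.normal(size=(n_timepoints, n_voxels))                 # time x voxels
true_rotation = np.linalg.qr(rng.normal(size=(n_voxels, n_voxels)))[0]
subject2 = subject1 @ true_rotation + 0.1 * rng.normal(size=(n_timepoints, n_voxels))

# Learn a rotation that maps subject 1's voxel space onto subject 2's.
rotation, _ = orthogonal_procrustes(subject1, subject2)
aligned = subject1 @ rotation

before = np.corrcoef(subject1.ravel(), subject2.ravel())[0, 1]
after = np.corrcoef(aligned.ravel(), subject2.ravel())[0, 1]
print(f"correlation before alignment: {before:.2f}, after: {after:.2f}")
```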

Powerful high-performance computers help cut down the time that it takes to do these analyses by breaking the task up into chunks that can be processed in parallel. The combination of better algorithms and parallel computing is what enabled the collaboration to achieve real-time brain scan processing, according to Kai Li, Princeton’s Paul M. Wythes ’55 P86 and Marcia R. Wythes P86 Professor in Computer Science and one of the founders of the collaboration.

Since the beginning of the collaboration in 2015, Intel has contributed to Princeton more than $1.5 million in computer hardware and support for Princeton graduate students and postdoctoral researchers. Intel also employs 10 computer scientists who work on this project with Princeton, and these experts work closely with Princeton faculty, students and postdocs to improve the software.

These algorithms locate thoughts within the data by using machine learning, the same technique that facial recognition software uses to help find friends on social media platforms such as Facebook. Machine learning involves exposing computers to enough examples so that the computers can classify new objects that they’ve never seen before.

One of the results of the collaboration has been the creation of a software toolbox, called the Brain Imaging Analysis Kit (BrainIAK), that is openly available via the Internet to any researchers looking to process fMRI data. The team is now working on building a real-time analysis service. “The idea is that even researchers who don’t have access to high-performance computers, or who don’t know how to write software to run their analyses on these computers, would be able to use these tools to decode brain scans in real time,” said Li.

What these scientists learn about the brain may eventually help individuals combat difficulties with paying attention, or other conditions that benefit from immediate feedback.

For example, real-time feedback may help patients train their brains to weaken intrusive memories. While such “brain-training” approaches need additional validation to make sure that the brain is learning new patterns and not just becoming good at doing the training exercise, these feedback approaches offer the potential for new therapies, Cohen said. Real-time analysis of the brain could also help clinicians make diagnoses, he said.

The ability to decode the brain in real time also has applications in basic brain research, said Kenneth Norman, professor of psychology and the Princeton Neuroscience Institute. “As cognitive neuroscientists, we’re interested in learning how the brain gives rise to thinking,” said Norman. “Being able to do this in real time vastly increases the range of science that we can do,” he said.

Another way the technology can be used is in studies of how we learn. For example, when a person listens to a math lecture, certain neural patterns are activated. Researchers could look at the neural patterns of people who understand the math lecture and see how they differ from neural patterns of someone who isn’t following along as well, according to Norman.

The ongoing collaboration is now focused on improving the technology to obtain a clearer window into what people are thinking about, for example, decoding in real time the specific identity of a face that a person is mentally visualizing.

One of the challenges the computer scientists had to overcome was how to apply machine learning to the type of data generated by brain scans. A face-recognition algorithm can scan hundreds of thousands of photographs to learn how to classify new faces, but the logistics of scanning people’s brains are such that researchers usually only have access to a few hundred scans per person.

Although the number of scans is few, each scan contains a rich trove of data. The software divides the brain images into little cubes, each about one millimeter wide. These cubes, called voxels, are analogous to the pixels in a two-dimensional picture. The brain activity in each cube is constantly changing.

To make matters more complex, it is the connections between brain regions that give rise to our thoughts. A typical scan can contain 100,000 voxels, and if each voxel can talk to all the other voxels, the number of possible conversations is immense. And these conversations are changing second by second. The collaboration of Intel and Princeton computer scientists overcame this computational challenge. The effort included Li as well as Barbara Engelhardt, assistant professor of computer science, and Yida Wang, who earned his doctorate in computer science from Princeton in 2016 and now works at Intel Labs.
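
A rough back-of-the-envelope sketch of that scale, using synthetic data for a small subset of voxels, is shown below: 100,000 voxels imply roughly five billion voxel pairs, and a sliding-window analysis needs a fresh correlation matrix for every window.

```python
# Hedged back-of-the-envelope sketch of why full-correlation analysis is so
# demanding: ~100,000 voxels give ~5 billion voxel pairs, and the pattern of
# correlations can shift from one time window to the next.  Here the full
# correlation matrix is computed only for a small synthetic subset of voxels.
import numpy as np

n_voxels_full = 100_000
pairs = n_voxels_full * (n_voxels_full - 1) // 2
print(f"voxel pairs for a full scan: {pairs:,}")          # roughly 5 billion

rng = np.random.default_rng(0)
n_voxels, n_timepoints, window = 500, 240, 30
data = rng.normal(size=(n_voxels, n_timepoints))          # toy voxel time series

# Sliding-window correlations: one n_voxels x n_voxels matrix per window.
for start in range(0, n_timepoints - window + 1, window):
    corr = np.corrcoef(data[:, start:start + window])
    print(f"window starting at TR {start}: correlation matrix shape {corr.shape}")
```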

Prior to the recent progress, it would take researchers months to analyze a data set, said Nicholas Turk-Browne, professor of psychology at Princeton. With the availability of real-time fMRI, a researcher can change the experiment while it is ongoing. “If my hypothesis concerns a certain region of the brain and I detect in real time that my experiment is not engaging that brain region, then we can change what we ask the research volunteer to do to better engage that region, potentially saving precious time and accelerating scientific discovery,” Turk-Browne said.

One eventual goal is to be able to create pictures from people’s thoughts, said Turk-Browne. “If you are in the scanner and you are retrieving a special memory, such as from childhood, we would hope to generate a photograph of that experience on the screen. That is still far off, but we are making good progress.”

, ,

They traveled a huge distance, evaded a protective barrier, and found themselves in a strange and unwelcoming land.

They’re looked at suspiciously for possible links to dangerous diseases, and are under constant threat of being expelled from their adopted home. Their contribution to the greater community is only beginning to be understood. And every day, more of them arrive.

“They” are bacteria living in human lungs. And new research pinpoints just how they get there, and opens the door to more research on what happens to them — and our bodies — as a result.

Writing in the journal mBio, researchers from the University of Michigan Medical School and VA Ann Arbor Healthcare System offer microbiome-based evidence that most of the bacteria in the lungs of healthy people got there by way of “microaspiration.”

In other words, they rode in on tiny droplets of saliva that made it from the microbe-filled mouth to the lungs. That means they avoided the movable tissue barrier called the epiglottis that keeps most saliva from getting into the lower respiratory tract.

By studying the DNA of these bacteria throughout the lungs of healthy volunteers, the researchers confirmed that the population of microbes in the lungs closely resembles the population found in the mouth. And by studying their distribution within the airways, the researchers could determine their most likely route of entry.
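
As a hedged illustration of how such a resemblance can be quantified (with invented abundance numbers, not the study’s data), the sketch below computes Bray-Curtis dissimilarity between hypothetical mouth, lung and background communities; values near zero mean the communities look alike.

```python
# Hedged illustration (invented numbers, not the study's data) of quantifying
# how closely two bacterial communities resemble each other from sequencing-
# derived relative abundances, using Bray-Curtis dissimilarity
# (0 = identical communities, 1 = no taxa shared).
import numpy as np

# Relative abundances for a handful of hypothetical taxa, in the same order:
# Prevotella, Streptococcus, Veillonella, Pseudomonas, other
mouth = np.array([0.35, 0.30, 0.20, 0.01, 0.14])
lung = np.array([0.30, 0.28, 0.22, 0.02, 0.18])
background = np.array([0.02, 0.03, 0.01, 0.60, 0.34])    # hypothetical control/background profile

def bray_curtis(a, b):
    return np.abs(a - b).sum() / (a + b).sum()

print(f"mouth vs lung:       {bray_curtis(mouth, lung):.2f}")        # small: communities resemble each other
print(f"mouth vs background: {bray_curtis(mouth, background):.2f}")  # large: communities differ
```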

They found that many of the immigrant microbes make their home near the main carina, the spot at the end of the trachea where the airway branches off to the left and right lungs. This spot in the lungs is a “landing pad” where aspirated saliva — because of gravity and our upright posture — is likely to collide with the airway. But some bacteria manage to make it all the way to the deepest reaches of the pulmonary system, and reside in the tiny air sacs called alveoli.

Wherever the bacteria land, the researchers found, they join a community made up of mostly other recently arrived bacteria. Few microbes are thought to be long-term residents of healthy lungs. Unlike the gut, healthy lungs are an inhospitable environment for bacteria, with little nutrition, and constant surveillance by the lung’s immune system.

“This is the most comprehensive topographic survey of the healthy lung microbiome to date. It adds to the evidence that healthy lungs are like an island whose population is determined by the balance of immigration and elimination of species: who moves in and who moves out,” says Robert P. Dickson, M.D., the first author of the new study. “The microbiome of the lung plays by a different ecologic rulebook than the gut microbiome, and this study helps clarify what those rules are.”

He and his colleagues, inspired by classic models of ecology, have proposed an “adapted island model” of the lung microbiome, in which the lung’s ecosystem is determined by the competing pressures of microbial immigration and elimination.

Careful exploration

The new paper doesn’t just describe the lung microbiome ecosystem — it also shows that scientists who want to study it in the future can feel confident using standard techniques that doctors already use to look for signs of lung disease.

Dickson and his colleagues did their study in eight healthy volunteers who underwent bronchoscopy at VAAHS — allowing the researchers to pass a scope down their throats and into their lungs while they were under conscious sedation.

The team got samples of lung bacteria using two different techniques to sample nine separate sites in the lungs of each volunteer. One technique, called protected specimen brushing, used a special catheter that keeps bacteria from the mouth and throat from contaminating the samples taken in the airways.

Suspicion of such contamination has kept some researchers from fully accepting the results of previous lung microbiome research. So, to test for this contamination, the researchers took one sample on a brush held in the middle of the airway, and the rest using brushes that they gently touched to the airway wall.

The researchers got other samples through bronchoalveolar lavage, in which they squirt a small amount of fluid into the deepest part of the lung and then suction it back into the bronchoscope, bringing microbes along with it.

They then analyzed the bacterial DNA that they found in all these samples.

“We found no evidence of upper respiratory tract bacteria in these contamination control specimens,” says Dickson. “This reassured us that the rest of our samples truly reflected lung bacteria, and not just contamination from the procedure,” meaning bacteria brought in by the bronchoscope as it passed through the throat.

Taken together, the results suggest bronchoscope-based techniques can be used to study the lung microbiome — in sickness and in health.

“The lungs are our largest interface with the outside environment, with 70 square meters of surface area,” says Dickson. “That’s 30 times the size of the skin, and twice the size of the gastrointestinal tract. And this study confirms that they’re under constant bombardment by diverse communities of bacteria.”

The researchers emphasize that these volunteers were healthy, and had no symptoms that suggested pneumonia or other respiratory disease. “These volunteers were as healthy as you or me,” Dickson says. “We are probably all aspirating small amounts of bacteria constantly, and so long as our immune system is intact they rarely make us sick.”

Dickson notes that microaspiration in healthy people has been observed for nearly a century in studies using medical imaging techniques. “We’ve known about the existence of microaspiration for decades,” says Dickson. “But this is the first time we’ve been able to find its ecologic fingerprint.”

Taking the research forward

Now that they understand more about the immigration question, Dickson and his colleagues aim to focus future research on what happens in people who have problems with the elimination part of the “adapted island” equation.

Inability to cough irritants out of the lung, or carry them out through the sweeping action of hair-like cilia on lung cell surfaces, could lead to more microbes staying longer in the lungs than normal. And that could lead to a higher risk of lung infection.

Such research could lead to better understanding of the lung microbiome’s importance in conditions such as chronic obstructive pulmonary disease, cystic fibrosis or the types of lung failure seen in intensive care units like the one where Dickson works at Michigan Medicine, the U-M academic medical center.

“If healthy lungs are like Antarctica — where conditions aren’t good for microbe reproduction — then diseased lungs are more like a tropical island, where lower rates of elimination and altered environmental conditions permit the persistence and reproduction of certain bacteria,” he says.

As the father of two young children, he also thinks it would be interesting to study how the lung microbiome changes when a person has a viral infection that causes the upper respiratory tract to produce much more nasal secretions. “I certainly look at my kids with their constantly runny noses and wonder if their lung microbiomes look more ‘nasal’ than ‘oral.'”

And, as researchers begin to uncover the impact of common medications such as antibiotics and proton pump inhibitors on the microbiome of the digestive system, he notes that it is not unreasonable to think that these drugs also affect the lung microbiome, a question he and colleagues have begun to examine in critically ill patients.

, ,

They traveled a huge distance, evaded a protective barrier, and found themselves in a strange and unwelcoming land.

They’re looked at suspiciously for possible links to dangerous diseases, and are under constant threat of being expelled from their adopted home. Their contribution to the greater community is only beginning to be understood. And every day, more of them arrive.

“They” are bacteria living in human lungs. And new research pinpoints just how they get there, and opens the door to more research on what happens to them — and our bodies — as a result.

Writing in the journal mBio, researchers from the University of Michigan Medical School and VA Ann Arbor Healthcare System offer microbiome-based evidence that most of the bacteria in the lungs of healthy people got there by way of “microaspiration.”

In other words, they rode in on tiny droplets of saliva that made it from the microbe-filled mouth to the lungs. That means they avoided the movable tissue barrier called the epiglottis that keeps most saliva from getting into the lower respiratory tract.

By studying the DNA of these bacteria throughout the lungs of healthy volunteers, the researchers confirmed that the population of microbes in the lungs closely resembles the population found in the mouth. And by studying their distribution within the airways, the researchers could determine their most likely route of entry.

They found that many of the immigrant microbes make their home near the main carina, the spot at the end of the trachea where the airway branches off to the left and right lungs. This spot in the lungs is a “landing pad” where aspirated saliva — because of gravity and our upright posture — is likely to collide with the airway. But some bacteria manage to make it all the way to the deepest reaches of the pulmonary system, and reside in the tiny air sacs called alveoli.

Wherever the bacteria land, the researchers found, they join a community made up of mostly other recently arrived bacteria. Few microbes are thought to be long-term residents of healthy lungs. Unlike the gut, healthy lungs are an inhospitable environment for bacteria, with little nutrition, and constant surveillance by the lung’s immune system.

“This is the most comprehensive topographic survey of the healthy lung microbiome to date. It adds to the evidence that healthy lungs are like an island whose population is determined by the balance of immigration and elimination of species: who moves in and who moves out,” says Robert P. Dickson, M.D., the first author of the new study. “The microbiome of the lung plays by a different ecologic rulebook than the gut microbiome, and this study helps clarify what those rules are.”

He and his colleagues, inspired by classic models of ecology, have proposed an “adapted island model” of the lung microbiome, in which the lung’s ecosystem is determined by the competing pressures of microbial immigration and elimination.

Careful exploration

The new paper doesn’t just describe the lung microbiome ecosystem — it also shows that scientists who want to study it in the future can feel confident using standard techniques that doctors already use to look for signs of lung disease.

Dickson and his colleagues did their study in eight healthy volunteers who underwent bronchoscopy at VAAHS — allowing the researchers to pass a scope down their throat and into their lungs while they were under conscious sedation.

The team got samples of lung bacteria using two different techniques to sample nine separate sites in the lungs of each volunteer. One technique, called protected specimen brushing, used a special catheter that keeps bacteria from the mouth and throat from contaminating the samples taken in the airways.

Suspicion of such contamination has kept some researchers from fully accepting the results of previous lung microbiome research. So, to test for this contamination, the researchers took one sample on a brush held in the middle of the airway, and the rest using brushes that they gently touched to the airway wall.

The researchers got other samples through bronchoalveolar lavage, in which they squirt a small amount of fluid into the deepest part of the lung and then suction it back into the bronchoscope, bringing microbes along with it.

They then analyzed the bacterial DNA that they found in all these samples.

“We found no evidence of upper respiratory tract bacteria in these contamination control specimens,” says Dickson. “This reassured us that the rest of our samples truly reflected lung bacteria, and not just contamination from the procedure,” brought in by the bronchoscope as it passed through the throat

Taken together, the results suggest bronchoscope-based techniques can be used to study the lung microbiome — in sickness and in health.

“The lungs are our largest interface with the outside environment, with 70 square meters of surface area,” says Dickson. “That’s 30 times the size of the skin, and twice the size of the gastrointestinal tract. And this study confirms that they’re under constant bombardment by diverse communities of bacteria.”

The researchers emphasize that these volunteers were healthy, and had no symptoms that suggested pneumonia or other respiratory disease. “These volunteers were as healthy as you or me,” Dickson says. “We are probably all aspirating small amounts of bacteria constantly, and so long as our immune system is intact they rarely make us sick.”

Dickson notes that microaspiration in healthy people has been observed for nearly a century in studies using medical imaging techniques. “We’ve known about the existence of microaspiration for decades,” says Dickson. “But this is the first time we’ve been able to find its ecologic fingerprint.”

Taking the research forward

Now that they understand more about the immigration question, Dickson and his colleagues aim to focus future research on what happens in people who have problems with the elimination part of the “adapted island” equation.

Inability to cough irritants out of the lung, or carry them out through the sweeping action of hair-like cilia on lung cell surfaces, could lead to more microbes staying longer in the lungs than normal. And that could lead to a higher risk of lung infection.

Such research could lead to better understanding of the lung microbiome’s importance in conditions such as chronic obstructive pulmonary disorder, cystic fibrosis or the types of lung failure seen in intensive care units like the one where Dickson works at Michigan Medicine, the U-M academic medical center.

“If healthy lungs are like Antarctica — where conditions aren’t good for microbe reproduction — then diseased lungs are more like a tropical island, where lower rates of elimination and altered environmental conditions permit the persistence and reproduction of certain bacteria,” he says.

As the father of two young children, he also thinks it would be interesting to study how the lung microbiome changes when a person has a viral infection that causes the upper respiratory tract to produce much more nasal secretions. “I certainly look at my kids with their constantly runny noses and wonder if their lung microbiome look more ‘nasal’ than ‘oral.'”

And, as researchers begin to uncover the impact of common medications such as antibiotics and proton pump inhibitors on the microbiome in the digestive system, he notes that it’s not unreasonable to think that these drugs also affect the lung microbiome. He and colleagues have in critically ill patients.

, ,

There has been much recent talk about how to target the rising tide of antibiotic resistance across the world, one of the biggest threats to global health today.

While there is no doubting the size of the problem facing scientists, healthcare professionals and the pharmaceutical industry, there are innovative ways we can target antibiotic resistance in the short term, which are discussed in three articles published in Essays in Biochemistry.

With only a few antibiotics in development and a long drug development process (often 10-15 years), there is concern that what is being done to combat antibiotic resistance may be ‘too little, too late’.

“If bacteria continue developing resistance to multiple antibiotics at the present rate, at the same time as the antibiotic pipeline continues to dry up, there could be catastrophic costs to healthcare and society globally,” said senior co-author on one of the articles, Dr Tony Velkov, an Australian National Health and Medical Research Council (NHMRC) Career Development Fellow from Monash University, Victoria, Australia.

While any antimicrobial resistance is concerning, the increasing incidence of antibiotic-resistant Gram-negative bacteria has become a particular problem as strains resistant to multiple antibiotics are becoming common and no new drugs to treat these infections (eg, carbapenem-resistant Enterobacteriaceae) will be available in the near future. These Gram-negative bacteria are considered the most critical priority in the list of the 12 families of bacteria that pose the greatest threat to human health that was just released by the World Health Organization.

The reasons for the high levels of antimicrobial resistance observed in these critical Gram-negative organisms are explained in another paper in the same issue written by the Guest Editor of the journal, Dr Rietie Venter, University of South Australia, Adelaide, and colleagues. According to the authors, one of the main contributing factors to the increased resistance observed in Gram-negative bacteria is the permeability barrier caused by their additional outer membrane.

An innovative strategy that is gaining momentum is the synergistic use of antibiotics with FDA-approved non-antibiotics. Using this novel approach, an FDA-approved non-antibiotic drug is combined with a specific antibiotic that enables it to breach the outer membrane barrier and so restore the activity of an antibiotic. The Monash University authors discuss how combining antibiotics with other non-antibiotic drugs or compounds can boost their effectiveness against Gram-negative ‘superbugs’.

For example, loperamide, an anti-diarrheal medication sold in most pharmacies, enhances the effectiveness of eight different antibiotics (all in the tetracycline class). In particular, when added to the tetracycline antibiotic minocycline, along with the Parkinson’s disease drug benserazide, it significantly increased antibiotic activity against multi-drug resistant Pseudomonas aeruginosa, a causative agent in hospital-acquired infections such as ventilator-associated pneumonia.

Polymyxins are a type of antibiotics that target Gram-negative bacterial infections and have traditionally been used as a last resort to treat serious infections such as those caused by Gram-negative ‘superbugs’ Klebsiella pneumoniae, P. aeruginosa and Acinetobacter baumannii. Resistance to polymyxins is not common, but in late 2015 the first transferable resistance gene to colistin (polymyxin E) was discovered (plasmid-borne mcr-1 gene). This caused significant concerns, as once resistance to polymyxins is established, often no other treatments are available.

A number of researchers, including the team based at Monash University, have been testing different combinations of drugs or compounds with polymyxins to try and improve their effectiveness against these bacterial ‘superbugs’.

“Without new antibiotics in the near future, we must explore innovative approaches to preserve the clinical utility of important last-line antibiotics such as the polymyxins.” commented senior co-author on the paper, Professor Jian Li, Head of the Laboratory of Antimicrobial Systems Pharmacology from Monash University, Victoria, Australia.

Some interesting findings have ensued, with a number of different combinations having a beneficial effect. Some notable examples that increased antibiotic activity when combined with polymyxin B include: ivacaftor and lumacaftor, two new drugs used to treat cystic fibrosis; and closantel, a drug used to treat parasitic worm infections.

Another interesting combination that has shown promise against methicillin-resistant Staphylococcus aureus (MRSA), according to Schneider and co-authors, is combining the antibiotics ampicillin or oxacillin with berberine. Berberine is extracted from the roots, stems and bark of plants such as barberry.

In another paper in the same issue of Essays in Biochemistry, Dr Mark Blaskovich, Program Coordinator, Community for Open Antimicrobial Drug Discovery and colleagues from the University of Queensland, Brisbane, Australia, describe the key ways they believe antimicrobial resistance can be targeted.

“In the short term, the greatest potential for reducing further development of antimicrobial resistance lies in developing a rapid test that can quickly tell whether or not you have a bacterial infection (as opposed to a viral cold or flu), and whether you really need an antibiotic,” commented Blaskovich.

“Even better if the test could say what type of bacteria, and what types of antibiotics it is resistant to. You could then treat an infection immediately with the appropriate antibiotic, rather than the trial and error method now used. These tests could be ready within the next 5 years, and would have a huge impact on reducing unnecessary antibiotic use, preserving our existing antibiotics and reducing the spread of antibiotic resistance.”

Regarding antibiotics in particular, Blaskovich and colleagues describe a number of possible strategies to pursue. The first of which is to improve existing antibiotics. For example, the authors recently created a modified version of the antibiotic vancomycin to increase its potency and reduce its toxic side effects.

Another option is to rediscover ‘old’ antibiotics. In the 1950s and 60s many potential antibiotic drugs were described in the scientific literature, but due to so many choices being available at the time, only some were developed for human use. An example of this is octapeptins, which are newly rediscovered antibiotics that are now being developed to combat Gram-negative ‘superbugs’.

Repurposing drugs originally developed and approved for other uses has also had some success. In 2005, the Drugs for Neglected Diseases initiative identified fexinadole as a potential treatment for sleeping sickness and it is now undergoing a Phase III trial. This drug had been developed as an antimicrobial in the 1970s, but only reached pre-clinical development.

In addition to the above, researchers are looking for new, untested sources of antimicrobial activity to try and develop new drugs. A recent success in this area was, teixobactin, a new antibiotic developed by NovoBiotic Pharmaceuticals, discovered by using an ‘iChip’ to culture and isolate soil bacteria in situ.

A final option, mentioned by Blaskovich and colleagues, is crowdsourcing new antibiotics. Using this approach, the Community for Open Antimicrobial Drug Discovery is searching for new chemical diversity by screening compounds sourced from academic chemists around the world.

“It’s hard to predict which one of these methods will be the most successful in the future, but we really need to be trying all of them to have any chance of overcoming antibiotic resistance,” said Blaskovich.

“Non-antibiotic strategies are just as important, such as developing vaccines or probiotic therapies to prevent infections, as they can help to reduce the overuse of antibiotics. They will never completely replace antibiotics, but can help to preserve our existing antibiotics so they still work when needed.”

Overall, these articles and others in the new antimicrobial resistance themed issue of Essays in Biochemistry give us hope that viable solutions are being developed to this seemingly insurmountable global problem. It is important that all possible avenues are considered, as some less obvious approaches may end up being sources of future success.

Dr Derry Mercer, Principal Scientist at NovaBiotics Ltd, a company that specialises in developing new antimicrobials, commented: “Research and development into new antimicrobials remains a vitally important pursuit for combatting the problem of antibiotic resistance, but alternative approaches to this problem are also urgently needed.”

He added: “Such methods include those described in the papers in the latest issue of Essays in Biochemistry, as well as vaccine development and bacteriophage therapy, to name a few. Approaches that target microbial virulence, for example targeting biofilms and/or quorum sensing, rather than more traditional directly antimicrobial drugs should also be urgently examined.”

These days, it’s a territory mostly dominated by the likes of Raffi and the Wiggles, but there’s new evidence that lullabies, play songs, and other music for babies and toddlers may have some deep evolutionary roots.

A new theory paper, co-authored by Graduate School of Education doctoral student Samuel Mehr and Assistant Professor of Psychology Max Krasnow, proposes that infant-directed song evolved as a way for parents to signal to children that their needs are being met, while still freeing up parents to perform other tasks, like foraging for food, or caring for other offspring. Infant-directed song might later have evolved into the more complex forms of music we hear in our modern world. The theory is described in an open-access paper in the journal Evolution and Human Behavior.

Music is a tricky topic for evolutionary science: it turns up in many cultures around the world in many different contexts, but no one knows why humans are the only musical species. Noting that it has no known connection to reproductive success, Professor of Psychology Steven Pinker described it as “auditory cheesecake” in his book How the Mind Works.

“There has been a lot of attention paid to the question of where music came from, but none of the theories have been very successful in predicting the features of music or musical behavior,” Krasnow said. “What we are trying to do with this paper is develop a theory of music that is grounded in evolutionary biology, human life history and the basic features of mammalian ecology.”

At the core of their theory, Krasnow said, is the notion that parents and infants are engaged in an “arms race” over an invaluable resource — attention.

“Particularly in an ancestral world, where there are predators and other people that pose a risk, and infants don’t know which foods are poisonous and what activities are hazardous, an infant can be kept safe by an attentive parent,” he said. “But attention is a limited resource.”

While there is some cooperation in the battle for that resource — parents want to satisfy infants’ appetite for attention because their cries might attract predators, while children need to ensure parents have time for other activities like foraging for food — that mutual interest only goes so far.

Attention, however, isn’t the only resource to cause such disagreements.

The theory of parent-offspring conflict was first put forth over forty years ago by the evolutionary biologist Robert Trivers, then an Assistant Professor at Harvard. Trivers predicted that infants and parents aren’t on the same page when it comes to the distribution of resources.

“His theory covers everything that can be classified as parental investment,” Krasnow said. “It’s anything that a parent could give to an offspring to help them, or that they may want to hold back for themselves and other offspring.”

Sexual reproduction means that every person gets half of their genes from each parent, but which particular genes they inherit can differ even between full siblings.

Krasnow explains, “A gene in baby has only a fifty percent chance of being found in siblings by virtue of sharing two parents. That means that from the baby’s genetic perspective, she’ll want a more self-favoring division of resources, for example, than her mom or her sister wants, from their genetic perspectives.”
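
To make the fifty-percent figure concrete, here is a minimal Monte Carlo sketch of allele sharing between full siblings. It is an illustration of the standard relatedness calculation, not code from Mehr and Krasnow’s paper; the locus count and allele labels are arbitrary.

```python
# Illustrative Monte Carlo check of the ~50% allele-sharing figure for full
# siblings (a teaching sketch, not part of the study being described).
import random

def shared_fraction(n_loci=100_000, seed=1):
    random.seed(seed)
    hits = 0
    for locus in range(n_loci):
        # Label the four parental allele copies so each can be tracked.
        mom = (f"m1_{locus}", f"m2_{locus}")
        dad = (f"d1_{locus}", f"d2_{locus}")
        baby = (random.choice(mom), random.choice(dad))
        sib = (random.choice(mom), random.choice(dad))
        # Pick one of the baby's alleles at random and ask whether the
        # sibling inherited the same parental copy.
        if random.choice(baby) in sib:
            hits += 1
    return hits / n_loci

print(shared_fraction())  # converges to ~0.5
```

Each tracked allele copy carried by the first child turns up in the sibling about half the time, which is the asymmetry the parent-offspring conflict argument builds on.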

Mehr and Krasnow took the idea of parent-offspring conflict and applied it to attention. They predict that children should ‘want’ a greater share of their parents’ attention than their parents ‘want’ to give them. But how does the child know she has her parent’s attention? The solution, Krasnow said, is that parents were forced to develop some method of signaling to their offspring that their desire for attention was being met.

“I could simply look at my children, and they might have some assurance that I’m attending to them,” Krasnow said. “But I could be looking at them and thinking of something else, or looking at them and focusing on my cell phone, and not really attending to them at all. They should want a better signal than that.”

Why should that signal take the form of a song?

What makes such signals more honest, Mehr and Krasnow think, is the cost associated with them — a parent who is sending a signal to an infant cannot simultaneously send it to someone else, or fake it while attending to something else. “Infant directed song has a lot of these costs built in. I can’t be singing to you and be talking to someone else,” Krasnow said. “It’s unlikely I’m running away, because I need to control my voice to sing. You can tell the orientation of my head, even without looking at me, you can tell how far away I am, even without looking.”

Mehr notes that infant-directed song provides lots of opportunities for parents to signal their attention to infants: “Parents adjust their singing in real time, by altering the melody, rhythm, tempo, and timbre of their singing, adding hand motions, bouncing, touching, and facial expressions, and so on. All of these features can be finely tuned to the baby’s affective state — or not. The match or mismatch between baby behavior and parent singing could be informative for whether or not the parent is paying attention to the infant.”

Indeed, it would be pretty odd to sing a happy, bubbly song to a wailing, sleep-deprived infant.

Krasnow agrees. “All these things make something like an infant directed vocalization a good cue of attention,” he continued. “And when you put that into this co-evolutionary arms race, you might end up getting something like infant-directed song. It could begin with something like primitive vocalizations, which gradually become more infant directed, and are elaborated into melodies.”

“If a mutation develops in parents that allows them to do that quicker and better, then they have more residual budget to spend on something else, and that would spread,” he said. “Infants would then be able to get even choosier, forcing parents to get better, and so on. This is the same kind of process that starts with drab birds and results in extravagant peacocks and choosy peahens.” And as signals go, Krasnow said, those melodies can prove to be enormously powerful.

“The idea we lay out with this paper is that infant-directed song and things that share its characteristics should be very good at calming a fussy infant — and there is some evidence of that,” he said. “We’re not talking about going from this type of selection to Rock-a-Bye Baby; this theory says nothing about the words to songs or the specific melodies, it’s saying that the acoustic properties of infant directed song should make it better at calming an infant than other music.”

But could music really be in our genes?

“A good comparison to make is to language,” Krasnow said. “We would say there’s a strong genetic component to language — we have a capability for language built into our genes — and we think the same thing is going to be true for music.”

What about other kinds of music? Mehr is optimistic that this work could be informative for this question down the road.

“Let’s assume for a moment that the theory is right. How, then, did we get from lullabies to Duke Ellington?” he asked. “The evolution of music must be a complex, multi-step process, with different features developing for different reasons. Our theory raises the possibility that infant-directed song is the starting point for all that, with other musical behaviors either developing directly via natural selection, as byproducts of infant-directed song, or as byproducts of other adaptations.”

For Pinker, the paper differs from other theories of how music evolved in one important way: it makes evolutionary sense.

“In the past, people have been so eager to come up with an adaptive explanation for music that they have advanced glib and circular theories, such as that music evolved to bond the group,” he said. “This is the first explanation that at least makes evolutionary sense — it shows how the features of music could cause an advantage in fitness. That by itself doesn’t prove that it’s true, but at least it makes sense!”

To realize the potential of nanodiamonds in the delivery of biomolecules, using tRNA as a model, the ORNL-led team turned to the Titan supercomputer to shed much-needed light on the underlying physics.

“Molecular dynamics simulation can really tell those stories that current experimental advancement might not be able to,” said Bhowmik of ORNL’s Computational Science and Engineering Division, who set up and conducted the simulations alongside Monojoy Goswami of the laboratory’s Computer Science and Mathematics Division and Hong of Shanghai Jiao Tong University. “By combining these two techniques, you can enter a whole new world.”

These simulations revealed that the “weak dynamic heterogeneity” of RNA molecules in the presence of nanodiamonds was responsible for the enhanced effect. In other words, the interplay among the nanodiamonds, water and the RNA molecule forms a water layer on the nanodiamond surface, which then screens the surface and prevents strong RNA contact with the nanodiamond.

Since RNA is hydrophilic, or “likes water,” the RNA molecules near the nanodiamond surface swell with excess hydration, which weakens the heterogeneous dynamics of the molecules.
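
In simulation studies of this kind, dynamic heterogeneity is often quantified with the non-Gaussian parameter of atomic displacements. The sketch below is a generic illustration of that measure, not the team’s analysis code, and the `displacements` array is a hypothetical stand-in for values extracted from a real MD trajectory.

```python
# Generic sketch: non-Gaussian parameter alpha_2(t), a standard measure of
# dynamic heterogeneity in MD trajectories. The `displacements` array is a
# hypothetical stand-in for per-atom displacement vectors at each lag time,
# shaped (n_lags, n_atoms, 3); it is not data from the study.
import numpy as np

def non_gaussian_parameter(displacements):
    r2 = np.sum(displacements**2, axis=-1)   # squared displacement per atom
    msd = r2.mean(axis=1)                    # <r^2(t)>
    mqd = (r2**2).mean(axis=1)               # <r^4(t)>
    return 3.0 * mqd / (5.0 * msd**2) - 1.0  # alpha_2(t); 0 for Gaussian dynamics

# Example with purely Gaussian random displacements: alpha_2 stays near 0.
rng = np.random.default_rng(0)
demo = rng.normal(size=(50, 1000, 3))
print(non_gaussian_parameter(demo)[:3])
```

Values near zero indicate nearly Gaussian, weakly heterogeneous dynamics; larger values flag sub-populations of atoms moving much faster or slower than the average.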

“You can fine-tune these dynamics with chemical functionalization on the nanodiamond surface, further enhancing its effectiveness,” said Goswami.

The findings will likely guide future studies not only on the potential of nanodiamonds in drug delivery but also on fighting bacteria and treating viral diseases.

Building the bridge

Using simulation to confirm and gain insight into experiments is nothing new. But mimicking large-scale systems precisely is often a challenge, and the lack of quantitative consistency between the two disciplines makes data comparison difficult and answers more elusive to researchers.

This lack of precision, and by extension lack of consistency, is largely driven by uncertainty surrounding force-field parameters — the criteria that describe how different particles interact. Exact parameters are scarce for many macromolecules, often forcing researchers to use parameters that closely, but not exactly, match the experiment.

Miscalculating the precision of these parameters can have major consequences for the interpretation of the experimental results.

To ensure the calculations were correct, Goswami worked with Jose Borreguero and Vickie Lynch, both of ORNL’s Neutron Data Analysis and Visualization Division and Center for Accelerated Materials Modeling, to develop a workflow optimization technique known as Pegasus. This method compares molecular dynamics simulations with neutron scattering data and refines the simulation parameters to validate the results with the proper experimental precision.

“Using the Pegasus workflow to run simulations sampling the force-field parameter space saved time and eliminated input errors,” said Lynch.
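
The underlying idea — adjust force-field parameters until the simulated observable matches the measured one — can be sketched generically. Everything below is a hypothetical placeholder: `run_md_observable` stands in for launching a simulation with a candidate parameter set and reducing it to the quantity the neutron experiment measures, and the “experimental” curve is synthetic.

```python
# Hypothetical sketch of force-field parameter refinement against neutron data.
# In practice each evaluation would be a full MD run plus trajectory reduction;
# here a simple analytic stand-in keeps the sketch self-contained.
import numpy as np
from scipy.optimize import minimize

t_grid = np.linspace(0.01, 1.0, 20)            # lag times probed by the experiment (arbitrary units)
experimental = np.exp(-(t_grid / 0.35)**0.6)   # synthetic placeholder for measured relaxation data

def run_md_observable(params, t):
    tau, beta = params
    # Placeholder: a stretched-exponential relaxation stands in for what a real
    # simulation would produce with this force-field parameterization.
    return np.exp(-(t / tau)**beta)

def mismatch(params):
    return np.sum((run_md_observable(params, t_grid) - experimental)**2)

best = minimize(mismatch, x0=[0.5, 0.8], bounds=[(0.05, 5.0), (0.2, 1.0)])
print("refined parameters:", best.x)
```

In the real workflow each such evaluation is a batch of simulation jobs, which is exactly the bookkeeping a workflow system is meant to automate.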

These parameters also helped researchers better characterize the nanodiamond-water interactions and tRNA dynamics in the presence of nanodiamonds.

The researchers then developed an automated system capable of optimizing parameters across a wide spectrum of simulation systems and neutron experiments, an effort that will be of great worth to similar experiments going forward. This new workflow is also compatible with the laboratory’s Compute and Data Environment for Science (CADES), which assists experimentalists with the analysis of vast quantities of data.

“Users of the CADES infrastructure can carry out the optimization of the simulations within the Bellerophon Environment for the Analysis of Materials, in active development at ORNL,” said Borreguero. The Bellerophon Environment for the Analysis of Materials (BEAM) is an end-to-end workflow software system, developed at ORNL, that enables user-friendly, remote access to robust data storage and compute capabilities offered at CADES and the Oak Ridge Leadership Computing Facility, home of Titan, for scalable data analysis and modeling.

It’s these in-house resources that make ORNL a world leader in experimentation, in modeling and in the nexus between the two — and that make discoveries like this possible.

Brain-computer interface advance allows fast, accurate typing by people with paralysis in Stanford-led study

A clinical research publication led by Stanford University investigators has demonstrated that a brain-to-computer hookup can enable people with paralysis to type via direct brain control at the highest speeds and accuracy levels reported to date.

The report involved three study participants with severe limb weakness — two from amyotrophic lateral sclerosis, also called Lou Gehrig’s disease, and one from a spinal cord injury. They each had one or two baby-aspirin-sized electrode arrays placed in their brains to record signals from the motor cortex, a region controlling muscle movement. These signals were transmitted to a computer via a cable and translated by algorithms into point-and-click commands guiding a cursor to characters on an onscreen keyboard.

Each participant, after minimal training, mastered the technique sufficiently to outperform the results of any previous test of brain-computer interfaces, or BCIs, for enhancing communication by people with similarly impaired movement. Notably, the study participants achieved these typing rates without the use of automatic word-completion assistance common in electronic keyboarding applications nowadays, which likely would have boosted their performance.

One participant, Dennis Degray of Menlo Park, California, was able to type 39 correct characters per minute, equivalent to about eight words per minute.

‘A major milestone’

This point-and-click approach could be applied to a variety of computing devices, including smartphones and tablets, without substantial modifications, the Stanford researchers said.

“Our study’s success marks a major milestone on the road to improving quality of life for people with paralysis,” said Jaimie Henderson, MD, professor of neurosurgery, who performed two of the three device-implantation procedures. The third took place at Massachusetts General Hospital.

Henderson and Krishna Shenoy, PhD, professor of electrical engineering, are co-senior authors of the study, which will be published online Feb. 21 in eLife. The lead authors are former postdoctoral scholar Chethan Pandarinath, PhD, and postdoctoral scholar Paul Nuyujukian, MD, PhD, both of whom spent well over two years working full time on the project at Stanford.

“This study reports the highest speed and accuracy, by a factor of three, over what’s been shown before,” said Shenoy, a Howard Hughes Medical Institute investigator who’s been pursuing BCI development for 15 years and working with Henderson since 2009. “We’re approaching the speed at which you can type text on your cellphone.”

“The performance is really exciting,” said Pandarinath, who now has a joint appointment at Emory University and the Georgia Institute of Technology as an assistant professor of biomedical engineering. “We’re achieving communication rates that many people with arm and hand paralysis would find useful. That’s a critical step for making devices that could be suitable for real-world use.”

Shenoy’s lab pioneered the algorithms used to decode the complex volleys of electrical signals fired by nerve cells in the motor cortex, the brain’s command center for movement, and convert them in real time into actions ordinarily executed by spinal cord and muscles.
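
As a rough illustration of what such a decoder does — and emphatically not the algorithms used in the study, which are far more sophisticated and run in real time — the sketch below fits a simple linear (ridge-regression) map from binned firing rates to two-dimensional cursor velocity on synthetic data. The channel count, bin count and noise level are all invented.

```python
# Illustrative linear velocity decoder: binned firing rates -> 2-D cursor velocity.
# This is a generic sketch on synthetic data, not the algorithms from the study.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_bins = 96, 2000                  # hypothetical array size and number of time bins
true_W = rng.normal(size=(n_channels, 2))      # hypothetical velocity tuning of each channel

velocity = rng.normal(size=(n_bins, 2))        # intended cursor velocity during calibration
rates = velocity @ true_W.T + 0.5 * rng.normal(size=(n_bins, n_channels))

# Ridge-regression fit of decoding weights from the calibration block.
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_channels), rates.T @ velocity)

decoded = rates @ W                            # decoded velocities, integrated to move the cursor
print("correlation (x):", np.corrcoef(decoded[:, 0], velocity[:, 0])[0, 1])
```

Integrating the decoded velocity over time moves the cursor; in systems like the one described, a separate discrete decoder or threshold usually generates the “click.”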

“These high-performing BCI algorithms’ use in human clinical trials demonstrates the potential for this class of technology to restore communication to people with paralysis,” said Nuyujukian.

Life-changing accident

Millions of people with paralysis reside in the United States. Sometimes their paralysis comes gradually, as occurs in ALS. Sometimes it arrives suddenly, as in Degray’s case.

Now 64, Degray became quadriplegic on Oct. 10, 2007, when he fell and sustained a life-changing spinal-cord injury. “I was taking out the trash in the rain,” he said. Holding the garbage in one hand and the recycling in the other, he slipped on the grass and landed on his chin. The impact spared his brain but severely injured his spine, cutting off all communication between his brain and musculature from the head down.

“I’ve got nothing going on below the collarbones,” he said.

Degray received two device implants at Henderson’s hands in August 2016. In several ensuing research sessions, he and the other two study participants, who underwent similar surgeries, were encouraged to attempt or visualize patterns of desired arm, hand and finger movements. Resulting neural signals from the motor cortex were electronically extracted by the embedded recording devices, transmitted to a computer and translated by Shenoy’s algorithms into commands directing a cursor on an onscreen keyboard to participant-specified characters.

The researchers gauged the speeds at which the patients were able to correctly copy phrases and sentences — for example, “The quick brown fox jumped over the lazy dog.” Average rates were 7.8 words per minute for Degray and 6.3 and 2.7 words per minute, respectively, for the other two participants.
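
Those per-minute rates appear to follow the common transcription convention of counting five characters per word, which is consistent with the 39 correct characters per minute quoted earlier; the convention itself is an assumption here, not something stated in the study.

```python
# Convert correct characters per minute to words per minute using the common
# five-characters-per-word convention (an assumption for illustration only).
def words_per_minute(correct_chars_per_minute, chars_per_word=5):
    return correct_chars_per_minute / chars_per_word

print(words_per_minute(39))  # 7.8, matching the rate reported for Degray
```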

A tiny silicon chip

The investigational system used in the study, an intracortical brain-computer interface called the BrainGate Neural Interface System*, represents the newest generation of BCIs. Previous generations picked up signals first via electrical leads placed on the scalp, and later via electrodes surgically positioned at the brain’s surface beneath the skull.

An intracortical BCI uses a tiny silicon chip, just over one-sixth of an inch square, from which protrude 100 electrodes that penetrate the brain to about the thickness of a quarter and tap into the electrical activity of individual nerve cells in the motor cortex.

Henderson likened the resulting improved resolution of neural sensing, compared with that of older-generation BCIs, to that of handing out applause meters to individual members of a studio audience rather than just stationing them on the ceiling, “so you can tell just how hard and how fast each person in the audience is clapping.”

Shenoy said the day will come — closer to five than 10 years from now, he predicted — when a self-calibrating, fully implanted wireless system can be used without caregiver assistance, has no cosmetic impact and can be used around the clock.

“I don’t see any insurmountable challenges,” he said. “We know the steps we have to take to get there.”

Degray, who continues to participate actively in the research, knew how to type before his accident but was no expert at it. He described his newly revealed prowess in the language of a video game aficionado.

“This is like one of the coolest video games I’ve ever gotten to play with,” he said. “And I don’t even have to put a quarter in it.”

The study’s results are the culmination of a long-running collaboration between Henderson and Shenoy and a multi-institutional consortium called BrainGate. Leigh Hochberg, MD, PhD, a neurologist and neuroscientist at Massachusetts General Hospital, Brown University and the VA Rehabilitation Research and Development Center for Neurorestoration and Neurotechnology in Providence, Rhode Island, directs the pilot clinical trial of the BrainGate system and is a study co-author.

“This incredible collaboration continues to break new ground in developing powerful, intuitive, flexible neural interfaces that we all hope will one day restore communication, mobility and independence for people with neurologic disease or injury,” said Hochberg.
