Ancient Horse DNA Reveals Gene Flow Between Eurasian and North American Horses (Paleontology)

New findings show connections between the ancient horse populations in North America, where horses evolved, and Eurasia, where they were domesticated

A new study of ancient DNA from horse fossils found in North America and Eurasia shows that horse populations on the two continents remained connected through the Bering Land Bridge, moving back and forth and interbreeding multiple times over hundreds of thousands of years.

The new findings demonstrate the genetic continuity between the horses that died out in North America at the end of the last ice age and the horses that were eventually domesticated in Eurasia and later reintroduced to North America by Europeans. The study has been accepted for publication in the journal Molecular Ecology and is currently available online.

“The results of this paper show that DNA flowed readily between Asia and North America during the ice ages, maintaining physical and evolutionary connectivity between horse populations across the Northern Hemisphere,” said corresponding author Beth Shapiro, professor of ecology and evolutionary biology at UC Santa Cruz and a Howard Hughes Medical Institute investigator.

The study highlights the importance of the Bering Land Bridge as an ecological corridor for the movement of large animals between the continents during the Pleistocene, when massive ice sheets formed during glacial periods. Dramatically lower sea levels uncovered a vast land area known as Beringia, extending from the Lena River in Russia to the Mackenzie River in Canada, with extensive grasslands supporting populations of horses, mammoths, bison, and other Pleistocene fauna.

Paleontologists have long known that horses evolved and diversified in North America. One lineage, known as the caballine horses (which includes domestic horses), dispersed into Eurasia over the Bering Land Bridge about 1 million years ago, and the Eurasian population then began to diverge genetically from the horses that remained in North America.

The new study shows that after the split, there were at least two periods when horses moved back and forth between the continents and interbred, so that the genomes of North American horses acquired segments of Eurasian DNA and vice versa.

“This is the first comprehensive look at the genetics of ancient horse populations across both continents,” said first author Alisa Vershinina, a postdoctoral scholar working in Shapiro’s Paleogenomics Laboratory at UC Santa Cruz. “With data from mitochondrial and nuclear genomes, we were able to see that horses were not only dispersing between the continents, but they were also interbreeding and exchanging genes.”

Mitochondrial DNA, inherited only from the mother, is useful for studying evolutionary relationships because it accumulates mutations at a steady rate. It is also easier to recover from fossils because it is a small genome and there are many copies in every cell. The nuclear genome carried by the chromosomes, however, is a much richer source of evolutionary information.

The researchers sequenced 78 new mitochondrial genomes from ancient horses found across Eurasia and North America. Combining those with 112 previously published mitochondrial genomes, the researchers reconstructed a phylogenetic tree, a branching diagram showing how all the samples were related. With a location and an approximate date for each genome, they could track the movements of different lineages of ancient horses.
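The distance-based intuition behind such a tree can be sketched in a few lines. In the toy example below, the aligned sequences, sample names, and simple p-distance comparison are invented stand-ins for illustration only; the study used full mitochondrial genomes and model-based phylogenetic methods rather than this simplified approach.

```python
# Toy sketch of the distance step behind phylogenetic reconstruction.
# Pairwise p-distances between aligned sequences; a UPGMA- or
# neighbor-joining-style algorithm would join the closest pair first.
# All sequences and sample names are invented for illustration.
from itertools import combinations

samples = {
    "NA_horse_1": "ACGTACGTAC",
    "NA_horse_2": "ACGTACGAAC",
    "EU_horse_1": "ACGAACGTTC",
    "EU_horse_2": "ACGAACGTTG",
}

def p_distance(a, b):
    """Fraction of aligned sites at which two sequences differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

pairs = {
    (i, j): p_distance(samples[i], samples[j])
    for i, j in combinations(samples, 2)
}
closest = min(pairs, key=pairs.get)
print(closest, pairs[closest])  # the pair a clustering method joins first
```

In this toy data the two "North American" samples and the two "Eurasian" samples are each other's nearest neighbors, so a clustering method would recover the continental grouping; dating each tip then lets lineage movements be placed in time, as the study describes.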

Paleontologist Aisling Farrell holds a mummified frozen horse limb recovered from a placer gold mine in the Klondike goldfields in Yukon Territory, Canada. Ancient DNA recovered from horse fossils reveals gene flow between horse populations in North America and Eurasia. Image credit: Government of Yukon

“We found Eurasian horse lineages here in North America and vice versa, suggesting cross-continental population movements. With dated mitochondrial genomes we can see when that shift in location happened,” Vershinina explained.

The analysis showed two periods of dispersal between the continents, both coinciding with periods when the Bering Land Bridge would have been open. In the Middle Pleistocene, shortly after the two lineages diverged, the movement was mostly east to west. A second period in the Late Pleistocene saw movement in both directions, but mostly west to east. Due to limited sampling in some periods, the data may fail to capture other dispersal events, the researchers said.

The team also sequenced two new nuclear genomes from well-preserved horse fossils recovered in Yukon Territory, Canada. These were combined with seven previously published nuclear genomes, enabling the researchers to quantify the amount of gene flow between the Eurasian and North American populations.
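Gene flow between populations is commonly quantified from nuclear genomes with site-pattern statistics such as Patterson's D (the ABBA-BABA test). The sketch below uses invented site patterns and is not the study's actual pipeline; it only illustrates the kind of signal such tests detect.

```python
# Minimal ABBA-BABA (Patterson's D) sketch on toy biallelic sites.
# Assumed topology: (((P1, P2), P3), Outgroup). An excess of ABBA
# patterns over BABA patterns suggests gene flow between P2 and P3.
# Site patterns here are invented; this is not the study's pipeline.
def d_statistic(sites):
    """sites: list of (p1, p2, p3, out) alleles; '0' ancestral, '1' derived."""
    abba = sum(1 for s in sites if s == ("0", "1", "1", "0"))
    baba = sum(1 for s in sites if s == ("1", "0", "1", "0"))
    return (abba - baba) / (abba + baba) if abba + baba else 0.0

toy_sites = [
    ("0", "1", "1", "0"),  # ABBA
    ("0", "1", "1", "0"),  # ABBA
    ("1", "0", "1", "0"),  # BABA
    ("0", "0", "1", "0"),  # uninformative, ignored
]
print(d_statistic(toy_sites))  # nonzero value hints at admixture
```

Under the null of no gene flow, incomplete lineage sorting produces ABBA and BABA patterns equally often, so D should be near zero; a significant deviation, as seen here in the toy data, is the kind of evidence used to infer interbreeding between populations.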

“The usual view in the past was that horses differentiated into separate species as soon as they were in Asia, but these results show there was continuity between the populations,” said coauthor Ross MacPhee, a paleontologist at the American Museum of Natural History. “They were able to interbreed freely, and we see the results of that in the genomes of fossils from either side of the divide.”

The new findings are sure to fuel the ongoing controversy over the management of wild horses in the United States, descendants of domestic horses brought over by Europeans. Many people regard those wild horses as an invasive species, while others consider them to be part of the native fauna of North America.

Alisa Vershinina works in the Paleogenomics Lab at UC Santa Cruz where ancient DNA is extracted from fossils for sequencing and analysis. © UC Santa Cruz

“Horses persisted in North America for a long time, and they occupied an ecological niche here,” Vershinina said. “They died out about 11,000 years ago, but that’s not much time in evolutionary terms. Present-day wild North American horses could be considered reintroduced, rather than invasive.”

Coauthor Grant Zazula, a paleontologist with the Government of Yukon, said the new findings help reframe the question of why horses disappeared from North America. “It was a regional population loss rather than an extinction,” he said. “We still don’t know why, but it tells us that conditions in North America were dramatically different at the end of the last ice age. If horses hadn’t crossed over to Asia, we would have lost them all globally.”

This project was a large international collaboration involving researchers at multiple institutions, who worked together to obtain DNA from ancient horse fossils from a wide range of sites in Eurasia and North America. The coauthors include researchers from the University of Toulouse, France, the Arctic University of Norway, and other institutions in the United States, Canada, Sweden, Denmark, Germany, Russia, and China. This work was supported in part by the U.S. National Science Foundation, the Gordon and Betty Moore Foundation, and the American Wild Horse Campaign.

Featured image: Ancient horses crossed over the Bering Land Bridge in both directions between North America and Asia multiple times during the Pleistocene. © Illustration by Julius Csotonyi

Reference: Vershinina, A.O., Heintzman, P.D., Froese, D.G., Zazula, G., Cassatt‐Johnstone, M., Dalén, L., Der Sarkissian, C., Dunn, S.G., Ermini, L., Gamba, C., Groves, P., Kapp, J.D., Mann, D.H., Seguin‐Orlando, A., Southon, J., Stiller, M., Wooller, M.J., Baryshnikov, G., Gimranov, D., Scott, E., Hall, E., Hewitson, S., Kirillova, I., Kosintsev, P., Shidlovsky, F., Tong, H.‐W., Tiunov, M.P., Vartanyan, S., Orlando, L., Corbett‐Detig, R., MacPhee, R.D. and Shapiro, B. (2021), Ancient horse genomes reveal the timing and extent of dispersals across the Bering Land Bridge. Mol Ecol. Accepted Author Manuscript.

Provided by University of California Santa Cruz

Study Finds Potential Causality Between Blood Clot Factors and Migraine With Aura (Neuroscience)

Using genetic methods, researchers identify four coagulation factors that may contribute to patient susceptibility to migraines with auras

Nearly 15 percent of the U.S. population experiences migraine. One subtype of migraine that is not well understood is migraine with aura (MA). Individuals who experience MA often see flashing lights, blind spots, or jagged lines in their visual field prior to onset of their migraine headaches. Individuals who experience MA also face a heightened risk of stroke and cardiovascular disease, although scientists continue to explore why this correlation exists. In a new study from Brigham and Women’s Hospital, researchers used a technique in genetic analysis termed Mendelian randomization to examine 12 coagulation measures, uncovering four that are associated with migraine susceptibility. Interestingly, scientists only observed these associations in individuals who experience MA and did not observe such associations among individuals who experience migraine without aura (MO). Their research suggests that these hemostatic factors could potentially have a causal role in MA. Their results are published in Neurology.

“We’ve always wanted to know why people with MA have this higher risk of stroke and other cardiovascular diseases,” said corresponding author Daniel Chasman, PhD, of the Division of Preventive Medicine at the Brigham. “This study offers promising leads specific to MA. Finding a possible cause for migraine with aura has been an outstanding question in the field for a long time.”

There has been speculation in the field about relationships between coagulation and migraine susceptibility for some time, but previous research has been largely inconclusive. Most individuals first experience migraine at a young age, for example during childhood or young adulthood. Because previous study designs included only middle-aged and older adults, investigators have questioned whether coagulation causes migraine or whether any causality exists between the two at all. In this study, leveraging Mendelian randomization, which can support or refute potential causal effects on a health outcome, scientists for the first time found evidence that hemostatic factors may contribute to risk of MA.

“Even if we see an association between migraine and these coagulation factors when we measure both factors in a population at the same time, we still wonder: Which one came first?” said co-author Pamela Rist, ScD, of the Division of Preventive Medicine at the Brigham. “One of the interesting parts of Mendelian randomization is that it allows you to examine potential causality.”

Researchers used summary statistics from decades of previously collected data from individuals who experience migraine and individuals who do not experience migraine. Because the diagnostic criteria are different for MA versus MO, they could examine these two conditions separately.
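Mendelian randomization from summary statistics is commonly carried out with estimators such as the inverse-variance-weighted (IVW) average of per-variant Wald ratios. The sketch below illustrates that estimator on invented numbers; it is not the study's actual data or necessarily its exact analysis.

```python
# Toy inverse-variance-weighted (IVW) Mendelian randomization sketch.
# Per genetic variant: beta_exp = effect on the exposure (e.g., a
# coagulation factor level), beta_out = effect on the outcome (e.g.,
# migraine-with-aura risk), se_out = standard error of beta_out.
# Wald ratio per variant = beta_out / beta_exp; IVW pools the ratios
# weighted by their precision. All numbers below are invented.
def ivw_estimate(variants):
    """variants: list of (beta_exp, beta_out, se_out) tuples."""
    num = den = 0.0
    for beta_exp, beta_out, se_out in variants:
        ratio = beta_out / beta_exp            # per-variant causal estimate
        weight = (beta_exp / se_out) ** 2      # inverse variance of the ratio
        num += weight * ratio
        den += weight
    return num / den

toy_variants = [
    (0.10, 0.020, 0.010),
    (0.20, 0.038, 0.012),
    (0.15, 0.031, 0.011),
]
print(ivw_estimate(toy_variants))  # pooled causal-effect estimate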

Investigators found a strong association between four coagulation factors and migraine susceptibility. They observed that genetically increased levels of three blood clotting factors (coagulation factor VIII, von Willebrand factor, and phosphorylated fibrinopeptide A) and genetically decreased levels of fibrinogen (a protein important in the late stages of the blood clotting process) were all associated with migraine susceptibility. Interestingly, scientists did not find these associations among individuals who experience migraine without aura (MO), indicating a specific relationship between these hemostatic factors and MA.

Scientists note that Mendelian randomization has its limitations. In the future, researchers could examine if the causal associations implied by genetics can be observed in clinical practice.

“It is very exciting that by using Mendelian randomization we were able to show that hemostatic factors are associated with MA,” said first author Yanjun Guo, MD, PhD, of the Division of Preventive Medicine at the Brigham. “And because in the observational studies we saw that MA patients have a higher risk of stroke, these findings may reveal a potential connection between MA and stroke.”

Funding for this work was provided by the U.S. National Institutes of Health and U.S. National Institute of Neurological Disorders and Stroke (R21NS09296 and R21NS104398), the National Heart, Lung, and Blood Institute (R01HL134894, R01HL139553, and K01HL128791), the American Heart Association (grant 18CDA34110116), and a Miguel Servet contract from the ISCIII Spanish Health Institute (CP17/00142), co-financed by the European Social Fund. The WGHS is supported by the National Heart, Lung, and Blood Institute (HL043851 and HL080467) and the National Cancer Institute (CA047988 and UM1CA182913), with funding for genotyping provided by Amgen.

Reference: Guo, Y et al. “Association Between Hemostatic Profile and Migraine: A Mendelian Randomization Analysis” Neurology DOI:

Provided by Brigham and Women’s Hospital

Salk Scientists Reveal Role Of Genetic Switch in Pigmentation and Melanoma (Biology)

Study suggests that turning molecular switch off could be a strategy to treat deadly type of skin cancer

Despite only accounting for about 1 percent of skin cancers, melanoma causes the majority of skin cancer-related deaths. While treatments for this serious disease do exist, these drugs can vary in effectiveness depending on the individual.

A Salk study published on May 18, 2021, in the journal Cell Reports reveals new insights about a protein called CRTC3, a genetic switch that could potentially be targeted to develop new treatments for melanoma by keeping the switch turned off.

“We’ve been able to correlate the activity of this genetic switch to melanin production and cancer,” says Salk study corresponding author Marc Montminy, a professor in the Clayton Foundation Laboratories for Peptide Biology.

Melanoma develops when pigment-producing cells that give skin its color, called melanocytes, mutate and begin to multiply uncontrollably. These mutations can cause proteins, like CRTC3, to prompt the cell to make an abnormal amount of pigment or to migrate and be more invasive.

Researchers have known that the CRTC family of proteins (CRTC1, CRTC2, and CRTC3) is involved in pigmentation and melanoma, yet obtaining precise details about the individual proteins has been elusive. “This is a really interesting situation where different behaviors of these proteins, or genetic switches, can actually give us specificity when we start thinking about therapies down the road,” says first author Jelena Ostojić, a former Salk staff scientist and now a principal scientist at DermTech.

The researchers observed that eliminating CRTC3 in mice changed the animals’ coat color, demonstrating that the protein is needed for melanin production. Their experiments also revealed that when the protein was absent in melanoma cells, the cells migrated and invaded less, meaning they were less aggressive, suggesting that inhibiting the protein could be beneficial for treating the disease.

The team characterized, for the first time, the connection between two cellular communications (signaling) systems that converge on the CRTC3 protein in melanocytes. These two systems tell the cell to either proliferate or make the pigment melanin. Montminy likens this process to a relay race. Essentially, a baton (chemical message) is passed from one protein to another until it reaches the CRTC3 switch, either turning it on or off.

“The fact that CRTC3 was an integration site for two signaling pathways—the relay race—was most surprising,” says Montminy, who holds the J.W. Kieckhefer Foundation Chair. “CRTC3 makes a point of contact between them that increases specificity of the signal.”

Next, the team plans to further investigate the mechanism by which CRTC3 affects the balance of melanocyte differentiation, to develop a better understanding of its role in cancer.

Featured image: Graphical abstract © Authors


Provided by Salk Institute

Novel Simulation Method Predicts Blood Flow Conditions Behind Von Willebrand Disease (Medicine)

Breakthrough could advance diagnosis and treatment of bleeding disorder and lead to improved design of left ventricular assist devices used in heart failure patients

For the first time, researchers can quantitatively predict blood flow conditions that likely cause pathological behavior of the human blood protein von Willebrand factor (vWF). Predictions from this new method of simulation, developed at Lehigh University, can be used to optimize the design of the mechanical pumps known as left ventricular assist devices used in heart failure patients. The method also has the potential to improve diagnosis and treatment of von Willebrand disease, the most common inherited bleeding disorder in the U.S., according to the Centers for Disease Control and Prevention.

The article, “Predicting pathological von Willebrand factor unraveling in elongational flow,” appears in the May 18 issue of Biophysical Journal. In it, the team of researchers used an enhanced sampling technique called Weighted Ensemble in conjunction with Brownian Dynamics simulations (the WEBD method) to identify blood flow conditions that cause pathological unraveling of vWF. The method allowed them to compute the globular-to-unraveled transition rate of the protein on timescales inaccessible to standard simulation methods.

“This method is all about studying the kinetics of rare events,” says co-author Edmund Webb III, an associate professor of mechanical engineering and mechanics in Lehigh’s P.C. Rossin College of Engineering and Applied Science. “That typically means some type of transition. With proteins, that often comes down to folding.”

The protein known as vWF promotes blood clotting by helping platelets in blood stick to collagen within the walls of damaged blood vessels and form a plug that stops the bleeding from a wound.

Typically, vWF circulates in the blood as a compact ball, or globule. When it approaches an injury site, the increase in blood flow caused by the laceration prompts the globule to unravel. As the protein transitions into more of a string-like shape, sites that are shielded when vWF is in its globular form become exposed. Those sites are “sticky”: they bind with platelets and collagen to initiate blood clot formation.

There are a number of ways in which the clotting process can go awry, leading to bleeding disorders. One of these is von Willebrand disease (vWD), which affects about 1 percent of Americans, according to the CDC. Its primary symptoms include frequent nosebleeds, easy bruising, and heavy and/or prolonged bleeding after injury, childbirth, surgery, or dental work, or during menstrual periods.

There are several types of vWD, and they range in severity based on the degree to which vWF is depleted in a patient’s blood. Some people don’t even know they have the condition, because the vWF concentration in their blood, though depleted, is still sufficiently high to initiate clotting. Some have to steer clear of certain activities to avoid injury. Others need regular infusions of vWF, because they are severely lacking in the protein. 

The lead study author is Sagar Kania, a Rossin College PhD student in mechanical engineering and mechanics. Kania and Webb performed the work with their Lehigh coauthors, Alp Oztekin, a professor of mechanical engineering and mechanics, Xuanhong Cheng, a professor of bioengineering and materials science and engineering, and X. Frank Zhang, an associate professor of bioengineering. 

The team focused on first understanding blood flow conditions in which otherwise healthy vWF would exhibit undesired unraveling. This is a question whose answer has a direct impact on the design of left ventricular assist devices (LVADs), which have been associated with causing unexpected vWF depletion and associated bleeding disorders, akin to vWD. The symptoms are essentially the same so, medically speaking, in addition to being hereditary, von Willebrand disease can be acquired either by the action of a medical device or as a result of a separate medical condition.

Undesired unraveling of vWF is considered a necessary first step to pathological vWF depletion and associated bleeding disorders. In normal conditions, vWF is made within the walls of blood vessels and then secreted into the blood. 

“And when it gets secreted, it’s way too big,” Webb says. 

So that secretion simultaneously activates an enzyme (called ADAMTS13) that cuts the very long proteins into shorter lengths that are appropriately sized for their blood-clotting duties. Those shorter proteins contract to a globule shape, and then flow through the blood until they encounter an injury site, at which point they unravel, stick to platelets and collagen, and initiate the process of plugging the hole in damaged blood vessels.

The specific problem the team studied for this paper arises when vWF unravels when it shouldn’t, that is, not in response to a wound. When that happens, the cutting enzyme may again be activated.

“So upon secretion into blood, von Willebrand factor proteins are circulating in normal blood flow conditions, and yet they’re unraveling because they are very long,” says Webb. “Since they are unraveling, they’re getting cut down to the normal size distribution. But in pathological conditions they continue to unravel, and they get cut to the point where they are too small to be functional. They become short segments that are no longer hemostatically active; so if you get a cut, you can’t clot because you don’t have enough properly sized von Willebrand factor circulating around.”

So what are the blood flow conditions that cause vWF to unravel when it shouldn’t? To answer that question, Kania and coauthors combined the enhanced sampling technique (Weighted Ensemble) with molecular scale (Brownian Dynamics) simulations. This required executing parallel simulations—computations that are performed across many computers at the same time—supported by Lehigh’s Sol and Hawk computational clusters, as well as resources from the Xtreme Science and Engineering Discovery Environment (XSEDE), a national supercomputing resource made possible by the National Science Foundation.  
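The core idea of the Weighted Ensemble technique can be illustrated in miniature. In the sketch below, a toy one-dimensional random walk stands in for the protein's progress coordinate from globule to unraveled; the dynamics, bin edges, and parameters are invented stand-ins, not the study's Brownian Dynamics model of vWF.

```python
# Minimal Weighted Ensemble (WE) sketch on a toy 1-D random walk.
# Walkers carry statistical weights; after each short dynamics burst
# they are binned along a progress coordinate, and each occupied bin
# is resampled to a fixed number of walkers by splitting (cloning)
# and merging. Total weight is conserved, so rare excursions toward
# the "unraveled" end of the coordinate keep being sampled without
# biasing the kinetics. Toy parameters, not the study's vWF model.
import random

random.seed(0)
N_PER_BIN = 4
BINS = [0.0, 0.25, 0.5, 0.75, 1.0]  # progress-coordinate bin edges

def step(x):
    """One short burst of toy overdamped dynamics, confined to [0, 1]."""
    x += random.gauss(0.0, 0.05)
    return min(max(x, 0.0), 1.0)

def bin_index(x):
    for i in range(len(BINS) - 1):
        if x < BINS[i + 1]:
            return i
    return len(BINS) - 2  # x == 1.0 falls in the top bin

def resample(walkers):
    """Split/merge so each occupied bin holds N_PER_BIN equal-weight walkers."""
    new = []
    for b in range(len(BINS) - 1):
        group = [(x, w) for x, w in walkers if bin_index(x) == b]
        if not group:
            continue
        total = sum(w for _, w in group)
        positions = [x for x, _ in group]
        weights = [w / total for _, w in group]
        # Draw N_PER_BIN walkers proportional to weight; each carries an
        # equal share of the bin's total weight, so weight is conserved.
        for x in random.choices(positions, weights, k=N_PER_BIN):
            new.append((x, total / N_PER_BIN))
    return new

walkers = [(0.0, 1.0 / (2 * N_PER_BIN)) for _ in range(2 * N_PER_BIN)]
for _ in range(50):
    walkers = [(step(x), w) for x, w in walkers]
    walkers = resample(walkers)

print(sum(w for _, w in walkers))  # total statistical weight stays 1
```

In a real WE calculation, the weight arriving in the "unraveled" region per unit time yields the globular-to-unraveled transition rate, which is how the method reaches timescales far beyond a brute-force simulation.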

“The big problem,” says Webb, “is that bleeding disorders can be associated with the clearance of von Willebrand factor on timescales that might be on the order of minutes to hours. Molecular-scale simulations have to go through time sequentially, using a really, really tiny time step. For us to get a one-second simulation is effectively a state-of-the-art calculation.” 

Webb further points out that developing a robust understanding of the statistical nature of rare unraveling events requires many such simulations, making such an approach intractable.

“What this paper did was use a new method of simulation combined with our preexisting method of simulation to answer questions on longer timescales. It’s generally referred to as timescale bridging, because we’re taking a model designed to address questions on the order of micro- to milliseconds and combining it with a new theoretical technique that allows us to answer questions about things that would happen on timescales of seconds, minutes, hours, even days.” 

It’s truly a novel approach to predicting this type of an event, he says.

“This Weighted Ensemble method and the marriage of it to this type of problem has never been done before,” Webb explains. “Sagar has made some predictions about pathological flow conditions that appear to be quantitatively more accurate than what previously exists in the literature. So we can now say, ‘If a certain blood flow exposure occurs, you’re going to have issues.’” 

The breakthrough is potentially very good news for patients with heart failure. It may help manufacturers of LVADs design the blood pumps so the flow they generate doesn’t create pathological conditions for vWF.

Beyond benefiting those in need of medical implants, the team’s method may eventually help medical professionals better understand and potentially manipulate the complex flow conditions affecting the size distribution of vWF in their patients. This could lead to improved treatment of both acquired and hereditary vWD.

The ultimate goal, Webb says, is to take this type of capability and knowledge beyond just vWF to develop targeted drug therapy using flow-responsive molecules that mimic the unraveling of vWF. So if a patient is at risk for heart attack or stroke because of plaque buildup (stenosis), the molecule could deliver a drug to that precise area.

“You design a molecule that unravels in specific flow fields associated with certain degrees of stenosis,” says Webb. “The hope is that you would get the drug to the spot where you want it, and not anywhere else in the body. We’re working on this right now.”

Related Links: Biophysical Journal (Vol 120, Issue 10): “Predicting pathological von Willebrand factor unraveling in elongational flow”

Provided by Lehigh University

LHAASO Discovers a Dozen PeVatrons and Photons Exceeding 1 PeV and Launches Ultra-high-energy Gamma Astronomy Era

China’s Large High Altitude Air Shower Observatory (LHAASO)—one of the country’s key national science and technology infrastructure facilities—has found a dozen ultra-high-energy (UHE) cosmic accelerators within the Milky Way. It has also detected photons with energies exceeding 1 peta-electron-volt (quadrillion electron-volts, or PeV), including one at 1.4 PeV, the highest-energy photon ever observed.

These findings overturn the traditional understanding of the Milky Way and open up an era of UHE gamma astronomy. They will prompt scientists to rethink the mechanisms by which high-energy particles are generated and propagated in the Milky Way, to explore violent celestial phenomena and their physical processes more deeply, and to test basic physical laws under extreme conditions.

These discoveries are published in the journal Nature on May 17. The LHAASO International Collaboration, which is led by the Institute of High Energy Physics (IHEP) of the Chinese Academy of Sciences, completed this study.

The LHAASO Observatory is still under construction. The cosmic accelerators—known as PeVatrons since they accelerate particles to the PeV range—and PeV photons were discovered using the first half of the detection array, which was finished at the end of 2019 and operated for 11 months in 2020.

Photons with energies exceeding 1 PeV were detected in a very active star-forming region in the constellation Cygnus. LHAASO also detected 12 stable gamma ray sources with energies up to about 1 PeV and photon-signal significances at least seven standard deviations above the surrounding background. These sources lie at positions in our galaxy measured with an accuracy better than 0.3°. They are the brightest Milky Way gamma ray sources in LHAASO’s field of view.

Although the accumulated data from the first 11 months of operation allowed observation of only those sources, all of them emit so-called UHE photons, i.e., gamma rays above 0.1 PeV. The results show that the Milky Way is full of PeVatrons, while the largest accelerator on Earth (the LHC at CERN) can accelerate particles only to about 0.01 PeV. Scientists had already determined that cosmic ray accelerators in the Milky Way have an energy limit; until now, the predicted limit was around 0.1 PeV, implying a natural cut-off of the gamma-ray spectrum above that energy.

But LHAASO’s discovery has increased this “limit,” since the spectra of most sources are not truncated. These findings launch an era for UHE gamma astronomic observation. They show that non-thermal radiation celestials, such as young massive star clusters, supernova remnants, pulsar wind nebulas and so on—represented by Cygnus star-forming regions and the Crab nebula—are the best candidates for finding UHE cosmic rays in the Milky Way.

Through UHE gamma astronomy, a century-old mystery, the origin of cosmic rays, may soon be solved. LHAASO will prompt scientists to rethink the mechanisms of high energy cosmic ray acceleration and propagation in the Milky Way. It will also allow scientists to explore extreme astrophysical phenomena and their corresponding processes, thus enabling examination of the basic laws of physics under extreme conditions.

Extended Materials: 

LHAASO and Its Core Scientific Goals 

LHAASO is a major national scientific and technological infrastructure facility focusing on cosmic ray observation and research. It is located 4,410 meters above sea level on Mt. Haizi in Daocheng County, Sichuan Province. When construction is completed in 2021, LHAASO’s particle detector arrays will comprise 5,195 electromagnetic particle detectors and 1,188 muon detectors located in the square-kilometer complex array (KM2A), a 78,000 m2 water Cherenkov detector array (WCDA), and 18 wide-field-of-view Cherenkov telescopes (WFCTA). Using these four detection techniques, LHAASO will be able to measure cosmic rays omnidirectionally with multiple variables simultaneously. The arrays will cover an area of about 1.36 km2.

LHAASO’s core scientific goal is to explore the origin of high-energy cosmic rays and study related physics such as the evolution of the universe, the motion and interaction of high-energy astronomical celestials, and the nature of dark matter. LHAASO will extensively survey the universe (especially the Milky Way) for gamma ray sources. It will precisely measure their energy spectra over a broad range—from less than 1 TeV (trillion electron-volts or tera-electron-volts) to more than 1 PeV. It will also measure the components of diffused cosmic rays and their spectra at even higher energies, thus revealing the laws of the generation, acceleration and propagation of cosmic rays, as part of the exploration of new physics frontiers.

PeVatrons and PeV Photons 

The signal of UHE photons around PeVatrons is so weak that just one or two photons at PeV energy can be detected using 1 km2 of detectors per year even when focusing on the Crab Nebula, known as the “standard candle for gamma astronomy.” What’s worse, those one or two photons are submerged in tens of thousands of ordinary cosmic rays. The 1,188 muon detectors in LHAASO’s KM2A are designed to select photon-like signals, making LHAASO the most sensitive UHE gamma ray detector in the world. With its unprecedented sensitivity, in just 11 months, the half-sized KM2A detected one photon around 1 PeV from the Crab Nebula. In addition, KM2A found 12 similar sources in the Milky Way, all of which emit UHE photons and extend their spectra continuously into the vicinity of 1 PeV. Even more important, KM2A has detected a photon with energy of 1.4 PeV—the highest ever recorded. It is clear that LHAASO’s scientific discoveries represent a milestone in identifying the origin of cosmic rays. To be specific, LHAASO’s scientific breakthroughs fall into the following three areas:

1) Revealing the ubiquity of cosmic accelerators capable of accelerating particles to energies exceeding 1 PeV in the Milky Way. All the gamma ray sources that LHAASO has effectively observed radiate photons in the UHE range above 0.1 PeV, indicating that the energy of the parent particles radiating these gamma rays must exceed 1 PeV. As a matter of convention, these sources must have photon-signal significances at least five standard deviations above the surrounding background. The observed energy spectra of these gamma rays are not truncated above 0.1 PeV, demonstrating that there is no acceleration limit below the PeV range in these galactic accelerators.

This observation challenges the prevailing theoretical model. According to current theory, cosmic rays with energies in the PeV range can produce gamma rays of 0.1 PeV by interacting with surrounding gases in the accelerating region. Detecting gamma rays with energies greater than 0.1 PeV is therefore an important way to find and verify PeV cosmic ray sources. Because previous mainstream detectors worked below this energy level, the existence of PeV cosmic ray accelerators had not been solidly confirmed before. But now LHAASO has revealed a large number of PeV cosmic acceleration sources in the Milky Way, all of which are candidates for being UHE cosmic ray generators. This is a crucial step toward determining the origin of cosmic rays.
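The "standard deviations above background" quoted for these sources is conventionally computed in gamma-ray astronomy with the Li & Ma (1983) significance formula, which compares on-source counts with a scaled off-source background estimate. The sketch below uses invented counts; whether LHAASO's pipeline uses exactly this estimator is not stated here.

```python
# Li & Ma (1983, Eq. 17) significance of a gamma-ray source excess.
# n_on: counts in the on-source region; n_off: counts in the off-source
# (background) region; alpha: on/off exposure ratio. Counts are invented.
import math

def li_ma_significance(n_on, n_off, alpha):
    term_on = n_on * math.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * math.log((1 + alpha) * n_off / (n_on + n_off))
    return math.sqrt(2.0 * (term_on + term_off))

sigma = li_ma_significance(n_on=120, n_off=500, alpha=0.1)
print(round(sigma, 2))  # significance in standard deviations
```

With these toy counts the expected background in the on-source region is alpha * n_off = 50 events, so the excess of 70 events comes out close to eight standard deviations; the five- and seven-sigma thresholds quoted above are cuts on exactly this kind of quantity.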

2) Opening an era of “UHE gamma astronomy.” In 1989, an experimental group at the Whipple Observatory in Arizona discovered the first object emitting gamma radiation above 0.1 TeV, marking the onset of the era of “very-high-energy” gamma astronomy. Over the next 30 years, more than 200 very-high-energy gamma ray sources were discovered. However, the first object emitting UHE gamma radiation was not detected until 2019. Remarkably, using a partially complete array for less than a year, LHAASO has already boosted the number of known UHE gamma ray sources to 12.

With the completion of LHAASO and the continuous accumulation of data, we can anticipate finding an unexplored “UHE universe” full of surprising phenomena. It is well known that the background radiation left over from the Big Bang is pervasive enough to absorb gamma rays with energies greater than 1 PeV, so even if such gamma rays were produced beyond the Milky Way, we would not be able to detect them. This is what makes LHAASO’s observational window within our own galaxy so special.

3) Detecting, for the first time, photons with energies greater than 1 PeV from the Cygnus region and the Crab Nebula. The detection of PeV photons is a milestone in gamma astronomy. It fulfills a long-held dream of the gamma astronomy community and has long been a strong driving force in the development of research instruments in the field. In fact, one of the main reasons for the rapid growth of gamma astronomy in the 1980s was the challenge of reaching the PeV photon regime. The star-forming region in the direction of Cygnus is the brightest area of the Milky Way in the northern sky, hosting a large number of massive star clusters. Massive stars live only on the order of one million years, so the clusters contain massive stars at every stage of birth and death, creating a complex environment of strong shocks. They are ideal “particle astrophysics laboratories,” i.e., places for accelerating cosmic rays.

The first PeV photons found by LHAASO were from the star-forming region of the constellation Cygnus, making this area the best candidate for exploring the origin of UHE cosmic rays. Therefore, much attention has turned to LHAASO and multi-wavelength observation of this region, which could offer a potential breakthrough in solving the “mystery of the century.”

Extensive observational studies of the Crab Nebula over the years have made it virtually the only standard gamma ray source with a clearly understood emission mechanism. Indeed, precise spectral measurements spanning 22 orders of magnitude clearly reveal the signature of an electron accelerator. However, the UHE spectra measured by LHAASO, especially the photons at PeV energy, seriously challenge this “standard model” of high-energy astrophysics and even the more fundamental theory of electron acceleration.

Technology Innovations 

LHAASO has developed and/or improved: 1) clock synchronization technology over long distances that ensures sub-nanosecond timing accuracy for each detector in the array; 2) multiple parallel event trigger algorithms and their implementation, aided by high-speed front-end signal digitization, high-speed data transmission and large on-site computing clusters; 3) silicon photomultipliers (SiPMs); and 4) ultra-large-photocathode micro-channel plate photomultiplier tubes (MCP-PMTs). The latter two advanced detection technologies are being employed at LHAASO on a large scale for the first time; they have greatly improved the spatial resolution of photon measurements and lowered the detection energy threshold. These features allow the detectors to achieve unprecedented sensitivity in exploring the deep universe across a wide energy range. LHAASO also provides an attractive experimental platform for interdisciplinary research in frontier fields such as atmospheric science, the high-altitude environment and space weather, and it will serve as a base for international cooperation on high-level scientific research projects.

History of Cosmic Ray Research in China 

Cosmic ray research in China has experienced three stages. LHAASO represents the third generation of high-altitude cosmic ray observatories. High-altitude experiments are a means of making full use of the atmosphere as a detector medium. In this way, scientists can observe cosmic rays on the ground, where the size of the detector can be much larger than in a space-borne detector outside the atmosphere. This is the only way to observe cosmic rays at very high energy.

In 1954, China’s first cosmic ray laboratory was built on Mt. Luoxue in Dongchuan, Yunnan Province, at 3,180 meters above sea level. In 1989, the Sino-Japanese cosmic ray experiment ASγ was built at an altitude of 4,300 meters above sea level at Yangbajing, Tibet Autonomous Region.

In 2006, the joint Sino-Italian ARGO-YBJ experiment was built at the same site.

In 2009, at the Xiangshan Science Forum in Beijing, Professor CAO Zhen proposed building a large-scale composite detection array (i.e., LHAASO) in a high-altitude area. The LHAASO project was approved in 2015 and construction began in 2017. By April 2019, construction was 25% complete and scientific operation had begun. By January 2020, an additional 25% had been completed and put into operation. By December of the same year, 75% of the facility had been completed. The entire facility will be completed in 2021. LHAASO has already become one of the world’s leading UHE gamma detection facilities and will operate for many years. With it, scientists will be able to study the origin of cosmic rays from many angles.

Featured image: Aerial photograph of LHAASO (Image by IHEP)

Reference: Cao, Z., Aharonian, F.A., An, Q. et al. Ultrahigh-energy photons up to 1.4 petaelectronvolts from 12 γ-ray Galactic sources. Nature (2021).

Provided by Chinese Academy of Sciences

Stunning Simulation Of Stars Being Born is Most Realistic Ever (Astronomy)

First high-resolution model to simulate an entire gas cloud where stars are born

A team including Northwestern University astrophysicists has developed the most realistic, highest-resolution 3D simulation of star formation to date. The result is a visually stunning, mathematically driven marvel that allows viewers to float around a colorful gas cloud in 3D space while watching twinkling stars emerge.

Called STARFORGE (Star Formation in Gaseous Environments), the computational framework is the first to simulate an entire gas cloud — 100 times more massive than previously possible and full of vibrant colors — where stars are born. 

It also is the first simulation to simultaneously model star formation, evolution and dynamics while accounting for stellar feedback, including jets, radiation, winds and nearby supernova activity. While other simulations have incorporated individual types of stellar feedback, STARFORGE puts them all together to simulate how these various processes interact to affect star formation.

Using this beautiful virtual laboratory, the researchers aim to explore longstanding questions, including why star formation is slow and inefficient, what determines a star’s mass and why stars tend to form in clusters.

The researchers have already used STARFORGE to discover that protostellar jets — high-speed streams of gas that accompany star formation — play a vital role in determining a star’s mass. By calculating a star’s exact mass, researchers can then determine its brightness and internal mechanisms as well as make better predictions about its death.

The manuscript detailing the research behind the new model was newly accepted by the Monthly Notices of the Royal Astronomical Society, and an advance copy appeared online today. An accompanying paper, describing how jets influence star formation, was published in the same journal in February 2021.

“People have been simulating star formation for a couple decades now, but STARFORGE is a quantum leap in technology,” said Northwestern’s Michael Grudić, who co-led the work. “Other models have only been able to simulate a tiny patch of the cloud where stars form — not the entire cloud in high resolution. Without seeing the big picture, we miss a lot of factors that might influence the star’s outcome.”

“How stars form is very much a central question in astrophysics,” said Northwestern’s Claude-André Faucher-Giguère, a senior author on the study. “It’s been a very challenging question to explore because of the range of physical processes involved. This new simulation will help us directly address fundamental questions we could not definitively answer before.”

Snapshot from a STARFORGE simulation. A rotating gas core collapses, forming a central star that launches bipolar jets along its poles as it feeds on gas from the surrounding disk. The jets entrain gas away from the core, limiting the amount that the star can ultimately accrete.

Grudić is a postdoctoral fellow at Northwestern’s Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA). Faucher-Giguère is an associate professor of physics and astronomy at Northwestern’s Weinberg College of Arts and Sciences and member of CIERA. Grudić co-led the work with Dávid Guszejnov, a postdoctoral fellow at the University of Texas at Austin.

From start to finish, star formation takes tens of millions of years. So even as astronomers observe the night sky to catch a glimpse of the process, they can only view a brief snapshot.

“When we observe stars forming in any given region, all we see are star formation sites frozen in time,” Grudić said. “Stars also form in clouds of dust, so they are mostly hidden.”

For astrophysicists to view the full, dynamic process of star formation, they must rely on simulations. To develop STARFORGE, the team incorporated computational code for multiple physical phenomena, including gas dynamics, magnetic fields, gravity, heating and cooling, and stellar feedback processes. A single simulation can take a full three months to run, so the model requires one of the largest supercomputers in the world, a facility supported by the National Science Foundation and operated by the Texas Advanced Computing Center.

The resulting simulation shows a mass of gas — tens to millions of times the mass of the sun — floating in the galaxy. As the gas cloud evolves, it forms structures that collapse and break into pieces, which eventually form individual stars. Once the stars form, they launch jets of gas outward from both poles, piercing through the surrounding cloud. The process ends when there is no gas left to form any more stars.

Pouring jet fuel onto modeling

Already, STARFORGE has helped the team discover a crucial new insight into star formation. When the researchers ran the simulation without accounting for jets, the stars ended up much too large — 10 times the mass of the sun. After adding jets to the simulation, the stars’ masses became much more realistic — less than half the mass of the sun.

“Jets disrupt the inflow of gas toward the star,” Grudić said. “They essentially blow away gas that would have ended up in the star and increased its mass. People have suspected this might be happening, but, by simulating the entire system, we have a robust understanding of how it works.”

Beyond understanding more about stars, Grudić and Faucher-Giguère believe STARFORGE can help us learn more about the universe and even ourselves. 

“Understanding galaxy formation hinges on assumptions about star formation,” Grudić said. “If we can understand star formation, then we can understand galaxy formation. And by understanding galaxy formation, we can understand more about what the universe is made of. Understanding where we come from and how we’re situated in the universe ultimately hinges on understanding the origins of stars.” 

“Knowing the mass of a star tells us its brightness as well as what kinds of nuclear reactions are happening inside it,” Faucher-Giguère said. “With that, we can learn more about the elements that are synthesized in stars, like carbon and oxygen — elements that we are also made of.”

The study, “STARFORGE: Toward a comprehensive numerical model of star cluster formation and feedback,” was supported by the National Science Foundation and NASA.

Featured image: Snapshot from the first full STARFORGE simulation. Nicknamed the “Anvil of Creation,” a giant molecular cloud with individual star formation and comprehensive feedback, including protostellar jets, radiation, stellar winds and core-collapse supernovae. © Northwestern University/UT Austin

Provided by Northwestern University

Nanofiber Filter Captures Almost 100% of Coronavirus Aerosols (Material Science)

The filter could help curb airborne spread of COVID-19 virus

A filter made from polymer nanothreads blew three kinds of commercial masks out of the water by capturing 99.9% of coronavirus aerosols in an experiment.

“Our work is the first study to use coronavirus aerosols for evaluating filtration efficiency of face masks and air filters,” said corresponding author Yun Shen, a UC Riverside assistant professor of chemical and environmental engineering. “Previous studies have used surrogates of saline solution, polystyrene beads, and bacteriophages — a group of viruses that infect bacteria.”

The study, led by engineers at UC Riverside and The George Washington University, compared the effectiveness of surgical and cotton masks, a neck gaiter, and electrospun nanofiber membranes at removing coronavirus aerosols to prevent airborne transmission. The cotton mask and neck gaiter only removed about 45%-73% of the aerosols. The surgical mask did much better, removing 98% of coronavirus aerosols. But the nanofiber filter removed almost all of the coronavirus aerosols. 

Left: A nanofiber filter that captures 99.9% of coronavirus aerosols; Right: A highly magnified image of the polymer nanofibers. (Photo: Yun Shen)

The World Health Organization and the Centers for Disease Control and Prevention have both recognized aerosols as a major mechanism of COVID-19 virus transmission. Aerosols are tiny particles of water or other matter that can remain suspended in air for long periods of time and are small enough to penetrate the respiratory system.

People release aerosols whenever they breathe, cough, talk, shout, or sing. If they are infected with COVID-19, these aerosols can also contain the virus. Inhaling a sufficient quantity of coronavirus-laden aerosols can make people sick. Efforts to curb aerosol spread of COVID-19 focus on minimizing individual exposure and reducing the overall quantity of aerosols in an environment by asking people to wear masks and by improving indoor ventilation and air filtration systems. 

Studying a contagious new virus is dangerous and must be done in labs with the highest biosafety ratings, which are relatively rare. To date, all studies during the pandemic on mask or filter efficiency have used other materials thought to mimic the size and behavior of coronavirus aerosols. The new study improved on this by testing both aerosolized saline solution and an aerosol that contained a coronavirus in the same family as the virus that causes COVID-19, but that only infects mice.

Shen and George Washington University colleague Danmeng Shuai produced a nanofiber filter by sending a high electrical voltage through a drop of liquid polyvinylidene fluoride to spin threads about 300 nanometers in diameter — about 167 times thinner than a human hair. The process created pores only a couple of micrometers in diameter on the nanofibers’ surfaces, which helped them capture 99.9% of coronavirus aerosols.
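The "167 times thinner" figure above can be checked with a quick calculation, assuming a typical human hair is about 50 micrometers thick (a common rule of thumb, not a value stated in the paper):

```python
# Sanity-check the "167 times thinner than a human hair" comparison.
# Assumption (not from the article): a typical human hair is ~50 micrometers thick.
HAIR_DIAMETER_NM = 50_000   # 50 micrometers, expressed in nanometers
FIBER_DIAMETER_NM = 300     # electrospun fiber diameter quoted in the article

ratio = HAIR_DIAMETER_NM / FIBER_DIAMETER_NM
print(f"hair / fiber diameter ratio: {ratio:.0f}")  # ~167
```

Hair thickness varies widely (roughly 20 to 180 micrometers), so the quoted ratio holds only for the assumed 50-micrometer hair.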

The production technique, known as electrospinning, is cost effective and could be used to mass produce nanofiber filters for personal protective equipment and air filtration systems. Electrospinning also leaves the nanofibers with an electrostatic charge that enhances their ability to capture aerosols, and their high porosity makes it easier to breathe wearing electrospun nanofiber filters.

“Electrospinning can advance the design and fabrication of face masks and air filters,” said Shen. “Developing new masks and air filters by electrospinning is promising because of its high performance in filtration, economic feasibility, and scalability, and it can meet on-site needs of the masks and air filters.”

The paper, “Development of electrospun nanofibrous filters for controlling coronavirus aerosols,” is published in Environmental Science & Technology Letters. Other authors include Hongchen Shen, Zhe Zhou, Haihuan Wang, Mengyang Zhang, Minghao Han, and David P. Durkin. This work is funded by the National Science Foundation. 

Reference: Hongchen Shen, Zhe Zhou, Haihuan Wang, Mengyang Zhang, Minghao Han, David P. Durkin, Danmeng Shuai, and Yun Shen, “Development of Electrospun Nanofibrous Filters for Controlling Coronavirus Aerosols”, Environ. Sci. Technol. Lett. 2021.

Provided by University of California Riverside

Study Finds Evidence Of Persistent Lyme Infection in Brain Despite Aggressive Antibiotic Therapy (Neuroscience)

Tulane University researchers found the bacterium that causes Lyme disease in the brain tissue of a woman who had long suffered neurocognitive impairment after her diagnosis and treatment for the tick-borne disease. The presence of the corkscrew-shaped Borrelia burgdorferi spirochetes in the former Lyme disease patient’s brain and spinal cord was evidence of a persistent infection.

The findings were published in Frontiers in Neurology.

The 69-year-old woman, who experienced progressively debilitating neurological symptoms throughout her illness, decided to donate her brain to Columbia University for the study of the disease as her condition worsened. While she had first experienced the classic symptoms of Lyme disease 15 years prior and was treated accordingly after her diagnosis, she experienced continual neurological decline including a severe movement disorder and personality changes, and eventually succumbed to Lewy body dementia. Lewy body dementia is associated with abnormal protein deposits in the nerve cells of the brain which can cause impairment in thinking, movement and mood, leading to a severe form of dementia.

This is the first time researchers have identified a possible correlation between Lyme disease infection and Lewy body dementia.

Using three highly sensitive detection methods validated with nonhuman primate samples at the Tulane National Primate Research Center, the research team concluded that at the donor’s time of death, her central nervous system (CNS) still harbored intact spirochetes in spite of aggressive antibiotic therapy for Lyme disease at different times throughout her illness. They used immunofluorescence staining to image the spirochetes, polymerase chain reaction (PCR) to detect the presence of B. burgdorferi DNA, and RNAscope to determine whether the spirochetes were viable.

Previous research by the lead author of the study, Monica Embers, associate professor of microbiology and immunology at Tulane, concluded that a B. burgdorferi infection could persist in the CNS of rhesus macaques following the standard course of doxycycline prescribed for Lyme disease. Embers hopes that these latest findings help pave the way for investigating the presence of B. burgdorferi in the autopsies of other patients who have experienced severe neurocognitive decline.

“These findings underscore how persistent these spirochetes can be in spite of multiple rounds of antibiotics targeting them,” Embers said. “We will be interested in investigating the role that B. burgdorferi may play in severe neurological disease, as this is an area of research that has not yet been fully explored.”

About 15% of patients diagnosed with Lyme disease experience neuroborreliosis, a systemic spirochete infection that can affect both the CNS and the peripheral nervous system.

This study was done in collaboration with Dr. Brian Fallon of the Lyme and Tick-Borne Diseases Research Center at Columbia University Irving Medical Center.

Featured image: A microscopic immunofluorescent staining image of Borrelia burgdorferi spirochete, identified by the arrow, in the spinal cord. Image provided by Shiva Gadila, Tulane National Primate Research Center. © Tulane

Reference: Shiva Kumar Goud Gadila, Gorazd Rosoklija, Andrew J. Dwork, Brian A. Fallon and Monica E. Embers, “Detecting Borrelia Spirochetes: A Case Study With Validation Among Autopsy Specimens”, Front. Neurol., 10 May 2021.

Provided by Tulane

Tulane Researchers Develop Test That Detects Childhood TB A Year Ahead Of Current Screenings (Medicine)

Researchers at Tulane University School of Medicine have developed a highly sensitive blood test that can find traces of the bacteria that causes tuberculosis (TB) in infants a year before they develop the deadly disease, according to a study published in BMC Medicine.

Using only a small blood sample, the test detects a protein secreted by Mycobacterium tuberculosis, which causes TB infection. It can screen for all forms of TB and rapidly evaluate a patient’s response to treatment, said lead study author Tony Hu, PhD, Weatherhead Presidential Chair in Biotechnology Innovation at Tulane University. 

“This is a breakthrough for infants with tuberculosis because we don’t have this kind of screening technology to catch early infections among those youngest groups who are most likely to be undiagnosed,” Hu said. “I hope this method can be pushed forward quickly to reach these children as early as possible.”

Each year, nearly a million children develop TB and 205,000 die of TB-related causes. More than 80% of childhood TB deaths occur in those under the age of 5. Most of these deaths occur because the disease goes undiagnosed: children with TB, particularly infants, usually have symptoms that are not specific to the disease. These children also have difficulty producing the respiratory samples used for detection by the best TB tests now in use.

Even when it is possible to obtain these samples from children, they tend to be less effective for diagnosis, since they often contain much less of the bacteria than samples from adults, Hu said. His assay, however, uses a small blood sample that can be easily obtained from children of any age to detect a specific protein (CFP-10) that the bacteria secrete to maintain the infection that develops into TB. Since this protein is present at very low levels in the blood, Hu’s assay uses an antibody specific for this protein to enrich it relative to other blood proteins, and a mass spectrometer to detect it with high sensitivity and accuracy.

Hu and his team used this test to screen stored blood samples collected from 284 HIV-infected children and 235 children without the virus who participated in a large clinical trial conducted between 2004 and 2008. Hu’s group found that their test identified children diagnosed with TB by the current gold-standard TB tests with 100% accuracy. The assay also detected 83.7% of TB cases that were missed by these tests but were later diagnosed by a standard checklist employing an array of other information collected by each child’s physician (unconfirmed TB cases). Hu’s test also detected CFP-10 in 77% of the blood samples that were collected 24 weeks before children were diagnosed with TB by other methods, indicating its strong potential for early TB diagnosis. In some positive cases, the biomarker was detectable as early as 60 weeks before TB disease was confirmed.

The researchers are working to develop an inexpensive, portable instrument to read the test to allow it to be more easily used in resource-limited settings often encountered in areas where TB is prevalent. 

The study was funded by the National Institute of Allergy and Infectious Diseases, the Eunice Kennedy Shriver National Institute of Child Health and Human Development and the U.S. Department of Defense. 

Video: Tony Hu explains how new screening technology has the potential to make a big difference in preventing children from dying from tuberculosis. Video by Carolyn Scofield.

Featured image: The new test was developed by a research team led by Tony Hu, PhD, Weatherhead Presidential Chair in Biotechnology Innovation at Tulane University. © Tulane University

Provided by Tulane University