Infant Planet Discovered by UH-led Team Using Maunakea Telescopes (Planetary Science)

One of the youngest planets ever found around a distant infant star has been discovered by an international team of scientists led by University of Hawaiʻi at Mānoa faculty, students, and alumni.

Thousands of planets have been discovered around other stars, but what sets this one apart is that it is newly formed and can be directly observed. The planet, named 2M0437b, joins a handful of objects advancing our understanding of how planets form and change with time, helping shed new light on the origin of the Solar System and Earth. The in-depth research was recently published in Monthly Notices of the Royal Astronomical Society.

Image: The planet and its parent star lie in a stellar “nursery” called the Taurus Cloud. (Photo credit: NASA)

“This serendipitous discovery adds to an elite list of planets that we can directly observe with our telescopes,” explained lead author Eric Gaidos, a professor in the UH Mānoa Department of Earth Sciences. “By analyzing the light from this planet we can say something about its composition, and perhaps where and how it formed in a long-vanished disk of gas and dust around its host star.”

The researchers estimate that the planet is a few times more massive than Jupiter, and that it formed with its star several million years ago, around the time the main Hawaiian Islands first emerged above the ocean. The planet is so young that it is still hot from the energy released during its formation, with a temperature similar to the lava erupting from Kīlauea Volcano.

Key Maunakea telescopes

Image: Subaru (left) and Keck (right) telescopes were used to observe the infant planet. © UHM

In 2018, 2M0437b was first seen with the Subaru Telescope on Maunakea by UH Institute for Astronomy (IfA) visiting researcher Teruyuki Hirano. For the past several years, it has been studied carefully using other telescopes on the mauna.

Gaidos and his collaborators used the Keck Observatory on Maunakea to monitor the position of the host star as it moved across the sky, confirming that planet 2M0437b was truly a companion to the star, and not a more distant object. The observations required three years because the star moves slowly across the sky.

The planet and its parent star lie in a stellar “nursery” called the Taurus Cloud. 2M0437b is on a much wider orbit than the planets in the Solar System; its current separation is about one hundred times the Earth-Sun distance, making it easier to observe. However, sophisticated “adaptive” optics are still needed to compensate for the image distortion caused by Earth’s atmosphere.

“Two of the world’s largest telescopes, adaptive optics technology and Maunakea’s clear skies were all needed to make this discovery,” said co-author Michael Liu, an astronomer at IfA. “We are all looking forward to more such discoveries, and more detailed studies of such planets with the technologies and telescopes of the future.”

Future research potential

More in-depth study of the newly discovered planet may not be far off. “Observations with space telescopes such as NASA’s Hubble and the soon-to-be-launched James Webb Space Telescope could identify gases in its atmosphere and reveal whether the planet has a moon-forming disk,” Gaidos added.

The star that 2M0437b orbits is too faint to be seen with the unaided eye, but currently from Hawaiʻi, the young planet and other infant stars in the Taurus Cloud are almost directly overhead in the pre-dawn hours, north of the bright star Hokuʻula (Aldebaran) and east of the Makaliʻi (Pleiades) star cluster.

Contributors to this research include several UH graduate students and alumni: Rena Lee (earth science graduate student), Maïssa Salama (IfA graduate student), and IfA alumni Zhoujian Zhang, Travis Berger, Sam Grunblatt, and Megan Ansdell.

This work is an example of UH Mānoa’s goal of Excellence in Research: Advancing the Research and Creative Work Enterprise, one of four goals identified in the 2015–25 Strategic Plan, updated in December 2020.

Featured image: Discovery image of the planet, which lies about 100 times the Earth-Sun distance from its parent star. © UHM


Provided by University of Hawaii

Astrophysicists Reveal Largest-Ever Suite of Universe Simulations (Cosmology)

The AbacusSummit simulations will help scientists extract information about the universe from upcoming cosmological surveys.

Collectively clocking in at nearly 60 trillion particles, a newly released set of cosmological simulations is by far the biggest ever produced.

The simulation suite, dubbed AbacusSummit, will be instrumental in extracting secrets of the universe from upcoming surveys of the cosmos, its creators predict. They present AbacusSummit in several papers published October 25 in Monthly Notices of the Royal Astronomical Society.

AbacusSummit was produced by researchers at the Flatiron Institute’s Center for Computational Astrophysics (CCA) in New York City and the Center for Astrophysics | Harvard & Smithsonian. Made up of more than 160 simulations, it models how gravitational attraction causes particles in a box-shaped universe to move about. Such models, known as N-body simulations, capture the behavior of dark matter, which makes up most of the universe’s material and interacts only via gravity.

“This suite is so big that it probably has more particles than all the other N-body simulations that have ever been run combined — though that’s a hard statement to be certain of,” says Lehman Garrison, lead author of one of the new papers and a CCA research fellow.

Garrison led the development of the AbacusSummit simulations along with graduate student Nina Maksimova and astronomy professor Daniel Eisenstein, both of the Center for Astrophysics. The simulations ran on the U.S. Department of Energy’s Summit supercomputer at the Oak Ridge Leadership Computing Facility in Tennessee.

AbacusSummit will soon come in handy, as several surveys will produce maps of the cosmos with unprecedented detail in the coming years. These include the Dark Energy Spectroscopic Instrument, the Nancy Grace Roman Space Telescope and the Euclid spacecraft. One of the goals of these big-budget missions is to improve estimates of the cosmic and astrophysical parameters that determine how the universe behaves and how it looks.

The AbacusSummit suite comprises hundreds of simulations of how gravity has shaped the distribution of dark matter throughout the universe. Here, a snapshot of one of the simulations is shown at various zoom scales: 10 billion light-years across, 1.2 billion light-years across and 100 million light-years across. The simulation replicates the large-scale structures of our universe, such as the cosmic web and colossal clusters of galaxies. Credit: The AbacusSummit Team; layout and design by Lucy Reading-Ikkanda/Simons Foundation

Scientists will make those improved estimates by comparing the new observations to computer simulations of the universe with different values for the various parameters — such as the nature of the dark energy pulling the universe apart. With the improvements offered by the next-generation surveys comes the need for better simulations, Garrison says.

“The galaxy surveys are delivering tremendously detailed maps of the universe, and we need similarly ambitious simulations that cover a wide range of possible universes that we might live in,” he says. “AbacusSummit is the first suite of such simulations that has the breadth and fidelity to compare to these amazing observations.”

The project was daunting. N-body calculations — which attempt to compute the movements of objects, like planets, interacting gravitationally — have been among the foremost challenges in the field of physics since the days of Isaac Newton. They’re tricky because each object interacts with every other object, no matter how far apart they are. This means that as you add more objects, the number of interactions rapidly increases.

There is no general solution to the N-body problem for three or more massive bodies. The calculations available are simply approximations. A common approach is to freeze time, calculate the total force acting on each object, then nudge each one based on the net force it experiences. Time is then moved forward slightly, and the process repeats.
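
As a concrete illustration of that freeze-and-nudge loop, here is a minimal Python sketch of a direct-sum N-body step. It is a toy under simple assumptions (unit gravitational constant, a small softening term to tame close encounters) and has no connection to the actual Abacus code.

```python
import numpy as np

G = 1.0  # gravitational constant in arbitrary simulation units

def nbody_step(pos, vel, mass, dt, softening=1e-3):
    """One 'freeze time, sum forces, nudge' step of a naive N-body integrator.

    pos, vel : (N, 3) arrays of positions and velocities
    mass     : (N,) array of particle masses
    """
    # Pairwise separation vectors r_j - r_i for every particle pair.
    diff = pos[None, :, :] - pos[:, None, :]            # shape (N, N, 3)
    dist2 = (diff ** 2).sum(axis=-1) + softening ** 2   # softened distances
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                       # no self-interaction
    # Net acceleration on each particle: sum over all other particles.
    acc = G * (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)
    vel = vel + acc * dt   # nudge each particle by the net force...
    pos = pos + vel * dt   # ...then move time forward slightly and repeat.
    return pos, vel

# Example: evolve three bodies for 100 small time steps.
rng = np.random.default_rng(1)
pos, vel, m = rng.normal(size=(3, 3)), np.zeros((3, 3)), np.ones(3)
for _ in range(100):
    pos, vel = nbody_step(pos, vel, m, dt=0.01)
```

The direct sum above costs N-squared interactions per step, which is exactly why a 60-trillion-particle suite needs the approximations and parallelism described below.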

Using that approach, AbacusSummit handled colossal numbers of particles thanks to clever code, a new numerical method and lots of computing power. The Summit supercomputer was the world’s fastest at the time the team ran the calculations.

The team designed their codebase — called Abacus — to take full advantage of Summit’s parallel processing power, whereby multiple calculations can run simultaneously. Summit boasts lots of graphics processing units, or GPUs, that excel at parallel processing.

Running N-body calculations using parallel processing requires careful algorithm design because an entire simulation requires a substantial amount of memory to store. That means Abacus can’t just make copies of the simulation for different nodes of the supercomputer to work on. So the code instead divides each simulation into a grid. An initial calculation provides a fair approximation of the effects of distant particles at any given point in the simulation. (Distant particles play a much smaller role than nearby particles.) Abacus then groups nearby cells and splits them off so that the computer can work on each group independently, combining the approximation of distant particles with precise calculations of nearby particles.
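
To make the near/far idea tangible, here is a deliberately crude one-dimensional toy in Python: particles are binned into cells, nearby cells are summed particle by particle, and each distant cell is replaced by a single point mass at its center of mass. The real Abacus decomposition is a far more sophisticated three-dimensional scheme; every name and threshold below is invented for the example.

```python
import numpy as np

def grid_force(pos, mass, target, cell_size=1.0, near_radius=1):
    """Toy near/far force split on a uniform 1-D grid (G = 1).

    pos, mass : 1-D arrays of particle positions and masses
    target    : position at which the force is evaluated (assumed not
                to coincide exactly with any source particle)
    """
    cells = np.floor(pos / cell_size).astype(int)
    tcell = int(np.floor(target / cell_size))
    force = 0.0
    for c in np.unique(cells):
        members = cells == c
        if abs(c - tcell) <= near_radius:
            # Near field: exact sum over individual particles.
            for p, m in zip(pos[members], mass[members]):
                force += m * np.sign(p - target) / (p - target) ** 2
        else:
            # Far field: one interaction with the cell's center of mass.
            M = mass[members].sum()
            com = (pos[members] * mass[members]).sum() / M
            force += M * np.sign(com - target) / (com - target) ** 2
    return force

rng = np.random.default_rng(0)
positions = rng.uniform(0, 10, size=500)
print(grid_force(positions, np.ones(500), target=5.2))
```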

Abacus leverages parallel computer processing to drastically speed up its calculations of how particles move about due to their gravitational attraction. A sequential processing approach (top) computes the gravitational tug between each pair of particles one by one. Parallel processing (bottom) instead divides the work across multiple computing cores, enabling the calculation of multiple particle interactions simultaneously. Credit: Lucy Reading-Ikkanda/Simons Foundation

For large simulations, the researchers found that Abacus’ approach offers a significant improvement on other N-body codebases, which divide the simulations irregularly based on the distribution of particles. The uniform divisions used by AbacusSummit make better use of parallel processing, the researchers report. Additionally, the regularity of Abacus’ grid approach allows a large amount of the distant-particle approximation to be computed before the simulation even starts.

Thanks to its design, Abacus can update 70 million particles per second per node of the Summit supercomputer (each particle represents a clump of dark matter with 3 billion times the mass of the sun). The code can even analyze a simulation as it’s running, looking for patches of dark matter indicative of the bright star-forming galaxies that are a focus of upcoming surveys.

“Our vision was to create this code to deliver the simulations that are needed for this particular new brand of galaxy survey,” says Garrison. “We wrote the code to do the simulations much faster and much more accurately than ever before.”

Eisenstein, who is a member of the Dark Energy Spectroscopic Instrument collaboration — which recently began its survey to map an unprecedented fraction of the universe — says he is eager to use Abacus in the future.

“Cosmology is leaping forward because of the multidisciplinary fusion of spectacular observations and state-of-the-art computing,” he says. “The coming decade promises to be a marvelous age in our study of the historical sweep of the universe.”

Additional co-creators of Abacus and AbacusSummit include Sihan Yuan of Stanford University, Philip Pinto of the University of Arizona, Sownak Bose of Durham University in England and Center for Astrophysics researchers Boryana Hadzhiyska, Thomas Satterthwaite and Douglas Ferrer. The simulations ran on the Summit supercomputer under an Advanced Scientific Computing Research Leadership Computing Challenge allocation.

Featured image: A snapshot measuring 10 billion light-years across of one of the AbacusSummit simulations. Credit: The AbacusSummit Team


Provided by Simons Foundation

Astronomers May Have Discovered the First Planet Outside of Our Galaxy (Planetary Science)

Until now, astronomers have found all other known exoplanets and exoplanet candidates in the Milky Way galaxy, almost all less than about 3,000 light-years from Earth.

Signs of a planet transiting a star outside of the Milky Way galaxy may have been detected for the first time. This intriguing result, using NASA’s Chandra X-ray Observatory, opens up a new window to search for exoplanets at greater distances than ever before.

The possible exoplanet candidate is located in the spiral galaxy Messier 51 (M51), also called the Whirlpool Galaxy because of its distinctive profile.

Exoplanets are defined as planets outside of our Solar System. Until now, astronomers have found all other known exoplanets and exoplanet candidates in the Milky Way galaxy, almost all of them less than about 3,000 light-years from Earth. An exoplanet in M51 would be about 28 million light-years away, meaning it would be thousands of times farther away than those in the Milky Way.

“We are trying to open up a whole new arena for finding other worlds by searching for planet candidates at X-ray wavelengths, a strategy that makes it possible to discover them in other galaxies,” said Rosanne Di Stefano of the Center for Astrophysics | Harvard & Smithsonian (CfA) in Cambridge, Massachusetts, who led the study, which was published today in Nature Astronomy.

This new result is based on transits, events in which the passage of a planet in front of a star blocks some of the star’s light and produces a characteristic dip. Astronomers using both ground-based and space-based telescopes – like those on NASA’s Kepler and TESS missions – have searched for dips in optical light, electromagnetic radiation humans can see, enabling the discovery of thousands of planets.

Di Stefano and colleagues have instead searched for dips in the brightness of X-rays received from X-ray bright binaries. These luminous systems typically contain a neutron star or black hole pulling in gas from a closely orbiting companion star. The material near the neutron star or black hole becomes superheated and glows in X-rays.

Because the region producing bright X-rays is small, a planet passing in front of it could block most or all of the X-rays, making the transit easier to spot because the X-rays can completely disappear. This could allow exoplanets to be detected at much greater distances than current optical light transit studies, which must be able to detect tiny decreases in light because the planet only blocks a tiny fraction of the star.
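
The advantage is easy to see with rough numbers. A transit’s depth scales as the square of the planet-to-source size ratio, capped at total occultation, so a Saturn-sized planet barely dims a Sun-sized star but can blot out a compact X-ray-emitting region entirely. A back-of-envelope Python sketch (the X-ray region size is an assumed order-of-magnitude value, not a figure from the study):

```python
# Rough transit-depth comparison; radii in kilometers.
R_SATURN = 58_000    # Saturn-sized planet
R_SUN = 696_000      # Sun-like star: the source in an optical transit
R_XRAY = 1_000       # compact X-ray-emitting region (assumed scale)

def transit_depth(r_planet, r_source):
    """Fraction of the source blocked; 1.0 means total occultation."""
    return min(1.0, (r_planet / r_source) ** 2)

print(f"optical dip: {transit_depth(R_SATURN, R_SUN):.4f}")   # ~0.0069
print(f"X-ray dip:   {transit_depth(R_SATURN, R_XRAY):.1f}")  # 1.0
```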

The team used this method to detect the exoplanet candidate in a binary system called M51-ULS-1, located in M51. This binary system contains a black hole or neutron star orbiting a companion star with a mass about 20 times that of the Sun. The X-ray transit they found using Chandra data lasted about three hours, during which the X-ray emission decreased to zero. Based on this and other information, the researchers estimate the exoplanet candidate in M51-ULS-1 would be roughly the size of Saturn, and orbit the neutron star or black hole at about twice the distance of Saturn from the Sun.

While this is a tantalizing study, more data would be needed to verify the interpretation as an extragalactic exoplanet. One challenge is that the planet candidate’s large orbit means it would not cross in front of its binary partner again for about 70 years, thwarting any attempts for a confirming observation for decades.

“Unfortunately to confirm that we’re seeing a planet we would likely have to wait decades to see another transit,” said co-author Nia Imara of the University of California at Santa Cruz. “And because of the uncertainties about how long it takes to orbit, we wouldn’t know exactly when to look.”

Could the dimming have been caused by a cloud of gas and dust passing in front of the X-ray source? The researchers consider this an unlikely explanation, as the characteristics of the event observed in M51-ULS-1 are not consistent with the passage of such a cloud. The model of a planet candidate is, however, consistent with the data.

“We know we are making an exciting and bold claim so we expect that other astronomers will look at it very carefully,” said co-author Julia Berndtsson of Princeton University in New Jersey. “We think we have a strong argument, and this process is how science works.”

If a planet exists in this system, it likely had a tumultuous and violent past. An exoplanet in the system would have had to survive the supernova explosion that created the neutron star or black hole. The future may also be dangerous: at some point the companion star could also explode as a supernova and blast the planet once again with extremely high levels of radiation.

Di Stefano and her colleagues looked for X-ray transits in three galaxies beyond the Milky Way galaxy, using both Chandra and the European Space Agency’s XMM-Newton. Their search covered 55 systems in M51, 64 systems in Messier 101 (the “Pinwheel” galaxy), and 119 systems in Messier 104 (the “Sombrero” galaxy), resulting in the single exoplanet candidate described here.

The authors will search the archives of both Chandra and XMM-Newton for more exoplanet candidates in other galaxies. Substantial Chandra datasets are available for at least 20 galaxies, including some like M31 and M33 that are much closer than M51, allowing shorter transits to be detectable. Another interesting line of research is to search for X-ray transits in Milky Way X-ray sources to discover new nearby planets in unusual environments.

The other authors of this Nature Astronomy paper are Ryan Urquhart (Michigan State University), Roberto Soria (University of the Chinese Academy of Sciences), Vinay Kashyap (CfA), and Theron Carmichael (CfA). NASA’s Marshall Space Flight Center manages the Chandra program. The Smithsonian Astrophysical Observatory’s Chandra X-ray Center controls science from Cambridge, Massachusetts, and flight operations from Burlington, Massachusetts.

Featured image: NASA/CXC/SAO/R. DiStefano, et al.; Optical: NASA/ESA/STScI/Grendler


Reference: Rosanne Di Stefano et al., “A possible planet candidate in an external galaxy detected through X-ray transit,” Nature Astronomy (2021). DOI: 10.1038/s41550-021-01495-w. https://www.nature.com/articles/s41550-021-01495-w


Provided by Center for Astrophysics

Neutron Star Collisions are a “Goldmine” of Heavy Elements, Study Finds (Cosmology)

Mergers between two neutron stars have produced more heavy elements in the last 2.5 billion years than mergers between neutron stars and black holes.

Most elements lighter than iron are forged in the cores of stars. A star’s white-hot center fuels the fusion of protons, squeezing them together to build progressively heavier elements. But beyond iron, scientists have puzzled over what could give rise to gold, platinum, and the rest of the universe’s heavy elements, whose formation requires more energy than a star can muster.

A new study by researchers at MIT and the University of New Hampshire finds that of two long-suspected sources of heavy metals, one is more of a goldmine than the other.

The study, published today in Astrophysical Journal Letters, reports that in the last 2.5 billion years, more heavy metals were produced in binary neutron star mergers, or collisions between two neutron stars, than in mergers between a neutron star and a black hole.

The study is the first to compare the two merger types in terms of their heavy metal output, and suggests that binary neutron stars are a likely cosmic source for the gold, platinum, and other heavy metals we see today. The findings could also help scientists determine the rate at which heavy metals are produced across the universe.

“What we find exciting about our result is that to some level of confidence we can say binary neutron stars are probably more of a goldmine than neutron star-black hole mergers,” says lead author Hsin-Yu Chen, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research. 

Chen’s co-authors are Salvatore Vitale, assistant professor of physics at MIT, and Francois Foucart of UNH.

An efficient flash

As stars undergo nuclear fusion, they require energy to fuse protons to form heavier elements. Stars are efficient in churning out lighter elements, from hydrogen to iron. Fusing more than the 26 protons in iron, however, becomes energetically inefficient.

“If you want to go past iron and build heavier elements like gold and platinum, you need some other way to throw protons together,” Vitale says.

Scientists have suspected supernovae might be an answer. When a massive star collapses in a supernova, the iron at its center could conceivably combine with lighter elements in the extreme fallout to generate heavier elements.

In 2017, however, a promising candidate was confirmed, in the form of a binary neutron star merger, detected for the first time by LIGO and Virgo, the gravitational-wave observatories in the United States and in Italy, respectively. The detectors picked up gravitational waves, or ripples through space-time, that originated 130 million light years from Earth, from a collision between two neutron stars — collapsed cores of massive stars that are packed with neutrons and are among the densest objects in the universe.

The cosmic merger emitted a flash of light, which contained signatures of heavy metals.

“The magnitude of gold produced in the merger was equivalent to several times the mass of the Earth,” Chen says. “That entirely changed the picture. The math showed that binary neutron stars were a more efficient way to create heavy elements, compared to supernovae.”

A binary goldmine

Chen and her colleagues wondered: How might neutron star mergers compare to collisions between a neutron star and a black hole? This is another merger type that has been detected by LIGO and Virgo and could potentially be a heavy metal factory. Under certain conditions, scientists suspect, a black hole could disrupt a neutron star such that it would spark and spew heavy metals before the black hole completely swallowed the star.

The team set out to determine the amount of gold and other heavy metals each type of merger could typically produce. For their analysis, they focused on LIGO and Virgo’s detections to date of two binary neutron star mergers and two neutron star-black hole mergers.

The researchers first estimated the mass of each object in each merger, as well as the rotational speed of each black hole, reasoning that if a black hole is too massive or slow, it would swallow a neutron star before it had a chance to produce heavy elements. They also determined each neutron star’s resistance to being disrupted. The more resistant a star, the less likely it is to churn out heavy elements. They also estimated how often one merger occurs compared to the other, based on observations by LIGO, Virgo, and other observatories.

Finally, the team used numerical simulations developed by Foucart to calculate the average amount of gold and other heavy metals each merger would produce, given varying combinations of the objects’ mass, rotation, degree of disruption, and rate of occurrence.
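
The shape of such an estimate can be shown with a deliberately oversimplified calculation: multiply each merger type’s typical heavy-element yield by how often it occurs, then compare the totals. Every number below is a hypothetical placeholder chosen only to make the arithmetic concrete, not a value from the study.

```python
# Toy yield-times-rate comparison. ALL numbers are hypothetical
# placeholders, not values from the Chen, Vitale, and Foucart study.
YIELD_BNS = 0.05    # solar masses of heavy elements per binary
                    # neutron star merger (assumed)
RATE_BNS = 300      # such mergers per Gpc^3 per year (assumed)

YIELD_NSBH = 0.02   # per neutron star-black hole merger (assumed)
RATE_NSBH = 50      # such mergers per Gpc^3 per year (assumed)

ratio = (YIELD_BNS * RATE_BNS) / (YIELD_NSBH * RATE_NSBH)
print(f"binary neutron stars out-produce NS-BH mergers ~{ratio:.0f}x")
# For comparison, the study's analogous range was roughly 2x to 100x.
```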

On average, the researchers found that binary neutron star mergers could generate two to 100 times more heavy metals than mergers between neutron stars and black holes. The four mergers on which they based their analysis are estimated to have occurred within the last 2.5 billion years. They conclude, then, that during this period, at least, more heavy elements were produced by binary neutron star mergers than by collisions between neutron stars and black holes.

The scales could tip in favor of neutron star-black hole mergers if the black holes had high spins and low masses. However, scientists have not yet observed these kinds of black holes in the two mergers detected to date.

Chen and her colleagues hope that, as LIGO and Virgo resume observations next year, more detections will improve the team’s estimates for the rate at which each merger produces heavy elements. These rates, in turn, may help scientists determine the age of distant galaxies, based on the abundance of their various elements.

“You can use heavy metals the same way we use carbon to date dinosaur remains,” Vitale says. “Because all these phenomena have different intrinsic rates and yields of heavy elements, that will affect how you attach a time stamp to a galaxy. So, this kind of study can improve those analyses.”

This research was funded, in part, by NASA, the National Science Foundation, and the LIGO Laboratory.

Featured image: New research suggests binary neutron stars are a likely cosmic source for the gold, platinum, and other heavy metals we see today. Credit: National Science Foundation/LIGO/Sonoma State University/A. Simonnet, edited by MIT News


Provided by MIT

Dragging your feet? Lack of sleep affects your walk, new study finds (Physiology)

Periodically catching up on sleep can improve gait control for the chronically sleep-deprived.

Good sleep can be hard to come by. But a new study finds that if you can make up for lost sleep, even for just a few weekend hours, the extra zzz’s could help reduce fatigue-induced clumsiness, at least in how you walk.

There’s plenty of evidence to show sleep, and how much we get of it, can affect how well we do on cognitive tasks such as solving a math problem, holding a conversation, or even reading this article. Less explored is the question of whether sleep influences the way we walk or carry out other activities that are assumed to be less mentally taxing.

The new study, by researchers at MIT and the University of São Paulo in Brazil, reports that walking — and specifically, how well we can control our stride, or gait — can indeed be affected by lack of sleep.

In experiments with student volunteers, the team found that overall, the less sleep students got, the less control they had when walking during a treadmill test. For students who pulled an all-nighter before the test, this gait control plummeted even further.

Interestingly, among the students who didn’t stay up all night before the test but who generally had less-than-ideal sleep during the week, those who slept in on weekends performed better than those who didn’t.

“Scientifically, it wasn’t clear that almost automatic activities like walking would be influenced by lack of sleep,” says Hermano Krebs, principal research scientist in MIT’s Department of Mechanical Engineering. “We also find that compensating for sleep could be an important strategy. For instance, for those who are chronically sleep-deprived, like shift workers, clinicians, and some military personnel, if they build in regular sleep compensation, they might have better control over their gait.”

Krebs and his co-authors, including lead author Arturo Forner-Cordero of the University of São Paulo, have published the study today in the journal Scientific Reports.

Brainy influence

The act of walking was once seen as an entirely automatic process, involving very little conscious, cognitive control. Treadmill experiments with animals suggested that walking is governed mainly by reflexive, spinal activity rather than by more cognitive processes involving the brain.

“This is the case with quadrupeds, but the idea was more controversial in humans,” Krebs says.

Indeed, since those experiments, scientists including Krebs have shown that the act of walking is slightly more involved than once thought. Over the last decade, Krebs has extensively studied gait control and the mechanics of walking, in order to develop strategies and assistive robotics for patients who have suffered strokes and other motion-limiting conditions.

In previous experiments, he has shown, for instance, that healthy subjects can adjust their gait to match subtle changes in visual stimuli, without realizing they are doing so. These results suggested that walking involves some subtle, conscious influence, in addition to more automatic processes.

In 2013, he struck up a collaboration with Forner-Cordero through a grant from the MIT-Brazil MISTI program, and the team began to explore whether more subtle stimuli, such as auditory cues, might influence walking. In these initial experiments, volunteers were asked to walk on a treadmill as researchers played and slowly shifted the frequency of a metronome. The volunteers, without realizing it, matched their steps to the subtly changing beat.

“That suggested the concept of gait being only an automatic process is not a complete story,” Krebs says. “There’s a lot of influence coming from the brain.”

Sleep and walking

Forner-Cordero and Krebs continued to investigate the mechanics of walking and general motor control, mostly enlisting student volunteers in their experiments. Forner-Cordero in particular noticed that, toward the end of the semester, when students faced multiple exams and project deadlines, they were more sleep-deprived and happened to do worse in the team’s experiments.

“So, we decided to embrace the situation,” Forner-Cordero says.

In their new study, the team enlisted students from the University of São Paulo to take part in an experiment focused on the effects of sleep deprivation on gait control.

The students were each given a watch to track their activity over 14 days. This information gave researchers an idea of when and how long students were sleeping and active each day. The students were given no instruction on how much to sleep, so that the researchers could record their natural sleep patterns. On average, each student slept about six hours per day, although some students compensated, catching up on sleep over the two weekends during the 14-day period.

On the evening before the 14th day, one group of students stayed awake all night in the team’s sleep lab. This group was designated the Sleep Acute Deprivation group, or SAD. On the morning of the 14th day, all students went to the lab to perform a walking test.

Each student walked on a treadmill set at the same speed, as researchers played a metronome. The students were asked to keep step with the beat, as the researchers slowly and subtly raised and lowered the metronome’s speed, without telling the students they were doing so. Cameras captured the students’ walking, and specifically, the moment their heel struck the treadmill, compared with the beat of the metronome.
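
One simple way to score such a test, shown here as an illustrative Python sketch rather than the authors’ exact metric, is to measure each heel strike’s timing offset from the nearest metronome beat:

```python
import numpy as np

def mean_asynchrony(heel_strikes, beats):
    """Mean absolute timing error (seconds) between heel strikes and
    the nearest metronome beat. Illustrative metric only."""
    return float(np.mean([np.min(np.abs(beats - t)) for t in heel_strikes]))

# Example: a walker with ~50 ms of jitter around a 1-second beat.
beats = np.arange(0.0, 10.0, 1.0)
strikes = beats + np.random.default_rng(0).normal(0, 0.05, beats.size)
print(f"mean asynchrony: {mean_asynchrony(strikes, beats) * 1000:.0f} ms")
```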

“They had to synchronize their heel strike to the beat, and we found the errors were larger in people with acute sleep deprivation,” Forner-Cordero says. “They were off the rhythm, they missed beeps, and were performing in general, worse.”

This in itself may not be entirely surprising. But in comparing students who did not pull an all-nighter prior to the test, the researchers found an unexpected difference: The students who did slightly better were those who compensated and got slightly more sleep on the weekends, even when they performed the test at the tail end of the week.

“That’s paradoxical,” Forner-Cordero says. “Even at the peak of when most people would be tired, this compensating group did better, which we didn’t expect.”

“The results show that gait is not an automatic process, and that it can be affected by sleep deprivation,” Krebs says. “They also suggest strategies for mitigating effects of sleep deprivation. Ideally, everyone should sleep eight hours a night. But if we can’t, then we should compensate as much and as regularly as possible.”

This research was supported, in part, by the Office of Naval Research Global.

Featured image: A new study indicates that a lack of sleep affects how well we control our stride, or gait. Credit: Christine Daniloff, MIT; stock image


Reference: Umemura, G.S., Pinho, J.P., Duysens, J. et al. Sleep deprivation affects gait control. Sci Rep 11, 21104 (2021). https://doi.org/10.1038/s41598-021-00705-9


Provided by MIT

Artificial Intelligence Sheds Light On How the Brain Processes Language (Neuroscience)

Neuroscientists find the internal workings of next-word prediction models resemble those of language-processing centers in the brain.

In the past few years, artificial intelligence models of language have become very good at certain tasks. Most notably, they excel at predicting the next word in a string of text; this technology helps search engines and texting apps predict the next word you are going to type.

The most recent generation of predictive language models also appears to learn something about the underlying meaning of language. These models can not only predict the word that comes next, but also perform tasks that seem to require some degree of genuine understanding, such as question answering, document summarization, and story completion. 

Such models were designed to optimize performance for the specific function of predicting text, without attempting to mimic anything about how the human brain performs this task or understands language. But a new study from MIT neuroscientists suggests the underlying function of these models resembles the function of language-processing centers in the human brain.

Computer models that perform well on other types of language tasks do not show this similarity to the human brain, offering evidence that the human brain may use next-word prediction to drive language processing.

“The better the model is at predicting the next word, the more closely it fits the human brain,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines (CBMM), and an author of the new study. “It’s amazing that the models fit so well, and it very indirectly suggests that maybe what the human language system is doing is predicting what’s going to happen next.”

Joshua Tenenbaum, a professor of computational cognitive science at MIT and a member of CBMM and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL); and Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute, are the senior authors of the study, which appears this week in the Proceedings of the National Academy of Sciences. Martin Schrimpf, an MIT graduate student who works in CBMM, is the first author of the paper.

Making predictions

The new, high-performing next-word prediction models belong to a class of models called deep neural networks. These networks contain computational “nodes” that form connections of varying strength, and layers that pass information between each other in prescribed ways.

Over the past decade, scientists have used deep neural networks to create models of vision that can recognize objects as well as the primate brain does. Research at MIT has also shown that the underlying function of visual object recognition models matches the organization of the primate visual cortex, even though those computer models were not specifically designed to mimic the brain.

In the new study, the MIT team used a similar approach to compare language-processing centers in the human brain with language-processing models. The researchers analyzed 43 different language models, including several that are optimized for next-word prediction. These include a model called GPT-3 (Generative Pre-trained Transformer 3), which, given a prompt, can generate text similar to what a human would produce. Other models were designed to perform different language tasks, such as filling in a blank in a sentence.

As each model was presented with a string of words, the researchers measured the activity of the nodes that make up the network. They then compared these patterns to activity in the human brain, measured in subjects performing three language tasks: listening to stories, reading sentences one at a time, and reading sentences in which one word is revealed at a time. These human datasets included functional magnetic resonance imaging (fMRI) data and intracranial electrocorticographic measurements taken in people undergoing brain surgery for epilepsy.

They found that the best-performing next-word prediction models had activity patterns that very closely resembled those seen in the human brain. Activity in those same models was also highly correlated with human behavioral measures, such as how fast people were able to read the text.
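
Comparisons like this are typically made by fitting a linear mapping from a model’s internal activations to the neural recordings and then correlating predicted with measured responses on held-out stimuli. The Python sketch below shows that general recipe on synthetic stand-in data; the array shapes are invented and this is not the paper’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Stand-in data: activations of one model layer and neural responses
# for the same 1,000 stimuli (shapes are hypothetical).
rng = np.random.default_rng(0)
model_acts = rng.normal(size=(1000, 768))   # stimuli x model units
brain_resp = rng.normal(size=(1000, 50))    # stimuli x recording sites

X_tr, X_te, y_tr, y_te = train_test_split(
    model_acts, brain_resp, test_size=0.2, random_state=0)

# Fit a linear map from model activity to neural activity, then score
# it by correlating predicted and measured responses at each site.
pred = Ridge(alpha=1.0).fit(X_tr, y_tr).predict(X_te)
scores = [np.corrcoef(pred[:, i], y_te[:, i])[0, 1]
          for i in range(y_te.shape[1])]
print(f"median site correlation: {np.median(scores):.3f}")
# On real recordings, better next-word predictors yield higher scores.
```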

“We found that the models that predict the neural responses well also tend to best predict human behavior responses, in the form of reading times. And then both of these are explained by the model performance on next-word prediction. This triangle really connects everything together,” Schrimpf says.

“A key takeaway from this work is that language processing is a highly constrained problem: The best solutions to it that AI engineers have created end up being similar, as this paper shows, to the solutions found by the evolutionary process that created the human brain. Since the AI network didn’t seek to mimic the brain directly — but does end up looking brain-like — this suggests that, in a sense, a kind of convergent evolution has occurred between AI and nature,” says Daniel Yamins, an assistant professor of psychology and computer science at Stanford University, who was not involved in the study.

Game changer

One of the key computational features of predictive models such as GPT-3 is an element known as a forward one-way predictive transformer. This kind of transformer is able to make predictions of what is going to come next, based on previous sequences. A significant feature of this transformer is that it can make predictions based on a very long prior context (hundreds of words), not just the last few words.

Scientists have not found any brain circuits or learning mechanisms that correspond to this type of processing, Tenenbaum says. However, the new findings are consistent with hypotheses that have been previously proposed that prediction is one of the key functions in language processing, he says.

“One of the challenges of language processing is the real-time aspect of it,” he says. “Language comes in, and you have to keep up with it and be able to make sense of it in real time.”

The researchers now plan to build variants of these language processing models to see how small changes in their architecture affect their performance and their ability to fit human neural data.

“For me, this result has been a game changer,” Fedorenko says. “It’s totally transforming my research program, because I would not have predicted that in my lifetime we would get to these computationally explicit models that capture enough about the brain so that we can actually leverage them in understanding how the brain works.”

The researchers also plan to try to combine these high-performing language models with some computer models Tenenbaum’s lab has previously developed that can perform other kinds of tasks such as constructing perceptual representations of the physical world.

“If we’re able to understand what these language models do and how they can connect to models which do things that are more like perceiving and thinking, then that can give us more integrative models of how things work in the brain,” Tenenbaum says. “This could take us toward better artificial intelligence models, as well as giving us better models of how more of the brain works and how general intelligence emerges, than we’ve had in the past.”

The research was funded by a Takeda Fellowship; the MIT Shoemaker Fellowship; the Semiconductor Research Corporation; the MIT Media Lab Consortia; the MIT Singleton Fellowship; the MIT Presidential Graduate Fellowship; the Friends of the McGovern Institute Fellowship; the MIT Center for Brains, Minds, and Machines, through the National Science Foundation; the National Institutes of Health; MIT’s Department of Brain and Cognitive Sciences; and the McGovern Institute.

Other authors of the paper are Idan Blank PhD ’16 and graduate students Greta Tuckute, Carina Kauf, and Eghbal Hosseini.

Featured image: MIT neuroscientists find the internal workings of next-word prediction models resemble those of language-processing centers in the brain.


Provided by MIT

Solid, Liquid, Or Gas? Technique Quickly Identifies Physical State of Tissues and Tumors (Medicine)

The method could be a route to quicker, less invasive cancer diagnoses.

As an organism grows, the feel of it changes too. In the initial stages, an embryo takes on an almost fluid-like state that allows its cells to divide and expand. As it matures, its tissues and organs firm up into their final form. In certain species, this physical state of an organism can be an indicator of its developmental stage, and even the general state of its health.

Now, researchers at MIT have found that the way in which a tissue’s cells are arranged can serve as a fingerprint for the tissue’s “phase.” They have developed a method to decode images of cells in a tissue to quickly determine whether that tissue is more like a solid, liquid, or even a gas. Their findings are reported this week in the Proceedings of the National Academy of Sciences.

The team hopes that their method, which they’ve dubbed “configurational fingerprinting,” can help scientists track physical changes in an embryo as it develops. More immediately, they are applying their method to study and eventually diagnose a specific type of tissue: tumors.

In cancer, there has been evidence to suggest that, like an embryo, a tumor’s physical state may indicate its stage of growth. Tumors that are more solid may be relatively stable, whereas more fluid-like growths could be more prone to mutate and metastasize.

The MIT researchers are analyzing images of tumors, both grown in the lab and biopsied from patients, to identify cellular fingerprints that indicate whether a tumor is more like a solid, liquid, or gas. They envision that doctors can one day match an image of a tumor’s cells with a cellular fingerprint to quickly determine a tumor’s phase, and ultimately a cancer’s progression.

“Our method would allow a very easy diagnosis of the states of cancer, simply by examining the positions of cells in a biopsy,” says Ming Guo, associate professor of mechanical engineering at MIT. “We hope that, by simply looking at where the cells are, doctors can directly tell if a tumor is very solid, meaning it can’t metastasize yet, or if it’s more fluid-like, and a patient is in danger.”

Guo’s co-authors are Haiqian Yang, Yulong Han, Wenhui Tang, and Rohan Abeyaratne of MIT, Adrian Pegoraro of the University of Ottawa, and Dapeng Bi of Northeastern University.

Triangular order

In a perfect solid, the material’s individual constituents are configured as an orderly lattice, such as the atoms in a cube of crystal. If you were to cut a slice of the crystal and lay it on a table, you would see that the atoms are arranged such that you could connect them in a pattern of repeating triangles. In a perfect solid, the spacing between atoms would be exactly the same, so the triangles that connect them would be equilateral.

Guo took this construct as a template for a perfectly solid structure, with the idea that it could serve as a reference for comparing the cell configurations of actual, less-than-perfectly-solid tissues and tumors.

“Real tissues are never perfectly ordered,” Guo says. “They are mostly disordered. But still, there are subtle differences in how much they are disordered.”

Following this idea, the team started with images of various types of tissues and used software to map triangular connections between a tissue’s cells. In contrast to the equilateral triangles in a perfect solid, the maps produced triangles of various shapes and sizes, indicating cells with a range of spatial order (and disorder).

For each triangle in an image, they measured two key parameters: volumetric order, or the space within a triangle; and shear order, or how far a triangle’s shape is from equilateral. The first parameter indicates a material’s density fluctuation, while the second illustrates how prone the material is to deforming. These two parameters, they found, were enough to characterize whether a tissue was more like a solid, liquid, or gas.
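
A loose Python sketch of the mapping step, using a Delaunay triangulation to connect points and simplified proxies for the two parameters (the paper’s exact definitions differ):

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_fingerprint(points):
    """Toy configurational fingerprint: triangulate 2-D positions and,
    for each triangle, measure (1) its area, a proxy for local density
    fluctuation, and (2) how far its shape is from equilateral, a proxy
    for shear disorder. Illustrative definitions only."""
    tri = Delaunay(points)
    areas, shape_dev = [], []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        sides = np.array([np.linalg.norm(b - a),
                          np.linalg.norm(c - b),
                          np.linalg.norm(a - c)])
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
        areas.append(area)
        # An equilateral triangle has zero spread in its side lengths.
        shape_dev.append(sides.std() / sides.mean())
    areas = np.array(areas)
    return float(areas.std() / areas.mean()), float(np.mean(shape_dev))

# Disordered, "gas-like" positions score high on both measures.
print(triangle_fingerprint(np.random.rand(200, 2)))
```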

“We’re directly calculating the exact value of both parameters, compared to those of a perfect solid, and using those exact values as our fingerprints,” Guo explains.

Vapor tendrils

The team tested its new fingerprinting technique in several different scenarios. The first was a simulation in which they modeled the mixing of two types of molecules, the concentration of which they increased gradually. For each concentration, they mapped the molecules into triangles, then measured each triangle’s two parameters. From these measurements, they characterized the phase of the molecules and were able to reproduce the expected transitions between gas, liquid, and solid.

“People know what to expect in this very simple system, and this is what we see exactly,” Guo says. “This demonstrated the capability of our method.”

The researchers then went on to apply their method in systems with cells rather than molecules. For instance, they looked at videos, taken by other researchers, of a growing fruit fly wing. Applying their method, they could identify regions in the developing wing that morphed from solid to a more fluid state.

“As a fluid, this may help with growth,” Guo says. “How exactly that happens is still under investigation.”

He and his team also grew small tumors from cells of human breast tissue and watched as the tumors grew appendage-like tendrils — signs of early metastasis. When they mapped the configuration of cells in the tumors, they found that the noninvasive tumors resembled something between a solid and a liquid, and the invasive tumors were more gas-like, while the tendrils showed an even more disordered state. 

“Invasive tumors were more like vapor, and they want to spread out and go everywhere,” Guo says. “Liquids can barely be compressed. But gases are compressible — they can swell and shrink easily, and that’s what we see here.”

The team is working with samples of human cancer biopsies, which they are imaging and analyzing to hone their cellular fingerprints. Eventually, Guo envisions that mapping a tissue’s phases can be a quick and less invasive way to diagnose multiple types of cancer.

“Doctors typically have to take biopsies, then stain for different markers depending on the cancer type, to diagnose,” Guo says. “Perhaps one day we can use optical tools to look inside the body, without touching the patient, to see the position of cells, and directly tell what stage of cancer a patient is in.”

This research was supported in part by the National Institutes of Health, MathWorks, and the Jeptha H. and Emily V. Wade Award at MIT.

Featured image: MIT researchers have developed a way to decode images of cells to determine whether a tissue is more like a solid, liquid, or even a gas. These visual “fingerprints” may help to quickly diagnose and track various cancers. Credit: breast cancer cell by Anne Weston, Francis Crick Institute, edited by MIT News


Provided by MIT

Breakthrough Listen Releases Analysis Of Previously Detected Signal (Astronomy)

Findings Published in Nature Astronomy; Publicly Available at seti.berkeley.edu/blc1.

An intriguing candidate signal picked up last year by the Breakthrough Listen project has been subjected to intensive analysis that suggests it is unlikely to originate from the Proxima Centauri system. Instead, it appears to be an artifact of Earth-based interference from human technologies, the Breakthrough Initiatives announced today. Two research papers, published in Nature Astronomy, discuss both the detection of the candidate signal and an advanced data analysis process that can finely discern “false positives.”

“The significance of this result is that the search for civilizations beyond our planet is now a mature, rigorous field of experimental science,” said Yuri Milner, founder of the Breakthrough Initiatives.

Breakthrough Listen (a program of the Breakthrough Initiatives) is an astronomical science program searching for technosignatures – signs of technology that may have been developed by extraterrestrial intelligence. Listen’s science team, led by Dr. Andrew Siemion at the University of California, Berkeley, uses some of the largest radio telescopes in the world, equipped with the most capable digital processing systems, to capture data across broad swaths of the radio spectrum in the direction of a wide range of celestial targets. The search is challenging because Earth is awash with radio signals from human technology – cell phones, radar, satellites, TV transmitters, and so on. Searching for a faint signal from a distant star is akin to picking out a needle in a vast digital haystack – and one that is changing constantly over time.

The CSIRO Parkes Telescope in New South Wales, Australia (one of the largest telescopes in the Southern Hemisphere, known as ‘Murriyang’ in Wiradjuri) is among the facilities participating in Breakthrough Listen’s search. One of the targets being monitored by Parkes is Proxima Centauri, the Sun’s nearest neighboring star, at a distance of just over 4 light years. The star is a red dwarf orbited by two known exoplanets. The Listen team scanned the target across a frequency range of 700 MHz to 4 GHz, with a resolution of 3.81 Hz – in other words, performing the equivalent of tuning to over 800 million radio channels at a time, with exquisite detection sensitivity.
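
That channel count follows from a one-line calculation:

```python
# Sanity check of the "over 800 million channels" figure quoted above.
bandwidth_hz = 4.0e9 - 700e6   # 700 MHz to 4 GHz scan range
resolution_hz = 3.81           # per-channel frequency resolution
print(f"{bandwidth_hz / resolution_hz:.2e} channels")   # ~8.66e+08
```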

Shane Smith, an undergraduate researcher working with Listen Project Scientist Dr. Danny Price in the summer 2020 Breakthrough Listen internship program, ran the data from these observations through Breakthrough Listen’s search pipeline. He detected over 4 million “hits” – frequency ranges that had signs of radio emission. This is actually quite typical for Listen’s observations; the vast majority of these hits make up the haystack of emissions from human technology.

As with all of Listen’s observations, the pipeline filters out signals that are unlikely to be coming from a transmitter at a large distance from Earth, according to two main criteria (a toy sketch of both filters follows this list):

  • Firstly, is the signal steadily changing in frequency with time? A transmitter on a distant planet would be expected to be in motion with respect to the telescope, leading to a Doppler drift akin to the change in pitch of an ambulance siren as it moves relative to an observer. Rejecting hits with no such signs of motion reduces the number of hits from 4 million to around 1 million for this particular dataset.
  • Secondly, for the hits that remain, do they appear to be coming from the direction of the target? To determine this, the telescope points in the direction of Proxima Centauri, and then points away, repeating this “ON – OFF” pattern several times. Local interfering sources are expected to affect both ON and OFF observations, whereas a candidate technosignature should appear only in the ON observations.
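
In code terms, the two filters amount to something like the following Python sketch; the thresholds are invented stand-ins for the pipeline’s actual, carefully tuned parameters.

```python
import numpy as np

def passes_filters(drift_hz_per_s, on_power, off_power, snr_threshold=10.0):
    """Toy version of the two technosignature filters described above.

    drift_hz_per_s      : measured Doppler drift rate of the hit
    on_power, off_power : detected power in the alternating ON-source
                          and OFF-source pointings
    Thresholds are illustrative, not Breakthrough Listen's real values.
    """
    # Filter 1: require a nonzero Doppler drift, as expected for a
    # transmitter moving relative to the telescope.
    if abs(drift_hz_per_s) < 1e-4:   # assumed minimum drift, Hz/s
        return False
    # Filter 2: the hit must appear in every ON pointing and in no OFF
    # pointing; local interference tends to show up in both.
    return bool(np.all(on_power > snr_threshold)
                and np.all(off_power < snr_threshold))
```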

Even after both of these data filters are applied, a handful of candidates remain that must be inspected visually. Sometimes a faint signal is actually visible in the OFF observations but is not quite strong enough to be picked up by automated algorithms. Sometimes similar signals appear in neighboring observations, indicative of interfering sources that may be turning on and off at just the wrong period, or the team can track down the signals to satellites that commonly broadcast in certain frequency bands.

Occasionally an intriguing signal remains and must be subjected to further checks. Such a signal-of-interest was discovered by Smith in Listen’s observations of Proxima Centauri using the Parkes telescope: a narrow-band, Doppler-drifting signal that persisted over five hours of observations and appeared only in “ON” observations of the target star, not in the interspersed “OFF” observations. It had some of the characteristics expected of a technosignature candidate.

Dr. Sofia Sheikh, currently a postdoctoral researcher with the Listen team at UC Berkeley, dug into a larger dataset of observations taken at other times. She found around 60 signals that share many characteristics of the candidate, but are also seen in their respective OFF observations.

“We can therefore confidently say that these other signals are local to the telescope and human-generated,” says Sheikh. “The signals are spaced at regular frequency intervals in the data, and these intervals appear to correspond to multiples of frequencies used by oscillators that are commonly used in various electronic devices. Taken together, this evidence suggests that the signal is interference from human technology, although we were unable to identify its specific source. The original signal found by Shane Smith is not obviously detected when the telescope is pointed away from Proxima Centauri – but given a haystack of millions of signals, the most likely explanation is still that it is a transmission from human technology that happens to be ‘weird’ in just the right way to fool our filters.”

Executive Director of the Breakthrough Initiatives Dr. S. Pete Worden remarked, “While we were unable to conclude a genuine technosignature, we are increasingly confident that we have the necessary tools to detect and validate such signatures if they exist.”

Breakthrough Listen is making all of the data from the Parkes scans available to the public to examine for themselves. The team has also just published two papers (led by Smith and Sheikh) outlining the details of the data acquisition and analysis, and a research note describing follow-up observations of Proxima Centauri conducted with the Parkes Telescope in April 2021. Listen will continue monitoring Proxima Centauri, which remains a compelling target for technosignature searches, using a suite of telescopes around the world. And the team continues to refine algorithms to improve their ability to discriminate between “needles” and “hay”, including as part of a recently completed crowdsourced data processing competition in collaboration with kaggle.com.

“In the case of this particular candidate,” remarks Siemion, “our analysis suggests that it’s highly unlikely that it is really from a transmitter out at Proxima Centauri. However, this is undoubtedly one of the most intriguing signals we’ve seen to date.”

Preprints of the papers, links to the data and associated software, artwork, videos, and supplementary content may be accessed at seti.berkeley.edu/blc1.

Featured image: Artist’s impression of the Proxima Centauri system. Credit: Breakthrough Listen / Zayna Sheikh


Provided by Breakthrough Initiatives

Searching for Earth 2.0? Zoom In On A Star (Planetary Science)

Astronomers searching for Earth-like planets in other solar systems have made a breakthrough by taking a closer look at the surface of stars.

A new technique developed by an international team of researchers — led by Yale astronomers Rachael Roettenbacher, Sam Cabot, and Debra Fischer — uses a combination of data from ground-based and orbiting telescopes to distinguish between light signals coming from stars and signals coming from planets orbiting those stars.

A study detailing the discovery has been accepted by The Astronomical Journal.

“Our techniques pull together three different types of contemporaneous observations to focus on understanding the star and what its surface looks like,” said Roettenbacher, a 51 Pegasi b postdoctoral fellow at Yale and lead author of the paper. “From one of the data sets, we create a map of the surface that allows us to reveal more detail in the radial velocity data as we search for signals from small planets.

“This procedure shows the value of obtaining multiple types of observation at once.”

For decades, astronomers have used a method called radial velocity as one way to look for exoplanets in other solar systems. Radial velocity refers to the motion of a star along an observer’s sightline.

Astronomers look for variations in a star’s velocity that might be caused by the gravitational pull of an orbiting planet. This data comes via spectrometers — instruments that look at light being emitted by a star and stretch the light into a spectrum of frequencies that can be analyzed.
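
The measurement rests on the classical Doppler relation: the fractional wavelength shift of a spectral line equals the line-of-sight velocity divided by the speed of light. A short Python illustration (the wavelengths are made up, but the resulting ~12 m/s is the scale of wobble a Jupiter-like planet induces on a Sun-like star; an Earth-like planet induces closer to 0.1 m/s):

```python
C = 299_792_458.0   # speed of light, m/s

def radial_velocity(lambda_obs_nm, lambda_rest_nm):
    """Line-of-sight velocity in m/s from a Doppler-shifted line;
    positive means the star is receding."""
    return C * (lambda_obs_nm - lambda_rest_nm) / lambda_rest_nm

# An illustrative 0.00002 nm shift of a 500 nm line:
print(f"{radial_velocity(500.00002, 500.0):.1f} m/s")   # ~12 m/s
```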

As astronomers have rushed to develop methods for detecting Earth-like planets, however, they have run into a barrier that has stopped progress for years. The energy emitted by stars creates a boiling cauldron of convecting plasma that distorts measurements of radial velocity, obscuring signals from small, rocky planets.

But a new generation of advanced instruments is attacking this problem. These instruments include the EXtreme PREcision Spectrograph (EXPRES), which was designed and built by Fischer’s team at Yale; the Transiting Exoplanet Survey Satellite (TESS); and the Center for High Angular Resolution Astronomy (CHARA) interferometric telescope array.

For the new study, the researchers used TESS data to reconstruct the surface of Epsilon Eridani, a star in the southern constellation of Eridanus that is visible from most of Earth’s surface. They then looked for starspots — cooler regions on the surface of a star caused by strong magnetic fields.

“With the reconstructions, you know the locations and sizes of spots on the star, and you also know how quickly the star rotates,” said Cabot. “We developed a method that then tells you what kind of signal you would see with a spectrometer.”

The researchers then compared their TESS reconstructions with EXPRES spectrometer data collected simultaneously from Epsilon Eridani.

“This allowed us to directly tie contributions of the radial velocity signature to specific features on the surface,” Fischer said. “The radial velocities from the starspots match up beautifully with the data from EXPRES.”

The researchers also used another technique, called interferometry, to detect a starspot on Epsilon Eridani — the first interferometric detection of a starspot on a star similar to the Sun.

Interferometry combines separated telescopes to create a much larger telescope. For this, the researchers used the CHARA Array, the world’s largest optical interferometer, located in California.

Roettenbacher said she and her colleagues will apply their new technique to sets of interferometric observations in order to directly image the entire surface of a star and determine its radial velocity contribution.

“Interferometric imaging is not something that is done for a lot of stars because the star needs to be nearby and bright. There are a handful of other stars on which we can also apply our pioneering approach,” Roettenbacher said.

Former Yale researchers Lily Zhao, who is now at the Flatiron Institute, and John Brewer, who is now at San Francisco State University, are among the study’s co-authors.

The research was supported by the Heising-Simons Foundation, an anonymous Yale alumnus, the National Science Foundation, and NASA.

Featured image: Reconstructed surface of the spotted star Epsilon Eridani with each panel showing the star advanced one-fifth of its rotation. (Visualization by Sam Cabot)


Provided by Yale University
