ALMA Scientists Uncover the Mystery of Early Massive Galaxies Running on Empty (Cosmology)

New study reveals that early galaxies have no fuel, and something is stopping them from refilling the tank

Early massive galaxies—those that formed in the three billion years following the Big Bang—should have contained large amounts of cold hydrogen gas, the fuel required to make stars. But scientists observing the early Universe with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Hubble Space Telescope have spotted something strange: half a dozen early massive galaxies that ran out of fuel. The results of the research are published today in Nature.

Known as “quenched” galaxies—galaxies that have shut down star formation—the six galaxies selected for observation from the REsolving QUIEscent Magnified galaxies at high redshift (REQUIEM) survey are inconsistent with what astronomers expect of the early Universe.

“The most massive galaxies in the Universe lived fast and furious, creating their stars in a remarkably short amount of time. Gas, the fuel of star formation, should be plentiful at these early times in the Universe,” said Kate Whitaker, lead author on the study, and assistant professor of astronomy at the University of Massachusetts, Amherst. “We originally believed that these quenched galaxies hit the brakes just a few billion years after the Big Bang. In our new research, we’ve concluded that early galaxies didn’t actually put the brakes on, but rather, they were running on empty.” 

To better understand how the galaxies formed and died, the team observed them using Hubble, which revealed details about the stars residing in the galaxies. Concurrent observations with ALMA revealed the galaxies’ continuum emission—a tracer of dust—at millimeter wavelengths, allowing the team to infer the amount of gas in the galaxies. The use of the two telescopes is by careful design, as the purpose of REQUIEM is to use strong gravitational lensing as a natural telescope to observe dormant galaxies with higher spatial resolution. This, in turn, gives scientists a clear view of galaxies’ internal goings-on, a task often impossible with those running on empty.
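The inference chain described above, from millimeter continuum to gas content, can be sketched at its crudest level: the continuum flux yields a dust mass, and an assumed gas-to-dust ratio converts that to a cold gas mass. The numbers below are illustrative placeholders, not values from the REQUIEM study:

```python
# Minimal sketch of the dust-continuum-to-gas inference, with illustrative
# placeholder numbers (NOT values from the study): the ALMA continuum yields
# a dust mass, and an assumed gas-to-dust ratio converts it to cold gas mass.
M_DUST_SOLAR = 1e7   # hypothetical dust mass from the continuum, in solar masses
GAS_TO_DUST = 100    # assumed Milky-Way-like gas-to-dust mass ratio

m_gas = GAS_TO_DUST * M_DUST_SOLAR
print(f"Inferred cold gas mass ~ {m_gas:.1e} solar masses")
```

In practice the conversion carries significant systematic uncertainties (dust temperature, emissivity, and the gas-to-dust ratio itself), which is part of why these first measurements are notable.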

“If a galaxy isn’t making many new stars, it gets very faint very fast, so it is difficult or impossible to observe it in detail with any individual telescope. REQUIEM solves this by studying galaxies that are gravitationally lensed, meaning their light gets stretched and magnified as it bends and warps around other galaxies much closer to the Milky Way,” said Justin Spilker, a co-author on the new study, and a NASA Hubble postdoctoral fellow at the University of Texas at Austin. “In this way, gravitational lensing, combined with the resolving power and sensitivity of Hubble and ALMA, acts as a natural telescope and makes these dying galaxies appear bigger and brighter than they are in reality, allowing us to see what’s going on and what isn’t.”

The new observations showed that the cessation of star formation in the six target galaxies was not caused by a sudden inefficiency in the conversion of cold gas to stars. Instead, it was the result of the depletion or removal of the gas reservoirs in the galaxies. “We don’t yet understand why this happens, but possible explanations could be that either the primary gas supply fueling the galaxy is cut off, or perhaps a supermassive black hole is injecting energy that keeps the gas in the galaxy hot,” said Christina Williams, an astronomer at the University of Arizona and co-author on the research. “Essentially, this means that the galaxies are unable to refill the fuel tank, and thus, unable to restart the engine on star production.”

The study also represents a number of important firsts in the measurement of early massive galaxies, synthesizing information that will guide future studies of the early Universe for years to come. “These are the first measurements of the cold dust continuum of distant dormant galaxies, and in fact, the first measurements of this kind outside the local Universe,” said Whitaker, adding that the new study has allowed scientists to see how much gas individual dead galaxies have. “We were able to probe the fuel of star formation in these early massive galaxies deep enough to take the first measurements of the gas tank reading, giving us a critically missing viewpoint of the cold gas properties of these galaxies.”

Although the team now knows that these galaxies are running on empty and that something is keeping them from refilling the tank and from forming new stars, the study represents just the first in a series of inquiries into what made early massive galaxies go, or not. “We still have so much to learn about why the most massive galaxies formed so early in the Universe and why they shut down their star formation when so much cold gas was readily available to them,” said Whitaker. “The mere fact that these massive beasts of the cosmos formed 100 billion stars within about a billion years and then suddenly shut down their star formation is a mystery we would all love to solve, and REQUIEM has provided the first clue.”

Featured image: Credit: ALMA (ESO/NAOJ/NRAO)/S. Dagnello (NRAO), STScI, K. Whitaker et al.


“Quenching of star formation from a lack of inflowing gas to galaxies,” K. Whitaker et al., 2021 Sept. 23, Nature.

A complementary press release has been published by STScI.

Provided by NRAO

Scientists use seasons to find water for future Mars astronauts (Astronomy)

An international team of researchers has used seasonal variations to identify likely sub-surface deposits of water ice in the temperate regions of Mars where it would be easiest for future human explorers to survive. The results are being presented this week by Dr Germán Martínez at the European Planetary Science Conference (EPSC) 2021.

Using data from NASA’s Mars Odyssey, which has spent almost 20 years orbiting the Red Planet, Martínez and his colleagues have identified two areas of particular interest: Hellas Planitia and Utopia Rupes, in the southern and northern hemispheres respectively. Seasonal variations in the levels of hydrogen detected suggest that significant quantities of water ice can be found within the metre or so below the surface in these regions.

Martínez, of the Lunar and Planetary Institute, said: ‘Data from Mars Odyssey’s Neutron Spectrometer showed signs of hydrogen beneath the surface of Mars from mid to equatorial latitudes, but we still had the challenge of working out whether this is in the form of water ice, which can readily be used as a resource, or locked away in mineral salts or in soil grains and minerals. This is where the seasonal variation provides an important clue. As the coldest ground temperatures occur at the same time as the largest observed increase in hydrogen content, it suggests that water ice is forming in the shallow subsurface of these regions during the fall and winter seasons, and then sublimating into gas during the warm season of each hemisphere.’
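The seasonal argument in the quote can be sketched as a toy test: flag a region as a shallow-ice candidate if its subsurface hydrogen signal peaks in winter (when ice forms) and bottoms out in summer (when it sublimes). The data and function below are hypothetical illustrations, not the Mars Odyssey analysis:

```python
# Toy seasonal check in the spirit of the analysis: a region is a shallow-ice
# candidate if hydrogen content peaks in winter and is lowest in summer.
# The hydrogen mass fractions below are made-up illustrative values.
seasons = {"winter": 0.12, "spring": 0.09, "summer": 0.06, "fall": 0.10}

def seasonal_ice_candidate(h_by_season: dict) -> bool:
    """True if the hydrogen signal peaks in winter and bottoms out in summer."""
    return (max(h_by_season, key=h_by_season.get) == "winter"
            and min(h_by_season, key=h_by_season.get) == "summer")

print(seasonal_ice_candidate(seasons))  # a winter-peaked region passes the test
```

A region whose hydrogen signal is flat across the year would fail this test, consistent with the hydrogen being locked in minerals rather than in seasonally exchanging ice.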

Water ice in the shallow subsurface has been found in plentiful supply at the poles. However, the frigid temperatures and the limited solar light make polar regions a hostile environment for human exploration. The areas from equatorial to mid latitudes are much more hospitable for both humans and robotic rovers, but only deeper reservoirs of water ice have been detected to date, and these are hard to reach. 

To survive on Mars, astronauts would need to rely on resources already available in-situ, as sending regular supplies across the 55 million kilometres between Earth and Mars at their closest point is not an option. As liquid water is not available in the cold and arid Martian environment, ice is a vital resource. Water will not only be essential for life-support of the explorers, or the growth of plants and food, but could also be broken down into oxygen and hydrogen for use as rocket fuel. 

Two other regions are rich in hydrogen: Tharsis Montes and the Medusae Fossae Formation. However, these do not display seasonal variations and appear to hold less accessible forms of water.

‘Definitely, those regions too are interesting for future missions,’ added Martínez. ‘What we plan to do now for them, as for Hellas Planitia and Utopia Rupes, is to study their mineralogy with other instruments in the hope of spotting types of rock altered by water. Such areas would be ideal candidates for robotic missions, including sample-return ones, as the ingredients for rocket fuel would be available there too.’

Featured image: Global map of Mars with overlaid topography indicating areas with significant seasonal variations in hydrogen content during northern spring (top) and fall (bottom). Green (red) represents increase (decrease) in hydrogen content. The areas highlighted in orange are Hellas Planitia in the southern hemisphere, and Utopia Rupes in the northern hemisphere. These are the only extended regions undergoing a significant variation throughout the Martian year.  Credit: G. Martínez.

Further information:

EPSC2021-443: Looking for Non-Polar Shallow Subsurface Water Ice in Preparation for Future Human Exploration of Mars  


Provided by Europlanet

Earth and Venus Grew up as Rambunctious Planets (Planetary Science)

What doesn’t stick comes around: Using machine learning and simulations of giant impacts, researchers at the Lunar and Planetary Laboratory found that the planets residing in the inner solar system were likely born from repeated hit-and-run collisions, challenging conventional models of planet formation. 

Planet formation – the process by which neat, round, distinct planets form from a roiling, swirling cloud of rugged asteroids and mini planets – was likely even messier and more complicated than most scientists would care to admit, according to new research led by researchers at the University of Arizona Lunar and Planetary Laboratory.

The findings challenge the conventional view, in which collisions between smaller building blocks cause them to stick together and, over time, repeated collisions accrete new material to the growing baby planet.

Instead, the authors propose and demonstrate evidence for a novel “hit-and-run-return” scenario, in which pre-planetary bodies spent a good part of their journey through the inner solar system crashing into and ricocheting off of each other, before running into each other again at a later time. Having been slowed down by their first collision, they would be more likely to stick together the next time. Picture a game of billiards, with the balls coming to rest, as opposed to pelting a snowman with snowballs, and you get the idea.

The research is published in two reports appearing in the Sept. 23 issue of The Planetary Science Journal, with one focusing on Venus and Earth, and the other on Earth’s moon. Central to both publications, according to the author team, which was led by planetary sciences and LPL professor Erik Asphaug, is the largely unrecognized point that giant impacts are not the efficient mergers scientists believed them to be.

“We find that most giant impacts, even relatively ‘slow’ ones, are hit-and-runs. This means that for two planets to merge, you usually first have to slow them down in a hit-and-run collision,” Asphaug said. “To think of giant impacts, for instance the formation of the moon, as a singular event is probably wrong. More likely it took two collisions in a row.”

The inner planets: Mercury, Venus, Earth and Mars
The terrestrial planets of the inner solar system, shown to scale. According to ‘late stage accretion’ theory, Mars and Mercury (front left and right) are what’s left of an original population of colliding embryos, and Venus and Earth grew in a series of giant impacts. New research focuses on the preponderance of hit-and-run collisions in giant impacts, and shows that proto-Earth would have served as a ‘vanguard’, slowing down planet-sized bodies in hit-and-runs. But it is proto-Venus, more often than not, that ultimately accretes them, meaning it was easier for Venus to acquire bodies from the outer solar system. Credit: Lsmpascal/Wikimedia Commons

One implication is that Venus and Earth would have had very different experiences in their growth as planets, despite being immediate neighbors in the inner solar system. In this paper, led by Alexandre Emsenhuber, who did this work during a postdoctoral fellowship in Asphaug’s lab and is now at Ludwig Maximilian University in Munich, the young Earth would have served to slow down interloping planetary bodies, making them ultimately more likely to collide with and stick to Venus.

“We think that during solar system formation, the early Earth acted like a vanguard for Venus,” Emsenhuber said.

The solar system is what scientists call a gravity well, the concept behind a popular attraction at science exhibits. Visitors toss a coin into a funnel-shaped gravity well and watch their cash complete several orbits before it drops into the center hole. The closer a planet is to the sun, the stronger the gravitational pull it experiences. That’s why the inner planets of the solar system on which these studies were focused – Mercury, Venus, Earth and Mars – orbit the sun faster than, say, Jupiter, Saturn and Neptune. As a result, the closer an object ventures to the sun, the more likely it is to stay there.
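The "inner planets orbit faster" point follows directly from Newtonian dynamics: the circular orbital speed is v = √(GM☉/r). A minimal sketch (using approximate semi-major axes, not values from the study) illustrates the trend:

```python
import math

# Circular orbital speed around the Sun: v = sqrt(G * M_sun / r).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

# Approximate semi-major axes in AU (rounded reference values).
orbits_au = {"Mercury": 0.39, "Venus": 0.72, "Earth": 1.0, "Mars": 1.52,
             "Jupiter": 5.2, "Saturn": 9.58, "Neptune": 30.1}

def orbital_speed_kms(r_au: float) -> float:
    """Circular orbital speed in km/s at a distance r_au from the Sun."""
    return math.sqrt(G * M_SUN / (r_au * AU)) / 1e3

for name, r in orbits_au.items():
    print(f"{name:8s} {orbital_speed_kms(r):5.1f} km/s")
```

Mercury comes out near 48 km/s and Neptune near 5 km/s, the speed gradient that makes the inner solar system the bottom of the gravity well.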

So when an interloping planet hit the Earth, it was less likely to stick to Earth, and instead more likely to end up at Venus, Asphaug explained.

“The Earth acts as a shield, providing a first stop against these impacting planets,” he said. “More likely than not, a planet that bounces off of Earth is going to hit Venus and merge with it.”

Emsenhuber uses the analogy of a ball bouncing down a staircase to illustrate the idea of what drives the vanguard effect: A body coming in from the outer solar system is like a ball bouncing down a set of stairs, with each bounce representing a collision with another body.

“Along the way, the ball loses energy, and you’ll find it will always bounce downstairs, never upstairs,” he said. “Because of that, the body cannot leave the inner solar system anymore. You generally only go downstairs, toward Venus, and an impactor that collides with Venus is pretty happy staying in the inner solar system, so at some point it is going to hit Venus again.”

Earth has no such vanguard to slow down its interloping planets. This leads to a difference between the two similar-sized planets that conventional theories cannot explain, the authors argue.

“The prevailing idea has been that it doesn’t really matter if planets collide and don’t merge right away, because they are going to run into each other again at some point and merge then,” Emsenhuber said. “But that is not what we find. We find they end up more frequently becoming part of Venus, instead of returning back to Earth. It’s easier to go from Earth to Venus than the other way around.”

Simulated aftermath of a hit-and-run collision between the young Earth and another planetary body
The moon is thought to be the aftermath of a giant impact. According to a new theory, there were two giant impacts in a row, separated by about 1 million years, involving a Mars-sized ‘Theia’ and proto-Earth. In this image, the proposed hit-and-run collision is simulated in 3D, shown about an hour after impact. A cut-away view shows the iron cores. Theia (or most of it) barely escapes, so a follow-on collision is likely. Credit: A. Emsenhuber/University of Bern/University of Munich

To track all these planetary orbits and collisions, and ultimately their mergers, the team used machine learning to obtain predictive models from 3D simulations of giant impacts. The team then used these data to rapidly compute the orbital evolution, including hit-and-run and merging collisions, to simulate terrestrial planet formation over the course of 100 million years. In the second paper, the authors propose and demonstrate their hit-and-run-return scenario for the moon’s formation, recognizing the primary problems with the standard giant impact model.
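The outcome classification that such surrogate models learn can be caricatured with a toy rule of thumb: grazing, fast impacts tend to be hit-and-runs, while slow, head-on ones tend to merge. The threshold below is a hypothetical illustration, not the trained model from the papers:

```python
# Toy classifier for giant-impact outcomes, in the spirit of the machine-learned
# surrogates described in the article. The decision boundary is a made-up
# illustration, NOT the model trained on the 3D simulations.
def impact_outcome(v_over_vesc: float, angle_deg: float) -> str:
    """Classify a giant impact from impact velocity (in units of the mutual
    escape velocity) and impact angle (0 deg = head-on, 90 deg = grazing).
    Fast, oblique impacts tend to be hit-and-runs; slow, head-on ones merge."""
    if v_over_vesc > 1.1 and angle_deg > 30:   # assumed toy boundary
        return "hit-and-run"
    return "merger"

print(impact_outcome(1.3, 45))  # fast, oblique
print(impact_outcome(1.0, 15))  # slow, head-on
```

The real surrogates replace this hand-drawn boundary with predictions fitted to a suite of 3D impact simulations, which is what lets the team evolve orbits for 100 million years without running a full hydrodynamic simulation at every collision.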

“The standard model for the moon requires a very slow collision, relatively speaking,” Asphaug said, “and it creates a moon that is composed mostly of the impacting planet, not the proto-Earth, which is a major problem since the moon has an isotopic chemistry almost identical to Earth.”

In the team’s new scenario, a roughly Mars-sized protoplanet hits the Earth, as in the standard model, but is a bit faster so it keeps going. It returns in about 1 million years for a giant impact that looks a lot like the standard model.

“The double impact mixes things up much more than a single event,” Asphaug said, “which could explain the isotopic similarity of Earth and moon, and also how the second, slow, merging collision would have happened in the first place.”

The researchers think the resulting asymmetry in how the planets were put together points the way to future studies addressing the diversity of terrestrial planets. For example, we don’t understand how Earth ended up with a magnetic field that is much stronger than that of Venus, or why Venus has no moon.

Their research indicates systematic differences in dynamics and composition, according to Asphaug.

“In our view, Earth would have accreted most of its material from collisions that were head-on hits, or else slower than those experienced by Venus,” he said. “Collisions into the Earth that were more oblique and higher velocity would have preferentially ended up on Venus.”

This would create a bias in which, for example, protoplanets from the outer solar system, at higher velocity, would have preferentially accreted to Venus instead of Earth. In short, Venus could be composed of material that was harder for the Earth to get ahold of.

“You would think that Earth is made up more of material from the outer system because it is closer to the outer solar system than Venus. But actually, with Earth in this vanguard role, it makes it actually more likely for Venus to accrete outer solar system material,” Asphaug said.

The co-authors on the two papers are Saverio Cambioni and Stephen R. Schwartz at the Lunar and Planetary Laboratory and Travis S. J. Gabriel at Arizona State University in Tempe, Arizona.

Featured image: Artist’s illustration of two massive objects colliding. Credit: NASA/JPL-Caltech

Provided by University of Arizona

Gamma Rays and Neutrinos from Mellow Supermassive Black Holes (Cosmology)

The Universe is filled with energetic particles and radiation, such as X-rays, gamma rays, and neutrinos. However, the origins of most of these high-energy cosmic particles remain unexplained.

Now, an international research team has proposed a scenario that explains these: black holes with low activity act as major factories of high-energy cosmic particles.

Details of their research were published in the journal Nature Communications.

Gamma rays are high-energy photons that are many orders of magnitude more energetic than visible light. Space satellites have detected cosmic gamma rays with energies from megaelectron volts to gigaelectron volts.
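To put "many orders of magnitude" in numbers: taking a visible photon at roughly 2 eV as the reference (an assumed representative value for green light), a quick calculation shows how far above it MeV and GeV gamma rays sit:

```python
import math

# Rough photon energies in electron volts (assumed representative values).
visible_ev = 2.0   # a green optical photon, wavelength ~550 nm
mev_ev = 1.0e6     # 1 megaelectron volt
gev_ev = 1.0e9     # 1 gigaelectron volt

for label, energy in [("MeV gamma ray", mev_ev), ("GeV gamma ray", gev_ev)]:
    orders = math.log10(energy / visible_ev)
    print(f"{label}: ~{orders:.1f} orders of magnitude above visible light")
```

An MeV photon carries roughly half a million times the energy of a visible one; a GeV photon, roughly half a billion times.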

Neutrinos are subatomic particles whose mass is nearly zero. They rarely interact with ordinary matter. Researchers at the IceCube Neutrino Observatory have also measured high-energy cosmic neutrinos.

Both gamma rays and neutrinos should be created by powerful cosmic-ray accelerators or surrounding environments in the Universe. However, their origins are still unknown. It is widely believed that active supermassive black holes (so-called active galactic nuclei), especially those with powerful jets, are the most promising emitters of high-energy gamma rays and neutrinos. However, recent studies have revealed that they do not explain the observed gamma rays and neutrinos, suggesting that other source classes are necessary.

The new model shows that not only active black holes but also non-active, “mellow” ones are important, acting as gamma-ray and neutrino factories.

A schematic picture of mellow supermassive black holes. Hot plasma forms around a supermassive black hole. Electrons are heated to ultrahigh temperatures and emit gamma rays efficiently. Protons are accelerated to high energies and emit neutrinos. © Shigeo S. Kimura

All galaxies are expected to contain supermassive black holes at their centers. When matter falls into a black hole, a huge amount of gravitational energy is released. This process heats the gas, forming high-temperature plasma. The temperature can reach as high as tens of billions of degrees Celsius for low-accreting black holes because of inefficient cooling, and the plasma can generate gamma rays in the megaelectron volt range.
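The link between those temperatures and megaelectron volt photons is simply the thermal energy scale kT. A short sketch with illustrative temperatures (not the paper's model) makes the connection explicit:

```python
# Thermal energy scale kT of the hot plasma: at tens of billions of kelvin,
# kT reaches the megaelectron volt range, so the plasma can radiate MeV
# gamma rays. Temperatures below are illustrative, not the paper's model.
K_B_EV = 8.617e-5  # Boltzmann constant in eV per kelvin

for temp_k in (1e10, 3e10, 1e11):
    kt_mev = K_B_EV * temp_k / 1e6
    print(f"T = {temp_k:.0e} K  ->  kT ~ {kt_mev:.2f} MeV")
```

At 10 billion kelvin, kT is already close to 1 MeV, which is why inefficiently cooled plasma around low-accreting black holes is a natural MeV gamma-ray source.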

Such mellow black holes are dim as individual objects, but they are numerous in the Universe. The research team found that the resulting gamma rays from low-accreting supermassive black holes may contribute significantly to the observed gamma rays in the megaelectron volt range.

In the plasma, protons can be accelerated to energies roughly 10,000 times higher than those achieved by the Large Hadron Collider — the largest human-made particle accelerator. The sped-up protons produce high-energy neutrinos through interactions with matter and radiation, which can account for the higher-energy part of the cosmic neutrino data. This picture can be applied to active black holes as demonstrated by previous research. The supermassive black holes including both active and non-active galactic nuclei can explain a large fraction of the observed IceCube neutrinos in a wide energy range.
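Taking the LHC's roughly 6.5 TeV per proton as the reference (an assumed value; the article says only "roughly 10,000 times higher"), and a textbook-level assumption that each neutrino carries a few percent of its parent proton's energy, the implied energy scales can be estimated:

```python
# Scale estimate for the accelerated protons and their neutrinos. Both the
# LHC reference energy (~6.5 TeV per proton) and the ~5% proton-to-neutrino
# energy fraction are assumed illustrative values, not from the paper.
LHC_PROTON_EV = 6.5e12  # ~6.5 TeV per proton
BOOST = 1e4             # "roughly 10,000 times higher", per the article

proton_ev = BOOST * LHC_PROTON_EV
neutrino_pev = 0.05 * proton_ev / 1e15
print(f"Proton energy   ~ {proton_ev:.1e} eV (~{proton_ev / 1e15:.0f} PeV)")
print(f"Neutrino energy ~ {neutrino_pev:.1f} PeV")
```

The result lands in the PeV range, which is consistent with the scenario accounting for the higher-energy part of the IceCube neutrino data.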

Future multi-messenger observational programs are crucial to identify the origin of cosmic high-energy particles. The proposed scenario predicts gamma-ray counterparts in the megaelectron volt range to the neutrino sources. Most of the existing gamma-ray detectors are not tuned to detect them, but future gamma-ray experiments, together with next-generation neutrino experiments, will be able to detect the multi-messenger signals.

Publication Details:

Title: Soft gamma rays from low accreting supermassive black holes and connection to energetic neutrinos
Authors: Shigeo S. Kimura, Kohta Murase, Péter Mészáros
Journal: Nature Communications
DOI: 10.1038/s41467-021-25111-7

Provided by Tohoku University

White Dwarfs Become Magnetic As They Get Older! (Planetary Science)

An international team of astronomers from Armagh Observatory in Northern Ireland and the University of Western Ontario in Canada published a paper today in the Monthly Notices of the Royal Astronomical Society sharing new insights into the origin and evolution of the magnetic fields of white dwarfs. The team used ESPaDOnS at CFHT, the ISIS spectrograph/spectropolarimeter at the William Herschel Telescope (WHT), and FORS2 at the European Southern Observatory (ESO) to carry out a spectropolarimetric white dwarf survey out to 20 parsecs from the Sun.

More than 90% of the stars of our Galaxy end their lives as white dwarfs. Although many have a magnetic field, it’s still unknown when it appears on the surface, whether it evolves during the cooling phase of the white dwarf and, above all, what the mechanisms are that generate it.

At least one out of four white dwarfs will end its life as a magnetic star, thus magnetic fields are an essential component of understanding their complexities. New insights into the magnetism of these stars from the team’s survey provide the best evidence obtained so far of how magnetism in white dwarfs correlates with age. This could help to explain the origin and evolution of magnetic fields in white dwarfs.

“White dwarfs are the remnants of stars that have run out of fuel and collapsed. By nature, they become cooler and fainter with time,” says Dr. Stefano Bagnulo of Armagh Observatory, a co-author of the paper. “Observations tend to favor the study of the brightest, most massive, hottest white dwarfs, which are the youngest. In our survey, we chose to include older, fainter white dwarfs in the hope that we could learn more about the continued evolution of these remnants.”

The team observed all the white dwarfs from the Gaia catalogue that lacked previous high-precision magnetic measurements in the region within 20 parsecs (65 light-years) of the Sun, obtaining new data for 87 of the 152 stars in the region. The team took spectra of the white dwarfs using three spectropolarimeters, instruments critical to understanding magnetic fields. Spectroscopy breaks the light from a star into its component spectrum, allowing astronomers to learn more about the object’s composition, temperature, and other properties. An instrument with spectropolarimetric capabilities, like those used by the team, improves the sensitivity to, and detection of, magnetic fields by more than two orders of magnitude over spectroscopy alone.

“Most white dwarf observations are made using spectroscopic techniques sensitive only to the strongest magnetic fields, thus failing to identify a large fraction of magnetic white dwarfs,” said Dr. John Landstreet of the University of Western Ontario and a co-author on the paper. “Two thirds of the stars in our survey were observed for the first time in spectropolarimetric mode, enabling our team to record previously undetected magnetic fields.”

The team used CFHT’s ESPaDOnS to make a quarter of the observations for the survey. ESPaDOnS is a high resolution spectrograph which can be used in a high resolution spectropolarimetric mode for observations like those made by the team. Landstreet was the primary investigator in Canada for the NSERC grant that funded the development of ESPaDOnS’ camera and has worked with the instrument since its commissioning in 2004.

The team found that magnetic fields are rare at the beginning of a white dwarf’s life, when the star no longer produces energy in its interior and starts its cooling phase. These observations demonstrate that magnetic fields are not a characteristic of a white dwarf from its “birth”. Most often, the magnetic field is either created or brought to the stellar surface during the white dwarf’s cooling phase.

The team also found the magnetic fields of white dwarfs do not show obvious evidence of decaying over time. The results indicate the magnetic fields are generated during the cooling phase or at least continue to emerge at the stellar surface as the white dwarf ages. This picture is different from what is observed in larger, hotter magnetic main sequence stars, like Ap and Bp type stars. In these large stars, astronomers find magnetic fields are present as soon as the star reaches the zero-age main sequence, when they start to fuse hydrogen in their core, and that the magnetic field strength quickly decreases with time (details also uncovered with data from ESPaDOnS). Magnetism in white dwarfs therefore seems to be a totally different phenomenon than magnetism of larger, hotter, Ap and Bp type stars.

Magnetic fields in white dwarfs appear more frequently after the star’s carbon-oxygen core begins to crystallise. One explanation for the cause of these magnetic fields is a dynamo mechanism, which explains the weakest fields detected by the team. A dynamo mechanism occurs when a rotating object, like a white dwarf or the Earth, contains a molten, electrically conducting fluid. In a white dwarf, the crystallising carbon-oxygen core may generate the magnetic field in the same way the Earth’s molten iron core generates its magnetic field.

While the dynamo mechanism holds potential to understand white dwarf magnetic fields, further theoretical and observational investigation is necessary. Dynamos require fast rotation in an object, a trait not generally observed in white dwarfs. Dynamo mechanisms can explain magnetic fields up to 100,000 Gauss (the Earth’s field is 1 Gauss for reference), and astronomers have observed magnetic fields up to several hundred million Gauss in some white dwarfs. The team plans further work to untangle the mystery of white dwarf magnetic fields.
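The gap the team describes can be quantified directly from the field strengths quoted above, taking "several hundred million Gauss" as 3×10⁸ G purely for illustration:

```python
import math

# Field strengths quoted in the article, in gauss.
EARTH_G = 1.0           # Earth's field, the article's reference point
DYNAMO_MAX_G = 1.0e5    # strongest field a dynamo mechanism can explain
OBSERVED_MAX_G = 3.0e8  # "several hundred million Gauss" (3e8 assumed for illustration)

shortfall = OBSERVED_MAX_G / DYNAMO_MAX_G
print(f"Dynamo ceiling is {DYNAMO_MAX_G / EARTH_G:.0e} times Earth's field")
print(f"Strongest observed fields exceed that ceiling by ~{shortfall:.0f}x "
      f"(~{math.log10(shortfall):.1f} orders of magnitude)")
```

A shortfall of roughly three orders of magnitude is why the dynamo, on its own, cannot account for the most strongly magnetized white dwarfs.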

“John Landstreet has used CFHT for decades and brings an expertise to observations that makes the best use of CFHT,” said Dr. Nadine Manset, ESPaDOnS instrument scientist at CFHT. “After years of observations pushing the limits of ESPaDOnS, these results expand our understanding of white dwarfs and create new questions to be explored.”

Featured image: Artistic rendition of the magnetic field of a white dwarf.
Credit: ESO/L. Calçada.

Additional links
Scientific article at

Provided by CFHT Hawaii

What Does the Shadow of a Lorentzian Traversable Wormhole Look Like? (Cosmology)

Farook Rahaman and colleagues investigated the shadow cast by a certain class of rotating traversable (Lorentzian) wormhole. They showed that the throat of a wormhole plays a crucial role in shadow formation. They found that the shadow of the wormhole is slanted and can be altered depending on the different parameters present in the wormhole spacetime. Their study recently appeared in Classical and Quantum Gravity.

Wormholes are topologically non-trivial structures of spacetime connecting our Universe with other universes. Their nature plays a crucial role in the shadow effect, which arises during strong gravitational lensing. Current observations have inspired scientists to construct the shadows of wormholes and to analyse their shapes.

Now, Farook Rahaman and colleagues explored the shadow cast by a certain class of rotating wormhole. To do so, they first constructed the null geodesics and studied the effects of the parameters on the photon orbit.

They found that, for small spin and smaller wormhole throat size, the shadow of a wormhole mimics that of a black hole.

However, as either the spin or the throat size increases, the shadow of a wormhole starts deviating from that of a black hole. Detection of such a deviation may indicate the presence of a wormhole.

They also constrained the size and the spin of the wormhole using the results from the M87* observation, by investigating the average diameter of the wormhole as well as its deviation from circularity with respect to the wormhole throat size. Their results indicated that a wormhole with a reasonable spin or throat size can be distinguished from a black hole through observations of its shadow.

“In a future observation, this type of study may help to indicate the presence of a wormhole in a galactic region,” they concluded.

Reference: Farook Rahaman, Ksh. Newton Singh, Rajibul Shaikh, Tuhina Manna and Somi Aktar, “Shadows of Lorentzian traversable wormholes”, Classical and Quantum Gravity, 2021.


Part of the Universe’s Missing Matter Found Thanks to the MUSE Instrument (Cosmology)

  • Galaxies exchange matter with their external environment thanks to galactic winds.
  • The MUSE instrument on the Very Large Telescope has, for the very first time, mapped the galactic winds that drive these exchanges between galaxies and nebulae.
  • This observation led to the detection of some of the Universe’s missing matter.

Galaxies can receive and exchange matter with their external environment thanks to the galactic winds created by stellar explosions. Thanks to the MUSE instrument on the Very Large Telescope at ESO, an international research team, led on the French side by the CNRS and the Université Claude Bernard Lyon 1, has mapped a galactic wind for the first time. This unique observation, detailed in a study published in MNRAS on 16 September 2021, helped to reveal where some of the Universe’s missing matter is located and to observe the formation of a nebula around a galaxy.

Galaxies are like islands of stars in the Universe, and possess ordinary or baryonic matter, which consists of elements from the periodic table, as well as dark matter, whose composition remains unknown. One of the major problems in understanding the formation of galaxies is that approximately 80% of the baryons that make up the normal matter of galaxies are missing. According to models, they were expelled from galaxies into intergalactic space by the galactic winds created by stellar explosions.

An international team, led on the French side by researchers from the CNRS and l’Université Claude Bernard Lyon 1, successfully used the MUSE instrument to generate a detailed map of the galactic wind driving exchanges between a young galaxy in formation and a nebula (a cloud of gas and interstellar dust).

The team chose to observe galaxy Gal1 due to the proximity of a quasar, which served as a “lighthouse” for the scientists by guiding them toward the area of study. They also planned to observe a nebula around this galaxy, although the success of this observation was initially uncertain, as the nebula’s luminosity was unknown.

The perfect positioning of the galaxy and the quasar, as well as the discovery of gas exchange driven by galactic winds, made it possible to draw up a unique map. This enabled the first observation of a nebula in formation that is simultaneously emitting and absorbing magnesium (some of the Universe’s missing baryons) as it exchanges matter with the Gal1 galaxy.

Nebulae of normal matter like this are known in the nearby Universe, but their existence around young galaxies still in formation had only been hypothesized.

Scientists thus discovered some of the Universe’s missing baryons, thereby confirming that 80–90% of normal matter is located outside of galaxies, an observation that will help expand models for the evolution of galaxies.

Featured image: Observation of a part of the Universe thanks to MUSE.
Left: demarcation of the quasar and the galaxy studied here, Gal1.
Center: the nebula of magnesium, shown with a size scale.
Right: superimposition of the nebula and the Gal1 galaxy.
© Johannes Zabl


MusE GAs FLOw and Wind (MEGAFLOW) – VIII. Discovery of a Mg II emission halo probed by a quasar sightline. Johannes Zabl, Nicolas F. Bouché, Lutz Wisotzki, Joop Schaye, Floriane Leclercq, Thibault Garel, Martin Wendt, Ilane Schroetter, Sowgat Muzahid, Sebastiano Cantalupo, Thierry Contini, Roland Bacon, Jarle Brinchmann and Johan Richard. MNRAS, 16 September 2021.

Provided by CNRS

Astronomers Solve 900-year-old Cosmic Mystery Surrounding Chinese Supernova Of 1181AD (Cosmology)

A 900-year-old cosmic mystery surrounding the origins of a famous supernova first spotted over China in 1181AD has finally been solved, according to an international team of astronomers.

New research published today (September 15, 2021) says that a faint, fast expanding cloud (or nebula), called Pa30, surrounding one of the hottest stars in the Milky Way, known as Parker’s Star, fits the profile, location and age of the historic supernova.

There have only been five bright supernovae in the Milky Way in the last millennium (starting in 1006). Of these, the Chinese supernova, also known as the ‘Chinese Guest Star’ of 1181AD, has remained a mystery. It was originally seen and documented by Chinese and Japanese astronomers in the 12th century, who said it was as bright as the planet Saturn and remained visible for six months. They also recorded an approximate location of the sighting in the sky, but no confirmed remnant of the explosion had ever been identified by modern astronomers. The other four supernovae are all now well known to modern science and include the famous Crab Nebula.

The source of this 12th century explosion remained a mystery until this latest discovery made by a team of international astronomers from Hong Kong, the UK, Spain, Hungary and France, including Professor Albert Zijlstra from The University of Manchester. In the new paper, the astronomers found that the Pa 30 nebula is expanding at an extreme velocity of more than 1,100 km per second (at this speed, travelling from the Earth to the Moon would take only about 5 minutes). They used this velocity to derive an age of around 1,000 years, which would coincide with the events of 1181AD.
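The age estimate follows from simple kinematics: if a nebula has expanded at a roughly constant velocity, its age is its radius divided by that velocity. A minimal sketch of the arithmetic, using the article's 1,100 km/s and an illustrative, assumed nebula radius (the radius value below is a hypothetical input for demonstration, not a figure from the paper):

```python
# Back-of-the-envelope kinematic age of an expanding nebula: t = R / v.
KM_PER_PC = 3.086e13        # kilometres per parsec
SECONDS_PER_YEAR = 3.156e7  # seconds per Julian year

v_kms = 1100.0    # expansion velocity (km/s), from the article
radius_pc = 1.1   # nebula radius (pc) -- assumed value for illustration

age_years = radius_pc * KM_PER_PC / v_kms / SECONDS_PER_YEAR
print(f"Kinematic age ~ {age_years:,.0f} years")  # roughly 1,000 years

# Sanity check on the article's comparison: Earth-Moon distance at 1,100 km/s
moon_km = 384_400  # mean Earth-Moon distance in km
print(f"Earth to Moon: {moon_km / v_kms / 60:.1f} minutes")  # just under 6 minutes
```

For these inputs the age comes out near 1,000 years, matching the 1181AD event, and the Earth-to-Moon figure confirms the article's "about 5 minutes" comparison.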


Prof Zijlstra (Professor in Astrophysics at the University of Manchester) explains: “The historical reports place the guest star between two Chinese constellations, Chuanshe and Huagai. Parker’s Star fits the position well. That means both the age and location fit with the events of 1181.”

Pa 30 and Parker’s Star have previously been proposed as the result of a merger of two white dwarfs. Such events are thought to lead to a rare and relatively faint type of supernova, called a ‘Type Iax supernova’.

Prof Zijlstra added: “Only around 10% of supernovae are of this type and they are not well understood. The fact that SN1181 was faint but faded very slowly fits this type. It is the only such event where we can study both the remnant nebula and the merged star, and also have a description of the explosion itself.”

The merging of remnant stars (white dwarfs and neutron stars) gives rise to extreme nuclear reactions and forms heavy, highly neutron-rich elements such as gold and platinum. Prof. Zijlstra said: “Combining all this information, such as the age, location, event brightness and historically recorded 185-day duration, indicates that Parker’s Star and Pa 30 are the counterparts of SN 1181. This is the only Type Iax supernova where detailed studies of the remnant star and nebula are possible. It is nice to be able to solve both a historical and an astronomical mystery.”

Paper: The Remnant and Origin of the Historical Supernova 1181 AD – Andreas Ritter, Quentin A. Parker, Foteini Lykou, Albert A. Zijlstra, Martín A. Guerrero, and Pascal Le Dû. Published 2021 September 15. © 2021 The Author(s). Published by the American Astronomical Society. The Astrophysical Journal Letters, Volume 918, Number 2.

Provided by University of Manchester

Have We Detected Dark Energy? Cambridge Scientists Say it’s A Possibility (Cosmology)

Dark energy, the mysterious force that causes the universe to accelerate, may have been responsible for unexpected results from the XENON1T experiment, deep below Italy’s Apennine Mountains.

A new study, led by researchers at the University of Cambridge and reported in the journal Physical Review D, suggests that some unexplained results from the XENON1T experiment in Italy may have been caused by dark energy, and not the dark matter the experiment was designed to detect.

They constructed a physical model to help explain the results, which may have originated from dark energy particles produced in a region of the Sun with strong magnetic fields, although future experiments will be required to confirm this explanation. The researchers say their study could be an important step toward the direct detection of dark energy.

Everything our eyes can see in the skies and in our everyday world – from tiny moons to massive galaxies, from ants to blue whales – makes up less than five percent of the universe. The rest is dark. About 27% is dark matter – the invisible force holding galaxies and the cosmic web together – while 68% is dark energy, which causes the universe to expand at an accelerated rate.

“Despite both components being invisible, we know a lot more about dark matter, since its existence was suggested as early as the 1920s, while dark energy wasn’t discovered until 1998,” said Dr Sunny Vagnozzi from Cambridge’s Kavli Institute for Cosmology, the paper’s first author. “Large-scale experiments like XENON1T have been designed to directly detect dark matter, by searching for signs of dark matter ‘hitting’ ordinary matter, but dark energy is even more elusive.”

To detect dark energy, scientists generally look for gravitational interactions: the way gravity pulls objects around. On the largest scales, the gravitational effect of dark energy is repulsive, pushing things away from each other and making the universe’s expansion accelerate.

About a year ago, the XENON1T experiment reported an unexpected signal, or excess, over the expected background. “These sorts of excesses are often flukes, but once in a while they can also lead to fundamental discoveries,” said co-author Dr Luca Visinelli, from Frascati National Laboratories in Italy. “We explored a model in which this signal could be attributable to dark energy, rather than the dark matter the experiment was originally devised to detect.”

At the time, the most popular explanation for the excess was axions – hypothetical, extremely light particles – produced in the Sun. However, this explanation does not stand up to observations, since the number of axions that would be required to explain the XENON1T signal would drastically alter the evolution of stars much heavier than the Sun, in conflict with what we observe.

We are far from fully understanding what dark energy is, but most physical models for dark energy would lead to the existence of a so-called fifth force. There are four fundamental forces in the universe, and anything that can’t be explained by one of these forces is sometimes referred to as the result of an unknown fifth force.

However, we know that Einstein’s theory of gravity works extremely well in the local universe. Therefore, any fifth force associated with dark energy is unwanted and must be hidden, or screened, on small scales, and can only operate on the largest scales, where Einstein’s theory of gravity fails to explain the acceleration of the Universe. To hide the fifth force, many models for dark energy are equipped with so-called screening mechanisms, which dynamically suppress it.

Vagnozzi and his co-authors constructed a physical model, which used a type of screening mechanism known as chameleon screening, to show that dark energy particles produced in the Sun’s strong magnetic fields could explain the XENON1T excess.

“Our chameleon screening shuts down the production of dark energy particles in very dense objects, avoiding the problems faced by solar axions,” said Vagnozzi. “It also allows us to decouple what happens in the local very dense Universe from what happens on the largest scales, where the density is extremely low.”

The researchers used their model to show what would happen in the detector if the dark energy was produced in a region of the Sun called the tachocline, where the magnetic fields are particularly strong.

“It was really surprising that this excess could in principle have been caused by dark energy rather than dark matter,” said Vagnozzi. “When things click together like that, it’s really special.”

Their calculations suggest that experiments like XENON1T, which are designed to detect dark matter, could also be used to detect dark energy. However, the original excess still needs to be convincingly confirmed. “We first need to know that this wasn’t simply a fluke,” said Visinelli. “If XENON1T actually saw something, you’d expect to see a similar excess again in future experiments, but this time with a much stronger signal.”

If the excess was the result of dark energy, upcoming upgrades to the XENON1T experiment, as well as experiments pursuing similar goals such as LUX-Zeplin and PandaX-xT, mean that it could be possible to directly detect dark energy within the next decade.

Sunny Vagnozzi et al. ‘Direct detection of dark energy: the XENON1T excess and future prospects.’ Physical Review D (2021). DOI: 10.1103/PhysRevD.104.063023

Provided by University of Cambridge
