Tag Archives: #universe

From Dusk To Dawn Of the Universe (Cosmology)

Can the darkness of twilight – the photons of the cosmic microwave background – along with the light of dawn – the ultraviolet radiation emitted by the first stars to light up – unravel the story of reionization and tell us something about primordial inflation? We talk about it with Daniela Paoletti of the National Institute of Astrophysics, co-author of two new studies that try to answer this question.

Almost a year later, Media Inaf met again with Daniela Paoletti, researcher at Inaf Oas in Bologna and first author of a new study that investigates, in an original way, the history of reionization: the period – not yet well defined – in which the primordial gas that pervaded the universe in the early stages of its evolution passed from the neutral to the ionized state. The work – with the evocative title “Dark Twilight Joined with the Light of Dawn to Unveil the Reionization History” – presents an extensive analysis of the history of reionization based on recent cosmological and astrophysical data. The authors also include Dhiraj Kumar Hazra, Fabio Finelli of Inaf Oas and George Smoot, the 2006 Nobel laureate in physics. Alongside this work, another article that counts her among its authors was presented in the same period, reporting a detailed study of what the latest Planck data have to say beyond the standard inflation model. On this hot summer day, we discover with her the details of the analysis, its implications and the hopes for the future.

Artistic impression showing a part of the history of the universe, centered on the epoch of reionization, a process that ionized most of the material in the universe. From left to right: the oldest light in the universe, the first stars, the reionization process and the first galaxies. Credits: Esa – C. Carreau

A year after the publication of your study on the history of reionization, you are about to publish a new one on the same hot topic. What is it about?

“This is a study of how the darkness of twilight and the light of the dawn of the universe, together, can help us understand how one of the most important phases in the history of the universe may have unfolded. The article is the continuation of what we published a year ago in the journal Physical Review Letters, where we presented an original approach to studying the history of the early universe by combining astrophysical data with the cosmic microwave background (CMB). In this article we describe that approach in detail and present many more results which, for reasons of space, we were not able to include in the previous paper.”

How did you come up with the idea for such an evocative title?

«The idea for the title came to me while I was preparing the seminar on the Physical Review Letters paper. The innovative aspect of our approach is bringing together two completely different types of data: the microwave background radiation and ultraviolet radiation. The first, which has always been described as the first light of the universe, consists of the first photons ever emitted – which, if you think about it, are also the photons of the first twilight, because once this radiation cooled, the universe entered what is called the dark age. Ultraviolet radiation, on the other hand, is a tracer of the first stars: it traces the dawn of the universe, when it emerges from the night of the dark age. If the title had been in Italian I would have used the word aurora, which in my opinion would be the most beautiful term, but in English both aurora and dawn are rendered by the same word. So I used dawn, because I liked this idea of combining the twilight before the night with the light after the night».

The figure compares the ionization histories obtained when the quasar and UV data are each combined separately with the CMB with the history obtained from the combination of all three. Credits: Paoletti et al.

Microwave radiation and ultraviolet radiation: how did you manage to reconcile such different data?

“They are two totally different types of data, and precisely because they are so different we had to devise and develop this new technique which, instead of assuming the fraction of ionized matter as a function of time, solves the equation for the ionization fraction. This lets us test even the ultraviolet radiation data, which we could not otherwise use in the classical approach. These data tell us what is happening to the ionizing source, that is, to the first stars. We also use other data, which proved very interesting in this work: quasar data and gamma-ray burst (GRB) data. Some objects at very high redshift, therefore very far away, can tell us what the ionization situation is around the source, in their local environment. If we assume that this is also representative of what lies outside, they give us a precise idea of what is happening at that redshift, at that moment.”
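
A rough sense of what “solving the equation for the ionization fraction” means in practice can be given with a minimal sketch (this is not the authors' code; the emissivity, clumping factor and cosmological numbers below are illustrative assumptions only): the filling factor of ionized hydrogen, Q, is evolved by balancing the ionizing photon output of the sources against recombinations.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative cosmological and gas parameters (assumptions, not the paper's values)
h, Om = 0.67, 0.32
H0 = h * 3.24e-18           # Hubble constant in s^-1
n_H0 = 1.9e-7               # comoving hydrogen density, cm^-3 (approximate)
alpha_B = 2.6e-13           # case-B recombination coefficient at ~1e4 K, cm^3 s^-1
C_HII = 3.0                 # assumed clumping factor of the ionized gas
MPC_CM = 3.086e24           # cm per megaparsec

def H(z):
    """Flat LCDM expansion rate in s^-1."""
    return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def n_ion_dot(z):
    """Placeholder comoving ionizing emissivity [photons s^-1 cm^-3]."""
    return 1e51 / MPC_CM ** 3 * np.exp(-((z - 6.0) / 4.0) ** 2)

def t_rec(z):
    """Recombination time of the ionized intergalactic medium at redshift z [s]."""
    return 1.0 / (C_HII * alpha_B * 1.08 * n_H0 * (1 + z) ** 3)

def dQ_dz(z, Q):
    """dQ/dt = n_ion_dot/<n_H> - Q/t_rec, rewritten as an equation in redshift."""
    Qc = min(Q[0], 1.0)                        # the filling factor cannot exceed 1
    dQ_dt = n_ion_dot(z) / n_H0 - Qc / t_rec(z)
    return [-dQ_dt / ((1 + z) * H(z))]         # dt/dz = -1 / [(1+z) H(z)]

sol = solve_ivp(dQ_dz, (20.0, 5.0), [1e-4], t_eval=np.linspace(20.0, 5.0, 200))
Q_of_z = np.clip(sol.y[0], 0.0, 1.0)           # ionized filling fraction vs. redshift
```

In this picture the UV luminosity data constrain the source term (the emissivity), while the CMB and the quasar/GRB points constrain the resulting history Q(z).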

So is the method the same as presented in 2020?

«Yes, this is the basis of the method we had already developed in the study presented in Physical Review Letters in 2020, but in this case we went to see what happens when we start using different data or changing assumptions. The very first thing we did was check what happens when, in the reionization source, we leave free a term that would otherwise be fixed by simulations, because it is a term on which we have very little data, and the data we do have are not very sensitive. In the first work we had fixed it to the value from the simulations; now we have left it free to be guided by the data, and we have seen that the quasar data in particular have a good ability to constrain it and that, fortunately, it turned out to be in perfect agreement with the simulations. This therefore confirmed what we had previously assumed».

The figure shows the difference between the two ionization histories obtained with different magnitude cuts; it is evident that the cut at minus 15 yields a more gradual reionization. Credits: Paoletti et al.

What is the main novelty of the new study?

“An extremely interesting result of this new study comes when we change the ultraviolet data. For the ultraviolet brightness, what we measure with our instruments is the luminosity function, which we then convert into an ultraviolet luminosity density. Since this luminosity function has to be integrated, we need to choose a cutoff magnitude: in other words, we do not consider sources fainter than the value assumed as the cut. Until now we had always assumed a fairly conservative magnitude cut of minus 17, given that for the fainter sources the data show a change in the behavior of the luminosity function that we do not know whether it is real or due to the uncertainty of the data. Now we have instead used a more aggressive, more optimistic cut: we wondered what would happen if we went down to minus 15. With this new cut we are in fact including the contributions of sources that are very distant and very faint, but which are numerous and which therefore lead to a slightly different history of reionization. We find that a contribution appears at higher redshifts: instead of being an extremely steep climb, reionization becomes slower and lasts longer, precisely because of the contribution from these very faint sources, which are still able to ionize.”
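
To see why the magnitude cut matters, here is a small sketch (with assumed Schechter parameters for the luminosity function, not the values used in the study) that integrates a UV luminosity function down to the two cuts and compares the resulting luminosity densities:

```python
import numpy as np
from scipy.integrate import quad

# Assumed Schechter parameters for a z~8 UV luminosity function (illustrative only)
M_star = -20.6     # characteristic absolute magnitude
alpha = -1.9       # faint-end slope

def lum_density(M_cut):
    """Relative UV luminosity density, integrating down to absolute magnitude M_cut."""
    x_min = 10 ** (-0.4 * (M_cut - M_star))               # L_min / L*
    integrand = lambda x: x ** (alpha + 1) * np.exp(-x)   # phi(L) * L, in units of phi* L*
    val, _ = quad(integrand, x_min, np.inf)
    return val

rho_17 = lum_density(-17.0)
rho_15 = lum_density(-15.0)
print(f"rho_UV(M < -15) / rho_UV(M < -17) = {rho_15 / rho_17:.2f}")   # > 1
```

With a faint-end slope close to minus 2, the ratio comes out well above 1: that is the quantitative reason why the fainter cut adds ionizing light at early times and produces a more extended reionization history.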

Does the doubt remain that the sources you are including might not be real?

«Yes, that doubt always remains. Obviously the error bars get bigger, because these are more difficult measurements. But the beauty of it is that in October Jwst will be launched, and it will see very well all of that faint tail we are considering. I am thrilled to make predictions for Jwst, because I am very curious to see what the impact will be as the error bars shrink. If this contribution from faint sources really is so large, the classical model used in cosmology – which assumes a very steep transition that lasts a very short time – starts to become problematic, because at that point astrophysics would actually be telling us that reionization is somewhat slower. We must always keep in mind that reionization, beyond its intrinsic importance – it is a phase transition in which the entire universe changes state completely – is fundamental in cosmology, because it represents one of our greatest uncertainties. Suffice it to say that the optical depth – the parameter that encapsulates reionization in the cosmic microwave background – is the only one that, after Planck, still has an error larger than 1 percent. It therefore impacts a whole series of extended models, among which, as we have also shown, are the extensions of inflation, such as those in the other article we wrote».
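
For reference, the optical depth mentioned here is an integral of the electron density along the line of sight. A minimal sketch with a toy, step-like ionization history (approximate constants, not the reconstruction used in the paper) looks like this:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative flat-LCDM and atomic constants (assumptions, not the paper's values)
h, Om = 0.67, 0.32
H0 = h * 3.24e-18            # s^-1
c = 2.998e10                 # cm s^-1
sigma_T = 6.652e-25          # Thomson cross section, cm^2
n_H0 = 1.9e-7                # comoving hydrogen density, cm^-3 (approximate)
f_He = 1.08                  # extra electrons from singly ionized helium

def H(z):
    return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def tau_step(z_re=7.7):
    """Thomson optical depth for instantaneous, complete reionization at z_re."""
    integrand = lambda z: (1 + z) ** 2 / H(z)    # electron fraction = 1 for z < z_re
    val, _ = quad(integrand, 0.0, z_re)
    return c * sigma_T * f_He * n_H0 * val

print(f"tau ~ {tau_step():.3f}")    # of order 0.05 for these assumptions
```

A shift of a few per cent in this number propagates into the constraints on many extended cosmological models, which is why pinning down the reionization history matters beyond its intrinsic interest.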

Will future experiments already planned be able to help in this sense?

“Yes. With the generation of cosmological experiments coming in the next few years, we need to be particularly careful about how reionization is treated, and the possibility of using this astrophysical data to tell us how the history of reionization went could, in cascade, also have an impact on the constraints on cosmological parameters. Furthermore, what we have shown about the quasar data is very interesting for the future as well: they have proved very powerful because – although they extend less far in redshift than the ultraviolet radiation – they provide precise points of the ionization fraction. George Smoot pointed out to me that in the future, with Desi and Euclid, we will start talking about no longer five, six or ten points but samples of thousands of quasars. So in the next ten years the approach and the perspective will also change completely.”

«An instrument that could have done exceptional things would have been Theseus, because gamma-ray bursts are very powerful: while a quasar has a continuous emission and therefore ionizes the medium around itself, a Grb does not, because it is too fast – it does not have time to ionize. It gives precisely a sharp point marking that fraction. Unfortunately, in the analysis we used we only have one of these points, which however already shows how a single point, out of the dozen points used, is able to narrow the error bars».

What astrophysical data are you using? At what distance?

“We use six galaxy clusters from the Hubble Frontier Fields. The quasars used reach up to redshift 8, while the ultraviolet sources, interpolating, reach up to redshift 10.”

There is a parallel study to this one, submitted around the same time, with an equally curious title reminiscent of Toy Story. What is it about?

“The original title of this study by Dhiraj – the first author, who was in Bologna until January 2020 and is now a professor at Imsc Chennai (India) – was ‘Inflation story: to slow-roll and beyond’, echoing Buzz Lightyear's catchphrase in Toy Story, because slow-roll is the standard model of inflation and we ‘go beyond’ it. Slow-roll is already the non plus ultra, yet we go further: that was the idea, quoting Buzz Lightyear. A couple of the authors were not entirely in agreement because, in fact, we treated slow-roll as a starting point, so it would not have been correct to use the ‘to’. In the end we removed the ‘to’, even though we both liked it because it echoed Buzz's desire to go further; and, come to think of it, our fate is similar to that of the Toy Story hero: we try to go further, but there is a problem. We know that the standard model is a beautiful fit to the data, but we also know that Planck confirmed what we had already seen with Wmap, namely that there are anomalies in these data. These anomalies are extremely elusive because they all sit at 2.8 sigma of significance, while the threshold for saying that something is anomalous is 3 sigma. So we are in a sort of limbo that does not allow us to understand whether what we are seeing is a statistical fluctuation or really an anomaly.”

If it were an anomaly would it be more intriguing?

“Yes, the nice thing would be to find out whether it is an anomaly. Two of the biggest anomalies are the lack of power at very large angular scales, which has been observed and confirmed by Planck, and small oscillations in the angular power spectrum. It is these blocks of oscillations, or single oscillations, called features, that are not produced in the standard model. One possibility is that inflation was not entirely slow-roll. We have a scalar field – the one that generates inflation – that moves very slowly on an extremely flat potential; if, before settling into its slow roll on the flat potential, the field had encountered a slightly less flat stretch, or a jump, or a cusp, then this could lead to the loss of power and to the generation of small oscillations.”

Residuals with respect to the best-fit power spectrum, showing the lack of power and the oscillations that have been studied. Credits: Hazra et al.

How do you check it?

“Usually what needs to be done – and it has been done in many cases, including in Planck's paper on inflation – is to take different physical models and see whether they fit the data better or worse. The nice thing here is that we use a framework called Wiggly Whipped Inflation. It is a phenomenological approach: we do not ask what caused a given feature, but rather: if the inflationary potential were shaped in a certain way, could we fit the data better? Of course, if we fit the data better, then we can be reasonably confident that we have found an inflationary model that works better. First there is the whip – the so-called whipped inflation potential – which says that the scalar field initially rolled a little faster and then settled into slow-roll. In this case there is a lack of power, because when the field rolls quickly it does not generate many perturbations; it generates them once it reaches slow-roll. So you get this lack of power. Then you test when the wiggles are also present, that is, these small oscillations. These oscillations can be produced by discontinuities: when the potential has a jump, a wiggle is generated; the bigger the jump, the more you generate at certain scales, which depend on the size of the jump. It is a general framework that simply gives us an idea of what can best fit the data.”
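
As a purely illustrative toy (this is not the actual Wiggly Whipped Inflation computation, just an assumed parameterization), a primordial spectrum with a “whip” suppression on the largest scales and damped “wiggles” from a feature can be written as:

```python
import numpy as np

A_s, n_s, k_pivot = 2.1e-9, 0.965, 0.05     # Planck-like baseline values (illustrative)

def P_primordial(k, k_supp=5e-4, A_wig=0.03, k_damp=2e-3, k_period=1e-3):
    """Toy 'whip + wiggles' spectrum: a power law, suppressed at small k,
    modulated by damped oscillations linear in k (as a step feature produces)."""
    power_law = A_s * (k / k_pivot) ** (n_s - 1.0)
    suppression = 1.0 - np.exp(-(k / k_supp) ** 2)     # lack of power on the largest scales
    wiggles = 1.0 + A_wig * np.sin(2 * np.pi * k / k_period) * np.exp(-k / k_damp)
    return power_law * suppression * wiggles

k = np.logspace(-4, -1, 400)    # wavenumbers in Mpc^-1
Pk = P_primordial(k)            # feed to a Boltzmann code (e.g. CAMB/CLASS) to get C_ell
```

The point of a phenomenological form like this is exactly what is described above: one varies the few feature parameters and asks whether the fit to the Planck spectra improves with respect to the plain power law.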

It sounds simple, but I guess it really isn’t …

“There are two problems. On the one hand, we know that if we used 100,000 parameters we could fit the entire universe: I could write a Lagrangian for the universe depending on 150 parameters and I would have the wave function of the universe. But it does not work like that, because the degrees of freedom would be too many. So saying that a model improves the fit to the data is always a balancing act between how free the model is – how many degrees of freedom it has – and how much better it fits the data. If the model, as happened to us in some of these cases, fits the data better than the standard model but uses many parameters, it is not considered preferable. Furthermore, it must be said that Planck's 2018 data reduced the evidence for the anomalies. In temperature, the loss of power is still there.”
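
A standard way to make this balancing act explicit (shown here as a generic sketch, not necessarily the statistical criterion used in the paper) is an information criterion that penalizes the gain in fit quality by the number of extra free parameters:

```python
# Akaike-style comparison: the extended model must improve chi-squared by more
# than ~2 per added parameter to be preferred. Numbers are made up for illustration.
def delta_aic(delta_chi2, extra_params):
    """AIC difference between an extended model and the baseline.
    Negative values favor the extended model."""
    return 2 * extra_params - delta_chi2

print(delta_aic(delta_chi2=4.0, extra_params=1))   # -2: mild preference for the extension
print(delta_aic(delta_chi2=8.0, extra_params=6))   #  4: better fit, but too many parameters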

What’s new in this second study?

“The novelty lies in the fact that we have also used polarization. In theory, the same thing that is done in temperature can be done in polarization. The problem is that in polarization, on large angular scales, reionization increases the power, masking a possible drop in power. While temperature still favors the models that generate this power deficit, once polarization is considered it becomes apparent that these models are not favored in any way over the standard slow-roll model with a power-law power spectrum. In addition, there is another novelty: for the small angular scales (below about 8 degrees) we used not the official Planck likelihood but CamSpec, in its non-binned version, which takes into account all the individual points without averaging. This likelihood is the result of a reworking of the Planck data made by George Efstathiou and Steven Gratton after the publication of the Planck results; it is able to use more sky, slightly improving the error bars. We wanted to use it because it is more complete and more evolved. At the moment there is no evidence in favor of these more exotic models with respect to Lambda Cdm.”

Daniela Paoletti, researcher at INAF in Bologna. Credits: Daniela Paoletti

What does the future hold on this front?

«The future will be very interesting for two reasons. The first is linked to the improvement of Cmb data thanks to LiteBird, which will allow us to study the E-mode polarization limited only by cosmic variance. Then we will have ground-based experiments, which will trace the small and medium scales where these oscillations are present, and which will see them more precisely than the Planck data. Furthermore, there is the large-scale structure: since these oscillations are also present at small scales, they can also be seen by an experiment like Euclid. Another possibility is to use non-Gaussianities, because the same oscillation effects seen in the power spectrum also appear in higher-order moments, and therefore in non-Gaussianities».

So, for now, any significant news on the history of inflation?

“For the moment, Planck's data tell us that they still prefer the standard model. But the prospects for the future are good: within the next ten years I expect we will really be able to start saying whether it was just a standard slow-roll model or not. For now we are like Buzz: we cannot go much beyond the room of our standard model. But let's remember that Buzz eventually found his rocket and took off for real, and so it will be for us with future data – maybe we will really go beyond slow-roll.”



Provided by INAF

Curiosity and Technology Drive Quest To Reveal Fundamental Secrets of the Universe (Astronomy)

Argonne-driven technology is part of a broad initiative to answer fundamental questions about the birth of matter in the universe and the building blocks that hold it all together.

Imagine the first of our species to lie beneath the glow of an evening sky. An enormous sense of awe, perhaps a little fear, fills them as they wonder at those seemingly infinite points of light and what they might mean. As humans, we evolved the capacity to ask big insightful questions about the world around us and worlds beyond us. We dare, even, to question our own origins.

“The place of humans in the universe is important to understand,” said physicist and computational scientist Salman Habib. ​“Once you realize that there are billions of galaxies we can detect, each with many billions of stars, you understand the insignificance of being human in some sense. But at the same time, you appreciate being human a lot more.”

“To say that we understand the universe would be incorrect. To say that we sort of understand it is fine. We have a theory that describes what the universe is doing, but each time the universe surprises us, we have to add a new ingredient to that theory.” — Salman Habib, physicist and computational scientist

With no less a sense of wonder than most of us, Habib and colleagues at the U.S. Department of Energy’s (DOE) Argonne National Laboratory are actively researching these questions through an initiative that investigates the fundamental components of both particle physics and astrophysics.

The breadth of Argonne’s research in these areas is mind-boggling. It takes us back to the very edge of time itself, to some infinitesimally small portion of a second after the Big Bang when random fluctuations in temperature and density arose, eventually forming the breeding grounds of galaxies and planets.

It explores the heart of protons and neutrons to understand the most fundamental constructs of the visible universe, particles and energy once free in the early post-Big Bang universe, but later confined forever within a basic atomic structure as that universe began to cool.

And it addresses slightly newer, more controversial questions about the nature of dark matter and dark energy, both of which play a dominant role in the makeup and dynamics of the universe but are little understood.

“And this world-class research we’re doing could not happen without advances in technology,” said Argonne Associate Laboratory Director Kawtar Hafidi, who helped define and merge the different aspects of the initiative.

“We are developing and fabricating detectors that search for signatures from the early universe or enhance our understanding of the most fundamental of particles,” she added. ​“And because all of these detectors create big data that have to be analyzed, we are developing, among other things, artificial intelligence techniques to do that as well.”

Decoding messages from the universe

Fleshing out a theory of the universe on cosmic or subatomic scales requires a combination of observations, experiments, theories, simulations and analyses, which in turn requires access to the world’s most sophisticated telescopes, particle colliders, detectors and supercomputers.

Argonne is uniquely suited to this mission, equipped as it is with many of those tools, the ability to manufacture others and collaborative privileges with other federal laboratories and leading research institutions to access other capabilities and expertise.

As lead of the initiative’s cosmology component, Habib uses many of these tools in his quest to understand the origins of the universe and what makes it tick.

And what better way to do that than to observe it, he said.

“If you look at the universe as a laboratory, then obviously we should study it and try to figure out what it is telling us about foundational science,” noted Habib. ​“So, one part of what we are trying to do is build ever more sensitive probes to decipher what the universe is trying to tell us.”

To date, Argonne is involved in several significant sky surveys, which use an array of observational platforms, like telescopes and satellites, to map different corners of the universe and collect information that furthers or rejects a specific theory. 

For example, the South Pole Telescope survey, a collaboration between Argonne and a number of national labs and universities, is measuring the cosmic microwave background (CMB), considered the oldest light in the universe. Variations in CMB properties, such as temperature, signal the original fluctuations in density that ultimately led to all the visible structure in the universe.

Additionally, the Dark Energy Spectroscopic Instrument and the forthcoming Vera C. Rubin Observatory are specially outfitted, ground-based telescopes designed to shed light on dark energy and dark matter, as well as the formation of luminous structure in the universe.

Darker matters

All the data sets derived from these observations are connected to the second component of Argonne’s cosmology push, which revolves around theory and modeling. Cosmologists combine observations, measurements and the prevailing laws of physics to form theories that resolve some of the mysteries of the universe.

But the universe is complex, and it has an annoying tendency to throw a curve ball just when we thought we had a theory cinched. Discoveries within the past 100 years have revealed that the universe is both expanding and accelerating its expansion — realizations that came as separate but equal surprises.

“To say that we understand the universe would be incorrect. To say that we sort of understand it is fine,” exclaimed Habib. ​“We have a theory that describes what the universe is doing, but each time the universe surprises us, we have to add a new ingredient to that theory.”

Modeling helps scientists get a clearer picture of whether and how those new ingredients will fit a theory. They make predictions for observations that have not yet been made, telling observers what new measurements to take.

Habib's group is applying this same sort of process to gain an ever-so-tentative grasp on the nature of dark energy and dark matter. While scientists can tell us that both exist and that they comprise about 68% and 26% of the universe, respectively, beyond that not much else is known.

Observations of cosmological structure — the distribution of galaxies and even of their shapes — provide clues about the nature of dark matter, which in turn feeds simple dark matter models and subsequent predictions. If observations, models and predictions aren’t in agreement, that tells scientists that there may be some missing ingredient in their description of dark matter.

But there are also experiments that are looking for direct evidence of dark matter particles, which require highly sensitive detectors. Argonne has initiated development of specialized superconducting detector technology for the detection of low-mass dark matter particles.

This technology requires the ability to control properties of layered materials and adjust the temperature where the material transitions from finite to zero resistance, when it becomes a superconductor. And unlike other applications where scientists would like this temperature to be as high as possible — room temperature, for example — here, the transition needs to be very close to absolute zero.

Habib refers to these dark matter detectors as traps, like those used for hunting — which, in essence, is what cosmologists are doing. Because it’s possible that dark matter doesn’t come in just one species, they need different types of traps.

“It’s almost like you’re in a jungle in search of a certain animal, but you don’t quite know what it is — it could be a bird, a snake, a tiger — so you build different kinds of traps,” he said.

Lab researchers are working on technologies to capture these elusive species through new classes of dark matter searches. Collaborating with other institutions, they are now designing and building a first set of pilot projects aimed at looking for dark matter candidates with low mass.

Tuning in to the early universe

Amy Bender is working on a different kind of detector — well, a lot of detectors — which are at the heart of a survey of the cosmic microwave background (CMB).

“The CMB is radiation that has been around the universe for 13 billion years, and we’re directly measuring that,” said Bender, an assistant physicist at Argonne.

The Argonne-developed detectors — all 16,000 of them — capture photons, or light particles, from that primordial sky through the aforementioned South Pole Telescope, to help answer questions about the early universe, fundamental physics and the formation of cosmic structures.

Now, the CMB experimental effort is moving into a new phase, CMB-Stage 4 (CMB-S4). This larger project tackles even more complex topics like inflationary theory, which suggests that the universe expanded faster than the speed of light for a fraction of a second, shortly after the Big Bang.

Yellow and green pattern. (Image by Argonne National Laboratory.)
A section of a detector array with architecture suitable for future CMB experiments, such as the upcoming CMB-S4 project. Fabricated at Argonne’s Center for Nanoscale Materials, 16,000 of these detectors currently drive measurements collected from the South Pole Telescope. (Image by Argonne National Laboratory.)

While the science is amazing, the technology to get us there is just as fascinating.

Technically called transition edge sensing (TES) bolometers, the detectors on the telescope are made from superconducting materials fabricated at Argonne’s Center for Nanoscale Materials, a DOE Office of Science User Facility.

Each of the 16,000 detectors acts as a combination of very sensitive thermometer and camera. As incoming radiation is absorbed on the surface of each detector, measurements are made by supercooling them to a fraction of a degree above absolute zero. (That’s over three times as cold as Antarctica’s lowest recorded temperature.)

Changes in heat are measured and recorded as changes in electrical resistance and will help inform a map of the CMB’s intensity across the sky.

CMB-S4 will focus on newer technology that will allow researchers to distinguish very specific patterns in light, or polarized light. In this case, they are looking for what Bender calls the Holy Grail of polarization, a pattern called B-modes.

Capturing this signal from the early universe — one far fainter than the intensity signal — will help to either confirm or disprove a generic prediction of inflation.

It will also require the addition of 500,000 detectors distributed among 21 telescopes in two distinct regions of the world, the South Pole and the Chilean desert. There, the high altitude and extremely dry conditions keep water vapor in the atmosphere from absorbing millimeter wavelength light, like that of the CMB.

While previous experiments have touched on this polarization, the large number of new detectors will improve sensitivity to that polarization and grow our ability to capture it.

“Literally, we have built these cameras completely from the ground up,” said Bender. ​“Our innovation is in how to make these stacks of superconducting materials work together within this detector, where you have to couple many complex factors and then actually read out the results with the TES. And that is where Argonne has contributed, hugely.”

Down to the basics

Argonne’s capabilities in detector technology don’t just stop at the edge of time, nor do the initiative’s investigations just look at the big picture.

Most of the visible universe, including galaxies, stars, planets and people, is made up of protons and neutrons. Understanding the most fundamental components of those building blocks and how they interact to make atoms and molecules and just about everything else is the realm of physicists like Zein-Eddine Meziani.

“From the perspective of the future of my field, this initiative is extremely important,” said Meziani, who leads Argonne’s Medium Energy Physics group. ​“It has given us the ability to actually explore new concepts, develop better understanding of the science and a pathway to enter into bigger collaborations and take some leadership.”

Taking the lead of the initiative’s nuclear physics component, Meziani is steering Argonne toward a significant role in the development of the Electron-Ion Collider, a new U.S. Nuclear Physics Program facility slated for construction at DOE’s Brookhaven National Laboratory.

Argonne’s primary interest in the collider is to elucidate the role that quarks, anti-quarks and gluons play in giving mass and a quantum angular momentum, called spin, to protons and neutrons — nucleons — the particles that comprise the nucleus of an atom.

Video: Electrons colliding with ions will exchange virtual photons with the nuclear particles to help scientists “see” inside the nuclear particles; the collisions will produce precision 3D snapshots of the internal arrangement of quarks and gluons within ordinary nuclear matter; like a combination CT/MRI scanner for atoms. (Image by Brookhaven National Laboratory.)

While we once thought nucleons were the finite fundamental particles of an atom, the emergence of powerful particle colliders, like the Stanford Linear Accelerator Center at Stanford University and the former Tevatron at DOE’s Fermilab, proved otherwise.

It turns out that quarks and gluons were independent of nucleons in the extreme energy densities of the early universe; as the universe expanded and cooled, they transformed into ordinary matter.

“There was a time when quarks and gluons were free in a big soup, if you will, but we have never seen them free,” explained Meziani. ​“So, we are trying to understand how the universe captured all of this energy that was there and put it into confined systems, like these droplets we call protons and neutrons.”

Some of that energy is tied up in gluons, which, despite the fact that they have no mass, confer the majority of mass to a proton. So, Meziani is hoping that the Electron-Ion Collider will allow science to explore — among other properties — the origins of mass in the universe through a detailed exploration of gluons.

And just as Amy Bender is looking for the B-modes polarization in the CMB, Meziani and other researchers are hoping to use a very specific particle called a J/psi to provide a clearer picture of what’s going on inside a proton’s gluonic field.

But producing and detecting the J/psi particle within the collider — while ensuring that the proton target doesn’t break apart — is a tricky enterprise, which requires new technologies. Again, Argonne is positioning itself at the forefront of this endeavor.

“We are working on the conceptual designs of technologies that will be extremely important for the detection of these types of particles, as well as for testing concepts for other science that will be conducted at the Electron-Ion Collider,” said Meziani.

Argonne also is producing detector and related technologies in its quest for a phenomenon called neutrinoless double beta decay. A neutrino is one of the particles emitted during the process of neutron radioactive beta decay and serves as a small but mighty connection between particle physics and astrophysics.

“Neutrinoless double beta decay can only happen if the neutrino is its own anti-particle,” said Hafidi. ​“If the existence of these very rare decays is confirmed, it would have important consequences in understanding why there is more matter than antimatter in the universe.”

Argonne scientists from different areas of the lab are working with the Neutrino Experiment with Xenon Time Projection Chamber (NEXT) collaboration to design and prototype key systems for the collaboration's next big experiment. This includes developing a one-of-a-kind test facility and an R&D program for new, specialized detector systems.

“We are really working on dramatic new ideas,” said Meziani. ​“We are investing in certain technologies to produce some proof of principle that they will be the ones to pursue later, that the technology breakthroughs that will take us to the highest sensitivity detection of this process will be driven by Argonne.”

The tools of detection

Ultimately, fundamental science is science derived from human curiosity. And while we may not always see the reason for pursuing it, more often than not, fundamental science produces results that benefit all of us. Sometimes it’s a gratifying answer to an age-old question, other times it’s a technological breakthrough intended for one science that proves useful in a host of other applications.

Through their various efforts, Argonne scientists are aiming for both outcomes. But it will take more than curiosity and brain power to solve the questions they are asking. It will take our skills at toolmaking, like the telescopes that peer deep into the heavens and the detectors that capture hints of the earliest light or the most elusive of particles.

We will need to employ the ultrafast computing power of new supercomputers. Argonne’s forthcoming Aurora exascale machine will analyze mountains of data for help in creating massive models that simulate the dynamics of the universe or subatomic world, which, in turn, might guide new experiments — or introduce new questions.

And we will apply artificial intelligence to recognize patterns in complex observations — on the subatomic and cosmic scales — far more quickly than the human eye can, or use it to optimize machinery and experiments for greater efficiency and faster results.

“I think we have been given the flexibility to explore new technologies that will allow us to answer the big questions,” said Bender. ​“What we’re developing is so cutting edge, you never know where it will show up in everyday life.”

Funding for research mentioned in this article was provided by Argonne Laboratory Directed Research and Development; Argonne program development; DOE Office of High Energy Physics: Cosmic Frontier, South Pole Telescope-3G project, Detector R&D; and DOE Office of Nuclear Physics.

Featured image: The South Pole Telescope is part of a collaboration between Argonne and a number of national labs and universities to measure the CMB, considered the oldest light in the universe. The high altitude and extremely dry conditions of the South Pole keep water vapor from absorbing select light wavelengths. (Image by Argonne National Laboratory.)


Provided by Argonne National Laboratory

Danish Student Solves How The Universe is Reflected Near Black Holes (Cosmology)

In the vicinity of black holes, space is so warped that even light rays may curve around them several times. This phenomenon may enable us to see multiple versions of the same thing. While this has been known for decades, only now do we have an exact, mathematical expression, thanks to Albert Sneppen, a student at the Niels Bohr Institute. The result, which is even more useful for realistic black holes, has just been published in the journal Scientific Reports.

You have probably heard of black holes — the marvelous lumps of gravity from which not even light can escape. You may also have heard that space itself and even time behave oddly near black holes; space is warped.

In the vicinity of a black hole, space curves so much that light rays are deflected, and very nearby light can be deflected so much that it travels several times around the black hole. Hence, when we observe a distant background galaxy (or some other celestial body), we may be lucky to see the same image of the galaxy multiple times, albeit more and more distorted.

Galaxies in multiple versions

The mechanism is shown in the figure below: a distant galaxy shines in all directions. Some of its light comes close to the black hole and is lightly deflected; some light comes even closer, circles the hole a single time before escaping towards us, and so on. The closer to the edge of the hole we look, the more versions of the same galaxy we see.

Light from the background galaxy circles a black hole an increasing number of times, the closer it passes the hole, and we therefore see the same galaxy in several directions (credit: Peter Laursen).

How much closer to the black hole do you have to look from one image to see the next image? The result has been known for over 40 years, and is some 500 times (for the math aficionados, it is more accurately the exponential function of two pi, written e^(2π)).
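
For the curious, the quoted factor is easy to reproduce:

```python
import math

print(math.exp(2 * math.pi))   # 535.49...: each successive image forms this much
                               # closer to the edge of a non-rotating black hole
```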

Calculating this is so complicated that, until recently, we had not yet developed a mathematical and physical intuition as to why it happens to be this exact factor. But using some clever, mathematical tricks, master’s student Albert Sneppen from the Cosmic Dawn Center — a basic research center under both the Niels Bohr Institute and DTU Space — has now succeeded in proving why.

“There is something fantastically beautiful in now understanding why the images repeat themselves in such an elegant way. On top of that, it provides new opportunities to test our understanding of gravity and black holes,” Albert Sneppen explains.

Proving something mathematically is not only satisfying in itself; indeed, it brings us closer to an understanding of this marvelous phenomenon. The factor “500” follows directly from how black holes and gravity work, so the repetitions of the images now become a way to examine and test gravity.

Spinning black holes

As a completely new feature, Sneppen’s method can also be generalized to apply not only to “trivial” black holes, but also to black holes that rotate. Which, in fact, they all do.

The situation seen "face-on", i.e. how we would actually observe it from Earth. The extra images of the galaxy become increasingly squeezed and distorted, the closer we look at the black hole (credit: Peter Laursen).
The situation seen “face-on”, i.e. how we would actually observe it from Earth. The extra images of the galaxy become increasingly squeezed and distorted, the closer we look at the black hole (credit: Peter Laursen).

“It turns out that when it rotates really fast, you no longer have to get closer to the black hole by a factor of 500, but significantly less. In fact, each image is now only 50, or 5, or even down to just 2 times closer to the edge of the black hole,” explains Albert Sneppen.

Having to look 500 times closer to the black hole for each new image means that the images are quickly “squeezed” into one annular image, as seen in the figure. In practice, the many images will be difficult to observe. But when black holes rotate, there is more room for the “extra” images, so we can hope to confirm the theory observationally in the not-too-distant future. In this way, we can learn not just about black holes, but also about the galaxies behind them:

The travel time of the light increases, the more times it has to go around the black hole, so the images become increasingly “delayed”. If, for example, a star explodes as a supernova in a background galaxy, one would be able to see this explosion again and again.

Albert Sneppen’s article has just been accepted for publication in the journal Scientific Reports, and can be read here: Divergent reflections around the photon sphere of a black hole.

Featured image: A disk of glowing gas swirls into the black hole “Gargantua” from the movie Interstellar. Because space curves around the black hole, it is possible to look round its far side and see the part of the gas disk that would otherwise be hidden by the hole. Our understanding of this mechanism has now been increased by Danish master’s student at NBI, Albert Sneppen (credit: interstellar.wiki/CC BY-NC License).


Provided by University of Copenhagen

New Radio Receiver Opens Wider Window to Radio Universe (Cosmology)

Researchers have used the latest wireless technology to develop a new radio receiver for astronomy. The receiver is capable of capturing radio waves at frequencies over a range several times wider than conventional receivers, and can detect radio waves emitted by many types of molecules in space at once. This is expected to enable significant progress in the study of the evolution of the Universe and the mechanisms of star and planet formation.

Interstellar molecular clouds of gas and dust provide the material for stars and planets. Each type of molecule emits radio waves at characteristic frequencies and astronomers have detected emissions from various molecules over a wide range of frequencies. By observing these radio waves, we can learn about the physical properties and chemical composition of interstellar molecular clouds. This has been the motivation driving the development of a wideband receiving system.

In general, the range of radio frequencies that can be observed simultaneously by a radio telescope is very limited. This is due to the characteristics of the components that make up a radio receiver. In this new research, the team of researchers in Osaka Prefecture University (OPU) and the National Astronomical Observatory of Japan (NAOJ) has widened the bandwidth of various components, such as the horn that brings radio waves into the receiver, the waveguide (metal tube) circuit that propagates the radio waves, and the radio frequency converter. By combining these components into a receiver system, the team has achieved a range of simultaneously detectable frequencies several times larger than before. Furthermore, this receiver system was mounted on the OPU 1.85-m radio telescope in NAOJ’s Nobeyama Radio Observatory, and succeeded in capturing radio waves from actual celestial objects. This shows that the results of this research are extremely useful in actual astronomical observations.

“It was a very emotional moment for me to share the joy of receiving radio waves from the Orion Nebula for the first time with the members of the team, using the receiver we had built,” comments Yasumasa Yamasaki, an OPU graduate student and the lead author of the paper describing the development of the wideband receiver components. “I feel that this achievement was made possible by the cooperation of many people involved in the project.”

When compared to the receivers currently used in the Atacama Large Millimeter/submillimeter Array (ALMA), the breadth of frequencies that can be simultaneously observed with the new receivers is striking. To cover the radio frequencies between 211 and 373 GHz, ALMA uses two receivers, Band 6 and 7, but can use only one of them at a given time. In addition, ALMA receivers can observe two strips of frequency ranges with widths of 5.5 and 4 GHz using the Band 6 and 7 receivers, respectively. In contrast, the new wideband receiver can cover all the frequencies with a single unit. In addition, especially in the higher frequency band, the receiver can detect radio waves in a frequency range of 17 GHz at a time.

“It was a very valuable experience for me to be involved in the development of this broadband receiver from the beginning to successful observation,” says Sho Masui, a graduate student at OPU and the lead author of the research paper reporting the development of the receiver and the test observations. “Based on these experiences, I would like to continue to devote further efforts to the advancement of astronomy through instrument development.”

This wideband technology has made it possible to observe the interstellar molecular clouds along the Milky Way more efficiently using the 1.85-m radio telescope. In addition, widening the receiver bandwidth is listed as one of the high priority items in the ALMA Development Roadmap which aims to further improve the performance of ALMA. This achievement is expected to be applied to ALMA and other large radio telescopes, and to make a significant contribution to enhance our understanding of the evolution of the Universe.

These research results are presented in the following two papers published in the Publications of the Astronomical Society of Japan.

Featured image: Distribution of CO isotopologues in the Orion molecular cloud observed simultaneously with the newly developed broadband receiver. (Credit: Osaka Prefecture University/NAOJ)




Provided by NAOJ

Do Singularities of the Accelerated Stephani Universe Model Affect Light and Test Particle Motion? (Cosmology)

The Lambda-CDM model, based on the Friedmann solution, is the simplest model that provides a reasonably good description of the observed accelerated expansion of the universe. However, this model does not solve some problems, such as the nature of “dark energy” and the coincidence problem. Thus, there are alternative approaches. One of the possibilities here is to consider inhomogeneous cosmological models such as the “Stephani solution”.

It allows one to build a model of the universe with accelerated expansion within general relativity, with no modifications and without invoking exotic types of matter. This is a non-static solution for an expanding perfect fluid with zero shear and rotation, which contains the known Friedmann solution as a particular case. Initially it has no symmetries, but the spatial sections of the Stephani space-time in the spherically symmetric case have the same geometry as the corresponding subspaces of the Friedmann solution. Therefore, these models have an intuitively clear interpretation. The spatial curvature in the Stephani solution depends on time, which allows the accelerated expansion of the universe to be obtained.

Recently, Elena Kopteva and colleagues investigated the inhomogeneous, spherically symmetric Stephani universe filled with a perfect fluid with uniform energy density and non-uniform pressure as a possible model for the accelerated expansion of the universe. These models are characterized by a spatial curvature that depends on time. It has been shown that, despite possible singularities, the model can describe the current stage of the universe's evolution.

Now, they have investigated the geodesic structure of this model to verify whether its singularities can affect light and test particle motion within the observable region. Their study recently appeared in the journal Symmetry.

Figure 1. The spiralling out trajectory of the test particle in the case χ = const. © Elena Kopteva et al.

They showed that, in the case of purely radial motion, the radial velocity slightly decreases with time and radial distance, due to the expansion of the universe. They also showed that both particles and photons spiral out from the center when the radial coordinate is constant.


Figure 2. The observable radial velocity in the general case of motion. The constants are chosen as follows: β = –0.111113, vr0 = 0.00005, L = 0.001, χin = 0.084, k = –1.01 (the red line), k = –1.5 (the green line) and k = –2.5 (the blue line) © Elena Kopteva et al.
Figure 3. The dependence vr(R) in the general case of motion. The constants are chosen as follows: β = –0.111113, vr0 = 0.00005, L = 0.001, χin = 0.084, k = –1.01 (the red line), k = –1.5 (the green line) and k = –2.5 (the blue line) © Elena Kopteva et al.

In addition, one interesting feature found in this model is that, in the case of test particle motion with arbitrary initial velocity, the observable radial distance increases even when the observable radial velocity is negative. This is because the radial distance depends on both the time T and χ, so it can grow even when χ decreases.

“The singularities are indistinguishable in observations and do not influence the motion of test particles and photons up to the current age of the universe, or even into the sufficiently far future.”

Finally, their analysis of the geodesic structure with respect to the behavior of the singularity showed that the closer the exponent k is to −1, the more slowly the solution χ(T) tends towards the singularity.

Figure 4. Relative positions of the singularity (the violet line) and χ(T) (the blue line) in the general case. The dashed line indicates the current age of the universe. The constants are β = –0.111113, vr0 = 0.00005, L = 0.001, χin = 0.084, k = –2.5. © Elena Kopteva et al.

Reference: Bormotova, I.; Kopteva, E.; Stuchlík, Z. Geodesic Structure of the Accelerated Stephani Universe. Symmetry 2021, 13, 1001. https://doi.org/10.3390/sym13061001



Observation, Simulation, and AI Join Forces to Reveal a Clear Universe (Cosmology)

Japanese astronomers have developed a new artificial intelligence (AI) technique to remove noise in astronomical data due to random variations in galaxy shapes. After extensive training and testing on large mock data created by supercomputer simulations, they then applied this new tool to actual data from Japan’s Subaru Telescope and found that the mass distribution derived from using this method is consistent with the currently accepted models of the Universe. This is a powerful new tool for analyzing big data from current and planned astronomy surveys.

Wide area survey data can be used to study the large-scale structure of the Universe through measurements of gravitational lensing patterns. In gravitational lensing, the gravity of a foreground object, like a cluster of galaxies, can distort the image of a background object, such as a more distant galaxy. Some examples of gravitational lensing are obvious, such as the “Eye of Horus”. The large-scale structure, consisting mostly of mysterious “dark” matter, can distort the shapes of distant galaxies as well, but the expected lensing effect is subtle. Averaging over many galaxies in an area is required to create a map of foreground dark matter distributions.

But this technique of looking at many galaxy images runs into a problem; some galaxies are just innately a little funny looking. It is difficult to distinguish between a galaxy image distorted by gravitational lensing and a galaxy that is actually distorted. This is referred to as shape noise and is one of the limiting factors in research studying the large-scale structure of the Universe.
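
A toy illustration of the problem (with assumed numbers, not the survey's actual values): the intrinsic ellipticity scatter of galaxies is tens of times larger than a typical lensing shear, so the signal only emerges after averaging over many galaxies.

```python
import numpy as np

# Shape noise in a nutshell: each galaxy's measured ellipticity is the tiny lensing
# shear plus a large random intrinsic ellipticity; averaging N galaxies beats the
# noise down only as 1/sqrt(N). Values below are illustrative assumptions.
rng = np.random.default_rng(42)
true_shear = 0.01       # assumed lensing signal per ellipticity component
sigma_e = 0.26          # assumed intrinsic ellipticity scatter per galaxy

for n_gal in (10, 1_000, 100_000):
    observed = true_shear + rng.normal(0.0, sigma_e, size=n_gal)
    print(f"N = {n_gal:>7d}: mean ellipticity = {observed.mean():+.4f} "
          f"(expected noise ~ {sigma_e / np.sqrt(n_gal):.4f})")
```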

To compensate for shape noise, a team of Japanese astronomers first used ATERUI II, the world's most powerful supercomputer dedicated to astronomy, to generate 25,000 mock galaxy catalogs based on real data from the Subaru Telescope. They then added realistic noise to these perfectly known artificial data sets and trained an AI to statistically recover the lensing dark matter from the mock data.

After training, the AI was able to recover previously unobservable fine details, helping to improve our understanding of the cosmic dark matter. Then using this AI on real data covering 21 square degrees of the sky, the team found a distribution of foreground mass consistent with the standard cosmological model.

“This research shows the benefits of combining different types of research: observations, simulations, and AI data analysis.” comments Masato Shirasaki, the leader of the team, “In this era of big data, we need to step across traditional boundaries between specialties and use all available tools to understand the data. If we can do this, it will open new fields in astronomy and other sciences.”

These results appeared as Shirasaki et al. “Noise reduction for weak lensing mass mapping: an application of generative adversarial networks to Subaru Hyper Suprime-Cam first-year data” in the June 2021 issue of Monthly Notices of the Royal Astronomical Society.

Featured image: Artist’s visualization of this research. Using AI driven data analysis to peel back the noise and find the actual shape of the Universe. (Credit: The Institute of Statistical Mathematics)


Provided by NAOJ

A Massive Protocluster of Merging Galaxies in the Early Universe (Cosmology)

Submillimeter galaxies (SMGs) are a class of the most luminous, distant, and rapidly star-forming galaxies known and can shine brighter than a trillion Suns (about one hundred times more luminous in total than the Milky Way). They are generally hard to detect in the visible, however, because most of their ultraviolet and optical light is absorbed by dust, which in turn is heated and radiates at submillimeter wavelengths – the reason they are called submillimeter galaxies. The power source for these galaxies is thought to be high rates of star formation, as much as one thousand stars per year (in the Milky Way, the rate is more like one star per year). SMGs typically date from the early universe; they are so distant that their light has been traveling for over ten billion years, more than 70% of the lifetime of the universe, from the epoch about three billion years after the big bang. Because it takes time for them to have evolved, astronomers think that even a billion years earlier they probably were actively making stars and influencing their environments, but very little is known about this phase of their evolution.

SMGs have recently been identified in galaxy protoclusters, groups of dozens of galaxies in the universe when it was less than a few billion years old. Observing massive SMGs in these distant protoclusters provides crucial details for understanding both their early evolution and that of the larger structures to which they belong. CfA astronomers Emily Pass and Matt Ashby were members of a team that used infrared and optical data from the Spitzer IRAC and Gemini-South instruments, respectively, to study a previously identified protocluster, SPT2349-56, in the era only 1.4 billion years after the big bang. The protocluster was spotted by the South Pole Telescope at millimeter wavelengths and then observed in more detail with Spitzer, Gemini, and the ALMA submillimeter array.

The protocluster contains a remarkable concentration of fourteen SMGs, nine of which were detected by these optical and infrared observations. The astronomers were then able to estimate the stellar masses, ages, and gas content in these SMGs, as well as their star formation histories, a remarkable achievement for such distant objects. Among other properties of the protocluster, the scientists deduce that its total mass is about one trillion solar masses, and its galaxies are making stars in a manner similar to star formation processes in the current universe. They also conclude that the whole ensemble is probably in the midst of a colossal merger.

Featured image: An artist’s impression of the protocluster of galaxies SPT2349-56, a group of over a dozen interacting galaxies in the early Universe. Astronomers have observed the protocluster in optical, infrared, and millimeter radiation, and determined that several member galaxies are “submillimeter galaxies,” among the most luminous, rapidly star-forming galaxies known. © ESO/M. Kornmesser


Reference: “Optical and near-infrared observations of the SPT2349-56 proto-cluster core at z = 4.3,” K. M. Rotermund, S. C. Chapman, K. A. Phadke, R. Hill, E. Pass, M. Aravena, M. L. N. Ashby, A. Babul, M. Bethermin, R. Canning, C. de Breuck, C. Dong, A. H. Gonzalez, C. C. Hayward, S. Jarugula, D. P. Marrone, D. Narayanan, C. Reuter, D. Scott, J. S. Spilker, J. D. Vieira, G. Wang and A. Weiss, Monthly Notices of the Royal Astronomical Society 502, 1797, 2021.


Provided by CFA Harvard

NASA’s Webb Will Use Quasars to Unlock the Secrets of the Early Universe (Cosmology)

Quasars are very bright, distant and active supermassive black holes that are millions to billions of times the mass of the Sun. Typically located at the centers of galaxies, they feed on infalling matter and unleash fantastic torrents of radiation. Among the brightest objects in the universe, a quasar’s light outshines that of all the stars in its host galaxy combined, and its jets and winds shape the galaxy in which it resides. 

Shortly after its launch later this year, a team of scientists will train NASA’s James Webb Space Telescope on six of the most distant and luminous quasars. They will study the properties of these quasars and their host galaxies, and how they are interconnected during the first stages of galaxy evolution in the very early universe. The team will also use the quasars to examine the gas in the space between galaxies, particularly during the period of cosmic reionization, which ended when the universe was very young. They will accomplish this using Webb’s extreme sensitivity to low levels of light and its superb angular resolution.

Webb: Visiting the Young Universe

As Webb peers deep into the universe, it will actually look back in time. Light from these distant quasars began its journey to Webb when the universe was very young and took billions of years to arrive. We will see things as they were long ago, not as they are today.

“All these quasars we are studying existed very early, when the universe was less than 800 million years old, or less than 6 percent of its current age. So these observations give us the opportunity to study galaxy evolution and supermassive black hole formation and evolution at these very early times,” explained team member Santiago Arribas, a research professor at the Department of Astrophysics of the Center for Astrobiology in Madrid, Spain. Arribas is also a member of Webb’s Near-Infrared Spectrograph (NIRSpec) Instrument Science Team.

The light from these very distant objects has been stretched by the expansion of space. This is known as cosmological redshift. The farther the light has to travel, the more it is redshifted. In fact, visible light emitted in the early universe is stretched so dramatically that it has shifted into the infrared by the time it reaches us. With its suite of infrared-tuned instruments, Webb is uniquely suited to studying this kind of light.
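
As a simple illustration (the redshift value of z ≈ 7 below is an assumption chosen to match a quasar seen when the universe was under 800 million years old, not a number from the observing program):

\[
\lambda_{\rm obs} = (1 + z)\,\lambda_{\rm emit}
\;\;\Rightarrow\;\;
\lambda_{\rm obs} \approx (1 + 7) \times 0.5\,\mu\mathrm{m} = 4\,\mu\mathrm{m},
\]

so visible light emitted at 0.5 microns arrives at about 4 microns, squarely in the infrared range covered by Webb's instruments.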

Studying Quasars, Their Host Galaxies and Environments, and Their Powerful Outflows

The quasars the team will study are not only among the most distant in the universe, but also among the brightest. These quasars typically have the highest black hole masses, and they also have the highest accretion rates — the rates at which material falls into the black holes.

“We’re interested in observing the most luminous quasars because the very high amount of energy that they’re generating down at their cores should lead to the largest impact on the host galaxy by mechanisms such as quasar outflows and heating,” said Chris Willott, a research scientist at the Herzberg Astronomy and Astrophysics Research Centre of the National Research Council of Canada (NRC) in Victoria, British Columbia. Willott is also the Canadian Space Agency’s Webb project scientist. “We want to observe these quasars at the moment when they’re having the largest impact on their host galaxies.”

An enormous amount of energy is liberated when matter is accreted by the supermassive black hole. This energy heats and pushes the surrounding gas outward, generating strong outflows that tear across interstellar space like a tsunami, wreaking havoc on the host galaxy. 
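
The scale of that energy release is captured by the textbook accretion-luminosity relation (the efficiency value below is a conventional order-of-magnitude assumption, not a figure from this observing program):

\[
L_{\rm acc} = \eta\,\dot{M} c^{2}, \qquad \eta \sim 0.1,
\]

meaning that of order ten percent of the rest-mass energy of the infalling gas can emerge as radiation, which is what allows an accreting supermassive black hole to outshine all the stars of its host galaxy.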

Outflows play an important role in galaxy evolution. Gas fuels the formation of stars, so when gas is removed due to outflows, the star-formation rate decreases. In some cases, outflows are so powerful and expel such large amounts of gas that they can completely halt star formation within the host galaxy. Scientists also think that outflows are the main mechanism by which gas, dust and elements are redistributed over large distances within the galaxy or can even be expelled into the space between galaxies – the intergalactic medium. This may provoke fundamental changes in the properties of both the host galaxy and the intergalactic medium.

Examining Properties of Intergalactic Space During the Era of Reionization

More than 13 billion years ago, when the universe was very young, the view was far from clear. Neutral gas between galaxies made the universe opaque to some types of light. Over hundreds of millions of years, the neutral gas in the intergalactic medium became charged or ionized, making it transparent to ultraviolet light. This period is called the Era of Reionization. But what led to the reionization that created the “clear” conditions detected in much of the universe today? Webb will peer deep into space to gather more information about this major transition in the history of the universe. The observations will help us understand the Era of Reionization, which is one of the key frontiers in astrophysics.   

The team will use quasars as background light sources to study the gas between us and the quasar. That gas absorbs the quasar’s light at specific wavelengths. Through a technique called imaging spectroscopy, they will look for absorption lines in the intervening gas. The brighter the quasar is, the stronger those absorption line features will be in the spectrum. By determining whether the gas is neutral or ionized, scientists will learn how neutral the universe is and how much of the reionization process had occurred by that particular point in time.
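
Schematically, the measured spectrum is the quasar's intrinsic spectrum multiplied by the transmission of the gas along the line of sight:

\[
F_{\rm obs}(\lambda) = F_{\rm quasar}(\lambda)\, e^{-\tau(\lambda)},
\]

where the optical depth τ(λ) grows with the amount of absorbing material, in particular neutral hydrogen, between us and the quasar; deep or saturated absorption at the relevant wavelengths therefore signals a more neutral intergalactic medium at that epoch.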

“If you want to study the universe, you need very bright background sources. A quasar is the perfect object in the distant universe, because it’s luminous enough that we can see it very well,” said team member Camilla Pacifici, who is affiliated with the Canadian Space Agency but works as an instrument scientist at the Space Telescope Science Institute in Baltimore. “We want to study the early universe because the universe evolves, and we want to know how it got started.”

The team will analyze the light coming from the quasars with NIRSpec to look for what astronomers call “metals,” which are elements heavier than hydrogen and helium. These elements were formed in the first stars and the first galaxies and expelled by outflows. The gas moves out of the galaxies it was originally in and into the intergalactic medium. The team plans to measure the generation of these first “metals,” as well as the way they’re being pushed out into the intergalactic medium by these early outflows. 

The Power of Webb

Webb is an extremely sensitive telescope able to detect very low levels of light. This is important, because even though the quasars are intrinsically very bright, the ones this team is going to observe are among the most distant objects in the universe. In fact, they are so distant that the signals Webb will receive are very, very low. Only with Webb’s exquisite sensitivity can this science be accomplished. Webb also provides excellent angular resolution, making it possible to disentangle the light of the quasar from its host galaxy.

The quasar programs described here are Guaranteed Time Observations involving the spectroscopic capabilities of NIRSpec.

The James Webb Space Telescope will be the world’s premier space science observatory when it launches in 2021. Webb will solve mysteries in our solar system, look beyond to distant worlds around other stars, and probe the mysterious structures and origins of our universe and our place in it. Webb is an international program led by NASA with its partners, ESA (European Space Agency) and the Canadian Space Agency.

For more information about Webb, visit www.nasa.gov/webb.

Featured image: This is an artist’s concept of a galaxy with a brilliant quasar at its center. A quasar is a very bright, distant and active supermassive black hole that is millions to billions of times the mass of the Sun. Among the brightest objects in the universe, a quasar’s light outshines that of all the stars in its host galaxy combined. Quasars feed on infalling matter and unleash torrents of winds and radiation, shaping the galaxies in which they reside. Using the unique capabilities of Webb, scientists will study six of the most distant and luminous quasars in the universe. Credits: NASA, ESA and J. Olmsted (STScI)


Provided by NASA

SAMI Maps Our Nearby Universe (Cosmology)

Our Monthly Media for June comes from Dr Sam Vaughan, from our University of Sydney node, on the final data release from the SAMI galaxy survey led by Professor Scott Croom, also at the University of Sydney. SAMI has observed over three thousand nearby galaxies in exquisite detail, studying the orbits of their stars, the properties of their interstellar gas and the kinds of chemical elements their stars are made from.

Using observations from Siding Spring Observatory in rural New South Wales, SAMI is able to measure a “3D” view of its targets by taking a spectrum from many different locations in a single galaxy at the same time. This means that astronomers can build a picture of the complicated motions and orbits of stars in these galaxies, as well as investigating whether stars in the inner regions of galaxies are different from those in the outskirts.
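
The idea behind this kind of “3D” (integral-field) measurement can be sketched in a few lines of code (a toy example with made-up arrays and names, not the actual SAMI pipeline): every spatial pixel of the datacube carries its own spectrum, and the Doppler shift of a spectral line in that spectrum gives the line-of-sight velocity of the stars or gas at that position.

import numpy as np

C_KM_S = 299792.458        # speed of light in km/s
LAMBDA_REST = 6562.8       # rest wavelength of the H-alpha emission line, in Angstroms

# Toy datacube with axes (y, x, wavelength); random values stand in for real data.
wavelengths = np.linspace(6500.0, 6700.0, 200)     # wavelength axis in Angstroms
cube = np.random.rand(10, 10, wavelengths.size)    # placeholder integral-field datacube

# For each spatial pixel (spaxel), take the wavelength of peak flux as a crude
# estimate of the line centre, then convert its Doppler shift to a velocity.
peak_index = cube.argmax(axis=2)                   # index of the brightest channel per spaxel
lambda_obs = wavelengths[peak_index]               # observed line centre per spaxel
velocity_map = C_KM_S * (lambda_obs - LAMBDA_REST) / LAMBDA_REST   # km/s at each position

print(velocity_map.shape)  # (10, 10): one line-of-sight velocity per position in the galaxy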

In particular, SAMI has been a game changer for studying why the stars in some galaxies rotate together in a nice, ordered manner whilst the stars in other galaxies seem to have been thrown together in a jumbled mess. SAMI has produced one of the largest ever collections of galaxies with these measurements, and is finding tentative hints that the local environment a galaxy lives in plays a role in determining the orbits of its stars. Getting to the bottom of this question is key to understanding how the galaxies in our Universe were born and have evolved throughout their lifetimes.

This final data release is the product of over ten years of effort from nearly 100 astronomers, both within Australia and around the world. Congratulations to Scott and the SAMI team for all their hard work!

Featured image: Data from the SAMI galaxy survey. Each of these points is the map of the interstellar gas in a galaxy from the survey. The galaxies are arranged by whether they are very low mass (on the left) or high mass (the right) and whether they live in an area full of lots of other galaxies (at the top) or not (at the bottom). The colour of the map tells you whether the gas contains a large fraction of heavy elements (red) or whether this fraction is low (blue). The four points at the top are zoomed-in versions of four galaxies in the survey, showing how a galaxy’s fraction of heavy elements can vary within a single object. © ASTRO3D


Provided by Astro 3D