Yu-Zhu Chen and colleagues studied gravitational waves using exact cylindrical gravitational wave solutions. They showed that gravitational waves exhibit two kinds of singularities: source singularities and resonance singularities. Their study recently appeared in the journal Symmetry.
In the weak-field, or linear, approximation, gravitational waves are treated as linear waves, and spacetime singularities are ignored. Most results about gravitational waves are derived in this approximation, such as gravitational quadrupole radiation, the resonance between a gravitational wave and a detector, and the linear superposition of two gravitational waves. But there is also the nonlinear theory: exact wave solutions of the Einstein equation. In this framework, new properties of gravitational waves come into view.
Chen and colleagues therefore analyzed gravitational waves using exact cylindrical gravitational wave solutions rather than solutions in the linear approximation.
“Our paper is motivated by problems such as the behavior of singularities in gravitational wave solutions and the new physical effects of gravitational wave solutions in addition to, e.g., reflection and transmission.”
— they wrote.
Based on the exact solutions, they analyzed the singularities in gravitational waves and showed that these fall into two kinds.
The first kind lies at a fixed spatial position corresponding to a source; they called it the “source singularity”.
By treating a cylindrical gravitational wave as a complete solution, they showed that singularities in cylindrical gravitational waves carry information about the source. The second kind of singularity arises as time proceeds to infinity; they recognized it as a resonance and called it the “resonance singularity”.
Unlike other researchers, who considered resonance between gravitational radiation and matter (especially gravitational-wave detectors), Chen and colleagues suggested that a gravitational wave can resonate with other gravitational waves. The resonance singularity emerges only when a gravitational wave with a source singularity and a gravitational wave without one possess the same frequency; two waves that both have, or both lack, source singularities do not resonate. The resonance also indicates that gravitational waves with sources and gravitational waves without sources are two different kinds of solution.
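The article does not reproduce the underlying solution, but the standard setup for exact cylindrical waves, on which such studies build, is the Einstein–Rosen line element, sketched here for orientation:

```latex
% Einstein--Rosen cylindrical-wave line element (standard form)
ds^2 = e^{2(\gamma-\psi)}\left(-dt^2 + d\rho^2\right)
     + e^{-2\psi}\rho^2\,d\varphi^2 + e^{2\psi}\,dz^2 ,
% where the wave amplitude \psi(t,\rho) obeys the cylindrical wave equation
\partial_t^2\psi = \partial_\rho^2\psi + \frac{1}{\rho}\,\partial_\rho\psi .
```

For a monochromatic standing wave, ψ is a combination of J₀(ωρ)cos ωt, which is regular on the axis, and Y₀(ωρ)cos ωt, which diverges at ρ = 0; this is the Bessel-function language in which a singularity fixed on the axis (a source singularity) can be distinguished from a source-free wave.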
“We suppose that the resonance between gravitational waves is irrelevant to the symmetry of the system. In recent years, gravitational wave detection has made rapid progress. We expect that the resonance between gravitational waves will be found in the future.”
— they wrote.
Moreover, they investigated the interference of two gravitational waves, showing how the interference terms of two cylindrical gravitational waves behave. Interference appears in both the metric and the energy-momentum tensor. Specifically, they showed that the interference term in the source vanishes in the time-averaged sense.
“A gravitational wave with a source should be regarded as gravitational radiation. Gravitational radiation results in energy loss of the source. Using the conservation of energy, we may define the energy of the cylindrical gravitational radiation in our framework. We can also consider the resonance between matter waves and gravitational waves, based on our previous works on scattering.”
We have already discussed numerous primordial black hole formation scenarios on our website. Now, Shinsuke Kawai and Jinsu Kim have proposed another interesting mechanism for primordial black hole formation. They considered a model in which a scalar field is coupled to the Gauss-Bonnet term, and showed that primordial black holes may be seeded when a scalar potential term and the Gauss-Bonnet coupling term are nearly balanced. The large curvature perturbation in this model not only leads to the production of primordial black holes but also sources gravitational waves at second order. Their study recently appeared on arXiv.
Cosmic inflation provides a natural framework for the production of primordial black holes. Single-field inflation is capable of generating large primordial curvature perturbations on scales small compared to the scale of the cosmic microwave background. In the single-field inflation models for which primordial black hole production and the secondary gravitational waves have been studied, the gravity sector is usually assumed to be Einstein gravity. Einstein gravity, however, is by no means a complete theory: from the effective field theory viewpoint, for example, higher curvature terms are expected to arise. One such term is the Gauss-Bonnet term, which leads to a relatively well-behaved theory of higher curvature gravity.
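For reference, the Gauss-Bonnet term is the standard quadratic curvature invariant

```latex
\mathcal{G} \;=\; R^2 \;-\; 4\,R_{\mu\nu}R^{\mu\nu} \;+\; R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} .
```

In four dimensions this combination is topological (a total derivative) on its own, which is why it affects the dynamics only when multiplied by a field-dependent coupling such as ξ(ϕ)𝒢, the kind of coupling used in the model discussed below.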
Previously, Kawai and Kim investigated a model in which a scalar field ϕ is coupled to the Gauss-Bonnet term and discussed the features of a de Sitter-like fixed point as an alternative to cosmic inflation: in the presence of the Gauss-Bonnet coupling there may exist a nontrivial de Sitter-like fixed point where the scalar potential term is balanced against the higher curvature Gauss-Bonnet term. Near this nontrivial fixed point the standard slow-roll approximation is invalid, and an ultra-slow-roll regime of inflation naturally arises. Furthermore, they pointed out that the primordial curvature power spectrum may become enhanced near the nontrivial fixed point, potentially leading to the production of primordial black holes.
Now, they have investigated the production of primordial black holes and the scalar-induced second-order gravitational waves in this setup.
By numerically solving the equations of motion for two benchmark parameter sets, they showed that a large enhancement occurs in the curvature power spectrum.
A mode with a large enhancement of the curvature perturbation may undergo gravitational collapse when it reenters the horizon, thereby producing primordial black holes. For their two benchmark sets, they computed the present abundance of primordial black holes: one set accounts for the totality of today's dark matter relic density, while in the other case primordial black holes constitute only a portion of it.
A large curvature perturbation that produces primordial black holes inevitably sources scalar-induced second-order gravitational waves. The authors also obtained the present density parameter of these gravitational waves by combining an approximate analytical expression with their numerical results for the curvature power spectrum. Both benchmark sets fall within the sensitivity bounds of future gravitational wave experiments such as LISA, DECIGO, BBO, and SKA.
“While we focused on the scalar potential of the natural inflation model and assumed a smeared step function for the Gauss-Bonnet coupling function in this work, some of the features that we have found are generic. When there is a balance between a scalar potential term and a Gauss-Bonnet coupling term, a nontrivial fixed point may exist. Near the nontrivial fixed point the ultra-slow-roll inflation generically occurs, during which period a large enhancement of the curvature perturbation is guaranteed. We thus expect that the production of primordial black holes and the secondary gravitational wave signals are natural in higher curvature gravity theories.”
— they concluded.
Reference: Shinsuke Kawai and Jinsu Kim, “Primordial black holes from Gauss-Bonnet-corrected single field inflation”, arXiv:2108.01340, 2021. https://arxiv.org/abs/2108.01340
Note for editors of other websites: To reuse this article fully or partially, kindly give credit to our author/editor S. Aman or provide a link to our article.
Can the darkness of twilight – the photons of the cosmic microwave background – together with the light of dawn – the ultraviolet radiation emitted by the first stars as they light up – unravel the story of reionization and tell us something about primordial inflation? We talk about it with Daniela Paoletti of the National Institute of Astrophysics (INAF), co-author of two new studies that try to answer this question.
Almost a year later, Media INAF met again with Daniela Paoletti, researcher at INAF OAS in Bologna and first author of a new study that investigates in an original way the history of reionization, the still poorly defined period in which the primordial gas that pervaded the universe in the early stages of its evolution passed from the neutral to the ionized state. The work, with the evocative title “Dark Twilight Joined with the Light of Dawn to Unveil the Reionization History”, presents an extensive analysis of the history of reionization based on recent cosmological and astrophysical data. The authors also include Dhiraj Kumar Hazra and Fabio Finelli of INAF OAS, and the 2006 Nobel laureate in physics George Smoot. In addition to this work, another article co-authored by her appeared in the same period, reporting a detailed study of what the latest Planck data have to say beyond the standard inflation model. On this hot summer day, we discover with her the details of the analysis, its implications, and hopes for the future.
A year after the publication of your study on the history of reionization, you are about to publish a new one on the same hot topic. What is it about?
“This is a study of how the darkness of twilight and the light of the dawn of the universe, together, can help us understand how one of the most important phases in the history of the universe may have unfolded. The article is the continuation of what we published a year ago in Physical Review Letters, where we presented an original approach to studying the history of the early universe by combining astrophysical data and the cosmic microwave background (CMB). Here we describe this approach in detail and present many more results that, for reasons of space, we could not include in the previous article.”
How did you come up with such an evocative title?
“The idea for the title came to me while I was preparing the seminar on the Physical Review Letters paper. The innovative aspect of our approach is bringing together two completely different types of data: microwave background radiation and ultraviolet radiation. The first has always been described as the first light of the universe, since it consists of the first photons ever released; but, if you think about it, it is also the first twilight, because once this radiation cooled, the universe entered what is called the dark age. Ultraviolet radiation, on the other hand, is a tracer of the first stars: it marks the dawn of the universe, when it emerges from the night of the dark age. If the title had been in Italian I would have used the word aurora, which in my opinion is the most beautiful term, but in English aurora and dawn are rendered by the same word. So I used dawn, because I liked the idea of combining the twilight before the night and the light after it.”
Microwave radiation and ultraviolet radiation: how did you manage to reconcile such different data?
“They are two totally different types of data, and precisely because they are so different we had to devise and develop this new technique which, instead of evaluating the fraction of ionized matter over time, solves the equation for the ionization fraction, making it possible to exploit the ultraviolet data as well, which we could not use in the classical approach. These data tell us what is happening to the ionizing sources, that is, to the first stars. We also use other data, which proved very interesting in this work: quasars and gamma-ray bursts (GRBs). Some objects at very high redshift, and therefore very far away, can tell us what the ionization state is around the source, in their local environment. If we assume this is also representative of what lies beyond, they give us a precise idea of what is happening at that redshift, at that moment.”
So is the method the same as presented in 2020?
“Yes, the basis is the method we had already developed in the study presented in Physical Review Letters in 2020, but this time we explored what happens when we use different data or change assumptions. The very first thing we did was to check what happens when, in the reionization source, we leave free a term that would otherwise be set by simulations, because it is a term on which we have very little data, and what we have is not very sensitive. In the first work we had fixed it to the value from simulations; now we left it free to be guided by the data, and we found that the quasar data in particular constrain it well and that, fortunately, it turned out to be in perfect agreement with the simulations. This confirmed what we had previously assumed.”
What is the main novelty of the new study?
“An extremely interesting result of this new study comes when we change the ultraviolet data. For the ultraviolet, what we measure with our instruments is the luminosity function, which we then convert into an ultraviolet luminosity density. Since this luminosity function has to be integrated, we need to choose a cutoff magnitude: in other words, we do not consider sources fainter than the value assumed as the cut. Until now, we had always assumed a fairly conservative cutoff magnitude of −17, given that for fainter sources the data show a change in the behavior of the luminosity function that we do not know is real or an artifact of the uncertainties. We have now used a more aggressive, more optimistic cut instead: we wondered what would happen if we went down to −15. With this new cut we are in fact including the contributions of sources that are very faint but very numerous, and this leads to a slightly different history of reionization. We find a contribution starting at higher redshift: instead of an extremely steep climb, reionization becomes slower and lasts longer, precisely because of the contribution of these very faint sources capable of ionizing.”
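The effect of the cutoff choice can be sketched numerically. Below is a toy integration of a Schechter-type UV luminosity function (the parameter values are illustrative placeholders, not the ones used in the study): lowering the faint-end cutoff from magnitude −17 to −15 adds many faint sources, so the integrated UV luminosity density rises.

```python
import math

def schechter_mag(M, phi_star=1e-3, M_star=-20.5, alpha=-2.0):
    """Schechter luminosity function in magnitude form, phi(M) dM.
    Parameter values here are illustrative, not fits from the paper."""
    x = 10.0 ** (-0.4 * (M - M_star))  # L / L_star
    return 0.4 * math.log(10.0) * phi_star * x ** (alpha + 1) * math.exp(-x)

def uv_luminosity_density(M_cut, M_bright=-24.0, n_steps=2000):
    """Integrate phi(M) * L(M) from the bright end down to the faint
    cutoff magnitude M_cut (midpoint rule); L is in arbitrary units."""
    dM = (M_cut - M_bright) / n_steps
    total = 0.0
    for i in range(n_steps):
        M = M_bright + (i + 0.5) * dM
        total += schechter_mag(M) * 10.0 ** (-0.4 * M) * dM
    return total
```

With these placeholder parameters, `uv_luminosity_density(-15)` exceeds `uv_luminosity_density(-17)`, mirroring the longer, more gradual reionization history the interview describes.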
Do you still worry that you might not be considering realistic sources?
“Yes, that doubt always remains. Obviously the error bars get bigger, because these are more difficult measurements. But the beauty of it is that in October JWST launches, which will see very well all that faint tail we are considering. I am thrilled to make predictions for JWST because I am very curious to see what the impact will be as the error bars shrink. If this contribution of faint sources really is so large, the classical model used in cosmology – which foresees a very steep transition lasting a very short time – starts to become problematic, because at that point astrophysics would be telling us that reionization is actually a little slower. We must always keep in mind that reionization, beyond its intrinsic importance – because it is a phase transition, in which the entire universe completely changes state – is fundamental in cosmology, because it represents one of our greatest uncertainties. Suffice it to say that the optical depth – the parameter that encodes reionization in the cosmic microwave background – is the only one that, even after Planck, still has more than 1% error. It therefore impacts a whole series of extended models, including, as we have also shown, extensions of the inflation model such as those in the other article we wrote.”
Will the future experiments already planned help in this regard?
“Yes. With the generation of cosmological experiments coming in the next few years, we need to be particularly careful about how reionization is treated, and the possibility of using these astrophysical data to reconstruct the history of reionization could, in cascade, also have an impact on the constraints on cosmological parameters. Also very interesting for the future is what we have shown with the quasar data, which proved very powerful because – although they extend less in redshift than the ultraviolet data – they provide precise points of the ionization fraction. George Smoot pointed out to me that in the future, with DESI and Euclid, we will no longer be talking about five, six, ten points but about samples of thousands of quasars. So in the next ten years the approach and the perspective will change completely.”
“An instrument that could have done exceptional things is Theseus, because gamma-ray bursts are very powerful probes: while a quasar has a continuous emission and therefore ionizes the medium around itself, a GRB does not, because it is too fast – it has no time to ionize. It therefore gives a truly precise point for the ionization fraction. Unfortunately, in our analysis we only have one such point, which nevertheless already shows how a single point, out of the dozen used, is able to narrow the error bars.”
What astrophysical data are you using? At what distance?
“We use six galaxy clusters from the Hubble Frontier Fields. The quasars used reach up to redshift 8, while the ultraviolet sources, interpolating, reach up to redshift 10.”
There is a parallel study to this one, submitted in the same period, with an equally curious title reminiscent of Toy Story. What is it about?
“The original title of this study by Dhiraj – the first author, who was in Bologna until January 2020 and is now a professor at IMSc Chennai (India) – was ‘Inflation story: to slow-roll and beyond’, echoing Buzz Lightyear's catchphrase from Toy Story, because slow-roll is the standard model of inflation and ‘beyond’ is already the non plus ultra, yet we go further. A couple of the co-authors did not entirely agree because, in fact, we considered slow-roll a starting point, so it would not have been correct to use the ‘to’. In the end we dropped the ‘to’, even though we both liked it because it recalled Buzz's urge to go further and, come to think of it, our ending is similar to that of the Toy Story hero: we try to go further, but there is a problem. We know that the standard model fits the data beautifully, but we also know that Planck confirmed what we had already seen with WMAP, namely that there are anomalies in these data. These anomalies are extremely elusive because they all sit at 2.8 sigma of significance, while the threshold for calling something anomalous is 3 sigma. So we are in a kind of limbo that does not allow us to tell whether what we are seeing is a statistical fluctuation or really an anomaly.”
If it were an anomaly, would it be more intriguing?
“Yeah, the nice thing is to go and check whether it is an anomaly. Two of the biggest anomalies are the lack of power at very large angular scales, observed and confirmed by Planck, and small oscillations in the angular power spectrum. It is these blocks of oscillations, or single oscillations – called features – that are not produced in the standard model. One possibility is that inflation was not all slow-roll. We have a scalar field – the one that generates inflation – that moves very slowly on an extremely flat potential; if, before settling into its slow roll on the flat part, the field had encountered a slightly less flat stretch, or a jump, or a cusp, this could produce the loss of power and generate the small oscillations.”
How do you check it?
“Usually what needs to be done – and it has been done in many cases, including in Planck's paper on inflation – is to take different physical models and check whether they fit the data better or worse. The nice thing here is that we use a framework called Wiggly Whipped Inflation. It is a phenomenological approach: we do not ask what caused a given feature, but rather: if the inflationary potential had a certain shape, can we fit the data better? Of course, if we fit the data better, we can be reasonably confident that we have found an inflationary model that works best. First there is the whip – the so-called whipped inflation potential – which tells us that the scalar field initially rolled a little faster and then settled into slow-roll. In this case there is a lack of power, because when the field rolls quickly it does not generate many perturbations; it generates them once it reaches slow-roll. Then one tests the case where the wiggles – those small oscillations – are also present. These oscillations can be produced by discontinuities: when the potential has a jump, a wiggle is generated; the bigger the jump, the more oscillations are generated at certain scales, which depend on the size of the jump. It is a general framework that simply gives us an idea of what can best fit the data.”
It sounds simple, but I guess it really isn’t …
“There are two problems. On the one hand, we know that with 100,000 parameters we could fit the entire universe: I could write a Lagrangian of the universe depending on 150 parameters and I would have the wave function of the universe. But it does not work like that, because there would be too many degrees of freedom. So saying that a model improves the fit is always a balancing act between how free the model is – how many degrees of freedom it has – and how much better it fits the data. If a model, as happened to us in some of these cases, fits the data better than the standard model but uses many parameters, it does not count. Furthermore, it must be said that Planck's 2018 data reduced the evidence for the anomalies; in temperature, the loss of power is still there.”
What’s new in this second study?
“The novelty lies in the fact that we have also used polarization. In principle, the same analysis done in temperature can be done in polarization. The problem is that in polarization, on large angular scales, reionization increases the power, masking a possible drop. While temperature still favors the models that generate this power deficit, once polarization is considered it becomes apparent that these models are not favored in any way over the standard slow-roll model with a power-law power spectrum. There is another novelty as well: for the small angular scales (below about 8 degrees) we used not the official Planck likelihood but CamSpec, in its non-binned version, which takes into account all the individual points without averaging. This likelihood is the result of a reworking of the Planck data by George Efstathiou and Steven Gratton after the publication of the Planck results; it uses more sky, slightly improving the error bars. We wanted to use it because it is more complete and more evolved. At the moment there is no evidence in favor of these more exotic models over ΛCDM.”
What does the future hold on this front?
“The future will be very interesting for two reasons. The first is the improvement of CMB data thanks to LiteBIRD, which will allow us to study the E-mode polarization limited only by cosmic variance. Then we will have ground-based experiments, which will trace the small and intermediate scales where these oscillations are present, measuring them more precisely than Planck. Furthermore, there is large-scale structure: since these oscillations are also present at small scales, they could also be seen by an experiment like Euclid. Another possibility is to use non-Gaussianities, because the same oscillation effects seen in the power spectrum also appear in higher-order moments, hence in non-Gaussianities.”
So, for now, any significant news on the history of inflation?
“For the moment, Planck's data still prefer the standard model. But the prospects for the future are good: in the next ten years I expect we can really start to say whether it was just standard slow-roll or not. For now we are like Buzz: we cannot go much further than our standard-model room. But remember that Buzz eventually found his rocket and took off for real, and so it will be for us with future data – maybe we really will go beyond slow-roll.”
In the early stages of the universe, quarks and gluons were quickly confined into protons and neutrons, which went on to form atoms. With particle accelerators reaching ever higher energies, the opportunity to study this fleeting primordial state of matter has finally arrived.
Quark-Gluon Plasma (QGP) is a state of matter that existed only for the briefest of times at the very beginning of the universe, with these particles quickly clumping together to form the protons and neutrons that make up the everyday matter that surrounds us. The challenge of understanding this primordial state of matter falls to physicists operating the world’s most powerful particle accelerators. A new special issue of EPJ Special Topics entitled ‘Quark-Gluon Plasma and Heavy-Ion Phenomenology’, edited by Munshi G. Mustafa, Saha Institute of Nuclear Physics, Kolkata, India, brings together seven papers that detail our understanding of QGP and the processes that transformed it into the baryonic matter that surrounds us today.
“Quark-Gluon Plasma is the strongly interacting deconfined matter which existed only briefly in the early universe, a few microseconds after the Big Bang,” says Mustafa. “The discovery and characterisation of the properties of QGP remain some of the best orchestrated international efforts in modern nuclear physics.” Mustafa highlights Heavy Ion Phenomenology as providing a very reliable tool to determine the properties of QGP and in particular, the dynamics of its evolution and cooling.
Improvements at colliders such as the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) have radically increased the energy levels that can be attained by heavy nuclei collisions at near-light speeds bringing them in line with those of the infant Universe. In addition to this, future experiments at the Facility for Antiproton and Ion Research (FAIR) and at the Nuclotron-based Ion Collider fAcility (NICA) will generate a wealth of data on QGP and the conditions in the early Universe.
“This collection is so timely as it calls for a better theoretical understanding of particle properties of hot and dense deconfined matter, which reflect both static and dynamical properties of QGP,” explains Mustafa. “This improved theoretical understanding of Quark-Gluon Plasma and Heavy Ion Phenomenology is essential for uncovering the properties of the putative QGP which occupied the entire universe a few microseconds after the Big Bang.”
Mustafa points out that this improved understanding should also open the doorway to understanding the equation of state of this strongly interacting matter and prepare the platform to explore the theory of quark-hadron transition and the possible thermalisation of the QGP. This could in turn help us understand the steps that led from QGP to the everyday baryonic matter that surrounds us.
“The quarks and gluons which formed the neutrons and protons were confined into them a few microseconds after the Big Bang,” concludes Mustafa. “This is the first time we have seen them liberated from their eternal confinement!”
All articles are available here and are freely accessible until 12 September 2021. For further information read the Editorial.
Featured image: Girolamo Sferrazza Papa | Getty Images
Astronomers spotted an unusual set of rings in X-rays around a black hole with a companion star.
These rings are created by light echoes, a phenomenon similar to echoes on Earth from sound waves bouncing off hard surfaces.
NASA’s Chandra X-ray Observatory and Neil Gehrels Swift Observatory were used to detect X-rays ricocheting off dust clouds.
The rings provide information about the black hole, its companion, and the intervening dust clouds.
This image features a spectacular set of rings around a black hole, captured using NASA’s Chandra X-ray Observatory and Neil Gehrels Swift Observatory. The X-ray images of the giant rings reveal information about dust located in our galaxy, using a similar principle to the X-rays performed in doctor’s offices and airports.
The black hole is part of a binary system called V404 Cygni, located about 7,800 light years away from Earth. The black hole is actively pulling material away from a companion star — with about half the mass of the Sun — into a disk around the invisible object. This material glows in X-rays, so astronomers refer to these systems as “X-ray binaries.”
On June 5, 2015, Swift discovered a burst of X-rays from V404 Cygni. The burst created the high-energy rings from a phenomenon known as light echoes. Instead of sound waves bouncing off a canyon wall, the light echoes around V404 Cygni were produced when a burst of X-rays from the black hole system bounced off of dust clouds between V404 Cygni and Earth. Cosmic dust is not like household dust but is more like smoke, and consists of tiny, solid particles.
In this composite image, X-rays from Chandra (light blue) were combined with optical data from the Pan-STARRS telescope in Hawaii that show the stars in the field of view. The image contains eight separate concentric rings. Each ring is created by X-rays from V404 Cygni flares observed in 2015 that reflect off different dust clouds. (An artist’s illustration explains how the rings seen by Chandra and Swift were produced. To simplify the graphic, the illustration shows only four rings instead of eight.)
A team of researchers led by Sebastian Heinz of the University of Wisconsin in Madison analyzed 50 Swift observations of the system made in 2015 between June 30 and August 25, and Chandra observations made on July 11 and 25, 2015. It was such a bright event that the operators of Chandra purposely placed V404 Cygni in between the detectors so that another bright burst would not damage the instrument.
The rings tell astronomers not only about the black hole’s behavior, but also about the landscape between V404 Cygni and Earth. For example, the diameter of the rings in X-rays reveals the distances to the intervening dust clouds the light ricocheted off. If the cloud is closer to Earth, the ring appears to be larger, and vice versa. The light echoes appear as narrow rings rather than wide rings or haloes because the X-ray burst lasted only a relatively short period of time.
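The scaling between ring size, time delay, and cloud distance follows from simple single-scattering geometry. Here is a minimal sketch (our own illustration; the distances and delays below are made-up inputs, not the paper's measurements): a photon scattered by dust at a fraction x of the way from Earth to the source, arriving with delay Δt after the direct burst, is seen at angular radius θ = sqrt(2cΔt(1−x)/(xD)).

```python
import math

C_PC_PER_DAY = 8.39e-4  # speed of light in parsecs per day

def echo_ring_radius_arcsec(delay_days, d_source_pc, x_frac):
    """Angular radius of a dust-scattering echo ring (small-angle,
    single-scattering approximation). x_frac is the dust cloud's
    distance from Earth as a fraction of the source distance."""
    extra_path_pc = C_PC_PER_DAY * delay_days
    theta_rad = math.sqrt(2.0 * extra_path_pc * (1.0 - x_frac)
                          / (x_frac * d_source_pc))
    return theta_rad * 206265.0  # radians -> arcseconds

# Dust closer to Earth (smaller x_frac) gives a larger ring,
# and each ring expands as the delay grows.
```

For a source at a few kiloparsecs and delays of weeks, this gives ring radii of order arcminutes, the angular scale on which the V404 Cygni rings were observed; clouds closer to Earth (smaller x) produce larger rings, as stated above.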
The researchers also used the rings to probe the properties of the dust clouds themselves. They compared the X-ray spectra — that is, the brightness of X-rays over a range of wavelengths — to computer models of dust with different compositions. Different compositions of dust will result in different amounts of the lower energy X-rays being absorbed and prevented from being detected with Chandra. This is a similar principle to how different parts of our body or our luggage absorb different amounts of X-rays, giving information about their structure and composition.
The team determined that the dust most likely contains mixtures of graphite and silicate grains. In addition, by analyzing the inner rings with Chandra, they found that the densities of the dust clouds are not uniform in all directions; previous studies had assumed that they were.
A paper describing the V404 Cygni results was published in the July 1, 2016, issue of The Astrophysical Journal (preprint). The authors of the study are Sebastian Heinz, Lia Corrales (University of Michigan); Randall Smith (Center for Astrophysics | Harvard & Smithsonian); Niel Brandt (The Pennsylvania State University); Peter Jonker (Netherlands Institute for Space Research); Richard Plotkin (University of Nevada, Reno); and Joey Neilson (Villanova University).
Multiple papers have been published every year studying the 2015 outburst of V404 Cygni that produced these rings. Previous outbursts were recorded in 1938, 1956 and 1989, so astronomers may still have many years to continue analyzing the 2015 event.
NASA’s Marshall Space Flight Center manages the Chandra program. The Smithsonian Astrophysical Observatory’s Chandra X-ray Center controls science from Cambridge, Massachusetts, and flight operations from Burlington, Massachusetts.
The detection of a non-Gaussian signature in the early Universe would be a smoking gun for many inflation models. Despite a number of searches, no evidence has been found for primordial non-Gaussianity in the Cosmic Microwave Background (CMB). Now, Oliver Philcox and colleagues have reported the first detection of the non-Gaussian four-point correlation function (4PCF) of galaxies, using the BOSS CMASS sample. Their study appeared on arXiv on 3 August 2021.
What makes this detection possible is an estimator recently presented by Oliver Philcox et al. that computes the N-point correlation functions of Ng galaxies in O(Ng²) time, together with a new modification that subtracts the disconnected 4PCF contribution (arising from the product of two 2PCFs) at the estimator level.
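For intuition about the quadratic scaling, here is a toy brute-force pair count, the O(Ng²) kernel underlying correlation-function estimators. This is only a sketch of the 2PCF pair-counting step, not the paper's encore algorithm, which uses a more sophisticated factorization to reach higher N-point functions at the same quadratic cost:

```python
import numpy as np

def pair_counts(points, r_edges):
    """Brute-force histogram of pairwise separations: the O(N^2) building
    block of 2PCF estimators (toy version; not the encore algorithm)."""
    diff = points[:, None, :] - points[None, :, :]     # all pairwise offsets
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(len(points), k=1)             # each pair counted once
    counts, _ = np.histogram(dist[iu], bins=r_edges)
    return counts

rng = np.random.default_rng(0)
pts = rng.random((200, 3))                             # 200 points in a unit cube
counts = pair_counts(pts, np.linspace(0.0, 2.0, 11))   # 10 radial bins
```

In a real analysis these data counts would be combined with counts from random catalogs (e.g. the Landy-Szalay estimator) to correct for the survey geometry.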
“This is unlike previous works, and ensures that our measurement is specifically one of non-Gaussianity, rather than a recapitulation of known physics. The estimator is fast (scaling quadratically with the galaxy number density), corrected for the non-uniform survey geometry, and implemented in the public encore code. We verify its performance on a suite of lognormal simulations at high redshift, before applying it to the BOSS dataset and Patchy simulations.”
Additionally, analysis of higher-point functions such as the 4PCF is hampered by their high dimensionality, so the team implemented a signal-to-noise-based compression scheme that projects the 4PCF onto a set of ∼50 numbers.
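The idea of such a compression can be illustrated with a simple principal-component projection. This is a PCA stand-in under assumed mock data, not the paper's actual scheme, which ranks basis modes by detection signal-to-noise rather than by variance:

```python
import numpy as np

def compress(data_vectors, n_keep=50):
    """Project high-dimensional data vectors onto their top principal
    components. A PCA stand-in for the paper's signal-to-noise-based
    compression scheme (which ranks modes by detection S/N instead)."""
    X = data_vectors - data_vectors.mean(axis=0)
    cov = np.cov(X, rowvar=False)                  # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues ascending
    basis = eigvecs[:, ::-1][:, :n_keep]           # largest-variance modes first
    return X @ basis

rng = np.random.default_rng(1)
mocks = rng.normal(size=(300, 120))                # e.g. 300 hypothetical mock 4PCF vectors
compressed = compress(mocks, n_keep=50)            # shape (300, 50)
```

The payoff is that a ~50-dimensional data vector admits a well-conditioned covariance matrix estimated from a feasible number of mocks, enabling the χ²-style analysis described next.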
“The compression has minimal impact on detection significance and facilitates traditional classical χ²-like analysis using a suite of mock catalogs”
Finally, by performing a classical χ²-like analysis in the compressed subspace, they detected the non-Gaussian 4PCF at 8.1σ significance.
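Converting a χ² statistic into a Gaussian-equivalent "number of sigma" follows a standard recipe, sketched below (assuming SciPy; the χ² value here is illustrative, not the paper's measured statistic, whose significance is calibrated against mock catalogs):

```python
from scipy import stats

def chi2_to_sigma(chi2_value, dof):
    """Gaussian-equivalent (one-tailed) significance of a chi^2 statistic."""
    p_value = stats.chi2.sf(chi2_value, dof)   # survival function = p-value
    return stats.norm.isf(p_value)             # invert the Gaussian tail

# With ~50 compressed numbers, a chi^2 far above the dof is highly
# significant; 150 is an illustrative value, not the paper's statistic.
sigma = chi2_to_sigma(150.0, 50)
```

A χ² close to the number of degrees of freedom corresponds to no detection (well under 1σ), while values far above it translate into the multi-sigma significances quoted in such analyses.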
“The detectability of the 4PCF in the quasi-linear regime implies that it will become a useful tool in constraining cosmological and galaxy formation parameters from upcoming spectroscopic surveys.”
References: (1) Oliver H. E. Philcox, Zachary Slepian, Jiamin Hou, Craig Warner, Robert N. Cahn, Daniel J. Eisenstein, “encore: Estimating Galaxy N-point Correlation Functions in O(Ng²) Time”, arXiv:2105.08722, 2021. https://arxiv.org/abs/2105.08722 (2) Oliver H. E. Philcox, Jiamin Hou, and Zachary Slepian, “A First Detection of the Connected 4-Point Correlation Function of Galaxies Using the BOSS CMASS Sample”, arXiv:2108.01670, 2021. https://arxiv.org/abs/2108.01670
The great sensitivity of the IRAM 30-meter radio telescope allows it to detect very weak emission, such as that from rare molecules like HCNH+. A group of astronomers led by INAF studied this molecule (abundant in Titan's atmosphere) in detail and clarified for the first time that the abundance of HCNH+ differs between young, cold star-forming regions and evolved, hot ones. Details in A&A.
Using the Spanish IRAM 30-meter radio telescope, five researchers led by the National Institute of Astrophysics (INAF) have completed the first survey in star-forming regions of a rare but important interstellar molecule: HCNH+, or protonated hydrogen cyanide. It is an ionized particle, one of the most abundant so far found in the atmosphere of Titan (the largest natural satellite of the planet Saturn). Experts believe that HCNH+ is a crucial species in astrochemical reactions, but it has so far been identified in only a handful of star-forming regions, and its chemistry is therefore poorly understood. The team led by INAF observed 26 high-mass targets at different evolutionary stages, detecting the molecule in 16 regions. This represents the largest sample of sources in which this molecular ion has been found to date. The results are presented in an article recently published in the journal Astronomy & Astrophysics.
We interviewed the first author of the study, Francesco Fontani, an astronomer at INAF in Florence and an adjunct professor of interstellar medium physics at the University of Florence. His research mainly concerns the formation of stars and the presence in space of molecules of biological importance that may be linked to the origin of life in the Universe.
What is the HCNH+ molecule and where can it be found?
"The molecule is protonated hydrogen cyanide. On Earth it is not a compound that can be handled, because it is extremely unstable: if present in the atmosphere, due to the high density of the gas it would react almost instantly with something else, forming a more stable compound. But in the rarefied, low-pressure gas of an interstellar cloud, where interactions between particles are much rarer, it can survive for a long time and be seen with radio telescopes. Other similar molecules (H3+, N2H+, C3+) are 'interstellar' rather than terrestrial, because they survive for long only in that type of environment. HCNH+ is probably the most abundant ionized particle in the atmosphere of Titan, the only satellite in the Solar System with an atmosphere, one that is also richer in nitrogen than Earth's."
Why is it an important molecule to study in interstellar space?
"It is thought to be a key species for chemical processes in the regions where stars form, and is the major 'progenitor' of the molecules HCN (hydrogen cyanide) and HNC (hydrogen isocyanide), two molecules very abundant everywhere in space and both involved in various synthesis theories of prebiotic molecules. In fact, it seems that the polymerization of HCN can lead to adenine, a nitrogenous base of DNA and RNA. In general, HCNH+ is believed to be a key molecule in interstellar chemical processes involving nitriles, organic compounds characterized by the -CN (carbon-nitrogen) functional group, which are important in astrochemistry and astrobiology because they are possible progenitors of organic acids."
What is the novelty of your study compared to the state of the art?
"In our work we publish the detection of HCNH+ emission in 16 regions or clusters in which high-mass stars, that is, stars 8-10 times the mass of the Sun, are forming. The 16 regions have various 'ages': they range from very young objects, cold and quiescent, in which the star formation process is just beginning, to objects in more advanced stages, hot and turbulent. This allows us to study the major processes that form (and destroy) HCNH+ under very different conditions. It is also the first study in which we compare the abundance of HCNH+ in regions with very different physical and evolutionary properties, and for the first time we analyze the main chemical reactions related to HCNH+ in a hot and evolved environment, whereas previous studies had focused only on cold and young environments."
What did you find out?
"The most important observational result is that HCNH+ is significantly more abundant in the young phases of star formation, thus constituting an important reservoir of HCN and HNC already at these first moments. We have seen that the progenitors of the HCNH+ molecule are different in gas with different physical and evolutionary conditions, and it is the first time this has been recognized. Getting a bit technical: in young and cold regions (10 K, about -260 degrees Celsius), the 'parents' are mainly the HCN+ and HNC+ ions; in more evolved and warm regions (relatively speaking: we are still talking about 30 K, or about -240 degrees Celsius) the parents are mainly HCN and HCO+. We found that the abundances of HCNH+ relative to H2, the most abundant interstellar molecule, are higher in cold regions than in warm ones, and this has another interesting consequence: the abundance of HCNH+ can be used to establish whether a region whose evolutionary age is not well known is 'younger or older'. Molecules with these properties are also called chemical clocks, or chemical evolutionary indicators."
Was the molecule studied by chance while you were looking for something else? What in particular?
"Yes, we were looking for molecules containing deuterium, the stable isotope of hydrogen with a proton and a neutron in the nucleus, following the idea that the abundance of these molecules changes with the physical and chemical evolution of the region that contains them. Those observations have already been published, and have in fact confirmed our hypothesis that some molecules containing deuterium are indicators of the age of the host region. But sifting through the entire observed spectral band also revealed the emission of many other molecules, including HCNH+, which we did not expect to be another indicator of age!"
What remains to be discovered about HCNH+?
"On the basis of our results, we have understood for the first time which chemical processes are, 'on average', dominant in forming HCNH+ in colder, younger regions versus warmer, more evolved ones, thanks to a big leap in the number and properties of the observed regions compared with previous studies (from 5-6 to 16 regions). In the future we will need to observe even more regions with different physical properties to arrive at a full understanding of the role of this molecule in the important context of interstellar nitrile chemistry."
Featured image: The molecular cloud G034.43+00.24, one of the targets of the study by Fontani et al. (2021), seen in the multi-band images of Glimpse (Benjamin et al. 2003), the infrared survey of the inner Milky Way obtained with the Spitzer space telescope (red = 8 micrometers; green = 5.8 micrometers; blue = 4.5 micrometers). The infrared-dark filament, made up of cold, dense material, is filled with molecules (including HCNH+) that emit light at radio wavelengths. Credits: Ashley Thomas Barnes and Francesco Fontani
This is the first time anyone has had such a detailed look at a complete shock cooling curve in any supernova.
In a world-first, astronomers at The Australian National University (ANU), working with NASA and an international team of researchers, have captured the first moments of a supernova, the explosive death of a star, in never-before-seen detail.
NASA’s Kepler space telescope captured the data in 2017.
The ANU researchers recorded the initial burst of light that is seen as the first shockwave travels through the star before it explodes.
PhD scholar Patrick Armstrong, who led the study, said researchers are particularly interested in how the brightness of the light changes over time in the earliest moments of the explosion. This signal, known as the "shock cooling curve", provides clues as to what type of star caused the explosion.
“This is the first time anyone has had such a detailed look at a complete shock cooling curve in any supernova,” Mr Armstrong, from the ANU Research School of Astronomy and Astrophysics, said.
“Because the initial stage of a supernova happens so quickly, it is very hard for most telescopes to record this phenomenon.
“Until now, the data we had was incomplete and only included the dimming of the shock cooling curve and the subsequent explosion, but never the bright burst of light at the very start of the supernova.
“This major discovery will give us the data we need to identify other stars that became supernovae, even after they have exploded.”
The ANU researchers tested the new data against a number of existing star models.
Based on their modelling, the astronomers determined the star that caused the supernova was most likely a yellow supergiant, which was more than 100 times bigger than our sun.
Astrophysicist and ANU researcher Dr Brad Tucker said the international team was able to confirm that one particular model, known as SW 17, is the most accurate at predicting what types of stars caused different supernovae.
“We’ve proven one model works better than the rest at identifying different supernovae stars and there is no longer a need to test multiple other models, which has traditionally been the case,” he said.
“Astronomers across the world will be able to use SW 17 and be confident it is the best model to identify stars that turn into supernovas.”
Supernovae are among the brightest and most powerful events we can see in space and are important because they are believed to be responsible for the creation of most of the elements found in our universe.
By better understanding how these stars turn into supernovae, researchers are able to piece together information that provides clues as to where the elements that make up our universe originate.
Although the Kepler telescope was retired in 2018, new space telescopes such as NASA's Transiting Exoplanet Survey Satellite (TESS) will likely capture more supernova explosions.
“As more space telescopes are launched, we will likely observe more of these shock cooling curves,” Mr Armstrong said.
“This will provide us with further opportunities to improve our models and build our understanding of supernovae and where the elements that make up the world around us come from.”
Reference: P Armstrong, B E Tucker, A Rest, R Ridden-Harper, Y Zenati, A L Piro, S Hinton, C Lidman, S Margheim, G Narayan, E Shaya, P Garnavich, D Kasen, V Villar, A Zenteno, I Arcavi, M Drout, R J Foley, J Wheeler, J Anais, A Campillay, D Coulter, G Dimitriadis, D Jones, C D Kilpatrick, N Muñoz-Elgueta, C Rojas-Bravo, J Vargas-González, J Bulger, K Chambers, M Huber, T Lowe, E Magnier, B J Shappee, S Smartt, K W Smith, T Barclay, G Barentsen, J Dotson, M Gully-Santiago, C Hedges, S Howell, A Cody, K Auchettl, A Bódi, Zs Bognár, J Brimacombe, P Brown, B Cseh, L Galbany, D Hiramatsu, T W-S Holoien, D A Howell, S W Jha, R Könyves-Tóth, L Kriskovics, C McCully, P Milne, J Muñoz, Y Pan, A Pál, H Sai, K Sárneczky, N Smith, Á Sódor, R Szabó, R Szakáts, S Valenti, J Vinkó, X Wang, K Zhang, G Zsidi, SN2017jgh – A high-cadence complete shock cooling lightcurve of a SN IIb with the Kepler telescope, Monthly Notices of the Royal Astronomical Society, 2021;, stab2138, https://doi.org/10.1093/mnras/stab2138
Using the Five-hundred-meter Aperture Spherical radio Telescope (FAST), a team of international astronomers detected a giant filamentary H I structure, dubbed "Cattail", which is possibly the furthest (Rgc ∼ 22 kpc) and largest (∼1.1 kpc) filament found to date. Their study recently appeared on arXiv.
On cosmological scales, gas filaments are the largest known structures in the universe, consisting of walls of gravitationally bound galaxy superclusters. Within the Galaxy, the largest elongated molecular cloud structures, with lengths greater than 10 pc, are called giant molecular filaments. Compared to giant molecular filaments, H I filaments are not well studied. Most H I filaments are aligned with the Galactic plane, similar to giant molecular filaments. H I filaments are normally cold, with a typical excitation temperature Tex ∼ 50 K, and are often associated with CO-dark molecular gas. However, the detailed physical properties of H I filaments, as well as their distribution in the Galaxy, are not well characterized.
Now, a team of international astronomers led by Keping Qiu used FAST to observe the sky region of Right Ascension 307°.7 < α < 311°.0 and Declination 40°.9 < δ < 43°.4 on 2019 August 24. This region covers the main part of the Cygnus-X North molecular cloud, which has a velocity range of −30 to 20 km s⁻¹ and lies 1.4 kpc from the Sun.
They detected a giant filamentary H I structure with velocities between −170 and −130 km s⁻¹ and a mean velocity of −150 km s⁻¹, at a Galactocentric distance of 22 kpc.
The structure has a length of 1.1 kpc, making it so far the furthest and largest known giant filament in the Galaxy. They named it Cattail. Its mass is calculated to be 6.5 × 10⁴ M☉ and its linear mass density is 60 M☉ pc⁻¹. Its width is 207 pc, corresponding to an aspect ratio of about 5:1.
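The quoted numbers are mutually consistent, as a quick back-of-the-envelope check shows:

```python
# Sanity check on the quoted Cattail numbers (values from the article).
length_pc = 1100.0                         # length ~1.1 kpc
width_pc = 207.0                           # width
mass_msun = 6.5e4                          # total mass in solar masses

linear_density = mass_msun / length_pc     # ~59 Msun/pc, matching the quoted ~60
aspect_ratio = length_pc / width_pc        # ~5.3, i.e. the quoted ~5:1
```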
Cattail possesses a small velocity gradient (0.02 km s⁻¹ pc⁻¹) along its major axis. Together with the HI4PI data, the team found that Cattail could be even longer, up to 5 kpc. They also identified another new elongated structure as the extension of the Outer Scutum-Centaurus (OSC) arm into the Galactic first quadrant, and Cattail appears to be located far behind the OSC.
“Based on the above analysis, we suggest two possible explanations for Cattail: it is a giant filament with a length of ∼5 kpc, or part of a new arm in the Extreme Outer Galaxy (EOG).”
— they wrote.
The question of how such a huge filament is produced at such an extreme Galactic location remains open. Alternatively, Cattail might be part of a new arm beyond the OSC, though it is puzzling that the structure does not fully follow the warp of the Galactic disk.