On the trail of ultra-low gravitational waves (Astronomy)

Researchers have analyzed in detail a promising signal that could be due to the so-called gravitational wave background, produced by the gravitational energy released by pairs of supermassive black holes in mutual approach. The study, which also involved researchers from INAF, represents a step forward on the road to the detection of gravitational waves of very low frequency, of the order of one billionth of a hertz.

The EPTA (European Pulsar Timing Array) collaboration published today in Monthly Notices of the Royal Astronomical Society an article reporting the detailed analysis of a promising signal that could be due to the so-called gravitational wave background (GWB), which astronomers around the world have been chasing for some time, produced by the gravitational energy released by pairs of supermassive black holes as they approach each other, eventually leading them to merge. The results of the study were made possible thanks to pulsar data collected, over twenty-four years of observations, with five large-aperture European radio telescopes, including the 64-meter-diameter Sardinia Radio Telescope (SRT) located near Cagliari.

The beams of radiation emitted by the magnetic poles of pulsars rotate with the star, and we observe them as radio pulses as they cross our line of sight, like the beams of light from a distant lighthouse. A Pulsar Timing Array (PTA) is made up of an array of pulsars with very stable rotation, a property that lets them be used as a gravitational wave detector on a galactic scale. In the presence of a gravitational wave, space-time is deformed, and the very regular cadence of a pulsar's radio pulses is in turn altered. PTAs are sensitive to gravitational waves of very low frequency, in the nanohertz regime (one billionth of a hertz): a gravitational wave of this type completes a single oscillation in about 30 years.
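As a quick sanity check on the numbers above, the frequency of a wave with a roughly 30-year period can be computed in a few lines (a back-of-the-envelope sketch, not part of the EPTA analysis itself):

```python
# Frequency regime probed by PTAs: a gravitational wave that completes
# one oscillation in roughly 30 years.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

period_s = 30 * SECONDS_PER_YEAR   # ~30-year oscillation period, in seconds
frequency_hz = 1.0 / period_s      # corresponding frequency

print(f"{frequency_hz:.2e} Hz")    # ~1e-9 Hz: about a billionth of a hertz
```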

PTAs are therefore able to widen the window of observability of gravitational waves, currently limited to high frequencies (of the order of hundreds of hertz), which are studied by the ground-based detectors LIGO, Virgo and KAGRA. Those instruments pick up the gravitational signals generated in short-lived collisions involving stellar-mass black holes and neutron stars, while PTAs can detect gravitational waves produced by binary systems of supermassive black holes, located in the centers of galaxies, during their slow spiraling motion of mutual approach. The cumulative effect of the signals produced by this population of extreme celestial objects is, in fact, the gravitational wave background.

“The presence of a gravitational wave background,” explains Andrea Possenti, researcher at INAF in Cagliari and co-author of the work, “manifests itself in the form of very low frequency fluctuations in the rhythm of the radio pulses coming from all pulsars: a sort of additional ‘noise’ that disturbs the regular pulse pattern, which we could otherwise compare to the ticking of a very precise clock. Put very simply, an experiment such as the one conducted by EPTA therefore consists in the repeated observation of the array of pulsars, every few weeks and for many years, in search of a very low frequency ‘noise’ that affects the ticking of all pulsars in a common way, and that is not attributable to causes other than gravitational waves.”

Artist’s impression of the EPTA experiment. A group of European radio telescopes observes a network of pulsars spread across the sky. The variation recorded in the arrival time on Earth of the radio pulses emitted by these celestial bodies allows astronomers to study the smallest perturbations of space-time. These perturbations, called gravitational waves, have propagated relentlessly from the most remote, and therefore oldest, regions of the universe ever since the first galaxies merged with each other and the supermassive black holes housed in their central regions orbited each other, producing them. © INAF

The expected amplitude of the “noise” due to the gravitational background is in fact incredibly small, from a few tens to a couple of hundred billionths of a second of advance or delay in the arrival times of the radio pulses: in principle, many other effects could induce a similar “noise”. In order to reduce the role of the other sources of perturbation and validate the results, the analysis of the data collected by EPTA’s measurements therefore made use of two completely independent procedures, with three different models of the corrections due to the bodies of the Solar System, and adopting different statistical treatments. This allowed the team to pinpoint a clear signal that could potentially be identified as belonging to the gravitational wave background.

“EPTA had already found indications of the presence of this signal in the data set published in 2015,” recalls Nicolas Caballero, researcher at the Kavli Institute for Astronomy and Astrophysics in Beijing and lead co-author of the publication. “Since those results were affected by large statistical uncertainties, they were strictly discussed only as upper limits for the amplitude of the signal. Our new data now clearly confirm the presence of this signal, common to all pulsars, making it a candidate for the gravitational wave background.”

Einstein’s general relativity predicts a very specific relationship between the deformations of space-time experienced by radio signals from pulsars located in different directions in the sky. Scientists call this the “spatial correlation” of the signal. Its detection would uniquely identify the observed noise as due to a gravitational wave background. “At the moment, the statistical uncertainties in our measurements do not yet allow us to identify the presence of the spatial correlation predicted for the gravitational wave background signal. To confirm the nature of the signal,” explains Siyuan Chen, researcher at LPC2E of the French CNRS in Orléans and first author of the study, “we therefore need to include more pulsar data in the analysis. However, we can say that the current results are very encouraging.”
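The spatial correlation predicted by general relativity is known as the Hellings-Downs curve. As an illustration only (this is the textbook formula, not EPTA's actual analysis pipeline), its standard form for two distinct pulsars separated by an angle γ on the sky can be sketched as:

```python
import math

def hellings_downs(gamma_rad: float) -> float:
    """Expected correlation between the timing residuals of two distinct
    pulsars separated by an angle gamma (in radians) on the sky."""
    x = (1.0 - math.cos(gamma_rad)) / 2.0
    if x == 0.0:
        return 0.5  # limiting value as the angular separation goes to zero
    return 1.5 * x * math.log(x) - x / 4.0 + 0.5

# Sample the curve at a few separations:
for deg in (0, 45, 90, 135, 180):
    print(f"{deg:3d} deg -> {hellings_downs(math.radians(deg)):+.3f}")
```

The curve starts at 0.5 for nearly coincident pulsar pairs, dips below zero near 82 degrees, and rises to 0.25 for pulsars on opposite sides of the sky; finding this pattern in the inter-pulsar correlations is what would clinch the gravitational-wave interpretation.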

The Cagliari team that participated in the study. From left: Andrea Possenti, Marta Burgay and Delphine Perrodin. Credits: G. Alvito, P. Soletta / Inaf Cagliari

EPTA is a founding member of the International Pulsar Timing Array (IPTA). Since the independent data analyses performed by the other IPTA partners, i.e. the NANOGrav and PPTA experiments, have also indicated the presence of a similar signal, the IPTA members are working together to better prepare the next steps, thanks to the progress obtained by comparing all their data and methods of analysis.

“As was the case for high frequency gravitational waves in 2015, the detection of very low frequency gravitational waves would be an epochal achievement for physics, astrophysics and cosmology,” concludes Delphine Perrodin, researcher at INAF in Cagliari and co-author of the work. “In particular, the discovery and study of the gravitational wave background will give us direct information on the size and evolution of supermassive black holes, and on their contribution to shaping galaxies and the current universe. It is a challenge in which INAF has been immersed since 2006, the year of the birth of the EPTA collaboration, and which now benefits from the asset represented by SRT and its involvement in LEAP, the Large European Array for Pulsars, in which EPTA’s telescopes work in a synchronized way to reach the capabilities of a single 200-meter-diameter antenna, and thus greatly increase EPTA’s sensitivity to gravitational waves.”

Featured image: The five large European radio telescopes used in this study. From top left: the Effelsberg radio telescope in Germany, the Nançay radio telescope in France, the Sardinia Radio Telescope in Italy, the Westerbork Synthesis Radio Telescope in the Netherlands and the Lovell Telescope in the United Kingdom. © INAF

Provided by INAF

First Steps Towards Quantum Artificial Intelligence (Quantum)

New research by a team from Los Alamos National Laboratory shows that Quantum Convolutional Neural Networks (QCNNs) do not suffer from the barren plateau problem. According to Nicolò Parmiggiani (INAF), the two theorems demonstrated in the study published in Physical Review X are very important for guaranteeing the applicability of QCNNs and for the future of quantum neural networks.

Convolutional neural networks running on quantum computers have generated considerable hype for their potential to analyze quantum information better than classical computers can. However, applying these neural networks to large data sets has always been a problem, because of a kind of “Achilles heel” known as the barren plateau. But now new research led by a team at Los Alamos National Laboratory, published in Physical Review X, appears to have overcome it.

Media Inaf interviewed Nicolò Parmiggiani, researcher at the National Institute of Astrophysics, machine learning expert and winner of the first edition of the National Award for research on big data and artificial intelligence for his studies on machine learning technologies applied to the analysis of data produced by the AGILE space telescope.

Parmiggiani, can you explain to us what this new study is about and why it gives us hope for the analysis of complex systems?

“The article describes a quantum neural network (QNN) model based on the Convolutional Neural Network (CNN) architecture used in classical machine learning. CNNs are a particular neural network architecture in which (artificial) neurons are connected to each other in a pattern inspired by the structure of the animal visual cortex. These models have obtained successful results in many different fields, including astrophysics, where they are often used to analyze two-dimensional images and solve classification or object detection problems. As the AGILE team, for example, we have developed a CNN model to identify gamma-ray bursts within the count maps produced by the GRID instrument on board the satellite.”
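The basic building block of a CNN is the discrete 2D convolution, which slides a small kernel over an image and responds most strongly where the kernel matches a local pattern. The toy sketch below (our illustration, not the AGILE team's actual model) shows a matched 2×2 kernel lighting up on a localized excess of counts in a tiny synthetic map:

```python
def convolve2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core operation of a CNN layer."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# Toy "count map" with a 2x2 excess of counts, and a 2x2 summing kernel:
counts = [
    [0, 0, 0, 0],
    [0, 5, 6, 0],
    [0, 4, 7, 0],
    [0, 0, 0, 0],
]
kernel = [[1, 1], [1, 1]]
response = convolve2d(counts, kernel)
peak = max(max(row) for row in response)
print(peak)  # prints 22: the response peaks where the kernel covers the excess
```

In a real CNN the kernel weights are not fixed by hand but learned from training data, and many such layers are stacked with nonlinearities in between.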

So not only classical, but also quantum machine learning. How are the two different?

“In recent years, a lot of research has been done to combine the advantages of quantum computers with neural networks. Quantum neural networks are neural network models that rely on quantum mechanics to develop more efficient algorithms by exploiting the properties of quantum information. The article describes the architecture of Quantum Convolutional Neural Networks (QCNNs), which use the same principles as the classic CNN architecture to analyze a quantum state instead of classical inputs, such as images. The main purpose of QCNNs is to analyze very complex quantum systems that would not be tractable with classical machine learning. For example, a problem that can be analyzed with QNNs is the many-body problem.”

Nicolò Parmiggiani, researcher at the INAF OAS in Bologna © Inaf

Are there any limits to this approach?  

“Despite the enormous potential, quantum machine learning has some problems, including the so-called barren plateau, similar to the vanishing gradient problem that occurs in classical machine learning. During the training phase of very deep classical neural networks (those with many layers), the optimization gradient of the parameters that make up the model can become very small, preventing the parameters from being updated, so that the network is not trained correctly. A very similar situation can also be found in QNNs.”

Why is this problem pictured as a plateau?

“The plateau metaphor, used in quantum machine learning, arises from the fact that if the algorithm has to minimize a cost function (to optimize the parameters), it will try to identify the direction (the gradient) that leads to the valleys where the cost function is lower. However, it may happen that the algorithm is unable to find a direction to go, and therefore finds itself on the so-called plateau, where all directions seem the same or differ only very slightly. In this situation, the QNN often cannot be trained efficiently and is therefore unusable. This problem can get worse as the number of parameters increases, and therefore prevents exploiting the advantages of quantum computers to analyze large complex systems.”
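The classical analogue mentioned in the interview, the vanishing gradient, is easy to demonstrate numerically: backpropagating through a deep stack of sigmoid layers multiplies the gradient by the sigmoid's derivative (at most 0.25) at every layer, so it shrinks geometrically. A minimal sketch, assuming for simplicity the same pre-activation value at every layer:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Chain rule through a deep stack of sigmoid layers: each layer multiplies
# the incoming gradient by sigma'(z) <= 0.25, so the product shrinks fast.
grad = 1.0
z = 0.5  # assumed pre-activation value, identical at every layer
for layer in range(50):
    s = sigmoid(z)
    grad *= s * (1.0 - s)  # derivative of the sigmoid at z

print(f"{grad:.3e}")  # vanishingly small after 50 layers
```

By layer 50 the gradient is smaller than 1e-30, far below any useful update size; the barren plateau is the quantum counterpart, where gradients become exponentially small across essentially all parameter directions.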

How important is the study published by the Los Alamos National Laboratory team?

“The published study shows that QCNNs, which are QNNs built with a particular architecture, do not suffer from the barren plateau problem: they can be trained with a complexity that grows polynomially with the size of the system, which guarantees that the model can actually be trained. In fact, one of the main advantages of using QCNNs is being able to analyze, with polynomial complexity, systems that would have exponential complexity in classical machine learning, and therefore obtain a clear computational advantage. The two theorems demonstrated in this article are therefore very important to guarantee the applicability of QCNNs and for the future of quantum neural networks.”

Featured image: New evidence that it is possible to guarantee the training of certain quantum convolutional networks paves the way for quantum artificial intelligence to aid in the discovery of materials, as well as in many other applications. Credits: LANL

Provided by INAF

How Deep is the Great Red Spot? (Planetary Science)

Two studies just published in Science, both based on data obtained with the Juno probe, offer, through complementary methods, estimates of the depth of Jupiter’s iconic vortex. One of the two managed to “push” down to 500 km using gravity measurements. We interviewed its first author, Marzia Parisi, a Rome-born researcher now in California, at NASA’s Jet Propulsion Laboratory.

The Great Red Spot of Jupiter, the iconic vortex that has stirred the planet’s atmosphere for centuries, is the largest storm in the entire Solar System: it extends over 16 thousand km, well beyond the diameter of the Earth. But how deep is it? Two studies published today in Science, conducted using data collected, with two complementary techniques, by NASA’s Juno probe, in orbit around Jupiter since 2016, have now managed to provide a reliable estimate. And one of the two is led by an Italian researcher, Marzia Parisi.

Born and raised in Rome, “which I miss a lot,” she confesses to Media Inaf, Parisi earned her doctorate at Sapienza University with Professor Luciano Iess, principal investigator of Juno’s KaT radio-science instrument, before joining NASA’s Jet Propulsion Laboratory in California, where she is based today, thanks to her participation, since the time of her doctorate, in missions of ESA and of NASA itself. Her passions are languages and reading, as well as science, of course, and if you are wondering, well no: she is not related to the recent Nobel Prize winner Giorgio Parisi, “but I am very happy to share his surname.” Her result, complementary to that of the team of her colleague Scott Bolton, who used the microwave data collected with the MWR instrument to estimate the depth of the Great Red Spot, was obtained with gravity measurements, exploiting the Juno probe itself as a sort of “sensor” sensitive to the fluctuations in the planet’s gravitational field caused by the storm.

Parisi, how can gravity be measured with a space probe like Juno? Do you observe the way its orbit changes?

“From Earth we record the Doppler signal (that is, a frequency variation) of radio signals in the X band and Ka band, the latter supplied by the Italian Space Agency and produced by Thales Alenia Space. The Doppler signal is directly related to fluctuations in the probe’s velocity along the line of sight, the line that connects the probe to the antenna on Earth. These variations are, in turn, linked to the gravity of the planet through the equations of motion.”

Artist’s impression of Juno in orbit around Jupiter. Credits: Nasa / Jpl-Caltech

What accuracy can you achieve? I read of a noise of just 5-10 micrometers per second: it seems incredible…

“The uncertainties on the radial velocity are of the order of just 10 micrometers per second. Consider that the relative speed of the probe is of the order of tens of km per second. So these accuracies are quite phenomenal, and are partly due to the quality of the Italian instrument.”
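To get a feel for what such an accuracy demands, one can convert the quoted velocity uncertainty into the frequency shift the ground station must resolve, using the standard two-way Doppler relation Δf = 2fΔv/c (the ~32 GHz Ka-band value and the two-way geometry are our assumptions for illustration, not figures from the study):

```python
# Two-way Doppler: a radial-velocity change dv shifts the received frequency
# by df = 2 * f * dv / c, since the signal travels out and back.
C = 299_792_458.0   # speed of light, m/s
F_KA = 32.0e9       # assumed Ka-band downlink frequency, ~32 GHz

dv = 10e-6          # 10 micrometers per second, the quoted uncertainty
df = 2 * F_KA * dv / C

print(f"{df:.1e} Hz")  # a millihertz-level shift on a 32 GHz carrier
```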

What kind of gravitational effect does the Great Red Spot exert on Juno? Does it attract more or less than the other areas of the Jovian atmosphere?

“It’s hard to say whether the Great Red Spot attracts more or less when the spacecraft flies over it, for reasons of geometry. What I can say is that we see an excess of mass near its surface, and a mass deficit at depth (hundreds of km below the Great Red Spot). This is due to the strong winds and the conditions of the atmosphere surrounding the vortex, since pressure and density tend to increase with depth.”

The Great Red Spot in detail, taken by Juno during the 11 July 2017 flyover. Credits: Nasa / Swri / Msss / Gerald Eichstädt / Seán Doran

What are the pros and cons of your Great Red Spot depth measurement method versus Scott Bolton and colleagues’ microwave-based one?

“I would say that the two techniques cannot be compared, as they measure fundamentally different and complementary things. Theirs is a direct measurement of the ‘brightness temperature’ on six channels, one of which can see down to a depth of about 350 km. In contrast, with gravity we can go much deeper with our measurements: down to 500 km, and even beyond.”

In the end you find that the Great Red Spot is between 200 and 500 km deep. Is that a lot? Is it a little?

“It depends on the point of view. Reading various articles in the past, I got the impression that most scientists expected the Great Red Spot to be very thin (we are talking about tens of km, not hundreds), so, from this point of view, 500 km represents a very deep vortex. On the other hand, if one compares its thickness with the radius of Jupiter (70 thousand km), we are clearly talking about a shallow phenomenon, limited to the upper region of the atmosphere.”

But how did you get the idea? And did you expect to be able to obtain information about the Great Red Spot using gravity measurements?

“The idea was born during brainstorming sessions with my postdoctoral supervisor in Israel, Professor Yohai Kaspi. And yes, I always had faith that we would be able to isolate the signal of the Great Red Spot, even though it is very elusive.”

Provided by INAF