Gamma-ray Bursts: Duration Isn’t Everything (Cosmology)

The discovery of supernova emission associated with the short gamma-ray burst GRB 200826A seems to contradict current scenarios for the origin of these extreme phenomena. But how much can we really rely on observed duration to identify the progenitor of a GRB? We host an editorial on the subject by Lorenzo Amati, research manager at INAF in Bologna and author of a “News & Views” article published today in the same issue of Nature Astronomy that reports the discovery.

Despite enormous observational efforts, it took a good thirty years from their discovery in the 1960s to unravel the cosmological origin of GRBs: flashes of X-ray/gamma-ray photons detected about once a day from random directions, so intense that they outshine any other high-energy source in the sky. It then took another twenty years of effort by many space- and ground-based telescopes, as well as intense theoretical work and highly sophisticated numerical simulations, to build and consolidate the current scenario for their progenitors: the collapse of the core of peculiar, very massive stars for the longer bursts, and the coalescence of a binary system formed by two neutron stars, or by a neutron star and a black hole, for the shorter ones.

In this context, the discovery, reported in today’s issue of Nature Astronomy by Tomas Ahumada, Bin-Bin Zhang and their collaborators, of the association of a short GRB with a stellar explosion is apparently unsettling. So do we need to reconsider our main paradigm for GRBs? To understand this, let’s take a few steps back.

The growing evidence for the existence of two main classes of GRBs – the short ones (from a few tens of milliseconds up to 1–2 seconds) and the long ones (typically from a few seconds to a few minutes) – based on the bimodal distribution of the durations of these events, was one of the first substantial advances in our understanding of these exceptional but elusive phenomena. A further very important step forward occurred towards the end of the 1990s, when the first systematic localizations with a precision of a few arcminutes made it possible to discover that long flashes are followed by a weaker emission, called the afterglow, which decays roughly as a power law and is observable from X-rays to radio waves. Localizations to within a few arcseconds and spectroscopy of the afterglow emission in the optical and near-infrared by large ground-based telescopes and the Hubble Space Telescope (HST) led to the confirmation that these events come from cosmological distances (at least up to redshifts of about 9–10, i.e. a few hundred million years after the Big Bang), as well as to the first identifications and characterizations of their host galaxies, up to the direct detection of a peculiar Type Ib/c supernova (SN) associated with GRB 980425.
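The conversion between redshift and cosmic age quoted above can be checked with the closed-form age–redshift relation for a flat matter-plus-Λ universe. This is an illustrative sketch using round-number cosmological parameters (H0 = 70 km/s/Mpc, Ωm = 0.3), not the values adopted in the studies discussed here:

```python
import math

def age_at_z(z, h0=70.0, omega_m=0.3):
    """Cosmic age in Gyr at redshift z, for a flat matter + Lambda universe.

    Uses the exact closed-form solution for flat LCDM. The parameter
    values are illustrative round numbers.
    """
    omega_l = 1.0 - omega_m
    # Hubble time 1/H0 in Gyr: 977.8 = (1 Mpc in km) / (1 Gyr in s)
    hubble_time_gyr = 977.8 / h0
    x = math.sqrt(omega_l / omega_m) * (1.0 + z) ** -1.5
    return (2.0 / 3.0) * hubble_time_gyr / math.sqrt(omega_l) * math.asinh(x)

print(round(age_at_z(9), 2))  # -> 0.54 Gyr: "a few hundred million years"
print(round(age_at_z(0), 1))  # -> 13.5 Gyr: the age of the universe today
```

At z = 9 the universe is roughly half a billion years old for these parameters, consistent with the "few hundred million years after the Big Bang" quoted in the text.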

This impressive wealth of discoveries provided strong support for the hypothesis that long GRBs are produced by the collapse of the core of peculiar massive stars, a scenario already postulated in the 1970s and able to explain both the long duration and the very high radiated energy (up to at least 10^53 erg, or about as much as a star like the Sun radiates in 10 billion years). This scenario has been further strengthened in the last twenty years by the detection, in nearby events, of optical-infrared emission with spectral characteristics similar to those of SN 1998bw or, more generally, typical of Type Ib/c supernovae, superimposed on the typical afterglow light curves. Other evidence supporting this origin for long GRBs includes their typical location in star-forming regions of their host galaxies, evidence of a metal-enriched circum-burst environment, and their redshift distribution, which roughly follows that of the star formation rate in the universe.

On the other hand, since the early 2000s, we have begun to detect and characterize the afterglow emission of short GRBs too, learning that the redshift distribution of these events extends to much lower values than that of the long ones and shows no relationship with the evolution of the star formation rate in the universe, that their typical released energy is about two orders of magnitude lower, and that they are often found in the outermost regions of their host galaxies, with no association with star-forming regions. Together with their duration, these properties have increasingly supported the hypothesis that short GRBs originate from the merger of binary systems consisting of two neutron stars (NS–NS) or a neutron star and a black hole (NS–BH). This scenario was finally confirmed in a direct and spectacular way by the historic detection of a short gamma-ray burst (GRB 170817A) associated with the first gravitational-wave signal ever detected (by LIGO/Virgo) from the coalescence of a binary neutron star system (GW 170817).

The observations and analyses reported today by Ahumada et al. and Zhang et al. in Nature Astronomy seem, however, to undermine this standard scenario for the progenitors of GRBs. For the first time, in fact, an excess over the normal afterglow emission was detected in the optical and near-infrared for a short gamma-ray burst (GRB 200826A), an excess showing the photometric and temporal evolution typical of the supernovae associated with long GRBs. At first glance, this result clearly questions our current understanding of the GRB phenomenon. However, a more global view of its properties, combined with several additional pieces of evidence that have emerged in recent years, shows that this event, although peculiar and of great interest, may not be so “special”.

Lorenzo Amati, research manager at INAF OAS Bologna and author of the “News & Views” article on gamma-ray bursts published today in Nature Astronomy. © INAF

First of all, the duration of about 1 second, while making GRB 200826A by far the shortest gamma-ray burst with evidence of a supernova association, falls within the range where the duration distributions of short and long GRBs overlap, potentially making this an extreme event in the short-duration tail of the long-GRB duration distribution. Its position in the spectral hardness vs. duration plane further supports a classification of this event as a “short” GRB, but the probability that GRB 200826A actually belongs to the “long” class is not negligible, as reported by Ahumada et al. and Zhang et al. Furthermore, theoretical considerations and numerical simulations show that the duration of a GRB produced by the collapse of the core of a very massive star – which depends both on the time during which the “central engine” (for example, an accreting black hole) is active and on the time taken by the relativistic jet that ultimately produces the flash to emerge from the stellar envelope – can be even shorter than 0.5 seconds.

This is why, in reality, duration is increasingly considered only one of the indicators of the origin of a GRB. So much so that, more and more frequently, instead of “long” or “short” GRBs, we speak of “Type I” events, those produced by the coalescence of NS–NS or NS–BH binary systems, and “Type II” events, those produced by the core collapse of very massive stars. In addition to duration, spectral hardness, location in the host galaxy and properties of the host galaxy, the indicators used to discriminate between these two classes of events include the time-lag, i.e. the delay of the emission peak as a function of the energy band, and the relationship between the photon energy at which the peak of the GRB energy spectrum occurs (peak energy, or Ep,i) and the equivalent isotropic radiated energy (Eiso). The analysis of these indicators for GRB 200826A shows that, despite its very short duration, it actually belongs to the Type II class. Thus, the association of this event with a Type Ib/c supernova is no longer surprising; instead, it provides strong confirmation of the effectiveness of the new paradigm in identifying the progenitor of a GRB. In this respect, GRB 200826A is the opposite case to another famous and challenging event, GRB 060614, which was a technically “long” GRB (lasting several tens of seconds) for which there was strong evidence of no association with a supernova. Even in that case it was thanks to indicators other than duration, for example the time-lag and the placement in the Ep,i vs. Eiso plane, that it was possible to overcome the apparent paradox and classify the event as a Type I gamma-ray burst.
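A toy version of this multi-indicator classification can be sketched in a few lines. The Ep,i–Eiso (“Amati”) relation is approximated here as Ep,i ≈ 100 keV × (Eiso/10^52 erg)^0.5; the normalisation, slope and tolerance band are rough assumed values for illustration, not the published fits, and a real analysis would combine many more indicators:

```python
def classify_grb(t90_s, ep_i_kev, eiso_erg):
    """Toy Type I / Type II discriminator (illustrative only).

    An event consistent, within a factor of a few, with an assumed
    Amati-like track Ep,i ~ 100 keV * (Eiso / 1e52 erg)**0.5 is flagged
    as Type II (collapsar-like); otherwise duration is used as a
    fallback indicator.
    """
    ep_expected_kev = 100.0 * (eiso_erg / 1e52) ** 0.5
    on_amati_track = 0.25 < ep_i_kev / ep_expected_kev < 4.0
    if on_amati_track:
        return "Type II (collapsar-like)"
    if t90_s < 2.0:
        return "Type I (merger-like)"
    return "ambiguous"

# GRB 200826A-like numbers (illustrative, not the measured values):
# short duration, yet consistent with the Type II track
print(classify_grb(t90_s=1.0, ep_i_kev=90.0, eiso_erg=7e51))
```

The point of the sketch is that a ~1 s burst whose Ep,i and Eiso sit on the long-GRB track still comes out as Type II, exactly the situation described for GRB 200826A.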

The extended sensitivity and energy bandwidth of next-generation GRB detectors (e.g. those aboard the French-Chinese SVOM mission, which will be launched next year), combined with the improved follow-up capabilities provided by future observatories (for example, ELT or TMT in the optical/infrared, SKA in the radio, Athena in X-rays, CTA at very high energies), will allow us to move definitively beyond the duration-based classification of GRBs and probably to identify a richer variety of progenitors: extremely magnetized neutron stars (magnetars) and a possible connection with fast radio burst (FRB) sources for short GRBs, and different types of progenitor stars (possibly including Population III stars) and explosion mechanisms (core collapse, pair instability) for long ones. A better understanding of the subclasses and progenitors of GRBs is also crucial given the growing relevance of Type II events for cosmology (the study of the early universe and of cosmological parameters) and of Type I events for multi-messenger astrophysics (as demonstrated by the extraordinary case of GW 170817 / GRB 170817A).

Featured image: At the centre, the bimodal distribution of GRB durations, flanked by images of two typical host galaxies of “short” and “long” GRBs and the location of these events within them. The two panels at the top show the different placement of “short” and “long” GRBs in the observed peak energy (Ep) vs. duration (T90) and intrinsic peak energy (Ep,i) vs. radiated energy (Eiso) planes. The two panels below illustrate the two different scenarios for the origin of “short” and “long” (or, better, “Type I” and “Type II”) GRBs, respectively. Credits: Nature Astronomy, Volume 5, Issue 7, by Springer Nature (republished with kind permission of the publisher) © INAF


Provided by INAF

Hierarchical Mergers Of Black Holes (Cosmology)

While most of the merger events between compact objects detected by LIGO and Virgo are produced by so-called “first generation” black holes formed by the collapse of stars, others could instead be second (or third) generation, in which one of the merging black holes is itself the product of one (or more) previous black-hole mergers. An article published today in Nature Astronomy reviews the theoretical results, the models and the detected gravitational-wave events associated with hierarchical mergers of stellar-mass black holes.

Generations in comparison. Generations of black holes, merging into a single object and producing space-time “vibrations” detected on Earth thanks to the LIGO and Virgo interferometers. And if the first generation comes from two black holes formed by the collapse of massive stars at the end of their lives, it is possible that history repeats itself, and the resulting black hole merges again with a similar object, giving rise to a second-generation black hole. To identify the exact position of these extreme stellar objects in the black-hole family tree, it is necessary to observe the imprint that mergers leave on gravitational waves. And that’s not all, because such black holes may in fact have already been observed. This is what we read in a review article published today in Nature Astronomy by Davide Gerosa, a young astrophysicist at the University of Birmingham who is about to return to Italy with a one-way ticket signed ERC, which will see him take up the role of associate professor at the University of Milano-Bicocca. Gerosa works on gravitational-wave astronomy, both studying the dynamics of the sources (the binary pairs of black holes) from a theoretical point of view and statistically analysing the data.

Your review talks about second- or third-generation black holes. Are there statistical estimates of how many such objects may exist, of the probability of forming them and, among these, of how many are actually observable?

«To date, LIGO and Virgo have observed about 50 events. One of these (GW 190521) has characteristics (in particular the mass) typical of a second-generation event. Based on the observed sample alone, a rough estimate of the incidence rate is therefore 1/50. However, we must take into account the observational bias – events involving high masses are easier to observe – and how different astrophysical environments can assemble successive generations of mergers. It is a very open problem, and I hope this comes across in our review».

Are these newly formed objects, to be looked for in the nearby universe?

“Not necessarily, but with current equipment we are sensitive only to events that occur at redshifts less than 1.”

What differentiates them from black holes that come from a first generation merger?

«The mass involved, first of all, which is higher for second- or third-generation events. This is particularly interesting because stellar evolution models predict an upper limit on the mass of black holes that form from the collapse of massive stars, of about 50 times the mass of the Sun. If LIGO or Virgo measure an event with a higher mass, a different formation scenario must be involved. We discussed this idea in a 2017 article (also presented in another independent article released the same day), and the 2019 event seems to have confirmed it. The mass alone, however, can also have other origins».

What else, then?

“Producing black holes of a generation beyond the first also has a very characteristic effect on the spin. A relativistic phenomenon called orbital hang-up regulates the number of orbits performed by the binary as the spin varies: hierarchical black holes typically have spins around the characteristic value of 0.7 (in dimensionless units in which the spin ranges between 0 and 1).”

Being a signal generated by massive objects, is it easier to detect with current detectors?

“Very much so. In fact, we have probably already seen one, GW 190521. The signal is easier to identify for more massive objects, although care must be taken: if they are too massive, the signal falls outside the band sampled by the detectors, even if such objects are intrinsically few.”

Are there any observed cases that have dubious characteristics and could belong to this category?

«In the case of GW 190521, as we have said, the main interpretation is that it is a second-generation merger. But it is not the only one. Another second-generation event, albeit a little more uncertain, could be GW 190412».

How come?

“Because the mass involved is lower, and in this case it is the mass ratio and the spin that suggest a mixed merger, in which one black hole is second generation and the other first generation. These two quantities – mass ratio and spin – are, however, ‘weaker’ indicators: forming such an event with ‘normal’ theoretical models is rare but not impossible.”

How can we definitively confirm that an event derives from a second- or third-generation merger?

“Certainty in science builds up slowly. With more events (we expect thousands within a few years) it will become clear whether a subpopulation of next-generation black holes is needed to explain the data.”

Featured image: Davide Gerosa, astrophysicist at the University of Birmingham and first author of the review article on generations of black holes © INAF


Provided by INAF

Quartet Of Galaxies For the Vst (Cosmology)

The European Southern Observatory’s image of the week comes from a study led by Rossella Ragusa of INAF in Naples, published in Astronomy & Astrophysics. It shows the galaxy group HCG 86, captured by the VLT Survey Telescope, an instrument also “made in Naples”.

Space can be a lonely place. But not so for the quartet of galaxies that make up HCG 86, seen in the image observed with ESO’s VLT Survey Telescope (VST). The four galaxies, located approximately 270 million light-years from Earth in the constellation Sagittarius, appear from Earth arranged in a triangular shape, with three of them in a straight line and one to the south (the bright objects to the right of the group are not part of the quartet).

The acronym ‘HCG’ stands for Hickson Compact Group, and is used to describe groups composed of a minimum of four up to a maximum of ten galaxies physically very close to each other. Because of their compactness, such groups are ideal environments for studying galactic interactions, which can sometimes lead to the merger of their members.

“With VST we are able to investigate very faint structures in the outskirts of galaxies, which are the remains of past gravitational interactions and merger events,” explains Rossella Ragusa, researcher at INAF in Naples and at the Federico II University, who leads the team that – as part of the VST Early-type Galaxy Survey (Vegas) program – obtained the image of HCG 86. “Indeed, as in the case of HCG 86, by revealing the distribution of the diffuse light within the group, that is, its faintest component, and by studying its physical properties, we are able to add fundamental elements to our knowledge of the formation history and evolution of cosmic structures.”

Rossella Ragusa, researcher at the Capodimonte Astronomical Observatory of INAF and at the University of Naples Federico II, first author of the study on the galaxy group HCG 86 © INAF

«These images are among the deepest obtained by the Vegas survey and were reduced with the AstroWISE pipeline, developed for the data reduction of the VST telescope. The pipeline has also proved very effective for the detection and study of low-surface-brightness structures, such as those involved in the work on the HCG 86 group», adds Marilena Spavone of INAF in Naples, responsible for the data reduction. Specifically, by mapping the distribution of light in and around the galaxies of HCG 86, the team concluded that these faint structures are the remnants of satellite galaxies devoured by the group about seven billion years ago.

Located at ESO’s Paranal Observatory in Chile, the VST is one of the largest survey telescopes in the world, dedicated to mapping the sky at visible wavelengths. 2021 marks its first decade of activity, a period during which it achieved significant scientific results in the search for extrasolar planets and in the study not only of our galaxy but of the wider universe.

Featured image: The galaxy group HCG 86 captured by the VST. Credits: ESO/Ragusa, Spavone et al.


Provided by INAF

Researchers Reveal For The First Time That Mild Kidney Disease Increases Cancer Risk (Medicine)

Using a more sensitive test than is commonly used in the NHS, researchers have been able to show, for the first time, that even mild kidney disease is associated with an increased risk of developing and dying from cancer.

The new research, led by the University of Glasgow and published today in the journal EClinicalMedicine, shows that the more sensitive ‘cystatin C’ test was able to identify a heightened risk of developing and dying from cancer in people with chronic kidney disease.

Using data from the UK Biobank alongside the simple blood test, researchers were able to demonstrate that mild kidney disease is associated with a 4% increased risk of developing cancer and a 15% increased risk of dying from cancer. In people with more advanced kidney disease, researchers found a 19% increased risk of developing cancer and a 48% increased risk of dying from cancer.

This heightened risk of developing and dying from cancer was not identified when kidney function was estimated using serum creatinine, the test most commonly used in healthcare settings to estimate a patient’s kidney function.

Chronic kidney disease, characterised by gradual loss of kidney function over time, is common, affecting around 10% of the population. Cancer is already known to be more common in people with kidney failure, especially in people requiring dialysis or a kidney transplant. Although kidney failure is relatively uncommon, mild kidney disease may be present in one third of the population, although it is usually asymptomatic, not routinely diagnosed and therefore monitored infrequently.

Chronic kidney disease is also associated with premature cardiovascular disease and mortality. Using cystatin C testing, researchers have already been able to show that mild kidney disease is associated with a 20–30% increase in the risk of cardiovascular disease and early death, and this heightened risk is more pronounced in people with more advanced kidney disease.

Dr Jennifer Lees said: “Our results show that mild kidney disease is clinically important in predicting cancer risk, as well as the risk of cardiovascular disease and early death. However, identifying this excess risk requires measurement of more sensitive markers of kidney dysfunction such as cystatin C. We were not able to see the same risk when using the less sensitive, but more routinely used serum creatinine test.”

Chronic kidney disease is not currently considered in cancer risk prediction tools used by GPs to guide referrals for investigation of potential cancer symptoms. Researchers now believe that if chronic kidney disease was recognised as an important risk factor for cancer, it may be that milder symptoms in patients with this condition would trigger earlier referrals, prompting earlier treatment and better outcomes/survival.

Dr Lees said: “Our research suggests that greater uptake of cystatin C testing could be used to improve patient outcomes by identifying cancer risks earlier, thereby increasing patients’ quality of life and chance of survival.

“Although cystatin C testing is available in most developed countries, it is more expensive than creatinine testing in many laboratories. However, we believe more widespread use could drive down the costs of testing and aid further research into identifying and addressing the factors responsible for worse cancer outcomes in people with kidney disease.”

The study, ‘Kidney function and cancer risk: an analysis using creatinine and cystatin C in a cohort study’ is published in EClinicalMedicine. The work was funded by the Scottish Government Chief Scientist Office, ANID Becas Chile, the Medical Research Council, the British Medical Association and the British Heart Foundation.

Provided by University of Glasgow

New Imaging System Brings Brains Into Sharper Focus (Neuroscience)

One of the greatest challenges in science is the study of the brain’s anatomy and cellular architecture.
Accurately visualising the brain’s complex structure at high resolutions is critically important for improving our understanding of the functions of the central nervous system.
A promising new technique, developed by scientists in Italy, the UK and Germany, is now bringing the microscopic details of the brain into sharper focus even over macroscopic volumes.
In a paper published today in the journal Nature Methods, the researchers describe how their system, called Rapid Autofocus via Pupil-split Image phase Detection (or RAPID), represents a breakthrough in the imaging of mouse brains.
This new technique could have significant repercussions in neuroscience, making a quantitative analysis of the brain-wide architecture possible at the subcellular level.

Dr Ludovico Silvestri, first author of the study and researcher in Physics of Matter at University of Florence in Italy, said: “The lack of instruments capable of analysing large volumes at high resolution has limited our studies of brain-wide structure to a rough, low-resolution level.

“The currently employed method of light-sheet microscopy, combined with chemical protocols capable of rendering biological tissues transparent, fails to maintain high resolution in samples larger than a few hundred microns.”

Dr Leonardo Sacconi, of the National Institute of Optics of the National Research Council (CNR-INO), a co-author of the paper, added:  “Beyond these dimensions, the biological tissue begins to behave like a lens, disrupting the alignment of the microscope and consequently making the images blurry.”

With RAPID, the researchers propose a new auto-focusing technology compatible with light-sheet microscopy that is capable of automatically correcting the misalignments introduced by the sample itself in real time. In cubic centimetre-sized, cleared samples, such as intact mouse brains, the autofocussing removes image degradation to enable enhanced quantitative analyses.

The new method is inspired by the optical autofocus systems found in reflex cameras, where a set of prisms and lenses transforms the blur of the image into a lateral movement. This allows the alignment of the microscope to be stabilized in real time, producing sharper, more richly-detailed images. 

An image of a mouse brain created using the RAPID imaging system

Dr Caroline Müllenbroich, a Marie Skłodowska Curie fellow and lecturer at the University of Glasgow’s School of Physics and Astronomy is a co-author of the paper. Dr Müllenbroich contributed to the design and implementation of the microscope and autofocusing system.

Dr Müllenbroich said: “While we originally invented RAPID for light-sheet microscopy, this autofocusing technology is actually suitable for all wide-field microscopy techniques. It is very versatile and sample agnostic with multiple applications beyond neuroscience.”

The high resolution guaranteed by RAPID – which is also the subject of an international patent owned by Unifi, the European Laboratory of Nonlinear Spectroscopy (LENS) and CNR – has allowed researchers to study, on a whole-brain scale, problems previously analysed only in small local areas.

For example, the spatial distribution of a particular type of neurons – which express somatostatin – has been investigated, showing how these cells tend to organize themselves in spatial clusters, which are suspected to make their inhibitory action more effective.

Another application concerns microglia, a set of cells with different functions (from the response to pathogens to the regulation of neuronal plasticity), whose shape changes according to the role they play. The analysis of microglia performed with RAPID revealed significant differences between various brain regions, paving the way for new studies on the role of this cell population.

The research was led by the University of Florence’s departments of Physics and Astronomy, Information Engineering and Biology, and by CNR. RAPID was developed at LENS by researchers from the Biophotonics Area, headed by Francesco Pavone, Professor of Physics of Matter at the University of Florence.

In addition to the University of Glasgow, the Italian researchers also collaborated with scholars from the European Molecular Biology Laboratory in Heidelberg (Germany). The study was carried out within the European Flagship Human Brain Project, of which LENS and CNR are partners.

The team’s paper, titled ‘Universal autofocus for quantitative volumetric microscopy of whole mouse brains’, is published in Nature Methods.

Provided by University of Glasgow

Needle In A Haystack: Planetary Nebulae in Distant Galaxies (Cosmology)

Using data from the MUSE instrument, researchers at the Leibniz Institute for Astrophysics Potsdam (AIP) succeeded in detecting extremely faint planetary nebulae in distant galaxies. The method used, a filter algorithm in image data processing, opens up new possibilities for cosmic distance measurement – and thus also for determining the Hubble constant.

Planetary nebulae are known in the neighbourhood of the Sun as colourful objects that appear at the end of a star’s life as it evolves from the red giant to white dwarf stage: when the star has used up its fuel for nuclear fusion, it blows off its gas envelope into interstellar space, contracts, becomes extremely hot, and excites the expanding gas envelope to glow. Unlike the continuous spectrum of the star, the ions of certain elements in this gas envelope, such as hydrogen, oxygen, helium and neon, emit light only at certain wavelengths. Special optical filters tuned to these wavelengths can make the faint nebulae visible. The closest object of this kind in our Milky Way is the Helix Nebula, 650 light years away.

The planetary nebula NGC 7293 (the “Helix Nebula”), an object in the neighbourhood of the Sun. Credit: NASA, NOAO, ESA, the Hubble Helix Nebula Team, M. Meixner (STScI), and T.A. Rector (NRAO)

As the distance of a planetary nebula increases, its apparent diameter in an image shrinks, and its integrated apparent brightness decreases with the square of the distance. In our neighbouring galaxy, the Andromeda Galaxy, at a distance almost 4000 times greater, the Helix Nebula would only be visible as a dot, and its apparent brightness would be almost 15 million times fainter. With modern large telescopes and long exposure times, such objects can nevertheless be imaged and measured using optical filters or imaging spectroscopy. Martin Roth, first author of the new study and head of the innoFSPEC department at AIP, says: “Using the PMAS instrument developed at AIP, we succeeded in doing this for the first time with integral field spectroscopy for a handful of planetary nebulae in the Andromeda Galaxy in 2001–2002 on the 3.5 m telescope of the Calar Alto Observatory. However, the relatively small PMAS field of view did not yet allow us to investigate a larger sample of objects.”

It took a good 20 years to develop these first experiments further, using a more powerful instrument with a more than 50 times larger field of view on a much larger telescope. MUSE at the Very Large Telescope in Chile was developed primarily for the discovery of extremely faint objects at the edge of the currently observable universe – and has produced spectacular results for this purpose since its first observations. It is precisely this property that also comes into play in the detection of extremely faint planetary nebulae in distant galaxies.

The galaxy NGC 474 is a particularly fine example of a galaxy that, through collisions with other, smaller galaxies, has formed a conspicuous ring structure from the stars scattered by gravitational effects. It lies roughly 110 million light years away, about 170,000 times further than the Helix Nebula. The apparent brightness of a planetary nebula in this galaxy is therefore almost 30 billion times lower than that of the Helix Nebula, placing it in the range of the cosmologically interesting galaxies for which the MUSE instrument was designed.
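The dimming factors quoted in the last two paragraphs follow directly from the inverse-square law; a one-line sketch makes the arithmetic explicit (the distance ratios 3900 and 170,000 are the approximate values given in the text):

```python
def dimming_factor(distance_ratio):
    """Apparent-brightness ratio implied by the inverse-square law:
    moving a source n times farther away makes it n**2 times fainter."""
    return distance_ratio ** 2

# Helix Nebula moved to the Andromeda Galaxy, ~3900x farther:
print(f"{dimming_factor(3900):,}")      # 15,210,000 -> "almost 15 million"

# Helix Nebula moved to NGC 474, ~170,000x farther:
print(f"{dimming_factor(170_000):,}")   # 28,900,000,000 -> "almost 30 billion"
```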

A team of researchers at the AIP, together with colleagues from the USA, has developed a method for using MUSE to isolate and precisely measure the extremely faint signals of planetary nebulae in distant galaxies with high sensitivity. A particularly effective filter algorithm in image data processing plays an important role here. For the ring galaxy NGC 474, ESO archive data were available, based on two very deep MUSE exposures with 5 hours of observation time each. The result of the data processing: after applying the filter algorithm, a total of 15 extremely faint planetary nebulae became visible.

MUSE image data in the two fields marked in the image of the ring structure of NGC 474 above. Left: image in the continuum, with the band of unresolved stars and the globular clusters marked by circles. Right: filtered image in the redshifted oxygen emission line, from which the planetary nebulae emerge as point sources out of the noise. The artefacts created by instrumental effects have completely disappeared. Credit: AIP/M. Roth

This highly sensitive procedure opens up a new method of distance measurement that can contribute to resolving the currently debated discrepancy in determinations of the Hubble constant. Planetary nebulae have the property that, physically, a certain maximum luminosity cannot be exceeded. The distribution function of the luminosities of a sample in a galaxy, i.e. the planetary nebula luminosity function (PNLF), therefore cuts off at the bright end. This cutoff acts as a standard candle, from which a distance can be derived by statistical methods. The PNLF method was developed in 1989 by team members George Jacoby (NSF’s NOIRLab) and Robin Ciardullo (Penn State University). It has been successfully applied to more than 50 galaxies over the past 30 years, but was limited by the narrow-band filter measurements used until now: galaxies more distant than the Virgo or Fornax clusters were beyond its reach. The study, now published in the Astrophysical Journal, shows that MUSE can more than double that range, allowing an independent measurement of the Hubble constant.
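As a sketch of how the bright-end cutoff yields a distance: the empirical PNLF of Ciardullo et al. (1989) rises exponentially toward faint magnitudes and truncates at an absolute [O III] magnitude of about M* = -4.47; comparing M* with the apparent magnitude at which the observed distribution cuts off gives the distance modulus. The observed cutoff value below is hypothetical, for illustration only:

```python
import numpy as np

# Canonical bright-end cutoff of the [O III] 5007 PNLF (Ciardullo et al. 1989).
M_STAR = -4.47

def pnlf(M, m_star=M_STAR):
    """Empirical PN luminosity function: rises exponentially toward faint
    magnitudes and truncates sharply at the bright-end cutoff M*."""
    n = np.exp(0.307 * M) * (1.0 - np.exp(3.0 * (m_star - M)))
    return np.where(M >= m_star, n, 0.0)  # no PNe brighter than the cutoff

# If the observed distribution of apparent magnitudes cuts off at m_cut,
# the distance modulus is mu = m_cut - M*, hence d = 10**((mu + 5) / 5) pc.
m_cut = 26.3                         # hypothetical observed cutoff, illustration only
mu = m_cut - M_STAR                  # distance modulus
d_mpc = 10 ** ((mu + 5) / 5) / 1e6   # distance in megaparsecs
```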

Featured image: The ring galaxy NGC 474 at a distance of about 110 million light years. The ring structure was formed by merging processes of colliding galaxies. Credit: DES/DOE/Fermilab/NCSA & CTIO/NOIRLab/NSF/AURA

Further information

Original publication

Precision Cosmology With Improved PNLF Distances Using VLT-MUSE I. Methodology and Tests. Martin M. Roth, George H. Jacoby, Robin Ciardullo, Brian D. Davis, Owen Chase, Peter M. Weilbacher. The Astrophysical Journal, 22 July 2021


Provided by AIP

Mini Radar Could Scan The Moon For Water And Habitable Tunnels (Planetary Science)

A miniature device that scans deep below ground is being developed to identify ice deposits and hollow lava tubes on the Moon for possible human settlement.

The prototype device, known as MAPrad, is just one tenth the size of existing ground penetrating radar systems, yet can see almost twice as deeply below ground – more than 100 metres down – to identify minerals, ice deposits, or voids such as lava tubes.

Local start-up CD3D PTY Limited has now received a grant from the Australian Space Agency’s Moon to Mars initiative to further develop the prototype with RMIT University, including testing it by mapping one of Earth’s largest accessible systems of lava tubes.  

CD3D CEO and RMIT Honorary Professor, James Macnae, said their unique geophysical sensor had several advantages over existing technology that made it more suitable for space missions.

“MAPrad is smaller, lighter and uses no more power than existing ground penetrating radar devices, yet can see up to hundreds of metres below the surface, which is around twice as deep as existing technology,” Macnae said.

“It was able to achieve this improved performance, even after being shrunk to a hand-held size, because it operates in a different frequency range: using the magnetic rather than the electric component of electromagnetic waves.”

Lab technician holds MAPrad prototype in the Micro Nano Research Facility clean rooms at RMIT. © RMIT

The magnetic waves emitted and detected by the device measure conductivity and electromagnetic wave reflections to identify what lies underground. Voids and water-ice provide strong reflections, while various metal deposits have high conductivity at unique levels.
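The strong reflections from voids and ice described above arise from the contrast in relative permittivity between materials. A rough sketch using the normal-incidence Fresnel reflection coefficient; the permittivity values are typical textbook assumptions for illustration, not MAPrad specifications:

```python
import math

def reflection_coeff(eps1, eps2):
    """Normal-incidence reflection coefficient between two non-magnetic
    media with relative permittivities eps1 and eps2."""
    n1, n2 = math.sqrt(eps1), math.sqrt(eps2)
    return (n1 - n2) / (n1 + n2)

EPS_REGOLITH = 4.0   # assumed dry lunar regolith
EPS_ICE = 3.15       # water ice
EPS_VOID = 1.0       # empty lava tube (vacuum)

r_void = reflection_coeff(EPS_REGOLITH, EPS_VOID)  # strong: 1/3 of the amplitude reflected
r_ice = reflection_coeff(EPS_REGOLITH, EPS_ICE)    # weaker, but still detectable
```

The large regolith-to-void contrast is why hollow lava tubes stand out so clearly in radargrams compared with more subtle ice deposits.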

From mining to Moon mission

The specialised radar system was developed by RMIT University and Canadian company International Groundradar Consulting in a collaborative research project funded through the AMIRA Global network.

Successful field tests have since been carried out in Australia and Canada using a backpacked prototype for mining and mineral prospecting.

“MAPrad’s initial development was specifically focussed on facilitating drone surveys for mining applications, but it has obvious applications in space where size and weight are at a premium, so that’s where we’re now focusing our efforts,” Macnae said.

To further prove the technology’s usefulness for a range of Moon missions, the researchers will be seeking permission to scan one of the world’s largest accessible systems of lava tubes at the spectacular Undara caves in Far North Queensland, Australia.

Undara is an Aboriginal word meaning ‘long way’, in reference to the unusually long system of lava tubes that are located within the park. The tubes have diameters of up to 20 metres and some are several hundred metres in length.

Inside one of the vast lava tubes at Undara Volcanic National Park, in Far North Queensland. Credit: Getty Images

RMIT University engineer, Dr Graham Dorrington, said they would traverse the park above the caves to detect the voids below, some of which have not been completely mapped yet. 

“We know the dimensions of the main tubes, so comparison with surface scans to check accuracy should be possible,” he said.

“Undara will be an excellent testing site for us since it’s the closest thing on Earth to the lava tubes thought to exist on the Moon and Mars.” 

The search for water and shelter in space

Massive tunnels left by ancient volcanic lava flows may exist at shallow depths below the surface of the Moon and Mars. 

It’s thought these enclosures could be suitable for the construction of space colonies as they provide protection from the Moon’s frequent meteorite impacts, high-energy ultra-violet radiation and energetic particles, not to mention extreme temperatures.

On the Moon’s surface, for example, daytime temperatures are often well above 100°C, dropping dramatically to below -150°C at night, while the insulated tunnels could provide a stable environment of around -22°C.

The team hopes to qualify MAPrad for space use so it can help uncover the resources available on the Moon (pictured) and Mars to support life. Credit: NASA.

But of more immediate concern is mapping ice-water deposits on the Moon and getting a clearer picture of the resources available there to support life. 

Dorrington said their system could be mounted on a space rover, or even attached to a spacecraft in low orbit, to monitor for minerals in near-future missions and for lava tubes in later missions.

“After the lava tube testing later this year, the next step will be optimising the device so as not to interfere or interact with any of the space rover or spacecraft’s metal components, or cause incompatible electromagnetic interference with communications or other instruments,” Dorrington said. 

“Qualifying MAPrad for space usage, especially for use on the Moon, will be a significant technical challenge for us, but we don’t foresee any showstoppers.”

The team will use the unique capabilities of the RMIT Micro Nano Research Facility and the Advanced Manufacturing Precinct and are also looking to collaborate on later stages of development with specialists in spacecraft integration or organisations with payload availability.

The research team at RMIT University includes Honorary Professor James Macnae, Professor Pier Marzocca, Professor Gary Bryant, Professor Arnan Mitchell, Dr Gail Iles and Dr Graham Dorrington.

Provided by RMIT University

Hubble Views a Faraway Galaxy Through a Cosmic Lens (Cosmology)

The center of this image from the NASA/ESA Hubble Space Telescope is framed by the tell-tale arcs that result from strong gravitational lensing, a striking astronomical phenomenon which can warp, magnify, or even duplicate the appearance of distant galaxies. 

Gravitational lensing occurs when light from a distant galaxy is subtly distorted by the gravitational pull of an intervening astronomical object. In this case, the relatively nearby galaxy cluster MACSJ0138.0-2155 has lensed a significantly more distant inactive galaxy – a slumbering giant known as MRG-M0138 which has run out of the gas required to form new stars and is located 10 billion light-years away. Astronomers can use gravitational lensing as a natural magnifying glass, allowing them to inspect objects like distant dormant galaxies which would usually be too difficult for even Hubble to resolve.

This image was made using observations from eight different infrared filters spread across two of Hubble’s most advanced astronomical instruments: the Advanced Camera for Surveys and the Wide Field Camera 3. These instruments were installed by astronauts during the final two servicing missions to Hubble and provide astronomers with superbly detailed observations across a large area of sky and a wide range of wavelengths.

Featured image: The center of this image from the NASA/ESA Hubble Space Telescope is framed by the tell-tale arcs that result from strong gravitational lensing. Credit: ESA/Hubble & NASA, A. Newman, M. Akhshik, K. Whitaker

Provided by NASA

Artificial Intelligence Helps Improve NASA’s Eyes on the Sun (Planetary Science)

A group of researchers is using artificial intelligence techniques to calibrate some of NASA’s images of the Sun, helping improve the data that scientists use for solar research. The new technique was published in the journal Astronomy & Astrophysics on April 13, 2021. 

A solar telescope has a tough job. Staring at the Sun takes a harsh toll, with constant bombardment by a never-ending stream of solar particles and intense sunlight. Over time, the sensitive lenses and sensors of solar telescopes begin to degrade. To ensure the data such instruments send back is still accurate, scientists recalibrate them periodically to understand just how the instrument is changing.

Launched in 2010, NASA’s Solar Dynamics Observatory, or SDO, has provided high-definition images of the Sun for over a decade. Its images have given scientists a detailed look at various solar phenomena that can spark space weather and affect our astronauts and technology on Earth and in space. The Atmospheric Imaging Assembly, or AIA, is one of two imaging instruments on SDO and looks constantly at the Sun, taking images across 10 wavelengths of ultraviolet light every 12 seconds. This creates a wealth of information about the Sun like no other, but – like all Sun-staring instruments – AIA degrades over time, and the data needs to be frequently calibrated.

This image shows seven of the ultraviolet wavelengths observed by the Atmospheric Imaging Assembly on board NASA’s Solar Dynamics Observatory. The top row is observations taken from May 2010 and the bottom row shows observations from 2019, without any corrections, showing how the instrument degraded over time. Credits: Luiz Dos Santos/NASA GSFC

Since SDO’s launch, scientists have used sounding rockets to calibrate AIA. Sounding rockets are smaller rockets that typically carry only a few instruments and take short flights into space, usually only 15 minutes. Crucially, sounding rockets fly above most of Earth’s atmosphere, allowing instruments on board to see the ultraviolet wavelengths measured by AIA. These wavelengths of light are absorbed by Earth’s atmosphere and can’t be measured from the ground. To calibrate AIA, scientists attach an ultraviolet telescope to a sounding rocket and compare its data to the measurements from AIA, then make adjustments to account for any changes in AIA’s data.

There are some drawbacks to the sounding rocket method of calibration. Sounding rockets can only launch so often, but AIA is constantly looking at the Sun. That means there’s downtime where the calibration is slightly off in between each sounding rocket calibration. 

“It’s also important for deep space missions, which won’t have the option of sounding rocket calibration,” said Dr. Luiz Dos Santos, a solar physicist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, and lead author on the paper. “We’re tackling two problems at once.”

Virtual calibration

With these challenges in mind, scientists decided to look at other options to calibrate the instrument, with an eye towards constant calibration. Machine learning, a technique used in artificial intelligence, seemed like a perfect fit. 

As the name implies, machine learning requires a computer program, or algorithm, to learn how to perform its task.

First, researchers needed to train a machine learning algorithm to recognize solar structures and compare them using AIA data. To do this, they gave the algorithm images from sounding rocket calibration flights and told it the correct amount of calibration each needed. After enough of these examples, they gave the algorithm similar images and checked whether it identified the correct calibration. With enough data, the algorithm learned how much calibration is needed for each image.
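The supervised setup described above can be caricatured in a few lines. This toy model is an assumption-laden sketch, not the team’s pipeline: synthetic data stands in for AIA images, ground-truth factors stand in for the sounding rocket calibrations, and a simple least-squares fit stands in for the deep neural network. The model predicts an unknown degradation factor from image statistics and divides it out:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: each "image" is a true solar scene multiplied by
# an unknown degradation factor; the known factors play the role of the
# sounding-rocket calibration measurements.
n_train, n_pix = 500, 64
scenes = rng.lognormal(mean=0.0, sigma=0.5, size=(n_train, n_pix))
degradation = rng.uniform(0.3, 1.0, size=n_train)
observed = scenes * degradation[:, None]

# Fit a least-squares line from mean observed intensity to the degradation
# factor (the real work uses a deep learning model on full images).
x = observed.mean(axis=1)
A = np.vstack([x, np.ones_like(x)]).T
coef, *_ = np.linalg.lstsq(A, degradation, rcond=None)

# Calibrating a new image = predicting its factor and dividing it out.
test_scene = rng.lognormal(0.0, 0.5, size=n_pix)
true_factor = 0.6
degraded = test_scene * true_factor
pred_factor = coef[0] * degraded.mean() + coef[1]
calibrated = degraded / pred_factor   # approximately recovers test_scene
```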

Because AIA looks at the Sun in multiple wavelengths of light, researchers can also use the algorithm to compare specific structures across the wavelengths and strengthen its assessments.

To start, they taught the algorithm what a solar flare looks like by showing it solar flares across all of AIA’s wavelengths, until it recognized solar flares in all different types of light. Once the program can recognize a solar flare without any degradation, the algorithm can determine how much degradation is affecting AIA’s current images and how much calibration each needs.

“This was the big thing,” Dos Santos said. “Instead of just identifying it on the same wavelength, we’re identifying structures across the wavelengths.” 

This means researchers can be more sure of the calibration the algorithm identified. Indeed, when comparing their virtual calibration data to the sounding rocket calibration data, the machine learning program was spot on. 

The top row of images shows the degradation of AIA’s 304 Angstrom wavelength channel over the years since SDO’s launch. The bottom row of images is corrected for this degradation using a machine learning algorithm. Credits: Luiz Dos Santos/NASA GSFC

With this new process, researchers are poised to constantly calibrate AIA’s images between calibration rocket flights, improving the accuracy of SDO’s data for researchers. 

Machine learning beyond the Sun

Researchers have also been using machine learning to better understand conditions closer to home. 

One group of researchers led by Dr. Ryan McGranaghan – Principal Data Scientist and Aerospace Engineer at ASTRA LLC and NASA Goddard Space Flight Center – used machine learning to better understand the connection between Earth’s magnetic field and the ionosphere, the electrically charged part of Earth’s upper atmosphere. By applying data science techniques to large volumes of data, they developed a new model that helped them better understand how energized particles from space rain down into Earth’s atmosphere, where they drive space weather.

As machine learning advances, its scientific applications will expand to more and more missions. In the future, this may mean that deep space missions – which travel to places where calibration rocket flights aren’t possible – can still be calibrated and continue delivering accurate data, even as they travel ever farther from Earth.

Header image caption (same as image in the story): The top row of images show the degradation of AIA’s 304 Angstrom wavelength channel over the years since SDO’s launch. The bottom row of images are corrected for this degradation using a machine learning algorithm. Credits: Luiz Dos Santos/NASA GSFC

Provided by NASA