It is a gas giant, with size and orbital period comparable to Jupiter's, and until a few years ago it was the closest exoplanet to us. It was precisely over this record that NASA, featuring a beautiful artist's impression as the "Image of the Day" on its website, slipped into a small oversight, now corrected.
Anyone who happened to be on the NASA web page this morning will have been a bit dumbfounded. In fact, an artist's impression of a gas giant with the title "Nearest Exoplanet to Our Solar System" stood out in plain sight in the box dedicated to the "Image of the Day".
The image is indeed very impressive. Created by Greg Bacon of the Space Telescope Science Institute, it dates back to at least 2006 and depicts what the gas giant Epsilon Eridani b could look like: an exoplanet with a mass comparable to Jupiter's, orbiting a very young star – Epsilon Eridani, just 800 million years old – 10.5 light years from us. Precisely by virtue of its tender age, Epsilon Eridani is still surrounded by a disk of dust – represented here by an oblique greyish streak – which extends for over 30 billion km. The rings and satellites surrounding the planet are purely hypothetical but plausible, the accompanying text explains, not failing to point out that – although the planet itself, as a gas giant, is hardly compatible with life as we know it – its possible moons could instead offer conditions suitable for life.
«The existence of Epsilon Eridani b has been well known for 20 years now. The latest update on the system's architecture, a couple of years ago, found the planet in a practically circular orbit, no longer as eccentric as the first measurements seemed to indicate. And obviously, in the meantime, Epsilon Eridani b has slipped to second place among the exoplanets closest to the Sun, given that since 2016 the record has belonged to Proxima b, and there are no closer stars», Alessandro Sozzetti, astronomer at INAF Turin and exoplanet expert, reminds Media Inaf.
In short, if you are among those who were taken aback, know that you were right: for once it was NASA that made a mistake. A few minutes ago the error was corrected (though not yet in the title), the news was removed from the home page, and the text now reports that Epsilon Eridani b was – emphasis on the past tense – the closest exoplanet to the Solar System. To this day, however, it remains the closest extrasolar gas giant to us.
Featured image: Artist’s impression of Epsilon Eridani b. Image credits: NASA, ESA and G. Bacon (STScI)
It is the first exoplanet discovered entirely thanks to data collected by ESA's astrometry space telescope. Identified with the transit method, it has a mass similar to Jupiter's and completes one revolution every three days. Confirmation came thanks to follow-up observations conducted with the Large Binocular Telescope.
Born to compile the largest ever census of the stars of the Milky Way, measuring the properties of over a billion sources on multiple occasions, ESA's Gaia space telescope has now managed to discover its first exoplanet. To tell the truth, Gaia had already caught the shadows of many exoplanets passing in front of their stars, but the one announced yesterday by the CU7 team – the unit of Gaia's Data Processing and Analysis Consortium that deals with variable phenomena – is the first that was previously unknown, and therefore the first identified by the ESA telescope.
The star around which it orbits has a name that looks like the digits of pi, a sequence of letters and numbers that alone gives an idea of the quantity of sources observed by Gaia: it is a solar-type star cataloged as Edr3 3026325426682637824, where the first part of the unwieldy acronym stands for early data release 3 and the 19 digits that follow are the code that uniquely identifies the star itself. But how did Gaia realize that there is a planet around Edr3 3026325426682637824? Its presence was betrayed by four slight dimmings of the star's apparent brightness. Very slight, completely imperceptible to the human eye: we are talking about just 1.5 percent. Imperceptible but systematic, and repeated at regular intervals. Extremely regular: they recur every 3.0525 +/- 0.0001 days. Add that out there in deep space – one and a half million kilometers from Earth, where Gaia is – there are no clouds, no passing planes and no atmospheric variations, and it becomes inevitable to interpret these variations in luminosity as periodic partial eclipses: in other words, the passage of a body between Gaia's very sensitive eye and the disc of the observed star. A transit, as astronomers say.
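The arithmetic linking a transit's depth to a planet's size is simple: the fractional dimming equals the ratio of the planet's disc area to the star's. A minimal sketch (assuming a Sun-like host star purely for illustration; these figures are not from the Gaia analysis):

```python
import math

# Transit depth = (R_planet / R_star)^2, so a 1.5 % dip implies a
# planet-to-star radius ratio of sqrt(0.015).
depth = 0.015                         # fractional drop in apparent brightness
radius_ratio = math.sqrt(depth)       # R_planet / R_star

# Assumed Sun-like star (radius ~696,000 km), for scale only:
r_star_km = 696_000
r_planet_km = radius_ratio * r_star_km
print(f"R_planet/R_star ≈ {radius_ratio:.3f}")
print(f"R_planet ≈ {r_planet_km:,.0f} km (Jupiter: ~71,500 km)")
```

Under these assumptions the dip corresponds to a roughly Jupiter-sized body, consistent with the Jupiter-like mass later measured spectroscopically.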
Yes, but which body? Planets are not the only objects that transit regularly in front of stars. Although less likely, in principle it could also be – in a binary system, for example – another star: a brown dwarf, opaque and with dimensions comparable to those of a planet. To establish with certainty the planetary nature of the body obscuring the light of Edr3 3026325426682637824, it was therefore necessary to resort to a second observational method, capable this time of estimating not so much the size of the obscuring object – as the transit method does, since the dimming of the brightness is proportional to the area of the body passing in front of the star – as its mass.
This is what the radial velocity method allows. Starting from the principle that it is not only the planet that orbits the star but also, albeit to a much lesser extent, the star that orbits the planet, it is possible to calculate the mass of the planet by measuring – through the periodic displacement of the spectral lines caused by the Doppler effect – the slight movements of the star induced by the planet itself. It is a bit like trying to estimate the mass of the Earth by measuring the movements that its revolution induces on the Sun. It is, however, a type of measurement that Gaia is unable to perform, at least not with the required accuracy. You need a very high resolution spectrograph mounted on a large telescope. An instrument like Pepsi, the spectropolarimeter installed on Lbt, the Large Binocular Telescope, in Arizona. A telescope with two 8.4-meter mirrors, and partly Italian, INAF being a member of the international collaboration of the Lbt Observatory. And it was precisely by exploiting the so-called Italian Director's Discretionary Time of Lbt – telescope time made available at the discretion of the director of the Observatory – that it was possible to carry out the required observation. It was conducted by two astronomers of the INAF OAS in Bologna, Felice Cusano and Andrea Rossi, together with Ilya Ilyin of the Leibniz Institute for Astrophysics in Potsdam. Result: the object passing in front of the star has a mass equal to 1.1 times that of Jupiter. So yes, there is confirmation: it is indeed a planet. The first exoplanet discovered by Gaia.
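The size of the stellar wobble that Pepsi had to measure can be estimated with a back-of-the-envelope calculation. The sketch below is a rough illustration, not the actual analysis: it assumes a one-solar-mass host and a circular, edge-on orbit, and takes the period and planet mass quoted above, plugging them into the standard radial-velocity semi-amplitude formula.

```python
import math

G     = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
M_JUP = 1.898e27       # Jupiter mass, kg

P      = 3.0525 * 86400        # orbital period in seconds (from the Gaia transits)
m_star = 1.0 * M_SUN           # assumed Sun-like host
m_p    = 1.1 * M_JUP           # planet mass measured via radial velocities

# Reflex semi-amplitude of the star for a circular, edge-on orbit:
#   K = (2*pi*G/P)^(1/3) * m_p / (m_star + m_p)^(2/3)
k = (2 * math.pi * G / P) ** (1 / 3) * m_p / (m_star + m_p) ** (2 / 3)
print(f"K ≈ {k:.0f} m/s")
```

The result is a wobble of order 150 m/s: large by exoplanet standards (short-period Jupiters are the easiest targets for this method), which is why a single run of follow-up spectra was enough to confirm the planet.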
“It was just a matter of time, but we knew that Gaia would eventually demonstrate its potential to reveal planetary transits too”, Gisella Clementini, astrophysicist at INAF OAS Bologna and member of the Gaia CU7 unit that signed the discovery, tells Media Inaf. “That the confirmation of this first transiting planet discovered by Gaia was obtained thanks to spectra acquired with a telescope in which INAF is a partner, such as Lbt, fills us with even more satisfaction”.
A series of 14 articles published in Astronomy & Astrophysics, many of them with contributions from INAF researchers, reports the detection of very faint radio signals produced by massive young stars that exploded at the edge of the universe. The results were obtained thanks to data from the Low Frequency Array (Lofar) radio telescope, and are among the most accurate ever collected at low radio frequencies.
An international group of astronomers, many of them from the National Institute of Astrophysics (Inaf), has published a series of articles on an important survey carried out with the powerful European Low Frequency Array (Lofar) telescope, the largest network currently operational in the world for low-frequency radio astronomy observations. These are the most accurate images of the universe ever taken at low radio frequencies: by repeatedly observing the same regions of the sky and combining the data to create a single very long exposure image, the team detected the faint radio glow of stars exploding as supernovae in tens of thousands of galaxies at the edge of the universe. The first 14 articles describing these results were published in a special issue of the scientific journal Astronomy & Astrophysics.
By combining the high sensitivity of Lofar and the large area of sky covered by the survey – about 300 times the apparent size of the full Moon – the researchers were able to reveal tens of thousands of galaxies similar to the Milky Way, but located in the most remote regions of the universe. The light from these galaxies travels for billions of years before reaching Earth: this means the galaxies appear as they were 'when young', when their stars were still in the process of forming.
Isabella Prandoni, a researcher at INAF in Bologna, explains: «Stars are formed in regions rich in dust, which blocks much of the light produced by the stars themselves at optical wavelengths. Using radio band observations it is possible to penetrate these layers of dust and obtain a much more precise and complete measure of the star formation activity under way even in galaxies very distant from us».
The Lofar images have made it possible to establish a new relationship between the radio emission of galaxies and the rate at which these galaxies form stars; this in turn has allowed a much more accurate estimate of the number of new stars forming in the early stages of a galaxy's life. With the "Lofar Two-metre Sky Survey: Deep Fields" survey, the researchers collected an enormous amount of data enabling further scientific studies, ranging from the nature of the spectacular radio jets produced by huge black holes to the emission resulting from collisions of huge clusters of galaxies.
Lofar does not directly produce maps of the sky: the signals coming from more than 70 thousand antennas must be suitably combined and processed in order to produce the final images. For this purpose, more than 4 petabytes of raw data have been acquired and processed, equivalent to about one million DVDs.
Equally important for the scientific exploitation of the survey was the comparison of the radio images with data obtained at other wavelengths. «The regions of the sky observed with Lofar were chosen among the best studied in the northern hemisphere», underlines Matteo Bonato, a young INAF researcher in Bologna. «This allowed astronomers to combine optical, near-infrared, far-infrared and sub-millimetre data for the galaxies revealed by Lofar, a fundamental step for interpreting the results of the survey».
«And this is just the beginning», adds Marco Bondi, also of INAF in Bologna. «The Italian community is analyzing the observations of another very interesting region of the sky, the so-called Euclid Deep Field North. These data will be part of the next data release».
Lofar is a powerful latest-generation instrument managed by Astron and made up of thousands of antennas grouped into 51 radio stations scattered across Europe, the result of a great collaboration between nine nations: France, Germany, Ireland, Italy, Latvia, Sweden, the Netherlands, Poland and the United Kingdom. INAF leads the Italian consortium and between 2018 and 2022 plans to invest approximately 2.5 million euros in Lofar, also contributing its staff to the development of the new generation of electronic devices that will equip the radio telescope and of the software that regulates the telescope's operation. Lofar is designed to capture radio waves at the lowest frequencies detectable from Earth, between 10 and 240 MHz (megahertz).
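For a sense of scale, those frequencies can be converted into wavelengths with λ = c/f, which helps explain why Lofar's antennas are simple dipoles rather than dishes: at these frequencies the waves are meters long. A quick check:

```python
C = 299_792_458  # speed of light, m/s

# Lofar's observing band, 10-240 MHz, expressed in wavelength terms:
for f_mhz in (10, 240):
    wavelength_m = C / (f_mhz * 1e6)
    print(f"{f_mhz:3d} MHz  ->  {wavelength_m:.2f} m")
```

The band thus spans wavelengths from roughly 30 m down to about 1.25 m, the longest accessible from the ground.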
Gianfranco Brunetti, of INAF Bologna and coordinator of the Italian Lofar consortium, concludes: «At low radio frequencies Lofar offers observational capabilities that will remain unique even in the era of the Square Kilometre Array Observatory (Skao). For example, these observations and the ultra-sensitive observations already planned with Lofar will provide us with very important information on the origin of relativistic particles and on the nature of dark matter in galaxy clusters».
Featured image: the deepest Lofar image ever made, showing the region of the sky known as "Elais-N1". Elais-N1 is one of the three fields studied in this research. The image comes from a single Lofar pointing observed repeatedly for a total of 164 hours. It revealed more than 80,000 radio sources, including some spectacular objects showing large-scale radio emission produced by huge black holes. Most of the sources, however, are associated with distant, star-forming galaxies similar to the Milky Way. Credits: Sabater et al. 2021
Why do asteroids in the solar system have the sizes we observe? Two researchers at the Max Planck Institute for Astronomy have found an answer to that fundamental question: for the birth of planets and planet precursors in our solar system 4.5 billion years ago, turbulence played a key role, helping to bring together pebble-like objects to form larger aggregations known as planetesimals. The presence of turbulence also sets a minimal mass, and thus a minimal size, for the resulting objects. From this model, the two researchers can predict the size distribution of the remaining objects of this type in the present solar system, namely the asteroids.
In a way, the asteroid belt between Mars and Jupiter and the Edgeworth-Kuiper belt beyond the orbit of Neptune are like cosmic museums: both contain small bodies that represent an intermediate state of planet formation within our solar system. Now, astronomers at the Max Planck Institute for Astronomy have been able to show how fundamental physics determined the size of the original asteroids – a fundamental length scale within the early solar system. The result rewrites a chapter of planet formation around the Sun, makes specific predictions that could be tested by space probes in the outer solar system, and is set to give astronomers key information as they interpret the diversity of exoplanets.
An ancient length-scale mystery
Both asteroids and comets are what remains of so-called planetesimals: solid objects, large enough to be bound by their own gravity, that formed roughly 4.5 billion years ago, when the Sun was still surrounded by a disk of gas and dust. Many planetesimals went on to eventually form the current planets. But in the asteroid belt, the gravitational influence of nearby Jupiter kept planetesimals from clumping together, and in the outer solar system, beyond Neptune, planetesimals simply did not encounter each other frequently enough to bond. That is why, in those regions, we still have these objects around, providing us with a glimpse of what the early solar system looked like. We do not call these objects planetesimals, though – we call them asteroids.
The intervening 4.5 billion years did not, of course, leave the asteroids completely unchanged. While the asteroid belt is much emptier than science fiction movies make it seem, and collisions are rare, collisions did happen over those past billions of years, and each left behind numerous smaller fragments. Those fragments then move on fairly similar orbits, spreading out over time. About a quarter of all known asteroids can be assigned to a family – a group that has originated from the same collision.
By plotting the orbital parameters of known asteroids, astronomers can estimate which of the objects belong to a cloud of fragments. Take the 500-m-sized asteroid 101955 Bennu, which is currently being visited by the NASA spacecraft OSIRIS-REx, with the aim of bringing some of its material back to Earth. Bennu is believed to be a fragment of a much larger asteroid, and is possibly a member of the Polana or Eulalia family of asteroids.
But when researchers led by Marco Delbo, at the Observatoire de la Côte d’Azur, performed a thorough asteroid family-tree analysis in 2017, they were able to pick out 17 asteroids that apparently had not undergone any collision at all, and were still in the same primordial state as when they were formed.
The primordial asteroids, and thus presumably the original planetesimals, have a very narrow size distribution. Objects with a diameter of around 100 kilometers are far more common than either larger or smaller objects, following a so-called Gaussian or normal distribution. But why 100 kilometers? What is special about this scale?
The challenge of forming large clumps
This is where the research by Hubert Klahr comes into play. Klahr is head of the Planet and Star Formation Theory Group at the Max Planck Institute for Astronomy, and he and his colleagues have spent the past decade trying to understand how planets form. Recently, Klahr together with his PhD student and later postdoctoral researcher, Andreas Schreiber, was able to make significant progress – and solve the question of the preferred 100-km scale at the same time.
The broad-brush story of planet formation has been known for a long time. Take a popular astronomy book from the 1970s, and you can read how there was matter left over from the initial disk of gas and dust surrounding the young Sun, and how that matter clumped together to form planets. But the details have long been surprisingly difficult. The dust in the gas disk surrounding a newborn star can indeed clump together to form what astrophysicists have taken to calling pebbles – clumps between a few millimeters and a few centimeters in size.
But the step from there to kilometer-size objects has long troubled planet-formation researchers. As pebbles grow larger, they become more likely to fracture as they collide, rather than stick together. For a while, researchers had hopes that water ice on pebbles might help them cling together. But that did not turn out to be all that convincing either, not least because ice at very low temperatures is not particularly sticky. Still, a number of researchers in the field see ice playing a role in the transition from pebbles to larger objects.
Overall, the conventional scenarios have a time-scale problem. Since the surrounding gas rotates more slowly than a solitary solid object orbiting a star would, larger pebbles tend to drift inward and eventually fall into their star. At slower growth rates, the objects in question would end up inside their stars before reaching the necessary size. Only objects larger than about one meter can escape that fatal drift – they become largely independent of the buffeting by the surrounding gas. But how can objects reach that size?
Turbulence to the rescue
For a few years now, Klahr and his colleagues have been on the track of turbulence – chaotic flows within a gas or another fluid – as a solution to the larger-than-pebbles problem. Observations show that in protoplanetary disks, gas flow is turbulent, with chaotic local variations in gas speed. Without turbulence, dust and pebbles would settle into a disk as razor-thin as Saturn’s rings. But observations show that, instead, dust is present throughout the much thicker gas disk that surrounds young stars.
On larger scales, turbulent gas motion in protoplanetary disks can create regions of greatly increased pebble and dust concentrations. Intermittently, such regions can become veritable pebble traps, where pebbles from the surrounding regions become trapped. In such regions, pebbles can accumulate with sufficient total mass for them to be bound together by their mutual gravity – leading to the formation of larger objects on the required much shorter time-scales.
First indications that turbulence plays a key role in planet formation came from numerical simulations and the comparison with detailed observations of the protoplanetary disks around new stars. Simulations by then-MPIA-PhD student Anders Johansen showed that turbulence forces pebbles together, and leads relatively quickly to the formation of planetesimals (see the Nature article Johansen et al. 2007). But simulations are one thing. Klahr and Schreiber set out to gain a deeper understanding – to understand in terms of the underlying physical laws what was happening in those simulations, and presumably around young stars.
The physics behind the turbulent formation of planetary embryos proved surprisingly straightforward – and has fundamental similarities to how stars themselves form. Astronomers have long known that there is a minimum mass for a newborn star. This is because the gas clouds that give birth to young stars have internal pressure; in order for gravity to overcome that pressure and pull the gas together into a star, a newly forming star needs to reach a certain mass. This mass is known as the Jeans mass, and it depends on the gas density and temperature. Klahr and Schreiber found a new kind of Jeans mass, for the formation of planetesimals. There, the pressure to be overcome is due not to the gas temperature, but to the turbulent motion of gas and dust.
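To make the analogy concrete, the classical Jeans mass for a thermally supported cloud can be written as M_J ≈ (5kT/(GμmH))^(3/2) · (3/(4πρ))^(1/2). The sketch below evaluates it for a generic cold star-forming core; the numbers are illustrative, not from the paper. In Klahr and Schreiber's planetesimal version, the thermal support term is replaced by the turbulent motion of the pebble cloud.

```python
import math

K_B   = 1.381e-23   # Boltzmann constant, J/K
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_H   = 1.673e-27   # hydrogen atom mass, kg
M_SUN = 1.989e30    # solar mass, kg

def jeans_mass(temperature_k, density_kg_m3, mu=2.3):
    """Classical Jeans mass (kg): thermal pressure support vs. self-gravity."""
    support = 5 * K_B * temperature_k / (mu * M_H)   # ~ squared support speed
    return (support / G) ** 1.5 * math.sqrt(3 / (4 * math.pi * density_kg_m3))

# A cold molecular-cloud core: T = 10 K, rho = 1e-17 kg/m^3 (both illustrative)
m_j = jeans_mass(10, 1e-17)
print(f"Jeans mass ≈ {m_j / M_SUN:.0f} solar masses")
```

Substituting the turbulent velocity of a pebble cloud and typical disk densities into the analogous expression is what, according to the article, yields the roughly 100-kilometer planetesimal scale for most of the Solar System.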
Setting the scales for planetesimals
This new Jeans mass depends only on the local strength of the turbulence, which in turn depends on how the structure of the gas disk changes as one moves farther away from the central star. If gas pressure drops sufficiently fast with distance, the so-called "streaming instability" will unavoidably produce turbulent motions of the gas and dust. Instead of sinking quietly towards regions of higher pressure, towards the central star, pebbles move chaotically, stirring the surrounding gas. For most regions within our Solar System, the turbulent-pressure Jeans mass of a pebble cloud corresponds to planetesimals with a size of around 100 kilometers. Pebble clouds of lower mass are less likely to collapse – they would need a rare chance fluctuation to bring them together all at once. Larger clouds are less likely to form, as clouds should collapse as soon as they exceed the critical mass. Thus, both smaller and larger planetesimals are possible, but naturally much rarer.
This, then, was a suitable candidate for the physics behind the universal Solar-System length scale of primordial asteroid sizes – a limiting mass for the formation of planetesimals.
Klahr’s and Schreiber’s calculations also make a prediction for remnants of the early planet-formation process in the outer Solar System. Based on what we know about the properties of our Sun’s protoplanetary disk, the size of primordial objects formed in that outer region shrinks to 10 km at a hundred times the Earth-Sun distance. It would be a worthy goal of a future outer-system space mission to study how the characteristic size of remnants in the outer solar system, the so-called Kuiper Belt objects, decreases with increasing distance from the Sun.
Visits to a planetary museum
Comets visiting us from that outer part of the solar system, the Edgeworth-Kuiper belt, are not likely to be in pristine shape – simulations suggest that they will unavoidably have undergone several collisions since the Solar System came into being. But a direct mission into the Edgeworth-Kuiper Belt, where collisions are less likely, should be able to identify and examine truly pristine and primordial planetesimals.
Such a planetesimal was very briefly visited by the New Horizons mission after its Pluto flyby in early 2019. At the time, the object that has since been called Arrokoth [original designation (486958) 2014 MU69] was 45 times as far from the Sun as the Earth, making it the most distant primordial object ever visited. Arrokoth looks like a snowman made of two planetesimals stuck together, one with a diameter of 21 kilometers, the other 15 kilometers. Indeed, the object’s surface structure and colour hint at direct formation from a single, rotating pebble cloud. This fits the size predictions of the pebble model for planetesimals forming at this particular distance from the Sun.
Another possibility for finding primordial planetesimals are the so-called Trojan asteroids, which were captured during the birth of the Solar System by Jupiter’s gravity. Ever since, they have been orbiting the Sun in two groups, one ahead of Jupiter, one behind (at Lagrange points 4 and 5). NASA’s LUCY probe, slated for launch in late 2021, is meant to visit six of those Trojan asteroids as part of a 12-year mission. Based on previous observations, the Trojans apparently originate from different regions of the early solar system – it’s as if LUCY were visiting a museum of planet formation! Both LUCY and a possible Edgeworth-Kuiper mission could test the prediction of the Klahr-Schreiber scenario for the sizes of primordial Solar System objects – both for the size distribution itself and for the frequency of binary objects, where two planetesimals have become stuck together.
Understanding planets around other stars
The new prediction for planetesimal sizes also promises considerable impact on our understanding of the diversity of exoplanets – of planets around stars other than the Sun. Perhaps the greatest value that the more than 4400 known exoplanets (a number still growing) have for our understanding of cosmic history is that they provide a statistical sample. Unlike the single case of our Solar System, the many data points for exoplanets allow us to make deductions about the way planets are formed in our galaxy.
If we understand the physics of planet formation, we can predict the probability of planetary systems of different kinds – massive planets, smaller planets, narrower or wider orbits – to form. By comparing the actual distribution of planetary systems of different kinds, we can test our predictions, and in this way find out if our simulations are realistic.
There are a number of ongoing attempts at “population synthesis,” that is, at creating ensembles of realistic planetary systems, extracting the frequencies with which certain properties (such as mass ranges, or orbital parameter ranges) occur, and comparing the result to observational data. But so far, those attempts have needed to put in the spatial and size distribution of planetesimals and planetary embryos “by hand,” as an educated guess. The new results by Klahr and Schreiber, on the other hand, allow researchers to deduce the planetesimal size distribution for each simulation run from the results for the developing population of pebbles, combined with the results for the gas pressure. This closes a fundamental gap in the chain of reasoning of population synthesis studies.
Where the mass concentration within the disk is higher, the effect of turbulence in allowing larger structures to form will be greater. As the gas within the disk is depleted – either by falling into the star or being scooped up by what then become gas planets – the capacity for forming larger planetesimals drops. The results can be brought into a form that allows the researchers running population synthesis models to include the birth of planetesimals and planetary embryos in a simplified way, as a function of the gas pressure that is an integral part of the underlying models.
A model with predictive power
All in all, the new results have closed an important gap in our previous knowledge about planet formation. Hubert Klahr says: “The strength of our model lies in its predictive power. Using the model, we can describe when and where planetesimals should form, as well as the sizes of the newborn planetesimals. Given that there are competing models, we will need to convince our colleagues that we have indeed found the underlying physics of planetesimal formation. That is where testable predictions can help.”
Parts of the planet-formation community favor alternative explanations, involving, for instance, the stickiness of ice, or “fluffy aggregates” (fluffy silicate flakes) as intermediate steps. For Klahr and Schreiber, it is pretty clear that while there may well be a role for those mechanisms at smaller scales, they have little to contribute when it comes to bringing objects into the 100-km range.
Klahr says: “Even if collisions were to lead to growth up to 100 km without eventually switching to a gravitational collapse, this method would predict too large a number of asteroids smaller than 100 km. It would also fail to describe the high frequency of binary objects in the Edgeworth-Kuiper belt. Both properties of our Solar System are easily reconcilable with the gravitational pebble cloud collapse.”
Research highlights potential pathway for new treatments for neurodegenerative diseases.
Scientists at the University of Alberta have identified a mechanism for a protein that decreases the chance of developing Alzheimer’s disease—a discovery that highlights a new potential avenue for developing therapeutic treatments.
The protein, called CD33, is known for its connection to Alzheimer’s disease susceptibility, but its exact role was unclear until now.
There are two versions, or isoforms, of the CD33 protein: a long version that increases susceptibility to Alzheimer’s disease and a short version that decreases the chance of getting the debilitating neurodegenerative disease.
“We found that the short isoform has a completely different and opposite function from the long isoform,” explained Matthew Macauley, assistant professor in the Department of Chemistry and Canada Research Chair in Chemical Glycoimmunology.
“The short version of the CD33 protein makes immune cells in the brain, called microglia, better able to consume plaque-causing proteins, in contrast with the long version of CD33, which we previously showed represses this process.”
About 10 per cent of the population have a different version of the CD33 protein that causes them to make more of the short isoform of CD33—which means that 90 per cent of people could benefit from a therapeutic intervention, Macauley explained.
“Our ultimate goal is to translate this new information into a therapeutic treatment strategy,” he explained. “There are several different avenues to accomplish this, which we are pursuing with new funding our laboratory has received.”
Featured image: Matthew Macauley led a research team that identified how one protein can protect against Alzheimer’s disease—a discovery that could point the way to new treatments for neurodegenerative diseases. (Photo: John Ulan)
By helping immature red blood cells mature faster, dexamethasone may help reduce infectivity, U of A scientists say
A team led by Shokrollah Elahi, immunologist with the University of Alberta’s Department of Dentistry, has identified a new role played by immature red blood cells in HIV-1.
Immature red blood cells – also known as erythroid precursors or CD71+ erythroid cells (CECs) – are now shown to have multiple effects on HIV. They make HIV target cells more susceptible to HIV infection, carry the virus to new places in the body, possibly facilitate mother-to-child infection, and even harbour the virus, hindering treatment.
Allowing more rapid progression of HIV-1
After three years of study, Elahi and his team have determined that these cells release a chemical called reactive oxygen species (ROS) that makes HIV target cells more susceptible to HIV infection.
“If we place immature red blood cells in a tissue culture plate with HIV and HIV target cells, the HIV goes crazy,” Elahi says. “HIV replication increases in some cases up to twentyfold.”
This is not the only role CECs play in progressing the spread of HIV in the body. Elahi explains the virus binds to a protein (called CD235a) on the surface of the immature red blood cells and travels to new parts of the body. “The HIV basically hitchhikes or gets a free ride on these cells, which travel all over the body and brain. This discovery is novel. No one has yet shown that HIV interacts with CD235a.”
Combined, the increased susceptibility and the “hitchhiking” may cause HIV to progress very fast in individuals with high levels of immature red blood cells.
In a healthy adult, these cells remain in the bone marrow, where they are produced. “Every second, our bodies produce 2.5 million red blood cells,” says Elahi. “These cells eventually mature and leave the marrow and do their main job as oxygen transporters. However, in anemia and some other chronic infections, the bone marrow is unable to produce enough red blood cells for the body. Therefore, red blood cell production may occur in other organs such as the spleen and liver to meet the body’s oxygen demand. This seems to cause the immature red blood cells to leave these organs early and appear in blood circulation.”
Alarmingly, Elahi and his team have also discovered that the virus can enter the immature red blood cells and remain inside for at least 72 hours, even in the presence of antiretroviral drugs used to treat HIV. “We retested the samples after 72 hours and the HIV had come out of the immature red blood cells and infected the target cells.”
Adults with anemia or any condition that results in the abundance of CECs in the blood are therefore highly susceptible to HIV-1, and for the usual HIV treatment to be effective, doctors will need to get the anemia under control first.
“For people who have HIV and are anemic, doctors need to treat their anemia first. If they don’t, it can negatively impact the HIV disease outcome,” says Elahi.
He also calls for anemia to be treated as an immunological disorder. “We have shown that CECs have immunosuppressive properties. It means they suppress your immune system, making you more prone to disease and infection. With more of these cells, your response to vaccines will be compromised, too. It’s more serious than we thought.”
Mother-to-child HIV transmission
CECs are also naturally abundant in fetuses and newborns. This has huge implications for mother-to-child transmission of HIV. “Immature red blood cells become abundant during pregnancy, especially in the third trimester, and are also abundant in the cord blood and placenta,” Elahi explains. “If the mother has HIV during pregnancy, the presence of immature red blood cells first accelerates the infection in the mother, then transports the virus into the placenta, where it spreads rapidly throughout the fetus.”
Mounting evidence, much of it from Elahi and his team, shows that immature red blood cells suppress the immune system.
Healthy pregnancies require the immunosuppressive function of the immature red blood cells. This protects the fetus from the mother’s immune system, which otherwise would attack it as a half-foreign object. Therefore, while this discovery could explain the rapid progression of HIV infection in newborns (and why they usually die within two years), Elahi calls for further study to come up with a solution.
Scientists at Oxford Brookes University have developed a new single-cell transcriptomic method which will aid multiple fields of biology, including the study of human health, disease and injury.
Single-cell transcriptomic methods allow scientists to study thousands of individual cells from living organisms, one by one, and sequence each cell’s genetic material. Genes are activated differently in each cell, giving rise to distinct cell types such as neurons, skin cells and muscle cells.
Single-cell transcriptomics allows scientists to identify the genes that are active in each individual cell type, and discover how these genetic differences change cellular identity and function. Careful study of this data can allow new cell types to be discovered, including previously unobserved stem cells, and help scientists trace complex developmental processes.
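In essence, the output of such an experiment is a matrix of gene-activity counts, one row per cell and one column per gene, from which cell identity can be read off. A toy sketch of the idea (the marker genes are real ones for these cell types, but the counts and cell names are invented for illustration):

```python
# Toy cell-by-gene count matrix, as produced by single-cell
# transcriptomics. SYP marks neurons, KRT14 skin, MYH7 muscle.
counts = {
    "neuron_1": {"SYP": 9, "KRT14": 0, "MYH7": 1},
    "skin_1":   {"SYP": 0, "KRT14": 8, "MYH7": 0},
    "muscle_1": {"SYP": 1, "KRT14": 0, "MYH7": 7},
}

def most_active_gene(cell):
    """Return the gene with the highest count in one cell."""
    genes = counts[cell]
    return max(genes, key=genes.get)

for cell in counts:
    print(cell, most_active_gene(cell))
```

Real datasets contain thousands of cells and tens of thousands of genes, and use clustering rather than a single maximum, but the principle of inferring cell identity from gene activity is the same.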
“Single-cell transcriptomics have revolutionised biology but are still an area in active development,” explains Helena Garcia Castro, a PhD student in the Department of Biological and Medical Science at Oxford Brookes University and co-author of the paper.
“Current methods use cell dissociation protocols with ‘live’ tissues, which put cells under stress, causing them to change, and limiting accurate investigations.”
To solve this problem, the research team used historical research and revived a process from the 19th and 20th centuries to create the ACME (ACetic acid MEthanol dissociation) method.
The researchers found that with this method, cells do not suffer from the dissociation, because the fixative halts their biological activity and ‘fixes’ them from the very beginning of the investigation.
The ACME method then allows cells to be cryopreserved, one or several times throughout the process, either immediately after the dissociation process, in the field or when doing multi-step protocols.
Dr Jordi Solana, Research Fellow at Oxford Brookes University adds: “This means scientists can now exchange samples between labs, preserve the cell material and large sample sets can be frozen in order to be analysed simultaneously, without destroying the integrity of the genetic material in the cell.
“We took the method from the old papers and repurposed it to make it work with current single-cell transcriptomic techniques. With our new method, we will now set out to characterise cell types in many animals.”
Thanks to the ACME method, scientists can now collaborate with other laboratories and study a wider variety of animal cells, something that was not possible without a way to dissociate and freeze cell tissues.
The paper, ACME dissociation: a versatile cell fixation-dissociation method for single-cell transcriptomics, is published in Genome Biology.
Researchers at the University of Oxford, alongside international collaborators, have found that there is a significant knowledge gap in the risks posed by climate change to mammals.
In their systematic review, published in the Journal of Animal Ecology, the scientists identify significant gaps in our knowledge of the risks to mammals in the regions most vulnerable to climate change, including boreal and tropical areas.
They identify that research evaluating the effects of climate change on mammal populations currently exists for just 87 species. This represents little more than 1% of the rich diversity of mammals, a group that contains over 6,400 species, of which about 25% are currently endangered.
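As a rough back-of-the-envelope check on that coverage figure (a sketch using the approximate species total quoted above):

```python
studied = 87          # mammal species with climate-change demographic studies
total_species = 6400  # approximate number of known mammal species
coverage = studied / total_species * 100
print(f"Coverage: {coverage:.1f}% of mammal species")
# → Coverage: 1.4% of mammal species
```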
This study provides, for the first time, a key understanding of mammal responses to climate change. The research shows that the responses of three key demographic processes – rates of survival, growth, and reproduction – are species-specific, and that climate change can affect each process negatively, neutrally, or even positively.
However, the severely limited available data, they say, means it is impossible to draw the much-needed generalities to develop global partnerships with governments, policy-makers, funding agencies, and NGOs to efficiently combat the effects of climate change on mammal species worldwide.
Dr. Rob Salguero-Gómez, from the Department of Zoology, University of Oxford, said: ‘Quite simply, we can’t protect what we don’t understand. By examining how the building blocks of any species change with the climate, we will be able to provide robust forecasts of how each individual species will fare in the face of climate change and other human-led activities.
‘Most high-research institutions are based in temperate and Mediterranean habitats. While these habitats have their own key value, they are not currently ranked as highly vulnerable as others such as the tropics and tundra.’
This is the first biogeographically explicit call for guided prioritisation of where research in mammal population ecology should focus. The scientists hope that this research will help shape how funding agencies, population ecologists, and managers decide to invest their limited economic support and time.
The scientists say that their findings show researchers urgently need to seek out the regions – and build active international collaborations with local researchers – around the globe where the environment is going to change most. These locations may be remote and far from research institution hubs.
Where this approach is not possible, they suggest implementing experimental manipulations of the surrounding environment, although they acknowledge that this raises important bioethical considerations.
Dr. Rob Salguero-Gómez said: ‘Key to these challenges is the fact that, for instance, the UK has recently rescinded all Official Development Assistance (ODA) projects. This brings huge negative consequences, as it is precisely most ODA countries that house the largest proportion of mammals (and other species) worldwide, for example the “African green belt” (Cameroon, CAR, DRC, Kenya, Tanzania), Chile, Colombia, Ecuador, Philippines, Indonesia.’
Featured image: Wild Reindeer Family – Spitsbergen, Svalbard
Researchers at Lund University in Sweden have discovered that bird blood produces more heat in winter, when it is colder, than in autumn. The study is published in the FASEB Journal.
The secret lies in the energy factories of cells, the mitochondria. Mammals have no mitochondria in their red blood cells, but birds do, and according to the research team from Lund and Glasgow this means that the blood can function as a central heating system when it is cold.
“In winter, the mitochondria seem to prioritize producing more heat instead of more energy. The blood becomes a type of radiator that they can turn up when it gets colder”, says Andreas Nord, researcher in evolutionary ecology at Lund University who led the study.
Until now, the common perception has been that birds keep warm by shivering with their large pectoral muscles and fluffing up their feathers. Less is known about other heat-regulating processes inside birds.
To investigate the function of mitochondria, the researchers examined great tits, coal tits and blue tits on two different occasions: early autumn and late winter. The researchers took blood samples from the birds and isolated the red blood cells. By using a so-called cell respirometer, a highly sensitive instrument that can measure how much oxygen the mitochondria consume, the researchers were able to calculate how much of the oxygen consumption was spent on producing energy and how much was spent on creating heat. Finally, they also measured the amount of mitochondria in each blood sample.
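The partition described here is, in spirit, the standard respirometry calculation: of the mitochondria’s total oxygen consumption, the share that is insensitive to blocking ATP production (the proton ‘leak’) is released as heat, and the remainder is ATP-linked. A minimal sketch, with invented example values (the function name and numbers are illustrative, not taken from the study):

```python
def partition_respiration(total_o2, leak_o2):
    """Split total mitochondrial O2 consumption into the fraction
    spent making energy (ATP-linked) and the fraction released as
    heat (proton leak). Rates in, e.g., pmol O2/s per million cells."""
    if not 0 <= leak_o2 <= total_o2:
        raise ValueError("leak respiration must be between 0 and total")
    return {
        "energy_fraction": (total_o2 - leak_o2) / total_o2,
        "heat_fraction": leak_o2 / total_o2,
    }

# Invented example values: winter blood respires more overall,
# and a larger share of that respiration goes to heat.
autumn = partition_respiration(total_o2=10.0, leak_o2=3.0)
winter = partition_respiration(total_o2=12.0, leak_o2=6.0)
print(autumn["heat_fraction"], winter["heat_fraction"])  # 0.3 0.5
```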
The results show that the blood samples taken in winter contained more mitochondria, and that those mitochondria worked harder. However, the extra work went not into producing more energy – something the researchers had expected, since birds have a much higher metabolism in winter – but into producing heat.
“We had no idea that the birds could regulate their blood as a heating system in this way, so we were surprised”, says Andreas Nord.
The researchers will now investigate whether cold weather is the whole explanation for the birds’ blood producing more heat in winter. Among other things, they will study whether the food that the birds eat in winter affects the mitochondria.