What's the Effect of Imidazole on Carbon Steel Weldment in District Heating Water? (Materials Science)

Sang-Jin Ko and colleagues investigated the effect of imidazole as a corrosion inhibitor on carbon steel weldment in alkaline district heating water. They showed that the ratio between the types of interaction of imidazole with carbon steel, which depends on the imidazole concentration, determines the inhibition efficiency. Their study recently appeared in the journal Materials.

Corrosion of metal in aqueous systems leads to structural degradation and accidents. In large fluid-transport systems, such as those used in the petrochemical industry and in district heating, the effect of corrosion is more extensive because it is impractical to use expensive, highly corrosion-resistant metals at such scales. Corrosion in a district heating system directly affects the lifespan and function of the pipes by causing metal-ion dissolution and corrosion byproducts. Weldments of carbon steel are especially susceptible because their microstructure differs from that of the base metal.

To reduce this problem, low-cost water-treatment methods have been applied, including pH control, deaeration, and the addition of inhibitors. Organic inhibitors are one of the main options for reducing the corrosion rate: they adsorb onto metal surfaces. Many studies have examined the corrosion-inhibition performance of imidazole in acidic environments, such as the piping of petrochemical plants. However, there had been no study of the effect of imidazole under alkaline conditions such as district heating water.

Thus, Sang-Jin Ko and colleagues investigated the effect of imidazole as a corrosion inhibitor on carbon steel weldment in alkaline district heating water, evaluating the inhibition efficiency and electrochemical properties with potentiodynamic polarization tests and electrochemical impedance spectroscopy (EIS). They showed that, as the concentration of imidazole increased up to 500 ppm, the inhibition efficiency rose to 91.7%; at 1000 ppm, the inhibition efficiency decreased.
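
For readers unfamiliar with how such numbers are obtained, the sketch below shows the standard textbook relations used to extract inhibition efficiency from the two techniques; the corrosion current densities and charge-transfer resistances in it are hypothetical placeholders, not values from Ko et al.

# Standard textbook relations for inhibition efficiency (illustrative only;
# the input values below are made up, not taken from the paper).

def ie_from_polarization(i_corr_blank, i_corr_inhibited):
    """Inhibition efficiency (%) from corrosion current densities (A/cm^2)."""
    return (1.0 - i_corr_inhibited / i_corr_blank) * 100.0

def ie_from_eis(r_ct_blank, r_ct_inhibited):
    """Inhibition efficiency (%) from charge-transfer resistances (ohm cm^2)."""
    return (1.0 - r_ct_blank / r_ct_inhibited) * 100.0

print(ie_from_polarization(2.0e-5, 3.0e-6))   # 85.0 for these made-up values
print(ie_from_eis(200.0, 2000.0))             # 90.0 for these made-up values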

Figure 1: Inhibition efficiency (%) from the potentiodynamic polarization test and value of n_dl from EIS. © Ko et al.

Additionally, to clarify the effect of imidazole as a corrosion inhibitor and to observe the surface morphology after corrosion, they obtained surface images by optical microscopy (OM) after 40 h of immersion in solutions containing 0, 100, 300, 500, and 1000 ppm of imidazole. The OM measurements showed that the 500 ppm imidazole solution offered the highest inhibition efficiency and corrosion resistance. In the 300 and 1000 ppm samples, micro-scale pitting was observed, implying lower inhibition efficiency and corrosion resistance.

Figure 2. Optical microscopy images (15×) of carbon steel weldment after 40 h immersion in the solution with addition of (a) 0 ppm, (c) 100 ppm, (e) 300 ppm, (g) 500 ppm, and (i) 1000 ppm of imidazole and magnified images (45×) of (b) 0 ppm, (d) 100 ppm, (f) 300 ppm, (h) 500 ppm, and (j) 1000 ppm of imidazole; white dotted lines show weldment area and red circles show micro-scale pitting © Ko et al.

Moreover, they used atomic force microscopy (AFM) to investigate the surface properties of the carbon steel weldment after 6 h of immersion in 500 and 1000 ppm imidazole solutions. They showed that the surface coverage of imidazole at 1000 ppm is lower than at 500 ppm.

Figure 3. Topography and surface potential of carbon steel weldment after 6 h immersion in 500 and 1000 ppm of imidazole solution as measured by AFM with SKPFM mode. © Ko et al.

Finally, X-ray photoelectron spectroscopy (XPS) was conducted to investigate how imidazole adsorbed onto the carbon steel surface. With 500 ppm of imidazole, the amount of pyrrole-type interaction was 4.8 times larger than that of pyridine-type interaction; at 1000 ppm, the pyridine-type interaction was 3.49 times larger than the pyrrole-type interaction.

Figure 4. XPS results (N1s) of carbon steel weldment after 6 h immersion in (a) 500 ppm and (b) 1000 ppm of imidazole. © Ko et al.

Their results demonstrated that, depending on the concentration of imidazole, the ratio of interaction between carbon steel and imidazole affected inhibition efficiency.

“Our study shows that samples from 1000 ppm of imidazole solution show a lower inhibition efficiency than samples from 500 ppm of imidazole.”

— they concluded.

Featured image: Molecular structure of imidazole. © Sang-Jin Ko et al.


Reference: Ko, S.-J.; Choi, S.-R.; Hong, M.-S.; Kim, W.-C.; Kim, J.-G. Effect of Imidazole as Corrosion Inhibitor on Carbon Steel Weldment in District Heating Water. Materials 2021, 14, 4416. https://doi.org/10.3390/ma14164416


Note for editors of other websites: To reuse this article fully or partially, kindly give credit to our author/editor S. Aman or provide a link to our article.

New Aroid Species Found in Myanmar (Botany)

Typhonium is the largest genus in the aroid family (Araceae). It comprises about 100 species of tuberous perennial herbs and is most often found in wooded areas. Twelve species of Typhonium have previously been recorded in Myanmar.

During an exploration of the family Araceae in Sagaing Region, Myanmar, the researchers came across an enigmatic Typhonium species collected in Monywa and Budalin Townships of Monywa District in August 2020.

After careful morphological examination and comparison with the relevant literature, the researchers confirmed that it was a new species and named it Typhonium edule, in reference to the inflorescences and leaves of the species being eaten by local people. The study was published in Phytotaxa.

“Typhonium edule is the 13th representative of the genus Typhonium in Myanmar,” said Mark Arcebal K. Naive, a Filipino PhD candidate at the Xishuangbanna Tropical Botanical Garden (XTBG) of the Chinese Academy of Sciences.

Typhonium edule is a seasonally dormant herb. It grows in tropical dry forest with an open to semi-open canopy at elevations between 50 and 85 m above sea level. The Burmese people call it ‘kyee-chay’ and cook its inflorescences and leaves.

Owing to the insufficient information on the distribution and population size of Typhonium edule in the wild, the researchers proposed that the species should be treated as ‘data deficient’ (DD) following the Red List criteria of the International Union for Conservation of Nature (IUCN). 

Featured image: Typhonium edule (Image by K.Z.Hein)  


Reference: Mark Naive, Khant Hein, “Taxonomic studies of Araceae in Myanmar II: Typhonium edule, a remarkable new aroid species from Monywa District, Sagaing Region”, Phytotaxa, Vol. 513 No. 2: 4 August 2021. DOI: https://doi.org/10.11646/phytotaxa.513.2.7


Provided by Chinese Academy of Sciences

3D Ordered Channel Enhances Electrocatalysis (Chemistry)

A team led by Prof. YU Shuhong and Prof. HOU Zhonghuai from the University of Science and Technology of China (USTC) of the Chinese Academy of Sciences developed a theory-guided microchemical engineering (MCE) approach to manipulate the reaction kinetics and thus optimize the electrocatalytic performance of the methanol oxidation reaction (MOR) in 3D ordered and cross-linked channels (3DOC). The study was published in the Journal of the American Chemical Society.

In micro- and nanoscale chemical engineering, two primary factors generally govern electrocatalytic kinetics at the electrode-electrolyte interface: the reaction on the electrode surface, and mass transfer from the electrolyte to the near-surface region across the diffusion layer.

The surface reaction can be optimized by designing catalysts at the nanoscale and increasing porosity to create more active sites, as well as by adjusting the electronic structure and binding energy to increase the intrinsic activity of those sites. For electrocatalysis on macroscale catalysts, mass transfer from the bulk electrolyte to the catalyst surface is fast enough, because the characteristic length of the diffusion layer is negligible compared with the catalyst size.

However, as the catalyst is downsized to the nanoscale, mass transfer deviates greatly from the predictions of traditional theory, because the diffusion-layer length becomes comparable to the catalyst size (see the back-of-envelope comparison below). A new methodology for optimizing the kinetics of a given catalyst is therefore needed to maximize electrocatalytic performance.
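
A rough, illustrative comparison of a typical diffusion-layer thickness with catalyst dimensions makes the point; the numbers below are generic textbook estimates, not values from the study.

# Back-of-envelope comparison of diffusion-layer thickness vs. catalyst size.
# Illustrative numbers only, not taken from the study.
import math

D = 1e-5                      # diffusion coefficient of a small molecule in water [cm^2/s]
t = 1.0                       # characteristic measurement time [s]
delta = math.sqrt(2 * D * t)  # transient diffusion-layer thickness ~ sqrt(2Dt) [cm]

macro_electrode = 1.0         # ~1 cm disc electrode [cm]
nanotube = 50e-7              # ~50 nm nanotube [cm]

print(f"diffusion layer ~ {delta * 1e4:.0f} micrometers")
print(f"macro electrode is ~{macro_electrode / delta:.0f}x larger than the diffusion layer")
print(f"diffusion layer is ~{delta / nanotube:.0f}x larger than the nanotube")
# For a cm-scale electrode the diffusion layer is a negligible fraction of the catalyst
# size; for a nanoscale catalyst it dominates, so classical assumptions break down.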

In this study, the researchers proposed an MCE approach involving catalyst process optimization.

They selected platinum nanotubes (Pt NTs) as the model catalyst, employed air-liquid interface assembly and in-situ electrochemical etching to construct an ideal 3D ordered and cross-linked channel, and used the MOR as the model reaction to test the electrocatalytic performance of the 3DOC. The measurements indicated that there is an optimal channel size of the 3DOC for the MOR.

In addition, based on the free-energy density function of the electrode surface, the researchers established a comprehensive kinetic model coupling the surface reaction and mass transfer to accurately regulate the kinetics and optimize the MOR performance. The results showed that increasing the channel size of the 3DOC promoted mass transfer from the bulk electrolyte onto the catalyst surface, but weakened the vertical electron flow of the reaction in the 3DOC.

This competition between the mass transfer and surface reaction led to the best MOR performance on 3DOC with a specific size. Under the optimized channel size, mass transfer and surface reaction in the channeled microreactor were both well regulated. 

This structural optimization, unlike traditional thermodynamics-based catalyst design, ensures a significant increase in heterogeneous electrocatalytic performance. Using the proposed MCE approach coupling mass transfer and surface reaction, kinetic optimization in electrocatalysis can be realized. This MCE strategy will bring about a leap forward in structured catalyst design and kinetic modulation.


Provided by Chinese Academy of Sciences

How Is the COVID-19 Delta Variant Impacting Younger People? (Medicine)

The number of COVID-19 infections, mostly with the delta variant, continues to rise, especially in parts of the U.S. where vaccination rates are low. Dr. Nipunie Rajapakse, a pediatric infectious diseases physician at Mayo Clinic Children’s Center, says younger people are among those being infected with the highly contagious virus.

“The most important thing we can do to protect kids under 12 years of age, who are not yet eligible to be vaccinated themselves, is to ensure that as many people who are around them and who are interacting with them are vaccinated.”

Dr. Nipunie Rajapakse

Watch: Dr. Nipunie Rajapakse discusses children, younger people affected by delta variant.

Journalists: Broadcast-quality sound bites are in the downloads at the bottom of the page. Please courtesy: “Nipunie Rajapakse, M.D./Pediatric Infectious Diseases/Mayo Clinic.”

Dr. Rajapakse answered these questions about the delta variant and how it is affecting young people in this Q&A with the Mayo Clinic News Network:

Who is being impacted by the delta variant?

With the delta variant, we are seeing an increased number of cases amongst children. In the last couple of weeks, the American Academy of Pediatrics has reported a significant increase in COVID-19 cases amongst people under 18 years of age. We know that the delta variant is much more transmissible than the other prior variants of COVID-19. 

And that extends to children as much as to older teenagers and adults. What we’re still working to understand is whether people get sicker with the delta variant or not. These are surprisingly difficult things to tease out. When you have a large increase in the number of people getting infected, that proportionately results in more people in the hospital and more people dying. But it doesn’t necessarily mean that the virus itself is more deadly.

What are ways adults can protect their children?

The most important thing we can do to protect kids under 12 years of age who are not yet eligible to be vaccinated themselves is to ensure that as many people who are around them and who are interacting with them are vaccinated. Anyone over 12 years of age should be getting their vaccine, both to protect themselves, but also to protect people who are not yet eligible, such as children under 12 years of age.

We know the vaccines are highly effective in preventing serious illness, hospitalizations and deaths in people who get the vaccine. We also know that they will significantly reduce your risk of spreading the infection to someone else. 

What about the delta breakthrough cases in vaccinated people?

I want to emphasize those cases are getting a lot of headlines, but they’re very rare events. And they are not what is driving the current surge in cases that we’re seeing. The current surge is really amongst unvaccinated people predominantly young and middle-aged, who are winding up in hospital or in ICU because of infection. And we know that these are largely preventable.

Why should 12- to 18-year-olds get their COVID-19 vaccine before returning to school?

We do want kids to return to school, we know all the benefits of going to school. But right now, with what’s going on in our country and in our communities with delta variant spread, you’re really making a choice between getting vaccinated or getting COVID-19. This delta variant is just that contagious.


Provided by Mayo Clinic

The Perseid Meteor Shower Arrives (Astronomy)

The Perseids are produced by the impact on our atmosphere of fragments from the meteoroid cloud of Comet 109P/Swift-Tuttle, and their impacts are also recorded on the surface of the Moon. During the peak, around August 11, up to fifty Perseids per hour can be observed from places away from light pollution.

During the second half of July the Perseids began to cross the night sky. At the moment, only one or two events can be observed per hour, but the true astronomical spectacle will take place in mid-August, when the meteor shower reaches its maximum activity: in places far from the light pollution of big cities, up to fifty Perseids per hour will be visible.

The Perseids, one of the classic astronomical spectacles of summer in the Northern Hemisphere, originate from Comet 109P/Swift-Tuttle. This comet completes an orbit around the Sun approximately every 133 years and, each time it approaches our star, Swift-Tuttle heats up, emitting jets of gas and small solid particles that form the comet’s tail. Every year, between the end of July and the end of August, our planet crosses the remains of this tail, causing these particles, called meteoroids, to collide with the Earth’s atmosphere at high speed.

As the Earth moves into this cloud of meteoroids that the comet leaves behind, the number of particles increases, so the activity of the Perseids rises. In 2021 that activity will peak during the night of August 11-12. However, activity will also be high during the nights immediately before and after, allowing a large number of shooting stars to be seen. The Moon, which reaches its new phase on August 8, will be at the beginning of its first-quarter phase, so its brightness will be very low and will not interfere with observation.

“If the observation conditions were ideal, on the order of one hundred shooting stars per hour could be seen, but the brightness of the Moon will be one of the factors that will bring the real number of visible Perseids down to about fifty,” points out José María Madiedo, researcher at the Institute of Astrophysics of Andalusia (IAA-CSIC).

“Most of the meteoroids released from 109P/Swift-Tuttle are as small as a grain of sand, or even smaller. When they cross paths with our planet, they enter the Earth’s atmosphere at a speed of more than 210,000 kilometers per hour, which is equivalent to crossing our country from north to south in less than twenty seconds,” points out José Luis Ortiz, researcher at the IAA-CSIC.
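
As a quick back-of-envelope check of that figure (assuming a north-to-south span of roughly 1,000 km for Spain, a value not given in the original text):

# Sanity check of the quoted entry speed (illustrative arithmetic only)
speed_kmh = 210_000                       # entry speed quoted above [km/h]
speed_kms = speed_kmh / 3600              # ~58 km/s
span_km = 1_000                           # assumed north-south span of Spain [km]
print(f"{speed_kms:.0f} km/s -> about {span_km / speed_kms:.0f} s to cover {span_km} km")
# ~58 km/s, i.e. roughly 17 s, consistent with "less than twenty seconds"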

At these speeds, the collision with the atmosphere is so abrupt that the temperature of these particles increases to about five thousand degrees Celsius in a fraction of a second, so they disintegrate, emitting a flash of light that is called a meteor or shooting star. This disintegration occurs at high altitude, usually between 100 and 80 kilometers above ground level. Larger particles (the size of a pea or larger) can produce much brighter shooting stars, which are called bolides or fireballs.

HOW TO OBSERVE THEM

To enjoy the Perseids it is not necessary to use telescopes or any other type of optical instrument. It is enough to observe the sky, preferably from somewhere as dark as possible and away from the light pollution of the cities.

These shooting stars can appear anywhere in the sky. Prolonging their trajectories backwards, they appear to come from a point located in the constellation of Perseus, from which their name comes. This point is called the radiant. Since the constellation Perseus rises above the horizon after dark, the probability of seeing Perseids increases as the night progresses and peaks near dawn.

ALSO ON THE MOON

The Perseids also hit the Moon. Unlike Earth, the Moon lacks an atmosphere to protect it, so meteoroids collide directly with the lunar soil at more than 210,000 kilometers per hour. This causes the meteoroids and part of the lunar soil in which they impact to be abruptly destroyed, thus forming a new crater.

In each of these collisions, a brief flash of light is also released that the human eye cannot perceive directly, but that can be detected from Earth with the help of telescopes. “The study of these flashes allows astrophysicists to obtain very relevant data on the collisions that occur against the Moon and against the Earth. For this reason, during the nights of greatest activity of the Perseids, our telescopes of the MIDAS project will also point to the Moon to be able to record how the particles detached from Comet 109P/Swift-Tuttle disintegrate against the lunar soil”, concludes José María Madiedo.


Provided by IAA CSIC

From Dusk to Dawn of the Universe (Cosmology)

Can the darkness of twilight (the photons of the cosmic microwave background), together with the light of dawn (the ultraviolet radiation emitted by the first stars that light up), unravel the story of reionization and tell us something about primordial inflation? We talk about it with Daniela Paoletti of the National Institute of Astrophysics (INAF), co-author of two new studies that try to answer this question.

Almost a year on, Media INAF once again met Daniela Paoletti, researcher at INAF OAS in Bologna and first author of a new study that investigates, in an original way, the history of reionization: the period, not yet well defined, in which the primordial gas that pervaded the universe in the early stages of its evolution passed from the neutral to the ionized state. The work, with the evocative title “Dark Twilight Joined with the Light of Dawn to Unveil the Reionization History”, presents an extensive analysis of the history of reionization based on recent cosmological and astrophysical data. Its authors also include Dhiraj Kumar Hazra, Fabio Finelli of INAF OAS, and the 2006 Nobel laureate in physics George Smoot. In the same period another article was submitted that counts her among its authors, reporting a detailed study of what the latest Planck data have to say beyond the standard inflation model. On this hot summer day, we discuss with her the details of the analysis, its implications, and the hopes for the future.

Artistic impression showing part of the history of the universe, centered on the epoch of reionization, the process that ionized most of the material in the universe. From left to right: the oldest light in the universe, the first stars, the reionization process, and the first galaxies. Credits: ESA – C. Carreau

A year after the publication of your study on the history of reionization, you are about to publish a new one on the same burning topic. What is it about?

“This is a study of how the darkness of twilight and the light of the dawn of the universe, together, can help us understand how one of the most important phases in the history of the universe may have unfolded. The article is the continuation of what we published a year ago in Physical Review Letters, where we presented an original approach to studying the history of the early universe by combining astrophysical data and the cosmic microwave background (CMB). In this article we describe the approach in detail and present many more results which, for reasons of space, we could not include in the previous paper.”

How did you come up with the idea for the title, which is so evocative?

“The idea for the title came to me while I was preparing the seminar on the Physical Review Letters paper. The innovative aspect of our approach is that it brings together two completely different types of data: the microwave background radiation and the ultraviolet radiation. The first, which has always been described as the first light of the universe, consists of the first photons ever emitted; but, if you think about it, they are also the first twilight, because once this radiation cooled down the universe entered what is called the dark age. Ultraviolet radiation, on the other hand, is a tracer of the first stars: it traces the dawn of the universe, when it emerges from the night of the dark age. If the title had been in Italian I would have used the word aurora, which in my opinion would have been the most beautiful term, but in English aurora and dawn are expressed by the same word. So I used ‘dawn’, because I liked the idea of combining the twilight before the night and the light after the night.”

The figure shows the comparison between the ionization histories obtained by combining the quasar data and the UV data separately with the CMB, and the one obtained by combining all three. Credits: Paoletti et al.

Microwave radiation and ultraviolet radiation: how did you manage to reconcile such different data?

“They are two totally different types of data, and precisely because they are so different we had to devise and develop this new technique which, instead of assuming the fraction of ionized matter as a function of time, solves the equation for the ionization fraction. This allows us to also exploit the ultraviolet radiation data, which we could not otherwise use in the classical approach. Those data tell us what is happening to the ionizing sources, that is, to the first stars. We also use other data, which in this work proved very interesting: quasar data and gamma-ray burst (GRB) data. Some objects at very high redshift, and therefore very far away, can tell us what the ionization state is around the source, in their local environment. If we assume that this is also representative of what lies outside, they give us a precise idea of what is happening at that redshift at that moment.”
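
To give a concrete sense of the kind of equation being solved, here is a minimal sketch of the standard ionization-fraction equation used in reionization studies, in which the ionizing emissivity is the quantity traced by the ultraviolet data. This is not the authors' actual pipeline, and every number in it is an illustrative placeholder.

# Minimal sketch of the standard equation for the ionized volume fraction Q(z):
#   dQ/dt = n_ion_dot / <n_H>  -  Q / t_rec
# n_ion_dot (the ionizing emissivity) is what the UV luminosity density traces.
# All numbers are rough placeholders, not the values used in the paper.
import numpy as np

MPC_CM = 3.086e24                       # centimeters per megaparsec
N_H = 1.9e-7                            # mean comoving hydrogen density [cm^-3]
H0, OM, OL = 2.2e-18, 0.32, 0.68        # Hubble constant [s^-1], density parameters

def hubble(z):
    return H0 * np.sqrt(OM * (1 + z) ** 3 + OL)

def n_ion_dot(z):
    # toy ionizing emissivity: ~5e50 photons s^-1 Mpc^-3 near z = 6, declining at higher z
    return 5e50 * 10 ** (-0.15 * (z - 6.0)) / MPC_CM ** 3

def t_rec(z):
    alpha_B, clumping = 2.6e-13, 3.0    # case-B recombination coefficient [cm^3/s], clumping
    return 1.0 / (clumping * alpha_B * N_H * (1 + z) ** 3)

Q, dz = 1e-4, 0.001
for z in np.arange(20.0, 5.0, -dz):
    dQ_dt = n_ion_dot(z) / N_H - Q / t_rec(z)
    Q = min(Q + dQ_dt * dz / ((1 + z) * hubble(z)), 1.0)   # dt = dz / [(1+z) H(z)]
    if round(z, 3) in (10.0, 8.0, 7.0, 6.0):
        print(f"z = {z:.0f}: ionized fraction Q = {Q:.2f}")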

So is the method the same as presented in 2020?

“Yes, this is the core of the method we had already developed in the study presented in Physical Review Letters in 2020, but here we went on to see what happens when we use different data or change the assumptions. The very first thing we did was to check what happens when, in the reionization source term, we leave free a parameter that would otherwise be fixed by simulations, because it is a term on which we have very little data, and the data we do have are not very sensitive to it. In the first work we had fixed it to the value from the simulations, while now we have left it free to be guided by the data, and we have seen that the quasar data in particular have a good ability to constrain it and that, fortunately, it turned out to be in perfect agreement with the simulations. This therefore confirmed what we had previously assumed.”

The figure shows the difference between the two ionization histories obtained with different magnitude cuts; it is evident that the cut at −15 induces a gentler reionization. Credits: Paoletti et al.

What is the main novelty of the new study?

“An extremely interesting result of this new study comes when we change the ultraviolet data. For the ultraviolet emission, what we measure with our instruments is the luminosity function, which we then convert into an ultraviolet luminosity density. Since this luminosity function has to be integrated, we need to choose a cutoff magnitude: in other words, we do not consider sources fainter than the value assumed as the cut. Until now we had always assumed a fairly conservative magnitude limit of −17, given that for fainter sources the data show a change in the behavior of the luminosity function that we do not know whether it is real or an effect of the uncertainty in the data. Now we have instead used a more aggressive, more optimistic cut and asked what happens if we go down to −15. With this new cut we are in fact including the contributions of sources that are very distant and very faint, but which are numerous, and which therefore lead to a slightly different history of reionization. We find that a contribution appears at higher redshifts: instead of an extremely steep rise, reionization becomes slower and lasts longer, precisely because of the contribution of these very faint sources, which are nevertheless able to ionize.”
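
As a rough illustration of what shifting the cut from −17 to −15 does to the ionizing budget, the snippet below integrates a Schechter-type UV luminosity function down to the two limits. The Schechter parameters are generic values of the kind used for z ≈ 7 galaxies, not those adopted in the paper.

# Illustrative effect of the magnitude cut on the UV luminosity density.
# Schechter parameters below are generic z~7-like values, not the paper's choices.
import numpy as np

phi_star, M_star, alpha = 3e-4, -20.9, -1.9   # [Mpc^-3 mag^-1], characteristic magnitude, faint-end slope

def rel_luminosity_density(M_cut, M_bright=-24.0, n=4000):
    """Relative UV luminosity density from sources brighter than M_cut."""
    M = np.linspace(M_bright, M_cut, n)
    x = 10 ** (0.4 * (M_star - M))            # L / L*
    phi = 0.4 * np.log(10) * phi_star * x ** (alpha + 1) * np.exp(-x)   # Schechter in magnitudes
    return np.trapz(phi * x, M)               # integrate luminosity-weighted counts

rho_17 = rel_luminosity_density(-17.0)
rho_15 = rel_luminosity_density(-15.0)
print(f"faint cut at -15 adds ~{100 * (rho_15 / rho_17 - 1):.0f}% to the ionizing budget")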

Does the doubt remain that the sources you are considering may not be realistic?

“Yes, that doubt always remains. Obviously the error bars get bigger, because these are more difficult measurements. But the beautiful thing is that JWST is being launched in October, and it will see very well precisely the faint tail we are considering. I am thrilled to make predictions for JWST, because I am very curious to see what the impact will be as the error bars shrink. If this contribution from faint sources really is so large, the classical model used in cosmology, which assumes a very steep transition lasting a very short time, starts to be problematic, because at that point astrophysics would be telling us that reionization is a little slower. We must always keep in mind that reionization, beyond its intrinsic importance as a phase transition in which the entire universe changes state, is fundamental in cosmology because it represents one of our greatest uncertainties. Suffice it to say that the optical depth, the parameter through which reionization enters the cosmic microwave background, is the only one that, even after Planck, still has an error of more than 1%. It therefore affects a whole series of extended models, among them, as we have also shown, the extensions of the inflation model, such as those in the other article we wrote.”
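
For context, the optical depth mentioned here follows from the ionization history through a standard integral, sketched below with a hypothetical tanh-shaped reionization history; this is a textbook relation with rounded constants, not the paper's calculation (helium is ignored).

# How an ionization history x_e(z) maps onto the CMB optical depth tau:
#   tau = c * sigma_T * integral of n_e(z) / [(1+z) H(z)] dz
# Toy tanh-shaped history; rounded constants; helium ignored.
import numpy as np

C, SIGMA_T = 3.0e10, 6.65e-25           # speed of light [cm/s], Thomson cross-section [cm^2]
N_H, H0 = 1.9e-7, 2.2e-18               # comoving H density [cm^-3], Hubble constant [s^-1]
OM, OL = 0.32, 0.68

def hubble(z):
    return H0 * np.sqrt(OM * (1 + z) ** 3 + OL)

def x_e(z, z_re=7.5, dz=0.5):           # hypothetical reionization history
    return 0.5 * (1 + np.tanh((z_re - z) / dz))

z = np.linspace(0.0, 30.0, 3000)
n_e = x_e(z) * N_H * (1 + z) ** 3       # proper electron density
tau = np.trapz(C * SIGMA_T * n_e / ((1 + z) * hubble(z)), z)
print(f"tau = {tau:.3f}")               # ~0.05 for this toy history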

Will future experiments that are already planned be able to help in this respect?

“Yes. With the generation of cosmological experiments arriving in the next few years, we need to be particularly careful about how reionization is treated, and the possibility of using these astrophysical data to tell us how the history of reionization went could, in turn, also have an impact on the constraints on cosmological parameters. Furthermore, what we have shown about the quasar data is very interesting, also for the future: they proved very powerful because, although they extend less far in redshift than the ultraviolet data, they provide precise points of the reionization fraction. George Smoot pointed out to me that in the future, with DESI and Euclid, we will start talking about having not five, six or ten points but samples of thousands of quasars. So over the next ten years the approach and the perspective will also change completely.”

“An instrument that could have done exceptional things would have been Theseus, because gamma-ray bursts are very powerful: whereas a quasar has a continuous emission and therefore ionizes the medium around itself, a GRB does not, because it is too fast; it has no time to ionize. It is therefore a truly precise point marking that fraction. Unfortunately, in the analysis we used we have only one such point, which nevertheless already shows how a single point among the dozen used is able to narrow the error bars.”

What astrophysical data are you using? At what distance?

“We use six galaxy clusters from the Hubble Frontier Fields. The quasars used reach up to redshift 8, while the ultraviolet sources, with interpolation, reach up to redshift 10.”

There is a parallel study to this one, submitted in the same period, with an equally curious title reminiscent of Toy Story. What is it about?

“The original title of this study by Dhiraj (the first author, who was in Bologna until January 2020 and is now a professor at IMSc Chennai, India) was precisely ‘Inflation story: to slow-roll and beyond’, like the leitmotiv of Toy Story, because slow-roll is the standard model of inflation and the idea was to ‘go beyond’. Slow-roll is already the non plus ultra, yet we go further; that was the idea, quoting Buzz Lightyear. A couple of the authors were not entirely in agreement, because in fact we considered slow-roll as a starting point, so it would not have been correct to use the ‘to’. In the end, of course, we removed the ‘to’, even though we both liked it because it echoed Buzz wanting to go further; and, thinking about it, our ending is similar to that of the Toy Story hero: we try to go further, but there is a problem. We know that the standard model is a beautiful fit to the data, but we also know that Planck confirmed what we had already seen with WMAP, namely that there are anomalies in these data. These anomalies are extremely elusive because they all sit at 2.8 sigma of significance, while the threshold for saying that something is anomalous is 3 sigma. So we are somewhat in a limbo that does not allow us to understand whether what we are seeing is a statistical fluctuation or really an anomaly.”

If it were an anomaly would it be more intriguing?

“Yes, the nice thing is to go and see whether it is an anomaly. Two of the biggest anomalies are the lack of power at very large angular scales, which was observed and confirmed by Planck, and small oscillations in the angular power spectrum. It is these blocks of oscillations, or single oscillations, called ‘features’, that are not produced in the standard model. One possibility is that inflation was not all slow-roll: we have a scalar field, the one that generates inflation, moving very slowly on an extremely flat potential. If, before ending up rolling slowly on that flat potential, it had encountered a slightly less flat stretch, or a jump, or a cusp, this could lead to the loss of power and to the generation of small oscillations.”

Residuals of the best-fit power spectra, showing the lack of power and the oscillations that were studied. Credits: Hazra et al.

How do you check it?

“Usually what is done, as has been the case in many analyses (including Planck’s paper on inflation), is to take different physical models and see whether they fit the data better or worse. The nice thing here is that we use a framework called Wiggly Whipped Inflation. It is a phenomenological approach: we do not ask what caused a given feature, but rather: if the inflationary potential were shaped in a certain way, can we fit the data? Of course, if we fit the data better, then we can be reasonably confident that we have found an inflationary model that works better. First there is the ‘whip’, the so-called whipped-inflation potential, which means that earlier on the scalar field rolled a little faster and only afterwards settled into slow-roll. In that case there is a lack of power, because while the field rolls quickly it does not generate many perturbations; it generates them once it reaches slow-roll. So you get this lack of power. Then you test the case in which the ‘wiggles’, these small oscillations, are also present. These oscillations can be produced by discontinuities: when the potential has a jump, a wiggle is generated. If the jump is bigger, you generate more of them at certain scales, which depend on the size of the jump. It is a general framework that simply gives us an idea of what can best fit the data.”
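
A purely schematic toy of the two ingredients described here, a large-scale power suppression plus superimposed oscillations on a power-law spectrum, might look like the following. The functional forms and feature parameters are invented for illustration and are not the Wiggly Whipped Inflation parametrization used by Hazra et al.

# Schematic toy only: a power-law primordial spectrum with a low-k suppression
# ("whip") and log-spaced oscillations ("wiggles"). Not the paper's parametrization.
import numpy as np

A_s, n_s, k_pivot = 2.1e-9, 0.965, 0.05            # Planck-like baseline values, k in Mpc^-1
k_cut, wiggle_amp, wiggle_freq = 5e-4, 0.03, 10.0  # made-up feature parameters

def P_standard(k):
    return A_s * (k / k_pivot) ** (n_s - 1.0)

def P_featured(k):
    suppression = 1.0 - np.exp(-(k / k_cut) ** 2)  # low-k power deficit
    wiggles = 1.0 + wiggle_amp * np.sin(wiggle_freq * np.log(k / k_cut))
    return P_standard(k) * suppression * wiggles

for k in (1e-4, 1e-3, 1e-2, 1e-1):
    print(f"k = {k:.0e} Mpc^-1: featured / standard = {P_featured(k) / P_standard(k):.2f}")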

It sounds simple, but I guess it really isn’t …

“There are two problems. On the one hand, we know that if we used 100,000 parameters we could fit the entire universe: I could write a Lagrangian for the universe depending on 150 parameters and I would have the wave function of the universe. But it does not work like that, because there would be too many degrees of freedom. So saying that a model improves the fit to the data is always a balancing act between how free the model is, that is, how many degrees of freedom it has, and how much better it fits the data. If a model, as happened to us in some of these cases, fits the data better than the standard model but uses many parameters, it does not really count as an improvement. Furthermore, it must be said that Planck’s 2018 data reduced the evidence for the anomalies. In temperature the lack of power is still there.”

What’s new in this second study?

“The novelty lies in the fact that we have also used polarization. In principle the same thing that is done with temperature can be done with polarization. The problem is that in polarization, at the large angular scales, reionization increases the power, masking a possible drop in power. While temperature still favors the models that generate this power deficit, when polarization is taken into account it becomes apparent that these models are not favored in any way over the standard slow-roll model with a power-law power spectrum. In addition, there is another novelty: for the small angular scales (below about 8 degrees) we used not the official Planck likelihood but CamSpec, in its non-binned version, which takes into account all the individual points without averaging them. This likelihood is the result of a reworking of the Planck data by George Efstathiou and Steven Gratton after the publication of the Planck results, and it is able to use more sky, slightly improving the error bars. We wanted to use it because it is more complete and more evolved. At the moment there is no evidence in favor of these more exotic models with respect to Lambda CDM.”

Daniela Paoletti, researcher at INAF in Bologna. Credits: Daniela Paoletti

What does the future hold on this front?

“The future will be very interesting for two reasons. The first is linked to the improvement of CMB data thanks to LiteBIRD, which will allow us to study the E-mode polarization limited only by cosmic variance. Then we will have the ground-based experiments, which will trace the small and intermediate scales where these oscillations are present, and will see them more precisely than the Planck data. Furthermore, there is the large-scale structure: since this oscillation is also present at small scales, it can also be seen by an experiment like Euclid. Another possibility is to use non-Gaussianities, because the same oscillation effects that are seen in the power spectrum also appear in higher-order moments, and therefore in non-Gaussianities.”

So, for now, any significant news on the history of inflation?

“For the moment, Planck’s data tell us that they still prefer the standard model. But the prospects for the future are good: within the next ten years I expect we will really be able to start saying whether it was just a standard slow-roll model or not. For now we are like Buzz: we cannot go much beyond the room of our standard model. But let us remember that Buzz eventually found his rocket and took off for real, and so it will be for us with future data; maybe we will really go beyond slow-roll.”



Provided by INAF

Emergent Magnetic Monopoles Controlled at Room Temperature (Physics)

Three-dimensional (3D) nano-networks promise a new era in modern solid-state physics, with numerous applications in photonics, bio-medicine, and spintronics. The realization of 3D magnetic nano-architectures could enable ultra-fast and low-energy data storage devices. Due to competing magnetic interactions in these systems, magnetic charges, or magnetic monopoles, can emerge, which can be utilized as mobile, binary information carriers. Researchers at the University of Vienna have now designed the first 3D artificial spin ice lattice hosting unbound magnetic charges. The results, published in the journal npj Computational Materials, present a first theoretical demonstration that, in the new lattice, the magnetic monopoles are stable at room temperature and can be steered on demand by external magnetic fields.

Emergent magnetic monopoles are observed in a class of magnetic materials called spin ices. However, the atomic length scales and the low temperatures required for their stability limit their controllability. This led to the development of 2D artificial spin ices, in which the single atomic moments are replaced by magnetic nano-islands arranged on different lattices. The up-scaling allowed emergent magnetic monopoles to be studied on more accessible platforms. Reversing the magnetic orientation of specific nano-islands propagates the monopoles one vertex further, leaving traces behind. These traces, called Dirac strings, necessarily store energy and bind the monopoles, limiting their mobility.

Researchers around Sabri Koraltan and Florian Slanovc, led by Dieter Suess at the University of Vienna, have now designed the first 3D artificial spin ice lattice that combines the advantages of both atomic spin ices and 2D artificial spin ices.

In cooperation with the Nanomagnetism and Magnonics group at the University of Vienna and the Theoretical Division of Los Alamos National Laboratory, USA, the benefits of the new lattice were studied using micromagnetic simulations. Here, flat 2D nano-islands are replaced by magnetic ellipsoids of revolution, and a high-symmetry three-dimensional lattice is used. “Due to the degeneracy of the ground state, the tension of the Dirac strings vanishes, unbinding the magnetic monopoles,” remarks Sabri Koraltan, one of the first authors of the study. The researchers then took the study a step further: in their simulations, a magnetic monopole was propagated through the lattice by applying external magnetic fields, demonstrating its application as an information carrier in a 3D magnetic nano-network.

Sabri Koraltan adds, “We make use of the third dimension and the high symmetry of the new lattice to unbind the magnetic monopoles and move them in desired directions, almost like true electrons.” The other first author, Florian Slanovc, concludes, “The thermal stability of the monopoles at room temperature and above could lay the foundation for a groundbreaking new generation of 3D storage technologies.”

Featured image: Researchers at the University of Vienna have designed a new 3D magnetic nanonetwork, where magnetic monopoles emerge due to rising magnetic frustration among the nanoelements, and are stable at room temperature. (© Sabri Koraltan)


Publication in npj Computational Materials:
Koraltan, S., Slanovc, F., Bruckner, F. et al. Tension-free Dirac strings and steered magnetic charges in 3D artificial spin ice. npj Comput Mater 7, 125 (2021). https://doi.org/10.1038/s41524-021-00593-7


Provided by University of Vienna

Understanding the Ionisation of Proton-impacted Helium (Particle Physics)

Advanced mathematical analysis of the ionisation of a helium atom by an impacting proton has revealed where discrepancies arise between experiments and existing theoretical calculations of the process.

When an atom is impacted by a fast-moving proton, one of its orbiting electrons may be knocked away, leaving behind a positively charged ion. To understand this process, it is important for researchers to investigate the distribution of the angles at which electrons travel when they are knocked away. In a new study published in EPJ D, M. Purkait and colleagues at Ramakrishna Mission Residential College in India have clearly identified particular areas where discrepancies arise between the angular distributions predicted by theory and those measured in experiments.

The team’s results could lead to more advanced calculations of this ionisation process. In turn, improved theoretical techniques could be applied in areas as wide-ranging as plasma physics, cancer therapy, and the development of new laser technologies. With the latest experimental techniques, physicists can now accurately measure how the angular paths of emitted electrons vary, depending both on the energy of the electron and on the momentum transferred from the impacting proton. These distributions are described by calculations named ‘fully differential cross sections’ (FDCSs), which are essential for guiding theoretical models of the ionisation process. So far, however, theoretical calculations have often disagreed with experimentally obtained FDCSs in ways that are not well understood.

In their study, Purkait’s team investigated the ionisation of a helium atom by proton impact. They described the process using a ‘four-body distorted wave’ (DW-4B) approximation, which treats the impacting proton, the emitted electron, the remaining bound electron, and the helium nucleus as the four interacting bodies. With this toolset, they could approximate the deeply complex interactions involved using simpler mathematics. This allowed them to account for the behaviour of the emitted electron and the impacting proton in the electric field of the helium nucleus, and for how their wavefunctions are distorted in turn. By comparing their results with FDCSs measured in recent experiments, the team found reasonable agreement at high impact energies. Clear discrepancies arose only for higher values of proton-electron momentum transfer and for intermediate-energy electrons. The team now hopes their results could lead to improvements in theoretical techniques in future research.


References: Jana, D., Samaddar, S., Purkait, K. et al. Fully differential cross sections for single ionization of helium by proton impact. Eur. Phys. J. D 75, 164 (2021). https://doi.org/10.1140/epjd/s10053-021-00160-1


Provided by Springer

Using Particle Accelerators to investigate the Quark-gluon Plasma of the Infant Universe (Cosmology)

In the early stages of the Universe, quarks and gluons were quickly confined into protons and neutrons, which went on to form atoms. With particle accelerators reaching ever higher energies, the opportunity to study this fleeting primordial state of matter has finally arrived.

Quark-Gluon Plasma (QGP) is a state of matter that existed only for the briefest of times at the very beginning of the Universe, before its constituent particles clumped together to form the protons and neutrons that make up everyday matter. The challenge of understanding this primordial state of matter falls to physicists operating the world’s most powerful particle accelerators. A new special issue of EPJ Special Topics entitled ‘Quark-Gluon Plasma and Heavy-Ion Phenomenology’, edited by Munshi G. Mustafa, Saha Institute of Nuclear Physics, Kolkata, India, brings together seven papers that detail our understanding of QGP and of the processes that transformed it into the baryonic matter that surrounds us today.

“Quark-Gluon Plasma is the strongly interacting deconfined matter which existed only briefly in the early universe, a few microseconds after the Big Bang,” says Mustafa. “The discovery and characterisation of the properties of QGP remain some of the best orchestrated international efforts in modern nuclear physics.” Mustafa highlights Heavy Ion Phenomenology as providing a very reliable tool to determine the properties of QGP and in particular, the dynamics of its evolution and cooling.

Improvements at colliders such as the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) have radically increased the energies that can be attained in collisions of heavy nuclei at near-light speeds, bringing them in line with those of the infant Universe. In addition, future experiments at the Facility for Antiproton and Ion Research (FAIR) and at the Nuclotron-based Ion Collider fAcility (NICA) will generate a wealth of data on QGP and the conditions in the early Universe.

“This collection is so timely as it calls for a better theoretical understanding of particle properties of hot and dense deconfined matter, which reflect both static and dynamical properties of QGP,” explains Mustafa. “This improved theoretical understanding of Quark-Gluon Plasma and Heavy Ion Phenomenology is essential for uncovering the properties of the putative QGP which occupied the entire universe, a few microseconds after Big Bang.”

Mustafa points out that this improved understanding should also open the doorway to understanding the equation of state of this strongly interacting matter and prepare the platform to explore the theory of quark-hadron transition and the possible thermalisation of the QGP. This could in turn help us understand the steps that led from QGP to the everyday baryonic matter that surrounds us.

“The quarks and gluons which formed the neutrons and protons were confined into them, a few microseconds after the Big Bang,” concludes Mustafa. “This is the first time when we have seen them being liberated from their eternal confinement!”

All articles are available here and are freely accessible until 12 September 2021. For further information read the Editorial.

Featured image: Girolamo Sferrazza Papa | Getty Images


References: Mustafa, M. G., ‘Quark-Gluon Plasma and Heavy-ion Phenomenology’ Eur. Phys. J. Spec. Top. 230, 603–605 (2021). https://doi.org/10.1140/epjs/s11734-021-00018-y


Provided by Springer