Tag Archives: #entropy

What’s The Temperature Of A Wormhole? (Cosmology / Astronomy)

Hong and Kim studied the (3+1)-dimensional Morris-Thorne wormhole to investigate its higher-dimensional embedding structure and thermodynamic properties. They showed that the wormhole can be embedded in a (5+2)-dimensional global embedding Minkowski space. This embedding enabled them to construct the wormhole entropy and temperature by exploiting the Unruh effect.

There has been considerable discussion of the theoretical existence of wormhole geometries since Morris and Thorne (MT) proposed the possibility of a traversable wormhole, through which observers could travel between two universes as a shortcut. According to the Einstein field equations, the MT wormhole requires exotic matter, which violates the weak energy condition.

On the other hand, it has been discovered that the thermodynamics of higher-dimensional black holes can often be interpreted in terms of lower-dimensional black hole solutions. In fact, a slightly modified version of the (2+1)-dimensional Banados-Teitelboim-Zanelli black hole yields a solution of string theory, the so-called black string. Since thermal Hawking effects on a curved manifold can be studied as Unruh effects in a higher-dimensional flat spacetime, several authors following the global embedding Minkowski space (GEMS) approach have recently shown that it yields a unified derivation of temperature for various curved manifolds in (2+1) and (3+1) dimensions. Moreover, the MT wormhole has itself been described in terms of the geometry of its embedding profile surface.
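For orientation, the quantity being carried across the embedding is the standard Unruh temperature (a textbook relation that the GEMS programme exploits, not a result specific to this paper): an observer with proper acceleration a in the flat embedding space detects a thermal bath at

\[
T_U = \frac{\hbar\, a}{2\pi c\, k_B},
\]

so once the curved wormhole geometry is embedded in a higher-dimensional Minkowski space, computing the embedded observer's acceleration immediately assigns a temperature to the original spacetime.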

In their paper, Hong and Kim analyzed the geometry of the MT wormhole manifold to construct its higher-dimensional flat embeddings, which they showed to be related to the embedding profile surface geometry of the wormhole. Within these GEMS embeddings, they investigated the Hawking temperature and entropy via the Unruh effect and proposed the possibility of a “negative temperature” associated with the “exotic matter.”

They thus showed that, on these flat embedding geometries, the wormhole temperature takes negative (positive) values inside (outside) the exotic matter distribution, which is accumulated mostly around the wormhole shape radius, and that the lower bound on the wormhole entropy is twice the throat area of the wormhole.


Reference: Soon-Tae Hong, Sung-Won Kim, “Can wormholes have negative temperatures?”, Modern Physics Letters A, Vol. 21, No. 10, pp. 789-793 (2006). https://www.worldscientific.com/doi/10.1142/S0217732306019839


Copyright of this article totally belongs to our author S. Aman. One is allowed to reuse it only by giving proper credit either to him or to us.

What Stops Flows in Glassy Materials? (Physics / Material Science)

Various glass materials have been essential to the development of modern civilization because of their advantageous properties. Specifically, glasses have a liquid-like disordered structure but solid-like mechanical properties. This leads to one of the central mysteries of glasses: “Why don’t glasses flow like liquids?” The question is so important that the journal Science selected it in 2005 as one of 125 key unanswered scientific questions, and as one of 11 important unsolved problems in physics.

Spatial correlations between slow-dynamics (red ellipses) and low-structural-entropy (light blue) regions in translational and rotational motion of colloidal ellipsoids with different aspect ratios. Scale bar: 20 μm. (Image by WANG Yuren)

We can hardly observe the movements of atoms at a ~0.1-nanometer length scale and a ~1-nanosecond time scale. Fortunately, however, scientists have found that colloidal systems show phase behavior similar to that of atomic systems. Colloids can be regarded as big “atoms” that reveal microscopic information about phase transitions that can’t easily be obtained from atomic materials.

In the past decade, colloidal glasses have drawn a lot of interest, resulting in numerous important discoveries. However, most of these studies are about spherical particles that tend to form local or intermediate-range crystalline structures. Unfortunately, such studies are not broadly applicable since most glasses are not composed of spheres and have no crystalline structure. 

To counter this problem, researchers from the Institute of Mechanics of the Chinese Academy of Sciences and Hong Kong University of Science and Technology recently conducted experimental studies for the first time on glassy systems composed of nonspherical particles. 

The researchers found that the monolayers of monodisperse ellipsoids are good glass formers and do not form local crystalline structures. Thus, they provide an ideal and general system for detecting the structural origin of slowing dynamics as the glass transition is approached. 

In fact, glass formers show strong dynamic heterogeneity, i.e., some regions move fast while others move slowly. The results show that regions with low structural entropy correspond well with slow dynamics, whereas fast-relaxing (flowing) regions have high structural entropy.

In glasses composed of spherical particles, certain polyhedral structures have usually been regarded as responsible for the slow dynamics. However, any given type of polyhedron exists only in particular systems of spheres. Structural entropy, by contrast, measures the level of disorder of a structure and subsumes these specific local motifs, e.g., the various polyhedra that occur in systems of spheres. Low structural entropy is therefore a general structural signature of slow dynamics in glassy matter, one that holds in systems composed of both spheres and non-spheres.
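As a rough illustration of the kind of quantity involved (a sketch only — the authors' exact local-entropy estimator is not reproduced here), structural entropy in colloidal monolayers is often approximated by the two-body excess entropy computed from the pair correlation function g(r):

```python
import numpy as np

def pair_entropy_2d(r, g, rho):
    """Two-body excess entropy per particle (in units of k_B) for a 2D system.

    r   : radial bins
    g   : pair correlation function g(r) sampled on those bins
    rho : number density of particles (per unit area)
    """
    # Avoid log(0) where g(r) vanishes inside the excluded core.
    g_safe = np.where(g > 0, g, 1e-12)
    integrand = (g * np.log(g_safe) - g + 1.0) * r
    return -np.pi * rho * np.trapz(integrand, r)

# Toy usage: a liquid-like g(r) with a damped oscillation (illustrative only).
r = np.linspace(0.01, 10.0, 2000)
g = 1.0 + np.exp(-r / 2.0) * np.cos(2.0 * np.pi * r)
g[r < 1.0] = 0.0          # hard-core exclusion below one particle diameter
print(pair_entropy_2d(r, g, rho=0.8))
```

An ideal gas (g = 1 everywhere) gives zero; more structured local packing gives more negative values, which is the sense in which low structural entropy accompanies slow dynamics.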

In addition, the researchers observed Ising-like critical behavior in both the static structure and the slow dynamics at an ideal glass transition point. Such behavior is a quantitative signature of a thermodynamic transition, and it helps settle whether the glass transition is purely dynamic or thermodynamic (structural) in origin, since glasses have no ordering structures to track.

“The observation of critical behaviors in ellipsoid glasses provides much more solid quantitative evidence of the thermodynamic nature of glass transition,” said WANG Yuren, corresponding author of the study. “The results shed new light on both the mysteries of glass theory and designing materials with high stability and glass forming ability.”

Reference: Zhongyu Zheng, Ran Ni, Yuren Wang and Yilong Han, “Translational and rotational critical-like behaviors in the glass transition of colloidal ellipsoid monolayers”, Science Advances, 15 Jan 2021, Vol. 7, no. 3, eabd1958. DOI: 10.1126/sciadv.abd1958 https://advances.sciencemag.org/content/7/3/eabd1958/tab-article-info

Provided by Chinese Academy of Sciences

Accelerated Expansion of the Universe Is Due To Spacetime Vorticity (Cosmology /Astronomy)

Babur Mirza and colleagues presented a general relativistic mechanism for accelerated cosmic expansion and the Hubble constant. They showed that spacetime vorticity coupled to the magnetic field density in galaxies causes the galaxies to recede from one another at a rate equal to the Hubble constant.

Accelerated expansion of the universe, as observed, for example, in cosmological redshift measurements using type Ia supernovae (SNe Ia) as standard candles, implies the need for an expansion energy effective at least up to the Mpc scale. A number of independent observations (including the SNe Ia redshifts, measurements of the Hubble constant, the cosmic microwave background (CMB), baryon acoustic oscillations, and various other cosmological probes) have measured the contributions of matter and the cosmological constant to the energy density of the universe, providing an accurate measurement of the cosmic acceleration. However, the amount of energy required for this acceleration implies a hidden or dark form of energy approximately three times the observed gravitational mass-energy density of the universe.

Within Einstein’s general theory of relativity, the observed expansion rate can be accounted for by including a cosmological constant, whose origin remains somewhat mysterious. In this context various mechanisms have been postulated, including new forms of hypothetical particles and modifications of the Newtonian-Einsteinian law of gravitation at large distances, among others. However, these theories are specialized in the sense that they fail to account for other observed features of the universe, such as the high degree of isotropy of the CMB, or even for some features of the expansion, such as the correct value of the Hubble constant.

Now, Mirza and colleagues have shown that a specific form of the cosmological constant, and hence cosmic acceleration, can be described by spacetime vorticity generated by galactic rotations. They showed that this vorticity, coupled to the local (galactic) magnetic field, provides the requisite push (repulsive energy) causing the individual galaxies to recede at an accelerated rate. They are therefore led to an oscillatory universe, in which the expansion and, conversely, contraction rates are determined by local spacetime vorticity rather than by the global geometry (curvature) of spacetime.

“To recapitulate we remark that, within the above model of the accelerated expansion of the universe, local spacetime vorticity and magnetic field energy generation within galaxies and galactic clusters act as the feedback mechanism for expansion. Thus, contrary to some recent suggestions that accelerated expansion must imply a violation of the law of conservation of energy, we see that energy conservation remains strictly valid not only locally but globally as well. The continued universal acceleration depends on the energy generation within galaxies, which in turn is determined by accretion rate in galactic nuclei,” said Mirza.

Friends, the conversion of matter-energy density into magnetic field energy under such conditions can only take a finite amount of time, so the magnetic-field-driven acceleration cannot continue indefinitely for a finite total mass. Since the acceleration aB ∼ B′², where B′² is the magnetic energy density per unit volume, the authors find that, as the feedback magnetic field decreases, the universal acceleration will gradually decline after reaching a maximum. With the decrease of magnetic energy generation via accretion, a gradual deceleration under gravitational attraction is likely to cause cosmic contraction. They therefore propose an oscillatory universe, in which magneto-vorticity coupling, rather than global spacetime curvature, drives the expansion and contraction phases.

According to Mirza, “A very high degree of entropy must have existed at the early stage of the universe, as inferred from the Planckian shape of the CMB radiation. This raises a paradox for other cosmological models, since entropy should decrease closer to the initial singularity (big bang). Our model implies that this must be so because the expansion started before the cross-over R = Rs, where Rs is the Schwarzschild singularity. Subsequently, as this expansion (inflation) stops and matter formation starts, expansion under spacetime vorticity must cause matter entropy to gradually increase with time. As deduced in our paper, this explains the high isotropy and the Planckian profile of the CMB spectrum, carrying the imprint of this initial inflation over R > Rs.”

Reference: Babur M. Mirza, “Can accelerated expansion of the universe be due to spacetime vorticity?”, Modern Physics Letters A, Vol. 33, No. 40, 1850240 (2018). https://www.worldscientific.com/doi/abs/10.1142/S0217732318502401 https://doi.org/10.1142/S0217732318502401

Copyright of this article totally belongs to our author S. Aman. One is allowed to reuse it only by giving proper credit either to him or to us.

Entropy Production Gets a System Update (Physics)

Nature is not homogeneous. Most of the universe is complex and composed of various subsystems — self-contained systems within a larger whole. Microscopic cells and their surroundings, for example, can be divided into many different subsystems: the ribosome, the cell wall, and the intracellular medium surrounding the cell.

(Image: Pete Linforth/Pixabay)

The Second Law of Thermodynamics tells us that the average entropy of a closed system in contact with a heat bath — roughly speaking, its “disorder”— always increases over time. Puddles never refreeze back into the compact shape of an ice cube and eggs never unbreak themselves. But the Second Law doesn’t say anything about what happens if the closed system is instead composed of interacting subsystems.

New research by SFI Professor David Wolpert published in the New Journal of Physics considers how a set of interacting subsystems affects the second law for that system.

“Many systems can be viewed as though they were composed of subsystems. So what? Why actually analyze them as such, rather than as just one overall monolithic system, which we already have the results for,” Wolpert asks rhetorically.

The reason, he says, is that if you consider something as many interacting subsystems, you arrive at a “stronger version of the second law,” which has a nonzero lower bound for entropy production that results from the way the subsystems are connected. In other words, systems made up of interacting subsystems have a higher floor for entropy production than a single, uniform system.

All entropy that is produced corresponds to heat that needs to be dissipated, and therefore to energy that needs to be consumed. So a better understanding of how subsystem networks affect entropy production could be very important for understanding the energetics of complex systems, such as cells, organisms, or even machinery.

Wolpert’s work builds on another of his recent papers, which also investigated the thermodynamics of subsystems. In both cases, Wolpert uses graphical tools to describe interacting subsystems.

For example, consider the probabilistic connections between three subsystems — the ribosome, the cell wall, and the intracellular medium.

Like a little factory, the ribosome produces proteins that exit the cell and enter the intracellular medium. Receptors on the cell wall can detect proteins in the intracellular medium. The ribosome directly influences the intracellular medium but only indirectly influences the cell wall receptors. Somewhat more mathematically: A affects B and B affects C, but A doesn’t directly affect C.
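A toy sketch of that dependency structure (purely illustrative — the binary states, noise level, and update rule below are assumptions, not Wolpert's model) might look like this in Python:

```python
import random

# Hypothetical three-subsystem chain: the ribosome (A) influences the
# intracellular medium (B), and the medium influences the cell-wall
# receptors (C). There is no direct A -> C edge.

def step(state, flip=0.1):
    a, b, c = state
    a_new = a if random.random() > flip else 1 - a   # A evolves on its own
    b_new = a if random.random() > flip else 1 - a   # B copies A, noisily
    c_new = b if random.random() > flip else 1 - b   # C copies B, noisily (never reads A)
    return (a_new, b_new, c_new)

state = (1, 0, 0)
for _ in range(5):
    state = step(state)
    print(state)
```

The point is only the wiring: C never reads A directly, and it is exactly this kind of restriction that, in Wolpert's analysis, raises the floor on entropy production.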

Why would such a subsystem network have consequences for entropy production?

“Those restrictions — in and of themselves — result in a strengthened version of the second law where you know that the entropy has to be growing faster than would be the case without those restrictions,” Wolpert says.

A must use B as an intermediary, so it is restricted from acting directly on C. That restriction is what leads to a higher floor on entropy production.

Plenty of questions remain. The current result doesn’t consider the strength of the connections between A, B, and C — only whether they exist. Nor does it tell us what happens when new subsystems with certain dependencies are added to the network. To answer these and more, Wolpert is working with collaborators around the world to investigate subsystems and entropy production. “These results are only preliminary,” he says.

Reference: David H. Wolpert, “Minimal entropy production rate of interacting systems”, New Journal of Physics, Volume 22, 1 November 2020. https://iopscience.iop.org/article/10.1088/1367-2630/abc5c6

Provided by Santa Fe Institute

New Research Explores The Thermodynamics Of Off-equilibrium Systems (Physics)

Arguably, almost all truly intriguing systems are ones that are far away from equilibrium — such as stars, planetary atmospheres, and even digital circuits. But, until now, systems far from thermal equilibrium couldn’t be analyzed with conventional thermodynamics and statistical physics.

When physicists first explored thermodynamics and statistical physics during the 1800s, and through the 1900s, they focused on analyzing physical systems that are at or near equilibrium. Conventional thermodynamics and statistical physics have also focused on macroscopic systems, which contain few, if any, explicitly distinguished subsystems.

In a paper published in the journal Physical Review Letters, SFI Professor David Wolpert presents a new hybrid formalism to overcome all of these limitations.

Fortunately, at the turn of the millennium, “a formalism now known as nonequilibrium statistical physics was developed,” says Wolpert. “It applies to systems that are arbitrarily far away from equilibrium and of any size.”

Nonequilibrium statistical physics is so powerful that it has resolved one of the deepest mysteries about the nature of time: how does entropy evolve within an intermediate regime? This is the space between the macroscopic world, where the second law of thermodynamics tells us that it must always increase, and the microscopic world where it can’t ever change.

We now know it’s only the expected entropy of a system that can’t decrease with time. “There’s always a non-zero probability that any particular sample of the dynamics of a system will result in decreasing entropy — and the probability of shrinking entropy grows as the system gets smaller,” he says.
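The standard fluctuation-theorem statement behind this claim (a general result of stochastic thermodynamics, not specific to Wolpert's papers) makes the trade-off quantitative: trajectories that destroy a given amount of entropy are exponentially rarer than trajectories that produce the same amount,

\[
\frac{P(\Delta S_{\mathrm{tot}} = -A)}{P(\Delta S_{\mathrm{tot}} = +A)} = e^{-A/k_B}.
\]

Because a small system produces very little entropy per trajectory, the suppression factor stays close to one, so entropy-decreasing runs remain relatively common at small scales.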

At the same time that this revolution in statistical physics was occurring, major advances involving so-called graphical models were being made within the machine learning community.

In particular, the formalism of Bayesian networks was developed, which provides a method to specify systems with many subsystems that interact probabilistically with each other. Bayes nets can be used to formally describe the synchronous evolution of the elements of a digital circuit — fully accounting for noise within that evolution.
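For readers unfamiliar with the formalism, here is a minimal, self-contained sketch of a Bayes net for one noisy circuit element (the gate, its error rate, and the input prior are illustrative assumptions, not taken from the paper):

```python
import random

# Two binary subsystems: an input bit X and a noisy NOT gate Y, where
# P(Y = NOT x | X = x) = 0.95 and the gate errs with probability 0.05.

P_X = {0: 0.5, 1: 0.5}                       # prior over the input bit
P_Y_given_X = {0: {0: 0.05, 1: 0.95},        # conditional probability table
               1: {0: 0.95, 1: 0.05}}

def sample():
    x = 0 if random.random() < P_X[0] else 1
    y = 0 if random.random() < P_Y_given_X[x][0] else 1
    return x, y

samples = [sample() for _ in range(10_000)]
error_rate = sum(1 for x, y in samples if y == x) / len(samples)
print(f"empirical gate error rate: {error_rate:.3f}")   # should be close to 0.05
```

Scaling this picture up to many interacting gates, each with its own conditional probability table, is what lets the formalism describe the synchronous, noisy evolution of a whole circuit.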

Wolpert combined these advances into a hybrid formalism, which is allowing him to explore thermodynamics of off-equilibrium systems that have many explicitly distinguished subsystems coevolving according to a Bayes net.

As an example of the power of this new formalism, Wolpert derived results showing the relationship between three quantities of interest in studying nanoscale systems like biological cells: the statistical precision of any arbitrarily defined current within the subsystem (such as the probabilities that the currents differ from their average values), the heat generated by running the overall Bayes net composed of those subsystems, and the graphical structure of that Bayes net.

“Now we can start to analyze how the thermodynamics of systems ranging from cells to digital circuits depend on the network structures connecting the subsystems of those systems,” says Wolpert.

Reference: David H. Wolpert, “Uncertainty Relations and Fluctuation Theorems for Bayes Nets”, Phys. Rev. Lett. 125, 200602 (10 November 2020). https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.125.200602

Provided by Santa Fe Institute

AI Speeds Up Development Of New High-entropy Alloys (Material Science)

Developing new materials takes a lot of time, money, and effort. Recently, a POSTECH research team has taken a step closer to creating new materials by applying AI to the development of high-entropy alloys (HEAs), which have been dubbed the “alloy of alloys.”

Applying AI to the development of high-entropy alloys (HEAs), dubbed the “alloy of alloys.” ©Seungchul Lee (POSTECH)

A joint research team led by Professor Seungchul Lee, Ph.D. candidate Soo Young Lee, Professor Hyungyu Jin and Ph.D. candidate Seokyeong Byeon of the Department of Mechanical Engineering along with Professor Hyoung Seop Kim of the Department of Materials Science and Engineering have together developed a technique for phase prediction of HEAs using AI. The findings from the study were published in the latest issue of Materials and Design, an international journal on materials science.

Metal materials are conventionally made by mixing a principal element, chosen for the desired property, with two or three auxiliary elements. In contrast, HEAs are made with equal or similar proportions of five or more elements, without a principal element. The number of alloys that can be made this way is theoretically infinite, and they can have exceptional mechanical, thermal, physical, and chemical properties. Alloys resistant to corrosion or to extremely low temperatures, as well as high-strength alloys, have already been discovered.

However, until now, the design of new high-entropy alloys was based on trial and error, requiring a great deal of time and money. It was even more difficult to determine in advance the phase and the mechanical and thermal properties of the high-entropy alloy being developed.

To address this, the joint research team focused on developing HEA prediction models with enhanced phase-prediction accuracy and explainability using deep learning. They applied deep learning from three perspectives: model optimization, data generation, and parameter analysis. In particular, the focus was on building a data-augmentation model based on a conditional generative adversarial network. This allowed the AI models to reflect samples of HEAs that have not yet been discovered, improving phase-prediction accuracy compared to conventional methods.
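The general shape of such a data-augmentation model can be sketched as follows. This is a hedged illustration only: the feature set, network sizes, phase labels, and training details of the POSTECH model are not specified here, so everything below is an assumption made for the sake of the example.

```python
import torch
import torch.nn as nn

N_FEATURES = 8      # composition-derived alloy descriptors (assumed)
N_PHASES = 3        # e.g. FCC / BCC / multi-phase labels (assumed)
LATENT_DIM = 16

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + N_PHASES, 64), nn.ReLU(),
            nn.Linear(64, N_FEATURES))

    def forward(self, z, label):
        # Condition the generator on the desired phase label.
        return self.net(torch.cat([z, label], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES + N_PHASES, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x, label):
        return self.net(torch.cat([x, label], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

# One toy training step on random stand-in data (real alloy data not included here).
real_x = torch.randn(32, N_FEATURES)
label = torch.nn.functional.one_hot(
    torch.randint(0, N_PHASES, (32,)), N_PHASES).float()

# Discriminator step: distinguish real descriptors from generated ones.
fake_x = G(torch.randn(32, LATENT_DIM), label).detach()
loss_d = (bce(D(real_x, label), torch.ones(32, 1)) +
          bce(D(fake_x, label), torch.zeros(32, 1)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: produce descriptors the discriminator accepts for that phase label.
loss_g = bce(D(G(torch.randn(32, LATENT_DIM), label), label), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(float(loss_d), float(loss_g))
```

The idea is that a generator conditioned on a desired phase label learns to emit plausible alloy descriptors, which can then be mixed into the training set for the phase classifier.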

In addition, the research team developed a descriptive AI-based HEA phase prediction model to provide interpretability to deep learning models, which otherwise act as black boxes, while also providing guidance on the key design parameters for creating HEAs with particular phases.

“This research is the result of drastically improving the limitations of existing research by incorporating AI into HEAs that have recently been drawing much attention,” remarked Professor Seungchul Lee. He added, “It is significant that the joint research team’s multidisciplinary collaboration has produced the results that can accelerate AI-based fabrication of new materials.”

Professor Hyungyu Jin also added, “The results of the study are expected to greatly reduce the time and cost required for the existing new material development process, and to be actively used to develop new high-entropy alloys in the future.”

Reference: http://dx.doi.org/10.1016/j.matdes.2020.109260

Provided by POSTECH

Solved: The Mystery Of How Dark Matter In Galaxies Is Distributed (Astronomy)

The gravitational force under which the Universe has evolved — from an almost uniform state at the Big Bang to the present, when matter is concentrated in galaxies, stars, and planets — is provided by what is termed ‘dark matter’. But in spite of the essential role this extra material plays, we know almost nothing about its nature, behaviour, and composition, which is one of the basic problems of modern physics. In a recent article in Astronomy & Astrophysics Letters, scientists at the Instituto de Astrofísica de Canarias (IAC)/University of La Laguna (ULL) and at the National University of the North-West of the Province of Buenos Aires (Junín, Argentina) have shown that the dark matter in galaxies follows a ‘maximum entropy’ distribution, which sheds light on its nature.

Dark matter in two galaxies simulated on a computer. The only difference between them is the nature of dark matter. Without collisions on the left and with collisions on the right. The work suggests that dark matter in real galaxies looks more like the image on the right, less clumpy and more diffuse than the one on the left. The circle marks the end of the galaxy. Image taken from the article Brinckmann et al. (2018, Monthly Notices of the Royal Astronomical Society, 474, 746; https://doi.org/10.1093/mnras/stx2782).

Dark matter makes up 85% of the matter of the Universe, but its existence shows up only on astronomical scales. That is to say, because its interactions are weak, its net effect can be noticed only when it is present in huge quantities. And as it cools down only with difficulty, the structures it forms are generally much bigger than planets and stars. Since dark matter shows up only on large scales, the discovery of its nature will probably have to come from astrophysical studies.

MAXIMUM ENTROPY

To say that the distribution of dark matter is organized according to maximum entropy (which is equivalent to ‘maximum disorder’ or ‘thermodynamic equilibrium’) means that it is found in its most probable state. To reach this ‘maximum disorder’ the dark matter must have had to collide within itself, just as gas molecules do, so as to reach equilibrium in which its density, pressure, and temperature are related. However, we do not know how the dark matter has reached this type of equilibrium.
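In textbook terms (standard statistical mechanics, not the specific profile derived in the new paper), maximizing the Boltzmann-Gibbs entropy of the particle distribution function at fixed mass and energy selects the Maxwell-Boltzmann form, from which an ideal-gas-like relation between density, pressure, and temperature follows:

\[
S[f] = -k_B \int f \ln f \,\mathrm{d}^3x\,\mathrm{d}^3v, \qquad
f \;\propto\; \exp\!\left[-\frac{\tfrac{1}{2} m v^2 + m\,\Phi(\mathbf{x})}{k_B T}\right], \qquad
p = \frac{\rho\, k_B T}{m}.
\]

The puzzle the authors point to is how essentially collisionless dark matter manages to relax into anything resembling this state.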

“Unlike the molecules in the air, for example, because gravitational action is weak, dark matter particles ought hardly to collide with one another, so that the mechanism by which they reach equilibrium is a mystery”, says Jorge Sánchez Almeida, an IAC researcher who is the first author of the article. “However if they did collide with one another this would give them a very special nature, which would partly solve the mystery of their origin”, he adds.

The maximum-entropy behaviour of dark matter has been detected in dwarf galaxies, which have a higher ratio of dark matter to total matter than more massive galaxies do, so the effect is easier to see in them. However, the researchers expect it to be general behaviour in all types of galaxies.

The study implies that the distribution of matter in thermodynamic equilibrium has a much lower central density than astronomers have assumed for many practical applications, such as the correct interpretation of gravitational lenses, or the design of experiments to detect dark matter through its self-annihilation.

This central density is basic to the correct interpretation of the curvature of light by gravitational lenses: if the centre is less dense, the effect of the lens is weaker. To use a gravitational lens to measure the mass of a galaxy one needs a model; if that model is changed, the measurement changes.

The central density is also very important for experiments that try to detect dark matter through its self-annihilation. Two dark matter particles could interact and disappear in a process which is highly improbable, but which would be characteristic of their nature. For two particles to interact they must collide, and the probability of such a collision depends on the density of the dark matter: the higher the concentration of dark matter, the higher the probability that the particles will collide.
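In symbols (a generic scaling argument rather than a result of this study), the annihilation rate per unit volume grows as the square of the local number density,

\[
R \;\sim\; \tfrac{1}{2}\, n^{2}\,\langle \sigma v \rangle \;\propto\; \rho^{2},
\]

so a central density a few times lower than assumed suppresses the expected signal by roughly the square of that factor.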

“For that reason, if the density changes so will the expected rate of production of the self-annihilations, and given that the experiments are designed on the prediction of a given rate, if this rate were very low the experiment is unlikely to yield a positive result”, says Sánchez Almeida.

Finally, thermodynamic equilibrium for dark matter could also explain the brightness profiles of galaxies. This brightness falls off with distance from the centre of a galaxy in a specific way whose physical origin is unknown; the researchers are working to show that it is the result of equilibrium at maximum entropy.

SIMULATION VERSUS OBSERVATION

The density of dark matter in the centres of galaxies has been a mystery for decades. There is a strong discrepancy between the predictions of the simulations (a high density) and that which is observed (a low value). Astronomers have put forward many types of mechanisms to resolve this major disagreement.

In this article, the researchers have shown, using basic physical principles, that the observations can be reproduced on the assumption that the dark matter is in equilibrium, i.e., that it has maximum entropy. The consequences of this result could be very important, because they indicate that the dark matter has exchanged energy with itself and/or with the remaining “normal” (baryonic) matter.

“The fact that equilibrium has been reached in such a short time, compared with the age of the Universe, could be the result of a type of interaction between dark matter and normal matter in addition to gravity”, suggests Ignacio Trujillo, an IAC researcher and a co-author of this article. “The exact nature of this mechanism needs to be explored, but the consequences could be fascinating to understand just what is this component which dominates the total amount of matter in the Universe”.

Reference: Jorge Sánchez Almeida, Ignacio Trujillo and Ángel Ricardo Plastino, “The principle of maximum entropy explains the cores observed in the mass distribution of dwarf galaxies”, A&A Letters (2020). DOI: https://doi.org/10.1051/0004-6361/202039190

Provided by IAC

RUDN University Ecologists Developed New Models To Identify Environmental Pollution Sources (Nature)

According to a team of ecologists from RUDN University, polycyclic aromatic hydrocarbons (PAHs) can be used as pollution indicators and help monitor the movement of pollutants in environmental components such as soils, plants, and water. To find this out, the team conducted a large-scale study of a variety of soil, water, and plant samples collected from a vast area from China to the Antarctic. The results of the study were published in the Applied Geochemistry journal.

According to a team of ecologists from RUDN University, polycyclic aromatic hydrocarbons (PAHs) can be used as pollution indicators and help monitor the movement of pollutants in environmental components such as soils, plants, and water. To find this out, the team conducted a large-scale study of a variety of soil, water, and plant samples collected from a vast area from China to the Antarctic. ©RUDN University.

Geochemical barriers mark the borders between natural environments at which the nature of element transfer changes dramatically. For example, the concentration of oxygen rapidly increases at groundwater outlets, so different chemical elements oxidize and accumulate at the barrier. A team of ecologists from RUDN University was the first in the world to suggest a model that describes the energy of mass transfer, i.e. the movement of matter in an ecosystem. In this model, polycyclic aromatic hydrocarbons (PAHs) are used as markers of the moving substances. PAHs are mostly toxic organic substances that accumulate in the soil. The team used their composition to monitor pollution and track down its sources. To do so, the ecologists calculated the physical and chemical properties of PAHs and classified them.

“We developed a model that shows the accumulation, transformation, and migration of PAHs. It is based on quantitative measurements that produce more consistent results than descriptive visualizations. This helped us understand how physical and chemical properties of PAHs determine their accumulation in the environment,” said Prof. Aleksander Khaustov, a PhD in Geology and Mineralogy, from the Department of Applied Ecology at RUDN University.

PAHs can form due to natural causes (e.g. wildfires) or as a result of human activity, for example as waste products of the chemical and oil industries. The team studied 142 water, plant, soil, and silt samples from different geographical regions: some samples were taken in the hydrologic systems of the Kerch Peninsula, others came from leather-industry areas in China, from the vicinity of the Irkutsk aluminum smelter, and from different regions of the Arctic and Antarctic. Several snow samples were taken on the RUDN University campus in Moscow. All collected data were unified, and the amount of PAHs in each sample was calculated. The results were then analyzed within the framework of thermodynamic theory to calculate entropy, enthalpy, and Gibbs energy variations. The first value describes the deviation of an actual process from the ideal one, the second shows the amount of energy released or consumed, and the third points to the possibility of mass transfer.
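The three quantities are tied together by the textbook Gibbs relation (general thermodynamics, not something specific to this model),

\[
\Delta G = \Delta H - T\,\Delta S,
\]

with a negative \(\Delta G\) indicating that the corresponding transfer or transformation can proceed spontaneously at temperature \(T\).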

“Though our samples were not genetically uniform, they allowed us to apply thermodynamic analysis to matter and energy transfer in natural dissipative systems,” added Prof. Aleksander Khaustov.

The team identified several factors that have the biggest impact on PAH accumulation. For example, in the ecosystems surrounding leather facilities in China, the key factor turned out to be entropy variations, while on the RUDN University campus it was changes in Gibbs energy. The team described three types of processes, characterized by the reduction, stability, or increase of all three thermodynamic parameters, respectively. Based on this classification and the composition of PAHs, one can monitor pollution and track down its source.

Reference: Aleksander Khaustov, Margarita Redina, “Fractioning of the polycyclic aromatic hydrocarbons in the components of the non-equilibrium geochemical systems (thermodynamic analysis)”, Applied Geochemistry, Volume 120, September 2020, 104684. DOI: https://doi.org/10.1016/j.apgeochem.2020.104684 https://www.sciencedirect.com/science/article/abs/pii/S0883292720301761?via%3Dihub

Provided by RUDN University

Why Doesn’t A “Reverse Microwave” For Cooling Food Exist? (Physics)

Most kids are full of questions: Why is the sky blue? Why do we have eyebrows? Why can’t we feel the planet spinning? Your grownup curiosities probably became more advanced, but if you’re like us, a few of those questions from childhood never got answered. Like this one: Why isn’t there a “reverse” microwave for cooling food? The answer is more complicated than you might think.

It’s easy to heat food in a microwave. When you pop in a bag of frozen veggies and press the start button, the microwave sends a specific frequency of radio waves into the food to excite the water molecules within. The activity, or energy level, of a group of molecules is essentially a measure of its heat: the more excited the molecules, the hotter they are.

But there’s no frequency of radio waves that can calm molecules down to make them colder. Radio waves are a type of electromagnetic radiation: an umbrella term that includes visible light, infrared, and X-rays, all of which are a form of energy. Energy excites molecules, and excited molecules are hotter.

But excited molecules aren’t just hotter — they’re also in a state of higher entropy. Entropy is basically a scientific measure of disorder, and according to that old chestnut known as the second law of thermodynamics, any process in a closed system progresses toward increasing disorder. That’s why it’s so much easier to heat food up than to cool it down: you can’t reduce the total entropy, and cold things are in a state of lower entropy than hot things.

When you put a lukewarm ice tray in your freezer, for instance, heat flows from the water to the colder air of the freezer. That may sound like decreasing entropy, but not if you take the entire fridge into account: It’s using a ton of energy to take heat out of the things inside and transfer it into the surrounding air (feel how warm the back of your fridge is!). That’s an overall increase in entropy. Meanwhile, the cold air is a poor conductor, meaning it doesn’t do a very good job at removing heat from the water. That’s why you have to wait for hours before you have solid ice cubes.
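A quick back-of-the-envelope version of that bookkeeping (illustrative temperatures assumed): if the freezer removes heat \(Q\) from water at \(T_{\mathrm{cold}} \approx 273\ \mathrm{K}\) and, using electrical work \(W\), dumps \(Q + W\) into a room at \(T_{\mathrm{room}} \approx 295\ \mathrm{K}\), the second law requires

\[
\Delta S_{\mathrm{total}} = -\frac{Q}{T_{\mathrm{cold}}} + \frac{Q + W}{T_{\mathrm{room}}} \ge 0
\quad\Longrightarrow\quad
W \ge Q\left(\frac{T_{\mathrm{room}}}{T_{\mathrm{cold}}} - 1\right) \approx 0.08\,Q,
\]

so the fridge has to spend extra energy to move the heat, and the entropy of fridge plus room still goes up overall.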

There are certain materials that can cool quickly, but they don’t lend themselves to eating. A gas cools by expansion, which is why a freshly sprayed aerosol can feels so cold. But gas isn’t all that filling, compared to solids or liquids. (Our all-helium diet is failing us, but we sound hilarious.)

There are also other cooling methods: Conduction can make heat flow from food onto some colder surface, like oysters served on ice, and convection transfers heat from one place to another in a fluid, which is how you can thaw frozen meat under running water.

But barring some dangerously cold substance like liquid nitrogen, nothing can instantly cool food the way a microwave can instantly heat it. When it comes to popsicles freezing and beer chilling, you’ll just have to wait.