Tag Archives: #thermodynamics

New Quantum Theory Heats Up Thermodynamic Research (Quantum)

Researchers have developed a new quantum version of a 150-year-old thermodynamic thought experiment that could pave the way for the development of quantum heat engines.

Mathematicians from the University of Nottingham have applied new quantum theory to the Gibbs paradox and demonstrated a fundamental difference in the roles of information and control between classical and quantum thermodynamics. Their research has been published today in Nature Communications.

The classical Gibbs paradox led to crucial insights for the development of early thermodynamics and emphasises the need to consider an experimenter’s degree of control over a system.

The research team developed a theory based on mixing two quantum gases – for example, one red and one blue, otherwise identical – which start separated and then mix in a box. Overall, the system becomes more uniform, which is quantified by an increase in entropy. If the observer then puts on purple-tinted glasses and repeats the process, the gases look the same, so it appears as if nothing changes. In this case, the entropy change is zero.
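
For reference, the textbook classical result behind this picture (quoted here for context; it is not part of the new quantum analysis) can be written as a simple entropy balance for two ideal gases of N particles each, initially separated and then mixed into the full box:

```latex
% Classical entropy of mixing (textbook result, quoted for context only).
% N is the particle number in each gas and k_B is Boltzmann's constant.
\[
  \Delta S_{\text{mix}} = 2 N k_B \ln 2 \quad \text{(red and blue gases distinguishable)},
  \qquad
  \Delta S_{\text{mix}} = 0 \quad \text{(gases identical to the observer)}
\]
```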

Quantum Gas experiment theory by Beth Morris, Maths PhD © University of Nottingham

The lead authors on the paper, Benjamin Yadin and Benjamin Morris, explain: “Our findings seem odd because we expect physical quantities such as entropy to have meaning independent of who calculates them. In order to resolve the paradox, we must realise that thermodynamics tells us what useful things can be done by an experimenter who has devices with specific capabilities. For example, a heated expanding gas can be used to drive an engine. In order to extract work (useful energy) from the mixing process, you need a device that can “see” the difference between red and blue gases.”

Classically, an “ignorant” experimenter, who sees the gases as indistinguishable, cannot extract work from the mixing process. The research shows that in the quantum case, despite being unable to tell the difference between the gases, the ignorant experimenter can still extract work by mixing them.

Considering the situation when the system becomes large, where quantum behaviour would normally disappear, the researchers found that the quantum ignorant observer can extract as much work as if they had been able to distinguish the gases. Controlling these gases with a large quantum device would behave entirely differently from a classical macroscopic heat engine. This phenomenon results from the existence of special superposition states that encode more information than is available classically.

Professor Gerardo Adesso said: “Despite a century of research, there are so many aspects we don’t know or we don’t understand yet at the heart of quantum mechanics. Such a fundamental ignorance, however, doesn’t prevent us from putting quantum features to good use, as our work reveals. We hope our theoretical study can inspire exciting developments in the burgeoning field of quantum thermodynamics and catalyse further progress in the ongoing race for quantum-enhanced technologies.

“Quantum heat engines are microscopic versions of our everyday heaters and refrigerators, which may be realised with just one or a few atoms (as already experimentally verified) and whose performance can be boosted by genuine quantum effects such as superposition and entanglement. Presently, to see our quantum Gibbs paradox played out in a laboratory would require exquisite control over the system parameters, something which may be possible in fine-tuned “optical lattice” systems or Bose-Einstein condensates – we are currently at work to design such proposals in collaboration with experimental groups.”


Reference: Yadin et al. Mixing indistinguishable systems leads to a quantum Gibbs paradox. Nat Commun 12, 1471 (2021). DOI: 10.1038/s41467-021-21620-7


Provided by University of Nottingham

New Nanostructured Alloy For Anode is a Big Step Toward Revolutionizing Energy Storage (Nanotechnology)

Researchers in the Oregon State University College of Engineering have developed a battery anode based on a new nanostructured alloy that could revolutionize the way energy storage devices are designed and manufactured.

© Zhenxing Feng, Oregon State University

The zinc- and manganese-based alloy further opens the door to replacing solvents commonly used in battery electrolytes with something much safer and inexpensive, as well as abundant: seawater.

Findings were published today in Nature Communications.

“The world’s energy needs are increasing, but the development of next-generation electrochemical energy storage systems with high energy density and long cycling life remains technically challenging,” said Zhenxing Feng, a chemical engineering researcher at OSU. “Aqueous batteries, which use water-based conducting solutions as the electrolytes, are an emerging and much safer alternative to lithium-ion batteries. But the energy density of aqueous systems has been comparatively low, and water can also react with lithium, which has further hindered aqueous batteries’ widespread use.”

A battery stores power in the form of chemical energy and through reactions converts it to the electrical energy needed to power vehicles, cellphones, laptops and many other devices and machines. A battery consists of two terminals – the anode and cathode, typically made of different materials – as well as a separator and electrolyte, a chemical medium that allows for the flow of electrical charge.

In a lithium-ion battery, as its name suggests, a charge is carried via lithium ions as they move through the electrolyte from the anode to the cathode during discharge, and back again during recharging.

“Electrolytes in lithium-ion batteries are commonly dissolved in organic solvents, which are flammable and often decompose at high operation voltages,” Feng said. “Thus there are obviously safety concerns, including with lithium dendrite growth at the electrode-electrolyte interface; that can cause a short between the electrodes.”

Dendrites resemble tiny trees growing inside a lithium-ion battery and can pierce the separator like thistles growing through cracks in a driveway; the result is unwanted and sometimes unsafe chemical reactions.

Combustion incidents involving lithium-ion batteries in recent years include a blaze on a parked Boeing 787 jet in 2013, explosions in Galaxy Note 7 smartphones in 2016 and Tesla Model S fires in 2019.

Aqueous batteries are a promising alternative for safe and scalable energy storage, Feng said. Aqueous electrolytes are cost-competitive, environmentally benign, capable of fast charging and high power densities and highly tolerant of mishandling.

Their large-scale use, however, has been hindered by a limited output voltage and low energy density (batteries with a higher energy density can store larger amounts of energy, while batteries with a higher power density can release large amounts of energy more quickly).
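
A quick worked illustration of that distinction, using hypothetical numbers that are not taken from the study: energy density sets how long a cell can run, while power density sets how fast it can deliver that energy.

```python
# Hypothetical figures, for illustration only (not from the paper).
energy_density_wh_per_kg = 250.0   # energy stored per kilogram of cell (Wh/kg)
power_density_w_per_kg = 500.0     # maximum power delivered per kilogram (W/kg)

# Running at full power, the cell empties in (energy / power) hours.
runtime_hours = energy_density_wh_per_kg / power_density_w_per_kg
print(f"Full-power runtime: {runtime_hours:.1f} h")   # -> 0.5 h
```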

But researchers at Oregon State, the University of Central Florida and the University of Houston have designed a battery anode made up of a three-dimensional “zinc-M alloy” – where M refers to manganese and other metals.

“The use of the alloy with its special nanostructure not only suppresses dendrite formation by controlling the surface reaction thermodynamics and the reaction kinetics, it also demonstrates super-high stability over thousands of cycles under harsh electrochemical conditions,” Feng said. “Zinc can also transfer twice as many charges as lithium, thus improving the energy density of the battery.

“We also tested our aqueous battery using seawater, instead of high purity deionized water, as the electrolyte,” he added. “Our work shows the commercial potential for large-scale manufacturing of these batteries.”

Feng and Ph.D. student Maoyu Wang used X-ray absorption spectroscopy and imaging to track the atomic and chemical changes of the anode in different operation stages, which confirmed how the 3D alloy was functioning in the battery.

“Our theoretical and experimental studies proved that the 3D alloy anode has unprecedented interfacial stability, achieved by a favorable diffusion channel of zinc on the alloy surface,” Feng said. “The concept demonstrated in this collaborative work is likely to bring a paradigm shift in the design of high-performance alloy anodes for aqueous and non-aqueous batteries, revolutionizing the battery industry.”

The National Science Foundation supported this research.

Reference: Tian, H., Li, Z., Feng, G. et al. Stable, high-performance, dendrite-free, seawater-based aqueous batteries. Nat Commun 12, 237 (2021). https://doi.org/10.1038/s41467-020-20334-6

Provided by OSU College of Engineering

About the OSU College of Engineering: The 10th largest engineering program based on undergraduate enrollment, the college received nearly $60 million in sponsored research awards in the 2019-20 fiscal year and is a global leader in health-related engineering, artificial intelligence, robotics, advanced manufacturing, clean water and energy, materials science, computing and resilient infrastructure. The college ranks second nationally among land grant universities and third among the nation’s 94 public R1 universities for percentage of tenured or tenure-track faculty who are women.

Entropy Production Gets a System Update (Physics)

Nature is not homogenous. Most of the universe is complex and composed of various subsystems — self-contained systems within a larger whole. Microscopic cells and their surroundings, for example, can be divided into many different subsystems: the ribosome, the cell wall, and the intracellular medium surrounding the cell.

(Image: Pete Linforth/Pixabay)

The Second Law of Thermodynamics tells us that the average entropy of a closed system in contact with a heat bath — roughly speaking, its “disorder”— always increases over time. Puddles never refreeze back into the compact shape of an ice cube and eggs never unbreak themselves. But the Second Law doesn’t say anything about what happens if the closed system is instead composed of interacting subsystems.

New research by SFI Professor David Wolpert published in the New Journal of Physics considers how a set of interacting subsystems affects the second law for that system.

“Many systems can be viewed as though they were subsystems. So what? Why actually analyze them as such, rather than as just one overall monolithic system, which we already have the results for?” Wolpert asks rhetorically.

The reason, he says, is that if you consider something as many interacting subsystems, you arrive at a “stronger version of the second law,” which has a nonzero lower bound for entropy production that results from the way the subsystems are connected. In other words, systems made up of interacting subsystems have a higher floor for entropy production than a single, uniform system.
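
Schematically, the contrast reads as follows (the notation here is ours, not the paper’s, and B simply stands for whatever bound the network structure imposes):

```latex
% Schematic restatement of the claim above; notation is illustrative, not Wolpert's.
\[
  \langle \Sigma \rangle \ge 0
  \quad\text{(ordinary second law, monolithic system)}
  \qquad\longrightarrow\qquad
  \langle \Sigma \rangle \ge B(\text{subsystem network}) > 0
  \quad\text{(strengthened bound for interacting subsystems)}
\]
```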

All entropy that is produced corresponds to heat that must be dissipated, and therefore to energy that must be consumed. So a better understanding of how subsystem networks affect entropy production could be very important for understanding the energetics of complex systems, such as cells, organisms, or even machinery.

Wolpert’s work builds off another of his recent papers which also investigated the thermodynamics of subsystems. In both cases, Wolpert uses graphical tools for describing interacting subsystems.

Consider, for example, the probabilistic connections between three subsystems — the ribosome, cell wall, and intracellular medium.

Like a little factory, the ribosome produces proteins that exit the cell and enter the intracellular medium. Receptors on the cell wall can detect proteins in the intracellular medium. The ribosome directly influences the intracellular medium but only indirectly influences the cell wall receptors. Somewhat more mathematically: A affects B and B affects C, but A doesn’t directly affect C.
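
A minimal sketch of that dependency chain, with made-up probabilities purely for illustration (none of the numbers below come from the paper), might look like this:

```python
import random

# Toy A -> B -> C chain from the article: the ribosome (A) influences the
# intracellular medium (B), and the medium influences the cell-wall receptors (C).
# A never acts on C directly. All probabilities are placeholder values.

def sample_chain() -> tuple[bool, bool, bool]:
    a = random.random() < 0.5                   # A: ribosome releases a protein
    b = random.random() < (0.9 if a else 0.1)   # B: protein present in the medium
    c = random.random() < (0.8 if b else 0.05)  # C: receptor fires; depends only on B
    return a, b, c

activations = sum(sample_chain()[2] for _ in range(10_000))
print(f"Receptor activation frequency: {activations / 10_000:.2f}")
```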

Why would such a subsystem network have consequences for entropy production?

“Those restrictions — in and of themselves — result in a strengthened version of the second law where you know that the entropy has to be growing faster than would be the case without those restrictions,” Wolpert says.

A must use B as an intermediary, so it is restricted from acting directly on C. That restriction is what leads to a higher floor on entropy production.

Plenty of questions remain. The current result doesn’t consider the strength of the connections between A, B, and C — only whether they exist. Nor does it tell us what happens when new subsystems with certain dependencies are added to the network. To answer these and more, Wolpert is working with collaborators around the world to investigate subsystems and entropy production. “These results are only preliminary,” he says.

Reference: David H. Wolpert, “Minimal entropy production rate of interacting systems”, New Journal of Physics, Volume 22, 1 November 2020. https://iopscience.iop.org/article/10.1088/1367-2630/abc5c6

Provided by Santa Fe Institute

Researchers Minimize Quantum Backaction in Thermodynamic Systems via Entangled Measurement (Quantum)

Led by academician Prof. GUO Guangcan of the Chinese Academy of Sciences (CAS), the groups of Prof. LI Chuanfeng and Prof. XIANG Guoyong at the University of Science and Technology of China (USTC), CAS, in cooperation with theoretical physicists from Germany, Italy and Switzerland, have conducted the first experiment using entangled collective measurements to minimize quantum measurement backaction in a photonic system.

The result was published online in Physical Review Letters on Nov. 16.

Conceptual design of the quantum work and its experimental realization. ©WU Kangda et al.

When an observable is measured twice on an evolving coherent quantum system, the first measurement usually changes the statistics of the second, because it breaks the quantum coherence of the system. This effect is called measurement backaction.
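
In generic terms (a standard textbook statement, not a description of this particular experiment), a projective first measurement with projectors Π_i dephases the state in the measured basis, which is what alters the statistics of the second measurement:

```latex
% Generic statement of measurement backaction (not specific to this experiment):
% a projective measurement with projectors \Pi_i maps the state
\[
  \rho \;\longrightarrow\; \sum_i \Pi_i \, \rho \, \Pi_i ,
\]
% destroying the coherences of \rho in the measured basis and therefore changing
% the statistics of any later measurement on the evolved state.
```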

Earlier theoretical work by Dr. Martí Perarnau-Llobet in 2017 showed that, without violating the basic requirements of quantum thermodynamics, measurement backaction cannot be completely avoided, but the backaction caused by projective measurements can be reduced through collective measurements.

Building on these theoretical results, Prof. XIANG and coauthors realized a quantum collective measurement and observed the reduction of measurement backaction in 2019.

Since the quantum collective measurements used in previous work were separable, a natural question arises: is there an entangled collective measurement that reduces the backaction even further?

Prof. XIANG and his theoretical collaborators studied the optimal collective measurement for a two-qubit system. They found that, in theory, an optimal entangled collective measurement exists that minimizes the backaction, and that the backaction can be suppressed to zero in the case of strongly coherent evolution.

They then designed and implemented the entangled measurement via a photonic quantum walk with fidelity up to 98.5%, and observed the reduction in the backaction of projective measurement.

This work is significant for the study of collective measurement and quantum thermodynamics. The referees described the work as representing a major advance in the field: “The experiment is well executed, as the results follow closely what one would expect from an ideal implementation. Overall, I find the article a highly interesting contribution to the topic of quantum backaction and a great combination of new theory and flawless experimental implementation.”

Reference: Kang-Da Wu, Elisa Bäumer, Jun-Feng Tang, Karen V. Hovhannisyan, Martí Perarnau-Llobet, Guo-Yong Xiang, Chuan-Feng Li, and Guang-Can Guo, “Minimizing Backaction through Entangled Measurements”, Phys. Rev. Lett. 125, 210401 – Published 16 November 2020. https://doi.org/10.1103/PhysRevLett.125.210401

Provided by University of Science and Technology of China

New Research Explores The Thermodynamics Of Off-equilibrium Systems (Physics)

Arguably, almost all truly intriguing systems are ones that are far away from equilibrium — such as stars, planetary atmospheres, and even digital circuits. But, until now, systems far from thermal equilibrium couldn’t be analyzed with conventional thermodynamics and statistical physics.

When physicists first explored thermodynamics and statistical physics during the 1800s, and through the 1900s, they focused on analyzing physical systems that are at or near equilibrium. Conventional thermodynamics and statistical physics have also focused on macroscopic systems, which contain few, if any, explicitly distinguished subsystems.

In a paper published in the journal Physical Review Letters, SFI Professor David Wolpert presents a new hybrid formalism to overcome all of these limitations.

Fortunately, at the turn of the millennium, “a formalism now known as nonequilibrium statistical physics was developed,” says Wolpert. “It applies to systems that are arbitrarily far away from equilibrium and of any size.”

Nonequilibrium statistical physics is so powerful that it has resolved one of the deepest mysteries about the nature of time: how does entropy evolve within an intermediate regime? This is the space between the macroscopic world, where the second law of thermodynamics tells us that it must always increase, and the microscopic world where it can’t ever change.

We now know it’s only the expected entropy of a system that can’t decrease with time. “There’s always a non-zero probability that any particular sample of the dynamics of a system will result in decreasing entropy — and the probability of shrinking entropy grows as the system gets smaller,” he says.
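
The standard detailed fluctuation theorem of nonequilibrium statistical physics (quoted here for context; it is not a new result of this work) makes that statement quantitative: entropy-consuming trajectories are exponentially rarer than entropy-producing ones, and the suppression is mild only when the total entropy production is small — which is exactly the situation for small systems.

```latex
% Detailed fluctuation theorem, with the entropy production \sigma measured in units of k_B:
\[
  \frac{P(\Sigma = -\sigma)}{P(\Sigma = +\sigma)} = e^{-\sigma}
\]
```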

At the same time that this revolution in statistical physics was occurring, major advances involving so-called graphical models were being made within the machine learning community.

In particular, the formalism of Bayesian networks was developed, which provides a method to specify systems with many subsystems that interact probabilistically with each other. Bayes nets can be used to formally describe the synchronous evolution of the elements of a digital circuit — fully accounting for noise within that evolution.

Wolpert combined these advances into a hybrid formalism, which is allowing him to explore thermodynamics of off-equilibrium systems that have many explicitly distinguished subsystems coevolving according to a Bayes net.

As an example of the power of this new formalism, Wolpert derived results showing the relationship between three quantities of interest in studying nanoscale systems like biological cells: the statistical precision of any arbitrarily defined current within the subsystem (such as the probabilities that the currents differ from their average values), the heat generated by running the overall Bayes net composed of those subsystems, and the graphical structure of that Bayes net.
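
For orientation, the classic single-system form of such a precision bound is the thermodynamic uncertainty relation of Barato and Seifert, shown here for context only; results of the kind described above extend bounds of this type to currents in subsystems coupled through a Bayes net, and the precise generalized bound is given in the paper itself.

```latex
% Thermodynamic uncertainty relation for a steady-state current J
% (classic single-system form, shown for orientation; not the paper's generalized bound):
\[
  \frac{\operatorname{Var}(J)}{\langle J \rangle^{2}} \;\ge\; \frac{2 k_B}{\Sigma},
\]
% where \Sigma is the total entropy production accumulated over the observation time.
```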

“Now we can start to analyze how the thermodynamics of systems ranging from cells to digital circuits depend on the network structures connecting the subsystems of those systems,” says Wolpert.

Reference: David H. Wolpert, “Uncertainty Relations and Fluctuation Theorems for Bayes Nets”, Phys. Rev. Lett. 125, 200602 – Published 10 November 2020. https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.125.200602

Provided by Santa Fe Institute

What Is An Arrow Of Time? (Quantum Mechanics)

Time appears to have a direction, to be inherently directional: the past lies behind us and is fixed and immutable, and accessible by memory or written documentation; the future, on the other hand, lies ahead and is not necessarily fixed, and, although we can perhaps predict it to some extent, we have no firm evidence or proof of it. Most of the events we experience are irreversible: for example, it is easy for us to break an egg, and hard, if not impossible, to unbreak an already broken egg. It appears inconceivable to us that this progression could go in any other direction. This one-way direction or asymmetry of time is often referred to as the arrow of time, and it is what gives us an impression of time passing, of our progressing through different moments. The arrow of time, then, is the uniform and unique direction associated with the apparent inevitable “flow of time” into the future.

The idea of an arrow of time was first explored and developed to any degree by the British astronomer and physicist Sir Arthur Eddington back in 1927, and the origin of the phrase is usually attributed to him. What interested Eddington is that exactly the same arrow of time would apply to an alien race on the other side of the universe as applies to us. It is therefore nothing to do with our biology or psychology, but with the way the universe is. The arrow of time is not the same thing as time itself, but a feature of the universe and its contents and the way it has evolved.

Is the Arrow of Time an Illusion?

Answering this requires some knowledge of relativity, or relativistic time. For beginners: according to the theory of relativity, the reality of the universe can be described by four-dimensional space-time, so that time does not actually “flow”, it just “is”. The perception of an arrow of time that we have in our everyday life therefore appears to be nothing more than an illusion of consciousness in this model of the universe, an emergent quality that we happen to experience due to our particular kind of existence at this particular point in the evolution of the universe.

Perhaps even more interesting and puzzling is the fact that, although events and processes at the macroscopic level – the behaviour of bulk materials that we experience in everyday life – are quite clearly time-asymmetric (i.e. natural processes DO have a natural temporal order, and there is an obvious forward direction of time), physical processes and laws at the microscopic level, whether classical, relativistic or quantum, are either entirely or mostly time-symmetric. If a physical process is physically possible, then generally speaking so is the same process run backwards, so that, if you were to hypothetically watch a movie of a physical process, you would not be able to tell if it is being played forwards or backwards, as both would be equally plausible.

In theory, therefore, most of the laws of physics do not necessarily specify an arrow of time. There is, however, an important exception: the Second Law of Thermodynamics.

Thermodynamic Arrow of Time

Most of the observed temporal asymmetry at the macroscopic level – the reason we see time as having a forward direction – ultimately comes down to thermodynamics, the science of heat and its relation with mechanical energy or work, and more specifically to the Second Law of Thermodynamics. This law states that, as one goes forward in time, the net entropy (degree of disorder) of any isolated or closed system will always increase (or at least stay the same).

The concept of entropy and the decay of ordered systems was explored and clarified by the German physicist Ludwig Boltzmann in the 1870s, building on earlier ideas of Rudolf Clausius, but it remains a difficult and often misunderstood idea. Entropy can be thought of, in most cases, as meaning that things (matter, energy, etc) have a tendency to disperse. Thus, a hot object always dissipates heat to the atmosphere and cools down, and not vice versa; coffee and milk mix together, but do not then separate; a house left unattended will eventually crumble away, but a pile of bricks never spontaneously forms itself into a house; etc. However, as discussed below, it is not quite as simple as that, and a better way of thinking of it may be as a tendency towards randomness.

It should be noted that, in thermodynamic systems that are NOT closed, it is quite possible that entropy can decrease with time (e.g. the formation of certain crystals; many living systems, which may reduce local entropy at the expense of the surrounding environment, resulting in a net overall increase in entropy; the formation of isolated pockets of gas and dust into stars and planets, even though the entropy of the universe as a whole continues to increase; etc). Any localized or temporary instances of order within the universe are therefore in the nature of epiphenomena within the overall picture of a universe progressing inexorably towards disorder.

It is also perhaps counter-intuitive, but nevertheless true, that overall entropy actually increases even as large-scale structure forms in the universe (e.g. galaxies, clusters, filaments, etc), and that dense and compact black holes have incredibly high entropy, and actually account for the overwhelming majority of the entropy in today’s universe. Likewise, the relatively smooth configuration of the very early universe (see the section on Time and the Big Bang) is actually an indication of very low overall entropy (i.e. high entropy does not necessarily imply smoothness: random “lumpiness”, like in our current universe, is actually a characteristic of high entropy).

Most of the processes that appear to us to be irreversible in time are those that start out, for whatever reason, in some very special, highly-ordered state. For example, a new deck of cards is in number order, but as soon as we shuffle the cards they become disordered; an egg is a much more ordered state than a broken or scrambled egg; etc. There is nothing in the laws of physics that prevents the act of shuffling a deck of cards from producing a perfectly ordered set of cards – there is always a chance of that, it is just a vanishingly small chance. To give another example, there are many more possible disordered arrangements of a jigsaw than the one ordered arrangement that makes a complete picture. So, the apparent asymmetry of time is really just an asymmetry of chance – things evolve from order to disorder not because the reverse is impossible, but because it is highly unlikely. The Second Law of Thermodynamics is therefore more a statistical principle than a fundamental law (this was Boltzmann’s great insight). But the upshot is that, provided the initial condition of a system is one of relatively high order, then the tendency will almost always be towards disorder.
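
The numbers behind the card-shuffling example make the point concrete (simple arithmetic, not drawn from any of the cited papers):

```python
import math

# Number of distinct orderings of a standard 52-card deck, and the chance that
# a random shuffle lands on the single factory-ordered arrangement.
orderings = math.factorial(52)          # about 8.07e67 arrangements
p_ordered = 1 / orderings               # about 1.2e-68

print(f"Possible arrangements: {orderings:.2e}")
print(f"Probability a random shuffle is perfectly ordered: {p_ordered:.1e}")
```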

Thermodynamics, then, appears to be one of the only physical processes that is NOT time-symmetric, and it is so fundamental and ubiquitous in our universe that it may be single-handedly responsible for our perception of time as having a direction. Indeed, several of the other arrows of time noted below (arguably) ultimately come back to the asymmetry of thermodynamics. So clear is this law that the measurement of entropy has been put forward as a way of distinguishing the past from the future, and the thermodynamic arrow of time has even been proposed as the reason we can remember the past but not the future, since entropy, or disorder, was lower in the past than it will be in the future.

Cosmological Arrow of Time

It has been argued that the arrow of time points in the direction of the universe’s expansion, as the universe continues to grow bigger and bigger since its beginning in the Big Bang. It became apparent towards the beginning of the 20th Century, thanks to the work of Edwin Hubble and others, that space is indeed expanding, and the galaxies are moving ever further apart. Logically, therefore, at a much earlier time, the universe was much smaller, and ultimately concentrated in a single point or singularity, which we call the Big Bang. Thus, the universe does seem to have some intrinsic (outward) directionality. In our everyday lives, however, we are not physically conscious of this movement, and it is difficult to see how we can perceive the expansion of the universe as an arrow of time.

The cosmological arrow of time may be linked to, or even dependent on, the thermodynamic arrow, given that, as the universe continues to expand and heads towards an ultimate “Heat Death” or “Big Chill”, it is also heading in a direction of increasing entropy, ultimately arriving at a position of maximum entropy, where the amount of usable energy becomes negligible or even zero. This accords with the Second Law of Thermodynamics in that the overall direction is from the current semi-ordered state, marked by outcroppings of order and structure, towards a completely disordered state of thermal equilibrium. What remains a major unknown in modern physics, though, is exactly why the universe had a very low entropy at its origin, the Big Bang.

It is also possible – although less likely according to the predictions of current physics – that the present expansion phase of the universe could eventually slow, stop, and then reverse itself under gravity. The universe would then contract back to a mirror image of the Big Bang known as the “Big Crunch” (and possibly a subsequent “Big Bounce” in one of a series of cyclic repetitions). As the universe contracts and collapses, entropy will in theory start to reduce and, presumably, the arrow of time will reverse itself and time will effectively begin to run backwards. In this scenario, then, the arrow of time that we experience is merely a function of our current place in the evolution of the universe and, at some other time, it could conceivably change its direction.

However, there are paradoxes associated with this view because, looked at from a suitably distant and long-term viewpoint, time will continue to progress “forwards” (in some respects at least), even if the universe happens to be in a contraction phase rather than an expansion phase. So, the cosmic asymmetry of time could still continue, even in a “closed” universe of this kind.

Radiative Arrow of Time

Waves – light, radio waves, sound waves, water waves, and so on – always radiate outwards from their sources. While theoretical equations do allow for the opposite (convergent) waves, this is apparently never seen in nature. This asymmetry is regarded by some as a reason for the asymmetry of time.

It is possible that the radiative arrow may also be linked to the thermodynamic arrow, because radiation suggests increased entropy while convergence suggests increased order. This becomes particularly clear when we consider radiation as having a particle aspect (i.e. as consisting of photons) as quantum mechanics suggests.

Quantum Arrow of Time

The whole mechanism of quantum mechanics (or at least the conventional Copenhagen interpretation of it) is based on Schrödinger’s Equation and the collapse of wave functions, and this appears to be a time-asymmetric phenomenon. For example, the location of a particle is described by a wave function, which essentially gives various probabilities that the particle is in many different possible positions (or superpositions), and the wave function only collapses when the particle is actually observed. At that point, the particle can finally be said to be in one particular position, and all the information from the wave function is then lost and cannot be recreated. In this respect, the process is time-irreversible, and an arrow of time is created.

Some physicists, including the team of Aharonov, Bergmann and Lebowitz in the 1960s, have questioned this finding, though. Their experiments concluded that we only get time-asymmetric answers in quantum mechanics when we ask time-asymmetric questions, and that questions and experiments can be framed in such a way that the results are time-symmetric. Thus, quantum mechanics does not impose time asymmetry on the world; rather, the world imposes time asymmetry on quantum mechanics.

It is not clear how the quantum arrow of time, if indeed it exists at all, is related to the other arrows, but it is possible that it is linked to the thermodynamic arrow, in that nature shows a bias for collapsing wave functions into higher entropy states versus lower ones.

Weak Nuclear Force Arrow of Time

Of the four fundamental forces in physics (gravity, electromagnetism, the strong nuclear force and the weak nuclear force), the weak nuclear force is the only one that does not always manifest complete time symmetry. To some limited extent, therefore, there is a weak force arrow of time, and this is the only arrow of time which appears to be completely unrelated to the thermodynamic arrow.

The weak nuclear force is a very weak interaction in the nucleus of an atom, and is responsible for, among other things, radioactive beta decay and the production of neutrinos. It is perhaps the least understood and strangest of the fundamental forces. In some situations the weak force is time-reversible, e.g. a proton and an electron can smash together to produce a neutron and a neutrino, and a neutron and a neutrino smashed together CAN also produce a proton and an electron (even if the chances of this happening in practice are very small). However, there are examples of the weak interaction that are time-irreversible, for example the case of the oscillation and decay of neutral kaon and anti-kaon particles. Under certain conditions, it has been shown experimentally that kaons and anti-kaons actually decay at different rates, indicating that the weak force is not in fact time-reversible, thereby establishing a kind of arrow of time.

It should be noted, though, that this is not such a strong or fundamental arrow of time as the thermodynamic arrow (the difference is between a process that could go either way but in a slightly different way or at a different rate, and a truly irreversible process – like entropy – that just cannot possibly go both ways). Indeed, it is such a rare occurrence, so small and barely perceivable in its effect, and so divorced from any of the other arrows, that it is usually characterized as an inexplicable anomaly.

Causal Arrow of Time

Although not directly related to physics, causality appears to be intimately bound up with time’s arrow. By definition, a cause precedes its effect. Although it is surprisingly difficult to satisfactorily define cause and effect, the concept is readily apparent in the events of our everyday lives. If we drop a wineglass on a hard floor, it will subsequently shatter, whereas shattered glass on the floor is very unlikely to subsequently reassemble into an unbroken wineglass. By causing something to happen, we are to some extent controlling the future, whereas whatever we might do, we cannot change or control the past.

Once again, though, the underlying principle may well come back to the thermodynamic arrow: while disordered shattered glass can easily be made out of a well-ordered wineglass, the reverse is much more difficult and unlikely.

Psychological Arrow of Time

A variant of the causal arrow is sometimes referred to as the psychological or perceptual arrow of time. We appear to have an innate sense that our perception runs from the known past to the unknown future. We anticipate the unknown, and automatically move forward towards it, and, while we are able to remember the past, we do not normally waste time in trying to change the already known and fixed past.

Stephen Hawking has argued that even the psychological arrow of time is ultimately dependent on the thermodynamic arrow, and that we can only remember past things because they form a relatively small set compared to the potentially infinite number of possible disordered future sets.

Anthropic Principle

Some thinkers, including Stephen Hawking again, have pinned the direction of the arrow of time on what is sometimes called the weak anthropic principle, the idea that the laws of physics are as they are solely because those are the laws that allow the development of sentient, questioning beings like ourselves. It is not that the universe is in some way “designed” to allow human beings, merely that we only find ourselves in such a universe because it is as it is, even though the universe could easily have developed in a quite different way with quite different laws.

Thus, Hawking argues, a strong thermodynamic arrow of time is a necessary condition for intelligent life as we know it to develop. For example, beings like us need to consume food (a relatively ordered form of energy) and convert it into heat (a relatively disordered form of energy), for which a thermodynamic arrow like the one we see around us is necessary. If the universe were any other way, we would not be here to observe it.

This article is republished here from Exactly What Is Time? under a Creative Commons license.

Researchers Discover A Uniquely Quantum Effect In Erasing Information (Quantum)

Researchers from Trinity College Dublin have discovered a uniquely quantum effect in erasing information that may have significant implications for the design of quantum computing chips. Their surprising discovery brings back to life the paradoxical “Maxwell’s demon”, which has tormented physicists for over 150 years.

A bit of information can be encoded in the position of a particle (left or right). A demon can erase a classical bit (blue) by raising one side until the particle is definitely on the right. A quantum particle (red) can also tunnel under the barrier, which generates more heat. © Professor Goold, Trinity College Dublin

The thermodynamics of computation was brought to the fore in 1961 when Rolf Landauer, then at IBM, discovered a relationship between the dissipation of heat and logically irreversible operations. Landauer is known for the mantra “Information is Physical”, which reminds us that information is not abstract and is encoded on physical hardware.

The “bit” is the currency of information (it can be either 0 or 1) and Landauer discovered that when a bit is erased there is a minimum amount of heat released. This is known as Landauer’s bound and is the definitive link between information theory and thermodynamics.
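
Landauer’s bound is easy to state numerically: erasing one bit at temperature T must dissipate at least k_B T ln 2 of heat. A quick back-of-the-envelope evaluation at room temperature (using ordinary physical constants, not data from the paper):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

q_min = k_B * T * math.log(2)   # Landauer bound: minimum heat per erased bit
print(f"Minimum heat per erased bit at {T:.0f} K: {q_min:.2e} J")   # ~2.9e-21 J
```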

Professor John Goold’s QuSys group at Trinity is analysing this topic with quantum computing in mind, where a quantum bit (a qubit, which can be 0 and 1 at the same time) is erased.

In just-published work in the journal Physical Review Letters, the group discovered that the quantum nature of the information to be erased can lead to large deviations in the heat dissipation, which are not present in conventional bit erasure.

Thermodynamics and Maxwell’s demon

One hundred years before Landauer’s discovery, scientists such as the Viennese physicist Ludwig Boltzmann and the Scottish physicist James Clerk Maxwell were formulating the kinetic theory of gases, reviving the ancient Greek idea that matter is made of atoms and deriving macroscopic thermodynamics from microscopic dynamics.

Professor Goold says:

“Statistical mechanics tells us that things like pressure and temperature, and even the laws of thermodynamics themselves, can be understood by the average behavior of the atomic constituents of matter. The second law of thermodynamics concerns something called entropy which, in a nutshell, is a measure of the disorder in a process. The second law tells us that in the absence of external intervention, all processes in the universe tend, on average, to increase their entropy and reach a state known as thermal equilibrium.

“It tells us that, when mixed, two gases at different temperatures will reach a new state of equilibrium at the average temperature of the two. It is the ultimate law in the sense that every dynamical system is subject to it. There is no escape: all things will reach equilibrium, even you!”

However, the founding fathers of statistical mechanics were trying to pick holes in the second law right from the beginning of the kinetic theory. Consider again the example of a gas in equilibrium: Maxwell imagined a hypothetical “neat-fingered” being with the ability to track and sort particles in a gas based on their speed.

Maxwell’s demon, as the being became known, could quickly open and shut a trap door in a box containing a gas, and let hot particles through to one side of the box but restrict cold ones to the other. This scenario seems to contradict the second law of thermodynamics as the overall entropy appears to decrease and perhaps physics’ most famous paradox was born.

But what about Landauer’s discovery about the heat-dissipated cost of erasing information? Well, it took another 20 years until that was fully appreciated, the paradox solved, and Maxwell’s demon finally exorcised.

Landauer’s work inspired Charlie Bennett – also at IBM – to investigate the idea of reversible computing. In 1982 Bennett argued that the demon must have a memory, and that it is not the measurement but the erasure of the information in the demon’s memory which is the act that restores the second law in the paradox. And, as a result, computation thermodynamics was born.

New findings

Now, 40 years on, this is where the new work led by Professor Goold’s group comes to the fore, with the spotlight on quantum computation thermodynamics.

In the recent paper, published with collaborator Harry Miller at the University of Manchester and two postdoctoral fellows in the QuSys Group at Trinity, Mark Mitchison and Giacomo Guarnieri, the team studied very carefully an experimentally realistic erasure process that allows for quantum superposition (the qubit can be in state 0 and 1 at same time).

Professor Goold explains:

“In reality, computers function well away from Landauer’s bound for heat dissipation because they are not perfect systems. However, it is still important to think about the bound because as the miniaturisation of computing components continues, that bound becomes ever closer, and it is becoming more relevant for quantum computing machines. What is amazing is that with technology these days you can really study erasure approaching that limit.

“We asked: ‘what difference does this distinctly quantum feature make for the erasure protocol?’ And the answer was something we did not expect. We found that even in an ideal erasure protocol – due to quantum superposition – you get very rare events which dissipate heat far greater than the Landauer limit.

“In the paper we prove mathematically that these events exist and are a uniquely quantum feature. This is a highly unusual finding that could be really important for heat management on future quantum chips – although there is much more work to be done, in particular in analysing faster operations and the thermodynamics of other gate implementations.

“Even in 2020, Maxwell’s demon continues to pose fundamental questions about the laws of nature.”

References: Harry J. D. Miller, Giacomo Guarnieri, Mark T. Mitchison, and John Goold, “Quantum Fluctuations Hinder Finite-Time Information Erasure near the Landauer Limit”, Phys. Rev. Lett. 125, 160602 – Published 15 October 2020. https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.125.160602

Provided by Trinity College Dublin

RUDN University Ecologists Developed New Models To Identify Environmental Pollution Sources (Nature)

According to a team of ecologists from RUDN University, polycyclic aromatic hydrocarbons (PAHs) can be used as pollution indicators and help monitor the movement of pollutants in environmental components such as soils, plants, and water. To find this out, the team conducted a large-scale study of a variety of soil, water, and plant samples collected from a vast area from China to the Antarctic. The results of the study were published in the Applied Geochemistry journal.

© RUDN University

Geochemical barriers mark the borders between natural environments at which the nature of element transfer changes dramatically. For example, the concentration of oxygen rapidly increases at groundwater outlets, because different chemical elements oxidize and accumulate on the barrier. A team of ecologists from RUDN University was the first in the world to suggest a model that describes the energy of mass transfer, i.e. the movement of matter in an ecosystem. In this model, polycyclic aromatic hydrocarbons (PAHs) are used as the markers of moving substances. PAHs are mainly toxic organic substances that accumulate in the soil. The team used their composition to monitor pollution and track down its sources. To do so, the ecologists calculated the physical and chemical properties of PAHs and classified them.

“We developed a model that shows the accumulation, transformation, and migration of PAHs. It is based on quantitative measurements that produce more consistent results than descriptive visualizations. This helped us understand how physical and chemical properties of PAHs determine their accumulation in the environment,” said Prof. Aleksander Khaustov, a PhD in Geology and Mineralogy, from the Department of Applied Ecology at RUDN University.

PAHs can form due to natural causes (e.g. wildfires) or as a result of human activity, for example as the waste products of the chemical and oil industry. The team studied 142 water, plant, soil, and silt samples from different geographical regions. Namely, some samples were taken in the hydrologic systems of the Kerch Peninsula, some came from leather industry areas in China, from the vicinity of the Irkutsk aluminum smelter, and from different regions of the Arctic and Antarctic. Several snow samples were taken on the RUDN University campus in Moscow. All collected data were unified, and then the amount of PAHs in each sample was calculated. After that, the results were analyzed in line with thermodynamic theory to calculate entropy, enthalpy, and Gibbs energy variations. The first value describes the deviation of the actual process from an ideal one; the second shows the amount of energy released or consumed; and the third indicates whether mass transfer is possible.
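
The three quantities are linked by the standard thermodynamic identity (a textbook relation, quoted for context rather than a result of this particular study):

```latex
% Textbook relation between the three quantities at constant temperature T:
\[
  \Delta G = \Delta H - T\,\Delta S
\]
% A negative \Delta G indicates that the transfer or transformation can proceed spontaneously.
```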

“Though our samples were not genetically uniform, they allowed us to apply thermodynamic analysis to matter and energy transfer in natural dissipative systems,” added Prof. Aleksander Khaustov.

The team identified several factors that have the biggest impact on PAHs accumulation. For example, in the ecosystems surrounding leather facilities in China, the key factor turned out to be entropy variations, while on the RUDN University campus it was the changes in Gibbs energy. The team described three types of processes that are characterized by the reduction, stability, or increase of all three thermodynamic parameters, respectively. Based on this classification and the composition of PAHs one can monitor pollution and track down its source.

Reference: Aleksander Khaustov, Margarita Redina, “Fractioning of the polycyclic aromatic hydrocarbons in the components of the non-equilibrium geochemical systems (thermodynamic analysis)”, Applied Geochemistry, Volume 120, September 2020, 104684. https://doi.org/10.1016/j.apgeochem.2020.104684

Provided by RUDN University

What Is Heat Death Paradox? (Astronomy)

Formulated in 1862 by Lord Kelvin, Hermann von Helmholtz and William John Macquorn Rankine, the heat death paradox, also known as Clausius’s paradox and thermodynamic paradox, is a reductio ad absurdum argument that uses thermodynamics to show the impossibility of an infinitely old universe.

Assuming that the universe is eternal, a question arises: How is it that thermodynamic equilibrium has not already been achieved?

This paradox is based upon the classical model of the universe, in which the universe is eternal. Clausius’s paradox is a paradox of paradigm: resolving it required amending fundamental ideas about the universe, and the paradox was solved once that paradigm changed.

The paradox was based upon the rigid mechanical point of view of the second law of thermodynamics postulated by Rudolf Clausius, according to which heat can only be transferred from a warmer to a colder object. If the universe were eternal, as claimed in the classical stationary model of the universe, it should already be cold.

Any hot object transfers heat to its cooler surroundings, until everything is at the same temperature. For two objects at the same temperature as much heat flows from one body as flows from the other, and the net effect is no change. If the universe were infinitely old, there must have been enough time for the stars to cool and warm their surroundings. Everywhere should therefore be at the same temperature and there should either be no stars, or everything should be as hot as stars.

Since there are stars and the universe is not in thermal equilibrium it cannot be infinitely old.

The paradox does not arise in Big Bang or Steady State cosmology. In Big Bang cosmology, the current age of the universe is not old enough for it to have reached equilibrium; in a Steady State universe, sufficient hydrogen is replenished or regenerated continuously to maintain a constant average density and prevent the stars from running down.

Reference: Cucic (2009). “Paradoxes of Thermodynamics and Statistical Physics”, pp. 1-15, arXiv:0912.1756