Quantum technologies are based on the quantum properties of light, electrons, and atoms. In recent decades, scientists have learned to master these phenomena and exploit them in applications, bringing the construction of a quantum computer for commercial applications within reach. One of the emerging technologies currently being advanced very successfully is the ion-trap quantum computer. Here, charged particles are trapped with electromagnetic fields in a vacuum chamber and prepared so that they can serve as carriers of information and be used for computing; this includes cooling them to the lowest temperatures permitted by quantum mechanics. However, the quantum mechanical properties exploited in this process are highly error-prone. Even the smallest deficiencies can heat up the strongly cooled particles and thereby lead to errors in the processing of quantum information. Possible sources of such faults are weakly conducting or non-conducting materials, which are used, for example, as insulators in a metallic ion trap, or the optics necessary for coupling ions to laser light. “Even for ion traps made exclusively of metal, oxide layers on the metals would cause such failures,” explains Tracy Northup of the Department of Experimental Physics at the University of Innsbruck in Austria. Northup's team, together with collaborators in Innsbruck and the U.S., has found a way to determine the influence of dielectric materials on the charged particles in ion traps.
This was possible because the Innsbruck quantum physicists have an ion trap in which they can precisely set the distance between the ions and dielectric optics. Based on an earlier proposal by Rainer Blatt's group, the physicists computed the amount of noise caused by the dielectric material for this ion trap and compared it with experimental data. “Theory and experiment agree very well, confirming that this method is well suited for determining the influence of dielectric materials on the ions,” explains Markus Teller from the Innsbruck team. To calculate the noise, the researchers used the fluctuation-dissipation theorem from statistical physics, which mathematically describes the response of a system in thermal equilibrium to a small external perturbation.
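The content of the theorem can be illustrated with its best-known classical special case (a textbook form, not necessarily the exact expression used in the Innsbruck calculation): the Johnson-Nyquist relation, in which the voltage noise across a dissipative element in thermal equilibrium is fixed entirely by its dissipation,

\[ S_V(\omega) = 4 k_B T \,\mathrm{Re}\,Z(\omega), \]

where \(S_V\) is the voltage noise spectral density, \(T\) the temperature, and \(\mathrm{Re}\,Z(\omega)\) the resistive (dissipative) part of the impedance. In the same spirit, the electric-field noise that heats a trapped ion can be computed from the lossy response of nearby dielectric materials.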
“In quantum computers, there are many possible sources of noise, and it is very difficult to sort out the exact sources,” says Tracy Northup. “Our method is the first to quantify the influence of dielectric materials in a given ion trap on the charged particles. In the future, designers of ion trap quantum computers will be able to assess this effect much more accurately and design their devices to minimize these perturbations.” After having successfully demonstrated the method on their own ion trap, the Innsbruck physicists now want to apply it to the ion traps of collaborators in the U.S. and Switzerland.
The research was financially supported by the Austrian Science Fund FWF and the European Union, among others. The results have been published in the journal Physical Review Letters.
Publication: “Heating of a trapped ion induced by dielectric materials.” Markus Teller, Dario A. Fioretto, Philip C. Holz, Philipp Schindler, Viktor Messerer, Klemens Schüppert, Yueyang Zou, Rainer Blatt, John Chiaverini, Jeremy Sage, and Tracy E. Northup. Phys. Rev. Lett. 126, 230505. DOI: 10.1103/PhysRevLett.126.230505
Researchers from TU Graz in Austria and the Universities of Cambridge and Surrey have succeeded in tracking down the first step in ice formation at a surface, revealing that water needs additional energy before ice can start to form.
Water freezes and turns to ice when brought into contact with a cold surface – a well-known fact. However, the exact process and its microscopic details have remained elusive up to now. Anton Tamtögl from the Institute of Experimental Physics at TU Graz explains: “The first step in ice formation is called ‘nucleation’ and happens in an incredibly short time, a fraction of a billionth of a second, when highly mobile individual water molecules ‘find each other’ and coalesce.” Conventional microscopes are far too slow to follow the motion of water molecules, so it is impossible to use them to ‘watch’ how molecules combine on top of solid surfaces.
Findings turn previous understanding of ice formation upside down
With the help of a new experimental technique and computational simulations, Tamtögl and a group of researchers from the Universities of Cambridge and Surrey were able to track down the first step in ice formation on a graphene surface. In a paper published in Nature Communications, they made the remarkable observation that the water molecules repel each other and need to gain sufficient energy to overcome that repulsion before ice can start to form: it has to become hot, so to speak, before ice forms. Speaking generally, lead author Anton Tamtögl says: “Repulsion between water molecules has simply not been considered during ice nucleation – this work will change all that.”
Following the ‘dance’ of water molecules
The effect was discovered with a method called Helium Spin-Echo (HeSE) – a technique developed at the Cavendish Laboratory in Cambridge and specially designed to follow the motion of atoms and molecules. The machine scatters helium from moving molecules on a surface, similar to the way radio waves scatter from vehicles in a radar speed trap. By registering the number of scattered helium atoms and their energy/velocity after scattering, the technique makes it possible to follow the movement of atoms and molecules.
The HeSE experiments show that water molecules on a graphene surface, i.e. a single atomic layer of carbon, repel each other. The repulsion arises because the molecules all align the same way, perpendicular to the surface. The scenario is analogous to bringing two magnets with like poles together: they push each other apart. For the nucleation of ice to begin, one of the two molecules must reorient itself; only then can they approach each other. Such a reorientation requires additional energy and thus represents a barrier that must be overcome for the growth of ice crystals.
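For scale, a rough order-of-magnitude estimate (not from the paper; the dipole separation of 3 Å is an assumed illustrative value) of the repulsion between two identically aligned water dipoles sitting side by side:

```python
import math

# Two parallel dipoles perpendicular to the line joining them repel with
#   U = p^2 / (4 * pi * eps0 * r^3)
# Values below are generic physical constants; r is an assumed separation.
P_WATER = 6.17e-30      # dipole moment of water, C*m (1.85 debye)
EPS0 = 8.854e-12        # vacuum permittivity, F/m
R = 3.0e-10             # assumed intermolecular separation, m

U = P_WATER**2 / (4 * math.pi * EPS0 * R**3)   # interaction energy, joules
U_meV = U / 1.602e-19 * 1000                   # convert to meV
print(f"dipole-dipole repulsion ~ {U_meV:.0f} meV")
```

This lands at tens of meV, comparable to thermal energies near room temperature, which is consistent with the picture of a modest but real barrier to nucleation.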
Computational simulations, in which the precise energy of water molecules in different configurations was mapped and the interactions between nearby molecules were calculated, confirm the experimental findings. Moreover, the simulations allow the researchers to ‘switch’ the repulsion on and off, providing further evidence of the effect. The combination of experimental and theoretical methods allowed the international scientific team to unravel the behaviour of the water molecules. It captures for the first time exactly how the first step of ice formation at a surface evolves and allowed them to propose a previously unknown physical mechanism.
Relevance for other fields and applications
The group further suggests the newly observed effect may occur more widely, on other surfaces. “Our findings pave the way for new strategies to control ice formation or prevent icing,” says Tamtögl, thinking, for example, of surface treatments specifically for wind power, aviation or telecommunications.
Understanding the microscopic processes at work during ice formation is also essential to predicting the formation and melting of ice, from individual crystals to glaciers and ice sheets. The latter is crucial to our ability to quantify environmental transformation in connection with climate change and global warming.
Quantum entanglement is key for next-generation computing and communications technology, and Aalto researchers can now produce it using temperature differences.
A joint group of scientists from Finland, Russia, China, and the USA has demonstrated that a temperature difference can be used to entangle pairs of electrons in superconducting structures. The experimental discovery, published in Nature Communications, promises powerful applications in quantum devices, bringing us one step closer to applications of the second quantum revolution.
The team, led by Professor Pertti Hakonen from Aalto University, has shown that the thermoelectric effect provides a new method for producing entangled electrons in a new device. “Quantum entanglement is the cornerstone of the novel quantum technologies. This concept, however, has puzzled many physicists over the years, including Albert Einstein who worried a lot about the spooky interaction at a distance that it causes”, says Prof. Hakonen.
In quantum computing, entanglement is used to fuse individual quantum systems into one, which exponentially increases their total computational capacity. “Entanglement can also be used in quantum cryptography, enabling the secure exchange of information over long distances”, explains Prof. Gordey Lesovik, from the Moscow Institute of Physics and Technology, who has acted several times as a visiting professor at Aalto University School of Science. Given the significance of entanglement to quantum technology, the ability to create entanglement easily and controllably is an important goal for researchers.
The researchers designed a device in which a superconductor was layered with graphene and metal electrodes. “Superconductivity is caused by entangled pairs of electrons called ‘Cooper pairs.’ Using a temperature difference, we cause them to split, with each electron then moving to a different normal-metal electrode,” explains doctoral candidate Nikita Kirsanov, from Aalto University. “The resulting electrons remain entangled despite being separated over quite long distances.”
Along with the practical implications, the work has significant fundamental importance. The experiment has shown that the process of Cooper pair splitting works as a mechanism for turning temperature difference into correlated electrical signals in superconducting structures. The developed experimental scheme may also become a platform for original quantum thermodynamical experiments.
The work was carried out using the OtaNano research infrastructure. OtaNano provides a state-of-the-art working environment and equipment for nanoscience, nanotechnology, and quantum technologies research in Finland. OtaNano is operated by Aalto University and VTT and is available to academic and commercial users internationally. To find out more, visit their website. The work was supported by funding from QTF (Academy of Finland CoE). Gordey Lesovik's visiting professorship was funded by the Aalto University School of Science, and Zhenbing Tan's postdoctoral grant came from the Academy of Finland.
Heating up cancer cells while targeting them with chemotherapy is a highly effective way of killing them, according to a new study led by UCL researchers.
The study, published in the Journal of Materials Chemistry B, found that “loading” a chemotherapy drug on to tiny magnetic particles that can heat up the cancer cells at the same time as delivering the drug to them was up to 34% more effective at destroying the cancer cells than the chemotherapy drug without added heat.
The magnetic iron oxide nanoparticles that carry the chemotherapy drug shed heat when exposed to an alternating magnetic field. This means that, once the nanoparticles have accumulated in the tumour area, an alternating magnetic field can be applied from outside the body, allowing heat and chemotherapy to be delivered simultaneously.
The effects of the two treatments were synergistic – that is, each treatment enhanced the effectiveness of the other, meaning they were more potent when combined than when separate. The study was carried out on cells in a lab and further research is needed ahead of clinical trials involving patients.
Senior author Professor Nguyen T. K. Thanh (Biophysics Group, UCL Physics & Astronomy) said: “Our study shows the enormous potential of combining chemotherapy with heat treatment delivered via magnetic nanoparticles.
“While this combination of therapy is already approved for the treatment of fast-growing glioblastomas, our results suggest it has potential to be used more widely as a broad anti-cancer therapy.
“This therapy also has potential to reduce the side effects of chemotherapy, by ensuring it is more highly targeted on cancer cells rather than healthy tissue. This needs to be explored in further pre-clinical tests.”
In the study, researchers combined the magnetic nanoparticles with a commonly used chemotherapy drug, doxorubicin, and compared the effects of this composite in various scenarios on human breast cancer cells, glioblastoma (brain cancer) cells, and mouse prostate cancer cells.
In the most successful scenario, they found that heat and doxorubicin together killed 98% of brain cancer cells after 48 hours, when doxorubicin without heat killed 73%. Meanwhile, for the breast cancer cells, 89% were killed by heat and doxorubicin together, while 77% were killed after 48 hours by doxorubicin alone.
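The headline figure of “up to 34% more effective” can be recovered directly from the brain cancer numbers quoted above:

```python
# Relative improvement in cell kill for the most successful scenario:
# heat + doxorubicin killed 98% of brain cancer cells vs 73% for the
# drug alone after 48 hours.
combo, drug_alone = 98, 73
relative_gain = (combo - drug_alone) / drug_alone
print(f"{relative_gain:.0%} more cells killed")  # -> 34%, matching "up to 34%"
```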
Cancer cells are more susceptible to heat than healthy cells – they undergo a slow death (apoptosis) once the temperature reaches 42 degrees Celsius, whereas healthy cells are able to withstand temperatures up to 45 degrees Celsius.
The researchers found that heating cancer cells by only a few degrees, to 40 degrees Celsius, enhanced the effectiveness of the chemotherapy, meaning the treatment could be effective with lower doses of nanoparticles.
They found the combination of therapies was most effective when the nanoparticles were absorbed, or internalized, by the cancer cells, but they found the chemotherapy was also enhanced when the nanoparticles shed heat while remaining outside the cancer cells (which would be an easier form of treatment to deliver). However, the effects at lower temperatures only occurred when the iron oxide nanoparticles were internalized or tightly deposited on to the surface of the cancer cells.
The nanoparticles also have a polymer coating that prevents the chemotherapy drug from leaching out into healthy tissue. The coating is heat- and pH-sensitive, and is designed to release the drug when the temperature rises and the nanoparticles are internalized within tiny pockets in cells called “lysosomes”, which have a lower pH than the rest of the cell medium. This intracellular delivery of the drug was particularly effective for the mouse prostate cancer cells, which showed a superior, synergistic cell-death effect, especially when the temperature reached 42°C.
Co-author Dr Olivier Sandre, of the University of Bordeaux, said: “Since heat can be generated through the alternating magnetic field, the release of the drug can be highly localised to cancer cells, potentially reducing side effects.”
Researchers received funding from the Engineering and Physical Sciences Research Council (EPSRC), the Asian Office of Aerospace Research and Development (AOARD), the European Cooperation in Science and Technology (COST), UCL, the University of Bordeaux, and collaborated with Resonant Circuits Limited.
By embedding titanium-based sheets in water, a group led by scientists from the RIKEN Center for Emergent Matter Science has created an inorganic material that can be converted from a hard gel to soft matter by temperature changes. Science fiction often features inorganic life forms, but in reality, organisms and devices that respond to stimuli such as temperature changes are nearly always based on organic materials; hence, research in the area of “adaptive materials” has focused almost exclusively on organic substances. However, inorganic materials such as metals have advantages, including potentially better mechanical properties. Considering this, the RIKEN-led group decided to attempt to recreate the behavior displayed by organic hydrogels, but using inorganic materials. The inspiration for the material comes from an aquatic creature, the sea cucumber. Sea cucumbers are fascinating animals, related to starfish (but not to cucumbers!), that can morph their skin from a hard layer into a kind of jelly, allowing them to expel their internal organs (which are eventually regrown) to escape from predators. In sea cucumbers, chemicals released by the nervous system trigger a change in the configuration of a protein scaffold, producing the transformation.
To make it, the researchers experimented with arranging nanosheets–thin sheets of titanium oxide in this case–in water, with the nanosheets making up 14 percent and water 86 percent of the material by weight.
According to Koki Sano of RIKEN CEMS, the first author of the paper, “The key to whether a material is a soft hydrogel or a harder gel is based on the balance between attractive and repulsive forces among the nanosheets. If the repulsive forces dominate, it is softer, but if the attractive ones are strong, the sheets become locked into a three-dimensional network, and it can rearrange into a harder gel. By using finely tuned electrostatic repulsion, we tried to make a gel whose properties would change depending on temperature.”
The group was ultimately successful, finding that the material changed from a softer, repulsion-dominated state to a harder, attraction-dominated state at a temperature of around 55 degrees Celsius. They also found that they could repeat the transition many times without significant deterioration. “What was fascinating,” Sano continues, “is that this transition process is completed within just two seconds even though it requires a large structural rearrangement. This transition is accompanied by a 23-fold change in the mechanical elasticity of the gel, reminiscent of sea cucumbers.”
To make the material more useful, they next doped it with gold nanoparticles that could convert light into heat, allowing them to shine laser light on the material to heat it up and change the structure.
According to Yasuhiro Ishida of RIKEN CEMS, one of the corresponding authors of the paper, “This is really exciting work as it greatly opens the scope of substances that can be used in next-generation adaptive materials, and may even allow us to create a form of ‘inorganic life’.”
Nature is not homogeneous. Most of the universe is complex and composed of various subsystems — self-contained systems within a larger whole. Microscopic cells and their surroundings, for example, can be divided into many different subsystems: the ribosome, the cell wall, and the intracellular medium surrounding the cell.
The Second Law of Thermodynamics tells us that the average entropy of a closed system in contact with a heat bath — roughly speaking, its “disorder” — always increases over time. Puddles never refreeze back into the compact shape of an ice cube and eggs never unbreak themselves. But the Second Law doesn’t say anything about what happens if the closed system is instead composed of interacting subsystems.
New research by SFI Professor David Wolpert published in the New Journal of Physics considers how a set of interacting subsystems affects the second law for that system.
“Many systems can be viewed as though they were made of subsystems. So what? Why actually analyze them as such, rather than as just one overall monolithic system, which we already have the results for?” Wolpert asks rhetorically.
The reason, he says, is that if you consider something as many interacting subsystems, you arrive at a “stronger version of the second law,” which has a nonzero lower bound for entropy production that results from the way the subsystems are connected. In other words, systems made up of interacting subsystems have a higher floor for entropy production than a single, uniform system.
All entropy that is produced is heat that needs to be dissipated, and thus energy that needs to be consumed. So a better understanding of how subsystem networks affect entropy production could be very important for understanding the energetics of complex systems, such as cells, organisms, or even machinery.
Wolpert’s work builds off another of his recent papers which also investigated the thermodynamics of subsystems. In both cases, Wolpert uses graphical tools for describing interacting subsystems.
For example, the following figure shows the probabilistic connections between three subsystems — the ribosome, cell wall, and intracellular medium.
Like a little factory, the ribosome produces proteins that exit the cell and enter the intracellular medium. Receptors on the cell wall can detect proteins in the intracellular medium. The ribosome directly influences the intracellular medium but only indirectly influences the cell wall receptors. Somewhat more mathematically: A affects B and B affects C, but A doesn’t directly affect C.
Why would such a subsystem network have consequences for entropy production?
“Those restrictions — in and of themselves — result in a strengthened version of the second law where you know that the entropy has to be growing faster than would be the case without those restrictions,” Wolpert says.
A must use B as an intermediary, so it is restricted from acting directly on C. That restriction is what leads to a higher floor on entropy production.
Plenty of questions remain. The current result doesn’t consider the strength of the connections between A, B, and C — only whether they exist. Nor does it tell us what happens when new subsystems with certain dependencies are added to the network. To answer these and more, Wolpert is working with collaborators around the world to investigate subsystems and entropy production. “These results are only preliminary,” he says.
Scientists using an instrument aboard NASA’s Mars Atmosphere and Volatile EvolutioN, or MAVEN, spacecraft have discovered that water vapor near the surface of the Red Planet is lofted higher into the atmosphere than anyone expected was possible. There, it is easily destroyed by electrically charged gas particles — or ions — and lost to space.
Researchers said that the phenomenon they uncovered is one of several that have led Mars to lose the equivalent of a global ocean of water up to hundreds of feet (or up to hundreds of meters) deep over billions of years. Reporting on their finding on Nov. 13 in the journal Science, the researchers said that Mars continues to lose water today as vapor is transported to high altitudes after sublimating from the frozen polar caps during warmer seasons.
“We were all surprised to find water so high in the atmosphere,” said Shane W. Stone, a doctoral student in planetary science at the University of Arizona’s Lunar and Planetary Laboratory in Tucson. “The measurements we used could have only come from MAVEN as it soars through the atmosphere of Mars, high above the planet’s surface.”
To make their discovery, Stone and his colleagues relied on data from MAVEN’s Neutral Gas and Ion Mass Spectrometer (NGIMS), which was developed at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. The mass spectrometer inhales air and separates the ions that comprise it by their mass, which is how scientists identify them.
Stone and his team tracked the abundance of water ions high over Mars for more than two Martian years. In doing so, they determined that the amount of water vapor near the top of the atmosphere at about 93 miles, or 150 kilometers, above the surface is highest during summer in the southern hemisphere. During this time, the planet is closest to the Sun, and thus warmer, and dust storms are more likely to happen.
The warm summer temperatures and strong winds associated with dust storms help water vapor reach the uppermost parts of the atmosphere, where it can easily be broken into its constituent oxygen and hydrogen. The hydrogen and oxygen then escape to space. Previously, scientists thought that water vapor was trapped close to the Martian surface like it is on Earth.
“Everything that makes it up to the higher part of the atmosphere is destroyed, on Mars or on Earth,” Stone said, “because this is the part of the atmosphere that is exposed to the full force of the Sun.”
The researchers measured 20 times more water than usual over two days in June 2018, when a severe global dust storm enveloped Mars (the one that put NASA’s Opportunity rover out of commission). Stone and his colleagues estimated Mars lost as much water in 45 days during this storm as it typically does throughout an entire Martian year, which lasts two Earth years.
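As a back-of-envelope consistency check (using the standard figure of about 687 Earth days per Martian year, a value not quoted in the article), losing a year's worth of water in a 45-day storm implies a sharply elevated average escape rate during the storm:

```python
# Assumed: one Martian year ~ 687 Earth days (standard astronomical value).
martian_year_days = 687
storm_days = 45

# If 45 storm days remove as much water as 687 ordinary days, the
# time-averaged loss rate during the storm exceeds the normal rate by:
enhancement = martian_year_days / storm_days
print(f"implied loss-rate enhancement ~ {enhancement:.0f}x")
```

An enhancement of roughly fifteenfold is broadly consistent with the reported twentyfold increase in high-altitude water vapor during the 2018 storm.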
“We have shown that dust storms interrupt the water cycle on Mars and push water molecules higher in the atmosphere, where chemical reactions can release their hydrogen atoms, which are then lost to space,” said Paul Mahaffy, director of the Solar System Exploration Division at NASA Goddard and principal investigator of NGIMS.
Other scientists have also found that Martian dust storms can lift water vapor far above the surface. But nobody realized until now that the water would make it all the way to the top of the atmosphere. There are abundant ions in this region of the atmosphere that can break apart water molecules 10 times faster than they’re destroyed at lower levels.
“What’s unique about this discovery is that it provides us with a new pathway that we didn’t think existed for water to escape the Martian environment,” said Mehdi Benna, a Goddard planetary scientist and co-investigator of MAVEN’s NGIMS instrument. “It will fundamentally change our estimates of how fast water is escaping today and how fast it escaped in the past.”
Silicon-polymer hybrid modulators capable of optical data rates of 200 Gbit/s at temperatures up to 110 °C could help reduce datacenter cooling costs.
Datacenters could benefit from lower cooling costs thanks in part to ultra-fast electro-optic modulators developed by researchers in Japan using a polymer that is stable even at temperatures that would boil water.
Reported in the journal Nature Communications, the silicon-polymer hybrid modulators can transmit 200 gigabits of data per second at up to 110 °C and could enable optical data interconnections that are both extremely fast and reliable at high temperatures, reducing the need for cooling and expanding applications in harsh environments like rooftops and cars.
Demand for high-speed data transmission such as for high-definition media streaming has exploded in recent years, and optical communications are central to many of the necessary data connections. A critical component is the modulator, which puts data on a beam of light passing through an electro-optic material that can change its optical properties in response to an electric field.
Most modulators currently use inorganic semiconductors or crystals as the electro-optic material, but organic-based polymers have the advantages that they can be fabricated with excellent electro-optic properties at a low cost and operated at low voltages.
“Polymers have great potential for use in modulators, but reliability issues still need to be overcome for many industry applications,” explains Shiyoshi Yokoyama, professor of Kyushu University’s Institute for Materials Chemistry and Engineering and leader of the research collaboration.
One challenge is that parts of the molecules in the polymer layer must be organized through a process called poling to obtain good electro-optic properties, but this organization can be lost when the layer gets warm enough to begin softening–a point referred to as the glass transition temperature.
However, if the modulators and other components can operate rapidly and reliably even at high temperatures, datacenters could run warmer, thereby reducing their energy usage–nearly 40% of which is currently estimated to go toward cooling.
By incorporating appropriate chemical groups, the team designed a polymer exhibiting superb electro-optic properties and a high glass transition temperature of 172 °C. Using it, they achieved ultra-fast signaling at elevated temperatures in a silicon-polymer hybrid modulator based on a Mach-Zehnder interferometer configuration, which is less sensitive to temperature changes than some other architectures.
In the modulators, composed of multiple layers including the polymer and silicon, an incoming laser beam is split into two arms of equal length. Applying an electric field across the electro-optic polymer in one of the arms changes the optical properties such that the light wave slightly shifts. When the two arms come back together, interference between the modified and unmodified beams changes the strength of the mixed output beam depending on the amount of phase shift, thereby encoding data in the light.
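The interference described above can be sketched for an idealized, lossless Mach-Zehnder device (a textbook model, not the authors' device simulation): with equal splitting and recombining, the transmitted power fraction depends only on the phase shift between the arms.

```python
import math

def mzm_transmission(delta_phi):
    """Fraction of input power transmitted by an ideal Mach-Zehnder
    modulator when one arm is phase-shifted by delta_phi radians.
    Equal split/recombine gives I_out / I_in = cos^2(delta_phi / 2)."""
    return math.cos(delta_phi / 2) ** 2

print(mzm_transmission(0.0))       # constructive interference: full power ("on")
print(mzm_transmission(math.pi))   # destructive interference: ~zero power ("off")
```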
Using a simple data signaling scheme of just on and off states, rates of over 100 Gbit/s were achieved, while a more complicated method using four signal levels could achieve a rate of 200 Gbit/s.
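The relationship between signal levels and data rate is simple arithmetic: the bit rate is the symbol rate times log2 of the number of levels. (The 100 GBd symbol rate below is an assumption for illustration; the article quotes only the resulting bit rates.)

```python
import math

def bit_rate_gbps(symbol_rate_gbaud, levels):
    # bits per symbol = log2(number of distinguishable signal levels)
    return symbol_rate_gbaud * math.log2(levels)

print(bit_rate_gbps(100, 2))  # on-off keying (2 levels)  -> 100.0 Gbit/s
print(bit_rate_gbps(100, 4))  # 4-level signaling (PAM4)  -> 200.0 Gbit/s
```

Doubling the number of bits per symbol is what lets the same symbol rate carry 200 Gbit/s instead of 100 Gbit/s.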
This performance was maintained with negligible changes even when operating the devices over temperatures ranging from 25 °C to 110 °C and after subjecting the devices to 90 °C heat for 100 hours, demonstrating the robustness and stability of the modulators over an extraordinarily wide range of temperatures.
“Stable operation even when the temperature fluctuates up to 110 °C is wonderful,” says Yokoyama. “This temperature range means operation is possible in controlled environments such as datacenters, even at higher than normal temperatures, as well as in many harsh environments where temperature is not well controlled.”
The current devices are millimeter sized, making them relatively large compared to other designs, but the researchers are looking into ways to further reduce the footprint so that a dense array of such modulators can fit in a small area.
“This kind of performance shows just how promising polymers are for future telecommunications technologies,” Yokoyama states.
References: Guo-Wei Lu, Jianxun Hong, Feng Qiu, Andrew M. Spring, Tsubasa Kashino, Juro Oshima, Masa-aki Ozawa, Hideyuki Nawata, and Shiyoshi Yokoyama, “High-temperature-resistant silicon-polymer hybrid modulator operating at up to 200 Gbit s⁻¹ for energy-efficient datacentres and harsh-environment applications,” Nature Communications (2020). DOI: 10.1038/s41467-020-18005-7
Can low-level nuclear reactions, or dark matter annihilation, heat massive white dwarf stars (WD)? Recently, Cheng and colleagues identified a number of WD, with masses between 1.08 and 1.23 M⊙, that appear to have an additional heat source. This extra heat may maintain the star's luminosity near ≈10⁻³ L⊙ for many billions of years. Latent heat from crystallization and gravitational energy released from conventional ²²Ne sedimentation do not appear to be large enough to explain this luminosity. Note that ²²Ne sedimentation is significantly slowed down by C/O crystallization; however, Blouin and colleagues speculate that Ne phase separation could enhance the heating from conventional Ne sedimentation.
Now, in this study, Horowitz and colleagues explore heating from electron capture and pycnonuclear reactions.
They assume isolated stars that are not in binary systems. They are interested in reactions that may take place preferentially at the very high central densities of massive WD and may be less important at the lower densities of less massive stars. In principle, even relatively slow nuclear reactions could contribute noticeable heat, because, in the absence of nuclear reactions, there is only a modest luminosity from WD cooling. Alternatively, dark matter annihilation in massive WD could produce additional heating. Dark matter can produce noticeable heating even when it is made of particles with properties, such as scattering cross sections and masses, that may be difficult to observe in laboratory experiments. Furthermore, massive WD have large escape velocities. These stars may trap lower-mass, higher-velocity dark matter particles that would escape less massive stars.
Finally, dark matter could collect in massive WD. If this dark matter concentrates to very high densities, its gravity can modify the structure of a WD and increase the star’s central density. This in turn could further increase the rate of electron capture and or pycnonuclear fusion reactions.
The central density ρC of massive WD follows from hydrostatic equilibrium and an equation of state dominated by relativistic electrons. In Fig. 1, Horowitz and colleagues plot ρC for a WD with electron fraction Ye = 0.5, which could be composed of C and O or of O and Ne. They also show ρC for a possible Fe WD with Ye ≈ 0.464. They assume a simple relativistic free Fermi gas equation of state and neglect Coulomb corrections. The central density of a C/O WD can exceed 10⁹ g/cm³ for star masses above 1.35 M⊙.
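This mass–central-density relation can be sketched numerically: integrate hydrostatic equilibrium outward from an assumed central density, using the ideal relativistic electron gas pressure (the standard Chandrasekhar form, Coulomb corrections neglected, as in the paper). The numerical constants and the simple Euler integration are my own illustrative choices.

```python
import numpy as np

# x = p_F / (m_e c) is the electron relativity parameter.
# Pressure P = A f(x) with dP/dx = 8 A x^4 / sqrt(1 + x^2);
# density rho = B mu_e x^3 (cgs constants below).
G = 6.674e-8          # cm^3 g^-1 s^-2
M_SUN = 1.989e33      # g
A = 6.002e22          # dyn/cm^2, Chandrasekhar pressure constant
B = 9.739e5           # g/cm^3 per unit x^3, per unit mu_e

def wd_mass(rho_c, Ye=0.5, dr=2e5):
    """Mass (in M_sun) of an ideal-Fermi-gas WD with central density rho_c."""
    mu_e = 1.0 / Ye
    b = B * mu_e
    x = (rho_c / b) ** (1.0 / 3.0)
    r = dr
    m = (4.0 / 3.0) * np.pi * dr**3 * rho_c   # seed mass of central cell
    while x > 1e-3:                           # stop near the stellar surface
        rho = b * x**3
        # chain rule on dP/dr = -G m rho / r^2 using dP/dx above
        dxdr = -G * m * rho * np.sqrt(1.0 + x * x) / (8.0 * A * x**4 * r * r)
        dmdr = 4.0 * np.pi * r * r * rho
        x += dxdr * dr
        m += dmdr * dr
        r += dr
    return m / M_SUN

print(f"M(rho_c = 1e9 g/cm^3, Ye = 0.5) ~ {wd_mass(1e9):.2f} M_sun")
```

With ρC = 10⁹ g/cm³ and Ye = 0.5 the integration lands close to the ~1.35–1.4 M⊙ regime quoted in the text, approaching the Chandrasekhar limit as ρC grows.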
High densities can drive electron capture reactions. In Table I they list the threshold densities ρT for a variety of electron capture reactions. This density is where the electron Fermi energy is high enough to provide the reaction Q value. They calculate ρT from atomic masses. In general, the threshold density is seen to decrease as the mass number increases. For C/O or O/Ne stars, electron capture (at zero temperature) is not expected until ρC > ρT ≈ 6×10⁹ g/cm³, and this density is not reached until the mass of the star is above 1.40 M⊙; see Table I.
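The threshold-density calculation itself is short: the total electron Fermi energy (including rest mass) must cover the nuclear mass difference of the capture reaction. The sketch below uses standard atomic mass excesses as illustrative inputs; they are table values I supply, not numbers quoted from the paper.

```python
import math

HBARC = 197.327     # MeV fm
ME = 0.511          # electron rest mass, MeV
MU = 1.6605e-24     # atomic mass unit, g

def rho_threshold(delta_parent, delta_daughter, Ye):
    """Density (g/cm^3) at which the electron Fermi energy reaches
    the e- capture threshold, from atomic mass excesses (MeV)."""
    # atomic masses include Z electrons, so one m_e is added back
    e_thresh = (delta_daughter - delta_parent) + ME   # total electron energy, MeV
    pf = math.sqrt(e_thresh**2 - ME**2)               # Fermi momentum * c, MeV
    kf = pf / HBARC                                   # fm^-1
    n_e = kf**3 / (3.0 * math.pi**2) * 1e39           # electrons per cm^3
    return n_e * MU / Ye

# 20Ne(e-,nu)20F: mass excesses -7.042 and -0.017 MeV, Ye = 0.5
print(f"20Ne threshold: {rho_threshold(-7.042, -0.017, 0.5):.1e} g/cm^3")
# 56Fe(e-,nu)56Mn: mass excesses -60.605 and -56.910 MeV, Ye = 26/56
print(f"56Fe threshold: {rho_threshold(-60.605, -56.910, 26/56):.1e} g/cm^3")
```

With these inputs the sketch reproduces the scales discussed in the text: roughly 6×10⁹ g/cm³ for ²⁰Ne and roughly 1.1×10⁹ g/cm³ for ⁵⁶Fe.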
The threshold densities in Table I are for ground state to ground state transitions. These transitions may be forbidden by the high spin of the daughter nucleus. However, a large forbidden matrix element was recently observed for the transition corresponding to electron capture from the (0⁺) ²⁰Ne ground state to the (2⁺) ²⁰F ground state. If the reaction must proceed via an excited state of the daughter nucleus, to obtain a significant rate, ρT will be even higher than the value in Table I.
Cheng and colleagues considered C/O or O/Ne WD with masses between 1.08 and 1.23 M⊙. They inferred these masses by comparing the stars’ absolute magnitudes and colors to WD cooling models. Note that the absolute magnitudes were determined by recent Gaia parallax measurements. The central density of a 1.23 M⊙ WD (assuming Ye = 0.5) is only 1.9 × 10⁸ g/cm³. This is too low for electron capture on ¹²C, ¹⁶O, or ²⁰Ne; see Table I. Therefore, conventional electron capture reactions are likely not significantly heating the stars Cheng considers.
It may be possible to form WD with Fe cores, where the electron fraction is Ye ≈ 26/56 = 0.464. For example, a failed SN could leave behind an Fe core. Not only do these stars have higher central densities (see Fig. 1), but the threshold density for electron capture on Fe is also lower, ≈ 1.1 × 10⁹ g/cm³; see Table I. This density is reached in a 1.16 M⊙ Fe WD. Furthermore, impurities could have even lower threshold densities. For example, ⁵⁴Fe has a sizable isotopic abundance on Earth (≈ 6%) and a very low threshold density for electron capture; see Table I. They concluded that electron capture could very well be significant in massive Fe WD.
In addition to electron capture, pycnonuclear, or density driven, fusion reactions can also take place. In pycnonuclear fusion, quantum zero point motion allows two nuclei to approach and tunnel through the Coulomb barrier. The pycnonuclear fusion reaction that occurs first, at the lowest density, is likely to be ¹²C + ¹²C. This is because heavier nuclei, in general, will need to tunnel through larger Coulomb barriers. Pycnonuclear reactions are greatly aided by the strong screening of the Coulomb barrier by other nearby ions. At present there are significant uncertainties in pycnonuclear reaction rates because they depend very sensitively on the exact distribution of ions within the crystal lattice. In addition, there is some uncertainty in the nuclear S factor at very low energies.
Nevertheless, there are useful estimates of pycnonuclear rates, either in the pure pycnonuclear regime near zero temperature or in the thermally enhanced pycnonuclear regime at somewhat higher temperatures. In order to obtain a luminosity near 10⁻³ L⊙ from ¹²C+¹²C fusion, Horowitz and colleagues estimated that a reaction rate of roughly R ≈ 5 × 10¹¹ cm⁻³ s⁻¹ is needed. Using the rate shown in the inset of Fig. 2, this requires a density of very roughly ρT ≈ 3 × 10⁹ g/cm³, as listed in Table I. This density is an order of magnitude larger than the 2 × 10⁸ g/cm³ central density of a 1.25 M⊙ (C/O) WD. Although there is considerable uncertainty in the pycnonuclear rate, it is unlikely the uncertainty is this large. Furthermore, the pycnonuclear rate depends strongly on the density. Therefore, if pycnonuclear fusion were to provide 10⁻³ L⊙ for a 1.25 M⊙ star, the luminosity would likely be very much smaller for even slightly smaller stars and very much larger for even slightly more massive stars. They concluded that pycnonuclear fusion is unlikely to provide significant heating for many of the massive WD that Cheng et al. consider.
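The link between a volumetric fusion rate and a luminosity is simple arithmetic, L = R·Q·V, and can be used to see what burning volume a rate of ~5 × 10¹¹ cm⁻³ s⁻¹ implies. The Q value for ¹²C+¹²C → ²⁴Mg (≈ 13.93 MeV) and the idea of a single central burning region are my illustrative assumptions.

```python
# Back-of-envelope: burning volume implied by L = R * Q * V.
MEV_TO_ERG = 1.602e-6
L_SUN = 3.846e33            # erg/s

L_target = 1e-3 * L_SUN     # luminosity scale discussed in the text
Q = 13.93 * MEV_TO_ERG      # energy per 12C+12C -> 24Mg fusion, erg (assumed)
R = 5e11                    # reactions per cm^3 per s

V = L_target / (R * Q)      # implied burning volume, cm^3
r_km = (3.0 * V / (4.0 * 3.14159265)) ** (1.0 / 3.0) / 1e5
print(f"burning volume ~ {V:.1e} cm^3 (sphere of radius ~ {r_km:.0f} km)")
```

The implied region is a sphere only a few hundred kilometers across, i.e. the quoted rate need only be sustained in a small central core to supply 10⁻³ L⊙.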
They also explored heating from dark matter annihilation and found that WD appear to be too small to capture enough dark matter for this to be important. Finally, if dark matter condenses to very high densities inside a WD, then this will also increase the density of conventional matter and could start pycnonuclear or electron capture reactions. What happens next may depend on the dynamical scenario. One possibility is the ignition of a Type Ia supernova and the complete destruction of the star. Another possibility, if the high-density region is very small indeed, is that the tiny amount of material in this region is burned to Fe without releasing enough heat to start material burning at the lower densities outside the small dark matter core. In this case the dark matter may become encased in a more or less inert Fe core with little overall change to the star. In neither case would the star receive a modest amount of heat for billions of years.