Small Telescopes, Big Science (Astronomy)

A quest to open up the dynamic, infrared sky begins

Our Milky Way galaxy is chock-full of dust. Stars are essentially dust-making factories that infuse the galaxy with a haze of dusty elements required for making planets and even life. But all that dust can make viewing the cosmos difficult. Telescopes that detect visible, or optical, light cannot see through the murkiness, and thus some of what goes on in the universe remains enshrouded.

Luckily for astronomers, infrared light, which has longer wavelengths than optical light, can sneak past dust. Several infrared-sensing telescopes, such as NASA’s Spitzer Space Telescope, have taken advantage of this fact and revealed much of the so-called infrared sky, including hidden planets, stars, supermassive black holes, and more. The next frontier for infrared astronomy involves watching how the infrared sky changes over time, an effort that Mansi Kasliwal (MS ’07, PhD ’11), an assistant professor of astronomy at Caltech, refers to as opening up the dynamic infrared sky.

To that end, Kasliwal has planned a series of four small ground-based infrared telescopes that will reveal everything from never-before-seen star explosions and asteroids to infrared counterparts of stellar collisions that send ripples through space and time known as gravitational waves.

The ambitious plan begins with Palomar Gattini-IR, a robotic instrument now in operation at Palomar Observatory, and will eventually include two additional instruments, called WINTER (Wide-field Infrared Transient Explorer) and DREAMS (Dynamic Red All-Sky Monitoring Survey), both of which are under construction. The final step in the plan is to build an instrument destined for Antarctica, where the chilly temperatures lead to even crisper views of the infrared sky.

“We are changing the game,” says Kasliwal. “We are building little telescopes that do big science.”

The little structure at Palomar Observatory where Gattini is housed. Credit: Caltech

Palomar Gattini-IR, or just Gattini for short, takes its name from the Italian word for kittens, gattini. The name came from Kasliwal’s collaborator Anna Moore, a professor of astronomy at Australian National University, who used it casually to refer to her own fleet of small telescopes in Antarctica. “The name just stuck,” explains Kasliwal. Palomar Gattini-IR has been busy robotically scanning the skies from its perch in a small dome at Palomar Observatory since 2019 and has already produced some interesting results.

Stars That Go Bang

One recent paper accepted for publication in The Astrophysical Journal reports the first real estimate of the number of nova explosions, or novae, that go off in our Milky Way galaxy per year (the answer is about 46). Novae are not as bright as supernovae, but they are powerful nonetheless and can briefly shine brighter than one million suns. They occur when a white dwarf, the burned-out core of a star, siphons enough material off a companion star to cause an explosion. These bursts are thought to seed our universe with many of the elements that make up our periodic table; in fact, novae are thought to be the main producers of lithium in our galaxy.

But novae can be hard to find because they often lie within the thick and dusty band of our Milky Way. Previous estimates of the rate of novae in our galaxy were wildly uncertain, with only about a dozen novae discovered each year.

“There was little consensus before now on the rate of novae in our galaxy,” says Kishalay De (MS ’18), a graduate student at Caltech and lead author of the Gattini study on novae. “The novae can be hidden behind huge columns of dust, so optical surveys could not find them.”

The novae results demonstrate the power of an infrared survey like Gattini, which scans the whole Northern sky every two nights. The newfound novae were “insanely easy to pick out,” according to Kasliwal, because they glow brightly when viewed in infrared light.

Mansi Kasliwal with members of her team who helped build the Gattini telescope. From left to right: Kishalay De, Scott Adams, Alex Delacroix, and Timothee Greffe of Caltech; Jamie Soon of Australian National University, and Kasliwal. Credit: Caltech

“This is truly a ground-breaking study,” says Allen Shafter, a nova expert at San Diego State University. “Dust limits the reach of optical nova surveys to a relatively small volume of space near the sun. As a result, optical estimates of the Galactic nova rate require a large and uncertain extrapolation of the nova rate in the solar neighborhood to the full extent of our Milky Way galaxy. The new Gattini infrared nova study has greatly increased the volume of space that can be directly surveyed, thereby reducing the extent of the required extrapolation and resulting in a more accurate estimate of the Galactic nova rate than has been hitherto possible.”

An Infrared Legacy

Caltech is a pioneer in the field of infrared astronomy. The late astronomy professors Gerry Neugebauer (PhD ’60) and Robert Leighton (BS ’41 and PhD ’47) designed and built one of the world’s first infrared telescopes. Later, Neugebauer and Tom Soifer (BS ’68), the Harold Brown Professor of Physics, Emeritus, helped create the first space mission to perform an all-sky infrared survey, called IRAS (Infrared Astronomical Satellite), which launched in 1983 and led to the creation of Caltech’s Infrared Processing and Analysis Center, now called simply IPAC.

Other IPAC infrared projects include the ground-based 2MASS (Two Micron All-Sky Survey), which scanned the entire sky from 1997 to 2001; NASA’s Spitzer Space Telescope, a sister telescope to the Hubble Space Telescope that ceased operations in 2020; and NASA’s WISE (Wide-field Infrared Survey Explorer), now called NEOWISE (Near-Earth Object WISE) and dedicated primarily to the search for asteroids.

These previous infrared surveys catalogued millions of never-before-seen asteroids, stars, galaxies, and other objects, and had better resolution than Gattini, but they did not scan the whole sky as quickly.

“We’re doing a large chunk of what 2MASS did every night,” says De. “Gattini is the first-ever survey of the dynamic, or changing, infrared sky. We have traded in resolution for a wide field of view to enable us to regularly capture the whole night sky.” Gattini’s telescope is only 30 centimeters in aperture, but its field of view is a whopping 25 square degrees, 40 times larger than that of any past or current infrared telescope.
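
As a rough back-of-the-envelope check of that cadence (a sketch with assumed numbers, not Gattini’s actual observing strategy), one can estimate how many 25-square-degree pointings it takes to tile the accessible sky and how much telescope time that implies:

    import math

    # Assumed numbers for illustration only (not Gattini's real parameters):
    # the full celestial sphere is ~41,253 square degrees, and roughly half
    # of it is accessible from a northern site over a stretch of nights.
    full_sky_deg2 = 4 * math.pi * (180 / math.pi) ** 2   # ~41,253 sq. deg.
    accessible_deg2 = 0.5 * full_sky_deg2                 # assume ~half the sky
    field_of_view_deg2 = 25                               # Gattini's quoted field of view
    seconds_per_pointing = 90                             # assumed exposure plus overhead

    pointings = accessible_deg2 / field_of_view_deg2
    hours_needed = pointings * seconds_per_pointing / 3600
    print(f"pointings per full pass: {pointings:.0f}")      # ~825
    print(f"telescope time per pass: {hours_needed:.1f} h")  # ~20.6 h

With those assumptions the survey needs on the order of 800 pointings and roughly two nights of observing per pass, consistent with the cadence quoted above.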

“Caltech is a pioneer for both infrared astronomy and time-domain astronomy, so it only makes sense that we would combine the two in the first dynamic infrared sky survey,” says Kasliwal. Time-domain astronomy refers to nightly surveys of the changing sky; Caltech’s Zwicky Transient Facility (ZTF) is a key instrument in this growing field, but unlike Gattini, it detects optical light.

From the Ground Up

The Gattini instrument was built at Caltech by Kasliwal and her team, including both graduate and undergraduate students. It was first installed at Palomar in 2018 and took some time to calibrate and set up to work automatically. “We left the telescope on its own to operate robotically,” says De. “Then the data came pouring down from the sky to our computers thanks to our data pipeline.”

One of the challenges in designing a survey instrument like Gattini is the development of software. Gattini’s software automatically sifts through enormous amounts of data to detect changes in the sky every night. De spent six months developing the software and data pipeline for the project as part of his PhD thesis.
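
Transient-detection pipelines of this kind generally work by subtracting a reference image of the same patch of sky from each new exposure and flagging statistically significant residuals. The snippet below is a minimal sketch of that idea using generic NumPy operations, not the actual Gattini software, which also handles calibration, image alignment, point-spread-function matching, and candidate vetting:

    import numpy as np

    def find_transients(new_image, reference_image, threshold_sigma=5.0):
        """Flag pixels that brightened significantly relative to a reference frame.

        A toy stand-in for difference imaging: real pipelines first align the
        images and match their point-spread functions and photometric scales,
        then group significant pixels into candidate sources.
        """
        difference = new_image - reference_image
        # Robust noise estimate from the median absolute deviation of the residuals.
        noise = 1.4826 * np.median(np.abs(difference - np.median(difference)))
        candidates = np.argwhere(difference > threshold_sigma * noise)
        return difference, candidates

    # Toy usage with synthetic frames: flat sky plus one new bright source.
    rng = np.random.default_rng(0)
    reference = rng.normal(100.0, 5.0, size=(512, 512))
    new = reference + rng.normal(0.0, 5.0, size=(512, 512))
    new[300, 200] += 200.0  # an artificial "nova"

    _, hits = find_transients(new, reference)
    print(hits)  # expected to contain the pixel (300, 200)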

“These software techniques are of prime importance to future space-based telescopes as well,” says De, “because they remove the blurring caused by the earth’s atmosphere and hence can in principle get extremely sharp images.”

Now that Gattini is up and running, astronomers have been mining its data for use in various projects. For instance, Caltech professor of astronomy Lynne Hillenbrand and her team used the instrument’s data to help discover a rare bursting young star hidden by clouds of dust. Hillenbrand’s group had previously discovered a similar star with the help of NEOWISE.

“Gattini can uniquely detect objects that are so buried in dust that they cannot be seen in visible light, and which brighten so rapidly that only Gattini scans the sky fast enough to pick them out,” says Hillenbrand.

Next-Generation “Kittens”

Next up in Kasliwal’s plan to open the dynamic infrared skies are WINTER and DREAMS. WINTER, which is currently being built at MIT under the leadership of Kasliwal’s collaborator Rob Simcoe (PhD ’04), a professor of physics, is scheduled to begin operations at Palomar in the fall of 2021. DREAMS is being built by a team led by Moore in Australia and is scheduled to begin operations at Siding Spring Observatory in 2022. Both telescopes will use next-generation infrared detectors that are more efficient than those on Gattini.

The final step is to build an infrared survey telescope in Antarctica that will take advantage of the frigid air. “The night sky is blindingly bright in infrared light, but it’s 40 times darker in Antarctica at infrared wavelengths, which is partly due to the cold temperatures,” explains Kasliwal.

Another reason for building a survey telescope at the South Pole is that, together with the telescopes in the Northern Hemisphere, it will cover the entire sky. “It’s always nighttime somewhere,” she says.

A Goldmine of a Find

One of Kasliwal’s dreams is to be able to identify cataclysmic mergers of neutron stars, dramatic events that produce what astronomers call kilonovas. These explosions are even more powerful than novae, and are thought to generate a significant amount of the universe’s heaviest elements, including gold and platinum. Kasliwal’s team identified one such explosion along with other groups back in 2017, when LIGO (Laser Interferometer Gravitational-wave Observatory) first identified the gravitational waves produced by the collision. The occasion marked the first time that both gravitational waves and light were detected from the same event, and helped usher in the field of multi-messenger astronomy (where gravitational waves, light, and neutrinos are the messengers).

Since that time, LIGO has detected dozens of additional gravitational-wave events, but none have been seen simultaneously in light. Kasliwal suspects this may be because kilonovas inherently produce much more infrared than optical light and are thus being missed by optical telescopes. Each step in Kasliwal’s plan—Gattini, WINTER, DREAMS, and a future instrument in Antarctica—has the ability to sleuth out the hidden kilonovas with increasing sensitivity. It is also possible that one of the telescopes may even catch a long-sought neutron star and black hole merger, which could be even more luminous in infrared light than neutron star collisions.

“There is a lot you can do with small ground-based telescopes,” she says. “Our small teams are very agile, and enable us to have some fun, take risks, and try something new. We have the freedom to dream big.”

Palomar Gattini-IR is funded by Caltech, Australian National University, the Mt. Cuba Foundation, the Heising-Simons Foundation, and the US-Israel Binational Science Foundation. The instrument is a collaborative project among Caltech, Australian National University, University of New South Wales, Columbia University, University of Chinese Academy of Sciences, and the Weizmann Institute of Science.

Featured image: The Gattini telescope. Credit: Caltech


Reference: Kishalay De, Mansi M. Kasliwal, Matthew J. Hankins, Jennifer L. Sokoloski, Scott M. Adams, Michael C. B. Ashley, Aliya-Nur Babul, Ashot Bagdasaryan, Alexandre Delacroix, Richard Dekany, Timothee Greffe, David Hale, Jacob E. Jencson, Viraj R. Karambelkar, Ryan M. Lau, Ashish Mahabal, Daniel McKenna, Anna M. Moore, Eran O. Ofek, Manasi Sharma, Roger M. Smith, Jamie Soon, Roberto Soria, Gokul Srinivasaragavan, Samaporn Tinyanont, Tony D. Travouillon, Anastasios Tzanidakis, Yuhan Yao, “A population of heavily reddened, optically missed novae from Palomar Gattini-IR: Constraints on the Galactic nova rate”, ArXiv, pp. 1-25, 2021. https://arxiv.org/abs/2101.04045


Provided by Caltech

Where is Dark Matter Hiding? (Cosmology / Astronomy)

Scientists turn to new ideas and experiments in the search for dark matter particles.

Every second, millions to trillions of particles of dark matter flow through your body without even a whisper or trace. This ghostly fact is sometimes cited by scientists when they describe dark matter, an invisible substance that accounts for about 85 percent of all matter in the universe. Unlike so-called normal matter, which includes everything from electrons to people to planets, dark matter does not absorb, reflect, or shine with any light. It is … dark. But if we cannot see dark matter, how do scientists know it is there? The answer is gravity. Astronomers indirectly detect dark matter through its gravitational influences on stars and galaxies. Wherever normal matter resides, dark matter can be found lurking unseen by its side.

The first real evidence for dark matter came in 1933, when Caltech’s Fritz Zwicky used the Mount Wilson Observatory to measure the visible mass of a cluster of galaxies and found that it was much too small to prevent the galaxies from escaping the gravitational pull of the cluster. Something else, concluded Zwicky, was acting like glue to hold clusters of galaxies together. He named the substance dunkle Materie, or dark matter in German. In the 1970s, Vera Rubin and Kent Ford, while based at the Carnegie Institution for Science, measured the rotation speeds of individual galaxies and found evidence that, like Zwicky’s galaxy cluster, dark matter was keeping the galaxies from flying apart. Other evidence throughout the years has confirmed the existence of dark matter and shown how abundant it is in the universe. In fact, dark matter is about five times more common than normal matter.
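
Zwicky’s reasoning can be captured in a single order-of-magnitude relation: for a cluster of radius R whose member galaxies move with a typical velocity dispersion σ, the virial theorem ties the total gravitating mass to quantities an astronomer can measure (a schematic estimate; the exact prefactor depends on the cluster’s mass profile and on how the dispersion is measured):

\[ M_{\mathrm{dyn}} \sim \frac{\sigma^{2} R}{G}. \]

With the measured galaxy speeds and cluster size, this dynamical mass came out far larger than the mass implied by the cluster’s starlight, which is the mismatch Zwicky attributed to dunkle Materie.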

Researchers are building a new dark matter experiment, called SuperCDMS, deep underground in a mine in Canada. Photo: David Hawkins, SNOLAB

“The universe is hitting us over the head with evidence of dark matter,” says Phil Hopkins, professor of theoretical astrophysics at Caltech. “Whether it is the motion of galaxies, or the fact that dark matter bends light, or the expansion of the universe, or the growth of structures in the universe, there are many different types of measurements that have been made and every single one of them fits the same paradigm of dark matter.”

Yet, despite its preponderance, scientists have not been able to identify the particles that make up dark matter. They know dark matter exists and where it is but cannot directly see it. Since the 1990s, scientists have been building large experiments designed to catch elusive dark matter particles, but they continue to come up empty-handed. WIMPs (weakly interacting massive particles), which some still consider the leading candidate for dark matter, have not been found in any of the data collected so far, nor have particles called axions; both WIMPs and axions are hypothetical elementary particles proposed to solve outstanding theoretical mysteries in the widely accepted model of particle physics, called the Standard Model, which classifies all known elementary particles and describes three of the four known fundamental forces (the electromagnetic, weak, and strong interactions), leaving out gravity. Additional dark matter candidates include particles called sterile neutrinos, along with primordial black holes. Some theorists have proposed that modifications to our theories of gravity might explain away dark matter, though this idea is less favored.

Kathryn Zurek is a theoretical physicist and one of the pioneers of hidden-sector theories of dark matter. Photo: Lance Hayashida/Caltech

In the past decade, another set of dark matter candidates has emerged and is growing in popularity. These candidates collectively belong to a category known as the hidden, or dark, sector. At Caltech, hidden-sector ideas are in full bloom, with several scientists cultivating new theories and experiments.

“When you look beyond WIMPs and axions, a whole range of observational consequences open up,” says Kathryn Zurek, a professor of theoretical physics at Caltech and one of the pioneers of hidden-sector theories. “The WIMP and axion paradigms are great because they are very predictive and led to the building of direct-detection experiments. Now we can leverage this beautiful technology to look for hidden-sector dark matter.”

Hidden Valleys

In 2006, Zurek and colleagues proposed the idea that dark matter could be part of a hidden sector, with its own dynamics, independent of normal matter like photons, electrons, quarks, and other particles that fall under the Standard Model. Unlike normal matter, the hidden-sector particles would live in a dark universe of their own. Somewhat like a school of fish that swim only with their own kind, these particles would interact strongly with one another but might occasionally bump softly into normal particles via a hypothetical messenger particle. This is in contrast to the proposed WIMPs, for example, which would interact with normal matter through the known weak force by exchanging a heavy particle.

A key feature of hidden-sector particles is that they would be much lower in mass and energy than other proposed dark matter candidates like WIMPs. Hidden-sector dark matter is proposed to range in mass from about one-trillionth of the proton mass up to roughly the mass of a proton. Technically, this translates to masses between milli-electron-volts and giga-electron-volts (eV); a proton has a mass of about one giga-eV.
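
The conversion behind that range is simple arithmetic, using the rounded proton rest energy of about 10⁹ eV quoted above:

\[ 10^{-12}\, m_p c^{2} \approx 10^{-12} \times 10^{9}\ \mathrm{eV} = 10^{-3}\ \mathrm{eV}\ \text{(one milli-eV)}, \qquad m_p c^{2} \approx 10^{9}\ \mathrm{eV}\ \text{(one giga-eV)}. \]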

“We are moving to a new frontier of lighter dark matter,” says Zurek. “At first, we called these particles hidden valleys because the idea was that you would climb a mountain pass and look down to very low-energy particles.” But now, she says, the phrase hidden valley has morphed into hidden, or dark, sectors.

“I am humbled by the universe. We should be embarrassed at some level about how little we know, but this can also be an opportunity to learn more.”

— MARK WISE

Sean Carroll, research professor of physics at Caltech, and his colleagues also wrote an early paper, in 2008, on the idea that dark matter might interact just with itself. Similar to the hidden-sector ideas, the team proposed that, “just like ordinary matter couples to a long-range force known as ‘electromagnetism’ mediated by particles called ‘photons,’ dark matter couples to a new long-range force known (henceforth) as ‘dark electromagnetism,’” Carroll wrote in his blog, Preposterous Universe, in 2008.

“Now, years later, scientists are in a phase of ruling out more and more models,” says Carroll. “Our dark photon models are still possible but less likely than other models like Kathryn’s.”

So how does one go about finding a hypothetical particle less massive than a proton? Zurek and others have proposed tabletop-size experiments much smaller than other dark matter experiments, which can weigh on the order of tons. Although hidden-sector particles are thought to only rarely and weakly interact with normal matter, when they do, they cause disturbances that could, in theory, be detected.

Zurek and her team have proposed a way to detect a disturbance caused by the hidden sector using a type of quasiparticle called a phonon. A specialized sensor would be used to catch the phonon vibrations, indicating the presence of dark matter. (A quasiparticle is a collective phenomenon that behaves like a single particle.) Zurek is also developing other methods to help in the hunt for dark matter, including gravity-based techniques that measure how clumps of dark matter in the cosmos affect the timing of flashing stellar remnants called pulsars. Like many scientists in the field, she feels that it is important to take a multipronged approach to the problem and look for dark matter with different but compatible methods.

Mark B. Wise, the John A. McCone Professor of High Energy Physics, who, in 1982, was among the first to propose that axions could be the missing dark matter particles, says dark matter could be axions, hidden-sector particles, or something else entirely. Wise made his proposal along with John Preskill, Caltech’s Richard P. Feynman Professor of Theoretical Physics and director of the Institute for Quantum Information and Matter (IQIM), and Frank Wilczek of MIT. “We look where we can and where nature tells us to look,” Wise says. “As a theoretical physicist, I am humbled by the universe. We should be embarrassed at some level about how little we know, but this can also be an opportunity to learn more.”

Deep Underground

SuperCDMS is seen here coming together in a cleanroom inside a mine in Canada. Photo: David Hawkins, SNOLAB

About 6,000 feet underground, in a working nickel mine in Ontario, Canada, a dark matter experiment is taking shape. Unlike the small experiments proposed by Zurek and others, this one is a massive undertaking. Scheduled to begin operations in 2022, SuperCDMS (Super Cryogenic Dark Matter Search) is designed to find lighter WIMPs than those sought before, with masses of 1 giga-eV, which is close to the mass of a proton. Because SuperCDMS is looking for lower-mass particles, it also has the ability to find lighter hidden-sector particles.

“When you enter the lab, it’s an interesting process,” says Sunil Golwala, professor of physics at Caltech. “You go down the mine elevator, sometimes with the miners, and then you walk about a kilometer in mine clothes: full-body mine suit, hard hat, boots, and all that. And then when you get to the lab entrance, you take all that off and take a shower. Then you put on a bunny suit and go into a lab, all of which is kept as a clean room. So, the lab is kept extremely clean even though it’s sitting in the middle of a dirty mine.”

Sunil Golwala is an experimental physicist working on SuperCDMS. Photo courtesy of Sunil Golwala

Golwala helps manage the fabrication of the detector assemblies for SuperCDMS; the detectors are being built at the SLAC National Accelerator Laboratory, which leads the SuperCDMS project. Golwala explains that most dark matter experiments searching for WIMPs and hidden-sector dark matter are performed underground, often in mines, in order to shield the instruments from cosmic rays that could mask the dark matter signals.

WIMPs were proposed in the late 1970s and early 1980s based on the idea that hypothetical particles heavier than those in the Standard Model, with masses of more than 100 protons, could explain mysterious features of the model and, importantly, would just happen to be produced in the early universe in the amount needed to explain the dark matter abundance. In addition, a theory known as supersymmetry (which states that every particle has a partner with a complementary spin) predicts partner particles, one of which could be a WIMP. But, over the years, evidence for supersymmetry has failed to materialize.
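
That coincidence, in which particles with weak-scale interactions are left over from the early universe in just about the right amount, is often summarized by the schematic thermal freeze-out relation: the leftover (relic) density of such a particle scales inversely with how readily it annihilates (an order-of-magnitude relation, not a precise calculation):

\[ \Omega_{\chi} h^{2} \;\approx\; 0.1 \left( \frac{3\times 10^{-26}\ \mathrm{cm^{3}\,s^{-1}}}{\langle \sigma v \rangle} \right). \]

A cross-section roughly characteristic of the weak interaction sits near the value in the numerator, which is why WIMPs would naturally end up close to the observed dark matter abundance.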

“There had been hope for many years that if we saw dark matter it would be a hint that supersymmetry exists, but people are getting less and less convinced. Kathryn is one of the people who has strongly advocated for looking at other models, given that supersymmetry has not turned out to be discovered yet,” says Golwala. “That’s been one of the big motivations for wanting to look down at the mass range that we’re looking at. Twenty years ago, if you told someone you were looking for dark matter at 1 giga-eV, they might say, ‘Why are you doing that? There’s no supersymmetric particle down at 1 giga-eV.’ And now, 20 years later, as far as we know, there’s no supersymmetry, so we had better pay attention to these other models. The nice thing about SuperCDMS is that we are looking for WIMPs and the hidden sector.”

In addition to contributing to SuperCDMS, Golwala is working on tabletop experiments specifically designed to uncover the hidden-sector particles. He says he pays close attention to Zurek’s guidance on what types of interactions to look for between dark matter and normal matter. “Kathryn is a theorist, and I am an experimentalist. She says, ‘This is the thing you should build in order to look for this model of dark matter.’ I take her ideas and say, ‘OK, we are going to build an experiment that can do that.’”

Just Add Dark Matter

Phil Hopkins and his team run computer simulations of galaxies to test theories of dark matter. Photo:Lance Hayashida/Caltech

Outside the lab, there are other ways to probe the hidden sector. Phil Hopkins and his team have embraced the various new dark matter ideas and folded them into their computer simulations of galaxies and the universe. Like baking a batch of cookies, researchers can mix and match cosmic ingredients in a computer simulation and see what arises. If a resulting galaxy looks like the real thing, then scientists know they are closer to understanding its ingredients.

“If you assume that most matter in the universe is dark matter, and that dark matter interacts only with gravity, then it is actually pretty simple to set up your computer simulation,” says Hopkins. “You have one force, gravity, and you let everything evolve from there.”
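
In its most stripped-down form, that “one force” setup is just an N-body integration of Newtonian gravity among collisionless particles. The sketch below is a toy illustration of the idea, not the production code Hopkins’s group uses: a few hundred “dark matter” particles attract each other through softened gravity and are advanced with a leapfrog time step.

    import numpy as np

    def accelerations(pos, mass, softening=0.05, G=1.0):
        """Softened Newtonian gravity between all particle pairs (toy units)."""
        diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]        # (N, N, 3) separations
        inv_dist3 = (np.sum(diff**2, axis=-1) + softening**2) ** -1.5
        np.fill_diagonal(inv_dist3, 0.0)                             # no self-force
        weights = mass[np.newaxis, :, np.newaxis] * inv_dist3[:, :, np.newaxis]
        return G * np.sum(diff * weights, axis=1)

    def evolve(pos, vel, mass, dt=1e-3, steps=1000):
        """Kick-drift-kick leapfrog: gravity only, everything else left out."""
        acc = accelerations(pos, mass)
        for _ in range(steps):
            vel += 0.5 * dt * acc
            pos += dt * vel
            acc = accelerations(pos, mass)
            vel += 0.5 * dt * acc
        return pos, vel

    # A small random "halo" of equal-mass particles, started at rest.
    rng = np.random.default_rng(1)
    n_particles = 200
    positions = rng.normal(0.0, 1.0, size=(n_particles, 3))
    velocities = np.zeros((n_particles, 3))
    masses = np.full(n_particles, 1.0 / n_particles)

    positions, velocities = evolve(positions, velocities, masses)
    print("median particle radius after evolution:",
          np.median(np.linalg.norm(positions, axis=1)))

Production cosmological simulations add an expanding background, far more particles, tree or mesh algorithms for the gravity sum, and, for the hidden-sector runs described next, extra self-interaction physics on top of this gravity-only core.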

Recently, Hopkins and his students have refined this simple simulation to include hidden-sector physics. He says his research serves as a bridge between that of Zurek and Golwala, in that Zurek comes up with the theories, Hopkins tests them in computers to help refine the physics, and Golwala looks for the actual particles. In the galaxy simulations, the hidden sector dark matter is “harder to squish” because of its self-interacting properties, explains Hopkins, and this trait ultimately affects the properties of galaxies. The team’s computer creations allow them to make predictions about the structure of galaxies on fine scales, which next-generation telescopes, such as the upcoming Vera C. Rubin Observatory, scheduled to begin operations in Chile in 2022, should be able to resolve.

“You can imagine a whole dark universe or this hidden sector where all sorts of things are happening underneath normal matter or ‘under the hood,’ as you might say. What we have tried to do is ask, ‘What are the astrophysical consequences?’” says Hopkins.

Several other Caltech researchers are also on a quest to uncover the nature of dark matter, including cosmologists who study its effects on vast scales that span the history of the universe, as well as particle physicists who search for dark matter candidates produced in high-energy colliders such as CERN’s Large Hadron Collider, or LHC. Cristián Peña (MS ’15, PhD ’17), a Lederman Postdoctoral Fellow at Fermilab and a research scientist with the High Energy Physics group and INQNET (INtelligent Quantum NEtworks and Technologies) at Caltech, was among the first, in 2016, to attempt to discover dark matter in high-energy proton-proton collisions at the LHC. Those searches for dark matter were made with data collected by the Compact Muon Solenoid instrument.

In this simulation, gas flows into the center of a galaxy via dark matter filaments, while matter is blown out by exploding stars and other factors. Photo: Phil Hopkins and Chris Hayward/Caltech

Now, Peña is developing quantum-sensing experiments to detect dark matter. The state-of-the-art sensors he is using are being developed as part of a quantum internet project involving INQNET in collaboration with Fermilab, JPL, and the National Institute of Standards and Technology, among others. INQNET was founded in 2017 with AT&T and is led by Maria Spiropulu, Caltech’s Shang-Yi Ch’en Professor of Physics. A research thrust of this program focuses on building quantum-internet prototypes including both fiber-optic quantum links and optical communication through the air, between sites at Caltech and JPL as well as other quantum network test beds at Fermilab. The optimized sensors developed with JPL for this program are also well-suited to detect very-low-mass dark matter and, as Peña says, any “feeble interactions” of hidden-sector states beyond the Standard Model of particle physics.

“Quantum sensing is an emerging research area at the intersection between particle physics and quantum information science and technology,” he says.

While most researchers agree that finding dark matter is a long shot, they feel confident that the pursuit, and all the science and technology that has been and will be acquired along the way, is worth the journey. After all, we know that dark matter exists and, as Carroll says, it is “not really all that mysterious.” When Carroll explains dark matter to the public, he has them imagine that our moon is made of dark matter and thus invisible. “We would still experience the moon’s tides on Earth even though we couldn’t see the moon. We know dark matter is there, we just can’t see it.”


Provided by Caltech

Green Chemistry And Biofuel: The Mechanism Of A Key Photoenzyme Decrypted (Chemistry)

The functioning of the enzyme FAP, useful for producing biofuels and for green chemistry, has been decrypted. This result mobilized an international team of scientists, including many French researchers from the CEA, CNRS, Inserm, École Polytechnique, the universities of Grenoble Alpes, Paris-Saclay and Aix-Marseille, as well as the European Synchrotron and Synchrotron SOLEIL. The discovery was published in Science on April 9, 2021.

The researchers decrypted the operating mechanisms of FAP (Fatty Acid Photodecarboxylase), which is naturally present in microscopic algae such as Chlorella. The enzyme had been identified in 2017 as being able to use light energy to form hydrocarbons from fatty acids produced by these microalgae. To achieve this new result, the research teams used a complete experimental and theoretical toolkit.

Understanding how FAP works is essential because this photoenzyme opens up a new opportunity for sustainable biofuel production from fatty acids naturally produced by living organisms. FAP is also very promising for producing high added-value compounds for fine chemistry, cosmetics and pharmaceutics.

In addition, due to their light-induced reaction, photoenzymes give access to ultrarapid phenomena that occur during enzymatic reactions. FAP therefore offers a unique opportunity to understand in detail a chemical reaction taking place in living organisms.

More specifically, in this work, researchers show that when FAP is illuminated and absorbs a photon, an electron is stripped in 300 picoseconds from the fatty acid produced by the algae. This fatty acid is then dissociated into a hydrocarbon precursor and carbon dioxide (CO2). Most of the CO2 generated is then turned in 100 nanoseconds into bicarbonate (HCO3-) within the enzyme. This activity uses light but does not prevent photosynthesis: the flavin molecule within the FAP, which absorbs the photon, is bent. This conformation shifts the molecule’s absorption spectrum towards the red, so that it uses photons not used for the microalgae’s photosynthetic activity.

It is the combined interpretation of the results of various experimental and theoretical approaches by the international consortium that yields the detailed, atomic-scale picture of FAP at work. This multidisciplinary study combined bioengineering work, optical and vibrational spectroscopy, static and kinetic crystallography performed with synchrotrons or an X-ray free electron laser, as well as quantum chemistry calculations.

In France[1], the study involved researchers from the Biosciences and Biotechnologies Institute of Aix-Marseille (CEA/CNRS/Aix-Marseille University), the Institute of Structural Biology (CEA/CNRS/Grenoble Alpes University), the Laboratory for Optics and Biosciences (CNRS/École Polytechnique-Institut Polytechnique de Paris/Inserm), the Advanced Spectroscopy Laboratory for Interactions, Reactivity and the Environment (CNRS/University of Lille), the Institute for Integrative Biology of the Cell (CEA/CNRS/Paris-Saclay University), the SOLEIL synchrotron, and also from the European Synchrotron (ESRF) and the Laue Langevin Institute (ILL), two major European instruments based in Grenoble. The study received funding from the French National Research Agency.

[1] The study also involved researchers from the Max Planck Institute in Heidelberg (Germany), Moscow State University (Russia) and the SLAC National Accelerator Laboratory (USA).


SOURCE

D. Sorigué et al., Mechanism and dynamics of fatty acid photodecarboxylase, Science, 2021. DOI


Provided by UGA

Is The Lens SW05 A Fossil Group? (Astronomy)

Fossil galaxy groups, fossil groups, or fossil clusters are believed to be the end-result of galaxy merging within a normal galaxy group, leaving behind the X-ray halo of the progenitor group. Galaxies within a group interact and merge. The physical process behind this galaxy-galaxy merger is dynamical friction.

Now, Denzel and colleagues have presented models of the lensing and stellar mass content of the system J143454.4+522850 (SW05), followed by a comparison with simulations, which indicates several differences from regular early-type galaxies and suggests the system may indeed be a fossil group.

“Its mass estimate fits the expectations of a massive elliptical galaxy, with a halo mass in the galaxy group range. Its stellar mass dominates the centre, and in total it makes up to 2.7% of the entire mass budget. These results, along with a lack of nearby bright galaxies suggest that SW05 is a fossil group.”

— said Denzel, first author of the study.

J143454.4+522850 or SW05 was discovered in the SpaceWarps citizen-science project. Out of the 29 promising lens candidates in that work, SW05 probably has the best lens image quality. It is a relatively large gravitational lens with four clearly separated images within a radial distance between 3.5 and 5.25 arcsec from the centre (as shown in Fig. 1).

Figure 1. Stacked observational picture of SW05: Observational data from CFHTLS (stored in the CFHT Science Archive) was taken with the wide-field imager MegaPrime in five optical bands (u, g, r, i, and z). The false-colour image was generated using a stacking procedure, where the i, r, and g bands are transformed into rgb colours. © Denzel et al.

Denzel and colleagues combined gravitational lensing with stellar population synthesis to separate the total mass of the lens into stars and dark matter. They contrasted enclosed mass profiles with state-of-the-art galaxy formation simulations, concluding that SW05 is likely a fossil group with a high stellar-to-dark-matter mass fraction (0.027±0.003) compared with expectations from abundance matching (0.012±0.004), indicative of a more efficient conversion of gas into stars in fossil groups.
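
The tension between those two numbers can be gauged with simple Gaussian error propagation (a rough significance estimate that assumes the quoted uncertainties are independent and symmetric; the paper’s own statistical treatment may differ):

    # Rough significance of the gap between the lensing-based stellar mass
    # fraction and the abundance-matching expectation, assuming independent
    # Gaussian uncertainties (an illustration, not the authors' analysis).
    measured, sigma_measured = 0.027, 0.003
    expected, sigma_expected = 0.012, 0.004

    difference = measured - expected
    combined_sigma = (sigma_measured**2 + sigma_expected**2) ** 0.5

    print(f"difference = {difference:.3f} +/- {combined_sigma:.3f}")
    print(f"roughly {difference / combined_sigma:.1f} sigma above the expectation")  # ~3 sigma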

Figure 2. Cumulative mass profiles of the SW05 lensing galaxy models and galaxies from MassiveFIRE simulations (A1, A4, C1, D7). The solid curves (lens model in dark-blue) denote the ensemble median of the enclosed total mass. The dashed curves (lens model in dark-blue) give the ensemble median of the enclosed stellar mass. The grey areas show the 99.7 % confidence range of the SW05 lens models. The enclosed lens mass is best constrained at the radial location of the images of the background source (vertical thin dotted lines). © Denzel et al.

“Our lens models indicate that there is no other major group member in the neighbourhood of SW05, supporting our conclusion. This explanation also seems to fit with the discrepancies found in the comparison with numerical simulations.”

— wrote the authors of the study in their paper.

Featured image: a false-colour image enhancing the candidate lensed images. © Denzel et al.


Reference: Philipp Denzel, Onur Çatmabacak, Jonathan P. Coles, Claude Cornen, Robert Feldmann, Ignacio Ferreras, Xanthe Gwyn Palmer, Rafael Küng, Dominik Leier, Prasenjit Saha, Aprajita Verma, “The lens SW05 J143454.4+522850: a fossil group at redshift 0.6?”, ArXiv, pp. 1-8, 2021. https://arxiv.org/abs/2104.03324


Copyright of this article totally belongs to our author S. Aman. One is allowed to reuse it only by giving proper credit either to him or to us

Better Metric For Thermoelectric Materials Means Better Design Strategies (Material Science)

New quantity helps experimentally classify dimensionality of thermoelectric materials

Researchers from Tokyo Metropolitan University have shown that a quantity known as “thermoelectric conductivity” is an effective measure for the dimensionality of newly developed thermoelectric nanomaterials. Studying films of semiconducting single-walled carbon nanotubes and atomically thin sheets of molybdenum sulfide and graphene, they found clear distinctions in how this number varies with conductivity, in agreement with theoretical predictions in 1D and 2D materials. Such a metric promises better design strategies for thermoelectric materials.

Thermoelectric devices take differences in temperature between different materials and generate electrical energy. The simplest example is two strips of different metals welded together at both ends to form a loop; heating one of the junctions while keeping the other cool creates an electrical current. This is called the Seebeck effect. Its potential applications promise effective usage of the tremendous amount of power that is wasted as dissipated heat in everyday life, whether it be in power transmission, industrial exhaust, or even body heat. In 1993, it was theorized that atomically thin, one-dimensional materials would have the ideal mix of properties required to create efficient thermoelectric devices. The resulting search led to nanomaterials such as semiconducting single-walled carbon nanotubes (SWCNTs) being applied.
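
For the welded-loop example above, the net electromotive force driving the current is simply the difference of the two materials’ Seebeck coefficients multiplied by the temperature difference between the junctions. A quick numerical illustration (with assumed, order-of-magnitude coefficients chosen for this sketch, not values from the study):

    # Thermoelectric EMF of a two-material (thermocouple) loop:
    #   V = (S_A - S_B) * (T_hot - T_cold)
    # The Seebeck coefficients below are assumed, order-of-magnitude values.
    S_A = 40e-6                    # V/K, material A
    S_B = -5e-6                    # V/K, material B
    T_hot, T_cold = 400.0, 300.0   # junction temperatures in kelvin

    voltage = (S_A - S_B) * (T_hot - T_cold)
    print(f"thermoelectric EMF: {voltage * 1e3:.1f} mV")  # 4.5 mV

Tens of microvolts per kelvin is typical of simple metal pairs, which is why practical generators stack many junctions and why better thermoelectric materials are sought.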

However, there was an ongoing issue that prevented new designs and systems from being accurately characterized. The key properties of thermoelectric devices are thermal conductivity, electrical conductivity, and the Seebeck coefficient, a measure of how much voltage is created at the interface between different materials for a given temperature difference. As material science advanced into the age of nanotechnology, these numbers weren’t enough to express a key property of the new nanomaterials that were being created: the “dimensionality” of the material, or how 1D, 2D or 3D-like the material behaves. Without a reliable, unambiguous metric, it becomes difficult to discuss, let alone optimize new materials, particularly how the dimensionality of their structure leads to enhanced thermoelectric performance.

To tackle this dilemma, a team led by Professor Kazuhiro Yanagi of Tokyo Metropolitan University set out to explore a new parameter recently flagged by theoretical studies, the “thermoelectric conductivity.” Unlike the Seebeck coefficient, the team’s theoretical calculations confirmed that this value varied differently with increased conductivity for 1D, 2D and 3D systems. They also confirmed this experimentally, preparing thin films of single-walled carbon nanotubes as well as atomically thin sheets of molybdenum sulfide and graphene, archetypal materials in 1D and 2D respectively. Measurements conclusively showed that the thermoelectric conductivity of the 1D material decreased at higher values of conductivity, while the curve for 2D materials plateaued. They also note that this demonstrates how the dimensionality of the material is retained even when the material is prepared in macroscopic films, a great boost for efforts to leverage the specific dimensionality of certain structures to improve thermoelectric performance.

Combined with theoretical calculations, the team conclude that high thermoelectric conductivity, high conventional electrical conductivity, and low thermal conductivity are key goals for the engineering of new devices. They hope these measurable, tangible targets will bring much needed clarity and unity to the development of state-of-the-art thermoelectric devices.

This work was supported by JSPS KAKENHI Grants-in-Aid for Scientific Research (17H06124, 17H01069, 18H01816, 19J21142, 20H02573, 20K15117, 26102012, 25000003, 19K22127, 19K15383, 20H05189) and the JST CREST Program (MJCR17I5).

Featured image: Theoretical calculations of Seebeck coefficient and thermoelectric conductivity for 1D, 2D and 3D materials: (a)-(c) show how the Seebeck coefficient varies for 1D, 2D and 3D materials, while (d)-(f) show the thermoelectric conductivity for the same systems. No major changes in the shape of the curves are seen for (a)-(c); drastic changes are seen for (d)-(e) beyond a threshold range marked in yellow, making thermoelectric conductivity a much more sensitive, unambiguous measure for dimensionality. © Tokyo Metropolitan University


Reference: Yota Ichinose, Manaho Matsubara, Yohei Yomogida, Akari Yoshida, Kan Ueji, Kaito Kanahashi, Jiang Pu, Taishi Takenobu, Takahiro Yamamoto, and Kazuhiro Yanagi, “One-dimensionality of thermoelectric properties of semiconducting nanomaterials”, Phys. Rev. Materials 5, 025404 – Published 26 February 2021. https://journals.aps.org/prmaterials/abstract/10.1103/PhysRevMaterials.5.025404


Provided by Tokyo Metropolitan University

Mutant KRAS and p53 Cooperate To Drive Pancreatic Cancer Metastasis (Medicine)

Preclinical research identifies CREB1 as new therapeutic target downstream of frequently mutated genes

Researchers at The University of Texas MD Anderson Cancer Center have discovered that mutant KRAS and p53, the most frequently mutated genes in pancreatic cancer, interact through the CREB1 protein to promote metastasis and tumor growth. Blocking CREB1 in preclinical models reversed these effects and reduced metastases, suggesting an important new therapeutic target for the deadly cancer.

The findings were published today in Cancer Discovery and presented at the virtual American Association for Cancer Research (AACR) Annual Meeting 2021 by Michael Kim, M.D., assistant professor of Surgical Oncology and Genetics.

“To our knowledge, this is the first study to show how these two major genetic drivers work together to promote tumor growth and metastasis,” Kim said. “We learned that signaling downstream of mutant KRAS directly promotes mutant p53 activity. This discovery provides not only a new therapeutic target but unveils a vast transcriptional network that is activated downstream of these mutant proteins.”

Mutations in KRAS and TP53, the two most frequently mutated genes in all human cancers, co-occur in roughly 70% of patients with pancreatic cancer. Mutant KRAS, found in 95% of pancreatic cancers, leads to an activated protein that aberrantly triggers many downstream signaling pathways. Mutant TP53 results in the loss of the protein’s tumor suppressor function, leaving the mutant protein capable of fueling additional oncogenic processes, such as metastasis.

Unfortunately, no current therapies are able to block the mutant forms of KRAS or p53 prevalent in pancreatic cancer, so there is a need to identify common, alternative therapeutic targets downstream of these proteins that could lead to more effective treatment regimens for pancreatic cancer, Kim explained.

To learn how mutant KRAS and p53 might be interacting, Kim’s team of researchers collaborated with Gigi Lozano, Ph.D., chair of Genetics, to develop a novel mouse model of pancreatic cancer that expresses oncogenic KRAS and mutant p53 specifically in tumor cells, leaving the tumor microenvironment unaltered.

In this model, the team observed more than twice as many metastatic lesions than was seen when p53 was genetically removed, suggesting that the mutant proteins together cause a significant increase in metastatic potential. With further study, the researchers discovered mutant KRAS activates CREB1, a transcription factor that then directly interacts with mutant p53 to promote the aberrant expression of hundreds of genes.

Michael Kim, M.D. © MD Anderson Cancer Center

This activation results in the increased expression of FOXA1, which in turn creates a new cascade of events leading to increased activity of the Wnt/β-catenin pathway, both of which promote cancer metastasis.

Using an available small-molecule drug to target CREB1 in this model resulted in decreased expression of FOXA1, β-catenin, and associated target genes, along with a corresponding reduction in metastases. While early, these findings suggest that targeting CREB1 may be a viable strategy to block the metastatic effects of mutant KRAS and p53 in pancreatic cancer.

“The identification of this cooperative node suggests that there should be increased focus on CREB1 as a target that could be therapeutically exploited to improve patient outcomes,” Kim said. “With the frequency of KRAS and TP53 mutations in human cancers, the implications of our findings may extend far beyond pancreatic cancer.”

Going forward, the researchers hope to discover other important elements working downstream of mutant p53 that may affect the cancer cells or the surrounding tumor microenvironment. A greater understanding of this complex network may point to additional therapeutic targets or combination approaches to better treat pancreatic cancer.

The research was supported by the National Institutes of Health (NIH) (K08CA218690, P01CA117969, R01CA82577, T32 CA 009599, 1S10OD024976-01, P30CA16672), the American College of Surgeons Faculty Research Fellowship, the Cancer Prevention & Research Institute of Texas (CPRIT) (RP17002), the Richard K. Lavine Pancreatic Fund, the Ben and Rose Cole Charitable Pria Foundation, and the Skip Viragh Foundation.

In addition to Kim, MD Anderson collaborators on the study include: Xinqun Li, M.D., Ph.D., Jenying Deng, Ph.D., Bingbing Dai, Ph.D., Tara G. Hughes, M.D., Christian Siangco, Jithesh Augustine, and Yaan Kang, M.D., all of Surgical Oncology; Yun Zhang, Ph.D., Joy M. McDaniel, Ph.D., Shunbin Xiong, Ph.D., Amanda R. Wasylishen, Ph.D., and Guillermina Lozano, Ph.D., all of Genetics; Kendra Allton, Bin Liu, Ph.D., and Michelle C. Barton, Ph.D., all of Epigenetics and Molecular Carcinogenesis; Eugene Koay, M.D., Ph.D., of Radiation Oncology; Florencia McAllister, M.D., of Gastrointestinal Medical Oncology and Clinical Cancer Prevention; Christopher Bristow, Ph.D., and Timothy P. Heffernan, Ph.D., of the TRACTION platform; and Anirban Maitra, M.B.B.S., of the Sheikh Ahmed Bin Zayed Al Nahyan Center for Pancreatic Cancer Research. Additional co-authors include Jason B. Fleming, M.D., of Moffitt Cancer Center and Research Institute, Tampa, FL. The authors declare no conflicts of interest.

Featured image credit: Image courtesy Michael Kim, M.D.


References: (1) Michael P Kim, Xinqun Li, Jenying Deng, Yun Zhang, Bingbing Dai, Kendra L Allton, Tara G. Hughes, Christian Siangco, Jithesh J. Augustine, Ya’an Kang, Joy M McDaniel, Shunbin Xiong, Eugene J Koay, Florencia McAllister, Christopher A. Bristow, Timothy P. Heffernan, Anirban Maitra, Bin Liu, Michelle C. Barton, Amanda R Wasylishen, Jason B. Fleming and Guillermina Lozano, “Oncogenic KRAS recruits an expansive transcriptional network through mutant p53 to drive pancreatic cancer metastasis”, Cancer Discovery, 2021. DOI: 10.1158/2159-8290.CD-20-1228 (2) Michael Paul Kim, Xinqun Li, Jenying Deng, Yun Zhang, Bingbing Dai, Kendra Allton, Tara Hughes, Christian Siangco, Jithesh Augustine, Yaan Kang, Joy M. McDaniel, Shunbin Xiong, Eugene Koay, Florencia McAllister, Christopher Bristow, Timothy Heffernan, Anirban Maitra, Bin Liu, Michelle Barton, Amanda Wasylischen, Jason Fleming, Guillermina Lozano, “Mutant p53 and oncogenic KRAS converge on CREB1 to drive pancreatic cancer metastasis”, Session PO.MCB03.05 – Nuclear Oncoproteins, Oncogenes, and Tumor Suppressor Genes, 2021. ABSTRACT #2417


Provided by MD Anderson Cancer Center

Immune-stimulating Drug Before Surgery Shows Promise in Early-stage Pancreatic Cancer (Medicine)

For the first time, researchers led by SU2C “Dream Team” show how CD40 agonist drives an immune response to hard-to-penetrate tumors

Giving early-stage pancreatic cancer patients a CD40 immune-stimulating drug helped jumpstart a T cell attack on the notoriously stubborn tumor microenvironment before surgery and other treatments, according to a new study from researchers in the Abramson Cancer Center (ACC) at the University of Pennsylvania. Changing the microenvironment from so-called T cell “poor” to T cell “rich” with a CD40 agonist earlier could help slow eventual progression of the disease and prevent cancer from spreading in more patients.

The data–which included 16 patients treated with the CD40 agonist selicrelumab–was presented today by Katelyn T. Byrne, PhD, an instructor of Medicine in the division of Hematology-Oncology in the Perelman School of Medicine at the University of Pennsylvania, during a plenary session at the American Association for Cancer Research annual meeting (Abstract #CT005).

“Many patients with early-stage disease undergo surgery and adjuvant chemotherapy. But it’s often not enough to slow or stop the cancer,” Byrne said. “Our data supports the idea that you can do interventions up front to activate a targeted immune response at the tumor site–which was unheard of five years ago for pancreatic cancer–even before you take it out.”

The purpose of CD40 agonists is to help “push the gas” on the immune system both by activating antigen-presenting cells, such as dendritic cells, to “prime” T cells and by enhancing immune-independent destruction of the tumor site. The therapies have mostly been investigated in patients with metastatic pancreatic cancer in combination with other treatments, such as chemotherapy or other immunotherapies. This is the first time the drug has been shown to drive an immune response in early-stage patients both at the tumor site and systemically–which mirrors what researchers found in their mouse studies.

The phase 1b clinical trial was conducted at four sites, including the ACC, Fred Hutchinson Cancer Research Center at the University of Washington, Case Western Reserve University, and Johns Hopkins University.

Sixteen patients were treated with selicrelumab before surgery. Of those patients, 15 underwent surgery and received adjuvant chemotherapy and a CD40 agonist. Data collected from those patients’ tumors and responses were compared to data from controls (patients who did not receive the CD40 agonist before surgery) treated at Oregon Health and Science University and Dana Farber Cancer Institute.

Multiplex imaging of immune responses revealed major differences between the two groups. Eighty-two percent of tumors in patients who received the CD40 agonist before surgery were T-cell enriched, compared to 37 percent of untreated tumors and 23 percent of chemotherapy- or chemoradiation-treated tumors. Selicrelumab-treated tumors also had less tumor-associated fibrosis (bundles of tissue that prevent T cells and traditional therapies from penetrating tumors), and antigen-presenting cells known as dendritic cells were more mature.

In the treatment group, disease-free survival was 13.8 months and median overall survival was 23.4 months, with eight patients alive at a median of 20 months after surgery.

“This is a first step in building a backbone for immunotherapy interventions in pancreatic cancer,” Byrne said.

Based on these findings, researchers are now investigating how other therapies combined with CD40 could help strengthen the immune response even further in pancreatic cancer patients before surgery.

“We’re starting to turn the tide,” said Robert H. Vonderheide, MD, DPhil, director of the ACC and senior author. “This latest study adds to growing evidence that therapies such as CD40 before surgery can trigger an immune response in patients, which is the biggest hurdle we’ve faced. We’re excited to see how the next-generation of CD40 trials will take us even closer to better treatments.”


Reference: Katelyn T. Byrne, Courtney B. Betts, Rosemarie Mick, Shamilene Sivagnanam, David L. Bajor, Daniel A. Laheru, E. Gabriela Chiorean, Mark H. O’Hara, Shannon M. Liudahl, Craig Newcomb, Cécile Alanio, Ana P. Ferreira, Byung S. Park, Takuya Ohtani, Austin P. Huffman, Sara A. Väyrynen, Andressa Dias Costa, Judith C. Kaiser, Andreanne M. Lacroix, Colleen Redlinger, Martin Stern, Jonathan A. Nowak, E. John Wherry, Martin A. Cheever, Brian M. Wolpin, Emma E. Furth, Elizabeth M. Jaffee, Lisa M. Coussens, Robert H. Vonderheide, “CT005 – T cell inflammation in the tumor microenvironment after agonist CD40 antibody: Clinical and translational results of a neoadjuvant clinical trial”, Annual Meeting 2021. https://www.abstractsonline.com/pp8/#!/9325/presentation/5136


Provided by University of Pennsylvania School of Medicine

New Multiple Sclerosis Subtypes Identified Using Artificial Intelligence (Medicine)

Scientists at UCL have used artificial intelligence (AI) to identify three new multiple sclerosis (MS) subtypes. Researchers say the groundbreaking findings will help identify those people more likely to have disease progression and help target treatments more effectively.

MS affects over 2.8 million people globally and 130,000 in the UK, and is classified into four* ‘courses’ (groups), which are defined as either relapsing or progressive. Patients are categorised by a mixture of clinical observations, assisted by MRI brain images, and patients’ symptoms. These observations guide the timing and choice of treatment.

For this study, published in Nature Communications, researchers wanted to find out if there were any – as yet unidentified – patterns in brain images, which would better guide treatment choice and identify patients who would best respond to a particular therapy.  

Explaining the research, lead author Dr Arman Eshaghi (UCL Queen Square Institute of Neurology) said: “Currently MS is classified broadly into progressive and relapsing groups, which are based on patient symptoms; it does not directly rely on the underlying biology of the disease, and therefore cannot assist doctors in choosing the right treatment for the right patients.

“Here, we used artificial intelligence and asked the question: can AI find MS subtypes that follow a certain pattern on brain images? Our AI has uncovered three data-driven MS subtypes that are defined by pathological abnormalities seen on brain images.”

In this study, researchers applied the UCL-developed AI tool, SuStaIn (Subtype and Stage Inference), to the MRI brain scans of 6,322 MS patients. The unsupervised SuStaIn trained itself and identified three (previously unknown) patterns.

The new MS subtypes were defined as ‘cortex-led’, ‘normal-appearing white matter-led’, and ‘lesion-led.’ These definitions relate to the earliest abnormalities seen on the MRI scans within each pattern.

Once SuStaIn had completed its analysis on the training MRI dataset, it was ‘locked’ and then used to identify the three subtypes in a separate independent cohort of 3,068 patients thereby validating its ability to detect the new MS subtypes.
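
The train-then-lock-then-validate workflow described above is a standard pattern for unsupervised subtyping. The sketch below illustrates that pattern with a generic clustering model (a Gaussian mixture from scikit-learn) and synthetic stand-in data; it is not the SuStaIn algorithm itself, which additionally infers a disease stage along with each subtype:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Synthetic stand-ins for MRI-derived features (e.g. regional lesion and
    # atrophy measures); the study used 6,322 training and 3,068 validation patients.
    rng = np.random.default_rng(42)
    train_features = rng.normal(size=(6322, 10))
    validation_features = rng.normal(size=(3068, 10))

    # 1) Fit the unsupervised model on the training cohort only.
    model = GaussianMixture(n_components=3, random_state=0)
    model.fit(train_features)

    # 2) "Lock" the model: from here on it is used only for prediction.
    train_subtypes = model.predict(train_features)

    # 3) Assign subtypes in the independent validation cohort with the locked
    #    model, so the groups can be checked for reproducibility (and, in a
    #    retrospective analysis, for differences in treatment response).
    validation_subtypes = model.predict(validation_features)

    print(np.bincount(train_subtypes), np.bincount(validation_subtypes))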

Dr Eshaghi added: “We did a further retrospective analysis of patient records to see how people with the newly identified MS subtypes responded to various treatments.

“While further clinical studies are needed, there was a clear difference, by subtype, in patients’ response to different treatments and in accumulation of disability over time. This is an important step towards predicting individual responses to therapies.”

NIHR Research Professor Olga Ciccarelli (UCL Queen Square Institute of Neurology), the senior author of the study, said: “The method used to classify MS is currently focused on imaging changes only; we are extending the approach to including other clinical information.

“This exciting field of research will lead to an individual definition of MS course and individual prediction of treatment response in MS using AI, which will be used to select the right treatment for the right patient at the right time.”

One of the senior authors, Professor Alan Thompson, Dean of the UCL Faculty of Brain Sciences, said: “We are aware of the limitations of the current descriptors of MS which can be less than clear when applied to prescribing treatment. Now with the help of AI and large datasets, we have made the first step towards a better understanding of the underlying disease mechanisms which may inform our current clinical classification. This is a fantastic achievement and has the potential to be a real game-changer, informing both disease evolution and selection of patients for clinical trials.”

Researchers say the findings suggest that MRI-based subtypes predict MS disability progression and response to treatment and can now be used to define groups of patients in interventional trials. Prospective research with clinical trials is required as the next step to confirm these findings.

Dr Clare Walton, Head of Research at the MS Society, said: “We’re delighted to have helped fund this study through our work with the International Progressive MS Alliance. MS is unpredictable and different for everyone, and we know one of our community’s main concerns is how their condition might develop. Having an MRI-based model to help predict future progression and tailor your treatment plan accordingly could be hugely reassuring to those affected. These findings also provide valuable insight into what drives progression in MS, which is crucial to finding new treatments for everyone. We’re excited to see what comes next.”

MS is a neurological (nerve) condition and one of the most common causes of disability in young people. It arises when the immune system mistakenly attacks the coating (the myelin sheath) that wraps around nerves in the brain and spinal cord. This causes the electrical signals that pass messages along the nerves to be disrupted, travel more slowly, or fail to get through at all.

Most people are diagnosed between the ages of 20 and 50; however, the first signs of MS often start years earlier. Common early signs include tingling, numbness, a loss of balance and problems with vision, but because other conditions cause the same symptoms, it can take time to reach a definitive diagnosis.

Many patients have relapsing MS at first, a form of the disease where symptoms come and go as nerves are damaged, repaired and damaged again. But about half have a progressive form of the condition in which nerve damage steadily accumulates and causes ever worsening disability. Patients may experience tremors, speech problems and muscle stiffness or spasms, and may need walking aids or a wheelchair.

The study was carried out with researchers at: Montreal Neurological Institute, McGill University, Canada; Harvard Medical School, USA; and VU University Medical Centre, the Netherlands.

* In present clinical guidelines, MS is currently categorised as either clinically isolated syndrome (CIS), relapsing-remitting MS (RRMS), primary-progressive MS (PPMS) or secondary progressive MS (SPMS).

Featured image: New MS subtypes defined as ‘cortex-led’, ‘normal appearing white matter-led’ and ‘lesion-led’ © UCL


Provided by University College London

Are Rotating Bose Stars Stable Or Unstable? (Planetary Science)

Summary:

  • Dmitriev and colleagues studied rotating Bose stars, i.e. gravitationally bound clumps of Bose–Einstein condensate composed of non-relativistic particles with nonzero angular momentum ‘l’.
  • They analytically proved that these objects are unstable if particle self–interactions are attractive or negligibly small.
  • On the other hand, they numerically showed that the Bose star becomes stable in models with sufficiently strong repulsive self–interactions, although the fate of higher ‘l’ objects is far less trivial.
  • They also computed the lifetimes of the unstable rotating stars and found them to be always comparable to the inverse binding energies; hence, these objects cannot be considered long-lived.

Friends, every object in the Universe can rotate around its center of mass and carry angular momentum. There is, however, a unique substance — Bose–Einstein condensate of particles in a quantum state ψ(t, x) — that does not rotate easily, and if it does, it rotates in its own peculiar way. Indeed, the condensate velocity can be identified with the phase gradient divided by the particle mass:

v = ∇ arg ψ(t, x)/m .

This vector is explicitly irrotational at nonzero density: rot v = 0 wherever ψ ≠ 0. Hence, the only way to add rotation is to drill a hole through the condensate, i.e. to introduce a vortex line ψ = 0 through its center, as shown in Fig. 1. And this costs energy! As a by–product, the angular momentum of the condensate is quantized, labelled by the integer ‘l’ that counts the winding of the phase around the vortex line.
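
For the mathematically inclined, the standard textbook argument behind both statements (irrotational flow and quantized angular momentum) takes only a few lines; the sketch below uses the same units as the formula above (ħ = 1) and is not specific to this paper.

```latex
% Write the condensate in polar form: psi = |psi| e^{i theta}.
\psi(t,\mathbf{x}) = |\psi(t,\mathbf{x})|\,e^{i\theta(t,\mathbf{x})},
\qquad \mathbf{v} = \frac{\nabla\theta}{m}.

% The curl of a gradient vanishes wherever the phase is well defined:
\nabla \times \mathbf{v} = \frac{1}{m}\,\nabla \times \nabla\theta = 0
\qquad (\psi \neq 0).

% Around a vortex line (psi = 0) the phase can only wind by an integer
% multiple of 2*pi, so the circulation is quantized:
\oint_{\mathcal{C}} \mathbf{v}\cdot d\boldsymbol{\ell}
  = \frac{1}{m}\oint_{\mathcal{C}} \nabla\theta\cdot d\boldsymbol{\ell}
  = \frac{2\pi l}{m}, \qquad l \in \mathbb{Z}.
```

In particular, a condensate with phase winding number l carries angular momentum l per particle (in units of ħ), which is the quantization mentioned above.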

“One can rotate a Bose star by driving a vortex through its center,”

— said Dmitriev, first author of the study

FIG. 1. (Not to scale) Bose–Einstein condensate (shaded region) rotating around the vortex line ψ = 0 (solid). © Dmitriev et al.

In the present–day Universe, a Bose–Einstein condensate of dark matter particles may exist in the form of gravitationally self–bound Bose stars. Over the decades, studies of these objects have migrated from the periphery of scientific interest towards its focal point. It is now clear that Bose stars may form abundantly through universal gravitational mechanisms in mainstream models with light dark matter. If the latter consists of QCD axions, Bose stars nucleate inside typical axion miniclusters, widespread smallest–scale structures formed at the QCD phase transition. In the case of fuzzy dark matter, gigantic Bose stars (“solitonic cores”) appear in the centers of galaxies during structure formation. In both cases these objects cease growing beyond a certain mass.

Rotating Bose stars, if stable, would be important for astrophysics and cosmology. Their centrifugal barriers can resist bosenovas, i.e. collapses of overly massive stars due to attractive self–interactions of bosons. This means, in particular, that fast–rotating QCD axion stars would reach larger masses and densities, which may be sufficient to ignite observable parametric radio emission. Besides, the angular momenta of Bose stars are in principle detectable: directly, by observing gravitational waves from their mergers, or indirectly, if they eventually collapse into spinning black holes that merge and emit gravitational waves.

Now, Dmitriev and colleagues studied rotating Bose stars, i.e. gravitationally bound clumps of Bose–Einstein condensate composed of non-relativistic particles with nonzero angular momentum ‘l’. They analytically proved that these objects are unstable at arbitrary l ≠ 0 if particle self–interactions are attractive (λ < 0) or negligibly small (λ = 0). This result is applicable in the popular cases of fuzzy and QCD axion dark matter. On the other hand, they numerically showed that in models with sufficiently strong repulsive self–interactions (λ > 0) the Bose star with l = 1 is stable. But, although the l = 1 Bose star becomes stable at sufficiently strong repulsive self–couplings λ > λ_0, the fate of the higher ‘l’ objects is far less trivial.
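
As background (and not a statement of the exact conventions used by Dmitriev et al.), non–relativistic self–gravitating condensates of this kind are commonly modelled by the Gross–Pitaevskii–Poisson, or Schrödinger–Poisson, system; a standard form with ħ = 1 reads as follows, with the normalization of the self–coupling λ being convention dependent.

```latex
% Gross-Pitaevskii-Poisson system for a self-gravitating condensate psi
% with gravitational potential Phi and quartic self-coupling lambda
% (hbar = 1; the normalization of lambda is convention-dependent):
i\,\partial_t \psi = -\frac{\nabla^2\psi}{2m} + m\,\Phi\,\psi
                     + \lambda\,|\psi|^2\psi,
\qquad
\nabla^2\Phi = 4\pi G\,m\,|\psi|^2.

% lambda < 0: attractive self-interaction (e.g. QCD axions),
% lambda = 0: purely gravitational binding (fuzzy dark matter limit),
% lambda > 0: repulsive self-interaction, the regime in which the
%             l = 1 rotating star was found numerically to be stable.
```

A Bose star is a stationary, localized solution of this system, and a rotating one carries the vortex phase winding discussed above.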

FIG. 2. (Not to scale) Instability of the rotating Bose star. © Dmitriev et al.

Their approach reveals the mechanism behind the instability: it is caused by pairwise transitions of the condensed particles from the original state with angular momentum l to the l + ∆l and l − ∆l states (see Fig. 2 above). This process conserves the total angular momentum and decreases the potential energy of the Bose star. Amplified by Bose enhancement factors, the particle transitions lead to exponential growth of axially asymmetric perturbations:

δψ ∝ e^µt,

where µ is the complex exponent and (Re µ)^−1 is the lifetime of the rotating configuration.

Fig. 3: Three–dimensional numerical evolution of the perturbed l = 1 Bose star. The panels (a)–(f) display horizontal sections of the solution at fixed moments in time. The simulation starts in panel (a) with the star profile distorted by an invisibly small asymmetric perturbation δψ ∼ 10⁻⁶ ψ_s. The latter grows exponentially with time, becomes discernible by the moment of panel (b), and reaches a fully nonlinear regime δψ ∼ ψ_s in panel (c). At this point, a bound system of two spherical Bose stars appears. They oscillate and rotate around the mutual center of mass in panels (c)–(e). Finally, one of the stars is tidally disrupted, while the other survives. The evolution ends in panel (f) with a nonspinning Bose star surrounded by a cloud of diffuse axions, rotating together around the center of mass. © Dmitriev et al.
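
As a rough consistency check (simple arithmetic based only on the exponential growth law δψ ∝ e^µt above and the seed amplitude quoted in the Fig. 3 caption), the time needed for the perturbation to become nonlinear is:

```latex
% Time for a seed perturbation delta_psi_0 ~ 10^{-6} psi_s to reach the
% nonlinear level delta_psi ~ psi_s, given |delta_psi| ~ e^{(Re mu) t}:
t_{\mathrm{NL}} \simeq \frac{1}{\mathrm{Re}\,\mu}\,
  \ln\!\frac{\psi_s}{\delta\psi_0}
  = \frac{\ln 10^{6}}{\mathrm{Re}\,\mu} \approx \frac{14}{\mathrm{Re}\,\mu}.
```

That is only an order of magnitude longer than the characteristic instability time (Re µ)⁻¹, so a rotating star seeded by even a tiny asymmetry falls apart within a modest number of instability times.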

They also computed the lifetimes of the unstable rotating stars and found that these are always comparable to the inverse binding energies; hence, such objects cannot be considered long-lived. This observation has a number of phenomenological consequences.

First, the scenario in which rotating axion stars reach the threshold for explosive parametric radio emission cannot be realized. One can still consider emission during intermediate stages, when a dense and short-lived rotating configuration sheds its angular momentum, but a specific formation scenario for such a configuration would have to be proposed in the first place.

Second, the instability of rotating Bose stars provides a universal mechanism for shedding angular momentum. One can imagine, for example, that a subset of dark matter Bose stars collapses gravitationally into black holes with suppressed spins. This is possible in models with positive self–coupling or in axionic models with near–Planckian decay constants. The formation of such non–spinning black holes may explain observational hints of low-spin black holes in the GWTC-1 catalogue of Advanced LIGO and Virgo detections.


Reference: A.S. Dmitriev, D.G. Levkov, A.G. Panin, E.K. Pushnaya, I.I. Tkachev, “Instability of rotating Bose stars”, arXiv:2104.00962, 2 April 2021. https://arxiv.org/abs/2104.00962


Copyright of this article belongs entirely to our author S. Aman. It may be reused only with proper credit, either to him or to us.