UCLA Study Finds Combination Therapy Suppresses Pancreatic Tumor Growth in Mice (Medicine)

UCLA Jonsson Comprehensive Cancer Center researchers have uncovered a potential new way to target pancreatic tumors with high intratumoral interferon (IFN) signaling. The team found that high type I IFN signaling is present in a subset of pancreatic tumors, where it triggers a decrease in the levels of NAD and NADH in pancreatic cancer cells. These molecules are vital cofactors in critical metabolic processes.

After the researchers delineated the mechanism by which the NAD depletion occurs, they demonstrated that cells with high IFN signaling were more sensitive to NAMPT inhibitors, which inhibit a major pathway in NAD synthesis. Based on this mechanism, recently developed second-generation NAMPT inhibitors could potentially be used in combination with new systemic drugs, called STING agonists, which increase type I IFN signaling. When tested in mice, the combination of IFN signaling and NAMPT inhibitors not only decreased pancreatic tumor growth, but also resulted in fewer liver metastases.

“With the advent of these two new and improved therapeutics, our findings are timely as their combination may sensitize tumors to NAD depletion,” said lead author Dr. Alexandra Moore, a resident physician in the department of surgery at the David Geffen School of Medicine at UCLA.

Pancreatic cancer continues to be one of the most difficult cancers to treat. One of the hallmarks of the disease is its extensively reprogrammed metabolic network. All cells, including cancer cells, must transform nutrients from the environment into building blocks for cellular processes, and many of these processes require NAD or NADH as a vital cofactor. This research focused on harnessing IFN-induced NAD depletion in combination with the inhibition of NAD synthesis to develop new approaches to better treat pancreatic cancer.

The team first used cell lines and cell culture to determine the mechanism of NAD depletion induced by IFN signaling by looking at the mRNA levels of NAD-consuming enzymes after treatment with IFN. There was an increase in mRNA levels as well as protein expression of PARP9, PARP10, and PARP14. After confirming the findings, the team translated the research into an in vivo model. Researchers used two different mouse models and injected cancer cells into the pancreas of mice prior to treatment.

The findings provide evidence that if tumors with high IFN signaling can be identified, or if IFN signaling can be amplified in tumor cells, those tumors may have greater sensitivity to treatment with NAMPT inhibitors. If so, the combination could potentially help improve the prognosis for one of the most difficult cancers to treat.

“This is a study that identifies a potential vulnerability created by type I IFNs in pancreatic cancer that can be leveraged for what appears to be an effective therapeutic strategy,” said senior author Dr. Timothy Donahue, professor of surgery and chief of surgical oncology.

The senior authors of the study are Dr. Timothy Donahue, professor of surgery and chief of surgical oncology, and Dr. Caius Radu, professor of molecular and medical pharmacology. Both are members of the UCLA Jonsson Comprehensive Cancer Center. The first authors are Dr. Alexandra Moore, resident physician in the department of surgery at the David Geffen School of Medicine at UCLA, and Dr. Lei Zhou, a visiting assistant project scientist in the department of surgery.

The study was published online in the Proceedings of the National Academy of Sciences.

The research was supported by funding from the National Cancer Institute and the Hirshberg Foundation for Pancreatic Cancer Research.

Featured image: UCLA cancer researchers (from left) Drs. Timothy Donahue, Alexandra Moore and Caius Radu

Reference: Alexandra M. Moore, Lei Zhou, Jing Cui, Luyi Li, Nanping Wu, Alice Yu, Soumya Poddar, Keke Liang, Evan R. Abt, Stephanie Kim, Razmik Ghukasyan, Nooneh Khachatourian, Kristina Pagano, Irmina Elliott, Amanda M. Dann, Rana Riahi, Thuc Le, David W. Dawson, Caius G. Radu, Timothy R. Donahue, “NAD+ depletion by type I interferon signaling sensitizes pancreatic cancer cells to NAMPT inhibition”, Proceedings of the National Academy of Sciences Feb 2021, 118 (8) e2012469118; DOI: 10.1073/pnas.2012469118

Provided by UCLA

Sudden Death in the Universe – The Agony of a Massive Dusty Galaxy as Seen By its Blue Companion (Astronomy)

Heavily dust-obscured, ultramassive star-forming galaxies in the early Universe contributed significantly to the cosmic star formation rate. But how did such objects manage to build up their stellar masses in a relatively short time? Were they once starburst galaxies, or were they gradually forming stars until they exhausted their hydrogen reservoirs?

In the last two decades, astronomers have gained extensive knowledge of how galaxies form and evolve. This advancement in extragalactic astronomy was coupled with the rapid development of astronomical instrumentation, which led to the accumulation of a large number of observations of galaxies at different epochs in the lifetime of the Universe. With these giant multiwavelength datasets, we realized that galaxies are not just static entities in space and time. They are more like living organisms: they are born, they grow old and then they die! This lifetime evolution of galaxies is governed by the rate at which they form stars. Since hydrogen is the main ingredient of star formation, galaxies build up their stellar populations until they exhaust their hydrogen reservoirs.

Years' worth of observations have suggested that when the Universe was adolescent, galaxies formed more stars. Specifically, when the Universe was only 3 billion years old, that is, 10 billion years ago, galaxies were at their most efficient in turning their gas content into stars. This epoch of the Universe is called the cosmic noon.
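As an illustrative aside (not part of the study), the link between "3 billion years old" and the cosmic-noon redshift of z ~ 2 can be checked with the closed-form age-redshift relation for a flat matter-plus-lambda universe. The cosmological parameters below are assumed textbook values, not numbers taken from the paper.

```python
import math

def age_of_universe_gyr(z, h0=70.0, omega_m=0.3):
    """Age of a flat matter + lambda universe at redshift z (closed form)."""
    omega_l = 1.0 - omega_m
    hubble_time_gyr = 977.8 / h0  # 1/H0 in Gyr when H0 is in km/s/Mpc
    x = math.sqrt(omega_l / omega_m) * (1.0 + z) ** -1.5
    return (2.0 / 3.0) / math.sqrt(omega_l) * math.asinh(x) * hubble_time_gyr

# Cosmic noon: at z ~ 2 the Universe was roughly 3 billion years old,
# compared with ~13.5 billion years today (for these assumed parameters).
print(age_of_universe_gyr(2.0))
print(age_of_universe_gyr(0.0))
```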

To make things even more intriguing, it turns out that some galaxies during the cosmic noon were even more massive than mature local galaxies like our own Milky Way. This has made extragalactic astronomy one of the most active areas of research, since it involves interdisciplinary fields such as physics, chemistry and analytical modeling. Given the amount of good-quality data, it became possible to trace these ultramassive galaxies back to when the Universe was younger, which raised natural questions: how did these galaxies manage to become so massive so early? Did they once exhibit above-normal star formation activity, or were they simply very efficient at turning their hydrogen into stars?

To answer these questions, and to investigate the nature of these ultramassive objects, an international team of researchers led by Mahmoud Hamed, together with his PhD supervisor dr. hab. Katarzyna Malek (Astrophysics Division of the NCBJ) and in close collaboration with dr. Laure Ciesla and dr. Matthieu Béthermin (both from the astrophysics laboratory of Marseille – LAM), shed new light on the physical processes at work in the interstellar medium of these "giants". To better understand the nature of such heavily dust-obscured galaxies, they identified an interesting system of two galaxies at the epoch of the cosmic noon and analyzed it with observations at different wavelengths in order to constrain the underlying physics and chemistry. The system consists of one ultramassive galaxy and its satellite galaxy, named Astarte and Adonis respectively, after the Phoenician gods.

Astarte is not only ultramassive but also ultra-dusty: it is very bright in the infrared, the thermal radiation emitted by dust in the interstellar medium. In fact, Astarte is so dusty that it is almost invisible at shorter wavelengths, in ultraviolet and visible light. This is a common feature of dust in galaxies: it absorbs shorter-wavelength photons, whose wavelengths are, on average, comparable to the size of the dust grains.

Adonis is less dusty and is not bright at the longer infrared wavelengths. Together with Astarte, it forms an interesting system of opposites, which can reveal a lot about their evolution and may help answer the puzzle of how these massive galaxies managed to become more massive than their local environments on such a short timescale.

The ultramassive Astarte was observed with the Atacama Large Millimeter Array (ALMA) as part of an observation program (PI: Béthermin). ALMA can observe cold dust and emission from excited molecules in the interstellar medium. With this observation program, we detected the emission coming from carbon monoxide (CO) in the molecular clouds of Astarte. From this CO emission, it was possible to estimate the mass of hydrogen in the galaxy, based on conversion ratios already established in galaxies of the local Universe and on our knowledge of the expected abundance ratios in the interstellar medium of dusty galaxies like Astarte.
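The CO-to-hydrogen conversion described above boils down to a single multiplicative factor. The sketch below uses commonly quoted literature defaults for that factor (a Milky-Way-like value and a starburst-like value), not the numbers actually adopted in the paper, so treat it as a schematic of the method rather than a reproduction of the measurement.

```python
# Molecular gas mass from CO line luminosity: M_gas = alpha_CO * L_CO'.
# The alpha_CO values are standard literature defaults, not the paper's.
def gas_mass_msun(l_co_prime, alpha_co):
    """l_co_prime in K km/s pc^2, alpha_co in Msun per (K km/s pc^2)."""
    return alpha_co * l_co_prime

l_co = 1e10                       # hypothetical CO(1-0) line luminosity
m_mw = gas_mass_msun(l_co, 4.3)   # Milky-Way-like conversion factor
m_sb = gas_mass_msun(l_co, 0.8)   # starburst-like conversion factor
print(m_mw / m_sb)                # the choice alone spans a factor ~5
```

The factor-of-several spread between the two conversions is why the expected abundance ratios in dusty galaxies matter for the mass estimate.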

The various wavelengths at which Astarte and Adonis were detected helped in modeling their total spectrum. This in turn allowed us to constrain the physical properties of the system, such as how many stars these galaxies contain and at what rate new stars are being born. The surprising element of this study was that the estimated rate at which Astarte is forming stars is much higher than what its hydrogen reservoir can sustain. In fact, if Astarte continues to form stars at this rate, it will exhaust all the gas in its molecular clouds within the next 220 million years, which is rather short compared to the timescales we usually deal with in the Universe. Adonis, on the other hand, is forming too many stars for its mass: this is what we commonly refer to as a strong starburst.
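The 220-million-year figure quoted above is simply a gas depletion timescale, gas mass divided by star formation rate. The numbers below are hypothetical placeholders chosen to reproduce that order of magnitude, not the paper's measured values.

```python
def depletion_time_myr(gas_mass_msun, sfr_msun_per_yr):
    """Time (Myr) to exhaust the gas reservoir at the current SFR."""
    return gas_mass_msun / sfr_msun_per_yr / 1e6

# Hypothetical: 1.1e11 Msun of molecular gas consumed at 500 Msun/yr
print(depletion_time_myr(1.1e11, 500.0))  # -> 220.0 Myr
```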

One important conclusion of this study was that the ultramassive Astarte, which is more massive than our old and mature Milky Way, is dying; Astarte is turning its hydrogen into stars by inertia from its past when it was once a starburst too, like its neighbouring Adonis. This indeed motivates the quest to understand how very massive galaxies form and evolve, and how they deplete their resources while converting them efficiently into stars.

Featured image: The two galaxies as seen with the VISTA telescope, with ALMA detection of Astarte. © NCBJ

Publication: M. Hamed, L. Ciesla, M. Béthermin, K. Małek, E. Daddi, M. T. Sargent, R. Gobat “Multiwavelength dissection of a massive heavily dust-obscured galaxy and its blue companion at z~2”, Astronomy & Astrophysics, 2021. https://www.aanda.org/articles/aa/full_html/2021/02/aa39577-20/aa39577-20.html DOI: https://doi.org/10.1051/0004-6361/202039577

Provided by NCBJ

What Do Wormholes Generate? (Astronomy)

Traversable wormholes arise as solutions to the Einstein field equations and were first proposed by Morris and Thorne as time travel machines. The idea of wormhole spacetime goes back to J.A. Wheeler and his attempt to apply quantum mechanics at the Planck scale. The resulting spacetime turns out to be fluctuating, giving rise to a number of topologies, including the wormhole. A static and spherically symmetric wormhole possesses an interesting geometry with a throat that flares out in two opposite directions. The throat connects either two different asymptotically flat regions of the same spacetime or two entirely distinct spacetimes. Later on, Ellis termed this geometry a 'drainhole' that could admit particle motion through either mouth. The throat tends to close in a very short time (of the order of the Planck time), thereby limiting the possibility of time travel. In order to create a stable wormhole, negative energy (or exotic matter) is required to keep the wormhole's throat open. Such negative energy violates the null energy condition (NEC). Since the NEC is the weakest energy condition, all the other energy conditions (weak, strong and dominant) are violated automatically. These energy conditions are generally obeyed by classical matter but are violated by certain quantum fields, which exhibit the Casimir effect and the Hawking evaporation process.

A week ago, I wrote an article entitled "How Would A Particle Travel Through A Rotating Wormhole?", in which I described a study showing that neutral test particles propagating radially towards a rotating wormhole start moving around it along spiral paths. After entering the wormhole's throat, the particles pass through it and move away, again following spiral trajectories.

You might be wondering why I bring that up now. Well, a few days ago I came across the work of Mubashir Jamil et al., who considered the possibility of a rotating wormhole surrounded by a cloud of charged particles. Due to the slow rotation of the wormhole, the charged particles are dragged along, thereby producing an electromagnetic field.

We considered stationary and axially symmetric wormhole having non-zero angular velocity surrounded by the continuum of charged particles that are dragged by the wormhole in the angular direction.

— said Mubashir Jamil.

They assumed the rotation to be slow, so that quadratic terms in the angular velocity could be ignored. They found that the frame-dragging effect on the charged particles produces a poloidal electromagnetic field, and they determined the resulting field around the wormhole under the slow-rotation approximation. The source of this field is the charge density surrounding the wormhole, whose distribution is assumed to be spherically symmetric.

Our model predicts the production of the electromagnetic field with a certain radiation flux due to wormhole rotation.

— said Mubashir Jamil, lead author of the study

They concluded that this feature of wormhole physics can be of interest from an astronomical point of view.

Reference: Jamil, M. Can a Wormhole Generate Electromagnetic Field?. Int J Theor Phys 49, 1549–1555 (2010). https://link.springer.com/article/10.1007/s10773-010-0335-0 https://doi.org/10.1007/s10773-010-0335-0

Copyright of this article belongs entirely to our author, S. Aman. It may be reused only with proper credit either to him or to us.

LHC / ATLAS: A Unique Observation Of Particle Pair Creation in Photon-photon Collisions (Physics)

Creation of matter in an interaction of two photons belongs to a class of very rare phenomena. From the data of the ATLAS experiment at the LHC, collected with the new AFP proton detectors at the highest energies available to date, a more accurate – and more interesting – picture of the phenomena occurring during photon collisions is emerging.

If you point a glowing flashlight towards another one, you do not expect any spectacular phenomena. The photons emitted by both flashlights simply pass by each other. However, in certain collisions involving high-energy protons the situation is different. The photons emitted by two colliding particles may interact and create a pair of matter and antimatter particles. Traces of processes such as these have just been observed in the ATLAS experiment at the Large Hadron Collider (LHC) at CERN near Geneva. Precise observations were carried out using the new AFP (ATLAS Forward Proton) spectrometer, developed with significant participation of scientists from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow. The Polish physicists, funded by the National Science Centre and the Ministry of Science and Higher Education, have been involved in the development of AFP detectors since the conception of these devices.

“Observations of the creation of particles of matter and antimatter from electromagnetic radiation go back to the beginnings of nuclear physics”, says Prof. Janusz Chwastowski, head of the team of physicists at the IFJ PAN involved in the AFP detectors.

Indeed, it was February 1933 when Patrick Blackett (Nobel 1948) and Giuseppe Occhialini reported an observation of the creation of an electron-positron pair initiated by a quantum of cosmic radiation. The creation of matter and antimatter was therefore noticed earlier than the reverse process, i.e. the famous and spectacular positron annihilation. The first observations of the latter were made in August 1933 by Theodor Heiting, and three months later by Frédéric Joliot.

“In the most commonly recorded events of creation, one photon transforms into a particle and an antiparticle. In contrast, the phenomenon we are studying is of a different nature. The particle-antiparticle pair arises here due to the interaction of two photons. The possibility of such processes was first reported by Gregory Breit and John A. Wheeler in 1934”, continues Prof. Chwastowski.

As a charged particle, the proton moving inside the LHC beam pipe is surrounded by an electric field. Since the carriers of electromagnetic interactions are photons, the proton can be treated as an object surrounded by photons.

“In the LHC beam pipe, protons reach velocities very close to the speed of light. A proton and its surrounding field undergo the Lorentz contraction along the direction of motion. Thus, from our point of view, a proton moving at almost the speed of light is associated with particularly violent oscillations of the electromagnetic field. When such a proton approaches another one accelerated in the opposite direction – and this is the situation we are dealing with at the LHC – an interaction between the photons may occur”, explains Dr. Rafal Staszewski (IFJ PAN).
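To put "velocities very close to the speed of light" into numbers: the Lorentz factor of a beam proton follows directly from γ = E/mc². The 6.5 TeV beam energy below is the LHC Run 2 value; the resulting contraction factor is my own arithmetic, not a figure from the article.

```python
# Lorentz factor and length contraction of an LHC beam proton.
beam_energy_gev = 6500.0    # LHC Run 2 energy per proton beam
proton_mass_gev = 0.938272  # proton rest-mass energy

gamma = beam_energy_gev / proton_mass_gev
beta = (1.0 - 1.0 / gamma**2) ** 0.5

print(gamma)       # ~6930: the field is contracted ~7000-fold along the motion
print(1.0 - beta)  # ~1e-8: how far below the speed of light the proton flies
```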

In the LHC accelerator, collisions between photons can happen when protons fly past each other inside the ATLAS detector. Pairs of the created leptons are detected inside the ATLAS, while the protons that were the photon sources are observed by AFP detectors located approximately 200 m from the collision point. (Source: IFJ PAN)

At the LHC, collisions of highly energetic proton beams occur in several places, including the one located inside the giant ATLAS detector. If two photons collide, the result could be an electron-positron pair or a muon-antimuon pair (a muon is about 200 times more massive than an electron). These particles, which belong to the lepton family, produced at large angles with respect to the proton beams, are recorded inside the main ATLAS detector. Such phenomena have been observed at the LHC before.

“The point is, we have two more protagonists of two-photon processes! These are, naturally, the photon sources, i.e. the two passing protons. Thus we get to the essence of our measurement”, says Dr. Staszewski, and explains: “As a result of the photon emission, each proton loses some energy but, importantly, it practically does not change the direction of its motion. So it escapes the detector together with the other protons in the beam. However, the proton that emitted the photon has a slightly lower energy than the beam protons. Therefore, the accelerator magnetic field deflects it more, and this means that it gradually moves away from the beam. These are the protons we are hunting for with our AFP spectrometers”.
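The "deflects it more" statement is just the bending-radius relation r = p/(qB): a proton that gave up energy to the emitted photon has lower momentum, hence a tighter curve in the same dipole field, and so it drifts away from the nominal orbit. A minimal sketch with an assumed 5% energy loss follows; the actual fractional losses tagged by AFP are not stated in the article.

```python
# Bending radius r = p / (q B) for an ultrarelativistic proton (p ~ E/c).
E_CHARGE = 1.602176634e-19  # C
C_LIGHT = 299792458.0       # m/s

def bending_radius_m(energy_gev, b_tesla):
    p_si = energy_gev * 1e9 * E_CHARGE / C_LIGHT  # momentum in kg m/s
    return p_si / (E_CHARGE * b_tesla)

b = 8.33                               # T, nominal LHC dipole field
r_beam = bending_radius_m(6500.0, b)   # nominal beam proton
r_lost = bending_radius_m(6175.0, b)   # hypothetical 5% energy loss
print(r_beam)            # ~2600 m in the dipoles
print(r_lost / r_beam)   # 0.95: tighter curvature -> gradual separation
```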

Each of the four AFP tracking units contains four sensors: 16×20 mm semiconductor pixel plates, placed one behind the other. A proton that passes through the sensors deposits some energy and thus it activates the pixels on its path. By analysing all the activated pixels, the proton path and properties can be reconstructed.
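Reconstructing the proton path from the activated pixels amounts, in essence, to fitting a straight line through the hit positions in the successive planes. This is a generic least-squares sketch with made-up hit coordinates, not the actual AFP reconstruction software.

```python
# Straight-line fit (least squares) through hits in four pixel planes.
# z = plane position along the beam direction, x = measured hit coordinate.
def fit_track(zs, xs):
    """Return (slope, intercept) of the best-fit line x = slope*z + intercept."""
    n = len(zs)
    mean_z = sum(zs) / n
    mean_x = sum(xs) / n
    cov = sum((z - mean_z) * (x - mean_x) for z, x in zip(zs, xs))
    var = sum((z - mean_z) ** 2 for z in zs)
    slope = cov / var
    return slope, mean_x - slope * mean_z

# Hypothetical hits in four planes (positions and coordinates are made up)
planes_mm = [0.0, 9.0, 18.0, 27.0]
hits_mm = [2.00, 2.05, 2.09, 2.16]
slope, intercept = fit_track(planes_mm, hits_mm)
print(slope, intercept)  # tiny slope: the proton is nearly parallel to the beam
```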

The need to record protons only slightly deflected from the main beam means that the AFP spectrometers have to be inserted directly inside the LHC beam pipe, just a few millimetres away from the circulating beams.

“When you are operating so close to a particle beam with such high energies, you have to be aware of the risks. The smallest error in the positioning of the spectrometer could result in burning a hole in it. It would be very upsetting, but that would really be the least of our problems. The resulting debris would contaminate at least a part of the accelerator causing its shut down for some time”, notes Prof. Chwastowski.

The measurements described here were carried out with AFP spectrometers placed at a distance of about 200 m from the point at which the protons collided.

“Protons interact at the LHC in many ways. As a result, the protons observed in the AFP spectrometers may originate from processes other than those associated with photon-photon interactions. To search for the right protons, we needed precise knowledge of the properties of each particle”, emphasises PhD student Krzysztof Ciesla (IFJ PAN), who performed the initial analysis of the raw data collected by the AFP spectrometers in 2017, converting it into information about the energies and momenta of the registered protons. The results of the proton energy measurements were then compared with the energies of the created lepton pair and, based on conservation principles, it was determined whether the observed proton could be the source of an interacting photon.

The measurements using the AFP spectrometers proved to be highly statistically significant, at nine standard deviations (sigma). For comparison, a five-sigma measurement is usually sufficient to announce a scientific discovery. So the AFP spectrometers have passed the test, proved the usefulness of the method and provided very interesting, though not yet fully understood, results. It turned out that theoretical predictions do not fully agree with the measured characteristics of the investigated interactions. Clearly, there are hidden nuances in the two-photon processes observed in high-energy proton-proton collisions that require better understanding and further measurements.
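For readers curious what "nine standard deviations" means in practice, the one-sided p-value of an n-sigma excess follows from the Gaussian tail. The conversion below is the standard textbook one, not anything specific to this analysis.

```python
import math

def one_sided_p_value(n_sigma):
    """Gaussian upper-tail probability for an n-sigma excess."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

print(one_sided_p_value(5.0))  # ~2.9e-7: the usual discovery threshold
print(one_sided_p_value(9.0))  # ~1e-19: the odds of a nine-sigma fluke
```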

Featured image: A picture of the AFP detector taken during its installation in the LHC tunnel. The quartz time-of-flight detector is on the left, the silicon pixel detector – on the right. (Source: IFJ PAN)

Reference: G. Aad et al. (ATLAS Collaboration), “Observation and Measurement of Forward Proton Scattering in Association with Lepton Pairs Produced via the Photon Fusion Mechanism at ATLAS”, Phys. Rev. Lett. 125, 261801 – Published 23 December 2020. https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.125.261801

Provided by IFJ Pan Press

Investigating the Wave Properties of Matter With Vibrating Molecules (Physics)

Almost 100 years ago, a revolutionary discovery was made in physics: microscopic matter exhibits wave properties. Over the decades, ever more precise experiments have been used to measure the wave properties of electrons in particular. These experiments were mostly based on spectroscopic analysis of the hydrogen atom, and they made it possible to verify the accuracy of the quantum theory of the electron.

For heavier particles – for example protons – and nuclides (atomic nuclei), it is difficult to measure wave properties accurately. In principle, however, these properties can be seen everywhere. In molecules, the wave properties of atomic nuclei are evident in their internal vibrations against one another. Such vibrations are enabled by the electrons in molecules, which create a bond between the nuclei that is 'soft' rather than rigid. Nuclear vibrations occur, for example, in every molecular gas under normal conditions, such as air.

The wave properties of the nuclei are demonstrated by the fact that the vibration cannot have an arbitrary strength – i.e. energy – as would be the case with a pendulum for example. Instead, only precise, discrete values known as ‘quantized’ values are possible for the energy.
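The quantization described above is captured by the harmonic-oscillator spectrum E_n = (n + 1/2)ħω: only discrete energies are allowed, unlike the continuum available to a classical pendulum. The vibration frequency used below is a hypothetical order-of-magnitude value for a light molecular ion, not the frequency measured in the study.

```python
import math

# Quantized harmonic-oscillator levels: E_n = (n + 1/2) * hbar * omega.
HBAR = 1.054571817e-34  # J s
EV = 1.602176634e-19    # J per eV

def level_energy_ev(n, freq_hz):
    """Energy of vibrational level n for an oscillator of frequency freq_hz."""
    omega = 2.0 * math.pi * freq_hz
    return (n + 0.5) * HBAR * omega / EV

# Hypothetical ~60 THz vibration (order of magnitude for a light molecular
# ion, not the measured value): adjacent levels are ~0.25 eV apart.
spacing = level_energy_ev(1, 6e13) - level_energy_ev(0, 6e13)
print(spacing)
```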

A quantum jump from the lowest vibrational energy state to a higher energy state can be achieved by radiating light onto the molecule, whose wavelength is precisely set so that it corresponds exactly to the energy difference between the two states.

To investigate the wave properties of nuclides very accurately, one needs both a very precise measuring method and a very precise knowledge of the binding forces in the specific molecule, because these determine the details of the wave motion of the nuclides. This then makes it possible to test fundamental laws of nature by comparing their specific statements for the nuclide investigated with the measurement results.

Unfortunately, it is not yet possible to make precise theoretical predictions regarding the binding forces of molecules in general – the quantum theory to be applied is mathematically too complex to handle. Consequently, it is not possible to investigate the wave properties in any given molecule accurately. This can only be achieved with particularly simple molecules.

A device for storing molecular ions. (Photo: HHU / David Offenberg) © HHU / David Offenberg

Together with its long-standing cooperation partner V. I. Korobov from the Bogoliubov Laboratory of Theoretical Physics at the Joint Institute for Nuclear Research in Dubna, Russia, Prof. Schiller’s research team is dedicated to precisely one such molecule, namely the hydrogen molecular ion HD+. HD+ consists of a proton (p) and the nuclide deuteron (d). The two are linked together by a single electron. The relative simplicity of this molecule means that extremely accurate theoretical calculations can now be performed. It was V.I. Korobov who achieved this, after refining his calculations continuously for over twenty years.

For charged molecules such as the hydrogen molecular ion, an accessible yet highly precise measuring technique did not exist until recently. Last year, however, the team led by Prof. Schiller developed a novel spectroscopy technique for investigating the rotation of molecular ions. The radiation used there is so-called 'terahertz radiation', with a wavelength of about 0.2 mm.

The team has now been able to show that the same approach also works for excitation of molecular vibrations using radiation with a wavelength that is 50 times shorter. To do this, they had to develop a particularly frequency-sharp laser that is one of a kind worldwide.
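The wavelengths quoted above map onto transition energies through E = hc/λ. The sketch below uses the article's 0.2 mm and the ~4 µm implied by "a wavelength that is 50 times shorter"; the resulting energies are my own arithmetic, not figures from the paper.

```python
# Photon energy E = h*c/lambda for the two radiation regimes in the article.
H_PLANCK = 6.62607015e-34  # J s
C_LIGHT = 299792458.0      # m/s
EV = 1.602176634e-19       # J per eV

def photon_energy_ev(wavelength_m):
    """Energy of a photon that exactly bridges a gap of h*c/lambda."""
    return H_PLANCK * C_LIGHT / wavelength_m / EV

print(photon_energy_ev(0.2e-3))       # ~6 meV: terahertz, rotational jumps
print(photon_energy_ev(0.2e-3 / 50))  # ~0.3 eV: 50x shorter, vibrational jumps
```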

They demonstrated that this extended spectroscopy technique resolves the radiation wavelength for vibrational excitation 10,000 times more finely than previous techniques used on molecular ions. Systematic disturbances of the vibrational states of the molecular ions, for example through interfering electrical and magnetic fields, could also be suppressed by a factor of 400.

Ultimately, it emerged that the prediction of quantum theory regarding the behaviour of the atomic nuclei proton and deuteron was consistent with the experiment with a relative inaccuracy of less than 3 parts in 100 billion parts.

If it is assumed that V.I. Korobov’s prediction based on quantum theory is complete, the result of the experiment can also be interpreted differently – namely as the determination of the ratio of electron mass to proton mass. The value derived corresponds very well with the values determined by experiments by other working groups using completely different measuring techniques.

Prof. Schiller emphasises: “We were surprised at how well the experiment worked. And we believe that the technology we developed is applicable not only to our ‘special’ molecule but also in a much wider context. It will be exciting to see how quickly the technology is adopted by other working groups.”

Featured image: HD+ molecular ions (yellow and red dot pairs) in an ion trap (grey) are irradiated by a laser wave (red). This causes quantum jumps, whereby the vibrational state of the molecular ions changes. (Image: HHU / Soroosh Alighanbari) © HHU / Soroosh Alighanbari

Reference: I. V. Kortunov, S. Alighanbari, M. G. Hansen, G. S. Giri, V. I. Korobov & S. Schiller, Proton-electron mass ratio by high-resolution optical spectroscopy of ion ensembles in the resolved-carrier regime, Nature Physics 2021. DOI: 10.1038/s41567-020-01150-7 https://www.nature.com/articles/s41567-020-01150-7

Provided by Heinrich-Heine University Duesseldorf

The Seafloor Was Inhabited by Giant Predatory Worms Until 5.3 Million Years Ago (Paleontology)

An international study in which the University of Granada participated, recently published in the journal Scientific Reports, has identified a new trace fossil of these mysterious animals in the northeast of Taiwan (China), in marine sediments from the Miocene (between 23 and 5.3 million years ago)

These organisms, similar to today’s Bobbit worm (Eunice aphroditois), were approximately 2 m long and 3 cm in diameter and lived in burrows

An international study in which the University of Granada (UGR) participated (recently published in the prestigious journal Scientific Reports) has revealed that the seafloor was inhabited by giant predatory worms during the Miocene Age (23–5.3 million years ago).

Photograph during the fieldwork in Taiwan © UGR

The scientists identified a new trace fossil (indirect remains of animal activity such as, for instance, dinosaur tracks, fossilised droppings, insect nests, or burrows) linked to these mysterious animals, which are possible predecessors of today's Bobbit worm (Eunice aphroditois). Based on the reconstruction of giant burrows observed in Miocene-age marine sediments from northeast Taiwan (China), the researchers concluded that the organisms that made them may have colonised the seafloor of the Eurasian continent about 20 million years ago.

Olmo Míguez Salas of the UGR’s Department of Stratigraphy and Palaeontology (Ichnology and Palaeoenvironment Research Group) participated in the study, which was conducted as part of a project funded by the Taiwanese Ministry of Science and Technology (MOST, 2018) of which the researcher was a beneficiary.

The trace fossil Pennichnus formosae © UGR

Míguez Salas and the other researchers reconstructed this new trace fossil, which they have named Pennichnus formosae. It consists of an L-shaped burrow, approximately 2 m long and 2–3 cm in diameter, indicating the size and shape of the organism (Eunice aphroditois) that made the structure.

Bobbit worms hide in long, narrow burrows in the seafloor and propel themselves upward to grab prey with their strong jaws. The authors suggest that the motion involved in capturing their prey and retreating into their burrow to digest it caused various alterations to the structure of the burrows. These alterations are preserved in Pennichnus formosae and are indicative of the deformation of the sediment surrounding the upper part of the burrow. Detailed analysis revealed a high concentration of iron in this upper section, which the researchers believe may indicate that the worms continuously rebuilt the opening of the burrow by secreting a type of mucus to strengthen the wall, as bacteria that feed on this mucus create iron-rich environments.

Schematic model of the predatory Bobbit worm (image credit: Pan et al., 2021)

Although marine invertebrates have existed since the early Paleozoic, their bodies primarily comprise soft tissue and are therefore rarely preserved. The trace fossil described in this study is believed to be the earliest known record of a subsurface-dwelling ambush predator.

Olmo Míguez Salas notes that this finding “provides a rare view of the behaviour of these creatures under the seafloor and also highlights the value of studying fossil records to understand the behaviour of organisms from the past.”

The UGR researcher Olmo Míguez Salas, one of the authors of this work © UGR

Featured image: Eunice aphroditois (image courtesy of Ms. Chutinun Mora)


Reference: Pan, Y-Y., Nara, M., Löwemark, L., Miguez-Salas, O., Gunnarson, B., Iizuka, Y., Chen, T-T. & Dashtgard, S.E. (2021) ‘The 20-million-year old lair of an ambush-predatory worm preserved in northeast Taiwan’, Scientific Reports. https://www.nature.com/articles/s41598-020-79311-0

Provided by University of Granada

An Efficient Method For Separating O-18 from O-16, Essential for Use in Cancer Treatment (Medicine)

Positron Emission Tomography (PET) plays a major role in the early detection of various types of cancer. A research group led by Specially Appointed Professor Katsumi Kaneko of the Research Initiative for Supra-Materials (RISM), Shinshu University has discovered a method to separate oxygen-18, an isotope essential for PET diagnosis, from oxygen-16 at high speed and high efficiency. The results of this research were recently published online in the journal Nature Communications.

The novel method for the rapid and efficient separation of O2-18 from O2-16, which is abundant in the atmosphere, uses nanoporous carbon containing pores smaller than 1 nanometer. When a mixture of O2-16 and O2-18 is introduced into the nanoporous carbon, O2-18 is preferentially adsorbed and is efficiently separated from O2-16. The separation of O2-18 from O2-16 was also demonstrated experimentally using low-temperature waste heat from a natural gas storage facility.
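As a rough illustration of how such preferential adsorption is quantified, the selectivity of an adsorbent is commonly expressed as a separation factor S: the isotope ratio in the adsorbed phase divided by the isotope ratio in the feed gas (the quantity labelled S in the figure below). The sketch uses hypothetical mole fractions, not the values measured in the study:

```python
# Separation factor S: the O2-18/O2-16 ratio in the adsorbed phase
# divided by the same ratio in the feed gas (hypothetical numbers below).
def separation_factor(x18_ads, x16_ads, x18_gas, x16_gas):
    return (x18_ads / x16_ads) / (x18_gas / x16_gas)

# Example: a feed with 0.4% O2-18 whose adsorbed phase contains 0.8% O2-18.
S = separation_factor(0.008, 0.992, 0.004, 0.996)
print(round(S, 2))  # S > 1 means O2-18 is preferentially adsorbed
```

A value of S above 1 indicates enrichment of the heavier isotope in the pores; the larger S is, the fewer separation passes are needed.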

O-18 plays a major role in the early detection of cancer. Taking advantage of the fact that cancer cells take up much more glucose than normal cells, doctors inject a drug called 18F-FDG (fluorodeoxyglucose), a marker of glucose metabolism, and use a PET machine to identify which part of the body has cancer. 18F-FDG is a drug in which fluorine-18 (18F), which emits positrons, is attached to glucose. 18F is produced by a nuclear reaction in which O-18 is bombarded with protons. O-18 is therefore indispensable for PET diagnosis, but it has been difficult to procure because only 0.2% of naturally occurring oxygen is O-18. Separating O-18 from the far more abundant O-16 in the atmosphere has required distillation, even though the two isotopes have very similar boiling points; this distillation demands precise technology and takes more than six months to complete.
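The months-long timescale becomes plausible with a back-of-the-envelope cascade estimate. If each distillation stage enriches the mixture by only a small single-stage separation factor (a hypothetical α = 1.05 is assumed here; the real value depends on the column), then raising O-18 from its 0.2% natural abundance to, say, 95% takes hundreds of stages:

```python
import math

# Ideal-cascade estimate of distillation stages needed to enrich O-18
# from 0.2% natural abundance to 95% purity, assuming a hypothetical
# single-stage separation factor alpha = 1.05 (illustrative only).
def stages_needed(x_feed, x_product, alpha):
    ratio_feed = x_feed / (1 - x_feed)          # isotope ratio in the feed
    ratio_product = x_product / (1 - x_product)  # isotope ratio in the product
    return math.ceil(math.log(ratio_product / ratio_feed) / math.log(alpha))

print(stages_needed(0.002, 0.95, 1.05))  # on the order of 200 stages
```

Each added stage costs time and energy, which is why a more selective single-step process such as the adsorption method described above is attractive.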

The novel method using nanoporous carbon to separate O-18 can be used not only for PET diagnosis but also for research on dementia. It can also be applied to the separation of carbon and nitrogen isotopes and to other molecules useful for isotopic analysis and therapeutic cancer drugs. The group expects demand for this method, and for O-18 itself, to grow in the future.

Featured image: Comparison of S at different times at 100 K and 112 K for the CDC in this work with other separation methods from the literature. The inset shows illustrative models for the pore filling of CDC by O2-16 and O2-18 molecules after 1 min and 30 min. © Copyright 2021, Nature Communications, Licensed under CC BY 4.0

Reference: Ujjain, S.K., Bagusetty, A., Matsuda, Y. et al. Adsorption separation of heavier isotope gases in subnanometer carbon pores. Nat Commun 12, 546 (2021). https://doi.org/10.1038/s41467-020-20744-6

Provided by Shinshu University

Study Reveals a New Potential Mechanism Underlying Loss of Muscle Mass During Menopause (Medicine)

Menopause is associated with several physiological changes, including loss of skeletal muscle mass. However, the mechanisms underlying muscle wasting are not clear. A new study conducted in collaboration between the universities of Minnesota (USA) and Jyväskylä (Finland) reveals that estrogen deficiency alters the microRNA signalling in skeletal muscle, which may activate signalling cascades leading to loss of muscle mass.

Menopause leads to an estrogen deficiency that is associated with decreases in skeletal muscle mass and strength. This is likely due to changes in both muscle function and the size of muscle cells, commonly referred to as fibers.

“The mechanistic role of estrogen in the loss of muscle mass had not been established. In our study, we focused on signaling cascades in skeletal muscle that eventually lead to cell death,” explains Academy of Finland postdoctoral researcher Sira Karvinen from the Gerontology Research Center, Faculty of Sport and Health Sciences, University of Jyväskylä, Finland.

One possible signaling route leading to cell death involves microRNA molecules. MicroRNA molecules regulate gene expression by inhibiting targeted protein synthesis. To date, several microRNAs have been found to regulate key steps in cell death pathways and hence may regulate the number of muscle cells.

“In our previous studies we have established estrogen-responsive microRNAs in both blood and muscle of menopausal women,” says the principal investigator, Academy research fellow Eija Laakkonen. “Now we investigated this observation in more detail by utilizing an animal model of low and high systemic estrogen levels provided by Professor Dawn Lowe’s group at the University of Minnesota.”

The study revealed that estrogen deficiency downregulated several microRNAs linked to cell death pathways in muscle. This observation was associated with upregulation of cell death proteins.

“Thus, estrogen-responsive microRNAs may play a mechanistic role in muscle wasting during menopause,” says Karvinen. “One recommended preventive strategy is for women to engage in resistance training, especially in middle age, to help maintain muscle mass and power.”

The study was carried out in collaboration between the universities of Minnesota (USA) and Jyväskylä (Finland) and was funded by National Institutes of Health (NIH, USA) and the Academy of Finland.

Original article:

Karvinen, S., Juppi, H.-K., Le, G., Cabelka, C.A., Mader, T.L., Lowe, D.A. & Laakkonen, E.K. (2021) ‘Estradiol deficiency and skeletal muscle apoptosis: Possible contribution of microRNAs’, Experimental Gerontology. https://doi.org/10.1016/j.exger.2021.111267

Provided by University of Jyväskylä

Quantum Computing: When Ignorance is Wanted (Quantum)

Quantum technologies open up new ways of preserving the privacy of the input and output data of a computation. Scientists from the University of Vienna, the Singapore University of Technology and Design and the Polytechnic University of Milan have shown that optical quantum systems are not only particularly suited to some quantum computations but can also effectively encrypt the associated input and output data. This demonstration of so-called quantum homomorphic encryption of a quantum computation has now been published in npj Quantum Information.

Quantum computers promise not only to outperform classical machines in certain important tasks, but also to maintain the privacy of data processing. The secure delegation of computations has become an increasingly important issue with the advent of cloud computing and cloud networks. Of particular interest is the ability to exploit quantum technology that allows for unconditional security, meaning that no assumptions about the computational power of a potential adversary need be made.

Different quantum protocols have been proposed, all of which make trade-offs between computational performance, security, and resources. Classical protocols, for example, are either limited to trivial computations or restricted in their security. In contrast, homomorphic quantum encryption is one of the most promising schemes for secure delegated computation. Here, the client’s data is encrypted in such a way that the server can process it even though it cannot decrypt it. Moreover, unlike other protocols, the client and server do not need to communicate during the computation, which dramatically boosts the protocol’s performance and practicality.
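A classical analogue can make "processing without decrypting" concrete. Textbook (unpadded) RSA is multiplicatively homomorphic: a server can multiply two ciphertexts, and the client's decryption yields the product of the plaintexts. This is unrelated to the quantum protocol itself and uses deliberately tiny, insecure parameters, purely as an illustration of the homomorphic idea:

```python
# Toy demonstration of homomorphic processing with textbook RSA
# (tiny, insecure parameters; for illustration only).
n, e, d = 3233, 17, 2753   # n = 61 * 53, and e*d = 1 mod phi(n)

def encrypt(m):
    return pow(m, e, n)    # c = m^e mod n

def decrypt(c):
    return pow(c, d, n)    # m = c^d mod n

# The server multiplies ciphertexts without ever decrypting them...
c = (encrypt(4) * encrypt(6)) % n
# ...and the client recovers the product of the plaintexts.
print(decrypt(c))  # prints 24
```

The quantum scheme generalizes this notion: the server carries out an entire quantum computation on encrypted quantum states, not just one algebraic operation.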

In an international collaboration led by Prof. Philip Walther from the University of Vienna, scientists from Austria, Singapore and Italy teamed up to implement a new quantum computation protocol in which the client can encrypt the input data so that the server cannot learn anything about it, yet can still perform the calculation. After the computation, the client decrypts the output data to read out the result. For the experimental demonstration, the team used quantum light, consisting of individual photons, to implement this so-called homomorphic quantum encryption in a quantum walk process. Quantum walks are interesting special-purpose examples of quantum computation because they are hard for classical computers yet feasible for single photons.
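To give a feel for the computation being delegated, here is a minimal classical simulation of a one-dimensional discrete-time quantum walk with a Hadamard coin. The experiment realizes such walks photonically; nothing in this sketch is specific to the encryption scheme:

```python
import numpy as np

# Classical simulation of a 1-D discrete-time quantum walk with a
# Hadamard coin (illustration only; the experiment uses photons).
def quantum_walk(steps, n_positions):
    amp = np.zeros((n_positions, 2), dtype=complex)  # amplitude[site, coin]
    amp[n_positions // 2, 0] = 1.0                   # walker starts centred
    hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        amp = amp @ hadamard.T        # toss the coin at every site
        shifted = np.zeros_like(amp)
        shifted[:-1, 0] = amp[1:, 0]  # coin state 0 steps left
        shifted[1:, 1] = amp[:-1, 1]  # coin state 1 steps right
        amp = shifted
    return (np.abs(amp) ** 2).sum(axis=1)  # probability at each site

probs = quantum_walk(20, 81)
print(round(probs.sum(), 6))  # unitary evolution: probabilities sum to 1
```

Unlike a classical random walk, the amplitudes interfere, producing the characteristic spread-out, asymmetric probability distribution that is costly to simulate classically as the dimension grows.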

By combining an integrated photonic platform built at the Polytechnic University of Milan with a novel theoretical proposal developed at the Singapore University of Technology and Design, scientists from the University of Vienna demonstrated the security of the encrypted data and investigated how the scheme behaves as the complexity of the computation increases.

The team was able to show that the security of the encrypted data improves as the dimension of the quantum walk calculation grows. Furthermore, recent theoretical work indicates that future experiments taking advantage of various photonic degrees of freedom could improve data security further, and additional optimizations can be anticipated. “Our results indicate that the level of security improves even further when increasing the number of photons that carry the data,” says Philip Walther, who concludes: “This is exciting, and we anticipate further developments of secure quantum computing in the future.”

Publication in npj Quantum Information:
Jonas Zeuner, Ioannis Pitsios, Si-Hui Tan, Aditya Sharma, Joseph Fitzsimons, Roberto Osellame and Philip Walther, Experimental Quantum Homomorphic Encryption, npj Quantum Information 7, 25 (2021); DOI: 10.1038/s41534-020-00340-8

Featured image: Artistic image of a homomorphic-encrypted quantum computation using a photonic quantum computer. (© Equinox Graphics, Universität Wien)

Provided by Universität Wien