
Artificial Intelligence Improves Control of Powerful Plasma Accelerators (Physics)

AI for controlling next generation accelerators improves potential for applications in research, medicine and industry.

An international team of accelerator experts, including researchers from DESY, has successfully demonstrated that an algorithm can tune the complex parameters involved in controlling the next generation of plasma accelerators. The algorithm optimized the accelerator much more quickly than a human operator could. The experiments, led by Imperial College London researchers, were conducted at the Central Laser Facility at the STFC Rutherford Appleton Laboratory, UK. The results are published today in Nature Communications.

Electrons are accelerated in the plasma cell (centre). A laser beam coming from the right ignites the plasma in the cell (photo: Imperial College).

The technology of plasma-based acceleration promises to deliver a new generation of accelerators, more powerful, compact and versatile than current ones. The accelerated electrons or X-rays produced by them can be used for scientific research, such as probing the atomic structure of materials; in industrial applications, such as for producing consumer electronics; and could also be used in medical applications, such as cancer treatments and medical imaging.

However, for this new technology to serve such a large variety of applications, precise and reliable control of the acceleration process itself must be achieved. Artificial intelligence, or machine learning, is one of the most promising approaches to operating these complex machines. Rob Shalloo (DESY), former researcher at Imperial College and first author of the paper, says: “For plasma accelerators to become prevalent in scientific, industrial or medical applications, they need to move from research project towards something resembling a plug-and-play device. This is challenging for a machine which operates under such extreme conditions, but with machine learning we’re starting to show it is possible. The techniques we have developed will be instrumental in getting the most out of a new generation of advanced plasma accelerator facilities.”

The team worked with laser wakefield accelerators. These combine the world’s most powerful lasers with a source of plasma (ionised gas) to create concentrated beams of electrons and x-rays. Traditional accelerators need hundreds of metres to accelerate electrons, but wakefield accelerators can manage the same acceleration within the space of centimetres, drastically reducing the size and cost of the equipment.

View of a fluorescence screen. With its help, the energy distribution of the electrons accelerated almost to the speed of light can be measured (photo: Imperial College).

However, because wakefield accelerators operate in the extreme conditions created when lasers are combined with plasma, they can be difficult to control and optimise to get the best performance. In wakefield acceleration, an ultrashort laser pulse is driven into plasma, creating a wave that is used to accelerate electrons. Both the laser and plasma have several parameters that can be tweaked to control the interaction, such as the shape and intensity of the laser pulse, or the density and length of the plasma.

While a human operator can tweak these parameters, it is difficult to know how to optimise so many parameters at once. Instead, the team turned to artificial intelligence, creating a machine learning algorithm to optimise the performance of the accelerator.

The algorithm adjusted up to six parameters controlling the laser and plasma, fired the laser, analysed the data, and reset the parameters, repeating this loop many times in succession until the optimal parameter configuration was reached.
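The measure-and-adjust loop described above can be sketched as a minimal Bayesian optimisation over a purely simulated objective. The `beam_quality` function, the parameter ranges and the kernel settings below are illustrative assumptions, not the experiment's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def beam_quality(x):
    # Hypothetical stand-in for the measured figure of merit of the
    # electron beam; in the experiment each evaluation is a laser shot.
    return -np.sum((x - 0.6) ** 2, axis=-1)

def rbf(a, b, length=0.2):
    # Squared-exponential kernel between two sets of parameter settings.
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / length ** 2)

def propose(X, y, n_cand=500, kappa=2.0):
    # Gaussian-process posterior over random candidate settings; return
    # the upper-confidence-bound maximiser as the next setting to try.
    K = rbf(X, X) + 1e-6 * np.eye(len(X))
    cand = rng.uniform(0.0, 1.0, size=(n_cand, X.shape[1]))
    Ks = rbf(cand, X)
    mu = Ks @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 1e-12, None)
    return cand[np.argmax(mu + kappa * np.sqrt(var))]

n_dim = 6                                   # up to six laser/plasma parameters
X = rng.uniform(0.0, 1.0, size=(5, n_dim))  # a few random initial "shots"
y = beam_quality(X)
for _ in range(40):                         # fire, analyse, re-set, repeat
    x_next = propose(X, y)
    X = np.vstack([X, x_next])
    y = np.append(y, beam_quality(x_next))

best_setting = X[np.argmax(y)]
```

With this loop structure, swapping the toy objective for real shot analysis is conceptually straightforward; the published work used a considerably more sophisticated Bayesian optimiser.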

The data gathered during the optimisation process also provided new insight into the dynamics of the laser-plasma interaction inside the accelerator, potentially informing future designs to further improve accelerator performance.

Lead researcher Matthew Streeter, who completed the work at Imperial College and is now at Queen’s University Belfast, says: “Our work resulted in an autonomous plasma accelerator, the first of its kind. As well as allowing us to efficiently optimise the accelerator, it also simplifies their operation and allows us to spend more of our efforts on exploring the fundamental physics behind these extreme machines.”

“The computer was able to reliably optimise the plasma accelerator from scratch within minutes; this is difficult to achieve by ‘human learning’ even when you are an experienced operator”, says Jens Osterhoff, head of plasma accelerator research at DESY. “These are very promising first steps towards application of AI in accelerator operation in general and I am certain hardly any future accelerator will go without machine learning.”

Similar experiments being conducted at the LUX accelerator, a joint project of DESY and Hamburg University, support the view that the application of AI to steering accelerators is currently being raised to a new level (S. Jalas et al., submitted). “With AI and machine learning deployed on our current and next generation of accelerators at DESY and elsewhere, we expect to see unprecedented performance levels, which is truly exciting”, concludes Wim Leemans, Director of the Accelerator Division at DESY.

The experiment was conducted by a team of researchers from Imperial College London, the Central Laser Facility, the York Plasma Institute, the University of Michigan, the University of Oxford and DESY.

Reference: ‘Automation and control of laser wakefield accelerators using Bayesian optimisation’; R.J. Shalloo et al.; Nature Communications; DOI: 10.1038/s41467-020-20245-6 https://www.nature.com/articles/s41467-020-20245-6

Provided by DESY

What Does It Mean To Be A “Depleted” Comet? (Planetary Science)

Friends, comets are some of the least altered bodies left over from the formation of the Solar System. Made of dust and ice, these bodies spend much of their lives far from the Sun until some gravitational perturbation causes them to enter the inner Solar System. Once there, the ices sublime. The gas flows away from the small, and gravitationally weak, nucleus, entraining dust in its flow.

Cometary spectra have been obtained in surveys for over 150 years and it has long been noted that the spectra of most comets are remarkably similar. Photometric and spectroscopic surveys have been undertaken to assess if all comets are spectrally similar or whether there are comets with different compositions. Based on production rate ratios from optical observations, these surveys found that approximately 75% of the observed comets had very similar compositions, termed “typical”. The other 25% were depleted in C2 and are designated “depleted” comets. All of the surveys use Q(C2/OH) or Q(C2/CN) to define typical versus depleted, where Q is the production rate in molecules s⁻¹. In 2012, Cochran and colleagues also observed C3. Using a very strict definition of C3 depletion, they found that about 9% of all observed comets were depleted in both C2 and C3. With a slightly less strict definition, A’Hearn and colleagues found about 24% of all comets were depleted in both C2 and C3.
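As an illustration of how such a cut works in practice, here is a minimal classifier based on the log of the Q(C2)/Q(CN) production-rate ratio. The numeric boundary below is illustrative only, not the published survey threshold:

```python
import math

DEPLETION_CUT = -0.18  # illustrative log10 Q(C2)/Q(CN) boundary, not the survey's

def classify(q_c2, q_cn):
    """Classify a comet as "typical" or "depleted" from production rates
    q_c2 and q_cn, both in molecules per second."""
    log_ratio = math.log10(q_c2 / q_cn)
    return "depleted" if log_ratio < DEPLETION_CUT else "typical"
```

A comet producing ten times less C2 than CN would land well below any reasonable cut and be flagged as depleted.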

Comet 21P/Giacobini-Zinner (hereafter GZ) is the prototype of the depleted comets because it was the first one discovered; it is part of the group that is depleted in both C2 and C3 under the definitions of both Cochran et al. and A’Hearn et al. Figure 1 below shows a low-resolution optical spectrum of GZ, along with a spectrum from the same instrument of the typical comet 8P/Tuttle. The two comets were at essentially the same heliocentric and geocentric distances at the times of the observations. Inspection of these spectra shows a prominent emission at ∼3880 Å due to CN in both spectra. Tuttle also shows strong emissions of C2 and C3 that are very weak in the GZ spectrum.

Figure 1. Spectra in the optical of GZ (top) and Tuttle (bottom) are shown. Emission due to CN is prominent in both spectra. However, while Tuttle shows strong molecular emissions due to C3 and C2, those features appear extremely weak in the GZ spectrum.

Comparison of the spectra in Figure 1 raises the question of what it means for a comet to be depleted in a species. Does this mean that there is some of that species, but the distribution of the relative line strengths in the spectrum is very different from a typical comet? Or does it mean that all the same lines that we observe in a typical comet’s spectrum are present with the same relative strengths as a typical comet, but that they are all just much weaker than we would expect for a typical comet? In order to answer this question, Anita Cochran and colleagues undertook high spectral resolution observations of GZ in 2018.

They obtained these observations with the Tull coudé spectrograph on the 2.7m Harlan J. Smith Telescope of McDonald Observatory. They have shown that C2/C3 depleted comets look remarkably similar to the “typical” comets for species such as CN, that generally do not show the depletions.

The spectra also do not show significantly different relative line ratios in the depleted species (such as C2) for GZ when compared with the line ratios of the same species in Tuttle. A’Hearn and colleagues showed that CN/H2O was very similar for GZ and Tuttle. The species they detected are depleted relative to CN by much more than the differences in this ratio, which implies that they are also depleted relative to water.

Table 1. Ratio of count rates of species for GZ/Tuttle

According to Anita Cochran, “If the solar nebula was more or less homogeneous when the comets formed, and the cometary compositions therefore were also homogeneous, then much lower abundances of some species might come about because of loss of the parent volatile ice by sublimation from the nucleus. One would then expect that comets that have been near the Sun over longer periods of time and/or more often would be more depleted. Thus, we would expect Jupiter Family comets to be depleted, but long period comets to be unlikely to be depleted.”

While a higher percentage of Jupiter Family comets (37%) are depleted than long period comets (19.5%), the team found comets that have been in the inner Solar System frequently (e.g. 2P/Encke) yet are typical, and comets that have rarely been near the Sun yet are depleted.

If not the result of dynamical evolution, then the depletions must arise from differences in formation. Long period comets generally enter the inner Solar System from the Oort cloud, having first formed in the giant planet region and then been perturbed outward to the Oort Cloud. Jupiter Family comets formed in the Kuiper Belt and are perturbed into the inner Solar System from the scattered disk. These dynamical scenarios would explain a difference in depletion only if all of the depleted comets came from one reservoir; since they do not, the two reservoirs must have been mixed at some time, with the original formation regions thus intertwined. There is a growing consensus that such mixing is likely fairly common.

The existence of depleted comets in both reservoirs implies that the formation regions were not totally homogeneous. Pockets with lower quantities of the parents of C2 and C3 must have existed to form the depleted comets. Dynamical studies to determine how materials were mixed are beyond the scope of their paper.

Dr. Roth and colleagues obtained complementary IR observations of GZ during a period similar to the Cochran et al. observations. That study targeted the hyper-volatiles CO, CH4 and C2H6, along with observations of H2O. They found that the CO mixing ratio looked like that of other Jupiter Family comets, and that CH4 might be enriched (though with some caveats), while C2H6 was depleted. They found that some of the species were variable, which raises the question of whether the depleted comets are always depleted or whether the depletion is a time-variable property. They concluded that the variability is much smaller than the amount of depletion detected. They also pointed out that GZ has been observed over more than one apparition and always shows depletion, consistent with what has been found in the optical. Therefore, the depletion is a real property of the whole body and not just an artefact of observing different regions of the nucleus.

Faggi and colleagues also observed GZ in the IR around the time that Roth and colleagues did. They detected H2O, C2H6, CH3OH, HCN and CO and derived only upper limits for C2H2, H2CO, CH4 and NH3. Acetylene is presumed to be the parent for C2 and they found that their upper limits of acetylene to water were consistent with the depletion seen in the optical. Faggi and colleagues also saw some variability of the measured species, with ethane and methanol being depleted sometimes and enhanced at others. However, as with Roth and colleagues, the variability is much smaller than the depletion seen in C2 and other depleted species.

Interestingly, observations of Tuttle in the IR showed it to also be hydrocarbon poor, though it looks quite typical in the optical. This suggests that carbon chain depletion may not be tracing hydrocarbons.

The authors conclude that depleted comets do not totally lack the normal species that exist in larger amounts in typical comets. The small amounts of these species behave normally when the comet is heated, and they sublime; there is just not as much of these species’ parents to sublime.

Reference: Anita L. Cochran, Tyler Nelson, Adam J. McKay, “What Does It Mean to be a “Depleted” Comet? High Spectral Resolution Observations of the Prototypical Depleted Comet 21P/Giacobini-Zinner from McDonald Observatory”, arXiv, pp. 1-13, 2020. https://arxiv.org/abs/2009.01308

Copyright of this article totally belongs to uncover reality. One is only allowed to use it by giving proper credit either to us or to our author.

Do You Know The Mass Of Milky Way Within 100 Kpc? (Astronomy)

Astronomers using a large sample of halo stars estimated the mass of the Milky Way out to 100 kpc and found that it is 6.31 ± 0.32 (stat.) ± 1.26 (sys.) × 10¹¹ M⊙.

The total mass of the Milky Way has been an historically difficult parameter to pin down. Despite decades of measurements, there remains an undercurrent of elusiveness surrounding “the mass of the Milky Way”. However, the continued eagerness to provide an accurate measure is perhaps unsurprising — the mass of a halo is arguably its most important characteristic. For example, almost every property of a galaxy is dependent on its halo mass, and thus this key property is essential to place our “benchmark” Milky Way galaxy in context within the general galaxy population. In addition, the host halo mass is inherently linked to its subhalo population, so most of the apparent small scale discrepancies with the ΛCDM model are strongly dependent on the Milky Way mass. Moreover, tests of alternative dark matter candidates critically depend on the total mass of the Milky Way, particularly for astrophysical tests.

Milky way © wallpaper cave

The uncertainty has stemmed from two major shortcomings:
(1) a lack of luminous tracers with full 6D phase-space information out to the virial radius of the Galaxy, and (2) an underestimated, or unquantified, systematic uncertainty in the mass estimate.

However, there has been significant progress since the first astrometric data release from the Gaia satellite. This game-changing mission for Milky Way science provided the much needed tangential velocity components for significant numbers of halo stars, globular clusters and satellite galaxies. Indeed, there are encouraging signs that we are converging to a total mass of just over 1 × 10¹² M⊙. However, mass estimates at very large distances (i.e. beyond 50 kpc) are dominated by measures using the kinematics of satellite galaxies, which probe out to the virial radius of the Galaxy. It is well-known that the dwarf satellites of the Milky Way have a peculiar planar alignment, and, without independent measures at these large distances, there remains uncertainty over whether or not the satellites are biased kinematic tracers of the halo.

Arguably the most promising tracers at large radii are the halo stars. They are significantly more numerous than the satellite galaxies and globular clusters, and are predicted to reach out to the virial radius of the Galaxy. There currently exist thousands of halo stars with 6D phase-space measurements, thanks to the exquisite Gaia astrometry and wide-field spectroscopic surveys such as the Sloan Digital Sky Survey (SDSS) and the Large Sky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST) survey. Moreover, with future Gaia data releases and the next generation of wide-field spectroscopic surveys from facilities such as the Dark Energy Spectroscopic Instrument, the WHT Enhanced Area Velocity Explorer, and the 4-metre Multi-Object Spectroscopic Telescope, there will be hundreds of thousands of halo stars with 6D measurements. The magnitude limit of Gaia and the complementary spectroscopic surveys will likely limit the samples of halo stars to within ∼100 kpc, but this is still an appreciable fraction of the virial radius (∼0.5𝑟200c), and will probe relatively uncharted territory beyond 50 kpc.

As we enter a regime of more precise mass measures, and significantly reduced statistical uncertainties, it is vital to be mindful of any systematic influences in our mass estimates. Although many mass-modelling techniques assume dynamical equilibrium, it is well-documented that “realistic” stellar haloes can be a mash-up of several coherent streams and substructures. Thus, comparisons with cosmologically motivated models of stellar haloes are crucial. However, while cosmological simulations can provide much needed context, the unique assembly history of the Milky Way is most relevant for Galactic mass measurements. For example, the influence of the Sagittarius (Sgr) stream, which contributes a significant fraction to the total stellar halo mass, needs to be considered. Furthermore, and perhaps more importantly, it has recently been recognised that the recent infall of the massive Large Magellanic Cloud (LMC) can imprint significant velocity gradients in the Milky Way halo. Indeed, Erkal et al. (2020) showed that these velocity gradients can bias equilibrium-based mass modelling, and this is thus an effect that can no longer be ignored.

In this work, researchers compile a sample of distant (𝑟 > 50 kpc) halo stars from the literature with 6D phase-space measurements, and use a distribution function analysis to measure the total mass within 100 kpc. They pay particular attention to systematic influences, such as the Sagittarius (Sgr) stream and the LMC, and, where possible, correct for these perturbative effects.

The resulting circular velocity (left panel) and mass (right panel) profiles as a function of galactocentric radius. The results when no velocity offset is applied (dashed lines) and when Sgr stars are included (purple lines) are also shown. The shaded regions indicate the 1-𝜎 uncertainty. © Deason et al.

They used a rigid Milky Way-LMC model to constrain the systematic reflex-motion effect of the massive LMC on their halo mass estimate. They found that a simple velocity-offset correction in 𝑣los and 𝑣𝑏 can minimize the overestimate caused by the LMC-induced reflex motion and that, assuming a rigid LMC mass of 1.5 × 10¹¹ M⊙, they can recover the true mass within 1-𝜎.

Phase-space diagrams (𝑣𝑟 vs. 𝑟) for four example Auriga haloes, and the Milky Way data (bottom panels). The top two panels show Auriga haloes with shell-type structures in the radial range 50-100 kpc. The middle two panels show cases with no obvious shells. Typically, the presence of shells causes the mass estimates to be underestimated. The bottom two panels show the 𝑣los vs. 𝑟 diagram for the observational sample in the distance range 50 < 𝑟/kpc < 100. The bottom left panel shows each individual star and the associated 𝑣los errors; the red points indicate the stars that likely belong to the Sgr stream. The bottom right panel shows a 2D histogram in the 𝑣los-𝑟 space, taking into account uncertainties in the distance and velocity of each star. Note that Sgr stars are excluded in the right-hand panel. © Deason et al.

By applying their method to a sample of Milky Way-mass haloes from the Auriga simulations, they found that the halo masses are typically underestimated by 10%. However, this bias is reduced to ∼5% if only haloes with relatively quiescent recent accretion histories are considered. The residual bias is due to the presence of long-lived shell-like structures in the outer halo. The halo-to-halo scatter is ∼20% for the quiescent haloes, and represents the dominant source of error in the mass estimate of the Milky Way.

They also found that the mass of the Milky Way within 100 kpc is 6.31 ± 0.32 (stat.) ± 1.26 (sys.) × 10¹¹ M⊙. A systematic bias correction (+5%) and an additional uncertainty (20%) are included based on the Auriga results. The mass estimates are slightly higher when no velocity offset is applied to correct for the LMC-induced reflex motion, and slightly lower when Sgr stars are included in the analysis.
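If the statistical and systematic terms are treated as independent, a common convention is to combine them in quadrature into a single uncertainty. This is a generic convention, not necessarily the authors' own procedure, but it gives a quick feel for the quoted numbers:

```python
import math

# Quoted mass within 100 kpc, in units of 1e11 solar masses
m_100 = 6.31
stat, sys_err = 0.32, 1.26

# Quadrature sum of independent error terms; note the systematic
# term dominates the combined uncertainty.
total_err = math.sqrt(stat ** 2 + sys_err ** 2)
```

The combined uncertainty is dominated almost entirely by the systematic term, which is why reducing the halo-to-halo scatter matters more than gathering additional stars.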

Their mass estimate within 100 kpc is in good agreement with recent, independent measures in the same radial range. If they assume the predicted mass-concentration relation for Navarro-Frenk-White haloes, their measurement favours a total (pre-LMC infall) Milky Way mass of 𝑀200c = 1.05 ± 0.25 × 10¹² M⊙, or a (post-LMC infall) mass 𝑀200c = 1.20 ± 0.25 × 10¹² M⊙ when a rigid 1.5 × 10¹¹ M⊙ LMC is included.

References: Alis J. Deason, Denis Erkal, Vasily Belokurov, Azadeh Fattahi, Facundo A. Gómez, Robert J. J. Grand, Rüdiger Pakmor, Xiang-Xiang Xue, Chao Liu, Chengqun Yang, Lan Zhang, Gang Zhao, “The mass of the Milky Way out to 100 kpc using halo stars”, ArXiv, 2020. https://arxiv.org/abs/2010.13801


New Approach To Diagnose Genetic Diseases Using RNA Sequencing Increases Yield (Medicine)

In the world of rare genetic diseases, exome and genome sequencing are two powerful tools used to make a diagnosis. A recent addition to the toolkit, RNA sequencing, has been demonstrated to help researchers narrow down disease candidate variants identified first on exome and genome sequencing. A new study from Baylor College of Medicine finds that starting genetic analysis with RNA sequencing can increase diagnostic yield even further. The results are published in the Journal of Clinical Investigation.

Adding RNA sequencing to exome and genome sequencing can improve diagnostic yield and increase confidence in diagnosis of rare genetic diseases. Image courtesy of the National Human Genome Research Institute/Ernesto del Aguila III.

Baylor has been a leader in developing clinical applications of exome and genome sequencing, a technique that is now being used in clinics worldwide. Researchers at the National Institutes of Health (NIH) Undiagnosed Diseases Network (UDN) have successfully used exome and genome sequencing to increase the diagnostic rate of rare genetic diseases to about 35%.

“That’s impressive because these cases have already had such an extensive workup,” said Dr. Brendan Lee, corresponding author on the study and professor and chair of molecular and human genetics at Baylor. “The 35% diagnostic rate is great, but unfortunately that means there’s still 65% that remain undiagnosed.”

Exome and genome sequencing do have limits. Over the years, scientists have identified about 4,000 disease-causing genes and 7,000 associated distinct clinical diseases, but relatively little is known about the roughly 16,000 other genes in the genome. Further, only about 1 to 2% of the genome is coding, meaning it is transcribed into RNA and translated into protein. Researchers are limited in their ability to interpret genetic changes in the noncoding regions of the genome.

“When a patient’s exome and genome is sequenced, we interpret the results and come up with a list of gene mutations that could cause a disease,” said Lee, the Robert and Janice McNair Endowed Chair in Molecular and Human Genetics at Baylor. “Often, we can’t assign a definitive effect to many DNA mutations because there’s not enough information.”

To assist in interpretation of exome and genome sequencing, the UDN has increasingly turned to RNA sequencing. While exome and genome sequencing identify genetic changes in the DNA, RNA sequencing can reveal the effects of those changes, for example if a gene has lower expression than expected. RNA sequencing can also tell us about the effects of noncoding changes, something that is very important as we transition from exome to genome sequencing.

Trying a new approach

RNA sequencing has been used as a secondary tool to help prioritize disease gene candidates identified with exome and genome sequencing, and it has been shown to increase diagnostic yield to variable degrees. The Baylor team wanted to try a different approach with the UDN cases. Using a novel pipeline developed with collaborators in Germany, they started with the RNA sequencing to first identify unique differences in gene expression and splicing. Researchers could then trace the problem back to a corresponding genetic change in the exome and genome sequencing data.
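A toy version of the expression-outlier step can illustrate the idea: compare each gene's expression in the patient against a control cohort and flag extreme under-expression. The gene names, counts and z-score cut-off below are hypothetical, and this is a crude stand-in for the dedicated statistical tools used in the actual pipeline:

```python
import statistics

def expression_outliers(sample_counts, cohort_counts, z_cut=-3.0):
    """Flag genes whose expression in the patient sample lies far below
    the cohort mean, measured in cohort standard deviations."""
    flagged = []
    for gene, value in sample_counts.items():
        cohort = cohort_counts[gene]
        mu = statistics.fmean(cohort)
        sd = statistics.stdev(cohort)
        if sd > 0 and (value - mu) / sd <= z_cut:
            flagged.append(gene)
    return flagged

# Hypothetical normalized read counts: GENE_A is strongly underexpressed
# in the patient relative to controls, GENE_B looks normal.
controls = {"GENE_A": [100, 95, 105, 100, 98], "GENE_B": [50, 52, 48, 51, 49]}
patient = {"GENE_A": 20, "GENE_B": 50}
hits = expression_outliers(patient, controls)
```

Each flagged gene would then be traced back to a candidate variant in the exome or genome data, which is the "working backwards" strategy the researchers describe.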

“This strategy really flips the way we normally approach a case, looking at the end result in the RNA and working backwards to find the cause in the exome or genome. It allows biology to tell us where to look to make a diagnosis,” said Dr. David Murdock, lead author of the study and assistant professor of molecular and human genetics at Baylor.

“We found this RNA sequencing first approach was extremely powerful,” Lee said. “It was able to very rapidly point at the gene we should look at. Moreover, it did so in cases in which we would not have been able to identify the disease gene candidate using exome and genome sequencing alone. If we had used the old way and generated a priority candidate list, these gene mutations would not have even been in the priority list.”

Increased diagnostic yield

The team found that exome and genome sequencing sometimes missed small deletions in genes. However, the RNA sequencing data showed that these deletions could dramatically affect gene expression. RNA sequencing also picked up changes in expression and splicing caused by variants in noncoding regions that would not have been flagged in regular exome and genome sequencing.

“We see lots of changes in the noncoding region, and there’s no way to know if they’re important,” Lee said. “But the RNA sequencing will show an 80% decrease in expression related to that gene, and then we can assign causality to that noncoding variant.”

The Baylor study found that beginning with RNA sequencing could increase the diagnostic yield by 17% over the traditional exome and genome sequencing approach, bringing their overall diagnosis rate to roughly 50%. As part of the study, researchers analyzed both skin cells and blood cells and found that skin cells were more informative in diagnosis because of their homogeneous nature and better gene expression.

“I see RNA sequencing with this approach as eventually becoming standard practice, especially as we move more from exome to genome sequencing. It allows us to diagnose so many more patients and gives families the answer they’ve been seeking,” Murdock said.

“A diagnosis is not black and white. It involves variable amounts of certainty,” Lee said. “The availability of this tool will change the certainty level, improving diagnostic yield and increasing confidence in that diagnosis.”

References: David R. Murdock, … , Neil A. Hanchard, Brendan Lee, “Transcriptome-directed analysis for Mendelian disease diagnosis overcomes limitations of conventional genomic testing”, J Clin Invest. 2020. https://doi.org/10.1172/JCI141500. Link: https://www.jci.org/articles/view/141500

Provided by Baylor College Of Medicine

Gravitational Waves From The Universe Filled With Primordial Black Holes (Astronomy)

Friends, primordial black holes (PBHs) are attracting increasing attention since they may play a number of important roles in Cosmology. They may indeed constitute part or all of the dark matter, they may explain the generation of large-scale structures through Poisson fluctuations, they may provide seeds for supermassive black holes in galactic nuclei, and they may account for the progenitors of the black-hole merging events recently detected by the LIGO/VIRGO collaboration through their gravitational wave emission.

There are several constraints on the abundance of PBHs, ranging from microlensing constraints, dynamical constraints (such as constraints from the abundance of wide binaries in our local galaxy, or from the existence of a star cluster near the centres of ultra-faint dwarf galaxies), constraints from the cosmic microwave background due to the radiation released in PBH accretion, and constraints from the extragalactic gamma-ray background, to which Hawking evaporation of PBHs contributes. However, all these constraints are restricted to certain mass ranges for the black holes, and no constraint applies to black holes with masses smaller than ∼10⁹ g, since those would Hawking-evaporate before big-bang nucleosynthesis.
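The ∼10⁹ g boundary can be motivated with the leading-order Hawking lifetime, t = 5120πG²M³/(ħc⁴). This photon-only estimate overstates the true lifetime, which shortens once all emitted particle species and greybody factors are counted, but it captures the strong M³ scaling:

```python
import math

G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34   # reduced Planck constant, J s
C    = 2.998e8     # speed of light, m s^-1

def evaporation_time(mass_kg):
    # Leading-order Hawking lifetime of a Schwarzschild black hole;
    # real lifetimes are somewhat shorter once all species are included.
    return 5120 * math.pi * G ** 2 * mass_kg ** 3 / (HBAR * C ** 4)

t = evaporation_time(1e6)   # 10^9 g = 10^6 kg
```

Even this conservative estimate puts the lifetime of a 10⁹ g black hole at under a few minutes, comfortably within the nucleosynthesis era, and the cubic dependence means lighter PBHs vanish far earlier still.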

Nonetheless, various scenarios have been proposed where ultra-light black holes are abundantly produced in the early universe, so abundantly that they might even dominate the energy budget of the universe for a transient period. By Hawking-evaporating before big-bang nucleosynthesis takes place, those PBHs would leave no direct imprint. It thus seems rather frustrating that such a drastic change in the cosmological standard model, where an additional matter-dominated epoch driven by PBHs is introduced, and where reheating proceeds from PBH evaporation, cannot be constrained by the above-mentioned probes. This situation could however be improved by noting that a gas of gravitationally interacting PBHs is expected to emit gravitational waves, and that these gravitational waves would propagate in the universe until today, leaving an indirect imprint of the PBHs’ past existence.

In the present work, Theodoros Papanikolaou and colleagues have studied the gravitational waves induced at second order by the gravitational potential of a gas of primordial black holes. In particular, they have considered scenarios where ultralight PBHs, with masses below 10⁹ g, dominate the universe content during a transient period before Hawking-evaporating.

There are several ways PBHs can be involved in the production of gravitational waves. First, the induction of gravitational waves can proceed from the primordial, large curvature perturbations that must have preceded and given rise to the existence of PBHs in the very early universe. Second, the relic Hawking-radiated gravitons may also contribute to the stochastic gravitational-wave background. Third, gravitational waves are expected to be emitted by PBH mergers. But in the current paper, researchers investigated a fourth effect, namely the production of gravitational waves induced by the large-scale density perturbations underlain by the PBHs themselves.

Contrary to the first, more commonly studied effect mentioned above, where PBHs and gravitational waves have a common origin (namely the existence of a large primordial curvature perturbation), in the problem at hand the gravitational waves are produced by the PBHs themselves, via the curvature perturbation they underlie. They also note that since they make use of cosmological perturbation theory, they restrict their analysis to those scales where the density field remains linear; the inclusion of smaller scales would require resolving non-linear mechanisms such as merging, described in the third effect mentioned above.

This fourth route is a very powerful one for constraining scenarios where the universe is transiently dominated by PBHs: the mere requirement that the energy contained in the emitted gravitational waves does not overtake that of the background after the PBHs have evaporated (which would lead to an obvious backreaction problem) leads to tight constraints on the abundance of PBHs at the time they form. In particular, it excludes the possibility that PBHs dominate the universe from their time of formation, independently of their mass.

In practice, they considered PBHs that are initially randomly distributed in space, since recent works suggest that initial clustering is negligible. They also assumed that the mass distribution of PBHs is monochromatic, as is the case in most formation mechanisms. If PBHs form during the radiation era, their contribution to the total energy density grows with the expansion, since matter dilutes more slowly than radiation. Therefore, if their initial abundance is sufficiently large, they come to dominate the universe content before they evaporate, and the researchers computed the amount of gravitational waves produced during this PBH-dominated era.

By neglecting clustering at formation, they found that the Poissonian fluctuations in the PBH number density underlie small-scale density perturbations, which in turn induce the production of gravitational waves at second order. In practice, they computed the gravitational-wave energy spectrum, as well as the integrated energy density of gravitational waves, as a function of the two parameters of the problem: the PBH mass mPBH (assuming that all black holes form with roughly the same mass) and their relative abundance at formation ΩPBH,f. This calculation was performed both numerically and by means of well-tested analytical approximations. They found that the amount of gravitational waves increases with mPBH, since heavier black holes take longer to evaporate and hence dominate the universe for a longer period, and with ΩPBH,f, since more abundant black holes dominate the universe earlier, hence also for a longer period.
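The Poissonian point can be illustrated with a toy numerical check (mine, not the paper's computation): for black holes thrown down at random, the fractional fluctuation in the number found in a region containing on average nbar of them scales as 1/√nbar.

```python
# Toy check (not the paper's calculation): PBHs placed at random give
# Poisson counts-in-cells, so the fractional density fluctuation in a
# cell holding nbar black holes on average is ~ 1/sqrt(nbar). These
# fluctuations are the seed of the induced gravitational waves.
import math
import random

random.seed(0)

def poisson(lam):
    """Knuth's method: draw a Poisson-distributed count with mean lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def fractional_fluctuation(nbar, cells=20000):
    """Measured std of delta = (N - <N>)/<N> over many random cells."""
    counts = [poisson(nbar) for _ in range(cells)]
    mean = sum(counts) / cells
    var = sum((c - mean) ** 2 for c in counts) / cells
    return math.sqrt(var) / mean

for nbar in (10, 100):
    print(nbar, round(fractional_fluctuation(nbar), 3), round(nbar ** -0.5, 3))
```

The measured fluctuation tracks 1/√nbar closely, which is why sparser (or coarser-grained) PBH gases carry larger relative density perturbations.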

Requiring that the energy contained in gravitational waves never overtakes that of the background universe led them to the constraint:

Equation 5.1.

They stress that since PBHs with masses smaller than 10^9 g evaporate before big-bang nucleosynthesis, they cannot be directly constrained (at least without making further assumptions). To their knowledge, the above constraint is therefore the first ever derived on ultra-light PBHs. In particular, it shows that scenarios where PBHs dominate from their formation time onward, ΩPBH,f ≈ 1, are excluded (given that mPBH > 10 g for inflation to proceed at less than 10^16 GeV).

They also note that the condition above simply comes from avoiding a backreaction problem and does not implement observational constraints. However, even if that condition is satisfied, gravitational waves induced by a dominating gas of PBHs might still be detectable with future gravitational-wave experiments. They found that the energy spectrum peaks at the Hubble scale at the time the black holes start dominating, which corresponds to a frequency f = Hd/(2πa0), where a0 is the value of the scale factor today and Hd is the comoving Hubble scale at domination time. This leads to

Equation 5.2

where H0 is the value of the Hubble parameter today and zeq is the redshift at matter-radiation equality. In the figure below, this frequency is shown in the region of parameter space that satisfies the condition. One can see that, spanning 14 orders of magnitude, it intersects the detection bands of the Einstein Telescope (ET), the Laser Interferometer Space Antenna (LISA) and the Square Kilometre Array (SKA) facility.

Frequency at which the gravitational waves induced by a dominating gas of primordial black holes peak, as a function of their energy density fraction at the time they form, ΩPBH,f (horizontal axis), and their mass mPBH (colour coding). The region of parameter space that is displayed corresponds to values of mPBH and ΩPBH,f such that black holes dominate the universe content for a transient period, form after inflation and Hawking-evaporate before big-bang nucleosynthesis, and such that the induced gravitational waves do not lead to a backreaction problem. In practice, Eq. (5.2) is displayed with geff = 100, zeq = 3387 and H0 = 70 km s⁻¹ Mpc⁻¹. For comparison, the detection bands of ET, LISA and SKA are also shown.

This may help to further constrain ultra-light primordial black holes, and set potential targets for these experiments.

References: Theodoros Papanikolaou, Vincent Vennin, David Langlois, “Gravitational waves from a universe filled with primordial black holes”, arXiv:2010.11573 (2020). https://arxiv.org/abs/2010.11573

Copyright of this article belongs entirely to Uncover Reality. No one is allowed to use it without permission or proper credit.

NTU Singapore Scientists Devise ‘Trojan Horse’ Approach To Kill Cancer Cells Without Using Drugs (Oncology / Medicine)

Cancer cells are killed in lab experiments and tumour growth reduced in mice, using a new approach that turns a nanoparticle into a ‘Trojan horse’ that causes cancer cells to self-destruct, a research team at the Nanyang Technological University, Singapore (NTU Singapore) has found.

Image: The anti-cancer therapeutic nanoparticle is ultrasmall, with a diameter of 30 nanometres, or approximately 30,000 times smaller than a strand of human hair, and is named Nano-pPAAM.

The researchers created their ‘Trojan horse’ nanoparticle by coating it with a specific amino acid – L-phenylalanine – that cancer cells rely on, along with other similar amino acids, to survive and grow. L-phenylalanine is known as an ‘essential’ amino acid as it cannot be made by the body and must be absorbed from food, typically from meat and dairy products.

Studies by other research teams have shown that cancer tumour growth can be slowed or prevented by ‘starving’ cancer cells of amino acids. Scientists believe that depriving cancer cells of amino acids, for example through fasting or through special diets lacking in protein, may be viable ways to treat cancer.

However, such strict dietary regimes would not be suitable for all patients, including those at risk of malnutrition or those with cachexia – a condition arising from chronic illness that causes extreme weight and muscle loss. Furthermore, compliance with the regimes would be very challenging for many patients.

Seeking to exploit the amino acid dependency of cancer cells but avoid the challenges of strict dietary regimes, the NTU researchers devised a novel alternative approach.

Figure 1. Schematics illustrating the working principle of A) conventional nutrient deprivation and B) the proposed nanoparticles mediated approach to kill cancer cells. Illustration created with BioRender.

They took a silica nanoparticle designated as ‘Generally Recognized As Safe’ by the US Food and Drug Administration and coated it with L-phenylalanine, and found that in lab tests with mice it killed cancer cells effectively and very specifically, by causing them to self-destruct.

The anti-cancer therapeutic nanoparticle is ultrasmall, with a diameter of 30 nanometres, or approximately 30,000 times smaller than a strand of human hair, and is named “Nanoscopic phenylalanine Porous Amino Acid Mimic”, or Nano-pPAAM.

Their findings, published recently in the scientific journal Small, may hold promise for future design of nanotherapies, said the research team.

Assistant Professor Dalton Tay from the School of Materials Science and Engineering, lead author of the study, said: “Against conventional wisdom, our approach involved using the nanomaterial as a drug instead of as a drug-carrier. Here, the cancer-selective and killing properties of Nano-pPAAM are intrinsic and do not need to be ‘activated’ by any external stimuli. The amino acid L-phenylalanine acts as a ‘Trojan horse’ – a cloak to mask the nanotherapeutic on the inside.”

“By removing the drug component, we have effectively simplified the nanomedicine formulation and may overcome the numerous technological hurdles that are hindering the bench-to-bedside translation of drug-based nanomedicine.”

Intrinsic anti-cancer therapeutic properties of Nano-pPAAM

As a proof of concept, the scientists tested the efficacy of Nano-pPAAM in the lab and in mice and found that the nanoparticle killed about 80 per cent of breast, skin, and gastric cancer cells, which is comparable to conventional chemotherapeutic drugs like Cisplatin. Tumour growth in mice with human triple negative breast cancer cells was also significantly reduced compared to control models.

Further investigations showed that the amino acid coating of Nano-pPAAM helped the nanoparticle enter the cancer cells through the amino acid transporter LAT1. Once inside the cancer cells, Nano-pPAAM stimulates excessive production of reactive oxygen species (ROS) – a type of reactive molecule in the body – causing cancer cells to self-destruct while remaining harmless to healthy cells.

Co-author Associate Professor Tan Nguan Soon from NTU’s Lee Kong Chian School of Medicine said: “With current chemotherapy drug treatment, a common issue faced is that recurrent cancer becomes resistant to the drug. Our strategy does not involve the use of any pharmacological drugs but relies on the nanoparticles’ unique properties to release catastrophic level of reactive oxygen species (ROS) to kill cancer cells.”

Providing an independent view, Associate Professor Tan Ern Yu, a breast cancer specialist at Tan Tock Seng Hospital said, “This novel approach could hold much promise for cancer cells that have failed to respond to conventional treatment like chemotherapy. Such cancers often have evolved mechanisms of resistance to the drugs currently in use, rendering them ineffective. However, the cancer cells could potentially still be susceptible to the ‘Trojan horse’ approach since it acts through a completely different mechanism – one that the cells will not have adapted to.”

The scientists are now looking to further refine the design and chemistry of the Nano-pPAAM to make it more precise in targeting specific cancer types and achieve higher therapeutic efficacy.

This includes combining their method with other therapies such as immunotherapy which uses the body’s immune system to fight cancer.

References: Zhuoran Wu, Hong Kit Lim, Shao Jie Tan, Archana Gautam, Han Wei Hou, Kee Woei Ng, Nguan Soon Tan, and Chor Yong Tay, “Potent-By-Design: Amino Acids Mimicking Porous Nanotherapeutics with Intrinsic Anticancer Targeting Properties”, Small, pp. 1-12, DOI: 10.1002/smll.202003757

Provided by Nanyang Technological University

This Technology Of POSTECH Can Diagnose Covid-19 In Just 30 Minutes (Medicine)

POSTECH professors Jeong Wook Lee and Gyoo Yeol Jung’s team develops a one-pot diagnostic method for detecting pathogenic RNAs with PCR-level sensitivity. Diagnostic technology for new infectious diseases can be developed within a week, preventing the confusion caused by new epidemics in the future.

The year 2020 can be summarized simply by one word – COVID-19 – the culprit that froze the entire world. For more than eight months so far, movement between nations has been paralyzed because there are no means to prevent or treat the virus, and diagnosis takes a long time.

In Korea, there are many confirmed cases among those arriving from abroad, but diagnosis does not currently take place at the airport. Overseas visitors can enter the country if they show no symptoms, and must then visit the screening clinic nearest to their site of self-isolation on their own. Even then, if the clinic is closed, they have no choice but to visit it the next day. Naturally, there have been concerns about travellers leaving the isolation facilities. What if there were a way to diagnose and identify infected patients right at the airport?

A joint research team composed of Professor Jeong Wook Lee and Ph.D. candidate Chang Ha Woo, together with Professor Gyoo Yeol Jung and Dr. Sungho Jang, of the Department of Chemical Engineering at POSTECH has developed SENSR (SENsitive Splint-based one-pot isothermal RNA detection), a technology that allows anyone to easily and quickly diagnose COVID-19 based on the RNA sequence of the virus.

This technology can diagnose infections in just 30 minutes, reducing the load on any single testing location and avoiding contact with infected patients as much as possible. The biggest benefit is that a diagnostic kit can be developed within a week even if a new infectious disease other than COVID-19 appears.

The PCR molecular test currently used for COVID-19 diagnosis has very high accuracy but entails a complex preparation process to extract or refine the virus. It is not suitable for use in small farming or fishing villages, or airport or drive-thru screening clinics as it requires expensive equipment as well as skilled experts.

RNA is a nucleic acid that carries genetic information or is involved in controlling the expression of genes. The POSTECH researchers designed the test kit to produce a nucleic-acid binding reaction that fluoresces only when COVID-19 RNA is present. The virus can therefore be detected immediately, with high sensitivity and in a short time, without any preparation process, and the method is as accurate as the current PCR diagnostic.

Using this technology, the research team detected the RNA of SARS-CoV-2, the virus that causes COVID-19, in an actual patient sample in about 30 minutes. In addition, the kit detected RNAs from five other pathogenic viruses and bacteria, demonstrating its usability for pathogens beyond COVID-19.

Another great advantage of the SENSR technology is the ease of building the diagnostic device, which can be developed into a simple, portable, easy-to-use form.

If this method is introduced, it not only allows onsite diagnosis before going to the screening clinic or being hospitalized, but also allows for a more proactive response to COVID-19 by supplementing the current centralized diagnostic system.

“This method is a fast and simple diagnostic technology which can accurately analyze the RNA without having to treat a patient’s sample,” commented Professor Jeong Wook Lee. “We can better prepare for future epidemics as we can design and produce a diagnostic kit for new infectious diseases within a week”

Professor Gyoo Yeol Jung added, “The fact that pathogenic RNAs can be detected with high accuracy and sensitivity, and that it can be diagnosed on the spot, is drawing attention from academia as well as industry circles.” He explained, “We hope to contribute to our response to COVID-19 by enhancing the current testing system.”

The study, which was published in Nature Biomedical Engineering on September 18 (KST), was conducted with the support from the National Research Foundation’s C1 Gas Refinery Program and New Research Program, and by the Industry Specialist Training Program from the Korea Institute of Energy Technology Evaluation and Planning.

References: Woo, C.H., Jang, S., Shin, G. et al. Sensitive fluorescence detection of SARS-CoV-2 RNA in clinical samples via one-pot isothermal ligation and transcription. Nat Biomed Eng (2020). https://doi.org/10.1038/s41551-020-00617-5 link: https://www.nature.com/articles/s41551-020-00617-5

Provided by Pohang University Of Science and Technology

Your Smartphone Is Designed To Hack Your Brain (Psychology)

The word “hack” gets thrown around a lot these days. “Life hacks” include everything from life-changing study techniques to using a shoe-organizer to organize things besides shoes. And then there are “brain hacks”, which supposedly teach us to access powers we didn’t know our brains possessed. But there’s a more insidious form of brain-hacking — when your brain is the thing being hacked. And smartphone developers are doing it to you all the time.

“This thing is a slot machine,” says former Google Design Ethicist Tristan Harris, holding up his phone. “Every time I check my phone, I’m playing the slot machine to see, ‘What did I get?'” It’s incredibly addictive, especially since you don’t have to pay a single cent to pull the lever. And smartphone developers know that, so they’ve designed their software to tickle your rewards center.

One example? When you get “likes” on Instagram, you don’t necessarily find out when they happen. Instead, the ‘gram sometimes saves up your notifications and delivers them all in one big burst. That kind of a windfall can feel like a rush, even if what you won is essentially valueless. And it keeps you coming back for more.

It’s all about “intermittent variable rewards”, which encourage you to keep on checking your smartphone over and over because it might pay off: maybe that guy you’re fighting on Twitter has posted an asinine reply, maybe your friends have responded to the picture you uploaded, maybe something hilarious and exciting is going down and you’re the last to know about it. Whatever the reward, there’s a chance that it’s waiting for you on your phone — and there’s a chance it’s not, as well. The only way to find out is to check it, and check it, and check it, ad infinitum.
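The mechanics of such a schedule are easy to see in a toy simulation (purely illustrative; the 20 percent payoff probability is made up): the average payoff rate is steady, but the gaps between payoffs are wildly unpredictable, and it is that unpredictability that keeps you pulling the lever.

```python
# Toy model (illustrative only) of an intermittent variable reward
# schedule: each phone check pays off with a fixed probability, yet the
# gaps between payoffs vary enormously around their average.
import random

random.seed(42)
P_REWARD = 0.2        # assumed chance any single check finds something new

gaps, since_last = [], 0
for check in range(10000):
    since_last += 1
    if random.random() < P_REWARD:   # this check "paid off"
        gaps.append(since_last)
        since_last = 0

mean_gap = sum(gaps) / len(gaps)
print(f"mean gap ~ {mean_gap:.1f} checks, longest dry spell = {max(gaps)}")
```

On average a payoff arrives every five checks here, but some payoffs come back-to-back while some dry spells run far longer, which is exactly the schedule that slot machines exploit.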

Of course, your reward center isn’t the only primal heartstring your phone knows how to tug. When you upload a group picture, Facebook tries to guess who’s in it and encourages you to tag them. That makes you feel connected to your friends and family, and when you follow through on the suggestion, it draws them back in as well. Snapchat makes a game out of users’ habits by tracking how many days in a row they’ve snapped something — you gotta keep that streak going.

The only question left to answer is “why” — if you’re not gambling with money when you hit that slot machine, then what do the companies get out of your addictive use? The answer, ominously, is you. The more they can encourage users to stay logged in, to keep returning to the well, the more they can charge their advertisers. It’s like they say: “If you’re not paying, you’re the product.”

Now, we’re not saying that you have to give up your social media entirely. But it’s worthwhile to take a minute to recognize what you’re getting out of the accounts that you’ve signed up for. And once you realize that, you might figure out a better, healthier way to scratch that itch.

To Harris, the problem starts with tech companies assuming, and behaving as if, their technology is neutral, and the only solution is a redesign from the technology out. In his view, the attention economy is inherently flawed because it will inevitably lead to more and more powerful hacks meant to hijack your brain and direct it in the most profitable direction.

But there are ways to start dealing with a personal smartphone addiction at home — and they aren’t much different from breaking any other addiction. The Week provides a set of five suggestions that would be pretty useful for any narcotic:

• First, say “I don’t,” not “I can’t.” That takes some of the pressure off and reminds you that it’s not that you can’t, it’s that it’s not who you want to be.
• Next, try making your phone inaccessible. That could be as simple as leaving it in one room to charge while you stay in another. The longer it’s in your pocket, the more it preys on your mind.
• Try setting a stopping rule. That might be something like, “I don’t go past the first page of Reddit.” Voila — you’re no longer losing hours to the internet.
• The next tip is to replace your bad habits with habits you want to encourage instead. That’s as simple as picking up a book.
• And finally, be ready for pushback. Your brain doesn’t react well to losing its addictions.

And it’s important to forgive yourself for relapsing. But stick to all of these methods, and you’ll have your smartphone habit well in hand in no time.

There’s A Strange Reason Why So Many People Regain Weight After Dieting (Biology)

Anyone who has tried to lose weight and keep it off knows how difficult the task can be. It seems like it should be simple: Just exercise to burn more calories and reduce your calorie intake. But many studies have shown that this simple strategy doesn’t work very well for the vast majority of people.

A dramatic example of the challenges of maintaining weight loss comes from a recent National Institutes of Health study. The researchers followed 14 contestants who had participated in the “World’s Biggest Loser” reality show. During the 30 weeks of the show, the contestants lost an average of over 125 pounds per person. But in the six years after the show, all but one gained back most of their lost weight, despite continuing to diet and exercise.

Why is it so hard to lose weight and keep it off? Weight loss often leads to declines in our resting metabolic rate — how many calories we burn at rest, which makes it hard to keep the weight off. So why does weight loss make resting metabolism go down, and is there a way to maintain a normal resting metabolic rate after weight loss? As someone who studies musculo-skeletal physiology, I will try to answer these questions.

Activating muscles deep in the leg that help keep blood and fluid moving through our bodies is essential to maintaining resting metabolic rate when we are sitting or standing quietly. The function of these muscles, called soleus muscles, is a major research focus for us in the Clinical Science and Engineering Research Center at Binghamton University. Commonly called “secondary hearts,” these muscles pump blood back to our heart, allowing us to maintain our normal rate of metabolic activity during sedentary activities.

Resting metabolic rate (RMR) refers to all of the biochemical activity going on in your body when you are not physically active. It is this metabolic activity that keeps you alive and breathing, and very importantly, warm.

Quiet sitting at room temperature is the standard RMR reference point; this is referred to as one metabolic equivalent, or MET. A slow walk is about two MET, bicycling four MET, and jogging seven MET. While we need to move around a bit to complete the tasks of daily living, in modern life we tend not to move very much. Thus, for most people, 80 percent of the calories we expend each day are due to RMR.

When you lose weight, your RMR should fall a small amount, as you are losing some muscle tissue. But when most of the weight loss is fat, we would expect to see only a small drop in RMR, as fat is not metabolically very active. What is surprising is that relatively large drops in RMR are quite common among individuals who lose body fat through diet or exercise.

The “World’s Biggest Loser” contestants, for example, experienced a drop in their resting metabolic rate of almost 30 percent even though 80 percent of their weight loss was due to fat loss. A simple calculation shows that making up for such a large drop in RMR would require almost two hours a day of brisk walking, seven days a week, on top of a person’s normal daily activities. Most people cannot fit this activity level into their lifestyle.
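The arithmetic behind that claim can be checked with round numbers (assumed here, not taken from the study): a 30 percent drop on a typical resting metabolic rate of about 1,600 kcal/day leaves a deficit of roughly 480 kcal/day, and brisk walking at about 4 MET burns roughly 3 MET above rest.

```python
# Back-of-envelope check (assumed round numbers, not study data) of the
# "almost two hours a day of brisk walking" claim.
RMR_KCAL_PER_DAY = 1600   # assumed typical resting metabolic rate
RMR_DROP = 0.30           # ~30% drop reported for the contestants
BRISK_WALK_MET = 4.0      # assumed MET value for a brisk walk

deficit = RMR_KCAL_PER_DAY * RMR_DROP        # kcal/day to make up
one_met_per_hour = RMR_KCAL_PER_DAY / 24     # kcal burned per MET-hour
extra_burn_per_hour = (BRISK_WALK_MET - 1) * one_met_per_hour

hours_needed = deficit / extra_burn_per_hour
print(f"{hours_needed:.1f} hours of brisk walking per day")
```

With these assumptions the answer is about 2.4 hours, in line with the article’s “almost two hours a day”.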

There’s no question that eating a balanced diet and regular exercise are good for you, but from a weight management perspective, increasing your resting metabolic rate may be the more effective strategy for losing weight and maintaining that lost weight.

Metabolic activity is dependent on oxygen delivery to the tissues of the body. This occurs through blood flow. As a result, cardiac output is a primary determinant of metabolic activity.

The adult body contains about four to five liters of blood, and all of this blood should circulate throughout the body every minute or so. However, the amount of blood the heart can pump out with each beat is dependent on how much blood is returned to the heart between beats.

If the “plumbing” of our body, our veins in particular, were made of rigid pipes, and the skin of our legs were tough like that of bird legs, cardiac outflow would always equal cardiac inflow, but this is not the case. The veins in our body are quite flexible and can expand to many times their resting size, and our soft skin also allows lower-body volume expansion.

As a result, when we are sitting quietly, blood and interstitial fluid (the fluid which surrounds all the cells in our body) pools in the lower parts of the body. This pooling significantly reduces the amount of fluid returning to the heart, and correspondingly, reduces how much fluid the heart can pump out during each contraction. This reduces cardiac output, which dictates a reduced RMR.

Our research has shown that for typical middle-aged women, cardiac output will drop about 20 percent when sitting quietly. For individuals who have recently lost weight, the fluid pooling situation can be greater because their skin is now much looser, providing much more space for fluids to pool. This is especially the case for people experiencing rapid weight loss, as their skin has not had time to contract.

For young, healthy individuals, this pooling of fluid when sitting is limited because specialized muscles in the calves of the legs — the soleus muscles — pump blood and interstitial fluid back up to heart. This is why soleus muscles are often referred to as our “secondary hearts.” However, our modern, sedentary lifestyles mean that our secondary hearts tend to weaken, which permits excessive fluid pooling into the lower body. This situation is now commonly referred to as “sitting disease.”

Moreover, excessive fluid pooling can create a vicious cycle. Fluid pooling reduces RMR, and reduced RMR means less body heat generation, which results in a further drop in body temperature; people with low RMR often have persistently cold hands and feet. As metabolic activity is strongly dependent on tissue temperature, RMR will therefore fall even more. Just a 1 degree Fahrenheit drop in body temperature can produce a 7 percent drop in RMR.
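A toy feedback loop makes the compounding concrete. The 7-percent-per-degree figure is from the article; the feedback strength (how much further temperature falls per unit of fractional RMR loss) is a purely hypothetical number chosen for illustration:

```python
# Toy sketch of the "vicious cycle" (not a physiological model): the
# ~7% RMR loss per 1 F of body-temperature drop is from the article;
# the feedback strength below is a made-up illustrative value.
RMR_LOSS_PER_DEG_F = 0.07   # article: ~7% RMR drop per 1 F (multiplicative)
DEG_F_PER_RMR_LOSS = 3.0    # hypothetical: degrees F lost per unit RMR loss

temp_drop = 1.0             # start from an initial 1 F temperature drop
for _ in range(50):         # iterate the loop until it settles
    rmr_loss = 1 - (1 - RMR_LOSS_PER_DEG_F) ** temp_drop
    temp_drop = 1.0 + DEG_F_PER_RMR_LOSS * rmr_loss

print(f"temperature drop settles at {temp_drop:.2f} F, "
      f"RMR loss at {rmr_loss:.1%}")
```

Under these assumptions the loop amplifies but settles rather than running away: an initial 1 °F drop compounds to roughly 1.26 °F and close to a 9 percent RMR loss.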

One logical, though expensive, approach to reduce fluid pooling after weight loss would be to undergo cosmetic surgery to remove excess skin to eliminate the fluid pooling space created by the weight loss. Indeed, a recent study has confirmed that people who had body contouring surgery after losing large amounts of weight due to gastric banding surgery had better long-term control of their body mass index than people who did not have body contouring surgery.

A much more convenient approach to maintaining RMR during and after weight loss is to train up your secondary hearts, or soleus muscles. The soleus muscles are deep postural muscles and so require training of long duration and low intensity.

Tai chi, for instance, is an effective approach to accomplish this. However, we’ve observed that many people find the exercises onerous.

Over the last several years, investigators in the Clinical Science and Engineering Research Lab at Binghamton University have worked to develop a more practical approach for retraining the soleus muscles. We have created a device, which is now commercially available through a university spin-off company, that uses a specific mechanical vibration to activate receptors on the sole of the foot, which in turn makes the soleus muscles undergo a reflex contraction.

In a study of 54 women between the ages of 18 and 65 years, we found that 24 had secondary heart insufficiency leading to excessive fluid pooling in the legs, and for those women, soleus muscle stimulation led to a reversal of this fluid pooling. The ability to prevent or reverse fluid pooling, allowing individuals to maintain cardiac output, should, in theory, help these individuals maintain RMR while performing sedentary activities.

This premise has been confirmed, in part, by recent studies undertaken by our spin-off venture. These unpublished studies show that by reversing fluid pooling, cardiac output can be raised back to normal levels. Study results also indicate that by raising cardiac output back to normal resting levels, RMR returns to normal levels while individuals are sitting quietly. While these data are preliminary, a larger clinical trial is currently underway.