Dark Matter Heats Up In Dwarf Galaxies (Cosmology / Astronomy)

In their paper, Read and colleagues showed that gravitational potential fluctuations driven by bursty star formation can kinematically ‘heat up’ dark matter at the centres of dwarf galaxies. They estimated pre-infall halo masses for their sample of dwarfs, using HI rotation curve measurements for the gas-rich dwarf irregular galaxies (dIrrs) and abundance matching for the gas-poor dwarf spheroidal galaxies (dSphs), and showed that ρDM(150 pc) as a function of M200 is in good agreement with models in which DM is kinematically ‘heated up’ by bursty star formation.

Dark Matter © shutterstock

The standard Λ Cold Dark Matter (ΛCDM) cosmological model gives a remarkable description of the growth of structure in the Universe on large scales. Yet, on smaller scales inside the dark matter halos of galaxies, there have been long-standing tensions. The oldest of these is the ‘cusp-core’ problem. Pure dark matter (DM) structure formation simulations in ΛCDM predict a universal DM halo profile that has a dense ‘cusp’ at the centre, with inner density ρDM ∝ r⁻¹. By contrast, observations of gas-rich dwarf galaxy rotation curves have long favoured DM ‘cores’, with ρDM ∼ constant.
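The cusp-core distinction can be made concrete through the inner logarithmic slope of the density profile, d ln ρ / d ln r: roughly −1 for a cusp and near 0 for a core. A minimal numerical sketch (illustrative profiles with arbitrary normalisations and scale radii, not fits to any real galaxy):

```python
import numpy as np

def rho_nfw(r, rho_s=1.0, r_s=1.0):
    """NFW-like profile: a dense central 'cusp', rho ~ r^-1 at small radii."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def rho_cored(r, rho_0=1.0, r_c=0.5):
    """Pseudo-isothermal profile: a 'core' with rho ~ constant at small radii."""
    return rho_0 / (1.0 + (r / r_c) ** 2)

def inner_log_slope(profile, r1=0.01, r2=0.02):
    """Finite-difference estimate of d(ln rho)/d(ln r) near the centre."""
    return (np.log(profile(r2)) - np.log(profile(r1))) / np.log(r2 / r1)

print(inner_log_slope(rho_nfw))    # close to -1: a cusp
print(inner_log_slope(rho_cored))  # close to 0: a core
```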

The cusp-core problem has generated substantial interest over the past two decades because it may point to physics beyond the collisionless ‘Cold Dark Matter’ (CDM) typically assumed to date. Spergel & Steinhardt were the first to suggest that ‘Self-Interacting Dark Matter’ (SIDM) – which invokes a new force acting purely in the dark sector – could transform a dense cusp to a core through energy transfer between the DM particles. Warm Dark Matter (WDM) has also been proposed as a solution to the cusp-core problem. Other solutions include ‘fuzzy’ DM, ‘fluid’ DM and ‘wave-like’ DM.

However, there is a more prosaic explanation for the cusp-core problem. If gas is slowly accreted onto a dwarf galaxy and then suddenly removed (for example by stellar winds or supernova feedback), this causes the DM halo to expand, irreversibly lowering its central density. In 2002, Gnedin & Zhao showed that, for reasonable gas fractions and collapse factors, the overall effect of this ‘DM heating’ is small. However, if the effect repeats over several cycles of star formation, it accumulates, leading eventually to complete DM core formation. Indeed, recent numerical simulations of dwarf galaxies that resolve the impact of individual supernovae on the interstellar medium find that the gas mass within the projected half-light radius of the stars, R1/2, naturally rises and falls on a timescale comparable to the local dynamical time, transforming an initial DM cusp to a core. Such simulations have already made several testable predictions. In 2013, Teyssier et al. showed that the gas flows that transform DM cusps to cores lead to a bursty star formation history, with a peak-to-trough ratio of 5-10 and a duty cycle comparable to the local dynamical time. Furthermore, the stars are dynamically ‘heated’ similarly to the DM, leading to a stellar velocity dispersion that approaches the local rotational velocity of the stars (v/σ ∼ 1) inside R1/2. Both of these predictions are supported by observations of dwarf galaxies. Further evidence for ‘DM heating’ comes from the observed age gradients in dwarfs.

While there is strong evidence that dwarf galaxies have bursty star formation histories, this is only circumstantial evidence for DM heating. The real ‘smoking gun’ for DM cusp-core transformations lies in another key prediction from recent numerical models: DM core formation requires several cycles of gas inflow and outflow. Thus, at fixed halo mass, galaxies that have formed more stars (i.e. that have undergone more gas inflow-outflow cycles) will have a lower central DM density. By contrast, solutions to the cusp-core problem that invoke exotic DM predict no relationship between the central DM densities of dwarfs and their star formation histories (SFHs).

Whether or not a dwarf will form a DM core depends primarily on the number and amplitude of gas inflow-outflow cycles, and on the amount of DM that needs to be evacuated from the centre of the dwarf to form the core. This can be posed as an energy argument, whereby the total energy available to move gas around depends on the total stellar mass formed, M∗, while the energy required to unbind the DM cusp depends on the DM halo mass, M200. Thus, whether or not a DM core will form in a given dwarf galaxy depends primarily on its stellar mass to halo mass ratio, M∗/M200. However, since M200 is challenging to extrapolate from the data, in this paper Read and colleagues also consider a proxy for the ratio M∗/M200: the star formation ‘truncation time’, ttrunc. They define this to be the time when the dwarf’s star formation rate (SFR) fell by a factor of two from its peak value. This can be used as a proxy for M∗/M200 so long as the SFR is approximately constant (as is the case for the sample of dwarfs considered in this paper). In this case, dwarfs with ttrunc → 0 Gyrs have M∗/M200 → 0, while those with ttrunc → 13.8 Gyrs (i.e. the age of the Universe) have formed stars for as long as possible and have, therefore, maximised both M∗/M200 and their ability to produce a DM core. Unlike M200, however, ttrunc has the advantage that it is readily estimated from the dwarf’s star formation history.
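This definition of ttrunc lends itself to a simple numerical sketch. The star formation histories below are hypothetical, chosen only to illustrate the two limiting cases described above:

```python
import numpy as np

def truncation_time(t, sfr):
    """t_trunc: the time at which the SFR falls to half its peak value.
    t is in Gyr since the Big Bang; if the SFR never drops below half-peak,
    the galaxy is still forming stars and t_trunc is the age of the Universe."""
    half_peak = 0.5 * np.max(sfr)
    i_peak = np.argmax(sfr)
    below = (sfr < half_peak) & (np.arange(len(t)) > i_peak)
    return t[np.argmax(below)] if below.any() else t[-1]

t = np.linspace(0.0, 13.8, 139)                 # Gyr
sfh_extended = np.ones_like(t)                  # roughly constant SFR to today
sfh_truncated = np.where(t < 2.0, 1.0, 0.05)    # star formation shut down early

print(truncation_time(t, sfh_extended))   # 13.8: maximal M*/M200
print(truncation_time(t, sfh_truncated))  # ~2: minimal M*/M200
```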

In their paper, Read and colleagues set out to test the above key prediction of DM heating models: that dwarfs with ‘extended star formation’ (i.e. ttrunc → 13.8 Gyrs and maximal M∗/M200) have DM cores, while those with ‘truncated star formation’ (i.e. ttrunc → 0 Gyrs and minimal M∗/M200) have DM cusps. To achieve this, they estimated the central DM density, M∗, ttrunc and M200 for a sample of nearby dwarf galaxies with a wide range of star formation histories (SFHs). Their sample includes gas-poor dwarf spheroidal galaxies (dSphs) whose star formation ceased shortly after the beginning of the Universe, dSphs with extended star formation that shut down only very recently, and gas-rich dwarf irregular galaxies (dIrrs) that are still forming stars today. This requires them to accurately infer the DM distribution in both gas-rich and gas-poor galaxies. For the former, they used HI rotation curves as in their previous paper; for the latter, they used line-of-sight stellar kinematics. However, with only line-of-sight velocities, there is a well-known degeneracy between the radial density profile (which they would like to measure) and the velocity anisotropy of the dwarf.

In two earlier papers (Read & Steger 2017; Read et al. 2018), Read introduced a new mass modelling tool – GravSphere – that breaks this degeneracy by using ‘Virial Shape Parameters’ (VSPs). They used a large suite of mock data to demonstrate that, with ∼ 500 radial velocities, GravSphere is able to correctly infer the dark matter density profile over the radial range 0.5 < r/R1/2 < 2, within its 95% confidence intervals. In the present paper, they use GravSphere to infer the inner DM density of eight Milky Way dSphs and eight dwarf irregular (dIrr) galaxies with a wide range of star formation histories. Their key findings are as follows:

• For all galaxies, they estimated the dark matter density at a common radius of 150 pc, ρDM(150 pc), and found that their sample of dwarfs falls into two distinct classes. Galaxies with only old stars (> 6 Gyrs old) had central DM densities ρDM(150 pc) > 10^8 M⊙ kpc⁻³, consistent with DM cusps; those with star formation until at least 3 Gyrs ago had ρDM(150 pc) < 10^8 M⊙ kpc⁻³, consistent with DM cores (Figure 1).

Figure 1. The inner DM density of their sample of dwarfs, ρDM(150 pc), as a function of their stellar masses, M∗. The blue points mark those dwarfs that stopped forming stars ttrunc < 3 Gyrs ago; the black points those with ttrunc > 6 Gyrs; and the purple points those with 3 < ttrunc/Gyrs < 6. The square symbols denote dIrr galaxies, whose central densities were determined from their HI rotation curves; the circle symbols denote dSph galaxies, whose central densities were determined from their stellar kinematics. Notice that dwarfs with extended star formation (blue) have ρDM(150 pc) < 10^8 M⊙ kpc⁻³, while those with only old stars (black) have ρDM(150 pc) > 10^8 M⊙ kpc⁻³.

• They estimated pre-infall halo masses for their sample of dwarfs, using HI rotation curve measurements for the dIrr sample and abundance matching for the dSph sample. With this, they showed that their ρDM(150 pc) as a function of M200 is in good agreement with models in which DM is kinematically ‘heated up’ by bursty star formation. The dwarfs with only old-age stars lay along the track predicted by the NFW profile in ΛCDM, consistent with having undergone no measurable DM heating. By contrast, those with extended star formation lay along the track predicted by the coreNFW profile from Read et al. (2016a), consistent with maximal DM heating (Figure 2, left panel).

Figure 2: Left: ρDM(150 pc) as a function of pre-infall halo mass, M200, as extrapolated from HI rotation curves (for the dIrrs) and abundance-matching (for the dSphs). The grey band marks the inner DM density of ΛCDM halos assuming no cusp-core transformations take place, where the width of the band corresponds to the 1σ scatter in DM halo concentrations. The blue band marks the same, but for the coreNFW profile from Read et al. (2016a), assuming maximal core formation. Thus, these two bands bracket the extreme cases of no cusp-core transformation and complete cusp-core transformation in ΛCDM. Notice that dwarfs with extended star formation (blue) lie along the blue track, consistent with having DM cores, while those whose star formation shut down long ago (black) lie along the grey track, consistent with having DM cusps. Right: ρDM(150 pc) as a function of the stellar mass to halo mass ratio, M∗/M200. Notice that dwarfs that have formed more stars as a fraction of their pre-infall halo mass have a lower central dark matter density. This is consistent with models in which DM is ‘heated up’ by repeated gas inflows and outflows driven by stellar feedback. The vertical dashed line marks the approximate M∗/M200 ratio below which recent models predict that DM cusp-core transformations should become inefficient.

• They found that ρDM(150 pc) for their sample of dwarfs is anti-correlated with their stellar mass to pre-infall halo mass ratio, M∗/M200 (Figure 2, right panel). Using abundance matching to infer pre-infall halo masses, M200, they showed that this dichotomy is in excellent agreement with models in which dark matter is heated up by bursty star formation: ρDM(150 pc) steadily decreases with increasing M∗/M200.

Their results suggested that, to leading order, dark matter is a cold, collisionless fluid that can be kinematically ‘heated up’ and moved around.

Reference: J. I. Read, M. G. Walker, P. Steger, “Dark matter heats up in dwarf galaxies”, Monthly Notices of the Royal Astronomical Society, Volume 484, Issue 1, March 2019, Pages 1401–1420, https://doi.org/10.1093/mnras/sty3404 https://academic.oup.com/mnras/article/484/1/1401/5265085

Copyright of this article belongs to our author S. Aman. It may be reused only with proper credit given either to him or to us.

See Live Cells With 7 Times Greater Sensitivity Using New Microscopy Technique (Physics)

Upgrade to quantitative phase imaging can increase image clarity by expanding dynamic range.

Experts in optical physics have developed a new way to see inside living cells in greater detail using existing microscopy technology and without needing to add stains or fluorescent dyes.

Researchers at the University of Tokyo have found a way to enhance the sensitivity of existing quantitative phase imaging so that all structures inside living cells can be seen simultaneously, from tiny particles to large structures. This artistic representation of the technique shows pulses of sculpted light (green, top) traveling through a cell (center), and exiting (bottom) where changes in the light waves can be analyzed and converted into a more detailed image. © s-graphics.co.jp, CC BY-NC-ND

Since individual cells are almost transparent, microscope cameras must detect extremely subtle differences in the light passing through parts of the cell. Those differences are known as the phase of the light. Camera image sensors are limited in the range of light phase differences they can detect, referred to as the dynamic range.

“To see greater detail using the same image sensor, we must expand the dynamic range so that we can detect smaller phase changes of light,” said Associate Professor Takuro Ideguchi from the University of Tokyo Institute for Photon Science and Technology.

The research team developed a technique to take two exposures to measure large and small changes in light phase separately and then seamlessly connect them to create a highly detailed final image. They named their method adaptive dynamic range shift quantitative phase imaging (ADRIFT-QPI) and recently published their results in Light: Science & Applications.

Images of silica beads taken using conventional quantitative phase imaging (top) and a clearer image produced using a new ADRIFT-QPI microscopy method (bottom) developed by a research team at the University of Tokyo. The photos on the left are images of the optical phase and images on the right show the optical phase change due to the mid-infrared (molecular specific) light absorption by the silica beads. In this proof-of-concept demonstration, the researchers calculated that they achieved approximately 7 times greater sensitivity with ADRIFT-QPI than with conventional QPI. Image by Toda et al., CC-BY https://creativecommons.org/licenses/by/4.0/

“Our ADRIFT-QPI method needs no special laser, no special microscope or image sensors; we can use live cells, we don’t need any stains or fluorescence, and there is very little chance of phototoxicity,” said Ideguchi.

Phototoxicity refers to killing cells with light, which can become a problem with some other imaging techniques, such as fluorescence imaging.

Quantitative phase imaging sends a pulse of a flat sheet of light towards the cell, then measures the phase shift of the light waves after they pass through the cell. Computer analysis then reconstructs an image of the major structures inside the cell. Ideguchi and his collaborators have previously pioneered other methods to enhance quantitative phase microscopy.

Quantitative phase imaging is a powerful tool for examining individual cells because it allows researchers to make detailed measurements, like tracking the growth rate of a cell based on the shift in light waves. However, the quantitative aspect of the technique has low sensitivity because of the low saturation capacity of the image sensor, so tracking nanosized particles in and around cells is not possible with a conventional approach.

A standard image (top) taken using conventional quantitative phase imaging and a clearer image (bottom) produced using a new ADRIFT-QPI microscopy method developed by a research team at the University of Tokyo. The photos on the left are images of the optical phase and images on the right show the optical phase change due to the mid-infrared (molecular specific) light absorption mainly by protein. Blue arrow points towards the edge of the nucleus, white arrow points towards the nucleoli (a substructure inside the nucleus), and green arrows point towards other large particles. Image by Toda et al., CC-BY https://creativecommons.org/licenses/by/4.0/

The new ADRIFT-QPI method has overcome the dynamic range limitation of quantitative phase imaging. During ADRIFT-QPI, the camera takes two exposures and produces a final image that has seven times greater sensitivity than traditional quantitative phase microscopy images.

The first exposure is produced with conventional quantitative phase imaging – a flat sheet of light is pulsed towards the sample and the phase shifts of the light are measured after it passes through the sample. A computer image analysis program develops an image of the sample based on the first exposure, then rapidly designs a sculpted wavefront of light that mirrors that image of the sample. A separate component called a wavefront shaping device then generates this “sculpture of light” with higher intensity light for stronger illumination and pulses it towards the sample for a second exposure.

If the first exposure produced an image that was a perfect representation of the sample, the custom-sculpted light waves of the second exposure would enter the sample at different phases, pass through the sample, then emerge as a flat sheet of light, causing the camera to see nothing but a dark image.

“This is the interesting thing: We kind of erase the sample’s image. We want to see almost nothing. We cancel out the large structures so that we can see the smaller ones in great detail,” Ideguchi explained.

In reality, the first exposure is imperfect, so the sculpted light waves emerge with subtle phase deviations.

The second exposure reveals tiny light phase differences that were “washed out” by larger differences in the first exposure. These remaining tiny light phase differences can be measured with increased sensitivity due to the stronger illumination used in the second exposure.

Additional computer analysis reconstructs a final image of the sample with an expanded dynamic range from the two measurement results. In proof-of-concept demonstrations, researchers estimate the ADRIFT-QPI produces images with seven times greater sensitivity than conventional quantitative phase imaging.
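The two-exposure logic above can be caricatured in a toy 1-D simulation. This is purely illustrative: the sensor's limited dynamic range is modelled here as coarse quantization of the measured phase, standing in for the real interferometric detection and wavefront-shaping optics:

```python
import numpy as np

rng = np.random.default_rng(0)

# True sample phase: a large smooth structure plus tiny 'nanoscale' detail
x = np.linspace(0.0, 1.0, 512)
phase_true = 2.0 * np.sin(2 * np.pi * x) + 0.002 * rng.standard_normal(512)

def measure(phase, step):
    """Toy sensor: quantizes the phase it sees with resolution `step`."""
    return np.round(phase / step) * step

# Exposure 1: conventional QPI must span the full phase range -> coarse step
est1 = measure(phase_true, step=0.01)

# Exposure 2: the sculpted illumination cancels est1, so only a small residual
# reaches the sensor; stronger light allows a finer effective step
residual = phase_true - est1
est2 = measure(residual, step=0.001)

final = est1 + est2  # stitch the two exposures into one expanded-range image

print(np.abs(phase_true - est1).max())   # error of the single-exposure image
print(np.abs(phase_true - final).max())  # smaller error after the second pass
```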

Ideguchi says that the true benefit of ADRIFT-QPI is its ability to see tiny particles in the context of the whole living cell without needing any labels or stains.

“For example, small signals from nanoscale particles like viruses or particles moving around inside and outside a cell could be detected, which allows for simultaneous observation of their behavior and the cell’s state,” said Ideguchi.

Reference: K. Toda, M. Tamamitsu, T. Ideguchi. November 2020. Adaptive dynamic range shift (ADRIFT) quantitative phase imaging. Light: Science & Applications. DOI: 10.1038/s41377-020-00435-z https://doi.org/10.1038/s41377-020-00435-z

Provided by University of Tokyo

Stretching Diamond For Next-generation Microelectronics (Physics)

Diamond is the hardest material in nature. But, perhaps unexpectedly, it also has great potential as an excellent electronic material. A joint research team led by City University of Hong Kong (CityU) has demonstrated for the first time the large, uniform tensile elastic straining of microfabricated diamond arrays through a nanomechanical approach. Their findings have shown the potential of strained diamonds as prime candidates for advanced functional devices in microelectronics, photonics, and quantum information technologies.

Stretching of microfabricated diamonds paves the way for applications in next-generation microelectronics. (credit: Dang Chaoqun / City University of Hong Kong)

The research was co-led by Dr Lu Yang, Associate Professor in the Department of Mechanical Engineering (MNE) at CityU and researchers from Massachusetts Institute of Technology (MIT) and Harbin Institute of Technology (HIT). Their findings have been recently published in the prestigious scientific journal Science, titled “Achieving large uniform tensile elasticity in microfabricated diamond”.

“This is the first time showing the extremely large, uniform elasticity of diamond by tensile experiments. Our findings demonstrate the possibility of developing electronic devices through ‘deep elastic strain engineering’ of microfabricated diamond structures,” said Dr Lu.

Diamond: “Mount Everest” of electronic materials

Well known for its hardness, diamond is usually put to industrial use in cutting, drilling, or grinding. But diamond is also considered a high-performance electronic and photonic material due to its ultra-high thermal conductivity, exceptional electric charge carrier mobility, high breakdown strength and ultra-wide bandgap. The bandgap is a key property of a semiconductor, and a wide bandgap allows the operation of high-power or high-frequency devices. “That’s why diamond can be considered as ‘Mount Everest’ of electronic materials, possessing all these excellent properties,” Dr Lu said.

However, the large bandgap and tight crystal structure of diamond make it difficult to “dope” – a common way to modulate a semiconductor’s electronic properties during production – hampering diamond’s industrial application in electronic and optoelectronic devices. A potential alternative is “strain engineering”: applying a very large lattice strain to change the electronic band structure and associated functional properties. But this was considered “impossible” for diamond due to its extremely high hardness.

Then in 2018, Dr Lu and his collaborators discovered that, surprisingly, nanoscale diamond can be elastically bent with unexpectedly large local strain. This discovery suggested that changing diamond’s physical properties through elastic strain engineering could be possible. Building on this, the latest study shows how this phenomenon can be utilized to develop functional diamond devices.

Uniform tensile straining across the sample

Fig 2: Illustration of tensile straining of microfabricated diamond bridge samples. (Credit: Dang Chaoqun / City University of Hong Kong)

The team first microfabricated single-crystalline diamond samples from a solid diamond single crystal. The samples were bridge-shaped – about one micrometre long and 300 nanometres wide, with both ends wider for gripping (see image: tensile straining of diamond bridges). The diamond bridges were then uniaxially stretched in a well-controlled manner within an electron microscope. Under cycles of continuous and controllable loading-unloading in quantitative tensile tests, the diamond bridges demonstrated a highly uniform, large elastic deformation of about 7.5% strain across the whole gauge section of the specimen, rather than deforming at a localized area in bending. And they recovered their original shape after unloading.

By further optimizing the sample geometry using the American Society for Testing and Materials (ASTM) standard, they achieved a maximum uniform tensile strain of up to 9.7%, which even surpassed the maximum local value in the 2018 study, and was close to the theoretical elastic limit of diamond. More importantly, to demonstrate the strained diamond device concept, the team also realized elastic straining of microfabricated diamond arrays.

Tuning the bandgap by elastic strains

The team then performed density functional theory (DFT) calculations to estimate the impact of elastic straining from 0 to 12% on the diamond’s electronic properties. The simulation results indicated that the bandgap of diamond generally decreases as the tensile strain increases, dropping from about 5 eV at zero strain to about 3 eV at around 9% strain along a specific crystalline orientation. The team performed an electron energy-loss spectroscopy analysis on a pre-strained diamond sample and verified this decreasing bandgap trend.
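For a back-of-the-envelope sense of scale, the reported endpoints can be linearly interpolated. This is only an illustration of the quoted numbers (roughly 5 eV at zero strain and 3 eV at ~9% strain along one orientation); the actual DFT curves are orientation-dependent and not exactly linear:

```python
def bandgap_ev(strain):
    """Rough linear estimate of diamond's bandgap (eV) under tensile strain,
    interpolating the ~5 eV (0%) and ~3 eV (9%) endpoints from the article."""
    e0, e9, s_max = 5.0, 3.0, 0.09
    s = min(max(strain, 0.0), s_max)  # clamp to the range the article covers
    return e0 + (e9 - e0) * s / s_max

for s in (0.0, 0.045, 0.09):
    print(f"strain {s:.1%}: ~{bandgap_ev(s):.1f} eV")
```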

Their calculation results also showed that, interestingly, the bandgap could change from indirect to direct at tensile strains larger than 9% along another crystalline orientation. A direct bandgap means an electron can directly emit a photon, enabling many optoelectronic applications with higher efficiency.

These findings are an early step in achieving deep elastic strain engineering of microfabricated diamonds. Through this nanomechanical approach, the team demonstrated that diamond’s band structure can be changed, and more importantly, that these changes can be continuous and reversible, allowing different applications, from micro/nanoelectromechanical systems (MEMS/NEMS), strain-engineered transistors, to novel optoelectronic and quantum technologies. “I believe a new era for diamond is ahead of us,” said Dr Lu.

Members of the CityU research team:(front row from the left) Dr Alice Hu, Dr Lu Yang, PhD graduate Dang Chaoqun, (back row from the left), PhD student Lin Weitong and Visiting Assistant Professor Dr Fan Rong. © City U

Dr Lu, Dr Alice Hu, who is also from MNE at CityU, Professor Li Ju from MIT and Professor Zhu Jiaqi from HIT are the corresponding authors of the paper. The co-first authors are Dang Chaoqun, PhD graduate, and Dr Chou Jyh-Pin, former postdoctoral fellow from MNE at CityU, Dr Dai Bing from HIT, and Chou Chang-Ti from National Chiao Tung University. Dr Fan Rong and Lin Weitong from CityU are also part of the team. Other collaborating researchers are from the Lawrence Berkeley National Laboratory, University of California, Berkeley, and Southern University of Science and Technology.

The research at CityU was funded by the Hong Kong Research Grants Council and the National Natural Science Foundation of China.

Reference: Chaoqun Dang, et al. Achieving large uniform tensile elasticity in microfabricated diamond. Science, Jan 1st, 2021 DOI: 10.1126/science.abc4174

Provided by CityU

Controlling The Nanoscale Structure of Membranes is Key For Clean Water (Engineering)

A desalination membrane acts as a filter for salty water: push the water through the membrane, get clean water suitable for agriculture, energy production and even drinking. The process seems simple enough, but it contains complex intricacies that have baffled scientists for decades — until now.

Fig. 1 Quantifying the 3D nanoscale inhomogeneity of PA RO membranes through the combination of energy-filtered TEM and electron tomography. (A and B) 3D isosurfaces of the PA1 (A) and PA4 (B) membranes. (C to J) 12-Å thick xz plane with colorized voxels of PA1 (C) and PA4 (D) corresponding to colorized gradients under the density [(E) and (F)], sFFV [(G) and (H)], and diffusion coefficient [(I) and (J)] of water histograms for the PA1 and PA4 membranes, respectively. (A), (C), (E), (G), and (I) show data for PA1; (B), (D), (F), (H), and (J) show data for PA4. All studied membranes show internal nanoscale inhomogeneity. Length axis arrows and scale bars are 200 nm. Histograms were obtained from reconstructions of PA films with >10^8 voxels.

Researchers from Penn State, The University of Texas at Austin, Iowa State University, Dow Chemical Company and DuPont Water Solutions published a key finding in understanding how membranes actually filter minerals from water, online today (Dec. 31) in Science. The article will be featured on the print edition’s cover, to be issued tomorrow (Jan. 1).

“Despite their use for many years, there is much we don’t know about how water filtration membranes work,” said Enrique Gomez, professor of chemical engineering and materials science and engineering at Penn State, who led the research. “We found that how you control the density distribution of the membrane itself at the nanoscale is really important for water-production performance.”

Co-led by Manish Kumar, associate professor in the Department of Civil, Architectural and Environmental Engineering at UT Austin, the team used multimodal electron microscopy, which combines atomic-scale imaging with techniques that reveal chemical composition, to determine that desalination membranes are inconsistent in density and mass. The researchers mapped the density variations in polymer film in three dimensions with a spatial resolution of approximately one nanometer – less than half the diameter of a DNA strand. According to Gomez, this technological advancement was key in understanding the role of density in membranes.

“You can see how some places are more or less dense in a coffee filter just by your eye,” Gomez said. “In filtration membranes, it looks even, but it’s not at the nanoscale, and how you control that mass distribution is really important for water-filtration performance.”

This was a surprise, Gomez and Kumar said, as it was previously thought that the thicker the membrane, the lower the water production. Filmtec, now a part of DuPont Water Solutions, which makes numerous desalination products, partnered with the researchers and funded the project because their in-house scientists found that thicker membranes were actually proving to be more permeable.

The researchers found that the thickness does not matter as much as avoiding highly dense nanoscale regions, or “dead zones.” In a sense, a more consistent density throughout the membrane is more important than thickness for maximizing water production, according to Gomez.
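The contrast between uniform and “patchy” membranes can be illustrated with a toy voxel analysis. The density maps below are synthetic stand-ins for the tomographic reconstructions, and the dead-zone threshold is an arbitrary illustrative cutoff, not a value from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 3D density maps (arbitrary units): same mean density, but one
# membrane is far more uniform at the nanoscale than the other
uniform_map = rng.normal(loc=1.0, scale=0.05, size=(64, 64, 64))
patchy_map = rng.normal(loc=1.0, scale=0.25, size=(64, 64, 64))

def inhomogeneity(density):
    """Coefficient of variation: a simple measure of density non-uniformity."""
    return density.std() / density.mean()

def dead_zone_fraction(density, threshold=1.5):
    """Fraction of voxels denser than `threshold` (illustrative cutoff)."""
    return float((density > threshold).mean())

print(inhomogeneity(uniform_map), dead_zone_fraction(uniform_map))
print(inhomogeneity(patchy_map), dead_zone_fraction(patchy_map))
```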

This understanding could increase membrane efficiency by 30% to 40%, according to the researchers, resulting in more water filtered with less energy — a potential cost-saving update to current desalination processes.

“Reverse osmosis membranes are so widely used for cleaning water, but there’s still a lot we don’t know about them,” Kumar said. “We couldn’t really say how water moves through them, so all the improvements over the last 40 years have essentially been done in the dark.”

Reverse osmosis membranes work by applying pressure on one side: the minerals stay behind, while the water passes through. While more efficient than non-membrane desalination processes, this still takes an immense amount of energy, the researchers said, but improving the efficiency of the membranes could reduce that burden.

“Freshwater management is becoming a crucial challenge throughout the world,” Gomez said. “Shortages, droughts — with increasing severe weather patterns, it is expected this problem will become even more significant. It’s critically important to have clean water available, especially in low resource areas.”

The team continues to study the structure of the membranes, as well as the chemical reactions involved in the desalination process. They are also examining how to develop the best membranes for specific materials, such as sustainable yet tough membranes that can prevent bacterial growth.

“We’re continuing to push our techniques with more high-performance materials with the goal of elucidating the crucial factors of efficient filtration,” Gomez said.

Other contributors include first author Tyler E. Culp, Kaitlyn P. Brickey, Michael Geitner and Andrew Zydney, all of whom are affiliated with the Penn State Department of Chemical Engineering; Biswajit Khara and Baskar Ganapathysubramanian, both with the Department of Mechanical Engineering at Iowa State University; Tawanda J. Zimudzi of the Materials Research Institute (MRI) at Penn State; Jeffrey D. Wilbur and Steve Jons, both with DuPont Water Solutions; and Abhishek Roy and Mou Paul, both with Dow Chemical Company. Gomez is also affiliated with MRI. The microscopic work was conducted on electron microscopes in the Materials Characterization Lab in MRI. DuPont and the National Science Foundation funded the research.

Reference: Tyler E. Culp, Biswajit Khara, Kaitlyn P. Brickey, Michael Geitner, Tawanda J. Zimudzi, Jeffrey D. Wilbur, Steven D. Jons, Abhishek Roy, Mou Paul, Baskar, “Nanoscale control of internal inhomogeneity enhances water transport in desalination membranes”, Science 01 Jan 2021: Vol. 371, Issue 6524, pp. 72-75 DOI: 10.1126/science.abb8518 https://science.sciencemag.org/content/371/6524/72

Provided by Penn State

Traditional Ghanaian Medicines Show Promise Against Tropical Diseases (Medicine)

The discovery of new drugs is vital to achieving the eradication of neglected tropical diseases (NTDs) in Africa and around the world. Now, researchers reporting in PLOS Neglected Tropical Diseases have identified traditional Ghanaian medicines which work in the lab against schistosomiasis, onchocerciasis and lymphatic filariasis, three diseases endemic to Ghana.

Chemical and Biological Investigation of Traditional medicines for Activity against NTDs.
Photo of Schistosomiasis and Onchocerciasis sourced from Centers for Disease Control and Prevention DPDx – Laboratory Identification of Parasites of Public Health Concern under a CC-BY license (available at https://www.cdc.gov/dpdx/schistosomiasis/images/7/S_mansoni_adult_Lammie1.jpg and https://www.cdc.gov/dpdx/onchocerciasis/modules/O_volvulus_LifeCycle.gif) © Osei-Safo 2020 (CC-BY 2.0)

The major intervention for NTDs in Ghana is currently mass drug administration of a few repeatedly recycled drugs, which can lead to reduced efficacy and the emergence of drug resistance. Chronic infections of schistosomiasis, onchocerciasis and lymphatic filariasis can be fatal. Schistosomiasis is caused by the blood flukes Schistosoma haematobium and S. mansoni. Onchocerciasis, or river blindness, is caused by the parasitic worm Onchocerca volvulus. Lymphatic filariasis, also called elephantiasis, is caused by the parasitic filarial worm Wuchereria bancrofti.

In the new work, Dorcas Osei-Safo of the University of Ghana and colleagues obtained 15 traditional medicines used for treating NTDs in local communities from the Ghana Federation of Traditional Medicines Practitioners Association. The medicines were available as aqueous herbal preparations or dried powdered herbs. In all cases, crude extracts were prepared from the herbs and screened in the laboratory for their ability to treat various NTDs.

Two extracts, NTD-B4-DCM and NTD-B7-DCM, displayed high activity against S. mansoni adult worms, decreasing the movement of the worms by 78.4% and 84.3% respectively. A different extract, NTD-B2-DCM, was the most active against adult Onchocerca ochengi worms, killing 100% of males and more than 60% of females. Eight of 26 crude extracts tested, including NTD-B4-DCM and NTD-B2-DCM, also exhibited good activity against trypanosomes, parasites that cause other human diseases but were not the original targets of the traditional medicines.

“By embracing indigenous knowledge systems which have evolved over centuries, we can potentially unlock a wealth of untapped research and shape it by conducting sound scientific investigations to produce safe, efficacious and good quality remedies,” the researchers say.

Reference: Twumasi EB, Akazue PI, Kyeremeh K, Gwira TM, Keiser J, Cho-Ngwa F, et al. (2020) "Antischistosomal, antionchocercal and antitrypanosomal potentials of some Ghanaian traditional medicines and their constituents". PLoS Negl Trop Dis 14(12): e0008919. https://doi.org/10.1371/journal.pntd.0008919

Provided by PLOS

Multiple Mosquito Blood Meals Accelerate Malaria Transmission (Medicine)

Multiple bouts of blood feeding by mosquitoes shorten the incubation period for malaria parasites and increase malaria transmission potential, according to a study published December 31 in the open-access journal PLOS Pathogens by Lauren Childs of Virginia Tech, Flaminia Catteruccia of the Harvard T.H. Chan School of Public Health, and colleagues. Given that mosquitoes feed on blood multiple times in natural settings, the results suggest that malaria elimination may be substantially more challenging than suggested by previous experiments, which typically involve a single blood meal.

Plasmodium falciparum parasites developing in the mosquito midgut. © W. Robert Shaw, 2020 (CCBY 2.0)

Malaria remains a devastating disease for tropical and subtropical regions, accounting for an estimated 405,000 deaths and 228 million cases in 2018. In natural settings, the female Anopheles gambiae mosquito — the major malaria vector — feeds on blood multiple times in her lifespan. Such complex behavior is regularly overlooked when mosquitoes are experimentally infected with malaria parasites, limiting our ability to accurately describe potential effects on transmission. In the new study, the researchers examine how additional blood feeding affects the development and transmission potential of Plasmodium falciparum malaria parasites in An. gambiae females.

“We wanted to capture the fact that, in endemic regions, malaria-transmitting mosquitoes are feeding on blood roughly every 2-3 days”, says W. Robert Shaw, a lead author of this study. “Our study shows that this natural behavior strongly promotes the transmission potential of malaria parasites, in previously unappreciated ways”.

The results show that an additional blood feed three days after infection with P. falciparum accelerates the growth of the malaria parasite, thereby shortening the incubation period required before transmission to humans can occur. Incorporating these data into a mathematical model across sub-Saharan Africa reveals that malaria transmission potential is likely higher than previously thought, making disease elimination more difficult. In addition, parasite growth is accelerated in genetically modified mosquitoes with reduced reproductive capacity, suggesting that control strategies using this approach, with the aim of suppressing Anopheles populations, may inadvertently favor malaria transmission. The data also suggest that parasites can be transmitted by younger mosquitoes, which are less susceptible to insecticide killing, with negative implications for the success of insecticide-based strategies. Taken together, the results suggest that younger mosquitoes and those with reduced reproductive ability may provide a larger contribution to infection than previously thought.
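The intuition behind this result can be illustrated with the classic Ross-Macdonald vectorial-capacity formula, in which transmission potential depends exponentially on the extrinsic incubation period. This is a standard textbook model, not the model used in the study, and the parameter values below are purely illustrative:

```python
import math

def vectorial_capacity(m: float, a: float, p: float, n: float) -> float:
    """Ross-Macdonald vectorial capacity.

    m: mosquitoes per human
    a: daily human-biting rate per mosquito
    p: daily probability a mosquito survives
    n: extrinsic incubation period in days (time the parasite
       needs inside the mosquito before it can be transmitted)
    """
    # Only mosquitoes surviving the full incubation period (p**n)
    # contribute; 1 / -ln(p) is the expected infectious lifespan.
    return m * a**2 * p**n / (-math.log(p))

# Illustrative parameters (not taken from the paper)
m, a, p = 10.0, 0.3, 0.9

baseline = vectorial_capacity(m, a, p, n=12)   # typical incubation period
shortened = vectorial_capacity(m, a, p, n=9)   # incubation shortened by 3 days

print(f"Relative increase in transmission potential: {shortened / baseline:.2f}x")
```

Because the incubation period enters as the exponent of daily survival, even a few days' reduction multiplies transmission potential by 1/p per day gained (here roughly 1.37x for three days), which is why a second blood meal that accelerates parasite growth matters so much for elimination estimates.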

According to the authors, the findings have important implications for accurately understanding malaria transmission potential and estimating the true impact of current and future mosquito control measures.

Reference: Shaw WR, Holmdahl IE, Itoe MA, Werling K, Marquette M, Paton DG, et al. (2020) "Multiple blood feeding in mosquitoes shortens the Plasmodium falciparum incubation period and increases malaria transmission potential". PLoS Pathog 16(12): e1009131. https://doi.org/10.1371/journal.ppat.1009131

Provided by PLOS