Biologists Determined the 3D Structure of a Protein Required for the Assembly of Photosynthetic Membranes (Botany)

An international study has elucidated the structure of a protein that is required for the assembly and stability of photosynthetic membranes.

Plants, algae and cyanobacteria convert carbon dioxide and water into biomass and oxygen with the aid of photosynthesis. This process forms the basis of most forms of life on Earth. Global warming is exposing photosynthetic organisms to increasing levels of stress. This reduces growth rates, and in the longer term presents a threat to food supplies for human populations. An international project, in which LMU biologist Jörg Nickelsen and his research group played a significant role, has now determined the three-dimensional structure of a protein involved in the formation and maintenance of the membranes in which photosynthesis takes place. The insights provided by the study will facilitate biotechnological efforts to boost the ability of plants to cope with environmental stresses.

The initial steps in photosynthesis take place within the ‘thylakoid’ membranes, which harbor pigment-protein complexes that absorb energy from sunlight. It has been known for decades that, in virtually all photosynthetic organisms, a protein called VIPP1 (which stands for ‘vesicle-inducing protein in plastids’) is indispensable for the assembly of thylakoids. “However, how VIPP1 actually performs this essential function has remained enigmatic up to now,” says Steffen Heinz, a postdoc in Nickelsen’s group and joint first author of the new publication. Thanks to the new study, which was led by the Helmholtz Zentrum München, researchers now know a great deal more.

Assembly of photosynthetic membranes

The team used cryo-electron microscopy to determine the three-dimensional structure of VIPP1 at high resolution. Analysis of this structure, in combination with functional investigation of the protein’s mode of action, demonstrated how small numbers of VIPP1 molecules form short strands, which are interwoven to form a basket-like structure. This then serves as a scaffold for the assembly of the thylakoid membrane, and determines its curvature. Using a related technique known as cryo-electron tomography, the scientists were also able to image VIPP1 membranes in their natural state in algal cells. By introducing site-specific mutations into VIPP1, they showed that the interaction of VIPP1 with thylakoid membranes is vital for the maintenance of their structural integrity under high levels of light stress. This finding demonstrates that the protein not only mediates the assembly of thylakoids, but also plays a role in enabling them to adapt to environmental fluctuations.

The results provide the basis for a better understanding of the mechanisms that underlie the formation and stabilization of thylakoids. They will also open up new opportunities to enhance the ability of green plants to withstand extreme environmental stresses.

Featured image: VIPP1 plays a central role in the assembly of thylakoid membranes, which are indispensable for plant growth. | © Helmholtz Zentrum München / Ben Engel

Reference: Gupta et al. (2021): Structural basis for VIPP1 oligomerization and maintenance of thylakoid membrane integrity. Cell.

Provided by LMU München

How Do Mice See the World? (Neuroscience)

Researchers based in Munich and Tübingen have developed an open-source camera system that images natural habitats as they appear to rodents.

During the course of evolution, animals have adapted to the particular demands of their local environments in ways that increased their chances of survival and reproduction. This is also true of diverse aspects of the sensory systems that enable species to perceive their surroundings. In the case of the visual system, these adaptations have shaped features such as the positioning of the eyes and the relative acuity of different regions of the retina.

However, our knowledge of the functional evolution of visual systems in mammals has remained relatively sparse. “In the past 10 or 15 years, the mouse has become the favored model for the investigation of the processing of visual information,” says Professor Laura Busse of the Department of Biology II at LMU. “That’s a somewhat surprising development, given that it was previously thought that these rodents primarily sensed the world using their whisker system and smell.” However, color vision in mammals is known to have an effect on the ability to find food, evade predators, and choose mating partners.

“It occurred to us that we don’t really know how mice perceive their natural environment visually,” says Busse, who is a member of the transregional Collaborative Research Center (CRC) 1233 on “Robust Vision”. Here, the term “robust” refers to the fact that animals (including humans) are able to draw inferences from limited amounts of visual information, even in environments that are constantly changing. Busse decided to close this gap by studying the visual input and the processing of neuronal signals in mice, in collaboration with Professor Thomas Euler of Tübingen University, the coordinating university of the CRC.

Video: Imaging the environment as it would appear to a mouse. (The UV channel is coloured in blue.) Source: Yongrong Qiu, Euler Group

Mice are dichromats; in other words, they have two types of cone cells (the photoreceptors that are responsible for color vision) in their retinas. These cells detect electromagnetic radiation in the green and ultraviolet regions of the spectrum, centered on wavelengths of 510 nanometers (nm) and 350 nm, respectively. “We wanted to know what range of color information is available to mice in their natural habitats, and whether the prevalence of these colors can explain the functional characteristics of the neural circuits in the mouse retina,” Busse explains.
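To get a feel for what dichromacy means in practice, the two cone types’ spectral sensitivities can be crudely modeled. The sketch below uses Gaussian curves centered on the peak wavelengths quoted above; the Gaussian shape and the 40 nm bandwidth are illustrative assumptions, not measured mouse opsin templates.

```python
import math

# Peak sensitivities from the article; the Gaussian form and bandwidth
# are illustrative assumptions, not measured opsin data.
CONE_PEAKS_NM = {"green": 510, "uv": 350}
BANDWIDTH_NM = 40

def cone_response(wavelength_nm: float, cone: str) -> float:
    """Relative sensitivity (0..1) of a cone type at a given wavelength."""
    peak = CONE_PEAKS_NM[cone]
    return math.exp(-((wavelength_nm - peak) ** 2) / (2 * BANDWIDTH_NM ** 2))
```

Under this toy model, light at 510 nm drives the green cone maximally while leaving the UV cone essentially silent, which is why a camera for “mouse vision” needs separate green and UV channels rather than a standard RGB sensor.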

Together, the teams in Munich and Tübingen set out to develop a low-cost, open-source camera which, unlike conventional cameras, was specifically designed to cover the spectral regions in the green and ultraviolet to which the mouse retina is sensitive. To facilitate its use in the field, the hand-held camera is equipped with a gimbal, which automatically orients the picture frame, thus avoiding sudden, unintentional shifts in perspective.

The researchers used this camera to image the environment as it would appear to a mouse, at different times of the day, in fields that showed clear signs of mouse activity. “We knew that the upper hemisphere of the mouse retina, with which they can see the sky, is especially sensitive to UV light,” says Busse. “The lower half of the mouse retina, which is normally oriented towards the ground, shows a higher sensitivity in the green region.” The team confirmed that these two spectral ranges closely match the color statistics of the natural environments favored by mouse populations. This adaptation could be a result of evolutionary processes, helping the animals, for example, to spot birds of prey in the sky and take evasive action. Experiments using artificial neural nets that mimic the processing characteristics of cone cells in the mouse retina confirm this conjecture.

Featured image: A camera specifically designed to cover the spectral regions in the green and ultraviolet | © Y. Qiu, Euler Group

Reference: Yongrong Qiu et al. (2021): Natural environment statistics in the upper and lower visual field are reflected in mouse retinal specializations. Current Biology.

Provided by LMU

Why Does Mercury Have Such a Big Iron Core? Magnetism! (Planetary Science)

New research from the University of Maryland shows that proximity to the sun’s magnetic field determines a planet’s interior composition

A new study disputes the prevailing hypothesis on why Mercury has a big core relative to its mantle (the layer between a planet’s core and crust). For decades, scientists argued that hit-and-run collisions with other bodies during the formation of our solar system blew away much of Mercury’s rocky mantle and left the big, dense, metal core inside. But new research reveals that collisions are not to blame—the sun’s magnetism is.

William McDonough, a professor of geology at the University of Maryland, and Takashi Yoshizaki from Tohoku University developed a model showing that the density, mass and iron content of a rocky planet’s core are influenced by its distance from the sun’s magnetic field. The paper describing the model was published on July 2, 2021, in the journal Progress in Earth and Planetary Science.

“The four inner planets of our solar system—Mercury, Venus, Earth and Mars—are made up of different proportions of metal and rock,” McDonough said. “There is a gradient in which the metal content in the core drops off as the planets get farther from the sun. Our paper explains how this happened by showing that the distribution of raw materials in the early forming solar system was controlled by the sun’s magnetic field.”

McDonough previously developed a model for Earth’s composition that is commonly used by planetary scientists to determine the composition of exoplanets. (His seminal paper on this work has been cited more than 8,000 times.)

McDonough’s new model shows that during the early formation of our solar system, when the young sun was surrounded by a swirling cloud of dust and gas, grains of iron were drawn toward the center by the sun’s magnetic field. When the planets began to form from clumps of that dust and gas, planets closer to the sun incorporated more iron into their cores than those farther away.

The researchers found that the density and proportion of iron in a rocky planet’s core correlates with the strength of the magnetic field around the sun during planetary formation. Their new study suggests that magnetism should be factored into future attempts to describe the composition of rocky planets, including those outside our solar system.

The composition of a planet’s core is important for its potential to support life. On Earth, for instance, a molten iron core creates a magnetosphere that protects the planet from cancer-causing cosmic rays. The core also contains the majority of the planet’s phosphorus, which is an important nutrient for sustaining carbon-based life.

Using existing models of planetary formation, McDonough determined the speed at which gas and dust were pulled into the center of our solar system during its formation. He factored in the magnetic field that would have been generated by the sun as it burst into being and calculated how that magnetic field would draw iron through the dust and gas cloud.

As the early solar system began to cool, dust and gas that were not drawn into the sun began to clump together. The clumps closer to the sun would have been exposed to a stronger magnetic field and thus would contain more iron than those farther away from the sun. As the clumps coalesced and cooled into spinning planets, gravitational forces drew the iron into their core.

When McDonough incorporated this model into calculations of planetary formation, it revealed a gradient in metal content and density that corresponds perfectly with what scientists know about the planets in our solar system. Mercury has a metallic core that makes up about three-quarters of its mass. The cores of Earth and Venus are only about one-third of their mass, and Mars, the outermost of the rocky planets, has a small core that is only about one-quarter of its mass.

This new understanding of the role magnetism plays in planetary formation creates a kink in the study of exoplanets, because there is currently no method to determine the magnetic properties of a star from Earth-based observations. Scientists infer the composition of an exoplanet based on the spectrum of light radiated from its sun. Different elements in a star emit radiation in different wavelengths, so measuring those wavelengths reveals what the star, and presumably the planets around it, are made of.

“You can no longer just say, ‘Oh, the composition of a star looks like this, so the planets around it must look like this,’” McDonough said. “Now you have to say, ‘Each planet could have more or less iron based on the magnetic properties of the star in the early growth of the solar system.’”

The next steps in this work will be for scientists to find another planetary system like ours—one with rocky planets spread over wide distances from their central sun. If the density of the planets drops with distance from the sun, as it does in our solar system, researchers could confirm this new theory and infer that a magnetic field influenced planetary formation.

The research paper, “Terrestrial planet compositions controlled by accretion disk magnetic field,” by McDonough, W. F. and Yoshizaki, T., was published on July 2, 2021, in the journal Progress in Earth and Planetary Science.

Featured image: New research shows the sun’s magnetic field drew iron toward the center of our solar system as the planets formed. That explains why Mercury, which is closest to the sun, has a bigger, denser iron core relative to its outer layers than other rocky planets like Earth and Mars. (Image Credit: NASA’s Goddard Space Flight Center.)

Provided by University of Maryland

Inside The Lungs, A New Hope For Protection Against Flu Damage (Biology)

New experimental data pinpoints a molecular component responsible for modulating the damage the flu can wreak on the lungs

The seasonal flu kills up to 600,000 people a year worldwide and has a century-long history of pandemics. Examples include the Spanish flu of the late 1910s and the H1N1 pandemic of 2009, which together claimed more than 50 million lives. “The way the stage is set tells us that it is not a matter of if but rather of when there will be a next pandemic. And preparing ourselves for that demands intensive fundamental research and constant accumulation of knowledge about these viruses and the diseases they cause”, says Maria João Amorim, IGC principal investigator and leader of the team that conducted the study.

When a virus like influenza enters our lungs, it is quickly faced with cocktails of molecules that recognize it and alert the host of its presence. Signals flow back and activate the immune response, calling in an army of cells and inflammation sidekicks. Any exaggeration can destabilize the equilibrium needed to clear the virus and spare our tissues from damage. For most people, clearance arrives a few days after infection and leaves very few traces. But for some, influenza infection entails severe complications, resulting from an exacerbated response that damages the lungs.

“We found that DAF, which stands for decay accelerating factor, aggravates influenza A infection and increases damage to the lungs in mice. This virulence mechanism of influenza, and the molecular regulation that underpins it, are new for us”, Maria João Amorim reveals. DAF is a receptor found at the surface of most cells, where it protects them from attack by one of our own immune surveillance systems: the complement system. Complement protects us against invading pathogens, inactivating the pathogen itself when it is detected in circulation, or mounting a strategy to eliminate the infected cells that harbor it.

“But this can work as a double-edged sword, because if complement destroys cells from the host, there is the associated danger of provoking excessive self-injury by eliminating too many bystander cells and promoting inflammation. In fact, disease severity and mortality have been associated with both a lack and an excess of complement activation, which is tuned by regulators such as DAF”, remarks Nuno Santos, first author of the study. Contrary to expectations, the team found that influenza A virus exploits DAF to potentiate complement activation as an immune evasion mechanism, increasing the recruitment of immune cells. “By doing so, it can exacerbate the immune response, and this is what damages the lungs. Remarkably, this occurs in a way that is independent of viral load, telling us that it directly affects resilience to infection”, says Zoé Vaz da Silva, coauthor of the study.

The role of DAF upon influenza infection can depend on how it interacts with certain parts of the virus, leading to more or less aggravated responses. “The complement system is important, but not the only component that determines the outcome of the infection. These interactions have functional implications and represent an unprecedented way for a virus to modulate the immune response by altering a host protein from within the infected cell. Studying this further in the future is crucial”, Maria João Amorim says.

This work highlights a novel immune evasion strategy by influenza A virus and stresses the importance of a balanced immune response to viral infections, which allows disease clearance without causing damage. Despite its intrinsic protective role, the immune system can be the cause of severe complications during influenza A infection.

This work was developed at Instituto Gulbenkian de Ciência, in collaboration with Celso Reis from i3S. Funding was granted by Fundação para a Ciência e a Tecnologia and Fundação Calouste Gulbenkian.

Featured image: New hope to modulate the damage the flu wreaks on the lungs © Joana Carvalho, IGC 2021

Reference: Santos NB, Vaz da Silva ZE, Gomes C, Reis CA, Amorim MJ (2021) Complement Decay-Accelerating Factor is a modulator of influenza A virus lung immunopathology. PLoS Pathog 17(7): e1009381. doi:10.1371/journal.ppat.1009381

Provided by IGC

At What Temperature Does the Weather Become a Problem? (Agriculture)

Climate change leads to increasing heat strain for humans, animals and crops

When extreme heat becomes more frequent and temperatures remain high for extended periods of time, as is currently the case in Canada and the American Northwest, physiological stress increases in humans, animals and crops. Prof. Senthold Asseng, director of the World Agricultural Systems Center at the Technical University of Munich (TUM), provides an overview of thresholds and adaptation strategies.

“We have studied which temperatures are preferable and which are harmful in humans, cattle, pigs, poultry, and agricultural crops and found that they are surprisingly similar,” says Senthold Asseng, Professor of Digital Agriculture at TUM. According to the study, preferable temperatures range from 17 to 24 degrees Celsius.

When does it become too hot for humans?

At high humidity, mild heat strain for humans begins at about 23 degrees Celsius, and at low humidity at 27 degrees Celsius. “If people are exposed to temperatures above 32 degrees Celsius at extremely high humidity, or above 45 degrees Celsius at extremely low humidity, for a lengthy period of time, it can be fatal,” says Prof. Asseng. “During extreme heat events with temperatures far above 40 degrees Celsius, such as those currently being observed on the U.S. Northwest coast and in Canada, people require technical support, for example in the form of air-conditioned spaces.”
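The quoted thresholds can be summarized in a small lookup. The function below is a deliberately simplified sketch that encodes only the four figures from this article, treating humidity as a coarse high/low label; it is not a physiological heat-stress model such as the wet-bulb globe temperature.

```python
def heat_strain(temp_c: float, humidity: str) -> str:
    """Classify human heat strain using the thresholds quoted in the article.

    `humidity` is "high" or "low". The cutoffs are illustrative
    simplifications of the article's figures, not a medical model.
    """
    # Article thresholds: mild strain from ~23 C (high humidity) or
    # ~27 C (low humidity); prolonged exposure above 32 C / 45 C can
    # be fatal at extremely high / extremely low humidity, respectively.
    mild, fatal = (23, 32) if humidity == "high" else (27, 45)
    if temp_c >= fatal:
        return "potentially fatal with prolonged exposure"
    if temp_c >= mild:
        return "mild heat strain"
    return "comfortable"
```

Note how the same 25-degree day classifies differently depending on humidity, which is the article's central point about heat strain.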

To mitigate increasing heat strain, Prof. Asseng cites a variety of strategies, including increasing natural shade from trees or structural shading. Cities and buildings can be made more temperature-passive, for example, by using roof and wall insulation or by using lighter, reflective roof and wall colors to reduce heat strain.

How do high temperatures affect livestock?

In cattle and pigs, heat strain occurs at 24 degrees Celsius with high humidity and at 29 degrees Celsius with low humidity. The milk yield of cows can decrease by 10 to 20 percent under heat stress, and fattening performance in pigs is also reduced. The comfortable temperature range for poultry is 15 to 20 degrees Celsius. Chickens experience mild heat strain at 30 degrees Celsius; at 37 degrees Celsius and above, they experience severe heat stress and their egg-laying rate declines.

Heat stress overall leads to reduced growth in cattle and dairy cows, pigs, chickens and other livestock, which means lower yields and reduced reproductive performance. “There are examples of evolutionary adaptations to warm weather in terrestrial animals. Transylvanian naked neck chickens are more heat tolerant than other varieties of chickens because of a complex genetic mutation that suppresses feather growth. They are naturally air-conditioned because they lack feathers on their necks,” says Prof. Asseng.

How do crops react to high temperature?

“In crops, the optimal temperature zone and temperature thresholds seem to be more diverse due to differences between species and varieties,” explains Prof. Asseng. 

Cold-temperate crops such as wheat, for example, do better at cooler temperatures, while warm-temperate crops such as corn are sensitive to frost but can tolerate warmer temperatures. Strategies to reduce heat stress in crop production include shifting planting dates to avoid heat stress later in the season, irrigation (where feasible), switching to more heat-resistant crops, and breeding for greater heat tolerance.

How is climate change affecting life on Earth?

“By the end of the century, 45 to 70 percent of the global land area could be affected by climate conditions in which humans cannot survive without technological support, such as air conditioning. Currently, it’s 12 percent,” says Prof. Asseng. This means that in the future, 44 to 75 percent of the human population will be chronically stressed by heat. A similar increase in heat stress is expected for livestock, poultry, agricultural crops and other living organisms.

“Genetic adaptation to a changing climate often takes many generations. The time available is too short for many higher forms of life. If current climate trends persist, many living things could be severely affected or even disappear completely from Earth due to temperature change,” concludes Prof. Asseng.


Reference: Asseng, S., Spänkuch, D., Hernandez-Ochoa, I. M. & Laporta, J. (2021): The upper temperature thresholds of life. The Lancet Planetary Health, Volume 5, Issue 6, June 2021, Pages e378–e385.

More information:

In February 2021, Prof. Senthold Asseng became the new director of the World Agricultural Systems Center at TUM. He has also headed the Chair of Digital Agriculture at the TUM School of Life Sciences since December 2020. His research interests lie in the mathematical modeling and computer simulation of agricultural and biological systems in the context of climate variability, climate change and sustainability, including the investigation of adaptation strategies to improve food security in sustainable agricultural systems.

This study was supported by the Consortium of International Agricultural Research Centers (CGIAR) research program on wheat agri-food systems (CRP WHEAT) and the CGIAR Platform for Big Data in Agriculture, the Bill & Melinda Gates Foundation, the World Bank, and the Government of Mexico through the Sustainable Modernization of Traditional Agriculture (MasAgro) project.

Featured image: Particularly during the summer months, some livestock are raised outside. Dairy cows suffer severe dehydration once 12 percent of their body weight has been lost as water. Increased natural shading by trees is one adaptation strategy to rising temperatures in the livestock sector. | Image: Magdevski

Provided by TUM

Physicists Observationally Confirm Hawking’s Black Hole Theorem For The First Time (Cosmology)

Study offers evidence, based on gravitational waves, to show that the total area of a black hole’s event horizon can never decrease.

There are certain rules that even the most extreme objects in the universe must obey. A central law for black holes predicts that the area of their event horizons — the boundary beyond which nothing can ever escape — should never shrink. This law is Hawking’s area theorem, named after physicist Stephen Hawking, who derived the theorem in 1971.

Fifty years later, physicists at MIT and elsewhere have now confirmed Hawking’s area theorem for the first time, using observations of gravitational waves. Their results appear today in Physical Review Letters.

In the study, the researchers take a closer look at GW150914, the first gravitational wave signal detected by the Laser Interferometer Gravitational-wave Observatory (LIGO), in 2015. The signal was a product of two inspiraling black holes that generated a new black hole, along with a huge amount of energy that rippled across space-time as gravitational waves.

If Hawking’s area theorem holds, then the horizon area of the new black hole should not be smaller than the total horizon area of its parent black holes. In the new study, the physicists reanalyzed the signal from GW150914 before and after the cosmic collision and found that indeed, the total event horizon area did not decrease after the merger — a result that they report with 95 percent confidence.

Their findings mark the first direct observational confirmation of Hawking’s area theorem, which has been proven mathematically but never observed in nature until now. The team plans to test future gravitational-wave signals to see if they might further confirm Hawking’s theorem or be a sign of new, law-bending physics.

“It is possible that there’s a zoo of different compact objects, and while some of them are the black holes that follow Einstein and Hawking’s laws, others may be slightly different beasts,” says lead author Maximiliano Isi, a NASA Einstein Postdoctoral Fellow in MIT’s Kavli Institute for Astrophysics and Space Research. “So, it’s not like you do this test once and it’s over. You do this once, and it’s the beginning.”

Isi’s co-authors on the paper are Will Farr of Stony Brook University and the Flatiron Institute’s Center for Computational Astrophysics, Matthew Giesler of Cornell University, Mark Scheel of Caltech, and Saul Teukolsky of Cornell University and Caltech.

An age of insights

In 1971, Stephen Hawking proposed the area theorem, which set off a series of fundamental insights about black hole mechanics. The theorem predicts that the total area of a black hole’s event horizon — and all black holes in the universe, for that matter — should never decrease. The statement was a curious parallel of the second law of thermodynamics, which states that the entropy, or degree of disorder within an object, should also never decrease.

The similarity between the two theories suggested that black holes could behave as thermal, heat-emitting objects — a confounding proposition, as black holes by their very nature were thought to never let energy escape, or radiate. Hawking eventually squared the two ideas in 1974, showing that black holes could have entropy and emit radiation over very long timescales if their quantum effects were taken into account. This phenomenon was dubbed “Hawking radiation” and remains one of the most fundamental revelations about black holes.

“It all started with Hawking’s realization that the total horizon area in black holes can never go down,” Isi says. “The area law encapsulates a golden age in the ’70s where all these insights were being produced.”

Hawking and others have since shown that the area theorem works out mathematically, but there had been no way to check it against nature until LIGO’s first detection of gravitational waves.

Hawking, on hearing of the result, quickly contacted LIGO co-founder Kip Thorne, the Feynman Professor of Theoretical Physics at Caltech. His question: Could the detection confirm the area theorem?

At the time, researchers did not have the ability to pick out the necessary information within the signal, before and after the merger, to determine whether the final horizon area did not decrease, as Hawking’s theorem would require. It wasn’t until several years later, with the development of a technique by Isi and his colleagues, that testing the area law became feasible.

Before and after

In 2019, Isi and his colleagues developed a technique to extract the reverberations immediately following GW150914’s peak — the moment when the two parent black holes collided to form a new black hole. The team used the technique to pick out specific frequencies, or tones of the otherwise noisy aftermath, that they could use to calculate the final black hole’s mass and spin.

A black hole’s mass and spin are directly related to the area of its event horizon, and Thorne, recalling Hawking’s query, approached them with a follow-up: Could they use the same technique to compare the signal before and after the merger, and confirm the area theorem?

The researchers took on the challenge, and again split the GW150914 signal at its peak. They developed a model to analyze the signal before the peak, corresponding to the two inspiraling black holes, and to identify the mass and spin of both black holes before they merged. From these estimates, they calculated the parents’ total horizon area: roughly 235,000 square kilometers, or about nine times the area of Massachusetts.

They then used their previous technique to extract the “ringdown,” or reverberations of the newly formed black hole, from which they calculated its mass and spin, and ultimately its horizon area, which they found was equivalent to 367,000 square kilometers (approximately 13 times the Bay State’s area).
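The mass-spin-area relation behind these numbers is the Kerr horizon formula, A = 8π(GM/c²)²(1 + √(1 − χ²)), where χ is the dimensionless spin. As a rough cross-check (not the authors' analysis pipeline), plugging in the widely reported GW150914 estimates of roughly 36 and 29 solar masses for the parent black holes and a roughly 62-solar-mass remnant with spin about 0.69 reproduces the ballpark figures quoted above; the parents' spins are neglected here for simplicity.

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

def kerr_horizon_area_km2(mass_msun: float, spin: float = 0.0) -> float:
    """Event-horizon area of a Kerr black hole, in km^2.

    A = 8*pi*(G*M/c^2)^2 * (1 + sqrt(1 - chi^2)); chi = 0 reduces to
    the Schwarzschild case A = 16*pi*(G*M/c^2)^2.
    """
    r_g = G * mass_msun * M_SUN / C**2                       # gravitational radius, m
    area_m2 = 8 * math.pi * r_g**2 * (1 + math.sqrt(1 - spin**2))
    return area_m2 / 1e6                                     # m^2 -> km^2

# Widely reported (approximate) GW150914 estimates; parent spins neglected.
parents = kerr_horizon_area_km2(36) + kerr_horizon_area_km2(29)
remnant = kerr_horizon_area_km2(62, spin=0.69)
```

Both results land near the article's ~235,000 and ~367,000 square kilometers, and the remnant's area exceeds the parents' total, exactly as the area theorem demands.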

“The data show with overwhelming confidence that the horizon area increased after the merger, and that the area law is satisfied with very high probability,” Isi says. “It was a relief that our result does agree with the paradigm that we expect, and does confirm our understanding of these complicated black hole mergers.”

The team plans to further test Hawking’s area theorem, and other longstanding theories of black hole mechanics, using data from LIGO and Virgo, its counterpart in Italy.

“It’s encouraging that we can think in new, creative ways about gravitational-wave data, and reach questions we thought we couldn’t before,” Isi says. “We can keep teasing out pieces of information that speak directly to the pillars of what we think we understand. One day, this data may reveal something we didn’t expect.”

This research was supported, in part, by NASA, the Simons Foundation, and the National Science Foundation.

Featured image: Physicists at MIT and elsewhere have used gravitational waves to observationally confirm Hawking’s black hole area theorem for the first time. This computer simulation shows the collision of two black holes that produced the gravitational wave signal, GW150914. Credit: Simulating eXtreme Spacetimes (SXS) project. Courtesy of LIGO

Reference: Maximiliano Isi, Will M. Farr, Matthew Giesler, Mark A. Scheel, and Saul A. Teukolsky, “Testing the Black-Hole Area Law with GW150914”, Phys. Rev. Lett. 127, 011103 – Published 1 July 2021.

Provided by MIT

Observation, Simulation, and AI Join Forces to Reveal a Clear Universe (Cosmology)

Japanese astronomers have developed a new artificial intelligence (AI) technique to remove noise in astronomical data due to random variations in galaxy shapes. After extensive training and testing on large mock data created by supercomputer simulations, they then applied this new tool to actual data from Japan’s Subaru Telescope and found that the mass distribution derived from using this method is consistent with the currently accepted models of the Universe. This is a powerful new tool for analyzing big data from current and planned astronomy surveys.

Wide area survey data can be used to study the large-scale structure of the Universe through measurements of gravitational lensing patterns. In gravitational lensing, the gravity of a foreground object, like a cluster of galaxies, can distort the image of a background object, such as a more distant galaxy. Some examples of gravitational lensing are obvious, such as the “Eye of Horus”. The large-scale structure, consisting mostly of mysterious “dark” matter, can distort the shapes of distant galaxies as well, but the expected lensing effect is subtle. Averaging over many galaxies in an area is required to create a map of foreground dark matter distributions.

But this technique of looking at many galaxy images runs into a problem: some galaxies are just innately a little funny looking. It is difficult to distinguish between a galaxy image distorted by gravitational lensing and a galaxy whose shape is intrinsically irregular. This is referred to as shape noise and is one of the limiting factors in studies of the large-scale structure of the Universe.
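The statistics behind averaging away shape noise are easy to demonstrate. The sketch below uses hypothetical numbers (a 1% coherent lensing shear buried in roughly 30% intrinsic ellipticity scatter per galaxy); it is not the team’s analysis, only an illustration of why many galaxies must be averaged before the subtle lensing signal emerges.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical numbers: a 1% coherent lensing shear versus ~30%
# intrinsic ("shape noise") ellipticity scatter per galaxy
true_shear = 0.01
shape_noise = 0.30

for n_gal in (1, 100, 10_000):
    # Each measured ellipticity = true shear + intrinsic scatter
    e = true_shear + rng.normal(scale=shape_noise, size=n_gal)
    print(f"{n_gal:>6} galaxies: mean ellipticity = {e.mean():+.4f}")
```

The scatter of the average falls as 1/√N, so with a single galaxy the 1% shear is invisible, while averaging ten thousand galaxies pins it down to a few tenths of a percent.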

To compensate for shape noise, a team of Japanese astronomers first used ATERUI II, the world’s most powerful supercomputer dedicated to astronomy, to generate 25,000 mock galaxy catalogs based on real data from the Subaru Telescope. They then added realistic noise to these perfectly known artificial data sets and trained an AI to statistically recover the underlying dark matter distribution from the mock data.
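The team’s actual tool is a generative adversarial network, but the train-on-mocks idea can be sketched with a much simpler stand-in: fit a per-Fourier-mode Wiener filter from noisy/clean mock pairs, then apply it to a new noisy map. Everything below (map size, noise level, the filter itself) is an illustrative assumption, not the published pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def mock_map(n=64):
    """Toy 'true' mass map: low-pass-filtered white noise (smooth on large scales)."""
    white = rng.normal(size=(n, n))
    k = np.fft.fftfreq(n)
    kk = np.sqrt(k[:, None] ** 2 + k[None, :] ** 2)
    lowpass = np.exp(-(kk / 0.08) ** 2)  # keep only large-scale modes
    return np.real(np.fft.ifft2(np.fft.fft2(white) * lowpass))

# Training set: (clean, noisy) mock pairs, standing in for the 25,000
# supercomputer-generated catalogs (here just 200 toy maps)
clean = np.stack([mock_map() for _ in range(200)])
noisy = clean + rng.normal(scale=0.5, size=clean.shape)

# "Train" the simplest possible denoiser: a per-mode Wiener filter,
# i.e. the ratio of signal power to total power estimated from the mocks
S = np.mean(np.abs(np.fft.fft2(clean)) ** 2, axis=0)
T = np.mean(np.abs(np.fft.fft2(noisy)) ** 2, axis=0)
wiener = S / T

def denoise(m):
    """Apply the mock-trained filter to a noisy map."""
    return np.real(np.fft.ifft2(np.fft.fft2(m) * wiener))

# Apply to a held-out mock: the filter should cut the reconstruction error
truth = mock_map()
obs = truth + rng.normal(scale=0.5, size=truth.shape)
mse_raw = np.mean((obs - truth) ** 2)
mse_den = np.mean((denoise(obs) - truth) ** 2)
print(f"MSE before: {mse_raw:.3f}  after: {mse_den:.3f}")
```

A GAN replaces this fixed linear filter with a learned nonlinear mapping, which is what lets it recover fine details a Wiener filter would smooth away; the training logic, however, is the same: learn from perfectly known simulations, then apply to the real sky.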

After training, the AI was able to recover previously unobservable fine details, helping to improve our understanding of the cosmic dark matter. Then using this AI on real data covering 21 square degrees of the sky, the team found a distribution of foreground mass consistent with the standard cosmological model.

“This research shows the benefits of combining different types of research: observations, simulations, and AI data analysis,” comments Masato Shirasaki, the leader of the team. “In this era of big data, we need to step across traditional boundaries between specialties and use all available tools to understand the data. If we can do this, it will open new fields in astronomy and other sciences.”

These results appeared as Shirasaki et al. “Noise reduction for weak lensing mass mapping: an application of generative adversarial networks to Subaru Hyper Suprime-Cam first-year data” in the June 2021 issue of Monthly Notices of the Royal Astronomical Society.

Featured image: Artist’s visualization of this research. Using AI driven data analysis to peel back the noise and find the actual shape of the Universe. (Credit: The Institute of Statistical Mathematics)

Provided by NAOJ

Medical Researchers Uncover The Genetic Mechanism Behind Rett Syndrome (Neuroscience)

Medical researchers led by Kyushu University have revealed a possible underlying genetic pathway behind the neurological dysfunction of Rett syndrome. The team found that deficiencies in key genes involved in the pathology trigger neural stem cells to generate fewer neurons by producing more astrocytes—the brain’s maintenance cells.

The researchers hope that the molecular pathology they identified, as reported in the journal Cell Reports, can lead to potential therapeutic targets for Rett syndrome in the future.

Rett syndrome is a progressive neurodevelopmental disorder characterized by impairments in cognition and coordination—with varying severity—and occurs in roughly one in every 10,000 to 15,000 female births. However, it is difficult to initially identify because children appear to develop normally in the first 6–18 months.

“Rett syndrome is caused by mutations in a single gene called methyl-CpG binding protein 2, or MeCP2. The gene was identified over two decades ago and much has been uncovered since, but exactly how the mutations cause the pathology remains elusive,” explains first author Hideyuki Nakashima of Kyushu University’s Faculty of Medical Sciences.

In their past research, the team had identified that MeCP2 acts as a regulator for the processing of specific microRNAs to control the functions of neurons. So, they went back to investigate if that pathway was also involved in the differentiation of neural stem cells.

Compared to messenger RNA, the final template transcribed from DNA that is used by a cell to synthesize proteins, microRNAs—or miRNAs—are much smaller and act to regulate messenger RNA to make sure the cell is making the correct amount of the desired protein.

“Through our investigation, we found several microRNAs associated with MeCP2, but only one affected the differentiation of neural stem cells: a microRNA called miR-199a,” says Nakashima. “In fact, when either MeCP2 or miR-199a are disrupted, we found that it increased the production of cells called astrocytes.”

Astrocytes are like the support cells of your brain. While neurons fire off the electrical signals, astrocytes are there to help maintain everything else. During development, astrocytes and neurons are generated from the same type of stem cells, known as neural stem cells, where their production is carefully controlled. However, dysfunction in MeCP2 or miR-199a causes these stem cells to produce more astrocytes than neurons.

“Further analysis showed that miR-199a targets the protein Smad1, a transcription factor critical for proper cellular development. Smad1 functions downstream of a pathway called BMP signaling, which is known to inhibit the production of neurons and facilitate the generation of astrocytes,” states Nakashima.

To investigate the process further, the team established a brain organoid culture—a 3D culture of neural stem cells that can mimic aspects of brain development—from iPS cells derived from patients with Rett syndrome. When they inhibited BMP, short for bone morphogenetic protein, the team was able to reduce abnormal neural stem cell differentiation.

“Our findings have given us valuable insight into the role of MeCP2, miR-199a, and BMP signaling in the pathology of Rett syndrome,” concludes Kinichi Nakashima, who headed the team. “Further investigation is needed, but we hope this can lead to clinical treatments for Rett syndrome symptoms.”

For more information about this research, see “MeCP2 controls neural stem cell fate specification through miR-199a-mediated inhibition of BMP-Smad signaling,” Hideyuki Nakashima, Keita Tsujimura, Koichiro Irie, Takuya Imamura, Cleber A. Trujillo, Masataka Ishizu, Masahiro Uesaka, Miao Pan, Hirofumi Noguchi, Kanako Okada, Kei Aoyagi, Tomoko Andoh-Noda, Hideyuki Okano, Alysson R. Muotri, and Kinichi Nakashima, Cell Reports (2021).

Featured image: Images of brain organoids from control (top) and Rett syndrome (RTT) (bottom) patients, with astrocytes stained in cyan. You can see that the intensity of cyan color is higher in the RTT brain organoids. (Credit: Kyushu University/Nakashima Lab).

Provided by Kyushu University

Potential Drug Target For Difficult-to-treat Breast Cancer: RNA-binding Proteins (Medicine)

Studies using human cell lines and tumors grown in mice provide early evidence that inhibiting RNA-binding proteins, a previously overlooked family of molecules, might provide a new approach for treating some cancers

In cancer research, it’s a common goal to find something about cancer cells — some sort of molecule — that drives their ability to survive, and determine if that molecule could be inhibited with a drug, halting tumor growth. Even better: The molecule isn’t present in healthy cells, so they remain untouched by the new therapy.

Plenty of progress has been made in this approach, known as molecular targeted cancer therapy. Some current cancer therapeutics inhibit enzymes that become overactive, allowing cells to proliferate, spread and survive beyond their norm. The challenge is that many known cancer-driving molecules are “undruggable,” meaning their type, shape or location prohibit drugs from binding to them.

University of California San Diego School of Medicine researchers are now exploring the therapeutic potential of RNA-binding proteins, a relatively untapped family of cancer-driving molecules. After genes (DNA) are transcribed into RNA, these proteins provide an extra layer of cellular control, determining which RNA copies get translated into other proteins and which don’t. Like many molecular systems that govern cell growth, RNA-binding proteins can contribute to tumor development when they malfunction.

In their latest study, publishing July 2, 2021 in Molecular Cell, the UC San Diego School of Medicine team discovered in human cells and mouse models that RNA-binding proteins represent a new class of drug targets for cancers, including triple-negative breast cancer, a particularly difficult-to-treat cancer because it lacks most other molecular drug targets.

One RNA-binding protein in particular stood out: YTHDF2. When the researchers genetically removed YTHDF2 from human triple-negative breast tumors transplanted into mice, the tumors shrank approximately 10-fold in volume.

“We’re excited that RNA-binding proteins look like they could be a new class of drug targets for cancer,” said senior author Gene Yeo, PhD, professor of cellular and molecular medicine at UC San Diego School of Medicine. “We’re not yet sure how easily druggable they are in this context, but we’ve built a solid framework to begin exploring them.”

Yeo led the study with Jaclyn Einstein, PhD, a graduate student in his lab. Einstein will join a startup company spun out from the lab to explore YTHDF2’s druggability.

Yeo’s team has long studied the role of RNA-binding proteins in a number of other diseases. In 2016, for example, they discovered that mutations in one such protein contribute to ALS by scrambling crucial cellular messaging systems.

To begin exploring RNA-binding proteins as cancer drug targets, the researchers turned to an old philosophy known as synthetic lethality. In this one-two punch approach, they started with human breast cells engineered to over-produce another well-known cancer-driving molecule, and looked for additional vulnerabilities specific to those cells.

The researchers systematically silenced RNA-binding proteins in these cancer cells one by one using the CRISPR gene-editing technique. They found 57 RNA-binding proteins that, when inhibited, killed cancer cells carrying the known hyperactive cancer driver. The advantage of the synthetic lethal approach is that normal cells, which don’t produce that cancer-driving molecule, should be left untouched by the treatment. Of these 57 RNA-binding proteins, YTHDF2 appeared most promising.

Yeo’s team also recently developed a new laboratory technique called Surveying Targets by APOBEC-Mediated Profiling (STAMP), which allows them to measure what had previously been largely invisible: how RNA-binding proteins interact with RNA molecules within individual cells.

The researchers used STAMP in this study to get a detailed look at how the various cells that make up a breast tumor behave without YTHDF2. The approach revealed that YTHDF2-deficient cancer cells die by stress-induced apoptosis, a carefully controlled mechanism cells use to destroy themselves. Apoptosis is supposed to shut down malfunctioning cells so tumors don’t arise, but it doesn’t always work. By removing YTHDF2, they managed to re-activate this cell death signal.

To test how safe it might be to treat cancer by inhibiting YTHDF2, the researchers engineered mice that lack YTHDF2 in every cell of the adult body, not just transplanted breast cancer cells. The mice appeared completely normal — not only did they not have tumors, there were no changes in body weight or behavior.

“Those otherwise healthy mice tell us that we might expect minimal adverse side effects of potential therapies that work by targeting YTHDF2,” Einstein said.

Co-authors of the study also include: Mark Perelis, Isaac A. Chaim, Julia K. Nussbacher, Alexandra T. Tankka, Brian A. Yee, Assael A. Madrigal, Archana Shankar, all at UC San Diego; Jitendra K. Meena, Heyuan Li, Nicholas J. Neill, Siddhartha Tyagi, Thomas F. Westbrook, all at Baylor College of Medicine.

Disclosure: Gene Yeo and Jaclyn Einstein are inventors on a patent disclosure at UC San Diego related to this work. Yeo is co-founder, member of the Board of Directors, equity holder, on the Scientific Advisory Board and paid consultant for Locanabio and Eclipse BioInnovations. Yeo is a visiting faculty at the National University of Singapore. The terms of these arrangements have been reviewed and approved by the University of California San Diego in accordance with its conflict-of-interest policies.

Featured image: Triple-negative breast cancer cells are shown on the left. Without the RNA-binding protein YTHDF2 (right), fewer cancer cells survived. © UC San Diego Health Sciences

Reference: Jaclyn M. Einstein, Mark Perelis, Isaac A. Chaim, Jitendra K. Meena, Julia K. Nussbacher, Alexandra T. Tankka, Brian A. Yee, Heyuan Li, Assael A. Madrigal, Nicholas J. Neill, Archana Shankar, Siddhartha Tyagi, Thomas F. Westbrook, Gene W. Yeo, “Inhibition of YTHDF2 triggers proteotoxic cell death in MYC-driven breast cancer”, Molecular Cell, 2021, ISSN 1097-2765.

Provided by UCSD