# Shadow of A Black Hole Part 1: Kerr BH (Planetary Science)

In 1973, James Bardeen took up the problem of gravitational lensing by spinning black holes, giving a thorough analysis of null geodesics (light-ray propagation) around a Kerr black hole.

The Kerr solution was discovered in 1963 by the New Zealand physicist Roy Kerr and has since focused the attention of many researchers in General Relativity, because it represents the most general state of equilibrium of an astrophysical black hole.

The Kerr spacetime’s metric depends on two parameters: the black hole mass “M” and its normalized angular momentum “a”. An important difference from ordinary stars, which rotate differentially, is that Kerr black holes rotate with perfect rigidity: all points on their event horizon move with the same angular velocity. There is, however, a critical angular momentum, given by a = M (in units where G = c = 1), above which the event horizon would “break up”: this limit corresponds to the horizon spinning at the speed of light. For such a black hole, called “extremal”, the surface gravity at the event horizon vanishes, because the inward pull of gravity is exactly balanced by enormous repulsive centrifugal forces.
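These statements can be made quantitative. The Kerr horizons sit at the roots of Δ = r² − 2Mr + a², which are real only for a ≤ M (standard formulas, quoted here without derivation):

```latex
% Kerr horizon radii (G = c = 1, a = J/M): real only when a <= M
r_\pm = M \pm \sqrt{M^2 - a^2}

% Rigid rotation: every point of the horizon shares one angular velocity
\Omega_H = \frac{a}{r_+^2 + a^2} = \frac{a}{2 M r_+}

% Surface gravity, which vanishes in the extremal limit a \to M (r_+ = r_-)
\kappa = \frac{r_+ - r_-}{2\,(r_+^2 + a^2)}
```

At a = M the two horizons merge at r = M and the surface gravity κ goes to zero, which is the precise sense in which the gravitational pull at the horizon of an extremal hole “cancels”.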

In the last twenty years, increasing evidence has been found for the existence of a supermassive black hole at the center of our galaxy. It is expected that a distant observer would “see” this black hole as a dark disk in the sky, known as its “shadow”. The shadow is sometimes described as an image of the event horizon, although it is in fact larger: light passing well outside the horizon can still be captured.

James Bardeen was the first to correctly calculate the shape of the shadow of a Kerr black hole. He computed how the black hole’s rotation would affect the shape of the shadow that the event horizon casts on light from a background star field. For a black hole spinning close to the maximum angular momentum, the result is a D-shaped shadow.
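Bardeen's result can be reproduced numerically from the standard formulas for the critical impact parameters of spherical photon orbits. The Python sketch below is an illustration (geometric units with M = 1 and an observer in the equatorial plane), not Bardeen's original derivation:

```python
import numpy as np

M = 1.0  # black hole mass, geometric units G = c = 1

def shadow_boundary(a, n=4000):
    """Outline of the Kerr shadow: critical impact parameters (alpha, beta)
    for spherical photon orbits, seen from infinity in the equatorial plane."""
    r = np.linspace(1.0 + 1e-6, 4.0, n) * M          # photon orbits lie in ~[M, 4M]
    xi = (r**2 * (3*M - r) - a**2 * (r + M)) / (a * (r - M))
    eta = r**3 * (4 * a**2 * M - r * (r - 3*M)**2) / (a**2 * (r - M)**2)
    ok = eta >= 0                                    # keep physically allowed orbits
    return -xi[ok], np.sqrt(eta[ok])                 # beta also mirrors to -beta

# Near-extremal spin flattens the prograde edge into Bardeen's "D" shape.
alpha, beta = shadow_boundary(a=0.998)
```

For a = 0.998 the outline spans roughly α ∈ [−2M, 7M] with the flattened edge on the prograde side; for small a it closes into a circle of radius √27 M ≈ 5.2 M.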

Reference: Bardeen, J. M. (1973), “Timelike and null geodesics in the Kerr metric”, in Black Holes (Les Astres Occlus), Gordon and Breach, New York, pp. 215–239. https://inis.iaea.org/search/search.aspx?orig_q=RN:6166516

Copyright of this article belongs entirely to our author S. Aman. It may be reused only with proper credit to him or to us.

# After Cracking The “Sum of Cubes” Puzzle For 42, Mathematicians Discover A New Solution For 3 (Maths)

The 21-digit solution to the decades-old problem suggests many more solutions exist.

What do you do after solving the answer to life, the universe, and everything? If you’re mathematicians Drew Sutherland and Andy Booker, you go for the harder problem.

In 2019, Booker, at the University of Bristol, and Sutherland, principal research scientist at MIT, were the first to find the answer to 42. The number has pop culture significance as the fictional answer to “the ultimate question of life, the universe, and everything,” as Douglas Adams famously penned in his novel “The Hitchhiker’s Guide to the Galaxy.” The question that begets 42, at least in the novel, is frustratingly, hilariously unknown.

In mathematics, entirely by coincidence, there exists a polynomial equation for which the answer, 42, had similarly eluded mathematicians for decades. The equation x³ + y³ + z³ = k is known as the sum of cubes problem. While seemingly straightforward, the equation becomes exponentially difficult to solve when framed as a “Diophantine equation” — a problem that stipulates that, for any value of k, the values for x, y, and z must each be whole numbers.

When the sum of cubes equation is framed in this way, for certain values of k, the integer solutions for x, y, and z can grow to enormous numbers. The number space that mathematicians must search across for these numbers is larger still, requiring intricate and massive computations.

Over the years, mathematicians had managed through various means to solve the equation, either finding a solution or determining that a solution must not exist, for every value of k between 1 and 100 — except for 42.
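There is a simple reason some values of k can be ruled out completely: every integer cube is congruent to 0, 1, or 8 modulo 9, so a sum of three cubes can never be congruent to 4 or 5 (mod 9). The short Python check below makes the obstruction explicit (illustrative code, not part of Booker and Sutherland's software); note that 3 and 42 both pass the test.

```python
# Cubes modulo 9 take only the values {0, 1, 8}, so x^3 + y^3 + z^3 can
# never be congruent to 4 or 5 (mod 9) -- those k provably have no solution.
cube_residues = {pow(x, 3, 9) for x in range(9)}
reachable = {(a + b + c) % 9 for a in cube_residues
                             for b in cube_residues
                             for c in cube_residues}

print(sorted(cube_residues))  # [0, 1, 8]
print(sorted(reachable))      # [0, 1, 2, 3, 6, 7, 8] -- 4 and 5 are impossible
print(42 % 9, 3 % 9)          # 6 3 -- neither value is ruled out
```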

In September 2019, Booker and Sutherland, harnessing the combined power of half a million home computers around the world, for the first time found a solution to 42. The widely reported breakthrough spurred the team to tackle an even harder, and in some ways more universal problem: finding the next solution for 3.

Booker and Sutherland have now published the solutions for 42 and 3, along with several other numbers greater than 100, this week in the Proceedings of the National Academy of Sciences.

## Picking up the gauntlet

The first two solutions for the equation x³ + y³ + z³ = 3 might be obvious to any high school algebra student, where x, y, and z can be either 1, 1, and 1, or 4, 4, and -5. Finding a third solution, however, has stumped expert number theorists for decades, and in 1953 the puzzle prompted pioneering mathematician Louis Mordell to ask the question: Is it even possible to know whether other solutions for 3 exist?

“This was sort of like Mordell throwing down the gauntlet,” says Sutherland. “The interest in solving this question is not so much for the particular solution, but to better understand how hard these equations are to solve. It’s a benchmark against which we can measure ourselves.”

As decades went by with no new solutions for 3, many began to believe there were none to be found. But soon after finding the answer to 42, Booker and Sutherland’s method, in a surprisingly short time, turned up the next solution for 3:

569936821221962380720³ + (−569936821113563493509)³ + (−472715493453327032)³ = 3

The discovery was a direct answer to Mordell’s question: Yes, it is possible to find the next solution to 3, and what’s more, here is that solution. And perhaps more universally, the solution, involving gigantic, 21-digit numbers that were not possible to sift out until now, suggests that there are more solutions out there, for 3, and other values of k.
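Because Python integers have arbitrary precision, the new 21-digit identity, together with the two classical solutions, can be verified directly (a simple check, not the search code itself):

```python
# The three known integer solutions to x^3 + y^3 + z^3 = 3:
solutions = [
    (1, 1, 1),
    (4, 4, -5),
    # Booker and Sutherland's 21-digit solution (2019):
    (569936821221962380720, -569936821113563493509, -472715493453327032),
]

for x, y, z in solutions:
    assert x**3 + y**3 + z**3 == 3

print("all three solutions verified")
```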

“There had been some serious doubt in the mathematical and computational communities, because [Mordell’s question] is very hard to test,” Sutherland says. “The numbers get so big so fast. You’re never going to find more than the first few solutions. But what I can say is, having found this one solution, I’m convinced there are infinitely many more out there.”

## A solution’s twist

To find the solutions for both 42 and 3, the team started with an existing algorithm, or a twisting of the sum of cubes equation into a form they believed would be more manageable to solve:

k − z³ = x³ + y³ = (x + y)(x² − xy + y²)

This approach was first proposed by mathematician Roger Heath-Brown, who conjectured that there should be infinitely many solutions for every suitable k. The team further modified the algorithm by representing x + y as a single parameter, d. They then reduced the equation modulo d, keeping only the remainder after division by d, leaving a simplified representation of the problem.

“You can now think of z as a cube root of k, modulo d,” Sutherland explains. “So imagine working in a system of arithmetic where you only care about the remainder modulo d, and we’re trying to compute a cube root of k.”
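Concretely, since d = x + y divides x³ + y³ = k − z³, any valid z must satisfy z³ ≡ k (mod d). A toy illustration of this idea (not the researchers' optimized implementation):

```python
def cube_roots_mod(k, d):
    """All z in [0, d) with z**3 congruent to k (mod d) -- brute force, for illustration."""
    return [z for z in range(d) if pow(z, 3, d) == k % d]

# For a small modulus the candidates are easy to list:
print(cube_roots_mod(3, 10))  # [7], since 7**3 = 343 ends in 3

# The actual 21-digit solution for k = 3 obeys the same divisibility:
x, y, z = 569936821221962380720, -569936821113563493509, -472715493453327032
d = x + y
assert (3 - z**3) % d == 0   # d divides k - z^3, i.e. z^3 is congruent to 3 (mod d)
```

The real search runs this logic in reverse and at enormous scale: for each candidate d, it enumerates the cube roots of k modulo d and checks whether any of them leads to integers x and y.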

With this sleeker version of the equation, the researchers would only need to look for values of d and z that would guarantee finding the ultimate solutions to x, y, and z, for k=3. But still, the space of numbers that they would have to search through would be infinitely large.

So, the researchers optimized the algorithm by using mathematical “sieving” techniques to dramatically cut down the space of possible solutions for d.

“This involves some fairly advanced number theory, using the structure of what we know about number fields to avoid looking in places we don’t need to look,” Sutherland says.

The team also developed ways to efficiently split the algorithm’s search into hundreds of thousands of parallel processing streams. If the algorithm were run on just one computer, it would have taken hundreds of years to find a solution to k=3. By dividing the job into millions of smaller tasks, each independently run on a separate computer, the team could further speed up their search.

In September 2019, the researchers put their plan in play through Charity Engine, a project that can be downloaded as a free app on any personal computer, and which is designed to harness spare home computing power to collectively solve hard mathematical problems. At the time, Charity Engine’s grid comprised over 400,000 computers around the world, and Booker and Sutherland were able to run their algorithm on the network as a test of Charity Engine’s new software platform.

“For each computer in the network, they are told, ‘your job is to look for d’s whose prime factor falls within this range, subject to some other conditions,’” Sutherland says. “And we had to figure out how to divide the job up into roughly 4 million tasks that would each take about three hours for a computer to complete.”

Very quickly, the global grid returned the very first solution to k=42, and just two weeks later, the researchers confirmed they had found the third solution for k=3 — a milestone that they marked, in part, by printing the equation on t-shirts.

The fact that a third solution to k=3 exists suggests that Heath-Brown’s original conjecture was right and that there are infinitely more solutions beyond this newest one. Heath-Brown also predicts the spacing between solutions will grow exponentially, and with it the effort required to find them. For instance, rather than the third solution’s 21-digit values, the fourth solution for x, y, and z will likely involve numbers with a mind-boggling 28 digits.

“The amount of work you have to do for each new solution grows by a factor of more than 10 million, so the next solution for 3 will need 10 million times 400,000 computers to find, and there’s no guarantee that’s even enough,” Sutherland says. “I don’t know if we’ll ever know the fourth solution. But I do believe it’s out there.”

This research was supported, in part, by the Simons Foundation.

Featured image: In September 2019, researchers, harnessing the combined power of half a million home computers around the world, for the first time found a solution to 42. The widely reported breakthrough spurred the team to tackle an even harder, and in some ways more universal problem: finding the next solution for 3. Credit: Christine Daniloff, MIT

Reference: Andrew R. Booker and Andrew V. Sutherland, “On a question of Mordell”, PNAS March 16, 2021 118 (11) e2022377118; https://doi.org/10.1073/pnas.2022377118

Provided by MIT

# Using Artificial Intelligence to Generate 3D Holograms in Real-time (Science and Technology)

A new method called tensor holography could enable the creation of holograms for virtual reality, 3D printing, medical imaging, and more — and it can run on a smartphone.

Despite years of hype, virtual reality headsets have yet to topple TV or computer screens as the go-to devices for video viewing. One reason: VR can make users feel sick. Nausea and eye strain can result because VR creates an illusion of 3D viewing although the user is in fact staring at a fixed-distance 2D display. The solution for better 3D visualization could lie in a 60-year-old technology remade for the digital world: holograms.

Holograms deliver an exceptional representation of the 3D world around us. Plus, they’re beautiful. (Go ahead — check out the holographic dove on your Visa card.) Holograms offer a shifting perspective based on the viewer’s position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.

Researchers have long sought to make computer-generated holograms, but the process has traditionally required a supercomputer to churn through physics simulations, which is time-consuming and can yield less-than-photorealistic results. Now, MIT researchers have developed a new way to produce holograms almost instantly — and the deep learning-based method is so efficient that it can run on a laptop in the blink of an eye, the researchers say.

“People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations,” says Liang Shi, the study’s lead author and a PhD student in MIT’s Department of Electrical Engineering and Computer Science (EECS). “It’s often been said that commercially available holographic displays will be around in 10 years, yet this statement has been around for decades.”

Shi believes the new approach, which the team calls “tensor holography,” will finally bring that elusive 10-year goal within reach. The advance could fuel a spillover of holography into fields like VR and 3D printing.

Shi worked on the study, published today in Nature, with his advisor and co-author Wojciech Matusik. Other co-authors include Beichen Li of EECS and the Computer Science and Artificial Intelligence Laboratory at MIT, as well as former MIT researchers Changil Kim (now at Facebook) and Petr Kellnhofer (now at Stanford University).

## The quest for better 3D

A typical lens-based photograph encodes the brightness of each light wave — a photo can faithfully reproduce a scene’s colors, but it ultimately yields a flat image.

In contrast, a hologram encodes both the brightness and phase of each light wave. That combination delivers a truer depiction of a scene’s parallax and depth. So, while a photograph of Monet’s “Water Lilies” can highlight the painting’s color palette, a hologram can bring the work to life, rendering the unique 3D texture of each brush stroke. But despite their realism, holograms are a challenge to make and share.
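The difference can be stated in a few lines of NumPy (an illustrative sketch, not the paper's code): each sample of a light field is a complex number combining amplitude and phase, and a photograph keeps only the intensity, the squared amplitude, while a hologram keeps both.

```python
import numpy as np

# A 2x2 patch of a light field: per-sample amplitude (brightness) and phase.
amplitude = np.array([[0.2, 0.9],
                      [0.5, 1.0]])
phase = np.array([[0.0,    np.pi / 2],
                  [np.pi, -np.pi / 2]])

field = amplitude * np.exp(1j * phase)   # what a hologram encodes
intensity = np.abs(field) ** 2           # all that a photograph keeps

# Two fields with identical intensity but different phase look the same to a
# camera, yet reconstruct different parallax and depth in a hologram.
print(np.allclose(intensity, amplitude ** 2))  # True
```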

First developed in the mid-1900s, early holograms were recorded optically. That required splitting a laser beam, with half the beam used to illuminate the subject and the other half used as a reference for the light waves’ phase. This reference generates a hologram’s unique sense of depth. The resulting images were static, so they couldn’t capture motion. And they were hard copy only, making them difficult to reproduce and share.

Computer-generated holography sidesteps these challenges by simulating the optical setup. But the process can be a computational slog. “Because each point in the scene has a different depth, you can’t apply the same operations for all of them,” says Shi. “That increases the complexity significantly.” Directing a clustered supercomputer to run these physics-based simulations could take seconds or minutes for a single holographic image. Plus, existing algorithms don’t model occlusion with photorealistic precision. So Shi’s team took a different approach: letting the computer teach physics to itself.

They used deep learning to accelerate computer-generated holography, allowing for real-time hologram generation. The team designed a convolutional neural network — a processing technique that uses a chain of trainable tensors to roughly mimic how humans process visual information. Training a neural network typically requires a large, high-quality dataset, which didn’t previously exist for 3D holograms.
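A single layer of such a network is just a learned filter sliding over the input tensor. The minimal NumPy sketch below uses hypothetical shapes, not the published architecture: it maps a 4-channel RGB-D patch to a 2-channel output, standing in for the amplitude and phase planes of a hologram.

```python
import numpy as np

def conv2d(x, w):
    """Naive 'valid' 2D convolution: x is (C_in, H, W), w is (C_out, C_in, kH, kW)."""
    c_out, c_in, kh, kw = w.shape
    _, h, wd = x.shape
    out = np.zeros((c_out, h - kh + 1, wd - kw + 1))
    for o in range(c_out):
        for i in range(h - kh + 1):
            for j in range(wd - kw + 1):
                # Dot product of one learned filter with one input window.
                out[o, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[o])
    return out

rng = np.random.default_rng(0)
rgbd = rng.random((4, 8, 8))        # hypothetical RGB-D input: 4 channels, 8x8 pixels
kernels = rng.random((2, 4, 3, 3))  # 2 learned filters -> amplitude and phase planes
holo = conv2d(rgbd, kernels)
print(holo.shape)  # (2, 6, 6)
```

Training amounts to adjusting `kernels` (and those of many stacked layers) so the output matches the physics-based hologram for each image pair in the dataset.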

The team built a custom database of 4,000 pairs of computer-generated images. Each pair matched a picture — including color and depth information for each pixel — with its corresponding hologram. To create the holograms in the new database, the researchers used scenes with complex and variable shapes and colors, with the depth of pixels distributed evenly from the background to the foreground, and with a new set of physics-based calculations to handle occlusion. That approach resulted in photorealistic training data. Next, the algorithm got to work.

By learning from each image pair, the tensor network tweaked the parameters of its own calculations, successively enhancing its ability to create holograms. The fully optimized network operated orders of magnitude faster than physics-based calculations, an efficiency that surprised even the team.

“We are amazed at how well it performs,” says Matusik. In mere milliseconds, tensor holography can craft holograms from images with depth information — which is provided by typical computer-generated images and can be calculated from a multicamera setup or LiDAR sensor (both are standard on some new smartphones). This advance paves the way for real-time 3D holography. What’s more, the compact tensor network requires less than 1 MB of memory. “It’s negligible, considering the tens and hundreds of gigabytes available on the latest cell phone,” he says.

Video: Towards Real-time Photorealistic 3D Holography with Deep Neural Networks © MIT

The research “shows that true 3D holographic displays are practical with only moderate computational requirements,” says Joel Kollin, a principal optical architect at Microsoft who was not involved with the research. He adds that “this paper shows marked improvement in image quality over previous work,” which will “add realism and comfort for the viewer.” Kollin also hints at the possibility that holographic displays like this could even be customized to a viewer’s ophthalmic prescription. “Holographic displays can correct for aberrations in the eye. This makes it possible to produce a display image sharper than what the user could see with contacts or glasses, which only correct for low-order aberrations like focus and astigmatism.”

## “A considerable leap”

Real-time 3D holography would enhance a slew of systems, from VR to 3D printing. The team says the new system could help immerse VR viewers in more realistic scenery, while eliminating eye strain and other side effects of long-term VR use. The technology could be easily deployed on displays that modulate the phase of light waves. Currently, most affordable consumer-grade displays modulate only brightness, though the cost of phase-modulating displays would fall if widely adopted.

Three-dimensional holography could also boost the development of volumetric 3D printing, the researchers say. This technology could prove faster and more precise than traditional layer-by-layer 3D printing, since volumetric 3D printing allows for the simultaneous projection of the entire 3D pattern. Other applications include microscopy, visualization of medical data, and the design of surfaces with unique optical properties.

“It’s a considerable leap that could completely change people’s attitudes toward holography,” says Matusik. “We feel like neural networks were born for this task.”

The work was supported, in part, by Sony.

Featured image: MIT researchers have developed a way to produce holograms almost instantly. They say the deep learning-based method is so efficient that it could run on a smartphone. Credit: MIT News, with images from iStockphoto

References: (1) Shi, L., Li, B., Kim, C. et al., “Towards real-time photorealistic 3D holography with deep neural networks”, Nature 591, 234–239 (2021). https://doi.org/10.1038/s41586-020-03152-0 (2) http://cgh.csail.mit.edu/

Provided by MIT

# Dietary Fats Interact With Grape Tannins to Influence Wine Taste (Food)

Wine lovers recognize that a perfectly paired wine can make a delicious meal taste even better, but the reverse is also true: certain foods can influence the flavors of wines. Now, researchers reporting in ACS’ Journal of Agricultural and Food Chemistry have explored how lipids — fatty molecules abundant in cheese, meats, vegetable oils and other foods — interact with grape tannins, masking the undesirable flavors of the wine compounds.

Tannins are polyphenolic compounds responsible for the bitterness and astringency of red wines. Wine tasters have noticed that certain foods reduce these sensations, improving the flavor of a wine, but scientists aren’t sure why. Some studies have indicated that tannins interact with lipids at the molecular level. In foods, lipids are found as fat globules dispersed in liquids or solids. Julie Géan and colleagues wanted to investigate how tannins influence the size and stability of lipid droplets in an emulsion. They also wondered how the prior consumption of vegetable oils would impact the taste of tannins for human volunteers.

The researchers made an oil-in-water emulsion using olive oil, water and a phospholipid emulsifier. Then, they added a grape tannin, called catechin, and studied the lipids in the emulsion with various biophysical techniques. The team found that the tannin inserted into the layer of emulsifier that surrounded the oil droplets, causing larger droplets to form. In taste tests, volunteers indicated that consuming a spoonful of rapeseed, grapeseed or olive oil before tasting a tannin solution reduced the astringency of the compounds. Olive oil had the greatest effect, causing the tannins to be perceived as fruity instead of astringent. Combining the biophysical and sensory results, the researchers concluded that tannins can interact with oil droplets in the mouth, making them less available to bind to saliva proteins and cause astringency.

The authors acknowledge funding from the Conseil Interprofessionnel du Vin de Bordeaux.

“New Insights into Wine Taste: Impact of Dietary Lipids on Sensory Perceptions of Grape Tannins”
Journal of Agricultural and Food Chemistry

Provided by American Chemical Society

The American Chemical Society (ACS) is a nonprofit organization chartered by the U.S. Congress. ACS’ mission is to advance the broader chemistry enterprise and its practitioners for the benefit of Earth and all its people. The Society is a global leader in promoting excellence in science education and providing access to chemistry-related information and research through its multiple research solutions, peer-reviewed journals, scientific conferences, eBooks and weekly news periodical Chemical & Engineering News. ACS journals are among the most cited, most trusted and most read within the scientific literature; however, ACS itself does not conduct chemical research. As a specialist in scientific information solutions (including SciFinder® and STN®), its CAS division powers global research, discovery and innovation. ACS’ main offices are in Washington, D.C., and Columbus, Ohio.


# Regular Meat Consumption Linked With a Wide Range of Common Diseases (Food)

Regular meat consumption is associated with a range of diseases that researchers had not previously considered, according to a large, population-level study conducted by a team at the University of Oxford.

The results associate regular meat intake with a higher risk of various diseases, including heart disease, pneumonia and diabetes, but a lower risk of iron-deficiency anaemia. The study is published today in BMC Medicine.

Consistent evidence has shown that excess consumption of red meat and processed meat (such as bacon and sausages) may be associated with an increased likelihood of developing colorectal cancer. But up to now, it was not clear whether high meat consumption in general might raise or lower the risk of other, non-cancerous diseases.

This has now been investigated in a large cohort study, which used data from almost 475,000 UK adults who were monitored for 25 major causes of non-cancerous hospital admission. At the start of the study, participants completed a questionnaire assessing their dietary habits (including meat intake), after which they were followed up for an average period of eight years.

Overall, participants who consumed unprocessed red meat and processed meat regularly (three or more times per week) were more likely than low meat-eaters to smoke, drink alcohol, be overweight or obese, and eat less fruit and vegetables, fibre, and fish.

However, after taking these factors into account, the results indicated that:

• Higher consumption of unprocessed red meat and processed meat combined was associated with higher risks of ischaemic heart disease, pneumonia, diverticular disease, colon polyps, and diabetes. For instance, every 70 g higher red meat and processed meat intake per day was associated with a 15% higher risk of ischaemic heart disease and a 30% higher risk of diabetes.
• Higher consumption of poultry meat was associated with higher risks of gastro-oesophageal reflux disease, gastritis and duodenitis, diverticular disease, gallbladder disease, and diabetes. Every 30g higher poultry meat intake per day was associated with a 17% higher risk of gastro-oesophageal reflux disease and a 14% greater risk of diabetes.
• Most of these positive associations were reduced when body mass index (BMI, a measure of body weight) was taken into account, suggesting that the higher average body weight of regular meat eaters may partly drive these associations.
• The team also found that higher intakes of unprocessed red meat and poultry meat were associated with a lower risk of iron deficiency anaemia. The risk was 20% lower with every 50g higher per day intake of unprocessed red meat and 17% lower with every 30g higher per day intake of poultry meat. A higher intake of processed meat was not associated with the risk of iron deficiency anaemia.
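Figures like “15% higher risk per 70 g/day” are typically estimated from a log-linear dose-response model, which means they scale multiplicatively rather than additively. The sketch below illustrates that assumption (ours, not a calculation from the paper):

```python
def scaled_risk(hr_per_unit, intake_g, unit_g):
    """Hazard ratio at a given daily intake, assuming a log-linear dose-response."""
    return hr_per_unit ** (intake_g / unit_g)

# 15% higher ischaemic heart disease risk per 70 g/day of red + processed meat:
print(round(scaled_risk(1.15, 70, 70), 2))    # 1.15 -- one unit
print(round(scaled_risk(1.15, 140, 70), 2))   # 1.32 -- two units compound, not add
```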

The research team suggest that unprocessed red meat and processed meat may increase the risk of ischaemic heart disease because they are major dietary sources of saturated fatty acids. These can increase low-density lipoprotein (LDL) cholesterol, an established risk factor for ischaemic heart disease.

Lead author Dr Keren Papier, from the Nuffield Department of Population Health at the University of Oxford, said: ‘We have long known that unprocessed red meat and processed meat consumption is likely to be carcinogenic and this research is the first to assess the risk of 25 non-cancerous health conditions in relation to meat intake in one study.’

‘Additional research is needed to evaluate whether the differences in risk we observed in relation to meat intake reflect causal relationships, and if so the extent to which these diseases could be prevented by decreasing meat consumption. The result that meat consumption is associated with a lower risk of iron-deficiency anaemia, however, indicates that people who do not eat meat need to be careful that they obtain enough iron, through dietary sources or supplements.’

The World Cancer Research Fund recommends that people should limit red meat consumption to no more than three portions per week (around 350–500g cooked weight in total), and processed meat should be eaten rarely, if at all.

This study was based on 474,985 middle-aged adults, who were originally recruited into the UK Biobank study between 2006 and 2010, and were followed up for this study until 2017. These participants were invited to complete a dietary questionnaire with 29 questions on diet, which assessed the consumption frequency of a range of foods. Participants were then categorised into subgroups based on their meat intake: 0–1 times/week, 2 times/week, 3–4 times/week, and 5 or more times/week. The information on each participant’s meat intake was linked with hospital admission and mortality data from the NHS Central Registers.

Featured image: New study finds links between regular meat consumption and a wide range of common diseases. Image credit: Shutterstock

Reference: Papier, K., Fensom, G.K., Knuppel, A. et al., “Meat consumption and risk of 25 common conditions: outcome-wide analyses in 475,000 men and women in the UK Biobank study”, BMC Med 19, 53 (2021). https://doi.org/10.1186/s12916-021-01922-9

Provided by University of Oxford

# Neanderthals Disappeared from North West Europe Earlier Than Thought (Archeology)

Neanderthal remains from Belgium are thousands of years older than previously reported, a new paper from a multidisciplinary team of international researchers reveals.

Neanderthal remains from Belgium have long puzzled scientists. Fossil remains from the key site of Spy Cave in Belgium suggested a date of approximately 37,000 years ago, which would place them among the latest surviving Neanderthals in Europe. But sample contamination might have affected these estimates.

Now, a team based in Oxford’s Radiocarbon Accelerator Unit has re-dated Neanderthal specimens from Spy Cave. Most of the dates obtained in this new study are much older than those obtained previously on the same bone samples — up to 5,000 years older in certain cases.

According to the paper, ‘Re-evaluating the timing of Neanderthal disappearance in Northwest Europe’, this suggests Neanderthals disappeared from the region 44,200–40,600 years ago, much earlier than previously estimated.

Oxford Professor Tom Higham says, ‘Dating is crucial in archaeology, without a reliable framework of chronology we can’t really be confident in understanding the relationships between Neanderthals and Homo sapiens as we moved into Europe 45,000 years ago and they began to disappear. That’s why these methods are so exciting, because they provide much more accurate and reliable dates. The results suggest again that Homo sapiens and Neanderthals probably overlapped in different parts of Europe and there must have been opportunities for possible cultural and genetic exchange.’

Lead author, Dr Thibaut Devièse says, ‘The new chemistry methods we have applied in the case of the Spy and other Belgian sites provide the only means by which we can decontaminate these key Neanderthal bones for dating and check that contaminants have been fully removed. This gives us confidence in the new ages we obtained for these important specimens.’

Grégory Abrams, of the Scladina Cave Archaeological Centre in Belgium says, ‘We also (re)dated Neanderthal specimens from two additional Belgian sites, Fonds-de-Forêt and Engis, and obtained ages similar to those from Spy. Dating all these Belgian specimens was very exciting as they played a major role in the understanding and the definition of Neanderthals. Almost two centuries after the discovery of the Neanderthal child of Engis, we were able to provide a reliable age.’

The team found that a Neanderthal scapula from Spy Cave, which had previously produced very recent dates (around 28,000 years ago), was heavily contaminated with modern bovine DNA. These results suggest that the bone had been preserved with a glue prepared from cattle bones.

The team used an advanced method for radiocarbon dating fossil bones. Using liquid chromatography separation, they were able to extract a single amino acid from the Neanderthal remains for dating. This so-called ‘compound-specific’ approach allows scientists to reliably date the bones and exclude carbon from contaminants such as those from the glue that was applied to the fossils. These contaminants have plagued previous attempts to reliably date the Belgian Neanderthals because their presence resulted in dates that were much too young.
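The sensitivity to contamination is easy to quantify: a conventional radiocarbon age is t = −8033 · ln(F¹⁴C) years (8033 is the Libby mean life), and because so little ¹⁴C survives in a 40,000-year-old bone, even a small admixture of modern carbon drags the apparent age down dramatically. A sketch with illustrative numbers (not the paper's measurements):

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; conventional radiocarbon ages use the Libby half-life

def radiocarbon_age(f14c):
    """Conventional radiocarbon age from the measured 14C fraction (F14C)."""
    return -LIBBY_MEAN_LIFE * math.log(f14c)

def contaminated_age(true_age, modern_fraction):
    """Apparent age when a fraction of modern carbon (F14C = 1) contaminates the sample."""
    f_true = math.exp(-true_age / LIBBY_MEAN_LIFE)
    f_mix = (1 - modern_fraction) * f_true + modern_fraction * 1.0
    return radiocarbon_age(f_mix)

# A 40,000-year-old bone with just 2% modern-carbon contamination (e.g. from glue)
# appears roughly 29,000 years old -- more than 10,000 years too young.
print(round(contaminated_age(40000, 0.02)))
```

This is why removing contaminants, as the compound-specific method does, shifts the dates so far back.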

The results also highlight the need for robust pre-treatment methods when dating Palaeolithic human remains to minimize biases due to contamination, according to the authors. The team is now analysing archaeological evidence, such as bone tools, to further refine our understanding of the cultural transition between Neanderthals and Homo sapiens in this region.

Featured image: Maxilla and mandible assemblage of a late Neanderthal from Spy Cave. Illustration by Patrick Semal © RBINS (co-author on this paper).

Reference: Thibaut Devièse, Grégory Abrams, Mateja Hajdinjak, Stéphane Pirson, Isabelle De Groote, Kévin Di Modica, Michel Toussaint, Valentin Fischer, Dan Comeskey, Luke Spindler, Matthias Meyer, Patrick Semal, Tom Higham, “Reevaluating the timing of Neanderthal disappearance in Northwest Europe”, Proceedings of the National Academy of Sciences Mar 2021, 118 (12) e2022466118; DOI: 10.1073/pnas.2022466118

Provided by University of Oxford

# The Skeleton of the Malaria Parasite Reveals its Secrets (Biology)

Research teams from UNIGE have discovered that the cytoskeleton of the malaria parasite comprises a vestigial form of an organelle called the conoid, previously thought to be absent from this species, which could play a role in host invasion.

Plasmodium is the parasite causing malaria, one of the deadliest parasitic diseases. The parasite requires two hosts —the Anopheles mosquito and the human— to complete its life cycle and goes through different forms at each stage of its life cycle. Transitioning from one form to the next involves a massive reorganisation of the cytoskeleton. Two teams from the University of Geneva (UNIGE) have shed new light on the cytoskeleton organisation in Plasmodium. Their research, published in PLOS Biology, details the organisation of the parasite’s skeleton at an unprecedented scale, adapting a recently developed technique called expansion microscopy. Cells are “inflated” before imaging, providing access to more structural details, at a nanometric scale. The study identifies traces of an organelle called “conoid”, which was thought to be lacking in this species despite its crucial role in host invasion of closely related parasites.

The cytoskeleton, or cell skeleton, consists of a network of several types of filaments, including actin and tubulin. It confers rigidity to the cell, allows the attachment or movement of organelles and molecules inside the cell, as well as cell deformations. As the parasite transitions between developmental stages, its cytoskeleton undergoes repeated, drastic reorganisations. In particular, Plasmodium needs a very specific cytoskeleton in order to move and penetrate the membrane barriers of its host cells, two processes that are central to the pathogenesis of malaria-causing parasites. “Due to the very small size of Plasmodium—up to 50 times smaller than a human cell—it is a technical challenge to view its cytoskeleton!” begins Eloïse Bertiaux, a researcher at UNIGE and the first author of the study. “That is why we adapted our expansion microscopy protocol, which consists of inflating the biological sample while keeping its original shape, so it can be observed at a resolution that has never been attained before”, continues Virginie Hamel, a researcher at the Department of Cell Biology of the Faculty of Sciences of UNIGE and co-leader of the study.

## A Vestigial Form of an Organelle

The researchers observed the parasite at the ookinete stage, the form responsible for the invasion of the mosquito midgut, an essential step for the dissemination of malaria. A structure made of tubulin was visible at the tip of the parasite. This structure is similar to a conoid, an organelle involved in host cell invasion, in related Apicomplexa parasites. “The structure observed in Plasmodium seems, however, divergent and reduced compared with the well-described conoid of Toxoplasma, the parasite causing toxoplasmosis. We still need to determine whether this remnant conoid is also important for host cell invasion of Plasmodium”, explains Mathieu Brochet, a professor at the Department of Microbiology and Molecular Medicine of the Faculty of Medicine of UNIGE.

## Cytoskeleton Under the Microscope

The discovery of this vestigial conoid highlights the power of expansion microscopy, which can be used to view cytoskeletal structures at the nanoscale without the need for specialised microscopes. Used in combination with electron microscopy and super-resolution microscopy approaches, this method adds molecular details to the available structural information, paving the way for more in-depth studies of the cytoskeleton and its molecular organisation. This will allow us to gain a better understanding of how Plasmodium invades its host cells, a process that is essential for the pathogenesis of this parasite.

Featured image: Plasmodium at the ookinete stage viewed by expansion microscopy. The image shows the cytoskeleton of the pathogen following the labelling of tubulin. The conoid is the ring visible at the upper tip of the cell. © UNIGE/HAMEL

Reference: Bertiaux E, Balestra AC, Bournonville L, Louvel V, Maco B, Soldati-Favre D, et al. (2021) Expansion microscopy provides new insights into the cytoskeleton of malaria parasites including the conservation of a conoid. PLoS Biol 19(3): e3001020. https://doi.org/10.1371/journal.pbio.3001020

Provided by University of Geneva

# Planets Form Quickly Around Low-Mass Stars (Planetary Science)

According to Brianna Zawadzki and colleagues, planets form quickly around low-mass stars (of 0.2 M☉), long before the gas disc dissipates.

NASA’s TESS mission is expected to discover hundreds of M dwarf planets. However, few studies focus on how planets form around low-mass stars. Now, Zawadzki and colleagues aim to better characterize the formation process of M dwarf planets to fill this gap and aid in the interpretation of TESS results.

Protoplanetary discs consist of gas and dust that provide the initial conditions for planet formation. Disc properties vary with the spectral type of the central star, implying that planet formation is probably not uniform across all spectral types. This motivates studies of planet formation around M dwarfs.

Regardless of stellar spectral type, the first stage of planet formation is the coagulation of dust grains into larger particles. As dust grains grow larger, collision speeds increase. Close to the star, particle growth is likely to be fragmentation-limited, so that the particle size is set by the fragmentation speed of the grain material.

Because these parameters vary with location in the disc, the maximum grain size depends on where in the disc the grains form. The inner disc is largely dominated by fragmentation, leading to a dust surface density that goes as 𝑟^−1.5, while the outer disc is dominated by radial drift, which cannot sustain relative velocities high enough for fragmentation over the disc lifetime. The simulations of Zawadzki and colleagues focus on planet formation in the inner disc of an M dwarf, so the study includes models in which the embryos are initially distributed according to an 𝑟^−1.5 surface density profile. However, the distribution of solids may be rearranged as dust grains are converted into planetesimals. In addition to the fragmentation-limited case, they therefore also considered a case in which the solid distribution of embryos mirrors that of the turbulent gas disc, with embryos initially distributed according to an 𝑟^−0.6 surface density profile. They investigated whether the resulting planetary systems can be used to distinguish between these two initial surface density profiles. One distinguishing feature of their study is that the simulations take the early stellar evolution of the M dwarf into account, including calculations for the changing stellar mass accretion rate and disc parameters. This is particularly important for M dwarfs because of the rate and length of their contraction onto the main sequence, but it is not commonly accounted for in studies of M dwarf planet formation.
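
The practical difference between these two profiles can be sketched with a quick calculation. The snippet below is an illustration, not taken from the paper: it integrates a surface density Σ ∝ 𝑟^−p in closed form to compare how much of the solid mass sits close to the star under each profile; the inner cutoff and the units are arbitrary assumptions.

```python
import math

def enclosed_mass(r, p, r_in=0.01):
    """Relative solid mass inside radius r for Sigma ∝ r^-p (p < 2).

    Closed form of the integral of 2*pi*r'*Sigma(r') from r_in to r;
    units and the inner cutoff r_in are arbitrary choices for illustration.
    """
    return 2.0 * math.pi / (2.0 - p) * (r ** (2.0 - p) - r_in ** (2.0 - p))

# Fraction of the solids (out to r = 1) that lies inside r = 0.1:
frac_steep = enclosed_mass(0.1, 1.5) / enclosed_mass(1.0, 1.5)    # fragmentation-limited
frac_shallow = enclosed_mass(0.1, 0.6) / enclosed_mass(1.0, 0.6)  # gas-like profile
print(f"mass fraction inside r = 0.1: steep {frac_steep:.2f}, shallow {frac_shallow:.2f}")
```

In this toy setup, the steep 𝑟^−1.5 profile concentrates roughly a quarter of the solids in the innermost tenth of the disc, while the shallower gas-like profile leaves only a few per cent there, which is why one might expect the choice of initial profile to leave an imprint on the final planetary systems.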

Aerodynamic drag causes the dust to drift radially inward in the turbulent disc. As the solids continue to orbit, this radial drift combined with the turbulent motions in the disc cause particles to collide and either stick or fragment, depending on the collision speed and grain size. Particle growth stalls at around cm-size pebbles, meaning some other process is likely needed for continued growth. Currently, the most promising mechanism to concentrate particles is a radial convergence of particle drift known as the streaming instability. The streaming instability causes particles to concentrate into long, dense azimuthal filaments which can gravitationally collapse into planetesimals. Another aerodynamic process that may play an important role is the concentration of particles inside turbulent eddies.

As planetesimals collide to form embryos, they experience runaway growth followed by oligarchic growth. During runaway growth, the mass growth of planetesimals is given by Ṁ/𝑀 ∝ 𝑀^(1/3). When a small number of embryos become large enough to dominate the neighboring planetesimals, the system enters a stage of oligarchic growth. During this stage, growth is slower (Ṁ/𝑀 ∝ 𝑀^(−1/3)) and proceeds until the oligarchs reach their isolation mass.
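
As a rough illustration of why the two regimes behave so differently (a toy integration, not the paper's N-body model), one can evolve the quoted growth laws directly: Ṁ/𝑀 ∝ 𝑀^(1/3) means d𝑀/dt ∝ 𝑀^(4/3), whereas Ṁ/𝑀 ∝ 𝑀^(−1/3) means d𝑀/dt ∝ 𝑀^(2/3). The rate constant, time units, and Euler step below are arbitrary assumptions.

```python
def grow(exponent, m0=1.0, k=1.0, dt=1e-3, steps=1000):
    """Euler-integrate dM/dt = k * M**exponent (arbitrary units)."""
    m = m0
    for _ in range(steps):
        m += k * m ** exponent * dt
    return m

# Runaway growth: dM/dt ∝ M^(4/3), so Mdot/M ∝ M^(1/3) rises as M grows.
runaway = grow(4.0 / 3.0)
# Oligarchic growth: dM/dt ∝ M^(2/3), so Mdot/M ∝ M^(-1/3) falls as M grows.
oligarchic = grow(2.0 / 3.0)

print(f"final mass, runaway: {runaway:.2f}; oligarchic: {oligarchic:.2f}")
```

Even in this toy setup the runaway law outpaces the oligarchic one from identical starting masses, matching the qualitative picture above: growth accelerates until a few embryos dominate, then slows as the oligarchs approach their isolation mass.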

The gas in the disc exerts torques on the embryos which can cause planets to migrate. There are two principal modes of planet migration:

Type I: Small planets embedded in the disc experience Type I migration. In it, the planet forms spiral density waves associated with its Lindblad resonances, and causes nearly co-orbital gas to follow horseshoe orbits. The associated overdensities lead to negative Lindblad torques and positive co-rotation torques, respectively.

Type II: When a planet is massive enough to open a gap in the disc, the co-rotation torque is all but eliminated and the Lindblad torque is reduced. The planet becomes coupled to the viscous evolution of the disc and orbital migration greatly slows down.

In the current work, Zawadzki and colleagues focus on Type I migration.

“We showed that planets form quickly around 0.2 M☉ stars, long before the gas disc dissipates. When planets become locked into resonant chains, they can be pushed far into the disc cavity via Type I migration of the outermost planet in the chain.”

They used ten sets of N-body planet formation simulations, which vary in whether a gas disc is present, in the initial range of embryo semi-major axes, and in the initial solid surface density profile. Each simulation begins with 147 equal-mass embryos around a 0.2 solar mass star and runs for 100 Myr. Their simulations lead them to 4 main conclusions:

1) Planets form quickly around 0.2 M☉ stars, with most collisions occurring within the first million years (1 Myr), long before the gas disc dissipates. When planets become locked into resonant chains, they can be pushed far into the disc cavity via Type I migration of the outermost planet in the chain.

2) Planet formation reshapes the solid distribution and destroys memory of initial conditions. The solid surface density of the final planets does not appear to be related to the initial distribution of embryos. Thus, it may not be possible to infer the initial distribution of solids from present-day observations of planetary systems.

3) The presence of a gas disc reduces the final number of planets relative to a gas-free environment and causes planets to migrate inward.

4) Roughly a quarter of planetary systems experience their final giant impact inside the gas disc. Because these planets largely form inside the gas disc, they may retain an atmosphere even after migrating into the disc cavity.

They also found that systems that form in the presence of gas tend to be more stable than those that form without it.

They concluded that future models that span a larger range of stellar masses combined with new TESS observations will further develop our understanding of planet formation around M dwarfs.

Reference: Brianna Zawadzki, Daniel Carrera, Eric Ford, “Rapid Formation of Super-Earths Around Low-Mass Stars”, arXiv preprint, pp. 1–17, 2021. https://arxiv.org/abs/2103.01239

Copyright of this article belongs entirely to our author S. Aman. It may be reused only with proper credit, either to him or to us.

# How Diet Plays a Role in Colon Health (Food)

Colorectal cancer is the second most commonly diagnosed cancer in women and the third most common in men worldwide. The colon is the final part of your digestive tract. Since it’s part of the digestive system, the food you eat plays an important role in the health of your colon.

A recent study suggests that dietary patterns and colorectal cancer are linked. The data shows that high intakes of red meat, processed meat, alcohol and other foods can increase your risk.

Want to keep your colon healthy? Use these two diet tips:

1. Eat a nutrient-dense diet
2. Include more fiber-rich foods

“Eating a nutrient-dense, high-fiber diet not only keeps the walls of your colon strong, but it can also prevent hemorrhoids or pouches in your colon,” says Kate Zeratsky, a Mayo Clinic registered dietitian nutritionist. “It also may prevent colon polyps and, potentially, cancer.”

A typical American diet is low in nutrient-density with larger portions of processed meats and refined grains, such as breads and cereals.

“Our Western diet tends to be lower in nutritional value,” says Zeratsky.

Fiber-rich foods, like fruits and veggies, whole grains, nuts and seeds, are also more nutrient-dense. And the fiber keeps you regular and controls the amount of bacteria in your colon.

“The nutrients in those foods also may be beneficial in preventing digestive diseases as well as other chronic diseases, such as diabetes, and help you manage your weight,” says Zeratsky.

And when increasing fiber in your diet, do it gradually, and drink plenty of water.

Reference: Sajesh K. Veettil, Tse Yee Wong, Yee Shen Loo et al., “Role of Diet in Colorectal Cancer Incidence: Umbrella Review of Meta-analyses of Prospective Observational Studies”, JAMA Netw Open. 2021;4(2):e2037341. doi: 10.1001/jamanetworkopen.2020.37341

Provided by Mayo Clinic