Periodontal disease, also known as gum disease, is a serious infection that affects nearly 50 percent of Americans aged 30 years and older. If left unchecked, periodontal disease can destroy the jawbone and lead to tooth loss. The disease is also associated with higher risk of diabetes and cardiovascular disease.
The current treatment for periodontal disease involves opening the infected gum flaps and adding bone grafts to strengthen the teeth. But in new research published recently in the journal Frontiers in Immunology, Forsyth Institute scientists have discovered that a specific type of molecule may stimulate stem cells to regenerate, reversing the inflammation caused by periodontal disease. This finding could lead to the development of new therapeutics to treat a variety of systemic diseases that are characterized by inflammation in the body.
For the study, Dr. Alpdogan Kantarci, his PhD student Dr. Emmanuel Albuquerque, and their team removed stem cells from previously extracted wisdom teeth and placed the stem cells onto petri dishes. The researchers then created a simulated inflammatory periodontal disease environment in the petri dishes. Next, they added two specific types of synthetic molecules called Maresin-1 (MaR1) and Resolvin-E1 (RvE1), both specialized pro-resolving lipid mediators derived from omega-3 fatty acids. The scientists found that MaR1 and RvE1 stimulated the stem cells to regenerate even under the inflammatory conditions.
“Both Maresin-1 and Resolvin-1 reprogrammed the cellular phenotype of the human stem cells, showing that even in response to inflammation, it is possible to boost capacity of the stem cells so they can become regenerative,” said Dr. Kantarci, Associate Member of Staff at the Forsyth Institute.
This finding is important because it allows scientists to identify the specific protein pathways involved in inflammation. Those same protein pathways are consistent across many systemic diseases, including periodontal disease, diabetes, heart disease, dementia, and obesity.
“Now that we understand how these molecules stimulate the differentiation of stem cells in different tissues and reverse inflammation at a critical point in time, the mechanism we identified could one day be used for building complex organs,” said Dr. Kantarci. “There is exciting potential for reprogramming stem cells to focus on building tissues.”
MIT researchers find that blocking the expression of the genes XPA and MK2 enhances the tumor-shrinking effects of platinum-based chemotherapies in p53-mutated cancers.
Cancer therapies that target specific molecular defects arising from mutations in tumor cells are currently the focus of much anticancer drug development. However, due to the absence of good targets and to the genetic variation in tumors, platinum-based chemotherapies are still the mainstay in the treatment of many cancers, including those that carry a mutated version of the tumor suppressor gene p53. P53 is mutated in a majority of cancers, which enables tumor cells to develop resistance to platinum-based chemotherapies. But these defects can still be exploited: targeting a second gene can selectively take down the tumor cell, leveraging a phenomenon known as synthetic lethality.
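The logic of synthetic lethality is simple enough to sketch as a toy model (a deliberate caricature for illustration, not the paper's method): a cell withstands chemotherapy-induced DNA damage only if at least one damage-response pathway still functions, so a second hit that is harmless on its own becomes lethal in combination with the p53 mutation.

```python
# Toy model of synthetic lethality. A cell survives chemotherapy-induced DNA
# damage if at least one damage-response pathway (p53, or its backup MK2,
# described below) is still working. The rule is a caricature for illustration.
def survives_chemo(p53_functional, mk2_functional):
    return p53_functional or mk2_functional

# With an MK2-blocking treatment applied to everything, only p53-mutant tumor
# cells lose both pathways -- the basis of the selective "synthetic lethal" kill.
print(survives_chemo(p53_functional=True, mk2_functional=False))   # normal cell survives
print(survives_chemo(p53_functional=False, mk2_functional=False))  # p53-mutant tumor dies
```

The selectivity comes from the asymmetry: healthy cells keep their intact p53 pathway, so blocking the backup alone does not kill them.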
Focused on understanding and targeting cell signaling in cancer, the laboratory of Michael Yaffe, the David H. Koch Professor of Science and director of the MIT Center for Precision Cancer Medicine, seeks to identify pathways that are synthetic lethal with each other, and to develop therapeutic strategies that capitalize on that relationship. His group has already identified MK2 as a key signaling pathway in cancer and a partner to p53 in a synthetic lethal combination.
Now, working with a team of fellow researchers at MIT’s Koch Institute for Integrative Cancer Research, Yaffe’s lab added a new target, the gene XPA, to the combination. Appearing in Nature Communications, the work demonstrates the potential of “augmented synthetic lethality,” where depletion of a third gene product enhances a combination of targets already known to show synthetic lethality. Their work demonstrates not only the effectiveness of teaming up cancer targets, but also the collaborative teamwork for which the Koch Institute is known.
P53 serves two functions: first, to give cells time to repair DNA damage by pausing cell division, and second, to induce cell death if DNA damage is too severe. Platinum-based chemotherapies work by inducing enough DNA damage to initiate the cell’s self-destruct mechanism. In their previous work, the Yaffe lab found that when cancer cells lose p53, they can re-wire their signaling circuitry to recruit MK2 as a backup pathway. However, MK2 restores only the ability to orchestrate DNA damage repair, not the ability to initiate cell death.
The Yaffe group reasoned that targeting MK2, which is only recruited when p53 function is absent, would be a unique way to create a synthetic lethality that specifically kills p53-defective tumors, by blocking their ability to coordinate DNA repair after chemotherapy. Indeed, the Yaffe lab was able to show, in pre-clinical models of non-small cell lung cancer with p53 mutations, that silencing MK2 in combination with chemotherapy treatment caused the tumors to shrink significantly.
Although promising, MK2 has proven difficult to drug. Attempts to create target-specific, clinically viable small-molecule MK2 inhibitors have so far been unsuccessful. Researchers led by co-lead author Yi Wen Kong, then a postdoc in the Yaffe lab, have been exploring the use of RNA interference, via small interfering RNA (siRNA), to stop expression of the MK2 gene, but siRNA’s tendency to degrade rapidly in the body presents new challenges.
Enter the potential of nanomaterials, and a team of nanotechnology experts in the laboratory of Paula Hammond, the David H. Koch Professor of Engineering, head of the MIT Department of Chemical Engineering, and the Yaffe group’s upstairs neighbor. There, Kong found a willing collaborator in then-postdoc Erik Dreaden, whose team had developed a delivery vehicle known as a nanocomplex to protect siRNA until it reaches a cancer cell. In studies of non-small cell lung cancer models in which mice were given the MK2-targeting nanocomplexes and standard chemotherapy, the combination clearly enhanced tumor cell response to chemotherapy. However, the overall increase in survival, while significant, was relatively modest.
Meanwhile, Kong had identified XPA, a key protein involved in another DNA repair pathway called NER, as a potential addition to the MK2-p53 synthetic lethal combination. As with MK2, efforts to target XPA using traditional small-molecule drugs have not yet proven successful, and RNA interference emerged as the team’s tool of choice. The flexible and highly controllable nature of the Hammond group’s nanomaterials assembly technologies allowed Dreaden to incorporate siRNAs against both XPA and MK2 into the nanocomplexes.
Kong and Dreaden tested these dual-targeted nanocomplexes against established tumors in an immunocompetent, aggressive lung cancer model developed in collaboration between the laboratories of professor of biology Michael Hemann and Koch Institute Director Tyler Jacks. They let the tumors grow even larger before treatment than they had in their previous study, thus raising the bar for therapeutic intervention.
Tumors in mice treated with the dual-targeted nanocomplexes and chemotherapy were reduced by up to 20-fold over chemotherapy alone, and similarly improved over single-target nanocomplexes and chemotherapy. Mice treated with this regimen survived three times longer than with chemotherapy alone, and much longer than mice receiving nanocomplexes targeting MK2 or XPA alone.
Overall, these data demonstrate that identification and therapeutic targeting of augmented synthetic lethal relationships — in this case between p53, MK2 and XPA — can produce a safe and highly effective cancer therapy by re-wiring multiple DNA damage response pathways, the systemic inhibition of which may otherwise be toxic.
The nanocomplexes are modular and can be adapted to carry other siRNA combinations or for use against other cancers in which this augmented synthetic lethality combination is relevant. Beyond application in lung cancer, the researchers — including Kong, who is now a research scientist at the Koch Institute, and Dreaden, who is now an assistant professor at Georgia Tech and Emory School of Medicine — are working to test this strategy for use against ovarian and other cancers.
Additional collaborations and contributions were made to this project by the laboratories of Koch Institute members Stephen Lippard and Omer Yilmaz, the Eisen and Chang Career Development Professor.
This work was supported in part by a Mazumdar-Shaw International Oncology Fellowship, a postdoctoral fellowship from the S. Leslie Misrock (1949) Frontier Fund for Cancer Nanotechnology, and by the Charles and Marjorie Holloway Foundation, the Ovarian Cancer Research Foundation, and the Breast Cancer Alliance.
As people get older, they often jump from disease to disease and carry the burden of more chronic diseases at once. But is there a pattern to the order in which diseases follow one another? Danish researchers have over the past six years developed a comprehensive tool, the Danish Disease Trajectory Browser, that utilizes 25 years of public health data from Danish patients to explore what they call the main highways of disease development.
“A lot of research focus is on investigating one disease at a time. We try to add a time perspective and look at multiple diseases following each other to discover where are the most common trajectories – what are the disease highways that we as people encounter,” says professor Søren Brunak from the Novo Nordisk Foundation Center for Protein Research at University of Copenhagen.
To illustrate the use of the tool, the research group looked at data for Down syndrome patients and showed, as expected, that these patients are in general diagnosed with Alzheimer’s disease at an earlier age than others. Other frequent diseases are displayed as well.
The Danish Disease Trajectory Browser is published in Nature Communications.
Making health data accessible for research
In general, there are barriers to working with health data in research, both in terms of getting approval from authorities to handle patient data and because researchers need specific technical skills to extract meaningful information from the data.
“We wanted to make an easily accessible tool for researchers and health professionals where they don’t necessarily need to know all the details. The statistical summary data on disease to disease jumps in the tool are not person-sensitive. We compute statistics over many patients and have boiled it down to data points that visualize how often patients with one disease get a specific other disease at a later point. So we are focusing on the sequence of diseases,” says Søren Brunak.
The Danish Disease Trajectory Browser is freely available to the scientific community and uses WHO’s disease codes. Even though there are regional differences in disease patterns, the tool is highly relevant in an international context for comparing, for example, how fast diseases progress in different countries.
Disease trajectories can help in personalized medicine
For Søren Brunak the tool has a great potential in personalized medicine.
“In personalized medicine a part of the job is to divide patients into subgroups that will benefit most from a specific treatment. By knowing the disease trajectories you can create subgroups of patients not just by their current disease, but based on their previous conditions and expected future conditions as well. In that way you find different subgroups of patients that may need different treatment strategies,” Søren Brunak explains.
Currently the Disease Trajectory Browser contains data from 1994 to 2018 and will continuously be updated with new data.
A sweeping study of proteins that modulate cell signaling could lead to a better understanding of myriad diseases–and of what makes us all different.
Scientists at Scripps Research have comprehensively mapped how a key class of proteins within cells regulates signals coming in from cell surface receptors.
The study reveals, among other things, that people commonly have variants in these proteins that cause their cells to respond differently when the same cell receptor is stimulated–offering a plausible explanation for why people’s responses to the same drugs can vary widely.
The findings, published October 1 in Cell, set the stage for a better understanding of the complex roles these proteins, known as RGS proteins, play in health and disease. That in turn could lead to new treatment approaches for a range of conditions.
“Before you can fix things, you need to know how they’re broken and how they work normally, and in this study that’s essentially what we’ve done for these important regulatory proteins,” says study senior author Kirill Martemyanov, PhD, professor and chair of the Department of Neuroscience at Scripps Research’s Florida campus.
A reset button for cell receptors
RGS proteins, discovered about 25 years ago, provide an essential “braking” function for a large family of cellular receptors called G-protein-coupled receptors. GPCRs, as they’re known, control hundreds of important functions on cells throughout the body, and have been implicated in dozens of diseases, from heart problems to vision impairments and mood disorders. Accordingly, GPCRs comprise the largest single category of drug targets–more than a third of FDA-approved drugs treat diseases by binding to GPCRs and modifying their activities.
When GPCRs are activated by hormones or neurotransmitters, they initiate signaling cascades within their host cells, via signal-carrying proteins called G-proteins. RGS (Regulator of G-Protein Signaling) proteins work by deactivating G-proteins, shutting off this signaling cascade. This shutoff mechanism limits G-protein signaling to a brief time window and allows cells to reset and accept new incoming signals. Without it, the GPCR-initiated signal stays on inappropriately and functional signaling becomes dysfunctional.
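As a rough illustration (a toy model with made-up rates, not measured values), the RGS "brake" can be thought of as multiplying the G-protein's shutoff rate, which shortens how long each signal pulse lasts and lets the cell reset for the next one:

```python
# Toy GPCR signaling timeline. A G-protein switches "on" when the receptor
# fires; RGS proteins accelerate the switch back "off" (GTP hydrolysis).
# All rate constants here are made up for illustration.
def signal_duration(rgs_present, base_off_rate=0.1, rgs_boost=10.0):
    """Mean time (arbitrary units) a G-protein signal stays on."""
    off_rate = base_off_rate * (rgs_boost if rgs_present else 1.0)
    return 1.0 / off_rate

print(signal_duration(rgs_present=True))   # brief, resettable pulse
print(signal_duration(rgs_present=False))  # 10x longer, "stuck on" signaling
```

The second case is the dysfunction described below: without the RGS shutoff, the signal persists long after the stimulus is gone.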
“One condition I studied earlier in my career involves the loss of RGS regulation in light-detecting cells in the retina,” Martemyanov says. “Patients born with this condition can’t stop perceiving light, even when they go into a dark room, and they can’t track moving objects very well because they lack the normal visual refresh rate. It’s easy to imagine how devastating it would be if you had a similar loss of RGS regulation in the heart or the brain where timing is so important.”
Scanning the ‘barcodes’ for clues
Researchers have evaluated some RGS proteins individually, but in the new study, Martemyanov and colleagues painstakingly covered all 20 of the RGS proteins found in human cells, studying how each one selectively recognizes and regulates its G-protein counterparts. In so doing the researchers essentially created a roadmap for how GPCR signals are routed in cells.
“This selective recognition of G-protein subunits turns out to be performed by a few elements in each RGS protein–elements organized in a pattern resembling a barcode,” says study first author Ikuo Masuho, PhD, staff scientist in the Martemyanov lab.
In an analysis of the genomes of more than 100,000 people, the researchers showed in general how mutations and common variations in RGS barcode regions can disrupt RGS proteins’ recognition of G-proteins or even cause them to recognize the wrong G-proteins. The team also demonstrated a particular example, showing how mutations in the RGS protein known as RGS16, which have been linked to insomnia, cause it to lose its usual recognition of G proteins.
“It’s clear that genetic variation in the RGS barcode regions has the potential to disrupt normal GPCR signaling, to cause disease or to create more subtle differences or traits,” Martemyanov says. “For example, it may help explain why different individuals treated with the same GPCR-targeting drug often differ widely in their responses.”
Martemyanov and his team found that RGS proteins’ barcode regions and the G-proteins they regulate are constantly evolving. They were able to reconstruct less refined, “ancestral” RGS proteins, based on analyses of different species. From these findings they were able to devise principles for crafting “designer” RGS proteins that regulate a desired set of G-proteins.
The same principles could guide the development of drugs targeting RGS proteins for therapeutic benefits, a major ongoing effort in the GPCR field. Treatments that put corrective new RGS proteins in cells might be another avenue, Martemyanov says.
An international team, led by Dr. Sabrina Simon (Wageningen University & Research) and Dr. Hojun Song (Texas A&M), succeeded in tracing the evolution of acoustic communication in the insect order Orthoptera, which includes crickets and grasshoppers. The results show that crickets were among the first animals to communicate acoustically, approximately 300 million years ago. The results are also significant because this is the first time such an analysis has been done on so large a scale. The publication by Dr. Simon et al. appeared in the prominent scientific journal Nature Communications today.
“Insects have a vital role in terrestrial ecosystems. To understand how insects influence, sustain or endanger ecosystems, and what happens when they decline or even disappear, we first need to understand why insects are so species-rich and how they evolved,” says Dr. Simon.
Orthoptera is a charismatic insect group of high evolutionary, ecological and economic importance, comprising crickets, katydids, and grasshoppers. They are a prime example of animals using acoustic communication. Using a large genomic dataset, the team established a phylogenetic framework to analyze how hearing and sound production originated and diversified during several hundred million years of evolution.
The familiar sound of crickets first arose around 300 million years ago, the researchers found, well before specialized, dedicated hearing organs evolved. Sound production originally served as a defense mechanism: predators were startled when a captured cricket vibrated in their mouths. Later on, the ability to produce sound took on a prominent role in reproduction, because sound-producing crickets had a greater chance of being located by a female.
Insects are one of the most species-rich groups of animals. They are crucial in almost every ecosystem. The number of insects is rapidly declining. Insect species are becoming invasive or disappearing due to climate change. That—in itself—has an impact on ecosystems and eventually on humans. “We need to understand the evolutionary history of this amazingly successful animal group. This is also important for our (daily) economic life because only then can we understand what happens when insect species decline or even disappear,” says Dr. Simon.
“We have access to a lot of genomic data on crickets and grasshoppers, thanks to the 1KITE project and a collaboration with the Song Lab at Texas A&M University, U.S.,” Dr. Simon says. “This enables us to analyse how different species relate to each other. We generated a genealogical tree showing when which species of crickets, grasshoppers and their allies lived on earth. On top of that, we know which species were able to produce sound and hear. That allowed us to create a timeline that shows when the first crickets could communicate: around 300 million years ago.”
The 1KITE (1K Insect Transcriptome Evolution) project aims to study the transcriptomes (that is, the entirety of expressed genes) of more than 1,000 insect species encompassing all recognised insect orders. Overall, scientists from eleven nations (Australia, Austria, China, France, Germany, Japan, Mexico, the Netherlands, New Zealand, UK and the US) are closely collaborating in the 1KITE project.
References: Song, H., Béthoux, O., Shin, S. et al. Phylogenomic analysis sheds light on the evolutionary pathways towards acoustic communication in Orthoptera. Nat Commun 11, 4939 (2020). https://doi.org/10.1038/s41467-020-18739-4
A new system capable of automatically turning words into molecules on demand will open up the digitisation of chemistry, scientists say.
Researchers from the University of Glasgow’s School of Chemistry, who developed the system, claim it will lead to the creation of a “Spotify for chemistry”—a vast online repository of downloadable recipes for important molecules including drugs.
The creation of such a system could help developing countries more easily access medications, enable more efficient international scientific collaboration, and even support the human exploration of space.
The Glasgow team, led by Professor Lee Cronin, have laid the groundwork for digital chemistry with the development of what they call a “chemical processing unit”—an affordable desktop-sized robot chemist which is capable of doing the repetitive and time-consuming work of creating chemicals. Other robot chemists, built with different operating systems, have also been developed elsewhere.
Up until now, those robot chemists have required a massive amount of programming from their human counterparts, with detailed instructions. The problem is there is currently no standard programming language for chemistry, meaning that programs made for one robot do not work on any other type.
In a new paper published in the journal Science, the Glasgow researchers describe a universal approach to digitizing chemistry, including a programming system which could remove the vast majority of the effort required to program the robots.
They have found a way to create new sets of instructions for robot chemists by harnessing the power of natural language processing. They developed a computer program called SynthReader to scan through scientific papers and recognize sections which outline procedures for organic and inorganic chemical synthesis. SynthReader automatically breaks those procedures down into simple instructions and stores them in a format the team call Chemical Description Language, or XDL, a new open-source language for describing chemical and material synthesis.
Those XDL files are chemical instructions which can, in principle, be read by any robot chemist. The team built an easy-to-use interface called ChemIDE to integrate with any robotic chemist system and allow the XDL instructions to be turned into chemicals. The only human input required is ensuring that the equipment the robot needs to make the molecules is set up correctly.
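XDL is an XML-based format, so a procedure reduces to a machine-readable list of steps. The sketch below builds a hypothetical XDL-style file in Python; the element and attribute names are illustrative guesses at the general shape of such a file, not the published XDL schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical XDL-style procedure. Element and attribute names are
# illustrative, not the published XDL specification.
procedure = ET.Element("Synthesis")
steps = [
    ("Add", {"vessel": "reactor", "reagent": "2,6-dimethylaniline", "amount": "4.9 g"}),
    ("Stir", {"vessel": "reactor", "time": "30 min"}),
    ("HeatChill", {"vessel": "reactor", "temp": "60 C", "time": "2 h"}),
]
for name, attrs in steps:
    ET.SubElement(procedure, name, attrs)

xdl_text = ET.tostring(procedure, encoding="unicode")
print(xdl_text)  # a machine-readable recipe a robot chemist could execute
```

The appeal of such a format is exactly the "MP3" analogy quoted below: once a procedure is serialized this way, any compatible robot can replay it.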
The paper describes how the team used their system to scan scientific papers and produce 12 different molecules using their chemical processing unit, including the analgesic lidocaine, the Dess-Martin periodinane oxidation reagent, and the fluorinating agent AlkylFluor.
Professor Lee Cronin, Regius Professor of Chemistry at the University of Glasgow, said: “What we’ve managed to do with the development of our ‘Chemical Spotify’ is something similar to ripping a compact disc into an MP3. We take information stored in a physical format, in this case a scientific paper, and pull out all the data we need to create a digital file which can be played on any system, in this case any robot chemist, including our robotic system which is an order of magnitude lower cost than any other similar robot. We’re hoping that the system we’ve built will massively expand the capabilities of robot chemists and allow the creation of a huge database of molecules drawn from hundreds of years’ worth of scientific papers. Our system, which we’re calling Chemify, can read and run XDL files which have been shared among users. Putting that kind of knowledge directly in the hands of people with access to robot chemists could help doctors make drugs on demand in the future. It could even mean that future manned missions to Mars could take raw chemical materials with them and make whatever they need right there on the red planet.”
References: S. Hessam M. Mehr, Matthew Craven, Artem I. Leonov et al., “A universal system for digitization and automatic execution of the chemical synthesis literature”, Science 02 Oct 2020: Vol. 370, Issue 6512, pp. 101-108 DOI: 10.1126/science.abc2986 link: https://science.sciencemag.org/content/370/6512/101
A group of researchers at the U.S. Department of Energy’s Ames Laboratory has discovered a way to convert a common byproduct of the paper manufacturing process into valuable chemical precursors for making nylon. The process is much more environmentally friendly in terms of the solvent(s) used and the energy inputs than other methods and provides a useful alternative to burning waste products of pulping.
Kraft lignin (from the German word for “strength”) is a major waste product of the paper industry, amounting to about 50 million tons annually. This waste lignin is typically burned for heat; however, that process also releases carbon dioxide into the environment.
Ames Laboratory researchers discovered that treating this lignin with aqueous sodium hydroxide at reasonable temperatures (200 °C) produces guaiacol. Guaiacol can then be converted into nylon precursors under even milder conditions using suitable catalysts—creating a new, viable two-step process for producing important chemicals from lignin.
“We found that Kraft lignin was depolymerized in dilute alkaline solution at relatively low temperature (200 °C) under an ambient nitrogen pressure environment,” said Igor Slowing, Ames Laboratory scientist and lead investigator. “We were able to produce guaiacol with high selectivity (>80%) in a total monomer amount of 13% based on the lignin input.”
The team used a series of techniques, including a suite of advanced solution and solid-state nuclear magnetic resonance (NMR) experiments, mass spectrometry and model reactions, to determine that guaiacol was generated mainly through cleavage of β-O-4 bonds in the original lignin structure. Cleaving this type of bond often requires severe reaction conditions, which frequently lead to undesirable side reactions and the formation of intractable chars.
The Kraft lignin-derived guaiacol was then converted to the nylon precursor ketone-alcohol (KA) oil using a Ru/C catalyst under 1 bar of H2. The use of low H2 pressure proved critical to ensure full selectivity to KA oil, without formation of the undesired methoxy-cyclohexanol byproduct. Importantly, the deactivation of the Ru/C catalyst observed in direct treatment of lignin was avoided in the two-step procedure.
“This two-step process provides a new option for lignin utilization in the production of high-demand value-added chemicals,” said Slowing. “We envision this process as a low-energy path that leaves the remaining oligomers available for downstream processing into other chemical commodities in an integrated refinery for waste Kraft lignin.”
The research has been detailed in the Royal Society of Chemistry’s journal Green Chemistry, under the title “Two-step conversion of Kraft lignin to nylon precursors under mild conditions.”
References: Hui Zhou et al., “Two-step conversion of Kraft lignin to nylon precursors under mild conditions”, Green Chem., 2020, 22, pp. 4676-4682, doi: https://doi.org/10.1039/D0GC01220C
The age-old question of whether life exists beyond Earth has long been asked by scientists and researchers without much progress in finding the answer.
More than 4,200 exoplanets have been discovered outside our solar system, and while techniques have been developed to test for life on exoplanets, none has tested for complex, non-technological life like vegetation. Now, space telescopes may soon be able to directly view these planets—including one within the habitable zone of Earth’s nearest stellar neighbor. With the help of these telescopes and a team of researchers in informatics and astronomy at Northern Arizona University, an answer to this question might not be so out of this world.
Funded by a NASA Habitable Worlds grant, a team of researchers, which includes Chris Doughty, David Trilling and Ph.D. student Andrew Abraham, published a study in the International Journal of Astrobiology that develops and tests a technique to determine whether specifically multicellular or complex-but-not-technological life can be uniquely detected outside the solar system.
In an attempt to find some answers, the team turned to one of Earth’s most common multicellular life forms—trees. More specifically, their shadows.
“Earth has more than three trillion trees, and each casts shadows differently than inanimate objects,” said Doughty, lead author on the paper and assistant professor in the School of Informatics, Computing, and Cyber Systems. “If you go outside at noon, almost all shadows will be from human objects or plants and there would be very few shadows at this time of day if there wasn’t multicellular life.”
The team hypothesizes that abundant upright photosynthetic multicellular life (trees) will cast shadows at high sun angles, distinguishing them from single cellular life. Therefore, using future space telescopes to observe the types of shadows cast should, in theory, determine if there are similar life forms on exoplanets.
“The difficult part is that any future space telescope will likely only have a single pixel to determine if life exists on that exoplanet,” said Abraham, who worked closely with Doughty on the study. “So, the question becomes: Can we detect these shadows indicating multicellular life with a single pixel?”
With just one pixel to work with, the team had to make sure that shadows detected by these telescopes came conclusively from multicellular life, not from other exoplanet features like craters.
“It was suggested that craters might cast shadows similar to trees, and our idea would not work,” said Trilling, associate professor of astronomy. “So, we decided to look at the replica moon landing site in northern Arizona where the Apollo astronauts trained for their mission to the moon.”
Drones were used at different times of the day to determine that craters did in fact cast shadows differently than trees.
The researchers then turned to satellite imaging to determine if their theory would work on a large scale. Using the POLDER (Polarization and Directionality of Earth’s Reflectance) satellite, the team was able to observe the shadows on Earth at different sun angles and times of day. The resolution was reduced to mimic what Earth would look like as a single pixel to a distant observer as it orbits the sun. The team then compared this to similar data from Mars, the moon, Venus and Uranus to see if Earth’s multicellular life was unique.
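The core of that resolution-reduction step is simple: every resolved element of the image is averaged into one disk-integrated brightness value, repeated at each sun angle to build a light curve. A minimal sketch with made-up reflectance numbers:

```python
# Toy reflectance map of part of a planet (fraction of light reflected per
# resolved element). Values are made up for illustration.
reflectance_map = [
    [0.12, 0.30, 0.28],
    [0.25, 0.05, 0.22],
    [0.31, 0.27, 0.10],
]

def single_pixel(image):
    """Average every resolved element into one disk-integrated value,
    mimicking what a distant telescope's single pixel would record."""
    values = [v for row in image for v in row]
    return sum(values) / len(values)

pixel = single_pixel(reflectance_map)
print(round(pixel, 3))  # one number per observation is all that survives
```

The shadow signal must therefore show up as a change in this one averaged number as the sun angle varies, which is why whole-planet detection proved so much harder than detecting tree-rich regions like the Amazon.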
The team found that on parts of the planet where trees were in abundance, like the Amazon basin, multicellular life could be distinguished, but when it came to observing the planet as a whole as a single pixel, distinguishing multicellular life was difficult.
Still, observing shadows could bring scientists closer to detecting life on exoplanets than they have ever been before. Doughty believes the technique remains valid in theory: a future space telescope could rely on the shadows found in a single pixel.
“If each exoplanet was only a single pixel, we might be able to use this technique to detect multicellular life in the next few decades,” he said. “If more pixels are required, we may have to wait longer for technological improvements to answer whether multicellular life on exoplanets exists.”
References: Doughty, C., Abraham, A., Windsor, J., Mommert, M., Gowanlock, M., Robinson, T., & Trilling, D. (2020). Distinguishing multicellular life on exoplanets by testing Earth as an exoplanet. International Journal of Astrobiology, 1-8. doi:10.1017/S1473550420000270
Some low-mass planets are expected to be ejected from their parent planetary systems during the early stages of planetary system formation. According to planet-formation theories, such as the core accretion theory, typical masses of ejected planets should be between 0.3 and 1.0 M⊕. Although such objects emit essentially no light, they may be detected using gravitational microlensing via their light-bending gravity. Microlensing events due to terrestrial-mass rogue planets are expected to have extremely small angular Einstein radii (< 1 μas) and extremely short timescales (< 0.1 day).
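For context, the scaling behind those numbers follows from the standard microlensing relations (textbook results, not taken from the paper itself), for a lens of mass M, lens-source relative parallax π_rel, and relative proper motion μ_rel:

```latex
\theta_{\mathrm{E}} = \sqrt{\kappa\, M\, \pi_{\mathrm{rel}}},
\qquad
\kappa \equiv \frac{4G}{c^{2}\,\mathrm{au}} \approx 8.14\ \mathrm{mas}\, M_{\odot}^{-1},
\qquad
t_{\mathrm{E}} = \frac{\theta_{\mathrm{E}}}{\mu_{\mathrm{rel}}}
```

Because θE scales as the square root of the lens mass, an Earth-mass lens (M ≈ 3 × 10⁻⁶ M⊙) has an Einstein radius roughly 600 times smaller than a solar-mass star’s, pushing θE below a microarcsecond and tE down to well under a day for typical relative proper motions.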
Now, in their research paper, Mroz and colleagues present the discovery of OGLE-2016-BLG-1928, the shortest-timescale microlensing event identified to date, lasting only 41.5 minutes. That’s not much time for detailed data to be gathered.
Thanks to the detection of finite-source effects in the light curve of the event, they were able to measure the angular Einstein radius of the lens, θE = 0.842 ± 0.064 μas, making the event the most extreme short-timescale microlens discovered to date.
Depending on its unknown distance, the lens may be a Mars- to Earth-mass object, with the former possibility favored by the Gaia proper motion measurement of the source. They rule out stellar companions up to the projected distance of 8.0 au from the planet. Their discovery demonstrated that terrestrial-mass free-floating planets can be detected and characterized using microlensing.
References: Mroz et al., A terrestrial-mass rogue planet candidate detected in the shortest-timescale microlensing event. arXiv:2009.12377 [astro-ph.EP]. arxiv.org/abs/2009.12377
Copyright of this article belongs to Uncover Reality.