Effective therapeutics that can inhibit the replication of SARS-CoV-2 in infected individuals are still under development. Several studies have suggested drug combinations that can inhibit or prevent SARS-CoV-2 infection. A few weeks ago, we covered a study showing that the combination of Pegasys (IFNα) and nafamostat can effectively prevent SARS-CoV-2 infection in cell culture and hamsters. Now, Dr. Kim Stegmann and colleagues have shown that the combination of NHC with DHODH inhibitors such as teriflunomide, IMU-838/vidofludimus, and BAY2402234 strongly synergizes to inhibit SARS-CoV-2 replication. Their study recently appeared on bioRxiv.
The nucleoside analogue N4-hydroxycytidine (NHC), also known as EIDD-1931, interferes with SARS-CoV-2 replication in cell culture. It is the active metabolite of the prodrug Molnupiravir (MK-4482), which is currently being evaluated for the treatment of COVID-19 in advanced clinical studies. Meanwhile, inhibitors of dihydroorotate dehydrogenase (DHODH), by reducing the cellular synthesis of pyrimidines, counteract virus replication and are also being clinically evaluated for COVID-19 therapy.
Now, Kim Stegmann and colleagues carried out a study to determine how effectively NHC and DHODH inhibitors, alone and in combination, prevent SARS-CoV-2 infection.
They showed that the combination of NHC with DHODH inhibitors such as teriflunomide, IMU-838/vidofludimus, and BAY2402234 strongly synergizes to inhibit SARS-CoV-2 replication. While single-drug treatment only mildly impaired virus replication, combination treatments reduced virus yields by at least two orders of magnitude.
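A common way to quantify whether a drug combination is truly synergistic, rather than merely additive, is the Bliss independence model: the combination is synergistic if its observed effect exceeds what independent action of the two drugs would predict. The sketch below illustrates the idea with hypothetical inhibition fractions, chosen only for illustration; these are not figures from the preprint, and the authors' own synergy analysis may differ.

```python
def bliss_expected(f_a, f_b):
    """Expected combined inhibition if two drugs act independently
    (Bliss independence): 1 - (1 - f_a) * (1 - f_b)."""
    return 1 - (1 - f_a) * (1 - f_b)

# Hypothetical single-drug inhibition fractions (mild effects, as the
# study describes for single-drug treatment) -- illustrative values only.
f_nhc = 0.40      # NHC alone
f_dhodh = 0.35    # a DHODH inhibitor alone

expected = bliss_expected(f_nhc, f_dhodh)   # ~0.61
observed = 0.99   # combination: virus yield reduced ~2 orders of magnitude

print(f"Expected if independent: {expected:.2f}")
print(f"Observed (combination):  {observed:.2f}")
print("Synergy" if observed > expected else "No synergy")
```

With these placeholder numbers, the observed combined inhibition far exceeds the Bliss expectation, which is the signature of synergy the study reports.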
They determined this by RT-PCR, TCID50, immunoblot, and immunofluorescence assays in Vero E6 and Calu-3 cells infected with wild-type SARS-CoV-2 and with the Alpha and Beta variants.
They proposed that the scarcity of pyrimidine nucleotides upon DHODH inhibition increases the incorporation of NHC into nascent viral RNA, precluding correct synthesis of the viral genome in subsequent rounds of replication and thereby inhibiting the production of replication-competent virus particles. This concept was further supported by the rescue of virus replication after addition of pyrimidine nucleosides to the media.
“Since both classes of compounds are undergoing advanced clinical evaluation for the treatment of COVID-19, our observations at least raise the perspective of using both drugs as antiviral combination therapy.”
— concluded the authors of the study
Reference: Kim M. Stegmann, Antje Dickmanns, Natalie Heinen, Uwe Groß, Dirk Görlich, Stephanie Pfaender, Matthias Dobbelstein, “N4-hydroxycytidine and inhibitors of dihydroorotate dehydrogenase synergistically suppress SARS-CoV-2 replication”, bioRxiv 2021.06.28.450163; doi: https://doi.org/10.1101/2021.06.28.450163
Note for editors of other websites: To reuse this article fully or partially, kindly give credit to our author/editor S. Aman or provide a link to our article.
A team of scientists from Gladstone, UC San Francisco and Emory University have uncovered features of T cells that distinguish fatal from non-fatal cases of severe COVID-19.
While vaccines are doing a remarkable job of slowing the COVID-19 pandemic, infected people can still die from severe illness and new medications to treat them have been slow to arise. What kills these patients in the end doesn’t seem to be the virus itself, but an over-reaction of their immune system that leads to massive inflammation and tissue damage.
The findings, published in the scientific journal Cell Reports, could pave the way for new treatments. Currently, patients who are hospitalized for severe COVID-19 mostly receive dexamethasone, a drug used to reduce inflammation.
“Dexamethasone has been a life saver for many patients,” says Gladstone Associate Investigator Nadia Roan, PhD, a senior and corresponding author of the study. “But it is not always sufficient. Our study suggests that it may also be beneficial to directly prevent excess immune cells, including inflammatory T cells, from entering the lung and causing further damage. This approach could be a good complement to anti-inflammatory treatments for COVID-19 patients in the intensive care unit.”
The work could also help with disease prognosis.
“Some patients can fall seriously ill from the virus,” says Warner Greene, MD, PhD, senior investigator at Gladstone and co-senior author of the study. “We are in dire need for effective ways to anticipate the course of disease, as well as to alleviate lung damage in people with severe COVID-19.”
An Imbalance of T Cells
T cells are a crucial component of a successful immune response to many viruses, including SARS-CoV-2, the virus that causes COVID-19. And they are markedly depleted from the blood during severe COVID-19.
To characterize the features of the T cells that remain, the scientists obtained blood samples that had been collected from COVID-19 patients in an intensive care unit (ICU). While about half of these patients eventually recovered, the other half died of the disease. By examining samples taken at different times during the patients’ stay in the ICU, the scientists were able to discern trends that they could relate to the disease outcome.
Much has already been learned about the immune response of COVID-19 patients during active infection or after convalescence. For example, studies from convalescent individuals, including Roan’s own previous work, reveal how the immune system may provide long-term immunity. Less clear, however, is how the immune system may protect from severe illness or, conversely, contribute to its worsening and to death. To understand the cause of fatalities, the researchers needed to compare fatal to non-fatal severe cases.
“The T-cell response to SARS-CoV-2 increased in patients who were eventually discharged from the ICU and recovered,” says Roan, who is also an associate professor of urology at UC San Francisco. “But in patients who eventually died, we sometimes could not detect any T-cell response, or their response decreased over time.”
Differences also extended to the composition of the patients’ T cells that recognize the SARS-CoV-2 virus. In particular, patients who survived harbored a growing number of T cells called Th1, which are known as important fighters of viral infection. Roan’s team found molecular features on the Th1 cells that may explain why they were able to multiply in these patients.
By contrast, they found that patients who died had an elevated number of T cells secreting an inflammatory molecule that would contribute to a worsening of their lung condition. These patients also held more regulatory T cells recognizing the virus. Regulatory T cells normally help quiet down the immune response once infection subsides.
“Perhaps in these patients, regulatory T cells were activated too early and prevented effector T cells from ever mounting an adequate immune response to SARS-CoV-2,” says Roan. “This could help explain the patients’ paltry response to the virus.”
Based on these findings, doctors might be able to predict the course of illness from the relative abundance of Th1 and regulatory T cells that recognize SARS-CoV-2 in a patient’s blood.
Roan cautions, however, “Our findings show correlations, not causes. The immune system is complex, with many moving parts and possible interactions between them. Proving the cause of fatality will require further studies.”
Stemming the Flow of Lung-Homing Cells
Another potential cause of fatality that the team discovered was a surge in T cells able to infiltrate the lungs of dying patients. By contrast, these cells decreased over time in the patients who recovered.
The scientists call these lung-homing cells “bystander T cells,” because they are T cells that do not directly recognize the SARS-CoV-2 virus.
“Our study suggests that during severe COVID-19, bystander T cells are recruited from the blood into the lung, where they may contribute to immune-mediated pathology,” says Roan.
What triggers the surge of bystander T cells in severe COVID-19 cases remains unclear, but may be in part mediated by proteins secreted by the lung that recruit these cells. Regardless, stemming their flow into the lung may help reduce lung damage and accelerate the recovery of patients with severe illness.
This approach is particularly promising, as drugs that antagonize a molecule found on the surface of the bystander T cells are already approved for the treatment of metastatic cancer.
“Our next step is to test these drugs in a mouse model of severe COVID-19,” says Roan. “We hope that after further scrutiny, such drugs could rapidly be tested as adjunctive treatment for COVID-19.”
Other authors include Jason Neidleman, Xiaoyu Luo, Ashley F. George, and Matthew McGregor from Gladstone Institutes; Junkai Yang and Eliver Ghosn of Emory Vaccine Center, Emory University, Atlanta, GA, USA; Cassandra Yun, Kara Lynch, Victoria Murray, Gurjot Gill, Joshua Vasquez, and Sulggi A. Lee of UC San Francisco.
This work was supported by the Van Auken Private Foundation, David Henke, and Pamela and Edward Taft; the Program for Breakthrough Biomedical Research, which is partly funded by the Sandler Foundation; philanthropic funds donated to Gladstone Institutes by The Roddenberry Foundation and individual donors devoted to COVID-19 research; Fast Grants Awards (2164, 2208, and 2160), a part of Emergent Ventures from the Mercatus Center at George Mason University; and the National Institutes of Health (R01 AI123126-05S1, P30 DK063720, and S10 1S10OD018040).
Lowering levels of a hormone called PTHrP can prevent metastases and improve survival in mice with pancreatic cancer, and it could lead to a new way to treat patients, according to a study from cancer researchers at Columbia University Vagelos College of Physicians and Surgeons and Herbert Irving Comprehensive Cancer Center, with collaborators at the University of Pennsylvania.
When patients are first diagnosed with pancreatic cancer, the cancer usually has spread to other organs. Because of these metastases, nearly all patients will succumb to their cancer within one year of diagnosis, but no drugs exist to prevent metastasis.
In an effort to find treatments, cancer researchers at Columbia—led by Anil K. Rustgi, MD, and Jason R. Pitarresi, PhD—investigated a hormone called PTHrP. Although PTHrP (parathyroid hormone-related protein) is often highly active in patients with pancreatic cancer, its role in metastasis was unclear.
Loss of PTHrP dramatically improves survival in mice
The researchers first manipulated the levels of PTHrP in mice with pancreatic cancer. Elimination of PTHrP from mice—with genetic engineering or with an antibody that targets the hormone—not only eliminated metastasis and enhanced overall survival, but also dramatically reduced the size of the initial tumors in the pancreas.
Even in mice with a highly aggressive form of pancreatic cancer, the gain in survival was dramatic: median survival rose from 111 days to 192 days, with near-complete elimination of metastases. This 73% increase in survival, the researchers say, is one of the largest observed in mice with this type of pancreatic cancer, which closely resembles human cancers.
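The 73% figure follows directly from the two medians reported above:

```python
# Sanity check of the reported survival gain (figures from the article).
median_control_days = 111   # median survival without intervention
median_treated_days = 192   # median survival with PTHrP eliminated

increase = (median_treated_days - median_control_days) / median_control_days
print(f"{increase:.0%}")  # prints "73%"
```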
The striking results with mice led the researchers to test the anti-PTHrP antibodies in human pancreatic cancer cells. The results from these experiments were also encouraging: Among 3D organoids derived from pancreatic cancer patients under an IRB-approved protocol, anti-PTHrP antibodies greatly reduced growth and viability of the cells.
Two-pronged attack on cell growth and metastasis
Targeting PTHrP attacks pancreatic cancer in two ways, the researchers say. It reduces the ability of the tumor cells to transition from an epithelial state to a mesenchymal state, which is necessary for the creation of new metastases. And targeting PTHrP also prevents the growth of primary and secondary tumors.
“We think these findings provide a strong rationale for further developing anti-PTHrP therapy towards clinical trials,” says Rustgi, who adds that the antibody used in the study has the potential to be used in people and credits Richard Kremer, MD, PhD, of McGill University for developing the antibodies.
“We are hopeful that a drug targeting PTHrP could be used to treat most patients with pancreatic cancer,” he says, “because the vast majority have tumors with high levels of PTHrP. There is the potential application to other cancers as well.”
Potential with other cancers
The researchers originally began investigating PTHrP because its gene is often amplified when another nearby gene, KRAS, is amplified. KRAS has long been recognized as a cancer-promoting gene in pancreatic and other cancers.
For patients, that means anti-PTHrP therapies may have potential in other cancers known to harbor KRAS amplifications.
For researchers, the finding also suggests a wider search for cancer-causing genes is needed.
“We feel that PTHrP may have been previously overlooked as a mere passenger gene co-amplified with KRAS, but our study shows that PTHrP has its own tumor-promoting functions,” Pitarresi says. “It suggests other so-called ‘passenger’ genes may have bigger roles in cancer than we initially thought and should be examined more closely.” Rustgi notes “it might open up for combinatorial therapies of targeting the KRAS pathway with an antibody to PTHrP.”
A study in the journal Brain Communications by Danish and Belgian researchers attributes, for the first time, a biological purpose to near-death experiences (NDEs).
Near-death experiences are known from all parts of the world, various times and numerous cultural backgrounds. This universality suggests they may have a biological origin and purpose, but exactly what this could be has been largely unexplored.
A new study conducted jointly by the University of Copenhagen (Denmark) and the University of Liege (Belgium) and published in Brain Communications shows how near-death experiences in humans may have arisen from evolutionary mechanisms.
“Adhering to a preregistered protocol, we investigated the hypothesis that thanatosis is the evolutionary origin of near-death experiences”, says Daniel Kondziella, a neurologist from Rigshospitalet, Copenhagen University Hospital.
When attacked by a predator, as a last resort defense mechanism, animals can feign death to improve their chances of survival, one example being the opossum. This phenomenon is termed thanatosis, also known as death-feigning or tonic immobility. “As a survival strategy,” Daniel Kondziella adds, “thanatosis is probably as old as the fight-or-flight response.”
Charlotte Martial, neuropsychologist from the Coma Science Group at ULiège explains: “We first show that thanatosis is a highly preserved survival strategy occurring at all major nodes in a cladogram ranging from insects to fish, reptiles, birds and mammals, including humans. We then show that humans under attack by big animals such as lions or grizzly bears, human predators such as sexual offenders, and ‘modern’ predators such as cars in traffic accidents can experience both thanatosis and near-death experiences. Furthermore, we show that the phenomenology and the effects of thanatosis and near-death experiences overlap.”
Steven Laureys, neurologist and head of GIGA Consciousness research unit and Centre du Cerveau (ULiège, CHU Liège) is excited: “In this paper, we build a line of evidence suggesting that thanatosis is the evolutionary foundation of near-death experiences and that their shared biological purpose is the benefit of survival.”
The authors propose that the acquisition of language enabled humans to transform these events from relatively stereotyped death-feigning under predatory attacks into the rich perceptions that form near-death experiences and extend to non-predatory situations.
“Of note, the proposed cerebral mechanisms behind death-feigning are not unlike those that have been suggested to induce near-death experiences, including intrusion of rapid eye movement sleep into wakefulness,” Daniel Kondziella explains. “This further strengthens the idea that evolutionary mechanisms are an important piece of information needed to develop a complete biological framework for near-death experiences.”
No previous work has tried to provide such a phylogenetic basis. Steven Laureys concludes, “this may also be the first time we can assign a biological purpose to near-death experiences, which would be the benefit of survival.”
And Daniel Kondziella adds, “after all, near-death experiences are by definition events that are always survived, without exception.”
Reference: Costanza Peinkhofer, MD, Charlotte Martial, PhD, Helena Cassol, PhD, Steven Laureys, MD, PhD, Daniel Kondziella, MD, PhD, “The evolutionary origin of near-death experiences: a systematic investigation”, Brain Communications, 2021, fcab132, https://doi.org/10.1093/braincomms/fcab132
RUDN University chemists have proposed a new method of producing fuel from Jatropha curcas, a poisonous tropical plant. The method uses natural minerals and a non-toxic additive derived from plant raw materials. The reaction efficiency is 85%. The fuel can be used in diesel internal combustion engines. The results are published in the International Journal of Green Energy.
Jatropha curcas is a common plant in many tropical regions. Its seeds contain a lot of oil, but they cannot be used in agriculture because the oil contains toxins that are dangerous to people and animals. The composition of jatropha oil, however, is suitable for the manufacture of biodiesel. One challenge in processing this plant raw material is selecting sufficiently effective and safe catalysts. RUDN University chemists found a suitable catalyst and selected the optimal additive, a substance that improves the useful properties of the fuel.
“Mineral catalysts with a complex chemical composition, for example, zeolites — calcium and sodium silicates, have performed well in biodiesel production from vegetable and animal fats. They are quite active, eco-friendly and can be reused. But biodiesel, like hydrocarbons, cannot be used without improving additives,” says Ezeldin Osman, PhD student at RUDN University.
RUDN University chemists decided to use furfural as an additive for the biodiesel. Furfural is obtained from plant waste, such as sawdust or straw, and it improves the characteristics of diesel fuel, in particular its cetane number, an indicator of ignition quality.
As a first step, the researchers obtained biodiesel from Jatropha curcas oil. To do this, they mixed the oil with three times as much methanol and added a catalyst: minerals from the zeolite group, mainly thomsonite, in an amount one-fifth that of the oil. The chemists also tested other reaction settings, but the highest yield of biodiesel (up to 85% of the reaction products) was obtained at this ratio of reagents and a temperature of 75°C.
The main part of the experiment was selecting the optimal amount of furfural to improve the characteristics of the biodiesel. The chemists mixed biodiesel and the additive in equal quantities; in other variants they used twice as much additive as fuel, and vice versa. It turned out that the highest cetane number (64.1) was achieved in fuel containing 66.6% furfural. This is 4.3 units higher than that of biodiesel without furfural. At this ratio, the additive removes from the biodiesel compounds that impair ignition, such as alcohols and carbonyl compounds. The achieved characteristics allow biodiesel from Jatropha curcas to be used in internal combustion engines in the future.
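The furfural percentage follows directly from the mixing ratios described above: the "twice as much additive as fuel" blend contains two parts furfural per one part biodiesel, i.e. two-thirds furfural, which the article quotes as 66.6%. A quick check over all three ratios tested:

```python
# Furfural fraction implied by each additive-to-biodiesel mixing ratio
# described in the article (1:2, 1:1, and 2:1 additive:fuel).
for additive, fuel in [(1, 2), (1, 1), (2, 1)]:
    fraction = additive / (additive + fuel)
    print(f"{additive}:{fuel} blend -> {fraction:.1%} furfural")
```

The 2:1 blend comes out at two-thirds (66.7%, truncated to 66.6% in the article), the composition reported to give the best cetane number.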
“The additive reduced the content of aluminum, sodium, magnesium, potassium, iron and other substances in biodiesel that form ash — a non-combustible solid residue of fuel. This not only improves fuel performance, but also reduces the risk of engine wear. At the same time, furfural is a stable additive at high temperatures, environmentally friendly in production and application. We will continue experiments to improve diesel fuel with this substance,” says Tatiana Sheshko, PhD, head of the Adsorption and Catalysis Laboratory at RUDN University.
Reference: M. Ezeldin Osman, T. F. Sheshko, T.D Dipheko, N. E. Abdallah, E. A. Hassan & C. Y. Ishak (2021) Synthesis and improvement of Jatropha curcas L. biodiesel based on eco-friendly materials, International Journal of Green Energy, DOI: 10.1080/15435075.2021.1904943
A team led by Skoltech professor Artem R. Oganov studied the structure and properties of ternary hydrides of lanthanum and yttrium and showed that alloying is an effective strategy for stabilizing otherwise unstable phases YH10 and LaH6, expected to be high-temperature superconductors. The research was published in the journal Materials Today.
Cuprates had long remained record-setters for high-temperature superconductivity until H3S was predicted in 2014. This unusual sulfur hydride was estimated to have high-temperature superconductivity at 191-204 K and was later obtained experimentally, setting a new record in superconductivity.
Following this discovery, many scientists turned to superhydrides, which are abnormally rich in hydrogen, and discovered new compounds that become superconducting at even higher temperatures: LaH10 (predicted, and then experimentally shown, to superconduct at 250-260 K at 2 million atmospheres) and YH10 (predicted to be an even higher-temperature superconductor). Despite the similarity between yttrium and lanthanum, YH10 proved to be unstable, and so far no one has succeeded in synthesizing it in its pure form. Having reached the upper limit of critical temperatures for binary hydrides, chemists turned to ternary hydrides, which appear to be the most promising path toward still higher-temperature superconductivity. Finally, in 2020, after over 100 years of research, scientists were able to synthesize the first room-temperature superconductor, a ternary sulfur and carbon hydride with a critical temperature of +15 °C.
In their recent work, scientists from Skoltech, the Institute of Crystallography of RAS, and V.L. Ginzburg Center for High-Temperature Superconductivity and Quantum Materials studied ternary hydrides of lanthanum and yttrium with different ratios of these two elements.
“Although lanthanum and yttrium are similar, their hydrides are different: YH6 and LaH10 do exist, while LaH6 and YH10 do not. We found that both structures could be stabilized by adding the other element. For example, LaH6 can be made more stable by adding 30 percent of yttrium, and its critical superconductivity temperature is slightly higher as compared to YH6,” professor Oganov says.
In addition, the research has helped to elucidate the general profile of superconductivity in ternary hydrides. “We realized that ternary and quaternary hydrides have progressively less ordered structures and a much greater width of the superconducting transition than binary hydrides. Also, they require more intensive and longer laser heating than their binary counterparts,” lead author and Skoltech PhD student Dmitrii Semenok explains.
The scientists believe that the study of ternary hydrides holds much promise for stabilizing unstable compounds and enhancing their superconducting performance.
Reference: Dmitrii V. Semenok, Ivan A. Troyan, Anna G. Ivanova, Alexander G. Kvashnin, Ivan A. Kruglov, Michael Hanfland, Andrey V. Sadakov, Oleg A. Sobolevskiy, Kirill S. Pervakov, Igor S. Lyubutin, Konstantin V. Glazyrin, Nico Giordano, Denis N. Karimov, Alexander L. Vasiliev, Ryosuke Akashi, Vladimir M. Pudalov, Artem R. Oganov, “Superconductivity at 253 K in lanthanum–yttrium ternary hydrides”, Materials Today, 2021, ISSN 1369-7021, https://doi.org/10.1016/j.mattod.2021.03.025 (https://www.sciencedirect.com/science/article/pii/S1369702121001309)
Damaris Lorenzo, PhD, at the UNC School of Medicine, led the discovery of a new neurodevelopmental syndrome, its underlying genetic basis, and its molecular mechanisms, all important milestones on the road to creating therapeutic strategies.
Scientists at the University of North Carolina at Chapel Hill School of Medicine and colleagues have demonstrated that variants in the SPTBN1 gene can alter neuronal architecture, dramatically affecting their function and leading to a rare, newly defined neurodevelopmental syndrome in children.
Damaris Lorenzo, PhD, assistant professor in the UNC Department of Cell Biology and member of the UNC Neuroscience Center at the UNC School of Medicine, led this research, which was published today in the journal Nature Genetics. Lorenzo, who is also a member of the UNC Intellectual and Developmental Disabilities Research Center (IDDRC) at the UNC School of Medicine, is the senior author.
The gene SPTBN1 instructs neurons and other cell types how to make βII-spectrin, a protein with multiple functions in the nervous system. Children carrying these variants can suffer from speech and motor delays, as well as intellectual disability. Some patients have received additional diagnoses, such as autism spectrum disorder, ADHD, and epilepsy. Identifying the genetic variants that cause this broad spectrum of disabilities is the first important milestone toward finding treatments for this syndrome.
Lorenzo first learned about patients with complex neurodevelopmental presentations carrying SPTBN1 variants from Queenie Tan, MD, PhD, a medical geneticist, and Becky Spillmann, MS, a genetic counselor – both members of the NIH-funded Undiagnosed Disease Network (UDN) site at Duke University and co-authors of the Nature Genetics paper. They connected with Margot Cousin, PhD, a geneticist associated with the UDN site at the Mayo Clinic and co-first author of the study. Cousin had also collected clinical information from SPTBN1 variant carriers. Other clinical genetics teams learned about these efforts and joined the study.
The cohort of individuals affected by SPTBN1 variants continues to grow. Lorenzo and colleagues have been contacted about new cases after they published a preprint of their initial findings last summer. Identifying the genetic cause of rare diseases such as the SPTBN1 syndrome requires pooling knowledge from several patients to establish common clinical and biological patterns.
“Fortunately, the advent of affordable gene sequencing technology, together with the creation of databases and networks to facilitate the sharing of information among clinicians and investigators, has vastly accelerated the diagnosis of rare diseases,” Lorenzo said. “To put our case in historical perspective, βII-spectrin was co-discovered 40 years ago through pioneering work that involved my UNC colleagues Keith Burridge, PhD, and Richard Cheney, PhD, as well as my postdoctoral mentor Vann Bennett, PhD, at Duke. However, its association with disease eluded us until now.”
βII-spectrin is tightly associated with the neuronal cytoskeleton – a complex network of filamentous proteins that spans the neuron and plays pivotal roles in their growth, shape, and plasticity. βII-spectrin forms an extended scaffolding network that provides mechanical integrity to membranes and helps to orchestrate the correct positioning of molecular complexes throughout the neuron. Through research published in PNAS in 2019, Lorenzo found that βII-spectrin is essential for normal brain wiring in mice and for proper transport of organelles and vesicles in axons – the long extensions that carry signals from neurons to other neurons. βII-spectrin is an integral part of the process that enables normal development, maintenance, and function of neurons.
In this new study, Lorenzo’s research team showed that, at the biochemical level, the genetic variants identified in patients are sufficient to cause protein aggregation and aberrant association of βII-spectrin with the cytoskeleton, impair axonal organelle transport and growth, and alter the morphology of neurons. These deficiencies can permanently alter how neurons connect and communicate with each other, which is thought to contribute to the etiology of neurodevelopmental disorders. The team showed that reduction of βII-spectrin levels only in neurons disrupts structural connectivity between cortical areas in mutant mice, a deficit also observed in brain MRIs of some patients.
In collaboration with Sheryl Moy, PhD, professor in the UNC Department of Psychiatry and director of the Mouse Behavioral Phenotyping (MBP) Core of the UNC IDDRC, the researchers found that these mice have developmental and behavioral deficits consistent with presentations observed in humans.
“Now that we’ve established the methods to assign likelihood of pathogenicity to SPTBN1 variants and to determine how they alter neurons, our immediate goal is to learn more about the affected molecular and cellular mechanisms and brain circuits, and evaluate strategies for potential clinical interventions,” Lorenzo said.
To this end, her team will collaborate with Adriana Beltran, PhD, assistant professor in the UNC Department of Genetics and director of the UNC Human Pluripotent Cell Core, to use neurons differentiated from patient-derived induced pluripotent stem cells. And the research team will continue to tap into molecular modeling predictions in collaboration with Brenda Temple, PhD, professor in the UNC Department of Biochemistry and Biophysics and director of the UNC Structural Bioinformatics Core, both co-authors on the Nature Genetics paper.
“As a basic science investigator, it’s so satisfying to use knowledge and tools to provide answers to patients,” Lorenzo said. “I first witnessed this thrill of scientific discovery and collaborative work as a graduate student 15 years ago when our lab identified the genetic cause of the first spectrinopathy affecting the nervous system, and it has been a powerful motivator since.”
That work was the discovery of variants in a different spectrin gene as the cause of spinocerebellar ataxia type 5 (SCA5), led by Laura Ranum, PhD, who at the time was at the University of Minnesota. In follow-up work, as part of that team, Lorenzo contributed insights into the pathogenic mechanism of SCA5.
“Aside from the immediate relevance to affected patients, insights from our work on SPTBN1 syndrome will inform discoveries in other complex disorders with overlapping pathologies,” Lorenzo said. “It is exciting to be part of such important work with a team of dedicated scientists and clinicians.”
Members of the Lorenzo lab who are co-authors on the Nature Genetics paper are co-first author Blake Creighton, lab research technician in the Lorenzo lab; Reggie Edwards, graduate student; Keith Breau, graduate student at the time of this research; Deepa Ajit, PhD, a postdoctoral fellow; Sruthi Dontu, Simone Afriyie, and Julia Bay, all undergraduates at UNC-Chapel Hill; and Liset Falcon, lab research technician at the time of this research. Other UNC-Chapel Hill collaborators and co-authors on the paper are Kathryn Harper, PhD, project manager in the MBP Core; and Lorena Munoz and Alvaro Beltran, both research associates in the UNC Human Pluripotent Cell Core.
This research was funded by grants from the National Institutes of Health and the National Ataxia Foundation.
Featured image: Right, βII-spectrin (magenta) forms aggregates throughout neurites of a mouse cortical neuron expressing one of the human SPTBN1 variants. (Lorenzo Lab)
Method could help improve color for electronic displays and create more natural LED lighting
If you’ve ever tried to capture a sunset with your smartphone, you know that the colors don’t always match what you see in real life. Researchers are coming closer to solving this problem with a new set of algorithms that make it possible to record and display color in digital images in a much more realistic fashion.
“When we see a beautiful scene, we want to record it and share it with others,” said Min Qiu, leader of the Laboratory of Photonics and Instrumentation for Nano Technology (PAINT) at Westlake University in China. “But we don’t want to see a digital photo or video with the wrong colors. Our new algorithms can help digital camera and electronic display developers better adapt their devices to our eyes.”
In Optica, The Optical Society’s (OSA) journal for high impact research, Qiu and colleagues describe a new approach for digitizing color. It can be applied to cameras and displays — including ones used for computers, televisions and mobile devices — and used to fine-tune the color of LED lighting.
“Our new approach can improve today’s commercially available displays or enhance the sense of reality for new technologies such as near-eye displays for virtual reality and augmented reality glasses,” said Jiyong Wang, a member of the PAINT research team. “It can also be used to produce LED lighting for hospitals, tunnels, submarines and airplanes that precisely mimics natural sunlight. This can help regulate circadian rhythm in people who are lacking sun exposure, for example.”
Mixing digital color
Digital colors such as the ones on a television or smartphone screen are typically created by combining red, green and blue (RGB), with each color assigned a value. For example, an RGB value of (255, 0, 0) represents pure red. The RGB value reflects a relative mixing ratio of three primary lights produced by an electronic device. However, not all devices produce this primary light in the same way, which means that identical RGB coordinates can look like different colors on different devices.
There are also other ways, or color spaces, used to define colors such as hue, saturation, value (HSV) or cyan, magenta, yellow and black (CMYK). To make it possible to compare colors in different color spaces, the International Commission on Illumination (CIE) issued standards for defining colors visible to humans based on the optical responses of our eyes. Applying these standards requires scientists and engineers to convert digital, computer-based color spaces such as RGB to CIE-based color spaces when designing and calibrating their electronic devices.
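The device-to-CIE conversion described above can be illustrated with the standard sRGB case. The sketch below uses the published sRGB gamma curve and sRGB-to-XYZ matrix (IEC 61966-2-1, D65 white point); it is a generic example of the kind of color-space conversion engineers perform, not code from the study.

```python
def srgb_to_xyz(r, g, b):
    """Convert an 8-bit sRGB triple to CIE 1931 XYZ tristimulus values."""
    def linearize(c):
        c = c / 255.0
        # undo the sRGB gamma (transfer) curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # standard sRGB -> XYZ matrix, D65 white point
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z

# pure red (255, 0, 0) contributes only the red row of the matrix,
# so its luminance Y is 0.2126
print(srgb_to_xyz(255, 0, 0))
```

Note that this conversion assumes the device actually produces sRGB primaries; as the article points out, real devices differ, which is exactly why identical RGB values can look different from screen to screen.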
In the new work, the researchers developed algorithms that directly correlate digital signals with the colors in a standard CIE color space, making color space conversions unnecessary. Colors, as defined by the CIE standards, are created through additive color mixing. This process involves calculating the CIE values for the primary lights driven by digital signals and then mixing those together to create the color. To encode colors based on the CIE standards, the algorithms convert the digital pulsed signals for each primary color into unique coordinates for the CIE color space. To decode the colors, another algorithm extracts the digital signals from an expected color in the CIE color space.
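Additive mixing in a CIE space can be sketched in a few lines: the tristimulus values of the mixed light are the drive-weighted sums of the primaries' tristimulus values, and chromaticity follows from normalizing. This is an illustration of additive color mixing in general, with made-up primaries; the paper's actual encoding of pulsed digital signals is more involved.

```python
def mix_primaries(primaries_xyz, weights):
    """Additively mix primary lights in CIE XYZ space.

    primaries_xyz: list of (X, Y, Z) tristimulus values, one per primary
    weights: relative drive levels (e.g. normalized pulse duty cycles)
    Returns the mixed (X, Y, Z) and its chromaticity coordinates (x, y).
    """
    X = sum(w * p[0] for p, w in zip(primaries_xyz, weights))
    Y = sum(w * p[1] for p, w in zip(primaries_xyz, weights))
    Z = sum(w * p[2] for p, w in zip(primaries_xyz, weights))
    s = X + Y + Z
    return (X, Y, Z), (X / s, Y / s)

# illustrative primaries: the XYZ of sRGB's red, green and blue at full drive
primaries = [(0.4124, 0.2126, 0.0193),
             (0.3576, 0.7152, 0.1192),
             (0.1805, 0.0722, 0.9505)]
# equal full drive of all three reproduces the D65 white chromaticity
xyz, chromaticity = mix_primaries(primaries, [1.0, 1.0, 1.0])
print(chromaticity)
```

Because the mixing is done directly in CIE coordinates, the result is device independent, which is the property the quote below emphasizes.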
“Our new method maps the digital signals directly to a CIE color space,” said Wang. “Because such color space isn’t device dependent, the same values should be perceived as the same color even if different devices are used. Our algorithms also allow other important properties of color such as brightness and chromaticity to be treated independently and precisely.”
Creating precise colors
The researchers tested their new algorithms with lighting, display and sensing applications that involved LEDs and lasers. Their results agreed very well with their expectations and calculations. For example, they showed that chromaticity, which is a measure of colorfulness independent of brightness, could be controlled with a deviation of just ~0.0001 for LEDs and 0.001 for lasers. These values are so small that most people would not be able to perceive any differences in color.
The researchers say that the method is ready to be applied to LED lights and commercially available displays. However, achieving the ultimate goal of reproducing exactly what we see with our eyes will require solving additional scientific and technical problems. For example, to record a scene as we see it, color sensors in a digital camera would need to respond to light in the same way as the photoreceptors in our eyes.
To further build on their work, the researchers are using state-of-the-art nanotechnologies to enhance the sensitivity of color sensors. This could be applied for artificial vision technologies to help people who have color blindness, for example.
Paper: N. Tang, L. Zhang, J. Zhou, J. Yu, B. Chen, Y. Peng, X. Tian, W. Yan, J. Wang and M. Qiu, “Nonlinear Color Space Coded by Additive Digital Pulses,” Optica, 8, 7, 977-983 (2021). DOI: https://doi.org/10.1364/OPTICA.422287.
Researchers use a computer model to explain how children integrate information during word learning
“We know that children use a lot of different information sources in their social environment, including their own knowledge, to learn new words. But the picture that emerges from the existing research is that children have a bag of tricks that they can use”, says Manuel Bohn, a researcher at the Max Planck Institute for Evolutionary Anthropology.
For example, if you show a child an object they already know – say a cup – as well as an object they have never seen before, the child will usually think that a word they never heard before belongs with the new object. Why? Children use information in the form of their existing knowledge of words (the thing you drink out of is called a “cup”) to infer that the object that doesn’t have a name goes with the name that doesn’t have an object. Other information comes from the social context: children remember past interactions with a speaker to find out what they are likely to talk about next.
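The mutual-exclusivity inference described above can be captured in a toy Bayesian listener: a referent whose known label matches the heard word is very likely to be the one meant, so a novel word shifts belief to the unnamed object. The function, referent names, and probability values below are illustrative assumptions, not part of the study's model.

```python
def infer_referent(word, referents, lexicon, eps=0.05):
    """Toy Bayesian listener: P(referent | word) is proportional to
    P(word | referent).

    If the referent has a known label, the speaker would almost surely
    use it, so any other word gets small likelihood eps. A referent
    with no known label could plausibly go with a novel word (0.5).
    """
    likes = []
    for ref in referents:
        if ref in lexicon:
            likes.append(1.0 if lexicon[ref] == word else eps)
        else:
            likes.append(0.5)
    total = sum(likes)
    return {ref: l / total for ref, l in zip(referents, likes)}

lexicon = {"cup": "cup"}          # the child already knows the word "cup"
referents = ["cup", "novel_toy"]  # the novel object has no known name
# a never-heard word like "dax" ends up favoring the unnamed object
print(infer_referent("dax", referents, lexicon))
```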
“But in the real world, children learn words in complex social settings in which more than just one type of information is available. They have to use their knowledge of words while interacting with a speaker. Word learning always requires integrating multiple, different information sources”, Bohn continues. An open question is how children combine different, sometimes even conflicting, sources of information.
Predictions by a computer program
In a new study, a team of researchers from the Max Planck Institute for Evolutionary Anthropology, MIT, and Stanford University takes on this issue. In a first step, they conducted a series of experiments to measure children’s sensitivity to different information sources. Next, they formulated a computational cognitive model which details the way that this information is integrated.
“You can think of this model as a little computer program. We input children’s sensitivity to different information, which we measure in separate experiments, and then the program simulates what should happen if those information sources are combined in a rational way. The model spits out predictions for what should happen in hypothetical new situations in which these information sources are all available”, explains Michael Henry Tessler, one of the lead authors of the study.
In a final step, the researchers turned these hypothetical situations into real experiments. They collected data with two- to five-year-old children to test how well the predictions from the model line up with real-world data. Bohn sums up the results: “It is remarkable how well the rational model of information integration predicted children’s actual behavior in these new situations. It tells us we are on the right track in understanding from a mathematical perspective how children learn language.”
Language learning as a social inference problem
How does the model work? The algorithm that processes the different information sources and integrates them is inspired by decades of research in philosophy, developmental psychology, and linguistics. At its heart, the model looks at language learning as a social inference problem, in which the child tries to find out what the speaker means – what their intention is. The different information sources are all systematically related to this underlying intention, which provides a natural way of integrating them.
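In Bayesian terms, integrating sources that are all tied to the speaker's underlying intention amounts to multiplying a prior over intended referents by per-source likelihoods and normalizing. The sketch below is a generic illustration of this kind of rational integration, with invented numbers; it is not the authors' actual model.

```python
def rational_integration(prior, *likelihoods):
    """Combine independent information sources about the speaker's
    intended referent: posterior(r) is proportional to
    prior(r) * product of likelihood_i(r), then normalized."""
    scores = {}
    for ref, p in prior.items():
        for like in likelihoods:
            p *= like[ref]
        scores[ref] = p
    z = sum(scores.values())
    return {ref: s / z for ref, s in scores.items()}

# two sources pulling in different directions (values are made up):
history = {"cup": 0.7, "novel_toy": 0.3}   # speaker talked about cups before
lexical = {"cup": 0.1, "novel_toy": 0.9}   # but the word itself is novel
print(rational_integration(history, lexical))
```

Even a strong conversational-history prior can be outweighed when the lexical evidence points the other way, which is the sense in which the sources are traded off rationally rather than used one at a time.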
The model also specifies what changes as children get older. Over development, children become more sensitive to the individual information sources, yet the social reasoning process that integrates those sources remains the same.
“The virtue of computational modeling is that you can articulate a range of alternative hypotheses – alternative models – with different internal wiring to test if other theories would make equally good or better predictions. In some of these alternatives, we assumed that children ignore some of the information sources. In others, we assumed that the way in which children integrate the different information sources changes with age. None of these alternative models provided a better explanation of children’s behavior than the rational integration model”, explains Tessler.
The study offers several exciting and thought-provoking results that inform our understanding of how children learn language. Beyond that, it opens up a new, interdisciplinary way of doing research. “Our goal was to put formal models in a direct dialogue with experimental data. These two approaches have been largely separated in child development research”, says Manuel Bohn. The next steps in this research program will be to test the robustness of this theoretical model. To do so, the team is currently working on experiments that involve a new set of information sources to be integrated.