The gravitational collapse of a massive star is a natural process that can produce a black hole. But have you ever considered that a massive star might also gravitationally collapse into a wormhole? That is exactly what Chakrabarti and Kar examined in a recent paper. They proposed a non-singular model of gravitational collapse and explored the possibility of wormhole formation. They showed that a time-dependent (Lorentzian) wormhole geometry can arise in gravitational collapse, and that this wormhole structure is very similar to the recently proposed Simpson-Visser vacuum solution. Their study recently appeared on arXiv.
The Simpson-Visser solution is a modification of the standard Schwarzschild spacetime in which an additional parameter ‘a’ is introduced into the metric; this parameter controls the interpolation between a standard Schwarzschild black hole and a Morris-Thorne traversable wormhole.
In other words, different values of the parameter ‘a’ yield different geometric structures:
When a = 0, you recover the standard Schwarzschild geometry.
When 0 < a < 2m, you get a regular black hole.
When a = 2m, you get a one-way wormhole with an extremal null throat.
When a > 2m, you get a two-way traversable wormhole geometry.
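For context, the Simpson-Visser line element (quoting the standard form from the literature, in geometrized units with G = c = 1) is:

```latex
ds^2 = -\left(1 - \frac{2m}{\sqrt{r^2 + a^2}}\right) dt^2
       + \left(1 - \frac{2m}{\sqrt{r^2 + a^2}}\right)^{-1} dr^2
       + \left(r^2 + a^2\right)\left(d\theta^2 + \sin^2\theta \, d\varphi^2\right)
```

Setting a = 0 reduces the square roots to r and recovers Schwarzschild, while for a ≠ 0 the area radius √(r² + a²) never reaches zero, which is why the curvature singularity at r = 0 is avoided.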
Chakrabarti and Kar studied the time evolution of the collapsing wormhole geometry, which is very similar to the Simpson-Visser vacuum solution.
They investigated the behavior of geodesic congruences and confirmed that no zero proper volume singularity is reached at any time.
Using a suitable boundary matching condition, they also derived an exact collapsing solution that slowly evolves into a spherical wormhole geometry at a non-zero minimum radius.
“The term responsible for the wormhole structure comes from the g11 component of the metric.”
Finally, they argued that the singularity-free nature of the spacetime stems from its wormhole-like structure (the collapsing sphere), which in turn entails a violation of the Null Convergence Condition.
A team of researchers at USC is helping AI imagine the unseen, a technique that could also lead to fairer AI, new medicines and increased autonomous vehicle safety.
Imagine an orange cat. Now, imagine the same cat, but with coal-black fur. Now, imagine the cat strutting along the Great Wall of China. With a quick series of neuron activations, your brain conjures up these variations of the original picture, drawing on your previous knowledge of the world.
In other words, as humans, it’s easy to envision an object with different attributes. But, despite advances in deep neural networks that match or surpass human performance in certain tasks, computers still struggle with the very human skill of “imagination.”
Now, a USC research team comprising computer science Professor Laurent Itti and PhD students Yunhao Ge, Sami Abu-El-Haija and Gan Xin has developed an AI that uses human-like capabilities to imagine a never-before-seen object with different attributes. The paper, titled Zero-Shot Synthesis with Group-Supervised Learning, was presented at the 2021 International Conference on Learning Representations (ICLR) on May 7.
“We were inspired by human visual generalization capabilities to try to simulate human imagination in machines,” said Ge, the study’s lead author.
“Humans can separate their learned knowledge by attributes—for instance, shape, pose, position, color—and then recombine them to imagine a new object. Our paper attempts to simulate this process using neural networks.”
AI’s generalization problem
For instance, say you want to create an AI system that generates images of cars. Ideally, you would provide the algorithm with a few images of a car, and it would be able to generate many types of cars—from Porsches to Pontiacs to pick-up trucks—in any color, from multiple angles.
This is one of the long-sought goals of AI: creating models that can extrapolate. This means that, given a few examples, the model should be able to extract the underlying rules and apply them to a vast range of novel examples it hasn’t seen before. But machines are most commonly trained on sample features, pixels for instance, without taking into account the object’s attributes.
The science of imagination
In this new study, the researchers attempt to overcome this limitation using a concept called disentanglement. Disentanglement can be used to generate deepfakes, for instance, by disentangling human face movements and identity. By doing this, said Ge, “people can synthesize new images and videos that substitute the original person’s identity with another person, but keep the original movement.”
Similarly, the new approach takes a group of sample images—rather than one sample at a time as traditional algorithms have done—and mines the similarity between them to achieve something called “controllable disentangled representation learning.”
Then, it recombines this knowledge to achieve “controllable novel image synthesis,” or what you might call imagination. “Take the Transformers movies as an example,” said Ge. “The system can take the shape of a Megatron car, the color and pose of a yellow Bumblebee car, and the background of New York’s Times Square. The result will be a Bumblebee-colored Megatron car driving in Times Square, even if this combination was never witnessed during training.”
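To make the idea concrete, here is a toy sketch (an illustration, not the authors' actual network) of what disentangled recombination looks like: each image is encoded into a latent vector partitioned into attribute "slots," and a new latent is assembled by copying each slot from a different source image.

```python
# Hypothetical 12-dimensional latent split into three attribute slots.
# In the real system, a trained encoder/decoder pair would produce and
# consume these vectors; here the encodings are stand-in constants.
SLOTS = {"shape": slice(0, 4), "color": slice(4, 8), "background": slice(8, 12)}

def recombine(latents, choices):
    """Assemble a new latent, taking each attribute slot from the chosen source."""
    out = [0.0] * 12
    for attr, source in choices.items():
        out[SLOTS[attr]] = latents[source][SLOTS[attr]]
    return out

latents = {
    "megatron":     [1.0] * 12,   # stand-in encoding of a Megatron image
    "bumblebee":    [2.0] * 12,   # stand-in encoding of a Bumblebee image
    "times_square": [3.0] * 12,   # stand-in encoding of a Times Square scene
}

z = recombine(latents, {"shape": "megatron",
                        "color": "bumblebee",
                        "background": "times_square"})
# z now carries Megatron's shape slot, Bumblebee's color slot, and the
# Times Square background slot, ready to be fed to a decoder.
```

The contribution of group-supervised learning is precisely in training the encoder so that each slot really does isolate one attribute; once that holds, recombination reduces to this kind of slot-wise copy.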
This is similar to how we as humans extrapolate: when a human sees a color from one object, we can easily apply it to any other object by substituting the original color with the new one. Using their technique, the group generated a new dataset containing 1.56 million images that could help future research in the field.
Understanding the world
While disentanglement is not a new idea, the researchers say their framework is compatible with nearly any type of data or knowledge, which widens the range of possible applications. For instance, disentangling race- and gender-related knowledge could make AI fairer by removing sensitive attributes from the equation altogether.
In the field of medicine, it could help doctors and biologists discover more useful drugs by disentangling the medicine function from other properties, and then recombining them to synthesize new medicine. Imbuing machines with imagination could also help create safer AI by, for instance, allowing autonomous vehicles to imagine and avoid dangerous scenarios previously unseen during training.
“Deep learning has already demonstrated unsurpassed performance and promise in many domains, but all too often this has happened through shallow mimicry, and without a deeper understanding of the separate attributes that make each object unique,” said Itti. “This new disentanglement approach, for the first time, truly unleashes a new sense of imagination in A.I. systems, bringing them closer to humans’ understanding of the world.”
Featured image: The new AI system takes its inspiration from humans: when a human sees a color from one object, we can easily apply it to any other object by substituting the original color with the new one. Illustration/Chris Kim.
Tests Can Accurately Detect Virus in Minutes Using Nanoparticle and Electrochemical Sensing Techniques
Researchers at the University of Maryland School of Medicine (UMSOM) have developed two rapid diagnostic tests for COVID-19 that are nearly as accurate as the gold-standard test currently used in laboratories. Unlike the gold-standard test, which extracts the virus’s RNA and converts it into DNA for amplification, these new tests can detect the presence of the virus in as little as five minutes using different methods.
One test is a COVID-19 molecular diagnostic test, called Antisense, that uses electrochemical sensing to detect the presence of the virus. The other uses a simple assay of gold nanoparticles to detect a color change when the virus is present. Both tests were developed by Dipanjan Pan, PhD, Professor of Diagnostic Radiology and Nuclear Medicine and Pediatrics at UMSOM and his research team. Dr. Pan has a joint appointment at the University of Maryland Baltimore County (UMBC).
“These tests detect the presence of the virus within 5 to 10 minutes and rely on simple processes that can be performed with little lab training,” said Dr. Pan. They do not require the extraction of the virus’s RNA, which is both complicated and time-consuming.
They also are more reliable than the rapid antigen tests currently on the market, which detect the virus only in those with significantly high viral levels. “These two newer tests are extremely sensitive and can detect the presence of the virus, even in those with low levels of the virus,” Dr. Pan said.
Dr. Pan’s team included UMSOM research fellow Maha Alafeef, UMSOM research associate Parikshit Moitra, PhD, and research fellow Ketan Dighe, from UMBC.
The U.S. Food and Drug Administration (FDA) registered the laboratory of Dr. Pan as an approved laboratory development site for the Antisense test in June. The move paves the way for Dr. Pan’s laboratory to begin conducting the test at the university, in research settings, as it undergoes further development.
In February, RNA Disease Diagnostics, Inc. (RNADD) received an exclusive global license from UMB and UMBC to commercialize the test. Dr. Pan serves as an unpaid scientific advisor to the company.
This test detects the virus in a swab sample using electrochemical sensing, via a dual-pronged molecular detection approach that identifies the SARS-CoV-2 virus rapidly. “The final prototype is like a glucometer, which patients with diabetes use at home to measure their blood glucose levels,” said Dr. Pan, “and is just as easy for people to do themselves.”
Dr. Pan and his colleagues, in collaboration with RNA Disease Diagnostics, are launching a study of NBA basketball players in New York City to compare the Antisense test to rapid COVID tests that the NBA is using to monitor COVID infections in its players.
“We would like to see whether our test can yield more reliable results compared to the existing platforms,” he said. “Current antigen-based rapid COVID tests miss infections about 20 percent of the time and also have high rates of false positive results. Our Antisense test appears to be about 98 percent reliable, which is similar to the PCR test.”
Similar to the Antisense test, the second rapid test also does not require the use of any advanced laboratory techniques, such as those commonly used to extract RNA, for analysis. It uses a simple assay containing plasmonic gold nanoparticles to detect a color change when the virus is present. In April, Dr. Pan and his colleagues published a stepwise protocol in the journal Nature Protocols, explaining how the nano-amplified colorimetric test works and how it can be used.
Once a nasal swab or saliva sample is obtained from a patient, the nucleic acid (bits of genetic material) in the sample is amplified via a simple process that takes about 10 minutes. The test then uses a highly specific molecule attached to the gold nanoparticles to detect a particular gene sequence that is unique to the novel coronavirus. When the biosensor binds to the virus’s gene sequence, the gold nanoparticles respond by turning the liquid reagent from purple to blue.
“Innovations in COVID-19 testing remain incredibly important even as the epidemic appears to be waning in this country,” said E. Albert Reece, MD, PhD, MBA, Executive Vice President for Medical Affairs, UM Baltimore, and the John Z. and Akiko K. Bowers Distinguished Professor and Dean, University of Maryland School of Medicine. “As we continue to monitor infections in unvaccinated segments of our population and the potential spread of new variants, there will be a vital need for inexpensive rapid tests to ensure that we continue to maintain low infection rates.”
Reference: Alafeef, M., Moitra, P., Dighe, K. et al. RNA-extraction-free nano-amplified colorimetric test for point-of-care clinical diagnosis of COVID-19. Nat Protoc 16, 3141–3162 (2021). https://doi.org/10.1038/s41596-021-00546-w
Scientists from the Regeneron Genetics Center (RGC) have discovered rare genetic mutations in the GPR75 gene that are associated with protection against obesity.
As part of the research that led to the finding, published in Science, RGC scientists analyzed deidentified genetic and associated health data from 645,000 volunteers from the United Kingdom, United States and Mexico, including participants in Geisinger’s MyCode Community Health Initiative.
It is estimated that more than one billion people will be suffering from severe obesity (body mass index [BMI] of 35 or higher) by 2030. Working with collaborators, RGC scientists found that individuals who have at least one inactive copy of the GPR75 gene have lower BMI and, on average, tend to weigh about 12 pounds less and face a 54% lower risk of obesity than those without the mutation. Protective mutations were found in about one of every 3,000 people sequenced.
“This is a potentially game-changing discovery that could improve the lives and health of millions of people dealing with obesity, for whom lasting interventions have often been elusive,” said Christopher D. Still, D.O., director for the Geisinger Obesity Research Institute at Geisinger Medical Center. “While the behavioral and environmental ties to obesity are well understood, the discovery of GPR75 helps us put the puzzle pieces together to better understand the influence of genetics. Further studies and evaluation are needed to determine if reducing weight in this manner can also lower the risk of conditions commonly associated with high BMI, such as heart disease, diabetes, high blood pressure and fatty liver disease.”
Regeneron scientists, collaborating with scientists at New York Medical College, replicated their finding in mice that were genetically engineered using Regeneron’s VelociGene® technology to lack copies of the GPR75 gene. Such mice gained 44% less weight than mice without the mutation when both groups were fed a high-fat diet. Regeneron scientists are pursuing multiple therapeutic pathways – such as antibody, small molecule and gene silencing approaches – based on this newly discovered genetic target.
“Discovering protective genetic superpowers, such as in GPR75, provides hope in combatting global health challenges as complex and prevalent as obesity,” said George D. Yancopoulos, M.D., Ph.D., co-founder, president and chief scientific officer at Regeneron. “Discovery of protective mutations – many of which have been made by the Regeneron Genetics Center in its eight-year history – will allow us to unlock the full potential of genetic medicine by instructing on where to deploy cutting-edge approaches like gene-editing, gene-silencing and viral vector technologies.”
Reference: Parsa Akbari et al., “Sequencing of 640,000 exomes identifies GPR75 variants associated with protection from obesity,” Science 373, eabf8683 (2 July 2021). https://doi.org/10.1126/science.abf8683
An accurate understanding of the propagation of coronal mass ejections (CMEs) is crucial for the prediction of space weather. CMEs generate geomagnetic storms that can cause catastrophic damage to power grids on Earth, and they pose a serious radiation threat to satellites in low-Earth orbit and to their crews during spacewalks. Basic parameters such as velocity and acceleration, and the way they vary with time and heliospheric distance from the Sun, give researchers the opportunity to predict a CME’s arrival time in the vicinity of the Earth. In this paper, we analyze the trends in these parameters across solar cycles 23 and 24.
Space weather is mostly controlled by coronal mass ejections (CMEs), which are huge expulsions of magnetized plasma from the solar atmosphere. They have been studied intensively for their significant impact on the Earth’s environment. The first CME was recorded by the coronagraph on board the seventh Orbiting Solar Observatory (OSO-7) satellite. Since 1995, CMEs have been studied intensively using the sensitive Large Angle and Spectrometric Coronagraph (LASCO) instrument on board the Solar and Heliospheric Observatory (SOHO) spacecraft. SOHO/LASCO recorded about 30,000 CMEs up to December 2017. The basic attributes of CMEs, determined manually from LASCO images, are stored in the SOHO/LASCO catalog. The initial velocity of a CME, obtained by fitting a straight line to its height-time data points, has been the basic parameter used to predict the geoeffectiveness of CMEs.
The two basic parameters, velocity and acceleration, are obtained by fitting a straight line and a quadratic, respectively, to all the height-time data points measured for a given event. Parameters determined in this way reflect, in some sense, average values over the field of view of the LASCO coronagraphs. It is evident, however, that both parameters change continuously with distance and time after the CME’s onset from the Sun. Therefore, the average values of velocity and acceleration used in many studies do not give a correct description of CME propagation. In this paper we present a statistical study of the kinematic properties of 28,894 CMEs recorded by LASCO from 1996 to mid-2017. This research covers a large number of events observed during solar cycles 23 and 24. For the study, we employed SOHO/LASCO catalog data and a new technique to determine the speed of ejections.
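As a rough sketch of how those catalog-style average kinematics are derived (with made-up height-time points, not actual LASCO measurements), the fits reduce to a linear fit for the mean speed and a quadratic fit for the mean acceleration:

```python
import numpy as np

# Illustrative height-time points for one hypothetical CME (not LASCO data):
# times in seconds since first detection, leading-edge heights in solar radii.
R_SUN = 6.957e8                                    # meters
t = np.array([0.0, 1800.0, 3600.0, 5400.0, 7200.0])
h = np.array([3.0, 4.6, 6.4, 8.4, 10.6]) * R_SUN   # heights converted to meters

# Linear fit: the slope is the average speed over the field of view.
v_mean = np.polyfit(t, h, 1)[0]                    # m/s

# Quadratic fit: twice the leading coefficient is the average acceleration.
a_mean = 2.0 * np.polyfit(t, h, 2)[0]              # m/s^2

print(f"mean speed ~ {v_mean / 1e3:.0f} km/s, mean acceleration ~ {a_mean:.1f} m/s^2")
```

These fabricated points accelerate steadily, so the quadratic fit returns a positive average acceleration; as noted above, such whole-event averages blur the distinct initial and residual acceleration phases.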
The presented statistical analysis reveals that at the beginning of their expansion, in the vicinity of the Sun, CMEs are subject to several factors (the Lorentz force, CME-CME interactions, speed differences between the leading and trailing parts of the CME) that determine their propagation. Although the average catalog accelerations are always close to zero, a more detailed study shows that instantaneous accelerations may differ considerably, depending on the conditions prevailing on the Sun and in the environment through which a CME propagates. These conditions vary from eruption to eruption and over time as solar activity changes. The initial acceleration phase is characterized by a rapid increase in CME velocity just after eruption in the inner corona. This phase is followed by a much weaker residual acceleration (or deceleration), during which the CME speed is almost constant. We demonstrate that the initial acceleration lies in the range 0.24–2616 m s⁻², with a median value of 57 m s⁻², and that it takes place up to a distance of about 28 R_Sun, with a median (average) value of 7.8 R_Sun (6 R_Sun).
We note that the main driving force of a CME, namely the Lorentz force, can operate up to a distance of 6 R_Sun from the Sun during the first 2 hours of propagation. We found a significant anti-correlation between the initial acceleration magnitude and the acceleration duration, whereas the residual acceleration covers a range from −1224 to 0 m s⁻², with a median (average) value of −34 m s⁻² (−17 m s⁻²). One intriguing finding is that the residual acceleration is much smaller during solar cycle 24 than during cycle 23. Our study has also revealed that the considered parameters, namely the initial acceleration (ACCINI), residual acceleration (ACCRES), maximum velocity (VMAX), and time at maximum velocity (TimeMAX), mostly follow the solar cycles and the intensities of the individual cycles.
The research was conducted at the Department of High Energy Astrophysics of the Jagiellonian University’s Astronomical Observatory (OAUJ). The work was supported by the Polish National Science Centre through the grant UMO-2017/25/B/ST9/00536 and DSC grant N17/MNS/000038. This work was also supported by the NASA LWS project led by Dr. N. Gopalswamy.
A diagnostic tool called the MasSpec Pen has been tested for the first time in pancreatic cancer patients during surgery. The device was shown to accurately identify tissues and surgical margins directly in patients and to differentiate healthy from cancerous tissue in banked pancreas samples. At about 15 seconds per analysis, the method is more than 100 times faster than the current gold-standard diagnostic, frozen section analysis. The ability to accurately identify the margin between healthy and cancerous tissue in pancreatic cancer surgeries can give patients the greatest chance of survival.
“These results show the technology works in the clinic for surgical guidance,” said Livia Schiavinato Eberlin, an assistant professor of chemistry at UT Austin who leads the team that invented the pen, in collaboration with James Suliburk, head of endocrine surgery at Baylor. “Surgeons can easily integrate the MasSpec Pen into their workflow, and the initial data really supports the diagnostic accuracy we were expecting to achieve.”
The most common type of pancreatic cancer, pancreatic ductal adenocarcinoma, spreads rapidly and is highly lethal, with a five-year survival rate of 9% for all stages. The most effective treatment option is surgical removal of the tumor.
Cancer surgeons face a dilemma: It’s often difficult to tell good tissue from bad. If any cancerous tissue is left behind, there’s a risk the tumor will regrow, potentially requiring the patient to undergo additional rounds of surgery, radiation or chemotherapy, and decreasing the chances of survival. On the other hand, removing too much healthy tissue, especially from vital organs, can also compromise a patient’s health. Determining the margin between healthy and cancerous tissue is critical to a successful surgery.
For this study, the investigators first used the MasSpec Pen to analyze 157 banked human pancreatic tissues to develop and evaluate the technology in the laboratory for pancreatic cancer. Then, the investigators moved the system to the operating room at Baylor St. Luke’s Medical Center in Houston, which is affiliated with Baylor College of Medicine, where the surgeons tested the technology in 18 pancreatic surgeries. The pen has been tested in more than 150 human surgeries to date, including breast and thyroid procedures, and results of those additional tests will be submitted for publication soon.
Mary King, a graduate student and the study’s first author, and other members of the Eberlin research group operated the mass spectrometer during surgery.
A typical surgery to remove a pancreatic tumor can take from 6 to 12 hours.
“Surgery of the pancreas is a very complex and detailed surgery that requires numerous intraoperative decisions over several hours that can have long-lasting effects on oncologic outcomes for patients with pancreas cancer,” said George Van Buren, M.D., associate professor of surgery at Baylor and one of the surgeons who performed operations during the experiment. “The MasSpec Pen technology opens the door for real-time, precision medicine to be performed in the operating room at a level that has never been seen before.”
These are the first published results of intraoperative use of the MasSpec Pen, in other words, on intact or just-removed tissue from patients during surgery. Preclinical research published about the technology in 2017 led to widespread enthusiasm and interest, including from writers in Hollywood who adapted the idea for a segment on the television program “Grey’s Anatomy.”
The researchers plan eventually to submit the design to the U.S. Food and Drug Administration for approval as a medical device.
Banked tissue samples were provided by the Cooperative Human Tissue Network and Baylor.
This work was supported in part by the National Cancer Institute of the National Institutes of Health, and by the Gordon and Betty Moore Foundation. Livia Eberlin receives support for research in her lab from the Cancer Prevention and Research Institute of Texas.
King, Suliburk, Eberlin, Jialing Zhang and others are inventors in US Patent No. 10,643,832 and/or in other patent applications related to the MasSpec Pen technology licensed by The University of Texas to MS Pen Technologies Inc. and its subsidiary Genio Technologies. Zhang, Suliburk and Eberlin are shareholders in MS Pen Technologies Inc.
The University of Texas at Austin is committed to transparency and disclosure of all potential conflicts of interest. The university investigator who led this research, Livia Schiavinato Eberlin, has submitted required financial disclosure forms with the university. Eberlin is a co-founder with an equity stake in MSP Technologies, an inventor-led startup formed to commercialize the MasSpec Pen technology, owned by the university.
Featured image: Jialing Zhang demonstrates using the MasSpec Pen on a human tissue sample. Photo credit: Vivian Abagiu/Univ. of Texas at Austin.
Infrared data obtained by SOFIA was crucial for studying an outburst from a massive protostar in the iconic Cat’s Paw Nebula that is now glowing at 50,000 times the luminosity of the Sun.
Even though the birth of stars is hidden from the view of even the most powerful optical telescopes, longer-wavelength infrared and millimeter light can pierce through the vast quantities of obscuring gas and dust. These observations reveal the environments where massive stars are forming and enable astronomers to finally compare the physics governing these lesser-known processes with the physics that is observationally well established for low-mass stars like our Sun.
Stars form via the gradual, continuous accretion of matter from a surrounding disk. But this steady process is occasionally interrupted as a massive clump from the disk falls onto the forming star, causing a tremendous outburst of energy that can last from several months to hundreds of years. Such outbursts have been seen in dozens of low-mass protostars during the past 50 years.
While monitoring NGC 6334 I, a well-studied protostellar cluster in the Cat’s Paw Nebula, researchers discovered a millimeter outburst from a massive protostar with the Atacama Large Millimeter/submillimeter Array (ALMA). Unlike some of its companions, this particular protostar is so deeply embedded that it was not even detectable in the infrared prior to the outburst. The findings are published in the Astrophysical Journal Letters. The lead author is Todd Hunter from NRAO, and Universities Space Research Association’s James De Buizer is a co-author (DOI: 10.3847/2041-8213/abf6d9).
“The most interesting aspects of this outburst are the extreme luminosity and longevity,” said James De Buizer, USRA senior scientist for SOFIA based at Ames. “This event now exceeds all other accretion outbursts in massive protostars by a factor of about three in both energy output and duration.”
Using the Stratospheric Observatory for Infrared Astronomy (SOFIA), the region was revisited after the discovery of the millimeter outburst. Observations by SOFIA’s FORCAST and HAWC+ instruments revealed that infrared emission from the protostar had also increased considerably. Not only could the protostar be seen in the infrared, but it was now the brightest infrared source in the entire cluster.
Because the radiation generated from an accretion event emerges mainly in the infrared, SOFIA data is crucial for deriving the total luminosity of the young star and the fundamental parameters of the outburst. Combining the SOFIA and ALMA data allowed astronomers to test predictions of how massive disks fragment. It also helped them rule out alternative causes of the outburst, like a stellar merger, or less likely explanations for the protostar’s sudden appearance, like changes in gas and dust clouds along the telescope’s line of sight.
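The role the infrared data plays in "deriving the total luminosity" is essentially integrating the observed flux and scaling by distance via the inverse-square law. A back-of-the-envelope sketch, with assumed illustrative numbers (the distance and the integrated flux below are placeholders, not values from the paper):

```python
import math

L_SUN = 3.828e26          # solar luminosity, W
PC = 3.0857e16            # parsec, m

# Assumed illustrative inputs (not from the paper):
d = 1300.0 * PC           # rough distance to NGC 6334, ~1.3 kpc
F_bol = 9.47e-10          # hypothetical integrated (mostly infrared) flux, W/m^2

# Total luminosity from the inverse-square law: L = 4 * pi * d^2 * F
L = 4.0 * math.pi * d**2 * F_bol
print(f"L ~ {L / L_SUN:,.0f} L_sun")
```

With these placeholder inputs the result lands near the 50,000 solar luminosities quoted above; the actual derivation in the paper folds in SOFIA/FORCAST and HAWC+ photometry across many infrared bands.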
“Since the matter distribution surrounding the star is clumpy, fragments occasionally fall onto the growing star,” said Todd Hunter, an astronomer at the National Radio Astronomy Observatory (NRAO), in Charlottesville, Virginia, and lead author on the paper. “In this case, it may have even triggered a temporary change in the size and temperature of the protostar.”
These observations demonstrate the importance of continued access to infrared observations for enabling time-domain studies of the crucial accretion stage of massive star formation. Moreover, the new SOFIA observations provide strong evidence of episodic accretion in young massive stars. Presumably such accretion bursts, while rare for any individual object, occur frequently somewhere in the galaxy, given the large number of protostars in this phase.
“Without SOFIA, accurate measurements of the mass and luminosity of this event and future events would not be possible,” added co-author Crystal Brogan, also an astronomer at NRAO.
These new findings confirm that the formation of high-mass stars can be considered a scaled-up version of the process by which low-mass stars, like our Sun, are born. The main differences are that massive stars would form with larger disks, higher accretion rates, and on much shorter time scales (around 100,000 years instead of several million years).
Featured image: The Cat’s Paw Nebula (NGC 6334), imaged here by NASA’s Spitzer Space Telescope using the IRAC instrument, is a star-forming region of the Milky Way galaxy. The dark filament running through the middle of the nebula is a particularly dense region of gas and dust. The inset shows the region of the high-mass protostar with pre- and post-outburst luminosity imaged by the Cerro Tololo Inter-American Observatory and NASA’s Stratospheric Observatory for Infrared Astronomy, respectively. Credits: Cat’s Paw Nebula: NASA/JPL-Caltech; Left inset: De Buizer et al. 2000; Right inset: Hunter et al. 2021
A new research approach reveals how the brain learns new rules
Learning new relations, or “rules”, between elements in the world is critical to the survival of all animal species. It allows them to recognize repeating patterns in the environment and to deduce general principles for coping with similar situations in the future. The ability of primates, humans in particular, to learn new rules is especially developed, and this enhances our capacity for making complex decisions and for planning ahead. Since our natural environment is rich and extensively dynamic, a key challenge for the brain is to distill applicable rules from the ceaseless stimuli that surround us. Extracting the features of the environment that are relevant and understanding their correct relation to one another allow us to form rules that could be applicable in future situations that might be similar, but not identical, to the scenario in which the initial learning process took place.
Although such learning processes have been studied extensively on a behavioral level, tracing them as they unfold in the brain in real time has remained a significant challenge. A new study published by Prof. Rony Paz, Prof. Elad Schneidman and Dr. Yarden Cohen of the Neurobiology Department at the Weizmann Institute of Science develops and offers an innovative approach to studying rule learning that combines computational modeling and live recording of neuronal activity in the brain.
The researchers used a learning task that is based on the classification of small patterns, composed of several white or black squares, which are organized in different layouts. For every experimental session, the researchers define a rule that divides the patterns into two groups: for example, one group in which the right-most square is black, and another group where it is white. In every iteration, the participants learn their task through trial and error – they are presented with one pattern at a time, and they must then decide to which one of the two groups it belongs; a successful iteration is rewarded. Gradually, participants may learn the current session’s underlying rule, or some approximation of it. (In a small number of cases, their performance did not improve during the session, reflecting a failure to learn.)
An effective learning task: relatively basic patterns that can be used to develop a huge variety of rules – from simple to complex
This learning task turned out to be very effective: the building blocks of the task – the patterns – are relatively simple, but they can be used to define a huge variety (in the thousands) of rules, some simple, others more complex. These patterns have an additional important advantage: all rules, namely all possible ways of partitioning the patterns into two groups, can be represented in geometrical terms. The data can be described as a multidimensional cube, where each rule is represented by a vector – an arrow pointing in a specific direction within the cube.
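This geometric picture can be sketched in a few lines of code. In this minimal, hypothetical setup (the paper's actual pattern size is not specified here), each pattern is four squares, each black or white, giving 16 patterns; a rule assigns each pattern a +1 or -1 label, so every rule is one vertex of a 16-dimensional cube:

```python
from itertools import product

# Hypothetical setup: patterns of 4 squares, each black (1) or white (0).
patterns = list(product([0, 1], repeat=4))  # 16 possible patterns

# A rule partitions the patterns into two groups. Example rule:
# "the right-most square is black" -> +1, otherwise -1.
rule_vector = [1 if p[-1] == 1 else -1 for p in patterns]

# Every rule is one +/-1 labeling of the 16 patterns, i.e. one
# vertex of a 16-dimensional hypercube; there are 2^16 of them.
num_rules = 2 ** len(patterns)
print(num_rules)  # 65536
```

With 16 patterns this yields exactly 65,536 possible rules, matching the count the researchers report for one version of the task.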
During the experiment, the researchers recorded neuronal activity from two areas in the brain that are known to participate in the process of rule learning: the dorsal anterior cingulate cortex (dACC) and the striatum. The researchers measured and quantified the response of each neuron in relation to the displayed pattern. By employing this method, the researchers were able to represent the activity of neurons as vectors in the same multidimensional cube where the rule that is being learned is represented as well.
The fact that the neuronal activity and the rule being learned are represented in the same space enabled the depiction of the learning process at single-neuron resolution and, importantly, in real time, while the subjects were learning. Whereas every rule is represented by a static vector in a given session, the activity of each neuron is represented by a series of vectors that correspond to its instantaneous activity and that therefore move to different locations in the space as learning gradually occurs. The further along the learning process is, the closer the vectors representing neuronal activity are to the vector representing the rule.
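One simple way to quantify this "closeness" of an activity vector to the rule vector is cosine similarity, which measures the angle between the two directions. The sketch below uses illustrative numbers, not real recordings, and a toy 4-pattern rule:

```python
import math

def cosine_similarity(u, v):
    """Angle-based closeness of two vectors: 1 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

# Hypothetical rule vector over 4 patterns (+1 / -1 per pattern).
rule = [1, -1, 1, -1]

# Snapshots of one neuron's activity vector as learning progresses
# (illustrative values only).
early = [0.2, 0.1, 0.1, 0.2]   # no alignment with the rule yet
late = [0.9, -0.8, 1.1, -0.7]  # points close to the rule's direction

print(cosine_similarity(early, rule))
print(cosine_similarity(late, rule))
```

As learning advances, the similarity of the activity snapshots to the rule vector grows toward 1.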
Coordination between the two brain regions is necessary for an effective learning process
Using this approach, the researchers were able to conclude that neurons in the dACC “learn” by engaging in a process of trial and error, in which they gradually “venture” closer to the rule. In comparison, the vectors representing neurons recorded from the striatum elongate as learning progresses, which implies that they increase their confidence level, signaling that the brain has indeed found the correct rule.
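This "elongation" corresponds to the length (norm) of the activity vector growing while its direction stays aligned with the rule. A toy illustration with made-up two-dimensional snapshots:

```python
import math

def norm(v):
    """Euclidean length of a vector."""
    return math.sqrt(sum(x * x for x in v))

# Hypothetical striatal activity vectors at three stages of learning:
# same direction throughout, but growing length (growing confidence).
snapshots = [[0.2, -0.2], [0.6, -0.6], [1.2, -1.2]]
lengths = [norm(v) for v in snapshots]
print(lengths)  # strictly increasing
```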
The researchers furthermore discovered that these processes, which take place in the two brain regions, are coordinated: neurons in the striatum strengthen the brain’s confidence in the rule immediately after dACC neurons identify it. Moreover, the neuronal activity measured at the end of each day was predictive of the participant’s behavior and success in the following day’s trials. This finding supports the researchers’ conclusion that they were indeed able to identify the neural process that reflects learning, and it also points toward how such learning is stored in memory for future use. The researchers suggest that, in the future, this approach could both enable identification of the underlying mechanisms that lead to learning disorders and suggest brain-inspired ways of improving learning abilities.
Prof. Rony Paz is the incumbent of the Manya Igel Chair of Neurobiology.
Prof. Paz’s research is supported by the M. Judith Ruth Center for Trauma and Anxiety Research; the Nella and Leon Benoziyo Center for Neurosciences; the Nella and Leon Benoziyo Center for Neurological Diseases; the Carl and Micaela Einhorn-Dominic Brain Research Institute; the Murray H. & Meyer Grodetsky Center for Research of Higher Brain Functions; the Monroy-Marks Integrative Center for Brain Disorder Research; the Irving B. Harris Fund for New Directions in Brain Research; the Bernard & Norton Wolf Family Foundation; Gary Clayman; Rosanne Cohen; and Howard Garoon, Lisa Garoon, and Nanci Garoon Leigner.
Prof. Elad Schneidman is the incumbent of the Joseph and Bessie Feinberg Professorial Chair.
There are 65,536 possible rules a subject can learn in one version of the learning task developed by the researchers
Two enormous galaxies capture your attention in this spectacular image taken with the NASA/ESA Hubble Space Telescope using the Wide Field Camera 3 (WFC3). The galaxy on the left is a lenticular galaxy, named 2MASX J03193743+4137580. The side-on spiral galaxy on the right is more simply named UGC 2665. Both galaxies lie approximately 350 million light-years from Earth, and they both form part of the huge Perseus galaxy cluster.
Perseus is an important figure in Greek mythology, renowned for slaying Medusa the Gorgon – who is herself famous for the unhappy reason that she was cursed to have living snakes for hair. Given Perseus’s impressive credentials, it seems appropriate that the galaxy cluster is one of the biggest objects in the known universe, consisting of thousands of galaxies, only a few of which are visible in this image. The wonderful detail in the image is thanks to the WFC3’s powerful resolution and sensitivity to both visible and near-infrared light, the wavelengths captured in this image.
Text credit: European Space Agency
Image credit: ESA/Hubble & NASA, W. Harris