Memory Details Fade Over Time, With Only the Main Gist Preserved (Neuroscience)

What information is retained in a memory over time, and which parts get lost? These questions have led to many scientific theories over the years, and now a team of researchers at the Universities of Glasgow and Birmingham has been able to provide some answers.

Their new study, which is published today in Nature Communications, demonstrates that our memories become less vibrant and detailed over time, with only the central gist eventually preserved. Moreover, this ‘gistification’ of our memories is boosted when we frequently recall our recent experiences.

The work could have implications in a number of areas, including the nature of memories in post-traumatic stress disorder, the repeated questioning of eye-witness testimonies and even in best practice for exam studying.

While memories are not exact carbon copies of the past – remembering is understood to be a highly reconstructive process – experts have suggested that the contents of a memory could change each time we bring it back to mind.

However, exactly how our memories differ from the original experiences, and how they are transformed over time, has until now proven difficult to measure in laboratory settings.

For this study the researchers developed a simple computerised task that measures how fast people can recover certain characteristics of visual memories when prompted to do so. Participants learned word-image pairs and were later required to recollect different elements of the image when cued with the word. For example, participants were asked to indicate, as fast as possible, if the image was coloured or greyscale (a perceptual detail), or whether it showed an animate or inanimate object (a semantic element).

These tests, probing the quality of the visual memories, happened immediately after learning and also after a two-day delay. Reaction time patterns showed that participants were faster to recollect meaningful, semantic elements than surface, perceptual ones.
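The reaction-time comparison at the core of the task can be sketched in a few lines. This is a toy illustration only: the numbers, trial counts, and effect sizes below are invented, and the real study analysed trial-level data per participant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reaction times (seconds) for two probe types at two sessions.
# All values are invented for illustration.
rt = {
    ("semantic", "immediate"): rng.normal(1.10, 0.05, 200),
    ("perceptual", "immediate"): rng.normal(1.25, 0.05, 200),
    ("semantic", "delayed"): rng.normal(1.15, 0.05, 200),
    ("perceptual", "delayed"): rng.normal(1.45, 0.05, 200),
}

def semantic_advantage(session):
    """Mean RT cost of perceptual over semantic probes; larger = stronger semantic bias."""
    return rt[("perceptual", session)].mean() - rt[("semantic", session)].mean()

adv_immediate = semantic_advantage("immediate")
adv_delayed = semantic_advantage("delayed")

# The study's key pattern: faster access to semantic than perceptual detail,
# and a gap that widens over the two-day delay.
print(f"immediate: {adv_immediate:.3f} s, delayed: {adv_delayed:.3f} s")
```

In this toy setup the semantic advantage roughly doubles after the delay, mirroring the 'gistification' the authors report.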

Julia Lifanov, lead author of the study from the University of Birmingham, said: “Many memory theories assume that over time, and as people re-tell their stories, they tend to forget the surface details but retain the meaningful, semantic content of an event.

“Imagine reminiscing about a pre-COVID dinner with a friend – you realize that you cannot recall the table décor but know exactly what you ordered; or you remember the conversation with the bartender, but not the colour of his shirt. Memory experts call this phenomenon ‘semanticization’.”

Prof Maria Wimber, senior author on the study from the University of Glasgow, said: “The pattern towards recollection of meaningful semantic elements we demonstrate in this study indicates that memories are biased towards meaningful content in the first place – and we have shown in previous studies that this bias is clearly reflected in brain signals too.

“Our memories change with time and use and that is a good and adaptive thing. We want our memories to retain the information that is most likely to be useful in the future, when we encounter similar situations.”

The researchers found that the bias towards semantic memory content becomes significantly stronger with the passage of time, and with repeated remembering. When participants came back to the lab two days later, they were much slower at answering the perceptual-detail questions, but they showed relatively preserved memory for the semantic content of the images. However, the shift from detail-rich to more concept-based memories was far less pronounced in a group of subjects who repeatedly viewed the images, rather than being asked to actively bring them back to mind.

The study has implications for probing the nature of memories in health and disease. It provides a tool to study maladaptive changes, for example in post-traumatic stress disorder where patients often suffer from intrusive, traumatic memories, and tend to over-generalize these experiences to novel situations. The findings are also highly relevant for understanding how eyewitness memories may be biased by frequent interviews and repeatedly recalling the same event.

The findings also demonstrate that testing yourself prior to an exam (for example, by using flashcards) will make the meaningful information stick for longer, especially when followed by periods of rest and sleep.

The study, ‘Feature-specific reaction times reveal a semanticisation of memories over time and with repeated remembering’ is published in Nature Communications. The work is funded by the European Research Council, the Economic and Social Sciences Research Council UK and the Midlands Integrative Biosciences Training Partnership.


Provided by University of Birmingham

For the Brain, Timing is Everything (Neuroscience)

A Columbia Engineering/UCLA team is the first to demonstrate that phase precession plays a significant role in the human brain, linking not only sequential positions, as seen in animals, but also abstract progression towards specific goals.

For decades the dominant approach to understanding the brain has been to measure how many times individual neurons activate during particular behaviors. In contrast to this “rate code,” a more recent hypothesis proposes that neurons signal information by changing the precise timing of when they activate. One such timing code, called phase precession, is commonly observed in rodents as they navigate through spaces and is thought to form the basis for how the brain represents memories for sequences. Surprisingly, phase precession had never been seen in humans, and thus its usefulness in explaining brain function and in creating brain-machine interfaces has been quite limited.

In a study published today in Cell, Joshua Jacobs, associate professor of biomedical engineering at Columbia Engineering, in collaboration with Dr. Itzhak Fried, a professor of neurosurgery at the David Geffen School of Medicine at UCLA, demonstrate the existence of this neural code in the human brain for the first time, and show that phase precession not only links sequential positions, as in animals, but also abstract progression towards specific goals.

“We were convinced that phase precession held a lot of promise as a widespread neural code that could be used for learning and cognition,” says Salman E. Qasim, lead author of the study who received his PhD from Columbia Engineering in 2021. “There’s no reason the human brain wouldn’t take advantage of this mechanism to encode any kind of sequence, spatial or otherwise.”

The researchers analyzed direct brain recordings from neurosurgical patients who performed a virtual-reality spatial navigation task in which they had to find and return to six specific buildings. By identifying an internal clock in the form of low-frequency (2-10 Hz) brain oscillations, the team was able to measure how the relative timing of neuronal action potentials correlated with sequential spatial locations, just as in rodents. And what they found especially exciting was that this temporal code extended beyond only representing spatial location to represent the episodic progress subjects had made towards certain goal locations.
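The core timing measurement can be sketched as follows. This is an illustrative toy, not the authors' pipeline: a clean 8 Hz sine stands in for the 2-10 Hz brain oscillation, and the spike times are fabricated so that each spike arrives slightly earlier in its cycle, which is the signature of phase precession.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000                                  # sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)                # 5 s of data
theta = np.sin(2 * np.pi * 8 * t)          # idealized 8 Hz oscillation
phase = np.angle(hilbert(theta))           # instantaneous phase in [-pi, pi]

# Fabricated spike times: the k-th spike lands 4% of a cycle earlier each time,
# mimicking phase precession as a subject moves through a place field.
cycle = 1 / 8
spike_times = np.array([(k + 0.5 - 0.04 * k) * cycle for k in range(10)])
spike_phases = phase[(spike_times * fs).astype(int)]

# Precession shows up as the spike phase advancing (decreasing) across cycles.
assert spike_phases[0] > spike_phases[-1]
```

Real analyses then quantify this advance with circular-linear statistics relating spike phase to position or, as in this study, progress towards a goal.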

The team was able to measure the activity of single neurons by taking advantage of a rare opportunity: the ability to invasively record from the brains of 13 neurosurgical patients at UCLA. Because these bedridden patients had drug-resistant epilepsy, they already had recording electrodes implanted in their brains for their clinical treatment. They used laptops and handheld controllers to move through virtual environments to complete a spatial navigation task designed by the research team.

As the researchers analyzed the neural data from the patients, they noticed how often neurons seemed to fire in concert with slow brain waves. The team was then able to identify phase precession in the hippocampus as subjects moved through different locations, similar to prior observations in rodents.

“This study demonstrates the unique insights at the level of single brain cells that we may gain in special clinical settings of brain surgery for patients with epilepsy and other disorders,” said the study’s co-author Fried, director of the epilepsy surgery program at UCLA Health. “Here, a simple task performed by patients unveils a fundamental brain code for human negotiation of their environment.”

Knowing now that this temporal code was present for spatial locations in humans, Qasim next looked for evidence that phase precession tracked more complex cognitive sequences, such as the more abstract progress a person had made towards specific goals (i.e. buildings). To do so, the team had to devise a way to measure the temporal relationship between sparse, inconsistent brain waves and neural spiking without any reference to spatial position. Once they accomplished this, they were surprised to find evidence for phase precession in the frontal cortex, where it had never been observed before, as subjects sought specific goals.

“It is hard to study the neural representations of complex cognitive functions, like goal-seeking, in many animal models. By demonstrating that phase precession in humans might represent particular goal states, this study supports the idea that temporal codes like phase precession could be critical to understanding human cognition,” says Sameer Sheth, a leading neurosurgeon and neuroscientist at the Baylor College of Medicine who is not affiliated with the study.

Qasim and Jacobs hope that establishing a precedent for phase precession in humans will guide research into temporal coding as an important aspect of human cognition. By demonstrating the existence of phase precession in multiple brain regions, with respect to multiple aspects of a task, the Columbia Engineering team hopes to open new avenues to decoding brain activity that rely on temporal coding, in addition to rate coding. Furthermore, scientists have theorized that phase precession might be important for memory, as sequence learning is important to ordering events in our memory. In line with this, rodent researchers have primarily observed phase precession in brain circuits disrupted by Alzheimer’s disease. As such, the discovery of phase precession may enable researchers to further probe neuronal biomarkers of memory.

Jacobs adds, “We hope to further explore whether phase precession is a universal code throughout the human brain, and for different kinds of behaviors. Then we can begin to better understand how this neuronal coding mechanism can be used for brain-machine interfaces, and manipulated by therapeutic brain stimulation.”

The study is funded by the National Institute of Neurological Disorders and Stroke R01-NS033221 and R01-NS084017 (to I.F.), National Institute of Mental Health R01-MH104606 and the National Science Foundation BCS-1724243 (to J.J.), and NSF Graduate Research Fellowship DGE 16-44869 (to S.E.Q.).

About the Study

The study is titled “Phase precession in the human hippocampus and entorhinal cortex.”

Featured image: Phase precession in the human hippocampus. (1) Coronal T2 MRI scan merged with post-implantation CT scan showing the location of electrodes (white circles) in the human hippocampus (red). (2) Heat map of spatial phase precession for one human hippocampal neuron. Each pixel represents a location in the virtual environment; colors indicate the theta phase at which action potentials occurred as the subject moved through that location, with precession represented by the gradient of colors across space. (3) Overhead view of the virtual environment, showing the six navigation goal locations. (4) First-person view of the virtual environment. © Salman Ehtesham Qasim/Columbia Engineering


Provided by Columbia University School of Engineering and Applied Science

Link Found Between Time Perception, Risk For Developmental Coordination Disorder (Neuroscience)

Neuroscientists at McMaster University have found a link between children who are at risk for developmental coordination disorder (DCD), a common condition that can cause clumsiness, and difficulties with time perception such as interpreting changes in rhythmic beats.

Accurate time perception is crucial for basic skills such as walking and processing speech and music.

“Many developmental disorders, including dyslexia or reading difficulties, autism and attention deficits have been linked to deficits in auditory time perception,” says Laurel Trainor, senior author of the study and founding director of the McMaster Institute for Music and the Mind.

Previous research has shown the brain networks involved in time perception often overlap with the motor control networks required for such activities as catching a ball or tapping along to musical beats. Until now, researchers had not investigated whether children with DCD tended to have auditory timing deficits, despite being at risk for dyslexia and attention deficits.

The study, published online in the journal Child Development, provides new evidence about that connection in children.

A researcher helping a child participant wearing an EEG cap, a non-invasive approach to measure the brain waves. © Auditory Development Lab, McMaster University

Developmental coordination disorder is a common but little-studied condition that affects approximately five to 15 per cent of all children, who can experience a wide range of difficulties with fine and/or gross motor skills. It can have profound and lifelong effects on everyday tasks such as getting dressed, writing, and engaging in sports or play, and often interferes with learning, academic performance and socialization.

For this study, researchers recruited more than 60 children aged 6 and 7 years old, who underwent motor skills tests and were assessed either to be at risk for DCD or to be developing typically.

During the first experiment, each child was asked in a series of trials to pinpoint which of two sounds was shorter in time or had an off-beat rhythm. From this, researchers measured the threshold, or smallest time difference, at which each child could just barely make the correct judgement.
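A threshold of this kind is typically read off a psychometric curve. The sketch below is purely illustrative: the tested time differences, accuracy values, and the 75%-correct criterion are assumptions, not the study's actual procedure.

```python
import numpy as np

# Hypothetical proportion-correct at each tested time difference (ms).
time_diff_ms = np.array([5, 10, 20, 40, 80])
accuracy = np.array([0.52, 0.61, 0.78, 0.90, 0.97])

def threshold(levels, acc, criterion=0.75):
    """Smallest tested difference at which accuracy first reaches the criterion."""
    above = np.nonzero(acc >= criterion)[0]
    return levels[above[0]] if above.size else None

print(threshold(time_diff_ms, accuracy))  # this hypothetical child: 20 ms
```

A child at risk for DCD would show the same curve shifted towards larger time differences, i.e. a higher threshold, which is the reduced sensitivity the study reports.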

“We saw that indeed, children at risk for DCD were much less sensitive to time changes compared to typically developing children,” says Andrew Chang, the lead researcher and graduate student in the Department of Psychology, Neuroscience & Behaviour at McMaster.

In the second experiment, researchers used EEG to measure the brain waves of children as they listened to a sequence of sounds that had been tweaked to include occasional timing deviations. Children at risk for DCD had slower brain activity in response to the unexpected timing deviants.

There are no medications to treat DCD, but physiotherapy and occupational therapy can help children improve muscle strength, balance and coordination.

“We know anecdotally that therapists sometimes incorporate regular rhythms into the physical therapy they give to children with DCD, and they have the impression this helps – for example, that children can walk better when they walk to a rhythm,” Chang explains.

“Although our current study did not directly investigate any intervention effects, the results suggest that music with salient and regular beats could be used for physiotherapy to help treat children,” he says.

He points to motor rehabilitation featuring auditory cueing with metronomes or musical beats, which helps adult patients who have Parkinson’s disease or are recovering from a stroke. Further research could help to determine whether similar therapies are useful for children with DCD, he says.

A copy of the study can be found at this link:
https://srcd.onlinelibrary.wiley.com/doi/epdf/10.1111/cdev.13537

Featured image: Researchers assessing the motor skills of children © Pediatric Activity & Coordination for Excellence, China Medical University (Taiwan)


Provided by McMaster University

How Do Complex Oscillations in a Quantum System Simplify With Time? (Quantum)

With a clever experiment, physicists have shown that in a one-dimensional quantum system, the initially complex distribution of vibrations or phonons can change over time into a simple Gaussian bell curve. The experiment took place at the Vienna University of Technology, while the theoretical considerations were carried out by a joint research group from the Freie Universität Berlin and HZB.

Quantum physics allows us to make statements about the behaviour of a wide variety of many-particle systems at the atomic level, from salt crystals to neutron stars. In quantum systems, many parameters do not have concrete values but are distributed over various values with certain probabilities. Often this distribution takes the form of a simple Gaussian bell curve, which is also encountered in classical systems, for example in the distribution of balls in the Galton box experiment. However, not all quantum systems follow this simple behaviour, and some may deviate from the Gaussian distribution due to interactions.
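The Galton box intuition can be reproduced in a few lines. This toy simulation is not part of the study; it just shows how many independent left/right steps produce a binomial count whose shape approaches the Gaussian bell curve:

```python
import numpy as np

rng = np.random.default_rng(1)

rows, balls = 100, 50_000
# Each ball makes `rows` fair left/right choices; its final bin is the number
# of rightward steps, a Binomial(rows, 1/2) draw.
positions = rng.integers(0, 2, size=(balls, rows)).sum(axis=1)

# For Binomial(n, 1/2): mean = n/2 and standard deviation = sqrt(n)/2,
# matching the Gaussian that the histogram of positions converges to.
print(positions.mean(), positions.std())
```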

Prof. Dr. Jens Eisert, who heads a joint research group on theoretical physics at the Freie Universität Berlin and the Helmholtz-Zentrum Berlin, predicted that once interactions are reduced, such deviations decay over time and the distribution becomes Gaussian. Now he has been able to substantiate this conjecture experimentally.

To do this, the Berlin team worked together with a group of experimental physicists led by Prof. Dr. Jörg Schmiedmayer at the Vienna University of Technology. Schmiedmayer and members of his group, in particular Dr. Thomas Schweigler, prepared a so-called Bose-Einstein condensate: a quantum system consisting of several thousand rubidium atoms, which were confined in a quasi-one-dimensional configuration with the help of magnetic fields and cooled to near absolute zero (50 nanokelvin).

“The Vienna group created a synthetic quantum system in which the distribution of the phonons can be observed particularly sharply,” explains Dr. Marek Gluza, coauthor of the study and postdoc with Jens Eisert. The measurement data initially represent the complex dynamics of the phonons. But the complexity is lost over time, and the distribution takes on the shape of a Gaussian bell curve.

“In fact, we can see here how a Gaussian distribution emerges over time. Nature finds a simple solution, all by itself, through its physical laws,” comments Jens Eisert.

What is unique about the experiment is that, as time goes on, the system swings back to the more complex distribution, demonstrating that the signatures of a complicated state can be retrieved again. “We know precisely why it swings back and what it depends on,” Gluza explains. “This shows us something about the isolation of the system, because the information about the signatures has never left the system.”

Prof. Eisert describes his research result for a wider audience in this short text:

The emergence of simplicity

Nature as we encounter it undeniably features a rich phenomenology. It is the primary task of physics to describe this phenomenology: physics provides models for it and captures the physical world in terms of basic laws. It aims at understanding how constituents interact and what emergent properties these interactions give rise to. Quantum physics is the best physical theory we have available today to describe nature on a fundamental level. So in one way or another, these interacting systems will ultimately follow dynamical laws within quantum theory. Given a physical model, that is to say, quantum physics will predict how the system under consideration evolves in time.

Now, strikingly, very simple models happen to describe a wealth of physical situations very well. These are so-called Gaussian states and models. While this may sound abstract, it is enough to say that Gaussian states describe a physical situation at a given time in terms of simple Gaussian distributions: the bell curves that are ubiquitous in statistics and in nature, familiar from the old ten Deutsche Mark note. Indeed, physical systems that interact very little can be described by such Gaussian quantum states to a very good approximation. This is all fine, but these insights lack an explanation of how quantum systems that have interacted in the past ultimately end up in such Gaussian states. Where does the simplicity come from?

Theoretical work has long predicted notions of “Gaussification”, that is, of physical systems dynamically evolving towards Gaussian states. In fact, Jens Eisert of the Freie Universität Berlin suggested such phenomena theoretically as early as 2008. But experimental evidence has been missing. Now a team of researchers at the Vienna University of Technology, theoretically supported by a team at the Freie Universität Berlin including Marek Gluza and Spyros Sotiriadis and led by Jens Eisert, has set out to experimentally probe how quantum systems approach Gaussian quantum states. This question is rooted in, and related to, the question of how the ensembles of quantum statistical mechanics ultimately emerge. Placing atoms cooled to extremely low temperatures on top of a precisely designed chip, the team has been able to approach this long-standing question, one that already puzzled the forefathers of quantum mechanics, under extremely accurate experimental conditions.

Indeed, in this experiment one sees equilibrium properties, as described by Gaussian states, emerge dynamically, accurately monitored in time. After a while, that is to say, nature finds itself in a simple situation, one that is captured by simple physical laws: simplicity emerges dynamically.

Featured image: The phonons distribution is complex (upper curves) and then simplifies with time to a Gaussian bell curve (lower curve). © S. Sotiriadis / Freie Universität Berlin


Reference: Thomas Schweigler, Marek Gluza, Mohammadamin Tajik, Spyros Sotiriadis, Federica Cataldini, Si-Cong Ji, Frederik S. Møller, João Sabino, Bernhard Rauer, Jens Eisert, Jörg Schmiedmayer, “Decay and recurrence of non-Gaussian correlations in a quantum many-body system”, Nature Physics (2021). DOI: 10.1038/s41567-020-01139-2 https://www.nature.com/articles/s41567-020-01139-2


Provided by Helmholtz Zentrum Berlin

Neuroscientists Identify Brain Circuit That Encodes Timing of Events (Neuroscience)

Findings suggest this hippocampal circuit helps us to maintain our timeline of memories.

When we experience a new event, our brain records a memory of not only what happened, but also the context, including the time and location of the event. A new study from MIT neuroscientists sheds light on how the timing of a memory is encoded in the hippocampus, and suggests that time and space are encoded separately.

MIT neuroscientists have found that pyramidal cells (green) in the CA2 region of the hippocampus are responsible for storing critical timing information. Credits: Image: The Tonegawa Lab, edited by MIT News

In a study of mice, the researchers identified a hippocampal circuit that the animals used to store information about the timing of when they should turn left or right in a maze. When this circuit was blocked, the mice were unable to remember which way they were supposed to turn next. However, disrupting the circuit did not appear to impair their memory of where they were in space.

The findings add to a growing body of evidence suggesting that when we form new memories, different populations of neurons in the brain encode time and place information, the researchers say.

“There is an emerging view that ‘place cells’ and ‘time cells’ organize memories by mapping information onto the hippocampus. This spatial and temporal context serves as a scaffold that allows us to build our own personal timeline of memories,” says Chris MacDonald, a research scientist at MIT’s Picower Institute for Learning and Memory and the lead author of the study.

Susumu Tonegawa, the Picower Professor of Biology and Neuroscience at the RIKEN-MIT Laboratory of Neural Circuit Genetics at the Picower Institute, is the senior author of the study, which appears this week in the Proceedings of the National Academy of Sciences.

Time and place

About 50 years ago, neuroscientists discovered that the brain’s hippocampus contains neurons that encode memories of specific locations. These cells, known as place cells, store information that becomes part of the context of a particular memory.

The other critical piece of context for any given memory is the timing. In 2011, MacDonald and the late Howard Eichenbaum, a professor of psychological and brain sciences at Boston University, discovered cells that keep track of time, in a part of the hippocampus called CA1.

In that study, MacDonald, who was then a postdoc at Boston University, found that these cells showed specific timing-related firing patterns when mice were trained to associate two stimuli — an object and an odor — that were presented with a 10-second delay between them. When the delay was extended to 20 seconds, the cells reorganized their firing patterns to last 20 seconds instead of 10.

“It’s almost like they’re forming a new representation of a temporal context, much like a spatial context,” MacDonald says. “The emerging view seems to be that both place and time cells organize memory by mapping experience to a representation of context that is defined by time and space.”

In the new study, the researchers wanted to investigate which other parts of the brain might be feeding CA1 timing information. Some previous studies had suggested that a nearby part of the hippocampus called CA2 might be involved in keeping track of time. CA2 is a very small region of the hippocampus that has not been extensively studied, but it has been shown to have strong connections to CA1.

To study the links between CA2 and CA1, the researchers used an engineered mouse model in which they could use light to control the activity of neurons in the CA2 region. They trained the mice to run a figure-eight maze in which they would earn a reward if they alternated turning left and right each time they ran the maze. Between each trial, they ran on a treadmill for 10 seconds, and during this time, they had to remember which direction they had turned on the previous trial, so they could do the opposite on the upcoming trial.

When the researchers turned off CA2 activity while the mice were on the treadmill, they found that the mice performed very poorly at the task, suggesting that they could no longer remember which direction they had turned in the previous trial.

“When the animals are performing normally, there is a sequence of cells in CA1 that ticks off during this temporal coding phase,” MacDonald says. “When you inhibit the CA2, what you see is the temporal coding in CA1 becomes less precise and more smeared out in time. It becomes destabilized, and that seems to correlate with them also performing poorly on that task.”

Memory circuits

When the researchers used light to inhibit CA2 neurons while the mice were running the maze, they found little effect on the CA1 “place cells” that allow the mice to remember where they are. The findings suggest that spatial and timing information are encoded preferentially by different parts of the hippocampus, MacDonald says.

“One thing that’s exciting about this work is this idea that spatial and temporal information can operate in parallel and might merge or separate at different points in the circuit, depending on what you need to accomplish from a memory standpoint,” he says.

MacDonald is now planning additional studies of time perception, including how we perceive time under different circumstances, and how our perception of time influences our behavior. Another question he hopes to pursue is whether the brain has different mechanisms for keeping track of events that are separated by seconds and events that are separated by much longer periods of time.

“Somehow the information that we store in memory preserves the sequential order of events across very different timescales, and I’m very interested in how it is that we’re able to do that,” he says.

The research was funded by the RIKEN Center for Brain Science, the Howard Hughes Medical Institute, and the JPB Foundation.

Reference: Christopher J. MacDonald, Susumu Tonegawa, “Crucial role for CA2 inputs in the sequential organization of CA1 time cells supporting memory”, Proceedings of the National Academy of Sciences Jan 2021, 118 (3) e2020698118; DOI: 10.1073/pnas.2020698118 https://www.pnas.org/content/118/3/e2020698118

Provided by MIT

Do Bacteria Have Internal Clocks Like Us? (Biology)

Humans have them, so do other animals and plants. Now research reveals that bacteria too have internal clocks that align with the 24-hour cycle of life on Earth.

Shining a light on internal clocks – the bacterium Bacillus subtilis © Professor Ákos Kovács, Technical University of Denmark

The research answers a long-standing biological question and could have implications for the timing of drug delivery, biotechnology, and how we develop timely solutions for crop protection.

Biological clocks, or circadian rhythms, are exquisite internal timing mechanisms that are widespread across nature, enabling living organisms to cope with the major changes that occur from day to night, and even across seasons.

Existing inside cells, these molecular rhythms use external cues such as daylight and temperature to synchronise biological clocks to their environment. It is why we experience the jarring effects of jet lag as our internal clocks are temporarily mismatched before aligning to the new cycle of light and dark at our travel destination.

A growing body of research in the past two decades has demonstrated the importance of these molecular metronomes to essential processes, for example sleep and cognitive functioning in humans, and water regulation and photosynthesis in plants.

Although bacteria represent 12% of the planet's biomass and are important for health, ecology, and industrial biotechnology, little is known about their 24-hour biological clocks.

Previous studies have shown that photosynthetic bacteria, which require light to make energy, have biological clocks.

But free-living, non-photosynthetic bacteria have remained a mystery in this regard.

In this international study, researchers detected free-running circadian rhythms in the non-photosynthetic soil bacterium Bacillus subtilis.

The team applied a technique called luciferase reporting, which involves adding an enzyme that produces bioluminescence, allowing researchers to visualise how active a gene is inside an organism.

They focused on two genes: ytvA, which encodes a blue-light photoreceptor, and kinC, which encodes an enzyme involved in inducing the formation of biofilms and spores in the bacterium.

They observed the levels of the genes in constant darkness in comparison to cycles of 12 hours of light and 12 hours of dark. They found that the pattern of ytvA levels was adjusted to the light and dark cycle, with levels increasing during the dark and decreasing in the light. A cycle was still observed in constant darkness.
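The free-running cycle seen in constant darkness is the kind of signal whose period can be estimated from its dominant Fourier component. The sketch below uses a simulated ytvA-like trace; the clean 24-hour cosine is an assumption for illustration, not the measured data.

```python
import numpy as np

hours = np.arange(0, 120, 0.5)            # 5 days sampled every 30 minutes
period_true = 24.0                        # assumed circadian period (hours)
signal = 1 + 0.3 * np.cos(2 * np.pi * hours / period_true)

# Estimate the period as the inverse of the strongest non-DC frequency.
detrended = signal - signal.mean()
freqs = np.fft.rfftfreq(hours.size, d=0.5)    # cycles per hour
power = np.abs(np.fft.rfft(detrended)) ** 2
period_est = 1 / freqs[np.argmax(power)]

print(f"estimated period: {period_est:.1f} h")
```

On real bioluminescence traces the same idea applies after detrending, with a peak near but not exactly at 24 hours indicating a free-running clock.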

Researchers observed that it took several days for a stable pattern to appear and that the pattern could be reversed if the conditions were inverted. These two observations are common features of circadian rhythms and of their ability to “entrain” to environmental cues.

They carried out similar experiments using daily temperature changes; for example, increasing the length or strength of the daily cycle, and found the rhythms of ytvA and kinC adjusted in a way consistent with circadian rhythms, and not just simply switching on and off in response to the temperature.

“We’ve found for the first time that non-photosynthetic bacteria can tell the time,” says lead author Professor Martha Merrow, of LMU (Ludwig Maximilians University) Munich. “They adapt their molecular workings to the time of day by reading the cycles in the light or in the temperature environment.”

“In addition to medical and ecological questions we wish to use bacteria as a model system to understand circadian clock mechanisms. The lab tools for this bacterium are outstanding and should allow us to make rapid progress,” she added.

This research could be used to help address such questions as: is the time of day of bacterial exposure important for infection? Can industrial biotechnological processes be optimised by taking the time of day into account? And is the time of day of anti-bacterial treatment important?

“Our study opens doors to investigate circadian rhythms across bacteria. Now that we have established that bacteria can tell the time we need to find out the processes that cause these rhythms to occur and understand why having a rhythm provides bacteria with an advantage,” says author Dr Antony Dodd from the John Innes Centre.

Professor Ákos Kovács, co-author from the Technical University of Denmark, adds: “Bacillus subtilis is used in various applications, from laundry detergent production to crop protection, and has recently been exploited as a human and animal probiotic, so engineering a biological clock in this bacterium will open up diverse biotechnological areas.”

Reference: Zheng Eelderink-Chen, Jasper Bosman, Francesca Sartor, Antony N. Dodd, Ákos T. Kovács, Martha Merrow, “A circadian clock in a non-photosynthetic prokaryote”, Science Advances, Vol. 7, no. 2, eabe2086 DOI: 10.1126/sciadv.abe2086 (https://doi.org/10.1126/sciadv.abe2086) https://advances.sciencemag.org/content/7/2/eabe2086

Provided by John Innes Centre

NTU Singapore Scientists Invent Glue Activated by Magnetic Field (Engineering)

A potential boon to green manufacturing, the new glue saves on energy, time and space.

Scientists from Nanyang Technological University, Singapore (NTU Singapore), have developed a new way to cure adhesives using a magnetic field.

(Left to right) NTU Assoc Prof Terry Steele, Prof Raju V. Ramanujan and Dr Richa Chaudhary holding up various soft and hard materials bonded by their new magnetocuring glue © NTU Singapore

Conventional adhesives like epoxy, which are used to bond plastic, ceramics and wood, are typically designed to cure using moisture, heat or light. They often require specific curing temperatures, ranging from room temperature up to 80 degrees Celsius.

The curing process is necessary to cross-link and bond the glue with the two secured surfaces as the glue crystallises and hardens to achieve its final strength.

NTU’s new “magnetocuring” glue cures when it is passed through a magnetic field. This is very useful in certain environmental conditions where current adhesives do not work well. It also helps when the adhesive is sandwiched between insulating materials like rubber or wood, which traditional activators such as heat, light and air cannot easily penetrate.

Products such as composite bike frames, helmets and golf clubs are currently made with two-part epoxy adhesives, in which a resin and a hardener are mixed and the reaction starts immediately.

For manufacturers of carbon fibre – thin ribbons of carbon glued together layer by layer – and makers of sports equipment involving carbon fibre, their factories use large, high temperature ovens to cure the epoxy glue over many hours. This energy-intensive curing process is the main reason for the high cost of carbon fibre.

Assoc Prof Steele (left) and Dr Richa curing the magnetocuring glue on a cotton mesh using an electromagnetic field © NTU Singapore

The new “magnetocuring” adhesive is made by combining a typical commercially available epoxy adhesive with specially tailored magnetic nanoparticles made by the NTU scientists. It does not need to be mixed with any hardener or accelerator, unlike two-component adhesives (which have two liquids that must be mixed before use), making it easy to manufacture and apply.

It bonds the materials when it is activated by passing through a magnetic field, which is easily generated by a small electromagnetic device. This uses less energy than a large conventional oven.

For example, one gram of magnetocuring adhesive can be cured by a 200-watt electromagnetic device in five minutes (consuming 16.6 watt-hours). This is about 120 times less energy than a traditional 2,000-watt oven, which takes an hour (consuming 2,000 watt-hours) to cure conventional epoxy.
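As a sanity check, the figures quoted above can be reproduced with a few lines of arithmetic (a sketch assuming both devices draw their rated power for the full duration):

```python
# Back-of-envelope check of the energy comparison quoted in the article.
electromagnet_wh = 200 * (5 / 60)   # 200 W device running for 5 minutes
oven_wh = 2000 * 1.0                # 2,000 W oven running for 1 hour

print(round(electromagnet_wh, 1))          # 16.7 Wh
print(round(oven_wh / electromagnet_wh))   # ~120x less energy
```

The quoted 16.6 Wh is simply 200 W times one-twelfth of an hour, and dividing the oven's 2,000 Wh by it gives the 120-fold saving.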

Developed by Professor Raju V. Ramanujan, Associate Professor Terry Steele and Dr Richa Chaudhary from the NTU School of Materials Science and Engineering, the findings were published in the scientific journal Applied Materials Today and offer potential application in a wide range of fields.

This includes high-end sports equipment, automotive products, electronics, energy, aerospace and medical manufacturing processes. Laboratory tests have shown that the new adhesive has a strength up to 7 megapascals, on par with many of the epoxy adhesives on the market.

NTU Prof Raju holding and bending two pieces of wood bonded in the middle by the magnetocuring glue, to demonstrate its strong bonding strength © NTU Singapore

Assoc Prof Steele, an expert in various types of advanced adhesives, explained: “Our key development is a way to cure adhesives within minutes of exposure to a magnetic field, while preventing overheating of the surfaces to which they are applied. This is important as some surfaces that we want to join are extremely heat-sensitive, such as flexible electronics and biodegradable plastics.”

How “magnetocuring” glue works

The new adhesive is made of two main components: a commercially available epoxy that is cured through heat, and oxide nanoparticles made from a chemical combination of manganese, zinc and iron (MnₓZn₁₋ₓFe₂O₄).

These nanoparticles are designed to heat up when electromagnetic energy is passed through them, activating the curing process. The maximum temperature and rate of heating can be controlled by these special nanoparticles, eliminating overheating and hotspot formation.

Without the need for large industrial ovens, the activation of the glue has a smaller footprint in space and energy consumption terms. The energy efficiency in the curing process is crucial for green manufacturing, where products are made at lower temperatures, and use less energy for heating and cooling.

For instance, manufacturers of sports shoes often have difficulty heating up the adhesives in between the rubber soles and the upper half of the shoe, as rubber is a heat insulator and resists heat transmission to the conventional epoxy glue. An oven is needed to heat up the shoe over a long time before the heat can reach the glue.

Using magnetic-field activated glue bypasses this difficulty, by directly activating the curing process only in the glue.

The alternating magnetic field can also be embedded at the bottom of conveyor belt systems, so products with pre-applied glue can be cured when they pass through the magnetic field.

Improving manufacturing efficiency

Prof Raju Ramanujan, who is internationally recognised for his advances in magnetic materials, jointly led the project and predicts that the technology could increase the efficiency of manufacturing where adhesive joints are needed.

“Our temperature-controlled magnetic nanoparticles are designed to be mixed with existing one-pot adhesive formulations, so many of the epoxy-based adhesives on the market could be converted into magnetic field-activated glue,” Prof Ramanujan said.

“The speed and temperature of curing can be adjusted, so manufacturers of existing products could redesign or improve their existing manufacturing methods. For example, instead of applying glue and curing it part by part in a conventional assembly line, the new process could be to pre-apply glue on all the parts and then cure them as they move along the conveyor chain. Without ovens, it would lead to much less downtime and more efficient production.”

First author of the study, Dr Richa Chaudhary said, “The curing of our newly-developed magnetocuring adhesive takes only several minutes instead of hours, and yet is able to secure surfaces with high strength bonds, which is of considerable interest in the sports, medical, automotive and aerospace industries. This efficient process can also bring about cost savings as the space and energy needed for conventional heat curing are reduced significantly.”

This three-year project was supported by the Agency for Science, Technology and Research (A*STAR).

Previous work on heat-activated glue used an electric current flowing through a coil, known as induction-curing, where the glue is heated and cured from outside. However, its drawbacks include overheating of the surfaces and uneven bonding due to hotspot formation within the adhesive.

Moving forward, the team hopes to engage adhesive manufacturers to collaborate on commercialising their technology. They have filed a patent through NTUitive, the university’s innovation and enterprise company. They have already received interest in their research from sporting goods manufacturers.

Reference: “Magnetocuring of temperature failsafe epoxy adhesives”, Applied Materials Today (December 2020). https://doi.org/10.1016/j.apmt.2020.100824

Provided by Nanyang Technological University

Quantum Interference in Time (Quantum)

Bosons – including photons in particular – have a natural propensity to congregate. In 1987, three physicists demonstrated this gregarious character through a remarkable experiment: the Hong-Ou-Mandel effect. Recently, researchers at the Centre for Quantum Information and Communication (ULB) discovered another manifestation of this herd tendency of photons. The research has just been published in PNAS.

Figure: phenomenon highlighted by ULB researchers

Since the very beginning of quantum physics a hundred years ago, it has been known that all particles in the universe fall into two categories: fermions and bosons. For instance, the protons found in atomic nuclei are fermions, while bosons include photons, which are particles of light, as well as the Brout-Englert-Higgs boson, for which François Englert, a professor at ULB, was awarded the Nobel Prize in Physics in 2013.

Bosons, especially photons, have a natural tendency to clump together. One of the most remarkable experiments demonstrating this tendency to coalesce was conducted in 1987, when three physicists identified an effect that has since been named after them: the Hong-Ou-Mandel effect. If two photons are sent simultaneously, each towards a different side of a beam splitter (a sort of semi-transparent mirror), one could expect each photon to be either reflected or transmitted.

Logically, photons should sometimes be detected on opposite sides of this mirror, which would happen if both are reflected or if both are transmitted. However, the experiment showed that this never actually happens: the two photons always end up on the same side of the mirror, as though they ‘preferred’ sticking together!

In an article published recently in the US journal Proceedings of the National Academy of Sciences, Nicolas Cerf, a professor at the Centre for Quantum Information and Communication (École polytechnique de Bruxelles), and his former PhD student Michael Jabbour, now a postdoctoral researcher at the University of Cambridge, describe how they identified another way in which photons manifest their tendency to stay together. Instead of a semi-transparent mirror, the researchers used an optical amplifier, called an active component because it produces new photons. They were able to demonstrate the existence of an effect similar to the Hong-Ou-Mandel effect, but one that captures a new form of quantum interference.

Quantum physics tells us that the Hong-Ou-Mandel effect is a consequence of interference, coupled with the fact that the two photons are absolutely identical. It is impossible to distinguish the trajectory in which both photons were reflected off the mirror from the trajectory in which both were transmitted through it; the photons fundamentally cannot be told apart. The remarkable consequence is that the probability amplitudes of these two trajectories cancel each other out! As a result, the two photons are never observed on opposite sides of the mirror. This property is quite elusive: if photons were tiny balls, identical in every way, both trajectories could very well be observed. As is often the case, quantum physics is at odds with our classical intuition.
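The cancellation can be made concrete with the standard textbook calculation (a general quantum-optics sketch, not taken from the PNAS paper). For a balanced beam splitter, the creation operators of the two input modes a and b transform into the output modes c and d as:

```latex
% Balanced beam splitter: input modes a, b map to output modes c, d
\hat{a}^\dagger \;\to\; \frac{\hat{c}^\dagger + i\,\hat{d}^\dagger}{\sqrt{2}},
\qquad
\hat{b}^\dagger \;\to\; \frac{i\,\hat{c}^\dagger + \hat{d}^\dagger}{\sqrt{2}}

% One identical photon in each input port:
\hat{a}^\dagger \hat{b}^\dagger \lvert 0 \rangle
\;\to\;
\frac{1}{2}\bigl( i\,\hat{c}^{\dagger 2} + \hat{c}^\dagger \hat{d}^\dagger
- \hat{c}^\dagger \hat{d}^\dagger + i\,\hat{d}^{\dagger 2} \bigr)\lvert 0 \rangle
\;=\;
\frac{i}{2}\bigl( \hat{c}^{\dagger 2} + \hat{d}^{\dagger 2} \bigr)\lvert 0 \rangle
```

The coincidence term (one photon in each output) appears twice with opposite signs and vanishes, leaving only the terms with both photons in the same output port, exactly as observed in the 1987 experiment.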

The two researchers from ULB and the University of Cambridge have demonstrated that the impossibility of differentiating the photons emitted by an optical amplifier produces an effect that may be even more surprising. Fundamentally, the interference that occurs on a semi-transparent mirror stems from the fact that if we imagine swapping the two photons on either side of the mirror, the resulting configuration is exactly identical. With an optical amplifier, on the other hand, the effect identified by Cerf and Jabbour must be understood by looking at photon exchanges not through space, but through time.

When two photons are sent into an optical amplifier, they can simply pass through unaffected. However, an optical amplifier can also produce (or destroy) a pair of twin photons: so another possibility is that both photons are eliminated and a new pair is created. In principle, it should be possible to tell which scenario has occurred based on whether the two photons exiting the optical amplifier are identical to those that were sent in. If it were possible to tell the pairs of photons apart, the trajectories would be distinguishable and there would be no quantum effect. However, the researchers found that the fundamental impossibility of telling the photons apart in time (in other words, of knowing whether they have been replaced inside the optical amplifier) completely eliminates the very possibility of observing a pair of photons at the output of the amplifier. This means the researchers have indeed identified a quantum interference phenomenon that occurs through time. Hopefully, an experiment will eventually confirm this fascinating prediction!

Reference: Cerf, Nicolas J., and Michael G. Jabbour. “Two-boson quantum interference in time.” Proceedings of the National Academy of Sciences (2020): 202010827. Web. 14 Dec. 2020. https://www.pnas.org/content/early/2020/12/10/2010827117

Provided by ULB

How The Brain Remembers Right Place, Right Time (Neuroscience)

Two studies led by UT Southwestern researchers shed new light on how the brain encodes time and place into memories. The findings, published recently in PNAS and Science, not only add to the body of fundamental research on memory, but could eventually provide the basis for new treatments to combat memory loss from conditions such as traumatic brain injury or Alzheimer’s disease.

Hippocampal neurons create spatial and temporal “maps” of our world. Brain waves called “theta rhythms” help organize the activity of these neurons. A study by Bradley Lega, M.D., and colleagues determined how a group of neurons known as time cells allow the brain to correctly mark the order of events and assist in memory. Credit: Melissa Logies

About a decade ago, a group of neurons known as ‘time cells’ was discovered in rats. These cells appear to play a unique role in recording when events take place, allowing the brain to correctly mark the order of what happens in an episodic memory.

Located in the brain’s hippocampus, these cells show a characteristic activity pattern while the animals are encoding and recalling events, explains Bradley Lega, M.D., associate professor of neurological surgery at UTSW and senior author of the PNAS study. By firing in a reproducible sequence, they allow the brain to organize when events happen, Lega says. The timing of their firing is controlled by 5 Hz brain waves, called theta oscillations, in a process known as phase precession.

Lega investigated whether humans also have time cells by using a memory task that makes strong demands on time-related information. Lega and his colleagues recruited volunteers from the Epilepsy Monitoring Unit at UT Southwestern’s Peter O’Donnell Jr. Brain Institute, where epilepsy patients stay for several days before surgery to remove damaged parts of their brains that spark seizures. Electrodes implanted in these patients’ brains help their surgeons precisely identify the seizure foci and also provide valuable information on the brain’s inner workings, Lega says.

While recording electrical activity from the hippocampus in 27 volunteers’ brains, the researchers had them do “free recall” tasks that involved reading a list of 12 words for 30 seconds, doing a short math problem to distract them from rehearsing the lists, and then recalling as many words from the list as possible for the next 30 seconds. This task requires associating each word with a segment of time (the list it was on), which allowed Lega and his team to look for time cells. What the team found was exciting: Not only did they identify a robust population of time cells, but the firing of these cells predicted how well individuals were able to link words together in time (a phenomenon called temporal clustering). Finally, these cells appear to exhibit phase precession in humans, as predicted.
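Temporal clustering of this kind is commonly quantified with a lag-based score. The sketch below is a simplified, illustrative version (not the authors' exact measure): it treats each recalled item as a study-list serial position and, for every transition between successively recalled items, ranks the actual lag against all lags that were still possible.

```python
import numpy as np

def temporal_clustering_factor(recall_order):
    """Illustrative temporal clustering score: for each transition
    between successively recalled serial positions, compute the
    percentile rank of the actual lag among all available lags.
    1.0 = perfectly clustered in time; 0.5 = chance."""
    remaining = set(recall_order)
    scores = []
    for cur, nxt in zip(recall_order, recall_order[1:]):
        remaining.discard(cur)
        others = remaining - {nxt}
        if not others:
            break  # no alternative transitions left to rank against
        actual = abs(nxt - cur)
        lags = [abs(p - cur) for p in others]
        # fraction of possible transitions with a larger lag (ties count half)
        better = sum(l > actual for l in lags)
        ties = sum(l == actual for l in lags)
        scores.append((better + 0.5 * ties) / len(lags))
    return float(np.mean(scores)) if scores else float("nan")

# A recall sequence that follows study order closely scores near 1;
# a temporally scrambled one scores much lower.
print(temporal_clustering_factor([1, 2, 3, 4, 5]))   # 1.0
print(temporal_clustering_factor([1, 9, 3, 11, 5]))
```

In the study itself, the key finding was that trial-by-trial time-cell firing predicted scores like this one, linking the cells to how well words were bound together in time.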

“For years scientists have proposed that time cells are like the glue that holds together memories of events in our lives,” according to Lega. “This finding specifically supports that idea in an elegant way.”

In the second study in Science, Brad Pfeiffer, Ph.D., assistant professor of neuroscience, led a team investigating place cells—a population of hippocampal cells in both animals and humans that records where events occur. Researchers have long known that as animals travel a path they’ve been on before, neurons encoding different locations along the path will fire in sequence much like time cells fire in the order of temporal events, Pfeiffer explains. In addition, while rats are actively exploring an environment, place cells are further organized into “mini-sequences” that represent a virtual sweep of locations ahead of the rat. These radar-like sweeps happen roughly 8-10 times per second and are thought to be a brain mechanism for predicting immediately upcoming events or outcomes.

Prior to this study, it was known that when rats stopped running, place cells would often reactivate in long sequences that appeared to replay the rat’s prior experience in reverse. While these “reverse replay” events were known to be important for memory formation, it was unclear how the hippocampus was able to produce such sequences. Indeed, considerable work had indicated that experience should strengthen forward, “look ahead” sequences but weaken reverse replay events.

To determine how these backward and forward memories work together, Pfeiffer and his colleagues placed electrodes in the hippocampi of rats, then allowed them to explore two different places: a square arena and a long, straight track. To encourage them to move through these spaces, they placed wells with chocolate milk at various places. They then analyzed the animals’ place cell activity to see how it corresponded to their locations.

Particular neurons fired as the rats wandered through these spaces, encoding information on place. These same neurons fired in the same sequence as the rats retraced their paths, and periodically fired in reverse as they completed different legs of their journeys. However, taking a closer look at the data, the researchers found something new: As the rats moved through these spaces, their neurons not only exhibited forward, predictive mini-sequences, but also backward, retrospective mini-sequences. The forward and backward sequences alternated with each other, each taking only a few dozen milliseconds to complete.

“While these animals were moving forward, their brains were constantly switching between expecting what would happen next and recalling what just happened, all within fraction-of-a-second timeframes,” Pfeiffer says.

Pfeiffer and his team are currently studying what inputs these cells receive from other parts of the brain that cause them to act in these forward or reverse patterns. In theory, he says, it might be possible to hijack this system to help the brain recall where an event happened with more fidelity. Similarly, adds Lega, stimulation techniques might eventually be able to mimic the precise patterning of time cells to help people more accurately remember temporal sequences of events. “In the past few decades, there’s been an explosion in new findings about memory,” he adds. “The distance between fundamental discoveries in animals and how they can help people is becoming much shorter now.”

Reference: (1) Gray Umbach et al. Time cells in the human hippocampus and entorhinal cortex support episodic memory, Proceedings of the National Academy of Sciences (2020). DOI: 10.1073/pnas.2013250117 https://www.pnas.org/content/117/45/28463 (2) Mengni Wang et al. Alternating sequences of future and past behavior encoded within hippocampal theta oscillations, Science (2020). DOI: 10.1126/science.abb4151 https://science.sciencemag.org/lookup/doi/10.1126/science.abb4151

Provided by UT Southwestern Medical Center