In a new study, researchers from Skoltech and the University of Kentucky have found a connection between quantum information and quantum field theory. This work attests to the growing role of quantum information theory across various areas of physics. The paper was published in the journal Physical Review Letters.
Quantum information plays an increasingly important role as an organizing principle connecting various branches of physics. In particular, the theory of quantum error correction, which describes how to protect and recover information in quantum computers and other complex interacting systems, has become one of the building blocks of the modern understanding of quantum gravity.
“Normally, information stored in physical systems is localized. Say, a computer file occupies a particular small area of the hard drive. By “error” we mean any unforeseen or undesired interaction which scrambles information over an extended area. In our example, pieces of the computer file would be scattered over different areas of the hard drive. Error correcting codes are mathematical protocols that allow collecting these pieces together to recover the original information. They are in heavy use in data storage and communication systems. Quantum error correcting codes play a similar role in cases when the quantum nature of the physical system is important,” Anatoly Dymarsky, Associate Professor at the Skoltech Center for Energy Science and Technology (CEST), explains.
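The classical picture Dymarsky describes can be made concrete with the simplest possible error correcting code, the three-bit repetition code. The sketch below (an illustration, not anything from the paper) encodes one bit as three copies and recovers it by majority vote even after an "error" corrupts one copy:

```python
def encode(bit):
    # Spread one logical bit over three physical bits.
    return [bit, bit, bit]

def decode(bits):
    # Majority vote recovers the original bit as long as
    # at most one of the three copies was corrupted.
    return 1 if sum(bits) >= 2 else 0

codeword = encode(1)
codeword[0] = 0                # an "error" scrambles part of the stored data
assert decode(codeword) == 1   # the original bit is still recoverable
```

Quantum error correcting codes follow the same logic of spreading information redundantly, but must do so without ever reading out (and thereby disturbing) the encoded quantum state.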
In a rather unexpected twist, scientists realized not long ago that quantum gravity – the theory describing the quantum dynamics of space and time – operates similar mathematical protocols to exchange information between different parts of space. “The locality of information within quantum gravity remains one of the few open fundamental problems in theoretical physics. That is why the appearance of well-studied mathematical structures such as quantum error correcting codes is intriguing,” Dymarsky notes. Yet the role of codes has so far been understood only schematically, and the explicit mechanism behind the locality of information remains elusive.
In their new paper, he and his colleague, Alfred Shapere from the University of Kentucky Department of Physics and Astronomy, establish a novel connection between quantum error correcting codes and two-dimensional conformal field theories. The latter describe interactions of quantum particles and have become standard theoretical tools to describe many different phenomena, from fundamental elementary particles to quasi-particles emerging in quantum materials, such as graphene. Some of these conformal field theories also describe quantum gravity via holographic correspondence.
“Now we have a new playground to study the role of quantum error correcting codes in the context of quantum field theory. We hope this is a first step in understanding how locality of information actually works, and what hides behind all this beautiful mathematics,” Dymarsky concludes.
A UNSW mathematician has revealed the origins of applied geometry on a 3700-year-old clay tablet that has been hiding in plain sight in a museum in Istanbul for over a century.
The tablet – known as Si.427 – was discovered in the late 19th century in what is now central Iraq, but its significance was unknown until the UNSW scientist’s detective work was revealed today.
Most excitingly, Si.427 is thought to be the oldest known example of applied geometry – and the study, released today in Foundations of Science, also reveals a compelling human story of land surveying.
“Si.427 dates from the Old Babylonian (OB) period – 1900 to 1600 BCE,” says lead researcher Dr Daniel Mansfield from UNSW Science’s School of Mathematics and Statistics.
“It’s the only known example of a cadastral document from the OB period, which is a plan used by surveyors to define land boundaries. In this case, it tells us legal and geometric details about a field that’s split after some of it was sold off.”
This is a significant object because the surveyor uses what are now known as “Pythagorean triples” to make accurate right angles.
“The discovery and analysis of the tablet have important implications for the history of mathematics,” Dr Mansfield says. “For instance, this is over a thousand years before Pythagoras was born.”
Si.427 is a hand tablet from 1900-1600 BCE, created by an Old Babylonian surveyor. It is made of clay, and the surveyor wrote on it with a stylus.
On the front, we see a diagram of a field.
The field is being split, and some of it is being sold.
That’s what the lines are for – they demarcate the boundaries of the different fields. The boundaries are really accurate – a lot more accurate than you’d expect for that time.
Our surveyor managed to be so precise by using Pythagorean triples, making the boundary lines he created truly perpendicular. The simplest Pythagorean triple has sides of 3, 4 and 5, which together form a perfect right angle.
On the back of the tablet we see text, written in cuneiform, one of the earliest systems of writing.
The text corresponds to the diagram on the front – describing things such as the size of the field.
At the bottom of the back of the tablet, we see large numbers – that’s the only thing we haven’t quite figured out.
Hot on the heels of another world-first find
In 2017, Dr Mansfield conjectured that another fascinating artefact from the same period, known as Plimpton 322, was a unique kind of trigonometric table.
“It is generally accepted that trigonometry – the branch of maths that is concerned with the study of triangles – was developed by the ancient Greeks studying the night sky in the second century BCE,” says Dr Mansfield.
“But the Babylonians developed their own ‘proto-trigonometry’ to solve problems measuring the ground, not the sky.”
The tablet revealed today is thought to predate Plimpton 322 – in fact, surveying problems of this kind likely inspired Plimpton 322.
“There is a whole zoo of right triangles with different shapes. But only a very small handful can be used by Babylonian surveyors. Plimpton 322 is a systematic study of this zoo to discover the useful shapes,” says Dr Mansfield.
Tablet purpose revealed: surveying land
Back in 2017, the team speculated that Plimpton 322 was likely to have had some practical purpose, possibly in the construction of palaces and temples, the building of canals, or the surveying of fields.
“With this new tablet, we can actually see for the first time why they were interested in geometry: to lay down precise land boundaries,” Dr Mansfield says.
“This is from a period where land is starting to become private – people started thinking about land in terms of ‘my land and your land’, wanting to establish a proper boundary to have positive neighbourly relationships. And this is what this tablet immediately says. It’s a field being split, and new boundaries are made.”
There are even clues hidden on other tablets from that time period about the stories behind these boundaries.
“Another tablet refers to a dispute between Sin-bel-apli – a prominent individual mentioned on many tablets including Si.427 – and a wealthy female landowner,” says Dr Mansfield.
“The dispute is over valuable date palms on the border between their two properties. The local administrator agrees to send out a surveyor to resolve the dispute. It is easy to see how accuracy was important in resolving disputes between such powerful individuals.”
Dr Mansfield says the way these boundaries are made reveals real geometric understanding.
“Nobody expected that the Babylonians were using Pythagorean triples in this way,” Dr Mansfield says. “It is more akin to pure mathematics, inspired by the practical problems of the time.”
Creating right angles – easier said than done
One simple way to make an accurate right angle is to make a rectangle with sides 3 and 4, and diagonal 5. These special numbers form the 3-4-5 “Pythagorean triple”, and a rectangle with these measurements has mathematically perfect right angles. This was important to ancient surveyors and is still used today.
“The ancient surveyors who made Si.427 did something even better: they used a variety of different Pythagorean triples, both as rectangles and right triangles, to construct accurate right angles,” Dr Mansfield says.
However, it is difficult to work with prime numbers bigger than 5 in the base 60 Babylonian number system.
“This raises a very particular issue – their unique base 60 number system means that only some Pythagorean shapes can be used,” Dr Mansfield says.
“It seems that the author of Plimpton 322 went through all these Pythagorean shapes to find these useful ones.
“This deep and highly numerical understanding of the practical use of rectangles earns the name ‘proto-trigonometry’ but it is completely different to our modern trigonometry involving sin, cos, and tan.”
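The base-60 constraint Dr Mansfield describes can be sketched in code. A number is easy to divide by in sexagesimal only if it is “regular”, i.e. has no prime factors other than 2, 3 and 5. The short script below (purely an illustration of the arithmetic constraint, not the tablet’s actual method) enumerates small Pythagorean triples and keeps those whose sides are all regular:

```python
def is_regular(n):
    # Regular (base-60-friendly) numbers have no prime factors besides 2, 3, 5.
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

def pythagorean_triples(limit):
    # Brute-force enumeration of triples with sides below `limit`.
    return [(a, b, c)
            for a in range(1, limit)
            for b in range(a, limit)
            for c in range(b, limit)
            if a * a + b * b == c * c]

usable = [t for t in pythagorean_triples(30) if all(is_regular(s) for s in t)]
print(usable)  # (3, 4, 5) survives; (7, 24, 25) is excluded by the factor 7
```

Only a small handful of the triples pass the regularity filter, matching the “very small handful” of shapes available to a Babylonian surveyor.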
Hunting down Si.427
Dr Mansfield first learned about Si.427 while reading excavation records – the tablet was dug up during the Sippar expedition of 1894, in what is today Baghdad province, Iraq.
“It was a real challenge to trace the tablet from these records and physically find it – the report said that the tablet had gone to the Imperial Museum of Constantinople, a place that obviously doesn’t exist anymore.
“Using that piece of information, I went on a quest to track it down, speaking to many people at Turkish government ministries and museums, until one day in mid 2018 a photo of Si.427 finally landed in my inbox.
“That’s when I learned that it was actually on display at the museum. Even after locating the object it still took months to fully understand just how significant it is, and so it’s really satisfying to finally be able to share that story.”
Next, Dr Mansfield hopes to find what other applications the Babylonians had for their proto-trigonometry.
There’s just one mystery left that Dr Mansfield hasn’t unlocked: on the back of the tablet, at the very bottom, it lists the numbers ‘25,29’ in big font – think of it as 25 minutes and 29 seconds.
“I can’t figure out what these numbers mean – it’s an absolute enigma. I’m keen to discuss any leads with historians or mathematicians who might have a hunch as to what these numbers are trying to tell us!”
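The “minutes and seconds” analogy works because Babylonian numerals are sexagesimal (base 60): a two-place number like 25,29 reads as 25 × 60 + 29. A tiny converter (purely illustrative) makes the place-value reading explicit:

```python
def from_sexagesimal(digits):
    # Interpret a list of base-60 "digits" as one integer, exactly as
    # 25 minutes 29 seconds equals 25*60 + 29 = 1529 seconds.
    value = 0
    for d in digits:
        value = value * 60 + d
    return value

print(from_sexagesimal([25, 29]))  # 1529
```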
Wormholes play a key role in understanding the nonperturbative physics of quantum black holes (for instance, the eternal traversable wormhole, the long-time behavior of the spectral form factor and correlation functions, and the Page curve), but they also lead to puzzles, in particular the factorization problem. Imagine two decoupled boundary systems in the AdS/CFT context, labelled L and R. From the boundary perspective, if one evaluates a partition function in the combined system, the result is just the product of the results for the two component systems: Z_LR = Z_L Z_R.
It factorizes. But if the bulk calculation of Z_LR includes a wormhole linking L and R, then, superficially at least, Z_LR ≠ Z_L Z_R: it fails to factorize. Some of the phenomena recently explained by wormholes, in particular the spectral form factor and squared matrix elements, are described by decoupled boundary systems, and so the wormhole explanation gives rise to a factorization puzzle.
This factorization puzzle can be removed, however, by averaging the L and R systems over the same ensemble, denoted by ⟨·⟩, as is done in the SYK model. The puzzle dissolves because ⟨Z_L Z_R⟩ need not equal ⟨Z_L⟩⟨Z_R⟩. In fact, this link between wormholes and ensembles is not a new one; it dates back to the 1980s, though it has only recently been applied in the AdS/CFT context.
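A toy numerical experiment (not the SYK calculation itself, just an illustration of the statistics) shows why ensemble averaging spoils factorization: if Z_L and Z_R are built from the same random coupling, the average of the product differs from the product of the averages.

```python
import math
import random

random.seed(0)

def Z(coupling, beta=1.0):
    # Toy one-level "partition function" whose energy is set by the coupling.
    return math.exp(-beta * coupling)

# One shared ensemble of couplings for both the L and R systems.
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

avg_ZZ = sum(Z(J) ** 2 for J in samples) / len(samples)   # <Z_L Z_R>
avg_Z = sum(Z(J) for J in samples) / len(samples)         # <Z_L> = <Z_R>

print(avg_ZZ, avg_Z ** 2)  # the two clearly differ
```

For Gaussian couplings the two quantities converge to e² ≈ 7.39 and e ≈ 2.72 respectively, so ⟨Z_L Z_R⟩ ≠ ⟨Z_L⟩⟨Z_R⟩, and a connected "wormhole-like" contribution survives the average.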
A new form of the factorization puzzle arises in such ensembles if we ask what happens to the wormholes connecting decoupled systems when we focus on just one element of the ensemble. Phil Saad and colleagues addressed this question in the SYK model: instead of averaging the L and R systems, they picked a fixed set of couplings between the fermions.
After averaging over the fermion couplings, the SYK model has collective fields, called G and Σ, that sometimes have “wormhole” solutions. Saad and colleagues studied the fate of these wormholes when the couplings are fixed.
Working mainly in a simple model, they found that the wormhole saddles persist and that their dependence on the couplings is weak: the wormhole is “self-averaging”. But new saddles also appear elsewhere in the integration space, which they interpret as “half-wormholes”. The half-wormhole contributions depend sensitively on the particular choice of couplings.
Finally, they showed that the half-wormholes are crucial for restoring the factorization of decoupled systems with fixed couplings, but that they vanish after averaging, leaving only the non-factorizing wormhole behind.
Reference: Phil Saad, Stephen H. Shenker, Douglas Stanford, and Shunyu Yao, “Wormholes without averaging”, arXiv:2103.16754 (2021).
Mathematicians and neuroscientists achieve breakthrough in understanding how whiskers ‘amplify’ texture
How we sense texture has long been a mystery. It is known that nerves attached to the fingertip skin are responsible for sensing different surfaces, but how they do it is not well understood. Rodents perform texture sensing through their whiskers. Like human fingertips, whiskers perform multiple tasks, sensing proximity and shape of objects, as well as surface textures.
Mathematicians from the University of Bristol’s Department of Engineering Mathematics worked with neuroscientists from the University of Tuebingen in Germany to understand how the motion of a whisker across a surface translates texture information into neural signals that can be perceived by the brain.
By carrying out high-precision laboratory tests on a real rat whisker, combined with computational models, the researchers found that whiskers act like antennae, tuned to sense the tiny stick-slip motions caused by friction between the surface and the tip of the whisker.
“One of the most striking things we found both in the experiments and the theory was the thousand-fold amplification of tiny force signals perceived by the tip of the whisker to that received by the neurons at the whiskers base. Suddenly we realised that the whisker is acting like an amplifier, taking micro-scale stick-slip events and rapidly turning them into clean pulses that can be picked up and processed by the brain,” said Professor Alan Champneys from the University of Bristol, co-lead of the modelling work with colleague, Dr Robert Szalai. Dr Thibaut Putelat carried out the detailed numerical modelling.
The research, “Conveyance of texture signals along a rat whisker”, published in the journal Scientific Reports, reveals that the tapering of the whisker amplifies tiny high-frequency motions into appreciable pulse-like changes in force and movement at the whisker follicle. In turn, the nerve cells in the follicle sense these changes and transmit them to the brain.
“It is almost as if the morphology of the whisker is designed to convey these friction-induced signals as “AC” waves on top of the “DC” motion of the whisker that conveys the information on surface proximity and hardness.
“These AC waves are too small and too rapid to be perceived by the human eye. However, in approaching this problem in a multidisciplinary fashion, we have been able to reveal these waves with clarity for the first time,” said Professor Champneys.
“The findings have implications for human touch too, where the morphology of finger-print ridges is more complex, but might similarly distinguish between AC and DC signals as our brain tries to disentangle multiple information streams about what we are feeling,” said Dr Maysam Oladazimi, who carried out the experiments as part of his PhD.
The findings could have far-reaching benefits including how textures could be designed to provide optimal cues for the visually impaired, for human safety operation in low light environments, or for immersive artistic installations.
“This research opens several avenues for future work. As neuroscientists, we are interested in developing a more detailed understanding of neural signalling pathways in texture discrimination, while our colleagues in Bristol are keen to explore implications for the design of future robotic sensing systems,” said Professor Cornelius Schwarz, who led the experiments at the University of Tuebingen.
Professor Champneys said the research is of particular value to haptic sensing in the field of robotics, where robots literally feel their environment; this is the focus of much current research, especially for robots that need to act autonomously in the dark, such as in search and rescue missions. Professor Nathan Lepora and colleagues at the Bristol Robotics Laboratory are pioneers in this field.
“This transnational interdisciplinary collaboration between experimentalists and mathematical modellers was exciting. The results from the computer models and from the laboratory experiments went hand in hand – it was only through a combination of the two that we were able to make our breakthrough,” said Professor Champneys.
Paper: Conveyance of texture signals along a rat whisker, by Oladazimi, M; Putelat, T; Szalai, R; Noda, K; Shimoyama, I; Champneys, A & Schwarz, C; published in Scientific Reports. 11, 13570 (2021). https://doi.org/10.1038/s41598-021-92770-3
RUDN University mathematicians have developed a model for calculating the density of 5G stations needed to achieve the required network parameters. The results are published in Computer Communications.
Network slicing (NS) is one of the key technologies that the new 5G communication standard relies on. Several virtual networks, or layers, are deployed on the same physical infrastructure (the same base stations). Each layer is allocated to a separate group of users, devices, or applications. To slice the network, one needs the NR (New Radio) technology, which operates on millimetre waves. Most of the research in this area is aimed at creating an infrastructure of NR stations that would provide network slicing in each specific case. RUDN University mathematicians have developed the first general theoretical approach for calculating the density of NR base stations needed to slice the network with the specified quality-of-service parameters.
“The concept of network slicing will drastically simplify the market entrance for mobile virtual network operators as well as provisioning of differentiated quality to network services. This functionality is a major paradigm shift in the cellular world enabling multi-layer network structures similar to that of the modern Internet and allowing resource sharing with logical isolation among multiple tenants and/or services in multi-domain context”, said Ekaterina Lisovskaya, PhD, junior Researcher at the Research Center for Applied Probability & Stochastic Analysis at RUDN University.
When constructing the algorithm, the RUDN mathematicians used a model city. NR base stations were distributed with a given density. Each station had three antennas, each covering 120 degrees. Users with 5G devices operating in the millimetre-wave frequency range (30-100 GHz) were randomly distributed around the city; they moved about and could block each other’s line of sight to the base stations. Each antenna had an effective range within which the connection does not break even if the line of sight is blocked. The mathematicians then derived the dependence of the network characteristics on the density of base stations.
To check the accuracy of the model, the mathematicians used computer simulation; the theoretical and experimental results agreed. The model shows, for example, how station density affects the network slicing regime, from full isolation to full mixing. Full isolation assumes that each layer has its own frequency range of fixed width; in full mixing, the layers share frequencies. The second option is more difficult from a technical point of view, but it increases the efficiency with which physical network resources are used. The RUDN University mathematicians studied these regimes as the two boundary versions of a network implementation; in real life, some intermediate implementation is usually required. It turned out that the difference in station density between these bounds is small: one station per 10,000 square meters.
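The flavour of such a density calculation can be conveyed with a toy Monte Carlo sketch (all numbers here are invented for illustration and are not the RUDN model’s parameters): stations are dropped uniformly at a given density, and we estimate the fraction of random user locations within a station’s effective range.

```python
import math
import random

random.seed(1)

def coverage_fraction(density, area_side=1000.0, radius=50.0, trials=2000):
    """Estimate the probability that a random user is within `radius` metres
    of at least one base station, for stations placed uniformly with
    `density` stations per square metre over a square of side `area_side`."""
    n_stations = max(1, int(density * area_side ** 2))
    stations = [(random.uniform(0, area_side), random.uniform(0, area_side))
                for _ in range(n_stations)]
    covered = 0
    for _ in range(trials):
        ux, uy = random.uniform(0, area_side), random.uniform(0, area_side)
        if any(math.hypot(ux - sx, uy - sy) <= radius for sx, sy in stations):
            covered += 1
    return covered / trials

# Compare one station per 10,000 m^2 with five stations per 10,000 m^2.
sparse = coverage_fraction(1e-4)
dense = coverage_fraction(5e-4)
print(sparse, dense)
```

A full model would add antenna sectors, blocking users, and slicing regimes on top of this geometry; the sketch only shows how coverage-style metrics depend on station density.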
“Our numerical evaluation campaign reveals that the full isolation and full mixing systems’ operational regime changes rather abruptly with respect to the density of NR BSs. However, the system parameters may drastically affect the required density. Practically, it implies that at the initial market penetration phase, the full isolation strategy can be utilized without compromising the network performance. However, at mature deployment phases, more sophisticated schemes may reduce the capital expenditures of network operators” said Ekaterina Lisovskaya, PhD, junior Researcher at the Research Center for Applied Probability & Stochastic Analysis at RUDN University.
Spatial reasoning ability in small children reflects how well they will perform in mathematics later. Researchers from the University of Basel recently came to this conclusion, making the case for better cultivation of spatial reasoning.
Good math skills open career doors in the natural sciences as well as technical and engineering fields. However, a nationwide study on basic skills conducted in Switzerland in 2019 found that schoolchildren achieved only modest results in mathematics. But it seems possible to begin promoting math skills from a young age, as Dr. Wenke Möhring’s team of researchers from the University of Basel reported after studying nearly 600 children.
The team found a correlation between children’s spatial sense at the age of three and their mathematical abilities in primary school. “We know from past studies that adults think spatially when working with numbers – for example, representing small numbers on the left and large ones on the right,” explains Möhring. “But little research has been done on how spatial reasoning at an early age affects children’s later learning and comprehension of mathematics.”
A good foundation for math comprehension
The study, which was published in the journal Learning and Instruction, suggests that there is a strong correlation between early spatial skills and the comprehension of mathematical concepts later. The researchers also ruled out the possibility that this correlation is due to other factors, such as socio-economic status or language ability. Exactly how spatial ability affects mathematical skills in children is still unclear, but the spatial conception of numbers might play a role.
The findings are based on the analysis of data from 586 children in Basel. As part of a project on the acquisition of German as a second language, the researchers gave three-year-old children a series of tasks testing cognitive, socio-emotional and spatial abilities; for example, the children were asked to arrange colored cubes in certain shapes. The researchers repeated these tests four times at intervals of about 15 months and compared the results with the same children’s academic performance in the first grade, at age seven.
Developmental dynamics appear irrelevant
The researchers also closely examined whether the pace of development, i.e. particularly rapid development of spatial abilities, can predict future mathematical ability. Past studies with a small sample size had found a correlation, but Möhring and her colleagues were unable to confirm this in their own study. Three-year-old children who started out with low spatial abilities improved them faster in the subsequent years, but still performed at a lower level in mathematics when they were seven years old. Despite faster development, by the time they began school these children had still not fully caught up with the children possessing higher initial spatial reasoning skills.
“Parents often push their children in the area of language skills,” says Möhring. “Our results suggest how important it is to cultivate spatial reasoning at an early age as well.” There are simple ways to do this, such as using “spatial language” (larger, smaller, same, above, below) and toys – e.g. building blocks – that help improve spatial reasoning ability.
Spatial reasoning and gender
The researchers found that boys and girls are practically indistinguishable in terms of their spatial reasoning ability at the age of three, but in subsequent years this ability develops more slowly in girls. Möhring and her colleagues suspect that boys may hear more “spatial language” and that toys typically designed for boys often promote spatial reasoning, whereas toys for girls focus mainly on social skills. Children may also internalize their parents’ and teachers’ expectations and then, as they grow up, live up to stereotypes – for example, that women do not perform as well as men in spatial reasoning and mathematics.
Featured image: Toys like building blocks promote spatial thinking and thus possibly also later understanding of mathematics. (Photo: Jelleke Vanooteghem, unsplash)
Casting models of a complex system in terms of differential equations on networks allows researchers to use its underlying structure for efficient simulations
Emerging open-source programming language Julia is designed to be fast and easy to use. Since it is particularly suited for numerical applications, such as differential equations, scientists in Germany are using it to explore the challenges involved in transitioning to all-renewable power generation.
Decarbonization implies a radical restructuring of power grids, which are huge complex systems with a wide variety of constraints, uncertainties, and heterogeneities. Power grids will become even more complex in the future, so new computational tools are needed.
In Chaos, from AIP Publishing, Potsdam Institute for Climate Impact Research (PIK) scientists describe a software package they built to enable the simulation of general dynamical systems on complex networks.
They wanted to build an open-source tool — so anyone can verify its software structure and algorithms — to make all state-of-the-art algorithms within Julia’s ecosystem easily accessible to engineers and physicists. Their package, called NetworkDynamics.jl, started out as the computational backend of another one, PowerDynamics.jl.
“We realized our computational backend would be useful to other researchers within the dynamical systems community as well,” said Michael Lindner, a postdoctoral researcher at PIK.
The two theoretical pillars of their work are differential equations and complex networks.
“By casting models of power grids or brains, for example, in terms of differential equations on networks, we give them a clear underlying structure,” he said. “The network encodes locality, what interacts with what, and the differential equations encode dynamics, how things change with time.”
This enables researchers to obtain state-of-the-art simulation speeds.
“We first compute all the interactions among network components, then the back reactions of individual components to that interaction. This allows us to compute the entire evolution of the system within two easily parallelizable loops,” said Lindner.
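The two-loop structure Lindner describes can be sketched in a few lines. The example below is a generic diffusive-coupling network in Python (an illustration of the idea, not the actual Julia implementation): the first loop computes every edge interaction, and the second lets each node react to its aggregated interactions.

```python
# Diffusive coupling on a triangle network, integrated with explicit Euler.
edges = [(0, 1), (1, 2), (2, 0)]      # network: what interacts with what
state = [1.0, 0.0, -1.0]              # one scalar variable per node
k, dt = 0.5, 0.01                     # coupling strength, time step

for _ in range(1000):
    # Loop 1: compute all interactions among network components (edges).
    flux = [0.0] * len(state)
    for i, j in edges:
        f = k * (state[j] - state[i])
        flux[i] += f
        flux[j] -= f
    # Loop 2: each component reacts to the interactions it received.
    state = [x + dt * f for x, f in zip(state, flux)]

print(state)  # all nodes relax toward the common average, 0.0
```

Both loops touch each edge or node independently, which is why this structure parallelizes so well.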
Since Julia is fast and easy to write and has a library for solving differential equations (DifferentialEquations.jl), researchers can implement and simulate complicated models within one day — rather than the month it used to require with other languages.
“It removes some of the barriers limiting scientific creativity,” Lindner said. “I hadn’t even thought about certain models and important questions before, just because they seemed completely out of reach with my given time constraints and programming skills.”
A good, intuitive interface to high-performance algorithms is “important for science today,” he said, “because they enable scientists to focus on their research questions and models instead of code and implementation details.”
The article, “NetworkDynamics.jl – Composing and simulating complex networks in Julia,” is authored by Michael Lindner, Lucas Lincoln, Fenja Drauschke, Julia M. Koulen, Hans Würfel, Anton Plietzsch, and Frank Hellmann. It will appear in Chaos on June 22, 2021 (DOI: 10.1063/5.0051387). After that date, it can be accessed at https://aip.scitation.org/doi/10.1063/5.0051387.
Featured image: Schematic view of the structure of NetworkDynamics.jl. CREDIT: Michael Lindner, Lucas Lincoln, Fenja Drauschke, Julia M. Koulen, Hans Würfel, Anton Plietzsch, and Frank Hellmann
In recent years, physical reservoir computing*1), one of the new information processing technologies, has attracted much attention. It is a physical implementation of reservoir computing, a learning method derived from recurrent neural network (RNN)*2) theory. Computation is implemented by regarding a physical system as a huge RNN and outsourcing the main operations to the dynamics of the physical system that forms the physical reservoir. Its advantage is that optimization is obtained almost instantaneously with limited computational resources: only the linear, static readout weightings between the physical reservoir and the output are adjusted, with no need to optimize weightings by back propagation. However, since the information processing capability depends on the capacity of the physical reservoir, it is important that this capacity be investigated and optimized. Furthermore, when designing a physical reservoir with high information processing capability, numerical simulation is expected to reduce the experimental cost.

Well-known examples of physical reservoir computing include applications to soft materials, photonics, spintronics, and quantum systems, while in recent years much attention has been paid to waves: neuromorphic devices that simulate functions of the brain by using non-linear waves have been proposed (see references 1-3). The flow of fluids such as water and air is a familiar physical system that shows varied and complicated patterns, and it has therefore been thought to have high information processing capability. However, neither virtual physical reservoir computing using numerical simulation nor investigation of the information processing capability of fluid flow phenomena had been realized, owing to the relatively high numerical computational cost. The relationship between flow vortices and information processing capability therefore remained unknown.
In this study, Prof. Hirofumi Notsu and a graduate student at Kanazawa University, in collaboration with Prof. Kohei Nakajima of the University of Tokyo, investigated fluid flow phenomena as a physical system, especially the well-understood flow that occurs around a cylinder. This physical system is governed by the incompressible Navier-Stokes equations*3), which describe fluid flow and include the Reynolds number*4), a parameter indicative of the system characteristics. The system was implemented virtually in a spatial two-dimensional numerical simulation using the stabilized Lagrange-Galerkin method*5), and the dynamics of flow velocity and pressure at selected points downstream of the cylinder were used as the physical reservoir. The information processing capability was evaluated using the NARMA model*6) (see Figure).
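The readout-only training that defines reservoir computing can be illustrated with a minimal software reservoir. The sketch below is a toy echo state network evaluated on a NARMA-style target (the study used a fluid simulation as the reservoir, not this toy, and the exact NARMA variant is an assumption here): the reservoir dynamics are fixed and random, and only a linear, static readout is fitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# NARMA-2 style benchmark: a target series with memory of past inputs/outputs.
u = rng.uniform(0.0, 0.5, 1200)
y = np.zeros_like(u)
for t in range(2, len(u)):
    y[t] = (0.4 * y[t - 1] + 0.4 * y[t - 1] * y[t - 2]
            + 0.6 * u[t - 1] ** 3 + 0.1)

# A small random recurrent network plays the role of the physical reservoir.
N = 100
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale for the echo state property
w_in = rng.uniform(-1, 1, N)

x = np.zeros(N)
states = np.zeros((len(u), N))
for t in range(len(u)):
    x = np.tanh(W @ x + w_in * u[t])        # reservoir dynamics: never trained
    states[t] = x

# Only the linear readout is fitted (ridge regression) -- no back propagation.
train, test = slice(200, 1000), slice(1000, 1200)
X = states[train]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y[train])

nmse = np.mean((states[test] @ w_out - y[test]) ** 2) / np.var(y[test])
print(f"test NMSE: {nmse:.3f}")
```

A normalized mean squared error well below 1 means the reservoir’s internal dynamics carry the memory needed for the task; in the study, the flow field around the cylinder supplies those dynamics instead of a random matrix.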
It is known that in the flow of fluid around a cylinder, as the Reynolds number increases, the twin vortices formed in the downstream region of the cylinder gradually grow and eventually give way to a Karman vortex street, the alternate shedding of vortices. In this study, it was clarified that the information processing capability is highest at the Reynolds number where the twin vortices are maximal, just before the transition to a Karman vortex street. In other words, before the transition, the information processing capability increases as the twin vortices grow. On the other hand, the echo state property*7), which guarantees the reproducibility of reservoir computing, cannot be maintained once the transition to the Karman vortex street takes place, so the Karman vortex street itself cannot be used for computing.
These findings on the relationship between fluid flow vortices and information processing capability are expected to be useful in future efforts to expand the capability of physical reservoirs using fluid flow, for example in the development of the recently reported wave-based neuromorphic devices. Although the computational cost of simulating fluid flow is relatively high, this study made it possible to handle macroscopic vortices that are physically easy to understand, and clarified the relationship between vortices and information processing capability by virtually implementing physical reservoir computing with spatially two-dimensional numerical simulation. Virtual physical reservoir computing, which had mostly been applied to physical systems described as one-dimensional, has thus been extended to systems with two or more spatial dimensions. The results of this study are expected to enable the virtual investigation of the information processing capabilities of a much wider range of physical systems. In addition, since vortices were revealed to be the key to information processing capability, research on creating and maintaining vortices is expected to be further promoted.
*1) Physical reservoir computing: A physical implementation of reservoir computing, a type of learning method for recurrent neural networks (RNN). The dynamics of a physical system (the physical reservoir) is regarded as a huge RNN that performs the main computation. Because the computation is outsourced to the physical dynamics, optimization can be obtained almost instantaneously with limited computational resources.
*2) Recurrent neural network (RNN): A neural network in which the output of the middle layers is fed back as input to the same or another layer.
*3) Incompressible Navier-Stokes equations: Partial differential equations for the velocity and pressure of a fluid flow in which the material density is constant.
*4) Reynolds number: The Reynolds number (Re) is a dimensionless number representing the ratio of the inertial force to the viscous force of a flow, Re = UL/ν, where U is the flow speed (m/s), L is the characteristic linear dimension (m), and ν is the kinematic viscosity of the fluid (m²/s).
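The formula Re = UL/ν can be checked with a one-line computation. The numbers below are an illustrative example (water past a centimeter-scale cylinder), not values from the study.

```python
def reynolds_number(U, L, nu):
    """Reynolds number Re = U * L / nu (dimensionless).

    U  : flow speed (m/s)
    L  : characteristic linear dimension (m)
    nu : kinematic viscosity (m^2/s)
    """
    return U * L / nu

# Illustrative example: water (nu ~ 1e-6 m^2/s) flowing at 0.01 m/s
# past a cylinder of diameter 0.01 m gives Re on the order of 100.
print(reynolds_number(0.01, 0.01, 1e-6))
```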
*5) Stabilized Lagrange-Galerkin method: A numerical method of finite element type. It is an implicit finite element scheme for the incompressible Navier-Stokes equations in which Lagrangian coordinates are used for the inertial term, piecewise linear elements approximate the flow velocity and pressure, and pressure stabilization is applied. It is characterized by robustness with respect to convection, relatively small numerical diffusion, and a symmetric coefficient matrix.
*6) NARMA model: The NARMA (Nonlinear Autoregressive Moving Average) model is a nonlinear time-series model with inputs. In NARMA2, the next value is determined by the present and previous values and inputs; in NARMA3, it additionally depends on the value and input two steps back.
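As an illustration, a commonly used form of the NARMA2 recurrence (the specific coefficients used in the paper may differ) can be generated as follows; the reservoir's task is then to predict this target series from the input sequence alone.

```python
import numpy as np

def narma2(u):
    """Generate a NARMA2 target series from an input sequence u.

    A commonly used second-order recurrence (coefficients are the
    standard benchmark choice, not necessarily those of the paper):
        y[t+1] = 0.4*y[t] + 0.4*y[t]*y[t-1] + 0.6*u[t]**3 + 0.1
    The next value depends on the present and previous outputs and
    the present input, as described above.
    """
    y = [0.0, 0.0]
    for t in range(1, len(u) - 1):
        y.append(0.4 * y[t] + 0.4 * y[t] * y[t - 1] + 0.6 * u[t] ** 3 + 0.1)
    return np.array(y)

u = np.random.default_rng(0).uniform(0.0, 0.5, 200)  # inputs kept small for stability
y = narma2(u)
print(y[:5])
```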
*7) Echo state property: The property that the current internal state of the reservoir can be expressed as a function of past inputs only (it does not depend on the initial state). As a result, reservoirs with this property eventually synchronize when driven continuously with the same inputs, even if they start from different states.
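The synchronization described in this footnote can be demonstrated numerically with a small software reservoir (a hypothetical tanh network, not the fluid system of the study): two copies started from very different states and driven with the same input sequence end up in nearly identical states. Scaling the spectral radius below 1 is the usual heuristic that makes the echo state property hold in practice for tanh reservoirs, though it is not a strict guarantee in general.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50                                      # illustrative reservoir size
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.8 / max(abs(np.linalg.eigvals(W)))   # spectral radius 0.8 (heuristic for ESP)
w_in = rng.uniform(-1.0, 1.0, N)

def step(x, u):
    """One reservoir update driven by scalar input u."""
    return np.tanh(W @ x + w_in * u)

# Two copies of the same reservoir, very different initial states,
# driven by an identical input sequence.
x1, x2 = rng.normal(0.0, 5.0, N), np.zeros(N)
for _ in range(200):
    u = rng.uniform(-1.0, 1.0)
    x1, x2 = step(x1, u), step(x2, u)

# The distance between the two trajectories shrinks toward zero:
# the states have synchronized, i.e. they depend only on past inputs.
print(np.linalg.norm(x1 - x2))
```

In the study, the loss of this property at the transition to the Karman vortex street is what made that flow regime unusable for computing.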
1. Marcucci, G., Pierangeli, D., Conti, C., Theory of neuromorphic computing by waves: Machine learning by rogue waves, dispersive shocks, and solitons, Phys. Rev. Lett. 125, 093901 (2020).
2. Silva, N. A., Ferreira, T. D., Guerreiro, A., Reservoir computing with solitons, New Journal of Physics 23, 023013 (2021).
3. Hughes, T. W., Williamson, I. A., Minkov, M., Fan, S., Wave physics as an analog recurrent neural network, Science Advances 5, eaay6946 (2019).
Findings suggest new strategies to limit the growth of groups like the Boogaloo and ISIS
Early online support for the Boogaloos, one of the groups implicated in the January 2021 attack on the United States Capitol, followed the same mathematical pattern as ISIS, despite the stark ideological, geographical and cultural differences between their forms of extremism. That’s the conclusion of a new study published today by researchers at the George Washington University.
“This study helps provide a better understanding of the emergence of extremist movements in the U.S. and worldwide,” Neil Johnson, a professor of physics at GW, said. “By identifying hidden common patterns in what seem to be completely unrelated movements, topped with a rigorous mathematical description of how they develop, our findings could help social media platforms disrupt the growth of such extremist groups,” Johnson, who is also a researcher at the GW Institute for Data, Democracy & Politics, added.
The study, published in the journal Scientific Reports, compares the growth of the Boogaloos, a U.S.-based extremist group, to online support for ISIS, a militant terrorist organization based in the Middle East. The Boogaloos are a loosely organized, pro-gun-rights movement preparing for civil war in the U.S. By contrast, ISIS adheres to a specific ideology, a radicalized form of Islam, and is responsible for terrorist attacks across the globe.
Johnson and his team collected data by observing public online communities on social media platforms for both the Boogaloos and ISIS. They found that the evolution of both movements follows a single shockwave mathematical equation.
The findings suggest the need for specific policies aimed at limiting the growth of such extremist movements. The researchers point out that online extremism can lead to real-world violence, such as the attack on the U.S. Capitol, which involved members of the Boogaloo movement and other U.S. extremist groups.
Social media platforms have been struggling to control the growth of online extremism, according to Johnson. They often use a combination of content moderation and active promotion of users who provide counter-messaging. The researchers point out the limitations of both approaches and suggest that new strategies are needed to combat this growing threat.
“One key aspect we identified is how these extremist groups assemble and combine into communities, a quality we call their ‘collective chemistry’,” Yonatan Lupu, an associate professor of political science at GW and co-author on the paper, said. “Despite the sociological and ideological differences in these groups, they share a similar collective chemistry in terms of how communities grow. This knowledge is key to identifying how to slow them down or even prevent them from forming in the first place.”