Category Archives: Maths

Julia Programming Language Tackles Differential Equation Challenges (Computer Science / Maths)

Casting models of a complex system in terms of differential equations on networks allows researchers to use its underlying structure for efficient simulations

Emerging open-source programming language Julia is designed to be fast and easy to use. Since it is particularly suited for numerical applications, such as differential equations, scientists in Germany are using it to explore the challenges involved in transitioning to all-renewable power generation.

Decarbonization implies a radical restructuring of power grids, which are huge complex systems with a wide variety of constraints, uncertainties, and heterogeneities. Power grids will become even more complex in the future, so new computational tools are needed.

In Chaos, from AIP Publishing, Potsdam Institute for Climate Impact Research (PIK) scientists describe a software package they built to enable the simulation of general dynamical systems on complex networks.

They wanted to build an open-source tool — so anyone can verify its software structure and algorithms — to make all state-of-the-art algorithms within Julia’s ecosystem easily accessible to engineers and physicists. Their package, called NetworkDynamics.jl, started out as the computational backend of another one, PowerDynamics.jl.

“We realized our computational backend would be useful to other researchers within the dynamical systems community as well,” said Michael Lindner, a postdoctoral researcher at PIK.

The two theoretical pillars of their work are differential equations and complex networks.

“By casting models of power grids or brains, for example, in terms of differential equations on networks, we give them a clear underlying structure,” he said. “The network encodes locality, what interacts with what, and the differential equations encode dynamics, how things change with time.”

This enables researchers to obtain state-of-the-art simulation speeds.

“We first compute all the interactions among network components, then the back reactions of individual components to that interaction. This allows us to compute the entire evolution of the system within two easily parallelizable loops,” said Lindner.
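The pattern is easy to sketch in a few lines of Julia. The toy below (our illustration of the idea, not NetworkDynamics.jl's actual API) shows the two loops for diffusive coupling on a small ring network:

```julia
# Minimal sketch of the two-loop evaluation pattern described above,
# for diffusive coupling on a toy 4-node network. This is NOT the
# actual NetworkDynamics.jl API, just an illustration of the idea.
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]   # undirected edge list
k = 0.5                                     # coupling strength

function network_rhs!(dx, x, edges, k)
    # Loop 1: compute all interactions along edges (parallelizable over edges).
    flows = [k * (x[j] - x[i]) for (i, j) in edges]

    # Loop 2: accumulate each node's reaction to the incoming interactions
    # (parallelizable over nodes once the flows are known).
    fill!(dx, 0.0)
    for (e, (i, j)) in enumerate(edges)
        dx[i] += flows[e]   # flow enters node i ...
        dx[j] -= flows[e]   # ... and leaves node j (antisymmetric coupling)
    end
    return dx
end

# One explicit Euler step, just to show the pieces fit together.
x = [1.0, 0.0, 0.0, 0.0]
dx = similar(x)
network_rhs!(dx, x, edges, k)
x .+= 0.01 .* dx
println(x)
```

Because the first loop touches only edges and the second only nodes, each can be parallelized on its own, which is what makes the scheme fast.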

Since Julia is fast and easy to write and has a library for solving differential equations (DifferentialEquations.jl), researchers can implement and simulate complicated models within one day — rather than the month it used to require with other languages.
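For a sense of that workflow, here is a minimal example of the DifferentialEquations.jl interface (assuming the package is installed; the model is our toy diffusion on a ring, not one from the paper):

```julia
using DifferentialEquations  # assumes the package is installed

# Diffusion on a 4-node ring, written as an in-place ODE right-hand side.
function diffusion!(du, u, p, t)
    n, k = length(u), p
    for i in 1:n
        left, right = mod1(i - 1, n), mod1(i + 1, n)
        du[i] = k * (u[left] + u[right] - 2u[i])
    end
end

u0 = [1.0, 0.0, 0.0, 0.0]
prob = ODEProblem(diffusion!, u0, (0.0, 10.0), 0.5)
sol = solve(prob, Tsit5())   # a standard non-stiff solver
println(sol(10.0))           # state at t = 10: nearly uniform
```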

“It removes some of the barriers limiting scientific creativity,” Lindner said. “I hadn’t even thought about certain models and important questions before, just because they seemed completely out of reach with my given time constraints and programming skills.”

A good, intuitive interface to high-performance algorithms is “important for science today,” he said, “because they enable scientists to focus on their research questions and models instead of code and implementation details.”

The article, “NetworkDynamics.jl – Composing and simulating complex networks in Julia,” is authored by Michael Lindner, Lucas Lincoln, Fenja Drauschke, Julia M. Koulen, Hans Würfel, Anton Plietzsch, and Frank Hellmann. It will appear in Chaos on June 22, 2021 (DOI: 10.1063/5.0051387). After that date, it can be accessed at https://aip.scitation.org/doi/10.1063/5.0051387.

Featured image: Schematic view of the structure of NetworkDynamics.jl. CREDIT: Michael Lindner, Lucas Lincoln, Fenja Drauschke, Julia M. Koulen, Hans Würfel, Anton Plietzsch, and Frank Hellmann


Provided by AIP Publishing

Vortex, The Key To Information Processing Capability: Virtual Physical Reservoir Computing (Maths)

[Background]

In recent years, physical reservoir computing*1), a new information processing technology, has attracted much attention. It is a physical implementation of reservoir computing, a learning method derived from recurrent neural network (RNN)*2) theory: the physical system is treated as a huge RNN, and the main computation is outsourced to the dynamics of that system, which forms the physical reservoir. Because only the linear, static readout weightings between the physical reservoir and the output are adjusted, with no backpropagation required, optimization is effectively instantaneous and needs only limited computational resources. However, since the information processing capability depends on the capacity of the physical reservoir, it is important to investigate and optimize that capacity. Furthermore, when designing a physical reservoir with high information processing capability, numerical simulation is expected to reduce experimental costs.

Well-known examples of physical reservoir computing include applications to soft materials, photonics, spintronics, and quantum systems, while in recent years much attention has been paid to waves; neuromorphic devices that simulate functions of the brain by using nonlinear waves have been proposed (see references 1-3). Fluid flows of water, air, etc. are familiar physical systems that show varied and complicated patterns, and they have long been thought to have high information processing capability. However, virtual physical reservoir computing based on numerical simulation, and investigation of the information processing capability of fluid flow phenomena, had not been realized because of the relatively high numerical computational cost. The relationship between flow vortices and information processing capability therefore remained unknown.
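To make the "train only the readout" idea concrete, here is a minimal echo-state-network sketch in Julia (our generic illustration using a random RNN as a stand-in for any physical reservoir; it is not the fluid simulation used in this work, and all sizes and scalings are illustrative choices):

```julia
using LinearAlgebra, Random

# The reservoir (a random RNN standing in for a physical system) is left
# untrained; only a linear readout is fit, by ridge regression, with no
# backpropagation.
Random.seed!(1)
N, T = 100, 1000                       # reservoir size, number of time steps
W   = 0.9 .* randn(N, N) ./ sqrt(N)    # fixed random recurrent weights
Win = randn(N)                         # fixed random input weights
u   = rand(T)                          # scalar input sequence
y   = [t == 1 ? 0.0 : u[t-1] for t in 1:T]  # toy target: recall the last input

X = zeros(N, T)                        # collected reservoir states
x = zeros(N)
for t in 1:T
    x .= tanh.(W * x .+ Win .* u[t])   # reservoir dynamics (never trained)
    X[:, t] = x
end

β = 1e-6                               # ridge regularization
Wout = (X * X' + β * I) \ (X * y)      # linear readout: the only fitted part
println("train MSE: ", sum(abs2, X' * Wout .- y) / T)
```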

[Results]

In this study, Prof. Hirofumi Notsu and a graduate student of Kanazawa University, in collaboration with Prof. Kohei Nakajima of the University of Tokyo, investigated fluid flow phenomena as a physical system, in particular the well-understood flow that occurs around a cylinder. This physical system is governed by the incompressible Navier-Stokes equations*3), which describe fluid flow, and is characterized by the Reynolds number*4), a parameter indicative of the system's behavior. The system was implemented virtually by spatial two-dimensional numerical simulation using the stabilized Lagrange-Galerkin method*5), and the dynamics of flow velocity and pressure at selected points in the downstream region of the cylinder were used as the physical reservoir. The information processing capability was evaluated using the NARMA model*6) (see Figure).
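For reference, a common parameterization of the NARMA2 benchmark looks like the following (a sketch assuming the standard coefficients from the reservoir-computing literature; the exact task setup in the paper may differ):

```julia
# One common parameterization of the NARMA2 benchmark task (an assumption;
# the coefficients used in the paper may differ). A reservoir is scored on
# how well it reproduces y from the input sequence u alone.
function narma2(u)
    y = zeros(length(u))
    for t in 2:length(u)-1
        y[t+1] = 0.4y[t] + 0.4y[t]*y[t-1] + 0.6u[t]^3 + 0.1
    end
    return y
end

u = 0.5 .* rand(1000)   # i.i.d. inputs in [0, 0.5], a typical choice
y = narma2(u)
```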

It is known that in the flow around a cylinder, as the Reynolds number increases, the twin vortices formed in the downstream region of the cylinder gradually become larger and eventually give way to a Karman vortex street, the alternate shedding of vortices. This study clarified that the information processing capability is highest at the Reynolds number where the twin vortices are maximal, just before the transition to a Karman vortex street. In other words, before the transition, the information processing capability increases as the twin vortices grow. On the other hand, once the transition to the Karman vortex street takes place, the echo state property*7), which guarantees the reproducibility of reservoir computing, can no longer be maintained, so the Karman vortex street cannot be used for computing.

[Future prospects]

It is expected that these findings on fluid flow vortices and information processing capability will be useful in future efforts to expand the information processing capability of physical reservoirs using fluid flow, e.g., in the development of the recently reported wave-based neuromorphic devices. Although the numerical computational cost of fluid flow phenomena is relatively high, this study handled macroscopic vortices that are physically easy to understand and, by implementing physical reservoir computing virtually with spatial two-dimensional numerical simulation, clarified the relationship between vortices and information processing capability. Virtual physical reservoir computing, which until now had mostly been applied to physical systems describable as one-dimensional, has thus been extended to systems with two or more spatial dimensions. The results should allow the virtual investigation of the information processing capabilities of a far wider range of physical systems. In addition, since vortices are revealed to be the key to information processing capability, research on creating and maintaining vortices is expected to be further promoted.

###

[Glossary]

*1) Physical reservoir computing
A physical implementation of reservoir computing, which is a type of learning method for recurrent neural networks (RNN). The dynamics of a physical system (the physical reservoir) is regarded as a huge RNN that performs the main computation. Because the operations are outsourced to the physical system, optimization can be obtained almost instantaneously with limited computational resources.

*2) Recurrent neural network (RNN)
A recurrent neural network is a neural network in which the output of the middle layers is also the input of itself or another layer.

*3) Incompressible Navier-Stokes equations
Incompressible Navier-Stokes equations are partial differential equations concerning the velocity and pressure of fluid flow in which the material density is constant.

*4) Reynolds number
Reynolds number (Re) is a dimensionless number representing the ratio of the inertial force to the viscous force in a flow: Re = UL/ν, where U is the flow speed (m/s), L is the characteristic linear dimension (m), and ν is the kinematic viscosity of the fluid (m²/s).

*5) Stabilized Lagrange-Galerkin method
A numerical solution method based on the finite element method: an implicit scheme in which Lagrangian coordinates are used for the inertia term, piecewise-linear elements approximate the flow velocity and pressure in the incompressible Navier-Stokes equations, and pressure stabilization is applied. It is characterized by robustness with respect to convection, relatively small numerical diffusion, and a symmetric matrix.

*6) NARMA model
NARMA (Nonlinear Autoregressive Moving Average) model: a nonlinear time series model with inputs. In NARMA2, the next value is determined by the present and previous values of the series and of the input. In NARMA3, it additionally depends on the values two time steps back.

*7) Echo state property
The property that the internal state of the current reservoir is expressed as a function that depends only on past inputs (it does not depend on the initial value). As a result, reservoirs with this property will eventually synchronize if the same inputs are given continuously, even if they start from different states.

Featured image: A: Outline of the study. B: Typical fluid flow at each Reynolds number. C: Inputs along the time sequence and the results of the NARMA2 and NARMA3 models; target values are in black, while values from virtual physical reservoir computing using vortices are in red. D: Error values (normalized mean square errors, NMSE) at each Reynolds number for the NARMA2 and NARMA3 models; the error is minimal at a Reynolds number of around 40. © Kanazawa University


[References]

1. Marcucci, G., Pierangeli, D., Conti, C., Theory of neuromorphic computing by waves: machine learning by rogue waves, dispersive shocks, and solitons, Phys. Rev. Lett., 125, 093901 (2020).
2. Silva, N. A., Ferreira, T. D., Guerreiro, A., Reservoir computing with solitons, New Journal of Physics, 23, 023013 (2021).
3. Hughes, T. W., Williamson, I. A., Minkov, M., Fan, S., Wave physics as an analog recurrent neural network, Science Advances, 5, eaay6946 (2019).


Provided by Kanazawa University

Researchers Shed Light On the Evolution of Extremist Groups (Maths)

Findings suggest new strategies to limit the growth of groups like the Boogaloo and ISIS

Early online support for the Boogaloos, one of the groups implicated in the January 2021 attack on the United States Capitol, followed the same mathematical pattern as ISIS, despite the stark ideological, geographical and cultural differences between their forms of extremism. That’s the conclusion of a new study published today by researchers at the George Washington University.

“This study helps provide a better understanding of the emergence of extremist movements in the U.S. and worldwide,” Neil Johnson, a professor of physics at GW, said. “By identifying hidden common patterns in what seem to be completely unrelated movements, topped with a rigorous mathematical description of how they develop, our findings could help social media platforms disrupt the growth of such extremist groups,” Johnson, who is also a researcher at the GW Institute for Data, Democracy & Politics, added.

The study, published in the journal Scientific Reports, compares the growth of the Boogaloos, a U.S.-based extremist group, to online support for ISIS, a militant, terrorist organization based in the Middle East. The Boogaloos are a loosely organized, pro-gun-rights movement preparing for civil war in the U.S. By contrast, ISIS adheres to a specific ideology, a radicalized form of Islam, and is responsible for terrorist attacks across the globe.

Johnson and his team collected data by observing public online communities on social media platforms for both the Boogaloos and ISIS. They found that the evolution of both movements follows a single shockwave mathematical equation.

The findings suggest the need for specific policies aimed at limiting the growth of such extremist movements. The researchers point out that online extremism can lead to real world violence, such as the attack on the U.S. Capitol, an attack that included members of the Boogaloo movement and other U.S. extremist groups.

Social media platforms have been struggling to control the growth of online extremism, according to Johnson. They often use a combination of content moderation and active promotion of users who are providing counter messaging. The researchers point out the limitations in both approaches and suggest that new strategies are needed to combat this growing threat.

“One key aspect we identified is how these extremist groups assemble and combine into communities, a quality we call their ‘collective chemistry’,” Yonatan Lupu, an associate professor of political science at GW and co-author on the paper, said. “Despite the sociological and ideological differences in these groups, they share a similar collective chemistry in terms of how communities grow. This knowledge is key to identifying how to slow them down or even prevent them from forming in the first place.”

The paper, “Hidden order across online extremist movements can be disrupted by nudging collective chemistry,” appeared May 19 in Scientific Reports.

Featured image credit: Neil Johnson/GW


Provided by George Washington University

How To Thermally Cloak An Object? (Maths)

Can you feel the heat? To a thermal camera, which measures infrared radiation, the heat that we can feel is visible, like the heat of a traveler in an airport with a fever or the cold of a leaky window or door in the winter.

In a paper published in Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, an international group of applied mathematicians and physicists, including Fernando Guevara Vasquez and Trent DeGiovanni from the University of Utah, report a theoretical way of mimicking thermal objects or making objects invisible to thermal measurements. And it doesn’t require a Romulan cloaking device or Harry Potter’s invisibility cloak. The research is funded by the National Science Foundation.

The method allows for fine-tuning of heat transfer even in situations where the temperature changes in time, the researchers say. One application could be to isolate a part that generates heat in a circuit (say, a power supply) to keep it from interfering with heat-sensitive parts (say, a thermal camera). Another application could be in industrial processes that require accurate temperature control in both time and space, for example controlling the cooling of a material so that it crystallizes in a particular manner.

Watch a visualization of how the method cloaks a kite-shaped object here.
Or watch how it works for a Homer Simpson-shaped object here.

Left to right: 1. Temperature of a plate subject to a point source firing at time t=0 (this could be, e.g., a laser pulse). 2. Temperature of the plate with a “kite” object present. As you can see, the isotherms, or temperature contours, are deformed by the presence of the object, and an observer can use this to detect and locate the kite. 3. The kite object surrounded by our active cloak. Now the isotherms look exactly like those in the case where the object is not present, which hides the kite object. © Fernando Guevara Vasquez/University of Utah

Cloaking or invisibility devices have long been elements of fictional stories, but in recent years scientists and engineers have explored how to bring science fiction into reality. One approach, using metamaterials, bends light in such a way as to render an object invisible.

Just as our eyes see objects if they emit or reflect light, a thermal camera can see an object if it emits or reflects infrared radiation. In mathematical terms, an object could become invisible to a thermal camera if heat sources placed around it could mimic heat transfer as if the object wasn’t there.
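In symbols, the idea can be sketched roughly as follows (our paraphrase of the setup described here, not the authors' exact formulation):

```latex
% Our paraphrase of the mimicking condition, not the paper's formulation.
% Let u solve the heat equation with the probing source s and controllable
% sources q supported on a surface surrounding the object:
\[
  \partial_t u \;-\; \nabla \cdot (\kappa \nabla u) \;=\; s + q .
\]
% Cloaking asks for sources q such that, everywhere outside the control
% surface, the field matches the free-space field produced by s alone,
\[
  u(x,t) \;=\; u_{\mathrm{free}}(x,t) ,
\]
% so that thermal measurements carry no trace of the hidden object.
```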

The novelty in the team’s approach is that they use heat pumps rather than specially crafted materials to hide the objects. A simple household example of a heat pump is a refrigerator: to cool groceries it pumps heat from the interior to the exterior. Using heat pumps is much more flexible than using carefully crafted materials, Guevara says. For example, the researchers can make one object or source appear as a completely different object or source. “So at least from the perspective of thermal measurements,” Guevara says, “they can make an apple appear as an orange.”

The researchers carried out the mathematical work needed to show that, with a ring of heat pumps around an object, it’s possible to thermally hide an object or mimic the heat signature of a different object.

The work remains theoretical, Guevara says, and the simulations assume a “probing” point source of heat that would reflect or bend around the object – the thermal equivalent of a flashlight in a dark room.

The temperature of that probing source must be known ahead of time, a drawback of the work. However, the approach is within reach of current technology by using small heat pumps called Peltier elements that transport heat by passing an electrical current across a metal-metal junction. Peltier elements are already widely used in consumer and industrial applications.

The researchers envision their work could be used to accurately control the temperature of an object in space and time, which has applications in protecting electronic circuits. The results, the researchers say, could also be applied to accurate drug delivery, since the mathematics of heat transfer and diffusion are similar to those of the transfer and diffusion of medications. And, they add, the mathematics of how light behaves in diffuse media such as fog could lead to applications in visual cloaking as well.

Find a preprint of the study here.

The full study can be found here.

In addition to Guevara and DeGiovanni, Maxence Cassier, CNRS Researcher at the Fresnel Institute in Marseille, France and Sébastien Guenneau, CNRS researcher, UMI 2004 Abraham de Moivre-CNRS, Imperial College London, London, U.K., co-authored the study.


Provided by University of Utah

Universal Equation For Explosive Phenomena (Maths)

Mathematicians find core mechanism to calculate tipping points

Climate change, a pandemic or the coordinated activity of neurons in the brain: In all of these examples, a transition takes place at a certain point from the base state to a new state. Researchers at the Technical University of Munich (TUM) have discovered a universal mathematical structure at these so-called tipping points. It creates the basis for a better understanding of the behavior of networked systems.

It is an essential question for scientists in every field: How can we predict and influence changes in a networked system? “In biology, one example is the modelling of coordinated neuron activity,” says Christian Kühn, professor of multiscale and stochastic dynamics at TUM. Models of this kind are also used in other disciplines, for example when studying the spread of diseases or climate change.

All critical changes in networked systems have one thing in common: a tipping point where the system makes a transition from a base state to a new state. This may be a smooth shift, where the system can easily return to the base state. Or it can be a sharp, difficult-to-reverse transition where the system state can change abruptly or “explosively.” Transitions of this kind also occur in climate change, for example with the melting of the polar ice caps. In many cases, the transitions result from the variation of a single parameter, such as the rise in concentrations of greenhouse gases behind climate change.

Similar structures in many models

In some cases – such as climate change – a sharp tipping point would have extremely negative effects, while in others it would be desirable. Consequently, researchers have used mathematical models to investigate how the type of transition is influenced by the introduction of new parameters or conditions. “For example, you could vary another parameter, perhaps related to how people change their behavior in a pandemic. Or you might adjust an input in a neural system,” says Kühn. “In these examples and many other cases, we have seen that we can go from a continuous to a discontinuous transition or vice versa.”
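A textbook way to see the two types of transition is through bifurcation normal forms (a standard illustration from dynamical systems, not the equations from the paper):

```latex
% Standard bifurcation normal forms illustrating the two transition types
% (a textbook example, not the paper's equations). Supercritical pitchfork:
% the new equilibria grow continuously from x = 0 as p crosses zero.
\[
  \dot{x} = p\,x - x^{3} \qquad \text{(continuous transition)}
\]
% Subcritical pitchfork with quintic saturation: at the tipping point the
% state jumps to a distant branch and returns along a different path
% (hysteresis), i.e. an "explosive" transition.
\[
  \dot{x} = p\,x + x^{3} - x^{5} \qquad \text{(discontinuous, hysteretic)}
\]
```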

Kühn and Dr. Christian Bick of Vrije Universiteit Amsterdam studied existing models from various disciplines that were created to understand certain systems. “We found it remarkable that so many mathematical structures related to the tipping point looked very similar in those models,” says Bick. “By reducing the problem to the most basic possible equation, we were able to identify a universal mechanism that decides on the type of tipping point and is valid for the greatest possible number of models.”

Universal mathematical tool

The scientists have thus described a new core mechanism that makes it possible to calculate whether a networked system will have a continuous or discontinuous transition. “We provide a mathematical tool that can be applied universally – in other words, in theoretical physics, the climate sciences and in neurobiology and other disciplines – and works independently of the specific case at hand,” says Kühn.

Publications:

Christian Kühn, Christian Bick: A universal route to explosive phenomena, Science Advances, 16 Apr 2021: Vol. 7, no. 16
https://doi.org/10.1126/sciadv.abe3824

More information:

Dr. Christian Bick is a Hans Fischer Fellow at the TUM Institute for Advanced Study (IAS). The IAS brings together the best scientists at TUM with outstanding guest researchers to work together in interdisciplinary focus groups on highly creative and high-risk projects. Bick is also an honorary senior lecturer at the University of Exeter and an OCIAM Visiting Research Fellow at the University of Oxford.

Featured image: At a tipping point, the system state may change slowly or abruptly. Image: Emiliano Arano / Pexels


Provided by TUM

Unsolved Mysteries: Stevens Senior Investigates Unanswered Questions in Pure Math (Maths)

Jonathan Cerqueira’s senior research project aims to shed new light on the 50-year-old Gottschalk surjunctivity conjecture in the study of cellular automata

Everybody loves a good mystery – playing detective to solve a theft, a murder, a kidnapping, whether a cellular automaton defined on the Cayley graph of a group can be injective but not surjective…

What? You haven’t heard of that last one? Then you haven’t talked with Jonathan Cerqueira ’21, whose Stevens Institute of Technology senior research project is focused on deepening our understanding of the Gottschalk surjunctivity conjecture, an abstract mathematical concept first proposed in 1973.

“The surjunctivity conjecture is a major open problem at the intersection of abstract algebra, dynamical systems, and graph theory,” explained Jan Cannizzo, teaching assistant professor in the Department of Mathematical Sciences, and Cerqueira’s project advisor. “Jonathan’s work does not directly tackle this conjecture, but by changing the context of the problem, he’s allowing us to think about it in different ways.”

To infinite graphs, and beyond!

This portion of a Cayley graph shows colors that will eventually evolve into new patterns that form the basis of the mystery Cerqueira’s research aims to help understand. © SIT

Cerqueira is studying cellular automata in a very general setting. These are discrete dynamical systems that can exhibit surprisingly complex behavior. Suppose you take an infinite, highly symmetric network of vertices and edges, assign every vertex one of a handful of colors, and then create a rule for changing the color of each vertex based on the colors of its neighbors. Now apply your rule over and over again. How will the initial coloring evolve?

“Imagine that you have two different colorings of a specific kind of graph known as a Cayley graph,” Cannizzo explained. “Over time, those two colorings evolve. If two distinct colorings always evolve to distinct colorings and never merge into the same state, the system is injective. If there’s a particular coloring that can serve as the initial state but that never appears as a future state, the system is not surjective. And there’s this big open question that asks whether it can be injective, but not surjective. In other words, can the dynamical system defined by such an automaton contain a copy of itself?”
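On a finite toy version, these definitions can be checked by brute force (our simplification in plain Julia; the conjecture itself concerns infinite Cayley graphs, and the finite case behaves differently, as the closing comment notes):

```julia
# Toy illustration of injectivity vs. surjectivity for a cellular automaton
# on a finite cycle of n vertices with k colors (our simplification; the
# surjunctivity conjecture concerns infinite Cayley graphs). Each vertex is
# recolored from its own color and its right neighbor's.
n, k = 4, 2
rule(a, b) = (a + b) % k                        # example local rule (XOR for k = 2)
ca_step(c) = [rule(c[i], c[mod1(i+1, n)]) for i in 1:n]

# Enumerate all k^n colorings and apply one global step to each.
colorings = [digits(m, base=k, pad=n) for m in 0:k^n-1]
images = Set(ca_step.(colorings))

injective  = length(images) == length(colorings)
surjective = length(images) == k^n
println("injective = $injective, surjective = $surjective")

# On any finite state set the two properties coincide (pigeonhole), so
# "injective but not surjective" can only occur on infinite graphs --
# which is exactly what makes the surjunctivity question subtle.
```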

“The answer is ‘yes,’ if we convert the Cayley graph in a particular way,” Cerqueira said. This involves thinking about the underlying graph not as the Cayley graph of a group, but as the so-called Schreier graph of a monoid. “Several infinite families of examples demonstrate this property and, given some assumptions, it can be shown that such automata are abundant. Now I’m working on generalizing this idea.”

Jonathan Cerqueira ’21 is bringing new light to a half-century old abstract mathematical problem that involves infinite, highly symmetric graphs as might be seen in the edges of this colorful background image. © SIT

“Jonathan is modest, but I think he’s discovered a new, large family of automata that exhibits this phenomenon of injectivity but not surjectivity,” Cannizzo added. “It puts the surjunctivity conjecture in context and allows us to think about it differently, and he’s trying to understand how large this class of examples that he’s been able to study actually is.”

‘It’s satisfying to come up with original ideas’

To address this complex problem, Cerqueira relied on a surprisingly simple tool.

“I did a lot of staring at whiteboards,” he said with a laugh. “You stare at the whiteboard for two hours trying to understand the problem. You don’t get anywhere. You walk away from it, and 20 minutes later, you get an idea, and then come back to work on it a bit more. And it’s typical to have what seems like a million ideas, and then most of them fall flat, and you have to try again. I think it’s how a lot of mathematical research works!”

Clearly, the rewards outweigh the challenges.

“It’s satisfying to come up with original ideas,” Cerqueira said. “It’s our hope to get these findings published in a journal. That would be exciting.”

That element of problem-solving is what initially drew Cerqueira to the field of mathematics.

“Math sparked my curiosity and made me want to seek answers,” he recalled. “I’d mess around on Desmos graphing calculators or try to teach myself some calculus. In high school I started to see math as something truly amazing when I discovered Numberphile and other math-related YouTube channels. That’s also when I found out about mathematicians, and the idea that math is an ongoing, evolving field that could be a career path for me.”

Cerqueira soon became attracted to the universality of math, as well as its relatively lower burden to deliver immediate research results.

A Schreier graph © Stevens Institute of Technology

“If some alien beings dropped down to Earth from a different universe, they might have a different sense of aesthetics, a different language, and a different set of scientific laws,” he explained. “Despite that, they might still have studied the math we have, or they’d be able to borrow from our mathematical works far more quickly than our scientific or cultural ones. I also love the lax nature of math studies. With science, medicine, or engineering, you face a fair amount of pressure to demonstrate immediate value for your research direction. With math, people often decide their research directions based on their interests or open questions they want to take a crack at. With pure math, people understand that over the span of a few hundred years, something can go from recreational math, to something studied for its own sake, to something with broad applications, as happened with topology, number theory, and, to a lesser degree, cellular automata.”

Stevens offered Cerqueira the opportunity to immerse himself in that supportive environment while also experiencing the broader range of scientific disciplines.

The famous novel suggests that A Tree Grows in Brooklyn, but Cerqueira grew this mathematical tree – connected graphs that, like trees, never cycle back to their beginnings – in Hoboken, at Stevens Institute of Technology.

“I chose Stevens for its slightly smaller campus, as well as the atmosphere that comes along with an engineering school,” he said. “Everyone’s always doing something, working on a project, or involving themselves in some organization. I’ve been involved in seven clubs and organizations, and I’ve met so many people. I enjoy the liveliness, Hoboken, the history of the school, and most of all, I love the pace of it. The classes, clubs, and everything move fast. It feels like I was a freshman only yesterday.”

For now, Cerqueira’s research continues – both into surjunctivity, and also into which doctoral program he’ll attend this fall.

“Jonathan has made our department very proud, especially because he’s going to go on to pursue his Ph.D. in mathematics right out of his pure and applied mathematics bachelor’s degree, which he’s finishing in only three years,” Cannizzo said. “He was my student in a class I taught on mathematical proofs, and I was impressed with his mathematical skills. When he asked me about doing some research together the following summer, I was happy to agree, and it’s been a great pleasure working with him. By changing the context of the Gottschalk surjunctivity conjecture and pursuing this related, parallel question, he has helped put the conjecture into perspective.”


Provided by Stevens Institute of Technology

Cosmologists Found First Evidence That Non-metricity f(Q) Gravity Can Challenge Lambda-CDM model (Cosmology / Astronomy / Maths)

Although General Relativity (GR) is the well-established theory of the gravitational interaction, two main motivations justify the large amount of research devoted to its modification and extension. The first arises on cosmological grounds: modified gravity is very efficient at describing the universe's two phases of accelerated expansion, and it can moreover alleviate the two possible tensions of ΛCDM cosmology, namely the H0 and σ8 tensions. The second, chronologically older, motivation is purely theoretical and aims at improving the renormalizability of General Relativity, with the further goal of finally arriving at a quantum theory of gravity. Hence, the goal is to construct gravitational theories that possess General Relativity as a particular limit but in general include extra degrees of freedom able to fulfill the above requirements.

Now, Anagnostopoulos and colleagues have proposed a specific model in the framework of the recently constructed f(Q) modified gravity, which they show is very efficient in fitting the cosmological data. In this class of modification, one starts from the so-called symmetric teleparallel theory, an equivalent description of gravity using the non-metricity scalar Q, and extends it to an arbitrary function f(Q). f(Q) gravity leads to interesting applications and trivially passes the constraints arising from gravitational wave observations. By confronting their new model with data from Supernovae type Ia (SNIa), Baryonic Acoustic Oscillations (BAO), Hubble parameter measurements from cosmic chronometers (CC), and Redshift Space Distortion (RSD) fσ8 observations, they deduced that the scenario at hand is, in some cases, statistically better than ΛCDM, although it does not include ΛCDM as a particular limit.

“Nevertheless, confrontation with observations at both background and perturbation levels, namely with Supernovae type Ia (SNIa), Baryonic Acoustic Oscillations (BAO), cosmic chronometers (CC), and Redshift Space Distortion (RSD) data, reveals that the scenario, according to the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Deviance Information Criterion (DIC), is in some cases statistically preferred compared to ΛCDM cosmology.”


For the SNIa + CC datasets, they found that the two models are statistically compatible. For the SNIa + BAO + CC datasets, their f(Q) model is slightly preferred by the data over the ΛCDM one. In contrast, for the SNIa + BAO + RSD datasets, the data slightly disfavor the f(Q) model, although it remains statistically indistinguishable from ΛCDM.

The information criteria AIC, BIC and DIC for the examined cosmological models, alongside the corresponding differences ∆IC ≡ IC – ICmin. © Anagnostopoulos et al.

They also showed that in the large-redshift limit (i.e., at large E²(z) ≡ H²(z)/H₀²) the proposed f(Q) tends to Q, so the scenario tends to GR; hence it trivially passes early-universe constraints, in particular those from Big Bang Nucleosynthesis (BBN). Additionally, using the observational bounds on E²(z) throughout the cosmic evolution together with the model parameter λ, they computed the effective Newton's constant Geff and deduced that |(Geff/G) − 1| remains smaller than 0.1 throughout the evolution, satisfying the observational constraints.
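For concreteness, the limit argument can be sketched as follows; the exponential form below reflects our reading of the preprint and should be checked against arXiv:2104.15123:

```latex
% Sketch of the GR limit (the exponential form is our reading of
% arXiv:2104.15123 and should be checked against the paper). With
% f(Q) = Q exp(lambda Q_0/Q), large Q (early times, large E^2) gives
% f(Q) -> Q, the symmetric teleparallel equivalent of GR:
\[
  f(Q) = Q\, e^{\lambda Q_{0}/Q}
  \quad\Longrightarrow\quad
  f_{Q} \equiv \frac{df}{dQ}
       = e^{\lambda Q_{0}/Q}\Bigl(1 - \frac{\lambda Q_{0}}{Q}\Bigr)
  \;\xrightarrow{\;Q \gg Q_{0}\;}\; 1 .
\]
% In f(Q) gravity the effective Newton constant scales as G_eff = G / f_Q,
% so G_eff/G -> 1 at early times, consistent with the quoted bound
% |G_eff/G - 1| < 0.1.
```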

“In simple terms, the model doesn’t exhibit early dark energy features and thus it immediately passes Big Bang Nucleosynthesis constraints, while the variation of the effective Newton’s constant lies well inside the observational bounds.”


Finally, they concluded that this result motivates further study of the present model, and of f(Q) gravity in general, since it constitutes one of the first alternatives to the concordance model that, apart from being preferred by the data (at least for some datasets), also possesses a Lagrangian description. Further studies of this model, using the full CMB and LSS spectra, weak lensing data, and other datasets, could sharpen these findings and verify whether the present f(Q) model outperforms the concordance one or not.

Featured image: The 1σ and 2σ iso-likelihood contours for the f(Q) model, as well as for the ΛCDM scenario, for the 2D subsets of the parameter space (Ωm0, h, M, rd), using the graphics package getdist. We have used the joint analysis of the SNIa+CC+BAO datasets. © Anagnostopoulos et al.


Reference: Fotios K. Anagnostopoulos, Spyros Basilakos, Emmanuel N. Saridakis, “First evidence that non-metricity f(Q) gravity can challenge ΛCDM”, arXiv preprint, pp. 1-4, 2021. https://arxiv.org/abs/2104.15123



Starlight, Star Bright…as Explained by Math (Maths / Astronomy)

The evolving periodicity of the brightness of certain types of stars can now be described mathematically.

Not all stars shine brightly all the time. Some have a brightness that changes rhythmically due to cyclical phenomena like passing planets or the tug of other stars. Others show a slow change in this periodicity over time that can be difficult to discern or capture mathematically. KAUST’s Soumya Das and Marc Genton have now developed a method to bring this evolving periodicity within the framework of mathematically “cyclostationary” processes.

“It can be difficult to explain the variations of the brightness of variable stars unless they follow a regular pattern over time,” says Das. “In this study we created methods that can explain the evolution of the brightness of a variable star, even if it departs from strict periodicity or constant amplitude.”

Classic cyclostationary processes have an easily definable variation over time, like the sweep of a lighthouse beam or the annual variation in solar irradiance at a given location. Here, “stationary” refers to the constant nature of the periodicity over time and describes highly predictable processes like a rotating shaft or a lighthouse beam. However, when the period or amplitude changes slowly over many cycles, the mathematics for cyclostationary processes fails.

The team applied their method to model the light emitted from the variable star R Hydrae, which exhibited a slowing of its period from 420 to 380 days between 1900 and 1950. © 2021 Morgan Bennett Smith

“We call such a process an evolving period and amplitude cyclostationary, or EPACS, process,” says Das. “Since EPACS processes are more flexible than cyclostationary processes, they can be used to model a wide variety of real-life scenarios.”

Das and Genton modeled the nonstationary period and amplitude by defining them as functions that vary over time. In doing this, they expanded the definition of a cyclostationary process to better describe the relationship among variables, such as the brightness and periodic cycle for a variable star. They then used an iterative approach to refine key parameters in order to fit the model to the observed process.
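A minimal toy version of this modeling idea in Julia might look like the following (our sketch under simple linear-drift assumptions, not Das and Genton's actual estimator):

```julia
# Toy version of the modeling idea (our sketch, not Das & Genton's actual
# estimator): let the amplitude and period drift linearly in time, then
# refine the drift parameters iteratively by least squares.
signal(t, a0, a1, p0, p1) = (a0 + a1*t) * cos(2π * t / (p0 + p1*t))

# Synthetic "observations" from a slowly evolving star-like signal
# (all numbers are made up for illustration).
ts = 0.0:0.5:500.0
truth = (1.0, 0.001, 42.0, -0.008)      # amplitude and period drifts
obs = [signal(t, truth...) + 0.05*randn() for t in ts]

loss(θ) = sum(abs2, signal(t, θ...) - y for (t, y) in zip(ts, obs))

# Crude coordinate-wise refinement, standing in for the paper's
# iterative fitting procedure.
θ = [0.9, 0.0, 40.0, 0.0]
for _ in 1:200, i in 1:4, h in (0.1, -0.1, 0.001, -0.001)
    trial = copy(θ); trial[i] += h
    loss(trial) < loss(θ) && (θ .= trial)
end
println("fitted parameters: ", θ)
```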

“We applied our method to model the light emitted from the variable star R Hydrae, which exhibited a slowing of its period from 420 to 380 days between 1900 and 1950,” says Das. “Our approach showed that R Hydrae has an evolving period and amplitude correlation structure that was not captured in previous work.”

Importantly, because this approach links EPACS processes back to classical cyclostationary theory, fitting an EPACS process makes it possible to use existing methods for cyclostationary processes.

“Our method can also be applied to similar phenomena other than variable stars, such as climatology and environmetrics, and particularly for solar irradiance, which could be useful for predicting energy harvesting in Saudi Arabia,” Das says. 

To learn more about the model designed by KAUST Ph.D. student Soumya Das, check out this video: My PhD in 90 seconds #KAUST_WEP_2021 #SCICOMM_VIDEO_COMPETITION

Featured image: A newly developed method mathematically describes periodic changes in the brightness of stars. The model can also be applied to similar variable phenomena such as climatology and solar irradiance. © 2021 Morgan Bennett Smith


Reference

  1. Das, S. & Genton, M. G. Cyclostationary Processes with Evolving Periods and Amplitudes. IEEE Transactions on Signal Processing 69, 1579-1590 (2021).

Provided by KAUST

What Does 1.5 °C Warming Limit Mean for China? (Maths)

As part of the Paris Agreement, nearly all countries agreed to take steps to limit the average increase in global surface temperature to less than 2 °C, or preferably 1.5 °C, compared with preindustrial levels. Since the Agreement was adopted, however, concerns about global warming suggest that countries should aim for the “preferable” warming limit of 1.5 °C.

What are the implications for China of trying to achieve this lower limit?

Prof. DUAN Hongbo from the University of Chinese Academy of Sciences and Prof. WANG Shouyang from the Academy of Mathematics and Systems Science of the Chinese Academy of Sciences, together with their collaborators, have attempted to answer this question.

Their results were published in Science on April 22, in an article entitled “Assessing China’s efforts to pursue the 1.5°C warming limit.”

The authors used nine different integrated assessment models (IAMs) to make their evaluation of China’s effort to achieve the warming limit of 1.5 °C.

The various models show different emission trajectories for carbon and noncarbon emissions. In the majority of the IAMs, China reaches near-zero or negative carbon emissions by around 2050, with 2050 carbon emissions ranging from −0.13 billion tonnes of CO2 (GtCO2) to 2.34 GtCO2 across models. One highly consistent finding among all models, however, is that the 1.5 °C warming limit requires carbon emissions to decrease sharply after 2020.

The researchers discovered that a steep and early drop in carbon emissions reduces dependency on negative emission technologies (NETs), i.e., technologies that capture and sequester carbon. One implication of this finding is that there is a trade-off between substantial early mitigation of carbon emissions and reliance on NETs, which may have uncertain performance. At the same time, the model showing the lowest carbon emissions by 2050 shows the greatest reliance on carbon capture and storage (CCS) technology—suggesting that NETs have an important role in reducing carbon emissions.

Although carbon emissions were an important focus of the study, the researchers also noted that reducing noncarbon emissions is necessary to stay under the warming limit. Specifically, carbon emissions must be reduced by 90%, CH4 emissions by about 71% and N2O emissions by about 52% to achieve the 1.5 °C goal.

The study showed that mitigation challenges differ across sectors, e.g., industry, residential and commercial, transportation, electricity and “other.” Among these sectors, industry plays a big role in end-use energy consumption. Therefore, substantial changes in industrial energy use must occur to reach deep decarbonization of the entire economy and realization of the given climate goals. Indeed, a highly consistent finding across all models is that the largest proportion of emission reduction will come from a substantial decline in energy consumption.

The study also highlights the importance of replacing fossil fuels with renewables, a strategy that plays the next most important role in emission reduction behind reducing energy consumption. The study suggests that China needs to decrease its fossil energy consumption (as measured by standard coal equivalent, or Gtce) by about 74% in 2050 in comparison with the no policy scenario.

The researchers estimate that achieving the 1.5 °C goal will involve a loss of GDP in 2050 in the range of 2.3% to 10.9%, due to decreased energy consumption and other factors.

The study also noted that China’s recently announced plan to become carbon neutral by 2060 largely accords with the 1.5 °C warming limit; however, achieving the latter goal is more challenging.


Reference

Assessing China’s efforts to pursue the 1.5°C warming limit


Provided by Chinese Academy of Sciences