
A Surprising Discovery: Bats Know the Speed of Sound From Birth (Biology)

A new Tel Aviv University study has revealed, for the first time, that bats know the speed of sound from birth. In order to prove this, the researchers raised bats from the time of their birth in a helium-enriched environment in which the speed of sound is higher than normal. They found that unlike humans, who map the world in units of distance, bats map the world in units of time. What this means is that the bat perceives an insect as being at a distance of nine milliseconds, and not one and a half meters, as was thought until now.

The study was published in PNAS.

In order to determine where things are in space, bats use sonar: they produce sound waves that hit objects and are reflected back. Bats can estimate the position of an object from the time that elapses between the moment the sound wave is emitted and the moment its echo returns. This calculation depends on the speed of sound, which varies with environmental conditions such as air composition and temperature. For example, there can be a difference of almost 10% between the speed of sound at the height of summer, when hot air carries sound waves faster, and in winter. Since the discovery of sonar in bats 80 years ago, researchers have been trying to figure out whether bats acquire the ability to measure the speed of sound over the course of their lifetime or are born with an innate, constant reference.
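The time-to-distance arithmetic behind this can be sketched in a few lines (an illustration only, not the researchers' code; the helium-mix sound speed below is a made-up value for demonstration):

```python
# Illustrative sketch: how an echo delay maps to a perceived distance,
# and why helium-enriched air fools a bat that keeps a fixed reference.

C_NORMAL = 343.0   # speed of sound in normal air, m/s (approx., 20 °C)
C_HELIUM = 430.0   # assumed speed in a helium-enriched mix, m/s (hypothetical value)

def echo_delay(distance_m, c):
    """Round-trip time for a sonar pulse to a target and back."""
    return 2.0 * distance_m / c

def perceived_distance(delay_s, c_assumed):
    """Distance a bat infers from an echo delay, given its internal speed of sound."""
    return c_assumed * delay_s / 2.0

# A target 1.5 m away in normal air returns an echo in roughly 9 ms:
t = echo_delay(1.5, C_NORMAL)
print(f"echo delay: {t * 1e3:.1f} ms")

# In helium-rich air the echo from the same target comes back sooner,
# so a bat keeping its innate reference underestimates the distance:
t_he = echo_delay(1.5, C_HELIUM)
print(f"perceived: {perceived_distance(t_he, C_NORMAL):.2f} m")  # < 1.5 m
```

This is why a bat mapping the world in units of time, with a fixed internal speed of sound, lands short of the target when the actual speed of sound rises.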

Now, researchers led by Prof. Yossi Yovel, head of the Sagol School of Neuroscience and a faculty member of the School of Zoology in the Faculty of Life Sciences, and his former doctoral student Dr. Eran Amichai (currently at Dartmouth College) have succeeded in answering this question. The researchers conducted an experiment in which they manipulated the speed of sound: they enriched the air with helium, which raises it, and under these conditions raised bat pups from the time of their birth, as well as adult bats. Because sound travels faster in helium-enriched air, echoes return sooner, so a bat that keeps its original reference perceives objects as closer than they really are. Neither the adult bats nor the bat pups were able to adjust to the new speed of sound: both consistently landed in front of the target, indicating that they perceived it as being closer and did not adjust their behavior to the higher speed of sound.

Because this occurred both in the adult bats that had learned to fly in normal environmental conditions and in the pups that learned to fly in an environment with a higher-than-normal speed of sound, the researchers concluded that the bats' reference for the speed of sound is innate – they have a constant sense of it. “Because bats need to learn to fly within a short time of their birth,” explains Prof. Yovel, “we hypothesize that an evolutionary ‘choice’ was made to be born with this knowledge in order to save time during the sensitive development period.”

Another interesting conclusion of the study is that bats do not actually calculate the distance to the target according to the speed of sound. Because they do not adjust the speed of sound encoded in their brains, it seems that they also do not translate the time it takes for the sound waves to return into units of distance. Therefore, their spatial perception is actually based on measurements of time and not distance.

Prof. Yossi Yovel: “What most excited me about this study is that we were able to answer a very basic question – we found that in fact bats do not measure distance, but rather time, to orient themselves in space. This may sound like a semantic difference, but I think that it means that their spatial perception is fundamentally different than that of humans and other visual creatures, at least when they rely on sonar. It’s fascinating to see how diverse evolution is in the brain-computing strategies it produces.”

Featured image: Prof. Yossi Yovel © Tel Aviv University

Reference: Eran Amichai, Yossi Yovel, “Echolocating bats rely on an innate speed-of-sound reference”, Proceedings of the National Academy of Sciences May 2021, 118 (19) e2024352118; https://doi.org/10.1073/pnas.2024352118

Provided by Tel Aviv University

The Science of Sound, Vibration to Better Diagnose, Treat Brain Diseases (Neuroscience)

Multidisciplinary Researchers Uncover New Ways to Use Ultrasound Energy to Image and Treat Hard-to-reach Areas of Brain

A team of engineering researchers at the Georgia Institute of Technology hopes to uncover new ways to diagnose and treat brain ailments, from tumors and stroke to Parkinson’s disease, leveraging vibrations and ultrasound waves. 

The five-year, $2 million National Science Foundation (NSF) project initiated in 2019 already has resulted in several published journal articles that offer promising new methods to focus ultrasound waves through the skull, which could lead to broader use of ultrasound imaging — considered safer and less expensive than magnetic resonance imaging (MRI) technology.  

Specifically, the team is researching a broad range of frequencies, from low-frequency vibrations (the audio range) through moderate-frequency guided waves (100 kHz to 1 MHz) to the high frequencies employed in brain imaging and therapy (the MHz range).

“We’re coming up with a unique framework that incorporates different research perspectives to address how you use sound and vibration to treat and diagnose brain diseases,” explained Costas Arvanitis, an assistant professor in Georgia Tech’s George W. Woodruff School of Mechanical Engineering and the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University. “Each researcher is bringing their own expertise to explore how vibrations and waves across a range of frequencies could either extract information from the brain or focus energy on the brain.”

Accessing the Brain Is a Tough Challenge

While it is possible to treat some tumors and other brain diseases non-invasively if they are near the center of the brain, many other conditions are harder to access, the researchers say. 

“The center part of the brain is most accessible; however, even if you are able to target the part of the brain away from the center, you still have to go through the skull,” Arvanitis said.

He added that moving just 1 millimeter in the brain constitutes “a huge distance” from a diagnostic perspective. The brain's complexity is widely acknowledged: each part is associated with a different function, and brain cells differ from one another.

According to Brooks Lindsey, a biomedical engineering assistant professor at Georgia Tech and Emory, there is a reason why brain imaging or therapy works well in some people but not in others. 

“It depends on the individual patient’s skull characteristics,” he said, noting that some people have slightly more trabecular bone (the spongy, porous part of the bone), which makes it more difficult to treat. 

Using ultrasound waves, the researchers are tackling the challenge on multiple levels. Lindsey’s lab uses ultrasound imaging to assess skull properties for effective imaging and therapy. He said his team conducted the first investigation that uses ultrasound imaging to measure the effects of bone microstructure — specifically, the degree of porosity in the inner, trabecular bone layer of the skull.

“By understanding transmission of acoustic waves through microstructure in an individual’s skull, non-invasive ultrasound imaging of the brain and delivery of therapy could be possible in a greater number of people,” he said, explaining one potential application would be to image blood flow in the brain following a stroke.

Refocusing Ultrasound Beams on the Fly   

Arvanitis’ lab recently found a new way to focus ultrasound through the skull and into the brain, which is “100-fold faster than any other method,” Arvanitis said. His team’s work in adaptive focusing techniques would allow clinicians to adjust the ultrasound on the fly to focus it better.

Georgia Tech researchers align the electrodynamic exciter and Laser Doppler Vibrometer setup for vibration experiments. (Photo credit: Allison Carter, Georgia Tech)

“Current systems rely a lot on MRIs, which are big, bulky, and extremely expensive,” he said. “This method lets you adapt and refocus the beam. In the future this could allow us to design less costly, simpler systems, which would make the technology available to a wider population, as well as be able to treat different parts of the brain.”

Using ‘Guided Waves’ to Access Periphery Brain Areas

Another research cohort, led by Alper Erturk, Woodruff Professor of Mechanical Engineering at Georgia Tech, and former Georgia Tech colleague Massimo Ruzzene, Slade Professor of Mechanical Engineering at the University of Colorado Boulder, performs high-fidelity modeling of skull bone mechanics along with vibration-based elastic parameter identification. They also leverage guided ultrasonic waves in the skull to expand the treatment envelope in the brain. Erturk and Ruzzene are mechanical engineers by background, which makes their exploration of vibrations and guided waves in difficult-to-reach brain areas especially fascinating. 

Erturk noted that guided waves are used in other applications such as aerospace and civil structures for damage detection. “Accurate modeling of the complex bone geometry and microstructure, combined with rigorous experiments for parameter identification, is crucial for a fundamental understanding to expand the accessible region of the brain,” he said. 

Ruzzene compared the brain and skull to the Earth’s core and crust, with the cranial guided waves acting as an earthquake. Just as geophysicists use earthquake data on the Earth’s surface to understand the Earth’s core, so are Erturk and Ruzzene using the guided waves to generate tiny, high frequency “earthquakes” on the external surface of the skull to characterize what comprises the cranial bone.

Trying to access the brain periphery via conventional ultrasound poses added risks from the skull heating up. Fortunately, advances such as cranial leaky Lamb waves increasingly are recognized for transmitting wave energy to that region of the brain.

These cranial guided waves could complement focused ultrasound applications to monitor changes in the cranial bone marrow from health disorders, or to efficiently transmit acoustic signals through the skull barrier, which could help access metastases and treat neurological conditions in currently inaccessible regions of the brain.

(L to R)  Multidisciplinary researchers Alper Erturk, Costas Arvanitis and Brooks Lindsey hope their work will make full brain imaging feasible while stimulating new medical imaging and therapy techniques. (Photo credit: Allison Carter, Georgia Tech)

Ultimately, the four researchers hope their work will make full brain imaging feasible while stimulating new medical imaging and therapy techniques. In addition to transforming diagnosis and treatment of brain diseases, the techniques could better detect traumas and skull-related defects, map brain function, and enable neurostimulation. Researchers also see potential in ultrasound-based blood-brain barrier opening for drug delivery, to manage and treat diseases such as Alzheimer’s.

With this comprehensive research of the skull-brain system, and by understanding the fundamentals of transcranial ultrasound, the team hopes to make the technology applicable to more diseases and able to target many more parts of the brain. 

This work is funded by the National Science Foundation (CMMI Award 1933158 “Coupling Skull-Brain Vibroacoustics and Ultrasound Toward Enhanced Imaging, Diagnosis, and Therapy”). 

Featured image: Graduate research assistants Eetu Kohtanen and Pradosh Dash and postdoctoral researchers Christopher Sugino and Bowen Jing test a human skull to measure and characterize its vibration response. (Photo credit: Allison Carter, Georgia Tech)

References: (1) C. Sugino, M. Ruzzene, and A. Erturk, “Experimental and Computational Investigation of Guided Waves in a Human Skull.” (Ultrasound in Medicine and Biology, 2021) https://doi.org/10.1016/j.ultrasmedbio.2020.11.019 (2) M. Mazzotti, E. Kohtanen, A. Erturk, and M. Ruzzene, “Radiation Characteristics of Cranial Leaky Lamb Waves.” (IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 2021) https://doi.org/10.1109/TUFFC.2021.3057309 (3) S. Schoen, C. Arvanitis, “Heterogeneous Angular Spectrum Method for Trans-Skull Imaging and Focusing.” (IEEE Xplore, 2020) https://ieeexplore.ieee.org/document/8902167  (4) B. Jing, C. Arvanitis, B. Lindsey, “Effect of Incidence Angle and Wave Mode Conversion on Transcranial Ultrafast Doppler Imaging.” (IEEE Xplore, 2020)   https://ieeexplore.ieee.org/document/9251477 

Provided by Georgia Tech

Adding Or Subtracting Single Quanta of Sound (Quantum)

Researchers perform experiments that can add or subtract a single quantum of sound, with surprising results when applied to noisy sound fields.

Quantum mechanics tells us that physical objects can have both wave and particle properties. For instance, a single particle—or quantum—of light is known as a photon, and, in a similar fashion, a single quantum of sound is known as a phonon, which can be thought of as the smallest unit of sound energy.

Detecting a single photon gives us an event-ready signal that we have subtracted a single phonon.

— Georg Enzian

A team of researchers spanning Imperial College London, University of Oxford, the Niels Bohr Institute, University of Bath, and the Australian National University have performed an experiment that can add or subtract a single phonon to a high-frequency sound field using interactions with laser light.

The team’s findings aid the development of future quantum technologies, such as hardware components in a future ‘quantum internet’, and help pave the way for tests of quantum mechanics on a more macroscopic scale. The details of their research are published today in the prestigious journal Physical Review Letters.

To add or subtract a single quantum of sound, the team experimentally implement a technique proposed in 2013 that exploits correlations between photons and phonons created inside a resonator. More specifically, laser light is injected into a crystalline microresonator that supports both the light and the high-frequency sound waves.

The two types of waves then couple to one another via an electromagnetic interaction that creates light at a new frequency. Then, to subtract a single phonon, the team detect a single photon that has been up-shifted in frequency. “Detecting a single photon gives us an event-ready signal that we have subtracted a single phonon,” says lead author of the project Georg Enzian.

When the experiment is performed at a finite temperature, the sound field has random fluctuations from thermal noise. Thus, at any one time, the exact number of sound quanta present is unknown but on average there will be n phonons initially.

What happens now when you add or subtract a single phonon? At first thought, you may expect this would simply change the average to n + 1 or n – 1, respectively; however, the actual outcome defies this intuition. Indeed, quite counterintuitively, when you subtract a single phonon, the average number of phonons actually goes up to 2n.

This surprising doubling of the mean number of quanta has previously been observed in all-optical photon-subtraction experiments, and is observed here for the first time outside of optics.
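The doubling can be checked with a short Monte Carlo sketch (an illustration of the thermal-state statistics, not the authors' analysis): draw phonon numbers from a thermal (geometric) distribution, herald a subtraction with probability proportional to n, and average the post-selected outcomes.

```python
import random

# Heralded single-phonon subtraction from a thermal state doubles the
# mean occupation, because the herald post-selects phonon-rich draws.

random.seed(1)
N_BAR = 3.0          # initial mean phonon number (arbitrary choice)
SAMPLES = 200_000

def thermal_sample(n_bar):
    """Draw a phonon number from a thermal (geometric) distribution."""
    # P(n) = n_bar**n / (1 + n_bar)**(n + 1); geometric with p = 1/(1 + n_bar)
    p = 1.0 / (1.0 + n_bar)
    n = 0
    while random.random() > p:
        n += 1
    return n

# Subtraction succeeds with probability proportional to n, and one
# phonon is removed on success (50 keeps the weight below 1 in practice).
post = []
for _ in range(SAMPLES):
    n = thermal_sample(N_BAR)
    if random.random() < n / 50.0:
        post.append(n - 1)

print(f"mean after subtraction: {sum(post) / len(post):.2f}")  # close to 2 * N_BAR = 6
```

Analytically, the post-selected mean is (E[n²] − E[n]) / E[n] = 2n̄ for a thermal state, exactly the factor of two the experiment reports.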

The team’s experiment can be thought of as a quantum version of a ‘claw machine’, where light acts as a claw, and the balls are quanta of sound © Imperial College London

“One way to think of the experiment is to imagine a claw machine that you often see in video arcades, except that you can’t see how many toys there are inside the machine. Before you agree to play, you’ve been told that on average there are n toys inside but the exact number changes randomly each time you play. Then, immediately after a successful grab with the claw, the average number of toys actually goes up to 2n,” describes Michael Vanner, Principal Investigator of the Quantum Measurement Lab at Imperial College London.

It’s important to note that this result certainly does not violate energy conservation and comes about due to the statistics of thermal phonons.

The team’s results, combined with their recent experiment that reported strong coupling between light and sound in a microresonator, open a new path for quantum science and technology with sound waves.

Featured image credit: StarLine/Shutterstock

Reference: ‘Single-phonon addition and subtraction to a mechanical thermal state’ by G. Enzian, J. J. Price, L. Freisem, J. Nunn, J. Janousek, B. C. Buchler, P. K. Lam, and M. R. Vanner is published in Physical Review Letters, 2021.

Provided by Imperial College London

Perfect Transmission Through Barrier Using Sound: Zhang’s Team Proves For The First Time a Century-old Quantum Theory

The perfect transmission of sound through a barrier is difficult to achieve, if not impossible, based on our existing knowledge. The same is true of other energy forms such as light and heat.

The phononic crystals are made by artificially placing the acrylic posts in the special pattern. © University of Hong Kong

A research team led by Professor Xiang Zhang, President of the University of Hong Kong (HKU), who conducted the work while a professor at the University of California, Berkeley (UC Berkeley), has for the first time experimentally proved a century-old quantum theory that relativistic particles can pass through a barrier with 100% transmission. The research findings have been published in the top academic journal Science.

Just as it is difficult for us to jump over a thick, high wall without enough accumulated energy, classical physics forbids a particle from crossing a barrier taller than its energy. In contrast, a microscopic particle in the quantum world is predicted to pass through a barrier well beyond its energy, regardless of the height or width of the barrier, as if the barrier were “transparent”.

As early as 1929, theoretical physicist Oskar Klein proposed that a relativistic particle can penetrate a potential barrier with 100% transmission upon normal incidence on the barrier. Scientists called this exotic and counterintuitive phenomenon “Klein tunneling”. In the roughly 90 years that followed, scientists tried various approaches to test Klein tunneling experimentally, but the attempts were unsuccessful and direct experimental evidence was still lacking.
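The angular dependence of Klein tunneling can be illustrated with the textbook chiral-tunneling result for a massless Dirac particle crossing a high barrier of width D (the graphene-style formula, not the phononic crystal's exact expression): T(φ) = cos²φ / (1 − cos²(qₓD) sin²φ), which equals 1 at normal incidence for any barrier height or width.

```python
import math

# Sketch of the high-barrier chiral-tunneling transmission for a
# massless Dirac quasiparticle. At normal incidence (phi = 0) both
# numerator and denominator equal 1, so T = 1: Klein tunneling.

def klein_transmission(phi_rad, qx_D):
    """T(phi) = cos^2(phi) / (1 - cos^2(qx*D) * sin^2(phi))."""
    c2 = math.cos(phi_rad) ** 2
    s2 = math.sin(phi_rad) ** 2
    return c2 / (1.0 - math.cos(qx_D) ** 2 * s2)

print(klein_transmission(0.0, 2.5))               # exactly 1.0 at normal incidence
print(klein_transmission(math.radians(30), 2.5))  # drops below 1 at oblique incidence
```

The qₓD value here is arbitrary; the key point is that perfect transmission at φ = 0 is independent of it.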

Professor Zhang’s team conducted the experiment in artificially designed phononic crystals with triangular lattice. The lattice’s linear dispersion properties make it possible to mimic the relativistic Dirac quasiparticle by sound excitation, which led to the successful experimental observation of Klein tunneling.

Experimental setup: the artificial phononic crystals are designed and fabricated by the research team. Sound emitted from the loudspeakers array normally propagates from the right and excites the relativistic quasiparticle inside the phononic crystals. A mini microphone is attached to a 3D motion motor to scan the pressure field © University of Hong Kong

“This is an exciting discovery. Quantum physicists have always tried to observe Klein tunneling in elementary particle experiments, but it is a very difficult task. We designed a phononic crystal similar to graphene that can excite the relativistic quasiparticles, but unlike the natural material graphene, the geometry of the man-made phononic crystal can be adjusted freely to precisely achieve the ideal conditions, which made possible the first direct observation of Klein tunneling,” said Professor Zhang.

The achievement not only represents a breakthrough in fundamental physics, but also presents a new platform for exploring emerging macroscale systems to be used in applications such as on-chip logic devices for sound manipulation, acoustic signal processing, and sound energy harvesting.

“In current acoustic communications, the transmission loss of acoustic energy at the interface is unavoidable. If the transmittance at the interface can be increased to nearly 100%, the efficiency of acoustic communications can be greatly improved, thus opening up cutting-edge applications. This is especially important when a surface or interface hinders accurate acoustic detection, such as in underwater exploration. The experimental measurement is also conducive to the future study of quasiparticles with topological properties in phononic crystals, which might be difficult to perform in other systems,” said Dr. Xue Jiang, a former member of Zhang’s team and currently an Associate Researcher at the Department of Electronic Engineering at Fudan University.

Dr. Jiang pointed out that the research findings might also benefit biomedical devices. They may help improve the accuracy of ultrasound penetration through obstacles to reach designated targets such as tissues or organs, improving ultrasound precision for better diagnosis and treatment.

On the basis of the current experiments, researchers can control the mass and dispersion of the quasiparticle by exciting the phononic crystals at different frequencies, achieving flexible experimental configuration and on/off control of Klein tunneling. This approach can be extended to other artificial structures for the study of optics and thermotics. It allows unprecedented control of quasiparticles and wavefronts, and contributes to the exploration of other complex quantum physical phenomena.

The article was published in Science: https://science.sciencemag.org/content/370/6523/1447.

Reference: Xue Jiang, Chengzhi Shi, Zhenglu Li, Siqi Wang, Yuan Wang, Sui Yang, Steven G. Louie, Xiang Zhang, “Direct observation of Klein tunneling in phononic crystals”, Science 18 Dec 2020: Vol. 370, Issue 6523, pp. 1447-1450. DOI: 10.1126/science.abe2011

Provided by University of Hong Kong

How Does the Inner Crust of a Neutron Star Affect Its Radius? (Astronomy)

Lopes and colleagues studied how the small contribution of the inner crust to the total equation of state (EoS) of a neutron star affects its mass-radius relation, focusing on the canonical mass of 1.4 M⊙.

A neutron star can be divided into four distinct regions: the outer crust, inner crust, outer core and inner core.

Graphical abstract © Nature

The outer crust is the region between 10⁻¹⁴ fm⁻³ ≲ n ≲ 10⁻⁴ fm⁻³, where in the ground state of nuclear matter all neutrons are bound in nuclei, which form a perfect crystal with a single nuclear species (N neutrons, Z protons) at the lattice sites.

The inner crust is the region spanning roughly 10⁻⁴ fm⁻³ ≲ n ≲ 10⁻¹ fm⁻³. Here, very neutron-rich nuclei are immersed in a gas of dripped neutrons.

If the density is high enough (around 0.06 – 0.1 fm⁻³), the surface and Coulomb contributions can be ignored and the matter can be approximated as an infinite, uniform plasma of interacting protons, neutrons and free electrons (and muons, if the electron Fermi energy is high enough) in chemical equilibrium. This is the outer core. Therein lies a very special point, the nuclear saturation density n₀ (0.148 – 0.170 fm⁻³), above which the nuclear forces become repulsive instead of attractive.

The region with n > 2n₀ is called the inner core. At such densities, new and exotic degrees of freedom can be present.

It is well accepted that the symmetry energy slope at the saturation density – which corresponds to the outer core region – is mainly responsible for controlling neutron star radii. Although some studies suggest that this cannot be the whole story, it is undeniable that the symmetry energy slope plays a significant role.

In the present work, however, Lopes and colleagues explore another region of the neutron star: the inner crust. Instead of building a model for it, they study only its behaviour, using an empirical parametrization for the EoS, p(ε) = K·ε^γ + b, in the range 0.003 fm⁻³ < n < 0.08 fm⁻³, where they vary the value of γ and determine the constants K and b so as to keep the EoS continuous. To gain physical insight, they also calculated the speed of sound in the inner crust. They showed that although the EoS is monotonically increasing for all values of γ, the speed of sound behaves very differently, yielding different values for the radius of the canonical-mass star.
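The link between the parametrization and the speed of sound follows directly from differentiating it (a sketch with hypothetical numbers, not the values fitted by Lopes et al.): for p(ε) = K·ε^γ + b, the squared speed of sound is c_s² = dp/dε = K·γ·ε^(γ−1).

```python
# Illustrative sketch: how the exponent gamma shapes the inner-crust
# stiffness profile even when the pressure itself rises monotonically.
# K and b would be fixed by continuity with the neighboring EoS segments;
# the values below are hypothetical.

def pressure(eps, K, gamma, b):
    """Empirical inner-crust EoS: p = K * eps**gamma + b."""
    return K * eps ** gamma + b

def sound_speed_sq(eps, K, gamma):
    """c_s^2 = dp/d(eps) = K * gamma * eps**(gamma - 1), in units of c^2."""
    return K * gamma * eps ** (gamma - 1)

K, b = 0.05, 0.0  # hypothetical normalization
for gamma in (1.0, 2.0, 3.0):
    profile = [sound_speed_sq(eps, K, gamma) for eps in (0.01, 0.04, 0.08)]
    print(f"gamma = {gamma}: c_s^2 profile = {profile}")
```

For γ = 1 the speed of sound is flat across the crust; for larger γ it grows with density, which is the kind of distinct behaviour the study connects to kilometre-scale differences in the canonical radius.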

FIG. 1. (Colour online) Square of the speed of sound in the parametrized inner crust for different values of γ.
FIG. 2. (Colour online) Mass-radius relations for different values of γ. The light blue (yellow) hatched region corresponds to the credibility interval of 68% (95%).

They found that the different behaviours of the speed of sound can affect the radius of the canonical star by more than 1.1 km. They concluded that this result can help us understand extreme results such as GW170817, for which some studies indicate that the radius of the canonical star cannot exceed 11.9 km.

Reference: Luiz L. Lopes, “The neutron star inner crust: an empirical essay”, arXiv:2012.05277, pp. 1-5, 2020. https://arxiv.org/abs/2012.05277v1

Copyright of this article totally belongs to our experienced researcher S. Aman. One is allowed to use it only by proper credit either to him or to us.

Stanford Engineers Combine Light and Sound to See Underwater (Engineering)

The “Photoacoustic Airborne Sonar System” could be installed beneath drones to enable aerial underwater surveys and high-resolution mapping of the deep ocean.

An artist’s rendition of the photoacoustic airborne sonar system operating from a drone to sense and image underwater objects. ©Kindea Labs

Stanford University engineers have developed an airborne method for imaging underwater objects by combining light and sound to break through the seemingly impassable barrier at the interface of air and water.

The researchers envision their hybrid optical-acoustic system one day being used to conduct drone-based biological marine surveys from the air, carry out large-scale aerial searches of sunken ships and planes, and map the ocean depths with a similar speed and level of detail as Earth’s landscapes. Their “Photoacoustic Airborne Sonar System” is detailed in a recent study published in the journal IEEE Access.

“Airborne and spaceborne radar and laser-based, or LIDAR, systems have been able to map Earth’s landscapes for decades. Radar signals are even able to penetrate cloud coverage and canopy coverage. However, seawater is much too absorptive for imaging into the water,” said study leader Amin Arbabian, an associate professor of electrical engineering in Stanford’s School of Engineering. “Our goal is to develop a more robust system which can image even through murky water.”

Energy loss

Oceans cover about 70 percent of the Earth’s surface, yet only a small fraction of their depths have been subjected to high-resolution imaging and mapping.

The main barrier has to do with physics: Sound waves, for example, cannot pass from air into water or vice versa without losing most – more than 99.9 percent – of their energy through reflection at the boundary between the two media. A system that tries to see underwater using sound waves traveling from air into water and back into air suffers this energy loss twice – resulting in a 99.9999 percent energy reduction.
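These loss figures follow from the textbook intensity-transmission coefficient at an impedance-mismatched interface (a back-of-the-envelope check with approximate impedance values, not the paper's analysis):

```python
# Fraction of acoustic intensity transmitted across an interface at
# normal incidence: T = 4 * Z1 * Z2 / (Z1 + Z2)**2, driven by the
# mismatch in characteristic acoustic impedance.

Z_AIR = 415.0      # characteristic impedance of air, kg/(m^2 s) (approx.)
Z_WATER = 1.48e6   # characteristic impedance of water (approx.)

def transmission(z1, z2):
    """Transmitted intensity fraction at a planar interface, normal incidence."""
    return 4.0 * z1 * z2 / (z1 + z2) ** 2

t_once = transmission(Z_AIR, Z_WATER)
print(f"one crossing: {t_once:.4%} transmitted")    # ~0.11%, i.e. >99.8% lost
print(f"round trip:  {t_once ** 2:.6%} transmitted")  # ~0.0001%, the double loss
```

Because the coefficient is symmetric in Z1 and Z2, the loss is the same in either direction, which is why crossing the interface twice squares the tiny transmitted fraction.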

Similarly, electromagnetic radiation – an umbrella term that includes light, microwave and radar signals – also loses energy when passing from one physical medium into another, although the mechanism is different than for sound. “Light also loses some energy from reflection, but the bulk of the energy loss is due to absorption by the water,” explained study first author Aidan Fitzpatrick, a Stanford graduate student in electrical engineering. Incidentally, this absorption is also the reason why sunlight can’t penetrate to the ocean depth and why your smartphone – which relies on cellular signals, a form of electromagnetic radiation – can’t receive calls underwater.

The upshot of all of this is that oceans can’t be mapped from the air and from space in the same way that the land can. To date, most underwater mapping has been achieved by attaching sonar systems to ships that trawl a given region of interest. But this technique is slow and costly, and inefficient for covering large areas.

An invisible jigsaw puzzle

Enter the Photoacoustic Airborne Sonar System (PASS), which combines light and sound to break through the air-water interface. The idea for it stemmed from another project that used microwaves to perform “non-contact” imaging and characterization of underground plant roots. Some of PASS’s instruments were initially designed for that purpose in collaboration with the lab of Stanford electrical engineering professor Butrus Khuri-Yakub.

An animation showing the 3D image of the submerged object recreated using reflected ultrasound waves. © Aidan Fitzpatrick

At its heart, PASS plays to the individual strengths of light and sound. “If we can use light in the air, where light travels well, and sound in the water, where sound travels well, we can get the best of both worlds,” Fitzpatrick said.

To do this, the system first fires a laser from the air that gets absorbed at the water surface. When the laser is absorbed, it generates ultrasound waves that propagate down through the water column and reflect off underwater objects before racing back toward the surface.

The returning sound waves are still sapped of most of their energy when they breach the water surface, but by generating the sound waves underwater with lasers, the researchers can prevent the energy loss from happening twice.

“We have developed a system that is sensitive enough to compensate for a loss of this magnitude and still allow for signal detection and imaging,” Arbabian said.

The reflected ultrasound waves are recorded by instruments called transducers. Software is then used to piece the acoustic signals back together like an invisible jigsaw puzzle and reconstruct a three-dimensional image of the submerged feature or object.

“Similar to how light refracts or ‘bends’ when it passes through water or any medium denser than air, ultrasound also refracts,” Arbabian explained. “Our image reconstruction algorithms correct for this bending that occurs when the ultrasound waves pass from the water into the air.”
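The bending Arbabian describes can be illustrated with the acoustic form of Snell's law (a sketch under assumed textbook sound speeds, not the team's reconstruction algorithm):

```python
import math

# Acoustic Snell's law: sin(theta_out) / c_out = sin(theta_in) / c_in.
# Ultrasound leaving the water for the slower air medium bends toward
# the surface normal.

C_WATER = 1480.0  # speed of sound in water, m/s (approx.)
C_AIR = 343.0     # speed of sound in air, m/s (approx.)

def refracted_angle(theta_in_deg, c_in, c_out):
    """Transmitted ray angle in degrees, or None if totally reflected."""
    s = math.sin(math.radians(theta_in_deg)) * c_out / c_in
    if abs(s) > 1.0:
        return None  # total internal reflection, no transmitted ray
    return math.degrees(math.asin(s))

# A ray hitting the surface from below at 10 degrees exits at ~2.3 degrees:
print(f"{refracted_angle(10.0, C_WATER, C_AIR):.2f} degrees in air")
```

A reconstruction algorithm has to undo exactly this kind of angle change for every ray before the acoustic "jigsaw pieces" line up into a correct 3D image.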

Drone ocean surveys

Conventional sonar systems can penetrate to depths of hundreds to thousands of meters, and the researchers expect their system will eventually be able to reach similar depths.

To date, PASS has only been tested in the lab, in a container the size of a large fish tank. “Current experiments use static water, but we are currently working toward dealing with water waves,” Fitzpatrick said. “This is a challenging problem, but we think it is feasible.”

The next step, the researchers say, will be to conduct tests in a larger setting and, eventually, an open-water environment.

“Our vision for this technology is on-board a helicopter or drone,” Fitzpatrick said. “We expect the system to be able to fly at tens of meters above the water.”


References: A. Fitzpatrick, A. Singhvi and A. Arbabian, “An Airborne Sonar System for Underwater Remote Sensing and Imaging,” in IEEE Access, vol. 8, pp. 189945-189959, 2020.
doi: 10.1109/ACCESS.2020.3031808 URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9228880&isnumber=8948470

Provided by Stanford School of Engineering

Minor Fluctuations In Sound Make It Hard To Identify In Which Concert Hall Music Is Played (Engineering)

Study shows music volume has a major impact on how the listener experiences the acoustics of a concert hall.

The volume and timbre of music have a significant impact on how people perceive the acoustics in a concert hall, according to two recent studies carried out by the research group of Aalto University Professor Tapio Lokki. Both have been published in the Journal of the Acoustical Society of America, one of the most prestigious journals in its field.

Speaker orchestra. ©Photo: Jukka Patynen.

The first study demonstrated that, based on the music alone, it is difficult to distinguish which concert hall a piece of music is being played in. The test subjects listened to recordings of a single violin and of part of Beethoven’s Seventh Symphony, all played in four different concert halls: the rectangular Concertgebouw in Amsterdam and Herkulessaal in Munich, the vineyard-shaped Berlin Philharmonic, and the fan-shaped Cologne Philharmonic. They first heard a sample from the reference location, after which they tried to identify the reference location from four music samples.

It was easier to identify the hall when the same music sample was played in each concert hall. If the reference sample came from a slightly different part of the same piece of music than was used in the control locations, it was harder to identify the hall.

‘Even small differences in the music listened to made it very difficult to identify concert halls of similar size and architecture. Halls that were very different to each other were clearly more easily identified regardless of the music,’ explains postdoctoral researcher Antti Kuusinen.

Another study showed that the acoustics of a concert hall are experienced differently depending on the volume at which the orchestra is playing. The concert halls used in the study were Helsinki's Musiikkitalo, the Munich Herkulessaal, the Berlin Philharmonic and the Berlin Konzerthaus.

The subjects listened to the orchestra playing at different volume levels, from the quietest pianissimo to the loudest fortissimo, after which they placed the concert halls in order according to how loud and enveloping they experienced the music to be. The order of the concert halls changed in some cases depending on the volume of the music.

‘Traditionally, the acoustics of concert halls are studied by using objective measurements to calculate acoustic parameters, such as reverberation time, which is independent of the characteristics or dynamics of the music. Our research clearly shows that this is insufficient for understanding the acoustics in its entirety, because both the timbre of the sound and the listeners’ perceptions shift as the volume changes,’ Lokki explains.

Lokki’s research group has also previously studied how the acoustics of concert halls affect the emotional reactions evoked by the music. The studies indicated that halls with acoustics that support large dynamic fluctuations evoke the strongest emotional experiences in listeners.

References: Tapio Lokki, Laura McLeod and Antti Kuusinen, “Perception of loudness and envelopment for different orchestral dynamics”, The Journal of the Acoustical Society of America 148, 2137 (2020); https://doi.org/10.1121/10.0002101 link: https://asa.scitation.org/doi/10.1121/10.0002101#

Provided by AALTO University

An Underwater Navigation System Powered By Sound (Engineering)

New approach could spark an era of battery-free ocean exploration, with applications ranging from marine conservation to aquaculture.

GPS isn’t waterproof. The navigation system depends on radio waves, which break down rapidly in liquids, including seawater. To track undersea objects like drones or whales, researchers rely on acoustic signaling. But devices that generate and send sound usually require batteries — bulky, short-lived batteries that need regular changing. Could we do without them?

MIT researchers think so. They’ve built a battery-free pinpointing system dubbed Underwater Backscatter Localization (UBL). Rather than emitting its own acoustic signals, UBL reflects modulated signals from its environment. That provides researchers with positioning information, at net-zero energy. Though the technology is still developing, UBL could someday become a key tool for marine conservationists, climate scientists, and the U.S. Navy.

These advances are described in a paper being presented this week at the Association for Computing Machinery’s Hot Topics in Networks workshop, by members of the Media Lab’s Signal Kinetics group. Research Scientist Reza Ghaffarivardavagh led the paper, along with co-authors Sayed Saad Afzal, Osvy Rodriguez, and Fadel Adib, who leads the group and is the Doherty Chair of Ocean Utilization as well as an associate professor in the MIT Media Lab and the MIT Department of Electrical Engineering and Computer Science.


It’s nearly impossible to escape GPS’ grasp on modern life. The technology, which relies on satellite-transmitted radio signals, is used in shipping, navigation, targeted advertising, and more. Since its introduction in the 1970s and ’80s, GPS has changed the world. But it hasn’t changed the ocean. If you had to hide from GPS, your best bet would be underwater.

Because radio waves quickly deteriorate as they move through water, subsea communications often depend on acoustic signals instead. Sound waves travel faster and further underwater than through air, making them an efficient way to send data. But there’s a drawback.

“Sound is power-hungry,” says Adib. For tracking devices that produce acoustic signals, “their batteries can drain very quickly.” That makes it hard to precisely track objects or animals for a long time-span — changing a battery is no simple task when it’s attached to a migrating whale. So, the team sought a battery-free way to use sound.

Good vibrations

Adib’s group turned to a unique resource they’d previously used for low-power acoustic signaling: piezoelectric materials. These materials generate their own electric charge in response to mechanical stress, like getting pinged by vibrating soundwaves. Piezoelectric sensors can then use that charge to selectively reflect some soundwaves back into their environment. A receiver translates that sequence of reflections, called backscatter, into a pattern of 1s (for soundwaves reflected) and 0s (for soundwaves not reflected). The resulting binary code can carry information about ocean temperature or salinity.
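The reflect-or-absorb scheme described above can be pictured as a toy decoder, where each time slot's echo energy is thresholded into one bit. The energies and threshold here are made up for illustration and do not come from the paper:

```python
# Toy backscatter decoder: each time slot's echo energy becomes one bit.
# A strong reflection reads as 1, a weak (absorbed) slot reads as 0.
def decode_backscatter(slot_energies, threshold=0.5):
    return [1 if e > threshold else 0 for e in slot_energies]

bits = decode_backscatter([0.91, 0.08, 0.87, 0.12, 0.80])  # -> [1, 0, 1, 0, 1]
```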

In principle, the same technology could provide location information. An observation unit could emit a soundwave, then clock how long it takes that soundwave to reflect off the piezoelectric sensor and return to the observation unit. The elapsed time could be used to calculate the distance between the observer and the piezoelectric sensor. But in practice, timing such backscatter is complicated, because the ocean can be an echo chamber.
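In the idealized (echo-free) case, that time-of-flight calculation is a one-liner: the echo covers the round trip, so the one-way distance is half the product of sound speed and elapsed time. A minimal sketch, assuming a typical seawater sound speed of 1,500 m/s:

```python
# Time-of-flight ranging: the echo covers the round trip, so halve it.
SPEED_OF_SOUND_SEAWATER = 1500.0  # m/s, a typical assumed value

def distance_from_echo(round_trip_s):
    return SPEED_OF_SOUND_SEAWATER * round_trip_s / 2.0

d = distance_from_echo(0.010)  # a 10 ms round trip -> 7.5 m away
```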

The sound waves don’t just travel directly between the observation unit and sensor. They also careen between the surface and seabed, returning to the unit at different times. “You start running into all of these reflections,” says Adib. “That makes it complicated to compute the location.” Accounting for reflections is an even greater challenge in shallow water — the short distance between seabed and surface means the confounding rebound signals are stronger.

The researchers overcame the reflection issue with “frequency hopping.” Rather than sending acoustic signals at a single frequency, the observation unit sends a sequence of signals across a range of frequencies. Each frequency has a different wavelength, so the reflected sound waves return to the observation unit at different phases. By combining information about timing and phase, the observer can pinpoint the distance to the tracking device. Frequency hopping was successful in the researchers’ deep-water simulations, but they needed an additional safeguard to cut through the reverberating noise of shallow water.
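The core idea of combining timing and phase across frequencies can be illustrated with a stripped-down two-tone version. This is a standard dual-frequency ranging trick, not the UBL algorithm itself, and the frequencies and sound speed below are assumptions for the sake of the example:

```python
import math

C = 1500.0  # m/s, assumed sound speed in water

def range_from_two_tones(phi1, phi2, f1, f2):
    """Estimate range from return-signal phases (radians) at two frequencies.
    The phase difference grows linearly with round-trip time:
    dphi = 2*pi*(f2 - f1)*tau, unambiguous while tau < 1/(f2 - f1)."""
    dphi = (phi2 - phi1) % (2 * math.pi)
    tau = dphi / (2 * math.pi * (f2 - f1))  # round-trip time in seconds
    return C * tau / 2.0

# Simulate a reflector 5 m away probed at 20 kHz and 20.05 kHz.
d_true = 5.0
tau = 2 * d_true / C
f1, f2 = 20_000.0, 20_050.0
phi1 = (2 * math.pi * f1 * tau) % (2 * math.pi)
phi2 = (2 * math.pi * f2 * tau) % (2 * math.pi)
d_est = range_from_two_tones(phi1, phi2, f1, f2)  # recovers ~5.0 m
```

Each individual tone's phase wraps around every wavelength, so it is ambiguous on its own; the slowly drifting phase *difference* between two nearby tones resolves distances over a much longer unambiguous range.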

Where echoes run rampant between the surface and seabed, the researchers had to slow the flow of information. They reduced the bitrate, essentially waiting longer between each signal sent out by the observation unit. That allowed the echoes of each bit to die down before potentially interfering with the next bit. Whereas a bitrate of 2,000 bits/second sufficed in simulations of deep water, the researchers had to dial it down to 100 bits/second in shallow water to obtain a clear signal reflection from the tracker. But a slow bitrate didn’t solve everything.

To track moving objects, the researchers actually had to boost the bitrate. One thousand bits/second was too slow to pinpoint a simulated object moving through deep water at 30 centimeters/second. “By the time you get enough information to localize the object, it has already moved from its position,” explains Afzal. At a speedy 10,000 bits/second, they were able to track the object through deep water.
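The trade-off Afzal describes is simple arithmetic: the longer an exchange takes, the farther the target drifts before a fix is computed. The 256-bit message length below is an assumption for illustration, not a figure from the paper:

```python
# How far does a target drift while one localization exchange completes?
def drift_cm(bits_per_exchange, bitrate_bps, speed_cm_s):
    acquisition_s = bits_per_exchange / bitrate_bps
    return speed_cm_s * acquisition_s

slow = drift_cm(256, 1_000, 30)   # 256 bits at 1 kbps: ~7.7 cm of drift
fast = drift_cm(256, 10_000, 30)  # same message at 10 kbps: ~0.8 cm
```

A tenfold increase in bitrate shrinks the motion blur tenfold, which is why fast-moving targets demand the higher rate even though shallow-water echoes push in the opposite direction.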

Efficient exploration

Adib’s team is working to improve the UBL technology, in part by solving challenges like the conflict between low bitrate required in shallow water and the high bitrate needed to track movement. They’re working out the kinks through tests in the Charles River. “We did most of the experiments last winter,” says Rodriguez. That included some days with ice on the river. “It was not very pleasant.”

Conditions aside, the tests provided a proof-of-concept in a challenging shallow-water environment. UBL estimated the distance between a transmitter and backscatter node at various distances up to nearly half a meter. The team is working to increase UBL’s range in the field, and they hope to test the system with their collaborators at the Woods Hole Oceanographic Institution on Cape Cod.

They hope UBL can help fuel a boom in ocean exploration. Ghaffarivardavagh notes that scientists have better maps of the moon’s surface than of the ocean floor. “Why can’t we send out unmanned underwater vehicles on a mission to explore the ocean? The answer is: We will lose them,” he says.

UBL could one day help autonomous vehicles stay found underwater, without spending precious battery power. The technology could also help subsea robots work more precisely, and provide information about climate change impacts in the ocean. “There are so many applications,” says Adib. “We’re hoping to understand the ocean at scale. It’s a long-term vision, but that’s what we’re working toward and what we’re excited about.”

This work was supported, in part, by the Office of Naval Research.

References: R. Ghaffarivardavagh, S. S. Afzal, O. Rodriguez and F. Adib, “Underwater Backscatter Localization: Toward a Battery-Free Underwater GPS”, Proceedings of the ACM Workshop on Hot Topics in Networks (HotNets), 2020.

Provided by MIT

Scientists Find Upper Limit For The Speed Of Sound (Physics)

A research collaboration between Queen Mary University of London, the University of Cambridge and the Institute for High Pressure Physics in Troitsk has discovered the fastest possible speed of sound.

The result, about 36 km per second, is around twice the speed of sound in diamond, the hardest known material in the world.

Waves, such as sound or light waves, are disturbances that move energy from one place to another. Sound waves can travel through different media, such as air or water, and move at different speeds depending on what they’re travelling through. For example, they move through solids much faster than through liquids or gases, which is why you hear an approaching train much sooner if you listen to the sound propagating in the rail track rather than through the air.
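The train example is easy to quantify. Taking textbook values of roughly 343 m/s for sound in air and about 5,900 m/s for longitudinal waves in steel (both assumed here, not from the study), the travel times over a kilometre differ by more than an order of magnitude:

```python
# Rough travel times for sound over 1 km, using assumed textbook speeds.
AIR = 343.0     # m/s, sound in air at room temperature
STEEL = 5900.0  # m/s, longitudinal waves in steel

t_air = 1000.0 / AIR      # ~2.9 s through the air
t_rail = 1000.0 / STEEL   # ~0.17 s along the rail
```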

Einstein’s theory of special relativity sets the absolute limit at which any wave can travel: the speed of light, about 300,000 km per second. However, until now it was not known whether sound waves also have an upper speed limit when travelling through solids or liquids.

The study, published in the journal Science Advances, shows that the upper limit of the speed of sound depends on two dimensionless fundamental constants: the fine structure constant and the proton-to-electron mass ratio.

These two numbers are already known to play an important role in understanding our Universe. Their finely-tuned values govern nuclear reactions such as proton decay and nuclear synthesis in stars and the balance between the two numbers provides a narrow ‘habitable zone’ where stars and planets can form and life-supporting molecular structures can emerge. However, the new findings suggest that these two fundamental constants can also influence other scientific fields, such as materials science and condensed matter physics, by setting limits to specific material properties such as the speed of sound.

The scientists tested their theoretical prediction on a wide range of materials and addressed one specific consequence of their theory: that the speed of sound should decrease with the mass of the atom. This implies that sound is fastest in solid atomic hydrogen. However, hydrogen is an atomic solid only at very high pressures, above about 1 million atmospheres, comparable to those in the cores of gas giants like Jupiter. At those pressures, hydrogen becomes a fascinating metallic solid that conducts electricity just like copper and is predicted to be a room-temperature superconductor. The researchers therefore performed state-of-the-art quantum mechanical calculations and found that the speed of sound in solid atomic hydrogen is close to the theoretical fundamental limit.
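The paper's bound follows directly from the two constants named above: the limiting speed is the speed of light scaled by the fine structure constant and the square root of half the electron-to-proton mass ratio. Plugging in standard values reproduces the quoted figure of about 36 km/s:

```python
# Reproduce the reported bound: v_u = alpha * sqrt(m_e / (2 * m_p)) * c
import math

ALPHA = 7.2973525693e-3         # fine structure constant (dimensionless)
ME_OVER_MP = 1 / 1836.15267343  # electron-to-proton mass ratio
C = 299_792_458.0               # speed of light, m/s

v_upper = ALPHA * math.sqrt(ME_OVER_MP / 2) * C  # ~36,100 m/s, i.e. ~36 km/s
```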

Professor Chris Pickard, Professor of Materials Science at the University of Cambridge, said: “Soundwaves in solids are already hugely important across many scientific fields. For example, seismologists use sound waves initiated by earthquakes deep in the Earth interior to understand the nature of seismic events and the properties of Earth composition. They’re also of interest to materials scientists because sound waves are related to important elastic properties including the ability to resist stress.”

References: K. Trachenko, B. Monserrat, “Speed of sound from fundamental physical constants”, Science Advances 09 Oct 2020, Vol. 6, no. 41, eabc8662 DOI: 10.1126/sciadv.abc8662 link: https://advances.sciencemag.org/content/6/41/eabc8662

Provided by Queen Mary, University of London