Tag Archives: #virtualreality

What Could Possibly Go Wrong With Virtual Reality? (Science and Technology)

YouTube is a treasure trove of virtual reality fails: users tripping, colliding with walls, and smacking into objects both inanimate and animate. By investigating these “VR Fails” on YouTube, researchers at the University of Copenhagen have sought to learn when and why things go sideways for users, and how to improve VR design and experiences so as to avoid accidents.

Millions of YouTube viewers have enjoyed hearty laughs watching others get hurt using virtual reality – people wearing VR headsets falling, screaming, crashing into walls and TV sets, or knocking spectators to the floor. Some of us have even been that person. Now, videos of virtual reality mishaps, known as “VR Fails”, have actually become a field of research.

Specifically, University of Copenhagen researchers who specialize in improving the relationship between computer technology and human beings studied 233 YouTube videos of VR fails. Their results have now been published in the Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21).

For the most part, virtual reality remains the domain of gamers. But VR has become quite widespread in education, marketing and the experience industry and is increasingly popular in a variety of other sectors.

“As virtual reality has become cheaper and more common, the technology is being used in a host of environments that weren’t considered during its design. For example, VR is now used in people’s homes, which are often cluttered and populated by people and pets that move around. We’ve taken a closer look at where things go wrong for people so as to optimize user experience,” explains PhD Fellow Andreea-Anamaria Muresan of the University of Copenhagen’s Department of Computer Science, one of the study’s authors.

Throwing slaps

Based on the 233 video clips, the researchers created a catalogue of, among other things, the various types of fails and accidents and their causes.

“One of the recurrent accident types is when VR users hit walls, furniture or spectators. In one case, we saw a spectator tickling the person wearing the VR headset. This resulted in the VR user turning around and slapping the spectator,” says Andreea-Anamaria Muresan.

Fear is the most frequent cause of fails and accidents: users get scared by events in their virtual universe, such as a roller-coaster ride or objects rushing towards them. Fear manifests itself in exaggerated reactions – shouting and screaming, or wild, uncontrolled movements in which users lash out with their arms and legs. This often results in falls and collisions when, for example, a user tries to escape from a virtual situation and runs head first into a solid wall.

Illustration from the scientific paper, created by Andreea-Anamaria Muresan.

New ideas for designers

Andreea-Anamaria Muresan and her fellow researchers now have a number of suggestions for how to prevent some of these accidents.

“Some VR games are designed to frighten players or give them jolts of adrenaline – which is part of what makes them fun to play. But in many cases, this isn’t the intention. So one has to find a balance. Because users get hurt from time to time and equipment can be destroyed, some people might lose their interest in using the technology. We seek to prevent this,” says Andreea-Anamaria Muresan, who elaborates:

“We can now provide designers with ideas about what they can do differently to avoid accidents. For example, collisions can be avoided by changing some of the game elements. Instead of a player being outfitted with a sword that requires large swings of the arms, their equipment can be substituted with a shield. This changes a player’s behavior.”

The researchers will now try to implement some of their design proposals in VR game prototypes.

WATCH SOME OF THE VIDEO CLIPS ON YOUTUBE:

https://www.youtube.com/watch?v=0KcllPEe8y8

https://www.youtube.com/watch?v=9cma-1DNlZU

Featured image credit: Getty Images


Reference: Emily Dao, Andreea Muresan, Kasper Hornbæk, Jarrod Knibbe, “Bad Breakdowns, Useful Seams, and Face Slapping: Analysis of VR Fails on YouTube”, CHI ’21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, May 2021, Article No.: 526, Pages 1–14. https://doi.org/10.1145/3411764.3445435


Provided by University of Copenhagen

Using Artificial Intelligence to Generate 3D Holograms in Real-time (Science and Technology)

A new method called tensor holography could enable the creation of holograms for virtual reality, 3D printing, medical imaging, and more — and it can run on a smartphone.

Despite years of hype, virtual reality headsets have yet to topple TV or computer screens as the go-to devices for video viewing. One reason: VR can make users feel sick. Nausea and eye strain can result because VR creates an illusion of 3D viewing although the user is in fact staring at a fixed-distance 2D display. The solution for better 3D visualization could lie in a 60-year-old technology remade for the digital world: holograms.

Holograms deliver an exceptional representation of the 3D world around us. Plus, they’re beautiful. (Go ahead — check out the holographic dove on your Visa card.) Holograms offer a shifting perspective based on the viewer’s position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.

Researchers have long sought to make computer-generated holograms, but the process has traditionally required a supercomputer to churn through physics simulations, which is time-consuming and can yield less-than-photorealistic results. Now, MIT researchers have developed a new way to produce holograms almost instantly — and the deep learning-based method is so efficient that it can run on a laptop in the blink of an eye, the researchers say.

“People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations,” says Liang Shi, the study’s lead author and a PhD student in MIT’s Department of Electrical Engineering and Computer Science (EECS). “It’s often been said that commercially available holographic displays will be around in 10 years, yet this statement has been around for decades.”

Shi believes the new approach, which the team calls “tensor holography,” will finally bring that elusive 10-year goal within reach. The advance could fuel a spillover of holography into fields like VR and 3D printing.

This figure shows the experimental demonstration of 2D and 3D holographic projection. The left photograph is focused on the mouse toy (in yellow box) closer to the camera, and the right photograph is focused on the perpetual desk calendar (in blue box). Credits: Courtesy of the researchers

Shi worked on the study, published today in Nature, with his advisor and co-author Wojciech Matusik. Other co-authors include Beichen Li of EECS and the Computer Science and Artificial Intelligence Laboratory at MIT, as well as former MIT researchers Changil Kim (now at Facebook) and Petr Kellnhofer (now at Stanford University).

The quest for better 3D

A typical lens-based photograph encodes the brightness of each light wave — a photo can faithfully reproduce a scene’s colors, but it ultimately yields a flat image.

In contrast, a hologram encodes both the brightness and phase of each light wave. That combination delivers a truer depiction of a scene’s parallax and depth. So, while a photograph of Monet’s “Water Lilies” can highlight the painting’s color palette, a hologram can bring the work to life, rendering the unique 3D texture of each brush stroke. But despite their realism, holograms are a challenge to make and share.
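For readers who want the distinction in symbols, a standard scalar-optics sketch (textbook notation, not drawn from the paper) writes a monochromatic light wave at a point $(x, y)$ as a complex field:

$$U(x, y) = A(x, y)\, e^{i\phi(x, y)}.$$

A camera sensor records only the intensity $I = |U|^2 = A^2$, discarding the phase $\phi$. A hologram encodes both the amplitude $A$ and the phase $\phi$ – and it is the phase that carries the parallax and depth cues described above.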

First developed in the mid-1900s, early holograms were recorded optically. That required splitting a laser beam, with half the beam used to illuminate the subject and the other half used as a reference for the light waves’ phase. This reference generates a hologram’s unique sense of depth.  The resulting images were static, so they couldn’t capture motion. And they were hard copy only, making them difficult to reproduce and share.

Computer-generated holography sidesteps these challenges by simulating the optical setup. But the process can be a computational slog. “Because each point in the scene has a different depth, you can’t apply the same operations for all of them,” says Shi. “That increases the complexity significantly.” Directing a clustered supercomputer to run these physics-based simulations could take seconds or minutes for a single holographic image. Plus, existing algorithms don’t model occlusion with photorealistic precision. So Shi’s team took a different approach: letting the computer teach physics to itself.

They used deep learning to accelerate computer-generated holography, allowing for real-time hologram generation. The team designed a convolutional neural network — a processing technique that uses a chain of trainable tensors to roughly mimic how humans process visual information. Training a neural network typically requires a large, high-quality dataset, which didn’t previously exist for 3D holograms.

The team built a custom database of 4,000 pairs of computer-generated images. Each pair matched a picture — including color and depth information for each pixel — with its corresponding hologram. To create the holograms in the new database, the researchers used scenes with complex and variable shapes and colors, with the depth of pixels distributed evenly from the background to the foreground, and with a new set of physics-based calculations to handle occlusion. That approach resulted in photorealistic training data. Next, the algorithm got to work.

By learning from each image pair, the tensor network tweaked the parameters of its own calculations, successively enhancing its ability to create holograms. The fully optimized network operated orders of magnitude faster than physics-based calculations. That efficiency surprised even the team.
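As an illustration of the general shape of such a pipeline – and only that – the minimal PyTorch sketch below maps an RGB-D image (four channels) to a two-channel hologram (amplitude and phase) with a small fully convolutional network. The layer widths, loss function and random stand-in data are placeholders, not the architecture published in Nature.

```python
import torch
import torch.nn as nn

class TinyHologramNet(nn.Module):
    """Toy fully convolutional net: RGB-D image -> (amplitude, phase) map.

    Illustrative only; the layer count and widths are placeholders, not
    the tensor holography architecture from the paper.
    """
    def __init__(self, width: int = 24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, width, kernel_size=3, padding=1),   # RGB + depth in
            nn.ReLU(),
            nn.Conv2d(width, width, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(width, 2, kernel_size=3, padding=1),   # amplitude, phase out
        )

    def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
        return self.net(rgbd)

model = TinyHologramNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random stand-ins for the image/hologram training pairs described above.
rgbd = torch.rand(8, 4, 64, 64)    # batch of RGB-D inputs
target = torch.rand(8, 2, 64, 64)  # matching "ground truth" holograms

for step in range(100):            # plain supervised regression loop
    optimizer.zero_grad()
    loss = loss_fn(model(rgbd), target)
    loss.backward()
    optimizer.step()
```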

“We are amazed at how well it performs,” says Matusik. In mere milliseconds, tensor holography can craft holograms from images with depth information — which is provided by typical computer-generated images and can be calculated from a multicamera setup or LiDAR sensor (both are standard on some new smartphones). This advance paves the way for real-time 3D holography. What’s more, the compact tensor network requires less than 1 MB of memory. “It’s negligible, considering the tens and hundreds of gigabytes available on the latest cell phone,” he says.

Video: Towards Real-time Photorealistic 3D Holography with Deep Neural Networks © MIT

The research “shows that true 3D holographic displays are practical with only moderate computational requirements,” says Joel Kollin, a principal optical architect at Microsoft who was not involved with the research. He adds that “this paper shows marked improvement in image quality over previous work,” which will “add realism and comfort for the viewer.” Kollin also hints at the possibility that holographic displays like this could even be customized to a viewer’s ophthalmic prescription. “Holographic displays can correct for aberrations in the eye. This makes it possible to display an image sharper than what the user could see with contacts or glasses, which only correct for low order aberrations like focus and astigmatism.”

“A considerable leap”

Real-time 3D holography would enhance a slew of systems, from VR to 3D printing. The team says the new system could help immerse VR viewers in more realistic scenery, while eliminating eye strain and other side effects of long-term VR use. The technology could be easily deployed on displays that modulate the phase of light waves. Currently, most affordable consumer-grade displays modulate only brightness, though the cost of phase-modulating displays would fall if widely adopted.

Three-dimensional holography could also boost the development of volumetric 3D printing, the researchers say. This technology could prove faster and more precise than traditional layer-by-layer 3D printing, since volumetric 3D printing allows for the simultaneous projection of the entire 3D pattern. Other applications include microscopy, visualization of medical data, and the design of surfaces with unique optical properties.

“It’s a considerable leap that could completely change people’s attitudes toward holography,” says Matusik. “We feel like neural networks were born for this task.”

The work was supported, in part, by Sony.

Featured image: MIT researchers have developed a way to produce holograms almost instantly. They say the deep learning-based method is so efficient that it could run on a smartphone. Credit: MIT News, with images from iStockphoto

References: (1) Shi, L., Li, B., Kim, C., et al. “Towards real-time photorealistic 3D holography with deep neural networks”, Nature 591, 234–239 (2021). https://doi.org/10.1038/s41586-020-03152-0 (2) Project page: http://cgh.csail.mit.edu/


Provided by MIT

Virtual Reality Helping To Treat Fear of Heights (Psychiatry)

Researchers from the University of Basel have developed a virtual reality app for smartphones to reduce fear of heights. Now, they have conducted a clinical trial to study its efficacy. Trial participants who spent a total of four hours training with the app at home showed an improvement in their ability to handle real height situations.

Fear of heights is a widespread phenomenon. Approximately 5% of the general population experiences a debilitating level of discomfort in height situations. However, the people affected rarely take advantage of the available treatment options, such as exposure therapy, which involves putting the person in the anxiety-causing situation under the guidance of a professional. This is partly because people are reluctant to confront their fear of heights, and partly because it can be difficult to reproduce the right kinds of height situations in a therapy setting.

This motivated the interdisciplinary research team led by Professor Dominique de Quervain of the University of Basel to develop a smartphone-based virtual reality exposure therapy app called Easyheights. The app uses 360° images of real locations, which the researchers captured using a drone. People can use the app on their own smartphones together with a special virtual reality headset.

Gradually increasing the height

During the virtual experience, the user stands on a platform that is initially one meter above the ground. After allowing the user to acclimatize to the situation for a certain interval, the platform automatically rises. In this way, the perceived distance above the ground increases slowly but steadily, without an increase in the person’s level of fear.
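In pseudocode terms, the exposure protocol amounts to a simple hold-then-rise loop. The sketch below is a hypothetical Python illustration of that logic; apart from the one-meter starting height, every number and function name here is invented for illustration, not taken from the Easyheights app.

```python
import time

def render_platform_at(height_m: float) -> None:
    # Stub standing in for the app's actual VR rendering of the platform.
    print(f"Platform at {height_m:.1f} m")

def easyheights_session(start_height_m: float = 1.0,
                        step_m: float = 0.5,
                        max_height_m: float = 30.0,
                        acclimatization_s: float = 60.0) -> None:
    """Toy model of graded VR height exposure: hold, then rise, repeat.

    Only the 1 m starting height comes from the article; all other
    values are placeholders.
    """
    height = start_height_m
    while height <= max_height_m:
        render_platform_at(height)     # update the virtual scene
        time.sleep(acclimatization_s)  # let the user acclimatize
        height += step_m               # then raise the platform a little

easyheights_session(acclimatization_s=0.1)  # fast demo values
```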

Advanced level of the virtual reality app. (Image: Bentz et al., NPJ Digital Medicine 2021)

The research team studied the efficacy of this approach in a randomized, controlled trial and published the results in the journal NPJ Digital Medicine. Fifty trial participants with a fear of heights either completed a four-hour height training program (one 60-minute session and six 30-minute sessions over the course of two weeks) using virtual reality, or were assigned to the control group, which did not complete these training sessions.

Before and after the training phase – or the same period of time without training – the trial participants ascended the Uetliberg lookout tower near Zurich as far as their fear of heights allowed. The researchers recorded how high the participants climbed, along with their subjective fear at each stage of the tower. At the end of the trial, the researchers evaluated the results from 22 subjects who completed the Easyheights training and 25 from the control group.

The group that completed the training with the app exhibited less fear on the tower and was able to climb further towards the top than before the training. The control group exhibited no positive changes. The efficacy of the Easyheights training proved comparable to that of conventional exposure therapy.

Therapy in your own living room

Researchers have already been studying the use of virtual reality for treating fear of heights for more than two decades. “What is new, however, is that smartphones can be used to produce the virtual scenarios that previously required a technically complicated type of treatment, and this makes it much more accessible,” explains Dr. Dorothée Bentz, lead author of the study.

The results from the study suggest that the repeated use of a smartphone-based virtual reality exposure therapy can greatly improve the behavior and subjective state of well-being in height situations. People who suffer from a mild fear of heights will soon be able to download the free app from major app stores and complete training sessions on their own. However, the researchers recommend that people who suffer from a serious fear of heights only use the app with the supervision of a professional.

The current study is one of several projects in progress at the Transfaculty Research Platform for Molecular and Cognitive Neurosciences, led by Professor Andreas Papassotiropoulos and Professor Dominique de Quervain. Their goal is to improve the treatment of mental disorders through the use of new technologies and to make these treatments widely available.

Featured image: In the virtual reality app, users gradually rise to greater heights and can indicate the degree of their fear at each level. (Image: Bentz et al., NPJ Digital Medicine 2021)


Reference: Dorothée Bentz, Nan Wang, Merle K. Ibach, Nathalie S. Schicktanz, Anja Zimmer, Andreas Papassotiropoulos, Dominique J.F. de Quervain, “Effectiveness of a stand-alone, smartphone-based virtual reality exposure app to reduce fear of heights in real-life: a randomized trial”, npj Digital Medicine (2021). https://doi.org/10.1038/s41746-021-00387-7


Provided by University of Basel

Operations On Screen: Creating an Accessible Surgery Simulator (Medicine)

A UOC project is developing a programme to train surgeons’ psychomotor skills. Offering the possibility of low-cost distribution, this virtual reality tool could also be accessible to low-income countries.

Practice makes perfect – in the complex world of medicine too, where just a millimetre can make the difference between success and failure. In partnership with the University of Manizales (Colombia), the Universitat Oberta de Catalunya (UOC) is hosting a project to create a low-cost surgery simulator: a much more accessible tool than those currently available, which could be used to train both surgeons in the early stages of their career and those who are more experienced.

UOC and Universidad de Manizales in Colombia are developing a low-cost surgery simulator to train surgeons’ psychomotor skills (National Cancer Institute – Unsplash)

The project creates a 3D virtual environment in which users can put their psychomotor skills to the test. But unlike real surgery, the operations carried out in the simulator consist of manoeuvring within a series of geometric shapes. The programme provides real-time feedback on the precision with which users carry out the movements and on their overall performance in the exercises. After all, “a virtual environment without metrics, feedback or validation is nothing more than a video game,” explained Fernando Álvarez-López, a paediatric surgeon who has created this project as part of his doctoral degree in Education and ICT at the UOC, together with the University of Manizales in Colombia and within the framework of the CYTED RITMOS Network (Ibero-American Network of Mobile Technologies in Health).
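The paper validates several task metrics (see the reference below). As a purely hypothetical illustration of the kind of real-time feedback involved, one simple precision measure is the root-mean-square deviation of the instrument tip from a target path; the Python sketch below is a generic example, not the simulator’s actual scoring code.

```python
import math

def rms_deviation(tip_path, target_path):
    """RMS distance between sampled instrument-tip positions and the
    corresponding target positions, both given as (x, y, z) tuples.

    A generic precision metric for illustration only; not the specific
    scoring used by the UOC simulator.
    """
    assert len(tip_path) == len(target_path)
    squared = [sum((a - b) ** 2 for a, b in zip(p, q))
               for p, q in zip(tip_path, target_path)]
    return math.sqrt(sum(squared) / len(squared))

# Example: a tip that tracks a straight target path with small errors.
target = [(t * 0.01, 0.0, 0.0) for t in range(10)]
tip = [(x + 0.001, y, z + 0.002) for (x, y, z) in target]
print(f"RMS deviation: {rms_deviation(tip, target):.4f} m")
```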

The advantage of this tool, according to its developers, would be its low cost and its accessibility. Many of the virtual reality environments implemented to date are very expensive and require complicated machinery to operate. The simulator developed by the UOC, on the other hand, may cost less than half the price of its competitors, putting this technology within the reach of professionals from low- or middle-income countries.

Paradigm shift in learning

“This type of tool represents a paradigm shift in medicine,” said Francesc Saigí-Rubió, a professor at the UOC’s Faculty of Health Sciences, researcher at the I2TIC lab and co-creator of this project together with Marcelo Maina, professor at the Psychology and Education Sciences Department and researcher at the Edul@b group. “In surgery, you have to learn a series of movements, watch your time and follow protocols; in a way, like when you learn to drive. These simulators will allow surgeons to train from their office, or even from home, until they perfect their technique,” added Álvarez.

The ability to perform very precise movements is one of the keys to success in minimally invasive surgery, performed using tiny surgical instruments inserted through small incisions made in the body. Patient recovery can be quicker and easier with this type of surgery, but considerable skill is required to ensure success. Hence the importance of creating environments in which surgeons can practise over and over again all the movements that must be performed for a successful operation.

The project’s present and future

The tool has already been tested by 148 users: 100 undergraduates, 20 surgical residents and 28 experts. Among others, professionals from the Vall d’Hebron Hospital in Barcelona have taken part in these tests. The results of the study, published in the open access journal JMIR Serious Games, endorse the tool’s validity for improving surgeons’ psychomotor skills at different stages in their career. It is equally useful for those who are already familiar with virtual reality platforms and for those with no prior experience.

The researchers are currently working to take the tool into hospital environments. The programme’s creators hope to develop a version that can be downloaded directly from the internet. Among other things, users will be able to adjust the level of difficulty to their profile and needs. In the future, it may even be possible to create a more immersive experience through the use of virtual reality glasses. “Technology is constantly moving forward, so we want to continue improving this project in line with the needs of the moment,” concluded Álvarez.

This research supports the UN Sustainable Development Goals of good health and well-being, quality education, and reduced inequalities.

Reference: Fernando Alvarez-Lopez, Marcelo Fabián Maina, Fernando Arango, Francesc Saigí-Rubió, “Use of a Low-Cost Portable 3D Virtual Reality Simulator for Psychomotor Skill Training in Minimally Invasive Surgery: Task Metrics and Score Validity”, JMIR Serious Games, Vol. 8, No. 4 (2020), e19723. https://doi.org/10.2196/19723

Provided by Universitat Oberta de Catalunya (UOC)

About UOC R&I

The UOC’s research and innovation (R&I) are helping 21st-century global societies to overcome pressing challenges by studying the interactions between ICT and human activity, with a specific focus on e-learning and e-health. Over 400 researchers and 50 research groups work among the University’s seven faculties and two research centres: the Internet Interdisciplinary Institute (IN3) and the eHealth Center (eHC).

The United Nations’ 2030 Agenda for Sustainable Development and open knowledge serve as strategic pillars for the UOC’s teaching, research and innovation. More information: research.uoc.edu. #UOC25years

3D-printed Glass Enhances Optical Design Flexibility (Optics / Computer)

Lawrence Livermore National Laboratory (LLNL) researchers have used multi-material 3D printing to create tailored gradient refractive index glass optics that could make for better military specialized eyewear and virtual reality goggles.

Artistic rendering of an aspirational future automated production process for custom GRIN optics, showing multi-material 3D printing of a tailored composition optic preform, conversion to glass via heat treatment, polishing and inspection of the final optics with refractive index gradients. Image by Jacob Long and Brian Chavez. ©LLNL

The new technique could achieve a variety of conventional and unconventional optical functions in a flat glass component (with no surface curvature), offering new optical design versatility in environmentally stable glass materials.

The team was able to tailor the gradient in the material compositions by actively controlling the ratio of two different glass-forming pastes or “inks” blended together inline using the direct ink writing (DIW) method of 3D printing. After the composition-varying optical preform is built using DIW, it is then densified to glass and can be finished using conventional optical polishing.
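Under an idealized linear-mixing assumption (ours for illustration, not a claim from the paper), the local refractive index after densification simply tracks the blend fraction of the two inks:

$$n(\mathbf{x}) \approx f(\mathbf{x})\, n_1 + \bigl(1 - f(\mathbf{x})\bigr)\, n_2,$$

where $f(\mathbf{x})$ is the local volume fraction of the first glass-forming ink and $n_1$, $n_2$ are the refractive indices of the two pure glasses. Real ink systems may mix nonlinearly, but the relation shows why controlling the blend ratio point by point controls the index point by point.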

“The change in material composition leads to a change in refractive index once we convert it to glass,” said LLNL scientist Rebecca Dylla-Spears, lead author of a paper appearing today in Science Advances.

The project started in 2016 when the team began looking at ways that additive manufacturing could be used to advance optics and optical systems. Because additive manufacturing offers the ability to control both structure and composition, it provided a new path to manufacturing of gradient refractive index glass lenses.

Gradient refractive index (GRIN) optics provide an alternative to conventionally finished optics. GRIN optics contain a spatial gradient in material composition, which provides a gradient in the material refractive index — altering how light travels through the medium. A GRIN lens can have a flat surface figure yet still perform the same optical function as an equivalent conventional lens.
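A textbook example (not specific to the LLNL work) makes this concrete: the classic radial GRIN profile used in flat-faced rod lenses is parabolic,

$$n(r) = n_0 \left(1 - \frac{A}{2}\, r^2\right),$$

where $n_0$ is the index on the optical axis, $r$ is the radial distance from that axis, and $A$ is a gradient constant. Rays inside such a medium follow sinusoidal paths with period $2\pi/\sqrt{A}$, so a slab with completely flat surfaces, cut to the right length, focuses light just like a curved lens.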

GRIN optics already exist in nature because of the evolution of eye lenses. Examples can be found in most species, where the change in refractive index across the eye lens is governed by the varying concentration of structural proteins.

An array of polished, 3D printed gradient refractive index lenses made of titania-doped silica glass. Grid squares are 1 millimeter on each side. ©LLNL

The ability to fully spatially control material composition and optical functionality provides new options for GRIN optic design. For example, multiple functionalities could be designed into a single optic, such as focusing combined with correction of common optical aberrations. In addition, it has been shown that the use of optics with combined surface curvature and gradients in refractive index has the potential to reduce the size and weight of optical systems.

By tailoring the index, a curved optic can be replaced with a flat surface, which could reduce finishing costs. Surface curvature also could be added to manipulate light using both bulk and surface effects.

The new technique also can save weight in optical systems. For example, it’s critical that optics used by soldiers in the field are light and portable.

“This is the first time we have combined two different glass materials by 3D printing and demonstrated their function as an optic. Although demonstrated for GRIN, the approach could be used to tailor other material or optical properties as well,” Dylla-Spears said.

Other Livermore researchers involved in the project include Timothy Yee, Koroush Sasan, Du Nguyen, Nikola Dudukovic, Jason Ortega, Michael Johnson, Oscar Herrera, Frederick Ryerson and Lana Wong. The Laboratory Directed Research and Development program funded the work.

Provided by LLNL

Future VR Could Employ New Ultrahigh-res Display (Engineering)

By expanding on existing designs for electrodes of ultra-thin solar panels, Stanford researchers and collaborators in Korea have developed a new architecture for OLED – organic light-emitting diode – displays that could enable televisions, smartphones and virtual or augmented reality devices with resolutions of up to 10,000 pixels per inch (PPI). (For comparison, the resolutions of new smartphones are around 400 to 500 PPI.)
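The pixel-pitch arithmetic makes the jump concrete. At 10,000 PPI the center-to-center pixel spacing is

$$\frac{25.4\ \text{mm}}{10{,}000} \approx 2.5\ \mu\text{m},$$

compared with $25.4\ \text{mm} / 500 \approx 51\ \mu\text{m}$ at 500 PPI – a roughly twentyfold reduction in pitch, and therefore about four hundred times as many pixels per unit area.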

Illustration of the meta-OLED display and the underlying metaphotonic layer, which improves the overall brightness and color of the display while keeping it thin and energy efficient. ©Samsung Advanced Institute of Technology.

Such high-pixel-density displays will be able to provide stunning images with true-to-life detail – something that will be even more important for headset displays designed to sit just centimeters from our faces.

The advance is based on research by Stanford University materials scientist Mark Brongersma in collaboration with the Samsung Advanced Institute of Technology (SAIT). Brongersma was initially put on this research path because he wanted to create an ultra-thin solar panel design.

“We’ve taken advantage of the fact that, on the nanoscale, light can flow around objects like water,” said Brongersma, who is a professor of materials science and engineering and senior author of the Oct. 22 Science paper detailing this research. “The field of nanoscale photonics keeps bringing new surprises and now we’re starting to impact real technologies. Our designs worked really well for solar cells and now we have a chance to impact next generation displays.”

In addition to having a record-setting pixel density, the new “metaphotonic” OLED displays would also be brighter and have better color accuracy than existing versions, and they’d be much easier and more cost-effective to produce as well.

Hidden gems

At the heart of an OLED are organic, light-emitting materials. These are sandwiched between highly-reflective and semi-transparent electrodes that enable current injection into the device. When electricity flows through an OLED, the emitters give off red, green or blue light. Each pixel in an OLED display is composed of smaller sub-pixels that produce these primary colors. When the resolution is sufficiently high, the pixels are perceived as one color by the human eye. OLEDs are an attractive technology because they are thin, light and flexible and produce brighter and more colorful images than other kinds of displays.

This research aims to offer an alternative to the two types of OLED displays that are currently commercially available. One type – called a red-green-blue OLED – has individual sub-pixels that each contain only one color of emitter. These OLEDs are fabricated by spraying each layer of materials through a fine metal mesh to control the composition of each pixel. They can only be produced on a small scale, however, like what would be used for a smartphone.

Larger devices like TVs employ white OLED displays. Each of these sub-pixels contains a stack of all three emitters and then relies on filters to determine the final sub-pixel color, which is simpler to fabricate. Since the filters reduce the overall output of light, white OLED displays are more power-hungry and prone to having images burn into the screen.

OLED displays were on the mind of Won-Jae Joo, a SAIT scientist, when he visited Stanford from 2016 to 2018. During that time, Joo listened to a presentation by Stanford graduate student Majid Esfandyarpour about an ultrathin solar cell technology he was developing in Brongersma’s lab and realized it had applications beyond renewable energy.

“Professor Brongersma’s research themes were all very academically profound and were like hidden gems for me as an engineer and researcher at Samsung Electronics,” said Joo, who is lead author of the Science paper.

Joo approached Esfandyarpour after the presentation with his idea, which led to a collaboration between researchers at Stanford, SAIT and Hanyang University in Korea.

“It was quite exciting to see that a problem that we have already thought about in a different context can have such an important impact on OLED displays,” said Esfandyarpour.

A fundamental foundation

The crucial innovation behind both the solar panel and the new OLED is a base layer of reflective metal with nanoscale (smaller than microscopic) corrugations, called an optical metasurface. The metasurface can manipulate the reflective properties of light and thereby allow the different colors to resonate in the pixels. These resonances are key to facilitating effective light extraction from the OLEDs.

“This is akin to the way musical instruments use acoustic resonances to produce beautiful and easily audible tones,” said Brongersma, who conducted this research as part of the Geballe Laboratory for Advanced Materials at Stanford.

For example, red emitters have a longer wavelength of light than blue emitters, which, in conventional RGB-OLEDs, translates to sub-pixels of different heights. In order to create a flat screen overall, the materials deposited above the emitters have to be laid down in unequal thicknesses. By contrast, in the proposed OLEDs, the base layer corrugations allow each pixel to be the same height and this facilitates a simpler process for large-scale as well as micro-scale fabrication.

In lab tests, the researchers successfully produced miniature proof-of-concept pixels. Compared with the color-filtered white OLEDs used in OLED televisions, these pixels had a higher color purity and a twofold increase in luminescence efficiency – a measure of how bright the screen is compared to how much energy it uses. They also allow for an ultrahigh pixel density of 10,000 pixels per inch.

The next step, integrating this work into a full-size display, is being pursued by Samsung, and Brongersma eagerly awaits the results, hoping to be among the first people to see the meta-OLED display in action.

Reference: Won-Jae Joo, Jisoo Kyoung, Majid Esfandyarpour et al., “Metasurface-driven OLED displays beyond 10,000 pixels per inch”, Science, Vol. 370, Issue 6515, pp. 459–463, 2020. https://doi.org/10.1126/science.abc8530

Provided by Stanford University

‘Universal Law of Touch’ Will Enable New Advances in Virtual Reality (Science and Technology)

Seismic waves, commonly associated with earthquakes, have been used by scientists to develop a universal scaling law for the sense of touch.

© University of Birmingham

A team, led by researchers at the University of Birmingham, used Rayleigh waves to create the first scaling law for touch sensitivity. The results are published in Science Advances.

The researchers are part of a European consortium (H-Reality) that is already using the theory to develop new virtual reality technologies incorporating the sense of touch.

Rayleigh waves are created by impact between objects and are commonly thought to travel only along surfaces. The team discovered that, when it comes to touch, the waves also travel through layers of skin and bone and are picked up by the body’s touch receptor cells.

Using mathematical modelling of these touch receptors, the researchers showed how the receptors were located at depths that allowed them to respond to Rayleigh waves. The interaction of these receptors with Rayleigh waves varies across species, but the ratio of receptor depth to wavelength remains the same, enabling the universal law to be defined.
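Expressed as a formula, the law says that the ratio of receptor depth to Rayleigh wavelength is approximately constant across species (the symbols here are ours; the paper quantifies the constant):

$$\frac{z_{\text{receptor}}}{\lambda_{\text{Rayleigh}}} \approx \text{const.},$$

where $z_{\text{receptor}}$ is the depth of the touch receptors beneath the skin surface and $\lambda_{\text{Rayleigh}}$ is the wavelength of the Rayleigh wave generated by the contact.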

The mathematics used by the researchers to develop the law is based on approaches first developed over a hundred years ago to model earthquakes. The law supports predictions made by the Nobel-Prize-winning physicist Georg von Békésy who first suggested the mathematics of earthquakes could be used to explore connections between Rayleigh waves and touch.

The team also found that the interaction of the waves and receptors remained unchanged even when the stiffness of the outermost layer of skin changed. The ability of the receptors to respond to Rayleigh waves persisted despite the many variations in this outer layer caused by age, gender, profession, or even hydration.

Dr Tom Montenegro-Johnson, of the University of Birmingham’s School of Mathematics, led the research. He explains: “Touch is a primordial sense, as important to our ancient ancestors as it is to modern day mammals, but it’s also one of the most complex and therefore least understood. While we have universal laws to explain sight and hearing, for example, this is the first time that we’ve been able to explain touch in this way.”

James Andrews, co-author of the study at the University of Birmingham, adds: “The principles we’ve defined enable us to better understand the different experiences of touch among a wide range of species. For example, if you indent the skin of a rhinoceros by 5mm, they would have the same sensation as a human with a similar indentation – it’s just that the forces required to produce the indentation would be different. This makes a lot of sense in evolutionary terms, since it’s connected to relative danger and potential damage.”

Reference: J. W. Andrews, M. J. Adams and T. D. Montenegro-Johnson, “A universal scaling law of mammalian touch”, Science Advances, Vol. 6, No. 41, eabb6912 (9 October 2020). https://doi.org/10.1126/sciadv.abb6912

Provided by University Of Birmingham