Category Archives: Science and Technology

How To Build A Synthetic Digestive System For Marvel’s Vision? (Science and Technology / Superhero)

If you are a Marvel fan, you are probably well aware of the character Vision. If not, let me tell you: Vision is a synthezoid, or synthetic human, who is a member of the Avengers and constantly strives to be more human. In the films of the Marvel Cinematic Universe, Vision is based on advanced robotics and bioengineering technologies. Notably, his body contains biological cells, and his external appearance includes eyes, a mouth, a nose, teeth, and fingernails; traditional human attributes. Unlike humans, Vision does not eat, as his metabolic energy requirements are met by a fictional alien artefact known as the “Mind Stone”. Nonetheless, given that Vision has eyes and a tongue, could he also have other organs? Could his body contain the organs of a synthetic digestive system? And if so, how would these organs meet his body’s metabolic energy needs?

Now, Falk J. Tauber and Barry W. Fitzgerald have shown how advances in soft robotics and the development of biocompatible self-actuating and self-sensing materials could be combined to build an artificial digestive system for Marvel’s Vision. Their study recently appeared in the journal Superhero Science and Technology.

How Could It Be Built?

It would be built from a combination of several systems: a silicone-based biomimetic peristaltic pump (SBPP) serving as the artificial oesophagus, intestines, and rectum, and the SoGut system serving as an artificial stomach (as shown in Fig. 1). The structured inner conduits of the intestines are lined with tissue scaffolds on which epithelial cells grow, forming folds and villi. Beneath the scaffold layer lies a network of microfluidic channels imitating a circulatory system, which together with the tissue cells forms an organ-on-a-chip-like system.

Figure 1: The possible artificial digestive system for Vision. © Tauber and Fitzgerald

“Even defecating would be possible with the SBPP system, which would allow Vision to rid his body of digestive waste products, just like the other Avengers.”

Can We Use All These Devices To Build A Synthetic Digestive System Today?

Well, not yet. But why?

Guys, the main role of our digestive system is to convert food into energy. In this process, the food must be decomposed and metabolised. The mechanical part of the decomposition is possible with today’s soft-robotic systems, and a living biofilm with tissue cells is a first step towards metabolisation. However, there are many more factors to consider.

For example, in addition to mechanical breakdown, specific digestive juices produced by glands break down and decompose foods with the help of acids and enzymes. As a result, food is broken down into components that the body’s cells can use as an energy source. In Vision’s case, this energy could also come from electrochemical reactions in a microbial fuel cell (MFC) system powered by food. Though a hydrogen fuel cell could cover Vision’s basal energy needs, to have sufficient energy to fight villains, Vision would need a more sustainable energy source, which could be provided by an artificial digestive system.
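To get a feel for the scale of the problem, here is a rough back-of-the-envelope check of what a digestive system must supply, using a human-like basal requirement as a stand-in for Vision's. The figures are illustrative assumptions, not values from the paper.

```python
# Rough energy-budget check: what average power would a digestive
# system need to supply to cover a human-like basal requirement?
# The 1,700 kcal/day figure is an illustrative assumption.

KCAL_TO_JOULES = 4184          # 1 kilocalorie in joules
SECONDS_PER_DAY = 24 * 60 * 60

def basal_power_watts(kcal_per_day):
    """Convert a daily dietary energy intake to an average power draw."""
    return kcal_per_day * KCAL_TO_JOULES / SECONDS_PER_DAY

power = basal_power_watts(1700)
print(f"{power:.0f} W average")  # roughly 80 W of continuous power
```

Fighting villains would demand far more than this continuous baseline, which is why a fuel cell sized only for basal needs would fall short.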

What else would be needed?

The system would also need to transmit and communicate information to Vision’s “brain” so that Vision could experience hunger or even butterflies in his stomach. For this, Vision’s digestive system would also need chemo- and mechanoreceptors and sensors. In humans, common triggers for hunger are blood glucose and insulin levels, as glucose is one of the main sources of metabolic energy. Another stimulus would be signals from mechano- and chemoreceptors, which sense whether food is currently in the stomach or intestine and send signals to the brain calling for replenishment. This is, of course, a strong simplification of highly complex processes. Overall, Vision would need not only a fully artificial digestive system, but also complementary nervous and circulatory systems.

How would humanity benefit from the development of an artificial digestive system?

First, these systems could serve as prostheses and provide replacements for key organs in the digestive system. Second, these systems could be used in the development of food for dysphagia patients. Third, these systems could be used in the clinical treatment of patients with digestion problems.

“These technologies could conceivably have a positive impact on the fields of healthcare and energy. In the meantime, plans for an artificial digestive system that provides sufficient energy to sustain a synthezoid like the MCU’s Vision are still very much a work in progress.”

— they concluded.

Reference: Tauber, F., & Fitzgerald, B. W. (2021). How to build a synthetic digestive system for Marvel’s Vision. Superhero Science and Technology, 2(2), 1–20. https://doi.org/10.24413/sst.2021.2.5636


Note for editors of other websites: To reuse this article fully or partially, kindly credit our author/editor S. Aman or provide a link to our article

This Device Harvests Power From Your Sweaty Fingertips While You Sleep (Science and Technology)

Feeling extra sweaty from a summer heat wave? Don’t worry–not all your perspiration has to go to waste. In a paper publishing July 13 in the journal Joule, researchers have developed a new device that harvests energy from the sweat on–of all places–your fingertips. To date, the device is the most efficient on-body energy harvester ever invented, producing 300 millijoules (mJ) of energy per square centimeter without any mechanical energy input during a 10-hour sleep and an additional 30 mJ of energy with a single press of a finger. The authors say the device represents a significant step forward for self-sustainable wearable electronics.

“Normally, you want maximum return on investment in energy. You don’t want to expend a lot of energy through exercise to get only a little energy back,” says senior author Joseph Wang (@JWangnano), a nanoengineering professor at the University of California San Diego. “But here, we wanted to create a device adapted to daily activity that requires almost no energy investment–you can completely forget about the device and go to sleep or do desk work like typing, yet still continue to generate energy. You can call it ‘power from doing nothing.'”

Previous sweat-based energy devices required intense exercise, such as a great deal of running or biking, before the user sweated enough to activate power generation. But the large amount of energy consumed during exercise can easily cancel out the energy produced, often resulting in energy return on investment of less than 1%.

In contrast, this device falls into what the authors call the “holy grail” category of energy harvesters. Instead of relying on external, irregular sources like sunlight or movement, all it needs is finger contact to collect more than 300 mJ of energy during sleep–which the authors say is enough to power some small wearable electronics. Since no movement is needed, the ratio between harvested energy and invested energy is essentially infinite.

This video shows the process of wrapping the BFC onto the fingertip using a stretchable, water-proof film. © Lu Yin

It may seem odd to choose fingertips as the source of this sweat over, say, the underarms, but in fact fingertips have the highest concentration of sweat glands of anywhere on the body.

“Generating more sweat at the fingers probably evolved to help us better grip things,” says first co-author Lu Yin (@YinLu_CLT), a nanoengineering PhD student working in Wang’s lab. “Sweat rates on the finger can reach as high as a few microliters per square centimeter per minute. This is significant compared to other locations on the body, where sweat rates are maybe two or three orders of magnitude smaller.”

The device the researchers developed in this study is a type of energy harvester called a biofuel cell (BFC) and is powered by lactate, a dissolved compound in sweat. From the outside, it looks like a simple piece of foam connected to a circuit with electrodes, all of which is attached to the pad of a finger. The foam is made out of carbon nanotube material, and the device also contains a hydrogel that helps maximize sweat absorption.

“The size of the device is about 1 centimeter squared. Its material is flexible as well, so you don’t need to worry about it being too rigid or feeling weird. You can comfortably wear it for an extended period of time,” says Yin.

Within the device, a series of electrochemical reactions occurs. The cells are equipped with a bioenzyme on the anode that oxidizes, or removes electrons from, the lactate; the cathode is coated with a small amount of platinum to catalyze a reduction reaction that consumes those electrons, turning oxygen into water. Once this happens, electrons flow from the lactate through the circuit, creating an electric current. This process occurs spontaneously: as long as there is lactate, no additional energy is needed to kickstart it.

Separate from but complementary to the BFC, piezoelectric generators–which convert mechanical energy into electricity–are also attached to the device to harvest up to 20% additional energy. Relying on the natural pinching motion of fingers or everyday motions like typing, these generators helped produce additional energy from barely any work: a single press of a finger once per hour required only 0.5 mJ of energy but produced over 30 mJ of energy, a 6,000% return on investment.
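The article's headline numbers are easy to sanity-check. The short calculation below uses only figures quoted above: 300 mJ per square centimetre over a 10-hour sleep, and a 0.5 mJ press yielding over 30 mJ.

```python
# Sanity-check the reported figures: passive energy harvested over a
# 10-hour sleep, and the return on investment of a single finger press.
# All numbers are taken from the article itself.

sleep_energy_mj = 300        # mJ per cm^2 over a 10-hour sleep
sleep_seconds = 10 * 3600

# Average passive power per square centimetre during sleep:
avg_power_uw = sleep_energy_mj * 1e-3 / sleep_seconds * 1e6
print(f"{avg_power_uw:.1f} uW/cm^2")   # ~8.3 microwatts per cm^2

# A single press costs 0.5 mJ and yields 30 mJ:
press_in_mj, press_out_mj = 0.5, 30
roi_percent = press_out_mj / press_in_mj * 100
print(f"{roi_percent:.0f}% return")    # 6,000%, matching the article
```

A few microwatts per square centimetre is modest, but because it costs the wearer essentially nothing, the energy return on investment is effectively unbounded.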

The researchers were able to use the device to power effective vitamin C- and sodium-sensing systems, and they are optimistic about improving the device to have even greater abilities in the future, which might make it suitable for health and wellness applications such as glucose meters for people with diabetes. “We want to make this device more tightly integrated in wearable forms, like gloves. We’re also exploring the possibility of enabling wireless connection to mobile devices for extended continuous sensing,” Yin says.

“There’s a lot of exciting potential,” says Wang. “We have ten fingers to play with.”

This work was supported by the UCSD Center for Wearable Sensors.

Featured image: This image shows a small hydrogel (right) collecting sweat from the fingertip for the vitamin-C sensor (left), then displaying the result on the electrochromic display. © Lu Yin


Reference: Yin et al.: “A passive perspiration biofuel cell: High energy return on investment”, Joule, 2021. https://www.cell.com/joule/fulltext/S2542-4351(21)00292-0


Provided by Cell Press

USC Researchers Enable AI To Use Its “Imagination” (Science and Technology)

A team of researchers at USC is helping AI imagine the unseen, a technique that could also lead to fairer AI, new medicines and increased autonomous vehicle safety.

Imagine an orange cat. Now, imagine the same cat, but with coal-black fur. Now, imagine the cat strutting along the Great Wall of China. As you do this, a quick series of neuron activations in your brain comes up with variations of the picture presented, based on your previous knowledge of the world.

In other words, as humans, it’s easy to envision an object with different attributes. But, despite advances in deep neural networks that match or surpass human performance in certain tasks, computers still struggle with the very human skill of “imagination.”

Now, a USC research team comprising computer science Professor Laurent Itti, and PhD students Yunhao Ge, Sami Abu-El-Haija and Gan Xin, has developed an AI that uses human-like capabilities to imagine a never-before-seen object with different attributes. The paper, titled Zero-Shot Synthesis with Group-Supervised Learning, was published in the 2021 International Conference on Learning Representations on May 7.

“We were inspired by human visual generalization capabilities to try to simulate human imagination in machines,” said Ge, the study’s lead author.

“Humans can separate their learned knowledge by attributes—for instance, shape, pose, position, color—and then recombine them to imagine a new object. Our paper attempts to simulate this process using neural networks.”

“This new disentanglement approach, for the first time, truly unleashes a new sense of imagination in A.I. systems, bringing them closer to humans’ understanding of the world.” Laurent Itti.

AI’s generalization problem

For instance, say you want to create an AI system that generates images of cars. Ideally, you would provide the algorithm with a few images of a car, and it would be able to generate many types of cars—from Porsches to Pontiacs to pick-up trucks—in any color, from multiple angles.

This is one of the long-sought goals of AI: creating models that can extrapolate. This means that, given a few examples, the model should be able to extract the underlying rules and apply them to a vast range of novel examples it hasn’t seen before. But machines are most commonly trained on sample features, pixels for instance, without taking into account the object’s attributes.

The science of imagination

In this new study, the researchers attempt to overcome this limitation using a concept called disentanglement. Disentanglement can be used to generate deepfakes, for instance, by disentangling human face movements and identity. By doing this, said Ge, “people can synthesize new images and videos that substitute the original person’s identity with another person, but keep the original movement.”

Similarly, the new approach takes a group of sample images—rather than one sample at a time as traditional algorithms have done—and mines the similarity between them to achieve something called “controllable disentangled representation learning.”

Then, it recombines this knowledge to achieve “controllable novel image synthesis,” or what you might call imagination. “For instance, take the Transformers movies as an example,” said Ge. “The system can take the shape of a Megatron car, the color and pose of a yellow Bumblebee car, and the background of New York’s Times Square. The result will be a Bumblebee-colored Megatron car driving in Times Square, even if this combination was never witnessed during training.”

This is similar to how we as humans extrapolate: when a human sees a color from one object, we can easily apply it to any other object by substituting the original color with the new one. Using their technique, the group generated a new dataset containing 1.56 million images that could help future research in the field.
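The mechanism behind this kind of recombination can be illustrated with a toy sketch. This is not the paper's actual GZS-Net architecture; it is a minimal stand-in in which each object is encoded as a latent vector split into fixed segments (one per attribute), and "imagining" means swapping a segment between two encodings.

```python
import numpy as np

# Toy illustration of disentangled attribute swapping. Each object's
# latent vector is divided into named segments, one per attribute;
# swapping a segment between two encodings yields a novel combination.

SEGMENTS = {"shape": slice(0, 4), "color": slice(4, 8), "background": slice(8, 12)}

def swap_attribute(z_a, z_b, attribute):
    """Return a copy of z_a with one attribute segment taken from z_b."""
    z_new = z_a.copy()
    z_new[SEGMENTS[attribute]] = z_b[SEGMENTS[attribute]]
    return z_new

rng = np.random.default_rng(0)
z_megatron = rng.normal(size=12)   # hypothetical encoding of object A
z_bumblebee = rng.normal(size=12)  # hypothetical encoding of object B

# "Imagine" object A with object B's color:
z_imagined = swap_attribute(z_megatron, z_bumblebee, "color")
```

In the real system, a decoder network would then render this recombined latent vector into a never-before-seen image.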

IN A NEW APPROACH TO TEACHING AI’S TO “IMAGINE THE UNSEEN,” TRAINING IMAGES (BOTTOM) ARE COMBINED TO SYNTHESIZE THE REQUESTED IMAGE (TOP). IMAGE/GE ET AL.
Understanding the world

While disentanglement is not a new idea, the researchers say their framework is compatible with nearly any type of data or knowledge, which widens the range of possible applications. For instance, disentangling race- and gender-related knowledge could make AI fairer by removing sensitive attributes from the equation altogether.

In the field of medicine, it could help doctors and biologists discover more useful drugs by disentangling the medicine function from other properties, and then recombining them to synthesize new medicine. Imbuing machines with imagination could also help create safer AI by, for instance, allowing autonomous vehicles to imagine and avoid dangerous scenarios previously unseen during training.

“Deep learning has already demonstrated unsurpassed performance and promise in many domains, but all too often this has happened through shallow mimicry, and without a deeper understanding of the separate attributes that make each object unique,” said Itti. “This new disentanglement approach, for the first time, truly unleashes a new sense of imagination in A.I. systems, bringing them closer to humans’ understanding of the world.”

Featured image: THE NEW AI SYSTEM TAKES ITS INSPIRATION FROM HUMANS: WHEN A HUMAN SEES A COLOR FROM ONE OBJECT, WE CAN EASILY APPLY IT TO ANY OTHER OBJECT BY SUBSTITUTING THE ORIGINAL COLOR WITH THE NEW ONE. ILLUSTRATION/CHRIS KIM.


Provided by USC Viterbi

Researchers Propose Advanced Network for Object Detection (Science and Technology)

Object detection is one of the most important computer vision tasks, and researchers have proposed numerous object detection methods based on Convolutional Neural Networks (CNNs). Still, the performance of these object detectors is hindered by the diversity of object sizes and categories.

To obtain better feature representations, the use of multiscale features has been proposed. The extraction and utilization of multiscale features, known as the Feature Pyramid Network (FPN), has great influence on the performance of the final detector. However, because FPN's feature fusion is unidirectional, it is insufficient to represent objects of similar size but different appearance.

A research team led by Prof. Dr. LU Xiaoqiang from the Xi'an Institute of Optics and Precision Mechanics (XIOPM) of the Chinese Academy of Sciences (CAS) proposed a new multiscale feature fusion method with bidirectional feature fusion, called Adaptive Multiscale Feature (AMF), to address the one-directional fusion of FPN. The results were published in Neurocomputing.

The main problem for the backbone network is how to integrate deep and shallow features reasonably, because using only the last layer of features makes it difficult to deal with objects of multiple sizes. Therefore, the unidirectional feature fusion of FPN should be avoided, and AMF is employed in the detector.

According to the researchers, the AMF module has two parts for feature fusion and feature redistribution: Feature Scattering (FS) and Feature Redistribution (FR). Based on Convolutional Long Short-Term Memory networks, the fusion is carried out in two directions: the shallow features are enhanced by the deep features, and the deep features are in turn enhanced by the shallow features. The two feature streams are then further fused, and for each level, channel-wise attention is used to assign features to the corresponding layer.
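The fusion scheme can be sketched in a simplified form. Note this is not the published AMF implementation: the actual module uses convolutional LSTMs for the learned fusion, which plain averaging stands in for here, and the channel-attention weighting is likewise a minimal stand-in.

```python
import numpy as np

# Simplified sketch of bidirectional feature fusion with channel-wise
# attention. Feature maps have shape (channels, height, width); the
# deep-level map is assumed already upsampled to the shallow resolution.

def channel_attention(feat):
    """Weight channels by a softmax over their global-average responses."""
    pooled = feat.mean(axis=(1, 2))                  # (C,) per-channel average
    weights = np.exp(pooled) / np.exp(pooled).sum()  # softmax over channels
    return feat * weights[:, None, None]

def bidirectional_fuse(shallow, deep):
    """Enhance each level with the other, then redistribute by attention."""
    shallow_enhanced = (shallow + deep) / 2   # deep -> shallow direction
    deep_enhanced = (deep + shallow) / 2      # shallow -> deep direction
    return channel_attention(shallow_enhanced), channel_attention(deep_enhanced)

rng = np.random.default_rng(1)
shallow = rng.normal(size=(8, 16, 16))  # high-resolution, shallow level
deep = rng.normal(size=(8, 16, 16))     # upsampled deep level, same shape
fused_shallow, fused_deep = bidirectional_fuse(shallow, deep)
```

The key point the sketch captures is that information flows in both directions before the attention step assigns features back to each pyramid level, unlike FPN's single top-down pass.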

To demonstrate the effectiveness of the proposed AMF for both anchor-free and anchor-based detectors, they used Fully Convolutional One-Stage Object Detection (FCOS) and RetinaNet as baselines, representing anchor-free and anchor-based detectors, respectively.

Experimental results on the COCO 2014 dataset show that the proposed AMF module outperforms the popular FPN-based detectors. Whether for anchor-free or anchor-based detectors, performance can be improved through AMF.

The proposed AMF method exceeds current state-of-the-art object detectors in accuracy.

Featured image: Overview of the proposed AMF. (Image by XIOPM)


Reference: Xiaoyong Yu, Siyuan Wu, Xiaoqiang Lu, Guilong Gao, Adaptive multiscale feature for object detection, Neurocomputing, Volume 449, 2021, Pages 146-158, ISSN 0925-2312, https://doi.org/10.1016/j.neucom.2021.04.002. (https://www.sciencedirect.com/science/article/pii/S0925231221005208)


Provided by Chinese Academy of Sciences

High-resolution Microscope Built From LEGO and bits of Phone (Science and Technology)

Research led by Göttingen University shows that constructing a microscope improves children’s understanding

Microscopy is an essential tool in many fields of science and medicine. However, many groups have limited access to this technology due to its cost and fragility. Now, researchers from the Universities of Göttingen and Münster have succeeded in building a high-resolution microscope using nothing more than children’s plastic building bricks and affordable parts from a mobile phone. They then went on to show that children aged 9-13 had significantly increased understanding of microscopy after constructing and working with the LEGO® microscope. Their results were published in The Biophysicist.

The researchers designed a fully functional, high-resolution microscope with capabilities close to a modern research microscope. Apart from the optics, all parts were from the toy brick system. The team realized that the lenses in modern smartphone cameras, which cost around €4 each, are of such high quality that they can make it possible to resolve even individual cells. The scientists produced instructions for building the microscope as well as a step-by-step tutorial to guide people through the construction process whilst learning about the relevant optical characteristics of a microscope. The researchers measured children’s understanding through questionnaires given to a group of 9-13 year olds. The researchers found that children given the parts and plans to construct the microscope themselves significantly increased their knowledge of microscopy. For this particular study, the researchers, whose day-to-day research focusses on fundamental biophysical processes, benefitted from the input and enthusiasm of their 10-year-old co-author.

Researchers produced instructions for building the microscope in several languages: https://github.com/tobetz/LegoMicroscope © Photo: Timo Betz

“An understanding of science is crucial for decision-making and brings many benefits in everyday life, such as problem-solving and creativity,” says Professor Timo Betz, University of Göttingen. “Yet we find that many people, even politicians, feel excluded or do not have the opportunities to engage in scientific or critical thinking. We wanted to find a way to nurture natural curiosity, help people grasp fundamental principles and see the potential of science.” The researchers stayed in contact with the children and monitored their progress: after they had constructed the main parts, they discovered that the lenses can act as magnifying glasses. After exploring this, and realizing that a good light source was important, they initially found it tricky to align two magnifying glasses. However, once they had achieved this, the lenses generated tremendous magnification. This enabled the children to literally “play” with the microscope: make their own adaptations; explore how the magnification works; and discover the exciting world of the micro-cosmos for themselves.
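The "tremendous magnification" from two aligned lenses can be estimated with the standard thin-lens formula for a compound microscope. The focal lengths and tube length below are illustrative assumptions (smartphone camera lenses typically have focal lengths of a few millimetres), not measurements from the study.

```python
# Back-of-the-envelope magnification of a two-lens compound microscope:
# total magnification = (tube length / objective focal length)
#                     * (near point / eyepiece focal length).
# All lengths below are assumed, illustrative values.

NEAR_POINT_MM = 250  # standard reference viewing distance for the eye

def compound_magnification(f_objective_mm, f_eyepiece_mm, tube_length_mm):
    m_objective = tube_length_mm / f_objective_mm
    m_eyepiece = NEAR_POINT_MM / f_eyepiece_mm
    return m_objective * m_eyepiece

# Two ~4 mm smartphone lenses separated by a 40 mm tube:
m = compound_magnification(4, 4, 40)
print(f"~{m:.0f}x")  # several hundred times with these assumed values
```

Magnification of this order is what makes individual cells resolvable, consistent with the researchers' observation about modern smartphone lenses.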

Researchers built a high-resolution microscope using just children’s plastic building bricks and bits from a mobile phone. They showed that children aged 9-13 increased their understanding of microscopy by building it. Photo: Timo Betz

“We hope that this modular microscope will be used in classrooms and homes all over the world to excite and inspire children about science,” continues Betz. “We have shown that scientific research does not need to be separate from everyday life. It can be enlightening, educational and fun!”

All plans and instructions are available in English, German, Dutch and Spanish. They are free to access here: https://github.com/tobetz/LegoMicroscope

Featured image: Professor Timo Betz © Photo: Peter Leßmann


Original publication: Timo Betz et al., “Designing a high-resolution, LEGO-based microscope for an educational setting”, The Biophysicist, 2021. doi: 10.35459/tbp.2021.000191


Provided by University of Göttingen

Miniature Robots Controlled by Magnetic Fields (Science and Technology)

A team of scientists at NTU Singapore has developed millimetre-sized robots that can be controlled using magnetic fields to perform highly manoeuvrable and dexterous manipulations. This could pave the way to possible future applications in biomedicine and manufacturing.    

The research team created the miniature robots by embedding magnetic microparticles into biocompatible polymers — non-toxic materials that are harmless to humans. The robots are ‘programmed’ to execute their desired functionalities when magnetic fields are applied.

The made-in-NTU robots improve on many existing small-scale robots by optimizing their ability to move in six degrees-of-freedom (DoF) – that is, translational movement along the three spatial axes, and rotational movement about those three axes, commonly known as roll, pitch and yaw angles.

While researchers have previously created six DoF miniature robots, the new NTU miniature robots can rotate 43 times faster than them in the critical sixth DoF when their orientation is precisely controlled. They can also be made with ‘soft’ materials and thus can replicate important mechanical qualities — one type can ‘swim’ like a jellyfish, and another has a gripping ability that can precisely pick and place miniature objects.

The research by the NTU team was published in the peer-reviewed scientific journal Advanced Materials in May 2021 and is featured as the front cover of the June 10 issue.

Lead author of the study, Assistant Professor Lum Guo Zhan from the School of Mechanical and Aerospace Engineering, said the crucial factor that led to the team’s achievement lies in the discovery of the ‘elusive’ third and final principal vector of these magnetic fields, which is critical for controlling such machines.

By contrast, previous works had only defined the applied magnetic fields in terms of two principal vectors.

“My team sought to uncover the fundamental working principles of miniature robots that have six-DoF motions through this work. By fully understanding the physics of these miniature robots, we are now able to accurately control their motions. Furthermore, our proposed fabrication method can magnetise these robots to produce 51- to 297-fold larger six-DoF torques than other existing devices. Our findings are therefore pivotal, and they represent a significant advancement for small-scale robotic technologies,” explains Asst Prof Lum.

Remote-controlled miniature robots suitable for surgical, manufacturing use

Measuring about the size of a grain of rice, the miniature robots may be used to reach confined and enclosed spaces currently inaccessible to existing robots, say the NTU team, making them particularly useful in the field of medicine.

The movements of the robots can be controlled remotely by an operator, using a programme running on a control computer that precisely varies the strength and direction of magnetic fields generated by an electromagnetic coil system.
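The physics behind this control scheme is that a magnetised body in an external field experiences a torque, so steering the field steers the robot. The sketch below illustrates that basic relationship; the numerical values are illustrative, not from the paper.

```python
import numpy as np

# A magnetic moment m (A*m^2) in a field B (T) experiences a torque
# tau = m x B (N*m), which rotates the robot toward alignment with the
# field. Varying the field direction therefore steers the robot.

def magnetic_torque(m, B):
    """Torque on a magnetic moment m in an external field B."""
    return np.cross(m, B)

m = np.array([1e-6, 0.0, 0.0])   # assumed moment along x
B = np.array([0.0, 10e-3, 0.0])  # assumed 10 mT field along y

tau = magnetic_torque(m, B)      # torque along z: in-plane rotation
print(tau)
```

Embedding the magnetic microparticles in specific patterns sets the robot's moment distribution, which is what lets the same applied field produce different programmed motions in different robots.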

The miniature robots may also inspire novel surgical procedures for ‘difficult-to-reach’ vital organs such as the brain in future, the NTU team said, adding that much more work and testing still need to be accomplished before the miniature robots can eventually be deployed for their targeted medical applications.

Co-authors of the research, PhD students Xu Changyu and Yang Zilin from the School of Mechanical and Aerospace Engineering said, “Besides surgery, our robots may also be of value in biomedical applications such as assembling lab-on-chip devices that can be used for clinical diagnostics by integrating several laboratory processes on a single chip.”

NTU miniature robots swim through barriers, assemble structures

In lab experiments, the research team demonstrated the dexterity and speed of the miniature robots.

Using a jellyfish-inspired robot, the NTU team showed how it was able to swim speedily through a tight opening in a barrier while suspended in water. This demonstration was highly significant, as it suggested that these robots can negotiate barriers in dynamic and uncertain environments, a highly desirable ability for their targeted future biomedical applications, such as surgical procedures for ‘difficult-to-reach’ vital organs like the brain.

Demonstrating precise orientation control, the miniature robot also recorded a rotation speed of 173 degrees per second for their sixth DoF motion, exceeding the fastest rotation that existing miniature robots have achieved, which is four degrees per second for their sixth DoF motion.
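These two figures are consistent with the "43 times faster" claim made earlier in the article, as a quick check shows:

```python
# Consistency check of the figures quoted in the article: the NTU
# robot's 173 deg/s sixth-DoF rotation versus the previous best of
# 4 deg/s reported for existing miniature robots.

ntu_rotation = 173      # degrees per second (this study)
previous_best = 4       # degrees per second (prior miniature robots)

speedup = ntu_rotation / previous_best
print(f"{speedup:.0f}x faster")  # ~43x, matching the earlier claim
```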

With their gripper robot, the scientists were able to assemble a 3D structure consisting of a bar sitting atop two Y-shaped stilts in less than five minutes, about 20 times faster than existing miniature robots have managed. This proof-of-concept demonstration, say the researchers, suggests that one day these robots may be used in ‘micro factories’ that build microscale devices.

Providing an independent view, Professor Huajian Gao, a Distinguished University Professor from the School of MAE at NTU, and a recipient of the prestigious 2021 American Society of Mechanical Engineers Timoshenko Medal, said, “This is a perfect example of how engineering ingenuity rooted in deep scientific understanding helps us develop advanced robotics for the benefit of mankind. This research work can have profound impact in many fields ranging from novel surgical methods to small scale assembly processes in future manufacturing.”

The NTU team is now looking to make their robots even smaller, on the scale of a few hundred micrometres, and to ultimately make the robots fully autonomous in terms of control.

Featured image: The NTU millimeter-sized robots measure about the size of a grain of rice and can be controlled using magnetic fields. © NTU Singapore


Reference: Xu, C., Yang, Z., Lum, G. Z., Small-Scale Magnetic Actuators with Optimal Six Degrees-of-Freedom. Adv. Mater. 2021, 33, 2100170. https://doi.org/10.1002/adma.202100170


Provided by Nanyang Technological University

Researchers Develop New Technology That Allows People To See Clearly In The Dark (Science and Technology)

Researchers from The Australian National University (ANU) have developed new technology that allows people to see clearly in the dark, revolutionising night-vision.

The first-of-its-kind thin film, described in a new article published in Advanced Photonics, is ultra-compact and one day could work on standard glasses.

The researchers say the new prototype tech, based on nanoscale crystals, could be used for defence, as well as making it safer to drive at night and to walk home after dark.

The team also say the work of police and security guards – who regularly employ night vision – will be easier and safer, reducing the chronic neck injuries caused by current bulky night-vision devices.

“We have made the invisible visible,” lead researcher Dr Rocio Camacho Morales said.

“Our technology is able to transform infrared light, normally invisible to the human eye, and turn this into images people can clearly see – even at distance.

“We’ve made a very thin film, consisting of nanometre-scale crystals, hundreds of times thinner than a human hair, that can be directly applied to glasses and acts as a filter,  allowing you to see in the darkness of the night.”  

The technology is extremely lightweight, cheap and easy to mass-produce, making it accessible to everyday users.

Currently, high-end infrared imaging tech requires cryogenic cooling to work and is costly to produce. This new tech works at room temperature.

Dragomir Neshev, Director of the ARC Centre for Excellence in Transformative Meta-Optical Systems (TMOS) and ANU Professor in Physics, said the new tech used meta-surfaces, or thin films, to manipulate light in new ways.

“This is the first time anywhere in the world that infrared light has been successfully transformed into visible images in an ultra-thin screen,” Professor Neshev said.

“It’s a really exciting development and one that we know will change the landscape for night vision forever.”
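The conversion relies on nonlinear frequency upconversion: in sum-frequency generation, the energies of an infrared photon and a pump photon add, so the output wavelength follows 1/λ_out = 1/λ_ir + 1/λ_pump. The specific pump wavelength below is an illustrative assumption, not a figure from the paper.

```python
# Sum-frequency generation: photon energies add, so the inverse
# wavelengths add. The pump wavelength here is an assumed value
# chosen only to illustrate the arithmetic.

def upconverted_wavelength_nm(lambda_ir_nm, lambda_pump_nm):
    """Output wavelength when an IR photon mixes with a pump photon."""
    return 1 / (1 / lambda_ir_nm + 1 / lambda_pump_nm)

# A 1550 nm (telecom-band, invisible) infrared photon mixed with an
# assumed 860 nm pump lands in the green part of the visible spectrum:
lam = upconverted_wavelength_nm(1550, 860)
print(f"{lam:.0f} nm")  # ~553 nm, well inside the visible range
```

Because the metasurface performs this mixing in a film hundreds of times thinner than a human hair, no bulky optics or cryogenic cooling are needed.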

The new tech has been developed by an international team of researchers from TMOS, ANU, Nottingham Trent University, UNSW and European partners.

Mohsen Rahmani, the Leader of the Advanced Optics and Photonics Lab in Nottingham Trent University’s School of Science and Technology, led the development of the nanoscale crystal films.

“We previously demonstrated the potential of individual nanoscale crystals, but to exploit them in our everyday life we had to overcome enormous challenges to arrange the crystals in an array fashion,” he said.  

“While this is the first proof-of-concept experiment, we are actively working to further advance the technology.”

Read about the breakthrough in Advanced Photonics.

Featured image: Dr Rocio Camacho Morales says the researchers have made the “invisible, visible”. © Jamie Kidston, The Australian National University


Reference: Rocio Camacho-Morales et al., “Infrared upconversion imaging in nonlinear metasurfaces”, Advanced Photonics, 3(3), 036002 (2021). https://doi.org/10.1117/1.AP.3.3.036002


Provided by Australian National University

Novel Fast-beam-switching Transceiver Takes 5G to the Next Level (Science and Technology)

Scientists at Tokyo Institute of Technology (Tokyo Tech) and NEC Corporation have jointly developed a 28-GHz phased-array transceiver that supports efficient and reliable 5G communications. The proposed transceiver outperforms previous designs in several regards by adopting fast beam switching and a leakage-cancellation mechanism.

With the recent emergence of innovative technologies, such as the Internet of Things, smart cities, autonomous vehicles, and smart mobility, our world is on the brink of a new age. This stimulates the use of millimeter-wave bands, which offer far more signal bandwidth, to accommodate these new ideas. 5G can offer data rates over 10 Gbit/s through the use of these millimeter waves and multiple-input multiple-output (MIMO) technology – a technology that employs multiple transmitters and receivers to transfer more data at the same time.

Large-scale phased-array transceivers are crucial for the implementation of these MIMO systems. While MIMO systems boost spectral performance, large-scale phased-array systems face several challenges, such as increased power dissipation and implementation costs. One such critical challenge is latency caused by beam switching time. Beam switching is an important feature that enables the selection of the most optimal beam for each terminal. A design that optimizes beam switching time and device cost is, thus, the need of the hour.

The proposed phased-array transceiver is fabricated using a 65-nm CMOS process and packaged in a wafer-level chip-scale package. It fits in an area as small as 5 × 4.5 mm². © 2021 Symposia on VLSI Technology and Circuits

Motivated by this, scientists from Tokyo Institute of Technology and NEC Corporation in Japan collaborated to develop a 28-GHz phased-array transceiver that supports fast beam switching and high-speed data communication. Their findings will be discussed at the 2021 Symposia on VLSI Technology and Circuits, an international conference that explores emerging trends and innovative concepts in semiconductor technology and circuits.

The proposed design facilitates dual-polarized operation, in which data is transmitted simultaneously through horizontally and vertically polarized waves. However, one issue with these systems is cross-polarization leakage, which results in signal degradation, especially in the millimeter-wave band. The research team delved into the issue and developed a solution. Prof. Kenichi Okada, who led the research team, says, “Fortunately, we were able to devise a cross-polarization detection and cancellation methodology, using which we could suppress the leakages in both transmit and receive mode.”

One critical feature of the proposed mechanism is its ability to achieve low-latency beam switching and high-accuracy beam control. Static elements control the building blocks of the mechanism, while on-chip SRAM stores the settings for different beams (Figure 1). This design achieves fast beam switching with ultra-low latency. It also enables fast switching between transmit and receive modes, thanks to separate registers for each mode.
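The lookup-based control described above can be sketched in code. The following is a minimal illustration under assumed parameters (a 64-element linear array with half-wavelength spacing, beams precomputed over ±60°) rather than the actual chip design: the point is that switching a beam becomes a single table read of precomputed phase settings instead of a fresh computation, which is what makes nanosecond-scale switching feasible.

```python
import numpy as np

N_ELEMENTS = 64          # hypothetical array size (illustrative assumption)
N_BEAMS = 256            # number of stored beam settings, as in the article

def steering_phases(angle_deg, n=N_ELEMENTS, spacing=0.5):
    """Per-element phase (radians) to steer a uniform linear array
    with element spacing given in wavelengths."""
    k = 2 * np.pi * spacing
    return (k * np.arange(n) * np.sin(np.radians(angle_deg))) % (2 * np.pi)

# Precompute the table once -- analogous to filling the on-chip SRAM.
angles = np.linspace(-60, 60, N_BEAMS)
beam_table = {i: steering_phases(a) for i, a in enumerate(angles)}

def switch_beam(beam_id):
    """'Switching' is just a lookup: no per-element recomputation."""
    return beam_table[beam_id]

phases = switch_beam(128)   # phase codes for beam setting 128
```

In the real chip the table entries would be quantized phase-shifter codes rather than floating-point radians, but the control-flow idea is the same.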

Another aspect of the proposed transceiver is its low cost and small size. The transceiver has a bi-directional architecture, which allows for a smaller chip size of 5 × 4.5 mm² (Figure 2). With a total of 256 beam settings stored in the on-chip SRAM, a beam switching time of only 4 nanoseconds was achieved! Error vector magnitude (EVM) – a measure that quantifies the quality of digitally modulated signals such as quadrature amplitude modulation (QAM) – was calculated for the proposed transceiver, which achieved EVMs of 5.5% in 64QAM and 3.5% in 256QAM.
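EVM has a simple definition: the RMS magnitude of the error between received and ideal constellation symbols, normalized by the RMS magnitude of the ideal symbols. A short sketch of that calculation on synthetic 64QAM symbols follows; the constellation grid and noise level are illustrative assumptions, not measurements from the paper.

```python
import numpy as np

def evm_percent(received, ideal):
    """Return error vector magnitude as a percentage:
    RMS(received - ideal) / RMS(ideal) * 100."""
    error = received - ideal
    return 100 * np.sqrt(np.mean(np.abs(error) ** 2) /
                         np.mean(np.abs(ideal) ** 2))

# Ideal 64QAM constellation: an 8 x 8 grid of complex symbols
levels = np.arange(-7, 8, 2)
constellation = np.array([complex(i, q) for i in levels for q in levels])

rng = np.random.default_rng(0)
ideal = rng.choice(constellation, size=10_000)
# Additive Gaussian noise stands in for the real RF impairments
received = ideal + (rng.normal(0, 0.3, ideal.shape) +
                    1j * rng.normal(0, 0.3, ideal.shape))

print(f"EVM: {evm_percent(received, ideal):.2f}%")
```

Lower EVM means a cleaner constellation; denser modulations such as 256QAM demand lower EVM, which is why the reported 3.5% figure is the more stringent result.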

When compared with state-of-the-art 5G phased-array transceivers, the system has a faster beam switching time and excellent MIMO efficiency. Okada is optimistic about the future of the 28-GHz 5G phased-array transceiver. He concludes, “The technology we developed for the 5G NR network supports high-volume data streaming with low latency. Thanks to its rapid beam switching capabilities, it can be used in scenarios where enhanced multi-user perception is required. This device sets the stage for a myriad of applications, including machine connectivity and the construction of smart cities and factories.”

This research is supported by the Ministry of Internal Affairs and Communications in Japan (JPJ000254).

Featured image: Large volume SRAM and lookup table are used for supporting 256 beam settings. The mechanism supports fast switching in transmit (TX) and receive (RX) mode with direct external TX/RX enable pins. © 2021 Symposia on VLSI Technology and Circuits


Reference

  • Authors: Jian Pang, Zheng Li, Xueting Luo, Joshua Alvin, Kiyoshi Yanagisawa, Yi Zhang, Zixin Chen, Zhongliang Huang, Xiaofan Gu, Weichu Chen, Yun Wang, Dongwon You, Zheng Sun, Yuncheng Zhang, Hongye Huang, Naoki Oshima, Keiichi Motoi, Shinichi Hori, Kazuaki Kunihiro, Tomoya Kaneko, Atsushi Shirane, and Kenichi Okada
  • Session: Session 11 Advanced Wireless for 5G, C11-2 (June 17, 8:50 JST)
  • Session Title: A Fast-Beam-Switching 28-GHz Phased-Array Transceiver Supporting Cross-Polarization Leakage Self-Cancellation
  • Conference: 2021 Symposia on VLSI Technology and Circuits
  • Affiliations: Tokyo Institute of Technology, NEC Corporation

Provided by Tokyo Institute of Technology

New Nondestructive Broadband Imager Is the Next Step Towards Advanced Technology (Science and Technology)

Scientists at Tokyo Institute of Technology (Tokyo Tech) have designed a versatile sensing platform with a compact source-camera module that enables 3D feature extraction of curved objects at multiple frequencies ranging from terahertz to infrared light. In their paper, they demonstrate rapid, omnidirectional photo-monitoring after integrating their platform into a robot-assisted movable arm, offering a way to realize an Internet of Things sensor network.

One of the key aspects of academic and industrial research today is non-destructive imaging, a technique in which an object or sample is imaged (using light) without causing any damage to it. Often, such imaging techniques are crucial to ensuring safety and quality of industrial products, subsequently leading to growing demands for high-performance imaging of objects with arbitrary structures and locations.

On the one hand, there have been tremendous advancements in the range of the electromagnetic (EM) spectrum that non-destructive imaging can access, which now extends from visible light to as far as millimeter waves! On the other hand, imaging devices have become flexible and wearable, enabling stereoscopic (3D) visualization of both flat and curved samples without blind spots.

Despite such progress, however, issues such as the portability of sensing modules, cooling-free device operation (free of bulky cooling equipment), and unmanned or robot-assisted photo-monitoring remain to be addressed. “The transition from manned to robotic inspection can make operations such as disconnection testing of power-transmission lines and exploring cramped environments safer and more sustainable,” explains Prof. Yukio Kawano of Tokyo Tech and Chuo University, who researches terahertz (THz) waves (EM waves with frequencies in the terahertz range) and THz imaging extensively.

Graphical abstract by Yukio Kawano et al.

While multiple studies in the past have explored systems equipped with one of the aforementioned modules, their functional integration has not yet been attempted, limiting progress. Against this backdrop, in a recent study published in Nature Communications, Prof. Kawano and his colleagues from Tokyo Tech, Japan, developed a robot-assisted, broadband (using a wide range of frequencies) photo-monitoring platform equipped with a light source and imager that can operate in a location-independent manner and switch between reflective and transmissive sensing.

In their proposed module, the scientists used physically and chemically enriched carbon nanotube (CNT) thin films as uncooled imager sheets that exploit the photothermoelectric effect to convert light into electrical signals via thermoelectric conversion. Owing to their excellent absorption over a wide range of wavelengths, the CNTs showed broadband sensitivity. Moreover, the imager sheet allowed stereoscopic sensing in both reflective and transmissive modes, enabling the inspection of curved objects such as beverage bottles, water pipes, and gas pipes. By detecting local changes in the signals, the scientists were able to identify minuscule defects in these structures that would otherwise be invisible. Further, by employing multi-frequency photo-monitoring spanning the THz and infrared (IR) bands, they were able to extract outer-surface features using IR light and inner-surface features using THz light.
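The defect-detection step – spotting local deviations in the measured signal – can be illustrated with a toy example. This is a hypothetical sketch, not the authors' analysis pipeline: it flags scan positions whose deviation from a moving-average baseline exceeds a robust threshold, which is the basic idea behind detecting "local changes in the signals".

```python
import numpy as np

def find_defects(signal, window=5, threshold=5.0):
    """Flag indices where the signal deviates from a moving-average
    baseline by more than `threshold` robust standard deviations."""
    half = window // 2
    padded = np.pad(signal, half, mode="edge")   # avoid edge artefacts
    kernel = np.ones(window) / window
    baseline = np.convolve(padded, kernel, mode="valid")
    residual = signal - baseline
    # Robust scale estimate via the median absolute deviation (MAD)
    mad = np.median(np.abs(residual - np.median(residual)))
    scale = 1.4826 * mad + 1e-12
    return np.where(np.abs(residual) > threshold * scale)[0]

# Synthetic transmission scan of a pipe wall with one small dip (defect)
rng = np.random.default_rng(1)
scan = 1.0 + 0.005 * rng.normal(size=200)
scan[120] -= 0.4                       # simulated defect at index 120

print(find_defects(scan))              # flags indices around 120
```

A real pipeline would work on 2D (or multi-frequency) scan data and calibrated thresholds, but the principle of comparing each reading against its local baseline is the same.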

Finally, they achieved 360°-view photo-monitoring using a light-source-integrated compact sensing module and mounted it on a multi-axis, robot-assisted movable arm that performed high-speed photo-monitoring of a defective miniature model of a winding road bridge.

The results have spurred scientists to consider the future prospects of their device. “Our efforts can potentially provide a roadmap for the realization of a ubiquitous sensing platform. Additionally, the concept of this study could be used for a sustainable, long-term operable and user-friendly Internet of Things system of a sensor network,” observes an excited Prof. Kawano.

This study, indeed, takes sensing technology to the next level!


Reference

  • Authors: Kou Li, Ryoichi Yuasa, Ryogo Utaki, Meiling Sun, Yu Tokumoto, Daichi Suzuki, and Yukio Kawano
  • Title of original paper: Robot-assisted, source-camera-coupled multi-view broadband imagers for ubiquitous sensing platform
  • Journal: Nature Communications
  • DOI: 10.1038/s41467-021-23089-w

Provided by Tokyo Institute of Technology