
Artificial Intelligence Helps Improve NASA’s Eyes on the Sun (Planetary Science)

A group of researchers is using artificial intelligence techniques to calibrate some of NASA’s images of the Sun, helping improve the data that scientists use for solar research. The new technique was published in the journal Astronomy & Astrophysics on April 13, 2021. 

A solar telescope has a tough job. Staring at the Sun takes a harsh toll: a never-ending stream of solar particles and intense sunlight bombards the instrument. Over time, the sensitive lenses and sensors of solar telescopes begin to degrade. To ensure the data such instruments send back is still accurate, scientists recalibrate periodically to make sure they understand just how the instrument is changing. 

Launched in 2010, NASA’s Solar Dynamics Observatory, or SDO, has provided high-definition images of the Sun for over a decade. Its images have given scientists a detailed look at various solar phenomena that can spark space weather and affect our astronauts and technology on Earth and in space. The Atmospheric Imaging Assembly, or AIA, is one of two imaging instruments on SDO; it looks constantly at the Sun, taking images across 10 wavelengths of ultraviolet light every 12 seconds. This creates a wealth of information about the Sun like no other, but – like all Sun-staring instruments – AIA degrades over time, and the data needs to be frequently calibrated. 

This image shows seven of the ultraviolet wavelengths observed by the Atmospheric Imaging Assembly on board NASA’s Solar Dynamics Observatory. The top row shows observations from May 2010 and the bottom row shows observations from 2019, without any corrections, illustrating how the instrument degraded over time. Credits: Luiz Dos Santos/NASA GSFC

Since SDO’s launch, scientists have used sounding rockets to calibrate AIA. Sounding rockets are smaller rockets that typically carry only a few instruments and take short flights into space, usually lasting about 15 minutes. Crucially, sounding rockets fly above most of Earth’s atmosphere, allowing instruments on board to see the ultraviolet wavelengths measured by AIA; these wavelengths are absorbed by Earth’s atmosphere and can’t be measured from the ground. To calibrate AIA, researchers attach an ultraviolet telescope to a sounding rocket and compare that data to the measurements from AIA. Scientists can then make adjustments to account for any changes in AIA’s data. 

There are some drawbacks to the sounding rocket method of calibration. Sounding rockets can only launch so often, but AIA is constantly looking at the Sun. That means there are stretches between flights when the calibration drifts slightly off. 

“It’s also important for deep space missions, which won’t have the option of sounding rocket calibration,” said Dr. Luiz Dos Santos, a solar physicist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, and lead author on the paper. “We’re tackling two problems at once.” 

Virtual calibration

With these challenges in mind, scientists decided to look at other options to calibrate the instrument, with an eye towards constant calibration. Machine learning, a technique used in artificial intelligence, seemed like a perfect fit. 

As the name implies, machine learning requires a computer program, or algorithm, to learn how to perform its task.

First, researchers needed to train a machine learning algorithm to recognize solar structures and how to compare them using AIA data. To do this, they give the algorithm images from sounding rocket calibration flights and tell it the correct amount of calibration each needs. After enough of these examples, they give the algorithm similar images and see whether it identifies the correct calibration. With enough data, the algorithm learns to identify how much calibration is needed for each image.

Because AIA looks at the Sun in multiple wavelengths of light, researchers can also use the algorithm to compare specific structures across the wavelengths and strengthen its assessments.

To start, they would teach the algorithm what a solar flare looked like by showing it solar flares across all of AIA’s wavelengths until it recognized solar flares in all different types of light. Once the program can recognize a solar flare without any degradation, the algorithm can then determine how much degradation is affecting AIA’s current images and how much calibration is needed for each. 
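The correction step described above can be illustrated with a toy sketch. The function and values below are hypothetical, not NASA's actual pipeline; it only shows the idea that once a model has predicted how much a channel has dimmed, the observed intensities can be divided by that degradation factor to restore them.

```python
# Hypothetical sketch: applying a model-predicted degradation factor
# to correct an image channel (not NASA's actual calibration code).

def correct_image(pixels, degradation_factor):
    """Undo multiplicative dimming: observed = true * factor,
    so true = observed / factor."""
    if not 0.0 < degradation_factor <= 1.0:
        raise ValueError("degradation factor must be in (0, 1]")
    return [p / degradation_factor for p in pixels]

# Example: a channel that has dimmed to 50% of its launch sensitivity.
observed = [14.0, 7.0, 21.0]
restored = correct_image(observed, 0.5)
print(restored)  # [28.0, 14.0, 42.0]
```

In the real system, the degradation factor would come from the trained algorithm rather than being supplied by hand, and would vary per wavelength channel and over time.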

“This was the big thing,” Dos Santos said. “Instead of just identifying it on the same wavelength, we’re identifying structures across the wavelengths.” 

This means researchers can be more sure of the calibration the algorithm identified. Indeed, when comparing their virtual calibration data to the sounding rocket calibration data, the machine learning program was spot on. 

The top row of images shows the degradation of AIA’s 304 Angstrom wavelength channel over the years since SDO’s launch. The bottom row shows the same images corrected for this degradation using a machine learning algorithm. Credits: Luiz Dos Santos/NASA GSFC

With this new process, researchers are poised to constantly calibrate AIA’s images between calibration rocket flights, improving the accuracy of SDO’s data for researchers. 

Machine learning beyond the Sun

Researchers have also been using machine learning to better understand conditions closer to home. 

One group of researchers led by Dr. Ryan McGranaghan – Principal Data Scientist and Aerospace Engineer at ASTRA LLC and NASA Goddard Space Flight Center – used machine learning to better understand the connection between Earth’s magnetic field and the ionosphere, the electrically charged part of Earth’s upper atmosphere. By applying data science and machine learning techniques to large volumes of data, they developed a model that helped them better understand how energized particles from space rain down into Earth’s atmosphere, where they drive space weather. 

As machine learning advances, its scientific applications will expand to more and more missions. In the future, this may mean that deep space missions – which travel to places where calibration rocket flights aren’t possible – can still be calibrated and continue returning accurate data, even as they travel farther and farther from Earth.

Header image caption (same as image in the story): The top row of images shows the degradation of AIA’s 304 Angstrom wavelength channel over the years since SDO’s launch. The bottom row shows the same images corrected for this degradation using a machine learning algorithm. Credits: Luiz Dos Santos/NASA GSFC

Provided by NASA

Artificial Intelligence Enables Screening of 30 Million Possible Drugs Against SARS-CoV-2 (Medicine)

Mayo Clinic researchers and collaborators used computer simulation and artificial intelligence (AI) to screen 30 million potential drug compounds for candidates that block the SARS-CoV-2 virus, which causes COVID-19. In the work, published in Biomolecules, the researchers accelerated drug discovery to better identify and study the most promising targets in their search for new treatments for COVID-19.

“A multi-drug platform was used to select the ones that might work. The analysis was done with drugs clinically tested and licensed by the US Food and Drug Administration, as well as other novel compounds. Thanks to the computational power of advanced technology, it was possible to determine the best drugs from a compound library for further investigation,” says Dr. Thomas Caulfield, a molecular neuroscientist at Mayo Clinic and an author on the paper.

The studies were carried out using computer simulation known as in silico screening (meaning “performed on the computer”) and validated through biological experiments with live viruses. This type of research uses digital databases and mathematical models to identify potentially useful drug compounds. Other types of research are carried out in cell lines, known as in vitro, or in living organisms such as mice or humans, known as in vivo.

The researchers started with 30 million drug compounds. Virtual assessment tools predicted how the various compounds would behave and how they would interact with particular biological targets of SARS-CoV-2. The in silico screening narrowed the field to 25 compounds. For further analysis and laboratory testing, the researchers then conducted a pilot study of all 25 compounds against infectious SARS-CoV-2 in human cell cultures, and tested them for a common problem with drugs: toxicity.
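A minimal sketch of the screening funnel described above, assuming compounds are ranked by a predicted binding score and the top candidates are kept for laboratory testing. The compound names and scores here are invented; the actual Mayo Clinic pipeline is far more involved.

```python
# Toy illustration of a virtual-screening funnel (hypothetical data):
# rank a compound library by predicted binding score, keep the top hits.

def screen(compounds, top_n):
    """compounds: (name, predicted_binding_score) pairs;
    lower (more negative) scores mean tighter predicted binding."""
    ranked = sorted(compounds, key=lambda c: c[1])
    return [name for name, _score in ranked[:top_n]]

library = [("cmpd_A", -9.2), ("cmpd_B", -6.1),
           ("cmpd_C", -10.5), ("cmpd_D", -7.8)]
print(screen(library, 2))  # ['cmpd_C', 'cmpd_A']
```

In practice the scoring step, not the ranking, is the expensive part: each of the 30 million compounds must be docked or otherwise evaluated against the viral target before a list like this can be sorted.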

Because one of the liver’s tasks is to clean the blood, including the drug components, the team created a model of the human liver on a honeycomb-shaped surface that was no larger than the size of a pencil eraser. The researchers were able to predict that all of those 25 compounds would be safe for the human liver.

“The goal is to deactivate the infection and restore the cells to health. What we want is to aggressively target the SARS-CoV-2 replication cycle from several fronts to inhibit entry and spread of the virus,” says Dr. Caulfield.

The researchers hope that a combination of drugs, similar to a drug cocktail used in the treatment of HIV, will complement vaccination against COVID-19. Dr. Caulfield says the next step is to move forward on the basis of the new discoveries. The researchers plan to test combinations of drugs to find pairs that act in synergy and are more powerful against the virus than any single compound.

“This discovery opens the way for the future creation of drugs and clinical trials to accelerate the delivery of possible treatments,” Dr. Caulfield concludes.

Dr. Caulfield led the drug selection team, which included colleagues from Mayo Clinic in Florida and Mayo Clinic in Rochester, as well as researchers from Brigham and Women’s Hospital (affiliated with Harvard Medical School) and the University of California, Riverside. Funding for this study came from the National Institute of Allergy and Infectious Diseases, part of the National Institutes of Health, and the Center for Personalized Medicine at Mayo Clinic. For a full list of authors, funding information, and conflict of interest statements, see the article in Biomolecules.

This article and others about related studies appear in Discovery’s Edge, Mayo Clinic’s research publication.

Reference: Coban, M.A.; Morrison, J.; Maharjan, S.; Hernandez Medina, D.H.; Li, W.; Zhang, Y.S.; Freeman, W.D.; Radisky, E.S.; Le Roch, K.G.; Weisend, C.M.; Ebihara, H.; Caulfield, T.R. Attacking COVID-19 Progression Using Multi-Drug Therapy for Synergetic Target Engagement. Biomolecules 2021, 11, 787. https://doi.org/10.3390/biom11060787

Provided by Mayo Clinic

Machine Learning Accelerates The Search For Promising Moon Sites For Energy & Mineral Resources (Astronomy)

A Moon-scanning method that can automatically classify important lunar features from telescope images could significantly improve the efficiency of selecting sites for exploration.

There is more than meets the eye to picking a landing or exploration site on the Moon. The visible area of the lunar surface is larger than Russia and is pockmarked by thousands of craters and crisscrossed by canyon-like rilles. The choice of future landing and exploration sites may come down to the most promising prospective locations for construction, minerals or potential energy resources. However, scanning by eye across such a large area, looking for features perhaps a few hundred meters across, is laborious and often inaccurate, which makes it difficult to pick optimal areas for exploration.

Siyuan Chen (pictured above) and Professor Xin Gao used machine learning and AI to identify promising lunar areas for the exploration of precious resources, such as uranium and helium-3. © 2021 KAUST; Anastasia Serin

Siyuan Chen, Xin Gao and Shuyu Sun, along with colleagues from The Chinese University of Hong Kong, have now applied machine learning and artificial intelligence (AI) to automate the identification of prospective lunar landing and exploration areas.

“We are looking for lunar features like craters and rilles, which are thought to be hotspots for energy resources like uranium and helium-3 — a promising resource for nuclear fusion,” says Chen. “Both have been detected in Moon craters and could be useful resources for replenishing spacecraft fuel.”

Machine learning is a very effective technique for training an AI model to look for certain features on its own. The first problem faced by Chen and his colleagues was that there was no labeled dataset for rilles that could be used to train their model.

Video: KAUST scientists have developed a machine learning method to explore the surface of the moon. © 2021 KAUST; Anastasia Serin.

“We overcame this challenge by constructing our own training dataset with annotations for both craters and rilles,” says Chen. “To do this, we used an approach called transfer learning to pretrain our rille model on a surface crack dataset with some fine tuning using actual rille masks. Previous approaches require manual annotation for at least part of the input images; our approach does not require human intervention and so allowed us to construct a large, high-quality dataset.”

The next challenge was developing a computational approach that could be used to identify both craters and rilles at the same time, something that had not been done before.

“This is a pixel-to-pixel problem for which we need to accurately mask the craters and rilles in a lunar image,” says Chen. “We solved this problem by constructing a deep learning framework called high-resolution-moon-net, which has two independent networks that share the same network architecture to identify craters and rilles simultaneously.”

The team’s approach achieved precision as high as 83.7 percent, higher than existing state-of-the-art methods for crater detection. 
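For context on the precision figure above: pixel-level precision for a predicted mask is the fraction of predicted feature pixels that are actually correct. The sketch below is an assumed illustration of that standard metric, not the paper's exact evaluation code.

```python
# Illustrative pixel-level precision for a binary mask
# (assumed metric; the paper's evaluation may differ in detail).

def pixel_precision(predicted, truth):
    """precision = true positives / (true positives + false positives)
    over flat 0/1 masks of equal length."""
    tp = sum(1 for p, t in zip(predicted, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(predicted, truth) if p == 1 and t == 0)
    return tp / (tp + fp) if (tp + fp) else 0.0

pred  = [1, 1, 0, 1, 0, 1, 1, 0]
truth = [1, 0, 0, 1, 0, 1, 1, 1]
print(pixel_precision(pred, truth))  # 0.8 (4 of 5 predicted pixels correct)
```

Note that precision alone does not capture missed features (the last pixel above is a rille pixel the model failed to flag); segmentation work typically reports recall or intersection-over-union alongside it.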

Featured image: Machine learning can be used to rapidly identify and classify craters and rilles on the Moon from telescope images. © 2021 NASA


Reference: Chen, S., Li, Y., Zhang, T., Zhu, X., Sun, S. & Gao, X. Lunar features detection for energy discovery via deep learning. Applied Energy 296, 117085 (2021).

Provided by KAUST

USC Researchers Enable AI To Use Its “Imagination” (Science and Technology)

A team of researchers at USC is helping AI imagine the unseen, a technique that could also lead to fairer AI, new medicines and increased autonomous vehicle safety.

Imagine an orange cat. Now, imagine the same cat, but with coal-black fur. Now, imagine the cat strutting along the Great Wall of China. As you do this, a quick series of neuron activations in your brain comes up with variations of the picture, based on your previous knowledge of the world.

In other words, as humans, it’s easy to envision an object with different attributes. But, despite advances in deep neural networks that match or surpass human performance in certain tasks, computers still struggle with the very human skill of “imagination.”

Now, a USC research team comprising computer science Professor Laurent Itti and PhD students Yunhao Ge, Sami Abu-El-Haija and Gan Xin has developed an AI that uses human-like capabilities to imagine a never-before-seen object with different attributes. The paper, titled Zero-Shot Synthesis with Group-Supervised Learning, was presented at the 2021 International Conference on Learning Representations on May 7.

“We were inspired by human visual generalization capabilities to try to simulate human imagination in machines,” said Ge, the study’s lead author.

“Humans can separate their learned knowledge by attributes—for instance, shape, pose, position, color—and then recombine them to imagine a new object. Our paper attempts to simulate this process using neural networks.”


AI’s generalization problem

For instance, say you want to create an AI system that generates images of cars. Ideally, you would provide the algorithm with a few images of a car, and it would be able to generate many types of cars—from Porsches to Pontiacs to pick-up trucks—in any color, from multiple angles.

This is one of the long-sought goals of AI: creating models that can extrapolate. This means that, given a few examples, the model should be able to extract the underlying rules and apply them to a vast range of novel examples it hasn’t seen before. But machines are most commonly trained on sample features, pixels for instance, without taking into account the object’s attributes.

The science of imagination

In this new study, the researchers attempt to overcome this limitation using a concept called disentanglement. Disentanglement can be used to generate deepfakes, for instance, by disentangling human face movements and identity. By doing this, said Ge, “people can synthesize new images and videos that substitute the original person’s identity with another person, but keep the original movement.”

Similarly, the new approach takes a group of sample images—rather than one sample at a time as traditional algorithms have done—and mines the similarity between them to achieve something called “controllable disentangled representation learning.”

Then, it recombines this knowledge to achieve “controllable novel image synthesis,” or what you might call imagination. “For instance, take the Transformers movies as an example,” said Ge. “The system can take the shape of a Megatron car, the color and pose of a yellow Bumblebee car, and the background of New York’s Times Square. The result is a Bumblebee-colored Megatron car driving in Times Square, even if this combination was never witnessed during training.”
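The attribute-swapping idea can be shown with a toy sketch, assuming each sample has already been disentangled into named attributes. The dictionaries below stand in for the paper's learned latent representations, which are of course neural vectors, not lookup tables.

```python
# Toy illustration of attribute recombination (not the paper's network):
# each sample is a set of disentangled attributes; swapping them
# "imagines" a combination never seen during training.

def recombine(**sources):
    """Build a new sample, taking each named attribute from the
    sample supplied for it."""
    return {attr: sample[attr] for attr, sample in sources.items()}

megatron = {"shape": "Megatron", "color": "gray", "background": "desert"}
bumblebee = {"shape": "Bumblebee", "color": "yellow",
             "background": "Times Square"}

novel = recombine(shape=megatron, color=bumblebee, background=bumblebee)
print(novel["shape"], novel["color"], novel["background"])
```

The hard part the paper addresses is upstream of this step: learning representations in which shape, color, pose and background actually occupy separate, independently swappable factors.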

This is similar to how we as humans extrapolate: when a human sees a color from one object, we can easily apply it to any other object by substituting the original color with the new one. Using their technique, the group generated a new dataset containing 1.56 million images that could help future research in the field.

Understanding the world

While disentanglement is not a new idea, the researchers say their framework can be compatible with nearly any type of data or knowledge, which widens the opportunity for applications. For instance, it could make AI fairer by disentangling race- and gender-related knowledge and removing sensitive attributes from the equation altogether.

In the field of medicine, it could help doctors and biologists discover more useful drugs by disentangling the medicine function from other properties, and then recombining them to synthesize new medicine. Imbuing machines with imagination could also help create safer AI by, for instance, allowing autonomous vehicles to imagine and avoid dangerous scenarios previously unseen during training.

“Deep learning has already demonstrated unsurpassed performance and promise in many domains, but all too often this has happened through shallow mimicry, and without a deeper understanding of the separate attributes that make each object unique,” said Itti. “This new disentanglement approach, for the first time, truly unleashes a new sense of imagination in A.I. systems, bringing them closer to humans’ understanding of the world.”


Provided by USC Viterbi

Closer Hardware Systems Bring The Future of Artificial Intelligence Into View (Engineering)

Machine learning is the process by which computers adapt their responses without human intervention. This form of artificial intelligence (AI) is now common in everyday tools such as virtual assistants and is being developed for use in areas from medicine to agriculture. A challenge posed by the rapid expansion of machine learning is the high energy demand of its complex computing processes.

Researchers from The University of Tokyo have reported the first integration of a mobility-enhanced field-effect transistor (FET) and a ferroelectric capacitor (FE-CAP), bringing the memory system into the proximity of a microprocessor and improving the efficiency of the data-intensive computing system. Their findings were presented at the 2021 Symposium on VLSI Technology.

Memory cells require both a memory component and an access transistor. In currently available examples, the access transistors are generally silicon metal-oxide-semiconductor FETs. While the memory elements can be formed in the ‘back end of line’ (BEOL) layers, the access transistors must be formed in the ‘front end of line’ layers of the integrated circuit, which is not an efficient use of that space.

In contrast, oxide semiconductors such as indium gallium zinc oxide (IGZO) can be included in BEOL layers because they can be processed at low temperatures. By incorporating both the access transistor and the memory into a single monolith in the BEOL, high-density, energy-efficient embedded memory can be achieved directly on a microprocessor.

The researchers used IGZO doped with tin (IGZTO) for both the oxide semiconductor FET and the FE-CAP to create 3D embedded memory.

“In light of the high mobility and excellent reliability of our previously reported IGZO FET, we developed a tin-doped IGZTO FET,” explains study first author Jixuan Wu. “We then integrated the IGZTO FET with an FE-cap to introduce its scalable properties.”

Both the drive current and the effective mobility of the IGZTO FET were twice those of the IGZO FET without tin. Because the mobility of the oxide semiconductor must be high enough to drive the FE-cap, introducing the tin ensures successful integration.

“The proximity achieved with our design will significantly reduce the distance that signals must travel, which will speed up learning and inference processes in AI computing, making them more energy efficient,” study author Masaharu Kobayashi explains. “We believe our findings provide another step towards hardware systems that can support future AI applications of higher complexity.”

The article, “Mobility-enhanced FET and Wakeup-free Ferroelectric Capacitor Enabled by Sn-doped InGaZnO for 3D Embedded RAM Application”, was presented at the 2021 Symposium on VLSI Technology.

Featured image: Researchers from the Institute of Industrial Science at The University of Tokyo, Kobe Steel, Ltd, and Kobelco Research Institute, Inc, develop high-density, energy-efficient 3D embedded RAM for artificial intelligence applications. © Institute of Industrial Science, the University of Tokyo

Provided by University of Tokyo

Using Artificial Intelligence to Predict Which People with Lung Cancer Will Respond to Immunotherapy (Medicine)

NCI Awards $3 Million Grant to Perlmutter Cancer Center & Case Western Reserve University

Recent advances in immunotherapy have benefited people with locally advanced or metastatic non-small cell lung cancer, but unfortunately not all patients have a favorable response to these treatments. The complexity and dynamic nature of the tumor’s interactions with the immune system make it challenging to develop predictive tests (biomarkers) for immunotherapy.

The National Cancer Institute (NCI) has awarded a 5-year, $3 million grant to researchers at NYU Langone Health’s Perlmutter Cancer Center and Case Western Reserve University in Cleveland to develop and apply artificial intelligence (AI) tools for predicting which people with lung cancer will respond to immunotherapy.

Vamsidhar Velcheti, MD, director of the Thoracic Medical Oncology Program at Perlmutter Cancer Center, in collaboration with Anant Madabhushi, PhD, director of Case Western Reserve’s Center for Computational Imaging and Personalized Medicine, developed new technologies to evaluate tumor response to immunotherapy. Using advanced computer image analysis tools and AI-based algorithms, they could identify signatures, or biomarkers, on CT scan images that could predict which patients would respond to immunotherapy. The NCI grant awarded to the team will allow further development of these tools and help with clinical translation of this research.

“One of the advantages of our approach and the tools that we have developed is that we can use routinely acquired contrast enhanced CT scans, which are used commonly in lung cancer,” says Dr. Velcheti, who is also a member of the Lung Cancer Center and an associate professor in the Department of Medicine at NYU Langone. “These tools can be used to predict response to treatment and longitudinally follow a patient’s progress on treatment.”

The signatures the researchers identified can quantitatively assess the tortuosity (twistedness) of blood vessels within and surrounding the tumors. Dr. Velcheti and his colleagues found that increased vessel tortuosity around a tumor is a strong predictor of response to immunotherapy. Patients who have tumors with increased vessel tortuosity tend to have poor response to immunotherapy possibly because of impaired immune cell trafficking into the tumor.
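Vessel tortuosity is commonly quantified as the arc length of a vessel's path divided by the straight-line distance between its endpoints, with 1.0 meaning perfectly straight. The sketch below illustrates that standard measure as an assumption; the study's actual radiomic feature may be computed differently.

```python
# Illustrative tortuosity measure for a vessel traced as a 2D polyline
# (a common definition, assumed here; not the study's exact feature).
import math

def tortuosity(points):
    """Arc length of the path divided by the endpoint-to-endpoint
    straight-line distance; 1.0 means perfectly straight."""
    arc = sum(math.dist(points[i], points[i + 1])
              for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1])
    return arc / chord

straight = [(0, 0), (2, 0), (4, 0)]
twisted = [(0, 0), (1, 2), (2, -2), (4, 0)]
print(tortuosity(straight))  # 1.0
print(tortuosity(twisted) > tortuosity(straight))  # True
```

On a CT scan the vessels would be traced in 3D from segmented images; the principle is the same, with higher values flagging the twisted vasculature the researchers associate with poor immunotherapy response.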

“Some lung tumors tend to have a lot of twists and turns in the blood vessels due to secretion of VEGF (vascular endothelial growth factor, a protein that causes an increase in the number of blood vessels, fueling tumor growth). These tumors do not respond well to treatment with immunotherapy,” Dr. Velcheti says. “Using the new radiomic biomarker that measures vessel tortuosity, we can identify patients who could potentially benefit from combining immunotherapy with drugs that inhibit VEGF.”

With the NCI grant, Dr. Velcheti and colleagues at Perlmutter Cancer Center plan to develop a clinical trial to test the effect of immunotherapy combined with drugs that inhibit the VEGF protein, which would enable immune cells to gain access to the tumor.

“The NCI funding is a critical step to help translate the exciting science from the lab to the clinic,” Dr. Velcheti says. “This grant will help us continue our multidisciplinary and multi-institutional collaborative research to bring novel and efficient diagnostic tools to help patients with lung cancer.”

Featured image credit: gettyimages

Provided by NYU Langone

NASA AI Technology Could Speed up Fault Diagnosis Process in Spacecraft (Astronomy)

New artificial intelligence technology could speed up physical fault diagnosis in spacecraft and spaceflight systems, improving mission efficiency by reducing down-time.

Research in Artificial Intelligence for Spacecraft Resilience (RAISR) is software developed by Pathways intern Evana Gizzi, who works at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. With RAISR, artificial intelligence could diagnose faults in real time in spacecraft and spaceflight systems in general.

“The spacecraft reporting a fault is like a car with a check engine light on,” Gizzi said. “You know there is an issue, but can’t necessarily explain the cause. That’s where the RAISR algorithm comes in, diagnosing the cause as a loose gas cap.”

Evana Gizzi. Credits: Sergio Alonso Photography

Right now, the ability to make inferences about what is happening that go beyond traditional ‘if-then-else’ fault trees is something only humans can do, Gizzi said.

Current fault tree diagnosis depends on the physics being simple and already known to engineers and scientists. For instance, if an instrument’s temperature drops too low, the spacecraft can detect this situation and turn on heaters. If the current in a line spikes, the spacecraft may work to isolate the offending circuit. In both cases, the spacecraft simply knows that if ‘A’ happens, respond by doing ‘B.’ What the spacecraft cannot do is figure out what caused these events, especially in unexpected fault cases: whether the spacecraft entered Earth’s shadow or a micrometeoroid damaged a circuit.
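The "if 'A' happens, do 'B'" behavior described above can be sketched as a simple lookup of hand-written rules. The symptom and response names below are illustrative, not flight software; the point is that nothing here reasons about why a symptom occurred.

```python
# Minimal sketch of traditional if-then-else fault handling
# (hypothetical rule and response names, not actual flight software).

FAULT_RULES = {
    "temperature_low": "turn_on_heaters",
    "current_spike": "isolate_circuit",
}

def respond(symptom):
    # The spacecraft only knows: if 'A' happens, do 'B' -- nothing more.
    return FAULT_RULES.get(symptom, "enter_safe_mode_and_call_ground")

print(respond("temperature_low"))  # turn_on_heaters
print(respond("unknown_anomaly"))  # enter_safe_mode_and_call_ground
```

Anything not covered by a rule falls through to safe mode and a call to ground controllers, which is exactly the costly path RAISR's inference capability aims to avoid.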

These types of conclusions require the ability to follow a logical chain of non-trivial inferences – something like human reasoning, Gizzi said. The artificial intelligence (AI) might even be able to connect the spacecraft’s decreased temperature with a malfunction in its internal heat regulation system: an example of a more catastrophic fault.

Referring such faults to a human on the ground does not just take time, but costs valuable resources in terms of communications networks and bandwidth for smaller missions in Earth orbit, or even for exploring distant planets, where bandwidth to controllers on Earth is limited by distance.

In other circumstances, like orbiting behind another planet or the Moon, contact is simply not available. Computers also excel over human controllers when a proper inference needs to be done extremely fast using several disparate types of data.      

In its current stage, RAISR would not actively control the spacecraft in any way; instead, it facilitates diagnosis by finding associations that a human may miss.

Michael Johnson, the Engineering and Technology Directorate chief technologist at Goddard, said current safe modes waste valuable time because science data collection ceases, whereas a technology that could diagnose and address a fault might lead to a quicker return to normal flight operations.

RAISR uses a combination of machine learning and classical AI techniques. While machine learning-based techniques can be particularly useful in diagnosing faults, their performance depends on having a large amount of diverse data, Gizzi said, so they usually address faults that have happened in the past. With anomalies, which are faults that have never been experienced, there simply may not be enough data to support sound reasoning with machine learning-based techniques. That is where classical AI steps in, Gizzi said, facilitating reasoning in more complicated situations that lack previous data to inform decisions.

RAISR logo. Credits: Evana Gizzi

Gizzi’s technology helps make connections that would be extraordinarily difficult for humans to make, said Conrad Schiff, an assistant chief for technology in the software engineering division at Goddard.

“It’s not just an automated system,” Schiff said. “It’s an autonomous system that attempts to reveal how it arrived at the ‘whodunit.’ Laying out the evidence like a detective at the end of a mystery novel, so that we all can see who is guilty of murder – that’s the same principle here. It understands these associations, it helps us understand its reasoning in arriving at its conclusion.”

RAISR enables better collection of data and observations by reducing resources needed for the maintenance of the systems themselves, Schiff added. “It’s less glamorous, it’s grittier, but it’s making sure the health and safety of the thing producing the data is maintained as best as we can.”

In general, AI can act like an additional brain within the spacecraft.

“You’re taking an engineer or a scientist from the lab and putting a simplified copy of them in the spacecraft, so they can make intelligent decisions in situ,” Johnson said.

RAISR’s next steps include a demonstration on a small satellite mission, Gizzi said, where it can make real-time diagnosis decisions to compare with ground control.

As more missions adopt AI techniques, Johnson said, testing approaches may have to shift. Rigorous protocols that test every possible scenario might not apply. That, combined with the cultural shift from ground-based problem resolution to letting the on-orbit systems solve problems themselves, makes putting AI in spacecraft an incremental journey, he said.

“When I think about spaceflight, it’s a target for autonomous systems that just makes sense,” Johnson said. “The real leap occurs when we go beyond automation to autonomy, from programming steps you know will happen to the system starting to think for itself. When you go into deep space, there are going to be things you did not program for. The need is really there.”

Banner image: A CubeSat is released from the International Space Station. RAISR could help spacecraft like these rely less on ground controllers and communications networks. Photo credit: NASA

Provided by NASA

Artificial Intelligence Tool Uses Chest X-ray To Differentiate Worst Cases of COVID-19 (Medicine)

Trained to see patterns by analyzing thousands of chest X-rays, a computer program predicted with up to 80 percent accuracy which COVID-19 patients would develop life-threatening complications within four days, a new study finds.

Developed by researchers at NYU Grossman School of Medicine, the program used several hundred gigabytes of data gleaned from 5,224 chest X-rays taken from 2,943 seriously ill patients infected with SARS-CoV-2, the virus that causes COVID-19.

The authors of the study, publishing in the journal npj Digital Medicine online May 12, cited the “pressing need” for the ability to quickly predict which COVID-19 patients are likely to have lethal complications so that treatment resources can best be matched to those at increased risk. For reasons not yet fully understood, the health of some COVID-19 patients suddenly worsens, requiring intensive care, and increasing their chances of dying.

In a bid to address this need, the NYU Langone team fed not only X-ray information into their computer analysis, but also patients’ age, race, and gender, along with several vital signs and laboratory test results, including weight, body temperature, and blood immune cell levels. Also factored into their mathematical models, which can learn from examples, were the need for a mechanical ventilator and whether each patient went on to survive (2,405) or die (538) from their infections.
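The fusion the team describes, combining image-derived features with tabular clinical variables in one predictive model, can be illustrated with a deliberately crude sketch. This is not the NYU model (which used deep learning on the X-rays themselves); the feature extractor, variable names, and weights below are all hypothetical stand-ins for the general pattern.

```python
# Illustrative feature-fusion sketch: image statistics + clinical
# variables feed one scoring function. All names/weights are invented.
import math

def image_features(xray_pixels):
    """Stand-in for a learned feature extractor: crude intensity stats."""
    mean = sum(xray_pixels) / len(xray_pixels)
    frac_bright = sum(p > 0.6 for p in xray_pixels) / len(xray_pixels)
    return [mean, frac_bright]      # rough proxies for lung opacity

def clinical_features(patient):
    """Normalized tabular inputs (age, vitals, lab values)."""
    return [patient["age"] / 100.0,
            patient["temp_c"] / 40.0,
            patient["lymphocytes"]]

def deterioration_risk(xray_pixels, patient, weights, bias):
    x = image_features(xray_pixels) + clinical_features(patient)
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # squash to a probability-like score

# Toy call with made-up weights:
risk = deterioration_risk(
    xray_pixels=[0.2, 0.7, 0.8, 0.9, 0.3],
    patient={"age": 67, "temp_c": 38.5, "lymphocytes": 0.15},
    weights=[0.5, 2.0, 1.0, 1.0, -3.0], bias=-2.0)
print(round(risk, 3))
```

A real system would replace `image_features` with a trained network and fit the weights on outcome data, but the fusion of image and clinical signals into a single score is the structure the study describes.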

Researchers then tested the predictive value of the software tool on 770 chest X-rays from 718 other patients admitted for COVID-19 through the emergency room at NYU Langone hospitals from March 3 to June 28, 2020. The computer program accurately predicted four out of five infected patients who required intensive care and mechanical ventilation and/or died within four days of admission.

“Emergency room physicians and radiologists need effective tools like our program to quickly identify those COVID-19 patients whose condition is most likely to deteriorate quickly so that health care providers can monitor them more closely and intervene earlier,” says study co-lead investigator Farah Shamout, PhD, an assistant professor in computer engineering at New York University’s campus in Abu Dhabi.

“We believe that our COVID-19 classification test represents the largest application of artificial intelligence in radiology to address some of the most urgent needs of patients and caregivers during the pandemic,” says Yiqiu “Artie” Shen, MS, a doctoral student at the NYU Data Science Center.

Study senior investigator Krzysztof Geras, PhD, an assistant professor in the Department of Radiology at NYU Langone, says a major advantage to machine-intelligence programs such as theirs is that its accuracy can be tracked, updated and improved with more data. He says the team plans to add more patient information as it becomes available. He also says the team is evaluating what additional clinical test results could be used to improve their test model.

Geras says he hopes, as part of further research, to soon deploy the NYU COVID-19 classification test to emergency physicians and radiologists. In the interim, he is working with physicians to draft clinical guidelines for its use.

Funding support for the study was provided by National Institutes of Health grants P41 EB017183 and R01 LM013316; and National Science Foundation grants HDR-1922658 and HDR-1940097.

Besides Geras, Shamout, and Shen, other NYU Langone researchers involved in this study are co-lead investigators Nan Wu; Aakash Kaku; Jungkyu Park; and Taro Makino; and co-investigators Stanislaw Jastrzebski; Duo Wong; Ben Zhang; Siddhant Dogra; Men Cao; Narges Razavian; David Kudlowitz; Lea Azour; William Moore; Yvonne Lui; Yindalon Aphinyanaphongs; and Carlos Fernandez-Granda.

Featured image: Chest X-ray from patient severely ill from COVID-19, showing (in white patches) infected tissue spread across the lungs. CREDIT Courtesy of Nature Publishing or npj Digital Medicine

Reference: Shamout, F.E., Shen, Y., Wu, N. et al. An artificial intelligence system for predicting the deterioration of COVID-19 patients in the emergency department. npj Digit. Med. 4, 80 (2021). https://doi.org/10.1038/s41746-021-00453-0

Provided by NYU Langone

AI Learns To Type On A Phone Like Humans (Engineering)

Computational model precisely replicates eye and finger movements of touchscreen users — could lead to better auto-correct and keyboard usability for unique needs

Touchscreens are notoriously difficult to type on. Since we can’t feel the keys, we rely on the sense of sight both to move our fingers to the right places and to check for errors, two demands on our visual attention that we can’t meet at the same time. To really understand how people type on touchscreens, researchers at Aalto University and the Finnish Center for Artificial Intelligence FCAI have created the first artificial intelligence model that predicts how people move their eyes and fingers while typing.

The AI model can simulate how a human user would type any sentence on any keyboard design. It makes errors, detects them — though not always immediately — and corrects them, very much like humans would. The simulation also predicts how people adapt to changing circumstances, like how their writing style shifts when they start using a new auto-correction system or keyboard design.

‘Previously, touchscreen typing has been understood mainly from the perspective of how our fingers move. AI-based methods have helped shed new light on these movements: what we’ve discovered is the importance of deciding when and where to look. Now, we can make much better predictions on how people type on their phones or tablets,’ says Dr. Jussi Jokinen, who led the work.

The study, to be presented at ACM CHI on 12 May, lays the groundwork for developing, for instance, better and even personalized text entry solutions.

‘Now that we have a realistic simulation of how humans type on touchscreens, it should be a lot easier to optimize keyboard designs for better typing — meaning fewer errors, faster typing, and, most importantly for me, less frustration,’ Jokinen explains.

In addition to predicting how a generic person would type, the model is also able to account for different types of users, like those with motor impairments, and could be used to develop typing aids or interfaces designed with these groups in mind. For those facing no particular challenges, it can deduce from personal writing styles — by noting, for instance, the mistakes that repeatedly occur in texts and emails — what kind of keyboard, or auto-correction system, would best serve a user.

Based on a method that teaches robots problem-solving

The novel approach builds on the group’s earlier empirical research, which provided the basis for a cognitive model of how humans type. The researchers then produced the generative model capable of typing independently. The work was done as part of a larger project on Interactive AI at the Finnish Center for Artificial Intelligence.

The results are underpinned by a classic machine learning method, reinforcement learning, that the researchers extended to simulate people. Reinforcement learning is normally used to teach robots to solve tasks by trial and error; the team found a new way to use this method to generate behavior that closely matches that of humans — mistakes, corrections and all. 
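The trial-and-error idea behind reinforcement learning can be shown with a toy example. This is not the Aalto model, which simulates eye and finger movement under human-like constraints; it is a minimal Q-learning sketch in which an agent learns purely from reward which key to press at each position of a target word, making and gradually eliminating mistakes along the way. The word, key set, and learning parameters are all illustrative.

```python
# Minimal Q-learning sketch of learning to type by trial and error.
import random

TARGET = "cat"
KEYS = "abct"   # toy keyboard

def train(episodes=2000, alpha=0.5, epsilon=0.2):
    """Learn a key-press policy from reward alone (epsilon-greedy)."""
    random.seed(0)
    q = {(pos, k): 0.0 for pos in range(len(TARGET)) for k in KEYS}
    for _ in range(episodes):
        for pos in range(len(TARGET)):
            if random.random() < epsilon:
                key = random.choice(KEYS)        # explore
            else:
                key = max(KEYS, key=lambda k: q[(pos, k)])  # exploit
            reward = 1.0 if key == TARGET[pos] else -1.0
            q[(pos, key)] += alpha * (reward - q[(pos, key)])
    return q

def typed_word(q):
    """Greedy policy after training: press the best-valued key per position."""
    return "".join(max(KEYS, key=lambda k: q[(pos, k)])
                   for pos in range(len(TARGET)))

q = train()
print(typed_word(q))  # → cat
```

Early in training the agent presses wrong keys and is penalized; over many episodes the correct key at each position accumulates the highest value, mirroring in miniature how reward-driven trial and error can produce human-like competence without human demonstration data.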

‘We gave the model the same abilities and bounds that we, as humans, have. When we asked it to type efficiently, it figured out how to best use these abilities. The end result is very similar to how humans type, without having to teach the model with human data,’ Jokinen says.

Comparison to data of human typing confirmed that the model’s predictions were accurate. In the future, the team hopes to simulate slow and fast typing techniques to, for example, design useful learning modules for people who want to improve their typing.  

The paper, Touchscreen Typing As Optimal Supervisory Control, will be presented 12 May 2021 at the ACM CHI conference.


Featured image: Visualisation of where user is pointing and looking when typing. Green indicates location of eyes, blue of fingers. Dark shade stands for longer or more frequent glances or movements. Left: simulation by model; right: observation from user. © Aalto University

Provided by Aalto University