Tag Archives: #algorithms

FSU Researchers Enhance Quantum Machine Learning Algorithms (Quantum)

A Florida State University professor’s research could help quantum computing fulfill its promise as a powerful computational tool.

William Oates, the Cummins Inc. Professor in Mechanical Engineering and chair of the Department of Mechanical Engineering at the FAMU-FSU College of Engineering, and postdoctoral researcher Guanglei Xu found a way to automatically infer parameters used in an important quantum Boltzmann machine algorithm for machine learning applications.

Their findings were published in Scientific Reports.

The work could help build artificial neural networks that could be used for training computers to solve complicated, interconnected problems like image recognition, drug discovery and the creation of new materials.

“There’s a belief that quantum computing, as it comes online and grows in computational power, can provide you with some new tools, but figuring out how to program it and how to apply it in certain applications is a big question,” Oates said.

Quantum bits, unlike binary bits in a standard computer, can exist in more than one state at a time, a concept known as superposition. Measuring the state of a quantum bit — or qubit — causes it to lose that special state, so quantum computers work by calculating the probability of a qubit’s state before it is observed.

Specialized quantum computers known as quantum annealers are one tool for doing this type of computing. They work by representing each state of a qubit as an energy level. The lowest energy state among its qubits gives the solution to a problem. The result is a machine that could handle complicated, interconnected systems that would take a regular computer a very long time to calculate — like building a neural network.
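
The energy-minimization idea can be sketched classically. The toy below uses simulated annealing, a classical stand-in for what a quantum annealer does in hardware, to find the lowest-energy configuration of a small Ising spin model; the couplings and cooling schedule are illustrative choices, not anything from the study.

```python
import math
import random

def ising_energy(spins, J):
    """Ising energy E = -sum over pairs i<j of J[i][j] * s_i * s_j."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def anneal(J, n, steps=5000, t_start=5.0, t_end=0.01, seed=0):
    """Simulated annealing: propose single spin flips, accepting uphill moves
    with a probability that shrinks as the temperature is lowered."""
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    energy = ising_energy(spins, J)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = rng.randrange(n)
        spins[i] *= -1                                     # trial flip
        new_energy = ising_energy(spins, J)
        if new_energy <= energy or rng.random() < math.exp((energy - new_energy) / t):
            energy = new_energy                            # accept
        else:
            spins[i] *= -1                                 # reject, flip back
    return spins, energy

# Ferromagnetic couplings: the lowest-energy state aligns all six spins,
# giving E = -15 (one -1.0 contribution per pair).
n = 6
J = [[1.0 if i != j else 0.0 for j in range(n)] for i in range(n)]
spins, energy = anneal(J, n)
print(spins, energy)
```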

One way to build neural networks is by using a restricted Boltzmann machine, an algorithm that uses probability to learn based on inputs given to the network. Oates and Xu found a way to automatically calculate an important parameter associated with effective temperature that is used in that algorithm. Practitioners typically have to guess at that parameter, which requires testing to confirm and can change whenever the computer is asked to investigate a new problem.

“That parameter in the model replicates what the quantum annealer is doing,” Oates said. “If you can accurately estimate it, you can train your neural network more effectively and use it for predicting things.”
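
To see why that parameter matters, consider the Boltzmann distribution an annealer samples from: the effective temperature rescales every energy, so mis-estimating it changes every probability the training relies on. The toy below (an illustrative sketch, not the authors' method) enumerates a tiny restricted Boltzmann machine exactly at two effective temperatures.

```python
import itertools
import math

def rbm_energy(v, h, W, b, c):
    """RBM energy: E(v, h) = -b.v - c.h - v.W.h."""
    return -(sum(bi * vi for bi, vi in zip(b, v))
             + sum(cj * hj for cj, hj in zip(c, h))
             + sum(v[i] * W[i][j] * h[j]
                   for i in range(len(v)) for j in range(len(h))))

def boltzmann_distribution(W, b, c, t_eff):
    """Exact Boltzmann probabilities over all joint (v, h) states, with the
    sampler's effective temperature t_eff rescaling every energy."""
    nv = len(b)
    states = list(itertools.product([0, 1], repeat=nv + len(c)))
    weights = [math.exp(-rbm_energy(s[:nv], s[nv:], W, b, c) / t_eff)
               for s in states]
    z = sum(weights)
    return {s: w / z for s, w in zip(states, weights)}

# A 2-visible, 1-hidden toy RBM: a hotter sampler flattens the distribution,
# which is why mis-estimating t_eff biases every gradient computed from it.
W = [[1.0], [1.0]]
b, c = [0.0, 0.0], [0.0]
p_cold = boltzmann_distribution(W, b, c, t_eff=0.5)
p_hot = boltzmann_distribution(W, b, c, t_eff=5.0)
print(max(p_cold.values()), max(p_hot.values()))
```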

This research was supported by Cummins Inc. and used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility.

Featured image: William Oates, the Cummins Inc. Professor in Mechanical Engineering and chair of the Department of Mechanical Engineering at the FAMU-FSU College of Engineering. (FAMU-FSU College of Engineering/Mark Wallheisier)

Reference: Xu, G., Oates, W.S. Adaptive hyperparameter updating for training restricted Boltzmann machines on quantum annealers. Sci Rep 11, 2727 (2021). https://doi.org/10.1038/s41598-021-82197-1

Provided by Florida State University

Researchers Develop Speedier Network Analysis For a Range of Computer Hardware (Engineering / Computer Science)

The advance could boost recommendation algorithms and internet search.

Graphs — data structures that show the relationship among objects — are highly versatile. It’s easy to imagine a graph depicting a social media network’s web of connections. But graphs are also used in programs as diverse as content recommendation (what to watch next on Netflix?) and navigation (what’s the quickest route to the beach?). As Ajay Brahmakshatriya summarizes: “graphs are basically everywhere.”

Brahmakshatriya has developed software to more efficiently run graph applications on a wider range of computer hardware. The software extends GraphIt, a state-of-the-art graph programming language, to run on graphics processing units (GPUs), hardware that processes many data streams in parallel. The advance could accelerate graph analysis, especially for applications that benefit from a GPU’s parallelism, such as recommendation algorithms.

Brahmakshatriya, a PhD student in MIT’s Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, will present the work at this month’s International Symposium on Code Generation and Optimization. Co-authors include Brahmakshatriya’s advisor, Professor Saman Amarasinghe, as well as Douglas T. Ross Career Development Assistant Professor of Software Technology Julian Shun, postdoc Changwan Hong, recent MIT PhD student Yunming Zhang PhD ’20 (now with Google), and Adobe Research’s Shoaib Kamil.

When programmers write code, they don’t talk directly to the computer hardware. The hardware itself operates in binary — 1s and 0s — while the coder writes in a structured, “high-level” language made up of words and symbols. Translating that high-level language into hardware-readable binary requires programs called compilers. “A compiler converts the code to a format that can run on the hardware,” says Brahmakshatriya. One such compiler, specially designed for graph analysis, is GraphIt.

The researchers developed GraphIt in 2018 to optimize the performance of graph-based algorithms regardless of the size and shape of the graph. GraphIt allows the user not only to input an algorithm, but also to schedule how that algorithm runs on the hardware. “The user can provide different options for the scheduling, until they figure out what works best for them,” says Brahmakshatriya. “GraphIt generates very specialized code tailored for each application to run as efficiently as possible.”

A number of startups and established tech firms alike have adopted GraphIt to aid their development of graph applications. But Brahmakshatriya says the first iteration of GraphIt had a shortcoming: it ran only on central processing units (CPUs), the type of processor in a typical laptop.

“Some algorithms are massively parallel,” says Brahmakshatriya, “meaning they can better utilize hardware like a GPU that has 10,000 cores for execution.” He notes that some types of graph analysis, including recommendation algorithms, require a high degree of parallelism. So Brahmakshatriya extended GraphIt to enable graph analysis to flourish on GPUs.

Brahmakshatriya’s team preserved the way GraphIt users input algorithms, but adapted the scheduling component for a wider array of hardware. “Our main design decision in extending GraphIt to GPUs was to keep the algorithm representation exactly the same,” says Brahmakshatriya. “Instead, we added a new scheduling language. So, the user can keep the same algorithms that they had written before [for CPUs], and just change the scheduling input to get the GPU code.”
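
That design can be mimicked in ordinary code. Below is an illustrative Python sketch (not GraphIt's actual scheduling language) of the same separation: one breadth-first-search algorithm runs under two interchangeable schedules, a "push" and a "pull" traversal, and produces identical results.

```python
def bfs_levels(graph, source, schedule="push"):
    """Breadth-first levels over an adjacency-list graph.  The algorithm is
    the same under both schedules; only the traversal strategy changes,
    mimicking GraphIt's separation of algorithm and schedule."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        if schedule == "push":
            # Push: each frontier vertex writes to its unvisited neighbors.
            for u in frontier:
                for v in graph[u]:
                    if v not in level:
                        level[v] = depth
                        next_frontier.append(v)
        else:
            # Pull: each unvisited vertex checks whether any neighbor is on
            # the frontier (often better when the frontier is very large).
            on_frontier = set(frontier)
            for v in graph:
                if v not in level and any(u in on_frontier for u in graph[v]):
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level

# Same algorithm, two schedules, identical result.
g = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
assert bfs_levels(g, 0, "push") == bfs_levels(g, 0, "pull")
print(bfs_levels(g, 0, "push"))  # {0: 0, 1: 1, 2: 1, 3: 2}
```

Real schedules also control load balancing, memory layout, and kernel fusion, but the principle is the same: the algorithm text never changes.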

This new, optimized scheduling for GPUs gives a boost to graph algorithms that require high parallelism — including recommendation algorithms or internet search functions that sift through millions of websites simultaneously. To confirm the efficacy of GraphIt’s new extension, the team ran 90 experiments pitting GraphIt’s runtime against other state-of-the-art graph compilers on GPUs. The experiments included a range of algorithms and graph types, from road networks to social networks. GraphIt ran fastest in 65 of the 90 cases and was close behind the leading algorithm in the rest of the trials, demonstrating both its speed and versatility.

GraphIt “advances the field by attaining performance and productivity simultaneously,” says Adrian Sampson, a computer scientist at Cornell University who was not involved with the research. “Traditional ways of doing graph analysis have one or the other: Either you can write a simple algorithm with mediocre performance, or you can hire an expert to write an extremely fast implementation — but that kind of performance is rarely accessible to mere mortals. The GraphIt extension is the key to letting ordinary people write high-level, abstract algorithms and nonetheless getting expert-level performance out of GPUs.”

Sampson adds the advance could be particularly useful in rapidly changing fields: “An exciting domain like that is genomics, where algorithms are evolving so quickly that high-performance expert implementations can’t keep up with the rate of change. I’m excited for bioinformatics practitioners to get their hands on GraphIt to expand the kinds of genomic analyses they’re capable of.”

Brahmakshatriya says the new GraphIt extension provides a meaningful advance in graph analysis, enabling users to move between CPUs and GPUs with state-of-the-art performance with ease. “The field these days is tooth-and-nail competition. There are new frameworks coming out every day,” he says. But he emphasizes that the payoff for even slight optimization is worth it. “Companies are spending millions of dollars each day to run graph algorithms. Even if you make it run just 5 percent faster, you’re saving many thousands of dollars.”

This research was funded, in part, by the National Science Foundation, U.S. Department of Energy, the Applications Driving Architectures Center, and the Defense Advanced Research Projects Agency.

Featured image: MIT researchers developed software to more efficiently run graph applications on a range of computing hardware, including both CPUs and GPUs. Credit: iStockphoto images edited by MIT News

Reference paper: “Compiling Graph Applications for GPUs with GraphIt”

Provided by MIT

Light-based Processors Boost Machine-learning Processing (Computer / Engineering)

An international team of scientists has developed a photonic processor that uses rays of light inside silicon chips to process information much faster than conventional electronic chips. Published in Nature, the breakthrough study was carried out by scientists from EPFL, the Universities of Oxford, Münster, Exeter, Pittsburgh, and IBM Research – Zurich.

Schematic representation of a processor for matrix multiplications that runs on light. Credit: University of Oxford

The exponential growth of data traffic in our digital age poses real challenges for processing power. And with the advent of machine learning and AI in, for example, self-driving vehicles and speech recognition, the upward trend is set to continue. All this places a heavy burden on the ability of current computer processors to keep up with demand.

Now, an international team of scientists has turned to light to tackle the problem. The researchers developed a new approach and architecture that combines processing and data storage onto a single chip by using light-based, or “photonic” processors, which are shown to surpass conventional electronic chips by processing information much more rapidly and in parallel.

The scientists developed a hardware accelerator for so-called matrix-vector multiplications, the backbone of neural networks (algorithms that simulate the human brain), which are themselves used in machine-learning applications. Since different wavelengths (colors) of light don’t interfere with each other, the researchers could use multiple wavelengths for parallel calculations. To do this, they used another innovative technology developed at EPFL, a chip-based “frequency comb”, as a light source.

“Our study is the first to apply frequency combs in the field of artificial neural networks,” says Professor Tobias Kippenberg at EPFL, one of the study’s leads. Professor Kippenberg’s research has pioneered the development of frequency combs. “The frequency comb provides a variety of optical wavelengths that are processed independently of one another in the same photonic chip.”

“Light-based processors for speeding up tasks in the field of machine learning enable complex mathematical tasks to be processed at high speeds and throughputs,” says senior co-author Wolfram Pernice at Münster University, one of the professors who led the research. “This is much faster than conventional chips which rely on electronic data transfer, such as graphics cards or specialized hardware like TPUs (Tensor Processing Units).”

After designing and fabricating the photonic chips, the researchers tested them on a neural network that recognizes hand-written numbers. Inspired by biology, these networks are a core concept in machine learning and are used primarily in the processing of image or audio data. “The convolution operation between input data and one or more filters – which can identify edges in an image, for example – is well suited to our matrix architecture,” says Johannes Feldmann, now based at the University of Oxford Department of Materials. Nathan Youngblood (Oxford University) adds: “Exploiting wavelength multiplexing permits higher data rates and computing densities, i.e. operations per area of processor, not previously attained.”
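
The parallelism Youngblood describes can be mimicked numerically. In the sketch below (NumPy standing in for the optics; the matrix sizes are arbitrary), each "wavelength" carries its own input vector, and a single matrix product processes all channels at once, the way all comb lines pass through the photonic crossbar simultaneously.

```python
import numpy as np

rng = np.random.default_rng(0)

weights = rng.normal(size=(4, 6))        # convolution-filter weights stored on-chip
wavelengths = rng.normal(size=(8, 6))    # 8 comb lines, one input vector each

# "Parallel" product: all 8 wavelength channels multiplied in one operation.
parallel_out = wavelengths @ weights.T

# Equivalent sequential computation, one wavelength at a time.
sequential_out = np.stack([weights @ x for x in wavelengths])

assert np.allclose(parallel_out, sequential_out)
print(parallel_out.shape)  # (8, 4)
```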

“This work is a real showcase of European collaborative research,” says David Wright at the University of Exeter, who leads the EU project FunComp, which funded the work. “Whilst every research group involved is world-leading in their own way, it was bringing all these parts together that made this work truly possible.”

The study is published in Nature this week and has far-reaching applications: faster, more energy-efficient parallel processing of data in artificial intelligence; larger neural networks for more accurate forecasts and more precise data analysis; processing of large amounts of clinical data for diagnoses; faster evaluation of sensor data in self-driving vehicles; and expanded cloud-computing infrastructures with more storage space, computing power, and application software.

This work was funded by EPSRC, the Deutsche Forschungsgemeinschaft (DFG), the European Research Council, the European Union’s Horizon 2020 Research and Innovation Programme (Fun-COMP), and the Studienstiftung des deutschen Volkes.

Reference: J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A.S. Raja, J. Liu, C.D. Wright, A. Sebastian, T.J. Kippenberg, W.H.P. Pernice, H. Bhaskaran. Parallel convolution processing using an integrated photonic tensor core. Nature, 6 January 2021. https://doi.org/10.1038/s41586-020-03070-1

Provided by EPFL

Ferrofluid Surface Simulations Go More Than Skin Deep (Computer Science / Engineering)

Computer models efficiently and accurately simulate the magnetic responses of ferrofluids by considering only the fluid’s surface.

The spiky structure that erupts from the smooth surface of a ferrofluid when a magnet is brought close can be predicted more accurately than previously thought. KAUST researchers have shown that computational algorithms can calculate the ferrofluid’s bristling response to a magnet by simulating only the liquid’s surface layer.

The magnetic responses of ferrofluids can be modeled to expand their use in a broader range of fields such as advanced electronics and nanomedicine. © 2021 KAUST

Ferrofluids are liquid suspensions of iron-based particles that behave like a regular fluid, but once a magnet is present, the ferrofluid rapidly shape-shifts to form spikes that align with the magnetic field. Originally developed by NASA, ferrofluids have numerous uses ranging from advanced electronics to nanomedicine and have the potential for even broader use, if their magnetic responses could be predicted more accurately.

Dominik Michels and his team are applying computer simulations to model ferrofluid behavior. “Our aim is to develop an efficient and accurate algorithm to simulate the macroscopic shapes and dynamic movement of ferrofluids,” says Libo Huang, a Ph.D. student in Michels’ team.

Recently, looking at the wider field of fluid simulation, the team has shown that the concept of simulating fluid motion by considering only the liquid’s surface can be adapted to ferrofluids.

Video: KAUST researchers are applying computer simulations to model ferrofluid behavior with the goal of developing an efficient and accurate algorithm to simulate the macroscopic shapes and dynamic movement of ferrofluids. 2021 KAUST; Anastasia Serin.

“While the surface-only liquid simulation provides a platform for fluid simulation, its extension to ferrofluids is significant,” Huang says. To model a fluid’s behavior based only on its surface, the liquid must respond to inputs in a simple linear fashion. Most ferrofluids have a complex non-linear response to a magnetic field.

However, the team showed that as long as the magnetic field is not too strong, the response is close to linear, enabling them to perform a surface-only calculation of the magnetic field response.
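
That linear-regime argument is easy to check numerically. A ferrofluid's magnetization classically follows the Langevin function, which is approximately linear (L(x) ≈ x/3) for weak fields; the sketch below (illustrative, not the paper's solver) compares the nonlinear curve with its linear approximation.

```python
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x: the classic (nonlinear)
    magnetization curve of a ferrofluid; L(x) ~ x/3 for small x."""
    if abs(x) < 1e-6:
        return x / 3.0          # series expansion avoids 0/0 at the origin
    return 1.0 / math.tanh(x) - 1.0 / x

# For weak fields the nonlinear response stays close to its linear
# approximation x/3, which is what justifies a surface-only (linear) solver;
# for strong fields the approximation breaks down.
for h in [0.1, 0.5, 1.0, 3.0]:
    nonlinear, linear = langevin(h), h / 3.0
    print(h, nonlinear, linear, abs(nonlinear - linear) / nonlinear)
```

At h = 0.1 the relative error is well under one percent; at h = 3 it is tens of percent, which is why the method is restricted to fields that are "not too strong."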

When a magnet is brought close to a ferrofluid, the ferrofluid shape-shifts to form spikes that align with the magnetic field. Reproduced with permission from © 2020 Huang et al.

In the simulation, the researchers represented the liquid surface as a series of triangles, Huang explains. “The representation of ferrofluids as surface triangles allowed us to accurately estimate the curvature of the liquid interface as well as the interface position,” he says. The spike structure can be simulated by calculating the interplay between the magnetic force and the liquid’s surface tension.

Considering only the fluid’s surface, rather than its entire volume, made the simulation far more computationally efficient, enabling more accurate simulation of the complex ferrofluid behavior. “We were able to reproduce the distance between spikes of the real fluid’s spike pattern in an accurate quantitative fashion,” Michels says. “We could simulate much more complex dynamic motion.”

The next step could be to extend the work to include nonlinear magnetic relationships, Huang says.

Reference: Huang, L., Michels, D.L. Surface-only ferrofluids. ACM Transactions on Graphics 39, 174 (2020). https://doi.org/10.1145/3414685.3417799

Provided by KAUST

AI Algorithms Detect Diabetic Eye Disease Inconsistently (Ophthalmology / Medicine)

Although some artificial intelligence software tested reasonably well, only one met the performance of human screeners, researchers found.

Diabetes continues to be the leading cause of new cases of blindness among adults in the United States. But the current shortage of eye-care providers would make it impossible to keep up with the demand for the requisite annual screenings of this population. A new study examines the effectiveness of seven artificial intelligence-based screening algorithms for diagnosing diabetic retinopathy, the most common diabetic eye disease leading to vision loss.

Diabetic retinopathy is the most common diabetic eye disease leading to vision loss. © Gettyimages

In a paper published Jan. 5 in Diabetes Care, researchers compared the algorithms against the diagnostic expertise of retina specialists. Five companies produced the tested algorithms – two in the United States (Eyenuk, Retina-AI Health), one in China (Airdoc), one in Portugal (Retmarker), and one in France (OphtAI).

The researchers deployed the algorithm-based technologies on retinal images from nearly 24,000 veterans who sought diabetic retinopathy screening at the Veterans Affairs Puget Sound Health Care System and the Atlanta VA Health Care System from 2006 to 2018.

The researchers found that the algorithms don’t perform as well as claimed. Many of these companies report excellent results in clinical studies, but their performance in a real-world setting was unknown. The researchers therefore compared the performance of each algorithm, and of the human screeners who work in the VA teleretinal screening system, against the diagnoses that expert ophthalmologists gave when looking at the same images. Three of the algorithms performed reasonably well against the physicians’ diagnoses and one did worse, but only one algorithm performed as well as the human screeners.
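
A comparison of this kind comes down to computing sensitivity and specificity against the expert reference standard. The sketch below (with made-up toy labels, not the study's data) shows the calculation.

```python
def screening_performance(predictions, reference):
    """Sensitivity and specificity of a screener against expert reference
    diagnoses (True = referable diabetic retinopathy)."""
    tp = sum(1 for p, r in zip(predictions, reference) if p and r)
    tn = sum(1 for p, r in zip(predictions, reference) if not p and not r)
    fn = sum(1 for p, r in zip(predictions, reference) if not p and r)
    fp = sum(1 for p, r in zip(predictions, reference) if p and not r)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical toy labels, not the study's data.
reference = [True, True, True, False, False, False, False, False]
algorithm = [True, True, False, False, False, True, False, False]
sens, spec = screening_performance(algorithm, reference)
print(sens, spec)  # 2/3 sensitivity, 4/5 specificity
```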

“It’s alarming that some of these algorithms are not performing consistently since they are being used somewhere in the world,” said lead researcher Aaron Lee, assistant professor of ophthalmology at the University of Washington School of Medicine.

Differences in camera equipment and technique might be one explanation. Researchers said their trial shows how important it is for any practice that wants to use an AI screener to test it first and to follow the guidelines about how to properly obtain images of patients’ eyes, because the algorithms are designed to work with a minimum quality of images.  

The study also found that the algorithms’ performance varied when analyzing images from patient populations in Seattle and Atlanta care settings. This was a surprising result and may indicate that the algorithms need to be trained with a wider variety of images.

This study was supported by NIH/NEI K23EY029246, R01AG060942 and an unrestricted grant from Research to Prevent Blindness.

Reference: Aaron Y. Lee, Ryan T. Yanagihara, Cecilia S. Lee, Marian Blazes, Hoon C. Jung, Yewlin E. Chee, Michael D. Gencarella, Harry Gee, April Y. Maa, Glenn C. Cockerham, Mary Lynch, Edward J. Boyko, “Multicenter, Head-to-Head, Real-World Validation Study of Seven Automated Artificial Intelligence Diabetic Retinopathy Screening Systems”, Diabetes Care 2021 Jan; dc201877. https://care.diabetesjournals.org/content/early/2021/01/01/dc20-1877

Provided by UW Medicine

Quick Look Under the Skin (Medicine)

Self-learning algorithms analyze medical imaging data.

Imaging techniques enable a detailed look inside an organism. But interpreting the data is time-consuming and requires a great deal of experience. Artificial neural networks open up new possibilities: They require just seconds to interpret whole-body scans of mice and to segment and depict the organs in colors, instead of in various shades of gray. This facilitates the analysis considerably.

Thanks to artificial intelligence, the AIMOS software is able to recognize bones and organs on three-dimensional grayscale images and segments them, which makes the subsequent evaluation considerably easier. Image: Astrid Eckert / TUM

How big is the liver? Does it change if medication is taken? Is the kidney inflamed? Is there a tumor in the brain, and have metastases already developed? To answer such questions, bioscientists and doctors have had to screen and interpret a wealth of data.

“The analysis of three-dimensional imaging processes is very complicated,” explains Oliver Schoppe. Together with an interdisciplinary research team, the TUM researcher has now developed self-learning algorithms to in future help analyze bioscientific image data.

At the core of the AIMOS software – the abbreviation stands for AI-based Mouse Organ Segmentation – are artificial neural networks that, like the human brain, are capable of learning. “You used to have to tell computer programs exactly what you wanted them to do,” says Schoppe. “Neural networks don’t need such instructions: it’s sufficient to train them by presenting a problem and a solution multiple times. Gradually, the algorithms start to recognize the relevant patterns and are able to find the right solutions themselves.”

Training self-learning algorithms

In the AIMOS project, the algorithms were trained with the help of images of mice. The objective was to assign the image points from the 3D whole-body scan to specific organs, such as stomach, kidneys, liver, spleen, or brain. Based on this assignment, the program can then show the exact position and shape.

“We were lucky enough to have access to several hundred images of mice from a different research project, all of which had already been interpreted by two biologists,” recalls Schoppe. The team also had access to fluorescence microscopic 3D scans from the Institute for Tissue Engineering and Regenerative Medicine at the Helmholtz Zentrum München.

Through a special technique, the researchers were able to completely remove the dye from mice that were already deceased. The transparent bodies could be imaged with a microscope, step by step and layer by layer. The distances between the measuring points were only six micrometers – equivalent to the size of a cell. Biologists had also localized the organs in these datasets.

Artificial intelligence improves accuracy

At TranslaTUM, the computer scientists presented the data to their new algorithms, which learned faster than expected, Schoppe reports: “We only needed around ten whole-body scans before the software was able to successfully analyze the image data on its own – and within a matter of seconds. It takes a human hours to do this.”

The team then checked the reliability of the artificial intelligence with the help of 200 further whole-body scans of mice. “The result shows that self-learning algorithms are not only faster at analyzing biological image data than humans, but also more accurate,” sums up Professor Bjoern Menze, head of the Image-Based Biomedical Modeling group at TranslaTUM at the Technical University of Munich.
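
Segmentation accuracy of this kind is commonly scored with an overlap metric such as the Dice coefficient. The sketch below (illustrative toy voxels, not the paper's evaluation code) shows the idea.

```python
def dice(seg_a, seg_b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two voxel sets;
    1.0 means perfect overlap with the expert annotation."""
    a, b = set(seg_a), set(seg_b)
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical voxel coordinates for one organ, predicted vs. annotated.
predicted = {(1, 1, 1), (1, 1, 2), (1, 2, 1), (2, 1, 1)}
annotated = {(1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2)}
print(dice(predicted, annotated))  # 0.75
```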

The intelligent software is to be used in the future primarily in basic research: “Images of mice are vital for investigating the effects of new medication, for example, before it is given to humans. Using self-learning algorithms to analyze image data will save a lot of time in the future,” emphasizes Menze.

Reference: Oliver Schoppe, Chenchen Pan, Javier Coronel, Hongcheng Mai, Zhouyi Rong, Mihail Ivilinov Todorov, Annemarie Müskes, Fernando Navarro, Hongwei Li, Ali Ertürk, Bjoern H. Menze. Deep learning-enabled multi-organ segmentation in whole-body mouse scans. Nature Communications, 6 November 2020. https://doi.org/10.1038/s41467-020-19449-7

Provided by TUM

Artificial Intelligence Classifies Supernova Explosions With Unprecedented Accuracy (Astronomy)

A new machine learning algorithm trained only with real data has classified over 2,300 supernovae with over 80% accuracy.

Artificial intelligence is classifying real supernova explosions without the traditional use of spectra, thanks to a team of astronomers at the Center for Astrophysics | Harvard & Smithsonian. The complete data sets and resulting classifications are publicly available for open use.

Cassiopeia A, or Cas A, is a supernova remnant located 10,000 light years away in the constellation Cassiopeia, and is the remnant of a once massive star that died in a violent explosion roughly 340 years ago. This image layers infrared, visible, and X-ray data to reveal filamentary structures of dust and gas. Cas A is amongst the 10-percent of supernovae that scientists are able to study closely. CfA’s new machine learning project will help to classify thousands, and eventually millions, of potentially interesting supernovae that may otherwise never be studied. Credit: NASA/JPL-Caltech/STScI/CXC/SAO

By training a machine learning model to categorize supernovae based on their visible characteristics, the astronomers were able to classify real data from the Pan-STARRS1 Medium Deep Survey for 2,315 supernovae with an accuracy rate of 82-percent without the use of spectra.

The astronomers developed a software program that classifies different types of supernovae based on their light curves, or how their brightness changes over time. “We have approximately 2,500 supernovae with light curves from the Pan-STARRS1 Medium Deep Survey, and of those, 500 supernovae with spectra that can be used for classification,” said Griffin Hosseinzadeh, a postdoctoral researcher at the CfA and lead author on the first of two papers published in The Astrophysical Journal. “We trained the classifier using those 500 supernovae to classify the remaining supernovae where we were not able to observe the spectrum.”

Edo Berger, an astronomer at the CfA explained that by asking the artificial intelligence to answer specific questions, the results become increasingly more accurate. “The machine learning looks for a correlation with the original 500 spectroscopic labels. We ask it to compare the supernovae in different categories: color, rate of evolution, or brightness. By feeding it real existing knowledge, it leads to the highest accuracy, between 80- and 90-percent.”
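
The approach Berger describes, classifying from photometric properties such as brightness and rate of evolution, can be sketched with a minimal nearest-centroid classifier. The features and labels below are invented for illustration; the actual pipelines (Superphot and SuperRAENN) are far more sophisticated.

```python
import math

def extract_features(light_curve):
    """Two toy summary features of a (time, brightness) light curve:
    peak brightness and overall rise/decline slope."""
    times, fluxes = zip(*light_curve)
    peak = max(fluxes)
    slope = (fluxes[-1] - fluxes[0]) / (times[-1] - times[0])
    return (peak, slope)

def nearest_centroid(train, query):
    """Assign the label of the closest class centroid in feature space."""
    centroids = {
        label: tuple(sum(f[i] for f in feats) / len(feats) for i in range(2))
        for label, feats in train.items()
    }
    return min(centroids, key=lambda label: math.dist(centroids[label], query))

# Invented training features (peak brightness, slope) for two classes.
train = {"Ia": [(1.0, -0.05), (1.1, -0.06)],
         "II": [(0.4, -0.01), (0.5, -0.02)]}
query = extract_features([(0.0, 0.2), (10.0, 1.05), (60.0, 0.0)])
print(query, nearest_centroid(train, query))
```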

Although this is not the first machine learning project for supernova classification, it is the first time that astronomers have had access to a real data set large enough to train an artificial intelligence-based supernova classifier, making it possible to create machine learning algorithms without the use of simulations.

“If you make a simulated light curve, it means you are making an assumption about what supernovae will look like, and your classifier will then learn those assumptions as well,” said Hosseinzadeh. “Nature will always throw some additional complications in that you did not account for, meaning that your classifier will not do as well on real data as it did on simulated data. Because we used real data to train our classifiers, it means our measured accuracy is probably more representative of how our classifiers will perform on other surveys.” As the classifier categorizes the supernovae, said Berger, “We will be able to study them both in retrospect and in real-time to pick out the most interesting events for detailed follow up. We will use the algorithm to help us pick out the needles and also to look at the haystack.”

The project has implications not only for archival data, but also for data that will be collected by future telescopes. The Vera C. Rubin Observatory is expected to go online in 2023, and will lead to the discovery of millions of new supernovae each year. This presents both opportunities and challenges for astrophysicists, where limited telescope time leads to limited spectral classifications.

“When the Rubin Observatory goes online it will increase our discovery rate of supernovae by 100-fold, but our spectroscopic resources will not increase,” said Ashley Villar, a Simons Junior Fellow at Columbia University and lead author on the second of the two papers, adding that while roughly 10,000 supernovae are currently discovered each year, scientists only take spectra of about 10-percent of those objects. “If this holds true, it means that only 0.1-percent of supernovae discovered by the Rubin Observatory each year will get a spectroscopic label. The remaining 99.9-percent of data will be unusable without methods like ours.”

Unlike past efforts, where data sets and classifications have been available to only a limited number of astronomers, the data sets from the new machine learning algorithm will be made publicly available. The astronomers have created easy-to-use, accessible software, and also released all of the data from Pan-STARRS1 Medium Deep Survey along with the new classifications for use in other projects. Hosseinzadeh said, “It was really important to us that these projects be useful for the entire supernova community, not just for our group. There are so many projects that can be done with these data that we could never do them all ourselves.” Berger added, “These projects are open data for open science.”

This project was funded in part by a grant from the National Science Foundation (NSF) and the Harvard Data Science Initiative (HDSI).

References: (1) A. Villar et al. SuperRAENN: A semi-supervised supernova photometric classification pipeline trained on Pan-STARRS1 Medium Deep Survey supernovae. The Astrophysical Journal, 17 December 2020. https://doi.org/10.3847/1538-4357/abc6fd (preprint: https://arxiv.org/pdf/2008.04921.pdf). (2) G. Hosseinzadeh et al. Photometric classification of 2315 Pan-STARRS1 supernovae with Superphot. The Astrophysical Journal, 17 December 2020. https://doi.org/10.3847/1538-4357/abc42b (preprint: https://arxiv.org/pdf/2008.04912.pdf)

Provided by Harvard-Smithsonian Center for Astrophysics

Inducing Accurately Controlled ‘Fever’ in Tumors to Fight Cancer (Medicine)

Heating tumors can greatly enhance the effect of radio- and chemotherapies for cancer. This increases the chances of recovery and helps reduce the use of radiation and drugs, leading to fewer side effects for patients. An effective way of heating tumors is via ultrasound, which is noninvasive and ensures pinpoint accuracy. To optimize the cancer-killing effects of radio- and chemotherapies using thermal ultrasound treatments, Ph.D. candidate Daniel Deenen has developed self-learning algorithms that automatically steer the beams based on real-time tumor temperature measurements. This new method could drastically improve patients’ chances of recovery from cancer. Deenen defended his thesis on Thursday 3 December.

Using ultrasound to heat up tumors decreases the need for toxic treatments like radiation and chemotherapy. Credit: Eindhoven University of Technology

With more than 18 million new cases and 9 million deaths each year, cancer is the second leading cause of death globally. In fact, in the Netherlands it is the leading cause, with almost one in three deaths due to cancer. In addition to surgery, cancer treatments typically consist of radio- and chemotherapy, which, due to their toxicity, are limited in their admissible dose and can cause severe side effects for surviving patients.

Fever temperatures

In hyperthermia therapy, tumors are heated to fever temperatures of about 42 °C for 60 minutes or more, which significantly enhances the therapeutic efficacy of radio- and chemotherapies without causing additional toxicity or side effects. As a result, hyperthermia can be used to substantially improve the chance of disease-free, long-term survival, or to allow lower systemic doses of radiation and drugs, reducing the severity of the negative side effects typically associated with cancer treatment. However, accurately controlling the tumor temperature is essential for a successful hyperthermia treatment outcome.

In magnetic-resonance-guided high-intensity focused ultrasound (MR-HIFU) hyperthermia treatments, powerful and millimeter-accurate heating is applied via ultrasound waves, while the tumor temperature is measured in real time using an MRI scanner. This allows for a completely noninvasive treatment, which greatly contributes to the patients’ comfort and well-being.

The goal of Deenen’s Ph.D. research was to develop algorithms, or ‘controllers’, that automatically steer the HIFU beams based on the current tumor temperature measurements in such a manner that the tumor temperature, and therefore the cancer-killing effect, is optimized. These algorithms learn the tumor’s thermal behavior from measurement data and then adapt the HIFU steering accordingly. The result is personalized hyperthermia treatments in which accurate and safe heating is ensured.
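Deenen’s thesis describes model predictive controllers identified from MRI data; as a much-simplified, hypothetical sketch of the underlying idea of steering heating power from temperature feedback, here is a toy one-step controller for an invented first-order thermal model (all parameter values are made up for illustration):

```python
# Hypothetical sketch of feedback-controlled heating toward a 42 degC
# hyperthermia setpoint. The real controllers are model predictive
# controllers identified from MRI measurements; the toy thermal model
# and every number below are invented purely for illustration.
T_BODY, SETPOINT = 37.0, 42.0

def plant(T, u, a=0.05, b=0.8):
    """Toy tissue model: passive cooling toward body temperature plus
    heating proportional to the applied ultrasound power u."""
    return T + a * (T_BODY - T) + b * u

def control(T, a_hat=0.05, b_hat=0.8, u_max=1.0):
    """One-step-ahead law: choose the power that the estimated model
    predicts will land exactly on the setpoint, within actuator limits."""
    u = (SETPOINT - T - a_hat * (T_BODY - T)) / b_hat
    return min(max(u, 0.0), u_max)

T = T_BODY
for _ in range(50):
    T = plant(T, control(T))
print(round(T, 1))  # settles at the 42.0 degC setpoint
```

The learning aspect of the real work corresponds to estimating the thermal parameters (here `a_hat`, `b_hat`) from measurement data rather than assuming them.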

This is extremely important in practice, since every patient and tumor is different and may even change over time. Deenen also designed controllers that enable the optimal treatment of larger tumors than was previously possible. He tested these algorithms on a clinical MR-HIFU setup at, and in collaboration with, the University Hospital of Cologne (Germany), using artificial tissue-mimicking models (so-called phantoms) and in-vivo experiments on large animals, showing that they can be applied successfully in the clinic.

Clinical trials

This Ph.D. research is an important step towards enabling optimal, safe, and effective hyperthermia treatments for people who suffer from cancer. In fact, clinical trials for MR-HIFU hyperthermia are currently being prepared at the University Hospital of Cologne. This is a true advancement in the medical field, and in turn could drastically improve patients’ chances of recovery from cancer.

Reference: Daniel Deenen, “Model predictive control for MR-guided ultrasound hyperthermia in cancer therapy”, Technische Universiteit Eindhoven, 2020. https://research.tue.nl/en/publications/model-predictive-control-for-mr-guided-ultrasound-hyperthermia-in

Provided by Eindhoven University of Technology

Could Your Vacuum Be Listening To You? (Computer Science / Engineering)

Researchers hacked a robotic vacuum cleaner to record speech and music remotely.

A team of researchers demonstrated that popular robotic household vacuum cleaners can be remotely hacked to act as microphones.

The researchers–including Nirupam Roy, an assistant professor in the University of Maryland’s Department of Computer Science–collected information from the laser-based navigation system in a popular vacuum robot and applied signal processing and deep learning techniques to recover speech and identify television programs playing in the same room as the device.

Researchers repurposed the laser-based navigation system on a vacuum robot (right) to pick up sound vibrations and capture human speech bouncing off objects like a trash can placed near a computer speaker on the floor. ©Sriram Sami

The research demonstrates the potential for any device that uses light detection and ranging (Lidar) technology to be manipulated for collecting sound, despite not having a microphone. This work, a collaboration with assistant professor Jun Han at the National University of Singapore, was presented at the Association for Computing Machinery’s Conference on Embedded Networked Sensor Systems (SenSys 2020) on November 18, 2020.

“We welcome these devices into our homes, and we don’t think anything about it,” said Roy, who holds a joint appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS). “But we have shown that even though these devices don’t have microphones, we can repurpose the systems they use for navigation to spy on conversations and potentially reveal private information.”

The Lidar navigation systems in household vacuum bots shine a laser beam around a room and sense the reflection of the laser as it bounces off nearby objects. The robot uses the reflected signals to map the room and avoid collisions as it moves through the house.
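The ranging principle behind such a system can be sketched in a couple of lines; for a time-of-flight Lidar, the distance is half the round-trip time multiplied by the speed of light (some consumer vacuum Lidars use laser triangulation instead, and the numbers here are illustrative only):

```python
# Time-of-flight ranging: a laser pulse travels out to an obstacle and
# back, so the one-way distance is half the round trip at light speed.
C = 299_792_458.0  # speed of light, m/s

def distance_m(round_trip_s):
    return C * round_trip_s / 2

print(distance_m(10.0e-9))  # a ~10 ns echo corresponds to ~1.5 m
```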

Privacy experts have suggested that the maps made by vacuum bots, which are often stored in the cloud, pose potential privacy breaches that could give advertisers access to information about such things as home size, which suggests income level, and other lifestyle-related information. Roy and his team wondered if the Lidar in these robots could also pose potential security risks as sound recording devices in users’ homes or businesses.

Sound waves cause objects to vibrate, and these vibrations cause slight variations in the light bouncing off an object. Laser microphones, used in espionage since the 1940s, are capable of converting those variations back into sound waves. But laser microphones rely on a targeted laser beam reflecting off very smooth surfaces, such as glass windows.

A vacuum Lidar, on the other hand, scans the environment with a laser and senses the light scattered back by objects that are irregular in shape and density. The scattered signal received by the vacuum’s sensor provides only a fraction of the information needed to recover sound waves. The researchers were unsure if a vacuum bot’s Lidar system could be manipulated to function as a microphone and if the signal could be interpreted into meaningful sound signals.
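The core difficulty described above can be illustrated with a toy signal: a reflected-light trace is dominated by a large static reflection plus sensor noise, with only a faint vibration-induced ripple riding on top. A minimal sketch, with every number invented for illustration (the real pipeline layers heavy signal processing and deep learning on top of this idea):

```python
import numpy as np

# Simulate one second of reflected-light intensity: a large static
# reflection (5.0), a faint 440 Hz vibration-induced ripple, and noise.
rng = np.random.default_rng(0)
fs = 4000                                    # sample rate, Hz
t = np.arange(fs) / fs
ripple = 0.01 * np.sin(2 * np.pi * 440 * t)  # faint acoustic vibration
trace = 5.0 + ripple + 0.002 * rng.standard_normal(fs)

# The baseline here is constant, so subtracting the mean acts as a crude
# high-pass filter; a real trace would need proper filtering.
recovered = trace - trace.mean()

# The dominant frequency of the recovered signal matches the vibration.
spectrum = np.abs(np.fft.rfft(recovered))
peak_hz = np.fft.rfftfreq(fs, 1 / fs)[spectrum.argmax()]
print(peak_hz)  # 440.0
```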

Deep learning algorithms were able to interpret scattered sound waves, such as those above that were captured by a robot vacuum, to identify numbers and musical sequences. ©Sriram Sami

First, the researchers hacked a robot vacuum to show they could control the position of the laser beam and send the sensed data to their laptops through Wi-Fi without interfering with the device’s navigation.

Next, they conducted experiments with two sound sources. One source was a human voice reciting numbers played over computer speakers and the other was audio from a variety of television shows played through a TV sound bar. Roy and his colleagues then captured the laser signal sensed by the vacuum’s navigation system as it bounced off a variety of objects placed near the sound source. Objects included a trash can, cardboard box, takeout container and polypropylene bag–items that might normally be found on a typical floor.

The researchers passed the signals they received through deep learning algorithms that were trained to either match human voices or to identify musical sequences from television shows. Their computer system, which they call LidarPhone, identified and matched spoken numbers with 90% accuracy. It also identified television shows from a minute’s worth of recording with more than 90% accuracy.

“This type of threat may be more important now than ever, when you consider that we are all ordering food over the phone and having meetings over the computer, and we are often speaking our credit card or bank information,” Roy said. “But what is even more concerning for me is that it can reveal much more personal information. This kind of information can tell you about my living style, how many hours I’m working, other things that I am doing. And what we watch on TV can reveal our political orientations. That is crucial for someone who might want to manipulate the political elections or target very specific messages to me.”

The researchers emphasize that vacuum cleaners are just one example of potential vulnerability to Lidar-based spying. Many other devices could be open to similar attacks, such as smartphone infrared sensors used for face recognition or passive infrared sensors used for motion detection.

“I believe this is significant work that will make the manufacturers aware of these possibilities and trigger the security and privacy community to come up with solutions to prevent these kinds of attacks,” Roy said.

Provided by University of Maryland