Tag Archives: #algorithm

Astronomers Presented An Algorithm Which Computes N-point Correlation Functions Faster Than Ever (Cosmology / Instrumentation)

Ever wanted to do cosmology from the four-point or the five-point function? Oliver Philcox and colleagues introduced a new estimator which computes the N-point galaxy correlation functions of Ng galaxies in O(Ng^2) time, *far* faster than the naive O(Ng^N) scaling!

𝑁-point correlation functions (NPCFs), or their Fourier-space counterparts, the polyspectra, are amongst the most powerful tools in the survey analyst’s workshop. These encode the statistical properties of the galaxy overdensity field at sets of 𝑁 positions, and may be compared to data to give constraints on properties such as the Universe’s expansion rate and composition.

Most inflationary theories predict density fluctuations in the early Universe to follow Gaussian statistics; in this case all the information is contained within the two-point correlation function (2PCF), or, equivalently, the power spectrum. For a homogeneous and isotropic Universe, both are simple functions of one variable, and have been the subject of almost all galaxy survey analyses to date.

The late-time Universe is far from Gaussian. Statistics beyond the 2PCF are of importance since the bulk motion of matter in structure formation causes a cascade of information from the 2PCF to higher-order statistics, such as the three- and four-point functions (3PCF and 4PCF).

“Correlation functions form the cornerstone of modern cosmology, and their efficient computation is a task of utmost importance for the analysis of current and future galaxy surveys.”

Now, Oliver Philcox and colleagues presented a new estimator/algorithm for efficiently computing the N-point galaxy correlation functions of Ng galaxies in O(Ng^2) time, *far* faster than the naive O(Ng^N) scaling!

By decomposing the N-point correlation function (NPCF) into an angular basis composed of products of spherical harmonics, the estimator becomes *separable* in r1, r2, r3, etc. It can be computed as a weighted sum of *pairs* of galaxies, for any N.
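
To make the idea concrete, here is a minimal Python sketch (not the authors' encore code, which is written in C++) of how the harmonic decomposition turns the 3PCF estimate into a sum over galaxy pairs: for each "primary" galaxy one accumulates harmonic coefficients a_{ℓm}(r) over its neighbours, then combines coefficients from two radial bins. Galaxy weights, edge corrections and normalisation constants are omitted.

```python
# Hedged illustration of the pair-count structure behind harmonic NPCF estimators.
import numpy as np
from scipy.special import sph_harm

def harmonic_coefficients(primary, others, r_edges, ell_max):
    """a_{ell m}(r bin) about one primary galaxy: a sum of conjugate
    spherical harmonics over its neighbours, binned in separation r."""
    d = others - primary                                  # separation vectors
    r = np.linalg.norm(d, axis=1)
    polar = np.arccos(np.clip(d[:, 2] / np.maximum(r, 1e-12), -1.0, 1.0))
    azim = np.arctan2(d[:, 1], d[:, 0])
    rbin = np.digitize(r, r_edges) - 1                    # -1 / nbins = outside bins
    nbins = len(r_edges) - 1
    a = np.zeros((nbins, ell_max + 1, 2 * ell_max + 1), dtype=complex)
    for ell in range(ell_max + 1):
        for m in range(-ell, ell + 1):
            # scipy convention: sph_harm(order, degree, azimuth, polar)
            ylm = np.conj(sph_harm(m, ell, azim, polar))
            for b in range(nbins):
                a[b, ell, m + ell_max] = ylm[rbin == b].sum()
    return a

def three_pcf_multipoles(galaxies, r_edges, ell_max=2):
    """Isotropic 3PCF multipoles zeta_ell(r1, r2), up to normalisation:
    sum over primaries of a_{ell m}(r1) * conj(a_{ell m}(r2)), summed over m.
    The loop over primaries and their neighbours is the O(Ng^2) part."""
    nbins = len(r_edges) - 1
    zeta = np.zeros((ell_max + 1, nbins, nbins))
    for i in range(len(galaxies)):
        others = np.delete(galaxies, i, axis=0)
        a = harmonic_coefficients(galaxies[i], others, r_edges, ell_max)
        for ell in range(ell_max + 1):
            zeta[ell] += np.einsum('im,jm->ij', a[:, ell, :],
                                   np.conj(a[:, ell, :])).real
    return zeta

# toy usage: 200 random galaxy positions in a unit box, 5 radial bins
rng = np.random.default_rng(0)
gals = rng.random((200, 3))
zeta = three_pcf_multipoles(gals, r_edges=np.linspace(0.05, 0.3, 6))
```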

© Oliver Philcox et al

The algorithm is included in their new code *encore*: https://github.com/oliverphilcox/encore

It is written in C++ and computes the 3PCF, 4PCF, 5PCF and 6PCF of a BOSS-like galaxy survey in ~100 CPU-hours, including applying corrections for the non-uniform survey geometry. It can also be run on a GPU!

While the complexity is technically O(Ng^2), for N>3 they found in practice that the computation time scales *linearly* with the number of galaxies unless the density is very high! The figure below shows the measurement of a few 5PCF components:

© Oliver Philcox et al.

This will allow future surveys like Euclid, DESI, and Roman to include higher-point functions in their analyses, giving sharper constraints on cosmological parameters, and testing new physics such as parity-violation!

Featured image: Strong scaling of the encore code: dependence of runtime, 𝑇 , on the number of CPU cores on a single node for different test cases. Dashed lines indicate linear relationships, and are calibrated at the single-CPU time © Philcox et al.


Reference: Oliver H. E. Philcox, Zachary Slepian, Jiamin Hou, Craig Warner, Robert N. Cahn, Daniel J. Eisenstein, "ENCORE: Estimating Galaxy N-point Correlation Functions in O(N_g^2) Time", arXiv preprint, pp. 1-24, 2021. https://arxiv.org/abs/2105.08722



A Scientist From HSE University Has Developed an Image Recognition Algorithm (Computer Science)

A scientist from HSE University has developed an image recognition algorithm that works 40% faster than analogues. It can speed up real-time processing of video-based image recognition systems. The results of the study have been published in the journal Information Sciences.

Convolutional neural networks (CNNs), which consist of a sequence of convolutional layers, are widely used in computer vision. Each layer in a network has an input and an output. The digital description of the image goes to the input of the first layer and is converted into a different set of numbers at the output. The result goes to the input of the next layer, and so on, until the class label of the object in the image is predicted in the last layer. For example, this class can be a person, a cat, or a chair. For this, a CNN is trained on a set of images with known class labels. The greater the number and variability of the images of each class in the dataset, the more accurate the trained network will be.

If there are only a few examples in the training set, additional training (fine-tuning) of the neural network is used: the CNN is first trained on a similar, larger dataset and then adapted to the original problem. For example, when a neural network learns to recognize faces or their attributes (emotions, gender, age), it is first pre-trained to identify celebrities from their photos. The resulting neural network is then fine-tuned on the available small dataset to identify the faces of family members or relatives in home video surveillance systems. The deeper a CNN is, i.e., the more layers it has, the more accurately it predicts the type of object in the image. However, increasing the number of layers also increases the time required to recognize objects.
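
As an aside, the fine-tuning workflow described above typically looks like the following hedged sketch; torchvision's ResNet-18 is used here purely as a stand-in for the pre-trained face-recognition networks mentioned in the article.

```python
# Hedged sketch of fine-tuning a pre-trained CNN on a small dataset.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pre-trained network
for param in model.parameters():
    param.requires_grad = False                   # freeze the pre-trained layers
model.fc = nn.Linear(model.fc.in_features, 5)     # new head, e.g. 5 family members
# only model.fc.parameters() are then passed to the optimiser for fine-tuning
```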

The study’s author, Professor Andrey Savchenko of the HSE Campus in Nizhny Novgorod, was able to speed up the work of a pre-trained convolutional neural network with arbitrary architecture, consisting of 90-780 layers in his experiments. The result was an increase in recognition speed of up to 40%, while controlling the loss in accuracy to no more than 0.5-1%. The scientist relied on statistical methods such as sequential analysis and multiple comparisons (multiple hypothesis testing).

“The decision in the image recognition problem is made by a classifier — a special mathematical algorithm that receives an array of numbers (features/embeddings of an image) as inputs, and outputs a prediction about which class the image belongs to. The classifier can be applied by feeding it the outputs of any layer of the neural network. To recognize “simple” images, the classifier only needs to analyse the data (outputs) from the first layers of the neural network.

There is no need to waste further time if we are already confident in the reliability of the decision made. For “complex” pictures, the first layers are clearly not enough — you need to move on to the next. Therefore, classifiers were added at several intermediate layers of the neural network. Depending on the complexity of the input image, the proposed algorithm decides whether to continue recognition or stop. Since it is important to control errors in such a procedure, I applied the theory of multiple comparisons: I introduced multiple hypotheses about which intermediate layer to stop at, and tested these hypotheses sequentially,” explained Professor Savchenko.

If the first classifier already produced a decision that was considered reliable by the multiple hypothesis testing procedure, the algorithm stopped. If the decision was declared unreliable, the calculations in the neural network continued to the intermediate layer, and the reliability check was repeated.
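
A minimal sketch of this early-exit scheme is shown below. It is not Professor Savchenko's exact procedure: a simple softmax-confidence threshold stands in for the sequential multiple-comparisons test described in the paper, and the network layout is purely illustrative.

```python
# Hedged sketch of early-exit inference with intermediate classifiers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitCNN(nn.Module):
    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
        ])
        # one lightweight classifier attached to each intermediate output
        self.exits = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(c, num_classes))
            for c in (16, 32, 64)
        ])
        self.threshold = threshold                 # stand-in for the statistical test

    def forward(self, x):
        """Run blocks sequentially; return the first prediction whose softmax
        confidence exceeds the threshold ("simple" images exit early).
        Assumes a batch of a single image."""
        for block, exit_head in zip(self.blocks, self.exits):
            x = block(x)
            probs = F.softmax(exit_head(x), dim=1)
            conf, pred = probs.max(dim=1)
            if conf.item() >= self.threshold:      # decision deemed reliable: stop here
                return pred, conf
        return pred, conf                          # otherwise fall through to the last exit

# usage: model = EarlyExitCNN(); pred, conf = model(torch.randn(1, 3, 64, 64))
```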

As the scientist notes, the most accurate decisions are obtained from the outputs of the last layers of the neural network, while decisions from the early outputs are produced much faster. It is therefore necessary to train all the classifiers simultaneously in order to accelerate recognition while controlling the loss in accuracy, for example, so that the error due to an earlier stop is no more than 1%.

“High accuracy is always important for image recognition. For example, if a decision in face recognition systems is made incorrectly, then either someone outside can gain access to confidential information or conversely the user will be repeatedly denied access, because the neural network cannot identify him correctly. Speed can sometimes be sacrificed, but it matters, for example, in video surveillance systems, where it is highly desirable to make decisions in real time, that is, no more than 20-30 milliseconds per frame. To recognize an object in a video frame here and now, it is very important to act quickly, without losing accuracy,” said Professor Savchenko.


Reference: A.V. Savchenko, Fast inference in convolutional neural networks based on sequential three-way decisions, Information Sciences, Volume 560, 2021, Pages 370-385, ISSN 0020-0255, https://doi.org/10.1016/j.ins.2021.01.068. (https://www.sciencedirect.com/science/article/pii/S0020025521001067)


Provided by National Research University Higher School of Economics

Astrofix: An Image Imputation Algorithm Based on Gaussian Process Regression (Astronomy)

The history of data artifacts is as long as the history of observational astronomy. Artifacts such as dead pixels, hot pixels, and cosmic ray hits are common in astronomical images. At best, they render the affected pixels' data unusable; at worst, they can make the entire image unusable for downstream analyses.

In dealing with missing pixels, some astronomical procedures simply ignore them while others require imputing their values first. Optimal extraction of spectra and Point Spread Function (PSF) photometry ignore missing data, while box spectral extraction and aperture photometry do not. Aperture photometry and box extraction have the advantage of requiring little knowledge about the PSF or line-spread function (LSF). For this reason, aperture photometry has been used for ultra-precise Kepler photometry. Box extraction is currently standard for the SPHERE and GPI integral-field spectrographs.

In general, correcting the corrupted data in an image involves two steps: identifying what they are and imputing their values. Existing algorithms have emphasized bad pixel identification and rejection. For example, there are well-developed packages that detect cosmic rays (CRs) by comparing multiple exposures. When multiple exposures are not available, Rhoads rejects CRs by applying a PSF filter, van Dokkum by Laplacian edge detection (LACosmic), and Pych by iterative histogram analysis. Among the above methods, LACosmic offers the best performance. Approaches based on machine learning like deepCR, a deep-learning algorithm, may offer further improvements.

In contrast, the literature on methods of imputing missing data is sparse. Currently, the most common approach is the median replacement, which replaces a bad pixel with the median of its neighbours. Algorithms that apply median replacement include LACosmic. Some other packages, such as astropy.convolution, take an average of the surrounding pixels, weighted by Gaussian kernels. An alternative is a 1D linear interpolation. This approach is standard for the integral-field spectrographs GPI and SPHERE. deepCR, on the other hand, predicts the true pixel values by a trained neural network. However, none of these methods are statistically well-motivated, and they usually apply a fixed interpolation kernel to all images and everywhere on the same image. In reality, however, the optimal kernel could vary from image to image and even from pixel to pixel. Moreover, in a continuous region of bad pixels or near the boundary of an image, most existing data imputation approaches either have their performance compromised or have to treat these regions as special cases. Only deepCR can handle them naturally with minimal performance differences.

Now, Zhang and Brandt in their recent paper presented astrofix, a robust and flexible image imputation algorithm based on Gaussian Process Regression (GPR). Through an optimization process, astrofix chooses and applies a different interpolation kernel to each image, using a training set extracted automatically from that image. It naturally handles clusters of bad pixels and image edges and adapts to various instruments and image types, including both imaging and spectroscopy. The mean absolute error of astrofix is several times smaller than that of median replacement and interpolation by a Gaussian kernel (i.e. astropy.convolution).

Figure 1. Part of an image (top left) convolved with three different kernels: a median filter (second column), a Gaussian kernel with zero weight in the central pixel and renormalized to unit area (third column), and a GPR kernel (right column). The GPR kernel best restores the original image. It is constructed from the squared exponential covariance function with a = 3.02 and h = 0.72. The Gaussian kernel (third column) has standard deviation 0.72 to match the GPR kernel (right column). The bottom images show the interpolating kernels themselves. © Zhang and Brandt

astrofix accepts images with a bad pixel mask or images with bad pixels flagged as NaN, and it fixes any given image in three steps:

  1. Determine the training set of pixels that astrofix will attempt to reproduce.
  2. Find the optimal hyperparameters a and h (or a, hx and hy) given the training set from Step 1.
  3. Fix the image by imputing data for the bad pixels.
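
For illustration, the following is a heavily simplified sketch of the imputation step (Step 3) in the spirit of astrofix, not the package's actual implementation: bad pixels are predicted from surrounding good pixels with a squared-exponential GPR kernel whose amplitude a and length scale h mirror the hyperparameters mentioned above.

```python
# Hedged sketch of GPR-based bad-pixel imputation (illustrative only).
import numpy as np

def squared_exponential(x1, x2, a, h):
    """Covariance between pixel coordinates x1 (n,2) and x2 (m,2)."""
    d2 = ((x1[:, None, :] - x2[None, :, :]) ** 2).sum(-1)
    return a**2 * np.exp(-0.5 * d2 / h**2)

def impute_bad_pixels(image, bad_mask, a=3.0, h=0.7, noise=1e-3):
    """Predict bad-pixel values from the good pixels with the GPR mean.
    For real images one would restrict to small cutouts around each bad
    pixel; a full-image solve like this is only for illustration."""
    ys, xs = np.indices(image.shape)
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    good, bad = ~bad_mask.ravel(), bad_mask.ravel()
    K = squared_exponential(coords[good], coords[good], a, h)
    K += noise * np.eye(K.shape[0])                     # measurement noise / jitter
    K_star = squared_exponential(coords[bad], coords[good], a, h)
    weights = np.linalg.solve(K, image.ravel()[good])   # GPR mean prediction
    fixed = image.ravel().copy()
    fixed[bad] = K_star @ weights
    return fixed.reshape(image.shape)
```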

According to authors, the actual performance of astrofix may depend on the initial guess for the optimization, the choice of the training set, and the resemblance of the covariance function to the instrumental PSF. Other covariance functions remain to be explored, and the use of sGPR should be considered carefully.

They also showed that astrofix has good potential to be used for bad pixel detection. One could compare the GPR-imputed values with the measured counts and the expected noise at each pixel, and iterate this procedure to reject continuous regions of bad pixels.

“astrofix has the potential to outperform conventional bad pixel detection algorithms because of its ability to train the imputation specifically for each image. This idea could be developed in future work.”

— concluded authors of the study.

They demonstrated the good performance of astrofix on both imaging and spectroscopic data, including data from the SBIG 6303 0.4 m telescope and the FLOYDS spectrograph at Las Cumbres Observatory, and from the CHARIS integral-field spectrograph on the Subaru Telescope.

The algorithm is implemented in the Python package astrofix, which is available at this https URL.

Featured image: Corrections to the CHARIS Image by GPR, the 5 × 5 median filter, and astropy.convolution. The counts are plotted on a logarithmic scale. GPR best restores the structure of the bars, while the two other approaches produce fuzzier images. © Zhang and Brandt


Reference: Hengyue Zhang, Timothy D. Brandt, “Cleaning Images with Gaussian Process Regression”, arXiv, 23 March 2021. https://arxiv.org/abs/2103.12250



New Quantum Algorithm Surpasses the QPE Norm (Quantum)

Osaka City University refines a quantum-computer-ready algorithm to measure the vertical ionization energies of atoms and molecules to within 0.1 eV of precision.

Quantum computers have attracted a lot of attention recently, as they are expected to solve certain problems that are beyond the capabilities of ordinary computers. Chief among these problems is determining the electronic states of atoms and molecules so they can be used more effectively in a variety of industries, from lithium-ion battery design to in silico technologies in drug development. A common way scientists have approached this problem is to calculate the total energies of the individual states of a molecule or atom and then determine the difference in energy between these states. In nature, molecules grow in size and complexity, and the cost of calculating their total energies is beyond the capability of any traditional computer or any currently established quantum algorithm. Therefore, theoretical predictions of total energies have only been possible for molecules that are not sizable and are isolated from their natural environment.

“For quantum computers to be a reality, its algorithms must be robust enough to accurately predict the electronic states of atoms and molecules, as they exist in nature, ” state Kenji Sugisaki and Takeji Takui from the Graduate School of Science, Osaka City University.

In December 2020, Sugisaki and Takui, together with their colleagues, led a team of researchers to develop a quantum algorithm they call Bayesian eXchange coupling parameter calculator with Broken-symmetry wave functions (BxB), which predicts the electronic states of atoms and molecules by directly calculating the energy differences. They noted that energy differences in atoms and molecules remain constant regardless of how complex and large they get, even though their total energies grow with system size. “With BxB, we avoided the common practice of calculating the total energies and targeted the energy differences directly, keeping computing costs within polynomial time”, they state. “Since then, our goal has been to improve the efficiency of our BxB software so it can predict the electronic states of atoms and molecules with chemical precision.”

Using the computing costs of a well-known algorithm called Quantum Phase Estimation (QPE) as a benchmark, “we calculated the vertical ionization energies of small molecules such as CO, O2, CN, F2, H2O and NH3 to within 0.1 electron volts (eV) of precision,” states the team, using half the number of qubits and bringing the calculation cost on par with QPE.

Their findings will be published online in the March edition of The Journal of Physical Chemistry Letters.

Ionization energy is one of the most fundamental physical properties of atoms and molecules and an important indicator for understanding the strength and properties of chemical bonds and reactions. In short, accurately predicting the ionization energy allows us to use chemicals beyond the current norm. In the past, it was necessary to calculate the energies of the neutral and ionized states, but with the BxB quantum algorithm, the ionization energy can be obtained in a single calculation without inspecting the individual total energies of the neutral and ionized states. “From numerical simulations of the quantum logic circuit in BxB, we found that the computational cost for reading out the ionization energy is constant regardless of the atomic number or the size of the molecule,” the team states, “and that the ionization energy can be obtained with a high accuracy of 0.1 eV after modifying the length of the quantum logic circuit to be less than one tenth of QPE.” (See image for modification details)

With the development of quantum computer hardware, Sugisaki and Takui, along with their team, are expecting the BxB quantum algorithm to perform high-precision energy calculations for large molecules that cannot be treated in real time with conventional computers.

Featured image: Comparison of their new quantum circuit with their previous one © Kenji Sugisaki, Takeji Takui, Kazunobu Sato


Reference: Kenji Sugisaki, Kazuo Toyota, Kazunobu Sato, Daisuke Shiomi, and Takeji Takui, “Quantum Algorithm for the Direct Calculations of Vertical Ionization Energies”, J. Phys. Chem. Lett. 2021, 12, XXX, 2880–2885.
https://doi.org/10.1021/acs.jpclett.1c00283


Provided by Osaka City University

Danish Computer Scientist Has Developed a Superb Algorithm For Finding the Shortest Route (Computer science)

One of the most classic algorithmic problems deals with calculating the shortest path between two points. A more complicated variant of the problem is when the route traverses a changing network—whether this be a road network or the internet. For 40 years, an algorithm has been sought to provide an optimal solution to this problem. Now, computer scientist Christian Wulff-Nilsen of the University of Copenhagen and two research colleagues have come up with a recipe.

When heading somewhere new, most of us leave it to computer algorithms to help us find the best route, whether by using a car’s GPS, or public transport and map apps on our phones. Still, there are times when a proposed route doesn’t quite align with reality. This is because road networks, public transportation networks and other networks aren’t static. The best route can suddenly be the slowest, e.g. because a queue has formed due to roadworks or an accident.

People probably don’t think about the complicated math behind routing suggestions in these types of situations. The software being used is trying to solve a variant of the classic algorithmic “shortest path” problem: finding the shortest path in a dynamic network. For 40 years, researchers have been working to find an algorithm that can optimally solve this mathematical conundrum. Now, Christian Wulff-Nilsen of the University of Copenhagen’s Department of Computer Science has succeeded in cracking the nut along with two colleagues.

“We have developed an algorithm, for which we now have mathematical proof, that it is better than every other algorithm up to now—and the closest thing to optimal that will ever be, even if we look 1000 years into the future,” says Associate Professor Wulff-Nilsen. The results were presented at the prestigious FOCS 2020 conference.

Optimal, in this context, refers to an algorithm that spends as little time and as little computer memory as possible to calculate the best route in a given network. This is not just true of road and transportation networks, but also of the internet or any other type of network.

Networks as graphs

The researchers represent a network as a so-called “dynamic graph”. In this context, a graph is an abstract representation of a network consisting of edges, roads for example, and nodes, representing intersections, for example. When a graph is dynamic, it means that it can change over time. The new algorithm handles changes consisting of deleted edges—for example, if the equivalent of a stretch of a road suddenly becomes inaccessible due to roadworks.

“The tremendous advantage of seeing a network as an abstract graph is that it can be used to  represent any type of network. It could be the internet, where you want to send data via as short a route as possible, a human brain or the network of friendship relations on Facebook. This makes graph algorithms applicable in a wide variety of contexts,” explains Christian Wulff-Nilsen.

Traditional algorithms assume that a graph is static, which is rarely true in the real world. When these kinds of algorithms are used in a dynamic network, they need to be rerun every time a small change occurs in the graph—which wastes time.
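
For contrast, here is the naive baseline the article alludes to: a standard static shortest-path algorithm (Dijkstra) that must be rerun from scratch after every edge deletion. The researchers' near-optimal decremental algorithm avoids exactly this repeated work; it is not reproduced here.

```python
# Hedged illustration of the wasteful "rerun from scratch" approach.
import heapq

def dijkstra(graph, source):
    """graph: {node: {neighbour: weight}}; returns shortest distances from source."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# a toy road network; deleting an edge ("roadworks") forces a full recompute
g = {"A": {"B": 2.0, "C": 5.0}, "B": {"C": 1.0, "D": 4.0}, "C": {"D": 1.0}, "D": {}}
print(dijkstra(g, "A"))          # {'A': 0.0, 'B': 2.0, 'C': 3.0, 'D': 4.0}
del g["B"]["C"]                  # edge removal: the decremental setting
print(dijkstra(g, "A"))          # rerun from scratch: {'A': 0.0, 'B': 2.0, 'C': 5.0, 'D': 6.0}
```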

More data necessitates better algorithms

Finding better algorithms is not just useful when travelling. It is necessary in virtually any area where data is produced, as Christian Wulff-Nilsen points out:

“We are living in a time when volumes of data grow at a tremendous rate and the development of hardware simply can’t keep up. In order to manage all of the data we produce, we need to develop smarter software that requires less running time and memory. That’s why we need smarter algorithms,” he says.

He hopes that it will be possible to use this algorithm, or some of the techniques behind it, in practice, but stresses that the result is theoretical and that practical use first requires experimentation.

Christian Wulff-Nilsen (Photo: University of Copenhagen)

Facts

  • The research article “Near-Optimal Decremental SSSP in Dense Weighted Digraphs” was presented at the prestigious FOCS 2020 conference.
  • The article was written by Christian Wulff-Nilsen, of the University of Copenhagen’s Department of Computer Science, and former Department of Computer Science PhD student Maximillian Probst Gutenberg and assistant professor Aaron Bernstein of Rutgers University.
  • The version of the “shortest path” problem that the researchers solved is called “The Decremental Single-Source Shortest Path Problem”. It is essentially about maintaining the shortest paths in a changing dynamic network from one starting point to all other nodes in a graph. The changes to a network consist of edge removals.
  • The paper gives a mathematical proof that the algorithm is essentially the optimal one for dynamic networks. On average, users will be able to change routes according to calculations made in constant time.

Featured image credit: gettyimages


Provided by University of Copenhagen

Mayo Clinic Algorithm Shows Potential in Individualizing Treatment For Depression (Psychiatry)

Finding an effective antidepressant medication for people diagnosed with depression, also called major depressive disorder, is often a long and complex process of “try and try again” ― going from one prescription to the next until achieving a therapeutic response.

This complex disease, which affects more than 16 million people in the U.S., can cause symptoms of persistent emotional and physical problems, including sadness, irritability and loss of interest. In severe cases, suicidal thoughts are a risk.

Now, a computer algorithm developed by researchers within Mayo Clinic’s Center for Individualized Medicine and the University of Illinois at Urbana-Champaign could help clinicians accurately and efficiently predict whether a patient with depression will respond to an antidepressant. 

The new research, published in Neuropsychopharmacology, represents a possible step forward in individualizing treatment for major depressive disorder. It also demonstrates a collaboration between computer scientists and clinicians who are using large datasets to address challenges of individualizing medicine practices of globally devastating diseases.

Overall, the algorithm was tested on 1,996 patients with depression, and the outcomes of whether patients would respond favorably to therapy were correctly predicted in over 72% of the patients.

Harnessing teamwork to enhance patient outcomes

In this study, Arjun Athreya, Ph.D., a computer scientist within Mayo Clinic’s Molecular Pharmacology and Experimental Therapeutics, and William Bobo, M.D., chair of Mayo Clinic Florida’s Psychiatry and Psychology, closely collaborated to find ways in which a clinical challenge of global significance could be modeled. They worked together to derive insights to augment routine clinical decision-making to enhance patient outcomes.

“The idea was to create a technology that serves as a reliable companion to a clinician at the point of care, as opposed to a technology that replaces their judgment,” explains Dr. Athreya. “This meant I had to embed myself in the practice enough to learn the challenges clinicians face as well as the needs of the patient to collectively transform those needs and challenges into an engineering technology.”

Putting artificial intelligence (AI) to the test

The new approach uses an AI framework, called Almond, to find patterns and unique characteristics in a patient’s genomic and clinical data. This allows for the right treatment to be selected or enables a treatment to be changed relatively soon after it is started, if the algorithm predicts a poor response.

Arjun Athreya, Ph.D. © Mayo Clinic

“We used the algorithm to identify the specific depressive symptoms and thresholds of improvement that were predictive of antidepressant response by four weeks for a patient to achieve remission or response, or a nonresponse, by eight weeks.”

– Dr. Arjun Athreya

For the study, Drs. Athreya and Bobo, and their team, trained the algorithm by creating symptom profiles for nearly 1,000 patients with major depressive disorder who were starting treatment with selective serotonin reuptake inhibitors, often called SSRIs. These are the most commonly prescribed first-line antidepressants.

The team first stratified patients according to their depression severity to construct a graph. Then they identified the different ways that their depression changed after starting treatment. They found that some depressive symptoms were more helpful in predicting treatment outcomes than others. They also identified the levels of improvement needed in each treatment to have a good outcome.

“We used the algorithm to identify the specific depressive symptoms and thresholds of improvement that were predictive of antidepressant response by four weeks for a patient to achieve remission or response, or a nonresponse, by eight weeks,” explains Dr. Athreya.
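
Purely to illustrate the shape of such a rule, the sketch below maps a baseline severity score and a week-four improvement to a predicted eight-week outcome. The symptom measure, thresholds and cut-offs are hypothetical placeholders, not values learned by the Mayo Clinic model.

```python
# Illustrative only: HYPOTHETICAL thresholds, not the study's learned model.
def predict_week8_outcome(baseline_severity, week4_improvement):
    """baseline_severity: depression score before treatment (higher = worse);
    week4_improvement: fractional improvement in key symptoms by week four."""
    severe = baseline_severity >= 25               # hypothetical stratification cut
    if week4_improvement >= 0.5:                   # hypothetical improvement threshold
        return "remission/response likely by week 8"
    if week4_improvement >= 0.2 and not severe:
        return "response possible; reassess"
    return "nonresponse likely; consider changing therapy"
```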


Algorithm allows health care providers, patients to individualize care

Dr. Bobo says that providing interpretable predictions could enhance clinical treatment of depression and reduce the time associated with multiple trials of ineffective antidepressants.

“The model generates output in a way that clinicians are able to easily assimilate, interpret and potentially use in the limited time they have in clinical visits with the patients.”

– Dr. William Bobo.

“Typically, clinicians initiate a therapy and patients come back after four weeks,” Dr. Bobo explains. “Then, based on the clinician’s judgment of improvement, they make their best guess as to what the outcome would be at eight weeks, and decide to either change the therapy or stay the course.”

Dr. Bobo says the study highlights the essential partnership between computer scientists and clinicians.

“The logic that clinicians apply follows certain steps, and this is what really intrigued Dr. Athreya, who abstracted this as both a computational problem and an engineering problem. We were coming at the same idea with different viewpoints that ultimately had a high degree of synergy,” Dr. Bobo explains.

“What we then did with the study was to ‘algorithmize’ the clinician’s thinking logic using probabilistic graphical models to formalize the predictions of eight-week outcomes by how severe the disease was prior to treatment and how much specific depressive symptoms had to have improved between baseline and four weeks,” Dr. Bobo says. “This way, the model generates output in a way that clinicians are able to easily assimilate, interpret and potentially use in the limited time they have in clinical visits with the patients.”

The researchers say the model could be beneficial in busy primary care practices and may hasten referrals for specialty mental health consultation if the predicted outcomes of treatment are nonresponses.

“We hope this study will help us pave the way forward for developing electronic tools that can help clinicians and patients make better decisions about their treatment at the earliest point in time possible after starting it,” says Dr. Athreya.

For a disease that is marked with high degrees of variability in treatment outcomes across patients, this accuracy marks a step in the right direction of individualizing therapy for depression ― with the opportunity to augment clinical measures with biological measures such as genomics, which the team already is working on as part of a broader study supported by the National Science Foundation.

Also, the team is prospectively validating its findings in routine practice at Mayo Clinic in Rochester and Mayo Clinic in Florida through a Transform the Practice Award (NCT04355650).


Reference: Athreya, A.P., Brückl, T., Binder, E.B. et al. Prediction of short-term antidepressant response using probabilistic graphical models with replication across multiple drugs and treatment settings. Neuropsychopharmacology (2021). https://www.nature.com/articles/s41386-020-00943-x https://doi.org/10.1038/s41386-020-00943-x


Provided by Mayo Clinic

Automated AI Algorithm Uses Routine Imaging to Predict Cardiovascular Risk (Medicine)

Artificial intelligence deep learning system can automatically measure coronary artery calcium from routine CT scans and predict cardiovascular events like heart attacks.

Coronary artery calcification — the buildup of calcified plaque in the walls of the heart’s arteries — is an important predictor of adverse cardiovascular events like heart attacks. Coronary calcium can be detected by computed tomography (CT) scans, but quantifying the amount of plaque requires radiological expertise, time and specialized equipment. In practice, even though chest CT scans are fairly common, calcium score CTs are not. Investigators from Brigham and Women’s Hospital’s Artificial Intelligence in Medicine (AIM) Program and the Massachusetts General Hospital’s Cardiovascular Imaging Research Center (CIRC) teamed up to develop and evaluate a deep learning system that may help change this. The system automatically measures coronary artery calcium from CT scans to help physicians and patients make more informed decisions about cardiovascular prevention. The team validated the system using data from more than 20,000 individuals with promising results. Their findings are published in Nature Communications.

“Coronary artery calcium information could be available for almost every patient who gets a chest CT scan, but it isn’t quantified simply because it takes too much time to do this for every patient,” said corresponding author Hugo Aerts, PhD, director of the Artificial Intelligence in Medicine (AIM) Program at the Brigham and Harvard Medical School. “We’ve developed an algorithm that can identify high-risk individuals in an automated manner.”

Working with colleagues, lead author Roman Zeleznik, MSc, a data scientist in AIM, developed the deep learning system described in the paper to automatically and accurately predict cardiovascular events by scoring coronary calcium. While the tool is currently only for research purposes, Zeleznik and co-authors have made it open source and freely available for anyone to use.

“In theory, the deep learning system does a lot of what a human would do to quantify calcium,” said Zeleznik. “Our paper shows that it may be possible to do this in an automated fashion.”
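
To give a sense of what "quantifying calcium" involves, here is a hedged sketch of an Agatston-style score computed from a CT slice and a (hypothetical) model-predicted calcification mask; it is not the authors' released code.

```python
# Hedged sketch: Agatston-style scoring of one axial CT slice in Hounsfield units.
import numpy as np
from scipy import ndimage

def density_weight(max_hu):
    """Standard Agatston density factor from a lesion's peak attenuation."""
    if max_hu >= 400: return 4
    if max_hu >= 300: return 3
    if max_hu >= 200: return 2
    return 1                                    # lesions are only counted above 130 HU

def agatston_slice_score(hu_slice, calc_mask, pixel_area_mm2):
    """Sum over connected calcified lesions of (area * density weight).
    calc_mask is a boolean array, e.g. a model's predicted coronary-calcium mask."""
    lesions, n = ndimage.label(calc_mask & (hu_slice >= 130))
    score = 0.0
    for lab in range(1, n + 1):
        lesion = lesions == lab
        area = lesion.sum() * pixel_area_mm2
        if area < 1.0:                          # ignore sub-millimetre specks (noise)
            continue
        score += area * density_weight(hu_slice[lesion].max())
    return score
```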

The team began by training the deep learning system on data from the Framingham Heart Study (FHS), a long-term asymptomatic community cohort study. Framingham participants received dedicated calcium scoring CT scans, which were manually scored by expert human readers and used to train the deep learning system. The deep learning system was then applied to three additional study cohorts, which included heavy smokers having lung cancer screening CT (NLST: National Lung Screening Trial), patients with stable chest pain having cardiac CT (PROMISE: the Prospective Multicenter Imaging Study for Evaluation of Chest Pain), and patients with acute chest pain having cardiac CT (ROMICAT-II: the Rule Out Myocardial Infarction using Computer Assisted Tomography trial). All told, the team validated the deep learning system in over 20,000 individuals.

Udo Hoffmann, MD, director of CIRC@MGH who is the principal investigator of CT imaging in the FHS, PROMISE and ROMICAT, emphasized that one of the unique aspects of this study is the inclusion of three National Heart, Lung, and Blood Institute–funded high-quality image and outcome trials that strengthen the generalizability of these results to clinical settings.

The automated calcium scores from the deep learning system highly correlated with the manual calcium scores from human experts. The automated scores also independently predicted who would go on to have a major adverse cardiovascular event like a heart attack.

The coronary artery calcium score plays an important role in current guidelines for who should take a statin to prevent heart attacks. “This is an opportunity for us to get additional value from these chest CTs using AI,” said co-author Michael Lu, MD, MPH, director of artificial intelligence at MGH’s Cardiovascular Imaging Research Center. “The coronary artery calcium score can help patients and physicians make informed, personalized decisions about whether to take a statin. From a clinical perspective, our long-term goal is to implement this deep learning system in electronic health records, to automatically identify the patients at high risk.”

Funding for this work was provided by the National Institutes of Health (NIH-USA U24CA194354, NIHUSA U01CA190234, NIH-USA U01CA209414, NIH-USA R35CA22052, 5R01-HL109711, NIH/NHLBI 5K24HL113128, NIH/NHLBI 5T32HL076136, NIH/NHLBI 5U01HL123339), the European Union—European Research Council (866504), as well as the German Research Foundation (1438/1-1, 6405/2-1), American Heart Association Institute for Precision Cardiovascular Medicine (18UNPG34030172), Fulbright Visiting Researcher Grant (E0583118), and a Rosztoczy Foundation Grant.


Reference: Zeleznik et al. “Deep convolutional neural networks to predict cardiovascular risk from computed tomography” Nature Communications DOI: 10.1038/s41467-021-20966-2


Provided by Brigham and Women’s Hospital

First Measurement Device Independent Quantum Key Distribution Experiment to Secure Communication (Physics)

PAN Jianwei and colleagues PENG Chengzhi and ZHANG Qiang, from the University of Science and Technology of China of the Chinese Academy of Sciences (CAS), collaborating with WANG Xiangbin from Tsinghua University and YOU Lixing from the Shanghai Institute of Microsystems of CAS, realized a measurement-device-independent quantum key distribution (MDI-QKD) experiment over a long-distance free-space channel for the first time. The study was published online in Physical Review Letters.

Experiment setup. It shows the top view of the experimental layout at the Pudong area, Shanghai. (Image by CAO Yuan et al.) 

Because atmospheric turbulence in the free-space channel destroys the spatial mode, single-mode optical fiber must be used for spatial filtering before interference. Low coupling efficiency and intensity fluctuations are the two major difficulties in this experiment.

The researchers in this study developed an adaptive optics system with strong turbulence resistance based on a stochastic gradient descent algorithm, which improved the total channel efficiency of the dual links by a factor of about 4 to 10.
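
The control loop behind such a system is typically a stochastic parallel gradient descent (SPGD) update of the deformable-mirror voltages. The sketch below illustrates that general technique, not the team's actual implementation.

```python
# Hedged sketch of stochastic parallel gradient descent (SPGD) wavefront control.
import numpy as np

def spgd_step(voltages, measure_coupling, perturb=0.05, gain=0.5):
    """One SPGD update: perturb all actuators randomly, measure the coupling
    metric on both sides, and move in the direction that improved it."""
    delta = perturb * np.random.choice([-1.0, 1.0], size=voltages.shape)
    j_plus = measure_coupling(voltages + delta)     # metric with +perturbation
    j_minus = measure_coupling(voltages - delta)    # metric with -perturbation
    return voltages + gain * (j_plus - j_minus) * delta

# toy usage with a quadratic stand-in for the real fibre-coupling efficiency
target = np.array([0.2, -0.1, 0.4])
metric = lambda v: -np.sum((v - target) ** 2)
v = np.zeros(3)
for _ in range(200):
    v = spgd_step(v, metric)
```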

The rapid fluctuation of the light intensity makes it difficult to apply the clock synchronization and optical frequency comparison methods used in traditional optical fiber systems.

To tackle the synchronization problem, the researchers adopted ultra-stable crystal oscillators as independent clock sources at the three experimental sites and measured the pulse arrival times with real-time feedback, achieving an accuracy of 32 ps.

Hydrogen cyanide molecular absorption cells were deployed at both encoding ends to calibrate the optical frequency, ensuring that the frequency difference of the interfering light was less than 10 MHz and achieving frequency locking.

The above breakthroughs made it possible to realize the first free-space MDI-QKD experiment over an urban atmospheric channel in Shanghai. The lengths of the two channels are 7.7 km and 11.5 km, and the distance between the two communication ends is 19.2 km.

This study opens up the possibility of realizing more complex quantum information processing tasks based on long-distance quantum interference in free-space channels. According to an article published on the website of the American Physical Society, a global internet invulnerable to hackers may still be a ways off, but this work brings it one step closer.

Reference: Yuan Cao, Yu-Huai Li, Kui-Xing Yang, Yang-Fan Jiang, Shuang-Lin Li, Xiao-Long Hu, Maimaiti Abulizi, Cheng-Long Li, Weijun Zhang, Qi-Chao Sun, Wei-Yue Liu, Xiao Jiang, Sheng-Kai Liao, Ji-Gang Ren, Hao Li, Lixing You, Zhen Wang, Juan Yin, Chao-Yang Lu, Xiang-Bin Wang, Qiang Zhang, Cheng-Zhi Peng, and Jian-Wei Pan, “Long-Distance Free-Space Measurement-Device-Independent Quantum Key Distribution”, Phys. Rev. Lett. 125, 260503 – Published 23 December 2020. https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.125.260503

Provided by Chinese Academy of Sciences

Machine Intelligence Accelerates Research Into Mapping Brains (Neuroscience)

Scientists in Japan’s brain science project have used machine intelligence to improve the accuracy and reliability of a powerful brain-mapping technique, a new study reports.

© OIST

Their development, published on December 18th in Scientific Reports, gives researchers more confidence in using the technique to untangle the human brain’s wiring and to better understand the changes in this wiring that accompany neurological or mental disorders such as Parkinson’s or Alzheimer’s disease.

“Working out how all the different brain regions are connected – what we call the connectome of the brain – is vital to fully understand the brain and all the complex processes it carries out,” said Professor Kenji Doya, who leads the Neural Computation Unit at the Okinawa Institute of Science and Technology Graduate University (OIST).

To identify connectomes, researchers track nerve cell fibers that extend throughout the brain. In animal experiments, scientists can inject a fluorescent tracer into multiple points in the brain and image where the nerve fibers originating from these points extend to. But this process requires analyzing hundreds of brain slices from many animals. And because it is so invasive, it cannot be used in humans, explained Prof. Doya.

However, advances in magnetic resonance imaging (MRI) have made it possible to estimate connectomes non-invasively. This technique, called diffusion MRI-based fiber tracking, uses powerful magnetic fields to track signals from water molecules as they move – or diffuse – along nerve fibers. A computer algorithm then uses these water signals to estimate the path of the nerve fibers throughout the whole brain.
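
A heavily simplified sketch of deterministic streamline tracking, the general idea behind such fiber-tracking algorithms, is given below; real pipelines use far more sophisticated propagation and stopping rules than this.

```python
# Hedged sketch of deterministic streamline tractography (illustrative only).
import numpy as np

def track_streamline(principal_dir, fa, seed, step=0.5, fa_threshold=0.2, max_steps=2000):
    """principal_dir: (X, Y, Z, 3) unit vectors of the main diffusion direction;
    fa: (X, Y, Z) fractional anisotropy; seed: starting position in voxel coords.
    Steps along the locally dominant diffusion direction until the anisotropy
    drops or the path leaves the volume."""
    pos = np.asarray(seed, dtype=float)
    path = [pos.copy()]
    prev = None
    for _ in range(max_steps):
        idx = tuple(np.round(pos).astype(int))
        if (min(idx) < 0 or any(i >= s for i, s in zip(idx, fa.shape))
                or fa[idx] < fa_threshold):
            break                                   # left the volume or low anisotropy
        d = principal_dir[idx]
        if prev is not None and np.dot(d, prev) < 0:
            d = -d                                  # keep a consistent orientation
        pos = pos + step * d
        path.append(pos.copy())
        prev = d
    return np.array(path)
```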

(Left) MRI scanners, like the one pictured at the RIKEN Center for Brain Science, can be used to non-invasively map the brain by analyzing the diffusion of water molecules. (Right) The diffusion MRI measures the direction that the water molecules diffuse at each point in the brain, as illustrated by ellipsoids here. Fiber tracking algorithms then use this information to estimate the path of the nerve fibers. Credit: (Left) Junichi Hata and Hideyuki Okano, RIKEN Center for Brain Science. (Right) Figure created using the MRtrix viewer 3.0.1.

But at present, the algorithms do not produce convincing results. Just like how photographs can look different depending on the camera settings chosen by a photographer, the settings – or parameters – chosen by scientists for these algorithms can generate very different connectomes.

“There are genuine concerns with the reliability of this method,” said Dr. Carlos Gutierrez, first author and postdoctoral researcher in the OIST Neural Computation Unit. “The connectomes can be dominated by false positives, meaning they show neural connections that aren’t really there.”

Furthermore, the algorithms struggle to detect nerve fibers that stretch between remote regions of the brain. Yet these long-distance connections are some of the most important for understanding how the brain functions, Dr. Gutierrez said.

In 2013, scientists launched a Japanese government-led project called Brain/MINDS (Brain Mapping by Integrated Neurotechnologies for Disease Studies) to map the brains of marmosets — small nonhuman primates whose brains have a similar structure to human brains.

The Brain/MINDS project aims to create a complete connectome of the marmoset brain by using both the non-invasive MRI technique and the invasive fluorescent tracer technique.

“The data set from this project was a really unique opportunity for us to compare the results from the same brain generated by the two techniques and determine what parameters need to be set to generate the most accurate MRI-based connectome,” said Dr. Gutierrez.

In the current study, the researchers set out to fine-tune the parameters of two different widely-used algorithms so that they would reliably detect long-range fibers. They also wanted to make sure the algorithms identified as many fibers as possible while minimally pinpointing ones that were not actually present.

Instead of trying out all the different parameter combinations manually, the researchers turned to machine intelligence.

To determine the best parameters, the researchers used an evolutionary algorithm. The fiber tracking algorithm estimated the connectome from the diffusion MRI data using parameters that changed – or mutated – in each successive generation. Those parameters competed against each other and the best parameters – the ones that generated connectomes that most closely matched the neural network detected by the fluorescent tracer – advanced to the next generation.
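
In pseudocode form, the parameter search described above resembles the following hedged sketch; run_fiber_tracking and tracer_agreement stand in for the real, far heavier computations of the Brain/MINDS pipeline.

```python
# Hedged sketch of an evolutionary search over fiber-tracking parameters.
import random

def evolve_parameters(initial_population, run_fiber_tracking, tracer_agreement,
                      generations=20, mutation_scale=0.1):
    """Each candidate is a dict of numeric parameters; fitness is how well the
    resulting tractogram matches the fluorescent-tracer ground truth.
    (A real run would cache scores rather than recompute them.)"""
    population = list(initial_population)
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda p: tracer_agreement(run_fiber_tracking(p)),
                        reverse=True)
        survivors = scored[:len(scored) // 2]           # best half advance
        children = []
        for parent in survivors:                        # mutate survivors to refill
            child = {k: v * (1 + random.gauss(0, mutation_scale))
                     for k, v in parent.items()}
            children.append(child)
        population = survivors + children
    return max(population, key=lambda p: tracer_agreement(run_fiber_tracking(p)))
```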

The researchers tested the algorithms using fluorescent tracer and MRI data from ten different marmoset brains.

But choosing the best parameters wasn’t simple, even for machines, the researchers found. “Some parameters might reduce the false positive rate but make it harder to detect long-range connections. There’s conflict between the different issues we want to solve. So deciding what parameters to select each time always involves a trade-off,” said Dr. Gutierrez.

Throughout the multiple generations of this “survival-of-the-fittest” process, the algorithms running for each brain exchanged their best parameters with each other, allowing the algorithms to settle on a more similar set of parameters. At the end of the process, the researchers took the best parameters and averaged them to create one shared set.

“Combining parameters was an important step. Individual brains vary, so there will always be a unique combination of parameters that works best for one specific brain. But our aim was to come up with the best generic set of parameters that would work well for all marmoset brains,” explained Dr. Gutierrez.

The team found that the algorithm with the generic set of optimized parameters also generated a more accurate connectome in new marmoset brains that weren’t part of the original training set, compared to the default parameters used previously.

The green represents nerve fibers detected by injecting a fluorescent tracer at a single point. The red represents nerve fibers detected using a diffusion MRI-based fiber tracking algorithm. Only the nerve fibers that also connected up to the point where the tracer was injected are shown. The yellow represents nerve fibers that were detected using both techniques. The results show that the optimized algorithm performed better than the default algorithm, not only on a brain it was trained on, but on a previously unseen brain. The optimized algorithm detected a higher number of fibers and also fibers that stretched longer distances. © OIST

The striking difference between the images constructed by algorithms using the default and optimized parameters sends out a stark warning about MRI-based connectome research, the researchers said.

“It calls into question any research using algorithms that have not been optimized or validated,” cautioned Dr. Gutierrez.

(Top left) The image shows all the estimated fibers in the whole brain of a marmoset using a diffusion MRI-based fiber tracking algorithm with the generic set of optimized parameters. (Top right) The image shows the same marmoset brain but the connectome is generated using the same algorithm with default parameters. There are noticeably fewer fibers. (Bottom) The two matrices show the strength of connection (density of nerve fibers) between one brain region and another brain region. The left matrix shows that the algorithm with the generic set of optimized parameters detected a higher density of nerve fibers connecting the brain regions compared to the right matrix, which shows that the default algorithm detected a much lower density of nerve fibers. © OIST

In the future, the scientists hope to make the process of using machine intelligence to identify the best parameters faster, and to use the improved algorithm to more accurately determine the connectome of brains with neurological or mental disorders.

“Ultimately, diffusion MRI-based fiber tracking could be used to map the whole human brain and pinpoint the differences between healthy and diseased brains,” said Dr. Gutierrez. “This could bring us one step closer to learning how to treat these disorders.”

This collaboration involved the Okinawa Institute of Science and Technology Graduate University (OIST), the RIKEN Center for Brain Science and Kyoto University. It was funded by AMED (Japan Agency for Medical Research and Development). Grant Numbers: JP18dm0207030 to KD, JP20dm0207001 to HO, JP20dm0207088 to KN.

References: Gutierrez, C.E., Skibbe, H., Nakae, K. et al. Optimization and validation of diffusion MRI-based fiber tracking with neural tracer data as a reference. Sci Rep 10, 21285 (2020). https://www.nature.com/articles/s41598-020-78284-4 https://doi.org/10.1038/s41598-020-78284-4

Provided by OIST