This Deep Learning Model Turns Blurred Galactic Images Into Clearer Ones (Astronomy)

Gan and colleagues applied a deep learning method called generative adversarial networks (GANs) to turn blurred galactic images into clearer ones.

Classification of galactic morphologies has long been a critical task in extragalactic astronomy, not only because global galactic morphologies such as bulge-to-disk ratios and spiral arm shapes carry fossil information about galaxy formation, but also because detailed statistical studies of galactic properties for each category can provide insights into the formation processes of different types of galaxies. Galaxy classification schemes proposed in pioneering works have long been used as standard tools in many observational and theoretical studies of galaxy formation and evolution. These days, galaxy classification is also done by non-professional astronomers, for example through the Galaxy Zoo project, in which a large number of galaxy images (> 10⁶) from the SDSS are provided for citizen science.

Galaxy classification has traditionally been done by the human eye and will continue to be in future work. More recently, however, this process has begun to be automated by applying machine learning algorithms to actual observational data. For example, convolutional neural networks (CNNs) have been used for the automated classification of galactic morphologies for many galaxies. Galaxy classification using these deep learning algorithms has been successfully done for a large number (> 10⁶) of images from large ground-based telescopes such as the Subaru 8m telescope. Such quick automated classification is now considered the primary (and possibly only) way to classify the vast number of galaxies from ongoing and future surveys such as LSST and Euclid.

One of the potential problems in classifying galaxy images from ground-based telescopes is that the images can be severely blurred owing to the seeing effects of the atmosphere. Fine structures of galaxies, such as bars, spiral arms, and rings, are used to classify and quantify galaxies, yet such structures can be much less visible in galaxy images from ground-based telescopes, in particular for distant galaxies. Thus, if this optical blurring due to sky seeing can be removed by applying machine learning algorithms to real galaxy images, it will provide significant benefits both to professional astronomers and to non-professional ones working on the Galaxy Zoo project.
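To first order, atmospheric seeing acts like a convolution of the true light distribution with a point-spread function (PSF) whose width is the "seeing". A minimal numerical sketch of that effect, assuming a Gaussian PSF on a toy image (real seeing profiles are closer to a Moffat function, and the numbers here are purely illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A toy "galaxy" on a 64x64 grid: a bright point-like nucleus plus a thin
# linear ridge standing in for a spiral arm or bar.
img = np.zeros((64, 64))
img[32, 32] = 100.0        # point-like nucleus
img[32, 34:50] = 10.0      # thin linear feature

# Seeing blur modeled as convolution with a Gaussian PSF. For a Gaussian,
# sigma = FWHM / 2.355, with the FWHM set by the seeing (here in pixels).
fwhm_pix = 4.0
blurred = gaussian_filter(img, sigma=fwhm_pix / 2.355)

# The peak is spread out and the thin feature loses contrast, while the
# total flux is conserved -- exactly the effect that hides bars and arms.
print(img.max(), round(blurred.max(), 2))
```

Deblurring aims to invert this convolution, which is ill-posed; that is why a learned prior such as a GAN is attractive.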

Now, Gan and colleagues have developed a new GAN-based model, SeeingGAN, that can convert blurred ground-based Subaru Telescope images of galaxies into clear HST-like galaxy images.
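The model works on paired images: a blurred Subaru input and a sharp HST target of the same galaxy. The exact SeeingGAN architecture and loss weights are in Gan et al.'s paper; the sketch below is only a generic pix2pix-style conditional-GAN objective in NumPy, with the array shapes, discriminator outputs, and L1 weight `lam` all chosen for illustration:

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy on discriminator probabilities."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred)))

rng = np.random.default_rng(0)

# Stand-ins for one training batch (4 images, 64x64): blurred ground-based
# inputs, sharp space-based targets, and the generator's current outputs.
blurred = rng.random((4, 64, 64))
sharp   = rng.random((4, 64, 64))
fake    = rng.random((4, 64, 64))

# Stand-ins for discriminator outputs (probability "this pair looks real")
# on real (blurred, sharp) pairs and on generated (blurred, fake) pairs.
d_real = rng.uniform(0.6, 0.9, size=4)
d_fake = rng.uniform(0.1, 0.4, size=4)

# Discriminator: push real pairs toward 1 and generated pairs toward 0.
d_loss = bce(d_real, np.ones(4)) + bce(d_fake, np.zeros(4))

# Generator: fool the discriminator, plus an L1 term that keeps the output
# pixel-wise close to the sharp target.
lam = 100.0
g_loss = bce(d_fake, np.ones(4)) + lam * float(np.mean(np.abs(fake - sharp)))

print(round(d_loss, 3), round(g_loss, 3))
```

The adversarial term is what lets the generator hallucinate plausible fine structure, while the L1 term anchors it to the real target so the output stays faithful to the galaxy being deblurred.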

“Galaxy images from the HST do not have such problems as seeing effects because atmospheric distortion due to light travelling through the turbulent atmosphere is not a problem in these observations by a space telescope.”

— said Gan, lead author of the study

In the present study, they demonstrated that, using an existing deep learning method called generative adversarial networks (GANs), they can eliminate seeing effects, effectively producing an image similar to one taken by the HST. Using their first-of-its-kind machine-learning-based deblurring technique on galaxy images, they obtained up to an 18% improvement in the CW-SSIM (Complex Wavelet Structural Similarity Index) score when comparing the Subaru-HST pair with the SeeingGAN-HST pair (see Fig. 1 below).
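The paper scores similarity with CW-SSIM, a complex-wavelet variant of SSIM that is more tolerant of small translations and rotations. As a rough illustration of how a structural-similarity score rewards less-blurred images, here is the plain single-window SSIM formula in NumPy; this is a simplified stand-in on synthetic data, not the CW-SSIM implementation used in the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssim_global(x, y, data_range, k1=0.01, k2=0.03):
    """Single-window (global) SSIM combining luminance, contrast, structure."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
# Stand-in "HST" reference: a smooth random field with structure on a few scales.
reference = gaussian_filter(rng.random((128, 128)), sigma=2)

subaru_like = gaussian_filter(reference, sigma=4)   # heavy seeing blur
deblurred   = gaussian_filter(reference, sigma=1)   # mild residual blur

dr = reference.max() - reference.min()
print(round(ssim_global(reference, subaru_like, dr), 3),
      round(ssim_global(reference, deblurred, dr), 3))
```

The less-blurred stand-in scores closer to 1, mirroring the paper's Subaru-HST versus SeeingGAN-HST comparison: a higher similarity score against the HST reference means more recovered structure.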

“With this model, we can generate HST-like images from relatively less capable telescopes in much less time, making space exploration more accessible to the broader astronomy community. Furthermore, this model can be used not only in professional morphological classification studies of galaxies but also in citizen science for galaxy classifications.”

— said Gan, lead author of the study

There are several scientific merits of SeeingGAN in astronomical studies. First, astronomers can see the internal fine structures of galaxies, such as spiral arms, tidal tails, and massive clumps, more clearly; these can be difficult to see in optical images of distant galaxies from ground-based telescopes. The clearer images generated by SeeingGAN would assist astronomers in classifying galaxies and in discovering new internal structures of distant galaxies that otherwise could be difficult to find in the original blurred images. For example, distant galaxies classified as S0s with no spirals in the original blurred images may turn out to be spiral galaxies in the deblurred images produced by SeeingGAN. This could influence the redshift evolution of the S0 fraction in groups and clusters, discussed in many recent papers. SeeingGAN can also be used in citizen science projects for galaxy classification by the public, e.g., the Galaxy Zoo project. If galaxy images in these projects are blurred (making galaxies more challenging to classify), the deblurred images generated by SeeingGAN can easily be used for public galaxy classification instead of the originals. And because SeeingGAN converts blurred images to deblurred ones very rapidly, it can generate a massive number of deblurred galaxy images without difficulty.

Figure 1. Sample results produced by SeeingGAN. The images are listed in the order HST, Subaru, SeeingGAN prediction. The SeeingGAN result is obtained by feeding the Subaru image into the model. The CW-SSIM value is obtained by comparing each image with the HST image; a higher CW-SSIM value indicates that the image is more similar to the HST image. © Gan et al.

As shown in Fig. 1, the deblurred images are clearer than the original Subaru images; however, some of them are not as dramatically clear as their HST counterparts. Hence, their future studies will investigate whether different CNN architectures, larger numbers of image pairs, and different model parameters can improve the performance of SeeingGAN. Since the present study proposed one example of SeeingGAN trained on a limited number of Subaru-HST image pairs, it is worthwhile for Gan et al. to investigate different GAN architectures with a much larger number of image pairs.

They plan to use a large number (about a million) of Subaru Hyper Suprime-Cam and HST images to test new architectures of SeeingGAN for better performance. It may also be important for them to use galaxy images from other optical telescopes (e.g., the VLT) to confirm that SeeingGAN can be developed from different combinations of ground-based and space telescopes. Although they have focused exclusively on galaxy images at optical wavelengths, it would be an interesting future study to use galaxy images at other wavelengths from space telescopes (e.g., the JWST) to develop a new SeeingGAN.

Featured image: One enlarged sample result predicted by SeeingGAN. The predicted image is obtained by feeding the Subaru 8.2m telescope’s image into SeeingGAN. The resulting image has a higher CW-SSIM score, indicating better similarity to the HST image. © Gan et al.

Reference: Fang Kai Gan, Kenji Bekki, Abdolhosein Hashemizadeh, “SeeingGAN: Galactic image deblurring with deep learning for better morphological classification of galaxies”, arXiv preprint, 2021, pp. 1–11.

Copyright of this article belongs to our author S. Aman. Reuse is permitted only with proper credit given either to him or to us.
