Advances in Astronomy, vol. 2019, Article ID 7821025
Special Issue: Big Data Processing and Modeling in Solar Physics
Research Article | Open Access

Zongxia Xie, Chunyang Ji, "Single and Multiwavelength Detection of Coronal Dimming and Coronal Wave Using Faster R-CNN", Advances in Astronomy, vol. 2019, Article ID 7821025, 9 pages, 2019.

Single and Multiwavelength Detection of Coronal Dimming and Coronal Wave Using Faster R-CNN

Academic Editor: Huaning Wang
Received: 12 Apr 2019
Accepted: 23 Jun 2019
Published: 08 Jul 2019


Automatic detection of solar events, especially uncommon ones such as coronal dimmings (CDs) and coronal waves (CWs), is very important in solar physics research. CDs and CWs are not only related to the detection of coronal mass ejections (CMEs) but also affect space weather. In this paper, we study methods for detecting them automatically. We collected and processed a dataset that includes solar images and event records, where the solar images come from the Atmospheric Imaging Assembly (AIA) of the Solar Dynamics Observatory (SDO) and the event records come from the Heliophysics Event Knowledgebase (HEK). Unlike previous methods, we introduce deep learning: we train single-wavelength and multiwavelength models based on Faster R-CNN. In terms of accuracy, the single-wavelength models perform better, while the multiwavelength model detects multiple simultaneous solar events more completely.

1. Introduction

A solar eruption is an event that releases enormous energy, emitting a large amount of radiation and ejecting plasma into the heliosphere as coronal mass ejections (CMEs). CMEs can cause geomagnetic storms, which in turn may affect the reliability of power systems, and they also affect space weather [1, 2]. Detecting CMEs is therefore of vital importance, but they are difficult to detect because of their complex characteristics. CMEs are associated with many solar phenomena: coronal dimming (CD) is temporally and spatially consistent with CMEs [3], dimming-associated CMEs have much higher speeds than others [4], and coronal waves (CWs) can be driven and shaped by the expanding flanks of CMEs [5]. Therefore, CD and CW become the main objects of detection.

As for CD, several algorithms have been proposed to identify it. Bewsher et al. discussed an algorithm for fully automatic detection of CD: the image is differenced to form a base-difference image, and the pixel intensities are then thresholded to find dimming regions [6]. Podladchikova et al. extended this line of work, mapping minimum and maximum pixel values to describe dimming regions and computing the statistical distribution of the running-difference image to find significant disturbances, i.e., dimming regions [7]. Different from the above methods, Krista et al. used the original nondifferenced images and the corresponding magnetograms to detect dimming regions [8].

For the detection of CW, Thompson et al. manually identified CWs [9] in 2009 using Solar and Heliospheric Observatory (SOHO) data from 1997 and 1998. Nitta et al. did a similar job in 2013, identifying CWs [10] observed by the AIA on board the SDO spacecraft [11]. Manual identification depends completely on the user and is only suitable for small datasets. Podladchikova et al. used the automatic dimming detection algorithm NEMO to detect CWs in SOHO data [7]. Wills-Davey proposed the Huygens tracking technique [12], which uses percentage base-difference images to identify the pulse by finding the peak-intensity line corresponding to the peak of a Gaussian cross-section. The semiautomated Coronal Pulse Identification and Tracking Algorithm (CorPITA) was proposed by Long et al. [13]. CorPITA uses intensity-profile techniques to identify the propagating pulse, tracks its entire evolution, and then returns an estimate of its kinematics.

In 2010, NASA launched the SDO spacecraft, which orbits the Earth capturing full-disk images of the Sun. AIA, one of SDO's instruments, captures eight high-definition (4096×4096) full-disk images at different wavelengths every 10 to 12 seconds. SDO produces approximately 1.5 TB of data per day, so manual and traditional identification methods are no longer suitable. Under such large data volumes, an automatic and rapid detection algorithm for CD and CW is of great research value. In 2017, Kucuk et al. used deep convolutional neural networks to classify solar events [14], including Sigmoids (SG) and Flares (FL); they also used a deep learning model to detect solar events such as Active Regions (AR) [15].

In the field of computer vision, object detection means locating the position of an object in an image and identifying its category. This is a fundamental but very challenging problem, because images contain many interference factors, such as background clutter and perspective changes. In recent years, deep neural networks have become a research hotspot, and there are many high-precision object detectors based on deep learning, for example, the Single Shot MultiBox Detector (SSD) [16], You Only Look Once (YOLO) [17], Region-based Fully Convolutional Networks (R-FCN) [18], and Faster Regions with Convolutional Neural Network Features (Faster R-CNN) [19]. Both SSD and YOLO are fully end-to-end detection models, but SSD adds extra layers that predict default-box offsets to improve accuracy. Unlike them, R-FCN and Faster R-CNN use a Region Proposal Network (RPN) [19]. The RPN is a separate subnetwork that generates proposed regions together with scores indicating whether each region contains an object. The difference between R-FCN and Faster R-CNN lies in the bounding-box classification: R-FCN crops its classification features from the last layer of the base network.

In this paper, a deep convolutional detection network, Faster R-CNN, is introduced to detect CD and CW. The rest of the paper is organized as follows. Section 2 introduces the dataset. Section 3 describes the deep learning algorithm. The experiments and results are discussed in Section 4. Section 5 gives the conclusions.

2. Data Preparation

In this section, we introduce our dataset, including data collection and processing.

2.1. Data Collection

The dataset consists of two parts: the solar images, and the solar event records, in particular the location coordinates and categories of the solar events. The images come from the AIA instrument of SDO. AIA captures high-definition images at multiple wavelengths, but the AIA metadata are in FITS format and cannot be used directly to train the detection model; we convert them to JPEG (4096×4096) format. The solar event records come from HEK, which receives metadata from many automated event detection modules. The metadata contain a lot of information, including the location coordinates, the start time, the end time, and the category of each solar event.

To collect the required data, we use SunPy, a Python package. Through its API, we can get solar event records from HEK based on the time and category of the solar event. For example, we can search for CD and CW from 2014 to 2017 and obtain CD and CW event records at different wavelengths (171 Å, 193 Å, 211 Å). From an event record, we obtain the start time and the end time; using these and the API, the metadata of the corresponding event can be downloaded from the AIA instrument of SDO. AIA records metadata every 12 seconds, so we can get multiple metadata files during the time span of an event. In addition, we use SolarSoft, a suite for IDL, to parse and preprocess the metadata, which are converted in turn into JPEG format. The event record also contains the location coordinates: the lower-left and upper-right corners of the bounding box. These coordinates are given in the Helioprojective-Cartesian (HPC) coordinate system and must be converted to image pixel coordinates, which we do with the API provided by the suite. This completes the data collection; we then process and clean the data.
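The HPC-to-pixel conversion described above can be sketched with a simple linear mapping. The plate scale (0.6 arcsec/pixel) and disk-center pixel (2048.5, 2048.5) used here are typical AIA header values (CDELT, CRPIX) and are illustrative; the paper itself performs this step with the SolarSoft API, and real headers may also include a rotation term that this sketch ignores.

```python
def hpc_to_pixel(tx, ty, cdelt=0.6, crpix=(2048.5, 2048.5)):
    """Convert Helioprojective-Cartesian coordinates (arcsec) to pixel coords.

    cdelt : plate scale in arcsec/pixel (CDELT1/2 in the FITS header).
    crpix : pixel coordinates of disk center (CRPIX1/2 in the FITS header).
    Assumes the image axes are aligned with the solar (Tx, Ty) axes.
    """
    x = crpix[0] + tx / cdelt
    y = crpix[1] + ty / cdelt
    return x, y

# A bounding box from HEK, given as lower-left / upper-right HPC corners:
ll = hpc_to_pixel(-300.0, -150.0)   # approx. (1548.5, 1798.5)
ur = hpc_to_pixel(-120.0, 60.0)     # approx. (1848.5, 2148.5)
```

A 4096×4096 JPEG shares the FITS pixel grid, so the same box can be drawn directly on the converted image.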

2.2. Data Processing

There are many data sources in the HEK database, and the data are relatively messy; inevitably, some records are incorrect. Furthermore, the occurrence of a solar event is a dynamic process, but the HEK database provides only a single set of coordinates, which is not ideal. We therefore clean and process the data to make them more reliable. As described above, we obtain multiple metadata files within the time range of an event. We then use the API to difference these metadata and obtain running-difference images, which make the location of an event easier to find. At the same time, we compare the location coordinates in the event records, clean up some unreasonable records, and adjust the event coordinates where necessary. Generating running-difference images takes a lot of time, and some parameters must be adjusted repeatedly to make the event noticeable. Figures 1 and 2 show some examples of CD and CW in the dataset, respectively.
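The running-difference step above can be sketched in NumPy. The clipping level is an assumption standing in for the paper's hand-tuned parameters; it saturates transient brightenings and dimmings so that faint fronts remain visible against the quiet background.

```python
import numpy as np

def running_difference(frames, clip=50):
    """Running-difference images: each frame minus its predecessor.

    frames : sequence of same-shaped 2-D arrays, ordered in time.
    clip   : symmetric clipping level (an illustrative tuning parameter).
    """
    diffs = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        d = curr.astype(np.int32) - prev.astype(np.int32)  # avoid uint underflow
        diffs.append(np.clip(d, -clip, clip))
    return diffs

# Two synthetic 4x4 "frames": a region dims by 80 counts between them.
a = np.full((4, 4), 100, dtype=np.uint16)
b = a.copy()
b[1:3, 1:3] -= 80
d = running_difference([a, b])[0]
# The dimmed region saturates at -clip; the quiet background stays 0.
```

On real AIA frames the same operation highlights the expanding dimming or wave front between consecutive 12-second exposures.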

In the end, we divide the dataset into eight parts. CD accounts for four of them: three single-wavelength parts (171 Å, 193 Å, 211 Å) and one multiwavelength part. Each image in the dataset has 4096×4096 pixels, and each part consists of a training set and a test set. Although we collect data from 2014 to 2017, the training and test sets are divided randomly. The three single-wavelength training and test sets differ from one another; the multiwavelength training set is the union of the single-wavelength training sets, and its test set coincides with a single-wavelength test set. The remaining four parts are for CW and are organized in the same way. Table 1 gives the details of the training and test sets. CD and CW are not common solar events, especially CW; such events are still controversial, and detecting them is a very challenging task.
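The split just described can be sketched as follows. The 20% test fraction and the sample counts are assumptions for illustration only; the paper does not state its split ratio, only that the split is random and that the multiwavelength training set is the union of the single-wavelength ones.

```python
import random

def split_dataset(samples, test_frac=0.2, seed=0):
    """Randomly split event samples into (train, test) lists.

    test_frac is an assumed fraction; the paper does not specify one.
    """
    rng = random.Random(seed)            # fixed seed for reproducibility
    samples = samples[:]                 # do not mutate the caller's list
    rng.shuffle(samples)
    n_test = int(len(samples) * test_frac)
    return samples[n_test:], samples[:n_test]

# Hypothetical per-wavelength sample IDs:
train171, test171 = split_dataset(list(range(100)))
train193, test193 = split_dataset(list(range(100, 200)))

# Multiwavelength training set = union of the single-wavelength training sets:
multi_train = train171 + train193
```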



3. Deep Learning for CD and CW Detection

In this section, we introduce the deep learning model, Faster R-CNN, for detecting CD and CW.

3.1. Deep Learning and Convolutional Neural Networks

Traditional machine learning techniques struggle to perform the object detection task. Deep learning, a relatively new field of machine learning research, is known for its strong learning ability. Its essence is that features are not artificially extracted and specified but are instead learned from data.

Convolutional neural networks (CNNs) are a kind of deep learning model. By training the filters, convolution and pooling are alternately applied to the original input image, so that increasingly complex hierarchical features are extracted. CNNs thus build a nonlinear mapping between input and output. Compared to traditional algorithms, deep CNNs perform well on big data: Kucuk et al. used deep CNNs to classify solar events with high precision [14].
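The alternating convolution and pooling described above can be illustrated with a minimal NumPy sketch (real CNNs learn the filter weights; the hand-picked edge filter here just shows the mechanics).

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2-D convolution (cross-correlation, as CNNs actually compute)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling over size x size blocks."""
    h, w = x.shape[0] - x.shape[0] % size, x.shape[1] - x.shape[1] % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A vertical-edge filter responds where intensity jumps between columns:
img = np.array([[0., 0., 1., 1.]] * 4)
edge = conv2d(img, np.array([[-1., 1.]]))  # strong response at the step
pooled = max_pool(edge)                    # downsampled, position-tolerant map
```

Stacking many learned filters and poolings is what lets a deep CNN turn raw solar images into the abstract features a detector classifies.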

3.2. Faster R-CNN

Faster R-CNN is a fast object detector implemented on the deep learning framework Caffe [20]. Before introducing Faster R-CNN, we briefly describe the Fast Region-based Convolutional Neural Network (Fast R-CNN).

The main idea of Fast R-CNN is to turn the object detection problem into a classification problem over candidate regions, taking advantage of the strong performance of deep learning on classification. An image is input to Fast R-CNN, and a CNN (the pretrained base network) is first used to obtain the image features. Fast R-CNN uses Selective Search (SS), an algorithm that generates object candidate regions [21]. It also introduces Region of Interest (RoI) pooling, which maps the candidate regions generated by SS onto the CNN feature layers and extracts deep features directly from them. Softmax is then used to classify the extracted features, and a bounding-box regressor is trained to improve the accuracy of the object location. Fast R-CNN thus integrates CNN training, classification, and regression, improving the efficiency of object detection as a whole. However, the SS algorithm is independent of the deep network and cannot use GPU computing, and its speed limits the overall performance of the algorithm. Faster R-CNN was proposed to solve this problem.

Faster R-CNN improves on Fast R-CNN by replacing the SS algorithm with a Region Proposal Network (RPN) for extracting candidate regions. Faster R-CNN integrates candidate region extraction, deep feature extraction, classification, and bounding-box regression into a single deep network. All of these tasks can be trained on the GPU, increasing the efficiency of the algorithm without losing accuracy.

The input to Faster R-CNN is a solar image and its corresponding label, which includes the event type (e.g., CD) and the location coordinates in the image. A feature map is extracted from the solar image by a deep convolutional network and then processed along two branches. One branch is the RPN, whose purpose is to obtain candidate regions; RoI pooling is applied to the candidate regions and the feature map to obtain candidate-region features. The other branch performs classification and bounding-box regression on those features. The topology of the network is shown in Figure 3.

The RPN is a fully convolutional network whose input is the feature map of the shared convolutional layers and whose output is a set of rectangular candidate regions. A small window is slid over the feature map, and at each position candidate regions are predicted simultaneously. Each candidate region is called an anchor, and the anchors at a position have different sizes and aspect ratios; with k anchors per position, each sliding window is mapped by a convolution to a low-dimensional vector. This vector feeds two subnetworks, one for bounding-box classification and one for bounding-box regression. The classification network outputs, for each anchor, whether it contains an object (foreground or background), so its output vector has size 2k. The regression network outputs adjustments to the location of each anchor to make it more accurate, so its output vector has size 4k. The network structure of the RPN is shown in Figure 4.
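The anchor construction can be sketched as follows. The base cell size, ratios, and scales below are the defaults from the original Faster R-CNN reference code, used here purely for illustration; the paper does not state which anchor settings it used.

```python
import numpy as np

def make_anchors(base=16, ratios=(0.5, 1, 2), scales=(8, 16, 32)):
    """Generate the k = len(ratios) * len(scales) anchors for one
    sliding-window position, centered on a base x base cell.
    Returns a (k, 4) array of (x1, y1, x2, y2) boxes.
    """
    cx = cy = (base - 1) / 2.0
    anchors = []
    for r in ratios:
        for s in scales:
            # Keep the anchor area near (base*s)^2 while varying the h/w ratio r.
            w = base * s / np.sqrt(r)
            h = base * s * np.sqrt(r)
            anchors.append([cx - (w - 1) / 2, cy - (h - 1) / 2,
                            cx + (w - 1) / 2, cy + (h - 1) / 2])
    return np.array(anchors)

a = make_anchors()
# 9 anchors per position: the classification subnetwork then emits 2k = 18
# foreground/background scores and the regression subnetwork 4k = 36 offsets.
```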

After the RPN, we obtain the object candidate regions, i.e., the rough positions of the objects. Deep features are extracted by applying RoI pooling to the candidate regions and the feature map. These features pass through two fully connected layers and are then processed in parallel: a softmax classification layer outputs the probability distribution of each candidate region over the categories, while a bounding-box regression layer again corrects the location of the object to make it more accurate. The loss function combines these two parts as

L(p, u, t^u, v) = L_cls(p, u) + λ[u ≥ 1] L_loc(t^u, v),  (1)

where p represents the probability distribution of each candidate-region feature over the categories, u represents the real category, t^u is the output of the bounding-box regression, and v represents the ground truth for category u. L_cls is the logistic loss of the real category, as in (2):

L_cls(p, u) = −log p_u.  (2)

L_loc is the bounding-box regression loss, as in (3); it is based on the two sets of parameters t^u = (t^u_x, t^u_y, t^u_w, t^u_h) and v = (v_x, v_y, v_w, v_h):

L_loc(t^u, v) = Σ_{i ∈ {x, y, w, h}} smooth_L1(t^u_i − v_i).  (3)

The smooth L1 loss is given in (4):

smooth_L1(x) = 0.5 x² if |x| < 1, |x| − 0.5 otherwise.  (4)

The indicator [u ≥ 1] equals 1 when u ≥ 1, meaning the candidate region is positive and contributes a regression loss; otherwise it equals 0 and there is no regression loss, since the candidate region is then background and the background has no ground truth. λ is used to balance the two losses.

4. Experimental Results and Discussion

4.1. Experiment Settings

The experiments were performed on a GPU server with an NVIDIA GM200 GPU, using the deep learning framework Caffe. We used the datasets described in Section 2, including the training and test sets. For CD, because of the small dataset size, we used the ZF pretrained model; for CW, whose dataset is larger, we used the VGG1024 pretrained model. We trained single-wavelength and multiwavelength models and tested them on the same test set. All models were trained for 35,000 iterations; training one model takes a long time, about 18 hours. To approach optimal performance, we tried only a few values of the 'iteration' and 'learning rate' parameters.

We use average precision (Ap), precision, and recall to characterize the performance of the model. A correct test standard is essential: if the IoU (the ratio of the intersection to the union) between a predicted bounding box (PBbox) and a ground-truth box (GTbox) is greater than 0.5, the detection is considered correct. Recall is the ratio of correct PBboxes to all ground-truth boxes, as in (5), and precision is the ratio of correct PBboxes to all PBboxes, as in (6):

Recall = (number of correct PBboxes) / (number of GTboxes),  (5)

Precision = (number of correct PBboxes) / (number of PBboxes).  (6)

Ap is related to the precision-recall curve and is calculated as the mean precision at eleven recall levels r ∈ {0, 0.1, ..., 1}:

Ap = (1/11) Σ_{r ∈ {0, 0.1, ..., 1}} p_interp(r).  (7)

At each level r, the interpolated precision p_interp(r) is the maximum precision for which the corresponding recall exceeds r:

p_interp(r) = max_{r̃ ≥ r} p(r̃),  (8)

where p(r̃) is the precision at recall level r̃.
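The matching criterion and the eleven-point Ap just defined can be sketched as follows; the precision-recall pairs passed to `ap_11point` would come from ranking a model's detections by confidence.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def ap_11point(recalls, precisions):
    """11-point interpolated Ap: mean over r in {0, 0.1, ..., 1} of the
    maximum precision observed at any recall >= r."""
    ap = 0.0
    for t in [i / 10 for i in range(11)]:
        ap += max([p for r, p in zip(recalls, precisions) if r >= t],
                  default=0.0)
    return ap / 11

# A detection counts as correct when IoU with a ground-truth box exceeds 0.5:
assert iou((0, 0, 2, 2), (1, 1, 3, 3)) == 1 / 7
```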

4.2. Single-Wavelength Faster R-CNN

For CD, the Ap value at 211 Å is 0.909, the best of the three single-wavelength models, while the 193 Å model has the lowest Ap. Overall, the accuracy of the three single-wavelength models is very high, as shown in Table 2. In addition, we compare the predicted bounding boxes with the labels acquired from the HEK database; all three models can accurately locate the events. Sample detection results are shown in Figure 5.

[Table 2: Ap values of the single-wavelength and multiwavelength Faster R-CNN models at 171 Å, 193 Å, and 211 Å; the table body was not recovered from the source.]


As for CW, the accuracy of the single-wavelength detection models is approximately 78%. The Ap values of the 171 Å, 193 Å, and 211 Å models are shown in Table 2: the 171 Å model has the highest Ap and the 211 Å model the lowest, but the differences among the three are small. We also compare the test results with the HEK database; the visual results in Figure 5 show that the events marked in HEK are recognized by our detection model.

In addition, we trained an R-FCN model to detect CDs and CWs. To ensure a fair comparison, we used the same dataset and experimental environment. As Table 3 shows, for CW the Ap of the 193 Å R-FCN model is higher than that of Faster R-CNN, and the Ap of the multiwavelength R-FCN model is higher than that of Faster R-CNN at 211 Å; in all other cases R-FCN is not as good as Faster R-CNN. We also recorded the training times of the two models: for single-wavelength models, Faster R-CNN takes approximately 18 hours and R-FCN 20 hours; for multiwavelength models, Faster R-CNN takes 21 hours and R-FCN 23 hours. Considering both factors, Faster R-CNN is better than R-FCN.

[Table 3: Ap values of the R-FCN models at 171 Å, 193 Å, and 211 Å; the table body was not recovered from the source.]


4.3. Multiwavelength Faster R-CNN

Since the single-wavelength models have good accuracy and intuitive results, we mix the characteristics of the solar events at the three wavelengths, letting them complement each other, to form a multiwavelength detection model. The Ap values for CD and CW are also shown in Table 2. Compared with the single-wavelength models, only the 193 Å test images yield a higher Ap; the others are lower. This does not mean that the multiwavelength model is worse than the single-wavelength ones, since precision and recall are also indicators of model performance. The precision and recall for the single-wavelength and multiwavelength models, averaged over the test images, are shown in Figure 6. We find that the precision of the single-wavelength models is higher than that of the multiwavelength one, but the recall of the multiwavelength model is higher. Inspecting the detections visually, the multiwavelength model is better at detecting multiple solar events: as shown in Figure 7, the single-wavelength models do not detect all of the events in an image, whereas the multiwavelength model detects them relatively accurately. However, the multiwavelength model also produces some incorrect bounding boxes, which lowers its precision below that of the single-wavelength models. As for recall, the number of GTboxes is fixed, so the denominator in (5) is constant, and the increased number of correctly detected bounding boxes raises the recall.
Although Ap is an important measure of a model, it is not absolute. The analysis above shows that the multiwavelength detection model is still effective.

5. Conclusions

We trained deep learning models to detect CD and CW at single wavelengths and at multiple wavelengths, respectively. The results show that the models can accurately detect about 70% of CDs and CWs. Single-wavelength models have high accuracy in detecting single solar events, but their detections are sometimes incomplete when multiple solar events are present. Multiwavelength models perform better at detecting multiple solar events, but are not as accurate as single-wavelength models and sometimes predict incorrect bounding boxes, misinterpreting some regions as solar events. Applying deep learning to solar event detection is feasible and of research value.

For future work, we first consider adding difference images to the training of the model: solar events are easier to identify in difference images, which may enhance detection in the original images. Furthermore, we will consider feature fusion across wavelengths: the same solar event may appear in images at multiple wavelengths, and merging multiwavelength features may enhance its detection at any single wavelength.

Data Availability

SolarData consists of two parts, CD and CW. The CD part covers three wavelengths (171 Å, 193 Å, 211 Å), each containing the solar images and labels (event type and coordinates); the CW part is organized in the same way. The SolarData used to support the findings of this study has been deposited in a GitHub repository.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.


Acknowledgments

The authors thank the SunPy developers, SDO, and HEK for providing the data used in this paper. This work is supported by the National Natural Science Foundation of China under Grants 61432011, U1435212, and 61105054, and by the Open Research Program of the Key Laboratory of Solar Activity.

References
  1. N. Gopalswamy, “A global picture of CMEs in the inner heliosphere,” in The Sun and the Heliosphere as an Integrated System, vol. 317 of Astrophysics and Space Science Library, pp. 201–251, Springer, Dordrecht, Netherlands, 2004. View at: Publisher Site | Google Scholar
  2. I. G. Richardson, E. W. Cliver, and H. V. Cane, “Sources of geomagnetic storms for solar minimum and maximum conditions during 1972-2000,” Geophysical Research Letters, vol. 28, no. 13, pp. 2569–2572, 2001. View at: Publisher Site | Google Scholar
  3. T. H. Howard and R. A. Harrison, “On the coronal mass ejection onset and coronal dimming,” Solar Physics, vol. 219, no. 2, pp. 315–342, 2004. View at: Google Scholar
  4. A. A. Reinard and D. A. Biesecker, “The relationship between coronal dimming and coronal mass ejection properties,” The Astrophysical Journal , vol. 705, no. 1, pp. 914–919, 2009. View at: Publisher Site | Google Scholar
  5. M. Temmer, A. M. Veronig, N. Gopalswamy, and S. Yashiro, “Relation between the 3D-geometry of the coronal wave and associated CME during the 26 April 2008 event,” Solar Physics, vol. 273, no. 2, pp. 421–432, 2011. View at: Publisher Site | Google Scholar
  6. D. Bewsher, R. A. Harrison, and D. S. Brown, “The relationship between EUV dimming and coronal mass ejections,” Astronomy & Astrophysics, vol. 478, no. 3, pp. 897–906, 2008. View at: Publisher Site | Google Scholar
  7. O. Podladchikova and D. Berghmans, “Automated detection Of eit waves and dimmings,” Solar Physics, vol. 228, no. 1-2, pp. 265–284, 2005. View at: Publisher Site | Google Scholar
  8. L. D. Krista and A. Reinard, “Study of the recurring dimming region detected at AR 11305 using the COronal DImming Tracker (CoDiT),” The Astrophysical Journal, vol. 762, no. 2, p. 91, 2013. View at: Google Scholar
  9. B. J. Thompson and D. C. Myers, “A catalog of coronal “EIT wave” transients,” The Astrophysical Journal Supplement Series, vol. 183, no. 2, pp. 225–243, 2009. View at: Publisher Site | Google Scholar
  10. N. V. Nitta, C. J. Schrijver et al., “Large-scale coronal propagating fronts in solar eruptions as observed by the atmospheric imaging assembly on board the solar dynamics observatory - An ensemble study,” The Astrophysical Journal, vol. 776, no. 1, pp. 1567–1579, 2013. View at: Google Scholar
  11. P. C. Martens, G. D. Attrill, A. R. Davey et al., “Computer vision for the solar dynamics observatory (SDO),” Solar Physics, vol. 275, no. 1-2, pp. 79–113, 2012. View at: Publisher Site | Google Scholar
  12. M. J. Wills-Davey, “Tracking large-scale propagating coronal wave FRONTS (EIT waves) using automated methods,” The Astrophysical Journal, vol. 645, no. 1, pp. 757–765, 2006. View at: Publisher Site | Google Scholar
  13. D. M. Long, D. S. Bloomfield, P. T. Gallagher, and D. Pérez-Suárez, “CorPITA: an automated algorithm for the identification and analysis of coronal “EIT Waves”,” Solar Physics, vol. 289, no. 9, pp. 3279–3295, 2014. View at: Publisher Site | Google Scholar
  14. A. Kucuk, J. M. Banda, and R. A. Angryk, “Solar event classification using deep convolutional neural networks,” in Proceedings of the International Conference on Artificial Intelligence & Soft Computing, Springer, Cham, Switzerland, 2017. View at: Google Scholar
  15. A. Kucuk, B. Aydin, and R. Angryk, “Multi-wavelength solar event detection using faster R-CNN,” in Proceedings of the 5th IEEE International Conference on Big Data, Big Data 2017, pp. 2552–2558, December 2017. View at: Google Scholar
  16. W. Liu, D. Anguelov, D. Erhan et al., SSD: Single Shot Multibox Detector, vol. 9905, European conference on computer vision, Springer, Cham, Switzerland, 2016. View at: Publisher Site
  17. J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, July 2016. View at: Google Scholar
  18. J. Dai, Y. Li, K. He, and J. Sun, “R-FCN: Object detection via region-based fully convolutional networks,” in Proceedings of the 30th Annual Conference on Neural Information Processing Systems, NIPS 2016, December 2016. View at: Google Scholar
  19. S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137–1149, 2017. View at: Publisher Site | Google Scholar
  20. Y. Jia, E. Shelhamer, J. Donahue et al., Caffe: Convolutional Architecture for Fast Feature Embedding, ACM, 2014. View at: Publisher Site
  21. J. R. R. Uijlings, K. E. A. Van De Sande, T. Gevers, and A. W. M. Smeulders, “Selective search for object recognition,” International Journal of Computer Vision, vol. 104, no. 2, pp. 154–171, 2013. View at: Publisher Site | Google Scholar

Copyright © 2019 Zongxia Xie and Chunyang Ji. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
