Abstract

With the increasing interest in effective renewable alternative energy sources resulting from the 2015 Paris Agreement on Climate Change, photovoltaic (PV) power generation is attracting attention as a practical measure. In this study, we develop procedures for efficiently monitoring PV panels over a large area and increasing their classification accuracy to enable efficient management of PV panels, an important component of renewable energy generation. To accomplish this, the persistent scatterer characteristics (e.g., polarization, imaging module, and topography) of PV panels in SAR images were first utilized, and a technique was developed for classifying panels over a certain size using the polarization and pulse-scattering characteristics of Sentinel-1. Next, by stacking Sentinel-1 Ground Range Detected (GRD) images and comparing them with the surroundings of the same area, the morphological features of PV panels were derived and built into training data for machine learning. A more precise classification of PV panels was then performed by applying these training data to AI algorithms. When SAR-based training data for the same PV panels were used in the YOLOv3 and YOLOv5 algorithms, both showed high accuracy of over 90%, but there were differences in precision and recall. These findings will enable more efficient monitoring of PV panels, the use of which is expected to increase in the future. In addition, they can serve as a proactive response tool for environmental problems such as PV panel waste and panels washed away during natural disasters.

1. Introduction

Since the 21st Conference of the Parties to the United Nations Framework Convention on Climate Change (COP21, Paris Agreement) [1] in December 2015, efforts to reduce greenhouse gas emissions have been mandated around the world. Countries are reorganizing their supply and expansion of sustainable renewable energy, with a focus on photovoltaic (PV) power generation (IPCC, 2014). In 2017, South Korea's renewable energy capacity was about 39.9 GW, of which about 5.7 GW was PV [2], and this level is expected to continue to increase. Korea plans to expand its renewable energy to a total of 53 GW by 2030, with PV generation accounting for more than 50% [3].

Although the installation and operation of PV generation are recognized as ecofriendly, PV panels produce a large amount of waste at the end of their service lives, and proper disposal requires complex chemical treatment [4–6]. PV panel waste contains carcinogenic heavy metals, including lead, cadmium, and chromium, and there are currently no regulations for its disposal [7]. In addition, copper, indium, potassium, and selenium compounds are involved in the chemical treatment of some PV panels during their manufacturing process. If such panel waste is neglected or buried, secondary environmental problems, including soil and water pollution, may occur and increase over time [8]. The average lifespan of a typical PV panel is about 30 years, so PV panel waste is likely to increase rapidly in the future [9]. The International Renewable Energy Agency has predicted that as much as 78 million tons of PV panel waste will be generated by 2050; recalculated on an annual basis, this amounts to about 6 million tons per year [9].

Moreover, in South Korea, when a PV system exceeding a certain capacity is installed and operated for sustainable renewable energy generation, it must be reported to the central and local governments and is subject to an environmental impact assessment (EIA) [10]. However, solar panel installations below that size may be operated autonomously, without reporting obligations or an EIA, which can result in a lack of appropriate management.

Problems related to PV panels can thus be divided into two general categories: first, only PV panels over a certain size are monitored during installation, operation, and management; second, autonomously operated PV panels below that size are left unattended with respect to the secondary environmental pollution that could occur after their disposal. A rapid and effective monitoring method is therefore needed to utilize PV power more efficiently and to address these potential environmental problems proactively.

Remote sensing can be used to efficiently monitor both large areas and difficult-to-access areas. In particular, synthetic aperture radar (SAR) is advantageous for distinguishing PV panels from other objects because its active sensor captures both the backscattering characteristics and the physical installation characteristics of the panels. Furthermore, applying machine learning, which has recently been used in various ways for object classification and recognition, to SAR imagery may enable more precise classification of PV panels.

Many studies have applied SAR imagery to machine learning-based algorithms, including deep learning. Two studies applied Radarsat-2 imagery to convolutional neural network- (CNN-) based algorithms to classify multiple objects such as water and trees, with classification accuracies exceeding 80% [11, 12]. In addition, [13] applied Radarsat-2 polarimetric SAR (PolSAR) imagery to a deep belief network (DBN) to classify 10 classes, including pastures and crops, and achieved an accuracy of 81%, and [14] applied SAR data to semantic segmentation. In the field of object detection with SAR, region-based CNNs such as Fast R-CNN and Mask R-CNN have been used to detect oil slicks and vehicles at sea [15, 16]. The You Only Look Once (YOLO) algorithm, the most widely used machine learning algorithm for object detection, offers higher classification performance and faster training than other object detection algorithms [17]. Studies applying YOLO to SAR images have mainly focused on detecting means of marine transportation, including small ships [18–20].

The contributions of this study, derived from the current utilization status of PV panels and from previous studies, are as follows. First, PV panels were classified over a large area. Most previous PV panel monitoring studies have targeted a single PV panel or individual panels within PV power plants [21–23]. To this end, those studies used remote sensing data, including optical images from unmanned aerial vehicles (UAVs), which have a narrow spatial range but high spatial resolution [21–23]. Chen et al. [24] classified solar panels by applying artificial intelligence techniques to Sentinel-2 optical images. Thus, although PV panels have been monitored over narrow areas, and most detection research has focused on finding PV panels or panel defects in small areas using various techniques including deep learning, studies that use SAR images to cover large areas are still lacking [21–23]. In this study, the distribution of PV panels over a wide area was monitored using Sentinel-1, which has a wide spatial coverage. Because Sentinel-1 SAR images are acquired with microwaves, monitoring can be carried out continuously without being affected by weather conditions [25, 26]. Second, object classification was performed using SAR satellite images and AI algorithms; the representative object detection models YOLOv3 and YOLOv5 were applied. YOLOv3 is one of the representative deep learning algorithms, and YOLOv5 is a more recent algorithm released in 2020. The monitoring performance was evaluated by applying both the previously verified algorithm and the newer algorithm.

2. Methods

The purpose of this study was to quickly classify PV panels in a wide area using SAR images and to classify them more precisely using machine learning. We sought to develop a technique for efficiently classifying PV panels distributed in various sizes by fusing and utilizing existing time-series images and new images acquired in a wide area.

In more detail, the procedure was as follows. First, a study area in which PV panels over a certain size are distributed in various configurations was selected. The characteristics of persistent scatterers (e.g., polarization, imaging module, and topographical characteristics) exhibited by PV panels in SAR images were utilized; that is, we sought to develop a technique for efficiently classifying PV panels over a certain size by using the polarization and pulse-scattering characteristics of Sentinel-1. Second, the distribution of PV panels was rapidly classified using time-series SAR images of the wide study area. To this end, Sentinel-1 Ground Range Detected (GRD) images were stacked and compared with the surroundings in scenes of the same area, from which the characteristic distribution patterns of PV panels were extracted and constructed as training data for machine learning. Third, a more precise classification of PV panels was performed by applying the SAR image-based training data of the study area to AI algorithms (YOLOv3 and YOLOv5) (Figure 1).

2.1. Study Area

The machine learning dataset for PV panel detection was constructed by targeting 570 PV plants installed in the region between 34.317°N, 125.842°E and 35.112°N, 126.594°E (Figure 2). The study area, located in southwestern South Korea, includes the administrative districts of Mokpo-si, Muan-gun, Shinan-gun, Jindo-gun, and Haenam-gun. The region includes numerous islands, and PV power plants have been developed rapidly in its large wetlands and abandoned salt fields since 2015 [27].

The PV plant complex located in the study area has a total annual electricity generation of about 18 GW based on renewable portfolio standard (RPS) projects [28], and it can produce up to 89 MW per day. It has the highest concentration of renewable energy generation facilities in Korea [29] (Figure 3). Assuming that the entire complex is composed of 300 W solar panels, it includes about 5.33 million solar panels with a total PV panel area of about 10.31 km2 [30]. Because most of the PV plants in the study area are in coastal areas with an altitude of less than 50 m and an inclination of less than 6%, errors caused by topographic effects in SAR images, including foreshortening, layover, and shadowing, were expected to be smaller than those for complexes in other areas.

2.2. Dataset Configuration

The raw data used to build the machine learning dataset were Sentinel-1 images acquired in 2020. First, the signal characteristics of the PV panels in the study area were analyzed. As persistent scatterers, PV panels have several characteristics in SAR images. First, they exhibit a specific range of signal strengths: as shown in Figure 4, the studied panels had signal strengths from –14 dB to 2 dB, with a peak at –8 dB. Second, they show consistently low signal strengths across the temporal baseline of the SAR image series (Figure 4). This is assumed to occur because the low surface roughness of the PV panels and their constant azimuth angle cause most of the electromagnetic signal emitted by the SAR satellite to be reflected away from the sensor in the specular direction [31]. In this study, training data for machine learning were constructed using these characteristics.
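As an illustration of how these two characteristics could be operationalized, the following minimal sketch (not part of the original processing chain) flags candidate PV pixels in a coregistered backscatter stack; the dB range comes from Figure 4, while the temporal-variability threshold and the stack dimensions are assumptions made only for demonstration.

```python
import numpy as np

def pv_candidate_mask(stack_db, low=-14.0, high=2.0, max_std=1.5):
    """Flag pixels whose temporal mean backscatter falls in the PV range
    and whose temporal variability is low (persistent, consistent scatterers).

    stack_db : 3-D array (time, rows, cols) of calibrated backscatter in dB.
    low/high : signal-strength range observed for PV panels in this study.
    max_std  : illustrative variability threshold (not from the paper).
    """
    mean_db = np.nanmean(stack_db, axis=0)
    std_db = np.nanstd(stack_db, axis=0)
    return (mean_db >= low) & (mean_db <= high) & (std_db <= max_std)

# Example with a synthetic 6-scene stack of a 512 x 512 tile.
stack = np.random.normal(-8.0, 3.0, size=(6, 512, 512))
mask = pv_candidate_mask(stack)
print("candidate pixels:", int(mask.sum()))
```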

Sentinel-1, a C-band SAR satellite, acquires data using the Terrain Observation with Progressive Scans (TOPS) technique in the azimuth direction [32]. The satellite operates in four exclusive acquisition modes with different spatial resolutions and swath widths. The interferometric wide swath (IW) mode is widely used in applications related to interferometric synthetic aperture radar (InSAR). In IW mode, the reflected radar pulse signal can be acquired at good resolution (about 10 m) over a large area in a single pass; the mode captures three subswaths with a total swath width of about 250 km. Sentinel-1 also provides single look complex (SLC) products with a repeat cycle as short as 6 days, and SLC images can be downloaded free of charge for research purposes. These characteristics are very useful in studies monitoring changes in the ground surface over time.

In this study, IW mode GRD Level-1 amplitude images captured by Sentinel-1A and Sentinel-1B from January 2020 to April 2021 were used. Because the goal was to detect fixed artificial structures (PV panels), the analysis was not restricted to a single polarization direction. To construct the machine learning data, both vertical-transmit/vertical-receive (VV) and vertical-transmit/horizontal-receive (VH) polarization data and both imaging modules (ascending and descending orbits) were used.

The Sentinel-1 C-band Ground Range Detected (GRD) log-scaled datasets were processed in Python through the application programming interface (API) for ESA Copernicus Sentinel-1 data provided via Earth Engine. The datasets are updated on a daily basis and become accessible within about 2 days after capture [33]. For this study, 98 GRD amplitude images taken in 2020 were used as training data; they were grouped by month, orbit direction, and polarization and then stacked.
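As a hedged example of how such monthly, orbit- and polarization-specific stacks could be assembled, the sketch below uses the Google Earth Engine Python API and its COPERNICUS/S1_GRD catalog; the exact query used in this study is not documented here, so the collection choice, the study-area rectangle, and the median compositing are assumptions for illustration.

```python
import ee

ee.Initialize()

# Study-area rectangle from Section 2.1 (lon/lat corners).
aoi = ee.Geometry.Rectangle([125.842, 34.317, 126.594, 35.112])

def monthly_grd_stack(year, month, orbit_pass, polarisation):
    """Return a monthly median composite of Sentinel-1 IW GRD scenes
    for one orbit direction ('ASCENDING'/'DESCENDING') and one band."""
    start = ee.Date.fromYMD(year, month, 1)
    coll = (ee.ImageCollection('COPERNICUS/S1_GRD')
            .filterBounds(aoi)
            .filterDate(start, start.advance(1, 'month'))
            .filter(ee.Filter.eq('instrumentMode', 'IW'))
            .filter(ee.Filter.eq('orbitProperties_pass', orbit_pass))
            .filter(ee.Filter.listContains(
                'transmitterReceiverPolarisation', polarisation))
            .select(polarisation))
    return coll.median().clip(aoi)

# Example: January 2020, descending orbit, VV polarization.
vv_desc_jan = monthly_grd_stack(2020, 1, 'DESCENDING', 'VV')
```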

Because SAR satellites carry active sensors, the images obtained from them generally contain various types of noise. Among these, speckle noise is multiplicative; it degrades the quality of SAR images and causes information loss [34]. The Sentinel-1 GRD images used in this study can therefore cause errors in object identification owing to salt-and-pepper noise, as shown in Figure 5(b). In addition, the Sentinel-1 mission was designed to map large areas containing simple objects, such as forests, lakes, and rivers, rather than areas with diverse objects such as cities, so a general noise-removal method is required [34]. The most common and effective way to reduce this type of noise is a median filter [35]. In this study, a median filter with low computational complexity was used to construct quickly and efficiently the large amount of training data required by the AI algorithms. The filter was applied along the temporal dimension of the stack, so the spatial resolution was not degraded. Because the spatial resolution of the Sentinel-1 GRDH images is about 10 m, which is relatively coarse for detecting individual solar panels, a spatial kernel that would further degrade the spatial resolution could not be used during noise removal. The SAR training datasets were organized monthly from the ascending and descending orbit GRD images acquired over the 14 months from January 2020 to February 2021; 2–6 images of the same scene were stacked, and the noise of the entire image was removed consistently by applying the median filter. This approach has the advantage of averaging out speckle, which is largely uncorrelated between acquisitions [36]. By stacking the images, the characteristics of PV panels on the ground are observed more clearly, as shown in Figures 5(c) and 5(d).
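A minimal sketch of this temporal median filtering is given below, assuming the monthly scenes have already been coregistered into a (time, rows, cols) array; the three-scene kernel length is illustrative rather than the value used in the study.

```python
import numpy as np
from scipy.ndimage import median_filter

def despeckle_stack(stack_db, temporal_size=3):
    """Median-filter along the time axis only, so the 10 m spatial
    resolution of the GRD pixels is left untouched."""
    # size=(temporal_size, 1, 1): the kernel spans acquisitions, not pixels.
    return median_filter(stack_db, size=(temporal_size, 1, 1))

def monthly_composite(stack_db):
    """Collapse the filtered stack into a single image for the month."""
    return np.nanmedian(despeckle_stack(stack_db), axis=0)
```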

The spatial extent of the raw Sentinel-1 images used in this study is about 250 km in swath width, corresponding to roughly 25,000 horizontal by 17,000 vertical pixels at a 10 m spatial resolution. The scenes were cropped into smaller patches so that the image data could be applied to the YOLOv3 and YOLOv5 algorithms, and the datasets were adjusted by normalizing them from the 16-bit to the 8-bit range (Figure 5).
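The patch extraction and 16-bit to 8-bit normalization could be implemented as in the following sketch; the 640-pixel patch size and the percentile stretch are assumptions for illustration, since the exact patch size and scaling used in the study are not specified here.

```python
import numpy as np

def to_uint8(img16, low_pct=2, high_pct=98):
    """Rescale a 16-bit scene to the 8-bit range using a percentile
    stretch (the percentile values are illustrative, not from the paper)."""
    lo, hi = np.percentile(img16, (low_pct, high_pct))
    scaled = np.clip((img16.astype(np.float32) - lo) / (hi - lo), 0, 1)
    return (scaled * 255).astype(np.uint8)

def tile(img, size=640):
    """Crop a full scene (~25,000 x 17,000 px) into fixed-size patches."""
    rows, cols = img.shape
    for r in range(0, rows - size + 1, size):
        for c in range(0, cols - size + 1, size):
            yield (r, c), img[r:r + size, c:c + size]
```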

Using the time-series Sentinel-1 GRD images, the ascending and descending images from the active sensor's imaging modules were first separated. Thereafter, four datasets were constructed by further dividing the images into the two polarizations, VV and VH, based on the transmission/reception direction (Table 1). The construction process was further subdivided as follows. First, the spatial status of PV panels exceeding a certain size, which are subject to EIA, was utilized; accurate location information is available for such panels, and this information was used to build the training data. Second, based on the EIA data, attribute information from the time of construction of the PV panels was included in the data labels. Creating the SAR-based training datasets with the construction dates taken into account prevented the generation of patches in which PV panels did not yet exist at a given time. Third, the PV footprint obtained from aerial and satellite images was labeled for the entire study area. This is the most important step in building the datasets: because the polygons for PV plants represent geographically accurate location information, their coordinate locations were shared across all datasets.
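For reference, once the PV footprint polygons are projected into patch pixel coordinates, they can be converted to the standard YOLO label format (one text file per patch, one line per object), as sketched below; the patch size and file names are placeholders.

```python
def yolo_label(bbox_px, patch_size=640, class_id=0):
    """Convert a pixel-space bounding box (xmin, ymin, xmax, ymax) of a PV
    footprint within one patch into a YOLO-format label line:
    'class x_center y_center width height', all normalised to 0-1."""
    xmin, ymin, xmax, ymax = bbox_px
    xc = (xmin + xmax) / 2 / patch_size
    yc = (ymin + ymax) / 2 / patch_size
    w = (xmax - xmin) / patch_size
    h = (ymax - ymin) / patch_size
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# One label file per patch, e.g. patch_000123.txt (hypothetical name).
with open("patch_000123.txt", "w") as f:
    f.write(yolo_label((112, 80, 301, 210)) + "\n")
```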

2.3. Research Method

AI algorithms for image analysis are largely divided into semantic segmentation and object detection algorithms. Many recent studies of object detection and monitoring with SAR images have applied deep learning algorithms [37–39]. Object detection algorithms are further divided into one-stage and two-stage detectors; two-stage R-CNN algorithms are less efficient in training time than one-stage YOLO algorithms [40, 41]. In particular, the YOLO algorithm shows high performance in monitoring various objects, such as detecting ships sailing in the ocean [21–23]. Most research related to monitoring and detecting PV panels has used UAV images to detect a single PV panel or PV panels in a small area [24–26]. In this study, to detect PV panels over a large area using SAR satellite images, the YOLO algorithm was selected because it has shown high performance in object detection from SAR images in existing deep learning-based detection and monitoring studies. The YOLOv3 model has already been used in various fields, including ship detection, and its performance has been recognized [24–26]. The YOLOv5 model has recently been widely used in object detection, and its performance with high-resolution remote sensing data has been recognized [42–45]. This study therefore compared and evaluated the YOLOv3 and YOLOv5 models.

2.3.1. YOLOv3

The YOLO algorithm, published by [46], is a one-stage network model that uses a simple CNN. A one-stage detector is an algorithm that executes object localization and object classification simultaneously [47]. The YOLO algorithm outperforms the conventional Fast R-CNN on the PASCAL Visual Object Classes (VOC) dataset [48]. It predicts multiple bounding boxes per grid cell with a single network and keeps only the required number of boxes by using a confidence score based on its own classification result [48]. YOLOv2, the follow-up to YOLO, applies batch normalization to all convolution layers and further improves performance, for example by predicting bounding boxes directly from the start using anchor boxes as initial values. As a result, it shows higher accuracy and a shorter training time than the single-shot multibox detector [49]. YOLOv3 uses Darknet-53, which employs residual blocks and skip connections, as its backbone network and achieves better training speed than existing algorithms [50, 51] (Figure 6).

2.3.2. YOLOv5

YOLOv5, proposed by [52] as an improvement on the existing YOLO algorithms, is provided in several model sizes, mainly YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x, ranging from small (s) to extra-large (x); the larger variants correspond to deeper neural networks [53]. YOLOv5 uses CSPDarknet53 instead of the Darknet-53 backbone network of YOLOv3 [54] and shows higher efficiency, including shorter training time, than YOLOv3 and YOLOv4 [55]. YOLOv5 is composed of four modules, including the input and backbone modules; compared with YOLOv3, the Focus module was added to the backbone, and the neck module was changed to use CSPNet [56] (Figure 7).

3. Results

3.1. Algorithm Application Result

The datasets to which the YOLOv3 and YOLOv5 algorithms were applied in this study were developed from a total of 13,152 image patches. Training and validation datasets were constructed at a ratio of approximately 7 : 3, following a previous study [58]. For YOLOv3, Darknet-53 was used as the backbone network, while for YOLOv5, training was carried out with the YOLOv5l model. The number of epochs, a hyperparameter indicating the number of training passes, was set to 1,200. The batch size, a hyperparameter indicating the number of samples learned at one time, was set to 24 in consideration of computational cost. The learning rate and other hyperparameters were left at their default values, and models with pretrained weights were used for both YOLOv3 and YOLOv5 (Table 2). Machine learning was then carried out.
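A minimal sketch of the YOLOv5 training run with the hyperparameters in Table 2 is shown below, assuming the Ultralytics YOLOv5 repository has been cloned into a local yolov5/ directory; the dataset description file pv_panels.yaml and the 640-pixel image size are hypothetical, and YOLOv3 was trained analogously in its own framework.

```python
import subprocess

# Hyperparameters from Table 2: 1,200 epochs, batch size 24, pretrained
# YOLOv5l weights; pv_panels.yaml and the image size are placeholders.
subprocess.run([
    "python", "train.py",
    "--data", "pv_panels.yaml",
    "--weights", "yolov5l.pt",
    "--epochs", "1200",
    "--batch-size", "24",
    "--img", "640",
    "--name", "pv_yolov5l",
], check=True, cwd="yolov5")
```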

Machine learning was performed for up to 1,200 epochs, and the final weights and the weights that achieved the highest performance were stored separately. Figure 8 compares the training objectness and box loss values of YOLOv3 and YOLOv5 over the 1,200 epochs. For objectness loss, both algorithms showed stable values below 0.005, although the value for YOLOv3 was approximately 0.002 lower than that for YOLOv5 during training. Box loss decreased to less than 0.04 as training progressed through 1,200 epochs, with YOLOv5 about 0.004 lower than YOLOv3.

Figure 9 shows the highest validation recall and precision achieved by the YOLOv3 and YOLOv5 algorithms. Recall (Figure 9(a)), which can also be called sensitivity, is the proportion of true positives identified by the algorithm among all samples that should have been predicted as positive [50]. As with precision, YOLOv5 achieved high recall at low epoch counts, whereas YOLOv3 achieved higher recall than YOLOv5 when training exceeded 800 epochs. Precision (Figure 9(b)) is the proportion of predicted positives that are actually positive [59]. The precision of YOLOv5 increased until 200 epochs and then remained nearly constant, while the precision of YOLOv3 fluctuated until 600 epochs and became constant from 800 epochs. In terms of the highest precision value, YOLOv5 was about 0.02 higher than YOLOv3.

For these two indicators, the YOLOv5 algorithm showed higher performance at low epoch counts than the YOLOv3 algorithm, and when training was performed for a sufficient number of epochs, the two algorithms yielded similar performance. Precision indicates the proportion of true positives among all results predicted as positive by the algorithm, and recall indicates the proportion of true positives among the results that should have been predicted as positive. On this basis, the results of this study can be interpreted as follows: YOLOv5, with its higher precision, had a lower false-alarm rate than YOLOv3, and YOLOv3, with its higher recall, found more PV panels than YOLOv5.
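Expressed in terms of true positives (TP), false positives (FP), and false negatives (FN), these two indicators are

\[
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}.
\]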

Next, we plotted precision-recall (PR) curves and defined the area under each curve as the average precision (AP) [60]. An area closer to 1 denotes higher performance, and the mean AP (mAP) is the average AP over all classes [61, 62]. According to the mAP values, YOLOv5 showed higher performance when trained for fewer than 600 epochs, whereas with more than 800 epochs the performance of the two algorithms was almost identical (Figure 10). Overall, YOLOv3 had slightly higher AP and recall but slightly lower precision than YOLOv5 (Table 3).
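With p(r) denoting precision as a function of recall, the AP and mAP referred to above can be written as

\[
AP = \int_{0}^{1} p(r)\,dr, \qquad
mAP = \frac{1}{N}\sum_{i=1}^{N} AP_i,
\]

where N is the number of classes; in this study there is effectively a single PV panel class, so mAP coincides with AP.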

The more consistent results of YOLOv5 in Figures 9 and 10 are presumed to stem from its CSPDarknet53 backbone network, which is improved over the earlier YOLO algorithms [55]. However, after sufficient training (about 800 epochs), YOLOv3 shows similar performance. If training for a large number of epochs is not feasible, choosing the YOLOv5 algorithm appears to be the wiser option.

Figure 11 shows the validation images: (a) the input images, (b) the PV panel reference images, (c) the detection results of YOLOv3, and (d) the detection results of YOLOv5. To distinguish the results more clearly, the readability of the PV panels in the SAR images was enhanced for Figure 11. Image (1), at the top of Figure 11, is a descending orbit image corresponding to number 06 in Table 1; image (2) is a descending orbit image corresponding to number 07 in Table 1; and image (3) is an ascending orbit image corresponding to number 06 in Table 1. Both the YOLOv3 and YOLOv5 algorithms performed well in classification, although their bounding box sizes differ. In image (1), PV panels occupy about 54% of the entire image and are located at its center, and it was confirmed that they can be classified without difficulty. In image (2), unlike image (1), the PV panels are not located in the middle of the image and occupy only about 4.6% of it, confirming that the algorithms can perform classification even when PV panels cover a small area. In image (3), PV panels occupy about 27.1% of the total area, and, compared with images (1) and (2), there are objects that look similar to PV panels even to the naked eye; the classification results confirm that the algorithms can still distinguish the PV panels in such difficult images with similar objects in adjacent areas. The YOLOv5 model tends to detect objects with smaller bounding boxes than the YOLOv3 model; when estimating the bounding box, YOLOv3 tended to draw boxes about 1–4% larger than YOLOv5. This slight difference in bounding box size appears to be related to YOLOv3's slightly higher average precision (AP).
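As an illustration of how detections such as those in Figure 11 can be produced from trained weights, the following sketch loads a YOLOv5 model through torch.hub and prints the predicted bounding boxes for one validation patch; the weight and image file names are placeholders, and this is not the exact inference script used in the study.

```python
import torch

# Load the best weights produced during training (file name hypothetical).
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
model.conf = 0.25  # confidence threshold (default value, shown explicitly)

results = model('patch_000123.png')  # one 8-bit validation patch
# Each row of results.xyxy[0]: xmin, ymin, xmax, ymax, confidence, class.
for *box, conf, cls in results.xyxy[0].tolist():
    print([round(v, 1) for v in box], round(conf, 3))
```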

4. Discussion

Our approach and findings can be summarized as follows. First, to classify PV panels more clearly in the SAR images, a stacking method was applied to the time-series images. Second, characteristics such as polarization, topography, and structure were derived from the images and applied directly to the classification; training data were then constructed, and PV panel predictions were derived by applying the YOLOv3 and YOLOv5 algorithms to these data. Third, the two algorithms performed similarly in terms of objectness and box loss (indicating algorithm error) as well as precision and recall (indicating accuracy). Comparing the three accuracy indicators (precision, recall, and AP), YOLOv5 was 0.2% higher than YOLOv3 in precision, YOLOv3 was 0.4% higher than YOLOv5 in recall, and, finally, YOLOv3 was 0.1% higher in AP. Furthermore, when analyzed by epoch, YOLOv5 showed a stable AP from 200 epochs, whereas YOLOv3 did so only from 600 epochs. Consequently, YOLOv5 is judged to be more efficient in terms of learning speed, while YOLOv3 is more advantageous in terms of learning accuracy, although this may vary depending on the purpose of each study.

For object detection and monitoring, deep learning algorithms, which have recently been used in a range of studies, were utilized. Of the various algorithms, YOLO, which shows high performance in object detection, was selected, and training and object classification were performed with the YOLOv3 and YOLOv5 algorithms. The results showed that not only PV panels in large-scale solar power plants but also those in small-scale plants could be detected. For the AP, an indicator of performance, YOLOv3 achieved about 1% higher performance than YOLOv5.

Compared with previous studies that classified solar panels or similar targets using the US Geological Survey dataset and Sentinel-2 images, which reported performance of about 90%, this study achieved similar performance, which is not inferior even to that obtained from optical images [6, 24]. Compared with object detection studies using SAR images, whose accuracies range from 88% to 94%, the results of this study are also expected to be useful when classifying solar panels [21, 26].

In this study, it was confirmed that PV panels can be classified with these algorithms not only from existing high-resolution optical images (e.g., drone images) but also from SAR images. In addition, the results of applying the two AI algorithms to the SAR images were compared. YOLOv3 was found to classify PV panels more accurately than the newer YOLOv5, indicating that YOLOv3 is suitable for the purpose of this study, namely classifying panels over a large area. Based on these results, selecting an AI training dataset and algorithm suited to the research purpose is considered important.

5. Conclusion

The purpose of this study was to detect PV panels scattered over a wide area for the waste management and monitoring of PV panels. Previous studies related to PV panel detection have mostly focused on local areas, detecting individual panels or panels in a small area. In this study, SAR satellite imagery was used to enable detection over a wide area and to compensate for the shortcomings of optical imaging. Synthetic aperture radar (SAR) imaging has two advantages over optical imaging. First, because SAR actively emits microwaves, images can be acquired regardless of day or night. Second, because the microwaves are transmitted through clouds, images can be produced without being affected by weather conditions, which is a strength for real-time and continuous detection and monitoring [25, 26]. These advantages make SAR imagery well suited to continuous detection and monitoring of solar panels over a large area.

PV technology is receiving attention as a feasible renewable energy generation method in the face of climate change. However, it is not environmentally benign, as panel waste and other issues can contaminate the environment. In addition, PV panels are scattered over wide areas, and there are diverse types and densities of panels in use. Hence, it is necessary to develop technologies to efficiently monitor PV panels. This study developed a method for detecting PV panels by efficiently monitoring a large area using SAR remote sensing images and using machine learning for object recognition. The study area was the southern coast of Korea, which has many islands with PV panels of various sizes and shapes.

We attempted to improve methods of classifying PV panels from existing optical images. To this end, SAR GRD images were used for a rough classification of PV panels in a large area, and AI algorithms were applied for a more precise analysis. Our findings will enable more efficient monitoring of PV panels, which are expected to be used progressively more in the future. In addition, our results could help inform proactive responses to environmental problems related to PVs, including PV panel waste and panels swept away by natural disasters.

This study is significant in that detection results were derived using C-band SAR images and in that the images were linked with AI algorithms. However, the study has the following limitations, which should be addressed in future research. First, the AI component should be extended to semantic segmentation, an image segmentation approach; the object detection model used here needs to be expanded to semantic segmentation to enable analyses such as a more quantitative estimation of panel area. Second, the model's performance should be evaluated on relatively small-scale PV panels, because this study was conducted only on large-scale PV panels over a wide area.

Data Availability

The satellite image needed for the study was acquired from https://scihub.copernicus.eu/dhus/#/home.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was conducted at the Korea Environment Institute (KEI) with support from the project “Development of Optimization Techniques for Reducing Heat Wave Considering Urban Environment” of the National Disaster Management Research Institute (NDMI), funded by the Korea Ministry of the Interior and Safety (MOIS) (2022-002(R)). This research was also supported by a grant from the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2018R1D1A1B07041203 and 2022-034(R)).