Abstract

Solar energy is the feedstock for various renewable energy applications; thus, knowledge of the intensity of the incident solar irradiance is essential for monitoring the performance of such systems. The major unpredictable factor in determining solar irradiance, and hence the performance of solar systems, is the presence of clouds in the sky. So far, researchers have proposed several models to correlate solar irradiance to cloud coverage and cloud type. The present work describes the development of a simple method for cloud detection and computation of short-term cloud motion. The minimum accuracy of the model was 95% for the prediction of the cloud location seven timesteps in advance with only three cloud images processed. When the dimensions of the cloud were included in the accuracy calculation, the minimum accuracy was 88%.

1. Introduction

During the past few years, there has been major progress in the use of solar technologies, especially in renewable energy systems (RESs) for the production of electricity. Such technologies include residential and large-scale photovoltaic (PV) parks [1], solar troughs and solar tower power plants, solar dish technologies, and the upcoming construction of commercial solar updraft towers [2]. Still, solar RESs have not yet managed to compete with fossil fuels, mainly due to their dependency on the sun’s irradiance, since in most cases, especially in PV parks, there is no affordable energy storage option. The continuous development of technologies that use solar energy leads to the need for accurate knowledge of the amount of the incident solar irradiance on the surface of the Earth. This will enable one to (1) predict the output energy from such systems, (2) regulate the system, (3) better design the system and the possible energy storage, (4) perform resource assessment for the specific area, and finally (5) increase the profit margins.

There are numerous models for the computation of solar radiation, ranging from complicated computer algorithms to very simple empirical relations that do not require any meteorological parameter as an input [3, 4]. Several researchers have proposed various models to calculate the solar irradiance, either ones based on the Angstrom equation [5] or other more complex models based on various parameters. Of these parameters, the one with the most profound effect on solar irradiance variations is cloudiness, because although it can be predicted, the spatial and temporal resolution of the predictions is very low, resulting in uncertainty in solar energy generation prediction, especially for solar power systems without energy storage, such as PV parks [6, 7].

Apart from cloudiness, all other parameters that affect solar irradiance can be computed with very good accuracy using the existing numerical forecasting methods [3, 4, 8]. The influence of clouds on irradiance is due to reflection and absorption of the irradiance by cloud particles and depends strongly on the volume, shape, thickness, and composition of the clouds [4].

There are three scenarios for the state of the sky regarding clouds: clear sky, overcast sky, and partially cloudy sky. For the first two scenarios, the numerical models along with the predictions of numerical weather prediction (NWP) models can forecast the irradiance over a specified area [3, 4]. The third scenario is unpredictable, presenting sudden short-term variations of irradiance due to moving clouds, and the existing models cannot predict the local motion of clouds. Furthermore, when using satellite images, due to the very low temporal and spatial resolution of the satellite imagers [9] compared to the local effects of a cloud, the location of a cloud over the solar field of a solar power plant cannot be forecasted precisely, and the energy production of the system cannot be predicted. Thus, the local motion of clouds cannot be forecasted by existing models and can only be estimated by on-site ground observations [10] using ground-based cameras or all-sky cameras [10, 11].

This paper presents the results of our progress regarding the distinction of clouds from sky and the computation of the cloud trajectories. The equipment and methodology used are first presented; then the results are analysed, and finally the accuracy of our method is discussed.

2. Equipment

The equipment used in our research consists of a ground-based digital camera positioned on the roof of a building at the university campus in Limassol, Cyprus (34.675N, 33.045E).

The camera used was a Nikon D3100 digital single-lens reflex (DSLR), positioned at an angle of view of 30° to the horizontal. The camera was used for capturing pictures of the sky, as a portable, easy-to-use instrument, to perform preliminary modelling regarding cloud detection and computation of cloud trajectories. The imaging sensor of the camera is a 23.1 × 15.4 mm color CMOS (Nikon DX format), and the lens used was a Nikon 18–55 mm VR.

The camera was programmed to photograph the sky at scheduled time intervals. The pictures captured from the camera were stored on a computer server both as pictures and as video. The video was used for quick screening of the daily clouds, while the pictures were used for further processing regarding cloud distinction and classification.

The stored images were then processed by an algorithm developed in C++.

3. Methodology: Forecast of Cloud Motion

The methodology used in our research regarding the estimation of cloud motion consists of three subalgorithms applied to every picture taken from the camera. The first is the preprocessing cloud detection algorithm, where the pixels depicting clouds are distinguished from the pixels depicting sky, and, then, the cloudy pixels are separated into individual clouds. The second algorithm computes the location of the cloud in the picture and the statistical features that determine the dispersion of the cloud pixels in the picture. The third algorithm deals with the computation of the cloud trajectories for the clouds detected at the previous step and the estimation of their future location.

3.1. Cloud Detection Algorithm

The technique used to distinguish clouds from the sky is a modification of the red-to-blue ratio (R/B) threshold technique, where the picture taken by the camera is divided, pixel by pixel, into clear and cloudy regions utilizing the red and blue pixel values. The principle of the technique is that the different spectral colors that constitute sunlight are scattered differently by the particles of the sky. In a cloud-free atmosphere, which appears blue to our eyes, more light within the blue spectrum is scattered by gas molecules, while clouds, consisting of water and ice particles, scatter the same amount of the blue and red spectrum and appear colorless (ranging from white to dark grey) to our eyes. Thus, clear-sky sections of an image have relatively lower red pixel values compared to cloudy sections, and thus they can be distinguished [10, 12]. When a picture is processed, a red-to-blue threshold is used to distinguish between clear and cloudy pixels. The exact value of the separating threshold depends on the camera and the atmospheric conditions, so it has to be determined empirically. Furthermore, the threshold should not be the same across the entire image but must be determined as a function of the relative position of the pixels towards the horizon and the sun [11, 13].

The pictures taken by the camera are color images consisting of the three main red, green, and blue (RGB) components. Using image-processing techniques, the three color components of the initial RGB picture were isolated into three different images, that is, three grey-level images of the corresponding red, blue, and green layers of the original picture. The intensity of grey (ranging from 0 to 255) of every pixel of the generated pictures represents the amount of the specific color at the same pixel of the original image. Subsequently, the values of the blue and red pictures at the same pixel were compared. In our research, we used the modified R/B threshold method, where instead of computing the ratio of red to blue, the criterion used was the difference of blue minus red (B−R threshold), as proposed by Heinle et al. [14]. If the difference in intensity of a pixel between blue and red is less than the threshold, the pixel was classified as cloudy; otherwise, it was classified as sky. A binary image of the same dimensions as the original image was then created, where every pixel classified as cloudy had a value of 1 and every pixel classified as sky had a value of 0.
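As an illustration, a minimal C++ sketch of this classification step follows; the data layout and names (RgbPixel, classifyCloudPixels) are ours for illustration and are not taken from the authors' implementation.

```cpp
// Minimal sketch of the B-R threshold classification (illustrative names,
// not the authors' actual code). A pixel is classified as cloudy when the
// blue-minus-red difference falls below the empirically chosen threshold.
#include <cstddef>
#include <cstdint>
#include <vector>

struct RgbPixel { std::uint8_t r, g, b; };

// Returns a binary mask with the same number of pixels as the input image:
// 1 = cloudy pixel, 0 = sky pixel.
std::vector<std::uint8_t> classifyCloudPixels(const std::vector<RgbPixel>& image,
                                              int threshold /* e.g., 60 */)
{
    std::vector<std::uint8_t> mask(image.size());
    for (std::size_t i = 0; i < image.size(); ++i) {
        int diff = static_cast<int>(image[i].b) - static_cast<int>(image[i].r);
        // A small B-R difference means the pixel is nearly colorless, i.e., cloud.
        mask[i] = (diff < threshold) ? 1 : 0;
    }
    return mask;
}
```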

Afterwards, the picture was further processed to extract the characteristics of the clouds. At first, the cloud coverage of the image was defined as the ratio of the cloudy pixels over the total number of pixels in the image. Then, the pixels classified as cloudy were further processed taking into account their neighboring pixels, and each group of neighboring cloudy pixels enclosed in the same contour was considered to be an individual cloud. The detected clouds of every picture were numbered, and further calculations were carried out on each of them in order to define the designated cloud characteristics (spectral and textural features [15, 16]) which are used for the identification of each cloud in the subsequent pictures.
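The text does not specify how neighboring cloudy pixels are grouped; one plausible realization, sketched below under that assumption, is a 4-connected breadth-first flood fill over the binary mask, shown here together with the cloud-coverage ratio (function names are ours).

```cpp
// Illustrative sketch (assumed implementation): cloud coverage and separation
// of the binary mask into individual clouds with a 4-connected flood fill.
#include <cstddef>
#include <cstdint>
#include <queue>
#include <vector>

// Cloud coverage: ratio of cloudy pixels over the total number of pixels.
double cloudCoverage(const std::vector<std::uint8_t>& mask)
{
    std::size_t cloudy = 0;
    for (std::uint8_t v : mask) cloudy += v;
    return static_cast<double>(cloudy) / static_cast<double>(mask.size());
}

// Labels each group of neighboring cloudy pixels with a distinct id (1, 2, ...);
// sky pixels keep label 0. Returns the number of individual clouds found.
int labelClouds(const std::vector<std::uint8_t>& mask, int width, int height,
                std::vector<int>& labels)
{
    labels.assign(mask.size(), 0);
    int nextLabel = 0;
    for (std::size_t start = 0; start < mask.size(); ++start) {
        if (mask[start] != 1 || labels[start] != 0) continue;
        ++nextLabel;                       // a new, not yet visited cloud
        std::queue<std::size_t> frontier;
        frontier.push(start);
        labels[start] = nextLabel;
        while (!frontier.empty()) {        // breadth-first flood fill
            std::size_t p = frontier.front();
            frontier.pop();
            int px = static_cast<int>(p % width);
            int py = static_cast<int>(p / width);
            const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
            for (int k = 0; k < 4; ++k) {
                int nx = px + dx[k], ny = py + dy[k];
                if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
                std::size_t q = static_cast<std::size_t>(ny) * width + nx;
                if (mask[q] == 1 && labels[q] == 0) {
                    labels[q] = nextLabel;
                    frontier.push(q);
                }
            }
        }
    }
    return nextLabel;
}
```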

3.2. Computation of Cloud Center of Gravity

All the dynamical and microphysical properties of a cloud are considered to be applied to a single point in space that represents the location of the equivalent cloud, so that the short-term cloud movement can be calculated from sequential pictures, taken at scheduled time intervals, using kinematic equations. This point is called the center of gravity (COG) of the cloud and is defined from the location of the cloud in the sky plane, in Cartesian or polar coordinates, in a two-dimensional space. The COG is the point in space at which the total mass of the cloud is considered to be concentrated and can be treated as the equivalent of the center of mass in classical physics, at which external forces may be applied. Therefore, the coordinates of the COG result from the average of the positions of the cloud elements (pixels), each weighted by the mass of the element, over the total mass of the system (the sum of the values of every pixel). In the generated binary image, the weight of every pixel is equal to 1, and the total mass of the cloud is equal to the number of pixels that form the cloud, while in the generated RGB images the weight is equal to the pixel value for every cloudy pixel. Since clouds are open-shaped or hollow objects, the COG is not identical to the geometrical center, and it can lie outside the contour of the cloud [17].
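A minimal sketch of the COG computation on the labeled binary image follows (illustrative names, reusing the label image from the previous sketch); for the RGB case, the constant weight of 1.0 would simply be replaced by the pixel value, as described above.

```cpp
// Sketch of the COG as the weighted mean of the cloud pixel coordinates.
// In the binary image every cloudy pixel has weight 1; in a grey-level
// image the pixel value would serve as the weight instead.
#include <cstddef>
#include <vector>

struct Point { double x, y; };

Point cloudCog(const std::vector<int>& labels, int width, int height, int cloudLabel)
{
    double sumX = 0.0, sumY = 0.0, totalMass = 0.0;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (labels[static_cast<std::size_t>(y) * width + x] != cloudLabel) continue;
            const double w = 1.0;  // binary image: unit mass per cloudy pixel
            sumX += w * x;
            sumY += w * y;
            totalMass += w;
        }
    }
    return { sumX / totalMass, sumY / totalMass };  // assumes a non-empty cloud
}
```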

Apart from the coordinates of the COG of the cloud, the statistical features that characterize the cloud have to be calculated. Such features are the length and height of the cloud (in pixels), the spread that represents the dispersion of the cloud elements about the COG (standard deviation), the effective area of the cloud (defined as the product of the x-axis and y-axis spreads), and the aspect ratio of the cloud (defined as the ratio of the x-axis over the y-axis spread). The technique can be expanded not only to the mass of the cloud but to any textural or spectral feature of the cloudy pixels, providing the dynamical evolution of the cloud in time [18].
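These features follow directly from the definitions above; a short sketch (illustrative names, reusing the labeled image and the COG from the previous sketches) is:

```cpp
// Sketch of the statistical features: spread (standard deviation of the cloud
// pixels about the COG per axis), effective area (product of the two spreads),
// and aspect ratio (x-axis spread over y-axis spread).
#include <cmath>
#include <cstddef>
#include <vector>

struct CloudFeatures { double spreadX, spreadY, effectiveArea, aspectRatio; };

CloudFeatures cloudFeatures(const std::vector<int>& labels, int width, int height,
                            int cloudLabel, double cogX, double cogY)
{
    double varX = 0.0, varY = 0.0, n = 0.0;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            if (labels[static_cast<std::size_t>(y) * width + x] == cloudLabel) {
                varX += (x - cogX) * (x - cogX);
                varY += (y - cogY) * (y - cogY);
                n += 1.0;
            }
    const double spreadX = std::sqrt(varX / n);
    const double spreadY = std::sqrt(varY / n);
    return { spreadX, spreadY, spreadX * spreadY, spreadX / spreadY };
}
```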

3.3. Computation of the Cloud Trajectories

The principle used to estimate the cloud trajectories is based on the assumption that the cloud is equivalent to a uniform object in a rectilinear trajectory. The cloud’s motion is caused by advection and is considered not to be affected by the variation of the cloud’s characteristics in time, the partition of a mother cloud into smaller clouds, or the merging of small clouds. The short-term cloud motion was calculated from a stack of sequential pictures taken by the camera at scheduled time intervals of 1 minute [19]. An individual cloud was isolated, and the COG of that cloud was recorded in every picture. Then, knowing the displacement of the cloud and the time-lapse between the pictures, the cloud trajectories for the x-axis and y-axis were calculated using the equations of motion from kinematics. Since the motion of clouds can be approximated as a smooth deformation with local gradual variations in velocities [20], the derived equations were of 2nd-degree polynomial form. The 2nd-degree polynomial form corresponds to clouds moving with variable speed at constant acceleration, and at least three sequential pictures are necessary for the computations.
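The paper does not state how the polynomial coefficients were obtained; a natural choice, sketched below as an assumption, is a least-squares fit of p(t) = a t² + b t + c per axis, solving the 3 × 3 normal equations with Cramer's rule (with exactly three pictures the fitted curve passes through all three points).

```cpp
// Sketch (assumed implementation, not the authors' code): least-squares fit
// of a 2nd-degree polynomial p(t) = a*t^2 + b*t + c to the COG coordinates
// of one axis. Requires at least three samples with distinct times t.
#include <cstddef>
#include <vector>

struct Quadratic { double a, b, c; };

static double det3(const double m[3][3])
{
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

Quadratic fitQuadratic(const std::vector<double>& t, const std::vector<double>& pos)
{
    double S[5] = {0, 0, 0, 0, 0};   // S[k] = sum of t^k
    double R[3] = {0, 0, 0};         // R[k] = sum of pos * t^k
    for (std::size_t i = 0; i < t.size(); ++i) {
        double tk = 1.0;
        for (int k = 0; k <= 4; ++k) { S[k] += tk; tk *= t[i]; }
        R[0] += pos[i];
        R[1] += pos[i] * t[i];
        R[2] += pos[i] * t[i] * t[i];
    }
    // Normal equations M * (a, b, c)^T = (R2, R1, R0)^T, solved by Cramer's rule.
    double M[3][3]  = {{S[4], S[3], S[2]}, {S[3], S[2], S[1]}, {S[2], S[1], S[0]}};
    double Ma[3][3] = {{R[2], S[3], S[2]}, {R[1], S[2], S[1]}, {R[0], S[1], S[0]}};
    double Mb[3][3] = {{S[4], R[2], S[2]}, {S[3], R[1], S[1]}, {S[2], R[0], S[0]}};
    double Mc[3][3] = {{S[4], S[3], R[2]}, {S[3], S[2], R[1]}, {S[2], S[1], R[0]}};
    const double d = det3(M);
    return { det3(Ma) / d, det3(Mb) / d, det3(Mc) / d };
}
```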

At first, the equation of motion of the cloud over the horizontal axis was estimated using the COG from the first three pictures. Using the derived equation, the location of the COG of the cloud was forecasted for the next time steps. Next, this procedure was repeated using an additional known position of the cloud (i.e., the first four pictures were used, then the first five, and so on) in order to estimate the equation of motion. This procedure was thus repeated, adding one more picture at a time. At each iteration, the future location of the COG was estimated based on the resulting equation. The aim of this procedure was the prediction of the future location of the COG using as few pictures (known time-lapses) as possible and as far ahead in time as possible (forecasted time-lapses). The same procedure was then repeated for the vertical axis.
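The iterative procedure then reduces to a short loop, reusing fitQuadratic from the sketch above. The COG coordinates below are hypothetical numbers for illustration only; they are not the values of Table 1.

```cpp
// Usage sketch of the iterative forecasting procedure for the horizontal axis
// (the vertical axis is handled identically). All coordinates are hypothetical.
#include <cstddef>
#include <cstdio>
#include <vector>

int main()
{
    // Hypothetical x-coordinates of the COG at 1-minute timesteps 0..9.
    const std::vector<double> t    = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
    const std::vector<double> xCog = {804, 840, 875, 912, 951, 992,
                                      1030, 1069, 1107, 1147};

    const double tForecast = t.back();  // forecast the COG of the 10th picture
    for (std::size_t n = 3; n < t.size(); ++n) {
        // Fit on the first n known positions (n = 3, 4, ..., 9).
        std::vector<double> tKnown(t.begin(), t.begin() + n);
        std::vector<double> xKnown(xCog.begin(), xCog.begin() + n);
        Quadratic q = fitQuadratic(tKnown, xKnown);
        double xPred = q.a * tForecast * tForecast + q.b * tForecast + q.c;
        std::printf("using %zu pictures: forecast x = %.1f (real x = %.1f)\n",
                    n, xPred, xCog.back());
    }
    return 0;
}
```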

3.4. Accuracy of the Method

The forecasted location of a cloud was compared to the true location of the COG, as derived from the pictures taken, in order to estimate the accuracy of the method. The accuracy (1) was estimated from the distance between the forecasted COG of the cloud and the true COG of the cloud, normalized by the position of the true COG. The true COG of the cloud was derived from the pictures taken by the camera:

$$a_1 = 1 - \frac{\sqrt{(x_f - x_r)^2 + (y_f - y_r)^2}}{\sqrt{x_r^2 + y_r^2}} \qquad (1)$$

where $a_1$ is the accuracy, $(x_f, y_f)$ are the coordinates of the forecasted COG, and $(x_r, y_r)$ are the coordinates of the real COG for the specific time-lapse.

The drawback of the above formula is that it depends on the location of the cloud, which affects the denominator, resulting in different errors for clouds at different locations on the picture, even if the distance between the forecasted and real COG is the same. Furthermore, this formula does not include the size of the cloud as a parameter.

This additional parameter captures the error in the shading of the cloud over the area under consideration, which is more important than the error in the location of the COG alone. Thus, the same error in the COG is more significant for smaller clouds than for bigger ones. If the error in the COG is larger than the dimensions of the cloud, the forecasted COG is totally off-position, and the shade of the cloud does not affect the area under consideration. This does not apply when the error in the COG is smaller than the dimensions of the cloud, in which case part of the cloud is shading the area under consideration.

Therefore, the accuracy (2) is proposed as one minus the distance between the forecasted COG of the cloud and the true COG of the cloud over the dimensions of the cloud. The dimensions of the cloud were derived from the pictures taken by the camera:

$$a_2 = 1 - \frac{\sqrt{(x_f - x_r)^2 + (y_f - y_r)^2}}{\sqrt{l^2 + h^2}} \qquad (2)$$

where $a_2$ is the accuracy, $l$ is the horizontal dimension of the cloud (length), and $h$ is the vertical dimension (height) for the specific time-lapse.
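Both measures reduce to a few lines of code, following equations (1) and (2) as written above (the helper names are ours):

```cpp
// Sketch of the two accuracy measures of equations (1) and (2).
#include <cmath>

// a1: one minus the forecast error relative to the position of the real COG.
double accuracyA1(double xf, double yf, double xr, double yr)
{
    const double dist = std::hypot(xf - xr, yf - yr);
    return 1.0 - dist / std::hypot(xr, yr);
}

// a2: one minus the forecast error relative to the cloud dimensions
// (length l and height h).
double accuracyA2(double xf, double yf, double xr, double yr, double l, double h)
{
    const double dist = std::hypot(xf - xr, yf - yr);
    return 1.0 - dist / std::hypot(l, h);
}
```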

4. Results

For demonstrating our methodology, we have used ten sequential photographs of a cloud, taken at one-minute intervals. The camera was intentionally faced towards the north to avoid the use of neutral density filters, which would protect the camera from lens flare, or the use of sun-blocking devices, which would display the sun circle as a black area in the image.

Furthermore, if the camera was faced towards the Sun, the area around the sun circle would appear whiter and brighter than the rest of the sky, and, thus, it would incorrectly be classified by the cloud detection algorithm as “cloudy” even if it was not [11, 13].

Using the developed algorithm, the cloudy pixels were distinguished from the non-cloudy ones, and the corresponding binary images were created, where the cloudy pixels are shown in white and the non-cloudy pixels are shown in black. The threshold value used was 60. This value was applied to all the pixels of the image, since the camera was faced towards the north and the field of view of the camera did not include the horizon or the sun. For every picture, the individual clouds were separated and identified. A cloud that was present in all the photos was selected for demonstrating the methodology.

Figure 1 presents the initial pictures, where the preselected cloud is encircled in a red contour, and the binary representation of the distinction of clouds (white) from sky (black) for the ten timesteps, each one-minute apart. For each picture of the stack, the COG and the statistical features of the selected cloud were calculated. Furthermore, the spectral and textural features of the cloud were computed in order to verify the presence of the same cloud in every sequential picture.

Table 1 presents the results of the calculations for the COG, the cloud pixels, and the dimensions of the selected cloud in every picture. Table 2 presents the results of the calculations for the spread of the selected cloud in every picture. As seen from these data, the COG varied with time, while the remaining data indicate that the cloud grows with time and changes shape from a “thin ellipse” towards a “circle.”

From the displacement of the COG of the cloud in the image sequence and the evaluation of the features of the cloud, the equation of motion of the cloud was then calculated. Using the resulting equation, the COG of the cloud in future timesteps was forecasted up to the penultimate picture, and the procedure was repeated using one more known picture. Then, the forecasted positions of the cloud’s COG were compared to the actual position of the cloud, as derived from the original pictures.

Figure 2 presents the accuracy of our methodology for tracking cloud motion. The vertical axis of the graph represents the accuracies $a_1$ and $a_2$ as described in Section 3. The numerator of the accuracy for both formulas is computed from the forecasted location of the COG in the 10th photograph compared to the real location of the COG. The denominator for $a_1$ is based on the position of the COG of the cloud in the tenth picture, whereas the denominator of $a_2$ is based on the dimensions of the cloud in the tenth picture. The position of the cloud was located using Cartesian coordinates, where the point (0, 0) was located at the upper left corner of the picture. The horizontal axis represents the last known photograph that was used to derive the equation of motion. For example, at timestep “−7,” 3 images were used to calculate the COG of the cloud 7 time steps in advance. Likewise, at timestep “−6,” 4 images were used to calculate the COG of the cloud 6 time steps in advance, and so on.

As seen in the graph, the derived equation simulates the motion of the cloud very precisely, although the accuracy exhibits a strong dependence on the forecast period and the number of recorded images. As the forecast period decreases and the number of recorded images increases, the accuracy in the estimation of the location of the cloud increases. More precisely, the accuracy $a_1$ is higher than 95% when processing only 3 images and forecasting 7 time steps in advance of the “predicted” image, increasing to 99% for three steps in advance.

Correspondingly, by introducing the size of the cloud into the formula, the accuracy $a_2$ is always higher than 88% from point −7 onwards, which means that the predicted location of the cloud’s elements, when taking its size into consideration, is always within the spread of the cloud.

5. Conclusions and Future Work

This paper presents the methodology for short-term prediction of cloud motion, along with a demonstration of the first results of our algorithm. A ground-based camera installed on the rooftop of a university building was programmed to photograph the sky at scheduled time intervals. The pictures of the sky are preprocessed by a cloud detection algorithm that separates cloudy from non-cloudy pixels and generates a binary equivalent of every picture. The technique proposed in this study for the distinction is the blue-red threshold technique, which distinguishes clouds from sky by comparing the difference of the blue and red values of the image to a threshold value. For every cloud in the pictures, the spectral and textural features were calculated. Then, the algorithm for the calculation of cloud trajectories was applied to the generated pictures. The COGs of the clouds were calculated, and the equations of motion were generated. Using the generated equations of motion, the future position of the cloud was forecasted, and for the verification of the accuracy of the algorithm, the forecasted position of the cloud was compared to the actual position measured in the final picture. Furthermore, the dimensions of the cloud were integrated into our error analysis in order to evaluate the effect of the error in the forecasted location of the COG on the shade of the cloud over a specified area.

A demonstration of the methodology was presented for ten pictures of the sky taken at one-minute intervals. As shown in the results, the algorithm exhibits a strong dependence on the forecast period and the number of recorded images. As the forecast period decreases and the number of recorded images increases, the accuracy in the estimation of the location of the cloud increases. The minimum accuracy of the forecasted COG towards the real COG was 95% at point −7, while when including the dimensions of the cloud in the calculations, the minimum accuracy was 88%.

We believe that the developed methodology will provide a useful tool for researchers who want to focus on the effect of small local clouds on the energy production of their solar RES.