Journal of Solar Energy
Volume 2015, Article ID 410684, 13 pages
Research Article

Spatial Approach of Artificial Neural Network for Solar Radiation Forecasting: Modeling Issues

1School of Engineering, Indian Institute of Technology Mandi (IIT Mandi), Room No. 106, Mandi Campus, Mandi 175005, India
2Mechanical Engineering Department, Indian Institute of Technology Roorkee (IITR), Roorkee 247667, India
3School of Computing and Electrical Engineering, Indian Institute of Technology Mandi (IIT Mandi), Mandi 175005, India

Received 25 September 2014; Revised 5 December 2014; Accepted 18 December 2014

Academic Editor: Jayasundera M. S. Bandara

Copyright © 2015 Yashwant Kashyap et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


The design of the neural network architecture rests on the choice of the number of neurons, the input delays, and the activation functions. The proposed model was built and tested with Indian global horizontal irradiation (GHI) meteorological data, and the results are assessed with several statistical error measures. The aim is to verify how accurately the ANN architecture can simulate hourly radiation data. An ANN model is an efficient technique for estimating radiation from different meteorological databases. In this paper, nine spatially neighbouring locations and 10 years of data are used to assess the neural network; in total, 90 different inputs are compared on a customized ANN model. The results show flexibility with respect to the spatial orientation of the model inputs.

1. Introduction

A utility working in the field of solar energy production must develop its forecasting ability in diverse climatic situations [1, 2]. Unusual fluctuations (up to 100%) occur in the direct and diffuse incident irradiation due to the presence of clouds [3, 4].

In recent years, artificial neural networks (ANNs) have been used for forecasting and regression of solar radiation at different latitudes and under different climate conditions. Since their development, no exact method has been found to resolve the stability and uncertainty of delayed networks; the study of delayed-network stability has therefore gained additional importance, and awareness of delayed neural networks has grown. It is necessary for the ANN to operate in a globally stable region. A number of methods have been developed to determine the optimum number of delays, especially by means of cross validation [5–11].

Numerous researchers have proposed techniques for setting the number of neurons in an artificial neural network. In 2011, Gonzalez-Carrasco et al. compared neural network approaches for dealing with limited data. In 1998, Fujita presented a statistical estimation of the number of hidden units of a feedforward neural network. In 1997, Tamura and Tateishi examined the capabilities of four-layer networks against three-layer ones. In 2003, Zhang et al. derived bounds on the number of hidden neurons in three-layer binary networks. In 1995, Li et al. proposed an estimation theory for the number of hidden units in time series prediction [12–16].

Much important work has been done on the use of activation functions. Jordan used the logistic function, the standard choice for the posterior probability in binary classification [17]. Yao and Liu evolved complete network structures with two unusual activation functions, sigmoid and Gaussian basis [18]. Sopena et al. reported a number of experiments on multilayer feed-forward networks using a sine activation function [19, 20]. The difficulty with transfer functions, however, is that there is no theoretical criterion for their selection [21].

The literature assessment demonstrates that ANNs have not been used for spatial-domain analysis. This paper considers such special characteristics of the spatial domain with solar radiation time series data. The associated study uses a set of spatial matrices and 10 years of time series, so a spatiotemporal perspective underlies the ANN analysis. Results are evaluated with standard statistical errors. All spatial inputs are tested with delay, number of neurons, and activation function as modeling factors.

2. Data Processing

2.1. Data Collection

The extraterrestrial solar radiation is about 1360 W m^-2, and the radiation reaching the surface comprises the diffuse horizontal irradiation (DHI) and the direct normal irradiation (DNI). These two terms are used to calculate the global horizontal irradiation (GHI) as follows:

GHI = DNI * cos(theta_z) + DHI,

where theta_z is the solar zenith angle [22]. Gridded data for India at 10 km spatial resolution [23] are available, derived from satellite imagery at hourly intervals from January 2003 to June 2012. This work covers the accumulated GHI over latitudes 31.85°–33.65° and longitudes 74.65°–78.45° in northern India, as shown in Figure 1. The data are placed on a rectangular grid of 10 × 10 km² cells. The GHI (in time zone UTC+5.5) at the central location of the proposed area is plotted in Figure 2 for 5,000 hours of 2008.
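The GHI relation above can be sketched in a few lines; the function name and the sample irradiance values here are illustrative, not taken from the data set:

```python
import math

def global_horizontal_irradiance(dni, dhi, zenith_deg):
    """GHI = DNI * cos(theta_z) + DHI, with theta_z the solar zenith angle."""
    theta_z = math.radians(zenith_deg)
    # The direct contribution vanishes when the sun is below the horizon.
    direct = dni * max(math.cos(theta_z), 0.0)
    return direct + dhi

# Illustrative values: DNI = 800 W/m^2, DHI = 120 W/m^2, zenith angle = 30 degrees
ghi = global_horizontal_irradiance(800.0, 120.0, 30.0)
```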

Figure 1: Spatial distribution of solar radiation and data points.
Figure 2: Global horizontal irradiation plot (GHI) of 5000 hours.

2.2. Data Processing

An important data-mining step is to scale the inputs and targets of the ANN. Normalization is therefore applied using the mean and the standard deviation of the training data set, giving solar radiation data with zero mean and unit standard deviation:

x_n = (x - mu) / sigma,

where x_n, x, mu, and sigma are the normalized value, the raw value, and the mean and standard deviation of the training set, respectively.
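The normalization with training-set statistics can be sketched as follows (a minimal illustration, not the toolbox routine used in the paper):

```python
def zscore_normalize(values, mean=None, std=None):
    """Scale data to zero mean and unit standard deviation.
    When mean/std are omitted they are computed from `values`,
    which should be the training set."""
    if mean is None:
        mean = sum(values) / len(values)
    if std is None:
        std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / std for v in values], mean, std

train = [100.0, 200.0, 300.0]          # illustrative radiation values
normed, mu, sigma = zscore_normalize(train)
```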

The data sets are in the form of time series and must be interpreted in the spatial domain. The input data sets are first assigned physical positions by a topology function. A rectangular grid function is used here; a hexagonal or random topology could be used in the same way. The input data start in a rectangular grid such as the one shown in Figure 3. Assume that the spatial data form an array of nine different locations: input 1 has position (1, 1), input 2 has position (1, 2), input 3 has position (1, 3), input 4 has position (2, 1), and so forth. A three-dimensional topology of the spatial data set is shown in Figure 4. The center input has neighbourhoods of increasing diameter around it: a neighbourhood of diameter 1 includes the center and its immediate neighbours, and the neighbourhood of diameter 2 consists of the diameter-1 neighbourhood and its immediate neighbours. The rectangular topology function and all the neighbourhoods for a multiple-input map are characterized by an n-by-n matrix of distances.
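The rectangular topology and its growing neighbourhoods can be illustrated with a short sketch; the 3-by-3 grid and the Chebyshev-distance neighbourhood are assumptions matching Figure 3, not code from the study:

```python
def grid_positions(rows, cols):
    """Map 1-based input numbers to (row, col) positions in a rectangular grid."""
    return {k + 1: divmod(k, cols) for k in range(rows * cols)}

def neighbourhood(center, diameter, rows, cols):
    """Inputs within `diameter` grid steps (Chebyshev distance) of `center`."""
    pos = grid_positions(rows, cols)
    cr, cc = pos[center]
    return sorted(k for k, (r, c) in pos.items()
                  if max(abs(r - cr), abs(c - cc)) <= diameter)

# On a 3x3 grid the diameter-1 neighbourhood of the centre input (5)
# contains the centre and its immediate neighbours, i.e. all nine inputs.
nbrs = neighbourhood(5, 1, 3, 3)
```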

Figure 3: Spatial approach of the neural network distribution of solar radiation in 2D.
Figure 4: Spatial approach of the neural network distribution of solar radiation in 3D.

3. Artificial Neural Network (ANN)

3.1. Neural Networks for Hourly Solar Radiation

Artificial neural networks are processing systems with the ability to learn from information [24]. Figure 5 shows a classic representation of a neuron architecture, where x_i, w_i, b, v, y, and phi are the input signals, weights, bias, activation potential, output signal, and activation function, respectively. The neuron output is then given by

y = phi( sum_i w_i x_i + b ).

Such a network architecture is usually referred to as a multilayer neural network [25]. It is characterized by its topology and the values of the weights in each layer. The generalization ability of an artificial neural network is its capacity to reproduce the desired signals for different input signals and to capture the dynamics of the system [26].
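The neuron output equation can be written out directly; this is a minimal single-neuron sketch, with tanh as the default activation to match the tansig function used later in the paper:

```python
import math

def neuron_output(signals, weights, bias, activation=math.tanh):
    """y = phi(sum_i w_i * x_i + b): weighted inputs plus bias
    passed through the activation function."""
    v = sum(w * x for w, x in zip(weights, signals)) + bias  # activation potential
    return activation(v)

y = neuron_output([0.5, -0.2], [1.0, 2.0], 0.1)  # illustrative signals and weights
```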

However, determining the number of neurons in each layer is not trivial. Research articles have described cases where underfitting or overfitting occurs when too few or too many neurons are used in the network [16]. Numerous approaches have been tried, without full success, to find an appropriate method for computing the number of neurons in each layer [27]; nevertheless, due to their simplicity, such methods have been extensively utilized in time series analysis [28]. The ultimate outcome of an ANN is a group of weights relating the input variables through linear and nonlinear processes [29, 30]. In this paper, one output node is used in the outer layer for forecasting GHI. The accuracy of the various ANN models is compared to select the most accurate model of hourly solar radiation, as described below.

Custom Network. The network design starts from the special options offered by the toolbox. To construct a custom arrangement, one starts with an empty network and sets its properties as required, as shown in Figure 6. Numerous function properties can be set in many ways, as desired for the network architecture. In this section a small standard network with different spatial inputs is used. The network input accepts normalized radiation values in the range -1 to +1. The number of layers used for this network is two, initialized with the Nguyen-Widrow layer initialization method and trained with the Levenberg-Marquardt algorithm, as described below. Training a larger number of layers needs additional time. With only 2 layers in the solar radiation network, the default custom models can be trained for 1000 epochs, and the literature reports that the accuracy obtained with 2 layers is much higher than with more layers. Hence, the network with 2 hidden layers, which provides the highest precision, can be considered the most appropriate for this problem. The output layer learns to associate the output vectors with the connected target vectors with minimal mean squared error by adjusting weights and biases [29–32].

3.2. Neural Network Training

The Levenberg-Marquardt algorithm (LMA) delivers enhanced performance compared with classic back-propagation procedures. Following Newton's technique, the network update law is

w_{k+1} = w_k - [H + mu*I]^{-1} g,  with  H ≈ J^T J  and  g = J^T e,

where w, k, H, g, J, I, and mu are the network weight matrix, the iteration number, the Hessian matrix, the gradient, the Jacobian matrix, the identity matrix, and a scalar damping factor, respectively, and e is the vector of network errors [25].
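A scalar sketch of one Levenberg-Marquardt update, for a toy model y = w * x; this is illustrative only, as the paper uses the toolbox implementation on the full network:

```python
def lm_step(w, xs, ts, mu):
    """One Levenberg-Marquardt update for the scalar model y = w * x.
    The Jacobian of the residuals e_i = t_i - w * x_i is J_i = -x_i."""
    e = [t - w * x for x, t in zip(xs, ts)]
    J = [-x for x in xs]
    H = sum(j * j for j in J)                # J^T J approximates the Hessian
    g = sum(j * ei for j, ei in zip(J, e))   # gradient g = J^T e
    return w - g / (H + mu)                  # w_{k+1} = w_k - (H + mu I)^(-1) g

w = 0.0
for _ in range(20):   # targets follow t = 2 * x, so w should approach 2
    w = lm_step(w, [1.0, 2.0, 3.0], [2.0, 4.0, 6.0], mu=0.01)
```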

The proposed networks were trained with 70% of the available data, while 15% was used for validation and the remaining 15% for testing the trained network. Thereafter, the trained networks were used to forecast the last 100 days of data. The suggested ANNs forecast the solar radiation at the center point, position 5 in Figure 3 (in matrix terms), using data from the year 2012, and the forecast results were then compared with the measured data. The inputs from 9 spatial locations and 10 years of data (90 in total) were each verified separately on hourly radiation values, and all the suggested inputs were compared by the RMSE of the forecast solar radiation. These networks were also compared by their training RMSE.
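The 70/15/15 chronological split can be sketched as follows (the fractions follow the paper; the function itself is illustrative):

```python
def chronological_split(series, train_frac=0.70, val_frac=0.15):
    """Split a time series in order: 70% training, 15% validation,
    and the remainder for testing, preserving the temporal ordering."""
    n = len(series)
    i = int(n * train_frac)
    j = int(n * (train_frac + val_frac))
    return series[:i], series[i:j], series[j:]

train, val, test = chronological_split(list(range(100)))  # dummy hourly series
```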

4. Model Evaluation Criteria

Finally, the model has been selected based on the lowest forecasting error. The error can be estimated in several forms, such as the root mean square error (RMSE) and the mean absolute error (MAE):

RMSE = sqrt( (1/N) * sum_{i=1}^{N} (m_i - f_i)^2 ),   MAE = (1/N) * sum_{i=1}^{N} |m_i - f_i|,   (5)

where m_i represents the measured value at the forecast horizon, f_i is the forecast value, and N is the total number of test samples. This validation process defines the model accuracy and stops the iteration process of the ANN model [33].
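The two error measures in (5) can be restated directly in code, with illustrative data:

```python
def rmse(measured, forecast):
    """Root mean square error between measured and forecast series."""
    n = len(measured)
    return (sum((m - f) ** 2 for m, f in zip(measured, forecast)) / n) ** 0.5

def mae(measured, forecast):
    """Mean absolute error between measured and forecast series."""
    n = len(measured)
    return sum(abs(m - f) for m, f in zip(measured, forecast)) / n

m = [100.0, 200.0, 300.0]   # illustrative measured values
f = [110.0, 190.0, 300.0]   # illustrative forecasts
r, a = rmse(m, f), mae(m, f)
```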

4.1. Model Behavior with Delays

In this section, one input at a time, with its delay, was used out of the 90 different radiation data sets. The hidden layer was fixed at ten (10) neurons, while the input delay varied from one to thirty (30). For GHI, the false-alarm frequency was consistently low when training on the maximum amount of past data [34]. Each configuration was tested 30 times over time delays of 1 to 30, and the network was optimized for the minimum root mean square error (RMSE) on the training data set [28].
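The delay sweep amounts to a one-dimensional search; in this sketch the `train_and_score` callback is a hypothetical stand-in for the full training run described above:

```python
def best_delay(train_and_score, delays=range(1, 31)):
    """Try each input delay in 1..30 and keep the one with the minimum
    training RMSE. `train_and_score(delay)` stands in for training the
    ANN with that delay and returning its RMSE."""
    return min(delays, key=train_and_score)

# Hypothetical RMSE curve with its minimum at delay 23 (cf. Table 1)
chosen = best_delay(lambda d: (d - 23) ** 2 / 100.0 + 19.63)
```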

Figure 7 and Table 1 illustrate an evaluation of measured versus forecast values for the suggested ANNs, built on hourly averages of global radiation. Based on the radiation, the input numbers IN-10, IN-3, and IN-9 (from Table 1) had very small differences in the global forecast. IN-10 was superior, followed by IN-3, in terms of radiation forecasting, but both had comparable values. IN-90 was the poorest of the suggested networks, with the highest error in global radiation forecasting. The error starts at 32.23% RMSE at 5 delays and ends at 19.63% at 23 delays, and it remains within this range for all other inputs and delays.

Table 1: Radiation forecasting; spatial comparison of ANN models with time delay.

Table 1 summarizes the evaluation of the suggested models by the percentage performance of the MAE, MSE, and RMSE. These results show that the solar radiation forecast varies considerably depending on the ANN input. Forecasting is affected by the number of time delays applied to the different ANN inputs; the delay determines the convergence property of each input. Table 1 shows that some inputs converge very fast with minimum error, such as IN-51 with 0 time delays and about 23.89 percent testing RMSE. Similarly, IN-16 shows 23.29 percent testing RMSE but with a long time delay (29). The hourly correlation between two clear-sky days about one day apart is almost one, which is ideal for time series prediction over the same interval.

In terms of spatial analysis, the top five testing results show RMSE percentages of 19.63, 20.07, 20.17, 20.66, and 20.71 for neighbour positions 11(1), 13(7), 33(9), 12(4), and 21(2) with delays of 23, 27, 26, 7, and 8, respectively. In comparison, the worst results are 29.22, 29.63, 30.57, 31.93, and 32.23 for positions 22(5), 31(3), 22(5), 32(6), and 33(9) with delays of 23, 0, 6, 6, and 5, respectively. These results show no special pattern of spatial position, which may be due to the different delay correlations between inputs and outputs; notably, position 5 appears among the worst cases compared to the best performance of the neighbouring places. The temporal analysis, on the other hand, shows a clear pattern between input and output with respect to the year of the data: the top results in the table are from 2011, 2012, 2012, 2011, and 2010, while the worst are from 2005, 2003, 2003, 2003, and 2003. This clearly shows that the best results of the ANN model depend on data from years closer to the forecast period. The contribution of the delay is random: some inputs perform well with a higher delay and some with a lower delay, depending on the properties of the data with respect to time.

4.2. Model Behavior with Neurons

Table 2 demonstrates that the proposed neuron model offers superior results. The current technique controls the number of neurons by trial and error: it starts with a minimum number of neurons and increases the count to its maximum limit. The drawback is that this is time consuming and gives no guarantee of finding the right setting. For the 90 inputs, the search used a range of 10–300 neurons in steps of 10, targeting a minimized MAE value. Since the smallest RMSE indicates the accuracy of the estimation method at the local level or on a small number of data sets, the MAE indicates the global accuracy. In this case the delays obtained previously are used, along with the standard hyperbolic tangent sigmoid (tansig) transfer function, for the respective inputs.
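The trial-and-error neuron search described above can be sketched as a grid search; the scoring callback here is hypothetical:

```python
def best_neuron_count(train_and_score, counts=range(10, 301, 10)):
    """Train with each hidden-layer size from 10 to 300 in steps of 10
    and keep the size with the lowest MAE. `train_and_score(n)` stands
    in for a full training run returning the MAE."""
    scores = {n: train_and_score(n) for n in counts}
    return min(scores, key=scores.get)

# Hypothetical MAE curve with its minimum at 190 neurons (cf. Table 2)
chosen = best_neuron_count(lambda n: abs(n - 190) / 100.0 + 3.61)
```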

Table 2: Radiation forecasting; spatial comparison of ANN models with time delay and neurons.

The accuracy was tested using the training-set results in each case. Figure 8 and Table 2 show the results with all errors, performance coefficients, delays, and neuron counts for each input. Generalization capability is expected to increase as the number of neurons increases. In the solar radiation estimation problem, the precision of the output was 3.61, 4.63, and 3.98% in training, validation, and testing, respectively, when 190 neurons were used in IN-3 at delay 27. Comparable results were obtained for IN-4, IN-5, IN-7, and IN-8, with 4.08, 4.12, 4.17, and 4.34% testing accuracy at 190, 250, 130, and 40 neurons with 22, 1, 3, and 1 delays, respectively. In terms of spatial analysis, the top five results show testing MAE of 3.98, 4.08, 4.12, 4.17, and 4.34% for positions 13(7), 21(2), 22(5), 31(3), and 32(6) with 90, 190, 250, 130, and 40 neurons, respectively. This result shows that position alone does not give a clear picture of the spatial orientation of the radiation performance. The temporal analysis, on the other hand, shows a clear pattern between input and output with respect to the year of the data: the top results in the table are all from 2012, while the worst results, with testing MAE of 98.39, 99.64, 100.87, 101.42, and 102.08, are from 2003, 2003, 2004, 2003, and 2003 with 40, 160, 40, 10, and 160 neurons, respectively. As in the delay case, this clearly shows that the best results of the ANN model depend on data from closer years.

4.3. Model Behavior with Multiple Transfer Functions (Activation Functions)

In this study, the different activation functions shown in Figure 9 are used, with different numbers of iterations, to compare their performance on the radiation data. For every standard activation function, the number of neurons in the hidden layer is the one found in the neuron section for each input, as shown in Figure 8. Different performance parameters are presented in Table 3, and the interpretation of the results follows here. The delays from the first section and the neuron counts from the second section are combined to compare the inputs across the different activation functions. According to the first graph in Figure 10, all inputs are compared with the different activation functions, and each input behaves significantly differently with its own kind of function. The satlins activation function performed the most successfully for all the test parameters. However, the accuracy of the inputs was totally different between the training and testing cases; the most accurate training result (lowest MAE value) shows the inverse effect in the testing case.
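The transfer functions compared in Table 3 can be written down explicitly; the names follow the MATLAB Neural Network Toolbox convention that the paper uses, but the definitions below are the standard ones, not taken from the study's code:

```python
import math

ACTIVATIONS = {
    "tansig":   math.tanh,                              # hyperbolic tangent sigmoid
    "logsig":   lambda v: 1.0 / (1.0 + math.exp(-v)),   # logistic sigmoid
    "purelin":  lambda v: v,                            # linear
    "poslin":   lambda v: max(0.0, v),                  # positive linear (ReLU)
    "satlins":  lambda v: max(-1.0, min(1.0, v)),       # symmetric saturating linear
    "hardlim":  lambda v: 1.0 if v >= 0 else 0.0,       # hard limit
    "hardlims": lambda v: 1.0 if v >= 0 else -1.0,      # symmetric hard limit
}

# Evaluate each function at the same activation potential for comparison.
outputs = {name: f(0.5) for name, f in ACTIVATIONS.items()}
```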

Table 3: Radiation forecasting; spatial comparison of ANN models with time delay, neurons and transfer function.

This may be a case of over- or underfitting of the model to the data. The rest of the functions used in this study were not successful or accurate enough as a group. Table 3 shows the logsig function with IN-69 as the best activation function for training. However, the testing results show that the satlins activation function with IN-11 was much more accurate. This situation shows that the total mean absolute error (MAE) over the iterations cannot determine the network accuracy; hence, the real accuracy is obtained from the testing results. Table 3 shows the error for each activation function for the different inputs, which vary from 1 to 72 after removing the first two years (2003 and 2004) of data due to their poor performance. In terms of spatial analysis, the top five testing MAE results show 3.04, 3.08, 3.13, 3.25, and 3.29% for positions 12(4), 31(3), 23(8), 33(9), and 23(8) with transfer functions "satlins," "compet," "hardlims," "hardlim," and "purelin," respectively. In comparison, the worst results show testing MAE of 6.68, 6.69, 6.76, 6.9, and 7.31% for positions 32(6), 31(7), 11(1), 22(5), and 33(9) with transfer functions "tansig," "logsig," "hardlim," "poslin," and "compet," respectively. These results show no special pattern related to spatial position; even the target position 22(5) is not directly related in performance to the same input position 22(5), so it is important to consider the neighbouring positions in the modeling of the ANN. As there is no direct relation between input and output irrespective of position, the modeling needs expert supervision. The temporal analysis, on the other hand, shows a clear pattern between input and output with respect to the year of the data: the top results in the table are from 2011, 2012, 2012, 2012, and 2011, while the worst five are all from 2005. As in the delay and neuron cases, this clearly shows that the best results of the ANN model depend on data from much closer years. Beyond the best performances, the worst cases show that older years of data and lower neuron counts do not contribute to better performance.

5. Conclusion

This paper presents the modeling characteristics of artificial neural networks based on spatial features. The estimated model was built and tested on solar radiation data, and the results were evaluated with different statistical errors. The paper verifies the ability of an ANN to reproduce hourly global radiation forecasts accurately. Good estimation accuracy of the hourly solar radiation can be achieved by using conventional meteorological data over ten years. The model has been used extensively for this specific application; due to the dynamic nature of the problem, the modeling needs expert supervision. The study highlights the parameters that drive the progress of the ANN architecture: the delay, the neurons, and the corresponding transfer function for each spatial position (Figure 5). The results show a high degree of flexibility in the choice of different inputs and connected parameters for comparable accuracy.

Figure 5: Basic neuron network architecture.
Figure 6: Typical custom neuron architecture.
Figure 7: Radiation forecasting; spatial comparison of ANN models during training with time delay.
Figure 8: Radiation forecasting; spatial comparison of ANN models during training with time delay and neurons.
Figure 9: ANN models used with different transfer function.
Figure 10: Radiation forecasting; spatial comparison of ANN models during training with time delay, neurons, and transfer function.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


References

1. R. Marquez and C. F. M. Coimbra, "Intra-hour DNI forecasting based on cloud tracking image analysis," Solar Energy, vol. 91, pp. 327–336, 2013.
2. M. J. Ahmad and G. N. Tiwari, "Solar radiation models - a review," International Journal of Energy Research, vol. 35, no. 4, pp. 271–290, 2011.
3. V. Badescu, "Correlations to estimate monthly mean daily solar global irradiation: application to Romania," Energy, vol. 24, no. 10, pp. 883–893, 1999.
4. K. Bakirci, "Models of solar radiation with hours of bright sunshine: a review," Renewable and Sustainable Energy Reviews, vol. 13, no. 9, pp. 2580–2588, 2009.
5. T. Khatib, A. Mohamed, K. Sopian, and M. Mahmoud, "Assessment of artificial neural networks for hourly solar radiation prediction," International Journal of Photoenergy, vol. 2012, Article ID 946890, 7 pages, 2012.
6. D. Stathakis, "How many hidden layers and nodes?" International Journal of Remote Sensing, vol. 30, no. 8, pp. 2133–2147, 2009.
7. H. Zhang, Z. Wang, and D. Liu, "Global asymptotic stability of recurrent neural networks with multiple time-varying delays," IEEE Transactions on Neural Networks, vol. 19, no. 5, pp. 855–873, 2008.
8. L. Wang and X. Zou, "Convergence of discrete-time neural networks with delays," International Journal of Qualitative Theory of Differential Equations and Applications, vol. 2, pp. 24–37, 2008.
9. X.-G. Liu, R. R. Martin, M. Wu, and M.-L. Tang, "Global exponential stability of bidirectional associative memory neural networks with time delays," IEEE Transactions on Neural Networks, vol. 19, no. 3, pp. 397–407, 2008.
10. L. Prechelt, "Early stopping - but when?" in Neural Networks: Tricks of the Trade, vol. 7700 of Lecture Notes in Computer Science, pp. 53–67, Springer, Berlin, Germany, 2012.
11. R. Setiono, "Feedforward neural network construction using cross validation," Neural Computation, vol. 13, no. 12, pp. 2865–2877, 2001.
12. I. Gonzalez-Carrasco, A. Garcia-Crespo, B. Ruiz-Mezcua, and J. L. Lopez-Cuadrado, "Dealing with limited data in ballistic impact scenarios: an empirical comparison of different neural network approaches," Applied Intelligence, vol. 35, no. 1, pp. 89–109, 2011.
13. O. Fujita, "Statistical estimation of the number of hidden units for feedforward neural networks," Neural Networks, vol. 11, no. 5, pp. 851–859, 1998.
14. S. I. Tamura and M. Tateishi, "Capabilities of a four-layered feedforward neural network: four layers versus three," IEEE Transactions on Neural Networks, vol. 8, no. 2, pp. 251–255, 1997.
15. Z. Zhang, X. Ma, and Y. Yang, "Bounds on the number of hidden neurons in three-layer binary neural networks," Neural Networks, vol. 16, no. 7, pp. 995–1002, 2003.
16. J.-Y. Li, T. W. Chow, and Y.-L. Yu, "The estimation theory and optimization algorithm for the number of hidden units in the higher-order feedforward neural network," in Proceedings of the IEEE International Conference on Neural Networks, IEEE, 1995.
17. M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul, "Introduction to variational methods for graphical models," Machine Learning, vol. 37, no. 2, pp. 183–233, 1999.
18. X. Yao and Y. Liu, "A new evolutionary system for evolving artificial neural networks," IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 694–713, 1997.
19. J. M. Sopena, E. Romero, and R. Alquezar, "Neural networks with periodic and monotonic activation functions: a comparative study in classification problems," in Proceedings of the 9th International Conference on Artificial Neural Networks (ICANN '99), Conference Publication no. 470, pp. 323–328, IET, September 1999.
20. B. Karlik and A. V. Olgac, "Performance analysis of various activation functions in generalized MLP architectures of neural networks," International Journal of Artificial Intelligence and Expert Systems, vol. 1, no. 4, pp. 111–122, 2011.
21. H. Yonaba, F. Anctil, and V. Fortin, "Comparing sigmoid transfer functions for neural network multistep ahead streamflow forecasting," Journal of Hydrologic Engineering, vol. 15, no. 4, pp. 275–283, 2010.
22. R. H. Inman, H. T. C. Pedro, and C. F. M. Coimbra, "Solar forecasting methods for renewable energy integration," Progress in Energy and Combustion Science, vol. 39, no. 6, pp. 535–576, 2013.
23. NREL, India Solar Resource Data: Hourly, 2013.
24. S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall PTR, 1994.
25. F. D. Marques, L. D. F. R. De Souza, D. C. Rebolho, A. S. Caporali, E. M. Belo, and R. L. Ortolan, "Application of time-delay neural and recurrent neural networks for the identification of a hingeless helicopter blade flapping and torsion motions," Journal of the Brazilian Society of Mechanical Sciences and Engineering, vol. 27, no. 2, pp. 97–103, 2005.
26. N. Saravanan, A. Duyar, T.-H. Guo, and W. C. Merrill, "Modeling space shuttle main engine using feed-forward neural networks," Journal of Guidance, Control, and Dynamics, vol. 17, no. 4, pp. 641–648, 1994.
27. S. Karsoliya, "Approximating number of hidden layer neurons in multiple hidden layer BPNN architecture," International Journal of Engineering Trends and Technology, vol. 3, no. 6, pp. 714–717, 2012.
28. T. Taskaya-Temizel and M. C. Casey, "A comparative study of autoregressive neural network hybrids," Neural Networks, vol. 18, no. 5-6, pp. 781–789, 2005.
29. M. Caudill and C. Butler, Understanding Neural Networks: Computer Explorations: A Workbook in Two Volumes with Software for the Macintosh and PC Compatibles, MIT Press, Boston, Mass, USA, 1994.
30. K. Mehrotra, C. K. Mohan, and S. Ranka, Elements of Artificial Neural Networks, MIT Press, Cambridge, Mass, USA, 1997.
31. M. T. Hagan, H. B. Demuth, and M. H. Beale, Neural Network Design, PWS Publishing, Boston, Mass, USA, 1996.
32. R. R. Trippi and E. Turban, Neural Networks in Finance and Investing: Using Artificial Intelligence to Improve Real World Performance, McGraw-Hill, New York, NY, USA, 1992.
33. H. T. C. Pedro and C. F. M. Coimbra, "Assessment of forecasting techniques for solar power production with no exogenous inputs," Solar Energy, vol. 86, no. 7, pp. 2017–2028, 2012.
34. E. W. Saad, D. V. Prokhorov, and D. C. Wunsch II, "Comparative study of stock trend prediction using time delay, recurrent and probabilistic neural networks," IEEE Transactions on Neural Networks, vol. 9, no. 6, pp. 1456–1470, 1998.