Research Article  Open Access
Short-Term Wind Speed Forecast Based on B-Spline Neural Network Optimized by PSO
Abstract
Considering the randomness and volatility of wind, a method based on a B-spline neural network optimized by particle swarm optimization is proposed to predict short-term wind speed. The B-spline neural network can flexibly change the division of the input space and the definition of the basis functions. For any input, only a few hidden-layer outputs are nonzero, so the output is simple and convergence is fast, but the network easily falls into a local minimum. The traditional method of dividing the input space is crude and degrades the final prediction accuracy. Particle swarm optimization is adopted to solve this problem by optimizing the node positions. Simulation results show that the proposed method achieves higher prediction accuracy than the traditional B-spline neural network and the BP neural network.
1. Introduction
Wind power is a clean, free, and renewable natural resource. In wind power generation systems, the randomness and volatility of wind affect the quality of power and the reliability of the power system. Improving the prediction accuracy of short-term wind speed is of great significance for real-time power scheduling, improving the reliability of the power supply, and lowering the cost of wind power generation [1–3].
The common methods of wind speed forecasting include the time series method, the support vector machine (SVM) method, and the wavelet analysis method. The time series method is adopted in [4, 5]; it makes full use of the sample data and its prediction model is simple, which reduces computation time, but it does not take the correlation among the sample data into account, and estimating the model orders is quite complicated. In [6, 7] SVM is adopted: by choosing an appropriate kernel function, SVM maps vectors that are inseparable in a low-dimensional space into a high-dimensional space where they become separable, improving the generalization ability and avoiding the local minimization problem, and it achieves higher prediction accuracy than the time series method. The multiresolution analysis capability of the wavelet transform and the generalization ability of SVM are combined to realize short-term wind speed forecasting in [8]; that method handles nonlinear phenomena very well and improves the prediction accuracy, but its design process is rather complex.
In view of the defects of the prediction methods above in practical application, a prediction method based on a B-spline neural network (BSNN) optimized by particle swarm optimization (PSO-BSNN) is proposed. Firstly, the wind speed data are used to calculate the correlation integral function [9, 10] and prove that the discrete time series is chaotic. By phase-space reconstruction [11, 12], the high-dimensional chaotic attractor is recovered. The BSNN can flexibly change the division of the input space and the definition of the basis functions [13, 14]. For any input, only a few hidden-layer outputs are nonzero, so the output is simple and convergence is fast, but the traditional uniform (linear) division of the input space is crude and degrades the final prediction accuracy. PSO is an intelligent search method with strong global search ability that is easy to implement [15–20]; it is used to optimize the nodes of the BSNN. Simulation results show that PSO-BSNN achieves higher prediction accuracy than BSNN and the BP neural network (BPNN).
2. Phase-Space Reconstruction
A series of continuous sample data are collected from a wind farm in Colorado State. With the sample interval of wind speed set to 10 min, there are 6 sample data in each hour; averaging these 6 data gives the mean value of each hour. The discrete sample data of 10 days are recorded as $\{x_i\}$, $i = 1, 2, \ldots, N$ ($N = 240$ hourly means). To make better use of the nonlinear nature of the neural network transfer function, the sample data are normalized into the interval $[0, 1]$; the normalization formula can be expressed as $$\bar{x}_i = \frac{x_i - x_{\min}}{x_{\max} - x_{\min}},$$ where $x_i$ is a sample datum, $x_{\min}$ is the minimum of all sample data, $x_{\max}$ is the maximum of all sample data, $\bar{x}_i$ is the normalized datum in $[0, 1]$, and $N$ is the length of the time sequence. The normalized wind speed is displayed in Figure 1.
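As a minimal sketch (NumPy, with hypothetical function names), the hourly averaging and min-max normalization described above might look like:

```python
import numpy as np

def hourly_means(samples):
    """Average each block of six 10-minute samples into one hourly value."""
    samples = np.asarray(samples, dtype=float)
    return samples.reshape(-1, 6).mean(axis=1)

def normalize(x):
    """Min-max normalization of a wind-speed series into [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())
```

The normalization is inverted after prediction to recover speeds in m/s.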
2.1. Takens Theorem
Takens proved that an appropriate embedding dimension $m$ can be found which satisfies $m \ge 2d + 1$, where $d$ is the dimension of the dynamic system, so that the dynamics of the system can be recovered by phase-space reconstruction. If $\tau$ is the delay time and $m$ is the embedding dimension, then after phase-space reconstruction the time series $\{x_i\}$ is described by the phase points $$X_j = \left(x_j,\, x_{j+\tau},\, \ldots,\, x_{j+(m-1)\tau}\right), \quad j = 1, 2, \ldots, M,$$ where $M = N - (m-1)\tau$ is the number of phase points.
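A sketch of the delay embedding under the notation above (NumPy; the parameter values in the comment are illustrative):

```python
import numpy as np

def embed(x, m, tau):
    """Phase-space reconstruction: stack the M = N - (m-1)*tau delay
    vectors X_j = (x_j, x_{j+tau}, ..., x_{j+(m-1)tau}) as rows."""
    x = np.asarray(x, dtype=float)
    M = len(x) - (m - 1) * tau
    return np.array([x[j:j + (m - 1) * tau + 1:tau] for j in range(M)])

# e.g. embed(series, m=5, tau=2) gives an (N - 8) x 5 matrix of phase points
```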
2.2. Parameter Selection
The selection of the embedding dimension $m$ and the delay time $\tau$ is very important for reconstructing the time series and influences the final prediction accuracy. The saturation correlation dimension method is used to calculate the embedding dimension $m$; the correlation integral function can be stated as $$C(r) = \frac{2}{M(M-1)} \sum_{1 \le i < j \le M} \theta\left(r - \left\lVert X_i - X_j \right\rVert\right),$$ where $r$ is the searching radius, $\lVert X_i - X_j \rVert$ is the distance between phase points, and $\theta(\cdot)$ is the Heaviside unit function: $$\theta(u) = \begin{cases} 1, & u \ge 0, \\ 0, & u < 0. \end{cases}$$
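A direct $O(M^2)$ sketch of the correlation integral (the Euclidean norm is an assumption; any consistent norm works):

```python
import numpy as np

def correlation_integral(X, r):
    """C(r): fraction of distinct phase-point pairs within radius r."""
    X = np.asarray(X, dtype=float)
    M = len(X)
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.triu_indices(M, k=1)          # distinct pairs only
    return np.count_nonzero(dists[i, j] <= r) / (M * (M - 1) / 2)
```

The correlation dimension is then estimated as the slope of $\ln C(r)$ against $\ln r$ over a suitable range of $r$, e.g. with `np.polyfit`.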
For a proper scope of $r$, the attractor dimension is $D = \mathrm{d}\ln C(r)/\mathrm{d}\ln r$, that is, the slope of $\ln C(r)$ against $\ln r$. The relation curve between $\ln C(r)$ and $\ln r$ is shown in Figure 2, and the relation curve between $D$ and $m$ is shown in Figure 3.
It can be seen from Figure 3 that when $m = 5$ the attractor dimension reaches its maximum, so this value of $D$ is the saturation correlation dimension. The logarithm of the correlation integral function is less than zero, namely $\ln C(r) < 0$, and $C(r)$ decays exponentially as $r$ shrinks, which indicates that the time series is chaotic rather than a noise series.
The mutual information method is used to determine the delay time $\tau$. The information content of a time series can be defined by the entropy function $$H(X) = -\sum_{i} P(x_i) \ln P(x_i),$$ where $H(X)$ is the entropy and $P(x_i)$ is the probability of $x_i$.
If $X$ and $Y$ are discrete random variables, their joint entropy function can be expressed as $$H(X, Y) = -\sum_{i} \sum_{j} P(x_i, y_j) \ln P(x_i, y_j).$$
The mutual information function is $$I(X; Y) = H(X) + H(Y) - H(X, Y).$$ Taking $Y$ as the series delayed by $\tau$ gives $I(\tau)$.
The relation curve between $I(\tau)$ and $\tau$ is displayed in Figure 4.
The mutual information function signifies the correlation between $x_t$ and $x_{t+\tau}$. If $I(\tau) = 0$, they are completely irrelevant; the value of $\tau$ at which $I(\tau)$ reaches its minimum for the first time is the required delay time, which can be read from Figure 4.
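A histogram-based sketch of $I(\tau)$ and the first-minimum rule (the bin count is an assumed tuning parameter, not from the paper):

```python
import numpy as np

def mutual_information(x, tau, bins=16):
    """I(tau) between x(t) and x(t+tau), estimated from a joint histogram."""
    x = np.asarray(x, dtype=float)
    pxy, _, _ = np.histogram2d(x[:-tau], x[tau:], bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)   # marginal distributions
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def first_minimum(values):
    """Index of the first local minimum of a sequence (candidate delay)."""
    for i in range(1, len(values) - 1):
        if values[i] < values[i - 1] and values[i] <= values[i + 1]:
            return i
    return len(values) - 1
```

Evaluating `mutual_information` for $\tau = 1, 2, \ldots$ and applying `first_minimum` reproduces the curve-reading step described above.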
3. BSpline Neural Network
BSNN is designed based on spline interpolation theory, and it can flexibly change the division of the input space and the definition of the basis functions. For any input, only a few hidden-layer outputs are nonzero, so the output is simple and convergence is fast. It can realize nonlinear function approximation with high precision.
3.1. Basis Function of BSNN
According to the reconstructed phase points of Section 2, the BSNN prediction model is structured as shown in Figure 5, where $X = (x_1, x_2, \ldots, x_n)$ is the input vector and $y$ is the output of the network, corresponding to the mean wind speed in the next hour.
(1) The Division of Input Space. The input space has to be divided before the basis functions are defined. Each input $x_k$ is defined on a finite interval $[x_{k,\min}, x_{k,\max}]$.
Then the interval can be divided by internal nodes as $$x_{k,\min} = \lambda_0 < \lambda_1 < \cdots < \lambda_r < \lambda_{r+1} = x_{k,\max},$$ where $\lambda_j$ ($j = 1, \ldots, r$) is the $j$th internal node, and the external nodes are placed outside the interval: $\lambda_{-k+1} \le \cdots \le \lambda_{-1} \le \lambda_0$ and $\lambda_{r+1} \le \lambda_{r+2} \le \cdots \le \lambda_{r+k}$.
The nodes divide the interval into $r + 1$ subintervals $I_j = [\lambda_j, \lambda_{j+1}]$, $j = 0, 1, \ldots, r$.
(2) The Multivariable Basis Function of BSNN. Let $B_j^k(x)$ denote the $j$th basis function of order $k$ for input $x$, defined on the nodes $\lambda_{j-k}, \ldots, \lambda_j$ (the expansion factor scales the node spacing that sets each basis function's support). The recurrence relations can be expressed as follows: $$B_j^1(x) = \begin{cases} 1, & x \in [\lambda_{j-1}, \lambda_j), \\ 0, & \text{otherwise}, \end{cases}$$ $$B_j^k(x) = \frac{x - \lambda_{j-k}}{\lambda_{j-1} - \lambda_{j-k}}\, B_{j-1}^{k-1}(x) + \frac{\lambda_j - x}{\lambda_j - \lambda_{j-k+1}}\, B_j^{k-1}(x).$$
If the number of internal nodes is 2 and the expansion factor is 1, the basis function graph of the third-order BSNN for a single variable is shown in Figure 6.
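The recurrence above can be sketched directly as a generic Cox-de Boor evaluation (the knot vector and evaluation point below are illustrative, not the paper's):

```python
def bspline_basis(j, k, knots, x):
    """Value at x of the j-th B-spline basis function of order k
    (degree k-1) on a nondecreasing knot vector, via Cox-de Boor."""
    if k == 1:
        return 1.0 if knots[j] <= x < knots[j + 1] else 0.0
    left = right = 0.0
    if knots[j + k - 1] > knots[j]:
        left = (x - knots[j]) / (knots[j + k - 1] - knots[j]) \
               * bspline_basis(j, k - 1, knots, x)
    if knots[j + k] > knots[j + 1]:
        right = (knots[j + k] - x) / (knots[j + k] - knots[j + 1]) \
                * bspline_basis(j + 1, k - 1, knots, x)
    return left + right
```

On uniform knots, the order-$k$ basis functions sum to one at interior points and only $k$ of them are nonzero there, which is the property the text relies on.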
The number of basis functions for $x_k$ is $p_k = r_k + k$, the number of internal nodes plus the order.
The multivariable basis functions of $X$ are obtained by combining the single-variable basis functions with each other (tensor product): $$B_j(X) = \prod_{i=1}^{n} B_{j_i}(x_i).$$
3.2. The Mapping of BSNN
The network mapping from input to output can be divided into two parts.

(1) $X \to U$. This part realizes the nonlinear mapping from the input layer to the hidden layer: $u_j = B_j(X)$, where $u_j$ is the output value of the $j$th basis function. By the positivity and compact support of the basis functions, the number of nonzero outputs is $k^n$, each such output belongs to the interval $[0, 1]$, and the other outputs are zero; moreover the basis functions sum to one, $\sum_j B_j(X) = 1$.

(2) $U \to y$. This part realizes the linear mapping from the hidden layer to the output layer: $$y = \sum_j w_j u_j = W^{\mathrm{T}} U,$$ where $w_j$ is a weight of the network. The weights can be updated as $$W(t+1) = W(t) + \eta\, \frac{e(t)}{U^{\mathrm{T}} U}\, U,$$ where $t$ is the iteration number, $e(t) = y_d - y(t)$ is the error between the expected value $y_d$ and the actual value $y(t)$, and $\eta$ is the learning rate, which must ensure that the iterative algorithm converges. The verification is as follows.

Set $e(t) = y_d - y(t)$; then $$e(t+1) = y_d - y(t+1) = e(t) - \eta\, \frac{e(t)}{U^{\mathrm{T}} U}\, U^{\mathrm{T}} U = (1 - \eta)\, e(t).$$

Set the Lyapunov function as $V(t) = e^2(t)$; then $$\Delta V = e^2(t+1) - e^2(t) = \left[(1 - \eta)^2 - 1\right] e^2(t).$$

Since $0 < \eta < 2$, we have $(1 - \eta)^2 - 1 < 0$, so $\Delta V < 0$ and the parameters converge.
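In code, the hidden-to-output update is a normalized-LMS step (function and variable names are hypothetical):

```python
import numpy as np

def train_step(w, u, y_d, eta=0.5):
    """One weight update w <- w + eta * e * u / (u.u).
    The error contracts as e(t+1) = (1 - eta) e(t), so it
    converges for 0 < eta < 2."""
    e = y_d - w @ u                  # error between expected and actual output
    return w + eta * e * u / (u @ u)
```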
4. BSpline Neural Network Optimized by PSO
The placement of the internal nodes of the BSNN strongly affects the prediction accuracy. The traditional method, which divides the input space uniformly, is crude and makes it difficult to reach higher approximation accuracy. PSO is used to optimize these parameters, which improves the accuracy and avoids falling into a local minimum.
4.1. The Basic Principle of PSO
PSO stems from the study of bird foraging behavior and is mainly used to solve optimization problems. Each particle is a point with a certain velocity in the solution space, and it has an individual fitness that corresponds to the objective function; the effectiveness of a solution is measured by a fitness function defined on the basis of the optimization goal. The particles follow the current optimum particles through the solution space and find the optimal solution by iteration.
In a $D$-dimensional search space, let the current position of particle $i$ be $X_i = (x_{i1}, \ldots, x_{iD})$, its current velocity $V_i = (v_{i1}, \ldots, v_{iD})$, and the best position it has experienced $P_i = (p_{i1}, \ldots, p_{iD})$. With $S$ particles in total, the best position experienced by the whole swarm is $P_g = (p_{g1}, \ldots, p_{gD})$. The particles update themselves by tracking the individual extremum $P_i$ and the global extremum $P_g$ during each iteration, updating their velocity and position as follows: $$v_{id}(t+1) = \omega\, v_{id}(t) + c_1 r_1 \left[p_{id} - x_{id}(t)\right] + c_2 r_2 \left[p_{gd} - x_{id}(t)\right],$$ $$x_{id}(t+1) = x_{id}(t) + v_{id}(t+1),$$ where $c_1, c_2$ are acceleration constants; $v_{id}$ is the velocity of the $i$th particle in dimension $d$ and $x_{id}$ its position; $r_1, r_2$ are random numbers in the interval $[0, 1]$; and $\omega$ is the inertia weight used to balance global and local searching, updated as $$\omega = \omega_{\max} - \left(\omega_{\max} - \omega_{\min}\right)\frac{t}{t_{\max}},$$ where $\omega_{\max}$ and $\omega_{\min}$ are the maximum and minimum of $\omega$, $t$ is the current iteration number, and $t_{\max}$ is the maximum number of iterations.
In order to prevent a particle from leaving the search space, the velocity of each particle is limited to a maximum $v_{\max}$: if the updated velocity exceeds it, the velocity is clipped to $\pm v_{\max}$. A typical choice is $v_{\max} = k\, x_{\max}$, where $k$ belongs to the interval $[0.1, 1]$ and $x_{\max}$ is the maximum of the position range.
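The velocity and position updates, velocity clamping, and linearly decreasing inertia weight can be sketched as follows (the default parameter values are illustrative assumptions):

```python
import numpy as np

def inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight."""
    return w_max - (w_max - w_min) * t / t_max

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, v_max=0.5, rng=None):
    """One PSO iteration for one particle (vectorized over dimensions)."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, -v_max, v_max)      # keep particles inside the search space
    return x + v, v
```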
4.2. PSOBSNN Algorithm Implementation
The traditional way for a BSNN to place the internal nodes of the input space is crude; if the internal nodes cannot be determined accurately, the BSNN easily falls into a local minimum and the prediction accuracy suffers. Since PSO is an intelligent search method, the internal nodes can be optimized with the PSO algorithm to obtain their optimal locations. The dimension $D$ of each particle vector equals the number of nodes to be placed, each vector being composed of the internal and external nodes of the BSNN. For a single input variable, the number of internal nodes is 2 and the number of external nodes is 4; then the vector dimension is $D = 6$ per input variable. The optimization process is as follows.
(1) Initialization: the basic initial parameters include the position $X_i$ and velocity $V_i$ of each particle, the particle population size, the largest iteration number $t_{\max}$, and the acceleration constants $c_1$ and $c_2$.
(2) Fitness Evaluation: the error between the expected and actual values is selected as the objective function that measures the effectiveness of the optimization. The mean square error is adopted as the fitness function: $$f = \frac{1}{N_s} \sum_{i=1}^{N_s} \left(y_{d,i} - y_i\right)^2,$$ where $N_s$ is the number of samples, $y_{d,i}$ is the expected value of the $i$th sample, and $y_i$ is the actual output value of the network configured by the particle.
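The fitness evaluation reduces to the mean-square error over the training samples; a minimal sketch:

```python
import numpy as np

def fitness(y_expected, y_actual):
    """Mean-square error fitness of one particle (smaller is better)."""
    y_expected = np.asarray(y_expected, dtype=float)
    y_actual = np.asarray(y_actual, dtype=float)
    return float(np.mean((y_expected - y_actual) ** 2))
```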
(3) Update the velocity and position of each particle according to the update formulas above, and limit the velocity to $v_{\max}$.
(4) Calculate the fitness value of each particle and compare it with the best fitness found so far; if it is better, update the particle's best position; otherwise leave it unchanged.
(5) Judge whether the termination condition is satisfied, that is, whether the maximum number of iterations has been reached or the mean square error target has been met. If so, stop; otherwise return to step (2).
5. Simulation Results and Analysis
The largest iteration number, the acceleration constants $c_1$ and $c_2$, the particle population size, and the inertia-weight bounds $\omega_{\max}$ and $\omega_{\min}$ are fixed, and the initial position and velocity of each particle are generated randomly in the defined interval. A third-order B-spline neural network is adopted, with 5 network inputs. The internal node number of each input variable is 2 and the expansion factor is 1; by the tensor-product construction of Section 3, the number of hidden-layer units is then $(2+3)^5 = 3125$, of which at most $3^5 = 243$ are nonzero for any input. There is only one output unit.
Three kinds of prediction error are adopted as evaluation criteria: $$e_{\mathrm{RMSE}} = \sqrt{\frac{1}{N_s} \sum_{i=1}^{N_s} \left(\hat{y}_i - y_i\right)^2}, \quad e_{\mathrm{MAE}} = \frac{1}{N_s} \sum_{i=1}^{N_s} \left|\hat{y}_i - y_i\right|, \quad e_{\mathrm{MRE}} = \frac{1}{N_s} \sum_{i=1}^{N_s} \frac{\left|\hat{y}_i - y_i\right|}{y_i} \times 100\%,$$ where $e_{\mathrm{RMSE}}$ is the root mean square error, which measures the deviation between the prediction $\hat{y}_i$ and the real value $y_i$; $e_{\mathrm{MAE}}$ is the mean absolute error, reflecting how far the predictions deviate from the real values; and $e_{\mathrm{MRE}}$ is the mean relative error, reflecting the reliability of the mean absolute error.
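Under the definitions above, the three criteria can be computed as follows (assuming strictly positive real speeds, so the relative error is well defined):

```python
import numpy as np

def evaluation(y_real, y_pred):
    """Return (RMSE, MAE, MRE%) between real and predicted wind speeds."""
    y_real = np.asarray(y_real, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_real
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    mre = float(np.mean(np.abs(err) / y_real) * 100.0)
    return rmse, mae, mre
```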
According to the wind speed data of 10 days in Figure 1, provided by the wind farm in Colorado State, the data of the first 7 days are used as training samples to train the neural network, and the data of the last 3 days are used as test samples to examine the prediction results. Simulation results based on PSO-BSNN, BSNN, and BPNN are displayed in Figure 7; all three adopt 3-layer network structures.
The prediction error is listed in Table 1.

It can be seen from Table 1 that the prediction model based on BSNN performs better than the one based on BPNN: the RMSE and MAE have fallen by 0.2455 m/s and 0.2614 m/s, respectively, the MRE has fallen by 14.54%, and the prediction effect is clearly improved. Compared with the BSNN model, the PSO-BSNN model enhances the prediction performance greatly, because the input space of the BSNN has been optimized by the particle swarm. Compared with the BPNN model, its RMSE and MAE have fallen by 0.4198 m/s and 0.4233 m/s, respectively, and its MRE has fallen by 36.50%.
Now the BPNN is given a 4-layer network structure, and the simulation result is shown in Figure 8.
The prediction error is listed in Table 2.

It can be seen from Table 2 that the simulation result based on the 4-layer BPNN structure is very similar to that of BSNN, but still poorer than the prediction result based on PSO-BSNN.
The prediction of 3-day wind speed belongs to short-term wind speed prediction. Now the wind speed data of 10 hours from Sinkiang in China are used for prediction: the data of the first 7 hours are used as training samples and the wind speed data of the last 3 hours as test samples. This belongs to super short-term wind speed prediction. The sample interval of wind speed is still 10 min, so there are 6 sample data in each hour, and the number of test samples over 3 hours is 18. The simulation result is shown in Figure 9.
The prediction error is listed in Table 3.

It can be seen from Table 3 that, compared with BPNN, in the simulation results based on BSNN the RMSE and MAE have fallen by 0.0465 m/s and 0.0346 m/s, respectively, and the MRE has fallen by 8.76%. Compared with BSNN, in the simulation results based on PSO-BSNN the RMSE and MAE have fallen by 0.0493 m/s and 0.0616 m/s, respectively, and the MRE has fallen by 6.29%.
Super short-term wind speed prediction results based on the 4-layer BP neural network structure are shown in Figure 10.
The prediction error is listed in Table 4.

It can be seen from Table 4 that the super short-term wind speed prediction based on the 4-layer BPNN is very similar to that of BSNN, but still poorer than the prediction result based on PSO-BSNN.
According to the simulation results of both short-term and super short-term wind speed prediction, the prediction model based on PSO-BSNN consistently obtains better results than BSNN and BPNN.
6. Conclusions
A prediction model based on a BSNN optimized by PSO is presented in this paper. The BSNN is designed on B-spline interpolation theory and can flexibly change the division of the input space, but the traditional uniform division of the input space is crude and degrades the final prediction accuracy; PSO is adopted to solve this problem. Simulation results show that PSO-BSNN performs better than BSNN and BPNN, and the forecasting accuracy is improved effectively.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] R. Belu and D. Koracin, "Wind characteristics and wind energy potential in western Nevada," Renewable Energy, vol. 34, no. 10, pp. 2246–2251, 2009.
[2] K. Xie and R. Billinton, "Energy and reliability benefits of wind energy conversion systems," Renewable Energy, vol. 36, no. 7, pp. 1983–1988, 2011.
[3] M. C. Alexiadis, P. S. Dokopoulos, H. S. Sahsamanoglou, and I. M. Manousaridis, "Short-term forecasting of wind speed and related electrical power," Solar Energy, vol. 63, no. 1, pp. 61–68, 1998.
[4] A. Yona, T. Senjyu, F. Toshihisa, and C. Kim, "Very short-term generating power forecasting for wind power generators based on time series analysis," Smart Grid and Renewable Energy, vol. 4, no. 2, pp. 181–186, 2013.
[5] E. A. Bossanyi, "Short-term wind prediction using Kalman filters," Wind Engineering, vol. 9, no. 1, pp. 1–8, 1985.
[6] M. A. Mohandes, T. O. Halawani, S. Rehman, and A. A. Hussain, "Support vector machines for wind speed prediction," Renewable Energy, vol. 29, no. 6, pp. 939–947, 2004.
[7] S. Salcedo-Sanz, E. G. Ortiz-García, Á. M. Pérez-Bellido, A. Portilla-Figueras, and L. Prieto, "Short term wind speed prediction based on evolutionary support vector regression algorithms," Expert Systems with Applications, vol. 38, no. 4, pp. 4052–4057, 2011.
[8] O. Kisi, J. Shiri, and O. Makarynskyy, "Wind speed prediction by using different wavelet conjunction models," The International Journal of Ocean and Climate Systems, vol. 2, no. 3, pp. 189–208, 2011.
[9] H. Leung, T. Lo, and S. Wang, "Prediction of noisy chaotic time series using an optimal radial basis function neural network," IEEE Transactions on Neural Networks, vol. 12, no. 5, pp. 1163–1172, 2001.
[10] D. S. K. Karunasinghe and S.-Y. Liong, "Chaotic time series prediction with a global model: artificial neural network," Journal of Hydrology, vol. 323, no. 1–4, pp. 92–105, 2006.
[11] J. Zhang, K. C. Lam, W. J. Yan, H. Gao, and Y. Li, "Time series prediction using Lyapunov exponents in embedding phase space," Computers & Electrical Engineering, vol. 30, no. 1, pp. 1–15, 2004.
[12] H. Ma and C. Han, "Selection of embedding dimension and delay time in phase space reconstruction," Frontiers of Electrical and Electronic Engineering in China, vol. 1, no. 1, pp. 111–114, 2006.
[13] A. Kechroud, J. J. H. Paulides, and E. A. Lomonova, "B-spline neural network approach to inverse problems in switched reluctance motor optimal design," IEEE Transactions on Magnetics, vol. 47, no. 10, pp. 4179–4182, 2011.
[14] L. d. S. Coelho and M. W. Pessôa, "Nonlinear identification using a B-spline neural network and chaotic immune approaches," Mechanical Systems and Signal Processing, vol. 23, no. 8, pp. 2418–2434, 2009.
[15] R. Poli, J. Kennedy, and T. Blackwell, "Particle swarm optimization," Swarm Intelligence, vol. 1, no. 1, pp. 33–57, 2007.
[16] Y. Marinakis, M. Marinaki, and G. Dounias, "A hybrid particle swarm optimization algorithm for the vehicle routing problem," Engineering Applications of Artificial Intelligence, vol. 23, no. 4, pp. 463–472, 2010.
[17] B. Brandstätter and U. Baumgartner, "Particle swarm optimization—mass-spring system analogon," IEEE Transactions on Magnetics, vol. 38, no. 2, pp. 997–1000, 2002.
[18] J. Botzheim, C. Cabrita, and L. T. Kóczy, "Genetic and bacterial programming for B-spline neural networks design," Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 11, no. 2, pp. 220–231, 2007.
[19] R. C. Eberhart and Y. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 1, pp. 81–86, IEEE, Seoul, South Korea, May 2001.
[20] V. G. Gudise and G. K. Venayagamoorthy, "Comparison of particle swarm optimization and backpropagation as training algorithms for neural networks," in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '03), pp. 110–117, Indianapolis, Ind, USA, April 2003.
Copyright
Copyright © 2015 Zhongqiang Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.