Computational Intelligence and Neuroscience

Research Article | Open Access

Volume 2018 | Article ID 3713410 | 9 pages

A Hybrid Model for Forecasting Sunspots Time Series Based on Variational Mode Decomposition and Backpropagation Neural Network Improved by Firefly Algorithm

Academic Editor: Leonardo Franco
Received: 25 May 2018
Revised: 12 Sep 2018
Accepted: 13 Sep 2018
Published: 14 Oct 2018


Abstract

The change in the number of sunspots has a great impact on the Earth's climate, agriculture, communications, natural disasters, and other aspects, so predicting the number of sunspots is very important. Aiming at the chaotic characteristics of the monthly mean sunspot number, a novel hybrid model for forecasting sunspot time series based on variational mode decomposition (VMD) and a backpropagation (BP) neural network improved by the firefly algorithm (FA) is proposed. Firstly, a set of intrinsic mode functions (IMFs) is obtained by VMD decomposition of the monthly mean time series of sunspots. Secondly, the firefly algorithm is introduced to initialize the weights and thresholds of the BP neural network, and a prediction model is established for each IMF. Finally, the predicted values of these components are summed to obtain the final prediction result. Compared with the BP, FA-BP, EMD-BP, and VMD-BP models, the simulation results show that the proposed algorithm has higher prediction accuracy and can be used to forecast sunspot time series.

1. Introduction

Sunspots are the most basic and obvious phenomenon of solar activity, and the sunspot number is an indicator of the overall solar activity level. Sunspot time-series data are considered nonlinear, nonstationary, and chaotic and are widely used to evaluate the effectiveness of nonlinear time-series models [1]. Sunspots have a very significant impact on the Earth's magnetic field, climate, geology, animals, and plants and also endanger the living environment of human beings. Therefore, the observation and prediction of sunspot numbers are of great significance. At present, drawing on a large amount of long-term historical data, researchers have proposed a variety of methods to forecast yearly means, monthly means, and peaks of sunspot numbers, such as chaos theory [2, 3] and various neural networks with their optimization algorithms. Among them, Ding et al. [4] used a BP neural network to predict the monthly mean smoothed sunspot area. Zhao et al. [5] used an RBF neural network to predict the smoothed monthly mean sunspot numbers, but the prediction error grew gradually as the forecasting horizon lengthened. Artificial neural networks inherently suffer from slow learning speed and poor fault tolerance and easily fall into local minima; because these defects had not been well addressed, prediction accuracy could not be further improved. The rise of intelligent optimization algorithms has greatly mitigated these shortcomings. Guan et al. [6] used quantum-behaved particle swarm optimization to optimize the weights and thresholds of a BP neural network to predict sunspot numbers. Li et al. [7, 8] used genetic algorithms to optimize BP neural networks to predict short-term traffic flow chaotic time series. The experimental results show that the optimized models perform better.

Empirical mode decomposition (EMD), wavelet decomposition, and other methods provide a way to process nonlinear signals. In [9, 10], EMD was applied to decomposition-based prediction of nonlinear time series, but EMD suffers from mode mixing and end-point effects [11]. The introduction of ensemble empirical mode decomposition (EEMD) [12] and variational mode decomposition (VMD) [13] has alleviated the mode mixing to some extent. VMD in particular recasts signal decomposition as a nonrecursive variational problem; it produces fewer components than EMD and EEMD and shows better noise robustness.

In recent years, hybrid models combining EMD, EEMD, or VMD with artificial neural networks have been widely used in the field of prediction [9, 10, 14, 15]. Hybrid forecasting models formed by combining VMD with other algorithms have been successfully applied in many fields, such as financial time series [16], stock price evaluation [17], and wind speed [18]. Lahmiri [16] developed a new model combining VMD and a general regression neural network (GRNN) to analyze and predict economic and financial time series. In [17], VMD and a BP neural network were combined to predict stock prices well. In [18], VMD was applied to wind speed sequences, greatly reducing data complexity and improving prediction accuracy. Therefore, this paper proposes a novel hybrid forecasting model based on VMD and a BP neural network optimized by the firefly algorithm (FA-BP) and applies the model to the forecasting of sunspot numbers.

2. Basic Theory

2.1. Variational Mode Decomposition

VMD is an effective signal decomposition method. Its overall framework is a variational problem [13, 19], which differs from the recursive sifting of EMD. Each mode is assumed to have a finite bandwidth around a distinct center frequency, and the goal is to minimize the sum of the estimated bandwidths of all modes. The algorithm can be divided into the structure and the solution of the variational problem. The detailed description is as follows.

2.1.1. The Structure of Variational Problem

Assume the original signal $f(t)$ is decomposed into $K$ modal functions $u_k(t)$, so that the problem can be described as a constrained variational problem. The target is to minimize the sum of the estimated bandwidths of the modes, and the constraint is that the sum of the modes equals the input signal $f(t)$:

$$\min_{\{u_k\},\{\omega_k\}} \sum_{k=1}^{K} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \quad \text{s.t.} \quad \sum_{k=1}^{K} u_k(t) = f(t), \tag{1}$$

where $u_k$ is the $k$th mode and $\omega_k$ is its center frequency.

2.1.2. The Solution of Variational Problem

(a) The above constrained variational problem can be changed into an unconstrained variational problem by introducing a quadratic penalty factor $\alpha$ and a Lagrange multiplier $\lambda(t)$, where $\alpha$ guarantees the reconstruction accuracy of the signal and $\lambda(t)$ maintains the strictness of the constraint. The augmented Lagrangian is denoted as

$$L(\{u_k\},\{\omega_k\},\lambda) = \alpha \sum_{k=1}^{K} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 + \left\| f(t) - \sum_{k=1}^{K} u_k(t) \right\|_2^2 + \left\langle \lambda(t),\, f(t) - \sum_{k=1}^{K} u_k(t) \right\rangle. \tag{2}$$

(b) The alternating direction method of multipliers (ADMM) is used to find the saddle point of the above variational problem, and $u_k^{n+1}$, $\omega_k^{n+1}$, and $\lambda^{n+1}$ are updated alternately ($n$ denotes the iteration number):

$$\hat{u}_k^{n+1}(\omega) = \frac{\hat{f}(\omega) - \sum_{i \neq k} \hat{u}_i(\omega) + \hat{\lambda}^n(\omega)/2}{1 + 2\alpha\,(\omega - \omega_k^{n})^2}, \tag{3}$$

$$\omega_k^{n+1} = \frac{\int_0^{\infty} \omega\, |\hat{u}_k^{n+1}(\omega)|^2\, d\omega}{\int_0^{\infty} |\hat{u}_k^{n+1}(\omega)|^2\, d\omega}, \tag{4}$$

$$\hat{\lambda}^{n+1}(\omega) = \hat{\lambda}^{n}(\omega) + \tau \left( \hat{f}(\omega) - \sum_{k=1}^{K} \hat{u}_k^{n+1}(\omega) \right), \tag{5}$$

where $\hat{f}(\omega)$, $\hat{u}_k(\omega)$, and $\hat{\lambda}(\omega)$ are the Fourier transforms of $f(t)$, $u_k(t)$, and $\lambda(t)$, and $\tau$ is the update step of the multiplier.

(c) Given a discriminant accuracy $\varepsilon > 0$, the convergence condition for stopping the iteration is

$$\sum_{k=1}^{K} \frac{\left\| \hat{u}_k^{n+1} - \hat{u}_k^{n} \right\|_2^2}{\left\| \hat{u}_k^{n} \right\|_2^2} < \varepsilon. \tag{6}$$

The specific process of the VMD algorithm is summarized as follows:

(1) Initialize $\{\hat{u}_k^1\}$, $\{\omega_k^1\}$, $\hat{\lambda}^1$, and $n = 0$.
(2) Update the values of $u_k$, $\omega_k$, and $\lambda$ according to Equations (3)–(5).
(3) Judge whether the convergence condition (6) is met, and repeat the above steps to update the parameters until the stop condition is satisfied.
(4) Obtain the modal subsequences corresponding to the given number of modes.
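The iteration above can be sketched in code. The following minimal NumPy implementation is illustrative only, not the authors' code: it updates the modes in the frequency domain according to Equations (3)–(5) and checks condition (6), but it omits the mirror extension of the signal used in reference implementations, applies a filter symmetric in $\pm\omega$ so the modes stay real, and uses assumed values for `alpha`, `tau`, and the center-frequency initialization.

```python
import numpy as np

def vmd(signal, K, alpha=2000.0, tau=0.0, tol=1e-7, max_iter=500):
    """Minimal VMD sketch: ADMM updates of Equations (3)-(5) in the frequency domain."""
    T = len(signal)
    f_hat = np.fft.fftshift(np.fft.fft(signal))
    freqs = np.arange(T) / T - 0.5                 # normalized frequencies in [-0.5, 0.5)
    u_hat = np.zeros((K, T), dtype=complex)        # mode spectra
    omega = np.linspace(0, 0.5, K + 1)[1:]         # initial center frequencies (assumed)
    lam = np.zeros(T, dtype=complex)               # Lagrange multiplier spectrum
    for _ in range(max_iter):
        u_prev = u_hat.copy()
        for k in range(K):
            others = u_hat.sum(axis=0) - u_hat[k]
            # Wiener-filter update, Equation (3); |freqs| keeps the spectrum Hermitian
            u_hat[k] = (f_hat - others + lam / 2) / (
                1 + 2 * alpha * (np.abs(freqs) - omega[k]) ** 2)
            pos = freqs > 0
            power = np.abs(u_hat[k, pos]) ** 2
            omega[k] = np.sum(freqs[pos] * power) / (np.sum(power) + 1e-12)  # Eq. (4)
        lam = lam + tau * (f_hat - u_hat.sum(axis=0))                        # Eq. (5)
        diff = np.sum(np.abs(u_hat - u_prev) ** 2) / (np.sum(np.abs(u_prev) ** 2) + 1e-12)
        if diff < tol:                                                       # condition (6)
            break
    modes = np.real(np.fft.ifft(np.fft.ifftshift(u_hat, axes=-1), axis=-1))
    return modes, omega
```

On a two-tone test signal, the recovered center frequencies approach the true tone frequencies and the modes sum back to the input.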

2.2. FA-BP Model
2.2.1. Backpropagation Neural Network

Backpropagation (BP) network is a multilayered feedforward neural network trained by error backpropagation. It has good self-organizing learning ability and can implement any nonlinear mapping from input to output [20, 21]. The network prediction model mainly realizes the training process through the forward propagation of the input signal and the backpropagation of the error signal. It can parallelize large-scale data and has a certain degree of robustness and fault tolerance. The typical structure of three-layer BP neural network is shown in Figure 1.

In Figure 1, $X = (x_1, x_2, \ldots, x_n)$ is the input vector of the BP neural network, $O = (O_1, O_2, \ldots, O_m)$ is the network output, and $Y = (Y_1, Y_2, \ldots, Y_m)$ is the expected output. The input vectors are propagated layer by layer from the input layer through the hidden layer to the output layer. The numbers of input-layer and output-layer nodes are determined by the features and dimensions of the training samples, respectively, while the number of hidden-layer nodes must be chosen empirically according to the situation. Taking the transfer function between layers to be the sigmoid function $\varphi$, the output of the $j$th node of the hidden layer is

$$H_j = \varphi\!\left( \sum_{i=1}^{n} w_{ij} x_i - a_j \right), \quad j = 1, 2, \ldots, l, \tag{7}$$

where $w_{ij}$ is the connection weight from the $i$th node of the input layer to the $j$th node of the hidden layer, $a_j$ is the threshold of the $j$th hidden node, and $l$ is the number of hidden-layer nodes. The hidden-layer outputs are passed forward to give the $k$th node of the output layer as

$$O_k = \sum_{j=1}^{l} H_j w_{jk} - b_k, \quad k = 1, 2, \ldots, m, \tag{8}$$

where $w_{jk}$ is the connection weight from the $j$th hidden node to the $k$th output node, $b_k$ is the threshold of the $k$th output node, and $m$ is the number of output-layer nodes. From the network prediction output $O_k$ and the expected output $Y_k$, the network prediction error is

$$e_k = Y_k - O_k. \tag{9}$$

The BP neural network adjusts the weights and thresholds of all layers along the direction of error reduction, repeating the corrections until the algorithm converges to a satisfactory error precision.
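The forward pass described above can be written compactly. This is a minimal NumPy sketch, not the authors' implementation; the sigmoid hidden layer and linear output layer follow the description in the text, and the argument names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_forward(x, W1, a, W2, b):
    """Forward pass of the three-layer BP network described above.

    Hidden layer: H_j = sigmoid(sum_i w_ij * x_i - a_j)
    Output layer: O_k = sum_j H_j * w_jk - b_k
    """
    H = sigmoid(W1 @ x - a)   # hidden-layer outputs
    O = W2 @ H - b            # linear output layer
    return H, O

def prediction_error(Y, O):
    """Network prediction error e_k = Y_k - O_k."""
    return Y - O
```

Training then backpropagates this error to correct `W1`, `a`, `W2`, and `b` iteratively.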

2.2.2. Firefly Algorithm

The firefly algorithm (FA) was developed by the Cambridge scholar Yang [22]. It is a swarm intelligence optimization algorithm based on an idealization of the flashing behavior of fireflies. Compared with other optimization algorithms, the main advantages of the firefly algorithm are its simple structure, few tuning parameters, and good spatial search capability [23]. The algorithm has therefore been applied widely, for example, in function optimization, path planning, and resource management [24–27].

In this algorithm, the brightness of a firefly and the mutual attractiveness are the key factors that determine a firefly's movement. The fitness value of a firefly's position determines its brightness: the brighter the firefly, the better its position. Attractiveness is tied to brightness: a brighter firefly attracts fireflies of lower brightness toward it. Through the interaction of brightness and attractiveness, the fireflies eventually gather at the position of highest brightness, achieving the optimization of the objective function. To facilitate the establishment of the algorithm model, the following definitions are given.

Definition 1. The relative brightness of a firefly is

$$I(r_{ij}) = I_0 e^{-\gamma r_{ij}}, \tag{10}$$

where $I_0$ is the original light intensity, which is related to the objective function value, $\gamma$ is the light absorption coefficient, generally set to the constant 1, and $r_{ij}$ is the Cartesian distance between firefly $i$ and firefly $j$:

$$r_{ij} = \left\| x_i - x_j \right\| = \sqrt{ \sum_{d=1}^{D} \left( x_{i,d} - x_{j,d} \right)^2 }. \tag{11}$$

Definition 2. The attractiveness of a firefly is

$$\beta(r_{ij}) = \beta_0 e^{-\gamma r_{ij}^2}, \tag{12}$$

where $\beta_0$ is the maximum attractiveness, representing the degree of attraction at the location of maximum brightness. Equation (12) expresses the property that the attractiveness of a firefly's fluorescence to other individuals decreases with distance and with the absorption of the propagation medium. $\beta_0$ can be taken as 1 for most application problems. The value of $\gamma$ has a great influence on the performance of the algorithm: theoretically $\gamma \in [0, \infty)$, but in practice $\gamma \in [0.01, 100]$ is usually taken.

Definition 3. The updated position of firefly $i$ moved by firefly $j$ is

$$x_i(t+1) = x_i(t) + \beta(r_{ij}) \left( x_j(t) - x_i(t) \right) + \alpha \varepsilon_i, \tag{13}$$

where $t$ is the iteration number of the algorithm, $x_i$ and $x_j$ are the spatial positions of fireflies $i$ and $j$, $\alpha$ is the step factor, taken as a constant in $[0, 1]$, and $\varepsilon_i$ is a random factor that obeys a uniform distribution on $[-0.5, 0.5]$.

The process of the firefly optimization algorithm is as follows:

(1) Initialize the basic parameters of FA: the firefly population size $n$, maximum attractiveness $\beta_0$, light absorption coefficient $\gamma$, step factor $\alpha$, maximum number of iterations, and search accuracy $\varepsilon$.
(2) Randomly initialize the locations of the fireflies and calculate their objective values, which serve as their maximum brightness $I_0$.
(3) Calculate the relative brightness $I$ and attractiveness $\beta$ of the fireflies in the group by Equations (10) and (12), and determine the direction of each firefly's movement according to the relative brightness.
(4) Update the spatial locations of the fireflies according to Equation (13); the firefly at the best position is randomly perturbed to prevent premature convergence to a local optimum.
(5) Recalculate the brightness of the fireflies at their updated locations.
(6) Go to step (7) when the search accuracy or the maximum number of iterations is reached; otherwise, increase the iteration count by 1 and return to step (3) for the next search.
(7) Output the global extreme point and the optimal individual value.
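The steps above can be sketched as a compact minimizer (so a lower cost means a brighter firefly), using Equations (12) and (13) for attractiveness and movement. The bounds, the decay schedule of the random step, and the parameter values below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def firefly_minimize(f, dim, n=25, beta0=1.0, gamma=1.0, alpha=0.2,
                     max_iter=200, bounds=(-2.0, 2.0), seed=0):
    """Minimal firefly algorithm: lower cost f(x) means higher brightness."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n, dim))         # step (2): random initial positions
    vals = np.array([f(x) for x in X])
    for _ in range(max_iter):                       # step (6): iterate
        for i in range(n):
            for j in range(n):
                if vals[j] < vals[i]:               # firefly j is brighter than i
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)         # attractiveness, Eq. (12)
                    X[i] = (X[i] + beta * (X[j] - X[i])
                            + alpha * (rng.random(dim) - 0.5)) # move, Eq. (13)
                    X[i] = np.clip(X[i], lo, hi)
                    vals[i] = f(X[i])               # step (5): recompute brightness
        alpha *= 0.97                               # shrink random step (assumed schedule)
    best = int(np.argmin(vals))                     # step (7): output the best individual
    return X[best], vals[best]
```

For example, minimizing the 2-D sphere function `lambda x: np.sum(x ** 2)` drives the swarm close to the origin.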

2.2.3. FA-BP Prediction Model

In the BP neural network, the two kinds of trainable parameters, the weight matrices and the thresholds, have significant influences on prediction accuracy. The basic principle of the FA-BP model is to use the strong global search ability of FA to optimize the topology, connection weights, and thresholds of the neural network. Treating the network parameters of the BP neural network as firefly individuals, the fitness of a firefly individual is defined during training so that it increases as the training error of the corresponding network model decreases.

Accordingly, the smaller the training error, the greater the individual fitness value. In the firefly algorithm, a greater fitness value means a greater brightness, indicating that the firefly individual performs better on the optimization problem. By searching for the best individual, i.e., the optimal network parameters, the firefly positions are constantly updated, and the resulting weights and thresholds are substituted into the BP neural network to predict the samples. The specific steps are as follows:

(1) Determine the neural network structure according to the input sample characteristics and output requirements.
(2) Initialize the BP neural network, determine the number of neurons in each layer, and count the weights and thresholds.
(3) Treat the weights and thresholds as firefly individuals and initialize the firefly algorithm parameters.
(4) Run the iterative update process of the firefly algorithm and search for the individual with the best fitness.
(5) Pass the optimal individual back to the BP neural network for training, and predict with the test data.
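To make the encoding concrete: a firefly individual can be a flat vector holding all weights and thresholds, with fitness inversely related to the training error. The packing order and the fitness form `1 / (1 + MSE)` below are assumptions for illustration; the paper states only that fitness grows as training error shrinks.

```python
import numpy as np

def unpack(theta, n_in=5, n_h=8, n_out=1):
    """Split a flat firefly individual into BP weights and thresholds.
    Code length: n_in*n_h + n_h*n_out + n_h + n_out parameters."""
    i = 0
    W1 = theta[i:i + n_in * n_h].reshape(n_h, n_in); i += n_in * n_h
    W2 = theta[i:i + n_h * n_out].reshape(n_out, n_h); i += n_h * n_out
    a = theta[i:i + n_h]; i += n_h                  # hidden-layer thresholds
    b = theta[i:i + n_out]                          # output-layer thresholds
    return W1, W2, a, b

def fitness(theta, X, y):
    """Fitness of one firefly individual: inversely related to the training MSE."""
    W1, W2, a, b = unpack(theta)
    H = 1.0 / (1.0 + np.exp(-(X @ W1.T - a)))       # sigmoid hidden layer
    pred = H @ W2.T - b                             # linear output layer
    return 1.0 / (1.0 + np.mean((pred - y) ** 2))
```

The FA search then maximizes `fitness` over `theta`, and the best `theta` initializes the BP network before gradient training.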

The flowchart of FA-BP prediction model is shown in Figure 2.

2.3. Hybrid VMD-FA-BP Forecasting Model

In this section, the proposed VMD-FA-BP model is established for forecasting the monthly mean of sunspots. The basic structure of the hybrid forecasting method is as follows:

(1) Decomposition. The monthly mean sunspot time series is decomposed by VMD into IMFs with different frequencies.
(2) Component prediction. The data of each IMF component are divided into training and prediction samples, and the input/output samples are normalized. An FA-BP prediction model is then established for each component to obtain its predicted values.
(3) Reconstruction. The predicted values of the IMF components are summed to obtain the final prediction result.
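The component-prediction step can be sketched as follows: each IMF is turned into supervised samples with a sliding window (five past values predicting the next one, matching the 5-8-1 network structure used later) and min-max normalized. The window length and the scaling choice are assumptions consistent with the text.

```python
import numpy as np

def make_samples(series, lag=5):
    """Sliding-window samples: `lag` past values form the input, the next value the target."""
    series = np.asarray(series, dtype=float)
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X, y

def normalize(a):
    """Min-max scale to [0, 1]; return (min, range) so predictions can be rescaled."""
    lo, span = a.min(), a.max() - a.min()
    return (a - lo) / span, (lo, span)
```

After each FA-BP model predicts its normalized IMF, the predictions are rescaled with the stored `(min, range)` pairs and summed.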

The flowchart of VMD-FA-BP model is shown in Figure 3.

3. Data Simulation and Analysis

3.1. Data Decomposition Preprocessing

The monthly mean sunspot time series comes from the official website of the Solar Influences Data Analysis Center (SIDC). The experiment selects the observations from January 1917 to December 2016 as the experimental sample, 1200 data points in total. The time series is shown in Figure 4. Given the randomness of the monthly mean of sunspots, direct prediction would incur large errors, so the data complexity needs to be reduced to improve prediction accuracy. The original sequence is decomposed into multiple subsequences by VMD. The number of subsequences K must be set before the VMD decomposition; preliminary tests showed that, for the monthly mean sunspot sequence, the later subsequences tend to be similar when K > 6, so K = 6 is chosen in this paper. The VMD decomposition of the original data is shown in Figure 5. The same data sample is also decomposed by EMD, as shown in Figure 6.

As shown in Figures 5 and 6, the sunspot time series is decomposed into 6 IMFs by VMD, while EMD decomposition yields 6 IMFs and 1 residual component. From the decomposition results, the IMF3 component obtained by VMD is the most similar to the original signal waveform; its waveform distortion is small, and the variation trend of the high-frequency components is relatively stable, which is favorable for prediction. Although the low-frequency components fluctuate greatly, their prediction error is limited. Because the final VMD-based forecast is the accumulation of each IMF's forecast, these characteristics of the VMD decomposition help to improve the prediction accuracy. The IMFs decomposed by EMD, however, have different characteristics. End-point effects occur during the decomposition: large fluctuations appear at both ends of an IMF component and propagate through the entire component sequence. The resulting errors are more serious and the decomposition results are severely distorted, especially for the low-frequency IMF components. Moreover, the drastic fluctuations of the IMFs lead to large errors in the final EMD-based forecast. Therefore, IMFs decomposed by VMD are more suitable than those decomposed by EMD for building the hybrid forecasting model [28–30].

3.2. Performance Standards of Prediction Accuracy

This study uses three error indicators to verify the validity and practicability of the proposed prediction model: the mean absolute error (MAE), root mean squared error (RMSE), and mean absolute percentage error (MAPE). The smaller these values, the better the prediction accuracy. The formulas are

$$\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \left| \hat{x}_i - x_i \right|, \qquad
\mathrm{RMSE} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( \hat{x}_i - x_i \right)^2 }, \qquad
\mathrm{MAPE} = \frac{1}{N} \sum_{i=1}^{N} \left| \frac{\hat{x}_i - x_i}{x_i} \right|,$$

where $\hat{x}_i$ is the forecast value and $x_i$ is the original value. To avoid errors caused by the variability and randomness of individual runs, this paper averages the errors of 10 prediction runs to obtain the final reported error values.
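The three indicators can be computed directly; here is a minimal NumPy version (MAPE assumes the original series contains no zeros):

```python
import numpy as np

def mae(pred, true):
    """Mean absolute error."""
    return np.mean(np.abs(pred - true))

def rmse(pred, true):
    """Root mean squared error."""
    return np.sqrt(np.mean((pred - true) ** 2))

def mape(pred, true):
    """Mean absolute percentage error (true values must be nonzero)."""
    return np.mean(np.abs((pred - true) / true))
```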

3.3. Data Prediction Processing and Results Analysis

All simulations in this paper are completed in MATLAB 2014a. The first 900 of the 1200 samples are taken as training samples, and the last 300 are used as prediction samples. The BP neural network prediction model adopts a 5-8-1 network structure, and the FA-BP prediction model adopts the same structure; that is, the number of input-layer nodes is 5, the number of hidden-layer nodes is 8, and the number of output-layer nodes is 1. The network therefore has 5 × 8 + 8 × 1 = 48 weights and 8 + 1 = 9 thresholds, so the individual code length of the firefly algorithm is D = 57. The firefly algorithm parameters, namely, the number of fireflies, the maximum attractiveness $\beta_0$, the light absorption coefficient $\gamma$, the step factor $\alpha$, and the maximum number of iterations, are fixed in advance. The same parameter settings and sample data are applied to all model tests in this article to ensure fair and efficient comparisons between the different forecast models.

With the above settings, each IMF obtained from the VMD decomposition is predicted, and the component predictions are reconstructed to obtain the final prediction; the result is shown in Figure 7, where the blue line represents the actual monthly mean of sunspots and the red line represents the predicted value of the hybrid forecasting model. The proposed VMD-FA-BP model fits the original data well and can predict the monthly mean of sunspots very accurately. For comparison, the BP, FA-BP, EMD-BP, and VMD-BP prediction models are applied to the same monthly mean sunspot time series, as shown in Figure 8, with a partial enlargement in Figure 9. The forecasting performance evaluation results are displayed in Table 1 and Figure 10.

Table 1: Error indicators (MAE, RMSE, and MAPE) of the different forecasting models.


As Table 1 and Figure 10 show, the three error indicators of the proposed VMD-FA-BP prediction model are smaller than those of the other prediction models, with MAE = 1.2208, RMSE = 1.7117, and MAPE = 0.0435; its forecasting accuracy is improved by roughly one order of magnitude compared with the other models.

This obvious improvement is chiefly due to the following reasons. First, as an adaptive multiresolution technique, VMD is more robust than EMD in analyzing noisy signals such as sunspot data and reduces the mode mixing present in EMD, making each IMF better suited to the subsequent prediction. Second, the processing time required by VMD, EMD, and the BP neural network is short: for the monthly mean sunspot time series, the VMD and EMD computations take 10.30 s and 12.26 s, respectively, and the BP neural network takes 0.95 s on average for learning and testing, whereas the traditional autoregressive moving average models in [14, 28] take more than 5 min for modeling and forecasting; the proposed approach thus saves considerable time while improving prediction efficiency and accuracy. Third, the firefly algorithm is a feasible and effective swarm intelligence optimization algorithm with high optimization precision and fast convergence; the optimal initial weights and thresholds of the BP neural network are obtained by the firefly algorithm, and applying the FA-BP model to each IMF yields better results than a single BP model. Finally, the predicted values of the IMFs are accumulated to obtain the final predicted values, giving the best prediction accuracy among all the models in Table 1.

A hybrid model benefits modeling and prediction accuracy by exploiting the advantages of each constituent model. The results demonstrate that the proposed combination of variational mode decomposition with a BP neural network optimized by the firefly algorithm can predict the variation trend of the monthly mean sunspot time series well.

4. Conclusions

In this paper, aiming at the problem of forecasting the monthly mean sunspot time series, a hybrid forecasting model based on variational mode decomposition and a BP neural network optimized by the firefly algorithm is proposed. Firstly, a set of intrinsic mode functions (IMFs) is obtained by VMD decomposition of the monthly mean sunspot series. Secondly, a prediction model is established for each IMF to improve the prediction accuracy; to overcome the slow learning speed of the BP neural network and its tendency to fall into local minima, the firefly algorithm is used to obtain the optimal initial weights and thresholds, the network is trained on the training samples, and the test samples are then predicted. Finally, the predicted values of the components are summed to obtain the final prediction result, which is compared with the real values to calculate the error. The experimental results show that the proposed combination method can predict the monthly mean of sunspots, improving prediction accuracy and reducing error compared with the other models. The proposed model therefore offers a good reference for accurately forecasting practical time series such as short-term traffic flow.

Data Availability

The smoothed monthly mean sunspot number time series used in this paper comes from the Solar Influences Data Analysis Center of the Royal Observatory of Belgium, which records sunspot data from 1749 to the present. In particular, the data analysis center significantly revised the sunspot number series on July 1, 2015; the data in this paper are derived from the revised dataset.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 51709228).


References

1. G. Shenshi, W. Zhiqian, and C. Jitai, “The fractal research and predicating on the time series of sunspot relative number,” Applied Mathematics and Mechanics, vol. 20, no. 1, pp. 84–89, 1999.
2. J. Tang and X. Zhang, “Prediction smoothed monthly mean sunspot number based on chaos theory,” Acta Physica Sinica, vol. 61, no. 16, Article ID 169601, 2012.
3. J. Ren, Q. S. Zeng, and R. R. Wei, “Neural network forecasting model for sunspots time series prediction based on phase space reconstruction,” Computer Simulation, vol. 31, no. 1, pp. 395–398, 2014.
4. L. G. Ding, R. S. Lan, Y. Jiang, and J. D. Peng, “Prediction of the smoothed monthly mean sunspot area based on neural network,” Transactions of Atmospheric Sciences, vol. 35, no. 4, pp. 508–512, 2012.
5. H. J. Zhao, J. L. Wang, W. G. Zong, Y. Q. Tang, and G. M. Le, “Prediction of the smoothed monthly mean sunspot numbers by means of radial basis function neural networks,” Chinese Journal of Geophysics, vol. 51, no. 1, pp. 20–24, 2008.
6. X. Z. Guan, X. Huangfu, X. Li, Y. Tong, and L. G. Sun, “Sunspot number prediction based on quantum-behaved particle swarm-based neural network,” Computer and Digital Engineering, vol. 42, no. 10, pp. 1853–1893, 2014.
7. S. Li, Y. Luo, and M. R. Zhang, “Prediction method for chaotic time series of optimized BP neural network based on genetic algorithm,” Computer Engineering and Applications, vol. 47, no. 29, pp. 52–55, 2011.
8. S. Li, L. J. Liu, and Y. L. Xie, “Chaotic prediction for short-term traffic flow of optimized BP neural network based on genetic algorithm,” Control and Decision, vol. 26, no. 10, pp. 1581–1585, 2011.
9. S. L. Xu and R. Q. Niu, “Displacement prediction of Baijiabao landslide based on empirical mode decomposition and long short-term memory neural network in Three Gorges area, China,” Computers and Geosciences, vol. 111, pp. 87–96, 2018.
10. B. Z. Zhu, D. Han, P. Wang, Z. C. Wu, T. Zhang, and Y. M. Wei, “Forecasting carbon price using empirical mode decomposition and evolutionary least squares support vector regression,” Applied Energy, vol. 191, pp. 521–530, 2017.
11. Y. Xiao and F. L. Yin, “Decorrelation EMD: a new method to eliminating mode mixing,” Journal of Vibration and Shock, vol. 34, no. 4, pp. 25–29, 2015.
12. Z. H. Wu and N. E. Huang, “Ensemble empirical mode decomposition: a noise-assisted data analysis method,” Advances in Adaptive Data Analysis, vol. 1, no. 1, pp. 1–41, 2009.
13. K. Dragomiretskiy and D. Zosso, “Variational mode decomposition,” IEEE Transactions on Signal Processing, vol. 62, no. 3, pp. 531–544, 2014.
14. T. L. Sun and G. H. Li, “EEMD and RBF neural network prediction of sunspot monthly mean,” Computer Engineering and Applications, vol. 53, no. 24, pp. 252–262, 2017.
15. G. H. Li, X. Ma, and H. Yang, “A hybrid model for monthly precipitation time series forecasting based on variational mode decomposition with extreme learning machine,” Information, vol. 9, no. 7, p. 177, 2018.
16. S. Lahmiri, “A variational mode decomposition approach for analysis and forecasting of economic and financial time series,” Expert Systems with Applications, vol. 55, pp. 268–273, 2016.
17. S. Lahmiri, “Intraday stock price forecasting based on variational mode decomposition,” Journal of Computational Science, vol. 12, pp. 23–27, 2016.
18. L. Fan, Z. N. Wei, H. J. Li, W. C. Kwork, G. Q. Sun, and Y. H. Sun, “Short-term wind speed interval prediction based on VMD and BA-RVM algorithm,” Electric Power Automation Equipment, vol. 37, no. 1, pp. 93–100, 2017.
19. G. Sun, T. Chen, Z. N. Wei, Y. H. Sun, H. X. Zang, and S. Chen, “A carbon price forecasting model based on variational mode decomposition and spiking neural networks,” Energies, vol. 9, no. 1, p. 54, 2016.
20. H. Haviluddin and A. Rayner, “Daily network traffic prediction based on backpropagation neural network,” Australian Journal of Basic and Applied Sciences, vol. 8, no. 24, pp. 164–169, 2014.
21. D. Wang, Y. Liu, H. Y. Luo, C. Q. Yue, and S. Cheng, “Day-ahead PM2.5 concentration forecasting using WT-VMD based decomposition method and back propagation neural network improved by differential evolution,” International Journal of Environmental Research and Public Health, vol. 14, no. 7, p. 764, 2017.
22. X. S. Yang, “Firefly algorithm, stochastic test functions and design optimisation,” International Journal of Bio-Inspired Computation, vol. 2, no. 2, pp. 78–84, 2010.
23. C. P. Liu and C. M. Ye, “Novel bioinspired swarm intelligence optimization algorithm: firefly algorithm,” Application Research of Computers, vol. 28, no. 9, pp. 3295–3297, 2011.
24. S. Ranganathan, M. S. Kalavathi, and A. R. C. Christober, “Self-adaptive firefly algorithm based multi-objectives for multi-type FACTS placement,” IET Generation Transmission and Distribution, vol. 10, no. 11, pp. 2576–2584, 2016.
25. W. H. Li, S. L. Weng, and S. H. Zhang, “A firefly neural network and its application in bearing fault diagnosis,” Journal of Mechanical Engineering, vol. 51, no. 7, pp. 99–106, 2015.
26. L. Olatomiwa, S. Mekhilef, S. Shamshirband, K. Mohammadi, D. Petkvic, and C. Sudheer, “A support vector machine-firefly algorithm-based model for global solar radiation prediction,” Solar Energy, vol. 115, pp. 632–644, 2015.
27. P. Liu, H. Liu, X. W. Zheng, and Y. H. Ding, “Approach for dynamic group automatic aggregation path planning based on improved FA,” Application Research of Computers, vol. 28, no. 11, pp. 4146–4149, 2011.
28. S. Lahmiri, “Comparing variational and empirical mode decomposition in forecasting day-ahead energy prices,” IEEE Systems Journal, vol. 11, no. 3, pp. 1907–1910, 2017.
29. Y. C. Zhang, K. P. Liu, L. Qin, and X. L. An, “Deterministic and probabilistic interval prediction for short-term wind power generation based on variational mode decomposition and machine learning methods,” Energy Conversion and Management, vol. 112, pp. 208–219, 2016.
30. T. Chakravorti and P. K. Dash, “Multiclass power quality events classification using variational mode decomposition with fast reduced kernel extreme learning machine-based feature selection,” IET Science Measurement and Technology, vol. 12, no. 1, pp. 106–117, 2018.

Copyright © 2018 Guohui Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
