Special Issue: Operations Research in Engineering Problems: Potential Applications and Perspectives
Research Article | Open Access
Zahra Shafiei Chafi, Hossein Afrakhte, "Short-Term Load Forecasting Using Neural Network and Particle Swarm Optimization (PSO) Algorithm", Mathematical Problems in Engineering, vol. 2021, Article ID 5598267, 10 pages, 2021. https://doi.org/10.1155/2021/5598267
Short-Term Load Forecasting Using Neural Network and Particle Swarm Optimization (PSO) Algorithm
Electrical load forecasting plays a key role in power system planning and operation. A variety of techniques have been employed for electrical load forecasting; among them, neural-network-based methods yield lower prediction errors because they adapt well to the hidden characteristics of the consumed load, and they have therefore been widely adopted by researchers. Since the parameters of a neural network have a significant impact on its performance, this paper proposes a short-term electrical load forecasting method in which the particle swarm optimization (PSO) algorithm determines neural network parameters, namely the learning rate and the number of hidden layer neurons, so that the electrical load can be forecast precisely. The neural network with these optimized parameters is then used to predict the short-term electrical load. The method combines a three-layer feedforward neural network trained by the backpropagation algorithm with an improved gbest PSO algorithm, in which the neural network prediction error is defined as the PSO cost function. The proposed approach has been tested on the Iranian power grid using MATLAB. The averages of three error indices, together with graphical results, are used to evaluate the performance of the proposed method. The simulation results demonstrate the method's ability to predict the electrical load accurately.
Load forecasting is a crucial process in the management and operation of power systems and, when performed accurately, can lead to significant cost savings. Moreover, very important decisions with notable economic consequences are made based on the forecasted load.
Load forecasting can be divided into three categories: short-term, mid-term, and long-term forecasting. Because of the vital role of short-term load forecasting in optimizing unit commitment, switching thermal units on and off, spinning reserve control, and buying and selling electricity in interconnected systems, efforts are mainly concentrated on short-term forecasting. Appropriate load prediction has always been one of the main challenges for researchers: if the predicted load is less than its actual value, the required load will not be supplied, and if it is overestimated, it will impose additional costs and waste energy.
Due to their great ability to model nonlinear relationships between inputs and outputs, artificial neural networks are increasingly used in load forecasting [4–7]. These networks are able to extract the implicit relations between input variables by learning from training data. The first reports of neural network applications in load forecasting were published in the late 1980s and early 1990s, and their number has steadily increased since then.
Optimizing the neural network architecture, including determining the number of input variables, input nodes, and hidden neurons to enhance prediction performance, is an important issue in intelligent systems [10–13]. In recent years, many intelligent methods such as PSO have been proposed to improve the training and architecture of artificial neural networks for short-term electrical load forecasting [8, 14, 15]. The results reflect the capability of these methods compared to earlier ones.
One hybrid method combines a deep neural network with the empirical mode decomposition (EMD) technique: EMD decomposes the load time series, and the deep neural network performs the short-term forecast. Another work presents a new ensemble residual network model, in which a recurrent-neural-network-like structure is built first and a modified residual network is then applied to obtain the final outputs. A further study proposed a deep learning framework combining a convolutional neural network (CNN) with long short-term memory (LSTM), where the CNN layers extract features from the input data and the LSTM layers perform sequence learning. In another approach, a multifactorial framework for short-term load forecasting is proposed: a candidate feature set is first chosen from the load data; partial mutual information then removes redundant and irrelevant features from the candidate set; an artificial neural network optimized by a genetic algorithm is trained on this set; and the optimized trained network finally performs the short-term load forecast. Other studies investigate applying sequence-to-sequence recurrent neural networks to short-term load forecasting, or propose a full wavelet neural network method in which the load profile and various features are decomposed using the full wavelet packet transform, neural networks are trained on these features, and the outputs of the trained networks constitute the forecasted load. Further work presents a data mining and artificial neural network approach optimized by a multiobjective grasshopper algorithm and phase space reconstruction, and a short-term load forecasting approach that captures variations in building operation regardless of building type and location, exploring nine different hybrids of recurrent neural networks and clustering methods.
Other work discusses and compares the applications and features of the support vector machine, random forest regression, and LSTM neural network methods, and proposes a fusion forecasting approach and a data preprocessing technique that integrate these methods' advantages. Another study treats the past load data as a feature together with the time series characteristics of the load data, adopting a multi-temporal-spatial-scale temporal convolutional network to forecast the load. Six clustering techniques involving different combinations of Kalman filtering (KF), wavelet neural network (WNN), and artificial neural network (ANN) schemes have also been presented, as has a hybrid method based on the Elman neural network (ENN) and PSO. One study proposes a genetic-algorithm-based backpropagation neural network (GABPNN) that accounts for data loss, further integrating the GABPNN results with a particle swarm optimization-support vector regression (PSO-SVR) algorithm for better accuracy. In addition, a combined ultra-short-term load forecasting model for industrial power users has been introduced: it combines the cubature Kalman filter (CKF) prediction model, which performs well in nonlinear dynamic systems, with the least squares support vector machine (LS-SVM) prediction model, which performs well on small-scale data, and uses a grey neural network to integrate the two algorithms, further improving the accuracy of ultra-short-term load forecasting.
The rest of this paper is organized as follows. Brief overviews of neural networks and the PSO algorithm are presented in Sections 2 and 3. Section 4 describes the proposed method, and Section 5 explains the methodology applied to predict the load. Section 6 presents the results, and Section 7 concludes the paper.
2. Neural Network
An artificial neural network is inspired by the way information is processed in human biological systems and consists of an interconnected group of elements called neurons [32–35]. Various architectures are used in neural networks [36–41], including feedforward networks and recurrent networks; of these, feedforward neural networks have become the most popular. In this type of network, the input signal propagates from the input layer to the output layer through the hidden layers, with the outputs of one layer serving as the inputs of the next, as depicted in Figure 1. This figure shows a neural network with R inputs and S outputs and three layers: an input layer, a hidden layer, and an output layer. The output of the input layer is the input of the hidden layer, and the output of the hidden layer is the input of the output layer. Most published papers use this architecture for short-term electrical load forecasting.
An artificial neural network with an input layer, one or more hidden layers, and an output layer is called a multilayer perceptron network. Each layer consists of several neurons, each of which is connected to the adjacent layers through weights. Weights and biases are the two adjustable parameter sets in neural networks, and tuning them is done in a process called training. Neural network training amounts to minimizing a cost function [44–46]. The best-known training algorithm is backpropagation, in which the cost function is the mean squared error. To minimize this error, backpropagation modifies the weights and biases according to the errors propagated back through the network.
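To make the forward/backward passes described above concrete, the following is a minimal NumPy sketch of a one-hidden-layer feedforward network trained by backpropagation on the mean squared error. It is an illustrative toy, not the paper's MATLAB implementation: the layer sizes, learning rate, and tanh hidden activation are assumptions for the example.

```python
import numpy as np

def train_mlp(X, y, n_hidden=10, lr=0.1, epochs=3000, seed=0):
    """One-hidden-layer feedforward net (tanh hidden, linear output)
    trained by plain batch backpropagation on the mean squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, y.shape[1])); b2 = np.zeros(y.shape[1])
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # forward: hidden layer
        out = h @ W2 + b2                   # forward: linear output layer
        err = out - y                       # output error
        dW2 = h.T @ err / len(X)            # backward: output-layer gradient
        db2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h**2)    # error through tanh derivative
        dW1 = X.T @ dh / len(X)             # backward: hidden-layer gradient
        db1 = dh.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1      # gradient-descent step
        W2 -= lr * dW2; b2 -= lr * db2
    def predict(Xn):
        return np.tanh(Xn @ W1 + b1) @ W2 + b2
    mse = float(np.mean((predict(X) - y) ** 2))
    return predict, mse
```

For instance, `train_mlp` can fit a simple one-dimensional function, with the returned MSE indicating how far training has progressed.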
3. Particle Swarm Optimization (PSO) Algorithm
Swarm intelligence algorithms can be classified into several classical methods, such as ant colony optimization (ACO) [47, 48], PSO, differential evolution (DE) [50, 51], and differential search (DS), and more recent methods and their advanced versions, such as the Harris hawks optimizer (HHO) [53–55], slime mould algorithm (SMA), fruit fly optimizer (FFO) [57, 58], moth-flame optimizer (MFO) [59–61], whale optimization algorithm (WOA) [62–64], grey wolf optimizer (GWO) [65, 66], bacterial foraging optimization (BFO), and grasshopper optimization algorithm (GOA). The aim of optimization is to determine suitable values for one or more parameters, among all possible values, so as to minimize or maximize a function; it can be applied to find feasible answers in many real-life applications such as deployment optimization, adaptive control [35, 70–72], computer vision, transportation networks, image and video processing [75–80], decision-making [81–83], power allocation systems, sensor fusion, monitoring systems [86–89], and deep learning models [19, 90–93]. The PSO algorithm is a social interaction model between independent particles that use their social knowledge to find the minimum or maximum of a function. Kennedy and Eberhart first proposed this algorithm in 1995 [94, 95].
PSO is an iterative optimization method in which a population of particles is first generated for the search process. These particles then travel through a multidimensional search space. Two quantities, position and velocity, are updated for each particle in every considered dimension. Each particle alters its position according to the best position it has ever achieved and the best position achieved by any particle so far.
The benefits of PSO over other metaheuristic approaches are its computational feasibility and effectiveness, along with easy implementation and consistent performance. Its main advantage over other optimization methods is its ability to achieve fast convergence in many complicated optimization problems. In addition, PSO is attractively simple, involving fewer mathematical equations and fewer implementation parameters. PSO has many key features that have attracted researchers to use it in applications where traditional optimization algorithms might fail. We have the following examples:
- Only a fitness function measuring the "quality" of a solution is required, instead of complex mathematical operations like gradient, Hessian, or matrix inversion. This reduces the computational complexity and relieves some of the restrictions usually imposed on the objective function, such as differentiability, continuity, or convexity.
- As a population-based algorithm, it is less sensitive to the choice of initial solution.
- It easily incorporates other optimization tools to form hybrids.
- It can escape local minima, since it follows probabilistic transition rules.
More PSO advantages stand out when it is compared to other evolutionary methods such as GA, HHO, and GWO:
- It is easily programmed and modified with basic mathematical and logic operations.
- It is inexpensive in terms of computation time and memory.
- It requires less parameter tuning.
- It works directly with real-valued numbers, which removes the need for the binary conversion of the classical canonical genetic algorithm.
Different PSO variants are known; among them, the gbest algorithm is the most popular. In this approach, the whole population is considered a single neighborhood for each particle during the experience-gaining process. To guide the search, the best particle shares its coordinate information with the other particles.
In this algorithm, the velocity of the ith particle, $v_i^{k+1}$, is updated according to equation (1) [95, 99]:

$$v_i^{k+1} = w\,v_i^{k} + c_1 r_1\left(p_i^{best} - x_i^{k}\right) + c_2 r_2\left(g^{best} - x_i^{k}\right) \tag{1}$$

where $r_1$ and $r_2$ are two random numbers between zero and one, $x_i^{k}$ and $v_i^{k}$ are the position and velocity of the ith particle in the kth iteration, respectively, $g^{best}$ is the best position experienced by the whole swarm so far, and $p_i^{best}$ represents the best personal experience of the particle. Also, $w$ is called the inertia constant, which carries a fraction of the previous particle velocity into the new velocity calculation. $c_1$ and $c_2$ are constants called the personal learning factor and the social learning factor, respectively.
Updating the particle position is done according to the following equation:

$$x_i^{k+1} = x_i^{k} + v_i^{k+1} \tag{2}$$

where $x_i^{k+1}$ is the new particle position, $x_i^{k}$ is the previous particle position, and $v_i^{k+1}$ is the new particle velocity obtained from (1). The velocity and position of each particle are updated according to (1) and (2) until all particles have moved. Then the next iteration begins, and this procedure continues until the best solution is found.
Older types of PSO algorithm had some undesirable dynamic characteristics, including the velocity restrictions needed to control the particle paths. In this paper, by limiting the factors according to (3) and (4), the dynamic characteristics of the particle swarm can be controlled and a balance between local search and global search is achieved. According to (3), the factors of the PSO algorithm are considered as in (4):

$$\chi = \frac{2}{\left|2 - \varphi - \sqrt{\varphi^2 - 4\varphi}\right|}, \qquad \varphi = \varphi_1 + \varphi_2 \tag{3}$$

$$v_i^{k+1} = \chi\left[v_i^{k} + \varphi_1 r_1\left(p_i^{best} - x_i^{k}\right) + \varphi_2 r_2\left(g^{best} - x_i^{k}\right)\right] \tag{4}$$

where $\varphi_1$ and $\varphi_2$ are positive random numbers drawn from a uniform distribution, whose sum $\varphi$ should be greater than 4, and $\chi$ takes a value between zero and one. $\chi$ is known as the restriction (constriction) factor.
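A compact NumPy sketch of the constricted gbest PSO described by (3) and (4) follows. With $\varphi_1 = \varphi_2 = 2.05$ the restriction factor evaluates to about 0.7298; the population size, iteration count, and bounds are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pso_constriction(cost, dim, n_particles=30, iters=200,
                     lb=-5.0, ub=5.0, phi1=2.05, phi2=2.05, seed=0):
    """gbest PSO with the constriction factor of equations (3) and (4)."""
    phi = phi1 + phi2                                  # must exceed 4
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))        # particle positions
    v = np.zeros((n_particles, dim))                   # particle velocities
    pbest = x.copy()                                   # personal bests
    pbest_f = np.array([cost(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                 # global best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # equation (4): constricted velocity update
        v = chi * (v + phi1 * r1 * (pbest - x) + phi2 * r2 * (g - x))
        x = x + v                                      # equation (2)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())
```

On a simple test function such as the sphere function, this loop converges rapidly to the minimum, which is the fast-convergence behavior noted above.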
4. Proposed Method
In this paper, in order to predict the short-term electrical load, a feedforward neural network trained by the backpropagation algorithm has been chosen. This network has one hidden layer, and the number of neurons in that layer is treated as an optimization parameter. In designing the neural network architecture, the number of hidden layer neurons strongly affects network performance, so it must be chosen carefully: if it is too small, the network struggles during training, and if it is too large, the network overfits. The network learning rate is taken as the other optimization parameter. Suitable values for these two parameters are found using the improved PSO algorithm introduced in Section 3, whose parameters are selected according to (3) and (4) to overcome the dynamic problems of traditional PSO. The neural network's load forecasting error is considered the cost function and, as declared before, the learning rate and the number of hidden layer neurons are the optimization variables for the PSO algorithm. By minimizing the neural network error, which amounts to minimizing the load forecasting error, the PSO algorithm finds the best values for the learning rate and the number of hidden layer neurons. The neural network with these optimized parameters is then used to predict the load.
The flowchart of the proposed method is shown in Figure 2. At the beginning, the preprocessed input data are fed into the three-layer backpropagation neural network, whose learning rate and number of hidden layer neurons are the parameters optimized by the PSO algorithm. PSO searches the space for the best values for the intended neural network; after the simulation, the two parameter values that minimize the neural network prediction error are obtained. Next, the trained, optimized neural network is used for electrical load forecasting. This network predicts the load day by day and, for each day, the three indices introduced in Section 6 are calculated. The overall values of these indices are obtained by averaging over all predicted days.
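The outer optimization loop can be sketched as follows. This is an illustrative Python sketch, not the paper's MATLAB code: the real cost function would train the network and return its forecast error, so a hypothetical smooth surrogate with a minimum near a learning rate of 0.05 and 20 hidden neurons stands in for it here, and the search bounds are assumptions.

```python
import numpy as np

def surrogate_error(lr, n_hidden):
    # Hypothetical stand-in for "train the network and return its forecast
    # error"; its minimum sits near lr = 0.05 and 20 hidden neurons.
    return (np.log10(lr) + 1.3) ** 2 + 0.002 * (n_hidden - 20) ** 2

def pso_tune(cost, n_particles=15, iters=60, seed=1):
    """gbest PSO over the pair (learning rate, hidden-neuron count)."""
    rng = np.random.default_rng(seed)
    lo = np.array([1e-4, 2.0])         # assumed bounds for lr and neurons
    hi = np.array([1.0, 60.0])
    x = rng.uniform(lo, hi, (n_particles, 2))
    v = np.zeros_like(x)
    def f(p):                          # neuron count is rounded to an integer
        return cost(p[0], int(round(p[1])))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    chi, phi1, phi2 = 0.729843, 2.05, 2.05
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = chi * (v + phi1 * r1 * (pbest - x) + phi2 * r2 * (g - x))
        x = np.clip(x + v, lo, hi)     # keep particles inside the bounds
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g[0], int(round(g[1]))      # best (learning rate, hidden neurons)
```

Swapping `surrogate_error` for a function that trains the forecasting network and returns its prediction error reproduces the structure of the proposed method.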
5. Methodology
This section provides more details about the method presented in the previous section. As stated in Section 4, a neural network with one hidden layer is used to forecast the electrical load. First, the total hourly load consumption data of Iran's power grid was extracted, and the data for 1093 days (22 March 2010 to 18 March 2013) were selected for study. The simulations are performed in the MATLAB environment.
In order to improve the performance of the neural network and prevent the neuron saturation phenomenon, all data used in the neural network are normalized using the following formula:
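The formula itself does not appear in the text; a common choice consistent with preventing saturation of tansig neurons is min-max normalization, which is assumed here:

```latex
x_{norm} = \frac{x - x_{\min}}{x_{\max} - x_{\min}}
```

where $x_{\min}$ and $x_{\max}$ are the minimum and maximum values of the corresponding input series.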
The first 900 of the 1093 studied days are used as training data, and the remaining 193 are used as neural network test data. The number of neurons in the output layer is 24 and, due to the crucial role of the number of hidden layer neurons, that number is treated as an optimization parameter. The transfer function for the hidden and output layers is tansig, and the training algorithm is Levenberg-Marquardt [100, 102]. The network learning rate is the other optimization parameter.
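As a small illustration of this split, the hourly series can be reshaped into day-rows of 24 values and divided into 900 training days and 193 test days; the reshape assumes the series length is a multiple of 24, and the helper name is illustrative.

```python
import numpy as np

def split_days(hourly_load, n_train=900):
    """Reshape an hourly load series into day-rows of 24 values and
    split it into training and test day matrices, as in the paper."""
    days = np.asarray(hourly_load, dtype=float).reshape(-1, 24)
    return days[:n_train], days[n_train:]
```

For the 1093-day series studied here, this yields a 900 x 24 training matrix and a 193 x 24 test matrix, matching the 24-neuron output layer (one output per hour of the day).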
The neural network's load forecasting error is considered the cost function and, as declared before, two network variables, the learning rate and the number of hidden layer neurons, are the optimization variables for the PSO algorithm. The improved type of PSO is used, with parameters as in (3) and (4). The optimized value of $\varphi$ is taken as 4.2 and $\chi$ as 0.729843.
6. Simulation Results
This section demonstrates the effectiveness of the proposed method, which has been simulated in MATLAB. The computing system has a Core i5 CPU at 1.6 GHz and 4 GB of memory.
The chosen test data are used to evaluate the network's performance; results for several days are shown in Figures 3 through 8, which plot the consumed load against the hour of the day. In each figure, the solid curve is the actual load and the dashed curve is the prediction. It can be seen that Iran's load profile has two peaks: one appears in the early afternoon, around 3 p.m., and the other in the evening, around 8 p.m. Figures 3, 5, 6, and 7 follow nearly the same load pattern, while in Figures 4 and 8 the first peak appears at a lower power level. The pattern is influenced by the day of the week, special religious ceremonies, certain TV programs, and so on. In all cases, the maximum prediction error is below 1000 MW.
The obtained results clearly show the good performance of the introduced approach.
For further evaluation of the forecasting model, the MSE, MAPE, and MAE indices defined in Table 1 are calculated; the average results over all test data are summarized in Table 2. In Table 1, N is the number of test samples, ym(i) is the actual load of the ith sample, and yp(i) is its predicted load.
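Since Table 1 is not reproduced here, the following sketch assumes the standard definitions of the three indices, with MAPE expressed as a percentage.

```python
import numpy as np

def forecast_metrics(y_actual, y_pred):
    """MSE, MAPE (percent), and MAE between actual and predicted loads."""
    y_actual = np.asarray(y_actual, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_actual - y_pred                                # forecast error
    mse = float(np.mean(err ** 2))                         # mean squared error
    mape = float(np.mean(np.abs(err / y_actual)) * 100.0)  # mean abs. % error
    mae = float(np.mean(np.abs(err)))                      # mean abs. error
    return mse, mape, mae
```

Averaging these per-day values over all 193 test days gives the summary figures of Table 2.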
Although these results are acceptable, only historical load data is used in this paper; incorporating other factors that influence load behaviour could further improve the approach's performance.
7. Conclusion
Electrical load forecasting affects power system operation and planning: the correct operation of the power system depends on the precision of this prediction. The behaviour of the power system, especially of its generation units at small or large scale, is also affected by this prediction, and its deviation from the actual value can impose additional costs on the system. Numerous load forecasting methods have been proposed, among them neural-network-based methods. Because of the nonlinear and complex relations between load pattern changes and the parameters affecting them, and neural networks' ability to discover such relations, researchers have favoured these methods over others. Meanwhile, several neural network parameters have an obvious effect on performance, so algorithms such as PSO can help tune them. This paper proposes an approach for electrical load forecasting using the PSO algorithm and a neural network trained with backpropagation. First, the PSO algorithm tunes some neural network parameters to obtain an optimized, appropriate model; then the neural network with these parameters performs short-term electrical load forecasting. The simulation results indicate the precision and power of the proposed method. For future work, we will develop a model based on deep learning techniques [103, 104] and fuzzy logic [105, 106]. In addition, parameters beyond the load history, such as temperature and humidity, can be used to improve prediction accuracy.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
- J. Wang, S. Zhu, W. Zhang, and H. Lu, “Combined modeling for electric load forecasting with adaptive particle swarm optimization,” Energy, vol. 35, no. 4, pp. 1671–1678, 2010.
- C.-N. Ko and C.-M. Lee, “Short-term load forecasting using SVR (support vector regression)-based radial basis function neural network with dual extended Kalman filter,” Energy, vol. 49, pp. 413–422, 2013.
- N. Kandil, R. Wamkeue, M. Saad, and S. Georges, “An efficient approach for short term load forecasting using artificial neural networks,” International Journal of Electrical Power and Energy Systems, vol. 28, no. 8, pp. 525–530, 2006.
- Q. Zhang, Y. Ma, G. Li, J. Ma, and J. Ding, “Short-term load forecasting based on frequency domain decomposition and deep learning,” Mathematical Problems in Engineering, vol. 2020, Article ID 7240320, 9 pages, 2020.
- J. Huang, Y. Tang, and S. Chen, “Energy demand forecasting: combining cointegration analysis and artificial intelligence algorithm,” Mathematical Problems in Engineering, vol. 2018, Article ID 5194810, 13 pages, 2018.
- S. Zheng, Q. Zhong, L. Peng, and X. Chai, “A simple method of residential electricity load forecasting by improved Bayesian neural networks,” Mathematical Problems in Engineering, vol. 2018, Article ID 4276176, 16 pages, 2018.
- L. Wu, C. Kong, X. Hao, and W. Chen, “A short-term load forecasting method based on GRU-CNN hybrid neural network model,” Mathematical Problems in Engineering, vol. 2020, Article ID 1428104, 9 pages, 2020.
- H. Shayeghi, H. Shayanfar, and G. Azimi, “A hybrid particle swarm optimization back propagation algorithm for short term load forecasting,” International Journal on Technical and Physical Problems of Engineering (IJTPE), vol. 4, no. 2, pp. 12–22, 2010.
- H. S. Hippert, C. E. Pedreira, and R. C. Souza, “Neural networks for short-term load forecasting: a review and evaluation,” IEEE Transactions on Power Systems, vol. 16, no. 1, pp. 44–55, 2001.
- H.-L. Chen, G. Wang, C. Ma, Z.-N. Cai, W.-B. Liu, and S.-J. Wang, “An efficient hybrid kernel extreme learning machine approach for early diagnosis of Parkinson's disease,” Neurocomputing, vol. 184, pp. 131–144, 2016.
- L. Hu, G. Hong, J. Ma, X. Wang, and H. Chen, “An efficient machine learning approach for diagnosis of paraquat-poisoned patients,” Computers in Biology and Medicine, vol. 59, pp. 116–124, 2015.
- J. Xia, H. Chen, Q. Li et al., “Ultrasound-based differentiation of malignant and benign thyroid Nodules: an extreme learning machine approach,” Computer Methods and Programs in Biomedicine, vol. 147, pp. 37–49, 2017.
- C. Li, L. Hou, B. Y. Sharma et al., “Developing a new intelligent system for the diagnosis of tuberculous pleural effusion,” Computer Methods and Programs in Biomedicine, vol. 153, pp. 211–225, 2018.
- X. Zheng, X. Ran, and M. Cai, “Short-term load forecasting of power system based on neural network intelligent algorithm,” IEEE Access, 2020.
- Y. K. Semero, J. Zhang, and D. Zheng, “EMD–PSO–ANFIS-based hybrid approach for short-term load forecasting in microgrids,” IET Generation, Transmission and Distribution, vol. 14, no. 3, pp. 470–475, 2019.
- Z. Kong, C. Zhang, H. Lv, F. Xiong, and Z. Fu, “Multimodal feature extraction and fusion deep neural networks for short-term load forecasting,” IEEE Access, vol. 8, pp. 185373–185383, 2020.
- Q. Xu, X. Yang, and X. Huang, “Ensemble residual networks for short-term load forecasting,” IEEE Access, vol. 8, pp. 64750–64759, 2020.
- M. Alhussein, K. Aurangzeb, and S. I. Haider, “Hybrid CNN-LSTM model for short-term individual household load forecasting,” IEEE Access, vol. 8, pp. 180544–180557, 2020.
- H. Chen, A. Chen, L. Xu et al., “A deep learning CNN architecture applied in smart near-infrared analysis of water pollution for agricultural irrigation resources,” Agricultural Water Management, vol. 240, Article ID 106303, 2020.
- B. Wang, L. Zhang, H. Ma, H. Wang, and S. Wan, “Parallel LSTM-based regional integrated energy system multienergy source-load information interactive energy prediction,” Complexity, vol. 2019, Article ID 7414318, 13 pages, 2019.
- J. Zhang and B. Liu, “A review on the recent developments of sequence-based protein feature extraction methods,” Current Bioinformatics, vol. 14, no. 3, pp. 190–199, 2019.
- Y. Gao, Y. Fang, H. Dong, and Y. Kong, “A multifactorial framework for short-term load forecasting system as well as the jinan's case study,” IEEE Access, vol. 8, pp. 203086–203096, 2020.
- E. Skomski, J.-Y. Lee, W. Kim, V. Chandan, S. Katipamula, and B. Hutchinson, “Sequence-to-sequence neural networks for short-term electrical load forecasting in commercial office buildings,” Energy and Buildings, vol. 226, Article ID 110350, 2020.
- M. El-Hendawi and Z. Wang, “An ensemble method of full wavelet packet transform and neural network for short term electrical load forecasting,” Electric Power Systems Research, vol. 182, Article ID 106265, 2020.
- C. Li, “Designing a short-term load forecasting model in the urban smart grid system,” Applied Energy, vol. 266, Article ID 114850, 2020.
- G. Chitalia, M. Pipattanasomporn, V. Garg, and S. Rahman, “Robust short-term electrical load forecasting framework for commercial buildings using deep recurrent neural networks,” Applied Energy, vol. 278, Article ID 115410, 2020.
- W. Guo, L. Che, M. Shahidehpour, and X. Wan, “Machine-Learning based methods in short-term load forecasting,” The Electricity Journal, vol. 34, no. 1, Article ID 106884, 2021.
- L. Yin and J. Xie, “Multi-temporal-spatial-scale temporal convolution network for short-term load forecasting of power systems,” Applied Energy, vol. 283, Article ID 116328, 2020.
- H. H. H. Aly, “A proposed intelligent short-term load forecasting hybrid models of ANN, WNN and KF based on clustering techniques for smart grid,” Electric Power Systems Research, vol. 182, Article ID 106191, 2020.
- K. Xie, H. Yi, G. Hu, L. Li, and Z. Fan, “Short-term power load forecasting based on Elman neural network with particle swarm optimization,” Neurocomputing, vol. 416, pp. 136–142, 2020.
- H. Jiang, A. Wu, B. Wang, P. Xu, and G. Yao, “Industrial ultra-short-term load forecasting with data completion,” IEEE Access, vol. 8, pp. 158928–158940, 2020.
- E. Banda and K. A. Folly, “Short term load forecasting based on hybrid ANN and PSO,” in Proceedings of the International Conference in Swarm Intelligence, pp. 98–106, Springer, Beijing, China, June 2015.
- L. Ding, S. Li, H. Gao, C. Chen, and Z. Deng, “Adaptive partial reinforcement learning neural network-based tracking control for wheeled mobile robotic systems,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 50, no. 7, pp. 2512–2523, 2018.
- S. Wang, Y. Zhao, J. Li et al., “Neurostructural correlates of hope: dispositional hope mediates the impact of the SMA gray matter volume on subjective well-being in late adolescence,” Social Cognitive and Affective Neuroscience, vol. 15, no. 4, pp. 395–404, 2020.
- J. Wang, P. Zhu, B. He, G. Deng, C. Zhang, and X. Huang, “An adaptive neural sliding mode control with ESO for uncertain nonlinear systems,” International Journal of Control, Automation and Systems, pp. 1–11, 2020.
- X. Zhang, J. Wang, T. Wang, R. Jiang, J. Xu, and L. Zhao, “Robust feature learning for adversarial defense via hierarchical feature alignment,” Information Sciences, vol. 560, pp. 256–270, 2020.
- X. Zhang, T. Wang, W. Luo, and P. Huang, “Multi-level fusion and attention-guided CNN for image dehazing,” IEEE Transactions on Circuits and Systems for Video Technology, p. 1, 2020.
- X. Zhang, M. Fan, D. Wang, P. Zhou, and D. Tao, “Top-k feature selection framework using robust 0-1 integer programming,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–15, 2020.
- X. Zhang, D. Wang, Z. Zhou, and Y. Ma, “Robust low-rank tensor recovery with rectification and alignment,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 1, pp. 238–255, 2019.
- X. Zhang, R. Jiang, T. Wang, and J. Wang, “Recursive neural network for video deblurring,” IEEE Transactions on Circuits and Systems for Video Technology, p. 1, 2020.
- X. Zhang, T. Wang, J. Wang, G. Tang, and L. Zhao, “Pyramid channel-based feature attention network for image dehazing,” Computer Vision and Image Understanding, vol. 197-198, Article ID 103003.
- R. ZhiChao, Y. Qiang, W. Haiyan, C. Chao, and L. Yuan, “Power load forecasting in the spring festival based on feedforward neural network model,” in Proceedings of the 2017 3rd IEEE International Conference on Computer and Communications (ICCC), pp. 2855–2858, IEEE, Chengdu, China, December 2017.
- M. Hagan, H. Demuth, M. Beale, and O. De Jesús, Neural Network Design, vol. 20, Pws Pub, Boston, MA, USA, 1996.
- A. Baliyan, K. Gaurav, and S. K. Mishra, “A review of short term load forecasting using artificial neural network models,” Procedia Computer Science, vol. 48, pp. 121–125, 2015.
- X.-F. Wang, P. Gao, Y.-F. Liu, H.-F. Li, and F. Lu, “Predicting thermophilic proteins by machine learning,” Current Bioinformatics, vol. 15, no. 5, pp. 493–502, 2020.
- L. Ding, S. Li, H. Gao, Y.-J. Liu, L. Huang, and Z. Deng, “Adaptive neural network-based finite-time online optimal tracking control of the nonlinear system with dead zone,” IEEE Transactions on Cybernetics, vol. 51, no. 1, pp. 382–392, 2021.
- X. Zhao, D. Li, B. Yang, C. Ma, Y. Zhu, and H. Chen, “Feature selection based on improved ant colony optimization for online detection of foreign fiber in cotton,” Applied Soft Computing, vol. 24, pp. 585–596, 2014.
- D. Zhao, “Chaotic random spare ant colony optimization for multi-threshold image segmentation of 2D Kapur entropy,” Knowledge-Based Systems, Article ID 106510, 2020.
- B. Bai, Z. Guo, C. Zhou, W. Zhang, and J. Zhang, “Application of adaptive reliability importance sampling-based extended domain PSO on single mode failure in reliability engineering,” Information Sciences, vol. 546, pp. 42–59, 2021.
- G. Sun, C. Li, and L. Deng, “An adaptive regeneration framework based on search space adjustment for differential evolution,” Neural Computing and Applications, pp. 1–17, 2021.
- G. Sun, B. Yang, Z. Yang, and G. Xu, “An adaptive differential evolution with combined strategy for global numerical optimization,” Soft Computing, pp. 1–20, 2019.
- J. Liu, C. Wu, G. Wu, and X. Wang, “A novel differential search algorithm and applications for structure design,” Applied Mathematics and Computation, vol. 268, pp. 246–269, 2015.
- Y. Zhang, R. Liu, X. Wang, H. Chen, and C. Li, “Boosted binary Harris hawks optimizer and feature selection,” Engineering with Computers, pp. 1–30, 2020.
- H. Chen, A. A. Heidari, H. Chen, M. Wang, Z. Pan, and A. H. Gandomi, “Multi-population differential evolution-assisted Harris hawks optimization: framework and case studies,” Future Generation Computer Systems, vol. 111, pp. 175–198, 2020.
- N. A. Golilarz, H. Gao, and H. Demirel, “Satellite image de-noising with Harris hawks metaheuristic optimization algorithm and improved adaptive generalized Gaussian distribution threshold function,” IEEE Access, vol. 7, pp. 57459–57468, 2019.
- Y. Zhang, “Towards augmented kernel extreme learning models for bankruptcy prediction: algorithmic behavior and comprehensive analysis,” Neurocomputing, vol. 430, pp. 185–212, 2020.
- H. Yu, “Dynamic Gaussian bare-bones fruit fly optimizers with abandonment mechanism: method and analysis,” Engineering with Computers, pp. 1–29, 2020.
- L. Shen, H. Chen, Z. Yu et al., “Evolving support vector machines using fruit fly optimization for medical data classification,” Knowledge-Based Systems, vol. 96, pp. 61–75, 2016.
- W. Shan, Z. Qiao, A. A. Heidari, H. Chen, H. Turabieh, and Y. Teng, “Double adaptive weights for stabilization of moth flame optimizer: balance analysis, engineering cases, and medical diagnosis,” Knowledge-Based Systems, vol. 214, Article ID 106728, 2020.
- Y. Xu, H. Chen, J. Luo, Q. Zhang, S. Jiao, and X. Zhang, “Enhanced Moth-flame optimizer with mutation strategy for global optimization,” Information Sciences, vol. 492, pp. 181–203, 2019.
- M. Wang, H. Chen, B. Yang et al., “Toward an optimal kernel extreme learning machine using a chaotic moth-flame optimization strategy with applications in medical diagnoses,” Neurocomputing, vol. 267, pp. 69–84, 2017.
- J. Tu, H. Chen, J. Liu et al., “Evolutionary biogeography-based whale optimization methods with communication structure: towards measuring the balance,” Knowledge-Based Systems, vol. 212, Article ID 106642, 2021.
- M. Wang and H. Chen, “Chaotic multi-swarm whale optimizer boosted support vector machine for medical diagnosis,” Applied Soft Computing Journal, vol. 88, Article ID 105946, 2020.
- Y. Cao, Y. Li, G. Zhang, K. Jermsittiparsert, and M. Nasseri, “An efficient terminal voltage control for PEMFC based on an improved version of whale optimization algorithm,” Energy Reports, vol. 6, pp. 530–542, 2020.
- J. Hu, H. Chen, A. A. Heidari et al., “Orthogonal learning covariance matrix for defects of grey wolf optimizer: insights, balance, diversity, and feature selection,” Knowledge-Based Systems, vol. 213, Article ID 106684, 2021.
- X. Zhao, X. Zhang, Z. Cai et al., “Chaos enhanced grey wolf optimization wrapped ELM for diagnosis of paraquat-poisoned patients,” Computational Biology and Chemistry, vol. 78, pp. 481–490, 2019.
- X. Xu and H.-L. Chen, “Adaptive computational chemotaxis based on field in bacterial foraging optimization,” Soft Computing, vol. 18, no. 4, pp. 797–807, 2014.
- C. Yu, “SGOA: annealing-behaved grasshopper optimizer for global tasks,” Engineering with Computers, pp. 1–28, 2021.
- B. Cao, J. Zhao, Y. Gu, S. Fan, and P. Yang, “Security-aware industrial wireless sensor network deployment optimization,” IEEE Transactions on Industrial Informatics, vol. 16, no. 8, pp. 5309–5316, 2019.
- Z. Chen, J. Wang, K. Ma, X. Huang, and T. Wang, “Fuzzy adaptive two-bits-triggered control for nonlinear uncertain system with input saturation and output constraint,” International Journal of Adaptive Control and Signal Processing, vol. 34, no. 4, pp. 543–559, 2020.
- J. Wang, Y. Huang, T. Wang, C. Zhang, and Y. h. Liu, “Fuzzy finite-time stable compensation control for a building structural vibration system with actuator failures,” Applied Soft Computing, vol. 93, Article ID 106372, 2020.
- Y. Huang, J. Wang, F. Wang, and B. He, “Event-triggered adaptive finite-time tracking control for full state constraints nonlinear systems with parameter uncertainties and given transient performance,” ISA Transactions, vol. 108, pp. 131–143, 2021.
- S. Xu, J. Wang, W. Shou, T. Ngo, A. Sadick, and X. Wang, “Computer vision techniques in construction: a critical review,” Archives of Computational Methods in Engineering, 2020.
- C. Wu, P. Wu, J. Wang, R. Jiang, M. Chen, and X. Wang, “Ontological knowledge base for concrete bridge rehabilitation project management,” Automation in Construction, vol. 121, Article ID 103428, 2021.
- Q. Zhu, “Research on road traffic situation awareness system based on image big data,” IEEE Intelligent Systems, vol. 35, no. 1, pp. 18–26, 2019.
- Q. Jiang, F. Shao, W. Gao, Z. Chen, G. Jiang, and Y.-S. Ho, “Unified no-reference quality assessment of singly and multiply distorted stereoscopic images,” IEEE Transactions on Image Processing, vol. 28, no. 4, pp. 1866–1881, 2018.
- M. Xu, C. Li, S. Zhang, and P. L. Callet, “State-of-the-art in 360° video/image processing: perception, assessment and compression,” IEEE Journal of Selected Topics in Signal Processing, vol. 14, no. 1, pp. 5–26, 2020.
- M. Yang and A. Sowmya, “An underwater color image quality evaluation metric,” IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 6062–6071, 2015.
- B. Wang, B. Zhang, and X. Liu, “An image encryption approach on the basis of a time delay chaotic system,” Optik, vol. 225, Article ID 165737, 2020.
- S. Hinojosa, D. Oliva, E. Cuevas, G. Pajares, D. Zaldivar, and M. Pérez-Cisneros, “Reducing overlapped pixels: a multi-objective color thresholding approach,” Soft Computing, vol. 24, no. 9, pp. 6787–6807, 2020.
- C. Wu, P. Wu, J. Wang, R. Jiang, M. Chen, and X. Wang, “Critical review of data-driven decision-making in bridge operation and maintenance,” Structure and Infrastructure Engineering, pp. 1–24, 2020.
- S. Liu, W. Yu, F. T. S. Chan, and B. Niu, “A variable weight-based hybrid approach for multi-attribute group decision making under interval-valued intuitionistic fuzzy sets,” International Journal of Intelligent Systems, vol. 36, pp. 1015–1052, 2021.
- S. Liu, F. T. S. Chan, and W. Ran, “Decision making for the selection of cloud vendor: an improved approach under group decision-making with integrated weights and objective/subjective attributes,” Expert Systems with Applications, vol. 55, pp. 37–47, 2016.
- J. Yan, W. Pu, S. Zhou, H. Liu, and Z. Bao, “Collaborative detection and power allocation framework for target tracking in multiple radar system,” Information Fusion, vol. 55, pp. 173–183, 2020.
- J.-W. Hu, B.-Y. Zheng, C. Wang et al., “A survey on multi-sensor fusion based obstacle detection for intelligent ground vehicles in off-road environments,” Frontiers of Information Technology & Electronic Engineering, vol. 21, no. 5, pp. 675–692, 2020.
- C. Li, L. Sun, Z. Xu, X. Wu, T. Liang, and W. Shi, “Experimental investigation and error analysis of high precision FBG displacement sensor for structural health monitoring,” International Journal of Structural Stability and Dynamics, vol. 20, Article ID 2040011, 2020.
- L. Sun, C. Li, C. Zhang, T. Liang, and Z. Zhao, “The strain transfer mechanism of fiber Bragg grating sensor for extra large strain monitoring,” Sensors, vol. 19, no. 8, p. 1851, 2019.
- C. Zhang, Z. Alam, L. Sun, Z. Su, and B. Samali, “Fibre Bragg grating sensor-based damage response monitoring of an asymmetric reinforced concrete shear wall structure subjected to progressive seismic loads,” Structural Control and Health Monitoring, vol. 26, no. 3, Article ID e2307, 2019.
- L. Sun, C. Li, C. Zhang, Z. Su, and C. Chen, “Early monitoring of rebar corrosion evolution based on FBG sensor,” International Journal of Structural Stability and Dynamics, vol. 18, no. 8, Article ID 1840001, 2018.
- T. Qiu, X. Shi, J. Wang et al., “Deep learning: a rapid and efficient route to automatic metasurface design,” Advanced Science, vol. 6, no. 12, Article ID 1900128, 2019.
- T. Li, M. Xu, C. Zhu, R. Yang, Z. Wang, and Z. Guan, “A deep learning approach for multi-frame in-loop filter of HEVC,” IEEE Transactions on Image Processing, vol. 28, no. 11, pp. 5663–5678, 2019.
- J. Qian, S. Feng, Y. Li et al., “Single-shot absolute 3D shape measurement with deep-learning-based color fringe projection profilometry,” Optics Letters, vol. 45, no. 7, pp. 1842–1845, 2020.
- J. Qian, “Deep-learning-enabled geometric constraints and phase unwrapping for single-shot absolute 3d shape measurement,” APL Photonics, vol. 5, no. 4, Article ID 046105, 2020.
- R. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” in Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS '95), pp. 39–43, IEEE, Nagoya, Japan, October 1995.
- S. Mishra and S. K. Patra, “Short term load forecasting using neural network trained with genetic algorithm & particle swarm optimization,” in Proceedings of the 2008 First International Conference on Emerging Trends in Engineering and Technology, pp. 606–611, IEEE, Nagpur, India, July 2008.
- S. Quaiyum, Y. I. Khan, S. Rahman, and P. Barman, “Artificial neural network based short term load forecasting of power system,” International Journal of Computer Applications, vol. 30, no. 4, pp. 1–7, 2011.
- J. Joy, S. Rajeev, and V. Narayanan, “Particle swarm optimization for resource constrained-project scheduling problem with varying resource levels,” Procedia Technology, vol. 25, pp. 948–954, 2016.
- W. Ali and S. Malebary, “Particle swarm optimization-based feature weighting for improving intelligent phishing website detection,” IEEE Access, vol. 8, pp. 116766–116780, 2020.
- M. R. AlRashidi and K. M. El-Naggar, “Long term electric load forecasting based on particle swarm optimization,” Applied Energy, vol. 87, no. 1, pp. 320–326, 2010.
- M. Clerc and J. Kennedy, “The particle swarm - explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
- Iran Grid Management Company (IGMC), “Power grid status report,” available at: http://www.IGMC.ir/Power-grid-status-report.
- M. T. Hagan and M. B. Menhaj, “Training feedforward networks with the Marquardt algorithm,” IEEE Transactions on Neural Networks, vol. 5, no. 6, pp. 989–993, 1994.
- Z. Lv and L. Qiao, “Deep belief network and linear perceptron based cognitive computing for collaborative robots,” Applied Soft Computing, Article ID 106300, 2020.
- H. Zhang, Z. Qiu, J. Cao, M. Abdel-Aty, and L. Xiong, “Event-triggered synchronization for neutral-type semi-Markovian neural networks with partial mode-dependent time-varying delays,” IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 11, pp. 4437–4450, 2020.
- B. Wang and L. Chen, “New results on the control for a kind of uncertain chaotic systems based on fuzzy logic,” Complexity, vol. 2019, Article ID 8789438, 8 pages, 2019.
- H. Chen, H. Qiao, L. Xu, Q. Feng, and K. Cai, “A fuzzy optimization strategy for the implementation of RBF LSSVR model in vis-NIR analysis of pomelo maturity,” IEEE Transactions on Industrial Informatics, vol. 15, no. 11, pp. 5971–5979, 2019.
Copyright © 2021 Zahra Shafiei Chafi and Hossein Afrakhte. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.