Research Article  Open Access
Xuecun Yang, Xiaoru Yan, Chunfeng Song, "Pressure Prediction of Coal Slurry Transportation Pipeline Based on Particle Swarm Optimization Kernel Function Extreme Learning Machine", Mathematical Problems in Engineering, vol. 2015, Article ID 542182, 7 pages, 2015. https://doi.org/10.1155/2015/542182
Pressure Prediction of Coal Slurry Transportation Pipeline Based on Particle Swarm Optimization Kernel Function Extreme Learning Machine
Abstract
For the coal slurry pipeline blockage prediction problem, analysis of the actual scene shows that pressure prediction at each measuring point is a prerequisite for pipeline blockage prediction. The kernel function of the support vector machine is introduced into the extreme learning machine, the parameters are optimized by the particle swarm algorithm, and a blockage prediction method based on a particle swarm optimization kernel function extreme learning machine (PSOKELM) is put forward. Actual test data from the HuangLing coal gangue power plant are used for simulation experiments, and the method is compared with a support vector machine prediction model optimized by the particle swarm algorithm (PSOSVM) and a kernel function extreme learning machine prediction model (KELM). The results show that the mean square error (MSE) of the PSOKELM prediction model is 0.0038 and the correlation coefficient is 0.9955; the model is superior to the PSOSVM model in both speed and accuracy and superior to the KELM model in accuracy.
1. Introduction
In gangue power plants, coal slurry, a waste product, is transported through pipelines to a circulating fluidized bed boiler for mixed burning. On the one hand, this makes full use of a low calorific value energy source and disposes of the coal slurry; on the other hand, it prevents pollution, bringing good economic and social benefits to coal mine enterprises. At present, coal slurry is transported through pipelines using wet transportation technology, which solves the secondary pollution problem caused by the transportation process [1]. If the moisture content of the coal slurry is too low, or if foreign objects enter the slurry during transfer and storage, the coal quality becomes unstable and the pipeline can become blocked. When a blockage happens, the pipeline pressure at every point upstream of the choke point is higher than normal, while the pressure downstream of the choke point is lower, possibly dropping to zero. The pressure at pipeline measuring points can therefore serve as a feature for blockage prediction; in this paper, pressure prediction for the coal slurry pipeline is studied, laying a foundation for blockage prediction.
Artificial neural networks and support vector machines (SVM) handle nonlinear regression problems well and are widely applied in data prediction. However, many studies have found that feedforward neural networks, of which the BP algorithm is representative, suffer from slow learning speed, a tendency to fall into local minima, and sensitivity to parameter selection [2–4]. The support vector machine is a small-sample learning algorithm; although it has certain advantages in small-sample learning, it also learns slowly, and its performance is sensitive to the selection of the kernel parameter. On large data sets, its computational complexity is very high and the time consumed is long [5–8]. In this paper, based on existing forecasting research and combining the advantages of the particle swarm optimization (PSO) algorithm and the extreme learning machine, a pressure prediction method based on a kernel function extreme learning machine optimized by the particle swarm algorithm is proposed to predict coal slurry pipeline pressure.
2. Kernel Function Extreme Learning Machine
Guangbin Huang proposed a learning algorithm called the extreme learning machine (ELM) [9], based on the single-hidden-layer feedforward neural network. First, all parameters of the hidden layer nodes (input weights and hidden node biases) are randomly selected; then the network output weights are calculated analytically. The hidden layer parameters of the ELM are independent of the objective function and the training samples and need no iterative adjustment. In theory, this algorithm achieves good generalization performance at an extremely fast learning speed.
ELM has shown great potential in solving data regression and classification problems, overcoming some challenging limits faced by feedforward neural networks and other intelligent algorithms. Compared with the popular backpropagation (BP) network and support vector machine (SVM), ELM not only inherits their various advantages but also has several outstanding characteristics: (1) ELM is easy to use and requires little human intervention. (2) ELM learns faster; some training tasks finish in seconds or minutes. (3) ELM has better generalization performance: in most cases it generalizes better than BP and as well as or better than SVM. (4) ELM works with most nonlinear activation functions. (5) The algorithm is simpler: ELM is a three-step algorithm with no parameters needing iterative adjustment, and simple mathematics suffices. (6) ELM avoids problems such as local minima, inappropriate learning rates, and overfitting, which traditional classical learning algorithms can face.
Figure 1 is the structure of extreme learning machine.
It can be seen that the extreme learning machine still adopts a three-layer feedforward neural network structure.
Unlike the BP neural network, whose connection weights are constantly adjusted during training iterations, the input weights of the extreme learning machine are set randomly before training and never readjusted; only the output weights need to be solved, which can be done by computing a generalized inverse matrix once.
The output of the extreme learning machine for each sample is shown in formula (1):
$$f(x_j)=\sum_{i=1}^{L}\beta_i\, g(w_i\cdot x_j+b_i)=t_j,\quad j=1,2,\ldots,N, \tag{1}$$
where $x_j\in\mathbb{R}^n$ is the input sample and $t_j\in\mathbb{R}^m$ is the output sample. $w_i=[w_{i1},w_{i2},\ldots,w_{in}]^T$ is the input weight vector that connects the $i$th hidden layer node. $b_i$ is the bias of the $i$th hidden layer node. $\beta_i=[\beta_{i1},\beta_{i2},\ldots,\beta_{im}]^T$ is the output weight vector that connects the $i$th hidden layer node. $g(\cdot)$ is the excitation function.
Formula (1) can also be written in matrix form as formula (2):
$$H\beta=T. \tag{2}$$
In formula (2), $H$ is the hidden layer output matrix:
$$H=\begin{bmatrix} g(w_1\cdot x_1+b_1) & \cdots & g(w_L\cdot x_1+b_L)\\ \vdots & \ddots & \vdots\\ g(w_1\cdot x_N+b_1) & \cdots & g(w_L\cdot x_N+b_L)\end{bmatrix}_{N\times L},\qquad \beta=\begin{bmatrix}\beta_1^T\\ \vdots\\ \beta_L^T\end{bmatrix}_{L\times m},\qquad T=\begin{bmatrix}t_1^T\\ \vdots\\ t_N^T\end{bmatrix}_{N\times m}. \tag{3}$$
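The training procedure described above (random input weights and biases, then one generalized-inverse solve for the output weights of $H\beta = T$) can be sketched as follows. This is a minimal illustration on a toy regression task, not the authors' implementation; the sigmoid excitation and the hidden layer size are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, L=30):
    """Basic ELM training: random hidden parameters, output weights by pseudoinverse."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, L))    # random input weights w_i (never readjusted)
    b = rng.normal(size=L)                  # random hidden biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # hidden layer output, sigmoid excitation g
    beta = np.linalg.pinv(H) @ T            # solve H beta = T via the generalized inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# toy single-input regression: learn y = sin(x)
X = np.linspace(-3, 3, 100).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_train(X, T)
pred = elm_predict(X, W, b, beta)
```

Note that the only "training" cost is one pseudoinverse computation, which is why ELM is so much faster than iterative weight adjustment.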
Statistical learning theory involves both empirical risk and structural risk. The extreme learning machine considers not only the minimization of the empirical error, that is, the training error, but also the minimization of the structural risk. Considering only the minimum training error easily produces overfitting: even though the training error is minimal, the best test performance cannot be obtained. To obtain a good model, the two kinds of risk must be balanced; a compromise is therefore made between minimizing the output weights and minimizing the error, and formula (4) is constructed:
$$\min_{\beta}\;\frac{1}{2}\|\beta\|^2+\frac{C}{2}\|T-H\beta\|^2. \tag{4}$$
This can be expressed in constrained form as formula (5):
$$\min\;L_{\mathrm{ELM}}=\frac{1}{2}\|\beta\|^2+\frac{C}{2}\sum_{i=1}^{N}\|\xi_i\|^2,\quad \text{s.t. } h(x_i)\beta=t_i^T-\xi_i^T,\; i=1,\ldots,N. \tag{5}$$
In formula (5), $\|\beta\|^2$ stands for the structural risk; $\sum_{i=1}^{N}\|\xi_i\|^2$, the error square sum, stands for the empirical risk. $\xi_i$ is the error between the network output and the real sample value $t_i$. $C$ is the penalty factor. $h(\cdot)$ is the mapping function.
According to the KKT conditions, a Lagrange function can be used to solve the above problem, as in formula (6):
$$L=\frac{1}{2}\|\beta\|^2+\frac{C}{2}\sum_{i=1}^{N}\|\xi_i\|^2-\sum_{i=1}^{N}\alpha_i\bigl(h(x_i)\beta-t_i^T+\xi_i^T\bigr), \tag{6}$$
where $\alpha_i$ is a nonnegative Lagrange multiplier. The corresponding optimality conditions are as follows:
$$\frac{\partial L}{\partial\beta}=0\;\Rightarrow\;\beta=\sum_{i=1}^{N}\alpha_i h(x_i)^T=H^T\alpha, \tag{7a}$$
$$\frac{\partial L}{\partial\xi_i}=0\;\Rightarrow\;\alpha_i=C\xi_i,\quad i=1,\ldots,N, \tag{7b}$$
$$\frac{\partial L}{\partial\alpha_i}=0\;\Rightarrow\;h(x_i)\beta-t_i^T+\xi_i^T=0,\quad i=1,\ldots,N, \tag{7c}$$
where $H$ is the hidden layer output matrix mentioned above; it depends only on the number of samples and the number of hidden layer nodes and has nothing to do with the number of output nodes. For classification problems, it has no connection with the number of sample categories.
Substituting (7a) and (7b) into (7c) gives
$$HH^T\alpha+\frac{\alpha}{C}=T. \tag{8}$$
Let
$$A=\frac{I}{C}+HH^T. \tag{9}$$
The above formulas can then be combined as
$$A\alpha=T. \tag{10}$$
In the end, formula (11) can be derived:
$$\beta=H^T\alpha=H^T\Bigl(\frac{I}{C}+HH^T\Bigr)^{-1}T. \tag{11}$$
So the approximating function of the extreme learning machine can be written as
$$f(x)=h(x)\beta=h(x)H^T\Bigl(\frac{I}{C}+HH^T\Bigr)^{-1}T. \tag{12}$$
In addition, to improve the nonlinear classification performance of the extreme learning machine, the principle of the support vector machine (SVM) can be borrowed and a nonlinear kernel mapping introduced into the extreme learning machine.
Let
$$\Omega_{\mathrm{ELM}}=HH^T. \tag{13}$$
The hidden layer output $h(x)$ of each sample can be regarded as a nonlinear mapping of the sample $x$; the mapping can take the form $G(w,b,x)$ or an RBF form.
Then
$$f(x)=h(x)H^T\Bigl(\frac{I}{C}+HH^T\Bigr)^{-1}T=\begin{bmatrix}h(x)\cdot h(x_1)^T\\ \vdots\\ h(x)\cdot h(x_N)^T\end{bmatrix}^T\Bigl(\frac{I}{C}+HH^T\Bigr)^{-1}T. \tag{14}$$
It can be seen from formula (14) that $h(\cdot)$ appears only in inner product form, so its specific form does not need to be known. Since the mapping $h(\cdot)$ is unknown, according to kernel function theory an implicit mapping can be constructed to replace it; that is, a kernel function can be constructed to replace the inner product, as shown in formula (15):
$$K(x_i,x_j)=h(x_i)\cdot h(x_j)^T. \tag{15}$$
So formula (16) can be deduced:
$$\Omega_{\mathrm{ELM}}=HH^T,\qquad \Omega_{\mathrm{ELM}\,i,j}=K(x_i,x_j). \tag{16}$$
Then the solution formula of the kernel extreme learning machine can be written as
$$f(x)=\begin{bmatrix}K(x,x_1)\\ \vdots\\ K(x,x_N)\end{bmatrix}^T\Bigl(\frac{I}{C}+\Omega_{\mathrm{ELM}}\Bigr)^{-1}T. \tag{17}$$
The kernel extreme learning machine has stronger nonlinear approximation ability; therefore, the kernel function of the support vector machine (SVM) is introduced into the extreme learning machine, and a blockage prediction method for the coal slurry pipeline based on the kernel function extreme learning machine (KELM) is proposed in this paper.
The parameters of the kernel function are closely related to its complexity; generally speaking, the Gaussian kernel function is a preferred choice. When prediction based on the extreme learning machine and the support vector machine is made in this paper, the Gaussian function is used as the kernel function, as shown in formula (18):
$$K(x,x_i)=\exp\Bigl(-\frac{\|x-x_i\|^2}{2\sigma^2}\Bigr), \tag{18}$$
where $\sigma>0$ is the key parameter of the Gaussian kernel function.
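Formulas (17) and (18) translate into a short kernel ELM sketch. This is a minimal single-output illustration, not the paper's implementation; the values of `C` and `sigma` here are arbitrary assumptions rather than the paper's tuned parameters:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Formula (18): K(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all row pairs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kelm_fit(X, T, C=100.0, sigma=1.0):
    """Solve (I/C + Omega_ELM) alpha = T, where Omega_ELM[i, j] = K(x_i, x_j)."""
    Omega = gaussian_kernel(X, X, sigma)
    n = X.shape[0]
    return np.linalg.solve(np.eye(n) / C + Omega, T)

def kelm_predict(Xnew, X, alpha, sigma=1.0):
    """Formula (17): f(x) = [K(x, x_1), ..., K(x, x_N)] alpha."""
    return gaussian_kernel(Xnew, X, sigma) @ alpha

# toy regression: learn y = sin(x) from 60 samples
X = np.linspace(-3, 3, 60).reshape(-1, 1)
T = np.sin(X).ravel()
alpha = kelm_fit(X, T)
pred = kelm_predict(X, X, alpha)
```

Note that, unlike the random-feature ELM, no hidden layer size needs to be chosen: the kernel matrix $\Omega_{\mathrm{ELM}}$ plays the role of $HH^T$, and training is a single linear solve rather than a convex quadratic program as in SVM.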
3. Establishing the Prediction Model Based on the Particle Swarm Optimization Kernel Function Extreme Learning Machine
Particle swarm optimization is an optimization algorithm based on group search [5, 10]. Each particle carries two kinds of flight information: one is the particle's own flying experience, including its current position and its historical optimal position; the other is the flying experience shared by the swarm. Through iteration, each particle obtains its individual optimal value and the global optimal value and continuously updates its direction and speed to follow the current optimal particle; in the end, the positive feedback mechanism of group optimization is formed.
The particle swarm algorithm has many advantages, such as a fast convergence rate and strong robustness; it is easy to implement and easy to combine with other algorithms to improve performance. In view of this, the particle swarm algorithm is used to optimize the parameters of the support vector machine (SVM) and the extreme learning machine; the two optimization processes are similar. The pressure prediction workflow based on the particle swarm optimization kernel function extreme learning machine (PSOKELM) is given in Figure 2.
The main steps are as follows: (1) The original data are preprocessed and divided into a training set and a validation set. (2) The particle swarm is initialized, and the particle swarm algorithm is used to optimize the penalty factor $C$ and the kernel parameter $\sigma$ of the kernel function extreme learning machine. (3) K-fold cross-validation is used to calculate the fitness of each particle and update the particle velocity and position. (4) The best fitness value is recorded and saved. (5) When the stopping conditions (the number of iterations or the fitness value) are met, the iteration terminates; otherwise, return to step (3). When step (5) completes, the optimal parameters of the extreme learning machine have been obtained. (6) The pressure prediction model based on PSOKELM with the best parameters is established for pipeline pressure prediction.
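Steps (2)–(5) above follow the standard PSO loop, which can be sketched as below. This is a generic illustration: the quadratic fitness function is only a stand-in for the K-fold cross-validated KELM error over $(C, \sigma)$, and the swarm hyperparameters (inertia `w`, acceleration factors `c1`, `c2`, swarm size, iteration count) are assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_minimize(fitness, bounds, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5):
    """Basic particle swarm optimization over a box-bounded search space."""
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pos = rng.uniform(lo, hi, size=(n_particles, dim))   # step (2): initialize swarm
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                                   # individual best positions
    pbest_val = np.array([fitness(p) for p in pos])
    g = pbest_val.argmin()
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]     # global best
    for _ in range(n_iter):                              # step (5): iterate until limit
        r1, r2 = rng.random((2, n_particles, dim))
        # step (3): velocity/position update from own and shared flight experience
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = pbest_val.argmin()
        if pbest_val[g] < gbest_val:                     # step (4): record best fitness
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val

# stand-in fitness: in the paper this would be the cross-validated KELM error for (C, sigma)
best, val = pso_minimize(lambda p: (p[0] - 3) ** 2 + (p[1] - 0.5) ** 2,
                         bounds=[(0.01, 10), (0.01, 2)])
```

In the actual workflow, `fitness` would train a KELM on each fold with the candidate $(C, \sigma)$ and return the average validation MSE, and `best` would then parameterize the final prediction model (step (6)).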
Coal slurry pipeline pressure is related to many factors, including flow rate, coal slurry water content, pump outlet pressure, coal slurry temperature, and the distance to the pump outlet, so coal slurry pipeline pressure prediction is a multivariable prediction problem. The coal slurry pipeline pressure prediction method based on the kernel function extreme learning machine with particle swarm optimization (PSOKELM) proposed in this paper is compared with the prediction method based on the support vector machine optimized by the particle swarm algorithm, and the accuracy and speed advantages of the PSOKELM method are verified.
The effect of the designed prediction model is generally evaluated by the MSE (mean square error) and the correlation coefficient $R$:
$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\bigl(y_i-\hat{y}_i\bigr)^2, \tag{19}$$
$$R=\frac{\sum_{i=1}^{n}(y_i-\bar{y})(\hat{y}_i-\bar{\hat{y}})}{\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2\sum_{i=1}^{n}(\hat{y}_i-\bar{\hat{y}})^2}}, \tag{20}$$
where $n$ is the number of samples and $\hat{y}_i$ is the prediction value of $y_i$.
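Formulas (19) and (20) compute directly; a minimal sketch with made-up toy values (not the paper's data):

```python
import numpy as np

def mse(y, y_hat):
    """Mean square error, formula (19)."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return float(np.mean((y - y_hat) ** 2))

def corr_coef(y, y_hat):
    """Pearson correlation coefficient R, formula (20)."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return float(np.corrcoef(y, y_hat)[0, 1])

# toy example: four samples and their predictions
y = [1.0, 2.0, 3.0, 4.0]
y_hat = [1.1, 1.9, 3.2, 3.9]
err = mse(y, y_hat)        # (0.01 + 0.01 + 0.04 + 0.01) / 4 = 0.0175
r = corr_coef(y, y_hat)
```

A small MSE and an $R$ close to 1 together indicate both small pointwise errors and a good overall fit.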
4. Simulation Verification
The coal transportation system of the HuangLing coal gangue power plant served as the experimental platform. A PLC is the control core of the whole system, realizing data acquisition, centralized monitoring and management, and automatic sequence control. Blockage prediction is based on the data recorded from the number 2 pump of this coal transportation system from 2012.5.26 to 5.31. 110 training samples and 25 test samples were randomly selected. The input variables are flow rate, main pump current, oil temperature, distance to the pump outlet, and water content, and the output variable is the pressure at the measured point. Taking the pump outlet pressure prediction as an example, the computer simulation is performed in MATLAB.
The kernel function is used as the nonlinear mapping both for the kernel function extreme learning machine (KELM) and the support vector machine (SVM), and the kernel parameter and penalty factor both affect their performance. Here the effect of the kernel parameter and penalty factor on the MSE is compared for the support vector machine (SVM) and the kernel function extreme learning machine, with the Gaussian kernel used in both algorithms. The simulation results are shown in Figures 3 and 4.
As can be seen from Figures 3 and 4, the penalty factor and kernel parameter influence both algorithms, but the MSE surface of the kernel function extreme learning machine is relatively flat, so its parameter selection is less sensitive.
In addition, the computational complexity of the kernel function extreme learning machine is far lower than that of the support vector machine (SVM), so the calculation time of the extreme learning machine should be shorter. A training time comparison was made for the two algorithms from different training angles. The PC used in the simulation has an Intel i3 processor and 4 GB of memory. The training time comparison is shown in Figure 5.
It can be seen from Figure 5 that the training time consumed by the kernel function extreme learning machine is far less than that of the support vector machine, and its training speed is very fast. The reason is that, unlike the support vector machine (SVM), the kernel function extreme learning machine does not need to solve a complex convex quadratic optimization problem.
In addition, the optimum parameters of SVM and KELM are optimized by the particle swarm algorithm, and the fitness curves are shown in Figure 6. As can be seen from Figure 6, the KELM algorithm converges faster.
In order to evaluate the optimization effect of the particle swarm algorithm, PSOKELM and KELM (without the optimization algorithm) are used, respectively, to forecast the data sequence; the predicted results are shown in Figure 7.
According to the fitting degree and prediction effect evaluated by formulas (19) and (20), the MSE and $R$ results based on KELM and PSOKELM are shown in Table 1.

It can be seen from Figure 7 and Table 1 that, compared with KELM, PSOKELM has better prediction accuracy.
In order to further contrast the prediction effects of KELM and SVM, PSOKELM and PSOSVM are used, respectively, to predict the data sequence; the prediction results are shown in Figures 8 and 9 and Table 2.

It can be seen from Figures 8 and 9 and Table 2 that the mean square error of the PSOKELM prediction model is 0.0038 and the correlation coefficient is 0.9955; the mean square error of the KELM prediction model is 0.00487 and the correlation coefficient is 0.9428; and the mean square error of the PSOSVM prediction model is 0.0057 and the correlation coefficient is 0.9228. The prediction effect based on PSOKELM is therefore much better than that based on PSOSVM.
5. Conclusion
The pressure prediction method for the coal slurry transportation pipeline based on the particle swarm optimization kernel function extreme learning machine (PSOKELM) is studied in this paper. The following conclusions are drawn. (1) For the pressure prediction problem of the coal slurry transportation pipeline, the kernel function of the support vector machine is introduced into the extreme learning machine, the parameters are optimized by the particle swarm algorithm, and the pressure prediction method based on PSOKELM is put forward and compared with the PSOSVM and KELM prediction models. (2) Simulation results prove that the prediction model based on PSOKELM is superior to the PSOSVM model in speed and accuracy and superior to the KELM model in accuracy; the relative prediction error is within 4%, confirming the validity of the prediction model. (3) The research and validation of the pressure prediction method for the coal slurry transportation pipeline lay a foundation for research on pipeline blockage.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The authors acknowledge the support provided by the National Natural Science Foundation Youth Science Fund Project (no. 51405381).
References
[1] A. Jiang, Z. Jiang, C. Wang et al., “Modeling and optimization of slurry wet pipeline transportation system in fluidized bed boiler,” CIESC Journal, vol. 61, no. 8, pp. 1882–1888, 2010.
[2] Y. Liu, B. Li, X. Sun, and Z. Zhou, “A fusion model of SWT, QGA and BP neural network for wireless network traffic prediction,” in Proceedings of the 15th IEEE International Conference on Communication Technology (ICCT '13), pp. 769–774, IEEE, Guilin, China, November 2013.
[3] J. Gao, J. Yu, Z. Leng, Y. Qin, and B. Zhang, “Fault diagnosis of subway auxiliary inverter based on BPNN optimized by genetic simulated annealing,” ICIC Express Letters, vol. 8, no. 1, pp. 287–294, 2014.
[4] A. Feng, Application Research on Fault Diagnosis of Power Electronic Circuits Based on BP Neural Network Optimized by QGA, Lanzhou Jiaotong University, Lanzhou, China, 2011.
[5] X. Jin, Y. Kang, and K. Zhang, “Stock price predicting using SVM optimized by particle swarm optimization based on uncertain knowledge,” International Journal of Digital Content Technology and its Applications, vol. 6, no. 23, pp. 216–221, 2012.
[6] Z. Jie and H. W. Ma, “Research on gas outburst prediction in coal mining in China based on a new kind of SVM algorithm,” in Proceedings of the 3rd International Conference on Energy, Environment and Sustainable Development (EESD '13), vol. 12, pp. 374–379, Shanghai, China, 2013.
[7] S. Du, J. Zhang, J. Li, Q. Su, W. Zhu, and Y. Chen, “The deformation prediction of mine slope surface using PSO-SVM model,” Journal of Electrical Engineering, vol. 11, no. 12, pp. 7182–7189, 2013.
[8] B. Sun and H.-T. Yao, “The short-term wind speed forecast analysis based on the PSO-LSSVM predict model,” Power System Protection and Control, vol. 40, no. 5, pp. 85–89, 2012.
[9] G.-B. Huang, H. Zhou, X. J. Ding, and R. Zhang, “Extreme learning machine for regression and multiclass classification,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.
[10] J. Wang and H. Bi, “A kind of extreme learning machine based on particle swarm optimization,” Journal of Zhengzhou University, vol. 45, no. 3, pp. 100–104, 2013.
Copyright
Copyright © 2015 Xuecun Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.