Research Article  Open Access
Fei Gao, Jiangang Lv, "Fault Diagnosis for Engine Based on Single-Stage Extreme Learning Machine", Mathematical Problems in Engineering, vol. 2016, Article ID 7939607, 10 pages, 2016. https://doi.org/10.1155/2016/7939607
Fault Diagnosis for Engine Based on Single-Stage Extreme Learning Machine
Abstract
The Single-Stage Extreme Learning Machine (SS-ELM) is presented in this paper to address mechanical fault diagnosis. In SS-ELM, the mapping used by the traditional extreme learning machine (ELM) is changed: the eigenvectors extracted by signal processing methods are regarded directly as the outputs of the network's hidden layer. In this way, the uncertainty introduced when training data are transformed from the input space to the ELM feature space by the random ELM mapping, as well as the problem of selecting the number of hidden nodes, is avoided effectively. Experimental results on diesel engine fault diagnosis show the good performance of the SS-ELM algorithm.
1. Introduction
As a representative piece of equipment, the engine is a general power source, and its safety and reliability are very important. Among equipment fault diagnosis tasks, faults in reciprocating machinery are the most difficult cases. To address this problem, piezoelectric pressure sensors, accelerometers, and sound sensors are widely used to measure signals from the engine. Faults are defined as deviations from the normal behavior of the plant. Because the engine's working conditions are harsh and its structure is complex, engine fault signals are nonstationary and nonlinear, and it is difficult to extract a threshold that clearly reflects the fault characteristics. Therefore, methods that combine signal processing with intelligent pattern recognition are used to realize fault diagnosis. First, signal processing methods, such as frequency spectrum analysis, the wavelet transform, the Hilbert-Huang transform, and mathematical morphology, are used to denoise the measured signals and extract a feature vector that broadly reflects the fault characteristics and types. Second, intelligent pattern recognition methods, such as artificial neural networks and the support vector machine (SVM), are used to map the feature vector into a higher-dimensional feature space and classify the fault mode based on iterative optimization or statistical learning.
The combination of signal processing and intelligent pattern recognition solves the difficult problem of mechanical fault diagnosis to a certain extent. However, many parameters need to be tuned to achieve a good fault classification rate when the recognition algorithm is applied, and the computational complexity and computing time of recognition also limit its effective realization in embedded systems. As a novel learning algorithm, the ELM was proposed recently by Huang et al. [1] for single hidden layer feedforward neural networks (SLFNs). Different from gradient-descent-based methods, ELM randomly chooses the input weights (linking the input layer to the hidden layer) and hidden biases, and the output weights (linking the hidden layer to the output layer) are determined analytically using the Moore-Penrose generalized pseudoinverse instead of being tuned. Experimental results show that the learning speed of ELM can be thousands of times faster than that of gradient-descent learning algorithms, so it has received wide application, for example, in fault prognosis of mechanical components [2], fault classification in series compensated transmission lines [3], fault diagnosis on a hydraulic tube tester [4], and computer-aided diagnosis systems [5].
However, because of the random mapping from the input space to some feature space, the numerical stability of the output has generally been ignored. On the other hand, random selection of the input weights and biases results in a large number of hidden units, which consumes much computing time. To address these shortcomings, many improved ELM algorithms have been investigated recently [6–9]. Zhao et al. [10] proposed an input weight selection algorithm for an ELM with linear hidden nodes to alleviate the ill-conditioned problem. Huynh and Won proposed the Least Squares Extreme Learning Machine (LS-ELM) [11], the Regularized Least Squares Extreme Learning Machine (RLS-ELM) [12], and the SVD-Neural classifier [13]. The Projection Vector Machine was proposed by Deng et al. [14] for high-dimension, small-sample data. Although these improved algorithms have enhanced the numerical stability of the ELM output to a certain extent, some output fluctuation still exists. For many applications with high safety requirements, an unstable judgment may cause fatal accidents, for example, in fault diagnosis for engines, industrial process control, control of chaotic systems, and operating condition monitoring of hydroelectric generating sets [15–18]. Therefore, in order to introduce ELM to engine fault diagnosis effectively, we propose the Single-Stage Extreme Learning Machine (SS-ELM) in this paper. First, the eigenvectors extracted by signal processing methods are regarded directly as the hidden layer output matrix of the SS-ELM network. Second, the Moore-Penrose generalized inverse is used to calculate the output weights. The resulting method has lower computational complexity and can be ported to embedded systems. Experimental results show that this approach is feasible for identifying engine faults.
The rest of this paper is organized as follows. Section 2 describes the extreme learning machine algorithm and its shortcomings in classification. The Single-Stage Extreme Learning Machine is presented in Section 3. Experimental results and analysis for engine fault diagnosis are shown in Section 4. Finally, the conclusion is drawn in Section 5.
2. Extreme Learning Machine and Its Shortcomings in Classification
2.1. Single Hidden Layer Feedforward Networks
The standard architecture of a single hidden layer feedforward network consists of an input layer with $n$ neurons, a hidden layer with $N$ neurons, and an output layer with $m$ neurons. Consider $Q$ arbitrary training samples $(\mathbf{x}_j, \mathbf{t}_j)$, where $\mathbf{x}_j \in \mathbb{R}^n$ and $\mathbf{t}_j \in \mathbb{R}^m$ are the $j$th input pattern and the corresponding desired output. Then SLFNs with activation function $g(x)$ can be mathematically modeled as
$$\mathbf{o}_j = \sum_{i=1}^{N} \boldsymbol{\beta}_i \, g\left(\mathbf{w}_i \cdot \mathbf{x}_j + b_i\right), \quad j = 1, 2, \ldots, Q, \tag{1}$$
where $\mathbf{w}_i$ is the input weight vector connecting the input neurons and the $i$th hidden node, $b_i$ is the threshold of the $i$th hidden node, $\boldsymbol{\beta}_i$ is the weight vector connecting the $i$th hidden node with the output neurons, $\mathbf{o}_j$ is the real output vector of the SLFN, and $\mathbf{w}_i \cdot \mathbf{x}_j$ is the scalar product of $\mathbf{w}_i$ and $\mathbf{x}_j$.
So the main aim of the training process is to minimize the following error function by adjusting the network parameters $\mathbf{w}_i$, $b_i$, and $\boldsymbol{\beta}_i$:
$$E = \sum_{j=1}^{Q} \left\| \mathbf{o}_j - \mathbf{t}_j \right\|^2. \tag{2}$$
2.2. Extreme Learning Machine
Traditionally, a gradient-descent algorithm is used to train SLFNs, in which the parameter set $\boldsymbol{\theta}$ is iteratively tuned by
$$\boldsymbol{\theta}_{k+1} = \boldsymbol{\theta}_k - \eta \frac{\partial E(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}, \tag{3}$$
where $\boldsymbol{\theta}$ consists of the parameters $\mathbf{w}_i$, $b_i$, and $\boldsymbol{\beta}_i$, and $\eta$ denotes the learning rate. As a popular gradient-descent-based training algorithm for feedforward neural networks, the back-propagation (BP) learning algorithm has been used in various fields; it adjusts the parameters by propagating the error from the output layer back to the input layer. However, such algorithms clearly suffer from a slow learning rate, are prone to overfitting, and can become stuck in local minima.
Recently, an effective training algorithm for SLFNs, called ELM, was proposed by Huang et al. [1]. According to Huang and Babri [19], SLFNs with at most $Q$ hidden nodes and almost any nonlinear activation function can exactly learn $Q$ distinct observations. So if the standard SLFN with $N$ hidden nodes can approximate these $Q$ distinct observations with zero error, it implies that there exist $\boldsymbol{\beta}_i$, $\mathbf{w}_i$, and $b_i$ such that
$$\sum_{i=1}^{N} \boldsymbol{\beta}_i \, g\left(\mathbf{w}_i \cdot \mathbf{x}_j + b_i\right) = \mathbf{t}_j, \quad j = 1, 2, \ldots, Q. \tag{4}$$
Equation (4) can be written compactly as
$$\mathbf{H}\boldsymbol{\beta} = \mathbf{T}, \tag{5}$$
where
$$\mathbf{H} = \begin{bmatrix} g(\mathbf{w}_1 \cdot \mathbf{x}_1 + b_1) & \cdots & g(\mathbf{w}_N \cdot \mathbf{x}_1 + b_N) \\ \vdots & \ddots & \vdots \\ g(\mathbf{w}_1 \cdot \mathbf{x}_Q + b_1) & \cdots & g(\mathbf{w}_N \cdot \mathbf{x}_Q + b_N) \end{bmatrix}_{Q \times N}, \quad \boldsymbol{\beta} = \begin{bmatrix} \boldsymbol{\beta}_1^T \\ \vdots \\ \boldsymbol{\beta}_N^T \end{bmatrix}_{N \times m}, \quad \mathbf{T} = \begin{bmatrix} \mathbf{t}_1^T \\ \vdots \\ \mathbf{t}_Q^T \end{bmatrix}_{Q \times m}.$$
As proposed by Huang et al. [1], $\mathbf{H}$ is the hidden layer output matrix. The parameters $\mathbf{w}_i$ and $b_i$ (input weights and biases) may simply be assigned random values and need not be adjusted during the training process; (5) then becomes a linear system. Since the matrix $\mathbf{H}$ may not always be square, the smallest norm least-squares solution of the network is estimated as
$$\hat{\boldsymbol{\beta}} = \mathbf{H}^{\dagger}\mathbf{T}, \tag{6}$$
where $\mathbf{H}^{\dagger}$ is the Moore-Penrose generalized inverse of the matrix $\mathbf{H}$.
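The training procedure in (5)-(6) can be illustrated with a short NumPy sketch. This is a minimal illustration rather than the authors' implementation; the uniform [-1, 1] initialization range for the random weights and biases, and the function names, are our assumptions:

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=None):
    """Basic ELM training: random input weights and biases, sigmoid hidden
    layer, and output weights from the Moore-Penrose pseudoinverse (6)."""
    rng = np.random.default_rng(rng)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                   # hidden layer output matrix
    beta = np.linalg.pinv(H) @ T                             # smallest-norm least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass through the trained network."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Note that `W` and `b` are never tuned; only `beta` is computed from the data, which is what reduces ELM training to a single linear solve.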
2.3. Shortcomings of the Extreme Learning Machine in Classification
Although the ELM algorithm overcomes several difficulties of traditional gradient-descent approaches, such as an improper learning rate, overfitting, and local minima, the random selection of input weights and biases may lead to an ill-conditioned problem, so that the output of the network becomes numerically unstable [10]. On the other hand, to obtain a good classification rate, the number of hidden nodes must be increased substantially, which markedly increases the complexity of the network and the training time.
To verify the problem of ELM with random mapping, we chose the Page Blocks dataset from the UCI Machine Learning Repository [20]. The initial number of hidden nodes is set to 50 and the increment for each simulation is set to 5. The sigmoidal function is chosen as the activation function, and the input weights and biases are determined randomly. The training and testing accuracy of ELM with respect to the initial network parameters and the number of hidden nodes are shown in Figures 1 and 2; the training and testing times of ELM with different numbers of hidden nodes are shown in Figures 3 and 4. The simulations are carried out in the MATLAB 7.11.0 environment on an AMD 2.2 GHz CPU with 1 GB of RAM.
The simulation results show that different outputs of ELM are obtained with the same hidden nodes and activation function because of the random selection of the input weights and biases. Moreover, the generalization performance of ELM depends on a proper choice of the input parameters, but it is difficult to search for the best network parameters, and a large number of hidden nodes is usually required to achieve good classification accuracy, which results in a slow response of the trained network. Figure 3 shows that the training time of ELM increases approximately exponentially with the number of hidden nodes. In addition, a large number of hidden nodes also requires a large memory to store the parameters. These problems seriously restrict ELM's application in embedded systems.
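The run-to-run variability described above is easy to reproduce in a self-contained check. The following toy example (synthetic data, not the UCI experiment) trains the same sigmoid-activation ELM twice with different random seeds and contrasts the result with the deterministic pseudoinverse of a fixed matrix:

```python
import numpy as np

def elm_output_weights(X, T, n_hidden, seed):
    """Output weights of a sigmoid ELM for a given random seed."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, n_hidden)
    H = 1 / (1 + np.exp(-(X @ W + b)))
    return np.linalg.pinv(H) @ T

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))
T = rng.normal(size=(30, 2))

# Same data, same architecture, different seeds: different solutions.
beta_a = elm_output_weights(X, T, n_hidden=10, seed=1)
beta_b = elm_output_weights(X, T, n_hidden=10, seed=2)

# Pseudoinverse of a fixed matrix: identical every time.
fixed_a = np.linalg.pinv(X) @ T
fixed_b = np.linalg.pinv(X) @ T
```

Here `beta_a` and `beta_b` differ although nothing about the data or the architecture changed, while `fixed_a` and `fixed_b` coincide exactly.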
For an ill-conditioned system, the change in the final solution may be large even if the change in the output of ELM is small, so numerical stability is an especially important aspect of a fault diagnosis system. Underreporting and misjudgment of fault modes lead to lagging or excessive maintenance, which can result in serious accidents and a heavy financial burden. Therefore, because of the disturbance of its output, ELM must be improved before it can be applied in the field of fault diagnosis.
3. Single-Stage Extreme Learning Machine
The ELM algorithm can classify multidimensional data extracted from time domain measured signals, but the signal obtained from a transducer is a time series that cannot be fed into the ELM network directly. In other words, ELM uses three steps to accomplish classification of a time series. First, feature extraction methods are used to map the time series into a feature space. Second, the eigenvector is mapped from the $d$-dimensional feature space into the $N$-dimensional hidden layer space using the random input weights and hidden layer biases. Third, the output weights are determined analytically by computing the pseudoinverse of the hidden layer output matrix.
As discussed above, the random mapping results in perturbation of the output, and good classification performance requires many more hidden nodes. Moreover, the transformation of the feature vectors increases the input dimension, because the number of hidden nodes is always larger than the number of input nodes. To solve these problems, we simplify the network structure by regarding the feature vectors themselves as the output of the hidden layer; the original network then becomes a network with only two layers, the original random mapping is avoided, and the original training process is simplified. We therefore call the improved structure the Single-Stage Extreme Learning Machine (SS-ELM). In SS-ELM, the input matrix is equal to the hidden layer output matrix of ELM, and the output weights can be calculated by the pseudoinverse of the input matrix. Sketches of ELM and SS-ELM are shown in Figures 5 and 6.
Given a set of training data $\{(\mathbf{x}_j, \mathbf{t}_j)\}_{j=1}^{Q}$, where $\mathbf{x}_j \in \mathbb{R}^d$ and $\mathbf{t}_j \in \mathbb{R}^m$ are the $j$th input data and its desired output, the input matrix can be defined by $\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_Q]^T$ and the desired output matrix by $\mathbf{T} = [\mathbf{t}_1, \ldots, \mathbf{t}_Q]^T$. The output weight of the network based on SS-ELM can be obtained by
$$\boldsymbol{\beta} = \mathbf{X}^{\dagger}\mathbf{T}, \tag{7}$$
where $\mathbf{X}^{\dagger}$ is the Moore-Penrose generalized inverse of the matrix $\mathbf{X}$.
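Training by (7) amounts to a single pseudoinverse of the feature matrix. A minimal NumPy sketch (the function names are ours, not the authors'):

```python
import numpy as np

def sselm_train(X, T):
    """SS-ELM training: the feature matrix X itself serves as the hidden
    layer output matrix, so the output weights follow directly from (7)."""
    return np.linalg.pinv(X) @ T

def sselm_predict(X, beta):
    """The two-layer network reduces to a single linear map."""
    return X @ beta
```

Because no random quantity enters the computation, the same training data always yield the same output weights, which is the source of SS-ELM's output stability.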
The training time of both SS-ELM and ELM is mainly spent computing the Moore-Penrose generalized inverse of the hidden layer output matrix [21]. In most cases, the singular value decomposition (SVD) is used to compute the Moore-Penrose generalized inverse; for an $m \times n$ matrix with $m \geq n$, the computational complexity of the SVD is $O(mn^2)$ [22]. If the number of hidden layer nodes becomes large, the computation time of the SVD rises remarkably. With its compact network structure, SS-ELM achieves both better output stability and lower computational complexity.
Consider a special structure of ELM in which the number of hidden nodes equals the dimension of the input feature vector, in order to analyze the random mapping from the input space to the hidden space and to investigate whether a difference exists between this special structure of ELM and SS-ELM. A performance comparison of SS-ELM and the ELM algorithm on the Iris and Wine datasets is carried out: 70% and 30% of the samples in each dataset are randomly chosen as training and testing data in each trial. The details of the datasets and the average learning time over forty trials for the two algorithms are listed in Table 1. The training and testing accuracy of the 40 trials for each method are shown in Figures 7 and 8.

The simulation results above show that the learning speed of SS-ELM is as fast as that of ELM. The main difference between the SS-ELM and ELM algorithms lies in the stability of the network output: ELM has an unstable output due to the random mapping from the original space to the hidden space, while SS-ELM has a robust output due to its simplified network structure.
In some real-world applications, the training data may arrive chunk by chunk or one by one. Under these circumstances, incremental learning algorithms may outperform batch learning algorithms, since they do not require retraining on the old data whenever new data are received [23]. Therefore, the batch SS-ELM algorithm can also be extended to online incremental learning.
Given a set of initial training data $\aleph_0$, the initial input matrix $\mathbf{X}_0$ and output matrix $\mathbf{T}_0$ are obtained easily, so the initial output weight can be obtained by
$$\boldsymbol{\beta}^{(0)} = \mathbf{X}_0^{\dagger}\mathbf{T}_0. \tag{8}$$
If $\mathbf{X}_0^T\mathbf{X}_0$ is nonsingular, the initial output weight can be defined by
$$\boldsymbol{\beta}^{(0)} = \mathbf{P}_0\mathbf{X}_0^T\mathbf{T}_0, \tag{9}$$
where $\mathbf{P}_0 = \left(\mathbf{X}_0^T\mathbf{X}_0\right)^{-1}$. If $\mathbf{X}_0^T\mathbf{X}_0$ tends to be singular, one can make it nonsingular by adding a constant diagonal matrix, redefining $\mathbf{P}_0 = \left(\mathbf{X}_0^T\mathbf{X}_0 + \lambda\mathbf{I}\right)^{-1}$.
When the observations of a second training subset $\aleph_1$ are received, the training problem becomes minimizing
$$\left\| \begin{bmatrix} \mathbf{X}_0 \\ \mathbf{X}_1 \end{bmatrix} \boldsymbol{\beta} - \begin{bmatrix} \mathbf{T}_0 \\ \mathbf{T}_1 \end{bmatrix} \right\|, \tag{10}$$
where $\mathbf{X}_1$ and $\mathbf{T}_1$ are the input and desired output matrices of $\aleph_1$.
Considering both blocks of training sets $\aleph_0$ and $\aleph_1$, the output weight becomes
$$\boldsymbol{\beta}^{(1)} = \boldsymbol{\beta}^{(0)} + \mathbf{P}_1\mathbf{X}_1^T\left(\mathbf{T}_1 - \mathbf{X}_1\boldsymbol{\beta}^{(0)}\right), \tag{11}$$
where $\mathbf{P}_1 = \left(\mathbf{P}_0^{-1} + \mathbf{X}_1^T\mathbf{X}_1\right)^{-1}$.
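The incremental scheme (8)-(11) can be sketched as follows. For clarity this sketch recomputes the matrix inverse directly rather than using the Woodbury identity employed by OS-ELM [23], and the small regularization constant `lam` is our addition for numerical safety:

```python
import numpy as np

def sselm_init(X0, T0, lam=1e-8):
    """Initial batch, following (9): P0 = (X0^T X0 + lam*I)^-1, beta0 = P0 X0^T T0."""
    P = np.linalg.inv(X0.T @ X0 + lam * np.eye(X0.shape[1]))
    beta = P @ X0.T @ T0
    return P, beta

def sselm_update(P, beta, X1, T1):
    """Fold in a new data chunk via (11) without revisiting the old data."""
    P_new = np.linalg.inv(np.linalg.inv(P) + X1.T @ X1)
    beta_new = beta + P_new @ X1.T @ (T1 - X1 @ beta)
    return P_new, beta_new
```

After processing all chunks, the weights agree (up to the regularization) with the batch solution $\mathbf{X}^{\dagger}\mathbf{T}$ computed on the full data.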
4. Experimental Results
In this paper, vibration signals were measured on an F3L912-type diesel engine, which has 3 cylinders and works on a 4-stroke cycle. The rotating speed was 1200 r/min and the engine was unloaded while acceleration signals were sampled on the first cylinder. Because the oil pressure signal can be used as a reference indication for the vibration signal on the cylinder head, the oil pressure signal on the third cylinder was measured by a clip-on oil pressure sensor synchronously with the vibration signal. The sample rate was 40 kHz in the experiment. Signals were measured under eleven working conditions: (1) normal condition, (2) first cylinder misfire, (3) second cylinder misfire, (4) large exhaust valve clearance, (5) small exhaust valve clearance, (6) large air supply valve clearance, (7) small air supply valve clearance, (8) exhaust valve gas leak, (9) air supply valve gas leak, (10) light oil leak, and (11) both first and second cylinder misfire. The vibration signal of the cylinder head over one working cycle under the normal condition and its Short-Time Fourier Transform (STFT) are shown in Figure 9.
As noted above, the vibration signal on the cylinder head reflects the acting time and intensity of the five main excitation events. From the time-frequency distribution, we know that the vibration signal has a wide frequency band and is nonstationary. Multiscale principal component analysis (MSPCA) [24] computes the PCA of the wavelet coefficients at each scale and combines the results at the relevant scales; because of its multiscale nature, it has wide applications. Therefore, in this paper, we used the MSPCA method to analyze the vibration signal on the cylinder head. The original cylinder head vibration signals and their MSPCA transformations under four conditions are shown in Figures 10, 11, 12, and 13.
The wavelet packet can concentrate the energy distribution of the signal on different decomposition coefficients, which gives it advantages in feature extraction [25]. Therefore, we first decompose the vibration signal to three levels using the Daubechies 4 wavelet and reconstruct the conjugate filter coefficients of the wavelet packet decomposition. Then we calculate the energy of the signal in each frequency band using Parseval's theorem and normalize the energy of each frequency band to form the feature vector. In the fault diagnosis problem, 200 and 125 feature vectors are randomly chosen for training and testing, respectively. Forty trials are conducted for each algorithm, and the averages over the forty trials are taken as the final results. The training and testing accuracy are shown in Figure 14. The training times of ELM with the Morlet function and the sigmoid function are 4.3 ms and 3.9 ms, respectively, with the number of hidden layer nodes equal to the dimension of the feature vector; the training time of SS-ELM is 2.3 ms.
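The normalization step described above can be sketched as follows. The function takes the coefficient arrays of the individual frequency bands (e.g., the eight level-3 wavelet packet nodes of a Daubechies 4 decomposition, obtained with any wavelet toolbox) and returns the normalized band energies used as the feature vector; it is a sketch of the normalization step only:

```python
import numpy as np

def band_energy_features(band_coeffs):
    """Energy of each frequency band (sum of squared coefficients, per
    Parseval's theorem), normalized so the feature vector sums to one."""
    energies = np.array([np.sum(np.square(c)) for c in band_coeffs])
    return energies / energies.sum()
```

Normalizing by the total energy makes the feature vector insensitive to the overall signal amplitude, so it characterizes the distribution of energy across bands rather than its absolute level.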
The experiments show that the diesel engine fault diagnosis method based on MSPCA and SS-ELM described above achieves a high fault identification accuracy. SS-ELM overcomes the shortcoming of ELM that the input vector must be randomly mapped into the kernel space of the hidden layer; it thus avoids the lagging or excessive maintenance caused by the underreporting and misjudgment of fault modes. We therefore believe that SS-ELM can be used in the field of fault diagnosis.
5. Conclusion
In this paper, a fault diagnosis method for engines is proposed. Because of the random mapping from the input space to the hidden layer space in the traditional ELM, unstable outputs exist, which may cause fatal safety accidents in engine fault diagnosis. We therefore simplify the original network structure and propose the Single-Stage Extreme Learning Machine (SS-ELM). In SS-ELM, the original random mapping is avoided and the original training process is simplified: the input matrix plays the role of the hidden layer output matrix of ELM, and the output weights are calculated by the pseudoinverse of the input matrix. Experimental results show that the output stability of the modified method outperforms that of the traditional ELM in which the input and hidden layers have the same dimension. The learning speed of SS-ELM is as fast as that of ELM, and the modified algorithm does not need to tune any parameter in the whole training process.
Competing Interests
The authors declare that they have no competing interests.
References
[1] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
[2] D. Martinez-Rego, O. Fontenla-Romero, B. Perez-Sanchez et al., "Fault prognosis of mechanical components using on-line learning neural networks," in Artificial Neural Networks—ICANN 2010: 20th International Conference, Thessaloniki, Greece, September 15–18, 2010, Proceedings, Part I, vol. 6352 of Lecture Notes in Computer Science, pp. 60–66, Springer, Berlin, Germany, 2010.
[3] V. Malathi, N. S. Marimuthu, and S. Baskar, "A comprehensive evaluation of multicategory classification methods for fault classification in series compensated transmission line," Neural Computing & Applications, vol. 19, no. 4, pp. 595–600, 2010.
[4] X.-F. Hu, Z. Zhao, S. Wang, F.-L. Wang, D.-K. He, and S.-K. Wu, "Multi-stage extreme learning machine for fault diagnosis on hydraulic tube tester," Neural Computing & Applications, vol. 17, no. 4, pp. 399–403, 2008.
[5] M. Gomathi and P. Thangaraj, "A computer aided diagnosis system for lung cancer detection using machine learning technique," European Journal of Scientific Research, vol. 51, no. 2, pp. 260–275, 2011.
[6] N. Wang, J.-C. Sun, M. J. Er, and Y.-C. Liu, "Hybrid recursive least squares algorithm for online sequential identification using data chunks," Neurocomputing, vol. 174, pp. 651–660, 2016.
[7] N. Wang, M. J. Er, and M. Han, "Generalized single-hidden layer feedforward networks for regression problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 6, pp. 1161–1176, 2015.
[8] N. Wang, M. J. Er, and M. Han, "Parsimonious extreme learning machine using recursive orthogonal least squares," IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 10, pp. 1828–1841, 2014.
[9] Z. Shao, M. J. Er, and N. Wang, "An efficient leave-one-out cross-validation-based extreme learning machine (ELOO-ELM) with minimal user intervention," IEEE Transactions on Cybernetics, vol. 46, no. 8, pp. 1939–1951, 2016.
[10] G. Zhao, Z. Shen, and Z. Man, "Robust input weight selection for well-conditioned extreme learning machine," Journal of Information Technology, vol. 17, no. 1, pp. 1–18, 2011.
[11] H. T. Huynh and Y. Won, "Small number of hidden units for ELM with two-stage linear model," IEICE Transactions on Information and Systems, vol. 91, no. 4, pp. 1042–1049, 2008.
[12] H. T. Huynh, Y. Won, and J.-J. Kim, "An improvement of extreme learning machine for compact single-hidden-layer feedforward neural networks," International Journal of Neural Systems, vol. 18, no. 5, pp. 433–441, 2008.
[13] H. T. Huynh and Y. Won, "Training single hidden layer feedforward neural networks by using singular value decomposition," in Proceedings of the International Conference on Computer Sciences and Convergence Information Technology (ICCIT '09), pp. 1300–1304, Seoul, Republic of Korea, 2009.
[14] W. Deng, Q. Zheng, S. Lian et al., "Projection vector machine: one-stage learning algorithm from high-dimension small-sample data," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 3375–3382, Barcelona, Spain, 2010.
[15] X.-W. Chen, J.-G. Zhang, and Y.-J. Liu, "Research on the intelligent control and simulation of automobile cruise system based on fuzzy system," Mathematical Problems in Engineering, vol. 2016, Article ID 9760653, 12 pages, 2016.
[16] G.-X. Wen, C. L. P. Chen, Y.-J. Liu, and Z. Liu, "Neural-network-based adaptive leader-following consensus control for second-order nonlinear multi-agent systems," IET Control Theory & Applications, vol. 9, no. 13, pp. 1927–1934, 2015.
[17] C. L. P. Chen, Y.-J. Liu, and G.-X. Wen, "Fuzzy neural network-based adaptive control for a class of uncertain nonlinear stochastic systems," IEEE Transactions on Cybernetics, vol. 44, no. 5, pp. 583–593, 2014.
[18] Y. Gao and Y.-J. Liu, "Adaptive fuzzy optimal control using direct heuristic dynamic programming for chaotic discrete-time system," Journal of Vibration and Control, vol. 22, no. 2, pp. 595–603, 2016.
[19] G.-B. Huang and H. A. Babri, "Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions," IEEE Transactions on Neural Networks, vol. 9, no. 1, pp. 224–229, 1998.
[20] C. L. Blake and C. J. Merz, UCI Repository of Machine Learning Databases, Department of Information and Computer Sciences, University of California, Irvine, Calif, USA, http://www.ics.uci.edu/~mlearn/MLRepository.html.
[21] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.
[22] G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
[23] N.-Y. Liang, G.-B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411–1423, 2006.
[24] B. R. Bakshi, "Multiscale PCA with application to multivariate statistical process monitoring," AIChE Journal, vol. 44, no. 7, pp. 1596–1610, 1998.
[25] C. S. Burrus, R. A. Gopinath, and H. Guo, Introduction to Wavelets and Wavelet Transforms, China Machine Press, 2008.
Copyright
Copyright © 2016 Fei Gao and Jiangang Lv. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.