Mathematical Problems in Engineering
Volume 2014 (2014), Article ID 824765, 7 pages
http://dx.doi.org/10.1155/2014/824765
Research Article

A Novel Improved ELM Algorithm for a Real Industrial Application

School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China

Received 2 December 2013; Accepted 29 January 2014; Published 16 April 2014

Academic Editor: Ramachandran Raja

Copyright © 2014 Hai-Gang Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

It is well known that feedforward neural networks meet a number of difficulties in applications because of their slow learning speed. The extreme learning machine (ELM) is a new single-hidden-layer feedforward neural network method aiming at improving the training speed. Nowadays the ELM algorithm has received wide application owing to its good generalization performance combined with fast learning speed. However, there are still several problems that need to be solved in ELM. In this paper, a new improved ELM algorithm named R-ELM is proposed to handle the multicollinearity problem appearing in the calculation of the ELM algorithm. The proposed algorithm is employed in bearing fault detection using stator current monitoring. Simulation results show that the R-ELM algorithm has better stability and generalization performance compared with the original ELM and other neural network methods.

1. Introduction

In the last few years, feedforward neural networks have received a very wide range of applications and development. However, their applications have encountered many restrictions because of the slow training speed [1–3]. The most widely used method for training feedforward neural networks is the gradient-based learning algorithm. Huang et al. pointed out that all the parameters of the network are tuned iteratively by the gradient-based learning algorithm, which is the main reason for the slow training speed. On the other hand, feedforward neural networks have a very high chance of falling into local minima [4]. In order to deal with these two shortcomings, Huang et al. proposed the extreme learning machine (ELM) algorithm. ELM is a novel single-hidden-layer feedforward neural network in which the input weights and the biases of the hidden nodes are generated randomly without tuning and the output weights are determined analytically. Compared with traditional feedforward network learning algorithms such as the back-propagation (BP) algorithm, ELM has the following advantages: fast training speed, good generalization performance, partial avoidance of the local minima problem, and no need for manual intervention such as setting a stopping criterion or a learning rate. In summary, the ELM algorithm overcomes several shortcomings of feedforward networks. Unfortunately, the multicollinearity problem in the ELM algorithm becomes a restriction in many complex industrial processes, as it deteriorates the generalization performance. Although Huang et al. [5, 6] noted the multicollinearity problem appearing in the calculation and gave a simple remedy based on ridge regression, that remedy lacks a rigorous mathematical justification, and a more rational approach requires research, which motivates us.

Induction motors play a pivotal role in industrial production. The occurrence of motor faults often causes a lot of damage to property and even threatens the lives of workers [7, 8]. Figure 1 presents the occurrence frequencies of different motor faults, showing that bearing faults are the most common, accounting for about 40%–50% of motor damage [9]. Bearing fault detection is a challenging and greatly significant problem, which has attracted many researchers' attention, and thus many valuable results have been proposed recently [10–14]. Detection of bearing faults using vibration signals is a very popular approach [10, 11]. However, sensor installation changes the motor's mechanical body, increasing the difficulty of on-line implementation. In the late 20th century, artificial neural networks (ANNs) were employed in bearing fault detection [12]. Their shortcomings, such as being time-consuming and prone to converging to local minima, limited their adoption. Recently, many researchers have realized that bearing faults affect the stator current spectrum [13, 14]. In this paper, bearing fault detection using stator current monitoring is employed to verify the effectiveness and robustness of our proposed ELM algorithm.

Figure 1: The probability map of different types of faults in induction motors.

In this paper, a new improved ELM algorithm is proposed in order to overcome the multicollinearity problem. Compared with the traditional ELM algorithm, it has smaller variance and mean square error (MSE). In order to test and verify its effectiveness, the improved ELM algorithm is employed in bearing fault detection using stator current monitoring. Furthermore, our improved ELM algorithm shows outstanding fault identification ability compared with other common classification algorithms, such as BP and SVM.

The following sections are organized as follows: the basic theory of the proposed improved ELM algorithm is presented in Section 2. Section 3 describes the current research on bearing faults. Simulation experiments are given in Section 4. Section 5 summarizes the conclusions of this paper.

2. The Improved Extreme Learning Machine Algorithm

Extreme learning machine (ELM) is an efficient training algorithm for single-hidden-layer feedforward neural networks (SLFNs). ELM randomly generates the input weights and the biases of the hidden nodes while determining the output weights according to the theory of least squares (LS). It can quickly reach the training results without manual intervention and simultaneously gets rid of the puzzle of local minima [1, 4]. So far, the ELM algorithm has received wide application, for example, in multicategory classification problems in the cancer diagnosis area [15], in classifying mental tasks from different subjects [16], and in fault diagnosis [17]. According to the theory of Huang et al., the ELM algorithm can also be applied in human action recognition, location positioning systems, human-computer interfaces, security and data privacy, and so on. As a result of the least squares theory and the random generation of the input weights together with the biases of the hidden nodes, the ELM algorithm suffers from multicollinearity problems, which means that the results have large variance and mean square error. This section proposes an improved ELM algorithm named R-ELM to overcome the multicollinearity problem.

2.1. The Review of Ordinary ELM Algorithm

Suppose there are $N$ samples $(x_i, t_i)$, $i = 1, \dots, N$, where $x_i \in \mathbb{R}^n$ denotes the $n$-dimensional feature vector of the $i$th sample and $t_i \in \mathbb{R}^m$ denotes the target vector. The mathematical model of an SLFN with $L$ hidden nodes is as follows:
$$\sum_{j=1}^{L} \beta_j\, g(w_j \cdot x_i + b_j) = o_i, \quad i = 1, \dots, N,$$
where $w_j$ is the input weight vector connecting the $j$th hidden node and the input nodes, $\beta_j$ is the output weight vector connecting the $j$th hidden node and the output nodes, and $b_j$ is the bias of the $j$th hidden node. $w_j \cdot x_i$ denotes the inner product of $w_j$ and $x_i$, and $o_i$ is the output vector. $g(\cdot)$ represents the activation function here.

The ELM algorithm aims at finding the optimal output weights $\hat{\beta}$ that have the minimum mean square error (MSE), such that
$$\hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{N} \Big\| \sum_{j=1}^{L} \beta_j\, g(w_j \cdot x_i + b_j) - t_i \Big\|^2,$$
where $\hat{\beta}$ is the estimation of $\beta$.

The above equations can be written in matrix form as $H\beta = T$, where
$$H = \begin{bmatrix} g(w_1 \cdot x_1 + b_1) & \cdots & g(w_L \cdot x_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(w_1 \cdot x_N + b_1) & \cdots & g(w_L \cdot x_N + b_L) \end{bmatrix}_{N \times L}, \quad \beta = \begin{bmatrix} \beta_1^T \\ \vdots \\ \beta_L^T \end{bmatrix}_{L \times m}, \quad T = \begin{bmatrix} t_1^T \\ \vdots \\ t_N^T \end{bmatrix}_{N \times m}.$$

According to the theory of least squares, the output weights can be estimated as $\hat{\beta} = H^{\dagger} T$, where $H^{\dagger}$ is the Moore-Penrose generalized inverse of $H$.

There are several methods to calculate the Moore-Penrose generalized inverse. The ELM algorithm makes use of singular value decomposition (SVD), where $H = U \Sigma V^T$. So $H^{\dagger} = V \Sigma^{\dagger} U^T$ and hence $\hat{\beta} = V \Sigma^{\dagger} U^T T$.
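As an illustrative sketch (not code from the paper), the SVD route to the generalized inverse can be written in a few lines of NumPy; the tolerance `tol` for treating singular values as zero is an assumption of this example:

```python
import numpy as np

def pinv_svd(H, tol=1e-10):
    # SVD: H = U @ diag(s) @ Vt, so the Moore-Penrose inverse is
    # H^+ = V @ diag(s^+) @ U^T, inverting only the nonzero singular values.
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s_inv = np.where(s > tol, 1.0 / s, 0.0)
    return Vt.T @ (s_inv[:, None] * U.T)
```

For a full-rank $H$ this agrees with `np.linalg.pinv`; near-zero singular values are exactly where the multicollinearity trouble discussed later arises.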

The ELM algorithm steps can be summarized as follows.

Step 1. Assign arbitrary input weights $w_j$ and biases of the hidden layer nodes $b_j$, $j = 1, \dots, L$.

Step 2. Calculate the hidden layer output matrix $H$.

Step 3. Calculate the output weights: $\hat{\beta} = H^{\dagger} T$.
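The three steps above can be sketched in Python/NumPy. This is a minimal illustration, not the authors' implementation; the sigmoid activation and the network size in the example are assumptions:

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Ordinary ELM training for a single-hidden-layer feedforward network."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_hidden, X.shape[1]))  # Step 1: random input weights
    b = rng.standard_normal(n_hidden)                # Step 1: random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))         # Step 2: hidden layer output matrix
    beta = np.linalg.pinv(H) @ T                     # Step 3: beta = H^+ T (least squares)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta
```

Note that only `beta` is fitted; the random `W` and `b` are kept fixed, which is what makes ELM training a single least squares solve.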

The above is the theory of the ordinary ELM algorithm. Some properties of the solution will now be discussed [18, 19]. Considering the noisy environment of a real industrial process when obtaining the data, the matrix model above should be modified to $T = H\beta + \varepsilon$, where $\varepsilon$ represents the model uncertainty or noise disturbance.

For the modified model, assuming that $\varepsilon$ has zero mean and variance $\sigma^2$, the least squares solution $\hat{\beta} = H^{\dagger} T$ has the following properties: (1) $E(\hat{\beta}) = \beta$; (2) $\operatorname{Var}(\hat{\beta}) = \sigma^2 \sum_{i=1}^{L} 1/\lambda_i$, where $\lambda_i$ is the $i$th eigenvalue of $H^T H$; (3) $\operatorname{MSE}(\hat{\beta}) = \operatorname{Var}(\hat{\beta})$, since the estimator is unbiased.

The matrix $H^T H$ may not always be nonsingular; that is to say, when the matrix $H$ is multicollinear, some eigenvalues $\lambda_i$ will tend to zero, while $\operatorname{Var}(\hat{\beta})$ and $\operatorname{MSE}(\hat{\beta})$ will become large, which affects the stability and generalization of the solution.
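A small numerical experiment (ours, not the paper's) illustrates the point: when two columns of $H$ are nearly collinear, the smallest eigenvalue of $H^T H$ collapses toward zero and the variance term $\sum_i 1/\lambda_i$ explodes:

```python
import numpy as np

rng = np.random.default_rng(1)
H_good = rng.standard_normal((100, 5))
H_bad = H_good.copy()
H_bad[:, 4] = H_bad[:, 3] + 1e-6 * rng.standard_normal(100)  # nearly collinear columns

for name, H in (("well-conditioned", H_good), ("multicollinear", H_bad)):
    lam = np.linalg.eigvalsh(H.T @ H)
    # sum(1/lambda_i) is proportional to Var(beta-hat) for the LS estimator
    print(f"{name}: min eigenvalue = {lam.min():.3e}, sum(1/lambda) = {(1.0 / lam).sum():.3e}")
```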

Generally speaking, data obtained from the field tend to exhibit multicollinearity problems. In the next subsection, one of the main results of our paper will be presented. Following the theory of ridge regression for overcoming the multicollinearity problem in the least squares method, we call our improved algorithm R-ELM.

2.2. The Improved R-ELM Algorithm

As discussed above, the ELM algorithm encounters severe instability and bad performance when meeting multicollinear data. In our proposed R-ELM algorithm, the $LDL^T$ decomposition is applied to the symmetric matrix $H^T H$. The $LDL^T$ decomposition (also called $LDL^T$ factorization) factors a symmetric matrix as the product of a lower triangular matrix ($L$), a diagonal matrix ($D$), and the transpose $L^T$ of the lower triangular matrix. During the decomposition, we set a minimum threshold on the singular (near-zero) elements of the matrix $D$, which gets rid of the puzzle of the multicollinearity problem.

The $LDL^T$ decomposition of a symmetric matrix $A$ ($A = H^T H$ here) is as follows: $A = LDL^T$, where
$$L = \begin{bmatrix} 1 & & & \\ l_{21} & 1 & & \\ \vdots & \ddots & \ddots & \\ l_{L1} & \cdots & l_{L,L-1} & 1 \end{bmatrix}, \quad D = \operatorname{diag}(d_1, d_2, \dots, d_L).$$

Each element of $L$ and $D$ is sequentially calculated using an iterative approach; that is,
$$d_j = a_{jj} - \sum_{k=1}^{j-1} l_{jk}^2\, d_k, \qquad l_{ij} = \frac{1}{d_j}\Big(a_{ij} - \sum_{k=1}^{j-1} l_{ik}\, l_{jk}\, d_k\Big), \quad i > j,$$
where $a_{ij}$ is the element of the original matrix $A = H^T H$.

After the decomposition, whether the original matrix has a multicollinearity problem is determined by the values of the matrix $D$. If the values of some elements $d_j$ are close to zero, the original matrix is multicollinear. In order to obtain a robust matrix, a modified algorithm to calculate the values of $L$ and $D$ is given: any pivot falling below the threshold is replaced, $\tilde{d}_j = \max(d_j, \mu)$, where $\mu$ is an appropriate positive number.
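A sketch of the thresholded $LDL^T$ factorization in NumPy follows. The clamping rule $d_j \leftarrow \max(d_j, \mu)$ and the default value of `mu` are our reading of the modification, not code from the paper:

```python
import numpy as np

def ldl_threshold(A, mu=1e-3):
    """LDL^T factorization of a symmetric matrix A; pivots d_j that fall
    below the threshold mu are raised to mu to suppress multicollinearity."""
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = A[j, j] - np.sum(L[j, :j] ** 2 * d[:j])   # pivot from previous columns
        if d[j] < mu:                                    # near-zero pivot: clamp it
            d[j] = mu
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - np.sum(L[i, :j] * L[j, :j] * d[:j])) / d[j]
    return L, d
```

With `mu` small enough that no pivot is clamped, `L @ np.diag(d) @ L.T` reproduces `A` exactly; clamping kicks in only for ill-conditioned inputs.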

After the decomposition, the output weights can be calculated in the following new way:
$$\hat{\beta}_R = (L \tilde{D} L^T)^{-1} H^T T,$$
where $\tilde{D} = \operatorname{diag}(\tilde{d}_1, \dots, \tilde{d}_L)$ is the thresholded diagonal matrix; the ratio $\tilde{d}_j / d_j$ equals or is close to 1 except for the thresholded singular elements.

Assume that the matrix $\tilde{D}$ can be decomposed into two parts, that is, $\tilde{D} = \tilde{D}^{1/2} \tilde{D}^{1/2}$; if some elements have negative values, a complex decomposition is to be considered. Then recalculate the performance indicators: (1) $E(\hat{\beta}_R) \neq \beta$; (2) $\operatorname{Var}(\hat{\beta}_R) = \sigma^2 \sum_{i=1}^{L} 1/\tilde{\lambda}_i$, where $\tilde{\lambda}_i$ is the $i$th eigenvalue of $L \tilde{D} L^T$; (3) $\operatorname{MSE}(\hat{\beta}_R) = \operatorname{Var}(\hat{\beta}_R) + \|E(\hat{\beta}_R) - \beta\|^2$.

Setting a threshold on the values of the elements in the matrix $D$ makes sure that $\operatorname{Var}(\hat{\beta}_R) \le \operatorname{Var}(\hat{\beta})$ and $\operatorname{MSE}(\hat{\beta}_R) \le \operatorname{MSE}(\hat{\beta})$. Additionally, as compensation, the estimation is no longer unbiased ($E(\hat{\beta}_R) \neq \beta$).
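Putting the pieces together, the R-ELM output weight computation can be sketched as follows. This is our illustrative implementation; the factorization loop and the threshold are assumptions consistent with the description above:

```python
import numpy as np

def r_elm_beta(H, T, mu=1e-3):
    """R-ELM output weights: factor H^T H = L D L^T, clamp near-zero pivots
    of D to mu, then solve (L D~ L^T) beta = H^T T by substitution."""
    A = H.T @ H
    n = A.shape[0]
    Lm = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = A[j, j] - np.sum(Lm[j, :j] ** 2 * d[:j])
        if d[j] < mu:
            d[j] = mu
        for i in range(j + 1, n):
            Lm[i, j] = (A[i, j] - np.sum(Lm[i, :j] * Lm[j, :j] * d[:j])) / d[j]
    y = np.linalg.solve(Lm, H.T @ T)              # forward step: L y = H^T T
    return np.linalg.solve(d[:, None] * Lm.T, y)  # back step: (D~ L^T) beta = y
```

When $H$ is well conditioned no pivot is clamped and the result coincides with the ordinary least squares solution; for multicollinear $H$ the clamped pivots bound the variance at the price of a small bias.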

Remark 1. Unlike Huang's method based on ridge regression, which adds a constant to the diagonal of the matrix $H^T H$ (in effect shifting every eigenvalue), the proposed R-ELM algorithm sets a proper threshold only on the singular elements of the matrix $D$ obtained from the $LDL^T$ decomposition of $H^T H$. In this sense, our improved algorithm is more reasonable.

3. Stator Current Feature Extraction of Bearing Fault

The majority of electrical machines employ ball or rolling bearings, which consist of two rings, one inner and one outer. Even under normal operating conditions with balanced load and good alignment, fatigue failure can still take place, progressing from below the raceway and rolling-element surfaces outward, which may lead to increased vibration and noise levels. With continued stressing, flaking or spalling of the bearing might occur, and fragments of the material tend to break loose [7, 9, 10].

Many researchers obtain vibration signals from the motor and then extract spectrum information through the FFT or wavelet transform to identify bearing faults. Considering the inconvenience of collection and analysis of vibration, this paper employs the stator current signals. The relation between the vibration signal and the current signal is as follows:
$$f_{bng} = |f_e \pm m f_v|, \quad m = 1, 2, 3, \dots,$$
where $f_e$ is the electrical supply frequency, $f_v$ is one of the characteristic vibration frequencies, and $f_{bng}$ denotes the characteristic frequencies reflected in the stator current spectrum.

There are many kinds of bearing faults. Here we consider two main defects: (1) for outer bearing race defects, $f_o = (N_b/2) f_r \big[1 - (BD/PD)\cos\beta\big]$; (2) for inner bearing race defects, $f_i = (N_b/2) f_r \big[1 + (BD/PD)\cos\beta\big]$, where $BD$ and $PD$ are the ball diameter and the bearing pitch diameter, respectively, $f_r$ is the rotational speed in Hertz, $N_b$ is the number of balls in a bearing, and $\beta$ is the contact angle of the balls on the races. Figure 2 is a legend to the operating parameters of a ball bearing.
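The two defect-frequency formulas and the current-reflection relation translate directly into code; this helper is our illustration, with symbol names chosen to match the text:

```python
import math

def bearing_fault_freqs(n_balls, f_r, bd, pd, beta=0.0):
    """Characteristic defect frequencies (Hz) for outer and inner race faults.
    f_r: rotational speed in Hz; bd, pd: ball and pitch diameters; beta: contact angle (rad)."""
    ratio = (bd / pd) * math.cos(beta)
    f_outer = (n_balls / 2.0) * f_r * (1.0 - ratio)
    f_inner = (n_balls / 2.0) * f_r * (1.0 + ratio)
    return f_outer, f_inner

def reflected_current_freqs(f_e, f_v, m_max=3):
    """Frequencies at which a vibration component f_v shows up in the stator
    current spectrum: f_bng = |f_e +/- m * f_v|, m = 1..m_max."""
    return sorted(abs(f_e + sign * m * f_v)
                  for m in range(1, m_max + 1) for sign in (1, -1))
```

For instance, with $N_b = 9$, $f_r = 1735/60$ Hz, $BD/PD = 12/60$, and $\beta = 0$, this gives $f_o \approx 104.1$ Hz and $f_i \approx 156.15$ Hz.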

Figure 2: Ball bearing dimensions.

Here an experiment is given to prove that bearing faults can be detected through the monitoring of the stator current. A four-pole test motor with nine balls ($N_b = 9$) in the bearing is employed. Other parameters of the motor can be found in the data sheet [13]: the pitch diameter is about 60 mm ($PD \approx 60$ mm) and the ball diameter equals 12 mm ($BD \approx 12$ mm) approximately. Assume that the experimental motor runs at the rated shaft speed of 1735 rpm ($f_r \approx 28.9$ Hz) and has a contact angle of zero ($\beta = 0$). So $f_o \approx 104$ Hz and $f_i \approx 156$ Hz based on the two defect-frequency formulas above. For the broken bearing, we drill a hole through the outer race; inner race defects are not considered here. Figures 3 and 4 show the experimental results. As shown in Figures 3 and 4, the amplitudes of three characteristic frequency points above the fundamental frequency of the stator current are extracted as the inputs of the R-ELM algorithm. In order to get rid of the interference of noise, we select the characteristic points with the highest spectrum amplitude within the characteristic frequency band $f_{bng} \pm \Delta f$, where $\Delta f$ represents the bandwidth.

Figure 3: The stator current spectrum: healthy operation.
Figure 4: The stator current spectrum: faulty operation.

4. Simulative Results for Bearing Fault Detection

This section presents the simulation results for the proposed R-ELM algorithm applied to bearing fault detection. An experimental motor with the bearing fault mentioned in the above section is employed. The process of the simulation experiments is shown in Figure 5.

Figure 5: Schematic diagram of simulation experiments.
4.1. Experimental Preparation

All the simulations have been conducted in MATLAB 7.8.0 running on a desktop PC with an AMD Athlon II X2 250 processor (3.00 GHz) and 2 GB RAM. Two operating conditions are considered here: normal operation and operation with current noise pollution. Under 1 kHz sampling, 900 samples (500 for training and 400 for testing) are extracted from the current spectra of healthy and faulty bearings under each operating condition. All the data are normalized to a common range.

There are four performance indicators to measure the quality of an ELM algorithm: training time, testing time, training accuracy, and testing accuracy. In addition, the variance and mean square error are inversely related to the training and testing accuracy. In general, activation functions play an important role in the computation of neural networks. Table 1 lists the comparison results among four common activation functions (sig, sin, hardlim, and radbas) under the four performance indicators using our sampled data, from which we can see that the sig function has more outstanding performance than the other ones. The proposed R-ELM algorithm therefore uses the sig function as its activation function.
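For reference, the four activation functions of Table 1 can be written as follows. These are the usual MATLAB-style definitions of sig, sin, hardlim, and radbas; the paper does not spell them out:

```python
import numpy as np

# Common definitions of the four activation functions compared in Table 1.
activations = {
    "sig":     lambda x: 1.0 / (1.0 + np.exp(-x)),   # logistic sigmoid
    "sin":     np.sin,                               # sine
    "hardlim": lambda x: (x >= 0).astype(float),     # hard limit (step)
    "radbas":  lambda x: np.exp(-x ** 2),            # radial basis (Gaussian)
}
```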

Table 1: The choice of activation functions.
4.2. The Main Experimental Results

Figure 6 presents the stator current signals under the two operating conditions, where the dashed box indicates the faulty current signals. White noise is added to the current under normal operation through a current transformer to test the robustness of the proposed algorithm. Figure 7 depicts the distribution of the sampled data of both healthy and faulty bearings under the normal operating condition. As previously described, the stator current spectrum information from three characteristic frequency bands is sampled and extracted. Symbols with white shading represent the healthy operating condition, while the black symbols mean that something is wrong with the experimental bearing. What follows is the classification by R-ELM. Table 2 shows the simulation results of the R-ELM classifier in different operating situations, from which we can see that the R-ELM algorithm achieves a reliable classification accuracy satisfying the needs of on-line fault detection.

Table 2: Simulation results of classification using R-ELM for bearing fault detection.
Figure 6: The stator current signals: (a) normal operation; (b) operation with noise interference.
Figure 7: The distribution of sampling data under normal operation.

Table 3 lists the simulation results comparing the proposed R-ELM algorithm with other common methods, including ELM, under normal operation. The R-ELM algorithm shows outstanding classification ability, consuming less time and achieving a higher recognition rate than the BP and SVM algorithms. Referring to the comparison between ELM and R-ELM, we can see that R-ELM has smaller variance and mean square error (equivalently, a higher recognition rate) than the ordinary ELM algorithm, despite being slightly more time-consuming.

Table 3: Comparison of results of different classifiers.

5. Conclusions

This paper proposes an improved ELM algorithm named R-ELM. Through the $LDL^T$ decomposition of the matrix $H^T H$ and the thresholding of the elements of the matrix $D$, the proposed R-ELM algorithm can deal with the multicollinearity problems in applications of ELM. We employ the R-ELM algorithm in bearing fault detection, and two operating conditions are considered. The algorithm shows better performance in fault identification. Compared with other neural network methods, the R-ELM algorithm takes less time and performs better. Compared with the original ELM algorithm, the proposed R-ELM improves the recognition rate (reduces the variance and mean square error) at the cost of taking a little more time.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work has been supported by the National Natural Science Foundation of China (NSFC Grant no. 61333002) and Beijing Natural Science Foundation (Grant no. 4132065).

References

1. H. Zhang, Z. Wang, and D. Liu, "Global asymptotic stability of recurrent neural networks with multiple time-varying delays," IEEE Transactions on Neural Networks, vol. 19, no. 5, pp. 855–873, 2008.
2. H. Zhang, Z. Liu, G.-B. Huang, and Z. Wang, "Novel weighting-delay-based stability criteria for recurrent neural networks with time-varying delay," IEEE Transactions on Neural Networks, vol. 21, no. 1, pp. 91–106, 2010.
3. H. Zhang and Y. Wang, "Stability analysis of Markovian jumping stochastic Cohen-Grossberg neural networks with mixed time delays," IEEE Transactions on Neural Networks, vol. 19, no. 2, pp. 366–370, 2008.
4. G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
5. G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.
6. G.-B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107–122, 2011.
7. H. Liu, J. Wang, and C. Lu, "Rolling bearing fault detection based on the Teager energy operator and Elman neural network," Mathematical Problems in Engineering, vol. 2013, Article ID 498385, 10 pages, 2013.
8. H. Zhang, D. Liu, Y. Luo, and D. Wang, Adaptive Dynamic Programming for Control: Algorithms and Stability, Communications and Control Engineering Series, Springer, London, UK, 2013.
9. S. Nandi, H. A. Toliyat, and X. Li, "Condition monitoring and fault diagnosis of electrical motors—a review," IEEE Transactions on Energy Conversion, vol. 20, no. 4, pp. 719–729, 2005.
10. P. Konar and P. Chattopadhyay, "Bearing fault detection of induction motor using wavelet and Support Vector Machines (SVMs)," Applied Soft Computing Journal, vol. 11, no. 6, pp. 4203–4211, 2011.
11. J. R. Stack, T. G. Habetler, and R. G. Harley, "Effects of machine speed on the development and detection of rolling element bearing faults," IEEE Power Electronics Letters, vol. 1, no. 1, pp. 19–21, 2003.
12. R. R. Schoen, B. K. Lin, T. G. Habetler, J. H. Schlag, and S. Farag, "An unsupervised, on-line system for induction motor fault detection using stator current monitoring," IEEE Transactions on Industry Applications, vol. 31, no. 6, pp. 1280–1286, 1995.
13. R. R. Schoen, T. G. Habetler, F. Kamran, and R. G. Bartheld, "Motor bearing damage detection using stator current monitoring," IEEE Transactions on Industry Applications, vol. 31, no. 6, pp. 1274–1279, 1995.
14. S. Chen and T. A. Lipo, "Bearing currents and shaft voltages of an induction motor under hard- and soft-switching inverter excitation," IEEE Transactions on Industry Applications, vol. 34, no. 5, pp. 1042–1048, 1998.
15. R. Zhang, G.-B. Huang, N. Sundararajan, and P. Saratchandran, "Multicategory classification using an extreme learning machine for microarray gene expression cancer diagnosis," IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 4, no. 3, pp. 485–494, 2007.
16. N.-Y. Liang, P. Saratchandran, G.-B. Huang, and N. Sundararajan, "Classification of mental tasks from EEG signals using extreme learning machine," International Journal of Neural Systems, vol. 16, no. 1, pp. 29–38, 2006.
17. X.-F. Hu, Z. Zhao, S. Wang, F.-L. Wang, D.-K. He, and S.-K. Wu, "Multi-stage extreme learning machine for fault diagnosis on hydraulic tube tester," Neural Computing and Applications, vol. 17, no. 4, pp. 399–403, 2008.
18. R. Uemukai, "Small sample properties of a ridge regression estimator when there exist omitted variables," Statistical Papers, vol. 52, no. 4, pp. 953–969, 2011.
19. A. E. Hoerl and R. W. Kennard, "Ridge regression: biased estimation for nonorthogonal problems," Technometrics, vol. 42, no. 1, pp. 80–86, 2000.