Shock and Vibration / 2020 / Research Article | Open Access

Volume 2020 | Article ID 9096852 | https://doi.org/10.1155/2020/9096852

Yerui Fan, Chao Zhang, Yu Xue, Jianguo Wang, Fengshou Gu, "A Bearing Fault Diagnosis Using a Support Vector Machine Optimised by the Self-Regulating Particle Swarm", Shock and Vibration, vol. 2020, Article ID 9096852, 11 pages, 2020.

A Bearing Fault Diagnosis Using a Support Vector Machine Optimised by the Self-Regulating Particle Swarm

Academic Editor: Seung-Yong Ok
Received: 04 Oct 2019
Revised: 17 Jan 2020
Accepted: 04 Feb 2020
Published: 20 Mar 2020

Abstract

In this paper, a novel model for fault detection of rolling bearings is proposed. It is based on a high-performance support vector machine (SVM) developed with multifeature fusion and self-regulating particle swarm optimization (SRPSO). The fundamentals of the multikernel least squares support vector machine (MK-LS-SVM) are reviewed to identify a classifier that allows multidimensional features from empirical mode decomposition (EMD) to be fused with high generalization ability. The parameters of the MK-LS-SVM are then configured by the SRPSO for further performance improvement. Finally, the proposed model is evaluated through experiments and comparative studies, and the results prove its effectiveness in detecting and classifying bearing faults.

1. Introduction

Rolling bearings are basic components that are widely used in rotating machinery [1]. Rotating machinery generally operates under heavy load, so bearing failure can seriously degrade equipment performance and even damage the machine [2]. It is therefore necessary to study bearing fault diagnosis. Intelligent diagnosis does not require operators to stand by equipment that runs for long periods, which makes it suitable for monitoring sites with harsh environmental conditions, sparse population, or unsuitability for long-term residence. Dou pointed out that common intelligent diagnostic methods include the K-Nearest Neighbor (KNN) algorithm, the Probabilistic Neural Network (PNN), the support vector machine optimized by particle swarm optimization (PSO-SVM), and a Rule-Based Method (RBM) built on the MLEM2 algorithm and a new Rule Reasoning Mechanism (RRM) [3]. RBM is the fastest during calculation, but its identification accuracy is relatively low, and the training process of the particle swarm optimization algorithm is time-consuming [3]. However, the PSO-SVM model is fast during recognition, the time spent on training is of little concern in practice, and its recognition accuracy is high. For fault identification, the accuracy of the PSO-SVM model therefore satisfies our requirements.

As shown in Figure 1, the fault identification accuracy of a support vector machine is mainly affected by the sample feature information and by its parameters. A signal can be decomposed by empirical mode decomposition (EMD) to obtain intrinsic mode functions (IMFs). For feature extraction, Chen et al. proposed decomposing vibration signals by EMD and using the entropy of the IMFs as the feature vector [4]. Zhu et al. calculated hierarchical entropy (HE) through multiscale entropy and used it as the feature vector [5]. A single fault signal limits fault diagnosis, whereas multifeature fusion carries richer information [6]. Yu et al. proposed a fault diagnosis method based on multisensor information [7]. Multisensor acquisition gives a more comprehensive perception, but the installation position of each sensor must be designed: a sensor measuring bearing vibration should be installed as close to the bearing as possible, and in general it is very difficult to determine the location of every sensor in a multisensor fusion scheme. Xiang and Cen proposed an entropy fusion method based on kernel principal component analysis (KPCA): the energy entropy and singular entropy of the IMFs are first obtained by EMD of the signal, and the entropy values are then fused by KPCA to obtain the feature vectors [6]. Compared with multiposition sensors, multientropy fusion obtains comprehensive information without a complicated sensor measurement system.

For rolling bearings, the speed varies slightly under different loads, and classification accuracy is our main concern; good generalization is required for the SVM to perform well. In this paper, the energy entropy and permutation entropy of the IMFs are first obtained by applying EMD to the signal. The two entropy matrices are then fused through PCA to obtain the feature matrix. This method describes the bearing fault information more comprehensively and gives the SVM better performance.
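As an illustration, the fusion step can be sketched with plain PCA on the stacked entropy features. The function name, the toy data, and the component count below are illustrative assumptions, not the authors' implementation (which projects the fused entropy matrix in the same spirit):

```python
import numpy as np

def pca_fuse(features, n_components=2):
    """Fuse an (n_samples, n_features) entropy matrix by projecting
    onto the leading principal components (plain PCA sketch)."""
    X = np.asarray(features, dtype=float)
    X = X - X.mean(axis=0)                  # centre each feature
    cov = np.cov(X, rowvar=False)           # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]       # sort descending by variance
    W = eigvecs[:, order[:n_components]]    # top principal directions
    return X @ W                            # fused feature vectors

# toy example: 4 samples, each with 6 EE + 6 PE values stacked
rng = np.random.default_rng(0)
entropy_matrix = rng.random((4, 12))
fused = pca_fuse(entropy_matrix, n_components=3)
print(fused.shape)  # (4, 3)
```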

The classification accuracy of an SVM is limited by its kernel parameters, the weights between different kernel functions, and the penalty factor [8]. Traditional parameter optimization methods, including trial and error, grid search, and gradient descent, converge poorly and cannot guarantee the global optimum [9]. Zhang et al. proposed optimizing the SVM parameters by the intercluster distance (ICD) in the feature space [9]; ICD combines grid search with multiple cross-validation, so the method is cumbersome and its convergence is questionable. Particle swarm optimization (PSO) is a global random search algorithm that is easy to implement and computationally cheap [10], and it is widely used for optimizing SVMs [8, 11–13]. Zhu et al. optimized an SVM by PSO [5], and Deng optimized an SVM by an improved PSO [14]. To obtain better convergence, Chen et al. optimized the SVM parameters through chaotic PSO [4], which effectively improved the training process. However, because modeling multidimensional problems involves a high degree of nonlinearity, obtaining efficient SVMs for fault identification remains difficult.

LS-SVM is well suited to small-sample, nonlinear, and high-dimensional problems. For nonlinear problems in particular, however, the classification results of an SVM are limited by the choice of kernel function. Therefore, a novel intelligent bearing diagnosis method is proposed by combining the advantages of empirical mode decomposition, multifeature fusion, the SRPSO algorithm, and the least squares support vector machine.

In this paper, the parameters of the MK-LS-SVM are optimized by the SRPSO algorithm to obtain better performance. The remainder of the paper is organized as follows. Section 2 introduces the different feature vectors, their fusion method, and the MK-LS-SVM together with the parameters to be optimized. Section 3 introduces the SRPSO algorithm, including the construction of the objective function for fault identification. Section 4 presents the bearing fault diagnosis experiments, which show that the SRPSO-based multikernel LS-SVM effectively improves the identification accuracy. The last section summarizes the method.

2. Support Vector Machine Based Identification

2.1. Multikernel LSSVM

In fault diagnosis practice, it is usually difficult to obtain sufficient fault samples [5] for developing an intelligent system. Support vector machines can effectively classify small-sample, nonlinear signals and are widely used in mechanical fault identification [14]. The traditional multiclass recognition method combines many binary SVM classifiers and requires large-scale training [5]. The least squares support vector machine (LS-SVM), based on statistical learning theory and structural risk minimization, can be trained with fewer samples and avoids overfitting, giving high generalization accuracy [5, 8, 11, 15]. Among SVM kernels, the polynomial kernel is a global kernel with strong generalization ability but weak learning ability.

The Gaussian radial basis function kernel, by contrast, is a local kernel with good learning ability but weak generalization ability. Combining the merits of these kernel functions yields a multikernel least squares support vector machine (MK-LS-SVM) [10]. The basic idea of the support vector machine is to map the samples through a nonlinear function into a high-dimensional space in which they can be separated according to their attributes [16], as shown in Figure 2.

The LS-SVM nonlinear estimate is

y(x) = \sum_{i=1}^{N} \alpha_i K(x, x_i) + b,

where K(\cdot,\cdot) is a kernel function, b is an offset, and \alpha_i are the weights.

The Gaussian radial basis function kernel is

K(x, x_i) = \exp\!\left(-\frac{\lVert x - x_i \rVert^2}{2\sigma^2}\right).

The polynomial function kernel is defined as

K(x, x_i) = (x \cdot x_i + 1)^d.

The resulting mixed kernel function is

K(x, x_i) = \rho \exp\!\left(-\frac{\lVert x - x_i \rVert^2}{2\sigma^2}\right) + (1 - \rho)(x \cdot x_i + 1)^d,

where \rho \in [0, 1] is the weight of the Gaussian kernel relative to the polynomial kernel, d is the polynomial kernel parameter, and \sigma is the Gaussian kernel parameter.
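The mixed kernel above can be sketched directly; the function names and the sample inputs are illustrative:

```python
import math

def rbf_kernel(x, z, sigma):
    """Gaussian radial basis kernel exp(-||x - z||^2 / (2 sigma^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-sq / (2.0 * sigma ** 2))

def poly_kernel(x, z, d):
    """Polynomial kernel (x . z + 1)^d."""
    dot = sum(a * b for a, b in zip(x, z))
    return (dot + 1.0) ** d

def mixed_kernel(x, z, rho, sigma, d):
    """Convex combination of the two kernels; rho in [0, 1]."""
    return rho * rbf_kernel(x, z, sigma) + (1.0 - rho) * poly_kernel(x, z, d)

x, z = [0.2, 0.4], [0.1, 0.5]
print(mixed_kernel(x, z, rho=0.6, sigma=1.0, d=2))
```

The three arguments rho, sigma, and d are exactly the quantities that the SRPSO searches over in Section 3.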

2.2. Selecting the Feature Vector of MK-LSSVM

Owing to varying environmental conditions, the rolling bearing signal is nonlinear and nonstationary. Traditional time-domain and frequency-domain analyses mostly suit linear stationary signals [2, 4, 17], while empirical mode decomposition (EMD) is better suited to feature extraction from nonlinear, nonstationary signals [1, 18, 19]. The IMF energy entropy and IMF permutation entropy are obtained by EMD of the bearing fault data.

(1) Let the signal be x(i), i = 1, 2, \ldots, N.

① Permutation entropy [5]. The m-dimensional delay embedding of the signal x(i) is

X(i) = \{x(i), x(i+\tau), \ldots, x(i+(m-1)\tau)\},

where m is the embedding dimension and \tau is the time delay. The components of each m-dimensional delay vector X(i) are sorted in ascending order; when two components are equal, their original order is kept. The relative frequency of each resulting ordinal pattern \pi_j is

P_j = \frac{\#\{i : X(i) \text{ has pattern } \pi_j\}}{N - (m-1)\tau},

and the permutation entropy is

H_{PE} = -\sum_j P_j \ln P_j.

② Energy entropy [18]. EMD decomposes the signal into intrinsic mode functions:

x(t) = \sum_{i=1}^{n} c_i(t) + r_n(t).

The energy of each intrinsic mode component is

E_i = \int |c_i(t)|^2 \, dt,

the total energy is E = \sum_{i=1}^{n} E_i, the energy share of each component is p_i = E_i / E, and the energy entropy is

H_{EN} = -\sum_{i=1}^{n} p_i \ln p_i.

(2) Different speeds change the fault frequencies of the bearing parts and the load. The entropy values of the bearing at different rotational speeds are fused into the eigenvector. The fusion, shown in Figure 2, proceeds as follows:

The energy entropy and permutation entropy of the IMFs are obtained by decomposing the vibration signal through EMD.
The different fault types are given serial numbers.
Each fault feature is labelled with its fault type serial number.
The fault features and their labels at different rotational speeds are merged as the fusion feature vectors.
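The two entropy measures above can be sketched as follows. The EMD step itself is omitted (a library such as PyEMD would supply the IMFs), and the helper names and the parameter choices (m = 3, τ = 1) are illustrative assumptions:

```python
import math
from collections import Counter

def permutation_entropy(x, m=3, tau=1):
    """Permutation entropy of a 1-D sequence x with embedding
    dimension m and time delay tau (natural-log form)."""
    n = len(x) - (m - 1) * tau
    # ordinal pattern of each delay vector = argsort of its components
    patterns = Counter(
        tuple(sorted(range(m), key=lambda k: x[i + k * tau]))
        for i in range(n)
    )
    return -sum((c / n) * math.log(c / n) for c in patterns.values())

def energy_entropy(imfs):
    """EMD energy entropy: H = -sum p_i ln p_i with p_i = E_i / E,
    where E_i is the energy of the i-th IMF."""
    energies = [sum(v * v for v in imf) for imf in imfs]
    total = sum(energies)
    return -sum((e / total) * math.log(e / total)
                for e in energies if e > 0)

# toy signal: a tone plus a small alternating component
signal = [math.sin(0.3 * t) + 0.05 * ((-1) ** t) for t in range(200)]
print(permutation_entropy(signal, m=3, tau=1))
```

A strictly monotone sequence has a single ordinal pattern and therefore zero permutation entropy, which is a quick sanity check for the implementation.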

3. Self-Regulating PSO Optimized MKLS-SVM for Fault Identification

3.1. Self-Regulating PSO

The American social psychologist James Kennedy and the electrical engineer Russell Eberhart proposed particle swarm optimization in 1995 [20]. The basic idea is to assume a swarm of N particles in a D-dimensional space and to update the position and velocity of each particle continually.

The position of particle i in the D-dimensional space is

X_i = (x_{i1}, x_{i2}, \ldots, x_{iD}),

and its velocity is

V_i = (v_{i1}, v_{i2}, \ldots, v_{iD}).

The velocity of each particle is updated from three components: its previous velocity (\omega v_{id}^{t}), self-awareness (c_1 r_1 (p_{id} - x_{id}^{t})), and social awareness (c_2 r_2 (p_{gd} - x_{id}^{t})) [21]. The velocity update of particle i in generation t+1 in dimension d satisfies

v_{id}^{t+1} = \omega v_{id}^{t} + c_1 r_1 (p_{id} - x_{id}^{t}) + c_2 r_2 (p_{gd} - x_{id}^{t}),

and the position update satisfies

x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}.

In addition, each particle keeps its historical best position P_i = (p_{i1}, \ldots, p_{iD}), and the swarm keeps the historical global best position P_g = (p_{g1}, \ldots, p_{gD}).

The velocity is limited to the range [-v_{\max}, v_{\max}] and the position to [-x_{\max}, x_{\max}]. c_1 and c_2 are the acceleration coefficients, r_1 and r_2 are random numbers uniformly distributed in [0, 1], and \omega is the inertia weight; decreasing \omega linearly in each generation improves the convergence of the particle swarm optimization algorithm [21]. d = 1, 2, \ldots, D indexes the dimensions of the particle.
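A minimal sketch of these update equations, using the acceleration coefficients listed later in Section 3.2; the swarm sizes, bounds, and the sphere test objective are illustrative assumptions:

```python
import random

def pso_minimise(f, dim, n_particles=20, iters=50,
                 w=0.729, c1=1.49445, c2=1.49445, bound=5.0):
    """Plain PSO implementing the velocity/position updates above."""
    rng = random.Random(1)
    X = [[rng.uniform(-bound, bound) for _ in range(dim)]
         for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                    # personal best positions
    pf = [f(x) for x in X]                   # personal best values
    g = P[min(range(n_particles), key=lambda i: pf[i])][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pf[i]:
                pf[i], P[i] = fx, X[i][:]
                if fx < f(g):
                    g = X[i][:]
    return g, f(g)

best, val = pso_minimise(lambda x: sum(v * v for v in x), dim=2)
print(val)
```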

The performance of the traditional particle swarm optimization algorithm depends on preset parameters, so it easily falls into local optima [4]. Improvements proposed in recent years fall into four categories: (a) algorithms based on parameter setting, (b) algorithms based on neighborhood topology, (c) algorithms based on learning strategies, and (d) hybrid algorithms that mix learning strategies with other mechanisms. Among these, the learning-strategy type performs best [21]. As shown in Figure 3, the self-regulating particle swarm optimization algorithm, inspired by human cognitive decision-making, introduces two learning strategies. The first concerns the inertia weight: the inertia weight of the best particle is increased to accelerate its global exploration, while the remaining particles explore with a linearly decreasing inertia weight [21]. The second is that each particle selects its search direction according to its self-cognition [21].

The globally best particle searches only along its previous velocity direction and is not influenced by self-cognition (P_i) or social cognition (P_g). For the other particles, besides the previous velocity direction, the effects of self-cognition and social cognition on the velocity must also be considered. An ordinary particle perceives the global search direction through a uniformly distributed random number a in [0, 1]: it adopts the social-cognition direction when a exceeds a set threshold \lambda and abandons it otherwise. The threshold thus screens the use of social cognition: the larger \lambda is, the smaller the effect of social cognition on ordinary particles, and the smaller \lambda is, the greater that effect. In other words, for the globally best particle the logical values of both the self-cognition and social-cognition terms are 0; for every other particle the self-cognition term has logical value 1, and the social-cognition term has logical value 1 when the uniform random number a satisfies the threshold \lambda and 0 when it does not.

3.2. Implementation Steps

(1) Initialize the position and velocity of each particle.
(2) Calculate the fitness value of each particle.
(3) Set each initial particle position as its historical best position P_i, and compare the fitness values to obtain the initial historical global best position P_g.
(4) Calculate the self-regulating inertia weight \omega_i of each particle. For the best particle,

\omega_i^{t+1} = \omega_i^{t} + \eta\,\Delta\omega,

and for the other particles,

\omega_i^{t+1} = \omega_i^{t} - \Delta\omega,

where \Delta\omega = (\omega_I - \omega_F)/N_t, \omega_I is the initial value of the inertia weight, \omega_F is its termination value, N_t is the number of iterations, and \eta is a constant that controls the acceleration rate.
(5) Update the particle velocity and position:

v_{id}^{t+1} = \omega_i v_{id}^{t} + c_1 r_1 p^{se}(p_{id} - x_{id}^{t}) + c_2 r_2 p^{so}(p_{gd} - x_{id}^{t}),

x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1},

where c_1 and c_2 are the acceleration coefficients, r_1 and r_2 are random numbers in the range (0, 1), and p^{se} and p^{so} are the self-cognition and social-cognition logical values: for the best particle, p^{se} = p^{so} = 0; for the other particles, p^{se} = 1 and

p^{so} = 1 if a > \lambda, otherwise p^{so} = 0,

where a is a uniform random number and \lambda is the set threshold, typically 0.5.
(6) Calculate the fitness value of each particle after updating its position.
(7) Update the best position of each particle and the global best position of the swarm.
(8) If the termination condition is not met, return to step (4).

In this paper, r_1 and r_2 are random numbers in the range (0, 1), c_1 = c_2 = 1.49445, and \lambda = 0.5.
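Steps (1)–(8) can be sketched as follows. The inertia-weight bounds, η, the random seed, and the stand-in sphere objective are illustrative assumptions; the original SRPSO is specified in [21]:

```python
import random

def srpso_minimise(f, dim, n=20, iters=60,
                   w_init=1.05, w_final=0.5, eta=1.0,
                   c1=1.49445, c2=1.49445, lam=0.5, bound=5.0):
    """Self-regulating PSO sketch: the best particle increases its
    inertia weight and ignores the cognition terms; the others
    decrease the weight linearly and use social cognition only
    when a uniform random number exceeds lam."""
    rng = random.Random(7)
    dw = (w_init - w_final) / iters          # per-iteration weight step
    W = [w_init] * n
    X = [[rng.uniform(-bound, bound) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                    # personal bests
    pf = [f(x) for x in X]
    g_i = min(range(n), key=lambda i: pf[i])
    g, gf = P[g_i][:], pf[g_i]               # global best
    for _ in range(iters):
        for i in range(n):
            is_best = pf[i] <= gf            # holder of the global best
            W[i] = W[i] + eta * dw if is_best else W[i] - dw
            p_se = 0.0 if is_best else 1.0
            p_so = 0.0 if is_best else (1.0 if rng.random() > lam else 0.0)
            for d in range(dim):
                V[i][d] = (W[i] * V[i][d]
                           + c1 * rng.random() * p_se * (P[i][d] - X[i][d])
                           + c2 * rng.random() * p_so * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pf[i]:
                pf[i], P[i] = fx, X[i][:]
                if fx < gf:
                    g, gf = X[i][:], fx
    return g, gf

best, val = srpso_minimise(lambda x: sum(v * v for v in x), dim=2)
print(val)
```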

3.3. The SRPSO Optimized MKLS-SVM Model

The SRPSO-optimized MK-LS-SVM fault diagnosis process is shown in Figure 4. Equation (8) shows that the multikernel least squares support vector machine has three parameters to be tuned: the Gaussian radial basis kernel parameter \sigma, the polynomial kernel parameter d, and the kernel weight \rho. The energy entropy and permutation entropy are obtained by empirical mode decomposition of the bearing fault signals at different rotational speeds, and the entropy values at the different speeds serve as the eigenvectors for training and testing the multikernel least squares support vector machine. On the CEC2005 unimodal, basic multimodal, expanded multimodal, and hybrid test functions, SRPSO shows good convergence [15]. SRPSO can therefore find suitable parameters for the multikernel support vector machine, and appropriate parameters reduce the SVM classification error. The ratio of the number of correctly classified samples to the number of sample signals under given parameters is taken as the fitness target of the particle swarm; the SRPSO optimizes the SVM parameters by optimizing this fitness value on the training samples. Finally, the test samples are fed into the trained SVM to obtain the classification accuracy on the test set.
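The fitness evaluation can be sketched with a stand-in classifier. Here a kernel 1-nearest-neighbour rule replaces the MK-LS-SVM purely for illustration, and the toy two-class data are assumptions; only the structure (candidate parameters in, error ratio out) mirrors the procedure described above:

```python
import math, random

def mixed_kernel(x, z, rho, sigma, d):
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    dot = sum(a * b for a, b in zip(x, z))
    return rho * math.exp(-sq / (2 * sigma ** 2)) + (1 - rho) * (dot + 1) ** d

def fitness(params, train, test):
    """Fitness = error ratio of a kernel 1-NN stand-in classifier
    (distance measured in the kernel-induced feature space)."""
    rho, sigma, d = params
    k = lambda x, z: mixed_kernel(x, z, rho, sigma, d)
    errors = 0
    for x, y in test:
        # squared feature-space distance: k(x,x) - 2 k(x,z) + k(z,z)
        pred = min(train,
                   key=lambda tz: k(x, x) - 2 * k(x, tz[0]) + k(tz[0], tz[0]))[1]
        errors += (pred != y)
    return errors / len(test)

# toy data: two well-separated clusters standing in for two fault types
rng = random.Random(3)
cls0 = [([rng.gauss(0, 0.3), rng.gauss(0, 0.3)], 0) for _ in range(20)]
cls1 = [([rng.gauss(2, 0.3), rng.gauss(2, 0.3)], 1) for _ in range(20)]
data = cls0 + cls1
train, test = data[::2], data[1::2]
err = fitness((0.6, 1.0, 2), train, test)
print(err)
```

The SRPSO would call `fitness` with each candidate (\rho, \sigma, d) triple and keep the parameters with the lowest error.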

4. Fault Detection

In order to verify the effectiveness of the proposed method, the experimental data of the Electrical Engineering Laboratory at Case Western Reserve University were selected [22], which have been explored by many researchers. The tested bearing was a drive end bearing Type 6205-2RS.

4.1. Experimental Data

The sampling frequency is 12,000 Hz, and the bearing defect diameters are 0.007″, 0.014″, 0.021″, and 0.028″. For each originally collected signal, which represents one working condition, the first 120,000 points (a sampling time of 10 s) were divided into 50 subsignals of 2,400 points each (a sampling time of 0.2 s). The subsignals from all working conditions form the signal set. One sample randomly selected from the vibration signal set is shown in Figure 5, and its IMFs computed through EMD are shown in Figure 6.
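The segmentation just described can be sketched as follows (the stand-in record is an assumption; the real input would be one measured vibration channel):

```python
# split one 10 s record (120,000 points at 12 kHz) into
# 50 subsignals of 2,400 points (0.2 s) each, as described above
fs = 12_000                       # sampling frequency, Hz
record = list(range(120_000))     # stand-in for one measured channel
sub_len = 2_400                   # points per subsignal
subsignals = [record[i:i + sub_len]
              for i in range(0, len(record), sub_len)]
print(len(subsignals), len(subsignals[0]), len(subsignals[0]) / fs)
# 50 2400 0.2
```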

As Figure 6 shows, the amplitudes of the 7th and 8th IMF components are very small compared with the other IMF components, so taking the first six IMF components is enough to express the original signal. The entropy values of the IMFs were obtained through EMD; some of them are listed below. The IMF energy entropy is shown in Tables 1–4.
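A simple energy-based criterion in the same spirit can sketch this truncation; it is a hedged illustration with a made-up threshold, not the authors' exact rule for keeping six IMFs:

```python
def select_imfs(imfs, energy_ratio=0.99):
    """Keep the leading IMFs that together carry `energy_ratio`
    of the total energy; trailing low-amplitude IMFs are dropped."""
    energies = [sum(v * v for v in imf) for imf in imfs]
    total = sum(energies)
    kept, acc = [], 0.0
    for imf, e in zip(imfs, energies):
        kept.append(imf)
        acc += e
        if acc >= energy_ratio * total:
            break
    return kept

# toy IMFs with rapidly decaying amplitude
imfs = [[a] * 10 for a in (8.0, 4.0, 2.0, 1.0, 0.05, 0.02)]
print(len(select_imfs(imfs)))  # 4
```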


Table 1: IMF energy entropy at 1730 rpm.

| Fault location | Fault mark | Feature number | E1/E | E2/E | E3/E | E4/E | E5/E | E6/E |
| Inner ring | 1 | 1 | 0.0007 | 0.1019 | 0.1359 | 0.0260 | 0.0158 | 0.0037 |
| Inner ring | 1 | 2 | 0.0012 | 0.1102 | 0.1693 | 0.0306 | 0.0135 | 0.0032 |
| Inner ring | 1 | 3 | 0.0008 | 0.0946 | 0.1475 | 0.0349 | 0.0146 | 0.0034 |
| Inner ring | 1 | 4 | 0.0009 | 0.1166 | 0.1488 | 0.0274 | 0.0145 | 0.0047 |
| Inner ring | 1 | 5 | 0.0005 | 0.0901 | 0.1214 | 0.0457 | 0.0097 | 0.0050 |
| Outer ring | 2 | 1 | 0.0028 | 0.2061 | 0.1835 | 0.0753 | 0.0258 | 0.0131 |
| Outer ring | 2 | 2 | 0.0029 | 0.1865 | 0.2030 | 0.1029 | 0.0558 | 0.0128 |
| Outer ring | 2 | 3 | 0.0023 | 0.1790 | 0.1838 | 0.0879 | 0.0269 | 0.0099 |
| Outer ring | 2 | 4 | 0.0018 | 0.1587 | 0.1754 | 0.0672 | 0.0101 | 0.0095 |
| Outer ring | 2 | 5 | 0.0023 | 0.1612 | 0.1939 | 0.1079 | 0.0189 | 0.0123 |
| Ball | 3 | 1 | 0.0694 | 0.5219 | 0.2552 | 0.0942 | 0.0645 | 0.0145 |
| Ball | 3 | 2 | 0.0626 | 0.5171 | 0.2399 | 0.1127 | 0.0784 | 0.0121 |
| Ball | 3 | 3 | 0.0710 | 0.5227 | 0.2569 | 0.1226 | 0.0879 | 0.0188 |
| Ball | 3 | 4 | 0.0801 | 0.5266 | 0.2808 | 0.1494 | 0.1120 | 0.0059 |
| Ball | 3 | 5 | 0.0846 | 0.5285 | 0.2663 | 0.1228 | 0.0894 | 0.0140 |


Table 2: IMF energy entropy at 1750 rpm.

| Fault location | Fault mark | Feature number | E1/E | E2/E | E3/E | E4/E | E5/E | E6/E |
| Inner ring | 1 | 1 | 0.0008 | 0.0763 | 0.1523 | 0.0355 | 0.0109 | 0.0027 |
| Inner ring | 1 | 2 | 0.0007 | 0.0785 | 0.1475 | 0.0338 | 0.0105 | 0.0038 |
| Inner ring | 1 | 3 | 0.0006 | 0.0727 | 0.1329 | 0.0412 | 0.0096 | 0.0046 |
| Inner ring | 1 | 4 | 0.0006 | 0.0727 | 0.1399 | 0.0274 | 0.0130 | 0.0044 |
| Inner ring | 1 | 5 | 0.0010 | 0.1108 | 0.1503 | 0.0837 | 0.0182 | 0.0090 |
| Outer ring | 2 | 1 | 0.0026 | 0.1709 | 0.2052 | 0.0809 | 0.0111 | 0.0024 |
| Outer ring | 2 | 2 | 0.0033 | 0.1696 | 0.2349 | 0.0589 | 0.0202 | 0.0116 |
| Outer ring | 2 | 3 | 0.0029 | 0.2007 | 0.1956 | 0.0616 | 0.0219 | 0.0118 |
| Outer ring | 2 | 4 | 0.0050 | 0.2380 | 0.2187 | 0.1510 | 0.0445 | 0.0118 |
| Outer ring | 2 | 5 | 0.0027 | 0.1926 | 0.2002 | 0.0465 | 0.0143 | 0.0063 |
| Ball | 3 | 1 | 0.0233 | 0.4209 | 0.2811 | 0.1723 | 0.0938 | 0.0065 |
| Ball | 3 | 2 | 0.0176 | 0.3860 | 0.2891 | 0.1074 | 0.0965 | 0.0121 |
| Ball | 3 | 3 | 0.0158 | 0.3736 | 0.2628 | 0.1848 | 0.1141 | 0.0233 |
| Ball | 3 | 4 | 0.0184 | 0.3991 | 0.2535 | 0.1641 | 0.0982 | 0.0197 |
| Ball | 3 | 5 | 0.0165 | 0.3694 | 0.3037 | 0.1283 | 0.0814 | 0.0314 |


Table 3: IMF energy entropy at 1772 rpm.

| Fault location | Fault mark | Feature number | E1/E | E2/E | E3/E | E4/E | E5/E | E6/E |
| Inner ring | 1 | 1 | 0.0010 | 0.0844 | 0.1648 | 0.0456 | 0.0155 | 0.0052 |
| Inner ring | 1 | 2 | 0.0011 | 0.1011 | 0.1650 | 0.0387 | 0.0112 | 0.0059 |
| Inner ring | 1 | 3 | 0.0008 | 0.1129 | 0.1307 | 0.0729 | 0.0147 | 0.0068 |
| Inner ring | 1 | 4 | 0.0007 | 0.0874 | 0.1410 | 0.0273 | 0.0097 | 0.0009 |
| Inner ring | 1 | 5 | 0.0009 | 0.0942 | 0.1564 | 0.0338 | 0.0106 | 0.0014 |
| Outer ring | 2 | 1 | 0.0061 | 0.1713 | 0.2980 | 0.1059 | 0.0230 | 0.0086 |
| Outer ring | 2 | 2 | 0.0059 | 0.2046 | 0.2863 | 0.0580 | 0.0251 | 0.0178 |
| Outer ring | 2 | 3 | 0.0041 | 0.1942 | 0.2418 | 0.1045 | 0.0326 | 0.0136 |
| Outer ring | 2 | 4 | 0.0041 | 0.1721 | 0.2573 | 0.0772 | 0.0126 | 0.0043 |
| Outer ring | 2 | 5 | 0.0030 | 0.1285 | 0.2425 | 0.0610 | 0.0142 | 0.0088 |
| Ball | 3 | 1 | 0.0014 | 0.1433 | 0.1569 | 0.0792 | 0.0505 | 0.0140 |
| Ball | 3 | 2 | 0.0038 | 0.2138 | 0.2023 | 0.1290 | 0.0856 | 0.0194 |
| Ball | 3 | 3 | 0.0056 | 0.2081 | 0.2540 | 0.1703 | 0.0855 | 0.0218 |
| Ball | 3 | 4 | 0.0011 | 0.1345 | 0.1370 | 0.0728 | 0.0536 | 0.0107 |
| Ball | 3 | 5 | 0.0009 | 0.1411 | 0.1103 | 0.0671 | 0.0296 | 0.0113 |


Table 4: IMF energy entropy at 1792 rpm.

| Fault location | Fault mark | Feature number | E1/E | E2/E | E3/E | E4/E | E5/E | E6/E |
| Inner ring | 1 | 1 | 0.0018 | 0.1077 | 0.2051 | 0.0478 | 0.0113 | 0.0049 |
| Inner ring | 1 | 2 | 0.0024 | 0.1174 | 0.2244 | 0.0365 | 0.0095 | 0.0040 |
| Inner ring | 1 | 3 | 0.0018 | 0.1049 | 0.1986 | 0.0983 | 0.0104 | 0.0034 |
| Inner ring | 1 | 4 | 0.0021 | 0.1132 | 0.2166 | 0.0460 | 0.0068 | 0.0029 |
| Inner ring | 1 | 5 | 0.0018 | 0.1247 | 0.1994 | 0.0565 | 0.0116 | 0.0034 |
| Outer ring | 2 | 1 | 0.0091 | 0.2671 | 0.2952 | 0.1591 | 0.0418 | 0.0198 |
| Outer ring | 2 | 2 | 0.0045 | 0.1727 | 0.2660 | 0.0821 | 0.0393 | 0.0227 |
| Outer ring | 2 | 3 | 0.0037 | 0.1649 | 0.2482 | 0.0673 | 0.0221 | 0.0130 |
| Outer ring | 2 | 4 | 0.0037 | 0.1431 | 0.2415 | 0.1476 | 0.0328 | 0.0121 |
| Outer ring | 2 | 5 | 0.0055 | 0.2264 | 0.1707 | 0.2395 | 0.0660 | 0.0168 |
| Ball | 3 | 1 | 0.0055 | 0.2555 | 0.2340 | 0.0789 | 0.0545 | 0.0093 |
| Ball | 3 | 2 | 0.0075 | 0.2626 | 0.2810 | 0.0841 | 0.0428 | 0.0235 |
| Ball | 3 | 3 | 0.0050 | 0.2552 | 0.2167 | 0.0825 | 0.0498 | 0.0205 |
| Ball | 3 | 4 | 0.0085 | 0.2851 | 0.2799 | 0.0860 | 0.0456 | 0.0137 |
| Ball | 3 | 5 | 0.0054 | 0.2305 | 0.2568 | 0.0858 | 0.0578 | 0.0142 |

The IMF permutation entropy is shown in Tables 5–8.

Table 5: IMF permutation entropy at 1730 rpm.

| Fault location | Fault mark | Feature number | PE1 | PE2 | PE3 | PE4 | PE5 | PE6 |
| Inner ring | 1 | 1 | 0.7421 | 0.5522 | 0.3756 | 0.2768 | 0.2020 | 0.1605 |
| Inner ring | 1 | 2 | 0.7502 | 0.5321 | 0.3656 | 0.2704 | 0.1943 | 0.1585 |
| Inner ring | 1 | 3 | 0.7609 | 0.5692 | 0.3836 | 0.2811 | 0.2035 | 0.1592 |
| Inner ring | 1 | 4 | 0.7531 | 0.5219 | 0.3538 | 0.2713 | 0.1896 | 0.1513 |
| Inner ring | 1 | 5 | 0.7399 | 0.5158 | 0.3623 | 0.2923 | 0.2125 | 0.1662 |
| Outer ring | 2 | 1 | 0.8071 | 0.6348 | 0.4092 | 0.3008 | 0.2386 | 0.1830 |
| Outer ring | 2 | 2 | 0.8112 | 0.6038 | 0.4010 | 0.3160 | 0.2720 | 0.2020 |
| Outer ring | 2 | 3 | 0.8092 | 0.6609 | 0.4291 | 0.3104 | 0.2516 | 0.1925 |
| Outer ring | 2 | 4 | 0.8039 | 0.6134 | 0.3875 | 0.3069 | 0.2239 | 0.1796 |
| Outer ring | 2 | 5 | 0.8060 | 0.5603 | 0.3848 | 0.2961 | 0.2292 | 0.1864 |
| Ball | 3 | 1 | 0.8002 | 0.5068 | 0.3674 | 0.2595 | 0.1890 | 0.1512 |
| Ball | 3 | 2 | 0.8222 | 0.5497 | 0.3853 | 0.2719 | 0.1948 | 0.1614 |
| Ball | 3 | 3 | 0.8141 | 0.5333 | 0.3847 | 0.2752 | 0.1882 | 0.1447 |
| Ball | 3 | 4 | 0.8083 | 0.5193 | 0.3939 | 0.2779 | 0.1920 | 0.1493 |
| Ball | 3 | 5 | 0.8133 | 0.5223 | 0.3826 | 0.2764 | 0.1964 | 0.1580 |


Table 6: IMF permutation entropy at 1750 rpm.

| Fault location | Fault mark | Feature number | PE1 | PE2 | PE3 | PE4 | PE5 | PE6 |
| Inner ring | 1 | 1 | 0.7495 | 0.5696 | 0.3642 | 0.2677 | 0.1942 | 0.1593 |
| Inner ring | 1 | 2 | 0.7450 | 0.5521 | 0.3716 | 0.2879 | 0.2047 | 0.1601 |
| Inner ring | 1 | 3 | 0.7441 | 0.5945 | 0.3809 | 0.2838 | 0.2090 | 0.1696 |
| Inner ring | 1 | 4 | 0.7461 | 0.5290 | 0.3646 | 0.2674 | 0.1993 | 0.1624 |
| Inner ring | 1 | 5 | 0.7484 | 0.6640 | 0.4109 | 0.2953 | 0.2185 | 0.1752 |
| Outer ring | 2 | 1 | 0.8049 | 0.6063 | 0.3937 | 0.2891 | 0.2121 | 0.1677 |
| Outer ring | 2 | 2 | 0.8204 | 0.5579 | 0.3696 | 0.2788 | 0.2213 | 0.1862 |
| Outer ring | 2 | 3 | 0.7964 | 0.6387 | 0.4171 | 0.3101 | 0.2303 | 0.1814 |
| Outer ring | 2 | 4 | 0.8118 | 0.6892 | 0.4554 | 0.3111 | 0.2659 | 0.1988 |
| Outer ring | 2 | 5 | 0.8131 | 0.5760 | 0.3771 | 0.2931 | 0.2178 | 0.1765 |
| Ball | 3 | 1 | 0.8398 | 0.5654 | 0.3797 | 0.2731 | 0.1975 | 0.1539 |
| Ball | 3 | 2 | 0.8119 | 0.5361 | 0.3678 | 0.2711 | 0.1963 | 0.1565 |
| Ball | 3 | 3 | 0.8337 | 0.5878 | 0.4060 | 0.2976 | 0.2080 | 0.1708 |
| Ball | 3 | 4 | 0.8205 | 0.5261 | 0.3880 | 0.2959 | 0.2024 | 0.1636 |
| Ball | 3 | 5 | 0.8170 | 0.5541 | 0.3793 | 0.2718 | 0.2028 | 0.1592 |


Table 7: IMF permutation entropy at 1772 rpm.

| Fault location | Fault mark | Feature number | PE1 | PE2 | PE3 | PE4 | PE5 | PE6 |
| Inner ring | 1 | 1 | 0.7494 | 0.5420 | 0.3565 | 0.2719 | 0.2092 | 0.1665 |
| Inner ring | 1 | 2 | 0.7520 | 0.5614 | 0.3637 | 0.2730 | 0.2116 | 0.1713 |
| Inner ring | 1 | 3 | 0.7358 | 0.6310 | 0.4070 | 0.3040 | 0.2162 | 0.1721 |
| Inner ring | 1 | 4 | 0.7491 | 0.5292 | 0.3607 | 0.2702 | 0.1970 | 0.1586 |
| Inner ring | 1 | 5 | 0.7446 | 0.5489 | 0.3622 | 0.2706 | 0.1932 | 0.1484 |
| Outer ring | 2 | 1 | 0.8194 | 0.5606 | 0.3798 | 0.2768 | 0.2112 | 0.1703 |
| Outer ring | 2 | 2 | 0.8142 | 0.5732 | 0.3699 | 0.2895 | 0.2287 | 0.1879 |
| Outer ring | 2 | 3 | 0.8070 | 0.5617 | 0.3677 | 0.2900 | 0.2190 | 0.1872 |
| Outer ring | 2 | 4 | 0.8110 | 0.6117 | 0.3783 | 0.2881 | 0.2200 | 0.1715 |
| Outer ring | 2 | 5 | 0.8144 | 0.5566 | 0.3699 | 0.2923 | 0.2293 | 0.1748 |
| Ball | 3 | 1 | 0.7992 | 0.5848 | 0.3893 | 0.2789 | 0.2013 | 0.1545 |
| Ball | 3 | 2 | 0.8085 | 0.5766 | 0.3903 | 0.2843 | 0.2063 | 0.1627 |
| Ball | 3 | 3 | 0.7860 | 0.6197 | 0.4063 | 0.2842 | 0.2023 | 0.1617 |
| Ball | 3 | 4 | 0.7733 | 0.5771 | 0.3853 | 0.2831 | 0.2027 | 0.1580 |
| Ball | 3 | 5 | 0.7840 | 0.5785 | 0.3933 | 0.2865 | 0.2106 | 0.1746 |


Table 8: IMF permutation entropy at 1792 rpm.

| Fault location | Fault mark | Feature number | PE1 | PE2 | PE3 | PE4 | PE5 | PE6 |
| Inner ring | 1 | 1 | 0.7363 | 0.5393 | 0.3587 | 0.2674 | 0.2076 | 0.1596 |
| Inner ring | 1 | 2 | 0.7416 | 0.5550 | 0.3508 | 0.2664 | 0.1955 | 0.1513 |
| Inner ring | 1 | 3 | 0.7297 | 0.5633 | 0.3662 | 0.2905 | 0.2084 | 0.1606 |
| Inner ring | 1 | 4 | 0.7299 | 0.5537 | 0.3622 | 0.2656 | 0.2065 | 0.1620 |
| Inner ring | 1 | 5 | 0.7298 | 0.5472 | 0.3685 | 0.2662 | 0.1969 | 0.1530 |
| Outer ring | 2 | 1 | 0.8217 | 0.6730 | 0.4356 | 0.3176 | 0.2488 | 0.1840 |
| Outer ring | 2 | 2 | 0.8146 | 0.6193 | 0.4082 | 0.2942 | 0.2299 | 0.1747 |
| Outer ring | 2 | 3 | 0.8148 | 0.5702 | 0.3795 | 0.2859 | 0.2282 | 0.1812 |
| Outer ring | 2 | 4 | 0.8258 | 0.5848 | 0.3800 | 0.2927 | 0.2262 | 0.1858 |
| Outer ring | 2 | 5 | 0.8124 | 0.6669 | 0.4430 | 0.3417 | 0.2810 | 0.2267 |
| Ball | 3 | 1 | 0.8019 | 0.5699 | 0.3901 | 0.2783 | 0.1957 | 0.1637 |
| Ball | 3 | 2 | 0.8041 | 0.5877 | 0.3886 | 0.2841 | 0.2191 | 0.1781 |
| Ball | 3 | 3 | 0.8053 | 0.5842 | 0.4002 | 0.3021 | 0.2279 | 0.1732 |
| Ball | 3 | 4 | 0.8083 | 0.5844 | 0.3758 | 0.2739 | 0.1967 | 0.1690 |
| Ball | 3 | 5 | 0.8183 | 0.5894 | 0.3998 | 0.2959 | 0.2115 | 0.1670 |

4.2. Fault Detection

The process of SVM classification is influenced by its parameters. Generally, PSO is used as a method to optimize parameters. To do this, we need to compare the performance of PSO and SRPSO. The parameters of LSSVM are optimized by PSO, as shown in Figure 7(a). The parameters of LSSVM are optimized by SRPSO, as shown in Figure 7(b).

Figures 7(a) and 7(b) show that SRPSO converges well when optimizing the SVM parameters: the optimal solution is obtained within six generations, so the result of the sixth generation is selected.

The study was divided into three parts:
(a) Obtaining the eigenvectors. The EE and PE of each subsignal are first obtained through EMD, and the entropy values are then fused to form the eigenvector.
(b) Obtaining the fault identification model. The parameters of the MK-LS-SVM are optimized by SRPSO, and the MK-LS-SVM model for identifying fault defects is constructed.
(c) Identifying the rolling bearing defect. The model is applied to the signals to complete the flaw detection.

To show its superiority, the proposed method is compared with three others. The first group is the method of [5], the second group is the method of [10], the third group is IPSO-LSSVM [14], and the fourth group is the proposed method.

Classification accuracy is the ratio of the number of correct classifications to the total number of samples. As Table 9 shows, the average classification accuracy of the SRPSO-optimized MK-LS-SVM proposed in this paper is 99.72%, higher than that of the other methods. The average recognition accuracy of case 1 is 97.75%, that of case 2 is 97.91–100%, and case 3 shows that the improved-PSO-optimized support vector machine with a single feature reaches 89.50%. The experimental results show that the proposed method improves the recognition accuracy of the SVM for rolling bearings and that the SRPSO-optimized MK-LS-SVM extracts the signal information comprehensively.


Table 9: Comparison of the fault classification results.

| Case number | Optimization type | Support vector machine parameters | Average classification accuracy |
| 1 | PSO optimized multiclass SVM + HE [5] | — | 97.75% |
| 2 | SVM with parameters optimized by ICD [10] | — | 97.91–100% |
| 3 | Improved PSO + LS-SVM [14] | — | 89.50% |
| 4 | The proposed method | 0.5975, 8.3853, 50.2970 | 99.72% |

The accuracy is the ratio of the number of correctly classified samples to all samples (correctly plus incorrectly classified).

5. Conclusions

Based on an analysis of existing fault diagnosis methods, this paper proposes an SRPSO-optimized MK-LS-SVM. The selection of the support vector machine kernels and the fusion of the signal features are described, and the parameters of the MK-LS-SVM are optimized by SRPSO. The results show that fusing the fault feature vectors improves the adaptability of the support vector machine and that the optimized MK-LS-SVM extracts more of the intrinsic information in the signal through the SRPSO. The classification accuracy of the SRPSO-optimized MK-LS-SVM is clearly improved.

Data Availability

The data were produced on the authors' laboratory test rig and are confidential, so the authors regret that they cannot be disclosed.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (51565046, 51965052, and 51865045) and Science and Technology Plan Project of Inner Mongolia Autonomous Region, China (KJJH007).

References

  1. J. Zheng, "Rolling bearing fault diagnosis based on partially ensemble empirical mode decomposition and variable predictive model-based class discrimination," Archives of Civil and Mechanical Engineering, vol. 16, no. 4, pp. 784–794, 2016.
  2. J. Zheng, H. Pan, and J. Cheng, "Rolling bearing fault detection and diagnosis based on composite multiscale fuzzy entropy and ensemble support vector machines," Mechanical Systems and Signal Processing, vol. 85, pp. 746–759, 2017.
  3. D. Dou and S. Zhou, "Comparison of four direct classification methods for intelligent fault diagnosis of rotating machinery," Applied Soft Computing, vol. 46, pp. 459–468, 2016.
  4. F. Chen, B. Tang, T. Song, and L. Li, "Multi-fault diagnosis study on roller bearing based on multi-kernel support vector machine with chaotic particle swarm optimization," Measurement, vol. 47, pp. 576–590, 2014.
  5. K. Zhu, X. Song, and D. Xue, "A roller bearing fault diagnosis method based on hierarchical entropy and support vector machine with particle swarm optimization algorithm," Measurement, vol. 47, pp. 669–675, 2014.
  6. D. Xiang and J. Cen, "Method of roller bearing fault diagnosis based on feature fusion of EMD entropy," Journal of Aerospace Power, vol. 30, no. 5, pp. 1149–1155, 2015.
  7. K. Yu, J. Tan, and L. Shan, "Rolling bearing fault diagnosis research based on multi-sensor information fusion," Instrument Technique and Sensor, vol. 7, pp. 97–102, 2016.
  8. J. A. K. Suykens, T. V. Gestel, J. D. Brabanter, B. D. Moor, and J. Vandewalle, Least Squares Support Vector Machines, World Scientific, Singapore, 2002.
  9. X. Zhang, Y. Liang, J. Zhou, and Y. Zang, "A novel bearing fault diagnosis model integrated permutation entropy, ensemble empirical mode decomposition and optimized SVM," Measurement, vol. 69, pp. 164–179, 2015.
  10. Y. Zhang, B. Tang, and P. Xiong, "Rolling element bearing life prediction based on multi-scale mutation particle swarm optimized multi-kernel least square support vector machine," Chinese Journal of Scientific Instrument, vol. 37, no. 11, pp. 2489–2496, 2016.
  11. Y. Lv, F. Hong, T. Yang, F. Fang, and J. Liu, "A dynamic model for the bed temperature prediction of circulating fluidized bed boilers based on least squares support vector machine with real operational data," Energy, vol. 124, pp. 284–294, 2017.
  12. C. Rajeswari, B. Sathiyabhama, S. Devendiran, and K. Manivannan, "Bearing fault diagnosis using wavelet packet transform, hybrid PSO and support vector machine," Procedia Engineering, vol. 97, pp. 1772–1783, 2014.
  13. S. M. H. Bamakan, H. Wang, and A. Zare Ravasan, "Parameters optimization for nonparallel support vector machine by particle swarm optimization," Procedia Computer Science, vol. 91, pp. 482–491, 2016.
  14. W. Deng, R. Yao, H. Zhao, X. Yang, and G. Li, "A novel intelligent diagnosis method using optimal LS-SVM with improved PSO algorithm," Soft Computing, vol. 23, no. 7, pp. 2445–2462, 2019.
  15. J. A. K. Suykens, J. De Brabanter, L. Lukas, and J. Vandewalle, "Weighted least squares support vector machines: robustness and sparse approximation," Neurocomputing, vol. 48, no. 1–4, pp. 85–105, 2002.
  16. C. Wang, X. Wang, C. Zhang, and Z. Xia, "Geometric correction based color image watermarking using fuzzy least squares support vector machine and Bessel K form distribution," Signal Processing, vol. 134, pp. 197–208, 2017.
  17. R. Yan, Y. Liu, and R. X. Gao, "Permutation entropy: a nonlinear statistical measure for status characterization of rotary machines," Mechanical Systems and Signal Processing, vol. 29, pp. 474–484, 2012.
  18. C. Zhang, J. Chen, and X. Guo, "A gear fault diagnosis method based on EMD energy entropy and SVM," Journal of Vibration and Shock, vol. 29, no. 10, pp. 216–220, 2010.
  19. Z. Feng, M. Liang, and F. Chu, "Recent advances in time-frequency analysis methods for machinery fault diagnosis: a review with application examples," Mechanical Systems and Signal Processing, vol. 38, no. 1, pp. 165–205, 2013.
  20. J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE, Piscataway, NJ, USA, 1995.
  21. M. R. Tanweer, S. Suresh, and N. Sundararajan, "Self regulating particle swarm optimization algorithm," Information Sciences, vol. 294, pp. 182–202, 2015.
  22. Bearing Data Center, Case Western Reserve University, http://csegroups.case.edu/bearingdatacenter/pages/download-data-file.

Copyright © 2020 Yerui Fan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

