Shock and Vibration / 2019 / Article

Research Article | Open Access

Volume 2019 | Article ID 9303676 | 20 pages | https://doi.org/10.1155/2019/9303676

Bearing Fault Diagnosis Using a Support Vector Machine Optimized by an Improved Ant Lion Optimizer

Academic Editor: Zhixiong Li
Received: 24 Mar 2019
Accepted: 19 May 2019
Published: 27 Jun 2019

Abstract

The bearing is an important mechanical component that fails easily in harsh working environments. Support vector machines (SVMs) can be used to diagnose bearing faults; however, the recognition ability of the model is strongly affected by the kernel function and its parameters. Unfortunately, optimal parameters are difficult to select. To address these limitations, an escape mechanism and adaptive convergence conditions were introduced into the ant lion optimizer (ALO). The resulting method, termed EALO, was proposed and applied to the more accurate selection of SVM model parameters. To assess the model, the vibration acceleration signals of normal, inner ring fault, outer ring fault, and ball fault bearings were collected at different rotation speeds (1500 r/min, 1800 r/min, 2100 r/min, and 2400 r/min). The vibration signals were decomposed using the variational mode decomposition (VMD) method, and features were extracted by fusing the energy values of the VMD components through a kernel function. In these experiments, the two most important parameters of the support vector machine, the Gaussian kernel parameter σ and the penalty factor C, were optimized using the EALO algorithm, the ALO algorithm, the genetic algorithm (GA), and the particle swarm optimization (PSO) algorithm. The performance of these four methods in optimizing the two parameters was then compared and analyzed, and the EALO method performed best. The recognition rates for bearing faults at the tested rotation speeds were improved when the SVM model parameters optimized by the EALO were used.

1. Introduction

The rolling bearing is a core mechanical component that is widely used in wind turbines, aeroengines, ships, automobiles, and other important mechanical equipment [1, 2]. Rolling bearings usually operate for extended periods under extreme conditions such as high temperatures, high speeds, and high loads. These long-term, extreme running conditions lead to a variety of serious failures, including ball wear, metal spalling on the inner and outer raceways, and cracks in the cage [3]. A weak fault causes abnormal vibrations, which degrade the performance of the mechanical equipment and reduce work efficiency. A serious fault may destroy the machine and, in severe cases, cause worker injury or death. Either outcome is a huge loss to the business enterprise and to society as a whole and, ultimately, a serious barrier to the harmonious development of a national economy. Although bearing failure is inevitable, abiding by a standard maintenance schedule can partially reduce the accident rate caused by bearing failure. However, bearings still suffer from over- and undermaintenance, which increases business costs. Developments in computer and testing technology have produced many advanced automatic monitoring and intelligent diagnostic technologies. These have been adopted for online monitoring of working conditions, which allows timely fault detection and provides accurate reference points for future maintenance decisions. Given these advantages, it is important to study the application of intelligent fault diagnostic technology to rolling bearings.

Currently, vibration monitoring is the most commonly adopted method for monitoring bearing conditions [4]. When a fault first appears, its characteristic signal is very weak [5], because it is overwhelmed by natural frequency vibrations, transfer modulation, and noise interference. Despite being weak, the signal has obvious nonlinear and nonstationary characteristics [6]. Traditional fault diagnostic methods analyze vibration signals in the time, frequency, and time-frequency domains. Traditional pattern recognition approaches such as the neural network [7–9] and Bayesian decision [10–12] methods have difficulty with these signals because they require a large number of valid samples to function properly. When the sample size is too small, model accuracy decreases; in practice, however, large numbers of valid fault samples are difficult to obtain. Therefore, the application of traditional pattern recognition methods such as the neural network and the Bayesian decision is restricted.

The support vector machine (SVM) [13–15] is a pattern recognition method developed in the 1990s that is suitable for small-sample conditions. This method takes a kernel function as its core and implicitly maps the original data from the original space into a feature space. A search for linear relations is then conducted in the feature space, which yields efficient solutions to nonlinear problems. SVM has been widely applied in the field of pattern recognition, including text recognition [16], handwritten numeral recognition [17], face detection [18], system control [19], and many other related applications. The accuracy of SVM classification is highly affected by the kernel function and its parameters, since the relationship between the parameters and model classification accuracy is an irregular multimodal function. Improper parameter values therefore worsen the model's generalization ability, leading to less accurate fault recognition. Unfortunately, it is difficult to obtain optimal parameters, and empirical selection is unreliable in practice. Computer-driven parameter optimization not only reduces the workload of human engineers but also provides a more reliable basis for the selection of optimal solutions. Current methods used for parameter optimization include grid cross-validation (GCV) [20, 21], the genetic algorithm (GA) [22], and particle swarm optimization (PSO) [23, 24]; despite this variety, the optimization efficiency of these methods remains imperfect.

In 2015, a new bionic intelligent algorithm termed “Ant Lion Optimizer” (ALO) was devised by Mirjalili [25]. ALO has many advantages, including its simple principle, ease of implementation, reduced need for parameter adjustment, and high precision [26, 27]. It has been successfully applied to a variety of fields like structure optimization [28], antenna layout optimization [29], distributed system siting [30], idle power distribution problem [31], community mining in complex networks [32], and feature extraction [33]. Recently, He et al. [34] utilized the ant lion optimizer to optimize a GM (1,1) model to predict the power demands of Beijing. Their results showed that this approach improved the adaptability of the GM (1,1) model. Relatedly, Zhao et al. [35] improved the ant lion optimizer by using a chaos detection mechanism to optimize SVM. They then used a UCI standard database for verification. Collectively, their results showed the ALO algorithm improved classification accuracy.

At present, there are few studies regarding ALO application in bearing fault diagnosis. As a new bionic optimization algorithm, there are some ant lion individuals in the ALO algorithm with relatively poor fitness in the iteration process. If the ants select poor fitness ant lions for walking, the probability of falling into a local extremum increases. In addition, resource waste will result if poor fitness ant lions search around the local extremum and partially affect the optimization performance and convergence efficiency of the ALO algorithm.

Given the aforementioned problems, this paper uses the rolling bearing as its test object and to improve the ant lion algorithm. When combined with SVM, it also sought to diagnose bearing faults. This work has both great theoretical significance and practical value to improve the accuracy of fault diagnosis in rolling bearings, thereby ensuring the safety and stability of functional rolling bearings.

2. Basic Algorithm Principles

2.1. Ant Lion Optimizer

The ALO algorithm was modeled on the hunting behavior of ant larvae in nature. As constructed, the optimization algorithm mimics the walking of random ants, constructs traps, lures the ants into the trap, captures the ants, and reconstructs the traps. The ALO algorithm conducts a global search by walking around randomly selected ant lions, and local refinement optimization is achieved by adaptive boundary of the ant lion trap.

The total number of ants and ant lions is defined by N, the problem dimension is D, the maximum number of iterations is T, the lower boundary of the search space is lb, the upper boundary is ub, the position matrix of the ants is M_Ant, and the position matrix of the ant lions is M_Antlion, where t represents the current number of iterations and satisfies 1 ≤ t ≤ T, A_{i,j}^t represents the position of the i-th ant in the j-th dimension after the t-th iteration, and AL_{i,j}^t represents the position of the i-th ant lion in the j-th dimension after the t-th iteration. When t = 0, the positions of the initial ant and ant lion populations can be assigned by the following formulae:

A_{i,j}^0 = lb_j + rand · (ub_j − lb_j),   AL_{i,j}^0 = lb_j + rand · (ub_j − lb_j),

where rand is a random number drawn uniformly from [0, 1].

The fitness vectors F_Ant^t and F_Antlion^t can be expressed as

F_Ant^t = [f(A_1^t), f(A_2^t), …, f(A_N^t)]^T,   F_Antlion^t = [f(AL_1^t), f(AL_2^t), …, f(AL_N^t)]^T,

where f(·) is the fitness evaluation function. Define E^t as the elite ant lion after the t-th iteration, which satisfies

f(E^t) = max{f(AL_1^t), f(AL_2^t), …, f(AL_N^t)}.

The iterative process of the ALO algorithm is to continuously update the position according to the interaction between the ants and the ant lions; after this update, it then reselects the elite ant lions. The ALO algorithm primarily includes random ant walks, trapping in an ant lion’s pit, building traps, sliding ants towards the ant lion, catching prey, rebuilding the pit, and elitism.

2.1.1. Random Ant Walks

When searching for food in nature, ants move stochastically; as such, a random walk is used to model the ants' movement as follows:

X(t) = [0, cumsum(2r(t_1) − 1), cumsum(2r(t_2) − 1), …, cumsum(2r(T) − 1)],

where cumsum denotes the cumulative sum and r(t) is a stochastic function that is defined as follows:

r(t) = 1 if rand > 0.5, and r(t) = 0 otherwise,

where rand is a random number drawn uniformly from [0, 1].

In order to keep the random walks of the ants inside the search space, the positions of the ants are normalized by min-max normalization:

X_i^t = (X_i^t − a_i)(d_i^t − c_i^t) / (b_i − a_i) + c_i^t,

where a_i and b_i are the minimum and maximum of the random walk in the i-th dimension, and c_i^t and d_i^t are the lower and upper bounds of the i-th variable at the t-th iteration.
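The random walk and normalization steps above can be sketched in a few lines; the cumsum-based walk and min-max rescaling follow the description above, with the interval endpoints passed in as plain scalars for simplicity.

```python
import numpy as np

def random_walk(T, lb, ub, rng):
    # r(t): Bernoulli draws mapped to +/-1 steps, then X(t) = [0, cumsum(...)]
    steps = np.where(rng.random(T) > 0.5, 1.0, -1.0)
    walk = np.concatenate(([0.0], np.cumsum(steps)))
    # min-max normalisation of the walk into the current interval [lb, ub]
    a, b = walk.min(), walk.max()
    return (walk - a) * (ub - lb) / (b - a) + lb
```

Because the normalization maps the walk's minimum exactly to lb and its maximum exactly to ub, the ant can never leave the admissible interval.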

2.1.2. Building a Trap

According to its fitness, an individual ant lion is selected from the ant lion population of the previous generation through a roulette operation, defined as follows:

P_k = Σ_{i=1}^{k} f(sort(AL^t)_i) / Σ_{i=1}^{N} f(AL_i^t),   k = 1, 2, …, N,

where sort(·) is the sorting function (in ascending order).

As k increases from 1, the first k for which P_k ≥ rand is satisfied identifies the selected individual, which then builds the trap together with the elite, denoted E^t.
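A minimal roulette-wheel selection consistent with the description above; the fitness-proportional weighting is an assumption, since the paper's exact probabilities are not reproduced here:

```python
import numpy as np

def roulette_select(fitness, rng):
    # Higher fitness -> proportionally larger selection probability.
    f = np.asarray(fitness, dtype=float)
    cum = np.cumsum(f / f.sum())
    # first index whose cumulative probability covers the uniform draw
    return min(int(np.searchsorted(cum, rng.random())), len(f) - 1)
```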

2.1.3. Trapping in an Ant Lion’s Pit

This process is described as the ants walking around the "trap." The boundary of the walking area is affected by the position of the ant lion around which the ant walks (the roulette-selected ant lion or the elite), which can be defined by the following formulae:

c_i^t = AL_j^t + c^t,   d_i^t = AL_j^t + d^t,

where c^t and d^t are the minimum and maximum of all variables at the t-th iteration and AL_j^t is the position of the selected ant lion.

2.1.4. Sliding Ants towards the Ant Lion

As soon as the ant starts sliding towards the trap, the ant lion realizes that the ant is in the trap and shoots sand towards the center of the pit to prevent it from escaping. The process can be described as an adaptive decrease in the radius of a given ant's random walk hypersphere:

c^t = c^t / I,   d^t = d^t / I,   I = 10^w · G,

where G is the ratio of the current iteration number to the maximum iteration number (G = t/T) and w is the radius reduction scale index, which satisfies w = 2 when t > 0.1T, w = 3 when t > 0.5T, w = 4 when t > 0.75T, w = 5 when t > 0.9T, and w = 6 when t > 0.95T.
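The shrinking trap boundary can be sketched as follows; the piecewise values of the exponent w are taken from the original ALO formulation and are an assumption here:

```python
def shrink_bounds(lb, ub, t, T):
    # Piecewise radius reduction scale index w (assumed ALO values)
    w = 0
    for frac, wv in ((0.1, 2), (0.5, 3), (0.75, 4), (0.9, 5), (0.95, 6)):
        if t > frac * T:
            w = wv
    I = 1.0 if w == 0 else 10 ** w * t / T   # radius ratio I = 10^w * t/T
    return lb / I, ub / I                    # contracted walk boundaries
```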

2.1.5. Catching Prey and Rebuilding the Pit

The ant lion kills the ant and eats its body. If a prey ant's fitness is higher than the ant lion's fitness, the ant lion updates its position to that of the prey ant; in other words, it will build a new trap at this position for the next prey. Given this scenario, the following equation is proposed:

AL_j^t = A_i^t   if   f(A_i^t) > f(AL_j^t).

2.1.6. Elitism

The elite ant lion affects the movements of all ants: each ant randomly walks around both a roulette-selected ant lion and the elite simultaneously. This behavior is modeled according to the following function:

A_i^t = (R_A^t + R_E^t) / 2,

where R_A^t represents the random walk around the ant lion selected by the roulette wheel at the t-th iteration and R_E^t represents the random walk around the elite at the t-th iteration.

2.2. Support Vector Machine

In the 1990s, Vapnik [13] proposed a new machine learning method termed the support vector machine (SVM). SVM is based on both statistical learning theory and the structural risk minimization principle. Statistical learning theory is a branch of machine learning theory specialized for small-sample situations; as a result, SVM has good generalization ability. In addition, training an SVM is a convex quadratic optimization problem, which guarantees that the obtained extremum solution is also the global optimal solution [36, 37]. Collectively, these characteristics allow it to avoid the local extremum and dimensional disaster problems that are unavoidable when using a neural network. The standard SVM model is established for two-class classification samples. Its basic principles are as follows.

For the two classes to be classified, the sample dataset is defined as follows:

{(x_i, y_i)},   i = 1, 2, …, l,   x_i ∈ R^n,   y_i ∈ {−1, +1}.

The establishment of an SVM classification model aims to find an optimal classification surface w · x + b = 0, which satisfies

y_i(w · x_i + b) ≥ 1 − ξ_i,   ξ_i ≥ 0,   i = 1, 2, …, l,

where w is the normal vector of the hyperplane and b is the offset. Therefore, the following convex quadratic optimization model can be established:

min_{w, b, ξ}  (1/2)‖w‖² + C Σ_{i=1}^{l} ξ_i   s.t.   y_i(w · x_i + b) ≥ 1 − ξ_i,   ξ_i ≥ 0,

where C is the punishment factor and ξ_i is the relaxation factor.

By solving the following dual problem, the optimal solution α* = (α_1*, α_2*, …, α_l*)^T is obtained:

max_α  Σ_{i=1}^{l} α_i − (1/2) Σ_{i=1}^{l} Σ_{j=1}^{l} α_i α_j y_i y_j (x_i · x_j)   s.t.   Σ_{i=1}^{l} α_i y_i = 0,   0 ≤ α_i ≤ C.

This allows the optimal hyperplane normal vector in equation (19) to be obtained:

w* = Σ_{i=1}^{l} α_i* y_i x_i.

The samples corresponding to α_i* > 0 are called the support vectors (SVs), and the offset b* is calculated by

b* = y_j − Σ_{i=1}^{l} α_i* y_i (x_i · x_j),

for any j such that 0 < α_j* < C.

The decision function yielded is as follows:

f(x) = sgn(Σ_{i=1}^{l} α_i* y_i (x_i · x) + b*).

The above SVM model is established for linear sample classification; for nonlinear classification, a nonlinear transform φ(x) is introduced. φ(x) transforms the nonlinear samples in the low-dimensional space into linear samples in a high-dimensional Hilbert space. Unfortunately, this nonlinear transformation is difficult to obtain explicitly. In practice, a kernel function K(x_i, x_j) = φ(x_i) · φ(x_j) is often used instead of the explicit nonlinear transform φ(x); the kernel function maps the nonlinear samples in the low-dimensional space into the high-dimensional Hilbert space by calculating the inner product. The discriminant function is thus described as

f(x) = sgn(Σ_{i=1}^{l} α_i* y_i K(x_i, x) + b*).

As shown in formula (24), the kernel function is the core of the SVM, and it plays an important role in its generalization ability. The common kernel functions are shown in Table 1.


Kernel functions | Expressions | Parameters

Linear kernel | K(x_i, x_j) = x_i · x_j | None
Polynomial kernel | K(x_i, x_j) = (x_i · x_j + 1)^d | d
Gauss kernel | K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²)) | σ
Sigmoid kernel | K(x_i, x_j) = tanh(v(x_i · x_j) + c) | v, c

As shown in Table 1, different kernel functions have different expressions and parameters. Therefore, different kernel functions and parameters have different abilities to map data to higher-dimensional space. Since kernel functions and parameter values affect the generalization ability of the SVM model, the selection of the best parameters is extremely important.

The standard SVM solves the problem of two-class classification; in reality, encountering a multiclass problem is more common than a two-class problem. Therefore, the study of a multiclass SVM problem is of great significance. At present, researchers have proposed a handful of effective multiclass SVM construction methods. These approaches can be divided into two categories, with the first being the direct construction method. This method improves the discriminant function of a two-class SVM model to construct a multiclass model. This method uses only one SVM discriminant function to achieve a multiclass output. The discriminant function of this algorithm is very complex, and its classification accuracy is not good. The second method is to realize the construction of a multiclass SVM classifier by combining multiple two-class SVMs. In practice, this method is more widely used and includes one-against-one, one-against-all, direct acyclic graph, and binary tree approaches [38].
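The one-against-one combination strategy can be illustrated with a small voting sketch; `binary_clf` is a hypothetical stand-in for a trained two-class SVM:

```python
from collections import Counter
from itertools import combinations

def one_vs_one_predict(x, classes, binary_clf):
    # binary_clf(a, b, x) returns whichever of labels a, b the two-class
    # SVM trained on classes a and b assigns to sample x (hypothetical API).
    votes = Counter(binary_clf(a, b, x) for a, b in combinations(classes, 2))
    # the label winning the most of the k(k-1)/2 pairwise votes is returned
    return votes.most_common(1)[0][0]
```

For k fault classes this strategy trains k(k−1)/2 binary machines, e.g. 6 machines for the four bearing states used later in this paper.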

3. Ant Lion Optimizer Improvement

3.1. Improvement Based on the Escape Mechanism

In the ALO algorithm, ants randomly walk around the elite ant lion and the roulette wheel-selected ant lion; these ants gradually fall into the trap set by the ant lion. As the number of iterations increases, the walk range of the ants becomes increasingly smaller. In turn, this means the range of the search optimization solution becomes increasingly smaller as well. If the elite ant lion is located at the local extremum value, the risk of falling into the local extremum is increased. This reduces the optimization performance of the ALO algorithm. In nature, when an ant lion builds an ant trap, it is not always successful in catching the ants that fall into the trap. If the ants find that there is an ant lion nearby, they will avoid it to escape being eaten.

Here—and based on the aforementioned considerations—the ant escape mechanism was introduced into the ALO algorithm. This introduction resulted in an improved ALO algorithm, termed here as the EALO algorithm. By introducing the ant escape mechanism, the possibility of the algorithm falling into a local extremum value is reduced, thereby improving the optimization ability of the algorithm.

P_e is defined as the escape probability of the ants, N is the maximum number of ants, and m is the maximum number of escaped ants, satisfying m ≤ N. The fitness of the ants is ranked after they walk around the elite ant lion and the roulette-selected ant lion at the t-th iteration. Then, the m ants with the lowest fitness are selected and randomly reassigned to locations within the search field. That is,

A_{i,j}^t = lb_j + rand · (ub_j − lb_j),

where i ranges over the indices of the m selected ants, 1 ≤ j ≤ D, and rand is a random number drawn uniformly from [0, 1]; the reassignment is performed when rand < P_e.
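A sketch of the escape step, assuming positions are stored row-per-ant and higher fitness is better:

```python
import numpy as np

def escape(ants, fitness, p_e, m_e, lb, ub, rng):
    # With probability p_e, the m_e lowest-fitness ants abandon the trap
    # and are re-seeded uniformly over the whole search space.
    ants = ants.copy()
    if rng.random() < p_e:
        worst = np.argsort(fitness)[:m_e]          # lowest-fitness rows
        ants[worst] = rng.uniform(lb, ub, size=(m_e, ants.shape[1]))
    return ants
```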

3.2. Improvement Based on Adaptive Convergence Conditions

The optimization performance of the ALO algorithm is measured primarily by precision and time consumption. In the ALO algorithm, convergence is controlled by setting the maximum number of iterations T. If the preset maximum number of iterations is too large, the algorithm takes too long to complete; if it is too small, the precision of the algorithm cannot be guaranteed. For this reason, T usually takes a large value in practical applications, with algorithm accuracy taking priority over time. However, some applications, such as online monitoring and fault diagnosis, require accurate solutions to be obtained in a short time, because a rapid assessment of the running state of the mechanical equipment is necessary. Given this, it is unreasonable to adopt a fixed number of iterations: as the number of iterations increases, the walking range of the ants decreases, and the fitness differences between individuals also decrease.

Assuming that ε is a small positive number, if the following formula is satisfied, then the EALO algorithm terminates and returns an optimal solution:

max(F_Ant^t) − min(F_Ant^t) < ε.
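The adaptive stopping rule amounts to checking the spread of the current fitness values against the threshold:

```python
def converged(fitness, eps):
    # Stop once the spread between the best and worst ant fitness
    # values has collapsed below the small positive threshold eps.
    return max(fitness) - min(fitness) < eps
```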

4. EALO-SVM Modeling

Assuming that S is a sample set containing several classes of faults, the same number of samples is randomly selected from each fault sample set to form the training sample set; the rest of the samples form the test sample set. Taking the radial basis kernel function as an example, the parameters that need to be optimized are the kernel function parameter σ and the penalty factor C. A diagram of the SVM parameter optimization based on EALO is shown in Figure 1, and the modeling steps are as follows:

(1) Parameter Presetting. Set the population number of ants, the population number of ant lions, the lower and upper boundaries of the searching space, the probability of ant escape, the maximum number of escaped ants, and the convergence threshold ε.

(2) Ant and Ant Lion Population Initialization. According to the preset parameters and the parameters to be optimized, the ant position matrix and the ant lion position matrix are randomly generated according to formulae (2) and (3).

(3) Fitness Estimation. According to the ant position matrix and the ant lion position matrix, the SVM model is trained using the training sample set and then used to predict the testing sample set. The fitness vectors of the ants and of the ant lions are obtained; the fitness estimation function is defined as the classification accuracy on the testing sample set.

(4) Natural Elite Selection. According to formula (5), the ant lion with the highest fitness is selected as the natural elite ant lion.

(5) Roulette Elite Selection. According to formulae (9)–(11), the roulette elite ant lions are selected.

(6) Random Ant Walking. According to formulae (12)–(14), the ants walk randomly around the natural elite ant lion and the roulette elite ant lions.

(7) Random Escape. Some ants are randomly reassigned to positions in the searching space according to formula (25).

(8) Rebuilding the Traps. After feeding on the ants, the position of the natural elite ant lion is updated according to formula (15). The natural elite ant lion continues to rebuild traps in the new position to prepare for the next predation.

(9) Stopping the Iteration. The maximum and minimum fitness of the ants are calculated using formula (26). If formula (26) is not satisfied, return to step (5) and continue to the next iteration. Otherwise, the iteration is stopped, the natural elite ant lion is output, and its position is taken as the optimal solution.
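Steps (1)-(9) can be condensed into a generic optimization loop. The sketch below minimizes an arbitrary cost function (for the paper's use, cost(x) would be 1 minus the SVM testing accuracy at x = (C, σ)); the walk model and radius schedule are simplified stand-ins, not the authors' exact formulae:

```python
import numpy as np

def ealo_minimize(cost, lb, ub, n_ants=20, n_lions=10,
                  p_e=0.7, m_e=4, eps=1e-8, max_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    lions = rng.uniform(lb, ub, (n_lions, lb.size))        # initial traps
    lion_cost = np.array([cost(x) for x in lions])
    for t in range(1, max_iter + 1):
        radius = (ub - lb) / (1.0 + 100.0 * t / max_iter)  # shrinking trap
        # roulette selection: lower cost -> higher pick probability
        w = 1.0 / (lion_cost - lion_cost.min() + 1e-9)
        picks = rng.choice(n_lions, size=n_ants, p=w / w.sum())
        elite = lions[lion_cost.argmin()]
        # each ant walks around its roulette lion and the elite (averaged)
        ants = (lions[picks] + elite) / 2.0 \
            + rng.uniform(-radius, radius, (n_ants, lb.size))
        ants = np.clip(ants, lb, ub)
        ant_cost = np.array([cost(x) for x in ants])
        # escape mechanism: re-seed the m_e worst ants with probability p_e
        if rng.random() < p_e:
            worst = np.argsort(ant_cost)[-m_e:]
            ants[worst] = rng.uniform(lb, ub, (m_e, lb.size))
            ant_cost[worst] = [cost(x) for x in ants[worst]]
        # catching prey / rebuilding traps: keep the n_lions best of all
        pool = np.vstack([lions, ants])
        pool_cost = np.concatenate([lion_cost, ant_cost])
        keep = np.argsort(pool_cost)[:n_lions]
        lions, lion_cost = pool[keep], pool_cost[keep]
        # adaptive convergence condition on the ant fitness spread
        if np.ptp(ant_cost) < eps:
            break
    return lions[0], float(lion_cost[0])
```

Since the pool always retains the best ant lion found so far, the elite cost is nonincreasing, mirroring the elitism step above.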

5. Experiments

5.1. Data Acquisition

Bearing fault vibration tests were conducted to verify the performance of the EALO-SVM algorithm. These tests were performed on a machinery fault simulator (SpectraQuest Inc., Hermitage Road, Richmond, Virginia, USA), shown in Figure 2. The simulator is driven by a three-phase motor and allows accurate control of the rotation speed. In the experiment, the fault bearing was installed on the bearing pedestal near the motor, and the normal bearing was installed on the right bearing pedestal. The vibration signals were collected using a high-speed multichannel data acquisition instrument (DEWETRON, Graz, Austria). Eight acceleration sensors were installed: four on the base (sensors 1–4), three on the fault-bearing pedestal to obtain vibration acceleration signals in the x, y, and z directions (sensors 5–7), and one on the other bearing pedestal (sensor 8). The sampling frequency was 10 kHz. Four typical bearing states (normal, inner ring fault, outer ring fault, and ball fault) were simulated at four motor speeds (1500, 1800, 2100, and 2400 r/min), and their vibration acceleration signals were collected. Time waveforms of some of the collected acceleration signals are shown in Figure 3.

5.2. Feature Extraction

As shown in Figure 3, the bearing fault signals were highly nonlinear and nonstationary, making it difficult to identify the fault type directly from the time-domain signals. The weak fault features were overwhelmed by the strong background noise, and traditional methods cannot extract them effectively.

Variational mode decomposition (VMD) is a new signal decomposition technique proposed by Dragomiretskiy [39] in 2014. This method is highly suitable for nonlinear and nonstationary signals. VMD abandons the recursive modal decomposition framework of empirical mode decomposition (EMD); instead, it introduces a variational model into the signal decomposition and decomposes the input signal into a series of modal components in different frequency bands by searching for the optimal solution of the constrained variational model [40, 41]. VMD has a strong theoretical basis, and selecting a basis function is unnecessary. In essence, VMD is a group of multiple adaptive Wiener filters with good robustness. With these characteristics, VMD performs better than both the wavelet transform and EMD across many domains.

Assume that x(n) is a discrete sequence of bearing fault signals collected by a sensor, with a length of L. VMD is first conducted to obtain K modal functions u_k(n), where k = 1, 2, …, K.

The energy E_k of the k-th modal component is defined by the following formula:

E_k = Σ_{n=1}^{L} |u_k(n)|²,

and then the energy vector T = [E_1, E_2, …, E_K] can be obtained from the K modal components:

Introducing the Gaussian kernel function results in

K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²)).

Define vector ; then the kernel feature is obtained according to equation (31):

Finally, a feature sample is obtained by calculating the signals of the eight sensors:

In this experiment, and . 100 groups of feature samples were extracted for the normal bearing to form a dataset with dimensions of 8 × 100; comparatively, the dataset matrix with dimensions of 8 × 100 of each bearing (inner ring fault, outer ring fault, and ball fault) was also extracted.
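The energy-and-kernel fusion described above can be sketched as follows; the paper's exact fusion equation is not reproduced here, so the Gaussian kernel is applied to the normalized energy vector under stated assumptions (reference vector at the origin, σ = 1):

```python
import numpy as np

def vmd_energy_feature(modes, sigma=1.0):
    # modes: array of shape (K, L) holding the K VMD modal components.
    E = np.sum(np.asarray(modes, float) ** 2, axis=1)   # per-mode energies
    E = E / E.sum()                                     # normalised energies
    # Gaussian-kernel fusion of the energy vector (assumed form)
    return float(np.exp(-np.sum(E ** 2) / (2.0 * sigma ** 2)))
```

Computing one such value per sensor then yields an 8-dimensional feature sample per signal segment, matching the 8 × 100 datasets described above.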

Figure 4 shows the fault feature vector corresponding to sensor 1 at different rotating speeds. As shown, the proposed VMD fault feature method better represents different fault information.

5.3. Model Optimization and Performance Analysis

Fifty groups of samples were selected from each fault sample dataset (normal, inner ring, outer ring, and ball fault bearings) to construct the training sample matrix with dimensions of 8 × 200; the remaining samples of each fault bearing dataset were used to construct the testing sample matrix with dimensions of 8 × 200. The Gaussian kernel parameter σ and the penalty factor C greatly influence diagnostic accuracy; given this, when the EALO algorithm was used for optimal parameter selection, searching space ranges were defined for σ and C.

To verify the effectiveness of the proposed EALO method, the ALO, genetic algorithm (GA), and particle swarm optimization (PSO) methods with different parameters were selected for comparison. All algorithms were executed on a computer running the Windows 10 x64 operating system with an Intel® Core™ i7-8700K CPU @ 3.70 GHz and 64 GB of memory. To prevent the algorithms from iterating indefinitely, a maximum number of iterations was adopted. Additional parameters for the different algorithms are shown in Table 2.


Algorithm | Searching range | Convergence condition | Parameter | Initial value

EALO
  Ant population: 50
  Ant lion population: 25
  Escape ant population: 10
  Ant escape probability: 0.7

ALO
  Ant population: 50
  Ant lion population: 25

GA-1
  Population: 50
  Crossover probability: 0.1
  Mutation probability: 0.01

GA-2
  Population: 50
  Crossover probability: 0.4
  Mutation probability: 0.01

GA-3
  Population: 50
  Crossover probability: 0.1
  Mutation probability: 0.1

GA-4
  Population: 50
  Crossover probability: 0.4
  Mutation probability: 0.1

GA-5
  Population: 50
  Crossover probability: 0.7
  Mutation probability: 0.5

PSO-1
  Particle size: 50
  Acceleration factor C1: 0.1
  Acceleration factor C2: 0.1
  Weight W: 0.5

PSO-2
  Particle size: 50
  Acceleration factor C1: 0.5
  Acceleration factor C2: 0.1
  Weight W: 0.8

PSO-3
  Particle size: 50
  Acceleration factor C1: 0.5
  Acceleration factor C2: 1.0
  Weight W: 100

PSO-4
  Particle size: 50
  Acceleration factor C1: 1.5
  Acceleration factor C2: 1.7
  Weight W: 1.0

PSO-5
  Particle size: 50
  Acceleration factor C1: 2.0
  Acceleration factor C2: 2.0
  Weight W: 1.5

Using the rotation speed of 1500 r/min as an example, the distribution of the ant and ant lion populations during SVM parameter optimization by EALO is shown in Figure 5. As the iteration number increased, the ant and ant lion populations gradually converged to the optimal solution region, indicating that the EALO algorithm is convergent. As shown in Figure 5(a), the initial ant and ant lion populations are randomly distributed; the ant lions build traps at random initial locations to prepare for the later capture of ants.

Because of the escape mechanism, there is a given probability that some ants will escape the trap of an ant lion. As shown in Figure 5, when the EALO iterates for the third (Figure 5(b)), seventh (Figure 5(d)), and ninth (Figure 5(e)) times, some of the ants exit the ant lion's trap and escape to other random locations (labeled "o"). Therefore, the probability that the algorithm falls into a local extremum is effectively reduced, and the overall optimization performance of the algorithm is improved. It can also be seen from Figure 5 that the kernel function parameter σ and the penalty factor C greatly influenced the accuracy of the bearing fault classification. When the EALO algorithm stopped iterating (Figure 5(f)), the ant and ant lion locations did not converge to a single point; rather, they converged to multiple points, demonstrating that there were multiple feasible solutions. Therefore, any of these ant lions could be selected as the elite ant lion, and its position regarded as the optimal algorithm solution.

Using bearing fault diagnosis at a speed of 1500 r/min as an example, the convergence curves for EALO, ALO, GA, and PSO are shown in Figure 6. The EALO method iterates only 13 times, while the ALO, GA, and PSO methods iterate 100 times. Although the threshold condition was satisfied after about 70 iterations, the ALO method did not stop iterating because it has neither an escape mechanism nor an adaptive convergence condition. During iteration, if the ants walk around poor-fitness ant lions, the probability of falling into a local extremum increases; simultaneously, the resources of those ant lion individuals are wasted searching in the neighborhood of the local extremum. Therefore, the optimization performance and convergence efficiency of the ALO algorithm are partly reduced. The threshold values of the EALO and ALO methods gradually decrease as the iterations proceed.

The convergence performance of the GA method differs with different parameters. Inappropriate parameters caused the GA performance to deteriorate; as shown by GA-5, the threshold increased around 60 iterations and no longer decreased after a certain number of iterations. Finally, the PSO optimization performance was also greatly affected by its parameters: the PSO iterative curves oscillated and showed no obvious convergence trend, indicating that the convergence condition proposed here is not suitable for the PSO method. Taken together, these findings show that the proposed EALO method had the fastest convergence speed.

5.4. Results and Discussion

It has been reported that the binary tree model is more suitable for classifying bearing faults than many other classification models [19], and the radial basis kernel function has better nonlinear mapping ability in high-dimensional space than other kernel functions, making it well suited to bearing fault classification. Therefore, both the binary tree support vector machine model and the radial basis kernel function were used in these experiments. The EALO, traditional ALO, GA, and PSO methods with different parameters were used to optimize the SVM model parameters. The four bearing states at different speeds were then diagnosed by the optimized SVM models. The diagnostic results are shown in Tables 3–6.


Methods | Time (s) | Optimal parameters | Iteration number | SV number | Testing dataset: recognition rate (%) | Training dataset: recognition rate (%) | Total average (%)

EALO | 19.5487 | 6.9794, 0.9065 | 13 | 62 | Normal 100, Inner 100, Outer 100, Ball 98, Average 99.5 | Normal 100, Inner 96, Outer 100, Ball 100, Average 99 | 99.25
ALO | 18.5233 | 8.3764, 4.6771 | 100 | 59 | Normal 100, Inner 94, Outer 100, Ball 98, Average 98 | Normal 100, Inner 94, Outer 100, Ball 98, Average 98 | 98
GA-1 | 27.9121 | 2.4963, 9.7142 | 100 | 43 | Normal 92, Inner 90, Outer 100, Ball 98, Average 95 | Normal 100, Inner 88, Outer 100, Ball 98, Average 96.5 | 95.75
GA-2 | 18.2271 | 9.5530, 9.6264 | 100 | 48 | Normal 100, Inner 94, Outer 100, Ball 98, Average 98 | Normal 100, Inner 96, Outer 100, Ball 98, Average 98.5 | 98.25
GA-3 | 20.0440 | 9.1353, 9.9713 | 100 | 34 | Normal 100, Inner 94, Outer 100, Ball 100, Average 98.5 | Normal 100, Inner 96, Outer 100, Ball 98, Average 98.5 | 98.5
GA-4 | 17.7436 | 4.6214, 10.9412 | 100 | 163 | Normal 100, Inner 94, Outer 100, Ball 94, Average 97 | Normal 100, Inner 90, Outer 100, Ball 86, Average 94 | 95.5
GA-5 | 10.6886 | -2.4792, 10.3611 | 100 | 200 | Normal 100, Inner 92, Outer 100, Ball 0, Average 73 | Normal 100, Inner 88, Outer 100, Ball 4, Average 73 | 73
PSO-1 | 22.3069 | 5.9467, 10.6698 | 100 | 41 | Normal 100, Inner 92, Outer 100, Ball 98, Average 97.5 | Normal 100, Inner 96, Outer 100, Ball 98, Average 98.5 | 98
PSO-2 | 30.0000 | 2.5238, 9.1010 | 100 | 47 | Normal 52, Inner 92, Outer 100, Ball 98, Average 85.5 | Normal 68, Inner 98, Outer 100, Ball 98, Average 91 | 88.25
PSO-3 | 30.0000 | 9.8034, 8.9303 | 100 | 171 | Normal 96, Inner 72, Outer 98, Ball 82, Average 87 | Normal 98, Inner 78, Outer 100, Ball 70, Average 86.5 | 86.75
PSO-4 | 19.0039 | 8.0955, 8.3374 | 100 | 56 | Normal 100, Inner 94, Outer 100, Ball 98, Average 98 | Normal 100, Inner 94, Outer 100, Ball 98, Average 98 | 98
PSO-5 | 10.8694 | -1.9647, 8.8360 | 100 | 200 | Normal 96, Inner 72, Outer 98, Ball 50, Average 79 | Normal 98, Inner 78, Outer 100, Ball 46, Average 80.5 | 79.75


Table 4. Diagnostic results at 1800 r/min. The first two numeric columns are the optimized SVM parameters (penalty factor C and kernel parameter σ); per-fault recognition rates are listed in the order Normal/Inner/Outer/Ball.

Method | C       | σ       | Time (s) | Iterations | SVs | Testing rates (%) | Testing avg (%) | Training rates (%) | Training avg (%) | Total avg (%)
EALO   | 23.7849 | 3.8057  | 0.9349   | 15  | 20  | 100/100/100/98  | 99.5 | 98/100/100/100 | 99.5 | 99.5
ALO    | 19.8388 | 8.68205 | 4.3750   | 100 | 33  | 100/100/100/98  | 99.5 | 98/96/100/100  | 98.5 | 99
GA-1   | 22.8205 | 5.7719  | 9.1252   | 100 | 29  | 100/100/100/98  | 99.5 | 98/98/100/100  | 99   | 99.25
GA-2   | 23.9121 | 7.6404  | 10.4754  | 100 | 18  | 100/100/100/100 | 100  | 98/96/100/100  | 98.5 | 99
GA-3   | 27.8681 | 6.7904  | 10.6654  | 100 | 21  | 100/100/100/100 | 100  | 98/96/100/100  | 98.5 | 99.25
GA-4   | 19.9341 | -4.3258 | 9.4085   | 100 | 200 | 100/92/100/44   | 84   | 100/88/100/68  | 89   | 86.5
GA-5   | 19.5385 | 9.28187 | 10.6452  | 100 | 30  | 100/100/100/98  | 99.5 | 98/96/100/100  | 98.5 | 99
PSO-1  | 20.0281 | 8.3309  | 8.4335   | 100 | 32  | 100/100/100/98  | 99.5 | 100/96/100/100 | 99   | 99.25
PSO-2  | 18.5136 | 10.0000 | 8.9782   | 100 | 31  | 100/100/100/98  | 99.5 | 98/96/100/100  | 98.5 | 99
PSO-3  | 28.0514 | 7.4172  | 8.0311   | 100 | 22  | 100/100/100/100 | 100  | 98/98/100/100  | 99   | 99.5
PSO-4  | 18.6287 | 10.0000 | 8.0532   | 100 | 31  | 100/100/100/98  | 99.5 | 98/96/100/100  | 98.5 | 99
PSO-5  | 18.4047 | 10.0000 | 7.5552   | 100 | 31  | 100/100/100/98  | 99.5 | 98/96/100/100  | 98.5 | 99


Table 5. Diagnostic results at 2100 r/min. The first two numeric columns are the optimized SVM parameters (penalty factor C and kernel parameter σ); per-fault recognition rates are listed in the order Normal/Inner/Outer/Ball.

Method | C       | σ       | Time (s) | Iterations | SVs | Testing rates (%) | Testing avg (%) | Training rates (%) | Training avg (%) | Total avg (%)
EALO   | 23.2934 | 7.6761  | 0.2366   | 3   | 19  | 100/100/100/100 | 100  | 100/98/100/98  | 99   | 99.5
ALO    | 25.7334 | 7.2869  | 3.3147   | 100 | 20  | 100/100/100/100 | 100  | 100/98/100/98  | 99   | 99.5
GA-1   | 25.4725 | 5.1417  | 7.4716   | 100 | 22  | 100/100/100/100 | 100  | 100/98/100/98  | 99   | 99.5
GA-2   | 29.2674 | 4.5188  | 11.8305  | 100 | 17  | 100/100/100/100 | 100  | 100/98/100/98  | 99   | 99.5
GA-3   | 28.7179 | 9.4871  | 9.7435   | 100 | 20  | 100/100/100/100 | 100  | 100/98/100/98  | 99   | 99.5
GA-4   | 20.1026 | 9.2599  | 8.0058   | 100 | 19  | 100/100/100/100 | 100  | 100/98/100/98  | 99   | 99.5
GA-5   | 13.0549 | 8.0581  | 8.5892   | 100 | 139 | 100/98/100/100  | 99.5 | 98/96/100/100  | 98   | 99
PSO-1  | 27.0491 | 5.2417  | 8.8048   | 100 | 19  | 100/100/100/100 | 100  | 100/98/100/98  | 99   | 99.5
PSO-2  | 15.4353 | 10.0000 | 8.2438   | 100 | 32  | 100/100/100/100 | 100  | 100/96/100/98  | 98.5 | 99.25
PSO-3  | 18.4043 | 9.0937  | 7.7971   | 100 | 22  | 100/100/100/100 | 100  | 100/98/100/98  | 99   | 99.5
PSO-4  | 20.3913 | 10.0000 | 7.7289   | 100 | 20  | 100/100/100/100 | 100  | 100/98/100/98  | 99   | 99.5
PSO-5  | 30.0000 | 9.8561  | 5.8882   | 100 | 19  | 100/100/100/100 | 100  | 100/98/100/98  | 99   | 99.5


Table 6. Diagnostic results at 2400 r/min. The first two numeric columns are the optimized SVM parameters (penalty factor C and kernel parameter σ); per-fault recognition rates are listed in the order Normal/Inner/Outer/Ball.

Method | C       | σ       | Time (s) | Iterations | SVs | Testing rates (%) | Testing avg (%) | Training rates (%) | Training avg (%) | Total avg (%)
EALO   | 26.5027 | 6.4664  | 0.2656   | 3   | 22  | 100/100/100/100 | 100  | 100/98/100/100 | 99.5 | 99.75
ALO    | 30.0000 | 1.7908  | 3.4479   | 100 | 27  | 94/100/100/100  | 98.5 | 88/100/100/100 | 97   | 97.75
GA-1   | 24.4322 | 5.4787  | 9.9083   | 100 | 21  | 100/100/100/100 | 100  | 100/98/100/100 | 99.5 | 99.75
GA-2   | 23.2381 | 7.7137  | 10.2449  | 100 | 22  | 100/100/100/100 | 100  | 100/98/100/100 | 99.5 | 99.75
GA-3   | 21.7289 | 8.6957  | 9.1041   | 100 | 23  | 100/100/100/100 | 100  | 100/98/100/100 | 99.5 | 99.75
GA-4   | 22.4615 | 6.4680  | 8.4840   | 100 | 22  | 100/100/100/100 | 100  | 100/98/100/100 | 99.5 | 99.75
GA-5   | 24.9011 | 9.3258  | 10.8618  | 100 | 24  | 100/100/100/100 | 100  | 100/98/100/100 | 99.5 | 99.75
PSO-1  | 30.0000 | 1.8006  | 6.9233   | 100 | 26  | 88/100/100/100  | 97   | 84/100/100/100 | 96   | 96.5
PSO-2  | 30.0000 | 1.7888  | 8.8493   | 100 | 27  | 92/100/100/100  | 98   | 88/100/100/100 | 97   | 97.5
PSO-3  | 30.0000 | 1.8813  | 17.6882  | 100 | 29  | 100/100/100/100 | 100  | 100/96/100/100 | 99   | 99.5
PSO-4  | 30.0000 | 7.4530  | 6.5468   | 100 | 24  | 100/100/100/100 | 100  | 100/98/100/100 | 99.5 | 99.75
PSO-5  | 20.5845 | 9.0898  | 7.2608   | 100 | 23  | 100/100/100/100 | 100  | 100/98/100/100 | 99.5 | 99.75

As shown in Table 3, when the rotation speed was 1500 r/min, the proposed EALO converged in only 13 iterations and had the shortest optimization time (0.9065 s), followed by the ALO approach (4.6771 s); the GA approach took the longest (9.6264 s). The same pattern was found at 1800 r/min (Table 4), 2100 r/min (Table 5), and 2400 r/min (Table 6). The reason is that the GA method requires a series of operations (e.g., encoding, selection, crossover, mutation, and decoding), which makes the genetic algorithm computationally complex. In contrast, the interaction between ants and ant lions in the EALO requires no such complex operations; in addition, the escape mechanism and the adaptive convergence conditions greatly reduced the number of iterations and improved optimization performance. In the PSO, particle velocity and direction are controlled by several parameters, including the acceleration factors C1 and C2 and the inertia weight W. These parameters strongly influence PSO convergence, and improper settings cause the algorithm to fall into a local extremum. Because optimal settings are difficult to obtain in practice, the particles converged slowly, which ultimately reduced the PSO's performance.
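To make the two EALO ingredients concrete, the following toy loop shows how an escape mechanism and an adaptive stopping rule can plug into a population search over (C, σ). This is only a simplified sketch, not the paper's EALO implementation: the random-walk step, the stall thresholds, and the quadratic stand-in for the cross-validation error surface are all illustrative assumptions.

```python
import random

def ealo_like_search(objective, bounds, pop=20, max_iter=100,
                     tol=1e-9, stall_limit=5, seed=0):
    """Minimize `objective` over the box `bounds` with a toy EALO-style loop."""
    rng = random.Random(seed)
    sample = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    best = min((sample() for _ in range(pop)), key=objective)
    best_val, no_improve, it = objective(best), 0, 0
    for it in range(1, max_iter + 1):
        # Random walk around the current elite (a stand-in for the
        # ant / ant lion interaction of the full algorithm).
        walkers = [[min(max(b + rng.gauss(0.0, 0.5), lo), hi)
                    for b, (lo, hi) in zip(best, bounds)]
                   for _ in range(pop)]
        if no_improve == stall_limit:
            # Escape mechanism: re-seed part of the swarm at random
            # positions to jump out of a suspected local extremum.
            walkers += [sample() for _ in range(pop)]
        cand = min(walkers, key=objective)
        cand_val = objective(cand)
        if cand_val < best_val - tol:
            best, best_val, no_improve = cand, cand_val, 0
        else:
            no_improve += 1
        if no_improve > 2 * stall_limit:
            break  # adaptive convergence: stop once progress has stalled
    return best, best_val, it

# Toy objective standing in for the SVM cross-validation error surface
# over (C, sigma); its minimum is placed at C = 20, sigma = 7.
obj = lambda p: (p[0] - 20.0) ** 2 + (p[1] - 7.0) ** 2
best, err, iters = ealo_like_search(obj, [(0.1, 30.0), (0.01, 10.0)])
```

The loop typically terminates well before the iteration cap once the stall counter trips, which is the behavior the adaptive convergence condition is meant to buy.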

Table 7 details the support vector (SV) number of the SVM model optimized by each method at each speed. Comparing the complexity of the optimized SVM models, the total average SV number of the EALO-optimized model was 23, the smallest of the four methods; the corresponding totals for the ALO, PSO, and GA were 34.75, 45.15, and 55.75, respectively. To some extent, the SV number reflects the complexity of the SVM model's high-dimensional feature space: the lower the SV number, the higher the linear separability in that space. Taken together, these results show that the kernel function parameter and penalty factor selected by the improved EALO method give the kernel function greater nonlinear mapping ability.
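As a quick arithmetic check, the group totals quoted above (23, 34.75, 45.15, and 55.75) can be reproduced from the per-speed SV counts listed in Table 7:

```python
# Per-run SV counts copied from Table 7, in speed order
# 1500 / 1800 / 2100 / 2400 r/min.
sv = {
    "EALO":  [31, 20, 19, 22],
    "ALO":   [59, 33, 20, 27],
    "GA-1":  [43, 29, 22, 21],  "GA-2":  [48, 18, 17, 22],
    "GA-3":  [34, 21, 20, 23],  "GA-4":  [163, 200, 19, 22],
    "GA-5":  [200, 30, 139, 24],
    "PSO-1": [41, 32, 19, 26],  "PSO-2": [47, 31, 32, 27],
    "PSO-3": [171, 22, 22, 29], "PSO-4": [56, 31, 20, 24],
    "PSO-5": [200, 31, 19, 23],
}
mean = lambda xs: sum(xs) / len(xs)
per_variant = {k: mean(v) for k, v in sv.items()}           # average over speeds
group = lambda prefix: mean([m for k, m in per_variant.items()
                             if k.startswith(prefix)])       # average over variants
print(round(per_variant["EALO"], 2), round(per_variant["ALO"], 2),
      round(group("GA"), 2), round(group("PSO"), 2))
# -> 23.0 34.75 55.75 45.15
```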


Table 7. Support vector (SV) numbers of the optimized SVM models at the four rotation speeds.

Method | 1500 r/min | 1800 r/min | 2100 r/min | 2400 r/min | Average | Group total average
EALO   | 31  | 20  | 19  | 22 | 23.00 | 23.00
ALO    | 59  | 33  | 20  | 27 | 34.75 | 34.75
GA-1   | 43  | 29  | 22  | 21 | 28.75 | 55.75 (GA)
GA-2   | 48  | 18  | 17  | 22 | 26.25 |
GA-3   | 34  | 21  | 20  | 23 | 24.5  |
GA-4   | 163 | 200 | 19  | 22 | 101   |
GA-5   | 200 | 30  | 139 | 24 | 98.25 |
PSO-1  | 41  | 32  | 19  | 26 | 29.5  | 45.15 (PSO)
PSO-2  | 47  | 31  | 32  | 27 | 34.25 |
PSO-3  | 171 | 22  | 22  | 29 | 61    |
PSO-4  | 56  | 31  | 20  | 24 | 32.75 |
PSO-5  | 200 | 31  | 19  | 23 | 68.25 |

The average bearing fault recognition rates obtained with the different optimization methods (EALO, ALO, GA, and PSO) at the four speeds are shown in Table 8. The EALO method proposed here achieved the highest average recognition accuracy (99.50%) across the four rotation speeds, followed by the ALO, GA, and PSO methods with 98.56%, 96.99%, and 96.84%, respectively. These findings show that the proposed EALO method has better optimization ability than the other three methods and that its optimized parameters were closer to the true optimal solution. The improved EALO method can therefore effectively improve the recognition rate of bearing faults.
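Each entry in the tables above is a per-class recognition rate: the percentage of test samples of one bearing state that the model labels correctly, with the class average reported alongside. A minimal helper showing the computation (the eight labels below are illustrative, not the experimental data):

```python
def recognition_rates(y_true, y_pred):
    """Per-class recognition rate (%) plus the class average."""
    classes = sorted(set(y_true))
    rates = {}
    for cls in classes:
        idx = [i for i, y in enumerate(y_true) if y == cls]
        rates[cls] = 100.0 * sum(y_pred[i] == cls for i in idx) / len(idx)
    rates["Average"] = sum(rates[c] for c in classes) / len(classes)
    return rates

# Illustrative labels: two test samples per bearing state, with one
# inner-ring sample misclassified as an outer-ring fault.
y_true = ["Ball", "Ball", "Inner", "Inner", "Normal", "Normal", "Outer", "Outer"]
y_pred = ["Ball", "Ball", "Inner", "Outer", "Normal", "Normal", "Outer", "Outer"]
r = recognition_rates(y_true, y_pred)
print(r["Inner"], r["Average"])
# -> 50.0 87.5
```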


Table 8. Average recognition rates (%) of the optimized SVM models at the four rotation speeds.

Method | 1500 r/min | 1800 r/min | 2100 r/min | 2400 r/min | Average | Group total average
EALO   | 99.25 | 99.50 | 99.50 | 99.75 | 99.50 | 99.50
ALO    | 98.00 | 99.00 | 99.50 | 97.75 | 98.56 | 98.56
GA-1   | 95.75 | 99.25 | 99.50 | 99.75 | 98.56 | 96.99 (GA)
GA-2   | 98.25 | 99.00 | 99.50 | 99.75 | 99.13 |
GA-3   | 98.50 | 99.25 | 99.50 | 99.75 | 99.25 |
GA-4   | 95.50 | 86.50 | 99.50 | 99.75 | 95.31 |
GA-5   | 73.00 | 99.00 | 99.00 | 99.75 | 92.69 |
PSO-1  | 98.00 | 99.25 | 99.50 | 96.50 | 98.31 | 96.84 (PSO)
PSO-2  | 88.25 | 99.00 | 99.25 | 97.50 | 96.00 |
PSO-3  | 86.75 | 99.50 | 99.50 | 99.50 | 96.31 |
PSO-4  | 98.00 | 99.00 | 99.50 | 99.75 | 99.06 |
PSO-5  | 79.75 | 99.00 | 99.50 | 99.75 | 94.50 |

6. Conclusion

Based on the classical ALO algorithm, the EALO algorithm was proposed by introducing an escape mechanism and adaptive iterative convergence conditions, and it was applied to the diagnosis of bearing faults. Compared with the more traditional methods (ALO, GA, and PSO), the following conclusions can be drawn:
(1) The escape mechanism was effective: it reduced the possibility of the classical ALO algorithm falling into a local extremum and thereby improved global optimization performance.
(2) The proposed adaptive convergence conditions effectively reduced the iteration number, saving optimization time and improving the optimization performance of the EALO algorithm.
(3) The proposed EALO algorithm was suitable for SVM parameter optimization; compared with the classical ALO, GA, and PSO approaches, it had the best performance.
(4) The feature extraction method based on the VMD and kernel function was effective and provides a new reference point for bearing fault diagnosis.

Data Availability

The data used to support the findings of this study are included in the supplementary file “Datasets.zip.”

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was supported by the National Natural Science Foundation of China (Grant Nos. 11702091 and 51575178), the Natural Science Foundation of Hunan Province of China (Grant Nos. 2018JJ3140, 2019JJ50156, and 2018JJ4084), Hunan Provincial Key Research and Development Program (Grant No. 2018GK2044), and Open Funded Projects of Hunan Provincial Key Laboratory of Health Maintenance for Mechanical Equipment (Grant No. 201605).

Supplementary Materials

The supplementary file “Datasets.zip” contains the original bearing fault data used in the experiments. (Supplementary Materials)

References

  1. L. Cui, J. Huang, and F. Zhang, “Quantitative and localization diagnosis of a defective ball bearing based on vertical-horizontal synchronization signal analysis,” IEEE Transactions on Industrial Electronics, vol. 64, no. 11, pp. 8695–8706, 2017. View at: Publisher Site | Google Scholar
  2. Q. Gao, H. Tang, J. Xiang, Y. Zhong, S. Ye, and J. Pang, “A Walsh transform-based teager energy operator demodulation method to detect faults in axial piston pumps,” Measurement, vol. 134, pp. 293–306, 2019. View at: Publisher Site | Google Scholar
  3. J. Tao, Y. Liu, and D. Yang, “Bearing fault diagnosis based on deep belief network and multisensor information fusion,” Shock and Vibration, vol. 2016, pp. 1–9, 2016. View at: Publisher Site | Google Scholar
  4. J. I. Taylor, “Back to the basics of the rotating machinery vibration analysis,” Sound and Vibration, vol. 29, no. 2, pp. 12–16, 1995. View at: Google Scholar
  5. L. Wang and J. Xiang, “A two-stage method using spline-kernelled chirplet transform and angle synchronous averaging to detect faults at variable speed,” IEEE Access, vol. 7, no. 1, pp. 22471–22485, 2019. View at: Publisher Site | Google Scholar
  6. X. Li, L. Jiang, D. Yang, and K. Wang, “Cluster analysis and fault diagnosis for gear based on bispectrum distribution,” Journal of Vibration Engineering, vol. 3, no. 24, pp. 304–308, 2011. View at: Google Scholar
  7. S. Wang, J. Xiang, Y. Zhong, and H. Tang, “A data indicator-based deep belief networks to detect multiple faults in axial piston pumps,” Mechanical Systems and Signal Processing, vol. 112, pp. 154–170, 2018. View at: Publisher Site | Google Scholar
  8. N. Saravanan and K. I. Ramachandran, “Incipient gear box fault diagnosis using discrete wavelet transform (DWT) for feature extraction and classification using artificial neural network (ANN),” Expert Systems with Applications, vol. 37, no. 6, pp. 4168–4181, 2010. View at: Publisher Site | Google Scholar
  9. B. Samanta and K. R. Al-Balushi, “Artificial neural network based fault diagnostics of rolling element bearings using time-domain features,” Mechanical Systems and Signal Processing, vol. 17, no. 2, pp. 317–328, 2003. View at: Publisher Site | Google Scholar
  10. Y. Zhu, L. Huo, and J. Lu, “Bayesian networks-based approach for power systems fault diagnosis,” IEEE Transactions on Power Delivery, vol. 21, no. 2, pp. 634–639, 2006. View at: Publisher Site | Google Scholar
  11. C. C. Wang, Y. Kang, and C. C. Liao, “Using bayesian networks in gear fault diagnosis,” Applied Mechanics and Materials, vol. 284-287, no. 287, pp. 2416–2420, 2013. View at: Publisher Site | Google Scholar
  12. R. Jiang, J. Yu, and V. Makis, “Optimal Bayesian estimation and control scheme for gear shaft fault detection,” Computers and Industrial Engineering, vol. 63, no. 4, pp. 754–762, 2012. View at: Publisher Site | Google Scholar
  13. C. Cortes and V. Vapnik, “Support vector machine,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995. View at: Publisher Site | Google Scholar
  14. J. Xiang and Y. Zhong, “A novel personalized diagnosis methodology using numerical simulation and an intelligent method to detect faults in a shaft,” Applied Sciences, vol. 6, no. 12, p. 414, 2016. View at: Publisher Site | Google Scholar
  15. B. Samanta, “Gear fault detection using artificial neural networks and support vector machines with genetic algorithms,” Mechanical Systems and Signal Processing, vol. 18, no. 3, pp. 625–644, 2004. View at: Publisher Site | Google Scholar
  16. T. Joachims, “Making Large-Scale SVM learning practical,” in Advances in Kernel Methods-Support Vector Learning, B. C. J. Schölkopf and A. Smola, Eds., pp. 169–184, MIT Press, Cambridge, MA, USA, 1998. View at: Google Scholar
  17. Y. Yang and J. O. Pederson, “A comparative study on feature selection in text categorization,” in Proceedings of the 14th International Conference on Machine Learning, pp. 412–420, Morgan Kaufmann, Nashville, TN, USA, July 1997. View at: Google Scholar
  18. H. Drucker, C. J. Burges, L. Kaufman, A. Smola, and V. Vapnik, “Support vector regression machines,” in Neural Information Processing Systems, M. Mozer, M. Jordan, and T. Petsche, Eds., vol. 9, MIT Press, Cambridge, MA, USA, 1997. View at: Google Scholar
  19. L. Guyon, N. Matic, and V. Vapnik, Discovering Information Pattern and Data Cleaning, AT&T Bell Laboratory, Murray Hill, NJ, USA, 1997.
  20. L. Jiang, Y. Liu, X. Li, and A. Chen, “Gear fault diagnosis based on SVM and multi-sensor information fusion,” Journal of Central South University (Science and Technology), vol. 6, pp. 2184–2188, 2010. View at: Google Scholar
  21. X. Li, D. Yang, D. Guo, and L. Jiang, “Fault diagnosis method based on multi-sensors installed on the base and KPCA,” Chinese Journal of Scientific Instrument, vol. 7, pp. 1551–1557, 2011. View at: Google Scholar
  22. Q. Liu, G. Chen, X. Liu, and Q. Yang, “Genetic algorithm based SVM parameter composition optimization,” Computer Applications and Software, vol. 4, pp. 94–96, 2012. View at: Google Scholar
  23. H. Cheng, C. Huang, and Y. Zhang, “Constructed of SVM decision tree based on particle swarm optimization algorithm for gear box fault diagnosis,” Journal of Vibration, Measurement and Diagnosis, vol. 1, pp. 153–156, 2013. View at: Google Scholar
  24. D. Yang, Y. Liu, S. Li, X. Li, and L. Ma, “Gear fault diagnosis based on support vector machine optimized by artificial bee colony algorithm,” Mechanism and Machine Theory, vol. 90, no. 3, pp. 219–229, 2015. View at: Publisher Site | Google Scholar
  25. S. Mirjalili, “The ant lion optimizer,” Advances in Engineering Software, vol. 83, pp. 80–98, 2015. View at: Publisher Site | Google Scholar
  26. W. Wu, J. Zhang, Z. Lin, and Q. Su, “Ant lion optimization algorithm based on double feedback mechanism,” Computer Engineering and Applications, vol. 53, no. 12, pp. 31–35, 2017. View at: Google Scholar
  27. K. Jing, X. Zhao, X. Zhang, and D. Liu, “Ant lion optimizer with levy variation and adaptive elite competition mechanisms,” CAAI Transactions on Intelligent Systems, vol. 2, pp. 1–8, 2018. View at: Google Scholar
  28. S. Talatahari, “Optimum design of skeletal structures using ant lion optimizer,” International Journal of Optimization in Civil Engineering, vol. 6, no. 1, pp. 3–25, 2016. View at: Google Scholar
  29. P. Saxena and A. Kothari, “Ant Lion Optimization algorithm to control side lobe level and null depths in linear antenna arrays,” AEU-International Journal of Electronics and Communications, vol. 70, no. 9, pp. 1339–1349, 2016. View at: Publisher Site | Google Scholar
  30. J. H. M. Mohammad, A. N. Saber, and B. Mehdi, “A multi-objective optimal Sizing and siting of distributed generation using ant lion optimization technique,” Ain Shams Engineering Journal, vol. 9, no. 4, pp. 2101–2109, 2017. View at: Publisher Site | Google Scholar
  31. R. G. S. Mei, M. H. Sulaiman, and A. Mustaffa, “Ant lion optimizer for optimal reactive power dispatch solution,” Journal of Electrical Systems, vol. 3, pp. 67–74, 2015. View at: Google Scholar
  32. A. Mahajan and M. G. Kaur, Community Detection in Complex Networks Using a Novel Nature Inspired Algorithmic Approach based on Ant Lion Optimizer, Thapar University Patiala, Patiala, India, 2015.
  33. Y. Zhong, J. Zha, and S. Zhu, “Feature selection algorithm using Bayesian harmony measures,” Computer Engineering and Applications, vol. 51, no. 23, pp. 125–130, 2015. View at: Google Scholar
  34. Z. He, “Beijing electricity demand forecast based on ant lion optimizer GM (1,1) model,” Modern Industrial Economy and Informatization, vol. 7, no. 22, pp. 57–60+83, 2017. View at: Google Scholar
  35. S. Zhao, L. Gao, D. Yu, and J. Tu, “Ant lion optimizer with chaotic investigation mechanism for optimizing SVM parameters,” Journal of Frontiers of Computer Science and Technology, vol. 10, no. 05, pp. 722–731, 2016. View at: Google Scholar
  36. C. Nello and S. T. John, An Introduction to Support Vector Machine and the Kernel-based Learning Methods, Cambridge University Press, Cambridge, UK, 2000.
  37. D. Yang, Y. Liu, S. Li, J. Tao, C. Liu, and J. Yi, “Fatigue crack growth prediction of 7075 aluminum alloy based on the GMSVR model optimized by the artificial bee colony algorithm,” Engineering Computations, vol. 34, no. 4, pp. 1–21, 2017. View at: Publisher Site | Google Scholar
  38. J. Liu, Z. Liu, and Y. Xiong, “Improved multi-category support vector machines based on binary tree,” Computer Engineering and Applications, vol. 46, no. 33, pp. 117–120, 2010. View at: Google Scholar
  39. D. Konstantin and Z. Dominique, “Variational mode decomposition,” IEEE Transactions on Signal Processing, vol. 62, no. 3, pp. 531–544, 2014. View at: Publisher Site | Google Scholar
  40. Y. Wang, R. Markert, J. Xiang, and W. Zheng, “Research on variational mode decomposition and its application in detecting rub-impact fault of the rotor system,” Mechanical Systems and Signal Processing, vol. 60-61, pp. 243–251, 2015. View at: Publisher Site | Google Scholar
  41. C. Aneesh, S. Kumar, P. M. Hisham, and K. P. Soman, “Performance comparison of variational mode decomposition over empirical wavelet transform for the classification of power quality disturbances using support vector machine,” Procedia Computer Science, vol. 46, pp. 372–380, 2015. View at: Publisher Site | Google Scholar

Copyright © 2019 Dalian Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

