Abstract

Bearings are important mechanical components that fail easily in harsh working environments. Support vector machines (SVMs) can be used to diagnose bearing faults; however, the recognition ability of the model is strongly affected by the kernel function and its parameters, and optimal parameters are difficult to select. To address these limitations, an escape mechanism and adaptive convergence conditions were introduced into the ant lion optimizer (ALO), yielding the proposed EALO method, which was applied to more accurate selection of SVM model parameters. To assess the model, the vibration acceleration signals of normal, inner ring fault, outer ring fault, and ball fault bearings were collected at different rotation speeds (1500 r/min, 1800 r/min, 2100 r/min, and 2400 r/min). The vibration signals were decomposed using the variational mode decomposition (VMD) method, and fault features were extracted by using a kernel function to fuse the energy values of the VMD components. In these experiments, the two most important parameters of the support vector machine, the Gaussian kernel parameter σ and the penalty factor C, were optimized using the EALO algorithm, the ALO algorithm, the genetic algorithm (GA), and the particle swarm optimization (PSO) algorithm. The performance of these four optimization methods was then compared and analyzed, with the EALO method performing best. The recognition rates for bearing faults at the tested rotation speeds were improved when the SVM model parameters optimized by the EALO were used.

1. Introduction

The rolling bearing is a core mechanical component that is widely used in wind turbines, aeroengines, ships, automobiles, and other important mechanical equipment [1, 2]. Rolling bearings usually run for extended periods under extreme conditions such as high temperature, high speed, and high load. These long-term, extreme running conditions lead to a variety of serious failures, including ball wear, metal spalling on the inner and outer raceways, and cracks in the cage [3]. A weak fault produces abnormal vibrations, which degrade the performance of the mechanical equipment and reduce work efficiency. A serious fault may destroy the machine and, depending on the severity, cause worker injury or death. In either case, the result is a large loss to the business enterprise and to society as a whole, and ultimately a serious barrier to the harmonious development of a national economy. Although bearing failure is inevitable, following a standard maintenance schedule can partially reduce the accident rate caused by bearing failure; even so, bearings still suffer from over- and undermaintenance, which increases business costs. Developments in computer and testing technology have produced many advanced automatic monitoring and intelligent diagnostic technologies. These have been adopted for online monitoring of working conditions, which allows for timely fault detection as well as accurate reference points for future maintenance decisions. Given these advantages, it is important to study the application of intelligent fault diagnostic technology to rolling bearings.

Currently, vibration monitoring is the most commonly adopted method of monitoring bearing conditions [4]. When a fault first appears, its characteristic signal is very weak [5] because it is overwhelmed by natural frequency vibrations, transfer-path modulation, and noise interference. Despite being weak, the signal has obvious nonlinear and nonstationary characteristics [6]. Traditional fault diagnostic methods analyze vibration signals in the time, frequency, and time-frequency domains. Traditional pattern recognition approaches such as the neural network [7–9] and Bayesian decision [10–12] methods have difficulty handling these signals because they require a large number of valid samples to function properly: when the sample size is too small, model accuracy decreases, yet a large number of fault samples is difficult to obtain in practice. Therefore, the application of traditional pattern recognition methods such as the neural network and the Bayesian decision is restricted.

The support vector machine (SVM) [13–15] is a pattern recognition method developed in the 1990s that is suitable for small-sample conditions. This method takes a kernel function as its core and implicitly maps the original data from the original space to a feature space. A search for linear relations is then conducted in the feature space, which yields efficient solutions to nonlinear problems. SVM has been widely applied in the field of pattern recognition, including text recognition [16], handwritten numeral recognition [17], face detection [18], system control [19], and many other related applications. The accuracy of SVM classification is highly affected by the kernel function and its parameters, since the relationship between the parameters and model classification accuracy is an irregular multimodal function. Improper parameter values therefore worsen the model's generalization ability, leading to less accurate fault recognition. Unfortunately, optimal parameters are difficult to obtain, and empirical selection is unreliable in practice. Computer-driven parameter optimization not only reduces the workload of human engineers but also provides a more reliable basis for selecting optimal solutions. Current methods used for parameter optimization include grid cross-validation (GCV) [20, 21], the genetic algorithm (GA) [22], and particle swarm optimization (PSO) [23, 24]; despite this variety, the optimization efficiency of these methods remains imperfect.

In 2015, a new bionic intelligent algorithm termed “Ant Lion Optimizer” (ALO) was devised by Mirjalili [25]. ALO has many advantages, including its simple principle, ease of implementation, reduced need for parameter adjustment, and high precision [26, 27]. It has been successfully applied to a variety of fields like structure optimization [28], antenna layout optimization [29], distributed system siting [30], idle power distribution problem [31], community mining in complex networks [32], and feature extraction [33]. Recently, He et al. [34] utilized the ant lion optimizer to optimize a GM (1,1) model to predict the power demands of Beijing. Their results showed that this approach improved the adaptability of the GM (1,1) model. Relatedly, Zhao et al. [35] improved the ant lion optimizer by using a chaos detection mechanism to optimize SVM. They then used a UCI standard database for verification. Collectively, their results showed the ALO algorithm improved classification accuracy.

At present, there are few studies regarding ALO application in bearing fault diagnosis. Moreover, as a new bionic optimization algorithm, ALO still has limitations: during the iteration process, some ant lion individuals have relatively poor fitness. If the ants select these poor-fitness ant lions to walk around, the probability of falling into a local extremum increases. In addition, computational resources are wasted when poor-fitness ant lions search around a local extremum, which partially degrades the optimization performance and convergence efficiency of the ALO algorithm.

Given the aforementioned problems, this paper takes the rolling bearing as its test object, improves the ant lion optimizer, and combines the improved algorithm with SVM to diagnose bearing faults. This work has both theoretical significance and practical value for improving the accuracy of rolling bearing fault diagnosis, thereby helping to ensure the safe and stable operation of rolling bearings.

2. Basic Algorithm Principles

2.1. Ant Lion Optimizer

The ALO algorithm is modeled on the hunting behavior of antlion larvae in nature. The optimization algorithm mimics the random walking of ants, the construction of traps, the luring of ants into the traps, the capture of the ants, and the reconstruction of the traps. The ALO algorithm conducts a global search through the random walks of ants around randomly selected ant lions, and local refinement is achieved by adaptively shrinking the boundary of the ant lion trap.

The total number of ants and of ant lions is defined as $N$, the problem dimension as $D$, the maximum number of iterations as $T_{\max}$, the lower boundary of the search space as $lb$, and the upper boundary as $ub$; the position matrix of the ants is $X_A^t$ and the position matrix of the ant lions is $X_{AL}^t$, where $t$ represents the current iteration number and satisfies $1 \le t \le T_{\max}$, $X_A^t(i,:)$ represents the position of the $i$th ant in the $D$-dimensional space after the $t$th iteration, and $X_{AL}^t(i,:)$ represents the position of the $i$th ant lion in the $D$-dimensional space after the $t$th iteration. When $t = 0$, the positions of the initial ant and ant lion populations can be assigned by the following formulae:
$$X_A^0(i,j) = lb_j + \mathrm{rand}(0,1)\left(ub_j - lb_j\right), \qquad X_{AL}^0(i,j) = lb_j + \mathrm{rand}(0,1)\left(ub_j - lb_j\right),$$
where $i = 1, 2, \ldots, N$, $j = 1, 2, \ldots, D$, and $\mathrm{rand}(0,1)$ is a random number drawn uniformly from $[0, 1]$.

The fitness vectors $F_A^t$ of the ants and $F_{AL}^t$ of the ant lions can be expressed as
$$F_A^t(i) = f\!\left(X_A^t(i,:)\right), \qquad F_{AL}^t(i) = f\!\left(X_{AL}^t(i,:)\right),$$
where $f(\cdot)$ is the fitness evaluation function. Define $X_{elite}^t$ as the elite ant lion after the $t$th iteration, which satisfies
$$f\!\left(X_{elite}^t\right) = \max_{1 \le i \le N} f\!\left(X_{AL}^t(i,:)\right).$$

The iterative process of the ALO algorithm is to continuously update the position according to the interaction between the ants and the ant lions; after this update, it then reselects the elite ant lions. The ALO algorithm primarily includes random ant walks, trapping in an ant lion’s pit, building traps, sliding ants towards the ant lion, catching prey, rebuilding the pit, and elitism.

2.1.1. Random Ant Walks

When searching for food in nature, ants move stochastically; as such, a random walk is used to model the ants' movement as follows:
$$X(t) = \left[0,\ \mathrm{cumsum}\!\left(2r(t_1)-1\right),\ \mathrm{cumsum}\!\left(2r(t_2)-1\right),\ \ldots,\ \mathrm{cumsum}\!\left(2r(t_{T_{\max}})-1\right)\right],$$
where $\mathrm{cumsum}$ denotes the cumulative sum and $r(t)$ is a stochastic function that is defined as follows:
$$r(t) = \begin{cases} 1, & \mathrm{rand} > 0.5, \\ 0, & \mathrm{rand} \le 0.5, \end{cases}$$
where $\mathrm{rand}$ is a random number drawn uniformly from $[0, 1]$.

In order to keep the random walks of the ants inside the search space, the positions of the ants are normalized by the min-max normalization
$$X_i^t = \frac{\left(X_i^t - a_i\right)\left(d_i^t - c_i^t\right)}{b_i - a_i} + c_i^t,$$
where $a_i$ and $b_i$ are the minimum and maximum of the random walk of the $i$th variable, and $c_i^t$ and $d_i^t$ are the minimum and maximum of the $i$th variable at the $t$th iteration.
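As an illustration of the two formulas above, the cumulative-sum walk and its min-max normalization into a boundary interval can be sketched in Python (a minimal sketch; the function and variable names are ours, not from the original paper):

```python
import numpy as np

def random_walk(t_max, lb, ub):
    """One random-walk trajectory of an ant in a single dimension,
    normalized into the interval [lb, ub] (min-max normalization)."""
    steps = np.where(np.random.rand(t_max) > 0.5, 1.0, -1.0)   # 2*r(t) - 1
    walk = np.concatenate(([0.0], np.cumsum(steps)))           # cumulative sum
    a, b = walk.min(), walk.max()                              # extremes of the raw walk
    return (walk - a) * (ub - lb) / (b - a + 1e-12) + lb       # rescale into [lb, ub]

# Example: a 100-step walk confined to the boundary [0, 10]
print(random_walk(100, 0.0, 10.0)[:5])
```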

2.1.2. Building a Trap

According to the fitness values, an individual ant lion is selected from the ant lion population of the previous generation through a roulette-wheel operation. The selection probability of the $i$th ant lion is proportional to its fitness,
$$p_i = \frac{f\!\left(X_{AL}^{t-1}(i,:)\right)}{\sum_{k=1}^{N} f\!\left(X_{AL}^{t-1}(k,:)\right)},$$
and the cumulative probability over the sorted (ascending) fitness values is $q_i = \sum_{k=1}^{i} p_k$. A random number $r \in [0, 1]$ is generated; as $i$ increases from 1, the first index $i$ that satisfies $q_i \ge r$ identifies the selected individual $X_{roulette}^t$, which then builds the trap together with the elite ant lion $X_{elite}^t$.
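A minimal Python sketch of this fitness-proportional roulette-wheel selection (assuming fitness values are positive and larger is better; names are illustrative):

```python
import numpy as np

def roulette_select(fitness, rng=np.random):
    """Return the index of the ant lion chosen by roulette-wheel selection."""
    fitness = np.asarray(fitness, dtype=float)
    probs = fitness / fitness.sum()          # selection probability p_i
    cum = np.cumsum(probs)                   # cumulative probability q_i
    r = rng.rand()                           # random number in [0, 1]
    return int(np.searchsorted(cum, r))      # first index with q_i >= r

# Example: the ant lion with fitness 0.9 is selected most often
print(roulette_select([0.2, 0.5, 0.9, 0.1]))
```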

2.1.3. Trapping in an Ant Lion’s Pit

This process is described as the ants walking around the "trap." The boundary of the walking area is affected by the position of the selected ant lion, which can be defined by the following formulae:
$$c_i^t = X_{AL}^t(j,:) + c^t, \qquad d_i^t = X_{AL}^t(j,:) + d^t,$$
where $c^t$ and $d^t$ are the minimum and maximum of all variables at the $t$th iteration, $X_{AL}^t(j,:)$ is the position of the selected $j$th ant lion (or the elite), and $c_i^t$ and $d_i^t$ are the minimum and maximum of all variables for the $i$th ant.

2.1.4. Sliding Ants towards the Ant Lion

As soon as the ant starts sliding towards the trap, the ant lion realizes that the ant is in the trap and shoots sand toward it to prevent it from escaping. The process can be described as an adaptive decrease in the radius of a given ant's random walk hypersphere:
$$c^t = \frac{c^t}{I}, \qquad d^t = \frac{d^t}{I}, \qquad I = 10^{\omega} G,$$
where $G$ is the ratio of the current iteration number to the maximum iteration number and $\omega$ is the radius reduction scale index, which satisfies
$$\omega = \begin{cases} 2, & G > 0.1, \\ 3, & G > 0.5, \\ 4, & G > 0.75, \\ 5, & G > 0.9, \\ 6, & G > 0.95. \end{cases}$$
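The adaptive boundary shrinking can be sketched as follows (the step thresholds for the scale index ω follow commonly used ALO settings and are an assumption here):

```python
def shrink_boundaries(c, d, t, t_max):
    """Shrink the lower/upper walk boundaries c and d by the ratio I = 10^w * G."""
    G = t / t_max                              # ratio of current to maximum iteration
    w = 0
    for threshold, value in [(0.1, 2), (0.5, 3), (0.75, 4), (0.9, 5), (0.95, 6)]:
        if G > threshold:
            w = value                          # radius reduction scale index
    I = (10 ** w) * G if w > 0 else 1.0        # no shrinking in the earliest iterations
    return c / I, d / I                        # adaptively reduced boundaries

# Example: boundaries shrink sharply near the end of the run
print(shrink_boundaries(-1.0, 1.0, 95, 100))
```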

2.1.5. Catching Prey and Rebuilding the Pit

The ant lion kills the ant and eats its body. If a prey ant's fitness is higher than the elite's fitness, the elite ant lion updates its position to that of the prey ant; in other words, the elite ant lion will build a new trap for the next prey. Given this scenario, the following equation is proposed:
$$X_{elite}^t = X_A^t(i,:) \quad \text{if} \quad f\!\left(X_A^t(i,:)\right) > f\!\left(X_{elite}^t\right).$$

2.1.6. Elitism

The elite ant lion affects the movements of all ants: each ant randomly walks around both a roulette-selected ant lion and the elite simultaneously. This behavior is modeled as the average of the two random walks:
$$X_A^t(i,:) = \frac{R_{roulette}^t + R_{elite}^t}{2},$$
where $R_{roulette}^t$ represents the random walk of the $i$th ant around the ant lion selected by the roulette wheel at the $t$th iteration and $R_{elite}^t$ represents the random walk of the $i$th ant around the elite at the $t$th iteration.
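A sketch of the resulting position update, averaging the walk around the roulette-selected ant lion and the walk around the elite (illustrative names):

```python
import numpy as np

def update_ant(walk_around_roulette, walk_around_elite):
    """New ant position: average of the walk around the roulette-selected
    ant lion and the walk around the elite ant lion (elitism)."""
    return (np.asarray(walk_around_roulette) + np.asarray(walk_around_elite)) / 2.0

# Example in a 2-D search space: average position is [1.5, 4.0]
print(update_ant([1.0, 3.0], [2.0, 5.0]))
```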

2.2. Support Vector Machine

In the 1990s, Vapnik [13] proposed a new machine learning method termed the support vector machine (SVM). SVM is based on statistical learning theory and the structural risk minimization principle. Statistical learning theory is a machine learning theory specialized for small-sample situations; as a result, SVM has good generalization ability. In addition, SVM training is a convex quadratic optimization problem, which guarantees that the obtained extremum is also the global optimum [36, 37]. Collectively, these characteristics allow it to avoid the local extremum and curse-of-dimensionality problems that are unavoidable when using a neural network. The standard SVM model is established for two-class classification. Its basic principles are as follows.

For the two classes to be classified, the sample dataset is defined as follows:
$$T = \left\{ (x_1, y_1), (x_2, y_2), \ldots, (x_l, y_l) \right\}, \quad x_i \in \mathbb{R}^n, \quad y_i \in \{-1, +1\},$$
where $l$ is the number of samples.

The SVM classification model is established to find an optimal separating hyperplane $w \cdot x + b = 0$, which satisfies
$$y_i\left(w \cdot x_i + b\right) \ge 1 - \xi_i, \quad i = 1, 2, \ldots, l,$$
where $w$ is the normal vector of the hyperplane and $b$ is the offset. Therefore, the following convex quadratic optimization model can be established:
$$\min_{w, b, \xi}\ \frac{1}{2}\left\|w\right\|^2 + C\sum_{i=1}^{l}\xi_i \quad \text{s.t.} \quad y_i\left(w \cdot x_i + b\right) \ge 1 - \xi_i, \quad \xi_i \ge 0,$$
where $C$ is the punishment (penalty) factor and $\xi_i$ is the relaxation (slack) factor.

By solving the following dual problem, the optimal solution $\alpha^{*} = \left(\alpha_1^{*}, \alpha_2^{*}, \ldots, \alpha_l^{*}\right)^{T}$ is obtained:
$$\max_{\alpha}\ \sum_{i=1}^{l}\alpha_i - \frac{1}{2}\sum_{i=1}^{l}\sum_{j=1}^{l}\alpha_i\alpha_j y_i y_j\left(x_i \cdot x_j\right) \quad \text{s.t.} \quad \sum_{i=1}^{l}\alpha_i y_i = 0, \quad 0 \le \alpha_i \le C.$$

This allows the optimal hyperplane normal vector $w^{*}$ in equation (19) to be obtained:
$$w^{*} = \sum_{i=1}^{l}\alpha_i^{*} y_i x_i.$$

The samples corresponding to $\alpha_i^{*} > 0$ are called the support vectors (SVs), and the offset $b^{*}$ is calculated from any support vector $x_j$ by
$$b^{*} = y_j - \sum_{i=1}^{l}\alpha_i^{*} y_i\left(x_i \cdot x_j\right).$$

The decision function yielded is as follows:
$$f(x) = \operatorname{sgn}\!\left(\sum_{i=1}^{l}\alpha_i^{*} y_i\left(x_i \cdot x\right) + b^{*}\right).$$

The above SVM model is established for linearly separable samples; for nonlinear classification, a nonlinear transform $\phi(x)$ is introduced. $\phi(x)$ can transform the nonlinear samples in the low-dimensional space into linear samples in a high-dimensional Hilbert space. Unfortunately, this nonlinear transform is difficult to obtain explicitly. In practice, a kernel function $K(x_i, x_j) = \phi(x_i) \cdot \phi(x_j)$ is often used instead of the explicit nonlinear transform $\phi(x)$; the kernel function maps the nonlinear samples in the low-dimensional space into the high-dimensional Hilbert space by calculating the inner product. So the discriminant function is described as
$$f(x) = \operatorname{sgn}\!\left(\sum_{i=1}^{l}\alpha_i^{*} y_i K\!\left(x_i, x\right) + b^{*}\right).$$

As shown in formula (24), the kernel function is the core of the SVM, and it plays an important role in its generalization ability. The common kernel functions are shown in Table 1.

As shown in Table 1, different kernel functions have different expressions and parameters. Therefore, different kernel functions and parameters have different abilities to map data to higher-dimensional space. Since kernel functions and parameter values affect the generalization ability of the SVM model, the selection of the best parameters is extremely important.
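For the Gaussian (radial basis) kernel used later in this paper, the roles of the kernel parameter and the penalty factor can be illustrated with scikit-learn (a sketch on toy data, not the authors' implementation; note that scikit-learn parameterizes the Gaussian kernel by gamma = 1/(2σ²)):

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-class data standing in for the bearing features
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

sigma, C = 1.0, 10.0                                   # kernel width and penalty factor
clf = SVC(kernel="rbf", gamma=1.0 / (2 * sigma**2), C=C)
clf.fit(X, y)

# sign( sum_i alpha_i* y_i K(x_i, x) + b* ) for new samples
print(clf.predict([[0.5, 0.5], [3.2, 2.8]]))
print("number of support vectors:", clf.n_support_.sum())
```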

The standard SVM solves the problem of two-class classification; in reality, encountering a multiclass problem is more common than a two-class problem. Therefore, the study of a multiclass SVM problem is of great significance. At present, researchers have proposed a handful of effective multiclass SVM construction methods. These approaches can be divided into two categories, with the first being the direct construction method. This method improves the discriminant function of a two-class SVM model to construct a multiclass model. This method uses only one SVM discriminant function to achieve a multiclass output. The discriminant function of this algorithm is very complex, and its classification accuracy is not good. The second method is to realize the construction of a multiclass SVM classifier by combining multiple two-class SVMs. In practice, this method is more widely used and includes one-against-one, one-against-all, direct acyclic graph, and binary tree approaches [38].
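As an example of the second (combination) category, the one-against-one strategy trains one two-class SVM for every pair of classes and votes among them; the sketch below uses scikit-learn's SVC, whose internal multiclass scheme is one-against-one, on synthetic four-class data standing in for the four bearing conditions:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Four synthetic classes standing in for normal, inner ring, outer ring, and ball faults
X = np.vstack([rng.normal(m, 0.5, (30, 8)) for m in (0.0, 1.0, 2.0, 3.0)])
y = np.repeat([0, 1, 2, 3], 30)

# SVC builds K*(K-1)/2 = 6 pairwise two-class classifiers internally (one-against-one)
clf = SVC(kernel="rbf", gamma=0.5, C=10.0, decision_function_shape="ovo")
clf.fit(X, y)
print(clf.decision_function(X[:1]).shape)   # (1, 6): one decision value per class pair
print(clf.predict(X[:1]))
```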

3. Ant Lion Optimizer Improvement

3.1. Improvement Based on the Escape Mechanism

In the ALO algorithm, ants randomly walk around the elite ant lion and the roulette wheel-selected ant lion; these ants gradually fall into the trap set by the ant lion. As the number of iterations increases, the walk range of the ants becomes increasingly smaller. In turn, this means the range of the search optimization solution becomes increasingly smaller as well. If the elite ant lion is located at the local extremum value, the risk of falling into the local extremum is increased. This reduces the optimization performance of the ALO algorithm. In nature, when an ant lion builds an ant trap, it is not always successful in catching the ants that fall into the trap. If the ants find that there is an ant lion nearby, they will avoid it to escape being eaten.

Here—and based on the aforementioned considerations—the ant escape mechanism was introduced into the ALO algorithm. This introduction resulted in an improved ALO algorithm, termed here as the EALO algorithm. By introducing the ant escape mechanism, the possibility of the algorithm falling into a local extremum value is reduced, thereby improving the optimization ability of the algorithm.

Let $P_e$ be the escape probability of the ants, $N$ the total number of ants, and $N_e$ the maximum number of escaping ants, satisfying $N_e < N$. After the ants walk around the elite ant lion and the roulette-selected ant lion at the $t$th iteration, the fitness of the ants is ranked. Then, up to $N_e$ of the lowest-fitness ants are selected and randomly reassigned to arbitrary locations within the search space. That is,
$$X_A^t(i,j) = lb_j + \mathrm{rand}(0,1)\left(ub_j - lb_j\right),$$
where $i$ indexes the escaping ants and $j = 1, 2, \ldots, D$.
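A minimal sketch of the proposed escape step (the way the escape probability and the maximum number of escaping ants interact is our reading of the text above and should be treated as an assumption):

```python
import numpy as np

def escape(ants, fitness, lb, ub, p_escape, n_escape_max, rng=np.random):
    """Randomly relocate up to n_escape_max of the lowest-fitness ants,
    each escaping with probability p_escape (assumed interpretation)."""
    ants = ants.copy()
    worst = np.argsort(fitness)[:n_escape_max]         # lowest-fitness ants first
    for i in worst:
        if rng.rand() < p_escape:
            # reassign the ant to a random location inside [lb, ub]
            ants[i] = lb + rng.rand(ants.shape[1]) * (ub - lb)
    return ants

# Example: 10 ants in a 2-D search space with boundaries [0, 1]
ants = np.random.rand(10, 2)
fit = np.random.rand(10)
print(escape(ants, fit, np.zeros(2), np.ones(2), p_escape=0.3, n_escape_max=3))
```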

3.2. Improvement Based on Adaptive Convergence Conditions

The optimization performance of the ALO algorithm is assessed primarily in terms of precision and time consumption. In the ALO algorithm, convergence is controlled by setting the maximum number of iterations $T_{\max}$. If the preset maximum number of iterations is too large, the algorithm takes too long to complete; if it is too small, the precision of the algorithm cannot be guaranteed. For this reason, $T_{\max}$ usually takes a large value in practical applications, with algorithm accuracy taking priority over time. However, some applications, such as online monitoring and fault diagnosis, require accurate solutions obtained in a short time, because the running state of mechanical equipment must be assessed rapidly. In these cases, it is unreasonable to adopt a fixed number of iterations. Moreover, as the number of iterations increases, the walking range of the ants decreases and the fitness differences between individuals also decrease, which makes the spread of the fitness values a natural convergence indicator.

Assuming that $\varepsilon$ is a small positive number, if the following formula is satisfied, then the EALO algorithm terminates and returns the optimal solution $X_{elite}^t$:
$$\max_{1 \le i \le N} F_A^t(i) - \min_{1 \le i \le N} F_A^t(i) < \varepsilon.$$
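The adaptive stopping rule amounts to terminating once the fitness values of the ant population lie within a band narrower than ε, as sketched below:

```python
import numpy as np

def has_converged(ant_fitness, eps=1e-3):
    """Stop when max(fitness) - min(fitness) of the ant population < eps."""
    ant_fitness = np.asarray(ant_fitness, dtype=float)
    return (ant_fitness.max() - ant_fitness.min()) < eps

print(has_converged([0.981, 0.982, 0.9815], eps=1e-2))   # True: population has converged
print(has_converged([0.60, 0.95, 0.80], eps=1e-2))       # False: keep iterating
```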

4. EALO-SVM Modeling

Assume that $S$ is a sample set containing $M$ classes of faults. The same number of samples is randomly selected from each fault sample set to form the training sample set $S_{train}$; the remaining samples form the testing sample set $S_{test}$. Taking the radial basis kernel function as an example, the parameters to be optimized are the kernel function parameter $\sigma$ and the penalty factor $C$. A diagram of SVM parameter optimization based on EALO is shown in Figure 1, and the modeling steps are as follows:
(1) Parameter Presetting. Set the population size of the ants and of the ant lions to $N$. The lower boundary of the search space is $lb$ and the upper boundary is $ub$. The probability of ant escape is $P_e$, the maximum number of escaping ants is $N_e$, and the convergence threshold is $\varepsilon$.
(2) Ant and Ant Lion Population Initialization. According to the preset parameters and the parameters to be optimized, the ant position matrix $X_A^0$ and the ant lion position matrix $X_{AL}^0$ are randomly generated according to formulae (2) and (3).
(3) Fitness Estimation. Using the ant position matrix $X_A^t$ and the ant lion position matrix $X_{AL}^t$, the SVM model is trained on the training sample set $S_{train}$ and then used to predict the testing sample set $S_{test}$; the fitness vectors $F_A^t$ of the ants and $F_{AL}^t$ of the ant lions are thereby obtained. The fitness estimation function is defined as the classification accuracy on the testing sample set, $f = n_{correct}/n_{test}$, where $n_{correct}$ is the number of correctly classified testing samples and $n_{test}$ is the total number of testing samples (a code sketch of this fitness evaluation is given after these steps).
(4) Natural Elite Selection. According to formula (5), the ant lion with the highest fitness is selected as the natural elite ant lion $X_{elite}^t$.
(5) Roulette Elite Selection. According to formulae (9)–(11), the roulette-selected ant lion $X_{roulette}^t$ is chosen.
(6) Random Ant Walking. According to formulae (12)–(14), the ants walk randomly around the natural elite ant lion $X_{elite}^t$ and the roulette-selected ant lion $X_{roulette}^t$.
(7) Random Escape. Some ants are randomly reassigned to arbitrary positions in the search space according to formula (25).
(8) Rebuilding the Traps. After feeding on the ants, the position of the natural elite ant lion is updated according to formula (15). The natural elite ant lion then rebuilds its trap at the new position to prepare for the next predation.
(9) Stopping the Iteration. The maximum and minimum fitness values of the ants are calculated and substituted into formula (26). If formula (26) is not satisfied, return to step (5) and continue with the next iteration; otherwise, the iteration stops, the natural elite ant lion $X_{elite}^t$ is output, and its position is taken as the optimal solution $(\sigma^{*}, C^{*})$.
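The fitness evaluation in step (3) can be sketched as follows: each candidate position encodes a (σ, C) pair, an SVM is trained on the training set, and its classification accuracy on the testing set is returned as the fitness. This is a minimal sketch using scikit-learn; the original study used a binary tree multiclass SVM, whereas SVC's built-in one-against-one scheme is used here for brevity:

```python
import numpy as np
from sklearn.svm import SVC

def svm_fitness(position, X_train, y_train, X_test, y_test):
    """Fitness of one ant/ant lion position = test accuracy of an SVM
    trained with the encoded (sigma, C) parameters."""
    sigma, C = position                                  # 2-D search space
    clf = SVC(kernel="rbf", gamma=1.0 / (2 * sigma**2), C=C)
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)                     # classification accuracy

# Example with synthetic 8-dimensional features for 4 classes
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(m, 0.4, (50, 8)) for m in (0, 1, 2, 3)])
y = np.repeat([0, 1, 2, 3], 50)
idx = rng.permutation(len(y))
train, test = idx[:100], idx[100:]
print(svm_fitness((1.0, 10.0), X[train], y[train], X[test], y[test]))
```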

5. Experiments

5.1. Data Acquisition

Bearing fault vibration tests were conducted to verify the performance of the EALO-SVM algorithm. These tests were performed on a machinery fault simulator (SpectraQuest Inc., Hermitage Road, Richmond, Virginia, USA), shown in Figure 2. The simulator is driven by a three-phase motor and allows for accurate control of the rotation speed. In the experiment, the fault bearing was installed on the bearing pedestal near the motor, and the normal bearing was installed on the right bearing pedestal. The vibration signals were collected using a high-speed multichannel data acquisition instrument (DEWETRON, Graz, Austria). Eight acceleration sensors were installed: four on the base (sensors 1–4), three on the fault-bearing pedestal to obtain vibration acceleration signals in the x, y, and z directions (sensors 5–7), and one on the other bearing pedestal (sensor 8). The sampling frequency was 10 kHz. Four typical bearing conditions (normal, inner ring fault, outer ring fault, and ball fault) were simulated at four motor speeds (1500, 1800, 2100, and 2400 r/min), and their vibration acceleration signals were collected. Time waveforms of some of the collected acceleration signals are shown in Figure 3.

5.2. Feature Extraction

As shown in Figure 3, the bearing fault signals were highly nonlinear and nonstationary, making it difficult to identify the fault type directly from the time-domain signals. The weak fault features were overwhelmed by the strong background noise, and traditional methods cannot extract them effectively.

Variational mode decomposition (VMD) is a new signal decomposition technique that was proposed by Dragomiretskiy [39] in 2014 and is highly suitable for nonlinear and nonstationary signals. VMD abandons the recursive modal decomposition framework of empirical mode decomposition (EMD); instead, it introduces a variational model into the signal decomposition and decomposes the input signal into a series of modal components in different frequency bands by searching for the optimal solution of the constrained variational model [40, 41]. VMD has a strong theoretical basis, and selecting a basis function is unnecessary. In essence, VMD is a group of adaptive Wiener filters with good robustness. With these characteristics, VMD performs better than both the wavelet transform and EMD in many applications.

Assume that $x(n)$ is a discrete sequence of bearing fault signals of length $L$ collected by one sensor. VMD is first conducted to obtain $K$ modal functions $u_k(n)$, $k = 1, 2, \ldots, K$, where the energy $E_k$ of the $k$th modal component is defined by the following formula:
$$E_k = \sum_{n=1}^{L}\left|u_k(n)\right|^2,$$
and then, the energy vector $E$ can be obtained from the $K$ modal components:
$$E = \left[E_1, E_2, \ldots, E_K\right].$$

Introducing the Gaussian kernel function results in
$$K(x, y) = \exp\!\left(-\frac{\left\|x - y\right\|^2}{2\sigma_0^2}\right),$$
where $\sigma_0$ is the kernel width. Define a reference vector $E_0$; then the kernel feature $g$ of the sensor is obtained according to equation (31):
$$g = K\!\left(E, E_0\right).$$

Finally, a feature sample is obtained by calculating the kernel features of the signals of all $S$ sensors:
$$F = \left[g_1, g_2, \ldots, g_S\right].$$

In this experiment, the signals of the $S = 8$ sensors were used. For the normal bearing, 100 groups of feature samples were extracted to form a dataset matrix with dimensions of 8 × 100; a dataset matrix with dimensions of 8 × 100 was likewise extracted for each of the faulty bearings (inner ring fault, outer ring fault, and ball fault).
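A sketch of the feature construction described above, assuming the K modal components from VMD are already available; the exact energy definition, the normalization, and the reference vector used in the kernel are our reading of the text and should be treated as assumptions:

```python
import numpy as np

def kernel_energy_feature(modes, sigma0=1.0):
    """Fuse the energies of the K VMD modal components of one sensor
    into a single Gaussian-kernel feature value."""
    modes = np.asarray(modes)                       # shape (K, L): K modes of length L
    energies = np.sum(modes**2, axis=1)             # E_k = sum_n |u_k(n)|^2  (assumed)
    energies = energies / energies.sum()            # normalize the energy vector (assumed)
    ref = np.zeros_like(energies)                   # reference vector E_0 (assumed)
    return np.exp(-np.sum((energies - ref)**2) / (2 * sigma0**2))

def feature_sample(modes_per_sensor, sigma0=1.0):
    """One feature sample: one kernel feature per sensor (8-dimensional here)."""
    return np.array([kernel_energy_feature(m, sigma0) for m in modes_per_sensor])

# Example: 8 sensors, each decomposed into K = 4 modes of length 1000
rng = np.random.default_rng(3)
modes_all = [rng.normal(0, 1, (4, 1000)) for _ in range(8)]
print(feature_sample(modes_all))       # 8-dimensional feature vector
```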

Figure 4 shows the fault feature vectors corresponding to sensor 1 at different rotation speeds. As shown, the proposed VMD-based fault feature method effectively represents the information of the different fault types.

5.3. Model Optimization and Performance Analysis

Fifty groups of samples were selected from each fault sample dataset (normal, inner ring, outer ring, and ball fault bearings) to construct the training sample matrix with dimensions of 8 × 200; the remaining samples of each dataset were used to construct the testing sample matrix with dimensions of 8 × 200. The Gaussian kernel parameter σ and the penalty factor C greatly influence diagnostic accuracy; therefore, when the EALO algorithm was used for optimal parameter selection, search ranges were defined for both σ and C.

To verify the effectiveness of the EALO method proposed here, the ALO, genetic algorithm (GA), and particle swarm optimization (PSO) methods with different parameters were selected for comparison. All algorithms were executed on a computer running the Windows 10 x64 operating system with an Intel® Core™ i7-8700K CPU @ 3.70 GHz and 64 GB of memory. To prevent the algorithms from iterating indefinitely, a maximum of 100 iterations was adopted. Additional parameters of the different algorithms are shown in Table 2.

Using the rotation speed of 1500 r/min as an example, the distributions of the ant and ant lion populations during SVM parameter optimization by EALO are shown in Figure 5. As shown, as the iteration number increased, the ant and ant lion populations gradually converged to the optimal solution region, indicating that the EALO algorithm is convergent. As shown in Figure 5(a), the initial ant and ant lion populations are randomly distributed; the ant lions build traps at these random initial locations to prepare for the later capture of ants.

When the escape mechanism is considered, there is a given probability that some ants will escape the trap of an ant lion. As shown in Figure 5, when the EALO iterates for the third (Figure 5(b)), seventh (Figure 5(d)), and ninth (Figure 5(e)) times, some of the ants exit the ant lion's trap and escape to other random locations (labeled "o"). Therefore, the probability that the algorithm falls into a local extremum is effectively reduced, and the overall optimization performance of the algorithm is improved. It can also be seen from Figure 5 that the kernel function parameter σ and the penalty factor C greatly influenced the accuracy of the bearing fault classification. When the EALO algorithm stopped iterating (Figure 5(f)), the ant and ant lion locations did not converge to a single point; rather, they converged to multiple points, which demonstrates that there were multiple feasible solutions. Therefore, any of these ant lions could be selected as the elite ant lion, and its position is regarded as the optimal algorithm solution.

Using bearing fault diagnosis at a speed of 1500 r/min as an example, the convergence curves for EALO, ALO, GA, and PSO are shown in Figure 6. As shown, the EALO method iterates only 13 times, while the ALO, GA, and PSO iterate 100 times. Although the threshold condition was satisfied after about 70 iterations, the ALO method did not stop iterating because it has no adaptive convergence condition. In addition, during iteration, if the ants walk around poor-fitness ant lions, the probability of falling into a local extremum increases; simultaneously, the resources of those ant lion individuals are wasted on searching the neighborhood of the local extremum. Therefore, the optimization performance and the convergence efficiency of the ALO algorithm are partly reduced. The threshold values for the EALO and ALO methods gradually decrease with increasing iteration number.

Using different parameters, the convergence performance of the GA method differed. Inappropriate parameters caused the GA performance to deteriorate; as shown by the GA-5 curve, the threshold value no longer decreased after about 60 iterations. Finally, the PSO optimization performance was also greatly affected by its parameters. The PSO iterative curve oscillated and showed no obvious convergence trend, indicating that the convergence condition proposed here is not suitable for the PSO method. Taken together, these findings show that the EALO method proposed here has the fastest convergence speed.

5.4. Results and Discussion

It has been reported that the binary tree model is more suitable for classifying bearing faults than many other multiclass models [19]. In addition, the radial basis kernel function has better nonlinear mapping ability in high-dimensional space than other kernel functions, making it well suited to bearing fault classification. Therefore, the binary tree support vector machine with the radial basis kernel function was used in these experiments. The EALO, traditional ALO, GA, and PSO methods with different parameters were used to optimize the SVM model parameters, and the four bearing conditions at different speeds were then diagnosed by the optimized SVM models. The diagnostic results are shown in Tables 3–6.

As shown in Table 3, when the rotation speed was 1500 r/min, the EALO method proposed here required only 13 iterations. Moreover, its optimization time was the shortest (0.9065 s), followed by the ALO approach (4.6771 s); the GA approach took the longest (9.6264 s). The same pattern was found when the rotation speeds were 1800 r/min (Table 4), 2100 r/min (Table 5), and 2400 r/min (Table 6). The reason is that the GA method requires a series of operations (e.g., encoding, selection, crossover, mutation, and decoding), resulting in high computational complexity. In contrast, the interaction between ants and ant lions in the EALO does not require complex operations. Additionally, the escape mechanism and the effective adaptive convergence conditions were introduced, which greatly reduced the number of iterations and improved optimization performance. In the PSO, particle velocity and direction are controlled by several parameters, including the acceleration factors C1 and C2 as well as the inertia weight W; these parameters strongly influence PSO convergence, and improper parameters cause the algorithm to fall into a local extremum. Optimal parameters are difficult to obtain in practice, so the particles converged with difficulty, which ultimately reduced the PSO's performance.

Table 7 details the support vector (SV) numbers of the SVM models optimized by the different methods at different speeds. Comparing the complexity of the optimized SVM models, the total average number of SVs of the model optimized by the EALO was 23, the smallest of the four methods; the total average numbers for the ALO, PSO, and GA were 34.75, 45.15, and 55.75, respectively. To some extent, the SV number represents the complexity of the SVM model in the high-dimensional space: the lower the SV number, the higher the linear separability in the high-dimensional space. Taken together, these results show that the kernel function parameter and penalty factor optimized by the improved EALO method give the kernel function greater nonlinear mapping ability.

The average bearing fault recognition rates obtained using the different optimization methods (EALO, ALO, GA, and PSO) at different speeds are shown in Table 8. As shown, the EALO method proposed here achieved the highest average recognition accuracy (99.5%) across the four rotation speeds, followed by the ALO, GA, and PSO methods with accuracies of 98.56%, 96.99%, and 96.84%, respectively. These findings show that the EALO method has better optimization ability than the other three methods and that the optimized parameters were closer to the true optimal solution. Therefore, the improved EALO method can effectively improve the recognition rate of bearing faults.

6. Conclusion

Based on the classical ALO algorithm, the EALO algorithm was proposed by introducing an escape mechanism and adaptive iterative convergence conditions, and it was applied to the diagnosis of bearing faults. Compared with more traditional methods (e.g., ALO, GA, and PSO), the following conclusions can be drawn:
(1) The escape mechanism is effective and reduces the possibility that the classical ALO algorithm falls into a local extremum, improving its global optimization performance.
(2) The proposed adaptive convergence conditions effectively reduce the number of iterations, saving optimization time and improving the optimization performance of the EALO algorithm.
(3) The proposed EALO algorithm is suitable for SVM parameter optimization. Compared with the classical ALO, GA, and PSO approaches, the EALO algorithm had the best performance.
(4) The feature extraction method based on VMD and the kernel function is effective and provides a new reference point for bearing fault diagnosis.

Data Availability

The data used to support the findings of this study are included in the supplementary file “Datasets.zip.”

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was supported by the National Natural Science Foundation of China (Grant Nos. 11702091 and 51575178), the Natural Science Foundation of Hunan Province of China (Grant Nos. 2018JJ3140, 2019JJ50156, and 2018JJ4084), Hunan Provincial Key Research and Development Program (Grant No. 2018GK2044), and Open Funded Projects of Hunan Provincial Key Laboratory of Health Maintenance for Mechanical Equipment (Grant No. 201605).

Supplementary Materials

The supplementary file "Datasets.zip" contains the original bearing fault data used in the experiments. (Supplementary Materials)