Abstract

The balancing machine is a general-purpose piece of equipment for the dynamic balance verification of rotating parts, and whether or not it breaks down determines the accuracy of dynamic balance verification. In order to solve the problem of insufficient fault diagnosis accuracy of the balancing machine, a fault diagnosis method based on an Extreme Learning Machine (ELM) optimized by the Improved Sparrow Search Algorithm (ISSA) is proposed. Firstly, the iterative chaotic map and the Fuch chaotic map are introduced to initialize the population and increase population diversity. Secondly, a dynamic adaptive factor and the Levy flight strategy are introduced into the individual position updates to improve the convergence speed of the model. Finally, the fault feature vector is input to the ISSA-ELM model with the fault type as the output. The experiments show that the fault diagnosis accuracy of ISSA-ELM reaches 99.17%, which is 1.67%, 2.50%, 7.50%, and 17.50% higher than that of SSA-ELM, HHO-ELM, PSO-ELM, and ELM, respectively, further improving the prediction accuracy of the operating state of the balancing machine.

1. Introduction

In the machinery manufacturing industry, unbalanced centrifugal forces and centrifugal moments appear in rotating parts (shafts, gears, etc.). In mild cases, they increase the load on the rotating parts and shorten their service life; in severe cases, they produce fatigue notches in the rotating shafts and their mounting parts, causing fractures and endangering personal safety. It is therefore essential to correct the dynamic balance of rotating parts. The balancing machine is a common piece of equipment for the dynamic balance calibration of rotating parts, and whether or not it fails determines the accuracy of dynamic balance calibration. Therefore, it is very meaningful to carry out fault diagnosis for the balancing machine.

Extreme learning machine (ELM) is a single hidden layer feedforward neural network [1, 2], which is characterized by a simple structure and fast training speed compared with the traditional BP neural network trained by gradient descent [3, 4]. Therefore, ELM is widely used in the field of fault diagnosis. Lim and Ji [5] apply ELM to the fault diagnosis of PV systems. In [6], a new fault diagnosis model for rotating machinery based on a residual network (ResNet) and ELM is established. In [7], the cuckoo search algorithm (CSA) is introduced to optimize the ELM to achieve fault diagnosis of fans. In [8], finite element method (FEM) simulation and ELM are combined to detect gear faults. In [9], a multiscale fractal box dimension method based on complementary ensemble empirical mode decomposition (CEEMD) and ELM is proposed for planetary gear fault diagnosis.

However, the randomly set weights and biases in ELM affect the computational speed and accuracy of the algorithm, so an optimization algorithm needs to be introduced to optimize the weights and biases and improve computational efficiency [10]. With the development of machine learning, a large number of swarm intelligence optimization algorithms have been proposed and applied. Yang et al. [11] use PSO to optimize the parameters of DBN models and use the optimized DBN models to identify faults. Abbas et al. [12] apply the WOA algorithm to the diagnosis of breast cancer. Han et al. [13] propose a power transformer fault diagnosis model based on HHO-optimized KELM. Zhang et al. [14] propose a fault-detection method based on multiscale permutation entropy and SOA-SVM. The sparrow search algorithm (SSA) [15], as a novel optimization algorithm, is characterized by strong optimization ability and fast convergence compared with traditional swarm intelligence optimization algorithms such as particle swarm optimization (PSO), and has been applied in a large number of engineering fields. In [16], a new fault diagnosis method based on the elite opposition sparrow search algorithm (EOSEA) optimized LightGBM is proposed. In [17], a deep belief network (DBN) approach whose parameters are optimized by SSA is proposed to detect gear fault severity. In [18], the signal is decomposed by variational mode decomposition (VMD), and the signal features are extracted by RCMDE and input to a support vector machine (SVM) model optimized by SSA to achieve fault diagnosis of rolling bearings. In [19], the penalty factor and kernel function parameters of the SVM are optimized using SSA, and an SSA-SVM wind turbine fault diagnosis model is constructed. In [20], an ELM arc fault diagnosis model optimized by SSA is developed. However, the SSA algorithm also has the disadvantage of easily falling into local optima. Therefore, this study proposes an improved SSA algorithm (ISSA) and introduces it into the optimization of the weights and biases of ELM to construct an ISSA-ELM fault diagnosis model.

The rest of this study is organized as follows. Section 2 focuses on the principles related to the algorithm. Section 3 introduces the ISSA-ELM model. Section 4 describes the experiments. Finally, conclusions are given in Section 5.

2. Basic Algorithm Principles

For rotating machinery such as the balancing machine, machine learning is often used to construct fault diagnosis models. ELM, as a machine learning model, is applied to fault diagnosis because of its fast training speed and high operating accuracy; however, the weights and biases in the extreme learning machine are randomly generated, which seriously affects its accuracy, so an optimization algorithm needs to be introduced to improve the accuracy of ELM. Moreover, the standard SSA has the disadvantage of easily falling into local optima, so improvement strategies need to be introduced.

2.1. Sparrow Search Algorithm and Improvement
2.1.1. Sparrow Search Algorithm

The SSA algorithm simulates sparrow foraging and anti-predation behaviors by continuously updating individual positions for the purpose of finding optimal values. In the SSA algorithm, all individuals can be divided into discoverers, followers, and vigilantes.

The update equation for the location of the discoverers is as follows:

$$X_{i,j}^{t+1}=\begin{cases}X_{i,j}^{t}\cdot \exp\left(\dfrac{-i}{\alpha \cdot T_{\max}}\right), & R_{2}<ST,\\[4pt] X_{i,j}^{t}+Q\cdot L, & R_{2}\geq ST,\end{cases}\tag{1}$$

where $t$ represents the current iteration number, $X_{i,j}^{t}$ denotes the position of the $i$th sparrow in the $j$th dimension in the $t$th generation, $\alpha\in(0,1]$ is a random number, $T_{\max}$ is the maximum iteration number, $R_{2}\in[0,1]$ denotes the alarm value, $ST\in[0.5,1]$ denotes the safety threshold, $Q$ is a random number obeying a normal distribution, $L$ is a $1\times d$ all-ones matrix, and $d$ denotes the dimensionality. When $R_{2}<ST$, it means there are no predators around the foraging area and the discoverers can search for food extensively; when $R_{2}\geq ST$, it means predators appear and all the discoverers need to fly to the safe area.

The update equation for the location of the followers is as follows:

$$X_{i,j}^{t+1}=\begin{cases}Q\cdot \exp\left(\dfrac{X_{\mathrm{worst}}^{t}-X_{i,j}^{t}}{i^{2}}\right), & i>n/2,\\[4pt] X_{P}^{t+1}+\left|X_{i,j}^{t}-X_{P}^{t+1}\right|\cdot A^{+}\cdot L, & i\leq n/2,\end{cases}\tag{2}$$

where $X_{\mathrm{worst}}^{t}$ denotes the position of the individual with the worst fitness value in the $t$th generation, $X_{P}^{t+1}$ denotes the position of the individual with the best fitness value in the $(t+1)$th generation, $A$ denotes a $1\times d$ matrix with each element randomly predefined as $-1$ or $1$, and $A^{+}=A^{T}\left(AA^{T}\right)^{-1}$. When $i>n/2$, it means that the $i$th follower has low fitness and needs to fly to other regions to forage; when $i\leq n/2$, the follower forages near the optimal individual $X_{P}^{t+1}$.

The update equation for the location of the vigilantes is as follows:

$$X_{i,j}^{t+1}=\begin{cases}X_{\mathrm{best}}^{t}+\beta\cdot\left|X_{i,j}^{t}-X_{\mathrm{best}}^{t}\right|, & f_{i}>f_{g},\\[4pt] X_{i,j}^{t}+K\cdot\left(\dfrac{\left|X_{i,j}^{t}-X_{\mathrm{worst}}^{t}\right|}{\left(f_{i}-f_{w}\right)+\varepsilon}\right), & f_{i}=f_{g},\end{cases}\tag{3}$$

where $X_{\mathrm{best}}^{t}$ denotes the global optimal position in the $t$th generation, $\beta$ is the step control parameter and follows a normal distribution with mean 0 and variance 1, $K\in[-1,1]$ is a random number, $\varepsilon$ is a small constant set to avoid the denominator being 0, $f_{i}$ denotes the fitness value of the current individual, and $f_{g}$ and $f_{w}$ denote the fitness values of the current global best and worst individuals. When $f_{i}>f_{g}$, it means that the individual is at the periphery of the population and needs to adopt anti-predation behavior, constantly changing its position to obtain higher fitness; when $f_{i}=f_{g}$, it means that the individual is at the center of the population and will keep approaching its nearby companions in order to stay away from the danger area.
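To make the three update rules concrete, the following Python sketch performs one SSA iteration on a minimization problem. The proportions of discoverers and vigilantes, the safety threshold, and all variable names are illustrative choices for this sketch, not the settings used later in this study.

```python
import numpy as np

def ssa_step(pop, fitness, T_max, ST=0.6, p_disc=0.2, p_vig=0.1):
    """One iteration of the standard SSA for a minimization problem (illustrative)."""
    n, d = pop.shape
    order = np.argsort(fitness)                    # best individual first
    pop, fitness = pop[order].copy(), fitness[order]
    best, worst = pop[0].copy(), pop[-1].copy()
    n_disc, n_vig = int(p_disc * n), int(p_vig * n)

    # Discoverers, formula (1)
    R2 = np.random.rand()                          # alarm value
    for i in range(n_disc):
        if R2 < ST:
            pop[i] = pop[i] * np.exp(-(i + 1) / (np.random.rand() * T_max))
        else:
            pop[i] = pop[i] + np.random.randn() * np.ones(d)

    # Followers, formula (2); pop[0] stands in for the best discoverer position
    for i in range(n_disc, n):
        if i + 1 > n / 2:
            pop[i] = np.random.randn() * np.exp((worst - pop[i]) / (i + 1) ** 2)
        else:
            A = np.random.choice([-1.0, 1.0], size=d)
            pop[i] = pop[0] + np.abs(pop[i] - pop[0]) * (A / d)   # A/d plays the role of A^+

    # Vigilantes, formula (3): a random subset of the population
    for i in np.random.choice(n, n_vig, replace=False):
        if fitness[i] > fitness[0]:
            pop[i] = best + np.random.randn(d) * np.abs(pop[i] - best)
        else:
            K = np.random.uniform(-1, 1)
            pop[i] = pop[i] + K * np.abs(pop[i] - worst) / (fitness[i] - fitness[-1] + 1e-50)
    return pop
```

In a full optimizer, this step is repeated until the maximum number of iterations is reached, with the fitness values recomputed after every update.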

2.1.2. Improved Population Initialization Method

The distribution of the initial solutions in the solution space largely affects the convergence speed and search accuracy of the algorithm. The SSA algorithm generates its initial population randomly, which limits the diversity of the population. Chaotic sequences have the characteristics of randomness, ergodicity, and regularity [21], which can increase the population diversity and enhance the global search ability of the algorithm. In this study, we introduce the iterative chaotic map [22] and the Fuch chaotic map [23] to improve the population initialization. The expressions of the iterative chaotic map and the Fuch chaotic map are shown in formulae (4) and (5):

$$x_{k+1}=\sin\left(\frac{a\pi}{x_{k}}\right),\tag{4}$$

$$x_{k+1}=\cos\left(\frac{1}{x_{k}^{2}}\right),\tag{5}$$

where $a\in(0,1)$ and $x_{k}\neq 0$.
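The two chaotic sequences can be generated as in the following sketch, which assumes the map forms given in formulae (4) and (5); the control parameter a = 0.7 and the initial value x0 = 0.3 are assumed values, not settings reported in this study.

```python
import numpy as np

def iterative_map(length, a=0.7, x0=0.3):
    """Iterative chaotic map x_{k+1} = sin(a*pi / x_k); values fall in (-1, 1)."""
    seq, x = np.empty(length), x0
    for k in range(length):
        x = np.sin(a * np.pi / x)
        seq[k] = x
    return seq

def fuch_map(length, x0=0.3):
    """Fuch chaotic map x_{k+1} = cos(1 / x_k^2); values fall in (-1, 1)."""
    seq, x = np.empty(length), x0
    for k in range(length):
        x = np.cos(1.0 / x ** 2)
        seq[k] = x
    return seq
```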

The steps of the improved population initialization method are as follows (a sketch of the procedure is given after the list):
(1) Randomly generate the population $X=\{X_{1},X_{2},\ldots,X_{N}\}$, where $N$ is the population size and $d$ represents the dimensionality.
(2) Let the range of values of the global solution be $[lb,ub]$, and generate the iterative population $X_{ite}$ and the Fuch population $X_{Fuch}$ by mapping the chaotic sequences onto this range according to formulae (6) and (7), where $ub$ is the upper bound of the search space, $lb$ is the lower bound of the search space, and the chaotic sequences are generated with formulae (4) and (5), respectively.
(3) Combine the populations $X_{ite}$ and $X_{Fuch}$ into a new population $X_{new}$ of $2N$ individuals, calculate the fitness value of each individual in $X_{new}$, rank the individuals in order from the smallest to the largest fitness value, and take the first $N$ optimal initial solutions as the new initial population of sparrows.
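The following sketch implements steps (1)–(3) under two stated assumptions: the chaotic values in (−1, 1) are rescaled by (c + 1)/2 before being mapped onto [lb, ub] (one plausible reading of formulae (6) and (7)), and only the two chaotic populations enter the fitness-ranked selection. It reuses the chaotic map functions sketched above.

```python
import numpy as np

def chaotic_init(N, d, lb, ub, fitness_fn):
    """Improved initialization: iterative-map and Fuch-map populations,
    of which the N fittest individuals become the initial sparrows."""
    lb = np.full(d, lb, dtype=float)
    ub = np.full(d, ub, dtype=float)

    # Rescaling by (c + 1)/2 maps the chaotic values from (-1, 1) into [0, 1];
    # this is an assumption about how formulae (6) and (7) use the sequences.
    c_ite = iterative_map(N * d).reshape(N, d)
    c_fuch = fuch_map(N * d).reshape(N, d)
    X_ite = lb + (c_ite + 1.0) / 2.0 * (ub - lb)
    X_fuch = lb + (c_fuch + 1.0) / 2.0 * (ub - lb)

    X_new = np.vstack([X_ite, X_fuch])                 # 2N candidate individuals
    fit = np.apply_along_axis(fitness_fn, 1, X_new)
    return X_new[np.argsort(fit)[:N]]                  # keep the N best as the initial population
```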

2.1.3. Dynamic Adaptive Factor

Like other swarm intelligence algorithms, the SSA algorithm suffers from disadvantages such as limited global search capability and a tendency to fall into local optima, which restrict its exploitation ability. Therefore, a dynamic adaptive weight is introduced into the discoverers' position update equation.

The discoverers' position update equation is improved by introducing the dynamic adaptive weight $w$, as shown in formula (8); the weight $w$ is defined by formula (9) and is computed from the current iteration number $t$ and the maximum number of iterations $T_{\max}$.

The variation curve of the dynamic adaptive factor $w$ is shown in Figure 1.

As can be seen from Figure 1, $w$ takes larger values at the beginning of the iterations, so the algorithm has a stronger global search ability, a larger search range, and faster convergence; at the end of the iterations, $w$ takes smaller values, the local search ability is stronger, and the global optimal solution can be located accurately.
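Because formula (9) is conveyed by Figure 1 rather than reproduced here, the sketch below uses a simple exponentially decaying weight purely to illustrate the intended behavior; the functional form and the decay rate 2 are assumptions, not the paper's definition.

```python
import numpy as np

def adaptive_weight(t, T_max):
    """Illustrative monotonically decreasing weight: close to 1 early in the run
    (wide global search) and markedly smaller near the end (fine local search).
    The exponential form is an assumed stand-in for formula (9)."""
    return np.exp(-2.0 * t / T_max)
```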

2.1.4. Levy Flight Strategy

As with other optimization algorithms, the standard SSA may fall into local optimal solutions and thus fail to find the global optimum. In this study, we introduce the Levy flight strategy [24] to update the positions of the vigilantes in the SSA algorithm. The Levy flight strategy is characterized by long sequences of small random steps interrupted by occasional large jumps, which reduces the risk of the improved SSA algorithm becoming trapped in local optima. The improved update equation for the location of the vigilantes is given in formula (10), in which the step size is generated by the Levy flight operator:

$$Levy=\frac{u\cdot\sigma}{|v|^{1/\beta}},\qquad \sigma=\left[\frac{\Gamma(1+\beta)\cdot\sin\left(\pi\beta/2\right)}{\Gamma\left(\left(1+\beta\right)/2\right)\cdot\beta\cdot 2^{\left(\beta-1\right)/2}}\right]^{1/\beta},\tag{11}$$

where $\Gamma(\cdot)$ is the gamma function, $\beta$ is a constant taken as 1.5, and $u$ and $v$ are random numbers in the range $(0,1)$.
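The Levy step of formula (11) is commonly generated with Mantegna's method; the sketch below follows that approach with β = 1.5, drawing u and v from normal distributions, which is a standard implementation convention rather than the paper's exact sampling scheme.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(d, beta=1.5):
    """Levy-distributed step of dimension d via Mantegna's method (beta = 1.5)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, d)
    v = np.random.normal(0.0, 1.0, d)
    return u / np.abs(v) ** (1 / beta)   # mostly small steps with occasional large jumps
```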

2.2. Extreme Learning Machine

ELM is a fast learning algorithm, which obtains the corresponding output weights by randomly initializing the input weights and biases [25, 26]. The ELM network structure is shown in Figure 2. The mathematical expression of ELM can be written as follows:

$$o_{j}=\sum_{i=1}^{l}\beta_{i}\,g\left(\omega_{i}\cdot x_{j}+b_{i}\right),\quad j=1,2,\ldots,N,\tag{12}$$

where $g(\cdot)$ is the activation function, $\omega_{i}$ is the input weight, $\beta_{i}$ is the output weight, $b_{i}$ is the hidden layer bias, $x_{j}$ is the input vector, and $o_{j}$ is the output vector. Let $H$ denote the hidden layer output matrix. Then, formula (13) can be obtained:

$$H\beta=T,\tag{13}$$

where $H$ is the output matrix of the hidden layer, $\beta$ is the output weight matrix, and $T$ is the desired output. The learning process of ELM has the following main steps:
(1) We determine the number of neurons $l$ in the hidden layer and randomly set the connection weights $\omega$ between the input layer and the hidden layer and the biases $b$ of the hidden layer neurons.
(2) We select an infinitely differentiable function as the activation function of the hidden layer neurons and then calculate the hidden layer output matrix $H$.
(3) We calculate the output layer weights $\beta=H^{+}T$, where $H^{+}$ is the Moore–Penrose generalized inverse matrix of $H$.
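To make the three training steps concrete, the following Python sketch implements a minimal ELM with a sigmoid hidden layer, assuming one-hot target vectors for classification; the class and method names are illustrative rather than taken from the paper.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer, least-squares output layer."""

    def __init__(self, n_hidden=100, seed=None):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))     # sigmoid activation

    def fit(self, X, T):
        # Step (1): random input weights and hidden biases
        self.W = self.rng.uniform(-1.0, 1.0, (X.shape[1], self.n_hidden))
        self.b = self.rng.uniform(-1.0, 1.0, self.n_hidden)
        # Steps (2)-(3): hidden output matrix H, then beta = H^+ T
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta                       # class scores
```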

3. ISSA-ELM Modeling

The weights $\omega$ and biases $b$ of ELM are set randomly [27], which seriously affects the accuracy and speed of the algorithm. In this study, the ISSA algorithm is introduced into the process of selecting the ELM weights and biases, and an ISSA-ELM model is proposed. The method uses the ISSA algorithm to optimize the weights and biases of ELM: the weights and biases of ELM are encoded as the population individuals of the ISSA algorithm, and the classification error rate of ELM is used as the fitness function for the global search of the optimal values. The ISSA-ELM fault diagnosis flowchart is shown in Figure 3, and the modeling steps are as follows (a sketch of the fitness function used in steps (2) and (4)–(6) is given after the list):
(1) We set the ELM-related parameters.
(2) We set the ISSA-related parameters as well as the fitness function. The relevant parameters of ISSA include the maximum number of iterations, the proportions of discoverers, followers, and vigilantes, and the population size, which is determined according to the number of weights and biases of ELM. The classification error rate is chosen as the fitness function.
(3) We initialize the population using the improved population initialization method.
(4) We calculate the initial fitness value of each individual in the population and rank the individuals in order from the smallest to the largest fitness value.
(5) We update the positions of the discoverers, followers, and vigilantes according to formulae (8), (2), and (10).
(6) We calculate the fitness value of each individual after the position update and determine whether the termination condition is satisfied or the maximum number of iterations is reached; if so, the optimal weights and biases are output; if not, we return to step (5) and iterate until the termination condition is met or the maximum number of iterations is reached.
(7) The ISSA-ELM model is constructed using the obtained optimal weights and biases.
(8) We evaluate the performance of the ISSA-ELM model.
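Steps (2) and (4)–(6) evaluate each sparrow, i.e., each candidate set of ELM input weights and biases, by the ELM classification error rate. The sketch below shows one plausible encoding of a sparrow as a flattened weight-and-bias vector together with the corresponding fitness function; the layout and names are assumptions, while the dimensions (44 features, 100 hidden nodes, 8 classes) follow the experimental settings in Section 4.

```python
import numpy as np

def make_fitness(X_train, y_train, n_features=44, n_hidden=100, n_classes=8):
    """Build the ISSA fitness function (ELM classification error rate) and the
    sparrow dimensionality for the assumed flattened [W, b] encoding."""
    T = np.eye(n_classes)[y_train - 1]              # integer labels 1..8 -> one-hot targets
    dim = n_features * n_hidden + n_hidden          # length of one sparrow individual

    def fitness(individual):
        W = individual[:n_features * n_hidden].reshape(n_features, n_hidden)
        b = individual[n_features * n_hidden:]
        H = 1.0 / (1.0 + np.exp(-(X_train @ W + b)))   # sigmoid hidden layer
        beta = np.linalg.pinv(H) @ T                    # output weights for this candidate
        pred = np.argmax(H @ beta, axis=1) + 1
        return float(np.mean(pred != y_train))          # classification error rate

    return fitness, dim
```

During the ISSA loop, every position update in step (5) is followed by a call to this fitness function in step (6), and the sparrow with the smallest error rate supplies the final weights and biases for step (7).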

4. Experiments

4.1. Sample Selection and Feature Extraction

In this study, the fault diagnosis experiment is carried out using a Dewesoft data acquisition system and an HY40WUB hard-support balancing machine. As shown in Figure 4, single-phase vibration sensors are installed in the X-axis and Y-axis directions of the support frame of the HY40WUB hard-support balancing machine, and the vibration signals are collected through the Dewesoft acquisition instrument at a sampling frequency of 2560 Hz. Fault signals are acquired separately under eight operating conditions of the balancing machine.

We extract the fault feature quantities that can reflect the working state of the balancing machine from the x-axis and y-axis signals, as shown in Table 1. To remove the influence of different magnitudes, the fault data are normalized according to formula (14):

$$x^{*}=\frac{x-x_{\min}}{x_{\max}-x_{\min}},\tag{14}$$

where $x^{*}$ is the normalized data, $x$ is the original data, $x_{\max}$ is the maximum value in the original data set, and $x_{\min}$ is the minimum value in the original data set. The feature vectors are thus obtained, and each feature vector is 44-dimensional. There are 50 groups of sample data for each class, totaling 400 groups of fault data, of which 280 groups are used as the training set and 120 groups are used as the test set. The fault types of the balancing machine are numbered 1–8: normal, belt deterioration, universal coupling wear, universal coupling rusting, journal abrasion, roller outer diameter wear, roller outer diameter breakage, and roller rusting. The distribution of the sample sizes of the training set and the test set is shown in Table 2.
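A minimal sketch of this preprocessing is given below, assuming the data are arranged as a 400 × 44 feature matrix with integer labels 1–8; taking the first 35 groups of each class for training and the remaining 15 for testing is one way to obtain the 280/120 partition and is an assumption, since the paper does not state the partitioning rule.

```python
import numpy as np

def min_max_normalize(X):
    """Formula (14): x* = (x - x_min) / (x_max - x_min), applied feature-wise."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min)

def split_per_class(X, y, n_train_per_class=35):
    """Per-class split giving 8 x 35 = 280 training and 8 x 15 = 120 test samples."""
    train_idx, test_idx = [], []
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        train_idx.extend(idx[:n_train_per_class])
        test_idx.extend(idx[n_train_per_class:])
    return X[train_idx], y[train_idx], X[test_idx], y[test_idx]
```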

In Table 1, $N$ is the number of sampling points, $x_{i}$ is the $i$th element of any sample in the vibration data set, $f(k)$ is the instantaneous frequency value at the $k$th point, and $s(k)$ is the instantaneous Fourier spectrum value at the $k$th point.
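The exact feature quantities are defined in Table 1 and are not reproduced here; the sketch below computes a few representative time-domain and frequency-domain statistics of one vibration channel purely as stand-ins, with only the 2560 Hz sampling rate taken from the acquisition setup.

```python
import numpy as np

def sample_features(x, fs=2560):
    """Illustrative statistics of one vibration channel (not the Table 1 feature set)."""
    x = np.asarray(x, dtype=float)
    s = np.abs(np.fft.rfft(x)) / len(x)                 # Fourier spectrum values s(k)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)             # frequency values f(k)
    rms = np.sqrt(np.mean(x ** 2))
    return np.array([
        np.mean(x),                                      # mean value
        rms,                                             # root mean square
        np.max(np.abs(x)),                               # peak value
        np.max(np.abs(x)) / rms,                         # crest factor
        np.mean(((x - np.mean(x)) / np.std(x)) ** 4),    # kurtosis
        np.sum(f * s) / np.sum(s),                       # frequency centroid
    ])
```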

4.2. Define the ELM Network Structure

According to the dimensions of the input and output quantities, the number of input nodes of ELM is set to 44 and the number of output nodes to 1. The number of hidden layer nodes affects the classification accuracy of ELM, so this study calculates the training accuracy of ELM for hidden layer node counts from 1 to 200; the relationship between the number of hidden layer nodes and the training accuracy is shown in Figure 5.

From Figure 5, it can be seen that the training accuracy with the "sigmoid" activation function is higher than that with "sin" and "hardlim". Therefore, the "sigmoid" function is chosen as the activation function in this study. When the number of hidden layer nodes is between 90 and 140, the training accuracy is higher than in other ranges. The number of hidden layer nodes is therefore set to 90, 100, 110, 120, 130, and 140 in turn, 10 tests are conducted for each setting, and the results are shown in Figure 6.

It can be seen from Figure 6 that when the number of ELM hidden layer nodes is 100, the accuracy is higher than with the other node counts. Therefore, the number of hidden layer nodes in this study is set to 100.
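This selection procedure can be reproduced with the ELM sketch from Section 2.2: for each candidate node count, the network is retrained several times with fresh random weights and the training accuracies are averaged. The function below is an illustrative implementation of that sweep; it assumes one-hot targets T_train alongside the integer labels y_train.

```python
import numpy as np

def select_hidden_nodes(X_train, T_train, y_train,
                        candidates=(90, 100, 110, 120, 130, 140), repeats=10):
    """Mean training accuracy over repeated random initializations per node count."""
    results = {}
    for n_hidden in candidates:
        acc = []
        for _ in range(repeats):
            model = ELM(n_hidden=n_hidden).fit(X_train, T_train)
            pred = np.argmax(model.predict(X_train), axis=1) + 1
            acc.append(np.mean(pred == y_train))
        results[n_hidden] = float(np.mean(acc))
    return results      # pick the node count with the highest mean accuracy
```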

4.3. Analysis of Experimental Results
4.3.1. Performance Comparison of Optimization Algorithms

The fitness curves of ISSA, SSA, HHO, and PSO when optimizing the ELM parameters are shown in Figure 7. The parameters of the four algorithms are set as follows: for ISSA, the maximum number of iterations is 50, the population size is 20, the proportion of discoverers is 0.2, the proportion of followers is 0.8, the proportion of vigilantes is 0.1, and the alarm value is set to 0.6; for SSA, the maximum number of iterations is 50, the population size is 20, the proportion of discoverers is 0.2, the proportion of followers is 0.8, the proportion of vigilantes is 0.1, and the alarm value is set to 0.6; for HHO, the maximum number of iterations is 50 and the population size is 20; for PSO, the maximum number of iterations is 50, the particle population size is 20, the inertia factor is set to 0.9, and the acceleration constants c1 and c2 are both 2.

From the fitness curves in Figure 7, it can be seen that the fitness values of all four algorithms decrease and converge as the number of iterations increases, so that the best weights and biases are obtained. Sorted from lowest to highest, the initial fitness values are ISSA, SSA, HHO, and PSO, so ISSA has the lowest initial fitness value. ISSA converges at about the 8th generation, SSA at about the 21st generation, HHO at about the 26th generation, and PSO at about the 29th generation; ISSA finds the optimal solution in the shortest time and converges the fastest. The final converged fitness values of the four algorithms, from smallest to largest, are ISSA, SSA, HHO, and PSO, so ISSA has the smallest final fitness value. Therefore, ISSA provides the most effective optimization of ELM.

4.3.2. Comparison of Fault Diagnosis Models

The fault characteristic quantities are input to the ISSA-ELM, SSA-ELM, HHO-ELM, PSO-ELM, and ELM diagnostic models, and the diagnostic time and accuracy are shown in Table 3; the diagnostic results are shown in Figure 8.

In Figure 8, the blue circles represent the actual fault types, the red diamonds represent the fault categories predicted by the different models, the horizontal coordinates are the test sample numbers, and the vertical coordinates are the fault category labels. From Table 3 and Figure 8, it can be seen that ISSA-ELM correctly predicts 119 of the balancing machine faults, giving a fault diagnosis accuracy of 99.17%, the highest among the five diagnostic models. The results show that the fault diagnosis model established by using the ISSA algorithm to optimize the weights and biases of ELM can effectively improve the fault diagnosis accuracy of the balancing machine.
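For reference, the 99.17% figure follows directly from 119 correct predictions out of the 120 test samples, as the short sketch below illustrates.

```python
import numpy as np

def diagnosis_accuracy(y_pred, y_true):
    """Share of test samples whose predicted fault label matches the true label."""
    return float(np.mean(np.asarray(y_pred) == np.asarray(y_true)))

# ISSA-ELM: 119 of 120 test samples correct -> 119 / 120 = 0.99166..., i.e., 99.17%.
```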

5. Conclusion

To address the problem that the fault diagnosis accuracy of the balancing machine needs to be improved, this study proposes a fault diagnosis method based on ISSA-ELM and compares it with SSA-ELM, HHO-ELM, PSO-ELM, and ELM. The following conclusions can be drawn:
(1) Introducing the iterative chaotic map and the Fuch chaotic map to improve the population initialization method increases the population diversity and enhances the algorithm's global optimization ability.
(2) Using the dynamic adaptive factor and the Levy flight strategy to improve the position update formulae of the SSA algorithm effectively alleviates the tendency of the SSA algorithm to fall into local optima.
(3) The ISSA algorithm is used to find the optimal weights and biases of ELM. The results show that the diagnostic accuracy of ISSA-ELM reaches 99.17%, higher than that of SSA-ELM, HHO-ELM, PSO-ELM, and ELM, which markedly improves the diagnostic performance.

In fault diagnosis, multisource information fusion can take multiple signals into account, which can further improve the accuracy of fault diagnosis; therefore, multisource information fusion for the balancing machine will be further investigated in future work.

Data Availability

The data used to support the findings of this study can be obtained from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Key Research and Development Program of China, Network Collaborative Manufacturing and Intelligent Factory Special Project (Grant nos. 2020YFB1712600 and 2020YFB1712602), and the National Defense Basic Scientific Research Funding Project (Grant no. JCKY2021414B011).