#### Abstract

To realize unsupervised extraction and identification of fault features in power electronic circuits, we propose a fault diagnosis method based on the sparse autoencoder (SAE) and the broad learning system (BLS). First, fault features are extracted by the sparse autoencoder, and the fault samples and feature vectors are combined as the input of the broad learning system. The broad learning system is trained with an error-precision-based step-size update method and is then used for fault type identification. Simulation results for thyristor fault diagnosis of a three-phase bridge rectifier circuit show that the method is effective and performs better than traditional methods.

#### 1. Introduction

In recent years, power electronic converters have been widely used in new energy vehicles, industrial robots, high-voltage direct current transmission, high-power electrolysis, and motor drive systems [1–4]. However, because they are exposed to harsh working environments, rectifiers are prone to critical failures caused by device aging, overloading, unexpected operating conditions, etc. It has been reported that about 38% of the faults in power electronic systems are due to failures of power electronic switches. Switch failures mainly include open circuits and short circuits. Because of the relay protection devices in the circuit, most switch failures manifest as open-circuit failures. Although an open-circuit fault will not immediately damage the system, it gradually degrades the performance of the rectifier, and if not handled in time, it can cause serious damage to other components, even the entire power system. Although there are various methods to improve the stability of power electronic converters, faults are always unavoidable. Therefore, research on fault diagnosis of power electronic converter switches plays a vital role in improving system stability and ensuring safe and efficient operation [5–7], and it has become a hot spot in power electronics research in recent years [8–12].

Fault diagnosis methods are usually classified into model-based and data-driven methods. The methods in [13–15] use the analytical model approach, establishing a fault model for the circuit and analyzing the detailed fault equations. However, this approach is sensitive to parameters and highly susceptible to external interference, so the fault mathematical model is difficult to establish [16, 17]. Compared with model-based methods, machine learning methods are efficient and rely less on circuit models, requiring only historical data [18]. Khomfoi and Tolbert [19] proposed a multilevel inverter drive (MLID) fault diagnosis method based on artificial intelligence (AI) technology, in which a neural network fault classifier is trained on phase voltages after principal component analysis (PCA). Martins et al. [20] proposed a stator fault algorithm for three-phase induction motors based on unsupervised neural networks, which requires little mathematical modeling of the motor. Huang et al. [21] proposed a fault diagnosis algorithm based on multistate data processing and segmented fluctuation analysis, which can realize multiple open-circuit fault diagnosis of photovoltaic inverters. In [22], a multiscale adaptive fault diagnosis (MAFD) method based on signal symmetric reconstruction preprocessing (SSRP) was proposed, in which an artificial neural network (ANN) detects the type and location of switching faults. Since most networks suffer from time-consuming training processes and complex structures, many studies require high-performance computing and powerful facilities. Recently, Chen and Liu developed a very fast and effective broad learning system (BLS) [23]. Without a stacked-layer structure, the designed neural network broadly extends its neural nodes and incrementally updates the network weights when additional nodes are needed and as input data continuously enters the network.
Therefore, the BLS structure is well suited for modeling and learning in time-varying big-data environments [24, 25], and in this paper we use BLS as the training model for fault classification of power electronic converters to improve the efficiency of the overall diagnosis model. Nowadays, as systems grow more complex, not only does the amount of data increase, but the data dimension also increases dramatically. Using high-dimensional raw data directly not only increases the system run time but also reduces efficiency. The main solution to the problem of high-dimensional data is feature extraction. Jin et al. [26] proposed using the wavelet transform to extract fault feature vectors. The fault diagnosis methods in [27, 28] rely too heavily on features that require a priori knowledge, whereas the sparse autoencoder is an unsupervised deep learning method [29, 30]. Using sparse autoencoders for fault feature extraction makes it easy to reduce the input dimension and improve fault diagnosis efficiency. Therefore, in this paper, a learning algorithm combining the BLS and the SAE is applied to the fault diagnosis of the three-phase rectifier circuit. In addition, since the step-size parameters of most algorithms are fixed, we propose a new adaptive step-size update method based on error precision, which effectively overcomes the slow convergence caused by a fixed step size. Compared with recent fault diagnosis work on power electronic converters, the principal contributions are twofold: first, we use BLS as the training model for fault classification of power electronic converters to improve the efficiency of the overall diagnosis model; second, based on the BLS fault classification model, we propose a new step-size update strategy to improve the training speed of the model.

The second section of this paper introduces the implementation steps of the SAE-BLS fault diagnosis method in three parts: the first describes how fault features are extracted by the SAE, the second describes how the BLS network model is established, and the third describes the improved BLS network training method. The third section verifies the effectiveness of the proposed method through simulation experiments, and the fourth section concludes the paper.

#### 2. Fault Diagnosis Method Based on SAE-BLS

##### 2.1. Extracting Fault Features from SAE

Sparse autoencoding means that the hidden layer feature has a sparse response characteristic (the dimension of the hidden layer feature is generally smaller than the input), and its structure is shown in Figure 1.

Usually, the introduction of sparsity makes the signal clearer and easier to compute. Taking into account the relationship between the dimensions of the hidden layer and the input layer, the sparsity constraint is introduced using the KL distance. The intermediate layer feature output **Z** is obtained after the sigmoid activation transformation of the input, as shown by equation (2):

$$\mathbf{Z} = \operatorname{sigmoid}(\mathbf{W}_{1}\mathbf{x} + \mathbf{b}_{1}), \quad (2)$$

where **W**_{1} and **b**_{1}, respectively, represent the weight and bias between the input layer and the intermediate layer. Similarly, the actual output **y** can be obtained by (3):

$$\mathbf{y} = \operatorname{sigmoid}(\mathbf{W}_{2}\mathbf{Z} + \mathbf{b}_{2}), \quad (3)$$

where **W**_{2} and **b**_{2} represent the weight and bias of the reconstruction layer. The average output of the *j*th node in the middle layer over the *N* training samples is

$$\hat{\rho}_{j} = \frac{1}{N}\sum_{n=1}^{N} z_{j}(\mathbf{x}_{n}).$$

$\hat{\rho}_{j}$ represents the average activation degree of the *j*th unit in the middle layer. It is expected that the average output value of each node in the middle layer is as close to zero as possible. In order to quantify the characteristics of the middle layer, it is usually assumed that each node in the middle layer responds with a certain probability, that the nodes are independent of each other, and that the expected response value *ρ* (e.g., *ρ* = 0.05) of each node is given in advance. Then, the KL distance is used to construct a sparse regularization term:

$$\operatorname{KL}(\rho \,\|\, \hat{\rho}_{j}) = \rho \ln\frac{\rho}{\hat{\rho}_{j}} + (1-\rho)\ln\frac{1-\rho}{1-\hat{\rho}_{j}},$$

where the KL distance represents the difference between the average activation value and the expected value. The closer *ρ* is to zero, the smaller the average activation degree of the intermediate layer. Adding the sparse regularization term gives the SAE network optimization objective function *E*:

$$E = \frac{1}{N}\sum_{n=1}^{N}\left\|\mathbf{y}_{n} - \mathbf{r}_{n}\right\|^{2} + \beta\sum_{j}\operatorname{KL}(\rho \,\|\, \hat{\rho}_{j}),$$

where **y**_{n} and **r**_{n} represent the actual output and the expected output, respectively. The closer the average activation degree is to the expected value, the closer the error function *E* is to its convergence value. *β* is a parameter added to the original error function to control the weight of the sparsity term. During training, the parameters must be adjusted continually so that *E* reaches a minimum value. Like ordinary neural networks, the SAE is trained with a backpropagation algorithm that adjusts the weights. The fault feature **Z** obtained by the SAE will form part of the BLS input.
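As a concrete illustration, the hidden-feature computation and the sparsity-penalized objective described above can be sketched in Python (NumPy). The dimensions, the target sparsity *ρ* = 0.05, and the weight *β* below are illustrative choices, not the paper's tuned values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sae_forward(X, W1, b1, W2, b2):
    """Forward pass: hidden features Z and reconstruction y."""
    Z = sigmoid(X @ W1 + b1)      # intermediate (feature) layer
    y = sigmoid(Z @ W2 + b2)      # reconstruction layer
    return Z, y

def kl_divergence(rho, rho_hat):
    """KL distance between the target sparsity rho and the mean activation."""
    return (rho * np.log(rho / rho_hat)
            + (1.0 - rho) * np.log((1.0 - rho) / (1.0 - rho_hat)))

def sae_objective(X, W1, b1, W2, b2, rho=0.05, beta=3.0):
    """Reconstruction error plus beta-weighted KL sparsity penalty."""
    Z, y = sae_forward(X, W1, b1, W2, b2)
    recon = np.mean(np.sum((y - X) ** 2, axis=1))
    rho_hat = np.clip(Z.mean(axis=0), 1e-6, 1 - 1e-6)  # avoid log(0)
    return recon + beta * np.sum(kl_divergence(rho, rho_hat))

rng = np.random.default_rng(0)
X = rng.random((100, 8))                          # toy fault samples
W1 = 0.1 * rng.standard_normal((8, 4)); b1 = np.zeros(4)
W2 = 0.1 * rng.standard_normal((4, 8)); b2 = np.zeros(8)
E = sae_objective(X, W1, b1, W2, b2)              # objective to minimize
```

In a full implementation, `E` would be minimized by backpropagation over `W1, b1, W2, b2`, and the hidden features `Z` would then be passed on as part of the BLS input.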

##### 2.2. Broad Learning System

Assume that we present the input data **X** and project the data, using $\phi_{i}(\mathbf{X}\mathbf{W}_{ei} + \boldsymbol{\beta}_{ei})$, to obtain the *i*th group of mapped features **Z**_{i}, where **W**_{ei} is a random weight matrix with the proper dimensions. Similarly, the *j*th group of enhancement nodes, $\xi_{j}(\mathbf{Z}^{n}\mathbf{W}_{hj} + \boldsymbol{\beta}_{hj})$, is denoted as **H**_{j}. The mapping functions $\phi_{i}$ and $\xi_{j}$ can be different. The structure is illustrated in Figure 2.
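The node construction just described can be sketched in Python (NumPy); here the enhancement nodes act on the concatenated feature nodes, as in the standard BLS formulation. The number of groups, the node widths, and the choice of sigmoid/tanh mappings are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bls_nodes(X, We_list, be_list, Wh_list, bh_list):
    """Build mapped feature nodes Z^n and enhancement nodes H^m."""
    # Mapped features: Z_i = phi(X W_ei + beta_ei)
    Z = [sigmoid(X @ We + be) for We, be in zip(We_list, be_list)]
    Zn = np.hstack(Z)                              # Z^n = [Z_1, ..., Z_n]
    # Enhancement nodes: H_j = xi(Z^n W_hj + beta_hj)
    H = [np.tanh(Zn @ Wh + bh) for Wh, bh in zip(Wh_list, bh_list)]
    Hm = np.hstack(H)                              # H^m = [H_1, ..., H_m]
    return np.hstack([Zn, Hm])                     # A = [Z^n | H^m]

rng = np.random.default_rng(1)
X = rng.random((50, 6))                  # 50 samples, 6 raw inputs
We_list = [0.1 * rng.standard_normal((6, 5)) for _ in range(3)]   # n = 3 groups
be_list = [rng.standard_normal(5) for _ in range(3)]
Wh_list = [0.1 * rng.standard_normal((15, 4)) for _ in range(2)]  # m = 2 groups
bh_list = [rng.standard_normal(4) for _ in range(2)]
A = bls_nodes(X, We_list, be_list, Wh_list, bh_list)  # shape (50, 15 + 8)
```

The concatenated matrix `A` is what the output weight **W**^{n+m} multiplies to produce the BLS output.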

In BLS, **W**_{ei} can be adjusted by the SAE. So, the *n* groups of mapped features can be denoted as

$$\mathbf{Z}_{i} = \phi_{i}(\mathbf{X}\mathbf{W}_{ei} + \boldsymbol{\beta}_{ei}), \quad i = 1, \ldots, n.$$

The feature nodes are concatenated as **Z**^{n} = [**Z**_{1},…,**Z**_{n}]. The enhancement nodes, where **W**_{hj} and *β*_{hj} are random weights, are denoted as

$$\mathbf{H}_{j} = \xi_{j}(\mathbf{Z}^{n}\mathbf{W}_{hj} + \boldsymbol{\beta}_{hj}), \quad j = 1, \ldots, m,$$

and concatenated as **H**^{m} = [**H**_{1},…,**H**_{m}].

Therefore, the output of the BLS can be denoted as

$$\mathbf{Y} = \left[\mathbf{Z}^{n} \,|\, \mathbf{H}^{m}\right]\mathbf{W}^{n+m},$$

where **W**^{n+m} represents the output layer weight. Since **W**_{hj} and *β*_{hj} are randomly generated, **H**^{m} and **Z**^{n} do not establish a one-to-one correspondence. As shown in Figure 3, we simplify the BLS model because we only need to observe the output voltage of the three-phase full-bridge circuit to determine the fault type.

The output of the simplified network in Figure 3 can be written as

$$\boldsymbol{\sigma} = \phi\left(\left[\mathbf{Z}^{n} \,|\, \mathbf{H}^{m}\right]\mathbf{W}^{n+m}\right),$$

where *ϕ* represents the neural network activation function and **W**^{n+m} represents the output layer weight. In order to reduce overfitting during network training, this paper uses the L2 norm as a penalty term in the loss function *E*(**W**):

$$E(\mathbf{W}) = \sum_{q}\left(\sigma_{q} - \hat{\sigma}_{q}\right)^{2} + \alpha\left\|\mathbf{W}^{n+m}\right\|_{F}^{2},$$

where *σ* and $\hat{\sigma}$ represent the actual output and expected output of the broad neural network in Figure 3, *q* represents the dimension of the output, $\alpha\|\mathbf{W}^{n+m}\|_{F}^{2}$ is the L2 regularization term, and *α* is a user-specified parameter. $\|\cdot\|_{F}$ denotes the Frobenius norm, calculated as

$$\left\|\mathbf{W}\right\|_{F} = \sqrt{\sum_{i}\sum_{j} w_{ij}^{2}}.$$

The weights of the broad neural network are also updated by a gradient algorithm. By differentiating the error function to obtain the gradient at a given point, the adjustment value can be increased when the error is large. The weight adjustment value can be expressed by (14), and the update equation of **W**^{n+m} by (15):

$$\Delta\mathbf{W}^{n+m} = -J\,\frac{\partial E(\mathbf{W})}{\partial \mathbf{W}^{n+m}}, \quad (14)$$

$$\mathbf{W}^{n+m}(k+1) = \mathbf{W}^{n+m}(k) + \Delta\mathbf{W}^{n+m}, \quad (15)$$

where *J* represents the step-size parameter used to adjust the weight according to the degree of error, and $\Delta\mathbf{W}^{n+m}$ represents the weight update value.
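As a concrete sketch of this training step, the following Python (NumPy) fragment evaluates the L2-penalized loss and applies one gradient update to the output weights. The matrix sizes, the regularization weight `alpha`, and the fixed step size `J` are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def loss_and_grad(A, W, R, alpha=1e-2):
    """L2-regularized squared error E(W) and its gradient w.r.t. W."""
    Y = A @ W                                        # broad-network output
    err = Y - R                                      # actual minus expected
    E = np.sum(err ** 2) + alpha * np.sum(W ** 2)    # loss + Frobenius penalty
    grad = 2.0 * A.T @ err + 2.0 * alpha * W         # dE/dW
    return E, grad

rng = np.random.default_rng(2)
A = rng.random((50, 23))           # [Z^n | H^m] node outputs, 50 samples
R = rng.random((50, 3))            # expected outputs (e.g., fault codes)
W = 0.01 * rng.standard_normal((23, 3))

J = 1e-4                           # fixed step size, for illustration only
E0, g = loss_and_grad(A, W, R)
W = W - J * g                      # delta_W = -J * dE/dW  (equations (14)-(15))
E1, _ = loss_and_grad(A, W, R)     # loss after one update step
```

With a sufficiently small step size, each update of this form strictly decreases the convex quadratic loss, which is what the adaptive step-size strategy of the next subsection aims to accelerate.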

##### 2.3. Step-Size Parameter Update Strategy

In most neural network algorithms, the step size is a fixed constant that does not depend on the error (e.g., *J* = 0.1). Because the fixed-step method does not depend on the separation error, and due to the particularity of the samples, convergence may be slow or may even fail.

Figure 4 shows the primary problems of using a fixed step size. If outside noise or other environmental changes make the error curve steeper than the previously estimated curve, the update amount of **W** should be reduced; with a fixed step size, however, the update amount would be increased. Thus, when the separation error is high, the step size should be set to a larger value, and conversely, when the separation error is small, the step size should be set to a smaller value. Convergence can be accelerated by making the step size proportional to the gradient *E*′(**W**) of *E*(**W**), as shown by the following equation:

$$J = \gamma\left|E'(\mathbf{W})\right|,$$

where *γ* is a proportionality coefficient.


With this adaptive step-size method, the step size is large when the separation error is high and small when the error is small. However, when the number of data samples is small, the error gradient may vanish during training, that is,

$$\frac{\partial E(\mathbf{W})}{\partial w_{q,\,n+m}} \rightarrow 0,$$

where $w_{q,\,n+m}$ represents the element in the *q*th row and (*n* + *m*)th column of **W**^{n+m}. As the gradient vanishes, training stops early even though the actual accuracy has not reached the ideal value. Therefore, in the actual calculation process, a gradient compensation value should be added artificially so that the error gradient does not fall to zero. The new step size is calculated as

$$J = \gamma\left(\left|E'(\mathbf{W})\right| + c\right),$$

where *c* is the compensation value.

In theory, this adaptive step-size update method can speed up the convergence of network training. However, suppose that as the gradient approaches a critical value, the error tends to a constant *ε*. At this point, the step size depends only on the parameters *γ* and *ε*.

When the gradient tends to a critical value, the step size becomes a constant that has no relationship with the separation error, so the separation error is not reduced and may even increase. To solve this problem, we propose that when the error reaches the reference value *η*, the step size is automatically updated to a constant *l* to avoid any increase of the error, as shown by the following equation:

$$J_{s} = \begin{cases} J, & E > \eta, \\ l, & E \leq \eta. \end{cases}$$

Since *η* is generally larger than the error convergence constant *ε*, a new step-size update strategy is needed to further accelerate the error convergence rate. When the error has not yet reached the reference value, the step size can be updated according to the following equation:

Figure 5 shows that the training error decreases as the number of iterations increases and the gradient also decreases, but the ratio remains greater than 1. It can be seen from point A that with the *J*_{s} step size, the error is reduced to *η* faster than with the fixed step size.

The new step update strategy not only speeds up the convergence of the algorithm but also avoids divergence of the error curve caused by an excessive step size. In summary, the flowchart of the SAE-BLS fault diagnosis method is shown in Figure 6.
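The step-size policy described in this section — a step proportional to the gradient magnitude, with a small compensation term so it never vanishes, switching to a fixed constant once the error reaches the reference value — can be sketched on a toy one-dimensional problem. The constants `gamma`, `c`, `eta`, and `l` below, and the quadratic test function, are illustrative assumptions, not the paper's tuned values:

```python
def adaptive_step(grad_norm, err, gamma=0.05, c=1e-3, eta=0.1, l=1e-3):
    """Error-precision step size: proportional to the gradient magnitude
    (plus a compensation value c so the step never vanishes), then fixed
    at a small constant l once the error reaches the reference value eta."""
    if err <= eta:
        return l
    return gamma * (grad_norm + c)

# Minimal 1-D illustration on E(w) = (w - 2)^2.
w, history = 10.0, []
for _ in range(200):
    err = (w - 2.0) ** 2
    grad = 2.0 * (w - 2.0)
    w -= adaptive_step(abs(grad), err) * grad
    history.append(err)
```

Early on, the large error yields a large step and the error drops quickly; once the error falls below `eta`, the constant step `l` takes over and prevents the oscillation that a large step would cause near the minimum.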

#### 3. Experimental Results and Analysis

This paper uses the MATLAB platform to carry out simulation experiments. A total of 22 open-circuit fault types were considered, covering a single switch, two switches on the same bridge arm, and two switches on different bridge arms. For ease of analysis, the fault types are coded as shown in Table 1.

In order to verify the effectiveness of the adaptive step-size update strategy based on error precision, this paper compares it with the fixed step-size algorithm, and the result is shown in Figure 7.


It can be seen from Figure 7(a) that, with BP classification training, the error under step size *J*_{s} reaches point A (the error reference value *η*) faster than with the fixed step size. Figure 7(b) shows that, with BLS classification, the convergence error is smaller when the new step update strategy is used. The parameters are shown in Table 2.

In order to verify the effectiveness of the SAE-BLS fault diagnosis method, we apply it to fault type identification of the three-phase rectifier and compare it with classification methods proposed in other papers. The experimental results are shown in Table 3.

Table 3 shows that the method's test accuracy is above 90% when the signal-to-noise ratio is greater than 30 dB, and the test accuracy reaches 100% at 35 dB, which is higher than the classification accuracy of the other methods.

#### 4. Conclusion

In this paper, in order to realize unsupervised fault diagnosis of power electronic converters, the SAE is used to extract the characteristics of the fault signals generated by the switches. An adaptive step-size update method based on error precision is used to optimize the BLS classifier, which is combined with the SAE feature extraction method to diagnose faults in power electronic converters. The simulation experiment on switch fault diagnosis of a three-phase bridge rectifier circuit shows the feasibility of this method, and it achieves higher precision than other traditional methods. It is worth mentioning that this method can also be extended to the fault diagnosis of other types of power electronic circuits.

#### Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

#### Conflicts of Interest

The authors declare that they have no conflicts of interest.

#### Acknowledgments

This study was funded by the National Natural Science Foundation of China under grant no. 51879118, the Fujian Province Office of Science and Technology Support for Army under grant no. B19101, the Excellent Talent Support Program for Fujian Higher School in the New Century under grant no. B17159, the Scientific Research Foundation of Key Laboratory of Fishery Equipment and Engineering, Ministry of Agriculture of the People’s Republic of China under grant nos. 2018001 and 2016002, the Scientific Research Foundation of Artificial Intelligence Key Laboratory of Sichuan Province under grant no. 2017RYJ02, and the Scientific Research Foundation of Jiangsu Key Laboratory of Power Transmission & Distribution Equipment Technology under grant no. 2017JSSPD01.