Abstract

This paper presents a novel fault diagnosis method for analog circuits using ensemble empirical mode decomposition (EEMD), relative entropy, and extreme learning machine (ELM). First, the nominal and faulty response waveforms of a circuit are measured and decomposed into intrinsic mode functions (IMFs) with the EEMD method. Second, by comparing the nominal IMFs with the faulty IMFs, the kurtosis and relative entropy are calculated for each IMF. Next, a feature vector is obtained for each faulty circuit. Finally, an ELM classifier is trained with these feature vectors for fault diagnosis. Validation on two benchmark circuits shows that the proposed method is applicable to analog fault diagnosis with acceptable accuracy and time cost.

1. Introduction

Numerous studies have indicated that analog circuit fault diagnosis is fundamental to design validation and performance evaluation in the integrated circuit manufacturing field [1–3]. In contrast to the well-developed diagnostic methods for digital circuits, diagnosis of analog circuits remains an extremely difficult problem and an active research area for the following reasons: (i) there is a lack of reliable and practical fault modeling methods for analog circuits because of the complexity and variability of analog circuit structures; (ii) the parameter values of analog components are continuous; (iii) the impact of tolerances and nonlinear behavior cannot be ignored; (iv) in actual analog circuits, the number of accessible test points is limited.

The procedure of fault diagnosis for analog circuits can generally be divided into four stages: data acquisition, feature extraction, fault detection, and fault identification and isolation. As one of the foremost stages, feature extraction is closely related to the efficiency of fault diagnosis. Many feature extraction methods have been proposed, such as the correlation function technique [4], the information entropy approach [5], the fast Fourier transform technique [6], and the wavelet transform technique [7]. Zhang et al. [8] directly used the output voltage as features for fault diagnosis of analog circuits without any preprocessing, and the resulting diagnostic accuracy is not very good. M. Aminian and F. Aminian proposed a diagnostic method for analog circuits using wavelet decomposition coefficients, principal component analysis (PCA), and data normalization to construct fault feature vectors and then trained and tested neural network classifiers [3]; this method achieves higher diagnostic accuracy. In [9], Long et al. adopted conventional time-domain feature vectors to train and test least squares support vector machines (LS-SVM) for fault diagnosis of analog circuits, which gives better accuracy than traditional wavelet feature vectors. Information entropy techniques are more sensitive to parameter variations of components in circuits under test (CUTs); therefore, information entropy is widely combined with other techniques for fault diagnosis [5, 10–12]. Xie et al. diagnosed soft faults of analog circuits using Rényi's entropy with effective results [5]. In [11], the authors developed a fault diagnosis approach using the kurtosis and entropy of sampled signals as feature vectors to train a neural network classifier.

However, several problems in feature extraction still need to be considered and solved. Firstly, the selection of features for training classifiers must be considered, because different features combined with different classifiers yield different diagnostic results. Secondly, most of the aforementioned methods were validated with only a few discrete simulation data; that is, they only considered a CUT to be faulty when a component value is higher or lower than its nominal value by 50%, which implies low fault coverage. Thirdly, some methods should take the influence of tolerances and the continuity of faulty parameter values into account.

In our work, therefore, we use the techniques of EEMD, kurtosis, and relative entropy to construct new feature vectors and train an ELM classifier, in order to improve diagnosability and reduce time cost. As an adaptive time–frequency data analysis method, ensemble empirical mode decomposition (EEMD) is suitable for linear, nonlinear, and nonstationary signals [13]. Recently, it has been successfully applied to extract significant fault features in many fields, such as fault diagnosis of rotating machinery and locomotive roller bearings [13–15]. The relative entropy method is rarely used in the analog circuit fault diagnosis field. The difference between the probability distributions of faulty and fault-free circuits can be distinguished clearly with relative entropy, because when a component value varies, the energy distribution also changes, which in turn changes the relative entropy. Kurtosis is a measure of the heavy-tailedness of the distribution of a real-valued random variable and clearly captures differences between waveforms. As a result, the combination of kurtosis and relative entropy is suitable as fault features for analog fault diagnosis.

Consequently, in this paper, we decompose the impulse responses of a CUT into IMFs using the EEMD method and then adopt the kurtosis and relative entropy techniques to obtain feature vectors. These feature vectors can be used to diagnose faulty components over a wide range of parameter variations. For this purpose, a classifier is needed. We select the extreme learning machine (ELM) classifier because it has been shown to offer excellent generalization performance and low computational cost [16, 17] when trained and tested with fault features. By combining the EEMD, relative entropy, and ELM algorithms for feature extraction and classification, we can perform analog circuit fault diagnosis reliably and accurately with reduced test time.

This paper is organized as follows. Section 2 briefly presents the principles of the EEMD, relative entropy, and ELM algorithms. In Section 3, the diagnostic procedure of the proposed method is introduced. Section 4 presents the simulation experiments and results for two benchmark analog circuits, and the performance of the proposed method is also discussed there. Finally, conclusions are drawn in Section 5.

2. A Review of Fundamental Theory

In this work, we combine EEMD, relative entropy, and ELM to perform fault diagnosis of analog circuits. The fundamentals of EEMD, relative entropy, and ELM are briefly introduced as follows.

2.1. Ensemble Empirical Mode Decomposition (EEMD)

Ensemble empirical mode decomposition, based on empirical mode decomposition (EMD), was proposed to solve the mode aliasing in the time–frequency distribution by adding Gaussian white noise [13]. It is based on the simple assumption that any signal consists of different simple intrinsic modes of oscillation, from low to high frequency [13, 19]. Thus, the original signal $x(t)$ is written as

$$x(t) = \sum_{i=1}^{n} c_i(t) + r_n(t), \qquad (1)$$

where $c_i(t)$ is the $i$th intrinsic mode function (IMF) and $r_n(t)$ is the residue. An IMF is defined as a simple oscillatory function that satisfies two conditions [18]:
(1) the number of extrema and the number of zero crossings are equal or differ by at most one;
(2) the mean value of the envelopes defined by the local maxima and by the local minima is zero.

From (1), we can see that the original signal is decomposed into $n$ IMFs and one residue $r_n(t)$. The decomposition procedure based on the sifting method is described as follows.

Step 1. Given a signal $x(t)$, all of its local maxima and minima are identified first. Then the upper and lower envelopes of the signal are determined by cubic spline interpolation of the local maxima and of the local minima, respectively. Let $m_1(t)$ be the mean of the two envelopes; the first component $h_1(t)$ is obtained as

$$h_1(t) = x(t) - m_1(t). \qquad (2)$$

Step 2. Let $m_{11}(t)$ be the mean of the upper and lower envelopes of $h_1(t)$; then $h_{11}(t)$ is calculated as

$$h_{11}(t) = h_1(t) - m_{11}(t). \qquad (3)$$

Step 3. Repeat the above procedure $k$ times until $h_{1k}(t)$ satisfies the IMF conditions. The first IMF is then obtained as $c_1(t) = h_{1k}(t)$.

Step 4. Subtract $c_1(t)$ from $x(t)$ to obtain the residue $r_1(t)$:

$$r_1(t) = x(t) - c_1(t). \qquad (4)$$

Step 5. The residue, which still contains useful information, is treated as the new main signal, and Steps 1–4 are repeated to obtain the remaining IMFs. Formula (4) is then generalized as

$$r_i(t) = r_{i-1}(t) - c_i(t), \quad i = 2, 3, \ldots, n. \qquad (5)$$

Step 6. When the residue becomes a monotonic function or has only one extremum, the whole procedure stops.

From this procedure, we can see that the IMFs reflect the degree of oscillation of the signal in amplitude and frequency; that is, the IMFs contain much of the time–frequency information of the signal. The authors of [13] therefore indicated that the algorithm is a high-performance signal processing approach that can deal with linear, nonlinear, and nonstationary signals. More details about this technique can be found in [13, 19].
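
To make the procedure concrete, the following Python sketch implements plain EMD sifting with a fixed number of sifting iterations and a simple EEMD wrapper that averages the IMFs of noise-added copies of the signal. It is a minimal illustration under our own assumptions (helper names, iteration counts, ensemble size, and noise amplitude), not the implementation used in this paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def envelope_mean(x, t):
    """Mean of the upper/lower cubic-spline envelopes (Step 1); None if too few extrema."""
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if len(maxima) < 2 or len(minima) < 2:
        return None
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return 0.5 * (upper + lower)

def emd(x, t, max_imfs=10, sift_iters=10):
    """Decompose x(t) into IMFs c_i(t) plus a residue r_n(t) by sifting (Steps 1-6)."""
    imfs, residue = [], x.astype(float)
    for _ in range(max_imfs):
        h = residue.copy()
        for _ in range(sift_iters):            # Steps 1-3: sift until h approximates an IMF
            m = envelope_mean(h, t)
            if m is None:
                break
            h = h - m                          # subtract the envelope mean
        imfs.append(h)                         # c_i(t)
        residue = residue - h                  # Steps 4-5: r_i(t) = r_{i-1}(t) - c_i(t)
        if envelope_mean(residue, t) is None:  # Step 6: residue monotonic -> stop
            break
    return imfs, residue

def eemd(x, t, n_trials=50, noise_std=0.2):
    """EEMD: average the IMFs of EMD applied to noise-added copies of x(t)."""
    scale = noise_std * np.std(x)
    trials = []
    for _ in range(n_trials):
        noisy = x + scale * np.random.randn(len(x))
        imfs, _ = emd(noisy, t)
        trials.append(imfs)
    n_imfs = min(len(tr) for tr in trials)     # keep the number of IMFs common to all trials
    return [np.mean([tr[i] for tr in trials], axis=0) for i in range(n_imfs)]
```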

2.2. Relative Entropy

Let $X$ be a continuous random variable, and let $p(x)$ and $q(x)$ be two probability distributions of $X$. Relative entropy describes the distance between the two probability distributions of $X$ and is calculated as

$$D(p \parallel q) = \int p(x) \log \frac{p(x)}{q(x)} \, dx, \qquad (6)$$

where $p(x)$ denotes the energy probability distribution function (PDF) of the response voltage of the faulty CUT and $q(x)$ denotes the PDF of the response voltage of the fault-free CUT. When the parameters of one or more components of the CUT change, the PDF of the corresponding output voltage also changes; this makes relative entropy sensitive to parameter variations of the components in the CUT. By calculating the relative entropy between faulty and fault-free circuits, faults can be detected. Consequently, relative entropy is suitable as a fault feature for fault diagnosis.
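
As a simple illustration, the discrete form of (6), which is used later on segment energy distributions, can be computed as follows; the function name and the small constant guarding against zero probabilities are our own choices.

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """Discrete relative entropy D(p||q) = sum_j p_j * log(p_j / q_j).

    p: energy distribution of the faulty response, q: that of the fault-free
    response. eps avoids division by zero; the natural logarithm is assumed.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()                      # make sure both distributions sum to one
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```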

2.3. Extreme Learning Machine

In order to diagnose faults accurately and quickly, the extreme learning machine (ELM) is adopted in our work. ELM is a fast learning algorithm for single-hidden-layer feedforward networks (SLFNs), as shown in Figure 1, in which the hidden layer need not be tuned. It has been proven to offer excellent generalization performance and low computational cost in many applications [16, 17]. In this paper we use it as a classifier for fault diagnosis. A brief review of ELM is given as follows [16].

Suppose $(\mathbf{x}_j, \mathbf{t}_j)$, $j = 1, \ldots, N$, are $N$ arbitrary distinct samples, with $\mathbf{x}_j \in \mathbb{R}^n$ and $\mathbf{t}_j \in \mathbb{R}^m$. For an SLFN with $L$ hidden nodes, taking one output node as an example, the output function is defined as

$$f(\mathbf{x}_j) = \sum_{i=1}^{L} \beta_i \, g(\mathbf{w}_i \cdot \mathbf{x}_j + b_i), \quad j = 1, \ldots, N, \qquad (7)$$

where $\beta_i$ is the output weight between the $i$th hidden node and the output layer, $g(\cdot)$ is the activation function, which gives the output of the hidden layer with respect to the input $\mathbf{x}_j$, $\mathbf{w}_i$ is the input weight of the $i$th hidden node, $b_i$ denotes the bias of the $i$th hidden node, and $\mathbf{w}_i \cdot \mathbf{x}_j$ represents the dot product between $\mathbf{w}_i$ and $\mathbf{x}_j$. $\mathbf{H}$ is the hidden-layer output matrix:

$$\mathbf{H} = \begin{bmatrix} g(\mathbf{w}_1 \cdot \mathbf{x}_1 + b_1) & \cdots & g(\mathbf{w}_L \cdot \mathbf{x}_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(\mathbf{w}_1 \cdot \mathbf{x}_N + b_1) & \cdots & g(\mathbf{w}_L \cdot \mathbf{x}_N + b_L) \end{bmatrix}. \qquad (8)$$

The target of ELM is to minimize the output error; hence the minimal-norm least-squares method is adopted:

$$\min_{\boldsymbol{\beta}} \left\| \mathbf{H}\boldsymbol{\beta} - \mathbf{T} \right\|, \qquad (9)$$

where $\mathbf{T}$ indicates the expected (target) output. Once $\mathbf{w}_i$ and $b_i$ are determined, $\mathbf{H}$ is also uniquely determined. According to formula (7), the output weight can be calculated by

$$\boldsymbol{\beta} = \mathbf{H}^{\dagger}\mathbf{T}, \qquad (10)$$

where $\mathbf{H}^{\dagger}$ is the Moore–Penrose generalized inverse of matrix $\mathbf{H}$.
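
A minimal ELM sketch following (7)–(10) is given below: it assigns the input weights and biases at random, builds the hidden-layer output matrix $\mathbf{H}$, and solves for the output weights with the Moore–Penrose pseudoinverse. The sigmoid activation, one-hot target encoding, and function names are assumptions for illustration rather than the exact configuration used later in the experiments.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Train an SLFN with ELM: random (w_i, b_i), then beta = pinv(H) @ T.

    X: (N, n) input samples; T: (N, m) target matrix (e.g. one-hot fault labels).
    """
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # input weights w_i
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # hidden biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                   # hidden-layer output matrix, (8)
    beta = np.linalg.pinv(H) @ T                             # output weights, formula (10)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Evaluate the network output of formula (7) and return the winning output node."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)
```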

3. Diagnostic Procedure

3.1. Diagnostic Procedure

The diagnostic procedure based on EEMD, relative entropy, and ELM is shown in Figure 2. It involves four major stages: data acquisition, data processing, training, and fault diagnosis. Once the response voltage waveforms of the fault-free circuit and of the faulty circuits are recorded, they are decomposed into IMF components using EEMD. Then, using the energy of each IMF, the kurtosis and the relative entropy between the faulty IMFs and the fault-free IMFs are obtained. The kurtosis and relative entropy of selected IMFs of each fault are combined to form a fault feature vector. A unique feature vector is thus extracted for each fault and used to train and test the ELM classifier to complete fault diagnosis.

3.2. The Procedure of Feature Extraction

The procedure of feature extraction of the proposed method is described as follows.

Step 1. Every fault class (including the fault-free status) of the CUT is simulated in PSPICE, and the corresponding output waveforms are obtained.

Step 2. Decompose each waveform with EEMD into IMFs according to the method in Section 2.1.

Step 3. Calculate kurtosis and relative entropy of each IMF.

Step 3.1. Obtain the kurtosis of each IMF. According to [11], kurtosis is a measure of the heaviness of the tails of the distribution of a signal [20]; hence, kurtosis responds to changes in the signal and can be used as a signal feature. In the zero-mean case, kurtosis is defined as follows [11]:

$$K_i = \frac{E\left[c_i^4(t)\right]}{\left(E\left[c_i^2(t)\right]\right)^2}, \qquad (11)$$

where $E[\cdot]$ is the expectation operator and $K_i$ is the kurtosis of the $i$th IMF of a fault.
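
A direct transcription of (11), assuming the IMF samples are stored in a NumPy array; the explicit zero-mean centering step and the function name are our own choices.

```python
import numpy as np

def imf_kurtosis(c):
    """Zero-mean kurtosis K = E[c^4] / (E[c^2])^2 of one IMF, as in (11)."""
    c = np.asarray(c, dtype=float)
    c = c - c.mean()                 # enforce the zero-mean assumption
    return float(np.mean(c**4) / np.mean(c**2)**2)
```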

Step 3.2. Calculate the relative entropy of each IMF.

(1) Calculate the total energy of each IMF by

$$E_i = \sum_{t=1}^{N} \left| c_i(t) \right|^2, \qquad (12)$$

where $c_i(t)$ is the $i$th IMF and $N$ is the length of $c_i(t)$.

(2) Calculate the probability distribution. According to [21], the nonnegative energy distribution can be regarded as a probability distribution of the signal. Hence, the energy distribution is calculated as follows: each IMF is divided evenly into $n$ segments, as shown in Figure 3, where $n$ is 6. The energy of the $j$th segment is

$$E_{ij} = \sum_{t=t_{j-1}}^{t_j} \left| c_i(t) \right|^2, \quad j = 1, 2, \ldots, n, \qquad (13)$$

where $n$ is the number of segments and $t_{j-1}$ and $t_j$ are the starting and stopping time points of the $j$th segment. The energy distribution of each segment within the whole IMF can then be expressed as

$$p_{ij} = \frac{E_{ij}}{E_i}. \qquad (14)$$

(3) According to relative entropy theory, the relative entropy of each IMF is defined as

$$D_i = \sum_{j=1}^{n} p_{ij} \log \frac{p_{ij}}{q_{ij}}, \qquad (15)$$

where $q_{ij}$ is the energy distribution of the $j$th segment of the corresponding nominal IMF of the fault-free circuit.
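
The segment energy distribution of (12)–(14) and the IMF-level relative entropy of (15) can be sketched as follows. The sketch reuses the relative_entropy helper from the Section 2.2 sketch and splits each IMF into segments of (nearly) equal length; all names are illustrative assumptions.

```python
import numpy as np

def energy_distribution(c, n_segments=6):
    """Split one IMF into n segments and return their energy fractions, per (12)-(14)."""
    c = np.asarray(c, dtype=float)
    segments = np.array_split(c, n_segments)                 # Figure 3: n segments of an IMF
    seg_energy = np.array([np.sum(s**2) for s in segments])  # E_ij, formula (13)
    return seg_energy / seg_energy.sum()                     # p_ij = E_ij / E_i, formula (14)

def imf_relative_entropy(c_faulty, c_nominal, n_segments=6):
    """Relative entropy between a faulty IMF and the corresponding nominal IMF, per (15)."""
    p = energy_distribution(c_faulty, n_segments)
    q = energy_distribution(c_nominal, n_segments)
    return relative_entropy(p, q)    # helper defined in the sketch of Section 2.2
```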

Step 4. A feature vector for each fault can be written as

$$\mathbf{F}_k = \left[ K_1, \ldots, K_m, D_1, \ldots, D_m \right], \quad k = 1, 2, \ldots, S, \qquad (16)$$

where $S$ denotes the number of fault samples of a circuit and $m$ is the number of IMFs selected for one fault. It is reasonable to normalize the feature vectors in formula (16); here, a partial normalization is applied, in which only some of the features in the feature vector are rescaled. Finally, the feature vectors are used to train and test an ELM classifier for fault diagnosis.
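
Putting the pieces together, a feature vector of the form (16) for one faulty response can be assembled as below. The choice of which IMFs to keep is a placeholder (here indices matching the IMFs 4–9 used later, zero-based), and the partial normalization step is omitted; the sketch assumes the imf_kurtosis and imf_relative_entropy helpers from the previous sketches.

```python
import numpy as np

def fault_feature_vector(faulty_imfs, nominal_imfs, selected=range(3, 9), n_segments=6):
    """Build [K_1..K_m, D_1..D_m] of formula (16) from matched lists of IMFs.

    faulty_imfs / nominal_imfs: IMFs from EEMD of the faulty and fault-free
    responses; `selected` picks the IMF indices used as features.
    """
    kurts = [imf_kurtosis(faulty_imfs[i]) for i in selected]
    rents = [imf_relative_entropy(faulty_imfs[i], nominal_imfs[i], n_segments)
             for i in selected]
    return np.array(kurts + rents)
```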

4. Experiment and Performance Results

4.1. A Sallen-Key Bandpass Filter

To verify the fault diagnosis capability of the proposed method, the first example circuit is a second-order Sallen-Key bandpass filter, a benchmark circuit used as a CUT in [3, 9, 18]. Figure 4 shows the schematic of the circuit with its nominal parameter values. As the figure shows, the filter consists of 5 resistors, 2 capacitors, and 1 operational amplifier. First, the operational amplifier in the circuit is assumed to be fault-free. Second, we suppose that the nominal value of each potential faulty component is k and that its faulty parameter ranges are [ , ] and [ , ]. The nominal and faulty parameter ranges of the filter's components are listed in Table 1. The table covers 15 fault classes in total, including the fault-free status, where ↑ and ↓ stand for a parameter value higher and lower than nominal, respectively.

According to the fault classes in Table 1, we use OrCAD/PSpice with time-domain transient analysis and Monte Carlo analysis to obtain the simulated fault data. First, the Sallen-Key bandpass filter is stimulated by an excitation signal V1, a single 10 V pulse of 10 μs duration. The run-to time and maximum step size are set to 300 μs and 0.1 μs, respectively. The output voltage is measured at the node "out." To account for the effects of component tolerances, the resistors and capacitors are assumed to have tolerance limits of ±5%. When all components vary within their tolerances, the circuit is considered fault-free; when the parameter value of any component is outside its tolerance limits while the other components vary within their tolerances, the circuit is regarded as faulty.

To approximate the behavior of actual circuits, every fault class is simulated 150 times over its faulty parameter range using time-domain Monte Carlo analysis, and a total of 2250 corresponding impulse response waveforms are obtained. Some of these waveforms are shown in Figure 5, where (a) is the fault-free waveform and the others are impulse response waveforms of different faulty circuits.

The simulation data from PSpice are recorded and imported into Matlab, and then feature vectors are constructed from the kurtosis and relative entropy to train an ELM classifier. The details are as follows.

First, to construct the feature vectors, we decompose the stored response data into IMF components with the EEMD method, following the procedure in Section 3. Figure 6 displays the EEMD decomposition results of the fault-free circuit and of a faulty C1. In the figure, each of the two response signals is decomposed into 10 IMF curves and a residue, ordered from high frequency to low frequency. We can clearly see that, within the same decomposition layer, the faulty IMF differs markedly from the fault-free IMF. Therefore, we take only IMF components 4–9 into account, which improves fault distinguishability and is sufficient for our work.

Next, the kurtosis of each IMF of a given faulty circuit is calculated. Meanwhile, each IMF waveform (300 μs, 3000 samples) is divided evenly into 6 segments. The PDF of each IMF is calculated according to (12), (13), and (14). Table 2 lists the PDFs of the nominal response IMFs of the fault-free circuit. The relative entropy between each IMF of the faulty circuit and the corresponding nominal IMF of the fault-free circuit is then obtained with formula (15). A feature vector is built for each fault class and fed directly into the ELM classifier. Taking the faulty C1 as an example, Table 3 shows the result of feature extraction, that is, the feature vector of the faulty C1 obtained by calculating its kurtosis and relative entropy with (11), (15), and (16). Its fault feature vector is F = [0.6162, 0.8729, 0.9813, 0.8713, 0.9203, 0.7989, 0.0099, 0.0109, 0.2260, 0.7400, 0.1266, 0.4903]. In the same way, a total of 2250 fault feature vectors of the circuit are obtained.

Finally, for every fault class of the Sallen-Key circuit, the 150 samples are split into two parts: the first 100 fault feature vectors are used to train the ELM classifier and the remaining 50 are used to test it. Because the testing accuracy is sensitive to the choice of activation function, the RBF function is adopted for the diagnosis, and the number of hidden neurons is set to 250.
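
As an illustration of this training/testing split, the snippet below assumes the 2250 feature vectors of all 15 fault classes are stacked in an array `features` with integer class labels in `labels` (both hypothetical names), and it reuses the ELM sketch from Section 2.3, whose sigmoid activation stands in for the RBF activation mentioned above.

```python
import numpy as np

# features: (2250, 12) fault feature vectors; labels: (2250,) class indices 0..14
n_classes, per_class, n_train = 15, 150, 100

train_idx, test_idx = [], []
for c in range(n_classes):                       # first 100 samples per class for training
    idx = np.where(labels == c)[0]
    train_idx.extend(idx[:n_train])
    test_idx.extend(idx[n_train:per_class])

T_train = np.eye(n_classes)[labels[train_idx]]   # one-hot targets for the ELM
W, b, beta = elm_train(features[train_idx], T_train, n_hidden=250)
pred = elm_predict(features[test_idx], W, b, beta)
accuracy = np.mean(pred == labels[test_idx])
print(f"test accuracy: {accuracy:.3f}")
```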

To demonstrate the performance of the proposed diagnostic method, we compare it with the existing feature extraction methods presented in [3, 11, 18], each used to train an ELM classifier. The classification results are shown in Table 4. For single-fault diagnosis of the Sallen-Key bandpass filter circuit, the average test accuracy of our method is 99.4%. In contrast, the wavelet and ELM method (88.8%), the lifting wavelet and ELM method (99.3%), and the method in [11] (97.9%) achieve lower test accuracy. Thus, the proposed method outperforms the combination of wavelet and ELM and the method of [11], while achieving nearly the same accuracy as the lifting wavelet and ELM method. Moreover, the methods in [3, 11, 18] only considered a CUT to be faulty when the value of the potentially faulty component is 50% higher or lower than the nominal value, and they did not take the continuity of faulty parameters and the influence of tolerances into account. If we likewise consider only the 50% variation as the faulty parameter value, the test accuracy of our method reaches 100% in simulation.

To reduce time cost, we adopt the ELM algorithm as the classifier because it combines competitive classification accuracy with very low training and testing time. Table 5 compares the time cost of the ELM-based and SVM-based methods with the same four types of original feature vectors, which are based on different feature extraction methods. From the table, it can be seen that the test accuracy of the ELM-based method is comparable to that of the SVM-based method. For example, the test accuracy of the SVM-based method is 98.6% for the kurtosis and entropy technique, which is similar to that of the ELM-based method (97.9%). However, the time consumption of the ELM-based method is much lower than that of the SVM-based method; for instance, training SVM classifiers with wavelet coefficients as features takes 11.2 s, whereas an ELM classifier takes 0.0289 s. For our proposed method, the time cost is also lower than those of the wavelet coefficient and lifting wavelet techniques. As a result, the proposed method greatly reduces time cost.

4.2. A Leapfrog Filter

The second example circuit is a leapfrog filter, which is used as a CUT in [9]. The nominal values of the benchmark circuit's components are shown in Figure 7. The input signal is also a single pulse, with 5 V amplitude and 10 μs duration, and the "out" node of the circuit is the only test point. In practice, several components of a CUT may fail simultaneously. Therefore, in this experiment, 10 multifault cases are selected to verify the diagnostic performance of the proposed method for multifaults in the CUT, using the same fault selection as in [9]. These fault classes are shown in Table 6. The experiment is carried out by injecting these fault classes into the CUT according to the diagnostic procedure discussed in Section 3. The diagnostic results for multifaults are shown in Table 7. The average test accuracy of our method is 98.9%, whereas the methods adopting the feature extraction techniques of [3, 11, 18] achieve diagnostic accuracies of 88.1%, 90.5%, and 86.8%, respectively, for the same multifaults in the leapfrog filter. Therefore, the proposed method outperforms the other diagnostic methods for multifaults in the leapfrog filter circuit.

From the two experiments, the results of the proposed method can be summarized as follows:
(1) The proposed method has better accuracy than other methods such as the wavelet coefficient technique and the lifting wavelet method.
(2) For multifault diagnosis, the method adopting EEMD, kurtosis, and relative entropy to construct feature vectors has better classification accuracy than the traditional methods used in [3, 11, 18].
(3) ELM classifiers fed with features built from EEMD, kurtosis, and relative entropy achieve classification results comparable to those of SVM classifiers trained with the same original feature vectors, while the ELM-based method requires much less classification time than the SVM-based method.

To sum up, the proposed method is acceptable in both respects considered, test accuracy and time cost: it provides high test accuracy and fast classification.

5. Conclusions

In this paper, a combined diagnostic method for analog circuits based on EEMD, relative entropy, and ELM is proposed. The method uses EEMD, kurtosis, and relative entropy to construct fault feature vectors, and fault classification on CUTs is then performed with the ELM classifier. The effectiveness of the proposed method has been validated on two classical benchmark circuits for single-fault and multifault diagnosis. The experimental results show that the method can effectively distinguish different circuit faults with high testing accuracy (99.4%) and low testing time.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by the Fundamental Research Funds for the Central Universities of China (Grant no. ZYGX2015J074), Science and Technology Support Project of Sichuan Province, China (2014FZ0037, 2015FZ0111), and Support Project of CDTU (KY1311018B).