Computational Intelligence and Neuroscience

Volume 2016 (2016), Article ID 7657054, 9 pages

http://dx.doi.org/10.1155/2016/7657054

## Fault Diagnosis for Analog Circuits by Using EEMD, Relative Entropy, and ELM

^{1}School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China

^{2}Department of Communication Engineering, Chengdu Technological University, Chengdu 611731, China

Received 21 May 2016; Revised 18 July 2016; Accepted 28 July 2016

Academic Editor: Rodolfo Zunino

Copyright © 2016 Jian Xiong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper presents a novel fault diagnosis method for analog circuits using ensemble empirical mode decomposition (EEMD), relative entropy, and the extreme learning machine (ELM). First, the nominal and faulty response waveforms of a circuit are measured and decomposed into intrinsic mode functions (IMFs) with the EEMD method. Second, by comparing the nominal IMFs with the faulty IMFs, the kurtosis and relative entropy are calculated for each IMF. Next, a feature vector is obtained for each faulty circuit. Finally, an ELM classifier is trained on these feature vectors for fault diagnosis. Validation on two benchmark circuits shows that the proposed method is applicable to analog fault diagnosis with acceptable accuracy and time cost.

#### 1. Introduction

Numerous studies have indicated that analog circuit fault diagnosis is fundamental to design validation and performance evaluation in integrated circuit manufacturing [1–3]. In contrast to the well-developed diagnostic methods for digital circuits, diagnosis of analog circuits remains an extremely difficult problem and an active research area for the following reasons: (1) there is no reliable and practical fault modeling method for analog circuits because of the complexity and variability of analog circuit structures; (2) the parameter values of analog components are continuous; (3) the impact of tolerance and nonlinearity cannot be ignored; (4) in actual analog circuits, the number of accessible test points is limited.

The procedure of fault diagnosis for analog circuits can generally be divided into four stages: data acquisition, feature extraction, fault detection, and fault identification and isolation. As one of the foremost stages, feature extraction is closely tied to the efficiency of fault diagnosis. Many feature extraction methods have been proposed, such as the correlation function technique [4], the information entropy approach [5], the fast Fourier transform technique [6], and the wavelet transform technique [7]. Zhang et al. [8] directly used the output voltage as features for fault diagnosis of analog circuits without any preprocessing, and the resulting diagnostic performance was not very good. M. Aminian and F. Aminian proposed a diagnostic method for analog circuits that uses wavelet decomposition coefficients, principal component analysis (PCA), and data normalization to construct fault feature vectors and then trains and tests neural network classifiers [3]; this method achieves higher diagnostic accuracy. In [9], Long et al. adopted conventional time-domain feature vectors to train and test least squares support vector machines (LS-SVM) for fault diagnosis of analog circuits, achieving better accuracy than traditional wavelet feature vectors. Information entropy techniques are more sensitive to parameter variations of components in circuits under test (CUTs), so information entropy is widely combined with other techniques for fault diagnosis [5, 10–12]. Xie et al. diagnosed soft faults of analog circuits using Rényi's entropy with effective results [5]. In [11], the authors developed a new fault diagnosis approach that uses the kurtosis and entropy of sampled signals as feature vectors to train a neural network classifier.

However, some problems in feature extraction remain to be considered and solved. First, the choice of features used to train classifiers matters, because different combinations of features and classifiers yield different diagnostic results. Second, most of the aforementioned methods were validated only with discrete simulation data; that is, they considered a CUT faulty only when a component value was 50% higher or lower than its nominal value, which implies low fault coverage. Third, some methods should take the influence of tolerance and the continuity of faulty parameters into account.

In our work, therefore, we use EEMD, kurtosis, and relative entropy to construct new feature vectors for training an ELM classifier, with the goals of improving diagnosability and reducing time cost. As an adaptive time-frequency data analysis method, ensemble empirical mode decomposition (EEMD) is suitable for linear, nonlinear, and nonstationary signals [13]. Recently, it has been successfully applied to extract significant fault features in fields such as rotating machinery and locomotive roller bearing fault diagnosis [13–15]. The relative entropy method is rarely used in the analog circuit fault diagnosis field. The difference between the probability distributions of faulty and fault-free circuits can be clearly distinguished with relative entropy, because when a component value varies, the energy distribution changes as well, which in turn changes the relative entropy. Kurtosis is a measure of the heavy-tailedness of the distribution of a real-valued random variable and clearly captures differences between waveforms. As a result, the combination of kurtosis and relative entropy is well suited as a set of fault features for analog fault diagnosis.
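To make the kurtosis feature concrete, the snippet below (an illustrative NumPy sketch, not code from the paper) computes the fourth standardized moment of a sampled waveform. A Gaussian signal yields a value near 3; deviations from 3 indicate heavier or lighter tails, which is what makes kurtosis useful for distinguishing waveform shapes.

```python
import numpy as np

def kurtosis(x):
    """Fourth standardized moment: E[(x - mu)^4] / sigma^4 (Gaussian -> 3)."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    var = ((x - m) ** 2).mean()
    return float(((x - m) ** 4).mean() / var ** 2)

rng = np.random.default_rng(0)
print(kurtosis(rng.normal(size=100_000)))      # near 3 for Gaussian noise
print(kurtosis(rng.uniform(-1, 1, 100_000)))   # near 1.8 for a uniform signal
```

Note that some libraries report *excess* kurtosis (the value above minus 3); either convention works as a feature as long as it is used consistently.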

Consequently, in this paper, we decompose the impulse responses of a CUT into IMFs using the EEMD method and then apply the kurtosis and relative entropy techniques to obtain feature vectors. These feature vectors can be used to diagnose faulty components across a wide range of parameter variations. For this purpose, a classifier is needed. We selected the extreme learning machine (ELM) classifier because it has proven generalization performance and low computational cost [16, 17] when trained and tested on fault features. By combining the EEMD, relative entropy, and ELM algorithms for feature extraction and classification, we achieve reliable and accurate analog circuit fault diagnosis with reduced test time.

This paper is organized as follows: Section 2 briefly presents the principles of the EEMD, relative entropy, and ELM algorithms. In Section 3, the diagnostic procedure of the proposed method is introduced. Section 4 presents the simulation details and results for two benchmark analog circuits and discusses the performance of the proposed method. Finally, conclusions are drawn in Section 5.

#### 2. A Review of Fundamental Theory

In this work, we combine EEMD, relative entropy, and ELM to perform fault diagnosis of analog circuits. The fundamentals of each are briefly introduced as follows.

##### 2.1. Ensemble Empirical Mode Decomposition (EEMD)

Ensemble empirical mode decomposition, based on empirical mode decomposition (EMD), solves the mode-mixing (aliasing) problem in the time-frequency distribution by adding Gaussian white noise [13]. It rests on the simple assumption that any signal consists of different simple intrinsic modes of oscillation, from low to high frequency [13, 19]. Thus, the original signal $x(t)$ is represented as

$$x(t) = \sum_{i=1}^{n} c_i(t) + r_n(t), \tag{1}$$

where the $c_i(t)$ are the intrinsic mode functions (IMFs) and $r_n(t)$ is the residue. An IMF is defined as a simple oscillatory function that satisfies two conditions [18]:

(1) The number of its extrema and the number of its zero crossings are equal or differ by at most one.

(2) The mean value of the envelopes defined by the local maxima and the local minima is zero.

From (1), we can see that the original signal $x(t)$ is decomposed into $n$ IMFs $c_i(t)$ and one residue $r_n(t)$. The decomposition procedure, known as the sifting method, is described as follows.

*Step 1.* Given a signal $x(t)$, all of its local maxima and minima are obtained first. Then the upper and lower envelopes of the signal are determined by cubic spline interpolation of the local maxima and minima. Let $m_1(t)$ be the mean of the two envelopes; the first component $h_1(t)$ is obtained as

$$h_1(t) = x(t) - m_1(t). \tag{2}$$

*Step 2.* Let $m_{11}(t)$ be the mean of the upper and lower envelopes of $h_1(t)$; then $h_{11}(t)$ is calculated as

$$h_{11}(t) = h_1(t) - m_{11}(t). \tag{3}$$

*Step 3.* Repeat the above procedure $k$ times until $h_{1k}(t)$ satisfies the IMF conditions. The first IMF is obtained as $c_1(t) = h_{1k}(t)$.

*Step 4.* Subtract $c_1(t)$ from $x(t)$, and a residue $r_1(t)$ is obtained as

$$r_1(t) = x(t) - c_1(t). \tag{4}$$

*Step 5.* The residue $r_1(t)$, which still contains useful information, is treated as the main signal, and Steps 1–4 are repeated to obtain the other IMFs. Formula (4) then generalizes to

$$r_1(t) - c_2(t) = r_2(t), \quad \ldots, \quad r_{n-1}(t) - c_n(t) = r_n(t). \tag{5}$$

*Step 6.* When the residue $r_n(t)$ becomes a monotonic function or has only one extremum, the whole procedure stops.

From this procedure, we can see that the IMFs represent the degree of oscillation of the signal in amplitude and frequency; that is, the IMFs carry much of the time-frequency information of the signal. Thus, the authors in [13] describe the algorithm as a new high-performance signal processing approach that can handle linear, nonlinear, and nonstationary signals. More details about this technique can be found in [13, 19].
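Steps 1–6 above can be sketched in a few dozen lines. The following is a simplified, self-contained illustration (not the authors' implementation): it builds cubic-spline envelopes from local extrema, sifts a fixed number of times instead of using a formal stopping criterion, and stops when the residue has too few extrema to envelope. Full EEMD would additionally average the EMD results over an ensemble of noise-added copies of the signal.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def _sift_once(x):
    """One sifting pass: subtract the mean of the spline envelopes (Steps 1-2)."""
    interior = np.arange(1, len(x) - 1)
    maxima = interior[(x[interior] > x[interior - 1]) & (x[interior] > x[interior + 1])]
    minima = interior[(x[interior] < x[interior - 1]) & (x[interior] < x[interior + 1])]
    if len(maxima) < 4 or len(minima) < 4:
        return None  # too few extrema to build cubic-spline envelopes
    t = np.arange(len(x))
    upper = CubicSpline(maxima, x[maxima])(t)
    lower = CubicSpline(minima, x[minima])(t)
    return x - (upper + lower) / 2.0

def emd(x, max_imfs=6, n_sifts=10):
    """Decompose x into a list of IMFs plus a residue (Steps 1-6, simplified)."""
    imfs, residue = [], np.asarray(x, dtype=float).copy()
    for _ in range(max_imfs):
        if _sift_once(residue) is None:   # Step 6: residue is (near-)monotonic
            break
        h = residue
        for _ in range(n_sifts):          # Step 3: sift a fixed number of times
            s = _sift_once(h)
            if s is None:
                break
            h = s
        imfs.append(h)                    # Steps 4-5: subtract the IMF, recurse
        residue = residue - h
    return imfs, residue

# Two-tone test signal: the IMFs and residue reconstruct it by construction.
t = np.linspace(0.0, 1.0, 512, endpoint=False)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imfs, res = emd(x)
```

The key property the sketch preserves is exact reconstruction: summing the IMFs and the residue recovers the original signal, which is what makes equation (1) hold.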

##### 2.2. Relative Entropy

Let $x$ be a continuous random variable, and let $p(x)$ and $q(x)$ be two probability distributions of $x$. Relative entropy describes the distance between the two distributions and is calculated as

$$D(p \parallel q) = \int p(x) \log \frac{p(x)}{q(x)}\, dx, \tag{6}$$

where $p(x)$ denotes the energy probability density function (PDF) of the response voltages of a faulty CUT and $q(x)$ denotes the PDF of the response voltages of the fault-free CUT. When the parameters of one or more components of the CUT change, the PDF of the corresponding output voltage also varies; hence relative entropy is sensitive to parameter variations of components in the CUT. By calculating the relative entropy between faulty and fault-free circuits, faults can be detected. Consequently, relative entropy is well suited as a fault feature.
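In practice, the PDFs in (6) are estimated from sampled voltages, for example with histograms over a common set of bins. The sketch below (an illustrative snippet with hypothetical names, not code from the paper) estimates $D(p \parallel q)$ this way, with a small smoothing constant so that empty bins do not make the logarithm blow up.

```python
import numpy as np

def relative_entropy(faulty, nominal, bins=32, eps=1e-12):
    """KL divergence D(p || q) between histogram estimates of two voltage PDFs."""
    faulty = np.asarray(faulty, dtype=float)
    nominal = np.asarray(nominal, dtype=float)
    # Shared bin edges spanning both sample sets, so p and q are comparable.
    lo = min(faulty.min(), nominal.min())
    hi = max(faulty.max(), nominal.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(faulty, bins=edges)
    q, _ = np.histogram(nominal, bins=edges)
    # Smooth empty bins, then normalize counts into probabilities.
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(1)
nominal = rng.normal(0.0, 1.0, 5000)   # stand-in for fault-free response voltages
faulty = rng.normal(0.4, 1.2, 5000)    # stand-in for a shifted faulty response
```

With identical sample sets the divergence is zero, and it grows as the faulty voltage distribution drifts away from the nominal one, which is exactly the sensitivity the feature exploits.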

##### 2.3. Extreme Learning Machine

In order to diagnose faults accurately and quickly, the extreme learning machine (ELM) is adopted in our work. ELM is a fast learning algorithm for single-hidden-layer feedforward networks (SLFNs), as shown in Figure 1; the hidden layer of the SLFN need not be tuned. It has proven excellent generalization performance and low computational cost in many applications [16, 17]. In this paper, we use it as a classifier for fault diagnosis. A brief review of ELM follows [16].
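The essence of ELM is that the hidden-layer weights are drawn at random and frozen, so training reduces to a single least-squares solve for the output weights. The following minimal sketch (an assumed NumPy formulation, not the authors' code) shows that structure on synthetic two-class data standing in for fault feature vectors.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ELM:
    """Minimal ELM classifier: random fixed hidden layer, analytic output weights."""

    def __init__(self, n_hidden=60, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                  # one-hot target matrix
        # Random input weights and biases -- never tuned after this point.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = sigmoid(X @ self.W + self.b)          # hidden-layer output matrix
        # Output weights via Moore-Penrose pseudoinverse: one least-squares solve.
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        return np.argmax(sigmoid(X @ self.W + self.b) @ self.beta, axis=1)

# Toy stand-in for two fault classes with well-separated feature vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 4)), rng.normal(3.0, 1.0, (100, 4))])
y = np.array([0] * 100 + [1] * 100)
clf = ELM().fit(X, y)
```

Because there is no iterative weight tuning, training cost is dominated by one pseudoinverse of the hidden-layer matrix, which is what gives ELM its low computational cost relative to backpropagation-trained networks.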