Abstract

In previous work by the authors, a technique for single-fault diagnosis in linear analog circuits was developed. Under certain conditions, one of them being nominal values for the circuit parameters, it was shown that only two measurements, taken at two selected circuit nodes and at a single frequency, are needed to detect and diagnose any parametric fault. In this paper, the practical value of the technique is improved by extending its application to the diagnosis of faults in circuits with parameters subject to tolerance. With this in mind, single parametric faults of several strengths are randomly injected in the circuit under study and, afterwards, these faults are diagnosed (or the diagnosis fails). Results are reported on a simple active filter, and conclusions are drawn about the robustness and effectiveness of the technique.

1. Introduction

The practical implementation of analog fault detection and diagnosis is a matter of importance, and many researchers are currently working on it. In recent papers, a technique for single-fault diagnosis in linear analog circuits was developed [1-3]. This technique has several attractive properties.

(i) It shows that fault diagnosis can, in theory, be done with only two measurements (corresponding to two circuit variables) at a single frequency, provided there are no ambiguity sets in the circuit.
(ii) It allows building fault dictionaries (FDs) where only one complex vector needs to be stored for each circuit parameter and each test frequency.
(iii) In theory, a sufficient FD can be built with information gathered at one single frequency.
(iv) The technique indicates how to reduce (if possible) the number of test points to a minimum of two. This is an important issue for industrial application.

However, the technique also has several drawbacks:

(i) it applies only to linear circuits;
(ii) it applies only to single faults;
(iii) all the circuit parameters, except the faulty one, must have their nominal values.

This last disadvantage is the most important. In real circuits, all parameters are subject to fluctuations due, for instance, to the manufacturing process and aging, and the real value of each component lies somewhere within a tolerance range.

Here, we report the application of the diagnosis technique to circuits where the components, such as resistors, capacitors, and amplifier gains, have random values located in a specified tolerance interval. The study relies on simulations, and thus the effect of some real sources of error (e.g., imprecision in the measurements) is not accounted for.

The paper is organized as follows. In the next two sections, we report briefly on relevant published results and present the theoretical support of the technique. Then, the procedure which tackles the tolerances and the tools developed for its implementation are presented. Finally, results on the fault diagnosis of an active filter are reported and the practical applicability of the technique is discussed.

2. Previous Work and State-of-the-Art

The problem of practical single-fault diagnosis under the assumption of element tolerances is addressed in this paper. Under the hypothesis of "exact" parameter values, it was shown before [1, 4] that two measurements at a single frequency, taken at two properly chosen test points, can be sufficient to diagnose any fault. The necessary conditions and an algorithm for selecting the test points were presented.

In [3] a set of tools developed for evaluating the technique under the above “ideal” case was presented and its application was reported. In the present paper, we give an account of an extended set of tools that can cope with the existence of tolerances in the components, the “real” case.

In terms of the taxonomy of fault-diagnosis techniques [5], this one can be classified as a simulation-before-test (SBT) approach, since the information for diagnosis is stored before the circuit is tested. It can also be classified as a fault-dictionary technique, as a set of vectors (the T-vectors), independent of the fault value, is recorded before the test and used in the diagnosis.

The fault diagnosis problem has been addressed from many perspectives and according to several goals (e.g., fault detection or fault diagnosis) and problem formulations (e.g., single- or multifrequency diagnosis, circuit parameters with or without tolerance, all or only some circuit variables available for measurement), not to mention the diversity of circuits themselves: linear, nonlinear, resistive, dynamic, analog, and mixed (or analog-digital). Recall that fault diagnosis in digital circuits is a mature discipline (see, e.g., [6-8]), quite self-contained, and independent of analog fault diagnosis.

The problem of defining conditions under which a circuit is diagnosable has had many answers. When all the element values must be calculated (i.e., when multiple faults are allowed), usually several test points and test frequencies must be used. Seminal compilations of works on this issue are [9-13], and many more contributions are spread through circuits-and-systems journals and conference proceedings.

Some researchers have tackled the fault-diagnosis problem in electronic circuits by using artificial intelligence (AI) techniques. For instance, [14] presents a mature methodology for model-based diagnosis of analog circuits using the constraint-logic programming approach. The modeling of the diagnosed circuit is generalized to arbitrary analog circuits consisting of linear elements (nonlinear elements, such as diodes and operational amplifiers with output saturation, are included in the approach by piecewise linearization of their characteristics). Both hard and parametric faults are considered in the diagnostic process. Fault situations with multiple hard faults and single parametric faults can be diagnosed. An example, a 4-stage, eighth-order bandpass filter, illustrates the approach.

The work in [15] is also based on AI. The detection and location of faults in analog circuits are done by checking that the measurements are consistent with the circuit function. The representation of information (component behavior and structure of the circuit) is unique (according to the authors) and accommodates the imprecise nature of analog circuits. A model of the circuit is formed from the constraints imposed by the behavior of the components and the interconnections. The values of the parameters within the circuit are deduced by propagating the effects of measurements through this model. Faults are inferred from the detection of inconsistencies and located by tentatively suspending constraints within the model.

In [16], a quite original approach to fault diagnosis of nonlinear electronic circuits is presented. It consists of a branch-fault location (as opposed to node-fault location) approach which requires a single excitation source at one test frequency. Branch diagnosis equations are constructed by modulating the bias of nonlinear devices in the network. The constraints on the bias equations and test frequency are addressed for a general network where network elements are modulated or bias transitions are used. A network element whose value is varied externally (i.e., by test equipment) is defined as a modulated element. The proposed technique has been applied to several examples consisting mainly of cascaded amplifiers with bipolar transistors.

We now review some recent work relevant to the present article.

In [17], the selection of test nodes was studied extensively, and efficient techniques, called inclusion methods and exclusion methods, were proposed. The computational cost of the methods depends linearly on the number of test nodes and is also proportional to the number of faults. The concept of a "minimal set of test nodes", a novelty in analog circuit fault diagnosis, was defined, and polynomial-time algorithms were proposed for the first time to generate such sets. Note that the faults considered in that study were hard, or catastrophic, faults (shorts or opens), while our method, based on the T-vectors, is appropriate for both hard and soft (or parametric) faults.

In [18], an efficient method to select an optimum set of test points for dictionary techniques in analog fault diagnosis was proposed. It is based on searching for the minimum entropy index attainable with the available test points. The test point with the minimum entropy index is selected to construct the optimum set of test points. The method is polynomially bounded in terms of computational cost. The frequency used was 1000 Hz (note that any number of test frequencies can be used in our work). The faults under study were also catastrophic; in fact, they were the same as those used in [17].

In [19], symbolic techniques for the selection of test frequencies in multifrequency parametric fault diagnosis of linear analog circuits were presented. The proposed approach was based on the evaluation of the condition number and the norm of a sensitivity matrix of the circuit under test. The initial set of frequencies, from which the test frequencies were chosen, had values separated by octaves (a heuristic choice).

3. Fault Simulation Equations

The theoretical results in this work are a by-product of the technique used to assemble the equations that simulate the circuit with a single fault. This technique was named FARUBS (from "Fault Rubber Stamps") in [20] and has been used to build efficient fault simulators for nonlinear DC circuits [21, 22], for linear dynamic circuits in the time domain [23, 24], and for linear dynamic circuits in the frequency domain [25]. We briefly review the technique here, as it is the basis of the diagnosis method discussed in the present work.

The circuit equations, which are assembled using modified nodal analysis (MNA) according to the "rubber stamps" methodology [26], are written as

A x = w. (1)

In most simulators (including those we have developed), this linear system of equations is solved with triangular decomposition (or "LU decomposition") of the circuit matrix A; that is, after calculating a lower triangular matrix L and an upper triangular matrix U such that

A = L U, (2)

two triangular systems of equations are solved:

L z = w,  U x = z. (3)

The solution, x, consists of the node voltages and some currents in the circuit.
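As a concrete sketch in Ruby, the language of our tools, the nominal solve can be written as below. The two-node circuit, its element values, and the variable names are purely illustrative assumptions, not taken from the paper:

```ruby
require 'matrix'

# Hypothetical 2-node MNA system at a single frequency (values made up):
# solve A*x = w by LU factorization of the complex circuit matrix.
omega = 2 * Math::PI * 1000.0   # test frequency of 1000 Hz
g1, g2 = 1.0e-3, 2.0e-3         # conductances (S)
c1 = 1.0e-8                     # capacitance (F)

a_mat = Matrix[
  [Complex(g1 + g2, omega * c1), Complex(-g2, 0)],
  [Complex(-g2, 0),              Complex(g2, 0)]
]
w_vec = Vector[Complex(1.0e-3, 0), Complex(0, 0)]  # injected source current

lup = a_mat.lup        # triangular ("LU") factorization, done once
x   = lup.solve(w_vec) # two triangular solves yield the node voltages
```

The factorization object is kept so it can be reused, which is exactly what makes the fault-injection update described next cheap.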

To simulate single faults, a fault-injection technique was developed that reuses the above matrix A and adjoins to it an additional line (representing a scalar equation) and an extra column (corresponding to an extra variable phi, the fault variable). This additional line and column are placed at the bottom and at the right, respectively, of the matrix of the faulty circuit, yielding the system

[ A    b     ] [ x_f ]   [ w ]
[ r^T  gamma ] [ phi ] = [ 0 ],  (4)

where gamma is a scalar that depends on the fault value. Note that the dimension of the faulty circuit matrix is equal to that of A plus one.

As A was already factorized during the simulation of the nominal circuit, and due to the special location of b and r^T in the faulty matrix, the LU factorization of the faulty system is

[ A    b     ]   [ L    0 ] [ U    u ]
[ r^T  gamma ] = [ l^T  1 ] [ 0    s ],  (5)

with L u = b, l^T U = r^T, and s = gamma - l^T u. Thus, only the vectors u and l and the scalar s have to be calculated, and two triangular systems have to be solved, to get the solution of each faulty circuit. This is much faster than factorizing the complete faulty circuit matrix, and it allowed the authors to simulate faults efficiently in quite a variety of circuit types and analysis domains (nonlinear DC circuits, linear circuits in the time domain and in the frequency domain) [21, 23, 25].
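The bordered update can be checked numerically. The Ruby sketch below uses a made-up 2x2 complex system (all names and values are ours, including the assumed zero right-hand side of the extra scalar equation) and verifies the cheap update against a direct solve of the full faulty system:

```ruby
require 'matrix'

# Nominal 2x2 system and its Doolittle LU (no pivoting needed here).
t11, t12 = Complex(2, 1),  Complex(-1, 0)
t21, t22 = Complex(-1, 0), Complex(3, -1)
w1, w2   = Complex(1, 0),  Complex(0, 0)
m        = t21 / t11            # L = [[1,0],[m,1]]
u11, u12 = t11, t12
u22      = t22 - m * t12        # U = [[u11,u12],[0,u22]]

# Border of the faulty matrix: extra column b, extra row r^T, corner gamma.
b1, b2 = Complex(0.5, 0), Complex(-0.5, 0)
r1, r2 = Complex(1, 0),   Complex(-1, 0)
gamma  = Complex(4, 0)

# Only u (from L*u = b), l (from l^T*U = r^T) and s are new work.
ub1 = b1
ub2 = b2 - m * b1
l1  = r1 / u11
l2  = (r2 - l1 * u12) / u22
s   = gamma - (l1 * ub1 + l2 * ub2)

# Solve the faulty system with the two bordered triangular factors.
z1  = w1                          # forward substitution: L_f * z = [w; 0]
z2  = w2 - m * z1
z3  = -(l1 * z1 + l2 * z2)
phi = z3 / s                      # back substitution: U_f * [x_f; phi] = z
xf2 = (z2 - ub2 * phi) / u22
xf1 = (z1 - u12 * xf2 - ub1 * phi) / u11

# Cross-check against a direct solve of the full 3x3 faulty matrix.
a_f    = Matrix[[t11, t12, b1], [t21, t22, b2], [r1, r2, gamma]]
direct = a_f.lup.solve(Vector[w1, w2, Complex(0, 0)])
```

The update and the direct solve agree to machine precision, while the update reuses the nominal factors L and U untouched.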

3.1. Mathematical Results on Testability and Diagnosis

We now proceed with the development of the diagnosis equations, which are based on those previously presented.

Equation (4) can be recast as

A x_f = w - b phi. (6)

From (1) it is known that A x = w. Thus,

x_f = x - phi A^(-1) b. (7)

The difference between the nominal and the faulty solutions, Delta_x = x - x_f, is a very important quantity in the diagnosis technique.

In (7), phi is a complex scalar and A^(-1) b is a complex column vector. We name the latter the testability vector, or T-vector, t:

t = A^(-1) b. (8)

The testability vector is clearly associated with a specific circuit element. Recall that the connectivity vector b is used to insert the fault effect in the correct lines of the matrix; it contains the information about the position of the faulty element.

Finally, we can state that

Delta_x = phi t, (9)

that is, the difference vector is related to the testability vector through multiplication by a complex scalar. This scalar, phi, is the fault variable (see (4)).

Recall that the equations are written in the complex domain. Multiplying a complex vector by a complex scalar, say by phi = rho e^(j theta), corresponds to rotating by the angle theta, and magnifying by the quantity rho, all the complex (scalar) elements of the vector. Thus, the relation between Delta_x and t is such that all the element-wise divisions of the corresponding elements of these vectors are equal to phi.

This fundamental observation is the basis of the diagnosis technique. To diagnose a circuit, given a circuit measurement taken at a known test frequency, we calculate Delta_x and test it against all the T-vectors calculated at the same frequency by performing the element-wise division of the vectors. The faulty element is the one corresponding to a T-vector for which those divisions all lead to the same value (or to approximate values, as we will discuss in Section 4).
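A toy Ruby sketch of this test follows; the vectors and the value of phi are fabricated for illustration. When the probed T-vector belongs to the true fault, the element-wise ratios collapse to a single value; for any other T-vector they scatter:

```ruby
# Fabricated fault variable and T-vectors (three circuit variables).
phi      = Complex(0.3, 0.4)
t_faulty = [Complex(1, 0), Complex(0, 2), Complex(-1, 1)] # T-vector of the true fault
t_other  = [Complex(1, 1), Complex(2, 0), Complex(0, -1)] # T-vector of a healthy element

# The "measured" difference vector is the faulty T-vector scaled by phi.
delta_x = t_faulty.map { |t| phi * t }

# Element-wise division test against each candidate T-vector.
ratios_faulty = delta_x.zip(t_faulty).map { |dx, t| dx / t } # all equal to phi
ratios_other  = delta_x.zip(t_other).map  { |dx, t| dx / t } # scattered values
```

In practice the ratios are only approximately equal, which is why the tool ranks candidates by a dispersion measure rather than testing exact equality.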

An example with four variables is shown in Figure 4. In that case, |phi| = 1; thus, all elements of Delta_x are obtained from the elements of t by performing a rotation, without magnifying the amplitudes (as rho = 1).

It is important to remember that all the equations developed so far assume a test frequency ; for each test frequency new systems of equations (nominal and faulty) must be assembled and solved and, of course, the T-vectors also depend on the test frequency. A fault dictionary is assembled with the knowledge of the nominal circuit, the test frequencies, and the test variables (or observed variables).

4. The Diagnosis Tools

The set of tools developed for implementing the technique comprises several modules implemented in Ruby (http://www.ruby-lang.org/en):

(i) a custom, Spice-like, AC simulator: Bode diagrams are plotted with gnuplot (http://www.gnuplot.info) (an example is in Figure 6); the simulator optionally generates the T-vectors (option .tcalc);
(ii) a tool for detecting ambiguity sets in the circuit;
(iii) a tool for diagnosing faults;
(iv) a generator of random faulty circuits;
(v) several short scripts for gluing the above tools together, plotting, doing batch simulations, and so on.

The detection of ambiguity sets was discussed in [3]. The presentation here focuses instead on the fault-diagnosis procedures.

We begin by showing an example of diagnosis in the ideal case (a circuit having one faulty element, with the remaining elements at their nominal values). Only after this preliminary example will the tolerance issue be addressed.

It should be kept in mind that the final purpose of the experiments reported here is to gather data on how well parametric faults of several strengths are correctly diagnosed in circuits with elements subject to tolerances (the "real", practical, case). This provides data about the robustness of the technique. The "ideal" fault-diagnosis example has the objective of introducing the main ideas, and it paves the way for the "real" fault diagnosis with tolerances presented afterwards.

4.1. Fault Diagnosis with Nominal Component Values

For diagnosing a fault, the nominal solution and the T-vectors are available, both calculated at the selected test frequencies. The faulty circuit is simulated (this emulates the process of performing measurements in a real faulty circuit) and the output (i.e., the solution) is collected. From this data, the vectors Phi, whose elements are the element-wise ratios phi_i = Delta_x_i / t_i, are calculated. (There should be no confusion between phi, the scalar in the equations, and the vector Phi, which is calculated in the practical application of the technique. In the ideal circuit case, the phi_i are all equal, and also equal to phi, when the T-vector corresponds to the faulty element; but in the real case, due to the tolerances, the phi_i differ among themselves and from phi even when the T-vector corresponds to the faulty element.)

The diagnosis tool implements the algorithm described, in pseudocode, below. For each test frequency, it calculates Phi for each possible faulty element (i.e., it evaluates the components of Phi, the element-wise ratios defined from (9)) and also computes the average, the standard deviation, and the coefficient of variation of the real and of the imaginary parts of the elements of Phi. Then, by summing those coefficients of variation over all the test frequencies, the cumulative coefficient of variation for each element is calculated; the cumulative coefficients are sorted, and the lowest one indicates the diagnosed faulty element.
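The ranking step can be sketched in Ruby as below. The element names, the sample data, and the choice of treating a near-zero mean as an infinite coefficient of variation are our own illustrative assumptions:

```ruby
# Coefficient of variation (std / |mean|) of an array of real numbers.
def coeff_of_variation(values)
  mean = values.sum / values.size
  return Float::INFINITY if mean.abs < 1e-12  # near-zero mean: non-informative
  var = values.sum { |v| (v - mean)**2 } / values.size
  Math.sqrt(var) / mean.abs
end

# ratios_by_element maps an element name to, per test frequency,
# the array of complex element-wise ratios phi_i for that element.
def diagnose(ratios_by_element)
  scores = ratios_by_element.map do |name, per_freq|
    cum = per_freq.sum do |ratios|
      coeff_of_variation(ratios.map(&:real)) +
        coeff_of_variation(ratios.map(&:imaginary))
    end
    [name, cum]
  end
  # The lowest cumulative coefficient of variation names the faulty element.
  scores.min_by { |_, cum| cum }.first
end
```

For instance, with hypothetical labels "E1" and "E2", an element whose ratios cluster tightly around one complex value ("E1") wins over one whose ratios scatter.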

Parametric faults corresponding to 5 times the allowed tolerance for the component are injected in the nominal circuit.

In this example, a specific fault was inserted in one of the elements of the Sallen-Key bandpass filter (Figure 5). For simplicity, we assume here that all circuit variables are measured.

When the circuit parameters are nominal except for the faulty one (the "ideal" faulty-circuit case), the vector of element-wise ratios has all its components equal, at each frequency, only when "probing" the faulty element (i.e., when the T-vector in the division corresponds to it).

The coefficients of variation are ideally zero when probing the faulty element and nonzero in the other cases. This is illustrated below (the complete output is truncated, and some comments were inserted in the listing): the cumulative coefficient of variation of the faulty element is not zero, due to rounding errors, but the element is correctly diagnosed as faulty.

The tool uses the database of T-vectors (fault dictionary), created when simulating the nominal circuit, and the simulation output file of the faulty circuit (which simulates the measurements).

The above report from the tool has the following information:

(i) the simulation frequencies;
(ii) a list with the circuit variables, where numbers are node voltages and VE and EK refer to the currents in the input source (VE) and in the controlled source (EK);
(iii) a list of the element-wise ratios for each circuit parameter, shown here for only two elements at 1000 Hz;
(iv) the first value, NaN (meaning "not a number"), corresponds to node 1, which is forced by the input source; obviously, this node is useless for diagnosis.

4.2. Diagnosis with Circuit Parameters Subject to Tolerance

When the parameters are subject to tolerance, the cumulative coefficient of variation of the faulty element is no longer zero. We can expect, however, that the minimum cumulative coefficient still corresponds to the faulty element.

The output below shows the results for a random circuit where the resistors have 1% tolerance and the capacitors and the amplifier gain have 2% tolerance. The inserted fault was in the same element as before. Notice that the components of the ratio vector for that element are not all equal, as they were in the ideal case, although their values are close to each other. However, the cumulative coefficient of variation is minimum for that element (although quite different from zero), which corresponds to a correct diagnosis.

5. Robustness of the Technique

A large set of experiments was conducted to check the robustness of the diagnosis tool under the effect of tolerances. Larger circuits are being investigated, but here we present only the results obtained with the Sallen-Key bandpass filter. The parameters of the experiment are the following.

(i) The circuit under diagnosis is shown in Figure 5. There are 6 parameters to be diagnosed in this circuit, which means that diagnosis "by chance" has a probability of 1/6 (about 16.7%) of being correct (this value is the baseline for comparing the values in Tables 1 and 2 and in Figures 7 and 8).
(ii) Six test frequencies are used, spanning (on a logarithmic scale) from 500 Hz to 5 kHz.
(iii) Each percentage of diagnosis, shown in Tables 1 and 2 and in Figures 7 and 8, was calculated from a sample of 500 randomly generated circuits (this means that 10000 circuits were simulated and diagnosed to gather the data).
(iv) Tables 1 and 2 are plotted in Figures 7 and 8. Although the information conveyed by them is the same, we feel that both representations are useful.
(v) It takes 0.36 seconds to generate, simulate, and diagnose one circuit sample.

The multiplier coefficient, k, defines the strength of the fault. The randomly injected fault is calculated as p_f = p_n (1 +/- k tol), where p_n is the nominal value, p_f is the faulty value, and tol is the tolerance assigned to the circuit element subject to the fault. The "+" sign in the definition of p_f leads to the column labeled larger; the "-" sign leads to the column labeled smaller.
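A minimal Ruby sketch of this injection rule follows; the function name and arguments are ours, and the formula p_f = p_n (1 +/- k tol) is the one reconstructed from the text above:

```ruby
# Compute a faulty parameter value from its nominal value, its tolerance,
# the fault-strength multiplier k, and the fault direction.
def inject_fault(nominal, tolerance, k, larger)
  sign = larger ? 1 : -1
  nominal * (1 + sign * k * tolerance)
end
```

For example, a 1 kOhm resistor with 1% tolerance and k = 5 becomes 1050 Ohm for a "larger" fault and 950 Ohm for a "smaller" one.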

5.1. Analysis of the Results

There are some conclusions to be drawn from the analysis of Tables 1 and 2 (or from Figures 7 and 8).

First, there is a strong increase in the percentage of correct diagnoses when the strength of the fault increases. This is expected because, with "harder" faults, the solution of the faulty circuit moves away from the region of "acceptable" values explained by the tolerances.

Second, for equal fault strength, a fault corresponding to a value smaller than the nominal is detected correctly more often than one corresponding to a value larger than the nominal. This can be seen in Figures 7 and 8: the darker bars at the back of the plots, corresponding to the smaller fault values, are always taller than those in the front. We could not find a convincing explanation for this observation.

Two sets of tolerances were studied (in one case, 1% for all components; in the other, 1% for the resistors and 2% for the capacitors and the amplifier gain). A comparison between the corresponding plots does not reveal significant differences between them.

Finally, recalling that diagnosing by chance in this example has an expected success rate of about 16.7%, we can conclude that the diagnosis of "light" faults is not robust by comparing the percentages in the first line of the tables, corresponding to the smallest fault multiplier, with that baseline; these faults are almost inside the tolerance region.

6. Conclusion

This paper presents the first diagnosis results with practical meaning obtained with a tool built on top of a novel fault-diagnosis technique developed by the authors over the last few years.

An introduction, a review of the current state of the art on the subject of fault diagnosis, and the mathematical principles supporting the technique were presented. Then, an extensive set of fault-diagnosis results based on a Sallen-Key bandpass section was reported. It was seen that, with common tolerance values specified, the likelihood of a correct diagnosis went from about 30%, in the case where the fault was twice the maximum allowed tolerance (a "light" fault), to about 80%-90% when the fault was 40 times that limit (almost a "hard" fault).

There are other issues of practical importance to be researched in the near future: the reduction of the number of diagnosis variables; the selection of effective test frequencies; the observation of how the percentage of correct diagnoses varies with the type of the faulty element; the collection of statistics on the rank of the coefficient of variation of the faulty element (when it is not the first, is it the second, the third, ...?); and, obviously, the application of the technique to more complex circuits.

Acknowledgment

The work presented in this paper was partially developed under the Projects POCI/FP/63434/2005 granted by the Portuguese “Fundação para a Ciência e Tecnologia”, and under the European MEDEA+ Project 2A702 NanoTEST.