Abstract

In large-scale industrial processes, a fault can easily propagate between process units because of the interconnections of material and information flows. Thus, fault detection and isolation for these processes is concerned more with the root cause and the fault propagation paths before quantitative methods are applied in local models. Process topology and causality, as key features of the process description, need to be captured from process knowledge and process data. This paper surveys modelling methods from these two aspects. From process knowledge, structural equation modelling, various causal graphs, rule-based models, and ontological models are summarized. From process data, cross-correlation analysis, Granger causality and its extensions, frequency-domain methods, information-theoretical methods, and Bayesian nets are introduced. Based on these models, inference methods are discussed to find root causes and fault propagation paths under abnormal situations. Future work is proposed at the end.

1. Introduction

In a large-scale industrial process, process units are connected; thus a fault can easily propagate from one unit to another along material or information flow paths. Therefore, the problem of fault detection and isolation cannot be limited to a local unit but should be considered at a larger scale, which leads to a set of new problems that has attracted many researchers.

First of all, the large-scale problem is featured by causality. Causality is a physical phenomenon based on the cause-effect relationships between different variables [1]. When one focuses on the interconnections of process units, the first step is to recognize the causality between variables; this is what an engineer is interested in, because the root cause and the fault propagation paths should be found in a faulty mode [2, 3] before the accurate dynamics are analyzed based on first-principle or mathematical models.

The main research topics are modelling methods from process knowledge and process data and inference methods based on the models. Initially, the signed directed graph (SDG) was established by representing the process variables as graph nodes and representing causal relationships as directed arcs [4, 5]. An arc from node $A$ to node $B$ implies that a deviation in $A$ may cause a deviation in $B$. A positive or negative influence between nodes is assigned to the arc. This is a qualitative description of process knowledge. When a fault occurs, it propagates along consistent paths, forming a set of nodes whose values are beyond the normal range. This set of variables with signs is called a symptom, and different symptoms reveal different fault types. In real-time supervision, symptoms are obtained from sensor readings. As soon as a symptom is triggered, operators should identify the possible cause(s) and take appropriate remedial actions immediately. The SDG model has obvious disadvantages due to its qualitative nature; thus other established methods, including quantitative models, should be explored, and the latest effective formal techniques should be taken into account. In Section 2, the model description methods are summarized; process knowledge is the main resource for this modelling.

Another resource for modelling is process data, because process knowledge is not always available. Even when it is available, a large amount of insignificant information may easily disturb the modelling procedure and make it too complex. Process data can effectively complement the information requirement and simplify the procedure; moreover, they can screen out nuisance information and improve the accuracy of the models. Here, pairwise causality capture methods are developed to identify cause and effect. Since a real process is usually multivariate, a topology should be constructed based on the pairwise analysis results. Several sets of methods are introduced in Section 3.

Based on the models, diagnosis consists of finding the root cause whose abnormality accounts for all the abnormalities detected in other parts [6]. Thus the purpose of model-based inference is to interpret the detected symptom by finding the root cause and the fault propagation paths. The most common algorithm for searching for root cause(s) is “depth-first traversal on the graph” [4, 7]. However, since there are various models, corresponding inference methods are needed. They are overviewed in Section 4.

2. Model Description Based on Process Knowledge

Based on a priori process knowledge, including first-principle and mathematical models, models can be built to describe process topology. Here, the term model has a broad meaning, not limited to equations.

2.1. Structural Equation Models

Structural equation modeling (SEM) is a statistical technique for testing and estimating causal relations [1, 8]. A structural model shows potential causal dependencies between endogenous/output and exogenous/input variables, and the measurement model shows relations between latent variables and their indicators. For example, if an endogenous variable $y$ is influenced by exogenous variables $x_1$ and $x_2$ (assume that all variables are normalized to zero mean and unit variance), a regression model can be built as
$$y = p_{y1} x_1 + p_{y2} x_2 + p_{y\varepsilon} \varepsilon \tag{1}$$
and thus be depicted as a path diagram in Figure 1, where each parameter $p$ is called a path coefficient and $\varepsilon$ represents the residual, that is, the collective effect of all unmeasured variables that could influence $y$. The directed arrows represent the influence of the exogenous variables and the residual on the output variable, and the bidirectional arrow represents the correlation between the exogenous variables.
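As a rough illustration (not from the original paper), the path coefficients in (1) can be estimated by ordinary least squares on standardized variables; the data and coefficients below are synthetic:

```python
# Minimal sketch of estimating the path coefficients in (1) by least squares;
# all variable names and numeric values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x1 = rng.standard_normal(n)
x2 = 0.6 * x1 + 0.8 * rng.standard_normal(n)   # correlated exogenous variables
y = 0.5 * x1 + 0.3 * x2 + 0.2 * rng.standard_normal(n)

def standardize(v):
    return (v - v.mean()) / v.std()

x1, x2, y = map(standardize, (x1, x2, y))

# Path coefficients p_y1, p_y2 solve the normal equations of y = p_y1*x1 + p_y2*x2 + e.
X = np.column_stack([x1, x2])
p, *_ = np.linalg.lstsq(X, y, rcond=None)
print("path coefficients:", p)                  # close to the true values used above
print("exogenous correlation:", np.corrcoef(x1, x2)[0, 1])
```

Because $x_1$ and $x_2$ are correlated here, the printed exogenous correlation hints at the ambiguity between direct and indirect paths discussed next.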

Since the exogenous variables are not independent, there is some ambiguity about the real or dominant path. Based on statistical analysis, the direct and indirect components of the relations can be evaluated via variance decomposition [2]; this gives some indication of the model structure. Typically, factor analysis, path analysis, and regression, as special cases of SEM, are widely used in exploratory analysis, such as psychometric design. IBM SPSS Amos (Analysis of Moment Structures) provides an easy-to-use program for visual SEM.

The limitations of this modeling approach are as follows: (1) exogenous and endogenous variables should be selected in advance as a hypothesis, and the result depends highly on this partition; (2) the causal relations are static; (3) only linear regression is considered. To overcome the last two limitations, dynamic causal modeling embraces the nonlinear and dynamic nature of processes [9]. Overall, this approach is more suitable for confirmatory modeling than for exploratory modeling to construct a network topology, and it suffers when the number of variables is large.

In recent years, some novel models have been developed, such as undirected or directed graphs or networks [10], data models in databases [11], and production rules in expert systems [12]. What follows is an introduction to some typical cases and applications of these models.

2.2. Causal Graphs

We have seen that a graphical model provides an intuitive way to show causality. There are quite a few causal graphs that are dedicated to this description.

2.2.1. Signed Directed Graphs

Signed directed graphs (SDGs) are established by representing the process variables as graph nodes and representing causal relations as directed arcs. An arc from node $A$ to node $B$ implies that the deviation of $A$ may cause the deviation of $B$. For convenience, “+”, “−”, or “0” is assigned to the nodes in comparison with normal operating value thresholds to denote higher than, lower than, or within the normal region, respectively. Positive or negative influence between nodes is distinguished by the sign “+” (promotion) or “−” (suppression) assigned to the arc [4, 5, 13, 14].

Take a bitank system as an example, as shown in Figure 2. Two tanks are connected by a pipe; both tanks have outlet pipes, and Tank 1 has a feed flow. This system can be described by the following set of differential and algebraic equations:
$$C_1 \frac{\mathrm{d}u_2}{\mathrm{d}t} = f_1 - f_3 - f_5, \qquad C_2 \frac{\mathrm{d}u_7}{\mathrm{d}t} = f_5 - f_8,$$
$$f_3 = \frac{1}{R_{b1}} \sqrt{l_2}, \qquad f_5 = \frac{1}{R_{12}} \left( \sqrt{l_2} - \sqrt{l_7} \right), \qquad f_8 = \frac{1}{R_{b2}} \sqrt{l_7}, \tag{2}$$
where $l_2$ and $l_7$ are the levels in Tanks 1 and 2; $f_1$, $f_3$, $f_5$, and $f_8$ are flowrates; and $R_{12}$, $R_{b1}$, and $R_{b2}$ are the resistances of the pipe between Tanks 1 and 2 and the two outlet pipes, respectively. Since $l_i$ ($i = 2$ or $7$) appears in square-root form, we use $u_i$ to denote it. One can convert these equations into nodes and arcs to form an SDG, as shown in Figure 3, where solid lines denote positive influences and broken lines denote negative influences. Although no control is applied, there are still some recycles arising from the physical principles.
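As a hedged sketch of how such an SDG can be derived mechanically, the arc signs can be read off the signs of the partial derivatives of the right-hand sides of (2); the parameter values, operating point, and node naming below are illustrative assumptions, not taken from the original example:

```python
# Sketch: derive SDG arc signs for the bitank model (2) from the signs of the
# numerical partial derivatives of each state derivative; all constants are
# illustrative placeholders.
import numpy as np

R12, Rb1, Rb2, C1, C2 = 1.0, 1.0, 1.0, 1.0, 1.0

def rhs(l2, l7, f1):
    f3 = np.sqrt(l2) / Rb1
    f5 = (np.sqrt(l2) - np.sqrt(l7)) / R12
    f8 = np.sqrt(l7) / Rb2
    return np.array([(f1 - f3 - f5) / C1, (f5 - f8) / C2])  # d(u2)/dt, d(u7)/dt

x0 = np.array([1.0, 0.5, 1.0])   # nominal l2, l7, f1
eps = 1e-6
for j, name in enumerate(["l2", "l7", "f1"]):
    dx = np.zeros(3); dx[j] = eps
    grad = (rhs(*(x0 + dx)) - rhs(*(x0 - dx))) / (2 * eps)  # central difference
    for i, node in enumerate(["u2", "u7"]):
        if abs(grad[i]) > 1e-9:
            sign = "+" if grad[i] > 0 else "-"
            print(f"{name} --({sign})--> {node}")
```

The printed sign pattern (e.g., a positive arc from $f_1$ to $u_2$ and a negative self-influence of each level) is consistent with the qualitative arcs one would draw in Figure 3.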

An SDG can be built manually from first principles and mathematical models and more practically from process knowledge including flowsheets [15, 16].

2.2.2. Other Causal Graphs

Graphical models are commonly used to describe large systems, and yet they have different forms with different meanings. Bond graphs [17] and their extension, temporal causal graphs [18], use different symbols to further describe dynamic characteristics. More precisely, qualitative transfer functions [19], differential equations [20], and trend analysis [21, 22] have been integrated into causal graphs, and complex algorithms are introduced to improve their correctness [23]. Similar or improved approaches were investigated by many researchers [24–27].

The bond graph of the bitank system is shown in Figure 4, and the temporal causal graph is shown in Figure 5. In the bond graph, there are two types of junctions: common-effort (0-) junctions and common-flow (1-) junctions. It is obvious that the bond graph describes the exchange of physical energy by bonds. A bond graph can be used to derive the steady-state model automatically; this property is similar to that of signal flow graphs, which, as another graphical model, can be used for the derivation of transfer functions. The temporal causal graph converts the junctions and bonds of the bond graph into nodes and arcs and imposes labels on the arcs to describe detailed temporal effects such as integration and rate of change.

Compared to the SDG in Figure 3, the temporal graph provides more detailed information and forms a quantitative model, while the SDG is only concerned with the qualitative trends. Since the exact model is often difficult to obtain for industrial processes, the SDG model is more widely used for its simplicity because it can be validated by process data [28].

2.3. Rule-Based Models

Kramer and Palowitch [29] used rules to describe SDG arcs, and thus expert systems can be employed as a tool for this problem. Each arc can be described by a rule using the logical functions $p$, $m$, and $z$:
$$(p\,A\,B) \Longleftrightarrow A \longrightarrow B \ \text{(positive relation)},$$
$$(m\,A\,B) \Longleftrightarrow A \Longrightarrow B \ \text{(negative relation)},$$
$$(z\,A\,B) \Longleftrightarrow A \nrightarrow B \ \text{(zero relation)}. \tag{3}$$
Therefore, an SDG can be converted into a set of rules. These rules can be expressed in IF-THEN form for reasoning by rule reduction. Since only qualitative information is included, there may be many illusive results. To overcome this disadvantage, some quantitative information, such as steady-state gains, is taken into account to find dominant propagation paths [30].
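A minimal sketch of this rule-based view (with an illustrative three-arc rule base, not the full bitank model) converts arcs into IF-THEN rules and propagates a qualitative deviation by forward chaining:

```python
# Sketch: SDG arcs encoded as (source, sign, destination) rules; the rule base
# and node names are illustrative assumptions.
rules = [
    ("f1", "+", "u2"),   # (p f1 u2): f1 high promotes u2 high
    ("u2", "+", "u7"),
    ("u7", "-", "f5"),   # (m u7 f5): u7 high suppresses f5
]

def fire(symptom):
    """Forward-chain a {node: '+'/'-'} symptom through the rule base."""
    state = dict(symptom)
    changed = True
    while changed:
        changed = False
        for src, sign, dst in rules:
            if src in state and dst not in state:
                state[dst] = state[src] if sign == "+" else ("-" if state[src] == "+" else "+")
                changed = True
    return state

print(fire({"f1": "+"}))   # predicted qualitative deviations of downstream nodes
```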

2.4. Ontological Models

In order to standardize the conversion procedure from process knowledge to ontology, the semantic web has been developed, the architecture of which includes a series of languages produced by the World Wide Web Consortium (W3C, http://www.w3.org/), for example, XML, RDF, RDFS, and OWL. Extensible Markup Language (XML) is the basic and widely accepted open standard for the representation of arbitrary data structures in a text document, especially for web services. XML gives users sufficient freedom to define and apply it in their respective areas, but for the purpose of semantic description, a more uniform way to define the process units (considered as resources) is needed. The Resource Description Framework (RDF) provides a general method for conceptual description or modeling of information that is implemented in web resources, using a variety of syntax formats (http://www.w3.org/RDF/). To structure these RDF resources, RDFS (RDF Schema) is used. It provides an XML vocabulary to express classes with relationships (taxonomies) and to define properties associated with classes, which facilitates inference on the data [31]. RDFS is an ontological primitive, upon which the Web Ontology Language (OWL), released in 2004, adds extensive features and becomes a more expressive language. An ontology is stored in and referred to by a unique namespace so that it can be retrieved easily. As an improvement over XML, RDF/OWL describes semantics that are interchangeable between different programs and convenient for inference. It is the trend for the representation of process knowledge in the future. Matrikon's new software platform, Intuition, is built on RDF/OWL standards and incorporates the semantics to enable all people, processes, and applications to work in concert. Several software tools are available for editing RDF/OWL files, such as TopBraid Composer and Protégé-OWL.

In RDF/OWL standards, a data model is described by a collection of triples of subject, predicate, and object expressed in XML syntax, where the subject denotes the resource, the predicate denotes a property of this resource (there can be multiple properties), and the object denotes the value of this property (it should be unique and can be a literal or another resource). In this way, not only is the inclusion relationship between resources defined by the taxonomy of classes and subclasses, but the directed logical relationships or linkages between instances are also described by properties.

Apart from datatype and annotation properties, we define the following object properties to describe the physical and information linkages:
(i) UncontrolledElement.measuringElement: linkage from an uncontrolled element to a measuring element, for example, the level of a tank measured by a sensor.
(ii) UncontrolledElementOutlet.uncontrolledElementInlet: linkage from an uncontrolled element to another uncontrolled element, for example, a tank connected to a pipe as an outlet.
(iii) UncontrolledElementOutlet.controllingElementInlet: linkage from an uncontrolled element to a controlling element, for example, a pipe connected to a control valve.
(iv) ControllingElementOutlet.uncontrolledElementInlet: linkage from a controlling element to an uncontrolled element, for example, a valve connected to a pipe.
(v) Computer.computer: linkage from a computer to another computer, for example, a controller connected to a signal line (information connecting element).

The domain and range of the properties should be defined as appropriate resources.
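As an illustration, a few such classes, object properties, and instances could be declared with the Python rdflib package as follows; the namespace URI, the simplified property name, and the instance names are hypothetical:

```python
# Sketch: declaring one object property with domain and range, plus linked
# instances, using rdflib; all URIs and names are illustrative assumptions.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

EX = Namespace("http://example.org/plant#")
g = Graph()
g.bind("ex", EX)

# Classes for uncontrolled and measuring elements.
for cls in (EX.UncontrolledElement, EX.MeasuringElement):
    g.add((cls, RDF.type, OWL.Class))

# Object property with domain and range defined as appropriate resources.
g.add((EX.measuringElement, RDF.type, OWL.ObjectProperty))
g.add((EX.measuringElement, RDFS.domain, EX.UncontrolledElement))
g.add((EX.measuringElement, RDFS.range, EX.MeasuringElement))

# Instances: the level of Tank1 measured by a sensor.
g.add((EX.Tank1, RDF.type, EX.UncontrolledElement))
g.add((EX.LevelSensor1, RDF.type, EX.MeasuringElement))
g.add((EX.Tank1, EX.measuringElement, EX.LevelSensor1))

print(g.serialize(format="turtle"))
```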

3. Topology Capturing from Process Data

The cause-effect relationship can be explained from several different viewpoints. First, the propagation needs time, so the cause precedes the effect; this property can be tested by cross-correlation with an assumed lag or fitting the input-output data into dynamic models. Second, cause-effect relationship means information transfer; thus the measure of transfer entropy in information theory can also be employed. Third, causal relationship shows probabilistic properties; thus Bayesian nets are introduced to describe these relationships.

3.1. Cross-Correlation Analysis

Assume that π‘₯ and 𝑦 are normalized time series of 𝑛 observations, then the cross-correlation function (CCF) with an assumed lag π‘˜ is [32]πœ™π‘₯𝑦π‘₯(π‘˜)=𝐸𝑖𝑦𝑖+π‘˜ξ€»,π‘˜=βˆ’π‘›+1,…,π‘›βˆ’1.(4) A value of the CCF is obtained by assuming a certain time delay for one of the time series. Thus the absolute maximum value can be regarded as the real cross-correlation and the corresponding lag as the estimated time delay between these two variables. For mathematical description, one can compute the maximum and minimum values πœ™max=maxπ‘˜{πœ™π‘₯𝑦(π‘˜),0}β‰₯0 and πœ™min=minπ‘˜{πœ™π‘₯𝑦(π‘˜),0}≀0, and the corresponding arguments π‘˜max and π‘˜min. Then the time delay from π‘₯ to 𝑦 isξ‚»π‘˜πœ†=max,πœ™maxβ‰₯βˆ’πœ™minπ‘˜min,πœ™max<βˆ’πœ™min(5) (corresponding to the maximum absolute value) and the actual time delayed cross-correlation is 𝜌=πœ™π‘₯𝑦(πœ†) (between βˆ’1 and 1). If πœ† is less than zero, then it means that the actual delay is from 𝑦 to π‘₯. Thus the sign of πœ† provides the directionality information between π‘₯ and 𝑦. The sign of 𝜌 corresponds to the sign of the arc in the signed directed graph meaning whether the correlation is positive or negative; this sign provides more information than the causality.

Although this method is practical and computationally simple, it has many shortcomings, some of which are explained below.
(i) A nonlinear causal relationship does not necessarily show up in correlation analysis. For example, if $y$ equals the square of $x$ with a time delay of one sampling interval, then this obvious causality cannot be found by the time-delayed cross-correlation, because all the values are small relative to a threshold. This can be explained by the fact that the true correlation is zero.
(ii) Correlation simply gives an estimate of the time delay, and the sign of the delay is an estimate of the directionality of the signal flow path. The time delay obtained, however, is only an estimate. In addition, the trend in a time series is ignored, and values at different time instances are regarded as samples of the same random event. Thus the causality obtained by this measure is purely the time delay based on an estimate of the covariance.

3.2. Granger Causality and Its Extensions

Regression is a natural way to test the relationship between variables. When dynamics are taken into account, the lags in the models reflect the causality. A regression of a variable on lagged values of itself is compared with the regression augmented with lagged values of the other variable. If the augmentation helps produce a better regression, then one can conclude that this variable is Granger-caused by the other variable. Statistical tests such as the $t$-test and the $F$-test are used.

For time series $y$ and $x$, to test whether there is Granger causality from $x$ to $y$, a univariate autoregression of $y$ is obtained first:
$$y_t = a_0 + a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_m y_{t-m} + \mathrm{residual}_t. \tag{6}$$
Next, lagged values of $x$ are included to obtain another regression:
$$y_t = a_0 + a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_m y_{t-m} + b_p x_{t-p} + \cdots + b_q x_{t-q} + \mathrm{residual}_t. \tag{7}$$
If the latter regression is significantly better than the former, then Granger causality is detected.
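A hedged sketch of this test using NumPy and SciPy: fit the restricted model (6) and the augmented model (7) by least squares and compare them with an F-test. The data, the lag order, and taking the $x$-lags as $1, \ldots, m$ (rather than $p, \ldots, q$) are illustrative simplifications:

```python
# Sketch of a Granger causality F-test; synthetic data in which x drives y.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, m = 500, 2                          # sample size and lag order
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.standard_normal()

def lagmat(v, lags, t0, t1):
    """Columns v_{t-1}, ..., v_{t-lags} for t = t0..t1-1."""
    return np.column_stack([v[t0 - k : t1 - k] for k in range(1, lags + 1)])

t0, t1 = m, n
Y = y[t0:t1]
X_r = np.column_stack([np.ones(t1 - t0), lagmat(y, m, t0, t1)])   # model (6)
X_u = np.column_stack([X_r, lagmat(x, m, t0, t1)])                # model (7)

rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
rss_r, rss_u = rss(X_r), rss(X_u)
df1, df2 = m, (t1 - t0) - X_u.shape[1]
F = ((rss_r - rss_u) / df1) / (rss_u / df2)
p_value = 1 - stats.f.cdf(F, df1, df2)
print(f"F = {F:.1f}, p = {p_value:.3g}")   # small p: x Granger-causes y
```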

A multivariate version of this method is available based on the vector autoregression model, in which conditioning is performed to exclude the influence of intermediate variables.

Because this method needs a regression model, the following disadvantages are obvious. First, a linear relation between $x$ and $y$ is assumed, which is a strong restriction. Second, the model accuracy affects the result, especially through the predefined model order. There are some extensions of the basic Granger causality concept, such as variants of the Wiener-Granger causality [33], to describe more general forms.

3.3. Frequency Domain Methods

A process can also be described in the frequency domain, where the energy transfer at every frequency can be shown. Based on this idea, several methods have been developed, such as the directed transfer function (DTF) [34] and the partial directed coherence (PDC) [35]. The DTF and PDC are normalized measures of the total and direct influence, respectively, between two variables in a multivariate process. Conditioning is conducted to exclude the influence of confounding variables [36]; this is very important in the multivariate framework [37].
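As a rough sketch of the PDC (following its usual definition from a fitted VAR model, not any specific implementation in the cited works), one can fit the VAR by least squares and normalize the columns of the frequency-domain coefficient matrix; the data and model order below are illustrative:

```python
# Sketch: partial directed coherence from a least-squares VAR(p) fit.
import numpy as np

def fit_var(X, p):
    """Least-squares VAR(p) fit; X is (n_samples, n_vars). Returns A_1..A_p."""
    n, d = X.shape
    Z = np.hstack([X[p - k : n - k] for k in range(1, p + 1)])  # lagged regressors
    B, *_ = np.linalg.lstsq(Z, X[p:], rcond=None)               # (p*d, d)
    return [B[k * d : (k + 1) * d].T for k in range(p)]         # each (d, d)

def pdc(A, f):
    """PDC matrix at normalized frequency f; entry [i, j] measures j -> i."""
    d = A[0].shape[0]
    Abar = np.eye(d, dtype=complex)
    for k, Ak in enumerate(A, start=1):
        Abar -= Ak * np.exp(-2j * np.pi * f * k)
    denom = np.sqrt((np.abs(Abar) ** 2).sum(axis=0))            # column norms
    return np.abs(Abar) / denom

# Illustrative two-variable process where x1 drives x2.
rng = np.random.default_rng(3)
n = 2000
X = np.zeros((n, 2))
for t in range(1, n):
    X[t, 0] = 0.6 * X[t - 1, 0] + 0.1 * rng.standard_normal()
    X[t, 1] = 0.5 * X[t - 1, 1] + 0.4 * X[t - 1, 0] + 0.1 * rng.standard_normal()

A = fit_var(X, p=1)
print(pdc(A, f=0.1))   # entry [1, 0] (x1 -> x2) large; entry [0, 1] near zero
```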

Gigi and Tangirala [36] quantitatively analyzed the interaction strength and proved that the total effect, in fact, consists of three components, namely, a direct term, an indirect term, and an interference term. The total effect can be quantified by the DTF, whilst the direct effect is hard to quantify. Nevertheless, the analysis can be performed with the visualization of a curve matrix.

The frequency domain methods have advantages similar to those of the corresponding time domain methods (the Granger causality methods). However, they provide a better view of the energy transfer at different frequencies.

3.4. Information-Theoretical Methods: Transfer Entropy

According to information theory, the transfer entropy from $x$ to $y$ is defined as [38]
$$t(y \mid x) = \sum_{y_{i+h}, \mathbf{y}_i, \mathbf{x}_j} p\left(y_{i+h}, \mathbf{y}_i, \mathbf{x}_j\right) \cdot \log \frac{p\left(y_{i+h} \mid \mathbf{y}_i, \mathbf{x}_j\right)}{p\left(y_{i+h} \mid \mathbf{y}_i\right)}, \tag{8}$$
where $p$ denotes the complete or conditional probability density function (PDF), $\mathbf{x}_j = [x_j, x_{j-\tau}, \ldots, x_{j-(k-1)\tau}]$, $\mathbf{y}_i = [y_i, y_{i-\tau}, \ldots, y_{i-(l-1)\tau}]$, $\tau$ is the sampling period, and $h$ is the prediction horizon. The transfer entropy is a measure of information transfer from $x$ to $y$, obtained by measuring the reduction of uncertainty while assuming predictability. It is defined as the difference between the information about a future observation of $y$ obtained from the simultaneous observation of past values of both $x$ and $y$, and the information about the future of $y$ obtained from the past values of $y$ alone. It gives a good sense of the causality information without requiring the delay information. Several parameters, especially $\tau$ and $h$, should be tried. If the transfer entropies in the two directions are considered, then $t(x \to y) = t(y \mid x) - t(x \mid y)$ is used as a measure to decide the quantity and direction of information transfer, that is, causality. In (8), the PDF can be estimated by the kernel method [28, 39] to fit any shape of distribution.
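A hedged sketch of (8) for $k = l = h = 1$ using a simple histogram estimator of the PDFs; in practice the kernel estimators mentioned above would be preferred, and the binning below is an arbitrary choice:

```python
# Sketch: histogram-based transfer entropy estimate for k = l = h = 1.
import numpy as np

def transfer_entropy(x, y, bins=8, h=1):
    """Estimate t(y|x): information about y_{i+h} in x_i beyond y_i."""
    yf, yp, xp = y[h:], y[:-h], x[:-h]
    joint, _ = np.histogramdd((yf, yp, xp), bins=bins)  # (y_future, y_past, x_past)
    p_xyz = joint / joint.sum()
    p_yz = p_xyz.sum(axis=2, keepdims=True)      # p(y_future, y_past)
    p_z_x = p_xyz.sum(axis=0, keepdims=True)     # p(y_past, x_past)
    p_z = p_xyz.sum(axis=(0, 2), keepdims=True)  # p(y_past)
    # t = sum p(yf, yp, xp) * log[ p(yf | yp, xp) / p(yf | yp) ]
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = (p_xyz * p_z) / (p_yz * p_z_x)
        terms = np.where(p_xyz > 0, p_xyz * np.log(ratio), 0.0)
    return terms.sum()

rng = np.random.default_rng(4)
x = rng.standard_normal(5000)
y = 0.8 * np.roll(x, 1) + 0.2 * rng.standard_normal(5000)   # x causes y
print("t(y|x) - t(x|y):", transfer_entropy(x, y) - transfer_entropy(y, x))  # > 0
```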

Transfer entropy is a model-free method. However, it has the following main shortcomings. First, it depends heavily on the estimation of PDFs (although these can have any non-Gaussian form); thus the computational burden is very high. Second, the time delay cannot be estimated, and the arc signs in SDGs cannot be obtained. Third, the assumption that the time series are stationary may not hold, and the noise (which may be nonstationary) is often greater than expected; these problems affect the computational results.

3.5. Bayesian Nets

Random phenomena are everywhere in the real world, including industrial processes. Due to the existence of random noise, there are stochastic factors to be described. The Bayesian net [40] provides a graph with probabilities, where nodes denote fault modes as well as process variables, and arcs denote conditional probabilities. Although the structure remains the same as an ordinary causal graph, both nodes and arcs carry probabilities. The causality from $x$ to $y$ is described by a conditional probability $p(y \mid x)$ [41].

This is also a general model, although its meaning is different from that of the previous ones. It should be noted that, in industrial processes, dynamics, or time factors, should be included; this is a key feature for capturing causality. The traditional Bayesian net has the fundamental limitation that it must be a directed acyclic graph. In a logical system with no time factor, this assumption makes sense, but in a dynamic process, cycles are very common. A cyclic causal discovery algorithm has been developed [42] to allow the existence of cycles.

The major limitations of the application of Bayesian nets are as follows: the physical explanation of the probabilities is not straightforward, which is sometimes unacceptable to engineers, and the data requirement is hard to meet because one needs data in all modes to build the model.

3.6. Other Methods and Comments

In addition to the above methods, there are further alternatives for capturing causality between time series. For example, predictability improvement [43, 44] is another general method, but without the shortcoming of requiring a large data set; it computes the reduction of uncertainty of one variable with the help of the other variable. Smith et al. and Lungarella et al. have summarized and compared many methods to capture causality for bivariate series [45] and in a network [46], respectively. Each of these methods has its own advantages and limitations; they complement each other, and no single method is powerful enough to replace the others. Hence different methods should be tried to obtain reasonable results. In real applications, one may mainly use one method but sometimes employ others to gain additional insight or for validation.

Most of the above data-based methods (except the model-based methods in Sections 3.2 and 3.3) cannot capture the true causality because they are pairwise methods. If both $x$ and $y$ are driven by a common third variable, sometimes with different lags, one might still find apparent causality between them. In fact, there is no causality between these two variables, and neither of them can influence the other if the third variable does not change, as illustrated by the sketch below. Thus one needs to test all pairs of variables to obtain their causality measures and then construct the structure, which should be a mixture of the typical serial and parallel structures. Indeed, topology determination needs additional information beyond pairwise tests.
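The confounding effect can be illustrated with a small synthetic example: $x$ and $y$ are both driven by a third variable $z$ with different lags, so a pairwise cross-correlation test reports an apparent delay from $x$ to $y$ even though neither variable influences the other:

```python
# Sketch of the common-cause pitfall; all signals are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n = 2000
z = rng.standard_normal(n)
x = np.roll(z, 1) + 0.1 * rng.standard_normal(n)   # z -> x with lag 1
y = np.roll(z, 3) + 0.1 * rng.standard_normal(n)   # z -> y with lag 3

lags = np.arange(-n + 1, n)
ccf = np.correlate(y - y.mean(), x - x.mean(), "full") / (n * x.std() * y.std())
print("apparent delay x->y:", lags[np.abs(ccf).argmax()])  # about 2, purely spurious
```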

4. Model-Based Inference

Based on the models, inference should be made to find the fault propagation paths and thus the root cause. The following are some typical approaches.

4.1. Graph Traversal

The most common algorithm for searching for the fault origin is depth-first traversal on the graph [4, 5], which is an efficient kind of fault inference for both single and multiple fault origin cases [47]. Its theoretical basis is nodal balance [48]. A depth-first traversal algorithm constructs a path by moving each time to an adjacent node until no unvisited arcs can be found; its implementation is a recursive procedure.

For the purpose of fault propagation analysis, forward traversal is applied from the assumed origin to predict all the variables based on consistency; this is deductive reasoning [49, 50]. For the fault detection purpose, backward traversal is applied within the cause-effect graph to find the maximal strongly connected component [4]; this is abductive reasoning. The whole procedure includes two steps.

Step 1. Trace the possible fault origins back along the arcs.

Step 2. Make forward inference from these nodes to screen the candidates to choose which one is the real or most probable fault origin.

Loops, which are very common in control systems because of control loops, should be treated specially [51].
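A minimal sketch of this two-step procedure on a toy SDG stored as an adjacency list (the graph, symptom, and containment-based consistency test are illustrative simplifications, and loop handling is reduced to a visited set):

```python
# Sketch: backward traversal collects candidate origins (Step 1); forward
# traversal keeps only candidates whose reach covers the symptom (Step 2).
arcs = {"f1": ["u2"], "u2": ["u7", "f3"], "u7": ["f8"]}   # forward arcs
reverse = {}
for src, dsts in arcs.items():
    for dst in dsts:
        reverse.setdefault(dst, []).append(src)

def dfs(graph, start, visited=None):
    """Depth-first traversal; the visited set prevents revisiting loop nodes."""
    if visited is None:
        visited = set()
    visited.add(start)
    for nxt in graph.get(start, []):
        if nxt not in visited:
            dfs(graph, nxt, visited)
    return visited

abnormal = {"u2", "u7", "f8"}                              # detected symptom
candidates = dfs(reverse, "f8")                            # Step 1: trace back
origins = [c for c in candidates if abnormal <= dfs(arcs, c)]  # Step 2: screen
print("consistent fault origins:", origins)
```

A full implementation would also check the consistency of arc signs along each path rather than mere reachability.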

The time complexity of a traversal search is $O(n^2)$, where $n$ denotes the number of nodes in the graph. When the system scale increases, the time for a traversal becomes too long to meet the demands of fault detection. Thus the model structure should be transformed from a single-level one into a hierarchical one [52–54]. In this way, the search is first performed at the higher level to restrict the fault origin to a subsystem; then the search is performed in the subgraph of this subsystem.

For the hierarchical model, hierarchical inference from top to bottom is obtained naturally. The graph traversal is performed first at the higher level to find the possible super-node that includes the fault origin; next, the graph traversal is performed at the lower level to restrict the possible location of the root cause. Assume the subsystem contains $m$ control systems and each control system contains $k$ variables; then the time complexity of a traversal in a single-level model is $O(m^2 k^2)$, whereas the time complexity in a 2-level model is $O(m^2 + k^2) \ll O(m^2 k^2)$. Thus fault analysis in a hierarchical model is much more efficient.

Take a boiler system in a power plant [55] as an example. There are about 40 key variables that are measured or manipulated, and several control loops maintain steady operation. Of course, this process can be simulated by a large set of equations given sufficient process knowledge. For fault analysis in an abnormal situation, however, we are more concerned with localizing the root cause than with estimating accurate values. For instance, if the coal quality changes suddenly, many variables in different subsystems and control systems appear abnormal. With a single-level model, it is not easy to focus on the faulty part. However, with a 2-level model, in which the high level describes the relationships between process units and the low level describes the detailed relationships between process variables, the traversal at the high level can help us find that the root cause is located within the superheated steam pressure control system. Then, by digging into this system, we can more easily find that the real problem is the change in coal quality. In this case, the search efficiency is greatly improved.

Here the number of fault origins is assumed to be one, that is, only one reason leads to the fault [4]. This is reasonable because multiple faults seldom appear at the same time [56]. For multiple fault origin cases, a minimal cut sets diagnosis algorithm was presented [57], in which all possible combinations of the bottom events are explored, and those that make the top events appear are the cut sets. This algorithm has the distinct disadvantage of low efficiency because of exponential explosion.

In order to utilize the system information more fully, Han et al. [58] used fuzzy sets to improve the existing models and methods, but their method is not convenient for online inference and is not applicable to dynamic systems. Some scholars have introduced temporal evolution information, such as transfer delays [59, 60], and other kinds of information into SDGs for dynamic description.

4.2. Inference Based on Expert Systems

Rule-based inference [29] is applicable when an expert system is available. This method can improve the inference accuracy with appropriate rule description and operation. Rough set theory provides a way of handling vague information and can be used for data reduction; thus it can be introduced into the fault isolation problem (a kind of decision problem) to optimize the decision rules. A decision algorithm is proposed by Yang and Xiao [61], in which the generation and reduction methods for the rules are related to the structure of the SDG model.

The main steps are as follows.
(1) List all the possible rules as Table A (as shown in Table 1), with each row denoting a rule $\varphi \to \psi$, where $\varphi$ denotes the assumed values of the condition attributes and $\psi$ denotes the decision to be obtained. For convenience, we can give each attribute value a notation.
(2) Try to delete each condition attribute in turn, test the consistency of the formula, and obtain the reducts and the core. Delete all the elements except the cores and obtain Table B. There are several methods to test the consistency (see the sketch after this list), for example:
(a) each condition class $E \in \mathbf{X} \mid \mathrm{IND}(\mathbf{C})$ has the same decision value;
(b) for each object $x$, the condition class covering $x$ is contained in the decision class covering $x$;
(c) for every two decision rules $\varphi \to \psi$ and $\varphi' \to \psi'$, we have $\varphi = \varphi' \Rightarrow \psi = \psi'$.
(3) Calculate the reducts of each rule by use of Table B and obtain Table C.
(4) Delete redundant rules and thus obtain Table D.
(5) Deduce the rules and the decision algorithm according to Table D.
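A minimal sketch of the consistency test in step (2), using method (a) on an illustrative decision table: a table is consistent if every condition class has a single decision value, and an attribute whose deletion preserves consistency is dispensable:

```python
# Sketch: rough-set style consistency check; the decision table is invented
# for illustration and is not from the cited work.
rules = [
    # (condition attribute values, decision)
    (("+", "+", "0"), "fault1"),
    (("+", "-", "0"), "fault2"),
    (("+", "+", "0"), "fault1"),   # duplicate row, still consistent
]

def consistent(table, drop=None):
    """Check consistency after optionally dropping one condition attribute."""
    classes = {}
    for cond, dec in table:
        key = tuple(v for i, v in enumerate(cond) if i != drop)
        if classes.setdefault(key, dec) != dec:   # same condition class, new decision
            return False
    return True

# Try deleting each condition attribute in turn to find dispensable ones.
for i in range(3):
    print(f"attribute {i} dispensable:", consistent(rules, drop=i))
```

Here attribute 1 is indispensable (dropping it merges rows with different decisions), while attributes 0 and 2 are dispensable; this is exactly the reduct computation described above.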

The authors combine algebraic and logical expression methods to achieve this purpose. Moreover, owing to the convenience of expressing granularity, the decision algorithm remains applicable when the types of the faults of concern are changed or reformed.

4.3. Inference Based on Bayesian Nets

In Bayesian nets, the probabilities and conditional probabilities of fault events are used to describe causes and effects among variables. Hence the inference is with respect to the fault probability.

We can use Bayesian inference on the graph to calculate the probabilities; this is a direct method. Suppose that the node set of the probabilistic SDG is $V = E \cup F \cup H$, in which $E$ is the subset of evidence nodes whose values or probabilities are known, $F$ is the subset of query nodes whose probabilities are to be computed, and $H$ is the subset of hidden nodes that are not of concern in the inference. The inference process is to compute the conditional probability of $x_F$ given the known $x_E$:
$$p\left(x_F \mid x_E\right) = \frac{p\left(x_E, x_F\right)}{p\left(x_E\right)}, \tag{9}$$
where
$$p\left(x_E, x_F\right) = \sum_{x_H} p\left(x_E, x_F, x_H\right), \qquad p\left(x_E\right) = \sum_{x_F} p\left(x_E, x_F\right). \tag{10}$$

To solve this problem, the Bayes formula and its chain rule should be used fully, and the junction tree algorithm can be used for multiple-fault-origin cases. This method can be used where there are distinct random phenomena, yet the cycles in SDGs must be handled. The algorithm is a combination of depth-first search and the junction tree algorithm.
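A minimal sketch of (9)-(10) by brute-force enumeration on a tiny hypothetical net (fault F, process variable X, alarm E, all binary; the probabilities are invented for illustration):

```python
# Sketch: inference p(F | E) by summing out the hidden node X, as in (9)-(10).
from itertools import product

p_F = {1: 0.1, 0: 0.9}                                   # prior of the fault
p_X_given_F = {(1, 1): 0.9, (0, 1): 0.1, (1, 0): 0.2, (0, 0): 0.8}
p_E_given_X = {(1, 1): 0.95, (0, 1): 0.05, (1, 0): 0.1, (0, 0): 0.9}

def joint(f, x, e):
    """Chain rule for the net F -> X -> E."""
    return p_F[f] * p_X_given_F[(x, f)] * p_E_given_X[(e, x)]

evidence_e = 1                                           # alarm observed
# p(F=1 | E=1) = sum_x p(E=1, X=x, F=1) / sum_{f,x} p(E=1, X=x, F=f)
num = sum(joint(1, x, evidence_e) for x in (0, 1))       # X is the hidden node
den = sum(joint(f, x, evidence_e) for f, x in product((0, 1), repeat=2))
print("p(fault | alarm) =", num / den)
```

For realistic nets, this enumeration is replaced by the junction tree algorithm, since the sum over hidden nodes grows exponentially.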

4.4. Query on Ontological Models

Similar to the query language SQL used in relational databases, query languages such as SPARQL, RDQL, and Versa are used with ontology-based RDF/OWL files to capture useful information and conduct inference. Among them, SPARQL (SPARQL Protocol and RDF Query Language) is the predominant one and was recommended by the W3C in 2008 (http://www.w3.org/TR/rdf-sparql-query/).

SPARQL uses query triples as expressions, together with logic operations such as conjunctions and disjunctions, and it can perform inference based on semantics.

The functions of a SPARQL query can be summarized as follows.
(i) To perform queries based on specific property constraints. For example, we can search for all the outlet pipes of a tank by defining the subject and the predicate and constraining the class of the resulting objects (see the sketch after this list).
(ii) To test connectivity based on object properties. If we define a general object property and place all the other object properties denoting physical and information linkages under it, then the connectivity with a specified number of steps can be obtained. In matrix form, the reachability matrix is defined as $\mathbf{R} = (\mathbf{X} + \mathbf{X}^2 + \cdots + \mathbf{X}^N)^{\#}$, where $\mathbf{X}$ is the adjacency (connectivity) matrix [62], $N$ is the number of elements, and $\#$ is the Boolean operator [10]. But if we want to know the $k$-step propagation results from one element, then we should truncate the summation to the first $k$ terms, from which each element can be obtained by a query. This truncated reachability reflects the precedence and strength of the propagation.
(iii) By defining the object property as transitive, reachability can be obtained directly to show the domain of influence triggered by a change in one object.
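As an illustration of function (i), a SPARQL query over a small rdflib graph can retrieve all outlet pipes of a tank; the namespace and the simplified property name uncontrolledElementOutlet are hypothetical:

```python
# Sketch: property-constrained SPARQL query on an illustrative rdflib graph.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/plant#")
g = Graph()
g.bind("ex", EX)
g.add((EX.Pipe1, RDF.type, EX.Pipe))
g.add((EX.Pipe2, RDF.type, EX.Pipe))
g.add((EX.Tank1, EX.uncontrolledElementOutlet, EX.Pipe1))
g.add((EX.Tank1, EX.uncontrolledElementOutlet, EX.Pipe2))

q = """
PREFIX ex: <http://example.org/plant#>
SELECT ?pipe WHERE {
    ex:Tank1 ex:uncontrolledElementOutlet ?pipe .
    ?pipe a ex:Pipe .
}
"""
for row in g.query(q):
    print(row.pipe)   # both outlet pipes of Tank1
```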

5. Conclusion and Future Directions

In this paper, various methods for root cause and fault propagation analysis have been introduced briefly, and their features and limitations have been analyzed. For fault detection and isolation in a large-scale industrial process, the first step is to limit the scale of the problem by capturing the backbone and finding the real problem before diagnosing it precisely.

We notice that no single method can perfectly achieve our purpose. Therefore, a fusion of different methods is necessary. In real applications, one method, a simple one in most cases, can be used first, and then another method can be used for validation or comparison. To facilitate this procedure, a tool is under development that integrates various methods; it will also give the user suggestions on choosing appropriate methods.

There are also some theoretical problems that need attention. Instantaneous causality and bidirectional causality are possible in real cases, and particular methods are needed to deal with them. Most of the methods need some user-defined parameters, whose choices should be studied to balance accuracy against computational complexity. Topology construction is still an open question; we should go beyond pairwise analysis to study multivariate analysis methods. A single-layer model is ineffective for large-scale systems; thus hierarchical models should be developed, and the various established models should be extended. Under different abnormal situations, the model structure may change, and thus anomaly detection and model switching mechanisms should be studied [63]. For simulation studies, the Tennessee Eastman process can be used as a benchmark [64].

Acknowledgments

This work was funded by the National Natural Science Foundation of China (Grant nos. 60736026 and 60904044) and the Tsinghua National Laboratory for Information Science and Technology (TNList) Cross-Discipline Foundation. The authors also acknowledge the guidance of Professors Sirish L. Shah and Tongwen Chen at the University of Alberta.