Mathematical Problems in Engineering

Volume 2015, Article ID 348729, 8 pages

http://dx.doi.org/10.1155/2015/348729

## Fault Prediction Algorithm for Multiple Mode Process Based on Reconstruction Technique

School of Automation, Beijing Information Science and Technology University, Beijing 100192, China

Received 29 October 2014; Accepted 21 December 2014

Academic Editor: Gang Li

Copyright © 2015 Jie Ma and Jianan Xu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In the framework of the fault reconstruction technique, this paper systematically studies fault detection, fault estimation, and fault prediction for multiple mode processes based on a multi-PCA model. First, the multi-PCA model is used for fault detection in steady state processes under different conditions, while a weighted algorithm is applied to the transition process. Then the faults are described quantitatively, and an optimization method is used to derive the fault amplitude in the sense of fault reconstruction. The estimated amplitude of the same fault drifts across conditions, so a consistent estimation algorithm of the fault amplitude under different conditions is studied. Finally, a support vector machine (SVM) is employed to predict the trend of the fault amplitude. The effectiveness of the proposed algorithms is verified using the Tennessee Eastman process as the study object.

#### 1. Introduction

Modern engineering systems are becoming more and more complex while their scale grows larger and larger, and the operational safety of a complex system tends to decrease as its scale increases. Fault diagnosis and prediction techniques offer an important way to improve the operational safety of complex engineering systems, which often run as multiple mode processes for the following reasons: changes in the nature of raw materials, external environment disturbances, drifting of the process load, equipment aging, and so forth. All of these factors may cause the practical operating process to deviate from the rated operating process; in addition, the equipment itself may have several operating periods due to adjustments of production programs. For example, a ship sailing task goes through several stages: setting sail, offshore sailing, far-shore sailing, and returning to the harbor. The operating condition of the marine system changes frequently during the voyage, and the system may have more than one working condition even within the same stage. Therefore, monitoring technology for multicondition processes has gradually attracted widespread attention both in industry and in academia [1].

With the wide application of distributed control systems (DCS) in industrial processes, massive process data associated with the operating status of the equipment can easily be saved. Since the 1990s, data-driven multivariate statistical monitoring technology has been successfully applied in industrial processes [2]. Traditional multivariate statistical monitoring techniques include methods based on PCA and PLS. These methods assume that the process data obey a Gaussian distribution, that the data are linear, that the process is stable with only one operating condition, and so forth. However, practical industrial process data often do not strictly obey a Gaussian distribution and are usually nonlinear, time-varying, multiconditioned, and dynamic. Using the traditional methods to monitor such processes therefore inevitably leads to inaccurate analysis of process performance as well as false and missed alarms of process failures [3]. For multiple mode processes, improved methods have been proposed based on traditional PCA/PLS; they are mainly divided into recursive iterative methods and multimodel methods. The basic idea of the recursive iterative method is to continuously add new process data into the modeling data matrix; by updating the model parameters, the model can adapt to new conditions [4]. The recursive iterative method is used relatively rarely in practical applications, because it cannot correctly distinguish changes during normal operation from fault conditions and depends heavily on process mechanism and knowledge. The basic idea of the multimodel method is first to divide the different operating conditions by a clustering algorithm, then use the process data of each condition to establish submodels, and finally construct a global detection indicator to monitor the process data.
Multiple PCA model is studied in [5–7], super PCA model is studied in [8], probabilistic principal component analysis (PPCA) is studied in [9], adjacent PCA model is studied in [10], PCA model based on Bayesian classifier is studied in [11–13], mixed PCA model is studied in [14], Gaussian mixture model (GMM) is studied in [15], and so on.

A multiple mode process switches constantly along the pattern "steady state mode 1 → transition process → steady state mode 2," so fault detection must be considered for both the steady state modes and the transition process. For example, Lu et al. [16] used "hard partition" to obtain the transition region between steady state modes. Zhao et al. [17] and Yao and Gao [18] further proposed the concept of "soft partition" to better separate the data of the transition region from the data of the stable region.

Currently, fault detection and fault diagnosis of multiple mode processes have achieved remarkable results, while research on fault prediction is still rare [19, 20]. This paper proposes a fault prediction method for multiple mode processes based on fault reconstruction technology. First, multiple PCA models are applied to fault detection of the multiple mode process. Then, fault reconstruction technology is used to estimate the fault amplitude. Finally, a support vector machine (SVM) is employed to predict the trend of the fault amplitude. Data from the Tennessee Eastman process are used to verify the validity of the algorithm.

#### 2. Fault Detection Algorithms under Multiple Mode Process

A multiple mode process includes steady state processes and transition processes. Multi-PCA models are built to adapt to the steady state processes of different conditions, and the statistics of the corresponding detection indicators, Hotelling's $T^2$ and SPE (squared prediction error), are calculated for each PCA model for fault detection. For transition processes, a weighted method is applied to calculate the statistics and control limits for fault detection.

Let $x \in \mathbb{R}^{m}$ denote a sample vector with $m$ measured variables, and suppose there are $n$ samples during operation. The data matrix $X \in \mathbb{R}^{n \times m}$ is composed of the $n$ samples, in which each row represents a sample and each column represents a measured variable with $n$ samplings. First, each column of the data matrix is scaled to zero mean and unit variance through standardization; then the covariance matrix of the standardized sample matrix $X$ can be obtained:
$$S = \frac{1}{n-1} X^{\mathrm{T}} X.$$

Then the eigenvalues of the covariance matrix $S$ are computed and arranged in descending order. Scaling each column of the data matrix to zero mean and unit variance is done as follows: subtract the corresponding variable mean from each column of $X$ and then divide by the corresponding variable standard deviation.
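The standardization and covariance computation described above can be sketched in a few lines of NumPy (a minimal illustration on synthetic data; the variable names are ours, not the paper's):

```python
import numpy as np

def standardize(X):
    """Scale each column of the data matrix to zero mean and unit variance."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0, ddof=1)
    return (X - mu) / sigma, mu, sigma

# n = 200 samples of m = 5 measured variables (synthetic data for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Xs, mu, sigma = standardize(X)

# Covariance matrix of the standardized sample matrix: S = X^T X / (n - 1)
S = Xs.T @ Xs / (Xs.shape[0] - 1)
```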

The PCA model divides the measured variable space into a principal subspace and a residual subspace, which are orthogonal and complementary. Any sample vector can be decomposed into projections on the principal subspace and the residual subspace; that is, the PCA model decomposes the sample matrix $X$ into two parts, $\hat{X}$ and $E$:
$$X = \hat{X} + E = T P^{\mathrm{T}} + E,$$
where $\hat{X} = T P^{\mathrm{T}}$ is the modeled part, $E$ is the residual part, $P \in \mathbb{R}^{m \times k}$ is the load matrix, which is made up of the first $k$ eigenvectors of $S$, $k$ is the number of principal elements, and $T = X P$ is the scoring matrix.

The multi-PCA method establishes a corresponding principal element model from the historical data of the measured variables in each steady state operating condition, thus forming a multiprincipal element model group that covers all operating conditions, namely,
$$\left\{ \mathrm{PCA}_i \right\}, \quad i = 1, 2, \ldots, K,$$
where $K$ is the number of stable conditions.
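Building the model group amounts to fitting one PCA sub-model per steady-state condition. A hedged sketch follows; here the number of principal elements is fixed by hand rather than chosen by a variance criterion, and the three "conditions" are synthetic:

```python
import numpy as np

def fit_pca_model(X, n_components):
    """Fit one PCA sub-model: load matrix P and eigenvalues of the covariance."""
    mean, std = X.mean(axis=0), X.std(axis=0, ddof=1)
    Xs = (X - mean) / std
    S = Xs.T @ Xs / (Xs.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(S)
    order = np.argsort(eigvals)[::-1]               # descending eigenvalues
    return {"P": eigvecs[:, order[:n_components]],  # load matrix (m x k)
            "eigvals": eigvals[order],
            "mean": mean, "std": std}

# One sub-model per steady-state condition (three synthetic conditions here)
rng = np.random.default_rng(1)
conditions = [rng.normal(loc=i, size=(300, 6)) for i in range(3)]
models = [fit_pca_model(X, n_components=2) for X in conditions]
```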

##### 2.1. Fault Detection of Steady State Process

In the multi-PCA model, the statistics of the corresponding detection indicators $T^2$ and SPE must be calculated for each separate PCA model, that is, $T_i^2$ and $\mathrm{SPE}_i$. SPE measures the changes of the sample vector projections on the residual subspace, and $T^2$ measures the changes of the measured variables on the principal subspace. SPE is given as
$$\mathrm{SPE} = \left\| \left( I - P P^{\mathrm{T}} \right) x \right\|^2 \le \delta_\alpha^2,$$
where $\delta_\alpha^2$ is the control limit of SPE at confidence level $\alpha$. When SPE is within the control limit, the device is running in a normal state; when SPE exceeds the control limit, a failure has occurred. A change of SPE represents a change of the correlation between the data. $\delta_\alpha^2$ is given as
$$\delta_\alpha^2 = \theta_1 \left[ \frac{c_\alpha \sqrt{2 \theta_2}\, h_0}{\theta_1} + 1 + \frac{\theta_2 h_0 \left( h_0 - 1 \right)}{\theta_1^2} \right]^{1/h_0},$$
where $\theta_i = \sum_{j=k+1}^{m} \lambda_j^{\,i}$ $(i = 1, 2, 3)$, $h_0 = 1 - 2\theta_1 \theta_3 / (3\theta_2^2)$, and $\lambda_j$ is the $j$th eigenvalue of the covariance matrix $S$ of the sample matrix $X$. $c_\alpha$ is the threshold value of the standard normal distribution at confidence level $\alpha$, and $m$ is the dimension of the sample $x$.

$T^2$ is given as
$$T^2 = x^{\mathrm{T}} P \Lambda^{-1} P^{\mathrm{T}} x \le T_\alpha^2, \qquad T_\alpha^2 = \frac{k \left( n - 1 \right)}{n - k} F_{k,\, n-k,\, \alpha},$$
where $\Lambda = \mathrm{diag}\left( \lambda_1, \ldots, \lambda_k \right)$ and $T_\alpha^2$ is the statistical limit of $T^2$ at confidence level $\alpha$, obtained from the $F$ distribution with $k$ and $n-k$ degrees of freedom. When $T^2$ is within the control limit, the device is running in a normal state.
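Both indicators and their control limits translate directly into code. The sketch below assumes SciPy for the normal and $F$ quantiles; `eigvals` are the descending eigenvalues of the covariance matrix, and the sample `x` is already standardized:

```python
import numpy as np
from scipy import stats

def spe_limit(eigvals, k, alpha=0.99):
    """SPE control limit delta_alpha^2 from the residual eigenvalues."""
    theta1 = eigvals[k:].sum()
    theta2 = (eigvals[k:] ** 2).sum()
    theta3 = (eigvals[k:] ** 3).sum()
    h0 = 1.0 - 2.0 * theta1 * theta3 / (3.0 * theta2 ** 2)
    c = stats.norm.ppf(alpha)                        # standard normal threshold
    return theta1 * (c * np.sqrt(2.0 * theta2) * h0 / theta1
                     + 1.0 + theta2 * h0 * (h0 - 1.0) / theta1 ** 2) ** (1.0 / h0)

def t2_limit(n, k, alpha=0.99):
    """Hotelling T^2 control limit via the F distribution."""
    return k * (n - 1) / (n - k) * stats.f.ppf(alpha, k, n - k)

def detection_statistics(x, P, eigvals):
    """T^2 and SPE of one standardized sample x for a sub-model (P, eigvals)."""
    k = P.shape[1]
    t = P.T @ x                                      # score vector
    t2 = t @ (t / eigvals[:k])                       # x^T P Lambda^{-1} P^T x
    r = x - P @ t                                    # residual-subspace part
    return t2, r @ r
```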

##### 2.2. Fault Detection of Transition Process

In a multiple mode process, when the production process switches from one steady state to another, it goes through a slowly changing transition process; using the fault detection method designed for the steady state process during the transition will cause false alarms. This paper uses a weighted method that evolves with time to optimize the $T^2$ and SPE of each separate PCA model, and the optimized $T^2$ and SPE are then used for fault detection. Thus false alarms during the transition process can be effectively avoided.

In a weighted average, the value that gives some elements more "weight" or influence on the result than other elements in the same set is called the weight: each element is multiplied by its weight, and the sum is divided by the sum of all weights. At the instant a steady state condition begins to transfer to the next condition, the former condition has the greater impact on the characteristics of the transition process; as time passes, the characteristics of the transition process become closer and closer to those of the next condition. Therefore, during the transition process the weights should change with time: the weight of the former condition decreases from 1 to 0, while the weight of the new condition increases from 0 to 1. The weight $w_1(t)$ of the former condition and the weight $w_2(t)$ of the new condition are given as
$$w_1(t) = \frac{t_2 - t}{t_2 - t_1}, \qquad w_2(t) = \frac{t - t_1}{t_2 - t_1},$$
where $t_1$ is the moment when the transition process begins, $t_2$ is the moment when the transition process ends, $t \in \left[ t_1, t_2 \right]$, and $w_1(t) + w_2(t) = 1$.

Suppose that $\mu_1$ and $\mu_2$ represent the means of the measured variables of the two adjacent conditions and $\sigma_1$ and $\sigma_2$ represent the standard deviations of the measured variables of the two adjacent conditions; then the mean and standard deviation of the transition process are given as
$$\mu(t) = w_1(t)\, \mu_1 + w_2(t)\, \mu_2, \qquad \sigma(t) = w_1(t)\, \sigma_1 + w_2(t)\, \sigma_2.$$

The sample data can be pretreated with the weighted mean $\mu(t)$ and standard deviation $\sigma(t)$, so the covariance matrix of the sample vector and the load matrix of the transition process can be obtained, and then the optimized SPE and $T^2$ can be derived.
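The weighted pretreatment of a transition-process sample can be sketched as follows (a minimal illustration; the linear weight form follows the description above):

```python
import numpy as np

def transition_weights(t, t1, t2):
    """Time-varying weights: former condition decays 1 -> 0, new grows 0 -> 1."""
    w1 = (t2 - t) / (t2 - t1)
    w2 = (t - t1) / (t2 - t1)
    return w1, w2

def transition_standardize(x, t, t1, t2, mu1, sigma1, mu2, sigma2):
    """Standardize a transition sample with the weighted mean and std."""
    w1, w2 = transition_weights(t, t1, t2)
    mu = w1 * mu1 + w2 * mu2
    sigma = w1 * sigma1 + w2 * sigma2
    return (x - mu) / sigma
```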

#### 3. Fault Amplitude Estimation Based on Reconstruction Technique

When a fault is detected, the fault amplitude, which measures the extent of the current fault, can be estimated by the fault reconstruction technique. Whether the amplitude estimates of the same fault are consistent under different conditions is an important problem brought by multicondition operation; if they are not, consistency should be achieved by a dedicated algorithm.

##### 3.1. The Basic Idea of Fault Reconstruction

Fault reconstruction reconstructs the process data so as to remove the effects of faults. The reconstructed data are theoretically within the control limits and approximate the normal data. Fault estimation estimates the fault amplitude after fault reconstruction. When observed data are missing or obviously faulty, the practical data can be replaced by the reconstructed data. Fault reconstruction technology has obtained notable achievements in the field of fault diagnosis; refer to [21–30].

Suppose $x$ represents the data of the detected fault, $x^{*}$ represents the normal data, $x, x^{*} \in \mathbb{R}^{m}$, where $m$ is the number of sensors (the dimension of each measurement sample), and $f$ represents the fault amplitude. Then
$$x = x^{*} + \Xi f,$$
where $\Xi$ represents the fault subspace, also known as the fault direction matrix.

Reconstructing the normal data to eliminate the influence of the fault gives $z$, the estimated value of the normal data:
$$z = x - \Xi \hat{f},$$
where $\hat{f}$ represents the estimated value of $f$. Geometrically, this pulls the sample $x$ back to the principal subspace along the fault subspace.

Based on $z$, the estimation of the normal data after reconstruction, $\hat{f}$ is obtained as the estimation of the fault amplitude in the sense of least SPE after reconstruction:
$$\hat{f} = \arg\min_{f} \left\| \tilde{C} \left( x - \Xi f \right) \right\|^{2},$$
where $\tilde{C} = I - P P^{\mathrm{T}}$ denotes the projection onto the residual subspace and $\tilde{\Xi} = \tilde{C} \Xi$ denotes the projection of the fault subspace $\Xi$ onto the residual subspace.

The SPE after reconstruction is
$$\mathrm{SPE}(z) = \left\| \tilde{C} z \right\|^{2} = \left\| \tilde{C} x - \tilde{\Xi} f \right\|^{2}.$$

Reconstruction means searching for the $\hat{f}$ that minimizes (13):
$$\hat{f} = \arg\min_{f} \left\| \tilde{C} x - \tilde{\Xi} f \right\|^{2}.$$
The optimal solution of (14) is
$$\hat{f} = \tilde{\Xi}^{+} \tilde{C} x,$$
where $\tilde{\Xi}^{+}$ denotes the Moore-Penrose pseudoinverse of $\tilde{\Xi}$.
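The least-SPE amplitude estimate and the reconstructed sample can be written directly with a pseudoinverse. A sketch follows, with `P` the load matrix and `Xi` the fault direction matrix; the sanity check at the end places the normal part of the sample in the principal subspace so that a known amplitude is recovered exactly:

```python
import numpy as np

def estimate_fault_amplitude(x, P, Xi):
    """f = (C~ Xi)^+ C~ x  and  z = x - Xi f, with C~ = I - P P^T."""
    m = P.shape[0]
    C = np.eye(m) - P @ P.T             # projection onto the residual subspace
    Xi_tilde = C @ Xi                   # fault directions seen in residual space
    f = np.linalg.pinv(Xi_tilde) @ (C @ x)
    z = x - Xi @ f                      # reconstructed (fault-free) sample
    return f, z

# Sanity check with synthetic directions and a known amplitude of 2.0
rng = np.random.default_rng(3)
P = np.linalg.qr(rng.normal(size=(6, 2)))[0]
Xi = rng.normal(size=(6, 1))
x_normal = P @ rng.normal(size=2)       # zero residual projection
x = x_normal + Xi @ np.array([2.0])
f, z = estimate_fault_amplitude(x, P, Xi)
```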

##### 3.2. Consistent Estimation of Multiple Mode Process Fault Amplitude

When a fault occurs in condition 1 and lasts into condition 2, the fault direction matrix changes, but the fault magnitude remains unchanged. In theory, the estimates of the same fault should be consistent even in different conditions. But in practical industrial processes, the results vary due to data noise and equipment interference in different conditions. So consistent estimation of the same fault under different conditions should be studied.

We assume that
$$\tilde{\Xi}_j f_j = \tilde{\Xi}_{j+1} f_{j+1},$$
where $f_j$ and $f_{j+1}$ denote the amplitude estimates of the same fault in the $j$th condition and the next condition, respectively, and $\tilde{\Xi}_j$ and $\tilde{\Xi}_{j+1}$ denote the projections of the corresponding fault directions on the residual subspace; we suppose that the dimensions of $f_j$ and $f_{j+1}$ are both $l$.

Based on (16), the amplitude estimates of the same fault under different conditions satisfy
$$\begin{bmatrix} \tilde{\Xi}_j & -\tilde{\Xi}_{j+1} \end{bmatrix} \begin{bmatrix} f_j \\ f_{j+1} \end{bmatrix} = 0.$$

We can derive $f_j$ and $f_{j+1}$ by minimizing $\left\| \tilde{\Xi}_j f_j - \tilde{\Xi}_{j+1} f_{j+1} \right\|$.

We define $M = \begin{bmatrix} \tilde{\Xi}_j & -\tilde{\Xi}_{j+1} \end{bmatrix}$. The singular value decomposition of $M$ can be described as
$$M = U \Sigma V^{\mathrm{T}},$$
where $U$ and $V$ are unitary matrices of the corresponding dimensions and $\Sigma$ is a diagonal matrix whose diagonal elements are the singular values, sorted in descending order. Take the last column $v$ of the singular vector matrix $V$, whose dimension is $2l$: the first $l$ entries are the estimate of $f_j$, and the last $l$ entries are the estimate of $f_{j+1}$. Substituting $f_j$ and $f_{j+1}$ into (17), the obtained estimates are the consistent estimations of the same fault under different conditions.
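The SVD step takes a few lines of NumPy. In this sketch we take the singular vector of the smallest singular value, which minimizes the residual mismatch over unit-norm stacked amplitudes; the check at the end uses the fact that if both conditions share the same projected fault directions, the two estimates must coincide:

```python
import numpy as np

def consistent_amplitudes(Xi_tilde_1, Xi_tilde_2):
    """Minimize ||Xi~_1 f1 - Xi~_2 f2|| over unit vectors [f1; f2] via SVD."""
    l = Xi_tilde_1.shape[1]
    M = np.hstack([Xi_tilde_1, -Xi_tilde_2])
    U, s, Vt = np.linalg.svd(M)         # singular values sorted descending
    v = Vt[-1]                          # direction of the smallest singular value
    return v[:l], v[l:]

# Identical projected directions in both conditions -> identical estimates
rng = np.random.default_rng(4)
A = rng.normal(size=(8, 2))
f1, f2 = consistent_amplitudes(A, A)
```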

#### 4. Fault Prediction Algorithms

The amplitude changes with the evolution of the fault. Once the fault amplitude is estimated, a support vector machine (SVM) prediction model can be used to predict the trend of the fault amplitude.

For a given time series $\left\{ x_1, x_2, \ldots, x_n \right\}$, $x_{i+m}$ is the target value of prediction, the inputs are $X_i = \left( x_i, x_{i+1}, \ldots, x_{i+m-1} \right)$, and $m$ is the embedding dimension. Build the mapping $f \colon \mathbb{R}^{m} \to \mathbb{R}$ between input $X_i$ and output $x_{i+m}$; the learning samples for the support vector machine are
$$\left\{ \left( X_i,\, x_{i+m} \right) \right\}, \quad i = 1, 2, \ldots, n - m.$$

The regression function of the trained support vector machine is
$$f(X) = \sum_{i} \left( \alpha_i - \alpha_i^{*} \right) K\left( X_i, X \right) + b,$$
where $\alpha_i$ and $\alpha_i^{*}$ are Lagrange multipliers, $b$ is the threshold value, and $K(\cdot, \cdot)$ is a kernel function; the radial basis function is used here:
$$K\left( X_i, X \right) = \exp\left( -\frac{\left\| X - X_i \right\|^{2}}{2 \sigma^{2}} \right).$$
The one-step prediction model is
$$\hat{x}_{n+1} = f\left( x_{n-m+1}, x_{n-m+2}, \ldots, x_n \right).$$

A new sample $\left( x_{n-m+2}, \ldots, x_n, \hat{x}_{n+1} \right)$ is then obtained, where $\hat{x}_{n+1}$ represents the prediction of the $(n+1)$th data point.

Furthermore, we have
$$\hat{x}_{n+k} = f\left( x_{n-m+k}, \ldots, x_n, \hat{x}_{n+1}, \ldots, \hat{x}_{n+k-1} \right).$$
Equation (23) represents the prediction model of the $k$th step, where $x_i$ represents the actual value of the $i$th data point and $\hat{x}_i$ represents its prediction [31].
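The embedding and iterated multi-step prediction can be sketched with scikit-learn's `SVR` standing in for the SVM regression above (a hedged illustration: the drifting series, the embedding dimension, and the `C`/`gamma` values are our assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.svm import SVR

def embed(series, m):
    """Learning samples: inputs are m past values, target is the next value."""
    X = np.array([series[i:i + m] for i in range(len(series) - m)])
    return X, series[m:]

# Hypothetical fault-amplitude series (slow drift plus noise) for illustration
rng = np.random.default_rng(5)
series = 0.01 * np.arange(300) + 0.05 * rng.normal(size=300)

m = 5                                            # embedding dimension
X, y = embed(series, m)
model = SVR(kernel="rbf", C=10.0, gamma=0.5).fit(X[:250], y[:250])

# Iterated multi-step prediction: feed each forecast back as the next input
window = list(series[-m:])
preds = []
for _ in range(20):
    p = model.predict(np.array(window[-m:]).reshape(1, -1))[0]
    preds.append(p)
    window.append(p)
```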

The mean squared error (MSE) and the average relative prediction error (ARE) are used to evaluate the accuracy of the SVM prediction of the fault amplitude trend:
$$\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \left( x_i - \hat{x}_i \right)^{2}, \qquad \mathrm{ARE} = \frac{1}{N} \sum_{i=1}^{N} \frac{\left| x_i - \hat{x}_i \right|}{\left| x_i \right|}.$$
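The two error measures translate directly into code (a minimal sketch):

```python
import numpy as np

def mse(actual, predicted):
    """Mean squared error of a prediction sequence."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean((a - p) ** 2)

def are(actual, predicted):
    """Average relative prediction error of a prediction sequence."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs(a - p) / np.abs(a))
```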

#### 5. Simulations

The Tennessee Eastman process contains 41 measured variables (22 continuous variables and 19 component variables) and 12 operating variables and can simulate 21 kinds of faults; refer to [32–35] for more details. The simulation in this paper selects fault 2, namely, a step change of flow B. A step signal is added to flow B at a specific instant; the variable then undergoes a slowly changing process and finally settles at a stable state, so the data of two stable conditions and one transition process are obtained. Eleven control variables and 18 process variables are selected as the sample data set, the sampling interval is 3 minutes, and 2000 normal samples covering working condition 1, the transition process, and condition 2 are taken as training samples. Under normal working conditions, the change of working condition occurs at sample point 900 and, after a period of transition, the steady state of condition 2 is reached at about sample point 1100. Under fault conditions, the change of working condition still occurs at sample point 900, and the fault onset time covers two cases. In Case 1, the fault occurs in condition 1, enters the transition process as the condition changes, and lasts into condition 2; in this case the fault is set at sample point 160. In Case 2, the fault occurs in condition 2, while condition 1 and the transition process are in the normal state; in this case the fault is set at sample point 1260.

First, with the pretreated multicondition data, the multi-PCA model is established and the control limits of the statistical indicators for fault detection are set at the 99% confidence level. The fault is detected in both cases: the detection results of the first case are shown in Figures 1 and 2, and those of the second case are shown in Figures 3 and 4.