Abstract
Fault detection is fundamental to many industrial applications. As system complexity grows, the number of sensors increases, which makes traditional fault detection methods less efficient. Metric learning is an efficient way to build the relationship between feature vectors and the categories of instances. In this paper, we first propose a metric learning-based fault detection framework. Meanwhile, a novel feature extraction method based on the wavelet transform is used to obtain feature vectors from the detection signals. Experiments on the Tennessee Eastman (TE) chemical process dataset demonstrate that the proposed method performs better than existing methods such as principal component analysis (PCA) and Fisher discriminant analysis (FDA).
1. Introduction
Because industrial systems are becoming more complex, safety and reliability have become more critical in complicated process design [1–3]. Traditional model-based approaches, which require the process to be modeled by first principles or prior knowledge, have become difficult to apply, especially for large-scale processes. With the significant growth in the degree of automation, a large amount of process data is generated by sensors and actuators. In this setting, data-based techniques have been proposed and have developed rapidly over the past two decades. Data-driven fault diagnosis schemes are based on considerable amounts of historical data and make full use of the information provided by that data instead of a complex process model [4, 5]. This framework can simplify the design procedure effectively and ensure safety and reliability in complicated processes [6]. Many fault diagnosis techniques have been used in complicated industrial systems [7–9]. In this framework, PCA [10] and FDA [11] are regarded as the most mature and successful methods in real industrial applications.
PCA aims at dimensionality reduction and captures the data variability in an efficient way. In the PCA method, process variables are projected onto two orthogonal subspaces by carrying out the singular value decomposition of the sample covariance matrix, and the cumulative percent variance [12] is the standard criterion to determine the number of principal components. To detect the variability in the two orthogonal subspaces, the squared prediction error (SPE) statistic [13] and Hotelling's $T^2$ statistic [14] are calculated. PCA is a mature method; however, it determines the lower-dimensional subspaces without considering the information between the classes. FDA [15] is a linear dimensionality reduction technique. It has an advantage over PCA because it takes into consideration the information between different classes of the data. The aim of FDA is to maximize the dispersion between different classes and minimize the dispersion within each class by determining a group of transformation vectors. In the FDA method, three scatter matrices are defined to measure dispersion, and the problem of determining a set of linear transformation vectors is equivalent to a generalized eigenvalue problem [16]. However, FDA has difficulty dealing with online applications. Motivated by the aforementioned studies, in this paper we propose a fault detection scheme based on metric learning, which has been used extensively in pattern classification problems. The purpose of metric learning is to learn a Mahalanobis distance [17] that can represent an accurate relationship between feature vectors and the categories of instances. The model focuses on the divergence among classes instead of extracting principal components. Meanwhile, the Mahalanobis distance learned from the historical data can be utilized in online detection without real-time updates. Therefore, metric learning is theoretically more suitable than PCA and FDA for fault diagnosis.
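Before moving on, the two PCA monitoring statistics mentioned above can be made concrete. The following numpy sketch illustrates the standard $T^2$/SPE monitoring scheme (control limits are omitted); it is only an illustration under common assumptions, not the implementation used in the experiments of Section 4.

```python
import numpy as np

def pca_monitoring_stats(X_train, X_test, n_components):
    """Minimal sketch of PCA-based monitoring with T^2 and SPE (Q) statistics.

    X_train: fault-free training data (samples x variables).
    X_test : new data to be monitored.
    """
    # Standardize with the training mean and standard deviation.
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
    Xc = (X_train - mu) / sigma
    Xt = (X_test - mu) / sigma

    # Singular value decomposition of the sample covariance matrix.
    cov = np.cov(Xc, rowvar=False)
    U, S, _ = np.linalg.svd(cov)
    P = U[:, :n_components]                    # loadings of the principal subspace
    lam_inv = np.diag(1.0 / S[:n_components])  # inverse of the retained eigenvalues

    scores = Xt @ P
    # Hotelling's T^2 in the principal component subspace.
    T2 = np.sum((scores @ lam_inv) * scores, axis=1)
    # SPE in the residual subspace.
    residual = Xt - scores @ P.T
    SPE = np.sum(residual ** 2, axis=1)
    return T2, SPE
```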
In practice, selecting an appropriate metric plays a critical role in recent machine learning algorithms. Among numerous metrics, the Mahalanobis distance is the most popular one, because its scale has no effect on the classification performance. Besides, the Mahalanobis distance takes into account the correlations between different features, which allows an accurate distance model to be built. A good metric learning algorithm should be fast and scalable, and at the same time it should emphasize the relevant dimensions while reducing the influence of noninformative dimensions [18]. In this paper, we adopt the information-theoretic metric learning (ITML) algorithm to learn the Mahalanobis distance function [19]. In ITML, the distances between similar pairs are bounded above by a small given value, while the distances between dissimilar pairs are required to be larger than a large given value. The algorithm is expressed as a particular Bregman optimization problem. To avoid overfitting, the target matrix is regularized toward a given Mahalanobis matrix via the LogDet divergence. It should be noted that a feature extraction method based on the wavelet transform is proposed for the data preprocessing of the algorithm.
The remainder of this paper is organized as follows. In Section 2, we give background knowledge of ITML. Then, the wavelet transform is described in Section 3. Section 4 introduces the TE process [20] and gives the experimental results on the TE process dataset to demonstrate the effectiveness of the proposed algorithm. Finally, we draw conclusions and point out future directions in Section 5.
2. Related Work
ITML is a metric learning algorithm that requires no eigenvalue computations or semidefinite programming. Its regularization strategy is to minimize the divergence between the target matrix and a given matrix.
Given a dataset $X = \{x_1, x_2, \ldots, x_n\}$ with $x_i \in \mathbb{R}^d$, the Mahalanobis distance between $x_i$ and $x_j$ can be parameterized by a matrix $A$ as follows:
$$d_A(x_i, x_j) = (x_i - x_j)^T A (x_i - x_j). \tag{1}$$
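For concreteness, the following short numpy illustration evaluates the parameterized distance in (1); the two sample vectors are arbitrary.

```python
import numpy as np

def mahalanobis_sq(x_i, x_j, A):
    """Squared Mahalanobis distance d_A(x_i, x_j) = (x_i - x_j)^T A (x_i - x_j)."""
    diff = x_i - x_j
    return float(diff @ A @ diff)

# With A equal to the identity matrix, d_A reduces to the squared Euclidean distance.
x_i, x_j = np.array([1.0, 2.0]), np.array([3.0, 1.0])
print(mahalanobis_sq(x_i, x_j, np.eye(2)))  # 5.0
```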
In ITML, pairwise constraints are used to represent the relationship of data in the same or different categories. If $x_i$ and $x_j$ are in the same category, the Mahalanobis distance between them should be smaller than a given value $u$. Similarly, if $x_i$ and $x_j$ are in different categories, the Mahalanobis distance between them should be larger than a given value $\ell$. The purpose of ITML is to find a matrix $A$ which satisfies the following pairwise constraint sets:
$$d_A(x_i, x_j) \le u \quad \forall (x_i, x_j) \in S, \qquad d_A(x_i, x_j) \ge \ell \quad \forall (x_i, x_j) \in D, \tag{2}$$
where $S$ and $D$ represent the sets of pairs of data in the same and different categories, respectively.
It deserves pointing out that there will be more than one matrix satisfying all the constraints. To ensure the stability of the metric learning, the target matrix $A$ is regularized toward a given matrix $A_0$. The distance between $A$ and $A_0$ can be expressed as a Bregman matrix divergence [21] as follows:
$$D_\phi(A, A_0) = \phi(A) - \phi(A_0) - \operatorname{tr}\!\big(\nabla \phi(A_0)^T (A - A_0)\big), \tag{3}$$
in which $\operatorname{tr}(\cdot)$ denotes the trace of a matrix and $\phi$ is a given strictly convex differentiable function that determines the properties of the Bregman matrix divergence. Taking the advantages of different differentiable functions into account, $\phi$ is chosen as $\phi(A) = -\log\det A$, and the corresponding Bregman matrix divergence is called the LogDet divergence, $D_{\ell d}(A, A_0) = \operatorname{tr}(A A_0^{-1}) - \log\det(A A_0^{-1}) - d$. According to a further generalization, the LogDet divergence remains invariant under any invertible linear transformation $M$ [22]:
$$D_{\ell d}(M^T A M,\, M^T A_0 M) = D_{\ell d}(A, A_0). \tag{4}$$
The metric learning problem can then be translated into a LogDet optimization problem as follows:
$$\min_{A \succeq 0} \; D_{\ell d}(A, A_0) \quad \text{s.t.} \quad d_A(x_i, x_j) \le u \;\; \forall (x_i, x_j) \in S, \quad d_A(x_i, x_j) \ge \ell \;\; \forall (x_i, x_j) \in D. \tag{5}$$
It is worth pointing out that the distance constraints are equivalent to the linear constraints $\operatorname{tr}\!\big(A (x_i - x_j)(x_i - x_j)^T\big) \le u$ and $\operatorname{tr}\!\big(A (x_i - x_j)(x_i - x_j)^T\big) \ge \ell$. To guarantee the existence of a feasible solution to (5), Kulis proposed an iterative algorithm which introduces slack variables [21]. In this way, an iterative equation to update the Mahalanobis matrix is obtained as follows:
$$A_{t+1} = A_t + \beta\, A_t (x_i - x_j)(x_i - x_j)^T A_t, \tag{6}$$
where $\beta$ is a parameter computed in Algorithm 1. In the algorithm, the slack variables balance the minimization of $D_{\ell d}(A, A_0)$ against the satisfaction of the linear constraints. After learning the Mahalanobis matrix $A$ based on the given matrix $A_0$, we can classify the data using a $k$-nearest neighbor classifier to realize fault diagnosis.
(Algorithm 1: the ITML procedure for learning the Mahalanobis matrix $A$ from the constraint sets $S$ and $D$.)
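A compact Python sketch of the Bregman-projection updates behind (6) and Algorithm 1 is given below, following the ITML formulation of [19, 21]. Constraint sampling and the stopping rule are simplified here, so this is an illustrative sketch rather than a reference implementation.

```python
import numpy as np

def itml_sketch(X, pairs, u, l, A0, gamma=1.0, n_sweeps=20):
    """Sketch of the ITML Bregman projections (cf. Algorithm 1).

    X      : (n, d) data matrix of feature vectors.
    pairs  : list of (i, j, delta) with delta = +1 for similar pairs (S)
             and delta = -1 for dissimilar pairs (D).
    u, l   : upper bound for similar pairs, lower bound for dissimilar pairs.
    A0     : prior Mahalanobis matrix (e.g. the identity).
    gamma  : slack trade-off parameter.
    """
    A = A0.copy()
    lam = np.zeros(len(pairs))                       # dual variables
    xi = np.array([u if d > 0 else l for (_, _, d) in pairs], dtype=float)

    for _ in range(n_sweeps):                        # simplified stopping rule
        for c, (i, j, delta) in enumerate(pairs):
            v = X[i] - X[j]
            p = float(v @ A @ v)
            if p == 0.0:
                continue
            alpha = min(lam[c], delta * 0.5 * (1.0 / p - gamma / xi[c]))
            beta = delta * alpha / (1.0 - delta * alpha * p)
            xi[c] = gamma * xi[c] / (gamma + delta * alpha * xi[c])
            lam[c] -= alpha
            A = A + beta * np.outer(A @ v, v @ A)    # rank-one update as in (6)
    return A
```

In this sketch, `gamma` plays the role of the slack trade-off mentioned above; the learned matrix `A` is then plugged into a $k$-nearest neighbor classifier.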
3. Fault Diagnosis Using ITML
In the data-driven fault diagnosis system based on ITML, the system is sensitive to the values in the datasets. However, in certain situations the faults are reflected in the vibration amplitude or the variation tendency. The wavelet transform performs a multiscale analysis of the dataset by dilating and shifting the wavelet functions, which transforms discrepancies in vibration amplitude or variation tendency into discrepancies in values.
Wavelet functions are localized in both time and frequency. The wavelet transform has two main advantages. Firstly, the analysis window itself changes rather than being a fixed complex exponential. Secondly, the duration of the analysis window is not fixed. The wavelet functions are created from the mother wavelet by dilating and shifting the window. The mother wavelet is a function with zero mean, limited duration, and abrupt changes in amplitude. The wavelet functions can be expressed as [23]
$$\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{t - b}{a}\right), \tag{7}$$
where $a$ is the scaling factor and $b$ is the translation factor, with $a, b \in \mathbb{R}$ and $a > 0$. By increasing the scaling factor $a$, the wavelet function is expanded, which is conducive to analyzing signals with low frequency and long duration. Correspondingly, by reducing the scaling factor $a$, the wavelet function is shrunk, which is conducive to analyzing signals with high frequency and short duration. By changing the translation factor $b$, the wavelet functions traverse the time axis to obtain time-domain information. The wavelet transform can therefore study features at different scales together with time-domain information, as illustrated in Figure 1.

The wavelet transform aims at obtaining a linear combination of the wavelet functions that describes the features in the signal. The values of the wavelet transform are generated by different scaling factors and translation factors. The wavelet transform is defined as [23]
$$W_f(a, b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} f(t)\, \psi^*\!\left(\frac{t - b}{a}\right) dt. \tag{8}$$
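As an illustration of how such a transform can serve as a feature extractor, the sketch below relies on the PyWavelets library and a discrete wavelet decomposition; the choice of mother wavelet ('haar') and the decomposition level are assumptions made for the example, not the settings used in the experiments.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_features(window, wavelet="haar", level=2):
    """Turn a short window of raw sensor readings into a feature vector.

    window : 1-D array of consecutive observations of one variable.
    The approximation and detail coefficients of a discrete wavelet
    decomposition are concatenated into one feature vector, so changes
    in amplitude or trend show up as changes in the coefficient values.
    """
    coeffs = pywt.wavedec(np.asarray(window, dtype=float), wavelet, level=level)
    return np.concatenate(coeffs)

# Example: a 7-sample window of one process variable (values are illustrative).
window = [0.25, 0.27, 0.26, 0.31, 0.45, 0.44, 0.46]
print(wavelet_features(window))
```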
The wavelet transform performs a multiscale analysis of the dataset, which is conducive to the results of ITML. In order to verify this, a wavelet transform of the TE process dataset is constructed (the TE process is introduced in Section 4). Selecting 20 consecutive observations of 9 variables of the fault 12 dataset in the TE process at random, the results of the wavelet transform are shown in Figures 2 and 3. The red lines in Figures 2 and 3 represent the values of the fault-free dataset and the blue lines represent the values of the fault 12 dataset.

(Figures 2 and 3: wavelet transform results for the 9 selected variables of the fault-free and fault 12 datasets; panels (a)–(i) in each figure correspond to the 9 variables.)

The results of the wavelet transform show that the features in the signal are converted into discrepancies of values. The wavelet transform therefore performs well as the feature extraction step for ITML.
4. Experimental Results
4.1. Dataset
The data-driven fault diagnosis method proposed in this work is applied to the Tennessee Eastman chemical process.
The TE process is a chemical process widely used as an industrial benchmark; its schematic flow diagram and instrumentation are shown in Figure 4 [24]. The TE process produces two products from four reactants. The 52 variables contained in the process consist of 11 control variables and 41 measurement variables, as listed in Table 1 [16] and Table 2 [16].
Twenty process faults and a valve fault are defined in the TE process, as shown in Table 3 [16]. In the work of Chiang et al. [15], a widely used dataset of the TE process is given. The dataset contains 22 training sets, corresponding to the fault-free operating condition and the 21 faulty operating conditions, each collecting the measurements of the 52 variables over 24 operating hours. It also contains 22 test sets, in which the measurements of the 52 variables are collected over 48 operating hours. It is worth pointing out that in the 22 test sets the faults are introduced after 8 simulation hours. The sampling time of both the training sets and the test sets is 3 minutes.
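To make the structure of these datasets concrete, the following sketch shows one possible way to load and index them; the file naming convention ("d01.dat", etc.) is an assumption about a particular distribution of the data and may differ from the copy actually used.

```python
import numpy as np

def load_te_set(path, fault_label):
    """Load one TE dataset file: rows are samples, columns are the 52 variables."""
    X = np.loadtxt(path)              # whitespace-separated numeric file (assumed)
    y = np.full(len(X), fault_label)  # one label per sample
    return X, y

# Samples are taken every 3 minutes; in the test sets the fault is introduced
# after 8 simulation hours, i.e. from sample index 8 * 60 // 3 = 160 onward.
FAULT_ONSET_INDEX = 8 * 60 // 3

# Hypothetical usage for the fault 1 training and test sets:
# X_train, y_train = load_te_set("d01.dat", fault_label=1)
# X_test,  y_test  = load_te_set("d01_te.dat", fault_label=1)
```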
4.2. Performance Comparing with Classical Methods
To demonstrate the advantages of the proposed fault detection method, we compare it with two classical methods, PCA and FDA. We carry out experiments on the TE process dataset, and the classification accuracy of the $k$-nearest neighbor classifier is chosen to evaluate the classification performance.
The experiments are conducted on 6 datasets of the TE process: the fault-free dataset and the fault 1, fault 2, fault 4, fault 6, and fault 7 datasets. The wavelet transform is selected as the feature extraction method for the TE process datasets. To balance the performance of the feature extraction against the amount of delay, every 7 consecutive samples are collected to perform one wavelet transform. The slack parameter used to avoid overfitting is set to a fixed value, and all results presented are averages over 10 runs. The experimental results on the fault 1 dataset are given in Figures 5 and 6.
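The evaluation pipeline just described (windowed wavelet features, an ITML-learned Mahalanobis matrix, then a $k$-nearest neighbor classifier) can be sketched as follows. It reuses the hypothetical `wavelet_features` and `itml_sketch` helpers from the earlier listings, and scikit-learn's `KNeighborsClassifier` with a Mahalanobis metric is only one possible way to realize the final classification step.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def build_features(X_raw, window=7):
    """Split each variable's signal into non-overlapping windows of `window`
    consecutive samples, replace every window by its wavelet coefficients, and
    concatenate the coefficients of all variables into one feature vector."""
    n = (len(X_raw) // window) * window
    feats = []
    for start in range(0, n, window):
        block = X_raw[start:start + window]          # (window, n_variables)
        feats.append(np.concatenate(
            [wavelet_features(block[:, v]) for v in range(block.shape[1])]))
    return np.array(feats)

def evaluate_knn(F_train, y_train, F_test, y_test, A, k=5):
    """Classification accuracy of k-NN under the learned Mahalanobis matrix A."""
    knn = KNeighborsClassifier(n_neighbors=k, algorithm="brute",
                               metric="mahalanobis", metric_params={"VI": A})
    knn.fit(F_train, y_train)
    return knn.score(F_test, y_test)
```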

(Figure 5: fault detection results of the PCA method on the fault 1 dataset; panels (a) and (b).)
(Figure 6: classification accuracy on the fault 1 dataset; (a) FDA method, (b) ITML method.)
Figure 5 shows the fault detection results of the PCA method on the fault 1 dataset. The fault appears in both of the two orthogonal subspaces and can be successfully detected by the $T^2$ and SPE statistics, and the fault detection accuracy of the PCA method on the fault 1 dataset is 0.99. The PCA method provides a satisfactory fault detection rate, but it cannot estimate fault types because it determines the lower-dimensional subspaces without considering the information between the classes. Figure 6(a) indicates that the classification accuracy of the FDA method fluctuates with the order of the model and is not entirely satisfactory. Figure 6(b) illustrates that the ITML method gives a higher fault detection rate than the FDA method and that it remains stable for different values of $k$ in the $k$-nearest neighbor classifier. Furthermore, the ITML method has an advantage over the PCA method in that it can estimate fault types directly.
The experimental results are summarized in Figure 7 and reveal that the ITML method is more robust than PCA and FDA. Taking into account its ability to estimate fault types directly, the ITML method achieves the best classification performance across all datasets. The results of the experiments also demonstrate the performance and effectiveness of the wavelet transform-based feature extraction.

5. Conclusion
In this paper, we proposed a fault detection scheme based on information-theoretic metric learning. ITML performs well in learning the Mahalanobis distance function. In the proposed framework, the feature vector is first extracted by applying the wavelet transform; after that, the ITML algorithm is applied to improve the fault detection accuracy and to estimate fault types. Compared with the fault detection schemes based on PCA and FDA, experiments on the TE process dataset demonstrate that the proposed method is more robust. The results of the experiments also demonstrate the performance and effectiveness of the wavelet transform-based feature extraction.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors acknowledge the support of the China Postdoctoral Science Foundation (Grant no. 2012M520738) and the Heilongjiang Postdoctoral Fund (no. LBH-Z12092).