Special Issue: Data-Driven Fault Supervisory Control Theory and Applications
Research Article | Open Access
Yingwei Zhang, Lingjun Zhang, Hailong Zhang, "Fault Detection for Industrial Processes", Mathematical Problems in Engineering, vol. 2012, Article ID 757828, 18 pages, 2012. https://doi.org/10.1155/2012/757828
Fault Detection for Industrial Processes
A new fault-relevant KPCA algorithm is proposed, and a fault detection approach is then developed based on it. The proposed method further decomposes both the KPCA principal space and residual space into two subspaces. Compared with traditional statistical techniques, the fault subspace is separated based on the fault-relevant influence. This method can find the fault-relevant principal directions and principal components of the systematic subspace and residual subspace for process monitoring. The proposed monitoring approach is applied to the Tennessee Eastman process and the penicillin fermentation process. The simulation results show the effectiveness of the proposed method.
1. Introduction
Process monitoring and fault diagnosis are important for the safety and reliability of industrial processes [1–12]. As data-driven process monitoring methodologies, multivariate statistical analysis techniques such as principal component analysis (PCA) and partial least squares (PLS) have been widely used for the detection and diagnosis of abnormal operating situations in many industrial processes over the last few decades [5, 13–16]. The major advantage of these methods is their ability to handle a large number of highly correlated variables and to reduce the high-dimensional process measurements into a low-dimensional latent space. Monitoring based on these methods is straightforward.
PCA is one of the most widely used linear techniques for transforming data into a new space. It divides the data information into significant patterns, such as linear tendencies or directions in the model subspace, and uncertainties, such as noises or outliers located in the residual subspace. The T² statistic and the SPE (Q) statistic, represented by Mahalanobis and Euclidean distances, are used to elucidate the pattern variations in the model and residual subspaces, respectively [17–19]. PLS decomposition methods are used similarly to PCA for process monitoring and are more effective in supervising the variations in the process variables that are more influential on quality variables [20–22]. The T² and SPE statistics are also employed in the PLS monitoring system. These methods develop a normal operating model with data gathered from the normal process and define the normal operation regions. The new process behaviors can thus be compared with the predefined ones by the monitoring system to check whether they remain normal or not. When the process moves out of the normal operation regions, it is concluded that an “unusual and faulty” change in the process behavior has occurred. Nowadays, many extensions of the conventional PCA/PLS algorithms have been reported [15, 23–31]. Recently, Li et al. proposed total projection to latent structures (T-PLS) and discussed a policy of process monitoring and fault diagnosis based on the new structure [32, 33]. They analyzed the problem faced in the conventional PLS-based process monitoring policy, which only divides the measured variable space into two subspaces and uses two monitoring statistics for the PLS scores and residuals, respectively. They indicated that output-irrelevant variations are also included in the PLS scores and that the PLS residuals do not necessarily cover only small variations.
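The T²/SPE monitoring scheme described above can be sketched in a few lines of Python (an illustrative sketch, not the paper's code; the function and variable names are assumptions made here for clarity):

```python
import numpy as np

def pca_monitor(X_train, X_new, n_pc=2):
    """Sketch of PCA-based monitoring: T^2 for the model subspace
    (Mahalanobis distance) and SPE for the residual subspace
    (Euclidean distance).

    X_train: normal operating data (N x m); X_new: samples to monitor.
    """
    # Standardize with the training mean/std, as is usual in MSPC.
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0, ddof=1)
    Xc = (X_train - mu) / sigma
    # Principal directions from the SVD of the scaled normal data.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_pc].T                            # loadings of the model subspace
    lam = (S[:n_pc] ** 2) / (Xc.shape[0] - 1)  # PC variances
    Z = (X_new - mu) / sigma
    T = Z @ P                                  # scores in the model subspace
    t2 = np.sum(T ** 2 / lam, axis=1)          # Mahalanobis distance (T^2)
    resid = Z - T @ P.T                        # part not explained by the model
    spe = np.sum(resid ** 2, axis=1)           # Euclidean distance (SPE/Q)
    return t2, spe
```

A new sample is flagged when its T² or SPE value exceeds the limit derived from the normal data.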
The proposed T-PLS algorithm further decomposed the PLS systematic subspace to separate the output-orthogonal part from the output-correlated part, and the PLS residual subspace to separate large variations from noises. T-PLS based monitoring system was then developed based on the four-process subspace.
KPCA is a nonlinear version of PCA. It can efficiently compute PCs in a high-dimensional feature space using nonlinear kernel functions. The core idea of KPCA is to first map the data space into a feature space using a nonlinear mapping and then carry out the PCA operation in the feature space. KPCA divides the data into a systematic subspace and a residual subspace and uses the T² statistic and the SPE statistic to monitor these two subspaces, respectively [13, 15, 16, 34].
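As an illustration of this KPCA procedure, the following Python sketch builds the Gram matrix with a Gaussian kernel, centers it in the feature space, and extracts normalized principal directions (a sketch only, not the authors' implementation; the kernel width and all names are assumptions):

```python
import numpy as np

def rbf_kernel(A, B, c=5.0):
    """Gaussian kernel k(x, y) = exp(-||x - y||^2 / c); c is a user-chosen width."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / c)

def kpca_fit(X, n_pc=2, c=5.0):
    """Fit KPCA on training data X (N x m) and return training scores."""
    N = X.shape[0]
    K = rbf_kernel(X, X, c)
    One = np.ones((N, N)) / N
    # Centre the Gram matrix in the feature space.
    Kc = K - One @ K - K @ One + One @ K @ One
    lam, A = np.linalg.eigh(Kc)                  # eigenvalues ascending
    lam, A = lam[::-1][:n_pc], A[:, ::-1][:, :n_pc]
    # Scale coefficients so the feature-space eigenvectors have unit norm.
    A = A / np.sqrt(lam)
    scores = Kc @ A                              # principal components of training data
    return scores, A, K
```

The eigenproblem of the centered Gram matrix replaces the (intractable) eigenproblem of the feature-space covariance, which is the essence of the kernel trick.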
In this paper, to improve the KPCA model, a fault-relevant KPCA algorithm is proposed, and a process monitoring approach based on the new algorithm is developed for fault detection. The proposed method further decomposes both the KPCA principal space and residual space into two subspaces by checking the influence of process disturbances. The basic objective of the further subspace decomposition is to separate the part which is influenced greatly by the fault from the part that is not clearly fault-relevant, that is, to find the fault-relevant directions and fault-relevant principal components. A new monitoring method is then proposed based on the fault-relevant directions. Compared with traditional statistical techniques, the fault subspace is separated based on the fault-relevant influence.
The remaining sections of this paper are organized as follows. Section 2 revisits the KPCA model and then presents the fault-relevant KPCA algorithm. Section 3 introduces the model development and on-line monitoring method of fault-relevant KPCA. In Section 4, simulation results are given to illustrate the effectiveness of the new method. Finally, conclusions are drawn in Section 5.
2. Algorithm of Fault-Relevant KPCA
For the traditional PCA algorithm, some faults may not influence all the principal directions; that is, for a given fault, some principal directions are not relevant. The KPCA algorithm extends PCA to nonlinear data, so it shares this characteristic. The proposed fault-relevant KPCA algorithm finds the principal directions which are relevant to, or affected by, the disturbances and then measures the changes of the variation along these principal directions. Therefore the proposed algorithm has higher sensitivity and accuracy for process monitoring and can detect faults faster.
The purpose of the proposed algorithm is to obtain the fault-relevant principal directions of the systematic subspace and those of the residual subspace. With the obtained fault-relevant principal directions and a new set of data, the scores of the new data can be computed. The T² and SPE statistics can then be calculated to monitor the process.
In KPCA, the training samples x_1, …, x_N obtained from the normal process are mapped into a feature space F using a nonlinear mapping Φ. The covariance matrix in the feature space can be calculated as follows:

C = (1/N) Σ_{j=1}^{N} Φ(x_j)Φ(x_j)^T, (2.1)

where it is assumed that Σ_{j=1}^{N} Φ(x_j) = 0, and Φ(·) is a nonlinear mapping function that projects the input vectors from the input space to F. Principal components in F can be obtained by finding the eigenvectors of C, which parallels the PCA procedure in the input space:

λv = Cv, (2.2)

where λ denotes the eigenvalues and v denotes the eigenvectors of the covariance matrix C.
For λ ≠ 0, the solution (eigenvector) v can be regarded as a linear combination of Φ(x_1), …, Φ(x_N), that is, v = Σ_{i=1}^{N} α_i Φ(x_i).
Using the kernel trick k(x_i, x_j) = Φ(x_i)^T Φ(x_j), the eigenvalue problem can be expressed in a simplified form as follows:

Nλα = Kα, (2.3)

where α = [α_1, …, α_N]^T and K ∈ R^{N×N} is a Gram matrix composed of [K]_{ij} = k(x_i, x_j).
Then the calculation is equivalent to solving the eigenproblem of (2.3). To satisfy the zero-mean assumption, K must be mean-centered before the calculation. The centered Gram matrix can be easily obtained by K̃ = K − 1_N K − K 1_N + 1_N K 1_N, where each element of 1_N ∈ R^{N×N} is equal to 1/N. Also, the coefficient vector α should be normalized to satisfy Nλ(α^T α) = 1, which corresponds to the normality constraint ||v|| = 1 of the eigenvector.
The scores of a vector x are then extracted by projecting Φ(x) onto the eigenvectors in F, and the number of scores is N. According to some selection principle, p scores are selected as principal components, and the corresponding p directions are obtained at the same time [36, 37]. The p selected directions span the systematic subspace and the remaining N − p directions span the residual subspace. The kth PC of x is

t_k = v_k^T Φ(x) = Σ_{i=1}^{N} α_i^k k(x_i, x), (2.4)

where k = 1, …, p, and p is the number of principal components.
Now that the PCs of the training data have been obtained, the PCs that are relevant to faults are found as follows.
First, the fault process space is separated into a systematic subspace and a residual subspace following the same separation rule as the normal process space. A data set collected from a fault case is projected into the feature space with the same mapping function, which yields the fault-case scores; the selected directions span the systematic subspace of the fault data.
The fault-relevant PCs of the normal data can be calculated following the same principle.
In this way, the largest fault-relevant directions of the normal data and the fault data are revealed, respectively. Define the ratio of the fault-relevant PC variances between the fault case and the normal case along each direction: the variance of the jth fault-case PC divided by the variance of the jth normal-case PC.
The largest ratio marks the direction along which there is the largest change of process variation from normal status to the fault case. A ratio smaller than 1 means the concerned variations in the fault status are smaller than those in the normal case. The directions with ratios larger than 1 are kept; these are the fault-relevant directions with increased variations. The retained fault-relevant principal directions form one group and the remaining fault-irrelevant directions form the other, and the PCs of the normal data split accordingly into a fault-relevant part and a fault-irrelevant part.
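The variance-ratio screening described above can be illustrated as follows (a sketch with hypothetical names; the paper's exact notation is not reproduced):

```python
import numpy as np

def fault_relevant_directions(T_normal, T_fault):
    """Screen principal directions by the fault-to-normal variance ratio.

    T_normal, T_fault: score matrices (samples x k) of normal and fault data
    projected on the same k principal directions. A direction is kept as
    fault-relevant when its variance ratio exceeds 1.
    """
    var_n = T_normal.var(axis=0, ddof=1)
    var_f = T_fault.var(axis=0, ddof=1)
    ratio = var_f / var_n                 # >1: variation increased under the fault
    keep = np.where(ratio > 1.0)[0]
    # Sort the retained directions by decreasing ratio (largest change first).
    return keep[np.argsort(ratio[keep])[::-1]], ratio
```

The same screening is reused later for the residual subspace, with squared errors in place of PC variances.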
Together, the fault-relevant and fault-irrelevant principal directions span the systematic subspace, and the remaining principal directions span the residual subspace. The number of principal directions in the residual subspace, and the corresponding direction matrix, are defined in the same way.
Then the PCs of the normal case in the residual subspace can be calculated by projecting the normal data onto the residual directions. Similarly, the PCs of the fault case in the residual subspace can be calculated by projecting the fault data onto the residual directions.
Following the approach of (2.7), the fault-relevant principal directions and principal components in the residual subspace of the fault case can be obtained, respectively. The principal components in the residual subspace of the normal case can then also be worked out with the fault-relevant principal directions obtained in (2.15).
This yields the fault-relevant residual subspaces of the fault case and of the normal case. Define the ratio of the squared errors between the fault case and the normal case along each direction in the fault-relevant residual subspace.
The largest ratio marks the direction along which there is the largest change of squared errors from normal status to the fault case. The fault-relevant residual directions with ratios larger than 1 are kept; these are the fault-relevant directions with increased squared errors, and their number gives the final dimension of the fault-relevant residual subspace. Correspondingly, the fault-relevant residual subspace is spanned by the sorted retained directions, while the remaining fault-irrelevant directions, together with the corresponding PCs of the normal case, form the fault-irrelevant residual part.
There exist a number of kernel functions. According to Mercer’s theorem of functional analysis, there exists a mapping into a space where a kernel function acts as a dot product if the kernel function is a continuous kernel of a positive integral operator. Hence, the requirement on the kernel function is that it satisfies Mercer’s theorem. Theoretically, all functions that satisfy Mercer’s theorem can be utilized; the most widely used kernel functions include the Gaussian function k(x, y) = exp(−||x − y||²/c), the polynomial kernel k(x, y) = (x^T y + 1)^d, and the sigmoid kernel k(x, y) = tanh(β₀ x^T y + β₁), where c, d, β₀, and β₁ are specified a priori by the user. The Gaussian kernel is selected in this paper for its good performance.
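The three kernels can be written directly; the parameter names c, d, beta0, and beta1 below mirror the user-specified constants mentioned in the text (the values are illustrative assumptions):

```python
import numpy as np

def gaussian_kernel(x, y, c=10.0):
    """Gaussian kernel: exp(-||x - y||^2 / c)."""
    return np.exp(-np.sum((x - y) ** 2) / c)

def polynomial_kernel(x, y, d=2):
    """Polynomial kernel of degree d: (x . y + 1)^d."""
    return (x @ y + 1.0) ** d

def sigmoid_kernel(x, y, beta0=1.0, beta1=0.0):
    """Sigmoid kernel: tanh(beta0 * (x . y) + beta1)."""
    return np.tanh(beta0 * (x @ y) + beta1)
```

Only kernels satisfying Mercer's condition guarantee a valid feature-space dot product; the sigmoid kernel, for instance, is Mercer-admissible only for certain parameter choices.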
3. On-Line Monitoring Strategy of Fault-Relevant KPCA
The fault-relevant KPCA-based monitoring method is similar to that using KPCA. Hotelling’s T² statistic and the Q statistic in the feature space can be interpreted in the same way. The two systematic subspaces each have their own T² statistic, and the two residual subspaces each have their own Q statistic. Define a T² statistic for the fault-relevant systematic subspace and one for the fault-irrelevant systematic subspace, and a Q statistic for the fault-relevant residual subspace and one for the fault-irrelevant residual subspace. For a new data set, these four statistics are computed from the corresponding fault-relevant and fault-irrelevant scores and residuals.
In (3.3), the summation runs over the components along the fault-relevant directions of the residual subspace; in (3.4), it runs over the components along the remaining fault-irrelevant directions of the residual subspace.
The confidence limit of T² is obtained using the F-distribution:

T²_lim = [p(N − 1)/(N − p)] F_{p, N−p; α},

where N is the number of samples in the model and p is the number of PCs.
The confidence limit of Q can be computed from its approximate distribution:

Q_lim = g χ²_{h; α},

where g is a weighting parameter included to account for the magnitude of Q and h accounts for the degrees of freedom. If m and v are the estimated mean and variance of the Q values, then g and h can be approximated by g = v/(2m) and h = 2m²/v.
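Both control limits can be computed with scipy.stats. The T² limit below uses one common F-distribution form from the MSPC literature (the paper's exact constants are not reproduced here, so this is a hedged sketch), and the Q limit follows the g·χ²(h) approximation stated above:

```python
import numpy as np
from scipy import stats

def t2_limit(n_samples, n_pc, alpha=0.99):
    """T^2 control limit from the F-distribution (one common MSPC form;
    assumed here, not taken verbatim from the paper)."""
    N, k = n_samples, n_pc
    return k * (N - 1) * (N + 1) / (N * (N - k)) * stats.f.ppf(alpha, k, N - k)

def spe_limit(spe_values, alpha=0.99):
    """Q (SPE) control limit from the weighted chi-squared approximation
    g * chi2(h), with g = v/(2m) and h = 2m^2/v estimated from the SPE
    values of normal data."""
    m = np.mean(spe_values)
    v = np.var(spe_values, ddof=1)
    g, h = v / (2 * m), 2 * m ** 2 / v
    return g * stats.chi2.ppf(alpha, h)
```

In practice the SPE values of the normal training data supply the moment estimates m and v, and the limits are then fixed for on-line use.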
3.1. Developing the Different Fault-Relevant Models
(1) Acquire normal operating data and several different known fault data sets.
(2) Given a set of m-dimensional normal operating data and a set of m-dimensional fault data, compute the kernel matrix of each data set.
(3) Carry out centering in the feature space for both kernel matrices. For different faults, different centered kernel matrices, that is, different models, can be obtained.
(4) Solve the eigenvalue problem and normalize the coefficient vectors α so that the corresponding eigenvectors have unit norm.
3.2. On-Line Monitoring
The main idea of on-line monitoring is that different models are developed with different fault data sets. Monitoring statistics are calculated in all of these models with the on-line data at the same time. When the monitoring statistics of one model go out of the confidence limits, the abnormality is detected and the fault is identified at the same time, that is, as the type of fault with which that model was developed. The specific steps are as follows.
(1) Obtain new data for each sample.
(2) Given the m-dimensional test data, compute the kernel vector between the test sample and the normal operating data.
(3) Mean-center the test kernel vector using the quantities obtained in steps 2 and 3 of the modeling procedure.
(4) For the test data, compute the scores in the fault-relevant and fault-irrelevant subspaces of each model.
(5) Calculate the monitoring statistics of the four subspaces of the test data in the different models.
(6) Monitor whether T² or Q exceeds its control limit calculated in the modeling procedure.
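Step (3) of the on-line procedure, centering the test kernel vector against the training data, can be sketched as follows (variable names are assumptions for the sketch):

```python
import numpy as np

def center_test_kernel(k_t, K_train):
    """Centre the kernel vector of a new sample against the training set.

    k_t: (N,) kernel evaluations between the new sample and the N training
    samples; K_train: (N, N) uncentred training Gram matrix. Returns the
    centred test kernel vector used to compute the test scores.
    """
    N = K_train.shape[0]
    one_n = np.ones(N) / N            # averaging row vector
    One = np.ones((N, N)) / N         # averaging matrix
    # k_t - 1_t K - k_t 1_N + 1_t K 1_N, element by element.
    return (k_t - one_n @ K_train
            - k_t @ One + one_n @ K_train @ One)
```

A quick consistency check: when the "new" sample is actually one of the training samples, the centred test kernel vector must coincide with the corresponding row of the centred training Gram matrix.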
4. Simulation Study
The proposed fault-relevant KPCA method was applied to fault detection and diagnosis in benchmark simulations of the Tennessee Eastman process and the penicillin fermentation process and compared with the conventional KPCA model.
4.1. Tennessee Eastman Benchmark
The well-known TE process has been widely used for testing various process monitoring and fault diagnosis methods [11, 12] since it was first introduced by Downs and Vogel. The process consists of five major operation units: a reactor, a product condenser, a vapor-liquid separator, a recycle compressor, and a product stripper. It contains two blocks of process variables: 41 measured variables and 11 manipulated variables. Process measurements are sampled at an interval of three minutes. Details of the process description can be found in Downs and Vogel’s work.
As a complex chemical process, the TE process provides a superior simulation platform to validate the proposed method. In this study, fifty-two variables, including the 41 process measurement variables and 11 manipulated variables, are used. Four hundred and eighty normal samples are used for model identification. Fifteen known faults as described in Downs and Vogel’s work are considered. Faults 1–7 are associated with step changes in different process variables, for example, in the feed ratio and feed temperature. Faults 8–12 are associated with random variations in certain variables, for example, an increase in the variability of the reactor cooling water inlet temperature. For Fault 13, there is a slow drift in the reaction kinetics. For Faults 14 and 15, two cooling water valves are stuck.
Based on the KPCA algorithm, the normal process space is first decomposed into a systematic subspace and a residual subspace. Then some fault-relevant directions and principal components are picked out from the systematic subspace with the help of information extracted from the fault data. In this article, Fault 1, Fault 7, and Fault 13 are used to develop different monitoring models. In the models built with these faults, all the principal components in the residual subspace are fault-relevant, so the Q charts are the same as those of KPCA.
For Fault 1, Figure 1(a) shows the T² statistic values calculated with the fault-relevant principal components obtained by the fault-relevant KPCA method, and Figure 1(b) shows the KPCA T² statistic values. The fault-relevant T² statistic goes out of control at an earlier sample than the KPCA T² statistic gives its alarm signals; that is, the fault-relevant statistic detected the fault earlier.
Figure 1: (a) fault-relevant KPCA; (b) KPCA.
For Fault 7, the results in Figure 2 show that the fault-relevant T² statistic notices the fault earlier than the KPCA T² statistic. The real T² chart for this fault goes down when it detects the fault; therefore it was inverted so that the conventional confidence limit of the chart could be used to detect the fault. The fault-relevant method detected the fault at an earlier sample than the KPCA method.
Figure 2: (a) fault-relevant KPCA; (b) KPCA.
For Fault 13, as shown in Figure 3, the two statistics have the same monitoring result: both detected the fault at the same sample.
Figure 3: (a) fault-relevant KPCA; (b) KPCA.
In summary, the proposed method pays more attention to the fault-relevant process variations and separates them from the fault-irrelevant variations for monitoring, whereas the KPCA model treats them together. For Fault 1 and Fault 7, the monitoring results show that the fault-relevant KPCA-based monitoring performance is better than that based on the KPCA model; for Fault 13, the monitoring performance of the proposed method is not worse than that based on KPCA.
The choice of the kernel parameter is important for KPCA and other kernel methods and affects their performance. Similarly, the kernel parameter is an influential factor in the proposed method and its monitoring: as the kernel parameter changes, the shape of the monitoring chart changes. For some faults, a kernel parameter that is good for the T² statistic of KPCA may not be appropriate for the fault-relevant T² statistic, which is sensitive to the fault under another kernel parameter. For some faults, the fault-relevant principal components are sensitive, but the statistic calculated from them is not satisfactory. Therefore, for some faults, the proposed method does not achieve a satisfactory performance.
4.2. Penicillin Fermentation
In this section, the proposed method is applied to the monitoring of a well-known benchmark process, the penicillin fermentation process. A flow diagram of the penicillin fermentation process is given in Figure 4. Trajectories of nine variables from a nominal batch run are shown in Figure 5. The production of secondary metabolites such as antibiotics has been the subject of many studies because of its academic and industrial importance. Here, we focus on the process to produce penicillin, which has nonlinear dynamics and multiphase characteristics. In the typical operating procedure for the modeled fed-batch fermentation, most of the necessary cell mass is obtained during the initial preculture phase. When most of the initially added substrate has been consumed by the microorganisms, the substrate feed begins. Penicillin starts to be generated in the exponential growth phase and continues to be produced until the stationary phase. A low substrate concentration in the fermentor is necessary for achieving a high product formation rate due to catabolite repression; consequently, glucose is fed continuously during fermentation rather than added all at the beginning. In the present simulation experiment, a total of 60 reference batches are generated using a simulator (PenSim v2.0). A detailed process description is given at http://www.chee.iit.edu/~cinar/software.htm. These simulations are run under closed-loop control of pH and temperature, while glucose addition is performed in open loop. Small variations are automatically added to mimic real normal operating conditions under the default initial settings. The duration of each batch is 400 h, consisting of a preculture phase of about 45 h and a fed-batch phase of about 355 h [41, 42].
The models are constructed using the proposed method and then tested, against KPCA, on the monitoring of fault batches. Fault 1 is implemented by introducing a 10% step increase in the aeration rate at 100 h, retained until 300 h. Fault 2 is implemented by introducing a 2% step increase in the aeration rate at 100 h, retained until 300 h. Fault 3 is implemented by introducing a 10% step increase in the agitator power at 100 h, retained until 300 h. The monitoring results are shown in Figures 6, 7, and 8, respectively. As shown in Figure 6, both the proposed fault-relevant KPCA method and KPCA can detect faults that vary in large ranges. In our study, when the faults vary in a small range, the proposed method can still detect them successfully, but the statistics of KPCA cannot, as shown in Figures 7 and 8. Therefore the proposed method can detect tiny faults and is more sensitive than KPCA for these faults.
Figure 6: (a) fault-relevant KPCA; (b) KPCA.
Figure 7: (a) fault-relevant KPCA; (b) KPCA.
Figure 8: (a) fault-relevant KPCA; (b) KPCA.
5. Conclusions
In this article, the fault-relevant KPCA algorithm is proposed to decompose the process variations from the fault-relevant perspective. By further decomposing the KPCA subspaces, the underlying process information can be examined more comprehensively, which helps in the detection of abnormal changes. Fault-relevant principal components extracted from the KPCA systematic subspace and residual subspace are used to monitor the process. With fault-relevant principal components, instead of all the principal components, some of which may not be influenced by the disturbances, better monitoring results are obtained. Case studies on the TE process and the penicillin fermentation process are performed to show the performance of the fault-relevant KPCA algorithm for process monitoring. In general, swifter and more sensitive fault detection is reported in comparison with the conventional KPCA method.
Acknowledgments
This work is supported by China’s National 973 Program (2009CB320602 and 2009CB320604) and the NSF of China (60974057 and 61020106003).
References
- C.-C. Hsu and C.-T. Su, “An adaptive forecast-based chart for non-Gaussian processes monitoring: with application to equipment malfunctions detection in a thermal power plant,” IEEE Transactions on Control Systems Technology, vol. 19, no. 5, pp. 1245–1250, 2010.
- P. A. Samara, G. N. Fouskitakis, J. S. Sakallariou, and S. D. Fassois, “A statistical method for the detection of sensor abrupt faults in aircraft control systems,” IEEE Transactions on Control Systems Technology, vol. 16, no. 4, pp. 789–798, 2008.
- T. Chen and J. Zhang, “On-line multivariate statistical monitoring of batch processes using Gaussian mixture model,” Computers and Chemical Engineering, vol. 34, no. 4, pp. 500–507, 2010.
- B. Zhang, C. Sconyers, C. Byington, R. Patrick, M. E. Orchard, and G. Vachtsevanos, “A probabilistic fault detection approach: application to bearing fault detection,” IEEE Transactions on Industrial Electronics, vol. 58, no. 5, pp. 2011–2018, 2011.
- S. J. Qin, “Statistical process monitoring: basics and beyond,” Journal of Chemometrics, vol. 17, no. 8-9, pp. 480–502, 2003.
- Q. Chen and U. Kruger, “Analysis of extended partial least squares for monitoring large-scale processes,” IEEE Transactions on Control Systems Technology, vol. 13, no. 5, pp. 807–813, 2005.
- B. Ayhan, M. Y. Chow, and M. H. Song, “Multiple discriminant analysis and neural-network-based monolith and partition fault-detection schemes for broken rotor bar in induction motors,” IEEE Transactions on Industrial Electronics, vol. 53, no. 4, pp. 1298–1308, 2006.
- U. Kruger, S. Kumar, and T. Littler, “Improved principal component monitoring using the local approach,” Automatica. A Journal of IFAC, the International Federation of Automatic Control, vol. 43, no. 9, pp. 1532–1542, 2007.
- C. F. Alcala and S. J. Qin, “Reconstruction-based contribution for process monitoring,” Automatica, vol. 45, no. 7, pp. 1593–1600, 2009.
- R. Muradore and P. Fiorini, “A PLS-based statistical approach for fault detection and isolation of robotic manipulators,” IEEE Transactions on Industrial Electronics, vol. 59, no. 8, pp. 3167–3175, 2012.
- G. Li, S. J. Qin, and D. Zhou, “Geometric properties of partial least squares for process monitoring,” Automatica. A Journal of IFAC, the International Federation of Automatic Control, vol. 46, no. 1, pp. 204–210, 2010.
- G. Li, S. Joe Qin, and D. Zhou, “Output relevant fault reconstruction and fault subspace extraction in total projection to latent structures models,” Industrial and Engineering Chemistry Research, vol. 49, no. 19, pp. 9175–9183, 2010.
- J. M. Lee, C. K. Yoo, S. W. Choi, P. A. Vanrolleghem, and I. B. Lee, “Nonlinear process monitoring using kernel principal component analysis,” Chemical Engineering Science, vol. 59, no. 1, pp. 223–234, 2004.
- J. F. MacGregor and T. Kourti, “Statistical process control of multivariate processes,” Control Engineering Practice, vol. 3, no. 3, pp. 403–414, 1995.
- G. Lee, C. Han, and E. S. Yoon, “Multiple-fault diagnosis of the Tennessee Eastman process based on system decomposition and dynamic PLS,” Industrial and Engineering Chemistry Research, vol. 43, no. 25, pp. 8037–8048, 2004.
- J. M. Lee, C. Yoo, and I. B. Lee, “Statistical process monitoring with independent component analysis,” Journal of Process Control, vol. 14, no. 5, pp. 467–485, 2004.
- S. Wold, K. Esbensen, and P. Geladi, “Principal component analysis,” Chemometrics and Intelligent Laboratory Systems, vol. 2, no. 1–3, pp. 37–52, 1987.
- G. H. Dunteman, Principal Component Analysis, SAGE publication LTD, London, UK, 1989.
- J. E. Jackson, A User's Guide to Principal Components, Wiley, New York, NY, USA, 1991.
- D. G. Kleinbaum, L. L. Kupper, and K. E. Muller, Applied Regression Analysis and Other Multivariable Methods, Wadsworth Publishing Co Inc, Belmont, Calif, USA, 1988.
- A. J. Burnham, R. Viveros, and J. F. Macgregor, “Frameworks for latent variable multivariate regression,” Journal of Chemometrics, vol. 10, no. 1, pp. 31–45, 1996.
- B. S. Dayal and J. F. Macgregor, “Improved PLS algorithms,” Journal of Chemometrics, vol. 11, no. 1, pp. 73–85, 1997.
- S. J. Qin, “Recursive PLS algorithms for adaptive data modeling,” Computers and Chemical Engineering, vol. 22, no. 4-5, pp. 503–514, 1998.
- C. Zhao, F. Wang, and Y. Zhang, “Nonlinear process monitoring based on kernel dissimilarity analysis,” Control Engineering Practice, vol. 17, no. 1, pp. 221–230, 2009.
- Y. W. Zhang, H. Zhou, and S. J. Qin, “Decentralized fault diagnosis of large-scale processes using multiblock kernel principal component analysis,” Zidonghua Xuebao/ Acta Automatica Sinica, vol. 36, no. 4, pp. 593–597, 2010.
- Y. Zhang, H. Zhou, S. J. Qin, and T. Chai, “Decentralized fault diagnosis of large-scale processes using multiblock kernel partial least squares,” IEEE Transactions on Industrial Informatics, vol. 6, no. 1, pp. 3–10, 2010.
- Y. Zhang and Z. Hu, “Multivariate process monitoring and analysis based on multi-scale KPLS,” Chemical Engineering Research and Design, vol. 89, no. 12, pp. 2667–2678, 2011.
- H. D. Jin, Y. H. Lee, G. Lee, and C. Han, “Robust recursive principal component analysis modeling for adaptive monitoring,” Industrial and Engineering Chemistry Research, vol. 45, no. 2, pp. 696–703, 2006.
- Y. H. Lee, H. D. Jin, and C. Han, “On-line process state classification for adaptive monitoring,” Industrial and Engineering Chemistry Research, vol. 45, no. 9, pp. 3095–3107, 2006.
- J. Yang, A. F. Frangi, J. Y. Yang, D. Zhang, and Z. Jin, “KPCA plus LDA: a complete kernel fisher discriminant framework for feature extraction and recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 2, pp. 230–244, 2005.
- X. Wang, U. Kruger, G. W. Irwin, G. McCullough, and N. McDowell, “Nonlinear PCA with the local approach for diesel engine fault detection and diagnosis,” IEEE Transactions on Control Systems Technology, vol. 16, no. 1, pp. 122–129, 2008.
- D. Zhou, G. Li, and S. J. Qin, “Total projection to latent structures for process monitoring,” AIChE Journal, vol. 56, pp. 168–178, 2010.
- G. Li, C. F. Alcala, S. J. Qin, and D. Zhou, “Generalized reconstruction-based contributions for output-relevant fault diagnosis with application to the tennessee Eastman process,” IEEE Transactions on Control Systems Technology, vol. 19, no. 5, pp. 1114–1127, 2010.
- J. H. Cho, J. M. Lee, S. W. Choi, D. Lee, and I. B. Lee, “Fault identification for process monitoring using kernel principal component analysis,” Chemical Engineering Science, vol. 60, no. 1, pp. 279–288, 2005.
- S. W. Choi, C. Lee, J. M. Lee, J. H. Park, and I. B. Lee, “Fault detection and identification of nonlinear processes based on kernel PCA,” Chemometrics and Intelligent Laboratory Systems, vol. 75, no. 1, pp. 55–67, 2005.
- S. Valle, W. Li, and S. J. Qin, “Selection of the number of principal components: the variance of the reconstruction error criterion with a comparison to other methods,” Industrial and Engineering Chemistry Research, vol. 38, no. 11, pp. 4389–4401, 1999.
- S. Wold, “Cross-validatory estimation of the number of components in factor and principal component models,” Technometrics, vol. 20, no. 4, pp. 397–405, 1978.
- C. A. Lowry and D. C. Montgomery, “Review of multivariate control charts,” IIE Transactions, vol. 27, no. 6, pp. 800–810, 1995.
- P. Nomikos and J. F. MacGregor, “Multivariate SPC charts for monitoring batch processes,” Technometrics, vol. 37, no. 1, pp. 41–59, 1995.
- J. J. Downs and E. F. Vogel, “A plant-wide industrial process control problem,” Computers and Chemical Engineering, vol. 17, no. 3, pp. 245–255, 1993.
- Y. Zhang and Y. Zhang, “Complex process monitoring using modified partial least squares method of independent component regression,” Chemometrics and Intelligent Laboratory Systems, vol. 98, no. 2, pp. 143–148, 2009.
- Y. Zhang, S. Li, and Y. Teng, “Dynamic processes monitoring using recursive kernel principal component analysis,” Chemical Engineering Science, vol. 72, pp. 78–86, 2012.
Copyright © 2012 Yingwei Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.