Mathematical Problems in Engineering

Volume 2015, Article ID 923584, 14 pages

http://dx.doi.org/10.1155/2015/923584

## An Improved Generalized Predictive Control in a Robust Dynamic Partial Least Square Framework

State Key Lab of Industrial Control Technology, Department of Control Science & Engineering, Zhejiang University, Hangzhou 310027, China

Received 20 March 2015; Accepted 16 September 2015

Academic Editor: Qing Chang

Copyright © 2015 Jin Xin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

To tackle the sensitivity to outliers in system identification, a new robust dynamic partial least squares (PLS) model based on an outlier detection method is proposed in this paper. An improved radial basis function network (RBFN) is adopted to construct the predictive model from the input and output datasets, and a hidden Markov model (HMM) is applied to detect the outliers. After the outliers are removed, a more robust dynamic PLS model is obtained. In addition, an improved generalized predictive control (GPC) with tuning weights under the dynamic PLS framework is proposed to deal with the interaction caused by model mismatch. The results of two simulations demonstrate the effectiveness of the proposed method.

#### 1. Introduction

In the past decades, PLS, a multivariable regression method, has been applied in many areas, such as quality prediction, process monitoring, and chemometrics [1]. PLS is not only able to extract principal components from both input and output datasets, but also able to determine the direction along which the input and output data have the largest covariance [2]. Considering its advantages of dimension reduction and automatic decoupling, many researchers have applied PLS to the modeling and control of dynamic systems. Kaspar and Ray [3, 4] proposed a dynamic PLS framework by utilizing the PLS loading matrices to construct precompensators and postcompensators. Chen et al. [5] proposed another dynamic PLS framework with an ARX model. Laurí et al. [6] proposed a PLS-based model predictive control (MPC) relevant identification method.
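As a concrete illustration of this covariance-maximizing direction, the following Python sketch implements the first component of the standard NIPALS iteration on toy data. The function name and the toy dataset are illustrative assumptions, not from the paper.

```python
import numpy as np

def pls_first_component(X, Y, tol=1e-10, max_iter=500):
    """NIPALS loop for the first PLS component: find weight vectors w, q
    so that the scores t = X w and u = Y q have maximal covariance."""
    u = Y[:, [0]]                      # initialize the Y-score with a column of Y
    for _ in range(max_iter):
        w = X.T @ u / (u.T @ u)        # X weights
        w /= np.linalg.norm(w)
        t = X @ w                      # X scores
        q = Y.T @ t / (t.T @ t)        # Y weights
        q /= np.linalg.norm(q)
        u_new = Y @ q                  # Y scores
        if np.linalg.norm(u_new - u) < tol:
            break
        u = u_new
    return w, t, q, u

# toy data: the output depends mainly on the first input direction
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Y = X[:, [0]] * 2.0 + 0.01 * rng.normal(size=(100, 1))
w, t, q, u = pls_first_component(X, Y)
print(np.round(np.abs(w.ravel()), 2))  # weight concentrated on the first input
```

The dominant entry of `w` recovers the direction of largest input-output covariance, which is exactly the property PLS exploits for dimension reduction.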

There are many uncertain factors in industrial fields, which means that clean data cannot be acquired directly from actual systems. It is well known that the standard algorithms for PLS regression (NIPALS and SIMPLS) [7] are very sensitive to outliers in the dataset. Outliers can distort the curve of the data, which may lead to undesirable results if standard PLS is applied directly in practical applications. To overcome this problem, several robust PLS methods have been proposed in recent years. One of them robustifies the sample covariance matrix between the input and output datasets based on the SIMPLS algorithm [8, 9]. Another is PLS calibration with outlier detection, which randomly selects a subset of the samples to obtain an initial residual set and then detects and discards outliers in that subset [10]. These methods are not suitable for the dynamic PLS control framework because the samples form a time series and cannot be selected randomly. Moreover, these methods focus only on regression; they are not suited to dynamic modeling.

There is one way to obtain a robust dynamic PLS model: for contaminated data, data preprocessing methods can be applied to detect outliers and eliminate their influence. Various methods for outlier detection have been proposed in the literature, such as statistics-based, machine learning-based, and neural network-based algorithms. In these methods, outliers are assumed to be points far away from the majority of the data; however, this assumption is not always valid [11]. Outliers can also lie close to the main curve of the data. In order to detect such general outliers in a modeling dataset, the relationship between inputs and outputs must be known. Liu et al. [11, 12] proposed an autoregressive RBFN to predict the outputs of the system and detected outliers with this predictive model combined with an HMM. The limitation is that the RBFN output covers both the inputs and the outputs of the process, although predicting the system inputs is of little use: outliers are usually contained in the output dataset, while the inputs are calculated by the controllers. Therefore, it is not worthwhile to increase the dimension of the network weights in order to estimate the values of the inputs. In addition, Liu et al. [11] analyzed the performance of the algorithm only for a single-input single-output (SISO) system at a low rate of outliers; there is no analysis of more complex situations such as multiple-input multiple-output (MIMO) systems and high outlier rates.

On the other hand, many control schemes have been applied under the dynamic PLS framework. Kaspar and Ray proposed a PID control scheme under this framework [3], and Chen et al. [5] designed multiloop adaptive PID controllers based on a modified decoupling PLS framework. Hu et al. [13, 14] proposed a multiloop internal model controller in the dynamic PLS framework and achieved better disturbance rejection performance. Lü and Liang [15] proposed a multiloop constrained MPC scheme. Ideally, with the PLS decoupled structure, there should be no interaction between PLS components [3]. However, some interactions are always observed due to plant/model mismatch, which is caused by contaminated data and incompletely filtered colored noise.

To address these limitations, a robust dynamic PLS modeling method and an improved GPC control scheme are proposed in this paper. A new RBFN structure is applied to predict the outputs of the system. Based on the difference between the prediction and the real data, a simple HMM algorithm is used to detect the outliers, which are then replaced by the mean of the nearby clean data. After outlier detection, the dynamic PLS model is more robust. Based on this model, a dynamic tuning weight sequence is introduced into the cost functions of the improved GPC in the PLS latent space to reduce the interaction between different outputs caused by mismatch between the PLS model and the real process.

The rest of the paper is organized as follows. In Section 2, an improved RBFN training method and an HMM-based outlier detection algorithm are presented. In Section 3, the dynamic PLS modeling method is introduced and an improved GPC scheme is proposed. In Section 4, two examples are used to demonstrate the effectiveness and improved control performance of the proposed method. In the last section, the conclusions are summarized.

#### 2. Outlier Detection for Modeling Data

##### 2.1. Improved Radial Basis Function Network

As a feedforward network, the RBFN has three layers: an input layer, a hidden layer, and an output layer. In many practical applications, the Gaussian RBF is used in the hidden layer, and in theory the network can approximate linear and nonlinear functions with arbitrary precision. When fitting dynamic systems, several past input and output samples should be used as the inputs of the RBFN. To introduce the new structure of the RBFN, three assumptions are first given: (1) the system has $m$ inputs and $p$ outputs; (2) only the outputs of the system contain outliers; (3) each input signal and each output signal have the same lags, $n_u$ and $n_y$, respectively. With assumption (1), systems of arbitrary input and output dimensions can be handled. Outliers in the outputs are usually caused by unmeasured factors such as unknown sensor failures; hence they are the focus here (assumption (2)). Outliers in the inputs are easy to detect, since the inputs are calculated by the controllers from set points and measured values. Assumption (3) is employed to simplify the analysis; signals with different lags can be treated as a special case of signals with the same lags. The past samples used as the inputs of the network have the form

$$\mathbf{x}(k) = \left[\mathbf{u}(k-n_u), \ldots, \mathbf{u}(k-1), \mathbf{y}(k-n_y), \ldots, \mathbf{y}(k-1)\right],$$

where $\mathbf{u}(k)$ is an $m$-dimensional vector and $\mathbf{y}(k)$ is a $p$-dimensional vector; $\mathbf{u}(k-n_u), \ldots, \mathbf{u}(k-1)$ denotes the input values from time $k-n_u$ to $k-1$, and $\mathbf{y}(k-n_y), \ldots, \mathbf{y}(k-1)$ denotes the output values from time $k-n_y$ to $k-1$; $n_u$ and $n_y$ denote the lags of the inputs and outputs, respectively.
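The lagged regressor described above can be assembled in a few lines; the following Python sketch is illustrative, with array names and toy dimensions chosen for the example rather than taken from the paper.

```python
import numpy as np

def build_regressor(u_hist, y_hist, k, n_u, n_y):
    """Stack the past inputs u(k-n_u)..u(k-1) and past outputs
    y(k-n_y)..y(k-1) into the RBFN input vector x(k).
    u_hist: (N, m) array of inputs; y_hist: (N, p) array of outputs."""
    past_u = [u_hist[k - i] for i in range(n_u, 0, -1)]   # u(k-n_u) .. u(k-1)
    past_y = [y_hist[k - i] for i in range(n_y, 0, -1)]   # y(k-n_y) .. y(k-1)
    return np.concatenate(past_u + past_y)

# toy history: m = 2 inputs, p = 1 output, lags n_u = n_y = 2
u_hist = np.arange(10).reshape(5, 2).astype(float)
y_hist = np.arange(5).reshape(5, 1).astype(float)
x = build_regressor(u_hist, y_hist, k=4, n_u=2, n_y=2)
print(x.shape)  # (m*n_u + p*n_y,) = (6,)
```

The resulting vector has dimension $mn_u + pn_y$, matching the lagged form above.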

The structure of the modified RBFN is shown in Figure 1. The past input values $\mathbf{u}(k-n_u), \ldots, \mathbf{u}(k-1)$ and past output values $\mathbf{y}(k-n_y), \ldots, \mathbf{y}(k-1)$ enter the RBFN. Denote these lagged signals by $\mathbf{z}_i$, $i = 1, \ldots, n_u + n_y$. The output of the RBFN, $\hat{\mathbf{y}}(k)$, denotes the predictive vector, which is calculated by summing the products of the hidden layer outputs $h$ and the output layer weights $\mathbf{w}$. The superscript $\mathbf{z}_i$ denotes which input or output signal a quantity belongs to, and the subscript $l$ denotes which hidden node it belongs to. Each $h_l^{\mathbf{z}_i}$ is computed from its hidden node center vector $\mathbf{c}_l^{\mathbf{z}_i}$ with the Gaussian function, where $i = 1, \ldots, n_u + n_y$ and $l = 1, \ldots, q$. The output of the RBFN is then

$$\hat{\mathbf{y}}(k) = \sum_{i=1}^{n_u + n_y} \sum_{l=1}^{q} \mathbf{w}_l^{\mathbf{z}_i}\, h_l^{\mathbf{z}_i}.$$

When a lagged signal $\mathbf{z}_i$ is input into the RBFN, there are $q$ hidden nodes for it to cluster to. The outputs of the hidden layer with the Gaussian RBF are

$$h_l^{\mathbf{z}_i} = \exp\left(-\frac{\|\mathbf{z}_i - \mathbf{c}_l^{\mathbf{z}_i}\|^2}{2\left(\sigma_l^{\mathbf{z}_i}\right)^2}\right), \quad l = 1, \ldots, q.$$

It is noted that $\mathbf{c}_l^{\mathbf{z}_i}$ is an $m$-dimensional vector when $\mathbf{z}_i$ is a lagged input $\mathbf{u}(k-j)$, or a $p$-dimensional vector when $\mathbf{z}_i$ is a lagged output $\mathbf{y}(k-j)$, because the RBFN inputs comprise both $\mathbf{u}$ and $\mathbf{y}$ vectors. The input of a conventional RBFN would be the single $(mn_u + pn_y)$-dimensional vector $\mathbf{x}(k)$, and its center vector $\mathbf{c}$ would have the same dimension. Each block of $\mathbf{c}$ would have to be chosen from the $q$ per-signal centers; therefore there are $q^{n_u + n_y}$ possible center vectors. In other words, a conventional RBFN needs $q^{n_u + n_y}$ hidden nodes to be equivalent to the improved RBFN with $(n_u + n_y)q$ hidden nodes. Since $(n_u + n_y)q$ is far less than $q^{n_u + n_y}$, the improved RBFN has far fewer hidden nodes.
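The Gaussian hidden layer and the hidden-node count comparison can be sketched as follows; this is a minimal Python illustration, and the variable names and toy sizes are assumptions rather than values from the paper.

```python
import numpy as np

def gaussian_activations(z, centers, sigma=1.0):
    """RBF hidden-layer outputs for one lagged signal z: one Gaussian node
    per center, h_l = exp(-||z - c_l||^2 / (2 sigma^2))."""
    d2 = np.sum((centers - z) ** 2, axis=1)   # squared distance to each center
    return np.exp(-d2 / (2.0 * sigma ** 2))

# q centers per lagged signal; n lagged signals enter the network
q, n = 4, 5
z = np.array([0.5, -0.2])                               # one 2-dimensional lagged signal
centers = np.random.default_rng(1).normal(size=(q, 2))  # its q hidden-node centers
h = gaussian_activations(z, centers)

# hidden-node counts: improved structure vs. a conventional full-dimension RBFN
improved = n * q        # q nodes for each of the n lagged signals
conventional = q ** n   # every combination of per-signal centers
print(improved, conventional)  # 20 1024
```

Even at these toy sizes the per-signal clustering needs 20 hidden nodes where the full-dimension network would need 1024, which is the node-count saving argued above.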