Mathematical Problems in Engineering
Volume 2015, Article ID 808903, 10 pages
http://dx.doi.org/10.1155/2015/808903
Research Article

Optimal Control for a Linear System Subject to a General ARIMA Disturbance

1School of Economics, Southwestern University of Finance and Economics, Chengdu, Sichuan 611130, China
2School of Finance, Southwestern University of Finance and Economics, Chengdu, Sichuan 611130, China

Received 28 November 2014; Revised 24 January 2015; Accepted 16 February 2015

Academic Editor: Alain Vande Wouwer

Copyright © 2015 Hongyan Xie and Fangyi He. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A novel run-to-run (R2R) control algorithm based on the Kalman filter approach is proposed to deal with a linear system subject to a general ARIMA(p, d, q) disturbance, in the presence of measurement error and adjustment error together with a random initial bias. We mathematically prove its optimality. The performance of the newly proposed controller and the proportional-integral (PI) controller is evaluated and compared under multiple scenarios through Monte Carlo simulations. Almost all the results reflect the new controller's superiority.

1. Introduction and Motivation

Optimizing a sequence of actions to attain some future goal is the general topic of control theory (see [1]). Optimal control methods have been widely used in actuarial science (e.g., [2, 3]), production management (e.g., [4–6]), and quality management (e.g., [7–9]). Specifically, disturbance rejection is one of the major concerns in quality control problems, such as machine setup adjustment problems and R2R process control problems in semiconductor manufacturing (see [10, 11]). Grubbs [12] first studies the machine setup adjustment problem and develops an optimal adjustment rule for a linear system with normal disturbances, which Trietsch [13] refers to as the "harmonic rule." Vander Wiel et al. [14] give an optimal control algorithm for a linear system with ARMA disturbances. For a linear system with IMA(1, 1) disturbances, Ingolfsson and Sachs [15] and Box et al. [16] introduce the exponentially weighted moving average (EWMA) control algorithm and prove its optimality; He et al. [17] recently develop an optimal control algorithm, the ARMA controller, for a linear system with a general ARMA(p, q) disturbance.

In this paper, we extend the results in [17] and develop an optimal control algorithm for a linear system with a general ARIMA(p, d, q) disturbance with d ≥ 1. ARIMA disturbances have been widely used to describe process dynamics (see [18–21]). Since ARIMA(p, d, q) processes can model a large class of nonstationary disturbances, many machine setup problems and R2R control problems can be solved under this new framework. Similar to [17], the newly proposed controller considers a more realistic case in which both measurement error and adjustment error exist, and the initial bias of the process is a random variable. We state the problem as follows.

Suppose that a process to be controlled can be expressed as in (1), where the observed output at each time is the sum of the process state and a measurement error, and the disturbance is an ARIMA(p, d, q) process satisfying (2), driven by a white noise process with mean 0 and constant variance; (3) defines the backward shift operator used in writing the ARIMA model. Neither the state nor the disturbance can be measured or observed directly. At each time, suppose we need to make an adjustment to bring the process output to target in the next run, as in (4), where the adjustment is subject to an adjustment error. In practice, the process adjustment is assumed to be made through one controllable factor via model (5), in which the coefficient relating the controllable factor to the output is called the process gain. The initial value of the process is assumed to be a random variable with known mean and variance. Without loss of generality, the target is assumed to be 0 in the rest of this paper. For a given positive time horizon, we wish to determine the optimal adjustments minimizing the expected sum of squared deviations from target, where the expectation is conditional on all the information available up to each time; this is the finite-horizon problem (6). Letting the horizon tend to infinity yields the infinite-horizon problem (7). The purpose of this paper is to find the solutions to both problems (6) and (7).
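To make the disturbance model concrete, the following sketch simulates an ARIMA(1, 1, 1) disturbance of the form (1 − φB)(1 − B)d_t = (1 − θB)a_t, the special case used in the simulation study of Section 4. The function and variable names are illustrative only; the paper itself contains no code.

```python
import random

def simulate_arima111(phi, theta, shocks):
    # (1 - phi*B)(1 - B) d_t = (1 - theta*B) a_t  expands to
    # d_t = (1 + phi) d_{t-1} - phi d_{t-2} + a_t - theta a_{t-1}.
    d = []
    for t, a in enumerate(shocks):
        d1 = d[t - 1] if t >= 1 else 0.0
        d2 = d[t - 2] if t >= 2 else 0.0
        a1 = shocks[t - 1] if t >= 1 else 0.0
        d.append((1 + phi) * d1 - phi * d2 + a - theta * a1)
    return d

# One simulated disturbance path of length 100 (illustrative parameters).
random.seed(1)
shocks = [random.gauss(0.0, 1.0) for _ in range(100)]
path = simulate_arima111(0.5, 0.5, shocks)
```

Note that, with a single unit shock, the response of this recursion settles at a nonzero constant level, reflecting the nonstationarity introduced by the differencing factor.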

The rest of this paper is organized as follows. In Section 2 we derive the state-space representation of the system and obtain the recursive estimation formulae for the system states using the Kalman filter. In Section 3, we develop a control algorithm for the system and prove its optimality without normal distribution assumptions. We further give the steps for implementing the controller in practice. Simulation studies are carried out in Section 4 under multiple scenarios. Section 5 gives an illustrative example of the application of the control algorithm. Concluding remarks are included in Section 6.

2. The State-Space Representation

Let the ψ-weights, ψ₀, ψ₁, …, be the coefficients in the power series expansion of the ratio of the moving-average polynomial to the product of the autoregressive polynomial and the differencing operator. Brockwell and Davis [22] gave a state-space representation for a general ARIMA(p, d, q) process. By extending their results we can obtain the following Theorem 1.
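Since the ψ-weights satisfy φ(B)(1 − B)^d ψ(B) = θ(B), they can be computed by a standard polynomial recursion. The sketch below illustrates this; the helper names are hypothetical and not from the paper.

```python
def poly_mul(p, q):
    # Multiply two polynomials given as coefficient lists, constant term first.
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def psi_weights(ar, ma, d, n):
    # ar, ma: full polynomial coefficient lists, constant term first,
    # e.g. ar = [1, -0.5] for (1 - 0.5B); d: order of differencing.
    # Solves g(B) * psi(B) = ma(B) term by term, where g = ar * (1 - B)^d.
    g = ar[:]
    for _ in range(d):
        g = poly_mul(g, [1.0, -1.0])
    psi = []
    for j in range(n):
        t = ma[j] if j < len(ma) else 0.0
        for i in range(1, min(j, len(g) - 1) + 1):
            t -= g[i] * psi[j - i]
        psi.append(t)
    return psi
```

For instance, for the ARIMA(1, 1, 1) case with φ = θ = 0.5, the recursion gives ψ-weights 1, 1, 1, …, matching the impulse response of the differenced model.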

Theorem 1. The linear system (1)–(5) admits a state-space representation consisting of an output equation (8) and a state equation (9), whose coefficient matrices are constructed from the ψ-weights together with the autoregressive and moving-average coefficients of the disturbance model.

A proof of Theorem 1 is presented in Appendix A. For simplicity, in the rest of this paper we adopt a shorthand notation in which 1 denotes a vector of ones.

That the problems of optimal control and state estimation can be decoupled in certain cases is one of the most fundamental principles in feedback control theory, known as the separation principle (see [23]). The Kalman filter produces the statistically optimal estimate of the state of the system (8)-(9). Note that the disturbances of (8) and (9) are correlated with each other, so we must use the Kalman filter formulae for correlated measurement and process noise. At any time, we define the conditional state estimate and its error covariance and assume suitable initial values. Using the results on page 123 of Lewis [24], we can directly obtain the following Lemma 2.

Lemma 2. For the system (8)-(9), the state estimate and its error covariance satisfy the following recursive formulae, in which the gain matrix is the modified Kalman gain for correlated process and measurement noise.
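In the scalar case the modified-gain recursion of Lemma 2 takes a simple form. The sketch below follows the correlated-noise one-step predictor in Lewis [24], with gain K = (aPc + s)/(cPc + r); the variable names are illustrative and not the paper's notation.

```python
def kalman_predictor_gain(a, c, q, r, s, n_iter=500, p0=1.0):
    # One-step-ahead Kalman predictor for the scalar system
    #   x[t+1] = a*x[t] + w[t],   z[t] = c*x[t] + v[t],
    # with var(w) = q, var(v) = r, and cross-covariance E[w*v] = s.
    # Modified gain for correlated noise (cf. Lewis [24]):
    #   K = (a*P*c + s) / (c*P*c + r)
    #   P <- a*P*a + q - K*(c*P*c + r)*K
    p = p0
    k = 0.0
    for _ in range(n_iter):
        innov_var = c * p * c + r
        k = (a * p * c + s) / innov_var
        p = a * p * a + q - k * innov_var * k
    return k, p

# Example: a mildly persistent scalar system with correlated noises.
K, P = kalman_predictor_gain(0.9, 1.0, 1.0, 1.0, 0.3)
```

When s = 0, the recursion reduces to the standard predictor form of the Riccati iteration.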

3. The Optimal Control Algorithm

In this section, we derive an optimal control algorithm without normal distribution assumptions on the noise terms or the initial bias. This optimal control algorithm applies to both the finite-horizon and infinite-horizon problems.

Theorem 3. The optimal control algorithm that solves both the finite-horizon problem (6) and the infinite-horizon problem (7) for the system (1)–(5) is given by (17), where the state estimate is updated according to (18) together with the recursions (14) and (15), started from the stated initial values.

Proof. For a given positive time horizon, define the optimal cost-to-go and write the corresponding Bellman equation. From (8), the conditional expectation of the squared output decomposes into the contribution of the state estimate and that of the estimation error. Using this fact and the properties of the trace operator, we obtain the expression in (19). From (13), and noting that the estimation error covariance is unaffected by the control since it is updated based only on (14) and (15), (19) simplifies. At the last period, solving the first-order condition gives (25); substituting (25) into (24) evaluates the optimal cost at that stage. Stepping back one period and solving the first-order condition again yields an adjustment rule of the same form. Repeating this procedure, we can prove that (17) is the optimal adjustment strategy at every time. Finally, (18) follows by substituting (17) into (13). Since (17) does not depend on the given time horizon, the control strategy also solves the infinite-horizon problem. This completes the proof.

In practice, the steps to implement the proposed control algorithm are as follows.

Step 0. Set up the algorithm's parameters and initial values based on experience or historical data.

Step 1. Compute the modified Kalman gain based on (15).

Step 2. Collect the new observation and compute the adjustment based on (17).

Step 3. Update the state estimate based on (18).

Step 4. Update the error covariance based on (14); increase the time index by one and go back to Step 1.

Note that Step 0 is an off-line procedure and Steps 1 to 4 are on-line procedures. As the newly proposed control algorithm is specially designed for rejecting ARIMA disturbances, we call it the ARIMA controller in the rest of this paper.
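The on-line loop (Steps 1–4) has the following overall shape. This is a schematic sketch only: the disturbance estimator below is a naive "last observed deviation" stand-in rather than the Kalman recursion of Lemma 2, and all names are illustrative.

```python
import random

def run_control_loop(beta, disturbance, n_runs, sigma_m=0.0):
    # Schematic R2R loop mirroring Steps 1-4: observe the output,
    # estimate the disturbance, and set the next input so that the
    # predicted output returns to the target (0).
    u, d_hat, outputs = 0.0, 0.0, []
    for t in range(n_runs):
        y = beta * u + disturbance(t)          # true output this run
        e = y + random.gauss(0.0, sigma_m)     # Step 2: (noisy) observation
        d_hat = e - beta * u                   # naive disturbance estimate
        u = -d_hat / beta                      # adjustment toward the target
        outputs.append(y)
    return outputs
```

Even with this crude estimator, a constant step disturbance is rejected after one run when no measurement noise is present, which illustrates the feedback structure of the loop.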

4. Simulation Study

In this section, we study the performance of the ARIMA controller under multiple scenarios through Monte Carlo simulations. For simplicity, we focus on the system (1)–(5) with an ARIMA(1, 1, 1) disturbance, although the ARIMA controller can be applied to any general ARIMA(p, d, q) disturbance. Without loss of generality, we fix the AR parameter φ, the MA parameter θ, and the noise variance of the ARIMA(1, 1, 1) disturbance. For comparison, a proportional-integral (PI) controller's performance is also evaluated. The PI controller is widely used in feedback control, and it involves two separate constant parameters: the proportional and integral gains, denoted here by kP and kI. The steps to implement a PI controller can be found in [25].
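For reference, a discrete PI law of the kind compared against here can be sketched as follows. The plant model and all names are illustrative assumptions; the paper's implementation follows [25].

```python
def run_pi_loop(kP, kI, beta, disturbance, n_runs):
    # Discrete PI control law u_t = -(kP * e_t + kI * sum of errors),
    # applied to the simple plant y_t = beta * u_{t-1} + d_t.
    u, integral, outputs = 0.0, 0.0, []
    for t in range(n_runs):
        y = beta * u + disturbance(t)
        integral += y                      # integral (I) term accumulates error
        u = -(kP * y + kI * integral)      # proportional (P) + integral action
        outputs.append(y)
    return outputs
```

The integral term is what removes the steady offset: for a constant disturbance the error decays toward zero over successive runs, while a purely proportional law would leave a residual bias.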

Tuning a PI controller can be a challenging task. There are many tuning approaches, such as Ziegler-Nichols tuning, lambda tuning, robust loop shaping, and optimization methods (see [26]). Among them, optimization methods are powerful and direct. In order to make a fair comparison with the ARIMA controller, we choose kP and kI so that the PI controller achieves its best performance for the specified ARIMA(1, 1, 1) disturbance. To the best of our knowledge, there are no closed-form expressions for kP and kI that optimally control an ARIMA(1, 1, 1) process, so the parameters have to be tuned experimentally. The procedure for optimally choosing kP and kI in this paper is described in Appendix B; for the ARIMA(1, 1, 1) disturbance considered here, it yields the optimal kP and kI used below.

Four scenarios are examined in the following simulations. In Scenario 1, the effect of the measurement error and the adjustment error on the ARIMA controller and the PI controller is investigated; in Scenario 2, the effect of the process initial bias on the two controllers is explored; in Scenario 3, we study how the estimate of the process gain affects the two controllers; in Scenario 4, we investigate both controllers' performance when the disturbance parameters are not estimated accurately.

We perform 1000 replications for each case and run 100 steps in each replication. The mean square error (MSE) of the output over the first 100 runs is computed, and the average MSE (AMSE) over the 1000 replications is reported in Tables 1–4. We also report the standard error of the AMSE, SEAMSE = SDMSE/√1000, where SDMSE is the standard deviation of the per-replication mean square errors and 1000 is the number of replicates. AMSE measures the performance of the controllers, and SEAMSE reflects the variability of the AMSE: the smaller the AMSE, the better the controller performs.
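The AMSE and SEAMSE summaries can be computed as in the minimal sketch below. Whether SDMSE uses the sample or the population standard deviation is an assumption here; the sample version (divisor n − 1) is used.

```python
import math

def amse_and_seamse(mse_list):
    # AMSE = mean of the per-replication MSEs.
    # SEAMSE = SDMSE / sqrt(n), where SDMSE is the (sample) standard
    # deviation of the per-replication MSEs and n is the number of replicates.
    n = len(mse_list)
    amse = sum(mse_list) / n
    sdmse = math.sqrt(sum((m - amse) ** 2 for m in mse_list) / (n - 1))
    return amse, sdmse / math.sqrt(n)
```

In the paper's setting n = 1000, so SEAMSE shrinks the MSE variability by a factor of about 31.6.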

Table 1: AMSE when both types of error exist. The disturbance is ARIMA(1, 1, 1).
Table 2: AMSE when the mean and standard deviation of the initial bias vary. The disturbance is ARIMA(1, 1, 1).
Table 3: AMSE with different estimates of the process gain. The true process gain is 2.5.
Table 4: AMSE with uncertainties in the ARIMA parameters.
4.1. Effects of Measurement Error and Adjustment Error

In order to focus on the effects of the measurement error and the adjustment error on the controllers, we set the initial process bias to 0 and set the estimate of the process gain to its true value. We also assume that all the ARIMA(1, 1, 1) parameters are accurately estimated.

Table 1 and Figure 1 show how the performance of the two controllers is affected by the measurement and adjustment errors. We set the standard deviations of the measurement error and the adjustment error to 0, 0.5, 1.0, 1.5, and 2.0, respectively, giving 25 pairs, and repeat the simulations for each pair. We observe that, for both controllers, the AMSE increases when either standard deviation increases. However, the AMSE under the ARIMA controller is smaller than that under the PI controller at a significant level for all the pairs.

Figure 1: The performance of the ARIMA controller and the PI controller when both measurement error and adjustment error exist and vary.
4.2. Effects of Initial Bias

For varying mean and standard deviation in the prior distribution of the process initial bias, Table 2 and Figure 2 show the AMSE under the ARIMA controller and the PI controller for the ARIMA(1, 1, 1) disturbance. In order to focus on the effects of the initial bias, we set the measurement and adjustment error standard deviations to 0 and keep the remaining parameters at their true values.

Figure 2: The performance of the ARIMA controller and the PI controller when the process initial bias varies.

It can be seen that, for both controllers, the AMSE increases with the mean of the initial bias when its standard deviation is fixed, and increases with the standard deviation when the mean is fixed. For the same pair of values, the AMSE under the ARIMA controller is always smaller than that under the PI controller at a significant level.

4.3. Effect of Estimation Uncertainties in the Process Gain

In order to focus on the effect of the process-gain estimate on the controllers' performance, we assume that both measurement error and adjustment error are absent, that the initial bias is 0, and that all the ARIMA parameters are accurately estimated.

We set the true process gain to 2.5. Table 3 and Figure 3 present the performance of the two controllers when the estimate of the gain varies. The ARIMA controller outperforms the PI controller at a significant level for all the estimates we examined. Additionally, an underestimated gain hurts both controllers' performance more than an overestimated one; if the process gain is underestimated severely enough, both controllers fail completely.

Figure 3: The performance of the ARIMA controller and the PI controller when the process gain is not accurately estimated.
4.4. Effect of Estimation Uncertainties in the Disturbance

Again, in order to focus on the effect of the estimation uncertainties in the disturbance, we assume that the error terms and the initial bias are absent. Suppose the disturbance parameters are fixed at their nominal estimates; the true parameters, however, may not equal these estimates. We choose 5 values of the AR parameter φ and 5 values of the MA parameter θ as the true parameters of the ARIMA(1, 1, 1) disturbance, so a total of 25 ARIMA(1, 1, 1) disturbance models are studied to investigate both controllers' robustness.

Table 4 reports the AMSE under the ARIMA controller and the PI controller for the different ARIMA(1, 1, 1) disturbances with varying φ and θ. The ARIMA controller's performance is better than the PI controller's in most of the cases at a significant level. Only when φ is close to 1 is the PI controller's performance better than the ARIMA controller's. This is because an ARIMA(1, 1, 1) process approaches an ARIMA(0, 2, 1) process as φ approaches 1; since this ARIMA controller is purposely designed to control an ARIMA(1, 1, 1) disturbance, its performance deteriorates when φ is close to 1. The results also imply that if an ARIMA(0, 2, 1) disturbance is misidentified as an ARIMA(1, 1, 1) disturbance, the ARIMA controller would perform badly.

For illustration, Figure 4 further presents contour plots of the AMSE as a function of φ and θ under the ARIMA controller and the PI controller, respectively. In general, both controllers perform worse when designed with underestimated disturbance parameters than with overestimated ones. Specifically, as can be seen from Figure 4(a), when φ is not greater than 0.9, the AMSE under the ARIMA controller changes very slowly with varying φ and θ; when φ exceeds 0.9, however, the AMSE increases very quickly as φ increases. The parameter θ has less effect than φ on the performance of the ARIMA controller. In comparison, the PI controller's performance is affected by φ and θ to almost the same degree, as shown in Figure 4(b). Over the whole examined range of φ and θ, except the region where φ is close to 1, the PI controller's performance is worse than the ARIMA controller's. However, the maximum AMSE (the worst case) attained by the PI controller over the whole range is smaller than that attained by the ARIMA controller. In this sense, the PI controller's performance is more robust than the ARIMA controller's when uncertainties exist in the ARIMA parameters.

Figure 4: Contour plots of AMSE for the ARIMA controller and the PI controller when the disturbance parameters vary.

5. An Illustrative Example

The studies in the last section are based on an ARIMA(1, 1, 1) disturbance. As mentioned above, the proposed ARIMA controller can be applied to any general ARIMA(p, d, q) disturbance. For illustration purposes, we assume a process of the form (1)–(5) whose output is the deviation from target and whose disturbance model is the same as one of the ARIMA(1, 1, 3) disturbances presented in [21]. For the sake of simplicity, we further assume that the measurement and adjustment errors are absent and that all the disturbance parameters are known. That is, we ignore measurement errors, adjustment errors, initial-bias uncertainties, and disturbance-parameter uncertainties, and only demonstrate the ARIMA controller's superiority in controlling a higher-order ARIMA disturbance model.

For the above ARIMA(1, 1, 3) disturbance, the ψ-weights and the state-space matrices can be computed explicitly, so in the offline procedure (Step 0) the ARIMA controller's parameters and initial values are fully determined. Using the same optimization tuning technique described in Appendix B, we obtain the optimal kP and kI for the PI controller.

We draw one simulation at random and show the paths of the process output in Figure 5 when the two control algorithms are implemented, respectively. The MSE of the output under the PI controller is 0.819, while that under the ARIMA controller is 0.646. The paths suggest that the ARIMA controller keeps the process output closer to the target than the PI controller over most of the first 100 simulated runs.

Figure 5: One simulated path using the ARIMA and PI controllers, respectively, for the ARIMA(1, 1, 3) disturbance.

6. Concluding Remarks

In this paper, we have developed an optimal control algorithm, the ARIMA controller, for a linear system with a general ARIMA(p, d, q) disturbance, in the presence of measurement errors and adjustment errors together with a random initial bias. We theoretically prove that the ARIMA controller is optimal for both the finite-horizon and infinite-horizon problems. The performance of the ARIMA and PI controllers has been evaluated and compared via Monte Carlo simulations under multiple scenarios. In almost all the analyzed scenarios, the ARIMA controller outperforms the PI controller; only when uncertainties exist in the ARIMA parameters does the PI controller show more robust performance than the ARIMA controller. Our simulation studies also show that, when designing the two controllers, underestimating the process parameters, including the process gain and the ARIMA parameters, has a more negative impact on the controllers' performance than overestimating them. Although the steps to implement the PI controller are simple, a tuning process is always needed to determine kP and kI for each given disturbance, and it is usually time-consuming. By contrast, the ARIMA controller determines its algorithm parameters automatically for each given disturbance.

The ARIMA controller complements the ARMA controller, which is designed to optimally control any general ARMA(p, q) disturbance. The ARMA controller can be used to control a large class of weakly stationary disturbances, while the ARIMA controller can handle a large class of nonstationary or periodic disturbances. Such disturbances are common in rapid thermal processing, reactive ion etching, I-line lithography, and lapping processes in semiconductor manufacturing.

Appendices

A. Proof of Theorem 1

In the following proof, all notations are the same as defined in Theorem 1. For the general ARIMA(p, d, q) process in (2), Brockwell and Davis [22] already provided a state-space representation. Referring to the example on page 471 of [22], we can obtain the state-space representation of the disturbance, given by (A.1)-(A.2).

Now, we give the state-space representation of the linear system (1)–(5). Based on (1) and (A.1), we obtain the output equation, which proves (8) in Theorem 1. Then, based on (4) and (5), and combining (A.4) with (A.2), we obtain the state equation, which proves (9) in Theorem 1. This completes the proof of Theorem 1.

B. Tuning Procedure for the PI Parameters

Given each pair of kP and kI, the MSE of the first 100 outputs is computed. We repeat this procedure for 1000 replicates and use the AMSE as the performance measure of the PI controller. The AMSE is thus a function of kP and kI, and we determine kP and kI by solving the optimization problem (B.1) of minimizing the AMSE. To solve (B.1), we use the function optim in R (see [27]). Two numerical optimization techniques, the Nelder-Mead method (see [28]) and the Broyden-Fletcher-Goldfarb-Shanno algorithm (see [29–32]), are used, respectively. The two techniques yield the same optimal solution for the ARIMA(1, 1, 1) disturbance investigated in Section 4, and the same optimal solution for the ARIMA(1, 1, 3) disturbance investigated in Section 5.
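As a dependency-free illustration of the tuning idea (the paper itself uses R's optim with Nelder-Mead and BFGS), a coarse grid search over (kP, kI) can serve as a baseline. The objective below is a toy quadratic surrogate with a known minimum, standing in for the simulated AMSE; all names are illustrative.

```python
def grid_search(objective, kp_grid, ki_grid):
    # Coarse grid search over (kP, kI): a simple, derivative-free
    # alternative to Nelder-Mead / BFGS tuning.
    best = None
    for kp in kp_grid:
        for ki in ki_grid:
            val = objective(kp, ki)
            if best is None or val < best[0]:
                best = (val, kp, ki)
    return best

# Toy surrogate objective with known minimum at (kP, kI) = (0.4, 0.2);
# in practice this would be the AMSE of a simulated PI control loop.
obj = lambda kp, ki: (kp - 0.4) ** 2 + (ki - 0.2) ** 2
grid = [i / 10 for i in range(11)]
val, kp_star, ki_star = grid_search(obj, grid, grid)
```

A grid search is far less efficient than the simplex or quasi-Newton methods used in the paper, but it is robust to the simulation noise in the AMSE surface and gives a sensible starting point for local refinement.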

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant no. 71102145) and the Fundamental Research Funds for the Central Universities of China (Grant no. JBK150144). The authors are grateful to the editors and the anonymous reviewers for their valuable comments and suggestions that have greatly improved the quality of this paper.

References

  1. R. F. Stengel, Optimal Control and Estimation, Dover Publications, New York, NY, USA, 1993.
  2. A. Cadenillas, T. Choulli, M. Taksar, and L. Zhang, “Classical and impulse stochastic control for the optimization of the dividend and risk policies of an insurance firm,” Mathematical Finance, vol. 16, no. 1, pp. 181–202, 2006.
  3. N. Kulenko and H. Schmidli, “Optimal dividend strategies in a Cramér-Lundberg model with capital injections,” Insurance: Mathematics and Economics, vol. 43, no. 2, pp. 270–278, 2008.
  4. K. Nakashima, H. Arimitsu, T. Nose, and S. Kuriyama, “Optimal control of a remanufacturing system,” International Journal of Production Research, vol. 42, no. 17, pp. 3619–3625, 2004.
  5. H. Liu, Z. B. Zabinsky, and W. Kohn, “Rule-based forecasting and production control system design utilizing a feedback control architecture,” IIE Transactions, vol. 43, no. 2, pp. 143–152, 2011.
  6. G. Singer and E. Khmelnitsky, “A finite-horizon, stochastic optimal control policy for a production-inventory system with backlog-dependent lost sales,” IIE Transactions, vol. 42, no. 12, pp. 855–864, 2010.
  7. F. He, K. Wang, and W. Jiang, “A general harmonic rule controller for run-to-run process control,” IEEE Transactions on Semiconductor Manufacturing, vol. 22, no. 2, pp. 232–244, 2009.
  8. Z. K. Nagy and R. D. Braatz, “Open-loop and closed-loop robust optimal control of batch processes using distributional and worst-case analysis,” Journal of Process Control, vol. 14, no. 4, pp. 411–422, 2004.
  9. J. Shi and S. Zhou, “Quality control and improvement for multistage systems: a survey,” IIE Transactions, vol. 41, no. 9, pp. 744–753, 2009.
  10. E. del Castillo, “Statistical process adjustment: a brief retrospective, current status, and some opportunities for further work,” Statistica Neerlandica, vol. 60, no. 3, pp. 309–326, 2006.
  11. A.-J. Su, J.-C. Jeng, H.-P. Huang, C.-C. Yu, S.-Y. Hung, and C.-K. Chao, “Control relevant issues in semiconductor manufacturing: overview with some new results,” Control Engineering Practice, vol. 15, no. 10, pp. 1268–1279, 2007.
  12. F. E. Grubbs, “An optimum procedure for setting machines or adjusting processes,” Journal of Quality Technology, vol. 15, pp. 186–189, 1983.
  13. D. Trietsch, “The harmonic rule for process setup adjustment with quadratic loss,” Journal of Quality Technology, vol. 30, no. 1, pp. 75–84, 1998.
  14. S. A. Vander Wiel, W. T. Tucker, F. W. Faltin, and N. Doganaksoy, “Algorithmic statistical process control: concepts and an application,” Technometrics, vol. 34, no. 3, pp. 286–291, 1992.
  15. A. Ingolfsson and E. Sachs, “Stability and sensitivity of an EWMA controller,” Journal of Quality Technology, vol. 25, pp. 271–287, 1993.
  16. G. E. P. Box, G. M. Jenkins, and G. C. Reinsel, Time Series Analysis: Forecasting and Control, Prentice Hall, Upper Saddle River, NJ, USA, 3rd edition, 1994.
  17. F. He, H. Xie, and K. Wang, “Optimal setup adjustment and control of a process under ARMA disturbances,” IIE Transactions, vol. 47, no. 3, pp. 230–244, 2015.
  18. J. F. MacGregor, T. J. Harris, and J. D. Wright, “Duality between the control of processes subject to randomly occurring deterministic disturbances and ARIMA stochastic disturbances,” Technometrics, vol. 26, no. 4, pp. 389–397, 1984.
  19. F. Tsung, H. Wu, and V. N. Nair, “On the efficiency and robustness of discrete proportional-integral control schemes,” Technometrics, vol. 40, no. 3, pp. 214–222, 1998.
  20. D. W. Apley, “A cautious minimum variance controller with ARIMA disturbances,” IIE Transactions, vol. 36, no. 5, pp. 417–432, 2004.
  21. M.-D. Ma, C.-C. Chang, S.-S. Jang, and D. S.-H. Wong, “Mixed product run-to-run process control—an ANOVA model with ARIMA disturbance approach,” Journal of Process Control, vol. 19, no. 4, pp. 604–614, 2009.
  22. P. J. Brockwell and R. A. Davis, Time Series: Theory and Methods, Springer, New York, NY, USA, 2009.
  23. K. J. Åström, Introduction to Stochastic Control Theory, Mathematics in Science and Engineering, vol. 70, Academic Press, New York, NY, USA, 1970.
  24. F. L. Lewis, Optimal Estimation: With an Introduction to Stochastic Control Theory, John Wiley & Sons, New York, NY, USA, 1986.
  25. E. del Castillo, Statistical Process Adjustment for Quality Control, Wiley Series in Probability and Statistics, John Wiley & Sons, New York, NY, USA, 2002.
  26. K. J. Åström and T. Hägglund, Advanced PID Control, ISA, Research Triangle Park, NC, USA, 2006.
  27. R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria, 2014, http://www.R-project.org/.
  28. J. A. Nelder and R. Mead, “A simplex method for function minimization,” Computer Journal, vol. 7, pp. 308–313, 1965.
  29. C. G. Broyden, “The convergence of a class of double-rank minimization algorithms,” Journal of the Institute of Mathematics and Its Applications, vol. 6, pp. 76–90, 1970.
  30. R. Fletcher, “A new approach to variable metric algorithms,” Computer Journal, vol. 13, no. 3, pp. 317–322, 1970.
  31. D. Goldfarb, “A family of variable-metric methods derived by variational means,” Mathematics of Computation, vol. 24, pp. 23–26, 1970.
  32. D. F. Shanno, “Conditioning of quasi-Newton methods for function minimization,” Mathematics of Computation, vol. 24, pp. 647–656, 1970.