Abstract and Applied Analysis
Volume 2013 (2013), Article ID 715026, 8 pages
http://dx.doi.org/10.1155/2013/715026
Research Article

Optimal Control for a Family of Systems in Novel State Derivative Space Form with Experiment in a Double Inverted Pendulum System

Yuan-Wei Tseng and Jer-Guang Hsieh

Department of Electrical Engineering, I-Shou University, Kaohsiung 84001, Taiwan

Received 13 June 2013; Accepted 29 July 2013

Academic Editor: Chang-Hua Lien

Copyright © 2013 Yuan-Wei Tseng and Jer-Guang Hsieh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Optimal control for a family of systems in novel state derivative space form, abbreviated as SDS systems in this study, is proposed. The first step in deriving optimal control laws for SDS systems is to form an augmented cost functional. It turns out that novel differential Lagrange multipliers must be used to adjoin SDS system constraints (namely, the dynamical equations of the control system) to the integrand of the original cost functional which is a function of state derivatives. This not only eases our derivation but also makes our derivation parallel to that for systems in standard state space form. We will show via a real electric circuit that optimal control for a class of descriptor systems with impulse modes can easily be carried out using our design method. It will be shown that linear quadratic regulator (LQR) design for linear time-invariant SDS systems using state derivative feedback can be obtained via an algebraic Riccati equation. Furthermore, this optimal state derivative feedback may also be implemented using an equivalent state feedback. This is useful in real situations when only states but not the state derivatives are available for measurement. The LQR design for a double inverted pendulum system is implemented to illustrate the use of our method.

1. Introduction

Optimal control for systems in standard state space form has long been developed [1–4]. The optimization problem can be stated as follows:

Note that the integrand in the cost functional is a function of the states and the controls. However, in some applications, the integrand of an appropriate cost functional for the problem at hand is not only a function of the states and controls but also a function of the state derivatives. For instance, to improve people's comfort and safety when riding in a vehicle or an airplane, the acceleration during the course of the ride must be taken into account. It is then desirable to minimize the integral of a function of the acceleration during the course. Since acceleration is the derivative of velocity, which is usually a state variable, we may also consider a cost functional of the form

Note also that not every control system can be modeled in standard state space form or can easily be handled if it is expressed in standard state space form. For instance, consider the following system: if the coefficient matrix multiplying the state derivatives is singular, then the system cannot be expressed in standard state space form. Such systems are called generalized state space systems [5], descriptor systems [6], singular systems [7], or semistate systems [8]. If this coefficient matrix is nearly singular, then the system is not easy to handle using standard control designs.
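The singularity test above is easy to carry out numerically. The following sketch uses a hypothetical two-state example (the matrices are illustrative and not taken from this paper) in which the second row of the descriptor equation is a purely algebraic constraint:

```python
import numpy as np

# Hypothetical descriptor system E x' = A x + B u in which the
# second equation is algebraic, so E loses rank.
E = np.array([[1.0, 0.0],
              [0.0, 0.0]])   # coefficient matrix of the state derivatives
A = np.array([[0.0, 1.0],
              [1.0, 1.0]])

rank_E = np.linalg.matrix_rank(E)
# A singular E means the system has no standard state space form.
print(rank_E < E.shape[0])   # True: this is a descriptor system
```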

Extensive applications of descriptor systems arise in many areas of engineering such as electrical networks [8], aerospace systems [9], smart structures [10], and chemical processes [11]. Descriptor systems also appear in other areas such as the dynamic Leontief model for economic production sectors [12] and biological complex systems [13]. A comprehensive review is available in [14]. In previous studies, descriptor systems have been further categorized as impulse-free systems and systems with impulse modes [14]. If the descriptor system has impulsive modes, the integral part of the cost functional may become infinite. In this situation, further investigation of impulse controllability and impulse mode elimination is required in control designs [15]. Therefore, descriptor systems with impulse modes are considered difficult for control design. On the other hand, control designs for impulse-free descriptor systems can be carried out more easily. Variational calculus for impulse-free descriptor systems was derived by Jonckheere [16]. Since then, many optimal control algorithms have been proposed for impulse-free descriptor systems [14]. Roughly speaking, the available control design algorithms for descriptor systems [15, 17–20] are much more complex than those for systems in standard state space form. Consequently, there are barriers for engineers without a strong mathematical background to apply those sophisticated control algorithms.

If the coefficient matrix in (3) is nonsingular, the system can be expressed in reciprocal state space (RSS) form [10, 21] as follows: since the system eigenvalues are the reciprocals of the eigenvalues of the matrix in (4), the name reciprocal state space form was given by the first author of this paper. In this form, one can easily carry out control designs such as pole placement [22], eigenstructure assignment [22], and linear quadratic regulator (LQR) designs [22] using state derivative feedback. Therefore, as long as a descriptor system with impulse modes can be expressed in RSS form, control design can easily be performed by applying state derivative feedback.
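The reciprocal-eigenvalue property can be verified numerically. The sketch below assumes the RSS description of the unforced system, x = F x', where F is the inverse of the standard-form system matrix; the matrix A is a hypothetical example, not one from the paper:

```python
import numpy as np

# Hypothetical standard-form system x' = A x with A nonsingular;
# its RSS form is x = F x' with F = A^{-1}.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # eigenvalues -1 and -2
F = np.linalg.inv(A)

eig_A = np.sort(np.linalg.eigvals(A))
eig_F = np.sort(np.linalg.eigvals(F))

# The system eigenvalues are the reciprocals of the eigenvalues of F.
print(np.allclose(np.sort(1.0 / eig_F), eig_A))   # True
```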

Motivated by the analysis above, instead of the systems in standard state space form, a family of control systems in novel state derivative space form, abbreviated as SDS systems, is investigated in this study. The dynamic equation of an SDS system can be described as

2. Optimality Conditions

The optimal control problem in this study can be stated as follows:

Further, we assume that the terminal time is free and that the final states are free but must be constrained to a given terminal surface. Notice that the integrand of the cost functional may also include the state. Simply using the system equation (6b), such a function can be reduced to one that depends only on the state derivatives, the controls, and time.

To handle this optimization problem, we first form an augmented cost functional as follows:

Here, a vector of the usual Lagrange multipliers adjoins the final state constraints. Note that novel differential Lagrange multipliers, instead of the usual ones, have been used to adjoin the SDS system constraints (6b) to the integrand of the original cost functional, which is a function of the state derivatives. This not only eases our derivation but also makes it parallel to that for systems in standard state space form.

The Hamiltonian is defined by Substituting (8) into (7), we have Now consider the following small perturbations: To minimize (9), the variation of (9) must be zero. That is, The individual terms of the variation in (11) can be obtained as follows:

Because of the system constraint (6b), we have . Adding (12) yields Integrating by parts for the term , we obtain Substituting (15) back into (14), we have It can be shown [23] that Therefore, in (16), we have In (11), we have Applying (14)–(19) to (11), one obtains Since the final state is free, in addition to the SDS system constraint in (6b), from (20), the necessary conditions of the optimization problem in (6a) and (6b) can be stated as follows: From (6b) and (8), the "transversality condition" [23] for can be determined via the following equation:

In the following simple example, we wish to illustrate the point that when the system under consideration can be expressed in both standard state space (SSS) form and state derivative space (SDS) form, optimal designs are equivalent.

Example 1. Suppose that we wish to find a control to minimize the following cost functional: subject to the given initial condition, where the terminal time is free.
Let us first solve this problem using the SDS approach. The Hamiltonian is given by Necessary conditions (21) and (22) give that is, Since the initial condition is given, (23) is satisfied. Since the terminal cost is independent of the final state, the terminal condition (24) gives Since the terminal time is free, the transversality condition (25) now becomes In summary, (27) and (31)–(33) constitute the optimality conditions for the system (27) in SDS form.
For comparison, we then solve this problem again for the system (28) in standard state space form using standard optimization techniques. The Hamiltonian is given by Necessary conditions for optimality [23] are given by that is, The terminal condition is given by The transversality condition is given by
It is not surprising to find that the necessary conditions (31)–(33) for the system (27) in SDS form and the necessary conditions (36)–(38) for the system (28) in standard state space form are identical. This means that the optimal control design is the same for systems that can be expressed in both SDS and standard state space forms.
Solving (27) and (31)–(33), we have

3. Descriptor Circuit System with Impulse Modes

The circuit shown in Figure 1 is an example of a descriptor system with an impulse mode [8]. It is generally considered difficult to carry out optimal control design for such a system. However, we will find that, by simply transforming the dynamic equations of the system into SDS form in the first place, the optimal control design becomes as straightforward as the usual optimal control design for standard state space systems.

Figure 1: Descriptor circuit system with impulse mode.

Example 2. Consider the descriptor system of the circuit shown on the left-hand side of Figure 1; its equivalent alternating current circuit is shown on the right. The dynamic equations of the descriptor system are given as In the figure, the state variables are the capacitor voltage and the emitter current, and the transistor parameter is the common-base current gain of the bipolar junction transistor. Suppose that we wish to make the rates of change of the capacitor voltage and the emitter current as small as possible over the given time interval, that is, to keep their values as constant as possible. Then the cost functional can be selected as
The system may be transformed into the SDS form: The Hamiltonian is given by Necessary conditions (21) and (22) give Thus, the optimal control is given by We may put (42) and (44) in matrix form: Note that the matrix is nonsingular for any capacitance. For illustrative purposes, we fix the capacitance value. The matrix equation (46) can now be rewritten as The general solution of (47) is given by where the coefficients are constants to be determined. The terminal conditions (24) imply that . These, together with the given initial states, determine the constants. With the given initial values, we obtain The optimal control given by (45) can then be determined.
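The boundary value problem in this example (known initial states, terminal conditions on the multipliers) can be solved numerically by one matrix exponential shooting step. The sketch below uses a hypothetical one-state, one-costate system matrix M, not the circuit matrices of (47), to show the mechanics:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical coupled state/costate dynamics y' = M y with y = [x, p];
# M is illustrative only, not the matrix from the circuit example.
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])
T = 1.0                       # terminal time
x0 = 1.0                      # given initial state
Phi = expm(M * T)             # state-transition matrix over [0, T]

# Choose the unknown initial costate p0 so the terminal costate vanishes:
#   p(T) = Phi[1,0] * x0 + Phi[1,1] * p0 = 0
p0 = -Phi[1, 0] * x0 / Phi[1, 1]
y0 = np.array([x0, p0])
yT = Phi @ y0
print(abs(yT[1]) < 1e-9)      # terminal condition on the multiplier holds
```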

4. LQR Design for SDS Systems

In this section, we consider the LQR problem for linear time-invariant SDS systems. Suppose that the optimization problem can be stated as follows:

For simplicity, we assume that the weighting matrices are symmetric and positive definite. The Hamiltonian is given by Necessary conditions (21) and (22) imply that that is, For a linear time-invariant SDS system, one can let the costate be linear in the state derivative, where the coefficient is a constant matrix to be determined. By (53) and (54), we have Substituting (55) into (50b), the closed-loop system becomes By (53), (54), and (56), we have that is, Since this holds for arbitrary state derivatives, we obtain the following Riccati equation for SDS systems: By (55), the optimal state derivative feedback control is given by and the closed-loop system becomes
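Since, as Example 1 suggests, a plant expressible in both forms yields equivalent optimal designs, one way to sanity-check such a design numerically is to map the model back to standard state space (possible when the coefficient matrix of the state derivative is nonsingular) and solve the ordinary algebraic Riccati equation there. The sketch below assumes an SDS/RSS model of the form x = F x' + G u; the matrices F and G are hypothetical, and the quadratic weights penalize the state rather than the state derivative:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical SDS/RSS model x = F x' + G u with F nonsingular.
F = np.array([[-1.0, 0.5],
              [0.0, -0.5]])
G = np.array([[0.0],
              [1.0]])

# With F nonsingular, the same plant has the standard form
#   x' = A x + B u,  A = F^{-1},  B = -F^{-1} G.
A = np.linalg.inv(F)
B = -A @ G

Q = np.eye(2)   # state weight (assumption: state-weighted comparator cost)
R = np.eye(1)   # control weight
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # standard LQR state feedback gain

# The LQR closed loop must be asymptotically stable.
print(np.all(np.linalg.eigvals(A - B @ K).real < 0))   # True
```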

The optimal state derivative feedback control law may also be implemented using an equivalent state feedback. This is useful in real situations when only the states, but not the state derivatives, are available for measurement. This can be proved as follows. The closed-loop system after applying the equivalent state feedback becomes implying or In view of (61), we set Then, we have or Now let implying
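The equivalence can be checked numerically. The sketch assumes the SDS/RSS model x = F x' + G u with hypothetical matrices (the gain K is also arbitrary, not an optimal gain): applying the equivalent state feedback to the standard-form plant must reproduce the closed-loop matrix obtained with the state derivative feedback.

```python
import numpy as np

# Hypothetical SDS model x = F x' + G u (F nonsingular) with some
# state derivative feedback u = -K x' already designed.
F = np.array([[-1.0, 0.5],
              [0.0, -0.5]])
G = np.array([[0.0],
              [1.0]])
K = np.array([[0.2, 0.4]])

# u = -K x'  gives  x = (F - G K) x',  i.e.  x' = (F - G K)^{-1} x.
Acl = np.linalg.inv(F - G @ K)

# Equivalent state feedback gain:  u = -Keq x  with  Keq = K (F - G K)^{-1}.
Keq = K @ Acl

# Applying u = -Keq x to the standard form x' = F^{-1} x - F^{-1} G u
# must reproduce the same closed-loop matrix.
A = np.linalg.inv(F)
B = -A @ G
print(np.allclose(A - B @ Keq, Acl))   # True
```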

5. Controlling a Double Inverted Pendulum

Consider the double inverted pendulum system shown in Figure 2 [24, 25], where the two angles are the angle of inclination of the first arm and the angle of inclination of the second arm relative to the first arm. This experimental facility is controlled through a Simulink interface [24]. As shown in the figure, there are two subcontrol systems in this double inverted pendulum.

Figure 2: Double inverted pendulum.

Our goal is to keep the first arm in a vertical position pointing downward and the second arm in a vertical position pointing upward. For this system, an initial control is usually needed so that the second arm angle approximately lies in a neighborhood of the upright position [25, 26]. An appropriate linearized model, accurate enough in this operating range, is given by [24]: where the control input is the input voltage. Suppose that we wish to minimize the following cost functional: The controllable system (71) can be converted into SDS form: Using the method proposed in the last section, the optimal state derivative feedback control law (60) is given by the following gain: An equivalent state feedback control law is given by the following gain: We will use the state feedback control law in our real implementation through the Simulink control interface given in [24]. Typical simulated trajectories of the two arm angles are shown in Figures 3 and 4, respectively. It is seen that the control goal is achieved. The full-length video taken during the experiment is available in [27].

Figure 3: Trajectory of the first arm angle.
Figure 4: Trajectory of the second arm angle.

6. Conclusion

Optimal control for SDS systems has been investigated in this paper. Necessary conditions for optimality were derived by the use of novel differential Lagrange multipliers. In the past, optimal control design for descriptor systems with impulse modes has not been an easy task. We have shown via a real electric circuit that optimal control for a class of descriptor systems with impulse modes can easily be carried out using our design method. The optimal LQR design for linear time-invariant SDS systems using state derivative feedback can be obtained via an algebraic Riccati equation. Moreover, this optimal state derivative feedback may also be implemented using an equivalent state feedback, which is useful in real situations when only the states are available for measurement. Note that our derivation and design method parallel those for systems in standard state space form. The LQR design for a double inverted pendulum system has been implemented to illustrate the use of our method.

Acknowledgment

The research reported here was supported by the National Science Council, Taiwan, under Grants NSC100-2221-E-214-017 and NSC101-2221-E-214-042.

References

  1. L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze, and E. F. Mishchenko, The Mathematical Theory of Optimal Processes, Wiley, New York, NY, USA, 1962.
  2. K. Zhou and J. C. Doyle, Essentials of Robust Control, Prentice-Hall, Upper Saddle River, NJ, USA, 1997.
  3. Y. Shoham and K. Leyton-Brown, Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, Cambridge University Press, New York, NY, USA, 2009.
  4. D. Simon, Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches, Wiley, New York, NY, USA, 2006.
  5. G. C. Verghese, B. C. Lévy, and T. Kailath, “A generalized state-space for singular systems,” IEEE Transactions on Automatic Control, vol. 26, no. 4, pp. 811–831, 1981.
  6. D. G. Luenberger, “Dynamic equations in descriptor form,” IEEE Transactions on Automatic Control, vol. 22, no. 3, pp. 312–321, 1977.
  7. S. L. Campbell, Singular Systems of Differential Equations, Pitman, San Francisco, Calif, USA, 1980.
  8. R. W. Newcomb, “The semistate description of nonlinear time-variable circuits,” IEEE Transactions on Circuits and Systems, vol. 28, no. 1, pp. 62–71, 1981.
  9. K. E. Brenan, “Numerical simulation of trajectory prescribed path control problems by the backward differentiation formulas,” IEEE Transactions on Automatic Control, vol. 31, no. 3, pp. 266–269, 1986.
  10. Y. W. Tseng, “Vibration control of piezoelectric smart plate using estimated state derivatives feedback in reciprocal state space form,” International Journal of Control Theory and Applications, vol. 2, no. 1, pp. 61–71, 2009.
  11. C. C. Pantelides, “The consistent initialization of differential-algebraic systems,” SIAM Journal on Scientific and Statistical Computing, vol. 9, no. 2, pp. 213–231, 1988.
  12. D. G. Luenberger and A. Arbel, “Singular dynamic Leontief systems,” Econometrica, vol. 45, no. 4, pp. 991–995, 1977.
  13. P. Liu, Q. Zhang, X. Yang, and L. Yang, “Passivity and optimal control of descriptor biological complex systems,” IEEE Transactions on Circuits and Systems, vol. 53, special issue on systems biology, pp. 122–125, 2008.
  14. F. B. Yeh and H. N. Huang, “H∞ state feedback control of smart beam-plates via the descriptor system approach,” Tunghai Science, vol. 2, pp. 21–42, 2000.
  15. D. Cobb, “State feedback impulse elimination for singular systems over a Hermite domain,” SIAM Journal on Control and Optimization, vol. 44, no. 6, pp. 2189–2209, 2006.
  16. E. Jonckheere, “Variational calculus for descriptor problems,” IEEE Transactions on Automatic Control, vol. 33, no. 5, pp. 491–495, 1988.
  17. D. Liu, G. Zhang, and Y. Xie, “Guaranteed cost control for a class of descriptor systems with uncertainties,” International Journal of Information & Systems Sciences, vol. 5, no. 3-4, pp. 430–435, 2009.
  18. S. M. Saadni, M. Chaabane, and D. Mehdi, “Robust stability and stabilization of a class of singular systems with multiple time-varying delays,” Asian Journal of Control, vol. 8, no. 1, pp. 1–11, 2006.
  19. A. Varga, “Robust pole assignment for descriptor systems,” in Proceedings of Mathematical Theory of Networks and Systems, Perpignan, France, 2000.
  20. D. Cobb, “Eigenvalue conditions for convergence of singularly perturbed matrix exponential functions,” SIAM Journal on Control and Optimization, vol. 48, no. 7, pp. 4327–4351, 2010.
  21. Y. W. Tseng and R. K. Yedavalli, “Vibration control of a wing box via reciprocal state space framework,” in Proceedings of the IEEE International Conference on Control Applications, pp. 854–859, Hartford, Conn, USA, October 1997.
  22. Y. W. Tseng, “Control designs of singular systems expressed in reciprocal state space form with state derivative feedback,” International Journal of Control Theory and Applications, vol. 1, no. 1, pp. 55–67, 2008.
  23. T. F. Elbert, Estimation and Control of Systems, Van Nostrand Reinhold, New York, NY, USA, 1984.
  24. EMECS (electro-mechanical engineering control system), developed by TeraSoft Inc., http://www.terasoft.com.tw/product/product_dsp_control.asp.
  25. C. H. Huang, An application of FPGA board in the inverted pendulum system with time delay [M.S. thesis], Department of Electrical Engineering, I-Shou University, Kaohsiung, Taiwan, 2012.
  26. X. H. Yang, H. S. Liu, G. P. Liu, and G. F. Xiao, “Control experiment of the inverted pendulum using adaptive neural-fuzzy controller,” in Proceedings of the International Conference on Electrical and Control Engineering (ICECE '10), pp. 629–632, Wuhan, China, June 2010.
  27. http://www.youtube.com/watch?v=cj603v-bkL8.