
Abstract and Applied Analysis

Volume 2013 (2013), Article ID 240352, 8 pages

http://dx.doi.org/10.1155/2013/240352

## Feedback Control Method Using Haar Wavelet Operational Matrices for Solving Optimal Control Problems

Institute of Mathematical Sciences, University of Malaya, 50603 Kuala Lumpur, Malaysia

Received 4 April 2013; Revised 27 June 2013; Accepted 1 July 2013

Academic Editor: Shawn X. Wang

Copyright © 2013 Waleeda Swaidan and Amran Hussin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Most direct methods solve optimal control problems with a nonlinear programming solver. In this paper, we propose a novel feedback control method for solving affine control systems with a quadratic cost functional, which makes use of only linear systems. The method is a numerical technique based on the combination of the Haar wavelet collocation method and the successive Generalized Hamilton-Jacobi-Bellman equation. We formulate some new Haar wavelet operational matrices in order to manipulate Haar wavelet series. The proposed method has been applied to solve linear and nonlinear optimal control problems with infinite time horizon. The simulation results indicate that the accuracy of the control and the cost can be improved by increasing the wavelet resolution.

#### 1. Introduction

Optimal control is an important branch of mathematics and has been widely applied in a number of fields, including engineering, science, and economics. Although the necessary and sufficient conditions for optimality have already been derived for $H_2$ and $H_\infty$ optimal controls, they are only useful for finding analytical solutions in quite restricted cases. If we assume full-state knowledge and the optimal control problem is linear, then the optimal control is a linear feedback of the state, obtained by solving a matrix Riccati equation. However, if the system is nonlinear, then the optimal control is a state feedback function that depends on the solution to a Hamilton-Jacobi-Bellman (HJB) equation or a Hamilton-Jacobi-Isaacs (HJI) equation for the $H_2$ or $H_\infty$ optimal control problem, respectively [1], and is usually difficult to solve analytically. Feng et al. [2] have solved an HJI equation iteratively by solving a sequence of HJB equations. In this paper, we are more concerned with approximate solutions of the HJB equation. Among the numerous computational approaches for the solution of the HJI equation, we refer in particular to [3–5]. Robustness of nonlinear state feedback is discussed in [6].

Broadly speaking, numerical methods for solving optimal control problems are divided into two categories: direct and indirect methods. Direct methods reduce the optimal control problem to a nonlinear programming problem by parameterizing or discretizing the infinite-dimensional optimal control problem into a finite-dimensional optimization problem. Indirect methods, on the other hand, solve the HJB equation or the first-order necessary conditions for optimality, which are obtained from Pontryagin's minimum principle. Both families of methods are important for solving optimal control problems; the difference between them is that the indirect methods are believed to yield more accurate results, whereas the direct methods tend to have better convergence properties. von Stryk and Bulirsch [7] have used both direct and indirect methods to solve a trajectory optimization problem for the Apollo capsule. Beard et al. [8] have introduced the Generalized Hamilton-Jacobi-Bellman equation to successively approximate the solution of the HJB equation; given an arbitrary stabilizing control law, their method can be used to improve the performance of the control. Moreover, Jaddu [9] has reported some numerical methods to solve unconstrained and constrained optimal control problems by converting them into quadratic programming problems, using a parameterization technique based on Chebyshev polynomials. Meanwhile, Beeler et al. [10] have performed a comparison of five different methods for solving nonlinear control systems and studied their performance on several test problems. Park and Tsiotras [11] have proposed a successive wavelet collocation algorithm, which uses interpolating wavelets to iteratively solve the Generalized Hamilton-Jacobi-Bellman equation and obtain the corresponding optimal control law.

A wavelet basis with compact support allows us to represent functions with sharp spikes or edges better than other bases do. This property is advantageous in many applications in signal and image processing. In addition, the availability of fast transforms makes wavelets attractive as a computational tool. Numerical solutions of integral and differential equations have been discussed in many papers, which basically fall either into the class of spectral Galerkin and collocation methods or into that of finite element and finite difference methods.

The Haar wavelet is the simplest orthogonal wavelet with compact support. Chen and Hsiao [12] have used the Haar operational matrix method to solve lumped and distributed parameter systems. Hsiao and Wang [13] have solved optimal control of linear time-varying systems via Haar wavelets. Dai and Cochran Jr. [14] have considered a Haar wavelet technique that transforms optimal control problems into nonlinear programming (NLP) problems in the parameters at the collocation points. The resulting NLP can be solved using a nonlinear programming solver such as SNOPT.

In the present paper, we adopt the method of Beard et al. [8] to successively approximate the solution of the HJB equation. Instead of using the Galerkin method with a polynomial basis, we use a collocation method with the Haar wavelet basis to solve the Generalized Hamilton-Jacobi-Bellman equation. The Galerkin method requires the computation of multidimensional integrals, which makes it impractical for higher-order systems [15]. The main advantage of a collocation method is that the computational burden of solving the Generalized Hamilton-Jacobi-Bellman equation is reduced to matrix computations only. Our new successive Haar wavelet collocation method is used to solve linear and nonlinear optimal control problems. In the process of establishing the method, we define new operational matrices of integration for a chosen stabilizing domain and a new operational matrix for the product of two-dimensional Haar wavelet functions.

#### 2. Haar Wavelets

The orthogonal set of Haar wavelets is a family of square waves on the interval $[0, 1)$ defined as follows:
$$h_0(t) = 1, \quad 0 \le t < 1, \qquad h_1(t) = \begin{cases} 1, & 0 \le t < \tfrac{1}{2}, \\ -1, & \tfrac{1}{2} \le t < 1. \end{cases}$$
Other wavelets can be obtained by dilation and translation of the mother wavelet $h_1(t)$. In general, $h_i(t) = h_1(2^j t - k)$, where $i = 2^j + k$, $j \ge 0$, and $0 \le k < 2^j$.

Each square integrable function $y(t)$ on $[0, 1)$ can be expanded into a Haar series of infinitely many terms:
$$y(t) = \sum_{i=0}^{\infty} c_i h_i(t).$$
If $y(t)$ is approximated as piecewise constant, then the series can be truncated:
$$y(t) \approx \sum_{i=0}^{m-1} c_i h_i(t),$$
where $m$ is a power of two determined by the chosen resolution.

The Haar coefficients $c_i$ can be obtained by minimizing the integral square error $\int_0^1 \big( y(t) - \sum_{i=0}^{m-1} c_i h_i(t) \big)^2 \, dt$, which gives $c_i = 2^j \int_0^1 y(t)\, h_i(t)\, dt$ for $i = 2^j + k$ and $c_0 = \int_0^1 y(t)\, dt$.

The sum in (3) can be written compactly in the form
$$y(t) \approx c^T h_m(t),$$
where $c = [c_0, c_1, \ldots, c_{m-1}]^T$ is called the coefficient vector and $h_m(t) = [h_0(t), h_1(t), \ldots, h_{m-1}(t)]^T$ is the Haar function vector.

At the collocation points $t_l = (l - 0.5)/m$, $l = 1, 2, \ldots, m$, the Haar function vector can be expressed in matrix form as $H_m = [h_m(t_1), h_m(t_2), \ldots, h_m(t_m)]$. For instance, the fourth-order Haar wavelet matrix can be represented as follows:
$$H_4 = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & -1 \end{bmatrix}.$$
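As a concrete check of these definitions, the Haar matrix and the coefficient computation can be sketched numerically. This is a minimal illustration using the standard convention $h_i(t) = h_1(2^j t - k)$ with $i = 2^j + k$; the function names are ours, not the paper's:

```python
import numpy as np

def haar_function(i, t):
    """Evaluate the i-th Haar function at points t in [0, 1)."""
    t = np.asarray(t, dtype=float)
    if i == 0:
        return np.ones_like(t)                  # scaling function h_0
    j = int(np.log2(i))                         # dilation level, i = 2^j + k
    k = i - 2**j                                # translation index
    lo, mid, hi = k / 2**j, (k + 0.5) / 2**j, (k + 1) / 2**j
    return np.where((t >= lo) & (t < mid), 1.0,
           np.where((t >= mid) & (t < hi), -1.0, 0.0))

def haar_matrix(m):
    """m x m Haar matrix H_m at the collocation points t_l = (l - 0.5)/m."""
    t = (np.arange(1, m + 1) - 0.5) / m
    return np.vstack([haar_function(i, t) for i in range(m)])

H4 = haar_matrix(4)
print(H4)

# Haar coefficients of a function sampled at the collocation points:
# y(t_l) = sum_i c_i h_i(t_l), i.e. y = H^T c, so c solves H^T c = y.
t = (np.arange(1, 5) - 0.5) / 4
c = np.linalg.solve(H4.T, t)                    # coefficients of y(t) = t
print(np.allclose(H4.T @ c, t))                 # True: exact at collocation points
```

Solving $H_m^T c = y$ at the collocation points reproduces the least-squares coefficients for piecewise-constant approximation, since the truncated Haar series is exact on each collocation cell.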

#### 3. Haar Wavelet Operational Matrices

The integration of $h_m(t)$ on the interval $[0, t)$ can also be expanded into a Haar series; that is,
$$\int_0^t h_m(\tau)\, d\tau \approx P_m\, h_m(t),$$
where the $m \times m$ matrix $P_m$ is called the operational matrix of integration, obtained recursively as
$$P_1 = \left[ \tfrac{1}{2} \right], \qquad P_m = \frac{1}{2m} \begin{bmatrix} 2m\, P_{m/2} & -H_{m/2} \\ H_{m/2}^{-1} & 0 \end{bmatrix}.$$
This formula on the interval $[0, 1)$ was first given by Chen and Hsiao [12].
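The recursion can be verified numerically; the sketch below builds $P_m$ from the Chen-Hsiao recursion (function names are ours) and checks that the first row of $P_m H_m$ reproduces $\int_0^t 1\, d\tau = t$ at the collocation points:

```python
import numpy as np

def haar_matrix(m):
    """m x m Haar matrix at the collocation points t_l = (l - 0.5)/m."""
    t = (np.arange(1, m + 1) - 0.5) / m
    rows = [np.ones(m)]
    for i in range(1, m):
        j = int(np.log2(i)); k = i - 2**j
        lo, mid, hi = k / 2**j, (k + 0.5) / 2**j, (k + 1) / 2**j
        rows.append(np.where((t >= lo) & (t < mid), 1.0,
                    np.where((t >= mid) & (t < hi), -1.0, 0.0)))
    return np.vstack(rows)

def integration_matrix(m):
    """Chen-Hsiao operational matrix of integration, built recursively:
    P_1 = [1/2],  P_m = (1/(2m)) [[2m P_{m/2}, -H_{m/2}], [inv(H_{m/2}), 0]]."""
    if m == 1:
        return np.array([[0.5]])
    h = m // 2
    P, H = integration_matrix(h), haar_matrix(h)
    return np.block([[2 * m * P, -H],
                     [np.linalg.inv(H), np.zeros((h, h))]]) / (2 * m)

# Sanity check: row 0 of P_m H_m is the integral of h_0, namely t itself.
m = 8
t = (np.arange(1, m + 1) - 0.5) / m
print(np.allclose((integration_matrix(m) @ haar_matrix(m))[0], t))   # True
```

For $m = 2$ the recursion gives $P_2 = \begin{bmatrix} 1/2 & -1/4 \\ 1/4 & 0 \end{bmatrix}$, matching the well-known Chen-Hsiao result.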

In order to solve nonlinear optimal control problems, it is essential to have the product of two Haar series. If $y_1(t) \approx a^T h_m(t)$ and $y_2(t) \approx b^T h_m(t)$, their product can be expanded into a Haar series with a Haar coefficient matrix:
$$y_1(t)\, y_2(t) \approx a^T h_m(t)\, h_m^T(t)\, b \approx a^T M_b\, h_m(t),$$
where $M_b$ is an $m \times m$ matrix referred to as the product operational matrix. A recursive construction of $M_b$ from the coefficient vector $b$ was first given by Hsiao and Wu [16].

A two-dimensional Haar wavelet basis can be formed by taking the tensor product of $h_m(x_1)$ and $h_m(x_2)$. Let the basis functions be $\phi_{pq}(x_1, x_2) = h_p(x_1)\, h_q(x_2)$, $p, q = 0, 1, \ldots, m - 1$. Then the two-dimensional Haar function vector can be expressed as
$$\Phi(x_1, x_2) = h_m(x_1) \otimes h_m(x_2).$$
Any square integrable function $V(x_1, x_2)$ can be written as
$$V(x_1, x_2) \approx c^T \Phi(x_1, x_2),$$
where $c$ is an $m^2 \times 1$ coefficient vector. Subsequently, we assume the same resolution $m$ in both variables, so that the operational matrices will be square. Let $C$ be the $m \times m$ matrix obtained by reshaping $c$ rowwise. By using the Haar wavelet matrix in (6), the coefficients in (13) can be obtained from the samples of $V$ at the collocation points as follows:
$$C = (H_m^T)^{-1} \big[ V(t_l, t_{l'}) \big] H_m^{-1}, \qquad l, l' = 1, \ldots, m.$$

The integration of the two-dimensional Haar function vector in each coordinate direction is
$$\int_0^{x_1} \Phi(s, x_2)\, ds \approx P^{(1)} \Phi(x_1, x_2), \qquad \int_0^{x_2} \Phi(x_1, s)\, ds \approx P^{(2)} \Phi(x_1, x_2),$$
where the operational matrices are given as follows:
$$P^{(1)} = P_m \otimes I_m, \qquad P^{(2)} = I_m \otimes P_m,$$
where $\otimes$ denotes the Kronecker product [17] and $I_m$ denotes the $m \times m$ identity matrix. As in (10), we also require the product operational matrix in the two-dimensional case: let
$$\Phi(x_1, x_2)\, \Phi^T(x_1, x_2)\, c \approx M_c\, \Phi(x_1, x_2)$$
for a given coefficient vector $c$. The algorithm to obtain the $m^2 \times m^2$ matrix $M_c$ is as follows.
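The Kronecker structure above rests on the mixed-product property $(A \otimes I)(u \otimes v) = (Au) \otimes v$: an operator acting on one coordinate factor extends to the two-dimensional basis by a Kronecker product with the identity. A minimal numerical check (the matrix `A` is an arbitrary stand-in for an operational matrix, not from the paper):

```python
import numpy as np

def haar_vector(m, x):
    """Haar function vector h_m(x) at a single point x in [0, 1)."""
    h = np.empty(m); h[0] = 1.0
    for i in range(1, m):
        j = int(np.log2(i)); k = i - 2**j
        lo, mid, hi = k / 2**j, (k + 0.5) / 2**j, (k + 1) / 2**j
        h[i] = 1.0 if lo <= x < mid else (-1.0 if mid <= x < hi else 0.0)
    return h

m = 4
x1, x2 = 0.3, 0.7
phi = np.kron(haar_vector(m, x1), haar_vector(m, x2))   # 2D basis vector

A = np.arange(16.0).reshape(m, m)        # stand-in for an operational matrix
lhs = np.kron(A, np.eye(m)) @ phi        # operator lifted to the 2D basis
rhs = np.kron(A @ haar_vector(m, x1), haar_vector(m, x2))
print(np.allclose(lhs, rhs))             # True: mixed-product property
```

This is why integration in the $x_1$ direction alone can be represented as $P_m \otimes I_m$ acting on the stacked two-dimensional coefficient vector.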

*Step 1.* Let $C$ be the $m \times m$ matrix obtained by reshaping the coefficient vector $c$.

*Step 2.* Compute, according to (11), the one-dimensional product operational matrix of each column of $C$, using the column as the coefficient vector.

*Step 3.* For each $j = 1, \ldots, m$, compute the corresponding product vector from the matrix obtained in Step 2.

*Step 4.* Form a large matrix by concatenating all the vectors from Step 3.

*Step 5.* For each row of the matrix from Step 4, compute, according to (11), the product operational matrix using that row as the coefficient vector.

*Step 6.* Assemble the matrix $M_c$ from the blocks obtained in Step 5.

*Step 7.* End.

#### 4. Problem Statement

The system to be controlled is given by a nonlinear differential equation of the affine form
$$\dot{x}(t) = f(x) + g(x)\, u(t), \qquad x(0) = x_0,$$
where $x \in \mathbb{R}^n$ is the state vector, $u$ is the control, $f$ and $g$ are continuously differentiable with respect to all their arguments, $x_0$ is the initial condition vector, and $\Omega$ is the domain of attraction.

The problem is to find the optimal control $u^*$ that minimizes the following performance index:
$$J(x_0, u) = \int_0^\infty \left( x^T Q x + u^T R u \right) dt,$$
where $Q$ is a positive semidefinite matrix and $R$ is a positive definite matrix. Given an arbitrary stabilizing control $u$, the performance of the control at $x_0$ is given by a Lyapunov function $V$ for the system [8], with $V(x_0) = J(x_0, u)$ and $V(0) = 0$. The optimal controller in feedback form is presented as follows [8]:
$$u^*(x) = -\frac{1}{2} R^{-1} g^T(x)\, \frac{\partial V^*}{\partial x},$$
where $V^*$ is the solution to the following Hamilton-Jacobi-Bellman (HJB) equation with boundary condition $V^*(0) = 0$:
$$\frac{\partial V^{*T}}{\partial x} f(x) - \frac{1}{4}\, \frac{\partial V^{*T}}{\partial x}\, g(x) R^{-1} g^T(x)\, \frac{\partial V^*}{\partial x} + x^T Q x = 0$$
for all $x \in \Omega$. Basically, it is not easy to solve the nonlinear partial differential equation in (24) to obtain $V^*$ and, consequently, $u^*$ from (23). Instead, the following two linear equations are iterated by the algorithm proposed in [8], starting from an initial stabilizing control $u^{(0)}$:
$$\frac{\partial V^{(i)T}}{\partial x} \left( f(x) + g(x)\, u^{(i)} \right) + x^T Q x + u^{(i)T} R\, u^{(i)} = 0,$$
$$u^{(i+1)}(x) = -\frac{1}{2} R^{-1} g^T(x)\, \frac{\partial V^{(i)}}{\partial x}.$$
Equation (25) is called the Generalized Hamilton-Jacobi-Bellman (GHJB) equation in [8]. Under moderate assumptions, it has been established in [8] that the iteration between the GHJB equation (25) and the control update (26) converges to the solution of the original HJB equation (24). If a stabilizing control is available to start with, the performance of this controller can be improved iteratively using (25) and (26), and the optimal controller can thus be approximated. Moreover, at each iteration step the controller is stabilizing.
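In the linear-quadratic special case, the iteration (25)-(26) can be checked directly: with $V(x) = x^T P x$, the GHJB equation reduces to a Lyapunov equation and the iteration becomes Kleinman's algorithm, converging to the Riccati solution. A sketch under that linear assumption (the system matrices and initial gain here are illustrative choices of ours):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, 1.0]])   # illustrative unstable linear system
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

K = np.array([[0.0, 3.0]])                # any initial stabilizing gain
for _ in range(20):
    Acl = A - B @ K
    # GHJB step: cost of the current control u = -Kx solves a Lyapunov equation
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    # control-improvement step, the linear analogue of (26)
    K = np.linalg.solve(R, B.T @ P)

# The iterates converge to the algebraic Riccati solution
print(np.allclose(P, solve_continuous_are(A, B, Q, R), atol=1e-6))   # True
```

Each pass first evaluates the cost of the current controller (the GHJB step) and then improves the controller from that cost, exactly mirroring the nonlinear iteration described above.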

#### 5. The Successive Haar Wavelet Collocation Method

This section describes the successive Haar wavelet collocation method (SHWCM) used to obtain the two-dimensional numerical solution of the HJB equation. At every step of the algorithm, an approximate solution to the GHJB equation (25) is computed; the value function, its partial derivatives, and the control can all be expressed approximately in terms of Haar wavelets. As the iteration proceeds, the approximate value function and control approach the optimal solution $V^*$ and $u^*$, respectively.

Let us consider the following two-dimensional optimal feedback control problem: minimize the cost (22) subject to the dynamics
$$\dot{x}_1 = f_1(x_1, x_2) + g_1(x_1, x_2)\, u, \qquad \dot{x}_2 = f_2(x_1, x_2) + g_2(x_1, x_2)\, u,$$
where $x = (x_1, x_2)^T \in \Omega$ and $f_1$, $f_2$, $g_1$, and $g_2$ are continuously differentiable.

Without loss of generality, and for the sake of convenience, the domain of attraction $\Omega$ has been selected as a square region containing the origin. The pair of the GHJB equation and the control law reads
$$\frac{\partial V^T}{\partial x} \left( f(x) + g(x)\, u \right) + x^T Q x + u^T R u = 0,$$
$$u_{\text{new}}(x) = -\frac{1}{2} R^{-1} g^T(x)\, \frac{\partial V}{\partial x},$$
starting from an initial stabilizing control. For (28), if $u$ is a stabilizing control, then from (29) the solution $V$ of the GHJB equation associated with $u$ is a Lyapunov function for the system and equals the cost associated with $u$:
$$V(x_0) = \int_0^\infty \left( x^T Q x + u^T R u \right) dt.$$
According to (13), the known functions appearing in (29) can each be approximated by a two-dimensional Haar series, with coefficient vectors calculated from (14). Since Haar functions cannot be differentiated, and since (29) involves only first-order derivatives of $V$, we assume that the mixed second-order partial derivative of $V$ exists; that is,
$$\frac{\partial^2 V}{\partial x_1\, \partial x_2} \approx c_V^T\, \Phi(x_1, x_2)$$
for some coefficient vector $c_V$.

With this assumption, the first-order partial derivatives can be obtained by integrating (33) with respect to $x_1$ and $x_2$, respectively:
$$\frac{\partial V}{\partial x_2} \approx c_V^T \left( P_m \otimes I_m \right) \Phi(x_1, x_2) + a^T h_m(x_2), \qquad \frac{\partial V}{\partial x_1} \approx c_V^T \left( I_m \otimes P_m \right) \Phi(x_1, x_2) + b^T h_m(x_1),$$
where $a$ and $b$ are the Haar coefficient vectors of the boundary functions $\partial V / \partial x_2 |_{x_1 = 0}$ and $\partial V / \partial x_1 |_{x_2 = 0}$.

It should be noted that the coefficient vector in (33) has $m^2$ unknown variables, while the two boundary coefficient vectors in (35) have only $m$ unknown variables each. Now, substituting (32) and (35) into (29) and collocating, we obtain (36), an underdetermined system of linear equations with $m^2$ equations and $m^2 + 2m$ unknown variables, which can be solved for the unknown vectors using the Moore-Penrose pseudoinverse [18]. An underdetermined system is expected because the Lyapunov function is not unique. The Moore-Penrose solution is the particular solution whose vector 2-norm is minimal.
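The minimum-norm property of the pseudoinverse solution is easy to demonstrate on a small underdetermined system; the matrices below are random stand-ins for the assembled collocation system, not the paper's actual equations:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 7))          # 4 equations, 7 unknowns
b = rng.standard_normal(4)

c = np.linalg.pinv(A) @ b                # Moore-Penrose (minimum-norm) solution
print(np.allclose(A @ c, b))             # True: it satisfies the equations exactly

# Any other solution c + n, with n in the nullspace of A, is at least as long:
v = rng.standard_normal(7)
n = v - np.linalg.pinv(A) @ (A @ v)      # project v onto the nullspace of A
print(np.allclose(A @ n, 0))             # True
print(np.linalg.norm(c) <= np.linalg.norm(c + n))   # True
```

Since the minimum-norm solution is orthogonal to the nullspace of $A$, adding any nullspace component can only increase the 2-norm, which is why the pseudoinverse singles out one Lyapunov function among the many that satisfy (36). `numpy.linalg.lstsq` returns the same minimum-norm answer for underdetermined systems.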

By using the solution of the GHJB equation (29), a feedback control law is constructed from (30), which improves the performance of the previous control. The solution of the Hamilton-Jacobi-Bellman equation is approximated uniformly by repeating the above process.

Knowing that the value of $V$ depends only on the initial and final points, not on the path followed, we can calculate the Lyapunov function by integrating its gradient parallel to the axes [19]:
$$V(x_1, x_2) = \int_0^{x_1} \frac{\partial V}{\partial x_1}(s, 0)\, ds + \int_0^{x_2} \frac{\partial V}{\partial x_2}(x_1, s)\, ds,$$
with $V(0, 0) = 0$; the integrals are evaluated using the Haar series of the partial derivatives and the operational matrices of integration.
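The axis-parallel integration can be sanity-checked on any function with a known gradient; the test function $V(x) = x_1^2 + x_1 x_2 + x_2^2$ below is our own choice, not from the paper:

```python
import numpy as np

# Partial derivatives of the test function V(x) = x1^2 + x1*x2 + x2^2
Vx1 = lambda x1, x2: 2 * x1 + x2
Vx2 = lambda x1, x2: x1 + 2 * x2

def trapezoid(y, x):
    """Composite trapezoidal rule (exact for the linear integrands above)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def V(x1, x2, n=200):
    """Recover V by integrating its gradient parallel to the axes:
    V(x1, x2) = int_0^{x1} Vx1(s, 0) ds + int_0^{x2} Vx2(x1, s) ds."""
    s1 = np.linspace(0.0, x1, n)
    s2 = np.linspace(0.0, x2, n)
    return trapezoid(Vx1(s1, 0.0), s1) + trapezoid(Vx2(x1, s2), s2)

print(V(0.5, 0.5))   # ~0.75, matching x1^2 + x1*x2 + x2^2 at (0.5, 0.5)
```

Path independence holds because the gradient field is conservative, so the two axis-parallel legs recover $V$ up to the constant $V(0, 0) = 0$.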

#### 6. Numerical Examples

To show the efficiency of the proposed method, we applied it to a linear quadratic optimal control problem and to two nonlinear quadratic optimal control problems.

*Example 1.* Consider a linear quadratic regulator (LQR) problem: minimize a quadratic performance index subject to linear dynamics.

To solve this problem, we take an initial stabilizing control. Tables 1 and 2 show sample iteration results for the control and the cost, respectively. The iteration is terminated when the difference between two successive controls falls below a prescribed tolerance. Subsequently, in order to display two-dimensional plots, we fix the value of one state variable. Figure 1 shows that, for this particular LQR problem, a low resolution is enough to approximate the exact optimal feedback control; however, to approximate the exact cost function, we require a higher resolution, as shown in Figure 2.

*Example 2.* Consider a nonlinear optimal control problem from [15]: minimize a quadratic performance index subject to nonlinear dynamics.
The optimal solution of this problem is known in closed form. To solve this nonlinear optimal control problem, we started with an initial stabilizing control. Figure 3 shows the approximate optimal feedback control law at increasing resolutions; the graph at the highest resolution overlaps with the exact optimal feedback control, and Figure 4 shows that the approximate cost function converges to the exact cost function as we increase the resolution. Figure 5 compares the exact state trajectories with the approximate trajectories.

*Example 3.* Consider the optimal control problem studied in [8].

The initial stabilizing control can be obtained using the feedback linearization method, as outlined in [20]. The optimal feedback control and the cost function obtained using the SHWCM for resolutions 8, 16, and 32 are illustrated in Figures 6 and 7, respectively. We believe that, by increasing the Haar wavelet resolution, the SHWCM is capable of yielding more accurate results. Figure 8 shows a simulation of the system trajectories.
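Closed-loop trajectory plots like Figures 5 and 8 are produced by integrating $\dot{x} = f(x) + g(x)\, u(x)$ under the computed feedback law. The example systems' equations are not reproduced in the text, so this sketch uses a hypothetical scalar plant $f(x) = x - x^3$, $g(x) = 1$ with the assumed stabilizing feedback $u(x) = -2x$:

```python
import numpy as np

f = lambda x: x - x**3        # hypothetical plant drift
g = lambda x: 1.0             # hypothetical input gain
u = lambda x: -2.0 * x        # assumed stabilizing feedback law

def rk4(x0, dt=0.01, steps=1000):
    """Classical 4th-order Runge-Kutta integration of the closed loop."""
    rhs = lambda x: f(x) + g(x) * u(x)
    x, traj = x0, [x0]
    for _ in range(steps):
        k1 = rhs(x); k2 = rhs(x + dt / 2 * k1)
        k3 = rhs(x + dt / 2 * k2); k4 = rhs(x + dt * k3)
        x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(x)
    return np.array(traj)

traj = rk4(1.0)
print(traj[-1])   # state has decayed toward the origin after t = 10
```

With this choice the closed loop is $\dot{x} = -x - x^3$, so the state decays at least as fast as $e^{-t}$, which is what a stabilizing feedback law produced by the SHWCM iteration should also exhibit.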

#### 7. Conclusion

In this paper, we have proposed a new numerical method for solving the Hamilton-Jacobi-Bellman equation, which appears in the formulation of optimal control problems. Our approach uses a combination of the successive Generalized Hamilton-Jacobi-Bellman equation and Haar wavelet operational matrix methods. The proposed approach is simple and stable and has been tested on linear and nonlinear optimal control problems in two-dimensional state space. Generally, with our method, the approximate solutions for the optimal feedback control require a lower resolution than the approximate solutions for the cost function. However, in both cases, more accurate results can be obtained by increasing the resolution of the Haar wavelets.

#### Acknowledgments

The authors are very grateful to the referees for their valuable comments and suggestions, which greatly improved the presentation of this paper. This research was funded by the University of Malaya under Grant no. RG208-11AFR.

#### References

1. R. W. Beard and T. W. McLain, "Successive Galerkin approximation algorithms for nonlinear optimal and robust control," *International Journal of Control*, vol. 71, no. 5, pp. 717–743, 1998.
2. Y. Feng, B. D. O. Anderson, and M. Rotkowitz, "A game theoretic algorithm to compute local stabilizing solutions to HJBI equations in nonlinear ${H}_{\infty}$ control," *Automatica*, vol. 45, no. 4, pp. 881–888, 2009.
3. J. Huang and C.-F. Lin, "Numerical approach to computing nonlinear ${H}_{\infty}$ control laws," *Journal of Guidance, Control, and Dynamics*, vol. 18, no. 5, pp. 989–996, 1995.
4. M. D. S. Aliyu, "An approach for solving the Hamilton-Jacobi-Isaacs equation (HJIE) in nonlinear ${H}_{\infty}$ control," *Automatica*, vol. 39, no. 5, pp. 877–884, 2003.
5. M. Abu-Khalaf, F. L. Lewis, and J. Huang, "Policy iterations on the Hamilton-Jacobi-Isaacs equation for ${H}_{\infty}$ state feedback control with input saturation," *IEEE Transactions on Automatic Control*, vol. 51, no. 12, pp. 1989–1995, 2006.
6. S. T. Glad, "Robustness of nonlinear state feedback—a survey," *Automatica*, vol. 23, no. 4, pp. 425–435, 1987.
7. O. von Stryk and R. Bulirsch, "Direct and indirect methods for trajectory optimization," *Annals of Operations Research*, vol. 37, no. 1, pp. 357–373, 1992.
8. R. W. Beard, G. N. Saridis, and J. T. Wen, "Galerkin approximations of the generalized Hamilton-Jacobi-Bellman equation," *Automatica*, vol. 33, no. 12, pp. 2159–2176, 1997.
9. H. M. Jaddu, *Numerical Methods for Solving Optimal Control Problems Using Chebyshev Polynomials* [Ph.D. thesis], School of Information Science, Japan Advanced Institute of Science and Technology, 1998.
10. S. C. Beeler, H. T. Tran, and H. T. Banks, "Feedback control methodologies for nonlinear systems," *Journal of Optimization Theory and Applications*, vol. 107, no. 1, pp. 1–33, 2000.
11. C. Park and P. Tsiotras, "Approximations to optimal feedback control using a successive wavelet collocation algorithm," in *Proceedings of the American Control Conference*, vol. 3, pp. 1950–1955, June 2003.
12. C. F. Chen and C. H. Hsiao, "Haar wavelet method for solving lumped and distributed parameter systems," *IEE Proceedings on Control Theory and Applications*, vol. 144, no. 1, pp. 87–94, 1997.
13. C. H. Hsiao and W. J. Wang, "Optimal control of linear time-varying systems via Haar wavelets," *Journal of Optimization Theory and Applications*, vol. 103, no. 3, pp. 641–655, 1999.
14. R. Dai and J. E. Cochran Jr., "Wavelet collocation method for optimal control problems," *Journal of Optimization Theory and Applications*, vol. 143, no. 2, pp. 265–278, 2009.
15. J. W. Curtis and R. W. Beard, "Successive collocation: an approximation to optimal nonlinear control," in *Proceedings of the American Control Conference*, vol. 5, pp. 3481–3485, June 2001.
16. C. H. Hsiao and S. P. Wu, "Numerical solution of time-varying functional differential equations via Haar wavelets," *Applied Mathematics and Computation*, vol. 188, no. 1, pp. 1049–1058, 2007.
17. J. W. Brewer, "Kronecker products and matrix calculus in system theory," *IEEE Transactions on Circuits and Systems*, vol. 25, no. 9, pp. 772–781, 1978.
18. P. Courrieu, "Fast computation of Moore-Penrose inverse matrices," *Neural Information Processing-Letters and Reviews*, vol. 8, no. 2, pp. 25–29, 2005.
19. J.-J. Slotine and W. Li, *Applied Nonlinear Control*, Prentice-Hall, Englewood Cliffs, NJ, USA, 1991.
20. A. Isidori, *Nonlinear Control Systems*, Communication and Control Engineering, Springer, New York, NY, USA, 2nd edition, 1989.