Special Issue: Recent Developments in Integral Transforms, Special Functions, and Their Extensions to Distributions Theory
Research Article | Open Access
Emran Tohidi, F. Soleymani, Adem Kilicman, "Robustness of Operational Matrices of Differentiation for Solving State-Space Analysis and Optimal Control Problems", Abstract and Applied Analysis, vol. 2013, Article ID 535979, 9 pages, 2013. https://doi.org/10.1155/2013/535979
Robustness of Operational Matrices of Differentiation for Solving State-Space Analysis and Optimal Control Problems
This paper proposes the idea of approximation by monomials together with the collocation technique over a uniform mesh for solving state-space analysis and optimal control problems (OCPs). After imposing Pontryagin's maximum principle on the main OCPs, the problems reduce to a linear or nonlinear boundary value problem. In the linear case we propose a monomial collocation matrix approach, while in the nonlinear case the general collocation method is applied. We also show the efficiency of the operational matrices of differentiation with respect to the operational matrices of integration in our numerical examples. The matrices of integration considered for comparison are related to the Bessel, Walsh, triangular, Laguerre, and Hermite functions.
In the last four decades, numerical methods based on operational matrices of integration (especially for orthogonal polynomials and functions) have received considerable attention for dealing with a wide range of applied mathematics problems such as state-space analysis and optimal control. The key idea of these methods is the integral relation

∫_0^t Φ(τ) dτ ≈ P Φ(t),    (1)

where Φ(t) is an arbitrary basis vector and P is a constant matrix, called the operational matrix of integration. The matrix P has already been determined for many types of orthogonal (or nonorthogonal) bases such as Walsh functions [1–3], block-pulse functions, Laguerre polynomials, Chebyshev polynomials, Legendre polynomials, Hermite polynomials, Fourier series, Bernstein polynomials, and Bessel functions. As a primary research work based on operational matrices of integration, one can refer to the work of Corrington, in which the author proposed a method of solving nonlinear differential and integral equations using a set of Walsh functions as the basis. His method aims at obtaining piecewise constant solutions of dynamic equations and requires previously prepared tables of coefficients for integrating Walsh functions. To alleviate the need for such tables, Chen and Hsiao [2, 3] introduced an operational matrix to perform integration of Walsh functions. This operational matrix approach has been applied to various problems such as time-domain analysis and synthesis of linear systems, piecewise constant-feedback-gain determination for optimal control of linear systems, and inversion of irrational Laplace transforms.
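As a concrete illustration of (1), the following sketch (in Python, using the monomial basis Φ(t) = [1, t, …, t^N]ᵀ, chosen here purely for concreteness rather than taken from the paper) builds the operational matrix of integration and shows why the relation carries an approximation sign:

```python
import numpy as np

def integration_matrix(N):
    """Operational matrix of integration P for the monomial basis
    Phi(t) = [1, t, ..., t^N]^T, so that int_0^t Phi(tau) dtau ~ P Phi(t).
    Integrating the last basis element t^N produces t^(N+1)/(N+1), which has
    no representation in the truncated basis and is simply dropped -- that
    dropped term is exactly why (1) is only an approximation."""
    P = np.zeros((N + 1, N + 1))
    for k in range(N):           # int_0^t tau^k dtau = t^(k+1)/(k+1)
        P[k, k + 1] = 1.0 / (k + 1)
    return P                     # row N is all zeros: the truncation error
```

For instance, with N = 3 the product P Φ(t) reproduces t, t²/2, and t³/3 exactly, but returns 0 in place of t⁴/4.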
On the other hand, since the beginning of 1994, the Bernoulli, Chebyshev, Laguerre, Bernstein, Legendre, Taylor, Hermite, and Bessel matrix methods have been used in the works [12–24] to solve high-order linear and nonlinear differential equations (including hyperbolic partial differential equations), Fredholm and Volterra integro-differential equations, difference and delay equations, and their systems. The main characteristic of these approaches is that they are based on operational matrices of differentiation instead of integration. The chief advantage of these techniques over the integration methods is that the fundamental matrix relations involve no approximation symbol, whereas in integration forms such as (1) the approximation symbol appears explicitly. In other words,

dΦ(t)/dt = D Φ(t),    (2)

where D is the operational matrix of differentiation for any selected basis such as the previously mentioned polynomials, functions, and truncated series. The reader can see that there is no approximation symbol in (2), while one appears in (1) when operational matrices of integration are used. To justify this, note that differentiating a polynomial of degree N yields a polynomial of degree less than N, whereas integration increases the degree of polynomials.
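For comparison, the operational matrix of differentiation for the same monomial basis can be sketched as follows (again an illustration under the monomial-basis assumption, not the paper's own code); note that no term is dropped, which is why (2) holds exactly:

```python
import numpy as np

def differentiation_matrix(N):
    """Operational matrix of differentiation D for Phi(t) = [1, t, ..., t^N]^T,
    so that d/dt Phi(t) = D Phi(t) holds EXACTLY: differentiating t^k gives
    k t^(k-1), which always stays inside the truncated basis."""
    D = np.zeros((N + 1, N + 1))
    for k in range(1, N + 1):    # d/dt t^k = k t^(k-1)
        D[k, k - 1] = k
    return D
```

Unlike the integration matrix, every row here is exact, reflecting that differentiation lowers polynomial degree while integration raises it.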
In this paper, we generalize a new collocation matrix method, which has been applied to a wide range of applied mathematics models (see, for instance, the references therein), to several special classes of systems of ordinary differential equations (ODEs). Two important classes of such systems of ODEs are (i) state-space analysis and (ii) Hamiltonian systems, which are necessary (and, in several special cases, also sufficient) conditions for optimality of the solutions of OCPs, originate from the Pontryagin maximum principle (PMP), and have considerable importance in optimal control and the calculus of variations.
We emphasize again that methods based on operational matrices of differentiation are more accurate and effective than those based on integration. We illustrate this fact through several examples dealing with the previously mentioned systems in the section on numerical examples. It should be noted that one of the best tools for the integration approaches is the use of highly accurate Gauss quadrature rules such as the methods of [25, 26]. However, such quadrature rules require more CPU time, and the coefficient matrices associated with these methods are usually ill-conditioned and should be preconditioned.
The remainder of this paper is organized as follows. In Section 2, the considered problems, namely state-space analysis and Hamiltonian systems, are introduced. In Section 3, the fundamental matrix relations together with the method of obtaining approximate solutions are described. In Section 4, several numerical examples are provided to confirm the high accuracy of the proposed method. The last section is devoted to the conclusions.
2. Statement of the Problems
In this section two types of problems are considered. In the first subsection, we show how Hamiltonian systems can be obtained in both linear and nonlinear forms. In the second subsection, we introduce a general form of state-space analysis problems.
2.1. Hamiltonian Systems
2.1.1. Linear Quadratic Optimal Control Problems
In this part, we consider the following linear optimal control problem (OCP). The control is admissible if it is piecewise continuous in time over the given horizon and its values belong to a given closed subset of the control space. The input is derived by minimizing the quadratic performance index, in which the state weighting matrix is positive semidefinite and the control weighting matrix is positive definite. We consider the Hamiltonian for system (3), in which the costate vector appears as the multiplier of the dynamics.
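For readers who want the explicit structure, the linear Hamiltonian system produced by the PMP in the standard LQ setting can be assembled as in the following sketch (the symbols A, B, Q, R are the usual textbook data and are assumptions of this sketch rather than the paper's exact notation):

```python
import numpy as np

def hamiltonian_matrix(A, B, Q, R):
    """Applying the PMP to  x' = A x + B u  with the quadratic cost
    (1/2) * integral of (x^T Q x + u^T R u) dt  and eliminating u via
    u* = -R^{-1} B^T lam  gives the linear Hamiltonian system
    [x'; lam'] = H [x; lam]  with  H = [[A, -B R^{-1} B^T], [-Q, -A^T]]."""
    BRB = B @ np.linalg.solve(R, B.T)     # B R^{-1} B^T
    return np.block([[A, -BRB], [-Q, -A.T]])

# scalar illustration: x' = x + u with Q = R = 1
H = hamiltonian_matrix(np.array([[1.0]]), np.array([[1.0]]),
                       np.array([[1.0]]), np.array([[1.0]]))
```

Solving this linear boundary value problem for the state and costate then yields the optimal trajectory and, via the control law, the optimal input.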
2.1.2. Nonlinear Quadratic Optimal Control Problems
Consider the nonlinear dynamical system with the state variable, the control variable, and the given initial state at the initial time; moreover, the dynamics and running cost are continuously differentiable in all arguments. Our aim is to minimize the quadratic objective functional subject to the nonlinear system (8), where the state and control weighting matrices are positive semidefinite and positive definite, respectively. Since the performance index (9) is convex, the following necessary conditions for an extremum are also sufficient for optimality, where the function involved is referred to as the Hamiltonian. Equivalently, (10) can be written in the form of (11), in which the costate vector and its components appear.
Also the optimal control law is obtained by
2.2. State Space Analysis Problems
In this part, we consider the following state-space analysis problem, in which the system data are known while the state is unknown. The goal is to obtain an approximation of the state in (13). The system (13) is similar to the Hamiltonian system (7), and the same solution scheme applies.
Remark 1. We recall that the main goal of this paper is to approximate the solution of the systems (7), (11), and (13) by applying a new matrix method based on the operational matrix of differentiation together with a uniform collocation scheme for the Hamiltonian systems and state-space analysis problems.
3. Fundamental Matrix Relations and Method of the Solution
In this section, by using the collocation points and the matrix relations between the monomials and their derivatives, we find the approximate solution of system (7), expressed in the truncated monomial series form (14) (assuming that the system matrices are independent of time t), in which the monomial coefficients are the unknowns to be determined.
Let us consider the desired solutions of (7) defined by the truncated monomial series (14). We can write the approximate solutions given in relation (14) in the matrix form (15).
The matrix form of the relation between the basis matrix and its kth derivative is given by (16). By using the relations (15) and (16), we obtain the corresponding relations for the derivatives of the approximate solutions; thus, we can express these solution matrices and their derivatives as in (19).
Now, we can restate the system (7) in the matrix form (21). Applying the collocation points (23) in (21) yields a set of algebraic equations. All of these equations can be written in the matrix form (25), where ⊗ denotes the Kronecker product and I is the identity matrix of appropriate dimension.
With the aid of relation (19) and the collocation points (23), we obtain a relation that can be written in the compact matrix form (28). If relation (28) is substituted into (25), the fundamental matrix equation (30) is obtained. Thus, the fundamental matrix equation (30) corresponding to (7) can be written in a form that corresponds to a linear system of algebraic equations in the unknown monomial coefficients. By the aid of relation (19), the matrix form of the boundary conditions given in (7) can also be written.
Finally, by replacing rows of the coefficient matrix with the rows associated with the boundary conditions, we obtain the new augmented matrix. The unknown monomial coefficients are determined by solving this linear system and are then substituted into (14). Therefore, we find the approximate solutions.
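The whole linear procedure (collocating the differential equations at uniform points and appending the initial/boundary-condition rows before solving for the monomial coefficients) can be sketched on a toy problem as follows; this is a minimal illustration of the scheme, not the authors' implementation:

```python
import numpy as np

def monomial_collocation(A, x0, N):
    """Collocation solve of x'(t) = A x(t), x(0) = x0 on [0, 1] in the
    monomial basis: component i is approximated by sum_k C[i, k] t^k, the
    ODE is imposed at N uniform points, and the initial condition supplies
    the remaining rows of the (square) linear system."""
    n = len(x0)
    ts = np.linspace(0.0, 1.0, N)
    rows, rhs = [], []
    for t in ts:
        Phi  = np.array([t**k for k in range(N + 1)])
        dPhi = np.array([k * t**(k - 1) if k else 0.0 for k in range(N + 1)])
        for i in range(n):                    # residual of equation i at t
            row = np.zeros(n * (N + 1))
            row[i*(N+1):(i+1)*(N+1)] = dPhi
            for j in range(n):
                row[j*(N+1):(j+1)*(N+1)] -= A[i, j] * Phi
            rows.append(row); rhs.append(0.0)
    Phi0 = np.array([0.0**k for k in range(N + 1)])   # basis at t = 0
    for i in range(n):                        # initial-condition rows
        row = np.zeros(n * (N + 1))
        row[i*(N+1):(i+1)*(N+1)] = Phi0
        rows.append(row); rhs.append(x0[i])
    C = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    return C.reshape(n, N + 1)

# toy check: x' = -x, x(0) = 1, whose exact solution is e^{-t}
C = monomial_collocation(np.array([[-1.0]]), [1.0], 8)
x_half = sum(C[0, k] * 0.5**k for k in range(9))
```

With N collocation points and degree-N monomials per component, the system is square, so replacing rows with the boundary conditions (here appended directly) closes the problem.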
We can easily check the accuracy of the method. Since the truncated monomial series (14) are approximate solutions of (7), when these functions and their derivatives are substituted into (7), the resulting equations must be satisfied approximately; that is, the residuals at the collocation points should be close to zero, say bounded by 10^(-k) for some positive integer k.
If a tolerance 10^(-k) (k a positive integer) is prescribed, then the truncation limit N is increased until the residual at each of the points becomes smaller than the prescribed tolerance.
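A minimal sketch of this residual test, assuming the same monomial-coefficient representation as above (the helper and its arguments are illustrative, not the paper's notation):

```python
import math
import numpy as np

def residual(A, C, ts):
    """Max norm of x'(t) - A x(t) over the sample points ts, where row i of C
    holds the monomial coefficients of the i-th state component. If this
    exceeds the prescribed tolerance 10^(-k), the truncation limit is
    increased and the collocation solve repeated."""
    n, M = C.shape
    worst = 0.0
    for t in ts:
        x  = np.array([sum(C[i, k] * t**k for k in range(M)) for i in range(n)])
        dx = np.array([sum(k * C[i, k] * t**(k - 1) for k in range(1, M))
                       for i in range(n)])
        worst = max(worst, float(np.max(np.abs(dx - A @ x))))
    return worst

# quick check with the degree-9 Taylor coefficients of e^t for x' = x:
C = np.array([[1.0 / math.factorial(k) for k in range(10)]])
r = residual(np.array([[1.0]]), C, np.linspace(0.0, 1.0, 11))
```

For this truncated exponential the residual equals t^9/9! at each point, so it is small but nonzero, exactly the quantity the stopping rule monitors.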
Remark 2. We recall that a similar approach can be applied to the state-space analysis problem (13). Moreover, as noted before, for solving a general nonlinear system of ODEs such as (11), we apply a generalization of the collocation method.
4. Numerical Examples
In this section, several numerical examples are given to illustrate the accuracy and effectiveness of the proposed method. All calculations are performed in MAPLE 13 and run on a Pentium 4 laptop with a 2 GHz CPU and 2 GB of RAM. In this regard, in tables and figures we report the absolute error functions associated with the trajectory and control variables, as well as the approximated values of the performance index. In the first example, we consider an OCP that was recently solved by a new method (based on the operational matrix of integration of triangular functions) and obtain more accurate results. In the second example, we consider another OCP (or Hamiltonian system) with a time-varying dynamical system, for which our results are more accurate than those of the methods [30, 31]. Moreover, we consider a nonlinear OCP as our third numerical illustration. In the fourth example, we provide a state-space analysis problem together with a full comparison with methods that are based on operational matrices of integration, such as the Bessel and Laguerre approaches.
Example 3 (linear Hamiltonian system). Consider the problem of minimizing
The purpose is to find the optimal control which minimizes (37) subject to (38). The optimal value of the performance index for this problem and the exact solutions are given as follows.
Since the objective function of this OCP is convex, the following necessary conditions (i.e., the linear Hamiltonian system) for optimality are also sufficient:
Hence, we need to solve the previous system of differential equations; the obtained numerical solution is then the optimal solution of problem (37)-(38). It should be noted that, according to (6), the optimal control is computed from the costate solution of the previous system.
We solve this problem using our proposed method for the truncation parameters 4, 5, 6, 7, and 8. The approximate solutions corresponding to these values are provided below, together with the associated performance indices. In Table 1 we report the results of our proposed method (PM) and of a new method based on the operational matrix of integration of triangular functions for different truncation parameters. It can be seen from this table that our results for the considered values (i.e., 4, 5, 6, 7, and 8) already match the results that the integration-based method reaches only at much larger truncation parameters (up to 64) in the computation of the performance index. Moreover, our results are more accurate than those of the integration-based method even for lower truncation parameters.
Example 4 (see [30, 31] linear Hamiltonian system). Consider the linear time-varying system (42) with the cost functional (43). The problem is to obtain the optimal control which minimizes (43) subject to (42). The optimal control is expressed through the feedback controller gain matrix, which involves the solution of the Riccati equation (45). According to the optimality conditions (5) and (6), we obtain the system (46). We first solve this system and obtain the numerical solutions of the state and costate variables, and then solve (45) by the ODE solver commands available in MAPLE 13; the numerical results obtained by the proposed method can then be compared with the Riccati-based solution. The exact solution of the Riccati equation (45) is also evaluated at the uniform mesh points in the interval for comparison. In Table 2, we provide the absolute errors at the selected points for the previously considered truncation parameters, together with the corresponding errors of the other methods [30, 31]. Again, we can see the accuracy of the method compared with methods based on operational matrices of integration.
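As a hedged illustration of the Riccati step (with made-up scalar data a = b = q = r = 1 and horizon T = 1, not the example's actual system), the equation can be integrated backward in time from the terminal condition:

```python
import numpy as np

a, b, q, r, T, M = 1.0, 1.0, 1.0, 1.0, 1.0, 1000

def f(p):
    """Right-hand side of the scalar Riccati equation
    p'(t) = -(2 a p - b^2 p^2 / r + q), with terminal condition p(T) = 0."""
    return -(2 * a * p - (b**2) * p**2 / r + q)

h, p = -T / M, 0.0        # negative step: sweep from t = T down to t = 0
for _ in range(M):        # classical RK4, fixed step
    k1 = f(p); k2 = f(p + h * k1 / 2); k3 = f(p + h * k2 / 2); k4 = f(p + h * k3)
    p += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

gain0 = b * p / r         # feedback gain k(0) = b p(0) / r
```

For these illustrative data the scalar Riccati equation has the closed-form solution p(t) = 1 + sqrt(2) tanh(sqrt(2)(T - t) - artanh(1/sqrt(2))), against which the sweep can be checked.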
Example 5 (nonlinear Hamiltonian system). As our third illustration, consider the following nonlinear optimal control problem:
The problem data are identified accordingly. As mentioned in Section 2.1.2, we solve the following system of ordinary differential equations:
Also the optimal control law is given by
Similarly to the linear cases, we suppose that the state and costate variables can be written as linear combinations of the monomials defined in Section 3, with unknown monomial coefficients. These coefficients are determined by imposing the previous system of differential equations at the uniform mesh in the interval. In other words, applying these collocation points to the main system, together with the considered boundary conditions on the state and costate variables, transforms the basic problem into a corresponding system of nonlinear algebraic equations. Assuming different truncation parameters such as 5, 7, and 9, we solve this system. In Table 3, we provide the approximated performance index obtained by our proposed method and also the differences between the approximated values for the considered truncation parameters.
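The nonlinear collocation step can be sketched on a toy problem (x' = -x², x(0) = 1, whose exact solution is 1/(1+t)); this is an illustration of forming and solving the nonlinear algebraic system, not the example's own Hamiltonian system:

```python
import numpy as np
from scipy.optimize import fsolve

N = 8
ts = np.linspace(0.0, 1.0, N)              # uniform collocation mesh

def residuals(c):
    """Collocated residuals of x' = -x^2 plus the initial condition, giving
    N + 1 nonlinear algebraic equations for the N + 1 monomial coefficients."""
    x  = lambda t: sum(c[k] * t**k for k in range(N + 1))
    dx = lambda t: sum(k * c[k] * t**(k - 1) for k in range(1, N + 1))
    eqs = [dx(t) + x(t)**2 for t in ts]    # ODE imposed at each mesh point
    eqs.append(x(0.0) - 1.0)               # initial condition
    return eqs

# start from the constant guess x(t) = 1, which already satisfies x(0) = 1
c = fsolve(residuals, np.array([1.0] + [0.0] * N))
x_half = sum(c[k] * 0.5**k for k in range(N + 1))   # exact value is 2/3
```

Any nonlinear algebraic solver can replace `fsolve` here; the essential point is that collocation turns the nonlinear boundary value problem into a finite nonlinear system in the monomial coefficients.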
Example 6 (see [11, 32] state-space analysis). We consider a linear time-invariant state equation (51), where the input is the unit step function on the given interval and the initial state is specified. The exact solution of (51) is known in closed form. We solve this problem using our proposed method for truncation parameters up to 20; the approximate solutions corresponding to representative values (up to 10) are provided below.
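For reference, the exact solution of an LTI state equation under a unit-step input can be evaluated via the matrix exponential, as in the following sketch with illustrative data (the matrices A, B, x0 below are placeholders, not the example's actual data):

```python
import numpy as np
from scipy.linalg import expm

A  = np.array([[0.0, 1.0], [-2.0, -3.0]])   # illustrative stable system
B  = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [0.0]])

def x_exact(t):
    """Closed-form step response (valid when A is invertible):
    x(t) = e^{At} x0 + A^{-1} (e^{At} - I) B  for the unit-step input u = 1."""
    E = expm(A * t)
    return E @ x0 + np.linalg.solve(A, (E - np.eye(2)) @ B)

x_at_1 = x_exact(1.0)
```

Evaluating this closed form at the mesh points gives the reference values against which the collocation approximations (and the integration-based Bessel and Laguerre results) can be compared.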