ISRN Aerospace Engineering
Volume 2014, Article ID 136315, 15 pages
http://dx.doi.org/10.1155/2014/136315
Research Article

Satellite Attitude Control Using Analytical Solutions to Approximations of the Hamilton-Jacobi Equation

University of Toronto Institute for Aerospace Studies, 4925 Dufferin Street, Toronto, ON, Canada M3H 5T6

Received 25 October 2013; Accepted 11 December 2013; Published 20 February 2014

Academic Editors: A. Desbiens, C. Meola, and S. Simani

Copyright © 2014 Stefan LeBel and Christopher J. Damaren. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The solution to the Hamilton-Jacobi equation associated with the nonlinear control problem is approximated using a Taylor series expansion. A recently developed analytical solution method is used for the second-, third-, and fourth-order terms. The proposed controller synthesis method is applied to the problem of satellite attitude control with attitude parameterization accomplished using the modified Rodrigues parameters and their associated shadow set. This leads to kinematical relations that are polynomial in the modified Rodrigues parameters and the angular velocity components. The proposed control method is compared with existing methods from the literature through numerical simulations. Disturbance rejection properties are compared by including the gravity-gradient and geomagnetic disturbance torques. Controller robustness is also compared by including unmodeled first- and second-order actuator dynamics, as well as actuation time delays in the simulation model. Moreover, the gap metric distance induced by the unmodeled actuator dynamics is calculated for the linearized system. The results indicate that a linear controller performs almost as well as those obtained using higher-order solutions for the Hamilton-Jacobi equation and the controller dynamics.

1. Introduction

The attitude control problem is critical for most satellite applications and has thus attracted extensive interest. While many control methods have been developed to address this problem, most of them are concerned primarily with the optimality of attitude maneuvers [1–4]. In the present work, we shall focus on robust nonlinear control systems. We note that, throughout this paper, by nonlinear control we mean control that bounds the L2-gain of a nonlinear system.

Control laws are generally developed based on mathematical models that are, at best, a close approximation of real-world phenomena. For such control methods to have any real practical value, they must be made robust with regard to unmodeled dynamics and disturbances that may act on the system. The study of robust control is therefore an essential part of the application of control theory to physical systems. In general, the development of an optimal nonlinear state feedback control law is characterized by the solution to a Hamilton-Jacobi partial differential equation (HJE) [5], while a robust nonlinear controller is obtained from the solution of one or more Hamilton-Jacobi equations [6–9]. However, no general analytical solution has yet been obtained for this class of equations. Solutions have thus far only been obtained under certain conditions: in the case of linear systems with a quadratic performance index, the HJE reduces to the well-known algebraic Riccati equation (ARE). It is noted that the concept of dissipativity, which is closely related to optimal and robust control, is characterized by a Hamilton-Jacobi inequality [10–12].

Extensive work has been carried out to approximate the solution of Hamilton-Jacobi equations through a Taylor series expansion [13–17]. Although such a series expansion results in an infinite-order polynomial, finite-order approximations can be used to obtain suboptimal solutions to an HJE. We also note the work in [18, 19], which uses series solution methods for nonlinear optimal control problems. It has been shown that a local solution to an HJE can be obtained by solving the ARE for the linear approximation of the system [6, 7, 20]. Methods that have been developed over the past decades to attempt to solve this problem include the Zubov procedure [21, 22], the state-dependent Riccati equation [23, 24], the Galerkin method for the equivalent sequence of first-order partial differential equations [25, 26], and the use of symplectic geometry to examine the associated Hamiltonian system [27]. However, one aspect that is lacking in all of the above methods is an analytical solution to the approximate equations.

The primary purpose of this paper is to develop robust nonlinear controllers based on analytical expressions for approximate solutions to the Hamilton-Jacobi equation. In particular, we shall provide analytical expressions for the second-, third-, and fourth-order terms of the approximate solution. These controllers are then compared through numerical simulations with existing methods from the literature for spacecraft attitude regulation [1–4]. Our objective is to examine the effects of different disturbances and uncertainties on the performance and robustness of the various controllers. More specifically, we include gravity-gradient and geomagnetic torques, as well as unmodeled actuator dynamics and actuation time delays. Moreover, we make use of the gap metric [28] to characterize the difference in the input-output (IO) map of the system induced by the unmodeled actuator dynamics. However, since we cannot calculate the gap between two nonlinear systems, we calculate the gap metric distance for the linearized system only. In contrast with some of the methods used for comparison, which were developed specifically to address the attitude control problem, the method presented in this paper is a general controller synthesis method and has also been applied to spacecraft formation flying [29].

The outline of the paper is as follows. In Section 2, a detailed description of the general class of systems is given, along with the controller synthesis method that is proposed. This controller is the solution of an appropriate nonlinear problem and is taken from the work of James et al. [9]. Then, the nonlinear equations of motion for the satellite attitude dynamics are given in Section 3. Section 4 presents simulation results using the proposed controller and comparisons are made with existing methods. Finally, some conclusions and suggestions for future work are stated in Section 5. We note here that some of these results also appear in past conference proceedings [30, 31] with the present paper containing improvements to the overall presentation.

2. Nonlinear Controller Synthesis Approach

This section provides the main results of our approach to robust nonlinear controller synthesis. We begin by describing the class of nonlinear systems with which we are concerned and define robustness in the gap metric. Then, the nonlinear control problem is presented. Finally, the analytic solutions for the second-, third-, and fourth-order terms in the Taylor series approximation to the solution of the Hamilton-Jacobi equation (HJE) are stated.

2.1. Class of Nonlinear Systems

Consider the nonlinear system shown in Figure 1. The plant is given by where , , and are assumed to be smooth (i.e., ) nonlinear functions of the plant states with and (i.e., is an equilibrium). The controller is described by where , , and are smooth nonlinear functions of the controller states. Additionally, the following relations hold: In the above, is the plant state vector, is the controller state vector, is the control signal, is the exogenous (disturbance) input, is the actual plant input, is the actual plant output, is the reference and/or sensor noise signal, and is the tracking error.

Figure 1: Block diagram of system (1) with controller (2).

The plant in (1) can be written as where The generalized system in (4) is shown in Figure 2. The first equation in (4) defines the plant dynamics with state variable , control input , and subject to a set of exogenous inputs , which includes disturbances (to be rejected), references (to be tracked), and/or noise (to be filtered). The second equation defines the penalty variable representing the outputs of interest, which may include a tracking error, a function of some of the exogenous variables , and a cost of the input needed to achieve the prescribed control goal. The third equation defines the set of measured variables , which are functions of the plant state and the exogenous inputs . As we shall see next, we will be concerned with defining an upper bound on the -gain from the disturbance inputs to the outputs of the system in (4).

Figure 2: Block diagram of generalized system (4) with controller (2).
2.2. Robustness in the Gap Metric

In general, the plant and controller are assumed to be causal mappings from their respective inputs and outputs; that is, and , which satisfy and , where and are appropriate signal spaces. In particular, these mappings can be represented by (1) and (2). Thus, in the feedback configuration of Figure 1, the signals belong to and the signals belong to , where . The input-output relations describing the plant and controller can be represented by their respective graphs: where , . For a plant with and a plant with , the -gap between these two plants is defined as [9, 28] where and is similarly defined. The norm used here, , is defined in [9].

It has been shown [9, 32] that, for a stable feedback pair , if a system is such that then the feedback system is also stable. The symbol defines a parallel projection operator [32] and represents the closed-loop mapping from the disturbances to the outputs shown in Figure 2. The -gain of is the induced norm defined by where the -norm is defined by .

Thus, to minimize the effects of the disturbances on the outputs , which simultaneously achieves optimal robustness, the objective is to design a controller such that is minimized. However, no closed-form solution exists to this optimal control problem. Instead, we shall be concerned with the suboptimal problem: given some constant scalar , design such that . Note that a -iteration procedure can be applied to obtain the optimal solution within an arbitrary tolerance. It will be shown that the controller that satisfies this objective can be obtained from the solution to a single Hamilton-Jacobi equation.
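The gamma-iteration mentioned above is a simple bisection on the bound. A minimal sketch, where `solvable` is a hypothetical user-supplied predicate reporting whether the suboptimal problem admits a solution for a given value of the scalar bound:

```python
def gamma_iteration(solvable, gamma_lo, gamma_hi, tol=1e-3):
    """Bisect on gamma to approach the optimal bound within tolerance tol.

    `solvable(gamma)` must return True when a controller achieving the bound
    gamma exists; the suboptimal problem is assumed solvable at gamma_hi and
    unsolvable at gamma_lo.
    """
    assert solvable(gamma_hi) and not solvable(gamma_lo)
    while gamma_hi - gamma_lo > tol:
        mid = 0.5 * (gamma_lo + gamma_hi)
        if solvable(mid):
            gamma_hi = mid   # a controller exists: tighten the bound
        else:
            gamma_lo = mid   # no controller: relax the bound
    return gamma_hi
```

The returned value always satisfies the predicate, so it remains a valid (slightly conservative) suboptimal bound.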

2.3. Nonlinear Control Problem

As indicated in the previous subsection, we are concerned with rendering . In other words, we wish to bound the -gain from the disturbance inputs to the outputs of the system in (4). Without providing the details, it is noted that the HJE corresponding to (4) is dependent on and the quadratic term is sign-indefinite (i.e., is neither positive definite nor negative definite). However, by performing a certain transformation [9, 33], the generalized system can be written in a form where the outputs are not explicit functions of the disturbances . The resulting HJE is no longer dependent on and is sign-definite. The system resulting from this transformation is described by where . Now, the HJE corresponding to the system in (11) is given by where is the Jacobian matrix of the storage function and is defined as with denoting a row matrix in the index . For the system in (11), we have the parameters [9] The name used in the literature for the HJE in (12) is not uniform. In the context of robust control, it is sometimes referred to as the Hamilton-Jacobi-Isaacs equation and when used in nonlinear optimal control it is often referred to as the Hamilton-Jacobi-Bellman equation. In [9] it is referred to as the Hamilton-Jacobi-Bellman-Isaacs equation. We will simply call it the HJE.

It is shown in [9] that the controller that solves the suboptimal problem for the plant (i.e., renders ) is related to that (call it ) which solves the same problem for the modified plant as follows: . We now construct this . Denote by the unique smooth solution to the HJE (12) that satisfies and , with asymptotically stable. Similarly, we denote by the unique smooth solution to the HJE that satisfies and , with asymptotically stable.

Following the approach of James et al. [9], a local solution to the nonlinear control problem for the plant in (4) is obtained if the following conditions are satisfied.
(1) There exists a positive-definite function defined in a neighbourhood of the origin with that satisfies the HJE (12).
(2) There exists a negative-definite function defined in a neighbourhood of the origin with that satisfies the HJE (12), and additionally satisfies where
(3) There exists a function such that in a neighbourhood of the origin.

The resulting nonlinear controller is given by

In the next subsection, we shall examine how to obtain analytical expressions for the approximate solution to the HJE in (12) required for this control law.

2.4. Analytical Solutions to HJE Approximation

Our approach for the synthesis of nonlinear controllers is based on the Taylor series approximation of the solution to the Hamilton-Jacobi equation, where each order of the controller is built using the previous orders. The following notation will be used: denotes a row matrix with index , denotes a column matrix with index , and denotes a matrix with row index and column index . It should be emphasized that the symbols , , and used here are dummy indices. In general, refers to the th entry of the matrix . We will denote the positive-semidefinite solution of the HJE (12) simply by .

Consider the nonlinear system in (1). We begin by making the assumptions that and , where and are constant matrices. From these assumptions, is a constant matrix, which we will denote simply by , and is a quadratic form, which we will write as , where . Thus, the only nonlinearities present are in the system matrix. For the purpose of the results to be presented, this nonlinear function will be approximated to fourth order. It should be noted that some of these terms may be zero, depending on the system considered. Therefore, we have the following approximation: where and the summations run from 0 to . Here, , , , and are families of square matrices. We shall also find it useful to define the column matrices and whose entries can be used to form and , respectively.

Additionally, consider a storage function for which where Here, , , , and are families of square matrices. In general, refers to the th entry of the matrix . We shall also find it useful to define the column matrices and whose entries can be used to form and , respectively.

It is important to recognize that, while may be zero for some , the corresponding is not necessarily zero as well. This is because the present method involves a Taylor series expansion of the nonlinear solution to the HJE. Substituting (21) and (23) into the HJE in (12) and grouping terms of the same order yields for , where with and for . Thus, at each order , the objective is to solve for the unknown . Note that in (26) the summation term is equal to zero for , since .

We now present the general expressions to compute the unknowns in (23). The first-order solution, , is obtained by solving the ARE corresponding to (25), which is given by where it has been noted that and . We will assume that is controllable and () is observable so that is positive definite. Then, the higher-order solutions are obtained by solving (26) recursively for increasing values of .
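As an illustration of this first step, the stabilizing ARE solution can be extracted from the stable invariant subspace of the associated Hamiltonian matrix. The parameter names A, B1 (disturbance input), B2 (control input), and C1 (penalty output) below are generic placeholders, not matrices taken from the paper, and the sign convention of the indefinite quadratic term follows the suboptimal problem described above:

```python
import numpy as np

def hinf_are(A, B1, B2, C1, gamma):
    """Stabilizing solution P of
        A'P + PA + P(gamma^-2 B1 B1' - B2 B2')P + C1'C1 = 0,
    computed from the stable invariant subspace of the Hamiltonian matrix."""
    n = A.shape[0]
    S = (B1 @ B1.T) / gamma**2 - B2 @ B2.T
    Q = C1.T @ C1
    H = np.block([[A, S], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]           # the n eigenvectors with Re(lambda) < 0
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))
    return 0.5 * (P + P.T)              # symmetrize against round-off
```

The negative-definite solution used later for the dual HJE can be obtained analogously from the anti-stable subspace.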

The second-order solution is given by where the column matrices and were defined above. We emphasize that the multiplications indicated in (28) are standard matrix multiplications.

The third-order solution is given by where . Here, consists of a square matrix (with row index and column index ) containing the th entries of each for given and . Again, all of the indicated multiplications are standard matrix multiplications.

The fourth-order solution is given by where . Similar comments on the multiplications involved above apply here.

The negative-semidefinite solution of the HJE, , can be determined using (23)–(30) with the proviso that the matrix is replaced with the negative-definite solution of the Riccati equation in (27) which we will denote by and is replaced by . The matrices corresponding to those in (24) will be denoted by , , , and and they are obtained by solving (27)–(30) with replacing , replacing , and so forth. In the next section, the satellite attitude dynamics are presented.

3. Attitude Dynamics and Control

3.1. The Attitude Dynamics and Kinematics

The attitude dynamics of a rigid spacecraft are given by Euler's equation: where are the body angular velocities, is the moment of inertia matrix, and are the body torques. The notation is the matrix representation of the cross product and is defined as

While many representations are possible for the spacecraft attitude kinematics, the modified Rodrigues parameters (MRPs) are chosen here because they are polynomial in the states, which fits nicely with the present controller synthesis approach, and because they possess neither singularities nor norm constraints when used in conjunction with the shadow parameters. The MRP vector can be defined in terms of the principal rotation axis and principal rotation angle of Euler's theorem according to The attitude kinematics using MRPs are defined by where is the identity matrix.
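Euler's equation and the MRP kinematics can be coded directly. A minimal sketch, with the kinematics matrix written in the standard form of Schaub and Junkins [34]:

```python
import numpy as np

def skew(v):
    """Matrix representation of the cross product: skew(a) @ b == a x b."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def attitude_rates(sigma, omega, J, torque):
    """Right-hand sides of Euler's equation and the MRP kinematics."""
    # J * omega_dot = -omega x (J omega) + torque
    omega_dot = np.linalg.solve(J, torque - skew(omega) @ (J @ omega))
    # sigma_dot = (1/4) [ (1 - sigma'sigma) I + 2 [sigma x] + 2 sigma sigma' ] omega
    B = ((1.0 - sigma @ sigma) * np.eye(3)
         + 2.0 * skew(sigma) + 2.0 * np.outer(sigma, sigma))
    sigma_dot = 0.25 * B @ omega
    return sigma_dot, omega_dot
```

Both right-hand sides are polynomial in the state, which is exactly the property exploited by the series expansion of Section 2.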

Upon closer inspection of (33), it is seen that the MRPs encounter a singularity for rotations of ±2π rad. This corresponds to a complete rotation in either direction about the principal axis. To circumvent this, another set of MRPs, called the shadow parameters and denoted by , is used in conjunction with the regular MRPs. By switching from one set to the other at rotations of ±π rad, it is possible to avoid any singularities. The parameter switching occurs on the surface defined by The kinematics are identical for both the regular and the shadow parameters. However, when the switching surface is encountered, both the MRPs and their rates must be converted from one set to the other. This can be accomplished with the following relations: More details can be found in the text by Schaub and Junkins [34].
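A sketch of the conversion, assuming the standard shadow map σ_S = −σ/(σᵀσ) from [34] and the rate relation obtained by differentiating it:

```python
import numpy as np

def to_shadow(sigma, sigma_dot):
    """Map an MRP set and its rate to the shadow set sigma_S = -sigma/(sigma'sigma).

    Typically applied when sigma'sigma crosses 1, i.e. at a principal rotation
    of pi rad; the rate mapping is the exact differential of the shadow map.
    """
    s = sigma @ sigma
    sigma_s = -sigma / s
    sigma_s_dot = -sigma_dot / s + 2.0 * (sigma @ sigma_dot) * sigma / s**2
    return sigma_s, sigma_s_dot
```

Because the shadow map is its own inverse, applying the conversion twice recovers the original parameters and rates, which makes a convenient self-check.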

Defining the state vector and grouping terms of the same order, the attitude dynamics in (31) and kinematics in (34) can be expressed as where and the second- and third-order terms are given by respectively. It should be noted that these third-order attitude dynamics are exact so that we may take and hence .

3.2. The Attitude Controller

In this section, the proposed controller synthesis methods will be applied to the satellite attitude dynamics given by (37)–(39). For simplicity, we shall assume that the dynamics are formulated in principal axes so that the inertia matrix is given by . Let us begin by comparing the definitions of and in (22) with the specific ones given in (39). From this, we identify the nonzero elements of the matrices and as follows: where , and . In addition,

Given the definitions of , , and in (38), the positive-definite solution of the algebraic Riccati equation in (27) is easily determined. The nonzero elements are given by The negative-definite solution (nonzero elements) is given by The corresponding closed-loop matrices and are readily determined:

Using the above quantities, the entries in (via ), , and can be calculated using (28), (29), and (30). The same equations can be used to determine (via ), , and with replaced with and replaced with . The dynamic controller in (20) can be made specific to the attitude control problem: where is determined using (22) in conjunction with (40) and (41) and is determined using (24) and the solutions in (27)–(30). The observer gain which is defined by (19) can be determined as follows. Since, , we have used (24): Using this in (19) yields the following expression for the observer gain: The condition in (17) needs to be satisfied in a region of the origin. Using the lowest order terms in the expansions for , , , and , the Hessian matrix defined by (17) is given by which must be negative definite. This condition limits the chosen value of . In the sequel, we shall refer to the controller in (45) as the controller of order , where is the order of the approximation adopted for , , , and in determining .

4. Numerical Example and Comparisons

In this section, the nonlinear controller presented above is compared through numerical simulations with existing methods from the literature for spacecraft attitude regulation. The purpose of these comparisons is to examine the effects of different disturbances and uncertainties on the performance and robustness of the various controllers. In particular, we include gravity-gradient and geomagnetic torques, as well as unmodeled actuator dynamics and actuation time delays. In addition to these comparisons, we make use of the gap metric [28] to characterize the difference in the input-output (IO) map of the system induced by the unmodeled actuator dynamics.

The simulation parameters are as follows. The satellite is in a circular orbit with an inclination of degrees and a longitude of the ascending node of . The initial value of the argument of latitude is zero. The altitude is  km and we take  km for the Earth's radius. These orbital parameters will be used to determine the gravity-gradient and geomagnetic disturbance torques acting on the spacecraft. The spacecraft position in the geocentric inertial frame, , is determined using a simple Keplerian model. The gravity-gradient torque is then given by , where is the geocentric gravitational parameter, , and , where the rotation matrix relating the body-fixed frame to the inertial frame may be expressed in terms of the MRPs as .
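A sketch of this disturbance model, assuming the standard gravity-gradient expression τ_gg = (3μ/Rc³) r̂_b × (J r̂_b) and the usual MRP-to-rotation-matrix formula from Schaub and Junkins [34]; the numerical value of μ below is a generic constant, not one quoted in this paper:

```python
import numpy as np

MU_EARTH = 3.986e14  # geocentric gravitational parameter, m^3/s^2 (assumed value)

def mrp_to_dcm(sigma):
    """Rotation matrix C_bi from the inertial to the body frame in terms of the MRPs."""
    s2 = sigma @ sigma
    sx = np.array([[0.0, -sigma[2], sigma[1]],
                   [sigma[2], 0.0, -sigma[0]],
                   [-sigma[1], sigma[0], 0.0]])
    return np.eye(3) + (8.0 * sx @ sx - 4.0 * (1.0 - s2) * sx) / (1.0 + s2) ** 2

def gravity_gradient_torque(r_inertial, sigma, J, mu=MU_EARTH):
    """tau_gg = (3 mu / Rc^3) * r_hat_b x (J r_hat_b), with the spacecraft
    position rotated into the body frame."""
    r_b = mrp_to_dcm(sigma) @ np.asarray(r_inertial, dtype=float)
    rc = np.linalg.norm(r_b)
    r_hat = r_b / rc
    return (3.0 * mu / rc ** 3) * np.cross(r_hat, J @ r_hat)
```

Note that the torque vanishes when a principal axis points along the local vertical, as expected of the gravity-gradient effect.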

For the purposes of the geomagnetic disturbance torque, the satellite is assumed to generate a magnetic dipole of Am2. The magnetic field model is the tilted dipole model, , presented in [35], where are the geomagnetic field components expressed in the geocentric inertial frame. The geomagnetic disturbance torque is given by . The satellite inertia matrix is given by kgm2. In all comparisons, we consider the regulation problem only, hence and the input to the controller is . Hence, we assume perfect measurements of the state.
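The dipole disturbance itself is a single cross product; a minimal sketch, where the field vector produced by the tilted-dipole model is taken as an input in the inertial frame:

```python
import numpy as np

def magnetic_torque(m_body, B_inertial, C_bi):
    """Residual-dipole disturbance torque tau = m x (C_bi B_i): the geomagnetic
    field, supplied in the inertial frame, is rotated into the body frame."""
    return np.cross(np.asarray(m_body, dtype=float),
                    np.asarray(C_bi, dtype=float) @ np.asarray(B_inertial, dtype=float))
```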

In the case of the disturbance rejection comparisons, the simulations start from the desired attitude and we compare the ability of the different controllers to maintain that attitude. For all other comparisons, the initial states are chosen from Schaub et al. [2] to be rad/s and . These initial conditions have the satellite oriented almost rad from the desired attitude with large angular velocities moving it towards this upside-down attitude. All simulations will be performed for one orbit using a 4th-order Runge-Kutta numerical integration method with a step-size  s. The controllers are designed with , which was chosen to satisfy the linear version of the condition in (17); that is, such that the matrix in (48) is negative definite.
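One step of the classical 4th-order Runge-Kutta scheme used for these simulations can be written generically as:

```python
def rk4_step(f, t, x, h):
    """One step of the classical 4th-order Runge-Kutta method for x' = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + 0.5 * h, x + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, x + 0.5 * h * k2)
    k4 = f(t + h, x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

Here `f` would be the combined attitude dynamics, kinematics, and controller state derivatives, integrated over one orbit with a fixed step size.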

The methods to be used, in addition to the controllers of the previous section, are the linear and nonlinear proportional-derivative (PD) laws of Tsiotras [1], the open-loop (OL) optimal control method by Schaub et al. [2], the closed-loop (CL) optimal nonlinear method of Tewari [3], and the sum of squares (SOS) approach of Gollu and Rodrigues [4]. In order to gauge the performance and properly compare the different controllers, we will use the following metrics: where , is the orbital period, is the RMS tracking error, and is the RMS control torque (note that are the control torques).
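Assuming the metrics are root-mean-square values of the corresponding vector norms taken over one orbit, they can be approximated from uniformly sampled time histories; `hist` below is a hypothetical N-by-3 array of logged MRPs or control torques:

```python
import numpy as np

def rms_norm(hist):
    """Discrete approximation of sqrt((1/T) * integral ||v(t)||^2 dt) for a
    uniformly sampled N-by-3 time history."""
    hist = np.asarray(hist, dtype=float)
    return float(np.sqrt(np.mean(np.sum(hist ** 2, axis=1))))
```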

4.1. Existing Control Methods

We now provide a brief overview of the controller synthesis methods to be used for comparisons with our method. The interested reader is referred to the appropriate literature for a more detailed exposition of these methods.

4.1.1. Proportional-Derivative Controllers

Tsiotras [1] developed two proportional-derivative (PD) control laws for the attitude control problem. The linear PD controller is given by while the nonlinear version is given by

4.1.2. Open-Loop Optimal Control

The optimal control method presented by Schaub et al. [2] is designed to minimize the cost function where and are scalar weights, , , and are weighting matrices, and The Hamiltonian relating to this optimal control problem is defined as where we have used the plant model in (1) and the second relation of (3) with . The costates, denoted by , have dynamics Note that the costates are specified at some final time , not the initial time. The optimal control law for this problem is determined from which yields The primary disadvantage of this method from a practical point of view is that it requires the solution of a two-point boundary value problem (TPBVP) and results in an open-loop control strategy. For the simulation results presented below, the following weighting parameters are used: , ,  , , and . Moreover, the maneuver is optimized for a final time  s.

4.1.3. Closed-Loop Optimal Controller

The optimal control method presented by Tewari [3] is based on obtaining an exact analytical solution to the Hamilton-Jacobi equation. Consider the HJE in (12) with parameters where the inertia matrix is defined as , is symmetric and positive definite, is symmetric and positive semidefinite, and . It is assumed that has the same form as ; thus, where is symmetric and positive definite. The state feedback controller is given by where

The matrix is obtained as follows. First, is calculated from the algebraic Riccati equation in (27) corresponding to the parameters in (58), that is, with in (27) replaced by and in (27) replaced by . Then, the equations in (26) with , are solved simultaneously for the remaining unknowns. In particular, the nonzero elements of the matrices and are where, as before, , and .

For the simulation results presented below, the following weighting parameters are used [3]:

We now make a few remarks regarding the characteristics of the synthesis method of Tewari [3]. Like the controller presented in this paper, Tewari's closed-loop optimal method results in a polynomial feedback controller. However, his solution is derived specifically for the attitude control problem. In contrast, the method developed in the present paper is applicable to a wider class of systems.

4.1.4. Sum of Squares Controller

A multivariate polynomial is a sum of squares (SOS) if there exist some polynomials , , such that The SOS controller synthesis approach relaxes the search for positive- or negative-definite functions to a search for SOS functions. It should be noted, however, that the use of sums of squares is conservative, since being SOS implies that , while the converse is not true in general. Also, it can be shown that if is SOS for , then for all [36].
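The Gram-matrix view underlying this relaxation is that a polynomial is SOS if and only if it can be written as zᵀQz for some positive-semidefinite matrix Q over a monomial basis z, which is a semidefinite feasibility problem. A sketch using a classic example due to Parrilo (not a polynomial from this paper):

```python
import numpy as np

def p(x, y):
    # Classic SOS example: 2x^4 + 2x^3 y - x^2 y^2 + 5y^4 (known to be SOS).
    return 2.0 * x**4 + 2.0 * x**3 * y - x**2 * y**2 + 5.0 * y**4

def gram(lam):
    """Gram matrix of p in the monomial basis z = (x^2, y^2, x*y).

    The free parameter lam reflects the identity (x^2)(y^2) = (x*y)^2; the SDP
    searches over such parameters for a positive-semidefinite representative.
    """
    return np.array([[2.0, -lam, 1.0],
                     [-lam, 5.0, 0.0],
                     [1.0, 0.0, 2.0 * lam - 1.0]])
```

Every choice of `lam` reproduces p exactly; the SOS certificate is any choice (here lam = 3) for which the Gram matrix is positive semidefinite.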

In applying the SOS controller synthesis method, we first rewrite the system in (1) as where we have used the second relation of (3) with . Note that the matrix is not unique. The SOS state feedback controller for this problem is given by [4] where we note that is now taken to be constant (state independent).

Consider the Lyapunov function Taking the time derivative of this function along the trajectories of the system in (65) with the controller of (66) yields Using the change of variables , this last expression can be written as The conditions and can be replaced by the conditions that   is positive definite and is SOS. The second of these conditions will be strengthened to being SOS, where is some SOS function. This semidefinite programming (SDP) problem can then be written as follows: This optimization problem can be solved using the SOSTOOLS software package [37].

4.2. Disturbance Rejection

We begin by comparing the controllers presented in this paper with the methods from the literature with regard to disturbance rejection. The two disturbances considered here are the gravity-gradient and geomagnetic torques. For the purposes of this comparison, the simulations start from the desired attitude (i.e., ) and we compare the ability of the different controllers to maintain that attitude over one complete orbit. Table 1 presents values of the performance metrics for the different controllers.

Table 1: Controller disturbance rejection.

The optimal control method of Schaub et al. [2] is not included in the table because it is unable to reject any disturbances, which is entirely due to its open-loop nature. There is no apparent difference in performance between the linear and nonlinear PD controllers. There is also very little difference between any of the nonlinear controllers developed using the present method. This is due to the very small magnitude of these disturbance torques, which the linear term in the controller can effectively overcome. With regard to disturbance rejection, the present controllers do not perform as well as the existing methods.

4.3. Response to Initial Conditions

In this and the following subsections, the disturbance torques are set to zero and the nonzero initial conditions noted at the beginning of this section are applied. All of the resulting controllers described previously are stable.

The resulting attitude and control torques when the fourth-order controller is applied are given in Figures 3 and 4. The MRP switching can clearly be seen in Figure 3, where we also note that it requires several rotations for the controller to sufficiently slow down the spacecraft. This is due to the small torques applied, as seen in Figure 4. It is important to note that the control torques are continuous, which is a result of the dynamic aspect of the controller developed in this paper. Once again, there is little to distinguish the behaviour of the controllers of varying order of approximation. This will be discussed further after the robustness of these controllers has been assessed.

Figure 3: MRP trajectories with the 4th-order controller.
Figure 4: Control torques with the 4th-order controller.
4.4. Robustness to Actuation Time Delay

The robustness properties of the different controllers are now examined with regard to a time delay in the actuation. Such a delay could represent the finite time required by a satellite on-board computer to take the sensor measurements and calculate the required control signal. The time delay is made equal to an integer number of the numerical integration step-size . Table 2 indicates the maximum allowable time delay, , for each controller such that the desired attitude maneuver is achieved within one orbit. As can be seen from these results, the four controllers are more robust with regard to this effect than the other control methods. In particular, the present fourth-order controller is the most robust. We also note that the open-loop method of Schaub et al. [2] has no robustness to this effect; the time delay results in a nonzero final angular velocity such that the system will rotate endlessly.
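A pure delay of an integer number of integration steps, as used in this comparison, can be simulated with a first-in, first-out buffer of commanded torques; a minimal sketch:

```python
from collections import deque

def make_delay(n_steps, u0=0.0):
    """Pure actuation delay of n_steps integration steps: the torque applied at
    step k is the command that was issued at step k - n_steps (u0 before that)."""
    buf = deque([u0] * n_steps)

    def delay(u):
        if not buf:          # n_steps == 0: no delay
            return u
        buf.append(u)
        return buf.popleft()

    return delay
```

During simulation the controller output is passed through `delay` before entering the dynamics, so the maximum tolerable `n_steps` can be found by sweeping it upward until the maneuver fails.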

Table 2: Robustness to actuation time delay.
4.5. Robustness to Unmodeled Actuator Dynamics

The robustness properties of the different controllers are now examined with regard to unmodeled actuator dynamics. For the purposes of this study, we make use of first- and second-order actuator models. In practice the actuator dynamics may be far more complex than the ones used here; however, these simple models suffice for studying the relative capabilities of the different control methods. Evidently, as the actuator bandwidth decreases, it becomes harder for the controller to stabilize the system. Thus, we are able to infer the relative robustness properties of the different controllers by examining Figures 5–8. In particular, the farther a line reaches towards the left-hand side of the graph, the more robust that controller is with regard to the unmodeled actuator dynamics. As we shall see, the present controllers are always more robust than the other methods.

Figure 5: RMS tracking error with respect to 1st-order actuator bandwidth.
Figure 6: RMS control effort with respect to 1st-order actuator bandwidth.
Figure 7: RMS tracking error with respect to 2nd-order actuator bandwidth.
Figure 8: RMS control effort with respect to 2nd-order actuator bandwidth.
4.5.1. Unmodeled 1st-Order Actuator Dynamics

We begin by examining the robustness of the controllers with regard to unmodeled first-order actuator dynamics. This is accomplished by including a first-order lag, parameterized by the actuator bandwidth, in each component of the controller output. Figures 5 and 6 show the RMS tracking error and RMS control effort, respectively, as a function of the actuator bandwidth. It is noted from these figures that all four of the present controllers provide nearly the same tracking error and control effort. Moreover, while the closed-loop optimal control law provides a lower tracking error than the present controllers, the trade-off is that it requires more control effort. Similarly, the SOS controller yields very low tracking error at the expense of greater control effort. The two PD laws, on the other hand, have a higher tracking error and control effort than the other methods.
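As a concrete illustration of such a first-order lag (the symbol `w_a` for the bandwidth and the exact-discretization form are our own choices), each controller output component can be filtered per integration step as follows:

```python
import numpy as np

def first_order_actuator_step(u_a, u_cmd, w_a, h):
    """Advance the actuator state of  du_a/dt = w_a * (u_cmd - u_a)  by one
    step h, using the exact solution for a command held constant over the step."""
    return u_cmd + (u_a - u_cmd) * np.exp(-w_a * h)

# Example: the actuator output lags a unit step command and converges to it.
u = 0.0
for _ in range(1000):                      # 10 s at h = 0.01 with w_a = 2 rad/s
    u = first_order_actuator_step(u, 1.0, w_a=2.0, h=0.01)
```

As the bandwidth `w_a` shrinks, the filtered torque lags the commanded torque more severely, which is the effect being swept in Figures 5 and 6.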

4.5.2. Unmodeled 2nd-Order Actuator Dynamics

The robustness properties of the different controllers are now examined with regard to unmodeled second-order actuator dynamics. This is accomplished by including a second-order model, parameterized by the actuator damping ratio and bandwidth, in each component of the controller output. All simulations were performed with a fixed damping ratio. Figures 7 and 8 show the RMS tracking error and RMS control effort, respectively, as a function of the actuator bandwidth. It is seen from these figures that the various control methods follow the same trends as in the case of the first-order actuator dynamics.
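A minimal sketch of such a second-order actuator stage, with natural frequency `w_n`, damping ratio `zeta`, and an RK4 step all chosen by us for illustration:

```python
import numpy as np

def second_order_actuator_step(x, u_cmd, w_n, zeta, h):
    """One RK4 step of the actuator model
        u'' + 2*zeta*w_n*u' + w_n**2 * u = w_n**2 * u_cmd,
    with state x = [u, u_dot]."""
    def f(x):
        u, ud = x
        return np.array([ud, w_n**2 * (u_cmd - u) - 2.0 * zeta * w_n * ud])
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Example: the step response settles on the commanded value for a well-damped actuator.
x = np.zeros(2)
for _ in range(2000):                      # 20 s at h = 0.01
    x = second_order_actuator_step(x, 1.0, w_n=5.0, zeta=0.7, h=0.01)
```

Lowering `w_n` slows this response, reproducing the bandwidth sweep of Figures 7 and 8.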

4.6. Robustness in the Gap Metric (Revisited)

We now make use of the gap metric [28] to characterize the difference in the input-output (IO) map of the system induced by the unmodeled actuator dynamics. Since we cannot calculate the gap between two nonlinear systems, we calculate the gap metric distance for the linearized system only. As the actuator bandwidth and damping ratio change, the value of the gap metric also varies. Figure 9 shows the gap metric value with respect to the first-order actuator bandwidth, and Figure 10 shows it with respect to the second-order actuator bandwidth for the damping ratio used in the numerical simulations above. As the damping ratio varies, the curve in Figure 10 shifts slightly to the left or right, although there does not appear to be any discernible trend. Moreover, as the actuator bandwidth approaches infinity, the effect of the actuator on the controller input-output map becomes negligible. This is to be expected and is seen in both Figures 9 and 10, where the gap metric approaches zero with increasing actuator bandwidth.

Figure 9: Gap metric with respect to 1st-order actuator bandwidth.
Figure 10: Gap metric with respect to 2nd-order actuator bandwidth.
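For SISO transfer functions, a closely related quantity (the ν-gap of Vinnicombe, which equals the supremum of the pointwise chordal distance when a winding-number condition holds) can be estimated on a frequency grid. The double-integrator plant and first-order actuator below are illustrative stand-ins, not the paper's linearized attitude dynamics; the computed distances nonetheless shrink with growing actuator bandwidth, as in Figure 9:

```python
import numpy as np

def chordal_gap(P1, P2, w_grid):
    """Sup over the grid of the pointwise chordal distance between two SISO
    frequency responses P1(s), P2(s) evaluated at s = j*w.  For SISO plants
    this equals the nu-gap when the winding-number condition is satisfied."""
    s = 1j * w_grid
    g1, g2 = P1(s), P2(s)
    kappa = np.abs(g1 - g2) / (np.sqrt(1.0 + np.abs(g1)**2) *
                               np.sqrt(1.0 + np.abs(g2)**2))
    return kappa.max()

# Illustrative example: nominal double integrator vs. the same plant behind a
# first-order actuator of bandwidth w_a.
w = np.logspace(-3, 3, 2000)
P = lambda s: 1.0 / s**2
def with_actuator(w_a):
    return lambda s: (w_a / (s + w_a)) / s**2

gaps = [chordal_gap(P, with_actuator(w_a), w) for w_a in (1.0, 10.0, 100.0)]
```

The monotone decrease of `gaps` with `w_a` reflects the vanishing influence of a fast actuator on the IO map.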

We now return for a moment to the question of controller robustness in the gap metric. From (9) we have a small-gain-type stability criterion. In the numerical simulations, it was determined that the closed-loop system remains stable in the presence of the unmodeled second-order actuator down to a certain minimum bandwidth. With this actuator bandwidth, the calculated gap between the linear attitude dynamics with and without the actuator does not satisfy the condition of (73). This could be explained by the conservativeness of the small-gain criterion, which is a sufficient (but not necessary) condition for stability. It can also be attributed to the nonlinear effects of the dynamics that are not taken into account in the gap calculation. This emphasizes the need for a method to calculate the gap metric for nonlinear systems.
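A small-gain-type criterion of this kind compares the gap against a generalized stability margin of the nominal closed loop: stability is guaranteed whenever the gap induced by the unmodeled dynamics stays below that margin. For a SISO loop the margin can be estimated on a frequency grid, as in this sketch (the double-integrator plant and PD compensator are illustrative choices, not the paper's attitude dynamics):

```python
import numpy as np

def stability_margin(P, C, w_grid):
    """Estimate the generalized stability margin b_{P,C} of a SISO loop with a
    stabilizing compensator C:
        b = 1 / sup_w [ sqrt(1+|P|^2) * sqrt(1+|C|^2) / |1 + P*C| ].
    By Cauchy-Schwarz the bracketed term is >= 1, so b lies in (0, 1]."""
    s = 1j * w_grid
    p, c = P(s), C(s)
    T = (np.sqrt(1.0 + np.abs(p)**2) * np.sqrt(1.0 + np.abs(c)**2)
         / np.abs(1.0 + p * c))
    return 1.0 / T.max()

# Illustrative example: double integrator stabilized by a PD compensator 1 + s.
w = np.logspace(-3, 3, 2000)
b = stability_margin(lambda s: 1.0 / s**2, lambda s: 1.0 + s, w)
```

A computed gap exceeding `b` does not prove instability, which is exactly the conservativeness noted above.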

4.7. Overall Assessment of the Controllers

The previous sections show that the methodology presented here can be used to develop very robust attitude controllers. However, their performance was not as good as that of some of the other techniques considered. We have not demonstrated significant benefits for the higher-order terms in the controller; while disappointing, this is itself an important contribution to the literature. The fact that a linear controller can perform very closely to the higher-order ones is a strong vindication of the most popular approach used for actual attitude control: linear feedback of angular velocity and attitude information. This is consistent with [38], which showed that a linear combination of angular velocity and quaternion (Euler parameter) feedback solves the state feedback nonlinear (suboptimal) H∞ control problem for rigid spacecraft attitude control. We strongly suspect that results analogous to [38] can be obtained for the case of angular velocity and MRP feedback.
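As a rough illustration of that linear feedback structure (the inertia matrix, gains, and simple Euler integration below are our own illustrative choices, not the paper's simulation setup), the MRP kinematics with the shadow-set switch and a PD-type law can be simulated as:

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def mrp_rate(sigma, omega):
    """MRP kinematics: sigma_dot = 0.25 * B(sigma) @ omega."""
    s2 = sigma @ sigma
    B = (1.0 - s2) * np.eye(3) + 2.0 * skew(sigma) + 2.0 * np.outer(sigma, sigma)
    return 0.25 * B @ omega

def simulate_pd(J, kp, kd, sigma0, omega0, h=0.01, T=60.0):
    """Rigid-body attitude regulation with the linear law u = -kp*sigma - kd*omega,
    Euler integration, and the MRP shadow-set switch at sigma'sigma = 1."""
    sigma, omega = sigma0.astype(float), omega0.astype(float)
    for _ in range(int(T / h)):
        u = -kp * sigma - kd * omega
        sigma = sigma + h * mrp_rate(sigma, omega)
        # Euler's equation: J*omega_dot = -omega x (J*omega) + u
        omega = omega + h * np.linalg.solve(J, u - np.cross(omega, J @ omega))
        s2 = sigma @ sigma
        if s2 > 1.0:                        # switch to the shadow MRP set
            sigma = -sigma / s2
    return sigma, omega

sigma, omega = simulate_pd(np.diag([10.0, 15.0, 20.0]), 40.0, 40.0,
                           np.array([0.3, -0.2, 0.1]), np.zeros(3))
```

With any positive-definite gain pair this simple law drives both the attitude error and the angular velocity to the origin, which is the behavior the higher-order controllers only marginally improve upon.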

5. Conclusions

The results presented here clearly show the trade-off between performance and robustness. Existing methods from the attitude control literature typically emphasize performance; in contrast, the method developed in this paper emphasizes robustness. In particular, while the methods from the literature are better at disturbance rejection, the new nonlinear controllers have better overall robustness properties. In general, there is still a need to characterize the trade-off between these two properties. There also remains much room for improvement and many areas to be explored. Topics for future work include analytical solutions to higher-order terms in the approximation and the effect of varying the design parameters on the closed-loop response. The need for methods to calculate the gap metric between nonlinear systems was also motivated.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by the Natural Sciences and Engineering Research Council of Canada [Application no. 121947-2010].

References

  1. P. Tsiotras, “Stabilization and optimality results for the attitude control problem,” Journal of Guidance, Control, and Dynamics, vol. 19, no. 4, pp. 772–779, 1996.
  2. H. Schaub, J. L. Junkins, and R. D. Robinett, “New penalty functions and optimal control formulation for spacecraft attitude control problems,” Journal of Guidance, Control, and Dynamics, vol. 20, no. 3, pp. 428–434, 1997.
  3. A. Tewari, “Optimal nonlinear spacecraft attitude control through Hamilton-Jacobi formulation,” Journal of the Astronautical Sciences, vol. 50, no. 1, pp. 99–112, 2002.
  4. N. Gollu and L. Rodrigues, “Control of large angle attitude maneuvers for rigid bodies using sum of squares,” in Proceedings of the American Control Conference (ACC '07), pp. 3156–3161, New York, NY, USA, July 2007.
  5. A. P. Sage and C. C. White III, Optimum Systems Control, Prentice-Hall, Englewood Cliffs, NJ, USA, 2nd edition, 1977.
  6. A. J. van der Schaft, “On a state space approach to nonlinear H∞ control,” Systems & Control Letters, vol. 16, no. 1, pp. 1–8, 1991.
  7. A. J. van der Schaft, “L2-gain analysis of nonlinear systems and nonlinear state-feedback H∞ control,” IEEE Transactions on Automatic Control, vol. 37, no. 6, pp. 770–784, 1992.
  8. J. A. Ball, J. W. Helton, and M. L. Walker, “H∞ control for nonlinear systems with output feedback,” IEEE Transactions on Automatic Control, vol. 38, no. 4, pp. 546–559, 1993.
  9. M. R. James, M. C. Smith, and G. Vinnicombe, “Gap metrics, representations, and nonlinear robust stability,” SIAM Journal on Control and Optimization, vol. 43, no. 5, pp. 1535–1582, 2005.
  10. J. C. Willems, “Dissipative dynamical systems part I: general theory,” Archive for Rational Mechanics and Analysis, vol. 45, no. 5, pp. 321–351, 1972.
  11. J. C. Willems, “Dissipative dynamical systems part II: linear systems with quadratic supply rates,” Archive for Rational Mechanics and Analysis, vol. 45, no. 5, pp. 352–393, 1972.
  12. D. Hill and P. Moylan, “The stability of nonlinear dissipative systems,” IEEE Transactions on Automatic Control, vol. AC-21, no. 5, pp. 708–711, 1976.
  13. E. G. Al'brekht, “On the optimal stabilization of nonlinear systems,” Prikladnaya Matematika i Mekhanika, vol. 25, no. 5, pp. 836–844, 1961.
  14. D. L. Lukes, “Optimal regulation of nonlinear dynamical systems,” SIAM Journal on Control, vol. 7, no. 1, pp. 75–100, 1969.
  15. W. L. Garrard, “Suboptimal feedback control for nonlinear systems,” Automatica, vol. 8, no. 2, pp. 219–221, 1972.
  16. W. L. Garrard and J. M. Jordan, “Design of nonlinear automatic flight control systems,” Automatica, vol. 13, no. 5, pp. 497–505, 1977.
  17. J. Huang and C.-F. Lin, “Numerical approach to computing nonlinear H∞ control laws,” Journal of Guidance, Control, and Dynamics, vol. 18, no. 5, pp. 989–994, 1995.
  18. S. R. Vadali and R. Sharma, “Optimal finite-time feedback controllers for nonlinear systems with terminal constraints,” Journal of Guidance, Control, and Dynamics, vol. 29, no. 4, pp. 921–928, 2006.
  19. R. Sharma, S. R. Vadali, and J. E. Hurtado, “Optimal nonlinear feedback control design using a waypoint method,” Journal of Guidance, Control, and Dynamics, vol. 34, no. 3, pp. 698–705, 2011.
  20. A. J. van der Schaft, “Relations between H∞ optimal control of a nonlinear system and its linearization,” in Proceedings of the 30th IEEE Conference on Decision and Control, pp. 1807–1808, Brighton, UK, December 1991.
  21. S. G. Margolis and W. G. Vogt, “Control engineering applications of V. I. Zubov's construction procedure for Lyapunov functions,” IEEE Transactions on Automatic Control, vol. 8, no. 2, pp. 104–113, 1963.
  22. J. R. Hewit and C. Storey, “Optimization of the Zubov and Ingwerson methods for constructing Lyapunov functions,” Electronics Letters, vol. 3, no. 5, pp. 211–213, 1967.
  23. J. R. Cloutier, “State-dependent Riccati equation techniques: an overview,” in Proceedings of the American Control Conference, pp. 932–936, Albuquerque, NM, USA, June 1997.
  24. J. R. Cloutier and D. T. Stansbery, “The capabilities and art of state-dependent Riccati equation-based design,” in Proceedings of the American Control Conference, pp. 86–91, Anchorage, Alaska, USA, May 2002.
  25. R. Beard, G. Saridis, and J. Wen, “Improving the performance of stabilizing controls for nonlinear systems,” IEEE Control Systems Magazine, vol. 16, no. 5, pp. 27–35, 1996.
  26. R. W. Beard, T. W. McLain, and J. T. Wen, “Successive Galerkin approximation of the Isaacs equation,” in Proceedings of the IFAC World Congress, Beijing, China, 1999.
  27. N. Sakamoto and A. J. van der Schaft, “Analytical approximation methods for the stabilizing solution of the Hamilton-Jacobi equation,” IEEE Transactions on Automatic Control, vol. 53, no. 10, pp. 2335–2350, 2008.
  28. A. K. El-Sakkary, “The gap metric: robustness of stabilization of feedback systems,” IEEE Transactions on Automatic Control, vol. AC-30, no. 3, pp. 240–247, 1985.
  29. S. LeBel and C. Damaren, “Analytical solutions to approximations of the Hamilton-Jacobi equation applied to satellite formation flying,” in Proceedings of the AIAA Guidance, Navigation, and Control Conference and Exhibit, pp. 1–11, Chicago, Ill, USA, August 2009.
  30. S. LeBel and C. J. Damaren, “A nonlinear robust control method for the spacecraft attitude problem,” in Proceedings of the 15th CASI Conference on Astronautics (ASTRO '10), pp. 1–6, Toronto, Canada, May 2010.
  31. S. LeBel and C. Damaren, “Analytical solutions to approximations of the Hamilton-Jacobi equation applied to satellite formation flying,” in Proceedings of the AIAA Guidance, Navigation, and Control Conference and Exhibit, Toronto, Canada, August 2009.
  32. T. T. Georgiou and M. C. Smith, “Robustness analysis of nonlinear feedback systems: an input-output approach,” IEEE Transactions on Automatic Control, vol. 42, no. 9, pp. 1200–1221, 1997.
  33. M. Green and D. J. N. Limebeer, Linear Robust Control, Prentice-Hall, Englewood Cliffs, NJ, USA, 1995.
  34. H. Schaub and J. L. Junkins, Analytical Mechanics of Space Systems, AIAA Education Series, Reston, VA, USA, 2003.
  35. J. R. Wertz, Ed., Spacecraft Attitude Determination and Control, D. Reidel, Dordrecht, The Netherlands, 1978.
  36. S. Prajna, A. Papachristodoulou, and F. Wu, “Nonlinear control synthesis by sum of squares optimization: a Lyapunov-based approach,” in Proceedings of the 5th Asian Control Conference, pp. 1–9, Melbourne, Australia, July 2004.
  37. A. Papachristodoulou and S. Prajna, “On the construction of Lyapunov functions using the sum of squares decomposition,” in Proceedings of the 41st IEEE Conference on Decision and Control, pp. 3482–3487, Las Vegas, Nev, USA, December 2002.
  38. M. Dalsmo and O. Egeland, “State feedback H∞-suboptimal control of a rigid spacecraft,” IEEE Transactions on Automatic Control, vol. 42, no. 8, pp. 1186–1189, 1997.