Abstract

An algorithm is proposed for constructing a control function that transfers a wide class of stationary nonlinear systems of ordinary differential equations from an initial state to a final state under certain restrictions on the control. The algorithm is designed to be convenient for numerical implementation. A constructive criterion for the possibility of the desired transfer is presented. The problem of an interorbital flight is considered as a test example and is simulated numerically with the presented method.

1. Introduction and Problem Formulation

Transferring an object from a known initial state to a desired point in the phase space is an important problem in motion control theory. It is quite difficult and has not been solved in the general case. Two classic approaches to solving it are program trajectories and targeting trajectories. In the former case, the object moves along a fixed trajectory that cannot be changed (so the problem of stabilizing this trajectory arises), which places strong restrictions on the capabilities of the control system. In the latter case, the motion is controlled continuously (with some time step) based on the current state of the object in order to perform the desired transfer. This requires constant tracking of the target point by technical means, which reduces the autonomy of the control system.

Nowadays, progress in computers, measurement tools, and other instrumentation makes it possible to start developing and producing autonomous intelligent control systems that can determine the transfer trajectory themselves, using onboard computational and navigational systems and only the initial (or current) state of the object. This greatly extends the capabilities of control systems, since there is no need for constant tracking of the target point, which often cannot be realized in practice.

Intelligent control systems for many technical objects (aerospace vehicles, gyroscopic mechanisms, robotic manipulators, etc.) are modeled mathematically by nonlinear stationary controllable systems of ordinary differential equations (ODEs). The necessary control action is found in the form of control functions that ensure the phase coordinates satisfy the given boundary conditions. Such problems of mathematical control theory are called boundary value problems (BVPs) for ODEs. They form an important and quite difficult class of problems, especially when additional details are introduced into the model, such as discrete time, control signal and measurement delays, permanent perturbations, or integro-differential components in the object behavior.

The first complete solution of the boundary value problem for linear controllable systems in the class of square-integrable control functions was proposed by Kalman et al. [1]. During the following decades, many papers and monographs appeared studying BVPs for linear and nonlinear controlled systems of ODEs and integro-differential equations of special kinds [1–33]. It should also be mentioned that control methods based on the classic approaches continue to appear, especially in areas where models become even more complex, such as fuzzy control systems (see, e.g., [34–37]).

Today, there are three main directions of BVP study. The first deals with finding necessary and sufficient conditions imposed on the right-hand side of the controlled system that guarantee the transfer of the object to the desired point in the phase space [1–5, 7–10, 13–16, 18, 19, 21–32].

The second direction is to determine the set of final states to which the object can be transferred from the initial state [3, 5–8, 11, 17, 20–22, 33].

The third direction concerns the development of exact or approximate methods for constructing or synthesizing program control functions and the corresponding trajectories that connect the given points in the phase space [1, 3–6, 11, 18, 20–22, 24, 26].

The solution of BVPs is studied sufficiently well for linear controllable systems. For quasi-linear (semilinear) and nonlinear systems of special forms, mainly iterative methods and their modifications are used [4, 5]. The weaknesses of such methods are their sensitivity to computational errors and external perturbations and their high computational cost, since each stage includes the solution of a nonlinear system of functional equations. Moreover, it is quite difficult to find good initial approximations for the control functions and for the phase coordinates as functions of time that would provide fast convergence; initial approximations are often chosen intuitively. Because of these problems, successive approximations cannot be used to solve BVPs for essentially nonlinear controllable systems. A theory and methods for solving BVPs for nonlinear systems of general form have yet to be developed.

The present paper is based on the results presented in [2, 24]. In [2], a Kalman-type condition is considered that guarantees local controllability for a wide class of nonlinear stationary systems of ODEs. In [24], the problem of local controllability of a nonlinear stationary system is considered; the notions of an auxiliary controllable system linearized along a trajectory and control, and of an auxiliary controllable system linearized about an equilibrium point of the original nonlinear system, are introduced. It is proven that local controllability of the original nonlinear system follows from the complete controllability of the auxiliary systems.

In the current paper, a local controllability condition for nonlinear stationary systems is found that is analogous to the condition presented in [2, 24]. The main difference from the previous results is that we develop an algorithm for constructing the synthesizing control function that provides the transfer from the origin of coordinates to a given final state for a wide class of nonlinear stationary systems of ODEs under restrictions on the control action magnitude. The algorithm is quite easy to implement and is stable with respect to input and computational errors. Moreover, estimates of the set of reachable final states are presented. The realization of the algorithm is simple: the original problem is reduced to the stabilization of a nonlinear system of a special form under permanent perturbations, and then an initial value problem (IVP) for an auxiliary system of ODEs is solved. The most complex and time-consuming part, the construction of the auxiliary system and its stabilization, can be carried out with analytical methods and realized with computer algebra tools. Additionally, the required control law, found from the stabilization condition for the auxiliary system, ensures the stability of the computational procedures with respect to input and computational errors. The effectiveness of the presented algorithm is demonstrated by the numerical solution of a practical problem.

In the paper, we consider a controllable system of ordinary differential equations (ODEs) (1), where is a vector of the phase coordinates and the control is a vector of the same or lesser dimension, and . We choose the time scale so that the considered time is . The right-hand side is . Notice that , where . The possible control magnitude is bounded by (6).
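Since the displayed formulas for system (1), conditions (2)–(4), and the control bound (6) did not survive extraction, the following LaTeX sketch records one plausible reading of the setup described verbally above; every symbol in it (f, x, u, r, N, x^1) is an assumption rather than the authors' notation.

```latex
% A hedged sketch of the setup described in the text; all notation here
% (f, x, u, r, N, x^1) is an assumption, since the displayed formulas
% (1)-(7) were lost in extraction.
\begin{align*}
  &\dot{x}(t) = f\bigl(x(t), u(t)\bigr), \qquad
     x \in \mathbb{R}^{n}, \quad u \in \mathbb{R}^{r}, \quad r \le n,
     \quad t \in [0, 1], \\
  &f(0, 0) = 0, \qquad \lVert u(t) \rVert \le N \quad \text{for all } t, \\
  &\text{boundary conditions: } \ x(0) = 0, \qquad x(1) = x^{1}.
\end{align*}
```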

Problem 1. Find a pair of functions and that satisfy (1) and the boundary conditions (7). We say that such a pair and is a solution of problems (1) and (7).

Theorem 2. If conditions (2), (3), and (4) are satisfied for the right-hand side of (1), then there exists such that problems (1) and (7) have a solution, which can be found by solving, first, a problem of stabilizing a linear nonstationary system with exponential coefficients and, second, an initial value problem for an auxiliary ODE system.

The main idea of the proof is to use successive changes of the independent and dependent variables to reduce the solution of the original system to the problem of stabilizing a nonlinear auxiliary system of ODEs of a special form under permanent perturbations. To solve the latter, we find a synthesizing control that provides an exponential decrease of the fundamental matrix of the linear auxiliary system. At the final stage, we return to the original variables.

2. Auxiliary System Construction

We seek the function entering the solution of (1) and (7) in the form (8). In the new variables, system (1) and the boundary conditions are written as (9) and (10).

We call the pair of functions and , which satisfy system (9) and conditions (10), a solution of (9) and (10). Now consider the problem of finding and that satisfy (9) and condition (11).

Remark 3. The limit as of the solution of (9) and (11) is the solution of (9) and (10).

We switch from the independent variable in system (9) to a new one according to (12), where is a certain constant value to be determined. Then, in terms of the new variable, (9) and (11) take the form (13) and (14). We call the pair of functions and that satisfy system (13) with conditions (14) a solution of problem (13) and (14). From the solution of (13) and (14), one can return to the solution of (9) and (11) using (12) and (15).

Let us denote (16). Using property (2) and the Taylor series expansion of the right-hand side of (1) about the origin, we can rewrite system (13) as (17).
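The Taylor-expansion step just mentioned is not reproduced above because the displayed formulas were lost, but the step itself can be illustrated on a toy scalar right-hand side; the function f below, and its splitting into a linear part and a higher-order remainder, are purely hypothetical and are not the paper's system.

```python
# Toy illustration of the Taylor-expansion step used in Section 2: expand a
# hypothetical right-hand side f(x, u) about the origin and separate the
# linear part from the higher-order terms. The concrete f below is NOT the
# paper's system; it is only an example with f(0, 0) = 0.
import sympy as sp

x, u = sp.symbols("x u")
f = sp.sin(x) + x * u + u**3                      # hypothetical right-hand side

# Linear part: partial derivatives of f at the origin.
A = sp.diff(f, x).subs({x: 0, u: 0})              # = 1
B = sp.diff(f, u).subs({x: 0, u: 0})              # = 0
linear_part = A * x + B * u

# Remainder: everything of order >= 2 in (x, u) near the origin.
remainder = sp.simplify(f - linear_part)
print("f           =", f)
print("linear part =", linear_part)
print("remainder   =", remainder)
```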

Let us bound the range of by condition (18). We will now shift the functions , , several times. Our aim is to obtain an equivalent system in which all the terms in the right-hand side that do not contain powers of or in explicit form are of the order as and in domain (6) and (18).

At the first stage, we change to by rule (19). Let for all . After substituting (19) into the left- and right-hand sides of (17), we obtain the equivalent system (20).

It follows from (14) and (19) that (21) holds. It is easy to see that, in the right-hand side of (20), the terms that do not contain powers of the components of or in explicit form are bounded by as and in (6) and (18).

At the second stage, we make the change of variables (22). With respect to these new variables, system (20) and the initial conditions (21) are written as (23) and (24). Compared to the previous shift of variables, the terms in the right-hand side of (23) that do not contain powers of the components of or in explicit form are now of the order as and in domain (6) and (18).

By induction, at the th stage, using (19)–(24), the necessary shift has the form (25). We apply (25) times and collect the terms that are linear with respect to the components of and include the coefficients , , and also the terms that are linear with respect to the components of and include the coefficients , . As a result, we obtain a system which, according to (20)–(25), can be written in the vector form (26) as follows:

The function includes all the terms that are linear with respect to the components of with coefficients , , as well as the terms of the last sum in the right-hand side for which and . The function includes all the terms that are linear with respect to the components of with coefficients , , as well as the terms of the last sum in the right-hand side for which and . The function contains all the terms that are nonlinear with respect to the components of or . Finally, the function includes all the terms that contain no powers of the components of and . The functions and have the form (27).

3. Right-Hand Side Terms Evaluation

It follows from the construction of (26) that, in domain (6) and (18), the following estimates hold. Moreover, conditions (2) and (3) lead to estimate (32). Estimate (32) follows from the representation

Remark 4. Let us denote the th column of by . Construct the matrix as follows. Here, for each , is the maximal number of columns of the form such that all the vectors , , , , are linearly independent.

It follows from (4) and (27) that there exists so that for any satisfying .
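Remark 4 assembles a matrix from the columns of the input matrix and their images under powers of the state matrix, keeping only linearly independent vectors, which is essentially a Kalman-type rank condition on the linearization. A minimal NumPy sketch of such a rank test is given below; the matrices A and B in it are hypothetical stand-ins, not the matrices constructed in the paper.

```python
# Minimal sketch of a Kalman-type rank test for a linearization x' = A x + B u.
# The matrices below are hypothetical; in the paper the analogous matrix is
# assembled column by column (Remark 4), keeping for each input column the
# maximal chain of linearly independent vectors A^k b_i.
import numpy as np

def kalman_rank(A: np.ndarray, B: np.ndarray) -> int:
    """Rank of the controllability matrix [B, AB, ..., A^(n-1)B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

# Hypothetical 3-state, 1-input example (a chain of integrators).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0]])

print(kalman_rank(A, B) == A.shape[0])   # True: the pair (A, B) is controllable
```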

We now consider system (35).

4. Auxiliary Lemma

Lemma 5. Let conditions (2) and (4) be satisfied for system (1). Then there exists , such that there exists a control of the form (36) that provides an exponential decrease of the fundamental matrix in (35).

Proof. In the proof, we use Krasovskii’s method of stabilizing linear nonstationary systems [3]. Let , , be the th column of . Construct the matrix . Here, for each , is the maximal number of columns such that the vectors , , , are linearly independent. We show that (38) holds. Let , , be the th column of the matrix . Consider the matrix , where are determined in the same way as for . Conditions (2), (27), and (29) imply that as . This leads to the existence of : such that (40) holds. Now, taking into account Remark 4 and equality (40), we can prove by contradiction that (41) holds. From (40) and (41), it follows that condition (38) is satisfied in the domain . Moreover, due to the structure of , estimate (42) is true in that domain.
With the use of (38), we change the variable according to expression (43). As a result, we get the system of ODEs (44). According to [38], for the first term of the right-hand side of (44) we have , where is a vector of length with a single unit at the th place and zeros elsewhere, and is the coefficient of the expansion into the sum of the vectors , , , , (notice that ); that is, representation (47) holds. The second term is . Consider the stabilization of system (49), where is a vector of length with a single unit at the th place and zeros elsewhere, and .
Let . The phase variables of (49) are connected to and its derivatives by equalities (50). Differentiating the last equality in (50) and using (49) leads to (51). The functions , in (50) and (51) are linear combinations of the functions , , , and their derivatives.

Remark 6. From the structure of the matrices and , it follows (see (27) and representation (47)) that the functions and their derivatives, as well as , , are bounded. Other elements of the columns , , obey the estimation , .

Let (52), where the coefficients , , are chosen so that the roots of equation (53) satisfy conditions (54).

Due to (50), (54), and Remark 6, the control , , provides an exponential decrease of the solutions of system (44). Switching back to the original variables in (52) and using (43), we have (55), where ; is the matrix of relations (50), which means that , ; and is the matrix consisting of the corresponding rows of . The desired control can be written in the form (36), where (56).

Denote by the fundamental matrix of system (51) closed with control (52) ( , an identity matrix). It is obvious that the elements of are exponential functions of negative argument or their derivatives.
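The stabilization step in Lemma 5 reduces to choosing feedback coefficients so that a characteristic polynomial has roots with negative real parts, after which the fundamental matrix decays exponentially. The sketch below illustrates this for a hypothetical third-order chain of integrators; the desired roots and all numerical values are assumptions, not quantities from the paper.

```python
# Sketch of the pole-placement step behind Lemma 5 for a hypothetical
# third-order chain y''' = v: choose feedback coefficients so that the
# closed-loop characteristic polynomial has prescribed negative roots, then
# verify that the fundamental (matrix-exponential) solution decays.
import numpy as np
from scipy.linalg import expm

roots = [-1.0, -2.0, -3.0]                    # assumed desired roots
# Characteristic polynomial s^3 + c2 s^2 + c1 s + c0 with these roots.
c2, c1, c0 = np.poly(roots)[1:]

# Closed-loop companion matrix of y''' + c2 y'' + c1 y' + c0 y = 0.
A_cl = np.array([[0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0],
                 [-c0, -c1, -c2]])

print("closed-loop eigenvalues:", np.linalg.eigvals(A_cl))
for t in (0.0, 1.0, 3.0, 6.0):
    # Norm of the fundamental matrix Phi(t) = exp(A_cl t): exponential decrease.
    print(f"t = {t}:  ||Phi(t)|| = {np.linalg.norm(expm(A_cl * t)):.3e}")
```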

Consider system (35) with control (55); the resulting closed system is (57). Introduce a block-diagonal matrix whose diagonal blocks are the matrices , . Then, according to (43) and (50), the fundamental matrix , , of system (57) has the form (58). Estimate (59) follows from (58), the structure of the matrices and , and Remark 6.

Let . Then the assertion of the lemma follows from (59).

In addition, on the basis of (42), (55), (58), and Remark 6, we have (60).

5. Main Theorem Proof

System (26) with control (55) has the form (61). The change of variables (62) leads to the following representation (63):

Let us show that all solutions of (63) with initial values (62) that start in a sufficiently small neighborhood of zero decrease exponentially. Let , , be the fundamental matrix of the system . Then, on the basis of (59), (60), and (62), we have (64). Let us choose to provide (65).

The solution of (63) with initial values (28) can be presented as

Now, from (66) and (67), using (30), (31), (32), (36), (62), and (64), after changing to , the following estimates (68) hold in domain (6) and (18). The constant depends on domain (6) and (18).

We apply the well-known result from [39] to inequalities (68) and obtain (69). Using (65), we fix so that the inequality is satisfied.
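The inequality from [39] is not reproduced above, so which exact lemma is invoked is unknown; results applied to integral inequalities of this kind are typically of Gronwall–Bellman type. For orientation only, the classical statement is recalled below as an assumption about the tool being used.

```latex
% Classical Gronwall--Bellman lemma, recalled for orientation only; whether
% [39] refers to exactly this statement is an assumption.
\text{If } v(t) \le C + \int_{0}^{t} g(s)\, v(s)\, ds \ \text{ for } t \ge 0,
\ \text{with } v, g \ge 0 \ \text{continuous and } C \ge 0, \ \text{then }
v(t) \le C \exp\!\left( \int_{0}^{t} g(s)\, ds \right).
```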

Now, let us bound the choice of by the condition . Then, after integrating the right-hand sides of (69), we have (70). Note that all .

On the basis of (28) and (32), the last two estimates can be written as the single inequality (71) in the domain . The constant depends on domain (6) and (18).

With the use of (36), (60), (62), and (71), we estimate , obtaining (72):

Now, we can set the value from the theorem formulation to be . This shows that the solution of system (63) with initial values (62) and (28) does not leave the domain for and decreases exponentially. Besides, by (72), the corresponding function obeys restriction (6). Moreover, if we substitute the solution into (62) and (36) and return to the variables according to (19) and (22)–(25), we obtain the solution of (13) and (14).

Then, we return everything to the original variables with (15), (12), and finally (8). Now, according to Remark 3, passing to the limit as gives the solution of the original problem (1) and (7). This proves the theorem.

The algorithm for solving problem (1) and (7) consists of the following steps:
(1) Construction of the auxiliary system (26); performed with analytical methods.
(2) Stabilization of system (35), yielding the synthesizing control (55); performed with analytical methods.
(3) Solution of the IVP (26) and (28) with control (55); its solution is substituted into (55), and thus the functions and are found; performed with numerical integration methods.
(4) Return to the original variables according to (22)–(25), (19), (15), (12), and (8); performed with analytical methods.
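The four steps above mix symbolic preprocessing with numerical integration. The schematic below shows how such a pipeline might be organized in code (a symbolic stage producing a feedback law, then a numerical solver for the closed-loop IVP); the toy two-dimensional system and the gains are placeholders, not the paper's systems (26) and (35) or control (55).

```python
# Schematic of the four-step pipeline: steps 1, 2, and 4 are symbolic
# (computer algebra), step 3 is numerical integration of the closed-loop
# auxiliary system. The toy right-hand side and the feedback gains below are
# placeholders, not the systems (26), (35) or the control (55) from the paper.
import numpy as np
from scipy.integrate import solve_ivp

def closed_loop_rhs(tau, z):
    """Step 3: auxiliary system with the stabilizing feedback substituted in.
    Here a hypothetical 2-state example with a small nonlinear term."""
    k1, k2 = 3.0, 4.0                       # placeholder gains from 'step 2'
    u = -k1 * z[0] - k2 * z[1]              # synthesizing control (placeholder)
    return [z[1] + 0.1 * z[0] ** 2,         # toy nonlinearity
            u]

z0 = [0.5, 0.0]                             # initial data analogous to (28)
sol = solve_ivp(closed_loop_rhs, (0.0, 20.0), z0, max_step=0.05)

# Step 4 (schematic): map the auxiliary variables back to the original ones;
# in the paper this is done with the analytic substitutions (22)-(25), (19),
# (15), (12), and (8). Here we only check that the auxiliary state decays.
print("final auxiliary state:", sol.y[:, -1])
```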

Remark 7. Consider system (73), where , ; , ; , , , , and is a constant perturbation. Corollary 8 follows from the theorem.

Corollary 8. If conditions (2)–(4) are satisfied for system (73), then there exists such that, for any and if and , a solution of (73) and (7) exists and can be found after stabilizing a linear nonstationary system with exponential coefficients and solving the initial value problem for an auxiliary system of ODEs.

6. Example of the Algorithm Application

To demonstrate the effectiveness of the proposed method, we consider the problem of transferring a material point (a satellite) moving in a central gravitational field to a desired circular orbit by means of jet thrust.

According to [3], the system in deviations from the prescribed circular motion, together with condition (7), takes the following form. The phase coordinates , , and describe the deviation from the original circular orbit of radius ; is the radial velocity; is the generalized momentum and ; and are the projections of the relative velocity vector onto the radial and tangential directions, respectively (they are constant); is the mass, is its rate of change, and the control is ; , where is the universal gravitational constant and is the mass of the Earth. The vector of phase coordinates is . The control is scalar.

System (13) and conditions (14) for this problem take the form (76) and (78). In the following, we assume . It is necessary to make seven transformations of type (25), given by (79), to solve problem (76) and (78). Let . The matrices and and the system analogous to (35) have the form (80).

To stabilize system (80), we construct the matrix with , , ; that is, (81) with . After introducing , , we obtain (82).

The change reduces (82) to a linear equation of the third order, namely (83). The variables , , and are connected to , , and by the relations

Let (85). After substituting into (83), we obtain an equation with , , and as the roots of the characteristic polynomial. Returning to the original variables gives (86).

It is obvious that (86) provides an exponential decrease of solutions of (80). At the final stage, we solve the IVP for the system obtained from (76) after changing the phase coordinates according to (79) with control (86). Then, we return to the original variables. The initial values for the Cauchy problem are

7. Numerical Simulation

For numerical simulation, we choose another problem: the control of a single-link manipulator. According to [40], the system of equations describing the motion of a manipulator under perturbations has the form (88), where is the deviation angle of the manipulator relative to the vertical axis, is the rate of change of the deflection angle, , , , is the gravity factor, is the viscous friction coefficient, is the load mass, is the manipulator length, and is its mass. The vector of phase coordinates is . The control is scalar. We consider the boundary conditions (89). System (13) and conditions (14) for problem (88) and (89) have the form (90) and (91). To solve (90) and (91), we perform one transformation of the function , which leads to the new functions . The system takes the form (93). The linear part of (93) is (94). After solving the stabilization problem for system (93), we obtain the control law (95).

Next, we solve the Cauchy problem for (93) under control (95) with initial data , . Finally, we return to the original variables , , , and . For the numerical simulation, we choose rad, , , meters, kg, kg, and . Figures 1 and 2 show the control and the corresponding transient plots of the phase coordinates , , and with respect to the original independent variable .
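Because the model (88), the control law (95), and the parameter values above are not reproduced here, the sketch below simulates a generic single-link manipulator with gravity and viscous friction under a simple PD-type stabilizing torque; the dynamics, parameters, gains, and initial deviation are all assumptions chosen only to illustrate the kind of transient shown in Figures 1 and 2.

```python
# Hedged sketch: a generic single-link manipulator (pendulum-like) model with
# gravity, viscous friction, and a PD-type stabilizing torque. This is NOT the
# paper's system (88) or control (95); the dynamics, parameters, and gains are
# assumptions used only to illustrate the simulation step.
import numpy as np
from scipy.integrate import solve_ivp

g, l, m, M, k = 9.81, 1.0, 1.0, 2.0, 0.5      # hypothetical parameters
J = (m + M) * l ** 2                          # crude inertia estimate

def manipulator(t, x):
    theta, omega = x                          # deviation angle and its rate
    u = -20.0 * theta - 8.0 * omega           # placeholder stabilizing feedback
    domega = (-(m + M) * g * l * np.sin(theta) - k * omega + u) / J
    return [omega, domega]

x0 = [0.6, 0.0]                               # hypothetical initial deviation (rad)
sol = solve_ivp(manipulator, (0.0, 5.0), x0, max_step=0.01)
print("theta(T), omega(T) =", sol.y[0, -1], sol.y[1, -1])   # both near zero
```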

8. Numerical Simulation Discussion

Numerical simulation consists of the following stages:
(1) Construction of the auxiliary system (93); performed with analytical methods and realized with computer algebra.
(2) Stabilization of system (93) and determination of the control law (95); performed with analytical methods and realized with computer algebra.
(3) Change of variables from to ; performed with computer algebra.
(4) Solution of the IVP (90) and (91) with control (95) in the variables; performed with a numerical integration method.
(5) Return to the original variables according to (8), (12), and (15), along with the IVP solution.

It follows from the simulation results that
(a) the control at the initial stage demands the greatest energy input, which is determined by the initial state ;
(b) multiple simulations with different initial data show that, for the given transfer time of 1 second, the reachable area of initial states is given by and ;
(c) the numerical simulation can be performed on an average personal computer.

9. Conclusions

The analysis of the theorem's proof shows that the most difficult and time-consuming part of the algorithm implementation can be carried out with the analytical methods of computer algebra packages. The results of the numerical simulation of the interorbital flight problem show that the method can be used for the construction and simulation of control systems for various technical objects.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.