Abstract and Applied Analysis
Volume 2011 (2011), Article ID 486714, 18 pages
http://dx.doi.org/10.1155/2011/486714
Research Article

The Optimization of Solutions of the Dynamic Systems with Random Structure

Miroslava Růžičková1 and Irada Dzhalladova2

1University of Žilina, Žilina, Slovakia
2Vadym Hetman Kyiv National Economic University, Kyiv, Ukraine

Received 31 January 2011; Accepted 31 March 2011

Academic Editor: Josef Diblík

Copyright © 2011 Miroslava Růžičková and Irada Dzhalladova. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The paper deals with a class of jump control systems with semi-Markov coefficients. The control system is described by a system of linear differential equations. Every jump of the random process implies a random transformation of the solutions of the considered system. Relations determining the optimal control that minimizes the functional are derived using Lyapunov functions. Necessary optimization conditions, which enable the synthesis of the optimal control, are established as well.

1. The Statement of the Problem

Optimal control theory, as a mathematical optimization method for deriving control policies, plays an important role in the development of modern mathematical control theory. Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. The background of the optimization method can be found in the work of Lev Pontryagin and his well-known maximum principle. Optimal control has been applied in diverse fields, such as economics, bioengineering, and process control, among many others. Some real-life problems are described by continuous-time or discrete-time linear systems of differential equations, but many of them, for example economic systems, are described by dynamic systems with random jumping changes. The general theory of random structure systems can be found in the work of Artemiev and Kazakov [1]. The optimization of linear systems with random parameters is considered in many works, for example in [2–12]. In particular, original results concerning the stabilization of systems with random coefficients and a random process are derived using moment equations and Lyapunov functions in [4]. These results provide a convenient technique for applying the method in practice, using suitable software, in engineering or economic investigations. Our aim is the extension of these results to a new class of systems of linear differential equations with semi-Markov coefficients and a random transformation of solutions performed simultaneously with the jumps of the semi-Markov process. We will focus on using the particular values of Lyapunov functions for the calculation of the coefficients of the control vector which minimize the quality criterion. We will also establish the necessary conditions of optimality which enable the synthesis of the optimal control for the considered class of systems.

Let us consider the linear control system
$$\frac{dX(t)}{dt} = A(t,\xi(t))X(t) + B(t,\xi(t))U(t) \tag{1.1}$$
on the probability basis $(\Omega, \mathfrak{F}, \mathbf{P}, \mathbf{F} \equiv \{\mathbf{F}_t,\ t \ge 0\})$, and together with (1.1) we consider the initial condition
$$X(0) = \varphi(\omega), \qquad \varphi\colon \Omega \to \mathbb{R}^n. \tag{1.2}$$
The coefficients of the system are semi-Markov coefficients defined by the transition intensities $q_{\alpha k}(t)$, $\alpha, k = 1, 2, \ldots, n$, from state $\theta_k$ to state $\theta_\alpha$. We suppose that the vectors $U(t)$ belong to the set of controls $U$ and that the functions $q_{\alpha k}(t)$, $\alpha, k = 1, 2, \ldots, n$, satisfy the conditions [13]
$$q_{\alpha k}(t) \ge 0, \qquad \int_0^\infty q_k(t)\,dt = 1, \qquad q_k(t) \equiv \sum_{\alpha=1}^n q_{\alpha k}(t). \tag{1.3}$$
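The jump mechanism described by the intensities in (1.3) can be illustrated with a small simulation. The following sketch is ours, not part of the paper: it draws the sojourn time in the current state $\theta_k$ from the density $q_k(t)$ and then picks the next state; for concreteness we take the Markov special case with exponential sojourn times, $q_{\alpha k}(t) = p_{\alpha k}\,\lambda_k e^{-\lambda_k t}$, and all parameter values are invented.

```python
import random

def simulate_semi_markov(lam, p, state0, t_end, rng):
    """Return the jump times t_j and the state held on [t_j, t_{j+1}).

    lam[k] is the sojourn intensity in state k (so q_k(t) = lam_k e^{-lam_k t}),
    p[k] is the list of switching probabilities p_{alpha k} out of state k.
    """
    t, state = 0.0, state0
    times, states = [0.0], [state0]
    while True:
        # sojourn time in the current state, drawn from the density q_k
        tau = rng.expovariate(lam[state])
        t += tau
        if t >= t_end:
            break
        # next state drawn according to the switching probabilities
        u = rng.random()
        acc = 0.0
        for s, prob in enumerate(p[state]):
            acc += prob
            if u <= acc:
                state = s
                break
        times.append(t)
        states.append(state)
    return times, states

rng = random.Random(1)
lam = [1.0, 2.0]                 # invented intensities for theta_1, theta_2
p = [[0.0, 1.0], [1.0, 0.0]]     # always switch to the other state
times, states = simulate_semi_markov(lam, p, 0, 50.0, rng)
```

With the switching matrix chosen here the process simply alternates between the two states; replacing `expovariate` with another sojourn-time sampler gives a genuinely semi-Markov $\xi(t)$.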

Definition 1.1. Let the matrices $Q(t,\xi(t))$, $L(t,\xi(t))$ with semi-Markov elements be symmetric and positive definite. The cost functional
$$J = \left\langle \int_0^\infty \left[ X'(t)\,Q(t,\xi(t))\,X(t) + U'(t)\,L(t,\xi(t))\,U(t) \right] dt \right\rangle, \tag{1.4}$$
defined on the space $C^1 \times U$, where $\langle\cdot\rangle$ denotes the mathematical expectation, is called the quality criterion.

Definition 1.2. Let $S(t,\xi(t))$ be a matrix with semi-Markov elements. The control vector
$$U(t) = S(t,\xi(t))\,X(t) \tag{1.5}$$
which minimizes the quality criterion $J(X, U)$ with respect to the system (1.1) is called the optimal control.

If we denote
$$G(t,\xi(t)) \equiv A(t,\xi(t)) + B(t,\xi(t))\,S(t,\xi(t)), \qquad H(t,\xi(t)) \equiv Q(t,\xi(t)) + S'(t,\xi(t))\,L(t,\xi(t))\,S(t,\xi(t)), \tag{1.6}$$
then the system (1.1) can be rewritten in the form
$$\frac{dX(t)}{dt} = G(t,\xi(t))\,X(t), \tag{1.7}$$
and the functional (1.4) in the form
$$J = \left\langle \int_0^\infty X'(t)\,H(t,\xi(t))\,X(t)\,dt \right\rangle. \tag{1.8}$$

We suppose also that, together with every jump of the random process $\xi(t)$ at a time $t_j$, the solutions of the system (1.7) are subject to the random transformation
$$X(t_j + 0) = C_{sk}\,X(t_j - 0), \quad s, k = 1, 2, \ldots, n, \tag{1.9}$$
whenever the conditions $\xi(t_j + 0) = \theta_s$, $\xi(t_j - 0) = \theta_k$ hold.

Definition 1.3. Let $a_k(t)$, $k = 1, \ldots, n$, $t \ge 0$, be a selection of $n$ different positive functions. If $\xi(t_j + 0) = \theta_s$, $\xi(t_j - 0) = \theta_k$, $s, k = 1, \ldots, n$, and for $t_j \le t < t_{j+1}$ the equality $a(t, \xi(t) = \theta_s) = a_s(t - t_j)$ holds, then the function $a(t,\xi(t))$ is called a semi-Markov function.

The application of semi-Markov functions makes it possible to use the concept of a stochastic operator. In fact, the semi-Markov function $a(t,\xi(t))$ is an operator of the semi-Markov process $\xi(t)$, because its value is defined not only by the values $t$ and $\xi(t)$: it is also necessary to specify the functions $a_s(t)$, $t \ge 0$, and the time $t_j$ of the jump of the process $\xi(t)$ which precedes the moment $t$.

Our task is the construction of a Lyapunov function for the new class of systems of linear differential equations with semi-Markov coefficients, and then the application of this function to solving the optimization problem of minimizing the quality criterion.

2. Auxiliary Results

In the proof of Theorem 3.1 in Section 3, we will employ two results concerning the construction of the Lyapunov function and the construction of the optimal control for a system of linear differential equations in the deterministic case. We derive these auxiliary results in this part.

2.1. The Construction of the Lyapunov Function

Let us consider the system of linear differential equations
$$\frac{dX(t)}{dt} = A(t,\xi(t))\,X(t) \tag{2.1}$$
associated with the system (1.1).

Let us define the quadratic form
$$w(t, x, \xi(t)) = x'\,B(t,\xi(t))\,x, \qquad B(t,\xi(t)) > 0, \tag{2.2}$$
where the elements of the matrix $B(t,\xi(t))$ are semi-Markov processes. The matrix $B(t,\xi(t))$ is defined by such a set of $n$ different symmetric and positive definite matrices $B_k(t)$, $t \ge 0$, $k = 1, \ldots, n$, that the equality $\xi(t) = \theta_s$ for $t_j \le t < t_{j+1}$ implies
$$B(t,\xi(t)) = B_s(t - t_j), \quad s = 1, 2, \ldots, n. \tag{2.3}$$

Our purpose in this section is to express the value of the functional
$$v = \left\langle \int_0^\infty w(t, X(t), \xi(t))\,dt \right\rangle \tag{2.4}$$
in a convenient form, which can help us to prove the $L_2$-stability of the trivial solution of the system (2.1).

At first, we introduce the particular Lyapunov functions
$$v_k(x) = \left\langle \int_0^\infty w(t, X(t), \xi(t))\,dt \;\middle|\; X(0) = x,\ \xi(0) = \theta_k \right\rangle, \quad k = 1, 2, \ldots, n. \tag{2.5}$$
If we can find the values of the particular Lyapunov functions in the form $v_k(x) = x' C_k x$, $k = 1, 2, \ldots, n$, then the value of the functional $v$ can be expressed by the formula
$$v = \sum_{k=1}^n \int_{E_n} v_k(x)\,f_k(0,x)\,dx = \sum_{k=1}^n C_k * \int_{E_n} x\,x'\,f_k(0,x)\,dx = \sum_{k=1}^n C_k * D_k(0), \tag{2.6}$$
where the scalar value
$$N * S = \sum_{k=1}^l \sum_{j=1}^m \nu_{kj}\,s_{kj} \tag{2.7}$$
is called the scalar product of the two matrices $N = (\nu_{kj})$, $S = (s_{kj})$ and has the property [14]
$$\frac{D(N * S)}{D S} = N. \tag{2.8}$$
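Since the scalar product (2.7) and the derivative rule (2.8) are used repeatedly below, here is a tiny numerical illustration of them (ours; the matrices are invented): $N * S$ is the element-wise sum of products, and perturbing a single entry $s_{kj}$ changes $N * S$ at exactly the rate $\nu_{kj}$, which is the content of (2.8).

```python
# Numerical check of the matrix scalar product N*S and of D(N*S)/DS = N.

def mat_dot(N, S):
    """Scalar product of two equally sized matrices: sum of nu_kj * s_kj."""
    return sum(N[k][j] * S[k][j]
               for k in range(len(N)) for j in range(len(N[0])))

N = [[1.0, 2.0], [3.0, 4.0]]
S = [[5.0, 6.0], [7.0, 8.0]]
base = mat_dot(N, S)                 # 1*5 + 2*6 + 3*7 + 4*8 = 70

# finite-difference derivative with respect to the entry s_{01};
# by (2.8) it must equal nu_{01} = 2
h = 1e-6
S[0][1] += h
deriv = (mat_dot(N, S) - base) / h
```

Because $N * S$ is linear in $S$, the finite difference recovers $\nu_{kj}$ up to rounding error only.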

The first auxiliary result contains two equivalent necessary and sufficient conditions for the $L_2$-stability (see [4]) of the trivial solution of the system (2.1), and one sufficient condition for the stability of the solutions.

Theorem 2.1. The trivial solution of the system (2.1) is $L_2$-stable if and only if either of the following two equivalent conditions holds:
(1) the system of equations
$$C_k = H_k + \int_0^\infty \sum_{s=1}^n q_{sk}(t)\, N_k'(t)\, C_{sk}'\, C_s\, C_{sk}\, N_k(t)\,dt, \quad k = 1, 2, \ldots, n, \tag{2.9}$$
has a solution $C_k > 0$, $k = 1, 2, \ldots, n$, for $H_k > 0$, $k = 1, 2, \ldots, n$;
(2) the sequence of approximations
$$C_k^{(0)} = 0, \qquad C_k^{(j+1)} = H_k + \int_0^\infty \sum_{s=1}^n q_{sk}(t)\, N_k'(t)\, C_{sk}'\, C_s^{(j)}\, C_{sk}\, N_k(t)\,dt, \quad k = 1, 2, \ldots, n,\ j = 0, 1, 2, \ldots, \tag{2.10}$$
converges.
Moreover, the solutions of the system (2.1) are $L_2$-stable if there exist symmetric and positive definite matrices $C_k > 0$, $k = 1, 2, \ldots, n$, such that the property
$$C_k - \int_0^\infty \sum_{s=1}^n q_{sk}(t)\, N_k'(t)\, C_{sk}'\, C_s\, C_{sk}\, N_k(t)\,dt > 0, \quad k = 1, 2, \ldots, n, \tag{2.11}$$
holds.

Proof. We will construct a system of equations which defines the particular Lyapunov functions $v_k(x)$, $k = 1, 2, \ldots, n$. Let us introduce the auxiliary semi-Markov functions
$$u_k(t,x) = \left\langle w(t, X(t), \xi(t)) \mid X(0) = x,\ \xi(0) = \theta_k \right\rangle, \quad k = 1, 2, \ldots, n. \tag{2.12}$$
If the random process remains in the state $\xi(t) = \theta_k$ for all $t \ge 0$, the equalities
$$X(t) = N_k(t)\,x, \qquad X(0) = x \tag{2.13}$$
are true. Simultaneously with the jumps of the random process $\xi(t)$, jumps of the solutions of (2.1) occur, so, in view of (2.12), we derive the equations
$$u_k(t,x) = \psi_k(t)\, w_k(t, N_k(t)x) + \int_0^t \sum_{s=1}^n q_{sk}(\tau)\, u_s(t-\tau, C_{sk}N_k(\tau)x)\,d\tau, \quad k = 1, 2, \ldots, n, \tag{2.14}$$
where $\psi_k(t) = 1 - \int_0^t q_k(\tau)\,d\tau$ denotes the probability that the process remains in the state $\theta_k$ during the whole interval $[0, t]$. Further, if we introduce the notation
$$u_k(t,x) = x'\,\bar u_k(t)\,x, \quad k = 1, 2, \ldots, n, \tag{2.15}$$
then (2.14) can be rewritten as the system of integral equations for the matrices $\bar u_k(t)$ in the form
$$\bar u_k(t) = \psi_k(t)\, N_k'(t)\, B_k(t)\, N_k(t) + \int_0^t \sum_{s=1}^n q_{sk}(\tau)\, N_k'(\tau)\, C_{sk}'\, \bar u_s(t-\tau)\, C_{sk}\, N_k(\tau)\,d\tau, \quad k = 1, 2, \ldots, n. \tag{2.16}$$
We define the matrices $C_k$ and the functions $v_k(x)$, $k = 1, 2, \ldots, n$, with regard to (2.5) and (2.12), by the formulas
$$C_k = \int_0^\infty \bar u_k(t)\,dt, \qquad v_k(x) = \int_0^\infty u_k(t,x)\,dt. \tag{2.17}$$
Integrating the system (2.16) from $0$ to $\infty$, we get the system
$$C_k = \int_0^\infty \psi_k(t)\, N_k'(t)\, B_k(t)\, N_k(t)\,dt + \int_0^\infty \sum_{s=1}^n q_{sk}(\tau)\, N_k'(\tau)\, C_{sk}'\, C_s\, C_{sk}\, N_k(\tau)\,d\tau, \quad k = 1, 2, \ldots, n. \tag{2.18}$$
Similarly, integrating the system (2.14), we get the system of equations determining the particular Lyapunov functions
$$v_k(x) = \int_0^\infty \psi_k(t)\, w_k(t, N_k(t)x)\,dt + \int_0^\infty \sum_{s=1}^n q_{sk}(t)\, v_s(C_{sk}N_k(t)x)\,dt. \tag{2.19}$$
Let us denote
$$H_k = \int_0^\infty \psi_k(t)\, N_k'(t)\, B_k(t)\, N_k(t)\,dt, \quad k = 1, 2, \ldots, n. \tag{2.20}$$
If there exist positive constants $\lambda_1$, $\lambda_2$ such that
$$\lambda_1 E \le B_k(t) \le \lambda_2 E, \tag{2.21}$$
or the equivalent conditions
$$\lambda_1 \|x\|^2 \le x'\,B_k(t)\,x \le \lambda_2 \|x\|^2 \tag{2.22}$$
hold, then the matrices $H_k$, $k = 1, 2, \ldots, n$, are symmetric and positive definite. Using (2.17), the system (2.18) can be rewritten in the form
$$C_k = H_k + \sum_{s=1}^n L_{sk}[C_s], \quad k = 1, 2, \ldots, n, \tag{2.23}$$
where $L_{sk}$ are the integral operators from (2.18). It is easy to see that the system (2.23) is conjugate to the system (2.9). Therefore, the existence of a positive definite solution $C_k > 0$, $k = 1, 2, \ldots, n$, of the system (2.23) is equivalent to the existence of a positive definite solution of the system (2.9), and it is equivalent to the $L_2$-stability of the trivial solution of the system (2.1). On the other hand, the existence of the particular Lyapunov functions $v_k(x)$, $k = 1, 2, \ldots, n$, in (2.5) implies the $L_2$-stability of the solutions of the system (2.1) because, in view of the conditions (2.22) and the convergence of the integrals (2.17), we get the inequality
$$\lambda_1 \left\langle \int_0^\infty \|X(t)\|^2\,dt \right\rangle \le \left\langle \int_0^\infty w(t, X(t), \xi(t))\,dt \right\rangle \le \lambda_2 \left\langle \int_0^\infty \|X(t)\|^2\,dt \right\rangle. \tag{2.24}$$
The theorem is proved.

Remark 2.2. If the system of linear differential equations (2.1) is a system with piecewise constant coefficients and the function $w(t, X(t), \xi(t))$ has the form
$$w(t, X(t), \xi(t)) = x'\,B(\xi(t))\,x, \qquad B(\theta_k) \equiv B_k, \quad k = 1, 2, \ldots, n, \tag{2.25}$$
then the system (2.18) can be written in the form
$$C_k = \int_0^\infty \psi_k(t)\, e^{A_k' t}\, B_k\, e^{A_k t}\,dt + \int_0^\infty \sum_{s=1}^n q_{sk}(t)\, e^{A_k' t}\, C_{sk}'\, C_s\, C_{sk}\, e^{A_k t}\,dt, \quad k = 1, 2, \ldots, n. \tag{2.26}$$

In particular, if the semi-Markov process $\xi(t)$ is identical with a Markov process, then the system (2.26) has the form
$$C_k = \int_0^\infty e^{a_{kk}t}\, e^{A_k' t}\, B_k\, e^{A_k t}\,dt + \int_0^\infty \sum_{s \ne k} a_{sk}\, e^{a_{kk}t}\, e^{A_k' t}\, C_{sk}'\, C_s\, C_{sk}\, e^{A_k t}\,dt, \quad k = 1, 2, \ldots, n, \tag{2.27}$$
or, more simply,
$$C_k = \int_0^\infty e^{a_{kk}t}\, e^{A_k' t} \left[ B_k + \sum_{s \ne k} a_{sk}\, C_{sk}'\, C_s\, C_{sk} \right] e^{A_k t}\,dt, \quad k = 1, 2, \ldots, n. \tag{2.28}$$
Moreover, under the assumption that the integral in (2.28) converges, the system (2.28) is equivalent to the system of matrix equations
$$a_{kk} C_k + A_k' C_k + C_k A_k + B_k + \sum_{s \ne k} a_{sk}\, C_{sk}'\, C_s\, C_{sk} = 0, \quad k = 1, 2, \ldots, n, \tag{2.29}$$
which can be written as the system
$$A_k' C_k + C_k A_k + B_k + \sum_{s=1}^n a_{sk}\, C_{sk}'\, C_s\, C_{sk} = 0, \quad k = 1, 2, \ldots, n, \tag{2.30}$$
if $C_{kk} = E$, $k = 1, 2, \ldots, n$.

Example 2.3. Let the semi-Markov process $\xi(t)$ take two states $\theta_1$, $\theta_2$ and let it be identical with the Markov process described by the system of differential equations
$$\frac{dp_1(t)}{dt} = -\lambda p_1(t) + \lambda p_2(t), \qquad \frac{dp_2(t)}{dt} = \lambda p_1(t) - \lambda p_2(t). \tag{2.31}$$
We will consider the $L_2$-stability of the solutions of the differential equation
$$\frac{dx(t)}{dt} = a(\xi(t))\,x(t), \qquad a(\theta_k) \equiv a_k, \tag{2.32}$$
constructing a system of the type (2.26) related to (2.32). The system is
$$c_1 = 1 + \int_0^\infty e^{2a_1 t}\, \lambda e^{-\lambda t}\, c_2\,dt, \qquad c_2 = 1 + \int_0^\infty e^{2a_2 t}\, \lambda e^{-\lambda t}\, c_1\,dt, \tag{2.33}$$
and its solution is
$$c_1 = \frac{(\lambda - a_1)(\lambda - 2a_2)}{2a_1a_2 - \lambda(a_1 + a_2)}, \qquad c_2 = \frac{(\lambda - a_2)(\lambda - 2a_1)}{2a_1a_2 - \lambda(a_1 + a_2)}. \tag{2.34}$$
The trivial solution of (2.32) is $L_2$-stable if $c_1 > 0$ and $c_2 > 0$. Let the intensities of a semi-Markov process $\xi(t)$ satisfy the conditions
$$q_{11}(t) \ge 0, \qquad q_{22}(t) \ge 0, \qquad q_{21}(t) - \lambda e^{-\lambda t} \ge 0, \qquad q_{12}(t) - \lambda e^{-\lambda t} \ge 0. \tag{2.35}$$
Then, by Theorem 2.1, the conditions
$$1 - c_1 \int_0^\infty q_{11}(t)\, e^{2a_1 t}\,dt - c_2 \int_0^\infty \left( q_{21}(t) - \lambda e^{-\lambda t} \right) e^{2a_2 t}\,dt > 0,$$
$$1 - c_1 \int_0^\infty \left( q_{12}(t) - \lambda e^{-\lambda t} \right) e^{2a_1 t}\,dt - c_2 \int_0^\infty q_{22}(t)\, e^{2a_2 t}\,dt > 0 \tag{2.36}$$
are sufficient conditions for the $L_2$-stability of the solutions of (2.32).
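The computations in Example 2.3 can be checked numerically. The sketch below is ours, not part of the paper: it evaluates the integrals in (2.33) in closed form, solves the resulting pair of scalar equations by the successive approximations of Theorem 2.1(2), and compares the result with the closed form (2.34); the values of $a_1$, $a_2$, $\lambda$ are invented.

```python
# Under the reconstruction above, the system (2.33) reads
#   c1 = 1 + lam*c2/(lam - 2*a1),   c2 = 1 + lam*c1/(lam - 2*a2),
# using  int_0^inf e^{2 a t} lam e^{-lam t} dt = lam/(lam - 2a)  for lam > 2a.

a1, a2, lam = -1.0, -0.5, 1.0        # invented, with a_k < 0 < lam

# successive approximations C_k^(0) = 0, C_k^(j+1) = ... of Theorem 2.1
c1, c2 = 0.0, 0.0
for _ in range(200):
    c1, c2 = (1.0 + lam * c2 / (lam - 2.0 * a1),
              1.0 + lam * c1 / (lam - 2.0 * a2))

# closed form (2.34)
den = 2.0 * a1 * a2 - lam * (a1 + a2)
c1_closed = (lam - a1) * (lam - 2.0 * a2) / den
c2_closed = (lam - a2) * (lam - 2.0 * a1) / den
```

For these parameters the iteration settles at $c_1 = 1.6$, $c_2 = 1.8$, matching (2.34), and both values are positive, so the trivial solution of (2.32) is $L_2$-stable.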

2.2. The Construction of an Optimal Control for the System of Linear Differential Equations in the Deterministic Case

Let us consider the deterministic system of linear equations
$$\frac{dX(t)}{dt} = A(t)X(t) + B(t)U(t) \tag{2.37}$$
in a bounded domain $G$, where $X \in \mathbb{R}^m$, $U \in \mathbb{R}^l$, and together with (2.37) we consider the initial condition
$$X(t_0) = x_0. \tag{2.38}$$
We assume that the vector $U(t)$ belongs to the control set $U$. The quality criterion has the form of the quadratic functional
$$I(t) = \frac{1}{2} \int_t^\infty \left[ X'(\tau)\,C(\tau)\,X(\tau) + U'(\tau)\,\mathbf{D}(\tau)\,U(\tau) \right] d\tau, \qquad C'(t) = C(t), \quad \mathbf{D}'(t) = \mathbf{D}(t), \tag{2.39}$$
in the space $C^1(G) \times U$. The control vector
$$U(t) = S(t)X(t), \qquad \dim S(t) = l \times m, \tag{2.40}$$
which minimizes the quality criterion (2.39) is called the optimal control.

The optimization problem is the problem of finding the optimal control (2.40) among all feasible controls $U$ or, in fact, the problem of finding the equation determining $S(t)$, $\dim S(t) = l \times m$.

Theorem 2.4. Let the optimal control (2.40) for the system (2.37) exist. Then the control equations
$$S(t)X(t) = -\mathbf{D}^{-1}(t)\,B'(t)\,\Psi', \qquad \Psi' = K(t)X(t), \tag{2.41}$$
where the matrix $K(t)$ satisfies the Riccati equation
$$\frac{dK(t)}{dt} = -C(t) - K(t)A(t) - A'(t)K(t) + K(t)B(t)\,\mathbf{D}^{-1}(t)\,B'(t)K(t), \tag{2.42}$$
determine the synthesis of the optimal control.

Proof. Let the control for the system (2.37) have the form (2.40), where the matrix $S(t)$ is unknown. Then the minimum value of the quality criterion (2.39) is
$$\min_{S(t)} I(t) = \frac{1}{2}\,X'(t)\,K(t)\,X(t) \equiv v(t, X(t)). \tag{2.43}$$
Under the assumption that the vector $X(t)$ is known, and using Pontryagin's maximum principle [1, 15], the minimum of the quality criterion (2.39) can be written as
$$\min_{S(t)} I(t) = \frac{1}{2}\,\Psi(t)\,X(t), \quad \tau \ge t, \tag{2.44}$$
where
$$\Psi(t) = \frac{D v(t,x)}{D x} = X'\,K(t) \tag{2.45}$$
is a row-vector. If we take the Hamiltonian function [15] of the form
$$H(t, x, U, \Psi) = \Psi\left( A(t)x + B(t)U \right) + \frac{1}{2}\left( x'\,C\,x + U'\,\mathbf{D}\,U \right), \qquad U = Sx, \tag{2.46}$$
the necessary condition for optimality is
$$\frac{\partial H}{\partial s_{kj}} = 0, \quad k = 1, 2, \ldots, l, \ j = 1, 2, \ldots, m, \tag{2.47}$$
where $s_{kj}$ are the elements of the matrix $S$. The matrix
$$\frac{dH}{dS} = \left( \frac{\partial H}{\partial s_{kj}} \right), \quad k = 1, 2, \ldots, l, \ j = 1, 2, \ldots, m, \tag{2.48}$$
is called the derivative of $H$ with respect to the matrix $S$.
Employing the scalar product of two matrices in our calculation, the Hamiltonian function (2.46) can be rewritten in the form
$$H = \Psi A(t)x + \frac{1}{2}\,x'\,C(t)\,x + \left( B'(t)\,\Psi'\,x' \right) * S + \frac{1}{2} \left( \mathbf{D}(t)\,S\,x\,x' \right) * S, \tag{2.49}$$
and its derivative with respect to the matrix $S$ is
$$\frac{dH}{dS} = B'(t)\,\Psi'\,x' + \mathbf{D}(t)\,S\,x\,x' = 0. \tag{2.50}$$
Because the equality (2.50) holds for any value of $x$, the control vector $U$ has the form
$$U = Sx = -\mathbf{D}^{-1}(t)\,B'(t)\,\Psi' = -\mathbf{D}^{-1}(t)\,B'(t)\,K(t)\,x, \tag{2.51}$$
which implies
$$Sx = -\mathbf{D}^{-1}(t)\,B'(t)\,\Psi'. \tag{2.52}$$
If we put this expression for the matrix $S$ into (2.49), we obtain a new expression for the Hamiltonian function,
$$H = \Psi A(t)x + \frac{1}{2}\,x'\,C(t)\,x - \frac{1}{2}\,\Psi\,B(t)\,\mathbf{D}^{-1}(t)\,B'(t)\,\Psi', \tag{2.53}$$
for which the canonical system of linear differential equations
$$\frac{dx}{dt} = \frac{DH}{D\Psi'}, \qquad \frac{d\Psi'}{dt} = -\frac{DH}{Dx} \tag{2.54}$$
has the form
$$\frac{dx}{dt} = A(t)x - B(t)\,\mathbf{D}^{-1}(t)\,B'(t)\,\Psi', \qquad \frac{d\Psi'}{dt} = -C(t)x - A'(t)\,\Psi'. \tag{2.55}$$
Finally, we define the matrix $K(t)$ by the integral manifold of the solutions of the system (2.55),
$$\Psi' = K(t)X(t). \tag{2.56}$$
If we differentiate (2.56) with respect to $t$, use the system (2.55), and extract the vector $\Psi'$, then we obtain the matrix differential equation (2.42). This equation is known in the literature as the Riccati equation; see, for example, [16, 17]. The solution $K_T(t)$ of (2.42) satisfying the condition
$$K_T(T) = 0, \quad T > 0, \tag{2.57}$$
determines the minimum of the functional,
$$\min_{S(\tau)} \frac{1}{2} \int_t^T \left[ X'(\tau)\,C(\tau)\,X(\tau) + U'(\tau)\,\mathbf{D}(\tau)\,U(\tau) \right] d\tau = \frac{1}{2}\,X'(t)\,K_T(t)\,X(t), \tag{2.58}$$
and $K(t)$ can be obtained as the limit of the sequence $\{K_T(t)\}_{T=1}^\infty$ of the successive approximations $K_T(t)$:
$$K(t) = \lim_{T \to \infty} K_T(t). \tag{2.59}$$
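The limit procedure (2.57)–(2.59) is easy to reproduce numerically in the scalar case. The following sketch is ours and all the numbers are invented: it integrates the scalar form of (2.42), $dK/dt = -C - 2AK + K^2B^2/\mathbf{D}$, backwards from $K_T(T) = 0$ with the explicit Euler method, and checks that $K_T(0)$ approaches the stabilizing root of the corresponding algebraic Riccati equation.

```python
import math

A, B, C, D = -1.0, 1.0, 1.0, 1.0     # invented scalar coefficients
T, h = 20.0, 1e-4                    # horizon and Euler step

K = 0.0                              # terminal condition K_T(T) = 0
for _ in range(int(T / h)):
    dK = -C - 2.0 * A * K + K * K * B * B / D
    K -= h * dK                      # step backwards from t to t - h

# stabilizing root of (B^2/D) K^2 - 2 A K - C = 0
K_inf = (A + math.sqrt(A * A + C * B * B / D)) * D / (B * B)
```

For these numbers $K_\infty = \sqrt{2} - 1 \approx 0.4142$, and because the horizon $T$ is long compared with the closed-loop time scale, the backward sweep has already converged to it at $t = 0$.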

Remark 2.5. Similar results can be obtained from the Bellman equation [18], where the function $v(t,x)$ satisfies
$$\min_{S(t)} \left\{ \frac{\partial v(t,x)}{\partial t} + \frac{D v(t,x)}{D x} \left[ A(t) + B(t)S(t) \right] x + \frac{1}{2}\,x'\,C(t)\,x + \frac{1}{2}\,x'\,S'(t)\,\mathbf{D}(t)\,S(t)\,x \right\} = 0. \tag{2.60}$$

3. The Main Result

Theorem 3.1. Let the coefficients of the control system (1.1) be semi-Markov functions and let the states of the system be defined by the equations
$$\frac{dX_k(t)}{dt} = G_k(t)X_k(t), \qquad G_k(t) \equiv A_k(t) + B_k(t)S_k(t), \quad k = 1, \ldots, n. \tag{3.1}$$
Then the set of optimal controls is a nonempty subset of the set of controls $U$, which is identical with the family of solutions of the system
$$U_s(t) = -L_s^{-1}(t)\,B_s'(t)\,R_s(t)\,X_s(t), \quad s = 1, \ldots, n, \tag{3.2}$$
where the matrices $R_s(t)$ are defined by the system of Riccati-type differential equations
$$\frac{dR_s(t)}{dt} = -Q_s(t) - A_s'(t)R_s(t) - R_s(t)A_s(t) + R_s(t)B_s(t)L_s^{-1}(t)B_s'(t)R_s(t) - \frac{\psi_s'(t)}{\psi_s(t)}R_s(t) - \sum_{k=1}^n \frac{q_{ks}(t)}{\psi_s(t)}\,C_{ks}'\,R_k(0)\,C_{ks}, \quad s = 1, \ldots, n. \tag{3.3}$$

3.1. The Proof of Main Result Using Lyapunov Functions

It should be recalled that the coefficients of the systems (1.1), (1.7) and of the functionals (1.4), (1.8) have the form
$$A(t,\xi(t)) = A_s(t - t_j), \quad B(t,\xi(t)) = B_s(t - t_j), \quad Q(t,\xi(t)) = Q_s(t - t_j), \quad L(t,\xi(t)) = L_s(t - t_j), \quad S(t,\xi(t)) = S_s(t - t_j), \tag{3.4}$$
if $t_j \le t < t_{j+1}$, $\xi(t) = \theta_s$. In addition, we have
$$G(t,\xi(t)) = G_s(t - t_j) \equiv A_s(t - t_j) + B_s(t - t_j)S_s(t - t_j), \qquad H(t,\xi(t)) = H_s(t - t_j) \equiv Q_s(t - t_j) + S_s'(t - t_j)L_s(t - t_j)S_s(t - t_j). \tag{3.5}$$
The formula
$$V = \sum_{k=1}^n C_k * D_k(0) = \sum_{k=1}^n \int_{E_m} v_k(x)\,f_k(0,x)\,dx \tag{3.6}$$
is useful for the calculation of the particular Lyapunov functions $v_k(x) \equiv x' C_k x$, $k = 1, \ldots, n$, of the functional (1.8). We get
$$v_k(x) \equiv x' C_k x = \left\langle \int_0^\infty X'(t)\,H(t,\xi(t))\,X(t)\,dt \;\middle|\; X(0) = x,\ \xi(0) = \theta_k \right\rangle, \quad k = 1, 2, \ldots, n, \tag{3.7}$$
or, in the more convenient form,
$$v_k(x) \equiv x' C_k x = \int_0^\infty \left[ X_k'(t) \left( \psi_k(t)Q_k(t) + \sum_{s=1}^n q_{sk}(t)\,C_{sk}'\,C_s\,C_{sk} \right) X_k(t) + U_k'(t)\,\psi_k(t)L_k(t)\,U_k(t) \right] dt, \quad k = 1, 2, \ldots, n. \tag{3.8}$$
Then the system (3.1) has the form
$$\frac{dX_k(t)}{dt} = A_k(t)X_k(t) + B_k(t)U_k(t), \qquad U_k(t) \equiv S_k(t)X_k(t), \quad k = 1, \ldots, n. \tag{3.9}$$
Let us assume that for the control system (1.1) the optimal control exists in the form (1.5), independent of the initial value $X(0)$. Regarding the formula (3.6), there exist minimal values of the particular Lyapunov functions $v_k(x)$, $k = 1, \ldots, n$, which are associated with the optimal control. This also follows from the fact that the functions $v_k(x)$, $k = 1, \ldots, n$, are particular values of the functional (3.6). Finding the minimal values $v_k(x)$, $k = 1, \ldots, n$, by choosing the optimal control $U_k(t)$ is a well-studied problem; for the main results see [16]. It is significant that all the matrices $C_s$, $s = 1, \ldots, n$, in the integrand of (3.8) are constant matrices; hence, in solving the optimization problem, they can be considered as matrices of parameters.

Therefore, the problem of finding the optimal control (1.5) for the system (1.1) can be transformed into $n$ problems of finding the optimal control for the deterministic systems (3.9), each of which is a system of linear differential equations of the type (2.37).

3.2. The Proof of the Main Result Using Lagrange Functions

In this part, we give another proof of Theorem 3.1, using the Lagrange function.

We are looking for the optimal control which attains the minimum of the quality criterion
$$x'\,C\,x = \int_0^T \left[ X'(t)\,Q(t)\,X(t) + U'(t)\,L(t)\,U(t) \right] dt. \tag{3.10}$$
Let us introduce the Lagrange function
$$I = \int_0^T \left[ X'(t)\,Q(t)\,X(t) + U'(t)\,L(t)\,U(t) + 2\,Y'(t) \left( A(t)X(t) + B(t)U(t) - \frac{dX(t)}{dt} \right) \right] dt, \tag{3.11}$$
where $Y(t)$ is the column-vector of Lagrange multipliers. In accordance with Pontryagin's maximum principle, we set the first variations of the functional with respect to $X$ and $Y$ equal to zero and obtain the system of linear differential equations
$$\frac{dX(t)}{dt} = A(t)X(t) - B(t)L^{-1}(t)B'(t)Y(t), \qquad \frac{dY(t)}{dt} = -Q(t)X(t) - A'(t)Y(t). \tag{3.12}$$
Then the optimal control $U(t)$ can be expressed as
$$U(t) = -L^{-1}(t)\,B'(t)\,Y(t), \qquad Y(T) = 0. \tag{3.13}$$
The synthesis of the optimal control requires finding the integral manifold of the solutions of the system (3.12) in the form
$$Y(t) = K(t)X(t), \qquad K(T) = 0. \tag{3.14}$$
According to the theory of integral manifolds [19], we construct the matrix differential equation of the Riccati type
$$\frac{dK(t)}{dt} = -Q(t) - A'(t)K(t) - K(t)A(t) + K(t)B(t)L^{-1}(t)B'(t)K(t) \tag{3.15}$$
for the matrix $K(t)$. Integrating it from the time $t = T$ to the time $t = 0$ and using the condition $K(T) = 0$, we obtain the optimal control
$$U(t) = -L^{-1}(t)\,B'(t)\,K(t)\,X(t). \tag{3.16}$$
We will prove that
$$\int_t^T \left[ X'(\tau)\,Q(\tau)\,X(\tau) + U'(\tau)\,L(\tau)\,U(\tau) \right] d\tau = X'(t)\,K(t)\,X(t). \tag{3.17}$$
Differentiating the equality (3.17) with respect to $t$, we obtain the matrix equation
$$-X'(t)\,Q(t)\,X(t) - U'(t)\,L(t)\,U(t) = X'(t)\,\frac{dK(t)}{dt}\,X(t) + X'(t)\,K(t)\left( A(t)X(t) + B(t)U(t) \right) + \left( X'(t)A'(t) + U'(t)B'(t) \right) K(t)\,X(t), \tag{3.18}$$
and, substituting the optimal control $U(t)$, we obtain the differential equation for $K(t)$ identical with (3.15).
The equality $K'(t) = K(t)$ follows from the symmetry of the positive definite matrices $Q(t)$, $L(t)$; moreover, from (3.17) we get $K(t) \ge 0$ for $t < T$, and from (3.10) it follows that $C = K(0)$. Applying the formulas (3.15), (3.16) to the systems (3.9) with the minimized functionals (3.8), the expression for the optimal control can be found in the form
$$U_s(t) = -\psi_s^{-1}(t)\,L_s^{-1}(t)\,B_s'(t)\,K_s(t)\,X_s(t), \quad s = 1, 2, \ldots, n, \tag{3.19}$$
where the symmetric matrices $K_s(t)$ satisfy the matrix system of differential equations
$$\frac{dK_s(t)}{dt} = -\psi_s(t)Q_s(t) - A_s'(t)K_s(t) - K_s(t)A_s(t) - \sum_{k=1}^n q_{ks}(t)\,C_{ks}'\,C_k\,C_{ks} + K_s(t)B_s(t)\,\psi_s^{-1}(t)L_s^{-1}(t)\,B_s'(t)K_s(t), \quad s = 1, 2, \ldots, n. \tag{3.20}$$
The systems (3.9), (3.20) give the necessary condition for the control (1.5) of the system (1.1) to be optimal. In addition, the system (3.19) defines the matrices $S_k(t)$, $k = 1, 2, \ldots, n$, of the optimal control in the form
$$S_k(t) = -\psi_k^{-1}(t)\,L_k^{-1}(t)\,B_k'(t)\,K_k(t), \quad k = 1, 2, \ldots, n. \tag{3.21}$$
We define the matrices $C_s$ from the system of equations (3.20) in view of
$$C_s = K_s(0), \quad s = 1, 2, \ldots, n. \tag{3.22}$$
The substitution
$$R_s(t) = \psi_s^{-1}(t)\,K_s(t), \qquad \psi_s(0) = 1, \qquad C_s = R_s(0), \quad s = 1, 2, \ldots, n, \tag{3.23}$$
makes the system (3.20) simpler: it takes the form (3.3), and the formula (3.2) then defines the optimal control.
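The identity (3.17) can be verified numerically in the scalar deterministic case: integrate (3.15) backwards from $K(T) = 0$, simulate the closed loop under the control (3.16), and compare the accumulated cost with $X'(0)K(0)X(0)$. The sketch below is ours and the parameter values are invented.

```python
A, B, Q, L = -1.0, 1.0, 1.0, 1.0     # invented scalar coefficients
T, h = 5.0, 1e-4
steps = int(T / h)

# backward sweep for K(t) on the grid t = i*h, from K(T) = 0
K = [0.0] * (steps + 1)
for i in range(steps, 0, -1):
    dK = -Q - 2.0 * A * K[i] + K[i] ** 2 * B * B / L
    K[i - 1] = K[i] - h * dK

# forward simulation of dX/dt = A X + B U with U = -(B/L) K(t) X
x, cost = 1.0, 0.0
for i in range(steps):
    u = -(B / L) * K[i] * x
    cost += h * (Q * x * x + L * u * u)
    x += h * (A * x + B * u)

predicted = K[0] * 1.0 * 1.0          # X'(0) K(0) X(0) with X(0) = 1
```

Up to the $O(h)$ error of the Euler discretization, the accumulated cost coincides with the value $X'(0)K(0)X(0)$ predicted by (3.17).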

Remark 3.2. If the control system (1.1) is deterministic, then $q_{ks}(t) \equiv 0$, $\psi_s(t) \equiv 1$, $k, s = 1, 2, \ldots, n$, and the system (3.3) is identical with the system of Riccati-type equations (3.15).

4. Particular Cases

The optimal control $U(t)$ for the system (1.1) has some special properties, and the equations determining it differ from those given in the previous section, when the coefficients of the control system (1.1) have special properties, when the intensities $q_{sk}(t)$ satisfy certain relations, or when some other special conditions hold. Some of these cases will be formulated as corollaries.

Corollary 4.1. Let the control system (1.1) with piecewise constant coefficients have the form
$$\frac{dX(t)}{dt} = A(\xi(t))X(t) + B(\xi(t))U(t). \tag{4.1}$$
Then the quadratic functional
$$V = \left\langle \int_0^\infty \left[ X'(t)\,Q(\xi(t))\,X(t) + U'(t)\,L(\xi(t))\,U(t) \right] dt \right\rangle \tag{4.2}$$
determines the optimal control in the form
$$U(t) = S(t,\xi(t))\,X(t), \tag{4.3}$$
where
$$S(t,\xi(t)) = S_k(t - t_j), \tag{4.4}$$
and the matrices $S_k(t)$ satisfy the equations
$$S_k(t) = -L_k^{-1}\,B_k'\,R_k(t), \quad k = 1, 2, \ldots, n, \tag{4.5}$$
if $t_j \le t < t_{j+1}$, $\xi(t) = \theta_k$.
The matrices $R_k(t)$, $k = 1, 2, \ldots, n$, are the solutions of the system of Riccati-type equations
$$\frac{dR_k(t)}{dt} = -Q_k - A_k'R_k(t) - R_k(t)A_k + R_k(t)B_kL_k^{-1}B_k'R_k(t) - \frac{\psi_k'(t)}{\psi_k(t)}R_k(t) - \sum_{s=1}^n \frac{q_{sk}(t)}{\psi_k(t)}\,C_{sk}'\,R_s(0)\,C_{sk}, \quad k = 1, \ldots, n. \tag{4.6}$$

Remark 4.2. In the corollary we assume piecewise constant coefficients of the control system (4.1). The coefficients of the functional (4.2) are then piecewise constant as well, but the optimal control is nonstationary.

Corollary 4.3. Assume that
$$\frac{\psi_k'(t)}{\psi_k(t)} = \mathrm{const}, \qquad \frac{q_{sk}(t)}{\psi_k(t)} = \mathrm{const}, \quad k, s = 1, 2, \ldots, n. \tag{4.7}$$
Then the optimal control $U(t)$ is piecewise constant.

Taking into consideration that the optimal control is piecewise constant, we find that the matrices $R_k(t)$, $k = 1, 2, \ldots, n$, in (4.5) are constant, which changes the form of the system (4.6) to
$$Q_k + A_k'R_k + R_kA_k - R_kB_kL_k^{-1}B_k'R_k + \frac{\psi_k'(t)}{\psi_k(t)}R_k + \sum_{s=1}^n \frac{q_{sk}(t)}{\psi_k(t)}\,C_{sk}'\,R_s\,C_{sk} = 0, \quad k = 1, \ldots, n. \tag{4.8}$$
The system (4.8) has constant solutions $R_k$, $k = 1, 2, \ldots, n$, if the conditions (4.7) hold. Moreover, if the random process $\xi(t)$ is a Markov process, then the conditions (4.7) have the form
$$\frac{\psi_k'(t)}{\psi_k(t)} = a_{kk} = \mathrm{const}, \qquad \frac{q_{sk}(t)}{\psi_k(t)} = a_{sk} = \mathrm{const}, \quad k, s = 1, 2, \ldots, n, \ k \ne s, \tag{4.9}$$
and the system (4.8) transforms to the form
$$Q_k + A_k'R_k + R_kA_k - R_kB_kL_k^{-1}B_k'R_k + \sum_{s=1}^n a_{sk}\,C_{sk}'\,R_s\,C_{sk} = 0, \quad k = 1, \ldots, n, \tag{4.10}$$
for which the optimal control is
$$U(t) = S(\xi(t))X(t), \qquad S(\theta_k) \equiv S_k, \qquad S_k = -L_k^{-1}\,B_k'\,R_k, \quad k = 1, 2, \ldots, n. \tag{4.11}$$
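A sketch of how the coupled system (4.10) can be solved in a simple case: two states, scalar coefficients, $C_{sk} = 1$, solved by repeatedly taking the positive root of each equation with the other unknown frozen. This example is ours; all parameter values are invented, and the array `a[s][k]` plays the role of the Markov intensities $a_{sk}$.

```python
import math

# scalar version of (4.10) with C_{kk} = 1:
#   Q_k + 2*A_k*R_k - (B_k^2/L_k)*R_k^2 + sum_s a_{sk} C_{sk}^2 R_s = 0
A = [-1.0, -2.0]
Bc = [1.0, 1.0]
Q = [1.0, 1.0]
L = [1.0, 1.0]
a = [[-1.0, 1.0],          # a[s][k]: intensity of jumps from state k to state s
     [1.0, -1.0]]
Csk = [[1.0, 1.0], [1.0, 1.0]]

R = [0.0, 0.0]
for _ in range(200):
    for k in range(2):
        # terms not containing R_k
        const = Q[k] + sum(a[s][k] * Csk[s][k] ** 2 * R[s]
                           for s in range(2) if s != k)
        b_lin = 2.0 * A[k] + a[k][k] * Csk[k][k] ** 2   # coefficient of R_k
        q = Bc[k] ** 2 / L[k]                           # coefficient of R_k^2
        # q*R^2 - b_lin*R - const = 0, positive root
        R[k] = (b_lin + math.sqrt(b_lin ** 2 + 4.0 * q * const)) / (2.0 * q)
```

At the fixed point both equations of (4.10) hold simultaneously, and the constant gains $S_k = -L_k^{-1}B_k'R_k$ of (4.11) follow directly.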

Corollary 4.4. Let the sojourn time in the state $\theta_s$ of the semi-Markov process $\xi(t)$ be no longer than $T_s > 0$. Then the system (3.8) has the form
$$v_k(x) \equiv x' C_k x = \int_0^{T_k} \left[ X_k'(t) \left( \psi_k(t)Q_k(t) + \sum_{s=1}^n q_{sk}(t)\,C_{sk}'\,C_s\,C_{sk} \right) X_k(t) + U_k'(t)\,\psi_k(t)L_k(t)\,U_k(t) \right] dt, \quad k = 1, 2, \ldots, n. \tag{4.12}$$
Because
$$K_s(t) = \psi_s(t)\,R_s(t), \quad s = 1, 2, \ldots, n, \tag{4.13}$$
and $\psi_s(T_s) = 0$, we have
$$K_s(T_s) = 0, \quad s = 1, 2, \ldots, n. \tag{4.14}$$

In this case, the search for the matrices $K_s(t)$, $s = 1, 2, \ldots, n$, in concrete tasks is reduced to the integration of the matrix system of differential equations (3.20) on the interval $[0, T_s]$ with the conditions (4.14). In view of $\psi_s(T_s) = 0$, $s = 1, 2, \ldots, n$, we can expect that every equation (3.3) has a singular point $t = T_s$. If $\psi_s(t)$ has a simple zero at the point $t = T_s$, then the system (4.6) yields the necessary condition
$$\psi_s'(T_s)\,R_s(T_s) + \sum_{k=1}^n q_{ks}(T_s)\,C_{ks}'\,R_k(0)\,C_{ks} = 0, \quad s = 1, \ldots, n, \tag{4.15}$$
for the boundedness of the matrices $R_s(t)$ at the singular points.

Acknowledgments

The authors would like to thank the following for their support: the Slovak Research and Development Agency (project APVV-0700-07), the Grant Agency of the Slovak Republic (VEGA 1/0090/09), and the National Scholarship Programme of the Slovak Republic (SAIA).

References

  1. V. M. Artemiev and I. E. Kazakov, Handbook on the Theory of Automatic Control, Nauka, Moscow, Russia, 1987.
  2. K. G. Valeev and I. A. Dzhalladova, Optimization of Random Processes, KNEU, Kyiv, Ukraine, 2006.
  3. I. I. Gihman and A. V. Skorohod, Controlled Stochastic Processes, Naukova Dumka, Kyiv, Ukraine, 1977.
  4. I. A. Dzhalladova, Optimization of Stochastic Systems, KNEU, Kyiv, Ukraine, 2005.
  5. V. K. Jasinskiy and E. V. Jasinskiy, Problems of Stability and Stabilization of Dynamic Systems with Finite Aftereffect, TVIMS, Kyiv, Ukraine, 2005.
  6. K. J. Astrom, Introduction to Stochastic Control Theory, vol. 70 of Mathematics in Science and Engineering, Academic Press, New York, NY, USA, 1970.
  7. R. Balaji, Introduction to Stochastic Finance, University of Connecticut, Academic Press, New York, NY, USA, 1997.
  8. J. K. Hale and S. M. Verduyn Lunel, Introduction to Functional-Differential Equations, vol. 99 of Applied Mathematical Sciences, Springer, New York, NY, USA, 1993.
  9. O. Hájek, Control Theory in the Plane, vol. 153 of Lecture Notes in Control and Information Sciences, Springer, Berlin, Germany, 1991.
  10. X. Liao and P. Yu, Absolute Stability of Nonlinear Control Systems, vol. 25 of Mathematical Modelling: Theory and Applications, Springer, New York, NY, USA, 2nd edition, 2008.
  11. G. D. Qushner, Stochastic Stability and Control, Mir, Moscow, Russia, 1969.
  12. L. Glass and M. C. Mackey, From Clocks to Chaos: The Rhythms of Life, Princeton University Press, Princeton, NJ, USA, 1988.
  13. K. G. Valeev and O. L. Strijak, Methods of Moment Equations, AN USSR, Kyiv, Ukraine, 1985.
  14. K. G. Valeev, O. L. Karelova, and V. I. Gorelov, Optimization of a System of Linear Differential Equations with Random Coefficients, RUDN, Moscow, Russia, 1996.
  15. H. Kwakernaak and R. Sivan, Linear Optimal Control Systems, John Wiley & Sons, New York, NY, USA, 1972.
  16. K. G. Valeev and G. S. Finin, Construction of Lyapunov Functions, Naukova Dumka, Kyiv, Ukraine, 1981.
  17. E. A. Barbashyn, Lyapunov Functions, Nauka, Moscow, Russia, 1970.
  18. R. Bellman, Introduction to Matrix Analysis, McGraw-Hill, New York, NY, USA, 1960.
  19. K. G. Valeev and O. A. Jautykov, Infinite Systems of Differential Equations, Nauka, Alma-Ata, Kazakhstan, 1974.