Abstract
In many applications, and generally speaking in many dynamical differential systems, the problem of transferring the initial state of the system to a desired state in (almost) zero time is desirable but difficult to achieve. Theoretically, this can be achieved by using a linear combination of the Dirac δ-function and its derivatives. Obviously, such an input is physically unrealizable. However, we can approximate it by a combination of pulses of very large magnitude and very short duration. In this paper, the approximation process of the distributional behaviour of higher-order linear descriptor (regular) differential systems is presented. Thus, new analytical formulae based on linear algebra methods and generalized inverses theory are provided. Our approach is quite general, and some significant conditions are derived. Finally, a numerical example is presented and discussed.
1. Introduction
In many important applications across several fields of research, see for instance [1, 2], taking a given state of a linear system to a desired state in minimum time is very desirable, though it is a challenging problem in control and systems theory.
Significant attention has been given to this problem in the case of linear systems; see [1–3]. Recently, Kalogeropoulos et al. [4] have further enriched these first approaches, as they have relaxed some of the rather restrictive assumptions considered in [1, 2]. Afterwards, the method has also been applied to the more general class of linear descriptor (regular) systems; see [5].
In this paper, a further extension of [5], to the class of linear descriptor (regular) equations, is provided. Compared with the existing literature, see [1–5], we solve this problem
(1) for higher-order linear descriptor (regular) differential systems (compare with [5]);
(2) using higher-order consistent initial conditions (compare with [1–5]);
(3) obtaining more analytical formulas, that is, see Appendices A and B, Theorems 20 and 23 (compare with [1–5]);
(4) without using the controllability matrix (compare with [3]);
(5) applying analytical methods for the exact determination of the generalized inverses of the Vandermonde matrix and of matrices related to it; see [6].
To summarize, in this paper, we investigate how we can transfer the initial state of an open-loop, linear higher-order descriptor (regular) differential system in (practically speaking, almost) zero time, that is,
with known initial conditions
where , and (i.e., is the algebra of matrices with elements in the field ) with (is the zero element of ), and . For the sake of simplicity, we set in the sequel and .
In order to solve this problem, the appropriate input vector has to be made up as a linear combination of the Dirac δ-function and its derivatives; see [1, 2], and for more details consult [7]. Obviously, such an input is very hard to realize physically. However, we can approximate it by a combination of pulses of very large magnitude and very short duration.
Linear descriptor (singular or regular) differential systems have been extensively used in control theory; see for instance [8–10], and for more details [11].
A brief outline of the paper is as follows. Section 2 provides the incentives and the typical modelling features of the problem. Moreover, a classical approximate expression for the controller, that is, a linear combination of the Dirac δ-function and its derivatives based on the normal (Gaussian) probability density function, is used. Then, the need to determine the unknown coefficients is derived. Section 3 is divided into four extensive subsections. In Section 3.1, the reduction of the higher-order system to first order is discussed. The first-order descriptor (regular) system is divided into a fast and a slow subsystem, using the Weierstrass canonical form. Section 3.2 investigates and presents some analytical formulas based on the slow subsystem. In Section 3.3, the theory of the -generalized inverses is used. Finally, some significant conditions for the solution of the slow subsystem are presented in Section 3.4. In Section 4, a necessary condition based on the fast subsystem is discussed and obtained. Section 5 provides an interesting numerical application from physics, and Section 6 concludes the paper. Two appendices with the analytical calculations of two important integrals are also available.
2. Preliminary Results—Matrix Pencil Framework
In this section, some preliminary results from matrix pencil and system theory are briefly presented. First, we assume that for an -order linear differential system, see (1), the input can be a linear combination of the Dirac δ-function and its first -derivatives as follows:
where or is the -derivative of the Dirac δ-function, and for are the magnitudes of the delta function and its derivatives. Furthermore, we assume that the state of the system at time is
and at time , it achieves
With the following definitions, a brief presentation of the most important elements of matrix pencil theory is given.
Definition 1. Given F, G ∈ ℳ(m × n; ℂ) and an indeterminate s ∈ ℂ, the matrix pencil sF − G is called regular when m = n and det(sF − G) ≠ 0. In any other case, the pencil will be called singular.
Definition 2. The pencil sF1 − G1 is said to be strictly equivalent to the pencil sF2 − G2 if and only if there exist nonsingular matrices P and Q such that P(sF1 − G1)Q = sF2 − G2.
In this paper, we consider the case where the pencil is regular. Thus, the strict equivalence relation can be defined rigorously on the set of regular pencils as follows.
This is the set of elementary divisors (e.d.s) obtained by factorizing the invariant polynomials into powers of homogeneous polynomials irreducible over the field .
In the case where the pencil is regular, we have e.d.s of the following types:
(i) e.d.s of the type s^p are called zero finite elementary divisors (z.f.e.d.);
(ii) e.d.s of the type (s − a)^π, a ≠ 0, are called nonzero finite elementary divisors (nz.f.e.d.);
(iii) e.d.s of the type ŝ^q are called infinite elementary divisors (i.e.d.).
Let B1, B2, …, Bℓ be elements of ℳ(n × n; ℂ). The direct sum of them, denoted by B1 ⊕ B2 ⊕ ⋯ ⊕ Bℓ, is the block-diagonal matrix blockdiag{B1, B2, …, Bℓ}.
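The direct sum just mentioned can be formed numerically as a block-diagonal matrix; a minimal sketch using scipy, with block contents invented purely for illustration:

```python
import numpy as np
from scipy.linalg import block_diag

# Direct sum B1 (+) B2 (+) B3 of square blocks: a block-diagonal matrix whose
# dimension is the sum of the dimensions of the blocks.
B1 = np.array([[2.0]])                       # 1x1 block
B2 = np.array([[3.0, 1.0], [0.0, 3.0]])     # 2x2 Jordan-type block
B3 = np.eye(2, k=1)                          # 2x2 nilpotent block
D = block_diag(B1, B2, B3)
print(D.shape)  # (5, 5): the blocks sit on the diagonal, zeros elsewhere
```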
Then, the complex Weierstrass form of the regular pencil is defined by
Now, the Jordan type element, that is, , is uniquely defined by the set of f.e.d.
of and has the form
where
The blocks of the second type, see (7), are uniquely defined by the set of i.e.d.
of and has the form
where
Furthermore, is a nilpotent element of with index , where and are defined as
Moreover, for the matrices and , we have the parameterization
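The nilpotent block mentioned above, and its index of annihilation, can be verified numerically; a minimal sketch (the block size q = 4 is chosen only for illustration):

```python
import numpy as np

def nilpotent_block(q):
    """q x q nilpotent Jordan block: ones on the superdiagonal, zeros
    elsewhere; its q-th power vanishes while its (q-1)-th power does not."""
    return np.eye(q, k=1)

H = nilpotent_block(4)
print(np.linalg.matrix_power(H, 3))  # still nonzero (a single 1 in the corner)
print(np.linalg.matrix_power(H, 4))  # the zero matrix: index of nilpotency 4
```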
Since the state which we wish to reach is specified, we need to determine the unknown coefficients , for .
In practice, we cannot create an exact impulse function nor its derivatives. However, if we use one of the approximations of the Dirac δ-function, we will be able to change the state in some minimum practical time, depending mainly upon how well we generate the approximations. Let the Dirac δ-function be viewed as the limit of a sequence of functions
where is called a nascent delta function. This limit is in the sense that
Some well-known nascent delta functions that are very useful in applications are the normal and Cauchy distributions, the rectangular function, the derivative of the sigmoid (or Fermi–Dirac) function, the Airy function, and so forth; see for instance [2, 5, 12–17]. The results given below are based on the normal function. Thus, by taking into consideration expression (18) and the normal (Gaussian) probability density distribution, we obtain
where .
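The Gaussian nascent delta can be checked numerically: as the width parameter shrinks, the integral of the nascent delta against a test function approaches the value of that function at zero (the sifting property). A minimal sketch, with the grid and the test function cos chosen purely for illustration:

```python
import numpy as np

def nascent_delta(t, eps):
    """Gaussian (normal) nascent delta function of width eps."""
    return np.exp(-t**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

# Sifting property: the integral of delta_eps(t) * f(t) tends to f(0) = 1
# as eps -> 0 (computed here as a Riemann sum on a fine uniform grid).
dt = 1e-5
t = np.arange(-1.0, 1.0, dt)
for eps in (0.1, 0.01):
    print(eps, np.sum(nascent_delta(t, eps) * np.cos(t)) * dt)
```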
So, the approximate expression for the controller (2) is given by
Then, we take the limit
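Since the controller combines the nascent delta with its derivatives, those derivatives are needed explicitly. For the Gaussian choice they admit a closed form through the standard identity for probabilists' Hermite polynomials, d^k/dx^k e^{-x^2/2} = (-1)^k He_k(x) e^{-x^2/2}; this identity is a textbook fact, not taken from the paper, and the sketch below only illustrates it:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def delta_eps_deriv(t, eps, k):
    """k-th derivative of the Gaussian nascent delta of width eps, written
    via probabilists' Hermite polynomials He_k (hermite_e basis in numpy)."""
    x = t / eps
    c = [0.0] * k + [1.0]            # coefficient vector selecting He_k
    return ((-1) ** k * hermeval(x, c) * np.exp(-x**2 / 2)
            / (eps ** (k + 1) * np.sqrt(2 * np.pi)))

# Cross-check the analytic first derivative against numerical differentiation.
t = np.linspace(-0.5, 0.5, 10001)
num = np.gradient(delta_eps_deriv(t, 0.05, 0), t)  # numerical d/dt
ana = delta_eps_deriv(t, 0.05, 1)                  # analytic d/dt
print(np.max(np.abs(num - ana)))                   # small discretization error
```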
In the next section, the main results are presented.
3. The Main Results
3.1. Order Reduction of System (1)
The section begins with the following important lemma.
Lemma 3. System (1) is divided into the following two subsystems:
Proof. Consider the transformation
Substituting the previous expression into (1) and considering also (3), we obtain
Multiplying by , we arrive at
Now, we denote
where .
Taking into account the following expressions, that is,
we arrive easily at (23) and (24).
System (23) is the standard form of nonhomogeneous higher-order linear differential equations of Apostol–Kolodner type, which may be treated by more classical methods; see for instance [18] and references therein.
Thus, it is convenient to define new variables as
Then, we have the following system of ordinary differential equations:
Finally, (31) can be expressed by using vector-matrix equations
where ( is the transpose operator) and the coefficient matrices and are given by
with corresponding dimension of , .
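The reduction described above, stacking the state and its derivatives into one vector so that a higher-order equation becomes a first-order system, can be sketched in the scalar case (the paper's coefficients are matrix blocks; scalars and the harmonic oscillator are used here only for illustration):

```python
import numpy as np

def companion_first_order(coeffs):
    """Reduce x^(r) = a_{r-1} x^(r-1) + ... + a_0 x to z' = A z with
    z = (x, x', ..., x^(r-1))^T.  coeffs = [a_0, ..., a_{r-1}]."""
    r = len(coeffs)
    A = np.eye(r, k=1)       # shift structure: z_i' = z_{i+1}
    A[-1, :] = coeffs        # last row carries the ODE coefficients
    return A

# Example: x'' = -x (harmonic oscillator) has eigenvalues +/- i.
A = companion_first_order([-1.0, 0.0])
print(np.sort_complex(np.linalg.eigvals(A)))
```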
The state of system (1) at time is and at time it achieves .
Now, considering (25), at time we have
and at time we obtain
Moreover, where for , and .
Furthermore,
3.2. The Solution of Subsystem (32)
In order to solve subsystem (32), the following definitions should be provided.
Definition 4. The characteristic polynomial of the matrix is given by with for and . Without loss of generality, we assume that where are the geometric and algebraic multiplicities of the given eigenvalue , respectively.
Generally speaking, the matrix is not diagonalizable. However, we can generate linearly independent vectors and a similarity transformation that takes into its Jordan canonical form, as the following definition clarifies.
Definition 5. There exists an invertible matrix such that , ; is the Jordan canonical form of the matrix . Analytically,
(i) the block diagonal matrix , where is also a diagonal matrix whose diagonal elements equal the eigenvalue , for . Consequently, the dimension of is .
(ii) also, the block matrix , where
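The similarity transformation to Jordan canonical form can be computed symbolically; a minimal sketch with sympy, using a small non-diagonalizable matrix invented for illustration (sympy's convention returns Q and J with A = Q J Q^{-1}):

```python
import sympy as sp

# A has the double eigenvalue 2 but only one independent eigenvector,
# so its Jordan form contains a single 2x2 Jordan block.
A = sp.Matrix([[1, 1],
               [-1, 3]])
Q, J = A.jordan_form()
print(J)   # expected: [[2, 1], [0, 2]]
```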
According to the classical theory of ordinary differential equations, the solution of system (32) is given by the following lemma.
Lemma 6. The solution of subsystem (32) is given by with initial condition .
Proof. Consider the transformation
where nonsingular , and .
Substituting (44) into (43), we obtain
Furthermore, we define , such as the last equation is transformed into
Now, according to the relevant theory of first-order differential systems of the form (46), see for instance [16], and using also (44), the solution is expressed by (43).
Definition 7. The exponential matrix is defined as
where
for .
Furthermore,
where
where are the Weyr characteristics via Ferrers diagrams, for and . Note that is the index of annihilation for the eigenvalue .
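For a single Jordan block J = λI + N, the exponential matrix reduces to a finite series, since λI and the nilpotent part N commute and N^q = 0. A minimal numerical sketch of this standard fact, cross-checked against scipy's general matrix exponential (eigenvalue and block size invented for illustration):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def expm_jordan_block(lam, q, t):
    """exp(t * (lam*I + N)) for a q x q Jordan block: e^{lam t} times the
    finite series sum_{k<q} (t N)^k / k!, N the superdiagonal nilpotent."""
    N = np.eye(q, k=1)
    S = sum(np.linalg.matrix_power(t * N, k) / factorial(k) for k in range(q))
    return np.exp(lam * t) * S

J = 2.0 * np.eye(3) + np.eye(3, k=1)
print(np.max(np.abs(expm_jordan_block(2.0, 3, 0.5) - expm(0.5 * J))))  # ~0
```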
Now, taking into consideration (21), the solution (22) is transformed into
or equivalently
where .
Remark 1. [4, 5] Given the large number of terms involved, in order to keep our calculations tractable, we use the fact that and its derivatives tend to zero very rapidly with (note also that ). Thus, by letting , where is chosen large enough (i.e., ), the assumption stated above is valid; that is, and its derivatives , for .
Now, using Remark 1, we obtain that , for and , as well. Moreover, for the analytical determination of the vector (3), we need to calculate the integral .
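The truncation argument of Remark 1 can be illustrated numerically: at five widths away from the origin, the Gaussian nascent delta has already decayed by a factor exp(-25/2) relative to its peak, so the discarded tail is negligible (the width 0.01 below is invented for illustration):

```python
import numpy as np

# Ratio of the Gaussian nascent delta at t = 5*eps to its peak value at t = 0.
eps = 0.01
peak = 1.0 / (eps * np.sqrt(2 * np.pi))
tail = np.exp(-(5 * eps) ** 2 / (2 * eps ** 2)) * peak
print(tail / peak)  # exp(-12.5), about 3.7e-6
```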
In this part of the paper, two subsystems are derived; see the following lemmas, and Appendices A and B.
Lemma 8. For the diagonal matrix , the following integral is given by (54):
Proof. See Appendix A.
Lemma 9. For the diagonal matrix , the following integral is given by (55): where
Proof. See Appendix B.
Now, we revisit (54), and thus obtain the following. Note that is the well-known Vandermonde matrix.
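The Vandermonde matrix and its generalized inverse can be formed numerically; the paper relies on the analytic formulae of [6], so numpy's Moore-Penrose pseudoinverse below is only a numerical stand-in, on nodes invented for illustration:

```python
import numpy as np

# Vandermonde matrix on distinct nodes lam_i and its Moore-Penrose
# generalized inverse; for distinct nodes the matrix is invertible,
# so the pseudoinverse coincides with the ordinary inverse.
lam = np.array([1.0, 2.0, 3.0])
V = np.vander(lam, increasing=True)   # rows (1, lam_i, lam_i**2)
V_pinv = np.linalg.pinv(V)
print(np.max(np.abs(V_pinv @ V - np.eye(3))))  # ~0
```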
Combining (52)–(57), we obtain the following system:
where and with and . Furthermore, we obtain the initial condition
The system (58) can now be divided into two types of subsystems:
(S1)
(S2) for and .
Proposition 10. System (60) is solvable if
for every nonzero element of vectors , where , for every .
Moreover, if one of the elements of the vectors is zero, then the corresponding element of the th row of the vector should also be zero.
Proof. System (60) contains -subsystems of the following type:
or equivalently
(i) If one of the coefficients , for 1, 2, …, , is zero, then the corresponding element of the row of the vector should also be zero (in order to obtain a solution).
(ii) If every one of the coefficients , for 1, 2, …, , is nonzero, then we have
for every .
Consequently, system (60) is solvable if (62) is satisfied for every .
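The solvability condition just derived is the proportionality form of the general consistency test for linear systems (rank of the coefficient matrix equals rank of the augmented matrix). A minimal numerical sketch, with matrix entries invented for illustration:

```python
import numpy as np

def consistent(A, b, tol=1e-10):
    """Rouche-Capelli test: A x = b is solvable iff rank(A) == rank([A | b])."""
    return (np.linalg.matrix_rank(A, tol=tol)
            == np.linalg.matrix_rank(np.column_stack([A, b]), tol=tol))

# Rank-one system with proportional rows, as in the subsystems of (60):
# solvable only when the right-hand side respects the same proportions.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(consistent(A, np.array([1.0, 2.0])))   # proportional rhs: solvable
print(consistent(A, np.array([1.0, 3.0])))   # non-proportional rhs: unsolvable
```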
Now, we will work with the subsystems (61), which can be written as follows:
The coefficient matrix can be transformed into the following equivalent matrix; see (67). Note that is a matrix. Moreover, from the elements , for , we can assume that . Thus, we obtain
In order to understand (67) better, the following example is helpful.
Example 1. We take the coefficient matrix
We assume that and . Since , we are not interested in , .
Thus, under our assumption, we obtain
Afterwards, we multiply the 3rd row by and add it to the 2nd row. Moreover, we multiply the 3rd row by and add it to the 1st row. Then, the following equivalent matrix is derived:
Now, we multiply the 2nd row by and add it to the 1st row. Thus, we obtain
Finally, we multiply the matrix by the element and obtain
Remark 2. Considering the results already presented, it is clear that there exists a nonsingular matrix , such that Thus, system (61) can be transformed into where
Proposition 11. The subsystems (74) are solvable when the elements of the vectors are included in the vector with the greatest dimension, that is, where Equivalently, if one assumes that then each of the following vectors , , …, should be vectors of the type with .
Proof. We consider the subsystems as follows:
The matrix has rows, has rows,
has rows.
Without loss of generality, we assume that .
Looking carefully at the type of matrices
we can easily verify that their first rows are identically the same. Thus, it is necessary that the corresponding first rows of be also identically the same. Analogously, the first rows of , , should be identically the same as the corresponding first rows of , and so on until the row.
Consequently, it is time to use the results of Propositions 10 and 11. Thus, system (81) is solvable if the following hold.
(i) The first nonzero elements of the coefficient vector (practically speaking, without loss of generality, we assume that the first elements are nonzero).
(ii) The matrix with dimension , where .
(iii) The matrix with dimension , where .
(iv) The matrix with dimension , where .
Consequently, system (81) is transformed into the solvable system (83)
or equivalently,
where , since is nonsingular.
Remark 3. The matrices given by (67) can, after some row transformations, be transformed into the following: The matrix (84) is denoted by Thus, there is a nonsingular matrix