#### Abstract

Linear descriptor (singular) matrix differential equations of higher order with time-invariant coefficients and (non)consistent initial conditions arise in several interesting applications in control and system theory. In this paper, we study the solution properties of a more general class of Apostol-Kolodner-type equations with consistent and nonconsistent initial conditions.

#### 1. Introduction

Linear Time-Invariant (LTI) (i.e., with constant matrix coefficients) descriptor matrix differential systems of type (1.1) with several kinds of inputs

where , , and , often appear in control and system theory. Systems of the form (1.1) effectively model many physical, engineering, mechanical, and financial phenomena. In economics, for instance, we mention the well-known input-output *Leontief* model and its several important extensions; see [1, 2]. Moreover, at the beginning of this introductory section, we should point out that singular perturbations often arise in systems whose dynamics have sufficiently separated slow and fast parts. Now, by considering the classical proportional feedback controller

we can obtain (1.3), where .

Our long-term purpose is to study the solution of LTI descriptor matrix differential systems of higher order (1.1) into the mainstream of matrix pencil theory, that is,

where, for (1.1), (1.2), and (1.3), is the order of the systems, (where matrix is singular), and (note that can be either or ). For the sake of simplicity, we set in the sequel and .

Matrix pencil theory has been extensively used for the study of LTI descriptor differential equations of first order; see, for instance, [3–6]. Systems of type (1.3) are more general, including the special case when , where is the identity matrix of , since the well-known class of *higher order linear matrix differential equations of Apostol type* is derived straightforwardly; see [7–10]. In the same way, system (1.1) may be considered as the more general class of *higher order linear descriptor matrix differential equations of Apostol-Kolodner type*, since Kolodner has also studied such systems in nondescriptor form; see also [8].
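In the nondescriptor Apostol-type special case (leading coefficient the identity), the higher order equation reduces to a first-order companion system that the matrix exponential solves. The following is a minimal numerical sketch for a second-order instance; the coefficient matrix `A` and the initial data are arbitrary illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.linalg import expm

# Nondescriptor Apostol-type special case: x''(t) = A x(t).
# Companion reduction: z = (x, x'), so that z' = C z.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # illustrative coefficient matrix
n = A.shape[0]
C = np.block([[np.zeros((n, n)), np.eye(n)],
              [A, np.zeros((n, n))]])

x0 = np.array([1.0, 0.0])           # x(0), illustrative
v0 = np.array([0.0, 1.0])           # x'(0), illustrative
z0 = np.concatenate([x0, v0])

def x_of_t(t):
    """x(t) is the first block of expm(t C) z0."""
    return (expm(t * C) @ z0)[:n]

# Sanity check: x'' = A x, via a central second difference.
t, h = 0.7, 1e-4
xdd = (x_of_t(t + h) - 2 * x_of_t(t) + x_of_t(t - h)) / h**2
assert np.allclose(xdd, A @ x_of_t(t), atol=1e-5)
```

The same reduction applies to any order: an order-r equation yields an r-block companion matrix.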

Recently, in [5], the *regular* case of higher order linear descriptor matrix differential equations of Apostol-Kolodner type was investigated. The regular case is simpler, since it considers square matrix coefficients, and the Weierstrass canonical form can be applied. The present work is a nontrivial generalization of [5]. Specifically, in this article we study linear descriptor matrix differential equations of higher order whose coefficients are rectangular constant matrices, that is, the singular case is examined. By adopting several different methods for computing matrix powers and exponentials, new formulas representing auxiliary results are obtained. This allows us to prove properties of a large class of linear matrix differential equations of higher order; in particular, results of Apostol and Kolodner are recovered; see also [5, 8].

Finally, it should be mentioned that in the classical theory of linear (descriptor) differential systems, see, for instance, [1, 2, 11–13], one of the important features is that not every initial condition admits a functional solution. Thus, we call a *consistent initial condition* for (1.3) at if there is a solution of (1.3), defined on some interval , such that .

On the other hand, in some significant practical applications it is not rare that the *initial conditions* for (1.3) are *nonconsistent*, that is, .

#### 2. Mathematical Background and Notations

In this preliminary section, some well-known concepts and definitions for matrix pencils are introduced. This discussion is important for a better understanding of the results of Section 3.

*Definition 2.1. *Given and an indeterminate , the matrix pencil is called *regular* when and (where is the zero element of ). Otherwise, the pencil is called *singular*.

In this paper, as we are going to see in the next paragraph, we consider the case that the pencil is *singular*. The next definition is very important, since the notion of strict equivalence between two pencils is presented.
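Definition 2.1 can be checked symbolically for small examples. In the sketch below (the matrices are illustrative choices, not taken from the paper), the first square pencil is regular even though its leading coefficient is singular, while the second has an identically vanishing determinant and is therefore singular:

```python
import sympy as sp

s = sp.symbols('s')

# Regular pencil: square and det(s*F - G) is not the zero polynomial.
F = sp.Matrix([[1, 0], [0, 0]])   # a singular leading coefficient is allowed
G = sp.Matrix([[2, 0], [0, 1]])
det_regular = sp.det(s * F - G)   # = 2 - s, a nonzero polynomial in s
assert sp.simplify(det_regular) != 0

# Singular pencil: det(s*F - G) vanishes identically in s.
F2 = sp.Matrix([[1, 0], [0, 0]])
G2 = sp.Matrix([[1, 0], [0, 0]])
det_singular = sp.det(s * F2 - G2)
assert sp.simplify(det_singular) == 0
```

Nonsquare pencils, which are the focus of this paper, are singular by definition, so no determinant test is needed there.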

*Definition 2.2. *The pencil is said to be *strictly equivalent* to the pencil if and only if there exist nonsingular and such that

The characterization of singular pencils requires the definition of additional sets of invariants known as the minimal indices.

Let us assume that , where denotes the field of rational functions in having coefficients in the field . The equations

have nonzero solutions and which are vectors in the rational vector spaces

respectively, where

The sets of the minimal degrees and are known as the *column minimal indices* (c.m.i.) and *row minimal indices* (r.m.i.) of , respectively. Furthermore, if , it is evident that

where is the *complex Weierstrass canonical form*; see [3].

Let be elements of .

Their direct sum, denoted by , is the .

Thus, there exist and such that the *complex Kronecker form * of the singular pencil is defined as follows:

where , , , and (see below). In more detail, the following hold.

(S1) Matrix is uniquely defined by the sets and of zero column and row minimal indices, respectively.

(S2) The second normal block is uniquely defined by the *set of nonzero column minimal indices* (a new arrangement of the indices of must be noted in order to simplify the notation) of and has the form

where , for every , and and denote the identity and the nilpotent (with index of nilpotency ) matrix, respectively. and are the zero column and the column with element 1 at the place, respectively.

(S3) The third normal block is uniquely determined by the *set of nonzero row minimal indices* (a new arrangement of the indices of must be noted in order to simplify the notation) of and has the form

where , for every , and and denote the identity and nilpotent (with index of nilpotency ) matrix, respectively. and are the zero column and the column with element 1 at the first place, respectively.

(S4-S5) The fourth and the fifth normal matrix blocks constitute the complex Weierstrass form of the singular pencil , which is defined by

where the first normal Jordan-type element is uniquely defined by the set of *finite elementary divisors* (f.e.d.)

of and has the form

The blocks of the second uniquely defined block correspond to the *infinite elementary divisors* (i.e.d.)

of and have the form

Thus is a nilpotent element of with index , where

and are the matrices

In the last part of this section, some elements for the analytic computation of , are provided. Many theoretical and numerical methods have been developed to perform this computation.

Thus, the interested reader may consult [1, 2, 7–10, 14–16] and the references therein. In order to obtain more analytic formulas, the following known results should be mentioned.

Lemma 2.3 (see [15]). *
where
*

Another expression for the exponential of a Jordan block, see (2.18), is provided by the following lemma.

Lemma 2.4 (see [15]). *
where the satisfy the following system of equations:
**
where .*
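Lemmas 2.3 and 2.4 concern closed forms for the exponential of a Jordan block. The specific formulas cannot be reproduced here, but the standard identity exp(tJ) = e^{λt} Σ_{k<m} t^k N^k / k! for a Jordan block J = λI + N, with N its nilpotent superdiagonal part, can be verified numerically; the values of λ, m, and t below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

lam, m, t = 1.5, 4, 0.8
N = np.diag(np.ones(m - 1), 1)        # nilpotent superdiagonal part, N**m = 0
J = lam * np.eye(m) + N               # Jordan block J = lam*I + N

# Closed form: exp(tJ) = e^{lam*t} * sum_{k<m} t^k N^k / k!
closed = np.exp(lam * t) * sum(
    (t ** k) * np.linalg.matrix_power(N, k) / factorial(k) for k in range(m))

assert np.allclose(closed, expm(t * J))
```

Since N has nilpotency index m, the series terminates after m terms, which is what makes the closed forms of Lemmas 2.3 and 2.4 finite.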

#### 3. Solution Space for Consistent Initial Conditions

In this section, the main results for consistent initial conditions are presented analytically for the singular case. The whole discussion extends the existing literature; see, for instance, [8]. In order to obtain a solution, we deal with the consistent initial value problem. More precisely, we consider the system

with known where (where matrix is singular), and .

From the singularity of , there exist nonsingular matrices and such that (see also Section 2)

where ,, , , , and are given by

By using the Kronecker canonical form, we may rewrite system (1.3) as the following lemma shows.

Lemma 3.1. *System (1.3) may be divided into five subsystems:
**
the so-called slow subsystem
**
and the relative fast subsystem
*

*Proof. *Consider the transformation
where and . Substituting the previous expression into (1.3), we obtain
Multiplying by , we arrive at
Moreover, we can write as
where , , , , and . Note that is the number of zero column entries, , , , and .

Taking into account the above expressions, we easily arrive at (3.5)–(3.9).

Proposition 3.2. *For system (3.5), the elements of the matrix can be chosen arbitrarily.*

*Proof. *
Since , it is evident that any *g*-column vector can be chosen.

Proposition 3.3. *The analytic solution of system
**
is given by the expression
**
where
**
where is an arbitrary function, for every , , and . (Note that should be uniquely determined via the given initial conditions.)*

*Proof. *System (3.14) is rewritten as
for every . Now, we denote
where , with , and (vector, ).

Thus,

or, equivalently, we obtain
Note that is a matrix with -elements as follows
where , for .

Consequently, (3.20) is rewritten as follows:

or, equivalently,
and eventually, as a scalar system, we obtain

Note that element is an arbitrary function; then the solution is obtained iteratively, as follows.

Firstly, we take the equation for every ,

We continue the procedure, for , and so forth. Thus, we finally obtain (3.15).

With the following remark, we obtain the solution of subsystem (3.6).

*Remark 3.4. *The solution of subsystem (3.6) is given by
where the results of Proposition 3.3 are also considered.

*Remark 3.5. *Considering the solution (3.14), and therefore the system (3.6), it should be pointed out that the solution is not unique, since the last component of the solution vector is chosen arbitrarily. Moreover, it is worth emphasizing here that the solution of the singular system (1.3) is not unique.

Proposition 3.6. *The system
**
has only the zero solution.*

*Proof. *System (3.27) can be rewritten as follows:
for every .

Afterwards, we obtain straightforwardly the following system:

Now, by successively taking th derivatives with respect to on both sides of
and left multiplying by the matrix , times (where is the index of the nilpotent matrix , i.e., ), we obtain the following equations:
Thus, we arrive at the following expression:

*Remark 3.7. *Consequently, the subsystem (3.7) has also the zero solution.
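The backward-substitution argument in the proof of Proposition 3.6 can be written compactly for the first-order reduction; the symbols below ($N$ nilpotent of index $q$, $\tilde z$ the unknown) are generic placeholders standing in for the blocks of (3.27):

```latex
% From N \tilde z' = \tilde z with N^{q} = 0, substitute the equation into itself:
\tilde z \;=\; N \tilde z'
         \;=\; N\,\frac{d}{dt}\bigl(N \tilde z'\bigr) \;=\; N^{2}\tilde z''
         \;=\; \cdots
         \;=\; N^{q}\,\tilde z^{(q)} \;=\; 0 .
```

Each substitution raises the power of $N$ by one, so after $q$ steps the nilpotency of $N$ forces the zero solution.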

Proposition 3.8 (see [5]). *
(a) The analytic solution of the so-called slow subsystem (3.8) is given by
**
where ; such that . **Note that is the Jordan Canonical form of matrix**
and . **The eigenvalues of the matrix are given by**
where ( finite elementary divisors) and for every and . **
(b) However, the relative fast subsystem (3.9) has only the zero solution. **It is worth noting that the results of Ben Taher and Rachidi [14] can be compared with those of Proposition 3.8; this comparison has been discussed extensively in [5].*

*Remark 3.9. *The characteristic polynomial of is , with for and . Without loss of generality, we define that
where , are the geometric and algebraic multiplicities of the given eigenvalues , respectively.

(i) Consequently, when , then

is also a diagonal matrix with diagonal elements of the eigenvalue , for .

(ii) When , then

for , and .

Hence, the set of consistent initial conditions for system

has the following form:
In more detail, since we have considered (3.14), we can denote
then we can derive the following expression:
Then, the set of consistent initial conditions for (1.3) is given by
Now, taking (3.2) and (3.43) into consideration, we arrive at

Theorem 3.10. *The analytic solution of (3.2) is given by
**
for , where for and
**
(Note that should be uniquely determined via the given initial conditions.) **The matrix is arbitrarily chosen. Moreover**
such that , where is the Jordan Canonical form of matrix , and
*

*Proof. *Using the results of Lemma 3.1, Propositions 3.2–3.8, Remarks 3.4 and 3.7, and (3.10), we obtain
Finally, (3.45) is derived.

The next remark connects the solution with the set of initial conditions for the system (1.3).

*Remark 3.11. *If is the existing left inverse of , then considering also (3.10)
Finally, the solution (3.45) is given by
where and is the existing left inverse of .

The following two expressions, (3.52) and (3.54), are based on Lemmas 2.3 and 2.4, respectively. Thus, two new analytical formulas are derived, which are practically very useful. Their proofs are straightforward exercises based on Lemmas 2.3 and 2.4 and (3.51).

Proposition 3.12. *Considering the results of Lemma 2.3, one obtains the expression
**
where
**
for , and .*

Another expression, based on Lemma 2.4, is provided by the following proposition.

Proposition 3.13. *Considering the results of Lemma 2.4, one obtains the expression
**
where the satisfy the following system of (for ) equations:
**
and , for where .*

Building on the results of this section (see Theorem 3.10 and Lemmas 2.3 and 2.4), we can briefly present a symbolic algorithm for the solution of system (1.3).

*Symbolic Algorithm*

*Step 1. *Determine the pencil .

*Step 2. *Calculate the expressions (3.3); that is, find the f.e.d., i.e.d., r.m.i., c.m.i., and so forth (i.e., the *complex Kronecker form * of the singular pencil ; here, it should be noted that this step is not an easy task, and some parts are still under research).

*Step 3. *Using the results of Step 2, determine the matrices , , , , and .

*Step 4. *Determine , , , (using the Jordan canonical form of matrix ), and (see Remark 3.9).

*Step 5. *Considering the transformation (3.10), that is, , we obtain (3.54). Then proceed as follows.

*Substep 5.1. *Choose an arbitrary matrix .

*Substep 5.2. *Determine the matrix , that is,
where , for and
*Step 6a. *Following the results of Lemma 2.3, determine
where
for , and .

*Step 6b. *Following the results of Lemma 2.4, determine
where satisfy the following system of (for ) equations:
and , for , and .
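As a companion to the algorithm, the slow/fast decoupling that Steps 3–5 rely on can be sketched numerically in the first-order case, assuming the pencil is already in Weierstrass coordinates; all matrices and initial data below are illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.linalg import expm

# First-order descriptor system F x' = G x with F = diag(I, H), G = diag(J, I):
J = np.array([[2.0, 1.0],
              [0.0, 2.0]])          # slow dynamics: y' = J y
H = np.array([[0.0, 1.0],
              [0.0, 0.0]])          # nilpotent fast block: H w' = w  =>  w = 0

F = np.block([[np.eye(2), np.zeros((2, 2))],
              [np.zeros((2, 2)), H]])
G = np.block([[J, np.zeros((2, 2))],
              [np.zeros((2, 2)), np.eye(2)]])

# A consistent initial condition: the fast components must vanish.
y0 = np.array([1.0, -1.0])

def x_of_t(t):
    """Slow part evolves by expm(t J); fast part is identically zero."""
    return np.concatenate([expm(t * J) @ y0, np.zeros(2)])

# Verify F x'(t) = G x(t) via a central difference.
t, h = 0.5, 1e-6
xdot = (x_of_t(t + h) - x_of_t(t - h)) / (2 * h)
assert np.allclose(F @ xdot, G @ x_of_t(t), atol=1e-6)
```

For a genuinely singular (rectangular) pencil, Step 2 additionally produces the minimal-index blocks of (S1)–(S3), which this first-order sketch omits.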

*Example 3.14 (with consistent initial condition). *Consider the 2nd order system
where , and , with
with known consistent initial conditions .

From the singularity of , there exist nonsingular matrices

Then, using (3.3), we obtain
where , , , , , and are given by
Considering the transformation (3.10), that is, , we obtain the results of Lemma 2.3 (see also below):
where

(i) is an arbitrarily chosen matrix.

(ii) with , and

(iii)

(iv)

(We have only two eigenvalues, and .)

(v)

where are known, and

(vi) With


Combining the above arithmetic results, the analytic solution of system (3.62) is given by considering (3.58).

#### 4. Solution Space Form of Nonconsistent Initial Conditions

In this short section, we briefly describe the impulse behaviour of the solution of the original system (1.3) at time ; see also [11–13]. In that case, we reformulate Proposition 3.6 so that the impulse solution is finally obtained. Note that in this part of the paper the condition that does not hold anymore, since we are interested in solutions with impulsive behaviour (again, can be either or ).

Moreover, we assume that the space of nonconsistent initial conditions is denoted by , which is also called the *redundancy space*. Then, considering Lemma 3.1, and especially (3.7) and (3.9), we have the nonconsistent initial condition, that is,

for .

In order to find a solution, we use the classical method of the Laplace transformation. This method has been applied several times in descriptor system theory; see, for instance, [4, 5, 11].

Proposition 4.1. *The analytic solution of the system (3.27) is given by
**
where and are the delta function of Dirac and its derivatives, respectively.*

*Proof. *Let us start by observing that, as is well known, there exists a such that , that is, equals the index of nilpotency of .

Moreover, system (3.27) can be rewritten as follows (see also the proof of Proposition 3.6):

By taking the Laplace transform of , the following expression is derived:
and by defining , we obtain
Since is the index of nilpotency of , it is known that
where ; see, for instance, [4, 10]. Thus, substituting the above expression into (4.5), the following equation is obtained:
Since , (4.7) is transformed into (4.8):
Now, by applying the inverse *Laplace* transform to (4.8), equation (4.2) is derived.
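The identity quoted in the proof, that the resolvent of a nilpotent matrix truncates to a finite Neumann series, can be verified symbolically; the block `N` and its index `q = 3` below are illustrative stand-ins for the nilpotent block of (3.27):

```python
import sympy as sp

s = sp.symbols('s')
q = 3
N = sp.Matrix(q, q, lambda i, j: 1 if j == i + 1 else 0)  # nilpotent, N**q = 0
assert N**q == sp.zeros(q, q)

# Finite Neumann series for nilpotent N:
# (s*N - I)^{-1} = -(I + s*N + ... + s^{q-1} * N^{q-1})
lhs = (s * N - sp.eye(q)).inv()
rhs = -sum((s**k * N**k for k in range(q)), sp.zeros(q, q))
assert sp.simplify(lhs - rhs) == sp.zeros(q, q)
```

Because the series is a polynomial in s, the inverse Laplace transform produces the Dirac delta and its first q - 1 derivatives, which is exactly the impulsive structure of (4.2).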

*Remark 4.2. *The analytic solution of the subsystem (3.7) is given by
where the results of Proposition 4.1 are also considered.

Similarly to Proposition 4.1, we can prove the following proposition.

Proposition 4.3. *The analytic solution of the system (3.9) is given by
*

Theorem 4.4. *The analytic solution of (1.3) is given by
*