#### Abstract

A nonhomogeneous system of second-order linear differential equations with multiple different delays, whose linear parts are defined by pairwise permutable matrices, is considered. The solution of the corresponding initial value problem is represented using matrix polynomials.

#### 1. Introduction

Motivated by the delayed exponential representing a solution of a system of differential or difference equations with one or multiple fixed or variable delays [1–6], which has many applications in the theory of controllability, asymptotic properties, boundary-value problems, and so forth [3–5, 7–15], we extended the representation of a solution of a system of second-order differential equations with a delay [1] to the case of two delays where the linear parts were given by permutable matrices [16]. Equations (1), (2), and the below-stated (11) with are generalizations of the scalar equation representing a linear oscillator, to -dimensional space with one or multiple fixed delays. Clearly, each solution of the latter equation is oscillating whenever . Analogously, (1) with can have at least one oscillating solution whenever is odd. Indeed, if is a matrix, is odd, and has a simple real nonzero eigenvalue , then there exists a regular matrix such that , where is a matrix. On letting , one gets , or, rewritten as the system , where . Note that the first column of is the eigenvector of corresponding to . Clearly, if a solution of (5) is oscillating, then a solution of (4) is oscillating in the first coordinate whenever its initial condition satisfies . Consequently, a solution of (1) is oscillating in whenever . Taking , one obtains the characteristic equation of (5), which has solutions with . Thus, is oscillating.
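For orientation, the equations referred to above can be sketched as follows; this is a hedged reconstruction in our own notation (the symbols $x$, $a$, $B_j$, $\tau_j$, $f$ are ours and need not match the paper's), based on the cited literature on second-order delay systems:

```latex
% scalar linear oscillator, cf. equation (3)
\ddot{x}(t) + a^{2}x(t) = 0, \qquad a \neq 0,
% n-dimensional analogue with m fixed delays, cf. equations (1), (2), (11)
\ddot{x}(t) + \sum_{j=1}^{m} B_j^{2}\,x(t - \tau_j) = f(t), \qquad \tau_j > 0 .
```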

On the other hand, there can exist a nonoscillating solution of the system (1) whenever and is even. For instance, if and , then (1) has the form with , which obviously does not have an oscillating solution satisfying a nonoscillating initial condition. Similarly, it can be shown that a system with odd dimension can possess a nonoscillating solution satisfying an appropriate initial condition.

For simplicity, we call the generalizations (1), (2), and (11) with , of the scalar equation (3), *oscillating*, although their solutions do not always have to be oscillating. Nevertheless, at the end of this paper, in Corollary 8, we state the representation of a solution of the more general system (86) without squares of matrices.

We note that the delayed matrix exponential from [1–5] as well as the representation of a solution of second-order differential equations derived in [1, 16] and in this paper can lead to new results in nonlinear boundary value problems for impulsive functional differential equations considered in [17] or stochastic delayed differential equations from [18].

So, in the present paper, we extend our result from [16] to three and more delays under the assumption that the matrices defining the linear parts are pairwise permutable. Under this assumption, we are able to construct matrix functions solving the homogeneous system of differential equations of second order with any number of fixed delays and, consequently, we use these functions to represent a solution of the corresponding nonhomogeneous initial value problem. As will be shown in the next sections, extending from two to more delays brings many technical difficulties, for example, the use of multinomial coefficients. Naturally, the results of the present paper hold with one or two different delays as well. However, these cases can be studied in a simpler way, which was already done in [1, 16]. Thus, we focus our attention on the case of three and more different delays.

First, we recall our result from [16].

Theorem 1. *Let , , and . Let be permutable matrices; that is, , and let be a given function. The solution of
**
satisfying initial condition
**
has the form
**
where
*

We denote by and the zero and the identity matrix, respectively.

#### 2. Systems with Multiple Delays

In this section, we derive the representation of a solution of satisfying the initial condition (8), where , , , are pairwise permutable matrices; that is, for each , , and are given functions. The solution will be represented using matrix functions analogous to (10) and will be stated in Section 3. We note that the same problems with were studied in [1, 16].

From now on, we use the conventions of the empty sum and the empty product; that is, for any function and matrix function , whether or not they are defined for the indicated argument.

We recall that is a multinomial coefficient [19] given by . Note that if , then , and (20) coincides with (10).
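As a concrete illustration (our own sketch; `multinomial` is a hypothetical helper name, not from the paper), the multinomial coefficient $\binom{k_1+\dots+k_m}{k_1,\dots,k_m} = (k_1+\dots+k_m)!/(k_1!\cdots k_m!)$ can be computed and compared with the binomial case $m=2$, mirroring the remark that (20) coincides with (10) in that case:

```python
from math import comb, factorial

def multinomial(*ks):
    """Multinomial coefficient (k1+...+km)! / (k1! * ... * km!)."""
    denom = 1
    for k in ks:
        denom *= factorial(k)
    return factorial(sum(ks)) // denom

# For m = 2 the multinomial coefficient reduces to the binomial one.
assert multinomial(3, 4) == comb(7, 3)   # both equal 35
print(multinomial(2, 1, 1))              # 4!/(2!*1!*1!) = 12
```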

We will need a property of multinomial coefficients described in the next lemma.

Lemma 2. *Let be fixed. Then
**
for any .*

* Proof. *If , then the statement follows from the property of binomial coefficients:
Let the statement be true for . Next, we use the property of multinomial coefficient
with inductive hypothesis to derive
Clearly, from (16), we get
Applying the case (property of binomial coefficient) and (16), we get
Putting (18) and (19) in (17), we obtain that the statement holds for and the proof is complete.
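The identity of Lemma 2 appears to be the standard Pascal-type recurrence for multinomial coefficients, $\binom{k}{k_1,\dots,k_m}=\sum_{j=1}^{m}\binom{k-1}{k_1,\dots,k_j-1,\dots,k_m}$, with terms containing a negative entry read as zero. Assuming this reading (the function names below are ours), the recurrence can be spot-checked numerically:

```python
from math import factorial

def multinomial(ks):
    """Multinomial coefficient; zero if any entry is negative."""
    if any(k < 0 for k in ks):
        return 0
    denom = 1
    for k in ks:
        denom *= factorial(k)
    return factorial(sum(ks)) // denom

def pascal_sum(ks):
    """Sum of multinomial coefficients with one index decreased by 1."""
    return sum(multinomial(ks[:j] + [ks[j] - 1] + ks[j + 1:])
               for j in range(len(ks)))

# spot-check the recurrence on a few index vectors
for ks in ([2, 3, 1], [1, 1], [4, 0, 2, 1]):
    assert multinomial(ks) == pascal_sum(ks)
```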

In what follows, we write for the multinomial coefficient of elements of the finite set , and for the multinomial coefficient of and elements of the finite set ; for example, if , then . For completeness, we define .

Define the functions as for any .

We will also need the functions for and a complex matrix (cf. [16]), defined as , with the properties for any , considering the one-sided derivatives at , .
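For reference, if the functions meant here are the delayed matrix cosine and sine of [1] (an assumption on our part; the notation $\cos_\tau Bt$, $\sin_\tau Bt$ follows [1] and may differ from this paper's), their piecewise-polynomial definitions read:

```latex
\cos_\tau Bt =
\begin{cases}
\Theta, & t < -\tau,\\[4pt]
\displaystyle\sum_{j=0}^{k}(-1)^{j}B^{2j}\,
  \frac{\bigl(t-(j-1)\tau\bigr)^{2j}}{(2j)!}, & (k-1)\tau \le t < k\tau,\ k\in\mathbb{N}\cup\{0\},
\end{cases}
\qquad
\sin_\tau Bt =
\begin{cases}
\Theta, & t < -\tau,\\[4pt]
\displaystyle\sum_{j=0}^{k}(-1)^{j}B^{2j+1}\,
  \frac{\bigl(t-(j-1)\tau\bigr)^{2j+1}}{(2j+1)!}, & (k-1)\tau \le t < k\tau,
\end{cases}
```

and both satisfy the delayed oscillator equation, e.g. $(\cos_\tau Bt)'' = -B^{2}\cos_\tau B(t-\tau)$ for $t \ge 0$.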

Some properties of the functions and are collected in Lemma 4, but to prove it we will need the next lemma.

Lemma 3. *Let and . Let be pairwise permutable matrices, that is, for each . Then for any ,
**
where the sums are taken over all subsets of including the trivial ones, and
*

* Proof. *Denote by , the sets of all nonnegative and positive integers, respectively; that is, . Thus, we have the trivial identity

Analogously, for any , each -tuple such that can be divided into two distinct sets of -s so that if and if . That is, denotes the set of all indices such that . Moreover, . Accordingly, we can write
where the union is taken over all subsets of including the trivial ones. So, in view of definition (20), the statement for follows.

The statement for can be proved in a similar way.

Lemma 4. *Let and . Let be pairwise permutable matrices; that is, for each . Then the following holds for any :*(1)*if for some , then
*(2)*if for , , then
*(3)*for any bijective mapping we get
*(4)*taking the one-sided derivatives at , then
*(5)*considering the one-sided derivatives at 0 (they both equal ), then
**Statements (1)–(4) hold with instead of .*

* Proof. *Statement (1) follows easily from the definition of , because if and whenever . Next, if , then
for any matrix function . Thus, using the property of multinomial coefficient (see (16))
for (2), we obtain
Property (3) is trivial.

Now, we prove the statement (4). If , then
by (2) and from the property of (see (22)).

Hence, without any loss of generality, we assume that for each , (otherwise, we collect matrices as stated in (2)). Note that the case was proved in [16, Lemma 2.3]. Now, assume that solves
that is, that the statement is fulfilled for different delays.

Let . If , then , that is,
and from definition (20) it holds
for such . Consequently,
by the inductive hypothesis.

Now, let . Applying Lemma 3, we get
with given by (24) and the sum taken over all subsets of including the trivial ones. Note that
with a characteristic function of a set given by
Since each is a finite set, Lemma 2 yields
We apply this identity to derive a formula for the second derivative of for any :
Next, for any fixed we split the second sum to and , that is,
and use the equality
since
So we obtain
for each . Obviously, . Consequently,
Now, we add and subtract
to the right-hand side of (50) to get
and apply whenever :
Denoting the number of elements of the set , we split the last two terms of the right-hand side of the latter equality with respect to
Hence, we have
Now, we show that
Let , and let be arbitrary and fixed such that . Then, clearly,
and , . Moreover, if , are such that , , then .

On the other side, if , are arbitrary and fixed such that , then
and . Furthermore, if are such that , , then . In conclusion, there is a one-to-one correspondence between the terms on the left-hand side of (56) and the terms on the right-hand side. So (56) is valid.

Putting (56) in (55) we obtain
Next, by the property of empty sum, we get
Moreover, it holds
Therefore, putting (60) and (61) in (59) and the result in (53), we obtain
Hence, solves (31) for all . Clearly, the same is true for .

For , statements (1)–(3) can be proved as for . Next, if , we apply point (2) of this lemma and property (22) for to see that
So, is a solution of (31) when all delays are the same.

Again, the case with different delays was proved in [16]; thus, we assume that the statement is fulfilled for , and that for each . As before, if and , then
by definition (20), and the statement follows from the inductive hypothesis. For , we apply Lemma 3 to see that
with given by (25). This time
and . The rest proceeds analogously to .

The final statement follows directly from definition (20).

*Remark 5. *Another proof of statements (1)–(3) of the previous lemma can be given with the aid of statement (4) of the same lemma, using the uniqueness of a solution of the corresponding initial value problem. For instance, in statement (1) of the lemma, both
solve
with initial condition
and .

We are ready to state and prove our main result.

#### 3. Main Result

Here we find a solution of the initial value problem (11), (8) in the sense of the next definition.

*Definition 6. *Let , , and , and let be matrices, and let be a given function. A function is a solution of (11) with the initial condition (8) if (taking the second right-hand derivative at 0) it satisfies (11) on and condition (8) on .

Theorem 7. *Let , , , and , and let be pairwise permutable matrices; that is, for each , and let be a given function. The solution of (11) satisfying the initial condition (8) has the form
**
where and .*

* Proof. *Obviously, satisfies the initial condition on , and, from definition (20), . For the derivative, it holds . Moreover, if , then
since
for each . Thus
and . Clearly,

We show that, although is not at , the function is at these points and, therefore, in . At the same time, we prove that is a solution of (11).

Assume that . Then identities (71) and (73) are valid, and by differentiating (73) for such we get
since for each .

Now, let be such that for each , . Then
whenever , and (70) becomes
By the point (5) of Lemma 4, we get
and for the second derivative it holds
since . Now, we apply the property (4) of Lemma 4 together with
to see that both and are solutions of
Therefore,
In fact, this is exactly formula (11) since for each .

Finally, if , we have
So, differentiating this formula twice and applying (4) of Lemma 4 yields (11). Hence, one can see that the function given by (70) indeed solves (11) and satisfies the initial condition (8) and, moreover, that . To verify the last claim, one substitutes into the computed derivatives; for example, if for each , then by (75) and (82) we get
where .
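In the simplest single-delay scalar case, the ingredients of the representation can be sanity-checked numerically: the scalar analogue of the delayed matrix cosine (our own illustrative implementation below, following the truncated-sum definition from the literature, with hypothetical names `delayed_cos`, `b`, `tau`) satisfies $x''(t) = -b^{2}x(t-\tau)$ away from the joining points:

```python
import math

def delayed_cos(b, tau, t):
    """Scalar delayed cosine: sum of (-1)^j b^(2j) (t-(j-1)tau)^(2j)/(2j)!
    over all j with t - (j-1)*tau >= 0; zero for t < -tau."""
    if t < -tau:
        return 0.0
    total, j = 0.0, 0
    while t - (j - 1) * tau >= 0:
        total += ((-1) ** j * b ** (2 * j)
                  * (t - (j - 1) * tau) ** (2 * j) / math.factorial(2 * j))
        j += 1
    return total

# check x''(t) = -b^2 x(t - tau) by a central finite difference
b, tau, h = 0.7, 1.0, 1e-4
for t in (0.3, 1.4, 2.6):
    second = (delayed_cos(b, tau, t + h) - 2 * delayed_cos(b, tau, t)
              + delayed_cos(b, tau, t - h)) / h ** 2
    assert abs(second + b ** 2 * delayed_cos(b, tau, t - tau)) < 1e-5
```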

It is easy to see that defining functions