Abstract

We consider a reduction of a nonhomogeneous linear system of first-order operator equations to a totally reduced system. The obtained results are applied to the Cauchy problem for linear differential systems with constant coefficients and to the question of differential transcendency.

1. Introduction

Linear systems with constant coefficients are considered in various fields (see [15]). In our paper [5] we used the rational canonical form and a certain sum of principal minors to reduce a linear system of first-order operator equations with constant coefficients to an equivalent, so-called partially reduced, system. In this paper we obtain more general results regarding sums of principal minors and a new type of reduction. The obtained reduction formulae allow some new considerations in connection with the Cauchy problem for linear differential systems with constant coefficients and in connection with the differential transcendency of the solution coordinates.

2. Notation

Let us recall some notation. Let $K$ be a field, and let $B \in K^{n \times n}$ be an $n$-square matrix. We denote by $\delta_k = \delta_k(B)$ the sum of its principal minors of order $k$ ($1 \le k \le n$), and by $\delta_k^i = \delta_k^i(B)$ the sum of its principal minors of order $k$ containing the $i$th column ($1 \le i, k \le n$).

Let $v_1, \dots, v_n$ be elements of $K$. We write $B_i(v_1, \dots, v_n) \in K^{n \times n}$ for the matrix obtained by substituting $v = [v_1 \dots v_n]^T$ in place of the $i$th column of $B$. Furthermore, it is convenient to use
$$\delta_k^i(B; v) = \delta_k^i(B; v_1, \dots, v_n) = \delta_k^i\bigl(B_i(v_1, \dots, v_n)\bigr), \tag{2.1}$$
and the corresponding vector
$$\vec{\delta}_k(B; v) = \bigl[\delta_k^1(B; v) \;\dots\; \delta_k^n(B; v)\bigr]^T. \tag{2.2}$$
The characteristic polynomial $\Delta_B(\lambda)$ of the matrix $B \in K^{n \times n}$ has the following form:
$$\Delta_B(\lambda) = \det(\lambda I - B) = \lambda^n + d_1 \lambda^{n-1} + \dots + d_{n-1} \lambda + d_n, \tag{2.3}$$
where $d_k = (-1)^k \delta_k(B)$, $1 \le k \le n$; see [6, page 78].
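The relation $d_k = (-1)^k \delta_k(B)$ in (2.3) is easy to check by brute force. The following Python sketch (an illustrative addition; the sample matrix is arbitrary) enumerates principal minors directly with a Leibniz-formula determinant and compares both sides of (2.3) at several values of $\lambda$.

```python
from itertools import combinations, permutations

def det(M):
    """Determinant via the Leibniz formula (fine for small matrices)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n))
        term = (-1) ** inv
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def delta(B, k):
    """delta_k(B): sum of all principal minors of order k."""
    return sum(det([[B[r][c] for c in S] for r in S])
               for S in combinations(range(len(B)), k))

B = [[2, 1, 0],
     [1, 3, 4],
     [5, 0, 1]]
n = len(B)

# Coefficients d_k = (-1)^k delta_k(B) of det(lambda*I - B), as in (2.3).
d = [(-1) ** k * delta(B, k) for k in range(1, n + 1)]

# Compare both sides of (2.3) at several sample values of lambda.
for lam in range(5):
    lhs = det([[lam * (r == c) - B[r][c] for c in range(n)] for r in range(n)])
    rhs = lam ** n + sum(d[k - 1] * lam ** (n - k) for k in range(1, n + 1))
    assert lhs == rhs
```

For this matrix the coefficient list `d` comes out as `[-6, 10, -25]`, matching $\delta_1 = 6$, $\delta_2 = 10$, $\delta_3 = \det B = 25$.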

Denote by $\widetilde{B}(\lambda)$ the adjugate matrix of $\lambda I - B$, and let $B_0, B_1, \dots, B_{n-2}, B_{n-1}$ be $n$-square matrices over $K$ determined by
$$\widetilde{B}(\lambda) = \operatorname{adj}(\lambda I - B) = \lambda^{n-1} B_0 + \lambda^{n-2} B_1 + \dots + \lambda B_{n-2} + B_{n-1}. \tag{2.4}$$

Recall that $(\lambda I - B)\,\widetilde{B}(\lambda) = \Delta_B(\lambda)\, I = \widetilde{B}(\lambda)\,(\lambda I - B)$.

The recurrence $B_0 = I$, $B_k = B_{k-1} B + d_k I$ for $1 \le k < n$ follows from the equation $\widetilde{B}(\lambda)(\lambda I - B) = \Delta_B(\lambda) I$; see [6, page 91].
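As a sanity check of this recurrence, the sketch below (an illustrative addition with an arbitrary sample matrix) builds $B_0, \dots, B_{n-1}$ from it and verifies $(\lambda I - B)\,\widetilde{B}(\lambda) = \Delta_B(\lambda) I$ at sample values of $\lambda$, using a brute-force determinant.

```python
from itertools import combinations, permutations

def det(M):
    """Determinant via the Leibniz formula (fine for small matrices)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n))
        term = (-1) ** inv
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def delta(B, k):
    """delta_k(B): sum of all principal minors of order k."""
    return sum(det([[B[r][c] for c in S] for r in S])
               for S in combinations(range(len(B)), k))

def matmul(X, Y):
    n = len(X)
    return [[sum(X[r][j] * Y[j][c] for j in range(n)) for c in range(n)]
            for r in range(n)]

B = [[2, 1, 0],
     [1, 3, 4],
     [5, 0, 1]]
n = len(B)
I = [[int(r == c) for c in range(n)] for r in range(n)]
d = [(-1) ** k * delta(B, k) for k in range(1, n + 1)]

# B_0 = I; B_k = B_{k-1} B + d_k I for 1 <= k < n.
Bs = [I]
for k in range(1, n):
    prod = matmul(Bs[-1], B)
    Bs.append([[prod[r][c] + d[k - 1] * I[r][c] for c in range(n)]
               for r in range(n)])

# Verify (lambda*I - B) * Btilde(lambda) == Delta_B(lambda) * I at samples.
for lam in range(4):
    Btilde = [[sum(lam ** (n - 1 - k) * Bs[k][r][c] for k in range(n))
               for c in range(n)] for r in range(n)]
    lamIB = [[lam * I[r][c] - B[r][c] for c in range(n)] for r in range(n)]
    Delta = lam ** n + sum(d[k - 1] * lam ** (n - k) for k in range(1, n + 1))
    assert matmul(lamIB, Btilde) == [[Delta * I[r][c] for c in range(n)]
                                     for r in range(n)]
```

Note that $B_{n-1} = \widetilde{B}(0) = \operatorname{adj}(-B)$, which for odd $n$ equals $\operatorname{adj}(B)$, giving an independent check of the last coefficient.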

3. Some Results about Sums of Principal Minors

In this section we give two results about sums of principal minors.

Theorem 3.1. For $B \in K^{n \times n}$ and $v = [v_1 \dots v_n]^T \in K^{n \times 1}$, one has
$$\delta_k^i\Bigl(B;\, \sum_{j=1}^n b_{1j} v_j, \dots, \sum_{j=1}^n b_{nj} v_j\Bigr) + \delta_{k+1}^i(B; v_1, \dots, v_n) = \delta_k(B)\, v_i. \tag{3.1}$$

Remark 3.2. The previous result can be described by
$$\delta_k^i(B; Bv) + \delta_{k+1}^i(B; v) = \delta_k(B)\, v_i, \quad 1 \le i \le n, \tag{3.2}$$
or simply by the following vector equation:
$$\vec{\delta}_k(B; Bv) + \vec{\delta}_{k+1}(B; v) = \delta_k(B)\, v. \tag{3.3}$$
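The vector identity (3.3) lends itself to a direct numerical check. The sketch below (an illustrative addition; matrix and vector are arbitrary samples) implements $\delta_k^i(B; v)$ exactly as defined in (2.1) and tests (3.3) componentwise.

```python
from itertools import combinations, permutations

def det(M):
    """Determinant via the Leibniz formula (fine for small matrices)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n))
        term = (-1) ** inv
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def delta(B, k):
    """delta_k(B): sum of all principal minors of order k."""
    return sum(det([[B[r][c] for c in S] for r in S])
               for S in combinations(range(len(B)), k))

def delta_i(B, k, i, v):
    """delta_k^i(B; v): substitute v for the i-th column of B, then sum
    the principal minors of order k containing that column (0-indexed i)."""
    n = len(B)
    Bi = [row[:] for row in B]
    for r in range(n):
        Bi[r][i] = v[r]
    return sum(det([[Bi[r][c] for c in S] for r in S])
               for S in combinations(range(n), k) if i in S)

B = [[2, 1, 0],
     [1, 3, 4],
     [5, 0, 1]]
n = len(B)
v = [1, -2, 3]
Bv = [sum(B[r][c] * v[c] for c in range(n)) for r in range(n)]

# Check (3.3) componentwise for k = 1, ..., n (for k = n the second term
# is an empty sum over order-(n+1) minors, i.e. zero).
ok = all(delta_i(B, k, i, Bv) + delta_i(B, k + 1, i, v) == delta(B, k) * v[i]
         for k in range(1, n + 1) for i in range(n))
assert ok
```

The boundary case $k = n$ reduces to $\delta_n^i(B; Bv) = \det(B)\, v_i$, which is Cramer's rule.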

Proof of Theorem 3.1. Let $e_s \in K^{n \times 1}$ denote the column whose only nonzero entry is $1$, in the $s$th position. We also write $B_s$ for the $s$th column of the matrix $B$ and $[B]_{\hat{s}}$ for the square matrix of order $n-1$ obtained from $B$ by deleting its $s$th column and row. According to the notation used in (2.1), let $[B_i(B_s)]_{\hat{s}}$ stand for the matrix of order $n-1$ obtained from $B$ by substituting the $s$th column $B_s$ in place of the $i$th column, and then deleting the $s$th column and $s$th row of the new matrix. By applying linearity of $\delta_k^i(B; v)$ with respect to $v$, we have
$$\begin{aligned}
\delta_k^i(B; Bv) + \delta_{k+1}^i(B; v) &= \delta_k^i\Bigl(B;\, \sum_{s=1}^n v_s B_s\Bigr) + \delta_{k+1}^i\Bigl(B;\, \sum_{s=1}^n v_s e_s\Bigr) \\
&= \sum_{s=1}^n v_s\, \delta_k^i(B; B_s) + \sum_{s=1}^n v_s\, \delta_{k+1}^i(B; e_s) \\
&= \sum_{s=1}^n v_s \bigl(\delta_k^i(B; B_s) + \delta_{k+1}^i(B; e_s)\bigr) \\
&= v_i \bigl(\delta_k^i(B; B_i) + \delta_{k+1}^i(B; e_i)\bigr) + \sum_{\substack{s=1 \\ s \ne i}}^n v_s \bigl(\delta_k^i(B; B_s) + \delta_{k+1}^i(B; e_s)\bigr).
\end{aligned} \tag{3.4}$$
First, we compute $v_i\bigl(\delta_k^i(B; B_i) + \delta_{k+1}^i(B; e_i)\bigr) = v_i\bigl(\delta_k^i(B) + \delta_k([B]_{\hat{i}})\bigr) = v_i\, \delta_k(B)$. Then it remains to show that $\sum_{s=1,\, s \ne i}^n v_s\bigl(\delta_k^i(B; B_s) + \delta_{k+1}^i(B; e_s)\bigr) = 0$.
It suffices to prove that each term in the sum is zero, that is,
$$\delta_k^i(B; B_s) + \delta_{k+1}^i(B; e_s) = 0 \quad \text{for } s \ne i. \tag{3.5}$$
Suppose that $s \ne i$, and consider the minors in the sum $\delta_k^i(B; B_s)$. All of those containing the $s$th column are equal to zero, since they have two equal columns, so we deduce
$$\delta_k^i(B; B_s) = \delta_k^i\bigl(B_i(B_s)\bigr) = \delta_k^i\bigl([B_i(B_s)]_{\hat{s}}\bigr). \tag{3.6}$$
If $s \ne i$, then each minor in the sum $\delta_{k+1}^i(B; e_s)$ necessarily contains the $s$th row and the $i$th column. By interchanging the $i$th and $s$th columns, we multiply each minor by $-1$. We now expand these minors along the $s$th column to get $-1$ times the corresponding $k$th order principal minors of the matrix $B_i(B_s)$ which do not include the $s$th column. Hence, $\delta_{k+1}^i(B; e_s) = -\delta_k^i\bigl([B_i(B_s)]_{\hat{s}}\bigr)$, and the proof is complete.

In the following theorem, we give a correspondence between the coefficients $B_k$ of the matrix polynomial $\widetilde{B}(\lambda) = \operatorname{adj}(\lambda I - B)$ and the sums of principal minors $\vec{\delta}_{k+1}(B; v)$, $0 \le k < n$.

Theorem 3.3. For an arbitrary column $v = [v_1 \dots v_n]^T \in K^{n \times 1}$, it holds that
$$B_k v = B_k \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} = (-1)^k \begin{bmatrix} \delta_{k+1}^1(B; v_1, \dots, v_n) \\ \delta_{k+1}^2(B; v_1, \dots, v_n) \\ \vdots \\ \delta_{k+1}^n(B; v_1, \dots, v_n) \end{bmatrix} = (-1)^k\, \vec{\delta}_{k+1}(B; v). \tag{3.7}$$

Proof. The proof proceeds by induction on $k$. The case $k = 0$ is obvious. Assume, as induction hypothesis (IH), that the statement is true for $k-1$. Multiplying the equation $B_k = B_{k-1} B + d_k I$ on the right by the vector $v$, we obtain
$$B_k v = B_{k-1} B v + d_k v \overset{\text{(IH)}}{=} (-1)^{k-1}\, \vec{\delta}_k(B; Bv) + d_k v = (-1)^{k-1} \bigl(\vec{\delta}_k(B; Bv) - \delta_k(B)\, v\bigr) \overset{(3.3)}{=} (-1)^k\, \vec{\delta}_{k+1}(B; v). \tag{3.8}$$
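Theorem 3.3 can also be verified numerically: the sketch below (an illustrative addition with arbitrary sample data) constructs $B_0, \dots, B_{n-1}$ via the recurrence of Section 2 and compares $B_k v$ with $(-1)^k\, \vec{\delta}_{k+1}(B; v)$ for every $k$.

```python
from itertools import combinations, permutations

def det(M):
    """Determinant via the Leibniz formula (fine for small matrices)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n))
        term = (-1) ** inv
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def delta(B, k):
    """delta_k(B): sum of all principal minors of order k."""
    return sum(det([[B[r][c] for c in S] for r in S])
               for S in combinations(range(len(B)), k))

def delta_i(B, k, i, v):
    """delta_k^i(B; v): substitute v for the i-th column of B, then sum
    the principal minors of order k containing that column (0-indexed i)."""
    n = len(B)
    Bi = [row[:] for row in B]
    for r in range(n):
        Bi[r][i] = v[r]
    return sum(det([[Bi[r][c] for c in S] for r in S])
               for S in combinations(range(n), k) if i in S)

def matmul(X, Y):
    n = len(X)
    return [[sum(X[r][j] * Y[j][c] for j in range(n)) for c in range(n)]
            for r in range(n)]

B = [[2, 1, 0],
     [1, 3, 4],
     [5, 0, 1]]
n = len(B)
I = [[int(r == c) for c in range(n)] for r in range(n)]
d = [(-1) ** k * delta(B, k) for k in range(1, n + 1)]

# Adjugate coefficients from the recurrence B_k = B_{k-1} B + d_k I.
Bs = [I]
for k in range(1, n):
    prod = matmul(Bs[-1], B)
    Bs.append([[prod[r][c] + d[k - 1] * I[r][c] for c in range(n)]
               for r in range(n)])

v = [1, -2, 3]
# Check (3.7): B_k v == (-1)^k [delta_{k+1}^i(B; v)]_{i=1..n}, 0 <= k < n.
for k in range(n):
    Bkv = [sum(Bs[k][r][c] * v[c] for c in range(n)) for r in range(n)]
    assert Bkv == [(-1) ** k * delta_i(B, k + 1, i, v) for i in range(n)]
```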

Remark 3.4. Theorem 3.3 has an independent application: taking $v = e_j$, $1 \le j \le n$, we obtain formulae (8)–(10) given in [7].

4. Formulae of Total Reduction

We can now obtain a new type of reduction for the linear systems with constant coefficients from [5] by applying the results of the previous section. For the sake of completeness, we introduce some definitions.

Let $K$ be a field, $V$ a vector space over $K$, and let $A : V \to V$ be a linear operator. A linear system of first-order $A$-operator equations with constant coefficients in the unknowns $x_i$, $1 \le i \le n$, is
$$\begin{aligned}
A(x_1) &= b_{11} x_1 + b_{12} x_2 + \dots + b_{1n} x_n + \varphi_1 \\
A(x_2) &= b_{21} x_1 + b_{22} x_2 + \dots + b_{2n} x_n + \varphi_2 \\
&\;\;\vdots \\
A(x_n) &= b_{n1} x_1 + b_{n2} x_2 + \dots + b_{nn} x_n + \varphi_n
\end{aligned} \tag{4.1}$$
for $b_{ij} \in K$ and $\varphi_i \in V$. We say that $B = [b_{ij}]_{i,j=1}^n \in K^{n \times n}$ is the system matrix and $\varphi = [\varphi_1 \dots \varphi_n]^T \in V^{n \times 1}$ is the free column.

Let $x = [x_1 \dots x_n]^T$ be the column of unknowns, and let $\vec{A} : V^{n \times 1} \to V^{n \times 1}$ be the vector operator defined componentwise by $\vec{A}(x) = [A(x_1) \dots A(x_n)]^T$. Then system (4.1) can be written in the following vector form:
$$\vec{A}(x) = Bx + \varphi. \tag{4.2}$$
Any column $v \in V^{n \times 1}$ which satisfies the previous system is its solution.

Powers of the operator $A$ are defined as usual, $A^i = A^{i-1} \circ A$, with $A^0 = I : V \to V$ the identity operator. By an $n$th-order linear $A$-operator equation with constant coefficients, in the unknown $x$, we mean
$$A^n(x) + b_1 A^{n-1}(x) + \dots + b_n I(x) = \varphi, \tag{4.3}$$
where $b_1, \dots, b_n \in K$ are coefficients and $\varphi \in V$. Any vector $v \in V$ which satisfies (4.3) is its solution.

The following theorem separates variables of the initial system.

Theorem 4.1. Assume that the linear system of first-order $A$-operator equations is given in the form (4.2), $\vec{A}(x) = Bx + \varphi$, and that the matrices $B_0, \dots, B_{n-1}$ are the coefficients of the matrix polynomial $\widetilde{B}(\lambda) = \operatorname{adj}(\lambda I - B)$. Then
$$\Delta_B(\vec{A})(x) = \sum_{k=1}^n B_{k-1}\, \vec{A}^{\,n-k}(\varphi). \tag{4.4}$$

Proof. Let $L_B : V^{n \times 1} \to V^{n \times 1}$ be the linear operator defined by $L_B(x) = Bx$. Replacing $\lambda I$ by $\vec{A}$ in the equation $\Delta_B(\lambda) I = \widetilde{B}(\lambda)(\lambda I - B)$, we obtain $\Delta_B(\vec{A}) = \widetilde{B}(\vec{A}) \circ (\vec{A} - L_B)$; hence
$$\Delta_B(\vec{A})(x) = \widetilde{B}(\vec{A})\bigl(\vec{A}(x) - Bx\bigr) \overset{(4.2)}{=} \widetilde{B}(\vec{A})(\varphi) = \sum_{k=1}^n B_{k-1}\, \vec{A}^{\,n-k}(\varphi). \tag{4.5}$$

The next theorem is an operator generalization of Cramer's rule.

Theorem 4.2 (the theorem of total reduction, vector form). The linear system of first-order $A$-operator equations (4.2) can be reduced to the system of $n$th-order $A$-operator equations
$$\Delta_B(\vec{A})(x) = \sum_{k=1}^n (-1)^{k-1}\, \vec{\delta}_k\bigl(B; \vec{A}^{\,n-k}(\varphi)\bigr). \tag{4.6}$$

Proof. It is an immediate consequence of Theorems 4.1 and 3.3 as follows:
$$\Delta_B(\vec{A})(x) \overset{(4.4)}{=} \sum_{k=1}^n B_{k-1}\, \vec{A}^{\,n-k}(\varphi) \overset{(3.7)}{=} \sum_{k=1}^n (-1)^{k-1}\, \vec{\delta}_k\bigl(B; \vec{A}^{\,n-k}(\varphi)\bigr). \tag{4.7}$$

We can now rephrase the previous theorem as follows.

Theorem 4.3 (the theorem of total reduction). The linear system of first-order $A$-operator equations (4.1) implies the system consisting of $n$th-order $A$-operator equations
$$\begin{aligned}
\Delta_B(A)(x_1) &= \sum_{k=1}^n (-1)^{k-1}\, \delta_k^1\bigl(B; A^{n-k}(\varphi)\bigr) \\
&\;\;\vdots \\
\Delta_B(A)(x_i) &= \sum_{k=1}^n (-1)^{k-1}\, \delta_k^i\bigl(B; A^{n-k}(\varphi)\bigr) \\
&\;\;\vdots \\
\Delta_B(A)(x_n) &= \sum_{k=1}^n (-1)^{k-1}\, \delta_k^n\bigl(B; A^{n-k}(\varphi)\bigr).
\end{aligned} \tag{4.8}$$
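Total reduction holds for any linear operator $A$. As an executable illustration (an addition, not part of the original argument), the sketch below takes $A$ to be the shift operator $A(x)(t) = x(t+1)$ on integer sequences, generates a solution of a sample $2 \times 2$ system of the form (4.1), and checks that each coordinate satisfies its totally reduced equation from (4.8).

```python
from itertools import combinations, permutations

def det(M):
    """Determinant via the Leibniz formula (fine for small matrices)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n))
        term = (-1) ** inv
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def delta(B, k):
    """delta_k(B): sum of all principal minors of order k."""
    return sum(det([[B[r][c] for c in S] for r in S])
               for S in combinations(range(len(B)), k))

def delta_i(B, k, i, v):
    """delta_k^i(B; v): substitute v for the i-th column of B, then sum
    the principal minors of order k containing that column (0-indexed i)."""
    n = len(B)
    Bi = [row[:] for row in B]
    for r in range(n):
        Bi[r][i] = v[r]
    return sum(det([[Bi[r][c] for c in S] for r in S])
               for S in combinations(range(n), k) if i in S)

# Shift operator A(x)(t) = x(t+1); sample system x(t+1) = B x(t) + phi(t).
B = [[1, 2],
     [3, 4]]
n = len(B)
phi = lambda t: [t * t, 2 ** t]  # free column evaluated at time t

# Generate a solution sequence from an arbitrary initial vector.
T = 8
x = [[1, -1]]
for t in range(T):
    x.append([sum(B[r][c] * x[t][c] for c in range(n)) + phi(t)[r]
              for r in range(n)])

d = [(-1) ** k * delta(B, k) for k in range(1, n + 1)]

# Check (4.8) at several times t:
# Delta_B(A)(x_i)(t) == sum_k (-1)^(k-1) delta_k^i(B; A^(n-k)(phi))(t).
for t in range(T - n):
    for i in range(n):
        lhs = x[t + n][i] + sum(d[k - 1] * x[t + n - k][i]
                                for k in range(1, n + 1))
        rhs = sum((-1) ** (k - 1) * delta_i(B, k, i, phi(t + n - k))
                  for k in range(1, n + 1))
        assert lhs == rhs
```

Each unknown thus satisfies a scalar second-order recurrence whose left side is the characteristic polynomial of $B$ applied to the shift, with only the nonhomogeneous term differing between coordinates.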

Remark 4.4. System (4.8) has separated variables, and it is called totally reduced. The obtained system is suitable for applications, since it does not require a change of basis. It consists of $n$th-order linear $A$-operator equations which differ only in the variables and in the nonhomogeneous terms.
Transformations of linear systems of operator equations into independent equations are important in applied mathematics [1]. In the following two sections, we apply our theorem of total reduction to specific linear operators $A$.

5. Cauchy Problem

Let us assume that $A$ is a differential operator on the vector space of real functions and that system (4.1) is equipped with the initial conditions $x_i(t_0) = c_i$, $1 \le i \le n$. Then the Cauchy problem for system (4.1) has a unique solution. Using form (4.2), we obtain the additional $n-1$ initial conditions of the $i$th equation in system (4.8):
$$A^j(x_i)(t_0) = \bigl[B^j x(t_0)\bigr]_i + \Bigl[\sum_{k=0}^{j-1} B^{j-1-k}\, \vec{A}^k(\varphi)(t_0)\Bigr]_i, \quad 1 \le j \le n-1, \tag{5.1}$$
where $[\,\cdot\,]_i$ denotes the $i$th coordinate. Then each equation in system (4.8) has a unique solution under the given condition $x_i(t_0) = c_i$ and the additional conditions (5.1), and these solutions form the unique solution of system (4.1). Therefore, formulae (4.8) can be used for solving systems of differential equations.
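Formula (5.1) rests on the purely algebraic identity $\vec{A}^j(x) = B^j x + \sum_{k=0}^{j-1} B^{j-1-k}\, \vec{A}^k(\varphi)$, obtained by iterating (4.2). The sketch below checks this identity with the shift operator standing in for differentiation (an assumption made only to keep the check executable; with the shift, $\vec{A}^j(x)(t_0)$ is simply $x(t_0 + j)$).

```python
B = [[1, 2],
     [3, 4]]
n = len(B)
phi = lambda t: [t * t, 2 ** t]  # sample free column at time t

def matvec(M, w):
    return [sum(M[r][c] * w[c] for c in range(len(w))) for r in range(len(M))]

def matpow(M, p):
    """M**p by repeated multiplication (p >= 0)."""
    m = len(M)
    R = [[int(r == c) for c in range(m)] for r in range(m)]
    for _ in range(p):
        R = [[sum(R[r][j] * M[j][c] for j in range(m)) for c in range(m)]
             for r in range(m)]
    return R

# Solution of x(t+1) = B x(t) + phi(t) from an arbitrary initial vector.
t0, T = 0, 6
x = [[1, -1]]
for t in range(T):
    x.append([matvec(B, x[t])[r] + phi(t)[r] for r in range(n)])

# Check: A^j(x)(t0) = B^j x(t0) + sum_{k=0}^{j-1} B^(j-1-k) A^k(phi)(t0).
for j in range(1, 4):
    expected = matvec(matpow(B, j), x[t0])
    for k in range(j):
        step = matvec(matpow(B, j - 1 - k), phi(t0 + k))
        expected = [expected[r] + step[r] for r in range(n)]
    assert x[t0 + j] == expected
```

Evaluating this identity at $t_0$ coordinatewise is exactly what produces the extra initial conditions (5.1) for each reduced equation.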

It is worth pointing out that the above method can also be extended to systems of difference equations.

6. Differential Transcendency

Now suppose that $V$ is the vector space of meromorphic functions over the complex field $\mathbf{C}$ and that $A$ is the differential operator $A(x) = \frac{d}{dz}(x)$. Let us consider system (4.1) under these assumptions.

Recall that a function $x \in V$ is differentially algebraic if it satisfies a differential algebraic equation with coefficients from $\mathbf{C}$; otherwise, it is differentially transcendental (see [2–4, 8–10]).

Let us consider a nonhomogeneous linear differential equation of $n$th order in the form (4.3), where $b_1, \dots, b_n \in \mathbf{C}$ are constants and $\varphi \in V$. If $x$ is differentially transcendental, then $\Delta_B(A)(x)$ is also a differentially transcendental function. On the other hand, if $\varphi$ is differentially transcendental, then, based on Theorem 2.8 from [10], the solution $x$ of (4.3) is a differentially transcendental function. Therefore, we obtain the following equivalence.

Theorem 6.1. Let $x$ be a solution of (4.3). Then $x$ is a differentially transcendental function if and only if $\varphi$ is a differentially transcendental function.

We also have the following statement about differential transcendency.

Theorem 6.2. Let $\varphi_j$ be the only differentially transcendental component of the free column $\varphi$. Then, for any solution $x$ of system (4.2), the corresponding coordinate $x_j$ is also a differentially transcendental function.

Proof. The sum $\sum_{k=1}^n (-1)^{k-1}\, \delta_k^j\bigl(B; A^{n-k}(\varphi)\bigr)$ must be a differentially transcendental function: its $k = 1$ term contributes $A^{n-1}(\varphi_j)$, so the sum is a nonzero constant-coefficient linear differential expression in $\varphi_j$ plus a differentially algebraic function. The previous theorem applied to the equation
$$\Delta_B(A)(x_j) = \sum_{k=1}^n (-1)^{k-1}\, \delta_k^j\bigl(B; A^{n-k}(\varphi)\bigr) \tag{6.1}$$
implies that $x_j$ is a differentially transcendental function too.

Let us consider system (4.1), and let $\varphi_1$ be the only differentially transcendental component of the free column $\varphi$. Then the coordinate $x_1$ is a differentially transcendental function too. Whether the other coordinates $x_k$ are differentially algebraic depends on the system matrix $B$. From the formulae of total reduction and Theorem 6.1, we obtain the following statement.

Theorem 6.3. Let $\varphi_1$ be the only differentially transcendental component of the free column $\varphi$ of system (4.1). Then the coordinate $x_k$, $k \ne 1$, of the solution $x$ is differentially algebraic if and only if no function $A^{n-j}(\varphi_1)$ ($j = 1, \dots, n$) appears in the sum $\sum_{j=1}^n (-1)^{j-1}\, \delta_j^k\bigl(B; A^{n-j}(\varphi)\bigr)$.

Example 6.4. Let us consider system (4.1) in the form (4.8) in dimensions $n = 2$ and $n = 3$, with $\varphi_1$ as the only differentially transcendental component. The function $x_1$ is differentially transcendental. For $n = 2$, the function $x_2$ is differentially algebraic if and only if $b_{21} = 0$. For $n = 3$, the function $x_2$ is differentially algebraic if and only if $b_{21} = 0 \wedge b_{23} b_{31} = 0$, and the function $x_3$ is differentially algebraic if and only if $b_{31} = 0 \wedge b_{21} b_{32} = 0$.
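By linearity of $\delta_j^k(B; v)$ in $v$, the part of the reduced right-hand side for $x_k$ involving $\varphi_1$ has constant coefficients $\delta_j^k(B; e_1)$, so the criterion of Theorem 6.3 amounts to $\delta_j^k(B; e_1) = 0$ for $j = 1, \dots, n$. The sketch below (an illustrative addition; the sample matrices are chosen only to exercise the conditions) uses this observation to confirm the $n = 3$ conditions of the example.

```python
from itertools import combinations, permutations

def det(M):
    """Determinant via the Leibniz formula (fine for small matrices)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n))
        term = (-1) ** inv
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def delta_i(B, k, i, v):
    """delta_k^i(B; v): substitute v for the i-th column of B, then sum
    the principal minors of order k containing that column (0-indexed i)."""
    n = len(B)
    Bi = [row[:] for row in B]
    for r in range(n):
        Bi[r][i] = v[r]
    return sum(det([[Bi[r][c] for c in S] for r in S])
               for S in combinations(range(n), k) if i in S)

def coord_is_algebraic(B, k):
    """Criterion of Theorem 6.3 for the coordinate x_{k+1} (0-indexed k),
    with phi_1 the only transcendental component: x_{k+1} is differentially
    algebraic iff delta_j^{k+1}(B; e_1) = 0 for j = 1, ..., n, i.e. iff no
    A^(n-j)(phi_1) appears in the reduced right-hand side."""
    n = len(B)
    e1 = [1] + [0] * (n - 1)
    return all(delta_i(B, j, k, e1) == 0 for j in range(1, n + 1))

# n = 3: x_2 is differentially algebraic iff b21 = 0 and b23*b31 = 0.
B = [[1, 2, 3],
     [0, 4, 0],   # b21 = 0 and b23 = 0 ...
     [5, 6, 7]]   # ... so b31 = 5 is allowed
assert coord_is_algebraic(B, 1)       # x_2 is differentially algebraic
assert not coord_is_algebraic(B, 2)   # x_3 is not, since b31 = 5 != 0

# x_3 is differentially algebraic iff b31 = 0 and b21*b32 = 0.
C = [[1, 2, 3],
     [4, 5, 6],   # b21 = 4 != 0 ...
     [0, 0, 7]]   # ... but b31 = 0 and b32 = 0
assert coord_is_algebraic(C, 2)       # x_3 is differentially algebraic
assert not coord_is_algebraic(C, 1)   # x_2 is not, since b21 != 0
```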

Let us emphasize that if we consider two or more differentially transcendental components of the free column $\varphi$, then the differential transcendency of the solution coordinates also depends on some kind of differential independence among them (see, e.g., [8]).

Acknowledgment

Research is partially supported by the Ministry of Science and Education of the Republic of Serbia, Grant no. 174032.