#### Abstract

We consider a reduction of a nonhomogeneous linear system of first-order operator equations to a totally reduced system. The results obtained are applied to the Cauchy problem for linear differential systems with constant coefficients and to the question of differential transcendency.

#### 1. Introduction

Linear systems with constant coefficients are considered in various fields (see [1–5]). In our paper [5] we used the rational canonical form and a certain sum of principal minors to reduce a linear system of first-order operator equations with constant coefficients to an equivalent, so-called partially reduced, system. In this paper we obtain more general results regarding sums of principal minors and a new type of reduction. The obtained formulae of reduction allow some new considerations in connection with the Cauchy problem for linear differential systems with constant coefficients and in connection with the differential transcendency of the solution coordinates.

#### 2. Notation

Let us recall some notation. Let $K$ be a field, and let $B = [b_{ij}]$ be an $n$-square matrix over $K$. We denote by $\delta_k(B)$ the sum of its principal minors of order $k$, and by $\delta_{k,i}(B)$ the sum of its principal minors of order $k$ containing the $i$-th column, $1 \le k, i \le n$.

Let $b_1, \ldots, b_n$ be elements of $K$. We write $B(i \to \vec{b})$ for the matrix obtained by substituting the column $\vec{b} = (b_1, \ldots, b_n)^T$ in place of the $i$-th column of $B$. Furthermore, it is convenient to use the scalars $\delta_{k,i}(B(i \to \vec{b}))$ and the corresponding vector

$$\vec{\delta}_k(B; \vec{b}) = \big(\delta_{k,1}(B(1 \to \vec{b})), \ldots, \delta_{k,n}(B(n \to \vec{b}))\big)^T. \tag{2.1}$$

The characteristic polynomial of the matrix $B$ has the following form:

$$\Delta_B(\lambda) = \det(\lambda I - B) = \sum_{k=0}^{n} (-1)^k \delta_k \lambda^{n-k},$$

where $\delta_0 = 1$ and $\delta_k = \delta_k(B)$, $1 \le k \le n$; see [6, page 78].
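Since the notation is compact, a small computational check may help. The following sketch (the example matrix is an arbitrary choice, and sympy is assumed available) verifies that the sums of principal minors $\delta_k$ are, up to sign, the coefficients of the characteristic polynomial:

```python
import sympy as sp
from itertools import combinations

def delta(B, k):
    # sum of all principal minors of order k of B (the empty minor for k = 0 is 1)
    if k == 0:
        return sp.Integer(1)
    n = B.shape[0]
    return sum(B[list(s), list(s)].det() for s in combinations(range(n), k))

B = sp.Matrix([[2, 1, 0], [0, 3, 1], [1, 0, 1]])
n = B.shape[0]
lam = sp.symbols('lam')

# det(lam*I - B) = sum_{k=0}^{n} (-1)^k * delta_k * lam^(n-k)
char_poly = (lam * sp.eye(n) - B).det()
reconstructed = sum((-1)**k * delta(B, k) * lam**(n - k) for k in range(n + 1))
assert sp.expand(char_poly - reconstructed) == 0
```

Here the identity holds for any square matrix; the assertion checks it for the sample matrix above.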

Denote by $\mathrm{Adj}(\lambda I - B)$ the adjoint (adjugate) matrix of $\lambda I - B$, and let $B_0, \ldots, B_{n-1}$ be $n$-square matrices over $K$ determined by

$$\mathrm{Adj}(\lambda I - B) = \sum_{k=0}^{n-1} \lambda^{n-1-k} B_k.$$

Recall that $B_0 = I$.

The recurrence $B_k = B\,B_{k-1} + (-1)^k \delta_k I$, $1 \le k \le n-1$, follows from the equation $(\lambda I - B)\,\mathrm{Adj}(\lambda I - B) = \Delta_B(\lambda)\,I$; see [6, page 91].
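This recurrence is the classical Faddeev–LeVerrier scheme. A minimal sympy sketch (the example matrix is chosen arbitrarily) confirming that the matrices $B_k$ produced by the recurrence assemble the adjugate of $\lambda I - B$:

```python
import sympy as sp
from itertools import combinations

B = sp.Matrix([[2, 1, 0], [0, 3, 1], [1, 0, 1]])
n = B.shape[0]
lam = sp.symbols('lam')

# delta[k]: sum of the principal minors of order k (delta[0] = 1)
delta = [1] + [sum(B[list(s), list(s)].det() for s in combinations(range(n), k))
               for k in range(1, n + 1)]

# recurrence: B_0 = I, B_k = B*B_{k-1} + (-1)^k * delta[k] * I
Bk = [sp.eye(n)]
for k in range(1, n):
    Bk.append(B * Bk[-1] + (-1)**k * delta[k] * sp.eye(n))

# the B_k assemble the adjugate: Adj(lam*I - B) = sum_k lam^(n-1-k) * B_k
adj = (lam * sp.eye(n) - B).adjugate()
poly_sum = sum((lam**(n - 1 - k) * Bk[k] for k in range(n)), sp.zeros(n, n))
assert (adj - poly_sum).applyfunc(sp.expand) == sp.zeros(n, n)
```

The constant term of the same matrix identity gives the Cayley–Hamilton relation $B\,B_{n-1} = (-1)^{n-1}\delta_n I$.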

#### 3. Some Results about Sums of Principal Minors

In this section we give two results about sums of principal minors.

Theorem 3.1. *For $1 \le k \le n-1$ and $1 \le j \le n$, one has
$$\delta_{k+1,j}(B(j \to \vec{b})) = \delta_k(B)\,b_j - \sum_{i=1}^{n} b_{ji}\,\delta_{k,i}(B(i \to \vec{b})).$$*

*Remark 3.2.* The previous result can be described coordinate by coordinate as above, or simply by the following vector equation:
$$\vec{\delta}_{k+1}(B;\vec{b}) = \delta_k(B)\,\vec{b} - B\,\vec{\delta}_k(B;\vec{b}).$$

*Proof of Theorem 3.1.* Let $\vec{e}_i$ denote the column whose only nonzero entry is $1$ in the $i$-th position. We also write $\vec{B}_i$ for the $i$-th column of the matrix $B$ and $B_{(i)}$ for the square matrix of order $n-1$ obtained from $B$ by deleting its $i$-th column and row. According to the notation used in (2.1), let $B(i \to j)$ stand for the matrix of order $n-1$ obtained from $B$ by substituting the $j$-th column in place of the $i$-th column, and then by deleting the $i$-th column and $i$-th row of the new matrix. By applying linearity of the minor sums with respect to the $j$-th column, we expand the left-hand side into a sum of determinants.

First, we compute the terms which produce $\delta_k(B)\,b_j - \sum_{i=1}^{n} b_{ji}\,\delta_{k,i}(B(i \to \vec{b}))$. Then, it remains to show that the remaining terms vanish.

It suffices to prove that each remaining term in the sum is zero. Suppose that $i \ne j$. We now consider the minors in the corresponding sum. All of them containing the $i$-th column are equal to zero, so we deduce that only the minors without the $i$-th column survive. Each surviving minor necessarily contains the $i$-th row and the $j$-th column. By interchanging the $i$-th and $j$-th columns, we multiply each minor by $-1$. We now proceed by expanding these minors along the $i$-th column to get $-1$ times the corresponding $k$-th order principal minors of the matrix which do not include the $i$-th column. Hence the remaining terms cancel in pairs, and the proof is complete.

In the following theorem, we give an interesting correspondence between the coefficients $B_k$ of the matrix polynomial $\mathrm{Adj}(\lambda I - B)$ and the sums of principal minors $\delta_{k,i}$, $1 \le k, i \le n$.

Theorem 3.3. *Given an arbitrary column $\vec{b}$, for $0 \le k \le n-1$ it holds that
$$B_k\,\vec{b} = (-1)^k\,\vec{\delta}_{k+1}(B;\vec{b}).$$*

*Proof.* The proof proceeds by induction on $k$. The statement is obvious for $k = 0$, since $B_0\vec{b} = \vec{b} = \vec{\delta}_1(B;\vec{b})$. Assume, as induction hypothesis (IH), that the statement is true for $k-1$. Multiplying the right-hand side of the equation $B_k = B\,B_{k-1} + (-1)^k \delta_k I$ by the vector $\vec{b}$ and applying (IH) together with Theorem 3.1, we obtain that
$$B_k\vec{b} = B\,B_{k-1}\vec{b} + (-1)^k\delta_k\vec{b} = (-1)^{k-1} B\,\vec{\delta}_k(B;\vec{b}) + (-1)^k \delta_k \vec{b} = (-1)^k\,\vec{\delta}_{k+1}(B;\vec{b}).$$
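Theorem 3.3, read coordinate-wise as: the $i$-th entry of $B_k\vec{b}$ equals $(-1)^k$ times the sum of the $(k{+}1)$-th order principal minors, containing the $i$-th column, of the matrix with its $i$-th column replaced by $\vec{b}$, can be checked mechanically. A sympy sketch under that reading (the example matrix and column are arbitrary choices):

```python
import sympy as sp
from itertools import combinations

def delta_ki(M, k, i):
    # sum of principal minors of order k of M that contain the i-th row/column
    n = M.shape[0]
    return sum(M[list(s), list(s)].det()
               for s in combinations(range(n), k) if i in s)

def col_sub(B, i, b):
    # B(i -> b): B with its i-th column replaced by the column b
    M = B.copy()
    M[:, i] = b
    return M

B = sp.Matrix([[2, 1, 0], [0, 3, 1], [1, 0, 1]])
b = sp.Matrix([5, -2, 7])
n = B.shape[0]

# B_k via the recurrence B_0 = I, B_k = B*B_{k-1} + (-1)^k * delta_k * I
delta = [1] + [sum(B[list(s), list(s)].det() for s in combinations(range(n), k))
               for k in range(1, n + 1)]
Bk = [sp.eye(n)]
for k in range(1, n):
    Bk.append(B * Bk[-1] + (-1)**k * delta[k] * sp.eye(n))

# claimed identity: (B_k b)_i == (-1)^k * delta_{k+1, i}(B(i -> b))
for k in range(n):
    for i in range(n):
        assert (Bk[k] * b)[i] == (-1)**k * delta_ki(col_sub(B, i, b), k + 1, i)
```

For $k = n-1$ this reduces to Cramer's rule, since $B_{n-1} = (-1)^{n-1}\mathrm{Adj}(B)$.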

*Remark 3.4.* Theorem 3.3 seems to have an independent application. As a special case, it yields formulae (8)–(10) given in [7].

#### 4. Formulae of Total Reduction

We can now obtain a new type of reduction of the linear systems with constant coefficients from [5] by applying the results of the previous section. For the sake of completeness, we introduce some definitions.

Let $K$ be a field, $V$ a vector space over the field $K$, and let $A : V \to V$ be a linear operator. We consider a linear system of first-order $A$-operator equations with constant coefficients in the unknowns $x_1, \ldots, x_n$:

$$A(x_i) = \sum_{j=1}^{n} b_{ij} x_j + \varphi_i, \quad 1 \le i \le n, \tag{4.1}$$

where $b_{ij} \in K$ for $1 \le i, j \le n$ and $\varphi_1, \ldots, \varphi_n \in V$. We say that $B = [b_{ij}]$ is the system matrix and $\vec{\varphi} = (\varphi_1, \ldots, \varphi_n)^T$ is the free column.

Let $\vec{x} = (x_1, \ldots, x_n)^T$ be the column of unknowns, and let $\vec{A}$ be the vector operator defined componentwise by $\vec{A}(\vec{x}) = (A(x_1), \ldots, A(x_n))^T$. Then system (4.1) can be written in the following vector form:

$$\vec{A}(\vec{x}) = B\,\vec{x} + \vec{\varphi}. \tag{4.2}$$

Any column $\vec{x} \in V^n$ which satisfies the previous system is its solution.

Powers of the operator $A$ are defined as usual, assuming that $A^0$ is the identity operator. By an $n$-th order linear $A$-operator equation with constant coefficients, in the unknown $x$, we mean

$$A^n(x) + c_{n-1} A^{n-1}(x) + \cdots + c_1 A(x) + c_0\,x = \varphi, \tag{4.3}$$

where $c_0, \ldots, c_{n-1} \in K$ are coefficients and $\varphi \in V$. Any vector $x \in V$ which satisfies (4.3) is its solution.

The following theorem separates variables of the initial system.

Theorem 4.1. *Assume that the linear system of first-order $A$-operator equations is given in the form (4.2), and that the matrices $B_0, \ldots, B_{n-1}$ are the coefficients of the matrix polynomial $\mathrm{Adj}(\lambda I - B)$. Then it holds that
$$\Delta_B(\vec{A})(\vec{x}) = \sum_{k=0}^{n-1} \vec{A}^{\,n-1-k}\Big(B_k\big(\vec{A}(\vec{x}) - B\vec{x}\big)\Big).$$*

*Proof.* Let $L_B$ be the linear operator defined by $L_B(\vec{x}) = \vec{A}(\vec{x}) - B\vec{x}$. Replacing $\lambda I - B$ by $L_B$ in the equation $\mathrm{Adj}(\lambda I - B)\,(\lambda I - B) = \Delta_B(\lambda)\,I$, which is justified because the constant matrices $B_k$ commute with $\vec{A}$, we obtain the statement.

The next theorem is an operator generalization of Cramer's rule.

Theorem 4.2 (the theorem of total reduction, vector form). *The linear system of first-order $A$-operator equations (4.2) can be reduced to the system of $n$-th order $A$-operator equations
$$\Delta_B(\vec{A})(\vec{x}) = \sum_{k=0}^{n-1} (-1)^k\, \vec{A}^{\,n-1-k}\big(\vec{\delta}_{k+1}(B;\vec{\varphi})\big).$$*

*Proof.* It is an immediate consequence of Theorems 4.1 and 3.3 as follows:
$$\Delta_B(\vec{A})(\vec{x}) = \sum_{k=0}^{n-1} \vec{A}^{\,n-1-k}\big(B_k\,\vec{\varphi}\big) = \sum_{k=0}^{n-1} (-1)^k\, \vec{A}^{\,n-1-k}\big(\vec{\delta}_{k+1}(B;\vec{\varphi})\big).$$

We can now rephrase the previous theorem as follows.

Theorem 4.3 (the theorem of total reduction). *The linear system of first-order $A$-operator equations (4.1) implies the system which consists of the $n$-th order $A$-operator equations
$$\Delta_B(A)(x_i) = \sum_{k=0}^{n-1} (-1)^k A^{n-1-k}\big(\delta_{k+1,i}(B(i \to \vec{\varphi}))\big), \quad 1 \le i \le n. \tag{4.8}$$*

*Remark 4.4.* System (4.8) has separated variables, and it is called totally reduced. The obtained system is suitable for applications, since it does not require a change of basis. This system consists of $n$-th order linear $A$-operator equations which differ only in the unknowns $x_i$ and in the nonhomogeneous terms.

Transformations of linear systems of operator equations into independent equations are important in applied mathematics [1]. In the following two sections we apply our theorem of total reduction to specific linear operators $A$.

#### 5. Cauchy Problem

Let us assume that $A = \frac{d}{dt}$ is the differentiation operator on the vector space of real functions and that system (4.1) has initial conditions $x_i(t_0) = c_i$, for $1 \le i \le n$. Then the Cauchy problem for system (4.1) has a unique solution. Using form (4.2), we obtain the additional initial conditions of the $i$-th equation in system (4.8). Consider

$$x_i^{(j)}(t_0) = \Big(B^j \vec{c} + \sum_{m=0}^{j-1} B^{j-1-m}\,\vec{\varphi}^{(m)}(t_0)\Big)_i, \quad 1 \le j \le n-1, \tag{5.1}$$

where $(\,\cdot\,)_i$ denotes the $i$-th coordinate. Then each equation in system (4.8) has a unique solution under the given conditions and the additional conditions (5.1), and these solutions form the unique solution of system (4.1). Therefore, formulae (4.8) can be used for solving systems of differential equations.
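The additional conditions come from expressing higher derivatives through the system: iterating $\vec{x}\,' = B\vec{x} + \vec{\varphi}$ gives $\vec{x}^{(j)} = B^j\vec{x} + \sum_{m=0}^{j-1} B^{j-1-m}\vec{\varphi}^{(m)}$, which, evaluated at the initial point, yields the extra data. A sympy sketch of this closed form (the matrix and free column are arbitrary illustrative choices):

```python
import sympy as sp

t, X1, X2 = sp.symbols('t X1 X2')
B = sp.Matrix([[2, 1], [0, 3]])
X = sp.Matrix([X1, X2])
phi = sp.Matrix([sp.sin(t), sp.exp(t)])
rhs = B * X + phi  # the system x' = B x + phi

def D(e):
    # total derivative of an expression in (t, X1, X2) along solutions of x' = B x + phi
    return sp.diff(e, t) + sp.diff(e, X1) * rhs[0] + sp.diff(e, X2) * rhs[1]

# closed form: x^(j) = B^j x + sum_{m=0}^{j-1} B^(j-1-m) phi^(m)
for j in (1, 2, 3):
    closed = B**j * X + sum((B**(j - 1 - m) * sp.diff(phi, t, m) for m in range(j)),
                            sp.zeros(2, 1))
    total = X.copy()
    for _ in range(j):
        total = total.applyfunc(D)
    assert sp.simplify(total - closed) == sp.zeros(2, 1)
```

Substituting $t = t_0$ and $\vec{x} = \vec{c}$ into the closed form gives exactly the additional initial conditions of the reduced equations.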

It is worth pointing out that the above method can also be extended to systems of difference equations.

#### 6. Differential Transcendency

Now suppose that $V$ is the vector space of meromorphic functions over the complex field and that $A$ is the differentiation operator, $A = \frac{d}{dz}$. Let us consider system (4.1) under these assumptions.

Recall that a function is differentially algebraic if it satisfies an algebraic differential equation with coefficients in the base field; otherwise, it is differentially transcendental (see [2–4, 8–10]).

Let us consider a nonhomogeneous linear differential equation of $n$-th order in the form (4.3), where the coefficients $c_0, \ldots, c_{n-1}$ are constants and $\varphi$ is a meromorphic function. If the solution $x$ is differentially transcendental, then $\varphi$ is also a differentially transcendental function. On the other hand, if $\varphi$ is differentially transcendental, then, based on Theorem 2.8 from [10], the solution $x$ of (4.3) is a differentially transcendental function. Therefore, we obtain the following equivalence.

Theorem 6.1. *Let $x$ be a solution of (4.3). Then $x$ is a differentially transcendental function if and only if $\varphi$ is a differentially transcendental function.*

We also have the following statement about differential transcendency.

Theorem 6.2. *Let $\varphi_j$ be the only differentially transcendental component of the free column $\vec{\varphi}$. Then, for any solution $\vec{x}$ of system (4.2), the corresponding entry $x_j$ is also a differentially transcendental function.*

*Proof.* The sum $\sum_{k=0}^{n-1} (-1)^k A^{n-1-k}\big(\delta_{k+1,j}(B(j \to \vec{\varphi}))\big)$ must be a differentially transcendental function, since $\varphi_j$ enters it through the term $A^{n-1}(\varphi_j)$ while all the remaining ingredients are differentially algebraic. The previous theorem applied to the equation
$$\Delta_B(A)(x_j) = \sum_{k=0}^{n-1} (-1)^k A^{n-1-k}\big(\delta_{k+1,j}(B(j \to \vec{\varphi}))\big)$$
implies that $x_j$ is a differentially transcendental function too.

Let us consider system (4.1), and let $\varphi_j$ be the only differentially transcendental component of the free column $\vec{\varphi}$. Then the coordinate $x_j$ of the solution is a differentially transcendental function too. Whether the other coordinates are differentially algebraic depends on the system matrix $B$. From the formulae of total reduction and Theorem 6.1 we obtain the following statement.

Theorem 6.3. *Let $\varphi_j$ be the only differentially transcendental component of the free column $\vec{\varphi}$ of system (4.1). Then the coordinate $x_i$, $i \ne j$, of the solution $\vec{x}$ is differentially algebraic if and only if the function $\varphi_j$ does not appear in the sum $\sum_{k=0}^{n-1} (-1)^k A^{n-1-k}\big(\delta_{k+1,i}(B(i \to \vec{\varphi}))\big)$.*

*Example 6.4.* Let us consider system (4.1) in the form (4.8) in dimensions $n = 2$ and $n = 3$, with $\varphi_1$ as the only differentially transcendental component. The function $x_1$ is differentially transcendental. For $n = 2$, the function $x_2$ is differentially algebraic if and only if $b_{21} = 0$. For $n = 3$, the function $x_2$ is differentially algebraic if and only if $b_{21} = 0$ and $b_{23}b_{31} = 0$, and the function $x_3$ is differentially algebraic iff $b_{31} = 0$ and $b_{21}b_{32} = 0$.
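The $n = 2$ case can be made explicit: the right-hand side of the reduced equation for $x_2$ is $A(\varphi_2) - \delta_{2,2}$ of the matrix with its second column replaced by $\vec{\varphi}$, and the coefficient of $\varphi_1$ in it is $b_{21}$, so $\varphi_1$ drops out exactly when $b_{21} = 0$. A sympy sketch of this computation:

```python
import sympy as sp

t = sp.symbols('t')
b11, b12, b21, b22 = sp.symbols('b11 b12 b21 b22')
phi1, phi2 = sp.Function('phi1')(t), sp.Function('phi2')(t)

# RHS of the reduced equation for x2 (n = 2):
# A(phi2) - delta_{2,2}(B with column 2 replaced by phi)
rhs_x2 = sp.diff(phi2, t) - sp.Matrix([[b11, phi1], [b21, phi2]]).det()

# the coefficient of phi1 is b21: x2 stays differentially algebraic iff b21 = 0
assert sp.expand(rhs_x2).coeff(phi1) == b21
```

The $n = 3$ conditions arise in the same way, from the coefficients of $\varphi_1$ in the second- and third-order minor sums.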

Let us emphasize that if we consider two or more differentially transcendental components of the free column $\vec{\varphi}$, then the differential transcendency of the solution coordinates also depends on some kind of differential independence among them (see, e.g., [8]).

#### Acknowledgment

Research is partially supported by the Ministry of Science and Education of the Republic of Serbia, Grant no. 174032.