Abstract

A method for solving the matrix linear equations 𝐴𝑋 + 𝐵𝑌 = 𝐶 and 𝐴𝑋 + 𝑌𝐵 = 𝐶 over commutative Bezout domains, based on the standard form of a pair of matrices with respect to generalized equivalence, is proposed. Formulas for the general solutions of such equations are derived, and criteria for the uniqueness of their particular solutions are established.

1. Preliminaries

1.1. Introduction

Matrix linear equations play a fundamental role in many tasks of control and dynamical systems theory [1–4]. Among such equations are the matrix linear bilateral equations with one and two variables
𝐴𝑋 + 𝑋𝐵 = 𝐶, (1.1)
𝐴𝑋 + 𝑌𝐵 = 𝐶, (1.2)
and the matrix linear unilateral equation
𝐴𝑋 + 𝐵𝑌 = 𝐶, (1.3)
where 𝐴, 𝐵, and 𝐶 are matrices of appropriate sizes over a certain field or ring, and 𝑋, 𝑌 are unknown matrices. Equations (1.1) and (1.2) are called Sylvester equations. The equation 𝐴𝑋 + 𝑋𝐴ᵀ = 𝐶, where 𝐴ᵀ is the transpose of 𝐴, is called the Lyapunov equation; it is a special case of the Sylvester equation. Equation (1.3) is called the matrix linear Diophantine equation [3, 4].

Roth [5] established criteria for the solvability of the matrix equations (1.1) and (1.2) whose coefficients 𝐴, 𝐵, and 𝐶 are matrices over a field.

Theorem 1.1 ([5]). The matrix equation (1.1), where 𝐴, 𝐵, and 𝐶 are matrices with elements in a field 𝐹, has a solution 𝑋 with elements in 𝐹 if and only if the matrices
𝑀 = [ 𝐴 𝐶 ; 0 𝐵 ], 𝑁 = [ 𝐴 0 ; 0 𝐵 ] (1.4)
are similar.
The matrix equation (1.2), where 𝐴, 𝐵, and 𝐶 are matrices with elements in 𝐹, has a solution 𝑋, 𝑌 with elements in 𝐹 if and only if the matrices 𝑀 and 𝑁 are equivalent.

Roth's theorem was extended by many authors to the matrix equations (1.1) and (1.2) in cases where their coefficients are matrices over principal ideal rings [6–8], over arbitrary commutative rings [9], and over other rings [10–14].

The matrix linear unilateral equation (1.3) has a solution if and only if one of the following equivalent conditions holds:
(a) a greatest common left divisor of the matrices 𝐴 and 𝐵 is a left divisor of the matrix 𝐶;
(b) the block matrices [ 𝐴 𝐵 𝐶 ] and [ 𝐴 𝐵 0 ] are right equivalent.

In the case where 𝐴, 𝐵, and 𝐶 in (1.3) are matrices over a polynomial ring 𝐹[𝜆], where 𝐹 is a field, these conditions were formulated in [1, 4]. It is not difficult to show that these solvability conditions also hold for the matrix linear unilateral equation (1.3) over a commutative Bezout domain.

The matrix equations (1.1), (1.2), and (1.3), where the coefficients 𝐴, 𝐵, and 𝐶 are matrices over a field 𝐹, reduce by means of the Kronecker product to equivalent systems of linear equations [15]. In particular, (1.1) over an algebraically closed field has a unique solution if and only if the matrices 𝐴 and 𝐵 have no common characteristic roots.

One of the methods of solving the matrix polynomial equation
𝐴(𝜆)𝑋(𝜆) + 𝑌(𝜆)𝐵(𝜆) = 𝐶(𝜆), (1.5)
where 𝐴(𝜆), 𝐵(𝜆), and 𝐶(𝜆) are matrices over a polynomial ring 𝐹[𝜆], is based on the reduction of the polynomial equation to an equivalent equation with matrix coefficients over the field 𝐹, that is,
𝐴𝑍 + 𝑍𝐵 = 𝐷, (1.6)
where 𝐴 and 𝐵 are the companion matrices of the matrix polynomials 𝐴(𝜆) = Σᵢ₌₀ʳ 𝐴ᵢ𝜆^(𝑟−𝑖) and 𝐵(𝜆) = Σⱼ₌₀ˢ 𝐵ⱼ𝜆^(𝑠−𝑗), respectively, 𝐷 is a matrix over the field 𝐹, and 𝑍 is an unknown matrix [16, 17].

Equation (1.5) has a unique solution 𝑋₀(𝜆), 𝑌₀(𝜆) of bounded degree, deg 𝑋₀(𝜆) < deg 𝐵(𝜆), if and only if (det 𝐴(𝜆), det 𝐵(𝜆)) = 1 [17].

Feinstein and Bar-Ness [18] established that for (1.5), in which at least one of the matrix coefficients 𝐴(𝜆) or 𝐵(𝜆) is regular, there exists a unique minimal solution 𝑋₀(𝜆), 𝑌₀(𝜆), with deg 𝑋₀(𝜆) < deg 𝐵(𝜆) and deg 𝑌₀(𝜆) < deg 𝐴(𝜆), if and only if (det 𝐴(𝜆), det 𝐵(𝜆)) = 1 and deg 𝐶(𝜆) ≤ deg 𝐴(𝜆) + deg 𝐵(𝜆) − 1. A similar result was established in [19] in the case where at least one of the matrix coefficients 𝐴(𝜆) or 𝐵(𝜆) is regularizable.

In this paper we propose a method of solving the matrix linear equations (1.2) and (1.3) over a commutative Bezout domain. The method is based on the standard form of a pair of matrices with respect to generalized equivalence, introduced in [20, 21], and on linear congruences. We introduce the notion of a particular solution of such matrix equations, establish criteria for the uniqueness of particular solutions, and write down formulas for the general solutions of these equations.

1.2. The Linear Congruences and Diophantine Equations

Let 𝑅 be a commutative Bezout domain. A commutative domain 𝑅 is called a Bezout domain if any two elements 𝑎, 𝑏 ∈ 𝑅 have a greatest common divisor (𝑎, 𝑏) = 𝑑 ∈ 𝑅 such that 𝑑 = 𝑝𝑎 + 𝑞𝑏 for some 𝑝, 𝑞 ∈ 𝑅 [22, 23]. Note that a commutative domain is a Bezout domain if and only if every finitely generated ideal is principal.
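The Bezout property can be illustrated over the simplest Bezout domain ℤ. The following sketch (illustrative code, not part of the paper) computes 𝑑 = (𝑎, 𝑏) together with Bezout coefficients 𝑝, 𝑞 via the extended Euclidean algorithm:

```python
# Over the Bezout domain Z, the extended Euclidean algorithm produces,
# for any a and b, a greatest common divisor d = (a, b) together with
# Bezout coefficients p, q such that d = p*a + q*b.

def extended_gcd(a, b):
    """Return (d, p, q) with d = gcd(a, b) and d == p*a + q*b."""
    old_r, r = a, b
    old_p, p = 1, 0
    old_q, q = 0, 1
    while r != 0:
        quotient = old_r // r
        old_r, r = r, old_r - quotient * r
        old_p, p = p, old_p - quotient * p
        old_q, q = q, old_q - quotient * q
    return old_r, old_p, old_q

d, p, q = extended_gcd(9, 6)
assert d == p * 9 + q * 6 == 3
```

In a general commutative Bezout domain the coefficients 𝑝, 𝑞 exist by definition; over ℤ (or any Euclidean domain) they are effectively computable as above.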

Further, 𝑈(𝑅) denotes the group of units of 𝑅, and 𝑅𝑚 denotes a complete set of residues modulo the ideal (𝑚) generated by an element 𝑚 ∈ 𝑅, briefly, a complete set of residues modulo 𝑚. An element 𝑎 of 𝑅 is said to be an associate of an element 𝑏 of 𝑅 if 𝑎 = 𝑏𝑢, where 𝑢 ∈ 𝑈(𝑅). A set of elements of 𝑅, one from each associate class, is said to be a complete set of nonassociates [24]. For example, if 𝑅 = ℤ is the ring of integers, then the set of nonnegative integers {0, 1, 2, …} can be chosen as a complete set of nonassociates, and 𝑅𝑚 can be chosen as the set of smallest nonnegative residues, that is, 𝑅𝑚 = {0, 1, 2, …, 𝑚−1}.

Many properties of divisibility in principal ideal rings [24–27] can be easily generalized to commutative Bezout domains. We recall some of them which will be used later.

In what follows, 𝑅 will always denote a commutative Bezout domain.

Lemma 1.2. Each residue class 𝑎 (mod 𝑚) over 𝑅 can be represented as the union
𝑎 (mod 𝑚) = ⋃_{𝑟∈𝑅𝑑} (𝑎 + 𝑚𝑟) (mod 𝑚𝑑), (1.7)
where the union is taken over all residues 𝑟 of an arbitrary complete set of residues 𝑅𝑑 modulo 𝑑, 𝑑 ≠ 0.

In the case where 𝑅 is a Euclidean ring, this lemma was proved in [27]. The same argument proves the lemma in the case where 𝑅 is a commutative Bezout domain.
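Over 𝑅 = ℤ, Lemma 1.2 can be checked directly. The following illustrative sketch (the names and the numbers 𝑎 = 2, 𝑚 = 5, 𝑑 = 3 are ours, not the paper's) verifies the decomposition (1.7) inside one full period:

```python
# A check of Lemma 1.2 in the Bezout domain Z: the residue class
# a (mod m) is the union, over r in R_d = {0, ..., d-1}, of the
# classes a + m*r (mod m*d).  Both sides are compared inside one
# full period 0, ..., m*d - 1.

def residue_class(a, modulus, period):
    """Elements of the class a (mod modulus) lying in range(period)."""
    return {x for x in range(period) if (x - a) % modulus == 0}

a, m, d = 2, 5, 3
left = residue_class(a, m, m * d)                     # a (mod m)
right = set()
for r in range(d):                                    # r runs over R_d
    right |= residue_class(a + m * r, m * d, m * d)   # a + m*r (mod m*d)
assert left == right == {2, 7, 12}
```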

The class of elements 𝑥 ≡ 𝑥₀ (mod 𝑚) satisfying the congruence 𝑎𝑥 ≡ 𝑏 (mod 𝑚) is called a solution of this congruence.

Lemma 1.3. Let
𝑎𝑥 ≡ 𝑏 (mod 𝑚) (1.8)
and (𝑎, 𝑚) = 𝑑, where 𝑎, 𝑏, 𝑚 ∈ 𝑅, 𝑑 ∈ 𝑅. Congruence (1.8) has a solution if and only if 𝑑 ∣ 𝑏, that is, 𝑏 = 𝑏₁𝑑.
Let 𝑎 = 𝑎₁𝑑, 𝑏 = 𝑏₁𝑑, 𝑚 = 𝑚₁𝑑, where (𝑎₁, 𝑚₁) = 1, and let 𝑥 ≡ 𝑥₀ (mod 𝑚₁) be a solution of the congruence
𝑎₁𝑥 ≡ 𝑏₁ (mod 𝑚₁). (1.9)
Then the general solution of congruence (1.8) has the form
𝑥 ≡ 𝑥₀ + 𝑚₁𝑟 (mod 𝑚), (1.10)
where 𝑟 is any element of 𝑅𝑑.

Proof. Necessity is obvious.
Sufficiency. Let (𝑎, 𝑚) = 𝑑 and 𝑑 ∣ 𝑏. Dividing both sides of congruence (1.8) and the modulus 𝑚 by 𝑑, we get congruence (1.9), where (𝑎₁, 𝑚₁) = 1. There exist elements 𝑢, 𝑣 ∈ 𝑅 such that 𝑎₁𝑢 + 𝑚₁𝑣 = 1. Thus we have 𝑎₁𝑢 ≡ 1 (mod 𝑚₁). Multiplying both sides of this congruence by 𝑏₁, we obtain 𝑎₁𝑢𝑏₁ ≡ 𝑏₁ (mod 𝑚₁). Therefore, 𝑥 ≡ 𝑢𝑏₁ (mod 𝑚₁) is a solution of congruence (1.9). Set 𝑥₀ = 𝑢𝑏₁. Then by Lemma 1.2 we get the general solution of congruence (1.8): 𝑥 ≡ 𝑥₀ + 𝑚₁𝑟 (mod 𝑚), where 𝑟 is an arbitrary element of 𝑅𝑑. This proves the lemma.
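The proof of Lemma 1.3 is constructive. A minimal sketch over ℤ (illustrative code; Python's `pow(a, -1, m)`, available from Python 3.8, computes the inverse 𝑢 used in the proof):

```python
# A sketch of Lemma 1.3 over Z: solving a*x ≡ b (mod m).  The function
# returns the residue x0 (mod m1), with m1 = m // d; the general
# solution is then x ≡ x0 + m1*r (mod m) for r in R_d, as in (1.10).

from math import gcd

def solve_congruence(a, b, m):
    d = gcd(a, m)
    if b % d != 0:
        return None                      # no solution: d must divide b
    a1, b1, m1 = a // d, b // d, m // d  # reduce to (1.9): a1*x ≡ b1 (mod m1)
    u = pow(a1, -1, m1)                  # a1*u ≡ 1 (mod m1); exists since (a1, m1) = 1
    x0 = (u * b1) % m1
    return x0, m1, d

x0, m1, d = solve_congruence(9, 6, 12)   # 9x ≡ 6 (mod 12), d = (9, 12) = 3
solutions = sorted((x0 + m1 * r) % 12 for r in range(d))
assert all(9 * x % 12 == 6 for x in solutions)   # {2, 6, 10} all work
```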

Corollary 1.4. The congruence (1.8) has a unique solution 𝑥 ≡ 𝑥₀ (mod 𝑚) such that 𝑥₀ ∈ 𝑅𝑚 if and only if (𝑎, 𝑚) = 1.

Let
𝑎𝑥 + 𝑏𝑦 = 𝑐 (1.11)
be a linear Diophantine equation over 𝑅 and let (𝑎, 𝑏) = 𝑑. Equation (1.11) has a solution if and only if 𝑑 ∣ 𝑐.

Suppose that 𝑎 = 𝑎₁𝑑, 𝑏 = 𝑏₁𝑑, and 𝑐 = 𝑐₁𝑑, where (𝑎₁, 𝑏₁) = 1. Then (1.11) implies
𝑎₁𝑥 + 𝑏₁𝑦 = 𝑐₁. (1.12)
Let 𝑥₀, 𝑦₀ be a solution of (1.12), that is, 𝑥₀ is a solution of the congruence 𝑎₁𝑥 ≡ 𝑐₁ (mod 𝑏₁) with 𝑥₀ ∈ 𝑅𝑏₁, and 𝑦₀ = (𝑐₁ − 𝑎₁𝑥₀)/𝑏₁. It is easy to verify that if 𝑥₀ ∈ 𝑅𝑏₁, then 𝑥₀ ∈ 𝑅𝑏.

The solution 𝑥₀, 𝑦₀ of (1.12) is obviously a solution of (1.11).

Definition 1.5. A solution 𝑥₀, 𝑦₀ of (1.11) such that 𝑥₀ ∈ 𝑅𝑏 is called a particular solution of this equation.

Then, by Lemmas 1.2 and 1.3, the general solution of (1.11) has the form
𝑥 = 𝑥₀ + (𝑏/𝑑)𝑟 + 𝑏𝑘,  𝑦 = 𝑦₀ − (𝑎/𝑑)𝑟 − 𝑎𝑘, (1.13)
where 𝑟 is an arbitrary element of 𝑅𝑑 and 𝑘 is any element of 𝑅.

Corollary 1.6. A particular solution 𝑥₀, 𝑦₀ of (1.11), with 𝑥₀ ∈ 𝑅𝑏, is unique if and only if (𝑎, 𝑏) = 1.

Example 1.7. Let
9𝑥 + 6𝑦 = 6 (1.14)
be a linear Diophantine equation over the ring ℤ. Then (9, 6) = 3 = 𝑑, and 3 ∣ 6, so (1.14) is solvable. The particular solutions of (1.14) are
𝑥₁⁽⁰⁾ = 0, 𝑦₁⁽⁰⁾ = 1;  𝑥₂⁽⁰⁾ = 2, 𝑦₂⁽⁰⁾ = −2;  𝑥₃⁽⁰⁾ = 4, 𝑦₃⁽⁰⁾ = −5, (1.15)
because 𝑥₁⁽⁰⁾, 𝑥₂⁽⁰⁾, 𝑥₃⁽⁰⁾ ∈ 𝑅₆ = {0, 1, 2, 3, 4, 5}.
Then the general solution of (1.14) can be written using (1.13):
𝑥 = 0 + (6/3)𝑟 + 6𝑘,  𝑦 = 1 − (9/3)𝑟 − 9𝑘, (1.16)
that is,
𝑥 = 2𝑟 + 6𝑘,  𝑦 = 1 − 3𝑟 − 9𝑘, (1.17)
where 𝑟 is an arbitrary element of 𝑅₃ = {0, 1, 2} and 𝑘 is any element of ℤ.
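The computations of Example 1.7 can be checked mechanically. The following illustrative snippet verifies the particular solutions (1.15) and the general solution (1.17) over ℤ:

```python
# Verification of Example 1.7 over Z: 9x + 6y = 6.  The three particular
# solutions have x0 in R_6 = {0, ..., 5}, and the general solution (1.17)
# x = 2r + 6k, y = 1 - 3r - 9k reproduces each of them at k = 0.

particular = [(0, 1), (2, -2), (4, -5)]
for x0, y0 in particular:
    assert 9 * x0 + 6 * y0 == 6 and x0 in range(6)

# every (r, k) with r in R_3 = {0, 1, 2} gives a solution of (1.14)
for r in range(3):
    for k in range(-2, 3):
        x, y = 2 * r + 6 * k, 1 - 3 * r - 9 * k
        assert 9 * x + 6 * y == 6

# the particular solutions arise at (r, k) = (0, 0), (1, 0), (2, 0)
assert [(2 * r, 1 - 3 * r) for r in range(3)] == particular
```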

1.3. Standard Form of a Pair of Matrices

Let 𝑅 be a commutative Bezout domain with diagonal reduction of matrices [28], that is, for every matrix 𝐴 of the matrix ring 𝑀(𝑛, 𝑅) there exist invertible matrices 𝑈, 𝑉 ∈ 𝐺𝐿(𝑛, 𝑅) such that
𝑈𝐴𝑉 = 𝐷𝐴 = diag(𝜑₁, …, 𝜑ₙ), 𝜑ᵢ ∣ 𝜑ᵢ₊₁, 𝑖 = 1, …, 𝑛 − 1. (1.18)
If the 𝜑ᵢ, 𝑖 = 1, …, 𝑛, are chosen from a fixed complete set of nonassociates, then the matrix 𝐷𝐴 is unique and is called the canonical diagonal form (the Smith normal form) of the matrix 𝐴. Examples of such rings are the so-called adequate rings. A ring 𝑅 is called adequate if 𝑅 is a commutative domain in which every finitely generated ideal is principal and, for every 𝑎, 𝑏 ∈ 𝑅 with 𝑎 ≠ 0, the element 𝑎 can be represented as 𝑎 = 𝑐𝑑, where (𝑐, 𝑏) = 1 and (𝑑ᵢ, 𝑏) ≠ 1 for every nonunit factor 𝑑ᵢ of 𝑑 [29].

Definition 1.8. The pairs (𝐴₁, 𝐴₂) and (𝐵₁, 𝐵₂) of matrices 𝐴ᵢ, 𝐵ᵢ ∈ 𝑀(𝑛, 𝑅), 𝑖 = 1, 2, are called generalized equivalent pairs if 𝐴ᵢ = 𝑈𝐵ᵢ𝑉ᵢ, 𝑖 = 1, 2, for some invertible matrices 𝑈 and 𝑉ᵢ over 𝑅.

In [20, 21], standard forms of a pair of matrices with respect to generalized equivalence were established.

Theorem 1.9. Let 𝑅 be an adequate ring, let 𝐴, 𝐵 ∈ 𝑀(𝑛, 𝑅) be nonsingular matrices, and let
𝐷𝐴 = Φ = diag(𝜑₁, …, 𝜑ₙ),  𝐷𝐵 = Ψ = diag(𝜓₁, …, 𝜓ₙ) (1.19)
be their canonical diagonal forms.
Then the pair of matrices (𝐴, 𝐵) is generalized equivalent to the pair (𝐷𝐴, 𝑇𝐵), where 𝑇𝐵 is the lower triangular matrix
𝑇𝐵 = [ 𝜓₁ 0 ⋯ 0 ; 𝜓₁𝑡₂₁ 𝜓₂ ⋯ 0 ; ⋮ ⋮ ⋱ ⋮ ; 𝜓₁𝑡ₙ₁ 𝜓₂𝑡ₙ₂ ⋯ 𝜓ₙ ], (1.20)
with 𝑡ᵢⱼ ∈ 𝑅𝛿ᵢⱼ, where 𝛿ᵢⱼ = (𝜑ᵢ/𝜑ⱼ, 𝜓ᵢ/𝜓ⱼ), 𝑖, 𝑗 = 1, …, 𝑛, 𝑖 > 𝑗.

The pair (𝐷𝐴,𝑇𝐵) defined in Theorem 1.9 is called the standard form of the pair of matrices (𝐴,𝐵) or the standard pair of matrices (𝐴,𝐵).

Definition 1.10. The pair (𝐴,𝐵) is called diagonalizable if it is generalized equivalent to the pair of diagonal matrices (𝐷𝐴,𝐷𝐵), that is, its standard form is the pair of diagonal matrices (𝐷𝐴,𝐷𝐵).

Corollary 1.11. Let 𝐴, 𝐵 ∈ 𝑀(𝑛, 𝑅). If (𝜑ₙ/𝜑₁, 𝜓ₙ/𝜓₁) = 1, then the pair of matrices (𝐴, 𝐵) is diagonalizable.

Taking into account Corollary 1.11, it is clear that if (det 𝐴, det 𝐵) = 1, then the standard form of the pair of matrices (𝐴, 𝐵) is the pair of diagonal matrices (𝐷𝐴, 𝐷𝐵).

Let us formulate the criterion of diagonalizability of the pair of matrices.

Theorem 1.12. Let 𝐴, 𝐵 ∈ 𝑀(𝑛, 𝑅) and let 𝐴 be a nonsingular matrix. Then the pair of matrices (𝐴, 𝐵) is generalized equivalent to the pair of diagonal matrices (𝐷𝐴, 𝐷𝐵) if and only if the matrices (adj 𝐴)𝐵 and (adj 𝐷𝐴)𝐷𝐵 are equivalent, where adj 𝐴 denotes the adjugate (classical adjoint) matrix of 𝐴.

2. The Matrix Linear Unilateral Equations 𝐴𝑋+𝐵𝑌=𝐶

2.1. The Construction of the Solutions of the Matrix Linear Unilateral Equations with Two Variables

Suppose that the matrix linear unilateral equation (1.3) is solvable, and let (𝐷𝐴, 𝑇𝐵) be the standard form of the pair of matrices (𝐴, 𝐵) from (1.3) with respect to generalized equivalence, that is,
𝐷𝐴 = Φ = 𝑈𝐴𝑉𝐴 = diag(𝜑₁, …, 𝜑ₙ),
𝑇𝐵 = 𝑈𝐵𝑉𝐵 = [ 𝜓₁ 0 ⋯ 0 ; 𝜓₁𝑡₂₁ 𝜓₂ ⋯ 0 ; ⋮ ⋮ ⋱ ⋮ ; 𝜓₁𝑡ₙ₁ 𝜓₂𝑡ₙ₂ ⋯ 𝜓ₙ ] (2.1)
is a lower triangular matrix of the form (1.20) with the principal diagonal of
𝐷𝐵 = Ψ = diag(𝜓₁, …, 𝜓ₙ), (2.2)
where 𝑈, 𝑉𝐴, 𝑉𝐵 ∈ 𝐺𝐿(𝑛, 𝑅).

Then (1.3) is equivalent to the equation
𝐷𝐴𝑋̃ + 𝑇𝐵𝑌̃ = 𝐶̃, (2.3)
where 𝑋̃ = 𝑉𝐴⁻¹𝑋, 𝑌̃ = 𝑉𝐵⁻¹𝑌, and 𝐶̃ = 𝑈𝐶.

A pair of matrices 𝑋̃₀, 𝑌̃₀ satisfying (2.3) is called a solution of this equation. Then
𝑋₀ = 𝑉𝐴𝑋̃₀,  𝑌₀ = 𝑉𝐵𝑌̃₀ (2.4)
is a solution of (1.3).

The matrix equation (2.3) is equivalent to the system of linear equations
𝜑₁𝑥̃₁₁ + 𝜓₁𝑦̃₁₁ = 𝑐̃₁₁,
𝜑₁𝑥̃₁₂ + 𝜓₁𝑦̃₁₂ = 𝑐̃₁₂,
⋯
𝜑₁𝑥̃₁ₙ + 𝜓₁𝑦̃₁ₙ = 𝑐̃₁ₙ,
𝜑₂𝑥̃₂₁ + 𝜓₁𝑡₂₁𝑦̃₁₁ + 𝜓₂𝑦̃₂₁ = 𝑐̃₂₁,
⋯
𝜑ₙ𝑥̃ₙₙ + 𝜓₁𝑡ₙ₁𝑦̃₁ₙ + ⋯ + 𝜓ₙ₋₁𝑡ₙ,ₙ₋₁𝑦̃ₙ₋₁,ₙ + 𝜓ₙ𝑦̃ₙₙ = 𝑐̃ₙₙ, (2.5)
in the variables 𝑥̃ᵢⱼ, 𝑦̃ᵢⱼ, 𝑖, 𝑗 = 1, …, 𝑛, where the 𝑡ᵢⱼ, 𝑖, 𝑗 = 1, …, 𝑛, are the entries of 𝑇𝐵 from (2.1); briefly,
𝜑ᵢ𝑥̃ᵢⱼ + Σₗ₌₁ⁱ⁻¹ 𝜓ₗ𝑡ᵢₗ𝑦̃ₗⱼ + 𝜓ᵢ𝑦̃ᵢⱼ = 𝑐̃ᵢⱼ,  𝑖, 𝑗 = 1, …, 𝑛, (2.6)
where 𝑋̃ = [𝑥̃ᵢⱼ]₁ⁿ, 𝑌̃ = [𝑦̃ᵢⱼ]₁ⁿ, and 𝐶̃ = [𝑐̃ᵢⱼ]₁ⁿ.

Solving this system reduces to successively solving linear Diophantine equations of the form
𝜑ᵢ𝑥̃ᵢⱼ + 𝜓ᵢ𝑦̃ᵢⱼ = 𝑐̃ᵢⱼ, (2.7)
where, for 𝑖 > 1, the already-found terms 𝜓ₗ𝑡ᵢₗ𝑦̃ₗⱼ, 𝑙 < 𝑖, are carried to the right-hand side.

Using the solutions of system (2.6), we construct the solutions 𝑋̃, 𝑌̃ of matrix equation (2.3). Then 𝑋 = 𝑉𝐴𝑋̃ and 𝑌 = 𝑉𝐵𝑌̃ are the solutions of matrix equation (1.3).
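The successive scheme (2.5)-(2.7) can be sketched over ℤ as follows. The instance below is our own illustration (not the paper's Example 2.2), chosen so that every arising Diophantine equation is solvable; the code assumes solvability and returns one particular solution:

```python
# A sketch of the successive solving scheme (2.5)-(2.7) over Z.  The
# equation D_A*X + T_B*Y = C, with D_A = diag(phi) and T_B lower
# triangular, is solved entry by entry: in row i the already-computed
# entries Y[l][j], l < i, are moved to the right-hand side, leaving a
# linear Diophantine equation phi_i * x + psi_i * y = rhs.

from math import gcd

def solve_diophantine(a, b, c):
    """One particular solution (x0, y0) of a*x + b*y = c over Z, or None."""
    d = gcd(a, b)
    if c % d != 0:
        return None
    a1, b1, c1 = a // d, b // d, c // d
    x0 = (pow(a1, -1, abs(b1)) * c1) % abs(b1) if abs(b1) != 1 else 0
    return x0, (c1 - a1 * x0) // b1

def solve_unilateral(phi, TB, C):
    """Particular solution (X, Y) of diag(phi)*X + TB*Y = C, row by row."""
    n = len(phi)
    X = [[0] * n for _ in range(n)]
    Y = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            rhs = C[i][j] - sum(TB[i][l] * Y[l][j] for l in range(i))
            X[i][j], Y[i][j] = solve_diophantine(phi[i], TB[i][i], rhs)
    return X, Y

phi = [1, 9]                    # phi_1 = 1, phi_2 = 9
TB = [[2, 0], [6, 12]]          # psi_1 = 2, psi_2 = 12; entry (2,1) = psi_1 * t_21
C = [[3, 4], [6, 15]]
X, Y = solve_unilateral(phi, TB, C)
for i in range(2):
    for j in range(2):
        lhs = phi[i] * X[i][j] + sum(TB[i][l] * Y[l][j] for l in range(i + 1))
        assert lhs == C[i][j]
```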

2.2. The General Solution of the Matrix Equation 𝐴𝑋+𝐵𝑌=𝐶 with the Diagonalizable Pair of Matrices (𝐴,𝐵)

Suppose that the pair of matrices (𝐴, 𝐵) is diagonalizable, that is,
𝑈𝐴𝑉𝐴 = 𝐷𝐴 = Φ = diag(𝜑₁, …, 𝜑ₙ),  𝑈𝐵𝑉𝐵 = 𝐷𝐵 = Ψ = diag(𝜓₁, …, 𝜓ₙ) (2.8)
for some matrices 𝑈, 𝑉𝐴, 𝑉𝐵 ∈ 𝐺𝐿(𝑛, 𝑅).

Then (1.3) is equivalent to the equation
Φ𝑋̃ + Ψ𝑌̃ = 𝐶̃, (2.9)
where 𝑋̃ = 𝑉𝐴⁻¹𝑋, 𝑌̃ = 𝑉𝐵⁻¹𝑌, and 𝐶̃ = 𝑈𝐶.

From matrix equation (2.9), we get the system of linear Diophantine equations
𝜑ᵢ𝑥̃ᵢⱼ + 𝜓ᵢ𝑦̃ᵢⱼ = 𝑐̃ᵢⱼ,  𝑖, 𝑗 = 1, …, 𝑛. (2.10)

Let 𝑥̃ᵢⱼ⁽⁰⁾, 𝑦̃ᵢⱼ⁽⁰⁾, 𝑖, 𝑗 = 1, …, 𝑛, be a particular solution of the corresponding equation of system (2.10), that is, 𝑥̃ᵢⱼ⁽⁰⁾ is a solution of the congruence 𝜑ᵢ𝑥̃ᵢⱼ ≡ 𝑐̃ᵢⱼ (mod 𝜓ᵢ) with 𝑥̃ᵢⱼ⁽⁰⁾ ∈ 𝑅𝜓ᵢ, and 𝑦̃ᵢⱼ⁽⁰⁾ = (𝑐̃ᵢⱼ − 𝜑ᵢ𝑥̃ᵢⱼ⁽⁰⁾)/𝜓ᵢ.

By formula (1.13), the general solution of the corresponding equation of system (2.10) has the form
𝑥̃ᵢⱼ = 𝑥̃ᵢⱼ⁽⁰⁾ + (𝜓ᵢ/𝑑ᵢᵢ)𝑟ᵢ + 𝜓ᵢ𝑘ᵢⱼ,  𝑦̃ᵢⱼ = 𝑦̃ᵢⱼ⁽⁰⁾ − (𝜑ᵢ/𝑑ᵢᵢ)𝑟ᵢ − 𝜑ᵢ𝑘ᵢⱼ,  𝑖, 𝑗 = 1, …, 𝑛, (2.11)
where 𝑑ᵢᵢ = (𝜑ᵢ, 𝜓ᵢ), the 𝑟ᵢ are arbitrary elements of 𝑅𝑑ᵢᵢ, and the 𝑘ᵢⱼ are any elements of 𝑅, 𝑖, 𝑗 = 1, …, 𝑛. The particular solution of matrix equation (2.9) is
𝑋̃₀ = [𝑥̃ᵢⱼ⁽⁰⁾]₁ⁿ,  𝑌̃₀ = [𝑦̃ᵢⱼ⁽⁰⁾]₁ⁿ, (2.12)
where 𝑥̃ᵢⱼ⁽⁰⁾, 𝑦̃ᵢⱼ⁽⁰⁾, 𝑖, 𝑗 = 1, …, 𝑛, is a particular solution of the corresponding equation of system (2.10). Then
𝑋₀ = 𝑉𝐴𝑋̃₀,  𝑌₀ = 𝑉𝐵𝑌̃₀ (2.13)
is a particular solution of matrix equation (1.3).

Thus we get the following theorem.

Theorem 2.1. Let the pair of matrices (𝐴, 𝐵) from matrix equation (1.3) be diagonalizable with standard pair (Φ, Ψ) of the form (2.8), and let 𝑋̃₀, 𝑌̃₀ be a particular solution of matrix equation (2.9). Then the general solution of matrix equation (2.9) is
𝑋̃ = 𝑋̃₀ + diag((𝜓₁/𝑑₁₁)𝑟₁, …, (𝜓ₙ/𝑑ₙₙ)𝑟ₙ)𝐿 + Ψ𝐾,
𝑌̃ = 𝑌̃₀ − diag((𝜑₁/𝑑₁₁)𝑟₁, …, (𝜑ₙ/𝑑ₙₙ)𝑟ₙ)𝐿 − Φ𝐾, (2.14)
where 𝑑ᵢᵢ = (𝜑ᵢ, 𝜓ᵢ), the 𝑟ᵢ are arbitrary elements of 𝑅𝑑ᵢᵢ, 𝑖 = 1, …, 𝑛; 𝐿 = [𝑙ᵢⱼ]₁ⁿ with 𝑙ᵢⱼ = 1, 𝑖, 𝑗 = 1, …, 𝑛; and 𝐾 = [𝑘ᵢⱼ]₁ⁿ, where the 𝑘ᵢⱼ are arbitrary elements of 𝑅.
The general solution of matrix equation (1.3) has the form
𝑋 = 𝑉𝐴𝑋̃,  𝑌 = 𝑉𝐵𝑌̃. (2.15)

Example 2.2. Consider the equation
𝐴𝑋 + 𝐵𝑌 = 𝐶, (2.16)
where
𝐴 = [ 17 35 ; 9 18 ],  𝐵 = [ 26 76 ; 12 36 ],  𝐶 = [ 15 34 ; 6 15 ] (2.17)
are matrices over ℤ and
𝑋 = [ 𝑥₁₁ 𝑥₁₂ ; 𝑥₂₁ 𝑥₂₂ ],  𝑌 = [ 𝑦₁₁ 𝑦₁₂ ; 𝑦₂₁ 𝑦₂₂ ] (2.18)
are unknown matrices.
The matrix equation (2.16) is solvable.
By Theorem 1.12, the pair of matrices (𝐴, 𝐵) from matrix equation (2.16) is diagonalizable, since the matrices
(adj 𝐴)𝐵 = [ 18 −35 ; −9 17 ][ 26 76 ; 12 36 ] = [ 48 108 ; −30 −72 ],
(adj 𝐷𝐴)𝐷𝐵 = [ 9 0 ; 0 1 ][ 2 0 ; 0 12 ] = [ 18 0 ; 0 12 ] (2.19)
are equivalent. Therefore,
𝑈𝐴𝑉𝐴 = 𝐷𝐴 = Φ = diag(1, 9), 𝜑₁ = 1, 𝜑₂ = 9,
𝑈𝐵𝑉𝐵 = 𝐷𝐵 = Ψ = diag(2, 12), 𝜓₁ = 2, 𝜓₂ = 12, (2.20)
where
𝑈 = [ 1 −2 ; 0 1 ],  𝑉𝐴 = [ −2 −1 ; 1 1 ],  𝑉𝐵 = [ 3 −2 ; −1 1 ]. (2.21)
Then (2.16) is equivalent to the equation
Φ𝑋̃ + Ψ𝑌̃ = 𝐶̃, (2.22)
where
𝑋̃ = 𝑉𝐴⁻¹𝑋 = [ 𝑥̃₁₁ 𝑥̃₁₂ ; 𝑥̃₂₁ 𝑥̃₂₂ ],  𝑌̃ = 𝑉𝐵⁻¹𝑌 = [ 𝑦̃₁₁ 𝑦̃₁₂ ; 𝑦̃₂₁ 𝑦̃₂₂ ],  𝐶̃ = 𝑈𝐶 = [ 3 4 ; 6 15 ]. (2.23)
From matrix equation (2.22), we get the system of linear Diophantine equations:
𝑥̃₁₁ + 2𝑦̃₁₁ = 3,  𝑥̃₁₂ + 2𝑦̃₁₂ = 4,  9𝑥̃₂₁ + 12𝑦̃₂₁ = 6,  9𝑥̃₂₂ + 12𝑦̃₂₂ = 15. (2.24)
A particular solution of each linear equation of system (2.24) is
𝑥̃₁₁⁽⁰⁾ = 1, 𝑦̃₁₁⁽⁰⁾ = 1;  𝑥̃₁₂⁽⁰⁾ = 0, 𝑦̃₁₂⁽⁰⁾ = 2;  𝑥̃₂₁⁽⁰⁾ = 2, 𝑦̃₂₁⁽⁰⁾ = −1;  𝑥̃₂₂⁽⁰⁾ = 3, 𝑦̃₂₂⁽⁰⁾ = −1. (2.25)
The particular solution of matrix equation (2.22) is
𝑋̃₀ = [ 1 0 ; 2 3 ],  𝑌̃₀ = [ 1 2 ; −1 −1 ]. (2.26)
Then by (2.14) the general solution of matrix equation (2.22) is
𝑋̃ = [ 1 0 ; 2 3 ] + [ 2𝑟₁ 2𝑟₁ ; 4𝑟₂ 4𝑟₂ ] + [ 2𝑘₁₁ 2𝑘₁₂ ; 12𝑘₂₁ 12𝑘₂₂ ],
𝑌̃ = [ 1 2 ; −1 −1 ] − [ 𝑟₁ 𝑟₁ ; 3𝑟₂ 3𝑟₂ ] − [ 𝑘₁₁ 𝑘₁₂ ; 9𝑘₂₁ 9𝑘₂₂ ], (2.27)
that is,
𝑋̃ = [ 1 + 2𝑘₁₁  2𝑘₁₂ ; 2 + 4𝑟₂ + 12𝑘₂₁  3 + 4𝑟₂ + 12𝑘₂₂ ],
𝑌̃ = [ 1 − 𝑘₁₁  2 − 𝑘₁₂ ; −1 − 3𝑟₂ − 9𝑘₂₁  −1 − 3𝑟₂ − 9𝑘₂₂ ], (2.28)
where 𝑟₁ is from 𝑅₁ = {0}, 𝑟₂ is an arbitrary element of 𝑅₃ = {0, 1, 2}, and the 𝑘ᵢⱼ, 𝑖, 𝑗 = 1, 2, are arbitrary elements of ℤ.
Finally, the general solution of matrix equation (2.16) is
𝑋 = 𝑉𝐴𝑋̃ = [ −4 − 4𝑟₂ − 4𝑘₁₁ − 12𝑘₂₁  −3 − 4𝑟₂ − 4𝑘₁₂ − 12𝑘₂₂ ; 3 + 4𝑟₂ + 2𝑘₁₁ + 12𝑘₂₁  3 + 4𝑟₂ + 2𝑘₁₂ + 12𝑘₂₂ ],
𝑌 = 𝑉𝐵𝑌̃ = [ 5 + 6𝑟₂ − 3𝑘₁₁ + 18𝑘₂₁  8 + 6𝑟₂ − 3𝑘₁₂ + 18𝑘₂₂ ; −2 − 3𝑟₂ + 𝑘₁₁ − 9𝑘₂₁  −3 − 3𝑟₂ + 𝑘₁₂ − 9𝑘₂₂ ]. (2.29)
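The general solution of Example 2.2 can be verified numerically. The following illustrative check, with 𝐴 = [17 35; 9 18], 𝐵 = [26 76; 12 36], 𝐶 = [15 34; 6 15] as in (2.17), confirms 𝐴𝑋 + 𝐵𝑌 = 𝐶 for several values of the parameters:

```python
# Numerical check of Example 2.2 over Z: for every r2 in R_3 = {0, 1, 2}
# and sample integers k_ij, the general solution satisfies A*X + B*Y = C.

def matmul(P, Q):
    return [[sum(P[i][l] * Q[l][j] for l in range(2)) for j in range(2)]
            for i in range(2)]

def madd(P, Q):
    return [[P[i][j] + Q[i][j] for j in range(2)] for i in range(2)]

A = [[17, 35], [9, 18]]
B = [[26, 76], [12, 36]]
C = [[15, 34], [6, 15]]

for r2 in range(3):
    for k11, k12, k21, k22 in [(0, 0, 0, 0), (1, -2, 3, 5)]:
        X = [[-4 - 4*r2 - 4*k11 - 12*k21, -3 - 4*r2 - 4*k12 - 12*k22],
             [ 3 + 4*r2 + 2*k11 + 12*k21,  3 + 4*r2 + 2*k12 + 12*k22]]
        Y = [[ 5 + 6*r2 - 3*k11 + 18*k21,  8 + 6*r2 - 3*k12 + 18*k22],
             [-2 - 3*r2 + k11 - 9*k21,    -3 - 3*r2 + k12 - 9*k22]]
        assert madd(matmul(A, X), matmul(B, Y)) == C
```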

2.3. The Uniqueness of Particular Solutions of the Matrix Linear Unilateral Equation

Conditions for the uniqueness of solutions of bounded degree (minimal solutions) of the matrix linear polynomial equation (1.5) were found in [16–19]. We now present conditions for the uniqueness of particular solutions of matrix linear equations over a commutative Bezout domain 𝑅.

Theorem 2.3. The matrix equation (2.3) has a unique particular solution
𝑋̃₀ = [𝑥̃ᵢⱼ⁽⁰⁾]₁ⁿ,  𝑌̃₀ = [𝑦̃ᵢⱼ⁽⁰⁾]₁ⁿ (2.30)
such that 𝑥̃ᵢⱼ⁽⁰⁾ ∈ 𝑅𝜓ᵢ, 𝑖, 𝑗 = 1, …, 𝑛, if and only if (det 𝐷𝐴, det 𝑇𝐵) = 1.

Proof. From matrix equation (2.3), we get the system of linear equations (2.6). Solving this system reduces to successively solving linear Diophantine equations of the form (2.7).
The matrix equation (2.3) has a unique particular solution 𝑋̃₀ = [𝑥̃ᵢⱼ⁽⁰⁾]₁ⁿ, 𝑌̃₀ = [𝑦̃ᵢⱼ⁽⁰⁾]₁ⁿ such that 𝑥̃ᵢⱼ⁽⁰⁾ ∈ 𝑅𝜓ᵢ, 𝑖, 𝑗 = 1, …, 𝑛, if and only if each linear Diophantine equation of the form (2.7) has a unique particular solution 𝑥̃ᵢⱼ⁽⁰⁾, 𝑦̃ᵢⱼ⁽⁰⁾ such that 𝑥̃ᵢⱼ⁽⁰⁾ ∈ 𝑅𝜓ᵢ. By Corollary 1.6, this holds if and only if (𝜑ᵢ, 𝜓ᵢ) = 1 for all 𝑖 = 1, …, 𝑛, that is, if and only if (det 𝐷𝐴, det 𝑇𝐵) = 1. This completes the proof.

Theorem 2.4. Let 𝑋̃₀ = [𝑥̃ᵢⱼ⁽⁰⁾]₁ⁿ, 𝑌̃₀ = [𝑦̃ᵢⱼ⁽⁰⁾]₁ⁿ, where 𝑥̃ᵢⱼ⁽⁰⁾ ∈ 𝑅𝜓ᵢ, 𝑖 = 1, …, 𝑛, be the unique particular solution of matrix equation (2.3).
Then the general solution of matrix equation (2.3) is
𝑋̃ = 𝑋̃₀ + Ψ𝐾,  𝑌̃ = 𝑌̃₀ − Φ𝐾, (2.31)
where Φ = 𝐷𝐴 and Ψ = 𝐷𝐵 are the canonical diagonal forms of the matrices 𝐴 and 𝐵 from matrix equation (1.3), respectively, and 𝐾 = [𝑘ᵢⱼ]₁ⁿ, where the 𝑘ᵢⱼ are arbitrary elements of 𝑅, 𝑖, 𝑗 = 1, …, 𝑛.
The general solution of matrix equation (1.3) is the pair of matrices
𝑋 = 𝑉𝐴𝑋̃,  𝑌 = 𝑉𝐵𝑌̃. (2.32)

Proof. The particular solution of the form (2.30) of (2.3) is unique if and only if (det 𝐷𝐴, det 𝑇𝐵) = 1, that is, (det 𝐴, det 𝐵) = 1. Then, by Corollary 1.11, the pair of matrices (𝐴, 𝐵) is diagonalizable, and (1.3) reduces to an equation of the form (2.9).
Thus, by Theorem 2.1, we get formula (2.31) for the general solution of (2.3): since (𝜑ᵢ, 𝜓ᵢ) = 1, we have 𝑑ᵢᵢ = 1 and 𝑅₁ = {0}, so the terms with 𝑟ᵢ in (2.14) vanish. Formula (2.32) then gives the general solution of (1.3) in the case where (2.3) has a unique particular solution of the form (2.30). The theorem is proved.

3. The Matrix Linear Bilateral Equations 𝐴𝑋+𝑌𝐵=𝐶

Consider the matrix linear bilateral equation (1.2), where 𝐴, 𝐵, and 𝐶 are matrices over a commutative Bezout domain 𝑅, and
𝑈𝐴𝐴𝑉𝐴 = 𝐷𝐴 = Φ = diag(𝜑₁, …, 𝜑ₙ), 𝜑ᵢ ∣ 𝜑ᵢ₊₁,
𝑈𝐵𝐵𝑉𝐵 = 𝐷𝐵 = Ψ = diag(𝜓₁, …, 𝜓ₙ), 𝜓ᵢ ∣ 𝜓ᵢ₊₁, 𝑖 = 1, …, 𝑛 − 1, (3.1)
are the canonical diagonal forms of the matrices 𝐴 and 𝐵, respectively, where 𝑈𝐴, 𝑉𝐴, 𝑈𝐵, 𝑉𝐵 ∈ 𝐺𝐿(𝑛, 𝑅).

Then (1.2) is equivalent to
Φ𝑋̃ + 𝑌̃Ψ = 𝐶̃, (3.2)
where 𝑋̃ = 𝑉𝐴⁻¹𝑋𝑉𝐵, 𝑌̃ = 𝑈𝐴𝑌𝑈𝐵⁻¹, and 𝐶̃ = 𝑈𝐴𝐶𝑉𝐵.

Such an approach to solving (1.2), where 𝐴, 𝐵, and 𝐶 are matrices over a polynomial ring 𝐹[𝜆], with 𝐹 a field, was applied in [3].

Equation (3.2) is equivalent to the system of linear Diophantine equations
𝜑ᵢ𝑥̃ᵢⱼ + 𝜓ⱼ𝑦̃ᵢⱼ = 𝑐̃ᵢⱼ,  𝑖, 𝑗 = 1, …, 𝑛. (3.3)
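In contrast to the unilateral case, system (3.3) decouples completely, so a particular solution can be computed entry by entry. A minimal illustrative sketch over ℤ (the diagonal entries and right-hand side below are our own toy instance, chosen to be solvable):

```python
# A sketch of solving the bilateral system (3.3) over Z: the equation
# Phi*X + Y*Psi = C decouples, entry (i, j) being the linear Diophantine
# equation phi_i * x_ij + psi_j * y_ij = c_ij.

from math import gcd

def solve_diophantine(a, b, c):
    """One particular solution (x0, y0) of a*x + b*y = c over Z, or None."""
    d = gcd(a, b)
    if c % d != 0:
        return None
    a1, b1, c1 = a // d, b // d, c // d
    x0 = (pow(a1, -1, abs(b1)) * c1) % abs(b1) if abs(b1) != 1 else 0
    return x0, (c1 - a1 * x0) // b1

phi, psi = [2, 6], [3, 5]       # diagonals of Phi and Psi
C = [[7, 9], [0, 11]]
n = len(phi)
X = [[0] * n for _ in range(n)]
Y = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        X[i][j], Y[i][j] = solve_diophantine(phi[i], psi[j], C[i][j])
        assert phi[i] * X[i][j] + psi[j] * Y[i][j] == C[i][j]
```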

Theorem 3.1. Let
𝑋̃₀ = [𝑥̃ᵢⱼ⁽⁰⁾]₁ⁿ,  𝑌̃₀ = [𝑦̃ᵢⱼ⁽⁰⁾]₁ⁿ (3.4)
be a particular solution of matrix equation (3.2), that is, let 𝑥̃ᵢⱼ⁽⁰⁾, 𝑦̃ᵢⱼ⁽⁰⁾, 𝑖, 𝑗 = 1, …, 𝑛, be particular solutions of the linear Diophantine equations of system (3.3).
The general solution of matrix equation (3.2) is
𝑋̃ = 𝑋̃₀ + 𝑊Ψ + 𝐾Ψ,  𝑌̃ = 𝑌̃₀ − 𝑊Φ − Φ𝐾, (3.5)
where 𝑊Ψ = [(𝜓ⱼ/𝑑ᵢⱼ)𝑤ᵢⱼ]₁ⁿ and 𝑊Φ = [(𝜑ᵢ/𝑑ᵢⱼ)𝑤ᵢⱼ]₁ⁿ with 𝑑ᵢⱼ = (𝜑ᵢ, 𝜓ⱼ), the 𝑤ᵢⱼ are arbitrary elements of 𝑅𝑑ᵢⱼ, and 𝐾 = [𝑘ᵢⱼ]₁ⁿ, where the 𝑘ᵢⱼ are arbitrary elements of 𝑅, 𝑖, 𝑗 = 1, …, 𝑛.
The general solution of matrix equation (1.2) is
𝑋 = 𝑉𝐴𝑋̃𝑉𝐵⁻¹,  𝑌 = 𝑈𝐴⁻¹𝑌̃𝑈𝐵. (3.6)

Similarly to (2.3), one proves that the particular solution of (3.2) is unique if and only if (det Φ, det Ψ) = 1. Then, in the same way as for (1.3), we write down the general solution of matrix equation (1.2).

Theorem 3.2. Suppose that 𝑋̃₀ = [𝑥̃ᵢⱼ⁽⁰⁾]₁ⁿ and 𝑌̃₀ = [𝑦̃ᵢⱼ⁽⁰⁾]₁ⁿ, where 𝑥̃ᵢⱼ⁽⁰⁾ ∈ 𝑅𝜓ᵢ, 𝑖 = 1, …, 𝑛, is the unique particular solution of matrix equation (3.2), and
𝐷𝐴 = Φ = diag(𝜑₁, …, 𝜑ₙ),  𝐷𝐵 = Ψ = diag(𝜓₁, …, 𝜓ₙ) (3.7)
are the canonical diagonal forms of the matrices 𝐴, 𝐵 from matrix equation (1.2), respectively.
Then the general solution of matrix equation (3.2) is
𝑋̃ = 𝑋̃₀ + 𝐾Ψ,  𝑌̃ = 𝑌̃₀ − Φ𝐾, (3.8)
where 𝐾 = [𝑘ᵢⱼ]₁ⁿ and the 𝑘ᵢⱼ are arbitrary elements of 𝑅, 𝑖, 𝑗 = 1, …, 𝑛.
The general solution of matrix equation (1.2) is
𝑋 = 𝑉𝐴𝑋̃𝑉𝐵⁻¹,  𝑌 = 𝑈𝐴⁻¹𝑌̃𝑈𝐵. (3.9)