ISRN Algebra, Volume 2012 (2012), Article ID 205478, 14 pages. http://dx.doi.org/10.5402/2012/205478
Research Article

## The Matrix Linear Unilateral and Bilateral Equations with Two Variables over Commutative Rings

Pidstryhach Institute for Applied Problems of Mechanics and Mathematics, National Academy of Sciences of Ukraine, 3-b, Naukova Street, 79060 L'viv, Ukraine

Received 16 January 2012; Accepted 20 February 2012

Academic Editors: I. Cangul, H. Chen, and P. Damianou

Copyright © 2012 N. S. Dzhaliuk and V. M. Petrychkovych. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A method of solving the matrix linear equations $AX + BY = C$ and $AX + YB = C$ over commutative Bezout domains by means of the standard form of a pair of matrices with respect to generalized equivalence is proposed. Formulas for the general solutions of such equations are deduced, and criteria for the uniqueness of particular solutions of such matrix equations are established.

#### 1. Preliminaries

##### 1.1. Introduction

Matrix linear equations play a fundamental role in many tasks of control and dynamical systems theory [1–4]. Among such equations are the matrix linear bilateral equations with one and with two variables,
$$AX - XB = C, \quad (1.1)$$
$$AX + YB = C, \quad (1.2)$$
and the matrix linear unilateral equation
$$AX + BY = C, \quad (1.3)$$
where $A$, $B$, and $C$ are matrices of appropriate sizes over a certain field $\mathbb{F}$ or over a ring $R$, and $X$, $Y$ are unknown matrices. Equations (1.1) and (1.2) are called Sylvester equations. The equation $AX + XA^{T} = C$, where $A^{T}$ is the transpose of $A$, is called the Lyapunov equation, and it is a special case of the Sylvester equation. Equation (1.3) is called the matrix linear Diophantine equation [3, 4].

Roth [5] established criteria for the solvability of the matrix equations (1.1) and (1.2) whose coefficients $A$, $B$, and $C$ are matrices over a field $\mathbb{F}$.

Theorem 1.1 ([5]). The matrix equation (1.1), where $A$, $B$, and $C$ are matrices with elements in a field $\mathbb{F}$, has a solution $X$ with elements in $\mathbb{F}$ if and only if the matrices $\begin{pmatrix} A & C \\ 0 & B \end{pmatrix}$ and $\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}$ are similar.
The matrix equation (1.2), where $A$, $B$, and $C$ are matrices with elements in $\mathbb{F}$, has a solution $X$, $Y$ with elements in $\mathbb{F}$ if and only if the matrices $\begin{pmatrix} A & C \\ 0 & B \end{pmatrix}$ and $\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}$ are equivalent.

Roth's theorem was extended by many authors to the matrix equations (1.1) and (1.2) in cases where their coefficients are matrices over principal ideal rings [6–8], over arbitrary commutative rings [9], and over other rings [10–14].

The matrix linear unilateral equation (1.3) has a solution if and only if one of the following conditions holds: (a) a greatest common left divisor of the matrices $A$ and $B$ is a left divisor of the matrix $C$; (b) the matrices $\begin{pmatrix} A & B & C \end{pmatrix}$ and $\begin{pmatrix} A & B & 0 \end{pmatrix}$ are right equivalent.

In the case where $A$, $B$, and $C$ in (1.3) are matrices over a polynomial ring $\mathbb{F}[\lambda]$, where $\mathbb{F}$ is a field, these conditions were formulated in [1, 4]. It is not difficult to show that these solvability conditions also hold for the matrix linear unilateral equation (1.3) over a commutative Bezout domain.

The matrix equations (1.1), (1.2), and (1.3), where the coefficients $A$, $B$, and $C$ are matrices over a field $\mathbb{F}$, reduce by means of the Kronecker product to equivalent systems of linear equations [15]. Hence, (1.1) over an algebraically closed field has a unique solution if and only if the matrices $A$ and $B$ have no common characteristic roots.
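The Kronecker-product reduction can be checked numerically over $\mathbb{Z}$. The sketch below, with helper functions of our own (not part of the paper), verifies the standard identity that $AX - XB = C$ corresponds to the linear system $(I \otimes A - B^{T} \otimes I)\,\mathrm{vec}(X) = \mathrm{vec}(C)$, where $\mathrm{vec}$ stacks the columns; the sample matrices are illustrative only.

```python
# Sketch: Kronecker-product reduction of the Sylvester equation AX - XB = C.
# All helpers are pure-Python stand-ins for standard linear algebra.

def matmul(P, Q):
    """Product of two matrices given as lists of rows."""
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def matsub(P, Q):
    return [[P[i][j] - Q[i][j] for j in range(len(P[0]))] for i in range(len(P))]

def transpose(P):
    return [list(col) for col in zip(*P)]

def kron(P, Q):
    """Kronecker product: block (i, j) of the result is P[i][j] * Q."""
    p, q = len(Q), len(Q[0])
    return [[P[i][j] * Q[k][l] for j in range(len(P[0])) for l in range(q)]
            for i in range(len(P)) for k in range(p)]

def vec(M):
    """Column-stacking vectorization."""
    return [M[i][j] for j in range(len(M[0])) for i in range(len(M))]

def matvec(P, x):
    return [sum(P[i][k] * x[k] for k in range(len(x))) for i in range(len(P))]

# Illustrative data: C is computed from chosen A, B, X.
A = [[1, 2], [0, 3]]
B = [[2, 1], [1, 0]]
X = [[4, -1], [2, 5]]
C = matsub(matmul(A, X), matmul(X, B))           # C = AX - XB

I2 = [[1, 0], [0, 1]]
M = matsub(kron(I2, A), kron(transpose(B), I2))  # I (x) A - B^T (x) I

# vec(X) solves the equivalent 4x4 linear system M * vec(X) = vec(C).
assert matvec(M, vec(X)) == vec(C)
```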

One of the methods of solving the matrix polynomial equation
$$A(\lambda)X(\lambda) + Y(\lambda)B(\lambda) = C(\lambda), \quad (1.5)$$
where $A(\lambda)$, $B(\lambda)$, and $C(\lambda)$ are matrices over a polynomial ring $\mathbb{F}[\lambda]$, is based on the reduction of the polynomial equation to an equivalent equation with matrix coefficients over the field $\mathbb{F}$, that is, to an equation $\widetilde{A}\widetilde{X} + \widetilde{Y}\widetilde{B} = \widetilde{C}$, where $\widetilde{A}$ and $\widetilde{B}$ are companion matrices of the matrix polynomials $A(\lambda)$ and $B(\lambda)$, respectively, $\widetilde{C}$ is a matrix over the field $\mathbb{F}$, and $\widetilde{X}$, $\widetilde{Y}$ are unknown matrices [16, 17].

Equation (1.5) has a unique solution $X(\lambda)$, $Y(\lambda)$ of bounded degree if and only if the determinants of $A(\lambda)$ and $B(\lambda)$ are relatively prime [17].

Feinstein and Bar-Ness [18] established conditions under which (1.5), in which at least one of the matrix coefficients $A(\lambda)$ or $B(\lambda)$ is regular, has a unique minimal solution $X(\lambda)$, $Y(\lambda)$ with $\deg X(\lambda) < \deg B(\lambda)$ and $\deg Y(\lambda) < \deg A(\lambda)$. A similar result was established in [19] in the case where at least one of the matrix coefficients $A(\lambda)$ or $B(\lambda)$ is regularizable.

In this paper, we propose a method of solving the matrix linear equations (1.2) and (1.3) over a commutative Bezout domain. The method is based on the standard form of a pair of matrices with respect to generalized equivalence, introduced in [20, 21], and on linear congruences. We introduce the notion of a particular solution of such matrix equations, establish criteria for the uniqueness of particular solutions, and write down formulas for the general solutions of such equations.

##### 1.2. The Linear Congruences and Diophantine Equations

Let $R$ be a commutative Bezout domain. A commutative domain $R$ is called a Bezout domain if any two elements $a, b \in R$ have a greatest common divisor $d = (a, b)$, and $d = au + bv$ for some $u, v \in R$ [22, 23]. Note that a commutative domain is a Bezout domain if and only if every finitely generated ideal of it is principal.
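In $\mathbb{Z}$, a familiar commutative Bezout domain, the elements $u$, $v$ with $d = au + bv$ can be computed by the extended Euclidean algorithm. The sketch below is our own illustration, not part of the paper:

```python
def ext_gcd(a, b):
    """Return (d, u, v) with d = gcd(a, b) = a*u + b*v (Bezout identity)."""
    if b == 0:
        return (a, 1, 0)
    d, u, v = ext_gcd(b, a % b)
    # gcd(b, a mod b) = b*u + (a - (a//b)*b)*v; regroup in terms of a and b.
    return (d, v, u - (a // b) * v)

d, u, v = ext_gcd(12, 18)
assert d == 6 and 12 * u + 18 * v == 6
```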

Further, $U(R)$ denotes the group of units of $R$, and $K(b)$ denotes a complete set of residues modulo the ideal generated by the element $b$, or a complete set of residues modulo $b$. An element $a$ of $R$ is said to be an associate of an element $b$ of $R$ if $a = ub$, where $u$ belongs to $U(R)$. A set of elements of $R$, one from each associate class, is said to be a complete set of nonassociates, which we denote by $N(R)$ [24]. For example, if $R = \mathbb{Z}$ is the ring of integers, then $N(\mathbb{Z})$ can be chosen as the set of positive integers with zero, that is, $N(\mathbb{Z}) = \{0, 1, 2, \ldots\}$, and $K(b)$ can be chosen as the set of the smallest nonnegative residues, that is, $K(b) = \{0, 1, \ldots, |b| - 1\}$.

Many properties of divisibility in principal ideal rings [24–27] can be easily generalized to commutative Bezout domains. We recall some of them, which will be used later.

In what follows, $R$ will always denote a commutative Bezout domain.

Lemma 1.2. Each residue class $a + bR$ of $R$ can be represented as the union
$$a + bR = \bigcup_{k \in K(c)} \left(a + bk + bcR\right), \quad (1.7)$$
where the union is taken over all residues $k$ of an arbitrary complete set of residues $K(c)$ modulo $c$, where $c \in R$, $c \neq 0$.

In the case where $R$ is a Euclidean ring, this lemma was proved in [27]. In the same way, this lemma can be proved in the case where $R$ is a commutative Bezout domain.

The class of elements $x \in R$ satisfying the congruence
$$ax \equiv c \pmod{b} \quad (1.8)$$
is called a solution of this congruence.

Lemma 1.3. Let $d = (a, b)$ and $a = a_1 d$, $b = b_1 d$, where $(a_1, b_1) = 1$. Congruence (1.8) has a solution if and only if $d \mid c$.
Let $c = c_1 d$ and let $x_0$ be a solution of the congruence
$$a_1 x \equiv c_1 \pmod{b_1}. \quad (1.9)$$
Then the general solution of congruence (1.8) has the form
$$x = x_0 + b_1 k, \quad (1.10)$$
where $k$ is any element of $R$.

Proof. Necessity. It is obvious.
Sufficiency. Let $d \mid c$ and $c = c_1 d$. Dividing both sides of congruence (1.8) and the modulus by $d$, we get congruence (1.9), where $(a_1, b_1) = 1$. There exist elements $u$, $v$ of $R$ such that $a_1 u + b_1 v = 1$. Thus we have $a_1 u c_1 + b_1 v c_1 = c_1$, that is, $a_1 (u c_1) \equiv c_1 \pmod{b_1}$. Therefore, $x_0 = u c_1$ is a solution of congruence (1.9). Every such element is also a solution of (1.8), and, applying Lemma 1.2 with $c = d$, we get the general solution of congruence (1.8): $x = x_0 + b_1 k$, where $k$ is an arbitrary element of $R$. This proves the lemma.

Corollary 1.4. Congruence (1.8) has a unique solution $x_0$ such that $x_0 \in K(b)$ if and only if $(a, b) \in U(R)$.

Let
$$ax + by = c \quad (1.11)$$
be a linear Diophantine equation over $R$, and let $d = (a, b)$. Equation (1.11) has a solution if and only if $d \mid c$.

Suppose that $a = a_1 d$, $b = b_1 d$, and $c = c_1 d$, where $(a_1, b_1) = 1$. Then (1.11) implies
$$a_1 x + b_1 y = c_1. \quad (1.12)$$
Let $x_0$, $y_0$ be a solution of (1.12), that is, $x_0$ is a solution of the congruence $a_1 x \equiv c_1 \pmod{b_1}$, $x_0 \in K(b)$, and $y_0 = (c_1 - a_1 x_0)/b_1$. It is easy to verify that if $x_0$ is a solution of this congruence, then $b_1$ divides $c_1 - a_1 x_0$, so $y_0 \in R$.

The solution $x_0$, $y_0$ of (1.12) is obviously a solution of (1.11).

Definition 1.5. A solution $x_0$, $y_0$ of (1.11) such that $x_0 \in K(b)$ is called a particular solution of this equation.

Then, by Lemmas 1.2 and 1.3, the general solution of (1.11) has the form
$$x = x_0 + b_1 k, \qquad y = y_0 - a_1 k, \quad (1.13)$$
where $x_0$, $y_0$ is a particular solution of (1.11) and $k$ is an arbitrary element of $R$.

Corollary 1.6. A particular solution $x_0$, $y_0$ of (1.11) with $x_0 \in K(b)$ is unique if and only if $(a, b) \in U(R)$.

Example 1.7. Let
$$6x + 4y = 10 \quad (1.14)$$
be a linear Diophantine equation over the ring $\mathbb{Z}$. Then $d = (6, 4) = 2$ and $2 \mid 10$, so (1.14) is solvable. Here $a_1 = 3$, $b_1 = 2$, $c_1 = 5$, and $K(4) = \{0, 1, 2, 3\}$. The particular solutions of (1.14) are
$$x_0 = 1, \; y_0 = 1 \qquad \text{and} \qquad x_0 = 3, \; y_0 = -2,$$
because $6 \cdot 1 + 4 \cdot 1 = 10$, $6 \cdot 3 + 4 \cdot (-2) = 10$, and $1, 3 \in K(4)$.
Then the general solution of (1.14) can be written using (1.13):
$$x = 1 + 2k, \qquad y = 1 - 3k,$$
where $k$ is an arbitrary element of $\mathbb{Z}$.

##### 1.3. Standard Form of a Pair of Matrices

Let $R$ be a commutative Bezout domain with diagonal reduction of matrices [28], that is, for every matrix $A$ of the ring of matrices $M(n, R)$ there exist invertible matrices $U, V \in GL(n, R)$ such that
$$U A V = D_A = \mathrm{diag}\left(\mu_1, \mu_2, \ldots, \mu_n\right). \quad (1.15)$$
If $\mu_i \mid \mu_{i+1}$, $i = 1, \ldots, n-1$, then the matrix $D_A$ is unique and is called the canonical diagonal form (Smith normal form) of the matrix $A$. Examples of such rings are the so-called adequate rings. A ring $R$ is called adequate if $R$ is a commutative domain in which every finitely generated ideal is principal and, for every $a, b \in R$ with $a \neq 0$, the element $a$ can be represented as $a = rs$, where $(r, b) = 1$ and $(s', b) \neq 1$ for every nonunit factor $s'$ of $s$ [29].
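For $R = \mathbb{Z}$, the reduction (1.15) can be verified directly once transforming matrices are known. The following sketch (matrices chosen by us purely for illustration) checks $U A V = D_A$ for one integer matrix, with $\det U, \det V \in U(\mathbb{Z}) = \{1, -1\}$:

```python
def matmul(P, Q):
    """Product of two matrices given as lists of rows."""
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Illustrative matrices: A has Smith normal form diag(2, 4), since the gcd
# of its entries is 2 and |det A| = 8.
A = [[2, 4], [6, 8]]
U = [[1, 0], [3, -1]]    # row operation:    R2 -> 3*R1 - R2
V = [[1, -2], [0, 1]]    # column operation: C2 -> C2 - 2*C1
D = matmul(matmul(U, A), V)

assert D == [[2, 0], [0, 4]]                    # canonical diagonal form
assert abs(det2(U)) == 1 and abs(det2(V)) == 1  # U, V invertible over Z
assert D[1][1] % D[0][0] == 0                   # mu_1 | mu_2
```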

Definition 1.8. The pairs $(A, B)$ and $(A_1, B_1)$ of matrices $A, B, A_1, B_1 \in M(n, R)$ are called generalized equivalent pairs if $A_1 = U A V_1$ and $B_1 = U B V_2$ for some invertible matrices $U, V_1, V_2 \in GL(n, R)$.

In [20, 21], the forms of a pair of matrices with respect to generalized equivalence are established.

Theorem 1.9. Let $R$ be an adequate ring, let $A, B \in M(n, R)$ be nonsingular matrices, and let $D_A = \mathrm{diag}(\mu_1, \ldots, \mu_n)$ and $D_B = \mathrm{diag}(\nu_1, \ldots, \nu_n)$ be their canonical diagonal forms.
Then the pair of matrices $(A, B)$ is generalized equivalent to the pair $(D_A, T_B)$, where $T_B$ is a lower triangular matrix of the form
$$T_B = \begin{pmatrix} \nu_1 & 0 & \cdots & 0 \\ t_{21} & \nu_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ t_{n1} & t_{n2} & \cdots & \nu_n \end{pmatrix}, \quad (1.20)$$
whose subdiagonal entries $t_{ij}$, $i > j$, satisfy divisibility conditions determined by the invariant factors $\mu_i$ and $\nu_j$ [20, 21].

The pair $(D_A, T_B)$ defined in Theorem 1.9 is called the standard form of the pair of matrices $(A, B)$, or the standard pair of matrices.

Definition 1.10. The pair $(A, B)$ is called diagonalizable if it is generalized equivalent to a pair of diagonal matrices $(D_A, D_B)$, that is, if its standard form is a pair of diagonal matrices $(D_A, D_B)$.

Corollary 1.11. Let $A, B \in M(n, R)$ be nonsingular matrices. If $(\det A, \det B) \in U(R)$, then the pair of matrices $(A, B)$ is diagonalizable.

It is clear, taking Corollary 1.11 into account, that if $(\det A, \det B) \in U(R)$, then the standard form of the pair of matrices $(A, B)$ is the pair of their canonical diagonal forms $(D_A, D_B)$.

Let us formulate a criterion for the diagonalizability of a pair of matrices.

Theorem 1.12. Let $A, B \in M(n, R)$, and let $B$ be a nonsingular matrix. Then the pair of matrices $(A, B)$ is generalized equivalent to a pair of diagonal matrices $(D_A, D_B)$ if and only if the matrices $B^{*} A$ and $D_B^{*} D_A$ are equivalent, where $B^{*}$ is the adjoint matrix of $B$.

#### 2. The Matrix Linear Unilateral Equations $AX + BY = C$

##### 2.1. The Construction of the Solutions of the Matrix Linear Unilateral Equations with Two Variables

Suppose that the matrix linear unilateral equation (1.3) is solvable, and let $(D_A, T_B)$ be the standard form of the pair of matrices $(A, B)$ from (1.3) with respect to generalized equivalence, that is, $U A V_1 = D_A = \mathrm{diag}(\mu_1, \ldots, \mu_n)$ and $U B V_2 = T_B$, where $T_B$ is a lower triangular matrix of the form (1.20) with the principal diagonal $\nu_1, \ldots, \nu_n$, and $U, V_1, V_2 \in GL(n, R)$.

Then (1.3) is equivalent to the equation
$$D_A X_1 + T_B Y_1 = C_1, \quad (2.3)$$
where $X_1 = V_1^{-1} X$, $Y_1 = V_2^{-1} Y$, and $C_1 = U C$.

A pair of matrices $X_1$, $Y_1$ satisfying (2.3) is called a solution of this equation. Then $X = V_1 X_1$, $Y = V_2 Y_1$ is a solution of (1.3).

The matrix equation (2.3) is equivalent to the system of linear equations
$$\mu_i x_{ij} + \sum_{p=1}^{i} t_{ip} y_{pj} = c_{ij}, \quad i, j = 1, \ldots, n, \quad (2.6)$$
with the variables $x_{ij}$, $y_{ij}$, $i, j = 1, \ldots, n$, where $X_1 = [x_{ij}]$, $Y_1 = [y_{ij}]$, and $C_1 = [c_{ij}]$ from (2.3), and $T_B = [t_{ip}]$ with $t_{ii} = \nu_i$.

Solving this system reduces to the successive solving of linear Diophantine equations of the form
$$\mu_i x_{ij} + \nu_i y_{ij} = c_{ij} - \sum_{p=1}^{i-1} t_{ip} y_{pj}, \quad (2.7)$$
taken successively for $i = 1, \ldots, n$ for each fixed $j$.

Using the solutions of system (2.6), we construct the solutions $X_1 = [x_{ij}]$, $Y_1 = [y_{ij}]$ of matrix equation (2.3). Then $X = V_1 X_1$ and $Y = V_2 Y_1$ are the solutions of matrix equation (1.3).

##### 2.2. The General Solution of the Matrix Equation $AX + BY = C$ with a Diagonalizable Pair of Matrices $(A, B)$

Suppose that the pair of matrices $(A, B)$ is diagonalizable, that is,
$$U A V_1 = D_A = \mathrm{diag}\left(\mu_1, \ldots, \mu_n\right), \qquad U B V_2 = D_B = \mathrm{diag}\left(\nu_1, \ldots, \nu_n\right) \quad (2.8)$$
for some invertible matrices $U, V_1, V_2 \in GL(n, R)$.

Then (1.3) is equivalent to the equation
$$D_A X_1 + D_B Y_1 = C_1, \quad (2.9)$$
where $X_1 = V_1^{-1} X$, $Y_1 = V_2^{-1} Y$, and $C_1 = U C = [c_{ij}]$.

From matrix equation (2.9), we get the system of linear Diophantine equations
$$\mu_i x_{ij} + \nu_i y_{ij} = c_{ij}, \quad i, j = 1, \ldots, n. \quad (2.10)$$

Let $x_{ij}^{0}$, $y_{ij}^{0}$, $i, j = 1, \ldots, n$, be a particular solution of the corresponding equation of system (2.10), that is, $x_{ij}^{0}$ is a solution of the congruence $\mu_i x \equiv c_{ij} \pmod{\nu_i}$, $x_{ij}^{0} \in K(\nu_i)$, and $y_{ij}^{0} = \left(c_{ij} - \mu_i x_{ij}^{0}\right)/\nu_i$.

The general solution of the corresponding equation of system (2.10), by formula (1.13), has the form
$$x_{ij} = x_{ij}^{0} + \frac{\nu_i}{d_i} k_{ij}, \qquad y_{ij} = y_{ij}^{0} - \frac{\mu_i}{d_i} k_{ij}, \quad (2.11)$$
where $d_i = (\mu_i, \nu_i)$ and $k_{ij}$ are arbitrary elements of $R$, $i, j = 1, \ldots, n$. The particular solution of matrix equation (2.9) is
$$X_1^{0} = \left[x_{ij}^{0}\right], \qquad Y_1^{0} = \left[y_{ij}^{0}\right], \quad (2.12)$$
where $x_{ij}^{0}$, $y_{ij}^{0}$, $i, j = 1, \ldots, n$, is a particular solution of the corresponding equation of system (2.10). Then $X^{0} = V_1 X_1^{0}$, $Y^{0} = V_2 Y_1^{0}$ is a particular solution of matrix equation (1.3).
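For $R = \mathbb{Z}$, the entrywise construction of a particular solution from system (2.10) can be sketched as follows. Function names and sample data are ours; the code returns some particular solution obtained via the Bezout identity, not necessarily the residue-normalized one:

```python
def ext_gcd(a, b):
    """Return (d, u, v) with d = a*u + b*v."""
    if b == 0:
        return (a, 1, 0)
    d, u, v = ext_gcd(b, a % b)
    return (d, v, u - (a // b) * v)

def solve_diagonal_pair(mu, nu, C):
    """Solve diag(mu)*X1 + diag(nu)*Y1 = C over Z entrywise, as in (2.10).

    Each entry gives mu[i]*x + nu[i]*y = C[i][j]; returns (X1, Y1),
    or None when some entry is unsolvable."""
    n = len(mu)
    X = [[0] * n for _ in range(n)]
    Y = [[0] * n for _ in range(n)]
    for i in range(n):
        d, u, v = ext_gcd(mu[i], nu[i])
        for j in range(n):
            if C[i][j] % d != 0:
                return None            # (mu_i, nu_i) does not divide c_ij
            t = C[i][j] // d
            X[i][j], Y[i][j] = u * t, v * t   # mu_i*(u*t) + nu_i*(v*t) = d*t
    return X, Y

# Illustrative data.
mu, nu = [1, 4], [1, 3]
C1 = [[1, 2], [4, 9]]
X1, Y1 = solve_diagonal_pair(mu, nu, C1)
for i in range(2):
    for j in range(2):
        assert mu[i] * X1[i][j] + nu[i] * Y1[i][j] == C1[i][j]
```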

Thus we get the following theorem.

Theorem 2.1. Let the pair of matrices $(A, B)$ from matrix equation (1.3) be diagonalizable, and let its standard pair be the pair of diagonal matrices $(D_A, D_B)$ of the form (2.8). Let $X_1^{0}$, $Y_1^{0}$ be a particular solution of matrix equation (2.9). Then the general solution of matrix equation (2.9) is
$$X_1 = X_1^{0} + \left[\frac{\nu_i}{d_i} k_{ij}\right], \qquad Y_1 = Y_1^{0} - \left[\frac{\mu_i}{d_i} k_{ij}\right], \quad (2.14)$$
where $d_i = (\mu_i, \nu_i)$, $i = 1, \ldots, n$, and $k_{ij}$, $i, j = 1, \ldots, n$, are arbitrary elements of $R$.
The general solution of matrix equation (1.3) has the form
$$X = V_1 X_1, \qquad Y = V_2 Y_1. \quad (2.15)$$

Example 2.2. Consider the equation
$$AX + BY = C \quad (2.16)$$
for the matrices
$$A = \begin{pmatrix} 1 & 1 \\ 1 & 5 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 0 \\ 7 & 3 \end{pmatrix}, \qquad C = \begin{pmatrix} 1 & 2 \\ 5 & 11 \end{pmatrix},$$
which are matrices over $\mathbb{Z}$, and the unknown matrices $X$, $Y$.
The matrix equation (2.16) is solvable.
The pair of matrices $(A, B)$ from matrix equation (2.16) is diagonalizable by Corollary 1.11, since $(\det A, \det B) = (4, 3) = 1$. Therefore,
$$U A V_1 = D_A = \mathrm{diag}(1, 4), \qquad U B V_2 = D_B = \mathrm{diag}(1, 3),$$
where
$$U = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}, \qquad V_1 = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}, \qquad V_2 = \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix}.$$
Then (2.16) is equivalent to the equation
$$D_A X_1 + D_B Y_1 = C_1, \quad (2.22)$$
where
$$C_1 = U C = \begin{pmatrix} 1 & 2 \\ 4 & 9 \end{pmatrix}.$$
From matrix equation (2.22), we get the system of linear Diophantine equations:
$$x_{1j} + y_{1j} = c_{1j}, \qquad 4 x_{2j} + 3 y_{2j} = c_{2j}, \quad j = 1, 2. \quad (2.24)$$
The particular solutions of the linear equations of system (2.24), with $x_{ij}^{0} \in K(\nu_i)$, are
$$x_{11}^{0} = 0, \; y_{11}^{0} = 1; \qquad x_{12}^{0} = 0, \; y_{12}^{0} = 2; \qquad x_{21}^{0} = 1, \; y_{21}^{0} = 0; \qquad x_{22}^{0} = 0, \; y_{22}^{0} = 3.$$
The particular solution of matrix equation (2.22) is
$$X_1^{0} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \qquad Y_1^{0} = \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}.$$
Then, by (2.14), the general solution of matrix equation (2.22) is
$$X_1 = \begin{pmatrix} k_{11} & k_{12} \\ 1 + 3 k_{21} & 3 k_{22} \end{pmatrix}, \qquad Y_1 = \begin{pmatrix} 1 - k_{11} & 2 - k_{12} \\ -4 k_{21} & 3 - 4 k_{22} \end{pmatrix},$$
where $k_{11}, k_{12}, k_{21}, k_{22}$ are arbitrary elements of $\mathbb{Z}$.
Finally, the general solution of matrix equation (2.16) is
$$X = V_1 X_1, \qquad Y = V_2 Y_1.$$

##### 2.3. The Uniqueness of Particular Solutions of the Matrix Linear Unilateral Equation

Conditions for the uniqueness of solutions of bounded degree (minimal solutions) of the matrix linear polynomial equation (1.5) were found in [16–19]. We present conditions for the uniqueness of particular solutions of the matrix linear equation (1.3) over a commutative Bezout domain $R$.

Theorem 2.3. The matrix equation (2.3) has a unique particular solution $X_1^{0} = \left[x_{ij}^{0}\right]$, $Y_1^{0} = \left[y_{ij}^{0}\right]$ such that $x_{ij}^{0} \in K(\nu_i)$, $i, j = 1, \ldots, n$, if and only if $(\det A, \det B) \in U(R)$.

Proof. From matrix equation (2.3), we get the system of linear equations (2.6). Solving this system reduces to the successive solving of the linear Diophantine equations of the form (2.7).
The matrix equation (2.3) has a unique particular solution $X_1^{0} = \left[x_{ij}^{0}\right]$, $Y_1^{0} = \left[y_{ij}^{0}\right]$ such that $x_{ij}^{0} \in K(\nu_i)$, $i, j = 1, \ldots, n$, if and only if each linear Diophantine equation of the form (2.7) has a unique particular solution $x_{ij}^{0}$, $y_{ij}^{0}$ such that $x_{ij}^{0} \in K(\nu_i)$. By Corollary 1.6, this holds if and only if $(\mu_i, \nu_i) \in U(R)$ for all $i = 1, \ldots, n$. Since $\mu_i \mid \mu_{i+1}$ and $\nu_i \mid \nu_{i+1}$, it follows that this is equivalent to $(\det A, \det B) \in U(R)$. This completes the proof.

Theorem 2.4. Let
$$X_1^{0} = \left[x_{ij}^{0}\right], \qquad Y_1^{0} = \left[y_{ij}^{0}\right], \quad x_{ij}^{0} \in K(\nu_i), \; i, j = 1, \ldots, n, \quad (2.30)$$
be the unique particular solution of matrix equation (2.3).
Then the general solution of matrix equation (2.3) is
$$X_1 = X_1^{0} + D_B K, \qquad Y_1 = Y_1^{0} - D_A K, \quad (2.31)$$
where $D_A$ and $D_B$ are the canonical diagonal forms of $A$ and $B$ from matrix equation (1.3), respectively, and $K = [k_{ij}]$, where $k_{ij}$, $i, j = 1, \ldots, n$, are arbitrary elements of $R$.
The general solution of matrix equation (1.3) is the pair of matrices
$$X = V_1 \left(X_1^{0} + D_B K\right), \qquad Y = V_2 \left(Y_1^{0} - D_A K\right). \quad (2.32)$$

Proof. The particular solution of the form (2.30) of (2.3) is unique if and only if $(\mu_i, \nu_i) \in U(R)$ for all $i = 1, \ldots, n$, that is, $(\det A, \det B) \in U(R)$. Then, by Corollary 1.11, the pair of matrices $(A, B)$ is diagonalizable, and (1.3) gives us an equation of the form (2.9); moreover, since all $d_i = (\mu_i, \nu_i)$ are units, formula (2.14) takes the form (2.31).
Thus, by Theorem 2.1, we get formula (2.31) for the general solution of (2.3) and formula (2.32) for the computation of the general solution of (1.3) in the case where (2.3) has the unique particular solution of the form (2.30). The theorem is proved.

#### 3. The Matrix Linear Bilateral Equations $AX + YB = C$

Consider the matrix linear bilateral equation (1.2), where $A$, $B$, and $C$ are matrices over a commutative Bezout domain $R$, and $D_A = \mathrm{diag}(\mu_1, \ldots, \mu_n)$ and $D_B = \mathrm{diag}(\nu_1, \ldots, \nu_n)$ are the canonical diagonal forms of the matrices $A$ and $B$, respectively, that is, $U_A A V_A = D_A$ and $U_B B V_B = D_B$, where $U_A, V_A, U_B, V_B \in GL(n, R)$.

Then (1.2) is equivalent to
$$D_A X_1 + Y_1 D_B = C_1, \quad (3.2)$$
where $X_1 = V_A^{-1} X V_B$, $Y_1 = U_A Y U_B^{-1}$, and $C_1 = U_A C V_B = [c_{ij}]$.

Such an approach to solving (1.2), where $A$, $B$, and $C$ are matrices over a polynomial ring $\mathbb{F}[\lambda]$, where $\mathbb{F}$ is a field, was applied in [3].

Equation (3.2) is equivalent to the system of linear Diophantine equations
$$\mu_i x_{ij} + \nu_j y_{ij} = c_{ij}, \quad i, j = 1, \ldots, n. \quad (3.3)$$

Theorem 3.1. Let $X_1^{0} = \left[x_{ij}^{0}\right]$, $Y_1^{0} = \left[y_{ij}^{0}\right]$ be a particular solution of matrix equation (3.2), that is, let $x_{ij}^{0}$, $y_{ij}^{0}$, $i, j = 1, \ldots, n$, be particular solutions of the linear Diophantine equations of system (3.3).
The general solution of matrix equation (3.2) is
$$x_{ij} = x_{ij}^{0} + \frac{\nu_j}{d_{ij}} k_{ij}, \qquad y_{ij} = y_{ij}^{0} - \frac{\mu_i}{d_{ij}} k_{ij}, \quad (3.4)$$
where $d_{ij} = (\mu_i, \nu_j)$ and $k_{ij}$ are arbitrary elements of $R$, $i, j = 1, \ldots, n$.
The general solution of matrix equation (1.2) is
$$X = V_A X_1 V_B^{-1}, \qquad Y = U_A^{-1} Y_1 U_B. \quad (3.5)$$

Similarly, as for (2.3), we prove that the particular solution of (3.2) is unique if and only if $(\mu_i, \nu_j) \in U(R)$ for all $i, j = 1, \ldots, n$, that is, $(\det A, \det B) \in U(R)$. Then, in the same way as for (1.3), we write down the general solution of matrix equation (1.2).

Theorem 3.2. Suppose that $(\det A, \det B) \in U(R)$ and
$$X_1^{0} = \left[x_{ij}^{0}\right], \qquad Y_1^{0} = \left[y_{ij}^{0}\right], \quad x_{ij}^{0} \in K(\nu_j), \; i, j = 1, \ldots, n,$$
is the unique particular solution of matrix equation (3.2), where $D_A = \mathrm{diag}(\mu_1, \ldots, \mu_n)$ and $D_B = \mathrm{diag}(\nu_1, \ldots, \nu_n)$ are the canonical diagonal forms of the matrices $A$, $B$ from matrix equation (1.2), respectively.
Then the general solution of matrix equation (3.2) is
$$X_1 = X_1^{0} + K D_B, \qquad Y_1 = Y_1^{0} - D_A K, \quad (3.6)$$
where $K = [k_{ij}]$ and $k_{ij}$, $i, j = 1, \ldots, n$, are arbitrary elements of $R$.
The general solution of matrix equation (1.2) is
$$X = V_A \left(X_1^{0} + K D_B\right) V_B^{-1}, \qquad Y = U_A^{-1} \left(Y_1^{0} - D_A K\right) U_B. \quad (3.7)$$
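The bilateral case differs from the unilateral one only in that entry $(i, j)$ of system (3.3) couples $\mu_i$ with $\nu_j$. A sketch over $\mathbb{Z}$ (code and sample data are our own illustration):

```python
def ext_gcd(a, b):
    """Return (d, u, v) with d = a*u + b*v."""
    if b == 0:
        return (a, 1, 0)
    d, u, v = ext_gcd(b, a % b)
    return (d, v, u - (a // b) * v)

def solve_bilateral_diag(mu, nu, C):
    """Solve diag(mu)*X1 + Y1*diag(nu) = C over Z via system (3.3):
    mu[i]*x_ij + nu[j]*y_ij = c_ij.  Returns (X1, Y1) or None."""
    n = len(mu)
    X = [[0] * n for _ in range(n)]
    Y = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d, u, v = ext_gcd(mu[i], nu[j])
            if C[i][j] % d != 0:
                return None           # (mu_i, nu_j) does not divide c_ij
            t = C[i][j] // d
            X[i][j], Y[i][j] = u * t, v * t
    return X, Y

# Illustrative data.
mu, nu = [2, 6], [3, 4]
C1 = [[5, 2], [9, 10]]
X1, Y1 = solve_bilateral_diag(mu, nu, C1)
for i in range(2):
    for j in range(2):
        assert mu[i] * X1[i][j] + nu[j] * Y1[i][j] == C1[i][j]
```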

#### References

1. T. Kaczorek, Polynomial and Rational Matrices, Communications and Control Engineering Series, Springer, London, UK, 2007.
2. V. Kučera, “Algebraic theory of discrete optimal control for single-variable systems. I. Preliminaries,” Kybernetika, vol. 9, pp. 94–107, 1973.
3. V. Kučera, “Algebraic theory of discrete optimal control for multivariable systems,” Kybernetika, vol. 10/12, supplement, pp. 3–56, 1974.
4. W. A. Wolovich and P. J. Antsaklis, “The canonical Diophantine equations with applications,” SIAM Journal on Control and Optimization, vol. 22, no. 5, pp. 777–787, 1984.
5. W. E. Roth, “The equations $AX-YB=C$ and $AX-XB=C$ in matrices,” Proceedings of the American Mathematical Society, vol. 3, pp. 392–396, 1952.
6. M. Newman, “The Smith normal form of a partitioned matrix,” Journal of Research of the National Bureau of Standards, vol. 78, pp. 3–6, 1974.
7. C. R. Johnson and M. Newman, “A condition for the diagonalizability of a partitioned matrix,” Journal of Research of the National Bureau of Standards, vol. 79, no. 1-2, pp. 45–48, 1975.
8. R. B. Feinberg, “Equivalence of partitioned matrices,” Journal of Research of the National Bureau of Standards, vol. 80, no. 1, pp. 89–97, 1976.
9. W. H. Gustafson, “Roth's theorems over commutative rings,” Linear Algebra and Its Applications, vol. 23, pp. 245–251, 1979.
10. R. E. Hartwig, “Roth's equivalence problem in unit regular rings,” Proceedings of the American Mathematical Society, vol. 59, no. 1, pp. 39–44, 1976.
11. W. H. Gustafson and J. M. Zelmanowitz, “On matrix equivalence and matrix equations,” Linear Algebra and Its Applications, vol. 27, pp. 219–224, 1979.
12. R. M. Guralnick, “Roth's theorems and decomposition of modules,” Linear Algebra and Its Applications, vol. 39, pp. 155–165, 1981.
13. L. Huang and J. Liu, “The extension of Roth's theorem for matrix equations over a ring,” Linear Algebra and Its Applications, vol. 259, pp. 229–235, 1997.
14. R. E. Hartwig and P. Patricio, “On Roth's pseudo equivalence over rings,” Electronic Journal of Linear Algebra, vol. 16, pp. 111–124, 2007.
15. P. Lancaster and M. Tismenetsky, The Theory of Matrices, Computer Science and Applied Mathematics, Academic Press, Orlando, Fla, USA, 2nd edition, 1985.
16. S. Barnett, “Regular polynomial matrices having relatively prime determinants,” Proceedings of the Cambridge Philosophical Society, vol. 65, pp. 585–590, 1969.
17. V. Petrychkovych, “Cell-triangular and cell-diagonal factorizations of cell-triangular and cell-diagonal polynomial matrices,” Mathematical Notes, vol. 37, no. 6, pp. 431–435, 1985.
18. J. Feinstein and Y. Bar-Ness, “On the uniqueness of the minimal solution to the matrix polynomial equation $A\left(\lambda \right)X\left(\lambda \right)+Y\left(\lambda \right)B\left(\lambda \right)=C\left(\lambda \right)$,” Journal of the Franklin Institute, vol. 310, no. 2, pp. 131–134, 1980.
19. V. M. Prokip, “About the uniqueness solution of the matrix polynomial equation $A\left(\lambda \right)X\left(\lambda \right)-Y\left(\lambda \right)B\left(\lambda \right)=C\left(\lambda \right)$,” Lobachevskii Journal of Mathematics, vol. 29, no. 3, pp. 186–191, 2008.
20. V. Petrychkovych, “Generalized equivalence of pairs of matrices,” Linear and Multilinear Algebra, vol. 48, no. 2, pp. 179–188, 2000.
21. V. Petrychkovych, “Standard form of pairs of matrices with respect to generalized equivalence,” Visnyk of Lviv University, vol. 61, pp. 153–160, 2003.
22. P. M. Cohn, Free Rings and Their Relations, Academic Press, London, UK, 1971.
23. S. Friedland, “Matrices over integral domains,” in CRC Handbook of Linear Algebra, pp. 23-1–23-11, Chapman & Hall, New York, NY, USA, 2007.
24. M. Newman, Integral Matrices, Academic Press, New York, NY, USA, 1972.
25. B. L. Van der Waerden, Algebra, Springer, New York, NY, USA, 1991.
26. A. I. Borevich and I. R. Shafarevich, Number Theory, vol. 20 of Translated from the Russian by Newcomb Greenleaf. Pure and Applied Mathematics, Academic Press, New York, NY, USA, 1966.
27. K. A. Rodossky, Euclid's Algorithm, Nauka, Moscow, Russia, 1988.
28. I. Kaplansky, “Elementary divisors and modules,” Transactions of the American Mathematical Society, vol. 66, pp. 464–491, 1949.
29. O. Helmer, “The elementary divisor theorem for certain rings without chain condition,” Bulletin of the American Mathematical Society, vol. 49, pp. 225–236, 1943.