Research Article | Open Access

# Note on the Numerical Solutions of the General Matrix Convolution Equations by Using the Iterative Methods and Box Convolution Product

**Academic Editor:** Yong Zhou

#### Abstract

We define the so-called box convolution product and study its properties in order to present approximate solutions for the general coupled matrix convolution equations by using iterative methods. Furthermore, we prove that these solutions converge to the exact solutions independently of the initial value.

#### 1. Introduction

In addition to the usual matrix multiplication, there has been renewed interest in the matrix convolution products of matrix functions such as Kronecker convolution product and Hadamard convolution product. These products have interesting properties and many applications, for example, to the solution of nonhomogeneous matrix differential equations based on the definition of the so-called Dirac identity matrix which behaves like a group identity element under the convolution matrix operation.

In fact, the importance of these products arises naturally in several areas of mathematics, and they play a very important role in many applications such as system theory, control theory, stability theory of differential equations, communication systems, perturbation analysis of matrix differential equations, and other fields of pure and applied mathematics; see, for example, [1, 2]. Furthermore, several different techniques have been successfully applied to matrix equations, matrix differential equations, and matrix inequalities. For example, Nikolaos in [3] established some inequalities involving the convolution product of matrices, presented a new method to obtain closed-form solutions of transition probabilities as well as dependability measures, and then solved the renewal matrix equation by using the convolution product of matrices. Similarly, Sumita in [4] established the matrix Laguerre transform in order to calculate matrix convolutions and evaluated a matrix renewal function. In [5], the authors recently studied connections between the Kronecker and Hadamard convolution products and established some attractive inequalities for the Hadamard convolution product.

In the field of matrix algebra and system identification, iterative methods have received much attention. For example, Starke and Niethammer in [6] presented an iterative method for the solution of Sylvester equations by using the SOR technique; Kagstrom in [7] derived an approximate solution of the coupled Sylvester matrix equations; Ding and Chen in [8] presented a general family of iterative methods to solve coupled Sylvester matrix equations. Similarly, Kılıçman and Al Zhour in [9, 10] studied iterative methods to solve coupled matrix convolution equations. To the best of our knowledge, numerical solutions of general matrix convolution equations have not yet been fully investigated; thus the present work focuses on the iterative solutions of coupled matrix convolution equations and the convergence of these solutions.

In the present paper, we study approximate solutions of the general matrix convolution equations by using iterative methods and the box convolution product. Further, we prove that these solutions converge to the exact solutions for any initial value. Throughout the present work we use the following notation:

(i) the set of all integrable matrix functions of a given order (with the obvious shorthand when the matrices are square);
(ii) the inverse, determinant, and vector-operator with respect to the convolution, respectively;
(iii) the convolution product, usual product, and Kronecker convolution product of matrix functions, respectively;
(iv) the norm and transpose of a matrix function, respectively;
(v) the *m*-power convolution product and *m*-power usual product of a matrix function, respectively;
(vi) the Dirac delta function and the Dirac identity matrix, which acts as a group identity under the convolution matrix operation, respectively;
(vii) the box convolution product and the Khatri-Rao convolution product, respectively.

#### 2. Some Results on the Convolution Products of Matrices

In this section, we introduce the convolution products of matrices, namely, the convolution and Kronecker convolution products; for details see [5]. Some new results of these products are established that will be useful in our investigation of the applications.

*Definition 2.1.* Let integrable matrix functions of compatible orders be given. The convolution and Kronecker convolution products are matrix functions defined as follows.
(i) *Convolution product* (see, e.g., [5, 9]):
Similarly, the *correlation product* can also be defined through convolution:
Notice that correlation and convolution are identical when the functions are symmetric.
(ii) *Kronecker convolution product* (see [5, 9]):
where each block is a submatrix of the indicated order.
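As a discrete illustration of the convolution product above, each scalar multiplication in the ordinary matrix product is replaced by a one-dimensional convolution of the entry sequences. The following pure-Python sketch is our own illustrative code, not from the paper; all names are hypothetical:

```python
# Discrete analogue of the matrix convolution product (Definition 2.1):
# entries are finite sequences, scalar products become 1-D convolutions,
# and sums remain sums.  Illustrative sketch only.

def conv(f, g):
    """Full discrete convolution of two sequences."""
    out = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

def add(f, g):
    """Entrywise sum of two sequences, padding the shorter with zeros."""
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0.0) + (g[i] if i < len(g) else 0.0)
            for i in range(n)]

def mat_conv(A, B):
    """(A * B)[i][j] = sum_k conv(A[i][k], B[k][j])."""
    m, p, q = len(A), len(B), len(B[0])
    C = [[[0.0] for _ in range(q)] for _ in range(m)]
    for i in range(m):
        for j in range(q):
            acc = [0.0]
            for k in range(p):
                acc = add(acc, conv(A[i][k], B[k][j]))
            C[i][j] = acc
    return C

# The 1x1 case reduces to ordinary scalar convolution:
A = [[[1.0, 1.0]]]          # single entry, the sequence (1, 1)
B = [[[1.0, 2.0, 1.0]]]     # single entry, the sequence (1, 2, 1)
print(mat_conv(A, B)[0][0])  # -> [1.0, 3.0, 3.0, 1.0]
```

The same `mat_conv` handles block matrices of sequences, which is all that the later iterative algorithms need at the discrete level.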

Of course there are some interesting connections between these matrix convolution products which are very important for establishing new inequalities involving them. For example, the entries of the autocovariance matrix function can be expressed in terms of the Kronecker convolution product; see [5].

*Definition 2.2.* Let a square integrable matrix function be given. The determinant, inverse, and *m*-power of the matrix function with respect to the convolution are defined as follows (see, e.g., [10]).

(i) *Determinant:* the convolution determinant expands along the first row, where each cofactor is the determinant of the matrix function obtained by deleting row 1 and column *j*, also known as the minor corresponding to that entry.

(ii) *Inversion:* the convolution inverse is defined through the convolution determinant and the adjoint.

(iii) *m-power convolution product:* for a positive integer *m*, the *m*-power is the *m*-fold convolution product of the matrix function with itself.

Then we have the following theorem.

Theorem 2.3. *Let integrable matrix functions of compatible orders be given, with the scalar identity matrix, and let the coefficients be constants. Then*

*Proof.* The proof is straightforward by using the definition of the convolution product of matrices.

Lemma 2.4. *Let the matrix functions be of compatible orders. Then*

*Proof.* The proof is straightforward from the definition of the Kronecker convolution product.

Theorem 2.5. *Let the matrix functions be of compatible orders. Then*

*Proof.* To prove (2.10), let the matrix contain column vectors and, for each position, let the selector be a vector of zeros except for a 1 in that position; then
Now, since the factors are vectors, by Lemma 2.4 we have
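Theorem 2.5 is the convolution analogue of the classical identity vec(AXB) = (Bᵀ ⊗ A) vec(X); when every factor is a constant matrix times the Dirac identity, the convolution version reduces to this classical case. The following pure-Python check of the classical identity is our own illustrative sketch (column-major vec):

```python
# Numerical check of vec(AXB) = (B^T kron A) vec(X) for small matrices.
# Illustrative sketch; not the paper's code.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def kron(A, B):
    ra, ca, rb, cb = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i // rb][j // cb] * B[i % rb][j % cb]
             for j in range(ca * cb)] for i in range(ra * rb)]

def vec(A):
    # column-major stacking, as is standard for this identity
    return [[A[i][j]] for j in range(len(A[0])) for i in range(len(A))]

A = [[1.0, 2.0], [3.0, 4.0]]
X = [[0.0, 1.0], [1.0, 0.0]]
B = [[2.0, 0.0], [1.0, 1.0]]

lhs = vec(matmul(matmul(A, X), B))
rhs = matmul(kron(transpose(B), A), vec(X))
print(lhs == rhs)  # True
```

The proof sketched above (apply the identity to each unit selector vector) is exactly how one verifies this columnwise.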

#### 3. Main Results

Iterative methods begin with an initial vector and, for each nonnegative integer *k*, calculate the next vector from the current one. The limit of such a sequence of vectors, when it exists, is the desired solution to the problem. Thus the fundamental tool for understanding an iterative algorithm is the concept of distance between vectors.

In the literature there is much research on the theory and applications of iterative algorithms. In this part we consider the following linear convolution equation:

where the coefficient matrix is a given full-rank matrix with nonzero diagonal elements, the right-hand side is a given vector, and the unknown vector is to be solved for. Further, we write the iterative solutions of the equation with an index. Now, in order to apply the iterative method, it is necessary to choose an initial vector and apply the rule of the iteration to compute each vector from the known previous one. Thus the iterative method is as follows:

where the iteration matrix is a full-rank matrix to be determined and the selected parameter can be interpreted as a step size or convergence factor, also known as the ratio of convergence. Then the following lemma is straightforward.

Lemma 3.1. *If the coefficient matrix is square and invertible and one makes the natural choice in (3.2), then the iterative steps converge and the iterative algorithm is as follows:*

Lemma 3.2. *If the coefficient matrix is a nonsquare full column-rank matrix, then the algorithm is given by*

*Similarly, if it is a nonsquare full row-rank matrix, then*

*where the quantities involved are defined accordingly.*

By selecting different values for the convergence factor we obtain several different representations; for example, it is easy to prove the following:

(i) the iterative solution in (3.3) converges;
(ii) the solution in (3.4) converges;
(iii) similarly, the solution in (3.5) also converges.

Note that the iterative algorithms in (3.4) and (3.5) are also suitable for solving nonsquare convolution systems and thus are also useful for finding the iterative solutions of coupled matrix convolution equations.

We also note that the convergence ratio in (3.3)–(3.5) does not depend on the matrix and is easy to select; however, the algorithms in (3.3)–(3.5) require some computation of matrix inversion with respect to the convolution.
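To make the iteration concrete, the following pure-Python sketch implements the full column-rank update of Lemma 3.2 in the ordinary (non-convolution) setting, x_{k+1} = x_k + μAᵀ(b − Ax_k). The step size μ = 1/tr(AᵀA), the iteration count, and the test data are our own conservative illustrative choices, not the paper's:

```python
# Gradient (Landweber-type) iteration for a full column-rank system
# A x = b: x_{k+1} = x_k + mu * A^T (b - A x_k).
# mu = 1/trace(A^T A) guarantees mu < 2 / lambda_max(A^T A),
# which is sufficient for convergence.  Illustrative sketch only.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def landweber(A, b, iters=500):
    n = len(A[0])
    At = [list(col) for col in zip(*A)]              # A transpose
    mu = 1.0 / sum(a * a for row in A for a in row)  # 1/trace(A^T A)
    x = [0.0] * n                                    # initial value x_0 = 0
    for _ in range(iters):
        r = [bi - ri for bi, ri in zip(b, matvec(A, x))]      # residual
        x = [xi + mu * gi for xi, gi in zip(x, matvec(At, r))]
    return x

# Overdetermined but consistent system with exact solution (1, 2):
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x = landweber(A, b)
print(x)  # approaches [1.0, 2.0]
```

For a consistent full column-rank system this converges to the least-squares solution regardless of the initial vector, matching the independence-of-initial-value claim made for the convolution algorithms.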

Now we can use the iterative method to solve more general coupled matrix convolution equations of the following form: where the coefficient matrices are given while the unknown matrix functions are to be solved for; see [9].

To be more concise, the least-squares iterative algorithm will be presented later; we first introduce the box convolution product. Let conformably partitioned matrix functions be given. Then the box convolution product is defined as

provided that the orders of the multiplier and multiplicand matrices are compatible. Similarly, the Khatri-Rao convolution product is defined by

Now let the Dirac identity matrix be partitioned conformably. Then the box convolution product has the following useful properties:

*Proof.* Each of these properties follows directly from the definition of the box convolution product.

Lemma 3.3. *The general coupled matrix convolution equations defined in (3.6) have a unique solution if and only if the associated coefficient matrix is nonsingular; in this case, the solution is given by*

*and, if the corresponding condition holds, then the general coupled matrix convolution equations defined in (3.12) have unique solutions.*

In order to derive the iterative algorithm for solving the general matrix convolution equations defined in (3.6), we first consider the coupled matrix convolution equations defined by Kılıçman and Al Zhour in [9], whose iterative solution can be expressed as follows, where the Dirac identity matrices are of the appropriate orders.

Now let the estimates, or iterative solutions, of the unknowns be given; then we present the least-squares iterative algorithm to compute the solutions of the general coupled matrix convolution equations (3.6) as follows, where the convergence factor is to be specified. We note that since (3.19) is established by using Lemma 3.1, the algorithm in (3.19) is known as the least-squares iterative algorithm.

Theorem 3.4. *If the general matrix convolution equations defined in (3.6) have unique solutions, then the iterative solutions given by the algorithm in (3.19) converge to them for any initial values; that is,*

*Proof. *Define the estimation error matrix as follows:
Let
By using (3.6) and (3.19), it is easy to get
Thus by defining a nonnegative definite function:
on using the equation defined in (3.23), and the box convolution properties, then we have the following formula:
and thus it follows that
Now taking the summation over the iteration index yields
If the convergence factor is chosen to satisfy , then we have
It follows that the error tends to zero as the number of iterations tends to infinity; thus we obtain
or
Thus on using Lemma 3.3, we prove Theorem 3.4.

From the proofs of Lemma 3.3 and Theorem 3.4, we can see that the iterative solutions in (3.19) are linearly convergent. In this notation the general matrix convolution equations defined in (3.6) can be expressed simply, and by using the box convolution product properties, (3.19) can be written in the following more compact form. Thus, by referring to Lemma 3.1, we can also establish the gradient iterative algorithm for the solution of the general coupled matrix convolution equations defined in (3.6) as follows:

where the convergence factor is given by

*Example 3.5.* Consider solving the coupled matrix equations:
where
Then we can easily see that the convolution products are the same as the matrix-vector products *AX*, *BY*, *DX*, and *YE*, and thus the convolution equation takes the form *AX* + *BY* = *C*, *DX* + *YE* = *F*. Then the exact solutions of *X* and *Y* are
Applying the iterative steps, the speed of convergence of the iterative solutions depends on the selection of the convergence factor. For a suitable choice we can easily see that the error tends to zero faster; see the details in [5, 8].
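Since the specific matrices of Example 3.5 are not reproduced here, the following pure-Python sketch solves a coupled pair of the same form, AX + BY = C, DX + YE = F, by steepest descent on the squared residual norms; the data, the step size, and the iteration count are our own illustrative choices, not the paper's:

```python
# Gradient-type iteration for the coupled pair A X + B Y = C,
# D X + Y E = F (ordinary matrix products).  The update is steepest
# descent on ||C - AX - BY||^2 + ||F - DX - YE||^2.  Illustrative only.

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def t(A):
    return [list(c) for c in zip(*A)]

def axpy(X, Y, s=1.0):  # entrywise X + s*Y
    return [[x + s * y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def solve_coupled(A, B, C, D, E, F, mu=0.1, iters=2000):
    n = len(C)
    X = [[0.0] * n for _ in range(n)]   # initial values X_0 = Y_0 = 0
    Y = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        R1 = axpy(axpy(C, mm(A, X), -1.0), mm(B, Y), -1.0)  # C - AX - BY
        R2 = axpy(axpy(F, mm(D, X), -1.0), mm(Y, E), -1.0)  # F - DX - YE
        X = axpy(X, axpy(mm(t(A), R1), mm(t(D), R2)), mu)
        Y = axpy(Y, axpy(mm(t(B), R1), mm(R2, t(E))), mu)
    return X, Y

I2 = [[1.0, 0.0], [0.0, 1.0]]
A = [[2.0, 0.0], [0.0, 2.0]]
C = [[3.0, 1.0], [0.0, 3.0]]
F = [[2.0, 1.0], [0.0, 2.0]]
X, Y = solve_coupled(A, I2, C, I2, I2, F)
# exact solutions here: X = C - F (the identity), Y = 2F - C
print([[round(v, 6) for v in row] for row in X])
```

With B = D = E = I the pair reduces entrywise to a 2×2 linear system, so the exact solutions X = C − F and Y = 2F − C are easy to verify against the iterates.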

*Remark 3.6.* The convergence factor chosen above may not be the best and may be conservative. In fact, there exists a best convergence factor for which the fastest convergence rate can be obtained; however, if the factor is too large, the iterative method may diverge. How to choose the best convergence factor is still an open problem. Further, if we define the relative error as
then it is clear that the relative error becomes smaller and smaller and goes to zero as the iteration number increases, that is,
and this indicates that the proposed iterative method is effective.
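Remark 3.6 can be illustrated on a hypothetical scalar system: for the iteration x_{k+1} = x_k + μa(b − ax_k), the relative error decreases monotonically to zero when 0 < μ < 2/a², while too large a μ makes the method diverge. A pure-Python sketch with our own illustrative numbers:

```python
# Relative error of the scalar gradient iteration
# x_{k+1} = x_k + mu * a * (b - a * x_k), exact solution x = b/a.
# Error contracts by the factor |1 - mu*a^2| per step.  Illustrative only.

def rel_errors(a, b, mu, iters):
    x_true = b / a
    x, errs = 0.0, []
    for _ in range(iters):
        x = x + mu * a * (b - a * x)
        errs.append(abs(x - x_true) / abs(x_true))
    return errs

good = rel_errors(2.0, 6.0, 0.2, 10)  # mu < 2/a^2 = 0.5: converges
bad = rel_errors(2.0, 6.0, 0.6, 10)   # mu > 0.5: diverges
print(good[-1] < good[0], bad[-1] > bad[0])  # True True
```

Here the contraction factor is |1 − μa²|: 0.2 for the convergent run and 1.4 for the divergent one, which is exactly the trade-off the remark describes.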

We also note that the convolution theorem can be proved easily by using the box convolution definition, since

provided that the matrices have Fourier transform.
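The convolution theorem can also be checked numerically: the DFT of a zero-padded linear convolution equals the pointwise product of the DFTs of the zero-padded factors. A naive O(n²) pure-Python check (illustrative, not an efficient implementation):

```python
# Numerical check of the convolution theorem: DFT(f * g) = DFT(f) . DFT(g)
# once both factors are zero-padded to the full convolution length.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def conv(f, g):
    out = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

f, g = [1.0, 2.0, 3.0], [4.0, 5.0]
n = len(f) + len(g) - 1                 # length of the full convolution
F = dft(f + [0.0] * (n - len(f)))       # zero-pad before transforming
G = dft(g + [0.0] * (n - len(g)))
lhs = dft(conv(f, g))
rhs = [a * b for a, b in zip(F, G)]
ok = all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
print(ok)  # True
```

Zero-padding to the full convolution length is what makes the cyclic convolution implicit in the DFT agree with the linear one.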

#### Acknowledgments

The authors gratefully acknowledge that this research was partially supported by the Ministry of Science, Technology and Innovations (MOSTI), Malaysia under the Grant IRPA project no. 06-01-04-SF1050. The authors would also like to express their sincere thanks to the referees for their very constructive comments and suggestions.

#### References

- T. Chen and B. A. Francis, *Optimal Sampled-Data Control Systems*, Springer, London, UK, 1995.
- A. E. Gilmour, “Circulant matrix methods for the numerical solution of partial differential equations by FFT convolutions,” *Applied Mathematical Modelling*, vol. 12, no. 1, pp. 44–50, 1988.
- L. Nikolaos, “Dependability analysis of semi-Markov system,” *Reliability Engineering & System Safety*, vol. 55, pp. 203–207, 1997.
- H. Sumita, “The matrix Laguerre transform,” *Applied Mathematics and Computation*, vol. 15, no. 1, pp. 1–28, 1984.
- A. Kılıçman and Z. Al Zhour, “On the connection between Kronecker and Hadamard convolution products of matrices and some applications,” *Journal of Inequalities and Applications*, vol. 2009, Article ID 736243, 10 pages, 2009.
- G. Starke and W. Niethammer, “SOR for $AX-XB=C$,” *Linear Algebra and Its Applications*, vol. 154, pp. 355–375, 1991.
- B. Kagstrom, “A perturbation analysis of the generalized Sylvester equation $(AR-LB,DR-LU)=(C,F)$,” *SIAM Journal on Matrix Analysis and Applications*, vol. 15, no. 4, pp. 1045–1060, 1994.
- F. Ding and T. Chen, “Iterative least-squares solutions of coupled Sylvester matrix equations,” *Systems & Control Letters*, vol. 54, no. 2, pp. 95–107, 2005.
- A. Kılıçman and Z. Al Zhour, “Iterative solutions of coupled matrix convolution equations,” *Soochow Journal of Mathematics*, vol. 33, no. 1, pp. 167–180, 2007.
- A. Kılıçman and Z. Al Zhour, “Vector least-squares solutions for coupled singular matrix equations,” *Journal of Computational and Applied Mathematics*, vol. 206, no. 2, pp. 1051–1069, 2007.

#### Copyright

Copyright © 2010 Adem Kılıçman and Zeyad Al zhour. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.