
ISRN Computational Mathematics

Volume 2012 (2012), Article ID 126908, 6 pages

http://dx.doi.org/10.5402/2012/126908

## A New Iterative Algorithm for Solving a Class of Matrix Nearness Problem

Xuefeng Duan and Chunmei Li

^{1}College of Mathematics and Computational Science, Guilin University of Electronic Technology, Guilin 541004, China
^{2}Department of Mathematics, Shanghai University, Shanghai 200444, China

Received 14 September 2011; Accepted 3 October 2011

Academic Editors: T. Allahviranloo and K. Eom

Copyright © 2012 Xuefeng Duan and Chunmei Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Based on the alternating projection algorithm, which was proposed by von Neumann to treat the problem of finding the projection of a given point onto the intersection of two closed subspaces, we propose a new iterative algorithm to solve the matrix nearness problem associated with the matrix equations $A_{1}XB_{1}=C_{1}$, $A_{2}XB_{2}=C_{2}$, which arises frequently in experimental design. If we choose the initial iterative matrix $X_{1}=0$, the least Frobenius norm solution of these matrix equations is obtained. Numerical examples show that the new algorithm is feasible and effective.

#### 1. Introduction

Denote by $\mathbb{R}^{m\times n}$ the set of $m\times n$ real matrices, and by $A^{T}$ and $A^{+}$ the transpose and the Moore–Penrose generalized inverse of a matrix $A$, respectively. For $A,B\in\mathbb{R}^{m\times n}$, $\langle A,B\rangle=\operatorname{tr}(B^{T}A)$ denotes the inner product of the matrices $A$ and $B$. The induced norm is the so-called Frobenius norm, that is, $\|A\|=\sqrt{\langle A,A\rangle}$; then $\mathbb{R}^{m\times n}$ is a real Hilbert space. Let $\Omega$ be a closed convex subset of a real Hilbert space $H$ and $x$ a point in $H$; then the point in $\Omega$ nearest to $x$ is called the projection of $x$ onto $\Omega$ and is denoted by $P_{\Omega}(x)$; that is to say, $P_{\Omega}(x)$ is the solution of the following minimization problem (see [1, 2]):
$$\|x-P_{\Omega}(x)\|=\min_{y\in\Omega}\|x-y\|,\tag{1}$$
that is,
$$P_{\Omega}(x)=\arg\min_{y\in\Omega}\|x-y\|.\tag{2}$$
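These definitions are easy to check numerically. The paper's own experiments use MATLAB; the minimal sketch below uses NumPy instead, with arbitrary illustrative matrices (none of the data comes from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))

# Frobenius inner product <A, B> = tr(B^T A), which equals the sum of
# elementwise products of the two matrices.
inner = np.trace(B.T @ A)
assert np.isclose(inner, np.sum(A * B))

# The induced (Frobenius) norm ||A|| = sqrt(<A, A>).
fro = np.sqrt(np.trace(A.T @ A))
assert np.isclose(fro, np.linalg.norm(A, "fro"))
```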

The problem of finding, in a constrained matrix set, a nearest matrix to a given matrix is called the matrix nearness problem. Because the preliminary estimate is frequently obtained from experiments, it may not satisfy the given restrictions; hence it is necessary to replace it by a nearest matrix from the constrained set [3]. In structural design, finite element model updating, control theory, and so forth, the matrix set is often the (constrained) solution set or the least squares (constrained) solution set of some matrix equations [4–6]. Thus, the problem mentioned above is also called the matrix nearness problem associated with a matrix equation. Recently, there have been many studies of matrix nearness problems associated with matrix equations; see, for instance, [4, 6–14].

In this paper, we consider the following problem.

*Problem 1. *Given matrices $A_{1}$, $B_{1}$, $C_{1}$, $A_{2}$, $B_{2}$, $C_{2}$ of compatible dimensions and $\Phi\in\mathbb{R}^{m\times n}$, find $\hat{X}\in\Omega$ such that
$$\|\hat{X}-\Phi\|=\min_{X\in\Omega}\|X-\Phi\|,\tag{3}$$
where
$$\Omega=\{X\in\mathbb{R}^{m\times n}\mid A_{1}XB_{1}=C_{1},\ A_{2}XB_{2}=C_{2}\}.\tag{4}$$
Obviously, $\Omega$ is the solution set of the matrix equations
$$A_{1}XB_{1}=C_{1},\qquad A_{2}XB_{2}=C_{2},\tag{5}$$
and $\hat{X}$ is the optimal approximate solution of (5) to the given matrix $\Phi$. In particular, if $\Phi=0$, then the solution of Problem 1 is just the least Frobenius norm solution of the matrix equations (5). It is easy to verify that $\Omega$ is a closed convex set; hence the solution of Problem 1, when it exists, is unique.

The matrix equations (5) and the associated matrix nearness problem have been extensively studied for the past 40 or more years. Wang [15] and Navarra et al. [16] gave conditions for the existence of a solution and representations of the general common solution to (5). By the projection theorem and matrix decompositions, Liao et al. [6] gave an analytical expression for the optimal approximate least squares symmetric solution of (5). However, such direct methods may be inefficient for large sparse coefficient matrices because of limited storage and computing speed. Therefore, iterative methods for solving the matrix equations (5) have attracted much interest recently. Peng et al. [11] and Chen et al. [7] proposed iterative methods to compute the symmetric solutions and the optimal approximate symmetric solution of (5). An efficient iterative method for solving the matrix nearness Problem 1 associated with the matrix equations (5) was presented in [13]. Ding et al. [17] considered the unique solution of the matrix equations (5) and used a gradient-based iterative algorithm to compute it. The (least squares) solution and the optimal approximate (least squares) solution of (5), constrained to be bisymmetric, reflexive, generalized reflexive, or generalized centrosymmetric, were studied in [7–10, 12].

The alternating projection algorithm dates back to von Neumann [18], who treated the problem of finding the projection of a given point onto the intersection of two closed subspaces. Later, Bauschke and Borwein [1] extended the analysis of von Neumann's alternating projection scheme to the case of two closed affine subspaces. There are many variations and extensions of the alternating projection algorithm, and we can use them to find the projection of a given point onto the intersection of closed subspaces [19] and closed convex sets [20, 21]. For a complete discussion of the alternating projection algorithm, see Deutsch [2].

In this paper, we propose a new algorithm to solve Problem 1. We state Problem 1 as the minimization of a convex quadratic function over the intersection of two closed affine subspaces in the space $\mathbb{R}^{m\times n}$. From this point of view, Problem 1 can be solved by the alternating projection algorithm. If we choose the initial iterative matrix $X_{1}=0$, the least Frobenius norm solution of the matrix equations is obtained. Finally, we use numerical examples to show that the new algorithm is faster, and has lower computational cost per step, than the algorithm proposed by Sheng and Chen [13] for solving Problem 1. In particular, the CPU time and iteration count of our algorithm increase slowly as the dimension of the matrix grows, so our algorithm is suitable for large-scale problems.

#### 2. Alternating Projection Algorithm for Solving Problem 1

In this section, we apply the alternating projection algorithm to solve Problem 1. We begin with two lemmas.

Lemma 1 (see [1, Theorem 4.1]). *Let $M_{1}$ and $M_{2}$ be closed affine subspaces in a Hilbert space $H$ and $x$ be a point in $H$. Here $M_{1}=a_{1}+S_{1}$ and $M_{2}=a_{2}+S_{2}$, where $S_{1}$ and $S_{2}$ are closed subspaces and $a_{1},a_{2}\in H$. If $M_{1}\cap M_{2}\neq\emptyset$, then the sequences $\{x_{k}\}$ and $\{y_{k}\}$ generated by the alternating projection algorithm
$$x_{1}=x,\qquad y_{k}=P_{M_{2}}(x_{k}),\qquad x_{k+1}=P_{M_{1}}(y_{k}),\quad k=1,2,\ldots,\tag{6}$$
both converge to the projection of the point $x$ onto the intersection of $M_{1}$ and $M_{2}$, that is,
$$\lim_{k\to\infty}x_{k}=\lim_{k\to\infty}y_{k}=P_{M_{1}\cap M_{2}}(x).\tag{7}$$*

Lemma 2 (see [22, Theorem 9.3.1]). *Let $A\in\mathbb{R}^{p\times m}$, $B\in\mathbb{R}^{n\times q}$, and $C\in\mathbb{R}^{p\times q}$ be known matrices. Then the matrix equation $AXB=C$ has a solution $X\in\mathbb{R}^{m\times n}$ if and only if
$$AA^{+}CB^{+}B=C,\tag{8}$$
and the representation of the solution is
$$X=A^{+}CB^{+}+Z-A^{+}AZBB^{+},\tag{9}$$
where $Z\in\mathbb{R}^{m\times n}$ is arbitrary.*
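Lemma 2 is easy to verify numerically with the Moore–Penrose inverse. A NumPy sketch (the dimensions and data below are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p, q = 5, 4, 3, 3
A = rng.standard_normal((p, m))
B = rng.standard_normal((n, q))
C = A @ rng.standard_normal((m, n)) @ B   # consistent by construction

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)

# Solvability condition of Lemma 2: A A^+ C B^+ B = C.
assert np.allclose(A @ Ap @ C @ Bp @ B, C)

# General solution X = A^+ C B^+ + Z - A^+ A Z B B^+ for arbitrary Z:
# every such X satisfies A X B = C (using A A^+ A = A and B B^+ B = B).
Z = rng.standard_normal((m, n))
X = Ap @ C @ Bp + Z - Ap @ A @ Z @ B @ Bp
assert np.allclose(A @ X @ B, C)
```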

Lemma 3 (see [22, Theorem 9.3.2]). *Given $A\in\mathbb{R}^{p\times m}$, $B\in\mathbb{R}^{n\times q}$, $C\in\mathbb{R}^{p\times q}$, and $Y\in\mathbb{R}^{m\times n}$, suppose the equation $AXB=C$ is consistent and set
$$\Gamma=\{X\in\mathbb{R}^{m\times n}\mid AXB=C\};\tag{10}$$
then the solution of the following problem
$$\min_{X\in\Gamma}\|X-Y\|\tag{11}$$
is
$$\hat{X}=Y+A^{+}(C-AYB)B^{+},\tag{12}$$
that is,
$$P_{\Gamma}(Y)=Y+A^{+}(C-AYB)B^{+}.\tag{13}$$*
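The projection formula of Lemma 3 can likewise be checked numerically: the projected point solves the equation, and no other solution is closer to $Y$. A NumPy sketch (the helper name `proj` and all data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p, q = 5, 4, 3, 3
A = rng.standard_normal((p, m))
B = rng.standard_normal((n, q))
C = A @ rng.standard_normal((m, n)) @ B   # consistent equation AXB = C
Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)

def proj(Y):
    """Projection of Y onto {X : A X B = C}, as in Lemma 3."""
    return Y + Ap @ (C - A @ Y @ B) @ Bp

Y = rng.standard_normal((m, n))
P = proj(Y)
assert np.allclose(A @ P @ B, C)          # P lies in the solution set

# P is the nearest solution: any other solution (built from the general
# representation of Lemma 2) is at least as far from Y.
Z = rng.standard_normal((m, n))
X_other = Ap @ C @ Bp + Z - Ap @ A @ Z @ B @ Bp
assert np.linalg.norm(P - Y) <= np.linalg.norm(X_other - Y) + 1e-9
```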

Now we begin to use the alternating projection algorithm (6) to solve Problem 1. Firstly, we define the two sets
$$\Omega_{1}=\{X\in\mathbb{R}^{m\times n}\mid A_{1}XB_{1}=C_{1}\},\qquad \Omega_{2}=\{X\in\mathbb{R}^{m\times n}\mid A_{2}XB_{2}=C_{2}\}.\tag{14}$$
It is easy to see that
$$\Omega=\Omega_{1}\cap\Omega_{2},\tag{15}$$
and if the set $\Omega$ is nonempty, then $\Omega_{1}\cap\Omega_{2}\neq\emptyset$. By Lemma 2, the sets $\Omega_{1}$ and $\Omega_{2}$ can be equivalently written as
$$\Omega_{i}=\{A_{i}^{+}C_{i}B_{i}^{+}+Z-A_{i}^{+}A_{i}ZB_{i}B_{i}^{+}\mid Z\in\mathbb{R}^{m\times n}\},\quad i=1,2.\tag{16}$$
Hence, $\Omega_{1}$ and $\Omega_{2}$ are closed affine subspaces.

After defining the sets $\Omega_{1}$ and $\Omega_{2}$, Problem 1 can be rewritten as finding $\hat{X}\in\Omega_{1}\cap\Omega_{2}$ such that
$$\|\hat{X}-\Phi\|=\min_{X\in\Omega_{1}\cap\Omega_{2}}\|X-\Phi\|.\tag{17}$$
Noting the equalities (17) and (2), it is easy to find that
$$\hat{X}=P_{\Omega_{1}\cap\Omega_{2}}(\Phi).\tag{18}$$
Therefore, Problem 1 can be converted equivalently into finding the projection of the given matrix $\Phi$ onto the intersection $\Omega_{1}\cap\Omega_{2}$. We will use the alternating projection algorithm (6) to compute the projection $P_{\Omega_{1}\cap\Omega_{2}}(\Phi)$ and thereby obtain the solution of Problem 1.

By (6) we can see that the key to realizing the alternating projection algorithm is computing the projections of a matrix onto $\Omega_{1}$ and $\Omega_{2}$, respectively. These projections are given in closed form in the following theorems.

Theorem 1. *Suppose that the set $\Omega_{1}$ is nonempty. For a given matrix $Y\in\mathbb{R}^{m\times n}$, we have
$$P_{\Omega_{1}}(Y)=Y+A_{1}^{+}(C_{1}-A_{1}YB_{1})B_{1}^{+}.\tag{19}$$*

*Proof. *By (1) and (2), we know that the projection $P_{\Omega_{1}}(Y)$ is the solution of the following minimization problem:
$$\min_{X\in\Omega_{1}}\|X-Y\|,\tag{20}$$
and according to Lemma 3 the solution of the minimization problem (20) is $Y+A_{1}^{+}(C_{1}-A_{1}YB_{1})B_{1}^{+}$. Hence, (19) holds.

Theorem 2. *Suppose that the set $\Omega_{2}$ is nonempty. For a given matrix $Y\in\mathbb{R}^{m\times n}$, we have
$$P_{\Omega_{2}}(Y)=Y+A_{2}^{+}(C_{2}-A_{2}YB_{2})B_{2}^{+}.\tag{21}$$*

*Proof. *The proof is similar to that of Theorem 1 and is omitted here.

By the alternating projection algorithm (6) and Theorems 1 and 2, we obtain a new algorithm to solve Problem 1, which can be stated as follows.

*Algorithm 1. *One has (1) set $X_{1}=\Phi$; (2) set $k=1$; (3) for $k=1,2,\ldots$, compute
$$Y_{k}=X_{k}+A_{2}^{+}(C_{2}-A_{2}X_{k}B_{2})B_{2}^{+},\qquad X_{k+1}=Y_{k}+A_{1}^{+}(C_{1}-A_{1}Y_{k}B_{1})B_{1}^{+};\tag{22}$$
end.
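Algorithm 1 is straightforward to implement once the two projection formulas of Theorems 1 and 2 are in hand. The paper's experiments use MATLAB; here is a NumPy sketch (the function name, tolerance, iteration cap, and test data are illustrative assumptions):

```python
import numpy as np

def alternating_projection(A1, B1, C1, A2, B2, C2, Phi, tol=1e-12, maxit=50000):
    """Sketch of Algorithm 1: alternately project onto the solution sets of
    A1 X B1 = C1 (Omega_1) and A2 X B2 = C2 (Omega_2), starting from X1 = Phi."""
    A1p, B1p = np.linalg.pinv(A1), np.linalg.pinv(B1)
    A2p, B2p = np.linalg.pinv(A2), np.linalg.pinv(B2)
    X = Phi
    for _ in range(maxit):
        Y = X + A2p @ (C2 - A2 @ X @ B2) @ B2p      # project onto Omega_2 (Theorem 2)
        X_new = Y + A1p @ (C1 - A1 @ Y @ B1) @ B1p  # project onto Omega_1 (Theorem 1)
        if np.linalg.norm(X_new - X) < tol:         # practical stopping criterion
            return X_new
        X = X_new
    return X

# Synthetic consistent data: X_true lies in both solution sets, so the
# intersection is nonempty and the iteration converges (Lemma 1).
rng = np.random.default_rng(3)
m, n = 6, 5
X_true = rng.standard_normal((m, n))
A1, B1 = rng.standard_normal((3, m)), rng.standard_normal((n, 3))
A2, B2 = rng.standard_normal((2, m)), rng.standard_normal((n, 2))
C1, C2 = A1 @ X_true @ B1, A2 @ X_true @ B2
Xhat = alternating_projection(A1, B1, C1, A2, B2, C2, np.zeros((m, n)))
assert np.linalg.norm(A1 @ Xhat @ B1 - C1) < 1e-6   # Xhat solves both equations
assert np.linalg.norm(A2 @ Xhat @ B2 - C2) < 1e-6
```

Note that the two pseudoinverses per constraint are computed once outside the loop, so each iteration costs only a handful of matrix multiplications.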

By Lemma 1 and (15) and (16), we get the convergence theorem for Algorithm 1.

Theorem 3. *If the set $\Omega$ is nonempty, then the matrix sequences $\{X_{k}\}$ and $\{Y_{k}\}$ generated by Algorithm 1 both converge to the projection of $\Phi$ onto the intersection of $\Omega_{1}$ and $\Omega_{2}$, that is,
$$\lim_{k\to\infty}X_{k}=\lim_{k\to\infty}Y_{k}=P_{\Omega_{1}\cap\Omega_{2}}(\Phi).\tag{23}$$*

*Proof. *If the set $\Omega$ is nonempty, then by (15) we have $\Omega_{1}\cap\Omega_{2}=\Omega\neq\emptyset$. And noting (16), we know that the sets $\Omega_{1}$ and $\Omega_{2}$ are closed affine subspaces of the Hilbert space $\mathbb{R}^{m\times n}$. Hence, by Lemma 1 we derive that the matrix sequences $\{X_{k}\}$ and $\{Y_{k}\}$ generated by Algorithm 1 both converge to the projection of $\Phi$ onto the intersection of $\Omega_{1}$ and $\Omega_{2}$; that is, (23) holds.

Combining Theorem 3 and the equalities (18) and (17), we have the following.

Theorem 4. *If the set $\Omega$ is nonempty, then the matrix sequences $\{X_{k}\}$ and $\{Y_{k}\}$ generated by Algorithm 1 both converge to the unique solution of Problem 1. Moreover, if the initial matrix $X_{1}=0$ (that is, $\Phi=0$), then the sequences $\{X_{k}\}$ and $\{Y_{k}\}$ both converge to the least Frobenius norm solution of the matrix equations (5).*
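The least-norm claim of Theorem 4 can be illustrated numerically: starting the iteration from the zero matrix, the limit is a common solution whose Frobenius norm does not exceed that of any other solution. A NumPy sketch with synthetic data (all names, sizes, and tolerances are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 6, 5
X_true = rng.standard_normal((m, n))              # one particular common solution
A1, B1 = rng.standard_normal((3, m)), rng.standard_normal((n, 3))
A2, B2 = rng.standard_normal((2, m)), rng.standard_normal((n, 2))
C1, C2 = A1 @ X_true @ B1, A2 @ X_true @ B2       # consistent by construction
A1p, B1p = np.linalg.pinv(A1), np.linalg.pinv(B1)
A2p, B2p = np.linalg.pinv(A2), np.linalg.pinv(B2)

X = np.zeros((m, n))                              # initial matrix X1 = 0
for _ in range(50000):
    Y = X + A2p @ (C2 - A2 @ X @ B2) @ B2p        # project onto Omega_2
    X_next = Y + A1p @ (C1 - A1 @ Y @ B1) @ B1p   # project onto Omega_1
    if np.linalg.norm(X_next - X) < 1e-12:
        X = X_next
        break
    X = X_next

# X solves both equations and is no larger in Frobenius norm than X_true.
assert np.linalg.norm(A1 @ X @ B1 - C1) < 1e-6
assert np.linalg.norm(A2 @ X @ B2 - C2) < 1e-6
assert np.linalg.norm(X) <= np.linalg.norm(X_true) + 1e-6
```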

#### 3. Numerical Experiments

In this section, we give some numerical examples to illustrate that the new algorithm is feasible and effective for solving Problem 1. All programs are written in MATLAB 7.8. As a practical stopping criterion, the iteration is terminated once the residual error of the current iterate falls below a prescribed tolerance.

*Example 1. *Consider Problem 1 with coefficient matrices built from blocks of ones and zeros; here ones and zeros stand for the matrices of all ones and all zeros, respectively. It is easy to verify that the matrix equations (5) have a solution; that is to say, the set $\Omega$ is nonempty. Therefore we can use Algorithm 1 to solve Problem 1.

Let $\Phi=0$. After 5 iterations of Algorithm 1, we get the optimal approximate solution $\hat{X}$, which is also the least Frobenius norm solution of the matrix equations (5), and its residual error is below the stopping tolerance. By concrete computation, the distance from $\Phi$ to the solution set $\Omega$ equals $\|\hat{X}-\Phi\|$.

Let $\Phi\neq 0$ be a given matrix. After 6 iterations of Algorithm 1, we get the optimal approximate solution $\hat{X}$ with residual error below the stopping tolerance. By concrete computation, the distance from $\Phi$ to the solution set $\Omega$ equals $\|\hat{X}-\Phi\|$.

Example 1 shows that Algorithm 1 is feasible to solve Problem 1.

*Example 2. *Consider Problem 1 with randomly generated coefficient matrices (rand stands for a random matrix) and a randomly generated given matrix $\Phi$. The matrix equations (5) are constructed to be consistent; that is, the set $\Omega$ is nonempty. Therefore, we can use Algorithm 1 and the following algorithm, proposed by Sheng and Chen [13], to solve Problem 1.

*Algorithm 2 *(Sheng and Chen [13]). One has (1) input the coefficient matrices, the given matrix, and an initial iterate; (2) calculate the initial residual and search direction; (3) if the residual is sufficiently small, stop; else, set $k:=k+1$; (4) calculate the next iterate and update the residual and search direction; (5) go to step (3). (The explicit update formulas are given in [13].)

It is easy to see that Algorithm 1 has lower computational cost per step than Algorithm 2. Experiments show that both Algorithm 1 and Algorithm 2 are feasible for solving Problem 1. We list the iteration steps (denoted by IT), CPU time (denoted by CPU), residual error (denoted by ERR), and the distance (denoted by DIS) in Table 1.

From Table 1, we can see that Algorithm 1 outperforms Algorithm 2 in both iteration steps and CPU time. Therefore, our algorithm is faster than the algorithm proposed by Sheng and Chen [13]. In particular, the CPU time and iteration count of our algorithm increase slowly as the dimension grows, so our algorithm is suitable for large-scale problems.

#### 4. Conclusion

The alternating projection algorithm dates back to von Neumann [18], who treated the problem of finding the projection of a given point onto the intersection of two closed subspaces. In this paper, we apply the alternating projection algorithm to solve Problem 1, which occurs frequently in experimental design [23]. If we choose the initial matrix $X_{1}=0$, the least Frobenius norm solution of the matrix equations can be obtained. Numerical examples show that the new algorithm is faster, and has lower computational cost per step, than the algorithm proposed by Sheng and Chen [13] for solving Problem 1. In particular, the CPU time and iteration count of the new algorithm increase slowly as the matrix dimension grows, so the alternating projection algorithm is suitable for large-scale matrix nearness problems.

#### Acknowledgments

The work was supported in part by the National Natural Science Foundation of China (11101100; 10861005) and the Natural Science Foundation of Guangxi Province (0991238; 2011GXNSFA018138).

#### References

- H. H. Bauschke and J. M. Borwein, "Dykstra's alternating projection algorithm for two sets," *Journal of Approximation Theory*, vol. 79, no. 3, pp. 418–443, 1994.
- F. Deutsch, *Best Approximation in Inner Product Spaces*, Springer, New York, NY, USA, 2001.
- N. Higham, *Matrix Nearness Problems and Applications*, Oxford University Press, London, UK, 1989.
- H. Dai and P. Lancaster, "Linear matrix equations from an inverse problem of vibration theory," *Linear Algebra and Its Applications*, vol. 246, pp. 31–47, 1996.
- M. Friswell and J. Mottershead, *Finite Element Model Updating in Structural Dynamics*, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1995.
- A. P. Liao, Y. Lei, and S. F. Yuan, "The matrix nearness problem for symmetric matrices associated with the matrix equation [${A}^{T}XA$, ${B}^{T}XB$] = [$C,D$]," *Linear Algebra and Its Applications*, vol. 418, no. 2-3, pp. 939–954, 2006.
- Y. Chen, Z. Peng, and T. Zhou, "LSQR iterative common symmetric solutions to matrix equations $AXB=E$ and $CXD=F$," *Applied Mathematics and Computation*, vol. 217, no. 1, pp. 230–236, 2010.
- J. Cai and G. Chen, "An iterative algorithm for the least squares bisymmetric solutions of the matrix equations ${A}_{1}X{B}_{1}={C}_{1}$, ${A}_{2}X{B}_{2}={C}_{2}$," *Mathematical and Computer Modelling*, vol. 50, no. 7-8, pp. 1237–1244, 2009.
- M. Dehghan and M. Hajarian, "An iterative algorithm for solving a pair of matrix equations $AXB=E$ and $CXD=F$ over generalized centro-symmetric matrices," *Computers and Mathematics with Applications*, vol. 56, no. 12, pp. 3246–3260, 2008.
- L. Fanliang, H. Xiyan, and Z. Lei, "The generalized reflexive solution for a class of matrix equations ($AX=B$, $XC=D$)," *Acta Mathematica Scientia*, vol. 28, no. 1, pp. 185–193, 2008.
- Y. X. Peng, X. Y. Hu, and L. Zhang, "An iterative method for symmetric solutions and optimal approximation solution of the system of matrix equations ${A}_{1}X{B}_{1}={C}_{1}$, ${A}_{2}X{B}_{2}={C}_{2}$," *Applied Mathematics and Computation*, vol. 183, no. 2, pp. 1127–1137, 2006.
- Z. H. Peng, X. Y. Hu, and L. Zhang, "An efficient algorithm for the least-squares reflexive solution of the matrix equation ${A}_{1}X{B}_{1}={C}_{1}$, ${A}_{2}X{B}_{2}={C}_{2}$," *Applied Mathematics and Computation*, vol. 181, no. 2, pp. 988–999, 2006.
- X. Sheng and G. Chen, "A finite iterative method for solving a pair of linear matrix equations ($AXB$, $CXD$) = ($E$, $F$)," *Applied Mathematics and Computation*, vol. 189, no. 2, pp. 1350–1358, 2007.
- S. Yuan, A. Liao, and G. Yao, "The matrix nearness problem associated with the quaternion matrix equation ${A}^{T}XA+{B}^{T}XB=C$," *Journal of Applied Mathematics and Computing*, vol. 37, no. 1-2, pp. 133–144, 2011.
- Q. W. Wang, "A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity," *Linear Algebra and Its Applications*, vol. 384, no. 1-3, pp. 43–54, 2004.
- A. Navarra, P. L. Odell, and D. M. Young, "Representation of the general common solution to the matrix equations ${A}_{1}X{B}_{1}={C}_{1}$, ${A}_{2}X{B}_{2}={C}_{2}$ with applications," *Computers and Mathematics with Applications*, vol. 41, no. 7-8, pp. 929–935, 2001.
- J. Ding, Y. Liu, and F. Ding, "Iterative solutions to matrix equations of the form ${A}_{i}X{B}_{i}={F}_{i}$," *Computers and Mathematics with Applications*, vol. 59, no. 11, pp. 3500–3507, 2010.
- J. von Neumann, *Functional Operators*, vol. II, Princeton University Press, Princeton, NJ, USA, 1950.
- I. Halperin, "The product of projection operators," *Acta Scientiarum Mathematicarum*, vol. 23, pp. 96–99, 1962.
- R. L. Dykstra, "An algorithm for restricted least-squares regression," *Journal of the American Statistical Association*, vol. 78, pp. 837–842, 1983.
- R. Escalante and M. Raydan, "Dykstra's algorithm for a constrained least-squares matrix problem," *Numerical Linear Algebra with Applications*, vol. 3, no. 6, pp. 459–471, 1996.
- H. Dai, *Matrix Theory*, Science Press, Beijing, China, 2001.
- T. Meng, "Experimental design and decision support," in *Expert Systems: The Technology of Knowledge Management and Decision Making for the 21st Century*, C. Leondes, Ed., vol. 1, Academic Press, New York, NY, USA, 2001.