Advances in Operations Research

Volume 2012, Article ID 357954, 15 pages

http://dx.doi.org/10.1155/2012/357954

## Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction Method

^{1}School of Mathematics and Physics, Changzhou University, Jiangsu Province, Changzhou 213164, China

^{2}Department of Mathematics, School of Sciences, China University of Mining and Technology, Xuzhou 221116, China

Received 11 April 2012; Accepted 17 June 2012

Academic Editor: Abdellah Bnouhachem

Copyright © 2012 M. H. Xu and H. Shao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Let *S* be a closed convex set of matrices and *C* a given matrix. The matrix nearness problem considered in this paper is to find a matrix *X* in the set *S* at which max reaches its minimum value. To solve the matrix nearness problem, the problem is first reformulated as a min-max problem, and then the relationship between the min-max problem and a monotone linear variational inequality (LVI) is established. Since the matrix in the LVI problem has a special structure, a projection and contraction method is suggested for solving this LVI problem. Moreover, some implementation details of the method are presented. Finally, preliminary numerical results are reported, which show that this simple algorithm is promising for the matrix nearness problem.

#### 1. Introduction

Let be a given symmetric matrix and where are given scalars and , is the identity matrix, and denotes that is a positive semidefinite matrix. It is clear that is a nonempty closed convex set. The problem considered in this paper is where Throughout the paper we assume that the solution set of problem (1.2) is nonempty.

Note that when and , the set reduces to the semidefinite cone Using the terminology of interior point methods, is called the semidefinite cone, and thus the related problem belongs to the class of semidefinite programming [1].

Problem (1.2) can be viewed as a type of matrix nearness problem, that is, the problem of finding a matrix that satisfies some property and is nearest to a given one. A survey on matrix nearness problems can be found in [2]. Matrix nearness problems have many applications, especially in finance, statistics, and compressive sensing. For example, one application in statistics is to adjust a symmetric matrix so that it is consistent with prior knowledge or assumptions and is a valid covariance matrix [3–8]. The paper [9] introduces a class of matrix nearness problems that measure approximation error with a directed distance called a Bregman divergence, proposes a framework for studying these problems, discusses several specific instances, and provides algorithms for solving them numerically.

Note that the norm used in this paper differs from the one used in the published papers [3–7]; this makes the objective function of problem (1.2) nonsmooth, so problem (1.2) cannot be solved easily. In the next section, we will see that problem (1.2) can be converted into a linear variational inequality and thus solved effectively by the projection and contraction (PC) method, which is extremely simple both in its theoretical analysis and in its numerical implementation [10–15].

The paper is organized as follows. The relationship between the matrix nearness problem considered in this paper and a monotone linear variational inequality (LVI) is established in Section 2. In Section 3, some preliminaries on variational inequalities are summarized, and the projection and contraction method for the LVI associated with the considered problem is suggested. In Section 4, the implementation details for applying the projection and contraction method to the matrix optimization problem are studied. Preliminary numerical results and some concluding remarks are reported in Sections 5 and 6, respectively.

#### 2. Reformulating the Problem to a Monotone LVI

For any , we have where and is the Euclidean inner-product of and .

In order to simplify the following descriptions, let be a linear transformation which converts the matrix into a column vector in obtained by stacking the columns of the matrix on top of one another, that is, and let be the original matrix , that is,
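Since the formulas defining the column-stacking transformation are not reproduced in this text, a minimal Python sketch of the operation and its inverse may help fix the convention (the helper names `vec` and `mat` are hypothetical, not from the paper):

```python
import numpy as np

def vec(X):
    """Stack the columns of X into one long vector (column-major order)."""
    return X.reshape(-1, order="F")

def mat(x, n):
    """Inverse of vec: rebuild the n-by-n matrix from its stacked columns."""
    return x.reshape((n, n), order="F")
```

The two helpers are mutual inverses, so `mat(vec(X), n)` recovers `X` exactly.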

Based on (2.1) and the fact that the matrices and are symmetric, problem (1.2) can be rewritten as the following min-max problem: where

*Remark 2.1. *Since and are both symmetric matrices, we can restrict the matrices in set to be symmetric.

Let Since and are both convex sets, it is easy to prove that and are also convex sets.

Let be any solution of (2.4) and . Then where and . Thus, is a solution of the following variational inequality: find such that For convenience of the subsequent analysis, we rewrite the linear variational inequality (2.9) in the following compact form: find such that where In the following, we denote the linear variational inequality (2.10)-(2.11) by LVI.

*Remark 2.2. *Since is skew-symmetric, the linear variational inequality LVI is monotone.

#### 3. Projection and Contraction Method for Monotone LVIs

In this section, we summarize some important concepts and preliminary results which are useful in the coming analysis.

##### 3.1. Projection Mapping

Let be a nonempty closed convex set of . For a given , the projection of onto , denoted by , is the unique solution of the following problem: where is the Euclidean norm. The projection under the Euclidean norm plays an important role in the proposed method. A basic property of the projection mapping onto a closed convex set is

In many practical applications, the closed convex set has a simple structure, and the projection onto is easy to carry out. For example, let be the vector each of whose elements is , and Then the projection of a vector onto can be obtained by where the min and max are taken componentwise.
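The componentwise min-max projection just described can be sketched in Python (the bounds `lo` and `hi` stand in for the box's bound vectors, which are not reproduced in this text):

```python
import numpy as np

def project_box(v, lo, hi):
    """Componentwise projection of v onto the box {u : lo <= u <= hi}."""
    return np.minimum(np.maximum(v, lo), hi)
```

Components already inside the box are left unchanged; components outside are clipped to the nearest bound.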

##### 3.2. Preliminaries on Linear Variational Inequalities

We denote the solution set of LVI (2.10) by and assume that . Since the early work of Eaves [16], it has been well known that the LVI problem is equivalent to the following projection equation:
In other words, solving the LVI is equivalent to finding a zero point of the continuous residual function
Hence,
In the literature on variational inequalities, is called the *error bound* of the LVI. It quantitatively measures how much fails to be in .
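At any solution the residual vanishes, which can be checked numerically. A small Python sketch on an illustrative two-dimensional LVI over the nonnegative orthant (the matrix `M`, vector `q`, and solution here are illustrative, not from the paper):

```python
import numpy as np

def residual(u, M, q, project):
    """e(u) = u - P_Omega[u - (Mu + q)]; zero exactly at solutions of the LVI."""
    return u - project(u - (M @ u + q))

# Illustrative LVI over the nonnegative orthant with M = I and q = (-1, 1):
# the complementarity conditions give the unique solution u* = (1, 0).
M = np.eye(2)
q = np.array([-1.0, 1.0])
proj = lambda v: np.maximum(v, 0.0)
u_star = np.array([1.0, 0.0])
```

Evaluating `residual(u_star, M, q, proj)` returns the zero vector, while any non-solution yields a nonzero residual whose norm bounds the distance to the solution set.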

##### 3.3. The Projection and Contraction Method

Let be a solution. For any , because , it follows from (2.10) that By setting and in (3.2), we have Adding the above two inequalities and using the notation , we obtain For a positive semidefinite (not necessarily symmetric) matrix , the following theorem follows from (3.10) directly.

Theorem 3.1 (Theorem 1 in [11]). * For any , we have
**
where
*

For , it follows from (3.11)-(3.12) that is a descent direction of the unknown function . Some practical PC methods for the LVI based on the direction are given in [11].

*Algorithm 3.2 (Projection and Contraction Method for the LVI Problem (2.10)). * Given . For , if , then do the following:
where , is defined in (3.12), and

*Remark 3.3. *In fact, is a relaxation factor, and it is recommended to take it from . In practical computation, we usually take . The method was first presented in [11]. Among the PC methods for asymmetric LVIs [10–13], this method requires just one projection per iteration.
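The update formulas (3.13)-(3.14) are not reproduced in this text. The following Python sketch follows the standard form of He's PC method for a monotone LVI with a skew-symmetric matrix: residual e(u) = u − P_Ω[u − (Mu + q)], direction d(u) = (I + Mᵀ)e(u), step size ρ(u) = ‖e(u)‖²/‖d(u)‖² (using that φ(u) = ‖e(u)‖² in the skew-symmetric case), and relaxation factor γ ∈ (0, 2). The concrete set, matrix, and starting point in the test are illustrative only:

```python
import numpy as np

def pc_method(M, q, project, u0, gamma=1.5, tol=1e-8, max_iter=1000):
    """Projection and contraction method sketch for the LVI: find u in Omega
    with (v - u)^T (Mu + q) >= 0 for all v in Omega, assuming M is
    skew-symmetric so that phi(u) reduces to ||e(u)||^2."""
    u = u0.astype(float)
    for _ in range(max_iter):
        e = u - project(u - (M @ u + q))   # residual e(u)
        if np.linalg.norm(e) < tol:
            break
        d = e + M.T @ e                    # direction d(u) = (I + M^T) e(u)
        rho = e.dot(e) / d.dot(d)          # step size rho(u)
        u = project(u - gamma * rho * d)   # relaxed contraction step
    return u
```

With a rotation matrix M, q = (−0.5, 0.2), and the box [0, 1]², the unconstrained stationary point (0.2, 0.5) lies inside the box, so the iterates contract toward it.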

For the sake of completeness, we include the theorem for LVI (2.10) and its proof.

Theorem 3.4 (Theorem 2 in [11]). *The method (3.13)-(3.14) produces a sequence , which satisfies
*

*Proof. *It follows from (3.11) and (3.14) that
Thus the theorem is proved.

The method used in this paper is called the *projection and contraction method* because it makes projections at each iteration and the generated sequence is Fejér monotone with respect to the solution set.

For skew-symmetric in LVI (2.10), it is easy to prove that . Thus, the contraction inequality (3.15) simplifies to Following (3.17), the convergence of the sequence can be found in [11] or proved similarly to that in [17].

Since the above inequality holds for all , we have where The above inequality states that we obtain a “great” profit from the th iteration if is not too small; conversely, if we obtain only a very small profit from the th iteration, then is already very small and is a “sufficiently good” approximation of a solution.

#### 4. Implementing Details of Algorithm 3.2 for Solving Problem (1.2)

We use Algorithm 3.2 to solve the linear variational inequality (2.10) arising from problem (1.2). For a given , the process of computing the new iterate is as follows: The key operations here are to compute and , where are symmetric matrices. In the following, we first focus on how to compute , where is a symmetric matrix.

Since is a symmetric matrix, we have where . It is known that the optimal solution of problem is given by where and is the Frobenius norm. Thus, we have We now consider the technique for computing , where is a symmetric matrix.
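The Frobenius-norm projection described above can be sketched in Python: diagonalize the symmetric matrix and clip its eigenvalues to the prescribed interval. The bounds `b` and `a` stand in for the paper's scalars (their formulas are not reproduced here), and the helper name is hypothetical:

```python
import numpy as np

def project_spectral_box(A, b, a):
    """Project a symmetric matrix A onto {X : b*I <= X <= a*I in the Loewner
    order} in the Frobenius norm by clipping its eigenvalues to [b, a]."""
    lam, Q = np.linalg.eigh(A)         # spectral decomposition A = Q diag(lam) Q^T
    lam_clipped = np.clip(lam, b, a)   # clip eigenvalues into [b, a]
    return (Q * lam_clipped) @ Q.T     # reassemble Q diag(lam_clipped) Q^T
```

The result is symmetric with all eigenvalues in [b, a], and a matrix already in the set is returned unchanged.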

Lemma 4.1. *If is a symmetric matrix and is the solution of the problem
**
then we have
*

*Proof. *Since
and , where , we have that if is the solution of problem (4.7), then is also a solution of problem (4.7). It is known that the solution of problem (4.7) is unique. Thus, , and the proof is complete.

Lemma 4.2. *If is the solution of problem
**
where , is the sign of real number , and
**
then is the solution of the problem
*

*Proof. *The result follows from

Lemma 4.3. *Let , be a permutation transformation sorting the components of in descending order, that is, the components of are in descending order. Further, suppose that is the solution of the problem
**
then is the solution of the problem
*

*Proof. *Since is a permutation transformation, we have that
and the optimal values of the objective functions of problems (4.14) and (4.15) are equal.

Note that
Thus, is the optimal solution of problem (4.15). And the proof is complete.

*Remark 4.4. *Suppose that is a symmetric matrix for a given . Let , where is a permutation transformation sorting the components of in descending order. Lemmas 4.1–4.3 show that if is the solution of the following problem
then

Hence, solving problem (4.18) is the key step in obtaining the projection . Let ; then problem (4.18) can be rewritten as The Lagrangian function of the constrained optimization problem (4.20) is defined as where the scalar and the vector are the Lagrange multipliers corresponding to the inequalities and , respectively. By the KKT conditions, we have that is, It is easy to check that if , then is the solution of problem (4.23). Now we assume that . In this case, let Note that and Thus, there exists a least integer such that Since we have that Let

Theorem 4.5. *Let and be given by (4.30) and (4.29), respectively, then is the solution of problem (4.23), and thus
*

*Proof. *It follows from (4.26), (4.27), and (4.29) that
From (4.32) it is easy to check that is a solution of problem (4.23). Note that problem (4.18) is convex; thus is the solution of problem (4.18). Further, according to Remark 4.4, we have
The proof is complete.
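The sort-and-threshold procedure of Lemmas 4.1–4.3 and Theorem 4.5 follows the same pattern as the classical Euclidean projection onto an ℓ1-ball. Since the paper's exact constraint set is not reproduced in this text, the following Python sketch shows that classical scheme as an illustration: sort the magnitudes in descending order, locate the critical index, then soft-threshold componentwise.

```python
import numpy as np

def project_l1_ball(v, r):
    """Euclidean projection of v onto {x : ||x||_1 <= r} (r > 0) by the classic
    sort-and-threshold scheme (illustrative, not the paper's exact set)."""
    u = np.abs(v)
    if u.sum() <= r:
        return v.copy()                        # already feasible
    w = np.sort(u)[::-1]                       # magnitudes in descending order
    css = np.cumsum(w)
    ks = np.arange(1, len(w) + 1)
    k = ks[w - (css - r) / ks > 0][-1]         # largest index with positive gap
    theta = (css[k - 1] - r) / k               # threshold (Lagrange multiplier)
    return np.sign(v) * np.maximum(u - theta, 0.0)
```

For a point outside the ball the projection lands exactly on the boundary, so its ℓ1-norm equals r.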

*Remark 4.6. *Note that if is the solution of problem
where is a given scalar, and
then is the solution of problem (1.2). Thus, we can find the solution of problem (1.2) by solving problem (4.34).

Let We are now ready to describe the implementation of Algorithm 3.2 for problem (1.2) in detail.

*Algorithm 4.7 (projection and contraction method for problem (1.2)). *

*Step 1* (Initialization). Let be a given symmetric matrix and be given scalars. Choose an arbitrary initial point , , and , where , , and are defined by (4.36) and (2.7), respectively. Let , , and let be a prespecified tolerance.

*Step 2* (Computation). Compute and by using (4.6) and (4.31), respectively.

Let , , , , where .

*Step 3* (Verification). If , then stop and output the approximate solution , where .

*Step 4* (Iteration). Set , , , and go to Step 2.

#### 5. Numerical Experiments

In this section, some examples are provided to illustrate the performance of Algorithm 4.7 for solving problem (1.2). In the following examples, the program implementing Algorithm 4.7 is coded in Matlab and runs on an IBM notebook (R51).

*Example 5.1. * Consider problem (1.2) with , and , where , and eye are Matlab functions, and is the size of problem (1.2). Let ; then we have
where zeros is also a Matlab function, , , , and are the smallest and largest eigenvalues of the matrix , respectively. Table 1 reports the numerical results for Example 5.1 solved by Algorithm 4.7, where is the number of iterations, time is measured in seconds, and is the approximate solution of problem (1.2) obtained by Algorithm 4.7.

*Example 5.2. * Consider problem (1.2) with , and , where . In this test example, let and be the same as in Example 5.1, , and be given according to (5.1). Table 2 reports the numerical results for Example 5.2 and shows the performance of Algorithm 4.7 for solving problem (1.2).

#### 6. Conclusions

In this paper, a relationship between the matrix nearness problem and a linear variational inequality has been established. The matrix nearness problem considered here can thus be solved by applying an algorithm for the related linear variational inequality. Based on this observation, a projection and contraction method is presented for solving the matrix nearness problem, and its implementation details are described. Numerical experiments show that the suggested method performs well and that it can be improved further by setting the parameters in Algorithm 4.7 properly. Thus, further study of the effect of the parameters in Algorithm 4.7 may be an interesting topic.

#### Acknowledgments

This research is financially supported by a research grant from the Research Grant Council of China (Project no. 10971095).

#### References

1. S. Boyd and L. Vandenberghe, *Convex Optimization*, Cambridge University Press, 2004.
2. N. J. Higham, “Matrix nearness problems and applications,” in *Applications of Matrix Theory*, M. Gover and S. Barnett, Eds., pp. 1–27, Oxford University Press, Oxford, UK, 1989.
3. S. Boyd and L. Xiao, “Least-squares covariance matrix adjustment,” *SIAM Journal on Matrix Analysis and Applications*, vol. 27, no. 2, pp. 532–546, 2005.
4. N. J. Higham, “Computing a nearest symmetric positive semidefinite matrix,” *Linear Algebra and its Applications*, vol. 103, pp. 103–118, 1988.
5. N. J. Higham, “Computing the nearest correlation matrix—a problem from finance,” *IMA Journal of Numerical Analysis*, vol. 22, no. 3, pp. 329–343, 2002.
6. G. L. Xue and Y. Y. Ye, “An efficient algorithm for minimizing a sum of Euclidean norms with applications,” *SIAM Journal on Optimization*, vol. 7, no. 4, pp. 1017–1036, 1997.
7. G. L. Xue and Y. Y. Ye, “An efficient algorithm for minimizing a sum of $p$-norms,” *SIAM Journal on Optimization*, vol. 10, no. 2, pp. 551–579, 2000.
8. J. Yang and Y. Zhang, “Alternating direction algorithms for ${\ell}_{1}$-problems in compressive sensing,” *SIAM Journal on Scientific Computing*, vol. 33, no. 1, pp. 250–278, 2011.
9. I. S. Dhillon and J. A. Tropp, “Matrix nearness problems with Bregman divergences,” *SIAM Journal on Matrix Analysis and Applications*, vol. 29, no. 4, pp. 1120–1146, 2007.
10. B. S. He, “A projection and contraction method for a class of linear complementarity problems and its application in convex quadratic programming,” *Applied Mathematics and Optimization*, vol. 25, no. 3, pp. 247–262, 1992.
11. B. S. He, “A new method for a class of linear variational inequalities,” *Mathematical Programming*, vol. 66, no. 2, pp. 137–144, 1994.
12. B. S. He, “Solving a class of linear projection equations,” *Numerische Mathematik*, vol. 68, no. 1, pp. 71–80, 1994.
13. B. S. He, “A modified projection and contraction method for a class of linear complementarity problems,” *Journal of Computational Mathematics*, vol. 14, no. 1, pp. 54–63, 1996.
14. B. S. He, “A class of projection and contraction methods for monotone variational inequalities,” *Applied Mathematics and Optimization*, vol. 35, no. 1, pp. 69–76, 1997.
15. M. H. Xu and T. Wu, “A class of linearized proximal alternating direction methods,” *Journal of Optimization Theory and Applications*, vol. 151, no. 2, pp. 321–337, 2011.
16. B. C. Eaves, “On the basic theorem of complementarity,” *Mathematical Programming*, vol. 1, no. 1, pp. 68–75, 1971.
17. M. H. Xu, J. L. Jiang, B. Li, and B. Xu, “An improved prediction-correction method for monotone variational inequalities with separable operators,” *Computers & Mathematics with Applications*, vol. 59, no. 6, pp. 2074–2086, 2010.