Abstract

Let S be a closed convex set of matrices and let C be a given matrix. The matrix nearness problem considered in this paper is to find a matrix X in the set S at which the maximum absolute difference between the entries of X and C attains its minimum value. To solve this matrix nearness problem, the problem is first reformulated as a min-max problem, and then the relationship between the min-max problem and a monotone linear variational inequality (LVI) is established. Since the matrix in the LVI problem has a special structure, a projection and contraction method is suggested to solve this LVI problem. Moreover, some implementation details of the method are presented. Finally, preliminary numerical results are reported, which show that this simple algorithm is promising for this matrix nearness problem.

1. Introduction

Let $C \in \mathbb{R}^{n\times n}$ be a given symmetric matrix and let
\[ S_B = \{ H \in \mathbb{R}^{n\times n} : H = H^T,\ H_L I \preceq H \preceq H_U I \}, \tag{1.1} \]
where $H_L \le H_U$ are given scalars, $I$ is the identity matrix, and $A \succeq 0$ denotes that $A$ is a positive semidefinite matrix. It is clear that $S_B$ is a nonempty closed convex set. The problem considered in this paper is
\[ \min_{X \in S_B}\ \max_{1 \le i, j \le n} |X_{ij} - C_{ij}|, \tag{1.2} \]
where $X_{ij}$ and $C_{ij}$ denote the $(i,j)$ entries of $X$ and $C$. Throughout the paper we assume that the solution set of problem (1.2) is nonempty.

Note that when $H_L = 0$ and $H_U = +\infty$, the set $S_B$ reduces to the semidefinite cone $S^n_+ = \{H \in \mathbb{R}^{n\times n} : H = H^T,\ H \succeq 0\}$. Using the terminology of interior point methods, $S^n_+$ is called the semidefinite cone, and thus the related problem belongs to the class of semidefinite programming [1].

Problem (1.2) can be viewed as a matrix nearness problem, that is, the problem of finding a matrix that satisfies some property and is nearest to a given one. A survey of matrix nearness problems can be found in [2]. Matrix nearness problems have many applications, especially in finance, statistics, and compressive sensing. For example, one application in statistics is to adjust a symmetric matrix so that it is consistent with prior knowledge or assumptions and is a valid covariance matrix [3–8]. Paper [9] discusses a new class of matrix nearness problems that measure approximation error using a directed distance called a Bregman divergence; it proposes a framework for studying these problems, discusses some specific instances, and provides algorithms for solving them numerically.

Note that the norm used in this paper differs from the one used in the published papers [3–7]; it makes the objective function of problem (1.2) nonsmooth, so the problem cannot be solved as easily. In the next section, we show that problem (1.2) can be converted into a linear variational inequality and thus solved effectively with the projection and contraction (PC) method, which is extremely simple in both theoretical analysis and numerical implementation [10–15].

The paper is organized as follows. The relationship between the matrix nearness problem considered in this paper and a monotone linear variational inequality (LVI) is established in Section 2. In Section 3, some preliminaries on variational inequalities are summarized, and the projection and contraction method for the LVI associated with the considered problem is presented. In Section 4, the implementation details for applying the projection and contraction method to the matrix optimization problem are studied. Preliminary numerical results and some concluding remarks are reported in Sections 5 and 6, respectively.

2. Reformulating the Problem as a Monotone LVI

For any $A \in \mathbb{R}^{n\times n}$, we have
\[ \max_{1\le i,j\le n} |A_{ij}| = \max\Big\{ \langle Y, A\rangle : \sum_{i,j=1}^{n} |Y_{ij}| \le 1 \Big\}, \tag{2.1} \]
where $\langle Y, A\rangle = \sum_{i,j=1}^{n} Y_{ij}A_{ij}$ is the Euclidean inner product of $Y$ and $A$.

In order to simplify the following descriptions, let $\operatorname{vec}(\cdot)$ be the linear transformation which converts a matrix $X \in \mathbb{R}^{n\times n}$ into a column vector in $\mathbb{R}^{n^2}$ obtained by stacking the columns of $X$ on top of one another, that is,
\[ \operatorname{vec}(X) = (X_{11}, \ldots, X_{n1}, X_{12}, \ldots, X_{n2}, \ldots, X_{1n}, \ldots, X_{nn})^T, \]
and let $\operatorname{mat}(\cdot)$ be its inverse, which restores the original matrix, that is, $\operatorname{mat}(\operatorname{vec}(X)) = X$.
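For concreteness, the stacking operator and its inverse can be realized in Matlab as follows; the handle names vec and mat are ours and are used only for illustration.

```matlab
% vec: stack the columns of an n-by-n matrix into an n^2-by-1 vector
vec = @(X) reshape(X, [], 1);
% mat: restore the n-by-n matrix from its stacked vector
mat = @(x) reshape(x, sqrt(numel(x)), sqrt(numel(x)));

% quick check on a random symmetric matrix
A = rand(4);  A = (A + A')/2;
assert(isequal(mat(vec(A)), A));
```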

Based on (2.1) and the fact that the matrices $X$ and $C$ are symmetric, problem (1.2) can be rewritten as the following min-max problem:
\[ \min_{x \in \mathcal{X}}\ \max_{y \in \mathcal{Y}}\ y^T (x - c), \tag{2.4} \]
where $x = \operatorname{vec}(X)$, $y = \operatorname{vec}(Y)$, $c = \operatorname{vec}(C)$,
\[ \mathcal{X} = \{ \operatorname{vec}(X) : X \in S_B \}, \qquad \mathcal{Y} = \Big\{ \operatorname{vec}(Y) : Y = Y^T,\ \sum_{i,j} |Y_{ij}| \le 1 \Big\}. \]

Remark 2.1. Since $C$ and $X$ are both symmetric matrices, we can restrict the matrices $Y$ in the definition of $\mathcal{Y}$ to be symmetric without changing the optimal value of the inner maximization.

Let $\Omega = \mathcal{X} \times \mathcal{Y}$. Since $S_B$ and the unit ball of the entrywise $\ell_1$-norm are both convex sets, it is easy to prove that $\mathcal{X}$ and $\mathcal{Y}$ (and hence $\Omega$) are also convex sets.

Let $(x^*, y^*)$ be any solution of (2.4) and let $u^* = (x^*, y^*)$. Then
\[ (x - x^*)^T y^* \ge 0 \quad \forall x \in \mathcal{X} \qquad \text{and} \qquad (y - y^*)^T \big(-(x^* - c)\big) \ge 0 \quad \forall y \in \mathcal{Y}, \]
where the first inequality expresses the optimality of $x^*$ and the second that of $y^*$. Thus, $u^*$ is a solution of the following variational inequality: find $u^* \in \Omega$ such that
\[ (u - u^*)^T F(u^*) \ge 0 \quad \forall u \in \Omega, \qquad F(u) = \begin{pmatrix} y \\ -(x - c) \end{pmatrix}. \tag{2.9} \]
For convenience of the coming analysis, we rewrite the linear variational inequality (2.9) in the following compact form: find $u^* \in \Omega$ such that
\[ (u - u^*)^T (M u^* + q) \ge 0 \quad \forall u \in \Omega, \tag{2.10} \]
where
\[ u = \begin{pmatrix} x \\ y \end{pmatrix}, \qquad M = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}, \qquad q = \begin{pmatrix} 0 \\ c \end{pmatrix}. \tag{2.11} \]
In the following, we denote the linear variational inequality (2.10)-(2.11) by LVI.

Remark 2.2. Since $M$ is skew-symmetric, the linear variational inequality LVI is monotone.
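The verification is immediate; the following computation is standard and is added here only for completeness. Since $M^T = -M$, for all $u$ and $v$,
\[ (u - v)^T\big[(Mu + q) - (Mv + q)\big] = (u - v)^T M (u - v) = 0, \]
so the affine mapping $u \mapsto Mu + q$ is (trivially) monotone.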

3. Projection and Contraction Method for Monotone LVIs

In this section, we summarize some important concepts and preliminary results which are useful in the coming analysis.

3.1. Projection Mapping

Let $\Omega$ be a nonempty closed convex subset of $\mathbb{R}^m$. For a given $v \in \mathbb{R}^m$, the projection of $v$ onto $\Omega$, denoted by $P_\Omega(v)$, is the unique solution of the following problem:
\[ \min\{ \|u - v\| : u \in \Omega \}, \]
where $\|\cdot\|$ is the Euclidean norm. The projection under the Euclidean norm plays an important role in the proposed method. A basic property of the projection mapping onto a closed convex set is
\[ \big(v - P_\Omega(v)\big)^T \big(w - P_\Omega(v)\big) \le 0 \quad \forall v \in \mathbb{R}^m,\ \forall w \in \Omega. \tag{3.2} \]

In many practical applications, the closed convex set $\Omega$ has a simple structure, and the projection onto $\Omega$ is easy to carry out. For example, let $\mathbf{1}$ be the vector whose every element is $1$, and let $B = \{u : l\,\mathbf{1} \le u \le h\,\mathbf{1}\}$ with given scalars $l \le h$. Then the projection of a vector $v$ onto $B$ can be obtained by
\[ P_B(v) = \min\{ \max\{v,\ l\,\mathbf{1}\},\ h\,\mathbf{1} \}, \]
where the min and max are taken componentwise.
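A minimal Matlab sketch of this componentwise projection (the bound and handle names are illustrative, not the paper's notation):

```matlab
% projection of v onto the box {u : lo <= u <= hi}, taken componentwise
proj_box = @(v, lo, hi) min(max(v, lo), hi);

v = [-2; 0.3; 5];
p = proj_box(v, -1, 1)      % returns [-1; 0.3; 1]
```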

3.2. Preliminaries on Linear Variational Inequalities

We denote the solution set of LVI (2.10) by $\Omega^*$ and assume that $\Omega^* \neq \emptyset$. Since the early work of Eaves [16], it has been well known that the LVI problem is equivalent to the following projection equation:
\[ u = P_\Omega\big[u - (Mu + q)\big]. \]
In other words, solving the LVI is equivalent to finding a zero point of the continuous residual function
\[ e(u) := u - P_\Omega\big[u - (Mu + q)\big]. \]
Hence, $u \in \Omega^*$ if and only if $\|e(u)\| = 0$. In the literature of variational inequalities, $\|e(u)\|$ is called the error bound of the LVI; it quantitatively measures how much $u$ fails to be in $\Omega^*$.
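In code, the residual described above requires a single projection per evaluation. The following Matlab one-liner is a sketch, assuming a function handle proj_Omega that implements $P_\Omega$; the names are ours.

```matlab
% residual e(u) = u - P_Omega(u - (M*u + q)) of the LVI;
% norm of the residual is zero exactly when u solves the LVI
lvi_residual = @(u, M, q, proj_Omega) u - proj_Omega(u - (M*u + q));
```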

3.3. The Projection and Contraction Method

Let $u^*$ be a solution of the LVI and let $u$ be arbitrary. Write $\tilde u = P_\Omega[u - (Mu + q)]$, so that $e(u) = u - \tilde u$. Because $\tilde u \in \Omega$, it follows from (2.10) that
\[ (\tilde u - u^*)^T (M u^* + q) \ge 0. \]
By setting $v = u - (Mu + q)$ and $w = u^*$ in (3.2), we have
\[ (\tilde u - u^*)^T \big( e(u) - (Mu + q) \big) \ge 0. \]
Adding the above two inequalities, and using the notation $e(u)$, we obtain
\[ (\tilde u - u^*)^T \big( e(u) - M(u - u^*) \big) \ge 0. \tag{3.10} \]
For a positive semidefinite (not necessarily symmetric) matrix $M$, the following theorem follows from (3.10) directly.

Theorem 3.1 (Theorem 1 in [11]). For any $u^* \in \Omega^*$ and any $u$, we have
\[ (u - u^*)^T d(u) \ge \|e(u)\|^2, \tag{3.11} \]
where
\[ d(u) := (I + M^T)\, e(u). \tag{3.12} \]

For $u \notin \Omega^*$, it follows from (3.11)-(3.12) that $-d(u)$ is a descent direction of the unknown distance function $\tfrac12\|u - u^*\|^2$ at the point $u$. Some practical PC methods for LVIs based on the direction $d(u)$ are given in [11].

Algorithm 3.2 (projection and contraction method for LVI problem (2.10)). Given $u^0$. For $k = 0, 1, 2, \ldots$, if $\|e(u^k)\| \ne 0$, then do the following:
\[ u^{k+1} = u^k - \gamma\, \alpha_k\, d(u^k), \tag{3.13} \]
where $\gamma \in (0, 2)$, $d(u^k)$ is defined in (3.12), and
\[ \alpha_k = \frac{\|e(u^k)\|^2}{\|d(u^k)\|^2}. \tag{3.14} \]
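The following Matlab sketch implements one iteration of the update described above. It follows the standard one-projection PC scheme for LVIs; since the formulas (3.13)-(3.14) are reconstructed here rather than copied from the original, the sketch should be read as illustrative only, and the function and variable names are ours.

```matlab
% One iteration of the projection and contraction method for the LVI
%   find u* in Omega: (u - u*)'*(M*u* + q) >= 0 for all u in Omega.
% proj_Omega is a handle implementing P_Omega; gamma lies in (0,2).
function [u, err] = pc_step(u, M, q, proj_Omega, gamma)
    e   = u - proj_Omega(u - (M*u + q));   % residual e(u)
    d   = e + M'*e;                        % direction d(u) = (I + M')*e(u)
    err = norm(e);
    if err > 0
        alpha = (e'*e) / (d'*d);           % step size as in (3.14)
        u = u - gamma*alpha*d;             % contraction step as in (3.13)
    end
end
```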

Remark 3.3. In fact, $\gamma$ is a relaxation factor, and it is recommended to take it from the interval $(0, 2)$; in practical computation a value close to $2$ is usually used. The method was first proposed in [11]. Among the PC methods for asymmetric LVIs [10–13], this method makes just one projection in each iteration.
For completeness, we include the contraction theorem for LVI (2.10) and its proof.

Theorem 3.4 (Theorem 2 in [11]). The method (3.13)-(3.14) produces a sequence $\{u^k\}$ which satisfies
\[ \|u^{k+1} - u^*\|^2 \le \|u^k - u^*\|^2 - \gamma(2 - \gamma)\,\alpha_k\,\|e(u^k)\|^2 \quad \forall u^* \in \Omega^*. \tag{3.15} \]

Proof. It follows from (3.11) and (3.14) that
\[
\begin{aligned}
\|u^{k+1} - u^*\|^2 &= \|u^k - u^* - \gamma \alpha_k d(u^k)\|^2 \\
&\le \|u^k - u^*\|^2 - 2\gamma\alpha_k\|e(u^k)\|^2 + \gamma^2\alpha_k^2\|d(u^k)\|^2 \\
&= \|u^k - u^*\|^2 - \gamma(2-\gamma)\,\alpha_k\|e(u^k)\|^2.
\end{aligned}
\]
Thus the theorem is proved.

The method used in this paper is called a projection and contraction method because it makes a projection in each iteration and the generated sequence is Fejér monotone with respect to the solution set.

For the skew-symmetric matrix $M$ in LVI (2.10), it is easy to prove that $e(u)^T M^T e(u) = 0$. Thus, the contraction inequality (3.15) can be simplified accordingly (see (3.17)). Based on this simplified inequality, the convergence of the sequence $\{u^k\}$ can be found in [11] or proved similarly to the analysis in [17].

Since the above inequality is true for all $u^* \in \Omega^*$, it also holds with $\|u^{k+1} - u^*\|$ and $\|u^k - u^*\|$ replaced by the distances from $u^{k+1}$ and $u^k$ to the solution set $\Omega^*$. The resulting inequality states that we obtain a "great" profit from the $k$th iteration if $\|e(u^k)\|$ is not too small; conversely, if we obtain only a very small profit from the $k$th iteration, then $\|e(u^k)\|$ is already very small and $u^k$ is a "sufficiently good" approximation of a solution in $\Omega^*$.

4. Implementation Details of Algorithm 3.2 for Solving Problem (1.2)

We use Algorithm 3.2 to solve the linear variational inequality (2.10) arising from problem (1.2). For a given $u^k = (x^k, y^k)$, the new iterate $u^{k+1}$ is computed as in Algorithm 3.2, that is, by forming $e(u^k)$, $d(u^k)$, and $\alpha_k$ and performing the update (3.13). The key operations here are to compute the projections onto $\mathcal{X}$ and $\mathcal{Y}$, which amount to projecting symmetric matrices onto $S_B$ and onto the unit ball of the entrywise $\ell_1$-norm, respectively. In the following, we first focus on the method of computing the projection of a symmetric matrix $A$ onto $S_B$.

Since $A$ is a symmetric matrix, we have the spectral decomposition $A = U \Lambda U^T$, where $U$ is orthogonal and $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$ contains the eigenvalues of $A$. It is known that the optimal solution of the problem
\[ \min\{ \|X - A\|_F : X \in S_B \} \]
is given by
\[ P_{S_B}(A) = U \hat\Lambda U^T, \qquad \hat\Lambda = \operatorname{diag}\big(\min\{\max\{\lambda_i,\, H_L\},\, H_U\}\big), \tag{4.6} \]
where $\|\cdot\|_F$ is the Frobenius norm. Thus, the projection onto $\mathcal{X}$ is obtained by $P_{\mathcal{X}}(\operatorname{vec}(A)) = \operatorname{vec}(P_{S_B}(A))$. Now we move on to consider the technique of computing the projection of a symmetric matrix $A$ onto $\mathcal{Y}$.
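Before doing so, we give a minimal Matlab sketch of the spectral projection just described, assuming (as reconstructed above) that $S_B$ consists of the symmetric matrices whose eigenvalues lie in $[H_L, H_U]$; the helper name proj_SB is ours.

```matlab
% Projection of a symmetric matrix A onto S_B = {X = X': H_L*I <= X <= H_U*I}
% in the Frobenius norm, computed by clipping the eigenvalues of A to [H_L, H_U].
function P = proj_SB(A, H_L, H_U)
    A = (A + A')/2;                     % symmetrize to guard against round-off
    [U, D] = eig(A);                    % spectral decomposition A = U*D*U'
    lam = min(max(diag(D), H_L), H_U);  % clip the eigenvalues
    P = U*diag(lam)*U';
end
```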

Lemma 4.1. If $A$ is a symmetric matrix and $Y^*$ is the solution of the problem
\[ \min\Big\{ \|Y - A\|_F : \sum_{i,j}|Y_{ij}| \le 1 \Big\}, \tag{4.7} \]
then we have $(Y^*)^T = Y^*$.

Proof. Since $\|Y^T - A\|_F = \|Y - A^T\|_F = \|Y - A\|_F$ and $\sum_{i,j}|(Y^T)_{ij}| = \sum_{i,j}|Y_{ij}|$, we have that if $Y^*$ is the solution of problem (4.7), then $(Y^*)^T$ is also a solution of problem (4.7). As is known, the solution of problem (4.7) is unique. Thus, $(Y^*)^T = Y^*$, and the proof is complete.

Lemma 4.2. If $w^*$ is the solution of the problem
\[ \min\Big\{ \|w - |a|\,\| : \textstyle\sum_i w_i \le 1,\ w \ge 0 \Big\}, \]
where $a \in \mathbb{R}^m$, $|a| = (|a_1|, \ldots, |a_m|)^T$, and $\operatorname{sgn}(a_i)$ is the sign of the real number $a_i$, then the vector $z^*$ with $z_i^* = \operatorname{sgn}(a_i)\, w_i^*$ is the solution of the problem
\[ \min\Big\{ \|z - a\| : \textstyle\sum_i |z_i| \le 1 \Big\}. \]

Proof. The result follows from the fact that, for $z_i = \operatorname{sgn}(a_i)\, w_i$ with $w \ge 0$, we have $\|z - a\|^2 = \|w - |a|\,\|^2$ and $\sum_i |z_i| = \sum_i w_i$.

Lemma 4.3. Let $b \in \mathbb{R}^m$ with $b \ge 0$, and let $\Pi$ be a permutation matrix sorting the components of $b$ in descending order, that is, the components of $\Pi b$ are arranged in descending order. Further, suppose that $\bar w$ is the solution of the problem
\[ \min\Big\{ \|w - \Pi b\| : \textstyle\sum_i w_i \le 1,\ w \ge 0 \Big\}, \tag{4.14} \]
then $\Pi^T \bar w$ is the solution of the problem
\[ \min\Big\{ \|w - b\| : \textstyle\sum_i w_i \le 1,\ w \ge 0 \Big\}. \tag{4.15} \]

Proof. Since $\Pi$ is a permutation matrix, the map $w \mapsto \Pi^T w$ is a bijection between the feasible sets of (4.14) and (4.15) with $\|\Pi^T w - b\| = \|w - \Pi b\|$, and hence the optimal values of the objective functions of problems (4.14) and (4.15) are equal.
Note that $\Pi^T \bar w$ is feasible for (4.15) and attains this common optimal value. Thus, $\Pi^T \bar w$ is the optimal solution of problem (4.15), and the proof is complete.

Remark 4.4. Suppose that $A$ is a given symmetric matrix and let $a = \operatorname{vec}(A)$. Let $\Pi$ be a permutation matrix sorting the components of $|a|$ in descending order, and set $b = \Pi |a|$. Lemmas 4.1–4.3 show that if $\bar w$ is the solution of the following problem
\[ \min\Big\{ \|w - b\| : \textstyle\sum_i w_i \le 1,\ w \ge 0 \Big\}, \tag{4.18} \]
then the projection of $A$ onto $\mathcal{Y}$ can be recovered from $\bar w$ by restoring the signs and the original ordering of the components.

Hence, solving problem (4.18) is the key step in obtaining the projection onto $\mathcal{Y}$. Since the components of $b$ are already sorted in descending order, problem (4.18) can be rewritten in the equivalent form (4.20). The Lagrangian function for the constrained optimization problem (4.20) is defined in (4.21), where a scalar and a vector are the Lagrange multipliers corresponding to the constraints $\sum_i w_i \le 1$ and $w \ge 0$, respectively. The KKT conditions of (4.20) yield the system (4.23). It is easy to check that if $\sum_i b_i \le 1$, then $\bar w = b$, with zero multiplier for the sum constraint, is the solution of problem (4.23). Now we assume that $\sum_i b_i > 1$. In this case, using the monotonicity of the sorted components recorded in (4.26) and (4.27), there exists a least integer $k$ at which the complementarity conditions in (4.23) can be satisfied; the corresponding multiplier $\lambda$ is given in (4.29), and the associated candidate solution $\bar w = \max\{b - \lambda \mathbf{1}, 0\}$, where $\mathbf{1}$ is the vector of all ones, is defined in (4.30).

Theorem 4.5. Let $\bar w$ and $\lambda$ be given by (4.30) and (4.29), respectively. Then $(\bar w, \lambda)$ is a solution of problem (4.23), and thus the projection of $A$ onto $\mathcal{Y}$ is given by
\[ P_{\mathcal{Y}}(\operatorname{vec}(A)) = \operatorname{sgn}(\operatorname{vec}(A)) \odot \big(\Pi^T \bar w\big), \tag{4.31} \]
where $\odot$ denotes the componentwise product and $\operatorname{sgn}(\cdot)$ acts componentwise.

Proof. It follows from (4.26), (4.27), and (4.29) that $\bar w$ and $\lambda$ satisfy the feasibility and complementarity conditions collected in (4.32). Following from (4.32), it is easy to check that $(\bar w, \lambda)$ is a solution of problem (4.23). Note that problem (4.18) is convex; thus $\bar w$ is the solution of problem (4.18). Further, according to Remark 4.4, we obtain the projection formula (4.31). The proof is complete.
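The construction in Lemmas 4.1–4.3 and Theorem 4.5 is, in essence, the classical sort-and-threshold projection onto the unit $\ell_1$-ball. The following Matlab sketch illustrates the computation; it follows the standard procedure, and since the exact formulas (4.29)-(4.31) of the original are not reproduced here, the helper name and implementation details are ours.

```matlab
% Projection of a symmetric matrix A onto {Y = Y': sum(abs(Y(:))) <= 1}
% in the Frobenius norm, via the classical sort-and-threshold procedure.
function P = proj_l1ball(A)
    a = A(:);                            % vec(A)
    if sum(abs(a)) <= 1                  % already feasible: nothing to do
        P = A;  return;
    end
    b   = sort(abs(a), 'descend');       % sorted magnitudes (Lemma 4.3)
    css = cumsum(b);
    k   = find(b - (css - 1)./(1:numel(b))' > 0, 1, 'last');
    lambda = (css(k) - 1)/k;             % threshold (Lagrange multiplier)
    w = max(abs(a) - lambda, 0);         % shrink the magnitudes
    P = reshape(sign(a).*w, size(A));    % restore signs and shape (Lemma 4.2)
end
```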

Remark 4.6. Note that if $\bar X$ is the solution of the auxiliary problem (4.34), which involves a given scalar, then the solution of problem (1.2) can be recovered from $\bar X$. Thus, we can find the solution of problem (1.2) by solving problem (4.34).

With the quantities defined in (4.36), we are now in a position to describe the implementation process of Algorithm 3.2 for problem (1.2) in detail.

Algorithm 4.7 (projection and contraction method for problem (1.2)).
Step 1 (Initialization). Let $C$ be a given symmetric matrix and let $H_L \le H_U$ be given scalars. Choose arbitrarily an initial point $u^0 = (x^0, y^0) \in \Omega$, where the relevant quantities are defined by (4.36) and (2.7), respectively. Let $k = 0$, $\gamma \in (0, 2)$, and let $\varepsilon > 0$ be a prespecified tolerance.
Step 2 (Computation). Compute the projections onto $\mathcal{X}$ and $\mathcal{Y}$ by using (4.6) and (4.31), respectively.
Then form the residual $e(u^k)$, the direction $d(u^k)$, and the step size $\alpha_k$ as in Section 3, where $u^k = (x^k, y^k)$.
Step 3 (Verification). If $\|e(u^k)\| \le \varepsilon$, then stop and output the approximate solution $X^k$, where $X^k = \operatorname{mat}(x^k)$.
Step 4 (Iteration). Set $u^{k+1} = u^k - \gamma\,\alpha_k\, d(u^k)$ according to (3.13), let $k := k + 1$, and go to Step 2.
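Putting the pieces together, the following Matlab sketch shows one possible realization of the loop of Algorithm 4.7 on the stacked variable $u = (x; y)$. It assumes the helper functions proj_SB and proj_l1ball sketched above and the reconstructed PC update; it is an illustration of the structure only, not a reproduction of the authors' code, and the final projection step is our own choice for returning a feasible matrix.

```matlab
function X = matrix_nearness_pc(C, H_L, H_U, gamma, tol, maxit)
% Sketch of Algorithm 4.7: approximate a minimizer of max_{ij}|X_ij - C_ij|
% over S_B via the projection and contraction method on the associated LVI.
    n = size(C, 1);  N = n^2;
    c = C(:);
    x = proj_SB(C, H_L, H_U);  x = x(:);    % feasible starting primal point
    y = zeros(N, 1);                        % starting dual point (in the l1 ball)
    for k = 1:maxit
        % residual e(u) = u - P_Omega(u - (M*u + q)) with M = [0 I; -I 0], q = (0; c)
        px = proj_SB(reshape(x - y, n, n), H_L, H_U);        ex = x - px(:);
        py = proj_l1ball(reshape(y + (x - c), n, n));        ey = y - py(:);
        e  = [ex; ey];
        if norm(e) <= tol, break; end
        d     = e + [-ey; ex];              % d = (I + M')*e for this M
        alpha = (e'*e)/(d'*d);              % step size
        u     = [x; y] - gamma*alpha*d;     % contraction step
        x = u(1:N);  y = u(N+1:end);
    end
    % return a feasible approximation: projection of the final x onto S_B
    X = proj_SB(reshape(x, n, n), H_L, H_U);
end
```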

5. Numerical Experiments

In this section, some examples are provided to illustrate the performance of Algorithm 4.7 for solving problem (1.2). In the following illustrative examples, the computer program implementing Algorithm 4.7 is coded in Matlab, and the program was run on an IBM notebook (R51).

Example 5.1. Consider problem (1.2) with a symmetric test matrix $C$ generated by the Matlab functions rand and eye, where $n$ is the size of problem (1.2). The initial point is given according to (5.1), where zeros is also a Matlab function, and the bounds $H_L$ and $H_U$ are determined by the smallest and the largest eigenvalues of the matrix $C$, respectively. Table 1 reports the numerical results of Example 5.1 solved by Algorithm 4.7, where $k$ is the number of iterations, the unit of CPU time is the second, and $X^k$ is the approximate solution of problem (1.2) obtained by Algorithm 4.7.

Example 5.2. Consider problem (1.2) with a different symmetric test matrix $C$. In this test example, the remaining data are the same as those in Example 5.1, and the initial point is also given according to (5.1). Table 2 reports the numerical results of Example 5.2 and shows the numerical performance of Algorithm 4.7 for solving problem (1.2).

6. Conclusions

In this paper, a relationship between the matrix nearness problem and a linear variational inequality has been established. The matrix nearness problem considered here can thus be solved by applying an algorithm for the related linear variational inequality. Based on this observation, a projection and contraction method is presented for solving the matrix nearness problem, and its implementation details are described. Numerical experiments show that the suggested method performs well, and that its performance can be improved by setting the parameters in Algorithm 4.7 properly. Thus, further study of the effect of the parameters in Algorithm 4.7 may be an interesting topic for future work.

Acknowledgments

This research is financially supported by a research grant from the Research Grant Council of China (Project no. 10971095).