Abstract

For the constrained minimization problem of maximum eigenvalue functions, since the objective function is nonsmooth, we can use the approximate inexact accelerated proximal gradient (AIAPG) method (Wang et al., 2013) to solve its smooth approximation minimization problem. We take the function $F(X) = f(X) + g(X)$ in the problem $\min\{F(X) : X \in S^n\}$, where $f(X) = \lambda_{\max}(X)$ is the maximum eigenvalue function and $g = \delta_{\Omega}$ is a proper lower semicontinuous convex function (possibly nonsmooth), with $\delta_{\Omega}$ denoting the indicator function of the feasible set $\Omega$. However, the approximate minimizer generated by the AIAPG method must be contained in $\Omega$; otherwise the method is invalid. In this paper, we consider the case where the approximate minimizer cannot be guaranteed to lie in $\Omega$, and we propose two different strategies: constructing a feasible solution, and designing a new method named the relax inexact accelerated proximal gradient (RIAPG) method. It is worth mentioning that, compared with the former strategy, the latter overcomes the drawback that the required conditions are too strict. Furthermore, the RIAPG method inherits the global iteration complexity and attractive computational advantages of the AIAPG method.

1. Introduction

The minimization problem of maximum eigenvalue functions in nonsmooth optimization presents a fascinating mathematical challenge. Such problems arise in many different areas of applied mathematics, especially in engineering design [1], and they draw on a rich blend of classical mathematical techniques and contemporary optimization theory. The constrained minimization problem of maximum eigenvalue functions can be transformed into the minimization of the sum of two convex functions, and various methods have been proposed to deal with such problems. For instance, in [2] a forward-backward splitting algorithm was used to solve the minimization problem for the sum of two proper lower semicontinuous convex functions. Besides, several fixed point algorithms based on the proximity operator were introduced in [3] for the ROF denoising model, which is itself a minimization problem of the sum of two convex functions. More recently, the AIAPG method, based on the accelerated proximal gradient (APG) method [4], was introduced in [5] for solving the minimization of the sum of a maximum eigenvalue function and a proper lower semicontinuous convex function. If the approximate minimizer is infeasible, that is, if it is not contained in the feasible set $\Omega$, the AIAPG method is not applicable. Hence, we design the RIAPG method, built on the AIAPG method, to solve the smooth approximation of the constrained minimization problem of maximum eigenvalue functions.

We consider the following constrained minimization problem of the maximum eigenvalue function:
$$\min\ \lambda_{\max}(X) \quad \text{s.t.} \quad \mathcal{A}(X) = b,\ X \succeq 0,$$
where $\lambda_{\max}(\cdot)$ is the maximum eigenvalue function, $\mathcal{A}: S^n \to \mathbb{R}^m$ is a linear map, $b \in \mathbb{R}^m$, and $X \succeq 0$ means that $X$ is a positive semidefinite matrix; $S^n$ is the space of $n \times n$ real symmetric matrices. The problem is equivalent to the following form:
$$\min\ \{F(X) := f(X) + g(X) : X \in S^n\},$$
where $f(X) = \lambda_{\max}(X)$ and $g(X) = \delta_{\Omega}(X)$ with $\Omega := \{X \in S^n : \mathcal{A}(X) = b,\ X \succeq 0\}$; here $\delta_{\Omega}$ denotes the indicator function of $\Omega$. Then we consider the smooth approximation $f_\varepsilon$ [6] to the maximum eigenvalue function, which is a proper, lower semicontinuous, convex function and is Lipschitz continuous. This way of dealing with the problem resembles the technique used in [7]. Hence, the smooth approximation of the problem is obtained by replacing $f$ with $f_\varepsilon$. This approximate problem can be solved by the AIAPG method in the feasible case. In the infeasible case, we propose two strategies. On the one hand, we use the infeasible approximate minimizer to construct a feasible solution that satisfies the conditions required by the AIAPG method. On the other hand, we enlarge the feasible set suitably and present the RIAPG method to solve the problem.
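For concreteness, one standard smoothing of $\lambda_{\max}$ in the spirit of [6] is the log-sum-exp (soft-max) function of the eigenvalues; whether this coincides with the paper's exact choice of $f_\varepsilon$ is an assumption. A minimal sketch in Python:

```python
import numpy as np

def f_eps(X, eps):
    """Log-sum-exp smoothing of lambda_max over the eigenvalues of X
    (a common Nesterov-type choice; assumed, not taken from the paper)."""
    lam = np.linalg.eigvalsh(X)                 # eigenvalues of symmetric X
    m = lam.max()                               # shift for numerical stability
    return m + eps * np.log(np.exp((lam - m) / eps).sum())

def grad_f_eps(X, eps):
    """Gradient of f_eps: a softmax reweighting of the eigenprojectors,
    Q diag(w) Q^T; its Lipschitz constant is 1/eps."""
    lam, Q = np.linalg.eigh(X)
    w = np.exp((lam - lam.max()) / eps)
    w /= w.sum()
    return (Q * w) @ Q.T
```

For this particular choice, $\lambda_{\max}(X) \le f_\varepsilon(X) \le \lambda_{\max}(X) + \varepsilon \ln n$, so the smoothing error is controlled directly by $\varepsilon$.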

The rest of the paper is organized as follows. Section 2 introduces the construction of a feasible approximate minimizer that satisfies the requirements of the AIAPG method. A drawback of this construction is that the required conditions are strict, which challenges both the efficiency of the practical performance and the accuracy of the calculation. Hence, the relax inexact accelerated proximal gradient method is addressed more formally in Section 3. Section 4 is devoted to a series of lemmas and theorems establishing the convergence analysis of the method. Finally, we close with a conclusion section.

Notation. For any $X, Y$ in $S^n$, $\langle X, Y \rangle$ denotes the standard trace inner product, and $\|\cdot\|_F$ and $\|\cdot\|_2$, respectively, stand for the Frobenius norm and the spectral norm. $\mathcal{A}^*$ is the adjoint operator of the linear operator $\mathcal{A}$, so that $\langle \mathcal{A}(X), y \rangle = \langle X, \mathcal{A}^*(y) \rangle$ for all $X \in S^n$ and $y \in \mathbb{R}^m$. To facilitate the later exposition, we also fix the following notation. Let $\mathcal{H}$ be a self-adjoint positive definite operator on $S^n$ that is chosen by the user. In addition, $\{\varepsilon_k\}$, $\{\delta_k\}$, and $\{\eta_k\}$ are given convergent sequences of nonnegative numbers such that $\sum_k \varepsilon_k < \infty$, $\sum_k \delta_k < \infty$, and $\sum_k \eta_k < \infty$.

2. Construction of Feasible Solution

The smoothed problem can be solved by the AIAPG method [5], but note that the approximate minimizer generated by that method must be feasible; that is, $\mathcal{A}(X) = b$ and $X \succeq 0$. At the same time, as stipulated in [5], the approximate solution should satisfy the KKT optimality conditions. More precisely, denoting by $y$ and $S$ the multipliers of the equality and semidefiniteness constraints, these conditions read
$$\nabla f_\varepsilon(X) - \mathcal{A}^*(y) - S = 0, \qquad \mathcal{A}(X) = b, \qquad X \succeq 0, \quad S \succeq 0, \quad \langle X, S \rangle = 0.$$

In practice, the positive semidefiniteness of the approximate solution is easy to enforce by projecting onto the cone $S^n_+$ of positive semidefinite matrices, but the residual vector $\mathcal{A}(X) - b$ is usually not exactly equal to $0$. Hence, we present a strategy that uses the infeasible solution to construct a feasible solution satisfying the corresponding KKT optimality conditions.
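The projection onto $S^n_+$ mentioned above is the standard Frobenius-norm projection obtained by clipping negative eigenvalues; a minimal sketch in Python:

```python
import numpy as np

def proj_psd(X):
    """Project a symmetric matrix onto the PSD cone by clipping negative
    eigenvalues to zero (the Frobenius-norm projection onto S^n_+)."""
    lam, Q = np.linalg.eigh((X + X.T) / 2)  # symmetrize against round-off
    return (Q * np.clip(lam, 0.0, None)) @ Q.T
```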

Suppose the approximate solution $\hat{X}$, together with its multipliers, satisfies the KKT conditions above up to prescribed tolerances, and suppose there exists a strictly feasible point $X^0$ (i.e., $\mathcal{A}(X^0) = b$ and $X^0 \succ 0$), where the linear map $\mathcal{A}$ is surjective. Then a feasible solution $\bar{X}$ can be constructed from $\hat{X}$ by a suitable correction toward feasibility, as illustrated in the sketch after the next paragraph.

We now indicate why $\bar{X}$ is feasible and satisfies the corresponding KKT optimality conditions for the above construction. By the definition of $\bar{X}$, the equality constraint $\mathcal{A}(\bar{X}) = b$ holds, and $\bar{X}$ is positive semidefinite whenever the correction parameter is chosen suitably. In addition, the remaining KKT conditions hold up to quantities controlled by the prescribed tolerances. These results, however, were established under a strict requirement on the accuracy of $\hat{X}$. The proof of the above conclusions is similar to that in [8], and we omit it here.
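Since the explicit constructive formula is not reproduced above, the following Python sketch shows one natural realization under the stated assumptions (surjective $\mathcal{A}$, strictly feasible $X^0$); the two-stage repair below (least-norm equality correction, then a convex combination with $X^0$) is an illustration, not necessarily the authors' exact formula:

```python
import numpy as np

def restore_feasibility(X_hat, A, b, X0, tol=1e-12):
    """Hypothetical two-stage repair of an infeasible approximate minimizer.

    A  : (m, n*n) matrix representing the linear map A(X) = A @ vec(X),
         assumed surjective and consistent with symmetry.
    X0 : a strictly feasible point (A(X0) = b, X0 positive definite).
    """
    n = X_hat.shape[0]
    # Stage 1: least-norm correction so that the equality constraint holds
    # exactly; surjectivity makes A A^T invertible.
    r = b - A @ X_hat.reshape(-1)
    y = np.linalg.solve(A @ A.T, r)
    X_tilde = X_hat + (A.T @ y).reshape(n, n)
    X_tilde = (X_tilde + X_tilde.T) / 2          # guard against round-off
    # Stage 2: convex combination with X0 to recover semidefiniteness;
    # the equality constraint is preserved since both points satisfy it.
    lam_min = np.linalg.eigvalsh(X_tilde)[0]
    if lam_min >= -tol:
        return X_tilde
    mu = np.linalg.eigvalsh(X0)[0]               # > 0 by strict feasibility
    # By concavity of lambda_min, lambda_min((1-rho)X_tilde + rho X0)
    # >= (1-rho)*lam_min + rho*mu, so this rho guarantees PSD.
    rho = -lam_min / (mu - lam_min)
    return (1 - rho) * X_tilde + rho * X0
```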

Though we have succeeded in constructing a feasible solution, the required accuracy conditions are so stringent that they hamper computational efficiency. To overcome this drawback, we propose the RIAPG method, in which the iterates generated by the method need not be strictly contained in $\Omega$.

3. A Relax Inexact Accelerated Proximal Gradient Method

The RIAPG algorithm for solving the smoothed problem is described as follows.

Given a tolerance, input $t_1 = 1$ and $Y_1 = X_0 \in S^n$, and set $k = 1$. Iterate the following steps.

Step 1. Find an approximate minimizer
$$X_k \approx \operatorname*{arg\,min}_{X \in \Omega_k} \Big\{ q_k(X) := f_\varepsilon(Y_k) + \langle \nabla f_\varepsilon(Y_k), X - Y_k \rangle + \tfrac{1}{2} \langle X - Y_k, \mathcal{H}_k(X - Y_k) \rangle \Big\}, \tag{6}$$
where $X_k$ is allowed to be contained in a suitable enlargement $\Omega_k$ of $\Omega$, and the sequence of enlargements $\{\Omega_k\}$ is monotonically decreasing. Consider, for instance, $\Omega_k := \{X \in S^n : \|\mathcal{A}(X) - b\| \le \eta_k,\ X \succeq 0\}$.

Step 2. Compute $t_{k+1} = \dfrac{1 + \sqrt{1 + 4 t_k^2}}{2}$.

Step 3. Compute $Y_{k+1} = X_k + \dfrac{t_k - 1}{t_{k+1}} (X_k - X_{k-1})$.
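As a minimal illustration of the loop structure above (assuming $\mathcal{H}_k = L\,\mathcal{I}$, so that Step 1 reduces to an approximate proximal/projection step, here left abstract as a user-supplied subroutine), the iteration can be sketched in Python:

```python
import numpy as np

def riapg(grad_f, L, approx_step, X0, max_iter=100):
    """Sketch of the RIAPG outer loop with H_k = L*I (an assumption).

    approx_step(Z, k) should return an approximate minimizer of the
    quadratic model (6) over the enlarged set Omega_k; it is left abstract
    because the subproblem solver is application-specific.
    """
    X_prev, Y, t = X0, X0, 1.0
    for k in range(1, max_iter + 1):
        # Step 1: approximate proximal-gradient step from the extrapolated
        # point Y (for H_k = L*I the model minimizer is a projection of
        # Y - grad_f(Y)/L onto Omega_k, delegated to approx_step).
        X = approx_step(Y - grad_f(Y) / L, k)
        # Step 2: APG update of the extrapolation parameter.
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        # Step 3: extrapolation (momentum) step.
        Y = X + ((t - 1.0) / t_next) * (X - X_prev)
        X_prev, t = X, t_next
    return X
```

The subroutine `approx_step` stands in for the approximate solution of (6) over the enlarged set $\Omega_k$; its inexactness is what the conditions in (9) below are meant to control.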

When $\mathcal{H}_k$ is chosen as above, the dual of (6) can be written down explicitly. We assume that the approximate minimizer in (6) and its corresponding dual variables satisfy the inexactness conditions collected in (9), in which a given positive accuracy parameter appears; we also assume that the associated tolerance sequence is monotonically decreasing.

Let $X^*$ be an optimal solution of the smoothed problem, and let $(y^*, S^*)$ be an optimal solution of its dual problem.
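Although the explicit dual is not reproduced above, for the constraint structure $\mathcal{A}(X) = b$, $X \succeq 0$ the Lagrangian dual of the smoothed problem takes the standard form below (a reconstruction from the general conjugate duality theory, not the paper's displayed formula), where $f_\varepsilon^*$ denotes the convex conjugate of $f_\varepsilon$:

```latex
\[
  \max_{y \in \mathbb{R}^m,\; S \in S^n} \;\;
  b^{\mathsf T} y - f_\varepsilon^{*}\bigl(\mathcal{A}^{*}(y) + S\bigr)
  \qquad \text{s.t.} \quad S \succeq 0.
\]
```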

To facilitate the later proofs, we define several auxiliary quantities.

It should be noted that, compared with the corresponding quantities of the AIAPG method, some of the quantities here may be negative owing to the lack of feasibility of $X_k$.

4. Convergence Analysis

In the following paragraphs, a series of lemmas and theorems is given to establish the convergence analysis of the RIAPG method. We should mention that the lack of feasibility of the iterates $X_k$ introduces nontrivial technical difficulties into the convergence proof.

Lemma 1. Given a point in $S^n$ and a positive definite linear operator $\mathcal{H}$ on $S^n$ such that the conditions in (9) hold, for all $k \ge 1$ the estimate (12) holds.

Proof. Noting the first inequality of (9), we obtain an initial bound. By the convexity of $f_\varepsilon$, we obtain a second one. Combining the two, and then using the third inequality of (9), we arrive at the desired estimate; the required result follows by taking the definitions of the relevant quantities into account.

Lemma 2. Suppose that the parameter condition below holds for all $k \ge 1$. Then (i) the first estimate below is valid; (ii) if, in addition, the conditions in (9) are satisfied for all $k$, then the second estimate below is valid, where the constants are defined in terms of the tolerance sequences.

Proof. (i) According to Lemma 1, taking one comparison point in (12), we obtain (18); similarly, taking another comparison point in (12), we obtain (19). Multiplying (18) throughout by the appropriate factor and adding the result to (19) yields (20). In addition, multiplying (20) throughout by a further factor, we obtain the next estimate; note that the second inequality there follows from the definitions of the quantities involved. Using (11), result (i) is proved.
(ii) First, applying Lemma 1 with a suitable choice of points and invoking the assumptions on the tolerance sequences, we obtain an auxiliary bound. Next we show (25): by result (i) and the summability of the tolerance sequences, (25) holds, and two further estimates follow from it. According to (28), we obtain an intermediate inequality; adding the same quantity to both sides of (29) and rearranging the terms, we obtain a chain of estimates, where the last two inequalities use the monotonicity of the sequences involved.
Passing to the limit, we obtain (35). Result (ii) then follows from (35), (25), and the facts noted above.

Lemma 3. (i) Suppose that there exists a constant for which the level-boundedness condition below holds. If the sequence of objective values is bounded from above, then the sequence of iterates is bounded.
(ii) Suppose that, for all $k$, the iterate sequence is bounded and that there exists a constant for which the corresponding bound below holds. Then the sequence of dual variables is bounded; in addition, the associated multiplier sequence is also bounded.

Proof. (i) By the convexity of $f_\varepsilon$ and the monotonicity of the relevant sequences, we obtain a lower bound on the objective values along the iterates. Thus the iterates remain in a bounded level set, and the required result is proved.
(ii) Noting (9) and the monotonicity of the relevant sequences, we obtain a bound on the dual variables; their boundedness then follows from the boundedness of the iterates. By the continuity of $\nabla f_\varepsilon$ and the boundedness of the iterates, the corresponding gradient sequence is also bounded.
Next, we show that the multiplier sequence is bounded. Choosing a suitable test point in (9) yields the needed inequality, and the claim follows directly from the surjectivity of $\mathcal{A}$. The boundedness of the multiplier sequence is thus proved using the fact that the other two sequences are bounded.

Lemma 4. For all $k \ge 1$, the estimate below holds, where $X^*$ is an optimal solution of the smoothed problem.

Proof. By the convexity of $f_\varepsilon$, the optimality of $X^*$, and the facts established above, the desired inequality follows.

Theorem 5. Suppose that the assumptions above hold for all $k \ge 1$. Then, with the parameters taken as specified, the two-sided bound (44) holds.

Proof. Note that $X^*$ is an optimal solution of the smoothed problem. From this optimality we obtain (45). Then the inequality on the left side of (44) follows from Lemma 4, (45), and the definitions of the quantities involved.
Next, we show the inequality on the right side of (44). By Lemma 2(ii) and the choice of parameters, we obtain an intermediate estimate, from which the required bound is derived. Using the same choice of parameters again, the required result is proved.

From the assumptions on the tolerance sequences $\{\varepsilon_k\}$, $\{\delta_k\}$, and $\{\eta_k\}$, the relevant error sequences are bounded. Moreover, by Lemma 3, the iterate sequence is also bounded, and at the same time we obtain the boundedness of the dual sequences. The convergence of the RIAPG method, with the $O(1/k^2)$ rate typical of accelerated proximal gradient schemes, is thus established.
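For orientation, the bound in Theorem 5 has the general shape of inexact-APG complexity results (cf. [4], [5]); the display below is only a schematic of that shape, with unspecified constants absorbing the initial distance to the solution and the tolerance sequences, not the paper's exact statement:

```latex
% Schematic of a two-sided O(1/k^2) bound; C_1, C_2 are placeholder constants
% depending on \|X_0 - X^*\|_F and the sequences {eps_k}, {delta_k}, {eta_k}.
\[
  -\frac{C_1}{(k+1)^{2}}
  \;\le\;
  f_\varepsilon\bigl(X_k\bigr) - f_\varepsilon\bigl(X^{*}\bigr)
  \;\le\;
  \frac{C_2}{(k+1)^{2}}.
\]
```

The lower bound is nontrivial here precisely because the iterates $X_k$ may be infeasible, so the objective gap can be negative.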

5. Conclusion

The principal result presented here is an implementable and globally convergent method, the RIAPG method, for solving the constrained minimization problem of maximum eigenvalue functions. The RIAPG method, being an extension of the AIAPG method, is especially suited to the case where the approximate minimizer generated by the AIAPG method may not lie in the feasible set. Though the method rests on some assumptions, it enriches the ways of dealing with the constrained minimization problem of maximum eigenvalue functions.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors thank the referees for their helpful suggestions for the improvement of this paper. This work was supported by the National Natural Science Foundation of China under Project no. 11171138.