Abstract

In this paper, we apply the $\mathcal{U}$-algorithm to solve the constrained minimization problem of a maximum eigenvalue function, that is, the composition of the maximum eigenvalue function with an affine matrix-valued mapping. We first convert the constrained problem into an equivalent unconstrained problem by means of an exact penalty function. However, the equivalent problem involves the sum of two nonsmooth functions, which makes it difficult to apply the $\mathcal{U}$-algorithm directly. Hence, our strategy first applies a smooth convex approximation of the maximum eigenvalue function to obtain an approximate problem of the equivalent problem. Then the approximate problem, the $\mathcal{VU}$-space decomposition, and the $\mathcal{U}$-Lagrangian of the objective function at a given point are addressed in detail. Finally, the $\mathcal{U}$-algorithm is presented, and an approximate solution of the primal problem is obtained by solving the approximate problem.

1. Introduction

Eigenvalue optimization problems have attracted wide attention in nonsmooth optimization. Such problems arise in many applications such as signal recovery [1], shape optimization [2], and robotics [3]. Therefore, research on methods for solving such problems plays an important role in enriching the blend of classical mathematical techniques and contemporary optimization theory. Various methods have been proposed to deal with such problems; for example, the bundle method was used by Helmberg and Oustry to solve a class of unconstrained maximum eigenvalue optimization problems [4]. Recently, Oustry applied the $\mathcal{U}$-Newton algorithm to the maximum eigenvalue optimization problem [5]. However, that method requires the transversality condition to hold. In this paper, we design a $\mathcal{U}$-algorithm which does not require this strict condition and use it to solve the constrained maximum eigenvalue optimization problem approximately. Here, we focus our attention on the following model problem:

$(P)\qquad \min_{x\in\mathbb{R}^n}\ f(x):=\lambda_{\max}(A(x))\quad\text{s.t. } \mathcal{B}x=b,$

where $\lambda_{\max}(\cdot)$ is the maximum eigenvalue function, the mapping $A:\mathbb{R}^n\to\mathbb{S}^m$ is affine, $b$ is given, $\mathcal{B}$ is a linear operator, and $\mathbb{S}^m$ is the space of $m\times m$ symmetric matrices. Consider an exact penalty function associated with $(P)$ as follows:

$F(x):=f(x)+\rho\,\|\mathcal{B}x-b\|_1,$

where $x\in\mathbb{R}^n$, $\|y\|_1=\sum_j|y_j|$, and $\rho>0$ is a penalty parameter. For $\rho$ large enough, it is well known that problem $(P)$ is equivalent to the following form:

$(P_\rho)\qquad \min_{x\in\mathbb{R}^n} F(x).$

It is known that the $\mathcal{VU}$-decomposition theory can be applied only when the $\mathcal{V}$-space is not full dimensional. Since $F$ inherits the nondifferentiability of $f$ and of the penalty term, it is difficult to apply $\mathcal{VU}$-decomposition theory to $F$, in that the $\mathcal{V}$-space of $F$ at a given point may be full dimensional. Hence, it is imperative to consider a smooth approximation function [6] of $f$. Then the approximate problem of $(P_\rho)$ is given as follows:

$(\widetilde{P})\qquad \min_{x\in\mathbb{R}^n} F_\mu(x):=f_\mu(x)+\rho\,\|\mathcal{B}x-b\|_1,$

where $f_\mu$ denotes the smooth convex approximation of $f$ with smoothing parameter $\mu>0$. Thus problem $(\widetilde{P})$ can be solved by the $\mathcal{U}$-algorithm, and at the same time we obtain an approximate solution of problem $(P_\rho)$.
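To make the smoothing step concrete, the following minimal sketch evaluates one standard smooth convex approximation of $\lambda_{\max}$, the log-sum-exp of the eigenvalues $f_\mu(x)=\mu\log\operatorname{tr}\exp(A(x)/\mu)$, together with its gradient; the specific approximation used in [6] may differ, and the function names here are illustrative assumptions.

import numpy as np
from scipy.linalg import eigh

def f_mu(x, A0, A_list, mu):
    """Log-sum-exp smoothing of lambda_max(A(x)), A(x) = A0 + sum_i x_i*A_i.

    Shifting by the largest eigenvalue keeps the exponentials stable and
    gives lambda_max(A(x)) <= f_mu(x) <= lambda_max(A(x)) + mu*log(m).
    """
    A = A0 + sum(xi * Ai for xi, Ai in zip(x, A_list))
    w = eigh(A, eigvals_only=True)          # ascending eigenvalues
    return w[-1] + mu * np.log(np.exp((w - w[-1]) / mu).sum())

def grad_f_mu(x, A0, A_list, mu):
    """Gradient of f_mu: softmax weights on the eigenvalue decomposition."""
    A = A0 + sum(xi * Ai for xi, Ai in zip(x, A_list))
    w, Q = eigh(A)
    p = np.exp((w - w[-1]) / mu)
    p /= p.sum()                            # softmax over the eigenvalues
    Z = (Q * p) @ Q.T                       # equals Q diag(p) Q^T
    return np.array([np.sum(Z * Ai) for Ai in A_list])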

The rest of the paper is organized as follows. Section 2 introduces three equivalent $\mathcal{VU}$-space decomposition definitions for $F_\mu$, associated with a given point $\bar{x}$. The $\mathcal{U}$-Lagrangian of $F_\mu$ and its relevant properties are addressed in more detail in Section 3. Section 4 is devoted to the $\mathcal{U}$-algorithm for solving the approximate problem and to the convergence analysis of the method. Finally, Section 5 gives some concluding remarks.

To be convenient for explanation, throughout the paper we use the set of active indices

$I(x):=\{\,j:(\mathcal{B}x-b)_j=0\,\},$

which collects the components of the penalty term that vanish at $x$, and set $\bar{I}:=I(\bar{x})$ at the point of interest $\bar{x}$.

The solution of problem $(P)$ depends on the study of the objective function $F_\mu$ of problem $(\widetilde{P})$. The $\mathcal{VU}$-space decomposition theory of $F_\mu$ will be shown first.

2. $\mathcal{VU}$-Space Decomposition for $F_\mu$

Firstly, we can easily obtain the description of the subdifferential of $F_\mu$ as follows:

$\partial F_\mu(x)=\nabla f_\mu(x)+\rho\,\mathcal{B}^{*}S(x),\qquad S(x):=\{\,s: s_j=\operatorname{sign}((\mathcal{B}x-b)_j)\ \text{for } j\notin I(x),\ s_j\in[-1,1]\ \text{for } j\in I(x)\,\},$

and the relative interior of $\partial F_\mu(x)$ is obtained by replacing each interval $[-1,1]$ above with the open interval $(-1,1)$.
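Under the penalized form adopted above, this subdifferential admits a direct computational description; the following sketch (the function name and the $\ell_1$-penalty structure are assumptions of this presentation, not the paper's code) collects the data that determine $\partial F_\mu(x)$ and its relative interior.

import numpy as np

def subdiff_data(x, grad_fmu, B, b, rho, tol=1e-10):
    """Data describing dF_mu(x) for F_mu = f_mu + rho*||Bx - b||_1 (a sketch).

    Off the active set the l1-sign pattern is fixed; on the active set any
    s_j in [-1, 1] is admissible ((-1, 1) for the relative interior), and
    each admissible s yields the subgradient grad_fmu(x) + rho * B.T @ s.
    """
    res = B @ x - b
    active = np.abs(res) <= tol     # indices j with (Bx - b)_j = 0
    s_fixed = np.sign(res)          # fixed components s_j off the active set
    return grad_fmu(x), s_fixed, active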

We start by defining a decomposition of the space $\mathbb{R}^n$, associated with a given point $\bar{x}$. We give three definitions of the subspaces $\mathcal{V}$ and $\mathcal{U}$ as follows.

Definition 1. (i) Define $\mathcal{U}$ as the subspace where the directional derivative $F'_\mu(\bar{x};\cdot)$ is linear and take $\mathcal{V}:=\mathcal{U}^{\perp}$; since $F'_\mu(\bar{x};\cdot)$ is sublinear, we have

$\mathcal{U}=\{\,d\in\mathbb{R}^n: F'_\mu(\bar{x};d)=-F'_\mu(\bar{x};-d)\,\}.$

(ii) Define $\mathcal{V}$ as the subspace parallel to the affine hull of $\partial F_\mu(\bar{x})$; in other words,

$\mathcal{V}=\operatorname{aff}\partial F_\mu(\bar{x})-g=\operatorname{lin}(\partial F_\mu(\bar{x})-g),$

where $g\in\partial F_\mu(\bar{x})$ is arbitrary, and take $\mathcal{U}:=\mathcal{V}^{\perp}$.
(iii) Define $\mathcal{U}$ and $\mathcal{V}$ as the normal and tangent cones to $\partial F_\mu(\bar{x})$ at a given point $g\in\operatorname{ri}\partial F_\mu(\bar{x})$; that is,

$\mathcal{U}=N_{\partial F_\mu(\bar{x})}(g),\qquad \mathcal{V}=T_{\partial F_\mu(\bar{x})}(g).$

In the meantime, $N_{\partial F_\mu(\bar{x})}(g)$ and $T_{\partial F_\mu(\bar{x})}(g)$ are subspaces.
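As a simple illustration of Definition 1, consider $h(x)=|x_1|+\frac{1}{2}x_2^2$ on $\mathbb{R}^2$ at $\bar{x}=0$. Then $\partial h(0)=[-1,1]\times\{0\}$, whose affine hull is $\mathbb{R}\times\{0\}$, so definition (ii) gives $\mathcal{V}=\mathbb{R}\times\{0\}$ and $\mathcal{U}=\{0\}\times\mathbb{R}$. Consistently with (i), the directional derivative $h'(0;d)=|d_1|$ satisfies $h'(0;d)=-h'(0;-d)$ exactly when $d_1=0$, and, for $g=0\in\operatorname{ri}\partial h(0)$, the normal cone of definition (iii) is $N_{\partial h(0)}(0)=\{0\}\times\mathbb{R}=\mathcal{U}$.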

Theorem 2. In Definition 1, we have the following:
(i) The subspace $\mathcal{U}$ is actually given by

$\mathcal{U}=\{\,d\in\mathbb{R}^n:\langle s-g,d\rangle=0\ \forall s\in\partial F_\mu(\bar{x})\,\}$

and is independent of the particular $g\in\operatorname{ri}\partial F_\mu(\bar{x})$.
(ii) The subspace $\mathcal{V}$ is parallel to the affine hull of $\partial F_\mu(\bar{x})$; that is, $\mathcal{V}=\operatorname{aff}\partial F_\mu(\bar{x})-g$ for arbitrary $g\in\partial F_\mu(\bar{x})$. More specifically, $\mathcal{V}=\operatorname{lin}(\partial F_\mu(\bar{x})-g)$.
(iii) $\mathcal{U}=\mathcal{V}^{\perp}$, so that $\mathbb{R}^n=\mathcal{U}\oplus\mathcal{V}$.

Proof. (i) On one hand, by Definition 1 and the definition of a normal cone, we have

$\mathcal{U}=N_{\partial F_\mu(\bar{x})}(g)=\{\,d\in\mathbb{R}^n:\langle s-g,d\rangle\le 0\ \forall s\in\partial F_\mu(\bar{x})\,\},$

where $g\in\operatorname{ri}\partial F_\mu(\bar{x})$, $s\in\partial F_\mu(\bar{x})$, and $d\in\mathbb{R}^n$.
On the other hand, let $d$ satisfy $\langle s-g,d\rangle=0$ for all $s\in\partial F_\mu(\bar{x})$. By the definition of a normal cone, $d\in N_{\partial F_\mu(\bar{x})}(g)$. Next, we only need to establish the converse case. Let $d\in N_{\partial F_\mu(\bar{x})}(g)$ and $s\in\partial F_\mu(\bar{x})$, and it suffices to prove $\langle s-g,d\rangle=0$. Indeed, by the definition of relative interior, there exists a positive constant $t$ such that $w:=g+t(g-s)\in\partial F_\mu(\bar{x})$, and $\langle w-g,d\rangle\le 0$ implies that $\langle s-g,d\rangle\ge 0$; together with $\langle s-g,d\rangle\le 0$, this gives $\langle s-g,d\rangle=0$. Then the result (i) is proved.
(ii) Taking the affine hull of $\partial F_\mu(\bar{x})$, we obtain

$\operatorname{aff}\partial F_\mu(\bar{x})=g+\operatorname{lin}(\partial F_\mu(\bar{x})-g),\qquad g\in\partial F_\mu(\bar{x}).$

Hence, the subspace which is parallel to the affine hull of $\partial F_\mu(\bar{x})$ is $\operatorname{lin}(\partial F_\mu(\bar{x})-g)$; that is, $\mathcal{V}=\operatorname{lin}(\partial F_\mu(\bar{x})-g)$, where $g\in\partial F_\mu(\bar{x})$ is arbitrary. Moreover, the choice of $g$ is immaterial: for $g_1,g_2\in\partial F_\mu(\bar{x})$ and $s\in\partial F_\mu(\bar{x})$, write $s-g_1=(s-g_2)+(g_2-g_1)$; since $g_2-g_1\in\operatorname{lin}(\partial F_\mu(\bar{x})-g_2)$, we can obtain $\operatorname{lin}(\partial F_\mu(\bar{x})-g_1)=\operatorname{lin}(\partial F_\mu(\bar{x})-g_2)$. Then result (ii) is proved.
(iii) By the characterization in (i) and the definition of $\mathcal{U}$, we have that $d\in\mathcal{U}$ if and only if $\langle s-g,d\rangle=0$ for all $s\in\partial F_\mu(\bar{x})$. By the convexity of $\partial F_\mu(\bar{x})$, the differences $s-g$, $s\in\partial F_\mu(\bar{x})$, span $\mathcal{V}=\operatorname{lin}(\partial F_\mu(\bar{x})-g)$. Let $d\in\mathcal{U}$ and $w\in\mathcal{V}$; writing $w$ as a linear combination of elements $s-g$ and using (i), we have $\langle w,d\rangle=0$; that is, $d\in\mathcal{V}^{\perp}$. Hence, $\mathcal{U}\subseteq\mathcal{V}^{\perp}$.
Let $d\in\mathcal{V}^{\perp}$; we have $\langle w,d\rangle=0$ for all $w\in\mathcal{V}$ and, in particular, $\langle s-g,d\rangle=0$ for all $s\in\partial F_\mu(\bar{x})$. By (i), we have $d\in\mathcal{U}$. Hence, $\mathcal{V}^{\perp}\subseteq\mathcal{U}$.
By the two inclusions, we obtain $\mathcal{U}=\mathcal{V}^{\perp}$. The proof of (iii) is completed.

The solution of problem $(\widetilde{P})$ is based not only on the $\mathcal{VU}$-space decomposition of $F_\mu$ but also on the study of the $\mathcal{U}$-Lagrangian of $F_\mu$, which will be shown next.

3. The $\mathcal{U}$-Lagrangian of $F_\mu$

Let $g\in\operatorname{ri}\partial F_\mu(\bar{x})$, let $Q$ be a positive semidefinite matrix, and let $\bar{V}$ be a basis matrix for $\mathcal{V}$. For $u\in\mathcal{U}$, we define the $\mathcal{U}$-Lagrange function of $F_\mu$ as follows:

$L_U(u;g_{\mathcal{V}}):=\min_{v\in\mathcal{V}}\{\,F_\mu(\bar{x}+u\oplus v)-\langle g_{\mathcal{V}},v\rangle_{\mathcal{V}}\,\},\qquad (19)$

where $u\oplus v$ denotes the point of $\mathbb{R}^n$ with $\mathcal{U}$-component $u$ and $\mathcal{V}$-component $v$, and $g_{\mathcal{V}}$ is the $\mathcal{V}$-component of $g$. Associated with (19) we have the set of minimizers

$W(u;g_{\mathcal{V}}):=\{\,v\in\mathcal{V}: L_U(u;g_{\mathcal{V}})=F_\mu(\bar{x}+u\oplus v)-\langle g_{\mathcal{V}},v\rangle_{\mathcal{V}}\,\}.\qquad (20)$
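Continuing the small illustration from Section 2, take $h(x)=|x_1|+\frac{1}{2}x_2^2$, $\bar{x}=0$, and $g=0\in\operatorname{ri}\partial h(0)$, so that $g_{\mathcal{V}}=0$; writing points as $u\oplus v$ with $v\in\mathcal{V}=\mathbb{R}\times\{0\}$ and $u\in\mathcal{U}=\{0\}\times\mathbb{R}$, definition (19) gives

$L_U(u;0)=\min_{v\in\mathbb{R}}\Big\{|v|+\frac{1}{2}u^2\Big\}=\frac{1}{2}u^2,\qquad W(u;0)=\{0\},$

so the $\mathcal{U}$-Lagrangian is a smooth function of $u$ even though $h$ itself is nonsmooth, which is precisely what is exploited on the $\mathcal{U}$-subspace.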

In the following paragraphs, a series of theorems and corollaries will be given to specify the properties of $L_U$ and the expansions of $F_\mu$.

Theorem 3. By the definition of $L_U$, we have the following conclusions:
(i) $L_U(\cdot;g_{\mathcal{V}})$ is a proper convex function.
(ii) A minimum point $v$ in (20) is characterized by the existence of some $\tilde{g}\in\partial F_\mu(\bar{x}+u\oplus v)$ such that $\tilde{g}_{\mathcal{V}}=g_{\mathcal{V}}$, where $u\in\mathcal{U}$ and $v\in\mathcal{V}$.
(iii) In particular, $L_U(0;g_{\mathcal{V}})=F_\mu(\bar{x})$ and $0\in W(0;g_{\mathcal{V}})$.
(iv) If $g\in\operatorname{ri}\partial F_\mu(\bar{x})$, then $W(u;g_{\mathcal{V}})$ is nonempty and compact for each $u\in\mathcal{U}$.

Theorem 4. Let $v\in\mathcal{V}$ satisfy $v\in W(u;g_{\mathcal{V}})$. Then, for this $u\in\mathcal{U}$, the subdifferential of $L_U(\cdot;g_{\mathcal{V}})$ at $u$ has the expression

$\partial L_U(u;g_{\mathcal{V}})=\{\,\tilde{g}_{\mathcal{U}}:\tilde{g}\in\partial F_\mu(\bar{x}+u\oplus v)\ \text{with}\ \tilde{g}_{\mathcal{V}}=g_{\mathcal{V}}\,\}.$

In particular, $L_U(\cdot;g_{\mathcal{V}})$ is differentiable at $u=0$, with $\nabla L_U(0;g_{\mathcal{V}})=g_{\mathcal{U}}$.

Corollary 5. If $0\in\operatorname{ri}\partial F_\mu(\bar{x})$, then $\nabla L_U(0;0)=0$.

Theorem 6. Let $v(u)\in\mathcal{V}$ satisfy $v(u)\in W(u;g_{\mathcal{V}})$ with $g\in\operatorname{ri}\partial F_\mu(\bar{x})$. Then, as $\mathcal{U}\ni u\to 0$, $v(u)=o(\|u\|)$, $L_U(u;g_{\mathcal{V}})=F_\mu(\bar{x}+u\oplus v(u))-\langle g_{\mathcal{V}},v(u)\rangle_{\mathcal{V}}$, and we have the first-order expansion

$L_U(u;g_{\mathcal{V}})=F_\mu(\bar{x})+\langle g_{\mathcal{U}},u\rangle_{\mathcal{U}}+o(\|u\|).$

Theorem 7. Assume the function $L_U(\cdot;0)$ has a generalized Hessian $H$ at $u=0$ and that $0\in\operatorname{ri}\partial F_\mu(\bar{x})$. For $u\in\mathcal{U}$ and $v(u)\in W(u;0)$, it holds that

$F_\mu(\bar{x}+u\oplus v(u))=F_\mu(\bar{x})+\frac{1}{2}\langle Hu,u\rangle_{\mathcal{U}}+o(\|u\|^{2}).$

The proofs of the above theorems and corollary are similar to those in [7], so we omit the details.

Based on the study of the $\mathcal{VU}$-space decomposition theory and the $\mathcal{U}$-Lagrangian of $F_\mu$, the $\mathcal{U}$-algorithm for solving problem $(\widetilde{P})$ will be addressed in the next section.

4. The $\mathcal{U}$-Method

Based on the $\mathcal{VU}$-theory mentioned above, the constrained minimization problem of the maximum eigenvalue function has been converted into a convex minimization problem which can be solved by the $\mathcal{U}$-algorithm in [8]. Hence, we apply the $\mathcal{U}$-algorithm in [8], with some appropriate modifications, to solve problem $(\widetilde{P})$.

In this section, some definitions and two quadratic programming subproblems are introduced for ease of understanding.

Given a tolerance $\varepsilon>0$, a prox-parameter $r>0$, and a prox-center $x$, to find an approximation of the proximal point $p_r(x)$, our bundle subroutine accumulates information from the candidates $y_i$, where $g_i\in\partial F_\mu(y_i)$.

Definition 8. Let $x$ be the prox-center, $y_i\in\mathbb{R}^n$, and $g_i\in\partial F_\mu(y_i)$; the linearization error is defined by

$e_i:=F_\mu(x)-F_\mu(y_i)-\langle g_i,x-y_i\rangle.$

Definition 9. Given a positive scalar parameter $r$, the proximal point function depending on $F_\mu$ is defined by

$p_r(x):=\operatorname*{argmin}_{y\in\mathbb{R}^n}\Big\{F_\mu(y)+\frac{r}{2}\|y-x\|^2\Big\}.$
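Definitions 8 and 9 translate directly into code; the following brief sketch (the function names are ours) computes the linearization error and the prox objective whose minimizer over $y$ is $p_r(x)$.

import numpy as np

def linearization_error(F, x, y_i, g_i):
    """e_i = F(x) - F(y_i) - <g_i, x - y_i>; nonnegative when F is convex."""
    return F(x) - F(y_i) - g_i @ (x - y_i)

def prox_objective(F, x, y, r):
    """Objective of Definition 9; its minimizer over y is p_r(x)."""
    return F(y) + 0.5 * r * np.dot(y - x, y - x)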

The first quadratic programming subproblem has the following form and properties; see [9]. The problem

$\min_{(z,y)\in\mathbb{R}\times\mathbb{R}^n}\ z+\frac{r}{2}\|y-x\|^2\quad\text{s.t. } -e_i+\langle g_i,y-x\rangle\le z\ \ \forall i$

has a dual problem

$\min_{\alpha}\ \frac{1}{2r}\Big\|\sum_i\alpha_i g_i\Big\|^2+\sum_i\alpha_i e_i\quad\text{s.t. } \alpha_i\ge 0,\ \sum_i\alpha_i=1.$

Their respective solutions, denoted by $(\hat{z},\hat{y})$ and $\hat{\alpha}$, satisfy

$\hat{y}=x-\frac{1}{r}\hat{G},\qquad \hat{G}:=\sum_i\hat{\alpha}_i g_i,\qquad \hat{z}=-\frac{1}{r}\|\hat{G}\|^2-\sum_i\hat{\alpha}_i e_i,\qquad (28)$

where $\hat{\alpha}_i\ge 0$ and, for all $i$ with $\hat{\alpha}_i>0$, the corresponding constraint is active; that is, $-e_i+\langle g_i,\hat{y}-x\rangle=\hat{z}$. For convenience, in the sequel we denote the output of these calculations by $(\hat{y},\hat{G},\hat{\alpha})$.
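A minimal sketch of the first subproblem follows, solving the dual over the unit simplex with a general-purpose solver and recovering the primal candidate from (28); a practical implementation would use a dedicated QP solver, and the name bundle_qp is ours.

import numpy as np
from scipy.optimize import minimize

def bundle_qp(G, e, x, r):
    """Solve min_a (1/(2r))||G^T a||^2 + e^T a s.t. a >= 0, sum(a) = 1,
    where row i of G is the bundle subgradient g_i and e_i its error.
    Returns y_hat = x - G_hat/r, the aggregate G_hat, and the multipliers.
    """
    k = len(e)
    obj = lambda a: (a @ G) @ (a @ G) / (2.0 * r) + e @ a
    cons = ({'type': 'eq', 'fun': lambda a: a.sum() - 1.0},)
    res = minimize(obj, np.full(k, 1.0 / k), method='SLSQP',
                   bounds=[(0.0, None)] * k, constraints=cons)
    a = res.x
    G_hat = a @ G                   # aggregate subgradient sum_i a_i g_i
    return x - G_hat / r, G_hat, a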

The second quadratic programming subproblem is

$\min_{(z,y)\in\mathbb{R}\times\mathbb{R}^n}\ z+\frac{r}{2}\|y-x\|^2\quad\text{s.t. } \langle g_i,y-x\rangle\le z\ \ \forall i,$

where the linearization error terms are dropped. The above problem has a dual problem without linearization error terms:

$\min_{\alpha}\ \frac{1}{2r}\Big\|\sum_i\alpha_i g_i\Big\|^2\quad\text{s.t. } \alpha_i\ge 0,\ \sum_i\alpha_i=1.$

Similar to (28), the respective solutions, denoted by $(\tilde{z},\tilde{y})$ and $\tilde{\alpha}$, satisfy

$\tilde{y}=x-\frac{1}{r}\tilde{G},\qquad \tilde{G}:=\sum_i\tilde{\alpha}_i g_i,\qquad \tilde{z}=-\frac{1}{r}\|\tilde{G}\|^2.\qquad (32)$

Because the algorithm requires it, the solution of this problem will be applied to get the matrix $\tilde{U}$. Firstly, define an active index set by $\tilde{I}:=\{\,i:\tilde{\alpha}_i>0\,\}$. Then, from (32), the constraints indexed by $\tilde{I}$ are active, so

$\langle g_i-\tilde{G},\tilde{y}-x\rangle=0\qquad (33)$

for all such $i$ and for a fixed $i_0\in\tilde{I}$. Define a full-column-rank matrix $\tilde{V}$ by choosing the largest number of indices $i\in\tilde{I}$ satisfying (33) such that the corresponding vectors $g_i-g_{i_0}$ are linearly independent and by letting these vectors be the columns of $\tilde{V}$. Then let $\tilde{U}$ be a matrix whose columns form an orthogonal basis for the null space of $\tilde{V}^{\top}$, with $\tilde{U}=I$ if $\tilde{V}$ is vacuous.

For convenience, in the sequel we denote the output from these calculations by $(\tilde{y},\tilde{G},\tilde{\alpha},\tilde{U})$.
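The construction of the estimated basis matrices can be sketched as follows, using the active multipliers of the second subproblem; scipy.linalg.null_space furnishes the orthonormal complement, and the function name vu_bases is ours.

import numpy as np
from scipy.linalg import null_space

def vu_bases(G, a, tol=1e-8):
    """Estimate V- and U-basis matrices from bundle subgradients G (rows g_i)
    and dual multipliers a, following the active-index construction above."""
    n = G.shape[1]
    act = np.where(a > tol)[0]
    if len(act) <= 1:
        # vacuous V: the subdifferential estimate is a single point
        return np.zeros((n, 0)), np.eye(n)
    D = (G[act[1:]] - G[act[0]]).T          # columns g_i - g_{i0}, i active
    U = null_space(D.T)                     # orthonormal basis with D^T U = 0
    u_svd, s, _ = np.linalg.svd(D, full_matrices=False)
    V = u_svd[:, s > tol * s[0]]            # independent directions spanning V
    return V, U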

The algorithm, which depends on the above quadratic programming subproblems, is given as follows.

Algorithm 10.
Step 0. Choose positive parameters $\varepsilon$, $m$, and $r$ with $m<1$. Let $x_0$ and $g_0\in\partial F_\mu(x_0)$, respectively, be an initial point and subgradient. Also, let $U_0$ be a matrix with orthogonal $n$-dimensional columns estimating an optimal $\mathcal{U}$-basis. Set $k=0$ and $p_0=x_0$.
Step 1. Stop if $\|g_k\|\le\varepsilon$.
Step 2. Choose an $n_k\times n_k$ positive definite matrix $H_k$, where $n_k$ is the number of columns of $U_k$, the matrix approximating a basis for $\mathcal{U}$. For $v(u)$ which is a minimizer in (20), set $\varphi(u):=F_\mu(p_k+U_ku+\bar{V}v(u))$, where $\varphi$ is a $\mathcal{C}^2$-function satisfying the expansion of Theorem 7 for all $u$ small enough; $\bar{V}$ is a basis matrix of $\mathcal{V}$ and $H_k$ is the approximation of the Hessian of $\varphi$ at $u=0$.
Step 3. Compute an approximate $\mathcal{U}$-Newton step $\Delta u_k$ by solving the linear system $H_k\Delta u=-U_k^{\top}g_k$ and set $x_{k+1}=p_k+U_k\Delta u_k$.
Step 4. Choose $r_k\ge r$, initialize the bundle with the prox-center $x_{k+1}$ and a subgradient $g(x_{k+1})\in\partial F_\mu(x_{k+1})$, and run the following bundle subprocedure: compute recursively

$(\hat{y}_j,\hat{G}_j,\hat{\alpha}_j)\quad\text{and}\quad \delta_j:=\sum_i(\hat{\alpha}_j)_i e_i+\frac{1}{2r_k}\|\hat{G}_j\|^2$

from the two quadratic programming subproblems until satisfying the descent test

$F_\mu(\hat{y}_j)\le F_\mu(x_{k+1})-m\,\delta_j.\qquad (37)$

Then set $(\hat{y}_k,\hat{G}_k,\delta_k,U_{k+1}):=(\hat{y}_j,\hat{G}_j,\delta_j,\tilde{U}_j)$.
Step 5. If (37) held and $F_\mu(\hat{y}_k)\le F_\mu(p_k)-m\,\delta_k$, then declare a successful candidate and set $p_{k+1}:=\hat{y}_k$ and $g_{k+1}:=\hat{G}_k$. Otherwise, execute a line search on the line determined by $p_k$ and $\hat{y}_k$ to find a point $\bar{x}_k$ thereon satisfying $F_\mu(\bar{x}_k)\le F_\mu(p_k)$; reinitialize and rerun the above bundle subroutine, but with prox-center $\bar{x}_k$, to find new values for $(\hat{y}_k,\hat{G}_k,\delta_k,U_{k+1})$; then set $p_{k+1}:=\hat{y}_k$ and $g_{k+1}:=\hat{G}_k$.
Step 6. Replace $k$ by $k+1$ and go to the stopping test in Step 1.
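The following driver loop is only a simplified sketch of Algorithm 10 under the assumptions above: it reuses bundle_qp and vu_bases from the earlier sketches, maintains a two-element bundle, and replaces the $\mathcal{U}$-Newton step of Step 3 with a damped step along the estimated $\mathcal{U}$-subspace, since a reliable $\mathcal{U}$-Hessian model needs more machinery than a few lines; the function names are ours.

import numpy as np

def u_algorithm(F, gradF, x0, r=1.0, m=0.1, eps=1e-6, max_iter=200):
    """Simplified U-algorithm loop (a sketch, not the paper's exact method)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = gradF(x)
        if np.linalg.norm(g) <= eps:        # stopping test (Step 1)
            break
        y = x - g / r                       # trial point seeding the bundle
        gy = gradF(y)
        G = np.vstack([g, gy])
        e = np.array([0.0, F(x) - F(y) - gy @ (x - y)])
        y_hat, G_hat, a = bundle_qp(G, e, x, r)   # prox-point estimate (Step 4)
        V, U = vu_bases(G, a)                     # subspace estimates
        if U.shape[1] > 0:
            cand = y_hat - (U @ (U.T @ G_hat)) / r   # damped U-space step
        else:
            cand = y_hat
        delta = G_hat @ G_hat / r           # crude predicted-decrease measure
        x = cand if F(cand) <= F(x) - m * delta else y_hat  # descent test
    return x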
Next, we will show the convergence of Algorithm 10. From here on, we assume that $\varepsilon=0$ and that Algorithm 10 does not terminate. Assuming that the primal track exists at the initial point, we first show that, if some execution of the bundle procedure in Algorithm 10 continues indefinitely, there is convergence to a minimizer of $F_\mu$.

Theorem 11. If the bundle procedure does not terminate, that is, if (37) never holds, then the sequence of $\hat{y}$-values converges to $p_{r_k}(x_{k+1})$, which minimizes $F_\mu$. If the procedure terminates with $\delta_j=0$, the corresponding $\hat{y}_j$ equals $p_{r_k}(x_{k+1})$ and minimizes $F_\mu$. In both of these cases $0\in\partial F_\mu(p_{r_k}(x_{k+1}))$.

Proof. The recursion in the bundle subprocedure, replacing one candidate by the next, satisfies the conditions required in [9]. By the corresponding proposition in [9], if this procedure does not terminate, it generates an infinite sequence of $\delta$-values converging to zero. Since (37) does not hold, the sequence of linearization errors also converges to 0. Thus, by the corresponding lemma in [8] and the continuity of $F_\mu$, we can get $\hat{y}_j\to p_{r_k}(x_{k+1})$ with $F_\mu(\hat{y}_j)\to F_\mu(p_{r_k}(x_{k+1}))$. The termination case with $\delta_j=0$ follows in a similar manner, since (37) implies $\hat{y}_j=p_{r_k}(x_{k+1})$ in this case. In either case, by the minimality of $p_{r_k}(x_{k+1})$ in Definition 9, $0\in\partial F_\mu(p_{r_k}(x_{k+1}))+r_k(p_{r_k}(x_{k+1})-x_{k+1})$. From the corresponding result in [8], $p_{r_k}(x_{k+1})=x_{k+1}$, and the final result follows, since then $0\in\partial F_\mu(p_{r_k}(x_{k+1}))$.

The next theorem shows minimizing convergence from any initial point, without assuming the existence of a primal track. Here we assume that all executions of the bundle procedure terminate.

Theorem 12. Suppose that the algorithm sequence of prox-parameters $\{r_k\}$ is bounded above by $\bar{r}$. Then the following hold:
(i) The sequence $\{F_\mu(p_k)\}$ is decreasing, and either $F_\mu(p_k)\to-\infty$ or $\{\delta_k\}$ and $\{\hat{G}_k\}$ both converge to 0.
(ii) If $F_\mu$ is bounded from below, then any accumulation point of $\{p_k\}$ minimizes $F_\mu$.

Proof. (i) Since $p_{k+1}=\hat{y}_k$, whether or not $x_{k+1}$ is a successful candidate, the inequality

$F_\mu(p_{k+1})\le F_\mu(p_k)-m\,\delta_k\qquad (41)$

holds. Equation (41) implies that $\{F_\mu(p_k)\}$ is decreasing. Suppose $F_\mu(p_k)\not\to-\infty$. Then summing (41) over $k$ and using the fact that $\delta_k\ge 0$ for all $k$ imply that $\delta_k\to 0$. From the corresponding lemma in [8] and (37) with $r_k\le\bar{r}$, we have

$F_\mu(y)\ge F_\mu(p_{k+1})+\langle\hat{G}_k,y-p_{k+1}\rangle-\delta_k\quad\forall y\in\mathbb{R}^n\qquad (42)$

and

$\delta_k\ge\frac{1}{2\bar{r}}\|\hat{G}_k\|^2.\qquad (43)$

Then (43) with $\delta_k\to 0$ and $\bar{r}<\infty$ implies that $\hat{G}_k\to 0$, which establishes (i).
(ii) Now suppose $F_\mu$ is bounded below and $\bar{p}$ is any accumulation point of $\{p_k\}$. Then, because $\delta_k$ and $\hat{G}_k$ converge to 0 by item (i), (42) together with the continuity of $F_\mu$ implies that $F_\mu(y)\ge F_\mu(\bar{p})$ for all $y$, and (ii) is proved.

In order to obtain convergence of the whole sequence $\{p_k\}$, we need the concept of a strong minimizer.

Definition 13. We say that $\bar{x}$ is a strong minimizer of $F_\mu$ if $0\in\operatorname{ri}\partial F_\mu(\bar{x})$ and the corresponding $\mathcal{U}$-Lagrangian $L_U(\cdot;0)$ has a Hessian at $u=0$ that is positive definite.

Corollary 14. Suppose that $\bar{x}$ is a strong minimizer of $F_\mu$, as in Definition 13, and that the algorithm sequence of prox-parameters $\{r_k\}$ is bounded above by $\bar{r}$. Then $\{F_\mu(p_k)\}$ converges to $F_\mu(\bar{x})$. If, in addition, the sequence $\{p_k\}$ is bounded, then $\{p_k\}$ and $\{\hat{y}_k\}$ converge to $\bar{x}$ and $\{\hat{G}_k\}$ converges to $0$.

Proof. The proof follows from the corresponding results in [8] when $F_\mu$, $\partial F_\mu$, and $L_U$ take the place of $f$, $\partial f$, and the $\mathcal{U}$-Lagrangian there, respectively.

5. Conclusions

The principal result of this paper is a $\mathcal{U}$-algorithm for solving the constrained minimization problem of maximum eigenvalue functions. The innovative point is the conversion of the constrained problem into an approximate unconstrained problem. By using the smooth convex approximation of the maximum eigenvalue function, the latter problem can be solved by the $\mathcal{U}$-algorithm. Although this method relies on some assumptions, it enriches the ways of dealing with the constrained minimization problem of maximum eigenvalue functions.

Additional Points

Wei Wang (1960–) is a Professor in the School of Mathematics at Liaoning Normal University.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (no. 11671184, no. 11171138, and no. 11671183).