Abstract

We introduce the so-called semidefinite quasiconvex minimization problem. We derive new global optimality conditions for this problem. Based on these conditions, we construct an algorithm that generates a sequence of local minimizers converging to a global solution.

1. Introduction

Semidefinite linear programming can be regarded as an extension of linear programming and solves the following problem:
$\min \langle C, X\rangle$ subject to $\langle A_i, X\rangle = b_i$, $i = 1, \dots, m$, and $X \succeq 0$,
where $X \in \mathbb{R}^{n\times n}$ is a symmetric matrix of variables, $C, A_1, \dots, A_m$ are given symmetric matrices, and $b \in \mathbb{R}^m$. Here $X \succeq 0$ is notation for "$X$ is positive semidefinite", and $\|\cdot\|_F$ denotes the Frobenius norm, $\|X\|_F = \sqrt{\langle X, X\rangle}$.
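
To see concretely how this extends linear programming (a standard observation added here for illustration, not part of the original text), suppose all data matrices are diagonal and the variable $X$ is restricted to be diagonal as well:

$$C = \operatorname{diag}(c),\quad A_i = \operatorname{diag}(a_i),\quad X = \operatorname{diag}(x) \;\Longrightarrow\; \langle C, X\rangle = c^{\top}x,\quad \langle A_i, X\rangle = a_i^{\top}x,\quad X \succeq 0 \iff x \ge 0,$$

so the problem collapses to the ordinary linear program $\min\{c^{\top}x : a_i^{\top}x = b_i,\ i = 1, \dots, m,\ x \ge 0\}$.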

Semidefinite programming finds many applications in engineering and optimization [1]. Most interior-point methods for linear programming have been generalized to semidefinite convex programming [13]. Many works are devoted to the semidefinite convex programming problem, but less attention so far has been paid to the semidefinite quasiconvex minimization problem.

The aim of this paper is to develop theory and algorithms for semidefinite quasiconvex programming. The paper is organized as follows. Section 2 is devoted to the formulation of semidefinite quasiconvex programming and its global optimality conditions. In Section 3, we consider an approximation of the level set of the objective function and its properties.

2. Problem Definition and Optimality Conditions

Let $X = (x_{ij})$ and $Y$ be matrices in $\mathbb{R}^{n\times n}$, and consider a scalar matrix function $f : \mathbb{R}^{n\times n} \to \mathbb{R}$.

Definition 1. Let $f(X)$ be a differentiable function of the matrix $X = (x_{ij}) \in \mathbb{R}^{n\times n}$. Then the gradient of $f$ at $X$ is the matrix $f'(X) = \bigl(\partial f(X)/\partial x_{ij}\bigr)_{i,j=1,\dots,n}$.

Introduce the Frobenius scalar product of two matrices $A, B \in \mathbb{R}^{n\times n}$ as follows: $\langle A, B\rangle = \operatorname{tr}(A^{\top}B) = \sum_{i,j=1}^{n} a_{ij}b_{ij}$. If $f$ is differentiable, then it can be checked that $f(X + H) = f(X) + \langle f'(X), H\rangle + o(\|H\|_F)$.
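
As a quick numerical illustration of the scalar product and the expansion above (our sketch, using the assumed example $f(X) = \tfrac{1}{2}\|X\|_F^2$, for which $f'(X) = X$), one can check the first-order formula with NumPy:

import numpy as np

def frob(A, B):
    # Frobenius scalar product <A, B> = trace(A^T B) = sum of elementwise products
    return np.trace(A.T @ B)

def f(X):
    # example objective: f(X) = 0.5 * ||X||_F^2, whose matrix gradient is f'(X) = X
    return 0.5 * frob(X, X)

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
H = 1e-6 * rng.standard_normal((3, 3))

# first-order expansion: f(X + H) - f(X) should be close to <f'(X), H>
lhs = f(X + H) - f(X)
rhs = frob(X, H)          # f'(X) = X for this particular f
print(abs(lhs - rhs))     # of order ||H||_F^2, consistent with the o(||H||_F) term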

Definition 2. A set $D \subset \mathbb{R}^{n\times n}$ is convex if $\alpha X + (1-\alpha)Y \in D$ for all $X, Y \in D$ and $\alpha \in [0, 1]$.

Definition 3. The function $f : D \to \mathbb{R}$ is said to be quasiconvex on $D$ if $f(\alpha X + (1-\alpha)Y) \le \max\{f(X), f(Y)\}$ for all $X, Y \in D$ and $\alpha \in [0, 1]$.

The well-known property of a convex function [3] can be easily generalized as follows.

Lemma 4. A function $f : D \to \mathbb{R}$ is quasiconvex if and only if the level set $L_c(f) = \{X \in D : f(X) \le c\}$ is convex for all $c \in \mathbb{R}$.

Proof
Necessity. Suppose that $c \in \mathbb{R}$ is an arbitrary number and $X, Y \in L_c(f)$. By the definition of quasiconvexity, we have $f(\alpha X + (1-\alpha)Y) \le \max\{f(X), f(Y)\} \le c$ for all $\alpha \in [0,1]$, which means that the set $L_c(f)$ is convex.
Sufficiency. Let $L_c(f)$ be a convex set for all $c \in \mathbb{R}$. For arbitrary $X, Y \in D$, define $c = \max\{f(X), f(Y)\}$. Then $X \in L_c(f)$ and $Y \in L_c(f)$. Consequently, $\alpha X + (1-\alpha)Y \in L_c(f)$, that is, $f(\alpha X + (1-\alpha)Y) \le \max\{f(X), f(Y)\}$, for any $\alpha \in [0,1]$. This completes the proof.
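
To make Definition 3 and Lemma 4 concrete, the following sketch (our illustration; the linear-fractional objective $f(X) = \langle C, X\rangle / \operatorname{tr}(X)$ on positive definite matrices is an assumed example of a quasiconvex, generally nonconvex function) samples points and verifies the defining inequality numerically:

import numpy as np

rng = np.random.default_rng(1)
n = 3
C = rng.standard_normal((n, n))

def frob(P, Q):
    return np.trace(P.T @ Q)

def f(X):
    # linear-fractional matrix function; quasiconvex where the denominator tr(X) > 0
    return frob(C, X) / np.trace(X)

def random_pd():
    # random symmetric positive definite matrix, so tr(X) > 0
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

worst = 0.0
for _ in range(1000):
    X, Y = random_pd(), random_pd()
    a = rng.uniform()
    # Definition 3: f(aX + (1-a)Y) <= max{f(X), f(Y)}
    worst = max(worst, f(a * X + (1 - a) * Y) - max(f(X), f(Y)))

print(worst)  # non-positive up to roundoff

Equivalently, each level set $\{X : f(X) \le c\}$ intersected with the positive definite cone is the intersection of a half-space with a convex cone, which is exactly the content of Lemma 4 for this example.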

Lemma 5. Let $f : D \to \mathbb{R}$ be a quasiconvex and differentiable function. Then the inequality $f(Y) \le f(X)$ for $X, Y \in D$ implies that $\langle f'(X), Y - X\rangle \le 0$, where $f'(X) = \bigl(\partial f(X)/\partial x_{ij}\bigr)$ and $\langle \cdot, \cdot\rangle$ denotes the Frobenius scalar product of two matrices.

Proof. Since $f$ is quasiconvex, $f(X + \alpha(Y - X)) = f((1-\alpha)X + \alpha Y) \le \max\{f(X), f(Y)\} = f(X)$ for all $\alpha \in [0,1]$ and all $X, Y \in D$ such that $f(Y) \le f(X)$. By Taylor's formula, there is a neighborhood of the point $X$ on which $f(X + \alpha(Y - X)) = f(X) + \alpha\langle f'(X), Y - X\rangle + o(\alpha)$. From the fact that $f(X + \alpha(Y - X)) - f(X) \le 0$, we obtain $\alpha\langle f'(X), Y - X\rangle + o(\alpha) \le 0$; dividing by $\alpha > 0$ and letting $\alpha \to 0$ gives $\langle f'(X), Y - X\rangle \le 0$, which completes the proof.
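
Continuing the same assumed linear-fractional example (our illustration, not from the source), the quotient rule gives $f'(X) = \bigl(C\operatorname{tr}(X) - \langle C, X\rangle I\bigr)/\operatorname{tr}(X)^2$, and the inequality of Lemma 5 can be checked numerically:

import numpy as np

rng = np.random.default_rng(2)
n = 3
C = rng.standard_normal((n, n))

frob = lambda P, Q: np.trace(P.T @ Q)
f = lambda X: frob(C, X) / np.trace(X)
grad_f = lambda X: (C * np.trace(X) - frob(C, X) * np.eye(n)) / np.trace(X) ** 2

def random_pd():
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

# Lemma 5: f(Y) <= f(X) implies <f'(X), Y - X> <= 0 for a quasiconvex differentiable f
worst = 0.0
for _ in range(1000):
    X, Y = random_pd(), random_pd()
    if f(Y) <= f(X):
        worst = max(worst, frob(grad_f(X), Y - X))

print(worst)  # non-positive up to roundoff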

Consider the problem of minimizing a differentiable quasiconvex matrix function
$f(X) \to \min$,  (12)
subject to the constraints
$g_i(X) \le 0, \quad i = 1, \dots, m$,  (13)
$X \succeq 0$,  (14)
where $g_i : \mathbb{R}^{n\times n} \to \mathbb{R}$ are scalar functions and the feasible points $X$ are positive semidefinite matrices, $i = 1, \dots, m$.

We call problem (12)–(14) the semidefinite quasiconvex minimization problem.

Denote by $D$ the constraint set of the problem:
$D = \{X \in \mathbb{R}^{n\times n} : g_i(X) \le 0,\ i = 1, \dots, m,\ X \succeq 0\}$.  (15)
Then problem (12)–(14) reduces to
$f(X) \to \min, \quad X \in D$.  (16)
In general, the set $D$ is nonconvex. Problem (16) is nonconvex and belongs to a class of global optimization problems in Banach space.
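
As a small illustration of how membership in the constraint set (15) can be tested (our sketch; the constraint functions below are hypothetical and chosen only for demonstration), one evaluates the scalar constraints and the smallest eigenvalue of $X$:

import numpy as np

def in_D(X, constraints, tol=1e-9):
    # X belongs to D iff g_i(X) <= 0 for all i and X is (numerically) positive semidefinite
    if any(g(X) > tol for g in constraints):
        return False
    return np.linalg.eigvalsh((X + X.T) / 2).min() >= -tol

# hypothetical constraint functions g_1, g_2 used only for this illustration
constraints = [
    lambda X: np.trace(X) - 5.0,          # g_1(X) = tr(X) - 5 <= 0
    lambda X: 1.0 - np.linalg.norm(X),    # g_2(X) = 1 - ||X||_F <= 0 (a nonconvex constraint)
]

X = np.eye(3)
print(in_D(X, constraints))  # True: tr(X) = 3 <= 5, ||X||_F = sqrt(3) >= 1, and X is psd

The second constraint keeps points outside a Frobenius ball, which is one simple way the set $D$ can fail to be convex.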

We formulate a new global optimality condition for problem (16) in the following. For this purpose, we introduce the level set of the function $f$ at a point $Z \in D$:
$E(Z) = \{Y \in \mathbb{R}^{n\times n} : f(Y) \le f(Z)\}$.  (17)
Then global optimality conditions for problem (16) can be formulated as follows.

Theorem 6. Let $Z \in D$ be a solution of problem (16). Then
$\langle f'(X), Y - X\rangle \le 0$ for all $Y \in E(Z)$ and $X \in D$,  (18)
where $E(Z)$ is the level set (17). If, in addition, condition (19) holds for all $Y \in E(Z)$ and $X \in D$, then condition (18) becomes sufficient.

Proof
Necessity. Assume that $Z$ is a solution of problem (16). Let $Y \in E(Z)$ and $X \in D$. Then we have $f(Y) \le f(Z) \le f(X)$, and Lemma 5 implies $\langle f'(X), Y - X\rangle \le 0$.
Sufficiency. Suppose, on the contrary, that $Z$ is not a solution of (16). Then there exists a $U \in D$ such that $f(U) < f(Z)$. Construct a ray $X(\alpha)$ for $\alpha \ge 0$ defined by $X(\alpha) = U + \alpha f'(U)$.
We claim that $f(X(\alpha)) > f(U)$ holds for all positive $\alpha$. By Taylor's formula, we have $f(X(\alpha)) = f(U) + \alpha\|f'(U)\|_F^2 + o(\alpha)$ for small $\alpha$, where $\|\cdot\|_F$ denotes the Frobenius norm. Therefore, there exists an $\alpha_0 > 0$ such that $f(X(\alpha)) > f(U)$ holds for all $\alpha \in (0, \alpha_0]$. Hence, by Lemma 5, we have $\langle f'(X(\alpha)), U - X(\alpha)\rangle \le 0$, since $f(U) < f(X(\alpha))$ and $f$ is quasiconvex by assumption. Note that for all $\alpha > 0$ we also have $f'(X(\alpha)) \ne 0$; for otherwise, Lemma 5 would lead to a contradiction with the assumption. Moreover, we can show that $f(X(\alpha))$ is increasing in $\alpha$: if this were not the case for some pair $\alpha_1 < \alpha_2$, then Lemma 5 would again lead to a contradiction. This proves our claim for all positive $\alpha$.
Now it is obvious that the function $\varphi(\alpha) = f(X(\alpha))$ is continuous on $[0, +\infty)$. Also, $f(U) < f(Z)$ together with assumption (19) implies that there exists an $\bar{\alpha} > 0$ such that $f(X(\bar{\alpha})) > f(Z)$. Using the continuity of $\varphi$ and the inequalities $f(U) < f(Z) < f(X(\bar{\alpha}))$, there exists an $\alpha^* \in (0, \bar{\alpha})$ such that $f(X(\alpha^*)) = f(Z)$, which means that $X(\alpha^*) \in E(Z)$. On the other hand, we have $X(\alpha^*) - U = \alpha^* f'(U)$. Thus we get $\langle f'(U), X(\alpha^*) - U\rangle = \alpha^*\|f'(U)\|_F^2 > 0$, which contradicts (18). This means that $Z$ must be a solution of (16).

Example 7. Consider the following problem:

Example 8. Consider the fractional programming problem
$f(X) = g(X)/h(X) \to \min, \quad X \in D$,  (27)
where $g$ is convex and differentiable on $\mathbb{R}^{n\times n}$ and $h$ is concave and differentiable on $\mathbb{R}^{n\times n}$. Suppose that $g$ and $h$ are positive on a ball containing the subset $D$; that is, $g(X) > 0$ and $h(X) > 0$ for all $X$ in this ball. We call this problem the mixed fractional minimization problem. By Lemma 4, we can easily show that $f$ is quasiconvex. Hence, the optimality condition (18) at a solution $Z$ of (27) is as follows:
$\langle g'(X)h(X) - g(X)h'(X),\, Y - X\rangle \le 0$ for all $Y \in E(Z)$ and $X \in D$.
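
The quasiconvexity claim in Example 8 can be verified through Lemma 4; the following short argument is our sketch of the standard reasoning under the stated positivity assumptions:

$$\{X : g(X)/h(X) \le c\} \;=\; \{X : g(X) - c\,h(X) \le 0\} \qquad \text{wherever } h(X) > 0.$$

For $c \ge 0$, the function $g - c\,h$ is convex (a convex function plus $c$ times the convex function $-h$), so this level set is convex; for $c < 0$, it is empty because $g > 0$. By Lemma 4, $f = g/h$ is therefore quasiconvex on the ball where $g$ and $h$ are positive.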

3. An Algorithm for the Convex Minimization Problem

We consider the quasiconvex minimization problem
$f(X) \to \min, \quad X \in D$,  (30)
as a special case of problem (16), where $f$ is strongly convex (and hence quasiconvex) and continuously differentiable and $D$ is an arbitrary compact set in $\mathbb{R}^{n\times n}$. In this case, we can weaken condition (19) as shown in the next theorem.

Theorem 9. Let $Z \in D$ be a solution of problem (30). Then
$\langle f'(X), Y - X\rangle \le 0$ for all $Y \in E(Z)$ and $X \in D$.  (31)
If, in addition, condition (32) holds, then condition (31) is also sufficient.

Proof
Necessity. Assume that $Z$ is a solution of problem (30). Consider $Y \in E(Z)$ and $X \in D$, so that $f(Y) \le f(Z) \le f(X)$. Then, by the convexity of $f$, we have $\langle f'(X), Y - X\rangle \le f(Y) - f(X) \le 0$.
Sufficiency. Let us prove the assertion by contradiction. Assume that (31) holds and there exists a point such that Clearly, by assumption (32). Now define as follows for : Then, by the convexity of , we have which implies Then find such that that is, Thus we get Define a function as where . It is clear that is continuous on . Note that and . There are two cases with respect to the values of which we should consider.
Case a.   (or ), then contradicting condition (31).
Case b.   and . Since is continuous, there exists a point such that (or ). Then we have again contradicting (31).
Thus, in both cases, we find contradictions, proving the theorem.

Now, using the level set $E(\cdot)$ and the scalar product $\langle f'(X), Y - X\rangle$, we reformulate Theorem 9 in terms of the function $\theta$ defined as follows:
$\theta(W) = \max\{\langle f'(X), Y - X\rangle : Y \in E(W),\ X \in D\}, \quad W \in D.$

Theorem 10. Assume that $f$ is strongly convex and continuously differentiable and $D$ is a compact set in $\mathbb{R}^{n\times n}$. Let $Z \in D$ and let condition (32) hold. If $\theta(Z) = 0$, then the point $Z$ is a solution to problem (30).

Proof. This is an obvious consequence of the following relations: $\langle f'(X), Y - X\rangle \le \theta(Z) = 0$, which are fulfilled for all $Y \in E(Z)$ and $X \in D$; that is, condition (31) holds, and Theorem 9 applies.
Now we are ready to present an algorithm for solving problem (30). We also suppose that one can efficiently solve the problem of computing $\theta(W)$ for any given $W \in D$.
Algorithm MIN
Input. A strongly convex, continuously differentiable function $f$ and a compact set $D \subset \mathbb{R}^{n\times n}$.
Output. A solution to the minimization problem (30).

Step 1. Choose a feasible point $X^0 \in D$. Set $k := 0$.

Step 2. Solve the following problem:
$\langle f'(X), Y - X\rangle \to \max, \quad Y \in E(X^k),\ X \in D.$
Let $\theta_k$ be the optimal value of this problem (i.e., $\theta_k = \theta(X^k)$), and let the pair $(Y^k, U^k)$ realize it (i.e., $\theta_k = \langle f'(U^k), Y^k - U^k\rangle$).

Step 3. If $\theta_k = 0$, then output $X^k$ and terminate. Otherwise, set $X^{k+1} := U^k$, $k := k + 1$, and return to Step 2.
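
The following toy sketch of Algorithm MIN is our illustration under simplifying assumptions, not the authors' implementation: we take $f(X) = \tfrac{1}{2}\|X - C\|_F^2$ (so $f'(X) = X - C$ and $E(W)$ is the Frobenius ball of radius $\|W - C\|_F$ around $C$, which makes the inner maximization over $Y$ available in closed form), and we replace the compact set $D$ by a finite sample of positive semidefinite matrices so that Step 2 can be solved by enumeration.

import numpy as np

rng = np.random.default_rng(3)
n = 3

fnorm = np.linalg.norm  # Frobenius norm for matrices

# assumed toy objective: f(X) = 0.5 * ||X - C||_F^2, with gradient f'(X) = X - C
A0 = rng.standard_normal((n, n))
C = (A0 + A0.T) / 2

def f(X):
    return 0.5 * fnorm(X - C) ** 2

# compact set D, approximated here by a finite sample of random psd matrices
def random_psd():
    A = rng.standard_normal((n, n))
    return A @ A.T

D = [random_psd() for _ in range(200)]

def inner_max(X, r):
    # max over Y in E(W) = {Y : ||Y - C||_F <= r} of <f'(X), Y - X>;
    # for this particular f the maximum equals ||X - C||_F * (r - ||X - C||_F)
    g = fnorm(X - C)
    return g * (r - g)

def algorithm_min(X0, tol=1e-12, max_iter=1000):
    Xk = X0
    for k in range(max_iter):
        r = fnorm(Xk - C)                       # radius of the level set E(X^k)
        vals = [inner_max(X, r) for X in D]     # Step 2: maximize over X in D
        theta_k, Uk = max(zip(vals, D), key=lambda p: p[0])
        if theta_k <= tol:                      # Step 3: stopping criterion theta_k = 0
            return Xk, k
        Xk = Uk                                 # X^{k+1} := U^k
    return Xk, max_iter

X_star, iters = algorithm_min(D[0])
print(iters, f(X_star), min(f(X) for X in D))   # f(X_star) equals the minimum over D

In this setting each accepted step moves to a sample point strictly closer to $C$, so the iterates reach the point of $D$ nearest to $C$, in line with Theorem 11.
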
The convergence of this algorithm is based on the following theorem.

Theorem 11. Assume that $f$ is strongly convex and continuously differentiable and $D$ is a compact set in $\mathbb{R}^{n\times n}$. Let condition (32) hold. Then the sequence $\{X^k\}$ generated by Algorithm MIN is a minimizing sequence for problem (30); that is, $\lim_{k\to\infty} f(X^k) = \min_{X \in D} f(X)$, and every accumulation point of the sequence $\{X^k\}$ is a global minimizer of (30).

Proof. From the construction of Algorithm MIN, we have $X^{k+1} = U^k$ and $\theta_k = \langle f'(X^{k+1}), Y^k - X^{k+1}\rangle$ for all $k$, where $Y^k \in E(X^k)$. Clearly, $\theta_k \ge 0$ for all $k$ by construction. Also, note that for all $Y \in E(X^k)$ and $X \in D$, we have $\langle f'(X), Y - X\rangle \le \theta_k$. If there exists a $k$ such that $\theta_k = 0$, then, by Theorem 10, $X^k$ is a solution to problem (30), and in this case the proof is complete. Therefore, without loss of generality, we can assume that $\theta_k > 0$ for all $k$ and prove the theorem by contradiction. If the assertion is false, that is, $\{X^k\}$ is not a minimizing sequence for problem (30), then the following inequality holds: $\lim_{k\to\infty} f(X^k) > \min_{X \in D} f(X)$ (49). By the definition of $\theta_k$ and Algorithm MIN, we have $\theta_k > 0$ and $f(Y^k) \le f(X^k)$. The convexity of $f$ implies that $f(Y^k) \ge f(X^{k+1}) + \langle f'(X^{k+1}), Y^k - X^{k+1}\rangle = f(X^{k+1}) + \theta_k$. Hence, we obtain $f(X^{k+1}) < f(Y^k) \le f(X^k)$ for all $k$, and the sequence $\{f(X^k)\}$ is strictly decreasing. Since the sequence is bounded from below by $\min_{X \in D} f(X)$, it has a limit and satisfies (50). Then, from (49) and (50), we obtain (51). From (51) we have for all . Now define as follows: Then, by the convexity of , we have which implies Choose such that that is, Define a function as where . It is clear that is continuous on . Note that and . Since is continuous, there exists a point such that ; that is, and . Also, note that Taking into account , we have Since , this implies The continuity of on yields which is a contradiction to (49).
Consequently, $\{X^k\}$ is a minimizing sequence for problem (30). Since $D$ is compact, we can always select a convergent subsequence of $\{X^k\}$, say $X^{k_j} \to \bar{X} \in D$. Then, together with (63), we obtain $f(\bar{X}) = \min_{X \in D} f(X)$, which completes the proof.

4. Numerical Experiments

The proposed algorithm has been tested on the following numerical examples.

Problem 12. where The global solution is

Problem 13. The global solution is