Journal of Optimization
Volume 2015, Article ID 790451, 16 pages
http://dx.doi.org/10.1155/2015/790451
Research Article

Constraint Consensus Methods for Finding Strictly Feasible Points of Linear Matrix Inequalities

Department of Mathematics and Statistics, Northern Arizona University, Flagstaff, AZ 86011-5717, USA

Received 25 July 2014; Revised 31 October 2014; Accepted 6 November 2014

Academic Editor: Manlio Gaudioso

Copyright © 2015 Shafiu Jibrin and James W. Swift. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We give algorithms for solving the strict feasibility problem for linear matrix inequalities. These algorithms are based on John Chinneck’s constraint consensus methods, in particular, the method of his original paper and the modified DBmax constraint consensus method from his paper with Ibrahim. Our algorithms start with one of these methods as “Phase 1.” Constraint consensus methods work for any differentiable constraints, but we take advantage of the structure of linear matrix inequalities. In particular, for linear matrix inequalities, the crossing points of each constraint boundary with the consensus ray can be calculated. In this way we check for strictly feasible points in “Phase 2” of our algorithms. We present four different algorithms, depending on whether the original (basic) or DBmax constraint consensus vector is used in Phase 1 and, independently, in Phase 2. We present results of numerical experiments that compare the four algorithms. The evidence suggests that one of our algorithms is the best, although none of them are guaranteed to find a strictly feasible point after a given number of iterations. We also give results of numerical experiments indicating that our best method compares favorably to a new variant of the method of alternating projections.

1. Introduction

We consider the strict feasibility problem for linear matrix inequality (LMI) constraints. In particular, we seek a point in the interior of the region defined by LMI constraints of the form $A^{(j)}(x) = A_0^{(j)} + x_1 A_1^{(j)} + \cdots + x_n A_n^{(j)} \succeq 0$, $j = 1, \dots, q$, where the $A_i^{(j)}$ are symmetric matrices. The partial ordering $A \succeq 0$ means $A$ is positive semidefinite. In semidefinite programming problems, a function is optimized over a system of LMI constraints. For more information on semidefinite programming including applications, refer to [1, 2]. Consider the feasible region $\mathcal{R} = \{x \in \mathbb{R}^n : A^{(j)}(x) \succeq 0 \text{ for } j = 1, \dots, q\}$.

We assume that $\mathcal{R}$ has a nonempty interior. In this paper, we are interested in finding strictly feasible points, those in the interior of $\mathcal{R}$. Finding strictly feasible points is an important problem in interior point methods [3, 4]. Some methods need a starting point in the interior to proceed to optimality.

A projective method for solving the strict feasibility problem in the case of linear matrix inequalities (LMIs) was proposed in [5]. This method is based on Dikin ellipsoids and successive projections onto a linear subspace. The cost per iteration in this approach is high because each iteration solves a least-squares problem. A different approach based on the method of alternating projections was given in [6]. The iterations in this method are also costly as they require eigenvalue-eigenvector decompositions. The approach described in [7] solves the LMI feasibility problem by computing a minimum-volume ellipsoid at each iteration. The convergence rate is slow, so this method is also expensive. In [8], several iterative methods were given for the convex feasibility problem. These techniques use an orthogonal or a subgradient projection at each iteration.

We study the original (basic) constraint consensus method and the DBmax constraint consensus method developed by Chinneck [3] and Ibrahim and Chinneck [9] for general nonlinear constraints to find near-feasible points. These methods also use gradient projection at each iteration to find a consensus vector that moves a point iteratively towards the feasible region starting from an infeasible point. The main cost in these methods is computing gradients, so they are cheaper than the methods described in [5–7]. While the goal in [3, 9] is finding near-feasible points, ours is finding interior (strictly) feasible points. We apply and combine these methods to handle LMI constraints and find interior feasible points of $\mathcal{R}$ in two phases. Phase 1 is simply the original constraint consensus method or the DBmax constraint consensus method to find a near-feasible point.

In Phase 2, starting with the near-feasible point found in Phase 1, the algorithms use the concepts of crossing points and binary words to obtain a point in a most-satisfied interval along the ray of the consensus vector at each iteration. LMI constraints have the advantage that their crossing points are computable, whereas this is not possible for general nonlinear constraints. The goal in Phase 2 is to find an interior point of $\mathcal{R}$. We give four variations of the new algorithms: the Original-DBmax (OD), DBmax-DBmax (DD), Original-Original (OO), and DBmax-Original (DO) constraint consensus methods. The OD constraint consensus method uses the original constraint consensus method in Phase 1 and the DBmax search directions in Phase 2. The description of the other three methods is similar.

The paper is organized as follows. Section 2 gives a review of two of the constraint consensus methods introduced in [3, 9] and the computation of crossing points for LMIs. In Section 3 we apply the constraint consensus methods to a single LMI and present some theoretical results. For example, we provide necessary and sufficient conditions for the feasibility vector of a constraint to move the point to the boundary of that constraint. In Section 4 we give our four algorithms. Section 5 tests our algorithms on known benchmark problems. We also compare the four methods on a variety of randomly generated test problems. The results show that the DO constraint consensus method outperforms our three other algorithms; it takes fewer total iterations and less computing time and has the most success in finding strictly feasible points. We compared DO with the method of alternating projections and found DO to outperform it on average, especially on problems with a large number of LMI constraints relative to the number of variables. The concluding section has comments on how our algorithms could be improved and extended. While we focus on LMI constraints, our methods are applicable to other constraints, including nonconvex types, provided the gradients and the crossing points are computable.

2. Background Material

This section discusses background material that will be needed in the next section. It will include discussions on constraint consensus methods, binary words, and crossing points for nonlinear constraints.

We consider a system of inequality constraints of the form $c_k(x) \ge 0$, $k = 1, \dots, q$. For each $k$, $c_k$ is a nonlinear or linear function mapping $\mathbb{R}^n$ to $\mathbb{R}$.

2.1. The Original Constraint Consensus and DBmax Constraint Consensus Methods

In this subsection, we describe the original constraint consensus and DBmax constraint consensus methods for general nonlinear constraints.

The original (basic) constraint consensus method was developed by Chinneck in [3] to find a near-feasible point for the given system. The method starts from an infeasible point $x$. The method associates each constraint $c_k$ with a feasibility vector $fv_k$, defined by
$$fv_k = \frac{v_k}{\|\nabla c_k(x)\|^2}\,\nabla c_k(x), \qquad (3)$$
where $v_k = \max(0, -c_k(x))$ is the constraint violation at $x$ and $\nabla c_k(x)$ is the gradient of $c_k$ at the point $x$. We assume that $\nabla c_k(x)$ exists and that $\nabla c_k(x) \neq 0$ if $x$ is infeasible. If $v_k = 0$ and $\nabla c_k(x) = 0$, then we define $fv_k = 0$ to resolve the ambiguity in (3). We will show in Section 3 that if $c_k$ is linear, then $x + fv_k$ is the point on the boundary of the constraint that is closest to $x$. The length of the feasibility vector, $\|fv_k\|$, is called the feasibility distance, and it is an estimate of the distance to the boundary. We define $x$ to be near-feasible with respect to $c_k$ if $\|fv_k\| \le \alpha$, where $\alpha$ is a preassigned feasibility tolerance. We say that $x$ is strictly feasible with respect to $c_k$ if $c_k(x) > 0$. In summary, with respect to the constraint $c_k$, we say that (i) $x$ is near-feasible if $\|fv_k\| \le \alpha$; (ii) $x$ is feasible if $c_k(x) \ge 0$ or, equivalently, $\|fv_k\| = 0$; (iii) $x$ is strictly feasible if $c_k(x) > 0$.
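As a concrete illustration of this definition, the feasibility vector of a single constraint can be computed as follows. This is a Python sketch (the paper's experiments were coded in MATLAB); the function name and signature are ours.

```python
import numpy as np

def feasibility_vector(c_x, grad_c):
    """Feasibility vector for a constraint c(x) >= 0.

    c_x    : value of the constraint function at x (negative when violated)
    grad_c : gradient of c at x
    Returns (v / ||grad_c||^2) * grad_c, where v = max(0, -c(x)) is the
    constraint violation.  For a linear constraint this moves x exactly
    to the closest point on the boundary c = 0.
    """
    grad_c = np.asarray(grad_c, dtype=float)
    v = max(0.0, -c_x)                      # violation; zero if satisfied
    g2 = float(grad_c @ grad_c)
    if g2 == 0.0:
        return np.zeros_like(grad_c)        # resolve the 0/0 ambiguity
    return (v / g2) * grad_c
```

For example, for the linear constraint $3x_1 + 4x_2 - 25 \ge 0$ at the origin, the feasibility vector is $(3, 4)$, which lands exactly on the boundary.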

The feasibility vectors at the initial point $x^{(0)}$ for all the constraints are combined to give a single consensus vector $t$. In the original constraint consensus method, this consensus vector is the average of all the feasibility vectors with length $\|fv_k\| > \alpha$. However, the $i$th component of the consensus vector is averaged over only the subset of these constraints that actually include the $i$th variable. The first iterate is given by $x^{(1)} = x^{(0)} + t$, and the process is repeated with $x^{(1)}$ in place of $x^{(0)}$. The algorithm terminates if the number (NINF) of constraints that violate the near-feasibility condition at the current iterate is zero. It also stops if the length of the consensus vector satisfies $\|t\| \le \beta$, where $\beta$ is a given movement tolerance. Note that $t$ might be shorter than $\beta$ even when the current iterate is far from $\mathcal{R}$ if two feasibility vectors point in nearly opposite directions. Of course, the algorithm will also terminate if some maximum number of iterations is reached.
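The component-wise averaging that produces the original consensus vector can be sketched in a few lines of Python (our own illustration, not the authors' code); the `involves` flags record which constraints contain which variables.

```python
import numpy as np

def original_consensus(feas_vecs, involves):
    """Original (basic) consensus vector.

    feas_vecs : list of feasibility vectors, one per violated constraint
    involves  : involves[k][i] is True if constraint k depends on variable i
    Each component i of the consensus vector is the average of the i-th
    entries of the feasibility vectors, taken over only the constraints
    that involve variable i.
    """
    n = len(feas_vecs[0])
    t = np.zeros(n)
    for i in range(n):
        vals = [fv[i] for fv, inv in zip(feas_vecs, involves) if inv[i]]
        if vals:
            t[i] = sum(vals) / len(vals)
    return t
```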

In [9], Ibrahim and Chinneck gave several variations of the original constraint consensus method. The best algorithm variant for finding near-feasible points appears to be the maximum direction-based (DBmax) constraint consensus method. Their numerical experiments show that DBmax has the best success rate for finding near-feasible points. It also takes fewer iterations than most of the other methods. The DBmax constraint consensus method looks at the number of positive and negative values of each of the components of the feasibility vectors $fv_k$. For the $i$th component, DBmax finds the number of constraints with a positive $i$th entry in their feasibility vector. It also finds the number of constraints with a negative $i$th entry in their feasibility vector. The sign with the larger number of constraints (votes) is considered the winner.

After determining the winning sign, DBmax creates the consensus vector $t$ by choosing each $i$th component of $t$ to be the largest proposed movement in the winning direction. If a component has the same number of positive and negative votes, then DBmax takes the average of the largest proposed movements in the positive and negative directions. If the $i$th entry is zero in every feasibility vector, then $t_i$ is set to 0. As in the case of the original consensus method, when finding the $i$th component of the consensus vector in DBmax, only the subset of constraints that actually involve the $i$th variable is considered. An example of the two methods for computing the consensus vector is as follows: In this example, constraint 4 does not depend on the first variable. Thus, the first component of the original consensus vector is obtained by averaging the first components of the other three feasibility vectors. In the DBmax calculation, the winning sign is negative for the first component; in the second component positive is the winning sign. There is a tie in the third component, so we take the average of the largest positive and largest negative movements. The fourth component also has no winning sign and shows a rare example where the component of the original consensus vector is larger (in absolute value) than the same component of the DBmax consensus vector.
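The voting and tie-breaking rules can be sketched as follows (a Python illustration of our own; entries that are exactly zero are treated as abstaining, which approximates excluding constraints that do not involve a variable).

```python
import numpy as np

def dbmax_consensus(feas_vecs):
    """DBmax consensus vector, a sketch of the rule from Ibrahim & Chinneck.

    For each component: the sign held by more feasibility vectors wins, and
    the consensus entry is the largest proposed movement in that direction.
    On a tie with at least one vote on each side, average the largest
    positive and largest negative movements; with no votes, the entry is 0.
    """
    F = np.asarray(feas_vecs, dtype=float)
    t = np.zeros(F.shape[1])
    for i in range(F.shape[1]):
        col = F[:, i]
        pos, neg = col[col > 0], col[col < 0]
        if len(pos) > len(neg):
            t[i] = pos.max()               # positive sign wins
        elif len(neg) > len(pos):
            t[i] = neg.min()               # negative sign wins
        elif len(pos):                     # tie with at least one vote each
            t[i] = 0.5 * (pos.max() + neg.min())
    return t
```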

Usually, the consensus vector is longer for the DBmax method than for the original method. For this reason, the DBmax algorithm makes rapid progress toward the feasible region and significantly reduces the number of iterations needed to reach near-feasibility as compared to the original method [9]. On the other hand, when the functions $c_k$ are concave, the feasible region is convex, and component-averaging gradient-projection schemes (including the original constraint consensus method) have been proved to converge [3].

2.2. Binary Words and Crossing Points

For simplicity of presentation, we assume that the functions $c_k$, $k = 1, \dots, q$, are concave over $\mathbb{R}^n$. This will ensure that the feasible region is convex.

For each $k$, define the characteristic function $\chi_k : \mathbb{R}^n \to \{0, 1\}$ by $\chi_k(x) = 0$ if $c_k(x) \ge 0$ and $\chi_k(x) = 1$ otherwise. Define the binary word function $w(x) = \chi_1(x)\,\chi_2(x)\cdots\chi_q(x)$. For each $x$, $w(x)$ is a binary word. If $w(x) = 00\cdots0$, then $x$ is a feasible point of $\mathcal{R}$. Furthermore, the number of 1s in $w(x)$ is the number of violated constraints at $x$. Consider the equivalence relation defined by $x \sim y$ if $w(x) = w(y)$. The equivalence classes form a partition of $\mathbb{R}^n$ (or a subset of $\mathbb{R}^n$). Figure 1 shows an example with $q = 3$ constraints. The curves $c_k(x) = 0$ are labeled with $k$. The interior of this boundary is the convex region where $w(x) = 000$. In this example there are 8 equivalence classes. In general there are at most $2^q$ equivalence classes.
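A binary word is straightforward to compute from the constraint values; the following short Python sketch (ours) returns the word as a string.

```python
def binary_word(constraint_vals):
    """Binary word w(x): bit k is '0' if constraint k is satisfied
    (c_k(x) >= 0) and '1' if it is violated.  The number of '1's counts
    the violated constraints at x."""
    return ''.join('0' if v >= 0 else '1' for v in constraint_vals)
```

For example, with constraint values $(1.0, -0.5, 0.0)$ the word is `010`, and one constraint is violated.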

Figure 1: A partition of $\mathbb{R}^2$ formed by the binary words $w(x)$. The feasible region, with $w(x) = 000$, is shaded.

Suppose $x$ is a point in $\mathbb{R}^n$, and $d$ is a direction vector (not necessarily the consensus vector). The crossing points of the constraint $c_k$ on the ray $x + td$, $t \ge 0$, are the positive values of $t$ at which $c_k(x + td) = 0$. There are at most 2 crossing points since $c_k$ is concave. For general concave constraints these optimization problems are difficult, but, in the case of linear matrix inequalities, the crossing points are solutions to $\det(A(x) + tB) = 0$, where $B = \sum_i d_i A_i$; hence they are generalized eigenvalues of the matrix pencil formed by $A(x)$ and $-B$. Algorithms for finding the crossing points of an LMI can be found in [11, 12]. These algorithms depend on the sign of $c_k(x)$. When $c_k(x) < 0$, the ray has 0, 1, or 2 crossing points; see Figure 2. If $c_k(x) \ge 0$, the ray has at most 1 crossing point.
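For an LMI, the candidate crossing values along the ray are thus the generalized eigenvalues of the pencil formed by $A(x)$ and the directional matrix $B = \sum_i d_i A_i$. The following Python sketch (ours, not the algorithm of [11, 12]; it assumes $B$ is invertible and uses dense linear algebra) recovers the crossing points.

```python
import numpy as np

def crossing_points(A_x, B, tol=1e-9):
    """Crossing points of the LMI boundary along the ray x + t*d, t > 0.

    A_x : A(x), the symmetric matrix at the current point
    B   : sum_i d_i A_i, the directional matrix (assumed invertible here)
    Candidates solve det(A(x) + t B) = 0, i.e. they are eigenvalues of
    -B^{-1} A(x).  Keep the positive real candidates at which the smallest
    eigenvalue of A(x) + t B actually vanishes (the true boundary).
    """
    cands = np.linalg.eigvals(-np.linalg.solve(B, A_x))
    out = []
    for t in cands:
        if abs(complex(t).imag) < tol and complex(t).real > tol:
            tr = complex(t).real
            lam_min = np.linalg.eigvalsh(A_x + tr * B).min()
            if abs(lam_min) < 1e-6:
                out.append(tr)
    return sorted(out)
```

For the diagonal example $A(x) = \mathrm{diag}(-2, 4)$ and $B = \mathrm{diag}(1, -1)$, the boundary is crossed at $t = 2$ and $t = 4$.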

Figure 2: The ray $x + td$ and the crossing points of the constraint $c_k$ in an example where $c_k(x) < 0$.

3. The Original Constraint Consensus and DBmax Constraint Consensus Methods for Linear Matrix Inequalities

This section develops the tools required to apply the original (basic) and DBmax constraint consensus methods described in Section 2.1 to LMI constraints. We give some theoretical results and show how to compute the gradient.

A symmetric matrix $A$ is positive semidefinite if and only if its smallest eigenvalue, $\lambda_{\min}(A)$, is nonnegative. Thus the LMI system (1) can be written as $\lambda_{\min}(A^{(j)}(x)) \ge 0$, $j = 1, \dots, q$. For simplicity of presentation, in this section we consider a single LMI constraint $A(x) \succeq 0$, which is equivalent to $c(x) := \lambda_{\min}(A(x)) \ge 0$.

Lemma 1. The function $c(x) = \lambda_{\min}(A(x))$ is concave over $\mathbb{R}^n$.

Proof. Theorem  3.5 in [13] states that $-\lambda_{\min}(A(x))$ is convex over $\mathbb{R}^n$. Therefore $c(x) = \lambda_{\min}(A(x))$ is concave.

A consequence of Lemma 1 is that the feasible region for one LMI or a system of LMIs (1) is convex. Let $\nabla c(x)$ be the gradient vector of $c$.

Lemma 2. Suppose $x$ is an infeasible point of the constraint $c(x) \ge 0$, that a feasible point of the constraint exists, and that $\nabla c(x)$, the gradient of $c$, exists at $x$. Then $\nabla c(x) \neq 0$.

Proof. Assume that $\nabla c(x) = 0$. Since $c$ is concave over $\mathbb{R}^n$ by Lemma 1, $c(y) \le c(x) + \nabla c(x)^T (y - x) = c(x)$ for all $y$. Since $x$ is an infeasible point, $c(x) < 0$. Thus, $c(y) < 0$ for all $y$. This would contradict our assumption that a feasible point exists.

In the remainder of this section, we will assume that the hypotheses of Lemma 2 hold. Thus, the feasibility vector is $fv = \dfrac{-c(x)}{\|\nabla c(x)\|^2}\,\nabla c(x)$.

Let $g(t) = c(x + t\,fv)$, for $t \in \mathbb{R}$, be the value of $c$ along a line in $\mathbb{R}^n$, and let $\ell(t)$ be the linearization of $g$ at 0.

Theorem 3. Under the hypotheses of Lemma 2, the function $g$ defined by $g(t) = c(x + t\,fv)$ is concave. The linearization of $g$ at 0, $\ell(t)$, satisfies $\ell(t) = c(x)(1 - t)$. In particular, $\ell(1) = 0$.

Proof. The restriction of a concave function on $\mathbb{R}^n$ to a line is concave, so Lemma 1 implies that $g$ is concave. Since $g$ is concave, $g(t) \le \ell(t)$. Note that $\ell(0) = g(0) = c(x)$. By Lemma 2, $\nabla c(x) \neq 0$. Then, by the chain rule, the slope of $\ell$ is $g'(0) = \nabla c(x)^T fv = \dfrac{-c(x)}{\|\nabla c(x)\|^2}\,\nabla c(x)^T \nabla c(x) = -c(x)$. Thus, $\ell(t) = c(x)(1 - t)$.

An interpretation of Theorem 3 is that the normalization of $fv$ was chosen so that $x + fv$ lies on the linear approximation of the boundary of the constraint. Typical graphs of $g$ and $\ell$ are shown in Figure 3.

Figure 3: Typical graphs of $g(t)$ and $\ell(t)$, its linearization at $t = 0$. In the left case, the ray intersects the boundary of the feasible region at some $t \ge 1$, since $g$ reaches 0. In the second case, this ray does not intersect the feasible region, since $g$ is never 0.

The following corollary shows that the distance (if it exists) from $x$ along $fv$ to the boundary of the constraint is at least $\|fv\|$.

Corollary 4. Under the hypotheses of Lemma 2, (a) $c(x + t\,fv) < 0$ for all $t \in [0, 1)$. (b) Furthermore, if $c$ is a linear constraint, then $c(x + fv) = 0$; that is, $x + fv$ lies on the boundary of the constraint.

Proof. (a) Since $x$ is infeasible, $c(x) < 0$. By Theorem 3, $g(t) \le \ell(t) = c(x)(1 - t)$ for all $t$, and thus $g(t) < 0$ for all $t \in [0, 1)$. (b) If $c$ is a linear constraint, then $g$ is linear in $t$. Thus $g = \ell$ and $g(1) = \ell(1) = 0$. Combined with part (a), this shows that $x + fv$ is a boundary point of the constraint.

Remark 5. Even when $c$ is nonlinear, it is possible that $x + fv$ lies on the boundary. Consider the following LMI:
$$\begin{pmatrix} 1 + x_1 & x_2 \\ x_2 & 1 - x_1 \end{pmatrix} \succeq 0. \qquad (11)$$
The eigenvalues of this matrix are $1 \pm \|x\|$, so $c(x) = 1 - \|x\|$ and its gradient is $\nabla c(x) = -x/\|x\|$. The feasible region is the closed unit disk. Suppose $x$ is infeasible; that is, $\|x\| > 1$. A calculation shows that $x + fv = x/\|x\|$. Thus, $x + fv$ is on the boundary of the feasible region, as shown in Figure 4. Theorem 3 says that $x + fv$ cannot be in the interior of the feasible region. Usually $x + fv$ is infeasible for nonlinear constraints. In the present example $x + fv$ is feasible because $g$ is linear on the line through $x$ in the direction of $fv$. This motivates the following theorem.

Figure 4: For the LMI (11), the feasible region is the unit disk, and $x + fv$ is on the boundary.

Theorem 6. Under the hypotheses of Lemma 2, $g$ is linear over $[0, 1]$ if and only if $x + fv$ is a boundary point of the constraint.

Proof. Suppose $g$ is linear over $[0, 1]$. Then $g(t) = \ell(t)$ for $t \in [0, 1]$. Thus $g(1) = \ell(1) = 0$. Combined with Corollary 4(a), this implies that $x + fv$ is a boundary point of the constraint. On the other hand, assume that $x + fv$ is a boundary point. Then $g(1) = 0 = \ell(1)$. Thus, the concave function $g$ agrees with its linearization $\ell$ at $t = 0$ and $t = 1$, and therefore $g(t) = \ell(t)$ for all $t \in [0, 1]$.

To apply the constraint consensus methods to linear matrix inequalities, we need to be able to compute the gradient $\nabla c(x)$. This is given in the following results. We assume that the matrices $A_i$ are each of dimension $m \times m$.

Let $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_m$ be the eigenvalues of $A(x)$ with the corresponding orthonormal set of eigenvectors $q_1, q_2, \dots, q_m$.

Lemma 7. If $\lambda_1 < \lambda_2$, then the function $c(x) = \lambda_{\min}(A(x))$ is differentiable at $x$ and $\dfrac{\partial c}{\partial x_i} = q_1^T A_i q_1$.

Proof. Since $\lambda_1$ is simple and $q_1$ is the corresponding unit eigenvector, this follows from Corollary  3.10 in [13]. The derivative of $A(x)$ with respect to $x_i$ is $A_i$, so $\dfrac{\partial c}{\partial x_i} = q_1^T A_i q_1$.

By Lemma 7, the components of $\nabla c(x)$ are given by $q_1^T A_i q_1$. The above results can now be used to apply the original and DBmax constraint consensus methods in Section 2.1 to LMI constraints (1). Recall that these methods, on convergence, find near-feasible or feasible points, but not necessarily strictly feasible points.
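Putting Lemma 7 to work, the constraint value $c(x) = \lambda_{\min}(A(x))$ and its gradient can be computed from one symmetric eigendecomposition. A Python sketch (ours), illustrated on the unit-disk LMI of Remark 5:

```python
import numpy as np

def lmi_value_and_gradient(A0, As, x):
    """c(x) = lambda_min(A(x)) for A(x) = A0 + sum_i x_i A_i, with gradient.

    By Lemma 7, when lambda_min is simple with unit eigenvector q,
    dc/dx_i = q^T A_i q.  A sketch; A0 and the list As are the data of
    a single LMI constraint.
    """
    A = A0 + sum(xi * Ai for xi, Ai in zip(x, As))
    w, V = np.linalg.eigh(A)          # eigenvalues in ascending order
    q = V[:, 0]                       # unit eigenvector of lambda_min
    grad = np.array([q @ Ai @ q for Ai in As])
    return w[0], grad
```

For the unit-disk LMI at $x = (1.2, 1.6)$ this returns $c(x) = -1$ and $\nabla c(x) = (-0.6, -0.8) = -x/\|x\|$, matching Remark 5.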

4. New Constraint Consensus Algorithms

In this section, we modify and combine the original (basic) and DBmax constraint consensus methods described in [3, 9] to find strictly feasible points for our system of LMI constraints (1).

We note that strictly feasible points satisfy $\lambda_{\min}(A^{(j)}(x)) > 0$ (i.e., $A^{(j)}(x)$ is positive definite) for all $j$ and lie in the interior of the feasible region $\mathcal{R}$. We give four variations of the new algorithms: the Original-DBmax (OD), DBmax-DBmax (DD), Original-Original (OO), and DBmax-Original (DO) constraint consensus methods. Algorithm 1 is OD, Algorithm 2 is DD, Algorithm 3 is OO, and Algorithm 4 is DO. Each method has two phases: Phase 1 and Phase 2. For example, the OD method uses the original constraint consensus vector in Phase 1 and the DBmax constraint consensus vector in Phase 2.

Algorithm 1: Original-DBmax (OD) constraint consensus algorithm.

Algorithm 2: DBmax-DBmax (DD) constraint consensus algorithm.

Algorithm 3: Original-original (OO) constraint consensus algorithm.

Algorithm 4: DBmax-original (DO) constraint consensus algorithm.

Phase 1 is the classical constraint consensus method given in [3, 9]. The starting point in Phase 1 is a random point. Phase 1 stops when a near-feasible point is found (i.e., $\|fv_k\| \le \alpha$ for all $k$), when the consensus vector is shorter than the movement tolerance $\beta$, or when the maximum number of iterations is reached.

After exiting Phase 1, the algorithm goes to Phase 2 to try to find a strictly feasible (not just near-feasible) point. The first iterate in Phase 2 is the last iterate of Phase 1. Phase 2 now generates a consensus vector determined by all the unsatisfied constraints. In other words, the feasibility distance tolerance is set to 0 in Phase 2. Then, all the crossing points of the constraints on the consensus ray are computed (see Figure 5), using the method described in [12].

Figure 5: In Phase 2, the crossing points and binary words are computed along the consensus ray. In this example a feasible point, with all-zero binary word, is found.

The corresponding binary words on each line segment of the consensus ray are determined efficiently by starting with the binary word of the current iterate and flipping the $k$th bit when constraint $k$ is crossed. This assumes that the consensus ray actually crosses a boundary curve whenever it touches the curve. The next iterate is the midpoint of the most-satisfied segment, that is, the segment with the most zeros in its binary word. If the most-satisfied segment is a ray going to infinity, then the next iterate is the last crossing point plus a fixed step along the consensus ray. If there are no crossing points on the consensus ray, then the next iterate is one such step along the ray. The algorithms do this by sorting the crossing points along the ray and then adding one extra point beyond the last crossing. If more than one segment has the maximum number of zeros in its binary word, then the next iterate is taken in the line segment closest to the current point. The process is repeated until a feasible point of $\mathcal{R}$ is found or the maximum number of iterations is reached.
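The segment-selection step in Phase 2 can be sketched as follows (a Python illustration of our own; the sorted crossing points and the step length past the last crossing are assumed inputs).

```python
def phase2_step(word0, crossings, delta=1.0):
    """Pick the Phase-2 step length along the consensus ray (a sketch).

    word0     : list of bits (0 satisfied / 1 violated) at t = 0
    crossings : sorted list of (t, k) pairs: constraint k is crossed at t > 0
    delta     : step taken past the last crossing for an unbounded segment
    Walk the segments between consecutive crossing points, flipping bit k
    at each crossing; return the midpoint of the first segment whose
    binary word has the most zeros (ties resolved toward t = 0).
    """
    word = list(word0)
    ts = [0.0] + [t for t, _ in crossings]
    ts.append(ts[-1] + 2 * delta)          # extra point on the final ray
    best_t, best_zeros = None, -1
    for j in range(len(ts) - 1):
        zeros = word.count(0)
        if zeros > best_zeros:             # strict: keep segment nearest 0
            best_zeros, best_t = zeros, 0.5 * (ts[j] + ts[j + 1])
        if j < len(crossings):
            word[crossings[j][1]] ^= 1     # flip the crossed constraint bit
    return best_t
```

With no crossings the step is simply `delta` along the ray; with two crossings that satisfy both constraints in turn, the step lands `delta` past the last crossing.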

5. Numerical Experiments I: Comparing OD, DD, OO, and DO Methods

In this section, we check the effectiveness of the four modified constraint consensus algorithms on benchmark problems, as well as many new randomly generated problems. In addition to measuring the success in finding strictly feasible points, we also compare their performances on the number of iterations and time to find a feasible point.

All numerical experiments were done using a Dell OptiPlex 980 computer with codes written in MATLAB. The list of the test problems is given in Table 1. The columns give the number of LMI constraints $q$, the dimension $n$, and the sizes of the matrices. We randomly generated problems 1 through 13, whereas the others are taken from the SDPLIBRARY [10] as listed in the last column. For each random problem, $q$, $n$, and the matrix sizes are chosen, not at random. However, for each problem, each LMI was generated randomly as follows: $A_0$ is a diagonal matrix with each diagonal entry chosen from an interval of positive numbers. Each $A_i$ ($i \ge 1$) is a random symmetric and sparse matrix with a prescribed approximate number of nonzero entries, generated using a Matlab command for sparse random symmetric matrices. This ensures that, for each of the 13 random problems, the origin is an interior point. The LMIs of problem 1 are given in (14). The boundaries of the four LMI constraints are shown in Figure 6.

Table 1: Test problems.
Figure 6: The boundary of each LMI constraint for Problem 1. The LMIs are given in (14). The feasible region for any LMI is determined by polynomial inequalities of degree at most the size of the matrix, which is 2 for these constraints.

We ran each of the test problems from Table 1 with 100 random initial conditions. Each component of the initial point was chosen from a normal distribution with mean 0 and fixed variance. We used the four algorithms with fixed values of the feasibility tolerance $\alpha$ and the movement tolerance $\beta$. We will later discuss the effect of the choices of $\alpha$ and $\beta$ on the performance of the algorithms. We used a fixed maximum number of iterations in Phase 1 and in Phase 2. Tables 2, 3, 4, and 5 give the results of these experiments. The “Exit” column gives the percentage of the 100 trials that completed Phase 1 successfully (i.e., Phase 1 converged within the iteration limit). The “Feas Pt Found?” column shows the percentage of the trials that found a strictly feasible point in Phase 2. The average iterations and times for the successful trials in Phase 1 and Phase 2 are also given for this set of test problems. Note that when Phase 1 finds a strictly feasible point, then 0 iterations of Phase 2 are performed. This is the reason that the average iterations in Phase 2 are sometimes less than 1. Phase 2 rarely converged in more than 4 iterations, even when we raised the maximum number of iterations. Note that each of Problems 19–25 has only one constraint, for which the four methods coincide. Hence, we omitted the results for these problems in Tables 3–5. The last row at the bottom of the tables gives averages for each column for all the 25 problems, even though results for problems 19–25 are not included in Tables 3–5.

Table 2: Results of the OD method on the test problems.
Table 3: Results of the DD method on the test problems with more than one constraint.
Table 4: Results of the OO method on the test problems with more than one constraint.
Table 5: Results of the DO method on the test problems with more than one constraint.

Figure 7 compares OO, OD, DO, and DD for the 18 test problems that have more than one constraint. The results show that method O usually has a higher success rate in Phase 2, regardless of which method is used in Phase 1. The success rate plotted is the fraction of random initial guesses that yield a feasible point. In the left figure, method D is used in Phase 1, and on the right method O is used in Phase 1. The circles connected by solid lines (indicating method O in Phase 2) have a higher success rate than the squares connected by a dotted line (indicating method D in Phase 2) in almost all of the cases. A comparison of the two methods that use O in Phase 2 is given in Figure 8. In the figure, the circles use method O in Phase 1, and the squares use method D in Phase 1. The success rate is slightly better with DO, and the total time is much better with DO. Since Figure 7 shows that O is clearly better in Phase 2, we conclude that DO is the best of the four methods (OO, OD, DO, and DD) we tested. So, we see from the results in Tables 2–5 and Figures 7 and 8 that DO is the most successful and OD is the least successful. On the whole, the DBmax method works better in Phase 1, and the original method is better in Phase 2. Note that DO finds a strictly feasible point with an average success rate of 79% for the 25 problems and 75.7% for the 12 benchmark problems.

Figure 7: Comparison of the four composite methods (OO, OD, DO, and DD) for the 18 test problems that have more than one constraint. Observe that method O has a higher success rate in Phase 2.
Figure 8: Comparison of the two methods that use O in Phase 2, for the same 18 test problems considered in Figure 7. We conclude that DO is the best method.

However, the number of problems used is too small to make a definitive comparison. To compare these methods better, we generated and used a set of 500 random problems. For each problem, the number of constraints $q$ and the dimension $n$ are integers chosen uniformly from fixed intervals, and the size $m$ of each matrix is likewise an integer chosen uniformly from a fixed interval. In all of the 500 problems, each LMI was generated as follows: $A_0$ is a diagonal matrix with each diagonal entry chosen from an interval of positive numbers. Each $A_i$ ($i \ge 1$) is a random symmetric and sparse matrix with a prescribed approximate number of nonzero entries, generated using a Matlab command for sparse random symmetric matrices. This ensures that each of our test problems is random and that the origin is an interior point for each problem.

For each of the 500 random problems, one starting point was chosen at random and used for each of our four algorithms. Each component of the initial point was chosen from a normal distribution with mean 0 and fixed variance. The results are given in Table 6. They confirm that the DO algorithm is the most successful in finding strictly feasible points. Another interesting result of this experiment, not shown in Table 6, is that the DBmax method in Phase 1 found a strictly feasible point in 5 out of the 500 problems, whereas the original method in Phase 1 never found a strictly feasible point in any of these 500 problems.

Table 6: Success rate percentages and average times for the 500 random problems.

The results show that DO is the best method (in percentage of successes), followed by OO, then DD, then OD. In other words, D is better in Phase 1 and O is better in Phase 2. This suggests that the original consensus directions are better than the DBmax search directions for iterates close to the feasible region, while the reverse is the case for iterates far away from the feasible region. The fastest is DO, followed by DD, then OD, then OO.

To obtain some geometrical understanding of how Phase 2 can fail, we include detailed figures of problem 1. This problem also allows a visual comparison of the original and DBmax methods. Figure 9 shows how the consensus vector is computed at the initial point and shows the first few steps of Phase 1. Constraint 2 is satisfied at this point. The original consensus vector is the average of the three feasibility vectors for the violated constraints, whereas each coordinate of the DBmax consensus vector is the largest proposed movement, taken from one of the feasibility vectors. The middle figure shows the first 8 iterations of the original constraint consensus method and the right figure shows the first 4 iterations of the DBmax method. Note how the original method converges more slowly, whereas the DBmax method follows a more erratic path. The erratic path of the DBmax method suggests that the original method gives a consensus ray (used in Phase 2) that is more likely to intersect the feasible region.

Figure 9: (a) shows the original consensus vector and the DBmax consensus vector for problem 1 at the initial point. Figures (b) and (c) show Phase 1 for the two methods.

It is notable that Phase 2 does not always converge in one iteration when Phase 1 stops at a near-feasible point. For problem 1 this is seen in Figure 10, which shows the end of Phase 1 and Phase 2 for the OO algorithm. In this figure, Phase 1 points are dots and Phase 2 points are circles. The left figure starts with the later iterates of Phase 1, and the first two points of Phase 2 are visible. The detailed figure on the right shows the end of Phase 1 and the initial point of Phase 2. Note that the second-to-last Phase 1 point is within the feasibility tolerance of the constraint 4 boundary, so the consensus vector at that point is simply the feasibility vector for constraint 1. As a result, the final Phase 1 point is almost on the boundary of constraint 1, and Phase 1 ends with all constraints satisfied (2 and 3) or approximately satisfied (1 and 4). The final point of Phase 1 is the initial point of Phase 2, as indicated by the circle.

Figure 10: The OO algorithm for problem 1, continued from Figure 9(b).

In Figure 10, the consensus vector calculated at the first point of Phase 2 is the average of the feasibility vectors of the two approximately satisfied constraints, but it is dominated by one of them, and the consensus ray does not intersect the feasible region. For this problem, Phase 2 finds a feasible point in two steps, as seen in the figure on the left, but for other problems one step of Phase 2 can move the point very far from the feasible region. In practice we find that Phase 2 either converges in 3 or fewer steps or it does not converge at all. In this example, the consensus rays at the subsequent Phase 2 points all intersect the feasible region. With a smaller feasibility tolerance, Phase 1 would have stopped at a later point, and Phase 2 would have found a strictly feasible point in one iteration. The figure shows how the consensus ray can miss the feasible region because the initial point of Phase 2 is too close to the feasible region. The consensus ray based on the DBmax consensus vector misses the feasible region more often than the consensus ray based on the original consensus vector, which is why the original method is preferred in Phase 2.

The example shown in Figures 9 and 10 suggests that a small feasibility tolerance $\alpha$ is preferred. To further investigate the interplay between the parameters $\alpha$ and $\beta$, we ran the DO algorithm on problems 14 and 18 with 100 randomly chosen initial conditions, for several choices of $\alpha$ and $\beta$. The results are shown in Tables 7 and 8. One choice of $\alpha$ and $\beta$ works best for problem 14, but a different choice is best for problem 18. This shows that the optimal values of $\alpha$ and $\beta$ are problem dependent. In Chinneck and Ibrahim’s papers [3, 9], where the goal is to find a near-feasible point in Phase 1, the movement tolerance $\beta$ is usually chosen to be much smaller than the feasibility tolerance $\alpha$.

Table 7: Success rate percentages for the DO method in problem 14 as $\alpha$ and $\beta$ vary.
Table 8: Success rate percentages for the DO method in problem 18 as $\alpha$ and $\beta$ vary.

6. Numerical Experiments II: Comparing DO with Another Method

In this section, we compare our DO method with the method of alternating projections (MAP) given as Algorithm 3.1 in the paper [6]. The MAP method works well in finding feasible points for LMIs and has been found to perform even better than SEDUMI [14] on certain types of LMI constraints. It has the added advantage that it allows the user to input a starting point, which enables us to compare it with DO from the same starting points.

A brief description of the MAP method is as follows. Recall that our problem is to find an interior point of the system (1). The system is equivalent to the single LMI $A(x) = A_0 + x_1 A_1 + \cdots + x_n A_n \succeq 0$, where $A_0, A_1, \dots, A_n$ are block diagonal matrices whose blocks are the matrices of the individual LMIs. The matrix $A(x)$ is of size $M \times M$, where $M$ is the sum of the sizes of the diagonal blocks.

Consider the affine space associated with the single LMI constraint and the set , where is the set of all positive semidefinite matrices. Algorithm 3.1 (MAP) in [6] seeks a point in the interior of the intersection of the two closed convex sets and . The algorithm generates a sequence of points that converges into the interior by alternating projections onto the sets. In our implementation of MAP, we exploited the block-diagonal structures of to reduce computation time. The starting point in MAP can be any symmetric matrix. To allow comparison of MAP with DO, we picked starting points for MAP by choosing random (as in Section 5) and using as the starting symmetric matrix.
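The two projections that drive MAP can be sketched as follows, again assuming the standard form A(x) = A0 + Σ_i x_i A_i. Projection onto the PSD cone clips negative eigenvalues to zero; projection onto the affine set is a Frobenius-norm least-squares fit of the coefficients. The function names and the simple stopping rule are illustrative, not the exact implementation of Algorithm 3.1 in [6].

```python
import numpy as np

def project_psd(S):
    """Projection onto the PSD cone: zero out negative eigenvalues."""
    w, V = np.linalg.eigh(S)
    return (V * np.maximum(w, 0.0)) @ V.T

def project_affine(S, A0, As):
    """Frobenius-norm projection of symmetric S onto the affine set
    {A0 + sum_i x_i As_i}: a least-squares problem in x."""
    B = np.column_stack([Ai.ravel() for Ai in As])
    x, *_ = np.linalg.lstsq(B, (S - A0).ravel(), rcond=None)
    return A0 + sum(xi * Ai for xi, Ai in zip(x, As)), x

def map_iterate(S0, A0, As, tol=1e-2, max_iter=510):
    """Alternate projections onto the affine set and the PSD cone,
    stopping when consecutive iterates are within tol (as in Section 6)."""
    S = S0
    for _ in range(max_iter):
        T, x = project_affine(S, A0, As)
        S_new = project_psd(T)
        if np.linalg.norm(S_new - S) < tol:
            return x, S_new
        S = S_new
    return x, S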

In their paper, the authors also used a conification procedure that turns the problem into that of finding a point in the interior of the two closed convex cones and , where is the set of positive real numbers. We will refer to Algorithm 3.1 applied to cones and as the MAPCON method. We choose starting points for MAPCON by setting and choosing random (as in Section 5) to give the starting symmetric matrix . It has been proved in [6] that MAPCON converges after a finite number of iterations, while MAP only guarantees asymptotic convergence.

We compared DO with MAP and MAPCON on the 25 problems given in Section 5; for each method we ran 10 trials per problem and computed the averages. We used the same parameters for DO as in Section 5, that is, and , with a maximum of iterations in Phase 1 and up to iterations in Phase 2. To avoid excessively long computation times, DO is stopped after reaching a maximum of 500 seconds. For MAP and MAPCON, a maximum of 510 iterations and a maximum of 500 seconds were used. A method is declared to have converged when two consecutive iterates are within a tolerance distance of 0.01.

We used the same starting points for DO, MAP, and MAPCON, generated as in Section 5. The results for the 25 problems are given in Table 9. DO found a feasible point in each of the 25 problems, MAP in 16 problems, and MAPCON in 16 problems. For a fair comparison, the averages for the iterations and times were calculated over the 12 problems in which DO, MAP, and MAPCON all found a strictly feasible point. The table shows that MAP and MAPCON were unable to find a strictly feasible point on some of the problems before reaching the iteration or time limits. The asterisk “*” in Table 9 indicates that the entry is irrelevant and was not recorded because the iteration or time limit had been reached.

Table 9: Results of the DO, MAP, and MAPCON methods on the 25 test problems.

We also compared the methods on a set A of 500 random problems, generated in the same manner as the 500 random problems of Section 5. Here, however, we used smaller problems to reduce the total time needed to complete the experiments, as MAP and MAPCON were taking too much time on larger values of and . For each problem, is an integer uniform in the interval and is an integer uniform in the interval . For , is an integer uniform in the interval . The results are given in Table 10. DO found a feasible point in 443 problems, MAP in 240 problems, and MAPCON in 238; all three methods succeeded on the same 143 problems. The averages for the iterations and times were calculated over these 143 problems.

Table 10: Results of DO, MAP, and MAPCON methods for the set A of 500 random problems.

To see the effect of the size of relative to on our comparison of the three methods, we generated another set B of 500 random problems. Set B has fewer variables (smaller ) and more constraints (larger ) than set A. For each problem, is an integer uniform in the interval and is an integer uniform in the interval . For , is an integer uniform in the interval . The results are given in Table 11. DO found a feasible point in 467 problems, MAP in 206 problems, and MAPCON in 244; all three methods succeeded on the same 125 problems. The averages for the iterations and times were calculated over these 125 problems.

Table 11: Results of DO, MAP, and MAPCON methods for the set B of 500 random problems.

The results in Tables 9, 10, and 11 show that DO has a higher success rate than MAP and MAPCON. The DO method also required fewer iterations and less time to find a strictly feasible point. This indicates that DO outperforms both MAP and MAPCON as an LMI feasibility method on our test problems. The results in Tables 10 and 11 also suggest that the advantage of DO over MAP and MAPCON grows when the number of variables is small relative to the number of LMI constraints. An unexpected conclusion from our results is that MAPCON may not be a better method than MAP in practice.

7. Conclusion

We apply, modify, and combine the constraint consensus methods of Chinneck and Ibrahim [3, 9] to find strictly feasible points of linear matrix inequality constraints. Phase 1 of our algorithm is the standard constraint consensus method, using either the original (O) or DBmax (D) consensus vector. At each step in Phase 2, we search the ray defined by the original or DBmax consensus vector for crossing points of the LMI constraints and move to the line segment on that ray where the most constraints are satisfied. We tested the four resulting algorithms, namely, Original-DBmax (OD) constraint consensus method, DBmax-DBmax (DD) constraint consensus method, Original-Original (OO) constraint consensus method, and DBmax-Original (DO) constraint consensus method. Our numerical experiments indicate that DO is the best method. It works well for the benchmark problems, finding a strictly feasible point with a success rate of 75.7% on average.

Further modifications could improve our algorithms. For example, we find that if Phase 2 does not converge within 5 iterations, it is unlikely to converge at all; in this case the algorithm could give up and restart Phase 1 with a new, randomly chosen starting point. Heuristics for a good choice of and could also improve the success rate. Another possible modification is an algorithm that makes a gradual transition from Phase 1 to Phase 2. We compared DO with the method of alternating projections and found that DO outperforms it on average, especially on problems with a large number of constraints relative to the number of variables. In the future we plan to study the performance of DO as the problem dimensions vary and to compare it with other methods such as SEDUMI and SDPT3.

Our four algorithms can be used with any differentiable constraints, provided the crossing points of each constraint boundary with a ray can be computed. The algorithms do not require the feasible region to be convex. We have implemented the algorithms for linear matrix inequalities, where the computation of crossing points is relatively straightforward.
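For LMIs the crossing points admit an eigenvalue characterization: along the ray p + t·d, one has A(p + t·d) = A(p) + t·D with D = Σ_i d_i A_i, and the boundary is crossed where this matrix becomes singular. The sketch below assumes the standard form A(x) = A0 + Σ_i x_i A_i and, as a hedged simplification, that D is nonsingular so that an ordinary eigenvalue solver suffices; the general case requires a generalized eigenvalue solver for the pencil (A(p), −D). The function name is illustrative.

```python
import numpy as np

def ray_crossings(p, d, A0, As):
    """Parameter values t >= 0 at which A(p + t*d) becomes singular,
    i.e. where the ray from p in direction d crosses the boundary of
    one LMI constraint.  Requires D = sum_i d_i As_i nonsingular."""
    Ap = A0 + sum(pi * Ai for pi, Ai in zip(p, As))   # A(p)
    D = sum(di * Ai for di, Ai in zip(d, As))          # direction matrix
    # A(p) + t*D singular  <=>  t is an eigenvalue of -D^{-1} A(p).
    t = np.linalg.eigvals(-np.linalg.solve(D, Ap))
    t = t[np.abs(t.imag) < 1e-10].real                 # keep real crossings
    return np.sort(t[t >= 0.0])
```

Sorting these values for every constraint partitions the consensus ray into segments, and Phase 2 moves to the segment on which the most constraints are satisfied.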

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. M. J. Todd, “Semidefinite optimization,” Acta Numerica, vol. 10, pp. 515–560, 2001.
  2. L. Vandenberghe and S. Boyd, “Semidefinite programming,” SIAM Review, vol. 38, no. 1, pp. 49–95, 1996.
  3. J. W. Chinneck, “The constraint consensus method for finding approximately feasible points in nonlinear programs,” INFORMS Journal on Computing, vol. 16, no. 3, pp. 255–265, 2004.
  4. R. B. Kearfott and J. Dian, “An iterative method for finding approximate feasible points,” Tech. Rep., Department of Mathematics, University of Southern Louisiana, Lafayette, La, USA, 2000.
  5. P. Gahinet and A. Nemirovski, “The projective method for solving linear matrix inequalities,” Mathematical Programming, vol. 77, pp. 163–190, 1997.
  6. M. A. Rami, U. Helmke, and J. B. Moore, “A finite steps algorithm for solving convex feasibility problems,” Journal of Global Optimization, vol. 38, no. 1, pp. 143–160, 2007.
  7. D. Baang, “Improved ellipsoid algorithm for LMI feasibility problems,” International Journal of Control, Automation and Systems, vol. 7, no. 6, pp. 1015–1019, 2009.
  8. Y. Censor and S. A. Zenios, Parallel Optimization: Theory, Algorithms, and Applications, Oxford University Press, New York, NY, USA, 1997.
  9. W. Ibrahim and J. W. Chinneck, “Improving solver success in reaching feasibility for sets of nonlinear constraints,” Computers & Operations Research, vol. 35, no. 5, pp. 1394–1411, 2008.
  10. B. Borchers, “SDPLIB 1.2, library of semidefinite programming test problems,” Optimization Methods and Software, vol. 11, no. 1–4, pp. 683–690, 1999.
  11. S. Jibrin, “Generating uniform points on the boundary of bounded spectrahedron with applications to rank constrained linear matrix inequality problem,” International Journal of Applied Mathematics & Engineering Sciences, vol. 1, no. 1, pp. 27–38, 2007.
  12. R. J. Caron, T. Traynor, and S. Jibrin, “Feasibility and constraint analysis of sets of linear matrix inequalities,” INFORMS Journal on Computing, vol. 22, no. 1, pp. 144–153, 2010.
  13. M. L. Overton and R. S. Womersley, “Optimality conditions and duality theory for minimizing sums of the largest eigenvalues of symmetric matrices,” Mathematical Programming, vol. 62, no. 2, pp. 321–357, 1993.
  14. J. Sturm, “Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones,” Optimization Methods and Software, vol. 11-12, pp. 625–653, 1999.