
Research Article | Open Access

Volume 2015 |Article ID 790451 | 16 pages | https://doi.org/10.1155/2015/790451

# Constraint Consensus Methods for Finding Strictly Feasible Points of Linear Matrix Inequalities

Revised: 31 Oct 2014
Accepted: 06 Nov 2014
Published: 08 Jan 2015

#### Abstract

We give algorithms for solving the strict feasibility problem for linear matrix inequalities. These algorithms are based on John Chinneck’s constraint consensus methods, in particular, the method of his original paper and the modified DBmax constraint consensus method from his paper with Ibrahim. Our algorithms start with one of these methods as “Phase 1.” Constraint consensus methods work for any differentiable constraints, but we take advantage of the structure of linear matrix inequalities. In particular, for linear matrix inequalities, the crossing points of each constraint boundary with the consensus ray can be calculated. In this way we check for strictly feasible points in “Phase 2” of our algorithms. We present four different algorithms, depending on whether the original (basic) or DBmax constraint consensus vector is used in Phase 1 and, independently, in Phase 2. We present results of numerical experiments that compare the four algorithms. The evidence suggests that one of our algorithms is the best, although none of them are guaranteed to find a strictly feasible point after a given number of iterations. We also give results of numerical experiments indicating that our best method compares favorably to a new variant of the method of alternating projections.

#### 1. Introduction

We consider the strict feasibility problem for linear matrix inequality (LMI) constraints. In particular, we seek a point in the interior of the region defined by LMI constraints of the form A^(j)(x) = A_0^(j) + x_1 A_1^(j) + ⋯ + x_n A_n^(j) ⪰ 0, j = 1, …, q, where the A_i^(j) are symmetric matrices. The partial ordering A ⪰ 0 means A is positive semidefinite. In semidefinite programming problems, a function is optimized over a system of LMI constraints. For more information on semidefinite programming, including applications, refer to [1, 2]. Consider the feasible region F = {x ∈ R^n : A^(j)(x) ⪰ 0 for j = 1, …, q}.

We assume that the feasible region has a nonempty interior. In this paper, we are interested in finding strictly feasible points, those in the interior of the feasible region. Finding strictly feasible points is an important problem in interior point methods [3, 4]; some methods need a starting point in the interior to proceed to optimality.

A projective method for solving the strict feasibility problem in the case of linear matrix inequalities (LMI’s) was proposed in . Their method is based on the Dikin ellipsoids and successive projections onto a linear subspace. The cost per iteration in this approach is high because each iteration solves a least-squares problem. A different approach based on the method of alternating projections was given in . The iterations in this method are also costly as they require eigenvalue-eigenvector decompositions. The approach described in  solves the LMI feasibility problem by computing a minimum-volume ellipsoid at each iteration. The convergence rate is slow, so this method is also expensive. In , several iterative methods were given for the convex feasibility problem. These techniques use an orthogonal or a subgradient projection at each iteration.

We study the original (basic) constraint consensus method and the DBmax constraint consensus method, developed by Chinneck  and Ibrahim and Chinneck  for general nonlinear constraints, to find near-feasible points. These methods also use a gradient projection at each iteration to build a consensus vector that moves a point iteratively toward the feasible region from an infeasible starting point. The main cost in these methods is computing gradients, so they are comparatively cheap relative to the methods described in . While the goal in [3, 9] is finding near-feasible points, ours is finding interior (strictly) feasible points. We adapt and combine these methods to handle LMI constraints and to find interior points of the feasible region in two phases. Phase 1 is simply the original constraint consensus method or the DBmax constraint consensus method, used to find a near-feasible point.

In Phase 2, starting with the near-feasible point found in Phase 1, the algorithms use the concepts of crossing points and binary words to obtain a point in a most-satisfied interval along the ray of the consensus vector at each iteration. LMI constraints have the advantage that their crossing points are computable, whereas this is not possible for general nonlinear constraints. The goal in Phase 2 is to find an interior point of . We give four variations of the new algorithms: Original-DBmax (OD) constraint consensus method, DBmax-DBmax (DD) constraint consensus method, Original-Original (OO) constraint consensus method, and DBmax-Original (DO) constraint consensus method. The OD constraint consensus method uses the original constraint consensus method in Phase 1 and the DBmax search directions in Phase 2. The description of the other three methods is similar.

The paper is organized as follows. Section 2 gives a review of two of the constraint consensus methods introduced in [3, 9] and the computation of crossing points for LMIs. In Section 3 we apply the constraint consensus methods to a single LMI and present some theoretical results. For example, we provide necessary and sufficient conditions for the feasibility vector of a constraint to move the point to the boundary of that constraint. In Section 4 we give our four algorithms. Section 5 tests our algorithms on known benchmark problems. We also compare the four methods on a variety of randomly generated test problems. The results show that the DO constraint consensus method outperforms the other three algorithms: it takes fewer total iterations, uses less computing time, and has the most success in finding strictly feasible points. We compared DO with the method of alternating projections and found that DO outperforms it on average, especially on problems with a large number of LMI constraints relative to the number of variables. The concluding section has comments on how our algorithms could be improved and extended. While we focus on LMI constraints, our methods are applicable to other constraints, including nonconvex types, provided the gradients and the crossing points are computable.

#### 2. Background Material

This section discusses background material that will be needed in the next section, including constraint consensus methods, binary words, and crossing points for nonlinear constraints.

We consider a system of inequality constraints of the form c_i(x) ≥ 0, i = 1, …, q. For each i, c_i is a linear or nonlinear function mapping R^n to R.

##### 2.1. The Original Constraint Consensus and DBmax Constraint Consensus Methods

In this subsection, we describe the original constraint consensus and DBmax constraint consensus methods for general nonlinear constraints.

The original (basic) constraint consensus method was developed by Chinneck in  to find a near-feasible point for the given system. The method starts from an infeasible point x_0. The method associates each constraint c_i(x) ≥ 0 with a feasibility vector fv_i, defined by

fv_i = (−c_i(x)/‖∇c_i(x)‖²) ∇c_i(x) if c_i(x) < 0, and fv_i = 0 otherwise,   (3)

where |c_i(x)| is the constraint violation at x and ∇c_i(x) is the gradient of c_i at the point x. We assume that ∇c_i(x) exists and that ∇c_i(x) ≠ 0 if x is infeasible. If c_i(x) = 0 and ∇c_i(x) = 0, then we define fv_i = 0 to resolve the ambiguity in (3). We will show in Section 3 that if c_i is linear, then x + fv_i is the point on the boundary of c_i(x) ≥ 0 that is closest to x. The length of the feasibility vector, ‖fv_i‖, is called the feasibility distance, and it is an estimate of the distance to the boundary. We define x to be near-feasible with respect to c_i(x) ≥ 0 if ‖fv_i‖ ≤ α, where α is a preassigned feasibility tolerance. We say that x is strictly feasible with respect to c_i(x) ≥ 0 if c_i(x) > 0. In summary, with respect to the constraint c_i(x) ≥ 0, we say that
(i) x is near-feasible if ‖fv_i‖ ≤ α;
(ii) x is feasible if c_i(x) ≥ 0 or, equivalently, fv_i = 0;
(iii) x is strictly feasible if c_i(x) > 0.
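
As a small illustration of the definition above, the sketch below computes a feasibility vector for a violated constraint c(x) ≥ 0 (the linear constraint and point are illustrative choices, not from the paper) and checks the linear-constraint property that x + fv lands exactly on the boundary:

```python
import numpy as np

def feasibility_vector(c_val, grad):
    """Feasibility vector for a violated constraint c(x) >= 0.

    For a violated point (c_val < 0) this is
    fv = (-c_val / ||grad||^2) * grad,
    whose length estimates the distance to the constraint boundary."""
    grad = np.asarray(grad, dtype=float)
    g2 = np.dot(grad, grad)
    if c_val >= 0 or g2 == 0.0:
        return np.zeros_like(grad)   # a satisfied constraint proposes no movement
    return (-c_val / g2) * grad

# Linear constraint c(x) = 3 - x1 - x2 >= 0, violated at x = (4, 1):
x = np.array([4.0, 1.0])
grad = np.array([-1.0, -1.0])       # gradient of c
c_val = 3.0 - x[0] - x[1]           # = -2, so x is infeasible
fv = feasibility_vector(c_val, grad)
# For a linear constraint, x + fv lies exactly on the boundary c = 0:
print(3.0 - (x + fv)[0] - (x + fv)[1])  # 0.0
```

For nonlinear constraints the step only approximates the boundary, which is the point of Theorem 3 and Corollary 4 below.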

The feasibility vectors at the initial point x_0 for all the constraints are combined to give a single consensus vector cv. In the original constraint consensus method, this consensus vector is the average of the feasibility vectors of the violated constraints. However, the j-th component of the consensus vector is averaged over only the subset of those constraints that actually include the j-th variable. The first iterate is given by x_1 = x_0 + cv, and the process is repeated with x_1 in place of x_0. The algorithm terminates if the number (NINF) of constraints that violate the near-feasibility condition at the current iterate is zero. It also stops if the length of the consensus vector satisfies ‖cv‖ < β, where β is a given movement tolerance. Note that cv might be shorter than β even when the iterate is far from the feasible region if two feasibility vectors point in nearly opposite directions. Of course, the algorithm will also terminate if some maximum number of iterations is reached.
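
The component-wise averaging step can be sketched as follows (the feasibility vectors and the variable-involvement pattern are illustrative, not taken from the paper's example):

```python
import numpy as np

def original_consensus(feas_vectors, involves):
    """Original (basic) consensus vector.

    feas_vectors : list of feasibility vectors, one per violated constraint
    involves     : involves[i][j] is True when constraint i depends on x_j

    Component j is averaged only over the violated constraints that
    actually involve variable j."""
    n = len(feas_vectors[0])
    cv = np.zeros(n)
    for j in range(n):
        vals = [fv[j] for fv, inv in zip(feas_vectors, involves) if inv[j]]
        if vals:
            cv[j] = sum(vals) / len(vals)
    return cv

fvs = [np.array([2.0, 0.0]), np.array([4.0, -1.0])]
inv = [[True, True], [True, False]]   # second constraint does not use x2
print(original_consensus(fvs, inv))   # [3. 0.]
```

The first component averages both proposed movements, while the second uses only the constraint that involves x2.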

In , Ibrahim and Chinneck gave several variations of the original constraint consensus method. The best variant for finding near-feasible points appears to be the maximum direction-based (DBmax) constraint consensus method. Their numerical experiments show that DBmax has the best success rate for finding near-feasible points, and it also takes fewer iterations than most of the other methods. The DBmax constraint consensus method looks at the numbers of positive and negative values of each component of the feasibility vectors fv_i. For the j-th component, DBmax counts the constraints with a positive j-th entry in their feasibility vector and the constraints with a negative j-th entry. The sign with the larger number of constraints (votes) is considered the winner.

After determining the winning sign, DBmax creates the consensus vector cv by choosing each j-th component of cv to be the largest proposed movement in the winning direction. If a component has the same number of positive and negative votes, then DBmax takes the average of the largest proposed movements in the positive and negative directions. If the j-th entry is zero in every feasibility vector, then the j-th component of cv is set to 0. As in the case of the original consensus method, when finding the j-th component of the consensus vector in DBmax, only the subset of constraints that actually includes the j-th variable is considered. An example of the two methods for computing the consensus vector is as follows: In this example, constraint 4 does not depend on the first variable. Thus, the first component of the original consensus vector is obtained by averaging the first components of the other feasibility vectors. In the DBmax calculation, the winning sign is negative for the first component; in the second component, positive is the winning sign. There is a tie in the third component, so we take the average of the largest positive and largest negative movements. The fourth component also has no winning sign and shows a rare example where the component of the original consensus vector is larger (in absolute value) than the same component of the DBmax consensus vector.
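
The voting rule can be sketched as follows (the three feasibility vectors are illustrative and are not the paper's worked example):

```python
import numpy as np

def dbmax_consensus(feas_vectors, involves):
    """DBmax consensus vector: each component takes the largest proposed
    movement in the 'winning' sign direction; a tie vote averages the
    largest moves in each direction; all-zero entries give 0."""
    n = len(feas_vectors[0])
    cv = np.zeros(n)
    for j in range(n):
        vals = [fv[j] for fv, inv in zip(feas_vectors, involves) if inv[j]]
        pos = [v for v in vals if v > 0]
        neg = [v for v in vals if v < 0]
        if len(pos) > len(neg):
            cv[j] = max(pos)                  # largest positive movement
        elif len(neg) > len(pos):
            cv[j] = min(neg)                  # largest (most negative) movement
        elif pos and neg:                     # tie: average the two extremes
            cv[j] = 0.5 * (max(pos) + min(neg))
    return cv

fvs3 = [np.array([-2.0, 1.0]), np.array([-1.0, 3.0]), np.array([4.0, -2.0])]
inv3 = [[True, True]] * 3
print(dbmax_consensus(fvs3, inv3))   # [-2.  3.]
```

Here the first component has two negative votes against one positive, so the winner is the most negative movement; the second component is won by the positive sign.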

Usually, ‖cv‖ is larger for the DBmax method than for the original method. For this reason, the DBmax algorithm makes rapid progress toward the feasible region and significantly reduces the number of iterations needed to reach near-feasibility as compared to the original method . On the other hand, when the functions c_i are concave, the feasible region is convex, and component-averaging gradient-projection schemes (including the original constraint consensus method) have been proved to converge .

##### 2.2. Binary Words and Crossing Points

For simplicity of presentation, we assume that the functions c_i, i = 1, …, q, are concave over R^n. This ensures that the feasible region is convex.

For each i, define the characteristic function σ_i : R^n → {0, 1} by σ_i(x) = 0 if c_i(x) ≥ 0 and σ_i(x) = 1 otherwise. Define the binary word function w(x) = (σ_1(x), …, σ_q(x)). For each x, w(x) is a binary word. If w(x) = (0, …, 0), then x is a feasible point of the system. Furthermore, the number of ones in w(x) is the number of violated constraints at x. Consider the equivalence relation defined by x ~ y if w(x) = w(y). The equivalence classes form a partition of R^n (or a subset of R^n). Figure 1 shows an example with three constraints. The curves are the constraint boundaries c_i(x) = 0. The interior of this boundary is the convex region where w(x) = (0, 0, 0). In this example there are 8 equivalence classes. In general there are at most 2^q equivalence classes.
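
A minimal sketch of the binary word function, using two illustrative constraints on R^2 (not the constraints of Figure 1):

```python
import numpy as np

def binary_word(x, constraints):
    """Binary word of x: bit i is 0 when c_i(x) >= 0 holds, 1 when it is
    violated; summing the bits counts the violated constraints."""
    return [0 if c(x) >= 0 else 1 for c in constraints]

cons = [lambda x: 1.0 - x[0]**2 - x[1]**2,   # unit disk
        lambda x: x[0]]                      # half-plane x1 >= 0

w = binary_word(np.array([0.5, 0.0]), cons)
print(w, sum(w))                              # [0, 0] 0 -> feasible point
print(binary_word(np.array([-2.0, 0.0]), cons))  # [1, 1] -> both violated
```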

Suppose x_0 is a point in R^n and d is a direction vector (not necessarily the consensus vector). The crossing points of the constraint c_i(x) ≥ 0 on the ray x_0 + t d, t ≥ 0, are x_0 + t_1 d and x_0 + t_2 d, where t_1 and t_2 are positive solutions of c_i(x_0 + t d) = 0. There are at most 2 crossing points since c_i is concave. For general concave constraints these optimization problems are difficult, but, in the case of linear matrix inequalities, t_1 and t_2 are solutions to det(A(x_0 + t d)) = 0; hence they are generalized eigenvalues that satisfy A(x_0)v = −t B v, where B = d_1 A_1 + ⋯ + d_n A_n. Algorithms for finding the crossing points of an LMI can be found in [11, 12]. These algorithms depend on the sign of c_i(x_0). When c_i(x_0) < 0, the ray has 0, 1, or 2 crossing points; see Figure 2. If c_i(x_0) > 0, the ray has at most 1 crossing point.
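
The generalized-eigenvalue computation can be sketched as follows; for simplicity this assumes the pencil matrix B is invertible, which reduces the problem to an ordinary eigenvalue problem (the two-dimensional LMI at the end is an illustrative choice):

```python
import numpy as np

def lmi_crossing_points(A0, As, x0, d):
    """Crossing points of the LMI boundary det(A(x0 + t*d)) = 0 along a line.

    A(x) = A0 + sum_i x_i * A_i, so A(x0 + t*d) = A(x0) + t*B with
    B = sum_i d_i * A_i. Boundary crossings are the real generalized
    eigenvalues t of the pencil (A(x0), -B); assuming B is invertible,
    they are the eigenvalues of -B^{-1} A(x0)."""
    Ax0 = A0 + sum(xi * Ai for xi, Ai in zip(x0, As))
    B = sum(di * Ai for di, Ai in zip(d, As))
    t = np.linalg.eigvals(-np.linalg.solve(B, Ax0))
    return np.sort(t[np.isreal(t)].real)

# A(x) = diag(1 + x1, 1 - x1): feasible set is -1 <= x1 <= 1.
A0 = np.eye(2)
As = [np.diag([1.0, -1.0])]
print(lmi_crossing_points(A0, As, [0.0], [1.0]))  # [-1.  1.]
```

From the feasible point x_0 = 0 the line crosses the boundary at t = ±1, matching the feasible interval [−1, 1]; restricting to the ray t ≥ 0 leaves the single crossing t = 1, consistent with the "at most 1 crossing point" case above.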

#### 3. The Original Constraint Consensus and DBmax Constraint Consensus Methods for Linear Matrix Inequalities

This section develops the tools required to apply the original (basic) and DBmax constraint consensus methods described in Section 2.1 to LMI constraints. We give some theoretical results and show how to compute the gradient.

A symmetric matrix A is positive semidefinite if and only if its smallest eigenvalue, λ_min(A), is nonnegative. Thus the LMI system (1) can be written as λ_min(A^(j)(x)) ≥ 0, j = 1, …, q. For simplicity of presentation, in this section we consider a single LMI constraint A(x) ⪰ 0, which is equivalent to g(x) := λ_min(A(x)) ≥ 0.

Lemma 1. The function g(x) = λ_min(A(x)) is concave over R^n.

Proof. Theorem 3.5 in  states that −g(x) = λ_max(−A(x)) is convex over R^n. Therefore g is concave.

A consequence of Lemma 1 is that the feasible region for one LMI, or for a system of LMIs (1), is convex. Let ∇g(x) be the gradient vector of g.

Lemma 2. Suppose x_0 is an infeasible point of the constraint g(x) ≥ 0, that a feasible point of the constraint exists, and that ∇g(x_0), the gradient of g, exists at x_0. Then ∇g(x_0) ≠ 0.

Proof. Assume that ∇g(x_0) = 0. Since g is concave over R^n by Lemma 1, g(x) ≤ g(x_0) + ∇g(x_0)·(x − x_0) = g(x_0) for all x. Since x_0 is an infeasible point, g(x_0) < 0. Thus, g(x) < 0 for all x. This would contradict our assumption that a feasible point exists.

In the remainder of this section, we will assume that the hypotheses of Lemma 2 hold. Thus, the feasibility vector is fv = (−g(x_0)/‖∇g(x_0)‖²) ∇g(x_0).

Let h(t) = g(x_0 + t fv), for t ∈ R, be the value of g along the feasibility-vector line in R^n, and let ℓ(t) be the linearization of h at 0.

Theorem 3. Under the hypotheses of Lemma 2, the function h defined by h(t) = g(x_0 + t fv) is concave. The linearization of h at 0 satisfies ℓ(t) = g(x_0)(1 − t). In particular, ℓ(1) = 0.

Proof. The restriction of a concave function on R^n to a line is concave, so Lemma 1 implies that h is concave. Since h is concave, h(t) ≤ ℓ(t). Note that ℓ(0) = h(0) = g(x_0). By Lemma 2, ∇g(x_0) ≠ 0. Then, by the chain rule, the slope of ℓ is h′(0) = ∇g(x_0)·fv = −g(x_0). Thus, ℓ(t) = g(x_0)(1 − t).

An interpretation of ℓ(1) = 0 is that the normalization of fv was chosen so that x_0 + fv is the linear approximation of the boundary of the constraint. Typical graphs of h and ℓ are shown in Figure 3.

The following corollary shows that the distance (if it exists) from x_0 along fv to the boundary of the constraint is at least ‖fv‖.

Corollary 4. Under the hypotheses of Lemma 2, (a) g(x_0 + t fv) < 0 for all 0 ≤ t < 1. (b) Furthermore, if g is a linear constraint, then g(x_0 + fv) = 0; that is, x_0 + fv lies on the boundary of the constraint.

Proof. (a) Since x_0 is infeasible, g(x_0) < 0. By Theorem 3, h(t) ≤ ℓ(t) = g(x_0)(1 − t) for all t, and thus h(t) < 0 for all 0 ≤ t < 1. (b) If g is a linear constraint, then h is linear in t. Thus h(1) = ℓ(1) = 0 and g(x_0 + fv) = 0. Combined with part (a), this shows that x_0 + fv is a boundary point of the constraint.

Remark 5. Even when g is nonlinear, it is possible that x_0 + fv lies on the boundary. Consider the following LMI: A(x) = [[1 + x_1, x_2], [x_2, 1 − x_1]] ⪰ 0. The eigenvalues of A(x) are 1 ± ‖x‖, so g(x) = 1 − ‖x‖, and its gradient is ∇g(x) = −x/‖x‖ for x ≠ 0. The feasible region is the closed unit disk. Suppose x_0 is infeasible; that is, ‖x_0‖ > 1. A calculation shows that x_0 + fv = x_0/‖x_0‖. Thus, x_0 + fv is on the boundary of the feasible region, as shown in Figure 4. Theorem 3 says that x_0 + fv cannot be in the interior of the feasible region. Usually x_0 + fv is infeasible for nonlinear constraints. In the present example x_0 + fv is feasible because h is linear on [0, 1]. This motivates the following theorem.
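
The calculation in this remark can be verified numerically. The sketch below takes the unit-disk LMI, computes g via the smallest eigenvalue, forms the feasibility vector at an infeasible point, and checks that the step lands on the unit circle (the test point (3, 4) is an illustrative choice):

```python
import numpy as np

def g(x):
    """g(x) = lambda_min(A(x)) for A(x) = [[1+x1, x2], [x2, 1-x1]]; this
    equals 1 - ||x||."""
    A = np.array([[1 + x[0], x[1]], [x[1], 1 - x[0]]])
    return np.linalg.eigvalsh(A)[0]      # eigvalsh sorts ascending

def grad_g(x):
    """Gradient of g: -x/||x|| for x != 0."""
    x = np.asarray(x, dtype=float)
    return -x / np.linalg.norm(x)

x = np.array([3.0, 4.0])                 # infeasible: ||x|| = 5 > 1
gv = grad_g(x)
fv = (-g(x) / np.dot(gv, gv)) * gv       # feasibility vector
print(x + fv)                            # lands on the unit circle: [0.6 0.8]
```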

Theorem 6. Under the hypotheses of Lemma 2, h is linear over [0, 1] if and only if x_0 + fv is a boundary point of the constraint.

Proof. Suppose h is linear over [0, 1]. Then h(t) = ℓ(t) for 0 ≤ t ≤ 1. Thus h(1) = ℓ(1) = 0. Combined with Corollary 4(a), this implies that x_0 + fv is a boundary point of the constraint. On the other hand, assume that x_0 + fv is a boundary point. Then h(1) = 0 = ℓ(1). Thus, the concave function h agrees with its linearization ℓ at t = 0 and t = 1, and therefore h(t) = ℓ(t) for all 0 ≤ t ≤ 1.

To apply the constraint consensus methods to linear matrix inequalities, we need to be able to compute the gradient ∇g(x). This is given in the following results. We assume that the matrices A_0, …, A_n are each of dimension m × m.

Let λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_m be the eigenvalues of A(x), with corresponding orthonormal eigenvectors v_1, …, v_m.

Lemma 7. If λ_1 < λ_2, then the function g(x) = λ_min(A(x)) is differentiable at x and ∂g/∂x_i = v_1ᵀ A_i v_1.

Proof. Since λ_1 is simple and A(x) depends affinely on x, this follows from Corollary 3.10 in . The derivative of A(x) with respect to x_i is A_i, so ∂g/∂x_i = v_1ᵀ (∂A(x)/∂x_i) v_1 = v_1ᵀ A_i v_1.

By Lemma 7, the components of ∇g(x) are given by (∇g(x))_i = v_1ᵀ A_i v_1. The above results can now be used to apply the original and DBmax constraint consensus methods of Section 2.1 to the LMI constraints (1). Recall that these methods, on convergence, find near-feasible points, not necessarily strictly feasible points.
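
The gradient formula of Lemma 7 can be sketched directly with a symmetric eigendecomposition; the two-variable LMI used to exercise it is the unit-disk example of Remark 5, for which the gradient at x is known to be −x/‖x‖:

```python
import numpy as np

def lmi_gradient(A0, As, x):
    """Gradient of g(x) = lambda_min(A(x)) at x, valid when the smallest
    eigenvalue is simple (Lemma 7): dg/dx_i = v1^T A_i v1, where v1 is
    the unit eigenvector of the smallest eigenvalue."""
    Ax = A0 + sum(xi * Ai for xi, Ai in zip(x, As))
    eigvals, V = np.linalg.eigh(Ax)      # ascending eigenvalues
    v1 = V[:, 0]                         # eigenvector of lambda_min
    return np.array([v1 @ Ai @ v1 for Ai in As])

# Unit-disk LMI: A(x) = I + x1*diag(1,-1) + x2*[[0,1],[1,0]].
A0 = np.eye(2)
As = [np.diag([1.0, -1.0]), np.array([[0.0, 1.0], [1.0, 0.0]])]
print(lmi_gradient(A0, As, [3.0, 4.0]))  # [-0.6 -0.8], i.e. -x/||x||
```

The sign ambiguity of the computed eigenvector is harmless here, since the quadratic form v1ᵀ A_i v1 is unchanged by v1 → −v1.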

#### 4. New Constraint Consensus Algorithms

In this section, we modify and combine the original (basic) and DBmax constraint consensus methods described in [3, 9] to find strictly feasible points for our system of LMI constraints (1).

We note that strictly feasible points satisfy λ_min(A^(j)(x)) > 0 (i.e., A^(j)(x) is positive definite) for all j = 1, …, q and lie in the interior of the feasible region. We give four variations of the new algorithms: the Original-DBmax (OD), DBmax-DBmax (DD), Original-Original (OO), and DBmax-Original (DO) constraint consensus methods. Algorithm 1 is OD, Algorithm 2 is DD, Algorithm 3 is OO, and Algorithm 4 is DO. Each method has two phases: Phase 1 and Phase 2. For example, the OD method uses the original constraint consensus vector in Phase 1 and the DBmax constraint consensus vector in Phase 2.

INPUT: an initial point x_0, a feasibility distance tolerance α, a movement tolerance β, and maximum numbers of iterations for Phase 1 and Phase 2.
Phase 1: Run the original (basic) constraint consensus method, Algorithm 1 in , using the parameters α and β and starting from x_0, to get a near-feasible point. The last iterate of Phase 1 is the starting point of Phase 2.
Phase 2 (uses the DBmax consensus directions given in ):
  While the iteration limit is not reached and the current point is infeasible, do:
    Reset the vote counts and the largest proposed movements for each variable.
    For every constraint, do:
      If the constraint is violated, then:
        Calculate its feasibility vector.
        For every variable in the constraint, update the positive and negative vote counts and the largest proposed movements in each direction.
    For every variable, set the corresponding component of the consensus vector from the winning sign: the largest proposed movement in the winning direction, the average of the two extreme movements on a tie vote, and 0 if no movement was proposed.
    Determine the LMI crossing points t_1 < t_2 < ⋯ < t_p on the consensus ray, recording the constraint of each crossing point; if there are no crossing points, step along the consensus vector.
    Walk the crossing points in increasing order, flipping the bit of the crossed constraint in the binary word, and track the segment whose binary word has the fewest ones.
    Set the next iterate to the midpoint of that most-satisfied segment (the closest such segment in case of ties).
  If all constraints are satisfied, the final iterate is feasible.

INPUT: an initial point x_0, a feasibility distance tolerance α, a movement tolerance β, and maximum numbers of iterations for Phase 1 and Phase 2.
Phase 1: Run the DBmax constraint consensus method, Algorithm 4 described in , using the parameters α and β and starting from x_0, to get a near-feasible point. The last iterate of Phase 1 is the starting point of Phase 2.
Phase 2: as in the OD constraint consensus algorithm.

INPUT: an initial point x_0, a feasibility distance tolerance α, a movement tolerance β, and maximum numbers of iterations for Phase 1 and Phase 2.
Phase 1: as in the OD constraint consensus algorithm.
Phase 2 (uses the original basic consensus directions):
  While the iteration limit is not reached and the current point is infeasible, do:
    For every constraint, do:
      If the constraint is violated, then calculate its feasibility vector.
    For every variable, set the corresponding component of the consensus vector to the average of that component over the violated constraints that involve the variable.
    Determine the LMI crossing points t_1 < t_2 < ⋯ < t_p on the consensus ray, recording the constraint of each crossing point; if there are no crossing points, step along the consensus vector.
    Walk the crossing points in increasing order, flipping the bit of the crossed constraint in the binary word, and track the segment whose binary word has the fewest ones.
    Set the next iterate to the midpoint of that most-satisfied segment (the closest such segment in case of ties).
  If all constraints are satisfied, the final iterate is feasible.

INPUT: an initial point x_0, a feasibility distance tolerance α, a movement tolerance β, and maximum numbers of iterations for Phase 1 and Phase 2.
Phase 1: as in the DD constraint consensus algorithm.
Phase 2: as in the OO constraint consensus algorithm.

Phase 1 is the classical constraint consensus method given in [3, 9]. The starting point of Phase 1 is a random point. Phase 1 stops when a near-feasible point is found (i.e., ‖fv_i‖ ≤ α for all i), when the consensus vector is shorter than the movement tolerance (‖cv‖ < β), or when the maximum number of iterations is reached.

After exiting Phase 1, the algorithm goes to Phase 2 to try to find a strictly feasible (not just near-feasible) point. The first iterate of Phase 2 is the last iterate of Phase 1. Phase 2 then generates a consensus vector determined by all the unsatisfied constraints; in other words, the feasibility distance tolerance is set to 0 in Phase 2. Then, all the crossing points of the constraints on the consensus ray are computed (see Figure 5), using the method described in .

The corresponding binary words on each line segment of the consensus ray are determined efficiently by starting with the binary word of the current iterate x_k and flipping the i-th bit when constraint i is crossed. This assumes that the consensus ray actually crosses a boundary curve whenever it touches the curve. The next iterate x_{k+1} is the midpoint of the most-satisfied segment, that is, the segment with the most zeros in its binary word. If the most-satisfied segment is a ray going to infinity, then the next iterate is the last crossing point plus a fixed increment. If there are no crossing points on the consensus ray, then the iterate simply moves along the consensus vector. Algorithms 1 and 3 handle these cases by defining t_1 < ⋯ < t_p to be the crossing points and then adding one extra point after t_p. If more than one segment has the maximum number of zeros in its binary word, then x_{k+1} is taken in the line segment closest to x_k. The process is repeated with x_{k+1}, and so on, until a feasible point of the system is found or the maximum number of iterations is reached.
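
One Phase 2 step can be sketched as follows. The extra point appended after the last crossing (here t_p + 2) and the full-step fallback when there are no crossings are illustrative stand-ins for the values used in the paper, which are not specified in this extract:

```python
import numpy as np

def phase2_step(x, cv, crossings, word0):
    """One Phase 2 move along the consensus ray x + t*cv, t >= 0.

    crossings : list of (t_i, constraint_index) with 0 < t_1 < ... < t_p
    word0     : binary word at x (1 = violated constraint)

    Walks the segments of the ray, flipping bit i whenever constraint i
    is crossed, and returns the midpoint of the closest segment with the
    fewest violated constraints."""
    x, cv = np.asarray(x, dtype=float), np.asarray(cv, dtype=float)
    if not crossings:
        return x + cv                      # no boundary ahead: take a full step
    ts = [0.0] + [t for t, _ in crossings] + [crossings[-1][0] + 2.0]
    word = list(word0)
    best_t, best_viol = 0.5 * (ts[0] + ts[1]), sum(word)
    for k, (_, i) in enumerate(crossings):
        word[i] ^= 1                       # crossing constraint i flips its bit
        mid = 0.5 * (ts[k + 1] + ts[k + 2])
        if sum(word) < best_viol:          # strict '<' keeps the closest tie
            best_viol, best_t = sum(word), mid
    return x + best_t * cv

# One-dimensional illustration: two violated constraints whose boundaries
# are crossed at t = 1 and t = 3; the segment t > 3 satisfies both.
print(phase2_step([0.0], [1.0], [(1.0, 0), (3.0, 1)], [1, 1]))  # [4.]
```

In the example the binary words along the ray are (1,1), (0,1), (0,0), so the step lands at the midpoint of the final (fully satisfied) segment.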

#### 5. Numerical Experiments I: Comparing OD, DD, OO, and DO Methods

In this section, we check the effectiveness of the four modified constraint consensus algorithms on benchmark problems, as well as many new randomly generated problems. In addition to measuring the success in finding strictly feasible points, we also compare their performances on the number of iterations and time to find a feasible point.

All numerical experiments were done on a Dell OptiPlex 980 computer with codes written in MATLAB. The list of test problems is given in Table 1. The columns give the dimension n, the number of LMI constraints q, and the sizes of the matrices. We randomly generated problems 1 through 13, whereas the others are taken from the SDPLIBRARY , as listed in the last column. For each random problem, the dimension, the number of constraints, and the matrix sizes were fixed in advance rather than chosen at random. However, for each problem, each LMI was generated randomly as follows: the constant matrix is a diagonal matrix with each diagonal entry chosen at random from a fixed positive range, and each coefficient matrix is a random symmetric and sparse matrix generated with a Matlab command for sparse random symmetric matrices. This ensures that, for each of the 13 random problems, the origin is an interior point. The LMIs of problem 1 are given in (14). The boundaries of the four LMI constraints are shown in Figure 6.

| Problem | Dimension (n) | Number of constraints (q) | Size of matrices | Source |
| --- | --- | --- | --- | --- |
| 1 | 2 | 4 | | Random |
| 2 | 2 | 5 | | Random |
| 3 | 2 | 10 | | Random |
| 4 | 3 | 5 | | Random |
| 5 | 2 | 17 | | Random |
| 6 | 20 | 25 | | Random |
| 7 | 20 | 13 | | Random |
| 8 | 20 | 30 | | Random |
| 9 | 20 | 40 | | Random |
| 10 | 10 | 26 | | Random |
| 11 | 15 | 50 | | Random |
| 12 | 15 | 100 | | Random |
| 13 | 7 | 8 | | Random |
| 14 | 21 | 2 | | control1 |
| 15 | 66 | 2 | | control2 |
| 16 | 136 | 2 | | control3 |
| 17 | 174 | 2 | | arch0 |
| 18 | 13 | 3 | | hinf01 |
| 19 | 251 | 1 | 250 | gpp250-1 |
| 20 | 100 | 1 | 100 | mcp100 |
| 21 | 124 | 1 | 124 | mcp124-1 |
| 22 | 250 | 1 | 250 | mcp250-1 |
| 23 | 10 | 1 | 30 | infd2 |
| 24 | 101 | 1 | 100 | gpp100 |
| 25 | 125 | 1 | 124 | gpp124-1 |

We ran each of the test problems from Table 1 with 100 random initial conditions. Each component of the initial point was chosen from a normal distribution with mean 0 and a fixed variance. We used the four algorithms with fixed tolerances α and β. We will later discuss the effect of the choices of α and β on the performance of the algorithms. We used a fixed maximum number of iterations in Phase 1 and in Phase 2. Tables 2, 3, 4, and 5 give the results of these experiments. The "Exit" column gives the percentage of the 100 trials that completed Phase 1 successfully (i.e., Phase 1 converged within the iteration limit). The "Feas Pt found?" column shows the percentage of the trials that found a strictly feasible point in Phase 2. The average iterations and times for the successful trials in Phase 1 and Phase 2 are also given for this set of test problems. Note that when Phase 1 finds a strictly feasible point, then no iterations of Phase 2 are performed. This is the reason that the average iterations in Phase 2 are sometimes less than 1. Phase 2 rarely converged in more than 4 iterations, even when we raised the maximum number of iterations. Note that each of problems 19–25 has only one constraint, and on a single constraint the four methods coincide. Hence, we omitted the results for these problems in Tables 3–5. The last row at the bottom of each table gives the average of each column over all 25 problems, even though results for problems 19–25 are not listed in Tables 3–5.

| Problem | Phase 1 Iter | Phase 1 Time (sec) | Exit | Phase 2 Iter | Phase 2 Time (sec) | Feas Pt found? | Total time (sec) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 20.4 | 0.008 | 100% | 1.0 | 0.002 | 87% | 0.010 |
| 2 | 12.7 | 0.007 | 100% | 1.0 | 0.002 | 100% | 0.009 |
| 3 | 11.0 | 0.012 | 100% | 1.0 | 0.003 | 100% | 0.015 |
| 4 | 46.0 | 0.023 | 100% | 1.4 | 0.002 | 100% | 0.026 |
| 5 | 189.8 | 4.203 | 100% | 5.2 | 0.188 | 17% | 4.391 |
| 6 | 43.3 | 0.056 | 100% | 2.8 | 0.008 | 46% | 0.064 |
| 7 | 219.2 | 1.996 | 100% | 5.9 | 0.076 | 17% | 2.072 |
| 8 | 219.5 | 2.012 | 100% | 6.0 | 0.077 | 15% | 2.089 |
| 9 | 77.0 | 1.000 | 100% | 5.2 | 0.094 | 36% | 1.094 |
| 10 | 87.2 | 0.408 | 100% | 3.4 | 0.029 | 35% | 0.437 |
| 11 | 113.9 | 1.405 | 100% | 4.0 | 0.077 | 46% | 1.482 |
| 12 | 23.8 | 0.070 | 100% | 1.4 | 0.010 | 100% | 0.080 |
| 13 | 122.6 | 0.992 | 100% | 5.4 | 0.061 | 19% | 1.053 |
| 14 | 109.6 | 0.242 | 100% | 1.4 | 0.005 | 61% | 0.247 |
| 15 | 266.0 | 1.881 | 100% | 1.2 | 0.014 | 69% | 1.895 |
| 16 | 178.3 | 2.888 | 100% | 1.9 | 0.049 | 48% | 2.937 |
| 17 | 103.8 | 3.921 | 100% | 1.0 | 0.064 | 100% | 3.985 |
| 18 | 142.2 | 0.275 | 100% | 1.4 | 0.005 | 40% | 0.280 |
| 19 | 128.3 | 18.378 | 100% | 1.5 | 0.431 | 78% | 18.809 |
| 20 | 61.9 | 0.583 | 100% | 1.8 | 0.040 | 100% | 0.623 |
| 21 | 81.8 | 1.021 | 100% | 4.0 | 0.153 | 66% | 1.174 |
| 22 | 163.1 | 6.892 | 100% | 3.5 | 0.869 | 64% | 7.761 |
| 23 | 108.2 | 0.209 | 95% | 1.2 | 0.007 | 73% | 0.216 |
| 24 | 53.1 | 0.801 | 100% | 1.4 | 0.037 | 65% | 0.838 |
| 25 | 65.2 | 1.601 | 100% | 1.5 | 0.073 | 72% | 1.674 |
| Average | 105.9 | 2.035 | 100% | 2.6 | 0.095 | 62% | 2.130 |
| Problem | Phase 1 Iter | Phase 1 Time (sec) | Exit | Phase 2 Iter | Phase 2 Time (sec) | Feas Pt found? | Total time (sec) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 11.7 | 0.006 | 100% | 1.1 | 0.002 | 99% | 0.008 |
| 2 | 8.0 | 0.005 | 100% | 0.9 | 0.002 | 98% | 0.007 |
| 3 | 11.6 | 0.013 | 100% | 0.9 | 0.003 | 100% | 0.016 |
| 4 | 42.5 | 0.023 | 100% | 1.1 | 0.002 | 100% | 0.025 |
| 5 | 29.2 | 0.671 | 100% | 2.7 | 0.099 | 58% | 0.770 |
| 6 | 20.5 | 0.029 | 100% | 2.0 | 0.006 | 87% | 0.035 |
| 7 | 44.9 | 0.418 | 100% | 3.5 | 0.045 | 44% | 0.463 |
| 8 | 45.5 | 0.432 | 100% | 3.5 | 0.049 | 39% | 0.481 |
| 9 | 27.0 | 0.379 | 100% | 3.4 | 0.072 | 63% | 0.451 |
| 10 | 25.4 | 0.126 | 100% | 2.0 | 0.018 | 44% | 0.144 |
| 11 | 26.0 | 0.337 | 100% | 2.2 | 0.043 | 74% | 0.380 |
| 12 | 18.3 | 0.057 | 100% | 1.1 | 0.008 | 99% | 0.065 |
| 13 | 32.3 | 0.269 | 100% | 3.4 | 0.039 | 47% | 0.308 |
| 14 | 114.4 | 0.253 | 100% | 1.4 | 0.005 | 80% | 0.258 |
| 15 | 285.0 | 1.984 | 99% | 1.6 | 0.018 | 86% | 2.002 |
| 16 | 226.6 | 3.613 | 100% | 1.2 | 0.032 | 78% | 3.645 |
| 17 | 46.9 | 1.786 | 100% | 1.0 | 0.073 | 100% | 1.859 |
| 18 | 102.9 | 0.197 | 100% | 1.9 | 0.006 | 47% | 0.203 |
| Average | 71.2 | 1.603 | 100% | 2.0 | 0.085 | 74% | 1.688 |
| Problem | Phase 1 Iter | Phase 1 Time (sec) | Exit | Phase 2 Iter | Phase 2 Time (sec) | Feas Pt found? | Total time (sec) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 20.4 | 0.008 | 100% | 1.1 | 0.002 | 87% | 0.010 |
| 2 | 12.7 | 0.007 | 100% | 1.0 | 0.003 | 100% | 0.010 |
| 3 | 11.0 | 0.012 | 100% | 1.0 | 0.004 | 100% | 0.016 |
| 4 | 46.0 | 0.024 | 100% | 1.4 | 0.003 | 100% | 0.027 |
| 5 | 192.7 | 4.276 | 100% | 2.6 | 0.098 | 64% | 4.374 |
| 6 | 40.1 | 0.053 | 100% | 2.1 | 0.007 | 94% | 0.060 |
| 7 | 226.5 | 2.080 | 100% | 2.7 | 0.036 | 59% | 2.116 |
| 8 | 224.6 | 2.079 | 100% | 2.6 | 0.036 | 65% | 2.115 |
| 9 | 78.4 | 1.167 | 100% | 3.0 | 0.059 | 55% | 1.226 |
| 10 | 92.6 | 0.435 | 100% | 2.0 | 0.019 | 80% | 0.454 |
| 11 | 116.2 | 1.435 | 100% | 2.6 | 0.052 | 72% | 1.487 |
| 12 | 23.8 | 0.071 | 100% | 1.2 | 0.009 | 100% | 0.080 |
| 13 | 124.3 | 1.006 | 100% | 2.8 | 0.034 | 72% | 1.039 |
| 14 | 110.5 | 0.246 | 100% | 1.4 | 0.007 | 61% | 0.253 |
| 15 | 265.1 | 1.870 | 100% | 1.3 | 0.015 | 67% | 1.886 |
| 16 | 173.7 | 2.814 | 100% | 1.9 | 0.049 | 48% | 2.863 |
| 17 | 103.9 | 3.935 | 100% | 1.0 | 0.069 | 100% | 4.004 |
| 18 | 164.1 | 0.316 | 100% | 1.5 | 0.005 | 36% | 0.321 |
| Average | 107.5 | 2.053 | 100% | 1.9 | 0.085 | 75% | 2.138 |
| Problem | Phase 1 Iter | Phase 1 Time (sec) | Exit | Phase 2 Iter | Phase 2 Time (sec) | Feas Pt found? | Total time (sec) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 11.7 | 0.006 | 100% | 1.2 | 0.003 | 99% | 0.009 |
| 2 | 8.0 | 0.005 | 100% | 1.0 | 0.002 | 97% | 0.007 |
| 3 | 11.6 | 0.013 | 100% | 0.9 | 0.003 | 100% | 0.016 |
| 4 | 42.5 | 0.022 | 100% | 1.1 | 0.002 | 100% | 0.024 |
| 5 | 29.2 | 0.671 | 100% | 2.4 | 0.090 | 67% | 0.761 |
| 6 | 20.5 | 0.029 | 100% | 1.9 | 0.006 | 91% | 0.035 |
| 7 | 45.1 | 0.419 | 100% | 2.8 | 0.036 | 62% | 0.455 |
| 8 | 45.3 | 0.426 | 100% | 2.6 | 0.035 | 69% | 0.461 |
| 9 | 27.1 | 0.370 | 100% | 2.6 | 0.050 | 75% | 0.420 |
| 10 | 25.0 | 0.123 | 100% | 2.1 | 0.017 | 54% | 0.140 |
| 11 | 26.0 | 0.340 | 100% | 2.0 | 0.039 | 86% | 0.379 |
| 12 | 18.3 | 0.057 | 100% | 1.2 | 0.009 | 98% | 0.066 |
| 13 | 32.1 | 0.267 | 100% | 2.7 | 0.031 | 64% | 0.298 |
| 14 | 115.3 | 0.255 | 100% | 1.4 | 0.005 | 80% | 0.260 |
| 15 | 285.0 | 1.983 | 99% | 1.6 | 0.018 | 86% | 2.001 |
| 16 | 225.5 | 3.598 | 100% | 1.3 | 0.033 | 76% | 3.631 |
| 17 | 46.9 | 1.828 | 100% | 1.0 | 0.066 | 100% | 1.894 |
| 18 | 98.7 | 0.191 | 100% | 1.6 | 0.005 | 51% | 0.196 |
| Average | 71.0 | 1.604 | 100% | 1.9 | 0.082 | 79% | 1.686 |

Figure 7 compares OO, OD, DO, and DD for the 18 test problems that have more than one constraint. The results show that method O usually has a higher success rate in Phase 2, regardless of which method is used in Phase 1. The success rate plotted is the fraction of random initial guesses that yield a feasible point. In the left figure, method D is used in Phase 1, and on the right, method O is used in Phase 1. The circles connected by solid lines (method O in Phase 2) have a higher success rate than the squares connected by dotted lines (method D in Phase 2) in almost all of the cases. A comparison of the two methods that use O in Phase 2 is given in Figure 8. In that figure, the circles use method O in Phase 1, and the squares use method D in Phase 1. The success rate is slightly better with DO, and the total time is much better with DO. Since Figure 7 shows that O is clearly better in Phase 2, we conclude that DO is the best of the four methods (OO, OD, DO, and DD) we tested. So, we see from the results in Tables 2–5 and Figures 7 and 8 that DO is the most successful and OD is the least successful. On the whole, the DBmax method works better in Phase 1, and the original method is better in Phase 2. Note that DO finds a strictly feasible point with an average success rate of 79% over the 25 problems and 75.7% over the 12 benchmark problems.

However, the number of problems used is too small to make a definitive comparison. To compare these methods better, we generated and used a set of 500 random problems. For each problem, the dimension, the number of constraints, and the matrix sizes are integers chosen uniformly from fixed intervals. In all of the 500 problems, each LMI was generated as before: the constant matrix is a diagonal matrix with each diagonal entry chosen at random from a fixed positive range, and each coefficient matrix is a random symmetric and sparse matrix generated with a Matlab command for sparse random symmetric matrices. This ensures that each of our test problems is random and that the origin is an interior point for each problem.

For each of the 500 random problems, one starting point was chosen at random and used for each of our four algorithms. Each component of the initial point was chosen from a normal distribution with mean 0 and a fixed variance. The results are given in Table 6. They confirm that the DO algorithm is the most successful in finding strictly feasible points. Another interesting result of this experiment, not shown in Table 6, is that the DBmax method found a strictly feasible point already in Phase 1 in 5 of the 500 problems, whereas the original method never found a strictly feasible point in Phase 1 on any of these problems.

| Algorithm | Success rate | Average time (sec) |
| --- | --- | --- |
| OD | 31.6% | 1.336 |
| DD | 46.6% | 0.527 |
| OO | 50.8% | 1.343 |
| DO | 54.2% | 0.502 |

The results show that DO is the best method (in percentage of successes), followed by OO, then DD, then OD. In other words, D is better in Phase 1 and O is better in Phase 2. This suggests that the original consensus directions are better than the DBmax search directions for iterates close to the feasible region, while the reverse holds for iterates far away from the feasible region. The fastest is DO, followed by DD, then OD, then OO.

To obtain some geometrical understanding of how Phase 2 can fail, we include detailed figures of problem 1. This problem also allows a visual comparison of the original and DBmax methods. Figure 9 shows how the consensus vector is computed at the initial point and shows the first few steps of Phase 1. Constraint 2 is satisfied at this point. The original consensus vector is the average of the three feasibility vectors of the violated constraints, whereas the DBmax consensus vector takes each coordinate from the largest movement in the winning direction. The middle figure shows the first 8 iterations of the original constraint consensus method, and the right figure shows the first 4 iterations of the DBmax method. Note how the original method converges more slowly, whereas the DBmax method follows a more erratic path. The erratic path of the DBmax method suggests that the original method gives a consensus ray (used in Phase 2) that is more likely to intersect the feasible region.

It is notable that Phase 2 does not always converge in one iteration when Phase 1 stops at a near-feasible point. For problem 1 this is seen in Figure 10, which shows the end of Phase 1 and Phase 2 for the OO algorithm. In this figure, Phase 1 points are dots and Phase 2 points are circles. The left figure starts near the end of Phase 1, and the first points of Phase 2 are visible. The detailed figure on the right shows the end of Phase 1 and the initial point of Phase 2. Note that the second-to-last Phase 1 point is within the feasibility tolerance of the constraint 4 boundary, so the consensus vector at that point reduces to the feasibility vector for constraint 1. As a result, the next point is almost on the boundary of constraint 1, and Phase 1 ends with all constraints satisfied (2 and 3) or approximately satisfied (1 and 4). The final point of Phase 1 is the initial point of Phase 2, as indicated by the circle.

In Figure 10, the consensus vector calculated at the initial point of Phase 2 is the average of two feasibility vectors, but one of them dominates, and the consensus ray does not intersect the feasible region. For this problem, Phase 2 finds a feasible point in two steps, as seen in the figure on the left, but for other problems one step of Phase 2 can move the point very far from the feasible region. In practice, we find that Phase 2 either converges in three or fewer steps or does not converge at all. In this example, the consensus rays at several of the earlier iterates all intersect the feasible region. With a different choice of the feasibility tolerance, Phase 1 would have stopped at one of these points, and Phase 2 would have found a strictly feasible point in one iteration. The figure shows how the consensus ray can miss the feasible region when the initial point of Phase 2 is too close to the feasible region. The consensus ray based on the DBmax consensus vector misses the feasible region more often than the one based on the original consensus vector, which is why the original method is preferred in Phase 2.
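The Phase 2 step of choosing, along the consensus ray, the segment on which the most constraints hold can be illustrated with a small endpoint sweep. The helper below is hypothetical (not from the paper): each constraint contributes the interval(s) of ray parameters t on which it is satisfied, and a left-to-right sweep over the interval endpoints returns a parameter covered by the largest number of intervals.

```python
def best_segment(satisfied_intervals):
    """Sweep interval endpoints and return (t, count): a ray parameter t
    lying in the largest number of 'constraint satisfied' intervals.

    satisfied_intervals: list (one entry per constraint) of lists of
    closed intervals (a, b) with a <= b.
    """
    events = []
    for ivs in satisfied_intervals:
        for a, b in ivs:
            events.append((a, +1))   # interval opens
            events.append((b, -1))   # interval closes
    # process opens before closes at equal t, so touching intervals count
    events.sort(key=lambda e: (e[0], -e[1]))
    best_t, best_count, count = None, 0, 0
    for t, delta in events:
        count += delta
        if delta > 0 and count > best_count:
            best_t, best_count = t, count
    return best_t, best_count

t, k = best_segment([[(0, 5)], [(1, 3)], [(2, 4)]])
```

In the example, the three intervals overlap only on [2, 3], so the sweep reports a parameter where all three constraints hold at once.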

The example shown in Figures 9 and 10 suggests that the choice of the tolerance parameters matters. To further investigate the interplay between the two parameters, we ran the DO algorithm on problems 14 and 18 with 100 randomly chosen initial conditions, for several choices of the parameters. The results are shown in Tables 7 and 8. One combination works best for problem 14, but a different combination is best for problem 18. This shows that the optimal parameter values are problem dependent, at least for our two examples. In Chinneck and Ibrahim's papers [3, 9], where the goal is to find a near-feasible point in Phase 1, one tolerance is usually chosen to be much smaller than the other.

Table 7: Number of successes (out of 100 trials) of the DO algorithm on problem 14 for combinations of the two tolerance parameters.

| | 0.00001 | 0.0001 | 0.001 | 0.01 | 0.1 |
|---|---|---|---|---|---|
| 0.00001 | 65 | 74 | 82 | 80 | 78 |
| 0.0001 | 71 | 72 | 81 | 80 | 78 |
| 0.001 | 78 | 78 | 79 | 80 | 78 |
| 0.01 | 78 | 78 | 78 | 78 | 78 |
| 0.1 | 74 | 74 | 74 | 74 | 75 |
Table 8: Number of successes (out of 100 trials) of the DO algorithm on problem 18 for combinations of the two tolerance parameters.

| | 0.00001 | 0.0001 | 0.001 | 0.01 | 0.1 |
|---|---|---|---|---|---|
| 0.00001 | 55 | 46 | 45 | 55 | 48 |
| 0.0001 | 55 | 51 | 54 | 53 | 46 |
| 0.001 | 51 | 38 | 52 | 62 | 49 |
| 0.01 | 44 | 45 | 42 | 60 | 55 |
| 0.1 | 41 | 52 | 43 | 37 | 55 |

#### 6. Numerical Experiments II: Comparing DO with Another Method

In this section, we compare our DO method with the method of alternating projections (MAP) given as Algorithm 3.1 in [6]. The MAP method works well in finding feasible points for LMIs and has been found to perform even better than SeDuMi [14] on certain types of LMI constraints. It has the added advantage that it allows the user to input a starting point, which enables us to compare it with DO from the same starting points.

A brief description of the MAP method is as follows. Recall that our problem is to find an interior point of the system of LMI constraints. The system is equivalent to a single LMI whose coefficient matrices are block diagonal, with the coefficient matrices of the individual constraints as the diagonal blocks; its size is the sum of the sizes of the individual constraint matrices.
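The reduction to a single LMI can be sketched as follows; this is an illustration assuming constraint j is given by its coefficient matrices A_0^(j), ..., A_n^(j): the k-th coefficient matrix of the combined LMI is block diagonal, built from the k-th coefficient matrices of all constraints.

```python
import numpy as np

def stack_lmis(constraints):
    """Combine several LMI constraints into one block-diagonal LMI.

    constraints: list where entry j is the list [A0_j, A1_j, ..., An_j]
    of symmetric coefficient matrices of constraint j.
    Returns the coefficient matrices of the single combined LMI.
    """
    n_coeff = len(constraints[0])
    combined = []
    for k in range(n_coeff):
        blocks = [np.atleast_2d(np.asarray(c[k], dtype=float)) for c in constraints]
        size = sum(b.shape[0] for b in blocks)
        big = np.zeros((size, size))
        r = 0
        for b in blocks:
            m = b.shape[0]
            big[r:r + m, r:r + m] = b  # place block on the diagonal
            r += m
        combined.append(big)
    return combined

# two scalar (1x1) constraints: 1 + 2x >= 0 and 3 - x >= 0
A = stack_lmis([[[[1.0]], [[2.0]]], [[[3.0]], [[-1.0]]]])
```

A point is strictly feasible for the combined LMI exactly when it is strictly feasible for every individual constraint, since the eigenvalues of a block-diagonal matrix are the eigenvalues of its blocks.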

Consider the affine space associated with the single LMI constraint and the cone of positive semidefinite matrices of the corresponding size. Algorithm 3.1 (MAP) in [6] seeks a point in the interior of the intersection of these two closed convex sets. The algorithm generates a sequence of points that converges into the interior by alternating projections onto the two sets. In our implementation of MAP, we exploited the block-diagonal structure of the matrices to reduce computation time. The starting point in MAP can be any symmetric matrix. To allow comparison of MAP with DO, we picked starting points for MAP by choosing a random point as in Section 5 and using the corresponding symmetric matrix as the starting matrix.
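One half of the alternating step, the projection onto the positive semidefinite cone, is standard: diagonalize and clip the negative eigenvalues to zero, which yields the nearest PSD matrix in Frobenius norm. A minimal sketch (the projection onto the affine space is a separate least-squares computation not shown here):

```python
import numpy as np

def project_psd(S):
    """Project a symmetric matrix S onto the PSD cone by clipping its
    negative eigenvalues to zero (nearest PSD matrix in Frobenius norm)."""
    w, V = np.linalg.eigh(S)              # S = V diag(w) V^T
    return (V * np.maximum(w, 0.0)) @ V.T

P = project_psd(np.diag([2.0, -3.0]))
```

For the diagonal example above, the negative eigenvalue is simply replaced by zero.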

In their paper, the authors also used a conification procedure that turns the problem into that of finding a point in the interior of the intersection of two closed convex cones. We refer to Algorithm 3.1 applied to these cones as the MAPCON method. We choose starting points for MAPCON from the same random points as in Section 5, lifted to the conified setting. It has been proved in [6] that MAPCON converges after a finite number of iterations, while MAP only guarantees asymptotic convergence.

We compared DO with MAP and MAPCON on the 25 problems given in Section 5; for each method we ran 10 trials per problem and computed averages over the trials. We used the same parameters and Phase 1 and Phase 2 iteration limits for DO as in Section 5. To avoid excessively long computations, DO is stopped after reaching a maximum of 500 seconds. For MAP and MAPCON, a maximum of 510 iterations and a maximum of 500 seconds were used. For convergence, MAP and MAPCON are stopped when two consecutive iterates are within a tolerance distance of 0.01.

We used the same starting points for DO, MAP, and MAPCON, generated as in Section 5. The results for the 25 problems are given in Table 9. DO found a feasible point in each of the 25 problems, MAP in 16 problems, and MAPCON in 16 problems. For a fair comparison, the averages for the iterations and times were calculated over the 12 problems where DO, MAP, and MAPCON all found a strictly feasible point. In the table, we see that MAP and MAPCON were unable to find a strictly feasible point on some of the problems before reaching the iteration or time limit. An asterisk (*) in Table 9 indicates that the entry is irrelevant and was not recorded because the iteration or time limit was reached.

Table 9: Comparison of DO, MAP, and MAPCON on the 25 benchmark problems.

| Problem | DO found? | DO iter | DO time (sec) | MAP found? | MAP iter | MAP time (sec) | MAPCON found? | MAPCON iter | MAPCON time (sec) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 100% | 13.0 | 0.044 | 100% | 35.0 | 0.030 | 40% | 7.0 | 0.016 |
| 2 | 100% | 8.5 | 0.045 | 100% | 76.8 | 0.068 | 70% | 12.7 | 0.022 |
| 3 | 100% | 12.6 | 0.073 | 100% | 37.4 | 0.069 | 100% | 45.8 | 0.108 |
| 4 | 100% | 43.5 | 0.096 | 100% | 91.6 | 0.108 | 20% | 11.0 | 0.023 |
| 5 | 100% | 15.7 | 0.110 | 100% | 35.2 | 0.108 | 100% | 79.6 | 0.308 |
| 6 | 40% | 33.8 | 0.815 | 0% | * | >510 | 0% | >510 | * |
| 7 | 70% | 46.1 | 0.621 | 30% | 71.7 | 3.832 | 50% | 38.2 | 2.417 |
| 8 | 60% | 47.8 | 1.421 | 100% | 80.6 | 8.934 | 0% | >510 | * |
| 9 | 60% | 30.0 | 1.214 | 0% | >510 | * | 60% | 149.0 | 25.418 |
| 10 | 50% | 26.8 | 0.439 | 0% | >510 | * | 100% | 67.0 | 2.173 |
| 11 | 70% | 27.3 | 1.304 | 0% | >510 | * | 0% | >510 | * |
| 12 | 70% | 33.1 | 2.212 | 0% | >510 | * | 10% | 113.0 | 24.593 |
| 13 | 100% | 23.6 | 0.138 | 100% | 45.8 | 0.264 | 90% | 46.1 | 0.322 |
| 14 | 80% | 108.8 | 0.803 | 0% | >510 | * | 0% | >510 | * |
| 15 | 100% | 261.3 | 6.676 | 0% | >510 | * | 0% | >510 | * |
| 16 | 80% | 207.6 | 18.719 | 0% | * | >500 | 0% | * | >500 |
| 17 | 100% | 47.2 | 13.128 | 0% | * | >500 | 0% | * | >500 |
| 18 | 20% | 24.0 | 0.168 | 100% | 187.6 | 2.428 | 0% | >510 | * |
| 19 | 60% | 128.7 | 239.697 | 70% | 7.0 | 168.965 | 40% | 10.0 | 192.640 |
| 20 | 100% | 65.4 | 3.059 | 100% | 3.0 | 1.471 | 100% | 3.0 | 1.538 |
| 21 | 70% | 84.4 | 6.064 | 100% | 3.0 | 2.644 | 100% | 3.0 | 2.692 |
| 22 | 70% | 163.1 | 84.398 | 100% | 3.0 | 38.916 | 100% | 3.0 | 38.346 |
| 23 | 80% | 91.5 | 0.517 | 50% | 370.6 | 4.250 | 0% | >510 | * |
| 24 | 90% | 53.9 | 6.693 | 100% | 25.2 | 13.662 | 70% | 114.7 | 63.193 |
| 25 | 70% | 68.7 | 16.433 | 100% | 31.4 | 39.071 | 70% | 181.9 | 201.696 |
| Average | 77% | 56.8 | 29.738 | 58% | 32.9 | 22.1 | 45% | 43.1 | 41.742 |

We also compared the methods on a set A of 500 random problems, generated in the same manner as the 500 random problems of Section 5. Here we used smaller problems to reduce the total time of the experiments, as MAP and MAPCON took too much time on larger numbers of variables and constraints. For each problem, the number of variables, the number of constraints, and the sizes of the constraint matrices were integers chosen uniformly from fixed intervals. The results are given in Table 10. DO found a feasible point in 443 problems, MAP in 240 problems, and MAPCON in 238; all three methods succeeded on the same 143 problems, and the averages for the iterations and times were calculated over these 143 problems.

Table 10: Results on the 500 random problems of set A.

| Algorithm | Success rate | Average iterations | Average time (sec) |
|---|---|---|---|
| DO | 89% | 23.9 | 0.183 |
| MAP | 48% | 55.0 | 0.500 |
| MAPCON | 48% | 61.5 | 1.200 |

To see the effect of the number of variables relative to the number of constraints on our comparison of the three methods, we generated another set B of 500 random problems. Set B has fewer variables and more constraints than set A; for each problem, the number of variables, the number of constraints, and the sizes of the constraint matrices were again integers chosen uniformly from fixed intervals. The results are given in Table 11. DO found a feasible point in 467 problems, MAP in 206 problems, and MAPCON in 244; all three methods succeeded on the same 125 problems, and the averages for the iterations and times were calculated over these 125 problems.

Table 11: Results on the 500 random problems of set B.

| Algorithm | Success rate | Average iterations | Average time (sec) |
|---|---|---|---|
| DO | 93% | 16.6 | 0.122 |
| MAP | 41% | 35.7 | 0.238 |
| MAPCON | 49% | 205.2 | 1.876 |

The results in Tables 9, 10, and 11 show that DO has a higher success rate than MAP and MAPCON. The DO method also required fewer iterations and less time to find a strictly feasible point. This indicates that DO outperforms both MAP and MAPCON as an LMI feasibility method on our test problems. The results in Tables 10 and 11 also suggest that the advantage of DO over MAP and MAPCON grows when the number of variables is small relative to the number of LMI constraints. An unexpected consequence of our results is that MAPCON may not be a better method than MAP in practice.

#### 7. Conclusion

We apply, modify, and combine the constraint consensus methods of Chinneck and Ibrahim [3, 9] to find strictly feasible points of linear matrix inequality constraints. Phase 1 of our algorithm is the standard constraint consensus method, using either the original (O) or DBmax (D) consensus vector. At each step in Phase 2, we search the ray defined by the original or DBmax consensus vector for crossing points of the LMI constraints and move to the line segment on that ray where the most constraints are satisfied. We tested the four resulting algorithms, namely, Original-DBmax (OD) constraint consensus method, DBmax-DBmax (DD) constraint consensus method, Original-Original (OO) constraint consensus method, and DBmax-Original (DO) constraint consensus method. Our numerical experiments indicate that DO is the best method. It works well for the benchmark problems, finding a strictly feasible point with a success rate of 75.7% on average.

Further modifications could improve our algorithms. For example, if Phase 2 does not converge within 5 iterations, it will most likely not converge at all; in this case, the algorithm could give up and restart Phase 1 from a new, randomly chosen starting point. Heuristics for a good choice of the tolerance parameters could also improve success rates. Another possible modification is an algorithm that makes a gradual transition from Phase 1 to Phase 2. We compared DO with the method of alternating projections and found that DO outperforms it on average, especially on problems with a large number of constraints relative to the number of variables. In the future we plan to study the performance of DO as the problem sizes vary and to compare it with other methods such as SeDuMi and SDPT3.

Our four algorithms can be used with any differentiable constraints, provided the crossing points of each constraint boundary with a ray can be computed. The algorithms do not require the feasible region to be convex. We have implemented the algorithms for linear matrix inequalities, where the computation of crossing points is relatively straightforward.
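For an LMI, the crossing points of the ray x + t d with the constraint boundary are the real parameters t at which the matrix pencil A(x) + tB becomes singular, where B is the linear combination of the coefficient matrices determined by the direction d. Under the simplifying assumption that B is invertible (the general pencil case needs more care), these parameters are the negated eigenvalues of B^{-1}A(x), as this sketch illustrates:

```python
import numpy as np

def crossing_points(A_x, B):
    """Real parameters t with det(A_x + t*B) = 0, assuming B invertible.

    det(A_x + t*B) = det(B) * det(B^{-1} A_x + t*I), so the crossings
    are t = -lambda for each real eigenvalue lambda of B^{-1} A_x.
    """
    lam = np.linalg.eigvals(np.linalg.solve(B, A_x))
    t = np.atleast_1d(-lam)
    real = t[np.abs(t.imag) < 1e-9].real  # keep real crossings only
    return np.sort(real)

# A(x) = diag(1, 2) is strictly feasible; move along a direction with B = -I:
# A(x) + t*B = diag(1 - t, 2 - t) is singular at t = 1 and t = 2
t = crossing_points(np.diag([1.0, 2.0]), -np.eye(2))
```
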

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### References

1. M. J. Todd, “Semidefinite optimization,” Acta Numerica, vol. 10, pp. 515–560, 2001.
2. L. Vandenberghe and S. Boyd, “Semidefinite programming,” SIAM Review, vol. 38, no. 1, pp. 49–95, 1996.
3. J. W. Chinneck, “The constraint consensus method for finding approximately feasible points in nonlinear programs,” INFORMS Journal on Computing, vol. 16, no. 3, pp. 255–265, 2004.
4. R. B. Kearfott and J. Dian, “An iterative method for finding approximate feasible points,” Tech. Rep., Department of Mathematics, University of Southern Louisiana, Lafayette, La, USA, 2000.
5. P. Gahinet and A. Nemirovski, “The projective method for solving linear matrix inequalities,” Mathematical Programming, vol. 77, pp. 163–190, 1997.
6. M. A. Rami, U. Helmke, and J. B. Moore, “A finite steps algorithm for solving convex feasibility problems,” Journal of Global Optimization, vol. 38, no. 1, pp. 143–160, 2007.
7. D. Baang, “Improved ellipsoid algorithm for LMI feasibility problems,” International Journal of Control, Automation and Systems, vol. 7, no. 6, pp. 1015–1019, 2009.
8. Y. Censor and S. A. Zenios, Parallel Optimization: Theory, Algorithms, and Applications, Oxford University Press, New York, NY, USA, 1997.
9. W. Ibrahim and J. W. Chinneck, “Improving solver success in reaching feasibility for sets of nonlinear constraints,” Computers & Operations Research, vol. 35, no. 5, pp. 1394–1411, 2008.
10. B. Borchers, “SDPLIB 1.2, library of semidefinite programming test problems,” Optimization Methods and Software, vol. 11, no. 1–4, pp. 683–690, 1999.
11. S. Jibrin, “Generating uniform points on the boundary of bounded spectrahedron with applications to rank constrained linear matrix inequality problem,” International Journal of Applied Mathematics & Engineering Sciences, vol. 1, no. 1, pp. 27–38, 2007.
12. R. J. Caron, T. Traynor, and S. Jibrin, “Feasibility and constraint analysis of sets of linear matrix inequalities,” INFORMS Journal on Computing, vol. 22, no. 1, pp. 144–153, 2010.
13. M. L. Overton and R. S. Womersley, “Optimality conditions and duality theory for minimizing sums of the largest eigenvalues of symmetric matrices,” Mathematical Programming, vol. 62, no. 2, pp. 321–357, 1993.
14. J. Sturm, “Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones,” Optimization Methods and Software, vol. 11-12, pp. 625–653, 1999.