Journal of Optimization

Volume 2015, Article ID 790451, 16 pages

http://dx.doi.org/10.1155/2015/790451

## Constraint Consensus Methods for Finding Strictly Feasible Points of Linear Matrix Inequalities

Department of Mathematics and Statistics, Northern Arizona University, Flagstaff, AZ 86011-5717, USA

Received 25 July 2014; Revised 31 October 2014; Accepted 6 November 2014

Academic Editor: Manlio Gaudioso

Copyright © 2015 Shafiu Jibrin and James W. Swift. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We give algorithms for solving the strict feasibility problem for linear matrix inequalities. These algorithms are based on John Chinneck’s constraint consensus methods, in particular, the method of his original paper and the modified DBmax constraint consensus method from his paper with Ibrahim. Our algorithms start with one of these methods as “Phase 1.” Constraint consensus methods work for any differentiable constraints, but we take advantage of the structure of linear matrix inequalities. In particular, for linear matrix inequalities, the crossing points of each constraint boundary with the consensus ray can be calculated. In this way we check for strictly feasible points in “Phase 2” of our algorithms. We present four different algorithms, depending on whether the original (basic) or DBmax constraint consensus vector is used in Phase 1 and, independently, in Phase 2. We present results of numerical experiments that compare the four algorithms. The evidence suggests that one of our algorithms is the best, although none of them are guaranteed to find a strictly feasible point after a given number of iterations. We also give results of numerical experiments indicating that our best method compares favorably to a new variant of the method of alternating projections.

#### 1. Introduction

We consider the strict feasibility problem for linear matrix inequality (LMI) constraints. In particular, we seek a point in the interior of the region defined by LMI constraints of the form
$$A^{(j)}(x) = A^{(j)}_0 + \sum_{i=1}^{n} x_i A^{(j)}_i \succeq 0, \quad j = 1, 2, \ldots, q,$$
where the $A^{(j)}_i$ are symmetric matrices. The partial ordering $A \succeq 0$ means $A$ is positive semidefinite. In semidefinite programming problems, a linear function is optimized over a system of LMI constraints. For more information on semidefinite programming, including applications, refer to [1, 2]. Consider the feasible region
$$\mathcal{R} = \left\{ x \in \mathbb{R}^n : A^{(j)}(x) \succeq 0, \; j = 1, 2, \ldots, q \right\}.$$
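To make the setting concrete, the following minimal sketch (the helper names `lmi_value` and `is_strictly_feasible` are ours, not from the paper) evaluates an LMI written as $A(x) = A_0 + x_1 A_1 + \cdots + x_n A_n$ and tests strict feasibility by checking that each matrix is positive definite, i.e., its smallest eigenvalue is positive:

```python
import numpy as np

def lmi_value(x, A):
    """Evaluate A(x) = A[0] + x[0]*A[1] + ... + x[n-1]*A[n]."""
    return A[0] + sum(xi * Ai for xi, Ai in zip(x, A[1:]))

def is_strictly_feasible(x, lmis, tol=1e-10):
    """x is strictly feasible when every A^{(j)}(x) is positive definite,
    i.e., its smallest eigenvalue is (numerically) positive."""
    return all(np.linalg.eigvalsh(lmi_value(x, A)).min() > tol for A in lmis)

# One LMI: A(x) = I + x_1 * diag(1, -1); strictly feasible iff -1 < x_1 < 1.
A = [np.eye(2), np.diag([1.0, -1.0])]
```

Here $x = 0$ lies in the interior, while $x_1 = 1$ lies on the boundary (smallest eigenvalue zero) and is feasible but not strictly feasible.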

We assume that $\mathcal{R}$ has a nonempty interior. In this paper, we are interested in finding *strictly feasible* points, those in the interior of $\mathcal{R}$. Finding strictly feasible points is an important problem in interior point methods [3, 4]. Some methods need a starting point in the interior to proceed to optimality.

A projective method for solving the strict feasibility problem in the case of linear matrix inequalities (LMI’s) was proposed in [5]. Their method is based on the Dikin ellipsoids and successive projections onto a linear subspace. The cost per iteration in this approach is high because each iteration solves a least-squares problem. A different approach based on the method of alternating projections was given in [6]. The iterations in this method are also costly as they require eigenvalue-eigenvector decompositions. The approach described in [7] solves the LMI feasibility problem by computing a minimum-volume ellipsoid at each iteration. The convergence rate is slow, so this method is also expensive. In [8], several iterative methods were given for the convex feasibility problem. These techniques use an orthogonal or a subgradient projection at each iteration.

We study the original (basic) constraint consensus method and the DBmax constraint consensus method developed by Chinneck [3] and Ibrahim and Chinneck [9] for general nonlinear constraints to find near-feasible points. These methods also use gradient projection at each iteration to find a consensus vector that moves a point iteratively towards the feasible region, starting from an infeasible point. The main cost in these methods is computing gradients, so they are relatively cheap compared to the methods described in [5–7]. While the goal in [3, 9] is finding near-feasible points, ours is finding interior (strictly) feasible points. We apply and combine these methods to handle LMI constraints and find interior feasible points of $\mathcal{R}$ in two phases. Phase 1 is simply the original constraint consensus method or the DBmax constraint consensus method, used to find a near-feasible point.

In Phase 2, starting with the near-feasible point found in Phase 1, the algorithms use the concepts of crossing points and binary words to obtain a point in a most-satisfied interval along the ray of the consensus vector at each iteration. LMI constraints have the advantage that their crossing points are computable, whereas this is not possible for general nonlinear constraints. The goal in Phase 2 is to find an interior point of $\mathcal{R}$. We give four variations of the new algorithms: the *Original-DBmax (OD) constraint consensus method*, the *DBmax-DBmax (DD) constraint consensus method*, the *Original-Original (OO) constraint consensus method*, and the *DBmax-Original (DO) constraint consensus method*. The OD constraint consensus method uses the original constraint consensus method in Phase 1 and the DBmax search directions in Phase 2. The description of the other three methods is similar.

The paper is organized as follows. Section 2 gives a review of two of the constraint consensus methods introduced in [3, 9] and the computation of crossing points for LMIs. In Section 3 we apply the constraint consensus methods to a single LMI and present some theoretical results. For example, we provide necessary and sufficient conditions for the feasibility vector of a constraint to move the point to the boundary of that constraint. In Section 4 we give our four algorithms. Section 5 tests our algorithms on known benchmark problems. We also compare the four methods on a variety of randomly generated test problems. The results show that the DO constraint consensus method outperforms our three other algorithms; it takes fewer total iterations and less computing time and has the most success in finding strictly feasible points. We compared DO with the method of alternating projections and found DO to outperform it on average, especially on problems with a large number of LMI constraints relative to the number of variables. The concluding section has comments on how our algorithms could be improved and extended. While we focus on LMI constraints, our methods are applicable to other constraints, including nonconvex types, provided the gradients and the crossing points are computable.

#### 2. Background Material

This section discusses background material that will be needed in later sections: constraint consensus methods, binary words, and crossing points for nonlinear constraints.

We consider a system of inequality constraints of the form $c_j(x) \ge 0$, $j = 1, 2, \ldots, q$. For each $j$, $c_j$ is a nonlinear or linear function mapping $\mathbb{R}^n$ to $\mathbb{R}$.

##### 2.1. The Original Constraint Consensus and DBmax Constraint Consensus Methods

In this subsection, we describe the original constraint consensus and DBmax constraint consensus methods for general nonlinear constraints.

The original (basic) constraint consensus method was developed by Chinneck in [3] to find a near-feasible point for the given system. The method starts from an infeasible point $x$. The method associates each constraint $c_j(x) \ge 0$ with a *feasibility vector* $fv_j$, defined by
$$fv_j = \frac{v_j}{\left\| \nabla c_j(x) \right\|^2} \, \nabla c_j(x), \tag{3}$$
where $v_j = \max\{0, -c_j(x)\}$ is the *constraint violation* at $x$ and $\nabla c_j(x)$ is the gradient of $c_j$ at the point $x$. We assume that $\nabla c_j(x)$ exists and that $\nabla c_j(x) \ne 0$ if $x$ is infeasible. If $\nabla c_j(x) = 0$ and $v_j > 0$, then we define $fv_j = 0$ to resolve the ambiguity in (3). We will show in Section 3 that if $c_j$ is linear, then $x + fv_j$ is the point on the boundary of $c_j(x) \ge 0$ that is closest to $x$. The length of the feasibility vector, $\|fv_j\|$, is called the *feasibility distance*, and it is an estimate of the distance to the boundary. We define $x$ to be *near-feasible* with respect to $c_j(x) \ge 0$ if $\|fv_j\| \le \alpha$, where $\alpha > 0$ is a preassigned feasibility tolerance. We say that $x$ is *strictly feasible* with respect to $c_j(x) \ge 0$ if $c_j(x) > 0$. In summary, with respect to the constraint $c_j(x) \ge 0$, we say that (i) $x$ is *near-feasible* if $\|fv_j\| \le \alpha$; (ii) $x$ is *feasible* if $c_j(x) \ge 0$ or, equivalently, $fv_j = 0$; (iii) $x$ is *strictly feasible* if $c_j(x) > 0$.
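The feasibility vector for a single constraint $c(x) \ge 0$ can be sketched directly from the value and gradient at $x$ (a minimal illustration; the function name `feasibility_vector` is ours):

```python
import numpy as np

def feasibility_vector(c_val, grad):
    """fv = (v / ||grad||^2) * grad for a violated constraint c(x) >= 0,
    where v = |c(x)| is the violation; fv = 0 if the constraint holds
    (or if grad = 0, resolving the ambiguity when the gradient vanishes)."""
    grad = np.asarray(grad, dtype=float)
    if c_val >= 0:
        return np.zeros_like(grad)
    g2 = float(grad @ grad)
    if g2 == 0.0:
        return np.zeros_like(grad)
    return (abs(c_val) / g2) * grad

# Linear constraint c(x) = 1 - x1 - x2 at x = (1, 1): c = -1, grad = (-1, -1).
fv = feasibility_vector(-1.0, [-1.0, -1.0])
```

Adding `fv` $= (-0.5, -0.5)$ to $x = (1, 1)$ lands exactly on the boundary $c(x) = 0$, illustrating the closest-point property for linear constraints.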

The feasibility vectors at the initial point $x_0$ for all the constraints are combined to give a single *consensus vector* $t$. In the original constraint consensus method, this consensus vector is the average of all the feasibility vectors with length $\|fv_j\| > \alpha$. However, the $i$th component of the consensus vector is averaged over only the subset of those constraints that actually include the $i$th variable. The first iterate is given by $x_1 = x_0 + t$, and the process is repeated with $x_1$ in place of $x_0$. The algorithm terminates if the number (NINF) of constraints that violate the near-feasibility condition at the current point is zero. It also stops if the length of the consensus vector satisfies $\|t\| < \beta$, where $\beta$ is a given tolerance. Note that $\|t\|$ might be shorter than $\beta$ even when the current point is far from $\mathcal{R}$ if two feasibility vectors point in nearly opposite directions. Of course, the algorithm will also terminate if some maximum number of iterations is reached.
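The averaging step can be sketched as follows (a hedged illustration, not the paper's code; we assume the constraints combined are those with $\|fv_j\| > \alpha$, and `involves` marks which variables appear in which constraint):

```python
import numpy as np

def original_consensus(fvs, involves, alpha=1e-6):
    """Original (basic) consensus vector: the i-th component averages the
    i-th entries of the feasibility vectors of the constraints that are
    not yet near-feasible (||fv|| > alpha) AND involve variable i."""
    fvs = np.asarray(fvs, dtype=float)
    involves = np.asarray(involves)
    violated = np.linalg.norm(fvs, axis=1) > alpha
    t = np.zeros(fvs.shape[1])
    for i in range(fvs.shape[1]):
        mask = violated & involves[:, i]
        if mask.any():
            t[i] = fvs[mask, i].mean()
    return t

# Two violated constraints; constraint 0 does not involve variable 1,
# so t[1] averages over constraint 1 alone.
t = original_consensus([[2.0, 0.0], [4.0, 2.0]],
                       [[True, False], [True, True]])
```

In this toy example $t = (3, 2)$: the first component averages $2$ and $4$, while the second uses only the constraint that involves the second variable.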

In [9], Ibrahim and Chinneck gave several variations of the original constraint consensus method. The best variant for finding near-feasible points appears to be the maximum direction-based (DBmax) constraint consensus method. Their numerical experiments show that DBmax has the best success rate for finding near-feasible points. It also takes fewer iterations than most of the other methods. The DBmax constraint consensus method looks at the number of positive and negative values of each of the components of the feasibility vectors $fv_j$. For the $i$th component, DBmax finds the number of constraints with a positive $i$th entry in their feasibility vector and the number of constraints with a negative $i$th entry in their feasibility vector. The sign with the larger number of constraints (votes) is considered the winner.

After determining the winning sign, DBmax creates the consensus vector $t$ by choosing each $i$th component of $t$ to be the largest proposed movement in the winning direction. If a component has the same number of positive and negative votes, then DBmax takes the average of the largest proposed movements in the positive and negative directions. If the $i$th entry is zero in every feasibility vector, then $t_i$ is set to 0. As in the case of the original consensus method, when finding the $i$th component of the consensus vector in DBmax, only the subset of constraints that actually includes the $i$th variable is considered. As an illustration of the two methods, consider four feasibility vectors in which constraint 4 does not depend on the first variable. The first component of the original consensus vector $t$ is then obtained by averaging the first components of the feasibility vectors of constraints 1, 2, and 3. In the DBmax calculation, the winning sign is negative for the first component; in the second component, positive is the winning sign. There is a tie in the third component, so we take the average of the largest positive and largest negative proposed movements. The fourth component also has no winning sign and shows a rare example where the component of the original consensus vector is larger (in absolute value) than the same component of the DBmax consensus vector.
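The voting rule above can be sketched as follows (our illustration, with the same hedged assumptions as before: only constraints with $\|fv_j\| > \alpha$ vote, and `involves` marks variable membership):

```python
import numpy as np

def dbmax_consensus(fvs, involves, alpha=1e-6):
    """DBmax consensus vector: for each component, the sign with more votes
    among the (violated, involved) feasibility-vector entries wins, and t_i
    is the largest proposed movement in that direction; a tie averages the
    largest positive and largest negative movements."""
    fvs = np.asarray(fvs, dtype=float)
    involves = np.asarray(involves)
    violated = np.linalg.norm(fvs, axis=1) > alpha
    t = np.zeros(fvs.shape[1])
    for i in range(fvs.shape[1]):
        vals = fvs[violated & involves[:, i], i]
        pos, neg = vals[vals > 0], vals[vals < 0]
        if pos.size > neg.size:
            t[i] = pos.max()
        elif neg.size > pos.size:
            t[i] = neg.min()
        elif pos.size > 0:            # tie with votes on both sides
            t[i] = (pos.max() + neg.min()) / 2.0
        # else: every entry is zero, leave t[i] = 0
    return t

# Three violated constraints, all involving both variables.
t = dbmax_consensus([[-3.0, 1.0], [-1.0, 2.0], [2.0, -4.0]],
                    [[True, True]] * 3)
```

Here the negative sign wins the first component (two votes to one), giving $t_1 = -3$, and the positive sign wins the second, giving $t_2 = 2$; a one-against-one tie such as entries $1$ and $-2$ would yield $(1 + (-2))/2 = -0.5$.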

Usually, $\|t\|$ is larger for the DBmax method than for the original method. For this reason, the DBmax method makes rapid progress toward the feasible region and significantly reduces the number of iterations needed to reach near-feasibility as compared to the original method [9]. On the other hand, when the functions $c_j$ are concave, the feasible region is convex, and component-averaging gradient-projection schemes (including the original constraint consensus method) have been proved to converge [3].

##### 2.2. Binary Words and Crossing Points

For simplicity of presentation, we assume that the functions $c_j$, $j = 1, 2, \ldots, q$, are concave over $\mathbb{R}^n$. This will ensure that the feasible region is convex.

For each $j$, define the characteristic function $s_j : \mathbb{R}^n \to \{0, 1\}$ by
$$s_j(x) = \begin{cases} 1 & \text{if } c_j(x) \ge 0, \\ 0 & \text{if } c_j(x) < 0. \end{cases}$$
Define the binary word function $w$ by
$$w(x) = \left( s_1(x), s_2(x), \ldots, s_q(x) \right).$$
For each $x$, $w(x)$ is a *binary word*. If $w(x) = (1, 1, \ldots, 1)$, then $x$ is a feasible point of $\mathcal{R}$. Furthermore, $q - \sum_{j=1}^{q} s_j(x)$ is the number of violated constraints at $x$. Consider the equivalence relation defined by $x \sim y$ if $w(x) = w(y)$. The equivalence classes form a partition of $\mathbb{R}^n$ (or a subset of $\mathbb{R}^n$). Figure 1 shows an example with $q = 3$ constraints. The curves $c_j(x) = 0$ are labeled with $j$. The interior of this boundary is the convex region where $w(x) = (1, 1, 1)$. In this example there are 8 equivalence classes. In general there are at most $2^q$ equivalence classes.
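The characteristic functions and binary word translate directly into code (a brief illustration; `binary_word` and `num_violated` are our names, not the paper's):

```python
def binary_word(x, constraints):
    """Binary word w(x): one bit per constraint, 1 when c_j(x) >= 0."""
    return tuple(1 if c(x) >= 0 else 0 for c in constraints)

def num_violated(x, constraints):
    """Number of violated constraints: the count of 0 bits in w(x)."""
    return binary_word(x, constraints).count(0)

# Two concave (here linear) constraints in one variable: x >= 0 and 1 - x >= 0.
cs = [lambda x: x[0], lambda x: 1.0 - x[0]]
```

With these two constraints, the binary words $(1,1)$, $(1,0)$, $(0,1)$ partition the line into the feasible interval $[0, 1]$ and the two infeasible rays, giving $3 < 2^2$ nonempty equivalence classes.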