A Globally Convergent Parallel SSLE Algorithm for Inequality Constrained Optimization
A new parallel variable distribution (PVD) algorithm, based on an interior point SSLE (sequential systems of linear equations) method, is proposed for solving inequality constrained optimization problems whose constraints are block-separable. Each iteration of this algorithm only needs to solve three systems of linear equations with the same coefficient matrix to obtain the search direction. Furthermore, global convergence is achieved under suitable conditions.
Consider the following inequality constrained optimization problem:
min f(x)  s.t.  g_j(x) ≤ 0,  j ∈ I = {1, 2, …, m}, (1)
where f, g_j : R^n → R are continuously differentiable. We denote the feasible set by X = {x ∈ R^n : g_j(x) ≤ 0, j ∈ I}. To solve problem (1), there are two types of methods with superlinear convergence: sequential quadratic programming (SQP) algorithms (see [1–4], etc.) and SSLE (sequential systems of linear equations) algorithms (see [5–9], etc.). In general, since SQP algorithms must solve one or more quadratic programming subproblems at each iteration, their computational effort is very large.
SSLE algorithms were proposed to solve problem (1); at each iteration, a linear system similar to the following is considered: where L(x, λ) is the Lagrangian function of (1), H_k is an estimate of the Hessian of L, x_k is the current estimate of a solution, d is the search direction, and λ^{k+1} is the next estimate of the Karush-Kuhn-Tucker (KKT) multiplier vector associated with the constraints. Obviously, it is simpler to solve systems of linear equations than to solve a QP (quadratic programming) problem with inequality constraints.
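For intuition, such a system couples the Hessian estimate with the constraint gradients, with the second block row scaled by the current multiplier and constraint values. The tiny numerical sketch below uses this generic interior point QP-free structure with made-up data; it only illustrates the idea of replacing a QP subproblem by one linear solve and is not the paper's exact system.

```python
import numpy as np

# Generic interior-point QP-free system (illustrative structure only):
#   [ H                     grad_g    ] [ d       ]   [ -grad_f ]
#   [ diag(lam) @ grad_g.T  diag(g)   ] [ lam_new ] = [    0    ]
# One solve in R^{n+m} replaces an inequality-constrained QP subproblem.
n, m = 2, 1
H = np.eye(n)                        # symmetric positive definite Hessian estimate
grad_f = np.array([1.0, 2.0])        # gradient of the objective at the current point
grad_g = np.array([[1.0], [0.0]])    # constraint gradients (n x m)
lam = np.array([0.5])                # current positive multiplier estimate
g_val = np.array([-1.0])             # constraint value: strictly feasible point

K = np.block([[H, grad_g],
              [np.diag(lam) @ grad_g.T, np.diag(g_val)]])
rhs = np.concatenate([-grad_f, np.zeros(m)])
sol = np.linalg.solve(K, rhs)
d, lam_new = sol[:n], sol[n:]        # d is a descent direction: grad_f @ d < 0
```
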
In addition, the parallel variable distribution (PVD) algorithm [10] is a method that distributes the variables among parallel processors. The problem is partitioned into subproblems, and each subproblem is assigned to a different processor. Each processor has primary responsibility for updating its own block of variables, while allowing the remaining, secondary variables to change in a restricted fashion along some easily computable directions. In 2002, Sagastizábal and Solodov [11] proposed two new variants of PVD for the constrained case. Without assuming convexity of the constraints, but assuming a block-separable structure, they showed that the PVD subproblems can be solved inexactly by solving their quadratic programming approximations. In 2009, Han et al. [12] proposed an asynchronous PVT algorithm for solving large-scale linearly constrained convex minimization problems, based on the idea that a constrained optimization problem can be converted into an equivalent differentiable unconstrained optimization problem by introducing the Fischer function. In 2011, Zheng et al. [13] gave a parallel SSLE algorithm for large-scale constrained optimization with block-separable structure, in which the PVD subproblems are solved inexactly by serial sequential linear equations. Without assuming convexity of the constraints, the algorithm is proved to be globally convergent to a KKT point.
In this paper, we use Zhu [8] as our main reference for an SSLE-type PVD method for problem (1). Suppose that problem (1) has the following block-separable structure:
Then the problem is distributed into parallel subproblems that are solved by the parallel processors. At each iteration of the algorithm, the search direction is obtained by solving three systems of linear equations with the same coefficient matrix, which guarantees that every iterate is feasible. Thereby, the computational effort of the proposed algorithm is further reduced. Furthermore, global convergence is obtained under some suitable conditions.
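The point of the block-separable structure is that each block of variables appears only in its own objective term and constraints, so the per-block subproblems decouple and can be handled by different processors. A minimal illustration with a made-up separable toy problem (each block solved in closed form; a real PVD implementation would instead run one SSLE iteration per block in parallel):

```python
# Toy block-separable problem: f(x) = sum_i (x_i - c_i)^2 with the per-block
# constraint x_i >= lo_i.  Because no block's data involves another block's
# variables, each subproblem can be solved independently, one per processor.
blocks = [
    {"c": 3.0, "lo": 0.0},   # block 1: constraint inactive, minimizer 3.0
    {"c": -2.0, "lo": 0.0},  # block 2: constraint active, minimizer 0.0
    {"c": 1.5, "lo": 1.0},   # block 3: constraint inactive, minimizer 1.5
]

def solve_block(b):
    """Minimize (x - c)^2 s.t. x >= lo: project c onto the feasible interval."""
    return max(b["c"], b["lo"])

# Independent per-block solves (parallelizable across processors):
x_star = [solve_block(b) for b in blocks]
```

Because the solves are independent, assigning each call of `solve_block` to its own processor gives exactly the distribution of work that PVD exploits.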
The remaining part of this paper is organized as follows. In Section 2, a parallel SSLE algorithm is presented. Global convergence is established under some basic assumptions in Section 3. Concluding remarks are given in the last section.
2. Description of Algorithm
Now we state our algorithm as follows.
Step 0 (initialization). Given a starting point . Choose parameters , , , , , , and and an initial symmetric positive definite matrix . Set .
Step 1 (parallelization). For each processor , let
1.1 Computation of the Newton Direction. Solve the following system of linear equations:
Let be the solution. If , stop.
1.2 Computation of the Descent Direction. Solve the following system of linear equations: where . Let be the solution.
1.3 Computation of the Main Search Direction. Establish a convex combination of and : where
1.4 Computation of the High-Order Corrected Direction. Set
Solve the following system of linear equations: where Let be the solution. If , set .
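Steps 1.1, 1.2, and 1.4 solve three linear systems that share one coefficient matrix, so in an implementation the matrix is factorized once and each additional right-hand side costs only two cheap triangular solves. A minimal pure-Python sketch of this reuse (the 2 × 2 matrix and right-hand sides are made-up illustrative data):

```python
def lu_factor(A):
    """Doolittle LU factorization with partial pivoting. Returns (LU, piv)."""
    n = len(A)
    LU = [row[:] for row in A]
    piv = list(range(n))
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(LU[i][k]))  # pivot row
        if p != k:
            LU[k], LU[p] = LU[p], LU[k]
            piv[k], piv[p] = piv[p], piv[k]
        for i in range(k + 1, n):
            LU[i][k] /= LU[k][k]
            for j in range(k + 1, n):
                LU[i][j] -= LU[i][k] * LU[k][j]
    return LU, piv

def lu_solve(LU, piv, b):
    """Solve A x = b given the factorization of A (two triangular solves)."""
    n = len(LU)
    x = [b[p] for p in piv]          # apply the row permutation to b
    for i in range(1, n):            # forward substitution (unit lower part)
        x[i] -= sum(LU[i][j] * x[j] for j in range(i))
    for i in reversed(range(n)):     # back substitution (upper part)
        x[i] = (x[i] - sum(LU[i][j] * x[j] for j in range(i + 1, n))) / LU[i][i]
    return x

# One factorization, three right-hand sides (as in Steps 1.1, 1.2, and 1.4):
A = [[4.0, 1.0], [1.0, 3.0]]
LU, piv = lu_factor(A)
rhs = [[1.0, 0.0], [0.0, 1.0], [5.0, 4.0]]
solutions = [lu_solve(LU, piv, b) for b in rhs]
```

Since the O(n^3) factorization is paid once, the three solves of an iteration together cost little more than a single one.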
Step 2 (synchronization). Let
2.1 The Line Search. Compute , the first number in the sequence satisfying
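The rule in Step 2.1 is an Armijo-type backtracking search: trial step sizes 1, β, β², … are tested until both sufficient decrease of the objective and strict feasibility of the constraints hold. The sketch below is a generic version of such a search on a toy problem; the function names and the parameters `alpha` and `beta` are illustrative assumptions, not the paper's exact conditions (14)–(16).

```python
def armijo_feasible_step(f, grad_dot_d, g_list, x, d, alpha=0.5, beta=0.5,
                         max_iter=50):
    """Return the first step size t in {1, beta, beta^2, ...} giving both
    sufficient decrease of f and strict feasibility (g_j < 0) at the trial
    point x + t * d."""
    fx = f(x)
    t = 1.0
    for _ in range(max_iter):
        x_new = [xi + t * di for xi, di in zip(x, d)]
        decrease_ok = f(x_new) <= fx + alpha * t * grad_dot_d
        feasible = all(g(x_new) < 0 for g in g_list)
        if decrease_ok and feasible:
            return t
        t *= beta
    return t

# Toy example: minimize f(x) = x1^2 + x2^2 s.t. g(x) = x1 - 1 < 0,
# from x = (0.9, 0) along the descent direction d = -grad f(x) = (-1.8, 0).
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: x[0] - 1.0
t = armijo_feasible_step(f, -1.8 * 1.8, [g], [0.9, 0.0], [-1.8, 0.0])
```
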
Step 3 (update). Obtain by updating the positive definite matrix using some quasi-Newton formulas. Set and . Let . Go back to Step 1.
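Step 3 leaves the quasi-Newton formula open, while assumption (H3.4) below requires the matrices to stay uniformly positive definite. Powell's damped BFGS update is one standard way to preserve positive definiteness; the sketch below shows that particular choice as an assumption, not the paper's prescribed formula.

```python
import numpy as np

def damped_bfgs_update(H, s, y, theta_min=0.2):
    """Powell-damped BFGS update of an approximate Hessian H.

    s is the step x_{k+1} - x_k; y is the change in (Lagrangian) gradients.
    The damping replaces y by a convex combination y_bar of y and H s so
    that s^T y_bar >= theta_min * s^T H s, keeping H positive definite."""
    Hs = H @ s
    sHs = s @ Hs
    sy = s @ y
    theta = 1.0 if sy >= theta_min * sHs else (1.0 - theta_min) * sHs / (sHs - sy)
    y_bar = theta * y + (1.0 - theta) * Hs
    return H - np.outer(Hs, Hs) / sHs + np.outer(y_bar, y_bar) / (s @ y_bar)

# Even with s^T y < 0, where the plain BFGS update would lose positive
# definiteness, the damped update stays symmetric positive definite:
H = np.eye(2)
s = np.array([1.0, 0.0])
y = np.array([-0.5, 0.2])          # s @ y = -0.5 < 0
H_new = damped_bfgs_update(H, s, y)
eigs = np.linalg.eigvalsh(H_new)   # all eigenvalues remain positive
```
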
3. Global Convergence of Algorithm
We make the following general assumptions and let them hold throughout the paper.
(H3.1) For , the sets and are nonempty. The set is compact.
(H3.2) The functions , , and are continuously differentiable.
(H3.3) For all , the set of vectors is linearly independent.
(H3.4) There exist , , such that , for all and .
Lemma 1. For any , any positive-definite matrix , and nonnegative vector such that , , the matrix is nonsingular, where , , and .
Proof. We need only to prove that is the unique solution of the following linear equations:
Now consider the cases and separately.
For , if and , it follows from (19) that By (19), we have Then, from assumption (H3.3), it follows that .
For , if , it follows from the first equation of (19) that If , it follows from the second equation of (19) that Hence, if , combining (22) and (23) with the first equation of (19) and assumption (H3.4), we get This shows that and .
Lemma 2. For ,
(1) if , then is a KKT point of (1);
(2) if , then computed according to (8) is well defined and
Proof. (1) It is obvious according to the definition of the KKT point of (1).
(2) If , from (6), we have Thereby, from (9), there exists some , such that ; that is, is well defined. In addition, from (7), it follows that Thus, from (8), it is clear that The claim holds.
Lemma 3. The line search in Step 2 of the algorithm is well defined; that is, there exists such that (14)–(16) hold.
Proof. Firstly, for (14), since is continuously differentiable, we can see that
From (25), we have , . Then we can obtain
Thus, for , there exists some , such that , .
On the other hand, from (25), if , , we have So, for all , Then, from (13), we obtain, for , that So, there exists some , , such that , ; that is, (15) holds.
Thirdly, for (16), since is continuous and , there exists some , , such that
Let ; then the conclusion holds.
According to (H3.1), (H3.2), and (H3.4), we may assume that there exists a subsequence as well, such that
In order to obtain the global convergence of the algorithm, we assume the following condition.
(H3.5) The number of stationary points of (1) is finite.
Theorem 4. The algorithm in Section 2 either stops at a KKT point of (1) in finitely many iterations or generates an infinite sequence all of whose accumulation points are KKT points of (1).
Proof. The first statement is obvious, the only stopping point being in Step 1.1. Firstly, we show that
where , .
Since is monotonically decreasing, the fact that and the continuity of imply that For , , suppose by contradiction that . Then, from (17), we have Hence, it is easy to verify that is the unique solution of the following linear system: where , . Then we can obtain Thereby, Similar to (8), we define , and, by imitating the proof of Lemma 2, it follows that From (41), (42), and the proof of Lemma 3, we can conclude that the step size obtained by the line search in Step 2.1 is bounded away from zero on ; that is, So, from (14), (37), and (42), we get This is a contradiction, which shows that , .
Furthermore, from (40), we have If , , then , , and it is obvious that is a KKT point of (1).
Without loss of generality, we suppose that there exists some , such that . If , then it is easy to see that is a KKT point of (1). Suppose that . Since there are only finitely many choices for the sets , we may assume, for , large enough, that as well, where is a constant set. Obviously, . From condition (H3.5), it holds that , . Thereby, it holds that On the other hand, from (15), there exists some , such that, for , which contradicts (46) and shows that is a KKT point of (1).
4. Concluding Remarks
In this paper, combining the idea of parallel variable distribution with an interior point SSLE method, we proposed a new algorithm for solving constrained optimization problems whose objective function and constraints have a block-separable structure. Under some mild conditions, the theoretical analysis shows that the algorithm is globally convergent.
It is noted that some problems are still worthy of further study, such as extending the parallel algorithm to problems with both inequality and equality constraints.
Conflict of Interests
The authors have declared that there is no conflict of interests.
Acknowledgments
The authors would also like to thank the anonymous referees for the careful reading and helpful comments and suggestions that led to an improved version of this paper. This research was supported by the Foundation of Hunan Provincial Education Department under Grant (nos. 12A077, 12C0743, and 13C453) and Scientific Research Fund of Hunan University of Humanities, Science and Technology of China (no. 2012QN04).
References
[1] P. T. Boggs and J. W. Tolle, “A strategy for global convergence in a sequential quadratic programming algorithm,” SIAM Journal on Numerical Analysis, vol. 26, no. 3, pp. 600–623, 1989.
[2] C. T. Lawrence and A. L. Tits, “Nonlinear equality constraints in feasible sequential quadratic programming,” Optimization Methods and Software, vol. 6, no. 4, pp. 265–282, 1996.
[3] Z. Zhu and J. Jian, “An efficient feasible SQP algorithm for inequality constrained optimization,” Nonlinear Analysis: Real World Applications, vol. 10, no. 2, pp. 1220–1228, 2009.
[4] Z. Luo, G. Chen, S. Luo et al., “Improved feasible SQP algorithm for nonlinear programs with equality constrained sub-problems,” Journal of Computers, vol. 8, no. 6, pp. 1496–1503, 2013.
[5] E. R. Panier, A. L. Tits, and J. N. Herskovits, “QP-free, globally convergent, locally superlinearly convergent algorithm for inequality constrained optimization,” SIAM Journal on Control and Optimization, vol. 26, no. 4, pp. 788–811, 1988.
[6] H. D. Qi and L. Qi, “A new QP-free, globally convergent, locally superlinearly convergent algorithm for inequality constrained optimization,” SIAM Journal on Optimization, vol. 11, no. 1, pp. 113–132, 2001.
[7] L. Chen, Y. Wang, and G. He, “A feasible active set QP-free method for nonlinear programming,” SIAM Journal on Optimization, vol. 17, no. 2, pp. 401–429, 2006.
[8] Z. Zhu, “An interior point type QP-free algorithm with superlinear convergence for inequality constrained optimization,” Applied Mathematical Modelling, vol. 31, no. 6, pp. 1201–1212, 2007.
[9] W. X. Cheng, C. C. Huang, and J. B. Jian, “An improved infeasible SSLE method for constrained optimization without strict complementarity,” Computers & Operations Research, vol. 40, no. 5, pp. 1506–1515, 2013.
[10] M. C. Ferris and O. L. Mangasarian, “Parallel variable distribution,” SIAM Journal on Optimization, vol. 4, no. 4, pp. 815–832, 1994.
[11] C. A. Sagastizábal and M. V. Solodov, “Parallel variable distribution for constrained optimization,” Computational Optimization and Applications, vol. 22, no. 1, pp. 111–131, 2002.
[12] C. Han, Y. Wang, and G. He, “On the convergence of asynchronous parallel algorithm for large-scale linearly constrained minimization problem,” Applied Mathematics and Computation, vol. 211, no. 2, pp. 434–441, 2009.
[13] F. Zheng, C. Han, and Y. Wang, “Parallel SSLE algorithm for large scale constrained optimization,” Applied Mathematics and Computation, vol. 217, no. 12, pp. 5377–5384, 2011.