Abstract and Applied Analysis
Volume 2013 (2013), Article ID 680768, 8 pages
Convergence Analysis of Alternating Direction Method of Multipliers for a Class of Separable Convex Programming
1School of Mathematical Science and Key Laboratory for NSLSCS of Jiangsu Province, Nanjing Normal University, Nanjing, Jiangsu 210023, China
2College of Mathematics and Information, China West Normal University, Nanchong, Sichuan 637009, China
Received 19 July 2013; Accepted 30 July 2013
Academic Editor: Xu Minghua
Copyright © 2013 Zehui Jia et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The purpose of this paper is to extend the convergence analysis of Han and Yuan (2012) for the alternating direction method of multipliers (ADMM) from the strongly convex case to a more general one. Under the assumption that the individual functions are composites of strongly convex functions and linear functions, we prove that the classical ADMM for separable convex programming with two blocks can be extended to the case with three or more blocks. The problems, although still very special, arise naturally from some important applications, for example, route-based traffic assignment problems.
In this paper, we consider the convex programming problem with separable functions:

$$\min \sum_{i=1}^{m} \theta_i(x_i) \quad \text{s.t.} \quad \sum_{i=1}^{m} A_i x_i = b, \; x_i \in \mathcal{X}_i, \; i = 1, \dots, m, \tag{1}$$

where $\theta_i : \mathbb{R}^{n_i} \to \mathbb{R}$ ($i = 1, \dots, m$) are closed proper convex functions (not necessarily smooth); $A_i \in \mathbb{R}^{l \times n_i}$ ($i = 1, \dots, m$); $\mathcal{X}_i \subseteq \mathbb{R}^{n_i}$ ($i = 1, \dots, m$) are closed convex sets; and $b \in \mathbb{R}^l$. Throughout the paper, we assume that the solution set of (1) is nonempty.
For the special case of (1) with $m = 2$, that is,

$$\min \theta_1(x_1) + \theta_2(x_2) \quad \text{s.t.} \quad A_1 x_1 + A_2 x_2 = b, \; x_1 \in \mathcal{X}_1, \; x_2 \in \mathcal{X}_2, \tag{2}$$

the problem has been studied extensively. Among many numerical methods, one of the most popular is the alternating direction method of multipliers (ADMM), which was presented originally in [1, 2]. The iterative scheme of ADMM for (2) is as follows:

$$x_1^{k+1} = \arg\min_{x_1 \in \mathcal{X}_1} \theta_1(x_1) + \frac{\beta}{2} \Big\| A_1 x_1 + A_2 x_2^k - b - \frac{\lambda^k}{\beta} \Big\|^2,$$
$$x_2^{k+1} = \arg\min_{x_2 \in \mathcal{X}_2} \theta_2(x_2) + \frac{\beta}{2} \Big\| A_1 x_1^{k+1} + A_2 x_2 - b - \frac{\lambda^k}{\beta} \Big\|^2, \tag{3}$$
$$\lambda^{k+1} = \lambda^k - \beta \left( A_1 x_1^{k+1} + A_2 x_2^{k+1} - b \right),$$

where $\lambda$ is the Lagrange multiplier associated with the linear constraint and $\beta > 0$ is the penalty parameter. The convergence of ADMM for (2) was established under the condition that the involved functions and the constraint sets are convex.
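As a concrete illustration, the two-block scheme can be sketched on a toy problem whose subproblems have closed forms. The data $c_1, c_2, b$, the penalty $\beta$, and the iteration count below are hypothetical choices for illustration, not taken from the paper:

```python
import numpy as np

# Toy separable problem with A1 = A2 = I:
#   min 0.5*||x1 - c1||^2 + 0.5*||x2 - c2||^2   s.t.  x1 + x2 = b
c1, c2, b = np.array([2.0, 0.0]), np.array([0.0, 3.0]), np.array([1.0, 1.0])
beta = 1.0                                   # penalty parameter (illustrative)
x1, x2, lam = np.zeros(2), np.zeros(2), np.zeros(2)

for _ in range(500):
    # x1-subproblem: argmin 0.5*||x1-c1||^2 + (beta/2)*||x1 + x2 - b - lam/beta||^2
    x1 = (c1 + lam + beta * (b - x2)) / (1.0 + beta)
    # x2-subproblem uses the fresh x1 (Gauss-Seidel sweep)
    x2 = (c2 + lam + beta * (b - x1)) / (1.0 + beta)
    # multiplier update: lam <- lam - beta * (constraint residual)
    lam = lam - beta * (x1 + x2 - b)

# Closed-form optimum for comparison: x1* = (b + c1 - c2)/2, x2* = (b - c1 + c2)/2
```

Each subproblem here is an unconstrained strongly convex quadratic, so its minimizer is available in closed form; in general the subproblems are solved as small optimization problems.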
There are diversified applications whose objective function is separable into more than two individual convex functions without coupled variables, such as traffic problems, the problem of recovering the low-rank and sparse components of matrices from incomplete and noisy observations in [3], the constrained total-variation image restoration and reconstruction problem in [4, 5], and the minimal surface PDE problem. It is thus natural to extend ADMM from $m = 2$ blocks to $m \ge 3$ blocks, resulting in the iterative scheme:

$$x_i^{k+1} = \arg\min_{x_i \in \mathcal{X}_i} \theta_i(x_i) + \frac{\beta}{2} \Big\| \sum_{j < i} A_j x_j^{k+1} + A_i x_i + \sum_{j > i} A_j x_j^k - b - \frac{\lambda^k}{\beta} \Big\|^2, \quad i = 1, \dots, m, \tag{4}$$
$$\lambda^{k+1} = \lambda^k - \beta \Big( \sum_{i=1}^{m} A_i x_i^{k+1} - b \Big).$$
Unfortunately, the convergence of this natural extension is still open under the convexity assumption alone, and the recent convergence results require all the functions involved in the objective to be strongly convex. This lack of a convergence guarantee has inspired some ADM-based methods, for example, prediction-correction type methods [3, 8–11]; that is, the iterate generated by (4) is regarded as a prediction, and the next iterate is a correction of it. However, numerical results show that the algorithm (4) usually performs better than these variants. Recently, Han and Yuan [7] showed that the global convergence of the extension of ADMM is valid if the involved functions are further assumed to be strongly convex. This result does not answer the open problem regarding the convergence of the extension of ADMM under the convexity assumption, but it makes key progress towards this objective.
In this paper, we consider the separable convex optimization problem (1) in which each individual function $\theta_i$ is the composition of a strongly convex function $f_i$ and a linear transform $B_i$. That is, (1) takes the following form:

$$\min \sum_{i=1}^{m} f_i(B_i x_i) \quad \text{s.t.} \quad \sum_{i=1}^{m} A_i x_i = b, \; x_i \in \mathcal{X}_i, \; i = 1, \dots, m, \tag{5}$$

where $f_i$ ($i = 1, \dots, m$) are closed proper strongly convex functions with modulus $\mu_i$ (not necessarily smooth); $A_i \in \mathbb{R}^{l \times n_i}$ ($i = 1, \dots, m$); $\mathcal{X}_i$ ($i = 1, \dots, m$) are closed convex sets; $b \in \mathbb{R}^l$; and $B_i$ ($i = 1, \dots, m$) may not have full column rank (if $B_i$ has full column rank, the composite function $f_i(B_i \cdot)$ is strongly convex and (5) reduces to the case considered in [7]). Note that although (5) is very special, it arises frequently in applications. One example is the route-based traffic assignment problem [12], where $f$ is the link traffic cost, $B$ is the link-path incidence matrix, and $x$ is the path flow vector.
In the following, we slightly abuse notation and still write $\theta_i(x_i) = f_i(B_i x_i)$; that is, the problem under consideration is

$$\min \sum_{i=1}^{m} \theta_i(x_i) \quad \text{s.t.} \quad \sum_{i=1}^{m} A_i x_i = b, \; x_i \in \mathcal{X}_i, \; i = 1, \dots, m, \tag{6}$$

where $f_i$ ($i = 1, \dots, m$) are closed proper strongly convex functions with modulus $\mu_i$ (not necessarily smooth).
The rest of the paper is organized as follows. In the next section, we list some necessary preliminary results that will be used in the rest of the paper. We then describe the algorithm formally and analyze its global convergence under reasonable conditions in Section 3. We complete the paper with some conclusions in Section 4.
In this section, we summarize some basic concepts and their properties that will be useful for further discussion.
Let $\|\cdot\|_p$ denote the standard $p$-norm, and in particular, let $\|\cdot\|$ denote the Euclidean norm. For a symmetric and positive definite matrix $G$, we denote by $\|x\|_G$ the $G$-norm, that is, $\|x\|_G = \sqrt{x^\top G x}$. If $G$ is the product of a positive parameter $\beta$ and the identity matrix $I$, that is, $G = \beta I$, we use the simpler notation $\sqrt{\beta}\,\|x\|$.
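The $G$-norm can be computed directly from its definition; the vector and matrix below are arbitrary examples chosen for illustration:

```python
import numpy as np

def g_norm(x, G):
    # ||x||_G = sqrt(x^T G x) for a symmetric positive definite G
    return float(np.sqrt(x @ G @ x))

x = np.array([3.0, 4.0])          # Euclidean norm 5
beta = 4.0
G = beta * np.eye(2)
# For G = beta * I the G-norm reduces to sqrt(beta) * ||x||, here 2 * 5 = 10.
val = g_norm(x, G)
```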
Let $\theta : \mathbb{R}^n \to (-\infty, +\infty]$. If the domain of $\theta$, denoted by $\operatorname{dom}\theta = \{x : \theta(x) < +\infty\}$, is not empty, then $\theta$ is said to be proper. If for any $x, y \in \mathbb{R}^n$ and $\alpha \in [0, 1]$ we have

$$\theta(\alpha x + (1 - \alpha) y) \le \alpha\, \theta(x) + (1 - \alpha)\, \theta(y),$$

then $\theta$ is said to be convex. Furthermore, $\theta$ is said to be strongly convex with modulus $\mu > 0$ if and only if

$$\theta(\alpha x + (1 - \alpha) y) \le \alpha\, \theta(x) + (1 - \alpha)\, \theta(y) - \frac{\mu}{2}\, \alpha (1 - \alpha)\, \|x - y\|^2.$$
A set-valued operator $T$ defined on $\mathbb{R}^n$ is said to be monotone if and only if

$$\langle u - v, x - y \rangle \ge 0, \quad \forall\, u \in T(x), \; v \in T(y),$$

and $T$ is said to be strongly monotone with modulus $\mu > 0$ if and only if

$$\langle u - v, x - y \rangle \ge \mu \|x - y\|^2, \quad \forall\, u \in T(x), \; v \in T(y).$$
Let $\Gamma$ denote the set of closed proper convex functions from $\mathbb{R}^n$ to $(-\infty, +\infty]$. For any $\theta \in \Gamma$, the subdifferential of $\theta$, which is the set-valued operator defined by

$$\partial\theta(x) = \left\{ d \in \mathbb{R}^n : \theta(y) \ge \theta(x) + \langle d, y - x \rangle, \; \forall y \in \mathbb{R}^n \right\},$$

is monotone. Moreover, if $\theta$ is strongly convex with modulus $\mu$, then $\partial\theta$ is strongly monotone with modulus $\mu$.
Let $F$ be a mapping from a set $\Omega \subseteq \mathbb{R}^n$ into $\mathbb{R}^n$. Then $F$ is said to be co-coercive on $\Omega$ with modulus $c > 0$ if

$$\langle F(u) - F(v), u - v \rangle \ge c\, \|F(u) - F(v)\|^2, \quad \forall\, u, v \in \Omega.$$
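Co-coercivity can be verified numerically for a simple linear map $F(x) = Mx$ with $M$ symmetric positive semidefinite, which is co-coercive with modulus $c = 1/\lambda_{\max}(M)$; the matrix and the random sample points below are illustrative:

```python
import numpy as np

# F(x) = M x with M symmetric PSD is co-coercive with modulus
# c = 1 / lambda_max(M):  <F(u)-F(v), u-v> >= c * ||F(u)-F(v)||^2.
M = np.array([[2.0, 0.0], [0.0, 1.0]])
c = 1.0 / np.max(np.linalg.eigvalsh(M))

rng = np.random.default_rng(0)
ok = True
for _ in range(100):
    u, v = rng.standard_normal(2), rng.standard_normal(2)
    lhs = (M @ u - M @ v) @ (u - v)
    rhs = c * np.linalg.norm(M @ u - M @ v) ** 2
    ok = ok and (lhs >= rhs - 1e-12)   # co-coercivity inequality
```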
Throughout the paper, we make the following assumptions.
Assumption 1. (i) , , ; (ii) the solution set of (1) is nonempty.
Remark 2. Assumption 1 is somewhat restrictive; nevertheless, some problems satisfy it. A remarkable one is the following route-based traffic assignment problem.
Consider a transportation network $(\mathcal{N}, \mathcal{L})$, where $\mathcal{N}$ is the set of nodes and $\mathcal{L}$ is the set of links, with $|\mathcal{L}|$ the number of links. Let $\mathcal{W}$ denote the set of origin-destination (O-D) pairs. For an O-D pair $w \in \mathcal{W}$, let $d_w$ be its traffic demand; let $R_w$ be the set of routes connecting $w$, with $|R_w|$ the number of such routes; and let $x_r$ be the flow on route $r \in R_w$. The set of feasible route flow vectors is thus given by

$$\mathcal{K} = \Big\{ x : \sum_{r \in R_w} x_r = d_w, \; x_r \ge 0, \; \forall r \in R_w, \; \forall w \in \mathcal{W} \Big\}.$$

Define the link-route incidence matrix $\Delta$ such that

$$\Delta_{ar} = \begin{cases} 1, & \text{if route } r \text{ uses link } a, \\ 0, & \text{otherwise.} \end{cases}$$

Then the link flow vector $v$ can be written as $v = \Delta x$. Denoting the link cost function by $t(v)$ and, for the additive case, the route cost function by $c(x)$, they are related by $c(x) = \Delta^\top t(\Delta x)$. The user equilibrium traffic assignment problem can be formulated as a VI: find $x^* \in \mathcal{K}$ such that

$$\langle c(x^*), x - x^* \rangle \ge 0, \quad \forall x \in \mathcal{K},$$

or equivalently, find $x^* \in \mathcal{K}$ such that

$$\langle \Delta^\top t(\Delta x^*), x - x^* \rangle \ge 0, \quad \forall x \in \mathcal{K},$$

where $t$ is the vector of the link cost functions.
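The link-route incidence construction can be sketched on a hypothetical three-link, two-route network; the topology, flows, and the linear link cost below are invented purely for illustration:

```python
import numpy as np

# Hypothetical network: 3 links, one O-D pair served by 2 routes.
# Route 1 uses links 1 and 2; route 2 uses links 2 and 3.
Delta = np.array([[1, 0],
                  [1, 1],
                  [0, 1]], dtype=float)   # link-route incidence (links x routes)

x = np.array([2.0, 3.0])                  # route flows
v = Delta @ x                             # link flows: link 2 carries both routes

def link_cost(v):
    # illustrative linear link cost t(v) = v; real models use nonlinear costs
    return v

c = Delta.T @ link_cost(v)                # additive route costs c(x) = Delta^T t(Delta x)
```

Note that `Delta` has more rows than columns here; in realistic networks the number of routes far exceeds the number of links, so the incidence matrix has more columns than rows and cannot have full column rank.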
In general, the number of routes far exceeds the number of links, so $\Delta$ does not have full column rank (if it did, the above variational inequality would be strongly monotone whenever $t$ is).
For simplicity, in the following we only consider the case $m = 3$; for $m > 3$, the convergence can be proved similarly.
3. The Method
In this section, we consider the following convex minimization problem with linear constraints, where the objective function is the sum of three individual functions without coupled variables:

$$\min f_1(B_1 x_1) + f_2(B_2 x_2) + f_3(B_3 x_3) \quad \text{s.t.} \quad A_1 x_1 + A_2 x_2 + A_3 x_3 = b, \; x_i \in \mathcal{X}_i, \; i = 1, 2, 3, \tag{19}$$

where $f_i$ ($i = 1, 2, 3$) are closed proper strongly convex functions with modulus $\mu_i$ (not necessarily smooth); $A_i \in \mathbb{R}^{l \times n_i}$ ($i = 1, 2, 3$); $B_i$ ($i = 1, 2, 3$) are linear transforms; $\mathcal{X}_i$ ($i = 1, 2, 3$) are closed convex sets; and $b \in \mathbb{R}^l$.
The iterative scheme of ADMM for problem (19) is as follows:

$$x_1^{k+1} = \arg\min_{x_1 \in \mathcal{X}_1} f_1(B_1 x_1) + \frac{\beta}{2} \Big\| A_1 x_1 + A_2 x_2^k + A_3 x_3^k - b - \frac{\lambda^k}{\beta} \Big\|^2,$$
$$x_2^{k+1} = \arg\min_{x_2 \in \mathcal{X}_2} f_2(B_2 x_2) + \frac{\beta}{2} \Big\| A_1 x_1^{k+1} + A_2 x_2 + A_3 x_3^k - b - \frac{\lambda^k}{\beta} \Big\|^2, \tag{20}$$
$$x_3^{k+1} = \arg\min_{x_3 \in \mathcal{X}_3} f_3(B_3 x_3) + \frac{\beta}{2} \Big\| A_1 x_1^{k+1} + A_2 x_2^{k+1} + A_3 x_3 - b - \frac{\lambda^k}{\beta} \Big\|^2,$$
$$\lambda^{k+1} = \lambda^k - \beta \left( A_1 x_1^{k+1} + A_2 x_2^{k+1} + A_3 x_3^{k+1} - b \right),$$

where $\lambda$ is the Lagrange multiplier associated with the linear constraint and $\beta > 0$ is the penalty parameter.
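A minimal numerical sketch of a three-block scheme of this kind follows, assuming for simplicity $A_i = I$, $B_i = I$, and quadratic $f_i(x) = \frac{1}{2}\|x - c_i\|^2$ (strongly convex with modulus 1), so each subproblem has a closed form; the data $c_i, b$, the penalty $\beta$, and the iteration count are hypothetical:

```python
import numpy as np

# Toy 3-block problem: min sum_i 0.5*||x_i - c_i||^2  s.t.  x1 + x2 + x3 = b
c = [np.array([3.0, 0.0]), np.array([0.0, 3.0]), np.array([0.0, 0.0])]
b = np.array([0.0, 0.0])
beta = 0.3                                 # small penalty (illustrative choice)
x = [np.zeros(2) for _ in range(3)]
lam = np.zeros(2)

for _ in range(1000):
    for i in range(3):
        # x_i-subproblem in closed form; the other blocks enter with their
        # most recent values (Gauss-Seidel sweep, mirroring the scheme above)
        others = sum(x[j] for j in range(3) if j != i)
        x[i] = (c[i] + lam + beta * (b - others)) / (1.0 + beta)
    lam = lam - beta * (x[0] + x[1] + x[2] - b)

# Closed-form optimum for comparison: x_i* = c_i + (b - (c1 + c2 + c3)) / 3
```

Convergence here is consistent with the strongly convex setting analyzed in this paper; for merely convex objectives, the direct three-block extension carries no general guarantee.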
In this section, we prove the convergence of the extended ADMM for problem (19). Under the aforementioned assumptions, by invoking the first-order necessary and sufficient optimality conditions for convex programming, we easily see that problem (19) is characterized by the following variational inequality (VI): find $w^* = (x_1^*, x_2^*, x_3^*, \lambda^*) \in \Omega$ such that

$$\theta(u) - \theta(u^*) + \langle w - w^*, F(w^*) \rangle \ge 0, \quad \forall w \in \Omega, \tag{21}$$

where $u = (x_1, x_2, x_3)$, $\theta(u) = f_1(B_1 x_1) + f_2(B_2 x_2) + f_3(B_3 x_3)$, $\Omega = \mathcal{X}_1 \times \mathcal{X}_2 \times \mathcal{X}_3 \times \mathbb{R}^l$, and

$$F(w) = \begin{pmatrix} -A_1^\top \lambda \\ -A_2^\top \lambda \\ -A_3^\top \lambda \\ A_1 x_1 + A_2 x_2 + A_3 x_3 - b \end{pmatrix}. \tag{22}$$

We denote the VI (21)-(22) by MVI.
Lemma 3. If $\lambda^{k+1} = \lambda^k$ and $x_i^{k+1} = x_i^k$ ($i = 1, 2, 3$), then $(x_1^{k+1}, x_2^{k+1}, x_3^{k+1}, \lambda^{k+1})$ is a solution of MVI.
Lemma 3 implies that the iterate is a solution of MVI when inequality (23) holds with zero tolerance. Some techniques for establishing error bounds in [13] can help us analyze how accurately the iterate satisfies the optimality conditions when the proposed stopping criterion is satisfied with a tolerance $\epsilon > 0$.
Proof. By invoking the first-order optimality conditions for the subproblems in (20), for any feasible point we get (25). Setting the solution point in (25), we have (26). On the other hand, setting the new iterate in (21), it follows that (27). Adding (26) and (27), we obtain (28). Rearranging the above inequalities, we derive (29). Summing the inequalities in (29) completes the proof.
Hereafter, we define a matrix that will make the notation of the proofs more succinct. Obviously, this matrix is positive semidefinite. Purely for convenience of analysis, we also introduce the following notation.
Lemma 5. Let be a solution of MVI, and let the sequence be generated by (20). Then, one has
Proof. From (20) and Lemma 4, we have
from which we can get
Using the Cauchy-Schwarz inequality, we have
Substituting (36) and (37) into (34), we get
Since each $f_i$ is strongly convex, its subdifferential mapping $\partial f_i$ is strongly monotone with modulus $\mu_i$, and hence we have the following bounds for any points in the corresponding sets.
By using the notation introduced above, from (38) we obtain the claimed inequality. The proof is complete.
Proof. From Lemma 5, we have
From Assumption 1, it follows that
From (45), we have
which means that the generated sequence is bounded.
Furthermore, it follows that the differences between successive iterates vanish, and hence the residual of the linear constraint vanishes as well. Since the relevant coefficient is nonzero and the sequence is bounded, from (48) the same conclusion follows. Since the generated sequence is bounded, it has at least one cluster point; let a subsequence of the iterates converge to it. Taking the limit along this subsequence in (25) and (49), we see that the cluster point satisfies the optimality conditions; in particular, its multiplier component is an optimal Lagrange multiplier. Since the cluster point is arbitrary, we can substitute it into (46) and conclude that the whole generated sequence converges to a solution of MVI.
In this paper, we extended the convergence analysis of ADMM for separable convex optimization problems with strongly convex functions to the case in which the individual functions are composites of strongly convex functions with linear transforms. Under further assumptions, we established the global convergence of the algorithm.
It should be admitted that although some problems arising from applications, such as traffic assignment, fall into our analysis, the problems considered here are still quite special. Thus, we are still far from solving the open problem of the convergence of ADMM with three or more blocks.
Xingju Cai was supported by the National Natural Science Foundation of China (NSFC) Grant nos. 11071122 and 11171159 and by the Doctoral Fund of the Ministry of Education of China Grant no. 20103207110002.
- D. Gabay and B. Mercier, “A dual algorithm for the solution of nonlinear variational problems via finite element approximation,” Computers and Mathematics with Applications, vol. 2, no. 1, pp. 17–40, 1976.
- D. Gabay, “Applications of the method of multipliers to variational inequalities,” in Augmented Lagrangian Methods: Applications to Numerical Solution of Boundary-Value Problems, M. Fortin and R. Glowinski, Eds., pp. 299–331, North-Holland, Amsterdam, The Netherlands, 1983.
- M. Tao and X. Yuan, “Recovering low-rank and sparse components of matrices from incomplete and noisy observations,” SIAM Journal on Optimization, vol. 21, no. 1, pp. 57–81, 2011.
- M. K. Ng, P. Weiss, and X. Yuan, “Solving constrained total-variation image restoration and reconstruction problems via alternating direction methods,” SIAM Journal on Scientific Computing, vol. 32, no. 5, pp. 2710–2736, 2010.
- L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D, vol. 60, no. 1–4, pp. 259–268, 1992.
- Z. Wen, D. Goldfarb, and W. Yin, “Alternating direction augmented Lagrangian methods for semidefinite programming,” Mathematical Programming Computation, vol. 2, no. 3-4, pp. 203–230, 2010.
- D. Han and X. Yuan, “A note on the alternating direction method of multipliers,” Journal of Optimization Theory and Applications, vol. 155, pp. 227–238, 2012.
- D. R. Han, X. M. Yuan, W. X. Zhang, and X. J. Cai, “An ADM-based splitting method for separable convex programming,” Computational Optimization and Applications, vol. 54, pp. 343–369, 2013.
- B. S. He, M. Tao, and X. M. Yuan, “Alternating direction method with Gaussian back substitution for separable convex programming,” SIAM Journal on Optimization, vol. 22, pp. 313–340, 2012.
- B. S. He, M. Tao, M. H. Xu, and X. M. Yuan, “Alternating directions based contraction method for generally separable linearly constrained convex programming problems,” Optimization, vol. 62, pp. 573–596, 2013.
- B. S. He, M. Tao, and X. M. Yuan, “A splitting method for separable convex programming,” IMA Journal of Numerical Analysis. In press.
- D. Han and H. K. Lo, “Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities,” European Journal of Operational Research, vol. 159, no. 3, pp. 529–544, 2004.
- F. Facchinei and J. S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Volumes I and II, Springer, New York, NY, USA, 2003.