Abstract

We propose a line-search-based partial proximal alternating directions (LSPPAD) method for solving a class of separable convex optimization problems. Problems of this type are common in practice. The proposed method solves two subproblems at each iteration: one is solved by a proximal point method, while the proximal term is absent from the other. Both subproblems are solved inexactly, and a line search technique is used to guarantee convergence. The convergence of the LSPPAD method is established under suitable conditions. The advantage of the proposed method is that the subproblem from which the proximal term is absent becomes more tractable. Numerical tests show that the LSPPAD method outperforms the existing alternating projection based prediction-correction (APBPC) method when both are employed to solve the described problem.

1. Introduction

In this paper, we consider a separable convex optimization problem of the form where , , , and are convex functions; , are closed convex sets; is a given matrix; and .

The augmented Lagrangian function associated with the problem (1) is

For simplicity of analysis, we assume the objective function is continuously differentiable. Let and ; by the convexity of the functions and , and are monotone in and , respectively. Thus, by the optimality conditions, the problem (1) is equivalent to the following monotone variational inequalities: find such that where .

Chen and Teboulle [1] investigated the problem (1) and proposed a proximal-based decomposition method. Tseng [2] interpreted Chen and Teboulle's approach as an alternating version of the proximal point method and the extragradient method. Furthermore, Tseng generalized their method to solve much broader classes of problems and to yield new decomposition methods for convex programming and variational inequalities.

Indeed, there are many methods in the literature for dealing with the problem (1), or its equivalent version (3). Among these methods, the proximal point method and the alternating directions method are powerful tools; see, for example, [3–8].

Throughout this paper, we assume that the solution set of the problem (1) or, equivalently, the solution set of SVIs (3), denoted by , is nonempty. The notation denotes the Euclidean norm, denotes the infinity norm defined by , and denotes the -norm defined by .

From a given , the classical proximal alternating directions method produces the new iterate triple via the following scheme.

Find via solving

Find via solving

Update via where is a given penalty parameter of the linear constraint . The coefficients in formulas (4) and in (5) are referred to as proximal parameters. The method is convergent by setting (for a proof see [3]).
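To make the scheme above concrete, a hedged sketch of the classical proximal alternating directions iteration is given below. It assumes, purely for illustration, the two-block model $\min\{\theta_1(x)+\theta_2(y):\ Ax+y=b,\ x\in\mathcal{X},\ y\in\mathcal{Y}\}$ with proximal parameters $r,s>0$ and penalty parameter $\beta>0$; this notation is an assumption made for the sketch and is not taken verbatim from (1) or (4)-(6):
\begin{align*}
x^{k+1} &= \arg\min_{x\in\mathcal{X}}\Big\{\theta_1(x)-(\lambda^k)^{\top}(Ax+y^k-b)+\tfrac{\beta}{2}\|Ax+y^k-b\|^2+\tfrac{r}{2}\|x-x^k\|^2\Big\},\\
y^{k+1} &= \arg\min_{y\in\mathcal{Y}}\Big\{\theta_2(y)-(\lambda^k)^{\top}(Ax^{k+1}+y-b)+\tfrac{\beta}{2}\|Ax^{k+1}+y-b\|^2+\tfrac{s}{2}\|y-y^k\|^2\Big\},\\
\lambda^{k+1} &= \lambda^k-\beta\,(Ax^{k+1}+y^{k+1}-b).
\end{align*}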

However, finding a solution of the subproblem (4) or (5) is not an easy task, since each of them requires an implicit projection. He et al. [4] suggested solving the subproblems (4) and (5) inexactly. Their method is referred to as the alternating projection based prediction-correction (APBPC) method. In [4], the authors considered a generalized version of the problem (1) with the constraints . The problem under consideration in this paper, that is, problem (1), has some features depending on . These features provide some advantages for constructing a more efficient method.

This paper is organized as follows. Section 2 proposes a line-search-based partial proximal alternating directions method for solving the problem (1) and introduces some useful notation. The proposed method can be viewed as a proximal alternating directions method with in (5) and admits inexact solutions of the subproblems at all iterations. Thus, the proposed method is actually an inexact partial proximal alternating directions method equipped with a certain line-search technique. In Section 3, we provide two descent directions of a given merit function and prove the descent property of the proposed method. In Section 4, the convergence of the proposed method is established under suitable conditions. Section 5 gives some preliminary numerical results on the compressed sensing problem. These results show that the proposed method outperforms the existing APBPC method when both are used to solve the problem with the described features. Finally, some concluding remarks are given in Section 6.

2. Method Description

In this section, we first describe the line-search-based partial proximal alternating directions method for the problem (3).

Line-Search-Based Partial Proximal Alternating Directions (LSPPAD) Method. Let . For a given , the LSPPAD method produces the new triple via the following scheme.

(s1) Solve the following variational inequality: find such that and denote the solution by . Let ; if , let and go back to (s1).

(s2) Solve the following variational inequality: find such that and denote the solution by . Let ; if , let and go back to (s2).

(s3) Update via

(s4) Compute the step length . Let
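To make the scheme (s1)-(s4) easier to follow, the following is a minimal Python sketch of one LSPPAD iteration, written under illustrative assumptions: the model is taken to be min{f(x) + g(y) : Ax + y = b, x in X, y in Y}; the callables grad_f, grad_g, proj_X, and proj_Y are hypothetical user-supplied routines; and the acceptance tests of the two line searches, the enlargement factor for the proximal parameters, and the fixed step length in (s4) are simplified stand-ins rather than the exact rules of the method.

import numpy as np

def lsppad_step(x, y, lam, r, s, beta, A, b,
                grad_f, grad_g, proj_X, proj_Y,
                nu=0.9, expand=2.0):
    # (s1) projected-gradient step for x with step length 1/r; enlarge r and
    # retry while the (illustrative) Lipschitz-type acceptance test fails.
    def Fx(z):
        # gradient of the augmented Lagrangian with respect to x (y, lam fixed)
        return grad_f(z) - A.T @ lam + beta * A.T @ (A @ z + y - b)
    while True:
        x_tilde = proj_X(x - Fx(x) / r)
        if np.linalg.norm(Fx(x) - Fx(x_tilde)) <= nu * r * np.linalg.norm(x - x_tilde) + 1e-12:
            break
        r *= expand
    # (s2) projected-gradient step for y with step length 1/s, same structure.
    def Fy(z):
        # gradient of the augmented Lagrangian with respect to y (x_tilde, lam fixed)
        return grad_g(z) - lam + beta * (A @ x_tilde + z - b)
    while True:
        y_tilde = proj_Y(y - Fy(y) / s)
        if np.linalg.norm(Fy(y) - Fy(y_tilde)) <= nu * s * np.linalg.norm(y - y_tilde) + 1e-12:
            break
        s *= expand
    # (s3) multiplier update at the predicted point.
    lam_tilde = lam - beta * (A @ x_tilde + y_tilde - b)
    # (s4) simplified correction that moves the iterate toward the predicted
    # point; gamma = 1.0 is a placeholder for the computed step length.
    gamma = 1.0
    x = x + gamma * (x_tilde - x)
    y = y + gamma * (y_tilde - y)
    lam = lam + gamma * (lam_tilde - lam)
    return x, y, lam, r, s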

Remark 1. (1) The main difference between the APBPC method [4] and the proposed LSPPAD method is that the proximal term appears in prediction step 2 of the APBPC method, while it vanishes in the LSPPAD method.
(2) In the LSPPAD method, (s1) is in essence one iteration of the projected gradient method for the -subproblem based on the augmented Lagrangian function (see (2)) with fixed and step length . That is,
Similarly, (s2) is one iteration of the same method for the -subproblem with fixed and step length ; that is,

Combining (7) and (9) with (11), after some manipulation we obtain

The following notation will be useful in the subsequent discussion:

With this notation, the variational inequality (16) can be rewritten in the compact form:

3. Descent Directions of the Merit Function

For convenience of analysis, we omit the iteration index of the matrices, vectors, and scalars in this section.

It is easy to show that, for all , is the gradient of the unknown merit function at the point . The vector is a descent direction of the merit function if and only if . In this section, under suitable conditions, we show that both and (see (18) and (19), resp.) are descent directions of whenever .

Hereafter, we denote by the compact form whenever is a symmetric positive semidefinite matrix. Obviously, by the positive semidefiniteness of .

Lemma 2. For a given , let be generated by (s1) to (s3) of the LSPPAD method, and let be defined by (20). Then one has where .

Proof. By a manipulation, we get Substituting into the first term of the right-hand side of (23), we get By (s1), we have . Note that ; we get Similarly, we have By (s2), we have . Using the Cauchy-Schwarz inequality, we get Again using , we get Thus Substituting (24), (25), and (29) into (23), we have Recalling we obtain

Lemma 3. Under the same conditions as in Lemma 2, one has

Proof. By the Cauchy-Schwarz inequality, we get Noting that , we have Then, by (s1) and (s2), we get

Lemma 4. For a given , let be generated by (s1) to (s3) of the LSPPAD method. Let be defined by (20), and let and be defined by (18) and (19), respectively. One has

Proof. Since (resp., ) is monotone with respect to (resp., ), it follows that (see (17)) is a monotone operator with respect to . Hence we have and, consequently, Note that is a solution of (3), which implies for . Thus . We have Recalling the definition of , the inequality (37) follows from (40) directly.

Theorem 5. For a given , let be generated by (s1) to (s3) of the LSPPAD method. Then one has

Proof. By (21), we get (by letting ) Combining (43) and (37), we have Thus (41) follows from (44). Adding (37) and (21), we get By letting , we get (42) from (45) directly.

4. Convergence Analysis

To establish the convergence, we first prove the following contraction property.

Lemma 6. For a given , let be generated by the LSPPAD method. Then one has

Proof. Recall the definition of and the iteration formula (13); by a straightforward computation and Theorem 5, we get

By Lemma 6 we have Thus, we can obtain the maximal decrease at each iteration by maximizing the following function: This quadratic function attains its maximum at the point with maximum value Thus, by Lemma 2 we have the following.

Corollary 7. If the parameter sequences and are positive and bounded above for all iterations , then one has

We will show later that the conditions of Corollary 7 can be satisfied in practice.

Numerical experience reported in [5] suggests that, for fast convergence, one can use a relaxation factor and set at each iteration. Doing so, we get By (46) or (49), we have Furthermore, by Corollary 7, we have with The inequality (55) plays an important role in proving the convergence of the proposed method.
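As a hedged illustration of this step-length computation (the form below is the standard quadratic pattern of contraction-type methods and is an assumption, not a verbatim reproduction of the displays in this section), the function to be maximized typically reads
\[
q(\gamma)=2\gamma\,\varphi_k-\gamma^{2}\,\|d_k\|_{G}^{2},
\]
whose maximizer and maximum value are
\[
\gamma_k^{*}=\frac{\varphi_k}{\|d_k\|_{G}^{2}},\qquad
q(\gamma_k^{*})=\frac{\varphi_k^{2}}{\|d_k\|_{G}^{2}},
\]
and the relaxed step is then taken as $\gamma_k=\rho\,\gamma_k^{*}$ with a relaxation factor $\rho\in(0,2)$.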

Letting , by Lemma 3 we have Combining (55) and (58), we obtain the Fejér monotonicity of the generated sequence .

Theorem 8. Suppose that the sequence is generated by the LSPPAD method and that the conditions of Corollary 7 hold. Then the sequence converges to , which is a solution of the problem (3) or, equivalently, the problem (1).

Proof. By (55) and (58), we get Consequently, It follows that the sequence is bounded. Moreover, we get from which it follows that Thus is bounded and it has at least one cluster point. Let be a cluster point of , and suppose the subsequence converges to . It follows from (62) that and, consequently (by (s1) and (s2)), Then, by (7), (9), and (11), we get which implies that Thus solves the variational inequalities (3).
Since , by (62), for any , there exists an integer such that Therefore, for any , we have which implies that the sequence converges to , which is a solution of the problem (3) or the problem (1).

5. Practical Implementation and Numerical Results

For the implementation of the proposed method, we need to give some rules for determining the proximal parameter and the penalty parameter .

It follows from (56) that if the sequences are positive and bounded, then is well defined and , which guarantees that the conditions of Corollary 7 hold.

On the penalty parameter sequence , He et al. [9] proposed a self-adaptive rule, applied at each iteration and based on the iterate information, which is referred to as the self-adaptive penalty parameters method; they showed that the sequence generated by this method is bounded and bounded away from zero. In the method proposed in this paper, we directly use the self-adaptive penalty parameters method (Method 3, Strategy S3 in [9]). The self-adaptive rule is Combining (70) and the line search in (s2) of the LSPPAD method, we conclude that the sequence is positive, bounded away from zero, and bounded above.
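For concreteness, the following is a hypothetical Python sketch of a residual-balancing penalty update of this general type; it is not claimed to reproduce Strategy S3 of [9], and the function name, the residual arguments, and the constants are illustrative assumptions.

def update_beta(beta, primal_res, dual_res, mu=10.0, tau=2.0,
                beta_min=1e-4, beta_max=1e4):
    # Enlarge beta when the primal residual dominates, shrink it when the
    # dual residual dominates, and keep it in [beta_min, beta_max] so that
    # the sequence stays positive, bounded, and bounded away from zero.
    if primal_res > mu * dual_res:
        beta = min(tau * beta, beta_max)
    elif dual_res > mu * primal_res:
        beta = max(beta / tau, beta_min)
    return beta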

We next focus on the proximal parameter sequence .

Lemma 9. If is monotone and Lipschitz continuous with constant , that is, then, whenever one has .

Proof. Recall the definition of in (s1) of the LSPPAD method; by (71) we get

Lemma 9 indicates that the line search in (s1) terminates in a finite number of iterations.

In practice, we use a self-adaptive rule to guarantee that does not tend to infinity under the line search in (s1) of the LSPPAD method. The self-adaptive rule is to let whenever , where . By this rule and step (s1) of the LSPPAD method, is clearly bounded away from zero; thus the condition on in Corollary 7 holds.
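The rule just described can be sketched as follows; the function name, the measured quantity ratio, and the constants are illustrative assumptions rather than the exact condition and update of the rule.

def update_r(r, ratio, threshold=0.5, shrink=0.8, r_min=1e-6):
    # 'ratio' measures how comfortably the line-search test in (s1) was
    # satisfied at the current iterate (an illustrative surrogate for the
    # quantity used in the actual rule).  When the test is satisfied with a
    # wide margin, r is reduced so that it does not drift to infinity, while
    # the lower bound r_min keeps it bounded away from zero, as required by
    # Corollary 7.
    if ratio < threshold:
        r = max(shrink * r, r_min)
    return r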

Finally, we report some numerical experiments on the compressed sensing problem; see [10]. The experiments are carried out on a laptop with a 1.66 GHz CPU, 2.5 GB RAM, and Matlab 6.5.

The separable convex optimization formulation of the compressed sensing problem is where . In this test problem, is nonsmooth. However, the special structure of the problem (74) preserves the equivalence between the problems (1) and (3) provided the subderivative of the absolute value function is specialized as follows: Hence the proposed method is applicable to this problem.
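As an illustration of such a specialization (this particular choice is an assumption and may differ from the exact one adopted in the paper), one can take the sign selection of the subdifferential of the absolute value,
\[
\xi(t)=\begin{cases}1, & t>0,\\ 0, & t=0,\\ -1, & t<0,\end{cases}\qquad \xi(t)\in\partial|t|\ \text{for all } t\in\mathbb{R},
\]
which makes the corresponding operator single-valued and monotone, so that the $\ell_1$ term fits the variational inequality framework (3).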

The test data for the problem (74) are generated as follows: is a random matrix with entries drawn from a uniform distribution, and is a given signal vector corrupted by random noise, . The practical parameters of the LSPPAD method are set as follows: (where denotes the eigenvalue of ), . The experimental results are reported in Table 1. The notation in Table 1 is as follows: , and cputime are self-explanatory; denotes the required sparsity level; function evaluations is the number of matrix-vector products computed (the main cost of the proposed method). The process stops whenever .
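As an illustration of the experimental setup just described, the following Python sketch generates test data of the same style; the dimensions, the distribution range, and the noise level below are assumptions and are not the exact values behind Table 1.

import numpy as np

rng = np.random.default_rng(0)

n, m, k = 1000, 300, 30                           # signal length, measurements, sparsity (assumed)
A = rng.uniform(-1.0, 1.0, size=(m, n))           # random matrix with uniformly distributed entries
x_true = np.zeros(n)                              # sparse ground-truth signal
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
sigma = 1e-3                                      # noise level (assumed)
b = A @ x_true + sigma * rng.standard_normal(m)   # noisy observation vector
L = np.linalg.norm(A.T @ A, 2)                    # largest eigenvalue of A^T A, used to set parameters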

The numerical results show that the proposed LSPPAD method performs better than the APBPC method on the test problem.

6. Conclusions

In this paper, we have presented a line-search-based partial proximal alternating directions (LSPPAD) method for solving a class of structured convex optimization problems. Problems of this kind are common in practice. The LSPPAD method makes full use of the special structure of the described problem. In the LSPPAD method, two subproblems are solved in an alternating fashion: the subproblem in the variable is solved without the proximal term, while the -subproblem is solved by a proximal point method; both use a line-search technique to guarantee convergence. This improves the tractability of the subproblems. The convergence of the LSPPAD method is established. Numerical tests show that, for the described problem, the LSPPAD method performs better than the APBPC method, which solves both subproblems by an inexact proximal point method.

Conflict of Interests

All the authors of the paper declare that they do not have any conflict of interests and that there are no financial or personal relationships with other people or organizations that could inappropriately influence their work in this paper.

Acknowledgments

This work is supported by the Natural Science Foundation of China under Grants 61170308 and 71371065 and by the Science and Technology Development Fund of Fuzhou University under Grant 2013-XQ-29. An early version of this paper was presented at the International MultiConference of Engineers and Computer Scientists.