Abstract

An implementable algorithm for solving a nonsmooth convex optimization problem is proposed by combining Moreau-Yosida regularization with bundle and quasi-Newton ideas. In contrast with the quasi-Newton bundle methods of Mifflin et al. (1998), we assume only that the values of the objective function and its subgradients can be evaluated approximately, which makes the method easier to implement. Under some reasonable assumptions, the proposed method is shown to have a Q-superlinear rate of convergence.

1. Introduction

In this paper we are concerned with the unconstrained minimization of a real-valued, convex function $f: \mathbb{R}^n \to \mathbb{R}$, namely,
$$\min_{x \in \mathbb{R}^n} f(x), \tag{1}$$
where $f$ is in general nondifferentiable. A number of attempts have been made to obtain convergent algorithms for solving (1). Fukushima and Qi [1] propose an algorithm for solving (1) under semismoothness and regularity assumptions; their algorithm is shown to have a Q-superlinear rate of convergence. An implementable BFGS method for general nonsmooth problems is presented by Rauf and Fukushima [2], and global convergence is obtained under the assumption of strong convexity. A superlinearly convergent method for (1) is proposed by Qi and Chen [3], but it requires a semismoothness condition. He [4] obtains a globally convergent algorithm for convex constrained minimization problems under certain regularity and uniform continuity assumptions. Among methods for nonsmooth optimization problems, some have a superlinear rate of convergence; see, for instance, Mifflin and Sagastizábal [5] and Lemaréchal et al. [6]. They propose two conceptual algorithms with superlinear convergence for minimizing a class of convex functions; the latter demands that the objective function be differentiable in a certain subspace (the subspace along which the subdifferential of $f$ has 0 breadth at a given point), but sometimes it is difficult to decompose the space. Besides the methods mentioned above, there is a quasi-Newton bundle-type method proposed by Mifflin et al. [7]; it has a superlinear rate of convergence, but the exact values of the objective function and its subgradients are required. In this paper, we present an implementable algorithm by using bundle and quasi-Newton ideas together with Moreau-Yosida regularization, and the proposed algorithm can be shown to have a superlinear rate of convergence. An obvious advantage of the proposed algorithm lies in the fact that we only need approximate values of the objective function and its subgradients.

It is well known that (1) can be solved by means of the Moreau-Yosida regularization of $f$, which is defined by
$$F(x) := \min_{z \in \mathbb{R}^n} \left\{ f(z) + \frac{1}{2\lambda} \|z - x\|^2 \right\}, \tag{2}$$
where $\lambda$ is a fixed positive parameter and $\|\cdot\|$ denotes the Euclidean norm or its induced matrix norm on $\mathbb{R}^{n \times n}$. The problem of minimizing $F$, that is,
$$\min_{x \in \mathbb{R}^n} F(x), \tag{3}$$
is equivalent to (1) in the sense that $x$ solves (1) if and only if it solves (3); see Hiriart-Urruty and Lemaréchal [8]. Problem (3) has the remarkable feature that the objective function $F$ is a differentiable convex function, even though $f$ is nondifferentiable. Moreover, $F$ has a Lipschitz continuous gradient
$$G(x) := \nabla F(x) = \frac{x - p(x)}{\lambda} \in \partial f(p(x)), \tag{4}$$
where $p(x)$ is the unique minimizer of (2) and $\partial f$ is the subdifferential mapping of $f$. Hence, by Rademacher's theorem, $G$ is differentiable almost everywhere and the set
$$\partial_B G(x) := \left\{ V \in \mathbb{R}^{n \times n} : V = \lim_{j \to \infty} \nabla G(x^j), \ x^j \to x, \ G \text{ is differentiable at } x^j \right\}$$
is nonempty and bounded for each $x$. We say $G$ is BD-regular at $x$ if all matrices $V \in \partial_B G(x)$ are nonsingular. It is reasonable to pay more attention to problem (3) since $F$ has such good properties. However, because the Moreau-Yosida regularization itself is defined through a minimization problem involving $f$, the exact values of $F$ and its gradient at an arbitrary point are difficult or even impossible to compute in general. Therefore, we attempt to explore the possibility of utilizing approximations of these values.
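As a concrete illustration (not part of the original paper), consider $f(z) = \|z\|_1$: its proximal point $p(x)$ is the componentwise soft-thresholding operator, so $F$, $G$, and $p$ in (2)-(4) are available in closed form. The following Python sketch, with all names our own, evaluates them numerically; note that $F$ is smooth even though $f$ is not.

```python
import numpy as np

def moreau_yosida_l1(x, lam):
    """Moreau-Yosida regularization (2) of f(z) = ||z||_1 (illustration only).
    For this f, the minimizer p(x) of (2) is soft-thresholding, so F(x)
    and its gradient G(x) = (x - p(x)) / lam are computable in closed form.
    """
    p = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)         # p(x)
    F = np.abs(p).sum() + np.dot(x - p, x - p) / (2.0 * lam)  # F(x)
    G = (x - p) / lam                                         # G(x), a subgradient of f at p(x)
    return F, G, p

x = np.array([1.5, -0.3, 0.0])
F, G, p = moreau_yosida_l1(x, lam=1.0)
print(F, G, p)
```

One can check, for instance, that for $x = 1.5$ and $\lambda = 1$ the gradient component $G = 1$ indeed belongs to $\partial f(p(x)) = \partial |\cdot|(0.5) = \{1\}$, as (4) predicts.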

Several attempts have been made to combine quasi-Newton ideas with Moreau-Yosida regularization to solve (1). For related works on this subject, see Chen and Fukushima [9] and Mifflin [10]. In particular, Mifflin et al. [7] consider using bundle ideas to approximate $f$ linearly in order to approximate $F$, in which the exact values of $f$ and one of its subgradients at certain points are needed. In this paper we assume that, for given $x \in \mathbb{R}^n$ and $\varepsilon > 0$, we can find some $f^a(x) \in \mathbb{R}$ and $g^a(x) \in \mathbb{R}^n$ such that
$$f(x) - \varepsilon \le f^a(x) \le f(x), \qquad f(z) \ge f^a(x) + \langle g^a(x), z - x \rangle \quad \forall z \in \mathbb{R}^n,$$
which means that $g^a(x) \in \partial_\varepsilon f(x)$. This setting is realistic in many applications; see Kiwiel [11]. Let us see some examples. Assume that $f$ is strongly convex with modulus $\mu > 0$, that is,
$$f(t y + (1-t) z) \le t f(y) + (1-t) f(z) - \frac{\mu}{2} t (1-t) \|y - z\|^2 \quad \forall y, z \in \mathbb{R}^n, \ t \in [0, 1],$$
and that $f = h \circ c$ with $c$ continuously differentiable and $h$ convex. By the chain rule we have $\partial f(x) = c'(x)^T \partial h(c(x))$. Now assume that we have an approximation $A(x)$ of the Jacobian $c'(x)$ such that $\|A(x) - c'(x)\|$ is small. Such an approximation may be obtained by using finite differences; in this case, the error is typically of the order of the difference step. Let $g(x) := A(x)^T v$ with $v \in \partial h(c(x))$. Some simple manipulations show that $g(x) \in \partial_{\varepsilon(x)} f(x)$, where the bound $\varepsilon(x)$ depends on $\|v\|$, the Jacobian error, and $\mu$. From the local boundedness of $\partial h$, we infer that $\varepsilon(\cdot)$ is locally bounded. Thus, $g(x)$ is an $\varepsilon(x)$-subgradient of $f$ at $x$; see Hintermüller [12]. As for the approximate function values, if $f$ is a max-type function of the form
$$f(x) = \sup \{ F_u(x) : u \in U \}, \tag{11}$$
where each $F_u$ is convex and $U$ is an infinite set, then it may be impossible to calculate $f(x)$. However, for any positive $\varepsilon$ one can usually find in finite time an $\varepsilon$-solution of the maximization problem (11), that is, an element $u_\varepsilon \in U$ satisfying $F_{u_\varepsilon}(x) \ge f(x) - \varepsilon$. Then one may set $f^a(x) := F_{u_\varepsilon}(x)$. On the other hand, in some applications, calculating $f^a(x)$ for a prescribed $\varepsilon$ may require much less work than computing $f(x)$. This is, for instance, the case when the maximization problem (11) involves solving a linear or discrete programming problem by the methods of Gabasov and Kirillova [13]. Several authors have tried to solve (1) under the assumption that the values of the objective function and its subgradients can only be computed approximately. For example, Solodov [14] considers the proximal form of a bundle algorithm for (1) in which the values of the function and its subgradients are evaluated approximately, and it is shown how these approximations should be controlled in order to satisfy the desired optimality tolerance. Kiwiel [15] proposes an algorithm for (1) that utilizes approximate evaluations of the objective function and its subgradients; global convergence of the method is obtained. Kiwiel [11] introduces another method for (1); it requires only approximate evaluations of $f$ and its $\varepsilon$-subgradients, and this method converges globally. It is evident that bundle methods with superlinear convergence for solving (1) by using approximate values of the objective and its subgradients are seldom obtained. Compared with the methods mentioned above, the method proposed in this paper is not only implementable but also has a superlinear rate of convergence under some additional assumptions; it should be noted that we only use approximate values of the objective function and its subgradients, which makes the algorithm easier to implement.
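To make such an oracle concrete, here is a toy Python sketch of an $\varepsilon$-oracle for a max-type function of the form (11). The data $F_u(x) = u x_1 + (1-u) x_2 - u^2$ and every name in the code are hypothetical, chosen only so that an $\varepsilon$-maximizer can be found by a finite grid search over $u \in [0, 1]$.

```python
import numpy as np

def eps_oracle(x, eps):
    """Toy eps-oracle for f(x) = sup_{u in [0,1]} F_u(x), with the
    hypothetical data F_u(x) = u*x[0] + (1-u)*x[1] - u**2.
    A grid fine enough relative to the Lipschitz constant of u -> F_u(x)
    yields u_eps with F_{u_eps}(x) >= f(x) - eps.  Then f_eps = F_{u_eps}(x)
    satisfies f(x) - eps <= f_eps <= f(x), and the gradient of the selected
    affine piece, (u_eps, 1 - u_eps), is an eps-subgradient of f at x.
    """
    L = abs(x[0] - x[1]) + 2.0                 # |d/du F_u(x)| <= L on [0, 1]
    n = max(2, int(np.ceil(L / eps)) + 1)      # grid spacing <= eps / L
    us = np.linspace(0.0, 1.0, n)
    vals = us * x[0] + (1.0 - us) * x[1] - us ** 2
    i = int(np.argmax(vals))
    return vals[i], np.array([us[i], 1.0 - us[i]])   # (f_eps, g_eps)
```

The design point is that the work grows only like $1/\varepsilon$, so a loose tolerance is cheap, which is exactly the situation the method of this paper is meant to exploit.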

Some notation is listed below for presenting the algorithm.

(i) $\partial f(x) := \{ g \in \mathbb{R}^n : f(z) \ge f(x) + \langle g, z - x \rangle \ \forall z \in \mathbb{R}^n \}$, the subdifferential of $f$ at $x$; each such $g$ is called a subgradient of $f$ at $x$.
(ii) $\partial_\varepsilon f(x) := \{ g \in \mathbb{R}^n : f(z) \ge f(x) + \langle g, z - x \rangle - \varepsilon \ \forall z \in \mathbb{R}^n \}$, the $\varepsilon$-subdifferential of $f$ at $x$; each such $g$ is called an $\varepsilon$-subgradient of $f$ at $x$.
(iii) $p(x)$, the unique minimizer of (2).
(iv) $G(x) := \nabla F(x)$, the gradient of $F$ at $x$.
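The definition in item (ii) can be probed numerically. The following sketch (ours, not the paper's) randomly searches for a violation of the $\varepsilon$-subgradient inequality; it is only a necessary check on sampled points, not a proof of membership.

```python
import numpy as np

def find_eps_subgradient_violation(f, x, g, eps, trials=1000, radius=10.0, seed=0):
    """Randomized sanity check of g in the eps-subdifferential of f at x:
    look for a point z violating f(z) >= f(x) + <g, z - x> - eps.
    Returns a counterexample z, or None if none is found among the samples.
    """
    rng = np.random.default_rng(seed)
    fx = f(x)
    for _ in range(trials):
        z = x + rng.uniform(-radius, radius, size=x.size)
        if f(z) < fx + g @ (z - x) - eps - 1e-12:
            return z
    return None
```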

This paper is organized as follows: in Section 2, to approximate the unique minimizer of (2), we introduce the bundle idea, which uses approximate values of the objective function and its subgradients. The approximate quasi-Newton bundle-type algorithm is presented in Section 3. In the last section, we prove the global convergence and, under additional assumptions, Q-superlinear convergence of the proposed algorithm.

2. The Approximation of $p(x)$

Let $x := x_k$ and $\varepsilon := \varepsilon_k$, where $x_k$ is the current iterate point of the AQNBT algorithm presented in Section 3; then (13) has the form
$$f(z_i) - \varepsilon \le f_i \le f(z_i), \qquad f(z) \ge f_i + \langle g_i, z - z_i \rangle \quad \forall z \in \mathbb{R}^n.$$
Now we consider approximating $p(x)$, the unique minimizer of $\varphi(z) := f(z) + \frac{1}{2\lambda} \|z - x\|^2$, by using the bundle idea. Suppose we have a bundle generated sequentially, starting from $z_1 := x$, and possibly a subset of the previous set used to generate $x_k$. The bundle includes the data $(z_i, f_i, g_i)$, $i \in I_\ell$, where $z_i$, $f_i$, and $g_i$ satisfy (13). Suppose that the elements in $I_\ell$ can be arranged according to the order of their entering the bundle; without loss of generality we may suppose $I_\ell = \{1, \ldots, \ell\}$. The bundle is updated by the rule $I_{\ell+1} := I_\ell \cup \{\ell + 1\}$, where $z_{\ell+1}$ is the newest trial point. The condition (13) means $f_i \ge f(z_i) - \varepsilon$ and $g_i \in \partial_\varepsilon f(z_i)$, $i \in I_\ell$. By using the data in the bundle we construct a polyhedral function $\varphi_\ell$ defined by
$$\varphi_\ell(z) := \max_{i \in I_\ell} \{ f_i + \langle g_i, z - z_i \rangle \} + \frac{1}{2\lambda} \|z - x\|^2. \tag{14}$$
Obviously $\varphi_\ell$ is a lower approximation of $\varphi$, so $\varphi_\ell(z) \le \varphi(z)$ for all $z \in \mathbb{R}^n$. We define a linearization error by
$$\alpha_i := f_x - f_i - \langle g_i, x - z_i \rangle, \tag{15}$$
where $f_x$ satisfies
$$f(x) - \varepsilon \le f_x \le f(x). \tag{16}$$
Then $\varphi_\ell$ can be written as
$$\varphi_\ell(z) = \max_{i \in I_\ell} \{ f_x - \alpha_i + \langle g_i, z - x \rangle \} + \frac{1}{2\lambda} \|z - x\|^2. \tag{17}$$
Let
$$z_{\ell+1} := \operatorname*{arg\,min}_{z \in \mathbb{R}^n} \varphi_\ell(z). \tag{18}$$
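Before turning to the solution of (18), the model (14) and the errors (15) can be sketched numerically as follows (our notation and array layout; an illustration, not the paper's code):

```python
import numpy as np

def model_phi_l(z, x, lam, Z, fvals, G):
    """Evaluate the polyhedral model (14) at a point z.
    Z: (m, n) bundle points z_i; fvals: (m,) approximate values f_i;
    G: (m, n) approximate subgradients g_i.
    """
    cuts = fvals + np.sum(G * (z - Z), axis=1)      # f_i + <g_i, z - z_i>
    return cuts.max() + np.dot(z - x, z - x) / (2.0 * lam)

def linearization_errors(x, fx, Z, fvals, G):
    """alpha_i = f_x - f_i - <g_i, x - z_i> as in (15)."""
    return fx - fvals - np.sum(G * (x - Z), axis=1)
```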

The problem (18) can be dealt with by solving the following quadratic programming:
$$\min_{(z, v) \in \mathbb{R}^{n+1}} \ v + \frac{1}{2\lambda} \|z - x\|^2 \quad \text{s.t.} \quad f_x - \alpha_i + \langle g_i, z - x \rangle \le v, \ i \in I_\ell. \tag{19}$$
As the iterations go along, the number of elements in the bundle increases. When the size of the bundle becomes too big, it may cause serious computational difficulties in the form of unbounded storage requirements. To overcome these difficulties, it is necessary to compress the bundle and clean the model. Wolfe [16] and Lemaréchal [17] first introduced the aggregation strategy, which requires storing only a limited number of subgradients; see Kiwiel and Mifflin [18–20]. The aggregation strategy is the synthesis mechanism that condenses the essential information of the bundle into one single couple (defined below). The corresponding affine function, inserted in the model when there is compression, is called the aggregate linearization (defined below). This function summarizes all the information generated up to iteration $\ell$. Suppose $N$ is the upper bound of the number of elements in $I_\ell$, that is, $|I_\ell| \le N$. If $|I_\ell|$ reaches the prescribed $N$, two or more of those elements are deleted from the bundle $I_\ell$; that is, two or more linear pieces in the constraints of (19) are discarded (notice that different selections of the discarded linear pieces may result in different speeds of convergence), and the aggregate linearization, associated with the aggregate $\varepsilon$-subgradient $\tilde{g}_\ell$ and linearization error $\tilde{\alpha}_\ell$, is introduced into the bundle. Define the aggregate linearization as
$$\tilde{f}_\ell(z) := f_x - \tilde{\alpha}_\ell + \langle \tilde{g}_\ell, z - x \rangle, \tag{20}$$
where $\tilde{g}_\ell := \sum_{i \in I_\ell} \mu_i g_i$ and $\tilde{\alpha}_\ell := \sum_{i \in I_\ell} \mu_i \alpha_i$. The multiplier vector $\mu = (\mu_i)_{i \in I_\ell}$ is the optimal solution of the dual problem of (19); see Solodov [14]. By doing so, the surrogate aggregate linearization maintains the information of the deleted linear pieces, and at the same time the problem (19) remains manageable since the number of elements in $I_\ell$ is limited. Suppose $z_{\ell+1}$ solves the problem (19); we let $z_{\ell+1}$ be an approximation of $p(x)$ and define the corresponding approximations of $F(x)$ and $G(x)$. Let
$$F^a(x) := f_{\ell+1} + \frac{1}{2\lambda} \|z_{\ell+1} - x\|^2, \qquad G^a(x) := \frac{x - z_{\ell+1}}{\lambda}, \tag{21}$$
where $f_{\ell+1}$ is chosen to satisfy
$$f(z_{\ell+1}) - \varepsilon \le f_{\ell+1} \le f(z_{\ell+1}). \tag{22}$$
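Assuming the standard dual of (19), a simplex-constrained quadratic program in the multipliers $\mu$, the next Python sketch recovers $z_{\ell+1}$ together with the aggregate couple $(\tilde{g}_\ell, \tilde{\alpha}_\ell)$ of (20). The function names are ours, and SLSQP is used only for simplicity; any QP solver would do.

```python
import numpy as np
from scipy.optimize import minimize

def bundle_qp_step(x, G, alpha, lam):
    """Solve the dual of the bundle QP (19):
        min_{mu in simplex} (lam/2) ||sum_i mu_i g_i||^2 + sum_i mu_i alpha_i,
    then recover z_{l+1} = x - lam * sum_i mu_i g_i and the aggregate couple
    (g_tilde, alpha_tilde) used when the bundle is compressed.
    G: (m, n) bundle subgradients g_i; alpha: (m,) linearization errors.
    """
    m = G.shape[0]
    def dual_obj(mu):
        g = G.T @ mu
        return 0.5 * lam * (g @ g) + alpha @ mu
    cons = ({'type': 'eq', 'fun': lambda mu: mu.sum() - 1.0},)
    res = minimize(dual_obj, np.full(m, 1.0 / m),
                   bounds=[(0.0, 1.0)] * m, constraints=cons, method='SLSQP')
    mu = res.x
    g_tilde = G.T @ mu            # aggregate eps-subgradient
    alpha_tilde = alpha @ mu      # aggregate linearization error
    z_next = x - lam * g_tilde    # primal solution of (19)
    return z_next, mu, g_tilde, alpha_tilde
```

The dual form also explains why aggregation loses nothing essential: the optimal $\mu$ already condenses the active cuts into the single couple $(\tilde{g}_\ell, \tilde{\alpha}_\ell)$.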

The results stated below are fundamental and useful in the subsequent discussions.

(P1) $\varphi_\ell(z) \le \varphi(z)$ for all $z \in \mathbb{R}^n$; in particular, $\varphi_\ell(z_{\ell+1}) \le F(x) \le \varphi(z_{\ell+1})$.

(P2) $x = p(x)$ if and only if $G(x) = 0$ and $F(x) = f(x)$.

Note that $p(x)$ is the unique minimizer of (2), and (P1) and (P2) can be obtained from the definitions of $\varphi$, $\varphi_\ell$, and $p(x)$.

(P3) (i) If we define $\delta_\ell := \varphi(z_{\ell+1}) - \varphi_\ell(z_{\ell+1})$, where $z_{\ell+1}$ solves (18), then $\delta_\ell \to 0$ as the new point is appended into the bundle infinitely. (ii) Let $v_\ell := \varphi_\ell(z_{\ell+1})$. If $\ell' > \ell$, then $v_{\ell'} \ge v_\ell$. Because, by the update rule $I_{\ell+1} = I_\ell \cup \{\ell + 1\}$, we have $\varphi_{\ell+1} \ge \varphi_\ell$ and $\varphi_\ell(z_{\ell+2}) \ge \varphi_\ell(z_{\ell+1})$, since $z_{\ell+1}$ minimizes $\varphi_\ell$. Thus $v_{\ell+1} \ge v_\ell$, so $\{v_\ell\}$ is nondecreasing. It is easy to see that $v_\ell \le F(x)$ by (P1). Therefore, $\{v_\ell\}$ converges.

Let
$$\delta^a_\ell := F^a(x) - \varphi_\ell(z_{\ell+1}). \tag{23}$$
We accept $z_{\ell+1}$ as an approximation of $p(x)$ based on the following rule:
$$\delta^a_\ell \le \frac{\sigma}{\lambda} \|x - z_{\ell+1}\|^2 + \varepsilon, \tag{24}$$
where $\sigma$ and $\varepsilon$ are given positive numbers and $\varepsilon$ is fixed during one bundling process; that is, $\varepsilon$ depends on $k$ (see Step 1 of the AQNBT algorithm presented in Section 3). If (24) is not satisfied, we take $f_{\ell+1}$ and $g_{\ell+1}$ satisfying (13) at $z_{\ell+1}$, append the new piece to (14), replace $I_\ell$ by $I_{\ell+1}$, and solve (19) to find a new candidate and a new value $\delta^a_{\ell+1}$ to be tested in (24). If this bundle process does not terminate, we have the following conclusion.

(P4) Suppose that $x$ is not the minimizer of $f$. If (24) is never satisfied, then $\delta_\ell \to 0$ as the new point is appended into the bundle infinitely.

Suppose that (24) is never satisfied. Define the functions $\psi$ and $\psi_\ell$ as the exact counterparts of $\varphi$ and $\varphi_\ell$, built from the exact values $f(z_i)$ and exact subgradients of $f$.

Let $u$ be the unique minimizer of $\psi$, and let $u_\ell$ be the unique minimizer of $\psi_\ell$, $\ell = 1, 2, \ldots$. Note that if $|I_\ell| < N$, then we let $I_{\ell+1} := I_\ell \cup \{\ell + 1\}$, so $|I_{\ell+1}| \le N$; if $|I_\ell| = N$, we delete at least two elements from $I_\ell$, say $z_{i_1}$ and $z_{i_2}$, and the order of the other elements in $I_\ell$ is left intact. We introduce an additional index, associated with the aggregate $\varepsilon$-subgradient $\tilde{g}_\ell$ and linearization error $\tilde{\alpha}_\ell$, into the bundle and let $I_{\ell+1}$ collect the remaining indices together with the new ones, so $|I_{\ell+1}| \le N$. By adjusting $\varepsilon$ appropriately, we can make sure that $u_\ell$ and $z_{\ell+1}$ are not far away from each other. According to the proof of Proposition 3 of Fukushima [21], we find that $\{u_\ell\}$ has a limit, say $\bar{u}$, and $\{z_{\ell+1}\}$ also converges to $\bar{u}$ as $\ell \to \infty$. By the definitions of $\psi$ and $\psi_\ell$, we have $\psi_\ell(u_\ell) \to \psi(\bar{u})$ and $\varphi_\ell(z_{\ell+1}) \to \varphi(\bar{u})$ as $\ell \to \infty$, so $\delta_\ell \to 0$ as $\ell \to \infty$.

In the next part we give the definition of $G^a(x)$, which is the approximation of $G(x)$, and some properties of $G^a(x)$ are discussed. It is easy to see that the approximation of $G(x)$ is associated with $z_{\ell+1}$:

(P5) $\|G^a(x) - G(x)\| \le \sqrt{2 \delta_\ell / \lambda}$.

By the strong convexity of $\varphi$, we have $\varphi(z_{\ell+1}) \ge F(x) + \frac{1}{2\lambda} \|z_{\ell+1} - p(x)\|^2$. From the definitions of $G^a(x)$ and $G(x)$, we obtain $\|G^a(x) - G(x)\| = \|z_{\ell+1} - p(x)\| / \lambda$. By (P1), (P5) holds.

By (P4) and (P5), we have the following (P6). In fact, (P6) says that the bundle subalgorithm for finding $G^a(x)$ terminates in finitely many steps.

(P6) If $x$ does not minimize $f$, then we can find one solution $z_{\ell+1}$ of (18) such that (24) holds.
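(P6) describes a finitely terminating inner loop. The following Python sketch strings together the pieces from the previous sketches; the notation is ours, and `accept` is only a placeholder for the test (24), whose exact constants we do not restate here.

```python
import numpy as np

def approx_prox(x, lam, eps, oracle, accept, max_iter=50):
    """Sketch of the bundle subalgorithm behind (P6): starting from z_1 = x,
    solve (19), test the candidate with (24), and enlarge the bundle on failure.
    oracle(z, eps) returns (f_eps, g_eps) as in Section 1; accept(...) stands
    in for test (24).  Reuses model_phi_l, linearization_errors, and
    bundle_qp_step from the sketches above.
    """
    fx, gx = oracle(x, eps)
    Z, fvals, G = [np.array(x, float)], [fx], [gx]
    z = np.array(x, float)
    for _ in range(max_iter):
        Zm, fm, Gm = np.array(Z), np.array(fvals), np.array(G)
        alpha = linearization_errors(x, fx, Zm, fm, Gm)
        z, mu, g_agg, a_agg = bundle_qp_step(x, Gm, alpha, lam)
        fz, gz = oracle(z, eps)
        Fa = fz + np.dot(z - x, z - x) / (2.0 * lam)     # F^a(x) as in (21)
        phi_l = model_phi_l(z, x, lam, Zm, fm, Gm)       # model value at z
        if accept(Fa, phi_l, x, z):                      # stand-in for (24)
            return z, (x - z) / lam                      # p^a(x) and G^a(x)
        Z.append(z); fvals.append(fz); G.append(gz)      # append the new piece
    return z, (x - z) / lam
```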

3. Approximate Quasi-Newton Bundle-Type Algorithm

For presenting the algorithm, we use the following notation: $F^a_k := F^a(x_k)$, $G^a_k := G^a(x_k)$, and $G_k := G(x_k)$. Given are positive numbers $\rho$, $\sigma$, $\lambda$, and a sequence $\{\varepsilon_k\}$ such that $\rho \in (0, 1)$, $\sigma \in (0, 1/2)$, $\sum_{k=1}^\infty \varepsilon_k < \infty$, and one symmetric positive definite matrix $B_1$.

Approximate Quasi-Newton Bundle-Type Algorithm (AQNBT Alg):

Step 1 (initialization). Let $x_1 \in \mathbb{R}^n$ be a starting point, and let $B_1$ be an $n \times n$ symmetric positive definite matrix. Let $\sigma$ and $\lambda$ be positive numbers. Choose a sequence of positive numbers $\{\varepsilon_k\}$ such that $\sum_{k=1}^\infty \varepsilon_k < \infty$. Set $k := 1$. Find $f^a(x_1)$ and $g^a(x_1)$ such that (13) holds at $x_1$ with $\varepsilon = \varepsilon_1$. Let $z_1 := x_1$, $f_1 := f^a(x_1)$, $g_1 := g^a(x_1)$, and let $\ell$ be the running index of the bundle subalgorithm.

Step 2 (finding a search direction). If $G^a_k = 0$, stop with $x_k$ optimal. Otherwise compute
$$d_k := -B_k^{-1} G^a_k.$$

Step 3 (line search). Starting with $i = 0$, let $i_k$ be the smallest nonnegative integer $i$ such that
$$F^a(x_k + \rho^i d_k) \le F^a_k + \sigma \rho^i \langle G^a_k, d_k \rangle,$$
where $F^a(x_k + \rho^i d_k)$ corresponds to the approximations $z_{\ell+1}$ and $f_{\ell+1}$ of $p$ and $f$ at $x_k + \rho^i d_k$; $f_{\ell+1}$ satisfies (22), $z_{\ell+1}$ is the solution of (19), in which $x$ is replaced by $x_k + \rho^i d_k$, and the expression of $F^a(x_k + \rho^i d_k)$ is similar to (21), but $x$ is replaced by $x_k + \rho^i d_k$. Set $t_k := \rho^{i_k}$ and $x_{k+1} := x_k + t_k d_k$.

Step 4 (computing the approximate gradient). Compute $G^a_{k+1} := G^a(x_{k+1})$.

Step 5 (updating $B_k$). Let $s_k := x_{k+1} - x_k$ and $y_k := G^a_{k+1} - G^a_k$. Set
$$B_{k+1} := B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{s_k^T y_k}.$$
Set $k := k + 1$, and go to Step 2.
End of AQNBT algorithm.
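Under our reading of Steps 1-5, the outer loop can be sketched in Python as follows. The direction and the BFGS update are the standard quasi-Newton choices matching Steps 2 and 5, while the backtracking test below is only a crude stand-in for the Step 3 criterion, not the paper's exact condition.

```python
import numpy as np

def aqnbt(x0, lam, eps_seq, approx_prox, tol=1e-8, max_iter=200):
    """Outer-loop sketch of the AQNBT algorithm (assumptions flagged above).
    approx_prox(x, eps) returns (p^a(x), G^a(x)) as in Section 2;
    eps_seq(k) gives the summable tolerance eps_k of Step 1.
    """
    x = np.array(x0, float)
    B = np.eye(x.size)                    # Step 1: B_1 symmetric positive definite
    _, Ga = approx_prox(x, eps_seq(1))
    for k in range(1, max_iter + 1):
        if np.linalg.norm(Ga) <= tol:     # Step 2: stop if G^a_k (nearly) vanishes
            return x
        d = -np.linalg.solve(B, Ga)       # Step 2: quasi-Newton direction
        t = 1.0
        while True:                       # Step 3: backtracking stand-in
            _, Ga_new = approx_prox(x + t * d, eps_seq(k + 1))
            if Ga_new @ Ga_new <= Ga @ Ga + 1e-4 or t < 1e-12:
                break
            t *= 0.5
        x_new = x + t * d                 # Step 4: G^a_{k+1} already computed
        s, y = x_new - x, Ga_new - Ga     # Step 5: BFGS update of B_k
        if s @ y > 1e-12:
            Bs = B @ s
            B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
        x, Ga = x_new, Ga_new
    return x
```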

4. Convergence Analysis

In this section we prove the global convergence of the algorithm described in Section 3; furthermore, under the assumptions of semismoothness and regularity, we show that the proposed algorithm has a Q-superlinear convergence rate. Following the proof of Theorem 3 of Mifflin et al. [7], we can show that, at each iteration $k$, $d_k$ is well defined, and hence the stepsize $t_k$ can be determined finitely in Step 3. We assume that the proposed algorithm does not terminate in finitely many steps, so the sequence $\{x_k\}$ is infinite. Since the sequence $\{\varepsilon_k\}$ satisfies $\sum_{k=1}^\infty \varepsilon_k < \infty$, there exists a constant $\bar{\varepsilon} > 0$ such that $\varepsilon_k \le \bar{\varepsilon}$ for all $k$. By making a slight change in the proof of Lemma 1 of Mifflin et al. [7], we have the following lemma.

Lemma 1. The sequence $\{F(x_k)\}$ is convergent, and $\{x_k\}$ is contained in a level set of $F$.

Theorem 2. Suppose $f$ is bounded below and there exists a constant $M > 0$ such that
$$\|B_k\| \le M, \quad \|B_k^{-1}\| \le M \quad \forall k.$$
Then any accumulation point of $\{x_k\}$ is an optimal solution of problem (1).

Proof. According to the first part of the proof of Theorem 3 of Mifflin et al. [7], we have $t_k \langle G^a_k, d_k \rangle \to 0$ as $k \to \infty$. Since $\varepsilon_k \to 0$, from (P1) and (P5) we obtain $\|G^a_k - G_k\| \to 0$ as $k \to \infty$. Thus
$$\lim_{k \to \infty} t_k \langle G_k, d_k \rangle = 0. \tag{37}$$
Let $\bar{x}$ be an arbitrary accumulation point of $\{x_k\}$, and let $\{x_k\}_{k \in K}$ be a subsequence converging to $\bar{x}$. By (P5) we have
$$\|G^a_k - G_k\| \le \sqrt{2 \delta_\ell / \lambda}, \quad k \in K.$$
Since $\{d_k\}$ is bounded, we may suppose $d_k \to \bar{d}$, $k \in K$, for some $\bar{d}$. Moreover, we have $G^a_k \to G(\bar{x})$, $k \in K$. If $\liminf_{k \in K} t_k > 0$, then $\langle G(\bar{x}), \bar{d} \rangle = 0$ by (37). Otherwise, if $\liminf_{k \in K} t_k = 0$, by taking a subsequence if necessary we may assume $t_k \to 0$ for $k \in K$. The definition of $i_k$ in the line search rule gives
$$F^a(x_k + \rho^{i_k - 1} d_k) > F^a_k + \sigma \rho^{i_k - 1} \langle G^a_k, d_k \rangle, \tag{39}$$
where $\rho^{i_k - 1} = t_k / \rho$. So by (P1) we obtain a corresponding inequality for $F$. By taking the limit in (39) on the subsequence $K$, we have $\langle G(\bar{x}), \bar{d} \rangle \ge \sigma \langle G(\bar{x}), \bar{d} \rangle$. In view of (37), the last inequality also gives $\langle G(\bar{x}), \bar{d} \rangle = 0$. Since $B_k d_k = -G^a_k$ and $\{B_k\}$ is bounded, it follows from $\langle G(\bar{x}), \bar{d} \rangle = 0$ that $G(\bar{x}) = 0$. Therefore, $\bar{x}$ is an optimal solution of problem (1).

In the next part, we focus our attention on establishing Q-superlinear convergence of the proposed algorithm.

Theorem 3. Suppose that the conditions of Theorem 2 hold and $\bar{x}$ is an optimal solution of (1). Assume that $G$ is BD-regular at $\bar{x}$. Then $\bar{x}$ is the unique optimal solution of (1) and the entire sequence $\{x_k\}$ converges to $\bar{x}$.

Proof. By the convexity of $F$ and the BD-regularity of $G$ at $\bar{x}$, $\bar{x}$ is the unique optimal solution of (3); for the proof, see Qi and Womersley [22]. So $\bar{x}$ is also the unique optimal solution of (1). This implies that both $f$ and $F$ must have compact level sets. By Lemma 1, $\{x_k\}$ has at least one accumulation point, and from Theorem 2 we know this accumulation point must be $\bar{x}$ since $\bar{x}$ is the unique solution of (1). Next, following the proof of Theorem 5.1 of Fukushima and Qi [1], we can prove that the entire sequence $\{x_k\}$ converges to $\bar{x}$.

The condition that the Lipschitz continuous gradient $G$ of $F$ be semismooth at the unique optimal solution of (1) is required in the next theorem. This condition is satisfied if $f$ is the maximum of several affine functions or satisfies the constant rank constraint qualification.

Theorem 4. Suppose that the conditions of Theorem 3 hold and $G$ is semismooth at the unique optimal solution $\bar{x}$ of (1). Suppose further that (i) $\varepsilon_k = o(\|x_{k+1} - x_k\|^2)$, (ii) $\lim_{k \to \infty} \|(B_k - V_k)(x_{k+1} - x_k)\| / \|x_{k+1} - x_k\| = 0$ for some $V_k \in \partial_B G(x_k)$, and (iii) $t_k = 1$ for all large $k$. Then $\{x_k\}$ converges to $\bar{x}$ Q-superlinearly.

Proof. Firstly, $\{x_k\}$ converges to $\bar{x}$ by Theorem 3. Then, by condition (i) and (P5), we have
$$\|G^a_k - G_k\| = o(\|x_{k+1} - x_k\|). \tag{42}$$
By condition (ii), there is a $V_k \in \partial_B G(x_k)$ such that
$$\|(B_k - V_k)(x_{k+1} - x_k)\| = o(\|x_{k+1} - x_k\|). \tag{43}$$
Since $G$ is semismooth at $\bar{x}$, we have, according to Qi and Sun [13],
$$G_{k+1} - G_k - V_k (x_{k+1} - x_k) = o(\|x_{k+1} - x_k\|). \tag{44}$$
Noticing that $B_k (x_{k+1} - x_k) = -t_k G^a_k$, by (42)-(44), condition (iii), and the BD-regularity of $G$ at $\bar{x}$, for all large $k$ we have
$$\|x_{k+1} - \bar{x}\| = o(\|x_k - \bar{x}\|).$$
This establishes the Q-superlinear convergence of $\{x_k\}$ to $\bar{x}$.

Condition (i) can be replaced by a more realistic condition without impairing the convergence result, since $\varepsilon_k$ is chosen before $x_{k+1}$ is generated. For condition (ii), Fukushima and Qi [1] suggest one possible choice of $B_k$; we may expect $B_k$ to provide a reasonable approximation to an element in $\partial_B G(x_k)$, but it may be far from what we should approximate. There are some approaches to overcoming this phenomenon; see Mifflin [10] and Qi and Chen [3]. For condition (iii), we can show that if the conditions of Theorem 4, except (iii), hold and $\sigma \in (0, 1/2)$, then condition (iii) holds automatically.

Acknowledgment

This research was partially supported by the National Natural Science Foundation of China (Grants no. 11171049 and no. 11171138).