Zhengyong Zhou, Qi Yang, "An Active Set Smoothing Method for Solving Unconstrained Minimax Problems", Mathematical Problems in Engineering, vol. 2020, Article ID 9108150, 25 pages, 2020. https://doi.org/10.1155/2020/9108150

An Active Set Smoothing Method for Solving Unconstrained Minimax Problems

Academic Editor: Mohammad D. Aliyu
Received: 14 Feb 2020
Accepted: 05 Jun 2020
Published: 24 Jun 2020

Abstract

In this paper, an active set smoothing function based on the plus function is constructed for the maximum function. The active set strategy used in the smoothing function reduces the number of gradient and Hessian evaluations of the component functions in the optimization. Combining the active set smoothing function, a simple adjustment rule for the smoothing parameters, and an unconstrained minimization method, an active set smoothing method is proposed for solving unconstrained minimax problems. The active set smoothing function is continuously differentiable, and its gradient is locally Lipschitz continuous and strongly semismooth. Under the boundedness assumption on the level set of the objective function, the convergence of the proposed method is established. Numerical experiments show that the proposed method is feasible and efficient, particularly for minimax problems with very many component functions.

1. Introduction

In this paper, we consider the following unconstrained minimax problem:

$$\min_{x \in \mathbb{R}^n} f(x), \quad f(x) = \max_{1 \le i \le m} f_i(x), \tag{1}$$

where the component functions $f_i : \mathbb{R}^n \to \mathbb{R}$, $i = 1, \ldots, m$, are twice continuously differentiable. Minimax problem (1) is a typical nonsmooth optimization problem and arises in many fields, such as engineering design [1], vehicle routing [2, 3], structural optimization [4], electronic circuit design [5], and game theory [6, 7].
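To fix ideas, the following minimal Python sketch (not from the paper; the three component functions are an illustrative choice, not the authors') evaluates a finite minimax objective of the form (1):

```python
import numpy as np

# Three smooth component functions; f(x) = max_i f_i(x) is nonsmooth at the
# points where the maximizing index switches.
def components(x):
    return np.array([
        x[0] ** 2 + x[1] ** 4,                   # f_1
        (2.0 - x[0]) ** 2 + (2.0 - x[1]) ** 2,   # f_2
        2.0 * np.exp(x[1] - x[0]),               # f_3
    ])

def f(x):
    return components(x).max()                   # the maximum function

print(f(np.array([1.0, 1.0])))                   # all three components are active here
```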

Many methods have been proposed for solving minimax problem (1), such as subgradient methods [8], bundle type methods [9, 10], cutting plane methods [11], sequential quadratic programming methods [12–14], interior point methods [15–17], conjugate gradient methods [18], and smoothing methods [19–26]. The main advantage of smoothing methods is that the minimax problem is transformed into a sequence of simple, smooth, unconstrained optimization problems, which can be solved by standard unconstrained minimization solvers.

In [27], the following aggregate function (also called the exponential penalty function), induced from Jaynes' maximum entropy principle, was introduced:

$$f_\mu(x) = \mu \ln \sum_{i=1}^{m} \exp\left(\frac{f_i(x)}{\mu}\right), \tag{2}$$

where $\mu > 0$ is the smoothing parameter. It approaches $f(x)$ uniformly with respect to $x$ as the smoothing parameter goes to 0 and has been widely used in the smoothing methods for solving minimax problems. Its gradient can be written as follows:

$$\nabla f_\mu(x) = \sum_{i=1}^{m} \lambda_i^\mu(x) \nabla f_i(x), \tag{3}$$

with

$$\lambda_i^\mu(x) = \frac{\exp(f_i(x)/\mu)}{\sum_{j=1}^{m} \exp(f_j(x)/\mu)}, \quad i = 1, \ldots, m, \tag{4}$$

which is a convex combination of the gradients of all the component functions, and its Hessian

$$\nabla^2 f_\mu(x) = \sum_{i=1}^{m} \lambda_i^\mu(x) \nabla^2 f_i(x) + \frac{1}{\mu}\left(\sum_{i=1}^{m} \lambda_i^\mu(x) \nabla f_i(x) \nabla f_i(x)^\top - \nabla f_\mu(x) \nabla f_\mu(x)^\top\right) \tag{5}$$

is a complicated combination of the gradients and Hessians of all the component functions. Therefore, for a maximum function with very many nonlinear component functions, the evaluation of the gradient and Hessian of the aggregate function consumes a large amount of computation.
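As a concrete illustration of (2)–(4), here is a sketch (not the authors' code) of a numerically stable evaluation; the shift by $f(x)$ is the standard trick for avoiding overflow in the exponentials, and `grads(x)` is an assumed callback returning the $m \times n$ matrix of component gradients:

```python
import numpy as np

def aggregate(x, mu, components, grads):
    """Aggregate (exponential penalty) smoothing of f(x) = max_i f_i(x).

    Uses the shifted form f_mu(x) = f(x) + mu*log(sum_i exp((f_i(x)-f(x))/mu)),
    so every exponent is <= 0 and no overflow can occur.
    """
    fvals = components(x)               # f_i(x), shape (m,)
    fmax = fvals.max()                  # f(x)
    w = np.exp((fvals - fmax) / mu)     # unnormalized weights in (0, 1]
    lam = w / w.sum()                   # convex multipliers of (4)
    f_mu = fmax + mu * np.log(w.sum())  # smoothed value, cf. (2)
    g_mu = grads(x).T @ lam             # gradient (3): convex combination
    return f_mu, g_mu, lam
```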

For minimax problems with very many component functions, several active set strategies have been developed for smoothing methods to reduce the number of gradient or Hessian evaluations of the component functions at each iteration. In [18], an active set smoothing function for $f(x)$ with smoothing parameter $\mu > 0$ was presented; the active set it uses at $x$ collects the indices of the component functions whose values are sufficiently close to $f(x)$.

In [28], a cubic spline smoothing function for $f(x)$ was presented; for any smoothing parameter $\mu > 0$, it likewise uses an active set of nearly maximal component functions at $x$.

In [25], an active set strategy for the aggregate function was introduced: for a given tolerance $\epsilon > 0$, the active set used for the aggregate function at the current iterate is updated to contain the indices of the component functions whose values lie within $\epsilon$ of the maximum.

In [26], another active set strategy for the aggregate function was presented: for any smoothing parameter $\mu > 0$, the active set used for the aggregate function at $x$ is defined by a threshold test on $f(x) - f_i(x)$, where the threshold is a complicated combination of several parameters.
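Both strategies share the same mechanism: restrict the aggregate sums to nearly maximal components. A hedged sketch of the idea follows; the threshold test `fvals >= fmax - eps` is a generic stand-in for the precise update rules of [25, 26], and `grad_i(x, i)` is an assumed callback returning one component gradient:

```python
import numpy as np

def active_set_aggregate(x, mu, eps, components, grad_i):
    """Aggregate smoothing restricted to an active set: only components whose
    values lie within eps of the maximum enter the sums, so gradients of
    clearly inactive components are never evaluated."""
    fvals = components(x)
    fmax = fvals.max()
    active = np.flatnonzero(fvals >= fmax - eps)   # epsilon-active indices
    w = np.exp((fvals[active] - fmax) / mu)
    lam = w / w.sum()
    f_mu = fmax + mu * np.log(w.sum())
    g_mu = sum(l * grad_i(x, i) for l, i in zip(lam, active))
    return f_mu, g_mu, active
```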

In this paper, based on the plus function, an active set smoothing function for the maximum function is proposed; the smoothing function only involves the component functions whose values are close to $f(x)$. It is continuously differentiable, and its gradient is locally Lipschitz continuous and strongly semismooth. Combining the active set smoothing function, a geometric reduction rule for the smoothing parameters, the Armijo line search strategy, the steepest descent direction, and the Newton direction, an active set smoothing method is proposed for solving unconstrained minimax problems. Under the boundedness assumption on the level set of $f$, the convergence of the active set smoothing method is established. Numerical experiments show that the resulting method is stable and efficient, especially for minimax problems with very many component functions.

The following assumptions and results will be used in this paper:

Assumption 1: the component functions $f_i$, $i = 1, \ldots, m$, are twice continuously differentiable, and $\nabla f_i$, $i = 1, \ldots, m$, is strongly semismooth.

Assumption 2: for any $x^0 \in \mathbb{R}^n$, the level set $L(x^0) = \{x \in \mathbb{R}^n : f(x) \le f(x^0)\}$ is bounded.

Definition 1 (see [29]). Suppose that $F : \mathbb{R}^n \to \mathbb{R}^m$ is locally Lipschitz continuous. If for any $d \to 0$ and $V \in \partial F(x + d)$,

$$Vd - F'(x; d) = O(\|d\|^2),$$

where $\partial F(x + d)$ is the generalized Jacobian of $F$ at $x + d$ and $F'(x; d)$ is the directional derivative of $F$ at $x$ in the direction $d$, then $F$ is said to be strongly semismooth at $x$.

Lemma 1 (see [29]). Suppose that $F$ and $G$ are strongly semismooth; then: (i) for any $\alpha, \beta \in \mathbb{R}$, $\alpha F + \beta G$ is strongly semismooth; (ii) $FG$ is strongly semismooth; (iii) if $|G(x)| \ge c$ for a constant $c > 0$, $F/G$ is strongly semismooth; (iv) the composition $F \circ G$ is strongly semismooth.

Lemma 2 (see [29]). Suppose that $F = (F_1, \ldots, F_m)^\top$ is locally Lipschitz continuous. If all $F_i$, $i = 1, \ldots, m$, are strongly semismooth, then $F$ is strongly semismooth.

Lemma 3 (see [30]). For the function $f : \mathbb{R}^n \to \mathbb{R}$, if $\nabla f$ is locally Lipschitz continuous, then $f$ is strongly semismooth.

Theorem 1 (see [24]). Suppose that the component functions $f_i$, $i = 1, \ldots, m$, are continuously differentiable. If $x^*$ is a local minimizer of problem (1), then

$$0 \in \operatorname{conv}\{\nabla f_i(x^*) : i \in I(x^*)\},$$

where

$$I(x^*) = \{i \in \{1, \ldots, m\} : f_i(x^*) = f(x^*)\},$$

and conv denotes the convex hull of a set.

2. An Active Set Smoothing Function for the Maximum Function

In this section, based on the plus function $(\cdot)_+ : \mathbb{R} \to \mathbb{R}$,

$$(t)_+ = \max\{t, 0\},$$

we construct a smoothing function $\tilde{f}(x, \mu, p)$ for $f(x)$, where $\mu > 0$ is the smoothing parameter and $p > 0$ is the scaling parameter. By the definition of the plus function and Assumption 1, we have the following result.

Lemma 4. For any $\mu > 0$ and $p > 0$, $\tilde{f}(x, \mu, p)$ is continuously differentiable with respect to $x$.
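For reference, the plus function and the elementary identity $\max\{a, b\} = a + (b - a)_+$ can be sketched as follows; this identity is one standard starting point for plus-function-based smoothings in general, not the paper's specific construction:

```python
def plus(t):
    """The plus function (t)_+ = max{t, 0}; continuous, piecewise linear,
    and nonsmooth only at t = 0."""
    return t if t > 0.0 else 0.0

# max{a, b} = a + (b - a)_+ ; nesting this identity expresses the maximum of
# finitely many values through plus functions, which a plus-function
# smoothing then replaces by a smooth approximation.
a, b = 1.5, 2.0
assert max(a, b) == a + plus(b - a)
```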

For any $x \in \mathbb{R}^n$, $\mu > 0$, and $p > 0$, let $I(x, \mu, p)$ denote the set of indices whose component functions contribute a positive argument to the plus function; then $\tilde{f}(x, \mu, p)$ only involves the component functions $f_i$ with $i \in I(x, \mu, p)$, whose function values are close to $f(x)$. Therefore, $\tilde{f}(x, \mu, p)$ is called an active set smoothing function for the maximum function in this paper. By direct calculation, we can obtain the gradient of $\tilde{f}(x, \mu, p)$, which can also be written as a combination of the gradients $\nabla f_i(x)$, $i \in I(x, \mu, p)$.

Lemma 5. For any $x \in \mathbb{R}^n$, $\mu > 0$, and $p > 0$, $\tilde{f}(x, \mu, p)$ admits two-sided bounds in terms of $f(x)$, $\mu$, and $p$.

Proof. In the first case, the definition of the plus function yields (23); in the second case, the corresponding plus terms vanish, which yields (24). By (23) and (24), the conclusion holds.

Lemma 6. For any $x \in \mathbb{R}^n$, $p > 0$, and smoothing parameters $\mu$, $\mu'$ satisfying $0 < \mu' \le \mu$, the two estimates (i) and (ii) hold, where the constants are determined by the parameters.

Proof. By (10) and the definition of the active set, we obtain (26). Then, we have (27), and hence the stated inclusion of the active sets. Therefore:
(i) By (26) and (27), we have (28); then, (29) follows.
(ii) By (26), (29), and the definition of the plus function, we have (30).

According to Lemmas 5 and 6, as the smoothing parameter goes to 0, we have the following approximation of $f(x)$ by $\tilde{f}(x, \mu, p)$.

Lemma 7. For any $x \in \mathbb{R}^n$, $p > 0$, and $\mu > 0$ small enough, $\tilde{f}(x, \mu, p)$ approximates $f(x)$ in the sense of the estimates (i) and (ii).

For convenience of discussion, for any $x \in \mathbb{R}^n$, $\mu > 0$, and $p > 0$, let $\lambda_i(x, \mu, p)$, $i \in I(x, \mu, p)$, denote the multipliers generated by the plus function terms; then the gradient of $\tilde{f}(x, \mu, p)$ in (20) can be rewritten as the weighted combination (32) of the gradients $\nabla f_i(x)$ over the active set.

Lemma 8. Suppose that Assumption 1 holds; then for any $\mu > 0$ and $p > 0$, the plus function terms appearing in $\tilde{f}(x, \mu, p)$, $i = 1, \ldots, m$, are locally Lipschitz continuous in $x$.

Proof. By Assumption 1, we know that $f_i$, $i = 1, \ldots, m$, is locally Lipschitz continuous with respect to $x$ for any $\mu > 0$ and $p > 0$. By the definition of the plus function, for any $t_1, t_2 \in \mathbb{R}$, we know that:
(i) If $t_1 \ge 0$, $t_2 \ge 0$, then we have $(t_1)_+ = t_1$, $(t_2)_+ = t_2$, and hence, $|(t_1)_+ - (t_2)_+| = |t_1 - t_2|$.
(ii) If $t_1 \ge 0$, $t_2 < 0$, then we have $(t_1)_+ = t_1$, $(t_2)_+ = 0$, and hence, $|(t_1)_+ - (t_2)_+| = t_1 \le t_1 - t_2 = |t_1 - t_2|$.
(iii) If $t_1 < 0$, $t_2 < 0$, then we have $(t_1)_+ = 0$, $(t_2)_+ = 0$, and hence, $|(t_1)_+ - (t_2)_+| = 0 \le |t_1 - t_2|$.
Therefore, the plus function is Lipschitz continuous with constant 1. Hence, the plus function terms in $\tilde{f}(x, \mu, p)$, $i = 1, \ldots, m$, are locally Lipschitz continuous.

Lemma 9. Suppose that Assumption 1 holds; then for any $\mu > 0$ and $p > 0$, $\nabla \tilde{f}(x, \mu, p)$ is locally Lipschitz continuous.

Proof. By Assumption 1, $\nabla f_i$, $i = 1, \ldots, m$, is locally Lipschitz continuous. Then, by Lemma 8, the factors composing $\nabla \tilde{f}(x, \mu, p)$ are locally Lipschitz continuous, which implies that $\nabla \tilde{f}(x, \mu, p)$ is locally Lipschitz continuous.

Lemma 10. Suppose that Assumption 1 holds; then for any $\mu > 0$ and $p > 0$, the plus function terms appearing in $\tilde{f}(x, \mu, p)$, $i = 1, \ldots, m$, are strongly semismooth.

Proof. By the proof of Lemma 8, the plus function is Lipschitz continuous. For any $t > 0$ and $d \to 0$ with $t + d > 0$, we have $(t + d)_+ = t + d$ and $V = 1$ for $V \in \partial (t + d)_+$; for any $t < 0$ and $d \to 0$ with $t + d < 0$, we have $(t + d)_+ = 0$ and $V = 0$. At $t = 0$, for any $d \to 0$ and $V \in \partial (d)_+$, we have $V = 1$ and $(\cdot)_+'(0; d) = d$ for $d > 0$, while $V = 0$ and $(\cdot)_+'(0; d) = 0$ for $d < 0$. Therefore, for any $t \in \mathbb{R}$ and $d \to 0$, we have

$$Vd - (\cdot)_+'(t; d) = O(|d|^2), \quad \forall V \in \partial (t + d)_+,$$

which implies that the plus function is strongly semismooth at $t = 0$ by Definition 1. Since the plus function is sufficiently smooth on $\mathbb{R} \setminus \{0\}$, we know that the plus function is strongly semismooth on $\mathbb{R}$.

By Assumption 1, $\nabla f_i$, $i = 1, \ldots, m$, is locally Lipschitz continuous. Then, by Lemma 3, the component functions $f_i$, $i = 1, \ldots, m$, are strongly semismooth, and hence the argument of each plus term is strongly semismooth with respect to $x$ for any $\mu > 0$ and $p > 0$. Therefore, by (iv) of Lemma 1, each plus term, $i = 1, \ldots, m$, is strongly semismooth.

Lemma 11. Suppose that Assumption 1 holds; then for any $\mu > 0$ and $p > 0$, $\nabla \tilde{f}(x, \mu, p)$ is strongly semismooth.

Proof. By Lemma 9, $\nabla \tilde{f}(x, \mu, p)$ is locally Lipschitz continuous. By Assumption 1, $\nabla f_i$, $i = 1, \ldots, m$, is strongly semismooth. By Lemma 10, each plus term, $i = 1, \ldots, m$, is strongly semismooth. Then, by (i) and (ii) of Lemma 1, the sums and products composing $\nabla \tilde{f}(x, \mu, p)$ are strongly semismooth. Therefore, $\nabla \tilde{f}(x, \mu, p)$ is strongly semismooth by Lemma 2.

For any $x \in \mathbb{R}^n$, by (32) and the definition of the plus function, the Clarke generalized Jacobian of $\nabla \tilde{f}(x, \mu, p)$ at $x$ can be represented through the generalized Jacobians of the plus function terms. For efficient numerical evaluation, we can select the element corresponding to the subgradient 0 of the plus function at its kink; then a computable representation of one element of the generalized Jacobian follows.
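For instance, one concrete selection of an element of the generalized Jacobian of the plus function can be sketched as follows; choosing the value 0 at the kink is one convention, consistent with keeping kink indices out of the active sums:

```python
def plus_subgradient(t):
    """One element of the Clarke generalized Jacobian of the plus function:
    the derivative is 1 for t > 0 and 0 for t < 0, while at the kink t = 0
    the generalized Jacobian is the whole interval [0, 1]; returning 0 there
    selects a single computable element."""
    return 1.0 if t > 0.0 else 0.0
```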

3. An Active Set Smoothing Method and Its Convergence

In this section, based on the active set smoothing function for $f(x)$ and the smoothing methods introduced in [24], an active set smoothing method is proposed to solve problem (1). For a starting point $x^0$ and an initial smoothing parameter $\mu_0 > 0$, the initial scaling parameter $p_0$ is chosen from a bounded region in Subroutine 1, which reduces the ill-conditioning of the smoothing problem caused by the scaling of the variable $x$. The Armijo line search strategy, the steepest descent direction, and the Newton direction, in which the selection of the search direction depends on the condition number of the generalized Jacobian and two convergence conditions for the Newton direction, are used to compute an approximate solution of the smoothing problem $P(\mu_0, p_0)$:

$$\min_{x \in \mathbb{R}^n} \tilde{f}(x, \mu_0, p_0).$$

Then, the smoothing parameter is geometrically reduced to $\mu_{k+1} = \gamma \mu_k$ with a fixed $\gamma \in (0, 1)$, the scaling parameter $p_{k+1}$ is chosen from a bounded region in Subroutine 1, the iterate is updated in two ways in Subroutine 1 to balance the efficiency and convergence of the resulting algorithm, and the smoothing problem $P(\mu_{k+1}, p_{k+1})$:

$$\min_{x \in \mathbb{R}^n} \tilde{f}(x, \mu_{k+1}, p_{k+1})$$

is solved with the previous solution as the starting point. By repeating this process, a sequence of smooth, unconstrained optimization problems is solved. As the smoothing parameters go to 0, a solution of problem (1) can be obtained from the solutions of the smoothing problems $P(\mu_k, p_k)$.
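The outer loop just described can be sketched as follows (a minimal sketch, with assumptions: `solve` is any smooth unconstrained solver, `f_smooth(x, mu)` evaluates a smoothing function, and the parameter names are illustrative, not the paper's):

```python
def smoothing_method(x0, mu0, solve, f_smooth, gamma=0.1, mu_min=1e-8):
    """Outer loop: solve a sequence of smoothing problems P(mu_k),
    warm-starting each solve from the previous solution, while the
    smoothing parameter is reduced geometrically toward 0."""
    x, mu = x0, mu0
    while mu > mu_min:
        x = solve(lambda z: f_smooth(z, mu), x)  # approximate solution of P(mu)
        mu *= gamma                              # geometric reduction
    return x
```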

Algorithm 1. An active set smoothing algorithm.
Data: a starting point $x^0$, an initial smoothing parameter $\mu_0 > 0$, a reduction factor $\gamma \in (0, 1)$, Armijo line search constants, condition number thresholds, and stopping tolerances.

Step 0: set $k = 0$, choose the initial scaling parameter, and go to Subroutine 1.

Step 1 (compute the search direction): compute the condition number of the generalized Jacobian. If it does not exceed the threshold, compute the Newton direction by solving the Newton equation. If the generalized Jacobian is positive definite and the Newton direction satisfies the first descent condition, go to Step 2. If it is not positive definite and the Newton direction satisfies the second descent condition, go to Step 2. Else, compute the steepest descent direction and go to Step 2.

Step 2 (compute the stepsize): take the stepsize whose exponent is the smallest nonnegative integer satisfying the Armijo condition, and go to Step 3.

Step 3: update the iterate with the accepted direction and stepsize, and go to Step 4.

Step 4: if the inner stopping condition for the current smoothing problem holds, go to Step 5; else, go to Step 1.

Step 5 (adjustment of the smoothing parameter): reduce the smoothing parameter geometrically, reset the inner counters, and go to Subroutine 1.

Subroutine 1 (adjustment of the scaling parameter):

Substep 0: initialize the trial scaling parameters; depending on the sign test, set the scaling parameter by one of the two update formulas. Compute the condition numbers of the generalized Jacobians for the current and trial scaling parameters. If the first comparison holds, go to Substep 1; else, if the second comparison holds, go to Substep 2; else, return to Step 4 of Algorithm 1 with the current values.

Substep 1: enlarge the trial scaling parameter and go to Substep 3.

Substep 2: shrink the trial scaling parameter and go to Substep 4.

Substep 3: recompute the condition number of the generalized Jacobian; if it improves and the bound is respected, go to Substep 1; else, return to Step 4 of Algorithm 1 with the current values.

Substep 4: recompute the condition number of the generalized Jacobian; if it improves and the bound is respected, go to Substep 2; else, return to Step 4 of Algorithm 1 with the current values.
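A minimal sketch of one inner iteration in the spirit of Steps 1 and 2 follows; the condition number threshold, the descent test, and the Armijo constants here are illustrative stand-ins for the paper's parameters:

```python
import numpy as np

def inner_step(x, f, grad, hess, cond_max=1e8, sigma=1e-4, beta=0.5):
    """Try the Newton direction when the Hessian approximation is well
    conditioned and yields descent; otherwise fall back to steepest descent;
    then backtrack until the Armijo condition holds."""
    g, H = grad(x), hess(x)
    d = -g                                        # steepest descent fallback
    if np.linalg.cond(H) <= cond_max:
        dn = np.linalg.solve(H, -g)               # Newton direction
        if g @ dn < 0.0:                          # accept only if descent
            d = dn
    t = 1.0
    while f(x + t * d) > f(x) + sigma * t * (g @ d) and t > 1e-16:
        t *= beta                                 # Armijo backtracking
    return x + t * d
```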

Remark 1. In Subroutine 1 for adjusting the scaling parameter $p$, in the first case, $p$ is updated to satisfy an upper bound; combining this bound with the definition of $\tilde{f}$ yields a corresponding bound on the smoothing function. In the second case, $p$ is updated to satisfy a lower bound, which keeps the monotonicity of $\tilde{f}(x, \mu, p)$ with respect to $\mu$ in Lemma 13.

Theorem 2 (local convergence [31]). For any $\mu > 0$ and $p > 0$, suppose that $x^*$ is a stationary point of the problem $P(\mu, p)$. If all $V \in \partial \nabla \tilde{f}(x^*, \mu, p)$ are nonsingular, then there exist a neighborhood $N(x^*)$ of $x^*$ and a constant $C > 0$ such that for any $x \in N(x^*)$ and any $V \in \partial \nabla \tilde{f}(x, \mu, p)$, $V$ is nonsingular and $\|V^{-1}\| \le C$.

The sequence produced from any initial point $x^0 \in N(x^*)$ by the semismooth Newton method quadratically converges to $x^*$.
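For completeness, a generic semismooth Newton iteration matching the statement of Theorem 2 can be sketched as follows, under the assumption that `jac_element(x)` returns one element of the generalized Jacobian at `x`:

```python
import numpy as np

def semismooth_newton(x, F, jac_element, tol=1e-12, max_iter=50):
    """Semismooth Newton: at each iterate pick one generalized Jacobian
    element V and take the full step x <- x - V^{-1} F(x); near a solution
    with nonsingular generalized Jacobian, convergence is quadratic."""
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        x = x - np.linalg.solve(jac_element(x), Fx)
    return x
```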

Lemma 12. Suppose that Assumption 1 holds; then for any bounded set $D \subset \mathbb{R}^n$ and parameters $\mu > 0$, $p > 0$, and the line search constants, there exists a $\bar{t} > 0$ such that for any $x \in D$, the stepsize $t$ computed in Step 2 of Algorithm 1 satisfies $t \ge \bar{t}$ together with the associated descent estimate.

Proof. Let $x \in D$ and let $d$ be the search direction. By (43)–(47), the search direction satisfies the uniform descent condition. By Lemma 9, $\nabla \tilde{f}(x, \mu, p)$ is locally Lipschitz continuous, and then there exists a Lipschitz constant $L > 0$ such that for any $x$, $y$ in a bounded neighborhood of $D$, $\|\nabla \tilde{f}(x, \mu, p) - \nabla \tilde{f}(y, \mu, p)\| \le L \|x - y\|$. For any $x \in D$ and $t > 0$, by the mean value theorem, there exists a $\theta \in (0, 1)$ such that