Abstract

Based on discretization methods for solving semi-infinite programming problems, this paper presents a spline smoothing Newton method for semi-infinite minimax problems. The spline smoothing technique replaces the max function with a smooth cubic spline and computes only a few components of the max function at each iteration; in effect, it acts as an active set technique, which makes it efficient for solving the large-scale minimax problems that arise from the discretization of semi-infinite minimax problems. Numerical tests show that the new method is very efficient.

1. Introduction

In this paper, we consider the following semi-infinite minimax problems: where . We assume (as in Assumption  3.4.1 in [1]) that both and are Lipschitz continuous on bounded sets. Such semi-infinite minimax problems form an important class of problems in mathematical programming. They have widespread applications in optimal electronic circuit design, linear Chebyshev approximation, minimization of floor area, optimal control, computer-aided design, numerous engineering problems, and so forth (see [1–8]). Over the past decade, many researchers have studied such problems and proposed algorithms for them (see [9–14]). However, efficient algorithms for solving the problem are few, because it is difficult to design an algorithm that copes with both the nondifferentiability of the objective function and the infinite index set. A common approach is the discretization method. Generally, discretization of multidimensional domains gives rise to minimax problems with thousands of component functions, which increases the computational cost and degrades the efficiency of the discretization method. To overcome these problems, Polak et al. proposed algorithms with smoothing techniques for solving finite and semi-infinite minimax problems (see [15, 16]). In [16], an active set strategy that can be used in conjunction with exponential (entropic) smoothing for solving large-scale minimax problems arising from the discretization of semi-infinite minimax problems was proposed, but the active set grows monotonically. In this paper, using the feedback precision-adjustment smoothing parameter rule proposed by Polak et al. in [15], we propose a new discretization smoothing algorithm for solving such problems. In our algorithm, the smoothing function is a cubic spline rather than an exponential function. The spline smoothing technique replaces the max over thousands of component functions with a smooth spline and also acts as an active set technique, so only a few components of the max function are computed at each iteration. Moreover, the active set does not grow monotonically; hence the number of gradient and Hessian evaluations, and with it the computational cost, is greatly reduced. Numerical tests show that the new method is very efficient for semi-infinite minimax problems with complicated component functions.

We assume that the problem can be approximated by a sequence of finite minimax problems of the form where the sets , ,… have finite cardinality. In practice, we can expect that , that is, that the sets grow monotonically, and that the closure of is equal to . However, it suffices to assume the following, as in Assumption  3.4.2 in [1].
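To make the discretization concrete, the following minimal sketch shows one way a nested sequence of finite grids and the corresponding discretized max functions could be generated for a one-dimensional index interval [a, b]; the function names, the doubling rule for the grid sizes, and the interval form of the index set are illustrative assumptions, not the construction used in the paper.

```python
import numpy as np

def nested_grids(a, b, levels):
    """Nested finite grids of the interval [a, b]: doubling the number of
    subintervals at each level keeps every grid contained in the next one,
    so the discretized problems grow monotonically."""
    grids = []
    n = 2
    for _ in range(levels):
        grids.append(np.linspace(a, b, n + 1))
        n *= 2
    return grids

def psi_N(x, phi, Y_N):
    """Discretized max function: the maximum of phi(x, y) over the finite grid Y_N."""
    return max(phi(x, y) for y in Y_N)
```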

Assumption 1. There exist a strictly positive-valued, strictly monotone decreasing function and constants and , such that for every and , there exists a such that
It is easy to satisfy Assumption 1. For example, let .
We assume that .
Next, we let and define and with this notation, the problems assume the form
The optimality functions for the problems are defined by where .
In [1], we find the following result.

Theorem 2. Suppose that is an optimal solution to problems ; then, .

The corresponding optimality function for is defined (see Theorem  3.1.6 in [1]) by where

Referring to Lemma  3.4.3 in [1], we see that, under Assumption 1, the following result must hold.

Theorem 3. Suppose that is a sequence in converging to a point . Then, and as .

In this paper, we consider approximating uniformly by the smooth spline introduced in [17].

Let us first recall the formulation of multivariate splines. Let be a polyhedral domain of which is partitioned with irreducible algebraic surfaces into cells . A function defined on is called a -spline function with th order smoothness, expressed for short as , if and , where is the set of all polynomials of degree or less in variables. Similar to the smooth spline which approximates uniformly given in [17], we can construct a spline function to approximate uniformly (as ), where is the homogeneous Morgan-Scott partition of type two in [17], as follows: where , , and the cell is the region defined by the following inequalities: The composite function approximates uniformly as , where
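The multivariate Morgan-Scott construction of [17] is not reproduced here; instead, the following sketch illustrates the underlying idea in the simplest one-dimensional way, assuming a C^2 piecewise-cubic (spline) smoothing of the plus function max(0, t) that is folded over the components. Note how components lying more than the smoothing parameter below the running maximum contribute nothing, which is the active-set effect exploited later.

```python
def p_eps(t, eps):
    """C^2 piecewise-cubic smoothing of the plus function max(0, t):
    identically 0 for t <= -eps, identically t for t >= eps, two cubic
    pieces in between; the uniform error is eps / 6."""
    if t <= -eps:
        return 0.0
    if t >= eps:
        return t
    if t <= 0.0:
        return (t + eps) ** 3 / (6.0 * eps ** 2)
    return t - (t - eps) ** 3 / (6.0 * eps ** 2)

def smooth_max(values, eps):
    """Smooth surrogate for max(values), built by folding p_eps.

    Components more than eps below the running maximum add nothing, so only
    the few nearly maximal components actually enter the computation; the
    overestimate of the true maximum is at most (len(values) - 1) * eps / 6."""
    m = values[0]
    for v in values[1:]:
        m = m + p_eps(v - m, eps)  # smooth version of max(m, v)
    return m
```

For example, with eps = 1e-2, smooth_max([0.0, 0.995, 1.0], 1e-2) overestimates the true maximum 1.0 by roughly 2e-4, and the component 0.0 never enters the computation beyond the first assignment.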

Proposition 4. Suppose that . For any , we define If the function is continuous, then is continuous and is increasing with respect to . Furthermore, where denotes the cardinality of .

Proof. is twice continuously differentiable, and if the functions are continuous, then is continuous. From Lemma  1.1 in [18], we know that is increasing with respect to .
According to (12), we have From the definition of , we know . From the definition of , we know . Then, we have By , we can obtain . Thus and and . Hence the desired result follows.
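As a concrete illustration of the role of the index set in Proposition 4, the small helper below collects the components that lie within a tolerance eps of the current maximum; the name active_set and the plain list representation are assumptions made only for this sketch.

```python
def active_set(f_values, eps):
    """Indices of the components within eps of the maximum of f_values.

    Only these components enter the smoothed max function, so only their
    gradients and Hessians need to be evaluated; for fixed f_values the set
    shrinks (never grows) as eps decreases."""
    m = max(f_values)
    return [j for j, v in enumerate(f_values) if v >= m - eps]
```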

The following proposition is proved in [19].

Proposition 5. (1) If all the functions are continuously differentiable, then is continuously differentiable and where and
(2) For any and , , and
(3) If all the functions, , are twice continuously differentiable, then is twice continuously differentiable and where

From Proposition 5, we obtain the following results.

Corollary 6. For any and ,

Proof. According to the definition of , we know that as . From Proposition 5 (1), we know If , then If , then . That is Then, .
If , then .
Combining this with (17) shows that (21) holds. The proof is completed.

Next, let be an infinite sequence such that , as , and consider the sequence of approximating problems with defined as in (12).

Theorem 7. The problems epiconverge to the problem .

Proof. Referring to Theorem  3.3.2 in [1], we see that to prove the theorem it is sufficient to show that if is a sequence in converging to a point , then .
Thus, suppose that as . Then, Now, by Theorem 2, as , and, since by assumption of , as , it follows from (14) that as , which completes our proof.

2. Spline Smoothing Newton Method and Its Convergence

We combine Algorithm  3.1 in [18] with a discretization precision test to produce an algorithm for solving the semi-infinite minimax problems . The Hessian of the smoothing spline function can be modified by adding a multiple of the identity, as introduced in [20]; that is, where with denoting the minimum eigenvalue of and .
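A minimal sketch of such a modification, assuming the common rule of shifting the spectrum up to a small threshold tau whenever the minimum eigenvalue falls below it; the full eigenvalue computation and the value of tau are illustrative choices, not the precise rule of [20].

```python
import numpy as np

def modified_hessian(H, tau=1e-6):
    """Add a multiple of the identity to H so that the result is positive
    definite: if the minimum eigenvalue of H is at least tau, H is returned
    unchanged; otherwise the spectrum is lifted up to tau."""
    lam_min = np.linalg.eigvalsh(H).min()
    shift = max(0.0, tau - lam_min)
    return H + shift * np.eye(H.shape[0])
```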

Algorithm 8.
Data. Given , a monotone increasing sequence of sets , , of cardinality , with as , satisfying Assumption 1, and defining the functions , a sequence of monotone decreasing parameters , such that as , , and . Functions : , satisfying , for all , , .

Parameter. Set .

Step 0. Set .

Step 1. Set .

Step 2. Let ; let be the cardinality of , and . Arrange according to .

If , the cell is .

Else if , for every , we have . Let be the maximum element of ; then the cell is .

Step 3. Compute . If , go to Step  4. Else go to Step  9.

Step 4. Compute according to (27); then compute Cholesky factor such that and the reciprocal condition number of .

If and , go to Step  5.

Else if and the largest eigenvalue of satisfies , go to Step  5.

Else go to Step  6.

Step 5. Compute the search direction go to Step  7.

Step 6. Compute the search direction

Step 7. Compute the step length , where is the smallest integer satisfying

Step 8. Set , . Go to Step  2.

Step 9. If , compute such that go to Step  10.

Else set , , and ; go to Step  2.

Step 10. If , set , , and ; go to Step  2. Else set , , , and ; go to Step  2.

Step 11. If where the are defined by (17), set , , replace by , by , and go to Step  1.

Else go to Step  1.
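The following compact sketch traces one inner pass through Steps 4-8 under simplifying assumptions: the modified Hessian is formed as above, a Cholesky factorization together with a reciprocal-condition-number test selects between the Newton direction and the steepest-descent direction, and an Armijo backtracking rule chooses the step length. All names and the parameter values sigma, alpha, and rcond_min are illustrative; the tests of Steps 3 and 9-11 involving the smoothing and discretization parameters are not reproduced.

```python
import numpy as np

def inner_step(f, grad_f, hess_mod_f, x, sigma=1e-4, alpha=0.5, rcond_min=1e-12):
    """One pass through Steps 4-8 on the current smoothed, discretized problem."""
    g = grad_f(x)
    H = hess_mod_f(x)                        # Step 4: modified Hessian
    try:
        np.linalg.cholesky(H)                # factorization succeeds iff H is SPD
        well_conditioned = 1.0 / np.linalg.cond(H) > rcond_min
    except np.linalg.LinAlgError:
        well_conditioned = False
    if well_conditioned:
        d = -np.linalg.solve(H, g)           # Step 5: Newton direction
    else:
        d = -g                               # Step 6: steepest-descent direction
    t = 1.0                                  # Step 7: Armijo backtracking
    while f(x + t * d) > f(x) + sigma * t * g.dot(d):
        t *= alpha
    return x + t * d                         # Step 8: new iterate
```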

Theorem 9. Suppose that is a sequence constructed by Algorithm 8. Then, any accumulation point of this sequence satisfies .

Proof. First note that it follows from Corollary 6 that condition (32) will eventually be satisfied, since Algorithm 8 keeps decreasing . Next, note that Since by construction and on any infinite subsequence that converges to , the desired result follows.

3. Numerical Experiment

We have implemented Algorithm 8 using Matlab. In order to show the efficiency of the algorithm, we have also implemented the algorithm in [16] (denoted PWY) using similar procedures. Algorithm PWY was proposed by Polak et al. in [16] and was introduced in Section 1.

The test results were obtained by running Matlab R2011a on a desktop with the Windows XP Professional operating system, an Intel(R) Core(TM) i3-370 2.40 GHz processor, and 2.92 GB of memory.

In Algorithm 8, the parameters are chosen as follows: , , , , , , , , , , , and . In the PWY algorithm, the parameters are chosen as follows: , . The results are listed in Tables 1, 2, 3, 4, and 5. denotes the final approximate solution point and is the value of the objective function of the discretized problems at . is the maximum number of discrete points. Time is the CPU time in seconds.

Example 1 (see [21]). Let

Example 2 (see [21]). Let

Example 3. Let

Example 4. Let

Example 5. Let

4. Conclusion

We have developed a spline smoothing Newton method for the solution of semi-infinite minimax problems using a smooth cubic spline together with a discretization strategy. At each iteration, only a few components of the max function are computed; hence, the computational cost is greatly reduced. For semi-infinite minimax problems with complicated component functions, numerical tests show that the new method is very efficient.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (11171051, 91230103), the Fundamental Research Funds for the Central Universities (DC13010214), and the General Project of the Education Department of Liaoning Province (0908-330006).