Journal of Applied Mathematics

Volume 2014, Article ID 852074, 7 pages

http://dx.doi.org/10.1155/2014/852074
Research Article

A Spline Smoothing Newton Method for Semi-Infinite Minimax Problems

Li Dong,1 Bo Yu,2,3 and Yu Xiao4

1College of Science, Dalian Nationalities University, Dalian 116600, China

2School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China

3School of Sciences, Dalian University of Technology, Dalian at Panjin 124221, China

4School of Basic Science, East China Jiaotong University, Nanchang 330013, China

Received 20 May 2014; Accepted 30 June 2014; Published 17 July 2014

Academic Editor: Mariano Torrisi

Copyright © 2014 Li Dong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Based on discretization methods for solving semi-infinite programming problems, this paper presents a spline smoothing Newton method for semi-infinite minimax problems. The spline smoothing technique uses a smooth cubic spline in place of the max function, and only a few components of the max function are computed at each iteration; that is, it introduces an active set technique, so it is efficient for solving large-scale minimax problems arising from the discretization of semi-infinite minimax problems. Numerical tests show that the new method is very efficient.

1. Introduction

In this paper, we consider the following semi-infinite minimax problems: where . We assume (as in Assumption  3.4.1 in [1]) that both and are Lipschitz continuous on bounded sets. Semi-infinite minimax problems form an active part of mathematical programming, with widespread applications in optimal electronic circuit design, linear Chebyshev approximation, minimization of floor area, optimal control, computer-aided design, and numerous other engineering problems (see [1–8]). Over the past decade, many researchers have studied such problems and proposed algorithms for them (see [9–14]). Nevertheless, efficient algorithms for solving the problem remain few, because it is difficult to design an algorithm that handles both the nondifferentiability of the objective function and the infinite set . A common approach for solving is the discretization method. In general, discretization of multidimensional domains gives rise to minimax problems with thousands of component functions, which increases the computational cost and reduces the efficiency of the discretization method. To overcome these difficulties, Polak et al. proposed algorithms with smoothing techniques for solving finite and semi-infinite minimax problems (see [15, 16]). In [16], an active set strategy that can be used in conjunction with exponential (entropic) smoothing was proposed for large-scale minimax problems arising from the discretization of semi-infinite minimax problems, but its active set grows monotonically. In this paper, using the feedback precision-adjustment smoothing parameter rule proposed by Polak et al. in [15], we propose a new discretization smoothing algorithm for solving . In our algorithm, the smoothing function is a cubic spline rather than an exponential function.
The spline smoothing technique uses a smooth spline function in place of thousands of component functions and also acts as an active set technique, so only a few components of the max function are computed at each iteration. Moreover, the active set does not grow monotonically; hence the number of gradient and Hessian evaluations, and with it the computational cost, is dramatically reduced. Numerical tests show that the new method is very efficient for semi-infinite minimax problems with complicated component functions.
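The paper's own cubic spline construction appears in the next section; as background, the exponential (entropic) smoothing with active-set truncation used in [16] can be sketched as follows. This is only an illustration of the general idea, not the paper's method, and the truncation threshold is a generic choice of ours:

```python
import numpy as np

def smooth_max(f, p, tol=1e-12):
    """Entropic (log-sum-exp) smoothing of max_i f_i with truncation.

    max(f) <= smooth_max(f, p) <= max(f) + log(len(f)) / p, and only
    components within log(len(f) / tol) / p of the maximum contribute
    more than tol, so the sum is restricted to that 'active set'.
    """
    fmax = f.max()
    active = f >= fmax - np.log(len(f) / tol) / p  # boolean active set
    return fmax + np.log(np.exp(p * (f[active] - fmax)).sum()) / p

f = np.array([0.0, 0.5, 1.0, 0.99])
print(smooth_max(f, p=100.0))  # slightly above max(f) = 1.0
```

As the smoothing parameter `p` grows, the approximation tightens but the active set shrinks, which is the cost/accuracy trade-off the feedback precision-adjustment rule manages.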

We assume that the problem can be approximated by a sequence of finite minimax problems of the form where the sets , ,… have finite cardinality. In practice, we can expect that , that is, that they grow monotonically, and that the closure of is equal to . However, it suffices to make the following assumption (cf. Assumption  3.4.2 in [1]).

Assumption 1. There exist a strictly positive valued, strictly monotone decreasing function and constants and , such that for every and , there exists a such that

Assumption 1 is easy to satisfy. For example, let .

We assume that .
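As a concrete illustration of discretized sets satisfying a condition of this type (our own example; the one given in the text did not survive extraction), nested uniform grids on Y = [0, 1] grow monotonically and become dense at a geometric rate:

```python
import numpy as np

def grid(i):
    """Uniform grid on Y = [0, 1] with 2**i + 1 points (mesh size 2**-i)."""
    return np.linspace(0.0, 1.0, 2 ** i + 1)

# the grids are nested (monotone growth), and every y in [0, 1] lies
# within half a mesh width, 2**-(i + 1), of some point of grid(i)
nested = set(grid(2)).issubset(set(grid(3)))
dist = np.abs(grid(5) - 0.3).min()
print(nested, dist <= 2.0 ** -6)
```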

Next, we let and define and with this notation, the problems assume the form

The optimality functions for the problems are defined by where .

In [1], we find the following result.

Theorem 2. Suppose that is an optimal solution to problems ; then, .

The corresponding optimality function for is defined (see Theorem  3.1.6 in [1]) by where

Referring to Lemma  3.4.3 in [1], we see that, under Assumption 1, the following result must hold.

Theorem 3. Suppose that is a sequence in converging to a point . Then, and as .

In this paper, we approximate uniformly by the smooth spline introduced in [17].

Let us first recall the formulation of multivariate splines. Let be a polyhedral domain of which is partitioned by irreducible algebraic surfaces into cells . A function defined on is called a -spline function with th order smoothness, written for short, if and , where is the set of all polynomials of degree or less in variables. Similar to the smooth spline that approximates uniformly in [17], we can construct a spline function to approximate uniformly (as ), where is the homogeneous Morgan-Scott partition of type two in [17], as follows: where , , and the cell is the region defined by the following inequalities: The composite function approximates uniformly as , where
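The Morgan-Scott construction above is genuinely multivariate. As a far simpler one-dimensional analogue (our own illustration, not the spline of [17], and only C^1 rather than twice continuously differentiable), a piecewise-quadratic spline smoothing of the plus function already exhibits the uniform approximation property, with error at most eps / 4:

```python
import numpy as np

def smooth_plus(s, eps):
    """C^1 spline smoothing of max(s, 0): identically 0 for s <= -eps,
    the quadratic (s + eps)**2 / (4 eps) on [-eps, eps], and s for
    s >= eps; the uniform error is at most eps / 4 (attained at s = 0)."""
    return np.where(s <= -eps, 0.0,
                    np.where(s >= eps, s, (s + eps) ** 2 / (4 * eps)))

def smooth_max2(a, b, eps):
    """Smoothing of max(a, b) via the identity max(a, b) = a + max(b - a, 0)."""
    return a + smooth_plus(b - a, eps)

s = np.linspace(-1.0, 1.0, 201)
err = np.abs(smooth_plus(s, 0.1) - np.maximum(s, 0.0)).max()
print(err <= 0.025 + 1e-12)  # the eps / 4 bound with eps = 0.1
```

Letting eps tend to zero gives uniform convergence to max(s, 0), mirroring the role of the smoothing parameter in the paper's construction.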

Proposition 4. Suppose that . For any , we define If the function is continuous, then is continuous and is increasing with respect to . Furthermore, where denotes the cardinality of .

Proof. Since is twice continuously differentiable and the functions are continuous, is continuous. From Lemma  1.1 in [18], we know that is increasing with respect to .

According to (12), we have From the definition of , we know . From the definition of , we know . Then, we have By , we can obtain . Thus and and . Hence the desired result follows.

The following proposition is proved in [19].

Proposition 5. (1) If all the functions are continuously differentiable, then is continuously differentiable and where and

(2) For any and , , and

(3) If all the functions, , are twice continuously differentiable, then is twice continuously differentiable and where

From Proposition 5, we obtain the following results.

Corollary 6. For any and ,

Proof. According to the definition of , we know that as . From Proposition 5 (1), we know If , then If , then . That is Then, .

If , then .

It now follows from (17) that (21) holds. The proof is completed.

Next, let be an infinite sequence such that , as , and consider the sequence of approximating problems with defined as in (12).

Theorem 7. The problems epiconverge to the problem .

Proof. Referring to Theorem  3.3.2 in [1], we see that to prove the theorem it is sufficient to show that if is a sequence in converging to a point , then .

Thus, suppose that as . Then, Now, by Theorem 2, as , and, since by assumption of , as , it follows from (14) that as , which completes our proof.

2. Spline Smoothing Newton Method and Its Convergence

We combine Algorithm  3.1 in [18] with a discretization precision test to produce an algorithm for solving the semi-infinite minimax problems . The Hessian of the smoothing spline function can be modified by adding a multiple of the identity, as introduced in [20]; that is, where with denoting the minimum eigenvalue of and .
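A minimal sketch of this modification follows. The threshold `delta` and the dense eigenvalue computation are our illustrative choices; [20] also discusses cheaper factorization-based tests, and Step 4 below works with a Cholesky factor:

```python
import numpy as np

def modify_hessian(H, delta=1e-6):
    """Add tau * I to H so that its smallest eigenvalue is at least delta,
    with tau = max(0, delta - lambda_min(H))."""
    lam_min = np.linalg.eigvalsh(H).min()
    tau = max(0.0, delta - lam_min)
    return H + tau * np.eye(H.shape[0])

H = np.array([[1.0, 2.0], [2.0, 1.0]])  # indefinite: eigenvalues 3 and -1
Hm = modify_hessian(H)
print(np.linalg.eigvalsh(Hm).min())  # now at least 1e-6
```

A positive definite Hessian is left unchanged (tau = 0), so the modification only intervenes where a Newton step would otherwise be unreliable.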

Algorithm 8.

Data. Given , a monotone increasing sequence of sets , , of cardinality , with as , satisfying Assumption 1, and defining the functions , a sequence of monotone decreasing parameters , such that as , , and . Functions : , satisfying , for all , , .

Parameter. Set .

Step 0. Set .

Step 1. Set .

Step 2. Let ; let be the cardinality of , and . Arrange according to .

If , the cell is .

Else if , for every , we have . Let be the maximum element of ; then the cell is .

Step 3. Compute . If , go to Step  4. Else go to Step  9.

Step 4. Compute according to (27); then compute Cholesky factor such that and the reciprocal condition number of .

If and , go to Step  5.

Else if and the largest eigenvalue of satisfies , go to Step  5.

Else go to Step  6.

Step 5. Compute the search direction go to Step  7.

Step 6. Compute the search direction

Step 7. Compute the step length , where is the smallest integer satisfying
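Step 7 is a standard Armijo backtracking rule. Since the paper's symbols did not survive extraction, the sketch below uses generic parameter names (`alpha`, `beta`, both our placeholders):

```python
import numpy as np

def armijo_step(f, g, x, d, alpha=1e-4, beta=0.5, max_iter=50):
    """Return the largest step t = beta**k (smallest integer k) satisfying
    the Armijo condition f(x + t d) <= f(x) + alpha * t * g(x)' d."""
    fx, slope = f(x), g(x) @ d  # slope < 0 for a descent direction d
    t = 1.0
    for _ in range(max_iter):
        if f(x + t * d) <= fx + alpha * t * slope:
            return t
        t *= beta
    return t

# quadratic test problem: f(x) = x^2, steepest descent from x = 1
f = lambda x: float(x[0] ** 2)
g = lambda x: np.array([2.0 * x[0]])
x = np.array([1.0])
d = -g(x)
t = armijo_step(f, g, x, d)
print(t, f(x + t * d))  # t = 0.5 lands on the exact minimizer here
```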

Step 8. Set , . Go to Step  2.

Step 9. If , compute such that go to Step  10.

Else set , , and ; go to Step  2.

Step 10. If , set , , and ; go to Step  2. Else set , , , and ; go to Step  2.

Step 11. If where the are defined by (17), set , , replace by , by , and go to Step  1.

Else go to Step  1.

Theorem 9. Suppose that is a sequence constructed by Algorithm 8. Then, any accumulation point of this sequence satisfies .

Proof. First note that it follows from Corollary 6 that condition (32) will eventually be satisfied, since Algorithm 8 keeps decreasing . Next, note that Since by construction and on any infinite subsequence that converges to , the desired result follows.

3. Numerical Experiment

We have implemented Algorithm 8 in Matlab. To demonstrate its efficiency, we also implemented, using similar procedures, the algorithm of [16] (denoted PWY), which was proposed by Polak et al. and introduced in Section 1.

The test results were obtained by running Matlab R2011a on a desktop with the Windows XP Professional operating system, an Intel(R) Core(TM) i3-370 2.40 GHz processor, and 2.92 GB of memory.

In Algorithm 8, the parameters are chosen as follows: , , , , , , , , , , , and . In the PWY algorithm, the parameters are chosen as follows: , . The results are listed in Tables 1–5. denotes the final approximate solution point, and is the value of the objective function of the discretized problems at . is the maximum number of discrete points. Time is the CPU time in seconds.

Table 1: Test results for Example 1.
Table 2: Test results for Example 2.
Table 3: Test results for Example 3.
Table 4: Test results for Example 4.
Table 5: Test results for Example 5.

Example 1 (see [21]). Let

Example 2 (see [21]). Let

Example 3. Let

Example 4. Let

Example 5. Let

4. Conclusion

We have developed a spline smoothing Newton method for the solution of semi-infinite minimax problems using a smooth cubic spline and a discretization strategy. At each iteration, only a few components of the max function are computed; hence, the computational cost is greatly reduced. For semi-infinite minimax problems with complicated component functions, numerical tests show that the new method is very efficient.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (11171051, 91230103), the Fundamental Research Funds for the Central Universities (DC13010214), and the General Project of the Education Department of Liaoning Province (0908-330006).

References

  1. E. Polak, Optimization: Algorithms and Consistent Approximations, Springer, New York, NY, USA, 1997.
  2. J. W. Bandler, P. C. Liu, and H. Tromp, “A nonlinear programming approach to optimal design centering, tolerancing, and tuning,” IEEE Transactions on Circuits and Systems, vol. 23, no. 3, pp. 155–165, 1976.
  3. T. Chen and M. K. H. Fan, “On convex formulation of the floorplan area minimization problem,” in Proceedings of the International Symposium on Physical Design (ISPD '98), pp. 124–128, Academic Press, Monterey, Calif, USA, April 1998.
  4. A. V. Fiacco and K. O. Kortanek, Semi-Infinite Programming and Applications, Lecture Notes in Economics and Mathematical Systems, Springer, Berlin, Germany, 1983.
  5. R. Hettich and K. O. Kortanek, “Semi-infinite programming: theory, methods, and applications,” SIAM Review, vol. 35, no. 3, pp. 380–429, 1993.
  6. A. A. Kassim and B. V. K. Vijaya Kumar, “Path planning for autonomous robots using neural networks,” Journal of Intelligent Systems, vol. 7, no. 1-2, pp. 33–55, 1997.
  7. E. Polak, “On the use of consistent approximations in the solution of semi-infinite optimization and optimal control problems,” Mathematical Programming, vol. 62, no. 2, pp. 385–414, 1993.
  8. E. Polak and J. O. Royset, “Algorithms for finite and semi-infinite min-max-min problems using adaptive smoothing techniques,” Journal of Optimization Theory and Applications, vol. 119, no. 3, pp. 421–457, 2003.
  9. Z. Guanglu, C. Wang, and Q. Sun, “A globally convergent method for semi-infinite minimax problems,” Organ Transplantation, vol. 2, no. 2, p. 42, 1998.
  10. G. Zhou, C. Wang, and Y. Zhang, “An efficient method for semi-infinite programming,” Mathematics Numerical Sinica, vol. 21, no. 1, pp. 1–8, 1999.
  11. E. Polak, D. Q. Mayne, and J. E. Higgins, “On the extension of Newton's method to semi-infinite minimax problems,” SIAM Journal on Control and Optimization, vol. 30, no. 2, pp. 367–389, 1992.
  12. O. Yigui and Z. Qian, “A nonmonotonic trust region algorithm for a class of semi-infinite minimax programming,” Applied Mathematics and Computation, vol. 215, no. 2, pp. 474–480, 2009.
  13. A. Auslender, M. A. Goberna, and M. A. López, “Penalty and smoothing methods for convex semi-infinite programming,” Mathematics of Operations Research, vol. 34, no. 2, pp. 303–319, 2009.
  14. L. Zhang, S. Fang, and S. Wu, “An entropy based central cutting plane algorithm for convex min-max semi-infinite programming problems,” Science China Mathematics, vol. 56, no. 1, pp. 201–211, 2013.
  15. E. Polak, J. O. Royset, and R. S. Womersley, “Algorithms with adaptive smoothing for finite minimax problems,” Journal of Optimization Theory and Applications, vol. 119, no. 3, pp. 459–484, 2003.
  16. E. Polak, R. S. Womersley, and H. X. Yin, “An algorithm based on active sets and smoothing for discretized semi-infinite minimax problems,” Journal of Optimization Theory and Applications, vol. 138, no. 2, pp. 311–328, 2008.
  17. G. Zhao, Z. Wang, and H. Mou, “Uniform approximation of min/max functions by smooth splines,” Journal of Computational and Applied Mathematics, vol. 236, no. 5, pp. 699–703, 2011.
  18. L. Dong and B. Yu, “A spline smoothing Newton method for finite minimax problems,” Journal of Engineering Mathematics. In press.
  19. L. Dong, B. Yu, and G. H. Zhao, “A smoothing spline homotopy method for nonconvex nonlinear programming,” submitted.
  20. J. Nocedal and S. J. Wright, Numerical Optimization, Springer Series in Operations Research, Springer, New York, NY, USA, 1999.
  21. X. Yu, Truncated Aggregate Smoothing Algorithms, School of Mathematical Sciences, Dalian University of Technology, Dalian, China, 2010.