Journal of Function Spaces
Volume 2017, Article ID 3941084, 7 pages
Research Article

A New Nonsmooth Bundle-Type Approach for a Class of Functional Equations in Hilbert Spaces

1School of Mathematics, Liaoning Normal University, Dalian 116029, China
2School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China

Correspondence should be addressed to Jie Shen; tt010725@163.com

Received 13 April 2017; Accepted 2 July 2017; Published 8 August 2017

Academic Editor: Hugo Leiva

Copyright © 2017 Jie Shen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


A new bundle-type approach for solving a class of functional equations is presented by combining bundle idea for nonsmooth optimization with common iterative process for functional equations. Our strategy is to approximate the nonsmooth function in functional equation by a sequence of convex piecewise linear functions, as in the bundle method; this makes the problem more tractable and reduces the difficulty of implementation of method. We only require the piecewise linear convex approximate functions, rather than the actual function, to satisfy the uniform boundedness condition with respect to one variable at stability centers. One example is given to demonstrate the application of the proposed method.

1. Introduction

Theory of functional equations is a branch of mathematics whose significance lies not only in its vast applications in a number of other branches of mathematics, but also in many practical problems of the natural sciences and engineering, for example, economics and information theory. Consider the following functional equation arising from the dynamic programming of a multistage decision process: where and stand for state and decision vectors, respectively, is a Hilbert space with norm induced by its inner product . , , and , , , , , represent the transformation of the process, and denotes the optimal return function with initial state . It is well known that many practical problems can be formulated as problem (1). The authors of [1] establish the existence, uniqueness, and iterative approximation of solutions to problem (1) in Banach spaces and complete metric spaces, respectively; their results extend, improve, and generalize those of several authors [2–4]. However, the construction of the iterative approximation of solutions to problem (1) requires uniform boundedness of with respect to on the whole space , and this requirement is too strong and difficult to realize in some cases. It is therefore worth studying weaker conditions and methods for solving problem (1).

The bundle method is among the most efficient methods for nonsmooth optimization problems and plays an important role in variational inequalities as well [5–7]. The minimization of a nonsmooth lower semicontinuous proper convex function, which may be very hard to carry out directly, can be replaced by minimizing a sequence of more tractable convex functions; see [8–10]. The key strategy is to approximate the actual function by a sequence of piecewise linear convex functions; this is the so-called cutting-plane method, the predecessor of the bundle method introduced by Lemaréchal [11]. Bundle methods have been successfully used in many practical applications.
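The cutting-plane idea described above can be sketched in a few lines. The following is an illustrative toy (not the paper's exact scheme): a convex function f is approximated from below by the maximum of affine functions built from subgradients at trial points; the function names and the choice f(u) = |u| are ours, for demonstration only.

```python
def cutting_plane_model(bundle):
    """bundle: list of (u_i, f(u_i), g_i) triples with g_i a subgradient
    of f at u_i; returns the piecewise linear model
        f_check(u) = max_i [ f(u_i) + g_i * (u - u_i) ]."""
    def f_check(u):
        return max(fu + g * (u - ui) for ui, fu, g in bundle)
    return f_check

# Example: f(u) = |u|, a nonsmooth convex function with subgradient sign(u).
f = abs
def subgrad(u):
    return 1.0 if u >= 0 else -1.0

bundle = [(u, f(u), subgrad(u)) for u in (-2.0, 1.0)]
model = cutting_plane_model(bundle)

# The model interpolates f at the trial points and minorizes f everywhere,
# by the subgradient inequality for convex functions.
assert model(-2.0) == f(-2.0) and model(1.0) == f(1.0)
assert all(model(u) <= f(u) + 1e-12 for u in (-1.5, -0.3, 0.0, 0.7, 3.0))
```

Adding a new cut can only tighten the model, which is why enriching the bundle makes the approximation progressively better while each subproblem stays piecewise linear.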

Thus it appears reasonable to expect that functional equations with nonsmooth functions can benefit from bundle methods. We combine the bundle idea with the classical iterative process of [1] in order to approximate the solutions of problem (1) in the complete metric space . To our knowledge, this is the first time the bundle idea has been introduced to functional equations. The result obtained in this paper differs from that in [1] in the way the iterative sequence converging to the solution of problem (1) is constructed. The generation of the sequence relies on the previous iterate and the approximation function , which relaxes the strong conditions imposed on itself. At the same time, the requirement imposed on the corresponding functions is weaker than that in [1]: instead of requiring to be uniformly bounded with respect to for all , we only require the piecewise linear convex approximations , , constructed in this paper to possess this property at some special points, namely, at the stability centers defined below. This condition is not difficult to realize.

The rest of this paper is organized as follows. In Section 2 we review some basic notations and results. Section 3 presents the concrete iterative approximation sequence that converges to the unique solution of (1), and an example is provided to show the validity of the results.

2. Preliminaries and Notations

The following notation is adopted throughout the paper: , , . For all , denotes the largest integer not exceeding , and is a Hilbert space with inner product ; its norm is induced by the inner product, so is also a Banach space. is the state space and is the decision space. Define For any positive integer and , define where . A sequence in is said to converge to a point in if for any , as . It is clear that is a complete metric space.
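The defining formulas above did not survive extraction. In the cited dynamic-programming literature (e.g. [2–4]), the construction is typically the following; this is a hedged reconstruction of that standard setup, not the paper's verbatim text:

```latex
% Functionals bounded on bounded subsets of the state space S:
BB(S) = \{\, h : S \to \mathbb{R} \mid
          h \text{ is bounded on every bounded subset of } S \,\}.

% For each positive integer k, a seminorm over the closed ball of radius k:
d_k(h, g) = \sup \{\, |h(x) - g(x)| : x \in \bar{B}(0, k) \,\},

% combined into a complete metric on BB(S):
d(h, g) = \sum_{k=1}^{\infty} \frac{1}{2^k}
          \cdot \frac{d_k(h, g)}{1 + d_k(h, g)}.
```

Under this construction, convergence in the metric d is exactly uniform convergence on every bounded subset, which matches the convergence notion described in the text.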

3. Solutions for Functional Equation (1) via Bundle Method

Suppose the function in problem (1) is a nonsmooth lower semicontinuous proper convex function. Construct the following approximate functions to :where , , , and denotes the partial subdifferential of with respect to at . Define the linearization error of with respect to at to be can be written equivalently in the form It is easy to prove the following three properties of the approximate functions :...
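The displayed formulas in this construction were lost in extraction. In standard bundle-method notation they typically take the following form; this is a reconstruction from the surrounding description, not the paper's verbatim equations:

```latex
% Cutting-plane model of \varphi(x,\cdot) built from trial points
% y_1, \dots, y_k, with partial subgradients g_i \in \partial_y \varphi(x, y_i):
\check{\varphi}_k(x, y) = \max_{1 \le i \le k}
  \bigl\{ \varphi(x, y_i) + \langle g_i,\, y - y_i \rangle \bigr\}.

% Linearization error of the i-th cut at the current point y_k
% (nonnegative by convexity of \varphi(x,\cdot)):
e_i^k = \varphi(x, y_k) - \varphi(x, y_i)
        - \langle g_i,\, y_k - y_i \rangle \;\ge\; 0.

% Equivalent "centered" form of the model:
\check{\varphi}_k(x, y) = \varphi(x, y_k)
  + \max_{1 \le i \le k} \bigl\{ -e_i^k + \langle g_i,\, y - y_k \rangle \bigr\}.
```

The three properties alluded to in the text are then the usual ones: each cut is exact at its own trial point, the model minorizes the actual function, and appending a new cut can only tighten the model.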

Next we present a bundle algorithm which produces two sequences and . The sequence is called the candidate point sequence; it is used to construct and improve the approximate functions . The sequence consists of the candidate points that sufficiently decrease the function in the sense of the descent test given below; we call these points stability centers. Note that is a subsequence of . These two sequences will be employed to provide the conditions that ensure the existence and uniqueness of solutions to problem (1).

Algorithm 1.
Step 1 (initialization). Let and be given parameters. Choose and ; compute and . Construct the model (see (4)). Set , , and .
Step 2 (computation of candidate point). Compute Define .
Step 3 (descent test). If , set (descent step); otherwise, set (null step).
Step 4 (improving the model). Append to model (4); that is, construct again. Choose such that . Change to ; go to Step .
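The four steps of Algorithm 1 can be sketched compactly for a one-dimensional convex function. This is a hedged illustration of the generic proximal-bundle loop (candidate computation, descent test, model improvement), not the paper's exact algorithm: the proximal subproblem is solved here by a crude grid search purely for readability, and the parameters t, m and the test function are our own choices.

```python
def prox_bundle_min(f, subgrad, x0, t=1.0, m=0.5, tol=1e-6, max_iter=200):
    """Minimize a convex f: R -> R by a toy proximal bundle method."""
    x_hat = x0                                  # current stability center
    bundle = [(x0, f(x0), subgrad(x0))]         # initial cut
    for _ in range(max_iter):
        def model(u):                           # piecewise linear model
            return max(fu + g * (u - ui) for ui, fu, g in bundle)
        # Step 2: candidate = argmin model(u) + |u - x_hat|^2 / (2t),
        # approximated by a grid search around the stability center.
        grid = [x_hat + 10.0 * t * (i / 500.0 - 1.0) for i in range(1001)]
        y = min(grid, key=lambda u: model(u) + (u - x_hat) ** 2 / (2 * t))
        delta = f(x_hat) - model(y)             # predicted decrease
        if delta <= tol:                        # model agrees with f: stop
            return x_hat
        # Step 3: descent test with parameter m in (0, 1).
        if f(y) <= f(x_hat) - m * delta:
            x_hat = y                           # descent step: move center
        # Step 4: improve the model with the new cut (null step keeps x_hat).
        bundle.append((y, f(y), subgrad(y)))
    return x_hat

# f(u) = |u - 1| is nonsmooth and convex with minimizer u* = 1.
f = lambda u: abs(u - 1.0)
g = lambda u: 1.0 if u >= 1.0 else -1.0
x_star = prox_bundle_min(f, g, x0=-4.0)
assert abs(x_star - 1.0) < 1e-2
```

In a serious implementation the grid search is replaced by a quadratic programming solver, whose optimal multipliers are exactly the ones used by the aggregation technique of Remark 3 below.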

Remark 2. According to [12], the sequence is bounded and converges to if Algorithm 1 generates infinitely many descent steps, where satisfies for given . Otherwise, Algorithm 1 generates a last descent step , followed by infinitely many null steps; then converges to and minimizes with respect to for given .

Remark 3. As the constructions of proceed, the number of elements in the bundle increases. We clean and compress the bundle by employing the “aggregation technique” to control its size without impairing the original global convergence properties. Suppose the bundle has couples ; we call indispensable (resp., dispensable) the couples in the bundle corresponding to active (resp., inactive) indices, that is, to such that (resp., ); here, for , is the optimal multiplier associated with in Step of Algorithm 1. When the number becomes too big, the following steps are executed:
(i) Select the dispensable couples, which can be discarded.
(ii) If the remaining couples are still too many, compress the indispensable information into a single couple . The corresponding affine function is called the aggregate linearization: Obviously, . The synthetic couple is added to the new bundle to keep the information of the discarded indispensable couples, since it summarizes all the information generated up to .
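The compression step can be illustrated concretely. In the sketch below (our own toy, with hypothetical names), several cuts with convex multipliers alpha_i are collapsed into one synthetic affine function; because it is a convex combination of cuts, it still under-estimates both the model and the actual function, so discarding the original cuts loses no global information.

```python
def aggregate(bundle, alphas):
    """Collapse cuts (u_i, f(u_i), g_i) with convex multipliers alpha_i
    into one affine function a(u) = c + g_agg * u (the aggregate
    linearization)."""
    assert abs(sum(alphas) - 1.0) < 1e-12 and all(a >= 0 for a in alphas)
    g_agg = sum(a * g for a, (_, _, g) in zip(alphas, bundle))
    c = sum(a * (fu - g * ui) for a, (ui, fu, g) in zip(alphas, bundle))
    return lambda u: c + g_agg * u

f = abs                                    # toy convex function f(u) = |u|
bundle = [(-2.0, 2.0, -1.0), (1.0, 1.0, 1.0)]
model = lambda u: max(fu + g * (u - ui) for ui, fu, g in bundle)
agg = aggregate(bundle, [0.5, 0.5])

# The aggregate cut never exceeds the model, which never exceeds f.
for u in (-3.0, -0.5, 0.0, 0.4, 2.0):
    assert agg(u) <= model(u) + 1e-12 <= f(u) + 2e-12
```

In the full method the multipliers come from the quadratic subproblem in Step 2 of Algorithm 1, which is what makes the single synthetic couple summarize all discarded indispensable information.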
Now we are ready to present our main results of this article.

Theorem 4. Let , for , be mappings, let be in and be in , let be given by (4), and let the sequences and be generated by Algorithm 1. Suppose the following conditions are satisfied:
  For all , , .
  ; , for all .
  , , for all .
  , for all , .
  , for all .
Then the functional equation (1) possesses a solution that satisfies the following conditions.
  The sequence defined by converges to .
  For given , the sequence is generated by Algorithm 1; take , , then .
   is unique with respect to condition .

Proof. Since is in , it is easy to see that Given , for each , is defined by According to , we have which implies that there exists a constant with By virtue of our assumptions, (11), (12), and (13), we deduce that Thus, is a self-mapping on for given .
Given , for , , , and , there exist satisfying In terms of the above inequalities and , we get which means that It follows that As , we have . Furthermore, we obtain That is to say, is nonexpansive for given ; therefore, is continuous for given .
Next, we assert that for and for By definitions of and , we have Similarly, (20) holds for and . For , it follows from that Suppose that, for , it holds that ; then, for , we have from that Hence (21) holds for . According to mathematical induction, (21) holds for .
Next we claim that is a Cauchy sequence in . For , , , and , there exist satisfying According to and the above four inequalities, we have Combining (26) with (27), and noting property , we obtain where , . Proceeding in this way, we can choose , , such that for It follows from , , , (10), (21), (28), and (29) that which implies that As , we deduce that Since , for each , the four terms on the right-hand side of (32) can be made arbitrarily small when and are sufficiently large. This means that, for any , there exists such that, when , we have . Therefore is a Cauchy sequence in . Suppose converges to ; noting that is nonexpansive for given and recalling the definition of , we have . So it is easy to obtain It follows from that as . Therefore, ; that is, the functional equation (1) possesses a solution in .
For any , given , the sequence is generated by Algorithm 1; take , . Let ; there exists a positive integer such thatBy virtue of , (21), and (34), we infer that which means .
The proof of the last conclusion is quite similar to that of Theorem 10 in [1], so we omit it.

Remark 5. Comparing with [1], we find that, in constructing the iterative sequence converging to the unique solution of (1), the approximate functions of and the sequence of descent steps are employed so that the conditions on can be relaxed: only , rather than itself, is required to satisfy the uniform boundedness condition, and only at the stability centers instead of on the whole space . The introduction of the approximate functions makes problem (1) easier to solve, since is a piecewise linear convex function for which the uniform boundedness condition is not hard to satisfy. Besides that, the functions and , , are only required to be uniformly bounded at the stability centers for , not on the whole space , which is another difference between [1] and our algorithm.

Example 6. Suppose , , , and for all . Consider the following functional equation: It is easy to verify that the conditions , and of Theorem 4 are satisfied. Now let us see ; note that ; take ; it follows from the definition of that Similarly, if we take , , it can be proved that . For , if we take , , then we can obtain , which is just the case for in . By imitating the above process, we have the following conclusion: if we take , , it holds that , . In other words, if we choose to be a sequence such that it approaches infinity as , the conditions in Theorem 4 are also satisfied. Therefore, this functional equation possesses a unique solution .

Conflicts of Interest

The authors declare that they have no conflicts of interest.


Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. 11301246, 11671183, and 11601061), the Natural Science Foundation Plan Project of Liaoning Province (no. 20170540573), and the Fundamental Research Funds for the Central Universities of China (no. DUT16LK07).


References

1. Deepmala, “Existence theorems for solvability of a functional equation arising in dynamic programming,” International Journal of Mathematics and Mathematical Sciences, vol. 104, no. 3, pp. 273–244, 2014.
2. Z. Liu and J. S. Ume, “On properties of solutions for a class of functional equations arising in dynamic programming,” Journal of Optimization Theory and Applications, vol. 117, no. 3, pp. 533–551, 2003.
3. Z. Liu, R. P. Agarwal, and S. M. Kang, “On solvability of functional equations and system of functional equations arising in dynamic programming,” Journal of Mathematical Analysis and Applications, vol. 297, no. 1, pp. 111–130, 2004.
4. Z. Liu, Y. Xu, J. S. Ume, and S. M. Kang, “Solutions to two functional equations arising in dynamic programming,” Journal of Computational and Applied Mathematics, vol. 192, no. 2, pp. 251–269, 2006.
5. J. Shen and L.-P. Pang, “A bundle-type auxiliary problem method for solving generalized variational-like inequalities,” Computers and Mathematics with Applications, vol. 55, pp. 2993–2998, 2008.
6. J. Shen and L.-P. Pang, “An approximate bundle-type auxiliary problem method for solving generalized variational inequalities,” Mathematical and Computer Modelling, vol. 48, no. 5-6, pp. 769–775, 2008.
7. J. Zhang, Y.-Q. Zhang, and L.-W. Zhang, “A sample average approximation regularization method for a stochastic mathematical program with general vertical complementarity constraints,” Journal of Computational and Applied Mathematics, vol. 280, pp. 202–216, 2015.
8. G. Salmon, V. H. Nguyen, and J. J. Strodiot, “Coupling the auxiliary problem principle and epiconvergence theory to solve general variational inequalities,” Journal of Optimization Theory and Applications, vol. 104, no. 3, pp. 629–657, 2000.
9. G. Salmon, J. J. Strodiot, and V. H. Nguyen, “A perturbed auxiliary problem method for paramonotone multivalued mappings,” in Advances in Convex Analysis and Global Optimization, N. Hadjisavvas and P. M. Pardalos, Eds., vol. 54 of Nonconvex Optimization and Its Applications, pp. 515–529, Kluwer, Dordrecht, Netherlands, 2001.
10. Y. Sonntag, Convergence au sens de Mosco: théorie et applications à l'approximation des solutions d'inéquations [Ph.D. thesis], Université de Provence, 1982.
11. C. Lemaréchal, “An extension of Davidon methods to nondifferentiable problems,” Mathematical Programming Study, vol. 3, pp. 95–109, 1975.
12. J. F. Bonnans, J. C. Gilbert, C. Lemaréchal, and C. Sagastizábal, Numerical Optimization, Springer-Verlag, Berlin, 1997.