Abstract and Applied Analysis
Volume 2013 (2013), Article ID 515902, 10 pages
http://dx.doi.org/10.1155/2013/515902
Research Article

The Strong Convergence of Prediction-Correction and Relaxed Hybrid Steepest-Descent Method for Variational Inequalities

1School of Computer Science, Civil Aviation Flight University of China, Guanghan 618307, China
2School of Civil Aviation, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China

Received 22 June 2013; Accepted 19 August 2013

Academic Editor: Xu Minghua

Copyright © 2013 Haiwen Xu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We establish the strong convergence of the prediction-correction and relaxed hybrid steepest-descent method (PRH method) for variational inequalities under suitable conditions that simplify the proof; it should be noted that the proof differs from those of previous results. More importantly, we design a set of practical numerical experiments. The results demonstrate that the PRH method under some descent directions is slightly more efficient than the modified and relaxed hybrid steepest-descent method, and that the PRH method under the new conditions is more efficient than under the old ones.

1. Introduction

Let $H$ be a real Hilbert space with inner product $\langle \cdot , \cdot \rangle$ and norm $\| \cdot \|$, let $K$ be a nonempty closed convex subset of $H$, and let $F : K \to H$ be an operator. Then the variational inequality problem $VI(F, K)$ [1] is to find $u^* \in K$ such that
$$\langle F(u^*), v - u^* \rangle \ge 0, \quad \forall v \in K. \tag{1}$$

The literature contains many methods for solving variational inequality problems; see [2–25] and references therein. According to the relationship between the variational inequality problem and a fixed point problem, we can obtain
$$u^* = P_K\bigl(u^* - \beta F(u^*)\bigr), \quad \beta > 0, \tag{2}$$
where the projection operator $P_K$ is the projection from $H$ onto $K$; that is, $P_K(v) = \arg\min_{u \in K} \|v - u\|$. In this paper, $F$ is a $\kappa$-Lipschitz and $\eta$-strongly monotone operator; that is, $F$ satisfies the following conditions:
$$\|F(u) - F(v)\| \le \kappa \|u - v\|, \qquad \langle F(u) - F(v), u - v \rangle \ge \eta \|u - v\|^2, \quad \forall u, v \in K.$$
If $\beta > 0$ is small enough, then $P_K(I - \beta F)$ is a contraction. Naturally, the convergence of the Picard iterates generated by the right-hand side of (2) is obtained by Banach's fixed point theorem. Such a method is called the projection method; for more results about the projection method, see [6, 8, 20] and so forth.
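As a concrete illustration of this projection method, consider the following minimal MATLAB sketch; the affine operator $F$, the set $K$, and the step size $\beta$ below are illustrative choices of ours, not data from this paper.

% Projection (Picard) iteration u_{k+1} = P_K(u_k - beta*F(u_k)) for an
% affine strongly monotone operator F(u) = M*u + q on the nonnegative
% orthant K, where the projection P_K is simply max(u, 0).
n = 50;
A = randn(n); M = A'*A + n*eye(n);   % positive definite, so F is strongly monotone
q = randn(n, 1);
F = @(u) M*u + q;
PK = @(u) max(u, 0);                 % projection onto K = {u : u >= 0}
beta = 1/norm(M);                    % small enough step keeps the map contractive
u = zeros(n, 1);
for k = 1:1000
    u_next = PK(u - beta*F(u));
    if norm(u_next - u) <= 1e-8, break; end
    u = u_next;
end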

In fact, the projection $P_K$ in contraction methods may not be easy to compute, and considerable effort may be needed to compute it in each iteration. Yamada and Deutsch provided a hybrid steepest-descent method for solving the $VI(F, K)$ [2, 3] in order to reduce the difficulty and complexity of computing the projection $P_K$. Subsequently, the convergence of hybrid steepest-descent methods was established by Xu and Kim [4] and Zeng et al. [5]. Naturally, by analyzing several three-step iterative methods in each iteration via the fixed point equation, we can obtain the Noor iterations. Recently, Ding et al. [7] proposed a three-step relaxed hybrid steepest-descent method for variational inequalities, and a simple proof of three-step relaxed hybrid steepest-descent methods under different conditions was given by Yao et al. [24]. The literature [14, 16] described a modified and relaxed hybrid steepest-descent (MRHSD) method and the convergence of the MRHSD method under different conditions. A set of practical numerical experiments in [16] demonstrated that the MRHSD method has different efficiency under different conditions. Subsequently, the prediction-correction and relaxed hybrid steepest-descent method (PRH method) [15] makes more use of the history information and loses less of it than the methods in [7, 14]. The PRH method introduces more descent directions than the MRHSD method [14, 16], and computing these descent directions requires only the history information.

In this paper, we prove the strong convergence of the PRH method under different and suitable restrictions imposed on the parameters (Condition 12), which differ from those of [15]. Moreover, the proof of strong convergence differs from the previous proof in [15]; in particular, Step 2 is not similar to that in [7]. More importantly, numerical experiments verify that the PRH method under Condition 12 is more efficient than under Condition 10, and that the PRH method under some descent directions is slightly more efficient than the MRHSD method [14, 16]. Furthermore, these descent directions are easy to obtain.

The remainder of the paper is organized as follows. In Section 2, we review several lemmas and preliminaries. We prove the convergence theorem under Condition 12 in Section 3. In Section 4, we present a series of numerical experiments, which demonstrate that the PRH method under Condition 12 is more efficient than under Condition 10. Section 5 concludes the paper.

2. Preliminaries

In order to prove the convergence theorem below, we introduce several lemmas and main results.

Lemma 1. In a real Hilbert space $H$, there holds the inequality
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle, \quad \forall x, y \in H.$$

This lemma is a basic consequence of the inner product structure of a Hilbert space.
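For completeness, the inequality follows directly by expanding the inner product:
$$\|x + y\|^2 = \langle x + y, x + y \rangle = \|x\|^2 + 2\langle y, x + y \rangle - \|y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle.$$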

Lemma 2 (demiclosedness principle). Assume that $T$ is a nonexpansive self-mapping on a nonempty closed convex subset $K$ of a Hilbert space $H$. If $T$ has a fixed point, then $I - T$ is demiclosed; that is, whenever $\{x_n\}$ is a sequence in $K$ weakly converging to some $x \in K$ and the sequence $\{(I - T)x_n\}$ strongly converges to some $y$, it follows that $(I - T)x = y$. Here $I$ is the identity operator of $H$.

The following lemma is an immediate consequence of the properties of the projection mapping onto a closed convex subset of a Hilbert space.

Lemma 3. Let $K$ be a nonempty closed convex subset of $H$. For $u \in H$ and $v \in K$, one has (1) $\langle u - P_K u,\; v - P_K u \rangle \le 0$; (2) $\|P_K u - v\|^2 \le \|u - v\|^2 - \|u - P_K u\|^2$.

Lemma 4 (see [13]). Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $X$ and let $\{\beta_n\}$ be a sequence in $[0, 1]$ with $0 < \liminf_{n\to\infty} \beta_n \le \limsup_{n\to\infty} \beta_n < 1$. Suppose $x_{n+1} = (1 - \beta_n)x_n + \beta_n z_n$ for all integers $n \ge 0$ and $\limsup_{n\to\infty} (\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|) \le 0$. Then $\lim_{n\to\infty} \|z_n - x_n\| = 0$.

Lemma 5 (see [5, 7]). Let $\{s_n\}$ be a sequence of nonnegative real numbers satisfying the inequality
$$s_{n+1} \le (1 - \alpha_n)s_n + \alpha_n \beta_n + \gamma_n, \quad n \ge 0,$$
where $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ satisfy the following conditions: (1) $\{\alpha_n\} \subset [0, 1]$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$; (2) $\limsup_{n\to\infty} \beta_n \le 0$; (3) $\gamma_n \ge 0$ and $\sum_{n=0}^{\infty} \gamma_n < \infty$. Then $\lim_{n\to\infty} s_n = 0$.

Since $F$ is $\eta$-strongly monotone, $VI(F, K)$ has a unique solution $u^* \in K$ [5]. Assume that $T : H \to H$ is a nonexpansive mapping with the fixed point set $\operatorname{Fix}(T) = K$. Obviously $\operatorname{Fix}(P_K) = K$.

For any given numbers $\lambda \in (0, 1)$ and $\mu \in (0, 2\eta/\kappa^2)$, we define the mapping $T^{\lambda} : H \to H$ by
$$T^{\lambda} x := Tx - \lambda \mu F(Tx), \quad x \in H.$$

Lemma 6 (see [5]). If $0 \le \lambda < 1$ and $0 < \mu < 2\eta/\kappa^2$, then $T^{\lambda}$ is a contraction. In fact,
$$\|T^{\lambda} x - T^{\lambda} y\| \le (1 - \lambda \tau)\|x - y\|,$$
where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)} \in (0, 1]$, for all $x, y \in H$.
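As an informal numerical check of Lemma 6 (not part of the original paper), one can take $F$ linear and $T$ the identity map, which is trivially nonexpansive, and verify the contraction factor $1 - \lambda\tau$ in MATLAB:

% Informal check of Lemma 6 with F(x) = M*x (M symmetric positive definite)
% and T = identity: T^lambda(x) = T(x) - lambda*mu*F(T(x)) should contract
% with factor at most 1 - lambda*tau.
n = 20;
A = randn(n); M = A'*A + eye(n);
kappa = norm(M);                       % Lipschitz constant of F
eta = min(eig(M));                     % strong monotonicity modulus
mu = eta/kappa^2;                      % any mu in (0, 2*eta/kappa^2) works
lambda = 0.5;
tau = 1 - sqrt(1 - mu*(2*eta - mu*kappa^2));
Tlam = @(x) x - lambda*mu*(M*x);       % T = I here
x = randn(n, 1); y = randn(n, 1);
ratio = norm(Tlam(x) - Tlam(y))/norm(x - y);
fprintf('ratio = %.4f, bound 1 - lambda*tau = %.4f\n', ratio, 1 - lambda*tau);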

Lemma 7 (see [7]). Let $\{a_n\}$ be a sequence of nonnegative numbers with $\limsup_{n\to\infty} a_n < \infty$ and let $\{b_n\}$ be a sequence of real numbers with $\limsup_{n\to\infty} b_n \le 0$. Then
$$\limsup_{n\to\infty} a_n b_n \le 0.$$

3. Convergence Theorem

Before analyzing the convergence theorem, we first review the PRH method and related results [15].

Algorithm 8 (see [15]). Take three fixed numbers , start with arbitrarily chosen initial points , and compute the sequences as follows.
Prediction. Step 1: . Step 2: . Step 3: .
Correction. Step 4: , where is a nonexpansive mapping.
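The displayed formulas of the prediction and correction steps are not reproduced in this copy. Purely as background, and not as the PRH method itself, the following MATLAB sketch shows the basic one-step hybrid steepest-descent update $u_{n+1} = T(u_n) - \lambda_{n+1}\mu F(T(u_n))$ of Yamada [2], which the three prediction steps and the correction step of Algorithm 8 elaborate on; $F$, $T$, $\mu$, and the sequence $\lambda_n$ below are illustrative choices only.

% Basic hybrid steepest-descent update (background for Algorithm 8, not the
% PRH method itself): u_{n+1} = T(u_n) - lambda_{n+1}*mu*F(T(u_n)).
n = 20;
A = randn(n); M = A'*A + eye(n); q = randn(n, 1);
F = @(u) M*u + q;                     % Lipschitz and strongly monotone
T = @(u) max(u, 0);                   % nonexpansive, Fix(T) = nonnegative orthant
mu = min(eig(M))/norm(M)^2;           % mu in (0, 2*eta/kappa^2)
u = randn(n, 1);
for k = 1:500
    lambda = 1/(k + 1);               % lambda_n -> 0 and sum lambda_n = infinity
    u = T(u) - lambda*mu*F(T(u));
end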

Let and satisfy the following conditions.

Remark 9. In fact, the PRH method reduces to the MRHSD method when .

Condition 10. One has

Theorem 11 (see [15]). Under Condition 10, the sequence generated by Algorithm 8 converges strongly to $u^*$, where $u^*$ is the unique solution of the $VI(F, K)$.

We obtain the strong convergence theorem of the PRH method for variational inequalities under the following different assumptions.

Condition 12. One has

Theorem 13. Assume that and satisfy Condition 12. Then the sequence generated by Algorithm 8 converges strongly to $u^*$, where $u^*$ is the unique solution of the $VI(F, K)$.

Proof. We divide the proof into several steps.
Step 1. The sequences , , and are bounded. Since $F$ is $\eta$-strongly monotone, (1) has a unique solution $u^* \in K$, and , , .
A series of computations yields where , where .
Moreover, we also obtain where . Substituting (14) into (13) and (14) into (12), we immediately obtain . Furthermore, . It is easy to obtain the following by induction: where . Hence , , and are also bounded.
Step 2. Consider .
Indeed, by a series of computations, we have . According to (20) and the prediction step of Algorithm 8, we also obtain . Also, by the prediction step of Algorithm 8 and (20), (21), we have .
Let
so we get . Furthermore, . Applying , , and (22), (25), we get . According to Lemma 4, we obtain . Furthermore, by , we also get . By (27), (28), and the correction step of Algorithm 8, we immediately conclude that , so we get .
Step 3. Consider .
Indeed, by the prediction step of Algorithm 8, we have . According to the assumptions and , . By (32), we immediately obtain .
By a series of computations, we can get . Hence, by (28), (33), and (34), we also obtain . Using Steps 2 and 3, it is easy to obtain the following corollary.
Corollary 14. Consider .
Applying Steps 2 and 3, one gets , so it is easy to see that .
Step 4. Consider .
For some , there exists converging weakly to , and such that . According to , we have . Since is the unique solution of , we can obtain . Since , we immediately conclude that .
Step 5. By Step 1 and Lemma 1, we have , where and .
Denote . We can rewrite (42) as . In fact, satisfies the conditions of Lemma 5; according to , we obtain . Moreover, by Step 4, we also obtain .
Furthermore, by (43), (47), and (48), it is easy to obtain . Consequently, applying Lemma 5, we obtain . This completes the proof.

4. Numerical Experiments

The problem considered in this section is (51), where $\| \cdot \|_F$ is the matrix Frobenius norm; that is, $\|C\|_F = \bigl( \sum_{i=1}^{n} \sum_{j=1}^{n} C_{ij}^2 \bigr)^{1/2}$.

Note that the matrix Frobenius norm is induced by the inner product $\langle A, B \rangle = \operatorname{tr}(A^{T}B)$. Such problems arise from finance and statistics, and we form the test problems similarly to [9, 21].
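For instance, the identity $\langle A, B \rangle = \operatorname{tr}(A^{T}B)$ and the induced norm can be checked in a few lines of MATLAB:

% The Frobenius norm is induced by the trace inner product <A,B> = trace(A'*B):
A = randn(5);
fro_via_trace = sqrt(trace(A'*A));    % norm from the inner product
fro_builtin = norm(A, 'fro');         % MATLAB's built-in Frobenius norm
% the two values agree up to rounding error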

Let , where . Let be given symmetric matrices and let be asymmetric, which differs from the previous approaches [9, 21]; it is to be noted that the extended contraction method (EC method) [9] has much difficulty in computing the examples when is asymmetric, where, elementwise:

Then (51) is equivalent to the following variational inequality: So we get

According to Condition 10, we take the following parameter sequences, and let Condition 10 denote this choice: . According to Condition 12, we take the following parameter sequences, and let Condition 12 denote this choice: . Obviously, the projection onto is difficult to compute directly. In order to reduce the difficulty and complexity of computing the projection , we define by , where can be computed without difficulty and the fixed point set of is . According to Theorems 11 and 13, the sequences generated by Algorithm 8 under Conditions 10 and 12 are convergent.
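The precise definition of the auxiliary mapping is not preserved in this copy. In problems of this type (see [9, 21]) the expensive part of the feasible set is typically the positive semidefinite cone, whose projection has the standard spectral formula; the MATLAB sketch below shows that formula, with the caveat that identifying it with the mapping used in the paper is our assumption.

% Projection of a symmetric matrix onto the positive semidefinite cone via
% eigenvalue clipping (standard formula; its role as the paper's auxiliary
% mapping is an assumption on our part). Save as psd_projection.m.
function X = psd_projection(A)
    A = (A + A')/2;              % symmetrize first
    [Q, D] = eig(A);
    X = Q*max(D, 0)*Q';          % zero out the negative eigenvalues
end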

The computation begins with a matrix of ones (ones in MATLAB) and stops as soon as . All codes were implemented in MATLAB 7.1 and run on an Acer notebook with a Pentium(R) 1.70 GHz processor and 2 GB of RAM.

We test the problems with , 200, 300, 400, 500, 1000, and 2000. The test results for the PRH method under different conditions are reported in Tables 1, 2, 3, and 4; CPU time is given in seconds. It is to be noted that the results of the extended contraction method are reported only when the number of iterations (It) is less than or equal to 100.

Table 1: Numerical results for the PRH method and the EC method.
Table 2: Numerical results for tolerance .
Table 3: Numerical results for tolerance .
Table 4: Numerical results for tolerance .

Test Example 1. In this example we generate the data in a manner similar to [9]. The entries of the diagonal elements of are randomly generated in the interval ; the entries of the off-diagonal elements of are randomly generated in the interval (Algorithm 1). When and the tolerance is , the computation time of the proposed method is too long, so in the following the results of the PRH method are reported as approximate solutions with and tolerance . Moreover, the extended contraction method (EC method) has much difficulty in computing the examples when is asymmetric. Furthermore, by introducing an auxiliary variable, a certain projection method or the relaxed-PPA method [10] can also be applied to these tests.

Algorithm 1
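The body of Algorithm 1 is not preserved in this copy. Purely as a hypothetical stand-in, the following MATLAB lines generate a random symmetric test matrix whose diagonal and off-diagonal entries are drawn from placeholder intervals; the actual intervals used in the paper are not recoverable here.

% Hypothetical data generator in the spirit of [9]; d_lo/d_hi and o_lo/o_hi
% are placeholders for the intervals used in the paper.
n = 100;
d_lo = 0; d_hi = 2;              % placeholder diagonal interval
o_lo = -1; o_hi = 1;             % placeholder off-diagonal interval
C = o_lo + (o_hi - o_lo)*rand(n);
C = (C + C')/2;                  % symmetrize the off-diagonal part
C(1:n+1:end) = d_lo + (d_hi - d_lo)*rand(1, n);   % overwrite the diagonal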

Test Example 2. We form the data of the second problem in a manner similar to the first test example. The entries of the diagonal elements of are randomly generated in the interval ; the entries of the off-diagonal elements of are generated from a uniform distribution on the same interval (Algorithm 2).

Algorithm 2

From Tables 1 to 3, we find that, in terms of both iteration numbers and CPU time, the PRH method under Condition 12 is more efficient than under Condition 10. The results in Table 4 show that the PRH method under some descent directions is slightly more efficient than the MRHSD method [14, 16], and these descent directions are easy to obtain. Furthermore, this can also be observed from Tables 2 and 4.

5. Conclusions

We have proved the strong convergence of the PRH method under Condition 12, which differs from Condition 10. The result can be considered an improvement and refinement of the previous results [14]. More importantly, numerical experiments demonstrated that the PRH method under Condition 12 is more efficient than under Condition 10, and that the PRH method under some descent directions is slightly more efficient than the MRHSD method. How to select the parameters of the PRH method for solving variational inequalities is worthy of further investigation.

Acknowledgments

This research was supported by National Science and Technology Support Program (Grant no. 2011BAH24B06), Joint Fund of National Natural Science Foundation of China and Civil Aviation Administration of China (Grant no. U1233105), and Science Foundation of the Civil Aviation Flight University of China (Grant no. J2010-45).

References

1. M. S. Gowda and Y. Song, “On semidefinite linear complementarity problems,” Mathematical Programming, vol. 88, no. 3, pp. 575–587, 2000.
2. I. Yamada, “The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings,” in Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, D. Butnariu, Y. Censor, and S. Reich, Eds., vol. 8, pp. 473–504, North-Holland, Amsterdam, The Netherlands, 2001.
3. F. Deutsch and I. Yamada, “Minimizing certain convex functions over the intersection of the fixed point sets of nonexpansive mappings,” Numerical Functional Analysis and Optimization, vol. 19, no. 1-2, pp. 33–56, 1998.
4. H. K. Xu and T. H. Kim, “Convergence of hybrid steepest-descent methods for variational inequalities,” Journal of Optimization Theory and Applications, vol. 119, no. 1, pp. 185–201, 2003.
5. L. C. Zeng, N. C. Wong, and J. C. Yao, “Convergence analysis of modified hybrid steepest-descent methods with variable parameters for variational inequalities,” Journal of Optimization Theory and Applications, vol. 132, no. 1, pp. 51–69, 2007.
6. L. C. Zeng, “On a general projection algorithm for variational inequalities,” Journal of Optimization Theory and Applications, vol. 97, no. 1, pp. 229–235, 1998.
7. X. P. Ding, Y. C. Lin, and J. C. Yao, “Three-step relaxed hybrid steepest-descent methods for variational inequalities,” Applied Mathematics and Mechanics, vol. 28, no. 8, pp. 1029–1036, 2007.
8. B. S. He, “A new method for a class of linear variational inequalities,” Mathematical Programming, vol. 66, no. 2, pp. 137–144, 1994.
9. B. S. He and M. H. Xu, “A general framework of contraction methods for monotone variational inequalities,” Pacific Journal of Optimization, vol. 4, no. 2, pp. 195–212, 2008.
10. B. S. He, “PPA-based contraction methods for general linearly constrained convex optimization,” Lectures of Contraction Methods for Convex Optimization and Monotone Variational Inequalities, 06C, 2012, http://math.nju.edu.cn/~hebma/.
11. N. J. Huang, X. X. Huang, and X. Q. Yang, “Connections among constrained continuous and combinatorial vector optimization,” Optimization, vol. 60, no. 1-2, pp. 15–27, 2011.
12. P. T. Harker and J. S. Pang, “A damped-Newton method for the linear complementarity problem,” in Computational Solution of Nonlinear Systems of Equations, vol. 26, pp. 265–284, American Mathematical Society, Providence, RI, USA, 1990.
13. T. Suzuki, “Strong convergence of Krasnoselskii and Mann's type sequences for one-parameter nonexpansive semigroups without Bochner integrals,” Journal of Mathematical Analysis and Applications, vol. 305, no. 1, pp. 227–239, 2005.
14. H. W. Xu, E. B. Song, H. P. Pan, H. Shao, and L. M. Sun, “The modified and relaxed hybrid steepest-descent methods for variational inequalities,” in Proceedings of the 1st International Conference on Modelling and Simulation, vol. 2, pp. 169–174, World Academic Press, 2008.
15. H. W. Xu, H. Shao, and Q. C. Zhang, “The prediction-correction and relaxed hybrid steepest-descent method for variational inequalities,” in Proceedings of the International Symposium on Education and Computer Science, vol. 1, pp. 252–256, IEEE Computer Society and Academy, 2009.
16. H. W. Xu, “Efficient implementation of a modified and relaxed hybrid steepest-descent method for a type of variational inequality,” Journal of Inequalities and Applications, vol. 2012, article 93, 2012.
17. J. H. Hammond, Solving asymmetric variational inequality problems and systems of equations with generalized nonlinear programming algorithms [Ph.D. dissertation], Department of Mathematics, MIT, Cambridge, Mass, USA, 1984.
18. P. Tseng, “Further applications of a splitting algorithm to decomposition in variational inequalities and convex programming,” Mathematical Programming, vol. 48, no. 2, pp. 249–263, 1990.
19. R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, UK, 1991.
20. D. F. Sun, “A projection and contraction method for generalized nonlinear complementarity problems,” Mathematica Numerica Sinica, vol. 16, no. 2, pp. 183–194, 1994.
21. Y. Gao and D. F. Sun, “Calibrating least squares covariance matrix problems with equality and inequality constraints,” Tech. Rep., Department of Mathematics, National University of Singapore, 2008.
22. M. A. Noor, “Some recent advances in variational inequalities. I. Basic concepts,” New Zealand Journal of Mathematics, vol. 26, no. 1, pp. 53–80, 1997.
23. M. A. Noor, “New approximation schemes for general variational inequalities,” Journal of Mathematical Analysis and Applications, vol. 251, no. 1, pp. 217–229, 2000.
24. Y. Yao, M. A. Noor, R. Chen, and Y.-C. Liou, “Strong convergence of three-step relaxed hybrid steepest-descent methods for variational inequalities,” Applied Mathematics and Computation, vol. 201, no. 1-2, pp. 175–183, 2008.
25. A. Nagurney, Ed., Advances in Equilibrium Modeling, Analysis, and Computation, vol. 44 of Annals of Operations Research, J. C. Baltzer AG Scientific Publishing, Basel, Switzerland, 1993.