Abstract

Among the quasi-Newton algorithms, the BFGS method is the one most often discussed by scholars in the field. However, under inexact Wolfe line searches, or even under an exact line search, the global convergence of the BFGS method for nonconvex functions has still not been proven. Motivated by these issues, we propose a new quasi-Newton algorithm with better convergence properties, designed according to the following essentials: (1) a modified BFGS formula is designed to guarantee that $B_{k+1}$ inherits the positive definiteness of $B_k$; (2) a modified weak Wolfe–Powell line search is recommended; (3) a parabola, which is considered as the projection surface to avoid using an invalid direction, is proposed, and the next point is designed by a projection technique; (4) to obtain the global convergence of the proposed algorithm more easily, the projection point is used in the next iteration rather than in the current modified BFGS update formula; and (5) the global convergence of the given algorithm is established under suitable conditions. Numerical results show that the proposed algorithm is efficient.

1. Introduction

Consider the unconstrained optimization problem

$$\min_{x \in \mathbb{R}^n} f(x), \qquad (1)$$

where $f: \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable. The multitudinous algorithms for (1) often use the following iterative formula:

$$x_{k+1} = x_k + \alpha_k d_k, \qquad (2)$$

where $x_k$ is the current point, $\alpha_k > 0$ is a step size, and $d_k$ is a search direction at $x_k$. There exist many algorithms for (1) [1–9]. Davidon [10] pointed out that the quasi-Newton method is one of the most effective methods for solving nonlinear optimization problems. The idea of the quasi-Newton method is to use the first derivative to establish an approximate Hessian matrix over many iterations, where the approximation is updated by a low-rank matrix in each iteration. The primary quasi-Newton equation is as follows:

$$B_{k+1} s_k = y_k, \qquad (3)$$

where $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$.

The search direction $d_k$ of the quasi-Newton method is generated by the following equation:

$$B_k d_k = -g_k, \qquad (4)$$

where $B_0$ is any given symmetric positive-definite matrix, the Hessian approximation matrix $B_k$ is the quasi-Newton update matrix, and $g_k$ is the gradient of $f$ at $x_k$. The BFGS (Broyden [11], Fletcher [12], Goldfarb [13], and Shanno [14]) method is one of the quasi-Newton line search methods and has great numerical stability. The famous BFGS update formula is

$$B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k}, \qquad (5)$$

which is effective for solving (1) [15–18]. Powell [19] first proved that the BFGS method possesses global convergence for convex functions under the Wolfe line search. Further global convergence results for the BFGS method on convex minimization problems can be found in [19–26]. However, Dai [16] proposed a counterexample illustrating that the standard BFGS method may fail for nonconvex functions with the Wolfe line search, and Mascarenhas [27] demonstrated the nonconvergence of the standard BFGS method even with an exact line search. To obtain global convergence of the BFGS method for general functions, some modified BFGS methods [28–31] have also been presented for nonconvex minimization problems. Aiming for a better approximation of the Hessian matrix of the objective function, Wei et al. [32] proposed a new BFGS method, whose formula is

$$B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{\bar{y}_k \bar{y}_k^T}{\bar{y}_k^T s_k}, \qquad (6)$$

where $\bar{y}_k = y_k + \frac{\vartheta_k}{\|s_k\|^2} s_k$ and $\vartheta_k = 2(f_k - f_{k+1}) + (g_{k+1} + g_k)^T s_k$, and the corresponding quasi-Newton equation is as follows:

$$B_{k+1} s_k = \bar{y}_k. \qquad (7)$$
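To fix ideas, the classical update (5) can be coded in a few lines. The following Python sketch is ours; the curvature safeguard (skipping the update when $y_k^T s_k \le 0$) is a common practical convention rather than part of formula (5).

```python
import numpy as np

def bfgs_update(B, s, y, eps=1e-12):
    """Classical BFGS update (5):
    B+ = B - (B s s^T B)/(s^T B s) + (y y^T)/(y^T s).

    The curvature condition y^T s > 0 preserves positive definiteness;
    skipping the update when it fails is a practical safeguard.
    """
    Bs = B @ s
    sBs = s @ Bs
    ys = y @ s
    if ys <= eps or sBs <= eps:   # safeguard: keep the previous matrix
        return B
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / ys
```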

For convex functions, convergence analysis of the new BFGS algorithm was given under the weak Wolfe–Powell line search:

$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k \qquad (8)$$

and

$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k, \qquad (9)$$

where $\delta \in (0, 1/2)$ and $\sigma \in (\delta, 1)$.

Motivated by the above formula and other observations, Yuan and Wei [33] defined a modified quasi-Newton equation as follows:

$$B_{k+1} s_k = y_k^*, \qquad y_k^* = y_k + \frac{\max\{\varrho_k, 0\}}{\|s_k\|^2} s_k, \qquad (10)$$

where $\varrho_k = 2(f_k - f_{k+1}) + (g_{k+1} + g_k)^T s_k$, and the corresponding update formula replaces $y_k$ in (5) by $y_k^*$.
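In code, the modified vector $y_k^*$ of (10) costs only a few inner products. The following Python sketch is our transcription of the definitions above.

```python
import numpy as np

def modified_y(s, y, fk, fk1, gk, gk1):
    """Form y_k^* = y_k + (max{rho_k, 0} / ||s_k||^2) s_k from (10),
    where rho_k = 2(f_k - f_{k+1}) + (g_{k+1} + g_k)^T s_k."""
    rho = 2.0 * (fk - fk1) + (gk1 + gk) @ s
    return y + (max(rho, 0.0) / (s @ s)) * s
```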

It is obvious that if $\varrho_k > 0$ holds, then the quasi-Newton equation is equation (7); otherwise, it is the standard quasi-Newton equation of the BFGS method. Therefore, when $\varrho_k > 0$ holds, the modified quasi-Newton method (10) and the quasi-Newton method (6) have the same approximation of the Hessian matrix. Inspired by their views, we will demonstrate the global convergence of the modified BFGS (MBFGS) method (10) for nonconvex functions with the modified weak Wolfe–Powell (MWWP) line search [34], whose form is as follows:

$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k + \alpha_k \min\left[-\delta_1 g_k^T d_k, \frac{\delta \alpha_k \|d_k\|^2}{2}\right] \qquad (11)$$

and

$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k + \min\left[-\delta_1 g_k^T d_k, \delta \alpha_k \|d_k\|^2\right], \qquad (12)$$

where $\delta \in (0, 1/2)$, $\delta_1 \in (0, \delta)$, and $\sigma \in (\delta, 1)$. The parameter $\delta_1$ is different from that in paper [34].
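As a concrete reading of (11) and (12), the following Python sketch (ours) tests whether a trial step size is acceptable; the default parameter values are illustrative choices satisfying $0 < \delta_1 < \delta < 1/2$ and $\delta < \sigma < 1$, not the paper's settings.

```python
import numpy as np

def mwwp_accept(f, grad, xk, dk, alpha, delta=0.1, delta1=0.05, sigma=0.9):
    """Check the MWWP conditions (11)-(12) for a trial step size alpha.

    Requires 0 < delta1 < delta < 1/2 and delta < sigma < 1.
    """
    gk = grad(xk)
    gd = gk @ dk                      # g_k^T d_k (negative for descent)
    dd = dk @ dk                      # ||d_k||^2
    xnew = xk + alpha * dk
    armijo = f(xnew) <= (f(xk) + delta * alpha * gd
                         + alpha * min(-delta1 * gd, delta * alpha * dd / 2))
    curvature = grad(xnew) @ dk >= (sigma * gd
                                    + min(-delta1 * gd, delta * alpha * dd))
    return armijo and curvature
```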

This article is organized as follows: Section 2 introduces the motivation and states the given technique and algorithm. In Section 3, we prove the global convergence of the modified BFGS method with the MWWP line search under some reasonable conditions. Section 4 reports the results of numerical experiments that show the performance of the algorithms. The last section presents the conclusion. Throughout the article, $f(x_k)$ and $g(x_k)$ are written as $f_k$ and $g_k$, and $f(x_{k+1})$ and $g(x_{k+1})$ are written as $f_{k+1}$ and $g_{k+1}$; $\|\cdot\|$ denotes the Euclidean norm.

2. Motivation and Algorithm

The global convergence of the BFGS algorithm has been established for uniformly convex functions, a class with many useful properties. It is therefore worth asking whether these properties of uniformly convex functions can be exploited in the BFGS algorithm to obtain global convergence. This idea motivates us to propose a projection technique to acquire better convergence properties of the BFGS algorithm. We first define a trial point for (1):

$$w_{k+1} = x_k + \alpha_k d_k, \qquad (13)$$

where $w_{k+1}$ is the next point generated by the classical BFGS iteration. Moreover, a parabolic form is given as follows:

$$z(x) = f(x_k) + g_k^T (x - x_k) + \frac{r}{2} \|x - x_k\|^2, \qquad (14)$$

where $r > 0$ is a constant. It is not difficult to see that (14) can be considered as the first two terms of the Taylor expansion of $f$ at $x_k$ plus a quadratic term whose Hessian matrix is a diagonal matrix with eigenvalue $r$; such a function is uniformly convex, a class on which the BFGS method is globally convergent. By projecting $w_{k+1}$ onto (14), we obtain the next point $x_{k+1}$:

$$x_{k+1} = P_{(14)}[w_{k+1}], \qquad (15)$$

where $P_{(14)}[\cdot]$ denotes the projection operator onto the surface (14).
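Since the closed form of the projection (15) is developed in the paper, the following Python sketch only illustrates the idea numerically: it finds the nearest point on the graph of the parabola (14) by a generic solver. The function names and the use of scipy are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def project_onto_parabola(w, fw, xk, fk, gk, r):
    """Illustrative numerical projection of the point (w, f(w)) onto the
    graph of the parabola (14),
    z(x) = f_k + g_k^T (x - x_k) + (r/2) ||x - x_k||^2.

    The paper uses a closed-form projection (15); here we simply solve the
    nearest-point problem numerically to convey the same idea.
    """
    def z(x):
        d = x - xk
        return fk + gk @ d + 0.5 * r * (d @ d)

    def dist2(x):                      # squared distance in graph space
        return np.dot(x - w, x - w) + (z(x) - fw) ** 2

    res = minimize(dist2, w)           # start the search from w itself
    return res.x
```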

The idea of the projection can also be found in [6, 8, 35]. Based on the above discussions, the modified algorithm is given in Algorithm 1.

Step 0: given an initial point $x_0 \in \mathbb{R}^n$, constants $\epsilon \in (0, 1)$, $\delta \in (0, 1/2)$, $\delta_1 \in (0, \delta)$, $\sigma \in (\delta, 1)$, $r > 0$, and an $n \times n$ symmetric positive-definite matrix $B_0$, set $k := 0$.
Step 1: stop if $\|g_k\| \le \epsilon$.
Step 2: obtain a search direction $d_k$ by solving $B_k d_k + g_k = 0$. (16)
Step 3: calculate the step size $\alpha_k$ using the inequalities (11) and (12).
Step 4: set $w_{k+1} = x_k + \alpha_k d_k$ by (13).
Step 5: if the positive condition holds, then let $x_{k+1} = w_{k+1}$, $s_k = x_{k+1} - x_k$, and go to Step 7; otherwise, go to Step 6.
Step 6: let $x_{k+1}$ be defined by (15), $s_k = x_{k+1} - x_k$, and $y_k = g_{k+1} - g_k$.
Step 7: update $B_{k+1}$ using formula (10).
Step 8: let $k := k + 1$, and go to Step 1.
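The control flow of Algorithm 1 can be summarized by the following schematic Python driver (ours); the line search, the Step 5 acceptance test, the projection of Step 6, and the update of Step 7 are supplied as callables because their exact forms are given by (11)-(12), the positive condition, (15), and (10), respectively.

```python
import numpy as np

def mbfgs(f, grad, x0, line_search, accept, project, update_B,
          eps=1e-6, max_iter=1000):
    """Schematic driver for Algorithm 1 (names and structure are ours)."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                           # Step 0: B_0 = I
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:             # Step 1: stopping test
            break
        d = np.linalg.solve(B, -g)               # Step 2: B_k d_k = -g_k (16)
        alpha = line_search(f, grad, x, d)       # Step 3: MWWP (11)-(12)
        w = x + alpha * d                        # Step 4: trial point (13)
        x_new = w if accept(x, w) else project(x, w)   # Steps 5-6
        s, y = x_new - x, grad(x_new) - g
        B = update_B(B, s, y)                    # Step 7: MBFGS formula (10)
        x = x_new                                # Step 8
    return x
```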

Remark 1. (i) $x_{k+1}$ in Step 6 is the defined projection point, and the vector $s_k$ there is the same as the vector $s_k$ in Step 5; the projection point does not enter the current update (10) but is used in the next iteration. (ii) If the positive condition holds in Step 5, then the global convergence of the algorithm can be obtained from the modified weak Wolfe–Powell line search, (11) and (12). If not, we can ensure the global convergence of the algorithm using the projection method (15).

3. Convergence Analysis

In this section, we concentrate on the global convergence of the modified projection BFGS algorithm. The following assumptions are required.

Assumption 1. (i) The level set $\Omega = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$ is bounded. (ii) The function $f$ is twice continuously differentiable and bounded from below, and its gradient function is Lipschitz continuous; that is,

$$\|g(x) - g(y)\| \le L \|x - y\|, \quad \forall x, y \in \mathbb{R}^n, \qquad (17)$$

holds, where $L > 0$ is a constant. Assumption 1(ii) indicates that the relation

$$\|y_k\| = \|g_{k+1} - g_k\| \le L \|s_k\| \qquad (18)$$

holds.

Theorem 1. Suppose that Assumption 1 and $g_k^T d_k < 0$ hold. Then, there exists a step size $\alpha_k > 0$ satisfying (11) and (12), where $\delta \in (0, 1/2)$, $\delta_1 \in (0, \delta)$, and $\sigma \in (\delta, 1)$ are constants.

Proof. The detailed proof of the rationality of the line search is given in paper [35].

Lemma 1. Let Assumption 1 and $g_k^T d_k < 0$ hold. If the sequence $\{x_k\}$ is generated by Algorithm 1, then the step size is bounded below, $\alpha_k \ge m > 0$, where $m$ is a constant.

Proof. According to Lemma 1 of paper [35], the relations (19) and (20) hold, where the parameter is that of the line search (11) and (12). Combining (19) and (20), we obtain (21) with the constant $m > 0$. Using the definition of $w_{k+1}$, we obtain (22). Therefore, $\alpha_k \ge m$ holds. The proof is complete.

Lemma 2. If the sequence $\{x_k\}$ is generated by Algorithm 1 and Assumption 1 holds, then the matrix $B_k$ is positive definite for all $k$.

Proof. According to (18), the relation $y_k^{*T} s_k > 0$ is valid, and therefore the update (10) preserves the positive definiteness of $B_k$. The proof is complete.

Lemma 3. If the sequence $\{x_k\}$ is generated by Algorithm 1 and Assumption 1 holds, then relation (23) holds.

Proof. According to (11) and Assumption 1(ii), formula (24) obviously holds. Combining (12) with (16), we obtain (25); thus, (26) follows. Substituting inequality (26) into (24), we have (23). The proof is complete.

Lemma 4. Let Assumption 1 and the inequality $g_k^T d_k < 0$ hold. Then there exist constants $\beta_1, \beta_2 > 0$ such that relations (27) and (28) hold for at least $\lceil t/2 \rceil$ values of $k \in \{1, 2, \ldots, t\}$, for any positive integer $t$.

Proof. The proof is completed by considering the following two cases.

Case 1. The positive condition in Step 5 is true; then $x_{k+1} = w_{k+1}$ and $s_k = \alpha_k d_k$. By (16) and Assumption 1, relation (29) holds. Combining (29) with (19), we obtain (30). Assumption 1(ii) and the definitions above imply (31). By (31), (19), and the definition of $s_k$, we have (32). Relations (22), (30), and (32) indicate that (33) holds.

Case 2. The positive condition in Step 5 fails. From Step 6 of Algorithm 1, we obtain (34). Together with (32), we have (35). Combining (35) with (20), we obtain (36). Using the definition of $y_k^*$, we have (37), where the second inequality follows from the relations above, and the last inequality follows from (16). Combining (37) with (20), we obtain (38).

In both cases, relation (39) always holds, where the constants are determined by the preceding relations. Similar to the proof of Theorem 1 in [36], we obtain (27) and (28). The proof is complete.

Theorem 2. If the conditions of Lemma 4 hold, then we obtain

$$\liminf_{k \to \infty} \|g_k\| = 0. \qquad (40)$$

Proof. By (23), we obtain a corresponding estimate; then, using Algorithm 1, this estimate extends to all iterates. The relationship between (27) and (28) yields a bound that holds for the at least $\lceil t/2 \rceil$ indices of Lemma 4. Combining Lemma 4 with this bound and letting $t \to \infty$, we conclude that (40) holds. The proof is complete.

4. Numerical Results

In this section, we perform numerical experiments that test Algorithm 1 with the modified weak Wolfe–Powell line search and compare its performance with that of the standard BFGS method. We refer to Algorithm 1 as MBFGS.

4.1. General Unconstrained Optimization Problems

Tested problems: the problems are taken from [37, 38]; there are 74 test problems in total, listed in Table 1.
Dimensionality: problem instances with 300, 900, and 2700 variables are considered.
Himmelblau stop rule [39]: if $|f(x_k)| > e_1$, then set $stop1 = \frac{|f(x_k) - f(x_{k+1})|}{|f(x_k)|}$; otherwise, set $stop1 = |f(x_k) - f(x_{k+1})|$. If $\|g(x_k)\| < \epsilon$ or $stop1 < e_2$ holds, then the program is stopped, where $e_1$ and $e_2$ are small positive tolerances (see the code sketch after this list).
Parameters: in Algorithm 1, the constants $\delta$, $\delta_1$, $\sigma$, and $r$ are fixed, and $B_0$ is the unit matrix.
Experiment environment: all programs are written in MATLAB R2014a and run on a PC with an Intel(R) Core(TM) i5-4210U CPU @ 1.70 GHz, 8.00 GB of RAM, and the Windows 10 operating system.
Symbol representation: the notation used in Tables 1 and 2 is as follows. No: the test problem number. CPUTime: the CPU time in seconds. NI: the number of iterations. NFG: the total number of function and gradient evaluations.
Image description: Figures 1–3 show the performance profiles of CPUTime, NI, and NFG. These figures show that the MBFGS method performs best, since its performance curves for CPUTime, NI, and NFG dominate those of the BFGS method. In addition, the totals of CPUTime, NI, and NFG for the modified BFGS method are lower than those for the BFGS method.
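For concreteness, the Himmelblau-type stop rule described above can be coded as follows; the threshold defaults are our placeholders, since the exact values belong to the original setup.

```python
def himmelblau_stop(fk, fk1, gnorm, eps=1e-6, e1=1e-5, e2=1e-5):
    """Himmelblau-type stop rule: relative (or absolute) change in f,
    combined with a gradient-norm test. Threshold values are assumptions."""
    stop1 = abs(fk - fk1) / abs(fk) if abs(fk) > e1 else abs(fk - fk1)
    return gnorm < eps or stop1 < e2
```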

4.2. The Muskingum Model in Engineering Problems

This subsection presents the Muskingum model, defined below. The key task is to estimate the model parameters numerically using Algorithm 1.

Muskingum Model [40]:

$$\min f(x_1, x_2, x_3) = \sum_{i=1}^{n-1} \left[ x_1 \left( x_2 I_{i+1} + (1 - x_2) Q_{i+1} \right)^{x_3} - x_1 \left( x_2 I_i + (1 - x_2) Q_i \right)^{x_3} - \frac{\Delta t}{2} \left( (I_i - Q_i) + (I_{i+1} - Q_{i+1}) \right) \right]^2,$$

whose symbolic representation is as follows: $x_1$ is the storage time constant, $x_2$ is the weight coefficient, $x_3$ is an extra parameter, $I_i$ is the observed inflow discharge, $Q_i$ is the observed outflow discharge, $\Delta t$ is the time step at time $t_i$ ($i = 1, 2, \ldots, n$), and $n$ is the total time.
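To experiment with the model, the least-squares objective can be coded directly; the following Python sketch is our reading of the storage-routing residual above, with the time step left as a free parameter since its value is given in the experiment description.

```python
import numpy as np

def muskingum_objective(x, I, Q, dt=1.0):
    """Least-squares Muskingum routing residual (sketch of the model above).

    x = (x1, x2, x3): storage time constant, weight coefficient, extra
    parameter. I, Q: observed inflow/outflow arrays (assumed positive so
    the fractional power is well defined). dt: time step (placeholder
    default; the experiment's value is given in Section 4.2).
    """
    x1, x2, x3 = x
    S = x1 * (x2 * I + (1.0 - x2) * Q) ** x3          # nonlinear storage
    dS = S[1:] - S[:-1]                                # storage change
    net = 0.5 * dt * ((I[:-1] - Q[:-1]) + (I[1:] - Q[1:]))  # mean net inflow
    return np.sum((dS - net) ** 2)
```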

The observed data for the experiment are derived from the flood runoff process between Chenggouwan and Linqing of Nanyunhe in the Haihe Basin, Tianjin, China. To obtain better numerical results, a suitable initial point and time step $\Delta t$ are selected. The specific values of $I_i$ and $Q_i$ for the years 1960, 1961, and 1964 are stated in article [41]. The test results are listed in Table 3.

The following three conclusions are apparent from Figures 4–6 and Table 3: (1) combined with the Muskingum model, the MBFGS method shows strong numerical performance, similar to the BFGS method and the HIWO method, and all three algorithms are efficient; (2) the final points ($x_1$, $x_2$, and $x_3$) of the MBFGS method are competitive with those of the other similar methods; (3) since the final points of the three methods are distinct, the Muskingum model may possess several approximate optimal points.

5. Conclusion

This paper presents a modified BFGS method and studies its global convergence under an inexact line search for nonconvex functions. The proposed algorithm has the following properties: (i) the search direction and its associated step size are accepted if a positive condition holds, and the next iteration point is then defined; otherwise, a parabola is introduced, regarded as the projection surface, to avoid using the failed direction, and the next point is designed by a projection technique. (ii) To obtain the global convergence of the proposed algorithm more easily, the projection point is used in the next iteration rather than in the current modified BFGS update formula. The global convergence analysis for nonconvex functions and the numerical results indicate that the given method is competitive with other similar methods. For future work, we consider the following points: (a) Is there a new projection technique suitable for the global convergence of the modified BFGS method? (b) The application of the modified BFGS method (10) to other line search techniques should be discussed. (c) It is worth investigating whether combining the proposed projection technique with a conjugate gradient method, especially the PRP method, yields good numerical results.

Data Availability

All data supporting the findings are included in the paper.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 11661009), the High Level Innovation Teams and Excellent Scholars Program in Guangxi Institutions of Higher Education (Grant No. [2019]52), the Guangxi Natural Science Key Fund (No. 2017GXNSFDA198046), and the Special Funds for Local Science and Technology Development Guided by the Central Government (No. ZY20198003).