Abstract

The BFGS method is one of the most effective quasi-Newton algorithms for unconstrained minimization problems. In this paper, an improved BFGS method with a modified weak Wolfe–Powell line search technique is used to solve convex minimization problems, and its convergence analysis is established. Seventy-four academic test problems and the Muskingum model are used in the numerical experiments. The numerical results show that our algorithm is comparable to the usual BFGS algorithm in terms of the number of iterations and the time consumed, which indicates that our algorithm is effective and reliable.

1. Introduction

With the development of the economy and society, a large number of optimization problems have emerged in fields such as economic management, aerospace, transportation, and national defense. It is therefore necessary and meaningful to analyse these problems and to find effective methods for solving them. Let us consider the optimization model

$$\min_{x \in \mathbb{R}^n} f(x), \tag{1}$$

where $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is continuously differentiable. To solve (1), the following iterative formula is widely used. Given a starting point $x_0 \in \mathbb{R}^n$, the iterative scheme is

$$x_{k+1} = x_k + \alpha_k d_k, \tag{2}$$

where $x_k$ is the current iteration point, $x_{k+1}$ is the next iteration point, $\alpha_k > 0$ is the step length, and $d_k$ is the search direction that is obtained by solving the quasi-Newton equation

$$B_k d_k + g_k = 0, \tag{3}$$

where $g_k = \nabla f(x_k)$ is the gradient of $f$ at the point $x_k$, $B_k$ is the quasi-Newton updating matrix or its approximation, and the sequence $\{B_k\}$ satisfies the standard secant equation $B_{k+1} s_k = y_k$. The updating matrix $B_{k+1}$ can be defined by

$$B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{s_k^T y_k}, \tag{4}$$

where $s_k = x_{k+1} - x_k$, $y_k = g_{k+1} - g_k$, and $B_k$ is symmetric and positive definite.
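To make update (4) concrete, the following NumPy sketch implements one standard BFGS update step; the function name, the curvature safeguard, and its threshold are our own illustrative choices rather than part of the paper.

```python
import numpy as np

def bfgs_update(B, s, y, eps=1e-12):
    """Standard BFGS update (4):
    B+ = B - (B s s^T B) / (s^T B s) + (y y^T) / (s^T y).

    Assumes B is symmetric positive definite; skipping the update when the
    curvature condition s^T y > 0 fails is a common practical safeguard.
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy <= eps or sBs <= eps:   # safeguard (our choice, not from the paper)
        return B
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy
```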

Formula (4) is the famous standard BFGS update formula, which yields one of the most effective quasi-Newton methods. For a convex function, under exact line search or some special inexact line searches, the global convergence (see [1, 2]) and superlinear convergence (see [3, 4]) of the BFGS method were obtained. For general functions, the BFGS method may fail under inexact line search techniques. This fact has been proven by Dai [5], and Mascarenhas [6] has also proven that the BFGS method can fail to converge even under exact line search. Although the convergence theory of the BFGS method for general nonconvex functions has these shortcomings, its high efficiency and great numerical stability have motivated many scholars [7–12] to study and improve the method. The improvements achieved by these scholars are as follows.

Formula 1 (see [7]). The BFGS update formula is modified by

$$B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k^* y_k^{*T}}{s_k^T y_k^*}, \tag{5}$$

where $y_k^* = y_k + \varphi(x_k) s_k$ and the function $\varphi$ satisfies (i) $\varphi(x) \ge 0$ for all $x$; (ii) $\varphi(x) = 0$ if and only if $g(x) = 0$; (iii) if $x$ lies in a bounded set, then $\varphi(x)$ is bounded. Li and Fukushima discussed its global convergence without the convexity assumption on $f$.

Formula 2 (see [8]). The BFGS update formula is modified by

$$B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k^* y_k^{*T}}{s_k^T y_k^*}, \tag{6}$$

where $y_k^*$ is a modified vector built from $s_k$ and $y_k$ whose precise definition is given in [8]. Moreover, scholars [8, 13] have proven that this method is better than the original BFGS method.

Formula 3 (see [9]). The BFGS update formula is modified by

$$B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{\tilde{y}_k \tilde{y}_k^T}{s_k^T \tilde{y}_k}, \tag{7}$$

where $\tilde{y}_k = y_k + \frac{\vartheta_k}{\|s_k\|^2} s_k$ and $\vartheta_k = 2[f(x_k) - f(x_{k+1})] + (g_{k+1} + g_k)^T s_k$. According to the definition of $\tilde{y}_k$, it is clear that the method contains both gradient and function value information. In addition, the modified quasi-Newton method with superlinear convergence constructed from Formula 3 is studied in [9].

Formula 4 (see [14]). The BFGS update formula is modified by

$$B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k^* y_k^{*T}}{s_k^T y_k^*}, \tag{8}$$

where $y_k^* = y_k + \frac{\max\{\vartheta_k, 0\}}{\|s_k\|^2} s_k$ and $\vartheta_k = 2[f(x_k) - f(x_{k+1})] + (g_{k+1} + g_k)^T s_k$. The global convergence of the improved BFGS method (MBFGS) is discussed by Li et al. [14]. Meanwhile, they also compared the above methods in numerical experiments. The results show that the algorithm based on this formula is superior to the other three methods.
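For illustration, the next snippet computes the modified vector $y_k^*$ of update (8) as reconstructed above (sharing the quantity $\vartheta_k$ with Formula 3); the exact definitions should be checked against [9, 14], so treat this as a sketch under those reconstructions.

```python
import numpy as np

def modified_y(f_k, f_next, g_k, g_next, s):
    """Modified difference vector y* used in update (8), as reconstructed above.

    theta = 2 (f_k - f_{k+1}) + (g_{k+1} + g_k)^T s_k; when theta <= 0,
    y* reduces to the ordinary difference y_k = g_{k+1} - g_k.
    """
    y = g_next - g_k
    theta = 2.0 * (f_k - f_next) + (g_next + g_k) @ s
    return y + max(theta, 0.0) * s / (s @ s)
```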
In many optimization algorithms, scholars often use the weak Wolfe–Powell (WWP) line search technique to find the step length $\alpha_k$. The WWP line search technique is determined by

$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k \quad \text{and} \quad g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k, \tag{9}$$

where $\delta \in (0, 1/2)$, $\sigma \in (\delta, 1)$, and $\alpha_k > 0$.
In order to obtain further interesting properties of the WWP line search, many scholars have improved this technique. Yuan et al. [15] improved the WWP line search technique and proved the global convergence of the BFGS and PRP methods under the new technique. Their improved line search (MWWP) is formulated as follows:

$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k + \alpha_k \min\left[-\delta_1 g_k^T d_k, \frac{\delta \alpha_k \|d_k\|^2}{2}\right], \tag{10}$$

$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k + \min\left[-\delta_1 g_k^T d_k, \frac{\delta \alpha_k \|d_k\|^2}{2}\right], \tag{11}$$

where $\delta \in (0, 1/2)$, $\delta_1 \in (0, \delta)$, $\sigma \in (\delta, 1)$, and $\alpha_k > 0$. The detailed line search is elaborated in [15]. Some research results based on this improved line search can be found in [16, 17]. The above discussion motivates us to seek an improved BFGS method that may achieve better numerical performance.
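The following small sketch tests whether a trial step satisfies conditions (10) and (11) as reconstructed above; the function name and the default parameter values are our own illustrative choices.

```python
import numpy as np

def mwwp_conditions(f, grad, x, d, alpha, delta=0.1, delta1=0.05, sigma=0.9):
    """Check the MWWP conditions (10)-(11) for a trial step length alpha.

    Requires delta in (0, 1/2), delta1 in (0, delta), sigma in (delta, 1).
    """
    gk_d = grad(x) @ d                    # g_k^T d_k (< 0 for a descent direction)
    m = min(-delta1 * gk_d, 0.5 * delta * alpha * (d @ d))
    armijo = f(x + alpha * d) <= f(x) + delta * alpha * gk_d + alpha * m   # (10)
    curvature = grad(x + alpha * d) @ d >= sigma * gk_d + m                # (11)
    return armijo and curvature
```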
In this article, we discuss our work in the following sections. In Section 2, using update (8) and the MWWP line search technique, an algorithm is constructed to solve optimization problem (1). In Section 3, we study the convergence of the modified BFGS method. In Section 4, the numerical results of the algorithm are reported. In the last section, the conclusion is presented.

2. Algorithm

The corresponding modified BFGS algorithm is called Algorithm 1 and can be presented as follows.

Algorithm 1.
(i) Step 1: choose an initial point $x_0 \in \mathbb{R}^n$, $\epsilon \in (0, 1)$, $\delta \in (0, 1/2)$, $\delta_1 \in (0, \delta)$, and $\sigma \in (\delta, 1)$. Given an initial symmetric and positive definite matrix $B_0$, set $k := 0$.
(ii) Step 2: when $\|g_k\| \le \epsilon$, stop. Otherwise, take the next step.
(iii) Step 3: solve $B_k d_k + g_k = 0$ to obtain $d_k$.
(iv) Step 4: the step length $\alpha_k$ is determined by (10) and (11).
(v) Step 5: set the new iteration point $x_{k+1} = x_k + \alpha_k d_k$. Update $B_{k+1}$ by (8).
(vi) Step 6: let $k := k + 1$ and return to Step 2.
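As a concrete illustration, here is a compact NumPy sketch of Algorithm 1 under the reconstructions above: the MBFGS update (8) together with a naive step-finding loop for conditions (10) and (11). The step-finding strategy, parameter defaults, and safeguards are our own choices; the paper itself does not prescribe them.

```python
import numpy as np

def find_mwwp_step(f, grad, x, d, delta, delta1, sigma, alpha=1.0):
    """Naive halving search for a step satisfying (10) and (11) (illustrative)."""
    g_d = grad(x) @ d                       # g_k^T d_k < 0 for a descent direction
    for _ in range(60):
        m = min(-delta1 * g_d, 0.5 * delta * alpha * (d @ d))
        if (f(x + alpha * d) <= f(x) + delta * alpha * g_d + alpha * m and
                grad(x + alpha * d) @ d >= sigma * g_d + m):
            return alpha
        alpha *= 0.5
    return alpha

def mbfgs_mwwp(f, grad, x0, eps=1e-6, delta=0.1, delta1=0.05, sigma=0.9,
               max_iter=500):
    """Sketch of Algorithm 1: MBFGS update (8) with the MWWP line search."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                      # Step 1: B_0 = I (our choice)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:        # Step 2: stopping rule
            break
        d = np.linalg.solve(B, -g)          # Step 3: B_k d_k + g_k = 0
        alpha = find_mwwp_step(f, grad, x, d, delta, delta1, sigma)  # Step 4
        x_new = x + alpha * d               # Step 5
        s, y = x_new - x, grad(x_new) - g
        theta = 2.0 * (f(x) - f(x_new)) + (grad(x_new) + g) @ s
        y_star = y + max(theta, 0.0) * s / max(s @ s, 1e-16)   # y* of (8)
        sy = s @ y_star
        if sy > 1e-12:                      # keep B_{k+1} positive definite
            Bs = B @ s
            B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y_star, y_star) / sy
        x = x_new                           # Step 6: k := k + 1
    return x

# Example on a smooth convex test function (hypothetical, for demonstration only).
f = lambda x: 0.5 * x @ x + 0.1 * np.sum(x ** 4)
grad = lambda x: x + 0.4 * x ** 3
print(mbfgs_mwwp(f, grad, np.ones(5)))
```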

Remark 1. The step length $\alpha_k$ generated by the proposed new line search technique yields great numerical performance, and the proof that the MWWP line search is well defined has been given in [15].

3. Convergence Analysis

The global convergence analysis of the improved BFGS method will be introduced in this section, and the following assumptions are needed.

Assumption 1.
(i) The level set $\Omega = \{x \in \mathbb{R}^n \mid f(x) \le f(x_0)\}$ of $f$ is bounded.
(ii) The objective function $f$ is convex on $\mathbb{R}^n$.
(iii) $f$ is twice continuously differentiable and bounded below, with a Lipschitz continuous gradient function $g$; that is, there exists a positive constant $L$ such that

$$\|g(x) - g(y)\| \le L \|x - y\|, \quad \forall x, y \in \mathbb{R}^n. \tag{12}$$

Next, we will give the global convergence. The positive definiteness of $B_k$ is presented in the following lemma.

Lemma 1. Let the sequence $\{B_k\}$ be generated by (8); then, the matrix $B_k$ is positive definite for all $k \ge 0$.

Proof. Induction is used to prove the positive definiteness of $B_k$. For $k = 0$, it is obvious that the matrix $B_0$ is positive definite. Now suppose that $B_k$ is positive definite. For $k + 1$, by the definition of $y_k^*$ in (8) and condition (11), we have

$$s_k^T y_k^* = s_k^T y_k + \max\{\vartheta_k, 0\} \ge s_k^T y_k = \alpha_k (g_{k+1} - g_k)^T d_k > 0, \tag{13}$$

where the last inequality holds by $g_{k+1}^T d_k \ge \sigma g_k^T d_k + \min\left[-\delta_1 g_k^T d_k, \delta \alpha_k \|d_k\|^2 / 2\right] > g_k^T d_k$ and $g_k^T d_k = -d_k^T B_k d_k < 0$. Therefore, the matrix $B_{k+1}$ given by (8) is positive definite. The proof is completed.
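As a quick numerical illustration (not part of the paper), the following hypothetical example checks that a BFGS-type update of the form (8) keeps the smallest eigenvalue of $B_{k+1}$ positive whenever the curvature condition $s_k^T y_k^* > 0$ holds.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
B = A @ A.T + n * np.eye(n)          # a symmetric positive definite B_k
s = rng.standard_normal(n)
y = rng.standard_normal(n)           # y stands in for the modified vector y_k^*
y = y if s @ y > 0 else -y           # force the curvature condition s^T y > 0

Bs = B @ s
B_next = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
print(np.linalg.eigvalsh(B_next).min() > 0)   # True: B_{k+1} stays PD
```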

Lemma 2. Let Assumption 1 hold, and let the sequence $\{x_k\}$ be generated by Algorithm 1. Then,

$$\sum_{k=0}^{\infty} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty. \tag{14}$$

Proof. By the MWWP line search condition (11) and the Lipschitz condition (12), we obtain

$$(\sigma - 1) g_k^T d_k \le (g_{k+1} - g_k)^T d_k \le L \alpha_k \|d_k\|^2. \tag{15}$$

Therefore, the following bound holds:

$$\alpha_k \ge \frac{(1 - \sigma)(-g_k^T d_k)}{L \|d_k\|^2}. \tag{16}$$

By (10) and Assumption 1 (iii), we have

$$f(x_{k+1}) \le f(x_k) + \delta \alpha_k g_k^T d_k + \alpha_k \min\left[-\delta_1 g_k^T d_k, \frac{\delta \alpha_k \|d_k\|^2}{2}\right] \le f(x_k) - (\delta - \delta_1) \alpha_k (-g_k^T d_k). \tag{17}$$

Adding these inequalities from $k = 0$ to $\infty$ and using the fact that $f$ is bounded below, we obtain

$$\sum_{k=0}^{\infty} \alpha_k (-g_k^T d_k) < +\infty. \tag{18}$$

Combining the above inequality with (16), we obtain (14). Therefore, Lemma 2 has been proven.

Remark 2. It is obvious that the term $\min\left[-\delta_1 g_k^T d_k, \delta \alpha_k \|d_k\|^2 / 2\right]$ takes one of two values. Therefore, the MWWP line search has two situations. In this paper, we discuss the situation of $\min\left[-\delta_1 g_k^T d_k, \delta \alpha_k \|d_k\|^2 / 2\right] = -\delta_1 g_k^T d_k$.

Lemma 3. Let the sequence $\{x_k\}$ be generated by Algorithm 1, and let Assumption 1 hold. Then, there exist positive constants $\beta_1$ and $\beta_2$ such that

$$\|B_k s_k\| \le \beta_1 \|s_k\| \tag{19}$$

and

$$s_k^T B_k s_k \ge \beta_2 \|s_k\|^2 \tag{20}$$

hold for at least $\lceil t/2 \rceil$ values of $k \in \{1, 2, \ldots, t\}$ with any positive integer $t$.

Proof. By the definition of $y_k^*$ in (8), if $\vartheta_k \le 0$, then $y_k^* = y_k$. In this case, Lemma 3 holds (see [15]).
If $\vartheta_k > 0$, then $y_k^* = y_k + \frac{\vartheta_k}{\|s_k\|^2} s_k$, and the argument is similar to the result of Yuan and Wei [18]. According to the convexity of the objective function $f$, we obtain

$$f(x_k) - f(x_{k+1}) \le -g_k^T s_k, \qquad f(x_k) - f(x_{k+1}) \ge -g_{k+1}^T s_k. \tag{21}$$

The above two inequalities and the definition of $\vartheta_k$ indicate that

$$\vartheta_k = 2[f(x_k) - f(x_{k+1})] + (g_{k+1} + g_k)^T s_k \le (g_{k+1} - g_k)^T s_k = y_k^T s_k. \tag{22}$$

Then, we obtain

$$s_k^T y_k^* = s_k^T y_k + \vartheta_k \le 2 s_k^T y_k. \tag{23}$$

Therefore, by the above analysis, it follows that

$$s_k^T y_k \le s_k^T y_k^* \le 2 s_k^T y_k. \tag{24}$$

By the definition of $y_k^*$ and (22), it follows that

$$\|y_k^*\| \le \|y_k\| + \frac{\vartheta_k}{\|s_k\|} \le \|y_k\| + \frac{y_k^T s_k}{\|s_k\|} \le 2 \|y_k\|. \tag{25}$$

Then, we have that

$$\frac{\|y_k^{*}\|^2}{s_k^T y_k^{*}} \le \frac{4 \|y_k\|^2}{s_k^T y_k} \le 4L, \tag{26}$$

where the last inequality uses $y_k^T s_k \ge \|y_k\|^2 / L$, which follows from the convexity of $f$ and (12). The proof of Theorem 2.1 of [2] then implies that Lemma 3 holds.
Based on the above conclusions, the global convergence is analysed in the following theorem.

Theorem 1. If the conditions of Lemma 3 hold, then we obtain

$$\liminf_{k \to \infty} \|g_k\| = 0. \tag{27}$$

Proof. By Lemma 2, we obtain

$$\lim_{k \to \infty} \frac{(g_k^T d_k)^2}{\|d_k\|^2} = 0. \tag{28}$$

Since $B_k d_k = -g_k$ and $s_k = \alpha_k d_k$, we have

$$-g_k^T d_k = \frac{s_k^T B_k s_k}{\alpha_k^2}, \qquad \|g_k\| = \frac{\|B_k s_k\|}{\alpha_k}. \tag{29}$$

Combining (19) with (20), we obtain, for the indices $k$ identified in Lemma 3,

$$\frac{(g_k^T d_k)^2}{\|d_k\|^2} = \left(\frac{s_k^T B_k s_k}{\|s_k\| \, \|B_k s_k\|}\right)^2 \|g_k\|^2 \ge \frac{\beta_2^2}{\beta_1^2} \|g_k\|^2. \tag{30}$$

Thus, by (28), (29), and (30), and since Lemma 3 supplies infinitely many such indices $k$, we obtain $\|g_k\| \to 0$ along this index set. Therefore, (27) holds. The proof is complete.

4. Numerical Results

In this section, we study the numerical performance of the MBFGS-MWWP algorithm established in Section 2. To verify the algorithm’s effectiveness, we divide the experiments into two parts: we first compare our algorithm with the standard BFGS method under the weak Wolfe–Powell line search technique (BFGS-WWP) on the 74 academic problems listed in Table 1, with the dimension varying from 300 to 2700, and we then apply our algorithm to the Muskingum engineering model.

4.1. Unconstrained Optimization Problems

In this subsection, we compare Algorithm 1 with the BFGS-WWP algorithm on the 74 academic problems listed in Table 1. The codes are written in MATLAB R2014a and run on a PC with an Intel(R) Core(TM) i5-4210U CPU @ 1.70 GHz, 8.00 GB of RAM, and the Windows 10 operating system, and the parameters $\delta$, $\delta_1$, $\sigma$, and $\epsilon$ are chosen to satisfy the requirements of Algorithm 1. The numerical results and comparison are shown in Tables 2–6. The columns in Tables 2–6 have the following meaning:
(i) No.: the index of the tested problem
(ii) Dim: the dimension of the tested problem
(iii) NI: the number of iterations consumed
(iv) NFG: the total number of both function and gradient evaluations
(v) CPU time: the time consumed by the corresponding algorithm, in seconds

For an intuitive comparison, we adopt the performance profile technique in [19] to show the performance of the different algorithms. Under the strategy in [19], the higher a curve lies in the figure, the better the corresponding numerical results are. Figures 1–3 show the NI, NFG, and CPU time performances, respectively, of the new algorithm and the standard BFGS method. From the results shown in the figures, the NI, NFG, and CPU time of the algorithm constructed in this paper are generally better than those of the standard BFGS algorithm. Therefore, our algorithm is interesting and reliable from this perspective.
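For readers who wish to reproduce such figures, here is a minimal sketch of a performance profile in the style of [19], assuming [19] is the Dolan–Moré profile; the solver costs below are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(T, labels, tau_max=8.0):
    """Dolan-More performance profile: T[p, s] is the cost of solver s on
    problem p (e.g., NI, NFG, or CPU time); np.inf marks a failure."""
    ratios = T / T.min(axis=1, keepdims=True)        # r_{p,s} = t_{p,s} / best
    taus = np.linspace(1.0, tau_max, 200)
    for s, label in enumerate(labels):
        rho = [(ratios[:, s] <= tau).mean() for tau in taus]
        plt.step(taus, rho, where="post", label=label)
    plt.xlabel("tau"); plt.ylabel("fraction of problems solved within tau")
    plt.legend(); plt.show()

# Hypothetical NI data for two solvers on five problems:
T = np.array([[12., 15.], [30., 28.], [22., 40.], [18., 18.], [55., 60.]])
performance_profile(T, ["MBFGS-MWWP", "BFGS-WWP"])
```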

4.2. Muskingum Model

In this section, the main work is to use Algorithm 1 to numerically estimate the parameters of the nonlinear Muskingum model [20]. The model is calibrated by minimizing, over the whole flood period, the squared deviations between the routed outflows and the observed outflows (the precise objective function is given in [20]), where $n$ is the total time, $I_i$ is the observed inflow discharge, $Q_i$ is the observed outflow discharge, $x_1$ denotes the storage time constant, $x_2$ denotes the weight coefficient, $x_3$ denotes an extra parameter, and $\Delta t$ is the time step at time $t_i$ ($i = 1, 2, \ldots, n$). The observation data in the experiment are from the process of flood runoff between Chenggouwan and Linqing of Nanyunhe in the Haihe Basin, Tianjin, China. The time step $\Delta t$, the initial point $x_0$, and the detailed values of $I_i$ and $Q_i$ for the years 1960, 1961, and 1964 are given in [21].
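As an illustration of how the calibration problem can be posed, the sketch below builds a least-squares objective for the nonlinear Muskingum model from the classic storage relation $S = x_1 [x_2 I + (1 - x_2) Q]^{x_3}$ and a simple explicit time-stepping of the mass balance; this is our own schematic reconstruction, not the exact objective of [20, 21], and the inflow/outflow arrays below are hypothetical.

```python
import numpy as np

def muskingum_objective(params, I, Q_obs, dt=1.0):
    """Least-squares calibration objective for the nonlinear Muskingum model.

    Storage relation (schematic): S = x1 * (x2 * I + (1 - x2) * Q)^x3.
    The routed outflow is stepped forward with dS/dt = I - Q and compared
    with the observed outflow Q_obs.
    """
    x1, x2, x3 = params
    n = len(I)
    Q = np.empty(n)
    Q[0] = Q_obs[0]                                  # start from the observed outflow
    S = x1 * (x2 * I[0] + (1 - x2) * Q[0]) ** x3     # initial storage
    for i in range(n - 1):
        S = S + dt * (I[i] - Q[i])                   # mass balance dS/dt = I - Q
        Q[i + 1] = ((S / x1) ** (1.0 / x3) - x2 * I[i + 1]) / (1 - x2)
    return np.sum((Q - Q_obs) ** 2)

# Hypothetical data; the real series for 1960/1961/1964 are given in [21].
I = np.array([22., 23., 35., 71., 103., 111., 109., 100., 86., 71.])
Q_obs = np.array([22., 21., 21., 26., 34., 44., 55., 66., 75., 82.])
print(muskingum_objective((11.0, 0.2, 1.3), I, Q_obs, dt=12.0))
```

Such an objective can be handed directly to the `mbfgs_mwwp` sketch from Section 2 (with a numerical or analytic gradient) to estimate $(x_1, x_2, x_3)$.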

Figures 4–6 and Table 7 imply the following conclusions: (1) similar to the BFGS method and the HIWO method, the MBFGS method, combined with the Muskingum model, yields interesting experimental results; (2) the final points $(x_1, x_2, x_3)$ of the MBFGS method are more competitive than those of the other similar methods; (3) because the final points of these three methods differ, the Muskingum model may have several approximate optimal points.

5. Conclusion

In this paper, we have studied an improved BFGS method that combines the update formula from [14] with the line search technique from [15], and we have mainly discussed its convergence on convex functions. The numerical results show that the proposed algorithm has a better problem-solving capability than the standard BFGS algorithm based on the WWP line search. As for further work, we have several points to consider: (i) whether the improved BFGS method also possesses convergence properties under other line search techniques; (ii) the combination of the line search technique (10) and (11) with other quasi-Newton methods is worth studying; (iii) similar applications of nonlinear conjugate gradient algorithms, especially the PRP method, are also worthy of attention.

Data Availability

The data used to support the findings of this study are available in tables in this paper and also can be obtained from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Basic Ability Promotion Project of Guangxi Young and Middle-Aged Teachers (No. 2020KY30018).