Mathematical Problems in Engineering

Mathematical Problems in Engineering / 2020 / Article
Special Issue

Machine Learning and its Applications in Image Restoration

View this Special Issue

Research Article | Open Access

Volume 2020 |Article ID 9280495 | https://doi.org/10.1155/2020/9280495

Pengyuan Li, Zhan Wang, Dan Luo, Hongtruong Pham, "Global Convergence of a Modified Two-Parameter Scaled BFGS Method with Yuan-Wei-Lu Line Search for Unconstrained Optimization", Mathematical Problems in Engineering, vol. 2020, Article ID 9280495, 15 pages, 2020. https://doi.org/10.1155/2020/9280495

Global Convergence of a Modified Two-Parameter Scaled BFGS Method with Yuan-Wei-Lu Line Search for Unconstrained Optimization

Guest Editor: Weijun Zhou
Received23 Jul 2020
Accepted05 Aug 2020
Published26 Aug 2020

Abstract

The BFGS method is one of the most efficient quasi-Newton methods for solving small- and medium-size unconstrained optimization problems. For the sake of exploring its more interesting properties, a modified two-parameter scaled BFGS method is stated in this paper. The intention of the modified scaled BFGS method is to improve the eigenvalues structure of the BFGS update. In this method, the first two terms and the last term of the standard BFGS update formula are scaled with two different positive parameters, and the new value of is given. Meanwhile, Yuan-Wei-Lu line search is also proposed. Under the mentioned line search, the modified two-parameter scaled BFGS method is globally convergent for nonconvex functions. The extensive numerical experiments show that this form of the scaled BFGS method outperforms the standard BFGS method or some similar scaled methods.

1. Introduction

Considerwhere , and is a continuously differentiable function bounded from below. The quasi-Newton methods are currently used in countless optimization software for solving unconstrained optimization problems [18]. The BFGS method, one of the most efficient quasi-Newton methods, for solving (1) is an iterative method of the following form:where , obtained by some line search rule, is a step size, and is the BFGS search direction computed by the following equation:where is the gradient of , and the matrix is the BFGS approximation to the Hessian , which has the following update formula:where and . The problems related to the BFGS method have been analyzed and studied by many scholars, and satisfactory conclusions have been drawn [916]. In earlier year, Powell [17] first proved the global convergence of the standard BFGS method with inexact Wolfe line search for convex functions. Under the exact line search or some specific inexact line search, the BFGS method has the convergence property for convex minimization problems [1821]. By contrast, for nonconvex problems, Mascaren [22] has presented an example to elaborate that the BFGS method and some Broyden-type methods may not be convergent under the exact line search. As such, with the Wolfe line searches, Dai [23] also proved that the BFGS method may fail to converge. To verify the global convergence of the BFGS method for general functions and to obtain a better Hessian approximation matrix of the objective function, Yuan and Wei [24] presented a modified quasi-Newton equation as follows:where

In practice, the standard BFGS method has many qualities worth exploring and can effectively solve a class of unconstrained optimization problems.

Here, two excellent properties of the BFGS method are introduced. One is the self-correcting quality, scilicet; if the current Hessian approximate inverse matrix estimates the curvature of the function incorrectly, then Hessian approximation matrix will correct itself within a few steps. The other interesting property is that small eigenvalues are better corrected than large ones [25]. Hence, one can see that, the efficiency of the BFGS algorithm is subject to the eigenvalues structure of the Hessian approximation matrix intensely. To improve the performances of the BFGS method, Oren and Luenberger [26] scaled the Hessian approximation matrix , that is, they replaced by , where is a self-scaling factor. Nocedal and Yuan [27] further studied the self-scaling BFGS method when . Based on the value of this , Al-Baali [28] introduced a simple modification: . The numerical experiments showed that the modified self-scaling BFGS method outperforms the unscaled BFGS method. Many other scaled BFGS methods with better properties will be enumerated.

Formula 1. The general one-parameter scaled BFGS updating formula iswhere is a positive parameter, and it is diverse for the selection of the scaled factor , which is listed as follows.Choice A:where the value of is given by Yuan [29], and with inexact line search, the global convergence of the scaled BFGS method with given by (9) is established for convex functions by Powell [30]. Ulteriorly, for general nonlinear functions, Yuan limited the value range of to [0.01, 100] to ensure the positivity of under the inexact line search and proved the global convergence of the scaled BFGS method in this form.Choice B:which is obtained as a solution of the problem: . The scaled BFGS method based on this value of was introduced by Barzilai and Borwein [31] and was deemed the spectral scaled BFGS method. Cheng and Li [32] proved that the spectral scaled BFGS method is globally convergent under Wolfe line search with assuming the convexity of the minimizing function.Choice C:where for . Under the Wolfe line search (20) and (21), holds for , which implies that computed by (11) is bounded away from zero, that is to say, . Therefore, in this instance, the large eigenvalues of given by (8) are shifted to the left [33].

Formula 2. Proposed by Oren and Luenberger [26], this scaled BFGS method was the single parameter scaled of the first two items of the BFGS update and was defined aswhere is a positive parameter and is calculated as follows:

The parameter assigned by (13) can make the structure of eigenvalue to inverse Hessian approximation more easily analyzed. Consequently, it is regarded as one of the best factors.

Formula 3. In this method, the scaled parameters are selected to cluster the eigenvalues of the iteration matrix and shift the large eigenvalues to the left. The update formula of the Hessian approximate matrix is computed aswhere both and are positive parameters, and Andrei [34] preset them as the following values:

If the scaled parameters are bounded and line search is inexact, then this scaled BFGS algorithm is globally convergent for general functions. A large number of numerical experiments show that the double parameter scaled BFGS method with and given by (15) and (16) is more competitive than the standard BFGS method. In this paper, combining (7) and (14), we propose a new update formula of listed as follows:where is determined by formula (6),

Some interesting properties of the BFGS-type method are inseparable from the weak Wolfe–Powell (WWP) line search:where . There are many research studies based on this line search [3543]. To further develop the inexact line search, Yuan et al. present a new line search and call it Yuan-Wei-Lu (YWL) line search, which has the following form:where , , and . The main work of this paper is to verify the global convergence of the modified scaled BFGS update (17) with and given by (18) and (19), respectively, under this line search. Abundant numerical results show that such a combination is appropriate for nonconvex functions.

Our paper is organized as follows. The motivation and algorithm are introduced in the next section. In Section 3, the convergence analysis of the modified two-parameter scaled BFGS method under Yuan-Wei-Lu line search is established. Section 4 is devoted to show the results of numerical experiments. Some conclusions are stated in the last section.

2. Motivation and Algorithm

Two crucial tools for analyzing properties of the BFGS method are the trace and the determinant of the given by (4). Thus, the corresponding relations are enumerated as follows:

Applying the following existing relation in the study of Sun and Yuan [44],where , , , and ; we obtain

Obviously, the efficiency of the BFGS method depends on the eigenvalues structure of the Hessian approximation matrix, and the BFGS method is actually more affected by large eigenvalues than by small eigenvalues [25, 45, 46]. It can be seen that the second item on the right side of the formula (25) is negative. Therefore, it produces a shift of the eigenvalues of to the left. Thus, the BFGS method can modify large eigenvalues. Moreover, the third term on the right hand side of (25) being positive produces a shift of the eigenvalues of to the right. If this term is large, may have large eigenvalues too. Therefore, the eigenvalues of the can be corrected by scaling the corresponding items in (25), which is the main motivation for us to use the scaling BFGS method. In this paper, we scale the first two terms and the last term of the standard BFGS update formula with two different positive parameters and propose a new . In subsequent proof, we will propose some lemmas based on these two important tools to analyze the convergence of the modified scaled BFGS method. Then, an algorithm framework for solving the problem (1) will be built in Algorithm 1, which can be designed as

Step 1: given an initial point , an symmetric positive definite matrix , is sufficiently small and choose constants , , and . Set .
Step 2: if , stop.
Step 3: obtain a search direction by solving
Step 4: compute by Yuan-Wei-Lu line search conditions (22) and (23).
Step 5: find the scaling factors and by (18) and (19).
Step 6: let . Update by (17).
Step 7: let , and go to Step 2.

3. Convergence Analysis

In Section 3, the global convergence of Algorithm 1 will be established, and the following assumptions are useful in convergence analysis.

Assumption 1. (i)The level set is bounded(ii)The function is twice continuously differentiable and bounded from below

Lemma 1. If is the positive definite, , and if is computed by (22) and (23), then given by (17) is an equally positive definite for all .

Proof. The inequality (22) and (23) indicates that . Using the definition of , we obtainFor any ,where the penultimate inequality follows, andwhich is obtained by the Cauchy–Schwarz inequality.

Lemma 2. Let be generated by (16) for , then and inclines to 1.

Proof. Observe the formula (19); after substituting , we can find that is close to 1. Owing to the symmetry, positive definiteness, and nonsingularity of , its eigenvalues is real and positive, and . Hence, for , and . Since , , and for sufficiently large , , and are roughly of the same order of magnitude, which shows that . To sum up, the relations and are valid, namely for , and inclines to 1. The proof is completed.

Remark 1. Based on the conclusion of lemma, we can infer that for any integer , there exist two positive constants satisfying .

Lemma 3. If is updated by (14), where and are determined by (18) and (16), then

Proof. Considering (25), we haveIn addition,Therefore, by Remark 1 and the above inequality, the formula (33) is transformed intowhich implies (31). From the positive definiteness of , (32) also holds. The proof is completed.

Lemma 4. Consider and for all , where and are constants. Then, there exists a positive constant such thatfor all sufficiently large.

Proof. Utilizing the identity (26) and taking the determinant on both sides of the formula (14) with and computed as in (18) and (16), we havewhere the penultimate inequality follows , , , and for all . Furthermore, by and Lemma 4, we obtainTherefore,Suppose is sufficiently large, (39) implies (36). The proof is completed.

Theorem 1. If the sequence is obtained by Algorithm 1, then

Proof. The proof by contradiction is used to prove (40) holds. Suppose that . By Yuan-Wei-Lu line search (22) and bounded below, we obtainAdding the abovementioned inequalities from to and utilizing Assumption 1 (ii), we haveFrom Assumption 1 (ii) and (42), we haveBased on this, given a constant , there is a positive integer satisfyingwhere is any positive integer, and the first inequality follows the geometric inequality. Moreover, by Lemma 4, we obtainConsidering , the above formula and formula (39) are contradictory. Thus, (40) is valid. The proof is completed.

4. Numerical Results

In this section, numerical results of Algorithm 1 are reported, and the following methods were compared: (i) MTPSBFGS method ( is updated by (17) with and given by (18) and (19)). (ii) SBFGS method ( is updated by (14) with and given by (11) and (16)).

4.1. General Unconstrained Optimisation Problems

Tested problems: a total of 74 test questions, listed in Table 1 and derived from the studies by Bongartz et al. and More et al. [47, 48].Parameters: Algorithm 1 runs with , , , , and .Dimensionality: the algorithm is tested in the following three dimensions: 300, 900, and 2700.Himmelblau stop rule [49]: if , then set or . The iterations are stopped if or holds, where and .Experiment environment: all programs are written in MATLAB R2014a and run on a PC with an Inter(R) Core(TM) i5-4210U CPU at 1.70 GHz, 8.00 GB of RAM, and the Windows 10 operating system.Symbol representation: No.: the test problem number. CPU time: the CPU time in seconds. NI: the number of iterations.  NFG: the total number of function and gradient evaluations.Image description: Figures 13 show the profiles for CPU time, NI, and NFG, and Tables 26 provide the detail numerical results. From these figures and tables, it is obvious that the MTPSBFGS method possesses better numerical performance between these two methods, that is, the proposed modified scaled BFGS method is reasonable and feasible. The specific reasons for good performance are stated as follows. The parameter scaling the first two terms of the standard BFGS update is determined to cluster the eigenvalues of this matrix, and the parameter scaling the third term is determined to reduce its large eigenvalues, thus obtaining a better distribution of them.


No.Test problem

1Extended Freudenstein and Roth function
2Extended trigonometric function
3Extended Rosenbrock function
4Extended White and Holst function
5Extended Beale function
6Extended penalty function
7Perturbed quadratic function
8Raydan 1 function
9Raydan 2 function
10Diagonal 1 function
11Diagonal 2 function
12Diagonal 3 function
13Hager function
14Generalized tridiagonal 1 function
15Extended tridiagonal 1 function
16Extended three exponential terms function
17Generalized tridiagonal 2 function
18Diagonal 4 function
19Diagonal 5 function
20Extended Himmelblau function
21Generalized PSC1 function
22Extended PSC1 function
23Extended Powell function
24Extended block diagonal BD1 function
25Extended Maratos function
26Extended Cliff function
27Quadratic diagonal perturbed function
28Extended Wood function
29Extended Hiebert function
30Quadratic function QF1 function
31Extended quadratic penalty QP1 function
32Extended quadratic penalty QP2 function
33A quadratic function QF2 function
34Extended EP1 function
35Extended tridiagonal 2 function
36BDQRTIC function (CUTE)
37TRIDIA function (CUTE)
38ARWHEAD function (CUTE)
39NONDIA function (CUTE)
40NONDQUAR function (CUTE)
41DQDRTIC function (CUTE)
42EG2 function (CUTE)
43DIXMAANA function (CUTE)
44DIXMAANB function (CUTE)
45DIXMAANC function (CUTE)
46DIXMAANE function (CUTE)
47Partial perturbed quadratic function
48Broyden tridiagonal function
49Almost perturbed quadratic function
50Tridiagonal perturbed quadratic function
51EDENSCH function (CUTE)
52VARDIM function (CUTE)
53STAIRCASE S1 function
54LIARWHD function (CUTE)
55DIAGONAL 6 function
56DIXON3DQ function (CUTE)
57DIXMAANF function (CUTE)
58DIXMAANG function (CUTE)
59DIXMAANH function (CUTE)
60DIXMAANI function (CUTE)
61DIXMAANJ function (CUTE)
62DIXMAANK function (CUTE)
63DIXMAANL function (CUTE)
64DIXMAAND function (CUTE)
65ENGVAL1 function (CUTE)
66FLETCHCR function (CUTE)
67COSINE function (CUTE)
68Extended DENSCHNB function (CUTE)
69Extended DENSCHNF function (CUTE)
70SINQUAD function (CUTE)
71BIGGSB1 function (CUTE)
72Partial perturbed quadratic PPQ2 function
73Scaled quadratic SQ1 function
74Scaled quadratic SQ2 function


MTPSBFGS-YWLSBFGS-WWP
No.DimNINFGCPU timeNINFGCPU time

130029630.312525570.3125
190024565.85937523545.96875
12700224886.93752967135.6875
2300501140.59375461020.5
29005111412.8906255011213.515625
2270053120217.9062553122254.75
3300501370.515625471200.5
39007521517.593756316716.75
3270043135184.562560166280.484375
43001043701.25621760.75
49008725822.453125381099.1875
4270067179308.312550149236.984375
530018480.2187522580.296875
590018454.562520465.171875
52700204590.252358106.984375
6300681520.828125681520.796875
69006915818.3756915818.40625
6270085192405.8437585192410.765625
7300761540.8125761540.875
790013326836.687513326836.84375
727002324661158.7343752314641151.625
830022490.20312526540.265625
890025556.51562525556.421875
827002555118.6406252555116.859375
93007160.062512260.109375
99007161.54687512262.84375
9270081831.96875122649.046875
10300290290
10900290.0625290.0625
102700290.25290.25
11300751940.92187546940.59375
119009527226.3593756613418.484375
11270062019.87597196472.390625
1230011240.12511240.109375
1290013283.45312513283.296875
122700132860.34375132858.40625
1330011250.12510230.125
139008231.92187510262.46875
132700199488.65625199688.71875
143008200.1258200.15625
149007181.7031257181.703125
14270071827.687571827.421875
1530019430.35937522490.40625
1590025577.125398211.078125
1527002555119.406254186198.203125
163009210.10937510230.0625
169008181.9218759212.15625
162700112449.359375102243.046875
1730033730.42187532750.390625
1790023495.98437523495.8125
1727002553115.8906252553112.984375


MTPSBFGS-YWLSBFGS-WWP
No.DimNINFGCPU timeNINFGCPU time

1830031003100
189003100.531253100.5
1827003109.093753108.75
1930031003100.0625
199003100.531253100.484375
1927003108.5781253108.421875
2030033710.3437532690.359375
2090012312.23437512312.171875
202700123544.5468754393191.3125
2130025560.2812534740.390625
2190025566.70312535769.265625
2127002658124.4843753678170.34375
223008300.1093758310.125
229008311.906258312
22270083133.51562583133.359375
2330034850.359375521090.5625
23900398910.656255111713.6875
23270047111233.6562547112223.71875
24300331620.296875291450.234375
24900141111.03125371716.890625
2427001411115.343751511424.234375
25300902621.06251273481.5
2590012334633.7656258825723.703125
25270056139270.70312597284463.609375
26300561340.640625561340.609375
269006515217.3593756515217.296875
26270061146292.54687561146287.25
273006160.06256160
2790011312.7511312.65625
272700164575.3125174479.671875
2830028630.312527610.296875
2890028637.3437527626.921875
2827002561119.52561117.265625
293004170.06254150.046875
299004170.68754170.828125
2927004179.4843754179.53125
30300931881.203125931881.046875
3090016433045.9687516633446.890625
3027002955921476.906252945901474.96875
3130023520.2187523520.21875
3190027626.82812527626.84375
3127002864126.7343752864126.421875
3230022460.2812522460.25
3290020445.17187520445.03125
3227004798219.5156254798218.78125
333005110.06255110.0625
33900490.625490.59375
332700377.421875377.34375
34300370.0625370
34900370.53125370.5
3427004813.2031254813.234375


MTPSBFGS-YWLSBFGS-WWP
No.DimNINFGCPU timeNINFGCPU time

35300480480
359007141.593757141.59375
352700112246.875112246.40625
3630024590.35937526640.46875
3690023666.35937520585.59375
362700184982.359375194685.5625
373001362751.656251362751.6875
3790023547367.4062523547367.359375
3727004418862240.0156254428882234.09375
3830012270.12512270.0625
3890012262.812512262.921875
382700153364.90625153364.65625
3930037800.4062537800.46875
39900438910.890625439111.265625
3927002652116.7031252652116.078125
4030053213297.40625958192512.96875
409005461364150.92187510002008274.5625
40270064416053124.65625100020144810.296875
4130018410.187520450.1875
4190019434.82812519434.734375
412700194385.0625194384.875
4230019650.1562516570.078125
429004210.0468754210.046875
4227004210.5781254210.515625
4330022480.26562525540.234375
4390023505.8437527586.984375
4327002554114.3281252962133.78125
4430038800.45312539820.546875
4490036769.703125439011.5
4427003982180.4531254696212.125
4530017400.17187518420.28125
4590017404.26562518424.65625
452700184280.171875194485.09375
463001042551.5625601260.875
4690014135441.2031258718024.796875
462700164418829.3125116238586.296875
4730037800.73437536780.71875
47900449514.640625449514.78125
472700133164.21875133163.9375
4830026520.29687526520.3125
48900489612.890625489612.890625
482700224799.546875224799.96875
49300761540.953125761540.859375
4990013326837.20312513427037.59375
4927002324661161.1093752344701176.921875
50300761541.234375751521.328125
5090013226638.85937513226638.53125
5027002314641165.18752324661170
5130023480.29687523480.3125
5190023485.89062523485.859375
5127002348102.4843752348103.453125


MTPSBFGS-YWLSBFGS-WWP
No.DimNINFGCPU timeNINFGCPU time

52300872000.875872001.0625
5290010323627.687510323627.875
522700118270566.65625118270567.828125
53300938237512.093753887784.671875
5390010002537282.37510002002280.203125
532700100025395006.875100020025001.984375
5430031720.32812535880.390625
549004210710.812524656.09375
5427002367102.752769121.75
553009200.12518380.265625
5590010222.23437520424.921875
552700112446.3125214492.390625
563001000260713.031253757504.5625
5690010002540268.312510002000277.515625
562700100025394518.96875100020004637.546875
573001072601.578125591240.796875
579009923628.1406258517624.15625
57270063149313.140625105216522.734375
58300972331.3125691440.890625
5890012029233.843759920427.828125
582700144344717.328125117240583.078125
5930024730.25761690.890625
5990027727.29687527717.296875
5927003178148.62579173389.4375
603001092601.484375601260.796875
6090013233837.4843758718024.359375
602700176434882.109375116238581.953125
613001042461.53125591240.796875
619009622526.6093758517624.1875
61270066154324.140625105216521.9375
623001042591.390625611460.8125
629008923225.2656259922027.625
622700126314623.578125105226521.1875
633001433622.0468751473082.046875
639009725927.5937517336549.28125
632700186465933.51994221005.09375
6430040880.5312545980.578125
6490028627.07812532708.296875
6427003066135.656253372150.015625
6530022480.35937522480.4375
6590019455.04687519455.1875
652700184079.125184080.765625
66300611123311.171875618123811.6875
6690010002003284.812510002002286.65625
662700100020034583.015625100020024587.296875
6730062106210
6790012330.2512330.21875
67270010291.20312513591.421875
6830024500.23437529600.328125
6890025526.45312531648.375
6827002756126.43753368156.328125


MTPSBFGS-YWLSBFGS-WWP
No.DimNINFGCPU timeNINFGCPU time

6930026560.26562525540.21875
6990028607.54687525546.734375
6927002962136.4218752554119.25
7030031840.530900.4375
70900401031132888.46875
7027003391146.76562553122245.71875
713003428944.1718751903812.203125
7190010002572617.343755341069145.265625
712700100025917501.1875100020014617.703125
723001243132.531251092842.265625
7290028376294.0625311781107.890625
72270084321284629.2587121634782.0625
73300951921.0625951921.125
7390016833846.98437516934046.765625
7327002965941477.0468752925861462.71875
7430037760.37536740.34375
749005010213.406255010213.484375
74270081164396.39062581164399.5625

4.2. Muskingum Model in Engineering Problems

In this subsection, we present the Muskingum model, and it has the following form:

Muskingum model [50]:whose symbolic representation is as follows: is the storage time constant, is the weight coefficient, is an extra parameter, is the observed inflow discharge, is the observed outflow discharge, is the total time, and is the time step at time .

The observed data of the experiment are obtained from the process of flood runoff from Chenggouwan and Linqing of Nanyunhe in the Haihe Basin, Tianjin, China. Select the initial point and the time step . The concrete values of and for the years 1960, 1961, and 1964 are listed in [51]. The test results are presented in Table 7.


Algorithm

BFGS [52]10.81560.98261.0219
HIWO [50]13.28130.80010.9933
MTPSBFGS11.18491.00000.9996

Figures 46 and Table 7 imply the following three conclusions: (i) based on the Muskingum model, the efficiency of the MTPSBFGS method is wonderful, and numerical performance of these three algorithms is fantastic. (ii) Compared to other similar methods, the final points (, , and ) of the MTPSBFGS method are competitive. (iii) Due to the endpoints of these three methods being different, the Muskingum model may have more approximation optimum points.

5. Conclusion

A modified two parameter scaled BFGS method and the Yuan-Wei-Lu line search technology are introduced in this paper. By scaling the first two terms and the third term of the standard BFGS method with different positive parameters, a new two parameter scaled BFGS method is proposed. In this method, the new value of is given to guarantee better properties of the new scaled BFGS method. With Yuan-Wei-Lu line search, the proposed BFGS method is globally convergent. Numerical results indicate that the modified two parameter scaled BFGS method outperforms the standard BFGS method and even the same type of the BFGS method. As for the longer-term work, there are several points to consider: (1) are there some new values of , , and that make the BFGS method based on the update formula (17) perform better? (2) Whether the new scaled method combined with other line search have also great theoretical results. (3) Some new engineering problems based on the BFGS-type method are worth studying.

Data Availability

The data used to support this study are included within this article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant no. 11661009), the High Level Innovation Teams and Excellent Scholars Program in Guangxi Institutions of Higher Education (Grant no. (2019)52), the Guangxi Natural Science Key Fund (Grant no. 2017GXNSFDA198046), and the Guangxi Natural Science Foundation (Grant no. 2020GXNSFAA159069).

References

  1. R. H. Byrd, S. L. Hansen, J. Nocedal, and Y. Singer, “A stochastic quasi-Newton method for large-scale optimization,” SIAM Journal on Optimization, vol. 26, no. 2, pp. 1008–1031, 2016. View at: Publisher Site | Google Scholar
  2. A. S. Lewis and M. L. Overton, “Nonsmooth optimization via quasi-Newton methods,” Mathematical Programming, vol. 141, no. 1-2, pp. 135–163, 2013. View at: Publisher Site | Google Scholar
  3. M. S. Salim and A. I. Ahmed, “A family of Quasi-Newton methods for unconstrained optimization problems,” Optimization, vol. 67, no. 10, pp. 1717–1727, 2018. View at: Publisher Site | Google Scholar
  4. Z. Wei, G. Li, and L. Qi, “New quasi-Newton methods for unconstrained optimization problems,” Applied Mathematics and Computation, vol. 175, no. 2, pp. 1156–1188, 2006. View at: Publisher Site | Google Scholar
  5. Z. Wei, G. Yu, G. Yuan, and Z. Lian, “The superlinear convergence of a modified BFGS-type method for unconstrained optimization,” Computational Optimization and Applications, vol. 29, no. 3, pp. 315–332, 2004. View at: Publisher Site | Google Scholar
  6. G. Yuan, Z. Sheng, B. Wang, W. Hu, and C. Li, “The global convergence of a modified BFGS method for nonconvex functions,” Journal of Computational and Applied Mathematics, vol. 327, pp. 274–294, 2018. View at: Publisher Site | Google Scholar
  7. W. Zhou and L. Zhang, “Global convergence of the nonmonotone MBFGS method for nonconvex unconstrained minimization,” Journal of Computational and Applied Mathematics, vol. 223, no. 1, pp. 40–47, 2009. View at: Publisher Site | Google Scholar
  8. W. Zhou and X. Chen, “Global convergence of a new hybrid Gauss-Newton structured BFGS method for nonlinear least squares problems,” SIAM Journal on Optimization, vol. 20, no. 5, pp. 2422–2441, 2010. View at: Publisher Site | Google Scholar
  9. D.-H. Li and M. Fukushima, “A modified BFGS method and its global convergence in nonconvex minimization,” Journal of Computational and Applied Mathematics, vol. 129, no. 1-2, pp. 15–35, 2001. View at: Publisher Site | Google Scholar
  10. D.-H. Li and M. Fukushima, “On the global convergence of the BFGS method for nonconvex unconstrained optimization problems,” SIAM Journal on Optimization, vol. 11, no. 4, pp. 1054–1064, 2001. View at: Publisher Site | Google Scholar
  11. L. Liu, Z. Wei, and X. Wu, “The convergence of a new modified BFGS method without line searches for unconstrained optimization or complexity systems,” Journal of Systems Science and Complexity, vol. 23, no. 4, pp. 861–872, 2010. View at: Publisher Site | Google Scholar
  12. Y. Xiao, Z. Wei, and Z. Wang, “A limited memory BFGS-type method for large-scale unconstrained optimization,” Computers & Mathematics with Applications, vol. 56, no. 4, pp. 1001–1009, 2008. View at: Publisher Site | Google Scholar
  13. C. Zhu, R. H. Byrd, P. Lu, and J. Nocedal, “Algorithm 778: l-BFGS-B: fortran subroutines for large-scale bound-constrained optimization,” ACM Transactions on Mathematical Software, vol. 23, no. 4, pp. 550–560, 1997. View at: Publisher Site | Google Scholar
  14. W. Zhou, “A modified BFGS type quasi-Newton method with line search for symmetric nonlinear equations problems,” Journal of Computational and Applied Mathematics, vol. 367, Article ID 112454, 2020. View at: Publisher Site | Google Scholar
  15. L. Zhang and H. Tang, “A hybrid MBFGS and CBFGS method for nonconvex minimization with a global complexity bound,” Pacific Journal of Optimization, vol. 14, no. 4, pp. 693–702, 2018. View at: Google Scholar
  16. W. Zhou and L. Zhang, “A modified Broyden-like quasi-Newton method for nonlinear equations,” Journal of Computational and Applied Mathematics, vol. 372, Article ID 112744, 2020. View at: Publisher Site | Google Scholar
  17. M. J. D. Powell, “Some global convergence properties of a variable metric algorithm for minimization without exact line searches,” SIAM-AMS Proceedings, vol. 9, pp. 53–72, 1976. View at: Google Scholar
  18. R. H. Byrd, J. Nocedal, and Y.-X. Yuan, “Global convergence of a cass of quasi-Newton methods on convex problems,” SIAM Journal on Numerical Analysis, vol. 24, no. 5, pp. 1171–1190, 1987. View at: Publisher Site | Google Scholar
  19. L. C. W. Dixon, “Variable metric algorithms: necessary and sufficient conditions for identical behavior of nonquadratic functions,” Journal of Optimization Theory and Applications, vol. 10, no. 1, pp. 34–40, 1972. View at: Publisher Site | Google Scholar
  20. A. Griewank, “The global convergence of partitioned BFGS on problems with convex decompositions and Lipschitzian gradients,” Mathematical Programming, vol. 50, no. 1–3, pp. 141–175, 1991. View at: Publisher Site | Google Scholar
  21. M. J. D. Powell, “On the convergence of the variable metric algorithm,” IMA Journal of Applied Mathematics, vol. 7, no. 1, pp. 21–36, 1971. View at: Publisher Site | Google Scholar
  22. W. F. Mascarenhas, “The BFGS method with exact line searches fails for non-convex objective functions,” Mathematical Programming, vol. 99, no. 1, pp. 49–61, 2004. View at: Publisher Site | Google Scholar
  23. Y.-H. Dai, “Convergence properties of the BFGS algorithm,” SIAM Journal on Optimization, vol. 13, no. 3, pp. 693–701, 2006. View at: Publisher Site | Google Scholar
  24. G. Yuan and Z. Wei, “Convergence analysis of a modified BFGS method on convex minimizations,” Computational Optimization and Applications, vol. 47, no. 2, pp. 237–255, 2010. View at: Publisher Site | Google Scholar
  25. J. Nocedal, “Theory of algorithms for unconstrained optimization,” Acta Numerica, vol. 1, pp. 199–242, 1992. View at: Publisher Site | Google Scholar
  26. S. S. Oren and D. G. Luenberger, “Self-scaling variable metric (SSVM) algorithms, part I: criteria and sufficient conditions for scaling a class of algorithms,” Management Science, vol. 20, no. 5, pp. 845–862, 1974. View at: Publisher Site | Google Scholar
  27. J. Nocedal and Y.-X. Yuan, “Analysis of self-scaling quasi-Newton method,” Mathematical Programming, vol. 61, no. 1–3, pp. 19–37, 1993. View at: Publisher Site | Google Scholar
  28. M. Al-Baali, “Analysis of a family of self-scaling quasi-Newton methods,” Tech. Rep., Department of Mathematics and Computer Science, United Arab Emirates University, Al Ain, UAE, 1993, Technical report. View at: Google Scholar
  29. Y.-X. Yuan, “A modified BFGS algorithm for unconstrained optimization,” IMA Journal of Numerical Analysis, vol. 11, no. 3, pp. 325–332, 1991. View at: Publisher Site | Google Scholar
  30. M. J. D. Powell, “How bad are the BFGS and DFP methods when the objective function is quadratic?” Mathematical Programming, vol. 34, no. 1, pp. 34–47, 1986. View at: Publisher Site | Google Scholar
  31. J. Barzilai and J. M. Borwein, “Two-point step size gradient methods,” IMA Journal of Numerical Analysis, vol. 8, no. 1, pp. 141–148, 1988. View at: Publisher Site | Google Scholar
  32. W. Y. Cheng and D. H. Li, “Spectral scaling BFGS method,” Journal of Optimization Theory and Applications, vol. 146, no. 2, pp. 305–319, 2010. View at: Publisher Site | Google Scholar
  33. N. Andrei, “An adaptive scaled BFGS method for unconstrained optimization,” Numerical Algorithms, vol. 77, no. 2, pp. 413–432, 2017. View at: Publisher Site | Google Scholar
  34. N. Andrei, “A double parameter scaled BFGS method for unconstrained optimization,” Journal of Computational and Applied Mathematics, vol. 332, pp. 26–44, 2018. View at: Publisher Site | Google Scholar
  35. Y.-H. Dai and C.-X. Kou, “A nonlinear conjugate gradient algorithm with an optimal property and an improved wolfe line search,” SIAM Journal on Optimization, vol. 23, no. 1, pp. 296–320, 2013. View at: Publisher Site | Google Scholar
  36. Z. Dai, X. Dong, J. Kang, and L. Hong, “Forecasting stock market returns: new technical indicators and two-step economic constraint method,” The North American Journal of Economics and Finance, vol. 53, Article ID 101216, 2020. View at: Publisher Site | Google Scholar
  37. Z. Dai and H. Zhu, “A modified Hestenes-Stiefel-type derivative-free method for large-scale nonlinear monotone equations,” Mathematics, vol. 8, no. 2, p. 168, 2020. View at: Publisher Site | Google Scholar
  38. G. Yuan, X. Wang, and Z. Sheng, “Family weak conjugate gradient algorithms and their convergence analysis for nonconvex functions,” Numerical Algorithms, vol. 84, no. 3, pp. 935–956, 2020. View at: Publisher Site | Google Scholar
  39. G. Yuan, J. Lu, and Z. Wang, “The PRP conjugate gradient algorithm with a modified WWP line search and its application in the image restoration problems,” Applied Numerical Mathematics, vol. 152, pp. 1–11, 2020. View at: Publisher Site | Google Scholar
  40. G. Yuan, T. Li, and W. Hu, “A conjugate gradient algorithm for large-scale nonlinear equations and image restoration problems,” Applied Numerical Mathematics, vol. 147, pp. 129–141, 2020. View at: Publisher Site | Google Scholar
  41. G. Yuan, Z. Wei, and Y. Yang, “The global convergence of the Polak-Ribière-Polyak conjugate gradient algorithm under inexact line search for nonconvex functions,” Journal of Computational and Applied Mathematics, vol. 362, pp. 262–275, 2019. View at: Publisher Site | Google Scholar
  42. L. Zhang, “A derivative-free conjugate residual method using secant condition for general large-scale nonlinear equations,” Numerical Algorithms, vol. 83, no. 4, pp. 1277–1293, 2020. View at: Publisher Site | Google Scholar
  43. W. Zhou, “A short note on the global convergence of the unmodified PRP method,” Optimization Letters, vol. 7, no. 6, pp. 1367–1372, 2013. View at: Publisher Site | Google Scholar
  44. W. Sun and Y. Yuan, Optimization Theory and Methods, Springer US, New York, NY, USA, 2006.
  45. R. H. Byrd, D. C. Liu, and J. Nocedal, “On the behavior of broyden’s class of quasi-Newton methods,” SIAM Journal on Optimization, vol. 2, no. 4, pp. 533–557, 1992. View at: Publisher Site | Google Scholar
  46. M. J. D. Powell, “Updating conjugate directions by the BFGS formula,” Mathematical Programming, vol. 38, no. 1, pp. 29–46, 1987. View at: Publisher Site | Google Scholar
  47. I. Bongartz, A. R. Conn, N. Gould, and P. L. Toint, “CUTE: constrained and unconstrained testing environment,” ACM Transactions on Mathematical Software, vol. 21, no. 1, pp. 123–160, 1995. View at: Publisher Site | Google Scholar
  48. J. J. Moré, B. S. Garbow, and K. E. Hillstrom, “Testing unconstrained optimization software,” ACM Transactions on Mathematical Software (TOMS), vol. 7, no. 1, pp. 17–41, 1981. View at: Publisher Site | Google Scholar
  49. Y. Yuan and W. Sun, Theory and Methods of Optimization, Science Press of China, Beijing, China, 1999.
  50. A. Ouyang, L.-B. Liu, Z. Sheng, and F. Wu, “A class of parameter estimation methods for nonlinear Muskingum model using hybrid invasive weed optimization algorithm,” Mathematical Problems in Engineering, vol. 2015, Article ID 573894, 15 pages, 2015. View at: Publisher Site | Google Scholar
  51. A. Ouyang, Z. Tang, K. Li, A. Sallam, and E. Sha, “Estimating parameters of Muskingum model using an adaptive hybrid PSO algorithm,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 28, pp. 1–29, 2014. View at: Publisher Site | Google Scholar
  52. Z. W. Geem, “Parameter estimation for the nonlinear Muskingum model using the BFGS technique,” Journal of Irrigation and Drainage Engineering, vol. 132, no. 5, pp. 474–478, 2006. View at: Publisher Site | Google Scholar

Copyright © 2020 Pengyuan Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


More related articles

 PDF Download Citation Citation
 Download other formatsMore
 Order printed copiesOrder
Views280
Downloads247
Citations

Related articles

Article of the Year Award: Outstanding research contributions of 2020, as selected by our Chief Editors. Read the winning articles.