Mathematical Problems in Engineering

Mathematical Problems in Engineering / 2020 / Article
Special Issue

Machine Learning and its Applications in Image Restoration

View this Special Issue

Research Article | Open Access

Volume 2020 |Article ID 4519274 | https://doi.org/10.1155/2020/4519274

Tianshan Yang, Pengyuan Li, Xiaoliang Wang, "Convergence Analysis of an Improved BFGS Method and Its Application in the Muskingum Model", Mathematical Problems in Engineering, vol. 2020, Article ID 4519274, 9 pages, 2020. https://doi.org/10.1155/2020/4519274

Convergence Analysis of an Improved BFGS Method and Its Application in the Muskingum Model

Guest Editor: Weijun Zhou
Received28 Jun 2020
Accepted28 Jul 2020
Published18 Aug 2020

Abstract

The BFGS method is one of the most effective quasi-Newton algorithms for minimization-optimization problems. In this paper, an improved BFGS method with a modified weak Wolfe–Powell line search technique is used to solve convex minimization problems and its convergence analysis is established. Seventy-four academic test problems and the Muskingum model are implemented in the numerical experiment. The numerical results show that our algorithm is comparable to the usual BFGS algorithm in terms of the number of iterations and the time consumed, which indicates our algorithm is effective and reliable.

1. Introduction

With the development of the economy and society, a large number of optimization problems have been emerged in the fields of economic management, aerospace, transportation, national defense and so on. It is very necessary and meaningful for us to discuss, analyse the problems, and find some effective methods to solve them. Let us consider the optimization model:where , . To solve (1), the following iterative formula is widely used. Given the starting point , the iterative scheme iswhere is the current iteration point, is the next iteration point, is the step length, and is the search direction that is obtained by solving the quasi-Newton equation:where is the gradient of at the point , is the quasi-Newton updating matrix or its approximation, and the sequence satisfies the standard secant equation . The updating matrix of can be defined bywhere , , and is symmetric and positive definite.

Formula (4) is the famous standard BFGS update formula, which is one of the most effective quasi-Newton methods. For a convex function, using exact line search or some special inexact line search, the global convergence (see [1, 2]) and superlinear convergence (see [3, 4]) of the BFGS method were obtained. For general functions, the BFGS method may fail under inexact line search techniques. This fact has been proven by Dai [5], and Mascarenhas [6] has also proven that the BFGS method is not convergent, even under the exact line search techniques. Although the convergence of the BFGS method under general nonconvex functions has some shortcomings, its high efficiency and great numerical stability have motivated many scholars [712] to study and improve the BFGS method. The improvements achieved by scholars are as follows.

Formula 1. (see [7]). The BFGS update formula is modified bywhere and function satisfies (i) for all ; (ii) if and only if ; (iii) if is in a bounded set, is bounded. Li and Fukushima discussed its global convergence without the convexity assumption on .

Formula 2. (see [8]) The BFGS update formula is modified bywhere and . Moreover, scholars [8, 13] have proven that this method is better than the original BFGS method.

Formula 3. (see [9]) The BFGS update formula is modified bywhere and . According to , it is clear that the method contains both gradient and function value information. In addition, the modulated quasi-Newton method with superlinear convergence constructed by formula 3 is studied in [9].

Formula 4. (see [14]) The BFGS update formula is modified bywhere and . The global convergence of the improved BFGS method (MBFGS) is discussed by Li et al. [14]. Meanwhile, they also compared the three methods in numerical experiments. The results show that the algorithm based on this method is superior to the other three methods.
In many optimization algorithms, scholars often use the weak Wolfe–Powell (WWP) line search technique to find the step length . The WWP line search technique is determined bywhere , , and .
In order to get more interesting properties of WWP line search, many scholars have improved the line search technique. Yuan et al. [15] improved the WWP line search technique and studied the new line search technique that has global convergence in the BFGS and PRP methods. Their improved line search technique (MWWP) is formulated as follows:where , , , and . The detailed line search is elaborated in [15]. Some research results based on this improved line search can be found in [16, 17]. The above discussion motivate us to seek an improved BFGS method which may obtain better numerical performance.
In this article, we will discuss our work in the following sections. In Section 2, using (8) and the MWWP line search technique, algorithms are constructed to solve optimization problems. In Section 3, we study convergence of the modified BFGS method. In Section 4, the numerical results of the algorithm are reported. In the last section, the conclusion is presented.

2. Algorithm

The corresponding modified BFGS algorithm is called Algorithm 1 and can be presented as follows.

Algorithm 1. (i)Step 1: choose an initial point , , , , and . Given an initial symmetric and positive definite matrix , set .(ii)Step 2: when , stop. Otherwise, take the next step.(iii)Step 3: solve to obtain .(iv)Step 4: the step length is determined by (10) and (11).(v)Step 5: set a new iteration point of . Update by (8).(vi)Step 6: let and return to Step 2.

Remark 1. The step length , generated by the proposed new line search technique, has a great numerical performance. And the rationality proof of the MWWP line search has been given in [15].

3. Convergence Analysis

The global convergence analysis of the improved BFGS method will be introduced in this section, and the following assumptions are needed.

Assumption 1. (i)The level set of is bounded.(ii)The objective function is convex on .(iii) is twice continuously differentiable and bounded below, with a Lipschitz continuous gradient function . It means that there exists a positive constant , such thatNext, we will give the global convergence. The positive definite of is presented in the following lemma.

Lemma 1. Let the sequence be generated by (8); then, the matrix is positive definite for all .

Proof. Induction is used to prove the positive definiteness of . For , it is obvious that the matrix is positive definite. For , by and (11), we havewhere the last inequality holds by and . Therefore, the matrix is positive definite. The proof is completed.

Lemma 2. Let Assumption 1 hold, and the sequence is generated in the Algorithm 1. Then,

Proof. By MWWP line search (8) and formula (12), we obtainTherefore, the following bound holds:By (10) and Assumption 1 (iii), we haveAdding these inequalities from to , we obtainCombining the above inequality with (16), we obtain (14). Therefore, Lemma 2 has been proven.

Remark 2. It is obvious that there are two values of the . Therefore, the MWWP line search has two situations. In this paper, we discuss the situation of .

Lemma 3. Let and Assumption 1 hold. Then, there exists a positive constant such thathold for at least values of with any positive integer .

Proof. By the , if , then . Lemma 3 holds (see [15]).
If , then . It is similar to the result of Yuan and Wei [18]. According to the convexity of objective function , we obtainThe above two inequalities and the definition of indicate thatThen, we obtainTherefore, by the above analysis, it followsBy the definition of , it follows thatThen, we have thatThe proof of Theorem 2.1 of [2] implies that Lemma 3 holds.
Based on the above conclusions, the global convergence is analysed in the following theorem.

Theorem 1. If the conditions of Lemma 3 hold, then we obtain

Proof. By Lemma 2, we obtainSince , we haveCombining (19) with (20), we obtainThus, , , and , and we obtainTherefore, (27) holds. The proof is complete.

4. Numerical Results

In this section, we will study the numerical performance of the MBFGS-MWWP algorithm established in Section 2. To verify the algorithm’s effectiveness, we divide the experiments into two parts: we first compare our algorithm with the standard BFGS method with the weak Wolfe–Powell line search technique (BFGS-WWP) in 74 academic problems listed in Table 1 with the dimension varying from 300 to 2700 and then apply our algorithm to the Muskingum engineering model.


Test problem

1Extended Freudenstein and Roth function
2Extended trigonometric function
3Extended Rosenbrock function
4Extended White and Holst function
5Extended Beale function
6Extended penalty function
7Perturbed quadratic function
8Raydan 1 function
9Raydan 2 function
10Diagonal 1 function
11Diagonal 2 function
12Diagonal 3 function
13Hager function
14Generalized Tridiagonal-1 function
15Extended Tridiagonal-1 function
16Extended three exponential terms function
17Generalized Tridiagonal-2 function
18Diagonal 4 function
19Diagonal 5 function
20Extended Himmelblau function
21Generalized PSC1 function
22Extended PSC1 function
23Extended Powell function
24Extended block diagonal BD1 function
25Extended Maratos function
26Extended Cliff function
27Quadratic diagonal perturbed function
28Extended Wood function
29Extended Hiebert function
30Quadratic function QF1 function
31Extended quadratic penalty QP1 function
32Extended quadratic penalty QP2 function
33A quadratic function QF2 function
34Extended EP1 function
35Extended Tridiagonal-2 function
36BDQRTIC function (CUTE)
37TRIDIA function (CUTE)
38ARWHEAD function (CUTE)
39NONDIA function (CUTE)
40NONDQUAR function (CUTE)
41DQDRTIC function (CUTE)
42EG2 function (CUTE)
43DIXMAANA function (CUTE)
44DIXMAANB function (CUTE)
45DIXMAANC function (CUTE)
46DIXMAANE function (CUTE)
47Partial perturbed quadratic function
48Broyden Tridiagonal function
49Almost perturbed quadratic function
50Tridiagonal perturbed quadratic function
51EDENSCH function (CUTE)
52VARDIM function (CUTE)
53STAIRCASE S1 function
54LIARWHD function (CUTE)
55DIAGONAL 6 function
56DIXON3DQ function (CUTE)
57DIXMAANF function (CUTE)
58DIXMAANG function (CUTE)
59DIXMAANH function (CUTE)
60DIXMAANI function (CUTE)
61DIXMAANJ function (CUTE)
62DIXMAANK function (CUTE)
63DIXMAANL function (CUTE)
64DIXMAAND function (CUTE)
65ENGVAL1 function (CUTE)
66FLETCHCR function (CUTE)
67COSINE function (CUTE)
68Extended DENSCHNB function (CUTE)
69Extended DENSCHNF function (CUTE)
70SINQUAD function (CUTE)
71BIGGSB1 function (CUTE)
72Partial perturbed quadratic PPQ2 function
73Scaled quadratic SQ1 function
74Scaled quadratic SQ2 function

4.1. Unconstrained Optimisation Problems

In this section, we compare Algorithm 1 with the BFGS-WWP algorithms for the 74 academic problems listed in Table 1. The codes are written with MATLAB R2014a and run on a PC with an Inter (R) Core (TM) i5-4210U CPU @ 1.70 GHz, 8.00 GB of RAM and the Windows 10 operating system, and the parameters are chosen as , , , and . The numerical results and comparison are shown in Tables 26. The columns in Tables 26 have the following meaning:(i): the index of the tested problems(ii): the dimension of the tested problem(iii): the iteration number consumed(iv): the total number both of the gradient and the function value(v): the time consumed in corresponding algorithms in seconds


N0DimMBFGS-MWWPBFGS-WWP
NINFGCPU timeNINFGCPU time

13009260.12519261.0625
19007192.40637192.4375
1270072134.656172134.0131
23002795984.65632545474.0052
29006601381186.23127001489203.3942
2270071630.469171630.5030
330062814718.734467614729.2969
390010002429286.140610002598275.7656
32700100030865405.7969100029995390.2344
430073316369.9219733161010.9375
490010002371278.406310002421285.8281
42700100021105408.3906100021165375.0313
530013390.187515450.2031
590015464.031315463.8125
52700154674.1875154673.2188
63008330.09388330.1406
690013493.484413493.0625
6270041710.500141710.3906
73001984012.56251984012.5781
7900409823112.9531409823111.6563
7270088317684735.796988317684768.6094
830022480.406322480.3281
890031668.359431668.5156
8270057118294.218857118292.7812
93006160.062513280.1563
99006161.406313283.2656
9270061624.7813143063.6563
10300290.0313290
10900290.0469290.0469
102700290.1563290.1563
1130046940.609446940.5781
119006713618.03126613419.0021
11270082200422.890697196533.4688
1230038780.531338780.5469
12900469412.7031469412.6253
12270066134340.437566134343.1406
1330014340.156214340.2031
1390015423.968815424.0156
13270049153257.421949151250.4063
1430011250.281311250.3438
1490010232.859410232.5625
14270061525.109461523.5313
1530013320.328113320.3906
1590014343.812514343.9219
152700153772.0156153775.4844
163007180.06257180.0938
169006161.32816161.2521
16270061824.453161623.8906
17300981991.2656981991.3281
179008818223.98488718023.3281
17270082170415.312582171405.2813


N0DimMBFGS-MWWPBFGS-WWP
NINFGCPU timeNINFGCPU time

183003100.03123100.0312
189003100.54683100.4844
1827003109.34383109.0032
19300390.03123100.0312
19900390.54683100.5312
1927003910.17183109.6254
2030011320.18758270.0625
2090013552.468710291.8281
202700164865.7812134045.4531
21300471220.5781501140.7187
219005814116.21015012513.2567
21270058149295.171842108207.6562
223006210.09377300.1093
229006211.39067301.5937
22270062125.156273029.0781
233001143931.85931133781.5937
2390015252342.890619062852.0468
2327002458541304.25212096931105.5625
24300943381.1562311610.2813
249009336122.9531171271.9375
2427002412974.76562513679.4375
2530068618238.781273419089.7031
2590010003200280.968710002465272.8751
25270096928675167.4062100027915395.4062
263004150.03124150.0312
269004150.60934150.6406
26270041510.468141510.1093
273007180.09377180.1563
2790013303.312513303.7031
2727002860139.20312860149.8593
28300882551.3593842451.4843
2890010328828.218710229527.3125
282700131328694.8125128330669.3593
293004150.03124150.0312
299004150.65624150.4218
29270041510.39064156.2968
303002074192.75212074192.8281
30900444893122.4375444893120.4687
30270086317284608.984386317284734.1093
3130011310.140611310.1562
3190013493.359313493.8281
312700165879.8751165891.4531
3230013440.156213450.2031
3290012333.046812333.2521
32270072223367.953183235435.4687
333004110.03124110.0312
339004110.71874110.6562
33270041112.515641112.7812
34300390390.0312
34900390.5625390.5121
34270041014.718741014.2031


N0DimMBFGS-MWWPBFGS-WWP
NINFGCPU timeNINFGCPU time

353005130.06255130.0625
359005131.09375131.0781
35270051319.593751318.1093
36300451180.8751451160.9375
369004712213.59375514515.4375
36270064171324.656264167316.7968
373002515073.56252515073.7031
379005771159160.87515771159176.6251
37270086917414684.328186917414743.0937
383005150.06255150.0625
389005151.04685151.1718
38270051519.734351518.9218
3930019720.252219670.3281
3990023876.265621815.2521
39270034118173.112335120181.7812
4030048511627.218752010557.4375
409005061162143.34377311468198.0625
40270052812702713.421882916644167.6718
4130012340.171812340.1718
4190012342.906212343.0468
412700123456.5231123453.7968
4230018600.218716550.1875
429004210.07814210.0468
4227004210.29684210.3125
4330015380.265615380.2343
4390018445.406218444.6875
432700194694.0312204892.6562
4430029710.437527700.3752
449004712013.09378320422.3593
44270087203443.703199224494.0937
45300561310.7812631460.9531
459007517820.82819020925.4843
45270081190414.4218124265615.4062
46300701570.9375901961.67187
4690012127634.671811725832.5468
462700174401916.87521994161049.3437
473001102252.29681102252.3752
4790021944673.843721944668.9843
4727003467021940.34373467021948.3437
48300901831.6252901831.7031
489007515720.56257515720.0468
482700144291741.3751144291716.9375
493001973992.67181973993.0781
49900422849117.1562422849116.6562
49270088217664726.187588217664725.7031
503001853754.17181853754.0312
50900395795116.1562395795112.3906
50270085917204628.046885917204585.6251
5130024580.312524580.3125
5190021565.703130788.2523
5127003380163.78122772127.3593


N0DimMBFGS-MWWPBFGS-WWP
NINFGCPU timeNINFGCPU time

5230031303130.0468
529003130.34373130.2968
5227003135.64063135.4062
533003146344.10934048125.4063
539009271860261.953110002004270.2188
532700100020045316.6406100020025250.6406
5430025780.328124740.2969
5490028887.328128857.1406
54270033107166.218137116183.0625
5530010240.140620420.2812
5590011262.906221445.2187
552700112650.96872348105.5123
563003116263.78123547104.6252
569009211846252.859310002002271.5312
562700100020025193.1251100020004871.1241
57300771991.2187962321.4531
579007291.64064014011.0781
57270073030.4531157565.8594
58300671440.8751731541.2656
5890010321429.843110321428.1875
5827002888124.14061964071044.9688
593003150.03123150.0313
5990011582.812511583.2656
59270024111107.312524108109.1093
60300711660.9687791731.1875
6090013329542.203118238450.2031
6027001904331024.2812182388947.9062
61300681720.937526870.4375
619007291.75219451.7656
61270073031.2812199082.3906
623003598775.71813919086.3125
629006251452182.96876181415168.5625
62270062214733372.6406156520809.6251
633001573712.51561663912.6562
6390023657167.640626759274.8906
63270054112162961.687549311142569.2656
643001984823.21312024853.2187
6490027662982.1562211125.4218
6427003129.93753129.1406
65300511141.0312481080.9062
659005112215.42184210911.6562
65270055126290.484477179379.0781
66300527118710.2968534121110.1875
6690010002415294.625110002442285.8281
662700100024945153.8281100025444975.5468
673006210.03126210
6790010040821.328112330.1562
67270010290.906213590.9843
6830011260.187511260.2031
6890011263.218712283.5156
682700122860.4375122852.7656


N0DimMBFGS-MWWPBFGS-WWP
NINFGCPU timeNINFGCPU time

69300431420.687526810.3752
699006520017.890628897.5938
69270073222399.734334107170.4375
703003588479.2031501510.9218
709004415913.81257125019.2812
7027002637641469.703145152207.3438
713001633322.03121633342.0938
71900483975147.3593488986130.2656
712700100020005118.8751100020004951.7343
7230035800.828143940.7968
729005812818.26565712618.1875
72270096208535.140698210531.1562
733001332712.28121332711.7031
7390022545563.859322545560.2521
7327004719472572.96814438912325.0156
74300431120.7187431120.5625
7490018437357.359318437350.5781
7427002625291460.04682625291372.9375

For intuitive effect, we adopt the performance technique in [19] to show the performance of different algorithms. From the strategy in [19], the higher the line in the figure is, the better the numerical results are. Figures 13 show the NI, NFG, and CPU time performances, respectively, of the new algorithm and the standard BFGS method. From the results shown in the figures, the NI, NFG, and CPU time of the algorithm constructed in this paper are generally better than those of the standard BFGS algorithm. Therefore, our algorithm is interesting and reliable from this perspective.

4.2. Muskingum Model

In this section, the main work is to use Algorithm 1 to numerically estimate the Muskingum model [20], whose definition is as follows:where is the total time, is the observed inflow discharge, is the observed outflow discharge, denotes the storage time constant, denotes the weight coefficient, denotes an extra parameter, and is the time step at time (). The observation data in the experiment are from the process of flood runoff from Chenggouwan and Linqing of Nanyunhe in the Haihe Basin, Tianjin, China, and the initial point . In addition, the time step are selected, and the detailed values of and for the years 1960, 1961, and 1964 are given in [21].

Figures 46 and Table 7 imply the following conclusions: (1) Similar to the BFGS method and the HIWO method, the MBFGS method, combined with the Muskingum model, has interesting experimental results. (2) The final points (, , and ) of the MBFGS method are more competitive than other similar methods. (3) Due to the difference in the final points of these three methods, the Muskingum model may have more approximate optimal points.


Algorithm

BFGS [22]10.81560.98261.0219
HIWO [20]13.28130.80010.9933
MBFGS11.18320.99990.9996

5. Conclusion

In this paper, we study the improved BFGS method with the line search technique [14, 15] and mainly discuss its convergence on a convex function. The numerical results show that the proposed algorithm has a better problem-solving capability than that of the standard BFGS algorithm based on WWP line search. As for the further work, we have several points to consider: (i) That whether the improved BFGS method, based on other line search, also has convergence property. (ii) The combination of line search techniques (10) and (11) with other quasi-Newton methods is worth studying. (iii) Similar applications of nonlinear conjugate gradient algorithm, especially the PRP method, are also worthy of attention.

Data Availability

The data used to support the findings of this study are available in tables in this paper and also can be obtained from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Basic Ability Promotion Project of Guangxi Young and Middle-Aged Teachers (No. 2020KY30018).

References

  1. C. G. Broyden, J. E. Dennis, and J. J. Moré, “On the local and superlinear convergence of quasi-Newton methods,” IMA Journal of Applied Mathematics, vol. 12, no. 3, pp. 223–245, 1973. View at: Publisher Site | Google Scholar
  2. R. H. Byrd and J. Nocedal, “A tool for the analysis of quasi-Newton methods with application to unconstrained minimization,” SIAM Journal on Numerical Analysis, vol. 26, no. 3, pp. 727–739, 1989. View at: Publisher Site | Google Scholar
  3. J. E. Dennis and J. J. Moré, “Quasi-Newton methods, motivation and theory,” SIAM Review, vol. 19, no. 1, pp. 46–89, 1977. View at: Publisher Site | Google Scholar
  4. J. E. Dennis and J. J. Moré, “A characterization of superlinear convergence and its application to quasi-Newton methods,” Mathematics of Computation, vol. 28, no. 126, p. 549, 1974. View at: Publisher Site | Google Scholar
  5. Y.-H. Dai, “Convergence properties of the BFGS algoritm,” SIAM Journal on Optimization, vol. 13, no. 3, pp. 693–701, 2002. View at: Publisher Site | Google Scholar
  6. W. F. Mascarenhas, “The BFGS method with exact line searches fails for non-convex objective functions,” Mathematical Programming, vol. 99, no. 1, pp. 49–61, 2004. View at: Publisher Site | Google Scholar
  7. D. Li and M. Fukushima, “A modified BFGS method and its global convergence in nonconvex minimization,” Journal of Computational and Applied Mathematics, vol. 129, no. 1-2, pp. 15–35, 2001. View at: Publisher Site | Google Scholar
  8. Z. Wei, G. Yu, G. Yuan, and Z. Lian, “The superlinear convergence of a modified BFGS-type method for unconstrained optimization,” Computational Optimization and Applications, vol. 29, no. 3, pp. 315–332, 2004. View at: Publisher Site | Google Scholar
  9. J. Z. Zhang, N. Y. Deng, and L. H. Chen, “New quasi-Newton equation and related methods for unconstrained optimization,” Journal of Optimization Theory and Applications, vol. 102, no. 1, pp. 147–167, 1999. View at: Publisher Site | Google Scholar
  10. D.-H. Li and M. Fukushima, “On the global convergence of the BFGS method for nonconvex unconstrained optimization problems,” SIAM Journal on Optimization, vol. 11, no. 4, pp. 1054–1064, 2001. View at: Publisher Site | Google Scholar
  11. L. Zhang and H. Tang, “A hybrid MBFGS and CBFGS method for nonconvex minimization with a global complexity bound,” Pacific Journal of Optimization, vol. 14, no. 4, pp. 693–702, 2018. View at: Google Scholar
  12. W. Zhou, “A modified BFGS type quasi-Newton method with line search for symmetric nonlinear equations problems,” Journal of Computational and Applied Mathematics, vol. 367, Article ID 112454, 2020. View at: Publisher Site | Google Scholar
  13. Z. Wei, G. Li, and L. Qi, “New quasi-Newton methods for unconstrained optimization problems,” Applied Mathematics and Computation, vol. 175, no. 2, pp. 1156–1188, 2006. View at: Publisher Site | Google Scholar
  14. X. Li, B. Wang, and W. Hu, “A modified nonmonotone BFGS algorithm for unconstrained optimization,” Journal of Inequalities Applications, vol. 2017, no. 1, p. 183, 2017. View at: Publisher Site | Google Scholar
  15. G. Yuan, Z. Wei, and X. Lu, “Global convergence of BFGS and PRP methods under a modified weak Wolfe-Powell line search,” Applied Mathematical Modelling, vol. 47, pp. 811–825, 2017. View at: Publisher Site | Google Scholar
  16. G. Yuan, Z. Sheng, B. Wang, W. Hu, and C. Li, “The global convergence of a modified BFGS method for nonconvex functions,” Journal of Computational and Applied Mathematics, vol. 327, pp. 274–294, 2018. View at: Publisher Site | Google Scholar
  17. G. Yuan, P. Li, and J. Lu, “The global convergence of the BFGS method with a modified WWP line search for nonconvex functions,” Numerical Algorithms, In press. View at: Google Scholar
  18. G. Yuan and Z. Wei, “Convergence analysis of a modified BFGS method on convex minimizations,” Computational Optimization and Applications, vol. 47, no. 2, pp. 237–255, 2010. View at: Publisher Site | Google Scholar
  19. E. D. Dolan and J. J. Moré, “Benchmarking optimization software with performance profiles,” Mathematical Programming, vol. 91, no. 2, pp. 201–213, 2002. View at: Publisher Site | Google Scholar
  20. A. Ouyang, L.-B. Liu, Z. Sheng, and F. Wu, “A class of parameter estimation methods for nonlinear Muskingum model using hybrid invasive weed optimization algorithm,” Mathematical Problems in Engineering, vol. 2015, Article ID 573894, 15 pages, 2015. View at: Publisher Site | Google Scholar
  21. A. Ouyang, Z. Tang, K. Li, A. Sallam, and E. Sha, “Estimating parameters of Muskingum model using an adaptive hybrid PSO algorithm,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 28, no. 1, pp. 1–29, 2014. View at: Publisher Site | Google Scholar
  22. Z. W. Geem, “Parameter estimation for the nonlinear Muskingum model using the BFGS technique,” Journal of Irrigation and Drainage Engineering, vol. 132, no. 5, pp. 474–478, 2006. View at: Publisher Site | Google Scholar

Copyright © 2020 Tianshan Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


More related articles

 PDF Download Citation Citation
 Download other formatsMore
 Order printed copiesOrder
Views292
Downloads208
Citations

Related articles

Article of the Year Award: Outstanding research contributions of 2020, as selected by our Chief Editors. Read the winning articles.