Research Article  Open Access
Pengyuan Li, Zhan Wang, Dan Luo, Hongtruong Pham, "Global Convergence of a Modified Two-Parameter Scaled BFGS Method with Yuan-Wei-Lu Line Search for Unconstrained Optimization", Mathematical Problems in Engineering, vol. 2020, Article ID 9280495, 15 pages, 2020. https://doi.org/10.1155/2020/9280495
Global Convergence of a Modified Two-Parameter Scaled BFGS Method with Yuan-Wei-Lu Line Search for Unconstrained Optimization
Abstract
The BFGS method is one of the most efficient quasi-Newton methods for solving small- and medium-size unconstrained optimization problems. To explore further interesting properties of this method, a modified two-parameter scaled BFGS method is presented in this paper. The aim of the modified scaled BFGS method is to improve the eigenvalue structure of the BFGS update. In this method, the first two terms and the last term of the standard BFGS update formula are scaled with two different positive parameters, and a new value of y_k* is given. Meanwhile, the Yuan-Wei-Lu line search is adopted. Under this line search, the modified two-parameter scaled BFGS method is globally convergent for nonconvex functions. Extensive numerical experiments show that this form of the scaled BFGS method outperforms the standard BFGS method and some similar scaled methods.
1. Introduction
Consider the unconstrained optimization problem

min f(x), x in R^n, (1)

where f: R^n -> R is a continuously differentiable function bounded from below. Quasi-Newton methods are currently used in countless optimization software packages for solving unconstrained optimization problems [1–8]. The BFGS method, one of the most efficient quasi-Newton methods for solving (1), is an iterative method of the following form:

x_{k+1} = x_k + alpha_k d_k, (2)

where alpha_k, obtained by some line search rule, is a step size and d_k is the BFGS search direction computed from

B_k d_k = -g_k, (3)

where g_k is the gradient of f at x_k and the matrix B_k is the BFGS approximation to the Hessian of f, with the following update formula:

B_{k+1} = B_k - (B_k s_k s_k^T B_k)/(s_k^T B_k s_k) + (y_k y_k^T)/(s_k^T y_k), (4)

where s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k. Problems related to the BFGS method have been analyzed and studied by many scholars, and satisfactory conclusions have been drawn [9–16]. Powell [17] was the first to prove the global convergence of the standard BFGS method with an inexact Wolfe line search for convex functions. Under the exact line search or some specific inexact line searches, the BFGS method is convergent for convex minimization problems [18–21]. By contrast, for nonconvex problems, Mascarenhas [22] presented an example showing that the BFGS method and some Broyden-type methods may not converge under the exact line search. Likewise, Dai [23] proved that the BFGS method may fail to converge under Wolfe line searches. To establish the global convergence of the BFGS method for general functions and to obtain a better Hessian approximation of the objective function, Yuan and Wei [24] presented a modified quasi-Newton equation:

B_{k+1} s_k = y_k*, (5)

where

y_k* = y_k + (max{rho_k, 0}/||s_k||^2) s_k, with rho_k = 2[f(x_k) - f(x_{k+1})] + (g_{k+1} + g_k)^T s_k. (6)
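To make the update concrete, the following minimal NumPy sketch applies one standard BFGS update step of the form (4). It is an illustration added for clarity, not the authors' code; the curvature-skip tolerance is our own choice.

```python
import numpy as np

def bfgs_update(B, s, y):
    """One standard BFGS update:
        B_{k+1} = B - (B s s^T B)/(s^T B s) + (y y^T)/(s^T y),
    where B is the current symmetric positive definite Hessian approximation,
    s = x_{k+1} - x_k is the step, and y = g_{k+1} - g_k is the gradient change.
    The curvature condition s^T y > 0 keeps the update positive definite;
    if it fails (up to a small illustrative tolerance), the update is skipped.
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
        return B  # skip: curvature is not (safely) positive
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy
```

On a convex quadratic f(x) = (1/2) x^T A x, the updated matrix satisfies the secant equation B_{k+1} s_k = y_k exactly, which is the defining property of the update.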
In practice, the standard BFGS method has many qualities worth exploring and can effectively solve a class of unconstrained optimization problems.
Here, two excellent properties of the BFGS method are introduced. One is the self-correcting quality: if the current Hessian approximation estimates the curvature of the function incorrectly, then the approximation will correct itself within a few steps. The other interesting property is that small eigenvalues are corrected more effectively than large ones [25]. Hence, the efficiency of the BFGS algorithm depends strongly on the eigenvalue structure of the Hessian approximation matrix. To improve the performance of the BFGS method, Oren and Luenberger [26] scaled the Hessian approximation matrix B_k, that is, they replaced B_k by tau_k B_k, where tau_k is a self-scaling factor. Nocedal and Yuan [27] further studied the self-scaling BFGS method with tau_k = (y_k^T s_k)/(s_k^T B_k s_k). Based on this value of tau_k, Al-Baali [28] introduced a simple modification: tau_k = min{1, (y_k^T s_k)/(s_k^T B_k s_k)}. Numerical experiments showed that the modified self-scaling BFGS method outperforms the unscaled BFGS method. Several other scaled BFGS methods with better properties are enumerated below.
Formula 1. The general one-parameter scaled BFGS updating formula is

B_{k+1} = B_k - (B_k s_k s_k^T B_k)/(s_k^T B_k s_k) + theta_k (y_k y_k^T)/(s_k^T y_k), (8)

where theta_k is a positive parameter; several choices of the scaling factor theta_k have been proposed, as listed below. Choice A: the value of theta_k given by Yuan [29],

theta_k = 2[f(x_k) - f(x_{k+1}) + g_{k+1}^T s_k]/(s_k^T y_k). (9)

With an inexact line search, the global convergence of the scaled BFGS method with theta_k given by (9) was established for convex functions by Powell [30]. Later, for general nonlinear functions, Yuan limited the range of theta_k to [0.01, 100] to ensure the positivity of theta_k under the inexact line search and proved the global convergence of the scaled BFGS method in this form. Choice B: the spectral factor theta_k (10), obtained as the solution of a least-squares problem in the spirit of Barzilai and Borwein [31]; the scaled BFGS method based on this value of theta_k is known as the spectral scaled BFGS method. Cheng and Li [32] proved that the spectral scaled BFGS method is globally convergent under the Wolfe line search, assuming convexity of the minimizing function. Choice C: a factor theta_k given by (11). Under the Wolfe line search (20) and (21), the curvature condition s_k^T y_k > 0 holds, which implies that theta_k computed by (11) is bounded away from zero. Therefore, in this instance, the large eigenvalues of B_{k+1} given by (8) are shifted to the left [33].
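As a concrete illustration of Choice A, the Yuan-type factor together with the [0.01, 100] safeguard mentioned above can be computed as follows. This is a sketch in our own notation; the function name and the default bounds are our assumptions, not the authors' code.

```python
import numpy as np

def yuan_scaling_factor(f_k, f_k1, g_k1, s, y, lo=0.01, hi=100.0):
    """Yuan-type scaling factor for the one-parameter scaled BFGS update:
        theta_k = 2 (f_k - f_{k+1} + g_{k+1}^T s_k) / (s_k^T y_k),
    clipped to [lo, hi] (here [0.01, 100], as described for Choice A) to keep
    the scaled update safely positive definite for general functions.
    """
    theta = 2.0 * (f_k - f_k1 + g_k1 @ s) / (s @ y)
    return float(np.clip(theta, lo, hi))
```

Note that for an exact quadratic model, f_k - f_{k+1} + g_{k+1}^T s_k = (1/2) s_k^T y_k, so theta_k = 1 and the scaled update reduces to the standard BFGS update.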
Formula 2. Proposed by Oren and Luenberger [26], this scaled BFGS method scales the first two terms of the BFGS update by a single parameter and is defined as

B_{k+1} = tau_k [B_k - (B_k s_k s_k^T B_k)/(s_k^T B_k s_k)] + (y_k y_k^T)/(s_k^T y_k), (12)

where tau_k is a positive parameter calculated as follows:

tau_k = (y_k^T s_k)/(s_k^T B_k s_k). (13)
The parameter tau_k assigned by (13) makes the eigenvalue structure of the inverse Hessian approximation easier to analyze. Consequently, it is regarded as one of the best factors.
Formula 3. In this method, the scaling parameters are selected to cluster the eigenvalues of the iteration matrix and to shift the large eigenvalues to the left. The Hessian approximation is updated as

B_{k+1} = delta_k [B_k - (B_k s_k s_k^T B_k)/(s_k^T B_k s_k)] + gamma_k (y_k y_k^T)/(s_k^T y_k), (14)

where both delta_k and gamma_k are positive parameters, and Andrei [34] preset them to the values given in (15) and (16).
If the scaling parameters are bounded and the line search is inexact, then this scaled BFGS algorithm is globally convergent for general functions. A large number of numerical experiments show that the double-parameter scaled BFGS method with delta_k and gamma_k given by (15) and (16) is more competitive than the standard BFGS method. In this paper, combining (7) and (14), we propose a new update formula for B_{k+1}:

B_{k+1} = delta_k [B_k - (B_k s_k s_k^T B_k)/(s_k^T B_k s_k)] + gamma_k (y_k* y_k*^T)/(s_k^T y_k*), (17)

where y_k* is determined by formula (6), and delta_k and gamma_k are the new parameter values given in (18) and (19), respectively.
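Structurally, the two-parameter update (17) can be sketched as below. The concrete parameter formulas (18) and (19) are not reproduced here, so delta and gamma are supplied by the caller; the code and names are our illustration, not the authors' implementation.

```python
import numpy as np

def two_param_scaled_bfgs(B, s, y_star, delta, gamma):
    """Two-parameter scaled BFGS update of the form (17):
        B_{k+1} = delta * (B - B s s^T B / (s^T B s))
                + gamma * (y* y*^T) / (s^T y*),
    where y* is the modified gradient difference (6) and delta, gamma > 0
    scale the first two terms and the last term, respectively.
    With delta = gamma = 1 and y* = y, this reduces to the standard BFGS
    update (4).
    """
    Bs = B @ s
    return (delta * (B - np.outer(Bs, Bs) / (s @ Bs))
            + gamma * np.outer(y_star, y_star) / (s @ y_star))
```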
Some interesting properties of BFGS-type methods are inseparable from the weak Wolfe–Powell (WWP) line search:

f(x_k + alpha_k d_k) <= f(x_k) + delta alpha_k g_k^T d_k, (20)
g(x_k + alpha_k d_k)^T d_k >= sigma g_k^T d_k, (21)

where 0 < delta < sigma < 1. There are many research studies based on this line search [35–43]. To further develop inexact line searches, Yuan et al. presented a new line search, called the Yuan-Wei-Lu (YWL) line search, which has the following form:

f(x_k + alpha_k d_k) <= f(x_k) + delta alpha_k g_k^T d_k + alpha_k min{-delta1 g_k^T d_k, (delta alpha_k ||d_k||^2)/2}, (22)
g(x_k + alpha_k d_k)^T d_k >= sigma g_k^T d_k + min{-delta1 g_k^T d_k, (delta alpha_k ||d_k||^2)/2}, (23)

where delta is in (0, 1/2), delta1 is in (0, delta), and sigma is in (delta, 1). The main work of this paper is to verify the global convergence of the modified scaled BFGS update (17), with delta_k and gamma_k given by (18) and (19), respectively, under this line search. Abundant numerical results show that such a combination is suitable for nonconvex functions.
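The weak Wolfe–Powell conditions above can be implemented by a standard bracketing and bisection scheme. The following sketch is our own illustration (not the YWL search and not the paper's code): for a descent direction it returns a step size satisfying both the sufficient-decrease and the curvature condition.

```python
import numpy as np

def weak_wolfe(f, grad, x, d, delta=1e-4, sigma=0.9, max_iter=50):
    """Find a step size alpha for the descent direction d satisfying
    the weak Wolfe-Powell conditions with 0 < delta < sigma < 1:
      f(x + a d) <= f(x) + delta * a * g^T d   (sufficient decrease)
      grad(x + a d)^T d >= sigma * g^T d       (curvature)
    Simple expansion/bisection: shrink when decrease fails, grow when
    curvature fails.  The default parameters are illustrative.
    """
    gd = grad(x) @ d          # initial directional derivative (negative)
    lo, hi, a = 0.0, np.inf, 1.0
    for _ in range(max_iter):
        if f(x + a * d) > f(x) + delta * a * gd:
            hi = a                             # decrease fails: step too long
        elif grad(x + a * d) @ d < sigma * gd:
            lo = a                             # curvature fails: step too short
        else:
            return a                           # both conditions hold
        a = 0.5 * (lo + hi) if np.isfinite(hi) else 2.0 * a
    return a  # fallback after max_iter trials
```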
Our paper is organized as follows. The motivation and the algorithm are introduced in the next section. In Section 3, the convergence analysis of the modified two-parameter scaled BFGS method under the Yuan-Wei-Lu line search is established. Section 4 is devoted to the results of numerical experiments. Some conclusions are stated in the last section.
2. Motivation and Algorithm
Two crucial tools for analyzing properties of the BFGS method are the trace and the determinant of the matrix B_{k+1} given by (4). The corresponding relations are enumerated as follows:
Applying the following existing relation from the study of Sun and Yuan [44], we obtain
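For the reader's convenience, the trace and determinant relations for the standard BFGS update (4), referred to below as (25) and (26), are the well-known identities:

```latex
\operatorname{tr}(B_{k+1}) = \operatorname{tr}(B_k)
  - \frac{\lVert B_k s_k \rVert^2}{s_k^{\top} B_k s_k}
  + \frac{\lVert y_k \rVert^2}{y_k^{\top} s_k},
\qquad
\det(B_{k+1}) = \det(B_k)\,\frac{y_k^{\top} s_k}{s_k^{\top} B_k s_k}.
```

The negative second term in the trace relation shifts the eigenvalues of B_{k+1} to the left, while the positive third term shifts them to the right, which is exactly the behavior analyzed in the following discussion.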
Obviously, the efficiency of the BFGS method depends on the eigenvalue structure of the Hessian approximation matrix, and the BFGS method is in fact affected more by large eigenvalues than by small eigenvalues [25, 45, 46]. The second term on the right-hand side of formula (25) is negative, so it produces a shift of the eigenvalues of B_{k+1} to the left; thus, the BFGS method can correct large eigenvalues. Moreover, the third term on the right-hand side of (25) is positive and produces a shift of the eigenvalues of B_{k+1} to the right; if this term is large, B_{k+1} may have large eigenvalues too. Therefore, the eigenvalues of B_{k+1} can be corrected by scaling the corresponding terms in (25), which is the main motivation for using a scaled BFGS method. In this paper, we scale the first two terms and the last term of the standard BFGS update formula with two different positive parameters and propose a new y_k*. In the subsequent proofs, we will establish several lemmas based on these two tools to analyze the convergence of the modified scaled BFGS method. An algorithmic framework for solving problem (1) is then given in Algorithm 1, which is designed as follows.

3. Convergence Analysis
In this section, the global convergence of Algorithm 1 will be established; the following assumptions are useful in the convergence analysis.
Assumption 1. (i) The level set {x : f(x) <= f(x_0)} is bounded. (ii) The function f is twice continuously differentiable and bounded from below.
Lemma 1. If B_k is positive definite, and if the step size alpha_k is computed by (22) and (23), then B_{k+1} given by (17) is also positive definite for all k.
Proof. The inequalities (22) and (23) indicate that s_k^T y_k* > 0. Using the definition of B_{k+1}, we obtain the update expression. For any nonzero vector z, it follows that z^T B_{k+1} z > 0, where the penultimate inequality in the derivation is obtained by the Cauchy–Schwarz inequality.
Lemma 2. Let gamma_k be generated by (16) for all k; then gamma_k > 0, and gamma_k tends to 1.
Proof. Observing formula (19) and substituting, we can find that gamma_k is close to 1. Owing to the symmetry, positive definiteness, and nonsingularity of B_k, its eigenvalues are real and positive, so the relevant quotients are positive for all k. Since, for sufficiently large k, the quantities involved are roughly of the same order of magnitude, the quotient approaches 1. To sum up, the stated relations are valid; namely, gamma_k > 0 for all k, and gamma_k tends to 1. The proof is completed.
Remark 1. Based on the conclusion of Lemma 2, we can infer that, for any integer k, there exist two positive constants bounding gamma_k from below and above.
Lemma 3. If B_{k+1} is updated by (14), where delta_k and gamma_k are determined by (18) and (16), then inequalities (31) and (32) hold.
Proof. Considering (25), we have the trace estimate. In addition, a further bound holds. Therefore, by Remark 1 and the above inequality, formula (33) is transformed into a bound which implies (31). From the positive definiteness of B_{k+1}, (32) also holds. The proof is completed.
Lemma 4. Suppose that the stated bounds hold for all k, with the bounding quantities constant. Then, there exists a positive constant such that (36) holds for all sufficiently large k.
Proof. Utilizing the identity (26) and taking the determinant on both sides of formula (14), with delta_k and gamma_k computed as in (18) and (16), we obtain the determinant recursion, where the penultimate inequality follows from the assumed bounds for all k. Furthermore, by Lemma 3, we obtain a further estimate. Therefore, the stated bound (39) follows. When k is sufficiently large, (39) implies (36). The proof is completed.
Theorem 1. If the sequence {x_k} is obtained by Algorithm 1, then

lim inf_{k -> infinity} ||g_k|| = 0. (40)
Proof. We prove (40) by contradiction. Suppose that ||g_k|| is bounded away from zero. By the Yuan-Wei-Lu line search (22) and the fact that f is bounded from below, we obtain a per-iteration decrease estimate. Adding these inequalities over k and utilizing Assumption 1 (ii), we obtain (42). From Assumption 1 (ii) and (42), the summed quantities are finite. Based on this, given a constant, there is a positive integer beyond which the stated bound holds for any larger index, where the first inequality follows from the geometric-mean inequality. Moreover, by Lemma 4, we obtain a lower bound that contradicts formula (39) under the assumption on the gradients. Thus, (40) is valid. The proof is completed.
4. Numerical Results
In this section, numerical results for Algorithm 1 are reported, and the following methods are compared: (i) the MTPSBFGS method (B_{k+1} is updated by (17) with delta_k and gamma_k given by (18) and (19)); (ii) the SBFGS method (B_{k+1} is updated by (14) with the parameters given by (11) and (16)).
4.1. General Unconstrained Optimization Problems
Tested problems: a total of 74 test problems, listed in Table 1 and derived from the studies by Bongartz et al. and Moré et al. [47, 48].
Parameters: Algorithm 1 runs with fixed values of the line search parameters delta, delta1, and sigma and of the stopping tolerances.
Dimensionality: the algorithm is tested in the following three dimensions: 300, 900, and 2700.
Himmelblau stop rule [49]: if |f(x_k)| > e1, then set stop1 = |f(x_k) - f(x_{k+1})|/|f(x_k)|; otherwise, set stop1 = |f(x_k) - f(x_{k+1})|. The iterations are stopped if ||g(x_k)|| < epsilon or stop1 < e2 holds, where e1 and e2 are small positive tolerances.
Experiment environment: all programs are written in MATLAB R2014a and run on a PC with an Intel(R) Core(TM) i5-4210U CPU at 1.70 GHz, 8.00 GB of RAM, and the Windows 10 operating system.
Symbol representation: No.: the test problem number; CPU time: the CPU time in seconds; NI: the number of iterations; NFG: the total number of function and gradient evaluations.
Discussion: Figures 1–3 show the performance profiles for CPU time, NI, and NFG, and Tables 2–6 provide the detailed numerical results. From these figures and tables, it is obvious that the MTPSBFGS method possesses the better numerical performance of the two methods; that is, the proposed modified scaled BFGS method is reasonable and feasible. The specific reasons for this good performance are as follows: the parameter scaling the first two terms of the standard BFGS update is determined to cluster the eigenvalues of the matrix, and the parameter scaling the third term is determined to reduce its large eigenvalues, thus obtaining a better distribution of them.
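The Himmelblau stopping rule described above can be sketched in code as follows. This is a reconstruction for illustration; the tolerance defaults e1, e2, and eps below are placeholders, not necessarily the values used in the experiments.

```python
def himmelblau_stop(f_k, f_k1, gnorm, e1=1e-5, e2=1e-6, eps=1e-6):
    """Himmelblau stopping rule: use a relative decrease test when |f(x_k)|
    is large and an absolute one otherwise; stop when either the gradient
    norm or the decrease measure falls below its threshold.  The defaults
    e1, e2, eps are illustrative placeholders.
    """
    if abs(f_k) > e1:
        stop1 = abs(f_k - f_k1) / abs(f_k)   # relative decrease in f
    else:
        stop1 = abs(f_k - f_k1)              # absolute decrease in f
    return gnorm < eps or stop1 < e2
```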






4.2. Muskingum Model in Engineering Problems
In this subsection, we present the Muskingum model, which has the following form:
Muskingum model [50]: its symbols are as follows: x1 is the storage time constant, x2 is the weight coefficient, x3 is an extra parameter, I_i is the observed inflow discharge, Q_i is the observed outflow discharge, n is the total time, and Delta t is the time step at time t_i.
The observed data for the experiment are obtained from the flood runoff process from Chenggouwan and Linqing of Nanyunhe in the Haihe Basin, Tianjin, China. The initial point and the time step are selected accordingly. The concrete values of I_i and Q_i for the years 1960, 1961, and 1964 are listed in [51]. The test results are presented in Table 7.
Figures 4–6 and Table 7 imply the following three conclusions: (i) on the Muskingum model, the MTPSBFGS method is efficient, and the numerical performance of all three algorithms is good; (ii) compared to other similar methods, the final points (x1, x2, and x3) of the MTPSBFGS method are competitive; (iii) since the end points of these three methods differ, the Muskingum model may have multiple approximate optimal points.
5. Conclusion
A modified two-parameter scaled BFGS method and the Yuan-Wei-Lu line search technique are introduced in this paper. By scaling the first two terms and the third term of the standard BFGS update with different positive parameters, a new two-parameter scaled BFGS method is proposed. In this method, a new value of y_k* is given to guarantee better properties of the new scaled BFGS method. With the Yuan-Wei-Lu line search, the proposed BFGS method is globally convergent. Numerical results indicate that the modified two-parameter scaled BFGS method outperforms the standard BFGS method and even scaled methods of the same type. As for longer-term work, there are several points to consider: (1) are there new values of delta_k, gamma_k, and y_k* that make the BFGS method based on the update formula (17) perform better? (2) Does the new scaled method combined with other line searches also enjoy strong theoretical results? (3) Some new engineering problems based on BFGS-type methods are worth studying.
Data Availability
The data used to support this study are included within this article.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (Grant no. 11661009), the High Level Innovation Teams and Excellent Scholars Program in Guangxi Institutions of Higher Education (Grant no. (2019)52), the Guangxi Natural Science Key Fund (Grant no. 2017GXNSFDA198046), and the Guangxi Natural Science Foundation (Grant no. 2020GXNSFAA159069).
References
 R. H. Byrd, S. L. Hansen, J. Nocedal, and Y. Singer, "A stochastic quasi-Newton method for large-scale optimization," SIAM Journal on Optimization, vol. 26, no. 2, pp. 1008–1031, 2016.
 A. S. Lewis and M. L. Overton, "Nonsmooth optimization via quasi-Newton methods," Mathematical Programming, vol. 141, no. 1-2, pp. 135–163, 2013.
 M. S. Salim and A. I. Ahmed, "A family of quasi-Newton methods for unconstrained optimization problems," Optimization, vol. 67, no. 10, pp. 1717–1727, 2018.
 Z. Wei, G. Li, and L. Qi, "New quasi-Newton methods for unconstrained optimization problems," Applied Mathematics and Computation, vol. 175, no. 2, pp. 1156–1188, 2006.
 Z. Wei, G. Yu, G. Yuan, and Z. Lian, "The superlinear convergence of a modified BFGS-type method for unconstrained optimization," Computational Optimization and Applications, vol. 29, no. 3, pp. 315–332, 2004.
 G. Yuan, Z. Sheng, B. Wang, W. Hu, and C. Li, "The global convergence of a modified BFGS method for nonconvex functions," Journal of Computational and Applied Mathematics, vol. 327, pp. 274–294, 2018.
 W. Zhou and L. Zhang, "Global convergence of the nonmonotone MBFGS method for nonconvex unconstrained minimization," Journal of Computational and Applied Mathematics, vol. 223, no. 1, pp. 40–47, 2009.
 W. Zhou and X. Chen, "Global convergence of a new hybrid Gauss-Newton structured BFGS method for nonlinear least squares problems," SIAM Journal on Optimization, vol. 20, no. 5, pp. 2422–2441, 2010.
 D.-H. Li and M. Fukushima, "A modified BFGS method and its global convergence in nonconvex minimization," Journal of Computational and Applied Mathematics, vol. 129, no. 1-2, pp. 15–35, 2001.
 D.-H. Li and M. Fukushima, "On the global convergence of the BFGS method for nonconvex unconstrained optimization problems," SIAM Journal on Optimization, vol. 11, no. 4, pp. 1054–1064, 2001.
 L. Liu, Z. Wei, and X. Wu, "The convergence of a new modified BFGS method without line searches for unconstrained optimization or complexity systems," Journal of Systems Science and Complexity, vol. 23, no. 4, pp. 861–872, 2010.
 Y. Xiao, Z. Wei, and Z. Wang, "A limited memory BFGS-type method for large-scale unconstrained optimization," Computers & Mathematics with Applications, vol. 56, no. 4, pp. 1001–1009, 2008.
 C. Zhu, R. H. Byrd, P. Lu, and J. Nocedal, "Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization," ACM Transactions on Mathematical Software, vol. 23, no. 4, pp. 550–560, 1997.
 W. Zhou, "A modified BFGS type quasi-Newton method with line search for symmetric nonlinear equations problems," Journal of Computational and Applied Mathematics, vol. 367, Article ID 112454, 2020.
 L. Zhang and H. Tang, "A hybrid MBFGS and CBFGS method for nonconvex minimization with a global complexity bound," Pacific Journal of Optimization, vol. 14, no. 4, pp. 693–702, 2018.
 W. Zhou and L. Zhang, "A modified Broyden-like quasi-Newton method for nonlinear equations," Journal of Computational and Applied Mathematics, vol. 372, Article ID 112744, 2020.
 M. J. D. Powell, "Some global convergence properties of a variable metric algorithm for minimization without exact line searches," SIAM-AMS Proceedings, vol. 9, pp. 53–72, 1976.
 R. H. Byrd, J. Nocedal, and Y.-X. Yuan, "Global convergence of a class of quasi-Newton methods on convex problems," SIAM Journal on Numerical Analysis, vol. 24, no. 5, pp. 1171–1190, 1987.
 L. C. W. Dixon, "Variable metric algorithms: necessary and sufficient conditions for identical behavior of nonquadratic functions," Journal of Optimization Theory and Applications, vol. 10, no. 1, pp. 34–40, 1972.
 A. Griewank, "The global convergence of partitioned BFGS on problems with convex decompositions and Lipschitzian gradients," Mathematical Programming, vol. 50, no. 1–3, pp. 141–175, 1991.
 M. J. D. Powell, "On the convergence of the variable metric algorithm," IMA Journal of Applied Mathematics, vol. 7, no. 1, pp. 21–36, 1971.
 W. F. Mascarenhas, "The BFGS method with exact line searches fails for nonconvex objective functions," Mathematical Programming, vol. 99, no. 1, pp. 49–61, 2004.
 Y.-H. Dai, "Convergence properties of the BFGS algorithm," SIAM Journal on Optimization, vol. 13, no. 3, pp. 693–701, 2002.
 G. Yuan and Z. Wei, "Convergence analysis of a modified BFGS method on convex minimizations," Computational Optimization and Applications, vol. 47, no. 2, pp. 237–255, 2010.
 J. Nocedal, "Theory of algorithms for unconstrained optimization," Acta Numerica, vol. 1, pp. 199–242, 1992.
 S. S. Oren and D. G. Luenberger, "Self-scaling variable metric (SSVM) algorithms, part I: criteria and sufficient conditions for scaling a class of algorithms," Management Science, vol. 20, no. 5, pp. 845–862, 1974.
 J. Nocedal and Y.-X. Yuan, "Analysis of a self-scaling quasi-Newton method," Mathematical Programming, vol. 61, no. 1–3, pp. 19–37, 1993.
 M. Al-Baali, "Analysis of a family of self-scaling quasi-Newton methods," Technical Report, Department of Mathematics and Computer Science, United Arab Emirates University, Al Ain, UAE, 1993.
 Y.-X. Yuan, "A modified BFGS algorithm for unconstrained optimization," IMA Journal of Numerical Analysis, vol. 11, no. 3, pp. 325–332, 1991.
 M. J. D. Powell, "How bad are the BFGS and DFP methods when the objective function is quadratic?" Mathematical Programming, vol. 34, no. 1, pp. 34–47, 1986.
 J. Barzilai and J. M. Borwein, "Two-point step size gradient methods," IMA Journal of Numerical Analysis, vol. 8, no. 1, pp. 141–148, 1988.
 W. Y. Cheng and D. H. Li, "Spectral scaling BFGS method," Journal of Optimization Theory and Applications, vol. 146, no. 2, pp. 305–319, 2010.
 N. Andrei, "An adaptive scaled BFGS method for unconstrained optimization," Numerical Algorithms, vol. 77, no. 2, pp. 413–432, 2017.
 N. Andrei, "A double parameter scaled BFGS method for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 332, pp. 26–44, 2018.
 Y.-H. Dai and C.-X. Kou, "A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search," SIAM Journal on Optimization, vol. 23, no. 1, pp. 296–320, 2013.
 Z. Dai, X. Dong, J. Kang, and L. Hong, "Forecasting stock market returns: new technical indicators and two-step economic constraint method," The North American Journal of Economics and Finance, vol. 53, Article ID 101216, 2020.
 Z. Dai and H. Zhu, "A modified Hestenes-Stiefel-type derivative-free method for large-scale nonlinear monotone equations," Mathematics, vol. 8, no. 2, p. 168, 2020.
 G. Yuan, X. Wang, and Z. Sheng, "Family weak conjugate gradient algorithms and their convergence analysis for nonconvex functions," Numerical Algorithms, vol. 84, no. 3, pp. 935–956, 2020.
 G. Yuan, J. Lu, and Z. Wang, "The PRP conjugate gradient algorithm with a modified WWP line search and its application in the image restoration problems," Applied Numerical Mathematics, vol. 152, pp. 1–11, 2020.
 G. Yuan, T. Li, and W. Hu, "A conjugate gradient algorithm for large-scale nonlinear equations and image restoration problems," Applied Numerical Mathematics, vol. 147, pp. 129–141, 2020.
 G. Yuan, Z. Wei, and Y. Yang, "The global convergence of the Polak-Ribière-Polyak conjugate gradient algorithm under inexact line search for nonconvex functions," Journal of Computational and Applied Mathematics, vol. 362, pp. 262–275, 2019.
 L. Zhang, "A derivative-free conjugate residual method using secant condition for general large-scale nonlinear equations," Numerical Algorithms, vol. 83, no. 4, pp. 1277–1293, 2020.
 W. Zhou, "A short note on the global convergence of the unmodified PRP method," Optimization Letters, vol. 7, no. 6, pp. 1367–1372, 2013.
 W. Sun and Y. Yuan, Optimization Theory and Methods, Springer, New York, NY, USA, 2006.
 R. H. Byrd, D. C. Liu, and J. Nocedal, "On the behavior of Broyden's class of quasi-Newton methods," SIAM Journal on Optimization, vol. 2, no. 4, pp. 533–557, 1992.
 M. J. D. Powell, "Updating conjugate directions by the BFGS formula," Mathematical Programming, vol. 38, no. 1, pp. 29–46, 1987.
 I. Bongartz, A. R. Conn, N. Gould, and P. L. Toint, "CUTE: constrained and unconstrained testing environment," ACM Transactions on Mathematical Software, vol. 21, no. 1, pp. 123–160, 1995.
 J. J. Moré, B. S. Garbow, and K. E. Hillstrom, "Testing unconstrained optimization software," ACM Transactions on Mathematical Software, vol. 7, no. 1, pp. 17–41, 1981.
 Y. Yuan and W. Sun, Theory and Methods of Optimization, Science Press of China, Beijing, China, 1999.
 A. Ouyang, L.-B. Liu, Z. Sheng, and F. Wu, "A class of parameter estimation methods for nonlinear Muskingum model using hybrid invasive weed optimization algorithm," Mathematical Problems in Engineering, vol. 2015, Article ID 573894, 15 pages, 2015.
 A. Ouyang, Z. Tang, K. Li, A. Sallam, and E. Sha, "Estimating parameters of Muskingum model using an adaptive hybrid PSO algorithm," International Journal of Pattern Recognition and Artificial Intelligence, vol. 28, pp. 1–29, 2014.
 Z. W. Geem, "Parameter estimation for the nonlinear Muskingum model using the BFGS technique," Journal of Irrigation and Drainage Engineering, vol. 132, no. 5, pp. 474–478, 2006.
Copyright
Copyright © 2020 Pengyuan Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.