
A Modified Dai–Liao Conjugate Gradient Method with a New Parameter for Solving Image Restoration Problems

Junyu Lu, Yong Li, and Hongtruong Pham

Mathematical Problems in Engineering, vol. 2020, Article ID 6279543, 13 pages, 2020. https://doi.org/10.1155/2020/6279543

Academic Editor: Maojun Zhang
Received: 25 Jul 2020 | Revised: 22 Aug 2020 | Accepted: 29 Aug 2020 | Published: 18 Sep 2020

Abstract

An adaptive choice for the parameter of the Dai–Liao conjugate gradient method is suggested in this paper; it is obtained from a modified quasi-Newton (secant) equation and yields a modified Dai–Liao conjugate gradient method. The proposed method has some interesting features: (i) the parameter of the modified Dai–Liao conjugate gradient method uses both gradient and function value information; (ii) the global convergence of the modified Dai–Liao conjugate gradient method is established under suitable assumptions; (iii) numerical results show that the modified DL method is effective both in practical computation and on image restoration problems.

1. Introduction

In this paper, we consider the following unconstrained optimization problem:
$$\min_{x\in\mathbb{R}^n} f(x), \qquad (1)$$
where $f:\mathbb{R}^n\to\mathbb{R}$ is a continuously differentiable function. Because of their simplicity and low memory requirements [1–5], conjugate gradient (CG) methods [6–11] are useful tools for solving large-scale problems, so we consider solving (1) with conjugate gradient methods. A sequence of iterates $\{x_k\}$ is generated from a given $x_0$ by the simple iteration formula
$$x_{k+1}=x_k+\alpha_k d_k, \qquad (2)$$
where $x_k$ is the $k$-th iteration point, the step size $\alpha_k>0$ is usually chosen to satisfy certain line search conditions, and $d_k$ is the search direction defined by
$$d_k=\begin{cases}-g_k, & k=0,\\[2pt] -g_k+\beta_k d_{k-1}, & k\geq 1,\end{cases} \qquad (3)$$
where $\beta_k$ is a scalar. Many scholars have presented classical formulas for $\beta_k$; the corresponding methods are called the Polak–Ribière–Polyak (PRP) [12, 13], Fletcher–Reeves (FR) [14], Hestenes–Stiefel (HS) [15], conjugate descent (CD) [16], Liu–Storey (LS) [17], and Dai–Yuan (DY) [18] CG methods:
$$\beta_k^{PRP}=\frac{g_k^T(g_k-g_{k-1})}{\|g_{k-1}\|^2},\quad \beta_k^{FR}=\frac{\|g_k\|^2}{\|g_{k-1}\|^2},\quad \beta_k^{HS}=\frac{g_k^T(g_k-g_{k-1})}{d_{k-1}^T(g_k-g_{k-1})},$$
$$\beta_k^{CD}=-\frac{\|g_k\|^2}{d_{k-1}^Tg_{k-1}},\quad \beta_k^{LS}=-\frac{g_k^T(g_k-g_{k-1})}{d_{k-1}^Tg_{k-1}},\quad \beta_k^{DY}=\frac{\|g_k\|^2}{d_{k-1}^T(g_k-g_{k-1})}, \qquad (4)$$
where $g_k=\nabla f(x_k)$ is the gradient of $f$ at the point $x_k$ and $\|\cdot\|$ is the Euclidean norm. The global convergence of the CD, DY, and FR methods is relatively easy to establish, but the numerical results of these methods are not always desirable. Powell [19] explained a major numerical disadvantage of the FR method: subsequent steps can be very short if a small step is generated away from the solution point. If such a poor step occurs in practical computation, the PRP, HS, or LS method essentially performs a restart, so these three methods perform much better than the former three in numerical tests and are generally regarded as the most efficient conjugate gradient methods. Therefore, in recent years, many scholars have tried to construct modified conjugate gradient formulas that are globally convergent for general functions and perform satisfactorily in numerical tests. Among them, Dai and Liao [20] proposed a modified conjugate gradient method based on a new conjugacy condition. Interestingly, their method [20] not only is globally convergent for general functions but also performs better numerically than the HS and PRP methods.
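To make the generic iteration (2)–(3) concrete, the following sketch (not from the paper; a minimal Python/NumPy illustration with a simple Armijo backtracking search standing in for the Wolfe-type line searches used later) shows how the PRP choice in formula (4) drives the iteration:

```python
import numpy as np

def prp_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Minimal nonlinear CG with the PRP beta of formula (4) and Armijo backtracking.

    Illustrative sketch only; the paper itself relies on Wolfe-type line searches.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        if g @ d >= 0:                        # safeguard: restart if not a descent direction
            d = -g
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5                      # simple Armijo backtracking
        x_new = x + alpha * d                 # iteration formula (2)
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (g @ g)  # PRP choice in formula (4)
        d = -g_new + beta * d                 # direction update, formula (3)
        x, g = x_new, g_new
    return x

# usage on a simple strictly convex quadratic
A = np.diag([1.0, 10.0, 100.0])
x_min = prp_cg(lambda x: 0.5 * x @ A @ x, lambda x: A @ x, np.array([1.0, 1.0, 1.0]))
```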

When the CG methods produce a sequence of search directions $\{d_k\}$, the following conjugacy condition holds:
$$d_k^T\nabla^2 f(x)\,d_{k-1}=0, \qquad (5)$$
where $\nabla^2 f(x)$ is the Hessian of the objective function. The vectors $s_{k-1}$ and $y_{k-1}$ are defined as follows:
$$s_{k-1}=x_k-x_{k-1},\qquad y_{k-1}=g_k-g_{k-1}. \qquad (6)$$

If the objective function is a general nonlinear function, then by the mean value theorem we obtain
$$y_{k-1}=\nabla^2 f(\xi_{k-1})\,s_{k-1}, \qquad (7)$$
where $\xi_{k-1}$ lies on the line segment between $x_{k-1}$ and $x_k$. By (5) and (7), we have
$$d_k^Ty_{k-1}=0. \qquad (8)$$

As is well known, Dai and Liao [20] studied (8) in depth on the basis of quasi-Newton techniques. In a quasi-Newton method, $B_k$ is defined as an approximation of the Hessian $\nabla^2 f(x_k)$, and the matrix satisfies the following secant equation:
$$B_ks_{k-1}=y_{k-1}. \qquad (9)$$

In the quasi-Newton method, the search direction is defined by
$$d_k=-B_k^{-1}g_k. \qquad (10)$$

According to the above two equations, we have
$$d_k^Ty_{k-1}=d_k^T\left(B_ks_{k-1}\right)=-g_k^Ts_{k-1}. \qquad (11)$$

Based on the above relations, Dai and Liao proposed the following conjugacy condition:
$$d_k^Ty_{k-1}=-t\,g_k^Ts_{k-1}, \qquad (12)$$
where $t\geq 0$ is a parameter. In the case $t=0$, (12) becomes (8). In the case $t=1$, (12) becomes (11). Moreover, $g_k^Ts_{k-1}=0$ holds under the exact line search, in which case both (11) and (12) coincide with (8). According to the above discussion, (12) can be considered an extension of (8) and (11). Multiplying the definition of $d_k$ in (3) with $y_{k-1}$ and using (12), Dai and Liao introduced the following new formula for $\beta_k$ in the DL method:
$$\beta_k^{DL}=\frac{g_k^Ty_{k-1}}{d_{k-1}^Ty_{k-1}}-t\,\frac{g_k^Ts_{k-1}}{d_{k-1}^Ty_{k-1}}. \qquad (13)$$
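For completeness, the short computation behind (13) is worth spelling out: take the inner product of the direction (3) with $y_{k-1}$ and impose the conjugacy condition (12),
$$d_k^Ty_{k-1}=-g_k^Ty_{k-1}+\beta_k\,d_{k-1}^Ty_{k-1}=-t\,g_k^Ts_{k-1}
\quad\Longrightarrow\quad
\beta_k=\frac{g_k^Ty_{k-1}-t\,g_k^Ts_{k-1}}{d_{k-1}^Ty_{k-1}},$$
which is exactly (13).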

In [20], the conjugate gradient method with (13) is shown to be globally convergent for uniformly convex functions. Furthermore, in order to guarantee global convergence for general functions as well, Dai and Liao presented the following truncated formula:
$$\beta_k^{DL+}=\max\left\{\frac{g_k^Ty_{k-1}}{d_{k-1}^Ty_{k-1}},\,0\right\}-t\,\frac{g_k^Ts_{k-1}}{d_{k-1}^Ty_{k-1}}. \qquad (14)$$

It is easily observed that the values of the first term in (14) are nonnegative. Dai and Liao proved that the modified DL method with (14) is globally convergent for general functions under suitable conditions. In the DL method, different choices of the parameter $t$ lead to different numerical performance. Hager and Zhang [21] presented the following choice for the parameter:
$$t_k=2\,\frac{\|y_{k-1}\|^2}{s_{k-1}^Ty_{k-1}}, \qquad (15)$$
and they proved that the corresponding method in [21] satisfies the sufficient descent condition. In addition, based on a singular value analysis of the scaled memoryless BFGS update, Babaie-Kafaki and Ghanbari [22] presented another adaptive value for $t_k$.

In [22], they proved that the DL method with this parameter gives better numerical results than the methods proposed by Dai and Kou [23]. In the last several years, some scholars have tried to find new choices for the nonnegative parameter $t$ in (13) [24, 25]. The remainder of this paper is organized as follows: in Section 2, based on the new conjugacy condition, a modified DL conjugate gradient method with a new value of the parameter is proposed. We establish the global convergence of the presented method in Section 3. Finally, we report numerical experiments in Section 4.
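Before moving on, a small sketch ties formulas (14) and (15) together. The snippet below (illustrative only; the tiny-denominator safeguards are crude placeholders, not part of any of the cited methods) computes the truncated Dai–Liao coefficient with a pluggable parameter $t$:

```python
import numpy as np

def beta_dl_plus(g_new, g_old, d_old, s_old, t=None):
    """Truncated Dai-Liao beta, formula (14), with a pluggable parameter t.

    If t is None, the Hager-Zhang-type choice t = 2*||y||^2 / (s^T y) of (15)
    is used. Illustrative sketch; safeguards against tiny denominators are crude.
    """
    y = g_new - g_old
    dy = d_old @ y
    if abs(dy) < 1e-16:
        return 0.0                              # fall back to steepest descent
    if t is None:
        t = 2.0 * (y @ y) / max(s_old @ y, 1e-16)
    return max((g_new @ y) / dy, 0.0) - t * (g_new @ s_old) / dy

# the next search direction then follows formula (3):
#   d_new = -g_new + beta_dl_plus(g_new, g_old, d_old, s_old) * d_old
```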

2. New DL Conjugate Gradient Method

It is well known that, near a local minimizer, the objective function can be approximated well by a quadratic model, so, if a point $x_k$ is close enough to a local minimizer, a good direction is the Newton direction:
$$d_k=-\left(\nabla^2 f(x_k)\right)^{-1}g_k. \qquad (17)$$

Hence, by equating the search direction of the CG method, with $\beta_k$ given by (13), to the Newton direction (17), we can compute the value of the parameter $t$; this gives equation (18).

By some algebra, we obtain from (18) the expression (19) for $t_k$, which still involves the Hessian $\nabla^2 f(x_k)$.

However, because computing the Hessian matrix is inefficient for large-scale problems, a quasi-Newton approximation is used in this paper. In quasi-Newton methods [26], the search direction is computed by solving $B_kd_k=-g_k$, where $B_k$ is an approximation of the Hessian $\nabla^2 f(x_k)$ such that the updated matrix satisfies the following important (secant) equation:
$$B_{k+1}s_k=y_k. \qquad (20)$$
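For reference, the classical BFGS formula is the most common way to build an approximation $B_{k+1}$ satisfying the secant equation (20). The sketch below shows the textbook rank-two update (standard BFGS, not the modified update of [27] introduced next):

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of a Hessian approximation B so that B_new @ s = y.

    B : current symmetric positive definite approximation (n x n)
    s : step vector x_{k+1} - x_k
    y : gradient difference g_{k+1} - g_k
    """
    Bs = B @ s
    sy = s @ y
    if sy <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
        return B                    # skip the update if curvature is not positive
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy

# quick check of the secant equation (20): bfgs_update(B, s, y) @ s ≈ y
```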

Theorem 1. Assume that the function $f$ is sufficiently smooth and $\|s_k\|$ is sufficiently small. Then we have
$$s_k^T\nabla^2 f(x_{k+1})s_k-s_k^Ty_k=\tfrac{1}{2}\,s_k^T\bigl(T(x_{k+1})s_k\bigr)s_k+O\bigl(\|s_k\|^4\bigr),$$
where $T(x_{k+1})$ is the tensor of $f$ at $x_{k+1}$ and
$$s_k^T\bigl(T(x_{k+1})s_k\bigr)s_k=\sum_{i,j,l}\frac{\partial^3 f(x_{k+1})}{\partial x^i\,\partial x^j\,\partial x^l}\,s_k^is_k^js_k^l.$$

Proof. Expanding the objective function $f$ and its gradient in Taylor series around $x_{k+1}$, we have
$$f_k=f_{k+1}-g_{k+1}^Ts_k+\tfrac{1}{2}s_k^T\nabla^2 f(x_{k+1})s_k-\tfrac{1}{6}s_k^T\bigl(T(x_{k+1})s_k\bigr)s_k+O(\|s_k\|^4),$$
$$g_k^Ts_k=g_{k+1}^Ts_k-s_k^T\nabla^2 f(x_{k+1})s_k+\tfrac{1}{2}s_k^T\bigl(T(x_{k+1})s_k\bigr)s_k+O(\|s_k\|^4).$$
By the definitions of $s_k$ and $y_k$, we complete the proof.
The BFGS-type methods have been proven to possess ideal global convergence for uniformly convex functions, but they may fail to converge for nonconvex functions. Therefore, a new version of the BFGS method was proposed by Wei et al. [27] to overcome this convergence failure for more general objective functions. In their method, the matrix $B_{k+1}$ is obtained from the following modified secant equation:
$$B_{k+1}s_k=\bar y_k,\qquad \bar y_k=y_k+\frac{\vartheta_k}{\|s_k\|^2}\,s_k, \qquad (24)$$
where $\vartheta_k=2\left(f_k-f_{k+1}\right)+\left(g_k+g_{k+1}\right)^Ts_k$ and $f_k=f(x_k)$. In order to inherit the powerful theoretical and numerical properties of the modified BFGS method, we use (24) to simplify (19) and obtain the new parameter $t_k$ given in (25). Then, to ensure that the new DL method satisfies the descent condition, we present a modified value of the parameter similar to [28], given in (26); the corresponding search direction is given in (27). Based on the above discussion, we can collect some of the advantages of these formulas: (i) the new DL method uses information about the function values; (ii) global convergence for nonconvex functions is established under suitable assumptions; (iii) the new formulas are applied to the engineering Muskingum model and to image restoration problems. Finally, we end this section by presenting the modified DL algorithm, which is listed in Algorithm 1.

Step 0: choose an initial point $x_0\in\mathbb{R}^n$ and constants $\varepsilon>0$, $\delta\in(0,1/2)$, $\delta_1\in(0,\delta)$, $\sigma\in(\delta,1)$. Set $d_0=-g_0$ and $k:=0$.
Step 1: stop if $\|g_k\|\leq\varepsilon$.
Step 2: determine a step size $\alpha_k$ satisfying the following modified Wolfe–Powell line search of [4]:
$$f(x_k+\alpha_kd_k)\leq f(x_k)+\delta\alpha_kg_k^Td_k+\alpha_k\min\left\{-\delta_1g_k^Td_k,\ \tfrac{\delta\alpha_k\|d_k\|^2}{2}\right\}$$
and
$$g(x_k+\alpha_kd_k)^Td_k\geq\sigma g_k^Td_k+\min\left\{-\delta_1g_k^Td_k,\ \tfrac{\delta\alpha_k\|d_k\|^2}{2}\right\},$$
where $\delta\in(0,1/2)$, $\delta_1\in(0,\delta)$, and $\sigma\in(\delta,1)$.
Step 3: let $x_{k+1}=x_k+\alpha_kd_k$.
Step 4: if $\|g_{k+1}\|\leq\varepsilon$, then the modified DL algorithm stops.
Step 5: calculate the search direction $d_{k+1}$ by (27).
Step 6: set $k:=k+1$ and go to Step 2.
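To connect the steps of Algorithm 1, the following Python/NumPy sketch implements the same loop structure with a Dai–Liao-type direction of the truncated form (14). The adaptive parameter of formula (26) is left as a user-supplied callable `t_rule` (a hypothetical placeholder), and the modified Wolfe–Powell line search of Step 2 is replaced by a plain Armijo backtracking search for brevity, so this illustrates the flow rather than the authors' exact method:

```python
import numpy as np

def dl_type_cg(f, grad, x0, t_rule, eps=1e-6, max_iter=1000):
    """Skeleton of a Dai-Liao-type CG loop in the spirit of Algorithm 1.

    t_rule(g_new, y, s, f_old, f_new) -> nonnegative parameter t_k
    (stand-in for the adaptive choice (26) of the paper).
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                        # Step 0
    for k in range(max_iter):
        if np.linalg.norm(g) <= eps:              # Step 1 / Step 4
            break
        if g @ d >= 0:                            # safeguard: ensure descent
            d = -g
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * (g @ d):
            alpha *= 0.5                          # Step 2 (simplified line search)
        x_new = x + alpha * d                     # Step 3
        g_new, f_new = grad(x_new), f(x_new)
        s, y = x_new - x, g_new - g
        dy = d @ y
        if abs(dy) > 1e-16:
            t = max(t_rule(g_new, y, s, fx, f_new), 0.0)
            beta = max((g_new @ y) / dy, 0.0) - t * (g_new @ s) / dy
        else:
            beta = 0.0
        d = -g_new + beta * d                     # Step 5 (DL-type direction)
        x, g = x_new, g_new                       # Step 6
    return x

# example with the Hager-Zhang-type rule (15) as a stand-in for (26)
hz_rule = lambda g, y, s, f_old, f_new: 2.0 * (y @ y) / max(s @ y, 1e-16)
```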

3. Convergence Analysis

In what follows, we state some indispensable assumptions on the objective function that are needed for the global convergence of the algorithm.

Assumption 1. (i) The level set $\Omega=\{x\in\mathbb{R}^n : f(x)\leq f(x_0)\}$ is bounded.
(ii) In some neighbourhood $\Omega_0$ of $\Omega$, $f$ is differentiable and its gradient is Lipschitz continuous, namely,
$$\|g(x)-g(y)\|\leq L\|x-y\|, \qquad (28)$$
where $L>0$ is a constant and $x,y\in\Omega_0$ are arbitrary.

Remark 1. Assumption 1 implies that there exists a constant $G>0$ satisfying
$$\|g(x)\|\leq G,\qquad \forall x\in\Omega. \qquad (29)$$

Remark 2. In order to establish the convergence of the new method, we assume that the newly proposed parameter $t_k$ is bounded. For this purpose, we require
$$t_k\leq H, \qquad (30)$$
where $H$ is a large positive constant.

Lemma 1. Consider the proposed modified DL method with the search direction in the form of (27). If the line search procedure guarantees $d_{k-1}^Ty_{k-1}>0$ for all $k$, then we have
$$g_k^Td_k\leq -c\|g_k\|^2, \qquad (31)$$
where $c>0$ is a constant.

Proof. The proof is similar to that of Theorem 1 in [29].

Lemma 2. Suppose that Assumption 1 holds. Consider the proposed modified DL method with the direction (27) and the step size $\alpha_k$ obtained by the line search conditions in Step 2 of Algorithm 1. If $f$ is a (possibly nonconvex) function on the level set $\Omega$, then we have
$$\sum_{k=0}^{\infty}\frac{\left(g_k^Td_k\right)^2}{\|d_k\|^2}<\infty. \qquad (32)$$

Proof. From Lemma 1, $g_k^Td_k<0$ holds. By the second line search condition in Step 2 of Algorithm 1 and (28), we obtain a lower bound on the step size, so inequality (34) holds. From the first line search condition in Step 2 of Algorithm 1, we obtain a decrease estimate for $f$. Summing both sides of these inequalities from $k=0$ to $\infty$ and combining the result with (34), we see that (32) holds, which completes the proof.

Lemma 3. Suppose that Assumption 1 holds and the sequence $\{x_k\}$ is generated by Algorithm YWLDL (Algorithm 1). If
$$\|g_k\|\geq\varepsilon_0>0,\qquad \forall k\geq 0, \qquad (37)$$
then we obtain
$$\sum_{k=0}^{\infty}\frac{1}{\|d_k\|^2}<\infty. \qquad (38)$$

Proof. From Lemma 1 and (37), we have $g_k^Td_k\leq -c\|g_k\|^2\leq -c\,\varepsilon_0^2$ for all $k$. Combining this with Lemma 2 yields
$$\sum_{k=0}^{\infty}\frac{1}{\|d_k\|^2}<\infty,$$
which is (38). The proof is complete.

Theorem 2. Suppose that Assumption 1 holds. Consider the proposed modified DL method with the direction (27) and the step size $\alpha_k$ obtained by the line search conditions in Step 2 of Algorithm 1. If $f$ is a (possibly nonconvex) function on the level set $\Omega$, then we obtain
$$\liminf_{k\to\infty}\|g_k\|=0. \qquad (41)$$

Proof. First, suppose that equation (41) is not true; namely, equation (37) holds. From the line search conditions in Step 2 of Algorithm 1, in case 1 the corresponding inequality holds and, combined with (36), gives the required estimate. In case 2, similar to the proof of Theorem 3.1 in [30], relation (43) holds, and from (36) and (43) the required estimate also holds in case 2. Thus, (46) holds in both cases. From the second line search condition in Step 2 of Algorithm 1, together with (31) and (37), we obtain (48). Combining (30), (46), and (48), and using (27), we obtain an upper bound on $\|d_k\|^2$ that implies $\sum_{k\geq 0}1/\|d_k\|^2=\infty$. This contradicts (38), so the proof is completed.

4. Numerical Experiments

This section reports numerical results. We carry out numerical experiments to investigate the performance of the proposed algorithm and to compare it with other methods. In the following subsections, all numerical tests were run on a machine with a 2.30 GHz CPU and 8.00 GB of memory under the Windows 10 operating system.

4.1. Normal Unconstrained Optimization Problems

In this subsection, we test the algorithms on the unconstrained optimization problems listed in Table 1. We report on the numerical behaviour of the presented method (the YWLDL algorithm) with the modified Wolfe–Powell line search (YWL) on these problems and compare its performance with that of the following algorithms:
HZ algorithm: the DL method with the parameter of [21] under the weak Wolfe–Powell line search.
BG algorithm: the DL method with the parameter of [22] under the weak Wolfe–Powell line search.
DL+ algorithm: the DL method with the parameter of [20] under the strong Wolfe–Powell line search.


Table 1: Test problems.

Nr.  Test problem
1    Extended Freudenstein and Roth function
2    Extended trigonometric function
3    Extended Rosenbrock function
4    Extended White and Holst function
5    Extended Beale function
6    Extended penalty function
7    Perturbed quadratic function
8    Raydan 1 function
9    Raydan 2 function
10   Diagonal 1 function
11   Diagonal 2 function
12   Diagonal 3 function
13   Hager function
14   Generalized tridiagonal 1 function
15   Extended tridiagonal 1 function
16   Extended three exponential terms function
17   Generalized tridiagonal 2 function
18   Diagonal 4 function
19   Diagonal 5 function
20   Extended Himmelblau function
21   Generalized PSC1 function
22   Extended PSC1 function
23   Extended Powell function
24   Extended block diagonal BD1 function
25   Extended Maratos function
26   Extended Cliff function
27   Quadratic diagonal perturbed function
28   Extended Wood function
29   Extended Hiebert function
30   Quadratic function QF1 function
31   Extended quadratic penalty QP1 function
32   Extended quadratic penalty QP2 function
33   A quadratic function QF2 function
34   Extended EP1 function
35   Extended tridiagonal-2 function
36   BDQRTIC function (CUTE)
37   TRIDIA function (CUTE)
38   ARWHEAD function (CUTE)
39   ARWHEAD function (CUTE)
40   NONDQUAR function (CUTE)
41   DQDRTIC function (CUTE)
42   EG2 function (CUTE)
43   DIXMAANA function (CUTE)
44   DIXMAANB function (CUTE)
45   DIXMAANC function (CUTE)
46   DIXMAANE function (CUTE)
47   Partial perturbed quadratic function
48   Broyden tridiagonal function
49   Almost perturbed quadratic function
50   Tridiagonal perturbed quadratic function
51   EDENSCH function (CUTE)
52   VARDIM function (CUTE)
53   STAIRCASE S1 function
54   LIARWHD function (CUTE)
55   DIAGONAL 6 function
56   DIXON3DQ function (CUTE)
57   DIXMAANF function (CUTE)
58   DIXMAANG function (CUTE)
59   DIXMAANH function (CUTE)
60   DIXMAANI function (CUTE)
61   DIXMAANJ function (CUTE)
62   DIXMAANK function (CUTE)
63   DIXMAANL function (CUTE)
64   DIXMAAND function (CUTE)
65   ENGVAL1 function (CUTE)
66   FLETCHCR function (CUTE)
67   COSINE function (CUTE)
68   Extended DENSCHNB function (CUTE)
69   DENSCHNF function (CUTE)
70   SINQUAD function (CUTE)
71   BIGGSB1 function (CUTE)
72   Partial perturbed quadratic PPQ2 function
73   Scaled quadratic SQ1 function
74   Scaled quadratic SQ2 function

In this part of the numerical experiments, we specify the stopping rules, dimensions, and parameters as follows:
Stopping rules (the Himmelblau stopping rule [31]): if $|f(x_k)|>e_1$, let $\mathrm{stop1}=\frac{|f(x_k)-f(x_{k+1})|}{|f(x_k)|}$; otherwise, let $\mathrm{stop1}=|f(x_k)-f(x_{k+1})|$. The algorithm stops if $\|g(x_k)\|<\varepsilon$ or $\mathrm{stop1}<e_2$ is satisfied, where $e_1$ and $e_2$ are small positive tolerances. The algorithm also stops if the number of iterations exceeds 1000.
Dimensions: 3000, 6000, and 9000 variables.
Parameters: the line search parameters $\delta$, $\delta_1$, and $\sigma$ are chosen as in the modified Wolfe–Powell line search of [4].
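A small helper illustrating how the Himmelblau-type stopping rule combines the relative or absolute function decrease with the gradient test (the tolerance values used here are illustrative placeholders, not the exact values of the experiments):

```python
import numpy as np

def himmelblau_stop(f_k, f_next, g_k, e1=1e-5, e2=1e-5, grad_tol=1e-6):
    """Himmelblau-type stopping test; tolerance values are illustrative."""
    if abs(f_k) > e1:
        stop1 = abs(f_k - f_next) / abs(f_k)   # relative decrease of f
    else:
        stop1 = abs(f_k - f_next)              # absolute decrease of f
    return np.linalg.norm(g_k) < grad_tol or stop1 < e2
```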

The columns of Table 1 have the following meanings:
Nr.: the number of the tested problem.
Test problem: the name of the problem.

The comparison data are as follows:
NI: the number of iterations.
NFG: the total number of function and gradient evaluations.
CPU: the CPU time spent by the algorithm, in seconds.

We used the tool presented by Dolan and Moré [32] to analyse the efficiency of the YWLDL, HZ, BG, and DL+ algorithms in these numerical reports; Figures 1–3 show the performance profiles relative to the CPU time, NI, and NFG, respectively. Figure 1 shows that YWLDL solves about 35 percent of the test problems with the best CPU time, HZ about 18 percent, BG about 24 percent, and DL+ about 6 percent. We can conclude that YWLDL is more competitive than the HZ, BG, and DL+ algorithms, since its performance curve corresponding to the CPU time is the best. Figures 2 and 3 show that the robustness of the YWLDL algorithm is somewhat worse than that of the BG algorithm, although the YWLDL algorithm successfully solves most of the test problems. Altogether, the experimental results make it clear that the YWLDL algorithm is efficient and that the YWLDL algorithm with the modified Wolfe–Powell line search is competitive with the other three algorithms on these test problems.
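For readers unfamiliar with the tool of Dolan and Moré [32], the following sketch (with made-up example data) computes the performance ratios and the profile curve that Figures 1–3 plot:

```python
import numpy as np

def performance_profile(times):
    """Dolan-More performance profile.

    times: array of shape (n_problems, n_solvers); np.inf marks a failure.
    Returns the tau grid and rho(tau) of shape (n_tau, n_solvers).
    """
    times = np.asarray(times, dtype=float)
    best = times.min(axis=1, keepdims=True)           # best time per problem
    ratios = times / best                              # performance ratios r_{p,s}
    taus = np.linspace(1.0, np.nanmax(ratios[np.isfinite(ratios)]), 200)
    rho = np.array([[np.mean(ratios[:, s] <= t) for s in range(times.shape[1])]
                    for t in taus])
    return taus, rho

# made-up CPU times for 4 problems and 2 solvers (np.inf = failure)
taus, rho = performance_profile([[1.0, 2.0], [3.0, 1.5], [2.0, np.inf], [0.5, 0.6]])
```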

4.2. Muskingum Model in Engineering Problems

Optimization algorithms are widely used in engineering, and many scholars aim to present effective algorithms for such engineering problems. Parameter estimation is an important tool in the study of a well-known hydrologic engineering application, the nonlinear Muskingum model. The Muskingum model, which depends on the water inflow and outflow, is a popular flood routing model defined as follows.

The nonlinear Muskingum model [33] is defined at time $t_i$ ($i=1,2,\ldots,n$), where $n$ denotes the total number of time periods; $x_1$, $x_2$, and $x_3$ denote the storage time constant, the weighting factor, and an additional parameter, respectively; $\Delta t$ denotes the time step; $I_i$ denotes the observed inflow discharge; and $O_i$ denotes the observed outflow discharge. We use actual observed data of the flood runoff process at Chenggouwan and Linqing of Nanyunhe in the Haihe Basin, Tianjin, China, with a prescribed time step and initial point. The detailed data for $I_i$ and $O_i$ in 1960, 1961, and 1964 can be found in [34]. The final points obtained in the tests are listed in Table 2.


Table 2: Estimated parameters of the Muskingum model.

Algorithm        x1        x2        x3
BFGS [35]        10.8156   0.9826    1.0219
HIWO [33]        13.2813   0.8001    0.9933
Algorithm NDL    11.1937   0.9999    0.9993

From Figures 4–6, at least three conclusions can be drawn: (i) Like the BFGS method and the HIWO method, the presented algorithm provides a good approximation to these data, so the YWLDL algorithm can effectively be used to study this nonlinear Muskingum model. (ii) In the parameter estimation of the Muskingum model, the YWLDL algorithm shows good approximation quality. (iii) The final points ($x_1$, $x_2$, and $x_3$) are competitive with those of similar algorithms.
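To show how the Muskingum calibration fits the unconstrained form (1), the sketch below uses one common least-squares formulation with the storage model $S_i=x_1\,[x_2 I_i+(1-x_2)O_i]^{x_3}$ and a trapezoidal mass balance. This particular objective is an assumption made here for illustration and is not necessarily the exact model used in the paper:

```python
import numpy as np

def muskingum_objective(x, inflow, outflow, dt):
    """Least-squares calibration objective for a nonlinear Muskingum model.

    x = (x1, x2, x3): storage constant, weighting factor, extra exponent.
    Storage model: S_i = x1 * (x2 * I_i + (1 - x2) * O_i) ** x3.
    The residual compares the storage change with the mass balance
    dS/dt ≈ inflow - outflow on each interval (a common textbook
    formulation, assumed here for illustration).
    """
    x1, x2, x3 = x
    I, O = np.asarray(inflow, float), np.asarray(outflow, float)
    S = x1 * (x2 * I + (1.0 - x2) * O) ** x3
    dS = np.diff(S)
    mass_balance = 0.5 * dt * ((I[:-1] - O[:-1]) + (I[1:] - O[1:]))  # trapezoidal rule
    return np.sum((dS - mass_balance) ** 2)

# the resulting smooth objective of (x1, x2, x3) can be minimized by Algorithm 1
```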

4.3. Image Restoration Problems

In this subsection, we use the proposed algorithm to recover original images from images corrupted by impulse (salt-and-pepper) noise, a task of practical significance in the field of optimization. The selection of parameters is similar to that in Section 4.1, and the algorithm stops when the prescribed stopping condition holds. The experiments use Barbara, Baboon, and Lena as the test images. The detailed restoration results are shown in Figures 7–9, where the HZ algorithm, the BG algorithm, the DL+ algorithm, and the YWLDL algorithm are all tested on the same images. The CPU time spent by each algorithm is listed in Table 3 to compare the YWLDL algorithm with the other algorithms.


Table 3: CPU time (in seconds) for restoring the test images.

25% noise        Barbara    Baboon     Lena       Total
HZ algorithm     0.843750   1.218750   1.046875   3.109375
BG algorithm     1.000000   0.921875   1.031250   2.953125
DL+ algorithm    1.015625   0.953125   0.953125   2.921875
YWLDL algorithm  0.937500   0.906250   0.937500   2.781250

45% noise        Barbara    Baboon     Lena       Total
HZ algorithm     1.328125   1.312500   1.343750   3.984375
BG algorithm     1.359375   1.285250   1.187500   3.832125
DL+ algorithm    1.250000   1.265625   1.281250   3.796875
YWLDL algorithm  1.203125   1.125000   1.156250   3.484375

65% noise        Barbara    Baboon     Lena       Total
HZ algorithm     1.812500   1.906250   1.671875   5.390625
BG algorithm     1.921875   1.546875   1.781500   5.250250
DL+ algorithm    1.656250   1.578125   1.765625   5.000000
YWLDL algorithm  1.546875   1.500000   1.562500   4.609375

From Table 3 and Figures 7–9, we may draw the following conclusions: (i) Restoring an image with 25% noise clearly takes less time than restoring one with 65% noise. (ii) As the salt-and-pepper noise level increases, the cost of restoring the image increases. (iii) The YWLDL algorithm has an advantage over the HZ, BG, and DL+ algorithms on image restoration problems.
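To make the restoration setting of this subsection concrete, the following simplified sketch shows how salt-and-pepper corruption is generated and how the restoration of the detected noisy pixels can be cast as a smooth problem of the form (1). The edge-preserving objective below is a generic stand-in (an assumption for illustration), not the exact restoration model of the paper:

```python
import numpy as np

def add_salt_pepper(img, ratio, seed=0):
    """Corrupt a [0, 255] grayscale image with salt-and-pepper noise."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(float).copy()
    mask = rng.random(img.shape) < ratio               # pixels hit by impulse noise
    noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))
    return noisy, mask

def restoration_objective(u_free, noisy, mask, eps=1e-2):
    """Smooth edge-preserving objective over the detected noisy pixels.

    u_free holds the values of the pixels flagged in `mask`; clean pixels stay
    fixed, and phi(t) = sqrt(t^2 + eps) smooths the absolute value, so the
    whole function is a smooth instance of problem (1).
    """
    u = noisy.copy()
    u[mask] = u_free
    phi = lambda t: np.sqrt(t * t + eps)
    total = 0.0
    for shift in ((0, 1), (1, 0), (0, -1), (-1, 0)):    # 4-neighbourhood differences
        total += phi((u - np.roll(u, shift, axis=(0, 1)))[mask]).sum()
    return total

# the smooth map u_free -> restoration_objective(u_free, noisy, mask) can then be
# minimized with a conjugate gradient method such as Algorithm 1.
```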

5. Conclusion

In this paper, the YWLDL conjugate gradient algorithm, which combines a new Dai–Liao parameter with the modified WWP line search technique, is proposed. Studying other ways of constructing the parameter $t$ remains an interesting line of work, and we believe that modified DL methods with other parameters under the modified WWP line search technique can also solve the Muskingum model; we may pursue this work in the future.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 11661009), the High Level Innovation Teams and Excellent Scholars Program in Guangxi Institutions of Higher Education (Grant No. [2019]52), the Guangxi Natural Science Foundation (2020GXNSFAA159069), and the Guangxi Natural Science Key Fund (No. 2017GXNSFDA198046).

References

  1. E. G. Birgin and J. M. Martínez, “A spectral conjugate gradient method for unconstrained optimization,” Applied Mathematics and Optimization, vol. 43, no. 2, pp. 117–128, 2001.
  2. A. I. Cohen, “Rate of convergence of several conjugate gradient algorithms,” SIAM Journal on Numerical Analysis, vol. 9, no. 2, pp. 248–259, 1972.
  3. Z. Dai and H. Zhu, “A modified Hestenes-Stiefel-type derivative-free method for large-scale nonlinear monotone equations,” Mathematics, vol. 168, pp. 135–164, 2020.
  4. G. Yuan, Z. Wei, and X. Lu, “Global convergence of BFGS and PRP methods under a modified weak Wolfe-Powell line search,” Applied Mathematical Modelling, vol. 47, pp. 811–825, 2017.
  5. Y.-X. Yuan, “Analysis on the conjugate gradient method,” Optimization Methods and Software, vol. 2, no. 1, pp. 19–29, 1993.
  6. Y.-H. Dai, “New properties of a nonlinear conjugate gradient method,” Numerische Mathematik, vol. 89, no. 1, pp. 83–98, 2001.
  7. J. Nocedal and S. J. Wright, Numerical Optimization, Springer Series in Operations Research, Springer, Berlin, Germany, 2nd edition, 2006.
  8. Z. Shi, “Restricted PR conjugate gradient method and its global convergence,” Advances in Mathematics, vol. 31, no. 1, pp. 47–55, 2002.
  9. G. Yuan, T. Li, and W. Hu, “A conjugate gradient algorithm for large-scale nonlinear equations and image restoration problems,” Applied Numerical Mathematics, vol. 147, pp. 129–141, 2020.
  10. G. Yuan, J. Lu, and Z. Wang, “The PRP conjugate gradient algorithm with a modified WWP line search and its application in the image restoration problems,” Applied Numerical Mathematics, vol. 152, pp. 1–11, 2020.
  11. G. Yuan, X. Wang, and Z. Sheng, “Family weak conjugate gradient algorithms and their convergence analysis for nonconvex functions,” Numerical Algorithms, vol. 84, no. 3, pp. 935–956, 2020.
  12. E. Polak and G. Ribière, “Note sur la convergence de méthodes de directions conjuguées,” Revue Française d’Informatique et de Recherche Opérationnelle, Série Rouge, vol. 3, no. 16, pp. 35–43, 1969.
  13. B. T. Polyak, “The conjugate gradient method in extremal problems,” USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
  14. R. Fletcher and C. M. Reeves, “Function minimization by conjugate gradients,” The Computer Journal, vol. 7, no. 2, pp. 149–154, 1964.
  15. M. R. Hestenes and E. Stiefel, “Methods of conjugate gradients for solving linear systems,” Journal of Research of the National Bureau of Standards, vol. 49, no. 6, pp. 409–436, 1952.
  16. R. Fletcher, Practical Methods of Optimization, Vol. I: Unconstrained Optimization, Wiley, New York, NY, USA, 2nd edition, 1997.
  17. Y. Liu and C. Storey, “Efficient generalized conjugate gradient algorithms, part 1: theory,” Journal of Optimization Theory and Applications, vol. 69, no. 1, pp. 129–137, 1991.
  18. Y. H. Dai and Y. Yuan, “A nonlinear conjugate gradient method with a strong global convergence property,” SIAM Journal on Optimization, vol. 10, no. 1, pp. 177–182, 1999.
  19. M. J. D. Powell, “Convergence properties of algorithms for nonlinear optimization,” SIAM Review, vol. 28, no. 4, pp. 487–500, 1986.
  20. Y.-H. Dai and L.-Z. Liao, “New conjugacy conditions and related nonlinear conjugate gradient methods,” Applied Mathematics and Optimization, vol. 43, no. 1, pp. 87–101, 2001.
  21. W. W. Hager and H. Zhang, “A new conjugate gradient method with guaranteed descent and an efficient line search,” SIAM Journal on Optimization, vol. 16, no. 1, pp. 170–192, 2005.
  22. S. Babaie-Kafaki and R. Ghanbari, “A class of adaptive Dai-Liao conjugate gradient methods based on the scaled memoryless BFGS update,” 4OR, vol. 15, no. 1, pp. 85–92, 2017.
  23. Y.-H. Dai and C.-X. Kou, “A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search,” SIAM Journal on Optimization, vol. 23, no. 1, pp. 296–320, 2013.
  24. M. Fatemi, “A new efficient conjugate gradient method for unconstrained optimization,” Journal of Computational and Applied Mathematics, vol. 300, pp. 207–216, 2016.
  25. M. Fatemi, “An optimal parameter for Dai-Liao family of conjugate gradient methods,” Journal of Optimization Theory and Applications, vol. 169, no. 2, pp. 587–605, 2016.
  26. Z. Wei, G. Yu, G. Yuan, and Z. Lian, “The superlinear convergence of a modified BFGS-type method for unconstrained optimization,” Computational Optimization and Applications, vol. 29, no. 3, pp. 315–332, 2004.
  27. Z. Wei, G. Li, and L. Qi, “New quasi-Newton methods for unconstrained optimization problems,” Applied Mathematics and Computation, vol. 175, no. 2, pp. 1156–1188, 2006.
  28. Z. Aminifard and S. Babaie-Kafaki, “An optimal parameter choice for the Dai-Liao family of conjugate gradient methods by avoiding a direction of the maximum magnification by the search direction matrix,” 4OR — A Quarterly Journal of Operations Research, vol. 17, pp. 1–14, 2019.
  29. S. Babaie-Kafaki, “On the sufficient descent condition of the Hager-Zhang conjugate gradient methods,” 4OR, vol. 12, no. 3, pp. 285–292, 2014.
  30. G. Yuan, Z. Wei, and Y. Yang, “The global convergence of the Polak-Ribière-Polyak conjugate gradient algorithm under inexact line search for nonconvex functions,” Journal of Computational and Applied Mathematics, vol. 362, pp. 262–275, 2019.
  31. Y. Yuan and W. Sun, Theory and Methods of Optimization, Science Press of China, Beijing, China, 1999.
  32. E. D. Dolan and J. J. Moré, “Benchmarking optimization software with performance profiles,” Mathematical Programming, vol. 91, no. 2, pp. 201–213, 2002.
  33. A. Ouyang, L. Liu, Z. Sheng, and F. Wu, “A class of parameter estimation methods for nonlinear Muskingum model using hybrid invasive weed optimization algorithm,” Mathematical Problems in Engineering, vol. 2015, Article ID 573894, 15 pages, 2015.
  34. A. Ouyang, Z. Tang, K. Li, A. Sallam, and E. Sha, “Estimating parameters of Muskingum model using an adaptive hybrid PSO algorithm,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 28, Article ID 1459003, 29 pages, 2014.
  35. Z. W. Geem, “Parameter estimation for the nonlinear Muskingum model using the BFGS technique,” Journal of Irrigation and Drainage Engineering, vol. 132, no. 5, pp. 474–478, 2006.

Copyright © 2020 Junyu Lu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

