Journal of Mathematics


Research Article | Open Access


Eman T. Hamed, Rana Z. Al-Kawaz, Abbas Y. Al-Bayati, "New Investigation for the Liu-Story Scaled Conjugate Gradient Method for Nonlinear Optimization", Journal of Mathematics, vol. 2020, Article ID 3615208, 12 pages, 2020. https://doi.org/10.1155/2020/3615208

New Investigation for the Liu-Story Scaled Conjugate Gradient Method for Nonlinear Optimization

Academic Editor: Hang Xu
Received: 30 Sep 2019
Revised: 18 Nov 2019
Accepted: 13 Dec 2019
Published: 25 Jan 2020

Abstract

This article considers modified formulas for the standard conjugate gradient (CG) technique proposed by Li and Fukushima. A new scalar parameter for this CG technique for unconstrained optimization is proposed. The descent condition and the global convergence property are established under the strong Wolfe conditions. Our numerical experiments show that the newly proposed algorithms are more stable and economical than some well-known standard CG methods.

1. Introduction

Conjugate gradient (CG) methods form a class of nonlinear optimization algorithms that require low memory and possess strong local and global convergence properties [1, 2]. Typically, a CG method is designed to solve the large-scale nonlinear optimization problem:

$$\min_{x \in \mathbb{R}^n} f(x), \qquad (1)$$
where the objective function $f : \mathbb{R}^n \to \mathbb{R}$ is a smooth nonlinear function. The iterative formula has the form
$$x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots \qquad (2)$$

The most important components of this formula are the step-size $\alpha_k > 0$ and the search direction $d_k$, which is defined by
$$d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k d_{k-1}, & k \geq 1, \end{cases} \qquad (3)$$
where $g_k$ denotes the gradient $\nabla f(x_k)$ and $\beta_k$ denotes a scalar parameter. The step-size $\alpha_k$ is usually chosen to satisfy an inexact line search condition [3]. Among these conditions, the strong Wolfe line search is often stated as follows:
$$f(x_k + \alpha_k d_k) \leq f(x_k) + \delta \alpha_k g_k^T d_k, \qquad (4)$$
$$|g(x_k + \alpha_k d_k)^T d_k| \leq \sigma |g_k^T d_k|, \qquad (5)$$
with $0 < \delta < \sigma < 1$. There are many different formulas for the conjugacy coefficient; e.g., Hestenes and Stiefel, HS [4]; Fletcher and Reeves, FR [5]; Polak and Ribière, PR [6]; Conjugate Descent, CD [7]; Li and Fukushima, LF [1]; and Liu and Storey, LS [8] correspond to different choices of the scalar parameter $\beta_k$.
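As a concrete illustration of iteration (2) and the strong Wolfe tests (4)-(5), the following sketch checks candidate step-sizes along a descent direction on a small quadratic. The test function, the sampling-based step search, and the constants delta and sigma are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch (not the paper's code) of the CG iteration (2) with a
# strong-Wolfe acceptance test as in (4)-(5), on a 2-D quadratic.

def f(x):
    # f(x) = x1^2 + 2*x2^2 - 2*x1 - 4*x2  (hypothetical test function)
    return x[0]**2 + 2*x[1]**2 - 2*x[0] - 4*x[1]

def grad(x):
    return [2*x[0] - 2, 4*x[1] - 4]

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def wolfe_ok(x, d, alpha, delta=1e-4, sigma=0.1):
    """Strong Wolfe conditions (4) and (5), with 0 < delta < sigma < 1."""
    g = grad(x)
    xn = [xi + alpha*di for xi, di in zip(x, d)]
    suff_decrease = f(xn) <= f(x) + delta*alpha*dot(g, d)      # condition (4)
    curvature = abs(dot(grad(xn), d)) <= sigma*abs(dot(g, d))  # condition (5)
    return suff_decrease and curvature

def strong_wolfe_step(x, d, lo=0.0, hi=2.0, tries=50):
    """Crude search: sample step-sizes and return the first passing (4)-(5)."""
    for i in range(1, tries + 1):
        alpha = lo + (hi - lo)*i/tries
        if wolfe_ok(x, d, alpha):
            return alpha
    return None

x = [0.0, 0.0]
d = [-gi for gi in grad(x)]          # first direction: steepest descent
alpha = strong_wolfe_step(x, d)
x_new = [xi + alpha*di for xi, di in zip(x, d)]   # iteration (2)
```

The accepted step lands close to the exact one-dimensional minimizer along d, which is what the curvature condition (5) enforces.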

2. A New Scalar Formula for the Parameter

In this part of the article, we propose a new version of the parameter $\beta_k$ by relying on the modified BFGS method proposed by Li and Fukushima [1]. In the BFGS method, the matrix $B_k$ is updated by the following formula [9]:
$$B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k}, \qquad (6)$$
where $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$. In addition, the usual secant relation is given by the following formula:
$$B_{k+1} s_k = y_k. \qquad (7)$$
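The secant relation can be verified numerically: applying the standard BFGS formula always maps $s_k$ to $y_k$ exactly. The following sketch (with arbitrary illustrative 2×2 data, not from the paper) demonstrates this.

```python
# Small numerical check (illustrative, not the paper's code) that the BFGS
# update satisfies the secant relation B_{k+1} s_k = y_k.

def matvec(B, v):
    return [sum(B[i][j]*v[j] for j in range(len(v))) for i in range(len(B))]

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def bfgs_update(B, s, y):
    """B_{k+1} = B_k - (B_k s s^T B_k)/(s^T B_k s) + (y y^T)/(y^T s)."""
    Bs = matvec(B, s)
    sBs = dot(s, Bs)
    ys = dot(y, s)          # must be > 0 for positive definiteness
    n = len(s)
    return [[B[i][j] - Bs[i]*Bs[j]/sBs + y[i]*y[j]/ys
             for j in range(n)] for i in range(n)]

B = [[2.0, 0.0], [0.0, 3.0]]   # current positive definite approximation
s = [1.0, 0.5]                  # s_k = x_{k+1} - x_k  (assumed data)
y = [1.5, 1.0]                  # y_k = g_{k+1} - g_k, with y^T s > 0

B_new = bfgs_update(B, s, y)
residual = [bi - yi for bi, yi in zip(matvec(B_new, s), y)]
# residual is numerically zero: the update enforces B_{k+1} s_k = y_k
```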

Li and Fukushima presented a modified BFGS technique that is globally and superlinearly convergent even without requiring a convex objective function. The corresponding modified secant equation is defined by the formula
$$B_{k+1} s_k = y_k^*, \qquad (8)$$
where $y_k^*$ is obtained from $y_k$ and $s_k$ through a positive parameter $t_k$ that involves a constant $c$. Specifically, the value $c$ is a constant, and it is greater than zero.

Different cases arise for the term $y_k^T s_k$:

Case 1: if $y_k^T s_k \leq 0$, we face the problem of a non-positive-definite update matrix, so Li and Fukushima proposed the formula in (9) and developed the corresponding BFGS formula as in (10). Moreover, in the form of (10), the max operator is used so that the value zero is not selected in this case. Through this formula, the researchers proved that the modified symmetric matrix is positive definite [10].

Case 2: if $y_k^T s_k > 0$, we can say with certainty that the BFGS update matrix is symmetric and positive definite when this formula is applied (in other words, when the inequality in the formula holds, max = 0) [11].

If we use the Liu and Storey (LS) method together with a scaling scalar $\theta_k > 0$, then (3) becomes
$$d_{k+1} = -\theta_k g_{k+1} + \beta_k^{LS} d_k, \qquad (12)$$
where
$$\beta_k^{LS} = \frac{g_{k+1}^T y_k}{-d_k^T g_k}. \qquad (13)$$

For a suitable positive value of $\theta_k$, the new search direction provided in equation (12) parallels the Newton direction. Newton's direction is
$$d_{k+1} = -\nabla^2 f(x_{k+1})^{-1} g_{k+1}.$$

Hence, equating the direction in (12) with Newton's direction yields the following relation.

Using equation (8), the new scalar becomes

By substituting equations (9) and (10), and because we use the strong Wolfe line search, equation (10) yields the required inequality. Therefore, the new scalar within the new search direction is the one given in equation (17).

Hence, we conclude from equation (17) that the new parameter is advantageous because it incorporates an updated value of $y_k$; moreover, we obtain different variants when changing the value of c, as we will see in the section on numerical results.

3. New Theorem (Sufficient Descent Direction)

If we presume that the line search satisfies conditions (4) and (5), then the new search direction generated from equations (12) and (17) is a sufficient descent direction.

Proof. From equations (12) and (17), we obtain the expression for $g_{k+1}^T d_{k+1}$. By using the Powell restart condition, and since the relevant term is greater than zero, the next inequality is true. Using the strong Wolfe line search condition (5) then yields a further bound, and this latter inequality implies the sufficient descent condition. Thus, our requirement is complete.

3.1. Outlines of the New CG-Algorithms

Step 1: Select the initial point $x_0$, and select some positive values for $\delta$ and $\sigma$. Then, set $d_0 = -g_0$ and set $k = 0$.
Step 2: Test the stopping criterion. If it is satisfied, then stop; otherwise, continue.
Step 3: Determine $\alpha_k$ by the Wolfe conditions, which are defined in equations (4) and (5).
Step 4: Compute the next iterative point $x_{k+1}$ from equation (2).
Step 5: Calculate the scalar parameter from equation (17).
Step 6: Calculate the new search direction $d_{k+1}$.
Step 7: Test the Powell restarting criterion, namely, if $|g_{k+1}^T g_k| \geq 0.2\|g_{k+1}\|^2$, then restart the new search direction with $d_{k+1} = -g_{k+1}$.
Step 8: Set the next iteration k = k + 1, and go to Step 2.
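The steps above can be sketched as a runnable loop. Because the text of equation (17) is not reproduced here, the classical Liu-Storey coefficient $\beta_k = g_{k+1}^T y_k / (-d_k^T g_k)$ stands in for the proposed scalar; the quadratic test function and the simple backtracking line search are also assumptions, not the authors' settings.

```python
# Runnable sketch (not the authors' code) of Steps 1-8, with the classical
# Liu-Storey coefficient standing in for the proposed scalar of equation (17).

def f(x):
    return x[0]**2 + 2*x[1]**2          # hypothetical convex test function

def grad(x):
    return [2*x[0], 4*x[1]]

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def norm2(v):
    return dot(v, v)

def line_search(x, d, delta=1e-4):
    """Step 3 (simplified): backtracking on sufficient decrease, condition (4)."""
    alpha, g = 1.0, grad(x)
    while f([xi + alpha*di for xi, di in zip(x, d)]) > f(x) + delta*alpha*dot(g, d):
        alpha *= 0.5
    return alpha

def cg_minimize(x0, eps=1e-8, max_iter=500):
    x = list(x0)                                     # Step 1: initial point
    g = grad(x)
    d = [-gi for gi in g]                            # first direction: -g_0
    for _ in range(max_iter):
        if norm2(g) <= eps**2:                       # Step 2: stopping test
            break
        alpha = line_search(x, d)                    # Step 3
        x = [xi + alpha*di for xi, di in zip(x, d)]  # Step 4: iteration (2)
        g_new = grad(x)
        y = [gn - gi for gn, gi in zip(g_new, g)]
        beta = dot(g_new, y) / (-dot(d, g))          # Step 5: LS stand-in
        d = [-gn + beta*di for gn, di in zip(g_new, d)]   # Step 6
        # Step 7: Powell restart (also guards against non-descent directions)
        if abs(dot(g_new, g)) >= 0.2*norm2(g_new) or dot(g_new, d) >= 0:
            d = [-gn for gn in g_new]
        g = g_new                                    # Step 8: k = k + 1
    return x

x_min = cg_minimize([3.0, -2.0])    # approaches the minimizer (0, 0)
```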

4. Convergence Analysis for the New Proposed Algorithm

In the following parts, we discuss the convergence analysis of the new algorithm thoroughly. First, we state an assumption needed for the convergence analysis of the new algorithms. Then, we state a well-known lemma required in the study of the convergence analysis. Finally, we present new theorems, along with their proofs, that are associated with the convergence analysis of the new algorithm.

Assumption.
(i) The level set $S = \{x \in \mathbb{R}^n : f(x) \leq f(x_0)\}$ is bounded; that is, there exists a constant $z > 0$ such that $\|x\| \leq z$ for all $x \in S$ [12].
(ii) In a neighbourhood $N$ of $S$, $f$ is continuously differentiable, and its gradient is Lipschitz continuous; that is, there exists a constant $L > 0$ such that $\|g(x) - g(y)\| \leq L\|x - y\|$ for all $x, y \in N$.
From assumptions (i) and (ii) on $f$, we can deduce that there exists $\bar{\gamma} > 0$ such that $\|g(x)\| \leq \bar{\gamma}$ for all $x \in S$.

Lemma (see [3, 13]). Suppose that
(1) the Assumption holds;
(2) the search direction $d_k$ in the standard CG method is a descent direction;
(3) the step-size is calculated by equations (4) and (5);
(4) the convergence condition is satisfied, i.e., $\sum_{k \geq 1} \|d_k\|^{-2} = \infty$.
Then, $\liminf_{k \to \infty} \|g_k\| = 0$.
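For reference, the standard textbook form of the argument behind this lemma (a consequence of Zoutendijk's condition; stated here as an assumption about what the elided formulas contain) is:

```latex
% Under the Assumption and the Wolfe conditions, Zoutendijk's condition holds:
\sum_{k \ge 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty .
% Hence, if each d_k is a descent direction and, in addition,
\sum_{k \ge 1} \frac{1}{\|d_k\|^2} = \infty ,
% then the iterates satisfy
\liminf_{k \to \infty} \|g_k\| = 0 .
```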

5. New Theorem (Uniformly Convex Function)

If we suppose that
(1) the Assumption holds;
(2) the new search direction defined by equations (12) and (17) is a descent direction;
(3) the optimal step is calculated by equations (4) and (5);
(4) the objective function f is uniformly convex;
then $\lim_{k \to \infty} \|g_k\| = 0$.

Proof. The new direction in equation (12) and the parameter of equation (17) satisfy the required absolute-value bound, and the parameter is well defined. Moreover, by combining these results, we obtain the stated limit, which gives the required proof. The argument follows the same lines as the previous hypotheses, with some variations in the formulas.

6. New Theorem (General Function)

If we suppose that
(1) the Assumption holds;
(2) the new search direction defined by equations (12) and (17) is a descent direction;
(3) the optimal step is calculated by equations (4) and (5);
(4) the objective function f is a general function;
then $\liminf_{k \to \infty} \|g_k\| = 0$.

Proof. We use the same style of proof as in the previous theorem, with the difference that the objective functions of the algorithm are now general functions. We then obtain the stated limit. Therefore, the proof of the new theorems regarding the convergence analyses of the proposed algorithms is complete.

7. Numerical Experiments

In this section, we report numerical experiments performed on a set of 60 unconstrained optimization test problems to analyse the efficiency of the new algorithms. Details of these test problems, with their given initial points, can be found in [14, 15]. Each of the 60 test functions was run with the dimension n increased in steps of 1000 up to a maximum of n = 10000. The termination criterion used in our experiments is the usual small tolerance on the gradient norm.

In our comparisons below, we employ the following algorithms:
(i) LS: with the Wolfe line search
(ii) CD: with the Wolfe line search
(iii) HS: with the Wolfe line search
(iv) PR: with the Wolfe line search
(v) New algorithm, using equation (17) and the scalar c = 0.1
(vi) New algorithm, using equation (17) and the scalar c = 0.001

In Tables 1 and 2, we numerically compare the proposed new CG algorithms against other well-known CG algorithms to verify their performance, using the following standard comparison measures:
NOI = the total number of iterations
NOFG = the total number of function and gradient evaluations
TIME = the total CPU time required for the algorithm to reach the minimum of the function


Table 1. Each entry is NOI/NOFG/TIME.

Prob. | New (c = 0.1) | New (c = 0.001) | LS | CD
1 | 99/252/0.14 | 101/263/0.14 | 323/7045/2.31 | 414/9608/2.64
2 | 408/884/2.02 | 405/880/1.97 | 412/888/1.98 | 395/878/2.08
3 | 853/2183/1.06 | 830/2171/0.98 | 823/2159/1.02 | 824/2154/0.97
4 | 123/308/0.14 | 123/308/0.15 | 114/300/0.17 | 121/314/0.15
5 | 100/388/0.15 | 100/388/0.20 | 274/5586/1.14 | 167/1918/0.77
6 | 585/995/1.56 | 585/995/1.54 | 6576/19689/4.54 | 6848/20496/5.70
7 | 30/80/0.15 | 30/80/0.13 | 40/100/0.21 | 40/100/0.16
8 | 1032/2705/1.58 | 1020/2668/1.42 | 1033/2768/1.49 | 1026/2972/1.52
9 | 2388/4881/2.40 | 2503/5048/3.06 | 3520/7182/7.49 | 3676/7485/8.83
10 | 477/871/2.66 | 478/858/2.62 | 18795/503441/7.60 | 19346/518146/9.14
11 | 182/423/1.03 | 174/418/1.09 | 8669/270951/5.83 | 8252/257171/6.83
12 | 113/302/0.16 | 113/302/0.20 | 318/3881/1.69 | 279/2792/5.80
13 | 80/226/0.11 | 78/222/0.08 | 73/241/0.10 | 73/241/0.09
14 | 61/131/0.50 | 61/131/0.56 | 124/2140/7.12 | 160/2833/3.54
15 | 460/991/0.62 | 452/969/0.59 | 456/953/0.61 | 467/997/0.64
16 | 66/132/0.03 | 66/132/0.04 | 67/134/0.14 | 68/136/0.06
17 | 70/160/0.11 | 70/160/0.06 | 69/158/0.10 | 71/162/0.08
18 | 753/1577/0.79 | 791/1668/0.85 | 691/1477/0.74 | 782/1687/0.82
19 | 74/158/0.33 | 74/158/0.42 | 123/1905/2.31 | 70/150/0.30
20 | 110/349/0.41 | 109/349/0.37 | 114/334/0.43 | 111/329/0.41
21 | 806/3224/1.60 | 600/1665/1.26 | 429/1895/4.93 | 561/5284/5.69
22 | 72/275/0.51 | 72/285/0.49 | 84/366/0.70 | 2084/2322/9.54
23 | 4470/9572/1.33 | 5033/10748/2.42 | 20010/168275/4.95 | 19334/128670/5.02
24 | 62/201/0.42 | 62/201/0.40 | 546/15413/3.73 | 495/13595/3.48
25 | 459/1091/0.60 | 521/1192/0.78 | 494/1101/0.65 | 531/1154/0.76
26 | 56/153/0.08 | 65/373/0.13 | 56/155/0.06 | 66/328/0.12
27 | 85/203/0.11 | 85/203/0.12 | 80/190/0.07 | 80/190/0.09
28 | 534/1139/0.65 | 514/1073/0.68 | 493/1055/0.63 | 492/1075/0.61
29 | 540/1274/0.62 | 537/1267/0.58 | 941/2237/1.03 | 943/2239/1.11
30 | 576/1440/0.96 | 591/1504/1.05 | 19026/581537/2.57 | 18144/543276/2.03
31 | 113/236/0.16 | 113/236/0.11 | 118/245/0.11 | 123/253/0.11
32 | 813/2181/3.28 | 735/1997/3.03 | 18140/560188/8.67 | 18320/564064/8.04
33 | 98/268/0.18 | 98/268/0.17 | 359/5355/2.21 | 387/6200/3.06
34 | 346/766/0.48 | 348/766/0.50 | 356/786/0.44 | 332/732/0.44
35 | 7635/12820/8.59 | 7554/12715/8.73 | 7463/12576/8.29 | 7183/12051/8.37
36 | 280/978/0.43 | 280/978/0.35 | 264/935/0.45 | 273/978/0.39
37 | 217/534/0.31 | 217/534/0.27 | 221/551/0.31 | 219/546/0.31
38 | 121/287/0.14 | 120/285/0.13 | 814/20969/8.83 | 786/20117/7.46
39 | 150/329/0.64 | 153/328/0.65 | 141/305/0.59 | 137/297/0.58
40 | 107/217/0.65 | 107/217/0.68 | 144/1165/3.67 | 124/476/1.35
41 | 120/330/0.23 | 120/330/0.25 | 100/290/0.17 | 100/290/0.18
42 | 3832/9998/2.51 | 3373/8928/7.18 | 3404/9046/8.47 | 3182/8618/7.82
43 | 40/80/0.11 | 40/80/0.08 | 40/80/0.12 | 40/80/0.12
44 | 50/110/0.10 | 50/110/0.05 | 50/110/0.09 | 50/110/0.05
45 | 43/184/0.08 | 43/184/0.11 | 3903/129109/7.10 | 7265/242193/4.78
46 | 427/1323/2.56 | 427/1320/2.60 | 413/1307/2.53 | 428/1252/2.44
47 | 64/249/0.09 | 64/249/0.06 | 118/447/0.22 | 114/425/0.19
48 | 308/798/1.39 | 293/772/1.28 | 773/7349/0.72 | 1731/23050/5.88
49 | 22/89/0.11 | 22/89/0.13 | 316/8561/9.47 | 298/8022/8.33
50 | 20/50/0.03 | 20/50/0.02 | 20/50/0.00 | 20/50/0.04
51 | 92/1755/2.25 | 42/136/0.17 | 361/9646/9.02 | 315/8202/8.61
52 | 107/418/0.16 | 107/418/0.15 | 107/418/0.16 | 107/418/0.16
53 | 6199/52426/6.75 | 6199/52426/6.78 | 8071/53291/6.20 | 8071/53291/9.30
54 | 51/151/0.20 | 51/151/0.19 | 51/151/0.21 | 51/151/0.18
55 | 60/140/0.19 | 60/140/0.19 | 60/140/0.18 | 60/140/0.20
56 | 70/140/0.20 | 70/140/0.17 | 70/140/0.19 | 70/140/0.16
57 | 79/158/0.22 | 79/158/0.22 | 79/158/0.24 | 74/148/0.22
58 | 143/570/0.25 | 143/570/0.25 | 143/570/0.23 | 143/570/0.21
59 | 188/498/0.29 | 172/453/0.27 | 176/449/0.27 | 177/461/0.22
60 | 83/236/0.13 | 83/236/0.11 | 985/27409/9.27 | 917/25010/8.38
Total | 37602/124886/55.54 | 37426/121643/59.2 | 131933/2455353/154.98 | 136949/1242169/166.56


Table 2. Each entry is NOI/NOFG/TIME.

Prob. | New (c = 0.1) | New (c = 0.001) | HS | PR
1 | 99/252/0.14 | 101/263/0.14 | 5902/173484/0.72 | 11798/270815/6.07
2 | 408/884/2.02 | 405/880/1.97 | 362/637/1.83 | 416/720/2.07
3 | 853/2183/1.06 | 830/2171/0.98 | 789/1817/0.86 | 989/1979/1.11
4 | 123/308/0.14 | 123/308/0.15 | 141/281/0.19 | 254/430/0.26
5 | 100/388/0.15 | 100/388/0.20 | 653/17073/3.79 | 410/7948/2.73
6 | 585/995/1.56 | 585/995/1.54 | 9881/16243/3.07 | 20010/22091/4.04
7 | 30/80/0.15 | 30/80/0.13 | 40/90/0.18 | 40/90/0.18
8 | 1032/2705/1.58 | 1020/2668/1.42 | 997/2279/1.28 | 8780/10109/1.57
9 | 2388/4881/2.40 | 2503/5048/3.06 | 4658/7644/2.48 | 14945/16013/9.92
10 | 477/871/2.66 | 478/858/2.62 | 20010/98744/4.70 | 20010/292528/9.39
11 | 182/423/1.03 | 174/418/1.09 | 14053/39805/1.28 | 15593/489487/8.58
12 | 113/302/0.16 | 113/302/0.20 | 428/6427/2.19 | 609/11752/4.81
13 | 80/226/0.11 | 78/222/0.08 | 113/234/0.08 | 291/509/0.26
14 | 61/131/0.50 | 61/131/0.56 | 906/23792/1.66 | 318/6032/3.76
15 | 460/991/0.62 | 452/969/0.59 | 636/1006/0.73 | 964/1479/1.25
16 | 66/132/0.03 | 66/132/0.04 | 60/120/0.03 | 1043/1116/0.38
17 | 70/160/0.11 | 70/160/0.06 | 207/339/0.19 | 110/230/0.10
18 | 753/1577/0.79 | 791/1668/0.85 | 821/1545/0.78 | 3732/4630/3.99
19 | 74/158/0.33 | 74/158/0.42 | 108/1352/2.44 | 303/7194/2.66
20 | 110/349/0.41 | 109/349/0.37 | 135/321/0.42 | 154/339/0.45
21 | 806/3224/1.60 | 600/1665/1.26 | 875/14122/5.89 | 929/10442/9.02
22 | 72/275/0.51 | 72/285/0.49 | 2104/2442/3.61 | 161/440/0.92
23 | 4470/9572/1.33 | 5033/10748/2.42 | 18912/38658/8.87 | 20010/25808/6.95
24 | 62/201/0.42 | 62/201/0.40 | 1853/6983/3.39 | 2527/76854/9.01
25 | 459/1091/0.60 | 521/1192/0.78 | 304/606/0.40 | 1199/1793/1.55
26 | 56/153/0.08 | 65/373/0.13 | 128/1103/0.37 | 2697/14700/7.09
27 | 85/203/0.11 | 85/203/0.12 | 91/193/0.13 | 132/264/0.16
28 | 534/1139/0.65 | 514/1073/0.68 | 288/558/0.35 | 556/925/0.60
29 | 540/1274/0.62 | 537/1267/0.58 | 852/1783/0.98 | 1014/2180/1.14
30 | 576/1440/0.96 | 591/1504/1.05 | 20010/98171/6.44 | 20010/317766/8.02
31 | 113/236/0.16 | 113/236/0.11 | 79/168/0.10 | 147/287/0.13
32 | 813/2181/3.28 | 735/1997/3.03 | 20010/91480/9.47 | 20010/179051/5.16
33 | 98/268/0.18 | 98/268/0.17 | 631/11069/5.15 | 837/17999/8.99
34 | 346/766/0.48 | 348/766/0.50 | 716/1148/0.90 | 744/1213/0.93
35 | 7635/12820/8.59 | 7554/12715/8.73 | 8375/13146/9.98 | 8539/12513/15.46
36 | 280/978/0.43 | 280/978/0.35 | 330/695/0.37 | 401/868/0.47
37 | 217/534/0.31 | 217/534/0.27 | 610/6778/2.40 | 820/11112/5.50
38 | 121/287/0.14 | 120/285/0.13 | 1565/42467/4.89 | 1624/45918/6.43
39 | 150/329/0.64 | 153/328/0.65 | 174/290/0.54 | 193/323/0.69
40 | 107/217/0.65 | 107/217/0.68 | 253/426/1.03 | 4281/4461/11.17
41 | 120/330/0.23 | 120/330/0.25 | 118/286/0.19 | 124/298/0.20
42 | 3832/9998/2.51 | 3373/8928/7.18 | 3685/8619/8.30 | 17419/22467/5.83
43 | 40/80/0.11 | 40/80/0.08 | 99/119/0.19 | 99/119/0.19
44 | 50/110/0.10 | 50/110/0.05 | 50/110/0.02 | 70/282/0.13
45 | 43/184/0.08 | 43/184/0.11 | 12047/91618/9.93 | 15601/521624/4.14
46 | 427/1323/2.56 | 427/1320/2.60 | 409/1040/2.07 | 597/1232/2.51
47 | 64/249/0.09 | 64/249/0.06 | 133/380/0.15 | 143/393/0.40
48 | 308/798/1.39 | 293/772/1.28 | 2549/27198/2.61 | 15662/156182/6.10
49 | 22/89/0.11 | 22/89/0.13 | 1377/37674/8.58 | 1571/43940/8.14
50 | 20/50/0.03 | 20/50/0.02 | 20/50/0.02 | 20/50/0.03
51 | 92/1755/2.25 | 42/136/0.17 | 1620/5439/3.43 | 1680/46332/4.04
52 | 107/418/0.16 | 107/418/0.15 | 125/360/0.14 | 125/360/0.16
53 | 6199/52426/6.75 | 6199/52426/6.78 | 4160/9534/3.23 | 2233/8070/9.66
54 | 51/151/0.20 | 51/151/0.19 | 75/173/0.25 | 77/144/0.20
55 | 60/140/0.19 | 60/140/0.19 | 60/120/0.14 | 60/120/0.17
56 | 70/140/0.20 | 70/140/0.17 | 70/140/0.24 | 80/160/0.22
57 | 79/158/0.22 | 79/158/0.22 | 79/158/0.24 | 86/172/0.23
58 | 143/570/0.25 | 143/570/0.25 | 177/506/0.25 | 177/506/0.25
59 | 188/498/0.29 | 172/453/0.27 | 209/460/0.30 | 641/917/1.01
60 | 83/236/0.13 | 83/236/0.11 | 1615/43549/5.17 | 2005/57225/7.75
Total | 37602/124896/60.1 | 37426/121943/70.14 | 167737/10741186/139.61 | 271227/7136001/214.33

Therefore, among these CG algorithms, the new algorithm appears to generate the best search direction. In Table 3, there is clear evidence that the new algorithm outperforms the standard LS and CD algorithms, detailed as follows (when c = 0.1):
(a) Relative to the LS algorithm at 100%: the new algorithm improves NOI by 71.5%, NOFG by 94.92%, and TIME by 64.2%.
(b) Relative to the CD algorithm at 100%: the new algorithm improves NOI by 72.6%, NOFG by 89.95%, and TIME by 66.7%.


Table 3. Remaining cost of the new algorithm relative to the baselines (baseline = 100%).

Tools | LS (1991) (%) | CD (1987) (%)

When c = 0.1:
NOI | 28.5 | 27.4
NOFG | 5.08 | 10.05
TIME | 35.8 | 33.3

When c = 0.001:
NOI | 28.3 | 27.3
NOFG | 4.9 | 9.7
TIME | 38.1 | 35.5

And (when c = 0.001):
(c) Relative to the LS algorithm at 100%: the new algorithm improves NOI by 64.34%, NOFG by 16.99%, and TIME by 16.08%.
(d) Relative to the CD algorithm at 100%: the new algorithm improves NOI by 52.60%, NOFG by 14.63%, and TIME by 12.75%.
In Table 4, there is clear evidence that the new algorithm outperforms the standard HS and PR algorithms, as detailed below (when c = 0.1):
(e) Relative to the HS algorithm at 100%: the new algorithm improves NOI by 77.6%, NOFG by 98.9%, and TIME by 57%.
(f) Relative to the PR algorithm at 100%: the new algorithm improves NOI by 86.2%, NOFG by 98.3%, and TIME by 72%.
And (when c = 0.001):
(g) Relative to the HS algorithm at 100%: the new algorithm improves NOI by 77.7%, NOFG by 98.9%, and TIME by 49.8%.
(h) Relative to the PR algorithm at 100%: the new algorithm improves NOI by 86.3%, NOFG by 98.3%, and TIME by 67.3%.


Table 4. Remaining cost of the new algorithm relative to the baselines (baseline = 100%).

Tools | HS (1952) (%) | PR (1969) (%)

When c = 0.1:
NOI | 22.4 | 13.8
NOFG | 1.1 | 1.7
TIME | 43 | 28

When c = 0.001:
NOI | 22.3 | 13.7
NOFG | 1.1 | 1.7
TIME | 50.2 | 32.7

The conclusions that can be deduced from the above tables and experiments are summarized as follows:
(i) From points (a) to (d), our new proposed algorithms in the field of CG-type methods are economical and robust compared with the standard LS and CD algorithms.
(ii) From points (e) to (h), our new proposed algorithms in the field of CG-type methods are economical and robust compared with the standard HS and PR algorithms.

All these comparisons were made using the performance profile of Dolan and Moré [16], from which we can conclude the following:
(1) Figure 1 illustrates the performance of the new algorithms versus LS, CD, HS, and PR in terms of the number of iterations.
(2) Figure 2 shows the same comparison in terms of the number of function and gradient evaluations.
(3) Figure 3 displays how long the algorithms take to reach the solution (i.e., the required CPU time).
(4) Figure 4 shows the test functions on which the new algorithm, with two different constants, performs well compared with the basic algorithms (LS, CD, HS, and PR), based on the number of iterations.
(5) Figure 5 demonstrates the outstanding performance of the new algorithm, with two different constants, compared with the basic algorithms, based on the number of function and gradient evaluations.
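For readers who want to reproduce such figures, the following sketch (with made-up counts, not the paper's data) computes the Dolan and Moré [16] performance-profile values: for each solver s, rho_s(tau) is the fraction of problems solved within a factor tau of the best solver's cost.

```python
# Illustrative Dolan-More performance profile computation (assumed data).

def performance_profile(costs, taus):
    """costs: {solver: [cost per problem]}; returns {solver: [rho(tau)]}."""
    n_prob = len(next(iter(costs.values())))
    best = [min(costs[s][p] for s in costs) for p in range(n_prob)]
    ratios = {s: [costs[s][p]/best[p] for p in range(n_prob)] for s in costs}
    return {s: [sum(r <= tau for r in ratios[s]) / n_prob for tau in taus]
            for s in costs}

# Hypothetical NOI counts on four problems for two solvers:
noi = {"new": [99, 408, 853, 123], "LS": [323, 412, 823, 114]}
profile = performance_profile(noi, taus=[1.0, 2.0, 4.0])
# profile["new"][0] = fraction of problems on which "new" is the best solver
```

Plotting rho_s(tau) against tau for each solver gives curves like those in Figures 1-3; a curve that is higher on the left wins on more problems.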

8. Conclusions

In this study, we have presented two new proposed CG methods (obtained by changing the value of c). A crucial property of the proposed CG methods is that they secure sufficient descent directions. Under mild conditions, we have demonstrated that the new algorithms are globally convergent for both uniformly convex and general functions using the strong Wolfe line search conditions. The preliminary numerical results show that if a good value of the parameter c is chosen, the new algorithms perform very well. However, an optimal value of the parameter c could be derived theoretically (in future research) to achieve even better numerical results.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The research was supported by College of Computer Sciences and Mathematics, University of Mosul, Republic of Iraq, under Project no. 3615208.

References

  1. D.-H. Li and M. Fukushima, "A modified BFGS method and its global convergence in nonconvex minimization," Journal of Computational and Applied Mathematics, vol. 129, no. 1-2, pp. 15–35, 2001.
  2. Y. H. Dai and Y. X. Yuan, "Convergence properties of nonlinear conjugate gradient methods," SIAM Journal on Optimization, vol. 10, no. 1, pp. 348–358, 1999.
  3. Y.-H. Dai, "New conjugacy conditions and related nonlinear conjugate gradient methods," Applied Mathematics and Optimization, vol. 43, no. 1, pp. 87–101, 2001.
  4. M. R. Hestenes and E. Stiefel, "Methods of conjugate gradients for solving linear systems," Journal of Research of the National Bureau of Standards, vol. 49, no. 6, pp. 409–436, 1952.
  5. R. Fletcher and C. M. Reeves, "Function minimization by conjugate gradients," The Computer Journal, vol. 7, no. 2, pp. 149–154, 1964.
  6. J. Nocedal and S. J. Wright, Numerical Optimization, Springer, Cham, Switzerland, 2nd edition, 2006.
  7. R. Fletcher, Practical Methods of Optimization, Unconstrained Optimization, Wiley, New York, NY, USA, 1987.
  8. Y. Liu and C. Storey, "Efficient generalized conjugate gradient algorithms, part 1: theory," Journal of Optimization Theory and Applications, vol. 69, pp. 129–137, 1991.
  9. Q. Guo, J.-G. Liu, and D.-H. Wang, "A modified BFGS method and its superlinear convergence in nonconvex minimization with general line search rule," Journal of Applied Mathematics and Computing, vol. 28, no. 1-2, pp. 435–446, 2008.
  10. Z. Liu and H. Liu, "An efficient gradient method with approximately optimal stepsize based on tensor model for unconstrained optimization," Journal of Optimization Theory and Applications, vol. 181, no. 2, pp. 608–633, 2019.
  11. X. Li, B. Wang, and W. Hu, "A modified nonmonotone BFGS algorithm for unconstrained optimization," Journal of Inequalities and Applications, vol. 183, pp. 1–18, 2017.
  12. J. C. Gilbert and J. Nocedal, "Global convergence properties of conjugate gradient methods for optimization," SIAM Journal on Optimization, vol. 2, no. 1, pp. 21–42, 1992.
  13. X. Chen and J. Sun, "Global convergence of a two-parameter family of conjugate gradient methods without line search," Journal of Computational and Applied Mathematics, vol. 146, no. 1, pp. 37–45, 2002.
  14. N. Andrei, "An unconstrained optimization test functions collection," Advanced Modeling and Optimization, vol. 10, pp. 147–161, 2011.
  15. N. Andrei, "40 conjugate gradient algorithms for unconstrained optimization," Bulletin of the Malaysian Mathematical Sciences Society, vol. 34, pp. 319–330, 2011.
  16. E. D. Dolan and J. J. Moré, "Benchmarking optimization software with performance profiles," Mathematical Programming, vol. 91, no. 2, pp. 201–213, 2002.

Copyright © 2020 Eman T. Hamed et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

