Abstract

We introduce and investigate proper accelerations of the Dai–Liao (DL) conjugate gradient (CG) family of iterations for solving large-scale unconstrained optimization problems. The improvements are based on appropriate modifications of the CG update parameter in DL conjugate gradient methods. The leading idea is to combine search directions of accelerated gradient descent methods, defined by approximating the Hessian with an appropriate diagonal matrix in the quasi-Newton framework, with search directions of DL-type CG methods. The global convergence of the modified Dai–Liao conjugate gradient method is proved on the set of uniformly convex functions. The efficiency and robustness of the newly presented methods are confirmed in comparison with similar methods by analyzing numerical results concerning the CPU time, the number of function evaluations, and the number of iterative steps. The proposed method is successfully applied to an optimization problem arising in 2D robotic motion control.

Our research area is the large-scale multivariable unconstrained optimization problem, in which the objective function is uniformly convex and twice continuously differentiable. Quasi-Newton (QN) methods and conjugate gradient (CG) methods are the two most popular approaches to solving nonlinear optimization problems.

Various and numerous modifications of Dai–Liao (DL) conjugate gradient (CG) methods [1] with acceleration parameters arise from the natural demand for solving large-scale problems (1). The motivation for this research is based on the wide applications of unconstrained optimization problems and the efficiency of conjugate gradient methods for solving them [2–11]. The main result obtained in this study is the verification and investigation of the correlation between the QN and CG approaches. More specifically, in this research, we study the possibilities of applying QN methods in improving CG-type algorithms [12–16].

The generic iterative scheme aimed at solving (1) is $x_{k+1} = x_k + t_k d_k$, where $x_k$ is the previous iterative point, $x_{k+1}$ is the new iterative point, $g_k = \nabla f(x_k)$ is the gradient vector at $x_k$, $d_k$ is a search direction defined upon the descent condition $g_k^{\mathrm{T}} d_k < 0$, and $t_k > 0$ is a step length. The basic descent direction is the direction opposite to the gradient, $d_k = -g_k$, which leads to the template of gradient descent (GD) iterations [17, 18], $x_{k+1} = x_k - t_k g_k$, in which $t_k$ is defined by the backtracking line search.
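Since the GD template above underlies everything that follows, a minimal sketch may help. The Python snippet below is an illustration (not the authors' MATLAB code) of the rule $x_{k+1} = x_k - t_k g_k$, using a fixed step length for simplicity; the function and tolerances are illustrative choices.

```python
import numpy as np

def gradient_descent(grad, x0, t=0.01, tol=1e-6, max_iter=10_000):
    """Plain GD template x_{k+1} = x_k - t * g_k with a fixed step length."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:   # stopping test on the gradient norm
            break
        x = x - t * g
    return x

# Minimizing f(x) = 0.5 * ||x||^2, whose gradient is x itself.
x_star = gradient_descent(lambda x: x, [4.0, -3.0])
```

In practice the fixed step `t` is replaced by the backtracking line search described next.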

Algorithm 1 from [19] is selected as a framework for implementing the inexact line search which determines the step length .

Require: Objective $f$, the search direction $d_k$ at the point $x_k$, and real numbers $\sigma \in (0, 1)$ and $\beta \in (0, 1)$.
(1)Initialize $t = 1$.
(2)While $f(x_k + t d_k) > f(x_k) + \sigma t g_k^{\mathrm{T}} d_k$, update $t := t \beta$.
(3)Return $t_k = t$.
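As a sketch, the backtracking procedure of Algorithm 1 can be coded as follows; the default values of the parameters `sigma` and `beta` are illustrative, not necessarily the ones used in the paper's experiments.

```python
import numpy as np

def backtracking(f, x, g, d, sigma=1e-4, beta=0.8):
    """Armijo backtracking: start from t = 1 and shrink by beta until the
    sufficient-decrease test f(x + t d) <= f(x) + sigma * t * g^T d holds."""
    t = 1.0
    fx, gTd = f(x), g @ d          # gTd < 0 for a descent direction
    while f(x + t * d) > fx + sigma * t * gTd:
        t *= beta
    return t

f = lambda x: 0.5 * (x @ x)
x = np.array([3.0, -4.0])
g = x.copy()        # gradient of f at x
d = -g              # steepest-descent direction
t = backtracking(f, x, g, d)
```

For this quadratic with a steepest-descent direction, the full step t = 1 already satisfies the Armijo test and is accepted immediately.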

The starting point of our investigation is the iterations of the Newton method with line search, $x_{k+1} = x_k - t_k \nabla^2 f(x_k)^{-1} g_k$, where $\nabla^2 f(x_k)^{-1}$ is the inverse of the Hessian $\nabla^2 f(x_k)$. The quasi-Newton-type iterations $x_{k+1} = x_k - t_k H_k g_k$ are based on the assumption that $B_k$ (resp. $H_k = B_k^{-1}$) is an appropriate symmetric positive definite estimation of $\nabla^2 f(x_k)$ (resp. $\nabla^2 f(x_k)^{-1}$) [18]. The update from $B_k$ to $B_{k+1}$ is specified by the quasi-Newton property (secant equation) $B_{k+1} s_k = y_k$, where $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$.
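To illustrate the secant equation, the following toy snippet applies the classical BFGS update, one concrete quasi-Newton update chosen for illustration (the text itself does not commit to a particular full-matrix update), and checks that the updated matrix maps $s_k$ to $y_k$. The vectors are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = np.eye(n)                                  # current SPD approximation B_k
s = rng.standard_normal(n)                     # s_k = x_{k+1} - x_k
y = 2.0 * s + 0.01 * rng.standard_normal(n)    # toy y_k with y_k^T s_k > 0

# BFGS update: B_{k+1} = B - (B s s^T B)/(s^T B s) + (y y^T)/(y^T s)
Bs = B @ s
B_new = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
# By construction, B_new satisfies the secant equation B_new @ s == y.
```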

The quasi-Newton methods based on matrix approximations of the Hessian show some shortcomings in solving large-scale problems due to the requirement to compute and store matrices during iterations. Because of that, we choose the simplest scalar approximation of the Hessian, according to the classification presented in [20]. Therefore,

This defines the simplest and numerically most efficient approximation of the Hessian by the identity matrix scaled by an approximate scalar. Such a reduction results in the following iterative flow:

One efficient definition of this scalar was proposed in [19], based on the Taylor expansion of the objective function, resulting in

SM iterations of the form (8) and (9) were defined in [19].

Furthermore, the modified SM (MSM) scheme was proposed in [21], using the output of the backtracking Algorithm 1 and the gain parameter in the form of iterates, where the gain parameter was defined in [21] by the rule

Since , the main idea used in the MSM iterates is to accelerate the SM iterations by the parameter . More details about accelerated gradient methods can be found in [19, 22, 23]. Since for and , mathematical analysis of the function in the interval reveals and . Figure 1 presents the graph of for .

We observe that the choice reduces iterations (8) to a kind of GD iterative rule, in which the step length can be determined by various approaches. Barzilai and Borwein [29] suggested two mutually dual variations of the GD method, known as BB iterations, defined by the step length in (13) equal to $t_k^{\mathrm{BB1}} = \dfrac{s_{k-1}^{\mathrm{T}} s_{k-1}}{s_{k-1}^{\mathrm{T}} y_{k-1}}$ or $t_k^{\mathrm{BB2}} = \dfrac{s_{k-1}^{\mathrm{T}} y_{k-1}}{y_{k-1}^{\mathrm{T}} y_{k-1}}$.

Suitable adaptive strategies for choosing between the first and the second BB step length greatly enhance the performance of the BB method [30]. The method has been improved in numerous articles, such as [31, 32].
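The two BB step lengths are inexpensive to compute. The following sketch evaluates both formulas and illustrates a known property: for a quadratic with Hessian $A$ (so that $y = A s$), both steps lie between the reciprocals of the extreme eigenvalues of $A$; the matrix and vectors are illustrative.

```python
import numpy as np

def bb_steps(s, y):
    """Barzilai-Borwein step lengths:
    t_BB1 = (s^T s) / (s^T y),  t_BB2 = (s^T y) / (y^T y)."""
    sy = s @ y
    return (s @ s) / sy, sy / (y @ y)

# Quadratic model: y = A s for Hessian A with eigenvalues 1 and 4,
# so both steps must lie in [1/4, 1].
A = np.diag([1.0, 4.0])
s = np.array([1.0, 1.0])
y = A @ s
t1, t2 = bb_steps(s, y)   # t1 = 2/5, t2 = 5/17
```

By the Cauchy–Schwarz inequality, t_BB2 <= t_BB1 always holds when s^T y > 0.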

In this research, the acceleration parameters used in the iterative process (11) will be exploited to improve the efficiency of the DL conjugate gradient method, which is based on the rule (2) with the search direction

$d_0 = -g_0, \qquad d_{k+1} = -g_{k+1} + \beta_k d_k,$

determined by the real parameter $t \geq 0$ in

$\beta_k^{\mathrm{DL}} = \dfrac{g_{k+1}^{\mathrm{T}} y_k - t\, g_{k+1}^{\mathrm{T}} s_k}{d_k^{\mathrm{T}} y_k}.$

The parameter $\beta_k$ is known as the CG update parameter. Table 6 in the appendix shows the abbreviations and full names of the methods considered in this paper.

The conjugation condition $d_{k+1}^{\mathrm{T}} y_k = -t\, g_{k+1}^{\mathrm{T}} s_k$ was introduced in [1] by Dai and Liao. The condition (18) has been an inspiration for many researchers, of whom the most important are Hager and Zhang [24, 25], Dai and Kou [26], Babaie-Kafaki and Ghanbari [27], Ivanov et al. [28], Lotfi and Hosseini [15], and Zheng and Zheng [33], to create new DL-type CG methods.
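A compact numerical sketch of the DL machinery: for any choice of $t \geq 0$, the direction $d_{k+1} = -g_{k+1} + \beta_k^{\mathrm{DL}} d_k$ satisfies the Dai–Liao conjugacy condition by construction. The snippet below verifies this with random stand-in vectors (not data from the paper).

```python
import numpy as np

def beta_dl(g_new, d, s, y, t=0.1):
    """Dai-Liao parameter: (g_{k+1}^T y_k - t g_{k+1}^T s_k) / (d_k^T y_k)."""
    return (g_new @ y - t * (g_new @ s)) / (d @ y)

def dl_direction(g_new, d, s, y, t=0.1):
    """DL search direction d_{k+1} = -g_{k+1} + beta * d_k."""
    return -g_new + beta_dl(g_new, d, s, y, t) * d

rng = np.random.default_rng(1)
g_new, d, s, y = (rng.standard_normal(5) for _ in range(4))
t = 0.1
d_new = dl_direction(g_new, d, s, y, t)
# The conjugacy condition d_{k+1}^T y_k = -t * g_{k+1}^T s_k holds identically.
```

Setting `t = 0` recovers the Hestenes–Stiefel parameter as a special case.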

Some of the most significant rules to determine the DL parameter are collected in Table 1, which confirms the diversity of its definitions. The parameter in the MDL method proposed in [15] is based on the improvement of the Dai–Liao CG class by a modified BFGS method.

However, not all possibilities are exhausted. Since the line search used in this research gives the output $t_k$, the resulting quantities are useful in the proposal of a novel rule which determines the CG parameter. Our main idea is to find the DL parameter in (17) after the unification of the descent directions in the MSM method (11) and in the DL iterations (17). In the present manuscript, we use an original approach developed on the unification of two search directions: one included in the MSM method or in the BB method (which belongs to the class of quasi-Newton methods) and one in the DL method (from the CG class). The unification of the MSM and DL methods leads to an equation with respect to the unknown parameter whose solution gives a new DL parameter and a corresponding DL-type method of the CG class, termed the MSMDL method. On the other hand, the hybridization of the BB1 method with the DL class leads to the BB1DL method.

Main contributions achieved in this article are highlighted as follows:
(1)A novel approach to finding the DL parameter is proposed, based on the equalization of the search direction induced by a diagonal-matrix Hessian approximation in quasi-Newton methods with the search direction from the CG class;
(2)Convergence analysis of the proposed MSMDL method is conducted under standard assumptions;
(3)Numerical examples on standard test problems are presented with the aim to show the effectiveness of the proposed MSMDL and BB1DL methods.

The contents of the remaining sections are as follows. In Section 2, we present an algorithm for the MSMDL method for solving unconstrained optimization problems with a new CG parameter which contains an acceleration parameter from the MSM quasi-Newton method. Section 3 explores the convergence properties of the presented MSMDL method. Numerical results are presented and discussed in Section 4, together with a comparison of the suggested methods against some similar existing methods. The application of the suggested MSMDL method to 2D robotic motion control is discussed in Section 5. Final conclusions and a discussion are stated in Section 6.

2. New Dai–Liao CG Method with the Acceleration Parameter

The first basis of the proposed iterations is the MSM scheme (11) for solving the unconstrained optimization problem (1). In order to fulfill the second-order necessary condition and the second-order sufficient condition, inappropriate values which appear in (12) will be replaced. To avoid such situations, in accordance with [19, 21], the following acceleration parameter will be used:

The resulting iterations are termed the MSM iterative scheme and are defined by

Therefore, the search direction underlying the MSM method is determined by the vector

where the parameter is defined in (12) using the Taylor expansion, as in [21].

On the other hand, the CG update parameter from (17) can be determined by putting (17) into (16), which leads to

After equalization of from (21) with from (22), the following equation with respect to the unknown is obtained:

Our idea is to find the Dai–Liao parameter as a solution to equation (23). Applying the scalar product on the left- and right-hand sides of equation (23) gives the following equation with respect to the unknown:

Thus, on the basis of (24), it further follows

Now, the parameter is expressed from the equation (25) as follows:

Since and , after the substitution of in (26), the following solution is obtained after some simplifications:

It is known that the DL parameter is calculated to generate the direction of maximum enhancement, utilizing the requirement that the search direction be orthogonal to the gradient vector [34]. To make sure that the new DL method satisfies the descent condition, the definition in (27) is altered using concepts from [15, 34] into the final form:

Considering in (17), the following improvement of the Dai–Liao CG parameter is proposed:

The MSMDL method is based on (2), (6), (28), and (29). The algorithmic procedure of the MSMDL method is established in Algorithm 2.

Require: Goal function , initial approximation , and parameters , .
(1)We set , and calculate , , .
(2)If the test criteria are fulfilled, then go to step 11 and stop; otherwise, go to step 3.
(3)We compute customizing Algorithm 1.
(4)We compute .
(5)We compute and .
(6)We compute using (19).
(7)We compute using (28).
(8)We compute using (29).
(9)We compute .
(10)We set and go to step 2.
(11)We return and .

The previous strategy for combining MSM and DL approaches can be applied to any quasi-Newton direction. If we replace by from (14), we get a new BB1DL method. An analogous calculation gives

Furthermore, the replacement in (17) initiates the following improvement of the Dai–Liao CG parameter :

If we apply the mentioned changes, we arrive at another variant of Algorithm 2, where the following steps are used instead of steps 6, 7, 8, and 9 in the MSMDL method:
Step 6′: We compute using (14).
Step 7′: We compute using (30).
Step 8′: We compute using (31).
Step 9′: We compute the search direction.

The variant of Algorithm 2 based on steps 1–5, 6′, 7′, 8′, 9′, 10, and 11 will be called the BB1DL method. More precisely, the BB1DL method is based on (2), (16), (30), and (31).

The numerical results in Section 4 show the effectiveness of the BB1DL method.

Based on all the previous discussion, we can conclude that the general framework presented in Algorithm 2 is applicable to other quasi-Newton methods.
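To make this general framework concrete, here is a hedged generic skeleton of a DL-type CG iteration with a pluggable update parameter. The `beta_rule` shown is a Hestenes–Stiefel-type stand-in (the DL formula with t = 0), not the MSMDL or BB1DL rule derived above, and the restart safeguard and Armijo parameters are illustrative choices.

```python
import numpy as np

def armijo(f, x, g, d, sigma=1e-4, beta=0.5):
    """Simple Armijo backtracking used inside the CG loop."""
    t, fx, gTd = 1.0, f(x), g @ d
    while f(x + t * d) > fx + sigma * t * gTd and t > 1e-12:
        t *= beta
    return t

def dl_type_cg(f, grad, x0, beta_rule, tol=1e-6, max_iter=5000):
    """Generic DL-type CG skeleton: the CG update parameter is supplied
    by beta_rule(g_new, d, s, y); Algorithm 2 corresponds to one such rule."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        t = armijo(f, x, g, d)
        s = t * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        d = -g_new + beta_rule(g_new, d, s, y) * d
        if g_new @ d >= 0:          # safeguard: restart with steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x

# Hestenes-Stiefel-type stand-in rule (t = 0 in the DL formula).
hs = lambda g_new, d, s, y: (g_new @ y) / (d @ y)
A = np.diag([1.0, 10.0])
x_min = dl_type_cg(lambda x: 0.5 * x @ A @ x, lambda x: A @ x, [3.0, -2.0], hs)
```

Any of the rules in Table 1 could be passed as `beta_rule` in place of the stand-in.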

3. Convergence Analysis

The global convergence of the proposed variant of CG methods is derived under the standard assumptions below.

Assumption 1.
(1)The level set, defined by the initial guess and (2), is bounded.
(2)The objective is continuous and differentiable in a neighborhood of the level set, with the Lipschitz continuous gradient; as a consequence, there exists a positive constant $L$ such that $\|g(x) - g(y)\| \leq L \|x - y\|$.

Assumption 1 ensures the existence of positive values which fulfill

Another main element in proving the convergence of a CG method is the property

$(g(x) - g(y))^{\mathrm{T}} (x - y) \geq \mu \|x - y\|^{2}, \quad \mu > 0,$

of uniformly convex functions. The verification of this property can be found in Theorem 1.3.16 of [18]. By (32), it follows, which in conjunction with (35) initiates

Clearly, the inequality (36) implies. Furthermore, (36) initiates

Taking into account (37), we conclude

The statement of Lemma 1 will be useful in the verification of the main statements. It can be verified on the basis of the results obtained in Lemma 2.2 in [50] and in [35].

Lemma 1. Let the Assumption 1 be satisfied and the sequence be generated by the MSMDL method (2), (16), and (29). Then, it holds

Lemma 2. Let Assumption 1 hold, be uniformly convex, and the CG parameter (29) fulfils , for all and for some constant

Then, MSMDL satisfies the sufficient descent conditionwith .

Proof. Assumption 1 guarantees (38) for the search directions (16) in the proposed MSMDL method. The inequality (40) will be confirmed by induction. For the initial iteration, it follows that (40) is fulfilled. We assume that (40) is satisfied for the index k. Multiplying the identity (16), in the case corresponding to the MSMDL method, by the gradient, it can be derived that

Now, from the equality (41), it follows

Use the inequality

with

We get

Because of this, the inequality (40) is fulfilled for the next index in (45) and an arbitrary k.
Global convergence of the MSMDL iterations is verified in Theorem 1. Assumption 2 and the proofs of Lemma 3 can be found in [36–38].

Assumption 2. The function is twice continuously differentiable and uniformly convex on .
If the conditions in Assumption 2 hold, then Assumption 1 is satisfied.

Lemma 3 (see [36–38]). Under the conditions in Assumption 2, there exist real numbers satisfying

It is such that has a unique minimizer and

Theorem 1. Let restrictions in Assumption 1 hold. If is uniformly convex, then the series generated inside the MSMDL iterations satisfies

Proof. We suppose the opposite, i.e., that (50) is not true. This initiates the existence of a positive constant such that, for all k,

Then, from (32) and (35), we obtain

Further calculations and approximations on the basis of (32) are given as follows:

To complete the proof of the theorem, it is necessary to prove that the parameter is bounded. Two cases should be distinguished, based on the definition in (28).

Case 1. If , one concludes

Case 2. In the opposite case, it follows

Based on Algorithm 1 and the initial value of the step length, it follows that the step length is bounded. Now, this fact implies (Figure 1), which further in common with (55) gives

Since the quantity is an approximation of the Hessian, the inequality (47) implies the bound. Now, from (56), we conclude

Based on Cases 1 and 2, it follows

Now, on the basis of (53) and (58), it follows

Squaring both sides in (59) implies

Next, dividing both sides of the inequalities (60) and using (61), it can be concluded that

The inequalities in (61) imply

Therefore, a contradiction with Lemma 1 is obtained.

Theorem 2. Let the restrictions in Assumption 1 hold. If the goal function is uniformly convex, then the series generated within the BB1DL iterations satisfies (50).

Proof. The proof of Theorem 2 is similar to the proof of Theorem 1. It differs in the part where it is necessary to prove that the parameter is bounded, which we show next. Two cases should be distinguished, based on the definition in (30).

Case (i): In the first case, based on (63), we have

Case (ii): In the opposite case, it follows

Now, from (14) and (37), it follows

The penultimate inequality in (65) directly follows from (32) and (35). Based on Cases (i) and (ii), it follows

The proof is completed.
An additional limit-type convergence result is proved in Theorem 3, which shows the linear convergence of the MSMDL method under Assumption 2. In order to prove the linear convergence of the MSMDL method, we present Lemma 4, which gives a lower bound on the step length. The proof is similar to that of Lemma 4 of Danmalam et al. [39] and Lemma 4 in [40].

Lemma 4. Suppose that the conditions in Assumption 2 hold and that the sequence is generated by the MSMDL method with the backtracking line search. Then, there is a constant such that

Proof. The backtracking line search condition gives

If the returned step length is less than one, then the previous trial value does not satisfy (68), that is,

The mean value theorem and (32) ensure the existence of a point such that

From (60), we obtain

Now, the following inequalities hold on the basis of (69) and (70):

The inequalities (72), (40), and (71) give

which leads to

The last inequality gives the required inequality (67) by an appropriate choice of the constant. The proof is completed.

Theorem 3. Let Assumption 2 hold and be the unique minimizer of (1). Then, there are constants and such that the sequence generated by the MSMDL method fulfills

Proof. From the backtracking line search (40), Lemma 2, and Lemma 4, it follows

Using the left inequality in (49) and afterwards the right inequality in (48), the inequality in (76) is further estimated as follows:

We consider the replacement in the inequality (77). Clearly, it follows that the resulting contraction factor lies in the interval (0, 1).

Furthermore, it follows

Combining the left inequality of (48) with (78), we obtain

which shows that the inequality (75) holds. The proof is completed.

Remark 1. The result of Theorem 3 can be directly applied to the BB1DL method.

4. Numerical Experiments

In this section, we demonstrate the numerical efficiency of the MSMDL and BB1DL methods. To this aim, we perform two competitions on standard test functions with given initial points from [41, 42]. The first competition is between the CG-DESCENT [24], M1 [27], DK [26], and MSMDL methods, and the second one is between the BB1DL, MSMDL, and two recently developed DL CG methods with the global convergence property (EDL [28] and MDL [15]). We compare all of these methods with respect to three criteria:
(i)The CPU time in seconds, CPUts
(ii)The number of iterative steps, NI
(iii)The number of function evaluations, NFE

The methods which participate in the competition are presented in Section 1 (Table 1). Test problems are evaluated in ten dimensions (100, 500, 1000, 3000, 5000, 7000, 8000, 10000, 15000, and 20000). Codes implementing the tested methods are written in MATLAB R2017a and evaluated on a laptop (Intel(R) Core(TM) i3-7020U, up to 2.3 GHz, 8 GB memory) running the Windows 10 Pro operating system.

Algorithms MSMDL, BB1DL, CG-DESCENT, M1, DK, EDL, and MDL were compared using the backtracking line search with parameters , . Tested algorithms are stopped after 50000 iterations or

Specific parameters used only in the MDL and MSMDL methods are defined as follows:(i)In the MDL method, , , and (ii)In the MSMDL method,

The symbol “” in the subsequent tables means that the method failed to achieve the prescribed accuracy after 50000 iterations for one or more tested dimensions of the observed test function.

Summary numerical results for the first competition (between MSMDL, CG-DESCENT, M1, and DK methods) are obtained by testing 34 test functions and presented in Table 2. This table includes numerical results obtained by monitoring the criteria NI, NFE, and CPUts in the MSMDL, CG-DESCENT, M1, and DK methods.

The performance profiles proposed in [43] are applied to compare the numerical data for the criteria CPUts, NI, and NFE generated by the tested methods listed at the beginning of the section. The left-hand side of each performance profile in Figures 2–5 indicates the percentage of test problems in which the considered method is the best among the tested methods, whereas the right-hand side gives the percentage of the test problems that are successfully solved by each method.

Benchmark comparison ranks the solvers $s$ included in the set $S$ on the set of test problems $P$. The performance ratio $r_{p,s} = \dfrac{t_{p,s}}{\min\{t_{p,s} : s \in S\}}$ is defined for each problem $p \in P$ and each solver $s \in S$, where $t_{p,s}$ denotes the NI, NFE, or CPUts needed to solve the problem $p$ by the solver $s$. Then, the performance profile for a solver $s$ in the $\log_2$-scale is defined by $\rho_s(\tau) = \dfrac{1}{|P|} \left|\{p \in P : \log_2 r_{p,s} \leq \tau\}\right|$.

Solvers with a greater probability are more desirable. If one solver achieves better results than another, then the curve of the performance profile generated by the first solver is located above the corresponding curve generated by the second.
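The performance-profile computation itself takes only a few lines. This sketch follows the standard Dolan–Moré definition (ratios to the per-problem best, with failures recorded as infinity); evaluating on a log-scaled axis is a separate presentation choice, and the cost matrix below is invented for illustration.

```python
import numpy as np

def performance_profile(costs, taus):
    """Dolan-More profile. costs is an (n_problems, n_solvers) array of
    NI, NFE, or CPU times (np.inf marks a failure); returns, for each tau,
    the fraction of problems each solver solves within tau times the best."""
    ratios = costs / costs.min(axis=1, keepdims=True)
    return np.array([[np.mean(ratios[:, s] <= tau)
                      for s in range(costs.shape[1])]
                     for tau in taus])

costs = np.array([[1.0, 2.0],
                  [4.0, 2.0],
                  [3.0, np.inf]])      # solver 2 fails on the third problem
rho = performance_profile(costs, taus=[1.0, 2.0])
# rho[0] gives each solver's win fraction; rho[-1] approaches its solve rate.
```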

In Figure 2, we compare the performance profiles for NI and NFE of the CG-DESCENT, MSMDL, M1, and DK methods based on the numerical values covered in Table 2. A careful analysis reveals that the MSMDL method solves 64.71% of the test problems with the least NI, compared to CG-DESCENT, M1, and DK. From Figure 2(a), it is perceptible that the MSMDL graph attains the top level first, which indicates that MSMDL outperforms the other considered methods with respect to the criterion NI. Figure 2(b) shows that the MSMDL method is more efficient than the CG-DESCENT, M1, and DK methods with respect to NFE, since it solves 67.65% of the test problems with the least NFE. From Figure 2(b), it is noticeable that the MSMDL graph first reaches the top, so MSMDL is the winner relative to the other considered methods.

Figure 3 shows the performance profile of the CG-DESCENT, MSMDL, M1, and DK methods based on the CPUts included in Table 2. The MSMDL method solves 58.82% of the test problems with the least CPUts, compared to CG-DESCENT, M1, and DK. According to Figure 3, the MSMDL graph reaches the top first, which verifies its dominance in terms of CPUts.

The MSMDL method did not successfully solve of all the test functions in Table 2, while each of the CG-DESCENT, M1, and DK methods did not successfully solve of the test functions. A detailed summary of the results for each method is arranged in Table 3.

With a total of 330 solved test problems, MSMDL is able to solve the largest number of test problems ( of all tested problems), while M1 and CG-DESCENT solved only of all tested problems.

Based on the data involved in Tables 2 and 3 and graphs involved in Figures 2 and 3, it is noticed that the MSMDL method achieves the best results compared to CG-DESCENT, M1, and DK methods with respect to three basic criteria: NI, NFE, and CPUts.

In addition to standard analysis of numerical results, we performed additional analysis for the MSMDL method. The goal of additional tests is to monitor the usage of the imposed value in iterations for each of the tested functions. We also count the assignments in (28) for each individual function. The third analysed parameter is the maximum value for obtained during testing. The test results are given in Table 4.

The total number of assigned values (respectively, ) from Table 4 will be denoted by (respectively, ). Furthermore, the values will be monitored. The total sum of the individual values NI, , and across the tested functions will be denoted by , , and , respectively. The total sum of all iterative steps across all test functions is equal to , which further implies and . In this way, the behavior is observable.

In the subsequent numerical experiments, we compare the MSMDL and BB1DL methods versus EDL and MDL methods.

The performance comparisons of the MSMDL and BB1DL solvers against the EDL and MDL methods are shown in Figures 4 and 5. Figure 4(a) compares the considered solvers with respect to the profile NI, and Figure 4(b) in terms of NFE. Graphs of the CPUts performance profiles for EDL, MDL, MSMDL, and BB1DL are arranged in Figure 5.

Figure 4(b) shows that the MSMDL and BB1DL methods achieved more efficient results than EDL and MDL methods in terms of NFE, which is confirmed by upper positions of the graphs of their performance profiles. Figure 4(a) shows that the MSMDL and BB1DL methods achieved slightly superior results compared to the EDL and MDL methods in terms of NI, which is confirmed by the dominant graphs of their performance profiles.

Summary numerical results for the second competition (between EDL, MDL, MSMDL, and BB1DL) are obtained by testing 25 test functions and arranged in Table 5. This table shows numerical data obtained by monitoring the criteria NI, NFE, and CPUts for the EDL, MDL, MSMDL, and BB1DL methods.

Numerical results in Table 5 show that the MSMDL method solves about 36%, while the BB1DL method successfully solves 32% of the test problems with the least values of NI and NFE.

The performance profiles based on the CPUts of MSMDL and BB1DL in Figure 5 show better performances of these solvers compared to those of the EDL and MDL solvers. From the numerical results in Table 5, we found that the MSMDL method solves about 28%, while the BB1DL method successfully solves 36%, of the test problems with the minimal CPUts.

We observe that the EDL and MDL methods, which are currently among the best DL conjugate gradient methods proposed in the literature, give worse numerical results than the MSMDL and BB1DL methods in terms of the NI, NFE, and CPUts.

5. Application in 2D Robotic Motion Control

Problems arising from the concept of robotic systems have attracted the attention of researchers, and subsequently, some algorithms for handling them have been developed [6, 44]. For instance, Zhang et al. [45] discussed the fundamentals of n-link robot systems. Qiang et al. [46] pointed out the importance of taking the characteristics of motor dynamics into account so that the accuracy and stability requirements of robot movements can be achieved. Moreover, among the criteria that the motor dynamics need to satisfy is that the actual output of the system tracks the desired output within an acceptable minimal error [47]. Motivated by the work of Zhang et al. [11], Sun et al. [7] applied an algorithm for solving real-valued unconstrained optimization to 2D robotic motion control.

We consider a motion control problem involving a two-joint planar robotic manipulator as described in [11]. Let and denote the joint angle vector and end effector position vector, respectively. A discrete time kinematics equation of a two-joint planar robot manipulator at a position level is governed by the following model:

The vector-valued function is referred to as the kinematics mapping, which has the following structure:

where the parameters $l_1$ and $l_2$ represent the lengths of the first and second rods, respectively. Now, with regard to robotic motion control, the following unconstrained optimization problem

is solved at each instantaneous time within the task duration interval. The end effector is usually controlled to track a Lissajous curve. We note that, with an appropriate identification, problem (85) has the form of problem (1), and therefore, the proposed MSMDL method can be used to solve it.
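The kinematics mapping of a two-joint planar arm is standard and easy to code. In this sketch the rod lengths and target are illustrative values, and `tracking_objective` is a hypothetical helper mirroring the per-time-step least-squares problem described above.

```python
import numpy as np

def forward_kinematics(theta, l1=1.0, l2=1.0):
    """End-effector position of a two-joint planar arm (standard model):
    h(theta) = (l1 cos t1 + l2 cos(t1 + t2), l1 sin t1 + l2 sin(t1 + t2))."""
    t1, t2 = theta
    return np.array([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                     l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])

def tracking_objective(theta, target, l1=1.0, l2=1.0):
    """Hypothetical per-time-step objective 0.5 * ||h(theta) - r(t)||^2."""
    e = forward_kinematics(theta, l1, l2) - target
    return 0.5 * (e @ e)

# Sanity check: a fully stretched arm, theta = (0, 0), reaches (l1 + l2, 0).
p = forward_kinematics([0.0, 0.0])
```

Minimizing `tracking_objective` at each sampled time along the desired curve reproduces the tracking scheme described in the text.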

In this experiment, unlike the Lissajous curve used in [7, 48, 49], we require the end effector to track the following two Lissajous curves,

The implementation of the proposed MSMDL with regard to the motion control experiment was coded in MATLAB R2019b and run on a PC with an Intel Core(TM) i5-8250U processor, 4 GB of RAM, and a 1.60 GHz CPU. In this experiment, the lengths of the first and second rods are taken equal, with the task duration divided into 200 equal parts. The experimental results for the first curve are presented in Figure 6, while those for the second curve are given in Figure 7. Figures 6(a) and 7(a) depict the robot trajectories synthesized by MSMDL for the two curves, respectively. On the other hand, Figures 6(b) and 7(b), respectively, plot the end effector trajectory and the desired path.

The errors recorded by MSMDL during the course of the experiment with respect to the horizontal axis are reported in Figures 6(c) and 7(c) for and , respectively, while those of the vertical axis are equally presented in Figures 6(d) and 7(d) for and , respectively.

Figures 6(a), 6(b), 7(a), and 7(b) confirm that the MSMDL method successfully and efficiently executed the given task with acceptable error. In addition, Figures 6(c), 6(d), 7(c), and 7(d) show that the proposed MSMDL method not only solves standard test problems but also completes the tracking task with acceptable error. This further demonstrates the efficiency and applicability of the MSMDL method.

6. Conclusion

The main result obtained in this study is the verification and investigation of the correlation between QN and CG approaches. More specifically, it has been shown that QN and CG are not independent approaches, and QN iterates can be used to improve CG-type iterations.

Defining the best CG direction and finding an optimal parameter for the DL CG class are open problems [14]. The parameters of the MSMDL and BB1DL methods are defined and selected using the unification of the MSM method and the BB1 method, respectively, with the DL CG class. Based on the numerical results presented in Tables 2 and 5 and Figures 2–5, we can conclude that the proposed MSMDL and BB1DL methods achieve superior numerical results with respect to all three criteria (NI, NFE, and CPUts) compared with the opposing methods.

A modified Dai–Liao CG method termed the MSMDL method, intended for solving unconstrained optimization problems, is proposed and examined both theoretically and numerically. The modification is based on an appropriately scaled CG parameter by means of the approximation of the Hessian matrix via the diagonal matrix in the MSM method. Our leading principle is to find the DL parameter in (17) as a solution to the equation which appears after the unification of descent directions in the MSM method (11) and in the DL iterations (17). In this way, the expression and the acceleration parameter from the MSM method will appear in the expression which determines . The new parameter contains not only the gradient information but also some Hessian matrix information. The general perception is that the proposed iterations are defined as a hybridization of MSM and DL-type CG methods. Under some standard assumptions, the global convergence property of the MSMDL method is established. Numerical comparisons on a large class of well-known test problems, especially for solving high-dimensional problems, indicate that the MSMDL method is quite effective. The application of the MSMDL method on solving problems arising from 2D robotic motion control further confirms its efficiency as well as applicability.

The proposed unification of the search directions of two methods of the general pattern (2) belonging to different classes continues a new branch of research in nonlinear optimization, which could be termed unification. This research investigates the unification of the MSM and BB methods with the DL CG solver to determine the DL parameter. Further research may include the unification of search directions between different quasi-Newton methods and various CG methods. The method proposed in this paper is useful as a general principle for hybridizing any two methods from the conjugate gradient group or from the quasi-Newton methods group.

Appendix

Table 6 shows the abbreviations and full names of the methods. For most methods, the abbreviation and the full name of the method are identical.

Data Availability

The numerical results and graphical illustrations used to support the findings of this study are included within the article. The MATLAB code is available on request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The work of the second author is supported in part by the Serbian Academy of Sciences and Arts (-96). Predrag Stanimirović acknowledges support from the Science Fund of the Republic of Serbia (grant no. 7750185, Quantitative Automata Models: Fundamental Problems and Applications - QUAM). This work was supported by the Ministry of Science and Higher Education of the Russian Federation (grant no. 075-15-2022-1121).