Abstract

In this study, we propose a new hybrid algorithm that combines the Steepest Descent (SD) and Quasi-Newton (QN) search directions. First, we develop a new search direction for the combined conjugate gradient (CG) and QN methods. Second, we present a new positive CG method that possesses the sufficient descent property under the strong Wolfe line search. We also prove a new theorem to establish the global convergence property under some given conditions. Our numerical results show that the new algorithm is robust compared with other standard large-scale CG methods.

1. Introduction

The nonlinear CG method is a useful procedure for finding the minimizer of a nonlinear function in unconstrained nonlinear optimization.

Let us consider the following unconstrained minimization problem:
$$\min_{x \in \mathbb{R}^{n}} f(x), \tag{1}$$
where $f:\mathbb{R}^{n}\to\mathbb{R}$ is a real-valued smooth function. The iterative formula is given as
$$x_{k+1} = x_{k} + \alpha_{k} d_{k}, \tag{2}$$
where $\alpha_{k}$ is an optimal step-size computed by a line search procedure [1]. The search direction $d_{k}$ is defined as
$$d_{k+1} = -g_{k+1} + \beta_{k} d_{k}, \qquad d_{1} = -g_{1}, \tag{3}$$
where $g_{k}$ denotes $\nabla f(x_{k})$ and $\beta_{k}$ is a positive scalar.
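For concreteness, the following is a minimal Python sketch of the generic iteration in (2) and (3); the names `line_search` and `beta` are placeholders for any step-size routine and any choice of the CG parameter, not the specific method proposed in this paper.

```python
import numpy as np

def cg_iterations(f, grad, x0, beta, line_search, tol=1e-6, max_iter=1000):
    """Generic nonlinear CG loop: x_{k+1} = x_k + alpha_k * d_k (2),
    d_{k+1} = -g_{k+1} + beta_k * d_k (3)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # initial direction d_1 = -g_1
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:          # stop when the gradient is small
            break
        alpha = line_search(f, grad, x, d)    # placeholder step-size routine
        x = x + alpha * d                     # iterative formula (2)
        g_new = grad(x)
        d = -g_new + beta(g, g_new, d) * d    # search direction update (3)
        g = g_new
    return x
```

For example, the Hestenes-Stiefel choice of the parameter corresponds to `beta = lambda g, g_new, d: g_new @ (g_new - g) / (d @ (g_new - g))`.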

Well-known instances of $\beta_{k}$ are those of Hestenes-Stiefel, Fletcher-Reeves, Polak-Ribière, Liu-Storey, Dai-Yuan, and Dai-Liao (see [2-7], respectively). In the existing convergence analysis and implementation of the CG method, the weak Wolfe conditions [8] are
$$f(x_{k} + \alpha_{k} d_{k}) \le f(x_{k}) + \delta \alpha_{k} g_{k}^{T} d_{k}, \tag{4}$$
$$g(x_{k} + \alpha_{k} d_{k})^{T} d_{k} \ge \sigma g_{k}^{T} d_{k}, \tag{5}$$
with $0 < \delta < \sigma < 1$. The strong Wolfe conditions [8] consist of (4) and
$$\left|g(x_{k} + \alpha_{k} d_{k})^{T} d_{k}\right| \le -\sigma g_{k}^{T} d_{k}. \tag{6}$$
Now let us review the work of Ibrahim et al. [9], which considers unconstrained minimization problems. Ibrahim et al. propose a search direction built from the BFGS updating matrix $B_{k}$ together with a particular choice of step-size; the scalar parameter in that direction is chosen to ensure conjugacy.
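As an illustration, a small sketch (not from the paper) that tests whether a trial step-size satisfies the weak Wolfe conditions (4)-(5) or the strong pair (4) and (6); the function name and the default values of `delta` and `sigma` are illustrative assumptions with $0 < \delta < \sigma < 1$.

```python
import numpy as np

def satisfies_wolfe(f, grad, x, d, alpha, delta=1e-4, sigma=0.9, strong=True):
    """Check the weak Wolfe conditions (4)-(5) or the strong pair (4) and (6)."""
    g_d = grad(x) @ d                                   # g_k^T d_k
    x_new = x + alpha * d
    armijo = f(x_new) <= f(x) + delta * alpha * g_d     # condition (4)
    g_new_d = grad(x_new) @ d
    if strong:
        return armijo and abs(g_new_d) <= -sigma * g_d  # condition (6)
    return armijo and g_new_d >= sigma * g_d            # condition (5)
```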

Another contribution treats the same unconstrained minimization problem. Ibrahim et al. [10] propose another search direction in which the matrix $I$ is the identity and the accompanying scalar is negative.

In addition, Ibrahim et al. [11] propose a further search direction involving a positive scalar together with the Hestenes-Stiefel parameter.

2. A New Proposed Search Direction

In this section, we propose a new search direction deduced from Ibrahim et al. [9-11]. The new search direction is defined in (11), where $B_{k}$ (the approximation of the BFGS updating matrix) denotes an approximation of the Hessian matrix $G$, and the accompanying scalar is a positive constant. In order to derive the value of the first scalar, we multiply both sides of (11) by the appropriate vector; using the resulting identity together with the Perry condition [12], we arrive at (13). To determine the value of the second scalar, we again multiply both sides of (11), which leads to (16). As a result of the multiplications leading to (13) and (16), we obtain the search directions in (17a), (17b), and (17c); these constitute our new proposed algorithm. In the next step, we require each search direction to satisfy the descent condition ($g_{k}^{T} d_{k} < 0$ for all $k$). Moreover, there should exist a constant $c > 0$ such that
$$g_{k}^{T} d_{k} \le -c\,\|g_{k}\|^{2}. \tag{18}$$
For all $k \ge 1$, the new direction defined in (17a)-(17c) should satisfy the sufficient descent condition (18). This condition will be used later to prove our new theorem (see Section 2.2). To prove that theorem, we make use of the assumptions given below (see Section 2.1).
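For monitoring purposes, the sufficient descent condition (18) can be checked numerically as follows; the default value of `c` is only an illustrative choice of the positive constant in (18).

```python
import numpy as np

def sufficient_descent(g, d, c=1e-4):
    """Return True if g_k^T d_k <= -c * ||g_k||^2, i.e., condition (18) holds."""
    g = np.asarray(g, dtype=float)
    d = np.asarray(d, dtype=float)
    return float(g @ d) <= -c * float(g @ g)
```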

2.1. Assumptions in [9, 11, 13]

(A1) $f$ is twice continuously differentiable.
(A2) $f$ is uniformly convex; that is, there exist positive constants $m$ and $M$ such that
$$m\,\|z\|^{2} \le z^{T} G(x)\, z, \tag{19a}$$
$$z^{T} G(x)\, z \le M\,\|z\|^{2}, \tag{19b}$$
for all $x, z \in \mathbb{R}^{n}$, where $G$ is the Hessian matrix of $f$.
(A3) The matrix $G$ is Lipschitz continuous at the point $x^{*}$; that is, there exists a positive constant $L$ satisfying
$$\|G(x) - G(x^{*})\| \le L\,\|x - x^{*}\|$$
for all $x$ in a neighborhood of $x^{*}$.

2.2. A New Theorem for Proving the Sufficient Descent Property

Suppose that the assumptions in Section 2.1 hold and that the generated sequence is bounded. Then the search directions of our new proposed algorithm, defined in (17a), (17b), and (17c), satisfy the sufficient descent condition (18) for all $k \ge 1$.

Proof. Substituting each of (17a), (17b), and (17c) into the descent condition, we obtain the corresponding bounds; in each case the resulting constant $c$ is bounded away from zero. Therefore (18) holds.

2.3. Lemma in [10, 14]

Suppose that the assumptions in Section 2.1 hold. Then the step-size $\alpha_{k}$ appearing in (2) satisfies
$$\alpha_{k} \ge \bar{c}\,\frac{-g_{k}^{T} d_{k}}{\|d_{k}\|^{2}},$$
where $\bar{c}$ is a positive constant.

2.4. A New Theorem for Proving the Global Convergence Property

Having established the important and necessary descent properties of the algorithm, we now turn to the property that every numerical optimization algorithm must possess: global convergence. Consider the new algorithm defined in (17a), (17b), and (17c), and assume that the theorem in Section 2.2 and the assumptions in Section 2.1 hold. Then
$$\lim_{k \to \infty} \|g_{k}\| = 0.$$

Proof. By combining the theorem in Section 2.2 with the lemma in Section 2.3, we obtain the bound in (23). Hence, using the sufficient descent condition (18) established by the new theorem in Section 2.2, we can simplify (23) to the stated limit. Thus, the proof is established.

3. A New Form for the Parameter $\beta_{k}$ in the Conjugate Gradient Method

To obtain an updated version of the conjugate gradient method associated with a new parameter $\beta_{k}^{new}$, we compare the standard CG method specified in (3) with the proposed new algorithm specified in (17a), (17b), and (17c); the comparison yields (24). Multiplying both sides of (24) by the appropriate vector gives (25), or, equivalently, (26), where $I$ is the identity matrix. From these relations we obtain the new $\beta_{k}^{new}$ together with the accompanying scalar. It should be noted that, when exact line searches are used and the matrix $B_{k}$ is taken to be the identity, the new parameter reduces to the Hestenes-Stiefel (HS) parameter. A further condition must also be satisfied, with a positive constant, and since (28) holds, the new parameter is given by (29a). In conjunction with the parameters in (29b), (29c), and (29d), the search direction is defined in (29e).

3.1. Outline of the New Proposed CG-Algorithm in (29a), (29b), (29c), (29d), and (29e)

Given an initial point $x_{1}$ and setting the iteration counter $k = 1$, we perform the following steps.

Step 1. Set $d_{k} = -g_{k}$; if $\|g_{k}\| \le \epsilon$, then stop.

Step 2. Compute the step-size $\alpha_{k}$ using the strong Wolfe line search conditions (4) and (6).

Step 3. Generate the new search direction by applying (29a), (29b), (29c), (29d), and (29e).

Step 4. If the criterion in [15] is satisfied, then go to Step 1; otherwise, set $k = k + 1$ and continue from Step 2. A minimal code sketch of these steps is given below.
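Putting Steps 1-4 together, the following is a minimal sketch under stated assumptions: `beta_new` stands in for the parameter formulas (29a)-(29d), which are not reproduced here, `wolfe_line_search` for any routine enforcing (4) and (6), and the restart test is only a generic placeholder for the criterion cited as [15].

```python
import numpy as np

def new_cg(f, grad, x1, beta_new, wolfe_line_search, eps=1e-6, max_iter=10000):
    """Sketch of Steps 1-4; beta_new and the restart test are placeholders."""
    x = np.asarray(x1, dtype=float)
    g = grad(x)
    d = -g                                        # Step 1: steepest-descent start
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:              # Step 1: stopping test
            break
        alpha = wolfe_line_search(f, grad, x, d)  # Step 2: strong Wolfe step, (4) and (6)
        x = x + alpha * d
        g_new = grad(x)
        d = -g_new + beta_new(g, g_new, d, alpha) * d  # Step 3: direction via (29a)-(29e)
        # Step 4: generic restart placeholder (loss-of-orthogonality test)
        if abs(g_new @ g) >= 0.2 * (g_new @ g_new):
            d = -g_new                            # restart, i.e., go back to Step 1
        g = g_new
    return x
```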

3.2. Assumptions for the Convergence Analysis of the New Algorithm in (29a), (29b), (29c), (29d), and (29e)

Let us suppose the following.
(i) The level set
$$S = \{x \in \mathbb{R}^{n} : f(x) \le f(x_{1})\} \tag{30}$$
is bounded.
(ii) A Lipschitz condition on the gradient is satisfied in a neighborhood of $S$; also, since $f$ is a uniformly convex function on $S$, there exists a positive constant such that (31) holds.
From (19a), (19b), and (31), we obtain the bounds used in the analysis below.

3.3. A New Theorem for Proving the Sufficient Descent Property of the New Algorithm in (29a), (29b), (29c), (29d), and (29e)

Assume that (A3) in Section 2.1 holds and that conditions (i) and (ii), defined in (30) and (31), respectively, hold. Then the new proposed search direction defined in (29e) satisfies the sufficient descent condition (18).

Proof. We prove this new theorem by mathematical induction. For the initial direction ($k = 1$) we have $d_{1} = -g_{1}$, so that $g_{1}^{T} d_{1} = -\|g_{1}\|^{2}$, which satisfies (18).
We now suppose that (18) holds for index $k$.
If we then multiply both sides of (29a) by the appropriate vector and use the conditions stated above together with (31), the resulting inequalities ensure that condition (18) is satisfied for $k + 1$. Hence, the proof is complete.

3.4. Lemma in [1, 16]

Assume that the assumptions in Section 2.1 hold, and suppose that a CG method generates descent directions $d_{k}$ with the step-size $\alpha_{k}$ obtained by the strong Wolfe line search conditions. If
$$\sum_{k \ge 1} \frac{1}{\|d_{k}\|^{2}} = \infty,$$
then
$$\liminf_{k \to \infty} \|g_{k}\| = 0.$$

3.5. A New Theorem for Proving the Global Convergence Property of the New Algorithm in (29a), (29b), (29c), (29d), and (29e)

Suppose that assumptions (i) and (ii) in (30) and (31), respectively, hold, and assume that (A3) in Section 2.1 also holds. Then the search directions defined in (29e) are descent directions provided that the step-size is computed using (4) and (6), and
$$\liminf_{k \to \infty} \|g_{k}\| = 0.$$

Proof. Starting from the sufficient descent property of the directions and the bounds established above, and then applying (A3) in Section 2.1, condition (ii) in (31), and (28), we obtain an upper bound on $\|d_{k+1}\|$. Combining this bound with the lemma in Section 3.4 yields the stated result.

4. Numerical Results and Comparisons

In this work, we compare our new proposed CG method with the standard classical CG methods of Hestenes-Stiefel (HS) and Dai-Yuan (DY) on fifty unconstrained nonlinear test functions obtained from Andrei [17, 18]. The program was stopped when the gradient-norm tolerance was reached. In addition, the term defined in (29a) is computed directly from its definition. Numerical results for the new algorithm in (29a), (29b), (29c), (29d), and (29e), with the selected parameter values, are reported for the total of 50 test problems from the CUTE library. The SigmaPlot software was used to graph the data. We adopt the performance profiles given by Dolan and Moré [19]. Thus, the new, HS, and DY methods are compared in terms of NOI (number of iterations), CPU time, and NOF (number of function evaluations) in Figures 1-3. For each method, we plot the fraction of problems solved within a factor $t$ of the best time; in each figure, the uppermost curve corresponds to the method that solves the most problems within a factor $t$ of the best solver. In Figures 1-3, the new method outperforms the HS and DY methods in terms of NOI, CPU time, and NOF. If the solution had not converged after 800 seconds, the program was terminated; convergence was generally achieved within this time limit, and functions for which the time limit was exceeded are denoted by "F" for failure.
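For reference, a short sketch of how the Dolan and Moré [19] performance-profile curves in Figures 1-3 can be computed from a problems-by-solvers cost table (NOI, NOF, or CPU time); the numerical values in the usage example are purely illustrative and are not the paper's data.

```python
import numpy as np

def performance_profile(costs, taus):
    """costs: (n_problems, n_solvers) array of a positive measure (NOI, NOF, or CPU);
    failures ('F') are encoded as np.inf. Returns rho_s(tau) for each solver."""
    costs = np.asarray(costs, dtype=float)
    best = costs.min(axis=1, keepdims=True)       # best solver on each problem
    ratios = costs / best                         # performance ratios r_{p,s}
    # rho_s(tau): fraction of problems solved within a factor tau of the best solver
    return np.array([[np.mean(ratios[:, s] <= t) for s in range(costs.shape[1])]
                     for t in taus])

# Illustrative (made-up) iteration counts for three solvers on three problems:
costs = [[12, 15, 30], [40, np.inf, 55], [7, 9, 8]]
taus = np.linspace(1.0, 5.0, 50)
profiles = performance_profile(costs, taus)       # columns could be new, HS, DY
```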

5. Conclusions

At the end of this work we obtained a new search direction, defined in (29a), (29b), (29c), (29d), and (29e). This new direction is a hybrid one that combines conjugate gradient techniques with Quasi-Newton ones. Through the theorems presented in this paper, the new direction in (29a), (29b), (29c), (29d), and (29e) was shown to satisfy the sufficient descent condition and to ensure the global convergence property. In addition, we have presented a new scalar ($\beta_{k}^{new}$) which ensures sufficient descent directions. Moreover, under some conditions, we have established that the new proposed algorithm is globally convergent for uniformly convex functions under the strong Wolfe line search conditions. The numerical results show that, with the chosen values of the parameters in (13) and (16), we obtain better numerical results than the other methods considered.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The research is supported by the College of Computer Sciences and Mathematics, University of Mosul, Republic of Iraq, under Project no. 8728196.