Abstract

One of the best-known methods for constrained optimization is the augmented Lagrange method. Two versions of the proposed method are built, one in its external framework and one in its internal framework. The first, basic version of the proposed algorithm includes a new derivation of the Lagrange multipliers and a different penalty criterion; in the second version, the internal framework incorporates the unconstrained algorithm known as the conjugate gradient (CG) method, and a new parameter is derived for the search direction. The numerical results indicate the stability, efficiency, and speed of the proposed algorithm, based on the performance profiles of Dolan and Moré.

1. Introduction

All major strategies for constrained optimization problems can be categorized into two basic classes: direct and indirect methods. Indirect methods create subproblems that contain no constraints. Lagrange function techniques transform a constrained problem into an unconstrained one; the technique is conceptually simple and very powerful. Augmented Lagrange algorithms are well-known and successful methods for solving constrained optimization problems and are commonly used in various engineering and economic fields. In this paper, we consider optimization problems with equality constraints:

$$\min f(x) \quad \text{subject to} \quad h_i(x) = 0, \quad i = 1, \ldots, m, \qquad (1)$$

where $f$ and $h_i$, $i = 1, \ldots, m$, are maps from $\mathbb{R}^n$ to $\mathbb{R}$. Replacing the constrained optimization problem by a series of unconstrained problems obtained by adding a barrier term [14], the general formulation of the augmented Lagrange method is defined as follows:

$$L_A(x, \lambda, \rho) = f(x) - \sum_{i=1}^{m} \lambda_i h_i(x) + \frac{\rho}{2} \sum_{i=1}^{m} h_i(x)^2, \qquad (2)$$

where $\rho > 0$ is a constant penalty parameter.
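As a concrete illustration, here is a minimal Python sketch of evaluating the classical augmented Lagrangian in (2) on a toy problem. All names are hypothetical, and only the classical form is shown; the barrier-modified variant used later in the paper is not reproduced.

```python
import numpy as np

def L_A(x, lam, rho, f, h):
    """Classical augmented Lagrangian from (2):
    f(x) - lam^T h(x) + (rho / 2) * ||h(x)||^2."""
    hx = h(x)
    return f(x) - lam @ hx + 0.5 * rho * (hx @ hx)

# Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0; minimizer x* = [0.5, 0.5].
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: np.array([x[0] + x[1] - 1.0])

# With the true multiplier lam* = [1.0], x* is a stationary point of
# L_A(., lam*, rho) for any rho >= 0: the constrained problem has been
# replaced by an unconstrained one.
x_star, lam_star = np.array([0.5, 0.5]), np.array([1.0])
print(L_A(x_star, lam_star, 10.0, f, h))  # 0.5, equal to f(x*)
```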

Several authors have studied such formulas over the years [5–7].

2. First Version (Two Multipliers) (Say, NEW1)

This section provides a detailed description of our improved Lagrange method for solving optimization problem (1). It builds on, and differs from, the standard augmented Lagrange method in how new multiplier values are obtained. We derive two new modified Lagrange multiplier formulas. We begin with the definition of the augmented Lagrangian. The general formula of the Lagrange function is

$$L(x, \lambda) = f(x) - \sum_{i=1}^{m} \lambda_i h_i(x).$$

The gradient of the Lagrange function is

$$\nabla_x L(x, \lambda) = \nabla f(x) - \sum_{i=1}^{m} \lambda_i \nabla h_i(x). \qquad (3)$$

The new modified barrier augmented Lagrange multiplier method is defined by

Compare (3) with (4) to obtain the first multiplier as follows:

Now, we derive the second multiplier:

Compare (6) with (7) to obtain the following:
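Although the new multiplier formulas (5) and (8) are specific to this paper and are not reproduced here, the outer iteration they plug into follows the usual augmented Lagrange pattern. The following minimal Python sketch uses the classical first-order update $\lambda \leftarrow \lambda - \rho\, h(x)$ purely as a stand-in for (5) and (8); all function names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrange(f, h, x0, lam0, rho=10.0, tol=1e-8, max_outer=50):
    """Outer augmented Lagrange loop; the classical multiplier update
    lam <- lam - rho * h(x) stands in for the paper's formulas (5)/(8)."""
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    for _ in range(max_outer):
        # Inner subproblem: minimize L_A(., lam, rho) without constraints.
        LA = lambda z: f(z) - lam @ h(z) + 0.5 * rho * h(z) @ h(z)
        x = minimize(LA, x, method="CG").x
        if np.linalg.norm(h(x)) < tol:      # constraints satisfied
            break
        lam = lam - rho * h(x)              # classical first-order update
        rho *= 2.0                          # tighten the penalty if needed
    return x, lam

# Same toy problem as in the sketch above: min x1^2 + x2^2  s.t.  x1 + x2 = 1.
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: np.array([x[0] + x[1] - 1.0])
print(augmented_lagrange(f, h, [0.0, 0.0], [0.0])[0])  # approx [0.5, 0.5]
```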

3. Conjugate Gradient (CG) Method with the Constrained Problem

Now, we turn to the new second part: increasing the efficiency of the internal iteration by optimizing the line search so as to reach the optimum quickly and with the least error, using an effective CG method. This method is very popular among mathematicians interested in solving large-scale unconstrained nonlinear optimization problems

$$\min_{x \in \mathbb{R}^n} f(x), \qquad (9)$$

where $f$ is a smooth function. It creates a series of successive points that take the form

$$x_{k+1} = x_k + \varpi_k d_k. \qquad (10)$$

The parameter $\varpi_k$ is the step length obtained by a cubic line search, and $d_k$ is the search direction, which creates a series of successive directions as

$$d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad d_0 = -g_0, \qquad (11)$$

where $\beta_k$ is known as the conjugacy coefficient and $g_k = \nabla f(x_k)$. Different CG methods are obtained from different choices of $\beta_k$; scientists have conducted many studies of such choices [8–20]. The standard Wolfe conditions are

$$f(x_k + \varpi_k d_k) \le f(x_k) + \delta \varpi_k g_k^{\top} d_k, \qquad (12)$$

$$g(x_k + \varpi_k d_k)^{\top} d_k \ge \sigma g_k^{\top} d_k. \qquad (13)$$

The constants $\delta$ and $\sigma$ lie in the interval $(0, 1)$ with $\delta < \sigma$ [21, 22].
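For illustration, here is a minimal Python sketch of iteration (10)–(11) under the Wolfe conditions (12) and (13). The Fletcher–Reeves coefficient is used only as a placeholder for $\beta_k$, since many choices appear in [8–20], and SciPy's `line_search` (a strong Wolfe implementation, with `c1`, `c2` playing the roles of $\delta$, $\sigma$) stands in for the cubic line search.

```python
import numpy as np
from scipy.optimize import line_search

def cg_wolfe(f, grad, x0, tol=1e-6, max_iter=500):
    """CG iteration (10)-(11) with a Wolfe line search; the Fletcher-Reeves
    coefficient below is only a placeholder for beta_k."""
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g                                    # d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Wolfe conditions (12)-(13)
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
        if alpha is None:                     # line search failed: restart
            d, alpha = -g, 1e-4
        x = x + alpha * d                     # iterate update (10)
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)      # Fletcher-Reeves placeholder
        d = -g_new + beta * d                 # direction update (11)
        g = g_new
    return x

# Example: a smooth two-dimensional test function.
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0) - 40.0 * x[0] * (x[1] - x[0] ** 2),
                           20.0 * (x[1] - x[0] ** 2)])
print(cg_wolfe(f, grad, [-1.2, 1.0]))  # converges near [1.0, 1.0]
```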

4. Second Version (Scaling Parameter) (Say, NEW2)

In this version, a new scaling parameter is proposed to improve the CG method. The direction is defined in (14), and the conjugacy coefficient $\beta_k$ is defined by the formula in [13], i.e., as in (15).

Multiplying both sides of (15), we get

We have

Using (17) in (16) yields
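The intermediate formulas (14)–(18) are not reproduced above. As a hedged illustration only, the following LaTeX sketch shows the standard pattern that such scaling derivations follow, assuming a scaled direction and a Perry-type conjugacy condition; both are assumptions for illustration, not the paper's exact formulas.

```latex
% Sketch only: assumed scaled-CG form, not the paper's exact derivation.
\[
  d_{k+1} = -\theta_{k+1}\, g_{k+1} + \beta_k\, d_k,
  \qquad y_k = g_{k+1} - g_k .
\]
% Multiplying both sides by $y_k^{\top}$ and imposing the conjugacy
% condition $y_k^{\top} d_{k+1} = 0$ gives the scaling parameter:
\[
  0 = -\theta_{k+1}\, y_k^{\top} g_{k+1} + \beta_k\, y_k^{\top} d_k
  \quad\Longrightarrow\quad
  \theta_{k+1} = \frac{\beta_k\, y_k^{\top} d_k}{y_k^{\top} g_{k+1}} .
\]
```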

5. Convergence Analysis of the New Proposed Method

Convergence analysis is performed for the new method (20), based on the inexact line search given by the Wolfe conditions. We shall also show that the CG coefficients satisfy appropriate descent conditions and possess global convergence properties. In light of the inexact line search conditions (12) and (13), we first establish sufficient descent conditions.

5.1. Sufficient Descent Conditions

To establish the sufficient descent condition, we present the following theorem.

Theorem 1. Let the step length $\varpi_k$ in (10) satisfy relations (12) and (13); then, the direction defined by (20) is a descent direction.

Proof. Multiplying both sides of (18) by $g_{k+1}^{\top}$, we obtain (22). We also have (23). Using (23) in (22) yields the sufficient descent condition $g_{k+1}^{\top} d_{k+1} < 0$, which completes the proof.
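The descent property can also be checked numerically. A small Python sketch (same placeholder $\beta_k$ as in the earlier sketch, on a convex quadratic test function) verifies $g_{k+1}^{\top} d_{k+1} < 0$ at each step:

```python
import numpy as np
from scipy.optimize import line_search

# Convex quadratic test: f(x) = 0.5 * x^T A x with A = diag(1, 10).
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

x = np.array([3.0, -2.0])
g = grad(x)
d = -g
for k in range(3):
    if np.linalg.norm(g) < 1e-10:            # already converged
        break
    alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
    assert alpha is not None                 # a Wolfe step was found
    x = x + alpha * d
    g_new = grad(x)
    beta = (g_new @ g_new) / (g @ g)         # placeholder beta_k (FR)
    d = -g_new + beta * d
    g = g_new
    print(k, float(g @ d))                   # strictly negative: descent holds
```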

5.2. Global Convergence Properties

To prove that the new method (20) fulfills the convergence conditions, we need some basic convergence hypotheses, the most important of which is the uniqueness of the solution. For additional details, see [23–25].

Lemma 1. Suppose the above assumptions hold; then the convergence property of (20), with step length ϖ satisfying relations (12) and (13), is achieved. If

$$\sum_{k \ge 1} \frac{1}{\|d_k\|^2} = \infty,$$

then

$$\liminf_{k \to \infty} \|g_k\| = 0.$$

Theorem 2. Suppose that our assumptions hold, the new direction defined by (18) is a descent direction, and the step length is computed using (12) and (13); then,

$$\liminf_{k \to \infty} \|g_k\| = 0.$$

Proof. Since the parameter is positive and defined between zero and one, we have

where

and

From (11) and (12), we obtain

Since the preceding relation holds, we obtain

From the Wolfe conditions (12) and (13), we have

Let

Therefore, from (14), we have

Hence, from (26) and (31), we have

That is, $\liminf_{k \to \infty} \|g_k\| = 0$, and hence, the proof is complete.

6. Results and Discussion

To verify the effectiveness of the proposed algorithms, we performed numerical experiments that can be summarized as three comparisons. The first compares the Lagrange algorithm using the newly derived Lagrange multipliers with standard algorithms in this field. The second compares the improved conjugate gradient algorithms with well-known algorithms in this field. The final comparison combines the modified Lagrange algorithm with the developed conjugate gradient optimization and compares the result against a classic algorithm. The new augmented Lagrange methods update the Lagrange multiplier and the penalty parameter at each iteration, which means a lower cost function and less time required to implement the method; the new versions also use those multipliers to modify the line search of the conjugate gradient method, reducing errors and avoiding arbitrary values. All algorithms implement the Wolfe line search conditions with δ = 0.0001 and σ = 0.001. The numerical results are highly effective when compared with other established algorithms. For the standard constrained problems, we demonstrated the global convergence of the proposed method. The new constrained CG technique was tested on more than twenty test functions (see the Appendix of [23] for details of these test problems). We present the results of all experiments using the Dolan and Moré performance profile [25].

Figures 1–3 show the performance of the above methods: Figures 1 and 2 concern the number of iterations and the number of function evaluations, respectively, and Figure 3 shows the CPU time.
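For reference, the Dolan–Moré profile [25] plots, for each solver $s$, the fraction $\rho_s(\tau)$ of problems on which $s$ is within a factor $\tau$ of the best solver. A minimal Python sketch over hypothetical data (the numbers below are illustrative, not the paper's results):

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(T, taus):
    """T[p, s] = cost (iterations, function evaluations, or CPU time) of
    solver s on problem p; np.inf marks a failure. Returns, for each solver,
    rho(tau) = fraction of problems solved within a factor tau of the best."""
    ratios = T / T.min(axis=1, keepdims=True)   # performance ratios r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])

# Hypothetical iteration counts: 4 problems x 2 solvers.
T = np.array([[12.0, 20.0],
              [35.0, 30.0],
              [ 8.0, np.inf],    # the second solver fails on problem 3
              [50.0, 55.0]])
taus = np.linspace(1.0, 3.0, 50)
rho = performance_profile(T, taus)
for s, label in enumerate(["new version", "classic"]):
    plt.step(taus, rho[s], where="post", label=label)
plt.xlabel(r"$\tau$"); plt.ylabel(r"$\rho_s(\tau)$"); plt.legend(); plt.show()
```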

7. Conclusions

The results of the current study of the two new versions showed that the newly proposed method has the potential to reduce the function cost, the time required to reach the solution, and the number of required iterations compared to some well-known CG methods, as measured by the Dolan–Moré profile.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by the College of Computer Sciences and Mathematics, University of Mosul, Republic of Iraq.