Abstract

In computational mathematics, it is important to recognize which of the available iteration schemes converges more quickly, and with smaller error, to the desired solution. Fixed point iterative schemes are constructed for solving equations that arise in many fields of science and engineering. These schemes reformulate a nonlinear equation $f(x) = 0$ into a fixed point equation of the form $x = g(x)$; the solution of the original equation is then obtained via the fixed point iterative method, subject to conditions for existence and uniqueness. In this manuscript, we introduce a new modified family of fixed point iterative schemes for solving nonlinear equations which contains further recursive methods as particular cases. We also prove the convergence of the suggested schemes. We then consider some mathematical models that are inherently nonlinear to validate the performance and effectiveness of these schemes, which can be seen as an extension and generalization of some existing techniques.

1. Introduction

In many disciplines of engineering and the mathematical sciences, a broad class of problems is studied in the framework of a nonlinear equation $f(x) = 0$, where $f$ is a sufficiently smooth function in a neighborhood of a simple zero $\mu$. The development of iterative methods for approximating the solution of a nonlinear equation has become an active area of research in many scientific fields. Several techniques, such as Taylor series, decomposition methods, quadrature formulas, the modified homotopy perturbation method, and the variational iteration method, have been used to derive a diversity of iterative methods; see, for example, [1–30]. One of the most well-known and extensively used iterative methods for solving nonlinear equations is Newton's method, a classical treatment of which is given by Traub [31]. Many numerical methods have been constructed as extensions of Newton and Newton-like methods. Weerakoon and Fernando [30] improved the convergence of Newton's method by approximating the indefinite integral in Newton's theorem by the rectangular and trapezoidal rules. Frontini and Sormani [13] extended these results to obtain another variant of Newton's method which is cubically convergent. Later, Ozban [25] introduced some improved forms of Newton's method based on the harmonic mean and the midpoint rule. Abbasbandy [1], Chun [7], and Darvishi and Barati [10] constructed different higher-order iterative methods by applying the decomposition technique of Adomian [32]. Implementation of the Adomian decomposition technique requires the evaluation of higher-order derivatives, which is a major pitfall of this method. To overcome this drawback, Daftardar-Gejji and Jafari [11] modified the Adomian decomposition method and introduced a simple technique which does not require derivative evaluation of the Adomian polynomials. This technique rewrites the nonlinear equation as the sum of a linear and a nonlinear part. Bhalekar and Daftardar-Gejji [6] analyzed the convergence of the method of [11] in detail and established its equivalence with the Adomian decomposition method. Numerous researchers have used the technique of [11] extensively and derived several higher-order iterative methods for solving nonlinear equations. Ali et al. [2] introduced a family of iterative methods using a quadrature formula together with the fundamental theorem of calculus and checked the validity and performance of these methods on two mathematical models. In [18, 20], the decomposition technique of [11] is combined with a coupled system of equations to obtain iterative methods of various orders. Alharbi et al. [4] used the central idea of the decomposition technique, together with an auxiliary function, to derive generalized and comprehensive forms of higher-order iterative methods for solving nonlinear equations. Ali et al. [3] constructed several new iterative methods using the Taylor series expansion of the function. These methods can be viewed as generalizations of some well-known methods such as Newton's method, Halley's method, and Traub's method.

Inspired and motivated by ongoing research in this direction, we consider the well-known fixed point iterative method [33], in which the nonlinear equation $f(x) = 0$ is rewritten as $x = g(x)$, where $g$ satisfies the following properties for the existence and uniqueness of a fixed point:
(i) If $g \in C[a, b]$ and $g(x) \in [a, b]$ for all $x \in [a, b]$, then $g$ has at least one fixed point in $[a, b]$.
(ii) If, in addition, $g'(x)$ exists on $(a, b)$ and a positive constant $L < 1$ exists such that $|g'(x)| \leq L < 1$ for all $x \in (a, b)$, then the fixed point in $[a, b]$ is unique.
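To make the classical scheme concrete, the following minimal Python sketch (our own illustration; the function $g$, starting point, and tolerance are chosen here for demonstration and are not taken from the paper) iterates $x_{k+1} = g(x_k)$, which converges to the unique fixed point whenever the contraction condition (ii) holds:

```python
import math

def fixed_point(g, x0, eps=1e-12, max_it=200):
    """Classical fixed point iteration x_{k+1} = g(x_k).

    Converges (linearly, with rate at most L) whenever |g'(x)| <= L < 1
    on an interval that g maps into itself and that contains x0.
    """
    for it in range(1, max_it + 1):
        x1 = g(x0)
        if abs(x1 - x0) < eps:      # successive iterates agree to tolerance
            return x1, it
        x0 = x1
    raise RuntimeError("no convergence within max_it iterations")

# Example: solve cos(x) = x via g(x) = cos(x); |g'(x)| = |sin(x)| < 1 near the root.
root, its = fixed_point(math.cos, 0.5)
print(root, its)   # ~0.7390851332, after several dozen iterations (linear rate)
```

The slow, linear convergence visible here is precisely what the higher-order schemes developed below are designed to overcome.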

Then, we use Newton's theorem, write the functional equation as a coupled system, and apply the decomposition technique presented in [11]. In the second section of this study, we introduce some new iterative methods and derive their special cases. The third section comprises the convergence analysis of the proposed iterative methods. The efficiency and performance of the newly constructed family of recursive approaches are tested in the last section by solving some test examples along with two models, namely, the motion of a particle on an inclined plane and the Lennard–Jones potential applied to a minimization problem. We also present a graphical analysis for these models. The numerical results of the examples validate the efficacy of our newly proposed methods.

2. A New Family of Iterative Schemes

Consider the nonlinear equation
$$f(x) = 0, \quad (1)$$
which can be rewritten as
$$x = g(x), \quad (2)$$
where $x_0$ is an initial guess sufficiently close to $\mu$, the simple root of (1). Now, utilizing the technique of Noor et al. [22], approximate the function by means of the fundamental theorem of calculus and a quadrature formula:
$$f(x) \approx f(x_n) + (x - x_n) \sum_{i=1}^{m} \omega_i f'\big(x_n + \tau_i (x - x_n)\big), \quad (3)$$
where $\tau_i \in [0, 1]$ ($i = 1, 2, \ldots, m$) represent the knots and the weights $\omega_i$ satisfy the condition
$$\sum_{i=1}^{m} \omega_i = 1. \quad (4)$$

Applying the technique of He [15], we write the nonlinear equation as an equivalent coupled system of equations:
$$f(x_n) + (x - x_n) \sum_{i=1}^{m} \omega_i f'\big(x_n + \tau_i (x - x_n)\big) + g(x) = 0, \quad (5)$$
$$g(x) = f(x) - f(x_n) - (x - x_n) \sum_{i=1}^{m} \omega_i f'\big(x_n + \tau_i (x - x_n)\big). \quad (6)$$

This system can be rewritten as
$$x = x_n - \frac{f(x_n)}{\sum_{i=1}^{m} \omega_i f'\big(x_n + \tau_i (x - x_n)\big)} - \frac{g(x)}{\sum_{i=1}^{m} \omega_i f'\big(x_n + \tau_i (x - x_n)\big)}, \quad (7)$$
that is,
$$x = c + N(x), \quad (8)$$
where
$$c = x_n - \frac{f(x_n)}{\sum_{i=1}^{m} \omega_i f'\big(x_n + \tau_i (x - x_n)\big)}, \quad (9)$$
$$N(x) = -\frac{g(x)}{\sum_{i=1}^{m} \omega_i f'\big(x_n + \tau_i (x - x_n)\big)}. \quad (10)$$

It is clear that $N(x)$ is a nonlinear operator. Now, we construct a sequence of higher-order iterative methods by employing the decomposition technique initiated by Daftardar-Gejji and Jafari [11]. With the support of this technique, the solution of (8) can be represented in terms of the infinite series
$$x = \sum_{i=0}^{\infty} x_i. \quad (11)$$

The nonlinear operator $N$ can be decomposed as
$$N\left(\sum_{i=0}^{\infty} x_i\right) = N(x_0) + \sum_{i=1}^{\infty} \left\{ N\left(\sum_{j=0}^{i} x_j\right) - N\left(\sum_{j=0}^{i-1} x_j\right) \right\}. \quad (12)$$

Thus, from equations (8), (11), and (12), we have
$$\sum_{i=0}^{\infty} x_i = c + N(x_0) + \sum_{i=1}^{\infty} \left\{ N\left(\sum_{j=0}^{i} x_j\right) - N\left(\sum_{j=0}^{i-1} x_j\right) \right\},$$
which generates the following iterative scheme:
$$x_0 = c, \quad x_1 = N(x_0), \quad x_2 = N(x_0 + x_1) - N(x_0), \quad \ldots, \quad x_{k+1} = N(x_0 + x_1 + \cdots + x_k) - N(x_0 + x_1 + \cdots + x_{k-1}), \quad k = 1, 2, \ldots.$$

It follows that
$$x = x_0 + x_1 + x_2 + \cdots.$$

A notable approximation of $x$ is conveyed by the $m$-term partial sum
$$x \approx X_m = x_0 + x_1 + \cdots + x_{m-1}, \quad \text{with } \lim_{m \to \infty} X_m = x.$$

For $m = 1$, we have

Implementing (10),

From (6), the required quantity can easily be computed; using it in (18), we get

For ,

Using (4) and (17), we have

This formulation suggests Algorithm 1.

(i) For a given $x_0$, the approximate solution is computed by the following iterative scheme:
(ii) Kang et al. [20] developed this algorithm, which has quadratic convergence. From (6) and (8), we have
(iii) For $m = 2$,
(iv) Employing (17) and Algorithm 1 and simplifying, we have
(v) From (19), we take
(vi) By using Algorithm 1 and computing, we get
(vii) This relation yields the following two-step algorithm; the decomposition recurrence that generates these successive corrections is sketched right after this list.
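As an aside, the term-generation recurrence $x_0 = c$, $x_1 = N(x_0)$, $x_{k+1} = N(x_0 + \cdots + x_k) - N(x_0 + \cdots + x_{k-1})$ is easy to exercise numerically. The following sketch is a generic illustration of this Daftardar-Gejji–Jafari recurrence for an arbitrary contraction $N$ chosen here for demonstration; it is not the paper's specific operator (10):

```python
import math

def dgj_partial_sum(N, c, m):
    """m-term approximation X_m = x_0 + ... + x_{m-1} to the solution of
    x = c + N(x), generated by the Daftardar-Gejji-Jafari recurrence
    x_0 = c, x_1 = N(x_0), x_{k+1} = N(S_k) - N(S_{k-1}), S_k = x_0 + ... + x_k."""
    sums = [c]                                   # S_0 = x_0 = c
    if m >= 2:
        sums.append(c + N(c))                    # S_1 = x_0 + x_1
    for _ in range(2, m):
        # S_k = S_{k-1} + x_k, with x_k = N(S_{k-1}) - N(S_{k-2})
        sums.append(sums[-1] + N(sums[-1]) - N(sums[-2]))
    return sums[-1]

# Demonstration: x = 1 + 0.2*sin(x) (a contraction, since |0.2*cos(x)| <= 0.2).
for m in (1, 2, 3, 4):
    print(m, dgj_partial_sum(lambda t: 0.2 * math.sin(t), 1.0, m))
# The partial sums X_1, X_2, X_3, X_4 rapidly approach the unique fixed point.
```

In the iterative methods above, one such partial sum is recomputed afresh at every outer step $n$, with $c$ and $N$ rebuilt from the current iterate $x_n$.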

Algorithms 2 and 3 are the main iterative schemes, which generate further special cases for different values of $m$, the knots $\tau_i$, and the weights $\omega_i$.

(i) For a given $x_0$, the approximate solution is computed by the following iterative scheme:
(ii) It is noted that
(iii) For $m = 3$,
(iv) Using (6), (10), and (17), we have
(v) Applying Algorithm 2, we get
(vi) This formulation yields the following three-step method for solving the nonlinear equation (1); an illustrative sketch of such a three-step composition follows this list.
(i) Let $x_0$ be an initial guess; then, one can figure out the approximate solution with the support of the subsequent recursive scheme:
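For orientation, the structure of such three-step predictor–corrector compositions can be sketched as follows. This is a generic illustration (a Newton predictor, a trapezoidal Weerakoon–Fernando-type corrector, and one frozen-derivative correction), not the paper's exact Algorithm 3, whose display equations are not reproduced here:

```python
def three_step(f, df, x):
    """One pass of a generic three-step scheme of predictor-corrector type."""
    y = x - f(x) / df(x)                    # predictor: one Newton step
    z = x - 2.0 * f(x) / (df(x) + df(y))    # corrector: trapezoidal average of derivatives
    return z - f(z) / df(y)                 # extra correction, derivative frozen at y
```

Each added substep reuses derivative information that is already available, which is how such compositions raise the convergence order while keeping the number of new function evaluations, and hence the efficiency index, under control.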
2.1. Some Particular Manifestations of Algorithm 2

Now, we explore particular cases of Algorithm 2 by considering different values of the knots $\tau_i$ and the weights $\omega_i$. For the first choice of these parameters, Algorithm 2 reduces to the following iterative scheme:

(i) Let $x_0$ be an initial guess; then, one can figure out the approximate solution by the following recursive method:
(ii) This algorithm was established by Kwun et al. [21] and has third-order convergence.
(iii) For another choice of the parameters, Algorithm 2 reduces to the following iterative scheme.
(i) Let $x_0$ be an initial guess; then, the approximate solution is computed by the following recursive method:
(ii) Saqib et al. [27] derived this algorithm, which has fourth-order convergence.
(iii) For a further choice of the parameters, Algorithm 2 reduces to the following iterative scheme.
(i) Let $x_0$ be an initial guess; then, one can figure out the approximate solution by the following recursive technique:
(ii) Gul et al. [29] investigated this algorithm, which has third-order convergence.
(iii) With yet another choice of the parameters, Algorithm 2 is restructured into the following recursive method.
(i) Let $x_0$ be an initial guess; then, one can figure out the approximate solution by the following recursive scheme:
(ii) For a final choice of the parameters, Algorithm 2 reduces to the following iterative scheme.
(i) Let $x_0$ be an initial guess; then, the approximate solution is computed by the following iterative scheme:
2.2. Some Special Manifestations of Algorithm 3

Picking particular values of the knots and weights, Algorithm 3 reduces to the following iterative scheme:

To the best of our knowledge, Algorithms 7, 8, 9, 11, and 12 appear to be new.

(i) For a given $x_0$, compute the approximate solution by the following iterative scheme:
(ii) For another choice of the parameters, Algorithm 3 reduces to the following recursive approach.
(i) Let $x_0$ be an initial guess; then, the approximate solution is computed by the following recursive method:
(ii) Gul et al. [29] suggested this algorithm, which has fourth-order convergence.
(iii) For a further choice of the parameters, Algorithm 3 reduces to the following iterative method.
(i) Let $x_0$ be an initial guess; then, the approximate solution is computed by the following iterative scheme:
(ii) With yet another choice of the parameters, Algorithm 3 reduces to the following iterative method.
(i) Let $x_0$ be an initial guess; then, one can figure out the approximate solution by the following recursive scheme:

3. Convergence Analysis

Theorem 1. Let $f: I \to \mathbb{R}$ be a differentiable function, where $I$ is an open interval, and let $\mu \in I$ be a simple zero of $f$, with $f$ sufficiently smooth in a neighborhood of this root. If $x_0$ is an initial guess sufficiently close to $\mu$, then the multistep methods defined by Algorithms 2, 3, 7, 8, 9, 11, and 12 have convergence of order at least 3, 4, 3, 3, 4, 4, and 4, respectively.

Proof. Let $\mu$ be a root of the nonlinear equation $f(x) = 0$, or equivalently a fixed point, $g(\mu) = \mu$. Let the errors at the $n$th and $(n+1)$th iterations be $e_n = x_n - \mu$ and $e_{n+1} = x_{n+1} - \mu$, respectively.
Now, expanding $f(x_n)$ and $f'(x_n)$ by Taylor's series about $\mu$ and using $f(\mu) = 0$,
$$f(x_n) = f'(\mu)\left[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + \cdots\right], \quad f'(x_n) = f'(\mu)\left[1 + 2 c_2 e_n + 3 c_3 e_n^2 + 4 c_4 e_n^3 + \cdots\right],$$
where $c_k = f^{(k)}(\mu) / (k! \, f'(\mu))$. Considering Algorithm 2 and following (26), we obtain the expansion of its first substep. Expanding the resulting expression in Taylor's series about $\mu$ and substituting (27)–(29) into Algorithm 2, we obtain the error term of Algorithm 2. Likewise, expanding the required quantities in Taylor's series about $\mu$ and applying (30)–(32) to Algorithm 3, we get the error term of Algorithm 3. Now, we investigate the convergence order of the special cases of Algorithms 2 and 3.
Expanding the relevant terms in Taylor's series about $\mu$ and starting with (23), (27), (29), and (34), we obtain the required expansions. By substituting (33) and (34) into Algorithm 7 and simplifying, we obtain the error term of Algorithm 7. Expanding in Taylor's series about $\mu$, employing (23), (27), (28), and (37), and simplifying, we obtain the expansions needed for Algorithm 8; substituting (40) and (41) into Algorithm 8 gives its error term. Now, consider again the error term of Algorithm 4 investigated by Kwun et al. [21]; expanding in Taylor's series about $\mu$ and substituting (24) and (46) into Algorithm 9, we get the error term of Algorithm 9. Next, considering (41), expanding the corresponding terms in Taylor's series about $\mu$, and using (23), (48), and (49), we obtain, after simplification, the expansions required for Algorithm 11; substituting (51) and (52) into Algorithm 11 yields its error term. Finally, from (42), considering the error equation of Algorithm 8 and expanding the remaining terms in Taylor's series about $\mu$, we obtain from (55)–(57) the required expansions; substituting (58) and (59) into Algorithm 12 gives the error term of Algorithm 12. This completes the proof.
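The order claims of Theorem 1 can also be checked symbolically. The sketch below is our own verification aid, not part of the paper's proof: it expands a representative two-step member of this family (a Newton predictor followed by a trapezoidal, Weerakoon–Fernando-type corrector) in powers of the error $e$ and confirms that the leading term of the new error is of order $e^3$:

```python
import sympy as sp

e, c2, c3, c4 = sp.symbols('e c2 c3 c4')

# Model f by its Taylor series about the root mu, normalized so that f'(mu) = 1:
# f(mu + d) = d + c2*d**2 + c3*d**3 + c4*d**4 + ..., with c_k = f^(k)(mu)/(k! f'(mu)).
def F(d):
    return d + c2 * d**2 + c3 * d**3 + c4 * d**4

def dF(d):
    return 1 + 2 * c2 * d + 3 * c3 * d**2 + 4 * c4 * d**3

y = e - F(e) / dF(e)                    # error after the Newton predictor
e_new = e - 2 * F(e) / (dF(e) + dF(y))  # error after the two-step corrector
print(sp.series(e_new, e, 0, 4))        # no e or e**2 term: leading term is O(e**3)
```

The same computation, with the corrector replaced by the corresponding substeps, reproduces the stated orders of the other algorithms.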

4. Efficiency Index

Commonly in the literature, the efficiency index [31] of an algorithm provides information about the numerical behavior and performance of the method under examination. It is also used to compare different iterative methods and is mathematically defined as $EI = p^{1/w}$, where $p$ represents the order of the method and $w$ is the total number of function evaluations (the function and the derivatives involved) per iteration required by the method. Taking this into account, one can calculate the $EI$ of different iterative methods. Since the Ullah method (UM) [5] is quadratically convergent and needs four function evaluations per iteration, the $EI$ for this method is $2^{1/4} = 1.18921$. Similarly, the $EI$ of the Farooq method (FM) [28] is $3^{1/5} = 1.24573$, because the order of convergence of the method is three and it requires five function evaluations per iteration. The efficiency indexes of the Noor methods [22], with cubic and fourth order of convergence, can be computed in the same way. Now, we compute the efficiency indexes of the newly proposed algorithms. The convergence order of the methods described in Algorithms 7 and 8 is 3 (see (37) and (42)), and each requires four function evaluations per iteration; thus, the efficiency index for both methods is $3^{1/4} = 1.31607$. The convergence order of the methods described in Algorithms 9, 11, and 12 is 4 (see equations (47), (53), and (60)), and the total number of evaluations per iteration is 4, 6, and 6, respectively; thus, the efficiency indexes for these methods are $4^{1/4} = 1.41421$, $4^{1/6} = 1.25992$, and $4^{1/6} = 1.25992$, respectively. Table 1 summarizes the efficiency indexes computed above, and it can easily be noted that the efficiency indexes of the newly established algorithms AG1, AG2, AG3, AG4, and AG5 are better than those of the other iterative methods.
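These values follow directly from $EI = p^{1/w}$; the following short sketch reproduces the arithmetic of this section:

```python
def efficiency_index(p: float, w: int) -> float:
    """Efficiency index EI = p**(1/w): p = convergence order,
    w = function evaluations per iteration."""
    return p ** (1.0 / w)

print(round(efficiency_index(2, 4), 5))  # UM:       1.18921
print(round(efficiency_index(3, 5), 5))  # FM:       1.24573
print(round(efficiency_index(3, 4), 5))  # AG1, AG2: 1.31607
print(round(efficiency_index(4, 4), 5))  # AG3:      1.41421
print(round(efficiency_index(4, 6), 5))  # AG4, AG5: 1.25992
```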

5. Applications

In this section, we consider two well-known models from mathematics, physics, and physical chemistry, namely, a nonlinear model arising from the motion of a particle on an inclined plane and the Lennard–Jones potential, a renowned model representing the interaction between neutral molecules or atoms. We also include some examples used by Chun [7] to demonstrate the efficacy of the proposed algorithms. For the computational work, we implement the codes in MAPLE, with MATLAB used for the graphical analysis, and the following stopping criterion is taken into account for all computations:
$$\left| x_{n+1} - x_n \right| < \varepsilon. \quad (62)$$

We display a comparative representation of the newly established methods introduced in this paper, Algorithm 7 (AG1), Algorithm 8 (AG2), Algorithm 9 (AG3), Algorithm 11 (AG4), and Algorithm 12 (AG5), against the second-order Ullah method (UM) [5], the third-order Farooq method (Algorithm 13) (FM) [28] with $\alpha = 0.9$, and the Noor methods [22] {(Algorithm 2.8) (NR1), (Algorithm 2.12) (NR2), and (Algorithm 2.15) (NR3)}, to show that the proposed methods perform more efficiently; see Tables 2–4 and Figures 1–3. We obtain an estimated simple root, rather than the exact one, depending on the precision of the computer, and use the tolerance $\varepsilon$. As far as the convergence criterion is concerned, it is required that the distance between two consecutive estimates of the zero be no more than $\varepsilon$. In Tables 2–4, $x_0$ is the initial guess, IT represents the number of iterations, $x_n$ is the approximate root, and $f(x_n)$ is its corresponding functional value.

The computational order of convergence (COC) (see [9]) is computed to check the behavior of the proposed methods on the presented examples and is given by
$$\text{COC} \approx \frac{\ln \left| (x_{n+1} - \mu) / (x_n - \mu) \right|}{\ln \left| (x_n - \mu) / (x_{n-1} - \mu) \right|}. \quad (63)$$
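As a concrete illustration (our own demonstration, using Newton's method on $f(x) = x^2 - 2$ rather than one of the paper's examples), the COC can be estimated from three consecutive iterates:

```python
import math

def coc(iterates, root):
    """COC ~ ln|(x_{n+1}-mu)/(x_n-mu)| / ln|(x_n-mu)/(x_{n-1}-mu)|,
    estimated from the last three iterates."""
    x_prev, x_curr, x_next = iterates[-3:]
    return (math.log(abs((x_next - root) / (x_curr - root)))
            / math.log(abs((x_curr - root) / (x_prev - root))))

xs = [1.5]
for _ in range(3):                      # three Newton steps for f(x) = x**2 - 2
    xs.append(xs[-1] - (xs[-1]**2 - 2.0) / (2.0 * xs[-1]))
print(coc(xs, math.sqrt(2.0)))          # ~2, matching Newton's quadratic order
```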

Example 1. (model based on population growth [33]).
Consider the mathematical modeling of the growth of a population over a short period of time, whose governing equation is nonlinear in nature. If a population initially numbering 1,000,000 individuals grows with a constant birth rate $\lambda$, receives 435,000 immigrants per year, and numbers 1,564,000 after one year, then $\lambda$ satisfies
$$f(\lambda) = 1{,}000{,}000\, e^{\lambda} + \frac{435{,}000}{\lambda} \left( e^{\lambda} - 1 \right) - 1{,}564{,}000 = 0. \quad (64)$$
We want to determine the value of $\lambda$, the constant birth rate of the population. For the computational work, we take an initial estimate close to the root. The solution of this example, accurate to 16 decimal digits, is 0.1009979296857498. The numerical results for this problem are given in Table 2. Figure 1 shows the fall of the residuals for this example. It is clear from the computational results, in terms of the number of iterations, that the new fixed point iterative methods AG1, AG2, AG3, AG4, and AG5 are more efficient than the known methods UM, FM, NR1, NR2, and NR3.
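A plain Newton iteration on (64) already reproduces the quoted root; the sketch below is our own verification (ordinary Newton rather than the paper's AG methods, and the initial estimate 0.5 is our own choice):

```python
import math

def f(lam):    # population growth model (64)
    return (1_000_000 * math.exp(lam)
            + (435_000 / lam) * (math.exp(lam) - 1.0) - 1_564_000)

def df(lam):   # derivative of (64) with respect to lam
    return (1_000_000 * math.exp(lam)
            + 435_000 * (math.exp(lam) * (lam - 1.0) + 1.0) / lam**2)

lam = 0.5
for _ in range(50):
    step = f(lam) / df(lam)     # Newton correction
    lam -= step
    if abs(step) < 1e-16:       # stopping criterion in the spirit of (62)
        break
print(lam)   # ~0.1009979296857498
```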

Example 2. (Lennard–Jones potential model [24]).
Consider a specific model for the atomic potential referred to as the Lennard–Jones potential, a well-known model in atomic physics and physical chemistry:
$$V(s) = 4\epsilon \left[ \left( \frac{\sigma}{s} \right)^{12} - \left( \frac{\sigma}{s} \right)^{6} \right], \quad (65)$$
where $\epsilon$ denotes the depth of the potential well and $\sigma$ is the length scale representing the distance at which the interparticle potential between the two atoms becomes zero. The function $V(s)$ in (65) attains its minimum at $s = 2^{1/6} \sigma$; for the chosen values of $\epsilon$ and $\sigma$, the actual minimum value of the function $V(s)$ is $-\epsilon$. Now, differentiating $V(s)$ in (65), the minimization problem is transformed into the problem of finding the solution of the nonlinear equation
$$V'(s) = 4\epsilon \left[ \frac{6 \sigma^{6}}{s^{7}} - \frac{12 \sigma^{12}}{s^{13}} \right] = 0. \quad (66)$$
In this example, for the computational evaluations, we utilize an initial estimate close to the root. The columns in Table 3 illustrate the comparison of the numerical results, in terms of the number of iterations, for this problem. Figure 2 shows the drop of the residuals for this example. It is concluded from Table 3 and Figure 2 that the effectiveness and performance of the proposed schemes are much better than those of other similar standard methods. The results of this example also show that the fixed point iterative methods converge towards the solution more rapidly than the existing ones.
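Under the assumed parameters $\epsilon = 1$ and $\sigma = 1$ (the paper's chosen values are not reproduced here), a plain Newton iteration on (66) recovers the minimizer $s = 2^{1/6}\sigma$; this sketch is our own illustration:

```python
eps, sigma = 1.0, 1.0   # assumed parameters, for illustration only

def dV(s):    # first derivative of the Lennard-Jones potential, as in (66)
    return 4.0 * eps * (6.0 * sigma**6 / s**7 - 12.0 * sigma**12 / s**13)

def d2V(s):   # second derivative, used for the Newton step
    return 4.0 * eps * (156.0 * sigma**12 / s**14 - 42.0 * sigma**6 / s**8)

s = 1.2
for _ in range(50):
    step = dV(s) / d2V(s)
    s -= step
    if abs(step) < 1e-15:
        break
print(s, 2.0 ** (1.0 / 6.0))   # both ~1.1224620483, i.e., s = 2**(1/6)*sigma
```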

Example 3. (transcendental and algebraic problems).
To analyze the suggested algorithms numerically, we consider several transcendental and algebraic equations used by Chun [7]. In Table 4, we display the numerical results for these examples to validate the theoretical results. The second column of Table 4 shows the number of iterations required to reach the stopping criterion (62). It is clear from the results obtained that the new methods need fewer iterations than the other methods.
The columns in Table 5 give the number of iterations for different nonlinear functions along with the initial guess $x_0$. Figure 3 shows the comparison of the iterative methods with respect to the number of iterations: a comparative count of the iterations needed by the different methods and by our developed methods is presented, using the stopping criterion (62) with the prescribed accuracy. It is clear from Table 5 that, under the same convergence criterion for all the methods, the number of iterations required by the new methods remains smaller than the number needed by the other methods of the same order.
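The comparisons in Tables 4 and 5 can be mirrored in spirit with a small self-contained driver. The sketch below is our own illustration: $f(x) = \cos x - x$ is a typical test function of this kind (not necessarily from the paper's exact set), and the two-step trapezoidal scheme stands in for a third-order member of the family:

```python
import math

f  = lambda x: math.cos(x) - x
df = lambda x: -math.sin(x) - 1.0

def iterate(step, x, eps=1e-15, max_it=100):
    """Apply a one-point iteration until |x_{n+1} - x_n| < eps, as in (62)."""
    for it in range(1, max_it + 1):
        x_new = step(x)
        if abs(x_new - x) < eps:
            return x_new, it
        x = x_new
    return x, max_it

newton = lambda x: x - f(x) / df(x)

def two_step(x):                     # third-order two-step (trapezoidal) scheme
    y = x - f(x) / df(x)
    return x - 2.0 * f(x) / (df(x) + df(y))

print(iterate(newton, 1.0))    # root ~0.7390851332151607, more iterations
print(iterate(two_step, 1.0))  # same root, fewer iterations at the same tolerance
```

Counting iterations under a common stopping criterion, as done here, is exactly the comparison reported in Tables 4 and 5.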

6. Conclusion

We have introduced a new modified family of iterative methods (Algorithms 2 and 3) for solving nonlinear equations by using a decomposition technique. Various new iterative methods of different orders have been constructed as special cases of this family. The convergence of the newly proposed methods has been examined in order to establish their convergence orders. In Tables 2–5 and Figures 1–3, we furnish numerical and graphical comparisons of these strategies with several known procedures by examining two models and a few algebraic nonlinear equations. The numerical results and graphical depictions confirm the speed and superior performance of the methods with respect to the number of iterations, even when the accuracy is raised up to $\varepsilon$.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This research has been partially supported by the Ministerio de Ciencia, Innovación y Universidades, grant no. PGC2018-0971-B-100, and the Fundación Séneca de la Región de Murcia, grant no. 20783/PI/18.