Abstract

We use a new modified homotopy perturbation method to suggest and analyze some new iterative methods for solving nonlinear equations. This new modification of the homotopy method is quite flexible. Various numerical examples are given to illustrate the efficiency and performance of the new methods. These new iterative methods may be viewed as an addition and generalization of the existing methods for solving nonlinear equations.

1. Introduction

It is well known that a wide class of problems, which arise in various branches of mathematical and engineering science, can be studied in the unified framework of a nonlinear equation of the form f(x) = 0. Numerical methods for finding approximate solutions of such nonlinear equations have been developed using several different techniques, including Taylor series, quadrature formulas, homotopy, and decomposition techniques; see [1–14] and the references therein.

In this paper, we use the modified homotopy perturbation technique to suggest the main iterative schemes, which generate iterative methods of higher order. First of all, we rewrite the given nonlinear equation, together with an auxiliary function, as an equivalent coupled system of equations using the Taylor series. This approach enables us to express the given nonlinear equation as the sum of a linear part and a nonlinear part. This way of writing the given equation is known as the decomposition and plays the central role in suggesting iterative methods for solving nonlinear equations. Results obtained in this paper suggest that this new technique of decomposition is a promising tool. In Section 2, we outline the main ideas of this technique and suggest one-step and two-step iterative methods for solving nonlinear equations. One can notice that if the derivative of the function vanishes, that is, f'(x) = 0, during the iterative process, then the sequence generated by the Newton method or the methods derived in [1–9] is not defined: such division by zero causes a mathematical breakdown. This is another motivation of the paper: the derived higher-order methods converge even if the derivative vanishes during the iterative process. Several numerical examples are given to illustrate the efficiency and the performance of the new iterative methods. Our results can be considered an important improvement and refinement of the previously known results.

2. Construction of Iterative Methods

In this section, we construct some higher-order iterative methods for solving nonlinear equations. For this purpose, we use the idea of the modified homotopy perturbation method.

Consider a nonlinear equation of the type f(x) = 0. Assume that α is a simple root of the nonlinear equation (1) and that x0 is an initial guess sufficiently close to α. Let g(x) be an auxiliary function such that g(x) ≠ 0. We can rewrite the nonlinear equation (2) as a system of coupled equations, using the Taylor series technique, as or where x0 is the initial approximation for a zero of (1). We can rewrite (4) in the following form: From the relation (4), it is clear that It is important to note that (6) plays an important role in the derivation of the iterative methods; see Chun [3] and Noor [11]. We rewrite (5) in the following form: where Here N(x) is a nonlinear operator. We use the modified homotopy technique to develop iterative schemes for solving nonlinear equations.
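The decomposition described above commonly takes the following form in the literature (cf. Noor [11]); the symbols γ, h, c, and N below are assumed notation for a sketch, not necessarily a verbatim reconstruction of equations (3)–(9):

```latex
% A common form of the coupled-system decomposition (cf. Noor [11]);
% the symbols gamma, h, c, and N are assumed notation.
\begin{align*}
  f(\gamma) + (x-\gamma)f'(\gamma) + h(x) &= 0,
  \qquad h(x) := f(x) - f(\gamma) - (x-\gamma)f'(\gamma),\\
  x &= \gamma - \frac{f(\gamma)}{f'(\gamma)} - \frac{h(x)}{f'(\gamma)}
     \;=\; c + N(x),\\
  c &:= \gamma - \frac{f(\gamma)}{f'(\gamma)},
  \qquad N(x) := -\frac{h(x)}{f'(\gamma)}.
\end{align*}
```

The point of the decomposition is exactly the split into a known constant part c and a nonlinear operator N(x), which the homotopy then perturbs.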

Define the homotopy as or where p is called the embedding parameter, x0 is an initial approximation, and an arbitrary auxiliary operator is involved. We would like to emphasize that one has the opportunity to select the initial guess x0 and the auxiliary operator. From (12), we get The embedding parameter p monotonically increases from zero to unity as the trivial problem is continuously deformed into the original problem. The changing process of p from zero to unity is called deformation, and the two problems are homotopic.
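For orientation, the classical (unmodified) homotopy of He [9] has the following shape; the paper's modified homotopy in (12) may differ in its auxiliary operator, so this is background rather than a reconstruction:

```latex
% Classical homotopy perturbation setup of He; A = L + N is the original
% operator, u_0 an initial approximation, f(r) the source term.
H(v,p) = (1-p)\bigl[L(v) - L(u_0)\bigr] + p\bigl[A(v) - f(r)\bigr] = 0,
\qquad p \in [0,1],
```

with H(v, 0) = L(v) − L(u0) = 0 (the trivial problem) and H(v, 1) = A(v) − f(r) = 0 (the original problem).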

For the application of the modified homotopy perturbation method to (1), we can write (12) as follows by expanding into a Taylor series: Now substituting (8) into (12), we have By equating the coefficients of the same powers of p, we have To find the auxiliary parameter, we take . In this case, from (18), we have By substituting (19) into (18), we have Setting , we obtain Substituting (22) into (18), we have Again substituting in (19), we get From (9), we have One can easily compute Note that is approximated by For , we have which gives the following iterative method for solving nonlinear equations.
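The substitution of (8) into (12) referred to above follows the standard perturbation pattern: the solution is expanded in powers of the embedding parameter (generic form, with our notation):

```latex
% Perturbation expansion in the embedding parameter p (generic form).
x = x_0 + p\,x_1 + p^2 x_2 + p^3 x_3 + \cdots
```

Equating the coefficients of p^0, p^1, p^2, ... yields a cascade of equations solvable successively for x_0, x_1, x_2, ...; setting p = 1 and truncating after one, two, or three correction terms produces successively higher-order approximations.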

Algorithm 1. For a given x0, find the approximate solution x_{n+1} by the following iterative scheme: This is the main iteration scheme, which generates at least quadratically convergent iterative methods for solving nonlinear equations. This scheme was also introduced by He [9] and Noor [10] for this purpose, using a different technique.
For , we have This formulation yields the following iterative scheme for solving nonlinear equations.

Algorithm 2. For a given x0, find the approximate solution x_{n+1} by the following iterative schemes: In a similar way, for , we have This formulation yields the following iterative method for solving nonlinear equations.

Algorithm 3. For a given x0, find the approximate solution x_{n+1} by the following iterative schemes: For , Algorithm 3 reduces to the following new iterative method for solving nonlinear equations.

Algorithm 4. For a given x0, find the approximate solution x_{n+1} by the following iterative schemes: In all the above derived iterative methods, an arbitrary auxiliary function is involved. This arbitrary function helps to produce the methods best suited to a given implementation. If we take the auxiliary function to be constant, then Algorithms 1–4 reduce to the following methods, Algorithms 5–8.

Algorithm 5. For a given x0, find the approximate solution x_{n+1} by the following iterative scheme:

Algorithm 6. For a given x0, find the approximate solution x_{n+1} by the following iterative schemes:

Algorithm 7. For a given x0, find the approximate solution x_{n+1} by the following iterative schemes:

Algorithm 8. For a given x0, find the approximate solution x_{n+1} by the following iterative schemes: One can select different values of the auxiliary function and obtain various iterative schemes for solving nonlinear equations. For the sake of completeness, and to convey the implementation idea, we take a particular choice of the auxiliary function and obtain the following iterative methods from Algorithms 1–4.

Algorithm 9. For a given x0, find the approximate solution x_{n+1} by the following iterative scheme: This is the main iteration scheme, which was also introduced by He [9] and Noor [10] for generating different iterative methods for solving nonlinear equations, using a different technique.

Algorithm 10. For a given x0, find the approximate solution x_{n+1} by the following iterative schemes:

Algorithm 11. For a given x0, find the approximate solution x_{n+1} by the following iterative schemes:

Algorithm 12. For a given x0, find the approximate solution x_{n+1} by the following iterative schemes:

Remark 13. We would like to point out that, for suitable choices of the auxiliary function in Algorithms 1–4, we can obtain various classes of known and new iterative methods for solving nonlinear equations.

Remark 14. It is important never to choose a value which makes the denominator zero. The sign should be chosen so as to keep the denominator largest in magnitude in the algorithms derived above.

3. Convergence Analysis

In this section, we consider the convergence analysis of the main iterative schemes, Algorithms 5 and 6, developed in Section 2.

Theorem 15. Assume that the function f is defined on an open interval and has a simple root α. If f is sufficiently smooth in some neighborhood of the root, then the method of Algorithm 4 has fourth-order convergence.

Proof. Let α be a simple root of the nonlinear equation f(x) = 0. Since f is sufficiently differentiable, expanding f(x_n) and f'(x_n) in Taylor's series at α, we obtain where Now expanding the required quantities in Taylor's series about α, we obtain where From (45), (46), and (47), we get
Using (49), we have
Expanding in Taylor's series about α and using (50), we have
Expanding in Taylor's series about α and using (50), we have Using (51), we get Now using (51) and (52), we get Now using (45), (46), and (51), we get Using (45), (46), (49), (54), and (55), we obtain Finally, the error equation is Error equation (57) shows that the main iterative scheme established in Section 2 as Algorithm 4 has at least fourth-order convergence; consequently, the methods derived from this scheme also have at least fourth-order convergence.
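The omitted expansions are standard in this kind of convergence proof; under the usual notation (assumed here), with e_n = x_n − α and c_k = f^{(k)}(α)/(k! f'(α)), they read:

```latex
% Standard Taylor expansions about the simple root alpha;
% e_n and c_k are assumed notation.
\begin{align*}
  e_n &:= x_n - \alpha, \qquad
  c_k := \frac{f^{(k)}(\alpha)}{k!\, f'(\alpha)},\\
  f(x_n)  &= f'(\alpha)\left[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4
            + O(e_n^5)\right],\\
  f'(x_n) &= f'(\alpha)\left[1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3
            + O(e_n^4)\right].
\end{align*}
```

A fourth-order error equation then has the typical form e_{n+1} = C e_n^4 + O(e_n^5), where C is a constant depending on the c_k.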

In a similar way, one can prove the convergence of all newly derived iterative methods.

Remark 16. From the study of convergence analysis, we note that for different values of the auxiliary function , various iterative methods of higher order can be derived from Algorithms 1, 2, 3, and 4 for solving nonlinear equations.

If we select , then Algorithms 1, 2, and 4 reduce to the following higher-order iterative methods.

Algorithm 17. For a given x0, find the approximate solution x_{n+1} by the following iterative scheme:

x_{n+1} = x_n − 2 f(x_n) f'(x_n) / (2 [f'(x_n)]^2 − f(x_n) f''(x_n)),

which is the well-known Halley method [14] of third-order convergence.
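As a concrete illustration, Halley's scheme can be implemented directly; this is a minimal sketch (the function names and the test equation are ours, not from the paper), guarding against a vanishing denominator as cautioned in Remark 14:

```python
def halley(f, df, d2f, x0, eps=1e-12, max_iter=50):
    """Halley's method: x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f'')."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        denom = 2.0 * dfx * dfx - fx * d2fx
        if denom == 0.0:          # never divide by a zero denominator
            raise ZeroDivisionError("denominator vanished")
        x_new = x - 2.0 * fx * dfx / denom
        if abs(x_new - x) < eps:  # stopping criterion |x_{n+1} - x_n| < eps
            return x_new
        x = x_new
    return x

# Example: solve x**3 - 2x - 5 = 0 (a classic test equation)
root = halley(lambda x: x**3 - 2*x - 5,
              lambda x: 3*x**2 - 2,
              lambda x: 6*x,
              x0=2.0)
```

Each step costs one evaluation of f, f', and f'', which is the price of the third-order convergence.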

Algorithm 18. For a given x0, find the approximate solution x_{n+1} by the following iterative schemes: which is a fourth-order convergent method for solving nonlinear equations and appears to be new.

Algorithm 19. For a given x0, find the approximate solution x_{n+1} by the following iterative schemes: which is a fifth-order convergent method for solving nonlinear equations and appears to be new.

4. Numerical Results

We now present some examples to illustrate the efficiency of the newly developed two-step iterative methods. We compare the Newton method (NM) [14], the method of Hasanov et al. (HM) [7], the method of Chun (CM) [3], and Algorithm 5 (SH1) and Algorithm 6 (SH2) introduced in this paper. We use the same tolerance for all the methods.

The following stopping criteria are used for the computer programs: (i) |x_{n+1} − x_n| < ε, (ii) |f(x_{n+1})| < ε.
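A generic driver implementing two such stopping criteria might look as follows (a sketch with our own names; the method passed in is any one-point iteration x_{n+1} = phi(x_n)):

```python
def iterate(phi, f, x0, eps=1e-15, max_iter=100):
    """Run x_{n+1} = phi(x_n) until |x_{n+1} - x_n| < eps and |f(x_{n+1})| < eps."""
    x, history = x0, [x0]
    for _ in range(max_iter):
        x_new = phi(x)
        history.append(x_new)
        if abs(x_new - x) < eps and abs(f(x_new)) < eps:
            break
        x = x_new
    return history

# Example: Newton's iteration phi(x) = x - f(x)/f'(x) for f(x) = x**2 - 2
f = lambda x: x**2 - 2
phi = lambda x: x - (x**2 - 2) / (2 * x)
xs = iterate(phi, f, x0=1.5)
```

Returning the whole iterate history is convenient for tabulating |x_{n+1} − x_n| and the computational order of convergence, as done in Tables 1–10.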

We consider the following nonlinear equations as test problems.

Example 20. We consider the nonlinear equation We consider different values of the parameter for all the methods, to compare the numerical results in Tables 1 and 2, respectively.
Table 1 depicts the numerical results of Example 20. We use the initial guess for the computer program for .
Table 2 depicts the numerical results of Example 20. We use the initial guess , for the computer program for .

Example 21. We consider the nonlinear equation We consider and , for all the methods to compare the numerical results in Tables 3 and 4, respectively.
Table 3 shows the efficiency of the methods for Example 21. We use the initial guess , for the computer program for . The number of iterations and the computational order of convergence indicate the better performance of the new methods.
Table 4 shows the efficiency of the methods for Example 21. We use the initial guess , for the computer program for .

Example 22. We consider the nonlinear equation We consider and , for all the methods to compare the numerical results in Tables 5 and 6, respectively.
In Table 5, the numerical results for Example 22 are described. We use the initial guess for the computer program for . We observe that all the methods approach the approximate solution after an equal number of iterations, but the computational orders of convergence differ slightly.
In Table 6, the numerical results for Example 22 are described. We use the initial guess for the computer program.

Example 23. We consider the nonlinear equation We consider and , for all the methods to compare the numerical results in Tables 7 and 8, respectively.
Table 7 shows the numerical results for Example 23. For the computer program we use the initial guess and . We note that the newly derived methods have a better computational order of convergence and reach the desired result in fewer iterations.
Table 8 shows the numerical results for Example 23. For the computer program we use the initial guess , and . We note that the newly derived methods have a better computational order of convergence and reach the desired result in fewer iterations.

Example 24. We consider the nonlinear equation We consider and , for all the methods to compare the numerical results in Tables 9 and 10, respectively.
In Table 9, we show the numerical results for Example 24. We use the initial guess and , for the computer program. We observe that the new methods reach the desired approximate solution in the same or fewer iterations. We calculate the computational order of convergence for all the methods, which verifies the rate of convergence and the efficiency of the methods.
In Table 10, we show the numerical results for Example 24. We use the initial guess and , for the computer program. We calculate the computational order of convergence for all the methods, which verifies the rate of convergence and the efficiency of the methods.
In Tables 1–10, x_n is the approximate solution at the nth iteration and |x_{n+1} − x_n| is the difference of the last two consecutive iterates. The computational order of convergence is approximated for all the examples in all the tables by means of

COC ≈ ln |(x_{n+1} − x_n)/(x_n − x_{n−1})| / ln |(x_n − x_{n−1})/(x_{n−1} − x_{n−2})|.
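One common, difference-based way to approximate the computational order of convergence (assumed here; the paper's exact formula may differ) can be computed directly from the iterate history:

```python
import math

def coc(xs):
    """Approximate the computational order of convergence from iterates xs,
    using rho ~ ln(d_n/d_{n-1}) / ln(d_{n-1}/d_{n-2}), d_n = |x_{n+1} - x_n|."""
    d = [abs(b - a) for a, b in zip(xs, xs[1:]) if b != a]
    if len(d) < 3:
        raise ValueError("need at least four distinct iterates")
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# Example: Newton iterates for f(x) = x**2 - 2 starting from x0 = 1.5
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x**2 - 2) / (2 * x))
rho = coc(xs)  # close to 2 for Newton's method
```

In practice the estimate is only meaningful while the differences stay above the machine-precision floor; once iterates stop changing, the ratio degenerates.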

5. Conclusion

In this paper, we have studied some new iterative methods for solving nonlinear equations by using the modified homotopy perturbation technique. Our method of deriving the iterative methods is very simple compared with other techniques, especially the Adomian decomposition technique. If we consider the definition of the efficiency index [14] as p^{1/m}, where p is the order of the method and m is the number of functional evaluations per iteration required by the method, then Algorithm 2 has an efficiency index equal to , while Algorithm 3 has the efficiency index , which is the same as that of the Newton method. The methods derived above are well defined and robust even when the derivative vanishes, and the numerical stability of the methods can also be observed during the numerical experiments. Using the technique and ideas of this paper, one can suggest and analyze higher-order multistep iterative methods for solving nonlinear equations as well as systems of nonlinear equations.
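The efficiency index p^{1/m} mentioned above is easy to tabulate; the sample (p, m) pairs below are generic illustrations, not claims about any specific algorithm in the paper:

```python
def efficiency_index(p, m):
    """Efficiency index p**(1/m): order p, m function evaluations per iteration."""
    return p ** (1.0 / m)

# Newton's method: order 2, two evaluations (f and f') per iteration
newton_ei = efficiency_index(2, 2)   # ~1.414
# A generic fourth-order method using three evaluations per iteration
fourth_ei = efficiency_index(4, 3)   # ~1.587
```

The index rewards raising the order without adding function evaluations, which is the usual yardstick for comparing such methods.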

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.