Research Article  Open Access
Convergence Analysis and Numerical Study of a Fixed-Point Iterative Method for Solving Systems of Nonlinear Equations
Abstract
We present a fixed-point iterative method for solving systems of nonlinear equations. A convergence theorem for the proposed method is proved under suitable conditions. In addition, numerical results are reported that confirm the good theoretical properties of our approach.
1. Introduction
One of the basic problems in mathematics is how to solve nonlinear equations. To solve such equations, we can use iterative methods such as Newton's method and its variants. Recently, there has been progress on iterative methods with higher orders of convergence based on decomposition techniques; see [1–15] and the references therein.
The zeros of a nonlinear equation cannot in general be expressed in closed form; thus we have to resort to approximate methods. Nowadays, iterative methods are typically used to obtain an approximate solution of the system (1); the best-known method is the classical Newton's method. Recently, progress has been made on solving the system (1) via iterative formulas derived essentially from Taylor polynomials (see [16, 17]), quadrature formulas (see [7, 9–12]), the homotopy perturbation method (see [8]), and so on.
In this paper, we present a new fixed-point iterative method for solving the system (1) and prove that the method is cubically convergent under suitable conditions.
This paper is organized as follows. In Section 2, we introduce the new iterative method for a single nonlinear equation. In Section 3, we extend the method to systems of nonlinear equations and prove the convergence of the proposed method. Numerical results are reported in Section 4, and the paper is concluded in the last section.
2. Iterative Methods and Convergence Analysis
We now consider the following nonlinear equation:
$$f(x) = 0. \quad (2)$$
Assume that $\alpha$ is a simple root of (2); that is, $f(\alpha) = 0$ and $f'(\alpha) \neq 0$. For $x, y \in \mathbb{R}$, using Taylor's formula (in mean value form), we have
$$f(y) = f(x) + f'(\xi)(y - x), \quad (3)$$
where $\xi$ lies between $x$ and $y$. Taking $y = \alpha$ in the above equality, we get
$$0 = f(x) + f'(\xi)(\alpha - x). \quad (4)$$
If the value of $f'$ in the interval is replaced with its value at $x$, that is,
$$f'(\xi) \approx f'(x), \quad (5)$$
then by using (5) in (4) we have
$$\alpha \approx x - \frac{f(x)}{f'(x)}. \quad (6)$$
We can get an iterative method from (6) to solve (2); it is the famous Newton formula
$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}. \quad (7)$$
The formula (7) has already been proved to be quadratically convergent. Now we begin to deduce a higher-order iterative method. In fact, if we estimate $f'(\xi)$ in the interval by its value at the midpoint $(x + \alpha)/2$, that is,
$$f'(\xi) \approx f'\!\left(\frac{x + \alpha}{2}\right), \quad (8)$$
then by using (8) in (4), and cutting off the error (still using "="), we have
$$0 = f(x) + f'\!\left(\frac{x + \alpha}{2}\right)(\alpha - x). \quad (9)$$
Taking the solution of (9) as the next iterate, we obtain a new iterative method
$$x_{k+1} = x_k - \frac{f(x_k)}{f'\!\left((x_k + x_{k+1})/2\right)}. \quad (10)$$
Since the iterative method (10) is an implicit-type method, we use the classical Newton formula (7) as a predictor and then use the scheme (10) as a corrector; in this way, we obtain a workable iterative method.
Algorithm 1. Step 0 (initialization). Choose the initial value $x_0 \in U(\alpha, \delta)$, where $\alpha$ is a certain real zero of the nonlinear mapping $f$ and $\delta$ is a sufficiently small constant. Take the stopping criterion $\varepsilon > 0$. Set $k := 0$.
Step 1 (the predictor step). Compute the predictor
$$y_k = x_k - \frac{f(x_k)}{f'(x_k)}. \quad (11)$$
Step 2 (the corrector step). Compute the corrector
$$x_{k+1} = x_k - \frac{f(x_k)}{f'\!\left((x_k + y_k)/2\right)}. \quad (12)$$
Step 3. If $|x_{k+1} - x_k| < \varepsilon$ or $|f(x_{k+1})| < \varepsilon$, then stop; otherwise, set $k := k + 1$ and go to Step 1.
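A minimal sketch of Algorithm 1 in Python (the paper's experiments use MATLAB; the test equation, tolerance, and iteration cap below are illustrative choices, not from the paper):

```python
def predictor_corrector(f, fprime, x0, tol=1e-12, max_iter=50):
    """Two-step scheme of Algorithm 1: Newton predictor (11), then the
    corrector (12), which evaluates f' at the midpoint of x_k and y_k."""
    x = x0
    for k in range(max_iter):
        y = x - f(x) / fprime(x)                # Step 1: Newton predictor
        x_new = x - f(x) / fprime((x + y) / 2)  # Step 2: midpoint corrector
        if abs(x_new - x) < tol or abs(f(x_new)) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Illustrative run: solve x^3 - 2 = 0 (root 2**(1/3)) starting from x0 = 1.5.
root, iters = predictor_corrector(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.5)
```

Note that each iteration requires one evaluation of $f$ and two of $f'$.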
We now consider the convergence and the convergence rate of Algorithm 1, obtaining the following theorem.
Theorem 2. Let $\alpha$ be a simple zero of a sufficiently differentiable function $f$. If $x_0$ is sufficiently close to $\alpha$, then the two-step iterative method defined by (11)-(12) has third-order convergence and satisfies the following error equation:
$$e_{k+1} = \left(c_2^2 - \frac{1}{4}c_3\right) e_k^3 + O\!\left(e_k^4\right),$$
where $e_k = x_k - \alpha$ and $c_j = f^{(j)}(\alpha) / \left(j!\, f'(\alpha)\right)$ for $j = 2, 3$.
Proof. By (12) we get Multiplying the above equation by , we can get Let in (3) and ; we have thus, Dividing both sides of the above equation by , we can get Furthermore, let in (3); we have From (11) we get It follows from the above equation that By applying (18) we have After some manipulations we obtain Now, applying Taylor’s expansion for at the point , we have From (11), (18), and (24), we obtain And after some manipulations we can get By substituting (17), (23), and (26) into (15) we have Note that Then, it follows from (27) that This proves the conclusion of the theorem and the proof is completed.
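The third-order behavior can also be checked numerically: for the midpoint-type corrector (12), a direct Taylor computation gives the asymptotic ratio $e_{k+1}/e_k^3 \to c_2^2 - c_3/4$. The sketch below uses the illustrative choice $f(x) = e^x - 1$ (root $0$, so each iterate equals its error), for which the predicted constant is $5/24 \approx 0.208$:

```python
import math

# f(x) = exp(x) - 1 has the simple root 0, with c2 = f''(0)/(2 f'(0)) = 1/2
# and c3 = f'''(0)/(6 f'(0)) = 1/6, so the predicted asymptotic ratio is
# e_{k+1} / e_k^3 -> c2**2 - c3/4 = 5/24.
f = math.expm1   # expm1(x) = exp(x) - 1, accurate near the root
fp = math.exp

def step(x):
    y = x - f(x) / fp(x)               # Newton predictor (11)
    return x - f(x) / fp((x + y) / 2)  # midpoint corrector (12)

errs = [0.5]                           # the root is 0, so iterate == error
for _ in range(3):
    errs.append(step(errs[-1]))

ratio = errs[2] / errs[1] ** 3         # should be close to 5/24 ~ 0.2083
```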
3. The n-Dimensional Case
In this section, we consider the $n$-dimensional case of the method and study its order of convergence. Consider the system of nonlinear equations
$$f_i(x_1, x_2, \ldots, x_n) = 0, \quad i = 1, 2, \ldots, n, \quad (30)$$
where each function $f_i$ maps a vector $x = (x_1, x_2, \ldots, x_n)^T$ of the $n$-dimensional space $\mathbb{R}^n$ into the real line $\mathbb{R}$. The system (30) of $n$ nonlinear equations in $n$ unknowns can also be represented by defining a function $F$ mapping $\mathbb{R}^n$ into $\mathbb{R}^n$ as $F(x) = (f_1(x), f_2(x), \ldots, f_n(x))^T$; thus, the system (30) can be written in the form
$$F(x) = 0. \quad (31)$$
Let $F$ be a sufficiently differentiable function on a convex set $D \subseteq \mathbb{R}^n$ and let $x^* \in D$ be a real zero of the nonlinear mapping $F$; that is, $F(x^*) = 0$. For any $x, y \in D$, we may write Taylor's expansion for $F$ with integral remainder as follows (see [17]):
$$F(y) = F(x) + \int_0^1 F'\bigl(x + t(y - x)\bigr)(y - x)\, dt; \quad (32)$$
for $y = x^*$, we have
$$0 = F(x) + \int_0^1 F'\bigl(x + t(x^* - x)\bigr)(x^* - x)\, dt. \quad (33)$$
If we estimate the mean value of $F'$ over the segment by its value at $x$, that is,
$$\int_0^1 F'\bigl(x + t(x^* - x)\bigr)\, dt \approx F'(x), \quad (34)$$
then by using (34) in (33) we have
$$x^* \approx x - \left[F'(x)\right]^{-1} F(x). \quad (35)$$
We can get an iterative method from (35) to solve the system (31); it is known as Newton's method
$$x_{k+1} = x_k - \left[F'(x_k)\right]^{-1} F(x_k). \quad (36)$$
The method (36) has already been proved to be quadratically convergent. Now we begin to deduce a higher-order iterative method. If we estimate the mean value of $F'$ over the segment by its value at the midpoint $(x + x^*)/2$, that is,
$$\int_0^1 F'\bigl(x + t(x^* - x)\bigr)\, dt \approx F'\!\left(\frac{x + x^*}{2}\right), \quad (37)$$
then by using (37) in (33), and cutting off the error (still using "="), we have
$$0 = F(x) + F'\!\left(\frac{x + x^*}{2}\right)(x^* - x). \quad (38)$$
Taking the solution of (38) as the next iterate, we obtain a new iterative method
$$x_{k+1} = x_k - \left[F'\!\left(\frac{x_k + x_{k+1}}{2}\right)\right]^{-1} F(x_k). \quad (39)$$
On the other hand, from the easy Newton's method (see [4]), we obtain a second iterative scheme (40).
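Newton's method (36) is stated with the inverse Jacobian, but in practice one solves the linear system $F'(x_k)\,\delta_k = F(x_k)$ at each step rather than forming the inverse explicitly. A minimal sketch for a $2 \times 2$ system (the test system, starting point, and tolerance are illustrative choices, not from the paper):

```python
def newton_2x2(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method (36) for a 2x2 system: at each step solve the
    linear system F'(x_k) * delta = F(x_k) (here by Cramer's rule)
    instead of forming the inverse Jacobian explicitly."""
    x, y = x0
    for k in range(max_iter):
        f1, f2 = F(x, y)
        (a, b), (c, d) = J(x, y)
        det = a * d - b * c            # assumed nonzero near the root
        dx = (f1 * d - b * f2) / det   # Cramer's rule for delta
        dy = (a * f2 - c * f1) / det
        x, y = x - dx, y - dy
        if abs(dx) + abs(dy) < tol:
            return (x, y), k + 1
    return (x, y), max_iter

# Illustrative system (not from the paper): x^2 + y^2 = 1 and x = y,
# whose root for this starting point is (1/sqrt(2), 1/sqrt(2)).
F = lambda x, y: (x * x + y * y - 1.0, x - y)
J = lambda x, y: ((2 * x, 2 * y), (1.0, -1.0))
(solx, soly), iters = newton_2x2(F, J, (1.0, 0.5))
```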
Now we consider a convex combination of (39) and (40). Introducing suitable weight parameters, we can deduce from (39) and (40) that the following iterative formula holds: Since the iterative method (41) is an implicit-type method, we use Newton's method as a predictor and then the new method (41) as a corrector; in this way, we obtain a workable iterative method.
Algorithm 3. Step 0 (initialization). Choose the initial value $x_0 \in U(x^*, \delta)$ and the parameters of the scheme, where $x^*$ is a certain real zero of the nonlinear mapping $F$ and $\delta$ is a sufficiently small constant. Take the stopping criterion $\varepsilon > 0$. Set $k := 0$.
Step 1 (the predictor step). Compute the predictor
$$y_k = x_k - \left[F'(x_k)\right]^{-1} F(x_k). \quad (42)$$
Step 2 (the corrector step). Compute the corrector
Step 3. If $\|x_{k+1} - x_k\| < \varepsilon$ or $\|F(x_{k+1})\| < \varepsilon$, then stop; otherwise, set $k := k + 1$ and go to Step 1.
Remark 4. If we take , our algorithms (42) and (43) can be written in the following form: Notice that, at each iteration, the number of functional evaluations is .
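For concreteness, a predictor-corrector step built from the Newton predictor (42) and the midpoint-type scheme (39) can be sketched as follows. (That Remark 4's simplified form reduces to exactly this scheme is an assumption on our part, since the parameter value is not shown; the $2 \times 2$ test system is illustrative, not from the paper.)

```python
def solve_2x2(A, b):
    """Solve a 2x2 linear system A * d = b by Cramer's rule."""
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    return ((b[0] * a22 - a12 * b[1]) / det,
            (a11 * b[1] - a21 * b[0]) / det)

def midpoint_step(F, J, x):
    """One predictor-corrector step: Newton predictor y_k, then the
    corrector x_{k+1} = x_k - [F'((x_k + y_k)/2)]^{-1} F(x_k)."""
    fx = F(x)
    d = solve_2x2(J(x), fx)                       # predictor direction
    y = (x[0] - d[0], x[1] - d[1])                # Newton predictor
    mid = ((x[0] + y[0]) / 2, (x[1] + y[1]) / 2)  # midpoint argument
    c = solve_2x2(J(mid), fx)                     # corrector direction
    return (x[0] - c[0], x[1] - c[1])

# Illustrative system: x^2 + y^2 = 1 and x = y, root (1/sqrt(2), 1/sqrt(2)).
F = lambda v: (v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1])
J = lambda v: ((2 * v[0], 2 * v[1]), (1.0, -1.0))
x = (1.0, 0.5)
for _ in range(6):
    x = midpoint_step(F, J, x)
```

Each step requires one evaluation of $F$ and two Jacobian evaluations, consistent with the functional-evaluation count noted in Remark 4.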
We now consider the convergence and the convergence rate of Algorithm 3, obtaining the following theorem.
Theorem 5. Let $F$ be a sufficiently differentiable function on a convex set $D$ containing a root $x^*$ of the nonlinear system (31). Then the iterative method (42)-(43) has cubic convergence and satisfies the following error equation:
Proof. Defining , from (42) and (43), we have Premultiplying the above equation by , we can get Let in (32) and ; we have thus, Multiplying to the two sides of the above equation, we obtain On the other hand, let in (32); we have From (42) we get . It follows from the above equation that By applying (50) we have After some manipulations we obtain Now, applying Taylor’s formula for at the point , we have From (42), (50), and (55), we obtain And after some manipulations we can get Therefore, we have From (49) and (54), we get By substituting (57), (58), (59), and (54) into (47) we have that is, Notice that Equation (61) can be written as follows: which proves the conclusion of our theorem. The proof is completed.
4. Numerical Examples
In this section we present several examples to illustrate the efficiency and performance of the newly developed method (42)-(43) (present study, HM). The new method is compared with Newton's method (NM), the method of Aslam Noor and Waseem [12] (NR1), the method of Cordero et al. [13] (NAd1), the method of Darvishi and Barati [4] (DV), and the method of Cordero and Torregrosa [10] (CT) in terms of the number of iterations, CPU time, error, and convergence order. All computations were done on a PC with a Pentium(R) Dual-Core CPU T4400 @ 2.20 GHz, and all programs were implemented in MATLAB 7.9. The convergence order is computed approximately by the formula
$$\rho \approx \frac{\ln\left(\left\|x_{k+1} - x_k\right\| / \left\|x_k - x_{k-1}\right\|\right)}{\ln\left(\left\|x_k - x_{k-1}\right\| / \left\|x_{k-1} - x_{k-2}\right\|\right)}.$$
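The approximate computational order of convergence can be evaluated from four consecutive iterates via the correction norms; a minimal sketch (assuming the standard difference-based formula; the test problem, Newton's method on $f(x) = x^2 - 2$, is an illustrative second-order case, so the computed value should be close to 2):

```python
import math

def approx_order(x0, x1, x2, x3):
    """Approximate computational order of convergence from four
    consecutive iterates, using correction norms |x_{k+1} - x_k|."""
    return (math.log(abs(x3 - x2) / abs(x2 - x1))
            / math.log(abs(x2 - x1) / abs(x1 - x0)))

# Illustrative check: Newton's method on f(x) = x^2 - 2 is second order.
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

rho = approx_order(xs[0], xs[1], xs[2], xs[3])
```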
Since the iterative formula (43) contains two parameters, we carry out the numerical examples with fixed choices of these parameters.
Example 1. The test function is as follows (see [10]): This problem has a solution . We test this problem by using as a starting point. The test results are listed in Table 1.

Example 2. The test function is as follows (see [9, 10, 13, 14]): This problem has a solution . We test this problem by using as a starting point. The test results are listed in Table 2.

Example 3. The test function is as follows (see [4]): We test this problem by using . The test results are listed in Table 3.

Example 4. The test function is , where and , such that (see [9, 10, 13]) This problem has two solutions: they are We test this problem by using (the iterative sequence converges to ) and (the iterative sequence converges to ) as starting points, respectively. The test results are listed in Tables 4 and 5.


Example 5. The test function is as follows (see [10, 13, 14]): This problem has two solutions: they are and . We test this problem by using (the iterative sequence converges to ) and (the iterative sequence converges to ) as starting points, respectively. The test results are listed in Tables 6 and 7, which are obtained for .


Example 6. Consider the unconstrained optimization problem (see [18]) where is defined by By the KKT conditions we have where , We test this problem by using as a starting point. The test results are listed in Table 8, which are obtained for .

Example 7. Consider the discrete two-point boundary value problem (see [19]): where is a tridiagonal matrix defined by , and , . We test this problem by using as a starting point. The test results are listed in Table 9, which are obtained for .
