The Scientific World Journal

Research Article | Open Access

Volume 2014 |Article ID 789459 | 10 pages | https://doi.org/10.1155/2014/789459

Convergence Analysis and Numerical Study of a Fixed-Point Iterative Method for Solving Systems of Nonlinear Equations

Academic Editor: Pu-yan Nie
Received: 12 Feb 2014
Accepted: 20 Feb 2014
Published: 24 Mar 2014

Abstract

We present a fixed-point iterative method for solving systems of nonlinear equations. The convergence theorem of the proposed method is proved under suitable conditions. In addition, some numerical results are also reported in the paper, which confirm the good theoretical properties of our approach.

1. Introduction

One of the basic problems in mathematics is how to solve nonlinear equations. In order to solve such equations, we can use iterative methods such as Newton's method and its variants. Recently, there has been some progress on iterative methods with higher order of convergence that use decomposition techniques; see [1–15] and the references therein.

The zeros of a nonlinear equation cannot in general be expressed in closed form; thus we have to use approximate methods. Nowadays, iterative methods are commonly used to obtain an approximate solution of the system (1); the best known is the classical Newton's method. Recently, there has been some progress on solving the system (1) with iterative formulas derived essentially from Taylor's polynomial (see [16, 17]), quadrature formulas (see [7, 9–12]), the homotopy perturbation method (see [8]), and so on.

In this paper, we present a new fixed-point iterative method for solving the system (1) and prove that the method is cubically convergent under suitable conditions.

This paper is organized as follows. In Section 2, we introduce the new iterative method and analyze it in the scalar case. In Section 3, we extend the method to systems of nonlinear equations and prove convergence of the proposed method. Numerical results are reported in Section 4, and the paper is concluded in the last section.

2. Iterative Methods and Convergence Analysis

We now consider the following nonlinear equation:
f(x) = 0. (2)
Assume that α is a simple root of (2); that is, f(α) = 0 and f'(α) ≠ 0. For x near x_k, using Taylor's formula, we have
f(x) = f(x_k) + f'(ξ)(x - x_k), (3)
where ξ lies between x_k and x. Taking x = α in the above equality, we get
0 = f(x_k) + f'(ξ)(α - x_k). (4)
If the value of f' in the interval is replaced with its value at x_k, that is, with f'(x_k), then we have
f'(ξ) ≈ f'(x_k). (5)
By using (5) in (4), we have
0 ≈ f(x_k) + f'(x_k)(α - x_k). (6)
We can get an iterative method from (6) to solve the equation (2); it is the famous Newton formula
x_{k+1} = x_k - f(x_k)/f'(x_k). (7)
The formula (7) has already been proved to be quadratically convergent. Now we begin to deduce a higher order iterative method. In fact, if we estimate f' in the interval by its value at the midpoint (x_k + α)/2, that is, by f'((x_k + α)/2), then we have
f'(ξ) ≈ f'((x_k + α)/2). (8)
By using (8) in (4), and cutting off the error (still using "="), we have
0 = f(x_k) + f'((x_k + x_{k+1})/2)(x_{k+1} - x_k). (9)
Letting x_{k+1} be the solution of (9), we obtain a new iterative method
x_{k+1} = x_k - f(x_k)/f'((x_k + x_{k+1})/2). (10)
Since the iterative method (10) is an implicit-type method, we use the classical Newton formula (7) as predictor and then use the above scheme (10) as corrector; in this way, we can get a workable iterative method.
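To illustrate the fixed-point character of the implicit scheme (10) in code, the following Python sketch (an illustration of ours, not the paper's code; the paper's experiments use MATLAB, and the function names, tolerance, and inner-iteration count are our own choices) solves the implicit equation at each outer step by a short inner fixed-point iteration, with the derivative evaluated at the midpoint of x_k and x_{k+1}:

```python
# Illustrative sketch (not the paper's code): the implicit scheme
#   x_{k+1} = x_k - f(x_k) / f'((x_k + x_{k+1}) / 2)
# solved at each outer step by a short inner fixed-point iteration.

def implicit_midpoint_newton(f, fprime, x0, tol=1e-12, max_iter=50, inner=5):
    x = x0
    for _ in range(max_iter):
        # Inner fixed-point loop: start from the current iterate and
        # repeatedly evaluate the right-hand side of the implicit formula.
        z = x
        for _ in range(inner):
            z = x - f(x) / fprime((x + z) / 2.0)
        if abs(z - x) <= tol:
            return z
        x = z
    return x

# Example: the simple root sqrt(2) of f(x) = x^2 - 2.
root = implicit_midpoint_newton(lambda x: x * x - 2.0,
                                lambda x: 2.0 * x, x0=1.5)
```

For a quadratic f the inner fixed-point iteration contracts rapidly, so a handful of inner sweeps per outer step suffices in this sketch.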

Algorithm 1. Step 0 (initialization). Choose an initial value x_0 sufficiently close to a real zero α of the nonlinear mapping f, and choose a sufficiently small constant ε > 0 for the stopping criteria. Set k = 0.
Step 1 (the predictor step). Compute the predictor
y_k = x_k - f(x_k)/f'(x_k). (11)
Step 2 (the corrector step). Compute the corrector
x_{k+1} = x_k - f(x_k)/f'((x_k + y_k)/2). (12)
Step 3. If |x_{k+1} - x_k| ≤ ε or |f(x_{k+1})| ≤ ε, then stop; otherwise, set k = k + 1 and go to Step 1.
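Algorithm 1 can be sketched in Python as follows (an illustrative re-implementation of ours, not the paper's code; the corrector is taken to evaluate f' at the midpoint of the current iterate and the predictor):

```python
import math

def algorithm1(f, fprime, x0, eps=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        y = x - f(x) / fprime(x)                   # Step 1: predictor (11)
        x_new = x - f(x) / fprime((x + y) / 2.0)   # Step 2: corrector (12)
        # Step 3: stop when the update or the residual is small enough.
        if abs(x_new - x) <= eps or abs(f(x_new)) <= eps:
            return x_new
        x = x_new
    return x

# Example: f(x) = cos(x) - x has a simple zero near 0.739085.
root = algorithm1(lambda x: math.cos(x) - x,
                  lambda x: -math.sin(x) - 1.0, x0=1.0)
```

Note that each iteration needs one evaluation of f and two of f', the same information as one Newton step plus one extra derivative evaluation.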

We now consider the convergence and the rate of convergence of Algorithm 1. We obtain the following convergence theorem.

Theorem 2. Let α be a simple zero of a sufficiently differentiable function f. If x_0 is sufficiently close to α, then the two-step iterative method defined by (11)-(12) has third-order convergence and satisfies the following error equation:
e_{k+1} = (c_2^2 - (1/4)c_3) e_k^3 + O(e_k^4),
where e_k = x_k - α and c_j = f^(j)(α)/(j! f'(α)), j = 2, 3.

Proof. Let e_k = x_k - α and c_j = f^(j)(α)/(j! f'(α)). Expanding f and f' about α, we have
f(x_k) = f'(α)[e_k + c_2 e_k^2 + c_3 e_k^3 + O(e_k^4)],
f'(x_k) = f'(α)[1 + 2c_2 e_k + 3c_3 e_k^2 + O(e_k^3)].
From (11) we get
y_k - α = c_2 e_k^2 + (2c_3 - 2c_2^2) e_k^3 + O(e_k^4),
so that
(x_k + y_k)/2 - α = (1/2)e_k + (1/2)c_2 e_k^2 + O(e_k^3).
Now, applying Taylor's expansion for f' at the point α, we have
f'((x_k + y_k)/2) = f'(α)[1 + c_2 e_k + (c_2^2 + (3/4)c_3) e_k^2 + O(e_k^3)].
After some manipulations we obtain
f(x_k)/f'((x_k + y_k)/2) = e_k + ((1/4)c_3 - c_2^2) e_k^3 + O(e_k^4).
By substituting this into (12) we have
e_{k+1} = e_k - [e_k + ((1/4)c_3 - c_2^2) e_k^3 + O(e_k^4)] = (c_2^2 - (1/4)c_3) e_k^3 + O(e_k^4).
This proves the conclusion of the theorem and the proof is completed.
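The third-order behavior can also be checked numerically. The following sketch (ours, not from the paper) runs two predictor-corrector steps on f(x) = x^2 - 2, for which c_2 = 1/(2·sqrt(2)) and c_3 = 0, so the Taylor expansion of the midpoint corrector predicts a leading error constant of c_2^2 = 1/8:

```python
import math

def step(f, fp, x):
    y = x - f(x) / fp(x)                 # predictor (11)
    return x - f(x) / fp((x + y) / 2.0)  # corrector (12)

f = lambda x: x * x - 2.0
fp = lambda x: 2.0 * x
alpha = math.sqrt(2.0)

# Two steps from x_0 = 1 give three successive errors e_0, e_1, e_2.
xs = [1.0]
for _ in range(2):
    xs.append(step(f, fp, xs[-1]))
e0, e1, e2 = (abs(x - alpha) for x in xs)

# Observed order ln(e2/e1)/ln(e1/e0) should be close to 3, and the
# observed leading constant e2/e1^3 should be close to 1/8.
order = math.log(e2 / e1) / math.log(e1 / e0)
const = e2 / e1 ** 3
```

On this example the observed order is about 3.1 and the observed constant about 0.127, in agreement with the error equation.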

3. The n-Dimensional Case

In this section, we consider the n-dimensional case of the method, and we also study its order of convergence. Consider the system of nonlinear equations
f_i(x_1, x_2, ..., x_n) = 0, i = 1, 2, ..., n, (30)
where each function f_i maps a vector x = (x_1, x_2, ..., x_n)^T of the n-dimensional space R^n into the real line R. The system (30) of n nonlinear equations in n unknowns can also be represented by defining a function F mapping R^n into R^n as F(x) = (f_1(x), f_2(x), ..., f_n(x))^T. Thus, the system (30) can be written in the form
F(x) = 0. (31)
Let F be a sufficiently differentiable function on a convex set D and let x* be a real zero of the nonlinear mapping F; that is, F(x*) = 0. For any x, x_k in D, we may write Taylor's expansion for F as follows (see [17]):
F(x) = F(x_k) + F'(ξ)(x - x_k), (32)
and for x = x* we have
0 = F(x_k) + F'(ξ)(x* - x_k). (33)
If we estimate F' in the interval by its value at x_k, that is, by F'(x_k), then we have
F'(ξ) ≈ F'(x_k). (34)
By using (34) in (33), we have
0 ≈ F(x_k) + F'(x_k)(x* - x_k). (35)
We can get an iterative method from (35) to solve the system (31); it is known as Newton's method
x_{k+1} = x_k - [F'(x_k)]^{-1} F(x_k). (36)
The method (36) is well known to be quadratically convergent. Now we begin to deduce a higher order iterative method. If we estimate F' in the interval by its value at the midpoint (x_k + x*)/2, that is, by F'((x_k + x*)/2), then we have
F'(ξ) ≈ F'((x_k + x*)/2). (37)
By using (37) in (33), and cutting off the error (still using "="), we have
0 = F(x_k) + F'((x_k + x_{k+1})/2)(x_{k+1} - x_k). (38)
Letting x_{k+1} be the solution of (38), we obtain a new iterative method
x_{k+1} = x_k - [F'((x_k + x_{k+1})/2)]^{-1} F(x_k). (39)
On the other hand, from the easy Newton's method (see [4]), we obtain a second iterative scheme (40).

Now we consider a convex combination of (39) and (40): taking nonnegative weights that sum to one, we can deduce from (39) and (40) the iterative formula (41). Since the iterative method (41) is an implicit-type method, we use Newton's method as predictor and then use the new method (41) as corrector; in this way, we can get a workable iterative method.
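The convex-combination construction can be illustrated generically (a sketch in our own notation; g1, g2, and the weight t are placeholders, not the paper's symbols): any two iteration maps sharing a fixed point can be blended without losing that fixed point.

```python
# Generic illustration: a convex combination of two iteration maps g1, g2
# with weight t in [0, 1] keeps any common fixed point of g1 and g2.

def convex_combination(g1, g2, t):
    assert 0.0 <= t <= 1.0
    return lambda x: t * g1(x) + (1.0 - t) * g2(x)

# Example: both maps below fix sqrt(2); their average is Heron's method.
g1 = lambda x: 2.0 / x   # fixes sqrt(2)
g2 = lambda x: x         # identity, fixes everything
g = convex_combination(g1, g2, 0.5)

x = 1.0
for _ in range(6):
    x = g(x)
```

Blending can also improve convergence: neither g1 nor g2 converges on its own here, while the combined map converges quadratically.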

Algorithm 3. Step 0 (initialization). Choose an initial value x_0 sufficiently close to a real zero x* of the nonlinear mapping F, choose the combination weights, and choose a sufficiently small constant ε > 0 for the stopping criteria. Set k = 0.
Step 1 (the predictor step). Compute the predictor
y_k = x_k - [F'(x_k)]^{-1} F(x_k). (42)
Step 2 (the corrector step). Compute the corrector x_{k+1} from the combined scheme (41), written as (43).
Step 3. If ||x_{k+1} - x_k|| ≤ ε or ||F(x_{k+1})|| ≤ ε, then stop; otherwise, set k = k + 1 and go to Step 1.
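Algorithm 3 can be sketched in Python with NumPy as follows (an illustrative version of ours, not the paper's MATLAB code; since the corrector (43) also carries combination weights, this sketch shows only the plain midpoint corrector as a special case, and the test system is our own):

```python
import numpy as np

def algorithm3(F, J, x0, eps=1e-12, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        y = x - np.linalg.solve(J(x), Fx)                  # predictor (42)
        x_new = x - np.linalg.solve(J((x + y) / 2.0), Fx)  # midpoint corrector
        if np.linalg.norm(x_new - x) <= eps or np.linalg.norm(F(x_new)) <= eps:
            return x_new
        x = x_new
    return x

# Example system: x^2 + y^2 = 1 and x = y, with root (1/sqrt(2), 1/sqrt(2)).
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
root = algorithm3(F, J, [1.0, 0.5])
```

Each iteration solves two linear systems instead of forming Jacobian inverses explicitly, which is the usual way to implement (42)-(43) in practice.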

Remark 4. For a particular choice of the combination weights, our algorithm (42)-(43) can be written in the following simplified form (44). Notice that, at each iteration, the number of functional evaluations is fixed.

We now consider the convergence and the rate of convergence of Algorithm 3. We obtain the following convergence theorem.

Theorem 5. Let F be a sufficiently differentiable function on a convex set D containing a root x* of the nonlinear system (31). The iterative method (42)-(43) has cubic convergence and satisfies an error equation of the form
e_{k+1} = K e_k^3 + O(||e_k||^4),
where e_k = x_k - x* and K depends on the derivatives of F at x* and on the combination weights.

Proof. Define e_k = x_k - x* and C_j = (1/j!)[F'(x*)]^{-1} F^(j)(x*). Expanding F and F' about x*, as in the proof of Theorem 2, we have
F(x_k) = F'(x*)[e_k + C_2 e_k^2 + C_3 e_k^3 + O(||e_k||^4)],
and from the predictor (42),
y_k - x* = C_2 e_k^2 + (2C_3 - 2C_2^2) e_k^3 + O(||e_k||^4),
so that
(x_k + y_k)/2 - x* = (1/2)e_k + (1/2)C_2 e_k^2 + O(||e_k||^3).
Now, applying Taylor's formula for F' at the point x*, we have
F'((x_k + y_k)/2) = F'(x*)[I + C_2 e_k + O(||e_k||^2)].
By substituting these expansions into the corrector (43) and premultiplying by [F'(x*)]^{-1}, all terms of order ||e_k|| and ||e_k||^2 cancel, so that
e_{k+1} = K e_k^3 + O(||e_k||^4),
where K depends on C_2 and C_3 (and on the combination weights), which proves the conclusion of our theorem. The proof is completed.

4. Numerical Examples

In this section we present some examples to illustrate the efficiency and performance of the newly developed method (42)-(43), denoted HM in the present study. The new method was compared with Newton's method (NM), the method of Aslam Noor and Waseem [12] (NR1), the method of Cordero et al. [13] (NAd1), the method of Darvishi and Barati [4] (DV), and the method of Cordero and Torregrosa [10] (CT) in terms of the number of iterations, CPU time, error, and convergence order. All computations were done on a PC with a Pentium(R) Dual-Core CPU T4400 @ 2.20 GHz, and all programs were implemented in MATLAB 7.9. The convergence order is computed approximately by the following formula:
ρ ≈ ln(||x_{k+1} - x_k|| / ||x_k - x_{k-1}||) / ln(||x_k - x_{k-1}|| / ||x_{k-1} - x_{k-2}||).
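The approximate convergence order reported in the tables can be computed as in the following Python sketch (ours; it implements the standard computational order of convergence from four consecutive iterates):

```python
import math

def conv_order(x3, x2, x1, x0):
    # rho = ln(|x3 - x2| / |x2 - x1|) / ln(|x2 - x1| / |x1 - x0|)
    return (math.log(abs(x3 - x2) / abs(x2 - x1))
            / math.log(abs(x2 - x1) / abs(x1 - x0)))

# Example: Newton iterates for x^2 - 2 = 0 (quadratic convergence).
xs = [1.0]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
rho = conv_order(xs[3], xs[2], xs[1], xs[0])
```

For vector iterates the absolute values are replaced by norms, exactly as in the formula above.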

As the iterative formula (43) contains free parameters, all the numerical examples below are carried out with the same fixed choice of these parameters.

Example 1. The test function is taken from [10]. This problem has a known solution, and we test it from a fixed starting point. The test results are listed in Table 1.


Method Iterations CPU time Error Convergence order

NR1 6 0.0006 2.95
NAd1 6 0.0007 3.91
DV 7 0.0018 2.98
CT 6 0.0005 2.95
HM 6 0.0003 3.00

Example 2. The test function is taken from [9, 10, 13, 14]. This problem has a known solution, and we test it from a fixed initial value. The test results are listed in Table 2.


Method Iterations CPU time Error Convergence order

NR1 3 0.0007 3.15
NAd1 3 0.0011 4.71
DV 4 0.0008 3.02
CT 3 0.0007 3.15
HM 3 0.0004 5.69

Example 3. The test function is taken from [4]. We test this problem from a fixed starting point. The test results are listed in Table 3.


Method Iterations CPU time Error Convergence order

NR1 4 0.0008 3.24
NAd1 4 0.0007 4.51
DV 4 0.0007 3.62
CT 4 0.0009 3.24
HM 3 0.0005 Inf

Example 4. The test function is taken from [9, 10, 13]. This problem has two solutions; we test it from two starting points, from which the iterative sequences converge to the first and the second solution, respectively. The test results are listed in Tables 4 and 5.


Method Iterations CPU time Error Convergence order

NR1 4 0.0013 3.32
NAd1 3 0.0011 4.40
DV 4 0.0008 3.38
CT 4 0.0015 3.32
HM 3 0.0006 4.68


Method Iterations CPU time Error Convergence order

NR1 4 0.0016 3.29
NAd1 3 0.0010 4.13
DV 4 0.0008 3.33
CT 4 0.0014 3.29
HM 3 0.0006 4.43

Example 5. The test function is taken from [10, 13, 14]. This problem has two solutions; we test it from two starting points, from which the iterative sequences converge to the first and the second solution, respectively. The test results are listed in Tables 6 and 7.


Method Iterations CPU time Error Convergence order

NR1 4 0.0104 2.98
NAd1 3 0.0105 3.23
DV 4 0.0072 2.96
CT 4 0.0085 2.98
HM 3 0.0047 3.58


Method Iterations CPU time Error Convergence order

NR1 4 0.0093 2.98
NAd1 3 0.0115 3.23
DV 4 0.0078 2.96
CT 4 0.0107 2.98
HM 3 0.0052 3.58

Example 6. Consider the unconstrained optimization problem from [18]. By the KKT (first-order optimality) condition, a minimizer solves the nonlinear system ∇f(x) = 0, and we apply the methods to this system from a fixed starting point. The test results are listed in Table 8.


Method Iterations CPU time Error Convergence order

NR1 4 0.1513 3.24
NAd1 5 2.2769 3.98
DV 5 0.1774 2.90
CT 4 0.1823 3.24
HM 4 0.1006 3.77

Example 7. Consider the discrete two-point boundary value problem (see [19]), whose discretization leads to a nonlinear system involving an n × n tridiagonal matrix. We test this problem from a fixed starting point. The test results are listed in Table 9.


Method Iterations CPU time Error Convergence order

NR1 2 0.6050 Inf
NAd1 2 2.5966 Inf
DV 2 1.3221 Inf
CT 2 0.7301 Inf
HM 2 0.4460 Inf