Abstract

This paper focuses on solving systems of nonlinear equations numerically. We propose an efficient two-step iterative scheme with fourth order of convergence. The proposed method does not require the evaluation of second- or higher-order Fréchet derivatives per iteration to reach fourth order of convergence. Finally, numerical results illustrate the efficiency of the method.

1. Introduction

Many relationships in nature are inherently nonlinear, meaning that effects are not in direct proportion to their causes. In fact, a large number of real-world applications reduce to solving a system of nonlinear equations numerically. Solving such systems is one of the most challenging problems in numerical analysis (see [14]); examples arise in chaos theory, which studies the behavior of dynamical systems that are highly sensitive to initial conditions, and in climate studies, where the weather is famously chaotic and simple changes in one part of the system produce complex effects throughout.

Some robust and efficient methods for solving nonlinear systems have been brought forward (e.g., see [5–7]). Newton's iteration for nonlinear equations is a basic method, which converges quadratically under some conditions; this method has also been extended to systems of nonlinear equations. In this method, if the Jacobian of the multivariate function at an iterate is singular or nearly singular, the iteration can abort due to overflow or can diverge, which restricts its applications to some extent. Recently, some modified Newton methods with cubic convergence have been proposed for systems of nonlinear equations in order to overcome this flaw; see, for example, [8]. In fact, there exist fourth-order schemes that avoid the condition of a nonsingular Jacobian matrix. For example, Noor et al. in [9] design different algorithms for solving nonlinear systems of equations, obtained by means of variational techniques. One of the iterative schemes proposed in that paper, which we use in the numerical section, is constructed from the coordinate functions of the multivariate function.
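As a concrete illustration of the basic scheme, the following is a minimal sketch of Newton's iteration for a small system, with the linear step solved by Cramer's rule; the 2x2 test system, starting point, and tolerance are illustrative assumptions, not taken from the paper.

```python
# Newton's method for a 2x2 nonlinear system (illustrative sketch).
# The Jacobian is assumed nonsingular along the iteration, as in the text.

def F(v):
    x, y = v
    return (x**2 + y - 2.0, x + y**2 - 2.0)   # illustrative test system

def J(v):                                      # Jacobian of F
    x, y = v
    return ((2.0*x, 1.0), (1.0, 2.0*y))

def newton(v, tol=1e-12, maxit=50):
    for _ in range(maxit):
        f1, f2 = F(v)
        if max(abs(f1), abs(f2)) < tol:
            break
        (a, b), (c, d) = J(v)
        det = a*d - b*c                        # nonsingularity assumed
        s1 = (-f1*d + f2*b) / det              # Cramer's rule for J s = -F
        s2 = (-f2*a + f1*c) / det
        v = (v[0] + s1, v[1] + s2)
    return v
```

Starting from a point near the root (1, 1), the iteration converges quadratically.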

However, the condition of a nonsingular Jacobian cannot always be avoided, though, as discussed in [10], the generalized inverse can be used instead of the ordinary inverse, and the convergence still remains quadratic. Moreover, the efficiency index of these solvers is not satisfactory for practical problems, which motivates us to suggest high-order iterative methods with a high efficiency index.

In this paper, we suggest a novel two-step iterative scheme for finding the solution of nonlinear systems whose Jacobian matrix is required to be nonsingular. The main merit of the proposed scheme is that it does not require the evaluation of second- or higher-order Fréchet derivatives, while fourth order of convergence is attained using only one function evaluation and two first-order Fréchet derivative evaluations per iteration.

The paper is organized as follows. In the next section, we give a short review of basic notes in this field of study, while Section 3 proposes the new iterative algorithm for solving systems of nonlinear equations and analyzes its local convergence. In Section 4, numerical examples are considered to show that the new method performs well; the detailed results for these examples are analyzed and discussed there. Finally, some conclusions are drawn in Section 5.

2. Preliminary Notes

Let us consider the problem of finding a real zero of a multivariate nonlinear function F : D ⊆ R^n → R^n, that is, a solution of the system F(x) = 0, where F is a smooth map and D is an open and convex set. We assume that α ∈ D is a zero of the system and x^(0) is an initial guess sufficiently close to α.

Consider the Taylor expansion of F about the current iterate x^(k):

F(x) ≈ F(x^(k)) + F'(x^(k))(x − x^(k)),    (2.1)

where F'(x^(k)) denotes the Jacobian matrix of F at x^(k). Replacing x by the zero α in (2.1) and using F(α) = 0, we get

0 ≈ F(x^(k)) + F'(x^(k))(α − x^(k)).    (2.2)

Solving (2.2) for α gives

α ≈ x^(k) − [F'(x^(k))]^(−1) F(x^(k)).

If this approximation of α is taken as the next iterate, we denote it by x^(k+1) and obtain Newton's method; that is,

x^(k+1) = x^(k) − [F'(x^(k))]^(−1) F(x^(k)),  k = 0, 1, 2, ….    (2.3)

Waziri et al. [11] proposed a technique to approximate the Jacobian inverse by a diagonal matrix D^(k), which is updated at each iteration, where the ith diagonal entry of the matrix is defined by the componentwise difference quotient

d_i^(k) = (f_i(x^(k)) − f_i(x^(k−1))) / (x_i^(k) − x_i^(k−1)),

where f_i is the ith component of the vector F, for all i = 1, 2, …, n and k ≥ 1. We remark here that this procedure is similar to the secant method in the one-dimensional case. The approximation is valid only if the denominator x_i^(k) − x_i^(k−1) is neither zero nor smaller in magnitude than a prescribed positive real number; otherwise, the corresponding diagonal entry is set to a default value. For more details, please refer to [11].
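A minimal sketch of such a diagonal, secant-like Jacobian approximation follows, in the spirit of [11]; the forward-difference initialization, the safeguard threshold `eps`, the fallback value 1.0, and the decoupled test system are illustrative assumptions, not the exact choices of [11].

```python
# Diagonal, secant-like Jacobian approximation (illustrative sketch).

def F(v):
    x1, x2 = v
    return [x1**2 - 1.0, x2**3 - 8.0]       # illustrative decoupled system

def diag_secant(x, tol=1e-10, maxit=100, eps=1e-12, h=1e-4):
    fx = F(x)
    n = len(x)
    # initialize each diagonal entry by a forward difference (assumption)
    d = [(F([x[j] + (h if j == i else 0.0) for j in range(n)])[i] - fx[i]) / h
         for i in range(n)]
    for _ in range(maxit):
        x_new = [xi - fi/di for xi, fi, di in zip(x, fx, d)]
        f_new = F(x_new)
        for i in range(n):
            dx = x_new[i] - x[i]
            # secant-like update of the ith diagonal entry, with safeguard
            d[i] = (f_new[i] - fx[i]) / dx if abs(dx) > eps else 1.0
        x, fx = x_new, f_new
        if max(abs(fi) for fi in fx) < tol:
            break
    return x
```

Each component behaves like a one-dimensional secant iteration, which is exactly the analogy drawn in the text.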

The order of convergence and efficiency index of such solvers are indeed not good for practical problems in which iterations with high efficiency are required. This was precisely the motivation for producing higher-order iterative solvers for nonlinear systems. Third-order methods free from second derivatives have been derived from quadrature rules for solving systems of nonlinear equations. These methods require one functional evaluation and two evaluations of the first derivative at two different points. An example of such iterations is the Arithmetic Mean Newton method (3rd AM), derived from the trapezoidal rule [12]:

x^(k+1) = x^(k) − 2 [F'(x^(k)) + F'(y^(k))]^(−1) F(x^(k)),    (2.7)

where y^(k) = x^(k) − [F'(x^(k))]^(−1) F(x^(k)) is the Newton iterate.
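A minimal sketch of this predictor–corrector structure for a 2x2 system follows: a Newton predictor y, then a corrector using the arithmetic mean of the Jacobians at x and y. The test system and starting point are illustrative assumptions.

```python
# Third-order Arithmetic Mean Newton method (3rd AM), illustrative sketch.

def F(v):
    x, y = v
    return (x**2 + y - 2.0, x + y**2 - 2.0)

def J(v):
    x, y = v
    return ((2.0*x, 1.0), (1.0, 2.0*y))

def solve2(M, r):
    """Solve the 2x2 linear system M s = r by Cramer's rule."""
    (a, b), (c, d) = M
    det = a*d - b*c
    return ((r[0]*d - b*r[1]) / det, (a*r[1] - c*r[0]) / det)

def am3(v, tol=1e-12, maxit=50):
    for _ in range(maxit):
        f = F(v)
        if max(abs(f[0]), abs(f[1])) < tol:
            break
        Jx = J(v)
        s = solve2(Jx, (-f[0], -f[1]))
        y = (v[0] + s[0], v[1] + s[1])           # Newton predictor y^(k)
        Jy = J(y)
        Jm = tuple(tuple(Jx[i][j] + Jy[i][j] for j in range(2))
                   for i in range(2))
        # corrector: x^(k+1) = x^(k) - 2 [J(x)+J(y)]^{-1} F(x)
        t = solve2(Jm, (-2.0*f[0], -2.0*f[1]))
        v = (v[0] + t[0], v[1] + t[1])
    return v
```

Per iteration the scheme evaluates F once and the Jacobian twice, matching the cost counted in the text.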

The convergence analysis of these methods using point-of-attraction theory can be found in [13]. This third-order Newton-like method is more efficient than Halley's method because it does not require the evaluation of the second Fréchet derivative, an order-three tensor of values.

In the next section, we present our new algorithm, which reaches fourth order of convergence using only three evaluations per iteration (one of the multivariate function and two of its Jacobian) for solving systems of nonlinear equations.

3. A Novel Computational Iterative Algorithm

Let p be the order of a method and C be its computational cost per iteration, defined as C = μ0 n + μ1 n^2, where μ0 and μ1 represent the number of times F and F' are to be evaluated, respectively. The Logarithmic Informational Efficiency, or Efficiency Index, for nonlinear systems [1] is given by E = p^(1/C). The efficiency indices of the Newton method (2nd NM) and of the third-order methods free from second derivatives (3rd AM) are given by

E_NM = 2^(1/(n + n^2)),  E_3AM = 3^(1/(n + 2n^2)),

respectively. We observe that E_NM > E_3AM if n ≥ 2. That is, the third-order methods free from second derivatives are less efficient than Newton's method for systems of nonlinear equations. Let us also note that the fourth-order scheme of Noor et al. (4th NR) has the same efficiency index as Newton's method: 4^(1/(2n + 2n^2)) = 2^(1/(n + n^2)). Thus, it is important to develop fourth-order methods from these third-order methods to improve the efficiency. Soleymani et al. [5] have recently improved the 3rd AM method to obtain a class of fourth-order Jarratt-type methods for solving the scalar nonlinear equation f(x) = 0; a simplified example of this class is given in [5].
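These indices are easy to tabulate. The short script below follows the evaluation counting of the text (one F costs n scalar evaluations, one Jacobian costs n^2); the dimensions chosen are illustrative.

```python
# Efficiency index E = p**(1/C) for the methods discussed, with
# C = (F evaluations)*n + (Jacobian evaluations)*n**2.

def efficiency(p, n_fun, n_jac, n):
    C = n_fun * n + n_jac * n * n     # total scalar evaluations per iteration
    return p ** (1.0 / C)

for n in (2, 5, 10):
    e_nm  = efficiency(2, 1, 1, n)    # 2nd NM: one F, one Jacobian
    e_am3 = efficiency(3, 1, 2, n)    # 3rd AM: one F, two Jacobians
    e_am4 = efficiency(4, 1, 2, n)    # 4th AM: one F, two Jacobians
    print(n, round(e_nm, 4), round(e_am3, 4), round(e_am4, 4))
```

For every n ≥ 2 the ordering is 4th AM > 2nd NM > 3rd AM, in agreement with the comparisons made in the text.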

We extend this method to the multivariate case. The improved fourth-order Arithmetic Mean Newton method (4th AM) for systems of nonlinear equations can now be suggested, where y^(k) is the Newton iterate and I is the identity matrix. Thus, by simplifying, we can produce the following fourth-order convergent algorithm.

Proposed Algorithm

Step 1. Solve the linear system ,

Step 2. Calculate(i), (ii),

Step 3. Solve the linear system , where ,

Step 4. Calculate .
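The explicit coefficients of scheme (3.5) are not reproduced above; as a structurally similar stand-in, the following sketch implements the classical fourth-order Jarratt scheme, which shares the same two-step predictor–corrector structure and the same per-iteration cost (one function and two Jacobian evaluations). This is the classical Jarratt method, not the 4th AM method itself, and the 2x2 test system is illustrative.

```python
# Classical fourth-order Jarratt scheme for a 2x2 system (stand-in sketch):
#   y = x - (2/3) J(x)^{-1} F(x)
#   x_new = x - (1/2) [3 J(y) - J(x)]^{-1} [3 J(y) + J(x)] J(x)^{-1} F(x)

def F(v):
    x, y = v
    return (x**2 + y - 2.0, x + y**2 - 2.0)

def J(v):
    x, y = v
    return ((2.0*x, 1.0), (1.0, 2.0*y))

def solve2(M, r):
    """Solve the 2x2 linear system M s = r by Cramer's rule."""
    (a, b), (c, d) = M
    det = a*d - b*c
    return ((r[0]*d - b*r[1]) / det, (a*r[1] - c*r[0]) / det)

def matvec(M, v):
    return (M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1])

def jarratt(v, tol=1e-13, maxit=20):
    for _ in range(maxit):
        f = F(v)
        if max(abs(f[0]), abs(f[1])) < tol:
            break
        s = solve2(J(v), f)                        # s = J(x)^{-1} F(x)
        y = (v[0] - 2.0/3.0*s[0], v[1] - 2.0/3.0*s[1])
        Jx, Jy = J(v), J(y)
        M = tuple(tuple(3.0*Jy[i][j] - Jx[i][j] for j in range(2))
                  for i in range(2))
        N = tuple(tuple(3.0*Jy[i][j] + Jx[i][j] for j in range(2))
                  for i in range(2))
        t = solve2(M, matvec(N, s))                # t = M^{-1} N s
        v = (v[0] - 0.5*t[0], v[1] - 0.5*t[1])
    return v
```

Like the proposed algorithm, each iteration solves two linear systems with the same data pattern: one with the Jacobian at x^(k) and one with a combination of the two Jacobians.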

The main theorem is demonstrated by means of the n-dimensional Taylor expansion of the functions involved. Let F : D ⊆ R^n → R^n be sufficiently Fréchet differentiable in D. By using the notation introduced in [14], the qth derivative of F at u ∈ R^n, q ≥ 1, is the q-linear function F^(q)(u) : R^n × ⋯ × R^n → R^n such that F^(q)(u)(v_1, …, v_q) ∈ R^n. It is easy to observe that

(1) F^(q)(u)(v_1, …, v_{q−1}, ·) ∈ L(R^n);
(2) F^(q)(u)(v_{σ(1)}, …, v_{σ(q)}) = F^(q)(u)(v_1, …, v_q), for all permutations σ of {1, 2, …, q}.

So, in the following, we will denote:

(a) F^(q)(u)(v_1, …, v_q) = F^(q)(u) v_1 ⋯ v_q;
(b) F^(q)(u) v^{q−1} F^(p)(u) v^p = F^(q)(u) F^(p)(u) v^{q+p−1}.

It is well known that, for x^(k) lying in a neighborhood of a solution α of the nonlinear system F(x) = 0, Taylor's expansion can be applied (assuming that the Jacobian matrix F'(α) is nonsingular), and

F(x^(k)) = F'(α)[e^(k) + C_2 (e^(k))^2 + C_3 (e^(k))^3 + C_4 (e^(k))^4] + O((e^(k))^5),    (3.6)

where C_q = (1/q!)[F'(α)]^(−1) F^(q)(α), q ≥ 2, and e^(k) = x^(k) − α. We observe that C_q (e^(k))^q ∈ R^n since F^(q)(α) ∈ L(R^n × ⋯ × R^n, R^n) and [F'(α)]^(−1) ∈ L(R^n).

In addition, we can express F'(x^(k)) as

F'(x^(k)) = F'(α)[I + 2C_2 e^(k) + 3C_3 (e^(k))^2 + 4C_4 (e^(k))^3] + O((e^(k))^4),    (3.7)

where I is the identity matrix. Therefore, q C_q (e^(k))^{q−1} ∈ L(R^n). From (3.7), we obtain

[F'(x^(k))]^(−1) = [I + X_2 e^(k) + X_3 (e^(k))^2 + X_4 (e^(k))^3][F'(α)]^(−1) + O((e^(k))^4),    (3.8)

where

X_2 = −2C_2,  X_3 = 4C_2^2 − 3C_3,  X_4 = −8C_2^3 + 6C_2C_3 + 6C_3C_2 − 4C_4.

We denote e^(k) = x^(k) − α as the error in the kth iteration. The equation

e^(k+1) = L (e^(k))^p + O((e^(k))^{p+1}),

where L is a p-linear function L ∈ L(R^n × ⋯ × R^n, R^n), is called the error equation, and p is the order of convergence. Observe that (e^(k))^p is (e^(k), e^(k), …, e^(k)).

Theorem 3.1. Let F : D ⊆ R^n → R^n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of α ∈ R^n, a solution of the system F(x) = 0. Let us suppose that F'(x) is continuous and nonsingular in D, and x^(0) is close enough to α. Then the sequence {x^(k)}_{k≥0} obtained using the iterative expression (3.5) converges to α with order 4, the error equation being of the form e^(k+1) = L (e^(k))^4 + O((e^(k))^5).

Proof. From (3.6) and (3.7), we obtain the Taylor expansion of the Newton correction [F'(x^(k))]^(−1) F(x^(k)) in powers of e^(k); using (3.8), the expansion of the predictor y^(k) follows. The Taylor expansion of the Jacobian matrix F'(y^(k)) is then obtained by substituting this expression into (3.7). On the other hand, the arithmetic mean of the Jacobian matrices F'(x^(k)) and F'(y^(k)) can be expanded in the same way, and by reasoning analogous to that in (3.8) we obtain the expansion of its inverse. Finally, by using (3.19) and (3.23), the error equation (3.24) is derived; taking (3.24) into account, it can be concluded that the order of convergence of the proposed method is four.

The efficiency index of the fourth-order method free from second derivatives (4th AM) is given by E_4AM = 4^(1/(n + 2n^2)). This shows that the 4th AM method is more efficient than the 2nd NM and 3rd AM methods. We next conduct some numerical experiments to compare the methods.

4. Numerical Examples

In this section, we compare the performance of the contributed method with Newton's scheme and the Arithmetic Mean Newton method (2.7). The algorithms have been written in MATLAB 7.6 and tested on the examples given below. For the following test problems, the approximate solutions are calculated correct to 500 digits by using variable-precision arithmetic, and the stopping criterion is based on the norms of the residual and of the difference between consecutive iterates. We have used the approximated computational order of convergence (ACOC) given by (see [15])

ρ ≈ ln( ||x^(k+1) − x^(k)|| / ||x^(k) − x^(k−1)|| ) / ln( ||x^(k) − x^(k−1)|| / ||x^(k−1) − x^(k−2)|| ).

We also report the number of iterations required before convergence is reached and the minimum residual.
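The ACOC estimate of Cordero and Torregrosa [15] can be sketched from a list of iterates; here it is illustrated on the scalar Newton iteration for x^2 − 2 = 0, an illustrative example (not one of the test problems), whose ACOC should approach 2.

```python
# ACOC computed from successive iterates (illustrative sketch).
import math

def acoc(xs):
    """ACOC estimates from a list of successive iterates."""
    d = [abs(xs[i+1] - xs[i]) for i in range(len(xs) - 1)]
    return [math.log(d[i+1] / d[i]) / math.log(d[i] / d[i-1])
            for i in range(1, len(d) - 1)]

x, xs = 1.5, [1.5]
for _ in range(4):                    # stop before hitting machine precision
    x = x - (x*x - 2.0) / (2.0*x)     # Newton step for f(x) = x^2 - 2
    xs.append(x)
rhos = acoc(xs)
```

In double precision only a handful of iterates can be used before the differences reach machine precision, which is why the paper evaluates the iterates to 500 digits.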

We check the mentioned method by solving the following test problems.

Test Problem 1 (TP1). Consider , where , and the Jacobian matrix is given by . The starting vector is and the exact solution is . In this case, the efficiency indices compare as 4th AM > 2nd NM > 3rd AM.

Test Problem 2 (TP2). Consider

The solution is

We choose the starting vector . The Jacobian matrix is given by and has nonzero elements. In this case, the efficiency indices again compare as 4th AM > 2nd NM > 3rd AM.

Test Problem 3 (TP3). Consider We solve this system using the initial approximation . The solution is The Jacobian is a matrix given by with nonzero elements. In this case, the efficiency indices again compare as 4th AM > 2nd NM > 3rd AM.

Table 1 gives the results for Test Problems 1, 2, and 3. For all problems, the fourth-order method converges in the fewest iterations, and the computational order of convergence agrees with the theory. The 4th AM method gives the best results in terms of the smallest residual, and it is more efficient than the 2nd NM and 3rd AM methods. It can compete with the 4th NR method and has the advantage of a higher efficiency index.

5. Concluding Remarks

The efficiency of the quadratically and cubically convergent multidimensional methods is not satisfactory for most practical problems. Thus, in this paper, we have extended a third-order method to a fourth-order method for systems of nonlinear equations. We have shown that the fourth-order iterative method is more efficient than the second-order Newton method and the third-order methods. Numerical experiments have also confirmed the efficiency of the fourth-order method.

Acknowledgments

The authors would like to thank the referees for the valuable comments and for the suggestions to improve the readability of the paper. This research was supported by Ministerio de Ciencia y Tecnología MTM2011-28636-C02-02.