Abstract and Applied Analysis

Volume 2015 (2015), Article ID 289029, 12 pages

http://dx.doi.org/10.1155/2015/289029

## Design of High-Order Iterative Methods for Nonlinear Systems by Using Weight Function Procedure

^{1}Instituto Tecnológico de Santo Domingo (INTEC), Avenida Los Próceres, Galá, 10602 Santo Domingo, Dominican Republic

^{2}Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera s/n, 46022 València, Spain

Received 9 September 2014; Accepted 25 November 2014

Academic Editor: Benito M. Chen-Charpentier

Copyright © 2015 Santiago Artidiello et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We present two classes of iterative methods, of orders of convergence four and five, respectively, for solving systems of nonlinear equations, by using the technique of weight functions in each step. Moreover, we show an extension to higher order, adding only one functional evaluation of the vectorial nonlinear function. We perform numerical tests to compare the proposed methods with other schemes in the literature and to test their effectiveness on specific nonlinear problems. Finally, some real basins of attraction are analyzed in order to check the relation between the order of convergence and the set of convergent starting points.

#### 1. Introduction

To find the solution of a system of nonlinear equations $F(x)=0$, where $F:D\subseteq\mathbb{R}^n \rightarrow \mathbb{R}^n$, is a common, ancient, and important problem in mathematics and engineering (see, e.g., [1]). Previous experimentation on some of these applied problems shows that high-order methods, combined with multiprecision floating-point arithmetic, are very useful because they yield a clear reduction in the number of required iterations (see, e.g., [2, 3]).

Over the last years, numerous authors have designed iterative methods for solving nonlinear systems, modifying the classical schemes in order to accelerate the convergence and to reduce the number of operations and functional evaluations at each step of the iterative process. The variants of Newton’s method for scalar problems described by Weerakoon and Fernando in [4], by Özban in [5], and by Gerlach in [6] have been extended to functions of several variables in [7–10]. All these processes yield multipoint methods with Newton’s method as a predictor. However, a general procedure called pseudocomposition is designed in [11, 12] by using a generic predictor and Gaussian quadrature as the corrector.

On the other hand, one of the most used techniques to generate high-order methods for solving nonlinear equations or systems was introduced by Traub in [13]: the composition of two iterative methods of orders $p$ and $q$, respectively, to obtain a method of order $pq$. With the direct application of this technique, it is usually necessary to evaluate the nonlinear function and its associated Jacobian matrix again in order to increase the order of convergence, and the new method usually ends up being less efficient than the original ones.
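Traub’s composition result can be checked numerically on a toy problem of our own choosing (it is not taken from [13]): composing two order-two Newton steps into a single iteration produces an order-four scheme. The sketch below, for the scalar equation $x^3-2=0$, estimates the computational order of convergence with high-precision arithmetic.

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 300  # enough digits to observe quartic error decay

# Toy scalar problem f(x) = x^3 - 2 with root 2^(1/3); for systems the
# division below becomes a linear solve with the Jacobian matrix.
f = lambda x: x ** 3 - 2
fp = lambda x: 3 * x ** 2
root = Decimal(2) ** (Decimal(1) / Decimal(3))

def newton(x):
    # one Newton step, order two
    return x - f(x) / fp(x)

# Composing two order-2 steps gives one iteration of order 2 * 2 = 4.
x = Decimal("1.5")
errors = []
for _ in range(4):
    x = newton(newton(x))          # one composed iteration
    errors.append(abs(x - root))

# Computational order of convergence from three consecutive errors.
p = math.log(float(errors[3] / errors[2])) / math.log(float(errors[2] / errors[1]))
print(round(p, 2))
```

The estimated order settles near 4, in agreement with the $pq$ rule for $p=q=2$.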

So, to design a more efficient method, it is usual to replace the new evaluation of the Jacobian matrix by an estimate in terms of Jacobian matrices previously computed. With this approach, the resulting method generally has a lower order of convergence than $pq$, but the number of functional evaluations per iteration decreases.
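The trade-off can also be illustrated numerically with a sketch of our own (it is not one of the methods studied below): reusing the first-step derivative in a second Newton-type step drops the composed order from four to three, while saving one derivative evaluation per iteration.

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 200

f = lambda x: x ** 3 - 2
fp = lambda x: 3 * x ** 2
root = Decimal(2) ** (Decimal(1) / Decimal(3))

def frozen_two_step(x):
    # y = x - f(x)/f'(x); then reuse f'(x) instead of evaluating f'(y):
    # order three instead of four, but only one derivative per iteration.
    y = x - f(x) / fp(x)
    return y - f(y) / fp(x)

x = Decimal("1.5")
errors = []
for _ in range(5):
    x = frozen_two_step(x)
    errors.append(abs(x - root))

# Computational order of convergence from the last three errors.
p = math.log(float(errors[4] / errors[3])) / math.log(float(errors[3] / errors[2]))
print(round(p, 2))
```

The estimate settles near 3: one order is lost with respect to the full composition, but the scheme is cheaper per iteration.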

Now, let us consider the problem of finding a real zero of a function $F:D\subseteq\mathbb{R}^n \rightarrow \mathbb{R}^n$, that is, a solution $\bar{x}\in D$ of the nonlinear system of equations $F(x)=0$. In general, this solution must be approximated by means of iterative methods. The best known and most widely used scheme is classical Newton’s method,
$$x^{(k+1)} = x^{(k)} - \left[F'\left(x^{(k)}\right)\right]^{-1} F\left(x^{(k)}\right), \quad k = 0, 1, \ldots$$
It is known that this method has second order of convergence and that it uses one functional evaluation of the vectorial function and one of the associated Jacobian matrix at the previous iterate to generate a new one.
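A minimal implementation of Newton’s method for systems can be sketched as follows; the test problem ($F(x_1,x_2) = (x_1^2 + x_2 - 2,\; x_1 + x_2^2 - 2)$, with solution $(1,1)$), the starting point, and the stopping tolerance are our own illustrative choices.

```python
import numpy as np

def F(x):
    return np.array([x[0] ** 2 + x[1] - 2.0, x[0] + x[1] ** 2 - 2.0])

def J(x):
    # Jacobian matrix of F
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

def newton_system(x0, tol=1e-12, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        # x_{k+1} = x_k - [F'(x_k)]^{-1} F(x_k), computed via a linear solve
        x = x - np.linalg.solve(J(x), F(x))
        if np.linalg.norm(F(x)) < tol:
            return x, k + 1
    return x, max_iter

sol, iters = newton_system([1.5, 0.5])
print(sol, iters)
```

Note that the inverse Jacobian is never formed explicitly; each iteration solves one linear system, which is the operation counted in the efficiency discussion at the end of this section.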

In the numerical section, we will use some multipoint iterative schemes to compare with ours. These have been developed by different authors. The first one is the fourth-order method designed by Jarratt in [14], whose iterative expression is
$$y^{(k)} = x^{(k)} - \frac{2}{3}\left[F'\left(x^{(k)}\right)\right]^{-1} F\left(x^{(k)}\right),$$
$$x^{(k+1)} = x^{(k)} - \frac{1}{2}\left[3F'\left(y^{(k)}\right) - F'\left(x^{(k)}\right)\right]^{-1} \left[3F'\left(y^{(k)}\right) + F'\left(x^{(k)}\right)\right] \left[F'\left(x^{(k)}\right)\right]^{-1} F\left(x^{(k)}\right).$$
In the following, we will denote it by JM4. Let us also consider the fourth-order method designed by Sharma et al. in [15], denoted by SHM4. We will also use some fifth-order methods to compare with ours: the scheme obtained by Vassileva [12], which generalizes Jarratt’s method and improves its convergence by adding one functional evaluation of $F$ in the second step of the process, denoted by VM5, and the method designed by Sharma and Gupta in [16], which we denote by SHM5.
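For reference, a scalar sketch of the classical Jarratt iteration (our own transcription of the well-known scheme, written here for a single equation) together with a numerical estimate of its convergence order:

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 300

f = lambda x: x ** 3 - 2
fp = lambda x: 3 * x ** 2
root = Decimal(2) ** (Decimal(1) / Decimal(3))

def jarratt(x):
    # y = x - (2/3) f(x)/f'(x)
    y = x - Decimal(2) / Decimal(3) * f(x) / fp(x)
    # x+ = x - (1/2) (3f'(y) + f'(x)) / (3f'(y) - f'(x)) * f(x)/f'(x)
    return x - Decimal(1) / Decimal(2) \
        * (3 * fp(y) + fp(x)) / (3 * fp(y) - fp(x)) * f(x) / fp(x)

x = Decimal("1.5")
errors = []
for _ in range(4):
    x = jarratt(x)
    errors.append(abs(x - root))

# Computational order of convergence from three consecutive errors.
p = math.log(float(errors[3] / errors[2])) / math.log(float(errors[2] / errors[1]))
print(round(p, 2))
```

The order estimate settles near 4 while using only one function and two derivative evaluations per iteration, which is what makes Jarratt-type schemes attractive.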

In this work we design some iterative methods with orders of convergence four and five and present their analysis in Section 2. We compare them in Section 3 with other known processes on academic examples, with the aim that the results of this comparison give us an idea of how robust our methods are. The stability is studied from the point of view of real dynamics on a specific two-dimensional problem. An applied problem, the partial differential equation of molecular interaction, is used to check the conclusions reached in the previous sections.

#### 2. Design and Analysis of Convergence

Let $F:D\subseteq\mathbb{R}^n \rightarrow \mathbb{R}^n$, $n \geq 1$, be a sufficiently differentiable function on a convex set $D$ and let $\bar{x}\in D$ be a solution of the nonlinear system of equations $F(x)=0$. We propose a two-step iterative scheme, consisting of a Newton step followed by a weighted corrector step, which depends on a matrix weight function $H$ and three real parameters.

This scheme has been designed as a modified double-Newton method with a frozen evaluation of the Jacobian matrix and a matrix weight function at the second step. We denote this class of methods (whose order of convergence is four under certain conditions) by MS4. As has been stated, $H$ is a matrix function with matrix variable. Specifically, if $X = \mathbb{R}^{n\times n}$ denotes the Banach space of real square matrices of size $n \times n$, then we can define $H: X \rightarrow X$ such that its Fréchet derivatives satisfy:
(a) $H'(u)(v) = H_1 uv$, where $H': X \rightarrow \mathcal{L}(X)$ and $H_1 \in \mathbb{R}$,
(b) $H''(u,v)(w) = H_2 uvw$, where $H'': X \times X \rightarrow \mathcal{L}(X)$ and $H_2 \in \mathbb{R}$.

Let us note that $\mathcal{L}(X)$ denotes the space of linear mappings of $X$ into itself.

To prove the local convergence of the proposed iterative method to the solution $\bar{x}$ of $F(x)=0$, we will use Taylor series expansions of the involved functions around the solution, assuming that the Jacobian matrix $F'(x)$ is nonsingular in a neighborhood of $\bar{x}$:
$$F\left(x^{(k)}\right) = F'(\bar{x})\left[e_k + C_2 e_k^2 + C_3 e_k^3 + C_4 e_k^4\right] + O\left(e_k^5\right),$$
where $e_k = x^{(k)} - \bar{x}$ and $C_q = \frac{1}{q!}\left[F'(\bar{x})\right]^{-1} F^{(q)}(\bar{x})$, $q \geq 2$.

So, $F'\left(x^{(k)}\right)$ can be expressed (in a neighborhood of $\bar{x}$) as
$$F'\left(x^{(k)}\right) = F'(\bar{x})\left[I + 2C_2 e_k + 3C_3 e_k^2 + 4C_4 e_k^3\right] + O\left(e_k^4\right),$$
where $I$ is the identity matrix. This technique, including the notation used, has been detailed in [17].
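As a quick sanity check of these expansions, for a scalar cubic polynomial the series terminates at $C_3$, so the identity $F(\bar{x}+e) = F'(\bar{x})\left[e + C_2 e^2 + C_3 e^3\right]$ holds exactly; the function below is our own illustrative choice.

```python
# Scalar check: f(x) = x^3 - 2, root r = 2^(1/3), C_q = f^(q)(r) / (q! f'(r)).
r = 2.0 ** (1.0 / 3.0)
fp = 3.0 * r ** 2             # f'(r)
C2 = (6.0 * r) / (2.0 * fp)   # f''(r) / (2! f'(r))
C3 = 6.0 / (6.0 * fp)         # f'''(r) / (3! f'(r))

e = 1e-2
lhs = (r + e) ** 3 - 2.0                    # f(r + e)
rhs = fp * (e + C2 * e ** 2 + C3 * e ** 3)  # series, exact for a cubic
print(abs(lhs - rhs))                       # only floating-point rounding
```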

Theorem 1. *Let $F:D\subseteq\mathbb{R}^n \rightarrow \mathbb{R}^n$, $n \geq 1$, be a sufficiently differentiable function in an open convex set $D$ and let $\bar{x}\in D$ be a solution of the system of nonlinear equations $F(x)=0$. One supposes that $F'(x)$ is continuous and nonsingular at $\bar{x}$. Let $H$ be a sufficiently differentiable matrix weight function such that , , and , with $I$ being the identity matrix. Let one consider $x^{(0)}$ as an initial guess, sufficiently close to $\bar{x}$. Then, the sequence $\left\{x^{(k)}\right\}$ obtained using expression (6) converges to $\bar{x}$ with order of convergence four if and . Moreover, the error equation is as follows, where , , and .*

*Proof. *From (8) and (9) we obtain, respectively, the expansions of $F\left(x^{(k)}\right)$ and $F'\left(x^{(k)}\right)$ around $\bar{x}$. Then, forcing $\left[F'\left(x^{(k)}\right)\right]^{-1} F'\left(x^{(k)}\right) = I$, the Taylor expansion of the inverse of the Jacobian matrix at the iterate $x^{(k)}$ is obtained as follows:
$$\left[F'\left(x^{(k)}\right)\right]^{-1} = \left[I + X_2 e_k + X_3 e_k^2 + X_4 e_k^3\right]\left[F'(\bar{x})\right]^{-1} + O\left(e_k^4\right),$$
where $X_2 = -2C_2$, $X_3 = 4C_2^2 - 3C_3$, and $X_4 = -8C_2^3 + 6C_2C_3 + 6C_3C_2 - 4C_4$. Then, the error of the first (Newton) step is
$$y^{(k)} - \bar{x} = C_2 e_k^2 + 2\left(C_3 - C_2^2\right) e_k^3 + \left(3C_4 - 4C_2C_3 - 3C_3C_2 + 4C_2^3\right) e_k^4 + O\left(e_k^5\right).$$
These expressions allow us to obtain the expansion of the variable of the weight function, where . Using result (14) and the Taylor series expansion around $\bar{x}$, we obtain , where , , , , , , and , . Thus, we obtain , where . Moreover, , where . Then, , where , , , and .

By using a Taylor series expansion around $\bar{x}$, we obtain the expression , where , , , , , , , and .

So, we get , where , , . This allows us to obtain the error equation of this iterative scheme, where and , .

To get an order of convergence of at least four, we impose , , and : from , we get , which gives us second order of convergence. Then, , and, as , is obtained.

Moreover, as , by using and , we get . By replacing it in (25), we get third order of convergence.

Forcing also , we obtain . Then, replacing and in (26), the order of convergence will be four.

Finally, the error equation is the one stated in the theorem, so the proof is completed.

Starting from the general family of iterative methods (6) and under the hypotheses of Theorem 1, we develop two specific fourth-order iterative schemes.

(i) By requiring and , it is directly obtained from Theorem 1 that and . In this way, we get the iterative scheme that we denote by MS4A.

(ii) The second iterative scheme, denoted by MS4B, is defined by imposing the conditions , , and , so that Theorem 1 implies that , , and .

At this point, we wonder if we can obtain a method with order of convergence higher than four by slightly modifying the iterative scheme (6), just adding one functional evaluation of $F$ in its second step, which yields scheme (30). We denote this class of methods by MS5. In the following, we prove that its order of convergence is five under certain conditions.

Theorem 2. *Let $F:D\subseteq\mathbb{R}^n \rightarrow \mathbb{R}^n$, $n \geq 1$, be a sufficiently differentiable function in an open convex set $D$ and let $\bar{x}\in D$ be a solution of the system of nonlinear equations $F(x)=0$. One supposes that $F'(x)$ is continuous and nonsingular at $\bar{x}$. Let $H$ be a sufficiently differentiable matrix weight function such that , , and , where $I$ is the identity matrix. Let one consider $x^{(0)}$ as an initial guess, sufficiently close to $\bar{x}$. Then, the sequence $\left\{x^{(k)}\right\}$ obtained using expression (30) converges to $\bar{x}$ with order of convergence five if and . Moreover, the error equation is as follows, where , , and .*

*Proof. *Let us remark that the only difference between methods (30) and (6) is that, in the second step, is replaced by . So, we need the Taylor series expansion of , where , , , , and . Then, we obtain the following expressions, where , , , and , .

So, the expression of the error equation in the last step is , where and , . Forcing , , , and , we obtain order of convergence five. Solving this system of equations, we obtain the conditions that guarantee order of convergence five as they appear in the hypothesis of this theorem. Then, the error equation takes the form stated in the theorem, and the proof is completed.

From the family of iterative methods (30), under the conditions imposed in Theorem 2, we get two particular fifth-order iterative schemes.

(i) By requiring that and , the conditions from Theorem 2 yield , , and . The resulting iterative method is the one we call MS5A.

(ii) The second scheme, denoted by MS5B, results from setting the parameters , , and in Theorem 2. Then, , , and .

It is interesting that, based on Theorem 2, we can generate methods whose order of convergence is higher than five. This process consists of adding a new step to scheme (30), keeping the weight function unaltered. In this way, the iterative expression (39) is obtained.

Assuming that the hypotheses of Theorem 2 are satisfied, the error in the second step of (30) can be expressed as . Then, the error equation of (39) takes the form , and the iterative method has order of convergence five. If , the order of convergence of the iterative scheme (39) is seven.

Based on this result, we state the following theorem which generalizes this procedure.

Theorem 3. *Under the same hypotheses of Theorem 2 and adding , the sequence $\left\{x^{(k)}\right\}$ obtained using the following expression converges to $\bar{x}$ with order of convergence , where , and the order of convergence of the penultimate step is , with error equation .*

*Proof. *In general, the error equation of the last step takes the form given above. This shows that the order of convergence of the method will be if .

In order to calculate the different efficiency indices, it is necessary to take into account not only the order of convergence, but also the number of functional evaluations per iteration ($n^2$ for each Jacobian matrix $F'$ and $n$ for each vectorial function $F$) and the number of products-quotients per step. The latter is obtained by observing the iterative expression of the method and using the fact that the number of products-quotients needed for solving a set of $m$ linear systems with the same coefficient matrix is $\frac{1}{3}n^3 + mn^2 - \frac{1}{3}n$ and that the product of a matrix and a vector involves $n^2$ products.
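These counts are easy to script. The helper below uses the operation counts just described; the evaluation counts passed in the example (a fourth-order scheme with two $F$- and two $F'$-evaluations versus a fifth-order scheme with one extra $F$-evaluation) are illustrative assumptions, not the exact figures of Table 1.

```python
def functional_evaluations(n, n_func, n_jac):
    # d = (evaluations of F) * n + (evaluations of F') * n^2
    return n_func * n + n_jac * n * n

def linear_system_cost(n, m):
    # products-quotients for m linear systems sharing one coefficient
    # matrix: one factorization plus m solves
    return n ** 3 / 3.0 + m * n ** 2 - n / 3.0

def efficiency_index(p, d):
    # classical efficiency index I = p^(1/d)
    return p ** (1.0 / d)

n = 10
I4 = efficiency_index(4, functional_evaluations(n, 2, 2))  # assumed counts
I5 = efficiency_index(5, functional_evaluations(n, 3, 2))  # assumed counts
cost = linear_system_cost(n, 2)
print(I4, I5, cost)
```

Both indices approach 1 as $n$ grows, which is why the comparison between methods must be made for each fixed size $n$.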

With respect to the classical efficiency index $I = p^{1/d}$, where $p$ is the order of convergence and $d$ the number of functional evaluations per iteration, it is clear that the index of all (both proposed and known) fourth-order methods coincides, as does that of all fifth-order schemes. It is also easy to compare them for any value of $n$.

In Table 1, the computational efficiency indices are shown, together with the information needed to compute them: the number of functional evaluations of Jacobian matrices (FEM, $n^2$ evaluations each) and of nonlinear vectorial functions (FEF, $n$ evaluations each), the number of linear systems with different Jacobian matrices (NS1, $\frac{1}{3}n^3 + n^2 - \frac{1}{3}n$ products-quotients each), the number of linear systems with the same Jacobian matrix (NS2, $n^2$ products-quotients each), and the number of matrix-vector products (MxV, $n^2$ products each).