#### Abstract

We present two classes of iterative methods, of orders of convergence four and five, respectively, for solving systems of nonlinear equations, using the technique of weight functions in each step. Moreover, we show an extension to higher order, adding only one functional evaluation of the vectorial nonlinear function. We perform numerical tests to compare the proposed methods with other schemes in the literature and test their effectiveness on specific nonlinear problems. In addition, some real basins of attraction are analyzed in order to check the relation between the order of convergence and the set of convergent starting points.

#### 1. Introduction

Finding the solution of systems of nonlinear equations $F(x)=0$, where $F:D\subseteq\mathbb{R}^n\rightarrow\mathbb{R}^n$, is a common, ancient, and important problem in mathematics and engineering (see, e.g., [1]). Previous experimentation on some of these applied problems shows that high-order methods, combined with multiprecision floating-point arithmetic, are very useful because they bring a clear reduction in the number of required iterations (see, e.g., [2, 3]).

In recent years there have been numerous contributions by different authors who have designed iterative methods for solving nonlinear systems, making several modifications to the classical schemes in order to accelerate the convergence and to reduce the number of operations and functional evaluations at each step of the iterative process. The extension of the variants of Newton's method for scalar problems described by Weerakoon and Fernando in [4], by Özban in [5], and by Gerlach in [6] to functions of several variables has been developed in [7–10]. All these processes yield multipoint methods with Newton's method as a predictor. However, a general procedure called pseudocomposition is designed in [11, 12] by using a generic predictor and Gaussian quadrature as the corrector.

On the other hand, one of the most used techniques to generate high-order methods for solving nonlinear equations or systems was introduced by Traub in [13]: the composition of two iterative methods of orders $p_1$ and $p_2$, respectively, to obtain a method of order $p_1 p_2$. With the direct application of this technique, it is usually necessary to evaluate the nonlinear function and its associated Jacobian matrix again in order to increase the order of convergence, so the new method usually ends up being less efficient than the original ones.
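As a quick illustration of this composition result (a sketch on a scalar equation of our own choosing, not one of the paper's test problems), composing Newton's method with itself yields a fourth-order scheme; exact rational arithmetic makes the order visible:

```python
from fractions import Fraction
from math import log

def newton_step(x):
    # One Newton step for f(x) = x**2 - x (simple roots 0 and 1),
    # carried out in exact rational arithmetic.
    return x - (x * x - x) / (2 * x - 1)

def composed_step(x):
    # Traub's composition: two order-2 methods composed in one iteration
    # give a method of order 2 * 2 = 4.
    return newton_step(newton_step(x))

x = Fraction(1, 3)
errors = []
for _ in range(3):
    x = composed_step(x)
    errors.append(abs(x))  # distance to the root 0

# log|e_{k+1}| / log|e_k| approaches the order of convergence (about 4)
orders = [log(errors[k + 1]) / log(errors[k]) for k in range(len(errors) - 1)]
```

Note that each composed iteration evaluates the function and its derivative twice, which is exactly why the directly composed method is usually less efficient than schemes that reuse or estimate these evaluations.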

So, in order to design a more efficient method, it is usual to estimate the new evaluation of the Jacobian matrix in terms of Jacobian matrices previously used. With this in mind, the new method generally has a lower order of convergence than $p_1 p_2$, but the number of functional evaluations per iteration decreases.

Now, let us consider the problem of finding a real zero of a function $F:D\subseteq\mathbb{R}^n\rightarrow\mathbb{R}^n$, that is, a solution of the nonlinear system of equations $F(x)=0$. In general, this solution must be approximated by means of iterative methods. The best known and most widely used method is classical Newton's method,
$$x^{(k+1)} = x^{(k)} - \left[F'\left(x^{(k)}\right)\right]^{-1} F\left(x^{(k)}\right), \quad k = 0, 1, \ldots$$
It is known that this method has second order of convergence and that it uses one functional evaluation of the vectorial function and one of the associated Jacobian matrix at the previous iterate in order to generate a new one.
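A minimal floating-point sketch of the scheme (the 2x2 test system, the Jacobian layout, and the Cramer-rule solve are our own illustrative choices, not from the paper):

```python
def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Classical Newton's method for a 2x2 system F(x, y) = 0:
    each step solves J(x, y) * (dx, dy) = -F(x, y) and updates the iterate."""
    x, y = x0
    for k in range(max_iter):
        f1, f2 = F(x, y)
        a, b, c, d = J(x, y)  # Jacobian entries, row-wise: [[a, b], [c, d]]
        det = a * d - b * c
        # Solve the 2x2 linear system by Cramer's rule
        dx = (-f1 * d + b * f2) / det
        dy = (-a * f2 + c * f1) / det
        x, y = x + dx, y + dy
        if abs(dx) + abs(dy) < tol:
            return (x, y), k + 1
    return (x, y), max_iter

# Illustrative system: x^2 + y^2 = 4, x*y = 1
F = lambda x, y: (x * x + y * y - 4.0, x * y - 1.0)
J = lambda x, y: (2.0 * x, 2.0 * y, y, x)
root, iters = newton_system(F, J, (2.0, 0.5))
```

Each iteration performs exactly one evaluation of the vectorial function and one of the Jacobian, as stated above.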

In the numerical section, we will use some multipoint iterative schemes, developed by different authors, to compare with ours. The first one is the fourth-order method designed by Jarratt in [14]; in the following, we will denote it by JM4. Let us also consider the fourth-order method designed by Sharma et al. in [15], denoted by SHM4. We will also use some fifth-order methods to compare with ours: the scheme obtained by Vassileva [12], which generalizes Jarratt's method and improves its convergence by adding functional evaluations (one functional evaluation of the vectorial function in the second step of the process), denoted by VM5, and the method designed by Sharma and Gupta in [16], which we denote by SHM5.

In this work we design iterative methods with orders of convergence four and five and present their analysis in Section 2. In Section 3 we compare them with other known processes on academic examples, with the aim that the results of this comparison give us an idea of how robust our methods are. The stability is studied from the point of view of real dynamics on a specific 2-dimensional problem. An applied problem, the partial differential equation of molecular interaction, is used to check the conclusions reached in the previous sections.

#### 2. Design and Analysis of Convergence

Let $F:D\subseteq\mathbb{R}^n\rightarrow\mathbb{R}^n$ be a sufficiently differentiable function on a convex set $D$ and let $\alpha\in D$ be a solution of the nonlinear system of equations $F(x)=0$. We propose the following iterative scheme, with , , and being real parameters.

This scheme has been designed as a modified double-Newton method with a frozen evaluation of the vectorial function and of the Jacobian, and a matrix weight function at the second step. We denote this class of methods (whose order of convergence is four under certain conditions) by MS4. As has been stated, the weight function is a matrix function with a matrix variable. Specifically, if $X$ denotes the Banach space of real square matrices, then we can define $H:X\rightarrow X$ such that its Fréchet derivatives satisfy
(a) , where and ;
(b) , where and .

Let us note that $\mathcal{L}(X)$ denotes the space of linear mappings of $X$ on itself.

To prove the local convergence of the proposed iterative method to the solution $\alpha$ of $F(x)=0$, we will use a Taylor series expansion of the involved functions around the solution, assuming that the Jacobian matrix $F'(x)$ is nonsingular in a neighborhood of $\alpha$:
$$F\left(x^{(k)}\right) = F'(\alpha)\left[e^{(k)} + C_2 \left(e^{(k)}\right)^2 + C_3 \left(e^{(k)}\right)^3 + \cdots\right],$$
where $e^{(k)} = x^{(k)} - \alpha$ and $C_q = \frac{1}{q!}\left[F'(\alpha)\right]^{-1}F^{(q)}(\alpha)$, $q \geq 2$.

So, $F'(x^{(k)})$ can be expressed (in a neighborhood of $\alpha$) as
$$F'\left(x^{(k)}\right) = F'(\alpha)\left[I + 2C_2 e^{(k)} + 3C_3 \left(e^{(k)}\right)^2 + \cdots\right],$$
where $I$ is the identity matrix. This technique, including the notation used, has been detailed in [17].

Theorem 1. Let $F:D\subseteq\mathbb{R}^n\rightarrow\mathbb{R}^n$ be a sufficiently differentiable function in an open convex set $D$ and let $\alpha\in D$ be a solution of the system of nonlinear equations $F(x)=0$. One supposes that $F'(x)$ is continuous and nonsingular at $\alpha$. Let be a sufficiently differentiable matrix weight function such that , , and , with being the identity matrix. Let one consider an initial guess $x^{(0)}$ sufficiently close to $\alpha$. Then, the sequence obtained using expression (6) converges to $\alpha$ with order of convergence four if and . Moreover, the error equation is, where , , and .

Proof. From (8) and (9) we obtain, respectively, the corresponding expansions. Then, forcing , the Taylor expansion of the inverse of the Jacobian matrix at the iterate is obtained as follows, where , . Then, where and , . These expressions allow us to obtain, where . Using result (14) and the Taylor series expansion around , we obtain, where , , , , , , and , . Thus, we obtain, where . Moreover, where . Then, where , , , and .
By using the Taylor series expansion of around , we obtain the expression, where , , , , , , , and .
So, we get, where , , . This allows us to obtain the error equation of this iterative scheme, where and , .
To get order of convergence of at least four, we impose , , and : from , we get that gives us second order of convergence. Then, and, as , is obtained.
Moreover, as , by using and , the corresponding expression follows. By replacing it in (25), we get third order of convergence.
Forcing also , and then replacing and in (26), the order of convergence will be four.
Finally, the error equation is the one given in the statement of the theorem, so the proof is completed.

Starting from the general family of iterative methods (6) and under the hypotheses of Theorem 1, we develop two specific fourth-order iterative schemes.
(i) By requiring and , it is directly obtained from Theorem 1 that and . So, we get the following iterative expression, denoted by MS4A.
(ii) The second iterative scheme is denoted by MS4B and is defined by imposing the following conditions: , , and , so that Theorem 1 implies that , , and . The resultant iterative expression is

At this point we wonder if we can obtain a method with order of convergence higher than four by slightly modifying the iterative scheme (6), just adding one functional evaluation of the vectorial function in its second step. We denote this class of methods by MS5. In the following, we prove that its order of convergence is five under certain conditions.

Theorem 2. Let $F:D\subseteq\mathbb{R}^n\rightarrow\mathbb{R}^n$ be a sufficiently differentiable function in an open convex set $D$ and let $\alpha\in D$ be a solution of the system of nonlinear equations $F(x)=0$. One supposes that $F'(x)$ is continuous and nonsingular at $\alpha$. Let be a sufficiently differentiable matrix weight function such that , , and , where is the identity matrix. Let one consider an initial guess $x^{(0)}$ sufficiently close to $\alpha$. Then, the sequence obtained using expression (30) converges to $\alpha$ with order of convergence five if and . Moreover, the error equation is, where , , and .

Proof. Let us remark that the only difference between methods (30) and (6) is that, in the second step, is replaced by . So, we need the Taylor series expansion of , where , , , , and . Then, we obtain the following expressions, where , , , and , .
So, the expression of the error equation in the last step is, where and , . Forcing , , , and , we obtain order of convergence five. Solving this system of equations, we obtain the conditions that guarantee order of convergence five as they appear in the hypotheses of this theorem. Then, the error equation takes the form given in the statement, and the proof is completed.

From the family of iterative methods (30), under the conditions imposed in Theorem 2, we get two particular fifth-order iterative schemes.
(i) By requiring that and , the conditions from Theorem 2 yield , , and . The expression of the resulting iterative method, which we call MS5A, is
(ii) The second scheme will be denoted by MS5B and results from setting the parameters , , and in Theorem 2. Then, , , and . The iterative expression is

It is interesting that, based on Theorem 2, we can generate methods whose order of convergence is higher than five. This process consists of adding a new step to scheme (30), keeping the weight function unaltered. In this way, the iterative expression (39) is obtained, where .

Assuming that the hypotheses of Theorem 2 are satisfied, the error in the second step of (30) can be expressed as . Then, the error equation of (39) takes the corresponding form, and the iterative method has order of convergence five. If , the order of convergence of the iterative scheme (39) is seven.

Based on this result, we state the following theorem which generalizes this procedure.

Theorem 3. Under the same hypotheses of Theorem 2 and adding , the sequence obtained using the following expression converges to $\alpha$ with order of convergence , where , and the order of convergence of the penultimate step is , with error equation .

Proof. In general, the error equation of the last step takes the form above. This shows that the order of convergence of the method will be if .

In order to calculate the different efficiency indices, it is necessary to take into account not only the order of convergence, but also the number of functional evaluations per iteration ( for each Jacobian matrix and for each vectorial function ) and the number of products-quotients per step. This is obtained by observing the iterative expression of the method and by using the fact that the number of products-quotients needed for solving a set of linear systems with the same coefficient matrix is , and that the product between a matrix and a vector implies products-quotients.
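For illustration, the computational efficiency index can be evaluated under an assumed cost model; the counts below (LU factorization at $(n^3-n)/3$ products-quotients, $n^2$ per triangular solve and per matrix-vector product) are a standard assumption of ours, not necessarily the exact figures behind Table 1:

```python
def efficiency_index(p, n, fem, fef, ns1, ns2, mxv):
    """Computational efficiency index CI = p**(1/C) for order p and size n.

    C adds the functional evaluations and products-quotients per iteration:
      fem Jacobian evaluations (n**2 scalar evaluations each),
      fef evaluations of F (n scalar evaluations each),
      ns1 linear systems with a new matrix (assumed (n**3 - n)/3 + n**2 each),
      ns2 extra solves reusing an LU factorisation (assumed n**2 each),
      mxv matrix-vector products (assumed n**2 each)."""
    evaluations = fem * n ** 2 + fef * n
    operations = (ns1 * ((n ** 3 - n) / 3.0 + n ** 2)
                  + ns2 * n ** 2 + mxv * n ** 2)
    return p ** (1.0 / (evaluations + operations))

# e.g. classical Newton: one Jacobian, one F, one new linear system per step
newton_ci = [efficiency_index(2, n, 1, 1, 1, 0, 0) for n in (5, 10, 20)]
```

The indices decrease toward 1 as the size of the system grows, which is why comparisons such as the one in Figure 1 are made across a range of system sizes.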

With respect to the classical efficiency index, it is clear that the one of all (both proposed and known) fourth-order methods is and that, for all fifth-order schemes, it is . It is also easy to check that , for any .

In Table 1, the computational efficiency indices are shown, together with the information needed to compute them: the number of functional evaluations of matrices (FEM, evaluations) or nonlinear functions (FEF, evaluations), the number of linear systems with different Jacobian matrices (NS1, products-quotients), the number of linear systems with the same Jacobian matrix (NS2, products-quotients), and also how many times a matrix-vector product appears (MxV, products).

Figure 1 shows the computational efficiency index, for different sizes of the system, of the fourth-order (MS4A and MS4B) and fifth-order (MS5A and MS5B) proposed methods, together with the known ones (JM4, SHM4, VM5, and SHM5), for comparison purposes. It can be observed that, in both cases, the known methods have better computational efficiency indices for small sizes of the system, but for larger sizes the new methods show better behavior.

#### 3. Numerical Results

In this section we test the developed methods to check their effectiveness compared with other known ones. The following is the list of systems of nonlinear equations used in these numerical tests, conducted in Matlab R2010a by using variable precision arithmetic with 2000 digits of mantissa and or as a stopping criterion. Consider
(i) , ;
(ii) , ;
(iii) , ;
(iv) , .

We have two comparative tables in which the iterative methods are grouped according to their order of convergence: in Table 2 we can observe the behavior of the fourth-order methods, and in Table 3 the schemes of order five can be found. We perform numerical tests for each of the selected systems of nonlinear equations, both for the proposed methods MS4A, MS4B, MS5A, and MS5B and for the consolidated methods JM4, SHM4, VM5, and SHM5.

The columns of the tables correspond, from left to right, to the nonlinear system to be solved with the initial approximation, the iterative method used, the solutions or roots found, the absolute value of the difference between the two last iterations (component by component), the absolute value of each coordinate function evaluated at the last iteration, the number of iterations used in the process, and the approximated computational order of convergence, ACOC, computed as (see [8])
$$\rho_k \approx \frac{\ln\left(\left\|x^{(k+1)}-x^{(k)}\right\| / \left\|x^{(k)}-x^{(k-1)}\right\|\right)}{\ln\left(\left\|x^{(k)}-x^{(k-1)}\right\| / \left\|x^{(k-1)}-x^{(k-2)}\right\|\right)}.$$
Let us remark that the value of ACOC presented in Tables 2 and 3 is the last coordinate of the vector ACOC, when the variation between its values is small.
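The ACOC vector can be computed directly from the stored iterates; a sketch (the max-norm is our assumed choice of norm):

```python
from math import log

def acoc(iterates):
    """Approximated computational order of convergence from a list of vector
    iterates: rho_k = ln(d_{k+1}/d_k) / ln(d_k/d_{k-1}),
    where d_k = ||x^(k+1) - x^(k)|| replaces the unknown error norm."""
    def norm(v):
        return max(abs(c) for c in v)  # max-norm (assumed choice)
    d = [norm([a - b for a, b in zip(iterates[k + 1], iterates[k])])
         for k in range(len(iterates) - 1)]
    return [log(d[k + 1] / d[k]) / log(d[k] / d[k - 1])
            for k in range(1, len(d) - 1)]

# Synthetic quadratically convergent data: differences 1e-1, 1e-2, 1e-4, 1e-8
its = [(0.0,), (0.1,), (0.11,), (0.1101,), (0.11010001,)]
estimates = acoc(its)  # every entry close to 2
```

As in the tables, in practice one reports the last entry of this vector once consecutive values stabilize.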

Observing Tables 2 and 3, we note that the approximated computational order of convergence confirms the theoretical results. The results in terms of accuracy in the approximation of the roots and number of iterations are kept in the range expected for the orders of convergence of the methods, as is apparent from a comparison with established methods. We believe that, in general, the proposed methods are competitive on each of the systems of nonlinear equations used.

With respect to the extension to high-order methods, let us see in the following how increasing the order does not reduce the real region of good starting points, as is usual in iterative methods. We have selected a rectangular region of that contains some of the solutions of system and have used the points of this region as starting points for the fifth-order iterative methods and their partners of seventh and ninth order.

So, in Figure 2, the dynamical planes associated with the known and proposed methods on are shown. These planes have been generated by slightly modifying the routines described in [18]. In them, a mesh of points has been used, 80 has been the maximum number of iterations involved, and has been the tolerance used as a stopping criterion. Then, if a starting point of this mesh converges to one of the solutions of the system (marked with white stars), it is painted in the color assigned to the root to which it has converged. The color used is brighter when the number of iterations is lower. If the maximum number of iterations is reached without convergence to any of the roots, the point is painted in black. In view of Figure 2, it can be concluded that the areas of convergent starting points remain, with slight variations, in spite of the increasing order of convergence; this makes this process especially interesting, as it can increase the speed of convergence to a solution with no need of being closer to it.
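The classification behind such a dynamical plane can be sketched as follows; here Newton's method on a decoupled 2x2 system of our own choosing stands in for the compared schemes, and the colouring routines of [18] are omitted:

```python
def basin_index(F, J, roots, x, y, max_iter=80, tol=1e-6):
    """Iterate Newton's method from (x, y); return the index of the root
    reached, or -1 (painted black) if there is no convergence."""
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        a, b, c, d = J(x, y)  # Jacobian entries, row-wise: [[a, b], [c, d]]
        det = a * d - b * c
        if abs(det) < 1e-14:  # singular Jacobian: give up
            return -1
        x += (-f1 * d + b * f2) / det  # Cramer's rule for the Newton step
        y += (-a * f2 + c * f1) / det
        for idx, (rx, ry) in enumerate(roots):
            if abs(x - rx) + abs(y - ry) < tol:
                return idx
    return -1

def dynamical_plane(F, J, roots, lo=-2.0, hi=2.0, n=40):
    """Root index for every starting point of an n x n mesh on [lo, hi]^2."""
    h = (hi - lo) / (n - 1)
    return [[basin_index(F, J, roots, lo + j * h, lo + i * h)
             for j in range(n)] for i in range(n)]

# Example system: F(x, y) = (x^2 - 1, y^2 - 1), with four roots
F = lambda x, y: (x * x - 1.0, y * y - 1.0)
J = lambda x, y: (2.0 * x, 0.0, 0.0, 2.0 * y)
roots = [(1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), (-1.0, -1.0)]
grid = dynamical_plane(F, J, roots)
```

Each mesh point is then painted in the colour of its root index, with black for index -1, reproducing the structure of the planes in Figure 2.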

##### 3.1. Molecular Interaction Problem

To solve the equation of molecular interaction (see [19]), we need to deal with a boundary value problem for a nonlinear partial differential equation of second order. To estimate its solution numerically, we have used central divided differences in order to transform the problem into a nonlinear system of equations, which is solved by using the proposed methods (of orders four and five) and the extensions of family MS5 up to order nine.

The discretization process yields the nonlinear system of equations, where denotes the estimation of the unknown , and (with and ) are the nodes in both variables, where and .

In this case, we fix , so a mesh of is generated. As the boundary conditions give us the value of the unknown function at the nodes for all and also at for all , we have only nine unknowns, which are renamed accordingly. So, the system can be expressed as, where is the identity matrix and . Now, we will check the performance of the methods by means of some numerical tests, using variable precision arithmetic with 2000 digits of mantissa. In Tables 4 and 5, we show the numerical results obtained for the problem of molecular interaction (45), with different initial estimations. We show, for the first three iterations, the residual of the function at the last iteration, , and the difference between the last iteration and the preceding one .
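To illustrate the shape of such a discretized system (the paper's exact PDE and boundary data are not reproduced here, so $u_{xx}+u_{yy}=u^2$ with zero boundary values is used as an assumed stand-in), the residual of a nine-unknown nonlinear system obtained by central differences can be written as:

```python
def residual(u, n=3, h=0.25, g=lambda t: t * t):
    """Central-difference residual F(U) = 0 on an n x n interior mesh.

    Assumed model problem (a stand-in, not the paper's exact equation):
    u_xx + u_yy = g(u) on the unit square, zero values on the boundary,
    five-point stencil with step h; n = 3 gives nine unknowns, as in the text.
    """
    def U(i, j):
        # interior unknowns stored row-wise; boundary nodes contribute 0
        return u[i * n + j] if 0 <= i < n and 0 <= j < n else 0.0
    return [U(i - 1, j) + U(i + 1, j) + U(i, j - 1) + U(i, j + 1)
            - 4.0 * U(i, j) - h * h * g(U(i, j))
            for i in range(n) for j in range(n)]
```

Any of the proposed schemes can then be applied to this nonlinear system, each iteration solving 9x9 linear systems with the corresponding Jacobian.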

We can observe in Tables 4 and 5 that all the new methods converge to the solution of the problem that appears in Table 6. It can be noticed that the error is lower when the order of the iterative method is higher, even at the first iterations. Indeed, even if the initial estimation is far from the solution, the proposed methods converge with reasonable results. This is especially important in real problems, where good initial estimations are not always known.

#### 4. Conclusions

As far as we know, the weight functions procedure has been used only for designing iterative schemes to solve nonlinear equations. In this paper, by using matrix functions, the mentioned procedure is applied to obtain iterative methods for solving nonlinear systems. Fourth- and fifth-order methods are obtained and a technique for designing iterative schemes of any order is presented.

By using different academic test problems and the discretization of a partial differential equation modeling the molecular interaction problem, we compare our methods with several known ones, such as Jarratt's method and Sharma's method, some of them optimal in the context of nonlinear equations. The numerical tests confirm the theoretical results and show that our methods compare favorably with those used for comparison. In addition, the calculation of the computational efficiency index of the different schemes allows us to ensure that methods MS5A and MS5B are the best for nonlinear systems of large size.

Finally, with respect to the extension to high-order methods, we show that increasing the order does not reduce the real region of good starting points, as is usual in iterative schemes.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The authors thank the anonymous referee for his/her valuable comments and for the suggestions to improve the readability of the paper. This research was supported by the Ministerio de Ciencia y Tecnología MTM2011-28636-C02-02 and FONDOCYT Dominican Republic.