#### Abstract

Solving systems of nonlinear equations plays a major role in engineering problems. We present a new family of optimal fourth-order Jarratt-type methods for solving nonlinear equations and extend these methods to solve systems of nonlinear equations. Convergence analysis is given for both cases to show that the order of the new methods is four. Computational cost, numerical tests, and basins of attraction are presented, which illustrate that the new methods are better alternatives to previous methods. We also give an application of the proposed methods to the well-known Burgers' equation.

#### 1. Introduction

Let us consider the problem of constructing an optimal iterative method to find a simple zero $\alpha$ of a nonlinear equation $f(x)=0$, as well as its extension for solving a system of nonlinear equations $F(\mathbf{x})=\mathbf{0}$. There is no ambiguity that the quadratically convergent Newton method (NM), based on two function evaluations per step, is one of the best root-finding methods for approximating the solution of a nonlinear equation (1); it is given as $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}. \tag{2}$$ The natural extension of Newton's method to systems of nonlinear equations is given [1] as $$\mathbf{x}^{(n+1)}=\mathbf{x}^{(n)}-\left[F'\!\left(\mathbf{x}^{(n)}\right)\right]^{-1}F\!\left(\mathbf{x}^{(n)}\right). \tag{3}$$ A large number of variants of the classical Newton method, aimed at higher-order multistep schemes with better efficiency, have appeared using numerous techniques [2–4]. However, research has remained focused on obtaining optimal and computationally efficient methods: the main objective was not only to boost the order and speed of convergence but also to reduce the computational cost. This objective was pursued by defining two-step methods, although only a few of them achieved lower computational cost. Weerakoon and Fernando [5] proposed an accelerated, cubically convergent two-step variant of Newton's method, $$x_{n+1}=x_n-\frac{2f(x_n)}{f'(x_n)+f'(y_n)}, \qquad y_n=x_n-\frac{f(x_n)}{f'(x_n)}. \tag{4}$$ Another cubically convergent method without memory was suggested by Frontini and Sormani [2]: $$x_{n+1}=x_n-\frac{f(x_n)}{f'\!\left(x_n-\dfrac{f(x_n)}{2f'(x_n)}\right)}. \tag{5}$$ However, the above methods are not optimal. Jarratt [6] constructed an optimal fourth-order method given by $$y_n=x_n-\frac{2}{3}\frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1}=x_n-\frac{1}{2}\left[\frac{3f'(y_n)+f'(x_n)}{3f'(y_n)-f'(x_n)}\right]\frac{f(x_n)}{f'(x_n)}, \tag{6}$$ and its extension to systems of nonlinear equations is given in [7]. Several researchers have also focused their attention on Jarratt-type iterative schemes for solving nonlinear equations [8–11]. Khattri and Abbasbandy [10] used a parameter approach to develop an optimal fourth-order Jarratt-type iterative scheme requiring one evaluation of the function and two evaluations of the first derivative; a special case of their scheme, referred to below as method (8), is used for comparison. Soleymani et al. [11] replaced the parameter approach with a weight-function approach to construct a family of optimal Jarratt-type two-step methods.
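The classical scalar schemes discussed above can be sketched as follows. This is a minimal illustration of Newton's iteration (2) and Jarratt's fourth-order scheme (6); function names, tolerances, and the test equation are illustrative, not taken from the paper.

```python
def newton(f, fp, x0, tol=1e-12, max_iter=100):
    """Quadratically convergent Newton iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def jarratt(f, fp, x0, tol=1e-12, max_iter=100):
    """Jarratt's optimal two-step fourth-order method:
       y_n     = x_n - (2/3) f(x_n)/f'(x_n)
       x_{n+1} = x_n - (1/2) [3f'(y_n)+f'(x_n)]/[3f'(y_n)-f'(x_n)] * f(x_n)/f'(x_n)
    """
    x = x0
    for _ in range(max_iter):
        fx, fpx = f(x), fp(x)
        y = x - (2.0 / 3.0) * fx / fpx
        fpy = fp(y)
        x_new = x - 0.5 * ((3.0 * fpy + fpx) / (3.0 * fpy - fpx)) * fx / fpx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the positive root of x^2 - 2 = 0 (i.e., sqrt(2)).
root = jarratt(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Note that Jarratt's scheme uses one function and two derivative evaluations per step, which is what makes order four "optimal" in the Kung-Traub sense.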
A special case of their scheme, referred to below as method (10), is used in the comparisons. Babajee et al. [12] extended (10) to the multivariate case, method (11), in which $I$ denotes the identity matrix. Chun et al. [8] used Halley's method, with the second derivative approximated via the weight-function approach in the second step of their scheme, to obtain an optimal fourth-order Jarratt-type variant; the classical Jarratt family of fourth-order methods is obtained as a special case. A particular member of their scheme, method (13), is used in the comparisons below. In this contribution, we present and analyze an optimal family of fourth-order convergent iterative schemes using a weight function in the second step, and we then extend it to the multivariate case. The rest of the paper is organized as follows. Section 2 comprises the construction and convergence analysis of the new family of methods for single nonlinear equations. Section 3 consists of the extension of our method to systems of nonlinear equations. Section 4 includes numerical tests. Sections 5 and 6 provide the computational cost and the basins of attraction, respectively, for comparison of the new methods with existing ones in this domain. In Section 7 an application of the proposed method to the well-known Burgers' equation is presented, and concluding remarks are given in Section 8.

#### 2. New Family of Jarratt-Type Methods and Its Convergence Analysis

In this section, we propose a new optimal fourth-order Jarratt-type scheme for computing the zeros of a univariate nonlinear function. We give a two-step scheme in which the first step is similar to Jarratt's scheme. In the second step, we use the weight-function approach, with two real-valued weight functions chosen such that the new scheme (14) achieves optimal fourth-order convergence, as stated in Theorem 1.

Theorem 1. *Let $\alpha$ be a simple zero of a sufficiently differentiable function $f$ in an open interval containing $\alpha$. Then, the new without-memory scheme (14) has optimal convergence of order four under the conditions (15) on the weight functions, and it satisfies the error equation (16), where $e_n=x_n-\alpha$ and $c_k=\frac{f^{(k)}(\alpha)}{k!\,f'(\alpha)}$, $k=2,3,\ldots$.*

*Proof.* Let $e_n=x_n-\alpha$ be the error at the $n$th computing step. By using Taylor's expansion of $f(x_n)$ about the root $\alpha$, we get $$f(x_n)=f'(\alpha)\left[e_n+c_2e_n^2+c_3e_n^3+c_4e_n^4+O(e_n^5)\right], \tag{17}$$ where $c_k=\frac{f^{(k)}(\alpha)}{k!\,f'(\alpha)}$, $k=2,3,\ldots$. The first derivative appearing in the first step of our scheme can be expanded as $$f'(x_n)=f'(\alpha)\left[1+2c_2e_n+3c_3e_n^2+4c_4e_n^3+O(e_n^4)\right]. \tag{18}$$ Using (17) and (18) in the first step of (14) gives the Taylor expansions of $y_n$ and $f'(y_n)$. In a similar manner, expanding the second step of (14) by means of (17), (18), and (20), substituting the resulting series into (14), and applying the conditions (15) on the weight functions, the lower-order error terms cancel and we obtain the error equation (16), whose leading term is of order $e_n^4$. Hence, it can be seen that the new scheme has optimal fourth-order convergence.
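Fourth-order convergence of this kind can be checked numerically through the computational order of convergence (COC), $\rho \approx \ln(e_{n+1}/e_n)/\ln(e_n/e_{n-1})$. Since the weight functions of (14) are given in the equations above, the sketch below uses the classical Jarratt scheme (6) as a stand-in; the test function and its root are illustrative.

```python
import math

def jarratt_step(f, fp, x):
    """One step of the classical fourth-order Jarratt scheme (6)."""
    fx, fpx = f(x), fp(x)
    y = x - (2.0 / 3.0) * fx / fpx
    fpy = fp(y)
    return x - 0.5 * ((3.0 * fpy + fpx) / (3.0 * fpy - fpx)) * fx / fpx

f = lambda x: x**3 + 4.0 * x**2 - 10.0       # a common test function
fp = lambda x: 3.0 * x**2 + 8.0 * x
alpha = 1.3652300134140968                   # its simple real root

# Three successive iterates from x0 = 1 give two error ratios.
xs = [1.0]
for _ in range(2):
    xs.append(jarratt_step(f, fp, xs[-1]))
errs = [abs(x - alpha) for x in xs]

# COC should approach 4 as the iteration enters the asymptotic regime.
coc = math.log(errs[2] / errs[1]) / math.log(errs[1] / errs[0])
```

With only two ratios the COC is a rough estimate; it settles near 4 only once the error is small enough for the leading term of the error equation to dominate.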

#### 3. Extension of the New Family for Multivariate Case and Its Analysis

In this section, we extend our new family of optimal Jarratt-type schemes to solve systems of nonlinear equations. We first obtain a special case of the new family by choosing weight functions (26) satisfying the conditions of Theorem 1; this yields a new Jarratt-type fourth-order method (27). Now, let $F : D \subseteq \mathbb{R}^m \to \mathbb{R}^m$ be sufficiently Fréchet differentiable in a convex set $D$, where $D$ is an open convex neighborhood of the root $\mathbf{x}^{*}$, $\mathbf{x}^{(n)}$ is the $n$th approximation to $\mathbf{x}^{*}$, and $\mathbf{e}_n=\mathbf{x}^{(n)}-\mathbf{x}^{*}$. As described in [13], the $q$th Fréchet derivative of $F$ at $\mathbf{x}$ can be written as a $q$-linear function, and the second derivatives are the Hessian matrices of the components of $F$. Therefore, the Taylor series of the function $F$ of $m$ variables can be written about $\mathbf{x}^{*}$, where the Jacobian $F'$ is continuous and nonsingular and $\mathbf{x}^{(n)}$ is close to $\mathbf{x}^{*}$. We then extend our scheme for solving a system of nonlinear equations, obtaining method (31), in which $I$ is the identity matrix. By using the above Taylor expansion we can prove the following theorem.
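As an illustration of how such a two-step scheme looks for systems, the sketch below implements the classical multivariate Jarratt extension (7) as a stand-in for (31), whose weight functions are given in the equations above. Linear systems are solved instead of forming matrix inverses explicitly; the test system is illustrative.

```python
import numpy as np

def jarratt_system(F, J, x0, tol=1e-12, max_iter=50):
    """Multivariate Jarratt scheme (7):
       y = x - (2/3) F'(x)^{-1} F(x)
       x_new = x - (1/2) [3F'(y) - F'(x)]^{-1} [3F'(y) + F'(x)] F'(x)^{-1} F(x)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx, Jx = F(x), J(x)
        u = np.linalg.solve(Jx, Fx)                  # u = F'(x)^{-1} F(x)
        y = x - (2.0 / 3.0) * u
        Jy = J(y)
        x_new = x - 0.5 * np.linalg.solve(3.0 * Jy - Jx, (3.0 * Jy + Jx) @ u)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Test system: x^2 + y^2 = 1, x = y, with root (1/sqrt(2), 1/sqrt(2)).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
root = jarratt_system(F, J, [1.0, 0.5])
```

Replacing each explicit inverse by a linear solve is what the cost analysis in Section 5 counts as an LU factorization plus triangular solves.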

Theorem 2. *Let $D \subseteq \mathbb{R}^m$ be an open convex set containing the root $\mathbf{x}^{*}$ of $F(\mathbf{x})=\mathbf{0}$, and let $F$ be four-times Fréchet differentiable in $D$ such that the Jacobian matrix $F'(\mathbf{x})$ is continuous and nonsingular in $D$. Then the new method (31) has convergence of order four.*

*Proof.* Let $\mathbf{x}^{*}$ be the solution of the nonlinear system $F(\mathbf{x})=\mathbf{0}$, and let $\mathbf{x}^{(0)}$ be an initial guess close to $\mathbf{x}^{*}$. By Taylor's expansion of $F$ about $\mathbf{x}^{*}$ we obtain expansion (32), from which (33) and (34) follow, and the Jacobian matrix has the corresponding Taylor expansion (35), which gives (36). Using (33) and (36) in the second step of (31), the error term is of order $\|\mathbf{e}_n\|^4$, which shows that the proposed method (31) is fourth-order convergent.

#### 4. Numerical Results

We now check the effectiveness of our new optimal fourth-order family of methods (27) (MSNF-1) by comparing it with Newton's method (2) (NM), Khattri's method (8) (KM), Soleymani's method (10) (SM), and Chun's method (13) (CM). The algorithms have been implemented in Maple and tested on the examples given in Table 1. The approximate solutions of the single equations are calculated in multiprecision arithmetic, with the iteration stopped once the prescribed tolerance on $|x_{n+1}-x_n|$ is met. Tables 2, 3, 4, 5, and 6 report, for each iterative method, the number of iterations $n$, the absolute functional values, and the absolute differences between the approximated and exact roots. We also compare our proposed method (31) (MSNF-2) for solving systems of nonlinear equations with Newton's method (3) (NM), Babajee's method (11) (BM), and Khattri's method (8) (KM); results for systems are likewise computed in multiprecision arithmetic, with a tolerance on $\|\mathbf{x}^{(n+1)}-\mathbf{x}^{(n)}\|$ as the stopping criterion. Table 7 lists the test functions for systems of nonlinear equations along with their exact zeros. Table 8 shows the absolute differences between two consecutive approximations of the root and the absolute functional values satisfying the stopping criterion for each of the methods.

#### 5. Computational Cost

In this section, we compare the computational and arithmetic cost of executing the extension of Khattri's method (8) (KM) to systems of nonlinear equations, Babajee's method (11) (BM), the extension of Chun's method (13) (CM) to systems, and our method (31) (MSNF-2). We account for the cost of evaluating the function $F$ and its Jacobian $F'$, and we count the number of scalar products, matrix products, matrix additions and subtractions, vector additions and subtractions, vector multiplications, and LU factorizations of the first derivative required by each method. The LU decomposition of the first derivative costs approximately $\frac{n^3}{3}$ multiplications, after which solving the resulting triangular systems costs $O(n^2)$ operations when the right-hand side is a vector and $O(n^3)$ when the right-hand side is an $n \times n$ matrix, since each column requires its own pair of triangular solves. Table 9 summarizes the total arithmetic and computational cost of executing Khattri's method (8) (KM), Babajee's method (11) (BM), Chun's method (13) (CM), and the proposed method (31) (MSNF-2).
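The cost argument, factor once and reuse the factors for every right-hand side, can be sketched as follows. This is a minimal Doolittle LU factorization without pivoting, for illustration only; a production solver would pivot.

```python
import numpy as np

def lu_factor(A):
    """Doolittle LU factorization without pivoting (illustration only).
    Returns L (unit diagonal, strictly lower part) and U packed in one matrix."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 1):
        A[k + 1:, k] /= A[k, k]                        # multipliers -> L part
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return A

def lu_solve(LU, b):
    """Forward/back substitution reusing the packed factors; b may be a matrix."""
    n = LU.shape[0]
    y = b.astype(float).copy()
    for i in range(1, n):                              # forward: L y = b
        y[i] -= LU[i, :i] @ y[:i]
    for i in range(n - 1, -1, -1):                     # backward: U x = y
        y[i] = (y[i] - LU[i, i + 1:] @ y[i + 1:]) / LU[i, i]
    return y

A = np.array([[4.0, 1.0], [2.0, 3.0]])
LU = lu_factor(A)                                      # O(n^3/3): paid once
x_vec = lu_solve(LU, np.array([1.0, 2.0]))             # vector RHS: O(n^2)
X_mat = lu_solve(LU, np.eye(2))                        # matrix RHS: O(n^3)
```

The point mirrored from the text: the $\frac{n^3}{3}$ factorization is paid once per Jacobian, while each additional right-hand side only costs two triangular solves.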

#### 6. Attraction Basins

Let $z_1, z_2, \ldots$ be the roots of a complex polynomial $p(z)$. We use two different techniques to generate basins of attraction in MATLAB. In the first technique, we take a square box in the complex plane, and every initial guess $z_0$ is assigned a specific color according to the root to which the method converges, with dark blue assigned when the method diverges. The iteration stops once the chosen tolerance on consecutive iterates is met or the maximum number of iterations is reached; "jet" is chosen as the colormap here. For the second technique, the same region is taken, but each initial guess is assigned a color depending upon the number of iterations the method needs to converge to any of the roots of the given polynomial; the stopping criterion is the same as above and the colormap is "hot". The method is considered divergent for an initial guess if it does not converge within the maximum number of iterations, and this case is displayed in black. We take three test polynomials to obtain the basins of attraction: $p_1$ and $p_2$ with three roots each, and $p_3$ with five roots; the colors in the figures correspond to these roots.
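The first technique described above can be sketched as follows, using the Newton iteration (2) on $p(z)=z^3-1$ over a square grid in the complex plane; the region, grid resolution, polynomial, and iteration budget are illustrative. Each grid point is labeled by the index of the root it converges to (or $-1$ for divergence); a colormap such as "jet" can then be applied, for example with matplotlib's `imshow`.

```python
import numpy as np

def basin_indices(grid, roots, tol=1e-6, max_iter=25):
    """Label each grid point by the root Newton's method converges to (-1: none)."""
    z = grid.astype(complex).copy()
    for _ in range(max_iter):
        z -= (z**3 - 1.0) / (3.0 * z**2)          # Newton step for z^3 - 1
    dist = np.abs(z[..., None] - roots)           # distance to each root
    idx = np.argmin(dist, axis=-1)
    converged = np.min(dist, axis=-1) <= tol      # NaN/Inf points count as divergent
    idx[~converged] = -1
    return idx

side = np.linspace(-2.0, 2.0, 200)
re, im = np.meshgrid(side, side)
roots = np.array([1.0, np.exp(2j * np.pi / 3), np.exp(-2j * np.pi / 3)])
labels = basin_indices(re + 1j * im, roots)
```

For the second technique, one would instead record the iteration count at which each point first falls within the tolerance of some root and color by that count.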

We compare the results of our newly constructed method (27) with the existing methods (2), (8), (10), and (13) given in Section 1. Figures 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, and 15 show the dynamics of the methods (2), (8), (10), (13), and (27) for the polynomials $p_1$, $p_2$, and $p_3$. Both types of attraction basins are given in every figure. One can easily see that a darker region indicates that the method requires fewer iterations. Color maps of both types accompany each figure, showing the root to which an initial guess converges and the number of iterations in which the convergence occurs.


#### 7. Application

The well-known Burgers' equation, named after Johannes Martinus Burgers (1895–1981), is a classical partial differential equation. It acts as a governing equation in gas dynamics and in traffic flow models, for example, in modeling shock waves around a jet, aerodynamic heating on an atmospheric reentry vehicle, and the flow of gaseous fuel within a jet engine. The viscous Burgers' equation is given as $$u_t + u u_x = \nu u_{xx}. \tag{38}$$ We use the finite-difference method to solve (38), replacing the derivatives with backward and central differences. The time derivative $u_t$ is approximated by the backward difference $\frac{u_i^j - u_i^{j-1}}{k}$, where $k$ is the step size in $t$, while $u_x$ and $u_{xx}$ are approximated by the central differences $\frac{u_{i+1}^j - u_{i-1}^j}{2h}$ and $\frac{u_{i+1}^j - 2u_i^j + u_{i-1}^j}{h^2}$, where $h$ is the step size in $x$. After substituting these approximations for the partial derivatives in (38), we create a mesh over the chosen step sizes. The solution of (38) then reduces to a tridiagonal system of nonlinear equations at each time level, which we solve using Maple 16 on a 64-bit machine. Table 10 shows the error between two consecutive iterates after the fourth iteration at each time level, with the chosen starting vector. We compare the results of the proposed method (MSNF-2) (31) with Newton's method (3) (NM), Khattri's method (8) (KM), and Babajee's method (11) (BM); the results illustrate that our method is a better alternative to the previous methods for solving systems of nonlinear equations.
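The discretization described above can be sketched as follows: backward difference in time and central differences in space yield a nonlinear tridiagonal system at each time level, solved here by Newton iterations. The viscosity, step sizes, boundary conditions, and initial data below are illustrative assumptions, not the paper's values.

```python
import numpy as np

nu, h, k = 0.1, 0.1, 0.01                      # viscosity and step sizes (assumed)
x = np.arange(0.0, 1.0 + h / 2, h)
u = np.sin(np.pi * x)                          # illustrative initial condition
n = len(x) - 2                                 # interior unknowns; u(0) = u(1) = 0

def residual(v, v_old):
    """F(v) = 0 at the interior nodes of the implicit scheme for (38)."""
    w = np.concatenate(([0.0], v, [0.0]))      # attach boundary values
    return ((w[1:-1] - v_old) / k
            + w[1:-1] * (w[2:] - w[:-2]) / (2 * h)
            - nu * (w[2:] - 2 * w[1:-1] + w[:-2]) / h**2)

def jacobian(v):
    """Tridiagonal Jacobian of the residual with respect to the interior values."""
    w = np.concatenate(([0.0], v, [0.0]))
    J = np.zeros((n, n))
    for i in range(n):
        J[i, i] = 1.0 / k + (w[i + 2] - w[i]) / (2 * h) + 2 * nu / h**2
        if i > 0:
            J[i, i - 1] = -w[i + 1] / (2 * h) - nu / h**2
        if i < n - 1:
            J[i, i + 1] = w[i + 1] / (2 * h) - nu / h**2
    return J

for _ in range(5):                             # march five time levels
    v_old = u[1:-1].copy()
    v = v_old.copy()                           # previous level as initial guess
    for _ in range(4):                         # Newton iterations per level
        v -= np.linalg.solve(jacobian(v), residual(v, v_old))
    u[1:-1] = v
```

Plain Newton is used for the inner solve here; the paper's point is that a fourth-order scheme such as (31) reaches the same per-level tolerance in fewer inner iterations.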

#### 8. Conclusions

In this paper, we presented a new family of Jarratt-type fourth-order methods for solving nonlinear equations, together with its extension to systems of nonlinear equations. We conclude from the numerical results that the proposed methods, for single equations as well as for systems, perform better than the previous methods in this domain at lower or comparable computational cost. The dynamical analysis of the iterative methods in Figures 1–15 shows that our method (27) provides highly accurate results in fewer iterations than methods (2), (8), (10), and (13).

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgment

The authors are grateful to the referees for their valuable comments that helped to improve the final version of the paper.