Abstract
In this paper, we present and illustrate a frozen Jacobian multistep iterative method to solve systems of nonlinear equations associated with initial value problems (IVPs) and boundary value problems (BVPs). We use Jacobi-Gauss-Lobatto collocation (J-GL-C) methods to discretize the IVPs and BVPs. Frozen Jacobian multistep iterative methods are computationally very efficient: they require only one inversion of the Jacobian, in the form of an LU-factorization, and the LU factors can then be reused in the multistep part to solve the remaining linear systems. The convergence order of the proposed iterative method grows linearly with the number of steps: the base method has convergence order nine, and each additional step raises the order by five. The validity, accuracy, and efficiency of our proposed frozen Jacobian multistep iterative method are illustrated by solving fifteen IVPs and BVPs. We observe that, in all but one of the test problems in this paper, a single application of the proposed method is enough to obtain a highly accurate numerical solution. In addition, we present a comprehensive comparison of J-GL-C methods on a collection of test problems.
1. Introduction
Iterative methods for solving systems of nonlinear equations associated with IVPs and BVPs are important because, in general, it is hard to find a closed-form solution. Nonlinear IVPs and BVPs are generally solved in two steps. In the first step, we discretize the nonlinear problem to obtain a system of nonlinear equations. In the second step, we use an iterative method to solve that system of nonlinear equations.
Pseudospectral collocation methods offer excellent accuracy, and we use them for the discretization of IVPs and BVPs. These methods convert the targeted partial or ordinary differential equations into a system of algebraic equations. Recently, Doha et al. [1] used a J-GL-C method for the discretization of a nonlinear Schrödinger equation to obtain a system of ordinary differential equations and solved it by using an implicit Runge-Kutta method. Highly accurate results for four different kinds of nonlinear Schrödinger equations were obtained. In [2], nonlinear reaction-diffusion equations were solved by using a J-GL-C method. In [3], a J-GL-C method was also used to solve nonlinear complex generalized Zakharov systems of equations. Many authors (see, e.g., [4–7]) have used pseudospectral collocation methods to solve nonlinear IVPs and BVPs. J-GL-C methods are attractive because they depend on two parameters $\alpha$ and $\beta$, and by changing the values of these parameters we can obtain different pseudospectral collocation methods, for example, the Legendre ($\alpha = \beta = 0$), Chebyshev ($\alpha = \beta = -1/2$), and Gegenbauer ($\alpha = \beta$) pseudospectral collocation methods. J-GL-C methods are based on the Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$. The Jacobi polynomials are the eigenfunctions of the singular Sturm-Liouville problem [8]
$$(1-x^{2})\,\frac{d^{2}}{dx^{2}}P_n^{(\alpha,\beta)}(x)+\big[\beta-\alpha-(\alpha+\beta+2)x\big]\frac{d}{dx}P_n^{(\alpha,\beta)}(x)+n(n+\alpha+\beta+1)\,P_n^{(\alpha,\beta)}(x)=0$$
and can be generated by using the following recurrence relation:
$$2n(n+\alpha+\beta)(2n+\alpha+\beta-2)\,P_n^{(\alpha,\beta)}(x)=(2n+\alpha+\beta-1)\big[(2n+\alpha+\beta)(2n+\alpha+\beta-2)\,x+\alpha^{2}-\beta^{2}\big]P_{n-1}^{(\alpha,\beta)}(x)-2(n+\alpha-1)(n+\beta-1)(2n+\alpha+\beta)\,P_{n-2}^{(\alpha,\beta)}(x),\quad n\geq 2,$$
where $P_0^{(\alpha,\beta)}(x)=1$ and $P_1^{(\alpha,\beta)}(x)=\tfrac{1}{2}(\alpha+\beta+2)x+\tfrac{1}{2}(\alpha-\beta)$. The $q$th derivative of the $n$th-degree Jacobi polynomial is given by
$$\frac{d^{q}}{dx^{q}}P_n^{(\alpha,\beta)}(x)=\frac{\Gamma(n+q+\alpha+\beta+1)}{2^{q}\,\Gamma(n+\alpha+\beta+1)}\,P_{n-q}^{(\alpha+q,\beta+q)}(x).$$
It should be noticed that the Jacobi polynomials are orthogonal over the domain $[-1,1]$ with respect to the weight function $w^{(\alpha,\beta)}(x)=(1-x)^{\alpha}(1+x)^{\beta}$. We are interested in approximating differential operators by using J-GL-C methods. The easiest way to work with differential operators of different orders is to construct the differentiation matrix; for details on the construction of the numerical differentiation matrix based on Jacobi polynomials over the domain $[-1,1]$, see [9]. Suppose $\mathbf{D}$ is the Jacobi-Gauss-Lobatto differentiation matrix for the first-order derivative operator over the domain $[-1,1]$. Then, a derivative of order $q$ can be approximated over the interval $[a,b]$ as
$$\frac{d^{q}}{dx^{q}}\approx\left(\frac{2}{b-a}\right)^{q}\mathbf{D}^{q}.$$
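To make the recurrence concrete, the following short Python sketch (our own illustration, not part of the original paper) evaluates $P_n^{(\alpha,\beta)}(x)$ with the three-term recurrence above and checks the result against SciPy's reference implementation; the function name jacobi_poly is ours.

```python
import numpy as np
from scipy.special import eval_jacobi  # reference implementation used only for comparison

def jacobi_poly(n, alpha, beta, x):
    """Evaluate P_n^(alpha,beta)(x) via the three-term recurrence relation."""
    p_prev = np.ones_like(x, dtype=float)                          # P_0 = 1
    if n == 0:
        return p_prev
    p_curr = 0.5 * (alpha + beta + 2) * x + 0.5 * (alpha - beta)   # P_1
    for k in range(2, n + 1):
        a = 2 * k * (k + alpha + beta) * (2 * k + alpha + beta - 2)
        b = (2 * k + alpha + beta - 1) * (
            (2 * k + alpha + beta) * (2 * k + alpha + beta - 2) * x + alpha**2 - beta**2
        )
        c = 2 * (k + alpha - 1) * (k + beta - 1) * (2 * k + alpha + beta)
        p_prev, p_curr = p_curr, (b * p_curr - c * p_prev) / a
    return p_curr

x = np.linspace(-1.0, 1.0, 5)
alpha, beta = 0.5, -0.5
print(np.allclose(jacobi_poly(6, alpha, beta, x), eval_jacobi(6, alpha, beta, x)))  # True
```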
After having a setup for the discretization of IVPs and BVPs by using J-GL-C methods, we look for an efficient iterative method for the solution of the associated system of nonlinear equations $F(x)=0$. The first iterative method that comes to mind for solving a system of nonlinear equations is the Newton-Raphson (NR) method [10, 11], which can be written as
$$x_{k+1}=x_{k}-F'(x_{k})^{-1}F(x_{k}),\quad k=0,1,2,\ldots,$$
where $F'(x_{k})$ denotes the Jacobian (first-order Fréchet derivative) of $F$ at $x_{k}$. The multistep frozen Jacobian version of the Newton-Raphson method (MNR) can be written as
$$y_{1}=x_{k}-F'(x_{k})^{-1}F(x_{k}),\qquad y_{j}=y_{j-1}-F'(x_{k})^{-1}F(y_{j-1}),\quad j=2,\ldots,m,\qquad x_{k+1}=y_{m}.$$
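For concreteness, here is a minimal Python sketch of the classical Newton-Raphson iteration written in the notation above; it is our own illustration, and the test system, starting point, and tolerance are not from the paper.

```python
import numpy as np

def newton_raphson(F, J, x0, iters=20, tol=1e-12):
    """Classical Newton-Raphson: solve J(x_k) dx = F(x_k), then update x_{k+1} = x_k - dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        dx = np.linalg.solve(J(x), F(x))   # one Jacobian solve per iteration
        x = x - dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Illustrative 2x2 system with root (1, 2): x^2 + y - 3 = 0, x + y^2 - 5 = 0.
F = lambda v: np.array([v[0]**2 + v[1] - 3.0, v[0] + v[1]**2 - 5.0])
J = lambda v: np.array([[2.0*v[0], 1.0], [1.0, 2.0*v[1]]])
print(newton_raphson(F, J, x0=[1.5, 1.5]))   # approximately [1. 2.]
```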
Frozen Jacobian methods have the advantage of computing the LU decomposition of the Jacobian only once per outer iteration, because this makes the solution of the linear systems having the Jacobian as their coefficient matrix inexpensive. However, the convergence order of MNR as a function of the number of function evaluations is relatively low, and many researchers [12–23] have obtained frozen Jacobian multistep iterative methods with higher convergence orders for the same number of function evaluations; for example, the methods HJ, FTUC, and MSF, presented in [22], [17], and [18], respectively, are frozen Jacobian multistep methods of this type.
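The following Python sketch illustrates this frozen Jacobian idea for the MNR scheme above: the Jacobian is LU-factorized once per outer iteration and the factors are reused by every inner step. It is our own illustrative sketch; the test system, step count, and tolerances are not taken from the paper.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def frozen_newton(F, J, x0, steps=4, iters=10, tol=1e-12):
    """Frozen Jacobian multistep Newton (MNR): one LU factorization per outer
    iteration, reused by all inner (multistep) corrections."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        lu, piv = lu_factor(J(x))            # factorize the Jacobian once
        y = x
        for _ in range(steps):               # reuse the LU factors in each step
            y = y - lu_solve((lu, piv), F(y))
        x = y
        if np.linalg.norm(F(x)) < tol:
            break
    return x

# Same illustrative system as before, with root (1, 2).
F = lambda v: np.array([v[0]**2 + v[1] - 3.0, v[0] + v[1]**2 - 5.0])
J = lambda v: np.array([[2.0*v[0], 1.0], [1.0, 2.0*v[1]]])
print(frozen_newton(F, J, x0=[1.3, 1.7]))    # approximately [1. 2.]
```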
In this paper, we propose another frozen Jacobian multistep iterative method for solving systems of nonlinear equations with a higher convergence order than the previously proposed frozen Jacobian methods for the same number of function evaluations. Another goal of the paper is to compare the different pseudospectral collocation methods generated by J-GL-C methods by choosing different values for the parameters $\alpha$ and $\beta$.
2. New Multistep Iterative Method
To construct the higher order frozen Jacobian multistep iterative method, we first construct a base method with unknown coefficients. We then determine these unknown coefficients so as to obtain the maximum convergence order of the base method. Once we have solved that problem, we write down the multistep part. Notice that the quantities computed in the base method part remain fixed in the multistep part. After finding the values of the unknowns, we establish our proposed new multistep iterative method (NMIM). In NMIM, the convergence order of the base method is nine, and we get an increment of five in the convergence order per additional function evaluation. The LU factors of the frozen Jacobian are used in the multistep part to solve five lower and upper triangular systems of equations. NMIM requires only two Jacobian evaluations and the LU-factorization of only one of them.
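Convergence orders such as these can also be verified numerically. A standard diagnostic, given here as our own illustrative sketch (not taken from the paper), is the computational order of convergence $\rho_k \approx \log(\|e_{k+1}\|/\|e_k\|)/\log(\|e_k\|/\|e_{k-1}\|)$, estimated from consecutive error norms:

```python
import numpy as np

def computational_order(errors):
    """Estimate the order of convergence from consecutive error norms ||e_k||."""
    e = np.asarray(errors, dtype=float)
    return np.log(e[2:] / e[1:-1]) / np.log(e[1:-1] / e[:-2])

# Synthetic error sequence obeying e_{k+1} = 0.5 * e_k^3, i.e., a third-order method
# (a low order is used here only to avoid floating-point underflow in the demo).
errs = [1e-1]
for _ in range(3):
    errs.append(0.5 * errs[-1]**3)
print(computational_order(errs))   # approximately [3. 3.]
```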
3. Convergence Analysis
We use mathematical induction to prove the convergence order of NMIM. We first prove the convergence order of the base method and then, using induction, we prove the convergence order for an arbitrary number of steps. In our analysis, we expand the system of nonlinear equations around its simple root by using Taylor's series and use higher order Fréchet derivatives in the expansion. We use the symbolic algebra of Maple software to deal with the symbolic computations. The Fréchet differentiability of the system of nonlinear equations is an essential assumption of our proof: Gateaux differentiability would not be enough, because it does not ensure that $F$ has a good local linear approximation, which is at the heart of Newton-like methods. A function $F:\mathbb{R}^{n}\to\mathbb{R}^{n}$ is said to be Fréchet differentiable at a point $x$ if there exists a bounded linear operator $A$ such that
$$\lim_{\|h\|\to 0}\frac{\|F(x+h)-F(x)-A\,h\|}{\|h\|}=0.$$
The linear operator $A$ is called the first-order Fréchet derivative of $F$ at $x$, and we denote it by $F'(x)$. The higher order Fréchet derivatives can be computed recursively via
$$F^{(q)}(x)\,(h_{1},\ldots,h_{q-1},h)=\lim_{t\to 0}\frac{F^{(q-1)}(x+t\,h)\,(h_{1},\ldots,h_{q-1})-F^{(q-1)}(x)\,(h_{1},\ldots,h_{q-1})}{t},$$
where $h$ is a vector independent of $x$.
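As a small numerical illustration of this definition (our own sketch, not part of the paper), the Fréchet derivative of a smooth map $F:\mathbb{R}^{n}\to\mathbb{R}^{n}$ acts on a direction $h$ as the Jacobian-vector product $F'(x)h$, which can be checked against the difference quotient $[F(x+th)-F(x)]/t$ for small $t$:

```python
import numpy as np

# Illustrative smooth map F : R^2 -> R^2 and its analytic Jacobian.
F = lambda v: np.array([np.sin(v[0]) + v[1]**2, v[0]*v[1] + np.exp(v[1])])
J = lambda v: np.array([[np.cos(v[0]), 2.0*v[1]],
                        [v[1],          v[0] + np.exp(v[1])]])

x = np.array([0.7, -0.3])
h = np.array([0.4, 1.0])          # direction vector, independent of x
t = 1e-7

frechet_action  = J(x) @ h                    # F'(x) h, the linear approximation
difference_quot = (F(x + t*h) - F(x)) / t     # directional difference quotient
print(np.max(np.abs(frechet_action - difference_quot)))   # small residual; vanishes as t -> 0
```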
Theorem 1. Let $F:D\subseteq\mathbb{R}^{n}\to\mathbb{R}^{n}$ be a sufficiently Fréchet differentiable function on an open convex neighborhood $D$ of $x^{*}$ with $F(x^{*})=0$ and $\det\big(F'(x^{*})\big)\neq 0$, where $F'(x)$ denotes the Fréchet derivative of $F$ at $x$. Let $C_{1}=F'(x^{*})$ and $C_{q}=\tfrac{1}{q!}\,F'(x^{*})^{-1}F^{(q)}(x^{*})$, for $q\geq 2$, where $F^{(q)}$ denotes the $q$th-order Fréchet derivative of $F$. Then, for the base method, with an initial guess $x_{0}$ in the neighborhood of $x^{*}$, the sequence $\{x_{k}\}$ generated by NMIM converges to $x^{*}$ with at least local order of convergence nine and error equation
$$e_{k+1}=L\,e_{k}^{9}+O\big(e_{k}^{10}\big),$$
where $e_{k}=x_{k}-x^{*}$, $e_{k}^{9}=(e_{k},e_{k},\ldots,e_{k})$ (nine times), and $L$ is a 9-linear function: that is, $L\in\mathcal{L}\big(\mathbb{R}^{n}\times\cdots\times\mathbb{R}^{n},\mathbb{R}^{n}\big)$, with $\|L\|<\infty$.
Proof. We define the error at the $k$th step as $e_{k}=x_{k}-x^{*}$. To complete the convergence proof, we perform detailed computations using Maple, with the following results:
Now, we present the proof of convergence of NMIM via induction.
Theorem 2. Starting from order nine for the base method, each additional step of NMIM increases the convergence order by five.
Proof. All the computations are made under the assumptions of Theorem 1. We know from Theorem 1 that the convergence order of NMIM is nine for the base method. By performing the computations in a manner similar to that of Theorem 1, we obtain the error equation for the method with one additional step, which shows that the convergence order then becomes fourteen. This proves the claim for the first step beyond the base method. Now, assume that each added step has increased the convergence order by five up to a given number of steps, and let us prove that the next step again increases the convergence order by five. If the convergence order of NMIM for the current number of steps is $p$, then
$$\|e_{k+1}\|\approx C\,\|e_{k}\|^{p},$$
where $C$ is the asymptotic constant and the symbol $\approx$ means “approximately equal to.” By using (17), we perform the following steps: