Abstract and Applied Analysis
Volume 2012 (2012), Article ID 364360, 25 pages
Numerical Solutions of Odd-Order Linear and Nonlinear Initial Value Problems Using Shifted Jacobi Spectral Approximations
A. H. Bhrawy and M. A. Alghamdi
1Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2Department of Mathematics, Faculty of Science, Beni-Suef University, Beni-Suef 62511, Egypt
Received 25 May 2012; Accepted 26 June 2012
Academic Editor: D. Anderson
Copyright © 2012 A. H. Bhrawy and M. A. Alghamdi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A shifted Jacobi Galerkin method is introduced as a direct solution technique for solving third- and fifth-order differential equations with constant coefficients subject to initial conditions. The key to the efficiency of these algorithms is the construction of appropriate basis functions, which lead to systems with specially structured matrices that can be efficiently inverted. A quadrature Galerkin method is introduced for the numerical solution of these problems with variable coefficients. A new shifted Jacobi collocation method based on basis functions satisfying the initial conditions is presented for solving nonlinear initial value problems. Through several numerical examples, we evaluate the accuracy and performance of the proposed algorithms. The algorithms are easy to implement and yield very accurate results.
Spectral methods are preferable in the numerical solution of ordinary and partial differential equations due to their high-order accuracy whenever they work well [1–3]. Standard spectral and collocation methods have been extensively investigated for solving second- and fourth-order differential equations. In a sequence of papers [4–11], the authors have constructed efficient spectral-Galerkin algorithms for second-, fourth-, and 2mth-order differential equations subject to various boundary conditions.
The problem of approximating solutions of differential equations by Galerkin approximations involves the projection onto the span of some appropriate set of basis functions. The members of the basis may automatically satisfy the auxiliary conditions imposed on the problem, such as initial, boundary, or more general conditions. Alternatively, these conditions may be imposed as constraints on the expansion coefficients, as in the Lanczos tau-method [12–14].
It is of fundamental importance to know that the choice of the basis functions is responsible for the superior approximation properties of spectral methods when compared with finite difference and finite element methods. Different choices of basis functions lead to different spectral approximations: for instance, trigonometric polynomials for periodic problems; Chebyshev, Legendre, ultraspherical, and Jacobi polynomials for nonperiodic problems; Laguerre polynomials for problems on the half-line; and Hermite polynomials for problems on the whole line.
The main aim of this paper is the design of appropriate shifted Jacobi bases (with parameters α and β) that are well suited for approximating third- and fifth-order differential equations subject to initial conditions. In general, the use of Jacobi polynomials (with α, β > −1, where the degree of the polynomial is a nonnegative integer) has the advantage of obtaining the solutions of differential equations in terms of the Jacobi indexes α and β (see, for instance, [15–19]).
This paper is concerned with the systematic development of spectral basis functions for the efficient solution of some odd-order differential equations. Starting from the Jacobi polynomials, Galerkin approximations to these problems are built. We derive some interesting results, such as useful relationships between the representation of a polynomial function in a given basis and that of its derivative in the same basis, and formulas to compute discrete operator coefficients in closed form. In this paper, we present direct solvers based on the shifted Jacobi Galerkin (SJG) method for solving third- and fifth-order differential equations; the basis functions are constructed to satisfy the given initial conditions, and each of these basis functions is written as a compact combination of shifted Jacobi polynomials.
For third- and fifth-order differential equations with variable coefficients, we introduce the pseudospectral shifted Jacobi Galerkin (P-SJG) method. This method is basically formulated in the shifted Jacobi Galerkin spectral form with general indexes α, β, but with the variable-coefficient terms and the right-hand side treated by the shifted Jacobi collocation method with the same indexes, so that the schemes can be implemented efficiently at the shifted Jacobi-Gauss points.
The last aim of this paper is to propose a suitable way to approximate nonlinear third- and fifth-order differential equations by a convenient spectral collocation method based on shifted Jacobi basis functions (the members of the basis automatically satisfy the auxiliary initial conditions imposed on the problem), such that it can be implemented efficiently at the shifted Jacobi-Gauss points on the interval. We propose a new spectral shifted Jacobi collocation (SJC) method to find the solution. The nonlinear ODE is collocated at these points; for suitable collocation points, we use the nodes of the shifted Jacobi-Gauss interpolation. This generates a system of nonlinear algebraic equations which can be solved using Newton's iterative method. Finally, the accuracy of the proposed methods is demonstrated by test problems. Numerical results are presented in which the usual exponential convergence behaviour of spectral approximations is exhibited.
The remainder of this paper is organized as follows. Sections 2 and 3 are devoted to the theoretical derivation of the SJG and P-SJG methods for third-order differential equations with constant and variable coefficients subject to homogeneous and nonhomogeneous initial conditions. In Section 4, we apply the SJC method based on these basis functions to solve nonlinear third-order differential equations. Section 5 extends the results of Sections 2, 3, and 4 to fifth-order differential equations. In Section 6, we present some numerical results exhibiting the accuracy and efficiency of our numerical algorithms.
2. SJG Method for Third-Order Differential Equations with Constant Coefficients
Let , then we define the weighted space as usual, equipped with the following inner product and norm. The set of Jacobi polynomials forms a complete orthogonal system, and
Next, let , then we define the weighted space in the usual way, with the following inner product and norm,
The set of shifted Jacobi polynomials forms a complete -orthogonal system. Moreover, and due to (2.2), we have
The derivative of a shifted Jacobi polynomial can be written in terms of the shifted Jacobi polynomials themselves as where For the proof, see [20, 21]; for the general definition of a generalized hypergeometric series and its special cases, see [22, pages 41, 103-104].
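As a quick sanity check on the orthogonality relation above, the shifted Jacobi polynomials and their weighted inner products on [0, L] can be evaluated numerically. The sketch below uses SciPy's eval_jacobi and a Gauss-Jacobi rule; the parameter values alpha = 0.5, beta = -0.5 and L = 2 are illustrative choices, not values from the paper.

```python
import numpy as np
from math import gamma, factorial
from scipy.special import eval_jacobi, roots_jacobi

# Illustrative parameters (assumed for this sketch): indexes alpha, beta and interval [0, L]
alpha, beta, L = 0.5, -0.5, 2.0

# Gauss-Jacobi rule on [-1, 1]; exact for polynomials of degree <= 2*30 - 1
t, w = roots_jacobi(30, alpha, beta)
x = L * (t + 1.0) / 2.0                    # mapped nodes on [0, L]
scale = (L / 2.0) ** (alpha + beta + 1.0)  # Jacobian of the map times the weight rescaling

def shifted_jacobi(n, xx):
    """Shifted Jacobi polynomial on [0, L]: P_n^{(alpha,beta)}(2*xx/L - 1)."""
    return eval_jacobi(n, alpha, beta, 2.0 * xx / L - 1.0)

def norm_sq(n):
    """Squared weighted L2 norm of P_n^{(alpha,beta)} on [-1, 1]."""
    return (2.0 ** (alpha + beta + 1) * gamma(n + alpha + 1) * gamma(n + beta + 1)
            / ((2 * n + alpha + beta + 1) * factorial(n) * gamma(n + alpha + beta + 1)))

# off-diagonal inner products should vanish; diagonal ones should match the norm formula
max_err = 0.0
for m in range(5):
    for n in range(5):
        inner = scale * np.sum(w * shifted_jacobi(m, x) * shifted_jacobi(n, x))
        expected = scale * norm_sq(n) if m == n else 0.0
        max_err = max(max_err, abs(inner - expected))
```

Running this confirms numerically that the shifted polynomials are orthogonal with respect to the shifted weight, with the norms scaled by the factor (L/2)^(alpha+beta+1).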
We are interested in using the SJG method to solve the third-order differential equation: subject to where , , and are constants, and is a given source function. Let us first introduce some basic notation that will be used in the upcoming sections. We set Then the shifted Jacobi-Galerkin approximation to (2.9) is to find such that where and is the inner product in the weighted space . The norm in will be denoted by .
We choose compact combinations of shifted Jacobi polynomials as basis functions, aiming to minimize the bandwidth and the condition number of the coefficient matrix corresponding to (2.9). We choose the basis functions of the expansion to be of the form: where , , , and are the unique constants such that , for all . From the initial conditions, and making use of (2.3) and (2.4), we have the following system: Hence , , and can be uniquely determined to give It is clear that the basis functions , , are linearly independent. Therefore, by a dimension argument and for , we have
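The construction of these compact basis combinations can be sketched numerically. The code below solves the small linear system for the three combination coefficients in the shifted Legendre case (alpha = beta = 0 on [0, 1], an illustrative choice) and verifies that each resulting basis function and its first two derivatives vanish at x = 0.

```python
import numpy as np
from math import prod
from scipy.special import eval_jacobi

# Shifted Legendre case (alpha = beta = 0) on [0, 1], chosen purely for illustration.
alpha, beta, L = 0.0, 0.0, 1.0

def dP(n, x, order):
    """order-th derivative of the shifted Jacobi polynomial P_n^{(a,b)}(2x/L - 1).

    Uses d/dt P_n^{(a,b)}(t) = (n+a+b+1)/2 * P_{n-1}^{(a+1,b+1)}(t) repeatedly,
    with a chain-rule factor (2/L) per derivative.
    """
    if n < order:
        return 0.0
    coef = prod((n + alpha + beta + 1 + j) / 2.0 for j in range(order)) * (2.0 / L) ** order
    return coef * eval_jacobi(n - order, alpha + order, beta + order, 2.0 * x / L - 1.0)

def basis(k):
    """Coefficients (a, b, c) so that phi_k = P_k + a*P_{k+1} + b*P_{k+2} + c*P_{k+3}
    satisfies phi_k(0) = phi_k'(0) = phi_k''(0) = 0."""
    M = np.array([[dP(k + 1 + j, 0.0, i) for j in range(3)] for i in range(3)])
    rhs = -np.array([dP(k, 0.0, i) for i in range(3)])
    return np.linalg.solve(M, rhs)

def phi_der(k, x, order):
    """order-th derivative of the compact combination phi_k at x."""
    a, b, c = basis(k)
    return (dP(k, x, order) + a * dP(k + 1, x, order)
            + b * dP(k + 2, x, order) + c * dP(k + 3, x, order))

# each basis function and its first two derivatives should vanish at x = 0
max_ic = max(abs(phi_der(k, 0.0, i)) for k in range(4) for i in range(3))
```

The 3-by-3 system mirrors the one in the text: three free coefficients against three homogeneous initial conditions, which is exactly why the combination involves four consecutive shifted Jacobi polynomials.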
Now, it is clear that the variational formulation of (2.12) is equivalent to Let us denote Then, equation (2.19) is equivalent to the following matrix equation: where the nonzero elements of the matrices , , , and are given explicitly in the following theorem.
Theorem 2.1. If one takes as defined in (2.13) and denotes , , , and , then the nonzero elements , and for , are given as follows: where
Proof. The basis functions are chosen such that for . On the other hand, it is clear that are linearly independent and the dimension of is equal to . The nonzero elements for , can be obtained by direct computations using the properties of shifted Jacobi polynomials. It can be easily proved that the diagonal elements of the matrix A take the form: All other formulae follow similarly by direct computation.
In particular, the special cases for the shifted Chebyshev basis of the first and second kinds may be obtained directly by taking α = β = −1/2 and α = β = 1/2, respectively, and for the shifted Legendre basis by taking α = β = 0. These are given as corollaries to the previous theorem as follows.
Corollary 2.2. If α = β = −1/2 (shifted Chebyshev polynomials of the first kind), then the nonzero elements , , , for , are given as follows:
Corollary 2.3. If α = β = 1/2 (shifted Chebyshev polynomials of the second kind), then the nonzero elements , and for are given as follows:
Corollary 2.4. If α = β = 0 (shifted Legendre polynomials), then the nonzero elements , , , and for , are given as follows:
In the following, we can always modify the right-hand side to take care of the nonhomogeneous initial conditions. Let us consider for instance the one-dimensional third-order differential equation (2.9) subject to the nonhomogeneous initial conditions: We proceed as follows.
Set where The transformation (2.29) turns the nonhomogeneous initial conditions (2.28) into the homogeneous initial conditions: Hence, it suffices to solve the following modified one-dimensional third-order differential equation: subject to the homogeneous initial conditions (2.31), where is given by (2.29), and
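Since the displayed transformation is not reproduced here, one standard choice consistent with the homogeneous conditions (2.31) is the quadratic Taylor shift, sketched below with hypothetical initial data γ0, γ1, γ2:

```latex
% Given nonhomogeneous data u(0) = \gamma_0,\ u'(0) = \gamma_1,\ u''(0) = \gamma_2,
% subtract the quadratic Taylor polynomial of the initial data:
v(x) = u(x) - \Bigl(\gamma_0 + \gamma_1 x + \tfrac{\gamma_2}{2}\,x^2\Bigr),
\qquad v(0) = v'(0) = v''(0) = 0.
```

Substituting u = v + (that quadratic) into (2.9) only modifies the source term, which is why it suffices to solve the homogeneous problem (2.32).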
3. P-SJG Method for Third-Order Differential Equation with Variable Coefficients
In this section, we use the pseudospectral-shifted Jacobi Galerkin method to numerically solve the following third-order differential equation with variable coefficients:
We denote by , , the nodes of the standard Jacobi-Gauss interpolation on the interval . Their corresponding Christoffel numbers are , . The nodes of the shifted Jacobi-Gauss interpolation on the interval are the zeros of , which we denote by , . Clearly , and their corresponding Christoffel numbers are , . Let be the set of polynomials of degree at most . Thanks to the property of the standard Jacobi-Gauss quadrature, it follows that for any ,
We define the discrete inner product and norm as follows: where and are the nodes and the corresponding weights of the shifted Jacobi-Gauss-quadrature formula on the interval , respectively.
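The shifted Jacobi-Gauss rule just described can be sketched with SciPy: the nodes on [-1, 1] are mapped to [0, L] and the Christoffel numbers are rescaled, after which the rule integrates any polynomial of degree at most 2N - 1 exactly against the shifted weight. The parameter values below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.special import roots_jacobi
from scipy.integrate import quad

alpha, beta, L = 1.0, 0.5, 3.0   # illustrative parameters
N = 8

t, w = roots_jacobi(N, alpha, beta)         # Gauss-Jacobi nodes/weights on [-1, 1]
x = L * (t + 1.0) / 2.0                     # shifted Jacobi-Gauss nodes on [0, L]
wh = (L / 2.0) ** (alpha + beta + 1.0) * w  # shifted Christoffel numbers

# any polynomial of degree <= 2N - 1 is integrated exactly against the shifted weight
f = lambda s: s ** (2 * N - 1) - 4.0 * s ** 3 + 1.0

approx = np.sum(wh * f(x))
exact, _ = quad(lambda s: (L - s) ** alpha * s ** beta * f(s), 0.0, L)
```

Here the quadrature value is checked against adaptive integration of the same weighted integrand; for smooth non-polynomial integrands, the rule is instead spectrally accurate.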
Obviously, (see, e.g., formula (2.25) of ) Thus, for any , the norms and coincide.
Associated with this quadrature rule, we denote by the shifted Jacobi-Gauss interpolation operator. The pseudospectral Galerkin method for (3.1) is to find such that where is the discrete inner product of and associated with the shifted Jacobi-Gauss quadrature.
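The interpolation operator at the shifted Jacobi-Gauss nodes underlies the treatment of the variable-coefficient terms; a minimal sketch (shifted Legendre case alpha = beta = 0 on [0, 1], with np.polyfit standing in for a proper barycentric implementation) shows the expected spectral decay of the interpolation error for a smooth function.

```python
import numpy as np
from scipy.special import roots_jacobi

# Smooth test function to interpolate at shifted Gauss nodes (illustrative choice)
f = lambda x: np.exp(x) * np.sin(2.0 * x)

xs = np.linspace(0.0, 1.0, 201)   # evaluation grid for the max error
errs = []
for N in (4, 8, 16):
    t, _ = roots_jacobi(N, 0.0, 0.0)      # Legendre-Gauss nodes on [-1, 1]
    nodes = (t + 1.0) / 2.0               # shifted to (0, 1)
    coef = np.polyfit(nodes, f(nodes), N - 1)   # degree N-1 interpolant through the nodes
    errs.append(np.max(np.abs(np.polyval(coef, xs) - f(xs))))
```

The maximum errors drop by many orders of magnitude as N doubles, the spectral convergence the P-SJG scheme relies on when sampling variable coefficients at these nodes.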
4. SJC Method for Nonlinear Third-Order Differential Equations
In this section, we are interested in solving numerically the nonlinear third-order differential equation: with initial conditions It is well known that one can convert (4.1) into a system of three first-order initial-value problems. Methods for solving systems of first-order differential equations are straightforward generalizations of the methods for a single first-order equation, for example, the classical fourth-order Runge-Kutta method. An alternative spectral approach is to use the shifted Jacobi collocation method to solve (4.1); then, making use of formula (2.7), one can express explicitly the derivatives , in terms of the expansion coefficients . The criterion of the spectral shifted Jacobi collocation method for solving (4.1) approximately is to find such that is satisfied exactly at the collocation points , . In other words, we collocate (4.4) at the shifted Jacobi roots , which immediately yields This constitutes a system of nonlinear algebraic equations in the unknown expansion coefficients , which can be solved by any standard iteration technique, such as Newton's method.
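The collocation procedure just described can be sketched end to end on a hypothetical model problem u''' + u^2 = f on [0, 1] with homogeneous initial conditions, where f is manufactured from an assumed exact solution. For brevity, the trial functions are monomials x^(k+3), each of which satisfies the zero initial conditions, standing in for the compact shifted Jacobi combinations of the paper; the resulting nonlinear algebraic system is solved by a Newton-type iteration (scipy.optimize.fsolve).

```python
import numpy as np
from scipy.special import roots_jacobi
from scipy.optimize import fsolve

# Model problem (assumed for illustration): u''' + u^2 = f on [0, 1],
# u(0) = u'(0) = u''(0) = 0, with exact solution u(x) = x^3 + 0.5*x^5.
u_exact = lambda x: x ** 3 + 0.5 * x ** 5
f = lambda x: 6.0 + 30.0 * x ** 2 + u_exact(x) ** 2   # u''' + u^2 of the exact solution

N = 5
t, _ = roots_jacobi(N, 0.0, 0.0)
nodes = (t + 1.0) / 2.0        # shifted (Legendre-)Gauss collocation nodes on (0, 1)

def residual(c):
    """Collocation residual of u''' + u^2 - f at the Gauss nodes."""
    u = sum(c[k] * nodes ** (k + 3) for k in range(N))
    u3 = sum(c[k] * (k + 3) * (k + 2) * (k + 1) * nodes ** k for k in range(N))
    return u3 + u ** 2 - f(nodes)

c = fsolve(residual, np.zeros(N))   # Newton-type solve of the nonlinear system

xs = np.linspace(0.0, 1.0, 101)
uN = sum(c[k] * xs ** (k + 3) for k in range(N))
max_err = np.max(np.abs(uN - u_exact(xs)))
```

Because the manufactured solution lies in the trial space, the collocation solution recovers it to roughly solver tolerance; for general smooth solutions, one would instead observe spectral decay of the error as N grows.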
5. Fifth-Order Differential Equations
In this section, we consider the fifth-order differential equation of the form: We define The results for fifth-order differential equations will be given without proofs.
5.1. SJG Method for Constant Coefficients
For constant , , , , and , we consider the following shifted Jacobi-Galerkin procedure for (5.1): find such that Now, we choose the basis functions to be of the form: It is not difficult to show that the basis functions are given by Therefore, for , we have
It is clear that (5.3) is equivalent to
Let us denote then equation (5.7) is equivalent to the following matrix equation: where the nonzero elements of the matrices , , , , , and are given explicitly in the following theorem.
Theorem 5.1. If one takes as defined in (5.4) and denotes , , , , and , then the nonzero elements , and for are given as follows: where
Proof. The proof of this theorem is not difficult, and it can be accomplished by following the same procedure used in proving Theorem 2.1.
In the following, we can always modify the right-hand side to take care of the nonhomogeneous initial conditions. Let us consider for instance the one-dimensional fifth-order differential equation (5.1) subject to the nonhomogeneous initial conditions: We proceed as follows.
Set where The transformation (5.13) turns the nonhomogeneous initial conditions (5.12) into the homogeneous initial conditions:
Hence, it suffices to solve the following modified one-dimensional fifth-order equation: subject to the homogeneous initial conditions (5.15), where is given by (5.13), and
5.2. Fifth-Order Equations with Variable Coefficients
Let us consider the fifth-order differential equation (5.1) with variable coefficients , , , , and . The pseudospectral Galerkin method for (5.1) is to find such that where is the discrete inner product of and associated with the shifted Jacobi-Gauss quadrature (for details, see Section 3).
5.3. Nonlinear Fifth-Order Differential Equations
In this section, we are interested in solving numerically the nonlinear fifth-order differential equation: with initial conditions: It is well known that one can convert (5.19) into a system of five first-order initial-value problems. Methods for solving systems of first-order differential equations are straightforward generalizations of the methods for a single first-order equation, for example, the classical fourth-order Runge-Kutta method. An alternative spectral approach is to use the shifted Jacobi collocation method to solve (5.19): Then, making use of formula (2.7), one can express explicitly the derivatives , in terms of the expansion coefficients . The criterion of the spectral shifted Jacobi collocation method for solving (5.19) approximately is to find such that is satisfied exactly at the collocation points , . In other words, we collocate (5.22) at the shifted Jacobi roots , which immediately yields This constitutes a system of nonlinear algebraic equations in the unknown expansion coefficients , which can be solved by any standard iteration technique, such as Newton's method.
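The reduction to a first-order system mentioned above can be illustrated on the assumed test equation u⁽⁵⁾ = u with all initial values equal to 1 (exact solution eˣ), integrated here by an adaptive Runge-Kutta method via scipy.integrate.solve_ivp rather than the classical fixed-step RK4:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Fifth-order IVP u^(5) = u, u(0) = u'(0) = ... = u''''(0) = 1 (exact solution e^x),
# rewritten as the first-order system y0 = u, y1 = u', ..., y4 = u''''.
def rhs(x, y):
    return [y[1], y[2], y[3], y[4], y[0]]

sol = solve_ivp(rhs, (0.0, 1.0), np.ones(5), method="RK45",
                rtol=1e-10, atol=1e-12)
err = abs(sol.y[0, -1] - np.e)   # error of u(1) against e
```

This is the generic time-stepping alternative the text alludes to; the SJC approach trades it for a single global nonlinear solve with spectral accuracy.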
6. Numerical Results
To illustrate the effectiveness of the methods proposed in this paper, several test examples are carried out in this section. Comparisons of the results obtained by the present methods with those obtained by other methods reveal that the present methods are efficient and robust.
Example 6.1. Consider the linear third-order problem (see ): subject to the initial conditions: where f is selected such that the exact solution is
Table 1 lists the maximum pointwise error obtained using the SJG method with various choices of . The numerical results for this problem show that the SJG method converges exponentially.
Example 6.2. Consider the linear third-order problem with variable coefficients: