Abstract

A Jacobi dual-Petrov-Galerkin (JDPG) method is introduced and used for solving fully integrated reformulations of third- and fifth-order ordinary differential equations (ODEs) with constant coefficients. The reformulated equation for the $J$th-order ODE involves $n$-fold indefinite integrals for $n=1,\ldots,J$. An extension of the JDPG to ODEs with polynomial coefficients is treated by means of the Jacobi-Gauss-Lobatto quadrature. Numerical results with comparisons are given to confirm the reliability of the proposed method for some constant- and polynomial-coefficient ODEs.

1. Introduction

A well-known advantage of spectral methods is their high accuracy with relatively few unknowns compared with low-order finite-difference methods [1, 2]. On the other hand, spectral methods typically give rise to full matrices, partially negating the gain in efficiency due to the fewer degrees of freedom. In general, the use of the Jacobi polynomials $P_n^{(\alpha,\beta)}$, with $\alpha,\beta\in(-1,\infty)$ and $n$ the polynomial degree, has the advantage of obtaining solutions of ordinary differential equations (ODEs) in terms of the Jacobi indices (see, for instance, [3–5]). Several particular pairs $(\alpha,\beta)$ have been used for approximate solutions of ODEs (see [6–10]). We avoid developing approximation results for each particular pair of indices and instead carry out a study with general indices. With this motivation, we employ in this paper the family of Jacobi polynomials with general indices.

Third-order differential equations arise in many engineering models; see, for instance, [11–14]. Fifth-order differential equations generally arise in the mathematical modeling of viscoelastic flows and other branches of mathematical physics and engineering sciences; see [15–17]. Existence and uniqueness of solutions of such boundary value problems are discussed, for instance, in [18].

In this paper, the proposed differential equations are integrated $q$ times, where $q$ is the order of the equation, and we make use of the formulae relating the expansion coefficients of the indefinite integrals appearing in these integrated forms to the Jacobi coefficients of the function itself (see Doha [19]). An advantage of this approach is that the general equation in the resulting algebraic system then contains a finite number of terms. This motivates our interest in the integrated forms of these differential equations. The interested reader is referred to Doha and Bhrawy [7, 20].

The main aim of this paper is to propose a suitable way of approximating the integrated forms of third- and fifth-order ODEs with constant coefficients using a spectral method based on the Jacobi polynomials, such that it can be implemented efficiently and at the same time has good convergence properties. It is worth noting that odd-order problems lack the symmetry of even-order ones, so we propose a Jacobi dual-Petrov-Galerkin (JDPG) method. The method leads to systems with specially structured matrices that can be inverted efficiently. We apply the method to the integrated forms of third- and fifth-order ODEs by using compact combinations of the Jacobi polynomials which satisfy the underlying homogeneous boundary conditions. To be more precise, for the JDPG we choose the trial functions to satisfy the underlying boundary conditions of the differential equation, and we choose the test functions to satisfy the dual boundary conditions. The extension of the JDPG to ODEs with polynomial coefficients is obtained by approximating the weighted inner products in the JDPG by the Jacobi-Gauss-Lobatto quadrature. Finally, examples are given to illustrate the efficiency and implementation of the method, and comparisons are made to confirm its reliability.

The remainder of this paper is organized as follows. In Section 2 we give an overview of the Jacobi polynomials and of their properties needed hereafter. Section 3 is devoted to the derivation of the JDPG method for third-order differential equations with constant and with variable polynomial coefficients. Section 4 gives the corresponding results for fifth-order differential equations with constant coefficients and two choices of boundary conditions. In Section 5, we present numerical results exhibiting the accuracy and efficiency of the proposed algorithms. Some concluding remarks are given in the final section.

2. Preliminaries

Let $S_N(I)$ be the space of polynomials of degree at most $N$ on the interval $I=(-1,1)$. We set
\[
W_N=\{u\in S_N:\ u(\pm 1)=u^{(1)}(1)=0\},\qquad
W_N^{*}=\{u\in S_N:\ u(\pm 1)=u^{(1)}(-1)=0\},
\tag{2.1}
\]
and let $P_n^{(\alpha,\beta)}(x)$ $(n=0,1,2,\ldots)$ be the Jacobi polynomials, orthogonal with respect to the weight function $w^{\alpha,\beta}(x)=(1-x)^{\alpha}(1+x)^{\beta}$, where $\alpha,\beta>-1$.

Let $x_{N,j}^{(\alpha,\beta)}$, $0\le j\le N$, be the zeros of $(1-x^{2})\,\partial_x P_N^{(\alpha,\beta)}(x)$, arranged in decreasing order, and denote by $\varpi_{N,j}^{(\alpha,\beta)}$, $0\le j\le N$, the weights of the corresponding Gauss-Lobatto quadrature formula. We define the discrete inner product and norm of the weighted space $L^{2}_{w^{\alpha,\beta}}(I)$ as follows:
\[
(u,v)_{w^{\alpha,\beta},N}=\sum_{k=0}^{N}u\bigl(x_{N,k}^{(\alpha,\beta)}\bigr)\,v\bigl(x_{N,k}^{(\alpha,\beta)}\bigr)\,\varpi_{N,k}^{(\alpha,\beta)},\qquad
\|u\|_{w^{\alpha,\beta},N}=\sqrt{(u,u)_{w^{\alpha,\beta},N}}.
\tag{2.2}
\]
Obviously (see, e.g., formula (2.25) of [21]),
\[
(u,v)_{w^{\alpha,\beta},N}=(u,v)_{w^{\alpha,\beta}}\qquad \forall\, uv\in S_{2N-1}.
\tag{2.3}
\]
Thus, for any $u\in S_N$, the norms $\|u\|_{w^{\alpha,\beta},N}$ and $\|u\|_{w^{\alpha,\beta}}$ coincide.
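For readers who wish to experiment with the discrete inner product (2.2) and the exactness property (2.3), the following minimal Python sketch (ours, not part of the original presentation) builds the Jacobi-Gauss-Lobatto nodes and weights for a general pair $(\alpha,\beta)$ with SciPy. The interior nodes are the zeros of $\partial_x P_N^{(\alpha,\beta)}$, i.e., of $P_{N-1}^{(\alpha+1,\beta+1)}$; the weights are recovered here by imposing interpolatory exactness on $P_0,\ldots,P_N$, which is an implementation choice of this sketch rather than the formula used in [21].

```python
# Sketch (assumptions noted above): Jacobi-Gauss-Lobatto nodes/weights and a check of (2.3).
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi, gammaln

def jacobi_gauss_lobatto(N, alpha, beta):
    # Interior nodes: zeros of d/dx P_N^{(a,b)} = zeros of P_{N-1}^{(a+1,b+1)}.
    interior, _ = roots_jacobi(N - 1, alpha + 1, beta + 1)
    x = np.concatenate(([-1.0], np.sort(interior), [1.0]))
    # Interpolatory weights: sum_j w_j P_i(x_j) = int P_i w^{a,b} dx = h_0 * delta_{i0}.
    V = np.array([eval_jacobi(i, alpha, beta, x) for i in range(N + 1)])
    h0 = np.exp((alpha + beta + 1) * np.log(2.0) + gammaln(alpha + 1)
                + gammaln(beta + 1) - gammaln(alpha + beta + 2))
    rhs = np.zeros(N + 1)
    rhs[0] = h0
    w = np.linalg.solve(V, rhs)
    return x, w

if __name__ == "__main__":
    N, alpha, beta = 12, 0.5, -0.5
    x, w = jacobi_gauss_lobatto(N, alpha, beta)
    # Check (2.3): the rule reproduces the weighted integral of u*v when deg(u*v) <= 2N-1.
    u = lambda t: eval_jacobi(3, alpha, beta, t) * t**2      # degree 5
    v = lambda t: eval_jacobi(N - 6, alpha, beta, t)         # degree N-6
    discrete = np.sum(u(x) * v(x) * w)
    xg, wg = roots_jacobi(N + 5, alpha, beta)                # exact Gauss-Jacobi reference
    exact = np.sum(u(xg) * v(xg) * wg)
    print(abs(discrete - exact))                             # ~1e-14
```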

For any $u\in C(\bar I)$, the Jacobi-Gauss-Lobatto interpolation operator $I_{P_N^{(\alpha,\beta)}}u(x)\in S_N$ is defined by
\[
I_{P_N^{(\alpha,\beta)}}u\bigl(x_{N,k}^{(\alpha,\beta)}\bigr)=u\bigl(x_{N,k}^{(\alpha,\beta)}\bigr),\qquad 0\le k\le N.
\tag{2.4}
\]
We also denote by $I^{c}_{N}=I_{P_N^{(-1/2,-1/2)}}$ and $I^{l}_{N}=I_{P_N^{(0,0)}}$ the Chebyshev-Gauss-Lobatto and Legendre-Gauss-Lobatto interpolation operators, respectively.

For any real numbers $\alpha,\beta>-1$, the set of Jacobi polynomials forms a complete $L^{2}_{w^{\alpha,\beta}}(I)$-orthogonal system, and
\[
\bigl(P_k^{(\alpha,\beta)},P_j^{(\alpha,\beta)}\bigr)_{w^{\alpha,\beta}}=h_k\,\delta_{k,j},
\tag{2.5}
\]
where $\delta_{k,j}$ is the Kronecker delta and
\[
h_k=\frac{2^{\alpha+\beta+1}\,\Gamma(k+\alpha+1)\,\Gamma(k+\beta+1)}{(2k+\alpha+\beta+1)\,\Gamma(k+1)\,\Gamma(k+\alpha+\beta+1)}.
\tag{2.6}
\]
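As a quick sanity check of (2.5)-(2.6), one can evaluate the weighted inner product of two Jacobi polynomials with an exact Gauss-Jacobi rule; the short sketch below (ours, not from the paper) does this with SciPy and compares against the constant $h_k$.

```python
# Check of the orthogonality relation (2.5)-(2.6) by exact Gauss-Jacobi quadrature.
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi, gamma

alpha, beta, k, j = 1.3, -0.4, 5, 7
x, w = roots_jacobi(20, alpha, beta)          # exact for polynomials of degree <= 39
inner = np.sum(eval_jacobi(k, alpha, beta, x) * eval_jacobi(j, alpha, beta, x) * w)
h_k = (2**(alpha + beta + 1) * gamma(k + alpha + 1) * gamma(k + beta + 1)
       / ((2 * k + alpha + beta + 1) * gamma(k + 1) * gamma(k + alpha + beta + 1)))
print(inner)                                                  # ~0 since k != j
print(np.sum(eval_jacobi(k, alpha, beta, x)**2 * w) - h_k)    # ~0
```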

The following special values will be of fundamental importance in what follows (see [22–24]):
\[
P_n^{(\alpha,\beta)}(1)=\frac{(\alpha+1)_n}{n!},\qquad
P_n^{(\alpha,\beta)}(-1)=\frac{(-1)^n(\beta+1)_n}{n!},
\]
\[
D^{q}P_n^{(\alpha,\beta)}(1)=\frac{\Gamma(n+\alpha+1)\prod_{i=0}^{q-1}(n+\lambda+i)}{2^{q}\,(n-q)!\,\Gamma(q+\alpha+1)},\qquad
D^{q}P_n^{(\alpha,\beta)}(-1)=(-1)^{n+q}\,D^{q}P_n^{(\beta,\alpha)}(1),
\tag{2.7}
\]
where $(a)_k=\Gamma(a+k)/\Gamma(a)$ and $\lambda=\alpha+\beta+1$.
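The endpoint values in (2.7) are easy to verify numerically; the following short check (ours) uses SciPy's `eval_jacobi` and the Pochhammer symbol `poch`.

```python
# Sanity check of the endpoint values in (2.7).
from math import factorial
from scipy.special import eval_jacobi, poch

alpha, beta, n = 0.7, 2.0, 6
print(eval_jacobi(n, alpha, beta, 1.0) - poch(alpha + 1, n) / factorial(n))            # ~0
print(eval_jacobi(n, alpha, beta, -1.0) - (-1) ** n * poch(beta + 1, n) / factorial(n))  # ~0
```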

If we denote the $q$-times repeated differentiation and integration of $P_n^{(\alpha,\beta)}(x)$ by $D^{q}P_n^{(\alpha,\beta)}(x)$ and $I^{(q,\alpha,\beta)}_n(x)$, respectively, then (cf. Doha [19, 22])
\[
D^{q}P_n^{(\alpha,\beta)}(x)=2^{-q}(n+\lambda)_q\sum_{i=0}^{n-q}C_{n-q,i}(q,\alpha,\beta)\,P_i^{(\alpha,\beta)}(x),
\tag{2.8}
\]
\[
I^{(q,\alpha,\beta)}_n(x)=\frac{2^{q}}{(n-q+\lambda)_q}\sum_{i=q}^{n+q}C_{n+q,i}(-q,\alpha,\beta)\,P_i^{(\alpha,\beta)}(x)+\pi_{q-1}(x),
\qquad
\begin{cases}
q\ge 0,\ n\ge q+1, & \alpha=\beta=-\tfrac12,\\[2pt]
q\ge 0,\ n\ge q, & \alpha\neq-\tfrac12\ \text{or}\ \beta\neq-\tfrac12,
\end{cases}
\tag{2.9}
\]
where
\[
C_{\ell,i}(q,\alpha,\beta)=\frac{(\ell+2q+\lambda)_i\,(i+q+\alpha+1)_{\ell-i}\,\Gamma(i+\lambda)}{(\ell-i)!\,\Gamma(2i+\lambda)}\;
{}_3F_2\!\left(\begin{matrix}-\ell+i,\ \ \ell+i+2q+\lambda,\ \ i+\alpha+1\\ i+q+\alpha+1,\ \ 2i+\lambda+1\end{matrix};\,1\right),
\tag{2.10}
\]
with $\pi_{q-1}(x)$ a polynomial of degree at most $q-1$. It is to be noted that $I^{(q,\alpha,\beta)}_n(x)$ may be obtained from $D^{q}P_n^{(\alpha,\beta)}(x)$ by replacing $q$ with $-q$. In general, the hypergeometric series ${}_3F_2$ cannot be summed in closed form, but it can be summed by Watson's identity [25] if $\alpha=\beta$. The following two lemmas will be of fundamental importance in what follows.

Lemma 2.1 (see [19, 26]). One has
\[
x^{\ell}=\sum_{j=0}^{\ell}D_{\ell j}^{(\gamma,\delta)}\,P_j^{(\gamma,\delta)}(x),
\tag{2.11}
\]
where
\[
D_{\ell j}^{(\gamma,\delta)}=2^{j}\,\ell!\,(1+\gamma+\delta+2j)\sum_{i=0}^{\ell-j}\frac{(-2)^{i}\,(1+\gamma+j)_{i}}{i!\,(\ell-j-i)!\,(1+\gamma+\delta+j)_{i+1}\,(2+\gamma+\delta+i+j)_{j}}.
\tag{2.12}
\]

Lemma 2.2. If one writes
\[
I^{(q,\alpha,\beta)}_{k}(x)=\sum_{i=q}^{k+q}S_{q}(k,i,\alpha,\beta)\,P_i^{(\alpha,\beta)}(x)+\pi_{q-1}(x),
\tag{2.13}
\]
then
\[
S_{q}(k,i,\alpha,\beta)=\frac{2^{q}}{(k-q+\lambda)_q}\,C_{k+q,i}(-q,\alpha,\beta),\qquad q=1,2,\ldots.
\tag{2.14}
\]

Proof. It is immediately obtained from relation (2.9).

3. Third-Order Differential Equation

We are interested in using the JDPG method to solve the third-order differential equation
\[
u^{(3)}(x)+\gamma_1 u^{(2)}(x)+\gamma_2 u^{(1)}(x)+\gamma_3 u(x)=g(x)\quad\text{in }I,
\tag{3.1}
\]
subject to
\[
u(\pm 1)=u^{(1)}(1)=0,
\tag{3.2}
\]
where $\gamma_1$, $\gamma_2$, and $\gamma_3$ are constants and $g$ is a given source function. In this paper, we consider the fully integrated form of the ODE, given by
\[
u(x)+\gamma_1\int^{(1)}u(x)\,(dx)^{1}+\gamma_2\int^{(2)}u(x)\,(dx)^{2}+\gamma_3\int^{(3)}u(x)\,(dx)^{3}
=f(x)+\sum_{i=0}^{2}d_i\,P_i^{(\alpha,\beta)}(x),\qquad u(\pm 1)=u^{(1)}(1)=0,
\tag{3.3}
\]
where
\[
\int^{(q)}u(x)\,(dx)^{q}=\underbrace{\int\!\!\int\!\cdots\!\int}_{q\ \text{times}}u(x)\,\underbrace{dx\,dx\cdots dx}_{q\ \text{times}},\qquad
f(x)=\int^{(3)}g(x)\,(dx)^{3}.
\tag{3.4}
\]
In this work we assume that $f$, the three-fold indefinite integral of $g$, can be evaluated analytically. We set
\[
S_N=\operatorname{span}\bigl\{P_0^{(\alpha,\beta)}(x),P_1^{(\alpha,\beta)}(x),\ldots,P_N^{(\alpha,\beta)}(x)\bigr\};
\tag{3.5}
\]
then the dual-Petrov-Galerkin approximation to (3.3) is to find $u_N\in W_N$ such that
\[
(u_N,v)_{w^{\alpha,\beta}}+\gamma_1\Bigl(\int^{(1)}u_N\,(dx)^{1},v\Bigr)_{w^{\alpha,\beta}}+\gamma_2\Bigl(\int^{(2)}u_N\,(dx)^{2},v\Bigr)_{w^{\alpha,\beta}}+\gamma_3\Bigl(\int^{(3)}u_N\,(dx)^{3},v\Bigr)_{w^{\alpha,\beta}}
=\Bigl(f+\sum_{i=0}^{2}d_i P_i^{(\alpha,\beta)},v\Bigr)_{w^{\alpha,\beta}}\quad\forall v\in W_N^{*},
\tag{3.6}
\]
where $w^{\alpha,\beta}(x)=(1-x)^{\alpha}(1+x)^{\beta}$ and $(u,v)_{w^{\alpha,\beta}}=\int_I u\,v\,w^{\alpha,\beta}\,dx$ is the inner product of the weighted space $L^{2}_{w^{\alpha,\beta}}(I)$. The norm in $L^{2}_{w^{\alpha,\beta}}(I)$ will be denoted by $\|\cdot\|_{w^{\alpha,\beta}}$.

3.1. The Jacobi Dual-Petrov-Galerkin Method

We choose compact combinations of the Jacobi polynomials as basis functions in order to minimize the bandwidth, and hence to improve the conditioning, of the coefficient matrix corresponding to (3.6). We take the trial and test basis functions $\phi_k(x)$ and $\psi_k(x)$ of the form
\[
\phi_k(x)=P_k^{(\alpha,\beta)}(x)+\epsilon_k P_{k+1}^{(\alpha,\beta)}(x)+\varepsilon_k P_{k+2}^{(\alpha,\beta)}(x)+\zeta_k P_{k+3}^{(\alpha,\beta)}(x),
\tag{3.7}
\]
\[
\psi_k(x)=P_k^{(\alpha,\beta)}(x)+\rho_k P_{k+1}^{(\alpha,\beta)}(x)+\varrho_k P_{k+2}^{(\alpha,\beta)}(x)+\sigma_k P_{k+3}^{(\alpha,\beta)}(x),
\tag{3.8}
\]
where $\epsilon_k$, $\varepsilon_k$, $\zeta_k$, $\rho_k$, $\varrho_k$, and $\sigma_k$ are the unique constants such that $\phi_k(x)\in W_N$ and $\psi_k(x)\in W_N^{*}$. From the boundary conditions $\phi_k(\pm 1)=\phi_k^{(1)}(1)=0$ and (2.7), the constants $\epsilon_k$, $\varepsilon_k$, and $\zeta_k$ can be uniquely determined (using Mathematica) to give (see [27])
\[
\epsilon_k=-\frac{(k+1)(2k+\lambda+2)(k-\alpha+2\beta+1)}{(k+\alpha+1)(k+\beta+1)(2k+\lambda+4)},\qquad
\varepsilon_k=-\frac{(k+1)_2\,(2k+\lambda+1)(k-\beta+2\alpha+3)}{(k+\alpha+1)_2\,(k+\beta+1)(2k+\lambda+5)},
\]
\[
\zeta_k=\frac{(k+1)_3\,(2k+\lambda+1)_2}{(k+\alpha+1)_2\,(k+\beta+1)(2k+\lambda+4)_2}.
\tag{3.9}
\]
Using (2.7) and $\psi_k(\pm 1)=\psi_k^{(1)}(-1)=0$, one readily verifies that
\[
\rho_k=\frac{(k+1)(2k+\lambda+2)(k-\beta+2\alpha+1)}{(k+\alpha+1)(k+\beta+1)(2k+\lambda+4)},\qquad
\varrho_k=-\frac{(k+1)_2\,(2k+\lambda+1)(k-\alpha+2\beta+3)}{(k+\beta+1)_2\,(k+\alpha+1)(2k+\lambda+5)},
\]
\[
\sigma_k=-\frac{(k+1)_3\,(2k+\lambda+1)_2}{(k+\beta+1)_2\,(k+\alpha+1)(2k+\lambda+4)_2}.
\tag{3.10}
\]
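As an illustration (not taken from the paper), the combination coefficients in (3.7) can also be obtained numerically by imposing the three boundary conditions $\phi_k(\pm 1)=\phi_k^{(1)}(1)=0$ directly and solving a $3\times 3$ linear system; the sketch below does this and compares the first coefficient with the closed form $\epsilon_k$ of (3.9). The helper `jac_dq` and the function name `trial_coeffs` are ours.

```python
# Sketch: trial-basis coefficients of (3.7) from the boundary conditions themselves.
import numpy as np
from scipy.special import eval_jacobi

def jac_dq(n, a, b, x, q=0):
    # q-th derivative of P_n^{(a,b)} at x via D P_n^{(a,b)} = ((n+a+b+1)/2) P_{n-1}^{(a+1,b+1)}.
    if q == 0:
        return eval_jacobi(n, a, b, x)
    if n < q:
        return 0.0
    return 0.5 * (n + a + b + 1) * jac_dq(n - 1, a + 1, b + 1, x, q - 1)

def trial_coeffs(k, a, b):
    # Solve phi_k(1) = phi_k(-1) = phi_k'(1) = 0 for (eps_k, vareps_k, zeta_k).
    conds = [lambda n: jac_dq(n, a, b, 1.0),
             lambda n: jac_dq(n, a, b, -1.0),
             lambda n: jac_dq(n, a, b, 1.0, 1)]
    M = np.array([[c(k + j) for j in (1, 2, 3)] for c in conds])
    rhs = -np.array([c(k) for c in conds])
    return np.linalg.solve(M, rhs)

if __name__ == "__main__":
    k, a, b = 4, 0.5, 1.5
    lam = a + b + 1
    eps, vareps, zeta = trial_coeffs(k, a, b)
    eps_formula = -((k + 1) * (2 * k + lam + 2) * (k - a + 2 * b + 1)
                    / ((k + a + 1) * (k + b + 1) * (2 * k + lam + 4)))
    print(eps, eps_formula)   # the two values agree to machine precision
```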

Now it is clear that (3.6) is equivalent to
\[
(u_N,\psi_k)_{w^{\alpha,\beta}}+\gamma_1\Bigl(\int^{(1)}u_N\,(dx)^{1},\psi_k\Bigr)_{w^{\alpha,\beta}}+\gamma_2\Bigl(\int^{(2)}u_N\,(dx)^{2},\psi_k\Bigr)_{w^{\alpha,\beta}}+\gamma_3\Bigl(\int^{(3)}u_N\,(dx)^{3},\psi_k\Bigr)_{w^{\alpha,\beta}}
=\Bigl(f+\sum_{i=0}^{2}d_iP_i^{(\alpha,\beta)},\psi_k\Bigr)_{w^{\alpha,\beta},N},\qquad k=0,1,\ldots,N,
\tag{3.11}
\]
where $(\cdot,\cdot)_{w^{\alpha,\beta},N}$ is the discrete inner product associated with the Jacobi-Gauss-Lobatto quadrature. The constants $d_0$, $d_1$, and $d_2$ do not appear if we take $k\ge 3$ in (3.11); therefore we get
\[
(u_N,\psi_k)_{w^{\alpha,\beta}}+\gamma_1\Bigl(\int^{(1)}u_N\,(dx)^{1},\psi_k\Bigr)_{w^{\alpha,\beta}}+\gamma_2\Bigl(\int^{(2)}u_N\,(dx)^{2},\psi_k\Bigr)_{w^{\alpha,\beta}}+\gamma_3\Bigl(\int^{(3)}u_N\,(dx)^{3},\psi_k\Bigr)_{w^{\alpha,\beta}}
=(f,\psi_k)_{w^{\alpha,\beta},N},\qquad k=3,4,\ldots,N.
\tag{3.12}
\]
If we take $\phi_k(x)$ and $\psi_k(x)$ as defined in (3.7) and (3.8), respectively, and if we denote
\[
f_k=(f,\psi_k)_{w^{\alpha,\beta},N},\quad \mathbf{f}=(f_3,f_4,\ldots,f_N)^{T},\quad
u_N(x)=\sum_{n=0}^{N-3}v_n\,\phi_n(x),\quad \mathbf{v}=(v_0,v_1,\ldots,v_{N-3})^{T},
\]
\[
a_{kj}=\bigl(\phi_{j-3},\psi_k\bigr)_{w^{\alpha,\beta}},\quad
b_{kj}=\Bigl(\int^{(1)}\phi_{j-3}\,(dx)^{1},\psi_k\Bigr)_{w^{\alpha,\beta}},\quad
c_{kj}=\Bigl(\int^{(2)}\phi_{j-3}\,(dx)^{2},\psi_k\Bigr)_{w^{\alpha,\beta}},\quad
d_{kj}=\Bigl(\int^{(3)}\phi_{j-3}\,(dx)^{3},\psi_k\Bigr)_{w^{\alpha,\beta}},
\tag{3.13}
\]
then
\[
W_N=\operatorname{span}\{\phi_0(x),\phi_1(x),\ldots,\phi_{N-3}(x)\},\qquad
W_N^{*}=\operatorname{span}\{\psi_3(x),\psi_4(x),\ldots,\psi_N(x)\},
\tag{3.14}
\]
and the nonzero elements $a_{kj}$, $b_{kj}$, $c_{kj}$, and $d_{kj}$, for $3\le k,j\le N$, are given as follows:
\[
\begin{aligned}
a_{kk}&=\zeta_{k-3}h_k, & a_{k,k+1}&=\varepsilon_{k-2}h_k+\zeta_{k-2}\rho_k h_{k+1},\\
a_{k,k+2}&=\epsilon_{k-1}h_k+\varepsilon_{k-1}\rho_k h_{k+1}+\zeta_{k-1}\varrho_k h_{k+2}, &
a_{k,k+3}&=h_k+\epsilon_k\rho_k h_{k+1}+\varepsilon_k\varrho_k h_{k+2}+\zeta_k\sigma_k h_{k+3},\\
a_{k,k+4}&=\rho_k h_{k+1}+\epsilon_{k+1}\varrho_k h_{k+2}+\varepsilon_{k+1}\sigma_k h_{k+3}, &
a_{k,k+5}&=\varrho_k h_{k+2}+\epsilon_{k+2}\sigma_k h_{k+3},\\
a_{k,k+6}&=\sigma_k h_{k+3}, & &
\end{aligned}
\]
\[
b_{k,j}=\hbar_1(j,k,\alpha,\beta)\,h_k+\hbar_1(j,k+1,\alpha,\beta)\,\rho_k h_{k+1}+\hbar_1(j,k+2,\alpha,\beta)\,\varrho_k h_{k+2}+\hbar_1(j,k+3,\alpha,\beta)\,\sigma_k h_{k+3},\quad j=k+\ell-1,\ \ell=0,1,\ldots,8,
\]
\[
c_{k,j}=\hbar_2(j,k,\alpha,\beta)\,h_k+\hbar_2(j,k+1,\alpha,\beta)\,\rho_k h_{k+1}+\hbar_2(j,k+2,\alpha,\beta)\,\varrho_k h_{k+2}+\hbar_2(j,k+3,\alpha,\beta)\,\sigma_k h_{k+3},\quad j=k+\ell-2,\ \ell=0,1,\ldots,10,
\]
\[
d_{k,j}=\hbar_3(j,k,\alpha,\beta)\,h_k+\hbar_3(j,k+1,\alpha,\beta)\,\rho_k h_{k+1}+\hbar_3(j,k+2,\alpha,\beta)\,\varrho_k h_{k+2}+\hbar_3(j,k+3,\alpha,\beta)\,\sigma_k h_{k+3},\quad j=k+\ell-3,\ \ell=0,1,\ldots,12,
\tag{3.15}
\]
where
\[
\hbar_i(j,k,\alpha,\beta)=S_i(j-3,k,\alpha,\beta)+\epsilon_{j-3}\,S_i(j-2,k,\alpha,\beta)+\varepsilon_{j-3}\,S_i(j-1,k,\alpha,\beta)+\zeta_{j-3}\,S_i(j,k,\alpha,\beta).
\tag{3.16}
\]

Hence, by setting
\[
A=(a_{kj}),\quad B=(b_{kj}),\quad C=(c_{kj}),\quad D=(d_{kj}),\qquad 3\le k,j\le N,
\tag{3.17}
\]
(3.12) is equivalent to the matrix equation
\[
\bigl(A+\gamma_1 B+\gamma_2 C+\gamma_3 D\bigr)\mathbf{v}=\mathbf{f}.
\tag{3.18}
\]
All the analytical formulae for the nonzero elements of the matrices $A$, $B$, $C$, and $D$ can be obtained by direct computation using the properties of the Jacobi polynomials (for details see [27, 28]).

In the case of $\alpha,\beta\ge 0$, we can form the LU factorization explicitly, that is, $A+\gamma_1 B+\gamma_2 C+\gamma_3 D=LU$. In general, the expense of computing the LU factorization of an $N\times N$ dense matrix is $O(N^{3})$ operations, and the expense of solving the resulting triangular systems, once the factorization is known, is $O(N^{2})$ (see [27]). However, for a banded matrix of bandwidth $r$, we need only $O(r^{2}N)$ operations to factorize and $O(rN)$ operations to solve a linear system. In the case $\gamma_i\neq 0$, $i=1,2,3$, the square matrix $A+\gamma_1 B+\gamma_2 C+\gamma_3 D$ has bandwidth 13, so we need only $O(13N)$ operations to solve the linear system (3.18). If $r\ll N$, this represents a very substantial saving.
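To make the cost remark concrete, the following sketch (ours; the random matrix is only a stand-in for $A+\gamma_1B+\gamma_2C+\gamma_3D$) solves a system with bandwidth 13 using SciPy's banded solver, which realizes the $O(r^{2}N)$ factorization and $O(rN)$ solve mentioned above.

```python
# Banded solve with 6 sub- and 6 super-diagonals (bandwidth 13), as a cost illustration.
import numpy as np
from scipy.linalg import solve_banded

N, l, u = 2000, 6, 6                       # l + u + 1 = 13 diagonals
rng = np.random.default_rng(0)
ab = rng.standard_normal((l + u + 1, N))   # diagonal-ordered banded storage
ab[u, :] += 20.0                           # strengthen the main diagonal
rhs = rng.standard_normal(N)
x = solve_banded((l, u), ab, rhs)          # O(r^2 N) factorization, O(r N) solve

# Residual check against the equivalent dense matrix.
A = np.zeros((N, N))
for j in range(N):
    for i in range(max(0, j - u), min(N, j + l + 1)):
        A[i, j] = ab[u + i - j, j]
print(np.max(np.abs(A @ x - rhs)))         # ~1e-12
```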

It is worth noting that, for $\alpha,\beta\ge 0$, the algebraic system (3.18) resulting from the fully integrated reformulation of (3.1) is sparse and is therefore cheaper to solve than the system obtained from the differentiated form (see [27, Theorem 3.1]). Moreover, the savings in computational effort increase as the size of the system grows. This demonstrates the advantage of using the integrated forms over the differentiated ones for constant-coefficient ODEs.

3.2. A Quadrature JDPG Method

The JDPG method can be extended to ODEs with polynomial coefficients because the expansion of the product of an algebraic polynomial and a Jacobi polynomial in terms of Jacobi polynomials is known in analytical form.

The Jacobi coefficients of the moments of a single Jacobi polynomial of any degree are given by (see Doha [22])
\[
x^{m}P_j^{(\alpha,\beta)}(x)=\sum_{n=0}^{2m}\Theta_{m,n}(j)\,P_{j+m-n}^{(\alpha,\beta)}(x),\qquad m,j\ge 0,
\tag{3.19}
\]
with $P_{-r}^{(\alpha,\beta)}(x)=0$ for $r\ge 1$, where the explicit closed-form expression (3.20) for the coefficients $\Theta_{m,n}(j)$, a finite sum involving a terminating ${}_2F_1$ series of argument 2, is given in Doha [22] and is not reproduced here. This formula greatly facilitates setting up the algebraic systems obtained by applying spectral methods to differential equations with polynomial coefficients of any order.
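The band structure asserted by (3.19), namely that $x^{m}P_j^{(\alpha,\beta)}$ involves only $P_{j-m},\ldots,P_{j+m}$, can be checked numerically without the closed form for $\Theta_{m,n}(j)$; the sketch below (ours) recovers the coefficients by exact Gauss-Jacobi quadrature.

```python
# Numerical check of the 2m+1 band in (3.19): the Jacobi coefficients of x^m * P_j
# vanish for degrees below j - m.
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi, gamma

alpha, beta, m, j = 0.5, -0.5, 2, 6
xg, wg = roots_jacobi(j + m + 10, alpha, beta)   # exact for the degrees involved
f = xg**m * eval_jacobi(j, alpha, beta, xg)

for i in range(j + m + 1):
    h_i = (2**(alpha + beta + 1) * gamma(i + alpha + 1) * gamma(i + beta + 1)
           / ((2 * i + alpha + beta + 1) * gamma(i + 1) * gamma(i + alpha + beta + 1)))
    c_i = np.sum(f * eval_jacobi(i, alpha, beta, xg) * wg) / h_i
    print(i, c_i)        # negligible for i < j - m, i.e. for i < 4 here
```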

Let us consider the following integrated form of the third-order differential equation with variable coefficients:
\[
u(x)+\int^{(1)}\gamma_1(x)u(x)\,(dx)^{1}+\int^{(2)}\gamma_2(x)u(x)\,(dx)^{2}+\int^{(3)}\gamma_3(x)u(x)\,(dx)^{3}
=f(x)+\sum_{i=0}^{2}d_iP_i^{(\alpha,\beta)}(x)\quad\text{in }I=(-1,1),\qquad u(\pm 1)=u^{(1)}(1)=0,
\tag{3.21}
\]
where $\gamma_1(x)$, $\gamma_2(x)$, and $\gamma_3(x)$ are the variable polynomial coefficients of the differential equation. The quadrature dual-Petrov-Galerkin method for (3.21) is to find $u_N\in W_N$ such that
\[
(u_N,v_N)_{w^{\alpha,\beta},N}+\Bigl(\int^{(1)}\gamma_1(x)u_N\,(dx)^{1},v_N\Bigr)_{w^{\alpha,\beta},N}+\Bigl(\int^{(2)}\gamma_2(x)u_N\,(dx)^{2},v_N\Bigr)_{w^{\alpha,\beta},N}+\Bigl(\int^{(3)}\gamma_3(x)u_N\,(dx)^{3},v_N\Bigr)_{w^{\alpha,\beta},N}
=\Bigl(f+\sum_{i=0}^{2}d_iP_i^{(\alpha,\beta)},v_N\Bigr)_{w^{\alpha,\beta},N}\quad\forall v_N\in W_N^{*},
\tag{3.22}
\]
where $(u,v)_{w^{\alpha,\beta},N}$ is the discrete inner product of $u$ and $v$ associated with the Jacobi-Gauss-Lobatto quadrature.

Let us write
\[
u_N=\sum_{k=0}^{N-3}\tilde a_k\,\phi_k,\quad \mathbf{a}=(\tilde a_0,\tilde a_1,\ldots,\tilde a_{N-3})^{T},\quad
\tilde f_k=(f,\psi_k)_{w^{\alpha,\beta},N},\quad \mathbf{f}=(\tilde f_3,\tilde f_4,\ldots,\tilde f_N)^{T};
\tag{3.23}
\]
then, using Lemma 2.2 and formula (3.19), we can compute
\[
\tilde a_{ij}=(\phi_{j-3},\psi_i)_{w^{\alpha,\beta},N},\quad
\tilde b_{ij}=\Bigl(\int^{(1)}\gamma_1(x)\phi_{j-3}(x)\,(dx)^{1},\psi_i\Bigr)_{w^{\alpha,\beta},N},\quad
\tilde c_{ij}=\Bigl(\int^{(2)}\gamma_2(x)\phi_{j-3}(x)\,(dx)^{2},\psi_i\Bigr)_{w^{\alpha,\beta},N},\quad
\tilde d_{ij}=\Bigl(\int^{(3)}\gamma_3(x)\phi_{j-3}(x)\,(dx)^{3},\psi_i\Bigr)_{w^{\alpha,\beta},N}.
\tag{3.24}
\]
By setting
\[
\tilde A=(\tilde a_{kj}),\quad \tilde B=(\tilde b_{kj}),\quad \tilde C=(\tilde c_{kj}),\quad \tilde D=(\tilde d_{kj}),\qquad 3\le k,j\le N,
\tag{3.25}
\]
the linear system of (3.22) becomes
\[
(\tilde A+\tilde B+\tilde C+\tilde D)\,\mathbf{a}=\mathbf{f}.
\tag{3.26}
\]

4. Fifth-Order Differential Equation

In this section, we consider the fifth-order differential equation
\[
u^{(5)}+\gamma_1 u^{(3)}+\gamma_2 u^{(1)}+\gamma_3 u=g(x),\qquad x\in I,
\tag{4.1}
\]
through its integrated form, namely,
\[
u(x)+\gamma_1\int^{(2)}u(x)\,(dx)^{2}+\gamma_2\int^{(4)}u(x)\,(dx)^{4}+\gamma_3\int^{(5)}u(x)\,(dx)^{5}
=f(x)+\sum_{i=0}^{4}d_iP_i^{(\alpha,\beta)}(x).
\tag{4.2}
\]

4.1. First Choice of Boundary Conditions

Here, we apply the dual-Petrov-Galerkin approximation to (4.2) subject to the boundary conditions
\[
u(\pm 1)=u^{(1)}(\pm 1)=u^{(2)}(1)=0.
\tag{4.3}
\]

We set
\[
V_N=\{v\in S_N:\ v(\pm 1)=v^{(1)}(\pm 1)=v^{(2)}(1)=0\},\qquad
V_N^{*}=\{v\in S_N:\ v(\pm 1)=v^{(1)}(\pm 1)=v^{(2)}(-1)=0\};
\tag{4.4}
\]
then the Jacobi dual-Petrov-Galerkin approximation to (4.2) is to find $u_N\in V_N$ such that
\[
(u_N,v)_{w^{\alpha,\beta}}+\gamma_1\Bigl(\int^{(2)}u_N\,(dx)^{2},v\Bigr)_{w^{\alpha,\beta}}+\gamma_2\Bigl(\int^{(4)}u_N\,(dx)^{4},v\Bigr)_{w^{\alpha,\beta}}+\gamma_3\Bigl(\int^{(5)}u_N\,(dx)^{5},v\Bigr)_{w^{\alpha,\beta}}
=\Bigl(f+\sum_{i=0}^{4}d_iP_i^{(\alpha,\beta)},v\Bigr)_{w^{\alpha,\beta},N}\quad\forall v\in V_N^{*}.
\tag{4.5}
\]
We consider the following Jacobi dual-Petrov-Galerkin procedure for (4.1): find $u_N\in V_N$ such that
\[
(u_N,v)_{w^{\alpha,\beta}}+\gamma_1\Bigl(\int^{(2)}u_N\,(dx)^{2},v\Bigr)_{w^{\alpha,\beta}}+\gamma_2\Bigl(\int^{(4)}u_N\,(dx)^{4},v\Bigr)_{w^{\alpha,\beta}}+\gamma_3\Bigl(\int^{(5)}u_N\,(dx)^{5},v\Bigr)_{w^{\alpha,\beta}}
=\Bigl(f+\sum_{i=0}^{4}d_iP_i^{(\alpha,\beta)},v\Bigr)_{w^{\alpha,\beta}}\quad\forall v\in V_N^{*}.
\tag{4.6}
\]
Now, we choose the basis and dual basis functions $\Phi_k(x)$ and $\Psi_k(x)$ to be of the form
\[
\Phi_k(x)=P_k^{(\alpha,\beta)}(x)+\hat\epsilon_k P_{k+1}^{(\alpha,\beta)}(x)+\hat\varepsilon_k P_{k+2}^{(\alpha,\beta)}(x)+\hat\zeta_k P_{k+3}^{(\alpha,\beta)}(x)+\hat\mu_k P_{k+4}^{(\alpha,\beta)}(x)+\hat\upsilon_k P_{k+5}^{(\alpha,\beta)}(x),\qquad k=0,1,\ldots,N-5,
\]
\[
\Psi_k(x)=P_k^{(\alpha,\beta)}(x)+\hat\rho_k P_{k+1}^{(\alpha,\beta)}(x)+\hat\varrho_k P_{k+2}^{(\alpha,\beta)}(x)+\hat\sigma_k P_{k+3}^{(\alpha,\beta)}(x)+\hat\varsigma_k P_{k+4}^{(\alpha,\beta)}(x)+\hat\tau_k P_{k+5}^{(\alpha,\beta)}(x),\qquad k=0,1,\ldots,N-5.
\tag{4.7}
\]
It is not difficult to show that the basis functions satisfy $\Phi_k(x)\in V_{k+5}$ and the dual basis functions satisfy $\Psi_k(x)\in V_{k+5}^{*}$.

We choose the coefficients $\hat\epsilon_k$, $\hat\varepsilon_k$, $\hat\zeta_k$, $\hat\mu_k$, and $\hat\upsilon_k$ such that $\Phi_k(x)$ satisfies the boundary conditions (4.3). Making use of (2.7), the boundary conditions (4.3) lead to a linear system for these coefficients. The exact solution of this linear system is extremely tedious to compute by hand, so we have resorted to the symbolic computation software Mathematica (version 6); the coefficients $\hat\epsilon_k$, $\hat\varepsilon_k$, $\hat\zeta_k$, $\hat\mu_k$, and $\hat\upsilon_k$ are uniquely determined as
\[
\hat\epsilon_k=-\frac{(k+1)(2k+\lambda+2)(k-2\alpha+3\beta+1)}{(k+\alpha+1)(k+\beta+1)(2k+\lambda+6)},
\]
\[
\hat\varepsilon_k=-\frac{(k+1)_2\,(2k+\lambda+1)(2k+\lambda+4)}{(k+\alpha+1)_2\,(k+\beta+1)_2\,(2k+\lambda+6)_2}\bigl[2k^{2}+4(\alpha+3)k-\alpha^{2}+9\alpha-3\beta^{2}+3(2\alpha+1)\beta+16\bigr],
\]
\[
\hat\zeta_k=\frac{(k+1)_3\,(2k+\lambda+1)_2}{(k+\alpha+1)_3\,(k+\beta+1)_2\,(2k+\lambda+7)_2}\bigl[2k^{2}+4(\beta+3)k-3\alpha^{2}-3\alpha-\beta^{2}+3(2\alpha+5)\beta+16\bigr],
\]
\[
\hat\mu_k=\frac{(k+1)_4\,(k+3\alpha-2\beta+5)(2k+\lambda+1)_3}{(k+\alpha+1)_3\,(k+\beta+1)_2\,(2k+\lambda+6)_2\,(2k+\lambda+9)},\qquad
\hat\upsilon_k=-\frac{(k+1)_5\,(2k+\lambda+1)_4}{(k+\alpha+1)_3\,(k+\beta+1)_2\,(2k+\lambda+6)_4}.
\tag{4.8}
\]
Since the dual basis functions $\Psi_k(x)$ satisfy the dual boundary conditions, making use of (2.7) the unknown coefficients $\hat\rho_k$, $\hat\varrho_k$, $\hat\sigma_k$, $\hat\varsigma_k$, and $\hat\tau_k$ are determined (again with Mathematica) as
\[
\hat\rho_k=\frac{(k+1)(2k+\lambda+2)(k+3\alpha-2\beta+1)}{(k+\alpha+1)(k+\beta+1)(2k+\lambda+6)},
\]
\[
\hat\varrho_k=-\frac{(k+1)_2\,(2k+\lambda+1)(2k+\lambda+4)}{(k+\alpha+1)_2\,(k+\beta+1)_2\,(2k+\lambda+6)_2}\bigl[2k^{2}+4(\beta+3)k-3\alpha^{2}+3\alpha-\beta^{2}+3(2\alpha+3)\beta+16\bigr],
\]
\[
\hat\sigma_k=-\frac{(k+1)_3\,(2k+\lambda+1)_2}{(k+\alpha+1)_2\,(k+\beta+1)_3\,(2k+\lambda+7)_2}\bigl[2k^{2}+4(\alpha+3)k-\alpha^{2}+15\alpha-3\beta^{2}+3(2\alpha-1)\beta+16\bigr],
\]
\[
\hat\varsigma_k=\frac{(k+1)_4\,(k+3\beta-2\alpha+5)(2k+\lambda+1)_3}{(k+\alpha+1)_2\,(k+\beta+1)_3\,(2k+\lambda+6)_2\,(2k+\lambda+9)},\qquad
\hat\tau_k=-\frac{(k+1)_5\,(2k+\lambda+1)_4}{(k+\alpha+1)_2\,(k+\beta+1)_3\,(2k+\lambda+6)_4}.
\tag{4.9}
\]
It is clear that (4.6) is equivalent to
\[
(u_N,\Psi_k)_{w^{\alpha,\beta}}+\gamma_1\Bigl(\int^{(2)}u_N\,(dx)^{2},\Psi_k\Bigr)_{w^{\alpha,\beta}}+\gamma_2\Bigl(\int^{(4)}u_N\,(dx)^{4},\Psi_k\Bigr)_{w^{\alpha,\beta}}+\gamma_3\Bigl(\int^{(5)}u_N\,(dx)^{5},\Psi_k\Bigr)_{w^{\alpha,\beta}}
=\Bigl(f+\sum_{i=0}^{4}d_iP_i^{(\alpha,\beta)},\Psi_k\Bigr)_{w^{\alpha,\beta}},\qquad k=0,1,\ldots,N.
\tag{4.10}
\]
The constants $d_0,d_1,d_2,d_3$, and $d_4$ do not appear if we take $k\ge 5$ in (4.10); therefore we get
\[
(u_N,\Psi_k)_{w^{\alpha,\beta}}+\gamma_1\Bigl(\int^{(2)}u_N\,(dx)^{2},\Psi_k\Bigr)_{w^{\alpha,\beta}}+\gamma_2\Bigl(\int^{(4)}u_N\,(dx)^{4},\Psi_k\Bigr)_{w^{\alpha,\beta}}+\gamma_3\Bigl(\int^{(5)}u_N\,(dx)^{5},\Psi_k\Bigr)_{w^{\alpha,\beta}}
=(f,\Psi_k)_{w^{\alpha,\beta}},\qquad k=5,6,\ldots,N.
\tag{4.11}
\]
If we take $\Phi_k(x)$ and $\Psi_k(x)$ as defined in (4.7) and denote
\[
f_k=(f,\Psi_k)_{w^{\alpha,\beta}},\quad \mathbf{f}=(f_5,f_6,\ldots,f_N)^{T},\quad
u_N(x)=\sum_{n=0}^{N-5}v_n\,\Phi_n(x),\quad \mathbf{v}=(v_0,v_1,\ldots,v_{N-5})^{T},
\]
\[
p_{kj}=\bigl(\Phi_{j-5},\Psi_k\bigr)_{w^{\alpha,\beta}},\quad
q_{kj}=\Bigl(\int^{(2)}\Phi_{j-5}\,(dx)^{2},\Psi_k\Bigr)_{w^{\alpha,\beta}},\quad
s_{kj}=\Bigl(\int^{(4)}\Phi_{j-5}\,(dx)^{4},\Psi_k\Bigr)_{w^{\alpha,\beta}},\quad
t_{kj}=\Bigl(\int^{(5)}\Phi_{j-5}\,(dx)^{5},\Psi_k\Bigr)_{w^{\alpha,\beta}},
\tag{4.12}
\]
then
\[
V_N=\operatorname{span}\{\Phi_0(x),\Phi_1(x),\ldots,\Phi_{N-5}(x)\},\qquad
V_N^{*}=\operatorname{span}\{\Psi_5(x),\Psi_6(x),\ldots,\Psi_N(x)\},
\tag{4.13}
\]
and the nonzero elements $p_{kj}$, $q_{kj}$, $s_{kj}$, and $t_{kj}$, for $5\le k,j\le N$, are given as follows:
\[
\begin{aligned}
p_{kk}&=\hat\upsilon_{k-5}h_k,\\
p_{k,k+1}&=\hat\mu_{k-4}h_k+\hat\upsilon_{k-4}\hat\rho_k h_{k+1},\\
p_{k,k+2}&=\hat\zeta_{k-3}h_k+\hat\mu_{k-3}\hat\rho_k h_{k+1}+\hat\upsilon_{k-3}\hat\varrho_k h_{k+2},\\
p_{k,k+3}&=\hat\varepsilon_{k-2}h_k+\hat\zeta_{k-2}\hat\rho_k h_{k+1}+\hat\mu_{k-2}\hat\varrho_k h_{k+2}+\hat\upsilon_{k-2}\hat\sigma_k h_{k+3},\\
p_{k,k+4}&=\hat\epsilon_{k-1}h_k+\hat\varepsilon_{k-1}\hat\rho_k h_{k+1}+\hat\zeta_{k-1}\hat\varrho_k h_{k+2}+\hat\mu_{k-1}\hat\sigma_k h_{k+3}+\hat\upsilon_{k-1}\hat\varsigma_k h_{k+4},\\
p_{k,k+5}&=h_k+\hat\epsilon_k\hat\rho_k h_{k+1}+\hat\varepsilon_k\hat\varrho_k h_{k+2}+\hat\zeta_k\hat\sigma_k h_{k+3}+\hat\mu_k\hat\varsigma_k h_{k+4}+\hat\upsilon_k\hat\tau_k h_{k+5},\\
p_{k,k+6}&=\hat\rho_k h_{k+1}+\hat\epsilon_{k+1}\hat\varrho_k h_{k+2}+\hat\varepsilon_{k+1}\hat\sigma_k h_{k+3}+\hat\zeta_{k+1}\hat\varsigma_k h_{k+4}+\hat\mu_{k+1}\hat\tau_k h_{k+5},\\
p_{k,k+7}&=\hat\varrho_k h_{k+2}+\hat\epsilon_{k+2}\hat\sigma_k h_{k+3}+\hat\varepsilon_{k+2}\hat\varsigma_k h_{k+4}+\hat\zeta_{k+2}\hat\tau_k h_{k+5},\\
p_{k,k+8}&=\hat\sigma_k h_{k+3}+\hat\epsilon_{k+3}\hat\varsigma_k h_{k+4}+\hat\varepsilon_{k+3}\hat\tau_k h_{k+5},\\
p_{k,k+9}&=\hat\varsigma_k h_{k+4}+\hat\epsilon_{k+4}\hat\tau_k h_{k+5},\\
p_{k,k+10}&=\hat\tau_k h_{k+5},
\end{aligned}
\tag{4.14}
\]
\[
q_{k,j}=\hbar_2(j,k,\alpha,\beta)h_k+\hbar_2(j,k+1,\alpha,\beta)\hat\rho_k h_{k+1}+\hbar_2(j,k+2,\alpha,\beta)\hat\varrho_k h_{k+2}+\hbar_2(j,k+3,\alpha,\beta)\hat\sigma_k h_{k+3}+\hbar_2(j,k+4,\alpha,\beta)\hat\varsigma_k h_{k+4}+\hbar_2(j,k+5,\alpha,\beta)\hat\tau_k h_{k+5},\quad j=k+\ell-2,\ \ell=0,1,\ldots,14,
\]
\[
s_{k,j}=\hbar_4(j,k,\alpha,\beta)h_k+\hbar_4(j,k+1,\alpha,\beta)\hat\rho_k h_{k+1}+\hbar_4(j,k+2,\alpha,\beta)\hat\varrho_k h_{k+2}+\hbar_4(j,k+3,\alpha,\beta)\hat\sigma_k h_{k+3}+\hbar_4(j,k+4,\alpha,\beta)\hat\varsigma_k h_{k+4}+\hbar_4(j,k+5,\alpha,\beta)\hat\tau_k h_{k+5},\quad j=k+\ell-4,\ \ell=0,1,\ldots,18,
\]
\[
t_{k,j}=\hbar_5(j,k,\alpha,\beta)h_k+\hbar_5(j,k+1,\alpha,\beta)\hat\rho_k h_{k+1}+\hbar_5(j,k+2,\alpha,\beta)\hat\varrho_k h_{k+2}+\hbar_5(j,k+3,\alpha,\beta)\hat\sigma_k h_{k+3}+\hbar_5(j,k+4,\alpha,\beta)\hat\varsigma_k h_{k+4}+\hbar_5(j,k+5,\alpha,\beta)\hat\tau_k h_{k+5},\quad j=k+\ell-5,\ \ell=0,1,\ldots,20,
\tag{4.15}
\]
where
\[
\hbar_i(j,k,\alpha,\beta)=S_i(j-5,k,\alpha,\beta)+\hat\epsilon_{j-5}S_i(j-4,k,\alpha,\beta)+\hat\varepsilon_{j-5}S_i(j-3,k,\alpha,\beta)+\hat\zeta_{j-5}S_i(j-2,k,\alpha,\beta)+\hat\mu_{j-5}S_i(j-1,k,\alpha,\beta)+\hat\upsilon_{j-5}S_i(j,k,\alpha,\beta).
\tag{4.16}
\]
Then (4.11) is equivalent to the matrix equation
\[
\bigl(P+\gamma_1 Q+\gamma_2 S+\gamma_3 T\bigr)\mathbf{v}=\mathbf{f}.
\tag{4.17}
\]
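In the same spirit as the third-order case, the coefficients in (4.7) can be generated numerically by imposing the five conditions (4.3) and solving a $5\times 5$ system, which provides a convenient cross-check of (4.8); the sketch below (ours, with the helper names `jac_dq` and `fifth_order_trial_coeffs` chosen here) prints the Legendre specialization.

```python
# Sketch: fifth-order trial-basis coefficients of (4.7) from the boundary conditions (4.3).
import numpy as np
from scipy.special import eval_jacobi

def jac_dq(n, a, b, x, q=0):
    # q-th derivative of P_n^{(a,b)} at x via D P_n^{(a,b)} = ((n+a+b+1)/2) P_{n-1}^{(a+1,b+1)}.
    if q == 0:
        return eval_jacobi(n, a, b, x)
    if n < q:
        return 0.0
    return 0.5 * (n + a + b + 1) * jac_dq(n - 1, a + 1, b + 1, x, q - 1)

def fifth_order_trial_coeffs(k, a, b):
    conds = [lambda n: jac_dq(n, a, b, 1.0),        # Phi(1)   = 0
             lambda n: jac_dq(n, a, b, -1.0),       # Phi(-1)  = 0
             lambda n: jac_dq(n, a, b, 1.0, 1),     # Phi'(1)  = 0
             lambda n: jac_dq(n, a, b, -1.0, 1),    # Phi'(-1) = 0
             lambda n: jac_dq(n, a, b, 1.0, 2)]     # Phi''(1) = 0
    M = np.array([[c(k + j) for j in range(1, 6)] for c in conds])
    rhs = -np.array([c(k) for c in conds])
    return np.linalg.solve(M, rhs)

print(fifth_order_trial_coeffs(0, 0.0, 0.0))
# For alpha = beta = 0 this returns approximately
# [-0.4286, -1.4286, 0.6667, 0.4286, -0.2381], i.e. -3/7, -10/7, 2/3, 3/7, -5/21,
# matching (4.8) specialized to the Legendre case with k = 0.
```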

4.2. Second Choice of Boundary Conditions

In this subsection, we consider the fifth-order differential equation (4.1) with the boundary conditions
\[
u(\pm 1)=u^{(1)}(\pm 1)=u^{(3)}(-1)=0.
\tag{4.18}
\]
Equation (4.1) subject to the boundary conditions (4.18) has been considered in [29, 30]. Let us denote
\[
Z_N=\{v\in S_N:\ v(\pm 1)=v^{(1)}(\pm 1)=v^{(3)}(-1)=0\},\qquad
\hat Z_N=\{v\in S_N:\ v(\pm 1)=v^{(1)}(\pm 1)=v^{(3)}(1)=0\};
\tag{4.19}
\]
then the Jacobi dual-Petrov-Galerkin approximation of (4.1) subject to (4.18) consists of finding $u_N\in Z_N$ such that
\[
(u_N,v)_{w^{\alpha,\beta}}+\gamma_1\Bigl(\int^{(2)}u_N\,(dx)^{2},v\Bigr)_{w^{\alpha,\beta}}+\gamma_2\Bigl(\int^{(4)}u_N\,(dx)^{4},v\Bigr)_{w^{\alpha,\beta}}+\gamma_3\Bigl(\int^{(5)}u_N\,(dx)^{5},v\Bigr)_{w^{\alpha,\beta}}
=\Bigl(f+\sum_{i=0}^{4}d_iP_i^{(\alpha,\beta)},v\Bigr)_{w^{\alpha,\beta}}\quad\forall v\in\hat Z_N.
\tag{4.20}
\]
We consider the following choice of basis functions:
\[
\varphi_k(x)=P_k^{(\alpha,\beta)}(x)+\xi_{1,k}P_{k+1}^{(\alpha,\beta)}(x)+\xi_{2,k}P_{k+2}^{(\alpha,\beta)}(x)+\xi_{3,k}P_{k+3}^{(\alpha,\beta)}(x)+\xi_{4,k}P_{k+4}^{(\alpha,\beta)}(x)+\xi_{5,k}P_{k+5}^{(\alpha,\beta)}(x),
\tag{4.21}
\]
and dual basis functions
\[
\hat\varphi_k(x)=P_k^{(\alpha,\beta)}(x)+\delta_{1,k}P_{k+1}^{(\alpha,\beta)}(x)+\delta_{2,k}P_{k+2}^{(\alpha,\beta)}(x)+\delta_{3,k}P_{k+3}^{(\alpha,\beta)}(x)+\delta_{4,k}P_{k+4}^{(\alpha,\beta)}(x)+\delta_{5,k}P_{k+5}^{(\alpha,\beta)}(x),
\tag{4.22}
\]
where $\xi_{1,k},\ldots,\xi_{5,k}$ and $\delta_{1,k},\ldots,\delta_{5,k}$ are chosen to be the unique constants such that $\varphi_k(x)\in Z_N$ and $\hat\varphi_k(x)\in\hat Z_N$ for all $k=0,1,\ldots,N-5$.

Equation (4.20) is equivalent to the matrix equation
\[
\bigl(\hat P+\gamma_1\hat Q+\gamma_2\hat S+\gamma_3\hat T\bigr)\mathbf{v}=\mathbf{f},
\tag{4.23}
\]
where the elements of the matrices $\hat P$, $\hat Q$, $\hat S$, and $\hat T$ can be obtained in the same way as in the previous sections; the details are omitted here.

5. Numerical Results

In this section some examples are considered, aiming to illustrate how the proposed algorithms of the previous sections can be applied. Comparisons between the JDPG method and other methods proposed in [29–32] are made.

Example 5.1. Consider the one-dimensional third-order equation
\[
u^{(3)}(x)+\pi u^{(2)}(x)+2\pi u^{(1)}(x)+3\pi u(x)=f(x),\qquad x\in I,\qquad
u(\pm 1)=\pm\frac{1}{\sqrt{2}},\qquad u^{(1)}(1)=\frac{\pi}{4\sqrt{2}},
\tag{5.1}
\]
with the exact smooth solution
\[
u(x)=\sin\Bigl(\frac{\pi x}{4}\Bigr).
\tag{5.2}
\]

Table 1 lists the maximum pointwise error of $u-u_N$ obtained with the JDPG method for various choices of $\alpha$, $\beta$, and $N$.

Example 5.2. Consider the one-dimensional fifth-order differential problem
\[
u^{(5)}-\gamma_1 u^{(3)}-\gamma_2 u^{(1)}-\gamma_3 u=f(x),\qquad x\in I,\qquad
u(\pm 1)=0,\qquad u^{(1)}(\pm 1)=\mp\frac{\pi}{2},\qquad u^{(2)}(1)=-2\pi,
\tag{5.3}
\]
with the exact solution $u(x)=x^{2}\cos\bigl(\tfrac{\pi}{2}x\bigr)$.

Table 2 lists the maximum pointwise error obtained with the JDPG method for various choices of $\alpha$, $\beta$, $\gamma_1$, $\gamma_2$, $\gamma_3$, and $N$. The numerical results of this example show that the JDPG method converges exponentially.

Example 5.3. Consider the following fifth-order boundary value problem (see [29–32]):
\[
u^{(5)}(x)-u(x)=-(15+10x)\,e^{x},\qquad x\in[0,1],
\]
\[
u(0)=0,\quad u(1)=0,\quad u^{(1)}(0)=1,\quad u^{(1)}(1)=-e,\quad u^{(3)}(0)=-3.
\tag{5.4}
\]
The analytic solution of this problem is $u(x)=x(1-x)e^{x}$. Approximate solutions $u_N(x)$ $(N=10,13,20,26,40)$ are obtained by the proposed method. Table 3 exhibits a comparison between the errors obtained by the JDPG method and by the sixth-degree B-spline method [31], the sextic spline method [32], the nonpolynomial sextic spline method [29], and the computational method in [30]. The numerical results show that the JDPG method is more accurate than the existing methods.

Example 5.4. Consider the following third-order ODE with polynomial coefficients:
\[
u^{(3)}(x)+x^{2}u^{(2)}(x)+(x^{3}+4x)u^{(1)}(x)+(3x^{2}+x+2)u(x)=g(x),\qquad x\in[-1,1],
\tag{5.5}
\]
subject to
\[
u(\pm 1)=u^{(1)}(1)=0,
\tag{5.6}
\]
where $g$ is chosen such that the exact solution is $u(x)=(1-x^{2})(1-x)e^{2x}$.

Equation (5.5) can be rearranged to take the form
\[
u^{(3)}(x)+\bigl(x^{2}u(x)\bigr)^{(2)}+\bigl(x^{3}u(x)\bigr)^{(1)}+x\,u(x)=g(x),
\tag{5.7}
\]
and accordingly its fully integrated form is
\[
u(x)+\int^{(1)}x^{2}u(x)\,(dx)^{1}+\int^{(2)}x^{3}u(x)\,(dx)^{2}+\int^{(3)}x\,u(x)\,(dx)^{3}
=f(x)+\sum_{i=0}^{2}d_iP_i^{(\alpha,\beta)}(x).
\tag{5.8}
\]
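The rearrangement leading to (5.7) can be verified symbolically; the following short SymPy check (ours, not part of the paper) confirms that (5.7) reproduces the variable coefficients of (5.5).

```python
# Symbolic check that (5.7) expands to the same operator as (5.5).
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')

lhs_55 = (u(x).diff(x, 3) + x**2 * u(x).diff(x, 2)
          + (x**3 + 4 * x) * u(x).diff(x) + (3 * x**2 + x + 2) * u(x))
lhs_57 = (u(x).diff(x, 3) + sp.diff(x**2 * u(x), x, 2)
          + sp.diff(x**3 * u(x), x) + x * u(x))
print(sp.simplify(sp.expand(lhs_55 - lhs_57)))   # prints 0
```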

Using the quadrature dual-Petrov-Galerkin method described in Section 3.2, we report the maximum pointwise error of $u-u_N$ for various choices of $\alpha$, $\beta$, and $N$ in Table 4. The numerical results show very good agreement between the approximate solution obtained by the quadrature JDPG method and the exact solution, and they indicate that the method converges exponentially.

6. Concluding Remarks

In this paper, we described a JDPG method for the fully integrated forms of third- and fifth-order ODEs with constant coefficients. Because the coefficients are constant, the matrix elements of the discrete operators are provided explicitly, which greatly simplifies the implementation and reduces the computational effort of obtaining solutions. However, the integrated form of the source function (involving severalfold indefinite integrals) should be known analytically, and the right-hand side vector requires quadrature approximations. The approach is also extended to ODEs with polynomial coefficients. Numerical results exhibit the high accuracy of the proposed numerical methods.

Acknowledgment

The authors are very grateful to the referees for carefully reading the paper and for their comments and suggestions which have improved the paper.