To date, researchers have usually used spectral and pseudospectral methods only for the numerical approximation of ordinary and partial differential equations, and mostly with polynomial bases. The principal contribution of this paper is to develop the expansion approach based on general basis functions (with polynomial bases as a particular case) for solving general operator equations, whose particular cases include integral equations, ordinary differential equations, difference equations, partial differential equations, and fractional differential equations. In other words, this paper presents the expansion approach for solving general operator equations of the form $Lu + Nu = f$, subject to the boundary condition $Bu = \lambda$, where $L$, $N$, and $B$ are linear, nonlinear, and boundary operators, respectively, defined on a suitable Hilbert space, $\Omega$ is the domain of approximation, $\lambda$ is an arbitrary constant, and $f$ is an arbitrary function. A further contribution of this paper is to introduce a general version of the pseudospectral method based on the general interpolation problem. Finally, some experiments demonstrate the accuracy of our development, and an error analysis is presented in the $L^2$ norm.

1. Introduction

Approximation theory is an important field of mathematics with both pure and applied aspects. This field tries to approximate complicated functions by simpler representations. Some important branches of this field are interpolation, extrapolation, best approximation, and so forth. Expansion series also have wide application in approximation theory. An expansion series represents a function using suitable basis functions, such as standard polynomials (nonperiodic case) and trigonometric polynomials (periodic case). Taylor series and Fourier series, in classical and nonclassical forms, are two widely applicable kinds of expansions. These series are used in the numerical solution of ordinary differential, partial differential, and integral equations. Weighted residual methods are a class of methods that use expansion series for solving ordinary differential, partial differential, and integral equations. In these methods, substituting a suitable expansion of the exact solution into the equation yields a residual, and the residual is then minimized in a certain way. This minimization leads to specific methods such as the Galerkin, collocation, and Tau formulations. In this paper, using Fourier series and interpolation as expansions, we implement the spectral and pseudospectral methods for solving the general operator equation $Lu + Nu = f$, subject to the boundary condition $Bu = \lambda$. The remainder of the paper proceeds as follows. In Sections 1.1 to 1.4, we introduce the preliminaries and notations, the general interpolation problem, spectral and pseudospectral methods, and operator equations. In Section 2, we implement the spectral method for the general operator equation. Section 3 is devoted to the implementation of the pseudospectral method for the general operator equation, with error analysis. In Section 4, we present some test experiments to show the accuracy and validity of our approach.
Finally, in the last section, we present a brief conclusion.

1.1. Preliminaries and Notations

First we introduce some notations, which we use in the following.

Definition 1. A mapping from a linear space $X$ into its scalar field is called a functional. The set of all bounded linear functionals on the linear space $X$ is itself a linear space, called the algebraic conjugate or dual space of $X$, and is denoted by $X^*$ [1].

Definition 2. A projection is a linear transformation $P$ from a vector space to itself such that $P^2 = P$; that is, $P$ leaves its image unchanged. Though abstract, this definition of projection formalizes and generalizes the idea of graphical projection [1].

Definition 3. Suppose $X$ is a linear space and $X^*$ is its dual space; then, $L_1, \dots, L_n \in X^*$ are linearly independent if and only if $\sum_{i=1}^{n} c_i L_i = 0$ implies $c_1 = \dots = c_n = 0$.

Theorem 4. Let $X$ be an $n$-dimensional space, and let $x_1, \dots, x_n \in X$ be linearly independent. Then, $L_1, \dots, L_n \in X^*$ are linearly independent if and only if $\det[L_i(x_j)]_{i,j=1}^{n} \neq 0$.

Proof. See [1].

Remark 5. Suppose $X$ is a suitable function space; then, the integral operator, the derivative operator, and the shift operator are linearly independent functionals on $X$ [1].

1.2. General Interpolation Problem

The general interpolation problem is stated as follows. Let $X$ be a space of dimension $n$, and let $L_1, \dots, L_n$ be given linear functionals on $X$. Find an $x \in X$ such that $L_i(x) = w_i$, $i = 1, \dots, n$, where the $w_i$ are given values. This form of interpolation generalizes classical polynomial interpolation. To make the connection, we take $X$ to be the space of polynomials of degree at most $n - 1$ and define the functionals by $L_i(p) = p(x_i)$, where the $x_i$ are a set of distinct points. The interpolation problem is then to find a polynomial taking on preassigned values $w_i$ at the points $x_i$; this type of interpolation is called Lagrange interpolation. Three important subjects in generalized interpolation are the existence, uniqueness, and accuracy of the corresponding interpolation problems. The first two subjects are discussed in [1]; the third is much more difficult, and a useful answer can be given only for certain types of polynomial interpolation. For a review of the literature on this subject, see [1]. Throughout this paper, we take the linear space $X$ to be a Hilbert space $H$.
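To make this concrete, the generalized interpolation problem $L_i(x) = w_i$ can be solved numerically by assembling the matrix $[L_i(\phi_j)]$ for any basis $\{\phi_j\}$ of $X$; by Theorem 4 above, the problem is solvable exactly when this matrix is nonsingular. The following Python sketch is our illustrative addition (all names are hypothetical) and recovers the Lagrange case:

```python
import numpy as np

def general_interpolate(functionals, basis, w):
    """Solve the generalized interpolation problem L_i(p) = w_i.

    functionals: callables mapping a basis function to a scalar, L_0..L_n
    basis:       callables phi_0..phi_n spanning the (n+1)-dimensional space X
    w:           prescribed interpolation values w_i
    Returns coefficients c such that p = sum_j c[j] * phi_j.
    """
    A = np.array([[L(phi) for phi in basis] for L in functionals])
    return np.linalg.solve(A, np.asarray(w, dtype=float))

# Lagrange case: X = polynomials of degree <= 2 and L_i(p) = p(x_i).
nodes = [0.0, 0.5, 1.0]
basis = [lambda x: 1.0, lambda x: x, lambda x: x**2]        # monomial basis
functionals = [lambda phi, xi=xi: phi(xi) for xi in nodes]  # point evaluations
c = general_interpolate(functionals, basis, [1.0, 1.25, 2.0])  # data of 1 + x^2
```

Here the recovered coefficients are those of $1 + x^2$ in the monomial basis; replacing the point-evaluation functionals by, say, derivative or moment functionals changes nothing in the solver.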

1.3. Spectral and Pseudospectral Methods

In the last two decades, spectral and in particular pseudospectral methods have emerged as intriguing alternatives in many situations and as superior ones in several areas [27]. Spectral and pseudospectral methods are known as highly accurate solvers for ordinary differential, partial differential, and integral equations. The basic idea of spectral methods is to use a general Fourier series as an approximate solution with unknown coefficients [2]. The three most widely used spectral versions are the Galerkin, collocation, and Tau methods [2]. Their utility rests on the fact that if the solution sought is smooth, usually only a few terms of an expansion in global basis functions are needed to represent it to high accuracy. It is well known that, for smooth solutions, spectral methods converge to the solution of the continuous problem faster than any finite power of $1/N$, where $N$ is the dimension of the reduced-order model [2]. Spectral methods, in the context of numerical schemes for differential equations, belong to the family of weighted residual methods, which are traditionally regarded as the foundation of many numerical methods such as finite element, spectral, finite volume, and boundary element methods. Weighted residual methods represent a particular group of approximation techniques in which the residuals are minimized in a certain way, and this leads to specific methods including the Galerkin, collocation, and Tau formulations. The basic idea of pseudospectral methods (see, e.g., [6-10]) is to interpolate an unknown function (the exact solution of our problem) at suitable points and to determine the unknown values there. In other words, one uses basis functions $\phi_i$, such as polynomials, to represent an unknown function (the approximate solution of the operator equation) as $u_N = \sum_{i=0}^{N} a_i \phi_i$. An important feature of pseudospectral methods is that one is usually content with obtaining an approximation to the solution on a discrete set of grid points $x_j$.
One of several ways to implement the pseudospectral method is via matrix operations. For instance, for the differential operator $d/dx$, there is a useful matrix $D$, called the differentiation matrix: one finds a matrix $D$ such that, at the grid points $x_j$, $(Du)_j \approx u'(x_j)$, where $u$ is the vector of values of the function at the grid points. For every operator, we can likewise define a corresponding operational matrix. Frequently, orthogonal polynomials such as Chebyshev polynomials, Jacobi polynomials, and Hermite polynomials are used as basis functions, and the grid points are then the corresponding Chebyshev, Jacobi, and Hermite points, respectively. In the Chebyshev case, the entries of the differentiation matrix are explicitly known (see, e.g., [7]). Using such an approach for solving boundary value problems involves the solution of linear systems of equations which are known to be very ill-conditioned; for example, for methods based on orthogonal polynomials, the condition number of the approximate first-order operator grows like $N^2$, while the condition number of the second-order operator in general scales like $N^4$. For a review of the literature on this method, see [6]. Preconditioning is the standard procedure for mitigating the ill-conditioned systems obtained from pseudospectral methods [6]. The method has been used for the numerical solution of integral equations [11], partial differential equations [8, 10], and ordinary differential equations [9, 10].
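As an illustration of this matrix viewpoint (our addition, following the well-known Chebyshev construction referenced above, e.g., in [7]), the following sketch builds the Chebyshev differentiation matrix at the Chebyshev points and checks it on a smooth function:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Chebyshev points x."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # Chebyshev points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                       # diagonal via row sums
    return D, x

D, x = cheb(16)
u = np.exp(x)                     # test function with u' = u
err = np.max(np.abs(D @ u - u))   # spectral accuracy at only 17 points
```

At $N = 16$ the derivative of $e^x$ is already accurate to near machine precision, even though only 17 grid values are used.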

1.4. Operator Equations

Many important engineering problems fall into the category of operator equations with boundary operator conditions, such as integral, difference, and partial differential equations. The general form of these equations is $Lu + Nu = f$ on a domain of approximation $\Omega$, subject to the boundary condition $Bu = \lambda$, where $L$, $N$, and $B$ are linear, nonlinear, and boundary operators, respectively, defined on a suitable Hilbert space, and $f$ is an arbitrary function. Necessary and sufficient conditions for the existence and uniqueness of solutions of (8) can be found in [12]. Obtaining explicit solutions of operator equation (8) is in general a difficult problem, and it depends on the properties of the operators $L$, $N$, and $B$. Some numerical methods have been presented for solving such operator problems, such as iterative methods and the method of [13]. Simplifying the nonlinear term of operator equation (8) to linear operators, for easy and efficient implementation, is an important subject; some methods exist for this simplification, such as Taylor series and the linearization method [13]. In Section 2, we discuss this subject in detail.

2. Spectral Method for General Operator Equation

In this section, we describe the spectral methods for solving the general operator equation $Lu + Nu = f$, subject to the boundary condition $Bu = \lambda$. For this purpose, first consider a suitable set of basis functions $\{\phi_i\}_{i=0}^{N}$ and expand a function $u$ (the exact solution of operator equation (8)) in these basis functions as in (9). From the basis property of $\{\phi_i\}$, we can conclude that the expansion coefficients are uniquely determined. Now, we substitute the expansion series (9) into operator equation (8) and its boundary conditions, obtaining (11). If the basis functions $\phi_i$ automatically satisfy the boundary conditions, they are called Galerkin basis functions. From (11), we must simplify the two terms $L u_N$ and $N u_N$, and we perform this simplification in two cases. Simplification of the first, linear case $L u_N$ is easy, because the linearity of the operator $L$ gives $L u_N = \sum_{i=0}^{N} a_i L \phi_i$. The nonlinear case $N u_N$ is rather cumbersome, and several ideas exist for its simplification. We now present one idea for simplifying the nonlinear term $N u_N$. For this goal, starting from the basis functions $\phi_i$ and using the Gram-Schmidt algorithm, we must obtain an orthogonal system $\{e_i\}$. Then, it is not difficult to see that the expansion (13) holds. Now, from (11) and using (13), we obtain (15). By applying suitable projection operators defined on $H$ to (15), we obtain (16). If we suppose that the boundary conditions give us $k$ equations, then from (16) we obtain a system of equations for the unknown coefficients; so, to satisfy the boundary conditions, we must replace $k$ of these equations by the boundary equations, and the resulting system must be solved by a suitable iterative method such as Newton's method.
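As a minimal illustration of the Galerkin variant (our addition, using a hypothetical model problem), consider $-u'' = f$ on $[0, \pi]$ with $u(0) = u(\pi) = 0$ and the Galerkin basis $\sin(kx)$, which satisfies the boundary conditions automatically; projecting onto the same basis decouples the coefficient equations:

```python
import numpy as np

def trap(y, x):
    """Composite trapezoidal rule for the L2 inner-product integrals."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def galerkin_poisson(f, n_modes, n_quad=2001):
    """Galerkin solution of -u'' = f on [0, pi], u(0) = u(pi) = 0,
    with the Galerkin basis sin(kx) (boundary conditions built in).
    Testing against sin(jx) gives the decoupled equations
    a_k * k**2 * (pi / 2) = <f, sin(kx)>."""
    x = np.linspace(0.0, np.pi, n_quad)
    return np.array([
        trap(f(x) * np.sin(k * x), x) / (k**2 * np.pi / 2.0)
        for k in range(1, n_modes + 1)
    ])

a = galerkin_poisson(np.sin, 4)  # f = sin(x), so the exact solution is u = sin(x)
```

For $f = \sin x$ the exact solution is $u = \sin x$, so only the first Galerkin coefficient survives.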

Remark 6. In (16), if the projections are taken against the basis functions with respect to a weight function $w$, the method is called the Tau method. Also, if the projections are taken against Dirac delta functions $\delta(x - x_j)$, where the $x_j$ are a set of distinct points (i.e., the projections are point evaluations), then the method is called the collocation method.

3. The General Pseudospectral Method for General Operator Equation

In this section, we describe the general pseudospectral method for the numerical solution of general operator equations. For this goal, we first introduce the general pseudospectral method. Let $L_0, \dots, L_N$ be functionals and $u_0, \dots, u_N$ scalars satisfying the interpolation conditions (17); in other words, (17) defines a generalized interpolation problem with respect to the functionals $L_i$. Also, we suppose that the boundary condition can be written in the interpolation form (18), where the functionals appearing in (18) are defined on $H$.

Now, solving the simultaneous interpolation problems (17) and (18) is a principal part of this paper. The next theorem gives the explicit form of the solution of the interpolation problems (17) and (18).

Theorem 7. Suppose $H$ is a Hilbert space and $H_N$ is a finite-dimensional subspace of $H$; then, the solution of the generalized interpolation problem (17) and (18) has the form (19), where the basis functions appearing in (19) are built from functionals on $H$ and satisfy the cardinality conditions (20) and (21).

Proof. By applying the functionals of (17) and (18) to both sides of (19) and using properties (20) and (21), it is easy to show that (19) satisfies conditions (17) and (18).

Remark 8. Particular cases of the interpolation (19) are the Lagrange, barycentric, Hermite, and Birkhoff interpolations.

The important point in Theorem 7 is the existence of basis functions satisfying (20) and (21), respectively. We can settle this question using the concept of linear independence of functionals. We carry out the argument for the conditions (20); in a similar manner, one obtains the necessary and sufficient conditions for the existence of basis functions satisfying (21). Now, we take an arbitrary basis of $H_N$ and construct from it new basis functions which satisfy (20) and (21).

For this goal, suppose each new basis function is expressed in terms of the old basis as in (22); then, we must determine the unknown coefficients. We start with the first new basis function; this means that, from (23), we must obtain one unknown coefficient, and it suffices to apply one of the functionals to (22) to obtain it. Also, for the second new basis function, we have (26) with two unknown coefficients. If we apply two of the functionals to (26), we can compute the two unknown coefficients, and they exist if and only if the corresponding determinant of functional values is nonzero. By repeating this procedure, we find that the unknown coefficients at every stage exist if and only if the corresponding determinants of functional values are nonzero, and this condition is equivalent to the linear independence of the functionals (Theorem 4).
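Numerically, this construction amounts to assembling the matrix $A_{ik} = L_i(\phi_k)$ of functional values and solving $AC = I$: the columns of $C$ are the coefficients of new basis functions $h_j$ with $L_i(h_j) = \delta_{ij}$, and the solve succeeds exactly when the determinant condition above holds. A small illustrative sketch (our addition, with a hypothetical Hermite-type set of functionals):

```python
import numpy as np

def cardinal_basis_coeffs(functionals, basis):
    """Return C whose column j gives h_j = sum_k C[k, j] * phi_k
    satisfying L_i(h_j) = delta_ij, provided det[L_i(phi_k)] != 0."""
    A = np.array([[L(phi) for phi in basis] for L in functionals])
    return np.linalg.solve(A, np.eye(len(basis)))

# Degree <= 2 polynomials with L_0(p) = p(0), L_1(p) = p'(0), L_2(p) = p(1).
basis = [np.poly1d([1.0]), np.poly1d([1.0, 0.0]), np.poly1d([1.0, 0.0, 0.0])]
functionals = [lambda p: p(0.0), lambda p: p.deriv()(0.0), lambda p: p(1.0)]
C = cardinal_basis_coeffs(functionals, basis)
# First cardinal function: h_0(x) = 1 - x^2 (value 1 under L_0, 0 under L_1, L_2)
```

Swapping in other independent functionals (moments, shifts, higher derivatives) changes only the matrix $A$, not the construction.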

Now, to implement the general pseudospectral method for the numerical solution of the general operator equation (8), subject to its boundary condition, we first substitute the general interpolation form (19) into (8). By Theorem 7, the general interpolant of the exact solution of (8), subject to its boundary condition, takes the form (19), with the interpolation data defined by (17) and (18). Also, we consider the general interpolation problem for the right-hand-side function $f$ of (8), with its own functionals and constants; then, we have (32). Now, substituting (19) and (32) into (8), we obtain (33); then, applying suitable projection operators defined on $H$ to (33), we obtain (34). From (34), we obtain a system of algebraic equations. Simplification of the left-hand side of (34) is an important subject, and we discuss it for the linear operator $L$ and the nonlinear operator $N$.

(i) Linear Case. In the linear case, if we suppose that the projection operators are linear, then the linearity of the operator $L$ gives (35); so, to obtain a matrix relation between the values of the unknown function and the values of its image under $L$, it is sufficient to calculate the quantities in (36), with the notation of (37). The resulting (bracketed) matrix is a square matrix and is used extensively in the pseudospectral method. In the particular case when the linear operator is the derivative, this matrix is called the derivative (differentiation) matrix and is very ill-conditioned.

(ii) Nonlinear Case. The nonlinear case is rather cumbersome, and different ideas can be used. The first idea is to approximate the nonlinear operator by a linear operator and then use the method of part (i); but this approach is very complicated, since approximating a nonlinear operator by linear operators is in general difficult and has been done only for particular operators [13]. The second idea is based on approximating the nonlinear operator via Taylor series [14]. Another approach for simplifying the nonlinear operator is based on orthogonal expansions (general Fourier series) of the nonlinear term. For this goal, we must obtain an orthogonal system from the basis functions using the Gram-Schmidt algorithm. Then, it is not difficult to see that the stated expansions hold, with coefficients as defined there. We must note that these expansion coefficients are functions of the unknown interpolation data.
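The Gram-Schmidt step used here can be sketched in discretized form (our illustrative addition): the basis values are orthonormalized in a discrete weighted $L^2$ inner product on a grid, which is how the orthogonal system would be produced in practice when closed-form orthogonal polynomials are not available:

```python
import numpy as np

def gram_schmidt(V, dx, w=None):
    """Orthonormalize the columns of V (basis values on a uniform grid
    with spacing dx) in the discrete weighted L2 inner product
    <u, v> = sum(u * v * w) * dx."""
    if w is None:
        w = np.ones(V.shape[0])
    Q = []
    for v in V.T:
        u = v.astype(float).copy()
        for q in Q:
            u -= np.sum(u * q * w) * dx * q   # remove component along q
        u /= np.sqrt(np.sum(u * u * w) * dx)  # normalize
        Q.append(u)
    return np.column_stack(Q)

x = np.linspace(-1.0, 1.0, 2001)
dx = x[1] - x[0]
V = np.column_stack([x**k for k in range(3)])  # monomials 1, x, x^2
Q = gram_schmidt(V, dx)  # close to the normalized Legendre polynomials
```

With the monomials on $[-1, 1]$ and unit weight, the columns of `Q` approximate the normalized Legendre polynomials.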

Now, as a case study, let $H$ be the space of polynomials with real coefficients, which is a Hilbert space with respect to the $L^2$ norm, and let $H_N$ be a finite-dimensional subspace of $H$.

Also, suppose that the functionals in (9) are defined in terms of functionals on $H$. For instance, well-known functionals with wide application are the moment operator, the $i$th-derivative operator, the shift operator, and point evaluation at distinct points (in the particular case, the roots of shifted orthogonal polynomials: Hermite and Laguerre polynomials on unbounded domains and shifted Chebyshev, Legendre, and, in the general case, Jacobi polynomials on bounded domains [15-17]). Also, in higher dimensions, we can define suitable analogous functionals.

3.1. Error Analysis

The error analysis of the pseudospectral method in the general case is very difficult, and we can perform this analysis only in the polynomial case. For this goal, we first introduce some useful notations and theorems that are used in this paper.

Definition 9. The Jacobi polynomials [18], $P_n^{(\alpha,\beta)}(x)$ with $\alpha, \beta > -1$, are defined as the orthogonal polynomials with respect to the weight function $w^{\alpha,\beta}(x) = (1-x)^{\alpha}(1+x)^{\beta}$ on $[-1,1]$; an explicit formula and the standard recurrence relations for these polynomials are given in [18]. Jacobi polynomials have important applications in several fields of numerical analysis, such as quadrature formulas and spectral methods. Also, the shifted Jacobi polynomials are defined as orthogonal polynomials on a general interval with respect to the correspondingly shifted weight function, obtained by a linear change of variable. For more properties and applications of these polynomials, see [15-17]. Now, let $P_N^{(\alpha,\beta)}$ denote the Jacobi polynomial of degree $N$ with respect to the weight function $w^{\alpha,\beta}$, and let $\mathbb{P}_N$ denote the space of polynomials of degree less than or equal to $N$.
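As a numerical companion to this definition (our illustrative addition), the Jacobi polynomials can be evaluated by their standard three-term recurrence; for $\alpha = \beta = 0$ they reduce to the Legendre polynomials, which provides an easy consistency check:

```python
import numpy as np

def jacobi(n, a, b, x):
    """Evaluate the Jacobi polynomial P_n^{(a,b)}(x), a, b > -1,
    by the standard three-term recurrence."""
    x = np.asarray(x, dtype=float)
    if n == 0:
        return np.ones_like(x)
    p_prev = np.ones_like(x)
    p = 0.5 * (a + b + 2.0) * x + 0.5 * (a - b)   # P_1
    for k in range(2, n + 1):
        c1 = 2.0 * k * (k + a + b) * (2.0 * k + a + b - 2.0)
        c2 = (2.0 * k + a + b - 1.0) * (a * a - b * b)
        c3 = (2.0 * k + a + b - 1.0) * (2.0 * k + a + b) * (2.0 * k + a + b - 2.0)
        c4 = 2.0 * (k + a - 1.0) * (k + b - 1.0) * (2.0 * k + a + b)
        p, p_prev = ((c2 + c3 * x) * p - c4 * p_prev) / c1, p
    return p

xs = np.linspace(-1.0, 1.0, 5)
P2 = jacobi(2, 0.0, 0.0, xs)  # alpha = beta = 0 gives the Legendre polynomials
```

The same routine evaluates the shifted polynomials after the linear change of variable mentioned above.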

Remark 10. The best choice of points for polynomial interpolation is the roots of orthogonal polynomials, and in particular of Jacobi polynomials, due to their small Lebesgue constants [6].

The symbol $L^2_{w^{\alpha,\beta}}(-1,1)$ denotes the space of measurable functions whose square is Lebesgue integrable on $(-1,1)$ relative to the weight function $w^{\alpha,\beta}$. The inner product and norm of $L^2_{w^{\alpha,\beta}}(-1,1)$ are defined by $(u,v)_{w^{\alpha,\beta}} = \int_{-1}^{1} u(x)\, v(x)\, w^{\alpha,\beta}(x)\, dx$ and $\|u\|_{w^{\alpha,\beta}} = (u,u)_{w^{\alpha,\beta}}^{1/2}$. Another useful space is the weighted Sobolev space $H^m_{w^{\alpha,\beta}}(-1,1)$, with its associated seminorm and norm. A further norm, which appears in the error bounds of the spectral method, is the seminorm $|u|_{H^{m;N}_{w^{\alpha,\beta}}}$ involving only the highest-order derivatives (see [4, 5]). Now, we present a useful theorem.

Theorem 11. If $u \in H^m_{w^{\alpha,\beta}}(-1,1)$ for some $m \geq 1$, then one has $\|u - I_N u\|_{w^{\alpha,\beta}} \leq C N^{-m} |u|_{H^{m;N}_{w^{\alpha,\beta}}}$, where $I_N u$ is the Lagrange interpolant of $u$ at $N + 1$ distinct points (the roots of Jacobi polynomials) and $C$ is a constant independent of $N$.

Proof. See [4, 5].

In light of Theorem 11, we can obtain an error bound for any class of polynomial interpolation, such as Hermite interpolation, Birkhoff interpolation, and other classes. Indeed, given an arbitrary polynomial interpolant, we can rewrite it as a Lagrange interpolant and then apply Theorem 11 to obtain an error bound. Now, using Theorem 11, in the next theorem we obtain an upper bound for the error of our approximate solution of (8).
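The convergence rate in Theorem 11 (decay faster than any fixed power of $1/N$ for smooth $u$) can be observed numerically. The following sketch (our addition) measures the maximum error of interpolation at the Chebyshev points (Jacobi points with $\alpha = \beta = -1/2$, up to normalization) using the stable barycentric formula:

```python
import numpy as np

def cheb_interp_error(f, N, n_test=500):
    """Max error of interpolating f at the N+1 Chebyshev points
    cos(j*pi/N), measured on a fine grid via the barycentric formula."""
    j = np.arange(N + 1)
    xk = np.cos(np.pi * j / N)
    wk = (-1.0) ** j                  # barycentric weights for these points
    wk[0] *= 0.5
    wk[-1] *= 0.5
    fk = f(xk)
    worst = 0.0
    for x in np.linspace(-1.0, 1.0, n_test):
        d = x - xk
        if np.any(d == 0.0):          # test point coincides with a node
            p = fk[np.argmin(np.abs(d))]
        else:
            p = np.sum(wk * fk / d) / np.sum(wk / d)
        worst = max(worst, abs(p - f(x)))
    return worst

errs = [cheb_interp_error(np.exp, N) for N in (4, 8, 16)]
```

For the smooth function $e^x$, doubling $N$ slashes the error far faster than any fixed algebraic rate would.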

Theorem 12. Suppose $L$ and $N$ are the linear and nonlinear operators of (8), where $L$ is invertible and the exact and approximate solutions satisfy (8); then, the stated error bound holds, where the approximate solution is an arbitrary polynomial interpolant satisfying the interpolation conditions.

Proof. First, we note that the approximate solution can be expressed via Lagrange interpolation; for this goal, it suffices to interpolate the function at $N + 1$ distinct points. From (8) and the invertibility of $L$, we obtain (51); also, by the pseudospectral method, we obtain (52). Multiplying both sides of (52) by the Lagrange interpolation basis functions and summing over the nodes, we obtain (53). Now, subtracting (51) from (53), we obtain the error equation; therefore, the error is bounded by the interpolation error. Finally, by using Theorem 11, we obtain the desired bound, and the proof is complete.

4. The Test Experiments

Experiment 1. Let us consider the following nonlinear differential equation problem [19]: with the exact solution .
By simplifying (57), we obtain the linear and nonlinear parts of our ordinary differential equation. Now, we take the collocation points to be the roots of Jacobi polynomials of degree $N$; then, we consider the interpolation problem (61), where the corresponding polynomial interpolant with respect to problem (61) satisfies the boundary conditions automatically and the expansion coefficients are unknown. In Table 1, we see the comparison between the exact and approximate solutions obtained from the pseudospectral method for several choices of the parameters.

Experiment 2. Consider the second-order difference equation problem [20]: The exact solution is . In this case, the equation has no nonlinear operator part. A comparison between the exact and approximate solutions is shown in Table 2 and Figure 1.

Experiment 3. Consider the Fredholm integrodifferential equation [21], under the given initial condition and with the stated exact solution. In this case, the corresponding operators of (8) are identified accordingly.

Now, using the pseudospectral method, we obtain the results shown in Table 3.

Experiment 4. We consider the linear partial differential equation problem [22]: with the exact solution .
In this case, the operators of (68) are identified accordingly. We obtain the approximate solution of (68) on the stated domain. For this purpose, we consider a grid of discrete points and the general Lagrange interpolant at these points in the following (two-dimensional) form: a product of ordinary one-dimensional Lagrange interpolants in each variable. Using the collocation method for arbitrary choices of the parameters, we obtain the exact solution.

Experiment 5. Finally, let us consider the linear fractional differential equation problem [23]: This equation has the exact solution .
In this case, the operator involved is linear. Using the presented method for every choice of the parameters, we obtain the exact solution.

5. Conclusion

In this paper, we have developed the expansion approach for solving operator equations of the form $Lu + Nu = f$, subject to the boundary condition $Bu = \lambda$. The principal contribution is the development of the expansion approach for solving the general operator equation, whose particular cases include integral equations (Experiment 3), ordinary differential equations (Experiment 1), difference equations (Experiment 2), partial differential equations (Experiment 4), and fractional differential equations (Experiment 5). Also, an error analysis has been presented in the $L^2$ norm. Using the roots of the more general Jacobi polynomials (with free parameters $\alpha$ and $\beta$), rather than classical orthogonal polynomials such as Chebyshev and Legendre polynomials, as collocation points in the pseudospectral method is an important feature of this work (Experiments 2 and 3).

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

The authors wish to thank the anonymous referees for their valuable comments and suggestions. They would also like to thank the Editor, Dr. Yaohanz Li, for giving them the opportunity to revise the paper.