Abstract

Several numerical methods for boundary value problems use integral and differential operational matrices, expressed in polynomial bases in a Hilbert space of functions. This work presents a sequence of matrix operations allowing a direct computation of operational matrices for polynomial bases, orthogonal or not, starting from any previously known reference matrix. Furthermore, it shows how to obtain the reference matrix for a chosen polynomial basis. The results presented here can be applied not only to integration and differentiation, but also to any linear operation.

1. Introduction

One of the main characteristics of the use of polynomial bases is to reduce the solving process of differential or integral equations to systems of algebraic equations, expressing the solution by truncated series approximations up to order $N$ [1-4], such that

$f(x) \approx \sum_{n=0}^{N} c_n\,\phi_n(x).$   (1.1)

The choice of the polynomial basis is normally one of the orthogonal bases of the Hilbert space of functions, in order to ensure that extending the series to a higher order does not affect the previously calculated coefficients, a property applicable to classical methods, as the Runge-Kutta, for instance [5]. However, it is also possible to use nonorthogonal bases, as done in [6, 7].

Considering the line vector $C = [c_0, c_1, \dots, c_N]$ that contains the coefficients and the column vector $\Phi_N(x) = [\phi_0(x), \phi_1(x), \dots, \phi_N(x)]^T$ that contains the base polynomials, expression (1.1) can be written as $f(x) \approx C\,\Phi_N(x)$ [8].

The central idea when working with operational matrices is to write the integral or the derivative of the elements of the basis as a linear combination of the same basis elements, transforming the integral and differential operations on $f$ into matrix operations in a Hilbert space [8].

So, defining $P$ as the operational integration matrix and $D$ as the operational differentiation matrix, it is possible to obtain the coefficients of the series that represent the integrated function or the differentiated function by $C_P^T = P\,C^T$ and $C_D^T = D\,C^T$.

Consequently,

$\int_a^x f(t)\,dt \approx C_P\,\Phi_N(x), \qquad \frac{df}{dx}(x) \approx C_D\,\Phi_N(x).$
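These coefficient-level operations are easy to exercise numerically. The sketch below (not from the original paper; the truncation order `N`, the sample coefficients, and the helper `pad` are illustrative choices) assembles the differentiation relation for the Legendre basis row by row, so that differentiating a truncated series reduces to a vector-matrix product:

```python
import numpy as np
from numpy.polynomial import legendre as leg

N = 4  # truncation order (arbitrary choice for this sketch)

def pad(c, n):
    """Pad a coefficient vector with zeros up to length n."""
    out = np.zeros(n)
    out[: len(c)] = c
    return out

# Row i holds the Legendre coefficients of dP_i/dx, so that the
# differentiation of the basis vector reads d(Phi)/dx = M Phi.
M = np.array([pad(leg.legder(np.eye(N + 1)[i]), N + 1) for i in range(N + 1)])

# A truncated series f = C Phi is then differentiated at coefficient level:
C = np.array([1.0, 2.0, 0.0, -1.0, 3.0])  # arbitrary Legendre coefficients
C_diff = C @ M                            # coefficients of df/dx
```

Since differentiation lowers the degree, no truncation error arises here; integration is more delicate, as discussed in the following sections.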

Recently, Doha and Bhrawy [1] presented a method to obtain the operational matrices of integration considering the Jacobi polynomials.

Here, a simpler and more direct way to obtain the operational matrix, by using Theorem 2.1 from the next section, is presented. Additionally, a way to extend it to any polynomial basis, by using Theorem 3.1 presented in Section 3, is also developed.

In spite of the fact that those theorems are applied here to integration and differentiation operations, the result is valid for any linear operation, as shown ahead.

2. Obtaining the Operational Matrix

Theorem 2.1. Consider a square matrix $M$ describing the resulting coefficients of a linear operation $\mathcal{L}$ in the generic basis $\Phi_N$ as a function of the same basis, that is, $\mathcal{L}[\Phi_N(x)] = M\,\Phi_N(x)$, a series whose coefficient vector is $C$, and $\bar{C}$ the line vector of the resulting coefficients of the linear operation applied to the series. Then:

$\bar{C} = C\,M.$   (2.1)

Proof. Considering $f(x) \approx C\,\Phi_N(x)$, since $\mathcal{L}$ is a linear operation:

$\mathcal{L}[f(x)] = \mathcal{L}[C\,\Phi_N(x)] = C\,\mathcal{L}[\Phi_N(x)] = C\,M\,\Phi_N(x).$   (2.2)

On the other hand, writing the result of the operation as a series in the same basis, with line coefficient vector $\bar{C}$, gives $\mathcal{L}[f(x)] = \bar{C}\,\Phi_N(x)$; observing the matrix and vector dimensions, both expressions are scalars for each $x$.
As $\Phi_N$ is a generic basis, (2.2) implies (2.1) directly.

So, in order to build matrices representing the action of linear operations, such as differentiation and integration, the main task is to determine the matrix $M$ and transpose it, since, by Theorem 2.1, the column vector of resulting coefficients is $\bar{C}^T = M^T\,C^T$.

2.1. Example: Integration Matrix of the Legendre Polynomials

Considering that the polynomial basis used to describe a function is composed of Legendre polynomials $P_n(x)$ in the interval $[-1, 1]$, one can observe that, for Legendre polynomials [9],

$\int_{-1}^{x} P_n(t)\,dt = \frac{P_{n+1}(x) - P_{n-1}(x)}{2n+1}, \quad n \geq 1.$   (2.3)

Defining $\Phi_N(x) = [P_0(x), P_1(x), \dots, P_N(x)]^T$, one can write $\int_{-1}^{x} \Phi_N(t)\,dt \approx M\,\Phi_N(x)$, with the integral acting over the elements of the vector. The null coefficient on the $N$-order term of the series to be integrated, $c_N = 0$, assures equal dimensions for the vector to be integrated and the vector that results from the process.

Following (2.3), $M$ is a square $(N+1)\times(N+1)$ matrix, and:
(i) $m_{0,0} = m_{0,1} = 1$;
(ii) $m_{n,n+1} = \frac{1}{2n+1}$, $m_{n,n-1} = -\frac{1}{2n+1}$, $n = 1, \dots, N-1$;
(iii) $m_{N,N-1} = -\frac{1}{2N+1}$;
(iv) there is no $m_{N,N+1}$ term, because $P_{N+1}$ does not belong to the truncated basis.

Since the operational matrix is the transpose of $M$, when writing a code to implement a computational algorithm, it is enough to exchange the indices in the expressions above, obtaining $P = M^T$ directly.
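The items above translate directly into code. In the sketch below (a minimal illustration, with an arbitrary order `N` and arbitrary sample coefficients), the matrix $M$ is assembled from items (i)-(iv), transposed into the operational matrix, and the coefficient-level integration is checked against a reference antiderivative for a series whose $N$-order coefficient is null:

```python
import numpy as np
from numpy.polynomial import legendre as leg

N = 4  # truncation order (arbitrary)

# Integration matrix M for the Legendre basis on [-1, 1], items (i)-(iv):
# int_{-1}^x P_n(t) dt = (P_{n+1}(x) - P_{n-1}(x)) / (2n + 1), n >= 1.
M = np.zeros((N + 1, N + 1))
M[0, 0] = M[0, 1] = 1.0                  # (i): int P_0 = P_0 + P_1
for n in range(1, N):                    # (ii)
    M[n, n + 1] = 1.0 / (2 * n + 1)
    M[n, n - 1] = -1.0 / (2 * n + 1)
M[N, N - 1] = -1.0 / (2 * N + 1)         # (iii); (iv): P_{N+1} term truncated

P = M.T  # operational matrix: the transpose of M

# Coefficient-level integration of a series with null N-order coefficient:
C = np.array([2.0, -1.0, 0.5, 1.0, 0.0])
C_int = C @ M
```

Because $c_N = 0$, the truncated last row of $M$ never contributes and the result is exact.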

3. Obtaining the Operational Matrix for Any Polynomial Basis: The Sandwich Matrix

From an operational matrix expressed in a generic reference basis $\Phi_R$, it is possible to obtain the corresponding operational matrix in another, also generic, basis $\Phi_G$ by using a sequence of simple matrix operations, as Theorem 3.1 states.

Theorem 3.1 (“Sandwich Matrix” $S$). Given a generic polynomial basis $\Phi_G$ in a given interval, the matrix $M_G$ of the operational matrix theorem is obtained by $M_G = A_G\,S\,A_G^{-1}$, where the “sandwich matrix” is $S = A_R^{-1}\,M_R\,A_R$, with $A_G$ and $A_R$ being the matrices that describe the generic polynomials and the reference ones in the canonic basis, respectively.

Proof. Considering that $\mathcal{L}$ is a linear operation: $\mathcal{L}[\Phi_G(x)] = \mathcal{L}[A_G\,X(x)] = A_G\,\mathcal{L}[X(x)]$, where $X(x) = [1, x, x^2, \dots, x^N]^T$ is the canonic basis. Since the canonic basis can be written as a function of the reference basis as $X = A_R^{-1}\,\Phi_R$, it can be concluded that $\mathcal{L}[\Phi_G] = A_G\,A_R^{-1}\,\mathcal{L}[\Phi_R]$.
As $\mathcal{L}[\Phi_R] = M_R\,\Phi_R$, it implies that $\mathcal{L}[\Phi_G] = A_G\,A_R^{-1}\,M_R\,\Phi_R$. Now, it is necessary to transform $\Phi_R$ into the generic basis $\Phi_G$, in order to express the result as a function of the generic input basis.
From the defined matrices $A_G$ and $A_R$, the expression given above is written with $\Phi_R = A_R\,X = A_R\,A_G^{-1}\,\Phi_G$, and $A_G$ must be nonsingular.
Finally, since $\mathcal{L}[\Phi_G] = A_G\,A_R^{-1}\,M_R\,A_R\,A_G^{-1}\,\Phi_G$, the generic matrix is obtained by $M_G = A_G\,S\,A_G^{-1}$, where $S = A_R^{-1}\,M_R\,A_R$.

3.1. Comments

(i) No orthogonality condition has been imposed during the proof; thus, the result is valid for all polynomial bases, orthogonal or not.

(ii) The operational matrix of a generic polynomial basis is given by $P_G = M_G^T$. Indeed, for any linear operation, $\bar{C}^T = M_G^T\,C^T$ and, since $M_G = A_G\,S\,A_G^{-1}$, it can be concluded that $P_G = (A_G^{-1})^T\,S^T\,A_G^T$.

(iii) Since the reference basis is arbitrary, the canonic basis itself can be used and, thus, the matrix $A_R$ describing the elements of $\Phi_R$ as a function of the canonic basis is the identity. So, $S = M_{can}$, the matrix of the linear operation expressed in the canonic basis.

(iv) Taking the previous comment into account, the matrix of the generic basis can be obtained by $M_G = A_G\,M_{can}\,A_G^{-1}$, that is, considering the similarity definition [9], the matrix of the generic basis is similar to the matrix $M_{can}$. To summarize, the generating matrices of the operational matrices form similarity classes, one for each linear operation in the considered interval.

(v) Considering the uniqueness of the result of the linear operation of differentiation on continuous functions, the differentiation “sandwich matrix” is invariant, whatever the polynomial basis used to build it.

(vi) The elements of the last column of the integration operational matrix are arbitrary, since they multiply the null coefficient $c_N$. Therefore, if instead of the Legendre basis another one is used to build the “sandwich matrix” $S$, these elements may be different, but the result of the integration remains the same. So, the uniqueness of the “sandwich matrix” for integration is ensured except for the last line, the impact of which falls on the last column of the integration operational matrix.

3.2. Example: “Sandwich Matrix" for Integration

According to the last comment about Theorem 3.1, any polynomial basis can be used to build the “sandwich matrix” $S$, because the resulting matrices are all similar. For the sake of simplicity, the canonic basis will be chosen first.

Considering the interval $[-1, 1]$, the matrix $M$ is built in order to represent the integrals of the canonic basis as a function of the basis itself. By defining $X(x) = [1, x, x^2, \dots, x^N]^T$, one can write $\int_{-1}^{x} X(t)\,dt \approx M\,X(x)$, with the integral acting over the elements of the vector, that is, $\int_{-1}^{x} t^n\,dt$, $n = 0, 1, \dots, N$.

Consequently, $\int_{-1}^{x} t^n\,dt = \frac{x^{n+1}}{n+1} + \frac{(-1)^{n}}{n+1}$, $n = 0, 1, \dots, N$.

The nonnull elements are, therefore, $m_{n,n+1} = \frac{1}{n+1}$ and $m_{n,0} = \frac{(-1)^{n}}{n+1}$, with the $x^{N+1}$ term of the last row truncated. As apparent in the comments presented in the former subsection, for each linear operation, in the canonic basis, $S = M$.

Considering, for instance, $N = 4$, the “sandwich matrix” for integration is:

$S = \begin{bmatrix} 1 & 1 & 0 & 0 & 0 \\ -\frac{1}{2} & 0 & \frac{1}{2} & 0 & 0 \\ \frac{1}{3} & 0 & 0 & \frac{1}{3} & 0 \\ -\frac{1}{4} & 0 & 0 & 0 & \frac{1}{4} \\ \frac{1}{5} & 0 & 0 & 0 & 0 \end{bmatrix}.$

For the interval $[0, 1]$, the elements $m_{n,0}$, $n = 0, 1, \dots, N$, will be equal to zero, that is,

$S = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & \frac{1}{2} & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{3} & 0 \\ 0 & 0 & 0 & 0 & \frac{1}{4} \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}.$
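Both canonic matrices can be generated for any order by a single routine (a sketch; the function name `canonic_integration_matrix` and the lower-limit parameter `a` are illustrative, not from the paper):

```python
import numpy as np
from numpy.polynomial import polynomial as pol

N = 4  # arbitrary order

def canonic_integration_matrix(N, a):
    """Row n holds the canonic coefficients of int_a^x t^n dt,
    with the x^(N+1) term of the last row truncated."""
    S = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        if n < N:
            S[n, n + 1] = 1.0 / (n + 1)       # x^(n+1)/(n+1)
        S[n, 0] = -(a ** (n + 1)) / (n + 1)   # constant term: -a^(n+1)/(n+1)
    return S

S_m11 = canonic_integration_matrix(N, -1.0)  # interval [-1, 1]
S_01 = canonic_integration_matrix(N, 0.0)    # interval [0, 1]
```

For a series with null $N$-order coefficient, the product $C\,S$ reproduces the exact antiderivative vanishing at the lower limit.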

Considering now the process using the Legendre basis of Section 2.1 as reference, the “sandwich matrix” is $S = A_R^{-1}\,M_R\,A_R$, where $M_R$ is the integration operational matrix in the Legendre basis. For polynomials up to order $N = 4$, the matrix $A_R$ describing the Legendre polynomials as a function of the canonic basis is:

$A_R = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ -\frac{1}{2} & 0 & \frac{3}{2} & 0 & 0 \\ 0 & -\frac{3}{2} & 0 & \frac{5}{2} & 0 \\ \frac{3}{8} & 0 & -\frac{15}{4} & 0 & \frac{35}{8} \end{bmatrix}.$

Calculating,

$S = A_R^{-1}\,M_R\,A_R = \begin{bmatrix} 1 & 1 & 0 & 0 & 0 \\ -\frac{1}{2} & 0 & \frac{1}{2} & 0 & 0 \\ \frac{1}{3} & 0 & 0 & \frac{1}{3} & 0 \\ -\frac{1}{4} & 0 & 0 & 0 & \frac{1}{4} \\ \frac{1}{5} & -\frac{1}{21} & 0 & \frac{2}{9} & 0 \end{bmatrix}.$

As mentioned, this matrix differs from the one previously obtained with the canonic basis just by the last line elements. This does not modify the integration operation, since these elements are multiplied by the null coefficient $c_N$.

The wavelets and the orthogonal Jacobi polynomials shifted to a convenient interval were used in Galerkin processes [10], with the appropriate domain transformation. The “sandwich matrix” can also be applied in these cases, either transforming the equation or obtaining the matrix that describes the basis polynomials in the chosen interval as a function of the canonic basis.

3.3. Example: “Sandwich Matrix" for Differentiation

Starting with the canonic basis, the matrix $M$ can be built, describing the derivatives of the elements of the basis as a function of the basis itself. Therefore, with the vector $X(x)$ defined in the former subsection, $\frac{d}{dx} X(x) = M\,X(x)$, with the derivative acting over the components of the vector, that is, $\frac{d}{dx} x^n = n\,x^{n-1}$, $n = 0, 1, \dots, N$, for any considered domain.

Thus, the nonnull terms of the matrix are $m_{n,n-1} = n$, $n = 1, \dots, N$, and, according to the last comments from Section 3.1, $S = M$.

Considering $N = 4$, the differentiation matrix is given by

$S = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 \end{bmatrix}.$

In order to obtain this matrix from the Legendre basis, the process is analogous to the previous one, applying $S = A_R^{-1}\,M_R\,A_R$, where $M_R$ is now the differentiation operational matrix described in the Legendre basis. As highlighted in the comments on Theorem 3.1, presented in Section 3.1, the obtained matrix is identical.

3.4. Example: Chebyshev Operational Matrices for Integration and Differentiation

Some works present the Galerkin method supported by Chebyshev expansions [11-13] when solving differential equations. In order to help in this task, the integration and differentiation Chebyshev matrices will be obtained from the operational matrices on the Legendre basis, even though it would be easier to conduct this process using the canonic basis.

The matrix $A_G$ describing the Chebyshev polynomials as a function of the canonic basis, up to order $N = 4$, is:

$A_G = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ -1 & 0 & 2 & 0 & 0 \\ 0 & -3 & 0 & 4 & 0 \\ 1 & 0 & -8 & 0 & 8 \end{bmatrix}.$

By using Theorem 3.1 with the known Legendre integration matrix, the Chebyshev integration matrix $M_C = A_G\,S\,A_G^{-1}$ can be obtained and, by using the second comment from Section 3.1, the corresponding operational matrix is its transpose.

Consider, for instance, a function $f(x)$ to be integrated in the interval $[-1, 1]$, written as a Chebyshev series $f(x) \approx C\,\Phi_G(x)$, observing that the result of the integration is basis independent.

Calculating the several matrices for this example and performing $\bar{C} = C\,M_C$, one obtains the coefficients of the series representing the integral of the function which, rewritten in the canonic form, reproduces the expected result.

To obtain the Chebyshev differentiation matrix, the procedure is analogous, giving:

$M_C = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 & 0 \\ 3 & 0 & 6 & 0 & 0 \\ 0 & 8 & 0 & 8 & 0 \end{bmatrix}.$

As expected, $A_G^{-1}\,M_C\,A_G = S$, with the nonnull terms of the invariant $S$ given by $s_{n,n-1} = n$, $n = 1, \dots, N$.

4. Solving a Boundary Value Problem

In this section, an application of the presented method to build operational matrices is shown, considering a boundary value problem related to the convection-diffusion equation [14], with Dirichlet boundary conditions prescribed at the two ends of the interval.

Firstly, in order to have the domain in the Jacobi interval $[-1, 1]$, the independent variable is changed by an affine transformation, obtaining the transformed equation together with the correspondingly transformed boundary conditions.

If $C\,\Phi_N(x)$ is the series that approximates the solution and $D$ is the operational differentiation matrix, the differential equation can be written as a matrix equation in the unknown coefficients $C$, with each derivative replaced by the corresponding product with $D$.

This matrix equation is applied to $N - 1$ domain points, generating a linear algebraic system with $N - 1$ equations and $N + 1$ unknown variables. The two missing equations are obtained from the boundary conditions. To avoid the Runge phenomenon [15], the collocation points are chosen as nodes of the polynomial basis.
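The original equation and data of [14] are not reproduced above; as a self-contained illustration of the collocation scheme just described, the sketch below assumes the model convection-diffusion problem $u'' = p\,u'$ on $(-1, 1)$, $u(-1) = 0$, $u(1) = 1$, whose exact solution is $(e^{p(x+1)} - 1)/(e^{2p} - 1)$; the value of $p$, the order $N$, and the use of the canonic basis with Chebyshev nodes are all illustrative choices:

```python
import numpy as np

# Assumed model problem (not the paper's data): u'' = p u' on (-1, 1),
# u(-1) = 0, u(1) = 1, exact solution (exp(p(x+1)) - 1)/(exp(2p) - 1).
p = 3.0
N = 10  # series order

# Canonic differentiation matrix: row n holds the coefficients of d/dx x^n.
S1 = np.diag(np.arange(1.0, N + 1), -1)
S2 = S1 @ S1  # second derivative

# Chebyshev nodes as interior collocation points (avoids the Runge phenomenon).
x = np.cos(np.pi * np.arange(1, N) / N)   # N - 1 interior points
V = np.vander(x, N + 1, increasing=True)  # V[k, j] = x_k**j

# Collocation rows: u''(x_k) - p u'(x_k) = 0, i.e. v_k^T (S2 - p S1)^T c = 0,
# plus two boundary rows evaluating the series at x = -1 and x = 1.
A = np.vstack([
    V @ (S2 - p * S1).T,
    np.vander(np.array([-1.0, 1.0]), N + 1, increasing=True),
])
b = np.concatenate([np.zeros(N - 1), [0.0, 1.0]])
c = np.linalg.solve(A, b)  # canonic coefficients of the approximation

# Compare the approximation with the exact solution.
xe = np.linspace(-1, 1, 7)
u_num = np.vander(xe, N + 1, increasing=True) @ c
u_exact = (np.exp(p * (xe + 1)) - 1) / (np.exp(2 * p) - 1)
```

A degree-10 series already matches the exact solution to well below plotting accuracy, which is why the text compares errors rather than curves.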

Figure 1 shows the exact solution and the one obtained by using the Legendre approximation. The two solutions are so close that, in Figure 2, the error is shown for comparison. Figure 3 shows the exact solution and the one obtained by using the Chebyshev approximation; again, the two solutions are so close that, in Figure 4, the error is shown for comparison.

5. Conclusion

All operational matrices applied to polynomial bases in linear operations may be obtained directly from a central matrix placed in a matrix product between the matrix describing the chosen basis in terms of the canonic basis and its inverse.

Considering the available computational facilities, this method may make the calculation of these matrices easier and quicker, on different bases and for various applications, as the Galerkin process, for instance. Furthermore, the “sandwich matrix” allows directly obtaining the recurrence relations for the derivative and the integral of an element of any polynomial basis as a function of the other basis elements.