Abstract

Using nonorthogonal bases in spectral methods demands considerable effort, because applying the Gram-Schmidt process is usually a precondition for the calculations. However, numerical methods based on operational matrices are increasingly used, and extensions to nonorthogonal bases appear, requiring simplified procedures. Here, extending previous work, an efficient tensorial method is presented in order to simplify the calculations related to the use of nonorthogonal bases in spectral numerical problems. The method is called the coadjoint formalism and is based on Dirac's bracket formulation of quantum mechanics. Some examples are presented, showing how simple it is to use the method.

1. Introduction

Spectral numerical methods are increasingly used to solve differential equations, even when fractional derivatives appear. These methods frequently use orthogonal functions [1], in order to ease the calculations and to preserve the basis elements when dimensional expansions are necessary.

This is an important advantage when operational matrices are used, because if it is necessary to extend an orthogonal basis of functions from $n$ dimensions to $n+1$, only the new element must be calculated, with all the former elements preserved [2, 3]. If the basis functions are not orthogonal, this assumption is not true.

However, there are some situations where, even for numerical methods with operational matrices [4], the solution of differential equations demands new bases of functions [5, 6], some of them nonorthogonal, as, for instance, Bernstein polynomials, mainly in the cases where boundary or initial conditions must be considered [7-9].

Some iterative methods based on the Krylov formulation, for instance, make use of such bases and are implemented by using the Gram-Schmidt process [8, 10, 11], which, in most cases, is difficult to operationalize.

Trying to simplify this procedure, an alternative operational method is presented here, with its first ideas developed in [12]. It is described by using tensorial language, is called the coadjoint formalism [13], and allows the direct use of nonorthogonal bases without any kind of previous conditioning process.

In a nutshell, the coadjoint formalism adapts Dirac's bracket notation [14] to spectral methods, considering finite-dimensional complex vector spaces. The methodology is applied to nonorthogonal bases in finite-dimensional function spaces with a generalized tensorial approach [15-17] that simplifies the operational conditions.

In the next section, the theoretical fundamentals of the coadjoint formalism are presented, connecting it with the well-known Dirac bracket. A section with two examples then shows how simple it is to apply the method developed here, and a conclusion section closes the work.

2. Coadjoint Formalism: Theoretical Foundations

This section presents the concepts and definitions used to build the coadjoint formalism. The bases to be considered are finite sets of complex functions of a real variable; that is, the basis elements are complex-valued functions $f_k$, $k = 1, \dots, n$, belonging to the function space, equipped as a Hilbert space.

2.1. Actuation Spaces

Here, it is considered that any operation regarding the series expansion of a function occurs in two distinct spaces: (i) a finite-dimensional Hilbert space of functions; (ii) the finite-dimensional space of dimension $n$, generated by the series expansion with $n$ terms for the considered function, called the order space.

It is assumed that kets are represented by column vectors and vice versa. The same holds for bras, represented by row vectors and understood as covectors, that is, elements of the dual space. Consequently, an equivalence relation can be established [14]: $\langle v | = (| v \rangle)^{\dagger}$.

The kets can belong either to the function space or to the order space. The same is valid for bras.

The distinction between covariant and contravariant quantities is assumed [15], and the traditional notation of differential geometry and tensor calculus is used: (i) contravariant components, written with upper indices, $v^{k}$; (ii) covariant components, written with lower indices, $v_{k}$.

Quantities described by kets belong to the original space; quantities described by bras belong to the dual space, in the traditional way of linear algebra [14]. The primitive space is the space where the quantity is defined; that is, if a bra defines a quantity, the dual space assumes the role of the primitive space of that quantity.

Given a covariant basis composed of kets $| f_{k} \rangle$, a vector $| v \rangle$ belonging to the order space can be described by a ket in two different ways: (i) invariant representation: $| v \rangle = v^{k} | f_{k} \rangle$, with summation over the repeated index; (ii) coordinate representation: $| v \rangle = [\, v^{1} \; v^{2} \; \cdots \; v^{n} \,]^{T}$, with $v^{k} \in \mathbb{C}$. The spaces where the spectral methods are generally applied are finite-dimensional real or complex coordinate spaces.

Given a reference basis of the space under consideration, which can be either the Hilbert space or any other complex vector space, a basis described in terms of that reference basis admits two representations: (i) an invariant representation; (ii) a coordinate representation, as a matrix belonging to the set of $n$-order square matrices, with the inferior dot representing the covariant nature of the basis. In this kind of disposition, the bases are described by their coordinates in the reference basis, but the reference basis can be omitted and the invariant representation is assumed.

Consequently, there are two isomorphic representations: the invariant one and the coordinate one.

Given a vector with its components expressed in a generic basis, the following operations are defined: (i) conjugation, $v \mapsto \bar{v}$, taking the complex conjugate of each component; (ii) transposition, $v \mapsto v^{T}$, turning a column vector into a row vector; (iii) adjunction, $v \mapsto v^{\dagger} = \bar{v}^{\,T}$, the conjugate transpose; (iv) duality, the passage from a ket to the corresponding bra in the dual space.
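In coordinate form these operations are elementary; the following is a minimal NumPy sketch (the vector entries are arbitrary illustrations), with the adjoint playing the role of the bra associated with the ket.

import numpy as np

# Coordinate representation of a ket in C^3 (a column vector).
v = np.array([[1 + 2j],
              [3 - 1j],
              [0 + 4j]])

v_conj = np.conj(v)       # conjugation: componentwise complex conjugate
v_T    = v.T              # transposition: column vector -> row vector
v_dag  = v.conj().T       # adjunction: conjugate transpose, i.e., the bra <v|

# The bra applied to the ket gives the (real, nonnegative) squared norm.
print((v_dag @ v).item())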

2.2. Inner Products

The inner products (IP) are defined in several different ways, depending on the space and the representation, as shown below.

Order Space. (1) Hermitian product, with coordinate representation $(u, v) = u^{\dagger} v = \sum_{k} \overline{u^{k}}\, v^{k}$. (2) Dual product, with coordinate representation $\langle u | v \rangle = u_{k} v^{k}$ and invariant representation given by the pairing of a bra with a ket.

Function Space. Considering the space of functions with domain $D \subset \mathbb{R}$, the IP is defined as $\langle f | g \rangle = \int_{D} \overline{f(t)}\, g(t)\, dt$.

Hybrid Spaces. The mixed inner product pairs an order-space covector with a vector whose entries are functions; the result is the function obtained by summing the products of the corresponding entries.
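As an illustration, the sketch below evaluates these products numerically; the domain [0, 1], the basis functions, and the coefficient values are assumptions chosen only for the example.

import numpy as np
from scipy.integrate import quad

# Function-space IP <f|g>: integral of conj(f) * g over an assumed domain [0, 1];
# the conjugate-first convention matches the Dirac-style bracket used in the text.
def inner(f, g, a=0.0, b=1.0):
    re = quad(lambda t: (np.conj(f(t)) * g(t)).real, a, b)[0]
    im = quad(lambda t: (np.conj(f(t)) * g(t)).imag, a, b)[0]
    return re + 1j * im

# Hermitian product of two coordinate vectors in the order space.
u = np.array([1 + 1j, 2.0, 0.5j])
v = np.array([0.5, 1 - 1j, 2.0])
hermitian = np.vdot(u, v)              # conjugates the first argument

# Mixed product: a coefficient covector paired with a column of basis functions,
# producing the value of the expansion at a point t (basis and coefficients are
# hypothetical placeholders).
basis = [lambda t: 1.0, lambda t: t, lambda t: t**2]
c = np.array([1.0, -0.5, 0.25])
expansion = lambda t: sum(ck * fk(t) for ck, fk in zip(c, basis))

print(hermitian, inner(basis[1], basis[2]), expansion(0.7))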

2.3. Duality Relations

It is possible to obtain the dual basis from an original basis by using the duality relations, given by $\langle f^{\,j} | f_{k} \rangle = \delta^{j}_{\;k}$, with $\delta^{j}_{\;k}$ being the Kronecker delta.

Considering the formalism described here, it is possible to express the same vector in several ways, using four distinct bases. These representations are called connected representations and are shown in Table 1.

Since the covariant representation in the original space is taken as the natural one, the dot below the basis symbol can be omitted; the transformation relations between the bases are shown in Table 2.

The covariant and contravariant components are obtained by applying (1) contravariant: $v^{k} = \langle f^{\,k} | v \rangle$; (2) covariant: $v_{k} = \langle f_{k} | v \rangle$.
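In practice this is index lowering and raising with the metric matrix introduced in Section 2.4 and inverted in Section 2.5; a small sketch with an arbitrary 3 x 3 Gram matrix illustrates the mechanics.

import numpy as np

# Hypothetical Gram (metric) matrix of a nonorthogonal basis and the
# contravariant components of a vector expressed in that basis.
G = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
v_up = np.array([1.0, -2.0, 0.5])        # contravariant components v^k

v_down = G @ v_up                        # lowering: v_j = G_jk v^k
v_up_back = np.linalg.solve(G, v_down)   # raising with the reciprocal metric
print(np.allclose(v_up_back, v_up))      # True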

2.4. Series Expansions of Functions

Considering a basis of the function space with elements $f_{k}$, the infinite series expansion of a function $g$ is given by $g(t) = \sum_{k=1}^{\infty} c^{k} f_{k}(t)$, with $c^{k}$ being the expansion coefficients in that basis. Considering the mixed inner product definition, this expression can be written as $g = \langle c | F \rangle$, with the covector $\langle c |$ called the coefficient covector.

In the expansion subspace, projectors can be defined by $P = \sum_{k} | f_{k} \rangle \langle f^{\,k} |$ and $P^{\dagger} = \sum_{k} | f^{\,k} \rangle \langle f_{k} |$, with the second one called the adjoint projector.

The eigenprojectors are defined as the projectors onto the one-dimensional proper spaces; that is, $P_{k} = | f_{k} \rangle \langle f^{\,k} |$ and $P^{\,k} = | f^{\,k} \rangle \langle f_{k} |$.

In a way analogous to that followed in differential geometry, the fundamental metric tensor of a basis is defined as $G_{jk} = \langle f_{j} | f_{k} \rangle$, with its reciprocal metric tensor given by $G^{jk} = \langle f^{\,j} | f^{\,k} \rangle$.

Consequently, the IP between two functions $g$ and $h$, represented by their series, is given by $\langle g | h \rangle = \overline{a^{\,j}}\, G_{jk}\, b^{k}$, with $a^{\,j}$ and $b^{k}$, respectively, representing the $g$ and $h$ expansion coefficients in the basis.
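Under the conjugate-first convention adopted above, this IP takes the matrix form $\bar{a}^{\,T} G\, b$, with $a$ and $b$ collecting the expansion coefficients; the sketch below checks this against direct quadrature for an assumed monomial basis on [0, 1].

import numpy as np
from scipy.integrate import quad

# Assumed nonorthogonal basis: monomials 1, t, t^2 on [0, 1].
basis = [lambda t: 1.0, lambda t: t, lambda t: t**2]
inner = lambda f, g: quad(lambda t: f(t) * g(t), 0.0, 1.0)[0]

G = np.array([[inner(fj, fk) for fk in basis] for fj in basis])   # metric matrix

a = np.array([1.0, -2.0, 3.0])     # coefficients of g (arbitrary illustration)
b = np.array([0.5, 1.0, -1.0])     # coefficients of h (arbitrary illustration)

ip_metric = a @ G @ b              # a^dagger G b (real coefficients here)
g = lambda t: sum(ak * fk(t) for ak, fk in zip(a, basis))
h = lambda t: sum(bk * fk(t) for bk, fk in zip(b, basis))
print(np.isclose(ip_metric, inner(g, h)))    # True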

The fundamental metric tensorial operator and its reciprocal are defined as $\hat{G} = \sum_{k} | f_{k} \rangle \langle f_{k} |$ and $\hat{G}^{-1} = \sum_{k} | f^{\,k} \rangle \langle f^{\,k} |$.

2.5. Covariant-Contravariant Transformation Relations

Considering the covariant and contravariant matrix representations, the transformation relations for bases and components can be written as $f^{\,j} = G^{jk} f_{k}$, $f_{j} = G_{jk} f^{\,k}$, $v^{\,j} = G^{jk} v_{k}$, and $v_{j} = G_{jk} v^{k}$.

Therefore, $G^{jk} G_{kl} = \delta^{j}_{\;l}$, meaning that the reciprocal tensor matrix is the inverse of the metric tensor matrix; in matrix form, the reciprocal matrix is $G^{-1}$.
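This statement can be checked numerically. A minimal sketch, assuming a monomial basis on [0, 1], builds the reciprocal basis from the rows of the inverse Gram matrix and verifies the duality relations of Section 2.3.

import numpy as np
from scipy.integrate import quad

basis = [lambda t: 1.0, lambda t: t, lambda t: t**2]   # assumed basis on [0, 1]
inner = lambda f, g: quad(lambda t: f(t) * g(t), 0.0, 1.0)[0]

G = np.array([[inner(fj, fk) for fk in basis] for fj in basis])
G_inv = np.linalg.inv(G)                               # reciprocal metric matrix

# Reciprocal (dual) basis: each element is a combination of the original ones,
# with coefficients taken from a row of the inverse metric matrix.
dual = [lambda t, row=G_inv[j]: sum(c * fk(t) for c, fk in zip(row, basis))
        for j in range(len(basis))]

# Duality relations: <f^j | f_k> = delta_jk.
D = np.array([[inner(fj_up, fk) for fk in basis] for fj_up in dual])
print(np.allclose(D, np.eye(len(basis)), atol=1e-8))   # True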

2.6. Finite Expansions

The $n$-order finite expansion of a function $g$ can be expressed by the action of the reciprocal tensorial metric operator over the function; that is, $g_{n} = \sum_{k} \langle f^{\,k} | g \rangle\, f_{k}$. Under these conditions, the coefficient covector components can be expressed as $c^{k} = \langle f^{\,k} | g \rangle = G^{kj} \langle f_{j} | g \rangle$.

Then, the expansion is given by the mixed IP, $g_{n} = \langle c | F \rangle$, with $| F \rangle$ being the vector associated with the chosen basis in the second representation.

Equation (18) can be expressed by using matrices, giving a useful computational expression for finding the coefficient covector.

Considering the coefficient covector $\langle c | = [\, c^{1} \; c^{2} \; \cdots \; c^{n} \,]$, it is possible to write, in matrix form, $c = G^{-1} p$, with $p_{j} = \langle f_{j} | g \rangle$.

As a consequence, the expanded function becomes $g_{n}(t) = \sum_{k} c^{k} f_{k}(t)$.
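In computational terms, finding the coefficient covector therefore amounts to solving one linear system with the metric matrix. The sketch below follows this recipe; the basis, the interval [0, 1], and the target function exp(t) are assumptions chosen only for illustration.

import numpy as np
from scipy.integrate import quad

def expansion_coefficients(basis, g, a=0.0, b=1.0):
    # Solve G c = p, with G the metric matrix and p the projections <f_k | g>.
    inner = lambda u, v: quad(lambda t: u(t) * v(t), a, b)[0]
    G = np.array([[inner(fj, fk) for fk in basis] for fj in basis])
    p = np.array([inner(fk, g) for fk in basis])
    return np.linalg.solve(G, p)

# Assumed example: expand exp(t) in the monomials 1, t, t^2 on [0, 1].
basis = [lambda t: 1.0, lambda t: t, lambda t: t**2]
c = expansion_coefficients(basis, np.exp)
g_n = lambda t: sum(ck * fk(t) for ck, fk in zip(c, basis))
print(c, abs(np.exp(0.5) - g_n(0.5)))     # coefficients and local error at t = 0.5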

3. Application Examples

In this section, two examples of function expansion in nonorthogonal bases are developed: one with the canonical polynomial basis and the other with a basis of nonorthogonal complex functions.

3.1. Canonical Basis

Here, the most common case of nonorthogonality is developed, considering the canonical basis of polynomials defined on a real interval.

The metric tensor can be calculated directly from the inner products of these polynomials.

Therefore, the analytical expression for the metric tensor elements in the chosen domain can be obtained in closed form.
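For instance, assuming the domain to be [0, 1] and the canonical elements to be the monomials $f_{k}(t) = t^{\,k-1}$ (the original interval and indexing are not reproduced above), the closed form reads

\[
G_{jk} \;=\; \int_{0}^{1} t^{\,j-1}\, t^{\,k-1}\, \mathrm{d}t \;=\; \frac{1}{j+k-1}, \qquad j, k = 1, \dots, n,
\]

so that, under this assumption, the metric matrix is the $n$-order Hilbert matrix, well known for its poor conditioning as $n$ grows.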

Following the calculations for the chosen order, the metric matrix is assembled, allowing, by using expression (17), the calculation of the matrix representing its reciprocal tensor.

By using the methodology developed in the former section and applying the transformation relation (13), the reciprocal basis is composed of polynomials obtained as linear combinations of the original ones, with coefficients taken from the reciprocal matrix.

It is important to notice that, if the basis is orthogonal, obtaining its reciprocal amounts to a simple rescaling. For a nonorthogonal basis, the procedure described here must be used, and changing the order implies recalculating all the reciprocal basis elements.

In order to illustrate the ideas in a particular case, the expansion of a given test function is considered.

The development is then carried out for the chosen order.

According to (22), the matrix expression is obtained in either of two equivalent forms.

In these expressions, the coefficient covector appears explicitly and, according to (21), can be calculated, generating the expansion of the function.

Figure 1(a) shows the almost perfect superposition of the function and its expansion. Figure 1(b) shows the local error for the expansion.
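Although the figure itself is not reproduced here, the behaviour it reports can be checked with a short script. The sketch below repeats the procedure of this subsection under explicit assumptions: domain [0, 1], canonical monomial basis of order six, and exp(t) as a stand-in test function; the maximum local error plays the role of Figure 1(b).

import numpy as np
from scipy.integrate import quad

n = 6                                             # assumed expansion order
basis = [lambda t, k=k: t**k for k in range(n)]   # canonical basis 1, t, ..., t^(n-1)
g = np.exp                                        # stand-in test function

inner = lambda u, v: quad(lambda t: u(t) * v(t), 0.0, 1.0)[0]
G = np.array([[1.0 / (j + k + 1) for k in range(n)] for j in range(n)])  # Hilbert matrix
p = np.array([inner(fk, g) for fk in basis])      # projections <f_k | g>
c = np.linalg.solve(G, p)                         # coefficient covector

t = np.linspace(0.0, 1.0, 201)
g_n = sum(ck * t**k for k, ck in enumerate(c))
print(np.max(np.abs(g(t) - g_n)))                 # maximum local error over the grid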

3.2. Expansion in a Complex Nonorthogonal Basis

Here, considering a complex basis composed of nonorthogonal elements, a real function is expanded, showing efficient and concise results.

Defining the basis, the metric tensor is expressed by the inner products of its elements.

For a five-term expansion, the metric matrix and its reciprocal are computed explicitly.

Expanding the given function with five terms, the coefficient covector is obtained from the mixed inner products and the reciprocal metric formerly calculated, and the expansion follows.

Figure 2 shows the superposition of the function and its expansion. It can be noticed that the five-term approximation is not good.

From the explicit expression (26), the metric matrix for the ten-term case can be determined, with the reciprocal matrix given by its inverse.

Consequently, the ten-term expansion is obtained in the same way: the coefficient covector is written in matrix form in terms of the new metric matrix or, analogously, the coefficients are calculated by the corresponding equation.

The new approximation is then obtained.

Figure 3(a) shows the almost perfect superposition of the function and the real part of its expansion. Figure 3(b) shows the local error for the expansion.
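The comparison reported in Figures 2 and 3 can be reproduced qualitatively with a short script. Since the specific complex basis and target function are not given above, the sketch uses assumed ingredients: complex exponentials $e^{ikt}$ restricted to [0, 1], which are not orthogonal on that interval, and exp(t) as the real target; the Gram matrix is now Hermitian and the inner product conjugates its first factor.

import numpy as np
from scipy.integrate import quad

def cinner(f, g, a=0.0, b=1.0):
    # Complex IP <f | g>: integral of conj(f) * g over [a, b].
    h = lambda t: np.conj(f(t)) * g(t)
    return quad(lambda t: h(t).real, a, b)[0] + 1j * quad(lambda t: h(t).imag, a, b)[0]

def complex_expansion(n, g):
    # Assumed nonorthogonal complex basis: e^{ikt}, k = 0, ..., n-1, on [0, 1].
    basis = [lambda t, k=k: np.exp(1j * k * t) for k in range(n)]
    G = np.array([[cinner(fj, fk) for fk in basis] for fj in basis])   # Hermitian metric
    p = np.array([cinner(fk, g) for fk in basis])                      # projections <f_k | g>
    c = np.linalg.solve(G, p)
    return lambda t: sum(ck * fk(t) for ck, fk in zip(c, basis))

g = np.exp                                       # stand-in real target function
t = np.linspace(0.0, 1.0, 201)
for n in (5, 10):                                # five- and ten-term expansions
    g_n = complex_expansion(n, g)
    print(n, np.max(np.abs(g(t) - np.real(g_n(t)))))   # maximum local error for each order

Increasing the number of terms from five to ten plays the same role here as in the text: with more terms, the real part of the expansion tracks the target more closely.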

4. Conclusion

Considering that the use of nonorthogonal bases in spectral numerical methods is increasing, and that it normally requires Gram-Schmidt procedures that are difficult from the operational point of view, this paper presented a simpler method for expanding functions, based on the bracket formalism and called the coadjoint method.

The mathematical ideas of the coadjoint method were presented and the examples have shown its practicability.

It can be added that the coadjoint method is an efficient and concise tool for nonorthogonal bases, with low computational cost. As it is not necessary to have orthogonal bases, more general functions can be used in the numerical methods, increasing the quality of the whole process.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.