Abstract

This work develops a computational approach to boundary and initial-value problems that uses operational matrices to run an evolutionary process in a Hilbert space. In addition, upper bounds for the errors in the solutions and in their derivatives can be estimated, providing accuracy measures for the method.

1. Introduction

Differential equations are ubiquitous in everyday engineering practice, but solving them can be very difficult, especially when they are nonlinear. Many of them do not have an analytical solution expressible as a finite combination of elementary functions, or even as an infinite series with a determinable recurrence relation.

In previous works, analytical and numerical results were obtained for nonlinear differential equations [1–3], and an algorithm called SIV (Solving Initial Value) was developed. Here, the problem of determining the error bounds for SIV is emphasized, providing quality measures for the method.

In Section 2, some general theoretical considerations about a numerical method for differential equations, based on expansions in a Hilbert space, are presented. The estimation of error upper bounds is discussed in Section 3, including some examples. Section 4 presents the conclusions.

2. Methodology

Problems described by differential equations can be subject to initial conditions; in those cases, they are called initial-value problems (IVP). There are a number of software packages dedicated to their solution, based on Runge–Kutta and Adams methods [4] that are constantly improved [5, 6], mainly considering applications to physics problems [7–10].

The Lipschitz condition gives a sufficient condition for a differential equation to have a unique solution and, under its assumptions, the mass matrix is guaranteed to be nonsingular, so that the classical methods can start running and obtain, for sequential intervals, continuous expansions of bounded functions [11]. The precision is limited only by the computational apparatus.

Besides IVPs, there are boundary value problems (BVP) that have mainly been solved by using numerical methods derived from Galerkin and variational methods [12, 13]. Previous works [1–3] developed analytical and numerical methods, based on a spectral approach, that can be applied either to IVPs or to BVPs.

Details of the method, called SIV, are described mainly in [3]. Here, only a brief outline is presented, for the sake of clarity, in order to discuss the precision issues.

The implemented evolutionary algorithm is of the genetic and differential type [14], conceived for an evolution in a Hilbert space, in which each gene is a coefficient of a candidate expansion in terms of a basis of this space; the candidate coefficient vectors (chromosomes) form a population of sixty individuals. Each generation is submitted to ranking contests.

The error obtained when the vector is substituted into the equation is the criterion that determines the individual's ranking position: the smaller the error, the better the ranking position. The reproducers for the next generation are selected by assigning a greater choice probability to the best-ranked individuals [3].

Inheritance is simulated by taking two vectors and combining them to generate a descendant, a new vector with the same dimension as the originals. Whether the gene at a given locus comes from one parent vector or the other is decided by crossover, with the loci chosen at random [15].
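As an illustration only (not the SIV implementation itself), the following Python sketch reproduces the evolutionary step just described: candidate coefficient vectors are ranked by the residual they leave in the equation, reproducers are drawn with probability biased toward the best ranks, and crossover picks each locus at random from one of the two parents. The population size, weights, and placeholder errors are assumptions made for the sketch.

```python
# Illustrative sketch only (not the SIV code): rank-based selection and crossover
# over a population of coefficient vectors. Sizes and errors are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def rank_select(population, errors):
    """Pick one parent; lower error -> better rank -> higher selection probability."""
    order = np.argsort(errors)                      # best (lowest-error) individual first
    weights = np.arange(len(population), 0, -1.0)   # best rank receives the largest weight
    probs = weights / weights.sum()
    return population[order[rng.choice(len(population), p=probs)]]

def crossover(parent_a, parent_b):
    """Each gene (coefficient) is taken from one of the two parents, chosen at random."""
    mask = rng.random(parent_a.shape) < 0.5
    return np.where(mask, parent_a, parent_b)

population = rng.normal(size=(60, 13))              # sixty candidate coefficient vectors
errors = rng.random(60)                             # placeholder residual-based errors
child = crossover(rank_select(population, errors), rank_select(population, errors))
```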

The introduction of learning rules in the algorithm [1, 3] reduced the processing time by a factor of about one hundred, because the search space is continually contracted as the error margins decrease.

The main assumption of the method presented in [3] is that the solution is continuous and can be expressed as a linear combination of the elements of an orthogonal basis of the Hilbert functional space.

Under these conditions, $y(t) = \sum_{n=0}^{\infty} c_n \phi_n(t)$, with $c_n = \langle y, \phi_n \rangle / \langle \phi_n, \phi_n \rangle$, $\{\phi_n\}$ being the orthogonal basis, and the symbol $\langle \cdot , \cdot \rangle$ representing the internal product in the Hilbert space [3].
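As a hedged illustration of this expansion (not taken from [3]), the sketch below computes Legendre coefficients through the inner product, approximating the integrals by Gauss–Legendre quadrature; the test function, order, and number of quadrature points are arbitrary choices.

```python
# Hedged illustration (not from [3]): Legendre coefficients via the inner product
# <f, P_n> / <P_n, P_n>, with the integrals approximated by Gauss-Legendre quadrature.
import numpy as np
from numpy.polynomial import legendre as L

def legendre_coefficients(f, order, quad_points=64):
    """Return c_n = <f, P_n> / <P_n, P_n> for n = 0..order."""
    t, w = L.leggauss(quad_points)                   # quadrature nodes and weights on [-1, 1]
    c = np.zeros(order + 1)
    for n in range(order + 1):
        basis = np.zeros(n + 1); basis[n] = 1.0
        Pn = L.legval(t, basis)                      # P_n at the quadrature nodes
        c[n] = np.sum(w * f(t) * Pn) / (2.0 / (2 * n + 1))   # <P_n, P_n> = 2/(2n+1)
    return c

t_test = np.linspace(-1.0, 1.0, 201)
c = legendre_coefficients(np.exp, order=8)
print(np.max(np.abs(L.legval(t_test, c) - np.exp(t_test))))  # small truncation error
```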

Defining $\langle f, g \rangle = \int_a^b w(t)\, f(t)\, g(t)\, dt$, with $w(t)$ being the nonnegative weight function, and considering the interval $[-1, 1]$ and $w(t) = 1$, the Legendre polynomial basis is obtained, with $\langle P_m, P_n \rangle = \frac{2}{2n+1}\, \delta_{mn}$. For the same interval and by using a generic weight $w(t) = (1-t)^{\alpha}(1+t)^{\beta}$, Jacobi polynomials are obtained. Legendre and Chebyshev polynomials are particular cases of Jacobi polynomials. For half-limited intervals, the Laguerre polynomials are used; for unlimited intervals, the Hermite polynomials are adequate.
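The following short check, again only illustrative, verifies numerically that different weight functions select different orthogonal families on $[-1, 1]$: Legendre polynomials for $w(t) = 1$ and Chebyshev polynomials for $w(t) = 1/\sqrt{1 - t^2}$.

```python
# Illustrative orthogonality check: Legendre polynomials under w(t) = 1 and
# Chebyshev polynomials under w(t) = 1/sqrt(1 - t^2), both on [-1, 1].
import numpy as np
from numpy.polynomial import legendre as L, chebyshev as C

tL, wL = L.leggauss(64)      # quadrature for the weight w(t) = 1
tC, wC = C.chebgauss(64)     # quadrature for the weight w(t) = 1/sqrt(1 - t^2)

P2, P3 = L.legval(tL, [0, 0, 1]), L.legval(tL, [0, 0, 0, 1])
T2, T3 = C.chebval(tC, [0, 0, 1]), C.chebval(tC, [0, 0, 0, 1])

print(np.sum(wL * P2 * P3))  # ~0: <P_2, P_3> under the Legendre weight
print(np.sum(wC * T2 * T3))  # ~0: <T_2, T_3> under the Chebyshev weight
```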

Solving differential equations with orthogonal series approximations has been thoroughly studied over the last three decades, providing efficient algorithms [16–19]. In this approach, it is possible to transform a differential equation into an algebraic one, yielding the coefficients of the series expansion. This transformation is performed by using operational matrices, extensively discussed in [3].
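A minimal sketch of the operational-matrix idea is given below: differentiation of a truncated Legendre series is a linear map on the coefficient vector and can therefore be stored as a matrix. This is a generic illustration of the concept, not the operational matrices of [3].

```python
# Generic illustration of an operational (differentiation) matrix for a truncated
# Legendre series: column n holds the Legendre coefficients of P_n'.
import numpy as np
from numpy.polynomial import legendre as L

def legendre_diff_matrix(order):
    """(order+1) x (order+1) matrix mapping coefficients of y to coefficients of y'."""
    D = np.zeros((order + 1, order + 1))
    for n in range(order + 1):
        e = np.zeros(order + 1); e[n] = 1.0
        d = L.legder(e)                  # coefficients of P_n'
        D[:len(d), n] = d
    return D

order = 6
D = legendre_diff_matrix(order)
c = np.random.default_rng(1).normal(size=order + 1)
t = np.linspace(-1.0, 1.0, 7)
print(np.allclose(L.legval(t, L.legder(c)), L.legval(t, D @ c)))  # True: same derivative
```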

Completeness and orthogonality in Hilbert spaces are related to certain domains, depending on the basis chosen. Consequently, it is important to develop basis transformations to account for this fact. In [1, 3], a method was developed for transposing domains through a change of variable that transforms the differential equation. Moreover, the initial and boundary conditions need to be transposed in the same way, and the program described in [1, 3] takes care of this transposition as well.
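A minimal sketch of such a domain transposition is shown below, assuming the usual linear change of variable mapping an interval $[a, b]$ onto $[-1, 1]$; the function names and values are illustrative and do not come from [1, 3].

```python
# Hedged sketch of domain transposition, assuming the linear change of variable
# t = (2x - a - b) / (b - a) that maps a problem posed on [a, b] onto [-1, 1].
def to_work_domain(x, a, b):
    """Map x in [a, b] to t in [-1, 1]."""
    return (2.0 * x - a - b) / (b - a)

def from_work_domain(t, a, b):
    """Map t in [-1, 1] back to x in [a, b]."""
    return 0.5 * ((b - a) * t + a + b)

# Conditions are transposed with the same map: a condition at x = a becomes one at
# t = -1, and derivative conditions pick up the factor dt/dx = 2 / (b - a).
print(to_work_domain(0.0, 0.0, 4.0), to_work_domain(4.0, 0.0, 4.0))  # -1.0 1.0
```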

3. Error Upper Bounds

One of the main problems in the numerical solution of differential equations is how to estimate the error margins. In the majority of cases, it is possible to stipulate the maximum tolerable error for each step but, as the integration over the whole interval comprises thousands of steps, the cumulative error is difficult to obtain [20].

Here, a method to determine upper bounds for the absolute error in an integration process is described, even when the analytical solution is unknown, regardless of the algorithm used. These ideas can be used to estimate the confidence in numerical approximations of the solution [21].

Starting with the differential equation written in the form $F(t, y, y', \ldots, y^{(n)}) = 0$, the polynomial solution found is substituted into $F$ and the residual is evaluated over the whole interval. This reasoning allows the calculation of the relative dispersion of the coefficients of the series that approximates the solution. In order to illustrate the process, Example 1 is developed below, after a short generic sketch of the residual evaluation.
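The sketch below illustrates this residual evaluation under simplifying assumptions: the test equation $y'' + y = 0$ and the candidate series are chosen only for the illustration and are unrelated to Example 1.

```python
# Illustrative residual evaluation: a candidate Legendre series is substituted into
# F(t, y, y'', ...) and the residual is checked over the whole interval.
import numpy as np
from numpy.polynomial import legendre as L

def residual(coeffs, t):
    y = L.legval(t, coeffs)
    ypp = L.legval(t, L.legder(coeffs, 2))
    return ypp + y                               # F = y'' + y for the illustrative equation

t = np.linspace(-1.0, 1.0, 201)
candidate = L.legfit(t, np.cos(t), 10)           # good candidate: cos(t) solves y'' + y = 0
print(np.max(np.abs(residual(candidate, t))))    # small residual over the whole interval
```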

3.1. Example  1

As an example of the dispersion calculation, the boundary value problem given by (3.2) is considered, where the derivatives are those of the function to be found.

The domain is ; the boundary conditions are and , with analytical solution being .

Transposing the domain to the interval $[-1, 1]$, by using the methodology described in [3, Sections 3.3 and 5], (3.2) is rewritten in the work domain, with the boundary conditions transposed in the same way.

Then, isolating the highest-order derivative, choosing as solution candidate the order-12 Legendre series given in Table 1, obtained from a first iteration of the SIV algorithm described in [3, Appendix B], and applying the same SIV procedure [3] to integrate it four times, the function shown in Figure 1 is obtained.

After the integrations, the coefficients of the result are compared with the coefficients of the candidate, and the dispersion of each coefficient is determined. Although the term dispersion has no strict statistical meaning here, it is used for the sake of simplicity.

It is worth noticing that, for the Legendre basis, $\langle P_n, P_n \rangle = \frac{2}{2n+1}$ and, consequently, this term decreases as the order increases, meaning that higher-order coefficients with the same absolute errors present greater relative errors.

Consequently, the dispersion to be exhibited obeys the following relation:

Figure 2 shows the relative dispersion for the coefficients of the approximated solution presented in Example  1. This dispersion calculation is used to estimate the error upper bounds, as shown in the following.

3.2. Calculating Error Upper Bounds
3.2.1. Solution Error Upper Bound

Considering $y(t)$ given by a Legendre series approximation $\tilde{y}(t)$, up to order $N$, one can write $\tilde{y}(t) = \sum_{n=0}^{N} c_n P_n(t)$ and $y(t) = \sum_{n=0}^{N} (c_n + \epsilon_n) P_n(t)$, with $\epsilon_n$ being the error affecting $c_n$.

Consequently, the total error up to order $N$ is given by $E(t) = y(t) - \tilde{y}(t) = \sum_{n=0}^{N} \epsilon_n P_n(t)$ (3.5).

Considering the worst scenario, all $\epsilon_n$ contribute to increase the error exactly where $|P_n(t)|$ is maximum. In fact, some increase the error and some decrease it; in any case, the error upper bound follows from (3.5) and from the fact that the maximum value of $|P_n(t)|$ on $[-1, 1]$, for any $n$, is 1.

Consequently, the maximum error in the polynomial approximation is $|E(t)| \leq \sum_{n=0}^{N} |\epsilon_n|$.
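A short numerical check of this bound, with an arbitrary (hypothetical) coefficient-error vector, is given below.

```python
# Numerical check of the bound |E(t)| <= sum |eps_n|, valid because |P_n(t)| <= 1
# on [-1, 1]; the coefficient-error vector eps is hypothetical.
import numpy as np
from numpy.polynomial import legendre as L

eps = np.array([1e-6, -3e-7, 2e-7, -5e-8, 1e-8])  # hypothetical coefficient errors
bound = np.sum(np.abs(eps))                       # error upper bound on [-1, 1]

t = np.linspace(-1.0, 1.0, 1001)
E = L.legval(t, eps)                              # the error polynomial itself
print(np.max(np.abs(E)) <= bound)                 # True: the bound holds
```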

3.2.2. Derivative Error Upper Bounds

For each derivative, the error bound calculation is analogous to the procedure for the solution error. For the derivatives, the error can be written as $E^{(k)}(t) = \sum_{n=0}^{N} \epsilon_n P_n^{(k)}(t)$, for $k$ from 1 to the differential equation order.

This calculation can be made by using the differentiation matrix $D$ [2]. Denoting by $D^k$ the differentiation matrix applied $k$ times to the coefficient error vector $\boldsymbol{\epsilon} = [\epsilon_0, \ldots, \epsilon_N]^{T}$, for $k$ from 1 to the differential equation order, and considering that $|P_n(t)| \leq 1$, the bound for the $k$-th derivative error becomes $|E^{(k)}(t)| \leq \sum_{n=0}^{N} |(D^k \boldsymbol{\epsilon})_n|$.

The error upper bounds in the derivatives are slightly greater than those in the solutions, because the derivatives are expressed by lower-order polynomials. In spite of this, the error margins are very low. As an illustration, if the solution of order 7 is used, the error upper bounds for the first, second, and third derivatives are obtained directly from the coefficient errors $\epsilon_n$.
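The sketch below illustrates these derivative bounds under the assumption stated above: the coefficient errors are propagated through the differentiation operator and summed in absolute value, as for the solution bound; the error vector is hypothetical.

```python
# Sketch of the derivative error bounds: the hypothetical coefficient-error vector is
# differentiated (numpy's legder plays the role of the differentiation matrix) and the
# resulting coefficients are summed in absolute value.
import numpy as np
from numpy.polynomial import legendre as L

eps = np.array([1e-6, -3e-7, 2e-7, -5e-8, 1e-8])  # hypothetical coefficient errors

for k in range(1, 4):                             # first to third derivative
    eps_k = L.legder(eps, k)                      # D^k applied to the error vector
    print(k, np.sum(np.abs(eps_k)))               # upper bound for the k-th derivative error
```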

3.2.3. Transposing Error Upper Bounds

The expressions for the error upper bounds developed in the previous sections refer to the work domain, but it is important to transpose them to the domain of the original equation. The domain transposition method developed in [1, 3] can be applied to the errors.

For instance, in order to transpose the first-order derivative error, it can be written as $\frac{dy}{dx} = \frac{dy}{dt}\frac{dt}{dx}$, with $x$ being the original variable and $t$ the work-domain one; therefore, the first-derivative error bound in the original domain is the work-domain bound multiplied by $\left|\frac{dt}{dx}\right|$. For the second-order derivative, the multiplying factor is $\left(\frac{dt}{dx}\right)^2$. For higher-order derivatives, the reasoning is analogous.
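A minimal sketch of this transposition, assuming the linear map $t = (2x - a - b)/(b - a)$ and therefore the factor $(2/(b - a))^k$ for the $k$-th derivative, is given below; the numerical values are illustrative only.

```python
# Sketch of transposing derivative error bounds back to an original domain [a, b],
# assuming the linear map t = (2x - a - b)/(b - a); each derivative order picks up
# the factor (dt/dx)^k = (2/(b - a))^k. Values below are illustrative only.
def transpose_bound(bound_work_domain, k, a, b):
    """Scale a k-th-derivative error bound from [-1, 1] to the interval [a, b]."""
    return bound_work_domain * (2.0 / (b - a)) ** k

print(transpose_bound(1e-6, 1, 0.0, 4.0))  # first-derivative bound on [0, 4]
print(transpose_bound(1e-6, 2, 0.0, 4.0))  # second-derivative bound on [0, 4]
```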

3.3. Finishing Example  1

For Example 1, the upper bounds of the errors for the Legendre series in the original interval can now be calculated by using the transposing procedures described previously; they are shown in Table 2. Additionally, Figure 3 shows the local errors along the interval. Again, the SIV procedure described in [3] is used to search for the solutions.

3.4. Example  2

In the following example, the tools described here and in [1–3] are applied to a boundary value problem, running on a processor with 4 GB of RAM and taking 20 s for the whole processing. The differential equation to be solved is given by (3.12), together with its domain and boundary conditions, the derivatives referring to the function to be found.

Under these conditions, the analytical solution is known and, after 100 generations of the SIV algorithm described in [3, Appendix B], working in the Jacobi polynomial domain, the dispersion of the coefficients is as shown in Figure 4. The residual values obtained when the approximated solution is substituted into (3.12) are shown in Figure 5.

The error upper bounds are given in Table 3 and, since the analytical solution is known, the local error over the interval can be computed for the obtained solution; it is shown in Figure 6.

4. Conclusions

Since the seminal work of Chen and Hsiao [22], research on operational matrices, combined with advances in computational capacity, has permitted a strong development of numerical solutions for differential equations. The SIV algorithm developed in [1, 3] provides an interesting combination of operational matrices and computational techniques, using polynomial approximations for the solutions.

Several advantages are inherent to the method: (i) the basis of the Hilbert space is complete and the solutions are square integrable; consequently, increasing the order of the series improves the approximation; (ii) as the elements of the basis are orthogonal, changing the coefficients of one component does not interfere with the precision of the other components; (iii) the learning process of the algorithm reduces the search space, lowering the computational cost and decreasing the error margins; (iv) the error bounds can be determined in a very simple way.

Acknowledgment

This work is supported by CNPq.