This work develops a method for solving ordinary differential equations, that is, initial-value problems, with solutions approximated by Legendre polynomial series. An iterative procedure for the adjustment of the polynomial coefficients, based on the genetic algorithm, is developed. This procedure is applied to several examples, providing comparisons between its results and the best polynomial fitting whenever numerical solutions by the traditional Runge-Kutta or Adams methods are available. The resulting algorithm provides reliable solutions even when numerical solutions are not available, that is, when the mass matrix is singular or the equation makes the numerical integration unstable.

1. Introduction

Mathematical modeling is a constant in engineering daily life, and expressing phenomena by using differential equations is almost mandatory. Successful modeling requires the ability to make simplifying assumptions but, even under these assumptions, solving differential equations demands considerable analytical and numerical effort [1].

Considering these difficulties, and the occasional impossibility of obtaining analytical solutions, several numerical methods have been proposed and, nowadays, the great majority of them are part of symbolic and numeric software packages such as MATLAB and Mathematica [2]. These methods are accurate and fast, but each one has its limitations, depending on the type of equation to be solved. The main difficulties appear when the mass matrix is singular or the equation generates instability in the running process [3].

Here, trying to deal with this kind of problem for equations with continuous solutions, a method for initial-value problems in ordinary differential equations is developed, assuming that the space of solutions is a Hilbert space equipped with a Legendre polynomial basis [4].

The central idea is to look for the best combination of polynomial coefficients satisfying the original differential equation over the whole interval, instead of searching for a polynomial that best fits a set of points in the given interval.

Consequently, a new error criterion must be defined, and a program based on this criterion is developed by using differential evolution methods with elitism [5-7].

The search space is determined here by sequential contractions, allowing fast evolution with noticeable reductions in the error bands when compared with random search over large populations, as proposed in [5].

Several examples are solved by the method presented here and by the classical numerical methods, and the accuracy of the results is compared.

In Section 2, the error criterion is discussed and a combination of global and local criteria is proposed, providing optimal fitting of the polynomial with respect to the function and its derivatives. Section 3 presents the analytical basis of the method, using Legendre polynomials to optimize the solutions of an initial-value problem.

Section 4 describes the use of the differential genetic algorithm with elitism, which guarantees stability, applied to the solution of ordinary differential equations, emphasizing the choice of the coefficients under the optimization criteria developed in Section 2. Finally, in Section 5, some examples are presented and the numerical quality of the solutions is discussed.

2. Error Criterion

Usually, the optimization criterion adopted to approximate a function by a polynomial in a given interval is the minimization of the square error integral [8]. This kind of approach minimizes the global error along the whole interval but produces a large local error at the starting and ending points of the interval.

Consequently, when this criterion is applied to an initial-value problem, the approximation does not produce the best result, as the error at the starting point of the interval remains large. To solve this problem, an error criterion considering both local and global results is proposed, minimizing the product of two factors: the maximum error at the starting point of the interval and the global integral error.

To illustrate this procedure, the differential equation (2.1), with initial condition y(-1) = e^{-1}, which admits the solution (2.2) on the interval [-1, 1], is studied:

dy/dx + 2x e^{-x^2} = 0, (2.1)

y = e^{-x^2}. (2.2)

If one tries to obtain, by using MATLAB, the 8th-degree Legendre polynomial that best fits (2.2), the result, in double-precision format, is given by (2.3). Substituting y, given by (2.3), into the original differential equation (2.1), the integral error is 1.9 × 10^{-8} over the whole interval:

y = 0.0263x^8 - 0.1551x^6 + 0.4963x^4 - 0.9996x^2 + 1.0000. (2.3)

If one continues the process, taking the result given by (2.3) as an initial solution candidate for the original equation (2.1) and using the method developed in the following sections, the solution given by (2.4) is obtained, with an integral error of 8.7 × 10^{-9} over the whole interval:

y = 0.0242x^8 - 0.1511x^6 + 0.4940x^4 - 0.9992x^2 + 1.0000. (2.4)

This result is related to the fact that the polynomial given by (2.3) is adequate to fit the function given by (2.2), without considering its derivatives. In order to adapt the solution to the differential equation (2.1), the derivatives must be adjusted in the same way.
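The experiment above can be reproduced in outline (a sketch; the paper works in MATLAB, while Python with numpy.polynomial.legendre is used here as a stand-in, so the exact error figures will differ):

```python
import numpy as np
from numpy.polynomial import legendre as L

x = np.linspace(-1.0, 1.0, 2001)
y = np.exp(-x**2)

# degree-8 least-squares Legendre fit of the solution (2.2)
coef = L.legfit(x, y, 8)

# residual of the fitted candidate in the ODE (2.1): dy/dx + 2x e^{-x^2}
residual = L.legval(x, L.legder(coef)) + 2.0 * x * np.exp(-x**2)
integral_error = np.trapz(residual**2, x)
```

The fit is excellent pointwise, yet its derivative still leaves a measurable residual in the equation, which is exactly the gap the criterion of this section targets.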

3. Analytic Foundations

In this section, the Hilbert space of piecewise continuous functions defined on a closed interval [a, b] is considered, with the scalar product between f and g, belonging to the space, defined as [4]

⟨f, g⟩ = \int_a^b f g dx. (3.1)

Considering the real interval [-1, 1], the set of Legendre polynomials is an orthogonal basis for the space S of piecewise continuous functions defined on this interval; that is, for any function f ∈ S,

lim_{n→∞} \int_{-1}^{1} [f(x) - \sum_{k=0}^{n} a_k P_k(x)]^2 dx = 0, (3.2)

with P_k being the k-degree normalized Legendre polynomial and a_k = ⟨f, P_k⟩ [4, 9].
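As a numerical illustration of (3.2) (a sketch; the choice f(x) = |x| and the quadrature size are arbitrary, with numpy used in place of the paper's MATLAB), the squared error of the truncated expansion decreases as terms are added:

```python
import numpy as np
from numpy.polynomial import legendre as L

nodes, weights = L.leggauss(64)          # quadrature for the scalar product (3.1)
f = np.abs(nodes)                        # piecewise continuous test function |x|
xs = np.linspace(-1.0, 1.0, 1001)

def l2_error(n):
    # a_k for the conventional (unnormalized) P_k: a_k = (2k+1)/2 <f, P_k>
    a = np.array([(2 * k + 1) / 2.0
                  * np.sum(weights * f * L.legval(nodes, np.eye(n + 1)[k]))
                  for k in range(n + 1)])
    return np.trapz((np.abs(xs) - L.legval(xs, a)) ** 2, xs)

errors = [l2_error(n) for n in (2, 4, 16)]
```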

It is important to notice that both the integral and the derivative of a Legendre polynomial can be expressed as linear combinations of the Legendre polynomial basis [10], as the following theorems, with proofs presented in Appendix A, show.

Theorem 3.1. Calling P_n(x) the n-degree Legendre polynomial,

\int_{-1}^{x} P_n(ξ) dξ = [P_{n+1}(x) - P_{n-1}(x)] / (2n + 1), n ≥ 1. (3.3)

Theorem 3.2. Calling P_n(x) the n-degree Legendre polynomial,

P'_n = \sum_{k=0}^{n-1} [(2k + 1)/2] [1 - (-1)^{k+n}] P_k. (3.4)
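Both theorems can be spot-checked numerically (a sketch using numpy's Legendre utilities as the reference implementation; n = 5 is an arbitrary choice):

```python
import numpy as np
from numpy.polynomial import legendre as L

n = 5
Pn = np.eye(n + 1)[n]                     # coefficient vector of P_n

# Theorem 3.1: integral of P_n from -1 to x
lhs = L.legint(Pn, lbnd=-1)               # length n+2 coefficient vector
rhs = (np.eye(n + 2)[n + 1] - np.eye(n + 2)[n - 1]) / (2 * n + 1)
assert np.allclose(lhs, rhs)

# Theorem 3.2: derivative of P_n expressed in the Legendre basis
expected = np.array([(2 * k + 1) / 2.0 * (1 - (-1) ** (k + n)) for k in range(n)])
assert np.allclose(L.legder(Pn), expected)
```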

Considering the theorems above about the integration and differentiation of Legendre polynomials, it is convenient to express these results in matrix form.

3.1. Operational Matrices

As shown in [11], Theorem 3.3 allows the construction of matrices expressing any linear operation executed over the Hilbert space of continuous functions. Here, numerical methods for solving differential equations by using Legendre polynomials are developed; therefore, constructing these matrices for the integration and differentiation operations expressed in a Legendre polynomial basis is useful.

Let f(u) be a linear combination of a generic basis {B_k} of the Hilbert space of continuous functions:

f(u) = \sum_{k=0}^{n} c_k B_k(u). (3.5)

Theorem 3.3. Let Z be an (n+1) × (n+1) matrix describing the coefficients resulting from a linear operation α over the generic basis {B_k}, expressed as functions of the same basis. Calling V = [v_0 v_1 ... v_n] the vector of coefficients resulting from the linear operation α over f(u), then V^T = Z^T C^T, with C being the vector of the coefficients described in (3.5).

The procedure described by Theorem 3.3, proved in Appendix A, can be used for any basis of a Hilbert space, the calculation of the matrix Z being the main operation to be done.

Combining Theorems 3.1 and 3.3, taking Legendre polynomials as the basis of the Hilbert space of continuous functions, if Z_Li is (n+2) × (n+2) and α is the integration operation,

Z_Li(1, 1) = 1; Z_Li(1, 2) = 1;
Z_Li(k, k-1) = -1/(2k - 1), k = 2, ..., n+1;
Z_Li(k, k+1) = 1/(2k - 1), k = 2, ..., n+1;
Z_Li(n+2, n+1) = -1/(2n + 3). (3.6)

All the elements of Z_Li not defined by (3.6) are equal to zero. For instance, if n = 2, Z_Li can be written as

Z_Li = [   1     1     0     0
         -1/3    0    1/3    0
           0   -1/5    0    1/5
           0     0   -1/7    0 ]. (3.7)

It can be observed that the element Z_Li(n+2, n+3) = Z_Li(4, 5) does not appear, because the third-degree Legendre polynomial was not integrated.
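A sketch of the construction of Z_Li from (3.6), checked against numpy's Legendre integration (Python stands in for the paper's MATLAB; the trailing zero in the coefficient vector pads it to the (n+2)-dimensional space):

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_int_matrix(n):
    # entries of Z_Li as defined by (3.6), stored with 0-based indices
    Z = np.zeros((n + 2, n + 2))
    Z[0, 0] = Z[0, 1] = 1.0
    for k in range(2, n + 2):
        Z[k - 1, k - 2] = -1.0 / (2 * k - 1)
        Z[k - 1, k] = 1.0 / (2 * k - 1)
    Z[n + 1, n] = -1.0 / (2 * n + 3)
    return Z

n = 2
Z = legendre_int_matrix(n)
c = np.array([0.5, -1.0, 2.0, 0.0])       # coefficients of f, padded with a zero
v = Z.T @ c                                # V^T = Z^T C^T (Theorem 3.3)
assert np.allclose(v, L.legint(c[:n + 1], lbnd=-1))
```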

Combining Theorems 3.2 and 3.3, taking Legendre polynomials as the basis of the Hilbert space of continuous functions, if Z_Ld is (n+1) × (n+1) and α is the differentiation operation,

Z_Ld(k+1, j+1) = [1 - (-1)^{k+j}] (2j + 1)/2; k = 1, ..., n; j = 0, ..., k-1. (3.8)

All the elements of Z_Ld not defined by (3.8) are equal to zero. For instance, for n = 3, Z_Ld can be written as

Z_Ld = [ 0  0  0  0
         1  0  0  0
         0  3  0  0
         1  0  5  0 ]. (3.9)
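Likewise, Z_Ld from (3.8) can be built and checked against numpy's Legendre differentiation (a sketch; the coefficient vector below is arbitrary):

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_diff_matrix(n):
    # entries of Z_Ld as defined by (3.8), stored with 0-based indices
    Z = np.zeros((n + 1, n + 1))
    for k in range(1, n + 1):
        for j in range(k):
            Z[k, j] = (1 - (-1) ** (k + j)) * (2 * j + 1) / 2.0
    return Z

n = 3
Z = legendre_diff_matrix(n)
c = np.array([0.3, -1.2, 0.5, 2.0])        # arbitrary Legendre coefficients of f
v = Z.T @ c                                 # V^T = Z^T C^T (Theorem 3.3)

reference = np.zeros(n + 1)
reference[:n] = L.legder(c)                 # numpy's Legendre-series derivative
assert np.allclose(v, reference)
```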

The integration matrix Z_Li is nonsingular and, therefore, its inverse always exists. On the other hand, Z_Ld needs a correction if c_0 in (3.5) is such that c_0 ≠ -\sum_{k=1}^{n} c_k (-1)^k. The correction to be done is to modify c_0, imposing the condition described, without changing the derivative of f(u).

In [10], the operational matrix was derived for the particular case where α is an integration operation. Theorem 3.3, developed here, permits general matrix expressions for all linear operations α, considering any kind of basis of the Hilbert space of continuous functions. Reference [12] additionally shows this development for the Fourier trigonometric and canonical bases.

3.2. Conservation of Initial Conditions in a Legendre's Basis

When using evolution methods to treat initial-value problems, expressing solutions in a Legendre basis, the initial conditions must be conserved throughout the running of the algorithm. This conservation depends on the correct combination of the coefficients c_i used to express the function at any step of the running process. This combination generates an algebraic linear system of equations that admits infinitely many solutions. It is solved by choosing a subset of q coefficients, with q representing the order of the original differential equation, and expressing them as functions of the others.

In order to obtain the coefficients of a function derivative, the differentiation matrix, built as explained in the former subsection, has to be used.

As an example, if a third-order equation with solutions in the interval [-1, 1] is studied, with seven coefficients in the Legendre expansion, one possible relationship among the coefficients is

c_0 = 5c_3 - 21c_4 + 56c_5 - 120c_6 + (2/3)y_0^(2) + y_0^(1) + y_0,
c_1 = 9c_3 - 35c_4 + 90c_5 - 189c_6 + y_0^(2) + y_0^(1),
c_2 = 5c_3 - 15c_4 + 35c_5 - 70c_6 + (1/3)y_0^(2), (3.10)

where (y_0, y_0^(1), y_0^(2)) is the vector of the initial conditions. If one needs to obtain the coefficients of the integral of the function, the integration matrix, built as explained in the former subsection, must be used, but the initial condition must be considered, as explained below.
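Relations such as (3.10) come from enforcing the initial conditions at x = -1; the general construction can be sketched as follows (the numerical values of the initial conditions and of the free coefficients c_3, ..., c_6 are arbitrary illustrations):

```python
import numpy as np
from numpy.polynomial import legendre as L

n, q = 6, 3                          # 7 coefficients, third-order equation
y0 = np.array([0.7, -0.4, 1.1])      # assumed initial conditions y, y', y'' at -1

# A[d, k] = d-th derivative of P_k evaluated at x = -1
A = np.zeros((q, n + 1))
for k in range(n + 1):
    e = np.zeros(n + 1)
    e[k] = 1.0
    for d in range(q):
        A[d, k] = L.legval(-1.0, L.legder(e, d))

free = np.array([0.2, -0.5, 0.3, 0.8])          # chosen c_3, ..., c_6
c = np.zeros(n + 1)
c[q:] = free
# solve the q x q linear system for c_0, c_1, c_2
c[:q] = np.linalg.solve(A[:, :q], y0 - A[:, q:] @ free)

for d in range(q):
    assert np.isclose(L.legval(-1.0, L.legder(c, d)), y0[d])
```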

Considering that F(u) = \int_{-1}^{u} f(ξ) dξ, that its Legendre expansion is given by F(u) = \sum_{k=0}^{n+1} b_k P_k(u), with P_k being the k-degree Legendre polynomial, and that the initial condition for the original differential equation is F(-1) = β, the coefficient b_0 obtained by the integration matrix must be replaced by b_0 + β. For a q-th-order differential equation, this process must be repeated q times.

3.3. Domain Transposition

The purpose of this work is to develop numerical methods to solve differential equations with solutions defined on a generic closed real interval [a, b], but the analytical foundations exposed above, as well as other developed methods, consider only particular intervals [13, 14]. Here, a generic transformation providing tools to transpose any closed domain to a particular one is studied for up to fourth-order initial-value problems. Besides, the impact of this transformation on the original differential equation and initial conditions is analyzed.

Let u = T(x), with T: [a, b] → [c, d], be a bijective transformation with inverse T^{-1}, and let y = f(x), with f: [a, b] → R, be a solution of an initial-value problem, with a vector Y_0 representing the initial conditions.

The interval-transposed function y_t(u): [c, d] → R is defined by y_t(u) = y(x), with x = T^{-1}(u), and, for the first derivative, calling D_0 = y(x),

D_1 = y^(1)(x) = y_t^(1)(u) du/dx. (3.11)

Calling λ = du/dx = 1/(dx/du) and D_t1 = y_t^(1),

D_1 = D_t1 λ. (3.12)

Consequently, the second and third derivatives are

D_2 = D_t2 λ^2 + D_t1 (dλ/du) λ,
D_3 = D_t3 λ^3 + 3 D_t2 λ^2 (dλ/du) + D_t1 λ [(dλ/du)^2 + λ (d^2λ/du^2)]. (3.13)

For the sake of clarity, the fourth derivative is written by grouping the terms in D_t1, D_t2, D_t3, and D_t4 as follows:
(i) terms in D_t4: D_t4 λ^4;
(ii) terms in D_t3: 6 D_t3 λ^3 (dλ/du);
(iii) terms in D_t2: D_t2 λ^2 [7 (dλ/du)^2 + 4λ (d^2λ/du^2)];
(iv) terms in D_t1: D_t1 λ [(dλ/du)^3 + 4λ (dλ/du)(d^2λ/du^2) + λ^2 (d^3λ/du^3)].

Combining the expressions above,

D_4 = D_t4 λ^4 + 6 D_t3 λ^3 (dλ/du) + D_t2 λ^2 [7 (dλ/du)^2 + 4λ (d^2λ/du^2)] + D_t1 λ [(dλ/du)^3 + 4λ (dλ/du)(d^2λ/du^2) + λ^2 (d^3λ/du^3)]. (3.14)
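The chain-rule expressions above can be spot-checked numerically; the sketch below verifies (3.13) for the second derivative, using the arctangent map of Section 3.3 with y(x) = sin(x) as an assumed test solution and central differences for the transposed derivatives:

```python
import numpy as np

a, b, c, d = 0.0, np.pi, -1.0, 1.0
v1, v2 = np.arctan(a), np.arctan(b)
T = lambda x: (d - c) * (np.arctan(x) - v1) / (v2 - v1) + c
Tinv = lambda u: np.tan((v2 - v1) * (u - c) / (d - c) + v1)

x0 = 1.3
u0 = T(x0)
K = (d - c) / (v2 - v1)
lam = K / (1 + x0**2)                              # λ = du/dx at x0
dlam_du = (-2 * x0 * K / (1 + x0**2) ** 2) / lam   # dλ/du = (dλ/dx) / λ

yt = lambda u: np.sin(Tinv(u))                     # transposed test function
h = 1e-5
Dt1 = (yt(u0 + h) - yt(u0 - h)) / (2 * h)
Dt2 = (yt(u0 + h) - 2 * yt(u0) + yt(u0 - h)) / h**2

D2 = Dt2 * lam**2 + Dt1 * dlam_du * lam            # right-hand side of (3.13)
assert np.isclose(D2, -np.sin(x0), atol=1e-4)      # exact y''(x0) = -sin(x0)
```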

As the solution of a differential equation is tied to the initial conditions, it is necessary to apply domain transformations as described above in the inverse way. Therefore, calling ρ = dx/du = 1/(du/dx) and considering equations (3.12), (3.13), and (3.14) in the inverse way, with the replacement of λ by ρ, the transposed initial conditions corresponding to y_0, y_0^(1), y_0^(2), and y_0^(3) are given by

y_t0 = y_0,
y_t1 = y_1 ρ,
y_t2 = y_2 ρ^2 + y_1 (dρ/dx) ρ,
y_t3 = y_3 ρ^3 + 3 y_2 ρ^2 (dρ/dx) + y_1 ρ [(dρ/dx)^2 + ρ (d^2ρ/dx^2)]. (3.15)

If T is a linear transformation, expressions (3.12), (3.13), (3.14), and (3.15) are considerably simplified, as λ and ρ are constants.

In spite of its simplicity, a linear transformation does not allow working with equations defined on an infinite domain. In order to take care of this problem, the transformation T given by

u = T(x) = (d - c) [arctan(x) - v_1] / (v_2 - v_1) + c,
x = T^{-1}(u) = tan[(v_2 - v_1)(u - c)/(d - c) + v_1], (3.16)

where v_1 = arctan a and v_2 = arctan b, can be used.
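A quick sketch confirming that this transformation and its inverse behave as expected in the finite case (the interval choices are arbitrary):

```python
import numpy as np

def make_transposition(a, b, c, d):
    # arctangent-based map T: [a, b] -> [c, d] and its inverse, per (3.16)
    v1, v2 = np.arctan(a), np.arctan(b)
    T = lambda x: (d - c) * (np.arctan(x) - v1) / (v2 - v1) + c
    Tinv = lambda u: np.tan((v2 - v1) * (u - c) / (d - c) + v1)
    return T, Tinv

T, Tinv = make_transposition(0.0, np.pi, -1.0, 1.0)
x = np.linspace(0.0, np.pi, 11)
assert np.isclose(T(0.0), -1.0) and np.isclose(T(np.pi), 1.0)
assert np.allclose(Tinv(T(x)), x)
```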

4. Evolution-Adaptive Process for Solving Initial-Value Problems

In this section, the main question to be answered is whether it is possible, with the computational power available, to generate random coefficients for an n-degree polynomial and, by applying a sequential algorithm such as the differential evolution algorithm to the coefficients, to obtain the exact solution of an initial-value problem.

Due to the randomness of the original genetic algorithms, the regions of the solution space are visited in a random way, and there is no convergence. Nevertheless, when differential versions of genetic algorithms with elitism are applied, stable processes and satisfactory results are obtained for particular cases [15-18], including complex biological problems [19, 20] and quantum mechanics [21, 22].

Here, a differential genetic algorithm with an implicit learning process is developed; stability is guaranteed by a good choice of the region of the space of solutions, based on a restriction scheme that continuously contracts the volume of the search space.

Considering that an n-degree polynomial is the best candidate for the solution of an initial-value problem at a particular step of a sequential algorithm, under a certain error criterion, it is natural to ask how far from the exact solution it is. In order to treat this problem, the original differential equation is written with the highest-order derivative expressed as a function of the lower-order derivatives, the original function, and the independent variable:

d^q y / du^q = G(y_t, y_t^(1), ..., y_t^(q-1), u). (4.1)

Replacing the best solution candidate as y_t into G given by (4.1), it is possible to evaluate G over the defined interval and to fit the obtained results by a Legendre polynomial of degree up to n - q. The fitting process follows the Gauss-Legendre quadrature method [23, 24]. After that, the obtained polynomial is integrated q times, by using the integration matrix [Z_Li]^T defined in Section 3.1 and considering the given initial conditions.

Comparing this result with the original candidate, the dispersion is obtained. This dispersion defines the variation band for the coefficients of the polynomial candidate in the next step of the algorithm. For the first step, the candidate is the null polynomial, providing the first variation band for each coefficient of the Legendre series.
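The fit-and-integrate step can be sketched for a toy first-order problem (an assumption-laden illustration: the equation y' = -y with y(-1) = 1 and the use of numpy's Legendre routines are stand-ins; iterating this step alone, without the evolutionary search, already converges here because the problem is linear):

```python
import numpy as np
from numpy.polynomial import legendre as L

# toy problem: y' = G(y, u) = -y, y(-1) = 1, exact solution y = e^{-(u+1)}
n, q = 10, 1
nodes, _ = L.leggauss(n + 1)            # Gauss-Legendre abscissae (no endpoints)

cand = np.zeros(n + 1)
cand[0] = 1.0                           # first candidate: y ≡ 1, obeying y(-1) = 1

for _ in range(30):
    g = -L.legval(nodes, cand)          # evaluate G at the nodes
    gcoef = L.legfit(nodes, g, n - q)   # fit G by a Legendre series of degree n - q
    new = L.legint(gcoef, m=q, lbnd=-1) # integrate q times, vanishing at u = -1
    new[0] += 1.0                       # restore the initial condition y(-1) = 1
    cand = new                          # (new - cand gives the dispersion bands)

u = np.linspace(-1.0, 1.0, 201)
error = np.max(np.abs(L.legval(u, cand) - np.exp(-(u + 1))))
```

In the SIV algorithm proper, the difference between the integrated result and the candidate is not applied directly; it only defines the variation band for the next generation.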

To perform the differential evolution algorithm, a population is randomly assembled inside the first variation band. Then, by using a combination of cross-over, random generation, and mutation, new offspring are generated, feeding the sequence of the algorithm.

The program then starts a tournament selection. If the best output of the tournament overcomes the former winner, a new variation band is constructed. Otherwise, by using elitism with random cross-over and mutation [25], the algorithm creates a new generation with each gene (coefficient) varying inside the new band, avoiding stagnation in local minima.
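A compact sketch of the evolutionary core (differential evolution with elitism) on the same kind of toy problem, y' + y = 0 with y(-1) = 1; the population size, mutation factor, and penalty weight below are arbitrary choices, not the paper's settings:

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)
nodes, weights = L.leggauss(16)

def cost(c):
    # quadrature of the squared ODE residual plus an initial-condition penalty
    res = L.legval(nodes, L.legder(c)) + L.legval(nodes, c)
    ic = L.legval(-1.0, c) - 1.0
    return np.sum(weights * res**2) + 10.0 * ic**2

npop, dim, F, CR = 40, 6, 0.6, 0.9
pop = rng.uniform(-1.0, 1.0, (npop, dim))
costs = np.array([cost(p) for p in pop])

for _ in range(300):
    for i in range(npop):
        r1, r2, r3 = rng.choice(npop, 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        trial = np.where(rng.random(dim) < CR, mutant, pop[i])
        tc = cost(trial)
        if tc < costs[i]:               # elitism: replace only on improvement
            pop[i], costs[i] = trial, tc

best = pop[np.argmin(costs)]
```

Because the residual is linear in the coefficients here, the cost is convex and the population settles quickly; the nonlinear examples of Section 5 are where the contracting bands matter.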

The described procedure is possible even when the mass matrix is singular [3], because the Gauss-Legendre abscissae do not include the interval extremes [23, 24]. If a numerical integration solution is available, it can be the starting point of the algorithm, instead of the null polynomial, shortening the running time of the procedure.

5. Results and Analysis

Based on the analytical development presented in Section 3 and using a differential evolution algorithm, a method for solving initial-value problems, from now on abbreviated as SIV and detailed in Appendix B, was developed, considering that the solutions are expressed as linear combinations of Legendre polynomials in the transposed domain.

Starting with an interval transposition, the SIV algorithm runs, adjusting the coefficients of the polynomial solution and minimizing the residual error obtained when the polynomial approximation, its derivatives, and the transposed initial conditions are replaced in the transposed differential equation. After a chosen number of generations, applying the inverse transposition, the solution is obtained in the original domain.

Here, the application of the SIV algorithm is shown considering four examples of nonlinear equations of up to fourth order. The obtained solutions are compared with the analytic ones, when they exist.

The algorithm is implemented with 66 chromosomes and a 5% mutation rate. The hardware is a dual-core PC with 2 GB RAM, running MATLAB 7.

In the cases where it is not possible to express an analytic solution by a finite combination of elementary functions, the solution obtained by SIV is compared with the one obtained by the Adams method, when the maximum MATLAB precision is used [2, 3].

5.1. Linear Transposition: Zero-Order Bessel Equation

The first chosen example is the zero-order Bessel equation, corresponding to

x D_2 + D_1 + D_0 x = 0, (5.1)

with D_i denoting the i-th derivative of the searched function with respect to the independent variable x, x ∈ [0, 1], and with the initial conditions given by the vector [1, 0].

Equation (5.1) has the analytic solution given by

J_0(x) = \sum_{k=0}^{∞} [(-1)^k / (k!)^2] (x/2)^{2k}. (5.2)
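The truncated series (5.2) can serve as a reference; the sketch below checks, by central differences, that it satisfies (5.1) away from the singular point x = 0 (the truncation length and sample points are arbitrary choices):

```python
import math

# truncated series (5.2) for J_0
def j0(x, terms=25):
    return sum((-1) ** k / math.factorial(k) ** 2 * (x / 2.0) ** (2 * k)
               for k in range(terms))

# residual of x y'' + y' + x y = 0 by central differences
h = 1e-4
for x in (0.2, 0.5, 0.9):
    d1 = (j0(x + h) - j0(x - h)) / (2 * h)
    d2 = (j0(x + h) - 2 * j0(x) + j0(x - h)) / h**2
    assert abs(x * d2 + d1 + x * j0(x)) < 1e-5
```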

In order to transpose the domain from [0, 1] to [-1, 1], a linear transformation is used. For this kind of transformation, considering that a function is described on an interval [a, b] and will be transposed to an interval [c, d], the relation (u - c)/(d - c) = (x - a)/(b - a), giving u = T(x) = [(d - c)/(b - a)] x + (bc - ad)/(b - a), is used.
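A minimal check of the linear transposition and its constant term (the values below map [0, 1] onto [-1, 1], as needed here):

```python
import numpy as np

def lin_T(x, a, b, c, d):
    # linear transposition u = T(x) from [a, b] to [c, d]
    return (d - c) / (b - a) * x + (b * c - a * d) / (b - a)

assert np.isclose(lin_T(0.0, 0.0, 1.0, -1.0, 1.0), -1.0)
assert np.isclose(lin_T(1.0, 0.0, 1.0, -1.0, 1.0), 1.0)
assert np.isclose(lin_T(0.5, 0.0, 1.0, -1.0, 1.0), 0.0)
```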

The SIV algorithm was applied to (5.1), with a 26-coefficient polynomial approximation, starting with the null polynomial.

The final result was compared with the analytic solution (5.2). As, in this case, the mass matrix is singular [3, 23, 24], the Runge-Kutta and Adams algorithms cannot be initialized and, consequently, they are not used for comparisons.

After two hundred generations, an evolved Legendre series approximation was obtained with very low error margins, as shown in Figure 1 for the obtained solution of (5.1) and the corresponding derivatives.

To give an idea of the precision of the method, Figure 2 shows the residue obtained by substituting the approximation of the function given by (5.2) produced by the Legendre polynomial fitting process [2], together with the residue for the evolved solution of the same degree. Observing Figure 2, it can be noticed that the SIV algorithm improves the accuracy by at least two orders of magnitude. Besides, the problem of lack of accuracy at the interval extremes is considerably reduced.

5.2. Linear Interval Transposition: Fourth-Order Differential Equation

Now, an example of a fourth-order differential equation will be considered. Denoting by D_i the i-th derivative of the searched function, the equation to be solved is

D_4 + D_3 D_1 - D_0^2 + 3 sin^2 x + 2x sin 2x + x^2 - D_0/2 + 3 cos x + D_2/2 = 0, (5.3)

with x ∈ [0, π] and with the initial conditions given by the vector [0, 0, 2, 0].

Equation (5.3) has the analytic solution given by

f(x) = x sin(x). (5.4)

In order to transpose the domain from [0, π] to [-1, 1], a linear transformation, as shown in Section 5.1, is used.

Starting with the solution obtained by using MATLAB (ode113) with maximum precision [2], the SIV method with 22 coefficients gives a solution with the error figures shown in Figure 3, after two hundred generations.

To give an idea of the precision of the method, Figure 4 shows the residue obtained by substituting the approximation of the function given by (5.4) produced by the Legendre polynomial fitting process [2], together with the residue for the evolved solution of the same degree. Observing Figure 4, it can be noticed that the SIV algorithm improves the accuracy by at least two orders of magnitude. Besides, the problem of lack of accuracy at the interval extremes is considerably reduced.

5.3. Trigonometric Interval Transposition

Another example where an interval transposition is necessary is described here. Denoting by D_i the i-th derivative of the searched function, the equation to be solved is

D_4 + 4 D_1^3 - [24x(1 - x^2)/(1 + x^2)^3] D_1 - 32 [tan^{-1}(x)]^3 / (1 + x^2)^3 - (72x^2 - 16)/(1 + x^2)^4 = 0, (5.5)

with x ∈ [0, π] and with the initial conditions given by the vector [0, 0, 2, 0].

Equation (5.5) has the analytic solution given by

f(x) = [tan^{-1}(x)]^2. (5.6)

In order to transpose the domain from [0, π] to [-1, 1], a trigonometric transformation is used. For this kind of transformation, considering that a function of x is defined on an interval [a, b], the variable is first written as x = tan(v), with v defined on an interval [v_1, v_2] such that v_1 = tan^{-1}(a) and v_2 = tan^{-1}(b).

Then, this interval is transposed to [c, d] for the variable u, by using the transformation u = (d - c)[tan^{-1}(x) - v_1]/(v_2 - v_1) + c.

Starting with the null polynomial, with seven coefficients, and running the SIV process for six hundred generations, taking about 1 minute, a solution with the error figures shown in Figure 5 was obtained.

As a 6th-degree approximation of the function given by (5.6) has a very small error, Figure 6 shows only the residue of the evolved series, which is practically the exact solution.

5.4. Equation without Analytic Solution

There are equations whose analytic solutions cannot be expressed by a finite combination of elementary functions. Here, an example of this kind is presented. Denoting by D_i the i-th derivative of the searched function, the equation to be solved is

D_3 + D_2 D_0 = 0, (5.7)

with x ∈ [0, 5] and with the initial conditions given by the vector [0, 0, 0.4699].
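Although no closed-form solution exists, (5.7) is amenable to direct numerical integration on [0, 5]; the sketch below uses a plain fourth-order Runge-Kutta scheme (a stand-in for the Adams reference used in the text) and checks the expected behavior for this initial value, with y'' decaying while y' levels off close to 1:

```python
import numpy as np

def rhs(s):
    # state s = (y, y', y''); equation (5.7): y''' = -y * y''
    y, y1, y2 = s
    return np.array([y1, y2, -y * y2])

s = np.array([0.0, 0.0, 0.4699])     # initial conditions from the text
h = 0.001
for _ in range(5000):                # integrate from x = 0 to x = 5
    k1 = rhs(s)
    k2 = rhs(s + h / 2 * k1)
    k3 = rhs(s + h / 2 * k2)
    k4 = rhs(s + h * k3)
    s = s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

assert 0.0 < s[2] < 0.05             # y'' has almost vanished at x = 5
assert 0.9 < s[1] < 1.1              # y' approaches a constant close to 1
```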

After a linear interval transposition, the SIV algorithm started with a 21-degree null polynomial. After four hundred generations, a solution with the residual errors shown in Figure 7 was obtained.

As the analytic solution cannot be expressed as a finite combination of elementary functions, in order to establish comparisons, the Adams method, implemented with the maximum MATLAB precision [2], was used to generate the reference for the Legendre polynomial fitting of the function. Figure 8 shows that the SIV algorithm improves the precision by at least one order of magnitude, attenuating the lack of accuracy at the domain extremes.

6. Conclusions

A method combining genetic algorithms with Legendre polynomial approximation was proposed, in order to solve nonlinear initial-value problems with differential equations of up to 4th order.

The method presents several advantages:
(i) it provides analytical expressions for the solutions and their derivatives over the whole domain, instead of a low-degree piecewise polynomial;
(ii) it runs even if the mass matrix is singular, unlike the traditional Runge-Kutta and Adams methods;
(iii) it is able to start the process of finding a solution from a null polynomial;
(iv) it considerably improves the precision of the solutions, solving the problem related to the lack of accuracy at the interval extremes;
(v) the spectral evolution runs fast, since the evolution of each coefficient does not affect the others, due to orthogonality.


A. Proofs of the Theorems

Proof of Theorem 3.1. Calling P'_n the derivative of P_n, based on [4, 9], it can be written that
(i) (2n + 1) P_n = P'_{n+1} - P'_{n-1};
(ii) P_k(-1) = (-1)^k, k ∈ N.
Integrating equation (i) over the real interval [-1, x],

\int_{-1}^{x} P_n(ξ) dξ = [(P_{n+1} - P_{n-1})/(2n + 1)]_{-1}^{x}, (A.1)

and, considering (ii), the contribution of the lower limit vanishes and the thesis of the theorem holds for n ≥ 1. If n = 0, \int_{-1}^{x} P_0(ξ) dξ = P_1(x) + P_0.

Proof of Theorem 3.2. The coefficients of the Legendre expansion are given by

c_k = [(2k + 1)/2] \int_{-1}^{1} f(x) P_k(x) dx. (A.2)

Considering that f(x) = P'_n(x) is an (n-1)-degree polynomial and noticing that an n-degree Legendre polynomial is orthogonal to all polynomials, Legendre or not, of lesser degree, the integral in (A.2) can be calculated by parts as

2c_k/(2k + 1) = \int_{-1}^{1} P_k P'_n dx = [P_k P_n]_{-1}^{1} - \int_{-1}^{1} P_n P'_k dx = [P_k P_n]_{-1}^{1}, (A.3)

the last integral vanishing by orthogonality. Remembering (ii) from Theorem 3.1 and that P_k(1) = 1, k ∈ N [4, 9],

2c_k/(2k + 1) = 1 - (-1)^k (-1)^n, (A.4)

and, consequently, the coefficients of the expansion of P'_n are

c_k = [(2k + 1)/2] [1 - (-1)^{k+n}]. (A.5)

Proof of Theorem 3.3. As α is a linear operation,

α[f(u)] = \sum_{k=0}^{n} c_k α[B_k(u)]. (A.6)

Considering the space of the finite matrices M_{i,j}, i = 1, ..., n+1, j = 1, ..., n+1, it can be noticed that, in this space, α[f(u)] is an element of the scalar field; consequently, (α[f(u)])^T = α[f(u)].
As B is an (n+1) × 1 vector of basis functions and α[B] = ZB, the operation over f(u) can be written as VB = C(ZB), both sides belonging to the scalar field. Therefore,

B^T V^T = (ZB)^T C^T = B^T Z^T C^T, (A.7)

and, as (A.7) holds with the components of B linearly independent, it results that V^T = Z^T C^T.

B. Detailed Algorithm

The SIV algorithm is implemented in three parts: database, adaptive-evolutive process, and restarting, as described below.

B.1. Part 1—Database
(i) Reading the problem data: equation, domain, initial conditions, and domain transposition type;
(ii) the database is assembled in the transposed domain, according to Section 3.
B.2. Part 2—Adaptive-Evolutive Algorithm
(i) The number of coefficients of the expansion is chosen. The program used here is prepared for up to 41 coefficients;
(ii) each chromosome (individual) is a set of coefficients of a series expansion of the solution, and each gene is a coefficient;
(iii) the zero element of the process is the null polynomial. The coefficients are adjusted following the idea expressed by (3.10), changing q (the order of the equation) coefficients in order to obey the initial conditions. Consequently, the first winner appears;
(iv) the winner is replaced into (4.1), with G being evaluated at several points of the domain. In order to avoid the Runge phenomenon, the chosen points are the nodes of the polynomial basis of the evolutive process;
(v) the series expansion of G is obtained in the basis chosen for the evolutive process, by using Gauss-Legendre numerical integration;
(vi) having the integration matrix and the initial conditions, the series representing G is integrated q times;
(vii) the winner coefficients are compared with those obtained in the former step. If they are equal, the solution is an exact one. If they are not, the error bands can be determined for each coefficient, restricting the search space and improving the speed and efficiency of the process;
(viii) having the coefficient error bands, the winner is used as the central element to construct a new population, with coefficients varying randomly inside the band, which works as a standard deviation for each coefficient;
(ix) the population is submitted to a tournament, with the ranking depending on the residue obtained by substituting the candidate into the original equation. If the winner of the tournament surpasses the former winner, the process returns to the fourth step. If it does not, the process goes to the next step;
(x) the individual evaluation in the tournament is performed by simply replacing the candidate in the equation and analyzing the residue;
(xi) the lower the residue, the better the candidate ranking;
(xii) a mutation is performed in a few genes (about 5%). The 20% worst individuals are discarded, and the champion is preserved for a new round. A new population is formed by the combination of the former population members. The process returns to the 9th step until the number of generations chosen when the algorithm started to run is reached;
(xiii) the winner of the last generation is considered the final one and, after a domain transposition, the results are presented.
B.3. Part 3—Restarting the Process

It is possible to restart the evolution from the former results, increasing or maintaining the number of coefficients, beginning at the 8th step of Part 2.