Research Article  Open Access
David J. López, José G. Romay, "Implementing Adams Methods with Preassigned Stepsize Ratios", Mathematical Problems in Engineering, vol. 2010, Article ID 765620, 19 pages, 2010. https://doi.org/10.1155/2010/765620
Implementing Adams Methods with Preassigned Stepsize Ratios
Abstract
Runge-Kutta and Adams methods are the most popular codes for the numerical solution of nonstiff ODEs. The Adams methods are useful to reduce the number of function calls, but they usually require more CPU time than the Runge-Kutta methods. In this work we develop a numerical study of a variable-steplength Adams implementation that can only take preassigned stepsize ratios. Our aim is to reduce the CPU time of the code by means of the precalculation of some coefficients. We present several numerical tests that show the behaviour of the proposed implementation.
1. The Variable-Stepsize Adams Method
The Adams methods (Bashforth and Adams [1], Moulton [2]) are the most popular multistep formulas to solve a nonstiff initial value problem y'(t) = f(t, y(t)), y(t_0) = y_0.
These codes, and other interesting multistep methods, are based on the replacement of f(t, y(t)) by an interpolating polynomial P(t), and on the approximation y_{n+1} = y_n + \int_{t_n}^{t_{n+1}} P(t) dt.
When the stepsize h is constant, the explicit k-step Adams-Bashforth method can be written in terms of the backward differences of f_n, y_{n+1} = y_n + h \sum_{j=0}^{k-1} \gamma_j \nabla^j f_n,
where the coefficients \gamma_j are obtained by the recurrence \gamma_0 = 1, \gamma_m = 1 - \sum_{l=0}^{m-1} \gamma_l / (m + 1 - l), which yields \gamma_1 = 1/2, \gamma_2 = 5/12, \gamma_3 = 3/8, and so on.
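This recurrence (the classical one, equivalent to \gamma_m + \gamma_{m-1}/2 + ... + \gamma_0/(m+1) = 1) can be checked in exact rational arithmetic. The short sketch below is our own illustration, not part of the codes discussed in this paper; it reproduces the classical values \gamma_0 = 1, \gamma_1 = 1/2, \gamma_2 = 5/12, \gamma_3 = 3/8, \gamma_4 = 251/720:

```python
from fractions import Fraction

def adams_bashforth_gammas(m_max):
    """Coefficients gamma_j of the constant-stepsize Adams-Bashforth
    method in backward-difference form, from the recurrence
        gamma_m = 1 - sum_{l=0}^{m-1} gamma_l / (m + 1 - l)."""
    gammas = []
    for m in range(m_max + 1):
        gammas.append(Fraction(1) - sum(gammas[l] / (m + 1 - l) for l in range(m)))
    return gammas

print(adams_bashforth_gammas(4))
# [Fraction(1, 1), Fraction(1, 2), Fraction(5, 12), Fraction(3, 8), Fraction(251, 720)]
```

Since the \gamma_j are rational and problem independent, they can be generated once at any order.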
The formulation of the implicit k-step Adams-Moulton method is straightforward, as can be seen, for example, in Lambert [3, Section 3.9]. The implicit equation is usually solved using a fixed-point implementation [3, page 144], starting from an Adams-Bashforth formula. The order of the k-step ABM pair is k + 1 (an exhaustive study of the order of a predictor-corrector pair can be seen in [3, Section 4.2]), and it is equivalent to the order-(k+1) pair based on the (k+1)-step Adams-Bashforth and the k-step Adams-Moulton with local extrapolation [3, pages 114-115].
The coefficients \gamma_j do not depend on the step length, so they remain constant in the whole integration and need only be computed once. However, the corresponding coefficients do vary when the stepsize is not constant, so they must be recalculated in each step of the integration.
There are many different ways to implement a variable-stepsize variable-order Adams code (actually a different one for each developer). For example, codes such as DEABM [4], VODE [5], and LSODE [6, 7] include controls to diminish the computational cost in some situations by keeping the stepsize constant or by avoiding the growth of the order. Nowadays, symbolic programs such as Matlab and Mathematica, as well as their numerical ODE solvers ode113 [8] and NDSolve [9, pages 961-972, 1068, 1215-1216], are very popular. We do not want to restrict our study to a particular code, so in the tests we will use a purer implementation of the variable-stepsize Adams code, developed by ourselves, based on the Krogh divided-differences formulation [10].
Krogh developed a way to compute the Adams formulas in terms of modified divided differences. Following the notation of Hairer et al. [11, pages 397-400], the variable-stepsize explicit k-step Adams-Bashforth predictor is p_{n+1} = y_n + h_n \sum_{j=0}^{k-1} g_j(n) \Phi^*_j(n),
and the k-step Adams-Moulton corrector can be expressed as y_{n+1} = p_{n+1} + h_n g_k(n) \Phi_k(n+1).
The coefficients \beta_j, \Phi_j, and \Phi^*_j can be calculated easily by the recurrences \beta_j(n) = \prod_{i=0}^{j-1} (t_{n+1} - t_{n-i}) / (t_n - t_{n-1-i}), \Phi_0(n) = \Phi^*_0(n) = f_n, \Phi_j(n) = \Phi_{j-1}(n) - \Phi^*_{j-1}(n-1), \Phi^*_j(n) = \beta_j(n) \Phi_j(n).
The calculation of g_j(n) requires a double loop in terms of j and q: c_{0,q}(n) = 1/q, c_{j,q}(n) = c_{j-1,q}(n) - c_{j-1,q+1}(n) h_n / (t_{n+1} - t_{n+1-j}), g_j(n) = c_{j,1}(n).
The computational cost of recalculating g_j(n) in each step is the major disadvantage of the variable-stepsize Adams code when the cost is measured in CPU time.
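To make this cost concrete, here is a small sketch of our own (an illustration, not the tested codes) implementing the c_{j,q} double loop of Hairer et al. [11]: c_{0,q} = 1/q, c_{j,q} = c_{j-1,q} - c_{j-1,q+1} h_n / (t_{n+1} - t_{n+1-j}), g_j(n) = c_{j,1}. For a constant grid it reproduces the constant-step coefficients \gamma_j = 1, 1/2, 5/12, 3/8, ...:

```python
def adams_g_coefficients(past_times, h_next, k):
    """Variable-stepsize Adams coefficients g_0(n), ..., g_{k-1}(n).
    past_times = [t_n, t_{n-1}, ...] (most recent first), h_next = h_n."""
    t_next = past_times[0] + h_next
    # first row of the tableau: c_{0,q} = 1/q for q = 1, ..., k+1
    c = [1.0 / q for q in range(1, k + 2)]
    g = [c[0]]                       # g_0 = c_{0,1} = 1
    for j in range(1, k):            # outer loop over j ...
        factor = h_next / (t_next - past_times[j - 1])
        # ... inner loop over q: each row is one entry shorter
        c = [c[q] - c[q + 1] * factor for q in range(len(c) - 1)]
        g.append(c[0])
    return g

# constant stepsize h = 1: the g_j reduce to the constants 1, 1/2, 5/12, 3/8
print(adams_g_coefficients([0.0, -1.0, -2.0, -3.0], 1.0, 4))
```

The double loop costs O(k^2) divisions and subtractions per step, which is exactly what the precalculation strategy of the next section removes.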
2. The Stepsize Ratios
The coefficients “g” and “β” are scalar and they are independent of the function f. Let us denote the stepsize ratios r_i = h_i / h_{i-1}.
Then it is well known that the dependence on the grid points in (1.7) and (1.9) is actually a dependence only on the ratios r_i.
2.1. The Coefficients in Terms of the Stepsize Ratios
Let us study the number of ratios needed for each coefficient.
Proposition 2.1. For j ≥ 1 the coefficient β_j(n) only depends on the j ratios r_{n-j+1}, ..., r_n, and for j ≥ 2 the coefficient g_j(n) only depends on the j - 1 ratios r_{n-j+2}, ..., r_n.
Proof. We only need to apply the relation t_a - t_b = h_{a-1} + ... + h_b (with a > b) to deduce that each fraction in (1.7) is a quotient of sums of consecutive stepsizes; dividing numerator and denominator by h_{n-1} rewrites it in terms of the ratios r_i alone. An inductive reasoning, starting with the constant value β_0 = 1, then gives the stated dependence for β_j(n). A similar proof can be developed for g_j(n): applying the same relation to the fraction in (1.9), and using the starting values c_{0,q} = 1/q and c_{1,q} = 1/(q(q+1)), the inductive reasoning shows the stated dependence for g_j(n), and the proof is complete.
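The scale invariance behind the proposition can also be checked numerically. The sketch below is our own illustration, using β_j(n) = \prod_{i=0}^{j-1} (t_{n+1} - t_{n-i}) / (t_n - t_{n-1-i}) as in Hairer et al. [11]: two grids with the same stepsize ratios (one a scaled copy of the other) give identical β coefficients, and a constant grid gives β_j = 1 for all j:

```python
def beta_coefficients(past_times, h_next, k):
    """beta_j(n) = prod_{i=0}^{j-1} (t_{n+1} - t_{n-i}) / (t_n - t_{n-1-i}),
    past_times = [t_n, t_{n-1}, ...] (most recent first), h_next = h_n."""
    t_next = past_times[0] + h_next
    betas, b = [1.0], 1.0
    for i in range(k - 1):
        b *= (t_next - past_times[i]) / (past_times[0] - past_times[i + 1])
        betas.append(b)
    return betas

# a grid and its copy scaled by 2 share all the ratios r_i, hence the betas
grid = [0.0, -1.0, -2.5, -4.0]
b1 = beta_coefficients(grid, 0.5, 4)
b2 = beta_coefficients([2.0 * t for t in grid], 1.0, 4)
print(b1 == b2)   # the scale cancels in every quotient

# constant stepsize: every quotient equals 1, so beta_j = 1 for all j
print(beta_coefficients([0.0, -1.0, -2.0, -3.0], 1.0, 4))
```

The constant-grid check also explains why the Krogh formulation reduces to the backward-difference formula when the stepsize is frozen.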
2.2. The Number of Coefficients in Terms of the Number of Ratios
Let us fix only N options to choose the ratios, that is, r_i ∈ {R_1, ..., R_N}.
As the ratios belong to a finite subset, the coefficients g and β belong to finite spaces of possible events. As there are only a finite number of “g” and “β” coefficients, they can be precalculated and read at the start of the numerical integration. In particular, the major computational disadvantage of the variable-stepsize Adams code (the double loop in the computation of (1.9) in each integration step) is removed.
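The precalculation can be sketched as follows (our own illustration, with a hypothetical three-ratio set {1/2, 1, 2}): every tuple of recent ratios becomes a key into a table built once before the integration, so each step costs a table lookup instead of a coefficient recomputation. We use the β coefficients as the stored values; the “g” coefficients are handled identically:

```python
from itertools import product

RATIOS = (0.5, 1.0, 2.0)   # hypothetical prefixed set, N = 3

def betas_from_ratios(ratios):
    """beta coefficients determined by a ratio tuple alone: rebuild a
    stepsize history with the oldest step set to 1 (any scale gives
    the same result, by Proposition 2.1)."""
    steps = [1.0]
    for r in ratios:                  # h_i = r_i * h_{i-1}
        steps.append(r * steps[-1])
    times = [0.0]                     # grid points, most recent first
    for h in reversed(steps[:-1]):
        times.append(times[-1] - h)
    h_next = steps[-1]
    t_next = times[0] + h_next
    betas, b = [1.0], 1.0
    for i in range(len(times) - 1):
        b *= (t_next - times[i]) / (times[0] - times[i + 1])
        betas.append(b)
    return tuple(betas)

# build the table once: N**m entries for tuples of m ratios
m = 3
table = {key: betas_from_ratios(key) for key in product(RATIOS, repeat=m)}
print(len(table))   # 27 = N**m
```

The exponential growth of the table in the tuple length m is precisely the RAM cost counted in the lemmas below.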
The importance of Proposition 2.1 is not the calculation of the functions β_j and g_j, but the existence of those functions and the number of ratios needed for each one. Let us count the number of different possible coefficients in terms of N and the order k. For simplicity we ignore the constants β_0, g_0, and g_1.
For the predictor (1.5) we require β_1 to β_{k-1}. According to (1.8), the value β_k is not necessary for Φ_k(n+1) in the corrector (1.6). From Proposition 2.1 it follows that there are only N^j different values for β_j, so the total length is N + N^2 + ... + N^{k-1}. (2.10)
Again from Proposition 2.1, if j ≥ 2 the coefficient g_j can only take N^{j-1} different values. But in this case g_k is used in the implicit corrector (1.6), so it is necessary to compute g_2, ..., g_k, and (2.10) is also valid for the “g” coefficients. We can formulate the first lemma.
Lemma 2.2. The “g” and “β” coefficients needed in the k-step pair (1.5) and (1.6) can be stored in two arrays of length (2.10).
A variable-stepsize algorithm needs an estimator of the error to change the stepsize. This estimator can be computed in different ways. We follow Lambert's estimators [3, Section 4.4] and extend them to variable stepsizes.
As the k-step pair is actually an order-(k+1) pair with local extrapolation, we can choose the same local error estimator for both pairs, which leads to the estimator (2.11).
In practice the variable-stepsize Adams method is also endowed with the capability of changing its order, so it is necessary to obtain some local error estimators for decreasing or increasing the order. By modifying the previous equation we obtain the estimators in (2.12)
for decreasing or increasing the order, respectively. The value of g_{k+1} is necessary for the increasing-order estimator, and also β_k, so Lemma 2.2 can be updated to the following lemma.
Lemma 2.3. The “g” and “β” coefficients needed for the predictor (1.5), the corrector (1.6), and the estimators (2.11) and (2.12) can be stored in two arrays of length N + N^2 + ... + N^k.
The CPU time used by the code will increase drastically if we let the order grow without control. For this reason it is necessary to impose a maximum order. In double precision (8 bytes per datum, 15-16 decimal digits) the maximum order is usually 12 (Lambert [3, page 144]). For our purposes we leave this maximum as a parameter MO (Maximum Order). We must emphasize that, when the maximum order is reached, the increasing-order estimator is not necessary, and then we do not need to calculate (and store) the corresponding “g” and “β” coefficients. This fact produces the final lemma.
Lemma 2.4. The “g” and “β” coefficients needed for a Variable-Step Variable-Order Adams code, with order up to MO, can be stored in two arrays of length N + N^2 + ... + N^{MO-1} = (N^{MO} - N)/(N - 1).
The amount of RAM needed to store and read the coefficients can be an important disadvantage of a fixed-ratios implementation. In the following section we search for useful choices of N and R_j in (2.9).
3. The Fixed Ratios Strategy
Usually, the implementation of the variable-stepsize Adams method includes a restriction on the ratios that appears in the stability analysis of generic variable-stepsize multistep methods (Hairer et al. [11, page 403]) and in the convergence analysis of the variable-stepsize variable-order Adams method (Shampine [12]). The stepsize ratio is then obtained by adapting equation (391a) of Butcher [13, page 293], giving the selection rule (3.2).
The fixed-ratios implementation that we propose selects as ratio the largest R_j in (2.9) not exceeding the value given by (3.2); we refer to this selection as (3.3).
The pathological case, in which (3.2) proposes a ratio below every prefixed one, can easily be avoided by including a small enough value as the minimum of the prefixed ratios.
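The selection (3.3) amounts to snapping the ratio proposed by the classical controller onto the prefixed grid. A minimal sketch of ours (the prefixed set and the proposed ratios are purely illustrative, not the ones selected later in the paper):

```python
def snap_ratio(proposed, prefixed):
    """Largest prefixed ratio not exceeding the proposed one, as in (3.3);
    if the proposal undercuts every prefixed ratio (the pathological case),
    fall back to the minimum of the set."""
    candidates = [r for r in sorted(prefixed) if r <= proposed]
    return candidates[-1] if candidates else min(prefixed)

prefixed = (0.5, 0.9, 1.0, 1.1, 2.0)   # illustrative five-ratio set
print(snap_ratio(1.37, prefixed))       # 1.1: largest prefixed ratio <= 1.37
print(snap_ratio(0.2, prefixed))        # 0.5: fallback to the minimum
```

Because the snapped ratio never exceeds the controller's proposal, the resulting stepsize is never larger than the classical one, which is why FRABM is expected to take more (slightly smaller) steps with a smaller error.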
For the numerical study we programmed a code, denoted VSVOABM, with a classical implementation. VSVOABM uses (3.2) for the stepsize selection and lets the order vary freely up to the maximum. In addition, we denote by FRABM (fixed-ratios Adams-Bashforth-Moulton) a code based on VSVOABM, changing (3.2) by (3.3) to select the new stepsize. As the prefixed ratio in (3.3) is lower than the value in (3.2), the stepsize selected by FRABM will be smaller than the one that VSVOABM would choose from the same estimator, so the error of FRABM is expected to be lower (of course using more steps). As the coefficients “g” and “β” do not depend on the problem, they were precalculated only once, stored in a file, and loaded into RAM at the start of the integration in FRABM. Then, in each step, they are read from memory instead of being calculated. As FRABM was programmed from VSVOABM, the comparison of the two will show the behaviour of the classical strategy (let the ratios free and calculate the coefficients in each step) versus the new one (fix a finite number of ratios and read the coefficients from RAM in each step).
The authors presented a large number of experiments in [14]; in this paper we summarize them. First of all we tested the following problems, in all of which the variation of the stepsize is crucial.
(1) Five revolutions of the classical Two-Body problem used in many relevant books, such as Butcher [13, pages 4-7], Dormand [15, pages 86-90], or Shampine [16, pages 90-95]. The variation of the stepsize grows with the eccentricity of the ellipse. We test a pair of different problems, one with middle eccentricity and another one with high eccentricity.
(2) An Arenstorf problem [17] with the popular orbit printed by Butcher in [13, page 9]. The figure of Hairer et al. [11, page 130, Figure 0.1] shows that a variable-stepsize strategy is very important in this problem.
(3) The Lorenz problem, extremely sensitive to errors in the first steps [11, pages 120, 245].
(4) A fictitious seven-body problem, denoted Pleiades [11, pages 245-246]. In this problem the function f is expensive to evaluate.
In the figures we compare the decimal logarithm of the CPU time versus the decimal logarithm of the error, for a range of tolerances, except in the Lorenz problem, in which the tolerances were restricted (see Hairer et al. [11, page 245] for an explanation). Nowadays most of these problems can be solved quickly on a standard computer, so CPU time was measured with extreme precision in “uclocks” (about 840 nanoseconds, less than 1 microsecond).
The main conclusions were the following.
(i) Ratios must be close to 1, even if the stepsize may vary a lot in the integration (Section 3.1).
(ii) Coefficients “β” must be calculated from (1.7) in each step, and coefficients “g” must be precalculated once and read in each step (Section 3.2).
(iii) FRABM5 methods with five prefixed ratios and two different maximum orders are two good methods with very different RAM requirements (Section 3.3).
3.1. The Ratios in the Classical Implementation
We present some results of five revolutions of a highly eccentric orbit of the Two-Body problem with VSVOABM. In Figure 1 we see significant changes in the stepsize: at the beginning of a revolution (perigee) it is about one hundred times smaller than in the middle of a revolution (apogee). However, the ratios in Figure 2 remain near 1 everywhere, even the ratios less than 1 coming from the rejected steps.
We have tested some other eccentricities in the Two-Body problem. As the eccentricity is reduced to 0, the changes in the stepsizes in Figure 1 diminish. In any case, the structure of Figure 2 shows little change.
In Table 1 we sort the ratios and select the percentiles 25, 50 (the median), and 75 of the accepted steps for different tolerances. The percentiles approach unity as the tolerance is reduced, probably because the selected orders increase with small tolerances (see Shampine [16, page 262, Figure 5.3]). In Table 2 we show the behaviour of the ratios in a global sample obtained with the whole range of tolerances for all the proposed problems.


We must conclude that a good choice of a finite number of prefixed ratios must select them near 1, even for problems with big changes in the stepsize.
3.2. The Difference between the “β” and the “g” Coefficients
The coefficients “β” and “g” are stored in two arrays of the same size, so the CPU time required to look up a particular β or g must be the same. However, there is an important difference when we compute them from their recurrences, because while β is calculated by means of the single loop (1.7), the double loop (1.9) is required for g.
We present different FRABM integrations for the Arenstorf problem, all of them with the same prefixed ratios and maximum order. We change the way to evaluate (computing or reading) the “β” and “g” coefficients, which produces four different implementation modes. In all of them the coefficients must be the same, and so must be the numerical results and errors (roundoff can occasionally produce slightly different estimators in (3.2) and different selections of the ratio in (3.3), but it does not happen often). As only the CPU time can differ among the four FRABM modes, the comparison must produce parallel graphs.
In all the tests we made, both reading-“g” strategies (diamonds and squares in Figure 3) beat both computing-“g” strategies (stars and triangles). In Figure 3 the best implementation mode turned out to be reading both “β” and “g”. In other problems we tested, computing “β” and reading “g” was slightly superior. Taking into account all the tests, both strategies were virtually tied, and they were doubtlessly more efficient than both computing-“g” modes. But as storing “β” requires an important amount of RAM, we discard the reading mode for “β”: from now on the FRABM implementation will compute “β” from (1.7) in each step, and only precalculate, store, and read in each step the “g” coefficients. In this way the RAM requirements are halved.
3.3. Proposing Some FRABM Algorithms
Now we will develop some concrete FRABM algorithms. It is impossible to reproduce all of the experiments we made in [14] comparing sets of ratios obtained by statistical and intuitive processes, so in this paper we only present the final conclusions.
First of all, we looked for the minimum and maximum ratios R_1 and R_N. These ratios will not be used often, but they are indispensable: the minimum is required for the rejected steps, and the maximum will be used especially in the first steps, as the first stepsize must be very small because the code starts with the minimum order. Computing (and not reading) the “g” coefficients directly from (1.9) only in these special cases looks like an interesting idea, because it sets two preassigned ratios free. However, we must note that using one nonpreassigned ratio forces the code to compute the next “g” coefficients from (1.9) until that ratio is eliminated from (2.8), so the computational cost would be increased.
In our experiments there were no clear winners, but the pair R_1 = 1/2 and R_N = 2 was perhaps more robust than the other compared ratios, so we assumed the criterion of Dormand [15, page 73] and Butcher [13, page 293] and selected this pair.
With the addition of the ratio 1 we obtained the basic doubling and halving FRABM3 method, with 3 prefixed ratios {1/2, 1, 2}. Lemma 2.4 shows that for maximum order 12 this method only requires 2.03 MB of RAM to store the “g” coefficients in double precision.
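The RAM figures quoted in this section can be checked against the coefficient count of Lemma 2.4 as we have reconstructed it, (N^MO - N)/(N - 1) doubles per array; the MO values below are inferred from the RAM figures themselves, since they were lost in the text:

```python
def coeff_count(n_ratios, max_order):
    """Stored coefficients per family (reconstruction of Lemma 2.4):
    N + N^2 + ... + N^(MO-1) = (N^MO - N) / (N - 1)."""
    return (n_ratios ** max_order - n_ratios) // (n_ratios - 1)

def megabytes(count):
    return count * 8 / 2 ** 20   # 8 bytes per double-precision datum

print(f"{megabytes(coeff_count(3, 12)):.2f} MB")   # FRABM3, MO = 12: about 2.03 MB
print(f"{megabytes(coeff_count(5, 8)):.2f} MB")    # FRABM5, MO = 8: about 0.75 MB
print(f"{megabytes(coeff_count(5, 11)):.2f} MB")   # FRABM5, MO = 11: about 93.13 MB
```

The counts reproduce the 2.03 MB, 0.75 MB, and 93.13 MB figures quoted in this section, which supports the reconstructed formula.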
In the vast majority of occasions the stepsize is accepted, not rejected. That is why we decided to keep 1/2 as the unique prefixed ratio for rejections, and looked for better choices of the remaining ratios. In fact, taking into account the numerical results of our tests, we fixed one ratio slightly above 1 together with a companion slightly below it. In particular, the resulting method with 4 ratios (from now on denoted FRABM4, diamonds in Figure 4) performed significantly better than the one obtained with percentiles 33 and 66 of Table 2 (stars in Figure 4).
We tested several different strategies for 5 ratios, including the one with percentiles 25, 50, and 75 from Table 2. However, the best strategy turned out to be the simplest: adding one more ratio to those of FRABM4, which defines FRABM5. The methods with ratios obtained by means of a statistical criterion from Table 2 performed clearly worse than the ones selected intuitively in all the tests we made.
For no specific reason, we added the ratio 1.05 to develop a 6-ratio method. In Figure 5 we compare the behaviour of the FRABM methods with 3 to 6 ratios, all of them with the same maximum order (the RAM requirements of the methods are in Table 3).

In Figure 5, and in all the problems we tested, FRABM5 was the best of the compared methods. We cannot assure that 5 ratios are a better choice than 6; what we can assure is that in all of our tests the FRABM5 combination performed better than the 6-ratio one. This situation did not change when we raised the maximum order. We compared methods with 3 to 6 ratios, all of them with the same maximum order (reduced, because of the RAM cost, for 6 ratios), and FRABM5 was always the winner. It is surprising because FRABM5 only requires 0.75 MB of RAM.
We cannot conclude yet that the best method is FRABM5, because with fewer ratios bigger maximum orders are attainable at a lower RAM cost. However, it appears that beyond a certain point the improvement of the error is smaller than the growth of the CPU time (and of course of the RAM requirements), so we restrict our study to moderate maximum orders.
We divided the methods into three categories in terms of their RAM requirements (see Table 4), with the usual restriction on the maximum order, reduced for FRABM5. As FRABM3 reaches the maximum order at low cost, it was considered in the three categories.

We rejected the case MO = 12 for FRABM5 because it needs the enormous amount of 465.66 MB of RAM. Besides, the RAM requirements worked against the efficacy of the code: Figure 6 compares FRABM5 methods with maximum orders 11 and 12 in the Pleiades problem. In all the problems under consideration the case MO = 12 was not competitive with MO = 11, especially for large tolerances.
The behaviour of the methods shown in Figure 6 is an exception. The efficacy of the FRABM methods in Table 4 was consistent with their maximum order, that is, it was better with higher MO. However, the middle-cost methods fell between the low-cost and high-cost methods for all the problems at almost every tolerance, so we discarded them because they were not the winners in any circumstance.
Figures 7 and 8 show representative examples of the performance of the methods in all the problems under consideration. FRABM3 was clearly the poorest of the compared methods in both categories, and FRABM5 beat FRABM4 in both the low-cost and the high-cost categories. Thus, the FRABM5 combinations of 5 prefixed ratios with maximum orders MO = 9 and MO = 11 are officially chosen as our two FRABM candidates to solve general nonstiff ODEs.
4. The Behaviour of FRABM5
In this section we compare the pair of selected FRABM5 methods (green line with stars for MO = 9, red line with diamonds for MO = 11) and the classical implementation VSVOABM with maximum order 12 (blue line with squares). As the most popular representative of the Runge-Kutta methods, we include in the tests the version of the Dormand and Prince DOPRI853 code [18] downloaded from E. Hairer's website (black line with triangles). Figure 9 shows the efficacy of these four methods when the computational cost is measured in CPU time. We also present in Figure 10 the efficacy graphs with the computational cost measured in number of function calls.
Let us interpret the figures. At a point where a pair of methods attains the same y-coordinate (error), the method whose graph lies d units to the left in the x-coordinate (the decimal logarithm of the cost) gains 100(1 - 10^{-d})% in the cost.
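In other words, a horizontal displacement of d units on a log10 cost axis corresponds to a cost gain of 100(1 - 10^{-d})%. A quick check of this arithmetic (our own sketch) for the gaps discussed in this section:

```python
def gain_percent(d):
    """Cost gain implied by a horizontal gap of d units on a log10 axis."""
    return 100.0 * (1.0 - 10.0 ** (-d))

# gaps of 0.1-0.15 units correspond to roughly 20%-30%,
# and gaps of 0.4-0.6 units to roughly 60%-75%
for d in (0.1, 0.15, 0.4, 0.6):
    print(f"{d:>4} -> {gain_percent(d):.1f}%")
```

These values match the 20%-30% and 60%-75% gains quoted below for the Adams and Runge-Kutta comparisons.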
Taking into account only the Adams methods and the CPU time, both FRABM5 codes clearly beat VSVOABM in both Two-Body problems and in the Arenstorf, Lorenz, and Pleiades problems. The difference fluctuates between 0.1 and 0.15 units in the x-coordinate (a gain of 20%-30% in CPU time). Let us remark that FRABM5 with MO = 9 requires the insignificant amount of 3.73 MB of RAM and is the most efficient multistep method for large tolerances. The case MO = 11 is more efficient for small tolerances, but its RAM cost grows to 93.13 MB.
When comparing the number of function calls, VSVOABM does not start very well at large tolerances, but it can take advantage of its superior maximum order to beat FRABM5 at small tolerances. In the highly eccentric Two-Body problem the case MO = 11 is slightly better than VSVOABM even at these small tolerances; in the rest of the tests VSVOABM is the winner. We made some experiments (not included in this paper) to compare the efficacy in number of function calls of VSVOABM with FRABM4 and FRABM5, all of them with the same maximum order. There was no clear winner in our tests, which allows us to assert that the main variable producing differences among the Adams methods is not the implementation mode but the maximum order. This can be seen in the Two-Body and Pleiades figures.
It is well known that Runge-Kutta methods like DOPRI853 can hardly compete with multistep methods when comparing the number of evaluations, as can clearly be seen in all of the experiments. However, as Runge-Kutta methods do not need to recalculate their coefficients in each step, they are very superior in CPU time unless the function f is expensive to evaluate. DOPRI853 shows its potential in both Two-Body problems and in the Arenstorf and Lorenz problems, all of them with a cheap function f. The difference between DOPRI853 and VSVOABM is about 0.4-0.6 units in the x-axis, so the gain of DOPRI853 oscillates between 60% and 75%, which means that it is about 2.5-4 times faster than VSVOABM. It is true that the fixed-ratios strategy improves the efficacy of the Adams method, but the difference between DOPRI853 and VSVOABM was so great that both FRABM5 methods are clearly worse than DOPRI853 in these problems.
The function f in the Pleiades problem is more complicated, and its evaluation requires a significant amount of CPU time. DOPRI853 beats VSVOABM in all tolerances, but the difference is small, which allows both FRABM5 methods to win in this problem. For small tolerances FRABM5 with MO = 11 gains about 20% CPU time with respect to DOPRI853.
The Pleiades problem is expensive to evaluate because it is formed by 28 long first-order differential equations. Hairer et al. [11, page 427] indicated that the efficacy of Adams methods gets worse as the number of differential equations grows. To study this question in our fixed-ratios implementation we add two new expensive-to-evaluate ODEs extracted from [11].
(5) A discretized Brusselator with diffusion [11, pages 248-249], constituted by 882 short first-order differential equations.
(6) A discretized hanging Rope problem. In this problem we leave the dimension in the corresponding equation of [11, pages 247-248] as a free parameter, so the problem consists of a variable number of long first-order differential equations.
The comparison in number of function calls in the Brusselator problem does not show any new significant conclusion. However, in the Rope examples FRABM5 with MO = 11 beats VSVOABM even for small tolerances.
In the Brusselator problem, DOPRI853 clearly beats all FRABM5 methods in CPU time, which corresponds to the indication of Hairer, Nørsett, and Wanner. In the Rope problem we can also see the negative influence of the dimension on the behaviour of the multistep methods. With 8 first-order equations the FRABM5 methods are more efficient than DOPRI853, especially the MO = 11 case, which gains up to 40% at small tolerances. For 24 equations FRABM5 with MO = 11 still remains in first position at these tolerances, but now all of the methods give similar results. However, DOPRI853 takes the lead for 80 equations, and the difference grows for 320 equations. We must conclude that the statement of Hairer et al. [11, page 427] about dimension and efficacy of the Adams code is also valid for the fixed-ratios implementation.
VSVOABM is not far from the FRABM5 methods in the Brusselator and Rope problems, but it only beats the MO = 9 case at small tolerances; FRABM5 with MO = 11 is superior at all tolerances.
We sum up the conclusions of this efficacy section.
(1) When the CPU time is used to measure the efficacy, DOPRI853 is the best choice when f is cheap to evaluate or has a high number of components. However, the Pleiades and Rope problems show that FRABM5 can beat DOPRI853 in CPU time when f has a moderate number of long components. For these kinds of ODEs we propose our fixed-ratios implementation, particularly FRABM5 with maximum order 9 for large tolerances and 11 for small ones. VSVOABM was not the best method in any test.
(2) When comparing the number of evaluations, the fixed-ratios and VSVO strategies are comparable; only the maximum order produces differences between large and small tolerances. In this setting, especially relevant if the evaluation of f has an associated monetary cost, all the multistep methods beat DOPRI853. In particular, both FRABM5 methods were slightly better than VSVOABM at large tolerances. At small tolerances FRABM5 with MO = 11 was the leader in some problems and VSVOABM in others, while the MO = 9 case did not win in any test. FRABM4 with the higher maximum order is a choice with an acceptable RAM cost that performed slightly better than FRABM5 at these small tolerances, and was virtually tied with VSVOABM.
References
 F. Bashforth and J. C. Adams, An Attempt to Test the Theories of Capillary Action by Comparing the Theoretical and Measured Forms of Drops of Fluid, Cambridge University Press, Cambridge, UK, 1883, With an Explanation of the Method of Integration Employed in Constructing the Tables which Give the Theoretical Form of Such Drops, by J. C. Adams.
 F. R. Moulton, New Methods in Exterior Ballistics, University of Chicago Press, Chicago, Ill, USA, 1926.
 J. D. Lambert, Numerical Methods for Ordinary Differential Systems: The Initial Value Problem, John Wiley & Sons, Chichester, UK, 1991.
 L. F. Shampine and M. K. Gordon, Computer Solution of Ordinary Differential Equations: The Initial Value Problem, Freeman and Company, San Francisco, Calif, USA, 1975.
 P. N. Brown, G. D. Byrne, and A. C. Hindmarsh, “VODE: a variable-coefficient ODE solver,” SIAM Journal on Scientific and Statistical Computing, vol. 10, no. 5, pp. 1038–1051, 1989.
 A. C. Hindmarsh, “LSODE and LSODI, two new initial value ordinary differential equation solvers,” ACM Signum Newsletter, vol. 15, no. 4, pp. 10–11, 1980.
 A. C. Hindmarsh, “ODEPACK, a systematized collection of ODE solvers,” in Scientific Computing (Montreal, Que., 1982), R. S. Stepleman et al., Eds., pp. 55–64, IMACS, New Brunswick, NJ, USA, 1983.
 L. F. Shampine and M. W. Reichelt, “The MATLAB ODE suite,” SIAM Journal on Scientific Computing, vol. 18, no. 1, pp. 1–22, 1997.
 S. Wolfram, The Mathematica Book, Wolfram Media, Champaign, Ill, USA, 5th edition, 2003.
 F. T. Krogh, “Changing stepsize in the integration of differential equations using modified divided differences,” in Proceedings of the Conference on the Numerical Solution of Ordinary Differential Equations (Univ. Texas, Austin, Tex., 1972), vol. 362 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 1974.
 E. Hairer, S. P. Nørsett, and G. Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems, vol. 8 of Springer Series in Computational Mathematics, Springer, Berlin, Germany, 2nd edition, 1993.
 L. F. Shampine, “Variable order Adams codes,” Computers & Mathematics with Applications, vol. 44, no. 5-6, pp. 749–761, 2002.
 J. C. Butcher, Numerical Methods for Ordinary Differential Equations, John Wiley & Sons, Chichester, UK, 2003.
 J. G. Romay, Mejora de los métodos de Adams de paso y orden variable mediante el cálculo previo de coeficientes para cocientes de amplitudes de paso prefijados [Improvement of variable-step variable-order Adams methods by the precalculation of coefficients for prefixed stepsize ratios], Ph.D. thesis, Universidad Politécnica de Cartagena, 2004.
 J. R. Dormand, Numerical Methods for Differential Equations: A Computational Approach, Library of Engineering Mathematics, CRC Press, Boca Raton, Fla, USA, 1996.
 L. F. Shampine, Numerical Solution of Ordinary Differential Equations, Chapman & Hall, New York, NY, USA, 1994.
 R. F. Arenstorf, “Periodic solutions of the restricted three-body problem representing analytic continuations of Keplerian elliptic motions,” American Journal of Mathematics, vol. 85, pp. 27–35, 1963.
 J. R. Dormand and P. J. Prince, “Practical Runge-Kutta processes,” SIAM Journal on Scientific and Statistical Computing, vol. 10, no. 5, pp. 977–989, 1989.
Copyright
Copyright © 2010 David J. López and José G. Romay. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.