International Journal of Differential Equations
Volume 2013 (2013), Article ID 681575, 10 pages
http://dx.doi.org/10.1155/2013/681575
Research Article

Nonlinear Extension of Multiproduct Expansion Schemes and Applications to Rigid Bodies

University of Greifswald, Institute of Physics, Felix-Hausdorff-Str. 6, 17489 Greifswald, Germany

Received 25 April 2013; Accepted 5 August 2013

Academic Editor: Sanjay Khattri

Copyright © 2013 Jürgen Geiser. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper we discuss time integrators for nonlinear differential equations. In recent years, splitting approaches have become an important tool for reducing the computational time needed to solve differential equations. However, nonlinearity poses a challenge to splitting schemes, since one has to extend the exp-functions by a nonlinear Magnus expansion. Here we discuss a novel extension of the so-called multiproduct expansion methods, which is used to improve the standard Strang splitting schemes with respect to their nonlinearity. We present an extension of linear splitting schemes, concentrate on nonlinear systems of differential equations, and in this respect generalise the recent MPE method; see (Chin and Geiser, 2011). First numerical examples of rigid body problems are given as benchmarks.

1. Introduction

In recent years, applications involving nonlinear differential equations for multiscale problems, for example, the rigid body (see [1]), have arisen and become important.

Here, splitting schemes are important for decoupling the different nonlinear scales, for example, a Hamiltonian with kinetic and potential operators, and treating them with the best solver schemes; see [2, 3].

Theoretically, we deal with subproblems which can be solved independently, but we have to take into account their nonlinearity, so that the standard fundamental solutions of the subproblems, for example, exp-functions, have to be extended with respect to their nonlinearity.

Here we apply the nonlinear Magnus expansion; see [4, 5].

Our contributions are to use the first and second order schemes and extend them with an extrapolation scheme, in our case with a multiproduct expansion, to gain more accurate and efficient higher order schemes; see [6].

In this paper we concentrate on approximate solutions of the nonlinear evolution equation; for example, with unbounded operators and . We have further , .

We assume there are suitably chosen subspaces of the underlying Banach space such that .

For such equations, we concentrate on combining nonlinear splitting schemes with extrapolation ideas to obtain higher order schemes. Here, we deal with Suzuki's methods and apply factorised symplectic algorithms with forward derivatives; see [6, 7].

The exact solution of the evolution problem (1) is given by the evolution operator, which depends on the actual time and the initial value . We use a formal notation: here the evolution operator is expressed through the Lie derivative associated with the right-hand side, defined for any unbounded nonlinear operator with Fréchet derivative .
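The formal notation referred to here can be sketched in the standard Lie-operator convention (following, e.g., the conventions of Hairer, Lubich, and Wanner [9]; the symbols u, F, and D_F are our own choices, since they are not reproduced above). For the nonlinear problem u'(t) = F(u(t)), u(t_0) = u_0, one writes

```latex
u(t) \;=\; \exp\bigl((t - t_0)\, D_F\bigr)\,\mathrm{Id}\,(u_0),
\qquad
\bigl(D_F\, G\bigr)(u) \;=\; G'(u)\, F(u),
```

where D_F is the Lie derivative associated with F, applied to the identity map Id, and G'(u) denotes the Fréchet derivative of the operator G.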

The paper is outlined as follows. In Section 2, we discuss the standard splitting schemes. The extended nonlinear multiproduct expansion is presented in Section 3. In Section 4, we present the improved methods due to the initialisation process. The alternative methods with respect to the linearisation schemes are presented in Section 5. The numerical experiments are given in Section 6. The conclusion is in Section 7.

2. Nonlinear Operator Splitting Methods

In the literature, there are various types of splitting methods. We mainly consider the following operator splitting schemes in this study.
(1) Sequential operator splitting (A-B splitting): for , whereby the initial value is given from (1). The approximated split solution at the point is defined by .
(2) The Strang-Marchuk operator splitting (A-B-A splitting): where is the local time-step. The approximate split solution at the point is defined by .
(3) Iterative splitting with respect to one operator: where , and is the known split approximation at the time level . The split approximation at the time level is defined by .
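The first two schemes can be sketched on a linear toy problem. The following Python snippet is an illustration only (the matrices A and B are arbitrary non-commuting choices, not the paper's rigid-body operators; the iterative splitting variant is omitted); it checks the expected local orders of the A-B and A-B-A splittings by halving the step size:

```python
# Compare A-B (Lie-Trotter) and Strang (A-B-A) splitting on u' = (A + B) u
# with non-commuting A, B.  Halving h should reduce the one-step error by
# ~2^2 for A-B splitting and ~2^3 for Strang splitting.
import numpy as np

def expm(M, terms=30):
    """Truncated Taylor series for exp(M); adequate for small ||M||."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-1.0, 0.0]])    # illustrative choices
B = np.array([[0.0, 0.0], [-0.5, 0.0]])    # [A, B] != 0
u0 = np.array([1.0, 0.0])

def local_errors(h):
    exact = expm((A + B) * h) @ u0
    lie = expm(B * h) @ expm(A * h) @ u0                           # A-B
    strang = expm(A * h / 2) @ expm(B * h) @ expm(A * h / 2) @ u0  # A-B-A
    return np.linalg.norm(lie - exact), np.linalg.norm(strang - exact)

e1_lie, e1_str = local_errors(0.1)
e2_lie, e2_str = local_errors(0.05)
ratio_lie, ratio_strang = e1_lie / e2_lie, e1_str / e2_str
```

The observed ratios confirm the O(h^2) and O(h^3) local errors stated in Theorem 3 below.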

2.1. Error Analysis of the Classical Splitting Schemes

In the following, we discuss the abstract error analysis with respect to the Lie operator conventions.

We associate a new operator which is a linear Lie operator.

Definition 1. For the given operator we associate a new operator, denoted by .
This Lie operator acts on the space of the differentiable operators of the type and maps each operator into the new operator , such that for any element

Here the derivatives are given by the following.

Definition 2. The th power of the Lie operator applied to some operator can be expressed as the th derivative of ; that is, the relation is valid for all .

We deal with problem (1) and split the operator into the sum ; then we have the following results on the abstract splitting errors, stated in an appropriate norm on a general Banach space .

Theorem 3. The splitting error for the A-B splitting is given by
The splitting error of the Strang splitting is given by with the commutator .
The splitting error of the iterative splitting is given by where one assumes that the previous error .

Proof. The splitting error for the A-B splitting is given by the following. We evaluate the exp-function, apply the commutator, and obtain
The splitting error of the Strang splitting is given by the following. We evaluate the exp-function, apply the commutator twice based on the symmetry of the scheme, and obtain the result, where the commutator is given as .
The splitting error of the iterative splitting is given by where we have applied the method of variation of constants to the linearised operators. Furthermore, we assume that the previous error and is the error of the iterative scheme.

3. Nonlinear Multiproduct Decomposition

We apply the abstract standard splitting schemes to the multiproduct decomposition.

We have to carry out the following steps:
(i) apply the nonlinear Strang splitting scheme,
(ii) embed the Strang splitting scheme into the multiproduct expansion.

To apply the abstract setting of a nonlinear Magnus expansion, we deal with the following modified nonlinear equation: where are noncommuting operators and is a general Banach space; for example, , where is the rank of the matrices.

To apply the nonlinear Magnus expansion, we deal with the following scheme, where the first order Magnus operator is given by Euler's formula and the second order Magnus operator is given by the midpoint rule; to generalise this to higher order schemes, see [5].
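A hedged sketch of these two Magnus steps for a problem of the form u' = A(u) u, following the explicit nonlinear expansions of Casas and Iserles [5]: the first order exponent is the Euler-type choice h·A(u_n), and the second order exponent evaluates A at a midpoint predictor. The state-dependent rotation matrix A(u) below is an illustrative test case, not the paper's rigid-body operator:

```python
# First- and second-order nonlinear Magnus steps for u' = A(u) u.
import numpy as np

def expm(M, terms=30):
    """Truncated Taylor series for exp(M); fine for small ||M||."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def A(u):
    w = 1.0 + 0.1 * u[0]                    # state-dependent frequency
    return np.array([[0.0, w], [-w, 0.0]])

def magnus1(u, h):
    """First order: exponent Omega_1 = h A(u_n) (Euler-type)."""
    return expm(h * A(u)) @ u

def magnus2(u, h):
    """Second order: Omega_2 = h A(u_half), midpoint rule for the exponent."""
    u_half = expm(0.5 * h * A(u)) @ u       # half Euler-Magnus predictor
    return expm(h * A(u_half)) @ u

u0 = np.array([0.0, 1.0])
h = 0.2
ref = u0.copy()
for _ in range(256):                        # fine reference solution
    ref = magnus2(ref, h / 256)

err1 = np.linalg.norm(magnus1(u0, h) - ref)
err2 = np.linalg.norm(magnus2(u0, h) - ref)
```

Over a single step, the midpoint exponent is markedly more accurate than the Euler exponent, as expected for a second versus first order scheme.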

In the following, we apply the schemes for the multiproduct expansions to the time sequences, and .

For the sequential splitting (Lie–Trotter) product we have where are the Magnus expansions (see (22) and (23)) of order .

Furthermore, for the Strang splitting scheme [8], we approximate the scheme based on the symmetrical splitting by

Based on the nonlinear Strang approximations, we generalise the idea of the multiproduct expansion (MPE) to higher order schemes, where we assume that a linearisation with the first order integration scheme is sufficient.

For the applications, we assume we have to solve a time-irreversible problem, and the coefficients have to be positive.

We straightforwardly follow the derivation, based on the linear multiproduct expansion; see [6].

We construct the scheme as a product with and . Based on the symmetry of the linearised scheme, we skip the even orders, based on the error truncation of the Strang splitting (see [9]), and obtain an expansion where the error terms are nested commutators depending on the specific form of the operators.

We apply the scheme at step size and obtain the kernel of a linearised multiproduct expansion; see [6].

The first schemes are given by
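The MPE idea can be illustrated with the standard multi-product coefficients c_k = prod_{j != k} k^2 / (k^2 - j^2) for extrapolating a second-order symmetric kernel T2 (cf. Chin and Geiser [6]); the fourth- and sixth-order combinations below follow that general formula. The linear test matrices are illustrative stand-ins, not the paper's operators:

```python
# MPE extrapolation of the Strang kernel T2 to fourth and sixth order:
#   T4(h) = -1/3 T2(h) + 4/3 T2(h/2)^2
#   T6(h) = 1/24 T2(h) - 16/15 T2(h/2)^2 + 81/40 T2(h/3)^3
import numpy as np

def expm(M, terms=30):
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # illustrative non-commuting pair
B = np.array([[0.0, 0.0], [-0.5, 0.0]])

def T2(h):                                 # Strang kernel exp(Ah/2)exp(Bh)exp(Ah/2)
    return expm(A * h / 2) @ expm(B * h) @ expm(A * h / 2)

def T4(h):                                 # fourth-order MPE
    return -(1.0 / 3.0) * T2(h) + (4.0 / 3.0) * np.linalg.matrix_power(T2(h / 2), 2)

def T6(h):                                 # sixth-order MPE
    return ((1.0 / 24.0) * T2(h)
            - (16.0 / 15.0) * np.linalg.matrix_power(T2(h / 2), 2)
            + (81.0 / 40.0) * np.linalg.matrix_power(T2(h / 3), 3))

h = 0.2
exact = expm((A + B) * h)
e2 = np.linalg.norm(T2(h) - exact)
e4 = np.linalg.norm(T4(h) - exact)
e6 = np.linalg.norm(T6(h) - exact)
```

Note that the coefficients sum to 1 at each order, so consistency of the kernel is preserved; each additional order only requires extra evaluations of the same second-order kernel.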

3.1. Error Analysis of the Nonlinear Multiproduct Expansion

We deal with the following error of a linearised MPE based on the first order nonlinear Magnus expansion operators. In Theorem 4, we discuss the improvement of a lower order Strang splitting scheme, given as (see (25)), to a higher order scheme based on linearised multiproduct expansion (MPE) methods. The idea is based on an iterative evaluation of the extrapolation methods (see [6]), which allows one to achieve higher accuracy.

Theorem 4. For the numerical solution of (20), one considers the linearised MPE algorithm (26) of linear order . One has the following convergence result: where and is to be chosen uniformly on bounded time intervals and independently of and .

Proof. We apply the telescoping identity and the assumption to split the linearised operators and obtain where .
The linearised MPE expansion can be estimated (see [6]) and the consistency is given by estimating the second part of (33): such a result is obtained by the linear MPE version. So for the first order linearisation, we obtain a consistent scheme with higher accuracy.

Remark 5. The results in Theorem 4 combine the splitting of a nonlinear equation into two operators (see (20)) with the extrapolation of a standard splitting scheme, here given as the second order Strang splitting scheme, to a higher order approach with the order , where are the stages of the extrapolation scheme. Here the novel result is the extension to nonlinear higher order results, in which the operators can be linearised and treated with linearised MPE schemes.

4. Improving the Initialisation of Splitting Methods

A delicate problem in splitting methods is to achieve sufficient accuracy in the first splitting steps; see [10]. The results of all subsequent methods, for example, MPE schemes or iterative schemes, depend on the accuracy of the initial starting solution and are influenced by the order of the initialisation process; see [11]. Therefore, it is important to take into account the accuracy of the initial solution. In the following, we discuss an idea to improve the initialisation process with the help of the Zassenhaus formula.

The improvement of the initial solution is based on the combination of the novel commutators that are given in Section 2. Such a new combination allows one to reduce the splitting error between the two operators and ; see (1).

In Theorem 6, we discuss the improvement of the initial starting solution via the Zassenhaus exponents.

Theorem 6. One solves the initial value problem by using the method given in (5) and (6). One assumes bounded and nonlinear operators and .
The consistency error of the A-B splitting is ; then one can improve the error of the A-B splitting scheme to , by improving the starting conditions by where the are called the nonlinear Zassenhaus exponents, which are given by the novel commutators; for the linear case see [12].
The local splitting error of A-B splitting method can be expressed as follows: where is a function of the nonlinear Lie brackets of and .

Proof. Let us consider the subinterval , where . Then the solution of the subproblem (5) is and after improving the initialisation, we have and so the solution of the subproblem (6) becomes with the help of the Zassenhaus product formula.

Remark 7. The novel result in Theorem 6 is an improvement of the first splitting step to a higher order result of , where is the number of the Zassenhaus exponents. Such an additional prestep can be carried out for all splitting schemes, for example, A-B splitting, Strang splitting, iterative splitting, and so on. Therefore the additional prestep can be carried out in parallel to the existing splitting schemes and can, at the end of the computation, improve the splitting results to the order , where is the order of the splitting approach and is the order of the Zassenhaus formula.

Remark 8. For example, the second order A-B splitting after improving the initialisation is where the novel order of the new scheme is , with and .

The next scheme is a third order A-B splitting after improving the initialisation, where the novel order of the new scheme is , with and , and the commutator is given by .

Remark 9. The same idea can also be carried out using the Strang splitting method; see the linear case in [10]. We achieve a novel order of the new scheme to , with and being the number of Zassenhaus exponents.

5. Alternative Approaches: Iterative Schemes for Linearisation

We next discuss fixed-point iteration and Newton's method as alternative approaches to linearise nonlinear problems.

We want to solve the nonlinear problem where .

5.1. Fixed-Point Iteration

The nonlinear equations can be formulated as fixed-point problems: where is the fixed-point map and is nonlinear; for example, .

A solution of (45) is called a fixed point of the map .

The fixed-point iteration is given by and is called a nonlinear Richardson iteration, a Picard iteration, or the method of successive substitution.

Definition 10. Suppose that and . is Lipschitz continuous on with Lipschitz constant if for all .

For the convergence, we have to assume that is a contraction map on with Lipschitz constant .
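A minimal sketch of the Picard iteration with the stopping criterion discussed below; the contraction map g(c) = cos(c) is an illustrative example, not the Hamiltonian fixed-point map of Algorithm 11:

```python
# Fixed-point (Picard) iteration: c^{i+1} = g(c^i), stopped when
# ||c^{i+1} - c^i|| <= err (Euclidean norm) or after max_iter steps.
import numpy as np

def fixed_point(g, c0, err=1e-10, max_iter=200):
    c = np.asarray(c0, dtype=float)
    for i in range(1, max_iter + 1):
        c_next = g(c)
        if np.linalg.norm(c_next - c) <= err:   # stopping criterion
            return c_next, i
        c = c_next
    return c, max_iter

# cos is a contraction near its fixed point (Lipschitz constant < 1 there),
# so the iteration converges linearly for both components.
root, iters = fixed_point(np.cos, [0.5, 1.0])
```

Convergence is linear with rate equal to the Lipschitz constant of g, which is why Newton's method (Section 5.2) is attractive when the Jacobian is available.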

In the following, we present new results, in Algorithms 11 and 14, on the application to a nonlinear Hamiltonian problem; see [3]. The model equations are numerically simulated in Section 6.

Algorithm 11. We apply the fixed-point iterative scheme to decouple the nonseparable Hamiltonian problem, where we have the initial condition for the fixed-point iteration:
The initial starting solutions for the ith iterative steps are given as follows: are the solutions of the th iterative step and we have the initial condition for the fixed-point iteration: .
We assume that we have convergent results after iterative steps or with the stopping criterion: where is the Euclidean norm (or a simple vector norm, e.g., ) and the error bound is given, for example, as .

Example 12. In the following, we present the application of the iterative Verlet as fixed-point scheme to the nonlinear Hamiltonian problem (47)-(48).
We start with .
The iterative scheme based on the iterative Verlet, see [3], is given as where the stopping criterion is or or with (62).

Remark 13. For the alternative scheme, we can also improve the initialisation process; see Section 4 for the standard schemes.
To improve the initial solution we can start with the following preprocesses.
We start with a result of the explicit Euler method:
We start with a result of the explicit RK method:
Such an improvement allows one to obtain higher order results and avoids the order reduction due to lower order initial solutions; see [11].

5.2. Newton's Method

We solve the nonlinear operator equation (43).

Here, the Banach spaces are given with the norms and . Let the operator be at least continuously differentiable; furthermore, we assume that is a starting approximation of the unknown solution .

Then successive linearisation leads to the general Newton method: where and .

The method finds the solution to a nonlinear problem by solving a sequence of linear problems of the same kind.
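A hedged sketch of the discrete Newton iteration c^{i+1} = c^i - J(c^i)^{-1} F(c^i), with a forward-difference Jacobian (in the spirit of Kelley [13]); the residual F below is an illustrative two-dimensional test system, not the Hamiltonian residual of Algorithm 14:

```python
# Newton's method with a finite-difference Jacobian: each nonlinear solve
# is reduced to a sequence of linear solves of the same size.
import numpy as np

def jacobian_fd(F, c, eps=1e-7):
    """Forward-difference approximation of the Jacobian J(c)."""
    n = len(c)
    J = np.zeros((n, n))
    F0 = F(c)
    for j in range(n):
        d = np.zeros(n)
        d[j] = eps
        J[:, j] = (F(c + d) - F0) / eps
    return J

def newton(F, c0, tol=1e-10, max_iter=50):
    c = np.asarray(c0, dtype=float)
    for i in range(max_iter):
        Fc = F(c)
        if np.linalg.norm(Fc) <= tol:            # residual-based stopping
            return c, i
        c = c - np.linalg.solve(jacobian_fd(F, c), Fc)
    return c, max_iter

def F(c):
    """Test residual with root (1, 1): circle of radius sqrt(2), line x = y."""
    x, y = c
    return np.array([x**2 + y**2 - 2.0, x - y])

root, iters = newton(F, [2.0, 0.5])
```

Away from singular Jacobians, convergence is quadratic, so only a handful of iterations are needed; this is the contrast with the linearly convergent Picard iteration of Section 5.1.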

Algorithm 14. We apply Newton's method to the nonseparable Hamiltonian problem: where is the zero vector.
Further, we have the initial conditions, which are given as follows:
The initial starting solutions for the ith iterative steps are given as follows: are the solutions of the th iterative step and we have the initial condition for the Newton iteration: .
We apply Newton's method in its discrete form, which is given as where is the Jacobian matrix (see [13]) and are the given solutions of the th iterative step.
We assume that we have convergent results after iterative steps or with the stopping criterion: where is the Euclidean norm (or a simple vector norm, e.g., ) and the error bound is given, for example, as .

Example 15. We apply the iterative Verlet scheme to the underlying fixed-point problem (59) of nonlinear Hamiltonian problem (58).
We start with .
The iterative scheme based on the iterative Verlet (see [3]) is given in (51)–(53).

Remark 16. For the improvement method, we can apply the weighted Newton method. We try to skip the delicate off-diagonal entries in the Jacobian matrix and apply where the weight function can be applied as a scalar, for example , and the same with ; see [13]. It is important to ensure that is small enough to preserve the convergence.

Remark 17. Linearisation methods can be applied to MPE methods. Nonseparable Hamiltonian problems can be decoupled into separable Hamiltonian problems; see the ideas in [14].

6. Numerical Examples

In this section, we present experiments to verify the benefit of our methods: we treat a nonseparable Hamiltonian with a nonlinear kinetic part and a nonlinear potential.

The motivation arose from simulating a Levitron.

A Levitron is described on the basis of rigid body theory; see the convention of [15] for the Euler angles. The angular velocity is along the -axis of the system, is along the line of the nodes, and is along the -axis. We transform into body coordinates and obtain

The kinetic energy can be written as The potential energy is given by the sum of the gravitational energy and the interaction potential of the Levitron in the magnetic field of the base plate and is with the magnetic moment of the top and the magnetostatic potential. Furthermore, we use the notation of [16] and the potential of a ring dipole as an approximation for a magnetised plane with a centred unmagnetised hole. Furthermore, we introduced a nondimensionalisation for the variables and the magnetostatic potential: the lengths were scaled by the radius of the base plane, mass is measured in units of , and energy in units of . Therefore one time unit is . The dimensionless Hamiltonian with is given by with and being the nondimensionalised inertial parameters and being the ratio of the gravitational and magnetic energy.

For the nonseparable Hamiltonian of (68), we have

The same is given for

and are linearised Lie operators and their vector fields are given by

The transfer to the operators is given in the following description.

The exponential operators and are then just shift operators, with being a symmetric second order splitting method; this corresponds to the velocity form of the Verlet algorithm (VV).

Furthermore, the splitting scheme corresponds to the position form of the Verlet algorithm (PV).
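The velocity form can be sketched as a kick-drift-kick update. The harmonic potential V(q) = q^2/2 below is an illustrative stand-in for the Levitron potential, chosen only to show the scheme's characteristic bounded energy error:

```python
# Velocity Verlet (kick-drift-kick) for a separable Hamiltonian
# H(q, p) = p^2/2 + V(q); here V(q) = q^2/2, so force(q) = -q.
import numpy as np

def velocity_verlet(q, p, force, h, steps):
    f = force(q)
    for _ in range(steps):
        p = p + 0.5 * h * f      # half kick: exp(h B / 2)
        q = q + h * p            # full drift: exp(h A)
        f = force(q)
        p = p + 0.5 * h * f      # half kick: exp(h B / 2)
    return q, p

force = lambda q: -q             # harmonic oscillator
q, p = velocity_verlet(1.0, 0.0, force, 0.01, 1000)   # integrate to t = 10
energy = 0.5 * (p**2 + q**2)     # exact value is 0.5 for these initial data
```

Because the scheme is symplectic and symmetric, the energy error stays bounded of size O(h^2) over long times instead of drifting, which is why it serves as a good kernel for the MPE extrapolation.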

Solving the equations of motion (69) and (70) numerically with nonlinear MPE methods based on the Verlet algorithm, we plot the movement of the centre of mass as shown in Figure 1. The axes in the plot show the nondimensional variables , , and . The trajectory starts at the equilibrium point .

Figure 1: Trajectory calculated with MPE method supplied with the Verlet algorithm.

The linearisation is done with two and four iterations per time-step, to see how many iterations are reasonable. The results are shown in Figure 2.

Figure 2: Errors of the numerical scheme for MPE method supplied with the iterative Verlet method.

In a first comparison, we deal with the second order Verlet algorithm and improve the scheme with iterative steps. The tables should give an impression of the timescales of the problem and the errors; see Table 1.

Table 1: Numerical results for MPE method supplied with the iterative Verlet algorithm.

Remark 18. The iterations improve the algorithm when we apply sufficiently accurate initial solutions. We also benefit from a reduced computational time compared with applying only expensive Runge-Kutta methods.

We improve the solution with a fourth order extrapolation scheme. We can see the errors this algorithm produces in comparison with the Runge-Kutta method with small time-steps ( time units per step). Figure 3 presents the results of the 4th order MPE method with different time-steps and compares it with the Runge-Kutta solution.

Figure 3: Errors of the numerical scheme: 4th order extrapolation scheme with the Verlet method and kernel ().

Also we tested the 6th order MPE method with different time-steps and compared it with the Runge-Kutta method; see Figure 4.

Figure 4: Errors of the numerical scheme: 6th order extrapolation scheme with Verlet method as kernel ().

Table 2 gives an impression of the standard higher order Runge-Kutta solver (4th order scheme) with the timescales of the problem and the errors.

Table 2: Stability and computational time with 4th order explicit Runge-Kutta.

We could improve the accuracy and computational time needed by using higher order extrapolation schemes; see Table 3.

Table 3: Errors and computational time with 4th and 6th order MPE schemes using Verlet scheme as kernel.

Remark 19. We improved the basic splitting schemes with the Verlet algorithm and extrapolation schemes. At the least, more accurate solutions were achieved and computational time was saved. The best result was achieved with order 10 and , and for such a case we could improve the results and obtain them faster than with standard RK schemes.

Remark 20. We were able to accelerate the computation with extrapolation methods and increase the time-steps. Because of the accuracy of the underlying kernel, which is an iterative Verlet method with , we were able to remain at order 8 and only as the time-step.

7. Conclusions and Discussion

We presented novel MPE approaches to nonlinear differential equations. Based on the ideas of the nonlinear Magnus expansion and linearisation schemes, we derived a linearised MPE approach possessing higher accuracy. We discussed the numerical errors and different improvements of the underlying splitting methods. Numerical examples confirmed the application to nonlinear equations. In the future, we will focus on the development of improved MPE methods for more highly nonlinear approaches.

References

  1. M. V. Berry, “The Levitron: an adiabatic trap for spins,” Proceedings of the Royal Society A, vol. 452, pp. 1207–1220, 1996.
  2. H. R. Dullin and R. W. Easton, “Stability of Levitrons,” Physica D, vol. 126, no. 1-2, pp. 1–17, 1999.
  3. J. Geiser, “Multiscale methods for Levitron problems: theory and applications,” Computers and Structures, vol. 122, pp. 27–32, 2013.
  4. S. Blanes, F. Casas, J. A. Oteo, and J. Ros, “The Magnus expansion and some of its applications,” Physics Reports, vol. 470, no. 5-6, pp. 151–238, 2009.
  5. F. Casas and A. Iserles, “Explicit Magnus expansions for nonlinear equations,” Journal of Physics A, vol. 39, no. 19, pp. 5445–5461, 2006.
  6. S. A. Chin and J. Geiser, “Multi-product operator splitting as a general method of solving autonomous and nonautonomous equations,” IMA Journal of Numerical Analysis, vol. 31, no. 4, pp. 1552–1577, 2011.
  7. S. A. Chin and P. Anisimov, “Gradient symplectic algorithms for solving the radial Schrödinger equation,” Journal of Chemical Physics, vol. 124, no. 5, Article ID 054106, 2006.
  8. G. Strang, “On the construction and comparison of difference schemes,” SIAM Journal on Numerical Analysis, vol. 5, pp. 506–517, 1968.
  9. E. Hairer, C. Lubich, and G. Wanner, Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations, vol. 31 of Springer Series in Computational Mathematics, Springer, Berlin, Germany, 2002.
  10. J. Geiser, G. Tanoğlu, and N. Gücüyenen, “Higher order operator splitting methods via Zassenhaus product formula: theory and applications,” Computers & Mathematics with Applications, vol. 62, no. 4, pp. 1994–2015, 2011.
  11. J. Geiser and G. Tanoğlu, “Operator-splitting methods via the Zassenhaus product formula,” Applied Mathematics and Computation, vol. 217, no. 9, pp. 4557–4575, 2011.
  12. D. Scholz and M. Weyrauch, “A note on the Zassenhaus product formula,” Journal of Mathematical Physics, vol. 47, no. 3, Article ID 033505, 7 pages, 2006.
  13. C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations, vol. 16 of Frontiers in Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, Pa, USA, 1995.
  14. S. A. Chin, “Symplectic and energy-conserving algorithms for solving magnetic field trajectories,” Physical Review E, vol. 77, no. 6, Article ID 066401, 12 pages, 2008.
  15. H. Goldstein, Classical Mechanics, Addison-Wesley Press, Cambridge, Mass, USA, 1951.
  16. R. F. Gans, T. B. Jones, and M. Washizu, “Dynamics of the Levitron,” Journal of Physics D, vol. 31, pp. 671–679, 1998.