Abstract

Iterative splitting methods require a considerable amount of work to compute the matrix exponential. At the same time, an acceleration and a recovery of higher-order schemes can be achieved. From a theoretical point of view, iterative splitting methods are at least alternating Picard fixed-point iteration schemes. For practical applications, it is important to compute the matrix exponentials very fast. In this paper, we concentrate on developing fast algorithms to solve the iterative splitting scheme. First, we reformulate the iterative splitting scheme into an integral notation of the matrix exponential. In this notation, we consider fast approximation schemes for the integral formulations, also known as 𝜙-functions. Second, the error analysis is explained and applied to the integral formulations. The novelty is to compute the decoupled exp-matrices cheaply and to apply only cheap matrix-vector multiplications for the higher-order terms. In general, we discuss an elegant way of embedding recently surveyed methods for computing the matrix exponential into iterative splitting schemes. We present numerical benchmark examples that compare standard splitting schemes with the higher-order iterative schemes. A real-life application in contaminant transport as a two-phase model is discussed, and the fast computation of the operator splitting method is explained.

1. Introduction

We are motivated by solving multiple phase problems that arise from transport problems in porous media. In recent years, the interest in numerical simulations of multiphase problems, which can be used to model potential damage events, has significantly increased in the area of final repositories for chemical or radioactive waste.

Here, the modeling of the underlying porous medium, which is the geosphere flooded with water, is important, and we apply mobile and immobile pores as a two-phase problem in the medium. With such a model, we achieve more realistic scenarios of the transported species, see [1]. The transport is structured into a convection and a diffusion part. The convection part is determined by the velocity vector 𝐯 and indicates the spatial direction in which the concentrations are transported with the velocity. The diffusion part is the spatial diffusion process of the concentrations, see [1].

The sorption represents the exchange between the solute (mobile) pollutant and the pollutants sorbed at the surface (immobile) and appears in the temporal term as well as in the reaction term. The reaction is reversible. The equilibrium sorption can therefore be declared as a coefficient in the specific terms.

We concentrate on simplified models and take into account all real conditions needed to achieve statements about such realistic simulations of potential damage events. Here we have the delicate problem of coupled partial and ordinary differential equations and their underlying multiscale problems. By using splitting methods and decomposing such scale problems, we can overcome these scaling problems, see [2].

Moreover, the computational part is important, since we deal with large matrices and standard splitting schemes are expensive with respect to computing the exponential matrices. We solve the computational problem with a novel iterative splitting scheme that concentrates on developing fast algorithms to solve an integral formulation of matrix exponentials. In such a scheme, we consider fast approximation schemes for the integral formulations, also known as 𝜙-functions, see [3]. The novelty is to compute the decoupled exp-matrices cheaply and to apply only cheap matrix-vector multiplications for the higher-order terms. In general, we discuss an elegant way of embedding recently surveyed methods for computing the matrix exponential into iterative splitting schemes.

In the following, we describe our model problem. The model equations for the multiple phase problem are given as coupled partial and ordinary differential equations:
\[
\phi\,\partial_t c_i + \nabla\cdot\mathbf{F}_i = g\,(-c_i + c_{i,\mathrm{im}}) + k_\alpha(-c_i + c_{i,\mathrm{ad}}) - \lambda_{i,i}\,\phi\,c_i + \sum_{k=k(i)} \lambda_{i,k}\,\phi\,c_k + Q_i, \quad \text{in } \Omega\times[0,t], \tag{1.1}
\]
\[
\mathbf{F}_i = \mathbf{v}\,c_i - D_e^{(i)} \nabla c_i + \eta\,\mathbf{E}\,c_i, \tag{1.2}
\]
\[
\phi\,\partial_t c_{i,\mathrm{im}} = g\,(c_i - c_{i,\mathrm{im}}) + k_\alpha(-c_{i,\mathrm{im}} + c_{i,\mathrm{im,ad}}) - \lambda_{i,i}\,\phi\,c_{i,\mathrm{im}} + \sum_{k=k(i)} \lambda_{i,k}\,\phi\,c_{k,\mathrm{im}} + Q_{i,\mathrm{im}}, \quad \text{in } \Omega\times[0,t], \tag{1.3}
\]
\[
\phi\,\partial_t c_{i,\mathrm{ad}} = k_\alpha(c_i - c_{i,\mathrm{ad}}) - \lambda_{i,i}\,\phi\,c_{i,\mathrm{ad}} + \sum_{k=k(i)} \lambda_{i,k}\,\phi\,c_{k,\mathrm{ad}} + Q_{i,\mathrm{ad}}, \quad \text{in } \Omega\times[0,t], \tag{1.4}
\]
\[
\phi\,\partial_t c_{i,\mathrm{im,ad}} = k_\alpha(c_{i,\mathrm{im}} - c_{i,\mathrm{im,ad}}) - \lambda_{i,i}\,\phi\,c_{i,\mathrm{im,ad}} + \sum_{k=k(i)} \lambda_{i,k}\,\phi\,c_{k,\mathrm{im,ad}} + Q_{i,\mathrm{im,ad}}, \quad \text{in } \Omega\times[0,t], \tag{1.5}
\]
\[
c_i(\mathbf{x},0) = c_{i,0}(\mathbf{x}), \quad c_{i,\mathrm{ad}}(\mathbf{x},0) = 0, \quad c_{i,\mathrm{im}}(\mathbf{x},0) = 0, \quad c_{i,\mathrm{im,ad}}(\mathbf{x},0) = 0, \quad \text{on } \Omega, \tag{1.6}
\]
\[
c_i(\mathbf{x},t) = c_{i,1}(\mathbf{x},t), \quad c_{i,\mathrm{ad}}(\mathbf{x},t) = 0, \quad c_{i,\mathrm{im}}(\mathbf{x},t) = 0, \quad c_{i,\mathrm{im,ad}}(\mathbf{x},t) = 0, \quad \text{on } \partial\Omega\times[0,t], \tag{1.7}
\]
where the initial value is given as c_{i,0} and we assume Dirichlet boundary conditions with a sufficiently smooth function c_{i,1}(x,t); all other initial and boundary conditions of the other phases are zero. The notation is:
𝜙: effective porosity (-),
c_i: concentration of the ith gaseous species in the plasma chamber phase (mol/cm^3),
c_{i,im}: concentration of the ith gaseous species in the immobile zones of the plasma chamber phase (mol/cm^3),
c_{i,ad}: concentration of the ith gaseous species in the mobile adsorbed zones of the plasma chamber phase (mol/cm^3),
c_{i,im,ad}: concentration of the ith gaseous species in the immobile adsorbed zones of the plasma chamber phase (mol/cm^3),
𝐯: velocity through the chamber and porous substrate [4] (cm/nsec),
D_e^{(i)}: element-specific diffusion-dispersion tensor (cm^2/nsec),
𝜆_{i,i}: decay constant of the ith species (1/nsec),
Q_i: source term of the ith species (mol/(cm^3 nsec)),
g: exchange rate between the mobile and immobile concentration (1/nsec),
k_𝛼: exchange rate between the mobile and adsorbed concentration or the immobile and immobile-adsorbed concentration (kinetic controlled sorption) (1/nsec),
𝐄: stationary electric field in the apparatus (V/cm),
𝜂: mobility rate in the electric field, see [5] (cm^2/nsec),

with i = 1, …, M, where M denotes the number of components.

The parameters in (1.1) are further described, see also [1].

The effective porosity is denoted by 𝜙 and declares the portion of the porosities of the aquifer that is filled with plasma, and we assume a nearly fluid phase. The transport term is indicated by the Darcy velocity 𝐯, which presents the flow direction and the absolute value of the plasma flux. The velocity field is divergence-free. The decay constant of the ith species is denoted by 𝜆_{i,i}. Thereby, k(i) denotes the indices of the other species.

In this paper, we concentrate on solving linear evolution equations, such as the differential equation
\[
\partial_t u = A u, \quad u(0) = u_0, \tag{1.8}
\]
where A can be an unbounded operator. We assume that we obtain large-scale differential equations, which are delicate to solve with standard solvers.

Our main focus will be to consider and contrast higher-order algorithms derived from standard schemes, such as Strang splitting schemes.

We propose iterative splitting schemes as a solver scheme which is simple to implement and parallelizable.

A rewriting into an integral formulation allows us to reduce the computation to numerical approximation schemes. While standard schemes have the disadvantage of computing commutators to achieve higher-order accuracy, we can speed up our schemes by recursive algorithms. Iterative schemes can be seen as successive approximations, which are based on recursive integral formulations in which an iterative method enforces the time dependency.

The paper is outlined as follows. In Section 2, we present the underlying splitting methods. The algorithms for the exponentials are given in Section 3. Numerical verifications are given in Section 4. In Section 5, we briefly summarize our results.

2. Iterative Splitting Method

The following algorithm is based on the iteration with a fixed splitting discretization step-size 𝜏; namely, on the time interval [t^n, t^{n+1}] we solve the following subproblems consecutively for i = 0, 2, …, 2m (cf. [6, 7]):
\[
\frac{\partial c_i(t)}{\partial t} = A c_i(t) + B c_{i-1}(t), \quad \text{with } c_i(t^n) = c^n,\; c_0(t^n) = c^n,\; c_{-1} = 0,
\]
\[
\frac{\partial c_{i+1}(t)}{\partial t} = A c_i(t) + B c_{i+1}(t), \quad \text{with } c_{i+1}(t^n) = c^n, \tag{2.1}
\]
where c^n is the known split approximation at the time level t = t^n. The split approximation at the time level t = t^{n+1} is defined as c^{n+1} = c_{2m+1}(t^{n+1}). (Clearly, the function c_{i+1}(t) depends on the interval [t^n, t^{n+1}], too, but, for the sake of simplicity, we omit the dependence on n in our notation.)
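To make the iteration (2.1) concrete, the following Python sketch carries out the alternating subproblems on one subinterval with a simple explicit inner integrator; the previous iterate is stored on the inner grid. The function name, the inner Euler sweep, and the parameters n_iter and n_inner are illustrative choices and not prescribed by the scheme itself.

```python
import numpy as np
from scipy.linalg import expm

def iterative_splitting_step(A, B, c_n, tau, n_iter=4, n_inner=200):
    """One time step of the iterative splitting (2.1), solved approximately."""
    h = tau / n_inner
    c_prev = np.zeros((n_inner + 1, len(c_n)))   # c_{-1}(t) = 0, see (2.1)
    for i in range(n_iter):
        # odd sub-steps propagate the current iterate with A, even ones with B
        M_cur, M_old = (A, B) if i % 2 == 0 else (B, A)
        c_cur = np.empty_like(c_prev)
        c_cur[0] = c_n
        for j in range(n_inner):
            c_cur[j + 1] = c_cur[j] + h * (M_cur @ c_cur[j] + M_old @ c_prev[j])
        c_prev = c_cur
    return c_prev[-1]

# rough check against the unsplit solution exp((A + B) tau) c_n,
# using the small 2x2 splitting of Section 4.1 as test data
A = np.array([[1.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [2.0, 0.0]])
c0 = np.array([0.0, 1.0])
print(iterative_splitting_step(A, B, c0, tau=0.1))
print(expm((A + B) * 0.1) @ c0)
```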

In the following, we will analyze the convergence and the rate of convergence of the method (2.1) as m tends to infinity for the linear operators A, B : 𝐗 → 𝐗, where we assume that these operators and their sum are generators of C_0 semigroups. We emphasize that these operators are not necessarily bounded, so the convergence is examined in a general Banach space setting.

The novelty of the convergence results is the reformulation in integral notation. Based on this, we can assume to have bounded integral operators which can be estimated and given in a recursive form. Such formulations are known from the work of [8, 9], and estimations of the kernel part with the exponential operators are sufficient to estimate the recursive formulations.

2.1. Error Analysis

We present the results on the consistency of our iterative method. We assume that the operators of the system are generators of a C_0 semigroup with respect to their underlying graph norms.

Theorem 2.1. Let us consider the abstract Cauchy problem in a Hilbert space 𝐗:
\[
\partial_t c(x,t) = A c(x,t) + B c(x,t), \quad 0 < t \le T,\; x \in \Omega,
\]
\[
c(x,0) = c_0(x), \quad x \in \Omega,
\]
\[
c(x,t) = c_1(x,t), \quad x \in \partial\Omega \times [0,T], \tag{2.2}
\]
where A, B : D(𝐗) → 𝐗 are given linear operators which are generators of a C_0-semigroup and c_0 ∈ 𝐗 is a given element. We assume that A, B are unbounded. Further, we assume the estimation of the unbounded operator B with sufficiently smooth initial conditions (see [8]):
\[
\| B \exp((A+B)\tau)\, u_0 \| \le \kappa. \tag{2.3}
\]
Further, we assume the estimation of the partial integration of the unbounded operator B (see [8]):
\[
\Big\| B \int_0^\tau \exp(B s)\, s \, ds \Big\| \le \tau C. \tag{2.4}
\]
Then, we can bound our iterative operator splitting method as
\[
\| S_i - \exp((A+B)\tau) \| \le C \tau^i, \tag{2.5}
\]
where S_i is the approximated solution of the ith iterative step and C is a constant that can be chosen uniformly on bounded time intervals.

Proof. Let us consider the iteration (2.1) on the subinterval [𝑡𝑛,𝑡𝑛+1].
For the first iteration, we have
\[
\partial_t c_1(t) = A c_1(t), \quad t \in [t^n, t^{n+1}], \tag{2.6}
\]
and for the second iteration we have
\[
\partial_t c_2(t) = A c_1(t) + B c_2(t), \quad t \in [t^n, t^{n+1}]. \tag{2.7}
\]
In general, we have:
(i) For the odd iterations, i = 2m+1, m = 0, 1, 2, …:
\[
\partial_t c_i(t) = A c_i(t) + B c_{i-1}(t), \quad t \in [t^n, t^{n+1}], \tag{2.8}
\]
where c_0(t) ≡ 0.
(ii) For the even iterations, i = 2m, m = 1, 2, …:
\[
\partial_t c_i(t) = A c_{i-1}(t) + B c_i(t), \quad t \in [t^n, t^{n+1}]. \tag{2.9}
\]
We have the following solutions for the iterative scheme: the solutions of the first two equations are given by the variation of constants:
\[
c_1(t) = \exp(A(t - t^n))\, c(t^n), \quad t \in [t^n, t^{n+1}],
\]
\[
c_2(t) = \exp(B(t - t^n))\, c(t^n) + \int_{t^n}^{t} \exp(B(t-s))\, A\, c_1(s)\, ds, \quad t \in [t^n, t^{n+1}]. \tag{2.10}
\]
For the recurrence relations with even and odd iterations, we have the following solutions. For the odd iterations, i = 2m+1, m = 0, 1, 2, …:
\[
c_i(t) = \exp(A(t - t^n))\, c(t^n) + \int_{t^n}^{t} \exp(A(t-s))\, B\, c_{i-1}(s)\, ds, \quad t \in [t^n, t^{n+1}]. \tag{2.11}
\]
For the even iterations, i = 2m, m = 1, 2, …:
\[
c_i(t) = \exp(B(t - t^n))\, c(t^n) + \int_{t^n}^{t} \exp(B(t-s))\, A\, c_{i-1}(s)\, ds, \quad t \in [t^n, t^{n+1}]. \tag{2.12}
\]
The consistency is derived as follows.
For e_1, we have
\[
c_1(\tau) = \exp(A\tau)\, c(t^n),
\]
\[
c(\tau) = \exp((A+B)\tau)\, c(t^n) = \exp(A\tau)\, c(t^n) + \int_{t^n}^{t^{n+1}} \exp(A(t^{n+1}-s))\, B\, \exp((A+B)(s-t^n))\, c(t^n)\, ds. \tag{2.13}
\]
We obtain
\[
e_1 = \| c - c_1 \| \le \| \exp((A+B)\tau)\, c(t^n) - \exp(A\tau)\, c(t^n) \| \le C_1 \tau \| c(t^n) \|. \tag{2.14}
\]
For e_2 we have (alternating)
\[
c_2(\tau) = \exp(B\tau)\, c(t^n) + \int_{t^n}^{t^{n+1}} \exp(B(t^{n+1}-s))\, A\, \exp(A(s-t^n))\, c(t^n)\, ds,
\]
\[
c(\tau) = \exp(B\tau)\, c(t^n) + \int_{t^n}^{t^{n+1}} \exp(B(t^{n+1}-s))\, A\, \exp(A(s-t^n))\, c(t^n)\, ds
\]
\[
\qquad + \int_{t^n}^{t^{n+1}} \exp(B(t^{n+1}-s))\, A \int_{t^n}^{s} \exp(A(s-\rho))\, B\, \exp((A+B)(\rho-t^n))\, c(t^n)\, d\rho\, ds. \tag{2.15}
\]
We obtain
\[
e_2 \le \| \exp((A+B)\tau)\, c(t^n) - c_2 \| \le C_2 \tau^2 \| c(t^n) \|. \tag{2.16}
\]
For the odd and even iterations, the recursive proof is given in the following. For the odd iterations (iterating only over A), i = 2m+1, m = 0, 1, 2, …, we have for e_i:
\[
c_i(\tau) = \exp(A\tau)\, c(t^n) + \sum_{j=1}^{i-1} \int_{t^n}^{t^{n+1}} \int_{t^n}^{s_1} \cdots \int_{t^n}^{s_{j-1}} \exp(A(t^{n+1}-s_1))\, B\, \exp(A(s_1-s_2))\, B \cdots B\, \exp(A(s_j - t^n))\, c(t^n)\, ds_j \cdots ds_1,
\]
\[
c(\tau) = c_i(\tau) + \int_{t^n}^{t^{n+1}} \int_{t^n}^{s_1} \cdots \int_{t^n}^{s_{i-1}} \exp(A(t^{n+1}-s_1))\, B\, \exp(A(s_1-s_2))\, B \cdots B\, \exp((A+B)(s_i - t^n))\, c(t^n)\, ds_i \cdots ds_1. \tag{2.17}
\]
We obtain
\[
e_i \le \| \exp((A+B)\tau)\, c(t^n) - c_i \| \le C \tau^i \| c(t^n) \|, \tag{2.18}
\]
where i is the number of iterative steps.
The same idea can be applied to the even iterative scheme and also for alternating 𝐴 and 𝐵.

Remark 2.2. The same idea can be applied to A = DΔ and B = 𝐯·∇, so that one operator is less unbounded, but we reduce the convergence order:
\[
e_1 = K \|B\| \tau^{\alpha_1} e_0 + \mathcal{O}(\tau^{1+\alpha_1}), \quad \text{and hence,} \quad e_2 = K \|B\| e_0 \tau^{1+\alpha_1+\alpha_2} + \mathcal{O}(\tau^{1+\alpha_1+\alpha_2}), \tag{2.19}
\]
where 0 ≤ 𝛼_1, 𝛼_2 < 1.

Remark 2.3. If we assume a consistency of 𝒪(τ^m) for the initial values e_1(t^n) and e_2(t^n), we can redo the proof and obtain at least a global error of the splitting methods of 𝒪(τ^{m−1}).
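The first two consistency bounds, (2.14) and (2.16), can be checked numerically by evaluating c_1 and c_2 from their variation-of-constants forms (2.10) with a quadrature rule and comparing against exp((A+B)τ)c(t^n). The following Python sketch does this for a small test matrix pair; the matrices and the quadrature resolution are illustrative.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import simpson   # scipy >= 1.6

A = np.array([[1.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [2.0, 0.0]])
c0 = np.array([0.0, 1.0])

def first_two_iterates(tau, n_quad=200):
    # c_1 and c_2 from (2.10) on [0, tau] with c(t^n) = c0
    c1 = expm(A * tau) @ c0
    s = np.linspace(0.0, tau, n_quad + 1)
    integrand = np.array([expm(B * (tau - si)) @ (A @ (expm(A * si) @ c0))
                          for si in s])
    c2 = expm(B * tau) @ c0 + simpson(integrand, x=s, axis=0)
    return c1, c2

for tau in [0.1, 0.05, 0.025]:
    exact = expm((A + B) * tau) @ c0
    c1, c2 = first_two_iterates(tau)
    print(tau, np.linalg.norm(exact - c1), np.linalg.norm(exact - c2))
# halving tau should roughly halve e_1 and quarter e_2, matching (2.14), (2.16)
```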

2.2. Splitting Method to Couple Mobile, Immobile, and Adsorbed Parts

The motivation for the splitting method is based on the following observations.
(i) The mobile phase is semidiscretized with fast finite volume methods and can be stored in a stiffness matrix. We achieve large time steps if we consider implicit Runge-Kutta methods of lower order (e.g., implicit Euler) as a time discretization method.
(ii) The immobile, adsorbed, and immobile-adsorbed phases are purely ordinary differential equations, and each of them is cheap to solve with explicit Runge-Kutta schemes.
(iii) The ODEs can be seen as perturbations and can all be solved explicitly in a fast iterative scheme.

For the full equation, we consider the following matrix notation:
\[
\partial_t \mathbf{c} = A_1 \mathbf{c} + A_2 \mathbf{c} + B_1(\mathbf{c}_{\mathrm{im}} - \mathbf{c}) + B_2(\mathbf{c}_{\mathrm{ad}} - \mathbf{c}) + \mathbf{Q},
\]
\[
\partial_t \mathbf{c}_{\mathrm{im}} = A_2 \mathbf{c}_{\mathrm{im}} + B_1(\mathbf{c} - \mathbf{c}_{\mathrm{im}}) + B_2(\mathbf{c}_{\mathrm{im,ad}} - \mathbf{c}_{\mathrm{im}}) + \mathbf{Q}_{\mathrm{im}},
\]
\[
\partial_t \mathbf{c}_{\mathrm{ad}} = A_2 \mathbf{c}_{\mathrm{ad}} + B_2(\mathbf{c} - \mathbf{c}_{\mathrm{ad}}) + \mathbf{Q}_{\mathrm{ad}},
\]
\[
\partial_t \mathbf{c}_{\mathrm{im,ad}} = A_2 \mathbf{c}_{\mathrm{im,ad}} + B_2(\mathbf{c}_{\mathrm{im}} - \mathbf{c}_{\mathrm{im,ad}}) + \mathbf{Q}_{\mathrm{im,ad}}, \tag{2.20}
\]
where 𝐜 = (c_1, …, c_m)^T is the spatially discretized concentration in the mobile phase, see (1.1), 𝐜_im = (c_{1,im}, …, c_{m,im})^T is the concentration in the immobile phase, and the same holds for the other phase concentrations. A_1 is the stiffness matrix of (1.1), A_2 is the reaction matrix of the right-hand side of (1.1), and B_1 and B_2 are diagonal matrices with the exchange of the immobile and kinetic parameters, see (1.4) and (1.5).

Further, 𝐐, …, 𝐐_im,ad are the spatially discretized source vectors.

Now, we have the following ordinary differential equation:
\[
\partial_t \mathbf{C} =
\begin{pmatrix}
A_1 + A_2 - B_1 - B_2 & B_1 & B_2 & 0 \\
B_1 & A_2 - B_1 - B_2 & 0 & B_2 \\
B_2 & 0 & A_2 - B_2 & 0 \\
0 & B_2 & 0 & A_2 - B_2
\end{pmatrix}
\mathbf{C} + \mathbf{Q}, \tag{2.21}
\]
where 𝐂 = (𝐜, 𝐜_im, 𝐜_ad, 𝐜_im,ad)^T and the right-hand side is given as 𝐐 = (𝐐, 𝐐_im, 𝐐_ad, 𝐐_im,ad)^T.

For such an equation, we apply the decomposition of the matrices:
\[
\partial_t \mathbf{C} = A \mathbf{C} + \mathbf{Q}, \qquad
\partial_t \mathbf{C} = \mathcal{A}_1 \mathbf{C} + \mathcal{A}_2 \mathbf{C} + \mathbf{Q}, \tag{2.22}
\]
where
\[
\mathcal{A}_1 =
\begin{pmatrix}
A_1 + A_2 & 0 & 0 & 0 \\
0 & A_2 & 0 & 0 \\
0 & 0 & A_2 & 0 \\
0 & 0 & 0 & A_2
\end{pmatrix}, \qquad
\mathcal{A}_2 =
\begin{pmatrix}
-B_1 - B_2 & B_1 & B_2 & 0 \\
B_1 & -B_1 - B_2 & 0 & B_2 \\
B_2 & 0 & -B_2 & 0 \\
0 & B_2 & 0 & -B_2
\end{pmatrix}. \tag{2.23}
\]

The equation system is numerically solved by an iterative scheme.

Algorithm 2.4. We divide our time interval [0, T] into subintervals [t^n, t^{n+1}], where n = 0, 1, …, N − 1, t^0 = 0, and t^N = T.
We start with 𝑛=0.
(1) The initial conditions are given by C^0(t^{n+1}) = C(t^n). We start with k = 1.
(2) Compute the fixed-point iteration scheme given as
\[
\partial_t \mathbf{C}^k = \mathcal{A}_1 \mathbf{C}^k + \mathcal{A}_2 \mathbf{C}^{k-1} + \mathbf{Q}, \tag{2.24}
\]
where k is the iteration index, see [10]. For the time integration, we apply Runge-Kutta methods as ODE solvers, see [11, 12].
(3) The stopping criterion for the time interval [t^n, t^{n+1}] is given as
\[
\| \mathbf{C}^k(t^{n+1}) - \mathbf{C}^{k-1}(t^{n+1}) \| \le \mathrm{err}, \tag{2.25}
\]
where \|\cdot\| is the maximum norm over all components of the solution vector and err is a given error bound, for example, err = 10^{-4}. If (2.25) is fulfilled, we have the result
\[
\mathbf{C}(t^{n+1}) = \mathbf{C}^k(t^{n+1}). \tag{2.26}
\]
If n = N, we stop and are done; otherwise we set n = n + 1 and go to step (1). If (2.25) is not fulfilled, we set k = k + 1 and go to step (2).
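A compact sketch of Algorithm 2.4 for one subinterval is given below; the split operators 𝒜_1 and 𝒜_2 are passed as dense arrays A1 and A2, the sub-steps of (2.24) are advanced with the implicit Euler method as a simple stand-in for the Runge-Kutta solvers mentioned in step (2), and err and max_iter are illustrative parameters.

```python
import numpy as np

def fixed_point_step(A1, A2, Q, C_n, tau, err=1e-4, max_iter=50):
    """One subinterval of Algorithm 2.4 with the stopping criterion (2.25)."""
    Id = np.eye(len(C_n))
    C_old = C_n.copy()                       # C^0(t^{n+1}) = C(t^n), step (1)
    for k in range(1, max_iter + 1):
        # implicit Euler for d/dt C^k = A1 C^k + A2 C^{k-1} + Q, step (2)
        rhs = C_n + tau * (A2 @ C_old + Q)
        C_new = np.linalg.solve(Id - tau * A1, rhs)
        if np.max(np.abs(C_new - C_old)) <= err:   # maximum norm, (2.25)
            return C_new, k
        C_old = C_new
    return C_old, max_iter
```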

The error analysis of the scheme is given in the following theorem.

Theorem 2.5. Let A, B ∈ ℒ(𝐗) be given linear bounded operators on a Banach space 𝐗. We consider the abstract Cauchy problem:
\[
\partial_t \mathbf{C}(t) = A \mathbf{C}(t) + B \mathbf{C}(t), \quad t^n \le t \le t^{n+1},
\]
\[
\mathbf{C}(t^n) = \mathbf{C}^n, \quad \text{for } n = 1, \ldots, N, \tag{2.27}
\]
where t^1 = 0 and the final time is t^N = T ∈ ℝ^+. Then problem (2.27) has a unique solution. For a finite number of steps with time-step size τ_n = t^{n+1} − t^n, the iteration (2.24) for k = 1, 2, …, q is consistent with an order of consistency 𝒪(τ_n^q).

Proof. The outline of the proof is given in [2].

In the next section we describe the computation of the integral formulation with exp-functions.

3. Exponentials to Compute Iterative Schemes

The theoretical ideas can be discussed in the following formulation:
\[
D_A(B,t)^{[0]} = \exp(tB), \qquad
D_A(B,t)^{[k]} = k \int_0^t \exp((t-s)B)\, A\, D_A(B,s)^{[k-1]}\, ds, \tag{3.1}
\]
and the matrix formulation of our two-step scheme is given as
\[
\mathcal{A} =
\begin{pmatrix}
A & 0 & \cdots & & \\
A & B & 0 & \cdots & \\
0 & B & A & \ddots & \\
\vdots & & \ddots & \ddots & 0 \\
0 & \cdots & & B & A
\end{pmatrix}; \tag{3.2}
\]
the computation of the exp-matrix can be expressed as
\[
\exp(\mathcal{A} t) =
\begin{pmatrix}
\exp(A t) & 0 & \cdots & & \\
D_A(B,t)^{[1]} & \exp(B t) & 0 & \cdots & \\
\dfrac{D_B(A,t)^{[2]}}{2!} & \dfrac{D_B(A,t)^{[1]}}{1!} & \exp(A t) & \cdots & \\
\vdots & & \ddots & \ddots & \\
\dfrac{D_B(A,t)^{[n]}}{n!} & \cdots & & \dfrac{D_B(A,t)^{[1]}}{1!} & \exp(A t)
\end{pmatrix}. \tag{3.3}
\]

Here, we have to compute the right-hand side as a time-dependent term; that means we evaluate exp(At) and exp(Bt) as a Taylor expansion. Then the integral formulation can be carried out with polynomials, and we derive the expansion of such integral operators with the 𝜙-functions.

The 𝜙-functions are defined as iterated integrals of the exp-functions. We can derive the 𝜙-functions analytically with respect to the integral formulation of a matrix exp-function.

In the following, we restrict ourselves to an approximation with a fixed right-hand side (that means we assume D_A(B,s)^{[k−1]} ≈ D_A(B,0)^{[k−1]}).

Later, we also continue with more extended schemes.

Computing 𝜙-Functions:
\[
\phi_0(x) = e^x, \qquad \phi_{k+1}(x) = \frac{\phi_k(x) - 1/k!}{x}, \quad k \ge 0. \tag{3.4}
\]
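For small, well-scaled matrices, the 𝜙-functions of (3.4) can be evaluated directly from their Taylor series 𝜙_k(A) = Σ_{j≥0} A^j/(j+k)!; the following sketch does exactly that (production codes would rather use scaling-and-squaring or Krylov techniques). The helper name phi() and the truncation length are illustrative.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def phi(k, A, n_terms=30):
    """Truncated Taylor series for phi_k(A) = sum_{j>=0} A^j / (j+k)!."""
    n = A.shape[0]
    term = np.eye(n) / factorial(k)          # j = 0 term
    result = term.copy()
    for j in range(1, n_terms):
        term = term @ A / (j + k)            # A^j/(j+k)! from A^{j-1}/(j-1+k)!
        result = result + term
    return result

# consistency checks with (3.4): phi_0(A) = exp(A), phi_1(A) A = exp(A) - I
A = np.array([[1.0, 2.0], [3.0, 0.0]])
print(np.allclose(phi(0, A), expm(A)))
print(np.allclose(phi(1, A) @ A, expm(A) - np.eye(2)))
```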
The matrix formulation of our scheme is then given as
\[
Y(t) = \exp(\mathcal{A} t)\, Y(0), \tag{3.5}
\]
where 𝒜 is given as
\[
\mathcal{A} =
\begin{pmatrix}
A & 0 & \cdots & & \\
A & B & 0 & \cdots & \\
0 & B & A & \ddots & \\
\vdots & & \ddots & \ddots & 0 \\
0 & \cdots & & B & A
\end{pmatrix}; \tag{3.6}
\]
the computation of the exp-matrix can be expressed in a first-order scheme, and with the assumption of commutation it is given as
\[
\exp(\mathcal{A} t) =
\begin{pmatrix}
\exp(A t) & 0 & \cdots & & \\
\phi_1(B t) A t & \exp(B t) & 0 & \cdots & \\
\phi_2(A t) B (B+A) t^2 & \phi_1(A t) B t & \exp(A t) & \cdots & \\
\vdots & & \ddots & \ddots & \\
\phi_i(A t) B (A+B)^{i-1} t^i & \cdots & & \phi_1(A t) B t & \exp(A t)
\end{pmatrix}, \tag{3.7}
\]
where we assume u_i(s) = u(t^n), that is, we approximate the right-hand side term.

For higher orders, we should also include the full derivatives of c_1, c_2, ….

3.1. Derivation of the Formula

Consider the equation
\[
\dot{x} = A x + a, \quad x(0) = 0; \tag{3.8}
\]
the solution of this equation in terms of the 𝜙_k, k = 0, 1, …, is given as
\[
x = t\, \phi_1(t A)\, a. \tag{3.9}
\]
Similarly, the equation
\[
\dot{x} = A x + b t, \quad x(0) = 0, \tag{3.10}
\]
has a solution of the form
\[
x = t^2 \phi_2(t A)\, b. \tag{3.11}
\]
In general, the solution of the equation
\[
\dot{x} = A x + a + b t + c \frac{t^2}{2!} + \cdots, \quad x(0) = 0, \tag{3.12}
\]
admits the form
\[
x = t\, \phi_1(t A)\, a + t^2 \phi_2(t A)\, b + t^3 \phi_3(t A)\, c + \cdots. \tag{3.13}
\]
In this section, we use formula (3.13) for the iterative schemes given as
\[
\dot{u}_1 = A u_1, \quad \dot{u}_2 = A u_1 + B u_2, \quad \dot{u}_3 = A u_3 + B u_2, \quad \dot{u}_4 = \cdots. \tag{3.14}
\]
The solution of the first iteration is given by
\[
u_1 = e^{A t} u_0. \tag{3.15}
\]
Inserting this into (3.26) and expanding e^{A t} up to order 1, we have a second-order approximation of the exact solution, which has the form
\[
u_2 = e^{B t} u_0 + \phi_1(t B)\, A t\, u_0. \tag{3.16}
\]
Similarly, inserting (3.16) into (3.27) and expanding 𝜙_1(t B), we have a third-order approximation of the exact solution:
\[
u_3 = e^{A t} u_0 + \phi_1(t A)\, B t\, u_0 + \phi_2(t A)\, B (B+A)\, t^2 u_0. \tag{3.17}
\]
In general, for i = 0, 2, 4, …, we have the (p = i+1)-th order approximation of the exact solution in terms of the 𝜙_i functions as follows:
\[
u_i = e^{A t} u_0 + \phi_1(t A)\, A t\, u_0 + \phi_2(t A)\, A (B+A)\, t^2 u_0 + \cdots + \phi_i(t A)\, A (A+B)^{i-1} t^i u_0. \tag{3.18}
\]
For i = 1, 3, 5, …, we have
\[
u_i = e^{B t} u_0 + \phi_1(t B)\, B t\, u_0 + \phi_2(t B)\, B (B+A)\, t^2 u_0 + \cdots + \phi_i(t B)\, B (B+A)^{i-1} t^i u_0. \tag{3.19}
\]
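A quick numerical illustration of (3.16) and (3.17): with the phi() helper sketched after (3.4), the approximations u_2 and u_3 can be compared with the unsplit solution exp((A+B)t)u_0. The 2×2 matrices below are only test data.

```python
import numpy as np
from scipy.linalg import expm
# assumes the phi() helper sketched after (3.4) is available

A = np.array([[1.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [2.0, 0.0]])
u0 = np.array([0.0, 1.0])

for t in [0.2, 0.1, 0.05]:
    exact = expm((A + B) * t) @ u0
    u2 = expm(B * t) @ u0 + phi(1, B * t) @ (A * t) @ u0          # (3.16)
    u3 = (expm(A * t) @ u0 + phi(1, A * t) @ (B * t) @ u0
          + phi(2, A * t) @ (B @ (B + A) * t**2) @ u0)            # (3.17)
    print(t, np.linalg.norm(exact - u2), np.linalg.norm(exact - u3))
# the u_2 error shrinks roughly like t^2 and the u_3 error like t^3
```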

3.2. Algorithms

In the following, we discuss the one- and two-side algorithms.

3.3. Two-Side Scheme

For the implementation of the integral formulation, we can exploit parallel ideas, which means we select independent parts of the formulation.

Step 1. Determine the order of the method by fixed iteration number.

Step 2. Consider the time interval [t_0, T] and divide it into N subintervals so that the time step is h = (T − t_0)/N.

Step 3. On each subinterval [t_n, t_n + h], n = 0, 1, …, N − 1, use the algorithm by considering the initial conditions for each step as u(t_0) = u_0, u_i(t_n) = u_{i−1}(t_n) = u(t_n):
\[
u_2(t_n + h) = \left[ \phi_0(B t) + \phi_1(B t) A t \right] u(t_n),
\]
\[
u_3(t_n + h) = \left[ \phi_0(A t) + \phi_1(A t) B t + \phi_2(A t) B (B+A) t^2 \right] u(t_n),
\]
\[
\vdots
\]
\[
u_{2i}(t_n + h) = \left[ \phi_0(B t) + \phi_1(B t) A t + \cdots + \phi_{2i-1}(B t) A (A+B)^{2i-1} t^{2i} \right] u_{2i-1}(t_n),
\]
\[
u_{2i+1}(t_n + h) = \left[ \phi_0(A t) + \phi_1(A t) B t + \cdots + \phi_{2i}(A t) B (B+A)^{2i} t^{2i+1} \right] u_{2i}(t_n). \tag{3.20}
\]

Step 4. u_i(t_n + h) ≈ u(t_n + h).

Step 5. Repeat this procedure for next interval until the desired time 𝑇 is reached.
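The two-side scheme of Steps 1-5 only needs the stage matrices of (3.20) once per step size; afterwards every time step is a single matrix-vector product. The following sketch implements the u_3 stage with the phi() helper sketched after (3.4); the test matrices and step number are illustrative.

```python
import numpy as np
from scipy.linalg import expm
# assumes the phi() helper sketched after (3.4) is available

def two_side_stepper(A, B, u0, T, N):
    h = T / N
    # u_3 stage matrix of (3.20), assembled once for the fixed step size h
    S3 = (expm(A * h) + phi(1, A * h) @ (B * h)
          + phi(2, A * h) @ (B @ (B + A) * h**2))
    u = u0.copy()
    for _ in range(N):
        u = S3 @ u            # only a matrix-vector product per time step
    return u

A = np.array([[1.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [2.0, 0.0]])
u0 = np.array([0.0, 1.0])
print(two_side_stepper(A, B, u0, T=1.0, N=100))
print(expm((A + B) * 1.0) @ u0)       # reference solution
```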

3.4. One-Side Scheme (Alternative Notation with Commutators)

For the one-side scheme, we take into account the following commutator relation.

Definition 3.1. The following relation is given:
\[
\exp(-t A)\, B\, \exp(t A) = [B, \exp(t A)], \tag{3.21}
\]
where [·, ·] is the matrix commutator notation.
The integration is given as
\[
\int_0^t \exp(-s A)\, B\, \exp(s A)\, ds = [B, \phi_1(t A)]; \tag{3.22}
\]
the representation is given in [13].
Further, we have the recursive integration
\[
[B, \phi_i(t A)]\, t^i = \int_0^t [B, \phi_{i-1}(s A)]\, ds, \tag{3.23}
\]
where 𝜙_i is defined as
\[
\phi_0(A t) = \exp(A t), \qquad \phi_i(A t) = \int_0^t \phi_{i-1}(A s)\, ds, \qquad \phi_i(A t) = \left( \phi_{i-1}(A t) - \frac{t^{i-1}}{(i-1)!} I \right) A^{-1}. \tag{3.24}
\]

The iterative scheme with the equations
\[
\dot{u}_1 = A u_1, \tag{3.25}
\]
\[
\dot{u}_2 = A u_2 + B u_1, \tag{3.26}
\]
\[
\dot{u}_3 = A u_3 + B u_2, \tag{3.27}
\]
\[
\dot{u}_4 = \cdots, \tag{3.28}
\]
is solved as
\[
c_1(t) = \exp(A t)\, c(t^n),
\]
\[
c_2(t) = c_1(t) + c_1(t) \int_0^t [B, \exp(s A)]\, ds = c_1(t) + c_1(t)\, [B, \phi_1(t A)],
\]
\[
c_3(t) = c_2(t) + c_1(t) \int_0^t [B, \exp(s A)]\, [B, \phi_1(s A)]\, ds
= c_2(t) + c_1(t) \left( [B, \exp(t A)]\, [B, \phi_2(t A)] + [B, A \exp(t A)]\, [B, \phi_3(t A)] \right) + O(t^3). \tag{3.29}
\]

The recursion is given as
\[
c_i(t) = c_{i-1}(t) + c_1(t) \int_0^{t} [B, \exp(s_1 A)] \int_0^{s_1} [B, \exp(s_2 A)] \cdots \int_0^{s_{i-2}} [B, \exp(s_{i-1} A)]\, ds_{i-1} \cdots ds_2\, ds_1. \tag{3.30}
\]
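Instead of assembling the bracket terms of (3.29)-(3.30) symbolically, the one-side recursion can also be evaluated directly from its variation-of-constants form (2.11) with a quadrature rule, which avoids forming commutators explicitly. The following sketch is only meant to illustrate the recursion; the quadrature resolution, iteration depth, and test matrices are illustrative.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import simpson   # scipy >= 1.6

def one_side_iterates(A, B, c0, t, n_iter=4, n_quad=64):
    """Evaluate c_{n_iter}(t) of the one-side recursion via (2.11)."""
    s = np.linspace(0.0, t, n_quad + 1)
    c_prev = np.array([expm(A * si) @ c0 for si in s])       # c_1 on the grid
    for _ in range(n_iter - 1):
        c_new = np.empty_like(c_prev)
        c_new[0] = c0
        for m in range(1, len(s)):
            sub = s[: m + 1]
            vals = np.array([expm(A * (s[m] - sj)) @ (B @ c_prev[j])
                             for j, sj in enumerate(sub)])
            c_new[m] = expm(A * s[m]) @ c0 + simpson(vals, x=sub, axis=0)
        c_prev = c_new
    return c_prev[-1]

A = np.array([[1.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [2.0, 0.0]])
c0 = np.array([0.0, 1.0])
print(one_side_iterates(A, B, c0, t=0.1))
print(expm((A + B) * 0.1) @ c0)
```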

Remark 3.2. With the novel notation, we have embedded the commutator into the computational scheme. For such a scheme, we can save the additional computation of the commutators.

4. Numerical Examples

In the following, we deal with numerical examples to verify the theoretical results.

4.1. First Example: Matrix Problem

In the first example, we consider the matrix equation
\[
u'(t) = \begin{pmatrix} 1 & 2 \\ 3 & 0 \end{pmatrix} u, \qquad u(0) = u_0 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}; \tag{4.1}
\]
the exact solution is
\[
u(t) = \frac{1}{5}\begin{pmatrix} 2 e^{3t} - 2 e^{-2t} \\ 2 e^{3t} + 3 e^{-2t} \end{pmatrix}. \tag{4.2}
\]
We split the matrix as
\[
\begin{pmatrix} 1 & 2 \\ 3 & 0 \end{pmatrix}
= \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}
+ \begin{pmatrix} 0 & 1 \\ 2 & 0 \end{pmatrix}. \tag{4.3}
\]

Figure 1 presents the numerical errors between the exact and the numerical solution.

Figure 2 presents the CPU time of the standard and the iterative splitting schemes.
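The setting of this example can be reproduced with a few lines: the exact solution (4.2) at T = 1 is compared with Lie (A-B) splitting, Strang splitting, and the u_3 stage of the iterative scheme, reusing the phi() helper sketched after (3.4). The step number N is an illustrative choice; the actual error and CPU-time curves are the ones reported in Figures 1 and 2.

```python
import numpy as np
from scipy.linalg import expm
# assumes the phi() helper sketched after (3.4) is available

A = np.array([[1.0, 1.0], [1.0, 0.0]])   # splitting (4.3)
B = np.array([[0.0, 1.0], [2.0, 0.0]])
u0 = np.array([0.0, 1.0])
T, N = 1.0, 200
h = T / N

exact = expm((A + B) * T) @ u0
lie = np.linalg.matrix_power(expm(B * h) @ expm(A * h), N) @ u0
strang = np.linalg.matrix_power(
    expm(A * h / 2) @ expm(B * h) @ expm(A * h / 2), N) @ u0
S3 = (expm(A * h) + phi(1, A * h) @ (B * h)
      + phi(2, A * h) @ (B @ (B + A) * h**2))
iterative = np.linalg.matrix_power(S3, N) @ u0

for name, approx in [("Lie", lie), ("Strang", strang), ("iterative u_3", iterative)]:
    print(name, np.linalg.norm(exact - approx))
```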

4.2. Second Experiment: 10×10 Matrix

We first deal with an ODE and separate the complex operator into two simpler operators.

We deal with the 10×10 ODE system:
\[
\partial_t u_1 = -\lambda_{1,1} u_1 + \lambda_{2,1} u_2 + \cdots + \lambda_{10,1} u_{10}, \tag{4.4}
\]
\[
\partial_t u_2 = \lambda_{1,2} u_1 - \lambda_{2,2}(t) u_2 + \cdots + \lambda_{10,2} u_{10}, \tag{4.5}
\]
\[
\vdots \tag{4.6}
\]
\[
\partial_t u_{10} = \lambda_{1,10} u_1 + \lambda_{2,10}(t) u_2 + \cdots - \lambda_{10,10} u_{10}, \tag{4.7}
\]
\[
u_1(0) = u_{1,0}, \; \ldots, \; u_{10}(0) = u_{10,0} \quad \text{(initial conditions)}, \tag{4.8}
\]
where 𝜆_1(t) ∈ ℝ^+ and 𝜆_2(t) ∈ ℝ^+ are the decay factors and u_{1,0}, …, u_{10,0} ∈ ℝ^+. We have the time interval t ∈ [0, T].

We rewrite (4.4)–(4.7) in operator notation and concentrate on the following equation:
\[
\partial_t u = A(t) u + B(t) u, \tag{4.9}
\]
where u_1(0) = u_{1,0} = 1.0, u_2(0) = u_{2,0} = 1.0 are the initial conditions, and where we have 𝜆_1(t) = t and 𝜆_2(t) = t^2.

The operators are split in the following way: operator A, see (4.10), covers the upper diagonal entries, and operator B, see (4.11), covers the lower diagonal entries of the ODE system (4.4)–(4.7). Such a decomposition allows us to reduce the computation of the matrix exponential to an upper or lower triangular matrix and speeds up the solver process:
𝐴=0.010.0100.010.0100.010.010.0200.010.010.010.0300.010.010.010.010.010.010.010.010.0800.010.010.010.010.010.010.010.0100.08,(4.10)
𝐵=0.0800.010.010.010.010.010.010.010.0100.080.010.010.010.010.010.010.010.0100000000.020.010.01000000000.010.01000000000.010.01.(4.11)

Figure 3 presents the numerical errors between the exact and the numerical solution. Here we have the best approximation to the exact solution with a sixth-order iterative scheme (6 iterative steps), but no additional computational time for finer time steps.

Figure 4 presents the CPU time of the standard and the iterative splitting schemes. Compared to standard splitting schemes, the iterative schemes are cheaper to compute and higher in accuracy. We can also choose smaller time steps to obtain more accurate results and use nearly the same computational resources.

Remark 4.1. The computational results show the benefit of the iterative schemes. We save computational time and achieve higher-order accuracy. The one-side and two-side schemes have the same results.

4.3. Third Example: Commutator Problem

We assume that we have a large norm of the commutator [A, B] and deal with the matrix equation
\[
u'(t) = \begin{pmatrix} 10 & 1 \\ 1 & 10 \end{pmatrix} u, \qquad u(0) = u_0 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}. \tag{4.12}
\]

In the following, we discuss different splitting ideas.

Version 1
We split the matrix as
\[
\begin{pmatrix} 10 & 1 \\ 1 & 10 \end{pmatrix}
= \begin{pmatrix} 9 & 0 \\ 1 & 1 \end{pmatrix}
+ \begin{pmatrix} 1 & 1 \\ 0 & 9 \end{pmatrix}, \tag{4.13}
\]
while \(\|[A,B]\| \approx \max\{\|A\|, \|B\|\}\).

Here we have to deal with highly noncommutative operators. The computational speedup is obtained with the iterative schemes, since the commutator is not needed to obtain more accurate results, whereas the standard splitting schemes need such commutators for the error reduction.
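A two-line check of the noncommutativity claim for the splitting (4.13):

```python
import numpy as np

A = np.array([[9.0, 0.0], [1.0, 1.0]])   # splitting (4.13)
B = np.array([[1.0, 1.0], [0.0, 9.0]])
comm = A @ B - B @ A
print(np.linalg.norm(comm, 2))                               # about 9.0
print(max(np.linalg.norm(A, 2), np.linalg.norm(B, 2)))       # about 9.1
# so ||[A, B]|| is indeed of the size of max{||A||, ||B||}
```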

Figure 5 presents the numerical errors between the exact and the numerical solution.

Figure 6 presents the CPU time of the standard and the iterative splitting schemes.

Further, for the one-side scheme, we obtain improved results with the following splitting.

Version 2
In this version, we have
\[
\begin{pmatrix} 10 & 1 \\ 1 & 10 \end{pmatrix}
= \begin{pmatrix} 9 & 0 \\ 1 & 9 \end{pmatrix}
+ \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}. \tag{4.14}
\]
Figure 7 presents the numerical errors between the exact and the numerical solution.

A more delicate problem is given for the stiff matrices.

Version 3
In this version, we have
\[
\begin{pmatrix} 10^4 & 1 \\ 1 & 10^4 \end{pmatrix}
= \begin{pmatrix} 10^4 - 1 & 0 \\ 1 & 10^4 - 1 \end{pmatrix}
+ \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}. \tag{4.15}
\]
Figure 8 presents the numerical errors between the exact and the numerical solution.

Figure 9 presents the CPU time of the standard and the iterative splitting schemes.

Remark 4.2. The iterative schemes with fast computations of the exponential matrices achieve a speedup. The nearly constant CPU time of the iterative schemes shows their benefit compared with the expensive standard schemes. Also for stiff problems with multiple iterative steps, we reach the same results as the standard A-B or Strang splitting schemes, while we are independent of the commutator estimation.

4.4. Two-Phase Example

The next example is a simplified real-life problem for a multiphase transport-reaction equation. We deal with mobile and immobile pores in the porous media; such simulations are used for waste scenarios.

We concentrate on the computational benefits of a fast computation of the iterative scheme, given with matrix exponentials.

The equation is given as
\[
\partial_t c_1 + \nabla\cdot \mathbf{F}\, c_1 = g(-c_1 + c_{1,\mathrm{im}}) - \lambda_1 c_1, \quad \text{in } \Omega\times[0,t],
\]
\[
\partial_t c_2 + \nabla\cdot \mathbf{F}\, c_2 = g(-c_2 + c_{2,\mathrm{im}}) + \lambda_1 c_1 - \lambda_2 c_2, \quad \text{in } \Omega\times[0,t],
\]
\[
\mathbf{F} = \mathbf{v} - D\nabla,
\]
\[
\partial_t c_{1,\mathrm{im}} = g(c_1 - c_{1,\mathrm{im}}) - \lambda_1 c_{1,\mathrm{im}}, \quad \text{in } \Omega\times[0,t],
\]
\[
\partial_t c_{2,\mathrm{im}} = g(c_2 - c_{2,\mathrm{im}}) + \lambda_1 c_{1,\mathrm{im}} - \lambda_2 c_{2,\mathrm{im}}, \quad \text{in } \Omega\times[0,t],
\]
\[
c_1(\mathbf{x},0) = c_{1,0}(\mathbf{x}), \quad c_2(\mathbf{x},0) = c_{2,0}(\mathbf{x}), \quad \text{on } \Omega,
\]
\[
c_1(\mathbf{x},t) = c_{1,1}(\mathbf{x},t), \quad c_2(\mathbf{x},t) = c_{2,1}(\mathbf{x},t), \quad \text{on } \partial\Omega\times[0,t],
\]
\[
c_{1,\mathrm{im}}(\mathbf{x},0) = 0, \quad c_{2,\mathrm{im}}(\mathbf{x},0) = 0, \quad \text{on } \Omega,
\]
\[
c_{1,\mathrm{im}}(\mathbf{x},t) = 0, \quad c_{2,\mathrm{im}}(\mathbf{x},t) = 0, \quad \text{on } \partial\Omega\times[0,t]. \tag{4.16}
\]

In the following, we deal with the semidiscretized equation given by the matrices:
\[
\partial_t \mathbf{C} =
\begin{pmatrix}
A - \Lambda_1 - G & 0 & G & 0 \\
\Lambda_1 & A - \Lambda_2 - G & 0 & G \\
G & 0 & -\Lambda_1 - G & 0 \\
0 & G & \Lambda_1 & -\Lambda_2 - G
\end{pmatrix}
\mathbf{C}, \tag{4.17}
\]
where 𝐂 = (𝐜_1, 𝐜_2, 𝐜_{1,im}, 𝐜_{2,im})^T, while 𝐜_1 = (c_{1,1}, …, c_{1,I}) is the solution of the first species in the mobile phase in each spatial discretization point (i = 1, …, I); the same holds for the other solution vectors.

We have the following two operators for the splitting method:
\[
A = \frac{D}{\Delta x^2}
\begin{pmatrix}
-2 & 1 & & \\
1 & -2 & 1 & \\
& \ddots & \ddots & \ddots \\
& & 1 & -2
\end{pmatrix}_{I\times I}
+ \frac{v}{\Delta x}
\begin{pmatrix}
-1 & & & \\
1 & -1 & & \\
& \ddots & \ddots & \\
& & 1 & -1
\end{pmatrix}_{I\times I}, \tag{4.18}
\]
where I is the number of spatial points, and
\[
\Lambda_1 =
\begin{pmatrix}
\lambda_1 & & \\
& \ddots & \\
& & \lambda_1
\end{pmatrix}_{I\times I}, \qquad
\Lambda_2 =
\begin{pmatrix}
\lambda_2 & & \\
& \ddots & \\
& & \lambda_2
\end{pmatrix}_{I\times I}, \qquad
G =
\begin{pmatrix}
g & & \\
& \ddots & \\
& & g
\end{pmatrix}_{I\times I}. \tag{4.19}
\]

We decouple into the following matrices:
\[
A_1 =
\begin{pmatrix}
A & 0 & 0 & 0 \\
0 & A & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}_{4I\times 4I}, \qquad
A_2 =
\begin{pmatrix}
-\Lambda_1 & 0 & 0 & 0 \\
\Lambda_1 & -\Lambda_2 & 0 & 0 \\
0 & 0 & -\Lambda_1 & 0 \\
0 & 0 & \Lambda_1 & -\Lambda_2
\end{pmatrix}_{4I\times 4I},
\]
\[
A_3 =
\begin{pmatrix}
-G & 0 & G & 0 \\
0 & -G & 0 & G \\
G & 0 & -G & 0 \\
0 & G & 0 & -G
\end{pmatrix}_{4I\times 4I}. \tag{4.20}
\]

For the operators A_1 and Ã_2 = A_2 + A_3, we apply the iterative splitting method.

Based on the decomposition, operator A_1 is only tridiagonal and operator Ã_2 consists of diagonal blocks. Such a matrix structure reduces the computation of the exponential operators.
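A sketch of how the block operators of (4.18)-(4.20) can be assembled with sparse matrices is given below; the parameter values D, v, g, λ_1, λ_2 and the grid size are illustrative and not the ones used for Figure 10. The assembly makes the structure explicit: A_1 is banded (tridiagonal blocks on the diagonal) and Ã_2 = A_2 + A_3 has only diagonal blocks, so both exponentials stay cheap.

```python
import numpy as np
import scipy.sparse as sp

I_pts = 50                                  # number of spatial points (illustrative)
D, v, dx = 1e-3, 1e-2, 1.0 / I_pts          # illustrative parameters
g, lam1, lam2 = 1e-2, 1e-1, 5e-2

ones = np.ones(I_pts)
# convection-diffusion operator of (4.18)
A = (D / dx**2) * sp.diags([ones[:-1], -2.0 * ones, ones[:-1]], [-1, 0, 1]) \
    + (v / dx) * sp.diags([ones[:-1], -ones], [-1, 0])
Id = sp.identity(I_pts)
Z = sp.csr_matrix((I_pts, I_pts))
L1, L2, G = lam1 * Id, lam2 * Id, g * Id

# block operators of (4.20)
A1 = sp.bmat([[A, None, None, None],
              [None, A, None, None],
              [None, None, Z, None],
              [None, None, None, Z]])
A2 = sp.bmat([[-L1, None, None, None],
              [L1, -L2, None, None],
              [None, None, -L1, None],
              [None, None, L1, -L2]])
A3 = sp.bmat([[-G, None, G, None],
              [None, -G, None, G],
              [G, None, -G, None],
              [None, G, None, -G]])
A2_tilde = (A2 + A3).tocsr()                # operator paired with A1 in the iteration
print(A1.shape, A2_tilde.shape, A1.nnz, A2_tilde.nnz)
```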

Figure 10 presents the numerical errors between the exact and the numerical solution. Here we obtain optimal results for the one-side iterative scheme on operator B, meaning we iterate with respect to B and use A on the right-hand side.

Remark 4.3. With all iterative schemes, we can reach results faster than with the standard schemes. With 4-5 iterative steps, we obtain more accurate results than with the expensive standard schemes. With the one-side iterative schemes, we reach the best convergence results.

5. Conclusions and Discussions

In this work, we have presented a very economical and practical method based on successive approximations. Here, the idea of decoupling the expensive computation of the matrix exponential via iterative splitting schemes has the benefit of less computational time. While the error analysis presents stable methods and higher-order schemes, the applications show the speedup of the iterative schemes. In the future, we will concentrate on matrix-dependent schemes based on iterative splitting algorithms.