#### Abstract

Schwarz waveform relaxation (SWR) is a class of domain decomposition methods suited to solving time-dependent PDEs in parallel. The number of subdomains has a significant influence on the convergence rate. For the representative nonlinear problem considered here, the convergence behavior of the algorithm in the two-subdomain case is well understood. For the multisubdomain case, however, the existing results can only predict convergence under a restrictive condition, leaving a gap between the two settings. In this paper, we aim to close this gap. Precisely, for a specified number of subdomains, we show that there exists a threshold quantity such that convergence of the algorithm on unbounded time domains is guaranteed whenever the relevant bound on the nonlinearity stays below this threshold. The threshold depends on the number of subdomains, and we present a concise formula to calculate it. We also show that the analysis is useful for studying more complicated PDEs. Numerical results are provided to support the theoretical predictions.

#### 1. Introduction

Let the bounded spatial domain of interest be given. We are interested in the Schwarz waveform relaxation (SWR) algorithm applied to computing the solution of the initial-boundary value problem (IBVP) (1), where the source term denotes a function which in general depends nonlinearly on the solution. This is a fundamental model for analyzing the convergence properties of the SWR algorithm, and some important results are revisited as follows.

Gander [1] studied the SWR algorithm on bounded and unbounded time intervals in the two-subdomain case. In particular, the author proved linear convergence of the algorithm on unbounded time intervals, provided the derivative of the nonlinearity is bounded from above by a suitable constant (other related or similar studies can be found in [2–4]). In the case of more than two subdomains, Gander and Stuart [5] analyzed the convergence behavior of the SWR algorithm for the linear heat equation on unbounded time intervals. It was shown that the convergence rate depends on the number of subdomains and deteriorates as this number increases. For IBVP (1) under additional assumptions, the work in [6] can be generalized to obtain a similar convergence result in the multisubdomain case. In summary, in the multisubdomain case, the convergence behavior of the SWR algorithm for (1) on unbounded time domains is well understood only under these restrictive assumptions; beyond them, no results are available up to now.

In this paper, we aim to close this gap. After a brief description of the multisubdomain SWR algorithm in Section 2, we perform a convergence analysis for the multisubdomain SWR algorithm in Section 3. For a given number of subdomains, we present a concise formula to calculate the allowed upper bound on the nonlinearity which guarantees convergence of the algorithm on unbounded time domains. We show that the analysis for (1) can also be used to study the multisubdomain domain decomposition methods [7, 8] for more complicated PDEs. Section 4 provides numerical results to support the theoretical predictions, and we finish this paper with some concluding remarks in Section 5.

#### 2. The Schwarz Waveform Relaxation Algorithm

For the initial-boundary value problem (IBVP) (1), we decompose the whole space domain into overlapping subdomains. We assume that adjacent subdomains overlap but subdomains which are not adjacent do not, as shown in Figure 1. Then, the multisubdomain SWR algorithm for IBVP (1) can be written as (2), where the Dirichlet interface data for each subdomain are taken from the neighboring subdomains at the previous iteration. Let the error functions denote the differences between the iterates and the exact solution. Then, we arrive at the error equations (3), where we have used the remainder term in Taylor's expansion for some intermediate function which lies between the iterate and the exact solution. In the remainder of this section we fix the corresponding notation.
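To make the waveform-relaxation structure concrete, the following self-contained sketch runs a four-subdomain parallel SWR iteration for the 1D heat equation with homogeneous Dirichlet boundary conditions (the linear special case of (1) with a vanishing nonlinearity). The grid sizes, the subdomain layout, and the explicit-Euler time discretization are illustrative assumptions, not prescribed by the paper.

```python
import math

# Illustrative sketch of multisubdomain SWR for u_t = u_xx on (0, 1) with
# u(0, t) = u(1, t) = 0 and u(x, 0) = sin(pi x).  Explicit Euler in time,
# centered differences in space; all sizes below are assumptions.
M, T = 40, 0.05                     # interior grid points, time horizon
dx = 1.0 / (M + 1)
dt = 0.4 * dx * dx                  # respects the stability bound dt <= dx^2/2
steps = int(T / dt)

def heat_solve(u0, left_trace, right_trace):
    """March the heat equation with time-dependent Dirichlet traces."""
    u = list(u0)
    hist = [list(u)]
    for n in range(steps):
        u = [u[j] + dt / dx ** 2 *
             ((left_trace[n] if j == 0 else u[j - 1]) - 2 * u[j] +
              (right_trace[n] if j == len(u) - 1 else u[j + 1]))
             for j in range(len(u))]
        hist.append(list(u))
    return hist                     # hist[n][j]: time step n, local node j

# Four equally sized subdomains with equal overlaps (cf. Hypothesis 1);
# subs[i] = (a, b) are the global interior nodes owned by subdomain i.
subs = [(1, 12), (9, 22), (19, 32), (29, 40)]
N = len(subs)
u0_global = [math.sin(math.pi * j * dx) for j in range(1, M + 1)]
zero = [0.0] * (steps + 1)
mono = heat_solve(u0_global, zero, zero)        # single-domain reference

def trace(hist, a, gnode):
    """Time history of a subdomain solution at global node gnode."""
    return [hist[n][gnode - a] for n in range(steps + 1)]

traces_L = [zero] * N               # zero initial guess on the interfaces
traces_R = [zero] * N
errs = []
for k in range(6):                  # SWR sweeps: every subdomain solves over
    hists = []                      # the whole time window in parallel
    for i, (a, b) in enumerate(subs):
        hists.append(heat_solve(u0_global[a - 1:b], traces_L[i], traces_R[i]))
    for i, (a, b) in enumerate(subs):
        if i > 0:
            traces_L[i] = trace(hists[i - 1], subs[i - 1][0], a - 1)
        if i < N - 1:
            traces_R[i] = trace(hists[i + 1], subs[i + 1][0], b + 1)
    errs.append(max(abs(hists[i][n][0] - mono[n][subs[i][0] - 1])
                    for i in range(1, N) for n in range(steps + 1)))
```

Each sweep freezes the interface traces at the previous iterate and solves all subdomains over the whole time window, which is exactly the parallel structure of (2); the recorded interface errors `errs` decay from sweep to sweep.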

*Hypothesis 1.* Assume that (1) the number of subdomains is an even integer; (2) the subdomains which are not adjacent do not overlap; (3) all the overlap sizes are equal; (4) all the lengths of the subdomains are equal.

Under this hypothesis, we have

The following two lemmas are useful to analyze the convergence properties of the SWR algorithm in the multisubdomain case.

Lemma 1 (see [1]). *Assume that the function satisfies the differential inequalities stated above, where the coefficient function is bounded from below by some constant and the stated sign conditions hold. Then, the comparison bound holds for all arguments.*

Lemma 2. *Assume that the function in (1) satisfies the stated bounds. Then, the error functions in (3) decay on the interfaces at the rates given by (6) and (7), where the quantities appearing there are defined below.*

*Proof.* Let the comparison function be the solution of the auxiliary differential equation stated above. The solution can be written down in closed form and is a time-independent function of the space variable. Under the assumptions of the lemma, the comparison function dominates the boundary and initial data of the error; therefore the difference of the two functions satisfies (11). From (11) we deduce that this difference satisfies the differential inequalities of Lemma 1. Now, by using Lemma 1, the error is bounded by the comparison function. A similar argument holds for the sum of the two functions, and thus the two-sided bound follows. It is easy to see that this inequality holds on all the subdomains and for any iteration index. Substituting these two inequalities back into the right-hand side of (14) and then evaluating (14) at one family of interface points leads to inequality (6); evaluating (14) at the other family leads to (7).

#### 3. Convergence Analysis

Based on Hypothesis 1 and Lemmas 1 and 2, we now perform a convergence analysis for the SWR algorithm (2) in the multisubdomain case. We then generalize the analysis to more general nonlinear problems. The following notation is used throughout this section:

##### 3.1. Convergence Analysis for (2)

From (6) and (7), we see that the error at a given interface depends on the errors at several other interfaces; this observation leads to two independent linear systems of inequalities, where each inequality should be interpreted componentwise. The error vectors and the iteration matrices are slightly different depending on whether the number of subdomains is even or odd. Under Hypothesis 1 (i.e., for an even number of subdomains), they are defined by (16b); for an odd number of subdomains, these vectors and matrices can be defined similarly.

To study the decay of the error vectors, we focus on the spectral norms of the iteration matrices. To this end, we first recall the definition of the spectral norm; namely,

Lemma 3. *With the argument defined by (16a), the spectral norms of the two iteration matrices satisfy the bound stated below.*

*Proof.* We prove the bound for the first matrix; the bound for the second matrix can be obtained similarly. Clearly, the matrix can be partitioned into the sum of a tridiagonal matrix and a correction matrix which has only a few nonzero entries, all equal to 1. From Lemma 3.8 given in [5], we know the eigenvalues of the tridiagonal part in closed form. The spectral norm can then be estimated by combining this eigenvalue formula with the norm of the correction.
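The proof relies on the closed-form eigenvalues of a symmetric tridiagonal Toeplitz matrix (Lemma 3.8 of [5]); for tridiag(b, a, b) of size n these are the standard values λ_k = a + 2b cos(kπ/(n+1)), k = 1, …, n. The sketch below uses illustrative entries a = 2, b = −1 (not the actual iteration matrices of the paper) and estimates the spectral norm by power iteration, which is valid here because the matrix is symmetric with a positive dominant eigenvalue.

```python
import math

def matvec(a, b, v):
    """y = T v for the symmetric tridiagonal Toeplitz matrix tridiag(b, a, b)."""
    n = len(v)
    return [(b * v[j - 1] if j > 0 else 0.0) + a * v[j] +
            (b * v[j + 1] if j < n - 1 else 0.0) for j in range(n)]

def spectral_norm(a, b, n, iters=1000):
    """Power iteration with max-norm normalization; it returns the spectral
    norm here because T is symmetric and its dominant eigenvalue is positive."""
    v = [math.sin(j + 1.0) for j in range(n)]    # generic start vector
    lam = 0.0
    for _ in range(iters):
        w = matvec(a, b, v)
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam
```

For a = 2, b = −1 the formula predicts a largest eigenvalue of 2 + 2 cos(π/(n+1)), and the power-iteration estimate matches it to high accuracy.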

Clearly, to prove that the error vectors vanish as the iteration index tends to infinity, it suffices to prove that the spectral norms of the iteration matrices are less than one. However, as we will show a little later, this in general does not hold for all choices of the parameters. With the auxiliary quantities defined above, the arguments defined by (16a) can be regarded as functions of a single variable. Moreover, Hypothesis 1 implies a relation between the geometric parameters, and therefore the quantities defined by (16a) satisfy the bounds stated above. In one sign regime of the parameter, the two functions involve the hyperbolic sine; since the hyperbolic-sine function is increasing, it is then easy to conclude that the spectral norm is less than one. In the other regime, the functions involve the sine function instead, for which the corresponding monotonicity fails; this, together with (24), shows that in this regime it is not obvious that the spectral norm is less than one.

Lemma 4. *Under Hypothesis 1, for a given number of subdomains the function defined by (16a) is increasing on the interval of interest and attains the stated boundary values.*

*Proof.* The proof is divided into two parts.

*Part I* (the boundary values). The first boundary value follows directly from the definition. To prove the second, we introduce the auxiliary function defined above, for which the stated identities hold. Moreover, a routine calculation yields the derivative formulas, and from the sign conditions on the parameters we obtain the stated estimates, where in the last inequality we have used elementary bounds for the trigonometric functions.

From (32) and (33), the derivative does not change sign; therefore the auxiliary function has no local minimum on the interval of interest. This, together with (28)–(30), yields the second boundary value.

*Part II* (monotonicity). From the second equality in (23), the function can be represented as a product whose factors are built from the quantities defined by (22). For the relevant pair of constants, it is easy to prove the monotonicity of each factor. Since all the factors involved are positive, we finally conclude that the function is increasing on the interval of interest.

Now, we are in a position to present one of the main results of this section.

Theorem 5. *Under Hypothesis 1, assume that the derivative of the function in (1) is bounded from above by the threshold value, where for a specified number of subdomains this threshold is defined as the unique root of the equation built from (16a). Then, the multisubdomain SWR algorithm (2) is convergent. In particular, the error functions can be bounded in the infinity norm in time and space as in (36), with the constants given below.*

*Proof.* From (14), the recursive bounds hold for all iteration indices. With the constant given above, this yields a bound in terms of powers of the iteration matrices. Since the infinity norm is bounded by the spectral norm, we can apply Lemma 3, which gives (36). Finally, by using Lemma 4, the spectral norms are less than one under the assumption of the theorem, and convergence of the SWR algorithm in the multisubdomain case follows.

*Remark 6.* In the two-subdomain case, the quantities in Theorem 5 simplify considerably. Since the relevant function is increasing in its argument, the resulting threshold reduces to the bound obtained in [1]; therefore, Theorem 5 actually includes Theorem 4.1 given by Gander [1].

##### 3.2. Application to More General Nonlinear Problems

We now consider the IBVP (41), where the coefficient functions satisfy (42). We could also allow these functions to depend on further variables, but this makes only a trivial difference. Here, we are interested in applying the domain decomposition strategy to (41) from time step to time step. Assume that (41) is discretized by the backward Euler method (43), where the step size is fixed. Other time integrators, such as the trapezoidal rule or Runge–Kutta methods, could also be considered, but the analysis is similar. Now, with the solution known from the previous time step, we focus on calculating the solution at the new time level through the domain decomposition method (44) of [7, 8]. Upon convergence of the iteration, we obtain the solution at the new time level.
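To illustrate an iteration of this time-step-wise type in the simplest possible setting, the sketch below performs one backward Euler step for a 1D reaction–diffusion problem and runs a parallel two-subdomain Schwarz iteration on the resulting spatial problem. Two simplifications are mine, not the paper's: the nonlinearity f(u) = u − u³ is an illustrative choice, and it is treated explicitly (frozen at the previous time level), so each subdomain solve is a linear tridiagonal system, handled by the Thomas algorithm.

```python
import math

def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal system by the Thomas algorithm (no pivoting;
    fine here because the matrix is strictly diagonally dominant)."""
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = upper[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * cp[i - 1]
        cp[i] = upper[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - lower[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# One backward Euler step for u_t = u_xx + f(u) on (0, 1), f frozen at the
# previous time level (an IMEX simplification of the paper's scheme).
M, dt = 24, 1e-3
dx = 1.0 / (M + 1)
f = lambda u: u - u ** 3                     # illustrative nonlinearity
u_prev = [math.sin(math.pi * j * dx) for j in range(1, M + 1)]
alpha = 1.0 / dt + 2.0 / dx ** 2
rhs_g = [u_prev[j] / dt + f(u_prev[j]) for j in range(M)]

def solve_range(a, b, gl, gr):
    """Subdomain solve on global interior nodes a..b with Dirichlet data
    gl, gr at the ghost nodes a-1 and b+1."""
    n = b - a + 1
    r = [rhs_g[j - 1] for j in range(a, b + 1)]
    r[0] += gl / dx ** 2
    r[-1] += gr / dx ** 2
    return thomas([-1.0 / dx ** 2] * n, [alpha] * n, [-1.0 / dx ** 2] * n, r)

ref = solve_range(1, M, 0.0, 0.0)            # monolithic reference step

# Parallel Schwarz iteration on two overlapping subdomains, nodes 1..14
# and 11..24; the interface (ghost) nodes are 15 and 10.
g15, g10 = 0.0, 0.0                          # zero initial interface guess
errs = []
for k in range(10):
    u1 = solve_range(1, 14, 0.0, g15)
    u2 = solve_range(11, 24, g10, 0.0)
    g15, g10 = u2[15 - 11], u1[10 - 1]       # exchange interface values
    errs.append(max(abs(u1[j - 1] - ref[j - 1]) for j in range(1, 15)))
```

Because the implicit term carries the large factor 1/dt, errors decay sharply across the overlap and the Schwarz iterates converge to the monolithic solution within a few sweeps, consistent with the step-size-dependent contraction discussed below.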

To analyze the convergence of the sequence , we need the following lemma.

Lemma 7 (see [9]). *Let a linear elliptic operator satisfying the stated sign condition be given in a bounded domain, and suppose that the corresponding differential inequalities hold. Then, the stated comparison bound holds.*

Define the error functions as in (45). Then, we have the error relation stated above, where for the second equality we have used the mean value theorem for integrals with an intermediate value lying between the two arguments. Subtracting the exact discrete solution from (44) and then using this relation, we obtain the error equations. Next, let a comparison function be defined as above. Then, from (42) the comparison function dominates the data of the error equations. Now, by using Lemma 7 and a similar procedure as in the proof of Lemma 2, the interface errors decay at the stated rate, provided the step size is sufficiently small, where the contraction factor is defined as above. Here, in the degenerate cases the corresponding quantity should be understood in the limiting sense given above.

With the quantities defined above, and following the analysis in Section 3.1, Theorem 8 can be derived directly.

Theorem 8. *Let the coefficient functions satisfy (42), and let the step size be smaller than the threshold determined by the unique root of the equation built from the quantity defined by (16a). Then, the iterations (44) are convergent, and the error functions defined by (45) uniformly decay to zero at a fixed rate.*

For specified problem and discretization parameters, Theorem 8 can be used to select a safe step size, and therefore it is instructive for designing an *adaptive-step-size* computation.
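As a toy illustration of such an adaptive-step-size strategy, suppose the contraction factor of the iteration is available, or can be estimated, as an increasing function of the step size; the concrete model `rho` below is a made-up placeholder, not the formula of Theorem 8. A safe step size can then be found by repeated halving:

```python
def safe_dt(rho, dt0, rho_max=0.9):
    """Halve the step size until the contraction factor rho(dt), assumed
    increasing in dt, drops below the target rho_max."""
    dt = dt0
    while rho(dt) >= rho_max:
        dt /= 2.0
    return dt

# made-up contraction model, monotone in dt (assumption for illustration)
rho = lambda dt: 2.0 * dt ** 0.5
dt = safe_dt(rho, 1.0)
```

Starting from dt0 = 1, the loop halves the step size until the modeled contraction factor drops below the target, after which the accepted step could be reused or cautiously enlarged at later time levels.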

*Remark 9 (monotonicity of the contraction factor).* At the end of this section, we note that the contraction factor is an increasing function of two of its arguments and a decreasing function of the third. We show one of the increasing monotonicity properties; the others can be proved similarly. Indeed, it holds that