Abstract

We investigate properties of algorithms that are used to solve coupled evolutionary partial differential equations posed on neighboring, nonoverlapping domains, where the solutions are coupled by continuity of state and normal flux through a shared boundary. The algorithms considered are based on the widely used approach of iteratively exchanging boundary condition data on the shared boundary at each time step. There exists a significant and sophisticated numerical analysis of such methods. However, computations for practical applications are often carried out under conditions for which it is unclear whether rigorous results apply, while relatively few iterations are used per time step. To examine this situation, we derive exact matrix expressions for the propagation of the error due to incomplete iteration that can be readily evaluated for specific discretization parameters. Using these formulas, we show that several tenets of the practitioner's conventional wisdom are not universally valid.

1. Introduction

The class of multiphysics problems in which one physical process operates in one domain while a second physical process operates in a neighboring domain, and the solutions in the component subdomains are coupled by continuity of state and normal flux through a common boundary, is central to a number of applications [1]. Examples include conjugate heat transfer between a fluid and a solid [2–5], Stokes-Darcy flow in karst aquifers [6] and in catalytic converters [7], modeling of the core and edge plasma flows in a tokamak reactor [8, 9], and flow in heterogeneous porous media [10–13].

In applications, the processes in the component subdomains are often themselves represented by complex, multiscale, and multiphysics models, and the component models are solved with different discretization methods utilizing significantly different scales. Such coupled problems present enormous challenges for efficient high performance computing [1]. There are very strong incentives to use existing "legacy" codes for the component models and to treat component physics solvers as "black boxes." For these reasons, the most commonly encountered solution strategy is a simple iterative approach [1–4, 14–17]. In this approach, the models in each of the component subdomains are associated with one of the two boundary coupling conditions and subsequently solved independently except for the coupling data passed through the boundary. The coupling data for the boundary conditions of one component model are provided by the solution of the other component model from the previous iteration. In a time dependent problem, this iteration is carried out at each time step.

This type of coupled physics problem has been extensively studied by the numerical analysis community, with the goals of deriving accurate, stable numerical methods and efficient, accurate coupling algorithms, and of carrying out rigorous a priori analyses, for example, in [10–15, 17, 18], as well as a posteriori analyses [2–4, 19]. However, in practice, as mentioned, the component physics is usually complex, and there are large differences in the discretization strategies employed in the component subdomains, necessitating significant processing of transferred information. Thus, the application of current rigorous mathematical methods and analysis is problematic if not impossible. This is partially the reason that there is a very significant gulf between the kinds of practices carried out in high performance computational science and the "best practices" determined by the mathematical community [3].

In addition, the computational strategies employed in practice are often rationalized by a body of conventional wisdom which asserts that (1) stability is equivalent to accuracy and (2) the use of unconditionally stable implicit solvers for the component physics generally stabilizes the entire coupled simulation as long as the component solutions are stable. This is often justified by some sort of (von Neumann) linear stability analysis applied to the simplified case of two coupled linear parabolic problems. Based on these ideas, simulations are deemed acceptable provided they do not exhibit obvious instabilities such as unbounded behavior or rapid oscillation. Moreover, due to prohibitive computational cost, such conclusions are often based on the single computation at hand, rather than on an exhaustive study.

This paper attempts to address the difficulties faced in extending rigorous convergence analysis to the complex models and discretizations that occur in practice and to illustrate issues arising from the conventional wisdom. Instead of focusing on deriving conditions that guarantee good behavior, as is usual for a standard convergence analysis, we adopt a different approach and develop a rigorous computable error representation that can be evaluated for any given choice of discretization parameters, thus allowing the conventional wisdom to be verified or refuted. We consider the canonical problem of two coupled linear parabolic equations and formulate the iterative solution method as a fixed point iteration for a block system on the discrete level. We then derive an exact formula for the error in the iterated solution. This formula is related to the Neumann series for the inverse of the full system matrix. The argument has several virtues: it is elementary, it subsumes any sort of ad hoc linear stability analysis, and it is general in the sense that it holds for a variety of discretization methods and a range of scales for the component physics. Importantly, it allows for easy evaluation of various discretization parameter choices, for example, step size, space mesh, and number of iterations per time step.

We then present a detailed study of the canonical problem of two coupled linear parabolic equations that amply demonstrates shortcomings of the conventional wisdom. First, we dispel the notion that stability implies accuracy. In particular, we demonstrate that a divergent iteration can be part of a stable algorithm if the number of iterations used is low, even though the resulting approximation is inaccurate. In this case, the user might not be aware that the iteration is divergent if they only consider whether or not obvious instability occurs. This case is particularly interesting since a serious problem is being masked by seemingly "reasonable" results. Second, we demonstrate that there is no guarantee that an "unconditionally stable" time integration scheme like backward Euler remains unconditionally stable if the system is not solved exactly at each time step. We explore cases where using a limited number of iterations leads to a conditional stability that depends on the size of the time step. The time step values that yield stability can occupy a range with both a minimum and a maximum, rather than just a maximum.

The remainder of this paper is organized as follows. Section 2 introduces the problem and the notation associated with discretization and iteration. Section 3 derives the primary results of the paper regarding the stability of the iterative solution. Section 4 provides numerical examples. Section 5 addresses the multirate case, in which the time step in one component is an integer multiple of the time step in the other component. Multirate numerical examples are included. A brief conclusion is given in Section 6.

2. Problem Formulation and Discretization

We consider a system posed on a domain $\Omega$ consisting of two neighboring, nonoverlapping, convex, and polygonal subdomains $\Omega_1$ and $\Omega_2$ in 1, 2, or 3 spatial dimensions that share a common polygonal interface boundary $\Gamma = \partial\Omega_1 \cap \partial\Omega_2$ (see [1, 2, 8, 9] for applications). We illustrate in Figure 1.

The general form of the PDE system is
$$\dot{u}_1 + L_1 u_1 = f_1 \ \text{ in } \Omega_1, \qquad \dot{u}_2 + L_2 u_2 = f_2 \ \text{ in } \Omega_2,$$
$$u_1 = u_2 \ \text{ on } \Gamma, \qquad \vec{n} \cdot a_1 \nabla u_1 = \vec{n} \cdot a_2 \nabla u_2 \ \text{ on } \Gamma, \tag{1}$$
together with the following additional boundary and initial conditions: (i) boundary conditions for $u_1$ on $\partial\Omega_1 \setminus \Gamma$ and initial conditions for $u_1$ on $\Omega_1$; (ii) boundary conditions for $u_2$ on $\partial\Omega_2 \setminus \Gamma$ and initial conditions for $u_2$ on $\Omega_2$, where $L_1$ and $L_2$ are linear, coercive elliptic operators with diffusion coefficients $a_1$ and $a_2$, $f_1$, $f_2$ are given functions in $\Omega_1$, $\Omega_2$, and $\vec{n}$ is a unit vector normal to $\Gamma$ and pointing from $\Omega_1$ to $\Omega_2$.

We next introduce notation for the discretized problem. The analysis is based on the backward Euler method in time but can accommodate various spatial discretization schemes [3]. For the numerical results included later in the paper, we use a finite volume method in space [13, 18, 19], and the details of that discretization are given in Appendix A. In order to complete the boundary data for each subdomain, we adopt the common strategy of associating each subdomain problem with one of the two coupling conditions: the state of the solution in one subdomain determines Dirichlet boundary data on $\Gamma$ for the neighboring subdomain, while the flux of the neighboring solution provides Neumann boundary data on $\Gamma$ in return; for example, see [2–4, 8–12, 14, 15, 17, 19]. Thus, each problem is provided with a full set of boundary data, but the subdomains are not treated symmetrically. By convention, we assume that component 2 provides state data for component 1, and component 1 provides flux data for component 2. This leads to the system of discrete equations:
$$A_1 u_1^n = P_{21} u_2^n + b_1^n, \qquad A_2 u_2^n = P_{12} u_1^n + b_2^n, \tag{2}$$
where the superscript $n$ is the time step index. The matrices $A_1$ and $A_2$ are the discrete versions of the operators $\partial/\partial t + L_1$ and $\partial/\partial t + L_2$, except that the $u^{n-1}/\Delta t$ term arising from the backward Euler discretization has been moved to the right hand side. The vectors $b_1^n$ and $b_2^n$ are the discrete versions of the source terms $f_1$ and $f_2$ and also take into account the known boundary data on $\partial\Omega_1 \setminus \Gamma$ and $\partial\Omega_2 \setminus \Gamma$, together with the $u^{n-1}/\Delta t$ terms. $P_{12}$ and $P_{21}$ are the projections by which the solution in one subdomain is used to provide boundary data on $\Gamma$ for the other subdomain. The subscripts "12" and "21" indicate information passing from "1 to 2" and from "2 to 1." We refer to $u_1^n$ and $u_2^n$, which solve (2), as the implicitly coupled solution. We emphasize that we analyze this method because it is one of the most common approaches used in practice.

Note that the form of the operators $P_{12}$ and $P_{21}$ depends on the choice of discretization. Some discretizations, for example, finite volume, require forming these operators using some combination of averaging, extrapolation, and interpolation [1]. Other discretizations provide natural definitions. For example, in the case of the mortar method [18, 19], the coupling between the subdomains has a formal structure based on Lagrange multiplier variables on the interface. In this case, $P_{12}$ and $P_{21}$ correspond to the off-diagonal blocks of the Schur complement [20] that are formed by eliminating the Lagrange multiplier variables.

In all of the above cases, we now write (2) as a single set of linear equations. Namely,
$$A u^n = b^n, \tag{3}$$
where
$$A = \begin{pmatrix} A_1 & -P_{21} \\ -P_{12} & A_2 \end{pmatrix}, \qquad u^n = \begin{pmatrix} u_1^n \\ u_2^n \end{pmatrix}, \qquad b^n = \begin{pmatrix} b_1^n \\ b_2^n \end{pmatrix}. \tag{4}$$
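To make the structure of (2)-(4) concrete, the following sketch assembles the blocks for the one dimensional model problem of Section 4. The function name build_blocks, the subdomain geometry, the sign conventions, and the scalings are our own illustrative assumptions rather than the implementation used for the numerical results.

```python
import numpy as np

def build_blocks(n=16, dt=1e-3):
    """Hypothetical assembly of A1, A2, P12, P21 in (2)-(4) for the 1D model
    problem of Section 4 (unit diffusivity, n cells per subdomain). The sign
    conventions and scalings here are illustrative assumptions."""
    h = 0.5 / n   # assumes each subdomain has length 0.5
    # Both subdomains see a Neumann condition on their left face and a
    # Dirichlet condition on their right face (outer boundary or interface).
    K = np.zeros((n, n))
    for j in range(n):
        K[j, j] = 2.0 / h**2
        if j > 0:
            K[j, j - 1] = -1.0 / h**2
        if j < n - 1:
            K[j, j + 1] = -1.0 / h**2
    K[0, 0] = 1.0 / h**2      # given-flux (Neumann) left face
    K[-1, -1] = 3.0 / h**2    # Dirichlet value on the right face, half-cell spacing
    A1 = np.eye(n) / dt + K   # backward Euler: I/dt + discrete diffusion
    A2 = A1.copy()            # identical structure on the second subdomain

    # P21: component 2 state -> Dirichlet value for component 1, by linear
    # extrapolation of the two cell values nearest the interface (Figure 2).
    P21 = np.zeros((n, n))
    P21[n - 1, 0], P21[n - 1, 1] = 3.0 / h**2, -1.0 / h**2  # (2/h^2)*(1.5, -0.5)

    # P12: component 1 flux -> Neumann value for component 2, by linear
    # extrapolation of the two face fluxes nearest the interface.
    P12 = np.zeros((n, n))
    P12[0, n - 3:] = np.array([1.0, -3.0, 2.0]) / h**2

    A = np.block([[A1, -P21], [-P12, A2]])  # the coupled matrix in (4)
    return A1, A2, P12, P21, A

A1, A2, P12, P21, A = build_blocks()
print("coupled system size:", A.shape)
```

Only the interface-adjacent rows differ from an uncoupled pair of backward Euler systems, reflecting the fact that the coupling acts on a manifold of lower dimension.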

2.1. Block Iteration

As mentioned, the implicitly coupled system (3)-(4) is not formed or solved exactly in practice. The common approach is an iterative block solution method that involves a sequence of solutions of each component problem with alternating exchange of coupling boundary data; for example, see [2–4, 8–12, 14, 15, 17, 19]. This is easily described using the concept of matrix splitting [20–22]. We define a matrix splitting so that
$$A = M - N. \tag{5}$$
Starting with some initial guess $u^{n,0}$, the following equation defines a fixed point iteration which may converge to $u^n$. For the iterates, we use a double superscript, $n$ for the time step and $i$ for the iteration index. We use $k$ for the number of iterations to be performed at each time step:
$$M u^{n,i+1} = N u^{n,i} + b^n, \qquad i = 0, 1, \dots, k-1. \tag{6}$$
Note that multiplication by $N$ represents the exchange of information between the components, and the subsequent inversion of $M$ is the solution of the individual components. In the case of (3), the splitting is chosen to separate the solution of the problems on the subdomains [20–22]. This is motivated by the fact that the coupling of the subdomains occurs on a manifold of lower dimension. This splitting can be accomplished in a "Gauss-Jacobi" sense:
$$M = \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix}, \qquad N = \begin{pmatrix} 0 & P_{21} \\ P_{12} & 0 \end{pmatrix}, \tag{7}$$
or in a "Gauss-Seidel" sense:
$$M = \begin{pmatrix} A_1 & 0 \\ -P_{12} & A_2 \end{pmatrix}, \qquad N = \begin{pmatrix} 0 & P_{21} \\ 0 & 0 \end{pmatrix}. \tag{8}$$
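A minimal sketch of the two splittings and the fixed point iteration (6), using random matrices with dominant diagonal blocks as stand-ins for the component systems (the sizes and entries are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Random stand-ins for the component systems and transfer operators.
A1 = np.eye(n) * 4 + rng.normal(scale=0.1, size=(n, n))
A2 = np.eye(n) * 4 + rng.normal(scale=0.1, size=(n, n))
P21 = rng.normal(scale=0.2, size=(n, n))  # data passed from component 2 to 1
P12 = rng.normal(scale=0.2, size=(n, n))  # data passed from component 1 to 2
A = np.block([[A1, -P21], [-P12, A2]])
b = rng.normal(size=2 * n)
Z = np.zeros((n, n))

# "Gauss-Jacobi" splitting (7): M holds the component solves, N the exchange.
M_j = np.block([[A1, Z], [Z, A2]])
N_j = M_j - A                             # = [[0, P21], [P12, 0]]
# "Gauss-Seidel" splitting (8): component 2 sees the new component 1 data.
M_s = np.block([[A1, Z], [-P12, A2]])
N_s = M_s - A                             # = [[0, P21], [0, 0]]

def iterate(M, N, b, k, u0):
    u = u0.copy()
    for _ in range(k):                    # fixed point iteration (6)
        u = np.linalg.solve(M, N @ u + b)
    return u

u_exact = np.linalg.solve(A, b)
for name, M, N in [("Gauss-Jacobi", M_j, N_j), ("Gauss-Seidel", M_s, N_s)]:
    rho = max(abs(np.linalg.eigvals(np.linalg.solve(M, N))))
    err = np.linalg.norm(iterate(M, N, b, 25, np.zeros(2 * n)) - u_exact)
    print(f"{name}: rho(M^-1 N) = {rho:.3f}, error after 25 iterations = {err:.2e}")
```

Inverting $M$ requires only independent solves of the component blocks (plus, in the Gauss-Seidel case, a substitution), which is precisely why the splitting respects "black box" component solvers.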

If this iteration converges, it converges to the implicitly coupled solution $u^n$, which is the unique fixed point of the iteration. By subtracting the fixed point identity $M u^n = N u^n + b^n$, which follows from (3) and (5), from (6), we get
$$u^{n,i+1} - u^n = M^{-1} N \left( u^{n,i} - u^n \right), \tag{9}$$
where $u^{n,i}$ denotes the $i$th iterate. This leads to the well-known result [20, 21]:
$$u^{n,i} - u^n = \left( M^{-1} N \right)^i \left( u^{n,0} - u^n \right). \tag{10}$$
The iteration converges if the spectral radius of $M^{-1}N$ is less than one, and it diverges if the spectral radius is greater than one [20–22]. If the iteration converges, then we can get as close as desired to the implicitly coupled solution by iterating. If for any reason we are unable to iterate until convergence at each time step, the situation is considerably more complicated. If $u^n$ cannot be obtained at a given time step, then it cannot be used as the initial condition for the next time step, and the iterates may wander away from the implicitly coupled solutions. To investigate how the iterates wander from their implicitly coupled counterparts, we must account for both the error from incomplete iteration at every time step and the error associated with the inherited initial value at every time step.

3. Analysis of Stability of the Iterative Solution

3.1. Stability in Time for a Single Iteration

In this section, we derive an error formula for the case of a single iteration at each time step. The motivation is twofold: this case is often encountered in practice, and the relatively simple result shows how the general result proceeds. The error is defined to be the difference between the computed solution and the implicitly coupled solution at each time step, $e^n = U^n - u^n$, where $U^n$ denotes the computed solution. To separate the dependence on the previous time step, we write $b^n = c^n + D u^{n-1}$, where $D = I/\Delta t$ contains the terms moved to the right hand side by the backward Euler discretization and $c^n$ collects the source terms and known outer boundary data. Recall that the implicitly coupled solution satisfies
$$M u^n = N u^n + D u^{n-1} + c^n. \tag{11}$$
For the case of a single iteration per time step, we have $k = 1$, and we can suppress the iteration index. The iteration is simply
$$M U^n = N U^{n-1} + D U^{n-1} + c^n. \tag{12}$$
If we define $G = M^{-1}N$ and $H = M^{-1}D$, then (12) becomes
$$U^n = (G + H)\, U^{n-1} + M^{-1} c^n. \tag{13}$$
Using (11), we obtain
$$u^n = G u^n + H u^{n-1} + M^{-1} c^n. \tag{14}$$

Rearranging (14) gives
$$M^{-1} c^n = (I - G)\, u^n - H u^{n-1}. \tag{15}$$
Substituting $U^n = u^n + e^n$ and $U^{n-1} = u^{n-1} + e^{n-1}$ into (13) gives
$$u^n + e^n = (G + H)\left( u^{n-1} + e^{n-1} \right) + M^{-1} c^n. \tag{16}$$
Substituting (15) into (16) gives
$$e^n = (G + H)\, e^{n-1} + G \left( u^{n-1} - u^n \right). \tag{17}$$

In general, the error at time step $n$ is
$$e^n = (G + H)^n e^0 + \sum_{m=1}^{n} (G + H)^{\,n-m}\, G \left( u^{m-1} - u^m \right). \tag{18}$$
While the form of (18) is not as concise as (10), it is clear that the stability in time depends on the spectral radius of $G + H$.
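The recursion (17) is easy to check numerically. The following sketch marches a toy backward Euler system, with random matrices standing in for the coupled blocks (our assumption) and $D = I/\Delta t$, and confirms at every step that the directly computed error $U^n - u^n$ agrees with the recursion:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, steps = 6, 0.05, 20
L = np.eye(n) * 3 + rng.normal(scale=0.2, size=(n, n))  # stand-in coupled operator
D = np.eye(n) / dt
A = D + L                                 # backward Euler system matrix
M = np.diag(np.diag(A))                   # any convergent splitting will do
N = M - A
G = np.linalg.solve(M, N)
H = np.linalg.solve(M, D)
c = rng.normal(size=n)                    # time-independent source, for brevity

u = rng.normal(size=n)                    # implicitly coupled solution u^n
U = u.copy()                              # one-iteration-per-step solution U^n
e = np.zeros(n)                           # error predicted by the recursion (17)
for _ in range(steps):
    u_prev = u
    u = np.linalg.solve(A, c + D @ u_prev)        # exact solve of (11)
    U = np.linalg.solve(M, N @ U + c + D @ U)     # single iteration (12)
    e = (G + H) @ e + G @ (u_prev - u)            # recursion (17)
    assert np.allclose(U - u, e)                  # matches the direct error
print("recursion verified; spectral radius of G + H =",
      max(abs(np.linalg.eigvals(G + H))))
```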

3.2. Stability in Time for $k$ Iterations

In this section, we derive a more general version of (18) for the case of $k$ iterations at each time step. Beginning with (11), the iterates satisfy
$$M U^{n,i} = N U^{n,i-1} + D U^{n-1,k} + c^n, \qquad i = 1, \dots, k, \qquad U^{n,0} = U^{n-1,k}. \tag{19}$$
The error is now defined as $e^{n,i} = U^{n,i} - u^n$ for $1 \le i \le k$. We are interested in obtaining a formula for $e^{n,k}$.

The following relationships are useful in deriving the final result:
$$U^{n,i} = u^n + e^{n,i}, \qquad U^{n-1,k} = u^{n-1} + e^{n-1,k}. \tag{20}$$
Substituting these into (19) gives the following, for $i = 1$:
$$M \left( u^n + e^{n,1} \right) = N \left( u^{n-1} + e^{n-1,k} \right) + D \left( u^{n-1} + e^{n-1,k} \right) + c^n \tag{21}$$
and, for $2 \le i \le k$,
$$M \left( u^n + e^{n,i} \right) = N \left( u^n + e^{n,i-1} \right) + D \left( u^{n-1} + e^{n-1,k} \right) + c^n. \tag{22}$$
Substituting (11) gives the following, for $i = 1$:
$$e^{n,1} = (G + H)\, e^{n-1,k} + G \left( u^{n-1} - u^n \right) \tag{23}$$
and, for $2 \le i \le k$,
$$e^{n,i} = G\, e^{n,i-1} + H\, e^{n-1,k}. \tag{24}$$

Starting with $e^{n,1}$, we can use (23) and (24) to find $e^{n,2}$ and $e^{n,3}$ and discover the general form for $e^{n,k}$. The process is tedious, but the pattern is quickly apparent. The final form for the error at time step $n$, where $k$ iterations have been taken per time step, is
$$e^{n,k} = \mathcal{S}_k^{\,n}\, e^{0,k} + \sum_{m=1}^{n} \mathcal{S}_k^{\,n-m}\, G^{k} \left( u^{m-1} - u^{m} \right). \tag{25}$$
The form of $\mathcal{S}_k$ is as follows. For $k = 1$,
$$\mathcal{S}_1 = G + H, \tag{26}$$
and, for $k \ge 2$, the following recursion relationship holds:
$$\mathcal{S}_k = G\, \mathcal{S}_{k-1} + H. \tag{27}$$
For example,
$$\mathcal{S}_2 = G^2 + GH + H, \qquad \mathcal{S}_3 = G^3 + G^2 H + G H + H. \tag{28}$$
Note that the results derived above assume that $M$ is being inverted exactly. A modification of (25) for the case of inexact inversion is given in Appendix B. A second modification of (25), incorporating the use of a weighted Jacobi method in which the new iterate is taken to be a weighted average of the old iterate and the Jacobi iterate, is given in Appendix C.
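The recursion (26)-(27) and the closed form $\mathcal{S}_k = G^k + (I + G + \cdots + G^{k-1})H$ suggested by the examples (28) can be verified on arbitrary matrices; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 5, 7
G = rng.normal(scale=0.3, size=(n, n))
H = rng.normal(scale=0.3, size=(n, n))

def S(k):
    """S_1 = G + H and S_k = G S_{k-1} + H, as in (26)-(27)."""
    Sk = G + H
    for _ in range(k - 1):
        Sk = G @ Sk + H
    return Sk

# Closed form implied by the examples (28): S_k = G^k + (I + G + ... + G^{k-1}) H.
closed = np.linalg.matrix_power(G, k) + sum(
    np.linalg.matrix_power(G, j) for j in range(k)) @ H
assert np.allclose(S(k), closed)
print("recursion and closed form agree; rho(S_7) =",
      max(abs(np.linalg.eigvals(S(k)))))
```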

We conclude, of course, that the stability in time is determined by the spectral radius of the matrix $\mathcal{S}_k$. However, given the conventional wisdom, it is important to point out that stability does not imply accuracy. If we examine (25), it is clear from the term $\mathcal{S}_k^{\,n} e^{0,k}$ that if $\mathcal{S}_k$ has a spectral radius greater than one, then even tiny machine rounding errors will grow out of control [20], and this is one way to describe the notion of instability. If the spectral radius of $\mathcal{S}_k$ is less than one, this cannot occur, and the method can then be considered stable. However, the summation term shows that it is still possible for error to accumulate in time, and the size of this error depends on the size of the vectors $u^{m-1} - u^m$ as well as the effect of $\mathcal{S}_k$ and $G^k$ on them. We consider these errors to be an issue of accuracy, not stability. This calls into question the conventional wisdom that if the result does not blow up, then the method is stable and therefore accurate.

We emphasize that the error formulas derived above measure the difference between the computed numerical solution and the idealized numerical solution obtained by solving the implicitly coupled discrete equations. The idealized numerical solution is itself a discrete approximation to the continuous solution of the PDE. The accuracy of the implicitly coupled solution relative to the continuous solution is a separate matter and has some order in $h$ and $\Delta t$, where $h$ and $\Delta t$ represent the cell width and time step, such as $O(h^2 + \Delta t)$ when a typical second order accurate spatial discretization and first order accurate time discretization are used. Preserving this ideal accuracy requires that discrete effects such as solver error and errors arising from projecting between component discretizations are minimal [19].

3.3. Relationship between $\mathcal{S}_k$ and $A^{-1}D$

An alternative form for the $\mathcal{S}_k$ is
$$\mathcal{S}_k = \left( \sum_{j=0}^{k-1} G^{\,j} \right) H + G^{k}. \tag{29}$$
This form suggests a power series representation for an approximate inverse. Rewriting the splitting as $A = M - N = M \left( I - M^{-1} N \right) = M (I - G)$, then
$$A^{-1} D = (I - G)^{-1} M^{-1} D. \tag{30}$$
Using a power series for the inverse in brackets [20–22], assuming that $\rho(G) < 1$, we get
$$A^{-1} D = \left( \sum_{j=0}^{\infty} G^{\,j} \right) M^{-1} D = \left( \sum_{j=0}^{\infty} G^{\,j} \right) H. \tag{31}$$
Hence, provided $\rho(G) < 1$, $\mathcal{S}_k$ becomes an increasingly accurate approximation of $A^{-1}D$ as the number of iterations $k$ increases. Specifically, the first term in (29) approaches (31) as $k$ increases, while the second term in (29) goes to zero.
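A short numerical check of this limit, with random stand-in matrices chosen (our assumption) so that $\rho(G) < 1$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, dt = 6, 0.05
D = np.eye(n) / dt
A = D + np.eye(n) * 2 + rng.normal(scale=0.2, size=(n, n))
M = np.diag(np.diag(A))
N = M - A
G = np.linalg.solve(M, N)
H = np.linalg.solve(M, D)
print("rho(G) =", max(abs(np.linalg.eigvals(G))))  # < 1 by construction here

target = np.linalg.solve(A, D)   # A^{-1} D, the exact one-step propagator
Sk = G + H                       # S_1
for k in range(2, 41):
    Sk = G @ Sk + H              # recursion (27)
    if k in (2, 5, 10, 20, 40):
        print(f"k = {k:2d}, ||S_k - A^-1 D|| = {np.linalg.norm(Sk - target):.2e}")
```

The printed norms decay geometrically at the rate $\rho(G)$, mirroring the slow reduction of iteration error reported in Section 4 when $\rho(M^{-1}N)$ is close to one.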

4. Numerical Examples

We can employ the error formulas derived above by forming the matrix $\mathcal{S}_k$ for any given set of discretization parameters, for example, step size, space mesh, and number of iterations per time step, and then easily examining the stability of the overall algorithm for particular choices. In this section, we present numerical experiments designed to examine aspects of the conventional wisdom.

We consider a one dimensional domain $\Omega = (0,1)$ with the interface between the two subdomains located at $x = 0.5$, and we pose the heat equation with constant diffusivity equal to one in each subdomain ($a_1 = a_2 = 1$). The outer boundary conditions are homogeneous Neumann at $x = 0$ and homogeneous Dirichlet at $x = 1$. Error measurement is facilitated by constructing the problems so that the exact continuous solution is known. The complete statement of the PDE is
$$\dot{u}_1 - u_1'' = f_1 \ \text{ in } \Omega_1 = (0, 0.5), \qquad \dot{u}_2 - u_2'' = f_2 \ \text{ in } \Omega_2 = (0.5, 1),$$
$$u_1'(0, t) = 0, \qquad u_2(1, t) = 0, \qquad u_1(0.5, t) = u_2(0.5, t), \qquad u_1'(0.5, t) = u_2'(0.5, t), \tag{32}$$
together with initial conditions for $u_1$ and $u_2$.

We use a finite volume method in space and backward Euler in time [13, 18, 19, 21, 22]. The details of the discrete equations are given in Appendix A. The cell-centered finite volume method provides state values at cell centers. Fluxes at cell boundaries are calculated via differencing, while linear extrapolation is used to compute coupling values at the boundary between subdomains. Component 1 receives a Dirichlet condition at the interface that is a linear extrapolation of the two state values nearest to the interface in component 2 and, similarly, component 2 receives a Neumann condition at the interface that is a linear extrapolation of the two flux values nearest to the interface in component 1 [8, 9]. The extrapolation scheme is depicted in Figure 2.

In the examples, we use a Jacobi iteration with the splitting (7). Once the exchange of information is complete, each subdomain is solved to within machine precision by direct inversion of the matrix $M$. At each time step, the code produces both the implicitly coupled solution, $u^n$, by inverting the coupled system matrix $A$, and the iterative solution, $U^{n,k}$, by performing the specified number of iterations. The implicitly coupled and iterative solutions can then be compared at any time step. It is easily verified that the implicitly coupled solution is unconditionally stable and has accuracy $O(h^2 + \Delta t)$. In the following examples, we concentrate on the error between the implicitly coupled and iterative solutions. We introduce the relative error, which is based on the standard discrete two-norm:
$$E^n = \frac{\left\| U^{n,k} - u^n \right\|_2}{\left\| u^n \right\|_2}. \tag{33}$$
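The following sketch assembles this experiment end to end: it reports the three spectral radii discussed above and then marches the implicitly coupled and iterated solutions side by side to evaluate (33). The matrix assembly repeats the hypothetical build_blocks construction from Section 2, so the thresholds it produces need not reproduce the figures below.

```python
import numpy as np

n, dt, k, steps = 16, 1e-3, 2, 80
h = 0.5 / n
K = (np.diag([1.0] + [2.0] * (n - 2) + [3.0])
     + np.diag([-1.0] * (n - 1), 1) + np.diag([-1.0] * (n - 1), -1)) / h**2
A1 = np.eye(n) / dt + K
A2 = A1.copy()
P21 = np.zeros((n, n)); P21[n - 1, :2] = [3.0 / h**2, -1.0 / h**2]
P12 = np.zeros((n, n)); P12[0, n - 3:] = np.array([1.0, -3.0, 2.0]) / h**2
A = np.block([[A1, -P21], [-P12, A2]])
Z = np.zeros((n, n))
M = np.block([[A1, Z], [Z, A2]])          # Jacobi splitting (7)
N = M - A
D = np.eye(2 * n) / dt

# The three stability indicators of Section 3.
G = np.linalg.solve(M, N)
H = np.linalg.solve(M, D)
Sk = G + H
for _ in range(k - 1):
    Sk = G @ Sk + H
for name, mat in [("M^-1 N", G), ("S_k", Sk), ("A^-1 D", np.linalg.solve(A, D))]:
    print(f"rho({name}) = {max(abs(np.linalg.eigvals(mat))):.4f}")

# March both solutions with zero source and compare via (33).
x = np.concatenate([np.linspace(h / 2, 0.5 - h / 2, n),
                    np.linspace(0.5 + h / 2, 1 - h / 2, n)])
u = np.cos(np.pi * x / 2)    # initial data compatible with the outer conditions
U = u.copy()
for _ in range(steps):
    u_new = np.linalg.solve(A, D @ u)     # implicitly coupled solution
    rhs = D @ U                           # inherited value U^{n-1,k}
    V = U.copy()                          # warm start from the previous step
    for _ in range(k):
        V = np.linalg.solve(M, N @ V + rhs)
    u, U = u_new, V
print(f"relative error E^n after {steps} steps: "
      f"{np.linalg.norm(U - u) / np.linalg.norm(u):.3e}")
```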

Finally, we conduct similar experiments with a wide range of spatial mesh sizes. The qualitative results are the same in every case, although the specific threshold values may of course vary.

4.1. Case 1: Stability of the Algorithm with $k$ Iterations Using a Fixed Time Step, $\Delta t$

In the first experiment, we set the grid size to be 16 cells in each subdomain ($h = 1/32$) and fix the time step $\Delta t$. We plot the spectral radii of $M^{-1}N$, $\mathcal{S}_k$, and $A^{-1}D$ for various values of $k$ in Figure 3. Note that the spectral radius of $M^{-1}N$ is less than one, so the iteration is convergent, and the spectral radius of $A^{-1}D$ is less than one, so the implicitly coupled scheme would be unconditionally stable if we had the luxury of inverting $A$ at each time step. However, the method is unstable for certain values of $k$. If one observes the progress of the iterative solution at a given time step, it is clear that although the iteration is convergent, certain early iterates contain significantly more error than the previous iterate. In other words, the error in the computational iterate does not decrease monotonically. For the particular problem examined here, it is every fourth iterate, starting with the second iterate, that has the increased error. Figure 3 shows that for the first few values in this sequence ($k = 2, 6, 10, 14$) the reduced quality of the iterate results in the entire algorithm being unstable, despite the fact that the iteration is convergent. As the number of iterations increases, this effect is reduced in magnitude and eventually disappears. We verify the instability by solving to the final time (80 time steps) and listing the relative error there for several values of $k$ in Table 1.

Two large values of $k$ are included in Table 1 to verify that, since the iteration is convergent, the relative error approaches zero as the number of iterations becomes very large. The reduction in relative error for large $k$ is fairly slow, since the spectral radius of $M^{-1}N$ is approximately 0.975, as indicated in Figure 3.

The results of this experiment serve to illustrate one of our main points: a low number of iterations can interfere with the unconditional stability of the time discretization. In addition, it shows that a lower number of iterations can be stable while a higher number is unstable. The plot of the spectral radius of $\mathcal{S}_k$ in Figure 3 indicates that using 2, 6, 10, or 14 iterations leads to an unstable method. The numerical values in Table 1 confirm the instability for $k = 2$ and $k = 6$ ($k = 10$ and $k = 14$ yield analogous results). Backward Euler is considered "unconditionally stable" [21, 22]; however, this fact is based on the assumption that the system is solved exactly at each time step. For the iterative algorithm used here, this unconditional stability is only valid in the limit as the number of iterations goes to infinity. This example shows how sensitive the stability of the algorithm is to the number of iterations used. Note that this example also provides another demonstration of the fact that stability does not imply accuracy. Notice in Table 1 that the relative error for the stable case $k = 18$ is around ten times that of the nearby stable cases. Despite the fact that the spectral radius of $\mathcal{S}_k$ is less than one for all the stable cases, it is still possible for the summation term in (25) to accumulate significant error in time. This term makes a much larger contribution for the case of $k = 18$ than it does for the other stable cases. This is not evident from the spectral radius of $\mathcal{S}_k$ alone, which is comparable for these cases.

4.2. Case 2: Stability of the Algorithm with Time Step $\Delta t$, Using a Fixed Number of Iterations, $k$

In the second experiment, we use identical conditions except that we fix the number of iterations $k$ and vary the time step $\Delta t$. Figure 4 shows the spectral radii of the relevant matrices for the smallest value of $k$ considered and values of $\Delta t$ ranging from 0 to 0.1. Since $h = 1/32$, this range of time step corresponds to a range for the ratio $\Delta t / h^2$ of approximately 1 to 100. Figure 4 implies that the method is stable for all $\Delta t$ in this case. Carrying out the calculation confirms this, and the relative errors at the end of the simulation are given in Table 2.

Figure 5 shows the spectral radii of the relevant matrices for a larger value of $k$. The implication is that the method is now stable only for very small values of $\Delta t$. Table 3 gives relative errors for this case.

For a still larger value of $k$, the plot and table are shown in Figure 6 and Table 4. This case illustrates that instability can occur for a limited range of time steps, with both a minimum and a maximum, not simply for time steps above a certain threshold, as one might expect. This case makes it clear that the method is stable for sufficiently small and sufficiently large time steps and is unstable in between. (In fact, this is also true for the case of Figure 5: if the horizontal axis in Figure 5 is carried out to much larger time steps, the spectral radius of $\mathcal{S}_k$ does eventually drop below one.) This is a surprising result, since one would not expect reducing the time step to promote instability, nor increasing the time step to restore stability.

Finally, in Figure 7 we provide two further plots, for two larger values of $k$. These show the relationship between $\mathcal{S}_k$ and $A^{-1}D$ discussed in Section 3.3. The spectral radius of $A^{-1}D$ is less than one for all values of $\Delta t$, and it decreases monotonically with increasing $\Delta t$. We know from the expansions in Section 3.3 that as $k$ becomes large, $\mathcal{S}_k$ approaches $A^{-1}D$. However, there is no reason to expect the spectral radius of $\mathcal{S}_k$ to be monotonic in $\Delta t$ for small values of $k$, and the plots in this section show that it is not. There is a characteristic bump in the plots of the spectral radius of $\mathcal{S}_k$ versus $\Delta t$, and it is in this range of $\Delta t$ values that the spectral radius of $\mathcal{S}_k$ may exceed 1, leading to an unstable method. Naturally, as $k$ becomes large, the size of this bump is reduced, since $\mathcal{S}_k$ is driven closer to $A^{-1}D$. In the larger of the two cases in Figure 7, the spectral radius of $\mathcal{S}_k$ is less than 1 for all time steps.

5. The Multirate Case

Multirate integration using different time steps for the different components is a common practice [1, 8, 9]. In this case, the implicitly coupled system and error formulas are changed slightly. We assume the longer time step is an integer multiple of the shorter time step, so component 2 takes $q$ steps for every one step in component 1. The outline of the derivation for $q = 2$ is given here, but the results generalize easily. The large time steps correspond to the integer index $n$, and the small time steps correspond to fractional indices. First, the component 1 equation is straightforward since it only involves one large time step:
$$A_1 u_1^{n} = P_{21} u_2^{n} + b_1^{n}. \tag{34}$$
Next, the component 2 equation requires compounding two steps:
$$A_2 u_2^{\,n-1/2} = P_{12} u_1^{n} + b_2^{\,n-1/2}, \qquad A_2 u_2^{n} = P_{12} u_1^{n} + c_2^{n} + D_2 u_2^{\,n-1/2}, \tag{35}$$
where $b_2^{\,n-1/2} = c_2^{\,n-1/2} + D_2 u_2^{\,n-1}$, $D_2 = q I_2 / \Delta t$, and the interface flux data for both substeps are taken from the component 1 solution at the end of the large step.

After carrying out the algebra and generalizing to larger $q$, we obtain the system:
$$\begin{pmatrix} A_1 & 0 & -P_{21} \\ -P_{12} & A_2 & 0 \\ -P_{12} & -D_2 & A_2 \end{pmatrix} \begin{pmatrix} u_1^{n} \\ u_2^{\,n-1/2} \\ u_2^{n} \end{pmatrix} = \begin{pmatrix} b_1^{n} \\ b_2^{\,n-1/2} \\ c_2^{n} \end{pmatrix}. \tag{36}$$
The fractional index is used to indicate the small time steps; for general $q$, the system contains the $q - 1$ intermediate values $u_2^{\,n-1+j/q}$, $j = 1, \dots, q-1$.

Now that the implicitly coupled system has been defined, we can choose a splitting $A = M - N$ that defines $G = M^{-1}N$ and $H = M^{-1}D$ in the error formulas above. The error formula for the multirate case is very similar to (25), but we must define matrices $\mathcal{S}_{k,q}$, where $k$ is the number of iterations per time step and $q$ is the integer number of time steps taken in component 2 for each time step taken in component 1. Here $D$ is the matrix that carries the previous large-step values $u_1^{\,n-1}$ and $u_2^{\,n-1}$ into the right hand side of (36); its nonzero blocks are $I_1/\Delta t$ and $q I_2/\Delta t$, where $I_1$ and $I_2$ are identity matrices of the same sizes as $A_1$ and $A_2$. The recursion relationship for the $\mathcal{S}_{k,q}$ is the same as in (26)-(27):
$$\mathcal{S}_{1,q} = G + H, \qquad \mathcal{S}_{k,q} = G\, \mathcal{S}_{k-1,q} + H, \quad k \ge 2. \tag{37}$$
The alternative form for $\mathcal{S}_{k,q}$, analogous to (29), is
$$\mathcal{S}_{k,q} = \left( \sum_{j=0}^{k-1} G^{\,j} \right) H + G^{k}. \tag{38}$$
This means that, as $k$ goes to infinity, $\mathcal{S}_{k,q}$ approaches $A^{-1}D$ (note that $A$, $M$, $N$, and $D$ now denote the multirate matrices). Finally, the multirate error formula is
$$e^{n,k} = \mathcal{S}_{k,q}^{\,n}\, e^{0,k} + \sum_{m=1}^{n} \mathcal{S}_{k,q}^{\,n-m}\, G^{k} \left( u^{m-1} - u^{m} \right). \tag{39}$$
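As a rough illustration of the multirate iteration, the sketch below advances one large step with $q$ backward Euler substeps in component 2 inside a block Jacobi loop. The matrices are random placeholders, and freezing the interface data at the latest component 1 iterate during the substeps is our assumption; the transfer used for the numerical results may differ.

```python
import numpy as np

rng = np.random.default_rng(4)
n, dt, q = 6, 0.05, 2
A1 = np.eye(n) / dt + np.eye(n) * 2 + rng.normal(scale=0.1, size=(n, n))
A2 = np.eye(n) / (dt / q) + np.eye(n) * 2 + rng.normal(scale=0.1, size=(n, n))
P21 = rng.normal(scale=0.1, size=(n, n))
P12 = rng.normal(scale=0.1, size=(n, n))
b1 = rng.normal(size=n)
b2 = rng.normal(size=n)

def macro_step(u1_prev, u2_prev, k):
    """Advance both components by one large step dt using k block iterations."""
    u1, u2 = u1_prev.copy(), u2_prev.copy()   # initial guess: previous step
    for _ in range(k):
        u1_new = np.linalg.solve(A1, b1 + P21 @ u2 + u1_prev / dt)
        v = u2_prev
        for _ in range(q):                    # q backward Euler substeps
            v = np.linalg.solve(A2, b2 + P12 @ u1 + v / (dt / q))
        u1, u2 = u1_new, v                    # Jacobi-style simultaneous update
    return u1, u2

u1, u2 = rng.normal(size=n), rng.normal(size=n)
for _ in range(10):
    u1, u2 = macro_step(u1, u2, k=3)
print("solution norm after 10 macro steps:",
      np.linalg.norm(np.concatenate([u1, u2])))
```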

5.1. Multirate Example 1: Varying $q$ with Constant $k$

The multirate examples are the same as those described in Section 4.2 with only the values of $k$ and $q$ altered. We use the same Jacobi style iteration, so based on (36) we have
$$M = \begin{pmatrix} A_1 & 0 & 0 \\ 0 & A_2 & 0 \\ 0 & -D_2 & A_2 \end{pmatrix}, \qquad N = \begin{pmatrix} 0 & 0 & P_{21} \\ P_{12} & 0 & 0 \\ P_{12} & 0 & 0 \end{pmatrix}. \tag{40}$$
In the first multirate example, $k$ is set to 1, and $q$ is set first to 2 and then to 10. Figure 8 shows the spectral radii of the relevant matrices.

Values of $q$ larger than 10 produce no visible change in the plot, so Figure 8(b) can be taken to represent the case of "many" time steps in component 2 within each time step of component 1. There is now a range of time steps for which the iteration diverges. As might be expected, the algorithm is unstable where the iteration is divergent: when the spectral radius of $M^{-1}N$ is greater than one, the spectral radius of $\mathcal{S}_{k,q}$ is greater than one.

As the number of iterations $k$ approaches infinity, a divergent iteration must result in an unstable algorithm. However, for the case of finite iteration, this is not guaranteed, and the next experiment illustrates this point.

5.2. Multirate Example 2: Varying $k$ with Constant $q$

In the second multirate example, $q$ is set to 2, and $k$ is set first to 3 and then to 100. Figure 9 shows the spectral radii for the relevant matrices. The results show that the method may be stable even though the iteration is divergent. Figure 9(a) includes a region in which the spectral radius of $M^{-1}N$ is more than one, yet the spectral radius of $\mathcal{S}_{k,q}$ is less than one. In Figure 9(a), where only 3 iterations are used per large time step, the entire range of $\Delta t$ values for which the iteration is divergent results in a stable algorithm. In Figure 9(b), where the number of iterations is raised to 100, the region of divergence and the region of instability are very nearly the same. Such cases are not unique to multirate examples and have also been observed in single rate examples. Of course, these cases can only occur when the number of iterations per time step is small, since iterating many times with a divergent iteration would certainly lead to an unstable algorithm. This is an interesting case, since a stable algorithm could contain a divergent iteration without the user being aware of it.

5.3. Multirate Example in Two Spatial Dimensions

In this section, the multirate tests in Sections 5.1 and 5.2 are extended to a problem in two spatial dimensions posed on the unit square, with the interface located at $x = 0.5$; the PDE and the outer boundary conditions are the two dimensional analogs of (32). The purpose of this example is to examine the changes in the spectral radii of the relevant matrices resulting from the change to two spatial dimensions. We use the standard finite volume discretization with 16 × 16 cells in each subdomain. Since the source terms and boundary data have no impact on the relevant matrices, we do not state them explicitly.

Plots of the spectral radii of the relevant matrices are given in Figures 10 and 11. The spectral radius of the iteration matrix $M^{-1}N$ is much smaller in the two space dimension case. Nonetheless, the plots show that the two space dimension case exhibits the same qualitative behavior as the one space dimension case. In particular, there is a range of time steps for which the iteration is divergent and also a range of time steps for which the overall method is unstable.

6. Conclusion

By selecting a coupling strategy comprising space and time grids and an associated rule by which information is exchanged between components, we define an implicitly coupled system. At each time step, we seek a solution of this implicitly coupled system through a block iterative strategy. Since only a limited number of iterations can be performed at each time step, there will be some iteration error separating the final iterate from the implicitly coupled solution. These errors are propagated to the next time step in the form of errors in the initial condition and are compounded as the incomplete iteration process repeats itself at each time step. The cumulative effect can manifest itself as conditional stability, meaning the solution is only stable for certain values of $\Delta t$ despite the unconditional stability of the implicitly coupled solution.

We have derived formulas for this error which show that stability hinges on the spectral radius of the matrix $\mathcal{S}_k$ and further show that there is a mechanism for error to accumulate in time even if the algorithm is stable. By examining the spectral radius of $\mathcal{S}_k$ for a variety of discretization choices on a simple model problem, we demonstrate several important points. First, the unconditional stability of the time integration method is not necessarily retained if a low number of iterations per time step is used. Instead, a conditional stability may occur, where the method is stable only for a range of values of $\Delta t$. In addition, we show by example that it is possible for the method to be stable even if the iteration is divergent, provided a low number of iterations is used. These results seem to contradict intuition, which suggests that a convergent iteration will produce a stable algorithm and a divergent iteration will produce an unstable algorithm. However, both of these ideas hold only in the limit as the number of iterations goes to infinity, and in the case of finite iteration we have shown that there are exceptions.

Finally, the question of accuracy must be considered in addition to the issue of stability. The error formulas derived above show that the iterative solution may wander away from the implicitly coupled solution over time even if the method is stable in the traditional sense.

Appendices

A. Discrete Equations for 1D Finite Volume with Backward Euler in Time on the Heat Equation

Let there be $J$ cells in the 1D spatial grid in a given subdomain, indexed $j = 1, \dots, J$. Let $u_j^n$ be the discrete solution on cell $j$ at time $t_n$, $\Delta t$ the constant time step, and $h$ the constant cell width. The symbol $f_j^n$ represents the exact integral over cell $j$ of the right hand side of the PDE evaluated at time $t_n$. The symbol $a_{j+1/2}^n$ represents the diffusivity function evaluated at the boundary between cell $j$ and cell $j+1$, at time $t_n$. The system of equations below, when placed into matrix form, describes the terms in (3) as they were implemented for the numerical tests:
$$\frac{u_1^n - u_1^{n-1}}{\Delta t} - \frac{1}{h} \left( a_{3/2}^n \frac{u_2^n - u_1^n}{h} - g_N^n \right) = \frac{f_1^n}{h},$$
$$\frac{u_j^n - u_j^{n-1}}{\Delta t} - \frac{1}{h} \left( a_{j+1/2}^n \frac{u_{j+1}^n - u_j^n}{h} - a_{j-1/2}^n \frac{u_j^n - u_{j-1}^n}{h} \right) = \frac{f_j^n}{h}, \quad j = 2, \dots, J-1, \tag{A.1}$$
where $g_N^n$ is the known Neumann boundary condition (flux) at the left endpoint at time $t_n$.

For the last cell, consider
$$\frac{u_J^n - u_J^{n-1}}{\Delta t} - \frac{1}{h} \left( a_{J+1/2}^n \frac{g_D^n - u_J^n}{h/2} - a_{J-1/2}^n \frac{u_J^n - u_{J-1}^n}{h} \right) = \frac{f_J^n}{h}, \tag{A.2}$$
where $g_D^n$ is the known Dirichlet boundary condition at the right endpoint at time $t_n$.

Note that (A.1) and (A.2) are valid for both of our subdomains provided that $g_N^n$ and $g_D^n$ are interpreted as given boundary conditions on the outer boundaries of the entire domain and as the values determined by the extrapolation strategy depicted in Figure 2 at the interface between the subdomains.
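A compact single-step routine realizing (A.1)-(A.2); the function fv_step and its sign conventions reflect our reading of the appendix, not code from the paper:

```python
import numpy as np

def fv_step(u_prev, dt, h, f, a, gN, gD):
    """One backward Euler step of (A.1)-(A.2) on J cells: given flux gN at the
    left face, given value gD at the right face, face diffusivities a[0..J]."""
    J = len(u_prev)
    A = np.zeros((J, J))
    rhs = f + u_prev / dt
    for j in range(J):
        A[j, j] = 1.0 / dt
        if j > 0:                        # flux through the left face of cell j
            A[j, j] += a[j] / h**2
            A[j, j - 1] -= a[j] / h**2
        if j < J - 1:                    # flux through the right face of cell j
            A[j, j] += a[j + 1] / h**2
            A[j, j + 1] -= a[j + 1] / h**2
    rhs[0] += gN / h                     # known flux enters the first cell
    A[-1, -1] += 2 * a[J] / h**2         # Dirichlet value at half-cell spacing
    rhs[-1] += 2 * a[J] * gD / h**2
    return np.linalg.solve(A, rhs)

# Sanity check: insulated left face, unit value at the right face, zero source.
u = np.zeros(8)
for _ in range(200):
    u = fv_step(u, dt=0.01, h=1 / 8, f=np.zeros(8), a=np.ones(9), gN=0.0, gD=1.0)
print(u.round(3))                        # relaxes toward the steady state u = 1
```

With an insulated left face, unit value at the right face, and zero source, the iteration relaxes to the steady state $u \equiv 1$, which provides a quick sanity check of the stencil.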

B. Inexact Component Solutions (Inexact Inversion of )

The error formulas derived in Sections 3.1 and 3.2 can easily be modified to include the effects of inexact inversion of $M$. We first adjust (18) from Section 3.1, for a single iteration per time step. The iterates are now defined by
$$U^n = M^{-1} \left( N U^{n-1} + D U^{n-1} + c^n \right) + \rho^n, \tag{B.1}$$
where $\rho^n$ represents the error from inexact inversion. The result corresponding to (18) in Section 3.1 is now
$$e^n = (G + H)^n e^0 + \sum_{m=1}^{n} (G + H)^{\,n-m} \left[ G \left( u^{m-1} - u^m \right) + \rho^m \right]. \tag{B.2}$$
Notice the extra term resulting from inexact inversion.

Repeating the process for $k$ iterations per time step, the computational iterates are defined by
$$U^{n,i} = M^{-1} \left( N U^{n,i-1} + D U^{n-1,k} + c^n \right) + \rho^{n,i}, \qquad i = 1, \dots, k. \tag{B.3}$$
Following the same logic and derivation used in Section 3.2, the expression corresponding to (25) is
$$e^{n,k} = \mathcal{S}_k^{\,n}\, e^{0,k} + \sum_{m=1}^{n} \mathcal{S}_k^{\,n-m}\, G^{k} \left( u^{m-1} - u^{m} \right) + \sum_{m=1}^{n} \mathcal{S}_k^{\,n-m} \sum_{i=1}^{k} G^{\,k-i}\, \rho^{m,i}. \tag{B.4}$$
Notice that, with multiple iterations per time step, the term $\rho^{m,i}$ needs a double index because an error due to inexact inversion is made at every iteration. Equation (B.4) contains both terms from (25) and has an additional term which represents the error caused by the inexact inversions. The matrix $\mathcal{S}_k$ is the key to the growth of all three terms and therefore remains the key to stability.

C. Weighted Jacobi

It is worth noting that the error formula (25) can easily be altered to allow for a "relaxed" Jacobi iteration [20–22], in which a weighted average of the new and old iterates of a standard Jacobi iteration is taken to be the next iterate (see, e.g., [23]). With weight $0 < \omega \le 1$, the iterate $U^{n,i}$ is replaced by $\omega U^{n,i} + (1 - \omega) U^{n,i-1}$. If we rewrite (25) as
$$e^{n,k} = \mathcal{S}_k^{\,n}\, e^{0,k} + \sum_{m=1}^{n} \mathcal{S}_k^{\,n-m}\, G_\omega^{\,k} \left( u^{m-1} - u^m \right), \tag{C.1}$$
with
$$G_\omega = \omega G + (1 - \omega) I, \qquad H_\omega = \omega H, \tag{C.2}$$
and redefine $\mathcal{S}_k$, then, for $k \ge 2$,
$$\mathcal{S}_1 = G_\omega + H_\omega, \qquad \mathcal{S}_k = G_\omega\, \mathcal{S}_{k-1} + H_\omega. \tag{C.3}$$
With this definition of $\mathcal{S}_k$, (25) holds for any value of $\omega$, with $\omega = 1$ corresponding to standard iteration. In some cases where the iterative convergence at each time step is slow, the iterates tend to oscillate around the implicitly coupled solution at that time step. In these cases, adjusting $\omega$ can drastically accelerate iterative convergence.
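A minimal sketch of the relaxed iteration at a single time step, with $\omega$ and the random test matrices as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n, omega = 8, 0.6
A = np.eye(n) * 3 + rng.normal(scale=0.3, size=(n, n))
b = rng.normal(size=n)
M = np.diag(np.diag(A))
N = M - A
u_exact = np.linalg.solve(A, b)

u = np.zeros(n)
for _ in range(30):
    u_jac = np.linalg.solve(M, N @ u + b)  # standard Jacobi iterate
    u = omega * u_jac + (1 - omega) * u    # weighted average; omega = 1 is standard
print("error after 30 weighted iterations:", np.linalg.norm(u - u_exact))
```

When the unrelaxed iterates oscillate around the fixed point, the eigenvalues of $G$ responsible for the oscillation are negative, and averaging with the old iterate shifts them toward zero, which is the design rationale for choosing $\omega < 1$.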

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

D. Estep gratefully acknowledges Chalmers University of Technology for the support provided by the appointment as Chalmers Jubilee Professor. B. Sheehan's work is supported in part by the Department of Energy (DE-FG02-04ER25620). D. Estep's work is supported in part by the Defense Threat Reduction Agency (HDTRA1-09-1-0036), Department of Energy (DE-FG02-04ER25620, DE-FG02-05ER25699, DE-FC02-07ER54909, DE-SC0001724, DE-SC0005304, INL00120133, and DE0000000SC9279), Idaho National Laboratory (00069249 and 00115474), Lawrence Livermore National Laboratory (B573139, B584647, and B590495), National Science Foundation (DMS-0107832, DMS-0715135, DGE-0221595003, MSPA-CSE-0434354, ECCS-0700559, DMS-1065046, DMS-1016268, DMS-FRG-1065046, and DMS-1228206), and National Institutes of Health (no. R01GM096192). S. Tavener's work is supported in part by the Department of Energy (DE-FG02-04ER25620 and INL00120133) and National Science Foundation (DMS-1153552).