*Journal of Applied Mathematics and Stochastic Analysis*, Volume 2008 (2008), Article ID 104525, 26 pages. http://dx.doi.org/10.1155/2008/104525
Research Article

## A Numerical Solution Using an Adaptively Preconditioned Lanczos Method for a Class of Linear Systems Related with the Fractional Poisson Equation

School of Mathematical Sciences, Queensland University of Technology, Qld 4001, Australia

Received 21 May 2008; Revised 10 September 2008; Accepted 23 October 2008

Copyright © 2008 M. Ilić et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This study considers the solution of a class of linear systems related with the fractional Poisson equation (FPE) with nonhomogeneous boundary conditions on a bounded domain. A numerical approximation to the FPE is derived using a matrix representation of the Laplacian to generate a linear system of equations with its matrix $A$ raised to the fractional power $\alpha/2$. The solution of the linear system then requires the action of the matrix function $f(A) = A^{-\alpha/2}$ on a vector $\mathbf{b}$. For large, sparse, and symmetric positive definite matrices, the Lanczos approximation generates $f(A)\mathbf{b} \approx \beta_0 V_m f(T_m)\mathbf{e}_1$. This method works well when both the analytic grade of $A$ with respect to $\mathbf{b}$ and the residual for the linear system are sufficiently small. Memory constraints often require restarting the Lanczos decomposition; however, this is not straightforward in the context of matrix function approximation. In this paper, we use the idea of thick restart and adaptive preconditioning for solving linear systems to improve convergence of the Lanczos approximation. We give an error bound for the new method and illustrate its role in solving the FPE. Numerical results are provided to gauge the performance of the proposed method relative to exact analytic solutions.

#### 1. Introduction

In recent times, the study of the fractional calculus and its applications in science and engineering has escalated [1–3]. The majority of papers dedicated to this topic discuss fractional kinetic equations of diffusion, diffusion-advection, and Fokker-Planck type to describe transport dynamics in complex systems that are governed by anomalous diffusion and nonexponential relaxation patterns [2, 3]. These papers provide comprehensive reviews of fractional/anomalous diffusion and an extensive collection of examples from a variety of application areas. A particular case of interest is the motion of solutes through aquifers discussed by Benson et al. [4, 5].

The generally accepted definition for the fractional Laplacian involves an integral representation (see [6] and the references therein), since the spectral resolution of the Laplacian operator over infinite domains is continuous; for the whole space, we use the Fourier transform, and for initial value problems we use the Laplace transform in time [7]. However, when dealing with finite domains, the fractional Laplacian subject to homogeneous boundary conditions is usually defined in terms of a summation involving the discrete spectrum. It is nontrivial to extend the latter definition to accommodate nonhomogeneous boundary conditions. To the best of our knowledge, there is no evidence in the literature that suggests this has been done, apart from Ilić et al. [8], where the one-dimensional case was discussed. In this paper, we propose the extension to higher dimensions and illustrate the idea in the context of solving the fractional Poisson equation subject to nonhomogeneous boundary conditions on a bounded domain.

Space fractional diffusion equations have been investigated by West and Seshadri [9] and more recently by Gorenflo and Mainardi [10, 11]. Numerical methods for these fractional equations are still under development. Hackbusch and his group [12–14] have developed the theory of $\mathcal{H}$-matrices and algorithms that they claim to be of almost linear complexity for computing functions of operators that are approximated by a finite difference (or other Galerkin scheme) discretisation matrix. However, the underlying theory is developed using integral representations of the matrix for separable coordinate systems and does not include a discussion of nonhomogeneous boundary conditions, which is essential for the fractional Poisson equation under investigation in this paper. Recently, Ilić et al. [8, 15] proposed a matrix representation of the fractional-in-space operator to produce a system of linear ordinary differential equations (ODEs) with a matrix representation of the Laplacian operator raised to the same fractional power. This approach, which was coined the matrix transfer technique (MTT), enabled either the standard finite element, finite volume, or finite difference method to be exploited for the spatial discretisation of the operator.

In recent years, fractional Brownian motion (FBM) with Hurst index $H$ has been used to introduce memory into the dynamics of diffusion processes. A prediction theory and other analytical results on FBM can be found in [16]. As shown in [17], a Girsanov-type formula for the Radon-Nikodym derivative of an FBM with drift with respect to the same FBM is determined by differential equations of fractional order with Dirichlet boundary conditions (1.1), for a certain integrable function defined on the interval of interest. In this study, we extend problem (1.1) and investigate the solution of a steady-state space fractional diffusion equation with sources, hereafter referred to as the fractional Poisson equation (FPE), on some bounded domain $\Omega$ in two dimensions, subject to either one or a combination of the usual (nonhomogeneous) boundary conditions of types I, II, or III imposed on the boundary $\partial\Omega$. Although the method we present for solving the FPE is equally applicable to two- and three-dimensional problems and the various coordinate systems used in the solution by separation of variables, we consider only the following problem here.

FPE Problem
Solve the fractional Poisson equation in a finite rectangle, subject to nonhomogeneous boundary conditions of type III on each side. We choose such a simple region so that an analytic solution can be found, which can be used subsequently to verify our numerical approach. Note also that this system captures type I and type II boundary conditions as special cases of the boundary coefficients. The latter case has to be analysed separately with care, since $\lambda = 0$ is then an eigenvalue that introduces singularities.
The use of our matrix transfer technique leads to the matrix representation of the FPE (1.2), which requires that the matrix function equation (1.4) be solved. Note that in (1.4), $A$ denotes the matrix representation of the Laplacian operator obtained using any of the well-documented methods: finite difference, the finite volume method, or variational methods such as the Galerkin method using finite elements or wavelets. The right-hand side involves $\mathbf{b}_1$, a vector containing the discrete values of the source/sink term, and $\mathbf{b}_2$, a vector that contains all of the discrete boundary condition information. We assume further that both the discretisation process and the implementation of the boundary conditions have been carried out to ensure that $A$ is symmetric positive definite, that is, $A = A^T$ and $\mathbf{x}^TA\mathbf{x} > 0$ for all $\mathbf{x} \neq \mathbf{0}$.
The general solution of (1.4) can be written as
$$\mathbf{u} = A^{-\alpha/2}\mathbf{b}_1 + A^{-1}\mathbf{b}_2,$$
and one notes the need to determine both the action of the matrix function $A^{-\alpha/2}$ on the vector $\mathbf{b}_1$ and the action of the standard inverse on $\mathbf{b}_2$, where the matrix $A$ can be large and sparse.
In the case where $\alpha = 2$, numerous authors have proposed efficient methods to deal directly with (1.5) using Krylov subspace methods and, in particular, the preconditioned generalised minimum residual (GMRES) iterative method (see, e.g., the texts by Golub and Van Loan [18], Saad [19], and van der Vorst [20]). In this paper, we investigate the use of Krylov subspace methods for computing an approximate solution for a range of values of $\alpha$ and indicate how the spectral information gathered from at first solving the linear system $A\mathbf{u}_2 = \mathbf{b}_2$ can be recycled to obtain the complete solution in (1.5), where $0 < \alpha \le 2$.
In the literature, the majority of references deal with the extraction of an approximation to $f(A)\mathbf{b}$ for a scalar analytic function $f$ using Krylov subspace methods (see [21, Chapter 13] and the references therein). Druskin and Knizhnerman [22], Hochbruck and Lubich [23], Eiermann and Ernst [24], Lopez and Simoncini [25], van den Eshof et al. [26], as well as many other researchers use the Lanczos approximation
$$f(A)\mathbf{b} \approx \beta_0 V_m f(T_m)\mathbf{e}_1,$$
where
$$AV_m = V_mT_m + \beta_m\mathbf{v}_{m+1}\mathbf{e}_m^T$$
is the usual Lanczos decomposition, $\beta_0 = \|\mathbf{b}\|$, and the columns of $V_m$ form an orthonormal basis for the Krylov subspace $\mathcal{K}_m(A,\mathbf{b})$. However, as noted by Eiermann and Ernst [24], all basis vectors must be stored to form this approximation, which may prove costly for large matrices. Restarting the process is by no means as straightforward as for the case $f(z) = z^{-1}$, and the restarted Arnoldi algorithm for computing $f(A)\mathbf{b}$ given in [24] addresses this issue. Another issue worth pointing out is that although preconditioning linear systems is now well understood and numerous preconditioning strategies exist to accelerate the convergence of many iterative solvers based on Krylov subspace methods [19], preconditioning in many cases cannot be applied to $f(A)\mathbf{b}$. For example, if $M$ is a preconditioner for $A$, one can only deduce $f(A)\mathbf{b}$ from $f(M^{-1}A)\mathbf{b}$ in a limited number of special cases for $f$.
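As a concrete illustration, the Lanczos approximation just described can be sketched in a few lines of NumPy. This is a minimal dense sketch with full reorthogonalisation for robustness; the function and variable names are ours, not the paper's:

```python
import numpy as np

def lanczos(A, b, m):
    """Lanczos decomposition A V_m = V_m T_m + beta_m v_{m+1} e_m^T,
    with full reorthogonalisation for numerical robustness."""
    n = len(b)
    V = np.zeros((n, m + 1))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w                      # diagonal entry of T_m
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)    # orthogonalise against all v_i
        beta[j] = np.linalg.norm(w)
        V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    return V[:, :m], T

def lanczos_fAb(A, b, f, m):
    """Lanczos approximation f(A) b ~= ||b|| V_m f(T_m) e_1."""
    V, T = lanczos(A, b, m)
    theta, S = np.linalg.eigh(T)        # T_m = S diag(theta) S^T
    return np.linalg.norm(b) * (V @ (S @ (f(theta) * S[0, :])))
```

Note that only the small tridiagonal matrix $T_m$ is passed to the scalar function $f$; the cost of storing all columns of $V_m$, mentioned above, is visible in the array `V`.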
In the previous work by the authors [27], we proposed a spectral splitting method $f(A)\mathbf{b} = f(A)P\mathbf{b} + f(A)Q\mathbf{b}$, where $P$ is an orthogonal projector onto the invariant subspace associated with a set of eigenvalues on the “singular part” of the spectrum with respect to $f$, and $Q = I - P$ is an orthogonal projector onto the “regular part” of the spectrum. We refer to that part of the spectral interval where the function to be evaluated has rapid change with large values of the derivatives as the singular part (see [27] for more details). The splitting was chosen in such a way that the approximation of $f$ over the regular part was a low-degree polynomial (of degree at most 5). Thick restarting was used to construct the projector on the singular part. Unfortunately, the computational overhead associated with constructing the projector $P$, whilst maintaining the requirement of a low-degree polynomial approximation for $f$ over the regular part, limits the application of the splitting method to a class of matrices that have fairly compact spectra. The method appeared to work well for applications in statistics [27, 28].
In this paper, we build upon the splitting method idea in the manner outlined as follows to approximate $f(A)\mathbf{b}$ for a monotone decreasing function $f$.
(1) Determine an approximately invariant subspace (AIS) for the set of eigenvectors associated with the singular part of the spectrum of $A$ with respect to $f$. Form $V_k = [\mathbf{v}_1,\ldots,\mathbf{v}_k]$ and set $D_k = \operatorname{diag}(\lambda_1,\ldots,\lambda_k)$, where $\lambda_i$ are the eigenvalues associated with the eigenvectors $\mathbf{v}_i$. The thick restarted Lanczos method discussed in [27, 29] or [30] can be used for the AIS generation.
(2) Let $\mathbf{b}_2 = (I - V_kV_k^T)\mathbf{b}$ and generate an orthonormal basis for $\mathcal{K}_m(A,\mathbf{b}_2)$.
(3) Approximate $f(A)\mathbf{b}_2$ using the Lanczos decomposition to analytic grade $m$ [31].
(4) Form $f(A)\mathbf{b} \approx V_kf(D_k)V_k^T\mathbf{b} + \beta_0V_mf(T_m)\mathbf{e}_1$. To avoid components of any eigenvectors associated with the singular part reappearing in the Lanczos process, we show how this splitting strategy can be embedded in an adaptively constructed preconditioning of the matrix function.
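The identity behind steps (1)–(4) can be checked numerically. In the dense sketch below, the AIS is taken to be a set of exact eigenvectors, and the regular part is evaluated densely purely for checking purposes (in the method itself it would be the Lanczos approximation); all names are ours:

```python
import numpy as np

def fA(A, f):
    """Dense reference evaluation of the matrix function f(A)."""
    w, U = np.linalg.eigh(A)
    return U @ np.diag(f(w)) @ U.T

rng = np.random.default_rng(1)
n, k = 30, 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.concatenate([[1e-3, 5e-3, 1e-2], np.linspace(1.0, 10.0, n - k)])
A = Q @ np.diag(lam) @ Q.T                # SPD with 3 tiny "singular" eigenvalues
b = rng.standard_normal(n)
f = lambda t: t ** -0.5                   # monotone decreasing on (0, inf)

Vk, lam_k = Q[:, :k], lam[:k]             # step (1): AIS for the singular part
b2 = b - Vk @ (Vk.T @ b)                  # step (2): deflated vector
u_sing = Vk @ (f(lam_k) * (Vk.T @ b))     # exact action on the singular part
u_reg = fA(A, f) @ b2                     # step (3): Lanczos in practice
u = u_sing + u_reg                        # step (4): recombine
err = np.linalg.norm(u - fA(A, f) @ b)
```

With an exactly invariant subspace the splitting is exact, so `err` is at machine-precision level; with an AIS it is small but nonzero.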
The paper is organised as follows. In Section 2, we use MTT to formulate the matrix representation of FPE to accommodate nonhomogeneous boundary conditions. We also consider the approximation of the matrix function using the Lanczos method with thick restart and adaptive preconditioning. In Section 3, we give an upper bound on the error cast in terms of the linear system residual. In Section 4, we derive an analytic solution to the fractional Poisson equation using the spectral representation of the Laplacian, and in Section 5, we give the results of our algorithm when applied to two different problems, which highlight the importance of using our adaptively preconditioned Lanczos method. In Section 6, we give the conclusions of our work and hint at future research directions.

#### 2. Matrix Function Approximation and Solution Strategy

The general numerical solution procedure MTT is implemented as follows. First, apply a standard spatial discretisation process, such as the finite volume, finite element, or finite difference method, to the standard Poisson equation (i.e., $\alpha = 2$ in system (1.2)) in the case of homogeneous boundary conditions to obtain the matrix form (2.1), where it is assumed that $A$ is the finite difference matrix representation of $-\Delta$ and $h$ is the grid spacing. Then, as was discussed in [15], the solution of the FPE subject to homogeneous boundary conditions is approximated by the solution of the matrix function equation (2.2). Next, we apply the same finite difference method to the homogeneous Poisson equation (i.e., Laplace's equation) with nonhomogeneous boundary conditions. The resulting equations can be written in the matrix form (2.3), where the right-hand side represents the discretised boundary values, and the matrix $A$ is the same as given above. In other words, if $u$ does not satisfy homogeneous boundary conditions, then the modified representation (2.4) is used, involving the extended definition of the Laplacian (see [8] and also refer to Section 4 for further details). Thirdly, we follow [8] to write the fractional Laplacian in the form (2.5) and its matrix representation as (2.6). Hence, the matrix representation for the FPE is (2.7). Assuming that $A^{\alpha/2}$ has an inverse, the solution of this equation is (2.8).
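As an illustration of MTT in its simplest setting, the five-point finite-difference Laplacian and the resulting fractional solve can be sketched as follows. This is a dense reference implementation (the grid size and function names are ours); the paper's algorithm replaces the eigendecomposition with Krylov machinery:

```python
import numpy as np

def laplacian_2d(n, h):
    """Matrix representation A of the negative Laplacian: standard five-point
    stencil on an n-by-n interior grid, spacing h, homogeneous Dirichlet BCs."""
    T = (2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h ** 2
    return np.kron(np.eye(n), T) + np.kron(T, np.eye(n))

def mtt_solve(A, b, alpha):
    """MTT: solve A^{alpha/2} u = b, i.e. u = A^{-alpha/2} b, via the
    spectral decomposition of the SPD matrix A."""
    w, U = np.linalg.eigh(A)
    return U @ (w ** (-alpha / 2.0) * (U.T @ b))
```

For $\alpha = 2$ this reduces to the ordinary Poisson solve, and the fractional power obeys the expected semigroup property ($A^{-1/2}A^{-1/2} = A^{-1}$), which gives a simple consistency check.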

Our aim is to devise an efficient algorithm to approximate the solution in (2.8) using Krylov subspace methods. One notes from (2.8) that the solution comprises two distinct components, $\mathbf{u} = \mathbf{u}_1 + \mathbf{u}_2$, where $\mathbf{u}_1 = A^{-\alpha/2}\mathbf{b}_1$ and $\mathbf{u}_2 = A^{-1}\mathbf{b}_2$. We note further in this context that the scalar function $f(t) = t^{-\alpha/2}$ is monotone decreasing on $(0,\infty)$, and that $A$ is symmetric positive definite.

There exists a plethora of Krylov-based methods in the literature for approximately solving the linear system $A\mathbf{u}_2 = \mathbf{b}_2$ using, for example, conjugate gradient, FOM, or MINRES (see [19, 20]). Although preconditioning strategies are often employed to accelerate the convergence of many of these methods, we prefer not to adopt preconditioning here, so that the spectral information gathered about $A$ during this linear system solve can be recycled and used to aid the approximation of $\mathbf{u}_1$. As we will see, this recycling is effected through the use of thick restart [30, 32] and adaptive preconditioning [33, 34]. We emphasise that even if $M$ is a good preconditioner for $A$, it may not be useful for the matrix function approximation, since we cannot in general find a relation between $f(M^{-1}A)$ and $f(A)$. Thus, many efficient solvers used for the ordinary Poisson equation cannot be employed for the FPE. The adaptive preconditioner, however, can.

We begin our presentation of the numerical algorithm by briefly reviewing the solution of the linear system $A\mathbf{x} = \mathbf{b}$, where $A$ is symmetric positive definite, using the full orthogonalisation method (FOM) [19] together with thick restart [27, 30, 32].

##### 2.1. Stage 1—Thick Restarted, Adaptively Preconditioned, Lanczos Procedure

Suppose that the Lanczos decomposition of $A$ with respect to $\mathbf{b}$ is given by
$$AV_m = V_mT_m + \beta_m\mathbf{v}_{m+1}\mathbf{e}_m^T,$$
where the columns of $V_m$ form an orthonormal basis for $\mathcal{K}_m(A,\mathbf{b})$, and $m$ is the analytic grade defined in [31]. The analytic grade of order $\eta$ of the matrix $A$ with respect to $\mathbf{b}$ is defined as the lowest integer $m$ for which $\|(I - Q_m)A^m\mathbf{b}\| \le \eta\,\|A^m\mathbf{b}\|$, where $Q_m$ is the orthogonal projector onto the $m$th Krylov subspace and $\eta$ is a prescribed tolerance. The grade can be computed from the Lanczos algorithm using the matrices generated during the process: if $\mathbf{v}_1 = \mathbf{b}/\beta_0$ is the first column of $V_m$, and $\beta_1,\ldots,\beta_{m-1}$ are the subdiagonal entries of $T_m$, then $\|(I - Q_m)A^m\mathbf{b}\| = \beta_0\prod_{j=1}^{m}\beta_j$.
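The definition of the analytic grade can be checked directly with dense linear algebra (in practice the test is evaluated from the Lanczos quantities instead of explicit projections; the tolerance `eta` and names below are ours):

```python
import numpy as np

def analytic_grade(A, b, eta=1e-6):
    """Lowest m such that A^m b lies (relatively) within eta of the Krylov
    subspace K_m(A, b): a direct dense check of the definition."""
    n = len(b)
    V = b[:, None] / np.linalg.norm(b)       # ON basis of K_1 = span{b}
    x = b.copy()
    for m in range(1, n + 1):
        x = A @ x                            # x = A^m b
        r = x - V @ (V.T @ x)                # (I - Q_m) A^m b
        if np.linalg.norm(r) <= eta * np.linalg.norm(x):
            return m
        V = np.hstack([V, (r / np.linalg.norm(r))[:, None]])  # basis of K_{m+1}
    return n
```

For example, if $\mathbf{b}$ has components in only three eigendirections of $A$, the grade is 3, since $\mathcal{K}_3(A,\mathbf{b})$ is already invariant.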

In each restart, or cycle, that follows, the Lanczos decomposition is carried up to the analytic grade, which could be different for different cycles. Consequently, for ease of exposition, the grade subscript will be suppressed, so that the only subscript that appears throughout the description below refers to the cycle. Let $\mathbf{x}_0$ be some initial approximation to the solution and define $\mathbf{r}_0 = \mathbf{b} - A\mathbf{x}_0$.

Cycle 1
(i) Generate the Lanczos decomposition
$$AV_1 = V_1T_1 + \beta_1\mathbf{v}_{m+1}\mathbf{e}_m^T,$$
where $V_1 = [\mathbf{v}_1,\ldots,\mathbf{v}_m]$ has orthonormal columns with $\mathbf{v}_1 = \mathbf{r}_0/\|\mathbf{r}_0\|$, $T_1 = V_1^TAV_1$ is tridiagonal, $\beta_1 > 0$, and $\mathbf{v}_{m+1} \perp V_1$.
(ii) Obtain the approximate solution $\mathbf{x}_1 = \mathbf{x}_0 + V_1T_1^{-1}(\|\mathbf{r}_0\|\mathbf{e}_1)$, so that the Galerkin condition $\mathbf{r}_1 \perp \mathcal{K}_m(A,\mathbf{r}_0)$ holds, with residual
$$\mathbf{r}_1 = \mathbf{b} - A\mathbf{x}_1 = -\beta_1\left(\mathbf{e}_m^TT_1^{-1}\|\mathbf{r}_0\|\mathbf{e}_1\right)\mathbf{v}_{m+1}.$$
Test if $\|\mathbf{r}_1\| < \mathrm{tol}$. If yes, stop; otherwise, continue to cycle 2.

Cycle 2
(i) Find the eigenvalue decomposition of $T_1$, that is, $T_1 = S_1\Theta_1S_1^T$, where $\Theta_1 = \operatorname{diag}(\theta_1,\ldots,\theta_m)$.
(ii) Select the orthonormal (ON) eigenvectors $\mathbf{s}_1,\ldots,\mathbf{s}_k$ of $T_1$ corresponding to the $k$ smallest in magnitude eigenvalues of $T_1$ and form the Ritz vectors $\mathbf{y}_i = V_1\mathbf{s}_i$, where the $\mathbf{y}_i$ are ON, and let the associated Ritz values be stored in the diagonal matrix $\Theta^{(k)}$.
(iii) Set $Y_k = [\mathbf{y}_1,\ldots,\mathbf{y}_k]$, $\mathbf{v}_1 = \mathbf{r}_1/\|\mathbf{r}_1\|$, and generate the thick-restart Lanczos decomposition
$$A[Y_k\ V_2] = [Y_k\ V_2]T_2 + \beta_2\mathbf{v}_{m+1}\mathbf{e}_{k+m}^T,$$
where $T_2$ is tridiagonal apart from its leading block, which has arrowhead form with $\Theta^{(k)}$ on its diagonal and the coupling terms $\beta_1\mathbf{e}_m^T\mathbf{s}_i$ bordering it.
(iv) Obtain the approximate solution $\mathbf{x}_2$ and residual $\mathbf{r}_2$ as in cycle 1. Test if $\|\mathbf{r}_2\| < \mathrm{tol}$. If yes, stop; otherwise, continue to the next cycle.

Cycle $j$
(i) Find the eigenvalue decomposition of $T_{j-1}$, that is, $T_{j-1} = S_{j-1}\Theta_{j-1}S_{j-1}^T$.
(ii) Select the $k$ orthonormal (ON) eigenvectors of $T_{j-1}$ corresponding to the smallest in magnitude eigenvalues of $T_{j-1}$ and form the Ritz vectors $Y_k = [\mathbf{y}_1,\ldots,\mathbf{y}_k]$.
(iii) Set $\mathbf{v}_1 = \mathbf{r}_{j-1}/\|\mathbf{r}_{j-1}\|$ and generate the thick-restart Lanczos decomposition
$$A[Y_k\ V_j] = [Y_k\ V_j]T_j + \beta_j\mathbf{v}_{m+1}\mathbf{e}_{k+m}^T,$$
where $T_j$ has a similar form to $T_2$.
(iv) Obtain the approximate solution $\mathbf{x}_j$ and residual $\mathbf{r}_j$ as in cycle 1. Test if $\|\mathbf{r}_j\| < \mathrm{tol}$. If yes, stop; otherwise, continue cycling.

###### 2.1.1. Construction of an Adaptive Preconditioner

Another important ingredient in the algorithm described above is the construction of an adaptive preconditioner [33, 34]. Let the thick-restart procedure at cycle $j$ produce the $k$ approximate smallest Ritz pairs $(\theta_i,\mathbf{y}_i)$, $i = 1,\ldots,k$. We then check if any of these Ritz pairs have converged to approximate eigenpairs of $A$ by testing the magnitude of the upper bound $\beta_j|\mathbf{e}_{k+m}^T\mathbf{s}_i|$ on the eigenpair residual norm $\|A\mathbf{y}_i - \theta_i\mathbf{y}_i\|$. The eigenpairs deemed to have converged are then locked and used to construct an adaptive preconditioner that can be employed during the next cycle to ensure that difficulties such as spuriousness can be avoided.

Suppose we collect the locked Ritz vectors as columns of the matrix $Y$, set $\Theta = \operatorname{diag}(\theta_1,\ldots,\theta_\ell)$, and form
$$M = I + Y\left(\lambda_{\max}\Theta^{-1} - I\right)Y^T,$$
where $\ell$ is the number of locked pairs and $\lambda_{\min}$, $\lambda_{\max}$ are the current estimates of the smallest and largest eigenvalues of $A$, respectively, obtained from the restart process. Then, $MA$ has the same eigenvectors as $A$; however, the locked eigenvalues are shifted to $\lambda_{\max}$ [33, 34]. Furthermore, it should be noted that these preconditioners can be nested. If $M_1, M_2, \ldots, M_s$ is a sequence of such preconditioners, then with $\mathcal{M}_0 = I$ and $\mathcal{M}_i = M_i\mathcal{M}_{i-1}$, the matrix $\mathcal{M}_sA$ has all of the locked eigenvalues shifted to $\lambda_{\max}$. Thus, during the later cycles (say cycle $j$) the adaptively preconditioned, thick-restart Lanczos decomposition
$$\mathcal{M}A[Y_k\ V_j] = [Y_k\ V_j]T_j + \beta_j\mathbf{v}_{m+1}\mathbf{e}_{k+m}^T$$
is employed.

Note. The preconditioner does not need to be explicitly formed; it can be applied in a straightforward manner from the stored locked Ritz pairs.
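The spectral effect of this construction can be demonstrated numerically. The sketch below uses exact eigenvectors as stand-ins for locked Ritz vectors, and the scaling that shifts locked eigenvalues to $\lambda_{\max}$, a common construction in the adaptive-preconditioning literature (the paper's exact scaling may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.concatenate([[1e-3, 1e-2], np.linspace(1.0, 10.0, n - 2)])
A = Q @ np.diag(lam) @ Q.T          # SPD with two troublesome small eigenvalues

# "locked" Ritz pairs (here exact eigenpairs) for the two smallest eigenvalues
Y, theta = Q[:, :2], lam[:2]
lam_max = lam[-1]

# adaptive preconditioner: M = I + Y (lam_max * Theta^{-1} - I) Y^T;
# M A shares A's eigenvectors, with the locked eigenvalues moved to lam_max
M = np.eye(n) + Y @ np.diag(lam_max / theta - 1.0) @ Y.T

w = np.sort(np.linalg.eigvals(M @ A).real)   # spectrum of the preconditioned matrix
```

The tiny eigenvalues `1e-3` and `1e-2` disappear from the spectrum of `M @ A`; the smallest remaining eigenvalue is 1, and nothing exceeds `lam_max`. Note also that, as the text states, `M` never needs to be formed explicitly: its action is two skinny matrix products with `Y`.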

In summary, stage 1 consists of employing the adaptively preconditioned Lanczos procedure outlined above to approximately solve the linear system $A\mathbf{u}_2 = \mathbf{b}_2$ for $\mathbf{u}_2$. At the completion of this process, the residual satisfies $\|\mathbf{r}\| < \mathrm{tol}$, and we have the set of locked Ritz pairs. This spectral information is then passed on to accelerate the performance of stage 2 of the solution process.

##### 2.2. Stage 2—Matrix Function Approximation Using an Adaptively Preconditioned Lanczos Procedure

At the completion of stage 1, we have generated an approximately invariant eigenspace associated with the smallest in magnitude eigenvalues of $A$. We now show how this spectral information can be recycled to aid with the approximation of $\mathbf{u}_1 = f(A)\mathbf{b}_1$, where $f(t) = t^{-\alpha/2}$.

Recall from stage 1 that we have available the locked Ritz pairs $(\Theta, Y)$ and the preconditioner $M = I + Y(\lambda_{\max}\Theta^{-1} - I)Y^T$, where $Y^TY = I$. The important observation at this point is the following relationship between $f(A)\mathbf{b}$ and $f(MA)\mathbf{b}$.

Proposition 2.1. Let $Y$ span an eigenspace of the symmetric matrix $A$ such that $AY = Y\Theta$, with $Y^TY = I$ and $\Theta = \operatorname{diag}(\theta_1,\ldots,\theta_\ell)$. Define $M = I + Y(\lambda_{\max}\Theta^{-1} - I)Y^T$, then for $f$ defined on the spectra of $A$ and $MA$,
$$f(MA)\mathbf{b} = f(A)\mathbf{b} + Y\left(f(\lambda_{\max})I - f(\Theta)\right)Y^T\mathbf{b}.$$

Proof. Let $Q = YY^T$, then $MAY = \lambda_{\max}Y$, while for any vector $\mathbf{w}$ in the $A$-invariant complement of $\operatorname{range}(Y)$ we have $MA\mathbf{w} = A\mathbf{w}$. Furthermore, $MA = A + Y(\lambda_{\max}I - \Theta)Y^T$ is symmetric with the same eigenvectors as $A$.
Thus, applying $f$ eigenvector by eigenvector gives $f(MA) = f(A) + Y\left(f(\lambda_{\max})I - f(\Theta)\right)Y^T$. By noting that $f(A)Y = Yf(\Theta)$, we obtain the main result on multiplying through by $\mathbf{b}$.

The following proposition shows that, as was the case for the solution of the linear system in stage 1, these preconditioners can be nested in the case of the matrix function approximation.

Proposition 2.2. Let $M_1,\ldots,M_s$ be a sequence of preconditioners as defined in Proposition 2.1, built from mutually orthogonal eigenspaces $Y_1,\ldots,Y_s$, then
$$f(M_s\cdots M_1A)\mathbf{b} = f(A)\mathbf{b} + \sum_{i=1}^{s}Y_i\left(f(\lambda_{\max})I - f(\Theta_i)\right)Y_i^T\mathbf{b}.$$

Proof. Let $\mathcal{M}_s = M_s\cdots M_1$ and $Q_i = Y_iY_i^T$, then observe that $Y_j^TY_i = 0$ for $i \neq j$, so the factors commute and $\mathcal{M}_sA = A + \sum_{i=1}^{s}Y_i(\lambda_{\max}I - \Theta_i)Y_i^T$. The result follows by applying the argument of Proposition 2.1 to each eigenspace in turn.

Corollary 2.3. Under the hypothesis of Proposition 2.1, one notes the equivalent form of (2.25) as
$$f(A)\mathbf{b} = f(MA)\mathbf{b} + Y\left(f(\Theta) - f(\lambda_{\max})I\right)Y^T\mathbf{b},$$
which appears similar to the idea of spectral splitting proposed in [27].

We now turn our attention to the approximation of $\mathbf{u}_1 = f(A)\mathbf{b}_1$ with $f(t) = t^{-\alpha/2}$, which by using Corollary 2.3 can be expressed as
$$\mathbf{u}_1 = f(MA)\mathbf{b}_1 + Y\left(f(\Theta) - f(\lambda_{\max})I\right)Y^T\mathbf{b}_1.$$
First note that the locked eigenvalues of $MA$ have been shifted into the regular part of the spectrum, so the Lanczos approximation of $f(MA)\mathbf{b}_1$ converges rapidly. We expand the Lanczos decomposition of $MA$ to the analytic grade $m$ with $\mathbf{v}_1 = \mathbf{b}_1/\|\mathbf{b}_1\|$. Next, perform the spectral decomposition $T_m = S_m\Theta_mS_m^T$, then compute the Lanczos approximation
$$f(MA)\mathbf{b}_1 \approx \|\mathbf{b}_1\|\,V_mS_mf(\Theta_m)S_m^T\mathbf{e}_1.$$
Based on the theory presented to this point, we propose the following algorithm to approximate the solution of the fractional Poisson equation.

Algorithm 2.4 (Computing the Solution of the FPE Problem). Stage 1. Solve $A\mathbf{u}_2 = \mathbf{b}_2$ using the thick restarted, adaptively preconditioned Lanczos method and generate the AIS spanned by the locked Ritz vectors. Return the preconditioner $M = I + Y(\lambda_{\max}\Theta^{-1} - I)Y^T$, where $Y$ holds the locked Ritz vectors and $\Theta$ the corresponding Ritz values.
Stage 2. Compute $\mathbf{u}_1 = A^{-\alpha/2}\mathbf{b}_1$ using the following strategy.
(1) Set $\mathbf{v}_1 = \mathbf{b}_1/\|\mathbf{b}_1\|$.
(2) Compute the Lanczos decomposition $MAV_m = V_mT_m + \beta_m\mathbf{v}_{m+1}\mathbf{e}_m^T$, where $m$ is the analytic grade of $MA$ with respect to $\mathbf{b}_1$.
(3) Perform the spectral decomposition $T_m = S_m\Theta_mS_m^T$.
(4) Compute the linear system residual and estimate $\lambda_1$ from $T_m$ to compute bound (3.9) derived in Section 3.
(5) If the bound is small, then approximate $f(MA)\mathbf{b}_1 \approx \|\mathbf{b}_1\|V_mS_mf(\Theta_m)S_m^T\mathbf{e}_1$ and exit to step (6); otherwise, continue the Lanczos expansion until the bound is satisfied.
(6) Form $\mathbf{u}_1 = f(MA)\mathbf{b}_1 + Y\left(f(\Theta) - f(\lambda_{\max})I\right)Y^T\mathbf{b}_1$, where $f(t) = t^{-\alpha/2}$.

Finally, compose the approximate solution of the FPE as $\mathbf{u} = \mathbf{u}_1 + \mathbf{u}_2$.
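Stage 2 can be sketched end-to-end in dense arithmetic. The locked pairs are taken here as exact eigenpairs (stand-ins for the output of stage 1), and the correction term follows the splitting form of Corollary 2.3 as stated above; all names are ours:

```python
import numpy as np

def lanczos_fAb(A, b, f, m):
    """Lanczos approximation f(A) b ~= ||b|| V_m f(T_m) e_1 (full reorth.)."""
    n = len(b)
    V = np.zeros((n, m + 1))
    a, be = np.zeros(m), np.zeros(m)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        a[j] = V[:, j] @ w
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        be[j] = np.linalg.norm(w)
        V[:, j + 1] = w / be[j]
    T = np.diag(a) + np.diag(be[:m - 1], 1) + np.diag(be[:m - 1], -1)
    th, S = np.linalg.eigh(T)
    return np.linalg.norm(b) * (V[:, :m] @ (S @ (f(th) * S[0, :])))

rng = np.random.default_rng(3)
n, k = 30, 2
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.concatenate([[1e-3, 1e-2], np.linspace(1.0, 10.0, n - k)])
A = Q @ np.diag(lam) @ Q.T
b1 = rng.standard_normal(n)
f = lambda t: t ** -0.75                     # f(t) = t^(-alpha/2) with alpha = 1.5

Y, theta, lmax = Q[:, :k], lam[:k], lam[-1]  # "locked" pairs from stage 1
M = np.eye(n) + Y @ np.diag(lmax / theta - 1.0) @ Y.T
# stage 2: Lanczos on the preconditioned matrix, plus the splitting correction
u1 = lanczos_fAb(M @ A, b1, f, 20) + Y @ ((f(theta) - f(lmax)) * (Y.T @ b1))

w, U = np.linalg.eigh(A)
err = np.linalg.norm(u1 - U @ (f(w) * (U.T @ b1)))   # compare with exact f(A) b1
```

Because the small eigenvalues of `M @ A` have been shifted away, a modest Krylov dimension (here 20) already yields a small error.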

Remarks
At stage 2, we monitor the upper bound given in Proposition 3.3 to check if the desired accuracy is achieved in the matrix function approximation. If the desired level is not attained, then it may be necessary to repeat the thick-restart procedure to determine the next smallest eigenvalues and their corresponding ON eigenvectors. In fact, this process may need to be repeated until there are no eigenvalues remaining in the “singular” part so that the accuracy of the approximation is dictated entirely by that of the linear system residual. We leave the design of this more sophisticated and generic algorithm for future research.

It is natural at this point to ask what is the accuracy of the approximation (2.33) for a given subspace dimension. Not knowing $A^{-\alpha/2}\mathbf{b}_1$ at the outset makes it impossible to answer this question exactly. Instead, we opt to provide an upper bound for the error, which is the topic of the following section.

#### 3. Error Bounds for the Numerical Solution

At first, we note that Churchill [35] uses complex integration around a branch point to derive
$$\lambda^{-\gamma} = \frac{\sin\gamma\pi}{\pi}\int_0^{\infty}\frac{t^{-\gamma}}{\lambda + t}\,dt, \quad 0 < \gamma < 1.$$
By changing the variable, one can deduce the following expression, for $0 < \alpha < 2$:
$$\lambda^{-\alpha/2} = \frac{\sin(\alpha\pi/2)}{\pi}\int_0^{\infty}\frac{t^{-\alpha/2}}{\lambda + t}\,dt.$$
Noting that $A$ is symmetric positive definite, the spectral decomposition and the usual definition of the matrix function enable the following expression for computing $\mathbf{u}_1$ to be obtained:
$$A^{-\alpha/2}\mathbf{b}_1 = \frac{\sin(\alpha\pi/2)}{\pi}\int_0^{\infty}t^{-\alpha/2}\left(tI + A\right)^{-1}\mathbf{b}_1\,dt.$$
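The scalar branch-point integral can be checked directly by quadrature (with $\gamma = \alpha/2$; splitting the range at $t = 1$ keeps the endpoint singularity and the infinite tail in separate quadrature calls):

```python
import numpy as np
from scipy.integrate import quad

def power_via_integral(lam, gamma):
    """lam**(-gamma) = (sin(gamma*pi)/pi) * int_0^inf t**(-gamma)/(lam+t) dt,
    valid for 0 < gamma < 1 and lam > 0."""
    f = lambda t: t ** (-gamma) / (lam + t)
    # integrable singularity at t = 0 handled on (0, 1); smooth tail on (1, inf)
    val = quad(f, 0.0, 1.0)[0] + quad(f, 1.0, np.inf)[0]
    return np.sin(gamma * np.pi) / np.pi * val
```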

Recall that the approximate solution of the shifted linear system $(tI + A)\mathbf{x} = \mathbf{b}_1$ obtained using the Galerkin approach (FOM or CG) on $\mathcal{K}_m(A,\mathbf{b}_1)$ is given by $\mathbf{x}_m(t) = V_m(tI + T_m)^{-1}\beta_0\mathbf{e}_1$, with residual $\mathbf{r}_m(t) = \mathbf{b}_1 - (tI + A)\mathbf{x}_m(t)$. We note the similarity to (2.33); however, a key observation is that the error in the matrix function approximation cannot be determined in such a straightforward manner as for the linear system [24]. The following proposition enables the error in the matrix function approximation to be expressed in terms of the integral expression given above in (3.3) and the residual of what is called a shifted linear system.

Proposition 3.1. Let $\mathbf{r}_m(t)$ be the residual of the shifted linear system $(tI + A)\mathbf{x} = \mathbf{b}_1$, then
$$A^{-\alpha/2}\mathbf{b}_1 - \beta_0V_mT_m^{-\alpha/2}\mathbf{e}_1 = \frac{\sin(\alpha\pi/2)}{\pi}\int_0^{\infty}t^{-\alpha/2}\left(tI + A\right)^{-1}\mathbf{r}_m(t)\,dt.$$

Proof. It is known that the error in the Galerkin approximation of the shifted system satisfies $(tI + A)^{-1}\mathbf{b}_1 - \mathbf{x}_m(t) = (tI + A)^{-1}\mathbf{r}_m(t)$. Substituting this identity into the integral representation (3.3) of $A^{-\alpha/2}\mathbf{b}_1$, and into the corresponding representation of $T_m^{-\alpha/2}$, gives the result.

It is interesting to observe that, for the Lanczos approximation, each residual $\mathbf{r}_m(t)$ is a scalar multiple of $\mathbf{v}_{m+1}$, so that the vectors $\mathbf{r}_m(t)$ and $\mathbf{r}_m(0)$ are aligned; however, their magnitudes are different. An even more important result is the following relationship between their norms.

Proposition 3.2. Let $T_m$ have eigendecomposition $T_m = S_m\Theta_mS_m^T$, where $\Theta_m = \operatorname{diag}(\theta_1,\ldots,\theta_m)$ with $\theta_i$ the Ritz values for the Lanczos approximation, then for $t \ge 0$,
$$\|\mathbf{r}_m(t)\| = \prod_{i=1}^{m}\frac{\theta_i}{\theta_i + t}\,\|\mathbf{r}_m(0)\| \le \|\mathbf{r}_m(0)\|.$$

Proof. The result follows from [26], which gives the following polynomial characterisation for the residuals in terms of the Ritz values:
$$\mathbf{r}_m(t) = \prod_{i=1}^{m}\frac{\theta_i}{\theta_i + t}\,\mathbf{r}_m(0),$$
so that the two residuals are collinear. The result follows by taking the norm and noting that $\theta_i > 0$ for symmetric positive definite $A$.

We are now in a position to formulate an error bound essential for monitoring the accuracy of the Lanczos approximation (2.33).

Proposition 3.3. Let $\lambda_1$ be the smallest eigenvalue of $A$ and $\mathbf{r}_m = \mathbf{r}_m(0)$ the linear system residual obtained by solving the linear system $A\mathbf{x} = \mathbf{b}_1$ using FOM on the Krylov subspace $\mathcal{K}_m(A,\mathbf{b}_1)$, then for $0 < \alpha < 2$, one has
$$\left\|A^{-\alpha/2}\mathbf{b}_1 - \beta_0V_mT_m^{-\alpha/2}\mathbf{e}_1\right\| \le \lambda_1^{-\alpha/2}\|\mathbf{r}_m\|. \qquad (3.9)$$

Proof. Using the orthogonal diagonalisation of $A$, we obtain from Proposition 3.1 that
$$\left\|(tI + A)^{-1}\mathbf{r}_m(t)\right\| \le \frac{\|\mathbf{r}_m(t)\|}{t + \lambda_1}.$$
The result follows by taking norms, using Proposition 3.2 to bound $\|\mathbf{r}_m(t)\| \le \|\mathbf{r}_m\|$, and evaluating
$$\frac{\sin(\alpha\pi/2)}{\pi}\int_0^{\infty}\frac{t^{-\alpha/2}}{t + \lambda_1}\,dt = \lambda_1^{-\alpha/2}.$$

The importance of this result is that it relates the error in the matrix function approximation to a scalar multiple of the linear system residual. This bound can be monitored during the Lanczos decomposition to deduce whether a specified tolerance has been reached in the matrix function approximation. Another key observation from Proposition 3.3 is that it motivates us to shift the small troublesome eigenvalues of $A$, via some form of preconditioning, so that the factor $\lambda_1^{-\alpha/2}$ remains of moderate size. In this way, the error in the function approximation is dominated entirely by the residual error.

#### 4. Analytic Solution

In this section, we discuss the analytic solution of the fractional Poisson equation, which can be used to verify the numerical solution strategy outlined in Section 2. The theory depends on the definition of the operator via spectral representation. The one-dimensional case was discussed in Ilić et al. [8], and the salient results for two dimensions are repeated here for completeness.

##### 4.1. Homogeneous Boundary Conditions

In operator theory, functions of operators are defined using spectral decomposition. Set $\Omega = (0,a)\times(0,b)$, and let $L^2(\Omega)$ be the real Hilbert space with real inner product $\langle u,v\rangle = \int_\Omega uv\,d\Omega$. Consider the operator $T = -\Delta$ on the domain of functions that are absolutely continuous and satisfy the homogeneous form of one of the boundary conditions in the FPE problem, that is, with right-hand side equal to zero.

It is known that $T$ is a closed self-adjoint operator whose eigenfunctions $\{\varphi_n\}$ form an orthonormal basis for $L^2(\Omega)$. Thus, for any $u \in L^2(\Omega)$,
$$u = \sum_{n}\langle u,\varphi_n\rangle\varphi_n.$$
If $f$ is a continuous function on the spectrum of $T$, then
$$f(T)u = \sum_{n}f(\lambda_n)\langle u,\varphi_n\rangle\varphi_n,$$
provided that $\sum_n\left|f(\lambda_n)\langle u,\varphi_n\rangle\right|^2 < \infty$. Hence, if the eigenvalue problem for $T$ can be solved for the region $\Omega$, then the FPE problem with homogeneous boundary conditions and source term $g$ can be easily solved to give
$$u = \sum_{n}\lambda_n^{-\alpha/2}\langle g,\varphi_n\rangle\varphi_n.$$
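This spectral solution can be evaluated directly for the unit square with homogeneous Dirichlet conditions (the setting of test problem 1 below), where the eigenfunctions are $2\sin(m\pi x)\sin(n\pi y)$ with eigenvalues $\pi^2(m^2+n^2)$. In the sketch below the source is supplied through its eigenfunction coefficients, which is the convenient form for checking single-mode sources (the interface is ours):

```python
import numpy as np

def fpe_series(f_coeffs, alpha, x, y):
    """u(x,y) = sum_{m,n} f_mn * lam_mn^{-alpha/2} * phi_mn(x,y) on the unit
    square, with phi_mn = 2 sin(m pi x) sin(n pi y), lam_mn = pi^2 (m^2+n^2).
    f_coeffs maps (m, n) -> eigenfunction coefficient of the source."""
    u = 0.0
    for (m, n), f_mn in f_coeffs.items():
        lam = np.pi ** 2 * (m ** 2 + n ** 2)
        u += (f_mn * lam ** (-alpha / 2.0)
              * 2.0 * np.sin(m * np.pi * x) * np.sin(n * np.pi * y))
    return u
```

For a single-mode source $g = \varphi_{11}$, the series collapses to $u = (2\pi^2)^{-\alpha/2}\varphi_{11}$, which gives an exact check.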

##### 4.2. Nonhomogeneous Boundary Conditions

Before we proceed further, we need to specify the definition of the fractional Laplacian $(-\Delta)^{\alpha/2}$ for functions that need not satisfy homogeneous boundary conditions.

Definition 4.1. Let $\{\varphi_n\}$ be a complete set of orthonormal eigenfunctions corresponding to eigenvalues $\{\lambda_n\}$ of the Laplacian on a bounded region $\Omega$ with homogeneous BCs on $\partial\Omega$. Let
$$\mathcal{F}_\gamma = \left\{u \in L^2(\Omega) : \sum_{n}\lambda_n^{\gamma}\left|\langle u,\varphi_n\rangle\right|^2 < \infty\right\}.$$
Then, for any $u \in \mathcal{F}_{\alpha}$, $(-\Delta)^{\alpha/2}u$ is defined by
$$(-\Delta)^{\alpha/2}u = \sum_{n}\lambda_n^{\alpha/2}\langle u,\varphi_n\rangle\varphi_n.$$
If zero is an eigenvalue and $\varphi_0$ is the eigenfunction corresponding to this eigenvalue, then one needs $\langle u,\varphi_0\rangle = 0$.

Proposition 4.2. (1) The operator $(-\Delta)^{\alpha/2}$ is linear and self-adjoint; that is, for $u, v$ in its domain, $\langle(-\Delta)^{\alpha/2}u, v\rangle = \langle u, (-\Delta)^{\alpha/2}v\rangle$. (2) If $\alpha = \alpha_1 + \alpha_2$, where $\alpha_1, \alpha_2 \ge 0$, then $(-\Delta)^{\alpha/2}u = (-\Delta)^{\alpha_1/2}(-\Delta)^{\alpha_2/2}u$.

For the FPE problem, Definition 4.1 may be too restrictive, since the functions we are interested in satisfy nonhomogeneous boundary conditions, and the resulting series may not converge, or may not converge uniformly.

Extension of Definition 4.1
(1) For functions satisfying the homogeneous boundary conditions, retain the definition of $(-\Delta)^{\alpha/2}$ given in Definition 4.1 (or other possibilities).
(2) For functions that do not, define $(-\Delta)^{\alpha/2}u = (-\Delta)^{\alpha/2-1}(-\tilde\Delta u)$, where $\tilde\Delta$ is the extension of $\Delta$ as defined by Proposition 4.3 below.
It suffices to consider the second case.

Proposition 4.3. Let $\varphi$ be an eigenfunction corresponding to the eigenvalue $\lambda$ of the Laplacian on the rectangle $\Omega$, and let $u$ satisfy the BCs in problem 1. Then, if $\tilde\Delta$ is an extension of $\Delta$ (in symbols $\Delta \subset \tilde\Delta$), one has, for $\lambda \neq 0$,
$$\langle -\tilde\Delta u, \varphi\rangle = \lambda\langle u,\varphi\rangle + B(u,\varphi),$$
where the conjunct $B(u,\varphi)$ collects the boundary terms arising from Green's formula. If $\lambda = 0$, the second term on the right-hand side becomes the corresponding boundary integral alone.

Proof. It is known that $\tilde\Delta$ is the extension of $\Delta$ whose domain is the same as that of $\Delta$ but without the boundary conditions, which is well documented in books on partial differential equations [7]. The result follows by calculating the conjunct (concomitant, or boundary form) using Green's formula
$$\int_\Omega\left(\varphi\,\Delta u - u\,\Delta\varphi\right)d\Omega = \oint_{\partial\Omega}\left(\varphi\frac{\partial u}{\partial n} - u\frac{\partial\varphi}{\partial n}\right)ds.$$
Thus,
$$\langle\tilde\Delta u,\varphi\rangle = \langle u,\Delta\varphi\rangle + \oint_{\partial\Omega}\left(\varphi\frac{\partial u}{\partial n} - u\frac{\partial\varphi}{\partial n}\right)ds,$$
which gives the result on substitution of $\Delta\varphi = -\lambda\varphi$.

This result can be readily used to write down the analytic solution to the FPE problem. First, we obtain the spectral representation of the operator by solving the eigenvalue problem $-\Delta\varphi = \lambda\varphi$ subject to the corresponding homogeneous boundary conditions. Knowing the eigenvalues $\lambda_{mn}$ and the corresponding orthonormal (ON) eigenfunctions $\varphi_{mn}$, we can use the finite-transform method with respect to $\varphi_{mn}$ and Proposition 4.3 to obtain the transform of the fractional equation, where the boundary data enter through $B(u,\varphi_{mn})$, the second term on the right-hand side in Proposition 4.3. Hence, the solution is recovered by summing the resulting eigenfunction series.

#### 5. Results and Discussion

In this section, we exhibit the results of applying Algorithm 2.4 to solve two FPE test problems. To assess the accuracy of our approximation, we compare the numerical solutions with the exact solution in each case.

Test Problem 1: FPE with Dirichlet Boundary Conditions
Solve the FPE on the unit square subject to homogeneous type I (Dirichlet) boundary conditions on the boundary $\partial\Omega$. For this problem, the ON eigenfunctions are given by
$$\varphi_{mn}(x,y) = 2\sin(m\pi x)\sin(n\pi y),$$
and the corresponding eigenvalues are $\lambda_{mn} = \pi^2(m^2 + n^2)$. The analytical solution is then given from Section 4 as
$$u(x,y) = \sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\lambda_{mn}^{-\alpha/2}\langle f,\varphi_{mn}\rangle\varphi_{mn}(x,y).$$
For the numerical solution, a standard five-point finite-difference formula with equal grid spacing $h$ in the $x$ and $y$ directions has been used to generate the block tridiagonal matrix $A$ given in (2.1). The parameters used to test this model are listed in Table 1.

Table 1: Physical parameters for test problem 1.

Test Problem 2: FPE with Mixed Boundary Conditions
Solve the FPE on the unit square subject to type III boundary conditions on the boundary, with given boundary coefficients. The analytical solution to this problem is given by the eigenfunction series of Section 4, where the eigenfunctions are products of one-dimensional eigenfunctions in $x$ and $y$, with normalisation factors fixed by the boundary coefficients. The eigenvalues are determined by finding the roots of a transcendental equation in the $x$-direction separation constant, with the $y$-direction constant determined from a similar equation. Finally, the solution is given by the series of Section 4 with these eigenpairs.
For the numerical solution, a standard five-point finite-difference formula with equal grid spacing was again employed in the $x$ and $y$ directions. However, in this example, additional finite-difference equations are required for the boundary nodes as a result of the type III boundary conditions. The block tridiagonal matrix required in (2.8) is then similar to that exhibited for test problem 1; however, it has larger dimension, and the boundary blocks must be modified to account for the boundary condition contributions.
The parameter values used for this problem are listed in Table 2.

Table 2: Physical parameters for test problem 2.
##### 5.1. Discussion of Results for Test Problem 1

A comparison of the numerical and analytical solutions for test problem 1 is exhibited in Figure 1 for different values of the fractional index $\alpha$ (with $\alpha = 2$ representing the solution of the classical Poisson equation). In all cases, it can be observed that good agreement is obtained between theory and simulation, with the analytical (solid contour lines) and numerical (dashed contour lines) solutions almost indistinguishable. In fact, Algorithm 2.4 consistently produced a numerical solution in close agreement with the analytical solution.

Figure 1: Comparisons of numerical (dashed lines) and analytical solutions (solid lines) for test problem 1 computed using Algorithm 2.4 for four increasing values of the fractional index $\alpha$, with panel (d) showing $\alpha = 2$ (classical case).

The impact of decreasing the fractional index $\alpha$ is particularly evident in Figure 2 from the shape and magnitude of the computed three-dimensional symmetric profiles. Low values of $\alpha$ produce a solution exhibiting a pronounced hump-like shape, with the diffusion rate low, the magnitude of the solution high at the centre, and steep gradients evident near the boundary of the solution domain. As $\alpha$ increases, the magnitude of the profile diminishes and the solution is much more diffuse and representative of a Gaussian process. These observations motivate the following remark.

Figure 2: Numerical solutions for test problem 1 for four increasing values of the fractional index $\alpha$, with panel (d) showing $\alpha = 2$ (classical case).

Remark 5.1. Over $\mathbb{R}$, the Riesz operator defined by
$$-(-\Delta)^{\alpha/2}u = \mathcal{F}^{-1}\left(-|\xi|^{\alpha}\hat{u}(\xi)\right),$$
where $u$ is a $C^{\infty}$ function with rapid decay at infinity and $\hat{u}$ denotes its Fourier transform,
is known to generate $\alpha$-stable processes. In fact, the Green's function of the space-fractional diffusion equation is the probability density function of a symmetric $\alpha$-stable process. When $\alpha = 1$, it is the density function of a Cauchy distribution, and when $\alpha = 2$ it is the classical Gaussian density. As $\alpha$ decreases, the tail of the density function becomes heavier and heavier. These behaviours are reflected in the numerical results given in the above example; namely, when $\alpha = 2$, the plots exhibit the bell shape of the Gaussian density, but when $\alpha$ is small, the curves are flatter, indicating very heavy tails as expected.

We now report on the performance of Algorithm 2.4 for computing the solution of the FPE. The numerical solutions shown in Figures 1 and 2 were generated using a standard five-point finite difference stencil to construct the matrix representation of the two-dimensional Laplacian operator. The $x$- and $y$-dimensions were divided equally to produce the symmetric positive definite matrix $A$. One notes for this problem that the homogeneous boundary conditions necessitate only the solution $\mathbf{u}_1 = A^{-\alpha/2}\mathbf{b}_1$. Algorithm 2.4 was still employed in this case; however, in stage 1 we at first solve the linear system $A\mathbf{x} = \mathbf{b}_1$ by the adaptively preconditioned thick restart procedure. The spectral information gathered from stage 1 is then used for the efficient computation of $A^{-\alpha/2}\mathbf{b}_1$ during stage 2.

Figure 3 depicts the reduction in the residual of the linear system and the error in the matrix function approximation during both stages of the solution process for test problem 1. For this test using FOM(25,10) (subspace size 25 with an additional 10 approximate Ritz vectors augmented at the front of the Krylov subspace), four restarts were required to reduce the linear system residual to the prescribed tolerance. This low tolerance was enforced to ensure that as many approximate eigenpairs of $A$ as possible could be computed and then locked during stage 1 for use in stage 2. An eigenpair was deemed converged when the residual in the approximate eigenpair fell below a small multiple of the current estimate of the largest eigenvalue of $A$. This process saw 1 eigenpair locked after 2 restarts, 5 locked after 3 restarts, and finally 9 locked after 4 restarts. From this figure, we also see that when subspace recycling is used for stage 2, only a small number of additional matrix-vector products is required to compute the solution to an acceptable accuracy. It is also worth pointing out that the error in the Lanczos approximation for this preconditioned matrix function reduces much more rapidly than for the case where preconditioning (dotted line) is not used. Furthermore, the Lanczos approximation in this example lies almost entirely on the curve that represents the optimal approximation obtainable from the Krylov subspace [26]. Finally, we see that the bound (3.9) can be used with confidence as a means for halting stage 2 once the desired accuracy in the bound is reached.

Figure 3: Residual reduction for test problem 1 computed using the two-stage process outlined in Algorithm 2.4.
##### 5.2. Discussion of Results for Test Problem 2

A comparison of the numerical and analytical solutions for test problem 2 is exhibited in Figure 4, again for several values of the fractional index. It can be seen that the agreement between theory and simulation is more than acceptable for this case, with Algorithm 2.4 producing a numerical solution within a small absolute error of the analytical solution. However, the impact of increasing the fractional index is less dramatic for problem 2.

Figure 4: Comparisons of numerical (dashed line) and analytical (solid line) solutions for test problem 2 computed using Algorithm 2.4 for four values of the fractional index, with (d) the classical case.

The numerical solutions shown in Figure 4 were again generated using a standard five-point finite-difference stencil to construct the matrix representation of the two-dimensional Laplacian operator. The x- and y-dimensions were divided into an equal number of divisions, resulting in a symmetric positive definite matrix. One notes for this problem that the type II boundary conditions have produced a small eigenvalue that undoubtedly will hinder the performance of restarted FOM.

Figure 5 depicts the reduction in the residual of the linear system and the error in the matrix function approximation during both stages of the solution process for test problem 2. Using FOM(25,10), a total of nine restarts was required to reduce the linear system residual to the requested tolerance. One notes that this is much higher than for problem 1, primarily due to the occurrence of small eigenvalues in the spectrum. The thick-restart process saw 1 eigenpair locked after 5 restarts, 4 locked after 6 restarts, and finally 10 locked after 9 restarts. From this figure, we also see that when subspace recycling is used for stage 2, only a modest number of additional matrix-vector products is required to compute the solution to an acceptable accuracy, which is clearly much less than in the unpreconditioned (dotted line) case. The Lanczos approximation in this example again lies almost entirely on the curve that represents the optimal approximation obtainable from the Krylov subspace. Finally, we see that the bound (3.9) can be used to halt stage 2 once the desired accuracy is reached.
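To illustrate why locking converged eigenpairs helps so much here, the following sketch (our own construction, in the spirit of adaptively preconditioned restarting, and not the paper's implementation) builds the deflation preconditioner whose action maps each captured eigendirection to eigenvalue one while leaving the orthogonal complement untouched:

```python
import numpy as np

def deflation_preconditioner(U, theta):
    """Given locked (approximate) eigenpairs A u_i ~ theta_i u_i, with
    orthonormal columns U and Ritz values theta, return the action of
        M^{-1} = U diag(1/theta) U^T + (I - U U^T),
    so that M^{-1} A has eigenvalue 1 on span(U)."""
    def apply(v):
        c = U.T @ v
        # (I - U U^T) v  +  U diag(1/theta) U^T v, without forming M^{-1}
        return v + U @ (c / theta - c)
    return apply
```

Applied to exact eigenvectors, the preconditioned operator reproduces them with eigenvalue one, which is precisely why locking the small eigenvalues found during stage 1 removes the slow modes that hinder restarted FOM on this problem.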

Figure 5: Residual reduction for test problem 2 computed using the two-stage process outlined in Algorithm 2.4.

#### 6. Conclusions

In this work, we have shown how the fractional Poisson equation can be approximately solved using a finite-difference discretisation of the Laplacian to produce an appropriate matrix representation of the operator. We then derived a matrix equation that involved both a linear system solution and a matrix function approximation with the matrix raised to the same fractional index as the Laplacian. We proposed an algorithm based on Krylov subspace methods that could be used to efficiently compute the solution of this matrix equation using a two-stage process. During stage 1, we used an adaptively preconditioned thick-restarted FOM method to approximately solve the linear system and then used recycled spectral information gathered during this restart process to accelerate the convergence of the matrix function approximation in stage 2. Two test problems were then presented to assess the accuracy of our algorithm, and good agreement with the analytical solution was noted in both cases. Future research will see higher-dimensional fractional diffusion equations solved using a similar approach via the finite volume method.

#### Acknowledgment

This work was supported financially by the Australian Research Council Grant no. LP0348653.

#### References

1. C. F. Lorenzo and T. T. Hartley, “Initialization, conceptualization, and application in the generalized fractional calculus,” NASA Center for Aerospace Information, Hanover, Md, USA, 1998.
2. R. Metzler and J. Klafter, “The random walk's guide to anomalous diffusion: a fractional dynamics approach,” Physics Reports, vol. 339, no. 1, pp. 1–77, 2000.
3. R. Metzler and J. Klafter, “The restaurant at the end of the random walk: recent developments in the description of anomalous transport by fractional dynamics,” Journal of Physics A, vol. 37, no. 31, pp. R161–R208, 2004.
4. D. A. Benson, S. W. Wheatcraft, and M. M. Meerschaert, “Application of a fractional advection-dispersion equation,” Water Resources Research, vol. 36, no. 6, pp. 1403–1412, 2000.
5. D. A. Benson, S. W. Wheatcraft, and M. M. Meerschaert, “The fractional-order governing equation of Lévy motion,” Water Resources Research, vol. 36, no. 6, pp. 1413–1423, 2000.
6. M. M. Meerschaert, J. Mortensen, and S. W. Wheatcraft, “Fractional vector calculus for fractional advection-dispersion,” Physica A, vol. 367, pp. 181–190, 2006.
7. B. Friedman, Principles and Techniques of Applied Mathematics, John Wiley & Sons, New York, NY, USA, 1966.
8. M. Ilić, F. Liu, I. W. Turner, and V. Anh, “Numerical approximation of a fractional-in-space diffusion equation—II. With nonhomogeneous boundary conditions,” Fractional Calculus & Applied Analysis, vol. 9, no. 4, pp. 333–349, 2006.
9. B. J. West and V. Seshadri, “Linear systems with Lévy fluctuations,” Physica A, vol. 113, no. 1-2, pp. 203–216, 1982.
10. R. Gorenflo and F. Mainardi, “Random walk models for space-fractional diffusion processes,” Fractional Calculus & Applied Analysis, vol. 1, no. 2, pp. 167–191, 1998.
11. R. Gorenflo and F. Mainardi, “Approximation of Lévy-Feller diffusion by random walk,” Journal for Analysis and Its Applications, vol. 18, no. 2, pp. 231–246, 1999.
12. I. P. Gavrilyuk, W. Hackbusch, and B. N. Khoromskij, “Data-sparse approximation to the operator-valued functions of elliptic operator,” Mathematics of Computation, vol. 73, no. 247, pp. 1297–1324, 2004.
13. W. Hackbusch and B. N. Khoromskij, “Low-rank Kronecker-product approximation to multidimensional nonlocal operators—part I. Separable approximation of multi-variate functions,” Computing, vol. 76, no. 3, pp. 177–202, 2006.
14. W. Hackbusch and B. N. Khoromskij, “Low-rank Kronecker-product approximation to multidimensional nonlocal operators—part II. HKT representation of certain operators,” Computing, vol. 76, no. 3, pp. 203–225, 2006.
15. M. Ilić, F. Liu, I. W. Turner, and V. Anh, “Numerical approximation of a fractional-in-space diffusion equation—I,” Fractional Calculus & Applied Analysis, vol. 8, no. 3, pp. 323–341, 2005.
16. G. Gripenberg and I. Norros, “On the prediction of fractional Brownian motion,” Journal of Applied Probability, vol. 33, no. 2, pp. 400–410, 1996.
17. Y. Hu, “Prediction and translation of fractional Brownian motions,” in Stochastics in Finite and Infinite Dimensions, T. Hida, R. L. Karandikar, H. Kunita, B. S. Rajput, S. Watanabe, and J. Xiong, Eds., Trends in Mathematics, pp. 153–171, Birkhäuser, Boston, Mass, USA, 2001.
18. G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins Studies in the Mathematical Sciences, The Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
19. Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, Philadelphia, Pa, USA, 2nd edition, 2003.
20. H. A. van der Vorst, Iterative Krylov Methods for Large Linear Systems, vol. 13 of Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, Cambridge, UK, 2003.
21. N. J. Higham, Functions of Matrices: Theory and Computation, SIAM, Philadelphia, Pa, USA, 2008.
22. V. Druskin and L. Knizhnerman, “Krylov subspace approximation of eigenpairs and matrix functions in exact and computer arithmetic,” Numerical Linear Algebra with Applications, vol. 2, no. 3, pp. 205–217, 1995.
23. M. Hochbruck and C. Lubich, “On Krylov subspace approximations to the matrix exponential operator,” SIAM Journal on Numerical Analysis, vol. 34, no. 5, pp. 1911–1925, 1997.
24. M. Eiermann and O. G. Ernst, “A restarted Krylov subspace method for the evaluation of matrix functions,” SIAM Journal on Numerical Analysis, vol. 44, no. 6, pp. 2481–2504, 2006.
25. L. Lopez and V. Simoncini, “Analysis of projection methods for rational function approximation to the matrix exponential,” SIAM Journal on Numerical Analysis, vol. 44, no. 2, pp. 613–635, 2006.
26. J. van den Eshof, A. Frommer, T. Lippert, K. Schilling, and H. A. van der Vorst, “Numerical methods for the QCD overlap operator—I. Sign-function and error bounds,” Computer Physics Communications, vol. 146, pp. 203–224, 2002.
27. M. Ilić and I. W. Turner, “Approximating functions of a large sparse positive definite matrix using a spectral splitting method,” The ANZIAM Journal, vol. 46(E), pp. C472–C487, 2005.
28. M. Ilić, I. W. Turner, and A. N. Pettitt, “Bayesian computations and efficient algorithms for computing functions of large, sparse matrices,” The ANZIAM Journal, vol. 45(E), pp. C504–C518, 2004.
29. R. B. Lehoucq and D. C. Sorensen, “Deflation techniques for an implicitly restarted Arnoldi iteration,” SIAM Journal on Matrix Analysis and Applications, vol. 17, no. 4, pp. 789–821, 1996.
30. A. Stathopoulos, Y. Saad, and K. Wu, “Dynamic thick restarting of the Davidson, and the implicitly restarted Arnoldi methods,” SIAM Journal on Scientific Computing, vol. 19, no. 1, pp. 227–245, 1998.
31. M. Ilić and I. W. Turner, “Krylov subspaces and the analytic grade,” Numerical Linear Algebra with Applications, vol. 12, no. 1, pp. 55–76, 2005.
32. R. B. Morgan, “A restarted GMRES method augmented with eigenvectors,” SIAM Journal on Matrix Analysis and Applications, vol. 16, no. 4, pp. 1154–1171, 1995.
33. J. Baglama, D. Calvetti, G. H. Golub, and L. Reichel, “Adaptively preconditioned GMRES algorithms,” SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 243–269, 1998.
34. J. Erhel, K. Burrage, and B. Pohl, “Restarted GMRES preconditioned by deflation,” Journal of Computational and Applied Mathematics, vol. 69, no. 2, pp. 303–318, 1996.
35. R. V. Churchill, Complex Variables and Applications, McGraw-Hill, New York, NY, USA, 2nd edition, 1960.