Journal of Applied Mathematics and Stochastic Analysis
Volume 2008 (2008), Article ID 104525, 26 pages
http://dx.doi.org/10.1155/2008/104525
Research Article

A Numerical Solution Using an Adaptively Preconditioned Lanczos Method for a Class of Linear Systems Related with the Fractional Poisson Equation

School of Mathematical Sciences, Queensland University of Technology, Qld 4001, Australia

Received 21 May 2008; Revised 10 September 2008; Accepted 23 October 2008

Academic Editor: Nikolai Leonenko

Copyright © 2008 M. Ilić et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This study considers the solution of a class of linear systems related with the fractional Poisson equation (FPE) $(-\nabla^2)^{\alpha/2}\varphi = g(x,y)$ with nonhomogeneous boundary conditions on a bounded domain. A numerical approximation to FPE is derived using a matrix representation of the Laplacian to generate a linear system of equations with its matrix $A$ raised to the fractional power $\alpha/2$. The solution of the linear system then requires the action of the matrix function $f(A) = A^{-\alpha/2}$ on a vector $b$. For large, sparse, and symmetric positive definite matrices, the Lanczos approximation generates $f(A)b \approx \beta_0 V_m f(T_m)e_1$. This method works well when both the analytic grade of $A$ with respect to $b$ and the residual for the linear system are sufficiently small. Memory constraints often require restarting the Lanczos decomposition; however, this is not straightforward in the context of matrix function approximation. In this paper, we use the ideas of thick restart and adaptive preconditioning for solving linear systems to improve convergence of the Lanczos approximation. We give an error bound for the new method and illustrate its role in solving FPE. Numerical results are provided to gauge the performance of the proposed method relative to exact analytic solutions.

1. Introduction

In recent times, the study of the fractional calculus and its applications in science and engineering has escalated [1–3]. The majority of papers dedicated to this topic discuss fractional kinetic equations of diffusion, diffusion-advection, and Fokker-Planck type to describe transport dynamics in complex systems that are governed by anomalous diffusion and nonexponential relaxation patterns [2, 3]. These papers provide comprehensive reviews of fractional/anomalous diffusion and an extensive collection of examples from a variety of application areas. A particular case of interest is the motion of solutes through aquifers discussed by Benson et al. [4, 5].

The generally accepted definition for the fractional Laplacian involves an integral representation (see [6] and the references therein) since the spectral resolution of the Laplacian operator over infinite domains is continuous; for the whole space, we use the Fourier transform, and for initial value problems we use the Laplace transform in time [7]. However, when dealing with finite domains, the fractional Laplacian subject to homogeneous boundary conditions is usually defined in terms of a summation involving the discrete spectrum. It is nontrivial to extend the latter definition to accommodate nonhomogeneous boundary conditions. To the best of our knowledge, there is no evidence in the literature that suggests this has been done apart from Ilić et al. [8], where the one-dimensional case was discussed. In this paper, we propose the extension to higher dimensions and illustrate the idea in the context of solving the fractional Poisson equation subject to nonhomogeneous boundary conditions on a bounded domain.

Space fractional diffusion equations have been investigated by West and Seshadri [9] and more recently by Gorenflo and Mainardi [10, 11]. Numerical methods for these fractional equations are still under development. Hackbusch and his group [12–14] have developed the theory of $\mathcal{H}$-matrices and algorithms that they claim to be of $N\log N$ complexity for computing functions of operators that are approximated by a finite difference (or other Galerkin scheme) discretisation matrix. However, the underlying theory is developed using integral representations of the matrix for separable coordinate systems and does not include a discussion of nonhomogeneous boundary conditions, which is essential for the fractional Poisson equation under investigation in this paper. Recently, Ilić et al. [8, 15] proposed a matrix representation of the fractional-in-space operator to produce a system of linear ordinary differential equations (ODEs) with a matrix representation of the Laplacian operator raised to the same fractional power. This approach, which was coined the matrix transfer technique (MTT), enabled either the standard finite element, finite volume, or finite difference methods to be exploited for the spatial discretisation of the operator.

In recent years, fractional Brownian motion (FBM) with Hurst index $H \in (0,1)$ has been used to introduce memory into the dynamics of diffusion processes. A prediction theory and other analytical results on FBM can be found in [16]. As shown in [17], a Girsanov-type formula for the Radon-Nikodym derivative of an FBM with drift with respect to the same FBM is determined by differential equations of fractional order with Dirichlet boundary conditions:

$$(-\nabla^2)^{\alpha/2}h(x) = g(x) \quad \text{if } x \in (0,T), \qquad h(x) = 0 \quad \text{if } x \notin (0,T), \tag{1.1}$$

for a certain integrable function $h(x)$ defined on $[0,T]$, where $g\colon[0,T]\to\mathbb{R}$. In this study, we extend problem (1.1) and investigate the solution of a steady-state space fractional diffusion equation with sources, hereafter referred to as the fractional Poisson equation (FPE), on some bounded domain $\Omega$ in two dimensions subject to either one or a combination of the usual (nonhomogeneous) boundary conditions of types I, II, or III imposed on the boundary $\partial\Omega$. Although the method we present for solving the FPE is equally applicable to two- and three-dimensional problems and the various coordinate systems used in the solution by separation of variables, we consider only the following problem here.

FPE Problem
Solve the fractional Poisson equation in a finite rectangle:

$$(-\nabla^2)^{\alpha/2}\varphi = g(x,y), \quad 0 < x < a,\ 0 < y < b, \tag{1.2}$$

subject to

$$-k_1\frac{\partial\varphi}{\partial x} + h_1\varphi = f_1(y) \ \text{at } x = 0, \qquad k_2\frac{\partial\varphi}{\partial x} + h_2\varphi = f_2(y) \ \text{at } x = a,$$
$$-k_3\frac{\partial\varphi}{\partial y} + h_3\varphi = f_3(x) \ \text{at } y = 0, \qquad k_4\frac{\partial\varphi}{\partial y} + h_4\varphi = f_4(x) \ \text{at } y = b. \tag{1.3}$$

We choose such a simple region so that an analytic solution can be found, which can be used subsequently to verify our numerical approach. Note also that this system captures type I boundary conditions ($k_i = 0$, $h_i = 1$, $i = 1,\ldots,4$) and type II boundary conditions ($h_i = 0$, $k_i = 1$, $i = 1,\ldots,4$). The latter case has to be analysed separately with care since $0$ is an eigenvalue that introduces singularities.
The use of our matrix transfer technique leads to the matrix representation of the FPE (1.2), which requires that the matrix function equation

$$A^{\alpha/2}\Phi = b \tag{1.4}$$

must be solved. Note that in (1.4), $A \in \mathbb{R}^{n\times n}$ denotes the matrix representation of the Laplacian operator obtained using any of the well-documented methods: finite difference, the finite volume method, or variational methods such as the Galerkin method using finite elements or wavelets, and $b = b_1 + A^{\alpha/2-1}b_2$, with $b_1 \in \mathbb{R}^n$ a vector containing the discrete values of the source/sink term, and $b_2 \in \mathbb{R}^n$ a vector that contains all of the discrete boundary condition information. We assume further that both the discretisation process and the implementation of the boundary conditions have been carried out to ensure that $A$ is symmetric positive definite, that is, $A \in \mathrm{SPD}$.
The general solution of (1.4) can be written as

$$\Phi = A^{-\alpha/2}b = A^{-\alpha/2}b_1 + A^{-1}b_2, \tag{1.5}$$

and one notes the need to determine both the action of the matrix function $f(A) = A^{-\alpha/2}$ on the vector $b_1$ and the action of the standard inverse on $b_2$, where the matrix $A$ can be large and sparse.
In the case where $\alpha = 2$, numerous authors have proposed efficient methods to deal directly with (1.5) using Krylov subspace methods and, in particular, the preconditioned generalised minimum residual (GMRES) iterative method (see, e.g., the texts by Golub and Van Loan [18], Saad [19], and van der Vorst [20]). In this paper, we investigate the use of Krylov subspace methods for computing an approximate solution for a range of values $0 < \alpha < 2$ and indicate how the spectral information gathered from first solving $A\Phi_2 = b_2$ can be recycled to obtain the complete solution $\Phi = \Phi_1 + \Phi_2$ in (1.5), where $\Phi_1 = f(A)b_1 = A^{-\alpha/2}b_1$.
In the literature, the majority of references deal with the extraction of an approximation to $f(A)v$ for a scalar analytic function $f(t)\colon D \subset \mathbb{C} \to \mathbb{C}$ using Krylov subspace methods (see [21, Chapter 13] and the references therein). Druskin and Knizhnerman [22], Hochbruck and Lubich [23], Eiermann and Ernst [24], Lopez and Simoncini [25], van den Eshof et al. [26], as well as many other researchers use the Lanczos approximation

$$f(A)v \approx \|v\|\,V_m f(T_m)e_1, \qquad v = \|v\|\,V_m e_1, \tag{1.6}$$

where

$$AV_m = V_m T_m + \beta_m v_{m+1}e_m^T \tag{1.7}$$

is the usual Lanczos decomposition, and the columns of $V_m$ form an orthonormal basis for the Krylov subspace $\mathcal{K}_m(A,v) = \mathrm{span}\{v, Av, \ldots, A^{m-1}v\}$. However, as noted by Eiermann and Ernst [24], all basis vectors must be stored to form this approximation, which may prove costly for large matrices. Restarting the process is by no means as straightforward as for the case $f(t) = 1/t$, and the restarted Arnoldi algorithm for computing $f(A)v$ given in [24] addresses this issue. Another issue worth pointing out is that although preconditioning linear systems is now well understood and numerous preconditioning strategies exist to accelerate the convergence of many iterative solvers based on Krylov subspace methods [19], preconditioning in many cases cannot be applied to $f(A)v$. For example, if $AM = B$, one can only deduce $f(A)$ from $f(B)$ in a limited number of special cases for $f(t)$.
In previous work by the authors [27], we proposed a spectral splitting method $f(A)v = Qf(\Lambda)Q^Tv + p_m(A)(I - QQ^T)v$, where $QQ^T$ is an orthogonal projector onto the invariant subspace associated with a set of eigenvalues on the "singular part" of the spectrum $\sigma(A)$ with respect to $f(t)$, and $I - QQ^T$ is an orthogonal projector onto the "regular part" of the spectrum. We refer to that part of the spectral interval where the function to be evaluated changes rapidly, with large values of its derivatives, as the singular part (see [27] for more details). The splitting was chosen in such a way that $p_m(t)$ was a low-degree polynomial (of degree at most 5). Thick restarting was used to construct the projector $QQ^T$ on the singular part. Unfortunately, the computational overhead associated with constructing the projector $QQ^T$, whilst maintaining the requirement of a low-degree polynomial approximation for $f(t)$ over the regular part, limits the application of the splitting method to a class of SPD matrices that have fairly compact spectra. The method appeared to work well for applications in statistics [27, 28].
In this paper, we build upon the splitting method idea in the manner outlined as follows to approximate $f(A)v$ for the monotone decreasing function $f(t) = t^{-q}$.
(1) Determine an approximately invariant subspace (AIS), $\mathrm{span}\{q_1,\ldots,q_k\}$, for the set of eigenvectors associated with the singular part of $\sigma(A)$ with respect to $f(t)$. Form $Q_k = [q_1,\ldots,q_k]$ and set $\Lambda_k = \mathrm{diag}\{\lambda_1,\lambda_2,\ldots,\lambda_k\}$, where the $\lambda_i$ are the eigenvalues associated with the eigenvectors $q_i$, $i = 1,\ldots,k$. The thick-restarted Lanczos method discussed in [27, 29] or [30] can be used for the AIS generation.
(2) Let $\tilde v = (I - Q_kQ_k^T)v$ and generate an orthonormal basis for $\mathcal{K}_\ell(A,\tilde v)$.
(3) Approximate $f(A)(I - Q_kQ_k^T)v \approx V_\ell f(T_\ell)V_\ell^T\tilde v$ using the Lanczos decomposition to analytic grade $\ell$, $AV_\ell = V_\ell T_\ell + \beta_\ell v_{\ell+1}e_\ell^T$ [31].
(4) Form $f(A)v \approx Q_kf(\Lambda_k)Q_k^Tv + V_\ell f(T_\ell)V_\ell^T\tilde v$.
To avoid components of any eigenvectors associated with the singular part reappearing in $\mathcal{K}_\ell(A,\tilde v)$, we show how this splitting strategy can be embedded in an adaptively constructed preconditioning of the matrix function.
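The four steps above can be sketched numerically for $f(t) = t^{-q}$. The following is an illustrative experiment, not the authors' implementation: the AIS is taken from an exact eigendecomposition (the paper obtains it by thick-restarted Lanczos), and the matrix, sizes, and spectrum are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, ell, q = 200, 5, 30, 0.5
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.concatenate([np.linspace(1e-3, 1e-2, k),      # "singular" part of sigma(A)
                       np.linspace(1.0, 10.0, n - k)])  # "regular" part
A = (U * eigs) @ U.T
v = rng.standard_normal(n)

# Step (1): AIS for the k eigenvalues where t^{-q} varies rapidly.
Qk, Lk = U[:, :k], eigs[:k]

# Step (2): deflate v and build an orthonormal Lanczos basis of K_ell(A, v~).
vt = v - Qk @ (Qk.T @ v)
V = np.zeros((n, ell))
V[:, 0] = vt / np.linalg.norm(vt)
for j in range(ell - 1):
    w = A @ V[:, j]
    w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)   # full reorthogonalisation
    V[:, j + 1] = w / np.linalg.norm(w)
T = V.T @ A @ V                                   # tridiagonal up to rounding

# Steps (3)-(4): f(A)v ~ Qk f(Lk) Qk^T v + V f(T) V^T v~.
mu, Y = np.linalg.eigh(T)
approx = Qk @ (Lk ** (-q) * (Qk.T @ v)) + V @ (Y @ (mu ** (-q) * (Y.T @ (V.T @ vt))))
exact = U @ (eigs ** (-q) * (U.T @ v))
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))   # small
```

Because the deflated vector $\tilde v$ is orthogonal to the singular eigenvectors, the Krylov subspace sees only the benign part of the spectrum and a modest $\ell$ suffices.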
The paper is organised as follows. In Section 2, we use MTT to formulate the matrix representation of FPE to accommodate nonhomogeneous boundary conditions. We also consider the approximation of $f(A)v = A^{-q}v$ using the Lanczos method with thick restart and adaptive preconditioning. In Section 3, we give an upper bound on the error cast in terms of the linear system residual. In Section 4, we derive an analytic solution to the fractional Poisson equation using the spectral representation of the Laplacian, and in Section 5, we give the results of our algorithm when applied to two different problems, which highlight the importance of using our adaptively preconditioned Lanczos method. In Section 6, we give the conclusions of our work and hint at future research directions.

2. Matrix Function Approximation and Solution Strategy

The general numerical solution procedure MTT is implemented as follows. First, apply a standard spatial discretisation process such as the finite volume, finite element, or finite difference method to the standard Poisson equation (i.e., $\alpha = 2$ in system (1.2)) in the case of homogeneous boundary conditions to obtain the following matrix form:

$$\frac{1}{h^2}A\Phi = \tilde g, \tag{2.1}$$

where it is assumed that $(1/h^2)A = m(-\nabla^2)$ is the finite difference matrix representation of the Laplacian, and $h$ is the grid spacing. $\Phi = m(\varphi)$ is the representation of $\varphi$, and $\tilde g = m(g)$ is the representation of $g$. Then, as was discussed in [15], the solution of FPE subject to homogeneous boundary conditions is approximated by the solution of the following matrix function equation:

$$\frac{1}{h^\alpha}A^{\alpha/2}\Phi = \tilde g. \tag{2.2}$$

Next, we apply the same finite difference method to the homogeneous Poisson equation (i.e., Laplace's equation) with nonhomogeneous boundary conditions. The resulting equations can be written in the following matrix form:

$$\frac{1}{h^2}A\Phi - b = 0, \tag{2.3}$$

where $b$ represents the discretised boundary values, and the matrix $A$ is the same as given above. In other words, if $\varphi$ does not satisfy homogeneous boundary conditions, then the modified representation

$$m\big((-\nabla^2)\varphi\big) = \frac{1}{h^2}A\Phi - b \tag{2.4}$$

is used, where $-\nabla^2$ denotes the extended definition of the Laplacian (see [8] and also refer to Section 4 for further details). Thirdly, we follow [8] to write the fractional Laplacian in the form

$$(-\nabla^2)^{\alpha/2} = (-\nabla^2)^{\alpha/2-1}(-\nabla^2), \tag{2.5}$$

and its matrix representation as

$$m\big((-\nabla^2)^{\alpha/2}\big) = m\big((-\nabla^2)^{\alpha/2-1}\big)\,m\big((-\nabla^2)\big). \tag{2.6}$$

Hence, the matrix representation for FPE is

$$\frac{1}{h^\alpha}A^{\alpha/2}\Phi = \tilde g + \frac{1}{h^{\alpha-2}}A^{\alpha/2-1}b. \tag{2.7}$$

Assuming that $A$ has an inverse, the solution of this equation is

$$\Phi = h^\alpha A^{-\alpha/2}\tilde g + h^2A^{-1}b. \tag{2.8}$$

Our aim is to devise an efficient algorithm to approximate the solution $\Phi$ in (2.8) using Krylov subspace methods. One notes from (2.8) that the solution comprises two distinct components, $\Phi = h^\alpha\Phi_1 + h^2\Phi_2$, where $\Phi_1 = A^{-q}\tilde g$, $\Phi_2 = A^{-1}b$, and $0 < q = \alpha/2 < 1$. We note further in this context that the scalar function $f(t) = t^{-q}$ is monotone decreasing on $\sigma(A)$, where $A \in \mathbb{R}^{n\times n}$ is symmetric positive definite.
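The MTT composition (2.8) can be checked directly on a small grid. The sketch below assumes homogeneous type I (Dirichlet) conditions, the standard 5-point Laplacian matrix (so $(1/h^2)A \approx -\nabla^2$), and an eigenfunction source; the grid size, $\alpha$, and the source are arbitrary illustrative choices, and the fractional power is evaluated by a dense eigendecomposition.

```python
import numpy as np

nx = 15
a = 1.0
h = a / (nx + 1)
alpha = 1.5
T1 = 2 * np.eye(nx) - np.eye(nx, k=1) - np.eye(nx, k=-1)
A = np.kron(np.eye(nx), T1) + np.kron(T1, np.eye(nx))   # SPD, n = nx^2

x = np.linspace(h, a - h, nx)
X, Y = np.meshgrid(x, x, indexing="ij")
g_tilde = (np.sin(np.pi * X) * np.sin(np.pi * Y)).ravel()  # source samples
b = np.zeros(nx * nx)                                      # no boundary data here

# Phi = h^alpha A^{-alpha/2} g~ + h^2 A^{-1} b, via eigendecomposition of A.
lam, U = np.linalg.eigh(A)
Phi = h**alpha * (U @ (lam ** (-alpha / 2) * (U.T @ g_tilde))) \
      + h**2 * np.linalg.solve(A, b)

# This source is an eigenfunction of -Laplacian with eigenvalue 2*pi^2, so the
# analytic FPE solution (Section 4) is g/(2*pi^2)^{alpha/2}.
phi_exact = g_tilde * (2 * np.pi**2) ** (-alpha / 2)
print(np.max(np.abs(Phi - phi_exact)) / np.max(np.abs(phi_exact)))
```

The discrepancy is the $O(h^2)$ discretisation error of the 5-point stencil; refining the grid shrinks it accordingly.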

There exists a plethora of Krylov-based methods available in the literature for approximately solving the linear system $A\Phi_2 = b$ using, for example, conjugate gradients, FOM, or MINRES (see [19, 20]). Although preconditioning strategies are often employed to accelerate the convergence of many of these methods, we prefer not to adopt preconditioning here so that spectral information gathered about $A$ during this linear system solve can be recycled and used to aid the approximation of $\Phi_1$. As we will see, this recycling is effected through the use of thick restart [30, 32] and adaptive preconditioning [33, 34]. We emphasise that even if $M$ is a good preconditioner for $A$, it may not be useful for $f(A)$ since we cannot, in general, find a relation between $f(A)$ and $f(AM^{-1})$. Thus, many efficient solvers used for the ordinary Poisson equation cannot be employed for the FPE. The adaptive preconditioner, however, can.

We begin our presentation of the numerical algorithm by briefly reviewing the solution of the linear system $A\Phi_2 = b$, where $A \in \mathbb{R}^{n\times n}$ is symmetric positive definite, using the full orthogonalisation method (FOM) [19] together with thick restart [27, 30, 32].

2.1. Stage 1: Thick-Restarted, Adaptively Preconditioned Lanczos Procedure

Suppose that the Lanczos decomposition of $A$ is given by

$$AV_\ell = V_\ell T_\ell + \beta_\ell v_{\ell+1}e_\ell^T = V_{\ell+1}\widetilde T_\ell, \tag{2.9}$$

where the columns of $V_\ell$ form an orthonormal basis for $\mathcal{K}_\ell(A,b)$, and $\ell$ is the analytic grade defined in [31]. The analytic grade of order $d$ of the matrix $A$ with respect to $b$ is defined as the lowest integer $\ell$ for which $\|u_\ell - \mathcal{P}_\ell u_\ell\|/\|u_\ell\| < 10^{-d}$, where $\mathcal{P}_\ell$ is the orthogonal projector onto the $\ell$th Krylov subspace $\mathcal{K}_\ell$ and $u_\ell = A^\ell b$. The grade can be computed from the Lanczos algorithm using the matrices $T_1, T_2, \ldots, T_\ell$ generated during the process. If $t_1$ is the first column of $T_1$, and $t_i = T_it_{i-1}$, for $i = 1,\ldots,\ell$, then $\|u_\ell - \mathcal{P}_\ell u_\ell\|/\|u_\ell\| = |e_{\ell+1}^Tt_\ell|/\|t_\ell\|$.
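The analytic-grade definition can be evaluated directly from its definition. In the sketch below, $\mathcal{P}_\ell$ is formed from an explicit orthonormal Krylov basis; the $t_i = T_it_{i-1}$ recursion in the text is an equivalent, cheaper way to evaluate the same ratio. The order $d$ and the cut-off `max_l` are illustrative.

```python
import numpy as np

def analytic_grade(A, b, d=4, max_l=100):
    """Lowest l with ||u_l - P_l u_l|| / ||u_l|| < 10^{-d}, u_l = A^l b."""
    V = (b / np.linalg.norm(b))[:, None]        # ON basis of K_1(A, b)
    u = b / np.linalg.norm(b)
    for l in range(1, max_l + 1):
        u = A @ u                               # u is parallel to A^l b
        u = u / np.linalg.norm(u)
        resid = u - V @ (V.T @ u)               # u_l - P_l u_l (normalised)
        if np.linalg.norm(resid) < 10.0 ** (-d):
            return l
        w = A @ V[:, -1]
        w -= V @ (V.T @ w)                      # extend the basis to K_{l+1}
        nw = np.linalg.norm(w)
        if nw < 1e-13:                          # K_l is already invariant
            return l
        V = np.hstack([V, (w / nw)[:, None]])
    return max_l
```

For instance, if $b$ lies in a three-dimensional invariant subspace of $A$ with distinct eigenvalues, the grade is exactly 3, since $A^3b$ then lies in $\mathcal{K}_3(A,b)$.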

In each restart, or cycle, that follows, the Lanczos decomposition is carried up to the analytic grade $\ell$, which could be different for different cycles. Consequently, for ease of exposition, the subscript $\ell$ will be suppressed so that the only subscript that appears throughout the description below refers to the cycle. Let $\Phi_2^{(0)}$ be some initial approximation to the solution $\Phi_2$ and define $r_0 = b - A\Phi_2^{(0)}$.

Cycle 1
(i) Generate the Lanczos decomposition

$$AV_1 = V_1T_1 + \beta_1u_1e^T, \tag{2.10}$$

where $V_1 = [v_1^{(1)},\ldots,v_\ell^{(1)}]$, $v_1^{(1)} = r_0/\beta_0$, $T_1$ is tridiagonal, $u_1 = v_{\ell+1}^{(1)}$, $\beta_0 = \|r_0\|$, $\beta_1 = \beta_\ell^{(1)}$, and $e^T = e_\ell^T$.
(ii) Obtain the approximate solution $\widetilde\Phi_2^{(1)} = V_1T_1^{-1}V_1^Tr_0$, so that

$$\Phi_2^{(1)} = \Phi_2^{(0)} + V_1T_1^{-1}V_1^Tr_0, \tag{2.11}$$

and residual

$$r_1 = b - A\Phi_2^{(1)} = r_0 - A\widetilde\Phi_2^{(1)} = -\beta_1\big(e^TT_1^{-1}V_1^Tr_0\big)u_1. \tag{2.12}$$

Test if $\|r_1\| < \varepsilon$. If yes, stop; otherwise, continue to cycle 2.

Cycle 2
(i) Find the eigenvalue decomposition of $T_1$, that is, $T_1Y = Y\Lambda$, where $\Lambda = \mathrm{diag}\{\theta_1,\ldots,\theta_\ell\}$.
(ii) Select the $k$ orthonormal (ON) eigenvectors, $Y_1$, of $T_1$ corresponding to the $k$ smallest in magnitude eigenvalues of $T_1$ and form the Ritz vectors

$$W_1 = V_1Y_1 = [w_1,\ldots,w_k], \tag{2.13}$$

where the $w_i$ are ON, and let the associated Ritz values be stored in the diagonal matrix $\Lambda_1 = \mathrm{diag}\{\theta_1,\ldots,\theta_k\}$.
(iii) Set $V_2 = [W_1, u_1]$ and generate the thick-restart Lanczos decomposition

$$AV_2 = V_2T_2 + \beta_2u_2e^T, \tag{2.14}$$

where $V_2 = [w_1,\ldots,w_k,v_1^{(2)},\ldots,v_\ell^{(2)}]$, $v_1^{(2)} = u_1$, $u_2 = v_{\ell+1}^{(2)}$, and

$$T_2 = \begin{bmatrix} \Lambda_1 & \beta_1s_1 & 0 & \cdots & 0 \\ \beta_1s_1^T & \alpha_{k+1} & \beta_{k+1} & & \vdots \\ 0 & \beta_{k+1} & \ddots & \ddots & \\ \vdots & & \ddots & \ddots & \beta_{k+\ell-1} \\ 0 & \cdots & & \beta_{k+\ell-1} & \alpha_{k+\ell} \end{bmatrix}, \quad\text{with } s_1 = Y_1^Te. \tag{2.15}$$

(iv) Obtain the approximate solution $\widetilde\Phi_2^{(2)} = V_2T_2^{-1}V_2^Tr_1$, so that

$$\Phi_2^{(2)} = \Phi_2^{(1)} + V_2T_2^{-1}V_2^Tr_1, \tag{2.16}$$

and residual

$$r_2 = b - A\Phi_2^{(2)} = r_1 - A\widetilde\Phi_2^{(2)} = -\beta_2\big(e^TT_2^{-1}V_2^Tr_1\big)u_2. \tag{2.17}$$

Test if $\|r_2\| < \varepsilon$. If yes, stop; otherwise, continue to the next cycle.

Cycle (𝑗+1)
(i) Find the eigenvalue decomposition of $T_j$, that is, $T_jY = Y\Lambda$.
(ii) Select the $k$ orthonormal (ON) eigenvectors, $Y_j$, of $T_j$ corresponding to the $k$ smallest in magnitude eigenvalues of $T_j$ and form the Ritz vectors $W_j = V_jY_j$.
(iii) Set $V_{j+1} = [W_j, u_j]$ and generate the thick-restart Lanczos decomposition

$$AV_{j+1} = V_{j+1}T_{j+1} + \beta_{j+1}u_{j+1}e^T, \tag{2.18}$$

where $T_{j+1}$ has a similar form to $T_2$.
(iv) Obtain the approximate solution $\widetilde\Phi_2^{(j+1)} = V_{j+1}T_{j+1}^{-1}V_{j+1}^Tr_j$, so that

$$\Phi_2^{(j+1)} = \Phi_2^{(j)} + \widetilde\Phi_2^{(j+1)} = \Phi_2^{(0)} + \sum_{i=1}^{j+1}V_iT_i^{-1}V_i^Tr_{i-1}, \tag{2.19}$$

and residual

$$r_{j+1} = b - A\Phi_2^{(j+1)} = -\beta_{j+1}\big(e^TT_{j+1}^{-1}V_{j+1}^Tr_j\big)u_{j+1}. \tag{2.20}$$

Test if $\|r_{j+1}\| < \varepsilon$. If yes, stop; otherwise, continue cycling.
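The cycles above can be condensed into a short sketch. In this illustration, $T$ is formed densely as $V^TAV$ rather than exploiting its arrowhead-plus-tridiagonal structure (2.15), the locking and adaptive preconditioning of Section 2.1.1 are omitted, and all sizes and tolerances are arbitrary.

```python
import numpy as np

def thick_restart_fom(A, b, ell=20, k=5, tol=1e-10, max_cycles=100):
    n = len(b)
    x = np.zeros(n)
    r = b.copy()
    W = np.zeros((n, 0))                         # retained Ritz vectors
    for cycle in range(1, max_cycles + 1):
        m0 = W.shape[1]
        V = np.zeros((n, m0 + ell))
        V[:, :m0] = W
        V[:, m0] = r / np.linalg.norm(r)         # FOM residual is orthogonal to W
        for j in range(m0, m0 + ell - 1):
            w = A @ V[:, j]
            w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)   # full reorthogonalisation
            V[:, j + 1] = w / np.linalg.norm(w)
        T = V.T @ A @ V
        x = x + V @ np.linalg.solve(T, V.T @ r)  # Galerkin (FOM) correction
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            return x, cycle
        theta, Y = np.linalg.eigh(T)
        keep = np.argsort(np.abs(theta))[:k]     # k smallest-magnitude Ritz values
        W = V @ Y[:, keep]
    return x, max_cycles
```

Retaining the small Ritz vectors between cycles progressively deflates the troublesome end of the spectrum, which is what later allows the same vectors to drive the adaptive preconditioner.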

2.1.1. Construction of an Adaptive Preconditioner

Another important ingredient in the algorithm described above is the construction of an adaptive preconditioner [33, 34]. Let the thick-restart procedure at cycle $j$ produce the $k$ approximate smallest Ritz pairs $\{\theta_i, w_i\}_{i=1}^k$, where $w_i = V_jy_i$. We then check if any of these Ritz pairs have converged to approximate eigenpairs of $A$ by testing the magnitude of the upper bound on the eigenpair residual

$$\|Aw_i - \theta_iw_i\| \le \beta_j|e^Ty_i| < \varepsilon_2. \tag{2.21}$$

The eigenpairs deemed to have converged are then locked and used to construct an adaptive preconditioner that can be employed during the next cycle to ensure that difficulties such as spuriousness can be avoided.

Suppose we collect the $p$ locked Ritz vectors as columns of the matrix $Q_j = [q_1,q_2,\ldots,q_p]$, set $\Lambda_j = \mathrm{diag}\{\theta_1,\ldots,\theta_p\}$, and form

$$M_j^{-1} = \gamma Q_j\Lambda_j^{-1}Q_j^T + I - Q_jQ_j^T, \tag{2.22}$$

where $\gamma = (\theta_{\min} + \theta_{\max})/2$, and $\theta_{\min}$, $\theta_{\max}$ are the current estimates of the smallest and largest eigenvalues of $A$, respectively, obtained from the restart process. Then, $A_j = AM_j^{-1}$ has the same eigenvectors as $A$; however, its eigenvalues $\{\lambda_i\}_{i=1}^p$ are shifted to $\gamma$ [33, 34]. Furthermore, it should be noted that these preconditioners can be nested. If $M_1, M_2, \ldots, M_j$ is a sequence of such preconditioners, then with $Q = [Q_1,Q_2,\ldots,Q_j]$ and $\Lambda = \mathrm{diag}(\Lambda_i,\ i = 1,\ldots,j)$, we have

$$M^{-1} = M_j^{-1}\cdots M_2^{-1}M_1^{-1} = \gamma Q\Lambda^{-1}Q^T + I - QQ^T. \tag{2.23}$$

Thus, during the cycles (say cycle $j+1$) the adaptively preconditioned, thick-restart Lanczos decomposition

$$AM^{-1}V_{j+1} = V_{j+1}T_{j+1} + \beta_{j+1}u_{j+1}e^T \tag{2.24}$$

is employed.

Note. The preconditioner π‘€βˆ’1 does not need to be explicitly formed; it can be applied in a straightforward manner from the stored locked Ritz pairs.
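The note above can be made concrete: applying $M^{-1} = \gamma Q\Lambda^{-1}Q^T + I - QQ^T$ needs only two thin matrix-vector products with $Q$ per application. A minimal sketch, with illustrative function and variable names:

```python
import numpy as np

def apply_Minv(v, Q, theta, gamma):
    """Apply M^{-1} = gamma*Q*diag(1/theta)*Q^T + (I - Q*Q^T) to v,
    using only the stored locked Ritz pairs (theta_i, q_i)."""
    c = Q.T @ v                               # coordinates in the locked basis
    return v + Q @ (gamma * c / theta - c)    # = gamma*Q*Lam^{-1}*Q^T*v + v - Q*Q^T*v
```

On any locked direction $q_i$, the preconditioned operator acts as $AM^{-1}q_i = \gamma q_i$, which is exactly the eigenvalue shift described above.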

In summary, stage 1 consists of employing the adaptively preconditioned Lanczos procedure outlined above to approximately solve the linear system $A\Phi_2 = b$ for $\Phi_2$. At the completion of this process, the residual satisfies $\|r\| = \|b - A\Phi_2\| < \varepsilon$, and we have the set $\{\theta_i, q_i\}_{i=1}^k$ of locked Ritz pairs. This spectral information is then passed on to accelerate the performance of stage 2 of the solution process.

2.2. Stage 2: Matrix Function Approximation Using an Adaptively Preconditioned Lanczos Procedure

At the completion of stage 1, we have generated an approximately invariant eigenspace $\mathcal{V} = \mathrm{span}\{q_1,q_2,\ldots,q_k\}$ associated with the smallest in magnitude eigenvalues of $A$. We now show how this spectral information can be recycled to aid with the approximation of $\Phi_1 = f(A)\tilde g$, where $f(t) = t^{-q}$.

2.2.1. Adaptive Preconditioning

Recall from stage 1 that we have available $M^{-1} = \gamma Q_k\Lambda_k^{-1}Q_k^T + I - Q_kQ_k^T$, where $Q_k = [q_1,\ldots,q_k]$. The important observation at this point is the following relationship between $f(A)$ and $f(AM^{-1})$.

Proposition 2.1. Let $\mathrm{span}\{q_1,q_2,\ldots,q_k\}$ be an eigenspace of the symmetric matrix $A$ such that $AQ_k = Q_k\Lambda_k$, with $Q_k = [q_1,q_2,\ldots,q_k]$ and $\Lambda_k = \mathrm{diag}(\mu_1,\ldots,\mu_k)$. Define $M = (1/\gamma)Q_k\Lambda_kQ_k^T + I - Q_kQ_k^T$; then for $v \in \mathbb{R}^n$,

$$f(A)v = \frac{1}{f(\gamma)}f(AM^{-1})f(\gamma M)v. \tag{2.25}$$

Proof. Let π‘Šπ‘Šπ‘‡=πΌβˆ’π‘„π‘˜π‘„π‘‡π‘˜,π‘Šπ‘‡π΄π‘Š=𝐡, then π‘€π‘€βˆ’1=π‘„π‘˜π‘„π‘‡π‘˜+π‘Šπ‘Šπ‘‡=𝐼=π‘€βˆ’1𝑀. Furthermore,
π‘€βˆ’1𝑄𝐴=π‘˜π‘Šξ€Έξƒ©π‘„π›ΎπΌ00𝐡ξƒͺξƒ©π‘‡π‘˜π‘Šπ‘‡ξƒͺ=π΄π‘€βˆ’1.(2.26)Thus,π‘“ξ€·π΄π‘€βˆ’1ξ€Έ=ξ€·π‘„π‘˜π‘Šξ€Έξƒ©π‘“ξ€·π›Ύξ€Έξ€·π΅ξ€Έπ‘„πΌ00𝑓ξƒͺξƒ©π‘‡π‘˜π‘Šπ‘‡ξƒͺ.(2.27)By noting that𝑓𝐴=ξ€·π‘„π‘˜π‘Šξ€Έξƒ©π‘“ξ€·Ξ›π‘˜ξ€Έ0𝐡𝑄0𝑓ξƒͺξƒ©π‘‡π‘˜π‘Šπ‘‡ξƒͺ,𝑓=ξ€·π‘„π›Ύπ‘€π‘˜π‘Šξ€Έξƒ©π‘“ξ€·Ξ›π‘˜ξ€Έ0𝛾𝐼𝑄0𝑓ξƒͺξƒ©π‘‡π‘˜π‘Šπ‘‡ξƒͺ,(2.28)we obtain the main result𝑓(𝐴)𝑓(𝛾)[𝑓(𝛾𝑀)]βˆ’1=ξ€·π‘„π‘˜π‘Šξ€Έξƒ©π‘“ξ€·π›Ύξ€Έξ€·π΅ξ€Έπ‘„πΌ00𝑓ξƒͺξƒ©π‘‡π‘˜π‘Šπ‘‡ξƒͺ=𝑓(π΄π‘€βˆ’1).(2.29)

The following proposition shows that, as was the case for the solution of the linear system in stage 1, these preconditioners can be nested in the case of the matrix function approximation.

Proposition 2.2. Let $M_1, M_2, \ldots, M_j$ be a sequence of preconditioners as defined in Proposition 2.1; then

$$f(A)v = \frac{1}{f(\gamma)}f(AM_1^{-1}M_2^{-1}\cdots M_j^{-1})f(\gamma M_1M_2\cdots M_j)v. \tag{2.30}$$

Proof. Let $Q = [Q_1,Q_2,\ldots,Q_j]$ and $\Lambda = \mathrm{diag}(\Lambda_i,\ i = 1,\ldots,j)$; then observe that $M = M_1M_2\cdots M_j = (1/\gamma)Q\Lambda Q^T + I - QQ^T$ and $f(A) = f(AM^{-1})(1/f(\gamma))f(\gamma M)$.

Corollary 2.3. Under the hypothesis of Proposition 2.1, one notes the equivalent form of (2.25):

$$f(A)v = Q_kf(\Lambda_k)Q_k^Tv + f(AM^{-1})(I - Q_kQ_k^T)v, \tag{2.31}$$

which appears similar to the idea of spectral splitting proposed in [27].
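Proposition 2.1 and Corollary 2.3 can be verified numerically for $f(t) = t^{-q}$. In the sketch below, the test matrix, $q$, and the choice of invariant subspace are arbitrary, and matrix functions are evaluated densely via eigendecompositions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, q = 60, 4, 0.5
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.linspace(0.01, 5.0, n)
A = (U * lam) @ U.T
Qk, Lk = U[:, :k], lam[:k]            # exact eigenpairs spanning the AIS
gamma = (lam[0] + lam[-1]) / 2
v = rng.standard_normal(n)
f = lambda t: t ** (-q)

def mfunc(S, f):                      # f(S) for symmetric S via eigh
    w, Z = np.linalg.eigh(S)
    return (Z * f(w)) @ Z.T

M = (1 / gamma) * (Qk * Lk) @ Qk.T + np.eye(n) - Qk @ Qk.T
AMinv = A @ np.linalg.inv(M)          # symmetric here: A and M share eigenvectors

lhs = mfunc(A, f) @ v
rhs = (1 / f(gamma)) * mfunc(AMinv, f) @ (mfunc(gamma * M, f) @ v)           # (2.25)
split = Qk @ (f(Lk) * (Qk.T @ v)) + mfunc(AMinv, f) @ (v - Qk @ (Qk.T @ v))  # (2.31)
```

Both the product form (2.25) and the additive splitting (2.31) reproduce $f(A)v$ to rounding error.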

We now turn our attention to the approximation of $\Phi_1 = A^{-q}\tilde g$, which by using Corollary 2.3 can be expressed as

$$A^{-q}\tilde g = \sum_{i=1}^k\theta_i^{-q}q_iq_i^T\tilde g + (AM^{-1})^{-q}\hat g, \tag{2.32}$$

where $\hat g = (I - Q_kQ_k^T)\tilde g$. First note that if $A \in \mathrm{SPD}$, then $AM^{-1} \in \mathrm{SPD}$. We expand the Lanczos decomposition $AM^{-1}V_\ell = V_\ell T_\ell + \beta_\ell v_{\ell+1}e_\ell^T$ to the analytic grade $\ell$ of $AM^{-1}$ with $v_1 = \hat g/\|\hat g\|$. Next, perform the spectral decomposition $T_\ell = Y_\ell\Lambda_\ell Y_\ell^T$ and set $\widetilde Q_\ell = V_\ell Y_\ell$; then compute the Lanczos approximation

$$(AM^{-1})^{-q}\hat g \approx V_\ell T_\ell^{-q}V_\ell^T\hat g = \widetilde Q_\ell\Lambda_\ell^{-q}\widetilde Q_\ell^T\hat g. \tag{2.33}$$

Based on the theory presented to this point, we propose the following algorithm to approximate the solution of the fractional Poisson equation.

Algorithm 2.4 (Computing the Solution of the FPE Problem). Stage 1. Solve $A\Phi_2 = b$ using the thick-restarted, adaptively preconditioned Lanczos method and generate the AIS, $\mathcal{Q}_k = \mathrm{span}\{q_1,\ldots,q_k\}$. Return the preconditioner $M = (1/\gamma)Q_k\Lambda_kQ_k^T + I - Q_kQ_k^T$, where $Q_k = [q_1,\ldots,q_k]$.
Stage 2. Compute $\Phi_1 = A^{-q}\tilde g$ using the following strategy.
(1) Set $\hat g = (I - Q_kQ_k^T)\tilde g$.
(2) Compute the Lanczos decomposition $AM^{-1}V_\ell = V_\ell T_\ell + \beta_\ell v_{\ell+1}e_\ell^T$, where $\ell$ is the analytic grade of $AM^{-1}$ and $V_\ell = [v_1,\ldots,v_\ell]$, with $v_1 = \hat g/\|\hat g\|$.
(3) Perform the spectral decomposition $T_\ell = Y_\ell\Lambda_\ell Y_\ell^T$.
(4) Compute the linear system residual $\|r_\ell\| = |\beta_\ell e_\ell^TY_\ell\Lambda_\ell^{-1}Y_\ell^TV_\ell^T\hat g|$ and estimate $\lambda_{\min} \approx \mu_{\min}$ from $T_\ell$ to compute the bound (3.9), $\mu_{\min}^{-q}\|r_\ell\|$, derived in Section 3.
(5) If the bound is small, then approximate $f(AM^{-1})\hat g \approx V_\ell T_\ell^{-q}V_\ell^T\hat g$ and exit to step (6); otherwise, continue the Lanczos expansion until the bound is satisfied.
(6) Form $\Phi_1 = f(A)\tilde g \approx Q_k\Lambda_k^{-q}Q_k^T\tilde g + \widetilde Q_\ell\Lambda_\ell^{-q}\widetilde Q_\ell^T\hat g$, where $\widetilde Q_\ell = V_\ell Y_\ell$.

Finally, compose the approximate solution of FPE as $\Phi = h^\alpha\Phi_1 + h^2\Phi_2$.
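An end-to-end sketch of Algorithm 2.4 on a small dense problem follows. Stage 1 is deliberately abbreviated: $\Phi_2$ comes from a direct solve and the AIS from an exact eigendecomposition, standing in for the thick-restart solver, and the sizes, $\alpha$, $h$, and data are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, alpha, h = 150, 6, 1.4, 0.1
q = alpha / 2
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.concatenate([np.linspace(1e-3, 1e-2, k), np.linspace(1.0, 8.0, n - k)])
A = (U * lam) @ U.T
g_tilde = rng.standard_normal(n)
b = rng.standard_normal(n)

# Stage 1 (abbreviated): Phi_2 and the AIS {theta_i, q_i}, then M^{-1}.
Phi2 = np.linalg.solve(A, b)
Qk, thetas = U[:, :k], lam[:k]
gamma = (lam.min() + lam.max()) / 2
Minv = gamma * (Qk / thetas) @ Qk.T + np.eye(n) - Qk @ Qk.T
AM = A @ Minv                                  # symmetric here

# Stage 2, steps (1)-(3): Lanczos on A M^{-1} started from g_hat.
g_hat = g_tilde - Qk @ (Qk.T @ g_tilde)
ell = 25
V = np.zeros((n, ell))
V[:, 0] = g_hat / np.linalg.norm(g_hat)
for j in range(ell - 1):
    w = AM @ V[:, j]
    w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)
    V[:, j + 1] = w / np.linalg.norm(w)
T = V.T @ AM @ V
mu, Y = np.linalg.eigh(T)

# Steps (4)-(5): linear-system residual norm and the bound mu_min^{-q}*||r||.
w = AM @ V[:, -1]
beta_ell = np.linalg.norm(w - V @ (V.T @ w))
r_norm = abs(beta_ell * (Y[-1, :] / mu) @ (Y.T @ (V.T @ g_hat)))
bound = mu.min() ** (-q) * r_norm

# Step (6) and the final composition Phi = h^alpha Phi_1 + h^2 Phi_2.
Qt = V @ Y
Phi1 = Qk @ (thetas ** (-q) * (Qk.T @ g_tilde)) + Qt @ (mu ** (-q) * (Qt.T @ g_hat))
Phi = h**alpha * Phi1 + h**2 * Phi2
```

In a practical run, `ell` would be grown until `bound` falls below the requested tolerance rather than fixed in advance.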

Remarks
At stage 2, we monitor the upper bound given in Proposition 3.3 to check if the desired accuracy is achieved in the matrix function approximation. If the desired level is not attained, then it may be necessary to repeat the thick-restart procedure to determine the next π‘˜ smallest eigenvalues and their corresponding ON eigenvectors. In fact, this process may need to be repeated until there are no eigenvalues remaining in the β€œsingular” part so that the accuracy of the approximation is dictated entirely by that of the linear system residual. We leave the design of this more sophisticated and generic algorithm for future research.

It is natural at this point to ask what the accuracy of the approximation (2.33) is for a given $\ell$. Not knowing $(AM^{-1})^{-q}\hat g$ at the outset makes it impossible to answer this question directly. Instead, we opt to provide an upper bound for the error $\|(AM^{-1})^{-q}\hat g - V_\ell T_\ell^{-q}V_\ell^T\hat g\|$, which is the topic of the following section.

3. Error Bounds for the Numerical Solution

At first, we note that Churchill [35] uses complex integration around a branch point to derive the following:

$$\int_0^\infty\frac{x^{-q}}{x+1}\,dx = \frac{\pi}{\sin(q\pi)}. \tag{3.1}$$

By a change of variable, one can deduce the following expression for $\lambda^{-q}$, $\lambda > 0$:

$$\lambda^{-q} = \frac{\sin(q\pi)}{(1-q)\pi}\int_0^\infty\frac{dt}{t^{1/(1-q)}+\lambda}. \tag{3.2}$$

Noting that $\widetilde A = AM^{-1} \in \mathrm{SPD}$, the spectral decomposition and the usual definition of the matrix function enable the following expression for computing $\widetilde A^{-q}$ to be obtained:

$$\widetilde A^{-q} = \frac{\sin(q\pi)}{(1-q)\pi}\int_0^\infty\big(t^{1/(1-q)}I+\widetilde A\big)^{-1}dt. \tag{3.3}$$
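The integral identity (3.2) is easy to check by quadrature; the test values of $q$ and $\lambda$ below are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

def lam_pow_integral(lam, q):
    """Evaluate lambda^{-q} via the representation (3.2)."""
    integrand = lambda t: 1.0 / (t ** (1.0 / (1.0 - q)) + lam)
    integral, _ = quad(integrand, 0.0, np.inf)
    return np.sin(q * np.pi) / ((1.0 - q) * np.pi) * integral

print(lam_pow_integral(2.0, 0.5), 2.0 ** -0.5)   # the two values agree
```

The integrand decays like $t^{-1/(1-q)}$, so the improper integral converges for all $0 < q < 1$.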

Recall that the approximate solution of the linear system $\widetilde Ax = v$ from $\mathcal{K}_\ell(\widetilde A,v)$ using the Galerkin approach (FOM or CG) is given by $x_\ell = V_\ell T_\ell^{-1}V_\ell^Tv$, with residual $r_\ell = v - \widetilde Ax_\ell = -(\beta_\ell e_\ell^TT_\ell^{-1}V_\ell^Tv)v_{\ell+1}$. We note the similarity to (2.33); however, a key observation is that the error in the matrix function approximation cannot be determined in such a straightforward manner as for the linear system [24]. The following proposition enables the error in the matrix function approximation to be expressed in terms of the integral expression given above in (3.3) and the residual of what is called a shifted linear system.

Proposition 3.1. Let $r_\ell(t) = v - \big(\widetilde A + t^{1/(1-q)}I\big)V_\ell\big(T_\ell + t^{1/(1-q)}I\big)^{-1}V_\ell^Tv$ be the residual of the shifted linear system $\big(\widetilde A + t^{1/(1-q)}I\big)x = v$; then

$$\widetilde A^{-q}v - V_\ell T_\ell^{-q}V_\ell^Tv = \frac{\sin(q\pi)}{(1-q)\pi}\int_0^\infty\big(t^{1/(1-q)}I+\widetilde A\big)^{-1}r_\ell(t)\,dt. \tag{3.4}$$

Proof. It is known that

$$\widetilde A^{-q}v - V_\ell T_\ell^{-q}V_\ell^Tv = \frac{\sin(q\pi)}{(1-q)\pi}\int_0^\infty\Big\{\big(t^{1/(1-q)}I+\widetilde A\big)^{-1} - V_\ell\big(t^{1/(1-q)}I+T_\ell\big)^{-1}V_\ell^T\Big\}v\,dt$$
$$= \frac{\sin(q\pi)}{(1-q)\pi}\int_0^\infty\big(t^{1/(1-q)}I+\widetilde A\big)^{-1}\Big\{v - \big(t^{1/(1-q)}I+\widetilde A\big)V_\ell\big(t^{1/(1-q)}I+T_\ell\big)^{-1}V_\ell^Tv\Big\}\,dt. \tag{3.5}$$

It is interesting to observe that $r_\ell(t) = -\big(\beta_\ell e_\ell^T(T_\ell + t^{1/(1-q)}I)^{-1}V_\ell^Tv\big)v_{\ell+1}$ for the Lanczos approximation, so that the vectors $r_\ell \equiv r_\ell(0)$ and $r_\ell(t)$ are aligned; however, their magnitudes are different. Note further that

$$r_\ell(t) = \frac{e_\ell^T\big(T_\ell + t^{1/(1-q)}I\big)^{-1}e_1}{e_\ell^TT_\ell^{-1}e_1}\,r_\ell(0). \tag{3.6}$$

An even more important result is the following relationship between their norms.

Proposition 3.2. Let $T_\ell$ have the eigendecomposition $T_\ell Y_\ell = Y_\ell\Lambda_\ell$, where $\Lambda_\ell = \mathrm{diag}(\mu_i,\ i = 1,\ldots,\ell)$ with the $\mu_i$ the Ritz values of the Lanczos approximation; then for $t > 0$,

$$\|r_\ell(t)\| = \left|\prod_{i=1}^\ell\frac{\mu_i}{\mu_i + t^{1/(1-q)}}\right|\|r_\ell\| \le \|r_\ell\|. \tag{3.7}$$

Proof. The result follows from [26], which gives the following polynomial characterisations of the residuals:

$$r_\ell = \frac{\pi_\ell(\widetilde A)v}{\pi_\ell(0)}, \qquad \pi_\ell(\tau) = \det(\tau I - T_\ell) = \prod_{i=1}^\ell(\tau - \mu_i),$$
$$r_\ell(t) = \frac{\pi_\ell^t\big(\widetilde A + t^{1/(1-q)}I\big)v}{\pi_\ell^t(0)}, \qquad \pi_\ell^t(\tau) = \det\Big(\tau I - \big(T_\ell + t^{1/(1-q)}I\big)\Big) = \prod_{i=1}^\ell\Big(\tau - \big(\mu_i + t^{1/(1-q)}\big)\Big), \tag{3.8}$$

so that $r_\ell(t) = \pi_\ell(\widetilde A)v/\pi_\ell^t(0) = \big(\pi_\ell(0)/\pi_\ell^t(0)\big)r_\ell = \prod_{i=1}^\ell\big(\mu_i/(\mu_i + t^{1/(1-q)})\big)r_\ell$. The result follows by taking the norm and noting that $t > 0$.

We are now in a position to formulate an error bound essential for monitoring the accuracy of the Lanczos approximation (2.33).

Proposition 3.3. Let $\lambda_{\min}$ be the smallest eigenvalue of $\widetilde A$ and $r_\ell$ the linear system residual obtained by solving the linear system $\widetilde Ax = \hat g$ using FOM on the Krylov subspace $\mathcal{K}_\ell(\widetilde A,\hat g)$; then for $0 < q < 1$, one has

$$\big\|\widetilde A^{-q}\hat g - V_\ell T_\ell^{-q}V_\ell^T\hat g\big\| \le \lambda_{\min}^{-q}\|r_\ell\|. \tag{3.9}$$

Proof. Using the orthogonal diagonalisation $\widetilde A = Q\Lambda Q^T$, we obtain from Proposition 3.1 that

$$\widetilde A^{-q}\hat g - V_\ell T_\ell^{-q}V_\ell^T\hat g = \frac{\sin(q\pi)}{(1-q)\pi}\int_0^\infty Q\big(t^{1/(1-q)}I+\Lambda\big)^{-1}Q^Tr_\ell(t)\,dt. \tag{3.10}$$

The result follows by taking norms and using Proposition 3.2 to obtain

$$\big\|\widetilde A^{-q}\hat g - V_\ell T_\ell^{-q}V_\ell^T\hat g\big\| \le \frac{\sin(q\pi)}{(1-q)\pi}\int_0^\infty\left\|\mathrm{diag}\left(\frac{1}{t^{1/(1-q)}+\lambda_i},\ i = 1,\ldots,n\right)\right\|dt\,\|r_\ell\| = \lambda_{\min}^{-q}\|r_\ell\|, \tag{3.11}$$

where the final equality follows from (3.2), since the norm of the diagonal matrix is $1/(t^{1/(1-q)}+\lambda_{\min})$.

The importance of this result is that it relates the error in the matrix function approximation to a scalar multiple of the linear system residual. This bound can be monitored during the Lanczos decomposition to deduce whether a specified tolerance has been reached in the matrix function approximation. Another key observation from Proposition 3.3 is that it motivates us to shift the small troublesome eigenvalues of $A$, via some form of preconditioning, so that $\lambda_{\min}\approx 1$. In this way, the error in the function approximation is dominated entirely by the residual error.
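The way the bound (3.9) is monitored can be sketched concretely. The following Python illustration (ours; it assumes a small dense SPD matrix so that $A^{-q}\hat g$ can be formed exactly for comparison) computes the Lanczos approximation $V_\ell T_\ell^{-q}V_\ell^T\hat g$, the FOM residual, and checks that the true error sits below $\lambda_{\min}^{-q}\|r_\ell\|$:

```python
import numpy as np

def lanczos(A, v, ell):
    """Lanczos with full reorthogonalisation: returns V_l and tridiagonal T_l."""
    n = len(v)
    V = np.zeros((n, ell))
    a, b = np.zeros(ell), np.zeros(ell)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(ell):
        w = A @ V[:, j] - (b[j - 1] * V[:, j - 1] if j else 0.0)
        a[j] = V[:, j] @ w
        w -= a[j] * V[:, j]
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)   # reorthogonalise
        if j < ell - 1:
            b[j] = np.linalg.norm(w)
            V[:, j + 1] = w / b[j]
    return V, np.diag(a) + np.diag(b[: ell - 1], 1) + np.diag(b[: ell - 1], -1)

rng = np.random.default_rng(1)
n, ell, q = 300, 20, 0.5                        # q = alpha/2
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.linspace(0.5, 10.0, n)                 # lambda_min = 0.5
A = Q @ np.diag(lam) @ Q.T
g = rng.standard_normal(n)

V, T = lanczos(A, g, ell)
e1 = np.zeros(ell)
e1[0] = np.linalg.norm(g)

w, Y = np.linalg.eigh(T)
approx = V @ (Y @ (w ** (-q) * (Y.T @ e1)))     # V_l T_l^{-q} V_l^T g
exact = Q @ (lam ** (-q) * (Q.T @ g))           # A^{-q} g
r = g - A @ (V @ np.linalg.solve(T, e1))        # FOM residual for A x = g
bound = lam.min() ** (-q) * np.linalg.norm(r)   # right-hand side of (3.9)
err = np.linalg.norm(exact - approx)
print(err, bound)
```

Because the bound involves only $\lambda_{\min}$ and the computable residual norm, it can serve as a cheap stopping criterion while the decomposition is in progress.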

4. Analytic Solution

In this section, we discuss the analytic solution of the fractional Poisson equation, which can be used to verify the numerical solution strategy outlined in Section 2. The theory depends on the definition of the operator $(-\nabla^2)^{\alpha/2}$ via its spectral representation. The one-dimensional case was discussed in Ilić et al. [8], and the salient results for two dimensions are repeated here for completeness.

4.1. Homogeneous Boundary Conditions

In operator theory, functions of operators are defined using spectral decomposition. Set $\Omega = \{(x,y)\mid 0<x<a,\ 0<y<b\}$, and let $\mathcal{H}$ be the real space $\mathcal{L}^2(\Omega)$ with real inner product $\langle u,v\rangle = \int\!\!\int_\Omega uv\,dS$. Consider the operator
$$ T\varphi = -\Bigl(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\Bigr)\varphi = (-\Delta)\varphi \tag{4.1} $$
on $\mathcal{D} = \{\varphi\in\mathcal{H}\mid \varphi_x,\varphi_y \text{ absolutely continuous};\ \varphi_x,\varphi_y,\varphi_{xx},\varphi_{xy},\varphi_{yy}\in\mathcal{L}^2(\Omega),\ \mathcal{B}(\varphi)=0\}$, where $\mathcal{B}(\varphi)$ is one of the boundary conditions in the FPE problem with right-hand side equal to zero.

It is known that $T$ is a closed self-adjoint operator whose eigenfunctions $\{\varphi_{ij}\}_{i,j=1}^\infty$ form an orthonormal basis for $\mathcal{H}$. Thus, $T\varphi_{ij} = \lambda_{ij}^2\varphi_{ij}$, $i,j=1,2,\dots$. For any $\varphi\in\mathcal{D}$,
$$ \varphi = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}c_{ij}\varphi_{ij}, \qquad c_{ij} = \langle\varphi,\varphi_{ij}\rangle, \qquad T\varphi = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\lambda_{ij}^2 c_{ij}\varphi_{ij}. \tag{4.2} $$
If $\psi$ is a continuous function on $\mathbb{R}$, then
$$ \psi(T)\varphi = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\psi(\lambda_{ij}^2)c_{ij}\varphi_{ij}, \tag{4.3} $$
provided that $\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}|\psi(\lambda_{ij}^2)c_{ij}|^2 < \infty$. Hence, if the eigenvalue problem for $T$ can be solved for the region $\Omega$, then the FPE problem with homogeneous boundary conditions can be easily solved to give
$$ \varphi(x,y) = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\frac{\langle g,\varphi_{ij}\rangle}{\lambda_{ij}^{\alpha}}\,\varphi_{ij}(x,y). \tag{4.4} $$
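The series (4.4) can be summed directly once the eigenpairs are known. As an illustration (ours, not part of the original paper), the sketch below evaluates a truncation of (4.4) on the unit square with the Dirichlet eigenfunctions $\varphi_{ij} = 2\sin(i\pi x)\sin(j\pi y)$, $\lambda_{ij}^2 = \pi^2(i^2+j^2)$, and a constant source $g\equiv g_0$; the function name and the truncation level $N$ are our choices:

```python
import numpy as np

def fpe_series(x, y, alpha, g0=1.0, N=200):
    """Truncated spectral solution (4.4) of (-Laplacian)^(alpha/2) phi = g0
    on the unit square with homogeneous Dirichlet boundary conditions."""
    i = np.arange(1, N + 1)
    ci = (1.0 - np.cos(i * np.pi)) / (i * np.pi)        # int_0^1 sin(i*pi*x) dx
    lam2 = np.pi**2 * (i[:, None]**2 + i[None, :]**2)   # eigenvalues lambda_ij^2
    # <g, phi_ij> = 2*g0*ci*cj and phi_ij(x, y) = 2*sin*sin, hence the factor 4
    coef = 4.0 * g0 * np.outer(ci, ci) / lam2 ** (alpha / 2.0)
    return np.einsum('ij,i,j->', coef,
                     np.sin(i * np.pi * x), np.sin(i * np.pi * y))
```

For $\alpha = 2$ this reproduces the classical Poisson solution, whose centre value is approximately 0.0737, and decreasing $\alpha$ increases the centre value, consistent with the hump-like profiles reported in Section 5.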

4.2. Nonhomogeneous Boundary Conditions

Before we proceed further, we need to specify the definition of $(-\nabla^2)^{\alpha/2}$.

Definition 4.1. Let $\{\varphi_{ij}\}$ be a complete set of orthonormal eigenfunctions corresponding to the eigenvalues $\lambda_{ij}^2\ (\ne 0)$ of the Laplacian $(-\Delta)$ on a bounded region $\Omega$ with homogeneous BCs on $\partial\Omega$. Let
$$ \mathfrak{F}_\gamma = \Bigl\{f\in\mathcal{D}(\Omega)\ \Bigm|\ \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\bigl|\lambda_{ij}^{\gamma}\bigr|\,\bigl|c_{ij}\bigr|^2 < \infty,\ c_{ij} = \langle f,\varphi_{ij}\rangle,\ \gamma = \max(\alpha,0)\Bigr\}. \tag{4.5} $$
Then, for any $f\in\mathfrak{F}_\gamma$, $(-\Delta)^{\alpha/2}f$ is defined by
$$ (-\Delta)^{\alpha/2}f = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\lambda_{ij}^{\alpha}c_{ij}\varphi_{ij}. \tag{4.6} $$
If one of the $\lambda_{ij}^2 = 0$ and $\varphi_0$ is the eigenfunction corresponding to this eigenvalue, then one also needs $\langle f,\varphi_0\rangle = 0$.

Proposition 4.2. (1) The operator $T = (-\Delta)^{\alpha/2}$ is linear and self-adjoint; that is, for $f,g\in\mathfrak{F}_\gamma$, $\langle Tf,g\rangle = \langle f,Tg\rangle$. (2) If $f\in\mathfrak{F}_\gamma$, where $\gamma = \max(0,\alpha,\beta,\alpha+\beta)$, then
$$ (-\Delta)^{\alpha/2}(-\Delta)^{\beta/2}f = (-\Delta)^{(\alpha+\beta)/2}f = (-\Delta)^{\beta/2}(-\Delta)^{\alpha/2}f. \tag{4.7} $$

For $\alpha>0$, Definition 4.1 may be too restrictive, since the functions we are interested in satisfy nonhomogeneous boundary conditions, and the resulting series may fail to converge, or may not converge uniformly.

Extension of Definition 4.1
(1)For 𝛼=2π‘š,π‘š=0,1,2,…, define (βˆ’Ξ”)𝛼/2𝑓=(βˆ’Ξ”)π‘šπ‘“ for any π‘“βˆˆπΆ2π‘š(Ξ©) (or other possibilities).(2)For π‘šβˆ’1<𝛼/2<π‘š,π‘š=1,2,…, define (βˆ’Ξ”)𝛼/2𝑓=𝑇𝑔, where 𝑔=(βˆ’Ξ”)π‘šβˆ’1π‘“βˆˆπΆ2(π‘šβˆ’1)(Ξ©), and 𝑇 is the extension of 𝑇=(βˆ’Ξ”)𝛼/2+1βˆ’π‘š as defined by Proposition 4.3 below.
It suffices to consider $0<\alpha<2$.

Proposition 4.3. Let $\varphi_{ij}(x,y)$ be an eigenfunction corresponding to the eigenvalue $\lambda_{ij}^2\ (\ne 0)$ of the Laplacian $(-\Delta)$ on the rectangle $\Omega$, and let $\varphi$ satisfy the BCs in problem 1. Then, if $\widetilde{T}$ is an extension of $T = (-\Delta)^{\alpha/2}$ (in symbols $T\subset\widetilde{T}$) with adjoint $\widetilde{T}^{\,*}\subset T^{*}$,
$$ \langle\varphi_{ij},\widetilde{T}\varphi\rangle = \lambda_{ij}^{\alpha}\langle\varphi_{ij},\varphi\rangle - \lambda_{ij}^{\alpha-2}\Bigl\{\int_0^b \varphi_{ij}(0,y)\frac{f_1(y)}{k_1}\,dy + \int_0^b \varphi_{ij}(a,y)\frac{f_2(y)}{k_2}\,dy + \int_0^a \varphi_{ij}(x,0)\frac{f_3(x)}{k_3}\,dx + \int_0^a \varphi_{ij}(x,b)\frac{f_4(x)}{k_4}\,dx\Bigr\} \tag{4.8} $$
if $k_i\ne 0$, $i=1,\dots,4$. If $k_i = 0$, the second term on the right-hand side becomes
$$ -\lambda_{ij}^{\alpha-2}\Bigl\{\int_0^b \frac{\partial\varphi_{ij}}{\partial x}(0,y)\frac{f_1(y)}{h_1}\,dy - \int_0^b \frac{\partial\varphi_{ij}}{\partial x}(a,y)\frac{f_2(y)}{h_2}\,dy + \int_0^a \frac{\partial\varphi_{ij}}{\partial y}(x,0)\frac{f_3(x)}{h_3}\,dx - \int_0^a \frac{\partial\varphi_{ij}}{\partial y}(x,b)\frac{f_4(x)}{h_4}\,dx\Bigr\}. \tag{4.9} $$

Proof. It is known that
$$ \langle\varphi_{ij},\widetilde{T}\varphi\rangle = \langle\widetilde{T}^{\,*}\varphi_{ij},\varphi\rangle = \langle T^{*}\varphi_{ij},\varphi\rangle = \langle T\varphi_{ij},\varphi\rangle = \langle(-\Delta)(-\Delta)^{\alpha/2-1}\varphi_{ij},\varphi\rangle = \langle(-\Delta)^{\alpha/2-1}\varphi_{ij},(-\widetilde{\Delta})\varphi\rangle, \tag{4.10} $$
where $(-\widetilde{\Delta})$ is the extension of $(-\Delta)$ with domain $\mathcal{D}'(\Omega)$, which is the same as $\mathcal{D}(\Omega)$ but without the condition $\mathcal{B}(\varphi)=0$; this construction is well documented in books on partial differential equations [7]. It is done by calculating the conjunct (concomitant, or boundary form) using Green's formula
$$ \int\!\!\int_\Omega\bigl(v\nabla^2 u - u\nabla^2 v\bigr)\,dS = \int_{\partial\Omega}\Bigl(v\frac{\partial u}{\partial n} - u\frac{\partial v}{\partial n}\Bigr)\,ds. \tag{4.11} $$
Thus,
$$ \langle\varphi_{ij},\widetilde{T}\varphi\rangle = \lambda_{ij}^{\alpha-2}\langle\varphi_{ij},(-\widetilde{\Delta})\varphi\rangle = -\lambda_{ij}^{\alpha-2}\int\!\!\int_\Omega \varphi_{ij}\nabla^2\varphi\,dS = -\lambda_{ij}^{\alpha-2}\Bigl\{\int\!\!\int_\Omega \varphi\nabla^2\varphi_{ij}\,dS + \int_{\partial\Omega}\Bigl(\varphi_{ij}\frac{\partial\varphi}{\partial n} - \varphi\frac{\partial\varphi_{ij}}{\partial n}\Bigr)\,ds\Bigr\}, \tag{4.12} $$
which gives the result on substitution.

This result can be readily used to write down the analytic solution to the FPE problem. First, we obtain the spectral representation of the operator $T$ by solving the eigenvalue problem
$$ -\Delta\varphi = \lambda^2\varphi, \qquad \mathcal{B}(\varphi)=0. \tag{4.13} $$
Knowing the eigenvalues $\lambda_{ij}$ and the corresponding orthonormal (ON) eigenfunctions $\varphi_{ij}$, we can use the finite-transform method with respect to $\varphi_{ij}$, together with Proposition 4.3, to obtain
$$ \lambda_{ij}^{\alpha}\langle\varphi_{ij},\varphi\rangle = \langle g,\varphi_{ij}\rangle + \lambda_{ij}^{\alpha-2}b_{ij}, \tag{4.14} $$
where $\lambda_{ij}^{\alpha-2}b_{ij}$ is the second term on the right-hand side in Proposition 4.3. Hence,
$$ \varphi(x,y) = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\frac{\langle g,\varphi_{ij}\rangle}{\lambda_{ij}^{\alpha}}\varphi_{ij}(x,y) + \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\frac{b_{ij}}{\lambda_{ij}^{2}}\varphi_{ij}(x,y). \tag{4.15} $$

5. Results and Discussion

In this section, we exhibit the results of applying Algorithm 2.4 to solve two FPE test problems. To assess the accuracy of our approximation, we compare the numerical solutions with the exact solution in each case.

Test Problem 1: FPE with Dirichlet Boundary Conditions
Solve $(-\nabla^2)^{\alpha/2}\varphi = g_0/k$ on the unit square $[0,1]\times[0,1]$ subject to the type I boundary conditions $\varphi = 0$ on the boundary $\partial\Omega$. For this problem, the ON eigenfunctions are given by
$$ \varphi_{ij} = \bigl\{2\sin(i\pi x)\sin(j\pi y)\bigr\}_{i,j=1}^{\infty}, \tag{5.1} $$
and the corresponding eigenvalues are $\lambda_{ij}^2 = \pi^2(i^2+j^2)$. The analytical solution is then given from Section 4 as
$$ \varphi(x,y) = \frac{16g_0}{k\pi^2}\sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\frac{\sin[(2i+1)\pi x]\sin[(2j+1)\pi y]}{(2i+1)(2j+1)\bigl[(2i+1)^2\pi^2 + (2j+1)^2\pi^2\bigr]^{\alpha/2}}. \tag{5.2} $$
For the numerical solution, a standard five-point finite-difference formula with equal grid spacing $h=1/n$ in the $x$ and $y$ directions has been used to generate the block tridiagonal matrix $A\in\mathbb{R}^{(n-1)^2\times(n-1)^2}$ given in (2.1) as
$$ A = \begin{bmatrix} B & -I & & & \\ -I & B & -I & & \\ & \ddots & \ddots & \ddots & \\ & & -I & B & -I \\ & & & -I & B \end{bmatrix}, \qquad\text{where}\qquad B = \begin{bmatrix} 4 & -1 & & & \\ -1 & 4 & -1 & & \\ & -1 & 4 & -1 & \\ & & \ddots & \ddots & \\ & & & -1 & 4 \end{bmatrix}\in\mathbb{R}^{(n-1)\times(n-1)}. \tag{5.3} $$
The parameters used to test this model are listed in Table 1.
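The construction of (5.3) and the subsequent fractional solve $\Phi = h^{\alpha}A^{-\alpha/2}\tilde g$ can be sketched compactly via a Kronecker sum. The code below is our illustration only: the function name is ours, and the dense eigendecomposition stands in, on small grids, for the paper's Lanczos-based Algorithm 2.4:

```python
import numpy as np

def fpe_dirichlet(n, alpha, g0=1.0, k=1.0):
    """Discrete FPE solve on the unit square with Dirichlet BCs:
    build A of (5.3) as a Kronecker sum, then Phi = h^alpha A^(-alpha/2) g."""
    h = 1.0 / n
    m = n - 1                                              # interior nodes per direction
    B1 = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)  # 1-D [-1, 2, -1] stencil
    A = np.kron(np.eye(m), B1) + np.kron(B1, np.eye(m))    # block tridiagonal (5.3)
    w, Q = np.linalg.eigh(A)                               # A is SPD
    g = np.full(m * m, g0 / k)
    return (h**alpha * (Q @ (w ** (-alpha / 2.0) * (Q.T @ g)))).reshape(m, m)

phi = fpe_dirichlet(32, alpha=1.0)
print(phi[15, 15])          # value at the centre node x = y = 0.5
```

For $\alpha = 2$ the centre value agrees with the classical Poisson solution (approximately 0.0737) to within the $O(h^2)$ discretisation error, and the computed solution inherits the symmetry of the problem.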

Table 1: Physical parameters for test problem 1.

Test Problem 2: FPE with Mixed Boundary Conditions
Solve $(-\nabla^2)^{\alpha/2}\varphi = g_0/k$ on the unit square $[0,1]\times[0,1]$ subject to the type III boundary conditions
$$ -\frac{\partial\varphi}{\partial x} + H_1\varphi = H_1\varphi_\infty \ \text{at}\ x=0, \qquad \frac{\partial\varphi}{\partial x} + H_2\varphi = H_2\varphi_\infty \ \text{at}\ x=1, $$
$$ -\frac{\partial\varphi}{\partial y} + H_3\varphi = H_3\varphi_\infty \ \text{at}\ y=0, \qquad \frac{\partial\varphi}{\partial y} + H_4\varphi = H_4\varphi_\infty \ \text{at}\ y=1, \tag{5.4} $$
where $H_i = h_i/k$. The analytical solution to this problem is given by
$$ \varphi(x,y) = \varphi_\infty + \frac{g_0}{k}\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\frac{\alpha_{i,j}X(\mu_i,x)Y(\nu_j,y)}{\bigl(\mu_i^2+\nu_j^2\bigr)^{\alpha/2}N_x(\mu_i)N_y(\nu_j)}, \tag{5.5} $$
where the eigenfunctions are
$$ X(\mu_i,x) = \mu_i\cos(\mu_i x) + H_1\sin(\mu_i x), \qquad Y(\nu_j,y) = \nu_j\cos(\nu_j y) + H_3\sin(\nu_j y), \tag{5.6} $$
with normalisation factors
$$ N_x^2(\mu_i) = \frac{1}{2}\Bigl[\bigl(\mu_i^2+H_1^2\bigr)\Bigl(1+\frac{H_2}{\mu_i^2+H_2^2}\Bigr)+H_1\Bigr], \qquad N_y^2(\nu_j) = \frac{1}{2}\Bigl[\bigl(\nu_j^2+H_3^2\bigr)\Bigl(1+\frac{H_4}{\nu_j^2+H_4^2}\Bigr)+H_3\Bigr]. \tag{5.7} $$
The eigenvalues $\mu_i$ are determined by finding the roots of the transcendental equation
$$ \tan(\mu) = \frac{\mu(H_1+H_2)}{\mu^2 - H_1H_2}, \tag{5.8} $$
with the $\nu_j$ determined from a similar equation for $\nu$. Finally, $\alpha_{i,j}$ is given by
$$ \alpha_{i,j} = \int_0^1\!\!\int_0^1 \frac{X(\mu_i,\xi)\,Y(\nu_j,\eta)}{N_x(\mu_i)N_y(\nu_j)}\,d\xi\,d\eta. \tag{5.9} $$
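The roots of (5.8) must be found numerically. A small sketch (ours; it assumes SciPy and positive $H_1$, $H_2$, and the function name is our choice) brackets one root per interval $(i\pi,(i+1)\pi)$ using the singularity-free form $f(\mu) = (\mu^2-H_1H_2)\sin\mu - \mu(H_1+H_2)\cos\mu$, which avoids the poles of $\tan\mu$:

```python
import numpy as np
from scipy.optimize import brentq

def robin_eigenvalues(H1, H2, nroots=10):
    """First nroots positive roots mu_i of the transcendental equation (5.8),
    tan(mu) = mu*(H1 + H2)/(mu^2 - H1*H2), via a singularity-free reformulation."""
    def f(mu):
        return (mu**2 - H1 * H2) * np.sin(mu) - mu * (H1 + H2) * np.cos(mu)
    roots = []
    for i in range(nroots):
        # f changes sign exactly once on each interval (i*pi, (i+1)*pi)
        a, b = i * np.pi + 1e-9, (i + 1) * np.pi - 1e-9
        roots.append(brentq(f, a, b))
    return np.array(roots)

mu = robin_eigenvalues(1.0, 1.0, nroots=6)
print(mu)
```

The same routine, called with $H_3$ and $H_4$, yields the $\nu_j$.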
For the numerical solution, a standard five-point finite-difference formula with equal grid spacing $h=1/n$ was again employed in the $x$ and $y$ directions. However, in this example, additional finite-difference equations are required for the boundary nodes as a result of the type III boundary conditions. The block tridiagonal matrix required in (2.8) is then similar to that exhibited for test problem 1; however, it has dimension $A\in\mathbb{R}^{(n+1)^2\times(n+1)^2}$, and the boundary blocks must be modified to account for the boundary-condition contributions.
The parameter values used for this problem are listed in Table 2.

Table 2: Physical parameters for test problem 2.
5.1. Discussion of Results for Test Problem 1

A comparison of the numerical and analytical solutions for test problem 1 is exhibited in Figure 1 for different values of the fractional index $\alpha = 0.5, 1.0, 1.5$, and $2$ (with the value $2$ representing the solution of the classical Poisson equation). In all cases, good agreement is observed between theory and simulation, with the analytical (solid contour lines) and numerical (dashed contour lines) solutions almost indistinguishable. In fact, Algorithm 2.4 consistently produced a numerical solution within approximately 2% absolute error of the analytical solution.

Figure 1: Comparisons of numerical (dashed lines) and analytical (solid lines) solutions for test problem 1 computed using Algorithm 2.4: (a) $\alpha=0.5$, (b) $\alpha=1.0$, (c) $\alpha=1.5$, and (d) $\alpha=2$ (classical case).

The impact of decreasing the fractional index from $\alpha=2$ to $0.5$ is particularly evident in Figure 2 from the shape and magnitude of the computed three-dimensional symmetric profiles. Low values of $\alpha$ produce a solution exhibiting a pronounced hump-like shape: the diffusion rate is low, the magnitude of the solution is high at the centre, and steep gradients are evident near the boundary of the solution domain. As $\alpha$ increases, the magnitude of the profile diminishes and the solution becomes much more diffuse and representative of a Gaussian process. These observations motivate the following remark.

Figure 2: Numerical solutions for test problem 1: (a) $\alpha=0.5$, (b) $\alpha=1.0$, (c) $\alpha=1.5$, and (d) $\alpha=2$ (classical case).

Remark 5.1. Over $\mathbb{R}^n$, the Riesz operator $(-\Delta)^{\alpha/2}$, defined by
$$ (-\Delta)^{\alpha/2}f(x) = \frac{1}{g(\alpha)}\int_{\mathbb{R}^n}|x-y|^{\alpha-n}f(y)\,dy, \tag{5.10} $$
where $f$ is a $C^\infty$ function with rapid decay at infinity and
$$ g(\alpha) = \frac{\pi^{n/2}2^{\alpha}\Gamma(\alpha/2)}{\Gamma(n/2-\alpha/2)}, \tag{5.11} $$
is known to generate $\alpha$-stable processes. In fact, the Green's function of the equation
$$ \frac{\partial p}{\partial t} = -(-\Delta)^{\alpha/2}p(t,x), \qquad t>0,\ x\in\mathbb{R}, \tag{5.12} $$
is the probability density function of a symmetric $\alpha$-stable process. When $\alpha=1$, it is the density function of a Cauchy distribution, and when $\alpha=2$ it is the classical Gaussian density. As $\alpha\to 0$, the tail of the density function becomes heavier and heavier. These behaviours are reflected in the numerical results given in the above example; namely, when $\alpha\to 2$, the plots exhibit the bell shape of the Gaussian density, but when $\alpha\to 0$, the curves are flatter, indicating very heavy tails, as expected.

We now report on the performance of Algorithm 2.4 for computing the solution of the FPE. The numerical solutions shown in Figures 1 and 2 were generated using a standard five-point finite-difference stencil to construct the matrix representation of the two-dimensional Laplacian operator. The $x$- and $y$-dimensions were divided equally into 31 divisions to produce the symmetric positive definite matrix $A\in\mathbb{R}^{900\times 900}$ with spectrum $\sigma(A)\subset[0.0205,\,7.9795]$. One notes for this problem that the homogeneous boundary conditions necessitate only the solution $\Phi = h^{\alpha}A^{-\alpha/2}\tilde g$. Algorithm 2.4 was still employed in this case; however, in stage 1 we first solve $Ax=\tilde g$ by the adaptively preconditioned thick-restart procedure. The spectral information gathered during stage 1 is then used for the efficient computation of $\Phi$ during stage 2.

Figure 3 depicts the reduction in the residual of the linear system and the error in the matrix function approximation during both stages of the solution process for test problem 1 in the case $\alpha=1$. For this test, using FOM(25,10) (subspace size 25, with an additional 10 approximate Ritz vectors augmented at the front of the Krylov subspace), four restarts were required to reduce the linear system residual to $\approx 1\times 10^{-15}$, which represented an overall total of 110 matrix-vector products. This low tolerance was enforced to ensure that as many approximate eigenpairs of $A$ as possible could be computed and then locked during stage 1 for use in stage 2. An eigenpair was deemed converged when the residual of the approximate eigenpair was less than $\theta_{\max}\times 10^{-10}$, where $\theta_{\max}$ is the current estimate of the largest eigenvalue of $A$. This process saw 1 eigenpair locked after 2 restarts, 5 locked after 3 restarts, and finally 9 locked after 4 restarts. From this figure, we also see that when subspace recycling is used for stage 2, only an additional 30 matrix-vector products are required to compute the solution $\Phi$ to an acceptable accuracy. It is also worth pointing out that the error in the Lanczos approximation for this preconditioned matrix function reduces much more rapidly than in the case where preconditioning (dotted line) is not used. Furthermore, the Lanczos approximation in this example lies almost entirely on the curve that represents the optimal approximation obtainable from the Krylov subspace [26]. Finally, we see that the bound (3.9) can be used with confidence as a means of halting stage 2 once the desired accuracy in the bound is reached.
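The benefit of recycling spectral information can be mimicked in a few lines. The sketch below is a simplified spectral-splitting illustration in the spirit of [27], not the authors' Algorithm 2.4: given locked eigenpairs $(U,\Theta)$ of the smallest eigenvalues, it applies $f(A)b = Uf(\Theta)U^Tb + f(A)(b - UU^Tb)$ and runs Lanczos only on the deflated component, whose effective spectrum no longer contains the troublesome small eigenvalues. The matrix, the locked pairs, and all names are our constructions:

```python
import numpy as np

def lanczos(A, v, ell):
    """Lanczos with full reorthogonalisation."""
    n = len(v)
    V = np.zeros((n, ell))
    a, b = np.zeros(ell), np.zeros(ell)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(ell):
        w = A @ V[:, j] - (b[j - 1] * V[:, j - 1] if j else 0.0)
        a[j] = V[:, j] @ w
        w -= a[j] * V[:, j]
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)
        if j < ell - 1:
            b[j] = np.linalg.norm(w)
            V[:, j + 1] = w / b[j]
    return V, np.diag(a) + np.diag(b[: ell - 1], 1) + np.diag(b[: ell - 1], -1)

def lanczos_f(A, v, ell, f):
    """Lanczos approximation f(A) v ~ beta0 * V f(T) e1."""
    V, T = lanczos(A, v, ell)
    w, Y = np.linalg.eigh(T)
    return np.linalg.norm(v) * (V @ (Y @ (f(w) * Y[0, :])))

def deflated_f(A, b, U, theta, ell, f):
    """Spectral splitting: exact action on locked eigenpairs (U, theta),
    Lanczos on the deflated remainder."""
    c = U.T @ b
    return U @ (f(theta) * c) + lanczos_f(A, b - U @ c, ell, f)

# SPD test matrix with three troublesome small eigenvalues
rng = np.random.default_rng(2)
n, ell = 300, 20
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.concatenate([[1e-3, 5e-3, 1e-2], np.linspace(1.0, 8.0, n - 3)])
A = Q @ np.diag(lam) @ Q.T
b = rng.standard_normal(n)

f = lambda x: x ** -0.5                          # f(A) = A^{-1/2}, i.e. alpha = 1
exact = Q @ (f(lam) * (Q.T @ b))
err_plain = np.linalg.norm(lanczos_f(A, b, ell, f) - exact)
err_defl = np.linalg.norm(deflated_f(A, b, Q[:, :3], lam[:3], ell, f) - exact)
print(err_plain, err_defl)
```

With the same number of matrix-vector products, the deflated approximation is markedly more accurate, which is the effect exploited when the locked eigenpairs from stage 1 are recycled in stage 2.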

Figure 3: Residual reduction for test problem 1 computed using the two-stage process outlined in Algorithm 2.4.
5.2. Discussion of Results for Test Problem 2

A comparison of the numerical and analytical solutions for test problem 2 is exhibited in Figure 4, again for the values of the fractional index $\alpha = 0.5, 1.0, 1.5$, and $2$. It can be seen that the agreement between theory and simulation is more than acceptable for this case, with Algorithm 2.4 producing a numerical solution within approximately 4% absolute error of the analytical solution. However, the impact of increasing the fractional index from $\alpha=0.5$ to $2$ is less dramatic for problem 2.

Figure 4: Comparisons of numerical (dashed line) and analytical (solid line) solutions for test problem 2 computed using Algorithm 2.4: (a) $\alpha=0.5$, (b) $\alpha=1.0$, (c) $\alpha=1.5$, and (d) $\alpha=2$ (classical case).

The numerical solutions shown in Figure 4 were again generated using a standard five-point finite-difference stencil to construct the matrix representation of the two-dimensional Laplacian operator. The $x$- and $y$-dimensions were divided equally into 30 divisions, resulting in the symmetric positive definite matrix $A\in\mathbb{R}^{961\times 961}$ with spectrum $\sigma(A)\subset[0.000758,\,7.9795]$. One notes for this problem that the type III boundary conditions have produced a small eigenvalue that will undoubtedly hinder the performance of restarted FOM.

Figure 5 depicts the reduction in the residual of the linear system for computing the solution $\Phi_1$, and the error in the matrix function approximation for $\Phi_2$, during both stages of the solution process for test problem 2 with $\alpha=1$. Using FOM(25,10), a total of nine restarts was required to reduce the linear system residual to $\approx 1\times 10^{-15}$, which represented an overall total of 240 matrix-vector products. One notes that this is much higher than for problem 1, primarily owing to the occurrence of small eigenvalues in $\sigma(A)$. The thick-restart process saw 1 eigenpair locked after 5 restarts, 4 locked after 6 restarts, and finally 10 locked after 9 restarts. From this figure, we also see that when subspace recycling is used for stage 2, only an additional 25 matrix-vector products are required to compute the solution $\Phi_2$ to an acceptable accuracy, which is clearly much less than in the unpreconditioned (dotted line) case. The Lanczos approximation in this example again lies almost entirely on the curve that represents the optimal approximation obtainable from the Krylov subspace. Finally, we see that the bound (3.9) can be used to halt stage 2 once the desired accuracy is reached.

Figure 5: Residual reduction for test problem 2 computed using the two-stage process outlined in Algorithm 2.4.

6. Conclusions

In this work, we have shown how the fractional Poisson equation can be solved approximately using a finite-difference discretisation of the Laplacian to produce an appropriate matrix representation of the operator. We then derived a matrix equation that involves both a linear system solution and a matrix function approximation, with the matrix $A$ raised to the same fractional index as the Laplacian. We proposed an algorithm based on Krylov subspace methods that efficiently computes the solution of this matrix equation using a two-stage process. During stage 1, we used an adaptively preconditioned thick-restarted FOM method to approximately solve the linear system, and we then used the recycled spectral information gathered during this restart process to accelerate the convergence of the matrix function approximation in stage 2. Two test problems were presented to assess the accuracy of our algorithm, and good agreement with the analytical solution was noted in both cases. Future research will see higher-dimensional fractional diffusion equations solved using a similar approach via the finite volume method.

Acknowledgment

This work was supported financially by the Australian Research Council Grant no. LP0348653.

References

1. F. Lorenzo and T. T. Hartley, "Initialization, conceptualization, and application in the generalized fractional calculus," Tech. Rep. NASA/TP-208415, NASA Center for Aerospace Information, Hanover, Md, USA, 1998.
2. R. Metzler and J. Klafter, "The random walk's guide to anomalous diffusion: a fractional dynamics approach," Physics Reports, vol. 339, no. 1, pp. 1–77, 2000.
3. R. Metzler and J. Klafter, "The restaurant at the end of the random walk: recent developments in the description of anomalous transport by fractional dynamics," Journal of Physics A, vol. 37, no. 31, pp. R161–R208, 2004.
4. D. A. Benson, S. W. Wheatcraft, and M. M. Meerschaert, "Application of a fractional advection-dispersion equation," Water Resources Research, vol. 36, no. 6, pp. 1403–1412, 2000.
5. D. A. Benson, S. W. Wheatcraft, and M. M. Meerschaert, "The fractional-order governing equation of Lévy motion," Water Resources Research, vol. 36, no. 6, pp. 1413–1423, 2000.
6. M. M. Meerschaert, J. Mortensen, and S. W. Wheatcraft, "Fractional vector calculus for fractional advection-dispersion," Physica A, vol. 367, pp. 181–190, 2006.
7. B. Friedman, Principles and Techniques of Applied Mathematics, John Wiley & Sons, New York, NY, USA, 1966.
8. M. Ilić, F. Liu, I. W. Turner, and V. Anh, "Numerical approximation of a fractional-in-space diffusion equation (II): with nonhomogeneous boundary conditions," Fractional Calculus & Applied Analysis, vol. 9, no. 4, pp. 333–349, 2006.
9. B. J. West and V. Seshadri, "Linear systems with Lévy fluctuations," Physica A, vol. 113, no. 1-2, pp. 203–216, 1982.
10. R. Gorenflo and F. Mainardi, "Random walk models for space-fractional diffusion processes," Fractional Calculus & Applied Analysis, vol. 1, no. 2, pp. 167–191, 1998.
11. R. Gorenflo and F. Mainardi, "Approximation of Lévy-Feller diffusion by random walk," Journal for Analysis and Its Applications, vol. 18, no. 2, pp. 231–246, 1999.
12. I. P. Gavrilyuk, W. Hackbusch, and B. N. Khoromskij, "Data-sparse approximation to the operator-valued functions of elliptic operator," Mathematics of Computation, vol. 73, no. 247, pp. 1297–1324, 2004.
13. W. Hackbusch and B. N. Khoromskij, "Low-rank Kronecker-product approximation to multi-dimensional nonlocal operators. Part I: separable approximation of multi-variate functions," Computing, vol. 76, no. 3, pp. 177–202, 2006.
14. W. Hackbusch and B. N. Khoromskij, "Low-rank Kronecker-product approximation to multi-dimensional nonlocal operators. Part II: HKT representation of certain operators," Computing, vol. 76, no. 3, pp. 203–225, 2006.
15. M. Ilić, F. Liu, I. W. Turner, and V. Anh, "Numerical approximation of a fractional-in-space diffusion equation (I)," Fractional Calculus & Applied Analysis, vol. 8, no. 3, pp. 323–341, 2005.
16. G. Gripenberg and I. Norros, "On the prediction of fractional Brownian motion," Journal of Applied Probability, vol. 33, no. 2, pp. 400–410, 1996.
17. Y. Hu, "Prediction and translation of fractional Brownian motions," in Stochastics in Finite and Infinite Dimensions, T. Hida, R. L. Karandikar, H. Kunita, B. S. Rajput, S. Watanabe, and J. Xiong, Eds., Trends in Mathematics, pp. 153–171, Birkhäuser, Boston, Mass, USA, 2001.
18. G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins Studies in the Mathematical Sciences, The Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
19. Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, Philadelphia, Pa, USA, 2nd edition, 2003.
20. H. A. van der Vorst, Iterative Krylov Methods for Large Linear Systems, vol. 13 of Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, Cambridge, UK, 2003.
21. N. J. Higham, Functions of Matrices: Theory and Computation, SIAM, Philadelphia, Pa, USA, 2008.
22. V. Druskin and L. Knizhnerman, "Krylov subspace approximation of eigenpairs and matrix functions in exact and computer arithmetic," Numerical Linear Algebra with Applications, vol. 2, no. 3, pp. 205–217, 1995.
23. M. Hochbruck and C. Lubich, "On Krylov subspace approximations to the matrix exponential operator," SIAM Journal on Numerical Analysis, vol. 34, no. 5, pp. 1911–1925, 1997.
24. M. Eiermann and O. G. Ernst, "A restarted Krylov subspace method for the evaluation of matrix functions," SIAM Journal on Numerical Analysis, vol. 44, no. 6, pp. 2481–2504, 2006.
25. L. Lopez and V. Simoncini, "Analysis of projection methods for rational function approximation to the matrix exponential," SIAM Journal on Numerical Analysis, vol. 44, no. 2, pp. 613–635, 2006.
26. J. van den Eshof, A. Frommer, T. Lippert, K. Schilling, and H. A. van der Vorst, "Numerical methods for the QCD overlap operator. I: sign-function and error bounds," Computer Physics Communications, vol. 146, pp. 203–224, 2002.
27. M. Ilić and I. W. Turner, "Approximating functions of a large sparse positive definite matrix using a spectral splitting method," The ANZIAM Journal, vol. 46(E), pp. C472–C487, 2005.
28. M. Ilić, I. W. Turner, and A. N. Pettitt, "Bayesian computations and efficient algorithms for computing functions of large, sparse matrices," The ANZIAM Journal, vol. 45(E), pp. C504–C518, 2004.
29. R. B. Lehoucq and D. C. Sorensen, "Deflation techniques for an implicitly restarted Arnoldi iteration," SIAM Journal on Matrix Analysis and Applications, vol. 17, no. 4, pp. 789–821, 1996.
30. A. Stathopoulos, Y. Saad, and K. Wu, "Dynamic thick restarting of the Davidson, and the implicitly restarted Arnoldi methods," SIAM Journal on Scientific Computing, vol. 19, no. 1, pp. 227–245, 1998.
31. M. Ilić and I. W. Turner, "Krylov subspaces and the analytic grade," Numerical Linear Algebra with Applications, vol. 12, no. 1, pp. 55–76, 2005.
32. R. B. Morgan, "A restarted GMRES method augmented with eigenvectors," SIAM Journal on Matrix Analysis and Applications, vol. 16, no. 4, pp. 1154–1171, 1995.
33. J. Baglama, D. Calvetti, G. H. Golub, and L. Reichel, "Adaptively preconditioned GMRES algorithms," SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 243–269, 1998.
34. J. Erhel, K. Burrage, and B. Pohl, "Restarted GMRES preconditioned by deflation," Journal of Computational and Applied Mathematics, vol. 69, no. 2, pp. 303–318, 1996.
35. R. V. Churchill, Complex Variables and Applications, McGraw-Hill, New York, NY, USA, 2nd edition, 1960.