#### Abstract

In this paper, a class of perturbed Volterra equations of convolution type with three kernel functions is considered. The kernel functions correspond to the class of equations interpolating the heat and wave equations. The results obtained generalize our previous results from 2010.

#### 1. Introduction

We study perturbed Volterra equations of the form where is the gamma function, denotes the convolution, , , and is the Laplace operator.

The perturbation approach to Volterra equations of convolution type has been used by many authors; see, for example, [1]. Such an approach may also be applied to more general, not necessarily convolution, equations. Recently, perturbed Volterra equations, both deterministic and stochastic, have been studied, for instance, by Karczewska and Lizama [2]. The authors consider a class of equations with three kernel functions which satisfy some scalar auxiliary equations. This condition enables the construction of a family of resolvent operators admitted by the Volterra equations. In consequence, the resolvent approach to the considered Volterra equations can be used. Unfortunately, the resolvent approach proposed by Karczewska and Lizama may be applied to (1.1) only for some particular kernel functions. Hence, in our case, the method proposed in [2] cannot be applied to (1.1).

Motivation for the study of fractional integro-differential equations comes from several problems appearing in physics, biology, and engineering. Many phenomena are well modeled by deterministic or stochastic fractional equations; see, for example, [3-7]. Several results on stochastic Volterra equations, including fractional ones, particularly on the existence of strong solutions to those equations, have been obtained by one of us [8-14].

Equation (1.1) is an interesting example of the use of so-called fractional calculus in the theory of "classical" equations. Let us emphasize that (1.1) is a generalization of the equations which interpolate the heat and wave equations [15, 16]. The two convolutions appearing in (1.1), with the kernel functions and , respectively, represent a perturbation acting on the Volterra equation of convolution type.

Fractional calculus is a generalization of ordinary differentiation and integration to arbitrary order [4, 17-19]. There is an increasing interest in applications of fractional calculus in many fields of mathematics [20], mechanics [5, 21, 22], physics [23, 24], and even biology [6, 25]. A thorough and comprehensive survey of analytical and numerical methods used in solving many problems with applications of fractional calculus is contained in a recent monograph by Baleanu et al. [17].

Spectral methods belong to the tools frequently used to obtain approximate solutions to complicated problems like fluid dynamics equations, weather prediction, and many others (see, e.g., the monograph of Canuto et al. [26]). Recently, these methods have been used as a tool for the calculation of fractional derivatives and integrals [27] and for solving Volterra equations with fractional time [28]. In general, spectral methods consist in representing the solution of the equation under consideration in a finite-dimensional subspace, whereas the exact solution belongs to an infinite-dimensional space. The method presented in this paper belongs to that class.

The paper is organized as follows. In Section 2, the general idea of the Galerkin method for integral equations is presented, and approximation in a finite-dimensional Hilbert space is explained. Section 3 presents a system of linear equations obtained from (1.1) by a discrete formulation enabling numerical solution. The detailed form of the matrices appearing in that approximation is presented for the one-dimensional case in Section 3.1 and for the two-dimensional case in Section 3.2. The set of basis functions is presented in Section 3.3, and the numerical methods used to solve large-scale sparse linear systems are discussed in Section 3.4. Examples of numerical solutions to (1.1) are exhibited and discussed in detail in Section 4, whereas error estimates for the precision of the approximate results are given in Section 5.

#### 2. Galerkin Method

Let represent a set of orthonormal functions on the interval , spanning a Hilbert space .

*Definition 2.1. * Let . The number
where is a weight function, is called the *scalar product* of functions on the interval .

Let us recall that two functions are orthonormal when where is the Kronecker delta.
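The weighted scalar product of Definition 2.1 and the orthonormality relation above can be checked numerically. The sketch below is illustrative only: the interval [0, 1], the unit weight, and the sine functions used in the test are our own assumptions for the example, not the paper's choices (the paper works with Legendre polynomials, see Section 3.3).

```python
import numpy as np

def scalar_product(f, g, a=0.0, b=1.0, w=lambda x: np.ones_like(x), n_quad=200):
    """Weighted scalar product <f, g> = integral_a^b f(x) g(x) w(x) dx,
    approximated by Gauss-Legendre quadrature mapped onto [a, b]."""
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)  # map [-1, 1] -> [a, b]
    return 0.5 * (b - a) * np.sum(weights * f(x) * g(x) * w(x))
```

For instance, the functions sqrt(2) sin(n pi x) are orthonormal on [0, 1] with unit weight, and `scalar_product` reproduces the Kronecker delta for them up to quadrature accuracy.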

We are looking for an approximate solution to (1.1) as an element of the subspace , spanned by the first basis functions . For simplicity of notation, let us consider (1.1) in one spatial dimension only. Inserting (2.3) into (1.1), one obtains where represents the approximation error. From (2.3) and (2.4), one gets

*Definition 2.2. * The *Galerkin approximation* of (1.1) is the function , such that , that is,

It follows from Definitions 2.2 and 2.1 and (2.5) that Therefore Using (2.2), (2.8) can be written in an abbreviated form where In general .

The solution of the set of coupled differential equations (2.9) for the coefficients provides the Galerkin approximation (2.3) to (1.1).

#### 3. Discretization

Equations can be solved using discretization in the space variable. In the one-dimensional case, let us introduce a grid of points , where . The grid approximation of the second derivative of a function is given by Then the set of equations (2.9) takes the following form: where and .
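The standard central-difference approximation of the second derivative can be assembled into a matrix acting on grid values. The sketch below is our illustration, not the paper's code; the `periodic` flag mirrors the two kinds of boundary conditions (closed and periodic) mentioned in Section 3.1.

```python
import numpy as np

def laplacian_1d(M, h, periodic=False):
    """Matrix of the second-order central difference
    (u_{i-1} - 2 u_i + u_{i+1}) / h^2 on a grid of M points
    with spacing h; closed or periodic boundary treatment."""
    D = np.zeros((M, M))
    for i in range(M):
        D[i, i] = -2.0
        if i > 0:
            D[i, i - 1] = 1.0
        if i < M - 1:
            D[i, i + 1] = 1.0
    if periodic:
        # wrap the stencil around the ends of the grid
        D[0, -1] = 1.0
        D[-1, 0] = 1.0
    return D / h**2
```

Applied to smooth grid data, this matrix approximates the second derivative with an O(h^2) error, consistent with the error level quoted in Section 5.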

In the two-dimensional case, with the grid , where for , the set of equations (2.9) takes the form Both sets of linear equations (3.2) and (3.3) can be written in matrix form where the vectors , and the matrix have the block forms The detailed structure of the blocks occurring in (3.5) is given below.

##### 3.1. One-Dimensional Case

Blocks and are -dimensional column vectors. For the sake of space, we present their transpositions

Blocks have the form where , , whereas

The vectors and are -dimensional, whereas the matrix has dimension . The matrix is sparse: the number of its nonzero elements is at most (with closed boundary conditions) or (with periodic boundary conditions).

##### 3.2. Two-Dimensional Case

In the two-dimensional case, the -dimensional vectors and read as The blocks have the form of embedded blocks where each term is an embedded block of size . In particular, the block is a matrix with all elements equal to zero, the block is again a sparse matrix of the form where , , and the block is diagonal The matrix is a sparse matrix of elements. However, at most (with closed boundary conditions) or (with periodic boundary conditions) elements are nonzero.

##### 3.3. Basis Functions

The basis functions have to be orthogonal on the interval with respect to a weight function . We use the set of Legendre polynomials which are solutions of the Legendre differential equation for . The Legendre polynomials are orthogonal on the interval with the weight function Taking the basis functions in the form ensures that they fulfill the orthonormality relations (2.2) on the interval . Therefore they can be used as a basis in the Galerkin method. In principle, any set of functions orthonormal on the interval can be used. For our purposes, however, the Legendre polynomials proved more efficient in practical applications than, for instance, Chebyshev polynomials.
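The normalization can be made concrete: multiplying the Legendre polynomial P_n by sqrt((2n + 1) / 2) yields functions orthonormal on [-1, 1] with unit weight. A minimal sketch, verifying the orthonormality with Gauss-Legendre quadrature (which is exact for these polynomial products):

```python
import numpy as np
from numpy.polynomial import legendre

def basis(n):
    """Normalized Legendre polynomial sqrt((2n + 1) / 2) * P_n(x),
    orthonormal on [-1, 1] with the unit weight."""
    c = np.zeros(n + 1)
    c[n] = 1.0  # coefficient vector selecting P_n
    return lambda x: np.sqrt((2 * n + 1) / 2.0) * legendre.legval(x, c)

# Gram matrix of the first five basis functions; it should be the identity
x, w = legendre.leggauss(50)
G = np.array([[np.sum(w * basis(m)(x) * basis(n)(x)) for n in range(5)]
              for m in range(5)])
```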

##### 3.4. Methods for Solving Large Linear Systems

The matrices , in both the one- and two-dimensional cases, are sparse. In order to obtain a reasonable approximation of the solution to (1.1), their sizes have to be large. These facts suggest the application of iterative methods for solving the linear systems (3.4).

In general, the matrix is nonsymmetric. We have tested on our examples two iterative methods developed for solving large-scale linear systems with nonsymmetric matrices. One of these methods is BiCGSTAB (the BiConjugate Gradient Stabilized method) [29, 30]. The other is GMRES (the Generalized Minimal Residual method) [29, 31]. In both methods, suitable preconditioning is necessary.

For the cases discussed in the paper, the GMRES method appeared to be more efficient. Usually, after a proper choice of the auxiliary parameters of the calculations, GMRES required fewer iterations and converged faster than the BiCGSTAB method.
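Both solvers, with preconditioning, are available, for example, in SciPy. The sketch below is a stand-in illustration on a small nonsymmetric tridiagonal system with an ILU preconditioner, not the actual matrix (3.4) or the parameters used in the paper:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, bicgstab, spilu, LinearOperator

# a small nonsymmetric sparse test system (a stand-in for (3.4))
n = 400
rng = np.random.default_rng(0)
A = sp.diags([np.full(n - 1, -1.0), np.full(n, 4.0), np.full(n - 1, -2.0)],
             [-1, 0, 1], format="csc")
b = rng.standard_normal(n)

# incomplete-LU preconditioner; both methods benefit from preconditioning
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)

x_g, info_g = gmres(A, b, M=M)       # info == 0 signals convergence
x_b, info_b = bicgstab(A, b, M=M)
```

Comparing iteration counts of the two calls on the actual system (3.4) is how one would reproduce the efficiency comparison reported above.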

#### 4. Examples of Numerical Solutions

In this section, several examples of approximate numerical solutions to (1.1) are presented. The function where and is the space dimension has been taken as the initial condition. Such a function, which differs substantially from zero only in a finite region, may represent a distribution of temperature in a rod (or plane) heated locally, or a distribution of gas (or liquid) particles which may diffuse in a nonhomogeneous medium. The values of the constants and in (4.1) were chosen for a clear graphical presentation of the results.

##### 4.1. One-Dimensional Case

For the presentation of approximate numerical results, we chose an interval with equidistant grid points. The Hilbert space was spanned by the basis functions described in Section 3.3.

In our previous study [28], we have shown that when increases from to , the solution of the unperturbed evolution governed by (1.1) with changes from pure heat (diffusive) behaviour to pure wave motion. Below we present results for fractional cases (an intermediate case), pointing out the effects of perturbations. For the sake of space, we show only a few examples, explaining the general influence of perturbations on the character of the solutions.

Figure 1 illustrates the effect of a perturbation of the form , where represents the intensity of the perturbation, whereas the function . It is clearly seen that this perturbation, periodic in the time variable, produces wavy formations in space with amplitudes strongly depending on the intensity of the perturbation. Results obtained with the opposite sign of the perturbation term, , (not presented here for the sake of space) show that such a perturbation decreases the diffusive behaviour of the system and enforces a wavelike evolution.

The effect of a perturbation of the form , when , is presented in Figure 2. As in the former case, represents the intensity of the perturbation. It is clear that a perturbation of this form generally increases the amplitude of the solution. The change of sign of the perturbation, , produces the opposite effect: the amplitude of decreases, as in the case of damping.

##### 4.2. Two-Dimensional Case

In this subsection, we show some results obtained for the two-dimensional case. The calculations have been performed on a grid of points, where , , with basis functions spanning the Hilbert space .

Figures 3, 4, and 5 illustrate several examples of the numerical solutions to (1.1) for and different perturbation functions and .

The case , corresponds to the unperturbed equation. Its solution, as seen from Figure 3, evolves in a wavelike manner with a significant influence of diffusion due to the fractional value of , between the pure diffusion case () and the pure wave case (). For more examples of the unperturbed evolution of solutions to (1.1) in the two-dimensional case, see [28].

Results obtained for the two-dimensional case with nonzero perturbations generally exhibit properties similar to those in the one-dimensional case. Again, like in the one-dimensional case, the presence of a perturbation of the form results in an increase of the wave frequency (see, e.g., Figure 4). In other words, a perturbation of this form produces additional wavy behaviour of the solution. The change of sign of the perturbation term changes the phase of that behaviour.

The presence of a perturbation term with influences mainly the amplitude of the solution. Comparing Figure 5 to Figure 4, one notices that the amplitude increases with . A perturbation with the opposite sign results in a decrease of the amplitude, as in the case of damping.

#### 5. Precision of Numerical Results

A comparison between the analytic and numerical solutions to (1.1) is possible only in the one-dimensional case, when there is no perturbation and or . Although analytic solutions for this case exist for an arbitrary , given (for the case ) in terms of Mittag-Leffler functions [15, 16], their computation is not practical.

For the unperturbed case, we defined in [28] an error estimate as the maximum of the absolute value of the difference between the exact analytical solution and the approximate numerical one, where the maximum is taken over all grid points . For and and 2, , , the error estimate was always less than .

When we consider the presented method for obtaining numerical solutions to the fractional perturbed Volterra equation (1.1), there are three levels of numerical errors.

The first level corresponds to the error of the Galerkin approximation (2.3), which depends on the number of basis functions . In the special case when and the operator is replaced by the identity operator, we can estimate the approximation error using the following result from [27].

*Remark 5.1 (see [27, Remark 5.2]). *If (see (2.3)) is the Legendre-Gauss-Lobatto interpolation of , , , , then
where is a positive constant (see [26]). Therefore, we can get the following error bounds:

Also at the first level, the integrals (2.11) and (2.12) are calculated. In our method, we use Gauss-Legendre quadrature for the numerical integration. The exact error of such a quadrature can be found, for example, in Theorem 7.3.5 in [32].

The second level is the Laplacian discretization (Section 3). In this case, the numerical error can be estimated by , where is the spatial grid step.
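The O(h^2) behaviour of the Laplacian discretization error can be verified empirically: halving the grid step should reduce the error roughly fourfold. A small sketch (the test function sin(x) and the interval are our own choices for illustration):

```python
import numpy as np

def lap_error(M):
    """Maximum error of the central-difference second derivative of
    sin(x) on [0, 2*pi], evaluated at M interior grid points."""
    x = np.linspace(0.0, 2.0 * np.pi, M + 2)
    h = x[1] - x[0]
    u = np.sin(x)
    d2 = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2  # discrete u''
    return np.max(np.abs(d2 - (-np.sin(x[1:-1]))))  # exact u'' = -sin

# halving h should reduce the error by a factor of about 4
e1, e2 = lap_error(100), lap_error(200)
```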

The last one is the residual error of the GMRES method for solving large linear systems. In our computations, the residual error threshold was set to , which was small enough to obtain a reliable solution.

The joint error estimate from all three levels is not obvious. Moreover, in the considered perturbed case, the analytic solution to (1.1) is not known. Therefore, in order to estimate the accuracy of the numerical solutions, we proceed in the following manner, which is also applicable in the two- and higher-dimensional cases.

When we are not able to confront the numerical solutions with analytic ones, we can investigate how the approximate solution changes with an increasing number of grid points and an increasing number of basis functions. One can expect that increasing the number of grid points and the number of basis functions should result in a better approximation of the true (unknown) solution. Taking appropriate sequences of those numbers and estimating the largest differences between consecutive solutions, one can show convergence of the approximation errors. To do this, let us define the following quantities.

Let denote the maximum difference between two solutions obtained for the same and the same grid (defined by ) but with different numbers of basis functions and .

Then let denote the maximum difference between two solutions obtained for the same and the same Hilbert space (the same ) but with different numbers of grid points (in one direction) and , where denotes the value at a node of the larger grid obtained from the values calculated on the smaller grid by cubic-spline interpolation.
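The comparison between grids described above might be sketched as follows; the test function and grid sizes are illustrative, and SciPy's `CubicSpline` stands in for whatever cubic-spline routine the authors actually used.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def grid_difference(u_coarse, x_coarse, u_fine, x_fine):
    """Maximum difference between a solution on a coarse grid and one on
    a finer grid: the coarse solution is interpolated by a cubic spline
    onto the fine nodes, then compared pointwise."""
    spline = CubicSpline(x_coarse, u_coarse)
    return np.max(np.abs(spline(x_fine) - u_fine))
```

Evaluating this quantity for a sequence of grid refinements yields the error estimates plotted in Figures 6-8.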

In the two-dimensional case, the appropriate error estimates read Figures 6, 7, and 8 present some examples of the dependence of the above-defined error estimates on the number of grid points and the size of the basis. The presented examples cover both the one- and two-dimensional cases and two types of boundary conditions.

The general conclusions of this investigation are the following. In all cases, the error estimates decrease quickly with an increasing number of grid points or an increasing size of the basis. In log plots, that decrease appears close to a straight line; that is, the error estimates decrease almost exponentially. Thus, taking a large enough grid and a large enough set of basis functions, one can, in principle, make the error smaller than any arbitrarily small number. In practice, increasing the sizes of the basis and the grid produces a sharp increase in the number of numerical operations, causing accumulation of rounding errors. This can be compensated for by increasing the precision of the representation of real numbers (using double or quadruple precision), and so on. All these actions require more and more computing power to obtain results in a reasonable computing time.

Our estimates show, however, that reasonable approximations can be obtained with relatively low values of and .

#### Acknowledgment

A. Karczewska is thankful for support from the "Stochastic Analysis Research Network," Grant PIA-CONICYT-Scientific Research Ring no. 1112.