Research Article | Open Access

# A Jacobi-Collocation Method for Second Kind Volterra Integral Equations with a Smooth Kernel

**Academic Editor:** Juan J. Nieto

#### Abstract

The purpose of this paper is to provide a Jacobi-collocation method for solving second kind Volterra integral equations with a smooth kernel. This method leads to a fully discrete integral operator. First, it is shown that the fully discrete integral operator is stable in both the $L^{\infty}$ and weighted $L^{2}$ norms. Then, the proposed approach is proved to attain the optimal (best possible) convergence order in both norms. One numerical example demonstrates the efficiency and accuracy of the proposed method.

#### 1. Introduction

In this paper, we provide a Jacobi-collocation approach for solving the second kind Volterra integral equation of the form
\[ y(t) = g(t) + \int_0^t K(t,s)\,y(s)\,\mathrm{d}s, \quad t \in [0,T], \tag{1} \]
where the kernel function $K(t,s)$ and the input function $g(t)$ are given smooth functions of their variables and $y(t)$ is the unknown function to be determined.

For ease of analysis, we will write (1) in operator form. Introducing the integral operator $\mathcal{K}$ by
\[ (\mathcal{K}y)(t) := \int_0^t K(t,s)\,y(s)\,\mathrm{d}s, \tag{2} \]
(1) is reformulated as
\[ y - \mathcal{K}y = g. \tag{3} \]

It is well known that there are many numerical methods for solving second kind Volterra integral equations, such as the Runge-Kutta method and the collocation method based on piecewise polynomials; see, for example, Brunner [1] and the references therein. For more information on the progress of the study of this problem, we refer the readers to [2–8]. Recently, a few works have touched on the spectral approximation of Volterra integral equations. In [9], Elnagar and Kazemi provided a novel Chebyshev spectral method for solving nonlinear Volterra-Hammerstein integral equations. Then, this method was investigated by Fujiwara in [10] for solving the first kind Fredholm integral equation under multiple-precision arithmetic. Nevertheless, no theoretical results were provided to justify the high accuracy. In [11], Tang et al. developed a novel Legendre-collocation method for solving (3). Inspired by the work of [11], Chen and Tang in [5, 12] obtained a spectral Jacobi-collocation method for solving second kind Volterra integral equations with general weakly singular kernels. In [13], a spectral and pseudospectral Jacobi-Galerkin approach was presented for solving (3). In [14], Wei and Chen considered a spectral Jacobi-collocation method for solving Volterra-type integrodifferential equations. In [15], Cai considered a Jacobi-collocation method for solving Fredholm integral equations of the second kind with weakly singular kernels.

Unfortunately, all these papers [5, 11–14] give a convergence analysis but lack a stability analysis. Because of this lack of stability analysis, the approximate solution does not attain the best possible convergence order. Moreover, none of those papers address whether the approximate equation has a unique solution. Hence, in this paper, we will provide a Jacobi-collocation method for solving (3), which extends the Legendre spectral method developed in [11]. This spectral method leads to a fully discrete linear system. We are going to show that the fully discrete integral operator is stable, that is, that the approximate equation has a unique solution, and then present the optimal (best possible) convergence order of the approximate solution based on the stability analysis. We organize this paper as follows. In Section 2, as demonstrated in [13], we review a spectral Jacobi-collocation method for solving (3). In Section 3, a few important results are presented for analyzing the Jacobi-collocation approach. In Sections 4 and 5, we analyze the Jacobi-collocation method, including the stability of the approximate equation and the convergence order of the approximate solution, in the $L^{\infty}$ and weighted $L^{2}$ norms, respectively. In Section 6, one numerical example is presented to show the efficiency and accuracy of this method.

The problem under study deserves further investigation in future work. Moreover, we believe that semianalytical approaches are useful for investigating the problem. For related terminologies and applications of semianalytical approaches, please refer to [16–18].

#### 2. A Spectral Jacobi-Collocation Method

In this section, we review the spectral Jacobi-collocation method for solving (3). To this end, we introduce several index sets: , and . We let for be a weight function and then use the notation to denote the set of all square integrable functions associated with the weight function , equipped with the norm For , we denote the points by to be the set of Jacobi-Gauss points corresponding to the Jacobi weight function . By introducing we define the Lagrange fundamental interpolation polynomial by Let be the set of all polynomials of degree not more than ; clearly, We use the notation to denote the set of all continuous functions on , equipped with the norm For , we define a linear functional on such that, for any , The collocation method for solving (3) is to seek a vector such that satisfies The above equation can be rewritten as For , we define the interpolating operator by It is well known that can be written in the form Using these notations, we can reformulate (12) into an operator form The difficulty in solving the linear system (12) is to compute the integral term in (12) accurately. In this paper, we adopt the numerical integration rule proposed in [11] to overcome this difficulty. For this purpose, we introduce a simple linear transformation which transfers the integral operator into the following form: Then, by using the -point Legendre-Gauss quadrature formula relative to the Legendre weight , we obtain the discrete integral operator as follows:

Thus, using these notations, a fully discrete spectral Jacobi-collocation method for solving (3) is to seek a vector such that satisfies

It is easy to show that the operator equation (20) has the following form:

In [11], for the case , based on the Gronwall inequality, Tang et al. analyzed the convergence of a spectral Jacobi-collocation method for solving (3) in both the $L^{\infty}$ and weighted $L^{2}$ spaces. However, the stability analysis of the spectral method was not given. Moreover, we observe that the convergence order of the approximate solution in the space is not optimal. Hence, the purpose of this paper is to show that, for sufficiently large and , the operator has a uniformly bounded inverse in the $L^{\infty}$ and weighted $L^{2}$ spaces, respectively. Moreover, we also show that the approximate solution attains the best possible convergence order.
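To make the construction concrete, the fully discrete scheme described above can be sketched in a few lines of Python. Everything below is an illustrative stand-in: since the paper's equations did not survive extraction, the test kernel `K(x, s) = s`, the exact solution `exp(x)`, and the helper names (`jacobi_collocation`, `lagrange_matrix`) are our own choices, not the authors' code.

```python
import numpy as np
from scipy.special import roots_jacobi

# Hypothetical test problem:  u(x) = g(x) + \int_{-1}^{x} K(x, s) u(s) ds
# with K(x, s) = s and exact solution u(x) = exp(x), so that
# g(x) = (2 - x) exp(x) - 2/e (obtained by integrating s*exp(s) by parts).
K = lambda x, s: s
u_exact = lambda x: np.exp(x)
g = lambda x: (2.0 - x) * np.exp(x) - 2.0 / np.e

def jacobi_collocation(N, a=0.0, b=0.0, M=None):
    """Fully discrete collocation: N+1 Jacobi-Gauss points for the weight
    (1-x)^a (1+x)^b, M-point Legendre-Gauss rule for the mapped integrals."""
    if M is None:
        M = N + 1
    x, _ = roots_jacobi(N + 1, a, b)            # collocation points
    t, w = np.polynomial.legendre.leggauss(M)   # quadrature nodes/weights
    # Barycentric weights of the Lagrange basis at the collocation points.
    c = np.array([1.0 / np.prod(x[j] - np.delete(x, j)) for j in range(N + 1)])

    def lagrange_matrix(s):
        """L[m, j] = F_j(s[m]) via the second barycentric form."""
        L = c[None, :] / (s[:, None] - x[None, :])
        L /= L.sum(axis=1, keepdims=True)
        return L

    # A[i, j] ~ \int_{-1}^{x_i} K(x_i, s) F_j(s) ds via the linear transform
    # s = (1 + x_i)/2 * (t + 1) - 1 and the Legendre-Gauss rule.
    A = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        h = 0.5 * (1.0 + x[i])
        s = h * (t + 1.0) - 1.0
        A[i, :] = h * (w * K(x[i], s)) @ lagrange_matrix(s)

    # Collocation system: u_i - sum_j A[i, j] u_j = g(x_i).
    u = np.linalg.solve(np.eye(N + 1) - A, g(x))
    return x, u

x, u = jacobi_collocation(12, a=-0.5, b=-0.5)   # Chebyshev-type Jacobi weight
err = float(np.max(np.abs(u - u_exact(x))))
print(f"max error at the collocation points: {err:.2e}")
```

The assembly mirrors the recipe in the text: collocate at Jacobi-Gauss points, map each integral to a fixed reference interval by a linear transformation, apply a Legendre-Gauss rule to the transformed integrand, and represent the unknown in the Lagrange basis at the collocation points.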

#### 3. Some Preliminaries and Useful Results

In this section, we introduce some technical results which contribute to the analysis of the stability and convergence of the spectral Jacobi-collocation method for solving (3). To this end, for , we use the notation to denote the th differential operator with respect to the variable . For , we introduce the nonuniformly weighted Sobolev space by It follows from [19] that there exists a positive constant independent of such that, for and , which implies that
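The interpolation error estimates quoted above can be observed numerically. The following minimal Python sketch is our own illustration (the Chebyshev-type Jacobi weight $a=b=-1/2$, the test function $e^{x}$, and the helper name `interp_error` are assumptions): it measures the sup-norm error of Lagrange interpolation at Jacobi-Gauss points and shows the spectral decay that such estimates predict for smooth functions.

```python
import numpy as np
from scipy.special import roots_jacobi

def interp_error(N, f, a=-0.5, b=-0.5, neval=2000):
    """Sup-norm error of Lagrange interpolation of f at N+1 Jacobi-Gauss points."""
    x, _ = roots_jacobi(N + 1, a, b)
    # Barycentric weights of the Lagrange basis at the nodes x.
    c = np.array([1.0 / np.prod(x[j] - np.delete(x, j)) for j in range(N + 1)])
    s = np.linspace(-0.999, 0.999, neval)       # evaluation grid (off the nodes)
    L = c[None, :] / (s[:, None] - x[None, :])
    L /= L.sum(axis=1, keepdims=True)           # second barycentric form
    return float(np.max(np.abs(L @ f(x) - f(s))))

errs = [interp_error(N, np.exp) for N in (2, 4, 8, 16)]
for N, e in zip((2, 4, 8, 16), errs):
    print(f"N = {N:2d}   error = {e:.2e}")      # errors decay spectrally
```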

Moreover, we have the following.

Lemma 1. *Suppose that . If the parameters satisfy the following conditions:
**
then there exists a positive constant independent of such that, for ,
*

*Proof. *This is a consequence of Theorem 3.4 and - in [20].

For , the binomial coefficients are given by We use the notation to denote the set of all functions whose th derivative is continuous on , endowed with the usual norm

For , the notation is used to denote the set of all functions such that, for is continuous on . Let

Next we consider the difference between and .

Lemma 2. *Assume that the kernel function for . If the two parameters and satisfy the conditions
**
then there exists a positive constant independent of such that when ,
*

*Proof. *First of all, by setting
the integral operator is written as
In addition, using the hypothesis that and implies that . Thus, we write the difference between and as follows:

Employing the Cauchy-Schwarz inequality on the right hand side of the above equation and then using the result (26) with and produces that there exists a positive constant independent of such that
where is given by

It remains to estimate . A direct computation leads to

Making use of a linear transform on the right hand side of the above equation produces

Applying the discrete Cauchy-Schwarz inequality to the right hand side of the above equation gives
where combining with the fact that leads to
Substituting the above estimate for into the right hand side of (35) yields the desired conclusion (31) with given by
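The quadrature error that Lemma 2 controls can be checked directly. A small Python sketch, as our own illustration (the integrand $e^{s}$ and the evaluation point $x=0.7$ are arbitrary choices), applies the linear transformation used above together with an $M$-point Legendre-Gauss rule to $\int_{-1}^{x} e^{s}\,ds$, whose exact value is $e^{x}-e^{-1}$:

```python
import numpy as np

x = 0.7                                   # arbitrary upper limit in (-1, 1)
exact = np.exp(x) - np.exp(-1.0)          # exact value of \int_{-1}^{x} e^s ds
errors = []
for M in (2, 4, 6, 8):
    t, w = np.polynomial.legendre.leggauss(M)   # M-point Legendre-Gauss rule
    h = 0.5 * (1.0 + x)                         # Jacobian of s = h*(t+1) - 1
    s = h * (t + 1.0) - 1.0                     # quadrature points in (-1, x)
    errors.append(abs(h * np.sum(w * np.exp(s)) - exact))
    print(f"M = {M}   error = {errors[-1]:.2e}")
```

For a smooth integrand, the error decays faster than any power of $M$, consistent with the bound in Lemma 2.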

Using Lemma 2, we can obtain the following.

Corollary 3. *Suppose that the conditions of Lemma 2 hold, then for , the following two estimates hold:
*

*Proof. *We observe that if (42) holds, then by using the fact
we can easily obtain the result (43). Thus, we only need to prove (42). In fact, by using the inverse inequality relating two norms weighted with different Jacobi weight functions in Theorem 3.31 of [19], there exists a positive constant independent of such that, for and ,
By the above inequality, we can obtain that
where combining with (31) yields the desired conclusion (42).

#### 4. The Stability and Convergence Analysis under the $L^{\infty}$ Norm

In this section, we will establish that, for sufficiently large and , the operator has a uniformly bounded inverse in the space and then show that the approximate solution attains the best possible convergence order under the $L^{\infty}$ norm. To this end, we first give some notations. For and , the notation is used to denote the space of functions whose th derivative is Hölder continuous on with exponent . The norm of the space is defined by

Lemma 4. *Suppose that the kernel function ; then the operator is a bounded linear operator from to ; that is, for ,
**
Moreover, for , the operator is also a linear bounded operator; that is, for ,
*

*Proof. *It is easy to verify that the operator is a linear operator from the space to the space or from to .

Next we illustrate that (48) holds. By the definition of the norm,
which implies that
On the other hand, for all , by introducing
we can obtain that
where using the triangle inequality yields that
It remains to estimate and . First, applying the Lagrange mean value theorem to yields that
A direct estimation of produces that
Thus, substituting the estimates (55)-(56) into the right hand side of (54) leads to
where (51) yields the desired conclusion (48).

In the following, we show that the result (49) holds. Notice that
Applying the Cauchy-Schwarz inequality to the right hand side of the above equation yields
which implies that
This completes the proof of (49).

The next result concerns the bound of the norm of for . For this purpose, we introduce a result on the Lebesgue constant corresponding to the Lagrange interpolation polynomials associated with the zeros of the Jacobi polynomials, which comes from Lemma 3.4 in [5]: Further, we also need another result of Ragozin, coming from [21, 22], which states that, for any , there exist a polynomial and a positive constant such that A combination of (61) and (62) shows that there exists a positive constant such that
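The Lebesgue-constant estimate quoted from [5] distinguishes the range $-1 < a, b \le -1/2$ (logarithmic growth) from larger parameters (algebraic growth). This contrast can be observed numerically; the Python sketch below is our own illustration (the function name `lebesgue_constant` and the sampling grid are ad hoc choices):

```python
import numpy as np
from scipy.special import roots_jacobi

def lebesgue_constant(N, a, b, neval=5000):
    """Estimate the Lebesgue constant of interpolation at N+1 Jacobi-Gauss points."""
    x, _ = roots_jacobi(N + 1, a, b)
    # Barycentric weights of the Lagrange basis at the nodes x.
    c = np.array([1.0 / np.prod(x[j] - np.delete(x, j)) for j in range(N + 1)])
    s = np.linspace(-1.0, 1.0, neval)
    d = s[:, None] - x[None, :]
    d[d == 0.0] = np.finfo(float).eps          # guard against exact node hits
    L = c[None, :] / d
    L /= L.sum(axis=1, keepdims=True)
    return float(np.max(np.sum(np.abs(L), axis=1)))   # max of the Lebesgue function

for N in (8, 16, 32, 64):
    print(N, f"{lebesgue_constant(N, -0.5, -0.5):6.2f}",
             f"{lebesgue_constant(N, 0.5, 0.5):8.2f}")
```

With $a=b=-1/2$ the printed constants grow only logarithmically in $N$, while with $a=b=1/2$ they grow visibly faster, consistent with the quoted estimate.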

Lemma 5. *Suppose that the kernel function . Then there exists a positive constant independent of such that when ,
*

*Proof. *It follows from Lemma 4 that, for , where combining with (63) shows that there exists a positive constant independent of such that
Substituting the estimate (48) into the right hand side of the above equation yields the desired conclusion with given by

We use the notation to denote the largest integer not exceeding . Moreover, by Theorem 3.10 in [23], if the kernel function is a smooth function, the operator has a bounded inverse; that is, for any , there exists a positive constant such that

Theorem 6. *Suppose that , . If we choose as follows:
**
then there exists a positive integer such that when and for ,
**
where appears in (67).*

*Proof. *It follows from the hypothesis that that tends to zero as tends to . Hence, using (64), there exists a positive integer such that, for ,

On the other hand, using (61) with the hypothesis that yields that there exists a positive constant such that, for ,
where combining (43) and (68) produces that there exists a positive constant ,
As before, by the fact that tends to as tends to , there exists a positive integer such that, for ,
Hence, when , combining these three estimates (67), (70), and (73) yields that
proving the desired conclusion (69).

Theorem 6 ensures that, for sufficiently large , the operator equation (20) has a unique solution . The next result considers the convergence order of the approximate solution in the norm.

Theorem 7. *Suppose that the kernel function , , and . If we choose as in (68), then there exist a positive constant and a positive integer such that, for ,
*

*Proof. *We first notice that it follows from the hypothesis that and that (3) has a unique solution , which implies that . By using the triangle inequality,
Upon the estimate (63) with , we only need to estimate the second term on the right hand side of the above equation. In fact, applying to both sides of (3) yields that
A direct computation of the above equation and (20) confirms that

By Theorem 6, there exists a positive integer such that, for ,
where combining with (61) leads to the existence of a positive constant such that
To obtain the estimation of the right hand side of equation (80), we let
Clearly,
It remains to estimate and , respectively. First, using the hypothesis that and the result (49) with produces that there exists a positive constant independent of such that
where combining the result (23) with yields that there exists a positive constant independent of such that
Hence, a combination of (84) and the following inequality
produces that there exists a positive constant such that
On the other hand, using the results (24) and (31) leads to the existence of a positive constant such that
where combining (68) and (85) yields that there exists a positive constant ,

A combination of the above estimation and (80), (82), and (86) yields the desired result.

Theorem 7 illustrates that the approximate solution obtained by the proposed method attains the best possible convergence order.

#### 5. The Stability and Convergence Analysis under the Weighted $L^{2}$ Norm

As in the previous section, we are going to prove that, for sufficiently large and , the operator has a uniformly bounded inverse in the space and then show that the approximate solution attains the optimal convergence order. To this end, we first give a few results.

Lemma 8. *Suppose that and . If , then one has that
*

*Proof. *We will prove that the result (89) holds in the following four cases: (1) and ; (2) and ; (3) while ; (4) and .

Firstly, we notice that, for , ,
which confirms the desired conclusion.

If the conditions and hold, then using
produces
which ensures the desired conclusion.

By an approach similar to that of the above case, the result (89) clearly holds for the case that while .

Finally, when the conditions and hold, using the following equation
produces
Thus, again using the same method as before yields the desired result.

Next we show that the operator is a bounded linear operator with a certain positive constant .

Lemma 9. *Suppose that , ; then is a bounded linear operator from into with ; that is, there exists a positive constant such that, for ,
*

*Proof. *By the estimate (49) in Lemma 4, there exists a positive constant such that, for ,
On the other hand, for , without loss of generality, we assume that . By introducing
we have
Hence, it remains to estimate and , respectively. For this purpose, by reformulating and as follows
and then employing Cauchy-Schwarz inequality to and , respectively, we can obtain that
Using the hypothesis that and the Lagrange mean value theorem yields that

A direct estimation for produces that
If the condition holds, then we have
otherwise, using (102) and combining (89) with and , there exists a positive constant such that
A combination of (98)–(104) and the triangle inequality yields that there exists a positive constant such that
which together with (96) yields the desired conclusion.

The next result concerns the difference between and for . For this purpose, we will make use of the following result proposed in [5]. For any , there exists a positive constant independent of such that

A combination of (61) and (106) shows that there exists a positive constant such that, for ,

Again using Theorem 3.10 in [23], we know that 0 is the unique eigenvalue of the Volterra integral operator ; consequently, the operator has a bounded inverse; that is, for any , there exists a positive constant such that
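That a Volterra operator with a smooth kernel has no nonzero spectrum can be illustrated numerically. The sketch below is our own illustration, not the paper's construction: it discretizes $\int_0^t K(t,s)y(s)\,ds$ with a trapezoidal Nyström rule on a uniform grid (with an assumed kernel $e^{t-s}$). The resulting matrix is lower triangular, so its eigenvalues are its diagonal entries of size $O(h)$; Gelfand's formula $\rho = \lim_k \|A^k\|^{1/k}$ gives a numerically robust confirmation, since directly computed eigenvalues of such highly nonnormal matrices are unreliable.

```python
import numpy as np

def volterra_matrix(n, kernel):
    """Trapezoidal Nystrom discretization of \int_0^t K(t,s) y(s) ds on [0, 1]."""
    t = np.linspace(0.0, 1.0, n)
    h = t[1] - t[0]
    A = np.zeros((n, n))
    for i in range(1, n):
        wgt = np.full(i + 1, h)             # trapezoidal weights on [0, t_i]
        wgt[0] = wgt[-1] = 0.5 * h
        A[i, : i + 1] = wgt * kernel(t[i], t[: i + 1])
    return A

A = volterra_matrix(200, lambda t, s: np.exp(t - s))   # smooth test kernel

# Lower triangular: the eigenvalues are the diagonal entries, of size O(h).
print(f"max |diagonal entry| : {np.max(np.abs(np.diag(A))):.3e}")
# Gelfand estimates of the spectral radius tend to zero as k grows.
for k in (5, 20, 50):
    r = np.linalg.norm(np.linalg.matrix_power(A, k), 2) ** (1.0 / k)
    print(f"||A^{k}||^(1/{k}) = {r:.3f}")
```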

Theorem 10. *Suppose that and . If one chooses as follows:
**
then there exists a positive integer such that and for ,
**
where appears in (108).*

*Proof. *This proof is similar to that of Theorem 6. By Lemma 9, for , we have , where combining with (95) and (107) shows that there exists a positive constant such that
Hence, by the fact that , there exists a positive integer such that, for ,

On the other hand, using (106) obtains that there exists a positive constant such that
By the hypothesis that , and , a combination of (42) and (109) yields that there exists a positive constant such that
Substituting the above estimate into the right hand side of (113) produces that
Again using the fact that , there exists a positive integer such that, for ,

When , the three estimates (108), (112), and (116) yield that
which implies our result.

The above result shows that (20) has a unique solution in the space . The next result considers the convergence order of the approximate solution .

Theorem 11. *Suppose that the kernel function , , and . If one chooses as in (109), then there exist a positive constant and a positive integer such that, for ,
*

*Proof. *The proof of Theorem 11 is similar to that of Theorem 7. It follows from Theorem 7 that , which implies that . By using the triangle inequality,
Upon the estimate in (23) with and , we only need to estimate . Employing the results (78) and (106) and Theorem 7, there exist a positive constant and a positive integer such that, for ,
To obtain the estimation of the right hand side of (120), we let
Clearly,

Upon the estimate (84), we only need to estimate . In fact, a combination of (87) and (109) yields that there exists a positive constant such that

A combination of (84) and (120)–(123) yields the desired result.

Theorem 11 illustrates that the proposed method preserves the optimal order of convergence.

#### 6. One Numerical Example

In this section, we present one numerical example to demonstrate the efficiency of the spectral Jacobi-collocation method for solving (3). In this example, we use two spectral collocation approaches, associated with the weight functions and , respectively. Here, we compute the Jacobi-Gauss quadrature nodes and weights by Theorems 3.4 and 3.6 discussed in [19]. All computer programs are written in Matlab.

*Example 6. *Consider the second kind Volterra integral equation (1) with
The corresponding exact solution is given by . As expected, the errors show an exponential decay: in the semilog representation, the error curves are essentially linear versus the degree of the polynomial.
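Since the example's kernel and exact solution did not survive extraction, the decay just described can still be illustrated with a stand-in problem. The Python sketch below is our own illustration (the kernel `K(x, s) = s`, exact solution $e^{x}$, and helper name `collocation_error` are assumptions, not the paper's data): it runs the collocation scheme of Section 2 for increasing polynomial degree and prints errors that fall to machine precision.

```python
import numpy as np
from scipy.special import roots_jacobi

u_exact = lambda x: np.exp(x)
g = lambda x: (2.0 - x) * np.exp(x) - 2.0 / np.e   # so u = g + \int_{-1}^x s u(s) ds

def collocation_error(N, a, b):
    """Max nodal error of the fully discrete scheme with N+1 collocation points."""
    x, _ = roots_jacobi(N + 1, a, b)                 # Jacobi-Gauss collocation points
    t, w = np.polynomial.legendre.leggauss(N + 1)    # Legendre-Gauss quadrature
    c = np.array([1.0 / np.prod(x[j] - np.delete(x, j)) for j in range(N + 1)])
    A = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        h = 0.5 * (1.0 + x[i])                       # map [-1, 1] onto [-1, x_i]
        s = h * (t + 1.0) - 1.0
        L = c[None, :] / (s[:, None] - x[None, :])
        L /= L.sum(axis=1, keepdims=True)            # Lagrange basis at s (barycentric)
        A[i, :] = h * (w * s) @ L                    # kernel K(x, s) = s
    u = np.linalg.solve(np.eye(N + 1) - A, g(x))
    return float(np.max(np.abs(u - u_exact(x))))

for N in (2, 4, 6, 8, 10, 12):
    print(f"N = {N:2d}   error = {collocation_error(N, -0.5, -0.5):.2e}")
```

Changing the parameters `(a, b)` reproduces the two collocation approaches mentioned above; in the semilog scale the printed errors fall essentially linearly in $N$.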

From the theoretical results we observe that the numerical errors should decay with an exponential rate, and we also find that the errors show an exponential decay (Tables