Journal of Function Spaces
Volume 2017 (2017), Article ID 4751357, 11 pages
https://doi.org/10.1155/2017/4751357
Research Article

Convergence Analysis of Generalized Jacobi-Galerkin Methods for Second Kind Volterra Integral Equations with Weakly Singular Kernels

School of Mathematics and Quantitative Economics, Shandong University of Finance and Economics, Jinan, Shandong 250014, China

Correspondence should be addressed to Haotao Cai; caihaotao@sdufe.edu.cn

Received 5 June 2017; Accepted 13 July 2017; Published 31 August 2017

Academic Editor: Xinguang Zhang

Copyright © 2017 Haotao Cai. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We develop a generalized Jacobi-Galerkin method for Volterra integral equations of the second kind with weakly singular kernels. In this method, we first enrich the approximation space of the conventional Jacobi-Galerkin method with certain known singular nonpolynomial functions. Second, we use Gauss-Jacobi quadrature rules to approximate the integral term in the resulting equation so as to obtain high-order accuracy for the approximation. We then establish that the approximate equation has a unique solution and that the approximate solution attains an optimal convergence order. One numerical example is presented to demonstrate the effectiveness of the proposed method.

1. Introduction

In this paper we present a generalized Jacobi-Galerkin method for solving Volterra integral equations of the second kind with weakly singular kernels. Specifically, for a given function with and a parameter , we define a Volterra integral operator by and then consider the Volterra integral equation of the form where is a given function and is the unknown to be determined.

In view of the singularity of the kernel function in the operator , the solution of (2) exhibits a singularity in its derivative at the point even if the forcing term is a smooth function. Many numerical methods based on spline approximation have been proposed to overcome the difficulty caused by the singularity of the solution of (2) (see [1–8]). Recently, spectral methods using Jacobi polynomial bases have received considerable attention for approximating the solutions of integral equations due to their high accuracy and easy implementation (see [9–17]). In particular, Chen and Tang [11] proposed a Jacobi-collocation spectral method for second kind Volterra integral equations with weakly singular kernels, employing function and variable transformations to change the equation into a new Volterra integral equation possessing better regularity so that orthogonal polynomial theory could be applied. In [12], they proposed a spectral Jacobi-Galerkin approach for solving (2), with a rigorous error estimate given in both the infinity norm and the weighted square norm. To the best of our knowledge, all existing spectral methods either suppose that the original equation has a sufficiently smooth solution or convert the equation into a new one whose solution has better regularity than that of the original equation (2) so that the spectral method can be applied. However, such function transformations make the resulting equations and approximations more complicated, which motivates the generalized spectral method considered here.

We organize this paper as follows. In Section 2, we develop a generalized Jacobi-Galerkin method for solving (2), and for the resulting semidiscrete system we construct an efficient numerical integration scheme so as to obtain the fully discrete linear system. In Sections 3 and 4, we give a few technical results and carry out the stability and convergence analysis, respectively. In Section 5, one numerical example is presented to illustrate the efficiency and accuracy of this method, and a conclusion is drawn.

2. A Generalized Jacobi-Galerkin Method

In this section, we first introduce some index sets: with and with for . We let ,  , be a Jacobi weight function and let denote the space of measurable functions whose square is Lebesgue integrable in relative to the Jacobi weight function . The inner product and norm of this space are given by For , we let be the Jacobi orthonormal polynomial of degree relative to the weight function .
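As a concrete illustration of the weighted setting above, the following sketch evaluates the weighted inner product with a Jacobi weight (1−x)^α(1+x)^β by Gauss-Jacobi quadrature and checks the orthonormality of normalized Jacobi polynomials. The weight parameters α = β = −1/2 are an arbitrary choice for the demonstration, not the paper's.

```python
# Sketch: weighted L^2 inner product with a Jacobi weight w(x) = (1-x)^a (1+x)^b
# on [-1, 1], checking orthonormality of normalized Jacobi polynomials.
# (Illustrative only; the paper's exact weight parameters are assumptions here.)
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi, gamma

a, b = -0.5, -0.5  # assumed Jacobi weight exponents (must satisfy a, b > -1)

def jacobi_norm_sq(n, a, b):
    # ||P_n^{(a,b)}||^2 under the weight (1-x)^a (1+x)^b (standard formula)
    if n == 0:
        return 2.0**(a + b + 1) * gamma(a + 1) * gamma(b + 1) / gamma(a + b + 2)
    return (2.0**(a + b + 1) / (2 * n + a + b + 1)
            * gamma(n + a + 1) * gamma(n + b + 1)
            / (gamma(n + 1) * gamma(n + a + b + 1)))

def weighted_inner(f, g, a, b, npts=64):
    # Gauss-Jacobi quadrature: sum_k w_k f(x_k) g(x_k) ~ int f g (1-x)^a (1+x)^b dx
    x, w = roots_jacobi(npts, a, b)
    return np.sum(w * f(x) * g(x))

# Orthonormalized Jacobi polynomial of degree n
phi = lambda n: (lambda x: eval_jacobi(n, a, b, x) / np.sqrt(jacobi_norm_sq(n, a, b)))

print(weighted_inner(phi(3), phi(3), a, b))  # ≈ 1
print(weighted_inner(phi(3), phi(5), a, b))  # ≈ 0
```

Since the integrands are polynomials of degree at most 8, the 64-point rule integrates them exactly up to rounding.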

The following result regarding the regularity of the solution of (2) comes from [6].

Theorem 1. Suppose that with . Then the original equation (2) has a unique solution . Moreover, if the function is expressed as where , then the solution can be written in the form Here and , and the coefficients and are some constants.

Now we define an index set by and denote by the cardinality of the set , and then we define a nonpolynomial function set by With the notation above, Theorem 1 can be restated as follows.

Corollary 2. Suppose that the kernel function . If there exist some constants ,  , such that where , then there exist some constants ,  , such that the solution has the similar decomposition where .

Now we introduce another finite dimensional space given by and then let

The generalized spectral Galerkin method for solving (2) is to seek a vector such that satisfies the equation Letting denote the orthogonal projection operator from onto , the equation above has the operator form By expression (8), can be written as

The conventional Jacobi-Galerkin method chooses as both the approximation space and the test space, but when the solution of the original equation has a singularity, the approximate solution suffers from low-order accuracy. To overcome this difficulty, as in [7], we include the set of known nonpolynomial functions reflecting the singularity of the original solution in the usual Jacobi-Galerkin approximation space . Hence, we call this method the generalized spectral Galerkin method.
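The effect of enriching the approximation space can be seen in a small numerical experiment: a least-squares fit of a function with a weak endpoint singularity using polynomials alone versus the same polynomial basis augmented with the singular function x^γ. The exponent and basis below are illustrative assumptions, not the paper's exact generalized space.

```python
# Sketch: augmenting a polynomial basis with a known singular function x^gamma
# dramatically improves approximation of a function whose derivative is
# singular at the endpoint x = 0. (Illustrative basis and exponent choices.)
import numpy as np

gamma_exp = 0.5                       # assumed singular exponent (e.g. 1 - mu)
x = np.linspace(0.0, 1.0, 400)
f = x**gamma_exp + x**2               # target with a weak singularity at x = 0

def lstsq_fit(basis_cols):
    # Least-squares fit of f in the span of the given basis columns
    A = np.column_stack(basis_cols)
    c, *_ = np.linalg.lstsq(A, f, rcond=None)
    return A @ c

deg = 8
poly_cols = [x**k for k in range(deg + 1)]
fit_poly = lstsq_fit(poly_cols)                        # polynomials only
fit_aug = lstsq_fit(poly_cols + [x**gamma_exp])        # augmented with x^gamma

err_poly = np.max(np.abs(fit_poly - f))
err_aug = np.max(np.abs(fit_aug - f))
print(err_poly, err_aug)   # the augmented space resolves the singularity far better
```

Because the target lies exactly in the augmented span, the augmented error drops to rounding level, while the pure polynomial error is limited by the singularity.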

Next we analyze this generalized Jacobi-Galerkin method. We first show the stability of the original operator .

Theorem 3. Suppose that ; then there exists a positive constant such that for

Proof. First, it follows from and Theorems and in [18] that the integral operator is compact. On the other hand, since is not an eigenvalue of the integral operator , we conclude that is injective from into itself. Thus, by Theorem   in [18], the inverse operator exists and is bounded. This establishes result (16).

On the other hand, for a function of a single variable, let denote the th generalized derivative operator, and for a function of several variables, let denote the th generalized partial derivative operator with respect to the variable . We introduce the nonuniformly weighted Sobolev space ,  , by with the norm Let denote the orthogonal projection operator from onto ; then it is clear that there holds Throughout the remainder of this paper, we use the symbol to denote a positive constant which may take different values on different occurrences. Moreover, it follows from [19] that, for , there exists a positive constant such that, for , which implies that In particular, if the function has the decomposition with and , then using (20) yields that
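The contrast behind projection estimates of type (20) can be illustrated numerically: for a smooth function the truncated Jacobi expansion error decays spectrally, while for a function of limited Sobolev regularity it decays only algebraically. The parameter choice α = β = 0 (the Legendre case) and the test functions below are illustrative assumptions.

```python
# Sketch: spectral vs algebraic decay of truncated Jacobi expansion errors
# (Legendre case, alpha = beta = 0) for smooth and singular test functions.
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi

a = b = 0.0  # Legendre case of the Jacobi weight, chosen for simplicity

def proj_error(f, N, npts=256):
    # Weighted L^2 error of the degree-N truncated expansion, with the
    # expansion coefficients computed by Gauss quadrature at npts points.
    x, w = roots_jacobi(npts, a, b)
    fx = f(x)
    err = fx.copy()
    for n in range(N + 1):
        Pn = eval_jacobi(n, a, b, x)
        norm_sq = 2.0 / (2 * n + 1)          # Legendre norm squared
        err -= (np.sum(w * fx * Pn) / norm_sq) * Pn
    return np.sqrt(np.sum(w * err**2))

smooth = lambda x: np.exp(x)
singular = lambda x: np.abs(1.0 + x)**0.6    # limited regularity at x = -1

for N in (4, 8, 16):
    print(N, proj_error(smooth, N), proj_error(singular, N))
# the smooth error collapses spectrally; the singular one decays only algebraically
```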

In the following we consider the stability and convergence results for the approximate equation (14).

Theorem 4. If , then there exists a positive integer such that and, for , where appears in (16). Moreover, there exists a positive constant such that

Proof. Since is a compact operator from into itself and for all as tends to , we conclude that there exists a positive integer such that, for and for , where this together with (16) and the triangle inequality yields conclusion (23).
On the other hand, subtracting (2) from (14) yields By applying the operator to both sides of (2), we have Thus, A combination of (27) and (29) gives where this together with (16) implies that Hence, by the solution expansion of (9) and (22) with ,  , and , we conclude that A combination of (31) and (32) yields the desired conclusion.

In the remainder of this section, we write the matrix form of (14). To this end, for , introducing we define four block matrices by It is clear that Likewise, we define the matrices , , , and . Using these notations, we define and by Associated with , by letting we define the vector as Thus, using the matrices and vectors above, the matrix form of (14) is written as
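To make the structure of such a linear system concrete, the following toy sketch sets up and solves a small collocation system for a manufactured weakly singular Volterra equation with exact solution y(t) = t, using a basis augmented with singular powers. The kernel (t − s)^{−1/2}, the basis exponents, and the collocation points are all illustrative assumptions, not the paper's scheme.

```python
# Sketch: small collocation solve of y(t) - int_0^t (t-s)^{-1/2} y(s) ds = f(t)
# using an augmented basis {1, t, t^{1/2}, t^{3/2}, t^2}; a toy stand-in for
# the paper's Galerkin linear system (kernel and basis are assumptions).
import numpy as np
from scipy.special import gamma as G

mu = 0.5
exps = [0.0, 1.0, 0.5, 1.5, 2.0]   # polynomial and singular exponents

def V_basis(p, t):
    # Closed form: int_0^t (t-s)^{-mu} s^p ds = t^{p+1-mu} * B(p+1, 1-mu)
    return t**(p + 1 - mu) * G(p + 1) * G(1 - mu) / G(p + 2 - mu)

# Manufactured problem: exact solution y(t) = t  =>  f(t) = t - V[s](t)
f = lambda t: t - V_basis(1.0, t)

tc = np.linspace(0.1, 1.0, len(exps))            # collocation points
A = np.array([[t**p - V_basis(p, t) for p in exps] for t in tc])
c = np.linalg.solve(A, f(tc))                     # solve the linear system

tt = np.linspace(0.0, 1.0, 50)
y_h = sum(ci * tt**p for ci, p in zip(c, exps))
print(np.max(np.abs(y_h - tt)))   # exact solution lies in the span
```

Since the exact solution belongs to the trial space, the recovered coefficients reproduce y(t) = t up to rounding.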

In order to solve system (39), the matrix entries of integral form in (39) must be computed. Hence, the main purpose of this section is to approximate the integral operator and the inner product by the Gauss-Jacobi quadrature rule. To this end, for and , we denote by and the set of Jacobi-Gauss points and the corresponding weights relative to the weight function . We use the notation to denote the set of all polynomials of degree not more than . Moreover, the classical Gauss-Jacobi quadrature rule is given by Thus, by relation (35) we only need to give the fully discrete forms of and . A direct computation using the Gauss-Jacobi quadrature rule (40) yields that In order to give the fully discrete form of the matrix , we first approximate the integral operator . For this purpose, for , we introduce a variable transformation which converts the interval into . Thus, the operator has the following form: In particular, when , then Next we define an integral operator by where Consequently, the integral operator is rewritten as In order to discretize the operator , we first discretize the operator . To this end, for , we define the Lagrange interpolation polynomial by where Thus, for , let denote the -degree interpolation polynomial of with respect to the variable relative to the weight ; that is, and then replacing by in (45) yields the discrete form of as follows: It follows from the Gauss-Jacobi quadrature rule (40) that Subsequently, we obtain the fully discrete form of the operator : In order to approximate the inner product and to ease the stability and convergence analysis of the fully discrete equation, we define the operator by On the other hand, let Using these notations, we replace and by and , obtaining where is given by To see that (56) is indeed a fully discrete form, we write it in matrix form.
To this end, suppose ; replacing the operator in (33) by the operator given in (54) and then using the Gauss-Jacobi quadrature (40) produce that As before, we define four matrices , , , and and then set On the other hand, replacing the function in (37) by the interpolation polynomial and then using the Gauss-Jacobi quadrature (40) produce that Using the notations above, in a similar manner, we let Hence, we have the following matrix form of (56): where

3. Some Useful Results

In this section we are going to give some technical results so as to analyze the fully discrete equation (56).

Lemma 5. Suppose the kernel function . If , then there exists a positive constant such that, for ,

Proof. We only need to show the first inequality in (64), since the proof of the other is the same. In fact, a direct application of the high-order derivative formula to yields that where is the binomial coefficient given by . Clearly, Substituting the above result into the right-hand side of (65) yields that Applying the Cauchy-Schwarz inequality to the right-hand side of (67) yields the desired conclusion with given by

Now we estimate the difference between and for . To this end, we recall the following result from [8]: for , there exists a positive constant such that, for ,

Lemma 6. Suppose the kernel function . If the three parameters ,  , and   satisfy ,  ,  and , then there exists a positive constant such that, for , Similarly, there also exists a positive constant such that, for ,

Proof. We only prove the first result (69); the proof of the other is the same. We first observe that Associated with the above equation, we denote by the left-hand side of (71) and then define and by It is clear that Applying the Cauchy inequality to the right-hand side of (73) produces that From the hypotheses that and , it follows that On the other hand, an application of (68) produces that It follows from the first estimate in (64) that For , a direct estimation of using the condition that yields that A combination of (73)–(78) yields the desired conclusion (69).

Now we introduce the operator as and then we estimate the difference between and .

Lemma 7. Suppose the kernel function . If the three parameters ,  ,  and satisfy , then there exists a positive constant such that, for , Similarly, there also exists a positive constant such that, for ,

Proof. As in Lemma 6, we only need to show that (80) holds. For , by the definition of the operators and , A direct estimation yields that in which combining result (68) with implies that In the following we estimate . If we let then By the assumption that , there exists a positive constant such that . Thus we define two functions and by It is obvious that Applying the Cauchy inequality to the right-hand side of (88) yields that Clearly, By using the second estimate in (64), we have Substituting results (89)–(91) into (86) yields that where is defined by It remains to estimate for . To this end, we let Then, for , a direct estimation of yields that which implies that A combination of (83), (92), and (96) produces the desired conclusion.

The next result concerns the difference between and . To this end, we recall the result proposed in [9–11]: for any , there exists a positive constant , independent of , such that

Lemma 8. Suppose the kernel function . If the three parameters ,  , and   satisfy , then there exists a positive constant such that, for , Moreover, if for , then there exists a positive constant such that

Proof. We only show that (98) holds, since the proof of result (99) is the same. For , using the triangle inequality produces that It follows from result (80) in Lemma 7 that we only need to estimate the second term on the right-hand side of (100). By the definitions of and , in which combining (97) produces that there exists a positive constant such that This together with (69) yields the desired conclusion (98).

As a consequence of Lemma 8, for , by using the inverse inequality between two norms weighted with different Jacobi weight functions in Theorem   in [19], we easily obtain the following.

Corollary 9. Suppose the conditions in Lemma 8 hold. Then there exists a positive constant such that, for ,

4. Convergence Analysis

In this section, we analyze the convergence of the approximate solution of the fully discrete generalized Jacobi-Galerkin method. First we give the stability analysis of the operator .

Theorem 10. Suppose the kernel function . If the three parameters ,  ,  and  satisfy , and if we choose as , then there exists a positive integer such that and, for , where appears in (16).

Proof. For , there exist functions , for , and a polynomial function such that By using (99) and (104), there exists a positive constant such that, for , Because of the result , we conclude that there exists a positive integer such that, for , where denotes the cardinality of the set , as in Section 2.
On the other hand, by estimate (103) in Corollary 9 and the choice of in (104), there exists a positive constant such that It follows from the fact that tends to zero as that there exists a positive integer such that, for , Hence, when , a combination of (109) and (111) produces that This, together with (23) and the following inequality, produces Hence, we obtain the desired conclusion.

Theorem 10 ensures that, for sufficiently large , if we select as in (104), then the fully discrete system (62) possesses a unique solution . The next result concerns the convergence of the approximate solution .

Theorem 11. Suppose the kernel function and is given by (8), the three parameters ,  ,  and satisfy , and is given by (104). Then there exist a positive constant and a positive integer such that, for ,

Proof. By using the triangle inequality, we have Based on expression (9) of the solution , an application of relation (19) yields that in which combining estimate (20) with produces