Abstract

We develop the multiwavelet Galerkin method to solve Volterra–Fredholm integral equations. To this end, we represent the Volterra and Fredholm operators in multiwavelet bases and reduce the problem to a linear or nonlinear system of algebraic equations. The most interesting results arise in the linear case, where thresholding is employed to decrease the number of nonzero entries of the coefficient matrix, which in turn reduces the computational effort. The convergence of the method is analyzed, and the numerical experiments confirm the analysis. To show the applicability of the method, we compare it with other methods; the comparisons indicate that our results are more accurate.

1. Introduction

A mathematical model of the spatiotemporal development of an epidemic yields the following Volterra–Fredholm integral equation (VFIE), where the given data functions are assumed to be continuous, the nonlinear term under the integral sign is assumed to be integrable, and the given functions are selected so that equation (1) has a unique solution.

Parabolic boundary value problems lead to integral equations of this type, which arise widely in various physical and biological models. The VFIE also appears in the literature in a mixed form, in which the given functions are analytic. Many authors have studied the mixed form of the VFIE numerically; among them, we can mention the collocation method [1], the projection method [2], the spline collocation method [3], the wavelet collocation method [4], the Adomian decomposition method [5], and others [6–8]. Among these studies, we focus on the paper [9], which uses the multiwavelet Galerkin method to solve the linear mixed VFIE. In [9], the wavelet transform matrix and the operational matrix of integration are utilized to reduce the linear mixed VFIE to a sparse linear system of algebraic equations. Only a small number of papers address the numerical or analytical solution of problem (1). In [10], the Lagrange collocation method is employed to solve this problem. Wang and Wang [11] applied the Taylor collocation method to find the numerical solution of the equation and investigated the convergence of the proposed method. The Taylor collocation method was also applied by Karamete and Sezer [12] to this equation, as well as to high-order linear Fredholm–Volterra integrodifferential equations [13]. The Bell polynomials have likewise been employed for solving this equation [14].

The motivation of our work is to develop the multiwavelet Galerkin method to solve (1). We split the problem into two configurations, linear and nonlinear. After applying the multiwavelet Galerkin method, the two types reduce to systems of linear and nonlinear algebraic equations, respectively. The most interesting results arise in the linear case, where thresholding is employed to decrease the number of nonzero entries of the coefficient matrix. This yields a sparse system, which is very useful for reducing the computational cost. We use Alpert's multiwavelet bases, constructed in [15] and developed further in [16]; these bases have been used to solve PDEs, ODEs, and integral equations [9, 17–19]. They combine high vanishing moments and compact support with properties such as orthogonality and interpolation [15]. These characteristics of Alpert's multiwavelets lead to sparse representations of differential and integral operators [16, 20].

This paper is organized as follows: In Section 2, we briefly introduce Alpert's multiwavelet bases. In Section 3, the multiwavelet Galerkin method is used to solve the problem, and the conditions for convergence of the proposed method are discussed. Section 4 contains numerical results that confirm the validity and efficiency of the proposed method, and Section 5 contains a few concluding remarks.

2. Alpert’s Multiwavelet Bases

We consider uniform finite discretizations in which the subintervals are determined by dyadic points. We introduce the subspace of piecewise polynomial bases of degree less than the multiplicity parameter, spanned by dilates and translates of the primal interpolating scaling bases introduced by Alpert et al. [16]. Given the nodes, taken as the roots of the Legendre polynomial of the appropriate degree, the interpolating scaling bases are defined via the Lagrange interpolating polynomials at these points and the Gauss–Legendre quadrature weights [16, 18]. These bases form an orthonormal basis with respect to the L2-inner product. Due to the definition of these spaces, they have finite dimension and obviously are nested:
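As a concrete illustration, the interpolating scaling functions on [0, 1] can be built directly from the Lagrange polynomials at the Gauss–Legendre nodes. The following Python sketch (the function name and the normalization by the square root of the quadrature weight are our own, following the construction attributed to [16]) evaluates them:

```python
import numpy as np

def scaling_basis(r):
    """Build the r interpolating scaling functions on [0, 1]:
    phi_k = L_k / sqrt(w_k), with L_k the Lagrange polynomial at the
    Gauss-Legendre nodes tau_k and w_k the corresponding weights
    (nodes and weights mapped from [-1, 1] to [0, 1])."""
    x, w = np.polynomial.legendre.leggauss(r)
    tau, w = (x + 1) / 2, w / 2          # map rule to [0, 1]

    def phi(k, t):
        t = np.asarray(t, dtype=float)
        L = np.ones_like(t)
        for j in range(r):               # Lagrange polynomial through tau_k
            if j != k:
                L *= (t - tau[j]) / (tau[k] - tau[j])
        return L / np.sqrt(w[k])         # orthonormal scaling

    return tau, w, phi
```

With this normalization, the functions are orthonormal in the L2-inner product on [0, 1], which can be checked with a finer quadrature rule.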

Hence, we consider the complement subspace of each scaling space in the next finer one, so that the finer space splits as an orthogonal sum. According to (6), the space may be inductively decomposed to

The complement subspace has finite dimension and is spanned by the multiwavelet bases as

Because Alpert’s multiwavelets are introduced thoroughly in [16], we omit the details here and refer the reader to [15, 16, 19].

Every function can be represented in the form where the orthogonal projection maps onto the scaling subspace. Finding the coefficients requires the evaluation of integrals; to avoid computing these integrals directly, we apply the Gauss–Legendre quadrature with a suitable choice of the weights and nodes [16, 19], via
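For the interpolating scaling bases, the quadrature reduces each coefficient to a scaled point value of the function at a mapped Gauss–Legendre node, so no integrals need to be evaluated at all. A minimal Python sketch (the function name and the coefficient normalization sqrt(h·w_k)·f(x_k) are our assumptions, consistent with an orthonormal interpolating basis):

```python
import numpy as np

def scaling_coeffs(f, J, r):
    """Coefficients of the level-J interpolating scaling expansion of f
    on [0, 1]: c_{b,k} = sqrt(h * w_k) * f(x_{b,k}), where h = 2^{-J} and
    x_{b,k} is the k-th Gauss-Legendre node mapped into the b-th cell."""
    nodes, weights = np.polynomial.legendre.leggauss(r)  # rule on [-1, 1]
    tau, w = (nodes + 1.0) / 2.0, weights / 2.0          # mapped to [0, 1]
    h = 2.0 ** (-J)                                      # cell length
    coeffs = np.empty((2 ** J, r))
    for b in range(2 ** J):                              # translation index
        x = (b + tau) * h                                # nodes in cell b
        coeffs[b] = np.sqrt(h * w) * f(x)                # point evaluations
    return coeffs
```

By orthonormality, the sum of squared coefficients approximates the squared L2-norm of the function, exactly so for polynomials of degree below the multiplicity.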

The convergence of the projection is quantified for functions that are continuously differentiable up to the order of the multiplicity parameter:

For the full proof of this approximation result and further details, we refer the reader to [15]. Thus, we conclude that the projection converges to the underlying function at the stated rate.
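This decay can be observed numerically. The sketch below (function name and the uniform evaluation grid are ours) measures the sup-norm error of the cell-wise Lagrange interpolant at the Gauss–Legendre nodes, which is how the projection is realized when its coefficients are computed by the quadrature above; refining the level by one should shrink the error by roughly a factor of 2^r for multiplicity r:

```python
import numpy as np

def projection_error(f, J, r, m=200):
    """Sup-norm error of the level-J piecewise polynomial interpolant of f
    at the r Gauss-Legendre nodes of each dyadic cell of [0, 1]."""
    nodes, _ = np.polynomial.legendre.leggauss(r)
    tau = (nodes + 1) / 2
    h = 2.0 ** (-J)
    err = 0.0
    for b in range(2 ** J):
        xk = (b + tau) * h                       # interpolation nodes
        xs = np.linspace(b * h, (b + 1) * h, m)  # evaluation grid
        p = np.polyfit(xk, f(xk), r - 1)         # exact interpolant (r pts)
        err = max(err, np.max(np.abs(f(xs) - np.polyval(p, xs))))
    return err
```

For a smooth function such as sin, the ratio of errors between consecutive levels is close to 2^r.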

Assume that the vector function whose entries are the scaling functions is called the multiscaling function. Approximation (9) may be rewritten using this vector as follows, where the coefficient vector has the corresponding dimension. The same building blocks of this basis construction can be applied to approximate higher-dimensional functions. To this end, one can introduce the two-dimensional subspace that is spanned by

Thus, to derive an approximation of a bivariate function by the corresponding projection operator, we have the expansion whose square coefficient matrix is obtained by the associated quadrature rule. Assume the partial derivatives of the function up to the order of the multiplicity parameter are continuous. Under this assumption, the error of this approximation can be bounded as follows, where the bound involves a constant.

Given the orthogonal projection operator onto the wavelet subspace, the multiscale projection operator can be represented as an orthogonal sum and, consequently, any function can be approximated as a linear combination of multiwavelets and single-scale interpolating scaling functions, where

Note that the single-scale coefficients can be computed using (10), but in many cases the multiwavelet coefficients from level zero up to the higher levels must be evaluated numerically. To avoid this, we use the multiwavelet transform matrix introduced in [18, 19]. This matrix connects the multiwavelet bases and the multiscaling functions, where the transformed vector has the same dimension. This representation allows us to rewrite equation (18) in a form in which the coefficient vector collects the scaling and wavelet coefficients and is obtained by employing the multiwavelet transform matrix.

The multiwavelet coefficients (details) become small with increasing refinement level when the underlying function is (locally) smooth. If the multiwavelet bases have vanishing moments [19, 21], the details decay at a rate governed by the number of vanishing moments [17]. Because the number of vanishing moments of Alpert's multiwavelets equals the multiplicity parameter, the corresponding decay rate follows. This allows us to truncate the full wavelet transform while preserving most of the necessary data. Thus, using the thresholding operator, we can set to zero all details that satisfy a certain constraint, with the retained elements determined by comparison against a small positive threshold. The approximation error after thresholding can then be bounded [17], where the projection operator after thresholding uses the given threshold and the constant is independent of the refinement level.
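Hard thresholding is simple to state in code. The following sketch (function name is ours) zeroes every entry whose magnitude does not exceed the threshold, which is the operation applied to the coefficient matrix in the linear case:

```python
import numpy as np

def hard_threshold(A, eps):
    """Hard-thresholding operator: zero out entries with |a_ij| <= eps,
    leaving all larger entries unchanged."""
    B = A.copy()
    B[np.abs(B) <= eps] = 0.0
    return B
```

Stored in a sparse format, the thresholded matrix then requires memory and matrix-vector work proportional only to the surviving entries.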

3. Multiwavelet Galerkin Method

In order to obtain the multiwavelet Galerkin solution of (1), assume that the solution can be approximated as an expansion in Alpert's multiwavelets, where the vector of unknown coefficients must be specified. This solution is selected so that it satisfies (1) approximately; it is obtained from the solution of the minimization problem

Since the approximation space is a finite-dimensional inner product space, it can be shown that this minimization problem has a unique solution [22].
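Before turning to the operator formulation, a minimal sketch of the resulting discrete system may be helpful. The display equations of this section are not reproduced here, so we assume the linear VFIE takes the standard form u(t) = f(t) + λ₁∫₀ᵗ k₁(t,s)u(s)ds + λ₂∫₀¹ k₂(t,s)u(s)ds on [0, 1]; the lowest-order case (piecewise-constant bases, i.e., multiplicity one) with midpoint quadrature then reduces the equation to a linear algebraic system. All names below are our own:

```python
import numpy as np

def solve_linear_vfie(f, k1, k2, lam1=1.0, lam2=1.0, J=5):
    """Piecewise-constant Galerkin/quadrature sketch for the assumed form
        u(t) = f(t) + lam1 * int_0^t k1(t,s) u(s) ds
                    + lam2 * int_0^1 k2(t,s) u(s) ds,  t in [0, 1],
    on 2^J dyadic cells; returns cell midpoints and nodal values of u."""
    n = 2 ** J
    h = 1.0 / n
    t = (np.arange(n) + 0.5) * h                # cell midpoints
    T, S = np.meshgrid(t, t, indexing="ij")
    # Volterra part: lower-triangular, half weight on the diagonal cell
    V = np.tril(k1(T, S), -1) + 0.5 * np.diag(k1(t, t))
    F = k2(T, S)                                # Fredholm part: full matrix
    A = np.eye(n) - h * (lam1 * V + lam2 * F)   # coefficient matrix
    u = np.linalg.solve(A, f(t))
    return t, u
```

For constant kernels and a forcing chosen so that the exact solution is u ≡ 1, the midpoint rule is exact and the discrete solution reproduces it to machine precision.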

Let us rewrite (1) in operator form, where the operator combines the Volterra and Fredholm parts. Furthermore, this operator is assumed to be compact and the right-hand side is a given continuous function. Under these assumptions, the image of a continuous function under the operator is continuous and, consequently, the solution is also a continuous function. Due to (11), the data can be approximated at a rate at least equal to the projection rate, where the coefficient arrays are matrices and the remaining quantities are vectors of the corresponding dimension. Now, we introduce the residual in the approximation of (1), where

Symbolically,

To find the unknown coefficients, we require that the approximate solution satisfy

This is the multiwavelet Galerkin method, and it yields a linear or nonlinear system that must be solved to obtain the approximate solution. Note that the residual is orthogonal to the approximation space if and only if its projection onto that space vanishes. Thus, we can rewrite (31) as or, equivalently,

Note that, by orthonormality, the corresponding inner products vanish for distinct indices. Due to this, equation (34) can be rewritten as

According to (35), we obtain a system of linear or nonlinear equations. Owing to the high vanishing moments and an increasing refinement level, for the linear type of this equation we can discard small coefficients by the hard thresholding introduced in the previous section. We can then reduce the computational effort by using methods suited to such systems, for example the GMRES method. The GMRES method was introduced by Saad and Schultz [23] for solving large, sparse linear systems. It generates an approximate solution whose residual norm is minimized over a Krylov subspace. In this paper, we use the restarted GMRES method, Algorithm 2 [24]. To use this method, we must first describe Arnoldi's algorithm, a procedure for building an orthonormal basis of the Krylov subspace. The Krylov subspace is defined as follows:

Here, we assume the linear form of system (35) to be written as a matrix equation. The Arnoldi basis vectors form the columns of a matrix, and the associated Hessenberg matrix has nonzero entries defined by Algorithm 1.

(1) Choose a starting vector of unit norm
(2) For each column index, do
(3) Compute the inner products of the mapped vector with the previously built basis vectors
(4) Subtract the corresponding components to orthogonalize
(5) Compute the norm of the remaining vector
(6) If this norm vanishes, then stop
(7) Otherwise, normalize the remainder to obtain the next basis vector
(8) End do
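The steps above can be sketched in Python as follows (a standard modified Gram–Schmidt Arnoldi process; the function name and breakdown tolerance are ours):

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi process: orthonormal basis V of the Krylov subspace
    K_m(A, b) and the (m+1) x m upper-Hessenberg matrix H satisfying
    A V[:, :m] = V H, as in Algorithm 1."""
    n = b.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)           # step 1: unit starting vector
    for j in range(m):                        # steps 2-8
        w = A @ V[:, j]
        for i in range(j + 1):                # steps 3-4: orthogonalize
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)       # step 5
        if H[j + 1, j] < 1e-14:               # step 6: happy breakdown
            return V[:, : j + 1], H[: j + 2, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]         # step 7: next basis vector
    return V, H
```

The returned basis is orthonormal and satisfies the Arnoldi relation, which is what GMRES exploits to reduce the least-squares problem to Hessenberg form.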
(1) Compute the initial residual, its norm, and the normalized starting vector
(2) Generate the Arnoldi basis and the Hessenberg matrix using the Arnoldi algorithm, starting with this vector
(3) Compute the minimizer of the reduced least-squares problem and update the approximate solution
(4) If the stopping criterion is satisfied, then stop; else take the updated solution as the new initial guess and go to step 1
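A compact Python sketch of restarted GMRES following these steps (the Arnoldi loop is inlined so the function is self-contained; the function name, tolerances, and restart limit are our own choices):

```python
import numpy as np

def gmres_restarted(A, b, m=20, tol=1e-10, max_restarts=50):
    """Restarted GMRES(m): at each cycle build an m-step Arnoldi basis from
    the current residual, minimize ||beta*e1 - H y|| by least squares,
    update x, and restart until the residual norm is below tol."""
    n = b.shape[0]
    x = np.zeros(n)
    for _ in range(max_restarts):
        r = b - A @ x                         # step 1: residual
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = r / beta
        k = m
        for j in range(m):                    # step 2: Arnoldi process
            w = A @ V[:, j]
            for i in range(j + 1):
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:           # breakdown: exact subspace
                k = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[: k + 1, :k], e1, rcond=None)  # step 3
        x = x + V[:, :k] @ y                  # step 4: update and restart
    return x
```

For a well-conditioned (here diagonally dominant) matrix, a small restart length already drives the residual to near machine precision.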

To investigate convergence, one can prove that the projected operators converge to the exact operator; this holds because the underlying operator is compact. Now, we can state the convergence theorem.

Theorem 1. Let the integral operator be compact and the associated operator be injective. Assume that the sequence of approximating operators is collectively compact and pointwise convergent to the exact operator.

Then, the inverse of the approximate operator exists and is uniformly bounded. Also, the solutions of (27) and (31) satisfy the error estimate

Proof. Because the sequence of approximating operators converges pointwise and is collectively compact, we conclude the corresponding norm estimate. For all sufficiently large refinement levels, this estimate holds, and, as a consequence, the approximate operator is invertible. Note that the inverse of the exact operator exists by the Riesz theory. The investigation is based on approximating the exact inverse by the approximate one. To prove the existence of the approximate inverse, assume the stated bound. Then the inverse exists and is uniformly bounded by the geometric series theorem. Using (42), the inverse exists; taking norms and using (42), we obtain the uniform bound. Applying the inverse operator to both sides of (27) and rearranging, we obtain an identity; subtracting (35) from (44) and then taking norms and employing (43), we obtain the estimate, which is equivalent to (37). It remains to show that the consistency term tends to zero as the refinement level increases. Let the exact solution be approximated by a sequence of continuous functions converging to it. Since the orthonormal projection is bounded, for each positive tolerance there exists an index beyond which the corresponding term is smaller than the tolerance. This implies that the consistency term is arbitrarily small for sufficiently large refinement levels and, because the tolerance is arbitrary, it consequently tends to zero.

4. Numerical Experiments

To verify the accuracy and efficiency of the proposed method, we consider a series of numerical examples. In the linear type of equation (1), we aim to generate a sparse matrix to reduce the computational cost. We illustrate the rate of sparsity, which is defined in [21, 25] in terms of a small positive threshold, the number of elements remaining after thresholding, and the total number of elements.
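Since the display formula for the sparsity rate is not reproduced here, the sketch below uses one common form, the percentage of entries discarded by thresholding (the function name and this normalization are our assumptions):

```python
import numpy as np

def sparsity_rate(A, eps):
    """Percentage of matrix entries discarded by hard thresholding with
    threshold eps: 100 * (total - kept) / total."""
    kept = np.count_nonzero(np.abs(A) > eps)
    return 100.0 * (A.size - kept) / A.size
```

A rate close to 100 means almost all entries fall below the threshold and the stored matrix is very sparse.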

Example 1. Consider the linear VFIE given in [11]. The exact solution is reported in [11].
The effects of the multiplicity parameter, the refinement level, and thresholding with different thresholds are reported in Table 1 and Figure 1. The results confirm the theoretical claims and demonstrate the effectiveness of the method. Note that the error decreases as both parameters increase, in agreement with the rate of convergence. We compare the error of the proposed method with the Lagrange collocation method [10], the Taylor collocation method [11], and the Taylor polynomial method [26] in Table 2. As Table 2 shows, our method is more flexible than the other methods: without changing the multiplicity parameter, we can improve the results. Figure 2 illustrates the effect of thresholding with different threshold parameters on the coefficient matrix. It can be seen that the number of nonzero matrix elements decreases as the threshold parameter increases.

Example 2. Let us consider the following VFIE. In Table 3 and Figure 3, we show the effect of the discretization and threshold parameters on the sparsity and the error. It is obvious that increasing the refinement level and the multiplicity parameter reduces the error. A comparison of the proposed method with other methods, namely the Taylor method [26] and the Lagrange collocation method [10], is reported in Table 4. In Figure 4, we illustrate the effect of thresholding on the coefficient matrix for different thresholds at fixed values of the refinement level and multiplicity parameter.

Example 3. Let us consider the nonlinear VFIE (1) with the data below. The exact solution of this equation is stated alongside.
In Figure 5, we illustrate the effect of the multiplicity parameter and the refinement level on the error, and the error of the approximation is plotted in Figure 6 for fixed values of these parameters.

Example 4. Consider the nonlinear VFIE with the data given below. The exact solution is stated alongside.
Table 5 shows the efficiency and accuracy of the proposed method. We observe that as the refinement level and multiplicity parameter increase, the errors decrease. Figure 7 shows the error of the proposed method for fixed values of these parameters.

5. Conclusion

We have employed the multiwavelet Galerkin method to solve Volterra–Fredholm integral equations. To this end, the Volterra and Fredholm operators are represented in multiwavelet bases. Applying this method leads to a linear or nonlinear system of algebraic equations. In the linear case, we obtain a sparse system via thresholding, owing to the decay of the wavelet coefficients. The convergence of the method is analyzed, and the corresponding rate of convergence is established. The numerical examples illustrate the efficiency and accuracy of the method.

Data Availability

The raw/processed data required to reproduce these findings cannot be shared at this time due to legal or ethical reasons.

Conflicts of Interest

The author declares that there are no conflicts of interest.

Acknowledgments

This project was supported by researchers supporting project number RSP-2020/210, King Saud University, Riyadh, Saudi Arabia.