Abstract

We present an efficient numerical algorithm for computing the eigenvalue of linear homogeneous integral equations. The proposed algorithm combines an antithetic Monte Carlo algorithm with a low-discrepancy sequence, namely, the Faure sequence. To reduce the computational time we reduce the variance by using the antithetic variance reduction procedure. Numerical results show that our scheme is robust and accurate.

1. Introduction

In Monte Carlo (MC) methods the random variables are simulated by computer-generated pseudorandom sequences, and the numerical solution is obtained by averaging over a large number of simulations. Pseudorandom numbers mimic the realizations of independent identically distributed random variables. By a fundamental result in probability theory, the central limit theorem, the error of Monte Carlo methods based on such pseudorandom numbers can decrease only proportionally to $N^{-1/2}$, where $N$ is the number of simulations. It was noticed that the randomness of the simulating sequence is not essential for numerical integration. Deterministic sequences which fill the space uniformly can also be used. One of the most powerful methods for improving Monte Carlo simulation is to use low-discrepancy numbers [1]. Therefore, instead of using random numbers, we can employ a deterministic sequence of numbers. Such a sequence is called a low-discrepancy sequence. The great advantage of low-discrepancy sequences is that their rate of convergence is $O(N^{-1}(\log N)^s)$, with $s$ the dimension, rather than $O(N^{-1/2})$, which greatly increases the competitiveness of Monte Carlo simulations. Using low-discrepancy sequences to carry out a Monte Carlo algorithm is sometimes called a quasi-Monte Carlo (QMC) algorithm [1]. The Faure sequence is one of the well-known quasirandom sequences used in quasi-Monte Carlo applications. In [2, 3] the details of the Faure sequence and its various scrambling methods are described.

The problem of using Monte Carlo and quasi-Monte Carlo methods for finding an eigenpair of matrices has been studied extensively [4]. In [5] a numerical method for finding the eigenvalue of a linear homogeneous integral equation is proposed by applying the antithetic Monte Carlo method to obtain a reliable scheme.

In this paper we present a new numerical method, based on the antithetic variance reduction method and a quasi-Monte Carlo algorithm on the scrambled Faure sequence, for estimating the eigenvalue of the following linear homogeneous integral equation:
$$\int_0^1 k(x, y)\,\varphi(y)\,dy = \lambda\,\varphi(x), \qquad x \in [0,1]. \tag{1}$$
It can be written in the operator form
$$K\varphi = \lambda\varphi. \tag{2}$$
If $\varphi \not\equiv 0$, then $\lambda$ and $\varphi$ are called the eigenvalue and eigenfunction of the above equations, respectively. Also, $k(x,y)$ is called the kernel of this integral equation. Throughout this paper, we suppose that the kernel is symmetric and positive definite; that is,
$$k(x,y) = k(y,x), \qquad \int_0^1\!\!\int_0^1 k(x,y)\,f(x)\,f(y)\,dx\,dy > 0 \quad \text{for every } f \not\equiv 0, \tag{3}$$
and $k(x,y) \ge 0$ for all $x, y \in [0,1]$.
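For intuition, a minimal worked example (our own illustration with an assumed separable kernel, not one of the paper's test problems): for $k(x,y) = 3xy$ the eigenpair can be read off directly.

```latex
% Assumed illustrative kernel: k(x,y) = 3xy. Every image of K is a
% multiple of x, since
\[
(K\varphi)(x) = \int_0^1 3xy\,\varphi(y)\,dy
             = 3x \int_0^1 y\,\varphi(y)\,dy .
\]
% Trying \varphi(x) = x gives
\[
(K\varphi)(x) = 3x \int_0^1 y^2\,dy = x = 1 \cdot \varphi(x),
\]
% so \lambda = 1 is the only nonzero eigenvalue, with eigenfunction
% \varphi(x) = x; the kernel is symmetric and nonnegative on [0,1]^2.
```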

Now, we consider the following continuous Markov chain of length $m$:
$$x_0 \to x_1 \to \cdots \to x_m, \tag{4}$$
where $x_i \in [0,1]$, $i = 0, 1, \ldots, m$. The statistical nature of the chain (4) is determined by $p(x)$ and $p(x, y)$, which are the probability density of starting the chain at $x$ and the transition density from state $x$ to state $y$, respectively [1].
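For concreteness, here is a minimal sketch (our illustration, assuming the simplest admissible densities $p(x) \equiv 1$ and $p(x, y) \equiv 1$ on $[0,1]$) of how one realization of the chain (4) is obtained from $m + 1$ driving numbers:

```python
import numpy as np

def build_walk(u, inv_cdf_init=lambda t: t, inv_cdf_step=lambda t, x: t):
    """Map m+1 driving numbers u_0, ..., u_m from [0,1] to one realization
    x_0 -> x_1 -> ... -> x_m of the chain (4) by inverse-CDF sampling.

    The defaults correspond to the uniform densities p(x) = 1 and
    p(x, y) = 1 assumed here, for which each state equals its driving
    number; nonuniform densities would plug in their inverse CDFs.
    """
    x = [inv_cdf_init(u[0])]
    for t in u[1:]:
        x.append(inv_cdf_step(t, x[-1]))
    return np.array(x)

rng = np.random.default_rng(0)
print(build_walk(rng.random(6)))   # one walk of length m = 5
```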

Theorem 1. Under the above assumptions, the random variable
$$\Gamma_m = \frac{h(x_0)}{p(x_0)}\, W_m\, h(x_m), \quad \text{where } W_m = \prod_{j=1}^{m} \frac{k(x_{j-1}, x_j)}{p(x_{j-1}, x_j)},$$
is an unbiased estimator for $\langle h, K^m h \rangle$; that is,
$$E[\Gamma_m] = \langle h, K^m h \rangle = \int_0^1\!\!\int_0^1 h(x)\, k_m(x, y)\, h(y)\, dx\, dy,$$
where $h$ is a given function on $[0,1]$ and $k_m$ denotes the $m$-fold iterated kernel.

Proof. See [5].
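A sketch of the computation behind Theorem 1, under the form of $\Gamma_m$ stated above: the joint density of the chain cancels the importance weights exactly.

```latex
\[
E[\Gamma_m]
= \int_0^1\!\!\cdots\!\int_0^1
  \underbrace{p(x_0)\prod_{j=1}^{m} p(x_{j-1},x_j)}_{\text{density of the chain}}
  \cdot
  \frac{h(x_0)}{p(x_0)}
  \prod_{j=1}^{m}\frac{k(x_{j-1},x_j)}{p(x_{j-1},x_j)}\,
  h(x_m)\; dx_0 \cdots dx_m
\]
\[
= \int_0^1\!\!\cdots\!\int_0^1 h(x_0)\,
  \Big(\prod_{j=1}^{m} k(x_{j-1},x_j)\Big)\, h(x_m)\; dx_0 \cdots dx_m
= \langle h, K^m h\rangle .
\]
```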

Theorem 2. Under the above conditions, one has
$$\lambda = \lim_{m \to \infty} \frac{E[\Gamma_m]}{E[\Gamma_{m-1}]},$$
where $\lambda$ is the dominant eigenvalue of (2).

Proof. See [5].
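The limit in Theorem 2 can be seen from the spectral expansion of the symmetric positive definite kernel (a standard power-method argument; $\{\lambda_i, \varphi_i\}$ denote the eigenpairs of $K$ ordered so that $\lambda_1 > \lambda_2 \ge \cdots$):

```latex
\[
\langle h, K^m h\rangle = \sum_{i} \lambda_i^{\,m}\,\langle h,\varphi_i\rangle^2
\quad\Longrightarrow\quad
\frac{\langle h, K^m h\rangle}{\langle h, K^{m-1} h\rangle}
= \lambda_1\,
\frac{1 + \sum_{i\ge 2} (\lambda_i/\lambda_1)^{m}\, c_i}
     {1 + \sum_{i\ge 2} (\lambda_i/\lambda_1)^{m-1}\, c_i}
\;\longrightarrow\; \lambda_1 ,
\]
% where c_i = \langle h,\varphi_i\rangle^2 / \langle h,\varphi_1\rangle^2,
% provided \langle h,\varphi_1\rangle \neq 0.
```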

2. Monte Carlo and Quasi-Monte Carlo Methods

The Monte Carlo method was invented in 1946 by Stanislaw Ulam, a Polish-born mathematician who worked for John von Neumann on the United States' Manhattan Project during World War II [6]. Monte Carlo methods are based on the simulation of stochastic processes whose expected values are equal to computationally interesting quantities. MC methods offer simplicity of construction and are often designed to mirror some process whose behavior is understood only in a statistical sense; indeed, there are wide classes of problems for which MC methods are the only known computational method of solution. Despite this universality, a serious drawback of MC methods is their slow convergence, which is governed by the size of the statistical sampling error.

One generic approach to improving the convergence of MC methods is the use of highly uniform numbers in place of the usual pseudorandom numbers. While pseudorandom numbers are constructed to mimic the behavior of truly random numbers, these highly uniform numbers, called quasirandom numbers (or low-discrepancy sequences), are constructed to be as evenly distributed as mathematically possible. The use of quasirandom numbers in MC leads to quasi-Monte Carlo methods [7, 8]. Quasi-Monte Carlo methods are a variant of ordinary Monte Carlo methods that employ highly uniform quasirandom numbers in place of Monte Carlo's pseudorandom numbers, and they can be viewed as deterministic versions of Monte Carlo methods [7]. Determinism enters in two ways: by working with deterministic points rather than random samples and by the availability of deterministic error bounds instead of probabilistic MC error bounds. It could be argued that most practical implementations of MC methods are, in fact, quasi-Monte Carlo methods, since the purportedly random samples used in Monte Carlo calculations are often generated in the computer by a deterministic algorithm. In QMC methods the deterministic nodes are selected in such a way that the error bound is as small as possible. The completely deterministic nature of QMC procedures implies that we get deterministic, and thus guaranteed, error bounds [7]. In principle, it is therefore always possible to determine in advance an integration rule that yields a given accuracy. For example, to reduce the error by a factor of 10, MC requires 100 times more simulations, whereas QMC requires less (in general much less) than 100 times more, and only 10 times more in optimal cases.

Monte Carlo methods offer statistical error estimates; quasi-Monte Carlo, although it converges faster, offers no practical way to obtain error estimates from the sample values alone. A more recently proposed approach, the randomized quasi-Monte Carlo method, takes advantage of both MC and QMC. Randomness can be brought to bear on quasirandom sequences through scrambling and other related randomization techniques; treating each scrambled sequence as a different and independent random sample then provides an elegant way to obtain error estimates for quasi-Monte Carlo. The core of the randomized quasi-Monte Carlo method is to find an effective and fast algorithm to scramble (randomize) quasirandom sequences. Besides yielding error estimates, scrambling techniques provide a natural way to parallelize quasirandom sequences. This scheme is especially suitable for distributed or grid computing, since by scrambling a quasirandom sequence we can produce a family of related quasirandom sequences.
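The convergence comparison above can be stated compactly. For an integrand $f$ with standard deviation $\sigma(f)$, the root-mean-square MC error and the QMC error bound behave as

```latex
\[
\varepsilon_{\mathrm{MC}}(N) \approx \sigma(f)\, N^{-1/2},
\qquad
\varepsilon_{\mathrm{QMC}}(N) = O\!\big(N^{-1} (\log N)^{s}\big),
\]
% so reducing the MC error by a factor of 10 requires 100x more
% samples, while for QMC (ignoring the logarithmic factor) a factor
% of 10 more samples suffices.
```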

3. Quasirandom or Low-Discrepancy Sequences

Quasirandom numbers are constructed to minimize a measure of their deviation from uniformity called discrepancy. A quasirandom sequence is normally generated in the unit $s$-dimensional hypercube $[0,1)^s$ and attempts to fill the hypercube as uniformly as possible. The original construction of quasirandom sequences is related to the van der Corput sequence, a one-dimensional quasirandom sequence based on digital inversion. This digital inversion method is a central idea behind the construction of many current quasirandom sequences in arbitrary bases and dimensions. Following that, Halton [9] generalized the van der Corput sequence to $s$ dimensions, and Sobol [10] introduced the sequences that now bear his name. A significant generalization of these methods was proposed by Faure [6]. Niederreiter [7] generalized the existing constructions of the Sobol and Faure sequences to arbitrary bases; these are now called Niederreiter sequences. Tezuka [11] further generalized Niederreiter sequences by using the polynomial arithmetic analogue of Halton sequences. The construction of quasirandom sequences is based on minimizing their discrepancy: quasirandom sequences aim to have the fraction of their points within any subset as close as possible to the subset's volume fraction. Based on the star discrepancy $D_N^*$, the definition of a low-discrepancy sequence in $[0,1)^s$ is as follows.
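A minimal sketch of the digital inversion idea (our illustration; the base and the number of points are arbitrary parameters, and the Faure sequence applies the same radical inverse coordinate-wise in a prime base $b \ge s$ together with Pascal-matrix mixing of the digits):

```python
def radical_inverse(n, base):
    """Van der Corput radical inverse: reflect the base-b digits of n
    about the radix point, e.g. n = 6 = (110)_2 -> 0.011_2 = 0.375."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

# First points of the van der Corput sequence in base 2:
print([radical_inverse(n, 2) for n in range(1, 9)])
# -> [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625]
```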

Definition 3. For any $N \ge 1$ and any sequence $x_1, x_2, \ldots$ in $[0,1)^s$, let $D_N^*$ denote the star discrepancy of the first $N$ points of the sequence. If one has
$$D_N^* \le C_s\, \frac{(\log N)^s}{N},$$
where the constant $C_s$ depends only on the dimension $s$, then the sequence is called a low-discrepancy sequence.
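The star discrepancy controls the quasi-Monte Carlo integration error through the Koksma-Hlawka inequality (a standard result, stated here for context): for any $f$ of bounded variation $V(f)$ in the sense of Hardy and Krause,

```latex
\[
\left|
\frac{1}{N}\sum_{n=1}^{N} f(x_n) - \int_{[0,1)^s} f(u)\,du
\right|
\;\le\; V(f)\, D_N^* ,
\]
% so for a low-discrepancy sequence the integration error is
% O(N^{-1} (\log N)^s), compared with O(N^{-1/2}) for MC.
```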

There are several variants of random and quasirandom sequences. Tezuka formulated the generalized Faure sequence, based on the Halton sequence but using polynomials for reordering. According to Traub, the generalized Faure method converges significantly faster than Monte Carlo and the plain Faure method. In [2] we presented two methods that mix the Faure sequence with the best scrambled Faure sequence, called the MFaure and M2Faure sequences. Figures 1, 2, and 3 show the Faure, MFaure, and M2Faure sequences for dimension 40. Figure 1 illustrates a disadvantage of the Faure sequence: the Faure construction leads to a sequence with correlations between its individual coordinates. This results, among other things, in bad two-dimensional projections and also has consequences when the sequence is used for numerical integration. We must stress that its performance in low dimensions is better. In Figures 2 and 3, we see the advantages of the MFaure and M2Faure sequences [3]. In this paper, we employ the M2Faure sequence in the proposed algorithm.
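To illustrate what scrambling does, here is a minimal sketch only (a generic random digit scramble, not the actual MFaure/M2Faure construction of [2, 3]):

```python
import numpy as np

def digit_scramble(x, base, n_digits, rng):
    """Random digit scrambling: apply an independent random permutation
    of {0, ..., base-1} to each digit position of every point in x.
    This randomizes the points while preserving their uniformity."""
    x = np.asarray(x, dtype=float)
    perms = [rng.permutation(base) for _ in range(n_digits)]
    y = np.zeros_like(x)
    frac, scale = x.copy(), 1.0
    for perm in perms:
        frac *= base
        digits = np.floor(frac).astype(int) % base
        frac -= np.floor(frac)
        scale /= base
        y += perm[digits] * scale
    return y

rng = np.random.default_rng(42)
points = np.array([0.5, 0.25, 0.75, 0.125])   # van der Corput, base 2
print(digit_scramble(points, base=2, n_digits=16, rng=rng))
```

Drawing fresh permutations for each run produces the family of related quasirandom sequences mentioned above, from which error estimates can be computed.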

4. Variance Reduction Techniques

Monte Carlo simulation is slow to converge, and there are various methods for increasing the speed of convergence; these essentially come down to simulating the estimator in such a way that its variance is reduced. One simple method is antithetic sampling. With antithetic sampling one draws samples in pairs: if $u$ is uniformly distributed on $[0,1]$, then so is $1-u$. For every random draw, two paths are therefore simulated, one with $u$ and one with $1-u$. This ensures that the mean of the drawn pairs is centered by construction and, when the uniforms are mapped to normal variates, the symmetry of the normal distribution is achieved by construction [12].

Assume that we want to compute $I = E[f(u)] = \int_0^1 f(x)\,dx$, with $u$ a random variable uniformly distributed on $[0,1]$. The naive Monte Carlo (NMC) estimator is
$$\hat{I}_{\mathrm{NMC}} = \frac{1}{N} \sum_{i=1}^{N} f(u_i),$$
where $u_i$, $i = 1, \ldots, N$, are independent copies of $u$. In the antithetic variates (AV) method we also use the numbers $1 - u_i$ and introduce the antithetic Monte Carlo estimator
$$\hat{I}_{\mathrm{AV}} = \frac{1}{2N} \sum_{i=1}^{N} \big( f(u_i) + f(1 - u_i) \big).$$
We note that the AV estimator is unbiased [13]. Suppose that $\sigma^2 = \operatorname{Var}(f(u))$. Then the variance of the AV estimator is given by
$$\operatorname{Var}\big(\hat{I}_{\mathrm{AV}}\big) = \frac{1}{2N} \Big( \sigma^2 + \operatorname{Cov}\big( f(u), f(1-u) \big) \Big).$$
If $f(u)$ and $f(1-u)$ are negatively correlated, then we have a reduction of the variance compared to the naive Monte Carlo estimator based on $2N$ random numbers.
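A minimal numerical check of this variance comparison (our sketch; $f(x) = e^x$ is an assumed test integrand with $I = e - 1$, and both estimators are given the same budget of $2N$ evaluations):

```python
import numpy as np

def nmc(f, n, rng):
    """Naive Monte Carlo estimate of the integral of f over [0,1], 2n samples."""
    u = rng.random(2 * n)
    return f(u).mean()

def avmc(f, n, rng):
    """Antithetic-variates estimate using n pairs (u, 1-u), i.e. 2n evaluations."""
    u = rng.random(n)
    return 0.5 * (f(u) + f(1.0 - u)).mean()

rng = np.random.default_rng(1)
f = np.exp                        # integrand; exact integral is e - 1
runs = 2000
nmc_std = np.std([nmc(f, 500, rng) for _ in range(runs)])
av_std = np.std([avmc(f, 500, rng) for _ in range(runs)])
print(f"std NMC = {nmc_std:.2e}, std AV = {av_std:.2e}")
# f is monotone, so f(u) and f(1-u) are negatively correlated and the
# AV standard deviation comes out markedly smaller at equal cost.
```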

5. The Proposed Algorithm (AVQMC Algorithm)

(1) Choose a value of the Markov chain length $m$ and a value of the number of Markov chains $N$.
(2) Create continuous quasirandom walks (Markov chains) $x_0^{(s)} \to x_1^{(s)} \to \cdots \to x_m^{(s)}$, $s = 1, \ldots, N$, where the $x_i^{(s)}$ for $i = 0, \ldots, m$ are elements of the M2Faure sequence.
(3) For each quasirandom walk $s$ simulate the following random variable (see the sketch after this list):
(3.1) Create the antithetic walk $\bar{x}^{(s)}$ such that $\bar{x}_i^{(s)} = 1 - x_i^{(s)}$ for $i = 0, \ldots, m$.
(3.2) Set the random variable
$$\Gamma_m^{(s)} = \frac{1}{2} \Big( \Gamma_m\big(x^{(s)}\big) + \Gamma_m\big(\bar{x}^{(s)}\big) \Big),$$
where $\Gamma_m$ is the estimator of Theorem 1.
(4) Obtain the quasi-Monte Carlo estimation
$$\hat{\lambda} = \frac{(1/N) \sum_{s=1}^{N} \Gamma_m^{(s)}}{(1/N) \sum_{s=1}^{N} \Gamma_{m-1}^{(s)}}.$$
(5) Print $\hat{\lambda}$.
(6) End of algorithm.
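A compact end-to-end sketch of the algorithm follows. It assumes the uniform densities $p \equiv 1$, the constant test function $h \equiv 1$, the illustrative kernel $3xy$ used earlier, and a pseudorandom driver standing in for the M2Faure points (whose construction is given in [2, 3]):

```python
import numpy as np

def gamma(walk, kernel):
    """Estimator of Theorem 1 with p = 1 and h = 1: returns
    (Gamma_{m-1}, Gamma_m), the kernel products along the walk."""
    w = 1.0
    for a, b in zip(walk[:-2], walk[1:-1]):
        w *= kernel(a, b)
    return w, w * kernel(walk[-2], walk[-1])

def avqmc_eigenvalue(kernel, m, n_chains, rng):
    """Steps (1)-(4): antithetic pairs of walks, then the ratio estimate."""
    num = den = 0.0
    for _ in range(n_chains):
        x = rng.random(m + 1)               # stand-in for an M2Faure point
        for walk in (x, 1.0 - x):           # walk and its antithetic twin
            g_prev, g_last = gamma(walk, kernel)
            den += 0.5 * g_prev
            num += 0.5 * g_last
    return num / den

kernel = lambda x, y: 3.0 * x * y           # illustrative kernel, lambda = 1
rng = np.random.default_rng(7)
print(avqmc_eigenvalue(kernel, m=10, n_chains=20000, rng=rng))
# -> approximately 1.0
```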

6. Simulation Results

In this section we compare three Monte Carlo approaches, namely, naive Monte Carlo (NMC), antithetic variates Monte Carlo (AVMC) [13], and the proposed algorithm (AVQMC). We run the algorithms to evaluate the eigenvalue in the following examples and compare the relative errors of the AVQMC, AVMC, and NMC algorithms for each example. Figures 4 and 5 illustrate the relative error against the sample size for Examples 4 and 5, respectively. All figures show that, when the sample size is small, the AVQMC and AVMC algorithms are better than the NMC algorithm. When the sample size is large, AVQMC is better than both; a further advantage is that the relative errors of the method based on the Faure sequence are deterministic. We also report the estimate of the eigenvalue together with the standard deviation (STD) estimate for different values of the number of Markov chains $N$. For each example the Monte Carlo simulation result is summarized in a figure. The numerical results in Figures 6 and 7 confirm that the AVQMC algorithm is a reliable stochastic algorithm for computing the eigenvalue of homogeneous linear integral equations. We also give the standard deviation (STD) of the unbiased Monte Carlo estimator, the approximated eigenvalue, and the relative errors of the three methods for Examples 4 and 5 in Table 1.

Example 4. Consider equation (2) with kernel $k(x,y)$, $x, y \in [0,1]$. We can easily show that the exact eigenvalue is 1.

Example 5. Consider equation (2) with kernel $k(x,y)$, $x, y \in [0,1]$. We can easily show that the exact eigenvalue is known in closed form.
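As a reproducible stand-in for such test problems (the original kernels of Examples 4 and 5 are not reproduced here), the sketch below checks the ratio estimator of Section 5 against an assumed kernel whose dominant eigenvalue is known exactly:

```python
import numpy as np

# Hypothetical stand-in kernel (not the paper's): k(x,y) = 1 + xy.
# K maps everything into span{1, x}, so its dominant eigenvalue is the
# larger eigenvalue of the 2x2 matrix [[1, 1/2], [1/2, 1/3]], namely
# (4 + sqrt(13)) / 6.
kernel = lambda x, y: 1.0 + x * y
exact = (4.0 + np.sqrt(13.0)) / 6.0

def estimate(m, n_chains, rng):
    """Ratio estimate of Theorem 2 with antithetic pairs (p = 1, h = 1)."""
    num = den = 0.0
    for _ in range(n_chains):
        x = rng.random(m + 1)
        for walk in (x, 1.0 - x):
            w = 1.0
            for a, b in zip(walk[:-2], walk[1:-1]):
                w *= kernel(a, b)
            den += 0.5 * w
            num += 0.5 * w * kernel(walk[-2], walk[-1])
    return num / den

rng = np.random.default_rng(3)
approx = estimate(m=10, n_chains=20000, rng=rng)
print(f"estimate = {approx:.4f}, exact = {exact:.4f}, "
      f"relative error = {abs(approx - exact) / exact:.1e}")
```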

7. Conclusion

In this paper we have proposed an efficient algorithm for computing the eigenvalue of linear homogeneous integral equations and analyzed its performance. We studied the accuracy and robustness of the proposed algorithm. The numerical results in Table 1 show that the AVQMC algorithm improves on the AVMC and NMC algorithms in both error and convergence.