Advances in Numerical Analysis
Volume 2011, Article ID 826376, 9 pages
Research Article

Partitioning Inverse Monte Carlo Iterative Algorithm for Finding the Three Smallest Eigenpairs of Generalized Eigenvalue Problem

Department of Statistics, Faculty of Mathematical Sciences, University of Guilan, P.O. Box 1914, Rasht, Iran

Received 5 December 2010; Accepted 15 February 2011

Academic Editor: Michele Benzi

Copyright © 2011 Behrouz Fathi Vajargah and Farshid Mehrdoust. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


A new Monte Carlo approach for evaluating the generalized eigenpairs of real symmetric matrices is proposed. An algorithm for finding the three smallest eigenpairs, based on the partitioning inverse Monte Carlo iterative (IMCI) method, is considered.

1. Introduction

It is well known that calculating the largest or smallest eigenvalue of a generalized eigenvalue problem is one of the most important problems in science and engineering [1, 2]. This problem arises naturally in many applications. Mathematically, it is a generalization of the symmetric eigenvalue problem, and it can be reduced to an equivalent symmetric eigenvalue problem. Let A and B be real symmetric n x n matrices, with B positive definite. Consider the problem of evaluating the eigenvalues of the pencil (A, B), that is, the values λ for which

Ax = λBx.  (1.1)

The generalized eigenvalue problem (1.1) is said to be symmetric positive definite (S/PD) if A is symmetric and B is positive definite.

2. Inverse Vector Iteration Method

Another procedure for eigenvalue prediction is to use the Rayleigh quotient given by [3]

R(x) = (x^T A x) / (x^T B x).  (2.1)

Since B is positive definite, (2.1) is well defined.
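As a quick numerical illustration of (2.1), the quotient can be evaluated directly (the function name here is illustrative, not from the paper):

```python
import numpy as np

def rayleigh_quotient(A, B, x):
    """Rayleigh quotient R(x) = (x^T A x) / (x^T B x) for the pencil (A, B)."""
    return (x @ A @ x) / (x @ B @ x)
```

For any eigenvector x of the pencil, R(x) returns the corresponding eigenvalue exactly.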

Theorem 2.1. Suppose that λ1 ≤ λ2 ≤ ... ≤ λn are the eigenvalues of the pencil (A, B) and x1, ..., xn the corresponding eigenvectors. Then for an arbitrary nonzero vector x one has [3]

λ1 ≤ R(x) ≤ λn,

where R is as introduced in (2.1).

Theorem 2.2. The inverse vector iteration method, for an arbitrary choice of initial vector, converges to the smallest eigenvalue and corresponding eigenvector of the pencil (A, B). Moreover, the rate of convergence depends on (λ1/λ2)^k, where k is the number of iterations [3].

Algorithm 1 evaluates the smallest eigenpair based on the inverse vector iteration [4].

Algorithm 1
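The body of Algorithm 1 is not reproduced in this copy; the following is a minimal sketch of inverse vector iteration for the pencil (A, B), assuming the usual formulation (solve A y = B x each step, normalize, and monitor the Rayleigh quotient (2.1)). Function and parameter names are illustrative:

```python
import numpy as np

def inverse_vector_iteration(A, B, x0, tol=1e-10, max_iter=200):
    """Smallest eigenpair of the pencil (A, B) by inverse vector iteration.

    Each step solves A y = B x, normalizes y, and estimates the
    eigenvalue with the Rayleigh quotient (x^T A x) / (x^T B x).
    """
    x = x0 / np.linalg.norm(x0)
    lam = (x @ A @ x) / (x @ B @ x)
    for _ in range(max_iter):
        y = np.linalg.solve(A, B @ x)        # inverse iteration step
        x = y / np.linalg.norm(y)
        lam_new = (x @ A @ x) / (x @ B @ x)  # Rayleigh quotient (2.1)
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            return lam_new, x
        lam = lam_new
    return lam, x
```

By Theorem 2.2, the iterates converge to the smallest eigenpair at a rate governed by λ1/λ2.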

3. Monte Carlo Method for Matrix Computations

Suppose that the matrix L = (l_αβ) of order n and two vectors f, h of length n are given. Consider the following Markov chain of length k:

k0 → k1 → ... → kk,  (3.1)

where k_j ∈ {1, 2, ..., n} for j = 1, ..., k. The statistical nature of constructing the chain (3.1) follows as

P(k0 = α) = p_α,   P(k_{j+1} = β | k_j = α) = p_αβ,

where p_α and p_αβ show the probability of starting the chain at state α and the transition probability from state α to state β, respectively.

In fact, p_α ≥ 0, Σ_α p_α = 1, and p_αβ ≥ 0, Σ_β p_αβ = 1 for every α. Define the random variables W_j using the following recursion for j = 1, ..., k:

W_0 = h_{k0} / p_{k0},   W_j = W_{j-1} l_{k_{j-1} k_j} / p_{k_{j-1} k_j}.

Now, define the following random variable:

θ_k = Σ_{j=0}^{k} W_j f_{k_j}.

Theorem 3.1. Consider the following system:

Bx = f.  (3.6)

Let M be a nonsingular matrix such that MB = I - L with ||L|| < 1; then the system (3.6) can be presented in the following form:

x = Lx + f~,

where f~ = Mf. Then, under the condition ||L|| < 1, one has [5]

x = Σ_{i=0}^{∞} L^i f~.

Suppose that x^(m) is the mth iterative solution of the following recursion relation:

x^(m) = L x^(m-1) + f~,  m = 1, 2, ...,

with x^(0) = f~. If we set the random variable θ_k as above, then

E[θ_k] = (h, x^(k)).

By simulating N random paths of length k, we can find the realizations θ_k^(s), s = 1, ..., N. The Monte Carlo estimation can be evaluated by

θ̄_k = (1/N) Σ_{s=1}^{N} θ_k^(s),

which is an approximation of (h, x).

From all possible permissible densities, we apply the following:

p_α = |h_α| / Σ_α |h_α|,   p_αβ = |l_αβ| / Σ_β |l_αβ|.

This choice of the initial density vector and the transition probability matrix leads to a Monte Carlo almost optimal (MAO) algorithm.

Theorem 3.2. Using the above choice of p_α and p_αβ, the variance of the unbiased estimator for obtaining the inverse matrix is minimized [4].

There is a global algorithm that evaluates the solution of system (3.6) for every matrix B. The complexity of the algorithm is O(kN), where k and N are the average length of the Markov chains and the number of simulated paths, respectively [2].
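The random-walk estimator above can be sketched as follows. This is a minimal illustration, assuming the MAO densities just described and the weight recursion W_0 = h_{k0}/p_{k0}, W_j = W_{j-1} l_{k_{j-1}k_j}/p_{k_{j-1}k_j}; function names and the chosen test matrix are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_inner_product(L, f, h, n_paths=5000, path_len=20):
    """Monte Carlo estimate of (h, x) for x = L x + f, assuming ||L|| < 1.

    Uses MAO densities: p_a ~ |h_a| and p_ab ~ |l_ab| (row-normalized).
    Each path contributes theta = sum_j W_j * f_{k_j}.
    """
    n = L.shape[0]
    p0 = np.abs(h) / np.abs(h).sum()                      # initial density
    P = np.abs(L) / np.abs(L).sum(axis=1, keepdims=True)  # transition matrix
    total = 0.0
    for _ in range(n_paths):
        k = rng.choice(n, p=p0)
        W = h[k] / p0[k]
        theta = W * f[k]
        for _ in range(path_len):
            k_next = rng.choice(n, p=P[k])
            W *= L[k, k_next] / P[k, k_next]   # weight recursion
            theta += W * f[k_next]
            k = k_next
        total += theta
    return total / n_paths                      # averaged estimator
```

Since ||L|| < 1, the weights decay geometrically along each path, so a modest path length suffices; the cost is O(kN), matching the complexity stated above.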

4. Inverse Monte Carlo Iterative Algorithm (IMCI)

The inverse Monte Carlo iterative algorithm can be applied when A is a nonsingular matrix. In this method, we calculate the following iterate at each step:

x^(m+1) = A^{-1} B x^(m).

It is more efficient to first evaluate the inverse matrix A^{-1} using the Monte Carlo algorithm [1, 2, 4]. The algorithm can be realized as in Algorithm 2.

Algorithm 2
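The body of Algorithm 2 is not reproduced in this copy; the following sketch shows the IMCI iteration under the assumption just stated, with a deterministic inverse standing in for the Monte Carlo-estimated inverse of Section 3. Names are illustrative:

```python
import numpy as np

def imci_smallest_eigenpair(A, B, x0, n_iter=100):
    """IMCI sketch: iterate x <- A^{-1} B x with a precomputed inverse.

    In the full algorithm, A_inv would be the Monte Carlo estimate of
    A^{-1}; here np.linalg.inv stands in for that estimate.
    """
    A_inv = np.linalg.inv(A)   # MC inversion in the full method
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_iter):
        y = A_inv @ (B @ x)    # one IMCI step
        x = y / np.linalg.norm(y)
    lam = (x @ A @ x) / (x @ B @ x)  # Rayleigh quotient (2.1)
    return lam, x
```

Computing A^{-1} once and reusing it is what makes the iteration cheap per step, which is the motivation stated above.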

5. Partitioning IMCI

Let the matrix A be partitioned into four blocks A11, A12, A21, and A22, where A11 and A22 are square matrices of order p and q such that p + q = n:

A = [ A11  A12 ]
    [ A21  A22 ].

By the assumption that all the indicated matrix inversions exist, it is easy to verify that

A^{-1} = [ A11^{-1} + A11^{-1} A12 S^{-1} A21 A11^{-1}    -A11^{-1} A12 S^{-1} ]
         [ -S^{-1} A21 A11^{-1}                            S^{-1}              ],

where

S = A22 - A21 A11^{-1} A12.

Thus inverting a matrix of order n comes down to inverting matrices of orders p and q, plus several matrix multiplications. The basic Monte Carlo inversion is then invoked once the dimension of a block falls to a given threshold. This causes an acceleration of convergence. Now, we can use the following recursive algorithm to obtain the inverse of the matrix (see Algorithm 3).

Algorithm 3
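The body of Algorithm 3 is not reproduced in this copy; the following is a minimal recursive sketch of the partitioned inversion, with a direct inverse standing in for the basic Monte Carlo inversion at the threshold. Names and the threshold value are illustrative:

```python
import numpy as np

def block_inverse(M, threshold=2):
    """Recursive block inversion via the Schur complement.

    Partition M into [[A11, A12], [A21, A22]], form
    S = A22 - A21 A11^{-1} A12, and assemble M^{-1} from A11^{-1}
    and S^{-1}. At or below `threshold`, a direct inverse stands in
    for the basic Monte Carlo inversion of Section 3.
    """
    n = M.shape[0]
    if n <= threshold:
        return np.linalg.inv(M)      # MC inversion in the full method
    p = n // 2
    A11, A12 = M[:p, :p], M[:p, p:]
    A21, A22 = M[p:, :p], M[p:, p:]
    A11_inv = block_inverse(A11, threshold)
    S = A22 - A21 @ A11_inv @ A12    # Schur complement
    S_inv = block_inverse(S, threshold)
    T = A11_inv @ A12 @ S_inv
    top_left = A11_inv + T @ A21 @ A11_inv
    return np.block([[top_left,               -T],
                     [-S_inv @ A21 @ A11_inv,  S_inv]])
```

Each level halves the block size, so the expensive inversions are confined to small blocks at the threshold, which is the source of the acceleration described above.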

6. Finding More Than One Generalized Eigenvalues

Assume that an eigenvalue and its corresponding eigenvector have been computed using the partitioning IMCI algorithm. In the first step of the above algorithm, we deflate the matrix A to a matrix A1. Then, we repeat the first step of the algorithm to obtain the dominant eigenvalue of A1, which is the second dominant eigenvalue of A. Let i eigenvalues of the pencil (A, B) have been computed. Suppose that W is a matrix whose columns are the computed eigenvectors of the pencil (A, B), that is, the jth column of W is the eigenvector corresponding to the jth eigenvalue.

Now, let the deflated pencil be constructed from W as above. Hence, if we find the ith smallest eigenpair of the deflated pencil, then we can evaluate the ith smallest eigenvalue of the pencil (A, B).
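The paper's deflation via the matrix W is not fully recoverable from this copy; the following is a sketch of one standard alternative with the same effect, B-orthogonal deflation during inverse iteration: components along previously computed (B-normalized) eigenvectors are projected out each step, so the iteration converges to the next smallest eigenpair. Names are illustrative:

```python
import numpy as np

def next_smallest_eigenpair(A, B, found_vecs, x0, n_iter=200):
    """Inverse iteration with B-orthogonal deflation.

    `found_vecs` holds previously computed eigenvectors of the pencil
    (A, B), each normalized so that u^T B u = 1. Projecting them out
    each step steers the iteration to the next smallest eigenpair.
    """
    def deflate(v):
        for u in found_vecs:
            v = v - (u @ B @ v) * u   # remove B-component along u
        return v
    x = deflate(x0)
    x = x / np.linalg.norm(x)
    for _ in range(n_iter):
        y = np.linalg.solve(A, B @ x)  # inverse iteration step
        y = deflate(y)                 # keep iterate B-orthogonal
        x = y / np.linalg.norm(y)
    lam = (x @ A @ x) / (x @ B @ x)    # Rayleigh quotient (2.1)
    return lam, x
```

Repeating this with each newly found eigenvector appended to `found_vecs` yields the three smallest eigenpairs in turn.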

7. Numerical Results

In this section, the experimental results for obtaining the three smallest eigenpairs are outlined in Tables 1, 2, and 3. The numerical tests were performed on an Intel(R) Core(TM)2 CPU (1.83 GHz) personal machine.

Table 1: Number of chains .
Table 2: The solution when the number of chains increases.
Table 3: Total computational time for general and partitioning methods.

8. Conclusion and Future Study

We have seen that Monte Carlo algorithms can be used for finding more than one eigenpair of a generalized eigenvalue problem. We analyzed the computational complexity, speedup, and efficiency of the algorithm in the case of sparse matrices. Finally, a new method for computing eigenpairs, the partitioning method, was presented. Figure 1 compares the computational times of the general Monte Carlo algorithm and the partitioning algorithm. The scatter diagram in Figure 2 shows that there is a linear relationship between the matrix dimension (equivalently, the number of matrix elements) and the total computational time for partitioning IMCI.

Figure 1: Computational times for general and partition methods.
Figure 2: Regression function .


  1. I. Dimov and A. Karaivanova, “Iterative Monte Carlo algorithm for linear algebra problem,” Lecture Notes in Computer Science, pp. 66–77, 1996.
  2. I. Dimov, Monte Carlo Methods for Applied Scientists, World Scientific Publishing, 2008.
  3. Y. Saad, Numerical Methods for Large Eigenvalue Problems, Manchester University Press, 1991.
  4. B. Fathi, “A way to obtain Monte Carlo matrix inversion with minimal error,” Applied Mathematics and Computation, vol. 191, no. 1, pp. 225–233, 2007.
  5. R. Y. Rubinstein, Simulation and the Monte Carlo Method, John Wiley & Sons, New York, NY, USA, 1981.