Abstract

A new Monte Carlo approach for evaluating generalized eigenpairs of real symmetric matrices is proposed. An algorithm for computing the three smallest eigenpairs, based on the partitioning inverse Monte Carlo iterative (IMCI) method, is considered.

1. Introduction

It is well known that computing the largest or smallest generalized eigenvalue is one of the most important problems in science and engineering [1, 2]. This problem arises naturally in many applications. Mathematically, it is a generalization of the symmetric eigenvalue problem, and it can be reduced to an equivalent symmetric eigenvalue problem. Let $A, B \in \mathbb{R}^{n \times n}$ be real symmetric matrices with $B$ positive definite. Consider the problem of evaluating the eigenvalues of the pencil $(A, B)$, that is, the values $\lambda$ for which
$$Ax = \lambda Bx. \tag{1.1}$$
The generalized eigenvalue problem (1.1) is said to be symmetric positive definite (S/PD) if $A$ is symmetric and $B$ is positive definite.
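The reduction to an ordinary symmetric eigenvalue problem mentioned above can be carried out with a Cholesky factorization of $B$. A minimal NumPy sketch, using small illustrative matrices that are not from the paper:

```python
import numpy as np

# Illustrative S/PD pencil: A symmetric, B symmetric positive definite.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
B = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])

# Reduce A x = lambda B x to the ordinary symmetric problem
# C y = lambda y, with B = L L^T, C = L^{-1} A L^{-T}, y = L^T x.
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.T
lams, Y = np.linalg.eigh(C)   # real eigenvalues in ascending order
X = Linv.T @ Y                # eigenvectors of the pencil (A, B)
```

Each column of `X` then satisfies $Ax = \lambda Bx$ for the corresponding entry of `lams`.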

2. Inverse Vector Iteration Method

Another procedure for eigenvalue prediction is to use the Rayleigh quotient [3]:
$$\lambda(x) = \frac{x^T A x}{x^T B x}. \tag{2.1}$$
Since $B$ is positive definite, the quotient (2.1) is well defined for every $x \neq 0$.

Theorem 2.1. Suppose that $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$ are the eigenvalues of the pencil $(A, B)$ and $v_1, \dots, v_n$ the corresponding eigenvectors. Then for an arbitrary vector $x \neq 0$ one has [3]
$$\lambda_1 \le \lambda(x) \le \lambda_n,$$
where $\lambda(\cdot)$ is as introduced in (2.1).

Theorem 2.2. The inverse vector iteration method, for an arbitrary choice of the initial vector, converges to the smallest eigenvalue $\lambda_1$ and the corresponding eigenvector of the pencil $(A, B)$. Moreover, the rate of convergence depends on $(\lambda_1/\lambda_2)^k$, where $k$ is the number of iterations [3].

Algorithm 1 evaluates the smallest eigenpair based on inverse vector iteration [4].

Input: initial vector $x_0$
begin
 Set $\lambda_1^{(1)} = \dfrac{x_0^T A x_0}{x_0^T B x_0}$
 For $j = 0, 1, 2, \dots$
 begin
  Solve the linear system $A z_{j+1} = B x_j$ for $z_{j+1}$
  Set $\lambda_1^{(j+1)} = \dfrac{z_{j+1}^T A z_{j+1}}{z_{j+1}^T B z_{j+1}}$
  Set $x_{j+1} = z_{j+1} / \|z_{j+1}\|$
  Output: $x_j$, $\lambda_1^{(j)}$
 end
end
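A direct NumPy transcription of Algorithm 1. The test matrices are illustrative, and a fixed iteration count replaces the convergence test a production code would use:

```python
import numpy as np

def inverse_vector_iteration(A, B, x0, iters=50):
    """Algorithm 1: inverse vector iteration for the smallest
    eigenpair of the pencil (A, B)."""
    x = x0 / np.linalg.norm(x0)
    lam = (x @ A @ x) / (x @ B @ x)      # lambda_1^(1) via Eq. (2.1)
    for _ in range(iters):
        z = np.linalg.solve(A, B @ x)    # solve A z_{j+1} = B x_j
        lam = (z @ A @ z) / (z @ B @ z)  # Rayleigh quotient update
        x = z / np.linalg.norm(z)        # normalize
    return lam, x

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
B = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
lam1, v1 = inverse_vector_iteration(A, B, np.ones(3))
```

Per Theorem 2.2, the iterate converges at a rate governed by $\lambda_1/\lambda_2$, so a modest number of solves suffices when the two smallest eigenvalues are well separated.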

3. Monte Carlo Method for Matrix Computations

Suppose that a matrix $L \in \mathbb{R}^{n \times n}$ and two vectors $f, h \in \mathbb{R}^n$ are given. Consider the following Markov chain of length $i$:
$$k_0 \to k_1 \to \cdots \to k_i, \tag{3.1}$$
where $k_j \in \{1, 2, \dots, n\}$ for $j = 1, \dots, i$. The statistical nature of constructing the chain (3.1) is as follows:
$$p(k_0 = \alpha) = p_\alpha, \qquad p(k_j = \beta \mid k_{j-1} = \alpha) = p_{\alpha\beta},$$
where $p_\alpha$ and $p_{\alpha\beta}$ denote the probability of starting the chain at state $\alpha$ and the transition probability from state $\alpha$ to state $\beta$, respectively.

In fact, $p_\alpha \ge 0$, $\sum_{\alpha=1}^{n} p_\alpha = 1$, $p_{\alpha\beta} \ge 0$, and $\sum_{\beta=1}^{n} p_{\alpha\beta} = 1$. Define the random variables $W_j$ using the following recursion:
$$W_0 = \frac{h_{k_0}}{p_{k_0}}, \qquad W_j = W_{j-1} \frac{L_{k_{j-1} k_j}}{p_{k_{j-1} k_j}}, \quad j = 1, \dots, i.$$
Now, define the following random variable:
$$\theta_i[h] = \sum_{j=0}^{i} W_j f_{k_j}.$$

Theorem 3.1. Consider the following system:
$$Bx = f. \tag{3.6}$$
Let the matrix $M$ be nonsingular and such that $MB = I - L$ with $\|L\| < 1$; then the system (3.6) can be presented in the following form:
$$x = Lx + \tilde{f}, \tag{3.7}$$
where $\tilde{f} = Mf$. Then, under the condition $\|L\| < 1$, one has [5]
$$x = \sum_{i=0}^{\infty} L^i \tilde{f}.$$

Suppose that $x^{(i)}$ is the $i$th iterate of the following recursion relation:
$$x^{(j+1)} = L x^{(j)} + \tilde{f},$$
with $x^{(0)} = 0$. If we set the random variable $\theta_i[h]$ as above, with $f$ replaced by $\tilde{f}$, then
$$E\left(\theta_i[h]\right) = \left\langle h, x^{(i+1)} \right\rangle.$$

By simulating $N$ random paths of length $i$, we obtain independent samples $\theta_i[h]_s$, $s = 1, \dots, N$. The Monte Carlo estimate
$$\bar{\theta} = \frac{1}{N} \sum_{s=1}^{N} \theta_i[h]_s$$
is an approximation of $\langle h, x^{(i+1)} \rangle$.

From all possible permissible densities, we apply the following:
$$p_\alpha = \frac{|h_\alpha|}{\sum_{\alpha=1}^{n} |h_\alpha|}, \qquad p_{\alpha\beta} = \frac{|L_{\alpha\beta}|}{\sum_{\beta=1}^{n} |L_{\alpha\beta}|}.$$
This choice of the initial density vector and the transition probability matrix leads to the Monte Carlo almost optimal (MAO) algorithm.
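The chain construction, the weights $W_j$, and the MAO densities can be sketched as follows for a small fixed-point system $x = Lx + f$ with $\|L\| < 1$; the $3\times 3$ system, path length, and sample size are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy contraction system x = L x + f (row sums of |L| are < 1).
L = np.array([[0.1, 0.2, 0.0],
              [0.2, 0.1, 0.2],
              [0.0, 0.2, 0.1]])
f = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, 0.0, 0.0])   # estimate the first component of x

# MAO densities: start proportional to |h|, move proportional to |L| rows.
p0 = np.abs(h) / np.abs(h).sum()
P = np.abs(L) / np.abs(L).sum(axis=1, keepdims=True)

def one_path(length):
    """One Markov chain; accumulates theta = sum_j W_j f_{k_j}."""
    k = rng.choice(len(h), p=p0)
    W = h[k] / p0[k]
    theta = W * f[k]
    for _ in range(length):
        k_next = rng.choice(len(h), p=P[k])
        W *= L[k, k_next] / P[k, k_next]   # weight update
        k = k_next
        theta += W * f[k]
    return theta

N, T = 5000, 25
estimate = np.mean([one_path(T) for _ in range(N)])  # approximates <h, x>
exact = h @ np.linalg.solve(np.eye(3) - L, f)
```

Because $E(\theta_i[h])$ equals the truncated Neumann series $\sum_j h^T L^j f$, the sample mean approaches $\langle h, x \rangle$ as the path length and the number of paths grow.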

Theorem 3.2. With the above choice of $p_\alpha$ and $p_{\alpha\beta}$, the variance of the unbiased estimator used for obtaining the inverse matrix is minimized [4].

There is a global algorithm that evaluates the solution of system (3.6) for an arbitrary matrix. The complexity of the algorithm is $O(NT)$, where $T$ and $N$ are the average length of the Markov chain and the number of simulated paths, respectively [2].

4. Inverse Monte Carlo Iterative Algorithm (IMCI)

The inverse Monte Carlo iterative algorithm can be applied when $A$ is a nonsingular matrix. In this method, we calculate the following functional at each step:
$$\lambda^{(j)} = \frac{\langle A f_j, h_j \rangle}{\langle B f_j, h_j \rangle} = \frac{\langle B f_{j-1}, h_j \rangle}{\langle B f_j, h_j \rangle}.$$
It is more efficient to first evaluate the inverse matrix $A^{-1}$ using the Monte Carlo algorithm [1, 2, 4]. The algorithm can be realized as in Algorithm 2.

Input: $A \in \mathbb{R}^{n \times n}$, $f_0 \in \mathbb{R}^n$
begin
 Starting from the initial vector $f_0$
 For $j = 1, 2, \dots$
 begin
  Using the global algorithm, calculate the sequence of Monte Carlo
  iterations by solving the following system:
     $A f_j = B f_{j-1}$
  Set
     $\lambda^{(j)} = \dfrac{\langle A f_j, h_j \rangle}{\langle B f_j, h_j \rangle} = \dfrac{\langle B f_{j-1}, h_j \rangle}{\langle B f_j, h_j \rangle}$
  Output: smallest eigenvalue $\lambda_1^{(j)}$ and corresponding eigenvector $f_j$.
 end
end
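A dense-algebra sketch of Algorithm 2. Here `np.linalg.inv` stands in for the Monte Carlo-estimated inverse of $A$, the reference vector $h_j$ is taken equal to $f_j$ (one admissible choice, not specified by the source), and the matrices are illustrative:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
B = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])

Ainv = np.linalg.inv(A)   # stand-in for the Monte Carlo inverse of A
f = np.ones(3)
for j in range(60):
    f_new = Ainv @ (B @ f)                  # f_j from  A f_j = B f_{j-1}
    h = f_new                               # reference vector h_j
    lam = ((B @ f) @ h) / ((B @ f_new) @ h)  # functional lambda^(j)
    f = f_new / np.linalg.norm(f_new)        # normalize for stability
```

Since $A f_j = B f_{j-1}$ holds exactly with a true inverse, the functional reduces to a Rayleigh quotient and `lam` converges to the smallest eigenvalue of the pencil $(A, B)$.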

5. Partitioning IMCI

Let the matrix $S$ be partitioned into four blocks $A$, $B$, $C$, and $D$, where $A$ and $D$ are square matrices of order $p$ and $n - p$, respectively:
$$S = \begin{pmatrix} A & B \\ C & D \end{pmatrix}.$$
Assuming that all the indicated matrix inversions exist, it is easy to verify that
$$S^{-1} = \begin{pmatrix} K & L \\ M & N \end{pmatrix},$$
where
$$N = \left(D - C A^{-1} B\right)^{-1}, \qquad M = -N C A^{-1}, \qquad L = -A^{-1} B N, \qquad K = A^{-1} - A^{-1} B M.$$
Thus inverting a matrix of order $n$ comes down to inverting two smaller matrices, of order $p$ and $n - p$, plus several matrix multiplications. The basic Monte Carlo procedure for the inversion is therefore invoked only when the dimension of the submatrix drops to a given threshold, which accelerates convergence. Now, we can use the following recursive algorithm to obtain the inverse of the matrix (see Algorithm 3).

Partitioning inverse $(S, n)$
begin
  $n = \mathrm{rank}(S)$;  $p = \lfloor n/2 \rfloor$
  $A = S[1{:}p, 1{:}p]$;  $B = S[1{:}p, p+1{:}n]$
  $C = S[p+1{:}n, 1{:}p]$;  $D = S[p+1{:}n, p+1{:}n]$
  $m = \mathrm{size}(A)$
  if $m \le$ threshold
    $AA$ = Monte Carlo procedure $(A)$
  else
    $AA$ = Partitioning inverse $(A, m)$
  $N$ = Partitioning inverse $(D - C * AA * B,\; n - p)$
  $M = -N * C * AA$;  $L = -AA * B * N$
  $K = AA - AA * B * M$
  $SS[1{:}p, 1{:}p] = K$;  $SS[1{:}p, p+1{:}n] = L$
  $SS[p+1{:}n, 1{:}p] = M$;  $SS[p+1{:}n, p+1{:}n] = N$
end
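A NumPy sketch of Algorithm 3, with a direct inverse standing in for the basic Monte Carlo procedure at the threshold; the threshold value and the SPD test matrix are illustrative:

```python
import numpy as np

THRESHOLD = 2   # below this order, invert directly
                # (stand-in for the basic Monte Carlo procedure)

def partitioning_inverse(S):
    """Recursive block inversion of S via the Schur complement."""
    n = S.shape[0]
    p = n // 2
    if p == 0 or n <= THRESHOLD:
        return np.linalg.inv(S)
    A, B = S[:p, :p], S[:p, p:]
    C, D = S[p:, :p], S[p:, p:]
    AA = partitioning_inverse(A)               # A^{-1}
    N = partitioning_inverse(D - C @ AA @ B)   # Schur complement inverse
    M = -N @ C @ AA
    L = -AA @ B @ N
    K = AA - AA @ B @ M
    return np.block([[K, L], [M, N]])

rng = np.random.default_rng(1)
G = rng.standard_normal((6, 6))
S = G @ G.T + 6.0 * np.eye(6)   # SPD, so every Schur complement is SPD
Sinv = partitioning_inverse(S)
```

Making the test matrix SPD guarantees that every leading block and every Schur complement encountered by the recursion is itself invertible.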

6. Finding More Than One Generalized Eigenvalues

Assume that an eigenvalue $\lambda_1$ and its corresponding eigenvector $v_1$ have been computed using the partitioning IMCI algorithm. In the first step of the above algorithm, we deflate the matrix $A$ to a matrix $A_1$. Then we repeat the first step of the algorithm to obtain the dominant eigenvalue of $A_1$, which is the second dominant eigenvalue of $A$. Let $r$ eigenvalues of the pencil $(A, B)$ be computed. Suppose that $V_r$ is a matrix whose columns are the corresponding eigenvectors of the pencil $(A, B)$, that is,
$$A v_i = \lambda_i B v_i, \quad i = 1, \dots, r,$$
where $v_i$ is the eigenvector corresponding to the eigenvalue $\lambda_i$.

Now, let $A_r$ be the deflated matrix constructed from $A$, $B$, and $V_r$. Hence, if we find the smallest eigenpair of the pencil $(A_r, B)$, then we can evaluate $\lambda_{r+1}$, that is, the $(r+1)$th smallest eigenvalue of the pencil $(A, B)$.
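The source does not spell out the deflation formula. One standard choice for S/PD pencils, consistent with the description above, is the shift $A_1 = A + \sigma (B v_1)(B v_1)^T$ with $v_1$ $B$-normalized, which moves $\lambda_1$ to $\lambda_1 + \sigma$ and leaves the other eigenpairs unchanged. A sketch under that assumption, with illustrative matrices and shift:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
B = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])

# Reference eigenpairs via the Cholesky reduction (for the illustration).
Lc = np.linalg.cholesky(B)
Li = np.linalg.inv(Lc)
lams, Y = np.linalg.eigh(Li @ A @ Li.T)
V = Li.T @ Y                  # B-orthonormal: V^T B V = I

# Deflate the smallest eigenpair (lams[0], V[:, 0]).
v1 = V[:, 0]
sigma = lams[-1] - lams[0] + 1.0           # push lambda_1 past lambda_n
A1 = A + sigma * np.outer(B @ v1, B @ v1)  # rank-one B-deflation

# The pencil (A1, B) keeps the remaining eigenpairs, so its smallest
# eigenvalue is lambda_2 of the original pencil.
```

The identity is immediate: $A_1 v_1 = \lambda_1 B v_1 + \sigma B v_1 (v_1^T B v_1) = (\lambda_1 + \sigma) B v_1$, while any $B$-orthogonal eigenvector is untouched.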

7. Numerical Results

In this section, the experimental results for obtaining the three smallest eigenpairs are outlined in Tables 1, 2, and 3. The numerical tests were performed on an Intel(R) Core(TM)2 CPU (1.83 GHz) personal computer.

8. Conclusion and Future Study

We have seen that Monte Carlo algorithms can be used for finding more than one eigenpair of a generalized eigenvalue problem. We analyzed the computational complexity, speedup, and efficiency of the algorithm in the case of sparse matrices. Finally, a new method for computing eigenpairs, the partitioning method, was presented. Figure 1 compares the computational times of the general Monte Carlo algorithm and the partitioning algorithm. The scatter diagram in Figure 2 shows that there is a linear relationship between the matrix dimension (equivalently, the number of matrix elements) and the total computational time for partitioning IMCI.