Abstract

In this paper, we consider four methods for determining the eigenvalues, and corresponding eigenvectors, of large-scale generalized eigenvalue problems that are located in a given region. In these methods, a small pencil that contains only the desired eigenvalues is derived using moments obtained via numerical integration. Our purpose is to improve the numerical stability of the moment-based method and to compare its stability with that of three other methods. Numerical examples show that the block version of the moment-based (SS) method with the Rayleigh–Ritz procedure has higher numerical stability than the other methods.

1. Introduction

Many problems arising in different fields of science and engineering can be reduced to the generalized eigenvalue problem [1–3]
$$Ax = \lambda Bx, \qquad (1)$$
where $A$ and $B$ are real or complex, large, and sparse, and only a few of the eigenvalues are desired. Also, when $B$ is the identity matrix, we have a standard eigenvalue problem. Computing eigenpairs of generalized and standard eigenvalue problems is an important task in many scientific applications [4–7]. There are several methods for solving such eigenvalue problems [8]. Among these methods, iterative methods are used to generate a subspace that contains the desired eigenvectors, and techniques based on Krylov subspaces are powerful tools for building such subspaces for large-scale eigenvalue problems [9–11]. The methods discussed in this article find all of the eigenvalues that lie inside a given circle using numerical integration. In this paper, we briefly describe the moment-based method in Section 2, the Rayleigh–Ritz with contour integral method in Section 3, the block version of the Sakurai–Sugiura method in Section 4, and the block version of the SS method with the Rayleigh–Ritz procedure in Section 5 for solving the generalized eigenvalue problem (1). In Section 6, we provide four numerical tests comparing the four methods and then apply the BSSRR method to selected matrices from different application areas, and finally, we draw some conclusions in Section 7.

2. Moment-Based Method (SS Method)

To solve (1), we consider computing all of the poles of the rational function
$$f(z) = \tilde{v}^{H}(zB - A)^{-1}v, \qquad (2)$$
where $v$ and $\tilde{v}$ are nonzero vectors.

These poles are the eigenvalues of equation (1) that lie inside a given circle, and they are computed using numerical integration. Let $\Gamma$ be a positively oriented closed Jordan curve [12] in the complex plane and let $\lambda_1, \dots, \lambda_m$ be the distinct eigenvalues that lie inside $\Gamma$. Let
$$\mu_k = \frac{1}{2\pi i}\oint_{\Gamma} (z - \gamma)^{k} f(z)\, dz, \qquad k = 0, 1, \dots, \qquad (3)$$
where $\gamma$ is located inside $\Gamma$, and let the $m \times m$ Hankel matrices $H_m$ and $H_m^{<}$ be $H_m = [\mu_{i+j-2}]_{i,j=1}^{m}$ and $H_m^{<} = [\mu_{i+j-1}]_{i,j=1}^{m}$. Also, let $\nu_1, \dots, \nu_m$ denote the residues of $f$ at the poles $\lambda_1, \dots, \lambda_m$.

Then, we have the following theorem.

Theorem 1. If $\nu_k \neq 0$ for $k = 1, \dots, m$, then the eigenvalues of the pencil $H_m^{<} - \lambda H_m$ are given by $\lambda_1 - \gamma, \dots, \lambda_m - \gamma$.

Proof. The proof is given in [13]. By approximating the integral of equation (3) via the N-point trapezoidal rule, we obtain approximate moments $\hat{\mu}_k$ and hence approximate Hankel matrices $\hat{H}_m$ and $\hat{H}_m^{<}$. Let $\hat{\zeta}_1, \dots, \hat{\zeta}_m$ be the eigenvalues of the pencil $\hat{H}_m^{<} - \lambda \hat{H}_m$. We regard $\hat{\lambda}_k = \gamma + \hat{\zeta}_k$ as the approximations for $\lambda_k$. Also, let $V_m$ be the Vandermonde matrix for $\hat{\zeta}_1, \dots, \hat{\zeta}_m$. Then, the approximations for the eigenvectors are obtained from the corresponding contour integrals of $(zB - A)^{-1}v$ together with $V_m^{-1}$.
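To make the construction above concrete, the following Python (NumPy/SciPy) sketch illustrates one common realization of the SS method on a circular contour. The form of f(z), the quadrature points z_j = γ + ρe^{iθ_j}, and all variable names are our assumptions for illustration and not necessarily the authors' exact choices.

import numpy as np
from scipy.linalg import eig

def ss_method(A, B, gamma, rho, m, N=64, seed=0):
    # Sketch of the moment-based (SS) method on the circle |z - gamma| = rho.
    # Assumes f(z) = u^H (zB - A)^{-1} v for random vectors u, v and that at
    # most m eigenvalues lie inside the circle (both are assumptions here).
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    # N-point trapezoidal rule: z_j = gamma + rho * exp(i * theta_j)
    theta = 2.0 * np.pi * (np.arange(N) + 0.5) / N
    z = gamma + rho * np.exp(1j * theta)
    w = rho * np.exp(1j * theta) / N          # quadrature weights (dz/(2*pi*i) factor)
    # Approximate moments mu_k ~ sum_j w_j * (z_j - gamma)^k * f(z_j)
    mu = np.zeros(2 * m, dtype=complex)
    for j in range(N):
        fzj = u.conj() @ np.linalg.solve(z[j] * B - A, v)   # f(z_j)
        mu += w[j] * (z[j] - gamma) ** np.arange(2 * m) * fzj
    # Hankel matrices H = [mu_{i+j}] and H< = [mu_{i+j+1}], i, j = 0, ..., m-1
    H = np.array([[mu[i + j] for j in range(m)] for i in range(m)])
    Hs = np.array([[mu[i + j + 1] for j in range(m)] for i in range(m)])
    # Eigenvalues of the pencil (H<, H) approximate lambda_k - gamma
    return gamma + eig(Hs, H, right=False)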

3. Rayleigh–Ritz with Contour Integral Method (CIRR Method)

We consider (1); let $A$ be symmetric, let $B$ be positive definite, and consider the eigenpairs of the matrix pencil. We apply a Rayleigh–Ritz procedure with an orthonormal basis $Q$. The projected matrices are given by $\hat{A} = Q^{H}AQ$ and $\hat{B} = Q^{H}BQ$. The basis $Q$ is used to generate a sequence of subspaces containing approximations to the desired eigenvectors. The Ritz values of the projected pencil are taken as approximate eigenvalues of the original pencil, with the corresponding Ritz vectors as approximate eigenvectors. In this method, because the Rayleigh–Ritz procedure is applied, moments are not explicitly used [17]. The algorithm is as follows.
Rayleigh–Ritz procedure
(1)Construct an orthonormal basis $Q$
(2)Form $\hat{A} = Q^{H}AQ$ and $\hat{B} = Q^{H}BQ$
(3)Compute the eigenpairs $(\theta_i, y_i)$ of the projected pencil $\hat{A} - \theta\hat{B}$
(4)Set $x_i = Qy_i$
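As a minimal sketch of the projection step listed above (the names Q, A, B and the use of scipy.linalg.eig are our illustrative assumptions):

import numpy as np
from scipy.linalg import eig

def rayleigh_ritz(A, B, Q):
    # Rayleigh-Ritz procedure: project the pencil (A, B) onto span(Q),
    # where Q is assumed to have orthonormal columns.
    A_hat = Q.conj().T @ A @ Q          # step (2): projected matrices
    B_hat = Q.conj().T @ B @ Q
    theta, Y = eig(A_hat, B_hat)        # step (3): Ritz values / primitive Ritz vectors
    X = Q @ Y                           # step (4): Ritz vectors of the original pencil
    return theta, X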

Theorem 2. Let the vector defined by (4) be expanded in terms of the eigenvectors as in (8). Then, the following holds.

Proof. It follows from (4) and (8) that the relation below holds. Since each pair is an eigenpair of the matrix pencil, the integrand simplifies accordingly, and by the residue theorem, we obtain the result.
We define the Vandermonde matrix $V$ as shown below. From equation (9), we then obtain the factorization given next, where the factors $V$ and $D$ are as indicated.

Theorem 3. If $\lambda_1, \dots, \lambda_m$ are distinct and the corresponding coefficients are nonzero, then equation (14) below holds.

Proof. Since $\lambda_1, \dots, \lambda_m$ are mutually distinct and the corresponding coefficients are nonzero, $V$ and $D$ are nonsingular. Therefore, it follows from (13) that the relation below holds. Since the vectors form an orthonormal basis of the subspace, equation (14) holds.

For a nonzero vector, we define the moments as follows, where $\gamma$ is located inside $\Gamma$. Also, we obtain the following approximations via the N-point trapezoidal rule:

4. Block Sakurai–Sugiura Method (BSS Method)

In this method, to solve (1), we reformulate the SS method in the context of resolvent theory. This method has the potential to resolve degenerate eigenvalues.

Theorem 4. Let the pencil be a regular pencil of order N. Then, there exist nonsingular matrices such that the canonical form below holds, where the indicated blocks are Jordan blocks, one block is nilpotent, and the identity matrices have the corresponding orders.
Proof. See [12].
Here, because the transformation matrices are nonsingular, we can define their inverses. According to (18), we partition the row vectors and the column vectors of these matrices according to the Jordan block structure, respectively, for each block.

Theorem 5. The resolvent of the regular pencil is decomposed into the form below, where each term corresponds to the eigenvalue of a Jordan block.

Proof. According to Theorem 4, we obtain a block-diagonal representation of the resolvent. Using the resolvent of each Jordan block and of the nilpotent part, we get the result.
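As a small, self-contained illustration of the Jordan-block resolvent used in this proof: for a k × k Jordan block J_k(α), the resolvent (zI − J_k(α))^{-1} is upper triangular with (i, j) entry 1/(z − α)^{j−i+1}. The following check (our own example, not part of the paper) verifies this closed form numerically.

import numpy as np

def jordan_block(alpha, k):
    # k x k Jordan block with eigenvalue alpha
    return alpha * np.eye(k, dtype=complex) + np.diag(np.ones(k - 1), 1)

def jordan_resolvent(alpha, k, z):
    # Closed form of (zI - J_k(alpha))^{-1}: entry (i, j) is
    # 1/(z - alpha)^(j - i + 1) for j >= i, and 0 below the diagonal.
    R = np.zeros((k, k), dtype=complex)
    for i in range(k):
        for j in range(i, k):
            R[i, j] = 1.0 / (z - alpha) ** (j - i + 1)
    return R

alpha, k, z = 2.0 + 1.0j, 4, 0.3 - 0.7j
J = jordan_block(alpha, k)
assert np.allclose(np.linalg.inv(z * np.eye(k) - J), jordan_resolvent(alpha, k, z))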

Theorem 6. The localized moment matrix is written as

Proof. See [18].

Definition 1. Let C and D be arbitrary matrices, where . A size-reduced moment matrix is defined as

Theorem 7. If the ranks of both moment matrices attain the required value, then the nonsingular part of the matrix pencil is equivalent to the size-reduced pencil.

Proof. See [18].

Theorem 8. The right eigenvectors of the original matrix pencil are given by , and its adjoint is given by .

Proof. See [18].

Theorem 9. If all elements of the chosen vectors are nonzero and there is no degeneracy, then the nonsingular part of the matrix pencil is equivalent to the size-reduced pencil.

Proof. By choosing the row vectors and the column vectors as given below, respectively, we obtain the required relations. As for the rank, we note that the column vectors form the Krylov series starting from the chosen vector. Because there is no degeneracy and the relevant elements are nonzero, these column vectors are linearly independent, and thus the rank is f.
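The following sketch summarizes how the BSS method can be realized in practice (see also Algorithm 4 in Section 6): block moments are accumulated by contour quadrature, assembled into block Hankel matrices, and a singular value decomposition removes the singular part before the small pencil is solved. The circular contour, the random blocks U and V, the tolerance, and the parameter choices are our assumptions for illustration.

import numpy as np
from scipy.linalg import svd, eig

def block_ss(A, B, gamma, rho, L=4, M=8, N=64, tol=1e-12, seed=0):
    # Sketch of the block SS method: block moments -> block Hankel pencil.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    U = rng.standard_normal((n, L)) + 1j * rng.standard_normal((n, L))
    V = rng.standard_normal((n, L)) + 1j * rng.standard_normal((n, L))
    theta = 2.0 * np.pi * (np.arange(N) + 0.5) / N
    z = gamma + rho * np.exp(1j * theta)
    w = rho * np.exp(1j * theta) / N
    # Size-reduced block moments mu_k = U^H S_k (each L x L), k = 0, ..., 2M-1
    mu = np.zeros((2 * M, L, L), dtype=complex)
    for j in range(N):
        Y = U.conj().T @ np.linalg.solve(z[j] * B - A, V)   # U^H (z_j B - A)^{-1} V
        for k in range(2 * M):
            mu[k] += w[j] * (z[j] - gamma) ** k * Y
    # Block Hankel matrices H = [mu_{i+j}] and H< = [mu_{i+j+1}]
    H = np.block([[mu[i + j] for j in range(M)] for i in range(M)])
    Hs = np.block([[mu[i + j + 1] for j in range(M)] for i in range(M)])
    # SVD-based rank truncation removes the singular part of the pencil
    W, s, Ph = svd(H)
    r = int(np.sum(s > tol * s[0]))
    Wr, Pr = W[:, :r], Ph.conj().T[:, :r]
    # Eigenvalues of the size-reduced pencil approximate lambda_k - gamma
    return gamma + eig(Wr.conj().T @ Hs @ Pr, np.diag(s[:r]), right=False)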

5. Block Version of the SS Method with the Rayleigh–Ritz Procedure (BSSRR Method)

We suggest a new algorithm for computing all of the poles of the analytic function (2), making use of the algorithm in [19]. The eigenpairs of equation (1) can be obtained from a pencil of small Hankel matrices. Let $V$ be a random matrix and define $S_k$ by the contour integral below, where

Then, the block version of the SS method with the Rayleigh–Ritz procedure [20] constructs the LM-dimensional subspace:

For the Rayleigh–Ritz procedure, the subspace contains all of the eigenvectors of (1) whose eigenvalues lie inside the contour. Using the N-point trapezoidal rule for equation (25), we obtain the approximation below, where $z_j$ is the $j$-th quadrature point and $w_j$ is the corresponding weight [21].
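As an illustration of this quadrature step, the sketch below approximates blocks of the form S_k = (1/(2πi)) ∮ z^k (zB − A)^{-1} V dz by the N-point trapezoidal rule on a circle of center γ and radius ρ. The particular points z_j = γ + ρe^{2πi(j+1/2)/N} and weights w_j = ρe^{2πi(j+1/2)/N}/N are a common choice and, like the exact form of S_k, are assumptions made here rather than taken from equation (25).

import numpy as np

def contour_blocks(A, B, V, gamma, rho, M, N=64):
    # Approximate S_k = (1/(2*pi*i)) * oint z^k (zB - A)^{-1} V dz, k = 0, ..., M-1,
    # by the N-point trapezoidal rule on the circle |z - gamma| = rho.
    n, L = V.shape
    theta = 2.0 * np.pi * (np.arange(N) + 0.5) / N
    z = gamma + rho * np.exp(1j * theta)     # quadrature points z_j
    w = rho * np.exp(1j * theta) / N         # corresponding weights w_j
    S = [np.zeros((n, L), dtype=complex) for _ in range(M)]
    for j in range(N):
        Y = np.linalg.solve(z[j] * B - A, V)       # (z_j B - A)^{-1} V
        for k in range(M):
            S[k] += w[j] * z[j] ** k * Y
    return np.hstack(S)                      # S = [S_0, S_1, ..., S_{M-1}]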

Based on Theorem 4, we analyse the relationship between the contour integral spectral projection and the Krylov subspace.

5.1. An Arnoldi-Based Interpretation of the Contour Integral Spectral Projection

Since the matrices are nonsingular, we define their inverses. According to the Jordan block structure of (18), we partition the row vectors and the column vectors of these matrices block by block. Then, we can derive the following lemma and theorem.

Lemma 1. Let a polynomial of degree k be given. Then, we have the following relation, where

Proof. From Theorem 6 and the binomial theorem, we have the following relation. Here, since the indicated quantities vanish for the relevant indices, the relation simplifies as shown. Therefore, Lemma 1 is proved.

Definition 2. With the quantities above, let the Krylov subspace be defined as below. Then, the subspace is defined as the sum of these Krylov subspaces, i.e.,

Theorem 10. Let be the subspace of the block version of the SS method with the Rayleigh–Ritz procedure defined by (26). Then, we have

Proof. From the definition in (25) and Lemma 1, we have the inclusion below. Therefore, the two subspaces coincide, and Theorem 10 is proved.

Remark 1. Theorem 10 shows that the block version of the SS method with the Rayleigh–Ritz procedure can be regarded as the Rayleigh–Ritz procedure based on the block Krylov subspace . Here, we note that, in the block version of the SS method with the Rayleigh–Ritz procedure, the basis vectors of are explicitly computed by (25) and the QR decomposition of S (Algorithm 1).

Input: for
Output: eigenpairs for
(1)Solve for [14–16]
(2)Compute for
(3)Compute QR decomposition of
(4)Compute eigenpairs of the matrix pencil
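A runnable sketch of Algorithm 1 is given below, assuming a circular contour, the trapezoidal quadrature described above, a random starting block V, and dense linear solves; only the overall structure (solve the shifted systems, accumulate the S_k, orthogonalize S by a QR decomposition, and apply the Rayleigh–Ritz procedure to the projected pencil) follows the algorithm, while every parameter and name is an illustrative assumption.

import numpy as np
from scipy.linalg import qr, eig

def bssrr(A, B, gamma, rho, L=4, M=8, N=64, seed=0):
    # Sketch of the BSSRR method (Algorithm 1): contour-integral subspace
    # followed by a Rayleigh-Ritz projection.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    V = rng.standard_normal((n, L)) + 1j * rng.standard_normal((n, L))
    theta = 2.0 * np.pi * (np.arange(N) + 0.5) / N
    z = gamma + rho * np.exp(1j * theta)
    w = rho * np.exp(1j * theta) / N
    # Steps (1)-(2): solve (z_j B - A) Y_j = V and accumulate S_k = sum_j w_j z_j^k Y_j
    S = np.zeros((n, L * M), dtype=complex)
    for j in range(N):
        Y = np.linalg.solve(z[j] * B - A, V)
        for k in range(M):
            S[:, k * L:(k + 1) * L] += w[j] * z[j] ** k * Y
    # Step (3): orthonormal basis of the subspace via an economy-size QR decomposition
    Q, _ = qr(S, mode='economic')
    # Step (4): eigenpairs of the projected pencil (Q^H A Q, Q^H B Q) and Ritz vectors
    lam, Y = eig(Q.conj().T @ A @ Q, Q.conj().T @ B @ Q)
    X = Q @ Y
    inside = np.abs(lam - gamma) < rho       # keep Ritz values inside the contour
    return lam[inside], X[:, inside]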

6. Numerical Experiments

In this section, we provide five numerical examples. In Examples 1–4, we discuss the stability of Algorithm 2 (SS method), Algorithm 3 (CIRR method), Algorithm 4 (BSS method), and Algorithm 1 (BSSRR method), and in Example 5, we apply the BSSRR method to selected matrices from the fields of engineering and science. The relative residual is computed in the same way for all of the methods (a sketch of one common residual definition is given after the algorithm listings below). In the tables of computational results, the number of eigenpairs is denoted NE.

Input:
Output:
(1)Set
(2)Solve , [14–16]
(3)Set
(4)Compute by (5)
(5)Compute the eigenvalues of the pencil
(6)Compute by (7)
(7)Set
Input:
Output:
(1)Set
(2)Solve for , , [14–16]
(3)Compute , by (17)
(4)Construct an orthonormal basis from
(5)Form and
(6)Compute the eigenpairs of
(7)Set
(8)Select the approximate eigenpairs from
Input: for
Output: for
(1)Solve [14–16] and calculate
(2)Compute
(3)Construct Hankel matrices and
(4)Perform singular value decomposition,
(5)Construct
(6)Compute eigenvalues of to have
(7)Compute
(8)Compute
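The exact residual norm used in the experiments is not reproduced above; as an illustration only, the following sketch computes one common definition of the relative residual of an approximate eigenpair, normalized by ||Ax||_2 + |λ| ||Bx||_2 (this particular normalization is our assumption, not necessarily the paper's).

import numpy as np

def relative_residual(A, B, lam, x):
    # Relative residual of an approximate eigenpair (lam, x) of A x = lam B x,
    # normalized here by ||A x||_2 + |lam| * ||B x||_2 (an assumed convention).
    Ax, Bx = A @ x, B @ x
    return np.linalg.norm(Ax - lam * Bx) / (np.linalg.norm(Ax) + abs(lam) * np.linalg.norm(Bx))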

Example 1. A real symmetric matrix A was prepared which has five primary eigenvalues −12.03, −12.02, −12.01, −12.00, and −11.99 in the range [−12.5, −11.5]; the other eigenvalues were taken randomly in the range [−40, 40], and a random unitary matrix was used to construct A. The identity matrix was used for B. After applying Algorithms 1–4, we obtained the numerical results shown in Table 1.
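One way to construct such a test pair (A, B) is sketched below; drawing the remaining eigenvalues uniformly from [−40, 40] and using a random orthogonal matrix obtained from a QR factorization (in place of a general unitary matrix, since A is real symmetric) are assumptions consistent with, but not spelled out in, the description above.

import numpy as np

def example1_pair(n=1000, seed=0):
    # Real symmetric A with five prescribed eigenvalues near -12 and the
    # remaining eigenvalues drawn from [-40, 40]; B is the identity matrix.
    rng = np.random.default_rng(seed)
    primary = np.array([-12.03, -12.02, -12.01, -12.00, -11.99])
    others = rng.uniform(-40.0, 40.0, size=n - primary.size)
    D = np.diag(np.concatenate([primary, others]))
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal matrix
    A = Q @ D @ Q.T
    return A, np.eye(n)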

Example 2. We let A and B be complex random matrices with B positive definite. After applying Algorithms 1–4, we obtained the numerical results shown in Table 2. The relative residuals of the described methods are plotted in Figure 1 for n = 1000.

Example 3. In this example, A and B were taken to be sparse, symmetric, and random, with A positive definite. After applying Algorithms 1–4, we obtained the numerical results shown in Table 3. The relative residuals of the described methods are plotted in Figure 2 for n = 1000.

Example 4. We consider the following matrices. After applying Algorithms 1–4, we obtained the numerical results shown in Table 4. The relative residuals of the described methods are plotted in Figure 3 for n = 1000.

Example 5. In this example, we selected seventeen matrices from the UF sparse matrix collection. Two major requirements were used in the selection procedure: matrices with different parameters and matrices arising in different application areas were chosen. We consider the following parameters: the order N, the number of nonzero elements NZ, and the condition number CON. The application areas of the selected matrices are listed in Table 5. Matrices from different areas were selected so that the results obtained by running them are representative of several scientific fields. We applied the BSSRR method to compute the relative residual of the generalized eigenvalue problem (1), where A is one of the matrices selected in Table 5 and B is the identity matrix of the same dimension as A. The relative residuals computed with respect to the chosen norm are given in Table 6, and those computed with respect to other norms are given in Table 7; the number of eigenpairs for each matrix in Table 7 was sixteen.

7. General Conclusions and Plans for Future Work

Several specific conclusions were drawn in connection with the numerical results presented in the previous section. Some general conclusions are given as follows:
(1)All numerical experiments indicate that the CIRR, BSS, and BSSRR methods have higher stability than the SS method
(2)The BSSRR method has a smaller relative residual than the SS, CIRR, and BSS methods
(3)If the norm used in Table 6 is chosen for the calculation of the relative residual in the BSSRR method, then we obtain higher accuracy and a lower computation time

Designing quadrature points with higher performance and carrying out a more precise error analysis of the BSSRR method are part of our future work.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The support of Eng. Akbar Shahidzadeh Arabani is gratefully acknowledged.