Abstract

The Jacobi–Davidson iteration method is efficient for computing several eigenpairs of Hermitian matrices. Although many variants of the correction equation involved in the Jacobi–Davidson method have been developed, their behaviors are not yet well understood. In this paper, we aim to explore, theoretically, how different types of correction equations influence the convergence of the Jacobi–Davidson method. As a by-product, we derive the optimal expansion vector, obtained by imposing a shift-and-invert transform on a vector located in a prescribed subspace, for expanding the current subspace.

1. Introduction

The complex n-dimensional vector space and the space of n-by-n complex matrices are denoted by $\mathbb{C}^{n}$ and $\mathbb{C}^{n\times n}$, respectively; these notations carry over straightforwardly to the real case. Many iterative methods have been developed for computing a few extreme eigenpairs of the standard eigenvalue problem
\[
A x = \lambda x, \tag{1}
\]
where $A \in \mathbb{C}^{n\times n}$ is a large, sparse Hermitian matrix, $\lambda$ is an eigenvalue, and $x$ is the associated eigenvector of matrix $A$; examples include the Krylov subspace methods [1, 2], the locally optimal block preconditioned conjugate gradient method [3], the Davidson method [4], and their variants [5–8]. Here, $\|\cdot\|$ denotes the Euclidean norm of a vector, whose precise definition is given in the next section.

In the Jacobi–Davidson iteration method, the projection onto the orthogonal complement of the current approximation elegantly avoids the singularity of the shifted matrix and ensures the positive definiteness of the resulting coefficient matrix when the current approximation is close to the desired eigenvalue. In [9], the authors proposed several alternative variants of the correction equation and made reasonable predictions about how these variants influence the convergence. In [10], the authors explored the existence and uniqueness of solutions of different correction equations. However, to the best of our knowledge, there is no analysis comparing the convergence behavior induced by different types of correction equations [11, 12]. In this paper, we aim to analyse, theoretically, the convergence property of the Jacobi–Davidson method influenced by different types of correction equations.

The organization of this paper is as follows. In Section 2, we introduce some notation and present some preliminaries. In Section 3, we give the main convergence results. In Section 4, we report a numerical example. Finally, in Section 5, we end the paper with some concluding remarks.

2. Preliminaries

For convenience of the statements, we first introduce some notation. In this paper, $I_n$ is used to represent the identity matrix of order $n$ and, for short, it is simplified as $I$; $e_i$, with a suitable dimension, is used to denote the $i$-th column of $I$. We use $(\cdot,\cdot)$ to denote the inner product defined on the complex vector space $\mathbb{C}^{n}$, and the corresponding induced Euclidean norm of either a vector or a matrix is denoted by $\|\cdot\|$. For two given vectors $x, y \in \mathbb{C}^{n}$, $\angle(x, y)$ indicates the angle between $x$ and $y$, and thus the distance between them, with $\|x\| = \|y\| = 1$, can be expressed, equivalently, in terms of $\sin\angle(x, y)$. In addition, for a given matrix $B \in \mathbb{C}^{n\times n}$, $B^{T}$ and $B^{H}$ denote its transpose and conjugate transpose, respectively, and this notation carries over straightforwardly to vectors.

The eigenpairs of the Hermitian eigenvalue problem (1) are denoted by $(\lambda_i, x_i)$, $i = 1, 2, \ldots, n$, with the eigenvectors $x_1, x_2, \ldots, x_n$ being orthonormal. For a given matrix pair $(A, B)$ and a nonzero vector $u \in \mathbb{C}^{n}$, the function
\[
\rho(u; A, B) = \frac{u^{H} A u}{u^{H} B u} \tag{2}
\]
is defined as the generalized Rayleigh quotient of $u$ associated with the matrix pair $(A, B)$.
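As a concrete illustration of (2), the short Python sketch below evaluates the generalized Rayleigh quotient; the function name rayleigh_quotient and the convention that B = None stands for B = I are our own choices for exposition.

```python
import numpy as np

def rayleigh_quotient(A, u, B=None):
    """Generalized Rayleigh quotient u^H A u / u^H B u; B = None means B = I."""
    Bu = u if B is None else B @ u
    return (u.conj() @ (A @ u)) / (u.conj() @ Bu)

# small illustration on a random Hermitian matrix
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = (M + M.conj().T) / 2
u = rng.standard_normal(5) + 1j * rng.standard_normal(5)
print(rayleigh_quotient(A, u))   # real up to rounding, since A is Hermitian
```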

In [6], Sleijpen and Van der Vorst proposed the efficient Jacobi–Davidson iteration method for computing a few eigenpairs of the Hermitian eigenvalue problem (1). In each iteration step, say the $k$-th, of the Jacobi–Davidson method, the so-called correction equation
\[
(I - u_k u_k^{H})(A - \theta_k I)(I - u_k u_k^{H})\, t = -r_k, \qquad t \perp u_k, \tag{3}
\]
with $r_k = A u_k - \theta_k u_k$ being the residual with respect to the current unit approximation $u_k$ and its Rayleigh quotient $\theta_k = u_k^{H} A u_k$, needs to be solved to expand the current projection subspace.

It should be noted that the projection onto the orthogonal complement of $u_k$ guarantees the positive definiteness and well-conditioning of the coefficient matrix of the correction equation in (3). In addition, the Jacobi–Davidson iteration method attains, elegantly, a locally cubic convergence rate [6, 13–15].
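For small dense problems, (3) can be solved exactly by restricting it to an orthonormal basis of the orthogonal complement of $u_k$; the Python sketch below does exactly that. It is only an expository direct solve (in practice the correction equation is solved approximately, for example by a preconditioned iterative method), and the function name jd_correction is ours.

```python
import numpy as np

def jd_correction(A, u, theta):
    """Exact dense solve of the correction equation (3):
    (I - u u^H)(A - theta I)(I - u u^H) t = -r with t orthogonal to u."""
    n = u.shape[0]
    r = A @ u - theta * u                                   # residual r_k
    Q = np.linalg.qr(u.reshape(-1, 1), mode='complete')[0]  # full unitary, first column spans u
    Qp = Q[:, 1:]                                           # orthonormal basis of span{u}^perp
    Ap = Qp.conj().T @ (A - theta * np.eye(n)) @ Qp         # restriction of A - theta I
    s = np.linalg.solve(Ap, -Qp.conj().T @ r)
    return Qp @ s                                           # t, automatically orthogonal to u
```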

In [9], Genseberger and Sleijpen proposed a more restrictive alternative correction equation
\[
(I - U U^{H})(A - \theta_k I)(I - U U^{H})\, t = -r_k, \qquad t \perp \mathcal{U}, \tag{4}
\]
where $U$ is an n-by-m matrix whose columns form an orthonormal basis of the current projection subspace, say, subspace $\mathcal{U}$.

Although the projection in (4) is more complicated and expensive than that in (3), projecting onto the orthogonal complement of the current approximate subspace may further improve the conditioning of the correction equation, especially in the case of clustered eigenvalues. Obviously, if $U$ is chosen as $u_k$, with $u_k$ being the current Ritz vector, the correction equation in (4) reduces to that in (3).
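The same restriction idea yields an expository exact solver for (4): project out the whole subspace spanned by $U$ rather than only $u_k$. Again, this is a dense sketch with a function name of our own choosing; with m = 1 it reduces to the solver for (3).

```python
import numpy as np

def jdg_correction(A, U, u, theta):
    """Exact dense solve of the correction equation (4):
    (I - U U^H)(A - theta I)(I - U U^H) t = -r with t orthogonal to span(U).
    U has orthonormal columns; u is the current Ritz vector (a column of U)."""
    n, m = U.shape
    r = A @ u - theta * u                              # residual r_k
    Q = np.linalg.qr(U, mode='complete')[0]            # full unitary, first m columns span(U)
    Qp = Q[:, m:]                                      # orthonormal basis of span(U)^perp
    Ap = Qp.conj().T @ (A - theta * np.eye(n)) @ Qp
    return Qp @ np.linalg.solve(Ap, -Qp.conj().T @ r)  # with m = 1 this reduces to (3)
```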

Generally speaking, the solutions of the correction equations in (3) and (4) do not lead to the same expansion of the projection subspace, and, as emphasized in [9], it is unclear which of them is more effective in actual computations. Therefore, different convergence behaviors of the Jacobi–Davidson method are expected.

In the Jacobi–Davidson iteration method, once the current search subspace $\mathcal{U}$ is available, we first project matrix $A$ onto it and obtain the corresponding projected matrix $T = U^{H} A U$; we then compute the eigenpairs $(\theta_i, c_i)$, $i = 1, 2, \ldots, m$, of the small matrix $T$, with $m \ll n$; and, finally, we use the Ritz pairs $(\theta_i, U c_i)$ to approximate the desired eigenpairs of matrix $A$; see [6] for more details. Note that the Ritz vectors $u_i = U c_i$, $i = 1, 2, \ldots, m$, also span the subspace $\mathcal{U}$, and thus, in the subsequent discussions, we take $U = [u_1, u_2, \ldots, u_m]$ in the correction equation in (4).
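The extraction step just described takes only a few lines; the sketch below assumes that V has orthonormal columns and relies on numpy.linalg.eigh for the small projected eigenproblem.

```python
import numpy as np

def rayleigh_ritz(A, V):
    """Rayleigh-Ritz extraction: T = V^H A V, eigenpairs (theta_i, c_i) of T,
    and Ritz pairs (theta_i, V c_i) approximating eigenpairs of A."""
    T = V.conj().T @ A @ V            # m-by-m projected matrix
    theta, C = np.linalg.eigh(T)      # Hermitian small eigenproblem, ascending order
    return theta, V @ C               # Ritz values and Ritz vectors
```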

3. The Main Results

In this section, we exhibit the main results, which depict the different convergence behaviors of the Jacobi–Davidson method induced by the two types of correction equations (3) and (4).

If the correction equations (3) and (4) are solved exactly, we obtain the solutions
\[
t_k = -u_k + \alpha\, (A - \theta_k I)^{-1} u_k \quad\text{and}\quad \tilde{t}_k = -u_k + (A - \theta_k I)^{-1} U \beta, \tag{5}
\]
respectively, where the scalar $\alpha$ and the vector $\beta \in \mathbb{C}^{m}$ are determined by the orthogonality conditions $t_k \perp u_k$ and $\tilde{t}_k \perp \mathcal{U}$. Thus, expanding the current search subspace by $t_k$ and $\tilde{t}_k$ in the above equations is equivalent to expanding it by
\[
v_k = (A - \theta_k I)^{-1} u_k \quad\text{and}\quad \tilde{v}_k = (A - \theta_k I)^{-1} U \beta, \tag{6}
\]
respectively, as $u_k$ is included in the current search subspace.
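The equivalence in (6) for the correction equation (3) is easy to verify numerically: the shift-and-invert vector $(A - \theta_k I)^{-1} u_k$ should lie in the span of $u_k$ and the exact solution $t_k$. The following self-contained check, using a random symmetric test matrix of our own choosing, confirms this up to rounding errors.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
A = rng.standard_normal((n, n)); A = (A + A.T) / 2          # symmetric test matrix
u = rng.standard_normal(n); u /= np.linalg.norm(u)
theta = u @ A @ u                                            # Rayleigh quotient
r = A @ u - theta * u                                        # residual

# exact solution of (3), restricted to span{u}^perp
Q = np.linalg.qr(u.reshape(-1, 1), mode='complete')[0][:, 1:]
t = Q @ np.linalg.solve(Q.T @ (A - theta * np.eye(n)) @ Q, -Q.T @ r)

v = np.linalg.solve(A - theta * np.eye(n), u)                # shift-and-invert direction
B = np.linalg.qr(np.column_stack([u, t]))[0]                 # orthonormal basis of span{u, t}
print(np.linalg.norm(v - B @ (B.T @ v)) / np.linalg.norm(v)) # close to machine precision
```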

Next, we present a result which depicts the distance between the expansion vector and the desired eigenvector.

Theorem 1. Consider $U = [u_1, u_2, \ldots, u_m]$, whose columns $u_i$, $i = 1, 2, \ldots, m$, are Ritz vectors of $A$, and let $\theta_k$ be the Rayleigh quotient of $u_k$. Assume that $U$ admits the orthogonal direct sum decomposition given in (7), and define $y^{\star}$ accordingly; then $y^{\star}$ solves the minimization problem stated in (8), in which the entries of the matrix involved are determined by the decomposition (7).

Proof. Based on the orthogonal direct sum decomposition (7), the matrix $U$ can be represented accordingly, which indicates the relation in (9). Then, the relation in (10) follows. According to the property of the generalized Rayleigh quotient, the vector $y$ satisfying (10) minimizes the quotient in (11), with the minimum being zero, and thus the least squares solution $y^{\star}$ is obtained uniquely.
In [16], the author considered the problem of optimally expanding subspaces for eigenvector approximations. Specifically, starting from a prescribed subspace, the author derived the optimal expansion vector obtained by multiplying matrix $A$ with a vector located in the prescribed subspace. Likewise, Theorem 1 provides the optimal vector, obtained by imposing a shift-and-invert transform on a vector located in the prescribed subspace, to expand the current subspace.
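Independently of the closed form given in Theorem 1, the optimal shift-and-invert expansion vector can be computed numerically. Assuming that optimality is measured by the angle to the desired eigenvector (our reading of the theorem), the best vector of the form $(A - \theta_k I)^{-1} U y$ is the orthogonal projection of that eigenvector onto the range of $(A - \theta_k I)^{-1} U$. The sketch below, a dense construction of our own, computes this vector and the corresponding coefficient vector y.

```python
import numpy as np

def optimal_expansion(A, U, theta, x1):
    """Among all vectors (A - theta I)^{-1} U y, return the one making the smallest
    angle with x1, together with the corresponding coefficient vector y."""
    n = A.shape[0]
    S = np.linalg.solve(A - theta * np.eye(n), U)    # shift-and-invert applied to U
    Q, _ = np.linalg.qr(S)                           # orthonormal basis of range(S)
    w = Q @ (Q.conj().T @ x1)                        # projection of x1: angle minimizer in range(S)
    y = np.linalg.lstsq(S, w, rcond=None)[0]         # coefficients such that S y = w (least squares)
    return w, y
```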

Theorem 2. Assume that the current search subspace spanned by $U$, obtained by the Jacobi–Davidson method, satisfies the condition in (12); then we have the estimate in (13), where the quantities involved are defined as above and $c_1$ denotes the first row of the matrix $C$.

Proof. It follows from the orthogonal direct sum decomposition in (7) that the relation in (14) holds. As a result, under assumption (12) and by straightforward computations, we obtain (15), which completes the proof.

Theorem 2 indicates that the vector $\tilde{v}_k$ approximates the least squares solution better than $v_k$ does under the assumption in (12), which straightforwardly implies that the exact solution of (4) approximates the desired eigenvector better than the exact solution of (3) does.

4. Numerical Results

In this section, we use one simple example to examine the numerical behavior of the Jacobi–Davidson method affected by the different types of correction equations. For brevity, the Jacobi–Davidson method equipped with (3) is referred to as the JD method and that equipped with (4) as the JDG method.

We compare the two methods, JD and JDG, in terms of the number of iteration steps (“IT”) and the computing time in seconds (“CPU”). In addition, for the final approximation, we use the absolute error (“ERR”) of the approximate eigenvalue with respect to the desired eigenvalue to check whether the two methods converge to the same target eigenvalue. The two methods are terminated once their residual norms meet the prescribed stopping criterion, where $r_k = A u_k - \theta_k u_k$ is the residual, and $u_k$ and $\theta_k$ are the approximate eigenvector and the Rayleigh quotient of $u_k$, respectively.

Example 1. (see [1, 5]) Let us consider the partial differential equation given in (16), defined on its domain with Dirichlet boundary conditions imposed. The eigenvalue problem (1) is acquired by discretizing the partial differential equation (16) with the five-point finite difference scheme on a uniform grid, so that the dimension of the resulting matrix equals the number of interior grid points.
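Since the coefficients of (16) are not reproduced here, the sketch below discretizes only the model problem $-\Delta u = \lambda u$ on the unit square with homogeneous Dirichlet boundary conditions by the standard five-point scheme; it illustrates the structure and size of the resulting matrix rather than reproducing Example 1 exactly.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

N = 32                                   # interior grid points per direction (illustrative)
h = 1.0 / (N + 1)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(N, N))
A = (sp.kron(sp.identity(N), T) + sp.kron(T, sp.identity(N))) / h**2   # dimension N^2

# reference value for the smallest eigenpair via shift-and-invert Lanczos
lam, _ = eigsh(A.tocsc(), k=1, sigma=0, which='LM')
print(lam[0], 2 * np.pi**2)              # discrete vs. continuous smallest eigenvalue
```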
Table 1 lists the numerical results for Example 1. From this table, we observe that the JDG method behaves more efficiently and robustly than the JD method and outperforms it in terms of both iteration steps and CPU time, although both methods converge to the desired eigenpair.

5. Concluding Remarks

In this paper, we analyse, theoretically, the behavior of the Jacobi–Davidson method affected by the correction equations (3) and (4), respectively. Admittedly, the comparison may be of limited practical use because the eigenvectors are unknown in practice; thus, carrying out such comparisons in actual computations remains a challenging and interesting problem.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant no. 61602188), the Shandong Provincial Natural Science Foundation (Grant no. ZR2019QD018), and the Scientific Research Foundation of Shandong University of Science and Technology for Recruited Talents (Grant nos. 2017RCJJ068 and 2017RCJJ069).