Abstract

This paper solves for the eigenvalues of a symmetric matrix by using three novel algorithms developed in the m-dimensional affine Krylov subspace. The n-dimensional eigenvector is expressed as the superposition of a constant shifting vector and an m-vector. In the first algorithm, the m-vector is derived as a function of the eigenvalue by maximizing the Rayleigh quotient, which generates the first characteristic equation; from it, however, the eigenvalues are not easy to determine, since its roots are not simple ones, exhibiting turning points, spikes, and even no intersection with the zero line. To overcome this difficulty of the first algorithm, we propose a second characteristic equation through a new quotient formed by the inner product of the shifting vector with the eigen-equation. The Newton method and the fictitious time integration method converge very quickly owing to the simple roots of the second characteristic equation. For both symmetric and nonsymmetric eigenvalue problems solved by the third algorithm, we develop a simple iterative detection method that maximizes the Euclidean norm of the eigenvector in terms of the eigen-parameter; the peaks of the resulting response curve correspond to the eigenvalues. Through a few finer tunings over sequentially smaller intervals, a very accurate eigenvalue and eigenvector can be obtained. The efficiency and accuracy of the proposed iterative algorithms are verified and compared to the Lanczos algorithm and the Rayleigh quotient iteration method.

1. Introduction

Let A be an n × n real symmetric matrix. For a nonzero vector x,
$$R(\mathbf{x}) = \frac{\mathbf{x}^{T}\mathbf{A}\mathbf{x}}{\mathbf{x}^{T}\mathbf{x}} \tag{1}$$
is the Rayleigh quotient [1, 2]. If x is an eigenvector of A, then the corresponding Rayleigh quotient is an eigenvalue λ, satisfying the vector-form eigen-equation
$$\mathbf{A}\mathbf{x} = \lambda\mathbf{x}, \tag{2}$$
where λ is an eigen-parameter; nontrivial solutions with x ≠ 0 exist only for λ being an eigenvalue; otherwise, x = 0.
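As a quick numerical check of equations (1) and (2), the following minimal sketch verifies that the Rayleigh quotient evaluated at an eigenvector of a symmetric matrix returns the corresponding eigenvalue; the 3 × 3 test matrix is an illustrative assumption, not taken from the paper.

import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

def rayleigh_quotient(A, x):
    """R(x) = x^T A x / (x^T x), equation (1)."""
    return (x @ A @ x) / (x @ x)

vals, vecs = np.linalg.eigh(A)              # reference eigen-decomposition
x = vecs[:, 0]                              # an eigenvector of A
print(rayleigh_quotient(A, x) - vals[0])    # ~0: R(x) equals the eigenvalue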

In the free vibrations of a multidegree mass-damping-spring structural system, the second-order ordinary differential equations system is usually transformed to the Rayleigh quotient to find the natural frequencies [3]. The Rayleigh quotient plays important roles in physical applications, and eigenvalue problems are ubiquitous as the main mathematical tools to explain many physical phenomena [4, 5], such as vibration frequencies, stability of dynamical systems, and energy levels in quantum mechanics, as well as in Mathieu's equations for describing wave propagation, electromagnetics, and elastic membranes, and also heat conduction [6].

The Rayleigh quotient (1) can be used to find λ for the symmetric matrix eigenvalue problem (2). The related systems which lead to eigenvalue problems were discussed by Tisseur and Meerbergen [7], and different numerical methods to solve eigenvalue problems were described in [8–10].

The basic ingredient of the Rayleigh–Ritz method is the construction of an orthogonal matrix U of dimension n × m with U^T U = I_m, and setting x = Uy, such that the original n-dimensional eigenvalue problem is reduced to an eigenvalue problem in an m-dimensional subspace with H = U^T A U and Hy = θy, where (θ, Uy) is a Ritz pair to approximate the true eigenvalue and eigenvector. The computational cost in the m-dimensional subspace can be significantly reduced if m ≪ n.
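The following is a minimal NumPy sketch of the Rayleigh–Ritz reduction just described, assuming a random symmetric test matrix and an arbitrary orthonormal U; the names H and theta and the dimensions n = 50, m = 4 are illustrative choices, not from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 4
S = rng.standard_normal((n, n)); A = (S + S.T) / 2   # symmetric test matrix

V = rng.standard_normal((n, m))
U, _ = np.linalg.qr(V)                 # orthonormal basis, U^T U = I_m
H = U.T @ A @ U                        # m x m projected matrix
theta, Y = np.linalg.eigh(H)           # Ritz values theta
ritz_vectors = U @ Y                   # Ritz vectors approximate eigenvectors
print(theta)                           # approximations to m eigenvalues of A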

The Lanczos method further reduces the symmetric matrix to a symmetric tridiagonal matrix. A lot of works have been concerned with the Lanczos algorithm [11–14]. However, it is restricted to finding at most m dominant eigenvalues of A. Bai [15] gave an error analysis of the Lanczos algorithm for the nonsymmetric eigenvalue problem.

The numerical computations in [16, 17] revealed that methods based on the Krylov subspace can be very effective for nonsymmetric eigenvalue problems by using the Lanczos biorthogonalization algorithm and Arnoldi's algorithm. The Arnoldi and nonsymmetric Lanczos methods are both Krylov subspace methods. Among the many algorithms to solve matrix eigenvalue problems, the Arnoldi method [17–20], the nonsymmetric Lanczos algorithm [21], and the subspace iteration method [22] are well known.

The present paper introduces some novel methods to find all eigenvalues of A in an affine Krylov subspace. Two new characteristic equations are derived to solve for λ, from which both the eigenvector and the eigenvalue λ can be obtained simultaneously. Moreover, we develop a very powerful iterative detection method to solve nonsymmetric eigenvalue problems in the affine Krylov subspace, using the Galerkin condition to derive the governing equation of the m-vector. The affine Krylov subspace method was first developed by Liu [23] to solve a linear equations system; it has, however, not yet been used to solve matrix eigenvalue problems. There are many applications of eigenvalues in PageRank [24, 25] and in free vibration [26].

Subsequently, we express the Rayleigh quotient in an affine Krylov subspace in Section 2, where a main result is derived to present the n-dimensional eigenvector in terms of a constant shifting vector and an m-vector governed by a nonhomogeneous linear equations system. The numerical algorithm for a symmetric matrix is developed in Section 3, where we derive the first characteristic equation. To improve the property of the first characteristic equation, we propose a new quotient in Section 4, where the second characteristic equation is derived for the symmetric eigenvalue problem. For the sake of comparison, we describe the Lanczos algorithm and the Rayleigh quotient iteration method in Section 5. The numerical tests of symmetric eigenvalue problems are carried out in Section 6. In Section 7, we discuss the properties of these two derived characteristic equations. In Section 8, we propose an iterative detection method for the nonsymmetric eigenvalue problems by maximizing the Euclidean norm of the derived eigenvector in the affine Krylov subspace, which is very powerful. Finally, we highlight the major results in Section 9.

2. The Rayleigh Quotient in an Affine Krylov Subspace

2.1. Affine Krylov Subspace

Given a nonzero constant shifting vector x0, the m-dimensional Krylov subspace is generated by
$$\mathbb{K}_m := \operatorname{span}\{\mathbf{A}\mathbf{x}_0, \mathbf{A}^{2}\mathbf{x}_0, \ldots, \mathbf{A}^{m}\mathbf{x}_0\}.$$

We employ Arnoldi's process (a modification of the Gram–Schmidt process) to orthogonalize and normalize the Krylov vectors A^j x0, j = 1, …, m, whose resultant vectors u_1, …, u_m satisfy u_i^T u_j = δ_ij, where δ_ij is the Kronecker delta symbol. After giving x0 and the dimension m, the Arnoldi procedure is written as follows [27]:
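The original listing of the procedure did not survive extraction; the following is a standard modified Gram–Schmidt sketch consistent with the description above, under the assumption that the generating vectors are those of the subspace defined above.

import numpy as np

def arnoldi_basis(A, x0, m):
    """Orthonormalize A x0, A^2 x0, ..., A^m x0 by modified Gram-Schmidt."""
    n = len(x0)
    U = np.zeros((n, m))
    w = A @ x0
    for j in range(m):
        for i in range(j):                  # subtract projections on u_1..u_j
            w = w - (U[:, i] @ w) * U[:, i]
        nw = np.linalg.norm(w)
        if nw < 1e-14:                      # the subspace became invariant
            return U[:, :j]
        U[:, j] = w / nw
        w = A @ U[:, j]
    return U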

Let U = [u_1, …, u_m] denote the Krylov matrix with dimension n × m, which possesses the orthogonal property
$$\mathbf{U}^{T}\mathbf{U} = \mathbf{I}_m. \tag{6}$$

2.2. A Main Theorem

Theorem 1. For x0 ≠ 0 and x ∈ x0 + 𝕂_m, the eigenvector x for
$$\lambda = \max_{\mathbf{x} \in \mathbf{x}_0 + \mathbb{K}_m} \frac{\mathbf{x}^{T}\mathbf{A}\mathbf{x}}{\mathbf{x}^{T}\mathbf{x}} \tag{7}$$
is given by
$$\mathbf{x} = \mathbf{x}_0 + \mathbf{U}\boldsymbol{\alpha}(\lambda), \tag{8}$$
where
$$\boldsymbol{\alpha}(\lambda) = (\lambda\mathbf{I}_m - \mathbf{U}^{T}\mathbf{A}\mathbf{U})^{-1}\mathbf{U}^{T}(\mathbf{A} - \lambda\mathbf{I}_n)\mathbf{x}_0$$
and the eigenvalue λ is to be solved from a nonlinear characteristic equation.

Proof. Because of x − x0 ∈ 𝕂_m, we can expand x by
$$\mathbf{x} = \mathbf{x}_0 + \mathbf{U}\boldsymbol{\alpha}, \tag{9}$$
where the coefficient vector α is to be determined. By using equations (6) and (9), we can derive the numerator and denominator of the Rayleigh quotient as two quadratic functions f1(α) and f2(α) of α in equations (10)–(13), in which B = U^T A U is a constant m × m matrix.
The maximality in equation (7) renders ∂R/∂α = 0, which by equations (10)–(13) leads to equation (16). It follows from equation (16) that the gradient of f1 is proportional to the gradient of f2, and since the proportionality factor is R as defined by equation (1), we can write it in equation (17) as λ, an eigenvalue to be determined. By equations (11), (13), and (17), α is solved from
$$(\lambda\mathbf{I}_m - \mathbf{B})\boldsymbol{\alpha} = \mathbf{U}^{T}(\mathbf{A} - \lambda\mathbf{I}_n)\mathbf{x}_0; \tag{19}$$
hence, α is a nonlinear m-vector function of λ. Then, inserting α(λ) into equation (9), we achieve equation (8).
From equation (18), it follows that λ = f1(α)/f2(α). Upon inserting equation (19) for α into equations (10) and (12) and substituting f1 and f2 into the above equation, we obtain a nonlinear characteristic equation h(λ) = 0 (equation (20)) to solve for λ. This ends the proof.
Theorem 1 indicates that the eigenvector x can be expressed in terms of x0 and the m-vector α(λ), as shown by equation (8). Therefore, the computational cost for solving the Rayleigh quotient and symmetric eigenvalue problems is very cheap: we merely solve an m-dimensional linear system (19) to determine α and a nonlinear equation (20) to determine the eigenvalue λ, and the eigenvector is then computed by equation (9). We will give examples to show that the zero points of h(λ) coincide with the roots obtained from the determinant-form characteristic equation
$$\det(\mathbf{A} - \lambda\mathbf{I}_n) = 0; \tag{21}$$
however, equation (21) is not an efficient method to solve the eigenvalues when n is large.

Equation (20) is the first characteristic equation derived in the m-dimensional affine Krylov subspace, which can overcome the inefficiency of the determinant-form characteristic equation. However, the Rayleigh quotient used in equation (20) may cause a weak property of the resulting characteristic equation, as will be demonstrated by the examples given in Section 6.

Remark 1. We may consider a larger subspace 𝕂_{m+1} such that x ∈ 𝕂_{m+1}, and set
$$\mathbf{x} = \mathbf{U}_{m+1}\boldsymbol{\alpha}, \tag{23}$$
where U_{m+1} is constructed from 𝕂_{m+1} by the Arnoldi procedure. As carried out in the proof of Theorem 1, we can derive equation (24), where B_{m+1} = U_{m+1}^T A U_{m+1}.
Then, by the maximality in equation (7), the gradient of the numerator is proportional to that of the denominator with the factor λ, and thus we come to an eigenvalue problem for α:
$$\mathbf{B}_{m+1}\boldsymbol{\alpha} = \lambda\boldsymbol{\alpha}. \tag{25}$$
Unlike equation (19) for α, we cannot solve α explicitly in terms of λ from equation (25). Therefore, we conclude that the affine subspace x0 + 𝕂_m, instead of 𝕂_{m+1}, is necessary, and expanding x by equation (9), rather than by equation (23), is the key for the presented new method to succeed; the method is named the Rayleigh quotient method (RQM).

3. Numerical Algorithm

For equation (20), Liu and Atluri [28] considered the variable transformation
$$y(t) = (1+t)\lambda(t), \tag{26}$$
where t is a fictitious time variable. Equation (20) is equivalent to
$$0 = -\nu h(\lambda) \tag{27}$$
for a nonzero constant ν. Through the manipulations in equations (28)–(30), which add y given by equation (26) to both sides of equation (27) and then differentiate with respect to t, the algebraic equation (20) is converted to
$$\dot{\lambda} = -\frac{\nu}{1+t}\, h(\lambda), \tag{31}$$
which is a first-order ODE for λ(t).
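A minimal sketch of the FTIM update for a scalar characteristic equation h(λ) = 0, integrating the ODE of equation (31) by the forward Euler scheme; the test function h and the values of ν, Δt, and the initial guess are illustrative assumptions, not the paper's data.

def ftim_root(h, lam0, nu=2.0, dt=0.01, tol=1e-10, max_steps=200000):
    """Integrate d(lam)/dt = -nu*h(lam)/(1+t) (equation (31)) by forward Euler."""
    lam, t = lam0, 0.0
    for _ in range(max_steps):
        step = -nu * h(lam) * dt / (1.0 + t)
        lam, t = lam + step, t + dt
        if abs(step) < tol:
            break
    return lam

print(ftim_root(lambda s: (s - 3.0) * (s - 6.0) ** 2, lam0=2.0))  # -> near 3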

The numerical procedures of the proposed algorithm RQM based on Theorem 1 are summarized as follows: (i) give A, select m and x0, give the FTIM parameters and an initial guess of λ, and construct U and B; (ii) for k = 0, 1, …, execute the following steps by using the fictitious time integration method (FTIM) [28] applied to equation (31).

For a quick convergence, we can also apply the Newton method to solve equation (20) as follows:
$$\lambda_{k+1} = \lambda_k - \frac{h(\lambda_k)}{h'(\lambda_k)},$$
where the derivative h'(λ) is computed from the expressions in equations (33)–(35).

The subscript k denotes the kth step value. If λ_{k+1} satisfies the convergence criterion, then stop, and the eigenvalue λ and the eigenvector x are obtained; otherwise, go to step (ii).

When the Krylov matrix U is constructed, the computational cost of the RQM is quite low, with the computations of four scalars and three m-dimensional vectors at each iteration step.
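To make the RQM ingredients concrete, the following sketch assembles α(λ) from equation (19) and scans the first characteristic function over a grid, mirroring the curve of Figure 1(a). The Rayleigh-quotient form h(λ) = λ − R(x(λ)), the Krylov basis generated from A x0, and the diag(3, 6, 9) test matrix with m = 2 are assumptions of this reconstruction, not the paper's exact data.

import numpy as np

def krylov_basis(A, x0, m):
    cols, w = [], A @ x0
    for _ in range(m):
        for u in cols:
            w = w - (u @ w) * u
        cols.append(w / np.linalg.norm(w))
        w = A @ cols[-1]
    return np.column_stack(cols)

def h_rqm(A, U, x0, lam):
    """First characteristic function, lam - R(x(lam)) with x = x0 + U alpha."""
    m, n = U.shape[1], len(x0)
    alpha = np.linalg.solve(lam * np.eye(m) - U.T @ A @ U,
                            U.T @ ((A - lam * np.eye(n)) @ x0))
    x = x0 + U @ alpha
    return lam - (x @ A @ x) / (x @ x)

A = np.diag([3.0, 6.0, 9.0]); x0 = np.ones(3)
U = krylov_basis(A, x0, 2)
for lam in np.linspace(2.0, 10.0, 33):        # coarse scan as in Figure 1(a)
    print(f"{lam:5.2f}  {h_rqm(A, U, x0, lam): .3e}")

The near-zero dips of the printed values flag the eigenvalues, while spikes appear where the denominator x^T x becomes small, as discussed in Section 7.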

4. A New Quotient Method

In addition to equation (18), by taking the inner product of x0 with the eigen-equation (2), we have a simpler quotient to express the eigenvalue by
$$\lambda = \frac{\mathbf{x}_0^{T}\mathbf{A}\mathbf{x}}{\mathbf{x}_0^{T}\mathbf{x}}, \tag{36}$$
where, upon inserting equation (9) for x, the numerator g1(α) and the denominator g2(α) are given by equation (37).

Under the condition x0^T x ≠ 0, equation (36) is well-defined. Whereas f1 and f2 in equations (10) and (12) are quadratic nonlinear functions of α, g1 and g2 in equation (37) are linear functions of α. Therefore, from equation (36), a simpler nonlinear equation is available as follows:
$$h_2(\lambda) := \lambda - \frac{g_1(\boldsymbol{\alpha}(\lambda))}{g_2(\boldsymbol{\alpha}(\lambda))} = 0, \tag{38}$$
which is the second characteristic equation derived in the m-dimensional affine Krylov subspace.

The numerical procedures based on the new quotient method are depicted as follows: (i) give A, select m and x0, give the FTIM parameters and an initial guess of λ, and construct U and B; (ii) for k = 0, 1, …, carry out the following steps by using the FTIM applied to equation (38), or by the Newton method
$$\lambda_{k+1} = \lambda_k - \frac{h_2(\lambda_k)}{h_2'(\lambda_k)},$$
where the derivative h_2'(λ) is computed from the expressions in equations (40) and (41). If the convergence criterion is satisfied, then stop; otherwise, go to step (ii).

It can be seen that the presented iterative algorithm resorting to the new quotient is simpler than that in Section 3. We name this technique the new quotient method (NQM). When the Krylov matrix U is constructed, the computational cost of the NQM is quite low, with the computations of four scalars and one m-dimensional vector at each iteration step.
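A runnable sketch of the NQM with the Newton iteration, under the same assumptions as the previous sketch (Krylov basis generated from A x0 and the quotient form h2(λ) = λ − (x0^T A x)/(x0^T x)); the test matrix and the initial guesses are illustrative choices.

import numpy as np

def krylov_basis(A, x0, m):
    cols, w = [], A @ x0
    for _ in range(m):
        for u in cols:
            w = w - (u @ w) * u
        cols.append(w / np.linalg.norm(w))
        w = A @ cols[-1]
    return np.column_stack(cols)

def h2(A, U, x0, lam):
    """Second characteristic function lam - (x0^T A x)/(x0^T x)."""
    m, n = U.shape[1], len(x0)
    alpha = np.linalg.solve(lam * np.eye(m) - U.T @ A @ U,
                            U.T @ ((A - lam * np.eye(n)) @ x0))
    x = x0 + U @ alpha
    return lam - (x0 @ A @ x) / (x0 @ x)

def newton(f, lam, tol=1e-12, eps=1e-7, max_iter=50):
    for _ in range(max_iter):
        d = (f(lam + eps) - f(lam - eps)) / (2.0 * eps)  # numerical derivative
        step = f(lam) / d
        lam -= step
        if abs(step) < tol:
            break
    return lam

A = np.diag([3.0, 6.0, 9.0]); x0 = np.ones(3)
U = krylov_basis(A, x0, 2)                    # m = 2 < n still sees all roots
for guess in (2.5, 5.5, 9.5):
    print(newton(lambda lam: h2(A, U, x0, lam), guess))  # expected ~3, 6, 9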

5. Lanczos Algorithm and Rayleigh Quotient Iteration Method

As pointed out in [8], the Lanczos algorithm can produce a tridiagonal matrix T from the symmetric matrix A by the three-term recurrence
$$\mathbf{A}\mathbf{q}_j = \beta_{j-1}\mathbf{q}_{j-1} + \alpha_j\mathbf{q}_j + \beta_j\mathbf{q}_{j+1}.$$

Let Q denote the Krylov matrix with dimension n × m, which satisfies Q^T Q = I_m. Then, T = Q^T A Q is an m × m tridiagonal matrix, given by
$$\mathbf{T} = \begin{bmatrix} \alpha_1 & \beta_1 & & \\ \beta_1 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \beta_{m-1} \\ & & \beta_{m-1} & \alpha_m \end{bmatrix}.$$

Let p_j(λ) be the characteristic function of the leading j × j submatrix of T − λI_m. Starting from p_0(λ) = 1 and p_1(λ) = α_1 − λ, we can apply the recurrence formula
$$p_j(\lambda) = (\alpha_j - \lambda)p_{j-1}(\lambda) - \beta_{j-1}^{2}p_{j-2}(\lambda), \quad j = 2, \ldots, m,$$
and then the Newton method to find the eigenvalue of T is given by
$$\lambda_{k+1} = \lambda_k - \frac{p_m(\lambda_k)}{p_m'(\lambda_k)},$$
which starts from an initial guess λ0 and ends when the convergence criterion is satisfied.
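A sketch of the Lanczos tridiagonalization and the recurrence for p_m(λ), with the Newton iteration applied to it; the test matrix and the starting vector are illustrative assumptions.

import numpy as np

def lanczos(A, q1, m):
    """Diagonals (alpha, beta) of T = Q^T A Q (no reorthogonalization)."""
    n = len(q1)
    Q = np.zeros((n, m)); alpha = np.zeros(m); beta = np.zeros(max(m - 1, 0))
    Q[:, 0] = q1 / np.linalg.norm(q1)
    for j in range(m):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w = w - alpha[j] * Q[:, j]
        if j > 0:
            w = w - beta[j - 1] * Q[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    return alpha, beta

def p_m(alpha, beta, lam):
    """Sturm recurrence p_j = (alpha_j - lam) p_{j-1} - beta_{j-1}^2 p_{j-2}."""
    p_prev, p = 1.0, alpha[0] - lam
    for j in range(1, len(alpha)):
        p_prev, p = p, (alpha[j] - lam) * p - beta[j - 1] ** 2 * p_prev
    return p

A = np.diag([3.0, 6.0, 9.0])
alpha, beta = lanczos(A, np.ones(3), 3)
lam, eps = 2.5, 1e-7
for _ in range(50):                            # Newton iteration on p_m
    d = (p_m(alpha, beta, lam + eps) - p_m(alpha, beta, lam - eps)) / (2 * eps)
    lam -= p_m(alpha, beta, lam) / d
print(lam)                                     # -> the eigenvalue 3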

For the purpose of comparison, we list the Rayleigh quotient iteration method (RQIM) as follows [8]:
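Since the RQIM listing did not survive extraction, the following is a standard sketch of the iteration as described in textbooks such as [8]: each step solves an n-dimensional shifted linear system and updates the shift by the Rayleigh quotient; the test matrix and the initial vector are illustrative.

import numpy as np

def rqim(A, x, iters=20):
    n = len(x)
    x = x / np.linalg.norm(x)
    lam = x @ A @ x                          # initial Rayleigh quotient
    for _ in range(iters):
        try:
            z = np.linalg.solve(A - lam * np.eye(n), x)   # n-dim solve per step
        except np.linalg.LinAlgError:
            break                            # shift hit an exact eigenvalue
        x = z / np.linalg.norm(z)
        lam = x @ A @ x
    return lam, x

A = np.diag([3.0, 6.0, 9.0])
lam, x = rqim(A, np.array([1.0, 0.2, 0.1]))
print(lam)                                   # -> 3 (cubic local convergence)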

Remark 2. Instead of solving an m-dimensional linear system (19), as in the presented two new methods, the RQIM solves an n-dimensional linear system at each iteration. In the RQM and the NQM, we give an initial guess of λ to seek an exact eigenvalue; but in the Rayleigh quotient iteration method, the initial guess is an eigenvector, which is more difficult to supply than an initial guess of λ for seeking a certain eigenvalue.

Remark 3. Although the methods in Sections 3 and 4 and the Lanczos algorithm in Section 5 are all performed in the m-dimensional (affine) Krylov subspace, for an n-dimensional symmetric eigenvalue problem, the first two methods can find n eigenvalues with a suitable value of m ≤ n, while the Lanczos algorithm can find at most m eigenvalues, since the Lanczos algorithm reduces the symmetric eigenvalue problem to finding the eigenvalues of an m × m tridiagonal symmetric matrix.

6. Symmetric Eigenvalue Problems

Example 1. We first demonstrate the usefulness of Theorem 1 by a 3 × 3 symmetric matrix which possesses the exact eigenvalues 3, 6, and 9.
We use this simple example to compare the characteristic equations (20) and (38) to the determinant-form characteristic equation (49). The parameters used in the RQM are as given. In Figure 1(a), h(λ) is plotted with respect to the eigen-parameter λ by a solid line, where three remarked points correspond to the eigenvalues 3, 6, and 9. h(λ) traces across the zero line at the first point, which is a simple root, but at the last two points, h(λ) locates at local minimal points, which are not simple roots, exhibiting turning points. The Newton method can be applied to find the first eigenvalue 3, but for the last two eigenvalues it fails, owing to the zero slope of h(λ) at the root. When the Newton method is not applicable to find the last two eigenvalues 6 and 9, we can, however, apply the FTIM to quickly find all eigenvalues.
In Figure 1(b), det(A − λI_n) is plotted with respect to λ, where three intersecting points with the zero line correspond to the eigenvalues 3, 6, and 9. The pattern of this characteristic curve is different from that of h(λ) shown by the solid line in Figure 1(a). The curve obtained from det(A − λI_n) traces across those three points, which are all simple roots.
When n is a large number, deriving the explicit form of the conventional characteristic equation det(A − λI_n) = 0 may be tedious work, and it is hard to employ the Newton method on the determinant-form characteristic equation to find all the eigenvalues.
But in the m-dimensional affine Krylov subspace with m ≤ n, the first characteristic equation has the explicit form in equation (20), and its roots are easily found by using the FTIM. With the first set of parameters, the eigenvalue is obtained by the FTIM through two steps, whose error compared to 3 is zero, and the error of the eigen-equation is very small. With the second set of parameters, the eigenvalue is obtained by the FTIM through 92 steps, whose error compared to 6 is small. With the third set of parameters, the eigenvalue is obtained by the FTIM through 62 steps, whose error compared to 9 is small.
When we apply the Newton method to find the eigenvalue under the given convergence criterion and initial guess, we obtain the eigenvalue through four steps, whose errors compared to 3 and in the eigen-equation are small. By starting from another initial guess, we obtain the eigenvalue through eight steps, whose error compared to 9 is zero. Obviously, when the Newton method is applicable, it converges faster and is more accurate than the FTIM.
However, the Newton method with the Rayleigh quotient fails to find the last two eigenvalues. In order to remedy this drawback, we employ the new quotient in equation (38). As shown in Figure 1(a) by the dashed line, at each root the characteristic curve crosses the zero line, which possesses the same good property as the polynomial characteristic curve of equation (49) in Figure 1(b). It means that the roots of h_2(λ) are all simple roots. The property of the new quotient is better than the Rayleigh quotient used in equation (20). Then, we apply the NQM and the Newton method to solve the characteristic equation (38). By starting from the first initial guess, through seven steps the error compared to 3 is small and an accurate eigenvalue is obtained. By starting from the second initial guess, through five steps the error compared to 6 is small. By starting from the third initial guess, through six steps the error compared to 9 is small.

Example 2. We apply the new algorithms to solve equation (2) with a symmetric matrix whose four exact eigenvalues are −4, 4, 8, and 12.
By taking the specified parameters, we plot h(λ) with respect to λ in Figure 2(a), where four points correspond to the four eigenvalues −4, 4, 8, and 12. It can be seen that there are several spikes, and the quality of the first characteristic curve is not good. In contrast, we plot h_2(λ) with respect to λ in Figure 2(b) by a solid line, where four points obviously correspond to the four eigenvalues. The improvement from the spikes of h(λ) to the crossing points with simple roots of h_2(λ) is significant, which renders the work of finding the eigenvalues quite easy.
We emphasize that m must be large enough to find all eigenvalues. If we reduce m, as shown in Figure 2(b) by a blue dashed line, the characteristic curve intersects the zero line at only three points. In this case, we cannot find the fourth eigenvalue by using the proposed NQM with the reduced m. If one employs the Lanczos algorithm with the same m, only two eigenvalues can be found.
By using the Newton method to solve h_2(λ) = 0 in equation (38) under the given convergence criterion and initial guess, the eigenvalue 12 is obtained, converging within ten steps with small errors of the eigenvalue and the eigen-equation. Starting from another guess, the eigenvalue 8 is obtained within four steps. The remaining two eigenvalues −4 and 4 are each obtained within six steps, all with small errors.
On the other hand, we can solve this problem by applying the FTIM to h_2(λ) = 0 in equation (38) under the given convergence criterion. With the first initial value, the eigenvalue obtained through 31 steps has a small error compared to 12. With the second initial value, the eigenvalue obtained through 20 steps has a small error compared to 8. With the third initial value, the eigenvalue obtained through 13 steps has a small error, and with the fourth initial value, the eigenvalue obtained through nine steps has a small error compared to 4. Overall, both the Newton method and the FTIM converge very quickly and are highly accurate in the solution of the eigenvalue problem with the new quotient h_2(λ) of equation (38) derived in the affine Krylov subspace.
Then, we apply the Lanczos algorithm together with the Newton method to determine the eigenvalues in Table 1, where the parameters m and λ0 are listed and the errors of the eigenvalues are compared. Error 1 denotes the errors of eigenvalues obtained by the Lanczos algorithm, and error 2 denotes the errors obtained by the present NQM. For the eigenvalues −4, 8, and 12, the present method is more accurate than the Lanczos algorithm. To obtain the remaining eigenvalue by the Lanczos algorithm, a different parameter choice was used.

Example 3. Let A_ij = 1/(i + j − 1), i, j = 1, …, n, in equation (2); A is a Hilbert matrix with a very large condition number.
We employ the Newton method together with the NQM in Section 4 to find the eigenvalues, with the parameters as specified. In Figure 3(a), h_2(λ) is plotted with respect to λ, where the intersecting point with the zero line corresponds to an eigenvalue as a simple root of h_2(λ).
Starting from the given initial guess, an eigenvalue very close to 1.567050691098231 given in [29] is obtained within five steps under the specified convergence criterion. In contrast, when the Rayleigh quotient is employed in h(λ) of equation (20), it gives no root in the interval shown in Figure 3(b), which is incorrect.
For another choice of n and m, 1.61889985892416 is obtained through five steps, which is very close to 1.61889985892434 given in [29].
The smallest eigenvalue of the Hilbert matrix is difficult to find. However, by using the NQM, we can do it easily. For one choice of n and m, starting from the given initial guess, the smallest eigenvalue is obtained within 22 steps under the specified convergence criterion, and the error of the eigen-equation is very small, which is very accurate. For another choice, starting from the given initial guess, the smallest eigenvalue is obtained within 26 steps.
In Table 2, we list the largest eigenvalues for several n, obtained by the Lanczos method and the NQM, which converge very quickly within four or five steps by starting from the given initial guess. It can be seen that the presented results are very close to those given in [29] by using the cyclic Jacobi method [30]. Although we solve the eigenvalues in a lower-dimensional subspace with dimension m < n, the results are very accurate.
To display the high efficiency of the new algorithm NQM, we plot the number of iterations and the computed largest eigenvalues of the Hilbert matrices in Figure 4. The iterative algorithm converges within 4 to 6 iterations, as shown in Figure 4(a), and the largest eigenvalues increase with respect to n and tend to 1.96, as shown in Figure 4(b).

Example 4. As a practical application of the presented method NQM in Section 4, we consider Mathieu's eigenvalue problem [31, 32]
$$y''(x) + [\lambda - 2q\cos(2x)]y(x) = 0, \tag{52}$$
subject to the boundary conditions in equation (53), where q is a constant amplitude.
To convert equations (52) and (53) into a symmetric matrix eigenvalue problem, let
$$y(x) = \sum_{k=1}^{n} c_k s_k(x) \tag{54}$$
be the eigenfunction, where c_k are coefficients to be determined and s_k(x) are the simplest orthonormal bases given in equation (55). Multiplying equation (52) by s_j(x), integrating it by parts over the domain, and using the corresponding boundary conditions, we can derive equation (56). Inserting equation (54) for y(x) into equation (56) and using the orthonormality of the bases renders an n-dimensional eigenvalue problem (2), where the components of the symmetric coefficient matrix are given by equation (57) [6]. We employ the NQM in Section 4 to find the eigenvalues with the specified parameters. In Table 3, we list the first ten eigenvalues computed from the present method and compare them to those obtained by Liu [6]. By the NQM, they converge within five steps under the given criterion, and the errors of the eigen-equation are very small. Then, we apply the Lanczos algorithm in Section 5 to find the eigenvalues; however, the Lanczos algorithm does not converge within 1000 iterations under the same criterion. The main reason is that the characteristic equation generated from the Lanczos algorithm is highly ill-posed: the characteristic function undergoes a huge jump near the root of the characteristic equation. To compute the first four eigenvalues by the Lanczos algorithm, we must take a much larger m; otherwise, the eigenvalues obtained are incorrect.
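A hedged sketch of this discretization, assuming Dirichlet conditions on (0, π) and the orthonormal sine bases s_k(x) = √(2/π) sin(kx); the matrix entries corresponding to equation (57) are formed here by numerical quadrature rather than the paper's closed-form expressions, and n, q, and the quadrature resolution are illustrative choices.

import numpy as np

def mathieu_matrix(n, q, quad=4000):
    """Galerkin matrix of -y'' + 2q cos(2x) y on (0, pi), sine basis, midpoint rule."""
    dx = np.pi / quad
    x = (np.arange(quad) + 0.5) * dx                 # midpoint quadrature nodes
    s = np.sqrt(2.0 / np.pi) * np.sin(np.outer(np.arange(1, n + 1), x))
    pot = 2.0 * q * np.cos(2.0 * x)
    A = np.diag(np.arange(1, n + 1, dtype=float) ** 2)  # stiffness term k^2
    A += (s * pot) @ s.T * dx                        # potential coupling term
    return 0.5 * (A + A.T)                           # symmetrize roundoff

A = mathieu_matrix(10, q=1.0)
print(np.linalg.eigvalsh(A)[:4])     # lowest Galerkin eigenvalues, for checking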
The asymptotic eigenvalues for large n are given by equation (58) [33]. In Table 4, we list the computed asymptotic eigenvalues, where the parameters used in the NQM are as specified. They converge within five steps under the given criterion. It can be seen that the asymptotic eigenvalues calculated by the present method coincide with those from equation (58) when n is large.

7. Discussions

One may assert that the shortcoming of the characteristic curve h(λ) in equation (20) is attributed to the Rayleigh quotient being computed in the m-dimensional affine Krylov subspace. However, even with m = n for the full space, the shortcoming of h(λ) still exists. Now, we set m equal to n and solve the eigenvalue problem in the n-dimensional space. For Example 1, we plot the two characteristic curves h(λ) and h_2(λ) with respect to λ in Figure 5(a), where we take m = n and the parameters as specified.

Comparing Figure 1(a) with Figure 5(a), we can see that the characteristic curve based on the Rayleigh quotient still exhibits two crossing points and one turning point near the eigenvalues; moreover, a fictitious root occurs near 3.488, which is not an eigenvalue. The zigzags of the characteristic curve of h(λ) occur because the values of f1 and f2 are very small, as shown in Figure 5(b), such that their ratio in R renders the zigzags.

On the other hand, the curve h_2(λ) crosses the zero line at the eigenvalues as expected, and for the same reason the characteristic curve of h_2(λ) exhibits smaller zigzags, because the values of g1 and g2 are quite small, as shown in Figure 5(c), where three peaks of g1 and g2 occur at the eigenvalues. Besides these three points, the values of g1 and g2 are very small owing to the small value of x.

In order to reflect these observations, we can prove the following results.

Theorem 2. Under the condition m = n, the vector x determined by equations (8) and (19) satisfies x = 0 for λ ∉ σ(A) and x ≠ 0 for λ ∈ σ(A), where σ(A) denotes the set of all eigenvalues of A.

Proof. Because of m = n, U is an n × n matrix, whose inversion is equal to U^T in view of equation (6):
$$\mathbf{U}^{-1} = \mathbf{U}^{T}, \quad \mathbf{U}\mathbf{U}^{T} = \mathbf{I}_n. \tag{59}$$
We first prove that
$$\boldsymbol{\alpha} = -\mathbf{U}^{T}\mathbf{x}_0 \tag{60}$$
is a solution of equation (19) with m = n. From equations (59) and (60), it follows that
$$(\lambda\mathbf{I}_n - \mathbf{U}^{T}\mathbf{A}\mathbf{U})(-\mathbf{U}^{T}\mathbf{x}_0) = \mathbf{U}^{T}(\mathbf{A} - \lambda\mathbf{I}_n)\mathbf{x}_0, \tag{61}$$
which is just equation (19) with m = n. Thus, equation (60) is true. Next, inserting it into equation (9) yields
$$\mathbf{x} = \mathbf{x}_0 - \mathbf{U}\mathbf{U}^{T}\mathbf{x}_0 = \mathbf{0}. \tag{62}$$
On the other hand, we can recast equation (61) to
$$\mathbf{U}^{T}(\lambda\mathbf{I}_n - \mathbf{A})\mathbf{U}\boldsymbol{\alpha} = -\mathbf{U}^{T}(\lambda\mathbf{I}_n - \mathbf{A})\mathbf{x}_0, \tag{63}$$
due to equation (59). Removing the right side to the left side yields
$$\mathbf{U}^{T}(\lambda\mathbf{I}_n - \mathbf{A})(\mathbf{x}_0 + \mathbf{U}\boldsymbol{\alpha}) = \mathbf{0}. \tag{64}$$
When λ is an eigenvalue of A, it is also an eigenvalue of B, and meanwhile the null space of λI_n − A is not empty. Hence, it leads to
$$(\lambda\mathbf{I}_n - \mathbf{A})\mathbf{x} = \mathbf{0}. \tag{65}$$
Then, by equations (9) and (59), x can be a nonzero null vector of λI_n − A. We complete the proof of this theorem.
Owing to the above theoretical results, we can explain why the values of f1 and f2 in Figure 5(b), and g1 and g2 in Figure 5(c), are very small in the whole interval except at the eigenvalue points, at which the peaks appear.
When we apply the pivoting Gaussian elimination method to solve equation (61), whose right-hand side is not zero, a nonzero solution of α is obtained at the eigenvalue point. For this reason, we can explain why f1 and f2 in Figure 5(b) and g1 and g2 in Figure 5(c) exhibit peak values at the eigenvalue points. When m ≤ n and the matrix is given, we can check the values of ‖x(λ)‖ to detect the approximate positions of the eigenvalues. For Example 2, we plot ‖x‖ versus the eigen-parameter λ in Figure 6, where we can observe four peaks occurring at the eigenvalues −4, 4, 8, and 12.

Example 5. We apply the abovementioned detection method to a matrix taken from [8]. We plot the response curve of ‖x‖ versus the eigen-parameter λ in Figure 7, where we can observe four peaks occurring at the eigenvalues.
In order to detect the eigenvalue at a more precise location and obtain a more accurate eigenvalue, we will develop an iterative detection algorithm in the next section.

8. Nonsymmetric Eigenvalue Problems

8.1. A New Detection Method of Eigenvalues

When A is not a symmetric matrix, the Rayleigh quotient is not applicable to determine the eigenvalue. However, we can derive equation (19) by projecting the vector eigen-equation (2) into the Krylov subspace by using the Galerkin condition
$$\mathbf{U}^{T}(\mathbf{A}\mathbf{x} - \lambda\mathbf{x}) = \mathbf{0}.$$

Inserting equation (9) for x and using equation (6) yields an equation which, by equations (6) and (14), leads to equation (19) again. Obviously, the two different approaches render the same equation (19), which is the basic equation for solving the matrix eigenvalue problems.

Compared to the original vector eigen-equation (2), equation (19) differs by the appearance of a nonhomogeneous term on the right side, because it is the projection of equation (2) into the affine Krylov subspace, not the Krylov subspace. To excite a nonzero response of α, we must give a nonzero exciting vector x0.

In the proof of Theorem 2, we do not need the symmetric property of A. When A is a nonsymmetric matrix having real eigenvalues, α is solved from equation (19) with m < n:
$$\boldsymbol{\alpha}(\lambda) = (\lambda\mathbf{I}_m - \mathbf{U}^{T}\mathbf{A}\mathbf{U})^{-1}\mathbf{U}^{T}(\mathbf{A} - \lambda\mathbf{I}_n)\mathbf{x}_0,$$
which is not necessarily the solution given by equation (60), so that x is not necessarily the zero vector given by equation (62). Therefore, x is not a zero vector when λ is not an eigenvalue.

Later, we give examples to show that peaks of ‖x‖ occur at the eigenvalues in the response curve of ‖x‖ versus the eigen-parameter λ, which motivates us to use a simple maximum method to determine the eigenvalue by collocating points inside an interval [a, b]:
$$\max_{\lambda \in [a, b]} \|\mathbf{x}(\lambda)\|, \tag{71}$$
where the size of [a, b] must be large enough to include at least one real eigenvalue. Therefore, the numerical procedures for determining the eigenvalue of a given nonsymmetric matrix are summarized as follows: (i) select m, x0, a, and b; (ii) construct U; (iii) for each collocating point, solve equation (19), set x = x0 + Uα, and take the optimal λ to meet equation (71).

In equation (71), we carry out the computation of the peak point by collocating points in [a, b] and picking up the maximal point. We can write a program by gradually decaying the size of the interval centered at the previous peak point, renewing the interval to a finer one. Given an initial interval [a0, b0], we fix the collocating points and pick up the maximal point λ1. Then, a finer interval [a1, b1] is centered at λ1, and in that interval, we pick up the new maximal point λ2. Continuing this process until the interval length is smaller than a given criterion, we can obtain the eigenvalue with high precision. This algorithm is shortened as the iterative detection method (IDM), as sketched below. The technique of the IDM is restricted to detecting real eigenvalues.
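A minimal sketch of the IDM: scan ‖x(λ)‖ over collocation points, pick the peak, and shrink the interval around it. The Krylov-basis construction (generated from A x0), the 3 × 3 nonsymmetric test matrix, and all parameter values are illustrative assumptions.

import numpy as np

def krylov_basis(A, x0, m):
    cols, w = [], A @ x0
    for _ in range(m):
        for u in cols:
            w = w - (u @ w) * u
        cols.append(w / np.linalg.norm(w))
        w = A @ cols[-1]
    return np.column_stack(cols)

def x_of(A, U, x0, lam):
    """Response vector x = x0 + U alpha(lam), with alpha from equation (19)."""
    m, n = U.shape[1], len(x0)
    alpha = np.linalg.solve(lam * np.eye(m) - U.T @ A @ U,
                            U.T @ ((A - lam * np.eye(n)) @ x0))
    return x0 + U @ alpha

def idm(A, x0, m, a, b, n_coll=50, tol=1e-12, max_iter=60):
    """Detect a real eigenvalue as the peak of ||x(lam)|| inside [a, b]."""
    U = krylov_basis(A, x0, m)
    for _ in range(max_iter):
        if b - a <= tol:
            break
        grid = np.linspace(a, b, n_coll)
        norms = [np.linalg.norm(x_of(A, U, x0, lam)) for lam in grid]
        peak = grid[int(np.argmax(norms))]
        half = 2.0 * (b - a) / n_coll          # keep ~2 grid cells around peak
        a, b = peak - half, peak + half
    return 0.5 * (a + b)

A = np.array([[2.0, 1.0, 0.0],                 # illustrative nonsymmetric matrix
              [0.0, 3.0, 1.0],
              [0.5, 0.0, 4.0]])
print(idm(A, np.ones(3), m=3, a=0.0, b=6.0))   # its real eigenvalue, about 4.19

The geometric shrinking mirrors the paper's sequence of finer intervals: each pass reduces the interval width by a fixed factor, so roughly a dozen passes take a width-6 interval below 1e-12.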

In order to construct the response curve, we choose a large interval of λ to include all eigenvalues, such that the rough locations of the eigenvalues can be observed as the peaks of the response curve. Then, to precisely determine an individual eigenvalue, we choose a small initial interval including that eigenvalue as an internal point. A few iterations of the IDM can compute a very accurate eigenvalue.

When the eigenvalue λ is obtained, if one wants to compute the eigenvector, we can normalize the nth component of x by x_n = 1. Let
$$b_{ij} = a_{ij} - \lambda\delta_{ij}, \tag{72}$$
where a_{ij} are the components of A and δ_{ij} is the Kronecker delta symbol. Then, it follows from equation (2) an (n − 1)-dimensional linear system:
$$\sum_{j=1}^{n-1} b_{ij}x_j = -b_{in}, \quad i = 1, \ldots, n-1. \tag{73}$$

We can apply the Gaussian elimination method to solve it.
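A minimal sketch of this eigenvector recovery, assuming the normalization x_n = 1; the test matrix and the use of numpy's dense solver in place of hand-coded Gaussian elimination are illustrative choices.

import numpy as np

def eigenvector_from_eigenvalue(A, lam):
    """Solve (A - lam I) x = 0 with the normalization x_n = 1, per equation (73)."""
    n = A.shape[0]
    M = A - lam * np.eye(n)
    # first n-1 equations, unknowns x_1..x_{n-1}; x_n = 1 moves to the RHS
    x_head = np.linalg.solve(M[:n-1, :n-1], -M[:n-1, n-1])
    return np.append(x_head, 1.0)

A = np.diag([3.0, 6.0, 9.0]) + 0.1 * np.ones((3, 3))
lam = np.linalg.eigvalsh(A)[0]
x = eigenvector_from_eigenvalue(A, lam)
print(np.linalg.norm(A @ x - lam * x))       # residual near machine precision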

8.2. Examples

Example 6. Now, we apply the detection method to a nonsymmetric matrix from [8]. We plot ‖x‖ versus the eigen-parameter λ in Figure 8, where we can observe three peaks occurring at the eigenvalues.
For this example, with the specified m, x0, and initial interval, we can obtain the first eigenvalue exactly by using equation (71), and with another interval, we can obtain the second eigenvalue. They are exact eigenvalues.

Example 7. We consider another nonsymmetric matrix. With one value of m, we plot ‖x‖ versus the eigen-parameter λ in Figure 9(a), where we can observe four peaks occurring at the eigenvalues, while with a smaller m, we plot ‖x‖ versus λ in Figure 9(b), where we can observe only three peaks.
For this example, with the specified parameters, we can obtain the first eigenvalue by using the IDM in equation (71), and likewise the second eigenvalue. They are exact eigenvalues. But with a smaller m, the accuracy of the obtained eigenvalue is reduced.

Example 8. Consider the Frank nonsymmetric matrix [15, 34, 35], whose maximum eigenvalue for the tested n is given by equation (77). We take the specified m and plot ‖x‖ versus the eigen-parameter λ in Figure 10 by a solid line, where we can observe three peaks occurring at the first three eigenvalues. We find that m can be reduced, and the response curve still has three peaks near those of the original m, as shown in Figure 10 by a dashed line.
Starting from the given initial interval and through six iterations under the specified convergence criterion, we obtain the maximum eigenvalue, which is very close to the one in equation (77). It is remarkable that by using equation (73), we can also obtain the corresponding eigenvector. Starting from two other initial intervals and through six iterations each, we obtain the second and third eigenvalues and their eigenvectors.

Example 9. The abovementioned techniques are also applicable to the eigenvalue problems of symmetric matrices. We apply the new algorithm IDM to solve equation (2) with a symmetric matrix from [8]. We take one value of m and plot ‖x‖ versus λ in Figure 11(a) in the interval [0, 16], where we can observe two peaks occurring at the first two eigenvalues. For a larger m, a new peak for the third eigenvalue appears, but the peak for the first eigenvalue seems to disappear. However, the peak for the first eigenvalue is merely quite small, as shown in Figure 11(b) in the interval [15, 16]. The values of ‖x‖ are greatly reduced when m is changed.
By taking the specified initial guess and convergence criterion, the RQIM requires seven iterations to find the eigenvalue.
By taking the specified parameters in the IDM, starting from the given interval and through seven iterations under the convergence criterion, we can obtain an eigenvalue very close to that obtained by the RQIM. By using equation (73), we can obtain the corresponding eigenvector x, whose error in satisfying the eigen-equation (2) is smaller than that obtained by the RQIM.
Starting from two other initial intervals, through seven and six iterations, respectively, we can obtain two further eigenvalues and their eigenvectors.
Starting from yet another initial interval and through six iterations, we can obtain a small eigenvalue and its eigenvector.

Example 10. We revisit Example 2 again. We take the specified m and plot ‖x‖ versus the eigen-parameter λ in Figure 12 in an interval containing all eigenvalues, where we can observe four peaks occurring at the four eigenvalues. As listed in Table 5, the IDM is more accurate than the Lanczos algorithm.

Example 11. We revisit the eigenvalue problem for the Hilbert matrix; however, we now solve it by using the IDM in Section 8.1. We take the specified m and plot ‖x‖ versus the eigen-parameter λ in Figure 13 in the interval [0, 2], where we can observe five peaks occurring at five eigenvalues.
Due to the highly ill-conditioned nature of the Hilbert matrix, this is a quite difficult eigenvalue problem. For this problem, we take n = 50 with the specified m and x0 to compute the largest eigenvalue, which is given as 2.182696097757424. The affine Krylov subspace method converges very quickly within six iterations, and the error of the eigen-equation is small. The CPU time for the affine Krylov subspace method together with the IDM is 1.52 s.
However, when finding the largest eigenvalue in the full space with dimension m = n, the iteration does not converge within 100 iterations. By using MATLAB, we obtain 2.182696097757423, which is close to that obtained by the IDM. The QR method implemented in MATLAB computes all the eigenvalues in the diagonal matrix D and the eigenvectors of A in 0.000812 s, which in comparison is much faster and more accurate than the IDM for the largest eigenvalue.
Notice that the smallest eigenvalue is very difficult to compute, since it is very close to zero. However, we can obtain the smallest eigenvalue with one iteration and a small error. For the eigenvalue problem of the Hilbert matrix, MATLAB leads to a wrong smallest eigenvalue, which is negative and contradicts the positivity of the eigenvalues of the Hilbert matrix. The eigenvalue function in MATLAB cannot guarantee positive computed eigenvalues for the positive definite Hilbert matrix: the first 41 computed eigenvalues are all negative, and the first 73 fall below a tiny threshold, so most of these eigenvalues computed by MATLAB should be spurious. MATLAB is effective for general-purpose eigenvalue problems with normal matrices, but for highly ill-conditioned matrices, its effectiveness might be lost. Due to the highly ill-posed nature of the smallest eigenvalue of the Hilbert matrix, we may consider the QR algorithm with shift proposed by Francis [36, 37] to resolve this difficulty.
Then, we apply the Lanczos algorithm together with the Newton method to determine the largest eigenvalue of the Hilbert matrix, where we take the specified m. The Lanczos algorithm spends 26 iterations to satisfy the convergence criterion. The largest eigenvalue obtained is 2.182664504835603, with poor accuracy. For the IDM, six iterations suffice, and the result is much more accurate and efficient than that of the Lanczos algorithm.

9. Conclusions

In conclusion, we can claim that the weakness of the Rayleigh quotient used to generate the first characteristic equation is inherent, occurring no matter which m is taken. Upon using the new quotient in the second characteristic equation, we can improve this property, which helps us to quickly find the eigenvalue and the eigenvector. There already exist many excellent methods and algorithms to solve symmetric eigenvalue problems; the current paper proposed an alternative approach, which merely needs an initial guess of λ. Therefore, we can give different initial values to quickly find all eigenvalues. Many algorithms, such as the Rayleigh quotient iteration method, require an initial guess of the eigenvector, which is more difficult to supply than an initial guess of λ for achieving a certain eigenvalue. The presented algorithms RQM and NQM can find the eigenpair simultaneously. The computational cost is low, merely requiring the solution of a lower m-dimensional linear system and the derived characteristic equation, whose high efficiency and high accuracy were confirmed by the numerical tests. Through some tests, we found that the proposed new quotient method can find more eigenvalues than the Lanczos algorithm when the same value of m is used. If m is large enough with m ≤ n, the presented method can find all eigenvalues very effectively.

For the eigenvalue problems of nonsymmetric matrices, we have developed a simple and powerful technique to determine the eigenvalues as the peaks of the response curve of ‖x‖ with respect to the eigen-parameter λ in a selected interval. The vector x was expressed in terms of a nonzero exciting vector x0 and an m-vector developed in the m-dimensional affine Krylov subspace, whose governing linear equation was derived by using the Galerkin condition. An iterative detection method was developed, where through a few finer tunings in smaller and smaller intervals, a very accurate eigenvalue can be obtained. Finally, we reduced the eigen-equation by one dimension to a nonhomogeneous linear system to determine the eigenvector with high accuracy.

Data Availability

All the related data used to support the findings of this study are included within the manuscript.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to thank the National Science and Technology Council, Taiwan, for their financial support (grant number NSTC 111-2221-E-019-048).