Abstract

We propose an inexact Newton method for solving inverse eigenvalue problems (IEPs). The method is globalized by employing classical backtracking techniques. A global convergence analysis of this method is provided, and the R-order convergence property is proved under some mild assumptions. Numerical examples demonstrate that the proposed method is very effective in solving IEPs with distinct eigenvalues.

1. Introduction

In the present paper, we consider inverse eigenvalue problems (IEPs), which are defined as follows. Let $c = (c_1, c_2, \ldots, c_n)^T \in \mathbb{R}^n$ and let $\{A_i\}_{i=0}^{n}$ be a sequence of real symmetric $n \times n$ matrices. Define
$$A(c) := A_0 + \sum_{i=1}^{n} c_i A_i \qquad (1)$$
and denote its eigenvalues by $\lambda_1(c) \le \lambda_2(c) \le \cdots \le \lambda_n(c)$, arranged in increasing order. Given $n$ real numbers $\lambda_1^* \le \lambda_2^* \le \cdots \le \lambda_n^*$, which are arranged in increasing order, the IEP is to find a vector $c^* \in \mathbb{R}^n$ such that
$$\lambda_i(c^*) = \lambda_i^*, \quad i = 1, 2, \ldots, n. \qquad (2)$$
Such a vector $c^*$ is called a solution of the IEP. This type of inverse problem arises in a variety of applications, for instance, the inverse Toeplitz eigenvalue problem [1, 2], the inverse Sturm–Liouville problem, the inverse vibrating string problem, and the pole assignment problem; see [3–5] and the references therein for more details on these applications.

Define $f : \mathbb{R}^n \to \mathbb{R}^n$ by
$$f(c) := \big(\lambda_1(c) - \lambda_1^*, \ldots, \lambda_n(c) - \lambda_n^*\big)^T. \qquad (3)$$
Then, solving IEP (2) is equivalent to solving the nonlinear equation $f(c) = 0$ on $\mathbb{R}^n$: it is clear that $c^*$ is a solution of the IEP if and only if $c^*$ is a solution of the equation $f(c) = 0$. Based on this equivalence, Newton's method can be applied to the IEP, and it converges quadratically [6]. As is known, each iteration of Newton's method involves solving a complete eigenproblem for the matrix $A(c)$. To overcome this drawback, different Newton-like methods have been proposed and studied [7, 8]. To alleviate the over-solving problem, Bai et al. presented in [9] an inexact Cayley transform method for solving the nonlinear system. To avoid solving the approximate Jacobian equations, Shen and Li proposed in [10, 11] Ulm-like methods for solving IEPs. However, all these numerical methods for solving the IEP converge only locally.
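To make the cost of evaluating $f$ concrete, here is a minimal MATLAB sketch of (3), assuming the basis matrices are stored in a cell array; the function name and interface are our own illustrative choices, not from the paper.

```matlab
% iep_residual: evaluate f(c) of (3) via a complete eigensolve -- the
% per-iteration work that plain Newton's method incurs.
% A0 is the constant term, Abasis{1..n} are the symmetric basis
% matrices, and lamstar is the prescribed spectrum in increasing order.
function fc = iep_residual(c, A0, Abasis, lamstar)
    n  = numel(c);
    Ac = A0;
    for i = 1:n
        Ac = Ac + c(i) * Abasis{i};   % A(c) = A0 + sum_i c_i A_i
    end
    lam = sort(eig(Ac));              % complete eigenproblem for A(c)
    fc  = lam - lamstar(:);           % f(c) = (lambda_i(c) - lambda_i^*)_i
end
```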

In this paper, we study numerical methods with a global convergence property for solving the IEP. Since the IEP is equivalent to a nonlinear equation, we review some classical work on solving a general nonlinear equation $F(x) = 0$. Among the inexact Newton-type methods in which a line search procedure is used, an inexact Newton backtracking method was proposed in [12]. It performs backtracking along the inexact Newton step, and computational results on a large set of test problems have shown its robustness and efficiency [13, 14].

Motivated by the inexact Newton backtracking method proposed in [12], the purpose of the present paper is to propose an inexact Newton-type method combined with the Cayley transform method for solving the IEP. In the backtracking procedure, we use the Rayleigh quotient instead of the classical merit function and therefore reduce the computational cost. Under the classical assumption, also used in [7, 9, 10], that the given eigenvalues are distinct and the Jacobian matrix is invertible, we show that this method is globally convergent. Some numerical examples are reported to illustrate the effectiveness of the proposed method on IEPs with distinct eigenvalues.

The paper is organized as follows. In Section 2, a global inexact Newton-type algorithm is proposed. The global convergence analysis is given in Section 3. Finally, in Section 4, some numerical examples are given to confirm the numerical effectiveness and good performance of our algorithm.

2. A Globally Inexact Newton-Like Cayley Transform Method

In this section, we present our algorithm. Let $\mathbb{R}^{n \times n}$ denote the set of all real $n \times n$ matrices. Let $\|\cdot\|$ and $\|\cdot\|_F$ denote the 2-norm and the Frobenius norm in $\mathbb{R}^{n}$ and $\mathbb{R}^{n \times n}$, respectively. The induced 2-norm in $\mathbb{R}^{n \times n}$ is also denoted by $\|\cdot\|$; that is, $\|A\| = \max_{\|x\| = 1} \|Ax\|$. Then, we have $\|A\| \le \|A\|_F$ for any $A \in \mathbb{R}^{n \times n}$. For $x \in \mathbb{R}^n$ and a positive number $\delta$, we use $B(x, \delta)$ to stand for the open ball with radius $\delta$ and center $x$. Let $\lambda_1(c) \le \cdots \le \lambda_n(c)$ be the eigenvalues of the matrix $A(c)$ and let $q_1(c), \ldots, q_n(c)$ be the normalized eigenvectors corresponding to $\lambda_1(c), \ldots, \lambda_n(c)$. Define $Q(c) := [q_1(c), \ldots, q_n(c)]$. Let $c^* \in \mathbb{R}^n$ be a solution of the IEP and write $\lambda^* := (\lambda_1^*, \ldots, \lambda_n^*)^T$ and $\Lambda^* := \operatorname{diag}(\lambda_1^*, \ldots, \lambda_n^*)$.

The Cayley transform method for computing approximately the eigenproblem of the matrix $A(c)$ was proposed in [6] and was applied in [9, 10]. We now recall this method and then apply it in our algorithm. Suppose that $c^*$ is a solution of the IEP. Then, there exists an orthogonal matrix $Q^*$ such that
$$(Q^*)^T A(c^*)\, Q^* = \Lambda^*. \qquad (6)$$
Assume that $c^k$, $Q^k$, and $\Lambda^k := \operatorname{diag}(\lambda_1(c^k), \ldots, \lambda_n(c^k))$ are the current approximations of $c^*$, $Q^*$, and $\Lambda^*$, respectively. Define $Q^* = Q^k e^{Y}$, where $Y$ is a skew-symmetric matrix. Then, (6) can be rewritten as
$$e^{-Y} (Q^k)^T A(c^*)\, Q^k e^{Y} = \Lambda^*. \qquad (7)$$
Based on (7), we define the new approximation $c^{k+1}$ of $c^*$ by neglecting the second-order terms in $Y$:
$$(Q^k)^T A(c^{k+1})\, Q^k \approx \Lambda^* + Y \Lambda^* - \Lambda^* Y. \qquad (8)$$
By equating the diagonal elements in (8), we have
$$(q_i^k)^T A(c^{k+1})\, q_i^k = \lambda_i^*, \quad i = 1, 2, \ldots, n, \qquad (9)$$
where $q_i^k$ are the column vectors of $Q^k$. Thus, once we get $c^{k+1}$ by solving this Jacobian equation, we can obtain $Y$ by equating the off-diagonal elements in (8); that is,
$$Y_{ij} = \frac{(q_i^k)^T A(c^{k+1})\, q_j^k}{\lambda_j^* - \lambda_i^*}, \quad i \ne j. \qquad (10)$$
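As an illustration of the reconstructed formula (10), the following MATLAB sketch assembles the skew-symmetric matrix $Y$; the helper name form_Y and the argument layout are hypothetical.

```matlab
% form_Y: build the skew-symmetric Y of (10) from Q^k and A(c^{k+1}).
% lamstar must contain distinct values, as assumed throughout the paper.
function Y = form_Y(Q, Anew, lamstar)
    n = size(Q, 1);
    G = Q' * Anew * Q;                % G_ij = (q_i^k)' A(c^{k+1}) q_j^k
    Y = zeros(n);
    for i = 1:n
        for j = 1:n
            if i ~= j
                Y(i,j) = G(i,j) / (lamstar(j) - lamstar(i));
            end
        end
    end                               % G symmetric => Y is skew-symmetric
end
```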

In order to update the new approximation $Q^{k+1}$ of $Q^*$, we construct an orthogonal matrix $U^k$ using Cayley's transform,
$$U^k := \Big(I + \tfrac{1}{2} Y\Big)\Big(I - \tfrac{1}{2} Y\Big)^{-1}, \qquad (11)$$
and set $Q^{k+1} = Q^k U^k$; that is, we can obtain $Q^{k+1}$ by solving
$$\Big(I + \tfrac{1}{2} Y\Big)(Q^{k+1})^T = \Big(I - \tfrac{1}{2} Y\Big)(Q^k)^T. \qquad (12)$$
Finally, the new approximations of the eigenvalues can be obtained by the Rayleigh quotients
$$\lambda_i^{k+1} = (q_i^{k+1})^T A(c^{k+1})\, q_i^{k+1}, \quad i = 1, 2, \ldots, n, \qquad (13)$$
where $q_i^{k+1}$ are the column vectors of $Q^{k+1}$.

Note that (12) can be computed as follows. Compute $W := (I - \frac{1}{2} Y)(Q^k)^T$ and let $w_j$ be the $j$th column of $W$ at first. Then, solve $z_j$, $j = 1, \ldots, n$, iteratively from the $n$ linear systems
$$\Big(I + \tfrac{1}{2} Y\Big) z_j = w_j, \quad j = 1, 2, \ldots, n. \qquad (14)$$
Finally, set $Q^{k+1} = [z_1, \ldots, z_n]^T$. Since $Q^k$ is an orthogonal matrix and $Y$ is a skew-symmetric matrix, we see that $Q^{k+1}$ must be orthogonal. To maintain the orthogonality of $Q^{k+1}$, (14) cannot be solved inexactly. One could expect that only a few iterations are required to solve each system of (14). This is due to the fact that, as $c^k$ converges to $c^*$, $Y$ converges to zero; see [6]. Consequently, the coefficient matrix on the left-hand side of (14) approaches the identity matrix in the limit.
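A minimal MATLAB sketch of the update (12)-(13) as just described: the $n$ systems (14) share the coefficient matrix $I + \frac{1}{2}Y$, and the new eigenvalue approximations are Rayleigh quotients. A direct solve stands in here for the iterative QMR solver used in Section 4; the function name is hypothetical.

```matlab
% cayley_update: compute Q^{k+1} from (12) and lambda^{k+1} from (13).
function [Qnew, lamnew] = cayley_update(Q, Y, Anew)
    n = size(Q, 1);
    W = (eye(n) - Y/2) * Q';             % right-hand sides w_j of (14)
    Z = (eye(n) + Y/2) \ W;              % the n linear systems (14) at once
    Qnew = Z';                           % columns of Z are columns of (Q^{k+1})^T
    lamnew = diag(Qnew' * Anew * Qnew);  % Rayleigh quotients (13)
end
```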

For solving a general nonlinear equation $F(x) = 0$, line search techniques [15] are often used to enlarge the convergence basin of a locally convergent method. They are based on a globally convergent method for a problem of the form $\min_x g(x)$, where $g$ is an appropriately chosen merit function whose global minimum is a zero of $F$. In these cases, for a given direction $d_k$, we have the iteration form $x_{k+1} = x_k + \alpha_k d_k$, where $\alpha_k > 0$ is such that $g(x_k + \alpha_k d_k) < g(x_k)$. The existence of such an $\alpha_k$ is ensured if there exists $\bar{\alpha} > 0$ such that $g(x_k + \alpha d_k) < g(x_k)$ for all $\alpha \in (0, \bar{\alpha}]$.

In typical line search strategies, the step length $\alpha_k$ is chosen by the so-called backtracking approach. Among the backtracking methods, the inexact Newton backtracking method (INB) [12] is a globally convergent process in which the $k$th iteration of an inexact Newton method is embedded in a backtracking strategy. The merit function usually used in INB is $\|F(x)\|$; see, for example, [12–14, 16]. Thanks to (3), for the IEP this would involve computing the exact eigenvalues of $A(c)$, which are costly to compute. Our intention here is to replace them by the Rayleigh quotients (see (13)). In Section 3, we will show that this replacement retains superlinear and global convergence.

The details of our algorithm for solving the IEP are specified in Algorithm 1. In Step 7, the following sufficient decrease condition on the merit function based on the Rayleigh quotients is imposed:
$$\|\lambda^{k+1} - \lambda^*\| \le \big[1 - t(1 - \eta_k)\big]\, \|\lambda^k - \lambda^*\|. \qquad (15)$$
The while loop in Step 7 is also called the backtracking loop below. Note that if the forcing term were chosen without safeguards when $c^k$ is away from an accumulation point $c^*$, then $\eta_k \ge 1$ could occur. Accordingly, the forcing term $\eta_k$ given in (20) guarantees that $\eta_k < 1$ for each $k$.
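The following MATLAB sketch mirrors the backtracking loop of Step 7 with the Rayleigh-quotient merit in place of $\|f\|$, under our reconstruction of the test (15); the handle-based interface is an illustrative assumption, and the cap of 80 passes is taken from Section 4.

```matlab
% inb_backtrack: shrink the inexact Newton step until the sufficient
% decrease test (15) holds. merit(c) must return the norm of the
% Rayleigh-quotient residual ||lambda^{k+1} - lambda^*||, recomputing
% Steps 4-6 internally at the trial point.
function [dc, eta] = inb_backtrack(merit, c, dc, eta, t, theta)
    m0 = merit(c);
    iters = 0;
    while merit(c + dc) > (1 - t*(1 - eta)) * m0 && iters < 80
        dc  = theta * dc;             % backtrack along the inexact Newton step
        eta = 1 - theta*(1 - eta);    % relax the forcing term accordingly
        iters = iters + 1;            % Section 4 caps this loop at 80 passes
    end
end
```

Here merit must redo Steps 4-6 at each trial point, which is exactly where the Rayleigh quotients save the cost of a full eigensolve.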

3. Convergence Analysis

In this section, we analyze the global behavior of Algorithm 1. We will show that, if the given eigenvalues are distinct and if there exists an accumulation point $c^*$ of $\{c^k\}$ such that the Jacobian matrix $f'(c^*)$ is invertible, then the iterates are guaranteed to remain near $c^*$ and $c^k \to c^*$ as $k \to \infty$. Furthermore, for $k$ sufficiently large, we have the equality $c^{k+1} = c^k + \Delta c^k$; that is, the full inexact Newton step is accepted. Thus, we obtain that the ultimate rate of convergence is governed by the choices of the forcing terms $\eta_k$ given in (20).

Algorithm 1 (Inexact Newton-Like Backtracking Cayley Transform Method for IEP). Given $c^0 \in \mathbb{R}^n$, $t \in (0, 1)$, and $0 < \theta_{\min} \le \theta_{\max} < 1$, compute the orthonormal eigenvectors $q_1(c^0), \ldots, q_n(c^0)$ and the eigenvalues $\lambda_1(c^0) \le \cdots \le \lambda_n(c^0)$ of $A(c^0)$. Let $Q^0 := [q_1(c^0), \ldots, q_n(c^0)]$ and $\lambda^0 := (\lambda_1(c^0), \ldots, \lambda_n(c^0))^T$. For $k = 0, 1, 2, \ldots$ until convergence, do the following.
Step 1. Form the approximate Jacobian matrix $J^k$ with entries $[J^k]_{ij} = (q_i^k)^T A_j\, q_i^k$ for $i, j = 1, 2, \ldots, n$ (a computational sketch is given after the algorithm listing).
Step 2. Solve $\Delta c^k$ inexactly from the approximate Jacobian equation
$$J^k\, \Delta c^k = -(\lambda^k - \lambda^*) \qquad (18)$$
such that
$$\|J^k\, \Delta c^k + (\lambda^k - \lambda^*)\| \le \eta_k\, \|\lambda^k - \lambda^*\|, \qquad (19)$$
where $\eta_k \in (0, 1)$ is the forcing term given by (20).
Step 3. Set $c^{k+1} = c^k + \Delta c^k$ and form $A(c^{k+1})$.
Step 4. Form the skew-symmetric matrix $Y$ by (10) with $c^{k+1}$.
Step 5. Compute the matrix $Q^{k+1}$ by solving (12).
Step 6. Compute $\lambda^{k+1} = (\lambda_1^{k+1}, \ldots, \lambda_n^{k+1})^T$ by (13).
Step 7. While the sufficient decrease condition (15) fails, do the following.
Step 8. Choose $\theta \in [\theta_{\min}, \theta_{\max}]$ and set $\Delta c^k \leftarrow \theta\, \Delta c^k$, $\eta_k \leftarrow 1 - \theta(1 - \eta_k)$, and $c^{k+1} = c^k + \Delta c^k$. As in Steps 4–6, compute, respectively, the new approximations $Q^{k+1}$ and $\lambda^{k+1}$.
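As a computational aside on Step 1, the following MATLAB sketch forms the approximate Jacobian under the reconstructed formula $[J^k]_{ij} = (q_i^k)^T A_j\, q_i^k$; the helper name is hypothetical.

```matlab
% form_jacobian: approximate Jacobian of Step 1 (reconstructed form).
function J = form_jacobian(Q, Abasis)
    n = size(Q, 1);
    J = zeros(n);
    for j = 1:n
        AQ = Abasis{j} * Q;           % A_j q_i^k for all i at once
        J(:, j) = sum(Q .* AQ, 1)';   % J_ij = (q_i^k)' A_j q_i^k
    end
end
```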

It is worth noting that, if $\{c^k\}$ has no accumulation point, if $\{c^k\}$ has one or more accumulation points and the Jacobian matrix is singular at each of them, or if the vector $\Delta c^k$ computed by solving the Jacobian equation (18) is such that the backtracking loop fails to terminate, then our algorithm fails.

It is clear that, if $\|\lambda^k - \lambda^*\| \to 0$ as $k \to \infty$ and $c^*$ is an accumulation point of $\{c^k\}$, then $f(c^*) = 0$. Let $\{\eta_k\}$ be generated by Algorithm 1 (see Step 7) and define $\xi_k := 1 - t(1 - \eta_k)$ for each $k$. The following lemma is taken from [17, Lemma 2].

Lemma 2 (see [17]). For any $c, \tilde{c} \in \mathbb{R}^n$, one has
$$\|A(c) - A(\tilde{c})\|_F \le \mu\, \|c - \tilde{c}\|,$$
where $\mu := \big(\sum_{i=1}^{n} \|A_i\|_F^2\big)^{1/2}$.

Based on Lemma 2, the following lemma is a straightforward application of [9, Lemma 4].

Lemma 3. Assume that $c^*$ is an accumulation point of $\{c^k\}$. Let the given eigenvalues $\{\lambda_i^*\}$ be distinct. Then, for any $c \in \mathbb{R}^n$,
$$\|\lambda(A(c)) - \lambda(A(c^*))\| \le \sqrt{n}\,\mu\, \|c - c^*\|,$$
where $\lambda(A(c)) := (\lambda_1(c), \ldots, \lambda_n(c))^T$.

As shown in [18, Theorem 2.3], in the case when the given eigenvalues $\{\lambda_i^*\}$ are distinct, the eigenvalues of $A(c)$ are distinct too for any point $c$ in some neighborhood of $c^*$. It follows that the function $f$ defined in (3) is analytic in the same neighborhood. However, if $c$ is not near the solution $c^*$, the analyticity of the function $f$ cannot be guaranteed.

For any symmetric matrix $A \in \mathbb{R}^{n \times n}$, set $\lambda(A) := (\lambda_1(A), \ldots, \lambda_n(A))^T$, where $\lambda_1(A) \le \cdots \le \lambda_n(A)$ are the eigenvalues of $A$. As proved by D. Sun and J. Sun [19, Theorem 4.7], $\lambda(\cdot)$ is a strongly semismooth function. Based on this result, for any $\epsilon > 0$ and any symmetric $A$, there exists $\delta > 0$ sufficiently small such that
$$\|\lambda(B) - \lambda(A) - \lambda'(A; B - A)\| \le \epsilon\, \|B - A\|_F \quad \text{whenever } \|B - A\|_F \le \delta, \qquad (26)$$
where $\lambda'(A; \cdot)$ denotes the directional derivative of $\lambda(\cdot)$ at $A$.

The following lemma says that the backtracking loop in Step 7 of Algorithm 1 terminates after a finite number of steps.

Lemma 4. There exists $\delta > 0$ such that, for any $k$ with $\|\Delta c^k\| \le \delta$, there is $\theta \in [\theta_{\min}, \theta_{\max}]$ satisfying the sufficient decrease condition (15) for the reduced step $\theta\, \Delta c^k$.

Proof. By using the strong semismoothness of the eigenvalues of a real symmetric matrix [19], for any given $\epsilon > 0$, there exists $\delta > 0$ sufficiently small such that (26) holds whenever $\|\Delta c^k\| \le \delta$. Choose $\epsilon$ small enough relative to $t(1 - \eta_k)$ and set the auxiliary constants as in (27) and (28). For any reduced step $\theta\, \Delta c^k$, by the definition given in (28), the linear decrease predicted by the model dominates the semismoothness remainder.
On the other hand, by (19), one gets that the linearized residual is bounded by $\eta_k \|\lambda^k - \lambda^*\|$. This together with (26) and (27) yields the sufficient decrease condition (15), and the proof is completed.

Next, we give sufficient conditions for Algorithm 1 not to break down in the backtracking loop in Step 7.

Lemma 5. If $\eta_k < 1$ and there exists $\beta > 0$ for which
$$\|\Delta c^k\| \le \beta\, \|\lambda^k - \lambda^*\|, \qquad (32)$$
then the backtracking loop terminates.

Proof. For the constant $\beta$ in (32) and the given $t$, choosing $\epsilon$ small enough, there exists $\delta > 0$ sufficiently small such that (26) holds whenever $\|\Delta c^k\| \le \delta$. We choose $\theta$ satisfying (33). It follows that the reduced steps eventually satisfy $\|\Delta c^k\| \le \delta$, which gives that (26) applies along the step. Thus, we have control of the semismoothness remainder. This together with (19) gives the sufficient decrease condition (15). It follows that the backtracking loop terminates. This completes the proof.

The next lemma gives a condition under which (32) is satisfied.

Lemma 6. Assume that the Jacobian matrix $J^k$ is invertible and set $\Gamma_k := \|(J^k)^{-1}\|$. Then, there exists $\beta > 0$ such that (32) holds.

Proof. By using condition (19), one has that
$$\|\Delta c^k\| \le \Gamma_k\, (1 + \eta_k)\, \|\lambda^k - \lambda^*\|.$$
We finish the proof by taking $\beta := \Gamma_k (1 + \eta_k)$.

Lemmas 5 and 6 yield the result below.

Corollary 7. Assume that $\eta_k < 1$ and $J^k$ is invertible. Set $\Gamma_k := \|(J^k)^{-1}\|$ and $\beta := \Gamma_k (1 + \eta_k)$. Then, the backtracking loop in Step 7 of Algorithm 1 terminates with a step satisfying the lower bound (38), for any $\delta > 0$ small enough such that (26) holds whenever $\|\Delta c^k\| \le \delta$.

Proof. Suppose that $\theta^{(m)}$ is the final value determined by the while loop. If $m = 0$, then (38) is trivial. Assume that $m \ge 1$; that is, the body of the while loop has been executed at least once. Denoting the penultimate value by $\theta^{(m-1)}$, it then follows from (33) that the step associated with $\theta^{(m-1)}$ violates the termination test, which yields the lower bound (38). This completes the proof.

Lemma 8. Assume that $c^*$ is an accumulation point of $\{c^k\}$ such that there exists a constant $\beta > 0$, independent of $k$, for which
$$\|\Delta c^k\| \le \beta\, \|\lambda^k - \lambda^*\| \qquad (40)$$
whenever $c^k$ is sufficiently near $c^*$ and $k$ is sufficiently large. Then, $\|\lambda^k - \lambda^*\| \to 0$ as $k \to \infty$.

Proof. Suppose, to the contrary, that $\|\lambda^k - \lambda^*\| \nrightarrow 0$. Then, there exists $\epsilon > 0$ sufficiently small such that there are infinitely many $k$ for which $\|\lambda^k - \lambda^*\| \ge \epsilon$ and (40) holds whenever $c^k \in B(c^*, \delta)$ for $k$ sufficiently large.
Since $c^*$ is an accumulation point of $\{c^k\}$, there exists a subsequence $\{c^{k_j}\}$ such that $c^{k_j} \in B(c^*, \delta)$ for $j$ sufficiently large. Choose indices $k_j$ satisfying $\|\lambda^{k_j} - \lambda^*\| \ge \epsilon$ and $c^{k_j} \in B(c^*, \delta)$, so that (40) applies along this subsequence. Then, by (40) and the sufficient decrease enforced in Step 7, we have, for $j$ sufficiently large, that the merit values $\|\lambda^{k_j} - \lambda^*\|$ are driven below $\epsilon$, which is a contradiction. Therefore, $\|\lambda^k - \lambda^*\| \to 0$ as $k \to \infty$.

Lemma 9. Assume that $c^*$ is an accumulation point of $\{c^k\}$ such that $f'(c^*)$ is invertible. Set $\Gamma := \|f'(c^*)^{-1}\|$ and let $\delta > 0$ be such that $\|J^k - f'(c^*)\| \le 1/(2\Gamma)$ whenever $c^k \in B(c^*, \delta)$, where $J^k$ is the approximate Jacobian of Step 1. Suppose that $c^k \in B(c^*, \delta)$ for $k$ sufficiently large. Then, for all $k$ sufficiently large, $J^k$ is invertible and $\|(J^k)^{-1}\| \le 2\Gamma$.

Proof. By the definitions of $J^k$ and $f'(c^*)$, for all $k$ sufficiently large, the entries of $J^k - f'(c^*)$ are of the form $(q_i^k)^T A_j\, q_i^k - q_i(c^*)^T A_j\, q_i(c^*)$. Then, we have, for all $k$ sufficiently large, a bound on these entries in terms of $\|Q^k - Q(c^*)\|$. Noting that $q_i^k$ is the $i$th column of $Q^k$, then $\|q_i^k\| = 1$ for $i = 1, \ldots, n$. So, for all $k$ sufficiently large, $\|J^k - f'(c^*)\| \le 1/(2\Gamma)$. It follows from the Banach lemma that $J^k$ is invertible and $\|(J^k)^{-1}\| \le 2\Gamma$ for all $k$ sufficiently large. This completes the proof.

Lemmas 6, 8, and 9 yield the result below.

Corollary 10. Assume that $c^*$ is an accumulation point of $\{c^k\}$ such that $f'(c^*)$ is invertible. Set $\Gamma := \|f'(c^*)^{-1}\|$ and let $\delta$ be determined by Lemma 9. Assume that $c^k \in B(c^*, \delta)$ for $k$ sufficiently large. Then, $\|\lambda^k - \lambda^*\| \to 0$ as $k \to \infty$.

Proof. By Lemma 9, we know that $J^k$ is invertible and $\|(J^k)^{-1}\| \le 2\Gamma$ for all $k$ sufficiently large. From the proof of Lemma 6, (40) holds for a constant $\beta$ independent of $k$ (e.g., $\beta = 4\Gamma$, since $\eta_k < 1$). Therefore, $\|\lambda^k - \lambda^*\| \to 0$ as $k \to \infty$ follows from Lemma 8.

Corollary 11. Assume that $c^*$ is an accumulation point of $\{c^k\}$ such that $f'(c^*)$ is invertible. Then, $c^k \to c^*$ as $k \to \infty$. Moreover, for all $k$ sufficiently large, one has $c^{k+1} = c^k + \Delta c^k$; that is, the full inexact Newton step is accepted.

Proof. If $\Delta c^k$ is computed in the backtracking loop, the backtracking terminates with a step satisfying (38). Since $\|\lambda^k - \lambda^*\| \to 0$ by Corollary 10, we have $\|\Delta c^k\| \le \beta\, \|\lambda^k - \lambda^*\| \to 0$ for all $k$ sufficiently large. Thus, since $c^*$ is an accumulation point and the steps tend to zero, the whole sequence $\{c^k\}$ converges to $c^*$. This together with (38) yields that $c^{k+1} = c^k + \Delta c^k$ for all $k$ sufficiently large.

Lemma 12. Assume that $c^*$ is an accumulation point of $\{c^k\}$ such that $f'(c^*)$ is invertible. Then, there exists $\delta > 0$ such that
$$\|Y\|_F \le \kappa\, \big(\|c^{k+1} - c^*\| + \|Q^k - Q^*\|_F\big) \qquad (49)$$
whenever $c^k \in B(c^*, \delta)$ and $k$ is sufficiently large, where $\kappa := \rho^{-1} \max\{\mu,\, 2\|A(c^*)\|\}$ with $\rho := \min_{i \ne j} |\lambda_i^* - \lambda_j^*|$.

Proof. Set $G^k := (Q^k)^T A(c^{k+1})\, Q^k$. Then, we have, for all $k$ sufficiently large,
$$Y_{ij} = \frac{G^k_{ij}}{\lambda_j^* - \lambda_i^*}, \quad i \ne j. \qquad (52)$$
It follows that, for all $k$ sufficiently large, $\|Y\|_F \le \rho^{-1} \|G^k - \Lambda^*\|_F$. Set $E^k := Q^k - Q^*$ and write
$$G^k - \Lambda^* = (Q^k)^T \big(A(c^{k+1}) - A(c^*)\big) Q^k + (E^k)^T A(c^*)\, Q^k + (Q^*)^T A(c^*)\, E^k. \qquad (54)$$
In view of the fact that $\|Q\| = 1$ for any orthogonal matrix $Q$, one has, for all $k$ sufficiently large,
$$\big\|(Q^k)^T \big(A(c^{k+1}) - A(c^*)\big) Q^k\big\|_F \le \mu\, \|c^{k+1} - c^*\| \qquad (56)$$
and
$$\big\|(E^k)^T A(c^*)\, Q^k + (Q^*)^T A(c^*)\, E^k\big\|_F \le 2 \|A(c^*)\|\, \|E^k\|_F. \qquad (57)$$
It follows from (52) that, for all $k$ sufficiently large, $\|Y\|_F$ is controlled by the off-diagonal part of $G^k$, and the off-diagonal entries of $\Lambda^*$ vanish. Combining (56) with (57), one has, for all $k$ sufficiently large,
$$\|G^k - \Lambda^*\|_F \le \mu\, \|c^{k+1} - c^*\| + 2 \|A(c^*)\|\, \|E^k\|_F,$$
which gives the required estimate. Therefore, we obtain from (54) that (49) holds for all $k$ sufficiently large. This completes the proof.

Lemma 13. Assume that $c^*$ is an accumulation point of $\{c^k\}$ such that $f'(c^*)$ is invertible. Then, there exist $\delta > 0$ and $\epsilon > 0$ sufficiently small such that
$$\|c^{k+1} - c^*\| \le \epsilon\, \big(\|c^k - c^*\| + \|Q^k - Q^*\|_F\big) \qquad (61)$$
whenever $c^k \in B(c^*, \delta)$ and $k$ is sufficiently large, where $\delta$ is also small enough that Lemma 9 applies.

Proof. Thanks to Corollary 11, we have $c^k \to c^*$ and $c^{k+1} = c^k + \Delta c^k$ for all $k$ sufficiently large. Thus, the Jacobian equation (18) is equivalent, for all $k$ sufficiently large, to a linear system for the error $c^{k+1} - c^*$. Let $r^k := J^k\, \Delta c^k + (\lambda^k - \lambda^*)$ denote the residual of this approximate Jacobian equation; that is, (19) bounds $\|r^k\|$ by $\eta_k \|\lambda^k - \lambda^*\|$ for all $k$ sufficiently large. This together with $c^{k+1} = c^k + \Delta c^k$ gives, for all $k$ sufficiently large, an expression of $c^{k+1} - c^*$ in terms of $r^k$ and the linearization error. By (19) and Lemmas 9 and 12, we obtain, for all $k$ sufficiently large, a bound on $\|c^{k+1} - c^*\|$ in terms of $\eta_k$, $\|c^k - c^*\|$, and $\|Q^k - Q^*\|_F$. It follows from Lemma 3 that $\|\lambda^k - \lambda^*\|$ is itself controlled by these quantities. Thus, we can choose $\delta$ and $\epsilon$ sufficiently small such that the resulting bound holds whenever $c^k \in B(c^*, \delta)$. Therefore, combining this with the definition of $\kappa$ given in Lemma 12, (61) follows.

Lemma 14 (see [6]). There exist two positive numbers $\delta$ and $\varrho$ such that, for any orthogonal matrix $Q$ with $\|Q - Q^*\| \le \delta$, the skew-symmetric matrix $Y$ defined by $(Q^*)^T Q = \big(I + \frac{1}{2} Y\big)\big(I - \frac{1}{2} Y\big)^{-1}$ satisfies $\|Y\| \le \varrho\, \|Q - Q^*\|$.
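The correspondence underlying Lemma 14 can be sanity-checked numerically, assuming the Cayley parametrization used in Section 2: for a skew-symmetric $Y$ the Cayley transform is orthogonal, and it can be inverted to recover $Y$. This script is only a check of that correspondence, not part of the paper.

```matlab
% Cayley transform sanity check for the setting of Lemma 14.
n = 5;
B = randn(n);
Y = B - B';                            % random skew-symmetric matrix
U = (eye(n) + Y/2) / (eye(n) - Y/2);   % Cayley transform of Y
disp(norm(U'*U - eye(n)))              % orthogonality: ~ machine precision
Yrec = 2 * ((U - eye(n)) / (U + eye(n)));  % inverse Cayley transform
disp(norm(Yrec - Y))                   % recovers Y (I + U is invertible here)
```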

Based on Lemma 14, by using arguments similar to those in the proof of [10, Lemma 5], we can obtain the following lemma. If we write $Y^k$ for the skew-symmetric matrix associated with $Q^k$ through Lemma 14, then there exists $\varrho > 0$ such that $\|Y^k\| \le \varrho\, \|Q^k - Q^*\|$.

Lemma 15. Suppose that the given eigenvalues $\{\lambda_i^*\}$ are distinct and the Jacobian matrix $f'(c^*)$ is invertible. Then, there exist $\delta_1 > 0$ and $\sigma > 0$ such that, for $k$ sufficiently large, if $\|c^k - c^*\| \le \delta_1$ and $\|Q^k - Q^*\|_F \le \delta_1$, then
$$\|Q^{k+1} - Q^*\|_F \le \sigma\, \big(\|c^{k+1} - c^*\| + \|Q^k - Q^*\|_F\big),$$
where $\sigma$ depends on the constant $\varrho$ determined by Lemma 14.

In order to prove our global convergence result for Algorithm 1, we introduce some notation. Let $\delta_1$ and $\sigma$ be given by Lemma 15 and let $\varrho$ be given by Lemma 14. Set the constants $\delta_2$ and the contraction factor as in (69) and (70). Our main global convergence result is as follows.

Theorem 16. Assume that $\{c^k\}$ is generated by Algorithm 1. Suppose that the given eigenvalues $\{\lambda_i^*\}$ are distinct and that $c^*$ is an accumulation point of $\{c^k\}$ such that the Jacobian matrix $f'(c^*)$ is invertible. Then, $\|\lambda^k - \lambda^*\| \to 0$ and $c^k \to c^*$ as $k \to \infty$. Moreover, the convergence is of R-order governed by the choice of the forcing terms $\eta_k$ in (20).

Proof. It follows immediately from Corollaries 10 and 11 that $\|\lambda^k - \lambda^*\| \to 0$ and $c^k \to c^*$ as $k \to \infty$. For the constant given in (69), there exists $K$ sufficiently large such that $\|c^K - c^*\| \le \delta_2$ and $\|Q^K - Q^*\|_F \le \delta_2$. Set $\varepsilon_K := \max\{\|c^K - c^*\|, \|Q^K - Q^*\|_F\}$, where $\delta_2$ and the contraction factor are given in (70) and (69), respectively. Then, $\varepsilon_K \le \delta_2$. We will show that, for all $k \ge K$, the error bounds (71) and (72) hold. Suppose that (71) and (72) hold for some $k \ge K$, and consider the case $k + 1$. Thanks to Lemma 3, we have, for all such $k$, control of $\|\lambda^k - \lambda^*\|$ by $\|c^k - c^*\|$. Then, by using Lemma 13, one has, for all such $k$, a bound on $\|c^{k+1} - c^*\|$, where the last inequality follows from the definition in (69). By Lemma 15, we have, for all such $k$, the corresponding bound on $\|Q^{k+1} - Q^*\|_F$. Therefore, we conclude by induction that (71) and (72) hold for all $k \ge K$. Moreover, we see from (71) that $\{c^k\}$ converges to $c^*$ with the stated R-order. This completes the proof.

If $\{c^k\}$ generated by Algorithm 1 converges to a solution at which the Jacobian matrix is invertible, then the ultimate rate of convergence is governed by the choices of the forcing terms $\eta_k$, as in the local theory of [9].

4. Numerical Examples

In this section, we illustrate the effectiveness of Algorithm 1 in solving the IEP on three examples. The tests were carried out in MATLAB 7.0 running on a PC with an Intel Pentium P6200 2.13 GHz CPU.

The parameters $t$, $\theta_{\min}$, $\theta_{\max}$, and those defining the forcing terms in (20) were fixed throughout the tests. In the while loop, the method is regarded as having failed if 80 iterations of the backtracking loop do not produce the sufficient decrease in the merit function.

Linear systems (14) and (18) are all solved iteratively by the QMR method [20] using the MATLAB qmr function. In order to guarantee the orthogonality of $Q^{k+1}$, system (14) is solved up to machine precision eps ($\approx 2.2 \times 10^{-16}$). The inner stopping tolerance for the Jacobian equation (18) is given by (20). The stopping criterion of the outer iteration in our algorithm is given by (76).
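The two kinds of inner solves can be sketched as follows with MATLAB's stock qmr routine [20]: the systems (14) are driven to machine precision to protect the orthogonality of $Q^{k+1}$, while the Jacobian solve is indicated only schematically since (18) follows our reconstruction above. The data here are synthetic stand-ins.

```matlab
% Inner solves by QMR, mirroring the tolerances described above.
n = 6;
B = randn(n);  Y = (B - B')/10;        % small skew-symmetric Y (stand-in)
Qk = eye(n);                           % stand-in for the current Q^k
W  = (eye(n) - Y/2) * Qk';             % right-hand sides of (14)
M  = eye(n) + Y/2;                     % coefficient matrix of (14)
Z  = zeros(n);
for j = 1:n                            % (14): solve to machine precision
    [Z(:,j), flag] = qmr(M, W(:,j), eps, 200);
end
% The Jacobian system (18) would be solved only up to the forcing term:
% [dc, flag] = qmr(J, rhs, eta_k, 200);
```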

Example 1. This is an inverse Toeplitz eigenvalue problem (see [2] for more details on this inverse problem) with distinct eigenvalues; the prescribed eigenvalues $\{\lambda_i^*\}$ and a known solution $c^*$ are fixed for the test. In Table 1, we report our numerical results for various starting points, where $c^0$, errs., ite., and $\bar{c}$ stand for the starting point, the error value of the left-hand side of (76) for the last three iterates of the algorithm, the number of outer iterations, and the accumulation point corresponding to the starting point, respectively.
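A standard choice of basis for the inverse Toeplitz eigenvalue problem of [2] is the family of symmetric Toeplitz matrices $A_k$ with ones on the $(k-1)$st sub- and superdiagonals; the sketch below builds this basis and the corresponding $A(c)$, as an assumption consistent with [2] rather than a verbatim reproduction of the example data.

```matlab
% Standard symmetric Toeplitz basis for the ITEP (assumed form).
n = 5;
Abasis = cell(n, 1);
for k = 1:n
    v = zeros(n, 1);  v(k) = 1;        % first column of the k-th basis matrix
    Abasis{k} = toeplitz(v);           % ones on the (k-1)st off-diagonals
end
c  = randn(n, 1);                      % a trial parameter vector
Ac = zeros(n);
for k = 1:n
    Ac = Ac + c(k) * Abasis{k};        % A(c) is symmetric Toeplitz
end
lam = sort(eig(Ac));                   % spectrum to be matched by the IEP
```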

Example 2. This is a Toeplitz-plus-Hankel inverse eigenvalue problem (see [1] for more details on this inverse problem) with distinct eigenvalues. The basis matrices $\{A_i\}$, the given real eigenvalues $\{\lambda_i^*\}$, and a solution $c^*$ are fixed for the test. In Table 2, we report our numerical results for various starting points.

Example 3 (see [10]). In this example, the basis matrices $\{A_i\}$ are defined from the columns of the identity matrix, where $e_i$ denotes the $i$th column of the identity matrix. The given real eigenvalues $\{\lambda_i^*\}$ and a solution $c^*$ are prescribed as in [10]. Table 3 shows the numerical results for various starting points, among which three accumulation points of this problem are observed.

We observe from Tables 1–3 that our algorithm is convergent for different starting points. We also see that our algorithm converges to a solution of the IEP, which is not necessarily equal to the original one. An interesting question is to consider the performance of the algorithm when the starting point $c^0$ is chosen as a random vector, which needs future study.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The second author’s work was supported in part by the National Natural Science Foundation of China (Grant no. 61170109).