Abstract

We propose a generalized inexact Newton method for solving inverse eigenvalue problems, which includes the generalized Newton method as a special case. Under a nonsingularity assumption on the Jacobian matrices at the solution $c^*$, we provide a convergence analysis covering both the distinct and the multiple eigenvalue cases and prove the quadratic convergence property. Moreover, numerical tests are given in the last section, together with comparisons with the generalized Newton method.

1. Introduction

Inverse eigenvalue problems (IEPs) arise in a remarkable variety of applications, such as geophysics, control design, system identification, exploration and remote sensing, principal component analysis, molecular spectroscopy, particle physics, structural analysis, circuit theory, and applied mechanics. One may refer to [1–14] for the applications, mathematical theory, and algorithms of IEPs. Depending on the application, inverse eigenvalue problems appear in many forms, for example, additive inverse eigenvalue problems, multiplicative inverse eigenvalue problems, Jacobi matrix inverse eigenvalue problems, nonnegative matrix inverse eigenvalue problems, and Toeplitz matrix inverse eigenvalue problems [3, 15, 16].

Let $\mathcal{S}(n)$ be the linear space of real symmetric matrices of size $n\times n$. Let $A:\mathbb{R}^n\to\mathcal{S}(n)$ be continuously differentiable. Given real numbers $\lambda_1^*,\lambda_2^*,\dots,\lambda_n^*$, which are arranged in the decreasing order $\lambda_1^*\ge\lambda_2^*\ge\cdots\ge\lambda_n^*$, the IEP considered here is to find a vector $c^*\in\mathbb{R}^n$ such that
$$\lambda_i(A(c^*))=\lambda_i^*,\quad i=1,2,\dots,n, \tag{1}$$
where $\lambda_1(A(c))\ge\cdots\ge\lambda_n(A(c))$ denote the eigenvalues of $A(c)$. The vector $c^*$ is called a solution of the IEP (1). A typical choice for $A$ is the affine family
$$A(c)=A_0+\sum_{i=1}^{n}c_iA_i, \tag{2}$$
where $A_0,A_1,\dots,A_n\in\mathcal{S}(n)$ are given, which has been studied extensively (cf. [15, 17–20]).
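To make the setting concrete, here is a minimal MATLAB sketch (our own illustration; the function name eval_affine_iep and the cell-array storage of the basis matrices are assumptions, not from the paper) that evaluates the affine family (2) and the ordered eigenvalues appearing in (1):

```matlab
% Evaluate A(c) = A0 + sum_i c_i*A_i for the affine family (2) and return
% its eigenvalues sorted in decreasing order, as required by (1).
% A0: n-by-n symmetric matrix; Abasis: 1-by-n cell array of symmetric matrices.
function [Ac, lam] = eval_affine_iep(A0, Abasis, c)
    Ac = A0;
    for i = 1:numel(c)
        Ac = Ac + c(i) * Abasis{i};
    end
    lam = sort(eig(Ac), 'descend');   % lambda_1(c) >= ... >= lambda_n(c)
end
```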

Define the function $f:\mathbb{R}^n\to\mathbb{R}^n$ by
$$f(c):=\big(\lambda_1(A(c))-\lambda_1^*,\dots,\lambda_n(A(c))-\lambda_n^*\big)^T. \tag{3}$$
Then solving the IEP (1) is equivalent to solving the equation $f(c)=0$ on $\mathbb{R}^n$. Based on this equivalence, Newton's method can be applied to the IEP, and it converges quadratically [15, 16, 21]. However, the given eigenvalues are usually assumed to be distinct in these works. In the case when multiple eigenvalues are present, solving the IEP becomes much more complicated because the eigenvalue function defined by (3) is, in general, not differentiable around the solution $c^*$, and the eigenvectors corresponding to the multiple eigenvalues cannot in general be defined as continuous functions of $c$ at $c^*$; see [15]. Therefore, either Newton's method or its theoretical analysis may get into trouble. In [15], the authors analyzed the convergence properties of Newton's method in the multiple eigenvalue case. However, the nonsingularity assumption needed for convergence in [15] requires that the inverses of the Jacobian and/or approximate Jacobian matrices at all iterations be bounded, with a bound independent of the initial point $c^0$. To ensure this nonsingularity assumption for Newton's method, D. Sun and J. Sun introduced in [22] a generalized Newton method and, by using the tool of the strong semismoothness of the eigenvalue function for symmetric matrices, developed a new approach to the convergence issue for the case with multiple eigenvalues. They presented there a nonsingularity assumption in terms of the Jacobian matrices evaluated at the solution $c^*$ to establish a general convergence result of Newton's method. Note that in each Newton iteration (outer iteration) of the generalized Newton method, we need to solve the Jacobian equation exactly (inner iteration). When the problem size $n$ is large, this inversion is costly, and one may employ iterative methods to solve the equation. Although iterative methods can reduce the complexity, they may oversolve the systems in the sense that the last few inner iterations before convergence may not improve the convergence of the outer Newton iteration. The generalized inexact Newton method is a method that stops the inner iteration before convergence. By choosing a suitable stopping criterion, we can reduce the total cost of the whole inner-outer iteration.

In this paper, we give a generalized inexact Newton method for solving the IEP (1), which reduces to the generalized Newton method as a special case. Motivated by Sun and Sun's idea of strong semismoothness, we give a convergence analysis of this method. By choosing a suitable stopping criterion, we show that the generalized inexact Newton method converges superlinearly. It should be noted that the analysis of the present paper is “distinction free.” Although the nonsingularity assumption is stated in terms of the Jacobian matrices evaluated at the solution $c^*$, the inverses of the approximate Jacobian matrices at all iterations are ensured to be bounded, and moreover the upper bound is independent of the initial point $c^0$. A numerical example is presented in the last section to illustrate our results, and comparisons with the generalized Newton method are made.

2. Semismoothness and Relative Generalized Jacobian

Let $F:\mathbb{R}^m\to\mathbb{R}^n$ be a locally Lipschitz continuous function. Then, according to Rademacher's theorem, $F$ is differentiable almost everywhere. Let $D_F$ be the set of differentiable points of $F$ and let $F'(x)$ be the Jacobian of $F$ at $x$ whenever it exists. Denote
$$\partial_BF(x):=\Big\{\lim_{k\to\infty}F'(x^k):x^k\to x,\ x^k\in D_F\Big\}. \tag{4}$$
Then Clarke's generalized Jacobian [23] is
$$\partial F(x):=\mathrm{conv}\,\partial_BF(x), \tag{5}$$
where "conv" stands for the convex hull in the usual sense of convex analysis [24]. We are now ready to give the following definition of semismoothness. For the original concept of semismoothness for functionals and vector-valued functions, one may refer to [25, 26].

Definition 1. Suppose that $F:\mathbb{R}^m\to\mathbb{R}^n$ is a locally Lipschitz continuous function. $F$ is said to be semismooth at $x$ if $F$ is directionally differentiable at $x$ and, for any $V\in\partial F(x+\Delta x)$,
$$F(x+\Delta x)-F(x)-V\Delta x=o(\|\Delta x\|). \tag{6}$$
$F$ is said to be $p$-order ($0<p<\infty$) semismooth at $x$ if $F$ is semismooth at $x$ and
$$F(x+\Delta x)-F(x)-V\Delta x=O(\|\Delta x\|^{1+p}). \tag{7}$$
In particular, $F$ is called strongly semismooth at $x$ if $F$ is $1$-order semismooth at $x$. A function $F$ is said to be a (strongly) semismooth function if it is (strongly) semismooth everywhere on $\mathbb{R}^m$.
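As a simple illustration (our own example, not taken from [22, 25, 26]), consider $F(x)=|x|$ at $x=0$. For every $\Delta x\neq0$, $F$ is differentiable at $\Delta x$ with $\partial F(\Delta x)=\{\operatorname{sign}(\Delta x)\}$, so

```latex
F(0+\Delta x)-F(0)-V\,\Delta x
  = |\Delta x| - \operatorname{sign}(\Delta x)\,\Delta x
  = 0
  = O(|\Delta x|^{2}),
\qquad V\in\partial F(\Delta x),\ \Delta x\neq 0,
```

and hence the $1$-order condition (7) holds trivially; that is, $|x|$ is strongly semismooth at $0$.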

Now, let us consider the composite nonsmooth function
$$\theta(x):=\Phi(g(x)), \tag{8}$$
where $\Phi:\mathbb{R}^l\to\mathbb{R}^n$ is nonsmooth but of special structure and $g:\mathbb{R}^m\to\mathbb{R}^l$ is continuously differentiable. It should be noted that neither $\partial\theta(x)$ nor $\partial_B\theta(x)$ is easy to compute, even if $\partial\Phi(g(x))$, $\partial_B\Phi(g(x))$, and $g'(x)$ are available. To circumvent the difficulty in computing $\partial\theta(x)$, Potra et al. [27] introduced the concept of the generalized Jacobian
$$\partial\Phi(g(x))\,g'(x):=\{Vg'(x):V\in\partial\Phi(g(x))\}. \tag{9}$$
Furthermore, in order to weaken the nonsingularity assumption on the generalized Jacobians, D. Sun and J. Sun also introduced in [22] the following concepts of relative generalized Jacobians.

Definition 2. Let $S$ be a subset of $\mathbb{R}^m$. The $S$-relative generalized Jacobians $\partial_{B|S}F(x)$ and $\partial_{|S}F(x)$ of $F$ at $x$ are defined by
$$\partial_{B|S}F(x):=\Big\{\lim_{k\to\infty}F'(x^k):x^k\to x,\ x^k\in D_F\cap S\Big\},\qquad \partial_{|S}F(x):=\mathrm{conv}\,\partial_{B|S}F(x). \tag{10}$$

Lemma 3 presents the properties of the relative generalized Jacobians, which were proved in [22]; we omit the proofs here.

Lemma 3. Let $F$ be Lipschitz continuous near $x$. Then
(i) $\partial_{B|S}F(x)$ and $\partial_{|S}F(x)$ are compact subsets of $\partial_BF(x)$ and $\partial F(x)$, respectively;
(ii) $\partial_{B|S}F(\cdot)$ and $\partial_{|S}F(\cdot)$ are upper semicontinuous at $x$.

3. The Generalized Inexact Newton Method

Let $A:\mathbb{R}^n\to\mathcal{S}(n)$ be continuously differentiable and let $c\in\mathbb{R}^n$. In what follows, we suppose that $\lambda_1(c)\ge\lambda_2(c)\ge\cdots\ge\lambda_n(c)$ are the eigenvalues of the matrix $A(c)$ and write
$$\lambda(c):=(\lambda_1(c),\lambda_2(c),\dots,\lambda_n(c))^T. \tag{11}$$
Let us define
$$f(c):=\lambda(c)-\lambda^*,\quad\text{where }\lambda^*:=(\lambda_1^*,\dots,\lambda_n^*)^T. \tag{12}$$
Recall that the function $f$ is defined by (3); this means that $f$ is a composite nonsmooth function of the form (8), with the eigenvalue function playing the role of $\Phi$ and $A(\cdot)$ the role of $g$. Then the concept of the generalized Jacobian (9) can be applied to $f$ and we get
$$\partial\lambda(A(c))\,A'(c)=\{VA'(c):V\in\partial\lambda(A(c))\}; \tag{13}$$
see [22, Proposition 5.1]. Hence, according to this, the generalized Newton method for solving the IEP can be described as follows; see [22].

Algorithm 4 (the generalized Newton method). (1) For $k=0,1,2,\dots$ until convergence, do the following.
(a) Compute a $V_k\in\partial_B\lambda(A(c^k))$.
(b) Form $J_k:=V_kA'(c^k)$ according to (13).
(c) Solve $c^{k+1}$ from the equation
$$J_k(c^{k+1}-c^k)=-f(c^k). \tag{14}$$
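For the affine family (2), where $\partial A(c)/\partial c_j=A_j$, step (b) amounts to assembling a matrix with entries $q_i(c^k)^TA_jq_i(c^k)$ from an orthonormal set of eigenvectors of $A(c^k)$. The following MATLAB sketch (our own illustration with hypothetical names, not the authors' code) forms such a matrix:

```matlab
% Form an approximate Jacobian J with entries J(i,j) = q_i' * A_j * q_i,
% where the columns q_i of Q are orthonormal eigenvectors of A(c) ordered
% consistently with the decreasing eigenvalue ordering in (11).
function [J, lam, Q] = form_jacobian(A0, Abasis, c)
    n = numel(c);
    Ac = A0;
    for i = 1:n
        Ac = Ac + c(i) * Abasis{i};
    end
    [Q, D] = eig(Ac);
    [lam, p] = sort(diag(D), 'descend');
    Q = Q(:, p);                          % reorder eigenvectors to match lam
    J = zeros(n);
    for i = 1:n
        for j = 1:n
            J(i, j) = Q(:, i)' * Abasis{j} * Q(:, i);
        end
    end
end
```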
The generalized Newton method converges quadratically; see, for instance, [22, Theorem 5.3]. In deriving the quadratic convergence of this method, it was assumed that system (14) is solved exactly. Usually, one solves these systems by iterative methods, in particular when $n$ is large. However, iterative methods may bring an oversolving problem in the sense that the last few inner iterations before convergence are usually insignificant as far as the convergence of the outer iteration is concerned. This oversolving of the inner iterations wastes time and does not improve the efficiency of the whole method. The generalized inexact Newton method is derived precisely to avoid the oversolving problem in the inner iterations. Instead of solving (14) exactly, one solves it iteratively until a reasonable tolerance is reached. The tolerance is chosen carefully: small enough to guarantee the convergence of the outer iteration, but large enough to reduce the oversolving problem of the inner iterations. We find that the tolerance has to be of order $O(\|f(c^k)\|^{1+\tau})$, with $0<\tau\le1$, in order to have a convergence rate of $1+\tau$ for the outer iteration. We now give the generalized inexact Newton method for solving the IEP. We will prove in Section 4 that the convergence rate of the method is equal to $1+\tau$.

Algorithm 5 (the generalized inexact Newton method). (1) For $k=0,1,2,\dots$ until convergence, do the following.
(a) Compute a $V_k\in\partial_B\lambda(A(c^k))$.
(b) Form $J_k:=V_kA'(c^k)$ according to (13).
(c) Solve $c^{k+1}$ inexactly from the equation
$$J_k(c^{k+1}-c^k)=-f(c^k)+r^k \tag{15}$$
until the residual $r^k$ satisfies
$$\|r^k\|\le\beta\|f(c^k)\|^{1+\tau}, \tag{16}$$
where $\beta>0$ and $0<\tau\le1$ are given constants.
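The outer-inner structure of Algorithm 5 can be sketched in MATLAB as follows (a minimal sketch under our own naming assumptions: form_jacobian is the hypothetical helper above, and beta, tau, tol, maxit are user-supplied parameters; this is not the authors' code). Since MATLAB's qmr uses a tolerance relative to the right-hand side, dividing (16) by $\|f(c^k)\|$ gives the relative tolerance $\beta\|f(c^k)\|^{\tau}$; solving the inner system to machine precision instead recovers Algorithm 4.

```matlab
% Generalized inexact Newton iteration for the IEP (sketch of Algorithm 5).
% The inner stopping rule (16) is ||r^k|| <= beta*||f(c^k)||^(1+tau).
function c = gen_inexact_newton(A0, Abasis, lamstar, c0, beta, tau, tol, maxit)
    c = c0;
    for k = 1:maxit
        [J, lam] = form_jacobian(A0, Abasis, c);
        f = lam - lamstar;                 % f(c^k), cf. (3) and (12)
        if norm(f) <= tol, break; end
        rtol = beta * norm(f)^tau;         % relative form of (16)
        [dc, flag] = qmr(J, -f, min(rtol, 0.9), 1000);  % inexact solve of (15)
        c = c + dc;                        % c^{k+1} = c^k + dc
    end
end
```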

4. Convergence Analysis

In this section, we carry out a convergence analysis of Algorithm 5. Let $\lambda_1^*,\lambda_2^*,\dots,\lambda_n^*$ be given with
$$\lambda_1^*\ge\lambda_2^*\ge\cdots\ge\lambda_n^*. \tag{17}$$
Let $c^*$ be the solution of the IEP. Let $\lambda(\cdot)$ be defined by (11) and
$$S:=\{c\in\mathbb{R}^n:\lambda_1(c)>\lambda_2(c)>\cdots>\lambda_n(c)\}. \tag{18}$$
Then, for any $c\in S$, $\lambda(\cdot)$ is continuously differentiable at $c$ and, moreover,
$$[\lambda'(c)]_{ij}=q_i(c)^T\,\frac{\partial A(c)}{\partial c_j}\,q_i(c),\quad i,j=1,2,\dots,n, \tag{19}$$
where $q_1(c),\dots,q_n(c)$ are orthonormal eigenvectors of $A(c)$ corresponding to $\lambda_1(c),\dots,\lambda_n(c)$. Thus, according to Definition 2, we obtain the following relative generalized Jacobian of $\lambda(\cdot)$ at $c^*$:
$$\partial_{B|S}\lambda(c^*)=\Big\{\lim_{k\to\infty}\lambda'(c^k):c^k\to c^*,\ c^k\in S\Big\}. \tag{20}$$

We first present the following two lemmas, which are important for the proof of the main theorem. Lemma 6 describes the continuity property of the eigenvalues and can be found in many papers; see, for example, [15–18, 28]. Lemma 7 is a crucial result proved in [22].

Lemma 6. Suppose that there exists $c^*\in\mathbb{R}^n$ such that the matrix $A(c^*)$ has eigenvalues given by (17). Suppose that $A(\cdot)$ is Lipschitz continuous around $c^*$. Then there exist positive numbers $\delta_0$ and $L_0$ such that, for each $c\in B(c^*,\delta_0)$, the following assertion holds:
$$\|\lambda(c)-\lambda(c^*)\|\le L_0\|c-c^*\|.$$
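The bound in Lemma 6 is, in essence, Weyl's perturbation theorem for symmetric matrices combined with the local Lipschitz continuity of $A(\cdot)$ (a standard argument; the constants here are ours):

```latex
\max_{1\le i\le n}\bigl|\lambda_i(A(c)) - \lambda_i(A(c'))\bigr|
  \;\le\; \|A(c) - A(c')\|_2
  \;\le\; L_A\,\|c - c'\|
  \qquad \text{for all } c,\,c' \text{ near } c^* ,
```

where $L_A$ is a local Lipschitz constant of $A(\cdot)$; passing to the Euclidean norm of the eigenvalue vector yields the constant $L_0$ of Lemma 6.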

Lemma 7. The eigenvalue function $\lambda(\cdot)$ is a strongly semismooth function.

With Lemmas 6 and 7, we are in a position to prove the superlinear convergence of Algorithm 5. In order to ensure the superlinear convergence, one may want to assume that all $J_k$ are nonsingular. However, as noted in [15], it is in general not possible to choose the eigenvectors such that $J_k$ is nonsingular when multiple eigenvalues are present. Hence we assume here that all matrices $V\in\partial_{B|S}\lambda(c^*)$ are nonsingular.

Theorem 8. Suppose that there exists $c^*\in\mathbb{R}^n$ such that the matrix $A(c^*)$ has eigenvalues given by (17). Let $\partial_{B|S}\lambda(c^*)$ be defined by (20). If (i) for each $k$, $J_k\in\partial_{B|S}\lambda(c^k)$, (ii) all $V\in\partial_{B|S}\lambda(c^*)$ are nonsingular, and (iii) $A(\cdot)$ is Lipschitz continuous around $c^*$, then there exists a positive number $\delta$ such that, for each $c^0\in B(c^*,\delta)$, the sequence $\{c^k\}$ generated by Algorithm 5 converges to $c^*$ with
$$\|c^{k+1}-c^*\|\le\kappa\|c^k-c^*\|^{1+\tau}. \tag{21}$$
Here $\kappa$ is a positive constant.

Proof. Since $A(\cdot)$ is Lipschitz continuous around $c^*$, it follows from Lemma 6 that $\lambda(\cdot)$ is also Lipschitz continuous around $c^*$; that is, there exist positive numbers $\delta_0$ and $L_0$ such that
$$\|\lambda(c)-\lambda(c^*)\|\le L_0\|c-c^*\|\quad\text{for all }c\in B(c^*,\delta_0). \tag{22}$$
On the other hand, note that all $V\in\partial_{B|S}\lambda(c^*)$ are nonsingular. Thus, thanks to Lemma 3, there exist positive numbers $\delta_1$ and $M$ such that, for each $c\in B(c^*,\delta_1)$, all $J\in\partial_{B|S}\lambda(c)$ are nonsingular and
$$\|J^{-1}\|\le M. \tag{23}$$
Since, by Lemma 7, $\lambda(\cdot)$ is a strongly semismooth function, there exist positive numbers $\delta_2$ and $L$ such that, for all $c\in B(c^*,\delta_2)$ and all $J\in\partial_{B|S}\lambda(c)$,
$$\|f(c)-f(c^*)-J(c-c^*)\|\le L\|c-c^*\|^2. \tag{24}$$
Let
$$\kappa:=M\big(L+\beta L_0^{1+\tau}\big). \tag{25}$$
Take $\delta\le\min\{\delta_0,\delta_1,\delta_2,1\}$ such that
$$\kappa\delta^{\tau}<1. \tag{26}$$
Below we will show that $\delta$ and $\kappa$ are as desired. To this end, let $c^k$ be the $k$th iterate. We assert that
$$\|c^{k+1}-c^*\|\le\kappa\|c^k-c^*\|^{1+\tau}\quad\text{and}\quad c^{k+1}\in B(c^*,\delta). \tag{27}$$
Granting this and by the assumption that $c^0\in B(c^*,\delta)$, we can complete the proof of the theorem. To give the proof of assertion (27), we suppose that $c^k\in B(c^*,\delta)$. Note that $f(c^*)=0$. Then, by (23) and (24), we obtain that, for $J_k\in\partial_{B|S}\lambda(c^k)$,
$$\|c^k-c^*-J_k^{-1}f(c^k)\|\le ML\|c^k-c^*\|^2. \tag{28}$$
On the other hand, noting that $c^k\in B(c^*,\delta)$, one has by Lemma 6 and (16) that
$$\|r^k\|\le\beta\|f(c^k)\|^{1+\tau}=\beta\|\lambda(c^k)-\lambda(c^*)\|^{1+\tau}\le\beta L_0^{1+\tau}\|c^k-c^*\|^{1+\tau}. \tag{29}$$
Since $c^{k+1}$ is generated by Algorithm 5, we derive from (15) that
$$c^{k+1}-c^*=\big(c^k-c^*-J_k^{-1}f(c^k)\big)+J_k^{-1}r^k. \tag{30}$$
Combining (28)–(30), we obtain
$$\|c^{k+1}-c^*\|\le ML\|c^k-c^*\|^2+M\beta L_0^{1+\tau}\|c^k-c^*\|^{1+\tau}\le\kappa\|c^k-c^*\|^{1+\tau},$$
where the last inequality holds because $\tau\le1$ and $\|c^k-c^*\|\le\delta\le1$. Furthermore, by the fact that $\kappa\delta^{\tau}<1$, we get
$$\|c^{k+1}-c^*\|\le\kappa\delta^{\tau}\|c^k-c^*\|<\|c^k-c^*\|\le\delta,$$
which together with the assumption $c^k\in B(c^*,\delta)$ implies $c^{k+1}\in B(c^*,\delta)$. Therefore, we complete the proof of the assertion and hence of the whole theorem.

In the special case when $r^k=0$ for all $k$, Algorithm 5 reduces to Algorithm 4. Thus, applying Theorem 8 with $\tau=1$, we have the following result for the generalized Newton method, which coincides with [22, Theorem 5.4].

Corollary 9. Suppose that there exists $c^*\in\mathbb{R}^n$ such that the matrix $A(c^*)$ has eigenvalues given by (17). Let $\partial_{B|S}\lambda(c^*)$ be defined by (20). If (i) for each $k$, $J_k\in\partial_{B|S}\lambda(c^k)$, (ii) all $V\in\partial_{B|S}\lambda(c^*)$ are nonsingular, and (iii) $A(\cdot)$ is Lipschitz continuous around $c^*$, then there exists a positive number $\delta$ such that, for each $c^0\in B(c^*,\delta)$, the sequence $\{c^k\}$ generated by Algorithm 4 converges quadratically to $c^*$.

5. A Numerical Example

In this section, we compare the numerical performance of Algorithm 5 with that of Algorithm 4. Our aim is to illustrate, for inverse eigenvalue problems with multiple eigenvalues, the advantage of the generalized inexact Newton method over the generalized Newton method in terms of reducing the oversolving problem and the overall computational cost. The tests were carried out in MATLAB 7.0 running on a Genuine Intel(R) PC with a 1.6 GHz CPU.

We consider the typical choice (2) and use Toeplitz matrices as our $A_i$: we take $A_0=O$, $A_1=I$, and, for $k=2,\dots,n$, $A_k$ the symmetric Toeplitz matrix with ones on the $(k-1)$th super- and subdiagonals and zeros elsewhere. Thus $A(c)$ is the symmetric Toeplitz matrix with first column equal to $c=(c_1,\dots,c_n)^T$. This numerical example has been studied extensively; see, for instance, [12, 14, 17, 18, 20].
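In MATLAB, this basis and the map $c\mapsto A(c)$ are conveniently generated with the built-in toeplitz function (a small sketch of our own, consistent with the basis just described):

```matlab
% Symmetric Toeplitz test problem: A(c) = toeplitz(c) = sum_k c_k*A_k,
% where A_1 = I and A_k (k >= 2) has ones on the (k-1)th super-/subdiagonals.
n = 8;
Abasis = cell(1, n);
for k = 1:n
    e = zeros(n, 1); e(k) = 1;
    Abasis{k} = toeplitz(e);     % basis matrix A_k
end
c  = randn(n, 1);
Ac = toeplitz(c);                % equals A0 + sum_k c_k*A_k with A0 = 0
```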

In the tests, we tried Algorithms 4 and 5 on ten $n$-by-$n$ matrices. For each matrix, we first generate a vector $c^*$ such that there exist some integers $i\ne j$ with $\lambda_i(c^*)=\lambda_j(c^*)$, where $\lambda_1(c^*)\ge\cdots\ge\lambda_n(c^*)$ are the eigenvalues of the matrix $A(c^*)$. Set $\lambda^*:=\lambda(c^*)$. Then we choose $\lambda^*$ as the prescribed eigenvalues. Since both Algorithms 4 and 5 are only locally convergent, the initial point $c^0$ is formed by chopping the components of $c^*$ to four decimal places. For both algorithms, the same stopping tolerance is used for the outer (Newton) iterations. The inner systems (14) and (15) are all solved by the QMR method [29] via the MATLAB QMR function, where the maximal number of iterations is set to be 1000. Also, the initial guess for the Jacobian equation in the $k$th outer iteration is set to be the solution obtained at the $(k-1)$th outer iteration. The inner stopping tolerance for (15) is given by (16), while (14) in Algorithm 4 is solved up to machine precision eps (which is $2.2\times10^{-16}$).
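For illustration, the preconditioned inner solve could look as follows (a hypothetical snippet: luinc was the incomplete LU routine current in MATLAB 7.0 and has since been replaced by ilu; J, f, beta, tau are as in the earlier sketches, and dc0 denotes the correction from the previous outer iteration):

```matlab
% Inner solve of the Jacobian equation by preconditioned QMR, cf. (15)-(16).
Js = sparse(J);
[L, U] = luinc(Js, 0.01);            % incomplete LU with drop tolerance 0.01 ("P")
rtol = beta * norm(f)^tau;           % relative tolerance enforcing (16)
[dc, flag] = qmr(Js, -f, min(rtol, 0.9), 1000, L, U, dc0);
```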

Comparisons of Algorithm 5 with Algorithm 4 are illustrated in Table 1. In this table, we give the total number of outer iterations averaged over the ten tests and the average total number of inner iterations required for solving the Jacobian equations. Here "I" means no preconditioner, while "P" means that the MATLAB incomplete LU factorization (MILU) is adopted as the preconditioner, that is, LUINC(A, [drop-tolerance, 1, 1, 1]), where the drop tolerance is set to be 0.01. We can see from this table that the number of outer iterations is small for Algorithm 4 and also for Algorithm 5 when $\tau$ is close to 1. This confirms the theoretical convergence rates of the two algorithms. In terms of the number of inner iterations, we see that Algorithm 5 requires fewer inner iterations than Algorithm 4. Thus, we can conclude that Algorithm 5 with $1+\tau$ around 1.5 is much better than Algorithm 4. On the other hand, we also note that the MILU preconditioner is quite effective for the Jacobian equations.

6. Concluding Remarks

In this paper, we have proposed a generalized inexact Newton method for the inverse eigenvalue problem and shown that it converges superlinearly. This inexact version alleviates the oversolving problem of the generalized Newton method and gives a good tradeoff between the inner and outer iterations. We have also presented numerical experiments to illustrate our results.

So far, similar approaches have not been extended to inexact Newton-like methods; this remains an interesting topic for future work.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This project was supported in part by the National Natural Science Foundation of China (Grant no. 11101379).