
Abstract and Applied Analysis

Volume 2014 (2014), Article ID 721346, 6 pages

http://dx.doi.org/10.1155/2014/721346

## A Generalized Inexact Newton Method for Inverse Eigenvalue Problems

Weiping Shen

Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China

Received 5 December 2013; Accepted 7 January 2014; Published 19 February 2014

Academic Editor: Chong Li

Copyright © 2014 Weiping Shen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We propose a generalized inexact Newton method for solving inverse eigenvalue problems, which includes the generalized Newton method as a special case. Under a nonsingularity assumption on the Jacobian matrices at the solution **c***, a convergence analysis covering both the distinct and multiple eigenvalue cases is provided and the quadratic convergence property is proved. Moreover, numerical tests are given in the last section, and comparisons with the generalized Newton method are made.

#### 1. Introduction

Inverse eigenvalue problems (IEPs) arise in a remarkable variety of applications such as geophysics, control design, system identification, exploration and remote sensing, principal component analysis, molecular spectroscopy, particle physics, structural analysis, circuit theory, and applied mechanics. One may refer to [1–14] for the applications, mathematical theory, and algorithms of IEPs. Based on different applications, inverse eigenvalue problems appear in many forms, for example, additive inverse eigenvalue problems, multiplicative inverse eigenvalue problems, Jacobian matrix inverse eigenvalue problems, nonnegative matrix inverse eigenvalue problems, and Toeplitz matrix inverse eigenvalue problems [3, 15, 16].

Let $\mathcal{S}(n)$ be the linear space of symmetric matrices of size $n$. Let $A\colon \mathbb{R}^n \to \mathcal{S}(n)$ be continuously differentiable. Given real numbers $\lambda_1^* \geq \lambda_2^* \geq \cdots \geq \lambda_n^*$, arranged in decreasing order, the IEP considered here is to find a vector $c \in \mathbb{R}^n$ such that the eigenvalues of $A(c)$ are exactly $\lambda_1^*, \ldots, \lambda_n^*$. The vector $c^*$ is called a solution of the IEP (1). A typical choice for $A$ is the affine family $A(c) = A_0 + \sum_{i=1}^{n} c_i A_i$ with given matrices $A_0, A_1, \ldots, A_n \in \mathcal{S}(n)$, which has been studied extensively (cf. [15, 17–20]).
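As a concrete illustration of this formulation, the following sketch builds the affine family $A(c) = A_0 + \sum_i c_i A_i$ (the typical choice above) and evaluates its eigenvalues in decreasing order. The diagonal basis matrices in the demo are hypothetical toy data, not taken from the paper:

```python
import numpy as np

def build_A(c, A0, basis):
    """Affine matrix family A(c) = A0 + sum_i c_i * A_i."""
    A = A0.astype(float).copy()
    for ci, Ai in zip(c, basis):
        A = A + ci * Ai
    return A

def eigenvalues_desc(A):
    """Eigenvalues of a symmetric matrix, sorted in decreasing order,
    matching the ordering convention used for the IEP."""
    return np.sort(np.linalg.eigvalsh(A))[::-1]

# Toy data: n = 3, A0 = 0, basis A_i = e_i e_i^T, so A(c) = diag(c).
n = 3
A0 = np.zeros((n, n))
basis = [np.outer(np.eye(n)[i], np.eye(n)[i]) for i in range(n)]
c_star = np.array([3.0, 2.0, 1.0])
lam = eigenvalues_desc(build_A(c_star, A0, basis))
# c_star solves the IEP whose prescribed eigenvalues are exactly lam
```

With this toy basis, $A(c)$ is simply $\mathrm{diag}(c)$, so a vector solves the IEP exactly when its sorted entries match the prescribed eigenvalues.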

Define the function $f\colon \mathbb{R}^n \to \mathbb{R}^n$ by $f(c) = \lambda(A(c)) - \lambda^*$, where $\lambda(A(c))$ denotes the vector of eigenvalues of $A(c)$ in decreasing order. Then solving the IEP (1) is equivalent to solving the equation $f(c) = 0$ on $\mathbb{R}^n$. Based on this equivalence, Newton's method can be applied to the IEP, and it converges quadratically [15, 16, 21]. However, distinctness of the given eigenvalues is usually assumed in these works. When multiple eigenvalues are present, solving the IEP becomes much more complicated because the eigenvalue function $\lambda(\cdot)$ defined by (3) is in general not differentiable around the solution $c^*$, and the eigenvectors corresponding to a multiple eigenvalue cannot in general be defined as continuous functions of $c$ at $c^*$; see [15]. Therefore, both Newton's method itself and its theoretical analysis may run into difficulties. In [15], the authors analyzed the convergence properties of Newton's method in the multiple-eigenvalue case. However, the nonsingularity assumption needed for convergence in [15] requires that the inverses of the Jacobian and/or approximate Jacobian matrices at all iterations be bounded, with a bound independent of the initial point $c^0$. To ensure this nonsingularity assumption for Newton's method, D. Sun and J. Sun introduced in [22] a generalized Newton method and, using the strong semismoothness of the eigenvalue function for symmetric matrices, developed a new approach to the convergence issue for the case with multiple eigenvalues. They presented a nonsingularity assumption in terms of the Jacobian matrices evaluated at the solution to establish a general convergence result of Newton's method. Note that in each Newton iteration (outer iteration) of the generalized Newton method, we need to solve the Jacobian equation exactly (inner iteration). When the problem size is large, this inversion is costly, and one may employ iterative methods to solve the equation. Although iterative methods can reduce the complexity, they may oversolve the systems in the sense that the last few inner iterations before convergence may not improve the convergence of the outer Newton iteration. The generalized inexact Newton method stops the inner iteration before convergence. By choosing a suitable stopping criterion, we can reduce the total cost of the whole inner-outer iteration.
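To make the outer/inner structure concrete, here is a minimal Newton iteration for $f(c) = \lambda(A(c)) - \lambda^*$ under the affine family, using the standard Jacobian entries $[f'(c)]_{ij} = q_i^{\top} A_j q_i$ with $q_i$ the eigenvectors of $A(c)$. This is only a sketch of the distinct-eigenvalue case with the inner system solved exactly; all names and the toy data are illustrative, not from the paper:

```python
import numpy as np

def newton_iep(c0, A0, basis, lam_star, tol=1e-10, maxit=50):
    """Newton's method for the IEP: solve f(c) = lambda(A(c)) - lambda* = 0.
    Jacobian entries J[i, j] = q_i^T A_j q_i with q_i the orthonormal
    eigenvectors of A(c); this form is valid for distinct eigenvalues."""
    c = np.asarray(c0, dtype=float).copy()
    n = len(c)
    for _ in range(maxit):
        A = A0 + sum(ci * Ai for ci, Ai in zip(c, basis))
        w, Q = np.linalg.eigh(A)
        idx = np.argsort(w)[::-1]            # decreasing order, as in the paper
        w, Q = w[idx], Q[:, idx]
        f = w - lam_star
        if np.linalg.norm(f) < tol:
            break
        J = np.array([[Q[:, i] @ basis[j] @ Q[:, i] for j in range(n)]
                      for i in range(n)])
        c = c - np.linalg.solve(J, f)        # exact inner solve per outer step
    return c

# Toy data: A(c) = diag(c), so the solution is the prescribed eigenvalues.
n = 3
A0 = np.zeros((n, n))
basis = [np.outer(np.eye(n)[i], np.eye(n)[i]) for i in range(n)]
lam_star = np.array([3.0, 2.0, 1.0])
c = newton_iep([2.9, 2.2, 0.8], A0, basis, lam_star)
```

For this toy problem the map $c \mapsto \lambda(A(c))$ is linear near the solution, so a single exact Newton step already recovers $c^* = (3, 2, 1)$.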

In this paper, we give a generalized inexact Newton method for solving the IEP (1), which reduces to the generalized Newton method as a special case. Motivated by D. Sun and J. Sun's use of strong semismoothness, we give a convergence analysis of this method. By choosing a suitable stopping criterion, we show that the generalized inexact Newton method converges superlinearly. It should be noted that the analysis of the present paper is “distinction free”: though the nonsingularity assumption is stated in terms of the Jacobian matrices evaluated at the solution $c^*$, the inverses of the approximate Jacobian matrices over all iterations are guaranteed to be bounded, with an upper bound independent of the initial point $c^0$. A numerical example is presented in the last section to illustrate our results, and comparisons with the generalized Newton method are made.

#### 2. Semismoothness and Relative Generalized Jacobian

Let $F\colon \mathbb{R}^m \to \mathbb{R}^n$ be a locally Lipschitz continuous function. Then, according to Rademacher's theorem, $F$ is differentiable almost everywhere. Let $D_F$ be the set of differentiable points of $F$ and let $F'(x)$ denote the Jacobian of $F$ at $x$ whenever it exists. Denote $\partial_B F(x) = \{\lim_{k \to \infty} F'(x_k) : x_k \to x,\ x_k \in D_F\}$. Then Clarke's generalized Jacobian [23] is $\partial F(x) = \operatorname{conv} \partial_B F(x)$, where “conv” stands for the convex hull in the usual sense of convex analysis [24]. We are now ready to give the following definition of semismoothness. For the original concept of semismoothness for functions and vector-valued functions, one may refer to [25, 26].

*Definition 1. *Suppose that $F$ is a locally Lipschitz continuous function. $F$ is said to be semismooth at $x$ if $F$ is directionally differentiable at $x$ and, for any $V \in \partial F(x+h)$, $Vh - F'(x;h) = o(\|h\|)$ as $h \to 0$. $F$ is said to be $p$-order ($0 < p \leq 1$) semismooth at $x$ if $F$ is semismooth at $x$ and $Vh - F'(x;h) = O(\|h\|^{1+p})$ as $h \to 0$.
In particular, $F$ is called strongly semismooth at $x$ if $F$ is $1$-order semismooth at $x$. A function is said to be a (strongly) semismooth function if it is (strongly) semismooth everywhere on $\mathbb{R}^m$.

Now, let us consider the composite nonsmooth function $\theta = \psi \circ \Phi$, where $\psi$ is nonsmooth but of special structure and $\Phi$ is continuously differentiable. It should be noted that neither $\partial \theta(x)$ nor $\partial_B \theta(x)$ is easy to compute, even if $\partial \psi$, $\partial_B \psi$, and $\Phi'$ are available. To circumvent the difficulty in computing $\partial \theta(x)$, Potra et al. [27] introduced a suitable concept of generalized Jacobian for such composite functions. Furthermore, in order to weaken the nonsingularity assumption on the generalized Jacobians, D. Sun and J. Sun introduced in [22] the following concepts of relative generalized Jacobians.

*Definition 2. *Let be a subset of . The -relative generalized Jacobians and of at are defined by

Lemma 3 presents the properties of the relative generalized Jacobians, which have been proved in [22]; we omit the proof here.

Lemma 3. *Let be Lipschitz continuous near . Then*(i)* and are compact subsets of and , respectively.*(ii)* and are upper semicontinuous at .*

#### 3. The Generalized Inexact Newton Method

Let $A\colon \mathbb{R}^n \to \mathcal{S}(n)$ be continuously differentiable and let $c \in \mathbb{R}^n$. In what follows, we suppose that $\lambda_1(c) \geq \cdots \geq \lambda_n(c)$ are the eigenvalues of the matrix $A(c)$. Recall that the function $f$ is defined by (3); this means that $f$ is a composite nonsmooth function. The concept of generalized Jacobian can therefore be applied to $f$; see [22, Proposition 5.1]. Hence, according to this, the generalized Newton method for solving the IEP can be described as follows; see [22].

*Algorithm 4 (the generalized Newton method). *For $k = 0, 1, 2, \ldots$ until convergence, do the following. (a) Compute an eigendecomposition of $A(c^k)$. (b) Form the approximate Jacobian according to (13). (c) Solve the Newton step from equation (14).

The generalized Newton method converges quadratically; see, for instance, [22, Theorem 5.3]. In deriving the quadratic convergence of this method, it was assumed that system (14) is solved exactly. Usually, one solves these systems by iterative methods, in particular when the problem size is large. However, iterative methods may bring an oversolving problem in the sense that the last few iterations before convergence are usually insignificant as far as the convergence of the outer iteration is concerned. This oversolving of the inner iterations wastes time and does not improve the efficiency of the whole method. The generalized inexact Newton method is derived precisely to avoid the oversolving problem in the inner iterations. Instead of solving (14) exactly, one solves it iteratively until a reasonable tolerance is reached. The tolerance is chosen carefully: small enough to guarantee the convergence of the outer iteration, but large enough to reduce the oversolving problem of the inner iterations. We find that the tolerance has to be of order $\|f(c^k)\|^{\gamma}$ in order to have a convergence rate of $\gamma$ for the outer iteration. We now give the generalized inexact Newton method for solving the IEP. We will prove in Section 4 that the convergence rate of the method is equal to $\gamma$.

*Algorithm 5 (the generalized inexact Newton method). *For $k = 0, 1, 2, \ldots$ until convergence, do the following. (a) Compute an eigendecomposition of $A(c^k)$. (b) Form the approximate Jacobian according to (13). (c) Solve the step inexactly from equation (15) until the residual satisfies (16).
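A sketch of the inner solve of Algorithm 5: the linear system $J_k s = -f(c^k)$ is solved only approximately, the iteration being stopped once the residual drops below $\beta\,\|f(c^k)\|^{\gamma}$. The paper uses QMR as the inner solver; the plain Jacobi sweep below is a simple stand-in, and the names `beta` and `gamma` are illustrative parameters:

```python
import numpy as np

def inexact_step(J, f, beta=0.5, gamma=1.5, maxit=1000):
    """Approximately solve J s = -f, stopping the inner iteration once
    ||f + J s|| <= beta * ||f||**gamma (a residual rule of the kind used
    in Algorithm 5). Jacobi sweeps stand in for the QMR solver of the
    paper; they converge when J is strictly diagonally dominant."""
    s = np.zeros_like(f, dtype=float)
    D = np.diag(J)
    tol = beta * np.linalg.norm(f) ** gamma
    for _ in range(maxit):
        r = -f - J @ s                 # residual of J s = -f
        if np.linalg.norm(r) <= tol:
            break
        s = s + r / D                  # one Jacobi sweep
    return s

# Illustrative diagonally dominant system.
J = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
f = np.array([1.0, -2.0, 0.5])
s = inexact_step(J, f)
```

The point of the rule is that the inner tolerance shrinks with $\|f(c^k)\|$, so early outer iterations are cheap while late ones are accurate enough to preserve the rate $\gamma$.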

#### 4. Convergence Analysis

In this section, we carry out a convergence analysis of Algorithm 5. Let the parameters of the stopping rule (16) be given, and let $c^*$ be the solution of the IEP. With the quantities defined by (11), the relevant functions are continuously differentiable near $c^*$; thus, according to Definition 2, we obtain the relative generalized Jacobian of $f$ at $c^*$ (see (20)).

We first present two lemmas that are important for the proof of the main theorem. Lemma 6 describes the continuity of the eigenvalues and can be found in many papers; see, for example, [15–18, 28]. Lemma 7 is a crucial result proved in [22].

Lemma 6. *Suppose that there exists such that the matrix has eigenvalues given by (17). Suppose that is Lipschitz continuous around . Then there exist positive numbers and such that, for each , the following assertion holds.*

Lemma 7. *The eigenvalue function $\lambda(\cdot)$ is a strongly semismooth function.*

With Lemmas 6 and 7, we are in a position to prove the superlinear convergence of Algorithm 5. In order to ensure the superlinear convergence, one may want to assume that all the Jacobians arising at the iterates are nonsingular. However, as noted in [15], when multiple eigenvalues are present it is in general not possible to choose the eigenvectors so that these matrices are nonsingular. Hence we assume here that all matrices in the relative generalized Jacobian at the solution are nonsingular.

Theorem 8. *Suppose that there exists such that the matrix has eigenvalues given by (17). Let be defined by (20). If (i) for each , , (ii) all are nonsingular, and (iii) is Lipschitz continuous around , then there exists a positive number such that, for each , the sequence $\{c^k\}$ generated by Algorithm 5 converges to $c^*$ with $\|c^{k+1} - c^*\| \leq \kappa \|c^k - c^*\|^{\gamma}$. Here $\kappa$ is a positive constant.*

*Proof. *Since is Lipschitz continuous around , it follows from Lemma 6 that is also Lipschitz continuous around . On the other hand, note that all are nonsingular. Thus, thanks to Lemma 3, there exist positive numbers and such that, for each , all are nonsingular and
Since, by Lemma 7, is a strongly semismooth function, there exist positive numbers and such that for all and all
Let
Take such that
Below we will show that and are as desired. To this end, let $c^k$ be the $k$th iterate. We assert that
Granting this and by the assumption that , we can complete the proof of the theorem. To give the proof of assertion (27), we suppose that . Note that . Then, by (23) and (24), we obtain that for all
On the other hand, noting that , one has by Lemma 6 and (16) that
Since , we derive from (15) that
Combining this with (28)–(30), we obtain
where the last inequality holds because and . Furthermore, by the fact that , we get
which together with the assumption implies . Therefore, we complete the proof of the assertion and hence the proof of the whole theorem.

In the special case when the Jacobian equation is solved exactly, Algorithm 5 reduces to Algorithm 4. Thus, applying Theorem 8, we have the following result for the generalized Newton method, which coincides with [22, Theorem 5.4].

Corollary 9. *Suppose that there exists such that the matrix has eigenvalues given by (17). Let be defined by (20). If (i) for each , , (ii) all are nonsingular, and (iii) is Lipschitz continuous around , then there exists a positive number such that, for each , the sequence generated by Algorithm 4 converges quadratically to .*

#### 5. A Numerical Example

In this section, we compare the numerical performance of Algorithm 5 with that of Algorithm 4. Our aim is to illustrate, for inverse eigenvalue problems with multiple eigenvalues, the advantage of the generalized inexact Newton method over the generalized Newton method in terms of reducing oversolving and the overall computational cost. The tests were carried out in MATLAB 7.0 running on a Genuine Intel(R) PC with a 1.6 GHz CPU.

We consider the typical affine choice $A(c) = A_0 + \sum_{i=1}^{n} c_i A_i$ and use Toeplitz matrices as our basis, so that $A(c)$ is a symmetric Toeplitz matrix with first column equal to $c$. This numerical example has been studied extensively; see, for instance, [12, 14, 17, 18, 20].
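The symmetric Toeplitz family used in the tests can be sketched as follows; this is a hedged reconstruction of the setup just described, with the function name chosen for illustration:

```python
import numpy as np

def sym_toeplitz(c):
    """Symmetric Toeplitz matrix whose first column is c,
    i.e. A(c)[i, j] = c[|i - j|]."""
    c = np.asarray(c, dtype=float)
    n = len(c)
    return np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])

T = sym_toeplitz([2.0, 1.0, 0.0])
# T = [[2, 1, 0], [1, 2, 1], [0, 1, 2]], symmetric and tridiagonal
```

Note that with this structure each basis matrix $A_i$ is the symmetric Toeplitz matrix whose first column is the $i$th unit vector, so the family is indeed affine in $c$.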

In the tests, we tried Algorithms 4 and 5 on ten $n$-by-$n$ matrices. For each matrix, we first generate a vector $c^*$ and compute the eigenvalues of $A(c^*)$; some of these eigenvalues are then set equal to one another so that multiple eigenvalues occur, and the resulting values are chosen as the prescribed eigenvalues. Since both Algorithms 4 and 5 are only locally convergent, the initial guess $c^0$ is formed by chopping the components of $c^*$ to four decimal places. For both algorithms, the same stopping tolerance is used for the outer (Newton) iterations. The inner systems (14) and (15) are all solved by the QMR method [29] via the MATLAB QMR function, with the maximal number of iterations set to 1000. Also, the initial guess for the Jacobian equation in the $k$th outer iteration is the approximate solution obtained at the $(k-1)$th outer iteration. The inner stopping tolerance for (15) is given by (16), while (14) in Algorithm 4 is solved up to machine precision eps (≈2.2 × 10^{−16}).
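One detail of the setup above, forming the initial guess by chopping $c^*$ to four decimal places, can be made precise with a small helper. The name `chop4` is illustrative, and truncation toward zero is an assumption about what "chopping" means here:

```python
import numpy as np

def chop4(x):
    """Truncate each component toward zero to four decimal places,
    producing the initial guess c^0 from the solution c^*."""
    return np.trunc(np.asarray(x, dtype=float) * 1e4) / 1e4

c0 = chop4([0.123456, -0.987654, 2.0])
# c0 = [0.1234, -0.9876, 2.0]
```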

Comparisons of Algorithm 5 with Algorithm 4 are illustrated in Table 1. In this table, we give the total number of outer iterations averaged over the ten tests and the average total number of inner iterations required for solving the Jacobian equations. Here “I” means no preconditioner, while “P” means that the MATLAB incomplete LU factorization (MILU) is adopted as the preconditioner, that is, LUINC(A, [drop-tolerance, 1, 1, 1]), where the drop tolerance is set to 0.01. We can see from the table that the number of outer iterations is small for Algorithm 4 and also for Algorithm 5 for the values of $\gamma$ tested. This confirms the theoretical convergence rates of the two algorithms. In terms of inner iterations, Algorithm 5 requires fewer than Algorithm 4. Thus, we can conclude that Algorithm 5 with $\gamma$ around 1.5 is much better than Algorithm 4. On the other hand, we also note that the MILU preconditioner is quite effective for the Jacobian equations.

#### 6. Concluding Remarks

In this paper, we have proposed a generalized inexact Newton method for the inverse eigenvalue problem. We show that our inexact method converges superlinearly. This inexact version can minimize the oversolving problem of the generalized Newton method and give a good tradeoff between the inner and outer iterations. We also present numerical experiments to illustrate our results.

Similar approaches have not yet been extended to the inexact Newton-like method; this is an interesting topic for future work.

#### Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgment

This project was supported in part by the National Natural Science Foundation of China (Grant no. 11101379).

#### References

- P. J. Brussard and P. W. Glaudemans, *Shell Model Applications in Nuclear Spectroscopy*, Elsevier, New York, NY, USA, 1977.
- C. I. Byrnes, “Pole placement by output feedback,” in *Three Decades of Mathematical System Theory*, vol. 135 of *Lecture Notes in Control and Information Sciences*, pp. 31–78, Springer, New York, NY, USA, 1989.
- M. T. Chu and G. H. Golub, “Structured inverse eigenvalue problems,” *Acta Numerica*, vol. 11, pp. 1–71, 2002.
- S. Elhay and Y. M. Ram, “An affine inverse eigenvalue problem,” *Inverse Problems*, vol. 18, no. 2, pp. 455–466, 2002.
- G. M. L. Gladwell, “Inverse problems in vibration,” *Applied Mechanics Reviews*, vol. 39, pp. 1013–1018, 1986.
- G. M. L. Gladwell, “Inverse problems in vibration II,” *Applied Mechanics Reviews*, vol. 49, pp. 25–34, 1996.
- O. Hald, *On Discrete and Numerical Sturm-Liouville Problems [Ph.D. thesis]*, Department of Mathematics, New York University, New York, NY, USA, 1970.
- K. T. Joseph, “Inverse eigenvalue problem in structural design,” *AIAA Journal*, vol. 30, no. 12, pp. 2890–2896, 1992.
- N. Li, “A matrix inverse eigenvalue problem and its application,” *Linear Algebra and Its Applications*, vol. 266, no. 1–3, pp. 143–152, 1997.
- M. Müller, “An inverse eigenvalue problem: computing B-stable Runge-Kutta methods having real poles,” *BIT*, vol. 32, no. 4, pp. 676–688, 1992.
- R. L. Parker and K. A. Whaler, “Numerical methods for establishing solutions to the inverse problem of electromagnetic induction,” *Journal of Geophysical Research*, vol. 86, no. 10, pp. 9574–9584, 1981.
- J. Peinado and A. M. Vidal, “A new parallel approach to the Toeplitz inverse eigenproblem using Newton-like methods,” in *Vector and Parallel Processing—VECPAR 2000*, Lecture Notes in Computer Science, pp. 355–368, Springer, Berlin, Germany, 2001.
- M. S. Ravi, J. Rosenthal, and X. A. Wang, “On decentralized dynamic pole placement and feedback stabilization,” *IEEE Transactions on Automatic Control*, vol. 40, no. 9, pp. 1603–1614, 1995.
- W. F. Trench, “Numerical solution of the inverse eigenvalue problem for real symmetric Toeplitz matrices,” *SIAM Journal on Scientific Computing*, vol. 18, no. 6, pp. 1722–1736, 1997.
- S. Friedland, J. Nocedal, and M. L. Overton, “The formulation and analysis of numerical methods for inverse eigenvalue problems,” *SIAM Journal on Numerical Analysis*, vol. 24, no. 3, pp. 634–667, 1987.
- S. F. Xu, *An Introduction to Inverse Algebraic Eigenvalue Problems*, Peking University Press, 1998.
- Z.-J. Bai, R. H. Chan, and B. Morini, “An inexact Cayley transform method for inverse eigenvalue problems,” *Inverse Problems*, vol. 20, no. 5, pp. 1675–1689, 2004.
- R. H. Chan, H. L. Chung, and S.-F. Xu, “The inexact Newton-like method for inverse eigenvalue problem,” *BIT Numerical Mathematics*, vol. 43, no. 1, pp. 7–20, 2003.
- R. H. Chan, S.-F. Xu, and H.-M. Zhou, “On the convergence rate of a quasi-Newton method for inverse eigenvalue problems,” *SIAM Journal on Numerical Analysis*, vol. 36, no. 2, pp. 436–441, 1999.
- W. P. Shen, C. Li, and X. Q. Jin, “A Ulm-like method for inverse eigenvalue problems,” *Applied Numerical Mathematics*, vol. 61, no. 3, pp. 356–367, 2011.
- W. N. Kublanovskaja, “On an approach to the eigenvalue problem,” *Zapiski Nauchnykh Seminarov LOMI (Akad. Nauk SSSR, V. A. Steklov Math. Inst.)*, pp. 138–149, 1970.
- D. Sun and J. Sun, “Strong semismoothness of eigenvalues of symmetric matrices and its application to inverse eigenvalue problems,” *SIAM Journal on Numerical Analysis*, vol. 40, no. 6, pp. 2352–2367, 2002.
- F. H. Clarke, *Optimization and Nonsmooth Analysis*, John Wiley and Sons, New York, NY, USA, 1983.
- R. T. Rockafellar, *Convex Analysis*, Princeton University Press, Princeton, NJ, USA, 1970.
- R. Mifflin, “Semismooth and semiconvex functions in constrained optimization,” *SIAM Journal on Control and Optimization*, vol. 15, pp. 957–972, 1977.
- L. Qi and J. Sun, “A nonsmooth version of Newton's method,” *Mathematical Programming*, vol. 58, no. 1–3, pp. 353–367, 1993.
- F. A. Potra, L. Qi, and D. Sun, “Secant methods for semismooth equations,” *Numerische Mathematik*, vol. 80, no. 2, pp. 305–324, 1998.
- G. H. Golub and C. F. Van Loan, *Matrix Computations*, The Johns Hopkins University Press, Baltimore, Md, USA.
- R. W. Freund and N. M. Nachtigal, “QMR: a quasi-minimal residual method for non-Hermitian linear systems,” *Numerische Mathematik*, vol. 60, no. 1, pp. 315–339, 1991.