ISRN Mathematical Analysis, Volume 2013 (2013), Article ID 258072, 5 pages. http://dx.doi.org/10.1155/2013/258072
Research Article

## Solving Separable Nonlinear Equations Using LU Factorization

Department of Mathematics, Western Washington University, Bellingham, WA 98225-9063, USA

Received 15 May 2013; Accepted 4 June 2013

Academic Editors: R. Barrio and B. Djafari Rouhani

Copyright © 2013 Yun-Qiu Shen and Tjalling J. Ypma. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Separable nonlinear equations have the form $F(y,z) \equiv A(y)z - b(y) = 0$, where the $N \times n$ matrix $A(y)$ and the vector $b(y) \in \mathbb{R}^N$ are continuously differentiable functions of $y \in \mathbb{R}^m$, and $z \in \mathbb{R}^n$. We assume that $N \ge n + m$ and $A(y)$ has full rank $n$. We present a numerical method to compute the solution $(y^*, z^*)$ for fully determined systems ($N = n + m$) and compatible overdetermined systems ($N > n + m$). Our method reduces the original system to a smaller system of $N - n$ equations in $y$ alone. The iterative process to solve the smaller system only requires the LU factorization of one matrix per step, and the convergence is quadratic. Once $y^*$ has been obtained, $z^*$ is computed by direct solution of a linear system. Details of the numerical implementation are provided and several examples are presented.

#### 1. Introduction

Many applications [1, 2] lead to a system of separable nonlinear equations:
$$F(y,z) \equiv A(y)z - b(y) = 0, \quad (1)$$
where the $N \times n$ matrix $A(y)$ and the vector $b(y) \in \mathbb{R}^N$ are continuously differentiable functions of $y \in \mathbb{R}^m$, with $z \in \mathbb{R}^n$ and $N \ge n + m$. Typically $m$ is very small, and for compatible systems (i.e., those with exact solutions) $N$ is usually close to $n$. We assume that the Jacobian $DF(y,z)$ is Lipschitz continuous and has full rank $n + m$ in a neighborhood of a solution $(y^*, z^*)$; thus, we assume that $A(y)$ has full rank $n$ in this neighborhood.

Standard projection methods, such as VARPRO [3, 4], transform the problem (1) to the minimization of a function in $y$ alone:
$$\min_y \left\| \left[ I - A(y)A(y)^{+} \right] b(y) \right\|^2, \quad (2)$$
where $A(y)^{+}$ denotes the pseudoinverse of $A(y)$ and we use the Euclidean norm throughout this paper.
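For concreteness, the projected residual being minimized in (2) can be evaluated with a single least-squares solve. The following sketch is illustrative only (the matrix and vector are toy data, not an example from this paper):

```python
import numpy as np

def varpro_residual(A, b):
    """Squared norm of the component of b outside the range of A,
    i.e. ||(I - A A^+) b||^2, evaluated via a least-squares solve."""
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    r = A @ z - b
    return float(r @ r)

# Illustrative data: a 4 x 2 matrix depending on a scalar parameter y.
y = 0.3
A = np.array([[1.0, y], [y, 1.0], [1.0, 2 * y], [0.0, 1.0]])
b = A @ np.array([2.0, -1.0])   # compatible: an exact solution z exists
print(varpro_residual(A, b))    # essentially zero for compatible b
```

For a compatible system the residual vanishes at the solution parameter $y$, which is exactly why (2) can replace (1).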

In [5, 6], we proposed a different method, using left orthonormal null vectors of $A(y)$, to reduce (1) to an equation of the form
$$f(y) = 0 \quad (3)$$
in $y$ alone. We assumed that the system is fully determined ($N = n + m$) and $DF(y,z)$ has full rank. That algorithm was extended to overdetermined systems ($N > n + m$) with full rank Jacobian in [7]. One QR factorization of $A(y)$ is required in each iterative step of the methods used to solve these smaller systems for $y$. For details of these methods and their relationship to other methods, see [1, 7].

In this paper, we use a special set of $N - n$ linearly independent left null vectors of $A(y)$ to construct a bordered matrix $B(y)$ which inherits the smoothness of $A(y)$. We use this to construct a system of $N - n$ equations of the form $f(y) = 0$ in the unknowns $y$ alone. The smaller system inherits the Lipschitz continuity and nonsingularity of the Jacobian matrix of the original system (1), so that quadratic convergence of the Newton or Gauss-Newton method for solving $f(y) = 0$ is guaranteed. The QR factorization of $A(y)$ used in previous methods is here replaced by an LU factorization of $B(y)$, so the cost of each iterative step may be significantly reduced. The method works for both fully determined systems and compatible overdetermined systems. This paper extends the work using bordered matrices and LU factorization for underdetermined systems ($N < n + m$) in [8] to fully determined and compatible overdetermined systems.

In the next section, we show how to reduce the original system to a new system in the variables $y$ only. The corresponding numerical algorithm is presented in Section 3. Examples and computational results are provided in Section 4.

#### 2. Analysis

Assume that $(y^*, z^*)$ is an exact solution of (1), and that $DF(y,z)$ is Lipschitz continuous and has full rank $n + m$ in a neighborhood of $(y^*, z^*)$. For all $y$ in this neighborhood, we write the singular value decomposition of $A(y)$ as
$$A(y) = U(y)\,\Sigma(y)\,V(y)^T, \quad (4)$$
where $U(y) \in \mathbb{R}^{N \times N}$ and $V(y) \in \mathbb{R}^{n \times n}$ are orthogonal matrices and $\Sigma(y) \in \mathbb{R}^{N \times n}$ carries the singular values $\sigma_1(y) \ge \cdots \ge \sigma_n(y) \ge 0$ on its diagonal, with zeros elsewhere. Since the singular values $\sigma_i(y)$ depend continuously on $y$, and $A(y^*)$ has full rank $n$ and thus $\sigma_n(y^*) > 0$, we have
$$\sigma_n(y) \ge \sigma > 0 \quad (5)$$
for some constant $\sigma$ in a small neighborhood of $y^*$. In this neighborhood, the following result holds.

Theorem 1. Let $A(y)$ be continuously differentiable and of full rank $n$ in a neighborhood of $y^*$. Then, there exists an $N \times (N - n)$ constant matrix $C$ such that the bordered matrix
$$B(y) = [\, A(y) \;\; C \,] \quad (6)$$
has full rank $N$ in a neighborhood of $y^*$.

Proof. Let $y_0$ be a point near $y^*$ for which the singular value $\sigma_n(y_0)$ satisfies (5). Let
$$C = U(y_0) \begin{bmatrix} 0 \\ I_{N-n} \end{bmatrix}. \quad (7)$$
Then
$$B(y_0) = [\, A(y_0) \;\; C \,] = U(y_0) \begin{bmatrix} \Sigma_1(y_0)\,V(y_0)^T & 0 \\ 0 & I_{N-n} \end{bmatrix}, \qquad \Sigma_1(y_0) = \mathrm{diag}\bigl(\sigma_1(y_0), \ldots, \sigma_n(y_0)\bigr),$$
which implies that $B(y_0)$ is invertible with $\|B(y_0)^{-1}\| \le \max(1, 1/\sigma)$ by (5). For all $y$ in a sufficiently small neighborhood of $y_0$, continuity ensures that we can satisfy
$$\|B(y) - B(y_0)\| < \frac{1}{\|B(y_0)^{-1}\|}.$$
This guarantees that $\|B(y_0)^{-1}[B(y) - B(y_0)]\| < 1$ for all such $y$; hence, by the Neumann Lemma [9], $B(y)$ is invertible in this neighborhood.

Since $B(y)$ is invertible, there exist an $n \times N$ matrix $Q(y)$ and an $(N - n) \times N$ matrix $P(y)$, with linearly independent rows, that satisfy
$$\begin{bmatrix} Q(y) \\ P(y) \end{bmatrix} B(y) = I_N, \quad (11)$$
so that the rows of $P(y)$ are left null vectors of $A(y)$.
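In computational terms, $C$ can be taken from the trailing left singular vectors of $A(y_0)$, and $Q(y)$, $P(y)$ are then the row blocks of $B(y)^{-1}$. A minimal NumPy sketch of this construction follows; the matrix data is illustrative, and the explicit inverse is used only to exhibit the blocks (in practice one works with the LU factors instead):

```python
import numpy as np

def border_matrix(A0):
    """Form the constant block C from the left singular vectors of A0
    that span its left null space, as in the proof of Theorem 1."""
    N, n = A0.shape
    U, _, _ = np.linalg.svd(A0)   # full SVD: U is N x N orthogonal
    return U[:, n:]               # C is N x (N - n)

# Illustrative 5 x 3 full-rank matrix playing the role of A(y0).
rng = np.random.default_rng(0)
A0 = rng.standard_normal((5, 3))
C = border_matrix(A0)
B = np.hstack([A0, C])            # bordered matrix B = [A  C], now square
Binv = np.linalg.inv(B)
Q, P = Binv[:3, :], Binv[3:, :]   # row blocks: Q A = I, P A = 0, P C = I
print(np.allclose(P @ A0, 0))     # rows of P are left null vectors of A
```

The checks $QA = I_n$, $PA = 0$, and $PC = I_{N-n}$ are exactly the block identities packaged in (11).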

We use this to reduce the original system (1) to the form (3) for fully determined systems and compatible overdetermined systems with full rank Jacobian, as follows.

Theorem 2. (a) $F(y, z) = 0$ if and only if
$$f(y) \equiv P(y)\,b(y) = 0 \quad \text{and} \quad z = Q(y)\,b(y). \quad (12)$$
(b) $f'(y^*)$ has full rank $m$ if and only if $DF(y^*, z^*)$ has full rank $n + m$.

Proof. (a) Using (6) and (11), we have
$$\begin{bmatrix} Q(y) \\ P(y) \end{bmatrix} F(y,z) = \begin{bmatrix} Q(y) \\ P(y) \end{bmatrix} \bigl[ A(y)z - b(y) \bigr] = \begin{bmatrix} z - Q(y)b(y) \\ -P(y)b(y) \end{bmatrix}, \quad (13)$$
which implies the result since $B(y)$ is nonsingular.
(b) Differentiating (13) and using $F(y^*, z^*) = 0$, the Jacobian of the left-hand side at $(y^*, z^*)$ is the nonsingular matrix $B(y^*)^{-1}$ times $DF(y^*, z^*)$, while the last $N - n$ rows of the Jacobian of the right-hand side are $[\, -f'(y^*) \;\; 0 \,]$; this implies that $f'(y^*)$ has full rank $m$ if and only if $DF(y^*, z^*)$ has full rank $n + m$.

#### 3. Computation

Based on (12), we solve the $N - n$ equations $f(y) = 0$ in the $m$ unknowns $y$ for $y^*$ and then compute $z^* = Q(y^*)b(y^*)$. For the fully determined system ($N = n + m$), we use Newton's method
$$y_{k+1} = y_k - f'(y_k)^{-1} f(y_k), \quad (15)$$
and for compatible overdetermined systems ($N > n + m$), we use the Gauss-Newton method
$$y_{k+1} = y_k - \bigl[ f'(y_k)^T f'(y_k) \bigr]^{-1} f'(y_k)^T f(y_k). \quad (16)$$

To evaluate $f(y)$, we use an LU factorization of $B(y)$ to solve for $x(y)$ from (11):
$$x(y) = \begin{bmatrix} x_1(y) \\ x_2(y) \end{bmatrix}, \qquad x_1(y) \in \mathbb{R}^n, \qquad f(y) = x_2(y), \quad (17)$$
where
$$B(y)\,x(y) = b(y). \quad (18)$$

Now, consider $f'(y)$. Use the following notation: $M_{y_i}$ denotes a matrix of the same size as a given matrix $M$, but with each entry being the partial derivative of the corresponding entry of $M$ with respect to $y_i$. Then,
$$f'(y) = \bigl[\, (x_2)_{y_1}(y), \ldots, (x_2)_{y_m}(y) \,\bigr] = [\, 0 \;\; I_{N-n} \,] \bigl[\, x_{y_1}(y), \ldots, x_{y_m}(y) \,\bigr]. \quad (19)$$

Differentiating (18), the vectors $x_{y_i}(y)$ can be computed by solving
$$B(y)\,x_{y_i}(y) = b_{y_i}(y) - A_{y_i}(y)\,x_1(y), \qquad i = 1, \ldots, m, \quad (20)$$
using the previously computed $x(y)$ and the LU factorization of $B(y)$.
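These two solves share one factorization: the same LU of $B(y)$ yields both $f(y)$ (the trailing components of $B(y)^{-1}b(y)$) and every column of $f'(y)$. A sketch using SciPy's LU routines follows; the function name `f_and_jacobian` and the tiny test system are our own illustrations, not data from the paper:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def f_and_jacobian(y, A, b, Ay, by, C):
    """Evaluate f(y) and f'(y) from one LU factorization of B(y) = [A(y) C].
    Ay(y), by(y) return lists of partial derivatives w.r.t. each y_i."""
    n = A(y).shape[1]
    lu = lu_factor(np.hstack([A(y), C]))
    x = lu_solve(lu, b(y))                   # x = [Q b; P b]; f(y) = x[n:]
    cols = []
    for Ai, bi in zip(Ay(y), by(y)):
        xi = lu_solve(lu, bi - Ai @ x[:n])   # differentiate B x = b
        cols.append(xi[n:])                  # keep the trailing block
    return x[n:], np.column_stack(cols)

# Illustrative fully determined system: N = 3, n = 2, m = 1.
A  = lambda y: np.array([[1.0, 0.0], [y[0], 1.0], [0.0, 1.0]])
Ay = lambda y: [np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 0.0]])]
bstar = A(np.array([0.5])) @ np.array([1.0, 2.0])   # solution at y = 0.5
b  = lambda y: bstar
by = lambda y: [np.zeros(3)]
C = np.linalg.svd(A(np.array([0.4])))[0][:, 2:]     # border block, as in (7)
fval, J = f_and_jacobian(np.array([0.5]), A, b, Ay, by, C)
print(fval, J)
```

At the exact solution parameter $y = 0.5$ the residual `fval` vanishes, while `J` supplies the Jacobian needed by (15) or (16).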

Notice that Lipschitz continuity of $DF(y,z)$ implies Lipschitz continuity of $A(y)$ and $b(y)$, and also of $A_{y_i}(y)$ and $b_{y_i}(y)$. By (18) this implies Lipschitz continuity of $x(y)$, and (20) then implies Lipschitz continuity of $x_{y_i}(y)$. Using (19), it follows that $f'(y)$ is Lipschitz continuous. Now, Theorem 2(b) guarantees quadratic convergence of the Newton and Gauss-Newton methods for $y_0$ sufficiently near $y^*$, since the corresponding convergence conditions are satisfied [9–12]. In particular, for the Gauss-Newton method, our assumption that we have a compatible overdetermined system implies that we have a zero residual problem.

An outline of our algorithm follows.

Algorithm 3. Given: $A(y)$ and $b(y)$ with the corresponding positive integers $N$, $n$, $m$; a small positive tolerance $\varepsilon$; and a point $y_0$ near the solution $y^*$. Compute an SVD factorization of $A(y_0)$ and use it to form $C$ as in (7). For $k = 0, 1, 2, \ldots$, do steps (a)–(e).
(a) Form $B(y_k)$ as in (6) and compute its LU factors.
(b) Form $x(y_k)$ and $f(y_k)$ as in (17) and (18).
(c) Form $x_{y_i}(y_k)$ and $f'(y_k)$ as in (19) and (20).
(d) Find $y_{k+1}$ using (15) if $N = n + m$, or (16) if $N > n + m$.
(e) If $\|y_{k+1} - y_k\| \le \varepsilon$, stop; output $y_{k+1}$ and obtain $z$ from (12). Otherwise, replace $y_k$ by $y_{k+1}$ and go to (a).
By (11), $Q(y)b(y)$ is the first $n$ components of $x(y) = B(y)^{-1}b(y)$; thus, $z$ in step (e) can be computed by using the LU decomposition of $B(y_{k+1})$.
Each iteration of our method requires one LU factorization of the $N \times N$ matrix $B(y)$, or about $2N^3/3$ flops [13]. A QR factorization of the $N \times n$ matrix $A(y)$ costs about $2n^2(N - n/3)$ flops [13], so if $n$ is close to $N$, the new method approximately halves this cost. The matrix to be factored in step (d) is only $m \times m$, and typically $m$ is very small, so this cost is negligible. Since $N$ and $n$ are typically large, the computational cost of the other steps is also small relative to the cost of the matrix factorization.
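Putting steps (a)–(e) together, Algorithm 3 can be sketched in NumPy/SciPy roughly as follows. This is a minimal illustration, not the authors' code: the function name `solve_separable`, the tolerance `tol`, and the toy test system are our own, the caller supplies the partial derivatives, and the Newton/Gauss-Newton branch mirrors step (d):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, lstsq

def solve_separable(A, b, Ay, by, y0, tol=1e-12, maxit=50):
    """Sketch of Algorithm 3: solve A(y)z = b(y) for (y, z).
    A(y): N x n, full rank; Ay(y), by(y): lists of partials w.r.t. y_i."""
    y = np.asarray(y0, dtype=float)
    N, n = A(y).shape
    C = np.linalg.svd(A(y))[0][:, n:]          # border block C, as in (7)
    for _ in range(maxit):
        lu = lu_factor(np.hstack([A(y), C]))   # LU of B(y_k), step (a)
        x = lu_solve(lu, b(y))                 # step (b): f(y) = x[n:]
        f = x[n:]
        Jcols = [lu_solve(lu, bi - Ai @ x[:n])[n:]
                 for Ai, bi in zip(Ay(y), by(y))]   # step (c)
        J = np.column_stack(Jcols)
        if J.shape[0] == J.shape[1]:           # step (d): Newton ...
            dy = np.linalg.solve(J, -f)
        else:                                  # ... or Gauss-Newton
            dy = lstsq(J, -f)[0]
        y = y + dy
        if np.linalg.norm(dy) <= tol:          # step (e)
            break
    x = lu_solve(lu_factor(np.hstack([A(y), C])), b(y))
    return y, x[:n]                            # z = first n components

# Illustrative system with exact solution y* = 0.5, z* = (1, 2).
A  = lambda y: np.array([[1.0, 0.0], [y[0], 1.0], [0.0, 1.0]])
Ay = lambda y: [np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 0.0]])]
bstar = A(np.array([0.5])) @ np.array([1.0, 2.0])
y, z = solve_separable(A, lambda y: bstar, Ay, lambda y: [np.zeros(3)], [0.4])
print(y, z)
```

Only one $N \times N$ factorization is performed per iteration; every other operation is a triangular solve or a small $m \times m$ system, matching the cost analysis above.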

#### 4. Examples

The following three examples illustrate the method.

Example 1. Consider a small fully determined system of the form (1). This equation has a solution $(y^*, z^*)$ at which $DF(y^*, z^*)$ has full rank. We choose a starting point $y_0$ near $y^*$. An SVD of $A(y_0)$ produces the matrix block $C$.

We list the errors $\|y_k - y^*\|$ in Table 1, which clearly shows quadratic convergence of the Newton iteration. Using (12) with the final iterate then gives $z$ with a correspondingly small error.

Table 1: Newton iterations for Example 1.

The next example is a discretized version of an interface eigenvalue problem for a differential equation on two intervals. For references on interface problems in differential equations, see [14].

Example 2. Consider a system of the form (1) in which $A(y)$ contains a tridiagonal matrix arising from a finite-difference discretization. For eigenvalues of such matrices, see [15]. We choose the discretization so that $N$ and $n$ are large and $m$ is small. This equation has a solution $(y^*, z^*)$ at which $DF(y^*, z^*)$ has full rank. We choose a starting point $y_0$ near $y^*$. An SVD of $A(y_0)$ produces an appropriate matrix block $C$. We list the errors $\|y_k - y^*\|$ in Table 2, again showing quadratic convergence. Using (12) with the final iterate then gives $z$ with a correspondingly small error.

Table 2: Newton iterations for Example 2.

The relative flop count shows that in this example, using one LU factorization of $B(y)$ instead of a QR factorization of $A(y)$, as used in alternative methods, reduces the flop count per iteration by almost half. In general, if $n$ and $N$ are large and close to each other, as is usually the case in compatible systems, the flop count is approximately halved.
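The claimed saving can be checked directly from the leading-order operation counts cited from [13] in Section 3; this small sketch (the sizes are illustrative, not those of the example) evaluates the ratio:

```python
# Leading-order flop counts for dense factorizations (Golub & Van Loan [13]):
def lu_flops(N):
    """LU factorization of the N x N bordered matrix B(y)."""
    return 2 * N**3 / 3

def qr_flops(N, n):
    """Householder QR factorization of the N x n matrix A(y)."""
    return 2 * n**2 * (N - n / 3)

N, n = 1000, 995                      # illustrative sizes with n close to N
print(lu_flops(N) / qr_flops(N, n))   # ratio near 1/2
```

As $n \to N$ the QR cost approaches $4N^3/3$, twice the $2N^3/3$ cost of the LU factorization, which is the source of the "approximately halved" figure.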

We now give an example of a compatible overdetermined system.

Example 3. Consider a compatible overdetermined system of the form (1), in which the matrix $A(y)$ and the vector $b(y)$ are given explicitly and $N > n + m$.

This equation has a solution $(y^*, z^*)$ at which $DF(y^*, z^*)$ has full rank. We choose a starting point $y_0$ near $y^*$. An SVD of $A(y_0)$ produces an appropriate matrix block $C$. We list the errors $\|y_k - y^*\|$ in Table 3, clearly showing quadratic convergence of the Gauss-Newton method. Using (12) with the final iterate then gives $z$ with a correspondingly small error. In each iteration of this example, using one LU factorization of $B(y)$ instead of one QR factorization of $A(y)$ reduces the flop count by almost half, measured by the flop counts given in Section 3.

Table 3: Gauss-Newton iterations for Example 3.

#### 5. Conclusions

We present an algorithm for solving the separable equation
$$F(y,z) \equiv A(y)z - b(y) = 0, \quad (28)$$
where the $N \times n$ matrix $A(y)$ and the vector $b(y) \in \mathbb{R}^N$ are continuously differentiable functions of $y \in \mathbb{R}^m$, with $N \ge n + m$. We assume that $DF(y,z)$ is Lipschitz continuous and has full rank $n + m$ in a neighborhood of a solution; thus, $A(y)$ has full rank $n$ in this neighborhood. Our technique replaces (28) by $N - n$ equations $f(y) = 0$ in the $m$ unknowns $y$, which we solve by either the Newton or the Gauss-Newton iterative method, in both cases with quadratic convergence. Our method uses one LU factorization of an $N \times N$ matrix instead of a QR factorization of the $N \times n$ matrix $A(y)$ per iteration, and may thus be substantially more efficient than competing methods (approximately halving the cost) when $n$ is close to $N$ and $N$ is large, as is usually the case in compatible systems. The method is applicable to fully determined separable systems and to compatible overdetermined separable systems.

#### References

1. G. Golub and V. Pereyra, “Separable nonlinear least squares: the variable projection method and its applications,” Inverse Problems, vol. 19, no. 2, pp. R1–R26, 2003.
2. K. M. Mullen and I. H. M. van Stokkum, “The variable projection algorithm in time-resolved spectroscopy, microscopy and mass spectrometry applications,” Numerical Algorithms, vol. 51, no. 3, pp. 319–340, 2009.
3. G. H. Golub and V. Pereyra, “The differentiation of pseudo-inverses and nonlinear least squares problems whose variables separate,” Report STAN-CS-72-261, Stanford University, Computer Science Department, 1972.
4. G. H. Golub and V. Pereyra, “The differentiation of pseudo-inverses and nonlinear least squares problems whose variables separate,” SIAM Journal on Numerical Analysis, vol. 10, no. 2, pp. 413–432, 1973.
5. Y.-Q. Shen and T. J. Ypma, “Solving nonlinear systems of equations with only one nonlinear variable,” Journal of Computational and Applied Mathematics, vol. 30, no. 2, pp. 235–246, 1990.
6. T. J. Ypma and Y.-Q. Shen, “Solving $N+m$ nonlinear equations with only $m$ nonlinear variables,” Computing, vol. 44, no. 3, pp. 259–271, 1990.
7. G. G. Lukeman, Separable Overdetermined Nonlinear Systems: An Application of the Shen-Ypma Algorithm, VDM Verlag Dr. Müller, Saarbrücken, Germany, 2009.
8. Y.-Q. Shen and T. J. Ypma, “Numerical bifurcation of separable parameterized equations,” Electronic Transactions on Numerical Analysis, vol. 34, pp. 31–43, 2008-2009.
9. J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, NY, USA, 1970.
10. J. E. Dennis Jr. and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, SIAM, Philadelphia, Pa, USA, 1996.
11. P. Deuflhard and A. Hohmann, Numerical Analysis in Modern Scientific Computing, Springer, New York, NY, USA, 2nd edition, 2003.
12. C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations, SIAM, Philadelphia, Pa, USA, 1995.
13. G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
14. Z. Li and K. Ito, The Immersed Interface Method, SIAM, Philadelphia, Pa, USA, 2006.
15. G. D. Smith, Numerical Solution of Partial Differential Equations: Finite Difference Methods, Oxford University Press, New York, NY, USA, 3rd edition, 1985.