Abstract

When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method, the eigenvalue decomposition-based modified Newton algorithm, is presented: it first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven, and a qualitative conclusion on the convergence rate is presented. Finally, a numerical experiment is given to compare the convergence domains of the modified algorithm and the classical algorithm.

1. Introduction

The Newton algorithm (also known as Newton's method) is commonly used in numerical analysis, especially in nonlinear optimization [1]. However, on the definition domain of the objective function the method requires that (1) the objective function be twice differentiable, (2) the Hessian matrix be positive definite, and (3) the initial solution be near the extreme point [2]. If there is a point in the definition domain whose Hessian matrix is not positive definite, then the Newton direction at this point is not a descent direction, so the classical algorithm may fail to converge on the definition domain.

There are several improved forms of the classical Newton algorithm that address the above problem: (1) combining the Newton direction with the steepest descent direction [3]; (2) combining the Newton direction with a negative curvature direction [4]; (3) adding a diagonal matrix $\mu I$ to the Hessian matrix so that the sum $H + \mu I$ is positive definite, where $\mu$ is a positive number larger than the absolute value of the smallest eigenvalue [5]. This paper presents a new algorithm to handle this "non-positive-definite problem."
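For orientation, the following is a rough NumPy sketch of approach (3), under the assumption that the shift is chosen as $|\lambda_{\min}|$ plus a small margin; the exact rule used in [5] may differ.

```python
import numpy as np

def shifted_newton_direction(H, g, margin=1e-8):
    """Approach (3): add mu*I to the Hessian so that H + mu*I is positive
    definite, with mu slightly larger than |lambda_min| when lambda_min <= 0.
    A rough sketch; the exact shift rule of [5] may differ."""
    lam_min = np.linalg.eigvalsh(H)[0]                  # smallest eigenvalue of symmetric H
    mu = (abs(lam_min) + margin) if lam_min <= 0 else 0.0
    return -np.linalg.solve(H + mu * np.eye(len(H)), g)
```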

2. Newton Algorithm

We discuss the extreme value problem of a multivariate function. Suppose the objective function $f(x)$, $x \in \mathbb{R}^n$, is twice differentiable; the extreme value problem of $f$ is
$$\min_{x \in \mathbb{R}^n} f(x). \tag{1}$$

If the Hessian matrix $H(x_k) = \nabla^2 f(x_k)$ at the current iterate $x_k$ is positive definite, the second-order Taylor expansion of $f$ at $x_k$ is
$$f(x) \approx f(x_k) + \nabla f(x_k)^{T}(x - x_k) + \tfrac{1}{2}(x - x_k)^{T} H(x_k)(x - x_k). \tag{2}$$

As we know, $\nabla f(x) = 0$ is the necessary condition for an extreme point, so setting the derivative of (2) to zero gives
$$\nabla f(x_k) + H(x_k)(x - x_k) = 0. \tag{3}$$

Equation (3) is usually called Newton's equation. When the matrix $H(x_k)$ is positive definite, the inverse matrix $H(x_k)^{-1}$ exists, and we get the iterative equation
$$x_{k+1} = x_k - H(x_k)^{-1} \nabla f(x_k). \tag{4}$$

Equation (4) is usually called the Newton iterative format, where $-\nabla f(x_k)$ is called the negative gradient direction and $d_k = -H(x_k)^{-1}\nabla f(x_k)$ is called the Newton direction.

Next, we must choose a proper step length to ensure that the function value decreases. A well-known variant is the damped Newton algorithm, which obtains the step length $\alpha_k$ by solving the one-dimensional optimization problem
$$\alpha_k = \arg\min_{\alpha > 0} f(x_k + \alpha d_k). \tag{5}$$
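A minimal NumPy sketch of the damped Newton iteration (4)–(5) follows; the exact one-dimensional minimization in (5) is replaced here by a simple Armijo backtracking search, which is an implementation convenience rather than the prescription of the paper.

```python
import numpy as np

def damped_newton(f, grad, hess, x0, tol=1e-8, max_iter=100):
    """Classical damped Newton iteration (4).  The step length of (5) is
    approximated by Armijo backtracking instead of an exact 1-D minimisation."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -np.linalg.solve(hess(x), g)    # Newton direction; needs a positive definite Hessian
        if g @ d >= 0:
            break                           # not a descent direction: classical Newton fails here
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5                    # backtrack until sufficient decrease holds
        x = x + alpha * d
    return x
```

When the Hessian is indefinite at some iterate, the direction computed above may not be a descent direction and the iteration stalls; this is exactly the failure mode the next section addresses.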

3. Modified Newton Algorithm Based on Eigenvalue Decomposition

3.1. Modified Algorithm

First, take the eigenvalue decomposition of the Hessian matrix $H(x_k)$:
$$H(x_k) = Q_k \Lambda_k Q_k^{T}, \tag{6}$$
where $Q_k$ is an orthogonal matrix whose columns are eigenvectors and $\Lambda_k = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$ is the diagonal matrix of eigenvalues. Because the matrix $H(x_k)$ is not positive definite, there must be some negative eigenvalues in $\Lambda_k$. We replace the negative eigenvalues with their absolute values, so the modified search direction is
$$d_k = -Q_k |\Lambda_k|^{-1} Q_k^{T} \nabla f(x_k), \qquad |\Lambda_k| = \operatorname{diag}(|\lambda_1|, \dots, |\lambda_n|). \tag{7}$$

The inner product of the negative gradient direction with $d_k$ is always positive, because the right-hand side of the following equation is a positive definite quadratic form [6]:
$$-\nabla f(x_k)^{T} d_k = \nabla f(x_k)^{T} Q_k |\Lambda_k|^{-1} Q_k^{T} \nabla f(x_k) > 0. \tag{8}$$
Equation (8) shows that the angle between the negative gradient direction and $d_k$ is less than 90°, so the new direction is always a descent direction. We then obtain an analogous iterative equation by replacing the Newton direction with $d_k$, that is,
$$x_{k+1} = x_k + \alpha_k d_k, \tag{9}$$
where $\alpha_k$ is the step length obtained by solving the optimization problem (5).
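A minimal NumPy sketch of the direction (7) is given below; the small floor `eps` that keeps near-zero eigenvalues invertible is an added safeguard, not part of the paper.

```python
import numpy as np

def modified_newton_direction(H, g, eps=1e-12):
    """Section 3.1 direction: eigen-decompose H = Q L Q^T as in (6), take the
    absolute values of the eigenvalues, and form d = -Q |L|^{-1} Q^T g as in (7)."""
    lam, Q = np.linalg.eigh(H)                  # symmetric H: real eigenvalues, orthogonal Q
    lam_abs = np.maximum(np.abs(lam), eps)      # |lambda_i|, floored away from zero
    return -(Q * (1.0 / lam_abs)) @ (Q.T @ g)   # equation (7)
```

The eigenvalue decomposition costs $O(n^3)$ operations, the same order as the matrix factorization needed for the classical Newton step, though with a larger constant.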

3.2. Convergence Conclusion

We now prove that the modified Newton algorithm based on eigenvalue decomposition is convergent. First, we recall the Wolfe-Powell conditions [7].

If the step length $\alpha_k$ satisfies the two following inequalities:
$$f(x_k + \alpha_k d_k) \le f(x_k) + \rho\,\alpha_k \nabla f(x_k)^{T} d_k, \qquad \nabla f(x_k + \alpha_k d_k)^{T} d_k \ge \sigma\,\nabla f(x_k)^{T} d_k, \tag{10}$$

then (10) is called the Wolfe-Powell condition, where the constants satisfy $0 < \rho < \sigma < 1$.
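A small helper that tests a trial step against (10) may clarify the two inequalities; the values $\rho = 0.1$ and $\sigma = 0.9$ are common illustrative choices satisfying $0 < \rho < \sigma < 1$, not values taken from the paper.

```python
import numpy as np

def satisfies_wolfe_powell(f, grad, x, d, alpha, rho=0.1, sigma=0.9):
    """Return True if the step length alpha satisfies both inequalities in (10)."""
    g0_d = grad(x) @ d                                               # directional derivative at x
    sufficient_decrease = f(x + alpha * d) <= f(x) + rho * alpha * g0_d
    curvature = grad(x + alpha * d) @ d >= sigma * g0_d
    return sufficient_decrease and curvature
```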

Theorem 1. Let $f$ be a continuously differentiable function, let $d_k$ be a descent direction at $x_k$ (that is, $\nabla f(x_k)^{T} d_k < 0$), and let $x_{k+1} = x_k + \alpha_k d_k$, where $f(x_k + \alpha d_k)$ is bounded below for $\alpha > 0$. Then there must be a finite interval $[a, b]$, $0 < a < b$, such that for all $\alpha_k \in [a, b]$ the Wolfe-Powell conditions (10) hold.

Proof. Let $\varphi(\alpha) = f(x_k + \alpha d_k)$. Since $\nabla f(x_k)^{T} d_k < 0$ and $0 < \rho < 1$, the line $l(\alpha) = f(x_k) + \rho\,\alpha \nabla f(x_k)^{T} d_k$ is unbounded below, while $\varphi(\alpha)$ is bounded below; hence the graph of $\varphi$ must intersect the line. Let $\hat\alpha > 0$ be the smallest intersection point, which satisfies the following equation:
$$f(x_k + \hat\alpha d_k) = f(x_k) + \rho\,\hat\alpha\,\nabla f(x_k)^{T} d_k. \tag{11}$$
Then
$$f(x_k + \alpha d_k) \le f(x_k) + \rho\,\alpha\,\nabla f(x_k)^{T} d_k, \qquad \alpha \in (0, \hat\alpha]. \tag{12}$$
According to the Mean Value Theorem, there must be $\tilde\alpha \in (0, \hat\alpha)$ that satisfies the following equation:
$$f(x_k + \hat\alpha d_k) - f(x_k) = \hat\alpha\,\nabla f(x_k + \tilde\alpha d_k)^{T} d_k. \tag{13}$$
Notice that (11) and (13) give $\nabla f(x_k + \tilde\alpha d_k)^{T} d_k = \rho\,\nabla f(x_k)^{T} d_k$, and since $\sigma > \rho$ and $\nabla f(x_k)^{T} d_k < 0$, we get
$$\nabla f(x_k + \tilde\alpha d_k)^{T} d_k = \rho\,\nabla f(x_k)^{T} d_k > \sigma\,\nabla f(x_k)^{T} d_k. \tag{14}$$
Notice that $\nabla f$ is a continuous operator, so there is an open interval $(a, b) \subset (0, \hat\alpha)$ with $\tilde\alpha \in (a, b)$ such that for any $\alpha \in (a, b)$ the following inequalities hold:
$$f(x_k + \alpha d_k) \le f(x_k) + \rho\,\alpha\,\nabla f(x_k)^{T} d_k, \qquad \nabla f(x_k + \alpha d_k)^{T} d_k \ge \sigma\,\nabla f(x_k)^{T} d_k. \tag{15}$$
Finally, we get the Wolfe-Powell condition.

Theorem 2. Let $f$ be a continuously differentiable function that is bounded below on $\mathbb{R}^n$, and let $\nabla f$ be uniformly continuous on an open set $N$ that contains the level set $\Omega = \{x : f(x) \le f(x_0)\}$, where $x_0$ is the initial point. Suppose the search directions $d_k$ satisfy the angle condition
$$\langle -\nabla f(x_k), d_k \rangle \le \frac{\pi}{2} - \mu, \tag{16}$$
where $\langle \cdot, \cdot \rangle$ stands for the angle between $-\nabla f(x_k)$ and $d_k$ and $\mu > 0$ is a constant. Then, with step lengths $\alpha_k$ chosen by the Wolfe-Powell conditions (10), there must be a $k$ satisfying $\nabla f(x_k) = 0$, or $\lim_{k \to \infty} \nabla f(x_k) = 0$.

Proof. We prove the theorem by contradiction. Suppose $\nabla f(x_k) \ne 0$ for all $k$ and that $\nabla f(x_k)$ does not tend to zero; then there is an infinite index set $K$ and a constant $\varepsilon > 0$ satisfying $\|\nabla f(x_k)\| \ge \varepsilon$ for all $k \in K$. Because $d_k$ satisfies the angle condition (16),
$$-\nabla f(x_k)^{T} d_k \ge \|\nabla f(x_k)\|\,\|d_k\| \cos\!\left(\frac{\pi}{2}-\mu\right) = \|\nabla f(x_k)\|\,\|d_k\| \sin\mu > 0. \tag{17}$$
Equation (17) shows that $d_k$ is a descent direction, so $\{f(x_k)\}$ is a decreasing sequence. Since $f$ is bounded below, $\{f(x_k)\}$ has a limit $f^{*}$ with $f^{*} > -\infty$; that is, $f(x_k) - f(x_{k+1}) \to 0$. Applying the first Wolfe-Powell condition in (10) at index $k$ together with (17),
$$f(x_k) - f(x_{k+1}) \ge -\rho\,\alpha_k \nabla f(x_k)^{T} d_k \ge \rho\,\alpha_k \|\nabla f(x_k)\|\,\|d_k\| \sin\mu. \tag{18}$$
It is easy to get $\alpha_k\|d_k\| = \|x_{k+1} - x_k\| \to 0$ for $k \in K$. Again by the second Wolfe-Powell condition in (10),
$$\left(\nabla f(x_{k+1}) - \nabla f(x_k)\right)^{T} d_k \ge (\sigma - 1)\,\nabla f(x_k)^{T} d_k \ge (1-\sigma)\,\|\nabla f(x_k)\|\,\|d_k\| \sin\mu. \tag{19}$$
Further,
$$\|\nabla f(x_{k+1}) - \nabla f(x_k)\| \ge (1-\sigma)\,\varepsilon \sin\mu > 0, \qquad k \in K. \tag{20}$$
Because $\|x_{k+1} - x_k\| \to 0$ on $K$, (20) is in contradiction with the assumption that $\nabla f$ is uniformly continuous on $N$. The contradiction shows that $\lim_{k \to \infty} \nabla f(x_k) = 0$.

4. Simulations on Convergence Properties

As the new search direction satisfies
$$d_k = -Q_k |\Lambda_k|^{-1} Q_k^{T} \nabla f(x_k) = -H(x_k)^{-1} \nabla f(x_k) \quad \text{whenever } H(x_k) \text{ is positive definite } (|\Lambda_k| = \Lambda_k), \tag{21}$$

if the Hessian matrix is positive definite at every point of the definition domain, then the convergence rate of the modified algorithm equals that of the classical Newton algorithm. On the other hand, if there are points where the Hessian matrix is not positive definite, then the convergence rate is at most equal to that of the classical Newton algorithm.
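A quick numerical check of this remark: for a positive definite Hessian the modified direction coincides with the classical Newton direction. The matrix and gradient below are arbitrary examples, not data from the paper.

```python
import numpy as np

H = np.array([[4.0, 1.0],
              [1.0, 3.0]])                              # an arbitrary positive definite Hessian
g = np.array([1.0, -2.0])                               # an arbitrary gradient
lam, Q = np.linalg.eigh(H)
d_modified = -(Q * (1.0 / np.abs(lam))) @ (Q.T @ g)     # -Q |L|^{-1} Q^T g, equation (7)
d_newton = -np.linalg.solve(H, g)                       # -H^{-1} g, equation (4)
print(np.allclose(d_modified, d_newton))                # True: the directions agree
```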

In addition, in order to verify that the improved algorithm has a wider convergence domain than the classical Newton algorithm, we employ a convex function as the objective function. Over the definition domain we choose grid points at 0.1 intervals, and then use both the classical Newton algorithm and the eigenvalue decomposition-based algorithm to find the minimum value, taking each grid point in turn as the initial solution. The results are drawn in Figure 1.
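The sweep below sketches how such a comparison can be organized; the objective function, domain bounds, iteration limit, and success tolerance are placeholders, since the paper's specific test function and region are not reproduced here.

```python
import numpy as np

def convergence_domain(step, grad, hess, x_star, lo, hi, spacing=0.1,
                       max_iter=50, tol=1e-6):
    """Classify each grid point of [lo, hi] x [lo, hi] (spacing 0.1, as in the text)
    as 'green' (the iteration started there reaches the known minimiser x_star)
    or 'red' (it does not).  `step` maps an iterate to the next one, so the same
    sweep serves both the classical and the modified Newton algorithm."""
    results = {}
    for x1 in np.arange(lo, hi + spacing / 2, spacing):
        for x2 in np.arange(lo, hi + spacing / 2, spacing):
            x = np.array([x1, x2])
            color = 'red'
            for _ in range(max_iter):
                try:
                    x = step(x, grad, hess)
                except np.linalg.LinAlgError:       # singular Hessian: counted as failure
                    break
                if np.linalg.norm(x - x_star) < tol:
                    color = 'green'
                    break
            results[(x1, x2)] = color
    return results
```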

In Figure 1(c), a green point means that the algorithm found the minimum value when using this point as the initial solution, and a red point means that the algorithm did not converge or did not find the minimum value. Obviously, the convergence domain of the eigenvalue decomposition-based algorithm is larger than that of the classical Newton algorithm.

5. Conclusion

A new algorithm has been put forward based on eigenvalue decomposition and the classical Newton algorithm. The eigenvalue decomposition-based algorithm has a wider convergence domain than the classical Newton algorithm.

Acknowledgments

This work is supported by the Natural Science Foundation of China (60974124), the Program for New Century Excellent Talents in University, the Project sponsored by SRF for ROCS, SEM in China, and the Key Laboratory Open Foundation for Space Flight Dynamics Technology (SFDLXZ-2010-004).