Abstract

One-parameter families of Newton’s iterative method for the solution of nonlinear equations, and their extension to unconstrained optimization problems, are presented in this paper. These methods are derived by applying approximations through a straight line and through a parabolic curve in the vicinity of the root. The presented variants are found to yield better performance than Newton’s method and, in addition, they overcome its limitations.

1. Introduction

Newton’s method is one of the most fundamental tools in computational mathematics, operations research, optimization, and control theory. It has many applications in management science, industrial and financial research, chaos and fractals, dynamical systems, variational inequalities and equilibrium-type problems, stability analysis, data mining, and even random operator equations. Its role in optimization theory cannot be overestimated, as the method is the basis for the most effective procedures in linear and nonlinear programming. For a more detailed survey, one may refer to [1] and the references cited therein.

Let f be a sufficiently smooth, continuously differentiable function of a single variable x. One of the most basic problems in numerical analysis is to find a solution of the frequently occurring nonlinear equation f(x) = 0 (1.1). Let y = f(x) (1.2) represent the graph of the function f.

A large number of iterative methods have been developed for finding the solution of single-variable nonlinear equations as well as for the solution of systems of nonlinear equations. One important reason for this variety is that no single method works for all types of problems. For a more detailed survey of the most important methods, many excellent textbooks are available in the literature [2–4].

Newton’s method is probably the simplest, most flexible, best known, and most used numerical method. However, as is well known, a major difficulty in the application of Newton’s method is the selection of the initial guess, which must be chosen sufficiently close to the true solution in order to guarantee convergence. Finding a criterion for choosing the initial guess is quite cumbersome, and the method may fail miserably if, at any stage of the computation, the derivative of the function is either zero or very small. For this reason, the method can exhibit poor convergence and run into stability problems.

Newton’s method [1] is also an important and basic method for solving nonlinear, univariate, unconstrained optimization problems, and it converges quadratically. The idea behind Newton’s method is to approximate the objective function locally by a quadratic function which, at the current iterate, agrees with the objective function up to second derivatives. Again, the condition f''(x) ≠ 0 in a neighborhood of the root is required for the success of Newton’s method.

The purpose of this paper is to eliminate the defects of Newton’s method by the simple modification of iteration processes. Numerical results indicate that the proposed iterative formulae are effective and comparable to the well-known Newton’s method. Furthermore, the presented techniques have guaranteed convergence unlike Newton’s method and are as simple as this known technique.

2. Proposed Methods for Single-Variable Nonlinear Equations

In this section, we shall derive two families by applying approximation via a straight line and via a parabolic curve.

(a) Approximation by a Straight Line.
Consider the equation of a straight line, with a suitably chosen slope and passing through a prescribed point, written in the form (2.1), where x_0 is an initial guess to an exact root of (1.1). Let x_1 be the better approximation to the root, and assume that the straight line (2.1) intersects the graph (1.2) of the function at the point (x_1, f(x_1)). The straight line (2.1), while passing through this point, takes the form (2.2). Expanding the left-hand side by means of Taylor’s expansion about the point x_0 and simplifying (retaining terms up to the first order), we get an iteration formula for the improved approximation x_1.
The general formula for successive approximations is given by

x_{n+1} = x_n − f(x_n) / (f'(x_n) ± p f(x_n)),  n = 0, 1, 2, …,  (2.5)

where p is a real parameter. This is the one-parameter family of Newton’s method. This formula was derived earlier and independently by Mamta et al. [5] and by Kanwar and Tomar [6] using different approaches. In order to obtain quadratic convergence of the method, the sign in the denominator should be chosen so that the denominator is largest in magnitude. The formula is well defined even if f'(x_n) is zero, unlike Newton’s method.
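As an illustration of how (2.5) can be organized in code, the following minimal Python sketch implements the family with the sign chosen so that the denominator is largest in magnitude; the function names, the test equation f(x) = x^3 − 2, the starting guess, and the tolerance are illustrative assumptions rather than data from the paper.

```python
def modified_newton(f, fprime, x0, p=1.0, tol=1e-12, max_iter=100):
    """One-parameter family (2.5): x_{n+1} = x_n - f(x_n)/(f'(x_n) +/- p*f(x_n)).

    The sign is chosen so that the denominator is largest in magnitude, which
    keeps the step defined even when f'(x_n) = 0.
    """
    x = x0
    for n in range(max_iter):
        fx, dfx = f(x), fprime(x)
        # Pick the sign that maximizes |f'(x_n) +/- p*f(x_n)|.
        denom = dfx + p * fx if abs(dfx + p * fx) >= abs(dfx - p * fx) else dfx - p * fx
        if denom == 0.0:  # happens only if f(x) = 0 as well, i.e. x is already a root
            return x, n
        x_new = x - fx / denom
        if abs(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_iter

# Illustrative run: f(x) = x**3 - 2 with x0 = 0, where f'(x0) = 0 and plain Newton breaks down.
root, iterations = modified_newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=0.0, p=1.0)
```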

(b) Approximation by a Parabola
Consider a parabola in the form (2.6). Adopting the same procedure as in the previous case (a), one obtains the iterative formula (2.7).
In (2.7), the sign in the denominator should be chosen so that the denominator is largest in magnitude. This is a parabolic version of Newton’s method [6] and does not fail if f'(x_n) = 0. Note that for p = 0, the classical Newton formula is recovered from formulae (2.5) and (2.7).

(c) Exponential Iteration Formulae.
Exploiting the main idea of Mamta et al. [5], Chen and Li [7] derived new classes of quadratically convergent exponential iterative methods. Along similar lines, we can also derive exponential iterative formulae for solving nonlinear equations.
Letting x_{n+1} be the better approximation to the exact root, from (2.5) and (2.7) we obtain the corresponding exponential iteration formulae, respectively. Letting p = 0 in these formulae, we obtain another exponential iterative formula. Note that by taking the first-order Taylor expansion of the exponential term in (2.9), Newton’s formula is recovered. The idea can be generalized further (similarly to Mir and Rafiq [8, 9]) to the case of multiple zeros of nonlinear equations.
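The exponential formulae themselves are not reproduced above; as a purely illustrative sketch, the snippet below assumes that the p = 0 exponential variant takes a Chen–Li-style form x_{n+1} = x_n · exp(−f(x_n)/(x_n f'(x_n))) — an assumption on our part — whose first-order Taylor expansion reduces to Newton’s step as noted in the text.

```python
import math

def exponential_newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """Sketch of an exponential iteration of the assumed form
    x_{n+1} = x_n * exp(-f(x_n) / (x_n * f'(x_n))).

    Expanding exp(t) ~ 1 + t to first order gives x_n - f(x_n)/f'(x_n),
    i.e. Newton's step. Requires x_n != 0 and f'(x_n) != 0 at every iterate.
    """
    x = x0
    for n in range(max_iter):
        x_new = x * math.exp(-f(x) / (x * fprime(x)))
        if abs(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_iter
```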

3. Extension to Unconstrained Optimization Problems

In this section, we shall extend the formulae (2.5) and (2.7) to solve nonlinear, univariate and unconstrained optimization problems.

Consider the nonlinear optimization problem: minimize f(x), where f is a nonlinear, twice-differentiable function.

(a) Extension of Formula (2.5).
Assume that f is a sufficiently smooth function and has an extremum (maximum or minimum) at a point x*. From (2.3), consider the auxiliary function with parameter p given by (3.1). It is possible to construct a quadratic function q which agrees with this auxiliary function up to second derivatives in the neighborhood of a point x_n, that is, (3.2). We may calculate an estimate of the extremum at the next step by finding the point where the derivative of q vanishes [1, 10]; setting q'(x) = 0, we obtain (3.3). This gives the iterative formula (3.4). This is a one-parameter family of Newton’s method for the unconstrained optimization problem, and it does not fail even if f''(x_n) = 0, unlike Newton’s method.
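Equation (3.4) is not displayed above; the sketch below assumes it has the form obtained from (2.5) by replacing f with f', namely x_{n+1} = x_n − f'(x_n)/(f''(x_n) ± p f'(x_n)), with the sign again chosen so that the denominator is largest in magnitude. The helper name and defaults are illustrative.

```python
def modified_newton_opt(fprime, fsecond, x0, p=1.0, tol=1e-12, max_iter=100):
    """Sketch of the assumed form of (3.4):
    x_{n+1} = x_n - f'(x_n)/(f''(x_n) +/- p*f'(x_n)).

    The sign is chosen so the denominator is largest in magnitude, so the
    step remains defined even where f''(x_n) = 0.
    """
    x = x0
    for n in range(max_iter):
        g, h = fprime(x), fsecond(x)
        denom = h + p * g if abs(h + p * g) >= abs(h - p * g) else h - p * g
        if denom == 0.0:  # only when f'(x) = 0 too, i.e. x is already a stationary point
            return x, n
        x_new = x - g / denom
        if abs(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_iter
```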

(b) Extension of Formula (2.7).
Similarly, it is possible to construct a quadratic function from (2.6) which agrees with the auxiliary function up to second derivatives in the neighborhood of a point x_n, that is, (3.5). Taking into account that the derivative of this quadratic vanishes at the next approximation, we get (3.6). This is the modification of formula (2.7) for the unconstrained optimization problem, which again does not fail even if f''(x_n) = 0. In (3.4) and (3.6), the sign in the denominator should be chosen so that the denominator is largest in magnitude. If we let p = 0 in (3.4) and (3.6), we obtain Newton’s iteration formula for the unconstrained optimization problem [1].
Adopting the same procedure as for the exponential iteration formulae of Section 2, we can also derive exponential, quadratically convergent iterative formulae for unconstrained optimization. Recently, Kahya [10] also derived similar formulae, namely (3.4) and (3.6), using a different approach based on the ideas of Mamta et al. [5].

4. Convergence Analysis

Here, we shall present the mathematical proof for the order of convergence of iterative formulae (3.4) and (3.6), respectively.

Theorem 4.1. Let f be a sufficiently differentiable function defined on an interval I, and let x* ∈ I be an optimum point of f. Assume that the initial guess x_0 is sufficiently close to x* and that f''(x) ≠ 0 in I. Then the iteration scheme defined by formula (3.4) is quadratically convergent and satisfies the following error equation, where e_n = x_n − x* and the remaining constants are expressed in terms of the derivatives of f at x*.

Proof. Since x* is an optimum point of f, we have f'(x*) = 0 and f''(x*) ≠ 0. Expanding f'(x_n) and f''(x_n) about x* by Taylor’s expansion, we obtain (4.3) and (4.4). Furthermore, combining these gives (4.5). Using (4.5) in (3.4), we get the desired error equation. This proves the quadratic convergence of the formula (3.4).

Theorem 4.2. Let f be a sufficiently differentiable function defined on an interval I, and let x* be an optimum point of f. Assume that the initial guess x_0 is sufficiently close to x* in I. Then the iteration scheme defined by formula (3.6) is quadratically convergent and satisfies the following error equation, which coincides with that of Newton’s method.

Proof. Using (4.3) and (4.4), we obtain (4.8). Using (4.8) in (3.6), we get the stated error equation. It is interesting to note that this error equation is the same as that of Newton’s method. This completes the proof of the theorem.
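As a numerical sanity check on the quadratic convergence claimed in Theorems 4.1 and 4.2, one can estimate the computational order of convergence from three consecutive errors, ρ_n ≈ ln(|e_{n+1}|/|e_n|) / ln(|e_n|/|e_{n−1}|). The sketch below does this for the scheme of Section 3(a) in the assumed form used earlier; the objective f(x) = x^2 + e^x and the starting point are illustrative choices, not test problems from the paper.

```python
import math

def convergence_order(fprime, fsecond, x0, p=1.0, steps=8):
    """Empirical order-of-convergence estimate for the assumed scheme
    x_{n+1} = x_n - f'(x_n)/(f''(x_n) +/- p*f'(x_n))."""
    xs = [x0]
    for _ in range(steps):
        g, h = fprime(xs[-1]), fsecond(xs[-1])
        denom = h + p * g if abs(h + p * g) >= abs(h - p * g) else h - p * g
        xs.append(xs[-1] - g / denom)
    x_ref = xs[-1]  # use the last iterate as the reference optimum
    errs = [abs(x - x_ref) for x in xs[:-1]]
    return [math.log(errs[i + 1] / errs[i]) / math.log(errs[i] / errs[i - 1])
            for i in range(1, len(errs) - 1)
            if errs[i - 1] > 0 and errs[i] > 0 and errs[i + 1] > 0]

# Illustrative objective f(x) = x**2 + exp(x); estimates close to 2 indicate quadratic convergence.
print(convergence_order(lambda x: 2 * x + math.exp(x), lambda x: 2 + math.exp(x), x0=1.0))
```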

5. Numerical Examples

Here we consider some examples to compare the number of iterations needed by the traditional Newton’s method and by its modifications, namely (2.5), (2.7), (3.4), and (3.6), respectively, for solving nonlinear equations (Tables 1 and 2) as well as unconstrained optimization problems (Tables 3 and 4). For simplicity, the formulae are tested for a fixed value of the parameter p. Computations have been performed in double-precision arithmetic.

In the following problems, we are to find the root of each equation in the given interval.
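The paper’s test problems appear in Tables 1–4 and are not reproduced here. As a hedged illustration of how such a comparison can be scripted, the snippet below counts iterations for classical Newton’s method and for the family (2.5) on an assumed equation f(x) = x^3 − 2 = 0 with starting guess x_0 = 0, reusing modified_newton from the sketch in Section 2.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """Classical Newton iteration, used here only as the baseline for comparison."""
    x = x0
    for n in range(max_iter):
        dfx = fprime(x)
        if dfx == 0.0:  # Newton's method breaks down when the derivative vanishes
            return None, n
        x_new = x - f(x) / dfx
        if abs(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_iter

f = lambda x: x**3 - 2
df = lambda x: 3 * x**2
print(newton(f, df, x0=0.0))                  # breaks down immediately: f'(0) = 0
print(modified_newton(f, df, x0=0.0, p=1.0))  # defined in Section 2's sketch; converges to 2**(1/3)
```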

6. Discussion and Conclusions

This study presents several second-order iterative formulae for solving scalar nonlinear equations and unconstrained optimization problems. The numerical examples in Tables 1–4 show that in many cases our methods are an efficient alternative to Newton’s method, which may converge slowly or even fail. They are simple extensions of Newton’s formula and have well-known geometric derivations. These methods remove the severe conditions f'(x) ≠ 0 or f''(x) ≠ 0 required by Newton’s method in the case of nonlinear equations or nonlinear unconstrained optimization problems, respectively. The behaviors of Newton’s method and of the proposed modifications can be compared through their correction factors. For example, Newton’s correction factor f(x_n)/f'(x_n) is now modified to f(x_n)/(f'(x_n) ± p f(x_n)), where the parameter p is chosen so that the corresponding function values have opposite signs. However, if the derivatives of the function are singular or almost singular, Newton’s method will either fail or diverge.

Therefore, these modifications have two remarkable advantages over Newton’s method, namely: (i) if p ≠ 0, the modified denominator of the proposed modifications is well defined and never zero, provided x_n is not itself accepted as an approximation to the required root or optimum, respectively; hence, the methods remain well defined even if f'(x_n) = 0 or f''(x_n) = 0; (ii) the absolute value of the modified denominator is always greater than that of the denominator of Newton’s method, that is, |f'(x_n) ± p f(x_n)| > |f'(x_n)|, provided x_n is not accepted as an approximation to the required root or optimum, respectively. This means that the proposed methods are numerically more stable than Newton’s method. Finally, the numerical experiments demonstrate that the parabolic methods outperform both Newton’s method and the one-parameter family of Newton’s method.
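For instance, under the sign convention of Sections 2 and 3 (the sign chosen so that the denominator is largest in magnitude), point (ii) reduces to the elementary estimate

|f'(x_n) ± p f(x_n)| = |f'(x_n)| + |p| |f(x_n)| ≥ |f'(x_n)|,

with strict inequality whenever p ≠ 0 and f(x_n) ≠ 0; the analogous estimate for the optimization formulae is obtained by replacing f with f' and f' with f''. This is a sketch of the argument under that convention rather than a relation quoted from the original tables or proofs.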

Acknowledgment

The authors would like to thank the reviewers and the academic editor for many valuable comments and suggestions.