Abstract
We propose an approach to enhance the performance of a diagonal variant of the secant method for solving large-scale systems of nonlinear equations. In this approach, the diagonal secant method uses data from the two preceding steps rather than a single step, via a weak secant equation, to improve the updated approximate Jacobian in diagonal form. The numerical results verify that the proposed approach yields a clear enhancement in numerical performance.
1. Introduction
Solving systems of nonlinear equations is becoming more essential in the analysis of complex problems in many research areas. The problem considered is to find a solution of the nonlinear system
$F(x) = 0$, (1.1)
where $F : \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable in an open neighborhood of a solution $x^*$ of the system (1.1). We assume that there exists $x^*$ with $F(x^*) = 0$ and $F'(x^*)$ nonsingular, where $F'(x)$ is the Jacobian of $F$ at $x$, which is assumed to be locally Lipschitz continuous at $x^*$. The prominent method for finding a solution of (1.1) is Newton's method, which generates a sequence of iterates $\{x_k\}$ from a given initial guess $x_0$ via
$x_{k+1} = x_k - F'(x_k)^{-1} F(x_k)$, $k = 0, 1, 2, \ldots$, (1.2)
where $F'(x_k)$ is the Jacobian of $F$ at $x_k$. However, Newton's method requires the computation of the matrix of first-order derivatives of the system. In practice, computing some of the derivatives is quite costly, and sometimes they are not available or cannot be obtained precisely. In such cases, Newton's method cannot be used directly.
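For concreteness, a minimal MATLAB sketch of iteration (1.2) might look as follows; the function handles F and J (the system and its Jacobian) and the tolerance are illustrative assumptions, not part of the paper.

```matlab
% Minimal sketch of Newton's iteration (1.2) for F(x) = 0.
% F and J are user-supplied function handles (illustrative names).
function x = newton_sketch(F, J, x, tol, maxit)
for k = 1:maxit
    Fx = F(x);
    if norm(Fx) <= tol, break; end   % convergence test on the residual
    x = x - J(x) \ Fx;               % solve J(x_k) s = -F(x_k), then step
end
end
```

For instance, `newton_sketch(@(x) x.^2 - 1, @(x) diag(2*x), [2; 3], 1e-10, 50)` drives a simple separable system to the solution with all components equal to one. Each iteration requires forming and factorizing the full Jacobian, which is exactly the cost the diagonal variants below avoid.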
It is imperative to mention that some efforts have already been made to eliminate the well-known shortcomings of Newton's method for solving systems of nonlinear equations, particularly large-scale systems. These so-called revised Newton-type methods include the Chord Newton method, the inexact Newton method, quasi-Newton methods, and so forth (see, e.g., [1–4]). On the other hand, most of these variants of Newton's method still share some of its shortcomings. For example, Broyden's method [5] and the Chord Newton method need to store an $n \times n$ matrix, and their floating-point operation cost is of order $n^2$ per iteration.
To deal with these disadvantages, a diagonal Newton's method has been suggested by Leong et al. [6], in which the Jacobian is approximated by a nonsingular diagonal matrix that is updated at every iteration. Incorporating this updating strategy, based upon the weak secant equation of Dennis and Wolkowicz [7], Leong et al. [6] showed that their algorithm is appreciably cheaper than Newton's method and some of its variants. It is worth noting that they employ a standard one-step, two-point approach in the Jacobian approximation, which is commonly used by most Newton-like methods. In contrast, this paper presents a new diagonal-updating Newton's method that extends the procedure of [6] and employs a two-step multipoint scheme to increase the accuracy of the Jacobian approximation. We organize the rest of this paper as follows. In the next section, we present the details of our method. Some numerical results are reported in Section 3. Finally, conclusions are given in Section 4.
2. Two-Step Diagonal Approximation
Here, we define our new diagonal variant of Newton's method, which generates a sequence of points $\{x_k\}$ via
$x_{k+1} = x_k - Q_k^{-1} F(x_k)$, (2.1)
where $Q_k$ is a diagonal approximation of the Jacobian matrix, updated at each iteration. Our plan is to build a matrix $Q_k$ through a diagonal updating scheme such that $Q_k$ is a good approximation of the Jacobian in some sense. For this purpose, we make use of an interpolating curve in the variable space to extend the weak secant equation $s_k^T Q_{k+1} s_k = s_k^T y_k$, derived initially by Dennis and Wolkowicz [7], where $s_k = x_{k+1} - x_k$ and $y_k = F(x_{k+1}) - F(x_k)$. This is made possible by considering some of the most successful two-step methods (see [8–11] for more detail). By integrating this two-step information, we can present an improved weak secant equation as follows:
$\rho_k^T Q_{k+1} \rho_k = \rho_k^T w_k$, (2.2)
where the vectors $\rho_k$ and $w_k$ combine the data $(s_k, s_{k-1})$ and $(y_k, y_{k-1})$ from the two preceding steps. Substituting the definitions of $\rho_k$ and $w_k$ into (2.2) gives the expanded form (2.3). Since we use information from the last two steps instead of one previous step in (2.2) and (2.3), we are required to build two interpolating quadratic curves: one that interpolates the last two preceding iterates $x_{k-1}$ and $x_k$, and one that interpolates the last two preceding function evaluations $F(x_{k-1})$ and $F(x_k)$ (which are assumed to be available).
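Before turning to the parameter computations, note the cost profile of iteration (2.1). Since $Q_k$ is diagonal, it can be stored as a vector of its diagonal entries, so the "linear solve" reduces to a componentwise division; a minimal sketch (the names q and x are illustrative assumptions):

```matlab
% One iteration of (2.1) with Q_k stored as the vector q of its diagonal
% entries: each step costs O(n) flops and O(n) storage, in contrast to
% the O(n^2) storage of Broyden- or Chord-type methods.
x_next = x - F(x) ./ q;
```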
Using the approach introduced in [8], we can determine the interpolation parameter in (2.2) by computing three interpolation quantities, given by (2.4)–(2.6) below. Provided the relevant denominator is nonzero, this parameter is well defined, and the vectors $\rho_k$ and $w_k$ are then given by (2.7) and (2.8), respectively; that is, the quantity appearing in (2.4) is expressed through these values.
Assume that $Q_k$ is a nonsingular diagonal matrix. The updated version of $Q_k$, that is, $Q_{k+1}$, is then defined by
$Q_{k+1} = Q_k + \Delta_k$, (2.9)
where $\Delta_k$, the deviation between $Q_{k+1}$ and $Q_k$, is also a diagonal matrix. To preserve an accurate Jacobian approximation, we update $Q_k$ in such a way that the following condition is satisfied:
$\rho_k^T Q_{k+1} \rho_k = \rho_k^T w_k$. (2.10)
We proceed by controlling the growth error of $\Delta_k$ through minimizing its size under the Frobenius norm, such that (2.10) holds. Consequently, we consider the following problem:
$\min \frac{1}{2}\|\Delta_k\|_F^2 = \frac{1}{2}\sum_{i=1}^{n} (\Delta_k^{(i)})^2$ subject to $\sum_{i=1}^{n} \Delta_k^{(i)} (\rho_k^{(i)})^2 = \rho_k^T w_k - \rho_k^T Q_k \rho_k$, (2.11)
where $\Delta_k^{(i)}$, $i = 1, \ldots, n$, are the diagonal elements of $\Delta_k$ and $\rho_k^{(i)}$, $i = 1, \ldots, n$, are the components of the vector $\rho_k$.
In view of the fact that the objective function in (2.11) is strictly convex and the feasible set is also convex, problem (2.11) has a unique solution. Hence its Lagrangian function can be expressed as follows:
$L(\Delta_k, \lambda) = \frac{1}{2}\sum_{i=1}^{n} (\Delta_k^{(i)})^2 + \lambda \big( \sum_{i=1}^{n} \Delta_k^{(i)} (\rho_k^{(i)})^2 - (\rho_k^T w_k - \rho_k^T Q_k \rho_k) \big)$, (2.12)
where $\lambda$ is the Lagrange multiplier associated with the constraint.
Taking the partial derivatives of (2.12) with respect to each component of $\Delta_k$ and setting them equal to zero gives
$\Delta_k^{(i)} = -\lambda (\rho_k^{(i)})^2$, $i = 1, \ldots, n$. (2.13)
Multiplying (2.13) by $(\rho_k^{(i)})^2$ and summing over all $i$ yields
$\sum_{i=1}^{n} \Delta_k^{(i)} (\rho_k^{(i)})^2 = -\lambda \sum_{i=1}^{n} (\rho_k^{(i)})^4$. (2.14)
Using (2.14) and the constraint, we obtain after some simplification
$\lambda = -\dfrac{\rho_k^T w_k - \rho_k^T Q_k \rho_k}{\sum_{i=1}^{n} (\rho_k^{(i)})^4}$. (2.15)
Next, by substituting (2.15) into (2.13) and performing a little algebra, we obtain
$\Delta_k^{(i)} = \dfrac{(\rho_k^T w_k - \rho_k^T Q_k \rho_k)\,(\rho_k^{(i)})^2}{\sum_{j=1}^{n} (\rho_k^{(j)})^4}$, $i = 1, \ldots, n$. (2.16)
Letting $E_k = \mathrm{diag}\big((\rho_k^{(1)})^2, \ldots, (\rho_k^{(n)})^2\big)$ and noting that $\sum_{i=1}^{n} (\rho_k^{(i)})^4 = \mathrm{tr}(E_k^2)$, where $\mathrm{tr}(\cdot)$ is the trace operation, we can finally present the updating formula for $Q_{k+1}$:
$Q_{k+1} = Q_k + \dfrac{\rho_k^T w_k - \rho_k^T Q_k \rho_k}{\mathrm{tr}(E_k^2)}\, E_k$. (2.17)
Throughout, $\|\cdot\|$ denotes the Euclidean norm of a vector.
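The update (2.17) is cheap to apply in vector form. The following MATLAB sketch assumes the two-step vectors rho and w have already been formed via (2.7)–(2.8) and that q holds the diagonal of $Q_k$; all variable names are illustrative.

```matlab
% Sketch of the diagonal update (2.17) in vector arithmetic.
E      = rho.^2;                    % diagonal entries of E_k
num    = rho'*w - E'*q;             % rho'*(w - Q_k*rho), the residual in (2.10)
q_next = q + (num / (E'*E)) * E;    % E'*E = sum(rho_i^4) = trace(E_k^2)
% By construction, sum(q_next .* rho.^2) equals rho'*w,
% so the updated diagonal satisfies condition (2.10).
```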
To safeguard against the possibility of generating an undefined $Q_{k+1}$ (the update (2.17) breaks down when $\mathrm{tr}(E_k^2)$ is very small), we propose to use our updating scheme for $Q_{k+1}$ only when it is well defined, and to set $Q_{k+1} = Q_k$ otherwise. Now, we can describe the algorithm for our proposed method as follows.
Algorithm 2.1 (2-MFDN).
Step 1. Choose an initial guess $x_0$ and $Q_0 = I$; let $k := 0$.
Step 2. Compute $F(x_k)$. If the stopping criterion (3.1) is satisfied, stop.
Step 3. If ..., define ...; else if ..., set ... and go to Step 5.
Step 4. If ..., compute the interpolation quantities via (2.4)–(2.6), respectively, and find $\rho_k$ and $w_k$ using (2.7) and (2.8), respectively. If ..., set ... .
Step 5. Let $x_{k+1} = x_k - Q_k^{-1} F(x_k)$ and update $Q_{k+1}$ as defined by (2.17).
Step 6. Check whether the update is well defined; if so, retain the $Q_{k+1}$ computed in Step 5. Else set $Q_{k+1} = Q_k$.
Step 7. Set $k := k + 1$ and go to Step 2.
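For illustration, the following MATLAB sketch assembles the pieces above into a simplified driver. Since the parameter computations (2.4)–(2.8) are not reproduced in the text, the one-step pair $\rho_k = s_k$, $w_k = y_k$ is used here as a stand-in, and the Step 6 safeguard threshold is an assumption; this is a schematic, not the authors' code.

```matlab
% Schematic driver in the spirit of Algorithm 2.1 (2-MFDN), simplified:
% the two-step pairing of (2.4)-(2.8) is replaced by the one-step pair.
function x = two_mfdn_sketch(F, x, tol, maxit)
n  = numel(x);
q  = ones(n, 1);                      % Step 1: Q_0 = I, stored as a vector
Fx = F(x);
for k = 0:maxit
    if norm(Fx) <= tol, break; end    % Step 2: stopping test
    x_new  = x - Fx ./ q;             % Step 5: x_{k+1} = x_k - Q_k^{-1} F(x_k)
    Fx_new = F(x_new);
    rho = x_new - x;                  % stand-in for (2.7)
    w   = Fx_new - Fx;                % stand-in for (2.8)
    E   = rho.^2;
    den = E'*E;                       % trace(E_k^2)
    if den > 1e-12                    % Step 6 safeguard (threshold assumed)
        q = q + ((rho'*w - E'*q) / den) * E;
    end                               % else retain Q_k
    x = x_new; Fx = Fx_new;           % Step 7
end
end
```

Note that the loop stores only vectors, which is what permits the very large dimensions reported in Section 3.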
3. Numerical Results
In this section, we analyze the performance of the 2-MFDN method compared with four Newton-like methods. The codes are written in MATLAB 7.0 and run in double precision; the stopping criterion used is given in (3.1). The methods that we consider are: (i) Newton's method (NM); (ii) the Chord (fixed) Newton method (FN); (iii) Broyden's method (BM); (iv) MFDN, the method proposed by Leong et al. [6].
The identity matrix has been chosen as the initial approximate Jacobian. The benchmark problems are run in six different dimensions, namely 25, 50, 100, 500, 1000, and 250000.
The symbol "—" is used to indicate a failure due to one of the following: (i) the number of iterations reaches 500 without a point satisfying (3.1) being obtained; (ii) the CPU time in seconds reaches 500; (iii) there is insufficient memory to initiate the run.
In the following, we give some details on the benchmark test problems.
Problem 1. Trigonometric system of Shin et al. [12]:
Problem 2. Artificial function:
Problem 3. Artificial function:
Problem 4. Trigonometric function of Spedicato [13]:
Problem 5. Sparse system of Shin et al. [12]:
Problem 6. Artificial function:
Problem 7 (see [14]).
From Table 1, it is noticeable that using the two-step approach in building the diagonal updating scheme significantly improves the performance of the one-step diagonal variant of Newton's method (MFDN). This observation is most significant for CPU time and number of iterations, particularly as the system dimension increases. In addition, it is worth mentioning that the result of the 2-MFDN method on Problem 3 shows that the method can be a good solver even when the Jacobian is nearly singular.
The numerical results presented in this paper show that the 2-MFDN method is a good alternative to the MFDN method, especially for extremely large systems.
4. Conclusions
In this paper, a new variant of the secant method for solving large-scale systems of nonlinear equations has been developed (2-MFDN). Unlike the single-step method, our method employs a two-step, three-point scheme to update the Jacobian approximation as a nonsingular diagonal matrix. The motivation behind this approach is the enhanced accuracy of the Jacobian approximation. Our method requires substantially lower computational cost and fewer iterations than the MFDN method of Leong et al. [6]. This is more noticeable as the dimension of the system increases. Therefore, from the numerical results presented, we can conclude that our method (2-MFDN) is a superior algorithm compared with the NM, FN, BM, and MFDN methods in handling large-scale systems of nonlinear equations.