International Journal of Engineering Mathematics
Volume 2016, Article ID 6390367, 18 pages
http://dx.doi.org/10.1155/2016/6390367
Research Article

A New Accurate and Efficient Iterative Numerical Method for Solving the Scalar and Vector Nonlinear Equations: Approach Based on Geometric Considerations

Grégory Antoni

Aix-Marseille Université, IFSTTAR, LBA UMR T24, 13016 Marseille, France

Received 31 March 2016; Accepted 12 June 2016

Academic Editor: José A. Tenreiro Machado

Copyright © 2016 Grégory Antoni. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper deals with a new numerical iterative method for finding the approximate solutions of both scalar and vector nonlinear equations. The iterative method proposed here is an extended version of the numerical procedure originally developed in previous works. The present study proposes to show that this new root-finding algorithm, combined with a stationary-type iterative method (e.g., Gauss-Seidel or Jacobi), is able to provide a more accurate solution than the classical Newton-Raphson method. A numerical analysis of the developed iterative method is addressed and discussed on some specific equations and systems.

1. Introduction

Solving nonlinear equations and systems is a situation very often encountered in various fields of the formal or physical sciences. For instance, solid mechanics is a branch of physics where problems governed by nonlinear equations and systems occur frequently [1–10]. In most cases, the Newton method (also known as the Newton-Raphson algorithm) is used for approximating the solutions of scalar and vector nonlinear equations [11–13]. Over the years, however, several other numerical methods have been developed to iteratively provide the approximate solutions of nonlinear equations and/or systems [14–25]. Some of them have the advantage of combining high accuracy with strong efficiency by relying on an enhanced Newton-Raphson algorithm [26]. In this study, we propose to improve the iterative procedure developed in previous works [27, 28] for numerically finding the solution of both scalar and vector nonlinear equations. This study is organized as follows: (i) in Section 2, a new numerical geometry-based root-finding algorithm coupled with a stationary-type iterative method (such as Jacobi or Gauss-Seidel) is presented with the aim of solving any system of nonlinear equations [29, 30]; (ii) in Section 3, the numerical predictive abilities of the proposed iterative method are tested on some examples and compared with other algorithms [31, 32].

2. New Iterative Numerical Method for Solving the Scalar and Vector Nonlinear Equations Based on a Geometric Approach

2.1. Problem Statement

We consider a vector-valued function $\mathbf{F} : D \subset \mathbb{R}^n \to \mathbb{R}^n$, which is continuous and infinitely differentiable (i.e., $\mathbf{F} \in C^{\infty}(D)$), satisfying the following equation:
\[
\mathbf{F}(\mathbf{x}) = \mathbf{0}, \quad \text{with } \mathbf{x} = (x_1, \ldots, x_n)^{T}, \tag{1}
\]
where $\mathbf{x}$ denotes the vector-valued variable (with $\mathbf{x} \in D$), $x_i$ is the $i$th component associated with vector $\mathbf{x}$ (with $i = 1, \ldots, n$), $(\cdot)^{T}$ is the transpose operator associated with the variable $\mathbf{x}$, and $C^{\infty}(D)$ denotes the class of infinitely differentiable functions in domain $D$. It should be mentioned that: (i) the nonlinear function $\mathbf{F}$ has a unique solution $\mathbf{x}^{*}$ on domain $D$, which is an open subset of $\mathbb{R}^n$, that is, $\mathbf{x}^{*} \in D$ such that $\mathbf{F}(\mathbf{x}^{*}) = \mathbf{0}$; (ii) the case of a scalar equation ($f(x) = 0$) with only one variable ($x \in \mathbb{R}$) is obtained when $n = 1$, that is, $\mathbf{F} \equiv f$.

Equation (1) can also be rewritten as a system of $n$ scalar nonlinear equations, that is,
\[
f_i(x_1, \ldots, x_n) = 0, \quad i = 1, \ldots, n, \tag{2}
\]
where $f_i$ denotes the $i$th component associated with the vector-valued function $\mathbf{F}$ (see (1)), that is, the $i$th nonlinear equation of system (2). It should be noted that: (i) in the case of $n \geq 2$, the nonlinear system (2) has a unique solution set $\{x_1^{*}, \ldots, x_n^{*}\}$ such that $f_i(x_1^{*}, \ldots, x_n^{*}) = 0$ for $i = 1, \ldots, n$; (ii) in the case of $n = 1$, nonlinear system (2) reduces to a scalar nonlinear equation $f(x) = 0$, which has a unique solution $x^{*}$ such that $f(x^{*}) = 0$; (iii) (1) and (2) are mathematically equivalent, that is, $\mathbf{F}(\mathbf{x}) = \mathbf{0} \Leftrightarrow f_i(x_1, \ldots, x_n) = 0$ for $i = 1, \ldots, n$.

With the aim of numerically solving system (2), we adopt a Root-Finding Algorithm (RFA) coupled with a Stationary Iterative Procedure (SIP) such as Jacobi or Gauss-Seidel [26, 30]. The use of a SIP makes it possible to reduce the considered nonlinear system to a succession of nonlinear equations in a single variable, each of which can then be solved with an RFA [30]. In the present study, we propose an extended version of the RFA already developed in [27, 28], combined with a Jacobi or Gauss-Seidel type iterative procedure, for dealing with any system of nonlinear equations.

2.2. Stationary Iterative Procedures (SIPs) with Root-Finding Algorithms (RFAs)
2.2.1. Jacobi and Gauss-Seidel Iterative Procedures

In order to solve a system of nonlinear equations, any RFA can be used provided it is combined with a SIP (i.e., Jacobi or Gauss-Seidel) [26, 29, 30]. A Jacobi or Gauss-Seidel type procedure applied to nonlinear system (1) can be described as follows: at each outer iteration $k+1$, the $i$th equation is solved for the single unknown $x_i$,
\[
f_i\bigl(x_i;\, \mathcal{S}_i\bigr) = 0, \quad i = 1, \ldots, n,
\]
with
(i) in the case of the Jacobi procedure:
\[
\mathcal{S}_i = \bigl\{x_1^{(k)}, \ldots, x_{i-1}^{(k)}, x_{i+1}^{(k)}, \ldots, x_n^{(k)}\bigr\};
\]
(ii) in the case of the Gauss-Seidel procedure:
\[
\mathcal{S}_i = \bigl\{x_1^{(k+1)}, \ldots, x_{i-1}^{(k+1)}, x_{i+1}^{(k)}, \ldots, x_n^{(k)}\bigr\},
\]
where the superscript $(k)$ (resp., $(k+1)$) denotes the $k$th (resp., $(k+1)$th) iteration associated with the variables $x_j$ ($j = 1, \ldots, n$), and $\mathcal{S}_i$ is the set of variables kept constant while the $i$th equation is solved; the solution of this scalar equation provides the updated value $x_i^{(k+1)}$.
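To make the coupling concrete, the sketch below (in Python rather than the authors' Matlab implementation, and not reproducing their code) shows a nonlinear Jacobi/Gauss-Seidel outer loop in which each one-variable subproblem is solved by a few scalar Newton steps with a finite-difference derivative; any scalar RFA, such as the AGA of Section 2.3, could be substituted for this inner solver. The test system and all parameter values are purely illustrative.

```python
# Illustrative sketch (not the authors' code): a nonlinear Jacobi/Gauss-Seidel
# outer loop that reduces an n-dimensional system F(x) = 0 to a sequence of
# one-variable problems, each solved here by a few scalar Newton steps with a
# central finite-difference derivative. Any scalar root-finding algorithm (RFA)
# could be plugged in instead of `scalar_newton`.
import numpy as np

def scalar_newton(g, x0, steps=5, h=1e-7):
    """Approximate a root of the one-variable function g, starting from x0."""
    x = x0
    for _ in range(steps):
        d = (g(x + h) - g(x - h)) / (2.0 * h)   # central finite difference
        if d == 0.0:
            break
        x -= g(x) / d
    return x

def nonlinear_sip(F, x0, mode="gauss-seidel", outer_iters=50, tol=1e-10):
    """Solve F(x) = 0 componentwise with a stationary iterative procedure (SIP).

    mode = 'gauss-seidel': freshly updated components are reused within a sweep;
    mode = 'jacobi'      : all components are frozen at the previous outer iterate.
    """
    x = np.array(x0, dtype=float)
    for _ in range(outer_iters):
        x_old = x.copy()
        base = x if mode == "gauss-seidel" else x_old
        for i in range(x.size):
            def g(xi, i=i):
                trial = base.copy()
                trial[i] = xi
                return F(trial)[i]          # i-th equation in the single unknown x_i
            x[i] = scalar_newton(g, base[i])
        if np.linalg.norm(F(x)) < tol:
            break
    return x

# Hypothetical test system (not one of the paper's examples):
# x1^2 + x2 - 3 = 0 and x1 + x2^2 - 5 = 0, whose solution is (1, 2).
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
print(nonlinear_sip(F, [1.0, 1.0]))
```

The only difference between the two procedures lies in which values the frozen set contains: the updates of the current sweep (Gauss-Seidel) or the components of the previous outer iterate (Jacobi).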

2.3. Used Root-Finding Algorithm (RFA)

In previous works [27, 28], a root-finding algorithm (RFA) was developed for approximating the solutions of scalar nonlinear equations. The new RFA presented here is an extended version of the one previously developed, taking into account some geometric considerations. In this paper, we propose to use this RFA coupled with Jacobi and Gauss-Seidel type procedures for iteratively solving the nonlinear system. Hence, we adopt a new RFA for finding the approximate solution (the other variables being fixed and belonging to the known set) associated with each nonlinear equation of the system (see Section 2.1). For each nonlinear equation, parametrized by the set of known variables and depending only on one variable, we introduce the exact and inexact local curvatures associated with the curve representing the nonlinear equation in question.

The RFA used here is based on the following main steps (see [27] for more details); a sketch of the standard geometric ingredients involved is given after this list:

(i) In the first step, we consider the iterative tangent and normal straight lines associated with the nonlinear function at the current point (see Figure 1), which involve the value and the first-order derivative of the function at that point, the set of known variables, and two associated functionals.

(ii) In the second step, we introduce the iterative exact and inexact local curvatures associated with the curve representing the nonlinear function at the current point (see Figure 1), which involve the absolute-value function, the exact or inexact radius of the osculating circle at that point, a functional associated with it, and the second-order derivative of the function at that point. It should be noted that: (a) the exact radius is associated with the true osculating circle at the point (see [33]); (b) in line with [27, 28], we also consider an inexact radius associated with the osculating circle at the point (see (7)).

(iii) In the third step, we define the iterative center associated with the exact and inexact osculating circles at the current point (see Figure 1). Combining (7) and (8), the iterative centers are obtained; each center is associated with the exact or inexact osculating circle of the curve representing the nonlinear function at the point, two functionals are involved, and the sign function is used (equal to 1 for a positive argument, -1 for a negative argument, and 0 for a zero argument).

(iv) In the fourth step, we introduce an iterative point defined through a functional associated with the previous quantities.

(v) In the fifth step, we define the iterative straight line passing through the two iterative points defined above, with the set of known variables.

(vi) In the sixth step, we introduce the iterative straight line passing through the current point and perpendicular to the previous iterative straight line, defined through an associated functional.

(vii) In the last step, we define the iterative point which is the solution of the following relation, involving a functional associated with the previous quantities. In line with (10), (14) can be rewritten accordingly.
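As a complement, the sketch below illustrates the standard geometric ingredients on which the construction relies (tangent line, radius of curvature, and centre of the osculating circle of the curve y = f(x) at the current iterate). It covers only the exact-curvature case and does not reproduce the specific functionals or the inexact radius of the AGA, which are defined in [27, 28].

```python
# Minimal sketch of the exact-curvature geometric quantities used by the RFA at
# an iterate xk: tangent line, radius of curvature, and centre of the osculating
# circle of the curve y = f(x). The AGA's inexact radius and its update
# functionals (see [27, 28]) are intentionally not reproduced here.

def osculating_data(f, df, d2f, xk):
    """Return the tangent line, curvature radius and osculating-circle centre at xk."""
    y, yp, ypp = f(xk), df(xk), d2f(xk)
    if ypp == 0.0:
        raise ValueError("curve is locally flat: the osculating circle is undefined")
    radius = (1.0 + yp**2) ** 1.5 / abs(ypp)        # exact radius of curvature
    # centre of curvature of the graph y = f(x) (standard formulas)
    xc = xk - yp * (1.0 + yp**2) / ypp
    yc = y + (1.0 + yp**2) / ypp
    tangent = lambda x: y + yp * (x - xk)           # tangent line at (xk, f(xk))
    return tangent, radius, (xc, yc)

# Purely illustrative values: f(x) = x^2 - 2 evaluated at xk = 1.5.
f, df, d2f = (lambda x: x**2 - 2.0), (lambda x: 2.0 * x), (lambda x: 2.0)
tangent, radius, centre = osculating_data(f, df, d2f, 1.5)
print(radius, centre)
```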

Figure 1: Schematic diagram with the specific entities used by the new RFA applied on th component associated with system in the case of monotonically increasing (a) and decreasing (b) evolution with the known set of parameters .

The new iterative method, which we will hereafter refer to as the “Adaptive Geometric-based Algorithm” (AGA), provides a more suitable approximate solution for a system of nonlinear equations of the type considered above (see Figure 2), with one expression of the iteration holding under four of the conditions listed below and another expression holding under the remaining two conditions, in terms of the fixed-point function [13] associated with the considered RFA (i.e., AGA) and the set of known variables.

Figure 2: Geometric interpretation of the new RFA (i.e., AGA) applied on th component associated with system in the case of monotonically increasing (a) and decreasing (b) evolution with the known set of parameters .

The different conditions associated with the proposed RFA (i.e., AGA) are as follows:
(i) First condition [BC1] is
(ii) Second condition [BC2] is
(iii) Third condition [BC3] is
(iv) Fourth condition [BC4] is
(v) Fifth condition [BC5] is
(vi) Sixth condition [BC6] is

3. Numerical Examples

3.1. Preliminary Remarks

In this section, we propose to evaluate the predictive abilities of the numerical iterative method developed in Section 2.3 (i.e., AGA) on some examples, both in the case of scalar and vector nonlinear equations. Hence, AGA is compared with other iterative Newton-Raphson type methods [27, 28, 30–32] coupled with the Jacobi (J) and Gauss-Seidel (GS) techniques. All the numerical implementations of the iterative methods presented here have been carried out in Matlab (see [26, 34–39]).

The iterative methods used for the different examples are as follows (see [27, 28, 30–32]); a compact sketch of the classical Newton-Raphson iteration for systems is given after this list:
(i) Newton-Raphson Algorithm (NRA):
which involves the first-order differential operator (i.e., the Jacobian) associated with the nonlinear function at the current point and its inverse. It is important to highlight that the NRA can be used if and only if this inverse operator exists, that is, if the determinant of the Jacobian is nonzero.
(ii) Standard Newton’s Algorithm (SNA):
(iii) Third-order Modified Newton Method (TMNM):
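For reference, the sketch below gives a compact, self-contained version of the classical Newton-Raphson iteration for systems mentioned above, with the Jacobian approximated by forward finite differences; the test system, the tolerance, and the iteration limit are illustrative assumptions, not values taken from the paper.

```python
# Compact sketch of the classical Newton-Raphson Algorithm (NRA) for F(x) = 0:
# at each iteration the linearised system J(x_k) dx = -F(x_k) is solved, which
# requires the Jacobian J(x_k) to be invertible (nonzero determinant). The
# Jacobian is approximated here by forward finite differences.
import numpy as np

def jacobian_fd(F, x, h=1e-7):
    """Forward finite-difference approximation of the Jacobian of F at x."""
    n = x.size
    F0 = F(x)
    J = np.empty((n, n))
    for j in range(n):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (F(xp) - F0) / h
    return J

def newton_raphson(F, x0, max_iter=50, tol=1e-12):
    """Newton-Raphson iteration x_{k+1} = x_k - J(x_k)^{-1} F(x_k)."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        dx = np.linalg.solve(jacobian_fd(F, x), -Fx)   # fails if J is singular
        x = x + dx
    return x

# Hypothetical test system (not one of the paper's examples):
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, np.exp(x[0]) + x[1] - 1.0])
print(newton_raphson(F, [1.0, -1.0]))
```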

In order to stop the iterative process associated with each considered algorithm, we consider three coupled types of criteria for dealing with nonlinear equations:
(i) For scalar-valued equations:
(a) (C1S) on the iteration number, where the maximum number of iterations associated with scalar-valued equations is prescribed.
(b) (C2S) on the residue error, where a tolerance parameter is associated with the residue error criterion for scalar-valued equations and the absolute value is used as the norm.
(c) (C3S) on the approximation error, where a tolerance parameter is associated with the absolute error criterion for scalar-valued equations.
(ii) For vector-valued equations:
(a) (C1V) on the iteration number, for which we adopt the same condition as (C1S) (with the maximum number of iterations associated with vector-valued equations).
(b) (C2V) on the residue error, where a tolerance parameter is associated with the residue error criterion for vector-valued equations and the vector $\ell_2$-norm is used; it is important to point out that this is the so-called Euclidean norm.
(c) (C3V) on the approximation error, where a tolerance parameter is associated with the absolute error criterion for vector-valued equations.

Here, for the stopping criteria (C1S, C1V), (C2S, C2V), and (C3S, C3V) associated with the iterative process, we consider: (i) the maximum number of iterations ; (ii) the tolerance parameter for the scalar-valued equations; (iii) the tolerance parameter (with ) for the vector-valued equations.
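A minimal sketch of how the coupled stopping criteria for the vector-valued case can be implemented is given below; the iteration limit and tolerance values are placeholders rather than the values adopted in the paper, and the precise way the three criteria are combined (any single criterion triggering a stop) is an assumption.

```python
# Sketch of the coupled stopping criteria of Section 3.1 for the vector-valued
# case: (C1V) maximum iteration count, (C2V) residue error ||F(x_k)||_2 and
# (C3V) approximation error ||x_k - x_{k-1}||_2. The values of k_max and of the
# tolerances below are placeholders, not those used in the paper, and the rule
# "stop as soon as any criterion is met" is an assumption about the coupling.
import numpy as np

def should_stop(k, x_k, x_prev, F, k_max=100, tol_res=1e-12, tol_apx=1e-12):
    """Return True as soon as one of the criteria (C1V)-(C3V) is satisfied."""
    c1 = k >= k_max                                       # (C1V) iteration number
    c2 = np.linalg.norm(F(x_k), 2) <= tol_res             # (C2V) residue error
    c3 = np.linalg.norm(np.asarray(x_k) - np.asarray(x_prev), 2) <= tol_apx  # (C3V)
    return c1 or c2 or c3
```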

3.2. Examples

We consider the following nonlinear equations.

(i) In the scalar-valued case, one has the following:

Example 1. Consider the following:

Example 2. Consider the following:

(ii) In the vector-valued case, one has the following:

Example 3. Consider the following:

Example 4. Consider the following:

3.3. Results and Discussion

All the numerical results of Examples 1–4 are shown in Figures 3–32. For Example 1 (resp., Example 2) with the first starting guess, we can see that the approximate solutions provided by AGA with condition [BC1]/[BC3] (the two conditions coinciding in this case) are better than those of AGA with the other considered condition, of NRA/SNA (which coincide in the scalar case), and of TMNM. For Example 1 (resp., Example 2) with the second starting guess, we can see that the approximate solutions provided by AGA with the considered conditions are, in the first iterations, more accurate than those of NRA/SNA (which again coincide) and of TMNM. For Example 3 with the first starting-guess couple, we can observe that the approximate solutions given by AGA using the Gauss-Seidel (GS) or Jacobi (J) procedure are (i) with one condition, numerically more accurate than NRA, TMNM, and SNA, and (ii) with the other two conditions, more accurate than NRA (only in the first iterations) and SNA. In the case of the second starting-guess couple, we can see that the approximate solutions provided by AGA using (i) the Gauss-Seidel (GS) procedure with one condition give much greater numerical accuracy than NRA and SNA, (ii) the Gauss-Seidel (GS) procedure with the two other conditions offer much greater numerical accuracy than NRA and SNA, and (iii) the Jacobi (J) procedure with the three considered conditions are better than NRA (only in the first iterations) and SNA. For Example 4 with the first starting-guess couple, we can see that the approximate solutions given by AGA using (i) the Gauss-Seidel (GS) procedure with two of the conditions are numerically more accurate than SNA, TMNM, and NRA, (ii) the Gauss-Seidel (GS) procedure with the remaining condition are more accurate than TMNM, NRA (only in the first iterations), and SNA, and (iii) the Jacobi (J) procedure with the considered conditions, including [BC3], are numerically more accurate than NRA (only in the first iterations) and than both TMNM and SNA. In the case of the second starting-guess couple, the approximate solutions obtained by AGA using (i) the Gauss-Seidel (GS) procedure with two conditions are numerically more accurate than SNA and NRA, and (ii) the Jacobi (J) procedure with three conditions are more accurate than NRA (only in the first iterations) and SNA. An overview of the different numerical results shows that the Adaptive Geometric-based Algorithm (AGA) is able to provide quite accurate approximate solutions to both nonlinear equations and systems and can potentially provide a better or more suitable approximate solution than that of the Newton-Raphson Algorithm (NRA).

Figure 3: Evolution of approximate solutions associated with () compared to th iteration for Example 1 (where ) with NRA/SNA (black solid line with circles), TMNM (green solid line with circles), and AGA with condition / (blue solid line with circles) and condition (red solid line with circles).
Figure 4: Evolution of approximate solutions associated with () compared to th iteration for Example 1 (where ) with NRA/SNA (black solid line with circles), TMNM (green solid line with circles), and AGA with condition (blue solid line with circles) and condition / (magenta solid line with circles).
Figure 5: Evolution of residue error (C2S) and approximation error (C3S) associated with () compared to th iteration for Example 1 (where ) with NRA/SNA (black solid line with diamonds and squares), TMNM (green solid line with diamonds and squares), and AGA with condition / (blue solid line with diamonds and squares) and condition (red solid line with diamonds and squares).
Figure 6: Evolution of residue error (C2S) and approximation error (C3S) associated with () compared to th iteration for Example 1 (where ) with NRA/SNA (black solid line with diamonds and squares), TMNM (green solid line with diamonds and squares), and AGA with condition [BC4] (blue solid line with diamonds and squares) and condition / (magenta solid line with diamonds and squares).
Figure 7: Evolution of approximate solutions associated with () compared to th iteration for Example 2 (where ) with NRA/SNA (black solid line with circles), TMNM (green solid line with circles), and AGA with condition / (blue solid line with circles) and condition (red solid line with circles).
Figure 8: Evolution of approximate solutions associated with () compared to th iteration for Example 2 (where ) with NRA/SNA (black solid line with circles), TMNM (green solid line with circles), and AGA with condition (blue solid line with circles), condition (red solid line with circles), and condition (magenta solid line with circles).
Figure 9: Evolution of residue error (C2S) and approximation error (C2S) associated with () compared to th iteration for Example 1 (where ) with NRA/SNA (black solid line with diamonds and squares), TMNM (green solid line with diamonds and squares), and AGA with condition / (blue solid line with diamonds and squares) and condition (red solid line with diamonds and squares).
Figure 10: Evolution of residue error (C2S) and approximation error (C3S) associated with () compared to th iteration for Example 2 (where ) with NRA/SNA (black solid line with diamonds and squares), TMNM (green solid line with diamonds and squares), and AGA with condition (blue solid line with diamonds and squares), condition (red solid line with diamonds and squares), and condition (magenta solid line with diamonds and squares).
Figure 11: Evolution of approximate solutions () associated with () compared to th iteration for Example 3 (where ) with NRA (black solid line with circles) and some other algorithms coupled with Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with circles), TMNM-GS (green solid line with circles), AGA-GS with condition (blue solid line with circles), condition (red solid line with circles), and condition (magenta solid line with circles).
Figure 12: Evolution of approximate solutions () associated with () compared to th iteration for Example 3 (where ) with NRA (black solid line with circles) and some other algorithms coupled with Jacobi (J) procedure: SNA-J (cyan dashed line with circles), TMNM-J (green dashed line with circles), and AGA-J with condition (blue dashed line with circles), condition (red dashed line with circles), and condition (magenta dashed line with circles).
Figure 13: Evolution of residue error using (C2V1) and (C2V2) conditions associated with () compared to th iteration for Example 3 (where ) with NRA (black solid line with diamonds) and some other algorithms coupled with Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with diamonds), TMNM-GS (green solid line with diamonds), and AGA-GS with condition (blue solid line with diamonds), condition (red solid line with diamonds), and condition [BC3] (magenta solid line with diamonds).
Figure 14: Evolution of residue error using (C2V1) and (C2V2) conditions associated with () compared to th iteration for Example 3 (where ) with NRA (black solid line with diamonds) and some other algorithms coupled with Jacobi (J) procedure: SNA-J (cyan dashed line with diamonds), TMNM-J (green dashed line with diamonds), and AGA-J with condition (blue dashed line with diamonds), condition (red dashed line with diamonds), and condition (magenta dashed line with diamonds).
Figure 15: Evolution of the approximation error using (C3V1) and (C3V2) conditions associated with () compared to th iteration for Example 3 (where ) with NRA (black solid line with squares) and some other algorithms coupled with Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with squares), TMNM-GS (green solid line with squares), and AGA-GS with condition (blue solid line with squares), condition (red solid line with squares), and condition (magenta solid line with squares).
Figure 16: Evolution of the approximation error using (C3V1) and (C3V2) conditions associated with () compared to th iteration for Example 3 (where ) with NRA (black solid line with squares) and some other algorithms coupled with Jacobi (J) procedure: SNA-J (cyan dashed line with squares), TMNM-J (green dashed line with squares), and AGA-J with condition (blue dashed line with squares), condition (red dashed line with squares), and condition (magenta dashed line with squares).
Figure 17: Evolution of approximate solutions () associated with () compared to th iteration for Example 3 (where ) with NRA (black solid line with circles) and some other algorithms coupled with Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with circles) and AGA-GS with condition (blue solid line with circles), condition (red solid line with circles), and condition (magenta solid line with circles).
Figure 18: Evolution of approximate solutions () associated with () compared to th iteration for Example 3 (where ) with NRA (black solid line with diamonds and squares) and some other algorithms coupled with Jacobi (J) procedure: SNA-J (cyan dashed line with circles) and AGA-J with condition (blue dashed line with circles), condition (red dashed line with circles), and condition (magenta dashed line with circles).
Figure 19: Evolution of residue error using (C2V1) and (C2V2) conditions associated with () compared to th iteration for Example 3 (where ) with NRA (black line) and some other algorithms coupled with Gauss-Seidel (GS) and Jacobi (J) procedures: SNA-GS/SNA-J (cyan line) and AGA-GS/AGA-J with condition (blue line), condition (red line), and condition (magenta line).
Figure 20: Evolution of approximation error for (C3V1) and (C3V2) conditions associated with () compared to th iteration for Example 3 (where ) with NRA (black line) and some other algorithms coupled with Gauss-Seidel (GS) and Jacobi (J) procedures: SNA-GS/SNA-J (cyan line) and AGA-GS/AGA-J with condition (blue line), condition (red line), and condition (magenta line).
Figure 21: Evolution of approximate solutions () associated with () compared to th iteration for Example 4 (where ) with NRA (black solid line with circles) and some other algorithms coupled with Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with circles), TMNM-GS (green solid line with circles), and AGA-GS with condition (blue solid line with circles), condition (red solid line with circles), and condition (magenta solid line with circles).
Figure 22: Evolution of approximate solutions () associated with () compared to th iteration for Example 4 (where ) with NRA (black solid line with circles) and some other algorithms coupled with Jacobi (J) procedure: SNA-J (cyan dashed line with circles), TMNM-J (green dashed line with circles), and AGA-J with condition (blue dashed line with circles), condition (red dashed line with circles), and condition (magenta dashed line with circles).
Figure 23: Evolution of residue error using (C2V1) and (C2V2) conditions associated with () compared to th iteration for Example 4 (where ) with NRA (black solid line with diamonds) and some other algorithms coupled with Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with diamonds), TMNM-GS (green solid line with diamonds), and AGA-GS with condition (blue solid line with diamonds), condition (red solid line with diamonds), and condition (magenta solid line with diamonds).
Figure 24: Evolution of residue error using (C2V1) and (C2V2) conditions associated with () compared to th iteration for Example 4 (where ) with NRA (black solid line with diamonds) and some other algorithms coupled with Jacobi (J) procedure: SNA-J (cyan dashed line with diamonds), TMNM-J (green dashed line with diamonds), and AGA-J with condition (blue dashed line with diamonds), condition (red dashed line with diamonds), and condition (magenta dashed line with diamonds).
Figure 25: Evolution of the approximation error using (C3V1) and (C3V2) conditions associated with () compared to th iteration for Example 4 (where ) with NRA (black solid line with squares) and some other algorithms coupled with Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with squares), TMNM-GS (green solid line with squares), and AGA-GS with condition (blue solid line with squares), condition (red solid line with squares), and condition (magenta solid line with squares).
Figure 26: Evolution of the approximation error using (C3V1) and (C3V2) conditions associated with () compared to th iteration for Example 4 (where ) with NRA (black solid line with squares) and some other algorithms coupled with Jacobi (J) procedure: SNA-J (cyan dashed line with squares), TMNM-J (green dashed line with squares), and AGA-J with condition (blue dashed line with squares), condition (red dashed line with squares), and condition (magenta dashed line with squares).
Figure 27: Evolution of approximate solutions () associated with () compared to th iteration for Example 4 (where ) with NRA (black solid line with circles) and some other algorithms coupled with Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with circles) and AGA-GS with condition (blue solid line with circles), condition (red solid line with circles), and condition (magenta solid line with circles).
Figure 28: Evolution of approximate solutions () associated with () compared to th iteration for Example 4 (where ) with NRA (black solid line with circles) and some other algorithms coupled with Jacobi (J) procedure: SNA-J (cyan dashed line with circles) and AGA-J with condition (blue dashed line with circles), condition (red dashed line with circles), and condition (magenta dashed line with circles).
Figure 29: Evolution of the residue error using (C2V1) and (C2V2) conditions associated with () compared to th iteration for Example 4 (where ) with NRA (black solid line with diamonds) and some other algorithms coupled with Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with diamonds) and AGA-GS with condition (blue solid line with diamonds), condition (red solid line with diamonds), and condition (magenta solid line with diamonds).
Figure 30: Evolution of residue error using (C2V1) and (C2V2) conditions associated with () compared to th iteration for Example 4 (where ) with NRA (black solid line with diamonds) and some other algorithms coupled with Jacobi (J) procedure: SNA-J (cyan dashed line with diamonds) and AGA-J with condition (blue dashed line with diamonds), condition (red dashed line with diamonds), and condition (magenta dashed line with diamonds).
Figure 31: Evolution of approximation error using (C3V1) and (C3V2) conditions associated with () compared to th iteration for Example 4 (where ) with NRA (black solid line with squares) and some other algorithms coupled with Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with squares) and AGA-GS with condition (blue solid line with squares), condition (red solid line with squares), and condition (magenta solid line with squares).
Figure 32: Evolution of the approximation error using (C3V1) and (C3V2) conditions associated with () compared to th iteration for Example 4 (where ) with NRA (black solid line with squares) and some other algorithms coupled with Jacobi (J) procedure: SNA-J (cyan dashed line with squares) and AGA-J with condition (blue dashed line with squares), condition (red dashed line with squares), and condition (magenta dashed line with squares).

4. Concluding Comments

The present work concerns a new numerical iterative method for approximating the solutions of both scalar and vector nonlinear equations. Based on an iterative procedure developed in previous works, we propose here an extended form of this numerical algorithm, including the use of a stationary-type iterative procedure, in order to solve systems of nonlinear equations. The predictive abilities of the proposed method, in terms of providing a more accurate approximate solution for nonlinear equations and systems, are tested, assessed, and discussed on some specific examples.

Competing Interests

The author declares that there are no competing interests regarding the publication of this paper.

References

  1. O. C. Zienkiewicz and R. L. Taylor, The Finite Element Method: Solid Mechanics, vol. 2, Butterworth-Heinemann, 5th edition, 2000.
  2. T. Belytschko, W. K. Liu, and B. Moran, Nonlinear Finite Elements for Continua and Structures, John Wiley & Sons, New York, NY, USA, 2000.
  3. T. J. R. Hughes, The Finite Element Method: Linear Static and Dynamic Finite Element Analysis, Dover Civil and Mechanical Engineering, Dover, New York, NY, USA, 2000.
  4. I. Doghri, Mechanics of Deformable Solids: Linear, Nonlinear, Analytical and Computational Aspects, Springer, Berlin, Germany, 2000.
  5. A. Curnier, Méthodes Numériques en Mécanique des Solides, Presses Polytechniques et Universitaires Romandes, 2000.
  6. M. Kojić and K.-J. Bathe, Inelastic Analysis of Solids and Structures, Computational Fluid and Solid Mechanics, Springer, Berlin, Germany, 2005.
  7. M. Bonnet and A. Frangi, Analyse des Solides Déformables par la Méthode des Éléments Finis, Editions de l'Ecole Polytechnique, Paris, France, 2007.
  8. J. Besson, G. Cailletaud, J. L. Chaboche, and S. Forest, Non-Linear Mechanics of Materials, vol. 167 of Solid Mechanics and Its Applications, Springer, New York, NY, USA, 2010.
  9. R. De Borst, M. A. Crisfield, J. J. C. Remmers, and C. V. Verhoosel, Non-Linear Finite Element Analysis of Solids and Structures, Computational Mechanics, Wiley-Blackwell, 2012.
  10. M. Bonnet, A. Frangi, and C. Rey, The Finite Element Method in Solid Mechanics, McGraw-Hill Education, New York, NY, USA, 2014.
  11. C. T. Kelley, Solving Nonlinear Equations with Newton's Method, vol. 1 of Fundamental Algorithms for Numerical Calculations, SIAM, Philadelphia, Pa, USA, 2003.
  12. P. Deuflhard, Newton Methods for Nonlinear Problems. Computational Mathematics, vol. 35, Springer, New York, NY, USA, 2005.
  13. J. P. Dedieu, Points Fixes, Zéros et la Méthode de Newton, Mathématiques et Applications, Springer, 2006.
  14. R. W. Hamming, Numerical Methods for Scientists and Engineers, Dover, New York, NY, USA, 2nd edition, 1987.
  15. W. C. Rheinboldt, Methods for Solving Systems of Nonlinear Equations, CBMS- NSF Regional Conference Series in Applied Mathematics, Book 70, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 2nd edition, 1987.
  16. C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations. Number 16 in Frontiers in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1995.
  17. M. T. Darvishi and A. Barati, “A third-order Newton-type method to solve systems of nonlinear equations,” Applied Mathematics and Computation, vol. 187, no. 2, pp. 630–635, 2007.
  18. A. Golbabai and M. Javidi, “A third-order Newton type method for nonlinear equations based on modified homotopy perturbation method,” Applied Mathematics and Computation, vol. 191, no. 1, pp. 199–205, 2007.
  19. M. A. Noor and M. Waseem, “Some iterative methods for solving a system of nonlinear equations,” Computers and Mathematics with Applications, vol. 57, no. 1, pp. 101–106, 2009.
  20. S. Amat, S. Busquier, C. Bermúdez, and S. Plaza, “On two families of high order Newton type methods,” Applied Mathematics Letters, vol. 25, no. 12, pp. 2209–2217, 2012.
  21. J. R. Sharma and H. Arora, “On efficient weighted-Newton methods for solving systems of nonlinear equations,” Applied Mathematics and Computation, vol. 222, pp. 497–506, 2013.
  22. M. S. Petković, B. Neta, L. D. Petković, and J. Džunić, “Multipoint methods for solving nonlinear equations: a survey,” Applied Mathematics and Computation, vol. 226, pp. 635–660, 2014.
  23. J. R. Sharma and P. Gupta, “An efficient fifth order method for solving systems of nonlinear equations,” Computers and Mathematics with Applications, vol. 67, no. 3, pp. 591–601, 2014.
  24. A. Singh and J. P. Jaiswal, “An efficient family of optimal eighth-order iterative methods for solving nonlinear equations and its dynamics,” Journal of Mathematics, vol. 2014, Article ID 569719, 14 pages, 2014.
  25. M.-D. Junjua, S. Akram, N. Yasmin, and F. Zafar, “A new Jarratt-type fourth-order method for solving system of nonlinear equations and applications,” Journal of Applied Mathematics, vol. 2015, Article ID 805278, 14 pages, 2015.
  26. A. Quarteroni, R. Sacco, and F. Saleri, Méthodes Numériques Pour le Calcul Scientifique: Programmes en MATLAB, Springer, New York, NY, USA, 2000.
  27. G. Antoni, “A new iterative algorithm for approximating zeros of nonlinear scalar equations: geometry-based solving procedure,” Asian Journal of Mathematics and Computer Research, vol. 10, no. 2, pp. 78–97, 2016.
  28. G. Antoni, “A geometry-based iterative algorithm for finding the approximate solution of systems of nonlinear equations,” Asian Journal of Mathematics and Computer Research, In press.
  29. M. N. Vrahatis, G. D. Magoulas, and V. P. Plagianakos, “From linear to nonlinear iterative methods,” Applied Numerical Mathematics, vol. 45, no. 1, pp. 59–77, 2003.
  30. G. Antoni, “A new Newton-type method for solving both non-linear equations and systems,” Asian Journal of Mathematics and Computer Research, vol. 6, no. 3, pp. 193–212, 2015.
  31. G. Antoni, “A new class of two-step iterative algorithms for finding roots of nonlinear equations,” Asian Journal of Mathematics and Computer Research, vol. 7, no. 3, pp. 175–189, 2016.
  32. G. Antoni, “Some iterative algorithms with sub-steps to solve systems of non-linear equations,” Asian Journal of Mathematics and Computer Research, vol. 9, no. 3, pp. 214–227, 2016.
  33. F. P. Miller, A. F. Vandome, and J. McBrewster, Frenet-Serret Formulas: Vector Calculus, Curve, Derivative, Euclidean Space, Kinematics, Darboux Frame, Differential Geometry of Curves, Affine Geometry of Curves, Alphascript, San Carlos, Calif, USA, 2010.
  34. G. W. Recktenwald, Numerical Methods with MATLAB: Implementations and Applications, Pearson, New York, NY, USA, 2nd edition, 2000.
  35. J. H. Mathews and K. K. Fink, Numerical Methods Using Matlab, Pearson, New Jersey, NJ, USA, 4th edition, 2004.
  36. W. Y. Yang, W. Cao, T. S. Chung, and J. Morris, Applied Numerical Methods Using MATLAB, Wiley Interscience, 1st edition, 2005.
  37. J. Kiusalaas, Numerical Methods in Engineering with MATLAB, Cambridge University Press, Cambridge, UK, 2nd edition, 2009.
  38. R. Butt, Introduction to Numerical Analysis Using MATLAB, Jones & Bartlett Learning, 1st edition, 2009.
  39. S. C. Chapra, Applied Numerical Methods with MATLAB for Engineers and Scientists, McGraw-Hill Higher Education, 3rd edition, 2011.