Abstract

The current manuscript is concerned with the development of the Newton–Raphson method, which plays a significant role in mathematics and in various other disciplines such as optimization, by means of fractional derivatives and the fractional Taylor series expansion. The development and modification of the Newton–Raphson method allow us to establish two new methods, called the first- and second-order fractional Newton–Raphson (FNR) methods. We provide a convergence analysis of the first- and second-order fractional methods and give a general condition for the convergence of higher-order FNR. Finally, some illustrative examples are considered to confirm the accuracy and effectiveness of both methods.

1. Introduction

The Newton–Raphson method is one of the most powerful techniques for locating the solutions of linear or nonlinear equations in numerous areas of science. Compared with other methods, its convergence rate is much better. In this method, the neighbourhood of the solution is approached by following the tangent lines of the curve. However, the method has some drawbacks: for some forms of equations it may fail to produce a solution, for instance when the initial guess or a later iterate falls into a cycle, which leads to divergence or oscillation of the method.

The theory and applications of the Newton–Raphson method were presented in [1]. Modified versions of Newton's method were given in [2–4]. Newton's method has been implemented for solving constrained or unconstrained minimization problems in [5–7]. A novel block Newton method was developed for the computation of invariant pairs representing eigenvalues and eigenvectors in [8]. A criterion for selecting the appropriate model, together with its applications, was established in [9]. Various optimization methods, such as the Newton–Raphson, bisection, gradient, and secant methods, were reviewed and discussed in [10]. The optimization solution of the estimating function in regression models was determined in [11, 12].

In the current study, we focus on developing a novel modification of the Newton–Raphson method by utilizing fractional derivatives and the fractional Taylor series expansion, which allows us to eliminate the shortcomings of the classical method. One advantage of this approach is that it can be applied with various fractional derivatives, such as the Riemann–Liouville and Caputo derivatives [13–22]. A convergence analysis of first- and second-order FNR is provided, and some conditions on the initial guess are obtained. Moreover, these conditions are generalized to higher-order FNR. The main advantages of first- and second-order FNR are that they are more effective and accurate than other existing methods, and the obtained approximations converge faster than those produced by other methods.

2. Preliminaries

This section is devoted to the fundamental notions of fractional calculus [16–18].

Definition 1. The $\alpha$ order Riemann–Liouville integral is given by [16]
$$ I^{\alpha}f(t)=\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-\tau)^{\alpha-1}f(\tau)\,d\tau,\qquad \alpha>0,\ t>0. $$

Definition 2. The $\alpha$ order fractional derivative in the Caputo sense is given by [16]
$$ D^{\alpha}f(t)=\frac{1}{\Gamma(n-\alpha)}\int_{0}^{t}(t-\tau)^{n-\alpha-1}D^{n}f(\tau)\,d\tau,\qquad n-1<\alpha\leq n, $$
where $D^{n}$ is the ordinary differential operator of order $n$.

Theorem 1 [23, 24]. Let us suppose that $D^{k\alpha}f\in C(a,b]$ for $k=0,1,\dots,n+1$, where $0<\alpha\leq 1$; then, we have
$$ f(x)=\sum_{k=0}^{n}\frac{(x-a)^{k\alpha}}{\Gamma(k\alpha+1)}\,D^{k\alpha}f(a)+\frac{(x-a)^{(n+1)\alpha}}{\Gamma((n+1)\alpha+1)}\,D^{(n+1)\alpha}f(\xi), $$
with $a\leq\xi\leq x$, for all $x\in(a,b]$, where $D^{k\alpha}=\underbrace{D^{\alpha}\cdot D^{\alpha}\cdots D^{\alpha}}_{k\ \text{times}}$. Notice that, unlike the Caputo derivative, the Riemann–Liouville derivative of a constant is different from zero.
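As a concrete illustration (a standard computation added here for reference, not taken from the original derivations), for a power function $f(t)=t^{k}$ with $k\geq 1$ and $0<\alpha\leq 1$, both the Caputo and the Riemann–Liouville derivatives obey the same power rule, whereas they differ on constants:
$$ D^{\alpha}t^{k}=\frac{\Gamma(k+1)}{\Gamma(k+1-\alpha)}\,t^{k-\alpha},\qquad {}^{RL}D^{\alpha}1=\frac{t^{-\alpha}}{\Gamma(1-\alpha)}\neq 0,\qquad {}^{C}D^{\alpha}1=0. $$
This power rule is, for instance, enough to evaluate the fractional derivatives of polynomial test functions, such as those appearing in Section 5, in closed form.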

3. Fractional Newton–Raphson (FNR) Method

In this section, we give the formulation of the first- and second-order FNR methods [25–28].

3.1. First-Order FNR

By taking the first two terms of the fractional Taylor series expansion, we have
$$ f(x)\approx f(x_{n})+\frac{(x-x_{n})^{\alpha}}{\Gamma(\alpha+1)}\,D^{\alpha}f(x_{n}). $$

To solve for the intercept, we set $f(x_{n+1})=0$ and rearrange the terms:
$$ (x_{n+1}-x_{n})^{\alpha}=-\Gamma(\alpha+1)\,\frac{f(x_{n})}{D^{\alpha}f(x_{n})}. $$

Thus, we have
$$ x_{n+1}=x_{n}+\left(-\Gamma(\alpha+1)\,\frac{f(x_{n})}{D^{\alpha}f(x_{n})}\right)^{1/\alpha}. $$

Repetition of this algorithm generates a sequence of values $\{x_{n}\}$, which leads to the following recurrence relation:
$$ x_{n+1}=x_{n}+\left(-\Gamma(\alpha+1)\,\frac{f(x_{n})}{D^{\alpha}f(x_{n})}\right)^{1/\alpha},\qquad n=0,1,2,\dots $$
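As a minimal computational sketch of this recurrence (added for illustration, not part of the original formulation), the following Python code applies first-order FNR to a polynomial, assuming the Caputo power rule from Section 2 to evaluate $D^{\alpha}f$ in closed form; the tolerance, the test polynomial, and the use of the principal complex branch for the $1/\alpha$ power are illustrative assumptions.

```python
import math

def poly_val(coeffs, x):
    # Evaluate sum_k coeffs[k] * x**k, where coeffs[k] is the coefficient of x**k.
    return sum(c * x**k for k, c in enumerate(coeffs))

def caputo_deriv(coeffs, order, x):
    # Caputo derivative of the polynomial at x via the power rule
    # D^order x^k = Gamma(k+1) / Gamma(k+1-order) * x**(k-order); constants vanish.
    return sum(c * math.gamma(k + 1) / math.gamma(k + 1 - order) * x**(k - order)
               for k, c in enumerate(coeffs) if k >= 1)

def fnr1(coeffs, alpha, x0, tol=1e-8, max_iter=500):
    # First-order FNR: x_{n+1} = x_n + (-Gamma(alpha+1) f(x_n) / D^alpha f(x_n))**(1/alpha).
    # Complex arithmetic keeps the 1/alpha power of a negative quantity well defined
    # (principal branch), so intermediate iterates may carry small imaginary parts.
    x = complex(x0)
    for n in range(1, max_iter + 1):
        fx = poly_val(coeffs, x)
        dfx = caputo_deriv(coeffs, alpha, x)
        x = x + (-math.gamma(alpha + 1) * fx / dfx) ** (1.0 / alpha)
        if abs(poly_val(coeffs, x)) < tol:
            return x, n
    return x, max_iter

# Illustrative call on the hypothetical test polynomial f(x) = x^2 - 2 (root near 1.4142):
print(fnr1([-2.0, 0.0, 1.0], alpha=0.95, x0=1.5))
```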

3.2. Second-Order FNR

By taking the first three terms of the fractional Taylor series expansion, we get
$$ f(x)\approx f(x_{n})+\frac{(x-x_{n})^{\alpha}}{\Gamma(\alpha+1)}\,D^{\alpha}f(x_{n})+\frac{(x-x_{n})^{2\alpha}}{\Gamma(2\alpha+1)}\,D^{2\alpha}f(x_{n}). $$

To solve for the intercept, we set $f(x_{n+1})=0$ and rearrange terms, obtaining a quadratic equation in $u_{n}=(x_{n+1}-x_{n})^{\alpha}$,
$$ \frac{D^{2\alpha}f(x_{n})}{\Gamma(2\alpha+1)}\,u_{n}^{2}+\frac{D^{\alpha}f(x_{n})}{\Gamma(\alpha+1)}\,u_{n}+f(x_{n})=0, $$
which leads to the following:
$$ u_{n}=\frac{-\dfrac{D^{\alpha}f(x_{n})}{\Gamma(\alpha+1)}\pm\sqrt{\left(\dfrac{D^{\alpha}f(x_{n})}{\Gamma(\alpha+1)}\right)^{2}-\dfrac{4\,f(x_{n})\,D^{2\alpha}f(x_{n})}{\Gamma(2\alpha+1)}}}{\dfrac{2\,D^{2\alpha}f(x_{n})}{\Gamma(2\alpha+1)}}. $$

Repetition of this algorithm generates a sequence of values $\{x_{n}\}$, which is formulated by the following recurrence relation:
$$ x_{n+1}=x_{n}+u_{n}^{1/\alpha}. $$
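In the same illustrative spirit (again an assumption-laden sketch rather than the authors' implementation), one second-order FNR update can be coded by solving the quadratic in $u_{n}$. Here $D^{2\alpha}$ is interpreted as the sequential derivative $D^{\alpha}D^{\alpha}$ of Theorem 1, which for monomials reduces to the same power rule, and the root of the quadratic with the smaller magnitude is chosen; that choice is an implementation assumption.

```python
import cmath
import math

def poly_val(coeffs, x):
    # Evaluate sum_k coeffs[k] * x**k.
    return sum(c * x**k for k, c in enumerate(coeffs))

def frac_deriv(coeffs, order, x):
    # Power-rule fractional derivative of the polynomial at x; for monomials the
    # sequential derivative D^alpha D^alpha coincides with this rule at order 2*alpha.
    return sum(c * math.gamma(k + 1) / math.gamma(k + 1 - order) * x**(k - order)
               for k, c in enumerate(coeffs) if k >= 1)

def fnr2_step(coeffs, alpha, x):
    # One second-order FNR step: solve a*u^2 + b*u + c = 0 for u = (x_{n+1} - x_n)^alpha.
    a = frac_deriv(coeffs, 2 * alpha, x) / math.gamma(2 * alpha + 1)
    b = frac_deriv(coeffs, alpha, x) / math.gamma(alpha + 1)
    c = poly_val(coeffs, x)
    disc = cmath.sqrt(b * b - 4 * a * c)
    u = min((-b + disc) / (2 * a), (-b - disc) / (2 * a), key=abs)  # smaller step
    return x + u ** (1.0 / alpha)

# One illustrative step on the hypothetical polynomial f(x) = x^2 - 2, starting from 1.5:
print(fnr2_step([-2.0, 0.0, 1.0], alpha=0.95, x=complex(1.5)))
```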

4. Convergence

In this section, the convergence analysis of first- and second-order FNR is given by the following theorems, respectively.

Theorem 2. Assume that $f$ is twice continuously differentiable on an open interval $I$ and that there exists a point $r\in I$ with $f(r)=0$. Implementing the first-order FNR method, we have the following recurrence relation:
$$ x_{n+1}=x_{n}+\left(-\Gamma(\alpha+1)\,\frac{f(x_{n})}{D^{\alpha}f(x_{n})}\right)^{1/\alpha}. $$
Under the assumption that $x_{n}$ converges to $r$ as $n\to\infty$, we have, for $n$ sufficiently large, an error bound of the form $\left|x_{n+1}-r\right|\leq C\left|x_{n}-r\right|^{2}$. Thus, $x_{n}$ converges to $r$ quadratically.

Proof. Let $\epsilon_{n}=r-x_{n}$, so that $r=x_{n}+\epsilon_{n}$. Setting $x=r$ and $a=x_{n}$ in the fractional Taylor theorem (Theorem 1), we obtain a two-term expansion of $f(r)$ about $x_{n}$ whose remainder is evaluated at some $\xi_{n}$ between $x_{n}$ and $r$. Since $f(r)=0$, this expansion relates $f(x_{n})$, $D^{\alpha}f(x_{n})$, and the remainder term. Having the condition that the $\alpha$-order derivative of $f$ is continuous and nonzero at $r$, as long as $x_{n}$ is close enough to $r$, allows us to divide by $D^{\alpha}f(x_{n})$. Substituting the formulation of first-order FNR and rearranging then expresses the new error $x_{n+1}-r$ in terms of $\epsilon_{n}$ and the remainder. Convergence of $x_{n}$ to $r$ leads to convergence of $\xi_{n}$ to $r$, and, by continuity, $D^{2\alpha}f(\xi_{n})$ converges to $D^{2\alpha}f(r)$. As a result, the error bound of the theorem holds for $n$ sufficiently large.

Theorem 3. Assume that $f$ is three times continuously differentiable on an open interval $I$ and that there exists $r\in I$ with $f(r)=0$. Implementing the second-order FNR method, we have the recurrence relation of Section 3.2, namely $x_{n+1}=x_{n}+u_{n}^{1/\alpha}$ with $u_{n}$ the selected root of the quadratic equation above. Under the assumption that $x_{n}$ converges to $r$ as $n\to\infty$, the corresponding error bound holds for $n$ sufficiently large. Thus, $x_{n}$ converges to $r$ quadratically.

Proof. Let $\epsilon_{n}=r-x_{n}$, so that $r=x_{n}+\epsilon_{n}$. Setting $x=r$ and $a=x_{n}$ in the fractional Taylor theorem, we obtain a three-term expansion of $f(r)$ about $x_{n}$ whose remainder is evaluated at some $\xi_{n}$ between $x_{n}$ and $r$. Since $f(r)=0$, this expansion relates $f(x_{n})$, $D^{\alpha}f(x_{n})$, $D^{2\alpha}f(x_{n})$, and the remainder term. Having the condition that the $\alpha$-order derivative of $f$ is continuous and nonzero at $r$, as long as $x_{n}$ is close enough to $r$, allows us to divide by $D^{\alpha}f(x_{n})$. Substituting the formulation of second-order FNR and rearranging then expresses the new error $x_{n+1}-r$ in terms of $\epsilon_{n}$ and the remainder. Convergence of $x_{n}$ to $r$ leads to convergence of $\xi_{n}$ to $r$, and, by continuity, $D^{3\alpha}f(\xi_{n})$ converges to $D^{3\alpha}f(r)$. As a result, the error bound of the theorem holds for $n$ sufficiently large.
In general, for the convergence of higher-order FNR, a condition of the same form is obtained, which must hold for $n$ sufficiently large.

5. Numerical Examples

In this section, some illustrative examples are presented to show the implementation of first- and second-order FNR, which allows us to confirm the results obtained in the previous section. Matlab R2016b is utilized, with a prescribed stopping criterion and a maximum of 500 iterations. In the tables of the corresponding examples, the reached root, the corresponding error, and the number of iterations are shown.
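The exact stopping tolerance used in the experiments is not reproduced above; as a minimal sketch of such an iteration driver, assuming a hypothetical residual tolerance of $10^{-8}$ together with the stated cap of 500 iterations, one could write the following (the classical Newton step and the test function here are only illustrative placeholders for any of the methods above):

```python
TOL = 1e-8        # hypothetical tolerance (the paper's exact criterion is not shown here)
MAX_ITERS = 500   # maximum number of iterations, as stated in the text

def iterate(step, f, x0, tol=TOL, max_iters=MAX_ITERS):
    # Apply x_{n+1} = step(x_n) until |f(x_n)| < tol or the iteration cap is reached.
    # Returns the last iterate and the number of iterations performed.
    x = x0
    for n in range(1, max_iters + 1):
        x = step(x)
        if abs(f(x)) < tol:
            return x, n
    return x, max_iters

# Illustrative use with the classical Newton step on f(x) = x**2 - 2:
f = lambda x: x**2 - 2
newton_step = lambda x: x - f(x) / (2 * x)
print(iterate(newton_step, f, 1.5))
```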

Example 1. Let us consider the function with roots and .
It can be observed from Tables 1 and 2 that the estimation of second-order FNR is better than that of first-order FNR when the order of the derivative is close to one, for both the Caputo and Riemann–Liouville derivatives. In Figures 1–4, the convergence plane of the polynomial function is given for various values of $\alpha$.

Example 2. Let us consider the function , whose only root is .
It can be observed from Tables 3 and 4 that the estimation of second-order FNR is better than that of first-order FNR when the order of the derivative is close to one, for both the Caputo and Riemann–Liouville derivatives. In Figures 5–8, the convergence plane of the polynomial function is given for various values of $\alpha$.

6. Conclusion

First- and second-order FNR are developed, analyzed, and applied in this study, and the convergence of both methods is established. It is shown that second-order FNR gives better results than first-order FNR when the order of the fractional derivative is close to one, for both the Caputo and Riemann–Liouville derivatives. It is also shown that the order of convergence of first-order FNR is quadratic, and a corresponding order of convergence is obtained for second-order FNR. It is clear from the tables that, as the fractional parameter increases to one, the number of iterations decreases for both developed methods. Moreover, the figures depict that the convergence of the approximate solutions is better for values of the fractional order close to one, which can also be seen from the tables. Generally, the order of convergence of higher-order FNR follows from the obtained formulation. The obtained results are also verified by the presented examples.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.