Abstract

Two novel iterative methods for solving systems of nonlinear equations are presented. The new algorithms are obtained by combining the successive over-relaxation (SOR) method with Chebyshev-like iterative methods for systems of nonlinear equations, yielding two powerful hybrid schemes. Necessary conditions for the convergence of these methods are presented, and the stability of both algorithms is analyzed. The algorithms are applied to two stiff systems of ordinary differential equations arising from real-world models: an HIV spreading model and an SIR epidemic model describing the spread of a nonfatal disease in a population. Numerical results show promising convergence and stability for both new hybrid methods.

1. Introduction

Systems of nonlinear equations (SNEs) arise in many different areas of science and engineering. As a matter of fact, there are many problems in which a set of nonlinear equations depending on several independent variables must be solved. One may find such problems in different areas of the applied sciences. In practice, obtaining the exact solutions of such systems is usually impossible because of their inherent nonlinearity. Hence, to obtain an approximate solution of a nonlinear system, one has to solve it by an iterative method. The oldest and most reliable method for solving these systems is the well-known Newton's method, which is a second-order convergent algorithm [1]. Many researchers have worked on solving SNEs in the first decades of this century. Indeed, there are various high-order iterative methods for solving SNEs. However, despite their high-order convergence, they are often not useful in practice because of the cost of computing second derivatives. For example, the Halley method [2] is one such method. It must be noted that, for an n x n nonlinear system, the first Fréchet derivative (Jacobian) matrix has n^2 elements and the second Fréchet derivative has n^3 elements. This implies a huge number of operations is needed to evaluate a new solution approximation at each iteration. Moreover, due to memory and computational limitations, for large SNEs one cannot compute the first or the second Fréchet derivative matrices. Hence, any new method which does not need the computation of derivative matrices is very welcome in the area of solving SNEs.

Suppose that D is a subset of R^n and, for i = 1, 2, ..., n, consider f_i as a nonlinear real-valued function on the domain D. Let F = (f_1, f_2, ..., f_n)^T; that is, F is a vector-valued, multivariable function on the domain D. In general, one may present an SNE as F(x) = 0. Most of the iterative methods for solving SNEs need to evaluate the Jacobian matrix of F at one or more points at each iteration. Clearly, computing the Jacobian matrix is the most time-consuming part of all iterative algorithms that need this matrix.
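As a purely illustrative example (not one of the test problems considered later), a two-dimensional SNE and its Jacobian matrix can be written as follows:

```latex
% Illustrative 2x2 system F(x) = 0 and its Jacobian (not taken from the paper's test problems)
\[
F(x_1,x_2)=\begin{pmatrix} x_1^2 + x_2^2 - 1 \\ x_1 - x_2^3 \end{pmatrix}=0,
\qquad
F'(x_1,x_2)=\begin{pmatrix} 2x_1 & 2x_2 \\ 1 & -3x_2^2 \end{pmatrix}.
\]
```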

There are different iterative schemes for solving SNEs, some of which are hybrid algorithms. For example, Babajee et al. [3] presented two Chebyshev-like algorithms free from second-order derivatives for solving SNEs. A successive over-relaxation Steffensen–Newton method for SNEs was introduced by Darvishi and Darvishi [4]. Also, Darvishi and Darvishi [5] introduced some hybrid methods, namely, the successive over-relaxation (SOR) Newton methods for solving SNEs. A family of two-point third-order Chebyshev-like methods was introduced by Traub [6]. This family of iterative methods is based on approximating the second derivative in the Chebyshev method by a finite difference between two first derivatives. Indeed, this third-order Chebyshev-like algorithm is a powerful iterative method for solving systems of nonlinear equations. Also, since the successive over-relaxation (SOR) method is a promising iterative method for solving systems of linear equations, in this paper we combine these two powerful methods to introduce new hybrid methods for solving systems of nonlinear equations. We name these hybrid methods SOR Chebyshev-like (SORCL) algorithms. As the reported numerical results show, these hybrid methods can solve systems of nonlinear equations which arise from stiff systems of ordinary differential equations (ODEs). Convergence and stability analyses of both methods are discussed. In addition, a comparative study between the results of our methods and those of other methods is presented.

2. SOR Chebyshev-Like Algorithms

A general form of an SNE can be presented as F(x) = 0, where F is a multivariable, vector-valued function; that is, F = (f_1, f_2, ..., f_n)^T, wherein each f_i is a real-valued, multivariable function. The regularity conditions on F are its differentiability and the nonsingularity of its Jacobian matrix.

The following one-step iterative SOR-Newton method to solve SNE (1) is presented by Ortega and Rheinboldt [7]: where is the relaxation parameter and is the number of iteration steps. In (2), the th approximation of the solution of (1), namely , is the starting point for obtaining the approximation of the solution in the next step, which is considered as follows:

Now, to obtain the elements of the next approximation, first, we set

Then, using relation (2), the elements of the th approximation are computed one by one. Assuming that the first elements have already been obtained in this iteration, we have

As a matter of fact, we have

For more details of the SOR-Newton method, we refer the reader to pages 214–222 of [7]. Following the SOR-Newton approach, we combine the SOR and the Chebyshev-like algorithms [6] to obtain two convergent, stable, and efficient algorithms for solving SNE (1).
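For readers who prefer pseudocode, the following Python sketch implements one sweep of the classical one-step SOR-Newton method, assuming the standard form described in [7], in which each component is relaxed using the corresponding residual component f_i and the diagonal partial derivative of f_i; the helper name sor_newton_sweep and the finite-difference approximation of that diagonal derivative are ours, introduced only for illustration.

```python
import numpy as np

def sor_newton_sweep(F, x, omega, h=1e-7):
    """One sweep of the one-step SOR-Newton (nonlinear SOR) method.

    F     : callable returning the residual vector F(x) of the SNE F(x) = 0
    x     : current approximation, updated component by component
    omega : relaxation parameter
    h     : step for a finite-difference estimate of the diagonal
            partial derivative d f_i / d x_i (illustrative choice)
    """
    x = x.copy()
    n = x.size
    for i in range(n):
        fi = F(x)[i]                      # f_i at the partially updated point
        e = np.zeros(n)
        e[i] = h
        dfi = (F(x + e)[i] - fi) / h      # finite-difference d f_i / d x_i
        x[i] -= omega * fi / dfi          # relaxed Newton-like update of x_i
    return x
```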

2.1. The First SOR Chebyshev-Like Algorithm

In this section, to solve SNE (1), the first SOR Chebyshev-like algorithm, denoted by SORCL1, is presented. It is defined by solving the following equation for :

In this algorithm, first, we set and then we obtain from (7) as follows: where is the classical Newton iteration and . Finally, we set or

Iterative scheme (12) defines the SORCL1 algorithm, and we apply the iterations until the desired convergence is achieved.
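A rough Python sketch of the kind of step SORCL1 performs is shown below: a Newton correction, a second-derivative-free Chebyshev-like correction built from the difference of two Jacobians (Traub's idea [6]), and a relaxation factor omega damping the update. The precise way the relaxation enters in (7)–(12) is not reproduced here, so this sketch should be read as an illustrative variant, not the exact SORCL1 iteration; the function names are ours.

```python
import numpy as np

def chebyshev_like_relaxed_step(F, J, x, omega):
    """One illustrative relaxed Chebyshev-like step (not necessarily identical to (12)).

    F     : residual function of the SNE F(x) = 0
    J     : Jacobian function, J(x) = F'(x)
    x     : current approximation
    omega : relaxation parameter
    """
    Jx = J(x)
    newton = np.linalg.solve(Jx, F(x))                 # classical Newton correction
    y = x - newton                                     # auxiliary two-point (Newton) iterate
    # second derivative replaced by a difference of two Jacobians (Traub's idea [6])
    correction = newton + 0.5 * np.linalg.solve(Jx, (Jx - J(y)) @ newton)
    return x - omega * correction                      # relaxed update
```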

2.2. The Second SOR Chebyshev-Like Algorithm

The second SOR Chebyshev-like algorithm, denoted by SORCL2, solves the following equation: for , and then we set where so is obtained as follows:

Finally, we set or

Equation (18) is the iterative scheme of the SORCL2 method for solving SNE (1). It is iterated until the desired convergence of the approximate solution is achieved. In summary, equations (12) and (18) are, respectively, the iterative schemes of the SORCL1 and SORCL2 algorithms. One may apply them to solve an SNE such as (1) starting from an initial guess.
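Whichever scheme is used, the iteration is driven by a simple loop that stops once the residual norm is small enough; a minimal driver of this kind (with an arbitrarily chosen tolerance and iteration cap, both our own illustrative defaults) might look as follows.

```python
import numpy as np

def solve_sne(step, F, x0, tol=1e-10, max_iter=200):
    """Iterate a one-step scheme (e.g., the sketch above) until ||F(x)|| is small.

    step : callable x -> next approximation (one SORCL-type iteration)
    F    : residual function of the SNE F(x) = 0
    x0   : initial guess
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        if np.linalg.norm(F(x), np.inf) < tol:   # desired convergence reached
            return x, k
        x = step(x)
    return x, max_iter                           # return last iterate if not converged
```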

3. Convergence Investigation

The convergence of the SORCL1 and SORCL2 algorithms is investigated in this section. To do this, we need the following hypotheses:
(i) is a convex set.
(ii) , where has been defined in (1) and is a real-valued, strictly convex function on the domain .
(iii) .
(iv) For some , is a nonempty and compact set. As a matter of fact, such a real number exists, and consequently the set is nonempty and compact.
(v) Let and let be the Hessian matrix of the function evaluated at , such that for , except in the case that is the point, say , at which attains its minimum value.

Hypotheses (i) and (ii) show that the Hessian matrix of is positive semidefinite. Clearly, the sets are convex for all . Hypothesis (iv) shows that attains its minimum at some point . It must be noted that hypothesis (iv) is satisfied nontrivially, provided that attains its minimum at for some . In addition, by hypotheses (i), (ii), and (iii), there exists such that , is a compact set, and is a nonincreasing function. From (ii), the minimum point is unique. Finally, a necessary and sufficient condition for the function to attain its minimum at a point is .

By hypotheses (i)–(iv) and the results mentioned above, the convergence of the SORCL1 algorithm can be investigated as follows.

3.1. Convergence Investigation of SORCL1

Theorem 1. Let the terms of the sequence be computed by where in is the classical Newton method and hence

Also, for and , we define the following sets and values: in addition, assume that satisfies

Then, for any initial guess , the sequence is well defined and converges to , where is the solution of SNE (1).

Proof. First, we show that the sequence is well defined. To do this, we consider the sets and from the definition of the sets as follows: By the definition of and , that is, we have . As the sequence is nonincreasing, then , and consequently . Hence, using mathematical induction, one may prove that all terms of lie in the set ; consequently, the sequence is well defined. To prove the convergence of the sequence , we use Taylor's expansion of the function around for , where denotes the open line segment between and . From Taylor's expansion, we can write or Using relation (22), we have Substituting relation (29) into (28) gives As Newton's method is a convergent algorithm, for that satisfies the following inequality holds: Hence, relations (30) and (32) give Therefore, where is the greatest lower bound of the set . We note that, from the SOR method, we have . It is worth mentioning that because the Hessian matrix of is positive semidefinite. Two cases, and , must be investigated separately.
Case : Using hypothesis (v), there is a sequence that converges to . Also, as and the sequence is nonincreasing, the following inequality holds: Thus, as the function is continuous, .
Case : As satisfies the following inequality, the two following cases must be considered.

Case 1. .
We know that the sequence is nonincreasing and bounded from below; hence, it is convergent. Then, . We consider as a limit point of the sequence and, for , we define the following sets: We consider relation (32), that is, As and is bounded, the following relation holds: Therefore, Again, as Newton's method is a convergent iterative scheme, we have thus Therefore, . Furthermore, if , then , and, as is a nonincreasing sequence, . Otherwise, if , we have where is a known permutation of the numbers 1, ..., n. Consider the following sets: As and hence , all the sets are nonempty. Besides, we can show that these sets are closed. Now, we consider . As is a closed set, there is a positive such that for . We know that ; therefore, there exists some such that for . For , the continuity of implies that there is a positive such that Suppose that, for all , is defined in such a way that where is a function of . It must be mentioned that, for any positive , one can select such that if ; otherwise, for any sequence that approaches zero, there is a sequence with and . For all , we have . As is a compact set, a limit point of exists such that . Relation (47) gives Furthermore, relation (48) yields hence wherein denotes the dot product of two vectors. As is a strictly convex function, equation (51) is a contradiction. Consequently, a positive exists such that, for , the following inequality holds: Relation implies that there exists an such that Besides, there exist some such that ; hence, there is an such that , and hence . If and for , then Relation (54) is a contradiction; therefore, for , we have . Consequently, for , we cannot find any such that ; therefore, one cannot find any in , which is a contradiction. Therefore, .

Case 2. .
A discussion very similar to that of Case 1 applies here; hence, to avoid repetition, we do not present it.

3.2. Convergence Investigation of SORCL2

The convergence analysis of SORCL2 is similar to that of Theorem 1. In this section, we only present a sketch of the proof. To do this, we need the aforementioned hypotheses (i)–(v).

Theorem 2. We assume that the elements of the sequence are obtained by the following relations: where is the classical Newton method and

Now, for , we define the sets as and, if , we define as

Finally, we consider which satisfies the following inequality:

Then, for any , the sequence is well defined and converges to the solution of SNE (1), say .

Proof. Similar to the proof of Theorem 1, we can show that the sequence is well defined. Hence, it is enough to prove that the sequence is convergent. If is selected in such a way that , then, by Taylor's expansion of , we have or Relation (56) gives Substituting relation (64) into relation (63) gives Hence, As is a bounded function, there is a positive number such that thus Substituting (68) into (66) yields As Newton's method is a convergent algorithm, for any that satisfies we have By using these relations, inequality (68) becomes Hence, Then, or where is the greatest lower bound of . For , we define the following sets: Relation (69) gives As and is bounded, then therefore, from (77) we have However, from the convergence of Newton's method, Note that is bounded; then, Thus, . Hereafter, the theorem can be proved similarly to the previous one.

4. Stability Analysis

The stability of the SORCL1 and SORCL2 algorithms is investigated in this section. It is well known that a continuous function on a compact set is bounded.

Definition 1 (see [8]). is called a stable solution of the SNE if, for every , there exists such that, if is a solution of the SNE and , then , where and , respectively, are the initial starting points used to obtain the solutions and by an iterative method.
First, we investigate the numerical stability of the SORCL1 method.

Theorem 3. We consider the iteration step of the SORCL1 method as follows:

Suppose that and are two solutions of the SNE obtained from the starting points and , respectively. Then, for every , there exists such that implies , where is a continuous function on the compact set .

Proof. By mathematical induction, for , we have As is a bounded function, there are positive numbers , , , and such that So, for , we have Thus, for , we have Now, we assume that the required inequality holds for and prove it for . We have By the induction hypothesis for and the boundedness of the function , we have Set ; hence, whenever . This completes the proof.
The numerical stability of the SORCL2 method is presented in the following theorem.

Theorem 4. We consider the iteration step of the SORCL2 method as follows:

If the solutions and of the nonlinear system are obtained by the SORCL2 algorithm using the initial guesses and , respectively, then, for every , there exists such that, if , then , where is a continuous function on the compact set .

Proof. Here also, the theorem is proved by mathematical induction. For , we have Due to the fact that the functions and are bounded, there are positive numbers , , , and such that Since Newton's method is a convergent method, there are sufficiently large integers and such that Therefore, for , we have Thus, for , we have Now, we assume that the required inequality holds for . We show that it also holds for . We have Thus, for and positive numbers and , we can write Consequently, for , we have whenever . This completes the proof.
Hence, by Theorems 3 and 4, algorithms SORCL1 and SORCL2 are stable iterative methods.

5. Numerical Results

One class of models whose solution requires solving a system of nonlinear equations is the system of ordinary differential equations (ODEs). Systems of ODEs arise in many different areas of applied mathematics and the social and natural sciences. One may find many problems in physics and biology which are mathematically modelled by systems of ODEs. For example, the susceptible-infected-recovered (SIR) model of an epidemic is expressed by a system of ODEs [9–12]. The HIV spreading model is also expressed by a system of ODEs [13–16]. Besides, studying the dynamics of infiltration of cancer cells requires solving a system of nonlinear ODEs [17]. Also, a system of ODEs can easily arise from an nth-order linear differential equation. There are many analytical and numerical schemes for solving a system of ODEs [18–22]. One numerical approach is to discretize the system by a finite difference method and then solve the resulting equations. This changes the system of differential equations to an algebraic system of (nonlinear) equations, which can be solved by (nonlinear) iterative solvers. In this section, we solve two stiff systems of ODEs by our methods. Furthermore, a comparative study between our algorithms and other algorithms, including the MATLAB function ode15s, is presented. This solver is suitable for solving stiff systems of ODEs.
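As a small illustration of this point, the sketch below shows the residual of a single backward-Euler step for a scalar ODE y' = f(t, y); the test equation at the end is our own arbitrary stiff, nonlinear example, not one of the paper's models, and the function name is ours.

```python
import numpy as np

def backward_euler_residual(f, t_next, y_next, y_curr, h):
    """Residual of one backward-Euler step: y_{k+1} - y_k - h*f(t_{k+1}, y_{k+1}) = 0.

    Solving this (generally nonlinear) equation for y_next at every time step
    is where a nonlinear solver such as Newton's method or SORCL1/SORCL2 enters.
    """
    return y_next - y_curr - h * f(t_next, y_next)

# Illustrative stiff, nonlinear scalar ODE (our arbitrary choice):
f = lambda t, y: -1000.0 * (y**3 - np.cos(t))
```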

Problem 1. As the first example, we solve the following system of ODEs, which arises from the spreading of viruses in HIV disease:
with initial conditions
In model (97), we have the following:
: the density of normal cells at time 
: the density of free HIV viruses at time 
: the density of cells infected by the HIV viruses at time 
: natural death rate of noninfected cells
: natural death rate of HIV-infected cells
: natural death rate of HIV viruses
: source term for noninfected cells
: the rate at which cells are infected by HIV viruses
: the rate of proliferation of the cell density
: the number of HIV viruses produced by each infected cell
: maximum density of cells
Finally, for describing model (97), we have the following [14, 16, 17, 23]:
(i) shows the logistic proliferation of the normal cells.
(ii) The term shows the incidence of HIV infection of normal cells.
(iii) The growth of infected cells is not statistically significant.
(iv) It is assumed that any infected cell produces virus particles during its lifetime, including any of the daughter cells of infected cells.
(v) At a constant rate , the body produces cells from precursors in the bone marrow and thymus.
(vi) When T-cells are stimulated by antigens or mitogens, they proliferate via mitosis with a rate .
The following values are considered for the parameters in this study:
There are several numerical methods for solving model (97). Merdan [24] solved the model by the homotopy perturbation method, Ongun [25] applied the Laplace Adomian decomposition method, Merdan et al. [26] applied the Padé approximation method and the modified variational iteration method (VIM), and Yüzbaşi [27] used the Bessel collocation method. A modification of the classical Laplace Adomian decomposition method was used by Doǧan [28], and finally, Srivastava et al. [29] applied the differential transform method (DTM) to solve the model.
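For illustration, the following Python function encodes the standard three-component CD4+ T-cell/HIV model studied in the cited references [24–29]; we assume model (97) has this form, so the term-by-term structure, the parameter names, and the default values below are our own reconstruction and placeholders rather than the paper's exact equations and data.

```python
import numpy as np

def hiv_rhs(t, y, s=0.1, r=3.0, T_max=1500.0, k=0.000024,
            mu_T=0.02, mu_I=0.3, mu_V=2.4, N=10.0):
    """Right-hand side of a standard CD4+ T-cell/HIV model (assumed form of (97)).

    y = (T, I, V): susceptible cells, infected cells, free virus.
    Parameter names follow the verbal description above; the numerical
    defaults are placeholders, not the values used in the paper.
    """
    T, I, V = y
    dT = s - mu_T * T + r * T * (1.0 - (T + I) / T_max) - k * V * T   # logistic growth, infection loss
    dI = k * V * T - mu_I * I                                         # newly infected cells, death
    dV = N * mu_I * I - mu_V * V                                      # virus production and clearance
    return np.array([dT, dI, dV])
```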

5.1. Discretization

If we integrate the following differential equation over the interval , the following difference equation is obtained:

One may discretize (100) according to the following integral approximation: where . In (102), if , we have an explicit method; otherwise, we have an implicit method. Using (101) and (102) changes (100) to where . Therefore, discretizing (97) by (103) yields where .
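A minimal sketch of such a one-parameter (theta-weighted) step is given below; the weighting convention, the function name, and the default theta are our illustrative choices and need not match (102)–(103) exactly, but the key point is visible: for an implicit choice of the weight, each time step requires solving a nonlinear algebraic system in the new state vector, which is exactly where SORCL1 and SORCL2 are applied.

```python
import numpy as np

def theta_step_residual(f, t, y_new, y_old, h, theta=0.5):
    """Residual of one theta-weighted step applied to y' = f(t, y):

        y_new - y_old - h*[(1-theta)*f(t, y_old) + theta*f(t+h, y_new)] = 0

    theta = 0 gives an explicit step; theta > 0 gives an implicit step whose
    root y_new must be found by a nonlinear solver (Newton, SORCL1, SORCL2, ...).
    The weighting convention here is our illustrative choice.
    """
    return y_new - y_old - h * ((1.0 - theta) * f(t, y_old)
                                + theta * f(t + h, y_new))

# Example: residual for one fully implicit step of the (assumed) HIV model above,
# with y0 denoting the previous state and h the step size:
# F = lambda y_new: theta_step_residual(hiv_rhs, 0.0, y_new, y0, h=0.01, theta=1.0)
```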

Problem 1 is solved by the SORCL1 and SORCL2 methods. For comparison, we also solved the problem by other methods, namely, the Laplace Adomian decomposition method (LADM), the LADM-Padé method [25], the variational iteration method (VIM), and the modified variational iteration method (MVIM) [26]. All of these methods presented their results only for the first six iterations, and they could not solve the problem for large values of . Hence, we have compared the results of the SORCL1 and SORCL2 methods with these methods for day. Also, we solved the problem by the ode15s MATLAB function. Tables 1–3 show this comparison. Furthermore, the plots of Figures 1(a)–1(c), respectively, show the values of and cells for day obtained by all methods. As we can see from these tables, our methods solve the problem as well as the other methods. However, while the mentioned methods cannot solve the problem for large values of time, the SORCL1 and SORCL2 methods can. Plot (a) in Figure 1 shows the results for the values obtained by all seven mentioned methods. Similarly, plots (b) and (c) in Figure 1 show the numerical results for the values of the and cells, respectively. Finally, the results of the SORCL1 and SORCL2 methods for about 300 days are presented in the plots of Figure 2.

If we consider , then indeed we are trying to solve the system . To achieve the required accuracy for Problem 1, we solved the problem by all seven methods in addition to Newton's method. Our stopping criterion was . The numerical results are presented in Table 4. As we can see from this table, the VIM, MVIM, LADM, and LADM-Padé methods could not solve the problem. The results of this table show the quality of our novel methods with respect to the other methods.

Model (97) describes the spread of the HIV virus in a body without any treatment. Figure 2(a) shows the values of the , , and cells for about 300 days, obtained by the SORCL1 method. These values have a clear interpretation in reality. For example, the number of cells during the first days remains at a normal level, which corresponds to the preclinical period; that is, one is infected by the virus but is still asymptomatic. After this time, the decrease of the cells shows the progress of the disease. Similar interpretations hold for the values of the and cells. Similarly, Figure 2(b) shows the number of , , and cells for about 300 days obtained by the SORCL2 method.

Problem 2. As the second example, we consider the following SIR model:
with the initial conditions
In model (105), we have
: number of susceptible (at risk) people at time 
: number of infected people at time 
: number of recovered (cured) people at time 
: infection rate
: cure rate
After discretization of (105), for we have
System (105) models the spread of a nonfatal disease in a population. This model is solved by the SORCL1 and SORCL2 algorithms for and . The results are compared with those obtained by the ode15s and Newton's methods. Table 5 shows the CPU time and the accuracy of the solution of Problem 2 for all four methods. As shown in Table 5, the results of SORCL1 and SORCL2 are much better than those of the other methods. Also, the plots in Figures 3(a) and 3(b) show the values of , , and for about one year, obtained by the SORCL1 and SORCL2 methods. Similarly, Figures 3(c) and 3(d) show these results obtained by Newton's and ode15s methods, respectively. As we can see from the plots of Figure 3, all results behave similarly, but, as shown in Table 5, the CPU times of the new methods are less than those of Newton's and ode15s methods.
For this problem, we also carried out a comparative study between SORCL1, SORCL2, ode15s, and Newton's method. Our stopping criterion was . The results of this study are presented in Table 5. As we can see from this table, the CPU time of Newton's method is about 4.3 times that of the SORCL1 and SORCL2 methods, and the CPU time of ode15s is about 1.2 times that of our presented methods. These results show the quality of the SORCL1 and SORCL2 methods with respect to Newton's and ode15s methods.
It is worth mentioning that Awawdeh et al. [30] investigated the solution of (105) by the homotopy analysis method (HAM). They used 20 terms to approximate the functions and . Using 20 terms shows the complexity of HAM for solving the SIR model, while our presented methods do not involve any such complexity in their application.
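For completeness, a Python sketch of the classical SIR right-hand side with the two rates described above is given below; we assume model (105) has this standard form, and the parameter names beta (infection rate) and gamma (cure rate), as well as their default values, are our own placeholders.

```python
import numpy as np

def sir_rhs(t, y, beta=0.001, gamma=0.1):
    """Standard SIR right-hand side (assumed form of model (105)).

    y = (S, I, R): susceptible, infected, recovered people.
    beta and gamma are illustrative placeholders for the infection and cure rates.
    """
    S, I, R = y
    return np.array([-beta * S * I,              # susceptibles becoming infected
                     beta * S * I - gamma * I,   # new infections minus recoveries
                     gamma * I])                 # recovered individuals

# Each implicit time step then amounts to a nonlinear system in (S, I, R), e.g.
# F = lambda y_new: theta_step_residual(sir_rhs, 0.0, y_new, y0, h=1.0, theta=1.0)
```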

5.2. Other Tolerance Levels

To further justify the qualification of the SORCL1 and SORCL2 methods, we considered other tolerance levels, namely, and for the HIV model and and for the SIR model. The results for these tolerance levels are reported in Tables 6–9. As these tables show, for these tolerance levels our algorithms also have better values than the other methods; indeed, some methods failed to solve the problems and are not comparable with our algorithms.

6. Conclusions

In this paper, two high-order methods for solving systems of nonlinear equations were introduced. Convergence and stability analyses for both methods were presented, and the methods were used to solve two well-known stiff systems of ODEs. The relaxation parameter in these iterative methods is very important, and obtaining its optimal value requires further work. The novel methods presented in this paper are stable, because they can approximate the values of the unknown functions for large values of time; this confirms our theoretical study of stability. Besides, they solve the problems faster than the other existing methods applied to our test problems; therefore, these methods are powerful. Based on the results of our test problems (reported and unreported), we state the following facts about the SORCL1 and SORCL2 algorithms:
(i) They are suitable schemes for solving systems of ordinary differential equations (stiff or non-stiff).
(ii) They can be used to solve any system of nonlinear equations arising in different areas of science and technology.
(iii) They are fast algorithms.
(iv) By our theoretical study and numerical simulations, they are stable methods.
(v) Obtaining the optimal value of the relaxation parameter of these methods is a worthwhile project and requires further study.
(vi) Investigating how our methods are affected when the initial conditions are incrementally perturbed can be a part of future research.
(vii) A final suggestion for future work is the extension of our algorithms to solve systems of fractional-order differential equations, including coupled systems of fractional differential equations (see, for example, [31]).

Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.