
Abstract and Applied Analysis

Volume 2013 (2013), Article ID 586708, 10 pages

http://dx.doi.org/10.1155/2013/586708

## Fourth- and Fifth-Order Methods for Solving Nonlinear Systems of Equations: An Application to the Global Positioning System

Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain

Received 4 March 2013; Accepted 18 April 2013

Academic Editor: Changsen Yang

Copyright © 2013 Manuel F. Abad et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Two iterative methods of order four and five, respectively, are presented for solving nonlinear systems of equations. Numerical comparisons are made with other existing second- and fourth-order schemes to solve the nonlinear system of equations of the *Global Positioning System* and some academic nonlinear systems.

#### 1. Introduction

The search for solutions of nonlinear systems of equations is an old and difficult problem with wide applications in science and engineering. The best known method, owing to its simplicity and effectiveness, is Newton's method. Its generalization to systems of equations was proposed by Ostrowski [1] and to Banach spaces by Kantorovič [2]. In the literature, several modifications of the classical methods have been made in order to accelerate the convergence or to reduce the number of operations and functional evaluations in each step of the iterative process. The extension of the variants of Newton's method described by Weerakoon and Fernando in [3], by Özban in [4], and by Gerlach in [5] to functions of several variables has been developed in [6–9]. In [6, 7], families of third-order variants of Newton's method have been designed by using open and closed quadrature formulas, including the families of methods defined by Frontini and Sormani in [8]. Using the generic formula of interpolatory quadrature, a family of methods is obtained in [9] whose order of convergence depends, under certain conditions, on the order up to which the partial derivatives of each coordinate function vanish at the solution. Indeed, Darvishi and Barati improved in [10] the method of Frontini and Sormani, obtaining a fourth-order scheme. In addition to multistep methods based on interpolatory quadrature, other schemes have been developed by using different techniques, such as the extension to several variables of one-dimensional schemes (see [11]), Adomian decomposition (see, e.g., [12, 13]), the methods proposed by Darvishi and Barati in [14, 15] with supercubic convergence, and the methods proposed by Cordero et al. in [16] with orders of convergence four and five. Another procedure to develop iterative methods for nonlinear systems is the replacement of the second derivative by some approximation.

In [17], Traub presented a family of multipoint methods based on approximating the second derivative that appears in the iterative formula of Chebyshev's scheme and, more recently, Babajee et al. [18] designed two Chebyshev-like methods free from second derivatives. Recently, Sharma et al. [19] designed a fourth-order scheme by using the weight-function technique. Another well-known acceleration technique is the composition of two iterative methods of orders $p_1$ and $p_2$, respectively, which yields a method of order $p_1 p_2$ (see [17]). New evaluations of the Jacobian matrix and of the nonlinear function are usually needed in order to increase the order of convergence.

Now, we introduce the problem and some necessary concepts in order to develop the modified methods and analyze their convergence. Let us consider the problem of finding a real zero of a function $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$, that is, a solution $\bar{x} \in D$ of the nonlinear system $F(x) = 0$ of $n$ equations with $n$ unknowns. The best known iterative method is the classical Newton method, given by
$$x^{(k+1)} = x^{(k)} - [F'(x^{(k)})]^{-1} F(x^{(k)}), \quad k = 0, 1, \ldots,$$
where $F'(x^{(k)})$ is the Jacobian matrix of the function $F$ evaluated in the $k$th iteration $x^{(k)}$. Traub, in [17], introduced a variant of Newton's method with third-order convergence. We describe it here because our methods combine Traub's and Newton's schemes. Traub's scheme consists of the composition of Newton's method with itself, but with a frozen Jacobian matrix; its iterative expression is
$$x^{(k+1)} = y^{(k)} - [F'(x^{(k)})]^{-1} F(y^{(k)}),$$
where $y^{(k)}$ is the $k$th iteration of Newton's method.
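As an illustration of these two schemes, the following sketch implements Newton's method and Traub's frozen-Jacobian composition for a small 2×2 system. The test system $F(x, y) = (x^2 + y^2 - 5,\ xy - 2)$ is a hypothetical example chosen here, not one from the paper, and the paper's own experiments use MATLAB with variable precision arithmetic:

```python
# Newton's and Traub's (frozen-Jacobian) methods for the 2x2 system
# F(x, y) = (x^2 + y^2 - 5, x*y - 2), which has a solution at (2, 1).
# This system is an illustrative choice, not one from the paper.

def F(v):
    x, y = v
    return (x**2 + y**2 - 5.0, x*y - 2.0)

def J_inv_apply(v, w):
    """Apply the inverse Jacobian at v to the vector w (2x2 case)."""
    x, y = v
    det = 2*x*x - 2*y*y          # determinant of [[2x, 2y], [y, x]]
    a, b = w
    return ((x*a - 2*y*b) / det, (-y*a + 2*x*b) / det)

def newton_step(v):
    dx, dy = J_inv_apply(v, F(v))
    return (v[0] - dx, v[1] - dy)

def traub_step(v):
    yv = newton_step(v)                   # Newton predictor
    dx, dy = J_inv_apply(v, F(yv))        # Jacobian frozen at v
    return (yv[0] - dx, yv[1] - dy)

v = (2.5, 1.2)
for _ in range(8):
    v = traub_step(v)
```

With the frozen Jacobian, the second step costs only one new evaluation of $F$, which is what makes the composition cheap compared with two full Newton steps.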

On the other hand, Sharma et al. [19] have recently developed a fourth-order method for solving nonlinear systems of equations, whose algorithm is composed of two weighted Newton steps.

In the following, we recall some known notions and results that we need in order to analyze the convergence of the new methods. Let $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be sufficiently Fréchet differentiable in $D$. By using the notation introduced in [20], the $q$th derivative of $F$ at $u \in \mathbb{R}^n$, $q \geq 1$, is the $q$-linear function $F^{(q)}(u): \mathbb{R}^n \times \cdots \times \mathbb{R}^n \to \mathbb{R}^n$ such that $F^{(q)}(u)(v_1, \ldots, v_q) \in \mathbb{R}^n$. It is easy to observe that (1) $F^{(q)}(u)(v_1, \ldots, v_{q-1}, \cdot) \in \mathcal{L}(\mathbb{R}^n)$, (2) $F^{(q)}(u)(v_{\sigma(1)}, \ldots, v_{\sigma(q)}) = F^{(q)}(u)(v_1, \ldots, v_q)$ for all permutations $\sigma$ of $\{1, \ldots, q\}$, where $\mathcal{L}(\mathbb{R}^n)$ is the set of linear operators of $\mathbb{R}^n$ in $\mathbb{R}^n$.

From the above properties, we can use the following notation: (1) $F^{(q)}(u)(v_1, \ldots, v_q) = F^{(q)}(u)\,v_1 \cdots v_q$, (2) $F^{(q)}(u)\,v^{q-1}\,F^{(p)}(u)\,v^p = F^{(q)}(u)\,F^{(p)}(u)\,v^{q+p-1}$.

On the other hand, for $x^{(k)}$ lying in a neighborhood of a solution $\bar{x}$ of $F(x) = 0$, we can apply Taylor's expansion and, assuming that the Jacobian matrix $F'(\bar{x})$ is nonsingular, we have
$$F(x^{(k)}) = F'(\bar{x})\left[e + \sum_{q=2}^{p-1} C_q e^q\right] + O(e^p),$$
where $C_q = (1/q!)[F'(\bar{x})]^{-1} F^{(q)}(\bar{x})$, $q \geq 2$, and $e = x^{(k)} - \bar{x}$. We observe that $C_q e^q \in \mathbb{R}^n$ since $F^{(q)}(\bar{x}) \in \mathcal{L}(\mathbb{R}^n \times \cdots \times \mathbb{R}^n, \mathbb{R}^n)$ and $[F'(\bar{x})]^{-1} \in \mathcal{L}(\mathbb{R}^n)$.

In addition, we can express $F'(x^{(k)})$ as
$$F'(x^{(k)}) = F'(\bar{x})\left[I + \sum_{q=2}^{p-1} q\,C_q e^{q-1}\right] + O(e^{p-1}),$$
where $I$ is the identity matrix; therefore, $q\,C_q e^{q-1} \in \mathcal{L}(\mathbb{R}^n)$. From the previous equation, the inverse of the Jacobian matrix can be written as
$$[F'(x^{(k)})]^{-1} = \left[I + X_2 e + X_3 e^2 + X_4 e^3 + \cdots\right][F'(\bar{x})]^{-1} + O(e^p),$$
provided that $F'(\bar{x})$ is nonsingular, where the operators $X_2, X_3, X_4, \ldots$ verify $[F'(x^{(k)})]^{-1} F'(x^{(k)}) = I$. Solving the system involved in this identity, we have
$$X_2 = -2C_2, \quad X_3 = 4C_2^2 - 3C_3, \quad X_4 = -8C_2^3 + 6C_2C_3 + 6C_3C_2 - 4C_4.$$

We denote by $e_k = x^{(k)} - \bar{x}$ the error in the $k$th iteration. The equation
$$e_{k+1} = L e_k^p + O(e_k^{p+1}),$$
where $L$ is a $p$-linear function $L \in \mathcal{L}(\mathbb{R}^n \times \cdots \times \mathbb{R}^n, \mathbb{R}^n)$, is called the *error equation*, and $p$ is the *order of convergence*. Observe that $e_k^p$ is $(e_k, e_k, \ldots, e_k)$.

In [7], the concept of *computational order of convergence* was introduced as follows.

*Definition 1.* Let $\bar{x}$ be a zero of a function $F$, and suppose that $x^{(k-1)}$, $x^{(k)}$, and $x^{(k+1)}$ are three consecutive iterations close to $\bar{x}$. Then, the computational order of convergence $p$ can be approximated using the formula
$$p \approx \frac{\ln\left(\|x^{(k+1)} - \bar{x}\| / \|x^{(k)} - \bar{x}\|\right)}{\ln\left(\|x^{(k)} - \bar{x}\| / \|x^{(k-1)} - \bar{x}\|\right)}.$$
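For the academic tests later in the paper the zero is known, so Definition 1 can be evaluated directly. A minimal sketch, using scalar Newton iteration on the illustrative equation $x^2 - 2 = 0$ (not a test problem from the paper):

```python
import math

# Estimate the computational order of convergence (Definition 1) for
# Newton's method applied to f(x) = x^2 - 2, whose zero is sqrt(2).
root = math.sqrt(2.0)

def newton(x):
    return x - (x*x - 2.0) / (2.0*x)

xs = [1.5]
for _ in range(3):
    xs.append(newton(xs[-1]))

e = [abs(x - root) for x in xs]  # errors of four consecutive iterates
# Order estimate from the last three iterates, following Definition 1
p = math.log(e[3] / e[2]) / math.log(e[2] / e[1])
```

The estimate `p` comes out close to 2, as expected for Newton's method.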

In addition, in order to compare different methods, we use the *efficiency index*, $I = p^{1/d}$, where $p$ is the order of convergence and $d$ is the total number of new functional evaluations (per iteration) required by the method. This is the most used index, but not the only one. In [17], Traub uses a *computational index* defined as $C = p^{1/op}$, where $op$ is the number of products/quotients per iteration. We recall that the number of products and quotients needed to solve $m$ linear systems with the same matrix of coefficients, by using LU factorization, is
$$\frac{1}{3}n^3 + mn^2 - \frac{1}{3}n,$$
where $n$ is the size of each system. We will use these indices in order to compare the different iterative methods. Kung and Traub conjectured in [21] that the order of convergence of any multipoint method without memory for solving nonlinear equations cannot exceed the bound $2^{d-1}$ (called the *optimal order*), where $d$ is the number of functional evaluations per step. Ostrowski's method [1], Jarratt's scheme [22], and King's procedure [23] are some of the optimal one-dimensional fourth-order methods. We have adapted the definition of optimal order of convergence to the case of iterative methods for nonlinear systems. The extension of the conjecture of Kung and Traub to several variables could be done in the following way [24].
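The indices and the LU operation count can be tabulated as in the following sketch; the evaluation and operation counts shown are the standard ones for Newton's method on an $n \times n$ system, used here only as an example:

```python
# Efficiency index I = p**(1/d) and computational index C = p**(1/op),
# illustrated for Newton's method on an n x n system:
#   d  = n**2 + n   (one Jacobian plus one function evaluation)
#   op = LU cost of solving one linear system of size n

def lu_ops(n, m=1):
    """Products/quotients to solve m systems sharing one LU factorization."""
    return (n**3 - n) // 3 + m * n**2

def efficiency_index(p, d):
    return p ** (1.0 / d)

def computational_index(p, op):
    return p ** (1.0 / op)

n = 3
d_newton = n**2 + n            # 12 functional evaluations
op_newton = lu_ops(n)          # 17 products/quotients
I_newton = efficiency_index(2, d_newton)
C_newton = computational_index(2, op_newton)
```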

Conjecture 2. *Given a multipoint iterative method to solve nonlinear systems of equations which requires a certain number of functional evaluations per step, some of them corresponding to evaluations of the Jacobian matrix and the rest to evaluations of the nonlinear function, we conjecture that the optimal order of the method is bounded in terms of these numbers of evaluations, in analogy with the scalar case.*

In this paper, we propose two new and competitive iterative methods of orders four and five, respectively, that improve other known methods.

The rest of this paper is organized as follows: in Section 2, we give an introduction to the Global Positioning System (GPS), focusing on the way the receiver calculates the user position from the ephemeris data of the artificial satellites. In Section 3, we present our new iterative methods and analyze their convergence order; by using the idea of a technique presented in [25], it is also proved that, in general, if we combine two methods of orders $p$ and $q$, respectively, with $p \leq q$, in the same way as in our fifth-order method, the order of convergence of the resulting method is $p + q$. In Section 4, we show an application of this analysis to the solution of the nonlinear system of the GPS and of several academic nonlinear systems of equations. A comparison is established among the new methods and Newton's and Sharma's methods in terms of convergence order, approximated computational order of convergence (ACOC), and the computational and efficiency indices, $C$ and $I$, respectively.

#### 2. Basics on Global Positioning System

This section introduces the basic concept of how a GPS receiver determines its position. From the satellite constellation, the equations required for solving the user position form a nonlinear system of equations. In addition, some practical considerations (e.g., the inaccuracy of the user clock) will be included in these equations. These equations are usually solved through a linearization and a fixed-point iteration method. The solution is obtained in a Cartesian coordinate system and afterwards converted into a spherical coordinate system. However, the Earth is not a perfect sphere; therefore, once the user position is estimated, the shape of the Earth must be taken into consideration, and the user position is translated into the Earth-based coordinate system. In this paper, we focus our attention on solving the nonlinear system of equations of the GPS, giving the results in a Cartesian coordinate system. Further information about GPS can be found in [26].

##### 2.1. Basic GPS Concepts

The position of a point in space can be found by using the distances measured from this point to some known position in space. We are going to use an example to illustrate this point.

Figure 1 shows a two-dimensional case. In order to determine the user position, three satellites and three distances are required. The trace of a point with constant distance to a fixed point is a circle in the two-dimensional case. Two satellites and two distances give two possible solutions, because two circles intersect at two points; a third circle is needed to determine the user position uniquely. For similar reasons, in a three-dimensional case, four satellites and four distances are needed. The equal-distance trace to a fixed point is a sphere in the three-dimensional case. Two spheres intersect in a circle; this circle intersects another sphere at two points, so one more satellite is needed to determine which point is the user position. In GPS, the position of the satellite is known from the ephemeris data transmitted by the satellite, so by measuring the distance from the receiver to the satellite, the position of the receiver can be determined. In the discussion above, the distance measured from the user to the satellite is assumed to be very accurate, with no bias error. However, the distance measured between the receiver and the satellite has a constant unknown bias, because the user clock usually differs from the GPS clock. In order to resolve this bias error, one more satellite is required; therefore, five satellites would be needed to find the user position. If one uses four satellites and the measured distances with bias error, two possible solutions are obtained, so, theoretically, one cannot determine the user position. However, one of the solutions is close to the Earth's surface and the other one is in space. In fact, as we will see in Section 4, in this work we have used four satellites, and sometimes we have found the solution in space. Since the user position is usually close to the surface of the Earth, it can be uniquely determined.

Therefore, the general statement is that four satellites can be used to determine a user position, even though the distance measured has a bias error. The method for solving the user position discussed in the next subsections is iterative, and the initial position is often selected at the center of the Earth. In the following discussion, four satellites are considered the minimum number required for finding the user position.

##### 2.2. Basic Equations for Finding User Position

In this section, the basic equations for determining the user position will be presented. Assume that the distance measured is accurate; under this condition, three satellites are sufficient. Let us suppose that there are three known points at locations $r_1 = (x_1, y_1, z_1)$, $r_2 = (x_2, y_2, z_2)$, and $r_3 = (x_3, y_3, z_3)$ and an unknown point at $r_u = (x_u, y_u, z_u)$. If the distances between the three known points and the unknown point can be measured as $\rho_1$, $\rho_2$, and $\rho_3$, these distances can be written as
$$\rho_i = \sqrt{(x_i - x_u)^2 + (y_i - y_u)^2 + (z_i - z_u)^2}, \quad i = 1, 2, 3.$$

Because there are three unknowns and three equations, the values of $x_u$, $y_u$, and $z_u$ can be determined from these equations. Theoretically, there should be two sets of solutions, as the equations are of second order. These equations can be solved by linearizing them and applying an iterative approach; the solution of these equations will be discussed in Section 2.4. In GPS operation, the positions of the satellites are given: this information can be obtained from the data transmitted by the satellites. The distances from the user (the unknown position) to the satellites must be measured simultaneously at a certain time instant. Each satellite transmits a signal with a time reference associated with it; by measuring the travel time of the signal from the satellite to the user, the distance between the user and the satellite can be found. The distance measurement is discussed in the next section.
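The two solution sets can be observed numerically. The following sketch, with made-up positions of the three known points (not data from the paper), solves the three-distance system by Newton's method from two mirrored starting points and obtains both solutions:

```python
import math

# Three known points in the plane z = 0 and a "user" at (1, 2, 3).
# By symmetry, the mirrored point (1, 2, -3) satisfies the same distances,
# so the system has two solution sets, as noted in the text.
sats = [(10.0, 0.0, 0.0), (0.0, 10.0, 0.0), (-10.0, -10.0, 0.0)]
truth = (1.0, 2.0, 3.0)
rho = [math.dist(s, truth) for s in sats]

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= f * M[c][k]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def newton(v, iters=20):
    for _ in range(iters):
        d = [math.dist(s, v) for s in sats]
        r = [d[i] - rho[i] for i in range(3)]
        J = [[(v[j] - sats[i][j]) / d[i] for j in range(3)] for i in range(3)]
        dv = solve3(J, r)
        v = [v[j] - dv[j] for j in range(3)]
    return v

sol_up = newton([0.0, 0.0, 5.0])      # start above the plane of the points
sol_dn = newton([0.0, 0.0, -5.0])     # mirrored start below the plane
```

Because the three known points are coplanar, the two recovered solutions are mirror images across that plane, matching the ambiguity described above.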

##### 2.3. Measurement of Pseudorange

Every satellite sends a signal at a certain time $t_{si}$. The receiver will receive the signal at a later time $t_u$. The distance between the user and satellite $i$ can be determined as
$$\rho_{iT} = c\,(t_u - t_{si}),$$
where $c$ is the speed of light, $\rho_{iT}$ is often referred to as the true value of the pseudorange from the user to satellite $i$, $t_{si}$ is the true time of transmission from satellite $i$, and $t_u$ is the true time of reception. From a practical point of view, it is difficult, if not impossible, to obtain the correct time from the satellite or the user: the actual satellite clock time and the actual user clock time are related to the true times through the satellite clock error and the user clock bias error. Besides the clock errors, there are other factors affecting the pseudorange measurement: the satellite position error, the tropospheric delay error, the ionospheric delay error, the receiver measurement noise error, and the relativistic time correction. Some of these errors can be corrected; for example, the tropospheric delay can be modeled, and the ionospheric error can be corrected in a two-frequency receiver. These errors will cause inaccuracy of the user position. However, the user clock error cannot be corrected through receiver information; thus, it remains as an unknown, and the system of equations for the distances must be modified as
$$\rho_i = \sqrt{(x_i - x_u)^2 + (y_i - y_u)^2 + (z_i - z_u)^2} + b_u, \quad i = 1, 2, 3, 4, \tag{17}$$
where $b_u$ is the user clock bias error expressed in distance, which is related to the clock error $b_{ut}$ by $b_u = c\,b_{ut}$. In the system of (17), four equations are needed to solve the four unknowns $x_u$, $y_u$, $z_u$, and $b_u$; thus, in a GPS receiver, a minimum of four satellites is required to solve the user position.

##### 2.4. Solution of User Position from Pseudoranges

One common way to solve the system of (17) is to linearize it. The system can be written in a simplified form as
$$\rho_i = \sqrt{(x_i - x_u)^2 + (y_i - y_u)^2 + (z_i - z_u)^2} + b_u, \tag{18}$$
where $x_u$, $y_u$, $z_u$, and $b_u$ are the unknowns, while the pseudoranges $\rho_i$ and the positions $(x_i, y_i, z_i)$ of the satellites, $i = 1, \ldots, 4$, are known. By differentiating (18),
$$\delta\rho_i = \frac{(x_u - x_i)\,\delta x_u + (y_u - y_i)\,\delta y_u + (z_u - z_i)\,\delta z_u}{\sqrt{(x_i - x_u)^2 + (y_i - y_u)^2 + (z_i - z_u)^2}} + \delta b_u. \tag{19}$$

In (19), $\delta x_u$, $\delta y_u$, $\delta z_u$, and $\delta b_u$ can be considered as the only unknowns. The quantities $x_u$, $y_u$, $z_u$, and $b_u$ are treated as known values, because one can assume some initial values for them. From these initial values, a new set of $\delta x_u$, $\delta y_u$, $\delta z_u$, and $\delta b_u$ can be calculated, and these values are used to modify the original $x_u$, $y_u$, $z_u$, and $b_u$ to find another new set of solutions. This new set of $x_u$, $y_u$, $z_u$, and $b_u$ can be considered again as known quantities, and the process continues until the absolute values of $\delta x_u$, $\delta y_u$, $\delta z_u$, and $\delta b_u$ are very small, within a certain predetermined limit. The final values of $x_u$, $y_u$, $z_u$, and $b_u$ are the desired solution. This method is often referred to as a fixed-point iteration method. With $\delta x_u$, $\delta y_u$, $\delta z_u$, and $\delta b_u$ as unknowns, the above equations become a set of linear equations; this procedure is often referred to as linearization. The expression (19) can be written in matrix form as
$$\delta\rho = \alpha\,\delta x, \tag{20}$$
where $\delta\rho = (\delta\rho_1, \delta\rho_2, \delta\rho_3, \delta\rho_4)^T$, $\delta x = (\delta x_u, \delta y_u, \delta z_u, \delta b_u)^T$, and $\alpha$ is the $4 \times 4$ matrix of partial derivatives appearing in (19).

The solution of (20) is
$$\delta x = \alpha^{-1}\,\delta\rho.$$

This process obviously does not provide the needed solutions directly; however, the desired solutions can be obtained from it. In order to find the desired position solution, this procedure must be used repetitively in an iterative way. A quantity $\delta v$ is often used to determine whether the desired result has been reached, and it can be defined as
$$\delta v = \sqrt{\delta x_u^2 + \delta y_u^2 + \delta z_u^2 + \delta b_u^2}. \tag{23}$$

When $\delta v$ is lower than a certain predetermined threshold, the iteration stops. Sometimes, the clock bias is not included in (23). In this paper, we use as stopping criterion the quantity $\|x^{(k+1)} - x^{(k)}\| + \|F(x^{(k+1)})\|$, because it is stronger than (23). As we can verify in [27], the iterative method described above, used by GPS software to calculate the receiver position, is Newton's method, a well-known scheme with second order of convergence. In this work, we improve the GPS software by means of two methods of orders four and five, respectively, that converge to the solution with a smaller number of iterations and better indices $I$ and $C$ than Newton's scheme.
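The whole receiver-style computation can be sketched as follows, with synthetic satellite positions and a made-up clock bias rather than the Institute's data used later in the paper:

```python
import math

# Newton/fixed-point solve of the four-satellite pseudorange system (17):
# rho_i = ||sat_i - p|| + b, unknowns p = (x, y, z) and clock bias b.
# Units: kilometers; satellite geometry and bias are synthetic examples.
sats = [(26560.0, 0.0, 0.0), (0.0, 26560.0, 0.0),
        (0.0, 0.0, 26560.0), (15000.0, 15000.0, 15000.0)]
truth = (6378.0, 0.0, 0.0)     # a point on the Earth's surface
bias = 85.0                    # user clock bias expressed in distance
rho = [math.dist(s, truth) + bias for s in sats]

def solve(A, b):
    """Gaussian elimination with partial pivoting for an n x n system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Start at the center of the Earth with zero bias, as the text suggests.
v = [0.0, 0.0, 0.0, 0.0]
for _ in range(10):
    d = [math.dist(s, v[:3]) for s in sats]
    r = [d[i] + v[3] - rho[i] for i in range(4)]
    J = [[(v[j] - sats[i][j]) / d[i] for j in range(3)] + [1.0]
         for i in range(4)]
    dv = solve(J, r)
    v = [v[j] - dv[j] for j in range(4)]
```

Each pass of the loop is exactly one linearize-and-correct step of the form (19)-(20), that is, one Newton iteration on the system (17).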

#### 3. Description of the Methods and Convergence Analysis

##### 3.1. A Fourth-Order Method

In this section, we present a new fourth-order method for solving nonlinear systems, obtained by combining Newton's and Traub's methods. Its iterative expression is
$$x^{(k+1)} = z^{(k)} - [F'(x^{(k)})]^{-1} F(z^{(k)}), \tag{24}$$
where $z^{(k)} = y^{(k)} - [F'(x^{(k)})]^{-1} F(y^{(k)})$ is the $k$th iteration of Traub's method and $y^{(k)}$ is the $k$th iteration of Newton's method. In the next result, we prove that the convergence order of this method is four.
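A sketch of this composition follows; the frozen-Jacobian form of the last step is inferred from the description above, and the 2×2 test system is an illustrative choice, not one from the paper:

```python
# Fourth-order composition: a Traub step followed by one more correction
# that reuses the Jacobian frozen at x^(k), applied to the illustrative
# system F(x, y) = (x^2 + y^2 - 5, x*y - 2), with a solution at (2, 1).

def F(v):
    x, y = v
    return (x**2 + y**2 - 5.0, x*y - 2.0)

def J_inv_apply(v, w):
    """Apply the inverse Jacobian at v to w (2x2 case)."""
    x, y = v
    det = 2*x*x - 2*y*y
    a, b = w
    return ((x*a - 2*y*b) / det, (-y*a + 2*x*b) / det)

def step(v):
    fy = J_inv_apply(v, F(v))
    yv = (v[0] - fy[0], v[1] - fy[1])      # Newton predictor y
    fz = J_inv_apply(v, F(yv))
    zv = (yv[0] - fz[0], yv[1] - fz[1])    # Traub step z
    fu = J_inv_apply(v, F(zv))             # final step, Jacobian frozen at v
    return (zv[0] - fu[0], zv[1] - fu[1])

v = (2.5, 1.2)
for _ in range(5):
    v = step(v)
```

Only one Jacobian evaluation per iteration is needed, which is the key to the method's favorable computational index.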

Theorem 3. *Let $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be sufficiently differentiable at each point of an open neighborhood $D$ of $\bar{x} \in \mathbb{R}^n$ that is a solution of the nonlinear system $F(x) = 0$. Let us suppose that $F'(x)$ is continuous and nonsingular in $\bar{x}$. Then, the sequence $\{x^{(k)}\}_{k \geq 0}$ obtained by using the iterative expression (24) converges to $\bar{x}$ with order four, and the error equation is
$$e_{k+1} = 4C_2^3 e_k^4 + O(e_k^5),$$
where $C_q = (1/q!)[F'(\bar{x})]^{-1} F^{(q)}(\bar{x})$, $q = 2, 3, \ldots$, and $e_k = x^{(k)} - \bar{x}$.*

*Proof.* Taylor expansions of $F(x^{(k)})$ and $F'(x^{(k)})$ around $\bar{x}$ give
$$F(x^{(k)}) = F'(\bar{x})\left[e + C_2 e^2 + C_3 e^3 + C_4 e^4\right] + O(e^5), \qquad F'(x^{(k)}) = F'(\bar{x})\left[I + 2C_2 e + 3C_3 e^2 + 4C_4 e^3\right] + O(e^4), \tag{26}$$
where $C_q = (1/q!)[F'(\bar{x})]^{-1} F^{(q)}(\bar{x})$, $q = 2, 3, 4$, and $e = x^{(k)} - \bar{x}$. From (26), we obtain
$$[F'(x^{(k)})]^{-1} = \left[I + X_2 e + X_3 e^2 + X_4 e^3\right][F'(\bar{x})]^{-1} + O(e^4),$$
and, imposing $[F'(x^{(k)})]^{-1} F'(x^{(k)}) = I$ and solving the linear system of equations involved, we have $X_2 = -2C_2$, $X_3 = 4C_2^2 - 3C_3$, and $X_4 = -8C_2^3 + 6C_2C_3 + 6C_3C_2 - 4C_4$. Then, the error of Newton's step is
$$e_y = y^{(k)} - \bar{x} = C_2 e^2 + 2(C_3 - C_2^2)e^3 + O(e^4),$$
and Taylor's expansion of $F(y^{(k)})$ is
$$F(y^{(k)}) = F'(\bar{x})\left[e_y + C_2 e_y^2\right] + O(e_y^3). \tag{28}$$

On the other hand, we have that
$$[F'(x^{(k)})]^{-1} F(y^{(k)}) = e_y - 2C_2\,e\,e_y + O(e^4),$$
and by operating, we get the error of Traub's step,
$$e_z = z^{(k)} - \bar{x} = e_y - [F'(x^{(k)})]^{-1} F(y^{(k)}) = 2C_2\,e\,e_y + O(e^4) = 2C_2^2 e^3 + O(e^4).$$

Analogously, we obtain the expression of $[F'(x^{(k)})]^{-1} F(z^{(k)})$,
$$[F'(x^{(k)})]^{-1} F(z^{(k)}) = e_z - 2C_2\,e\,e_z + O(e^5).$$

Finally, by replacing this expansion in the iterative expression (24), we obtain the error equation
$$e_{k+1} = e_z - [F'(x^{(k)})]^{-1} F(z^{(k)}) = 2C_2\,e\,e_z + O(e^5) = 4C_2^3 e^4 + O(e^5),$$
and the proof of the theorem is completed.

##### 3.2. A Fifth-Order Method

In this section, we present a new fifth-order method for solving nonlinear systems, obtained by combining Newton's and Traub's methods again, but in a different way. Its iterative expression is
$$x^{(k+1)} = z^{(k)} - [F'(y^{(k)})]^{-1} F(z^{(k)}), \tag{39}$$
where $y^{(k)}$ is the $k$th iteration of Newton's method and $z^{(k)}$ is the $k$th iteration of Traub's method. We prove in the next result that the convergence order of this method is five.
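A sketch of this composition follows, under the same caveats as before: the placement of the Jacobian evaluation at the Newton point is inferred from the description, and the 2×2 test system is illustrative, not one from the paper:

```python
# Fifth-order composition: Traub step z, then a correction that uses the
# Jacobian evaluated at the Newton point y rather than frozen at x^(k).
# Illustrative system F(x, y) = (x^2 + y^2 - 5, x*y - 2), solution (2, 1).

def F(v):
    x, y = v
    return (x**2 + y**2 - 5.0, x*y - 2.0)

def J_inv_apply(v, w):
    """Apply the inverse Jacobian at v to w (2x2 case)."""
    x, y = v
    det = 2*x*x - 2*y*y
    a, b = w
    return ((x*a - 2*y*b) / det, (-y*a + 2*x*b) / det)

def step(v):
    fy = J_inv_apply(v, F(v))
    yv = (v[0] - fy[0], v[1] - fy[1])      # Newton point y
    fz = J_inv_apply(v, F(yv))
    zv = (yv[0] - fz[0], yv[1] - fz[1])    # Traub point z
    fu = J_inv_apply(yv, F(zv))            # Jacobian refreshed at y
    return (zv[0] - fu[0], zv[1] - fu[1])

v = (2.5, 1.2)
for _ in range(5):
    v = step(v)
```

The extra Jacobian evaluation at $y^{(k)}$ is what raises the order from four to five, at the cost of a second matrix factorization per iteration.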

Theorem 4. *Let $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be sufficiently differentiable at each point of an open neighborhood $D$ of $\bar{x} \in \mathbb{R}^n$ that is a solution of the nonlinear system $F(x) = 0$. Let us suppose that $F'(x)$ is continuous and nonsingular in $\bar{x}$. Then, the sequence $\{x^{(k)}\}_{k \geq 0}$ obtained by using the iterative expression (39) converges to $\bar{x}$ with order five, and the error equation is
$$e_{k+1} = 4C_2^4 e_k^5 + O(e_k^6),$$
where $C_q = (1/q!)[F'(\bar{x})]^{-1} F^{(q)}(\bar{x})$, $q = 2, 3, \ldots$, and $e_k = x^{(k)} - \bar{x}$.*

*Proof.* Following the procedure used in Theorem 3, we have that
$$e_y = C_2 e^2 + O(e^3), \qquad e_z = 2C_2^2 e^3 + O(e^4). \tag{42}$$

On the other hand,
$$[F'(y^{(k)})]^{-1} F(z^{(k)}) = e_z - 2C_2\,e_y\,e_z + O(e^6). \tag{44}$$

Finally, by replacing (42) and (44) in the iterative expression (39), we obtain the error equation
$$e_{k+1} = e_z - [F'(y^{(k)})]^{-1} F(z^{(k)}) = 2C_2\,e_y\,e_z + O(e^6) = 4C_2^4 e^5 + O(e^6),$$
and the proof of the theorem is completed.

##### 3.3. Pseudocomposition

In [25], a technique called *pseudocomposition*, which uses a known method as a predictor and a Gaussian quadrature as a corrector, was introduced. The order of convergence of the resulting scheme depends, among other factors, on the orders of the last two steps of the predictor. Following this idea, we generalize the procedure used to design the fifth-order method of the previous section.

Then, we can establish the next result.

Theorem 5. *Let $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be sufficiently differentiable at each point of an open neighborhood $D$ of $\bar{x} \in \mathbb{R}^n$ that is a solution of the nonlinear system $F(x) = 0$. Let us suppose that $F'(x)$ is continuous and nonsingular in $\bar{x}$. Let $y^{(k)}$ be the $k$th iteration of an iterative method of order $p$ and $z^{(k)}$ the $k$th iteration of an iterative method of order $q$, with $p \leq q$. Then, the sequence $\{x^{(k)}\}_{k \geq 0}$ obtained by the iterative expression
$$x^{(k+1)} = z^{(k)} - [F'(y^{(k)})]^{-1} F(z^{(k)}) \tag{46}$$
converges to $\bar{x}$ with order of convergence $p + q$.*

*Proof.* Taylor's expansions of the predictor errors are
$$e_y = y^{(k)} - \bar{x} = K_p e^p + O(e^{p+1}), \qquad e_z = z^{(k)} - \bar{x} = K_q e^q + O(e^{q+1}), \tag{47}$$
for certain $p$- and $q$-linear operators $K_p$ and $K_q$. Taylor's expansion of $F(z^{(k)})$ around $\bar{x}$ gives
$$F(z^{(k)}) = F'(\bar{x})\left[e_z + C_2 e_z^2\right] + O(e_z^3), \tag{48}$$
where $C_q = (1/q!)[F'(\bar{x})]^{-1} F^{(q)}(\bar{x})$ and $e = x^{(k)} - \bar{x}$. On the other hand, we have that
$$[F'(y^{(k)})]^{-1} = \left[I - 2C_2 e_y + O(e_y^2)\right][F'(\bar{x})]^{-1}. \tag{49}$$

Finally, by replacing (47), (48), and (49) in the iterative expression (46), we obtain the error equation
$$e_{k+1} = e_z - [F'(y^{(k)})]^{-1} F(z^{(k)}) = 2C_2\,e_y\,e_z - C_2 e_z^2 + O(e^{2p+q}) = 2C_2 K_p K_q\,e^{p+q} + O(e^{\min(2q,\,p+q+1)}).$$

Then, since $p \leq q$ implies $p + q \leq 2q$, the convergence order of the method resulting from this combination of a method of order $p$ with another of order $q$ is $p + q$.

#### 4. Numerical Results

Numerical computations have been carried out using variable precision arithmetic, with 2000 digits of mantissa, in MATLAB 7.1. The stopping criterion has been $\|x^{(k+1)} - x^{(k)}\| + \|F(x^{(k+1)})\| < \varepsilon$, and therefore we check that the iterate sequence converges to an approximation of the solution of the nonlinear system. For every method, we count the number of iterations needed to reach the desired tolerance, and we calculate the approximated computational order of convergence (ACOC), the efficiency index $I$, the computational index $C$, and an error estimation made with the last values of $\|x^{(k+1)} - x^{(k)}\|$ and $\|F(x^{(k+1)})\|$.

##### 4.1. Numerical Results Obtained with Academic Nonlinear Systems

Now, we compare the new fourth- and fifth-order methods with Newton's and Sharma's schemes on some academic nonlinear systems, in order to check the effectiveness and the computational order of convergence of the methods developed in this work. Three test systems, (a), (b), and (c), with known exact zeros, have been used.

In Table 1, we can find a comparison among the different numerical methods for the nonlinear systems (a), (b), and (c). As we can see, the approximated computational orders of convergence are the expected ones, and the new methods are clearly very competitive in terms of error estimation.

The efficiency index, $I$, and the computational index, $C$, of the different methods are obtained from the order of convergence, the number of functional evaluations, and the number of products and quotients per iteration of each scheme.

In Figures 2 and 3, we show these indices for different sizes of the system. It can be concluded that our methods improve Sharma's scheme in terms of $C$, although the classical efficiency index of Sharma's procedure is better for some sizes. The new fourth- and fifth-order methods are competitive, obtaining better error estimations than Newton's and Sharma's schemes with the same number of iterations.

##### 4.2. Numerical Results for the GPS Problem

In order to test the proposed schemes on the problem of estimating the position of a GPS user, we asked the Cartographic Institute of Valencia to provide us with data of known geocentric coordinates.

Concretely, the Cartographic Institute of Valencia provided us with the following:

- an example of a fixed point with known geocentric coordinates, located in Alcoy (Alicante, Spain);
- observations from that fixed point (file *.09o) for one day;
- positions of the satellites for that day: *.09n and *.sp3 files;
- a description of the RINEX format (*.09o file): http://www.igs.org/components/formats.html;
- a description of the ephemeris file and satellite positions sp3: http://igscb.jpl.nasa.gov/igscb/data/format/sp3c.txt;
- a link to other libraries for analysis calculations: http://www.ngs.noaa.gov/gps-toolbox/exist.htm.

With these data, we obtain the positions of the visible satellites at the instant corresponding to the provided observations. With these coordinates, we calculate the approximated pseudoranges for every satellite, and then we are able to build the nonlinear system of equations of the GPS, (18), using four of the satellites, with which we check the iterative methods of Newton, Sharma, and the new fourth- and fifth-order schemes.

In Table 2, we can find a comparison among the iterative methods for the nonlinear system of the GPS. We recall that the coordinates of the center of the Earth with zero clock bias, that is, $(0, 0, 0, 0)$, are usually used as the initial estimation. Despite this, we have also tested the methods with some other initial conditions. We denote the solution close to the Earth's surface as the Earth solution and the other one as the exterior space solution.

As we can see, for this particular system of equations, Newton's method does not converge to the user position for all the initial estimations, and neither does one of the compared schemes; however, the remaining method is good in every sense and very competitive with respect to the known methods.

#### 5. Conclusions

In this paper, we have gone in depth into an emerging line of investigation: the improvement of the software of GPS receivers. Concretely, GPS receivers currently use Newton's method to solve the nonlinear system (18) and to calculate their exact position with the information obtained from the signals received from the GPS constellation of satellites. We propose two different combinations of Newton's and Traub's methods, obtaining two schemes of fourth and fifth order of convergence. Using the idea presented in [25], called *pseudocomposition*, it is proved that, by combining in a particular way two methods of orders $p$ and $q$, respectively, with $p \leq q$, the order of convergence of the resulting scheme is $p + q$. We have numerically compared the different methods, and we have concluded that the new schemes are very competitive in terms of the error estimation.

#### Acknowledgments

This research was supported by Ministerio de Ciencia y Tecnología MTM2011-28636-C02-02 and FONDOCYT 2011-1-B1-33 República Dominicana. The authors would also like to thank the work of the anonymous referee.

#### References

1. A. M. Ostrowski, *Solution of Equations and Systems of Equations*, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.
2. L. V. Kantorovič, “Functional analysis and applied mathematics,” *Uspekhi Matematicheskikh Nauk*, vol. 3, pp. 89–185, 1948 (Russian).
3. S. Weerakoon and T. G. I. Fernando, “A variant of Newton's method with accelerated third-order convergence,” *Applied Mathematics Letters*, vol. 13, no. 8, pp. 87–93, 2000.
4. A. Y. Özban, “Some new variants of Newton's method,” *Applied Mathematics Letters*, vol. 17, no. 6, pp. 677–682, 2004.
5. J. Gerlach, “Accelerated convergence in Newton's method,” *SIAM Review*, vol. 36, no. 2, pp. 272–276, 1994.
6. A. Cordero and J. R. Torregrosa, “Variants of Newton's method for functions of several variables,” *Applied Mathematics and Computation*, vol. 183, no. 1, pp. 199–208, 2006.
7. A. Cordero and J. R. Torregrosa, “Variants of Newton's method using fifth-order quadrature formulas,” *Applied Mathematics and Computation*, vol. 190, no. 1, pp. 686–698, 2007.
8. M. Frontini and E. Sormani, “Third-order methods from quadrature formulae for solving systems of nonlinear equations,” *Applied Mathematics and Computation*, vol. 149, no. 3, pp. 771–782, 2004.
9. A. Cordero and J. R. Torregrosa, “On interpolation variants of Newton's method for functions of several variables,” *Journal of Computational and Applied Mathematics*, vol. 234, no. 1, pp. 34–43, 2010.
10. M. T. Darvishi and A. Barati, “A fourth-order method from quadrature formulae to solve systems of nonlinear equations,” *Applied Mathematics and Computation*, vol. 188, no. 1, pp. 257–261, 2007.
11. M. T. Darvishi, “Some three-step iterative methods free from second order derivative for finding solutions of systems of nonlinear equations,” *International Journal of Pure and Applied Mathematics*, vol. 57, no. 4, pp. 557–573, 2009.
12. G. Adomian, *Solving Frontier Problems of Physics: The Decomposition Method*, Kluwer Academic, Dordrecht, The Netherlands, 1994.
13. D. K. R. Babajee, M. Z. Dauhoo, M. T. Darvishi, and A. Barati, “A note on the local convergence of iterative methods based on Adomian decomposition method and 3-node quadrature rule,” *Applied Mathematics and Computation*, vol. 200, no. 1, pp. 452–458, 2008.
14. M. T. Darvishi and A. Barati, “A third-order Newton-type method to solve systems of nonlinear equations,” *Applied Mathematics and Computation*, vol. 187, no. 2, pp. 630–635, 2007.
15. M. T. Darvishi and A. Barati, “Super cubic iterative methods to solve systems of nonlinear equations,” *Applied Mathematics and Computation*, vol. 188, no. 2, pp. 1678–1685, 2007.
16. A. Cordero, E. Martínez, and J. R. Torregrosa, “Iterative methods of order four and five for systems of nonlinear equations,” *Journal of Computational and Applied Mathematics*, vol. 231, no. 2, pp. 541–551, 2009.
17. J. F. Traub, *Iterative Methods for the Solution of Equations*, Chelsea Publishing, New York, NY, USA, 1982.
18. D. K. R. Babajee, M. Z. Dauhoo, M. T. Darvishi, A. Karami, and A. Barati, “Analysis of two Chebyshev-like third order methods free from second derivatives for solving systems of nonlinear equations,” *Journal of Computational and Applied Mathematics*, vol. 233, no. 8, pp. 2002–2012, 2010.
19. J. R. Sharma, R. K. Guha, and R. Sharma, “An efficient fourth order weighted-Newton method for systems of nonlinear equations,” *Numerical Algorithms*, vol. 62, no. 2, pp. 307–323, 2013.
20. A. Cordero, J. L. Hueso, E. Martínez, and J. R. Torregrosa, “A modified Newton-Jarratt's composition,” *Numerical Algorithms*, vol. 55, no. 1, pp. 87–99, 2010.
21. H. T. Kung and J. F. Traub, “Optimal order of one-point and multipoint iteration,” *Journal of the Association for Computing Machinery*, vol. 21, pp. 643–651, 1974.
22. P. Jarratt, “Some fourth order multipoint iterative methods for solving equations,” *Mathematics of Computation*, vol. 20, pp. 434–437, 1966.
23. R. F. King, “A family of fourth order methods for nonlinear equations,” *SIAM Journal on Numerical Analysis*, vol. 10, pp. 876–879, 1973.
24. V. Arroyo, A. Cordero, and J. R. Torregrosa, “Approximation of artificial satellites' preliminary orbits: the efficiency challenge,” *Mathematical and Computer Modelling*, vol. 54, no. 7-8, pp. 1802–1807, 2011.
25. A. Cordero, J. R. Torregrosa, and M. P. Vassileva, “Pseudocomposition: a technique to design predictor-corrector methods for systems of nonlinear equations,” *Applied Mathematics and Computation*, vol. 218, no. 23, pp. 11496–11504, 2012.
26. J. B. Y. Tsui, *Fundamentals of Global Positioning System Receivers: A Software Approach*, Wiley Interscience, 2005.
27. X. Sun, Y. Ji, H. Shi, and Y. Li, “Evaluation of two methods for three satellites position of GPS with altimeter aiding,” in *Proceedings of the 5th International Conference on Information Technology and Applications (ICITA '08)*, pp. 667–670, June 2008.