Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 369694, 11 pages
http://dx.doi.org/10.1155/2013/369694
Research Article

Neural Network for WGDOP Approximation and Mobile Location

1Department of Information Management, Tainan University of Technology, Tainan 71002, Taiwan
2Department of Communication Engineering, Chung-Hua University, Hsinchu 30012, Taiwan
3Department of Electronic Engineering, National Quemoy University, Quemoy 89250, Taiwan

Received 12 April 2013; Revised 17 June 2013; Accepted 17 June 2013

Academic Editor: Ker-Wei Yu

Copyright © 2013 Chien-Sheng Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper considers location methods that are applicable to the global positioning system (GPS), wireless sensor networks (WSN), and cellular communication systems. The approach is to employ resilient backpropagation (Rprop), an artificial neural network learning algorithm, to compute the weighted geometric dilution of precision (WGDOP), which represents the geometric effect on the relationship between measurement error and positioning error. The original four kinds of input-output mapping based on the backpropagation neural network (BPNN) for GDOP calculation are extended to WGDOP based on Rprop. In addition, we propose two novel Rprop-based architectures to approximate WGDOP. To further reduce the complexity of our approach, we first select the serving BS and then combine it with three other measurements to estimate the MS location; as such, the number of subsets is reduced greatly without compromising location estimation accuracy. We further employ another Rprop network that takes as inputs the higher-precision MS locations corresponding to the first several minimum-WGDOP subsets to determine the final MS location estimate. This method not only eliminates poor geometry effects but also significantly improves location accuracy.

1. Introduction

Mobile positioning is becoming an increasingly important problem, whose solutions can generally be divided into two major categories—handset-based methods and network-based methods [1]. When equipped with a global positioning system (GPS) receiver, handset-based location schemes require modifications to the handset so that it can calculate its own position. Network-based location schemes can be used in situations where GPS signals are not available, for example, in indoor environments, or when GPS-embedded handsets are not available. The network-based methods estimate the mobile location based on signals received between the mobile station (MS) and a set of base stations (BSs). For many applications in wireless sensor networks (WSN), such as environmental sensing and activity monitoring, it is crucial to know the locations of the sensor nodes; this is known as the "localization problem" [1].

Geometric dilution of precision (GDOP) can be applied as a criterion for choosing an appropriate geometric configuration of the measurement units. Different stations constitute different combinations, and randomly selected stations can yield relatively poor accuracy. Reference [2] proposed a method based on fuzzy clustering to analyze the positioning distribution of each combination; the key idea is to use only those stations with good GDOP and time of arrival (TOA). Reference [3] defines a time-varying function based on the GDOP curved surface, which is quite complex and makes it difficult to effectively track a mobile target while maintaining good GDOP; the target localization scheme proposed there instead uses the best mobile sensor node (MSN) locations, which can effectively mitigate the GDOP effect and provide more accurate location estimates in mobile sensing systems. Reference [4] proposed a hybrid algorithm, combining pseudorange differencing and a ridge regression technique, to improve the position accuracy of a pseudolite-only system and effectively reduce GDOP effects. The method proposed in this paper, by contrast, can be applied in conditions such as indoor navigation system design, where GPS satellite signals are not available.

This paper considers both network-based and handset-based methods, employing the concept of GDOP, which was originally developed to select the optimal geometric configuration of satellites in GPS. When enough measurements are available, selecting the optimal measurements can reduce adverse geometry effects and thereby improve location accuracy. However, excessive or redundant measurements increase the computational overhead and may not improve the location accuracy significantly. Hence, before positioning, it is important to rapidly select a subset of the most suitable measurement units.

Implementation of the GDOP method assumes that all pseudorange errors are independent and identically distributed [5], but the measurement errors usually have different variances in practice [6]. In general, the quality of a satellite signal is assessed by combining values such as user range accuracy, carrier-to-noise ratio, elevation angle, and the ephemeris. In [7], Sairo et al. proposed a method that takes the different error variances into account. In [8], the elevation angle and the receiver's signal-to-noise ratio (SNR) are used to weight GDOP and provide the positioning solution. When baro-altitude measurements or a priori terrain elevation information is used, the conventional GDOP formula cannot be applied and must be modified [9]. Combining the GPS and Galileo satellite constellations will provide more visible satellites with better geometric distribution and significantly improved accessibility. A weighted GDOP (WGDOP) algorithm was proposed in [10] for the combined GPS-Galileo navigation receiver. Reference [11] considered the WGDOP value, the number of visible satellites, and the constellation costs as three objective functions of navigation constellation performance; simulation results show that the resulting optimal solutions provide more visible satellites and better WGDOP. In addition, several methods based on WGDOP have been proposed to improve GPS positioning accuracy [12–18]. Most, if not all, of these methods need matrix inversion to calculate WGDOP. Matrix inversion is time consuming and imposes a considerable computational burden, so the improved performance comes at the expense of increased computational complexity, which is often too high to be practical.

Simon and El-Sherief [19, 20] employed the backpropagation neural network (BPNN), a supervised learning neural network [21], to obtain an approximation to the GDOP function. The BPNN was employed to "learn" the relationship between the entries of a measurement matrix and the eigenvalues of its inverse. Three other input-output relationships were later proposed and compared based on simulation results [22]. However, BPNN generally converges slowly and tends to get trapped in local minima. Considering both effectiveness and efficiency, this paper presents two novel architectures based on an alternative artificial neural network method, namely, resilient backpropagation (Rprop) [21], to approximate WGDOP. Rprop is an algorithm with good convergence speed, accuracy, and robustness to the training parameters [23]. Compared to BPNN, Rprop converges faster and needs less training; fast convergence curtails the training time and allows predictive neural network models to be built quickly. Moreover, collecting training data costs considerable money, time, and resources. A training pattern consists of a set of input vectors and the corresponding output vectors, and in many situations sufficient training data is not available, which can degrade prediction accuracy; conversely, the more training data we use, the more training time and resources are required. It is therefore critical to achieve high accuracy when the training data is limited, making both fast convergence and a small number of training iterations very important. Simulation results show that the proposed Rprop-based architectures provide faster convergence and require fewer training iterations, so they can be applied to select the location measurement units in GPS, WSN, and cellular communication systems.
In practice, the measurement units of GPS, WSN, and cellular communication systems are satellites, sensors, and BSs, respectively.

To select the most appropriate set of BSs and achieve the minimum positioning error in cellular communication systems, we need to consider not only the GDOP effect but also the non-line-of-sight (NLOS) error statistics of each BS. The reciprocal of the square root of an upper bound of the NLOS errors is set as the weight coefficient. In this paper, we select a subset of four measurements in the location process: we first select the serving BS as one of the four and then combine it with three other measurements to form the WGDOP subsets. After calculating the WGDOP values, the subset with minimum WGDOP is used to estimate the MS location. Simulation results show that Rprop provides a much better estimate of the average WGDOP residual than BPNN, with fewer epochs and much less convergence time. The proposed Rprop architectures for WGDOP approximation always yield better location results than the other architectures, and they provide MS location estimates nearly identical to those of the matrix inversion method. Four randomly selected BSs with poor WGDOP give bad location estimates, since positioning accuracy is seriously affected by the geometric configuration of the BSs and the MS. The proposed BS selection criterion greatly reduces the number of subsets while providing a comparable level of location estimation accuracy. We therefore conclude that the proposed algorithm can be applied in practical situations.

To improve positioning accuracy even further, the higher-precision MS locations corresponding to the first several minimum-WGDOP subsets can be used as the inputs to the next Rprop. After training, this Rprop can be applied to predict the final MS location from the higher-precision MS locations. The simulation results confirm that the proposed methods, employing at most two higher-precision MS locations, can always perform better than using all seven BSs. In essence, these methods greatly reduce the NLOS error and effectively enhance the performance of MS location estimation.

The remainder of this paper is organized as follows: Section 2 presents the calculation of GDOP and WGDOP. Section 3 describes briefly BPNN and Rprop methods. The six types of mapping for WGDOP approximation based on Rprop are proposed in Section 4. Section 5 presents a proposed BSs selection criterion and the location methods. Simulation results are given in Section 6 followed by conclusion in Section 7.

2. Calculation of GDOP and WGDOP

The concept of GDOP is commonly used to determine the geometric effect of GPS satellite configurations. It has a simple form if all the measurements have the same error variance. The smaller the GDOP value, the more accurate the positioning is. In order to improve the positioning accuracy, we should minimize GDOP among the selected measurement units.

Using a three-dimensional (3D) Cartesian coordinate system, the distance between satellite i and the user can be expressed as

ρ_i = sqrt((x_i − x)^2 + (y_i − y)^2 + (z_i − z)^2) + cΔt + v_i,  i = 1, …, n,  (1)

where (x, y, z) and (x_i, y_i, z_i) are the locations of the user and satellite i, respectively; c is the speed of light, Δt denotes the time offset, and v_i is the pseudorange measurement noise. Equation (1) can be linearized with a Taylor series expansion at the approximate user position (x_0, y_0, z_0), neglecting the higher-order terms. Defining δρ_i as the difference between ρ_i and its value at (x_0, y_0, z_0), we can obtain

δρ_i = e_i1 δx + e_i2 δy + e_i3 δz + cΔt + v_i,  (2)

where δx, δy, δz are, respectively, the coordinate offsets of x, y, z, and

e_i1 = (x_0 − x_i)/r_i,  e_i2 = (y_0 − y_i)/r_i,  e_i3 = (z_0 − z_i)/r_i  (3)

are the direction cosines from the user to the ith satellite, with r_i the approximate range. The linearized equations can be expressed in vector form as

δρ = H δx + v,  (4)

where H is the geometry matrix, whose ith row is [e_i1 e_i2 e_i3 1], and δx = [δx δy δz cΔt]^T.

The vector variable δx in (4) can be solved with the least-squares (LS) algorithm, namely,

δx̂ = (H^T H)^(−1) H^T δρ.

Assuming that the pseudorange errors are uncorrelated with equal variances σ^2, the error covariance matrix can be expressed as

cov(δx̂) = σ^2 (H^T H)^(−1).

The variances are functions of the diagonal elements of (H^T H)^(−1). The GDOP is a measure of accuracy for positioning systems and is defined as

GDOP = sqrt(trace((H^T H)^(−1))).

In practice, the measurement errors do not have the same variance, especially when different systems are combined. The covariance matrix then has the form

cov(v) = diag(σ_1^2, σ_2^2, …, σ_n^2).

Now, define a weight matrix as

W = diag(1/σ_1^2, 1/σ_2^2, …, 1/σ_n^2),

where σ_1^2, …, σ_n^2 are the variances of the measurement errors.

With the weighting matrix defined above, we now need to solve a weighted least-squares (WLS) problem, whose solution is given by

δx̂ = (H^T W H)^(−1) H^T W δρ.

To select the set of the most appropriate measurement units that renders the minimum positioning error, we must consider not only the GDOP effect but also the ranging error statistics. In this paper, we employ WGDOP, instead of GDOP, to select measurement units so as to improve the location accuracy. The optimal subset is the one with the minimum WGDOP, which is given by the trace of the inverse of the matrix H^T W H:

WGDOP = sqrt(trace((H^T W H)^(−1))).

The conventional method for calculating WGDOP is to apply matrix inversion to all subsets, which requires a great deal of computational effort. When the number of measurement units increases, the computation time increases rapidly. In this paper, we instead employ an artificial neural network learning algorithm to obtain an approximate WGDOP.
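The trace-based WGDOP formula above can be sketched in a few lines of NumPy; the geometry rows and weights below are hypothetical examples for illustration, not values from the paper.

```python
import numpy as np

def wgdop(H, W):
    """WGDOP = sqrt(trace((H^T W H)^{-1})) for geometry matrix H, weight matrix W."""
    M = H.T @ W @ H
    return float(np.sqrt(np.trace(np.linalg.inv(M))))

# Example: four measurement units in a 2D TOA setting
# (rows: two direction cosines plus a clock-bias term).
H = np.array([[ 1.0,  0.0, 1.0],
              [ 0.0,  1.0, 1.0],
              [-1.0,  0.0, 1.0],
              [ 0.0, -1.0, 1.0]])
W = np.diag([1.0, 0.5, 1.0, 0.5])   # hypothetical weights 1/sigma_i^2
print(wgdop(H, W))
```

For this symmetric geometry H^T W H is diagonal, so the trace of its inverse can be checked by hand.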

3. Traditional BPNN Algorithm and Rprop Algorithm

It is well known that BPNN is capable of learning and realizing both linear and nonlinear functions [21]. The learning process of BPNN can be viewed as a gradient descent method that minimizes some error measure, for example, the mean-square value of the difference between the actual output vector of the network and the desired output vector. Define an error function

E = (1/2) ||d − o||^2,

where o is the output vector of the network and d is the desired output vector. Then the gradient descent algorithm is employed to adapt the weights (namely, synapses) as follows:

Δw_ji = −η ∂E/∂w_ji,

where η is a predetermined learning rate and w_ji denotes the weight connecting neuron i to neuron j. The major drawbacks of traditional BPNN are a slow learning process and the tendency to become trapped in local minima.

Compared to the traditional BPNN algorithm, the Rprop algorithm offers faster convergence and is usually more capable of escaping from local minima. Rprop is a first-order algorithm, and its time and memory requirements scale linearly with the number of parameters. In practice, Rprop is easier to implement than BPNN; moreover, a hardware implementation of Rprop has been presented in [24]. Briefly speaking, Rprop performs a direct adaptation of the weighting step based on local gradient information. The main idea of Rprop is to reduce the potentially spurious effect of the magnitude of the partial derivative on weight updates by retaining only the sign of the derivative as an indication of the direction in which the error function changes. We introduce for each weight an individual update-value Δ_ji, which solely determines the size of the weight update. This adaptive update-value evolves during the learning process based on its local view of the error function E, according to the following learning rule [25]:

Δ_ji(t) = η+ · Δ_ji(t−1),  if ∂E/∂w_ji(t−1) · ∂E/∂w_ji(t) > 0,
Δ_ji(t) = η− · Δ_ji(t−1),  if ∂E/∂w_ji(t−1) · ∂E/∂w_ji(t) < 0,
Δ_ji(t) = Δ_ji(t−1),  otherwise,

where 0 < η− < 1 < η+. We can describe the adaptation rule simply as follows: every time the partial derivative of the error function with respect to the corresponding weight changes its sign, the update-value is decreased by the factor η−. If the derivative retains its sign, the update-value is slightly increased (by the factor η+) in order to accelerate convergence in shallow regions.

Once the update-value for each weight is adapted, the weight update itself follows a very simple rule: if the derivative is positive (increasing error), the weight is decreased by its update-value, and if the derivative is negative, the update-value is added to the weight:

Δw_ji(t) = −sign(∂E/∂w_ji(t)) · Δ_ji(t),
w_ji(t+1) = w_ji(t) + Δw_ji(t).
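As a concrete illustration, the sign-based update rules above can be sketched as a minimal Rprop loop in NumPy. This is a toy minimizer on a quadratic error surface rather than the paper's network; the factors eta_plus = 1.2 and eta_minus = 0.5 and the step bounds are common defaults from the Rprop literature, and zeroing the gradient after a sign change is one standard variant.

```python
import numpy as np

def rprop_minimize(grad_fn, w, n_iter=200, eta_plus=1.2, eta_minus=0.5,
                   step_init=0.1, step_min=1e-6, step_max=50.0):
    """Minimal Rprop sketch: per-parameter step sizes adapted from gradient signs."""
    step = np.full_like(w, step_init)
    g_prev = np.zeros_like(w)
    for _ in range(n_iter):
        g = grad_fn(w)
        same_sign = g * g_prev
        # Grow the step while the sign is stable, shrink it after a sign change.
        step = np.where(same_sign > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(same_sign < 0, np.maximum(step * eta_minus, step_min), step)
        g = np.where(same_sign < 0, 0.0, g)   # skip the update after a sign change
        w = w - np.sign(g) * step             # update uses only the gradient's sign
        g_prev = g
    return w

# Toy error surface E(w) = ||w - target||^2, gradient 2(w - target).
target = np.array([3.0, -2.0])
w_opt = rprop_minimize(lambda w: 2.0 * (w - target), np.zeros(2))
print(w_opt)  # close to [3, -2]
```

Note that only the sign of the gradient enters the update, which is what makes Rprop robust to badly scaled derivatives.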

4. Proposed Network Architectures for WGDOP Approximation

Researchers have employed the conventional BPNN to estimate GDOP; see, for example, [19, 20, 22]. This reduces the computational complexity required for the matrix inversion in calculating GDOP. Since the statistics of different location measurement units are, in general, not equal, WGDOP serves as an index for location precision in different networks, such as GPS, WSN, and cellular communication systems. In this paper, the original four types of BPNN, defined by four different input-output mappings for GDOP calculation [19, 20, 22], are extended to WGDOP with the employment of Rprop. In addition, we propose two new mapping architectures.

To further reduce the computational overhead and improve location performance, the selection of optimal measurement units is necessary. Instead of using all visible satellites, four satellites are usually sufficient for GPS positioning. As such, we take only four BSs from among seven with better geometry to estimate the MS location in cellular communication networks. Different structures of four location measurement units were implemented to illustrate the applicability of Rprop for WGDOP prediction in 3D environments. Accordingly, H^T W H is a 4 × 4 matrix and it has four eigenvalues, namely, λ_i, i = 1, …, 4. Therefore, the four eigenvalues of (H^T W H)^(−1) are 1/λ_i, i = 1, …, 4. Consequently, WGDOP can be expressed as

WGDOP = sqrt(1/λ_1 + 1/λ_2 + 1/λ_3 + 1/λ_4).

On the other hand, the geometry matrix and weight matrix composed of four location measurement units in two-dimensional (2D) environments are

H = [e_11 e_12 1
     e_21 e_22 1
     e_31 e_32 1
     e_41 e_42 1],  W = diag(1/σ_1^2, 1/σ_2^2, 1/σ_3^2, 1/σ_4^2),

respectively. Therefore, H^T W H is a 3 × 3 matrix with eigenvalues λ_1, λ_2, λ_3, and WGDOP is

WGDOP = sqrt(1/λ_1 + 1/λ_2 + 1/λ_3).
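The equivalence between the eigenvalue form and the trace form of WGDOP is easy to verify numerically; the random geometry and weights below are placeholders for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))                  # 2D case: 4 units, 3 columns
W = np.diag(rng.uniform(0.5, 2.0, size=4))   # hypothetical diagonal weights

M = H.T @ W @ H
lam = np.linalg.eigvalsh(M)                  # eigenvalues of the symmetric matrix M

wgdop_eigs = np.sqrt(np.sum(1.0 / lam))      # sqrt of the sum of inverse eigenvalues
wgdop_trace = np.sqrt(np.trace(np.linalg.inv(M)))
print(wgdop_eigs, wgdop_trace)               # the two forms agree
```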

In the following, we present six types of Rprop mapping architectures for WGDOP prediction in 3D and 2D environments, together with the mapping relationship of the three-layer structure of the form "number of inputs – number of hidden neurons – number of outputs". These six types of architectures are depicted in the block diagram in Figure 1.

Figure 1: The input-output relationships for six types of mapping using Rprop.

Type 1. (A) 3D: four inputs are mapped to four outputs. The network has the following input-output pair. Input: (f_1, f_2, f_3, f_4), where f_1 = λ_1 + λ_2 + λ_3 + λ_4, f_2 = λ_1^2 + λ_2^2 + λ_3^2 + λ_4^2, f_3 = λ_1^3 + λ_2^3 + λ_3^3 + λ_4^3, and f_4 = λ_1 λ_2 λ_3 λ_4. Output: (1/λ_1, 1/λ_2, 1/λ_3, 1/λ_4). One can see that the mapping from the inputs to the inverse eigenvalues is nonlinear and usually difficult to determine analytically. After the training period, this mapping relationship can be approximated quite well by the neural network. WGDOP is estimated by taking the square root of the sum of the outputs.

(B) 2D: three inputs are mapped to three outputs. The network has the following input-output pair. Input: (f_1, f_2, f_3), where f_1 = λ_1 + λ_2 + λ_3, f_2 = λ_1^2 + λ_2^2 + λ_3^2, and f_3 = λ_1 λ_2 λ_3. Output: (1/λ_1, 1/λ_2, 1/λ_3). The sum of the outputs gives the square of WGDOP.

Type 2. (A) 3D: four inputs are mapped to one output. In this case, WGDOP values are directly used as the output of the training data. The network has the following structure. Input: (f_1, f_2, f_3, f_4). Output: WGDOP.

(B) 2D: three inputs are mapped to one output. Input: (f_1, f_2, f_3). Output: WGDOP.

Type 3. (A) 3D: ten inputs are mapped to four outputs. This type of mapping is used to obtain approximations to the inverse eigenvalues from the elements of the matrix M = H^T W H. The matrix M is a 4 × 4 symmetric matrix and can be expressed as

M = [m_11 m_12 m_13 m_14
     m_12 m_22 m_23 m_24
     m_13 m_23 m_33 m_34
     m_14 m_24 m_34 m_44],

so it has only ten distinct elements.

The network has the structure with the following input-output pair. Input: (m_11, m_12, m_13, m_14, m_22, m_23, m_24, m_33, m_34, m_44). Output: (1/λ_1, 1/λ_2, 1/λ_3, 1/λ_4).

(B) 2D: six inputs are mapped to three outputs. Input: (m_11, m_12, m_13, m_22, m_23, m_33). Output: (1/λ_1, 1/λ_2, 1/λ_3).

Type 4. (A) 3D: ten inputs are mapped to one output. This is a mapping from the elements of the matrix M to approximate WGDOP directly. The network has the following input-output relationship. Input: (m_11, m_12, m_13, m_14, m_22, m_23, m_24, m_33, m_34, m_44). Output: WGDOP.

(B) 2D: six inputs are mapped to one output. Input: (m_11, m_12, m_13, m_22, m_23, m_33). Output: WGDOP.

Type 5. (A) 3D: sixteen inputs are mapped to four outputs. Using this type of mapping, the elements of the matrices H and W are utilized to approximate the inverse eigenvalues without having to calculate H^T W H. The network has the following mapping architecture. Input: the twelve direction-cosine entries e_i1, e_i2, e_i3 (i = 1, …, 4) of H together with the four diagonal entries of W. Output: (1/λ_1, 1/λ_2, 1/λ_3, 1/λ_4).

(B) 2D: twelve inputs are mapped to three outputs. Input: the eight entries e_i1, e_i2 (i = 1, …, 4) of H together with the four diagonal entries of W. Output: (1/λ_1, 1/λ_2, 1/λ_3).

Type 6. (A) 3D: sixteen inputs are mapped to one output. This architecture is proposed to train the mapping that approximates WGDOP directly from the elements of the matrices H and W. The network has the following input-output relationship. Input: the twelve entries e_i1, e_i2, e_i3 (i = 1, …, 4) of H together with the four diagonal entries of W. Output: WGDOP.

(B) 2D: twelve inputs are mapped to one output. Input: the eight entries e_i1, e_i2 (i = 1, …, 4) of H together with the four diagonal entries of W. Output: WGDOP.

Given a number of known input vectors and corresponding output vectors, Rprop is employed to train a network until it produces approximate WGDOP values. After training, feeding the elements of the matrices H and W as input data not only passes through the trained Rprop network quickly but also predicts WGDOP accurately. The simulation results show that the proposed Type 5 and Type 6 need fewer hidden neurons and fewer training iterations. Thus they carry a much reduced computational load and are more practical. Note that all the above architectures for obtaining WGDOP are applicable regardless of the number of location measurement units.
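To make the direct H-and-W mapping concrete, the following sketch generates synthetic training pairs for a 2D, one-output network: twelve inputs (the eight varying entries of H plus the four diagonal weights) and one target (WGDOP computed by matrix inversion). The random geometries and weight ranges are assumptions for illustration, not the paper's training data.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_sample():
    """One training pattern: 12 inputs from H and W, one WGDOP target."""
    ang = rng.uniform(0, 2 * np.pi, size=4)            # directions to 4 units
    H = np.column_stack([np.cos(ang), np.sin(ang), np.ones(4)])
    w = rng.uniform(0.5, 2.0, size=4)                  # diagonal of W
    wgdop = np.sqrt(np.trace(np.linalg.inv(H.T @ np.diag(w) @ H)))
    x = np.concatenate([H[:, :2].ravel(), w])          # 12-dimensional input
    return x, wgdop

X, y = zip(*(make_sample() for _ in range(500)))       # 500 patterns
X, y = np.array(X), np.array(y)
print(X.shape, y.shape)
```

Such (X, y) pairs are what a supervised learner like Rprop would be trained on.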

5. Proposed BS Selection Criterion and Location Methods Using Next Rprop

5.1. Proposed BS Selection Criterion

The proposed BS selection criterion with minimum WGDOP can be modified for application in cellular communication systems. Incorporating not only BS geometry but also NLOS error statistics, we use the reciprocal of the square root of the upper bound of the NLOS errors as the factor determining the weight matrix. The measurements are divided into a number of subsets, and the location estimate rendered by the minimum-WGDOP subset is then determined accordingly. In this paper, we only consider subsets of four measurements; thus, the seven measurements could be divided into C(7,4) = 35 possible subsets. In such a system, the BS which serves a particular MS is called the serving BS, and it can provide more accurate measurements. To further simplify the process, the proposed BS selection criterion first chooses the serving BS and then selects three measurements from the other six BSs to form a subset. As such, the number of possible subsets is reduced from 35 to C(6,3) = 20 and the computational load is much reduced. In this way, WGDOP is computed for the 20 possible subsets and the one with the smallest WGDOP is selected.
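The subset search described above can be sketched directly with itertools: fix the serving BS, enumerate the C(6,3) = 20 three-element combinations of the remaining BSs, and keep the subset with minimum WGDOP. The geometry rows and weights below are a hypothetical layout for illustration.

```python
import itertools
import numpy as np

def wgdop(H, W):
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ W @ H))))

def select_subset(units, weights, serving=0, k=4):
    """Fix the serving BS and try all combinations of k-1 of the remaining
    units; return (min WGDOP, index subset)."""
    others = [i for i in range(len(units)) if i != serving]
    best = None
    for combo in itertools.combinations(others, k - 1):
        idx = (serving,) + combo
        H = units[list(idx)]
        W = np.diag(weights[list(idx)])
        g = wgdop(H, W)
        if best is None or g < best[0]:
            best = (g, idx)
    return best

# 7 BSs: rows are direction cosines toward each BS plus a clock column (illustrative).
ang = np.linspace(0, 2 * np.pi, 7, endpoint=False)
units = np.column_stack([np.cos(ang), np.sin(ang), np.ones(7)])
weights = np.array([2.0, 1.0, 0.8, 1.2, 0.9, 1.1, 1.0])  # serving BS weighted highest
g_min, subset = select_subset(units, weights)
print(g_min, subset)
```

The loop visits exactly 20 subsets, each containing the serving BS.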

5.2. Proposed Location Methods Using Next Rprop

Based on the above BS selection criterion, the simplest location method employs the BSs of the minimum-WGDOP subset to estimate the MS location. In general, subsets with smaller WGDOP provide more accurate MS location results. The proposed method calculates the MS locations of the several subsets with the smallest WGDOP values; the corresponding higher-precision MS locations are defined as the candidate points. We then employ another Rprop network to estimate the final MS location. Specifically, the candidate points are fed into the next Rprop, which has the following input-output mapping. Input: the candidate points. Output: the true MS location. During the training period, Rprop is trained to establish the relationship between the candidate points and the true MS location. After training, the candidate points are passed through the trained Rprop as input data to predict the final MS location.

6. Simulation Results

We consider the problem of mobile location using TOA measurements and attempt to improve the performance of MS location estimation in 2D environments. Computer simulations are performed to investigate the improvement in location accuracy. We consider a center hexagonal cell (where the serving BS resides) with six adjacent hexagonal cells of the same size, as shown in Figure 2. The serving BS is located at the center of the layout. Each cell has a radius of 5 km and the MS locations are uniformly distributed in the center cell [26]. The dominant error for wireless location systems is usually due to the NLOS propagation effect. NLOS error statistics are assumed available and can vary significantly from one BS to another. In this paper, the NLOS propagation model is based on the uniformly distributed noise model [27], in which the TOA NLOS errors from the BSs are different and assumed to be uniformly distributed over (0, U_i), for i = 1, …, 7, where U_i is an upper bound; a different fixed upper bound is assigned to each of the seven BSs. The reciprocals of the square roots of the upper bounds of the NLOS errors are set to be the diagonal elements of the weight matrix W.
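The NLOS noise model above can be sketched as follows; the upper bounds U_i here are hypothetical placeholders (the paper's specific values are not reproduced), and the weight matrix takes 1/sqrt(U_i) on its diagonal as described.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-BS NLOS upper bounds in meters (assumed values, not the paper's).
U = np.array([100.0, 300.0, 250.0, 400.0, 350.0, 200.0, 450.0])

nlos_errors = rng.uniform(0.0, U)   # one uniformly distributed NLOS range error per BS
W = np.diag(1.0 / np.sqrt(U))       # weight matrix built from the error bounds

print(nlos_errors)
print(np.diag(W))
```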

Figure 2: Seven-cell system layout.

In the simulations, we consider only a single hidden layer, which is the most commonly used configuration. Figure 3 shows how the converged WGDOP residual varies as the number of training iterations (epochs) increases. Here, the WGDOP residual is defined as the difference between the actual and estimated WGDOP. Observe that the WGDOP residual decreases as the number of epochs increases: at the beginning of the training period the error decreases rapidly, and when the number of epochs exceeds 2000 the reduction slows down. This indicates that Rprop offers much faster convergence than the traditional BPNN [22].

Figure 3: WGDOP residual reduction according to the number of epochs.

The number of hidden neurons can also be critical. With too few hidden neurons, a larger error may occur; on the other hand, too many hidden neurons can slow down convergence. Some general rules for determining the number of hidden neurons are (i) half of the sum of the numbers of input and output neurons; (ii) the same as the number of input neurons; (iii) two times the number of input neurons plus one; (iv) three times the number of input neurons plus one. Figure 4 shows the WGDOP residual for various numbers of hidden neurons. A hidden layer of moderate size provides reasonably accurate results.

Figure 4: The WGDOP residual with various neurons numbers of the hidden layer.

Based on the optimized neural network structure described above, the Rprop algorithm can be applied to predict the WGDOP value after the training period. Figures 5 and 6 show the mean-square error (MSE) of WGDOP over time when using BPNN and Rprop, respectively. Rprop was used to train the neural networks and was shown to converge faster than BPNN, offering much shorter convergence time. The proposed Rprop algorithm with all six mapping types yields smaller MSE within a very short time.

Figure 5: Behavior of the average learning time for BPNN and Rprop for Types 1, 3, and 5.
Figure 6: Mean-squared error over time for BPNN and Rprop for Types 2, 4, and 6.

From Figures 7 and 8, one can see the cumulative distribution function (CDF) curves for the six types of mapping architectures based on BPNN and Rprop. Rprop requires fewer epochs than traditional BPNN. Even Rprop-based Type 5 with 1,000 epochs outperforms BPNN-based Types 1, 3, and 5 with 2,000 epochs. We found that even with only 500 input-output patterns as training data, the proposed Rprop-based Type 6 still works better than BPNN-based Types 2, 4, and 6 with 2,000 epochs. These results are very promising and confirm the quality of the Rprop algorithm with respect to both convergence time and the number of epochs.

Figure 7: Comparison of WGDOP residual CDFs based on BPNN and Rprop for Types 1, 3, and 5.
Figure 8: The CDF of WGDOP residual of BPNN and Rprop for Types 2, 4, and 6.

We observe that the Rprop-based method outperforms the BPNN-based one: Rprop yields more accurate estimates of the average WGDOP residual than BPNN with a learning rate of 0.01. Having thus demonstrated the superior WGDOP precision of Rprop, we adopt it for the WGDOP estimation algorithm proposed in this paper.

Figure 9 shows CDFs of the WGDOP residual for the six types of mapping architectures after 2000 training iterations, with the number of hidden neurons set according to the number of input neurons. WGDOP is equal to the square root of the sum of the inverse eigenvalues 1/λ_i, which are the outputs of the three-output architectures. We can see that the one-output architectures approximate WGDOP with much better accuracy than the three-output architectures. The Type 1 mapping architecture predicts the inverse eigenvalues and then obtains the WGDOP value, with poor accuracy. The results show that the proposed Type 5 yields the best performance among the three-output architectures, and the proposed Type 6 provides much better accuracy than all the other one-output architectures. The proposed Type 6 renders superior performance even with fewer hidden neurons and only 500 epochs, compared with other architectures using more neurons and 2000 epochs. In order to minimize the computational load, we use the proposed Type 6 with these parameters in subsequent simulations, because it offers satisfactory prediction performance.

Figure 9: CDFs of the WGDOP residual for the six types of mapping architectures.

With the minimum-WGDOP algorithm, the MS location can be estimated by the linear lines of position (LOP) algorithm [28] and by the distance-weighted and threshold methods that we have proposed in [29, 30]. Figure 10 shows the CDF of the average location error of these methods with different subsets. The simulation results show that using subsets of the serving BS combined with three other BSs yields performance comparable to selecting four measurements from all seven BSs. The former has far fewer subsets to evaluate, that is, 20 instead of 35; thus it requires greatly reduced computation. With the proposed BS selection criterion, the Rprop-based WGDOP approximation method and the matrix inversion method provide nearly identical MS location estimates.

Figure 10: CDFs of the location error for various methods.

One can see in Figure 11 that four randomly selected BSs with poor geometry yield extremely bad location estimates, and that location accuracy can be critically affected by the relative geometry between the BSs and the MS. In contrast, our proposed BS selection criterion always provides much better location estimates than random subsets of four BSs chosen from the seven. The simulation results show that the positioning precision using all seven BSs is only slightly better than that of the proposed BS selection criterion. Selecting the four BSs with the best geometry is generally sufficient to render a substantial decrease in the positioning error.

Figure 11: Comparison of location error CDFs using all seven BSs, the subset with minimum approximated WGDOP, and a subset of four randomly selected BSs.

To enhance the performance of location estimation, we feed the first few higher-precision MS locations, corresponding to the smallest WGDOP values, to the next Rprop. Based on the same hidden-layer structure and 500 training iterations, we use only one candidate point and still achieve better performance, as shown in Figure 12. Higher MS location accuracy is obtained when more candidate points are taken into consideration. Our simulations show that the proposed location methods with just two candidate points can even outperform those using all seven BSs. Therefore, it is sufficient to select the first two higher-precision candidate points.

Figure 12: CDFs of the location error of using the candidate points based on Rprop.

7. Conclusion

This paper presents novel Rprop-based architectures for both WGDOP approximation and location estimation. The proposed architectures can be applied to GPS, WSN, and cellular communication systems. Since the WGDOP index depends on both a priori NLOS information and BS geometry, the reciprocal of the square root of an upper bound of the NLOS errors is set as the weight coefficient. The architectures of traditional BPNN for approximating GDOP are extended to WGDOP based on Rprop. The results show that the proposed architectures for predicting WGDOP yield much improved accuracy and significantly reduced computation. We also propose to combine the serving BS with three other measurements, thereby reducing the number of possible subsets while achieving comparable performance; the proposed Rprop-based architectures perform as robustly as the matrix inversion method. To improve the location accuracy further, we employ another Rprop network that takes as inputs the higher-precision MS locations of the first several minimum-WGDOP subsets to determine the final MS location estimate. In general, the matrix inversion method using all seven BSs gives good MS location estimates; however, the simulation results confirm that the proposed methods, employing at most two higher-precision MS locations, perform better than the traditional method using all seven BSs with matrix inversion. In essence, the proposed methods effectively improve MS location estimation and are applicable to all positioning techniques.
