Robust Estimation Methods in the Presence of Extreme Observations
Research Article | Open Access
Stefano Sampaio Suraci, Leonardo Castro de Oliveira, Ivandro Klein, Vinicius Francisco Rofatto, Marcelo Tomio Matsuoka, Sergio Baselga, "Monte Carlo-Based Covariance Matrix of Residuals and Critical Values in Minimum L1-Norm", Mathematical Problems in Engineering, vol. 2021, Article ID 8123493, 9 pages, 2021. https://doi.org/10.1155/2021/8123493
Monte Carlo-Based Covariance Matrix of Residuals and Critical Values in Minimum L1-Norm
Robust estimators often lack a closed-form expression for the computation of their residual covariance matrix, which is a prerequisite for obtaining critical values for normalized residuals. We present an approach based on Monte Carlo simulation to compute the residual covariance matrix and critical values for robust estimators. Although initially designed for robust estimators, the new approach can be extended to other adjustment procedures. In this sense, the proposal was applied to both the well-known minimum L1-norm and least squares in three different leveling network geometries. The results show that (1) the covariance matrix of residuals changes along with the estimator; (2) critical values for minimum L1-norm based on a false positive rate cannot be derived from well-known test distributions; (3) in contrast to critical values for extreme normalized residuals in least squares, critical values for minimum L1-norm do not necessarily tend to be higher as network redundancy increases.
1. Introduction

The least-squares (LS) estimator, also known as L2-norm minimization, is the standard method in surveying data processing. In the absence of outliers, it is the best linear unbiased estimator of the unknown parameters. It also provides the maximum likelihood solution if the observational errors are normally distributed. LS minimizes the sum of the squares of the residuals v, weighted by the weight matrix of observations P, that is,

min vᵀPv. (1)
However, if there are outliers in the sample, LS will provide biased parameters [2, 3]. Here, we follow the definition of Lehmann: “an outlier is an observation that is so probably caused by a gross error that it is better not used or not used as it is.” In surveying engineering, statistical testing procedures are commonly applied to deal with data possibly contaminated by outliers. This idea goes back to the pioneering work of Baarda [5, 6], who introduced the Data Snooping (DS) procedure in order to detect outliers in a geodetic network. The issue of outlier detection in surveying engineering has been widely explored in the literature (see, e.g., [7–13]), and a conceptual analysis of measurement errors and outliers in geodetic networks is also available.
The iterative approach of DS, also known as Iterative Data Snooping (IDS), is the most well-established outlier identification method in geodetic networks; reviews of DS and its variations are available in the literature. In the IDS procedure, every observation i is tested against outliers by computing its normalized residual wi, i.e., the ratio between its LS residual vi and its LS residual standard deviation σvi:

wi = vi/σvi. (2)
The ith observation with the extreme (highest) normalized LS residual (max |wi|) is compared to its critical value |Zα/2|, which is generally taken from the normal statistical table (α being the user-defined significance level of the test). If max |wi| > |Zα/2|, then the respective observation is flagged as an outlier and usually excluded from the observation set. The same procedure is repeated iteratively until no observation is flagged as an outlier. It is worth mentioning that (2) is a simplification valid for scenarios of uncorrelated observations, an assumption adopted throughout this work.
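As a minimal sketch (hypothetical code, not the authors' implementation), the IDS loop for uncorrelated observations can be written as follows; the function name and the default critical value of 3.29 are assumptions for illustration:

```python
import numpy as np

def iterative_data_snooping(A, l, sigma_l, critical_value=3.29):
    """Hypothetical IDS sketch for uncorrelated observations.
    A: design matrix, l: reduced observations, sigma_l: observation
    standard deviations. Returns the indices flagged as outliers."""
    idx = np.arange(len(l))              # surviving observation indices
    flagged = []
    while len(idx) > A.shape[1]:         # stop once redundancy is exhausted
        Ai, li, si = A[idx], l[idx], sigma_l[idx]
        P = np.diag(1.0 / si**2)         # weight matrix (uncorrelated case)
        N_inv = np.linalg.inv(Ai.T @ P @ Ai)
        x = N_inv @ Ai.T @ P @ li        # LS estimate, equation (6)
        v = Ai @ x - li                  # LS residuals
        # residual covariance: Sigma_v = Sigma_L - A Sigma_x A^T
        Sv = np.diag(si**2) - Ai @ N_inv @ Ai.T
        w = v / np.sqrt(np.diag(Sv))     # normalized residuals, equation (2)
        k = int(np.argmax(np.abs(w)))
        if abs(w[k]) <= critical_value:
            break                        # no observation flagged: done
        flagged.append(int(idx[k]))      # flag and exclude the extreme one
        idx = np.delete(idx, k)
    return flagged
```

On a toy network in which one observation carries a large discrepancy, the loop flags exactly that observation and then stops.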
However, IDS involves not a single test but multiple hypothesis testing. In that case, the “false positive rate” (type I error probability or significance level α) is the rate of experiments in which at least one observation is flagged as an outlier when, in fact, there is none. In this context, Lehmann showed that the critical values cannot be derived from well-known univariate test distributions (e.g., the normal distribution), due to the correlations between residuals. Hence, in order to fully control false positive rates in geodetic networks, critical values for IDS started to be computed numerically by means of Monte Carlo simulation (MCS) (see, e.g., [15, 16]).
As mentioned, however, IDS is based on LS residuals, which are very “sensitive” to outliers. On the other hand, minimum L1-norm is one of the standard robust estimation methods in geodesy. The estimator that minimizes the L1-norm is more “resistant,” or “robust,” to outliers, as they tend to be almost completely projected onto the corresponding residuals. Numerical examples can be found in [19, 20]. Testing against outliers by computing normalized residuals is also useful in minimum L1-norm [18, 21].
Minimum L1-norm seeks the minimization of the sum of the weighted absolute residuals, that is,

min pᵀ|v|, (3)

where p is the weight vector of the uncorrelated observations and |v| is the vector of absolute residuals. In particular, minimum L1-norm is likely to provide a higher outlier identification success rate than IDS for low-redundancy networks [15, 22].
The minimum L1-norm solution may not be unique, and its vector of residuals in geodetic networks tends to be sparse, with many residuals equal to zero (see, e.g., [20, 24]). This means that the corresponding observations are accepted as “perfect” observations, without any measurement errors. Geodetic observations, however, always contain (at least) random errors. Hence, such an assumption of “perfect” observations is disconnected from the physical reality of geodetic networks. Besides, the estimator that minimizes the L1-norm is biased except in some particular cases. Therefore, the final estimation of a network should always be performed by LS, even if minimum L1-norm was applied to identify outliers [8, 27].
Minimum L1-norm has no direct analytical solution and needs to be solved by numerical methods. This work focuses on the solution by the simplex method of linear programming, the most widely used approach for solving minimum L1-norm. In geodetic networks, it has already been applied by many authors (see, e.g., [19, 20, 29–31]).
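After splitting the residuals v = vp − vm into nonnegative parts, the minimization of (3) becomes a linear program. A minimal sketch using SciPy's `linprog` follows (the paper used the simplex method in Octave; SciPy's HiGHS solver stands in here, and the function name is an assumption):

```python
import numpy as np
from scipy.optimize import linprog

def l1_adjustment(A, l, p):
    """Sketch of minimum L1-norm adjustment as a linear program,
    assuming uncorrelated observations with weight vector p.
    The residuals v = A x - l are split as v = vp - vm, vp, vm >= 0,
    so the objective p^T |v| becomes the linear p^T (vp + vm)."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), p, p])        # minimize p^T (vp + vm)
    A_eq = np.hstack([A, -np.eye(m), np.eye(m)])   # A x - vp + vm = l
    bounds = [(None, None)] * n + [(0, None)] * (2 * m)
    res = linprog(c, A_eq=A_eq, b_eq=l, bounds=bounds)
    x = res.x[:n]
    v = res.x[n:n + m] - res.x[n + m:]             # v = vp - vm = A x - l
    return x, v
```

On a small network with one discrepant observation, the L1 solution projects the discrepancy almost entirely into that observation's residual, leaving the others at zero.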
Another technique commonly employed to solve minimum L1-norm in geodetic networks is iteratively reweighted least squares (IRLS) (see, e.g., [24, 32–34]). It has also been applied in deformation analysis of geodetic networks [35–37]. However, it is worth mentioning that IRLS does not always seem to be reliable, as it is a “local” estimator and may produce unacceptable solutions if it gets stuck in a local optimum.
Other techniques that have been used to compute minimum L1-norm in geodetic networks include simulated annealing, genetic algorithms [39, 40], and linear programming by an interior point method. Solutions of minimum L1-norm by these methods and by IRLS are outside the scope of this paper.
To actually identify outliers based on minimum L1-norm results, geodesists have already tried two different criteria: (1) the ratio between each residual vi and the respective observation standard deviation σi as the test statistic (see, e.g., [15, 41]); and (2) the normalized residual (equation (2), considering vi and σvi from minimum L1-norm) as the test statistic (see, e.g., [21, 42]). For both criteria, the chosen critical values (beyond which the respective observation is classified as an outlier) were, in general, common values taken from the univariate normal statistical table, such as |Zα/2| = 3.00 (α = 0.27%) and |Zα/2| = 3.29 (α = 0.1%), which, as mentioned, are not appropriate even for IDS. It thus remains unclear how to properly choose critical values in minimum L1-norm.
In this context, this paper has three main contributions: (1) it provides a new procedure to compute critical values for normalized residuals in robust estimation based on MCS control of the false positive rate; (2) it serves as a method to compare different quality control procedures by setting their critical values to the same false positive rate; and (3) it provides a Monte Carlo approach to compute the covariance matrix of residuals Ʃv for any adjustment procedure, which is, indeed, a prerequisite for computing critical values for residuals.
The outline of this paper is as follows. First, we present the new approach to estimate Ʃv by means of MCS. We highlight that this technique can be applied to any adjustment procedure, including LS, the estimator that minimizes the L1-norm, and other robust estimators. Then, also by MCS, we present a procedure to compute critical values for normalized residuals in robust estimation based on the control of the false positive rate (unprecedented in geodesy). Experiments were conducted on three different leveling networks, focusing on the minimum L1-norm solution by the simplex method and on the comparison with LS/IDS.
1.1. Covariance Matrix of Residuals by Means of MCS
Given a geodetic network with m observations and n unknown parameters, its mathematical model may be defined by equations (4) and (5), with A (of dimension m × n) being the “design” matrix with the coefficients of the parameters vector x (n × 1) and l (m × 1) the vector of reduced observations:

l + v = Ax. (4)

In the stochastic model (equation (5)), σ0² is the variance factor and ƩL (m × m) is the covariance matrix of observations:

ƩL = σ0²P⁻¹. (5)

As mentioned, v (m × 1) is the residual vector and P (m × m) is the (symmetric positive-definite) weight matrix of observations. The solution for x produced by LS is given by equation (6):

x = (AᵀPA)⁻¹AᵀPl. (6)
Hence, applying the general law of propagation of variances, the covariance matrix of the parameters Ʃx is expressed by

Ʃx = σ0²(AᵀPA)⁻¹. (7)
Assuming l̂ = Ax to be the vector of adjusted observations, its covariance matrix ƩLa is

ƩLa = AƩxAᵀ. (8)

Since v = l̂ − l, the covariance matrix of residuals Ʃv follows as

Ʃv = ƩL − ƩLa. (9)
In matrix Ʃv, the element in position (i, j) represents the covariance between the residuals of observations i and j, and element (i, i) represents the variance of the ith observational residual. Hence, regardless of the adjustment procedure, once its Ʃv is obtained, the normalized residual wi of each observation (equation (2)) can easily be calculated as wi = vi/√(σ²vi), with σ²vi being the ith element of the main diagonal of Ʃv.
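For LS, this chain of propagation can be written compactly. The sketch below assumes the variance factor equals 1 so that P = ƩL⁻¹; the function names are illustrative, not the authors' code:

```python
import numpy as np

def ls_residual_covariance(A, Sigma_L):
    """Analytical LS residual covariance (equation (9) sketch),
    taking the variance factor as 1 so that P = Sigma_L^-1."""
    P = np.linalg.inv(Sigma_L)
    Sigma_x = np.linalg.inv(A.T @ P @ A)    # covariance of the parameters
    Sigma_La = A @ Sigma_x @ A.T            # covariance of adjusted observations
    return Sigma_L - Sigma_La               # Sigma_v = Sigma_L - Sigma_La

def normalized_residuals(v, Sigma_v):
    """Equation (2): w_i = v_i / sqrt(Sigma_v[i, i])."""
    return v / np.sqrt(np.diag(Sigma_v))
```

Once Ʃv is available, a single division per observation yields the normalized residuals tested in IDS.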
For other adjustment procedures, however, the computation of Ʃv is not so immediate. Indeed, some robust estimators even lack a closed-form expression for Ʃv in the literature. In order to address this issue, which is also a prerequisite for calculating critical values for normalized residuals in robust estimation, we propose the following approach (Figure 1) to compute Ʃv for any adjustment procedure by means of MCS. The inputs are the functional and stochastic models (matrices A and ƩL) of the geodetic network and the chosen adjustment procedure (e.g., LS or minimum L1-norm).

(1) Synthetically generate M = 200,000 vectors of (pseudo-)random, normally distributed errors of observations ek, with expected mean μ = 0 and covariance matrix ƩL, i.e., ek ~ N(0, ƩL); this number of MCS trials was suggested by Rofatto et al.

(2) For each MCS trial k, add ek to the error-free observations and compute the respective residual vector vk by performing the chosen adjustment procedure; each vk will then have m elements vik, i = 1, …, m, with vik being the residual of the ith observation in the kth MCS trial.

(3) Considering the average v̄i of the residuals of the ith observation over all MCS trials, compute the variance of each residual (equation (10)) and the covariance of each pair of residuals (equation (11)):

σ²vi = (1/M) Σk (vik − v̄i)², (10)

σvivj = (1/M) Σk (vik − v̄i)(vjk − v̄j). (11)

(4) Assemble the estimated Ʃv by placing the variance of each ith residual in the ith element of the main diagonal and each covariance in its corresponding position (i, j).
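The four steps above can be sketched as follows (a minimal illustration, assuming an `adjust` callback that returns the residual vector of one trial; the callback name and the seed are assumptions). Since LS residuals are linear in the errors, for LS one may pass e ↦ (A(AᵀPA)⁻¹AᵀP − I)e:

```python
import numpy as np

def mcs_residual_covariance(A, Sigma_L, adjust, M=200_000, seed=42):
    """Sketch of the Figure 1 procedure. `adjust` maps one simulated
    observation-error vector to its residual vector under the chosen
    adjustment procedure (LS, minimum L1-norm, ...)."""
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    # step (1): M pseudo-random error vectors e_k ~ N(0, Sigma_L)
    errors = rng.multivariate_normal(np.zeros(m), Sigma_L, size=M)
    # step (2): residual vector of each trial
    V = np.array([adjust(e) for e in errors])
    # steps (3)-(4): sample variances/covariances over the M trials
    return np.cov(V, rowvar=False, bias=True)
```

With the LS callback, the Monte Carlo estimate reproduces the analytical Ʃv up to sampling noise, which is the validation reported later for Ʃv(LS-MCS) versus Ʃv(LS-A).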
Regarding Ʃv in minimum L1-norm, some authors (see, e.g., [8, 21]) derived variances for the subset of nonzero residuals, and Junhuan presented an expression valid for the IRLS solution. On the other hand, our MCS method computes all variances and covariances and is more general, applying not only to any minimum L1-norm solution but also to any adjustment procedure.
1.2. Critical Values Based on False Positive Rates
Recently, several works (see, e.g., [9, 15, 16]) have investigated critical values for normalized residuals in IDS based on MCS control of the false positive rate. Motivated by these works, we propose the following procedure for any robust estimator (Figure 2). In fact, the method could be applied to any adjustment procedure, but it is clearly most useful for robust estimators in the process of outlier identification. The inputs are the matrices A and ƩL of the geodetic network, the desired α (false positive rate), and the robust estimator for which critical values of the normalized residual will be estimated.

(1) Compute Ʃv of the geodetic network by the procedure of Figure 1, considering the selected robust estimator.

(2) Synthetically generate M = 200,000 vectors of (pseudo-)random, normally (or otherwise) distributed errors of observations ek, with expected mean μ = 0 and covariance matrix ƩL, i.e., ek ~ N(0, ƩL); in order to avoid any kind of bias, these new vectors should be different from the vectors of step (1). Random errors in geodetic observations generally follow normal distributions; however, an advantage of an MCS approach is that it may be applied to other error distributions, as can be seen in the work of Lehmann.

(3) Compute the max-|wi| (i.e., the maximum absolute normalized residual) of each MCS trial.

(4) Order the set of all max-|wi| values in ascending order.

(5) The critical value will be the one in position (1 − α)M of the ordered set.
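The steps above amount to taking an empirical quantile of the max-|wi| distribution under outlier-free trials. A minimal sketch (again with an assumed `adjust` callback and an illustrative seed, so the new trials differ from those used for Ʃv):

```python
import numpy as np

def mcs_critical_value(Sigma_L, adjust, Sigma_v, alpha, M=200_000, seed=7):
    """Sketch of the Figure 2 procedure: Monte Carlo critical value for
    max |w_i| controlling the family-wise false positive rate alpha."""
    rng = np.random.default_rng(seed)
    m = Sigma_L.shape[0]
    sd = np.sqrt(np.diag(Sigma_v))
    # step (2): fresh outlier-free error vectors e_k ~ N(0, Sigma_L)
    errors = rng.multivariate_normal(np.zeros(m), Sigma_L, size=M)
    # step (3): maximum absolute normalized residual of each trial
    max_w = np.array([np.max(np.abs(adjust(e) / sd)) for e in errors])
    # steps (4)-(5): the (1 - alpha) empirical quantile of the ordered set
    return np.quantile(max_w, 1.0 - alpha)
```

Because the test statistic is a maximum over correlated residuals, the resulting value lies between the single-test quantile and the Bonferroni bound, which is why tabulated normal quantiles are inappropriate.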
2. Experiments and Results
Experiments were performed in three simulated leveling networks (Figure 3). Network A consists of one control station, m = 6 observations (height differences), and n = 3 unknowns (station heights). Network B consists of one control station, m = 10 observations, and n = 4 unknowns. Network C consists of one control station, m = 15 observations, and n = 5 unknowns. Therefore, the “mean redundancy number” of network A is rA = (6 − 3)/6 = 0.50, of network B is rB = (10 − 4)/10 = 0.60, and of network C is rC = (15 − 5)/15 ≈ 0.67.
For all networks, the standard deviation of each observation was given by σi = σ0√di, where di (in km) is the length of the respective leveling line and σ0 is the standard deviation per square root of kilometer. In ascending order of the observation index, the lengths (in km) of the leveling lines were as follows: for network A, [42, 38, 27, 22, 23, 33]; for network B, [37, 28, 33, 26, 40, 32, 39, 29, 34, 41]; and for network C, [30, 34, 25, 37, 28, 38, 29, 35, 31, 26, 33, 36, 27, 32, 24]. Hence, for example, the 4th observation of network A, with the shortest line (22 km), has the lowest σi of all networks.
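As a small illustration, the stochastic model and mean redundancy numbers above can be reproduced as follows; the value of σ0 = 1 mm per √km is a hypothetical assumption for illustration, since the text does not restate it:

```python
import numpy as np

def leveling_stochastic_model(lengths_km, sigma0_mm=1.0):
    """Sketch: diagonal Sigma_L (in mm^2) for a leveling network with
    sigma_i = sigma0 * sqrt(d_i). sigma0 = 1 mm/sqrt(km) is a
    hypothetical choice, not a value taken from the text."""
    return np.diag(sigma0_mm**2 * np.asarray(lengths_km, dtype=float))

def mean_redundancy(m, n):
    """Mean redundancy number r = (m - n) / m."""
    return (m - n) / m
```

With these helpers, the three network geometries differ only in their length vectors and (m, n) pairs.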
Minimum L1-norm adjustments were performed by the simplex method of linear programming. Normally distributed pseudorandom numbers were generated with the Mersenne Twister algorithm and transformed from uniform to normal distribution by the ziggurat technique. All experiments were performed in Octave.
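For readers working outside Octave, a comparable pseudorandom setup is available in NumPy (an illustrative aside, not the authors' code): the Mersenne Twister is exposed as a bit generator, and `Generator.standard_normal` draws normal variates with a ziggurat-type algorithm.

```python
import numpy as np

# Mersenne Twister bit generator wrapped in a NumPy Generator; the seed
# value 2021 is arbitrary, chosen only to make the draws reproducible.
rng = np.random.Generator(np.random.MT19937(seed=2021))
errors = rng.standard_normal(5)
```

Seeding the bit generator makes every MCS experiment exactly repeatable, which helps when validating the estimated Ʃv against an analytical reference.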
At first, we computed Ʃv for the three networks in three different ways: (1) Ʃv(LS-A) for the LS adjustment by its (well-established) analytical formulation (equation (9)); (2) Ʃv(LS-MCS) for the LS adjustment, performing the new approach we presented by means of MCS; and (3) Ʃv(L1-MCS) for minimum L1-norm adjustment (simplex solution), performing the new approach we presented by means of MCS. Table 1 presents the elements (rounded to one decimal place) of these matrices computed for network A. Respective matrices for networks A, B, and C (rounded to three decimal places) can be found in the appendix.
Then, we applied the new procedure to compute critical values for normalized residuals by MCS, based on the false positive rate (significance level α), in minimum L1-norm (simplex solution). For comparison and further analysis, Table 2 also presents critical values for IDS obtained with the same procedure, as well as critical values from the normal distribution statistical table.
Although the use of MCS may seem computationally expensive, today this is no longer an obstacle, even for personal computers. The whole process of computing minimum L1-norm critical values, which includes the estimation of the respective Ʃv(L1-MCS) (Figure 2), took approximately 14, 21, and 29 minutes (for networks A, B, and C, respectively) using an Intel Core i5 2.50 GHz processor with 4 GB of RAM.
3. Discussion

The first issue to be addressed is the covariance matrix of residuals for networks A, B, and C. Table 3 presents the maximum, minimum, and average differences of the variances (elements of the main diagonal) and covariances (elements outside the main diagonal) between matrices Ʃv(LS-MCS) and Ʃv(LS-A) and between matrices Ʃv(LS-MCS) and Ʃv(L1-MCS).
Ʃv(LS-A) and Ʃv(LS-MCS) had very close values for the three networks. The differences between corresponding elements were always less than 0.300 mm² (less than 0.100 mm² for more than 75% of all network elements), and the average differences were less than 0.060 mm² for all networks. These differences are very low relative to the smallest observation variance (that of the 4th observation of network A). Hence, this result validates the proposed strategy to compute Ʃv based on MCS.
On the other hand, the elements of Ʃv(L1-MCS) were clearly very different from those of Ʃv(LS-MCS) (and Ʃv(LS-A)). The maximum variance differences (in mm²) were 11.703, 24.525, and 6.606 for networks A, B, and C, respectively, and the average variance differences were always higher than 4.900 mm². Hence, as expected, using Ʃv(LS-A) is not appropriate in the context of minimum L1-norm.
As expected, the critical values for normalized residuals in IDS and minimum L1-norm presented in Table 2 were always different from the critical values obtained from the univariate normal distribution table. Moreover, the minimum L1-norm critical values were always higher than the IDS ones. This highlights the importance of properly controlling the false positive rate by MCS, as proposed in this research.
Finally, we also note that the critical values for both minimum L1-norm and IDS vary across networks. Although IDS critical values tend to increase with network redundancy, as already shown in previous work, the same cannot be claimed for minimum L1-norm. Hence, this issue may be a subject for further investigation.
4. Concluding Remarks
In this work, we developed and presented an approach based on MCS to compute the covariance matrix of residuals and critical values for normalized residuals in any adjustment procedure. Since the LS method has a well-established analytical expression for the covariance matrix of residuals, our MCS strategy to estimate it was first applied to LS. We found that the differences in the respective elements between our strategy and the analytical formulation were negligible, which validates our approach.
Numerical results of the whole procedure for computing critical values, which includes the estimation of the respective residual covariance matrix, were presented for three leveling networks, with minimum L1-norm solved by the simplex method of linear programming and compared to LS (for the covariance matrix of residuals) and to IDS based on LS results (for critical values). In this sense, we highlight that, as mentioned, the minimum L1-norm solution may not be unique. Hence, the conclusions of this paper are tied to the simplex solution of minimum L1-norm.
We have shown that the covariance matrix of residuals may change along with the adjustment procedure (in our case, from LS to minimum L1-norm). Therefore, since robust estimators generally do not have a well-established expression for the covariance matrix of residuals, the approach presented herein for any adjustment procedure (including robust estimators) is a valuable strategy.
Surveyors cannot rely on critical values from the univariate normal distribution for either IDS or minimum L1-norm. Moreover, critical values vary even among robust estimators. However, unlike in IDS, the critical values in minimum L1-norm do not necessarily tend to increase with network redundancy. Hence, the main contribution of this work is the proposed Monte Carlo-based computation of critical values to control the false positive rate for normalized residuals of robust estimators.
Future research should apply this proposal to provide a fair comparison among different quality control procedures under the same false positive rate. Furthermore, one can investigate the effects of the chosen false positive rate on the probability levels of the classes of errors in outlier identification associated with robust estimators, i.e., type II error, type III error, overidentification (positive and negative), and statistical overlap.
The proposed approach for computing the residuals covariance matrix can be extended to covariance matrices other than that of the residuals in future works. One can, e.g., compute the network parameters in each MCS trial and then compute the parameter covariance matrix for the chosen adjustment procedure.
The relationship between network redundancy and critical values for normalized residuals in robust estimation also needs further investigation. Besides, the new approach for the computation of the covariance matrix of residuals and for the estimation of critical values for normalized residuals described here should be applied for other robust estimators and other types of geodetic networks, such as Global Navigation Satellite System (GNSS) networks.
Data Availability

All codes that support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments

This work was supported by the Department of Science and Technology of the Brazilian Army. The authors would like to thank the research group “Controle de Qualidade e Inteligência Computacional em Geodesia” (dgp.cnpq.br/dgp/espelhogrupo/0178611310347329).
References

- P. J. G. Teunissen, “Distributional theory for the DIA method,” Journal of Geodesy, vol. 92, pp. 59–80, 2018.
- P. J. Huber and E. M. Ronchetti, “Robust statistics,” Wiley Series in Probability and Statistics, John Wiley & Sons, Hoboken, NJ, USA, 2nd edition, 2009.
- I. E. Koch, I. Klein, L. Gonzaga Jr., M. T. Matsuoka, V. F. Rofatto, and M. R. Veronez, “Robust estimators in geodetic networks based on a new metaheuristic: independent vortices search,” Sensors, vol. 19, no. 20, p. 4535, 2019.
- R. Lehmann, “On the formulation of the alternative hypothesis for geodetic outlier detection,” Journal of Geodesy, vol. 87, pp. 373–386, 2013.
- W. Baarda, Statistical Concepts in Geodesy, Netherlands Geodetic Commission, Delft, Netherlands, 1967.
- W. Baarda, A Testing Procedure for Use in Geodetic Networks, Netherlands Geodetic Commission, Delft, Netherlands, 1968.
- S. Baselga, “Nonexistence of rigorous tests for multiple outlierdetection in least-squares adjustment,” Journal of Surveying Engineering, vol. 137, no. 3, pp. 109–112, 2011.
- Y. Gao, E. J. Krakiwsky, and J. Czompo, “Robust testing procedure for detection of multiple blunders,” Journal of Surveying Engineering, vol. 118, no. 1, pp. 11–23, 1992.
- R. Lehmann, “Improved critical values for extreme normalized and studentized residuals in Gauss–Markov models,” Journal of Geodesy, vol. 86, no. 12, pp. 1137–1146, 2012.
- R. Lehmann and M. Lösler, “Multiple outlier detection: hypothesis tests versus model selection by information criteria,” Journal of Surveying Engineering, vol. 142, no. 4, Article ID 04016017, 2016.
- A. J. Pope, The Statistics of Residuals and the Detection of Outliers, National Oceanic and Atmospheric Administration, Rockville, MD, USA, 1976.
- V. F. Rofatto, M. T. Matsuoka, I. Klein, M. R. Veronez, M. L. Bonimani, and R. Lehmann, “A half-century of Baarda’s concept of reliability: a review, new perspectives, and applications,” Survey Review, vol. 52, no. 372, pp. 261–277, 2020.
- P. J. G. Teunissen, Testing Theory: An Introduction, Delft University Press, Delft, Netherlands, 2nd edition, 2006.
- S. S. Suraci and L. C. Oliveira, “Outlier=gross error? Do only gross errors cause outliers in geodetic networks? Addressing these and other questions,” Bulletin of Geodetic Sciences, vol. 25, Article ID e2019s004, 2020.
- I. Klein, S. S. Suraci, L. C. Oliveira, V. F. Rofatto, M. T. Matsuoka, and S. Baselga, “An attempt to analyse Iterative Data Snooping and L1-norm based on Monte Carlo simulation in the context of leveling networks,” Survey Review, 2021.
- V. F. Rofatto, M. T. Matsuoka, I. Klein, M. R. Veronez, and L. G. da Silveira Jr., “A Monte Carlo-based outlier diagnosis method for sensitivity analysis,” Remote Sensing, vol. 12, no. 5, p. 860, 2020.
- C. Inal, M. Yetkin, S. Bulbul, and B. Bilgen, “Comparison of L1 norm and L2 norm Minimisation methods in trigonometric levelling networks,” Tehnički Vjesnik, vol. 25, no. 1, pp. 216–221, 2018.
- P. Junhuan, “The asymptotic variance–covariance matrix, Baarda test and the reliability of L1-norm estimates,” Journal of Geodesy, vol. 78, pp. 668–682, 2005.
- A. Amiri-Simkooei, “Formulation of L1 norm minimization in Gauss-Markov models,” Journal of Surveying Engineering, vol. 129, no. 1, pp. 37–43, 2003.
- M. Yetkin and C. Inal, “L1 norm minimization in GPS networks,” Survey Review, vol. 43, no. 323, pp. 523–532, 2011.
- A. R. Amiri-Simkooei, “On the use of two L1 norm minimization methods in geodetic networks,” Earth Observation and Geomatics Engineering, vol. 2, no. 1, pp. 1–8, 2018.
- S. S. Suraci and L. C. Oliveira, “Aplicação das normas L1 e L∞ em redes altimétricas: identificação de outliers e construção do modelo estocástico,” Revista Cartográfica, vol. 101, pp. 135–153, 2019.
- N. Abdelmalek and W. Malek, Numerical Linear Approximation in C, CRC Press, London, UK, 2008.
- S. Baselga, I. Klein, S. S. Suraci, L. C. de Oliveira, M. T. Matsuoka, and V. F. Rofatto, “Performance comparison of least squares, iterative and global L1 norm minimization and exhaustive search methods for outlier detection in leveling networks,” Acta Geodynamica et Geomaterialia, vol. 17, no. 4, pp. 425–438, 2020.
- C. D. Ghilani, Adjustment Computations: Spatial Data Analysis, John Wiley & Sons, Hoboken, NJ, USA, 5th edition, 2010.
- R. W. Farebrother, “Unbiased L1 and L∞ estimation,” Communications in Statistics–Theory and Methods, vol. 14, no. 8, pp. 1941–1962, 1985.
- J. Marshall and J. Bethel, “Basic concepts of L1 norm minimization for surveying applications,” Journal of Surveying Engineering, vol. 122, no. 4, pp. 168–179, 1996.
- G. Dantzig, Linear Programming and Extensions, Princeton University Press, Princeton, NJ, USA, 1963.
- S. Gašincová and J. Gašinec, “Comparison of the method of least squares and the simplex method for processing geodetic survey results,” GeoScience Engineering, vol. 59, no. 3, pp. 21–35, 2013.
- J. Marshall, “L1-norm pre-analysis measures for geodetic networks,” Journal of Geodesy, vol. 76, pp. 334–344, 2002.
- S. S. Suraci, L. C. de Oliveira, and I. Klein, “Two aspects on L1-norm adjustment of leveling networks,” Revista Brasileira de Cartografia, vol. 71, no. 2, pp. 486–500, 2019.
- R. C. Erenoglu and S. Hekimoglu, “Efficiency of robust methods and tests for outliers for geodetic adjustment models,” Acta Geodaetica et Geophysica Hungarica, vol. 45, no. 4, pp. 426–439, 2010.
- S. Hekimoglu, “Do robust methods identify outliers more reliably than conventional tests for outliers?” Zeitschrift Fuer Vermessungswesen, vol. 3, pp. 174–180, 2005.
- Y. Sisman, “Outlier measurement analysis with the robust estimation,” Scientific Research and Essays, vol. 5, no. 7, pp. 668–678, 2010.
- A. R. Amiri-Simkooei, S. M. Alaei-Tabatabaei, F. Zangeneh-Nejad, and B. Voosoghi, “Stability analysis of deformation-monitoring network points using simultaneous observation adjustment of two epochs,” Journal of Surveying Engineering, vol. 143, no. 1, Article ID 04016020, 2017.
- M. Eshagh, L. Sjöberg, and R. Kiamehr, “Evaluation of robust techniques in suppressing the impact of outliers in a deformation monitoring network-a case study on the Tehran Milad tower network,” Acta Geodaetica et Geophysica Hungarica, vol. 42, no. 4, pp. 449–463, 2007.
- K. Nowel, “Application of Monte Carlo method to statistical testing in deformation analysis based on robust M-estimation,” Survey Review, vol. 48, no. 348, pp. 212–223, 2016.
- A. Khodabandeh and A. R. Amiri-Simkooei, “Recursive algorithm for L1 norm estimation in linear models,” Journal of Surveying Engineering, vol. 137, no. 1, pp. 1–8, 2011.
- S. Baselga, “Global optimization solution of robust estimation,” Journal of Surveying Engineering, vol. 133, no. 3, pp. 123–128, 2007.
- J. L. Berné and S. Baselga, “Robust estimation in geodetic networks,” Física de la Tierra, vol. 17, pp. 7–22, 2005.
- S. Hekimoglu and R. C. Erenoglu, “Effect of heteroscedasticity and heterogeneousness on outlier detection for geodetic networks,” Journal of Geodesy, vol. 81, pp. 137–148, 2007.
- C. R. Schwarz and J. J. Kok, “Blunder detection and data snooping in LS and robust adjustments,” Journal of Surveying Engineering, vol. 119, no. 4, pp. 127–136, 1993.
- I. Klein, M. T. Matsuoka, M. P. Guzatto, F. G. Nievinski, M. R. Veronez, and V. F. Rofatto, “A new relationship between the quality criteria for geodetic networks,” Journal of Geodesy, vol. 93, no. 4, pp. 529–544, 2019.
- M. Matsumoto and T. Nishimura, “Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator,” ACM Transactions on Modeling and Computer Simulation, vol. 8, pp. 3–30, 1998.
- G. Marsaglia and W. W. Tsang, “The ziggurat method for generating random variables,” Journal of Statistical Software, vol. 5, no. 8, 2000.
Copyright © 2021 Stefano Sampaio Suraci et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.