Research Article  Open Access
Measurement Data Fitting Based on Moving Least Squares Method
Abstract
For the postprocessing of electromagnetic field measurement data, this paper introduces the moving least squares (MLS) approximation method. The MLS combines the concepts of a moving window and compactly supported weighting functions, and can be regarded as a combination of weighted least squares and segmented least squares. The MLS not only acquires higher precision even with low-order basis functions, but also has good stability owing to its local approximation scheme. An attractive property of MLS is its flexible adjustability: the data fitting can easily be tuned through the weighting function's parameters. Numerical examples and measurement data processing reveal its superior performance in curve fitting and surface construction, so the MLS is a promising method for measurement data processing.
1. Introduction
The measurement data of the electromagnetic (EM) field play a key role in EM environment assessment. However, the measurement points are limited, and, in order to describe the EM field distribution more accurately, postprocessing is necessary. Acquiring data at non-measurement points is essentially a typical function approximation or surface construction problem. Owing to instrument errors, environmental interference, or terrain changes, deviations inevitably emerge, so a fitting method is preferred in the postprocessing.
Currently, the least squares (LS) method has been most widely used in data fitting. The commonly used basis functions are polynomials [1], rational functions [2], Gaussians, exponentials, and smoothing splines in curve fitting, and B-splines [3], non-uniform rational B-splines (NURBS) [4], Bézier surfaces [5], and radial basis functions [6] in surface construction. Simultaneously, variants of LS such as RLS (recursive least squares), TLS (total least squares), PLS (partial least squares), WLS (weighted least squares), GLS (generalized least squares), and SLS (segmented least squares) have also been put forward. However, all the above LS-based methods are global approximation schemes, which are not suitable for large amounts of data or irregular, scattered distributions. Hence the moving least squares (MLS) method, a local approximation scheme, is proposed here for measurement data processing.
The MLS approximation was introduced by Lancaster and Salkauskas for surface generation problems [7]. It has been used for surface construction from unorganized point clouds [8], regression in learning theory [9], and sensitivity analysis [10]. However, the major applications of MLS are in forming a variety of meshless methods [11], such as the diffuse element method (DEM) [12], the well-known element-free Galerkin method (EFGM) [13], and the meshless local Petrov-Galerkin method [14]. These methods have high computational precision and stability. A disadvantage of the MLS lies in its algebraic equation system, which is sometimes ill-conditioned, so Cheng and Peng [15] proposed an improved method. The error estimates and stability of MLS [16–19] and variants such as the complex-variable and Hermite forms [20, 21] have been intensively discussed. On the whole, research on MLS approximation theory remains much scarcer than its applications.
As a data fitting method, the MLS can be regarded as a combination of WLS and SLS because of its compactly supported weighting function. Moreover, the moving window introduced in MLS shows superior performance versus SLS. Firstly, the compact support of the weighting function means that only the measurement data near the evaluation point are involved in the calculation, so the MLS inherits the localized treatment of SLS. Secondly, the segmentation in SLS is rigid, which raises the problems of segment selection and fitting discontinuity; the moving window in MLS instead acts as a soft segment, so segment selection is avoided and the continuity and smoothness of the fit are guaranteed. Finally, the weighting function parameters provide a convenient adjustment option for MLS.
Hence, this paper proposes the MLS method for measurement data fitting. The structure of the paper is as follows. In Section 2, a brief description of the MLS approximation is given. The weighting function is discussed in Section 3. In Section 4, numerical examples of curve fitting are carried out. The measurement data fitting for a substation is implemented in Section 5, and conclusions are drawn in Section 6.
2. Moving Least Squares Approximation
In MLS, an arbitrary function $f(x)$ can be approximated by
$$f(x) \approx f^h(x) = \sum_{i=1}^{m} p_i(x)\,a_i(x) = \mathbf{p}^T(x)\,\mathbf{a}(x),$$
where $p_i(x)$, $i = 1, 2, \ldots, m$, are the basis functions, $m$ is the number of terms in the basis, and $a_i(x)$ are the coefficients. The basis functions can be polynomials, Chebyshev polynomials, Legendre polynomials, trigonometric functions, wavelet functions, radial basis functions, and so forth. For example, the one-dimensional polynomial bases have the following forms: the linear basis $\mathbf{p}^T(x) = [1, x]$ with $m = 2$, and the quadratic basis $\mathbf{p}^T(x) = [1, x, x^2]$ with $m = 3$. In the following, we consider one-dimensional curve fitting for demonstration.
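As a minimal illustration (not from the paper), the polynomial basis vector can be evaluated as follows; `poly_basis` is a hypothetical helper name:

```python
import numpy as np

def poly_basis(x, order):
    """Evaluate the 1-D polynomial basis p(x) = [1, x, ..., x^order]."""
    return np.array([x**k for k in range(order + 1)], dtype=float)

print(poly_basis(2.0, 1))  # linear basis [1, x] at x = 2 -> [1. 2.]
print(poly_basis(2.0, 2))  # quadratic basis [1, x, x^2] at x = 2 -> [1. 2. 4.]
```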
The obvious difference between the traditional LS method and MLS lies in the coefficients: in MLS the coefficients vary with $x$, while in LS they are constant. In order to determine the coefficients, a functional similar to that of WLS is defined as
$$J = \sum_{i=1}^{n} w(x - x_i)\left[\mathbf{p}^T(x_i)\,\mathbf{a}(x) - y_i\right]^2,$$
where $(x_i, y_i)$, $i = 1, 2, \ldots, n$, are the given nodes, and $w(x - x_i)$ are the compactly supported weighting functions, which can also be written as $w_i(x)$. The subscript "$i$" means the center of $w_i(x)$ is located at $x_i$. The schematic diagram of the weighted scheme in MLS is shown in Figure 1.
The weight is still imposed on the squared error between the fitted and given values. With respect to WLS, the main difference is the weighting function, which is locally defined in MLS versus globally defined in WLS. Owing to the compact support, only the nodes located in the support domain are involved in the coefficient calculation, which is similar to SLS. The matrix form of (2) can be rewritten as
$$J = \left(\mathbf{P}\mathbf{a}(x) - \mathbf{y}\right)^T \mathbf{W}(x)\left(\mathbf{P}\mathbf{a}(x) - \mathbf{y}\right),$$
where $\mathbf{y} = [y_1, y_2, \ldots, y_n]^T$,
$$\mathbf{P} = \begin{bmatrix} p_1(x_1) & p_2(x_1) & \cdots & p_m(x_1) \\ p_1(x_2) & p_2(x_2) & \cdots & p_m(x_2) \\ \vdots & \vdots & \ddots & \vdots \\ p_1(x_n) & p_2(x_n) & \cdots & p_m(x_n) \end{bmatrix},$$
and $\mathbf{W}(x) = \mathrm{diag}\left[w(x - x_1), w(x - x_2), \ldots, w(x - x_n)\right]$. Minimizing (3) with respect to the coefficients $\mathbf{a}(x)$, the following expression is obtained:
$$\mathbf{A}(x)\,\mathbf{a}(x) = \mathbf{B}(x)\,\mathbf{y}, \qquad \mathbf{A}(x) = \mathbf{P}^T\mathbf{W}(x)\mathbf{P}, \qquad \mathbf{B}(x) = \mathbf{P}^T\mathbf{W}(x).$$
Further, the coefficients are
$$\mathbf{a}(x) = \mathbf{A}^{-1}(x)\,\mathbf{B}(x)\,\mathbf{y},$$
so the approximation becomes $f^h(x) = \mathbf{p}^T(x)\,\mathbf{A}^{-1}(x)\,\mathbf{B}(x)\,\mathbf{y} = \mathbf{\Phi}(x)\,\mathbf{y}$, where $\mathbf{\Phi}(x) = \mathbf{p}^T(x)\,\mathbf{A}^{-1}(x)\,\mathbf{B}(x)$ is known as the MLS shape function, which fundamentally determines the approximation performance. Hence, MLS can be regarded as a combination of WLS and SLS. The process of MLS approximation for the one-dimensional case can be summarized in the following pattern.
Flowchart for MLS Approximation Technique
(1) Give the nodes $(x_i, y_i)$, $i = 1, \ldots, n$.
(2) Select the basis functions and then determine the matrix $\mathbf{P}$ in (4).
(3) Loop over every unknown point $x$ and form the shape function:
(a) Select the weighting function $w(x - x_i)$.
(b) Calculate the weights of $x$ for all nodes and form the diagonal matrix $\mathbf{W}(x)$ in (4).
(c) Obtain the matrices $\mathbf{A}(x)$ and $\mathbf{B}(x)$ using the formulae $\mathbf{A}(x) = \mathbf{P}^T\mathbf{W}(x)\mathbf{P}$ and $\mathbf{B}(x) = \mathbf{P}^T\mathbf{W}(x)$.
(d) Calculate the inverse matrix $\mathbf{A}^{-1}(x)$ by the SVD method.
(e) Form the shape function $\mathbf{\Phi}(x) = \mathbf{p}^T(x)\,\mathbf{A}^{-1}(x)\,\mathbf{B}(x)$.
(4) End the unknown-point loop.
(5) Give the approximation function $f^h(x) = \mathbf{\Phi}(x)\,\mathbf{y}$.
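The flowchart above can be sketched in Python/NumPy. This is a minimal illustrative implementation, not the authors' code; the truncated-Gaussian weight below is one common choice from the MLS literature (an assumption here), and `np.linalg.pinv` provides the SVD-based inversion mentioned in step (d):

```python
import numpy as np

def gauss_weight(r, radius, alpha=3.0):
    """Truncated-Gaussian weight, zero outside |r| > radius (a common
    MLS choice; assumed here, not taken verbatim from the paper)."""
    s = np.abs(r) / radius
    w = (np.exp(-(alpha * s)**2) - np.exp(-alpha**2)) / (1.0 - np.exp(-alpha**2))
    return np.where(s <= 1.0, w, 0.0)

def mls_fit(x_nodes, y_nodes, x_eval, radius, alpha=3.0, order=2):
    """1-D MLS following the flowchart: for each evaluation point, build
    A = P^T W P and B = P^T W, invert A via SVD (np.linalg.pinv), and
    apply the shape function to the nodal values."""
    P = np.vander(x_nodes, order + 1, increasing=True)  # basis at the nodes
    y_fit = np.empty(len(x_eval))
    for j, x in enumerate(x_eval):
        W = np.diag(gauss_weight(x - x_nodes, radius, alpha))  # moving window
        A = P.T @ W @ P
        B = P.T @ W
        a = np.linalg.pinv(A) @ B @ y_nodes           # coefficients a(x)
        p = np.array([x**k for k in range(order + 1)])
        y_fit[j] = p @ a                              # f_h(x) = p^T(x) a(x)
    return y_fit

# sanity check: with a quadratic basis, MLS reproduces quadratic data exactly
x_n = np.linspace(0.0, 1.0, 11)
y_n = 1.0 + 2.0 * x_n + 3.0 * x_n**2
x_e = np.array([0.25, 0.6])
print(mls_fit(x_n, y_n, x_e, radius=0.5))
```

The sanity check relies on the consistency property of MLS: whenever the moment matrix $\mathbf{A}(x)$ is invertible, any function lying in the span of the basis is reproduced exactly.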
3. Weighting Function
The weighting function plays a very important role in MLS. In the previous pattern, the vector $\mathbf{y}$ is given and the matrix $\mathbf{P}$ is predetermined once specific basis functions are selected; therefore neither depends on the unknown point, and both act as constants in forming the shape function. Hence, only the matrix $\mathbf{W}(x)$ is defined on $x$, and the shape function is mainly decided by the weighting function.
The basic requirements for a weighting function are compact support, nonnegativity, and continuity with sufficiently high derivatives, so as to ensure the uniqueness of the coefficients. The compact support characteristic is the essence of MLS. It is obvious from Figure 1 that a relatively large support domain means more nodes are involved in the calculation, approaching the WLS method; when the support domain is decreased, the locality of MLS is enhanced, yet the smoothness declines. The choice of an appropriate support radius depends on the fitting errors, the smoothness, and the problem's own characteristics.
The commonly used weighting functions include the Gaussian, the cubic spline function, and the compactly supported radial basis functions (CSRBF). We focus on the Gaussian and cubic spline functions in this paper. The Gaussian weighting function is
$$w(s) = \begin{cases} \dfrac{e^{-(\alpha s)^2} - e^{-\alpha^2}}{1 - e^{-\alpha^2}}, & s \le 1, \\[2mm] 0, & s > 1, \end{cases}$$
where $s = \|x - x_i\|/r$ is the relative distance, $r$ is the influencing radius, and $\alpha$ is the shape parameter. Because the weighting function is only defined in the influencing domain, it is a compactly supported function. In addition, the weighted fitting square error just acts as a moving window. Hence, we have $w_i(x) = w(\|x - x_i\|/r)$, where the norm can be selected as the Euclidean distance; in two- or three-dimensional cases, the $L_2$ norm or another suitable norm can be used. The cubic spline weighting function is
$$w(s) = \begin{cases} \dfrac{2}{3} - 4s^2 + 4s^3, & s \le \dfrac{1}{2}, \\[2mm] \dfrac{4}{3} - 4s + 4s^2 - \dfrac{4}{3}s^3, & \dfrac{1}{2} < s \le 1, \\[2mm] 0, & s > 1. \end{cases}$$
There are two adjustable parameters, $r$ and $\alpha$, in the Gaussian function, whereas there is only $r$ for the cubic spline; correspondingly, the Gaussian is very flexible for MLS and is adopted in the following discussion. The Gaussian and cubic spline functions are shown in Figure 2, where the cubic spline function is amplified by a factor of 1.5 to facilitate the comparison. Numerical tests show that the Gaussian function is similar to the cubic spline for an appropriate value of $\alpha$.
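For comparison, the two weighting functions can be coded directly on the normalized distance s = |x - x_i|/r. The truncated-Gaussian form below is a common choice in the MLS literature and is assumed here, since the paper's exact expression is not reproduced:

```python
import numpy as np

def gaussian_weight(s, alpha=3.0):
    """Truncated Gaussian on the normalized distance s = |x - x_i| / r
    (shape parameter alpha); zero outside the support s > 1."""
    s = np.abs(s)
    w = (np.exp(-(alpha * s)**2) - np.exp(-alpha**2)) / (1.0 - np.exp(-alpha**2))
    return np.where(s <= 1.0, w, 0.0)

def cubic_spline_weight(s):
    """Cubic spline weight on s = |x - x_i| / r; piecewise cubic,
    zero outside the support s > 1."""
    s = np.abs(s)
    w = np.where(s <= 0.5,
                 2.0/3.0 - 4.0*s**2 + 4.0*s**3,
                 4.0/3.0 - 4.0*s + 4.0*s**2 - (4.0/3.0)*s**3)
    return np.where(s <= 1.0, w, 0.0)

# both weights decay monotonically and vanish at the support boundary s = 1
s = np.linspace(0.0, 1.0, 5)
print(gaussian_weight(s))
print(cubic_spline_weight(s))
```

Note the difference at the center: the truncated Gaussian equals 1 at s = 0, while the cubic spline equals 2/3, which is why the paper rescales the latter by 1.5 for the visual comparison in Figure 2.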
Consequently, we can conclude the following for MLS.
(1) For basis function selection, linear, quadratic, or higher-order polynomials are the candidates. As the polynomial order increases, a smoother fit is obtained; however, the computational cost increases significantly and may even lead to ill-conditioned problems. Therefore, in two- or three-dimensional cases, lower-order polynomials are preferred.
(2) For parameter setting in the Gaussian weighting function, the influencing radius is the key issue. A larger radius means better fitting smoothness but more computational cost. Likewise, a larger shape parameter enhances locality while smoothness declines.
4. Numerical Example
Two numerical examples are carried out to investigate the fitting performance of MLS: one is a periodic function and the other is the famous test function of the Runge phenomenon,
$$f(x) = \frac{1}{1 + 25x^2}.$$
White noise with SNR = 20 dB is added to both functions; the maxima of the noise are about 0.2216 and 0.1264 in the two cases. The original functions and the corresponding noisy data are shown in Figure 3.
Parameter Settings. Both functions are fitted over the same interval. For the first fitting, the influencing radius $r$ and the shape parameter $\alpha$ are fixed, the MLS basis functions are low-order polynomials, and the polynomial order in LS is chosen as the best-performing one. For the second fitting, only one of the weighting parameters is changed. The comparisons of the curve fittings are shown in Figure 4.
It can be seen that the local approximation scheme of MLS acquires much better results than the LS method. The MLS fitting curve can follow the changes of the original function even with a low-order basis function, while for a global approximation scheme like LS, oscillation occurs and the approximation error increases dramatically. Here, we define the relative root-mean-square error (RRMSE) as
$$\mathrm{RRMSE} = \sqrt{\frac{\sum_{i=1}^{n}\left[f^h(x_i) - f(x_i)\right]^2}{\sum_{i=1}^{n} f(x_i)^2}}.$$
The numerical results show that both the maximum error (MAE) and the RRMSE of MLS are markedly smaller than those of LS in both fittings. The computation times for MLS and LS are 93.5 ms and 10.8 ms, respectively. Finally, a series of experiments is implemented for different MLS and LS settings; the numerical results are listed in Table 1.
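The RRMSE can be computed in a few lines; this sketch assumes the standard definition (fit-minus-reference energy normalized by the reference energy), since the paper's formula is not reproduced here:

```python
import numpy as np

def rrmse(y_fit, y_true):
    """Relative root-mean-square error of a fit against reference values."""
    y_fit, y_true = np.asarray(y_fit), np.asarray(y_true)
    return float(np.sqrt(np.sum((y_fit - y_true)**2) / np.sum(y_true**2)))

# an exact fit gives 0; a uniform +10 % offset gives approximately 0.1
y = np.array([1.0, 2.0, 3.0])
print(rrmse(y, y), rrmse(1.1 * y, y))
```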

5. Measurement Data Fitting
In this section, the electric field intensity $E$ and the magnetic flux density $B$ of a 500 kV substation were measured. The measurement points are uniformly distributed over a 125 m × 50 m domain at a fixed interval. The schematic diagram of the points' distribution is shown in Figure 5.
The corresponding measurement data of $E$ and $B$ are shown in Tables 2 and 3.


Based on the measurement data in Tables 2 and 3, the surfaces of $E$ and $B$ over the measurement domain are constructed by MLS. Then, for a specific measurement line, MLS curve fitting is implemented.
5.1. MLS Approximation for Surface Construction
Firstly, the surfaces and contours of $E$ and $B$ are drawn in Figure 6. The following can be seen from the figures. The electric field intensity varies much more strongly than the magnetic flux density; the reason is that the electric field intensity is affected significantly by the substation equipment, whereas the magnetic flux density is much less affected, so the $E$ field exhibits sharp variations. In Figure 6, the surfaces and contours are constructed by linear interpolation; thereby the surfaces have relatively steep changes and the corresponding contours have poor smoothness, whereas the real electromagnetic field distribution is continuous and smooth.
According to the physical law of electromagnetic field distribution, the following considerations on parameter setting can be drawn. In the $E$ surface fitting, in order to follow the rapid changes in the electric field, a small influencing radius and a larger shape parameter are preferred; smaller fitting errors are thus obtained at the expense of smoothness. For the $B$ surface fitting, the radius can be increased and the shape parameter decreased, so a smoother fitting surface is formed.
Parameter Settings. The test points' interval is chosen as 1 m over the measurement domain, so the total number of test points is 126 × 51 = 6426. The linear basis functions $\mathbf{p}^T = [1, x, y]$ and the quadratic basis functions $\mathbf{p}^T = [1, x, y, x^2, xy, y^2]$ are adopted in MLS. The influencing radius and shape parameter are set separately for the $E$ and $B$ surface fittings, following the considerations above. The fitting surfaces and contours of MLS with linear basis functions are shown in Figure 7.
It is obvious that the surfaces and contours become smoother after fitting. The numerical results for MAE, maximum relative error (MRE), and RRMSE are shown in Table 4.

The results show that the quadratic basis functions acquire a more accurate approximation than the linear ones. The computation times for the linear and quadratic basis functions are 780 ms and 950 ms, respectively.
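A two-dimensional MLS surface fit with the linear basis p = [1, x, y] can be sketched as follows. This is an illustrative implementation with made-up parameter values, not the authors' code; the truncated-Gaussian weight on the Euclidean distance is an assumption. By construction, it reproduces any plane exactly wherever enough nodes fall inside the support radius:

```python
import numpy as np

def mls_surface(nodes, values, query, radius=10.0, alpha=2.0):
    """2-D MLS with linear basis p = [1, x, y] and a truncated-Gaussian
    weight on the Euclidean distance to each node."""
    P = np.column_stack([np.ones(len(nodes)), nodes[:, 0], nodes[:, 1]])
    out = np.empty(len(query))
    for j, q in enumerate(query):
        s = np.linalg.norm(nodes - q, axis=1) / radius   # normalized distance
        w = np.where(s <= 1.0,
                     (np.exp(-(alpha * s)**2) - np.exp(-alpha**2))
                     / (1.0 - np.exp(-alpha**2)), 0.0)
        A = P.T @ (w[:, None] * P)                       # A = P^T W P
        a = np.linalg.pinv(A) @ (P.T @ (w * values))     # a = A^-1 B y
        out[j] = a[0] + a[1] * q[0] + a[2] * q[1]        # p^T(q) a
    return out

# demo: a plane is reproduced exactly (linear-basis consistency)
xs, ys = np.arange(0.0, 21.0, 5.0), np.arange(0.0, 11.0, 5.0)
gx, gy = np.meshgrid(xs, ys)
nodes = np.column_stack([gx.ravel(), gy.ravel()])
values = 2.0 + 0.5 * nodes[:, 0] - 0.3 * nodes[:, 1]
query = np.array([[3.3, 4.7], [12.1, 6.2]])
print(mls_surface(nodes, values, query))
```

Extending to the quadratic basis only changes the columns of `P` (append x², xy, y²) and the evaluation of p at the query point.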
5.2. MLS Approximation for Curve Fitting
Then, we focus on curve fitting for a specific measurement line. The magnetic flux density along the line corresponding to the seventh column in Table 3 is selected.
Parameter Settings. With the test points' interval fixed, the quadratic basis function is adopted in the MLS approximation, and several combinations of the influencing radius and shape parameter are tested. In Figure 8(a), the shape parameter is fixed and the radius is varied, while in Figure 8(b) the radius is fixed and the shape parameter is varied.
From Figure 8, we can conclude that a larger radius yields a smoother fit but larger fitting errors; a larger shape parameter enhances the ability to follow rapid changes while smoothness declines; and the effect of the influencing radius is more pronounced than that of the shape parameter. In any case, the fitting can be easily adjusted by setting the Gaussian function parameters.
6. Conclusions
The MLS approximation method for measurement data fitting was proposed in this paper. Numerical examples and measurement data fitting revealed the superior performance of MLS. The following conclusions can be drawn.
Firstly, the MLS can be regarded as a combination of WLS and SLS; its essence lies in the concepts of the moving window and compactly supported weighting functions. Compared to SLS, it realizes a soft segmentation, which avoids the fitting discontinuity problem and guarantees smoothness. Compared to WLS, only the nodes located in the support domain are involved in the coefficient calculation, so locality is enhanced and rapid changes can be followed.
Secondly, the MLS approximation can acquire higher precision even with low-order basis functions (e.g., a linear basis). MLS is also stable for complex fittings because of its local approximation scheme, whereas oscillation occurs in high-order polynomial LS fitting.
Finally, the weighting function parameters, namely the influencing radius and the shape parameter, allow the fitting to be easily adjusted. The MLS method is thus much more flexible than traditional LS-based methods, making it a promising approach for measurement data processing.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work was supported by the National Natural Science Foundation of China (no. 51377174).
References
[1] S. A. Dyer and X. He, "Least-squares fitting of data by polynomials," IEEE Instrumentation and Measurement Magazine, vol. 4, no. 4, pp. 46–51, 2001.
[2] S. A. Dyer and J. S. Dyer, "Least-squares fitting of data by rational functions: Levy's method (Part 1)," IEEE Instrumentation and Measurement Magazine, vol. 12, no. 6, pp. 40–43, 2009.
[3] H. Park, "B-spline surface fitting based on adaptive knot placement using dominant columns," Computer-Aided Design, vol. 43, no. 3, pp. 258–264, 2011.
[4] S.-H. Bae and B. K. Choi, "NURBS surface fitting using orthogonal coordinate transform for rapid product development," Computer-Aided Design, vol. 34, no. 10, pp. 683–690, 2002.
[5] D. Lasser, "Triangular subpatches of rectangular Bézier surfaces," Computers & Mathematics with Applications, vol. 55, no. 8, pp. 1706–1719, 2008.
[6] S. Liu and C. C. L. Wang, "Quasi-interpolation for surface reconstruction from scattered data with radial basis function," Computer Aided Geometric Design, vol. 29, no. 7, pp. 435–447, 2012.
[7] P. Lancaster and K. Salkauskas, "Surfaces generated by moving least squares methods," Mathematics of Computation, vol. 37, no. 155, pp. 141–158, 1981.
[8] B. Mederos, L. Velho, and L. H. de Figueiredo, "Moving least squares multiresolution surface approximation," in Proceedings of the 16th Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI '03), pp. 19–26, Sao Carlos, Brazil, October 2003.
[9] H.-Y. Wang, D.-H. Xiang, and D.-X. Zhou, "Moving least-square method in learning theory," Journal of Approximation Theory, vol. 162, no. 3, pp. 599–614, 2010.
[10] L. Tian, Z. Lu, and W. Hao, "Moving least squares based sensitivity analysis for models with dependent variables," Applied Mathematical Modelling, vol. 37, no. 8, pp. 6097–6109, 2013.
[11] T. Belytschko, Y. Krongauz, D. Organ, M. Fleming, and P. Krysl, "Meshless methods: an overview and recent developments," Computer Methods in Applied Mechanics and Engineering, vol. 139, no. 1–4, pp. 3–47, 1996.
[12] B. Nayroles, G. Touzot, and P. Villon, "Generalizing the finite element method: diffuse approximation and diffuse elements," Computational Mechanics, vol. 10, no. 5, pp. 307–318, 1992.
[13] T. Belytschko, Y. Y. Lu, and L. Gu, "Element-free Galerkin methods," International Journal for Numerical Methods in Engineering, vol. 37, no. 2, pp. 229–256, 1994.
[14] S. N. Atluri and T. Zhu, "A new meshless local Petrov-Galerkin (MLPG) approach in computational mechanics," Computational Mechanics, vol. 22, no. 2, pp. 117–127, 1998.
[15] Y. Cheng and M. Peng, "Boundary element-free method for elastodynamics," Science in China, Series G, vol. 48, no. 6, pp. 641–657, 2005.
[16] M. G. Armentano and R. G. Durán, "Error estimates for moving least square approximations," Applied Numerical Mathematics, vol. 37, no. 3, pp. 397–416, 2001.
[17] J. F. Wang, F. X. Sun, Y. M. Cheng, and A. X. Huang, "Error estimates for the interpolating moving least-squares method," Applied Mathematics and Computation, vol. 245, pp. 321–342, 2014.
[18] H. Ren, K. Pei, and L. Wang, "Error analysis for moving least squares approximation in 2D space," Applied Mathematics and Computation, vol. 238, pp. 527–546, 2014.
[19] Y. Lipman, "Stable moving least-squares," Journal of Approximation Theory, vol. 161, no. 1, pp. 371–384, 2009.
[20] H. Ren, J. Cheng, and A. Huang, "The complex variable interpolating moving least-squares method," Applied Mathematics and Computation, vol. 219, no. 4, pp. 1724–1736, 2012.
[21] Z. Komargodski and D. Levin, "Hermite type moving-least-squares approximations," Computers and Mathematics with Applications, vol. 51, no. 8, pp. 1223–1232, 2006.
Copyright
Copyright © 2015 Huaiqing Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.