Abstract

This paper introduces the moving least squares (MLS) approximation method for the postprocessing of electromagnetic field measurement data. The MLS combines the concepts of a moving window and compactly supported weighting functions, and it can be regarded as a combination of weighted least squares and segmented least squares. The MLS not only achieves high precision even with low-order basis functions, but also offers good stability owing to its local approximation scheme. A further attractive property of the MLS is its flexibility: the data fitting can be easily adjusted by tuning the weighting function parameters. Numerical examples and measurement data processing demonstrate its superior performance in curve fitting and surface construction, making the MLS a promising method for measurement data processing.

1. Introduction

The measurement data of the electromagnetic (EM) field play a key role in EM environment assessment. However, the number of measurement points is limited, so postprocessing is necessary in order to describe the EM field distribution more accurately. Acquiring data at nonmeasurement points is essentially a function approximation or surface construction problem. Owing to instrument errors, environmental interference, or terrain changes, deviations inevitably appear, and therefore a fitting method is preferred in the postprocessing.

Currently, the least squares (LS) method is the most widely used approach for data fitting. The commonly used basis functions are polynomials [1], rational functions [2], Gaussian, exponential, and smoothing spline functions in curve fitting, and B-splines [3], nonuniform rational B-splines (NURBS) [4], Bézier surfaces [5], and radial basis functions [6] in surface construction. At the same time, variants of LS such as RLS (recursive least squares), TLS (total least squares), PLS (partial least squares), WLS (weighted least squares), GLS (generalized least squares), and SLS (segmented least squares) have also been put forward. However, all of the above LS-based methods are global approximation schemes, which are not suitable for large data sets or for irregularly or scatteredly distributed data. Therefore, the moving least squares (MLS) method, a local approximation scheme, is proposed here for measurement data processing.

The MLS approximation was introduced by Lancaster and Salkauskas for surface generation problems [7]. It has been used for surface construction from unorganized point clouds [8], regression in learning theory [9], and sensitivity analysis [10]. However, the major application of MLS is in forming a large family of meshless methods [11], such as the diffuse element method (DEM) [12], the well-known element-free Galerkin method (EFGM) [13], and the meshless local Petrov-Galerkin method [14]. These methods offer high computational precision and stability. A disadvantage of the MLS is that the resulting system of algebraic equations is sometimes ill-conditioned, so Cheng and Peng [15] proposed an improved method. The error estimates and stability of MLS [16-19] and its complex and Hermite variants [20, 21] have been intensively discussed. On the whole, research on MLS approximation theory remains far less extensive than its applications.

As a data fitting method, the MLS can be regarded as a combination of WLS and SLS because of its compactly supported weighting function. Moreover, the moving window introduced in MLS gives it superior performance compared with SLS. Firstly, the compactly supported weighting function means that only the measurement data near the unknown point are involved in the calculation, so the MLS inherits the localized treatment of SLS. Secondly, the segmentation in SLS is rigid, which raises the problems of segment selection and fitting discontinuity; the moving window in MLS, by contrast, acts as a soft segment, so segment selection is avoided and the continuity and smoothness of the fit are guaranteed. Finally, the weighting function parameters provide a convenient means of adjustment for MLS.

Hence, this paper proposes the MLS method for measurement data fitting. The structure of the paper is as follows. Section 2 gives a brief description of the MLS approximation. The weighting function is discussed in Section 3. In Section 4, numerical examples of curve fitting are carried out, and in Section 5 the measurement data fitting for a substation is implemented. Conclusions are drawn in Section 6.

2. Moving Least Squares Approximation

In MLS, an arbitrary function f(x) can be approximated by

f(x) ≈ f^h(x) = Σ_{i=1}^{m} p_i(x) a_i(x) = p^T(x) a(x),    (1)

where p_i(x), i = 1, 2, ..., m, are basis functions, m is the number of terms in the basis, and a_i(x) are the coefficients. The basis functions can be polynomials, Chebyshev polynomials, Legendre polynomials, trigonometric functions, wavelet functions, radial basis functions, and so forth. For example, the one-dimensional polynomial basis functions take the following forms: the linear basis p^T(x) = [1, x], m = 2, and the quadratic basis p^T(x) = [1, x, x^2], m = 3. In the following, we consider one-dimensional curve fitting for demonstration.

The obvious difference between the traditional LS method and the MLS lies in the coefficients: in MLS the coefficients a_i(x) vary with x, while in LS they are constant. In order to determine the coefficients, a functional similar to that of WLS is defined as

J = Σ_{I=1}^{n} w(x - x_I) [p^T(x_I) a(x) - u_I]^2,    (2)

where x_I, I = 1, 2, ..., n, are the given nodes with values u_I, and w(x - x_I) are the compactly supported weighting functions, which can also be written as w_I(x). The subscript "I" means that the center of the weighting function is located at x_I. The schematic diagram of the weighting scheme in MLS is shown in Figure 1.

The weight is still imposed on the squared error between the fitted and the given values. With respect to WLS, however, the main difference is the weighting function, which is locally defined in MLS versus globally defined in WLS. Owing to the compact support, only the nodes located in the support domain are involved in the coefficient calculation, so in this respect the MLS is similar to SLS. The matrix form of (2) can be rewritten as

J = [P a(x) - u]^T W(x) [P a(x) - u],    (3)

where

P = [p^T(x_1); p^T(x_2); ...; p^T(x_n)],   W(x) = diag[w(x - x_1), w(x - x_2), ..., w(x - x_n)],   u = [u_1, u_2, ..., u_n]^T.    (4)

Minimizing (3) with respect to the coefficients a(x), the following expression is obtained:

A(x) a(x) = B(x) u,   with   A(x) = P^T W(x) P,   B(x) = P^T W(x).

Further, the coefficients are

a(x) = A^{-1}(x) B(x) u,

so that

f^h(x) = p^T(x) A^{-1}(x) B(x) u = Φ(x) u,

where Φ(x) is known as the MLS shape function, which fundamentally determines the approximation performance. Hence, MLS can be regarded as a combination of WLS and SLS. The MLS approximation process for the one-dimensional case can be summarized in the following flowchart.

Flowchart for MLS Approximation Technique
(1) Give the nodes x_I and values u_I, I = 1, 2, ..., n.
(2) Select the basis functions p(x) and then determine the matrix P in (4).
(3) Loop over every unknown point x and form the shape function:
(a) select the weighting function w(x - x_I);
(b) calculate the weights at x for all nodes and form the diagonal matrix W(x) in (4);
(c) obtain the matrices A(x) and B(x) using the formulae A(x) = P^T W(x) P and B(x) = P^T W(x);
(d) calculate the inverse matrix A^{-1}(x) by the SVD method;
(e) form the shape function Φ(x) = p^T(x) A^{-1}(x) B(x).
(4) End the unknown point loop.
(5) Give the approximation function f^h(x) = Φ(x) u.
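As an illustration of the above flowchart, the following Python sketch implements the one-dimensional MLS approximation with a polynomial basis and a compactly supported Gaussian weight. It is a minimal sketch added for this edition rather than the authors' code; the function names (poly_basis, gauss_weight, mls_fit) and the default parameter values are our own assumptions.

import numpy as np

def poly_basis(x, order):
    """Polynomial basis p(x) = [1, x, ..., x^order]."""
    return np.array([x**k for k in range(order + 1)])

def gauss_weight(s, alpha):
    """Compactly supported Gaussian weight on the relative distance s = |x - x_I| / R."""
    w = np.zeros_like(s)
    inside = s <= 1.0
    w[inside] = (np.exp(-(s[inside] * alpha)**2) - np.exp(-alpha**2)) / (1.0 - np.exp(-alpha**2))
    return w

def mls_fit(x_eval, x_nodes, u_nodes, order=1, R=0.5, alpha=2.0):
    """Evaluate the MLS approximation f^h at the points x_eval."""
    P = np.array([poly_basis(xi, order) for xi in x_nodes])   # matrix P in (4)
    f_h = np.zeros_like(x_eval, dtype=float)
    for j, x in enumerate(x_eval):
        s = np.abs(x - x_nodes) / R
        w = gauss_weight(s, alpha)                            # diagonal of W(x) in (4)
        A = P.T @ (w[:, None] * P)                            # A(x) = P^T W(x) P
        B = P.T * w                                           # B(x) = P^T W(x)
        a = np.linalg.pinv(A) @ (B @ u_nodes)                 # a(x); pinv is the SVD-based inverse
        f_h[j] = poly_basis(x, order) @ a                     # f^h(x) = p^T(x) a(x)
    return f_h

The pseudoinverse plays the role of the SVD inversion in step (d); if too few nodes fall inside the support radius R, A(x) becomes rank-deficient and the result should be treated with care.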

3. Weighting Function

The weighting function plays a very important role in MLS. In the above flowchart, the vector u is given and the matrix P is predetermined once specific basis functions are selected; neither depends on the unknown point x, so they act as constants in forming the shape function. Hence, only the matrix W(x) depends on x, and the shape function is mainly determined by the weighting function.

The basic requirements for a weighting function are compact support, nonnegativity, and continuity with sufficiently high derivatives, so as to ensure the uniqueness of the coefficients. The compact support characteristic is the essence of MLS. It is obvious from Figure 1 that a relatively large support domain means that more nodes are involved in the calculation, which approaches the WLS method, while decreasing the support domain enhances the locality of the MLS but reduces the smoothness. The choice of an appropriate support radius depends on the fitting errors, the required smoothness, and the characteristics of the problem itself.
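The locality/smoothness trade-off can be made concrete by counting how many nodes fall inside the support domain for different radii. The node spacing and radii below are illustrative values, not taken from the paper.

import numpy as np

# Count the nodes inside the support domain |x - x_I| <= R at one evaluation point.
x_nodes = np.linspace(0.0, 10.0, 51)          # uniform nodes, spacing 0.2 (illustrative)
x = 5.0                                        # evaluation point
for R in (0.3, 0.6, 1.2):
    n_inside = int(np.sum(np.abs(x - x_nodes) / R <= 1.0))
    print("R =", R, "-> nodes in support:", n_inside)

A larger R draws in more nodes (closer to WLS and smoother), while a smaller R keeps the fit local but risks an ill-conditioned A(x) when fewer nodes than basis terms are available.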

The commonly used weighting functions include the Gaussian, the cubic spline function, and the compactly supported radial basis function (CSRBF). In this paper we focus on the Gaussian and the cubic spline function. The Gaussian weighting function is

w(s) = [exp(-(sα)^2) - exp(-α^2)] / [1 - exp(-α^2)]   for s ≤ 1,   w(s) = 0   for s > 1,

where s is the relative distance, R is the influencing radius, and α is the shape parameter. Because the weighting function is defined only within the influencing domain, it is a compactly supported function. In addition, the weighted squared fitting error acts just like a moving window. Hence we have

s = ||x - x_I|| / R,

where the norm can be chosen as the Euclidean distance; in two- or three-dimensional cases, the distance is the corresponding Euclidean norm. The cubic spline weighting function is

w(s) = 2/3 - 4 s^2 + 4 s^3                for s ≤ 1/2,
w(s) = 4/3 - 4 s + 4 s^2 - (4/3) s^3      for 1/2 < s ≤ 1,
w(s) = 0                                  for s > 1.

There are two adjustable parameters, R and α, in the Gaussian function, whereas the cubic spline has only R. Correspondingly, the Gaussian is more flexible for MLS and is adopted in the following discussion. The Gaussian and cubic spline functions are shown in Figure 2, where the cubic spline function is scaled by a factor of 1.5 to facilitate the comparison. Numerical tests show that the Gaussian function is similar to the cubic spline for a suitable value of the shape parameter α.
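The comparison in Figure 2 can be reproduced numerically with the sketch below, which reuses gauss_weight from the earlier listing and assumes the standard cubic spline weight given above; the chosen α value is illustrative, since the paper's exact value is not reproduced here.

import numpy as np

def cubic_spline_weight(s):
    """Cubic spline weight on the normalized distance s = |x - x_I| / R."""
    s = np.asarray(s, dtype=float)
    w = np.zeros_like(s)
    m1 = s <= 0.5
    m2 = (s > 0.5) & (s <= 1.0)
    w[m1] = 2.0/3.0 - 4.0*s[m1]**2 + 4.0*s[m1]**3
    w[m2] = 4.0/3.0 - 4.0*s[m2] + 4.0*s[m2]**2 - (4.0/3.0)*s[m2]**3
    return w

# Compare the Gaussian weight (gauss_weight from the earlier sketch) with the cubic
# spline scaled by 1.5, as in Figure 2, for an illustrative shape parameter.
s = np.linspace(0.0, 1.2, 121)
print(np.max(np.abs(gauss_weight(s, alpha=3.0) - 1.5 * cubic_spline_weight(s))))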

Consequently, we can draw the following conclusions for MLS.
(1) For basis function selection, linear, quadratic, or higher-order polynomials are the candidates. As the polynomial order increases, a smoother fit is obtained; however, the computational cost increases significantly and ill-conditioning may even arise. Therefore, in two- or three-dimensional cases, lower-order polynomials are preferred.
(2) For the parameter setting of the Gaussian weighting function, the influencing radius R is the key issue. A larger radius gives a smoother fit but higher computational cost, while a larger shape parameter α enhances locality at the expense of smoothness.

4. Numerical Example

Two numerical examples are carried out to investigate the fitting performance of MLS. One test function is periodic and the other is the well-known test function of the Runge phenomenon. White noise with SNR = 20 dB is added to both functions; the maximum noise amplitude is about 0.2216 and 0.1264 in the two cases, respectively. The original functions and the corresponding noisy data are shown in Figure 3.

Parameter Settings. Both functions are fitted over the same interval. For the first fitting, the influencing radius R and the shape parameter α are set to fixed values, the MLS uses a low-order polynomial basis, and the LS polynomial order is chosen as the best-performing one. For the second fitting, only one of these parameter settings is changed. The comparisons of the curve fitting are shown in Figure 4.

It can be seen clearly that the local approximation scheme of MLS achieves much better results than the LS method. The MLS fitting curve can follow the changes of the original function even with a low-order basis, whereas for a global approximation scheme such as LS, oscillation occurs and the approximation error increases dramatically. Here, we define the relative root mean square error (RRMSE) as

RRMSE = sqrt( Σ_{i=1}^{N} [f^h(x_i) - f(x_i)]^2 / Σ_{i=1}^{N} [f(x_i)]^2 ),

where N is the number of test points. The numerical results show that the maximum error (MAE) and RRMSE are smaller for MLS than for LS in the first fitting, and the same holds for the second fitting. The computation times for MLS and LS are 93.5 ms and 10.8 ms, respectively. Finally, a series of experiments is carried out for different MLS and LS settings; the numerical results are listed in Table 1.
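As a hedged illustration of this experiment (the paper's exact test functions, noise realization, and parameter values are not reproduced here), the following sketch fits a noisy Runge-type function with the mls_fit routine from the earlier listing, compares it with a global polynomial LS fit, and reports the RRMSE defined above.

import numpy as np

# Illustrative setup (assumed, not the paper's exact configuration)
rng = np.random.default_rng(0)
x_nodes = np.linspace(-1.0, 1.0, 101)
f_true = 1.0 / (1.0 + 25.0 * x_nodes**2)           # Runge-type test function
noise = 0.05 * rng.standard_normal(x_nodes.size)    # illustrative noise level
u_nodes = f_true + noise

x_test = np.linspace(-1.0, 1.0, 501)
f_ref = 1.0 / (1.0 + 25.0 * x_test**2)

# MLS with a linear basis and illustrative weighting parameters (mls_fit defined above)
f_mls = mls_fit(x_test, x_nodes, u_nodes, order=1, R=0.2, alpha=2.0)

# Global LS fit with a high-order polynomial for comparison
coeffs = np.polyfit(x_nodes, u_nodes, deg=9)
f_ls = np.polyval(coeffs, x_test)

def rrmse(f_hat, f):
    """Relative root mean square error as defined in the text."""
    return np.sqrt(np.sum((f_hat - f)**2) / np.sum(f**2))

print("RRMSE (MLS):", rrmse(f_mls, f_ref))
print("RRMSE (LS): ", rrmse(f_ls, f_ref))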

5. Measurement Data Fitting

In this section, the electric field intensity E and the magnetic flux density B around a 500 kV substation were measured. The measurement points are uniformly distributed over a 125 m × 50 m domain on a regular grid with constant spacing. A schematic diagram of the point distribution is shown in Figure 5.

The corresponding measurement data of E and B are listed in Tables 2 and 3, respectively.

Based on the measurement data in Tables 2 and 3, the surfaces of E and B over the measurement domain are constructed by MLS. Then, for a specific measurement line, MLS curve fitting is implemented.

5.1. MLS Approximation for Surface Construction

Firstly, the surfaces and contours of E and B are drawn in Figure 6. The following can be seen from the figures. The electric field intensity varies much more strongly than the magnetic flux density; the reason is that the electric field intensity is affected significantly by the substation equipment, whereas the magnetic flux density is less affected, so the E field exhibits sharp variations. In Figure 6, the surfaces and contours are constructed by linear interpolation; as a result, the surfaces show relatively steep changes and the corresponding contours have poor smoothness, whereas the real electromagnetic field distribution is continuous and smooth.

According to the physical behaviour of the electromagnetic field distribution, the following considerations on the parameter settings can be drawn. In the E surface fitting, a small influencing radius and a larger shape parameter are preferred in order to follow the rapid changes in the electric field; smaller fitting errors are thus obtained at the expense of smoothness. For the B surface fitting, the radius can be increased and the shape parameter decreased, so that a smoother fitting surface is formed.

Parameter Settings. The test point interval is chosen as 1 m over the measurement domain, so the total number of test points is 126 × 51 = 6426. The linear basis functions p^T(x, y) = [1, x, y] and the quadratic basis functions p^T(x, y) = [1, x, y, x^2, xy, y^2] are adopted in MLS. The influencing radius R and the shape parameter α are set separately for the E and B surface fittings according to the considerations above. The fitting surfaces and contours of MLS with the linear basis functions are shown in Figure 7.
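For completeness, a minimal two-dimensional extension of the earlier one-dimensional sketch is given below. The basis vectors match those stated above, while the function names, grid coordinates, and weighting parameters are illustrative assumptions rather than the paper's values; xy_nodes would hold the measurement grid coordinates and u_nodes the measured E or B values, with xy_eval the 1 m test grid.

import numpy as np

def poly_basis_2d(x, y, quadratic=False):
    """2D polynomial basis: [1, x, y] or [1, x, y, x^2, x*y, y^2]."""
    if quadratic:
        return np.array([1.0, x, y, x*x, x*y, y*y])
    return np.array([1.0, x, y])

def mls_surface(xy_eval, xy_nodes, u_nodes, R=10.0, alpha=2.0, quadratic=False):
    """Evaluate a 2D MLS surface at the points xy_eval from the scattered nodes xy_nodes."""
    xy_eval = np.asarray(xy_eval, dtype=float)
    xy_nodes = np.asarray(xy_nodes, dtype=float)
    P = np.array([poly_basis_2d(px, py, quadratic) for px, py in xy_nodes])
    out = np.zeros(len(xy_eval))
    for j, (x, y) in enumerate(xy_eval):
        s = np.linalg.norm(xy_nodes - np.array([x, y]), axis=1) / R   # relative distances
        w = np.where(s <= 1.0,
                     (np.exp(-(s*alpha)**2) - np.exp(-alpha**2)) / (1.0 - np.exp(-alpha**2)),
                     0.0)                                             # Gaussian weight
        A = P.T @ (w[:, None] * P)                                    # A = P^T W P
        B = P.T * w                                                   # B = P^T W
        a = np.linalg.pinv(A) @ (B @ u_nodes)
        out[j] = poly_basis_2d(x, y, quadratic) @ a
    return out

Switching quadratic to True reproduces the linear-versus-quadratic comparison discussed next, at the cost of a larger matrix A(x) per test point.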

It is obvious that the surfaces and contours become smoother after fitting. The numerical results for the MAE, the maximum relative error (MRE), and the RRMSE are listed in Table 4.

Therefore, the quadratic basis functions achieve a more accurate approximation than the linear ones. The computation times for the linear and quadratic basis functions are 780 ms and 950 ms, respectively.

5.2. MLS Approximation for Curve Fitting

Next, we focus on curve fitting for the measurement data along a specific line. The magnetic flux density along one measurement line is selected, corresponding to the seventh column in Table 3.

Parameter Settings. A uniform test point interval is chosen along the line, and the quadratic basis function is adopted in the MLS approximation. The influencing radius R and the shape parameter α are combined as follows: in Figure 8(a) the shape parameter is fixed and the radius is varied, while in Figure 8(b) the radius is fixed and the shape parameter is varied.

From Figure 8 we can conclude that a larger radius yields a smoother fit but larger fitting errors; a larger shape parameter enhances the ability to follow rapid changes while smoothness declines; and the effect of the influencing radius is more pronounced than that of the shape parameter. In any case, the fit can be easily adjusted by setting the Gaussian function parameters.
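The kind of parameter study shown in Figure 8 can be reproduced with the mls_fit sketch from Section 2. The line data below are hypothetical placeholders (not the values of Table 3), and the radius and shape-parameter values are illustrative choices rather than the ones used in the paper.

import numpy as np

# Hypothetical stand-in for one measurement line: positions and flux-density values.
x_line = np.linspace(0.0, 50.0, 11)
b_line = 1.0 + 0.3 * np.sin(0.2 * x_line)      # placeholder data, not Table 3
x_test = np.linspace(0.0, 50.0, 501)

# Sweep the influencing radius with the shape parameter fixed (cf. Figure 8(a)) ...
for R in (10.0, 20.0, 30.0):
    fit = mls_fit(x_test, x_line, b_line, order=2, R=R, alpha=2.0)

# ... and sweep the shape parameter with the radius fixed (cf. Figure 8(b)).
for alpha in (1.0, 2.0, 4.0):
    fit = mls_fit(x_test, x_line, b_line, order=2, R=15.0, alpha=alpha)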

6. Conclusions

The MLS approximation method for measurement data fitting has been proposed in this paper. Numerical examples and measurement data fitting reveal the superior performance of MLS. The following conclusions can be drawn.

Firstly, the MLS can be regarded as a combination of WLS and SLS; its essence lies in the moving window concept and the compactly supported weighting functions. Compared with SLS, it realizes a soft segmentation that avoids the fitting discontinuity problem and guarantees the smoothness of the fit; compared with WLS, only the nodes located in the support domain are involved in the coefficient calculation, so the locality is enhanced and rapid changes can be followed.

Secondly, the MLS approximation achieves high precision even with low-order basis functions (e.g., the linear basis). The MLS is also stable for complicated fittings because of its local approximation scheme, whereas oscillation occurs for high-order polynomial LS fitting.

Finally, the weighting function parameters, namely the influencing radius and the shape parameter, allow the fit to be easily adjusted, so the MLS method is much more flexible than traditional LS-based methods. Overall, the MLS is a promising method for measurement data processing.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by the National Natural Science Foundation of China (no. 51377174).