#### Abstract

In this paper, the minimum mean square error (MMSE) estimation problem for calculation of distances between two signals via the Kalman filtering framework is considered. The developed algorithm includes two stages: the Kalman estimate of a state vector computed at the first stage is nonlinearly transformed at the second stage based on a distance function and the MMSE criterion. In general, the most challenging aspect of applying the distance estimator is calculation of the multivariate Gaussian integral. However, this difficulty can be overcome for specific metrics such as the distance between two points on a line, between a point and a line, and between a point and a plane. In these cases, the MMSE estimator is defined by an analytical closed-form expression. We derive the exact closed-form bilinear and quadratic MMSE estimators that can be effectively applied for calculation of an inner product, squared norm, and Euclidean distance. A novel low-complexity suboptimal estimator for special composite functions of linear, bilinear, and quadratic forms is proposed. Radar range-angle responses are described by such functions. The proposed estimators are validated through a series of experiments using real models and metrics. Experimental results show that the MMSE estimators outperform existing estimators that calculate distance and angle in a nonoptimal manner.

#### 1. Introduction

The problem of measuring the distance between real-valued signals or images arises in most areas of scientific research. In particular, the familiar Euclidean distance plays a prominent role in many important application contexts, not only in engineering, economics, statistics, and decision theory, but also in fields such as machine learning, cryptography, image recognition, and others. The statistical methods related to distance estimation can be categorized into image and signal processing areas.

The concept of a distance metric is widely used in image processing and computer vision [1–5] (also see references therein). The distance provides a quantitative measure of the degree of match between two images or objects. These objects might be two profiles of persons, a person and a target profile, camera of a robot and people, or any two vectors taken across the same features (variables) derived from color, shape, and/or texture information. Image similarity measures play an important role in many image algorithms and applications including retrieval, classification, change detection, quality evaluation, and registration [6–12].

This paper deals with distance estimation between random signals. In signal processing, a good distance metric helps improve the performance of classification, clustering, and localization in wireless sensor networks, radar tracking, and other applications [13–20]. The Bayesian classification approach based on the Euclidean and Mahalanobis distances is often used in discriminant analysis. A survey of classification procedures that minimize a distance between raw signals and classes in a multifeature space is given in [21, 22]. A distance estimation algorithm based on goodness-of-fit functions, where the best parameters of the fitting functions are calculated from training data, is considered in [23]. An algorithm for estimating walking distance using a wrist-mounted inertial measurement unit is proposed in [24]. The concept of distance between two samples or between two variables is fundamental in statistics because a sum of squares of independent normal random variables has a chi-square distribution. Knowledge of this distribution and the usual approximations allow construction of confidence intervals for distance metrics [25, 26]. The use of Taylor series expansions for aircraft geometric-height estimation using range and bearing measurements is addressed in [27, 28]. The minimum mean square error (MMSE) estimation of a state vector in the presence of information about the absolute value of a difference between its subvectors is proposed in [29].

In many applications, it is of interest to estimate not only the position or state of an object but also a nonlinear distance function, which provides information for effective control in target tracking. However, most authors have not focused on simultaneous estimation of state and distance functions in dynamical models such as the Kalman filtering framework.

The problem of estimating a distance function ρ(x_1, x_2) between two vector signals x_1 and x_2 is considered in this paper; its difference from the aforementioned references is that both signals x_1 and x_2 are unknown, and they should be estimated simultaneously with the function using indirect measurements. For example, suppose we observe the positions of two points x_1 and x_2 on a line, and the distance between the points is the absolute difference, i.e., ρ = |x_1 − x_2|. The positions x_1 and x_2, and consequently the distance ρ, are unknown, and our problem is to optimally calculate the three estimates x̂_1, x̂_2, and ρ̂. Note that the simple distance estimator ρ̃ = |x̂_1 − x̂_2| is not an optimal solution.

The purpose of the paper is to derive analytical closed-form MMSE estimators for distance functions between random signals, such as the absolute value, the Euclidean distance, the inner product, and bilinear and quadratic forms. The advantage of such an estimator is quick and accurate calculation of distance metrics compared to approximate or iterative estimators. A further study of the estimators is also carried out for the object tracking problem, where we obtain important practical results for the distance estimation of signals in linear Gaussian discrete-time systems.

The following list highlights the primary contributions of this paper:

(1) Extension of the MMSE approach to the estimation of nonlinear functions of a state vector within the Kalman filtering framework. The obtained MMSE-optimal solution represents a two-stage estimator.

(2) Derivation of analytical expressions for different metrics between two points on a line, between a point and a line, and between a point and a plane. We establish that the obtained estimators represent compact closed-form formulas depending on the Kalman filter state estimates and error covariance.

(3) Investigation and application of the MMSE estimators for quadratic and bilinear forms of a state vector, including the estimators for the squared norm, the squared Euclidean distance, and the inner product. A novel low-complexity algorithm for suboptimal estimation of a special class of composite functions is proposed. Tracking radar responses such as range, angles, and range rate are described by these functions.

(4) Performance evaluation of the proposed MMSE estimators through real examples, illustrating their theoretical and practical usefulness.

This paper is organized as follows. Section 2 presents a statement of the MMSE estimation problem for an arbitrary nonlinear function of a state vector within the Kalman filtering framework. In Section 3, the general MMSE estimator is proposed, and the computational complexity of the estimator is discussed. The concept of a closed-form estimator is introduced. In Section 4, the closed-form MMSE estimator for the absolute value of a linear form of a state vector is derived (Theorem 1). In particular cases, the estimator calculates distances between two points on a 1-D line, between a point and a line in the 2-D plane, and between a point and a plane in 3-D space. A comparative analysis of the estimator via several practical examples is presented. In Section 5, the MMSE estimators for quadratic and bilinear forms of a state vector are comprehensively studied (Theorems 2 and 3). Effective matrix formulas for the quadratic and bilinear MMSE estimators are derived and applied to the Euclidean distance, a norm, and the inner product of vector signals. In Section 6, a low-complexity suboptimal estimator for composite nonlinear functions is proposed and recommended for calculation of radar range-angle responses. In Section 7, the efficiency of the suboptimal estimator is demonstrated on a 2-D dynamical model. Finally, we conclude the paper in Section 8. The list of main notations is given in Table 1.

#### 2. Problem Statement

The basic framework for the Kalman filter involves estimation of the state of a discrete-time linear dynamical system with additive Gaussian white noise:

x_(k+1) = F_k x_k + w_k,  z_k = H_k x_k + v_k,  k = 0, 1, …,  (1)

where x_k ∈ R^n is a state vector, z_k ∈ R^m is a measurement vector, and w_k and v_k are zero-mean Gaussian white noises with process and measurement noise covariances Q_k and R_k, respectively, i.e., w_k ~ N(0, Q_k) and v_k ~ N(0, R_k). The initial state x_0 ~ N(m_0, P_0) and the process and measurement noises are mutually uncorrelated.

In parallel with the state-space model (1), consider a nonlinear function of the state vector:

f_k = f(x_k),  (2)

which in a particular case represents a distance metric in R^n.

Given the overall noisy measurements z_1^k = {z_1, …, z_k}, our goal is to derive optimal estimators x̂_k and f̂_k for the state vector (1) and the nonlinear function (2), respectively.

There are a multitude of statistics-based methods for estimating the unknown value f_k from the sensor measurements z_1^k. We focus on the MMSE approach, which minimizes the mean square error (MSE), E[(f_k − f̂_k)^2], a common measure of estimator quality.

The MMSE estimator is the conditional mean (expectation) of the unknown given the observed measurements, f̂_k = E(f_k | z_1^k) [30, 31]. The most challenging problem in the MMSE approach is calculation of the conditional mean. In this paper, explicit formulas for distance metrics within the Kalman filtering framework are derived.

#### 3. General Formula for Optimal Two-Stage MMSE Estimator

In this section, the optimal MMSE estimator for a general function f(x_k) of a state vector is proposed. It includes two stages: the optimal Kalman estimate of the state vector computed at the first stage is used at the second stage for estimation of f(x_k).

First stage (calculation of Kalman estimate): the mean square estimate x̂_k of the state x_k based on the measurements z_1^k and its error covariance P_k are described by the recursive Kalman filter (KF) equations [30, 31]:

x̂_k = x̂_(k|k−1) + K_k(z_k − H_k x̂_(k|k−1)),  P_k = (I_n − K_k H_k) P_(k|k−1),
x̂_(k|k−1) = F_(k−1) x̂_(k−1),  P_(k|k−1) = F_(k−1) P_(k−1) F′_(k−1) + Q_(k−1),
K_k = P_(k|k−1) H′_k (H_k P_(k|k−1) H′_k + R_k)^(−1),  (3)

where x̂_(k|k−1) and P_(k|k−1) are the time update estimate and error covariance, respectively, and K_k is the filter gain matrix.
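The first-stage recursion can be sketched for the scalar random-walk case; the model, noise variances, and run length below are illustrative assumptions, not values from the paper:

```python
import random

# Scalar Kalman filter for the random-walk model
#   x_{k+1} = x_k + w_k,  z_k = x_k + v_k,
# a minimal sketch of the first stage.
def kalman_step(x_est, p, z, q, r):
    # Time update (prediction).
    x_pred = x_est
    p_pred = p + q
    # Measurement update (correction).
    k = p_pred / (p_pred + r)          # filter gain
    x_new = x_pred + k * (z - x_pred)  # state estimate
    p_new = (1.0 - k) * p_pred         # error variance
    return x_new, p_new

random.seed(1)
q, r = 0.01, 0.25
x, x_est, p = 0.0, 0.0, 1.0
for _ in range(200):
    x += random.gauss(0.0, q ** 0.5)          # true state
    z = x + random.gauss(0.0, r ** 0.5)       # measurement
    x_est, p = kalman_step(x_est, p, z, q, r)
```

For this model the error variance converges to the positive root of p² + qp − qr = 0, which is the steady-state value used later in Remark 2.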

Second stage (optimal MMSE estimator): next, the optimal MMSE estimate f̂_k of the nonlinear function f(x_k) based on the measurements z_1^k also represents a conditional mean, that is,

f̂_k = E(f(x_k) | z_1^k) = ∫ f(x) p(x | z_1^k) dx,  (4)

where p(x | z_1^k) = N(x; x̂_k, P_k) is a multivariate conditional Gaussian probability density function with mean x̂_k and covariance P_k, so that

f̂_k = ∫ f(x) N(x; x̂_k, P_k) dx.  (5)

Thus, the best estimate in equation (4) represents the optimal MMSE estimator, f̂_k = f̂(x̂_k, P_k), which depends on the Kalman estimate x̂_k and error covariance P_k determined by KF equations (3).

*Remark 1. *(closed-form MMSE estimator). In the general case, the calculation of the optimal estimate f̂_k reduces to calculation of the multivariate Gaussian integral (5). The drawback of the estimator is the impossibility of calculating the integral in explicit form for an arbitrary nonlinear function f(x). Analytical calculation of the integral (a closed-form MMSE estimator) is possible only in the special cases considered in this paper. The closed-form estimators for distance metrics in terms of x̂_k and P_k are proposed in Sections 4 and 5.

The Euclidean distance between two points x = (x_1, …, x_n)′ and y = (y_1, …, y_n)′ is defined as

ρ(x, y) = ‖x − y‖ = [(x_1 − y_1)^2 + ⋯ + (x_n − y_n)^2]^(1/2).  (6)

In the particular case where x and y represent two points located on the 1-D line, the Euclidean distance reduces to the absolute value (see Figure 1), i.e.,

ρ(x, y) = |x − y|.  (7)

In Section 4, the MMSE estimator for the absolute value is comprehensively studied.

#### 4. Closed-Form MMSE Estimator for Absolute Value

##### 4.1. MMSE Estimator for Absolute Value of Linear Form

Lemma 1. *(MMSE estimator for |x|). Let x be a normal random variable, and let x̂ = E(x | z) and P = E[(x − x̂)^2 | z] be the MMSE estimate and error variance, respectively. Then, the MMSE estimator for the absolute value f = |x| has the following closed-form expression:

f̂ = E(|x| | z) = x̂[2Φ(x̂/σ) − 1] + σ(2/π)^(1/2) exp(−x̂^2/(2σ^2)),  σ = P^(1/2),  (8)

where Φ(x) is the cumulative distribution function of the standard normal distribution, Φ(x) = (2π)^(−1/2) ∫_(−∞)^x exp(−t^2/2) dt.*

The derivation of equation (8) is given in the Appendix.
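The closed-form expression (8) can be checked numerically against a Monte Carlo average; the helper names and the test values of x̂ and σ are illustrative:

```python
import math, random

# Closed-form conditional mean of |x| for x | z ~ N(x_hat, sigma^2)
# (Lemma 1), checked against a Monte Carlo average.
def norm_cdf(t):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def mmse_abs(x_hat, sigma):
    # E(|x| | z) = x_hat*(2*Phi(x_hat/sigma) - 1)
    #              + sigma*sqrt(2/pi)*exp(-x_hat^2 / (2*sigma^2))
    return (x_hat * (2.0 * norm_cdf(x_hat / sigma) - 1.0)
            + sigma * math.sqrt(2.0 / math.pi)
            * math.exp(-x_hat ** 2 / (2.0 * sigma ** 2)))

random.seed(0)
x_hat, sigma = 0.4, 1.5
n = 200000
mc = sum(abs(random.gauss(x_hat, sigma)) for _ in range(n)) / n
closed = mmse_abs(x_hat, sigma)
```

The two values agree up to Monte Carlo error; note also that the closed-form value always exceeds |x̂|, which is why the simple estimator |x̂| is biased low.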

Let ℓ = a′x + b be a linear form (LF) of the normal random vector x ∈ R^n, and let x̂ and P be the MMSE estimate and error covariance, respectively. Then, the MMSE estimate of the linear form and its error variance can be calculated as

ℓ̂ = a′x̂ + b,  σ_ℓ^2 = a′Pa,  (9)

and we have the following theorem.

Theorem 1. *(MMSE estimator for absolute value of LF). Let x ∈ R^n be a normal random vector, and let x̂ and P be the MMSE estimate and error covariance, respectively. Then, the closed-form MMSE estimator for the absolute value f = |ℓ| = |a′x + b| is defined by formula (8):

f̂ = ℓ̂[2Φ(ℓ̂/σ_ℓ) − 1] + σ_ℓ(2/π)^(1/2) exp(−ℓ̂^2/(2σ_ℓ^2)),  (10)

where ℓ̂ and σ_ℓ are determined by equation (9).*

The MMSE estimator (10) makes it possible to calculate distances measured in terms of the absolute value in *n*-dimensional space.

##### 4.2. Examples of MMSE Estimator for Distance between Points

Let x_k ∈ R^n be a normal state vector, and let x̂_k and P_k be the Kalman estimate and error covariance, respectively, k = 1, 2, ….

*Example 1. *(distance on 1-D line). The MMSE estimator for the distance ρ_k = |x_k − a_k| between the moving point x_k and a given sequence a_k on the 1-D line follows from Theorem 1 with ℓ̂_k = x̂_k − a_k and σ_ℓ^2 = P_k:

ρ̂_k = ℓ̂_k[2Φ(ℓ̂_k/σ_ℓ) − 1] + σ_ℓ(2/π)^(1/2) exp(−ℓ̂_k^2/(2σ_ℓ^2)).  (11)

*Example 2. *(distance between point and line). The shortest distance from the moving point x_k = (x_k^(1), x_k^(2))′ to the line ax^(1) + bx^(2) + c = 0 in the 2-D plane is shown in Figure 2. The distance is given by

ρ_k = |a x_k^(1) + b x_k^(2) + c| / (a^2 + b^2)^(1/2).  (12)

Applying equations (9) and (10) with the coefficient vector (a, b)′/(a^2 + b^2)^(1/2) and the constant c/(a^2 + b^2)^(1/2), we get the MMSE estimator for the shortest distance (12):

ρ̂_k = ℓ̂_k[2Φ(ℓ̂_k/σ_ℓ) − 1] + σ_ℓ(2/π)^(1/2) exp(−ℓ̂_k^2/(2σ_ℓ^2)),  (13)

where ℓ̂_k and σ_ℓ^2 are determined by equation (9):

ℓ̂_k = (a x̂_k^(1) + b x̂_k^(2) + c)/(a^2 + b^2)^(1/2),  σ_ℓ^2 = (a, b) P_k (a, b)′/(a^2 + b^2).  (14)

The MMSE estimator (12)–(14) can be generalized to 3-D space.

*Example 3. *(distance between point and plane). Similar to equation (12), the shortest distance between the moving point x_k = (x_k^(1), x_k^(2), x_k^(3))′ and the plane ax^(1) + bx^(2) + cx^(3) + d = 0 in 3-D space,

ρ_k = |a x_k^(1) + b x_k^(2) + c x_k^(3) + d| / (a^2 + b^2 + c^2)^(1/2),  (15)

is shown in Figure 3.

Applying equations (9) and (10) with the coefficient vector (a, b, c)′/(a^2 + b^2 + c^2)^(1/2) and the constant d/(a^2 + b^2 + c^2)^(1/2), we get

ρ̂_k = ℓ̂_k[2Φ(ℓ̂_k/σ_ℓ) − 1] + σ_ℓ(2/π)^(1/2) exp(−ℓ̂_k^2/(2σ_ℓ^2)),  (16)

where ℓ̂_k and σ_ℓ^2 are determined by equation (9):

ℓ̂_k = (a x̂_k^(1) + b x̂_k^(2) + c x̂_k^(3) + d)/(a^2 + b^2 + c^2)^(1/2),  σ_ℓ^2 = (a, b, c) P_k (a, b, c)′/(a^2 + b^2 + c^2).  (17)

The MMSE distance estimators in Theorem 1 and Examples 1–3 are summarized in Table 2.
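Theorem 1 can be sketched in code for a general linear form and applied to the point-to-line distance of Example 2; the line, the state estimate, and the covariance below are illustrative assumptions:

```python
import math

# MMSE estimate of |a'x + b0| (Theorem 1) for a Gaussian posterior
# with mean x_hat and covariance P; a minimal sketch.
def norm_cdf(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def mmse_abs_linear_form(a_vec, b0, x_hat, P):
    # Linear-form estimate and error variance, eq. (9).
    l_hat = sum(ai * xi for ai, xi in zip(a_vec, x_hat)) + b0
    var = sum(a_vec[i] * P[i][j] * a_vec[j]
              for i in range(len(a_vec)) for j in range(len(a_vec)))
    sigma = math.sqrt(var)
    # Closed-form MMSE estimate of the absolute value, eq. (10).
    return (l_hat * (2.0 * norm_cdf(l_hat / sigma) - 1.0)
            + sigma * math.sqrt(2.0 / math.pi)
            * math.exp(-l_hat ** 2 / (2.0 * var)))

# Line 3*x1 + 4*x2 - 10 = 0, normalized so the form equals the distance.
a, b, c = 3.0, 4.0, -10.0
n = math.hypot(a, b)                       # = 5
x_hat = [2.0, 2.0]
P = [[0.04, 0.0], [0.0, 0.04]]
d_hat = mmse_abs_linear_form([a / n, b / n], c / n, x_hat, P)
```

Here the estimated point lies well away from the line relative to the error spread, so the MMSE distance estimate is close to the naive distance |3·2 + 4·2 − 10|/5 = 0.8, as Remark 2 later explains.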

##### 4.3. Numerical Examples

In this section, numerical examples demonstrate the accuracy of the closed-form estimators for the absolute value |x|. The optimal MMSE estimator ρ̂ is compared with the *simple suboptimal* one ρ̃ = |x̂|.

###### 4.3.1. Estimation of Distance between Random Location and Given Point in 1-D Line

Let x_k be a scalar random position measured in additive white noise; then, the system model is

x_(k+1) = x_k + w_k,  z_k = x_k + v_k,  k = 0, 1, …,  (18)

where x_0 is the known initial condition and w_k ~ N(0, q) and v_k ~ N(0, r) are the uncorrelated white Gaussian noises.

The KF equation (3) gives the following:

x̂_k = x̂_(k−1) + K_k(z_k − x̂_(k−1)),  K_k = (P_(k−1) + q)/(P_(k−1) + q + r),
P_k = (1 − K_k)(P_(k−1) + q).  (19)

Consider the distance between x_k and the known point a, i.e., ρ_k = |x_k − a|. Then, the optimal MMSE estimate of the distance is defined by (10). Further, we are interested in the special case in which a = 0 and ρ_k = |x_k|. In this case, formula (10) represents the optimal estimate of the distance between the current position and the origin, i.e.,

ρ̂_k = x̂_k[2Φ(x̂_k/σ_k) − 1] + σ_k(2/π)^(1/2) exp(−x̂_k^2/(2σ_k^2)),  σ_k = P_k^(1/2).  (20)

In parallel to the optimal estimate (20), consider the simple suboptimal estimate

ρ̃_k = |x̂_k|.  (21)

*Remark 2. *Reviewing formula (20), we find the following. If the ratio |x̂_k|/σ_k is large, then 2Φ(x̂_k/σ_k) − 1 ≈ ±1 and exp(−x̂_k^2/(2σ_k^2)) ≈ 0, which implies that both estimates are quite close, i.e., ρ̂_k ≈ |x̂_k| = ρ̃_k. Assuming the estimate x̂_k is far enough from zero, the ratio |x̂_k|/σ_k is large when the error variance P_k is small. Using (19), the steady-state value of the variance P = lim_(k→∞) P_k satisfies the quadratic equation P^2 + qP − qr = 0 with solution P = [−q + (q^2 + 4qr)^(1/2)]/2. Since the variance depends on the noise statistics q and r, this fact can be used in practice to compare the proposed estimators. For example, if the estimate x̂_k is far enough from zero and the product qr is small, then P is small and |x̂_k|/σ_k is large. In this case, both estimators are close, ρ̂_k ≈ ρ̃_k. Simulation results confirm this conclusion.

Next, we test the efficiency of the proposed estimators. The estimators are compared under different values of the noise variances q and r. The following scenarios were considered: Case 1 (small noises), Case 2 (medium noises), and Case 3 (large noises). Both estimators were run with the same random noises for a fair comparison. A Monte Carlo simulation with 1000 runs was applied in calculation of the root mean square errors (RMSEs) of the optimal and simple estimates, and the average RMSE over the time interval is defined as the time average of the instantaneous RMSE values. The simulation results are illustrated in Table 3 and Figures 4–7.

In Case 1, interest is in zero and nonzero initial conditions. With a zero initial condition, the signal and its estimate are close to zero; the values of |x̂_k| and |x̂_k|/σ_k are then not large, and therefore the optimal and suboptimal estimates differ, as shown in Figure 4 and confirmed by the average RMSE values in Table 3. With a nonzero initial condition, the estimate x̂_k is far enough from zero; according to Remark 2, the optimal and suboptimal estimates are approximately equal, as shown in Figure 5, and the nearly equal average RMSE values confirm this fact. In Cases 2 and 3, the variance is not small; therefore, the initial condition does not play a significant role in comparing the two estimators. In these cases, the optimal estimator ρ̂_k performs better than the simple suboptimal one ρ̃_k. Typical plots are shown in Figures 6 and 7, and the RMSE values in Table 3 confirm that conclusion.

Thus, the simulation results in Section 4.3.1 show that the optimal estimator is suitable for practical applications.
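The comparison of Section 4.3.1 can be reproduced in miniature; the noise levels, run count, and zero initial condition below are illustrative choices that keep the state near the origin, the regime where the two estimators differ most:

```python
import math, random

# Monte Carlo comparison of the optimal distance estimate (20) with the
# simple estimate |x_hat| on the scalar random-walk model (18)-(19).
def norm_cdf(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def mmse_abs(x_hat, sigma):
    # Closed-form conditional mean of |x|, eq. (20).
    return (x_hat * (2.0 * norm_cdf(x_hat / sigma) - 1.0)
            + sigma * math.sqrt(2.0 / math.pi)
            * math.exp(-x_hat ** 2 / (2.0 * sigma ** 2)))

random.seed(7)
q, r, steps, runs = 0.01, 1.0, 30, 2000
se_opt = se_simple = 0.0
for _ in range(runs):
    x, x_est, p = 0.0, 0.0, 1.0
    for _ in range(steps):
        x += random.gauss(0.0, math.sqrt(q))       # true position
        z = x + random.gauss(0.0, math.sqrt(r))    # measurement
        p_pred = p + q                             # KF time update
        k = p_pred / (p_pred + r)                  # KF gain
        x_est += k * (z - x_est)                   # KF estimate
        p = (1.0 - k) * p_pred                     # KF error variance
    se_opt += (abs(x) - mmse_abs(x_est, math.sqrt(p))) ** 2
    se_simple += (abs(x) - abs(x_est)) ** 2
rmse_opt = math.sqrt(se_opt / runs)
rmse_simple = math.sqrt(se_simple / runs)
```

Averaged over the runs, the optimal estimate attains a lower RMSE than the simple one, in line with the Case 2 and Case 3 results in Table 3.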


###### 4.3.2. Estimation of Distance between Two Random Points in 1-D Line

Consider the motion of two random points x_k^(1) and x_k^(2) on the 1-D line. Assume that the evolution of the state vector x_k = (x_k^(1), x_k^(2))′ from time k to k + 1 is defined by the random walk model:

x_(k+1)^(i) = x_k^(i) + w_k^(i),  i = 1, 2,  (22)

where x_0^(1) and x_0^(2) are the known initial conditions and w_k^(1) ~ N(0, q_1) and w_k^(2) ~ N(0, q_2) are uncorrelated white Gaussian noises.

Assuming we measure the true positions of the points with correlated measurement white noises v_k^(1) and v_k^(2), respectively, the measurement equation is

z_k^(i) = x_k^(i) + v_k^(i),  i = 1, 2,  (23)

where v_k^(1) ~ N(0, r_1), v_k^(2) ~ N(0, r_2), and E[v_k^(1) v_k^(2)] = r_12.

Our goal is to estimate the unknown distance ρ_k = |x_k^(1) − x_k^(2)| between the current locations of the points x_k^(1) and x_k^(2).

According to the proposed two-stage estimation procedure, the optimal Kalman estimate x̂_k = (x̂_k^(1), x̂_k^(2))′ and error covariance P_k computed at the first stage are used at the second stage for estimation of the distance ρ_k. Using formulas (9) and (10) with a = (1, −1)′ and b = 0, we obtain the best MMSE estimate for the distance:

ρ̂_k = ℓ̂_k[2Φ(ℓ̂_k/σ_ℓ) − 1] + σ_ℓ(2/π)^(1/2) exp(−ℓ̂_k^2/(2σ_ℓ^2)),
ℓ̂_k = x̂_k^(1) − x̂_k^(2),  σ_ℓ^2 = P_k(1,1) − 2P_k(1,2) + P_k(2,2).  (24)

In parallel with the optimal distance estimator (24), we consider the simple suboptimal estimator ρ̃_k = |x̂_k^(1) − x̂_k^(2)|.

*Remark 3. *As we see, the optimal estimates of the distances in (20) and (24) depend on the functions Φ(·) and exp(·). The functions in formulas (20) and (24) are evaluated at the pairs (x̂_k, σ_k) and (ℓ̂_k, σ_ℓ), respectively. The second pair depends on the state estimate x̂_k and error covariance P_k. Therefore, Remark 2 is also valid for models (22) and (23). For example, if the estimate ℓ̂_k is far enough from zero and the variance σ_ℓ^2 is small, then ρ̂_k ≈ ρ̃_k. The simulation results in Figure 8 and the very close values of the average RMSEs confirm this fact.

In addition, we are interested in the following new scenarios. Case 1: both points x_k^(1) and x_k^(2) are fixed, and their positions are measured with small noises. Case 2: the first point is fixed, but the movement of the second one is subject to a small noise. Case 3: the movement of both points is subject to a medium noise. The model parameters and simulation results for the scenarios are given in Table 4. From Table 4, we observe a strong difference between the average RMSEs of the optimal and simple estimates. It is not a surprise that the optimal estimator (24) is better than the suboptimal one.


#### 5. MMSE Estimators for Bilinear and Quadratic Forms

##### 5.1. Optimal Closed-Form MMSE Estimator for Quadratic Form

Consider a quadratic form (QF) of the state vector x_k ∈ R^n:

Q(x_k) = x_k′ A x_k,  (25)

where A is a known n × n matrix.

In this case, the optimal MMSE estimator (4) can be explicitly calculated in terms of the Kalman estimate x̂_k and error covariance P_k.

Theorem 2. *(MMSE estimator for QF). Let x_k be a normal random vector, and let x̂_k and P_k be the Kalman estimate and error covariance, respectively. Then, the optimal MMSE estimator for the QF (25) has the following closed-form structure:

Q̂_k = E(x_k′ A x_k | z_1^k) = x̂_k′ A x̂_k + tr(A P_k).  (26)*

*Proof. *Using the formulas E(x′Ax) = E(x)′A E(x) + tr(A Cov(x)) and E(x_k | z_1^k) = x̂_k, Cov(x_k | z_1^k) = P_k, we obtain

E(x_k′ A x_k | z_1^k) = x̂_k′ A x̂_k + tr(A P_k).  (27)

In parallel to the optimal quadratic estimator (26), we consider the simple suboptimal estimator denoted as Q̃_k, which is obtained by direct calculation of the QF at the point x̂_k, such as

Q̃_k = x̂_k′ A x̂_k.  (28)

The simple estimator (28) depends only on the Kalman estimate x̂_k and does not require the KF error covariance P_k, in contrast to the optimal one (26). The following result compares the estimation accuracy of the optimal and suboptimal quadratic estimators.
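The identity behind the quadratic estimator (26) can be verified by Monte Carlo for a Gaussian vector; the 2-D mean, covariance factor, and matrix A below are illustrative:

```python
import random

# Monte Carlo check of E(x'Ax) = x_hat'A x_hat + tr(A P)
# for x ~ N(x_hat, P), with P built from a Cholesky factor L.
random.seed(3)
x_hat = [1.0, -2.0]
L = [[0.8, 0.0], [0.3, 0.5]]            # P = L L'
P = [[sum(L[i][k] * L[j][k] for k in range(2)) for j in range(2)]
     for i in range(2)]
A = [[2.0, 1.0], [1.0, 3.0]]

def quad(v):
    # v'Av for a 2-D vector v.
    return sum(v[i] * A[i][j] * v[j] for i in range(2) for j in range(2))

# Closed-form value: x_hat'A x_hat + tr(A P).
closed = quad(x_hat) + sum(A[i][j] * P[j][i]
                           for i in range(2) for j in range(2))
# Monte Carlo average of x'Ax over samples x = x_hat + L g, g ~ N(0, I).
n = 200000
mc = 0.0
for _ in range(n):
    g = [random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)]
    x = [x_hat[i] + sum(L[i][k] * g[k] for k in range(2)) for i in range(2)]
    mc += quad(x)
mc /= n
```

The sample average converges to the closed-form value, while the simple estimator x̂′Ax̂ misses the tr(AP) term entirely.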

Lemma 2. *(difference between MSEs for quadratic estimators). The difference between the true MSEs for the optimal and simple suboptimal quadratic estimators is

MSE(Q̃_k) − MSE(Q̂_k) = [tr(A P_k)]^2.  (29)*

*Proof. *Using the fact that the MMSE estimator is unbiased, E(Q_k − Q̂_k) = 0, and the equality Q̃_k = Q̂_k − tr(A P_k), we obtain

MSE(Q̃_k) = E[(Q_k − Q̂_k) + tr(A P_k)]^2 = MSE(Q̂_k) + [tr(A P_k)]^2.  (30)

Let us illustrate Theorem 2 and Lemma 2 on the example of the squared norm of a random vector, Q(x_k) = ‖x_k‖^2. Then, A = I_n, and the quadratic estimators and the difference between their MSEs take the form

Q̂_k = ‖x̂_k‖^2 + tr(P_k),  Q̃_k = ‖x̂_k‖^2,  MSE(Q̃_k) − MSE(Q̂_k) = [tr(P_k)]^2.

We see the difference depends on the quality of the KF data processing (3).

##### 5.2. Optimal Closed-Form MMSE Estimator for Bilinear Form

Let x ∈ R^n and y ∈ R^m be two arbitrary state vectors. Then, a bilinear form (BLF) on the state space can be written as follows:

B(x, y) = x′ A y,  (31)

where A is a known n × m matrix.

Note that a BLF can be written as a QF in the stacked vector ζ = (x′, y′)′. In this case,

B(x, y) = ζ′ Ã ζ,  Ã = (1/2) [0, A; A′, 0].  (32)

For the QF (32), the optimal bilinear estimator can be explicitly calculated in terms of the Kalman estimate ζ̂ = (x̂′, ŷ′)′ and the block error covariance matrix

P = [P_xx, P_xy; P_yx, P_yy],  (33)

where P_xy = P_yx′ is the cross covariance between the estimation errors x − x̂ and y − ŷ.

Applying Theorem 2 to the QF (32) and taking into consideration the block structure of the matrix (33), we have the following.

Theorem 3. *(MMSE estimator for BLF). Let ζ = (x′, y′)′ be a joint normal random vector, and let ζ̂ and P be the Kalman estimate and block error covariance matrix (33), respectively. Then, the optimal MMSE estimator for the BLF (31) has the following closed-form structure:

B̂ = E(x′ A y | z_1^k) = x̂′ A ŷ + tr(A P_yx).  (34)*

*Example 4. *(estimation of inner product and squared Euclidean distance). Using the bilinear estimator (34) with A = I_n, the MMSE estimator for the inner product ⟨x, y⟩ = x′y takes the form

⟨x, y⟩^ = x̂′ŷ + tr(P_yx).  (35)

Next, we calculate the optimal MMSE estimator for the squared Euclidean distance between two points, ρ^2 = ‖x − y‖^2 = e′e, where e = x − y. The Kalman estimate and error covariance of the difference e take the form

ê = x̂ − ŷ,  P_e = P_xx − P_xy − P_yx + P_yy.  (36)

Applying the quadratic estimator (26) with A = I_n, we obtain the MMSE estimator for the squared Euclidean distance:

ρ̂^2 = ‖ê‖^2 + tr(P_e).  (37)

The MMSE estimators for bilinear and quadratic forms are summarized in Table 5.
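The estimators of Example 4 satisfy an exact algebraic consistency: since ‖x − y‖² = ‖x‖² − 2⟨x, y⟩ + ‖y‖², the squared-distance estimate (37) must equal the same combination of the norm and inner-product estimates. A deterministic check with illustrative numbers:

```python
# Consistency check of the bilinear estimator (34)/(35) and the
# squared-Euclidean-distance estimator (37).  All numbers illustrative.
x_hat = [1.0, 2.0]
y_hat = [3.0, 1.0]
Pxx = [[0.5, 0.1], [0.1, 0.4]]
Pyy = [[0.3, 0.0], [0.0, 0.2]]
Pxy = [[0.05, 0.02], [0.01, 0.03]]   # cross covariance E[(x-x_hat)(y-y_hat)']

def tr(M):
    return sum(M[i][i] for i in range(len(M)))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# MMSE estimates of ||x||^2, ||y||^2 and the inner product <x, y>;
# note tr(P_yx) = tr(P_xy) since P_yx is the transpose of P_xy.
norm2_x = dot(x_hat, x_hat) + tr(Pxx)
norm2_y = dot(y_hat, y_hat) + tr(Pyy)
inner = dot(x_hat, y_hat) + tr(Pxy)

# MMSE estimate of the squared Euclidean distance, eq. (36)-(37).
e_hat = [a - b for a, b in zip(x_hat, y_hat)]
Pe = [[Pxx[i][j] - Pxy[i][j] - Pxy[j][i] + Pyy[i][j] for j in range(2)]
      for i in range(2)]
dist2 = dot(e_hat, e_hat) + tr(Pe)
```

Both routes give the same number, confirming that (35) and (37) are consistent specializations of Theorems 2 and 3.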

##### 5.3. Practical Usefulness of Squared Euclidean Distance

In many practical problems, for example, finding the shortest distance from a point to a curve or comparing a distance with a threshold value d, there is no need to calculate the original Euclidean distance ρ; we just need to calculate its square ρ^2 due to the equivalence of the problems: minimizing ρ is equivalent to minimizing ρ^2, and ρ ≤ d is equivalent to ρ^2 ≤ d^2. In such situations, the optimal quadratic estimator (37) for the squared Euclidean distance can be successfully used.

*Example 5. *(deviation of normal and nominal trajectories). Suppose that a piecewise feedback control law depends on the distance ρ between the normal and nominal trajectories. For example, it is given by

u = u_1 if ρ ≤ d,  u = u_2 if ρ > d,  (38)

where ρ is the Euclidean distance and d is the distance threshold (see Figure 9).

In view of the above, rewrite the control law in the equivalent form:

u = u_1 if ρ^2 ≤ d^2,  u = u_2 if ρ^2 > d^2,  (39)

where ρ^2 is the square of the Euclidean distance and d^2 is the new threshold.

Using the quadratic estimator (37) for the squared distance ρ^2, we obtain the MMSE estimator ρ̂^2, which can be used in the control law (39):

u = u_1 if ρ̂^2 ≤ d^2,  u = u_2 if ρ̂^2 > d^2.  (40)

In the next section, we discuss application of the linear, bilinear, and quadratic estimators (Theorems 1–3) for estimation of composite nonlinear functions.
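Example 5 can be sketched in a few lines; the threshold, the estimated deviation, and its covariance are illustrative assumptions, and the symbols u1/u2 stand for the two control actions:

```python
# Threshold control using the squared-distance estimate: compare the
# MMSE estimate of rho^2 with d^2 instead of estimating rho itself.
def control(dist2_hat, d):
    # u1 when the estimated squared deviation is within the threshold.
    return "u1" if dist2_hat <= d * d else "u2"

# Estimated deviation e_hat between trajectories, with error covariance Pe.
e_hat = [1.5, 2.0]
Pe = [[0.2, 0.0], [0.0, 0.3]]
# MMSE estimate of the squared distance: ||e_hat||^2 + tr(Pe).
dist2_hat = sum(v * v for v in e_hat) + Pe[0][0] + Pe[1][1]
u = control(dist2_hat, 3.0)
```

No square root is ever taken, which is exactly the point of working with ρ² instead of ρ.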

#### 6. Suboptimal Estimator for Composite Nonlinear Functions

##### 6.1. Definition of Composite Function

Consider a composite function depending on LFs, QFs, and BLFs, such as

f(x) = φ(g_1(x), …, g_m(x)),  (41)

where each inside function g_i(x) is a linear, quadratic, or bilinear form of the state components,

g_i(x) ∈ {a′x + b, x′Ax, x′Ay}.  (42)

*Example 6. *(composite and inside functions in object tracking). Let the object state vector consist of the position and corresponding velocity components in the Cartesian coordinates, i.e.,

x = (x, y, z, ẋ, ẏ, ż)′.  (43)

In the spherical coordinates, we assume that a Doppler radar is located at the origin of the Cartesian coordinates, and it measures the following quantities obtained via nonlinear composite functions of the state components depending on LFs, QFs, and BLFs:

r = (x^2 + y^2 + z^2)^(1/2),  b = arctan(y/x),  e = arctan(z/(x^2 + y^2)^(1/2)),  ṙ = (xẋ + yẏ + zż)/r,  (44)

where r is the range (distance), b is the bearing angle, e is the elevation angle, and ṙ is the range rate.

##### 6.2. Suboptimal Estimator for Composite Functions

Given the Kalman estimate x̂_k and covariance P_k, we estimate a quantity obtained via the composite function f(x_k) = φ(g_1(x_k), …, g_m(x_k)). The idea of the algorithm is based on the optimal MMSE estimators for LF, QF, and BLF proposed in equations (10), (26), and (34), respectively. We have

ĝ_i = E(g_i(x_k) | z_1^k),  i = 1, …, m.  (45)

Replacing the unknown inside functions g_i(x_k) with the corresponding optimal estimates (45), we obtain the novel suboptimal estimator for the composite function f(x_k), i.e.,

f̂_k = φ(ĝ_1, …, ĝ_m).  (46)

*Example 7. *(estimation of cosine of angle). Let ζ = (x′, y′)′ be a joint normal state vector, and let

ζ̂ = (x̂′, ŷ′)′,  P = [P_xx, P_xy; P_yx, P_yy]  (47)

be the Kalman estimate and block error covariance.

The cosine of the angle θ between the two vectors is equal to

cos θ = x′y / (‖x‖ ‖y‖).  (48)

We observe that the ratio (48) represents the composite function φ(g_1, g_2, g_3) = g_1/(g_2 g_3)^(1/2) depending on the three inside functions

g_1 = x′y,  g_2 = ‖x‖^2,  g_3 = ‖y‖^2.  (49)

The optimal MMSE estimators for the inside functions are known. Using equation (45), we have

ĝ_1 = x̂′ŷ + tr(P_yx),  ĝ_2 = ‖x̂‖^2 + tr(P_xx),  ĝ_3 = ‖ŷ‖^2 + tr(P_yy).  (50)

Replacing the inside functions with their estimates, we get the suboptimal estimator for the cosine of the angle:

cos θ^ = ĝ_1 / (ĝ_2 ĝ_3)^(1/2).  (51)

A numerical example illustrates the applicability of all the estimators proposed in the paper.
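The plug-in cosine estimator of Example 7 is a direct computation once the Kalman outputs are given; the estimates and covariance blocks below are illustrative, with uncorrelated errors assumed for simplicity:

```python
import math

# Suboptimal plug-in estimator (51) of cos(theta) between x and y:
# replace the inner product and squared norms by their MMSE estimates.
x_hat = [2.0, 0.0]
y_hat = [1.0, 1.0]
Pxx = [[0.1, 0.0], [0.0, 0.1]]
Pyy = [[0.2, 0.0], [0.0, 0.2]]
Pxy = [[0.0, 0.0], [0.0, 0.0]]          # assume uncorrelated errors

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def tr(M):
    return sum(M[i][i] for i in range(len(M)))

inner = dot(x_hat, y_hat) + tr(Pxy)     # MMSE estimate of <x, y>
norm2_x = dot(x_hat, x_hat) + tr(Pxx)   # MMSE estimate of ||x||^2
norm2_y = dot(y_hat, y_hat) + tr(Pyy)   # MMSE estimate of ||y||^2
cos_hat = inner / math.sqrt(norm2_x * norm2_y)
```

Because the norm estimates in the denominator include the positive trace terms, the plug-in estimate is pulled toward zero relative to the naive cosine x̂′ŷ/(‖x̂‖‖ŷ‖), and the two agree as the error covariances vanish.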

#### 7. Numerical Example: Motion in a Plane

In this section, we estimate the range and the bearing angle in 2-D motion of an object. Because of the difficulty of obtaining analytical closed-form expressions for the optimal range and bearing estimators, we apply the simple estimator and the estimator based on the composite functions. In addition, we are interested in the angle between the two state vectors x_k and x_(k−1) at time instants k and k − 1, respectively.

##### 7.1. Suboptimal Estimators for Range-Angle Response

The example of Section 4.3.2 is considered again. Consider the 2-D models (22) and (23) describing the motion of the two random points x_k^(1) and x_k^(2). To calculate the range r_k, the tangent of the bearing angle b_k, and the cosine of the angle θ_k between the state vectors x_k = (x_k^(1), x_k^(2))′ and x_(k−1), we use the following formulas:

r_k = ‖x_k‖,  tan(b_k) = x_k^(2)/x_k^(1),  cos θ_k = x_k′ x_(k−1)/(‖x_k‖ ‖x_(k−1)‖).  (52)

The following estimators for the range-angle responses (52) are illustrated and compared:

(1) Simple estimator:

r̃_k = ‖x̂_k‖,  tan(b̃_k) = x̂_k^(2)/x̂_k^(1),  cos θ̃_k = x̂_k′ x̂_(k−1)/(‖x̂_k‖ ‖x̂_(k−1)‖).  (53)

(2) Estimator for composite functions:

r̂_k = [‖x̂_k‖^2 + tr(P_k)]^(1/2),  tan(b̂_k) = x̂_k^(2)/x̂_k^(1),
cos θ̂_k = [x̂_k′ x̂_(k−1) + tr(P_(k,k−1))] / {[‖x̂_k‖^2 + tr(P_k)]^(1/2) [‖x̂_(k−1)‖^2 + tr(P_(k−1))]^(1/2)}.  (54)

Note that the simple and composite estimates tan(b̃_k) and tan(b̂_k) for the bearing angle coincide. In equation (54), P_(k,k−1) denotes the cross covariance between the estimation errors at times k and k − 1, satisfying the following recursion:

P_(k,k−1) = (I − K_k) P_(k−1).  (55)

In equations (53)–(55), x̂_k, K_k, and P_k represent the Kalman estimate, KF gain, and error covariance, respectively.
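The two range estimators above differ only by the trace term under the square root; a minimal sketch with an illustrative estimate and covariance:

```python
import math

# Simple vs composite range estimators of Section 7.1: the composite
# estimate uses the MMSE squared-norm estimate under the square root.
x_hat = [3.0, 4.0]                      # estimated 2-D position
P = [[0.5, 0.1], [0.1, 0.5]]            # KF error covariance

r_simple = math.hypot(x_hat[0], x_hat[1])       # eq. (53): ||x_hat||
norm2_hat = (x_hat[0] ** 2 + x_hat[1] ** 2
             + P[0][0] + P[1][1])               # MMSE estimate of ||x||^2
r_composite = math.sqrt(norm2_hat)              # eq. (54)
tan_b = x_hat[1] / x_hat[0]             # bearing: same in both estimators
```

The composite range estimate is always at least the simple one, and the gap closes as the filter variances tend to zero, matching the convergence remark in the simulation results.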

##### 7.2. Simulation Results

The simple and composite estimators were run with the same random noises for a fair comparison. A Monte Carlo simulation with 1000 runs was applied in calculation of the RMSEs for the range r_k, the bearing angle b_k, and the angle θ_k between the state vectors. Figures 10–12 show the range and angle estimates for the model parameters in equations (22) and (23). The following results about the relative performance of the above estimators can be made. (1) Figure 10(a) presents the range estimates as well as the true range r_k, and Figure 10(b) compares the RMSEs of the range estimators. From Figures 10(a) and 10(b), the composite range estimator r̂_k has better performance than the simple one r̃_k. This is due to the fact that the MMSE estimate of the squared norm contains the error variances as additional terms; if the variances tend to zero, then the two range estimators converge, i.e., r̂_k ≈ r̃_k. (2) Figure 11 shows the true value of the tangent of the bearing angle and the corresponding simple (or composite) estimate. We observe a negligible difference between the true tangent value and its estimate, which demonstrates reasonable accuracy of the estimator for the unknown ratio (tangent of angle). (3) Similar simulation procedures, as in (1) and (2), were used to check the performance of the estimators cos θ̃_k and cos θ̂_k. The true cosine value is shown in Figure 12 for comparison with the estimated values.


For detailed consideration of the proposed estimators, we divide the whole time interval into two subintervals. From Figure 12, we observe that on the first subinterval, the composite estimate cos θ̂_k is better than the simple one cos θ̃_k, and on the second subinterval, the difference between them is negligible. This is also confirmed by the average RMSE values presented in Table 6.

Note that both estimators cos θ̃_k and cos θ̂_k are based on the MMSE estimators for a squared norm and inner product. Therefore, the difference between them becomes small if the KF error variances are small (see equations (53) and (54)). In our case, the variances converge to small steady-state values.

#### 8. Conclusion

In this paper, we propose a novel MMSE approach for the estimation of distance metrics under the Kalman filtering framework. The main contributions of the paper are listed in the following.

Firstly, an optimal two-stage MMSE estimator for an arbitrary nonlinear function of a state vector is proposed. The distance metric is an important practical case of such nonlinearities, detailed study of which is given in the paper. Implementation of the MMSE estimator is reduced to calculation of the multivariate Gaussian integral. To avoid the difficulties associated with its calculation, the concept of a closed-form estimator depending on the Kalman filter statistics is introduced. We establish relations between the Euclidean metrics and the closed-form estimator, which lead to simple compact formulas for the real-life distances between points presented in Table 2.

Secondly, an important class of bilinear and quadratic estimators is comprehensively studied. These estimators are applied to a square of norm, Euclidean distance, and inner product. Table 5 summarizes the results. Moreover, an effective low-complexity suboptimal estimator for nonlinear composite functions is developed using the MMSE bilinear and quadratic estimators. As shown in Section 6.1, radar tracking range-angle responses are described by the composite functions.

Simulation and experimental results show that the proposed estimators perform significantly better than the existing suboptimal distance or angle estimators such as a simple estimator defined in the paper. The low-complexity estimator developed in Section 6.1 is quite promising for radar data processing. Also, the numerical results confirm the fact that the more accurate the Kalman estimate of a state vector, the more accurately we can obtain the range and angle estimates.

#### Appendix

*Proof of Lemma 1. *The derivation of formula (8): direct calculation of the Gaussian integral gives

E(|x| | z) = ∫ |x| N(x; x̂, σ^2) dx = ∫_0^∞ x N(x; x̂, σ^2) dx − ∫_(−∞)^0 x N(x; x̂, σ^2) dx
= x̂[2Φ(x̂/σ) − 1] + σ(2/π)^(1/2) exp(−x̂^2/(2σ^2)),

which is the closed-form expression (8).