Research Article | Open Access
Vladimir Shin, Georgy Shevlyakov, Woohyun Jeong, Yoonsoo Kim, "Closed-Form Distance Estimators under Kalman Filtering Framework with Application to Object Tracking", Mathematical Problems in Engineering, vol. 2020, Article ID 9141735, 16 pages, 2020. https://doi.org/10.1155/2020/9141735
Closed-Form Distance Estimators under Kalman Filtering Framework with Application to Object Tracking
In this paper, the minimum mean square error (MMSE) estimation problem for calculating distances between two signals via the Kalman filtering framework is considered. The developed algorithm includes two stages: the Kalman estimate of a state vector computed at the first stage is nonlinearly transformed at the second stage based on a distance function and the MMSE criterion. In general, the most challenging aspect of applying the distance estimator is the calculation of a multivariate Gaussian integral. However, this can be successfully overcome for specific metrics between two points on a line, between a point and a line, between a point and a plane, and others. In these cases, the MMSE estimator is defined by an analytical closed-form expression. We derive the exact closed-form bilinear and quadratic MMSE estimators, which can be effectively applied to calculate an inner product, squared norm, and Euclidean distance. A novel low-complexity suboptimal estimator for special composite functions of linear, bilinear, and quadratic forms is proposed; radar range-angle responses are described by such functions. The proposed estimators are validated through a series of experiments using real models and metrics. Experimental results show that the MMSE estimators outperform existing estimators that calculate distance and angle in a nonoptimal manner.
The problem of measuring the distance between real-valued signals or images arises in most areas of scientific research. In particular, the familiar Euclidean distance plays a prominent role in many important application contexts, not only in engineering, economics, statistics, and decision theory, but also in fields such as machine learning, cryptography, and image recognition. Statistical methods related to distance estimation can be categorized into the image and signal processing areas.
The concept of a distance metric is widely used in image processing and computer vision [1–5] (also see references therein). The distance provides a quantitative measure of the degree of match between two images or objects. These objects might be two profiles of persons, a person and a target profile, camera of a robot and people, or any two vectors taken across the same features (variables) derived from color, shape, and/or texture information. Image similarity measures play an important role in many image algorithms and applications including retrieval, classification, change detection, quality evaluation, and registration [6–12].
This paper deals with distance estimation between random signals. In signal processing, a good distance metric helps improve the performance of classification, clustering and localization in wireless sensor networks, radar tracking, and other applications [13–20]. The Bayesian classification approach based on the Euclidean and Mahalanobis distances is often used in discriminant analysis. A survey of classification procedures that minimize a distance between raw signals and classes in a multifeature space is given in [21, 22]. A distance estimation algorithm based on goodness-of-fit functions, in which the best parameters of the fitting functions are calculated from training data, is considered in . An algorithm for estimating walking distance using a wrist-mounted inertial measurement unit is proposed in . The concept of distance between two samples or between two variables is fundamental in statistics because a sum of squares of independent normal random variables has a chi-square distribution. Knowledge of this distribution, together with the usual approximations, yields confidence intervals for distance metrics [25, 26]. The use of Taylor series expansions for aircraft geometric-height estimation using range and bearing measurements is addressed in [27, 28]. The minimum mean square error (MMSE) estimation of a state vector in the presence of information about the absolute value of a difference between its subvectors is proposed in .
In many applications, it is of interest to estimate not only the position or state of an object but also a nonlinear distance function, which provides information for effective control in target tracking. However, most authors have not focused on simultaneous estimation of the state and distance functions in dynamical models such as the Kalman filtering framework.
The problem of estimating the distance function, , between two vector signals and is considered in this paper; in contrast to the aforementioned references, both signals and are unknown, and they should be estimated simultaneously with the function using indirect measurements. For example, suppose we observe the positions of two points and on a line, and the distance between the points is the absolute difference, i.e., . The positions and , and consequently the distance , are unknown, and our problem is to optimally calculate the three estimates . Note that the simple distance estimator is not an optimal solution.
The purpose of this paper is to derive analytical closed-form MMSE estimators for distance functions between random signals such as the absolute value, the Euclidean distance, the inner product, and bilinear and quadratic forms. The advantage of these estimators is the quick and accurate calculation of distance metrics compared with approximate or iterative estimators. The estimators are further studied in the object tracking problem, where we obtain practically important results for distance estimation of signals in linear Gaussian discrete-time systems.
The following list highlights the primary contributions of this paper:
(1) Extension of the MMSE approach to the estimation of nonlinear functions of a state vector within the Kalman filtering framework. The obtained MMSE-optimal solution represents a two-stage estimator.
(2) Derivation of analytical expressions for different metrics between two points on a line, between a point and a line, and between a point and a plane. We establish that the obtained estimators are compact closed-form formulas depending on the Kalman filter state estimates and error covariance.
(3) Investigation and application of the MMSE estimators for quadratic and bilinear forms of a state vector, including the estimators for the square of a norm , the square of the Euclidean distance , and the inner product . A novel low-complexity algorithm for suboptimal estimation of a special class of composite functions is proposed. Tracking radar responses such as range, angles, and range rate are described by such functions.
(4) Demonstration of the performance of the proposed MMSE estimators through real examples, illustrating their theoretical and practical usefulness.
This paper is organized as follows. Section 2 presents a statement of the MMSE estimation problem for an arbitrary nonlinear function of a state vector within a Kalman filtering framework. In Section 3, the general MMSE estimator is proposed, its computational complexity is discussed, and the concept of a closed-form estimator is introduced. In Section 4, the closed-form MMSE estimator for the absolute value of a linear form of a state vector is derived (Theorem 1). In particular cases, the estimator calculates distances between two points on a 1-D line, between a point and a line in the 2-D plane, and between a point and a plane in 3-D space; a comparative analysis of the estimator via several practical examples is presented. In Section 5, the MMSE estimators for quadratic and bilinear forms of a state vector are comprehensively studied (Theorems 2 and 3). Effective matrix formulas for the quadratic and bilinear MMSE estimators are derived and applied to the Euclidean distance, norm, and inner product of vector signals. In Section 6, a low-complexity suboptimal estimator for composite nonlinear functions is proposed and recommended for the calculation of radar range-angle responses. In Section 7, the efficiency of the suboptimal estimator is demonstrated on a 2-D dynamical model. Finally, we conclude the paper in Section 8. The list of main notations is given in Table 1.
2. Problem Statement
The basic framework for the Kalman filter involves estimation of the state of a discrete-time linear dynamical system with additive Gaussian white noise:

$x_{k+1} = F_k x_k + w_k, \qquad y_k = H_k x_k + v_k, \qquad k = 0, 1, \ldots,$

where $x_k$ is a state vector, $y_k$ is a measurement vector, and $w_k$ and $v_k$ are zero-mean Gaussian white noises with process and measurement noise covariances $Q_k$ and $R_k$, respectively. The initial state and the process and measurement noises are mutually uncorrelated.
In parallel with the state-space model (1), consider the nonlinear function of the state vector: which in a particular case represents a distance metric in .
There are a multitude of statistics-based methods for estimating the unknown value from the sensor measurements . We focus on the MMSE approach, which minimizes the mean square error (MSE), , a common measure of estimator quality.
The MMSE estimator is the conditional mean (expectation) of the unknown given the known observed value of the measurements, [30, 31]. The most challenging problem in the MMSE approach is how to calculate the conditional mean. In this paper, explicit formulas for distance metrics within the Kalman filtering framework are derived.
3. General Formula for Optimal Two-Stage MMSE Estimator
In this section, the optimal MMSE estimator for the general function of a state vector is proposed. It includes two stages: the optimal Kalman estimate of the state vector computed at the first stage is used at the second stage for estimation of .
First stage (calculation of Kalman estimate): the mean square estimate of the state based on the measurements and error covariance are described by the recursive Kalman filter (KF) equations [30, 31]:where and are the time update estimate and error covariance, respectively, and is the filter gain matrix.
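For concreteness, the first-stage recursion can be sketched as a generic textbook Kalman filter; the matrix names F, H, Q, R below are illustrative placeholders for the model matrices of the state-space model, not necessarily the paper's exact notation:

```python
import numpy as np

def kalman_step(x_est, P, y, F, H, Q, R):
    """One recursion of the first-stage Kalman filter:
    time update (prediction) followed by measurement update."""
    # Time update: propagate estimate and error covariance
    x_pred = F @ x_est
    P_pred = F @ P @ F.T + Q
    # Measurement update: innovation covariance, filter gain, corrected estimate
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # filter gain matrix
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_new, P_new
```

At each step, the pair (x_new, P_new) fully parameterizes the Gaussian conditional density that the second stage integrates against.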
Second stage (optimal MMSE estimator): next, the optimal MMSE estimate of the nonlinear function based on the measurements also represents a conditional mean, that is, where is the multivariate conditional Gaussian probability density function.
Remark 1. (closed-form MMSE estimator). In the general case, computing the optimal estimate, , reduces to the calculation of the multivariate Gaussian integral (5). The main drawback is that the integral cannot be calculated in explicit form for an arbitrary nonlinear function . Analytical calculation of the integral (a closed-form MMSE estimator) is possible only in the special cases considered in this paper. Closed-form estimators for distance metrics in terms of and are proposed in Sections 4 and 5.
The Euclidean distance between two points $u, v \in \mathbb{R}^n$ is defined as $d(u,v) = \|u - v\| = \bigl(\sum_{i=1}^{n}(u_i - v_i)^2\bigr)^{1/2}$. In the particular case where and represent two points located on a 1-D line, the Euclidean distance reduces to the absolute value (see Figure 1), i.e., . In Section 4, the MMSE estimator for the absolute value is comprehensively studied.
4. Closed-Form MMSE Estimator for Absolute Value
4.1. MMSE Estimator for Absolute Value of Linear Form
Lemma 1. (MMSE estimator for |x|). Let $x$ be a normal random variable, and let $\hat{x}$ and $\sigma^2$ be the MMSE estimate and error variance, respectively. Then, the MMSE estimator for the absolute value has the following closed-form expression:

$\widehat{|x|} = \hat{x}\,\bigl(2\Phi(\hat{x}/\sigma) - 1\bigr) + 2\sigma\,\varphi(\hat{x}/\sigma),$

where $\Phi$ is the cumulative distribution function of the standard normal distribution and $\varphi$ is its probability density function.
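Since the conditional law of the state is Gaussian, the closed form of Lemma 1 is the mean of a folded normal distribution. The sketch below checks it against a Monte Carlo average for a hypothetical posterior N(x̂, σ²); the numeric values are illustrative only:

```python
import math
import random

def mmse_abs(x_hat, sigma):
    """Closed-form MMSE estimate of |x| for a Gaussian posterior N(x_hat, sigma^2):
    x_hat*(2*Phi(x_hat/sigma) - 1) + 2*sigma*phi(x_hat/sigma)."""
    z = x_hat / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    return x_hat * (2.0 * Phi - 1.0) + 2.0 * sigma * phi

# Monte Carlo check with illustrative posterior parameters
random.seed(1)
x_hat, sigma = 0.8, 1.5
mc = sum(abs(random.gauss(x_hat, sigma)) for _ in range(200_000)) / 200_000
```

At $\hat{x} = 0$ the estimator returns $\sigma\sqrt{2/\pi}$, the familiar mean of a half-normal variable, rather than the naive value $|\hat{x}| = 0$.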
The derivation of equation (8) is given in the Appendix.
Let $y = a^{\top}x + b$ be a linear form (LF) of the normal random vector $x$, and let $\hat{x}$ and $P$ be the MMSE estimate and error covariance, respectively. Then, the MMSE estimate of the linear form and its error variance can be calculated as

$\hat{y} = a^{\top}\hat{x} + b, \qquad \sigma_y^2 = a^{\top} P a,$

and we have the following theorem.
Theorem 1. (MMSE estimator for absolute value of LF). Let be a normal random vector, and let and be the MMSE estimate and error covariance, respectively. Then, the closed-form MMSE estimator for the absolute value is defined by formula (8): where and are determined by equation (9).
The MMSE estimator (10) makes it possible to calculate distances measured in terms of the absolute value in n-dimensional space.
4.2. Examples of MMSE Estimator for Distance between Points
Let be a normal state vector, and and are the Kalman estimate and error covariance, respectively, .
Example 1. (distance on 1-D line). The MMSE estimator for the distance between the moving point and the given sequence on a 1-D line takes the form
Example 2. (distance between point and line). The shortest distance from the moving point to the line in the 2-D plane is shown in Figure 2. The distance is given by . Substituting and into equations (9) and (10), we get the MMSE estimator for the shortest distance (12): where and are determined by equation (9): . The MMSE estimator (12)–(14) can be generalized to 3-D space.
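As a numeric sketch of Example 2, the estimator can be coded by normalizing the line coefficients of a line a·x1 + b·x2 + c = 0 and applying the closed form of Theorem 1; the function names and test values below are illustrative, not taken from the paper:

```python
import math
import numpy as np

def mmse_abs(y_hat, sigma):
    """Closed-form MMSE estimate of |y| for a Gaussian posterior N(y_hat, sigma^2)."""
    z = y_hat / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return y_hat * (2.0 * Phi - 1.0) + 2.0 * sigma * phi

def mmse_point_line_distance(x_hat, P, a, b, c):
    """MMSE estimate of the distance |a*x1 + b*x2 + c| / sqrt(a^2 + b^2)
    from the Kalman estimate x_hat and error covariance P of (x1, x2)."""
    norm = math.hypot(a, b)
    w = np.array([a, b]) / norm          # normalized coefficient vector
    y_hat = w @ x_hat + c / norm         # MMSE estimate of the linear form
    sigma = math.sqrt(w @ P @ w)         # its error standard deviation
    return mmse_abs(y_hat, sigma)
```

As the error covariance P shrinks, the estimate approaches the deterministic point-to-line distance; with nonzero P, it is never smaller than the absolute value of the estimated linear form.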
Example 3. (distance between point and plane). Similar to equation (12), the shortest distance between the moving point and the plane in 3-D space,is shown in Figure 3.
Substituting and into equations (9) and (10), we get where and are determined by equation (9): . The MMSE distance estimators in Theorem 1 and Examples 1∼3 are summarized in Table 2.
4.3. Numerical Examples
In this section, numerical examples demonstrate the accuracy of the two closed-form estimators calculated for the absolute value . The optimal MMSE estimator is compared with the simple suboptimal one .
4.3.1. Estimation of Distance between Random Location and Given Point in 1-D Line
Let be a scalar random position measured in additive white noise; the system model is then where is the known initial condition and and are uncorrelated white Gaussian noises.
The KF equation (3) gives the following:
Consider the distance between and the known point , i.e., . Then, the optimal MMSE estimate of the distance is defined by (10). Further, we are interested in the special case in which and . In this case, formula (10) gives the optimal estimate of the distance between the current position and the origin, i.e.,
In parallel to the optimal estimate (20), consider the simple suboptimal estimate,
Remark 2. Reviewing formula (20), we find the following. If the values of and are large , then and if or if , which implies that both estimates are quite close, i.e., . Assuming the estimate is far enough from zero, how large the functions and become depends on the error variance . Using (19), the steady-state value of the variance satisfies a quadratic equation with solution . Since the variance depends on the noise statistics and , this fact can be used in practice to compare the proposed estimators. For example, if the estimate is far enough from zero and the product is small , then and . In this case, both estimators are close, . The simulation results confirm this.
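The two regimes described in Remark 2 are easy to see numerically from the closed form of Theorem 1; the values below are illustrative, with sigma standing in for the filter's error standard deviation:

```python
import math

def mmse_abs(x_hat, sigma):
    """Closed-form MMSE estimate of |x| for a Gaussian posterior N(x_hat, sigma^2)."""
    z = x_hat / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return x_hat * (2.0 * Phi - 1.0) + 2.0 * sigma * phi

# Estimate far from zero (10 standard deviations): optimal ~ naive |x_hat|
far_gap = mmse_abs(5.0, 0.5) - 5.0
# Estimate at zero: naive gives 0, optimal gives sigma*sqrt(2/pi)
near_gap = mmse_abs(0.0, 0.5) - 0.0
```

Far from zero the correction terms vanish and the two estimators coincide; at zero they differ by roughly 0.8 times the error standard deviation.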
Next, we test the efficiency of the proposed estimators. The estimators are compared under different values of the noise variances and . The following scenarios were considered:
Case 1: small noises,
Case 2: medium noises,
Case 3: large noises,
Both estimators were run with the same random noises for fair comparison. A Monte Carlo simulation with 1000 runs was used to calculate the root mean square error (RMSE), and . Define the average RMSE over the time interval as . The simulation results are illustrated in Table 3 and Figures 4∼7.
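A miniature version of this experiment can be reproduced as follows; the scalar random-walk model, noise levels, and run counts below are illustrative stand-ins, not the exact settings behind Table 3:

```python
import math
import random

def run_rmse(q, r, x0=0.0, steps=50, runs=300, seed=0):
    """Monte Carlo RMSE of the optimal MMSE estimate of |x_k| versus the naive
    estimate |x_hat_k| for the scalar model x_{k+1} = x_k + w_k, y_k = x_k + v_k."""
    rng = random.Random(seed)
    se_opt = se_sub = 0.0
    n = 0
    for _ in range(runs):
        x, x_hat, P = x0, x0, 1.0
        for _ in range(steps):
            x += rng.gauss(0.0, math.sqrt(q))       # true random-walk state
            y = x + rng.gauss(0.0, math.sqrt(r))    # noisy measurement
            P_pred = P + q                          # KF time update
            K = P_pred / (P_pred + r)               # filter gain
            x_hat += K * (y - x_hat)                # KF measurement update
            P = (1.0 - K) * P_pred
            sd = math.sqrt(P)
            z = x_hat / sd
            Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
            phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
            d_opt = x_hat * (2.0 * Phi - 1.0) + 2.0 * sd * phi   # optimal estimate
            d_sub = abs(x_hat)                                   # suboptimal estimate
            se_opt += (d_opt - abs(x)) ** 2
            se_sub += (d_sub - abs(x)) ** 2
            n += 1
    return math.sqrt(se_opt / n), math.sqrt(se_sub / n)
```

Because both estimators are fed the same noise realizations, the comparison is paired, mirroring the experimental protocol of this section.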
In Case 1, our interest is in the zero and nonzero initial conditions . At and , the signal and its estimate are close to zero, and at . In this case, the values of and are not large; therefore, the optimal and suboptimal estimates differ, as shown in Figure 4 and confirmed by the values and in Table 3. At and , the estimate is far enough from zero, and . According to Remark 2, the optimal and suboptimal estimates are approximately equal, as shown in Figure 5. The nearly equal values confirm this fact.

In Cases 2 and 3, the variance is not small; therefore, the initial condition does not play a significant role in comparing the two estimators. In these cases, the optimal estimator performs better than the simple suboptimal one . Typical plots are shown in Figures 6 and 7, and the values and in Table 3 confirm this conclusion.
Thus, the simulation results in Section 4.3.1 show that the optimal estimator is suitable for practical applications.
4.3.2. Estimation of Distance between Two Random Points in 1-D Line
Consider the motion of two random points and on a 1-D line. Assume that the evolution of the state vector from time to is defined by the random walk model: where and are the known initial conditions and and are uncorrelated white Gaussian noises.
Assuming we measure the true positions of the points with correlated measurement white noises and , respectively, the measurement equation is where
Our goal is to estimate the unknown distance between the current location of the points and
According to the proposed two-step estimation procedure, the optimal Kalman estimate and error covariance computed at the first stage are used at the second stage for estimation of the distance Using formulas (9) and (10) for and we obtain the best MMSE estimate for the distance:
In parallel with the optimal distance estimator (24), we consider the simple suboptimal estimator .
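A sketch of the optimal estimator (24): for the joint state (x1, x2), the distance |x1 − x2| is the absolute value of the linear form with coefficient vector a = (1, −1), whose error variance a⊤Pa = P11 − 2·P12 + P22 accounts for the cross-covariance induced by the correlated measurement noises. The code below is illustrative:

```python
import math
import numpy as np

def mmse_two_point_distance(x_hat, P):
    """MMSE estimate of |x1 - x2| from the joint Kalman estimate
    x_hat = (x1_hat, x2_hat) and its 2x2 error covariance P."""
    a = np.array([1.0, -1.0])
    d_hat = a @ x_hat                  # MMSE estimate of x1 - x2
    sigma = math.sqrt(a @ P @ a)       # sqrt(P11 - 2*P12 + P22)
    z = d_hat / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return d_hat * (2.0 * Phi - 1.0) + 2.0 * sigma * phi
```

When the filter error covariance is negligible, the estimate collapses to the true separation; when the points coincide in the mean, the estimate stays positive, unlike the naive absolute difference of the estimates.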
Remark 3. As we see, the optimal estimates of the distances in (20) and (24) depend on the functions and . The functions in formulas (20) and (24) are evaluated at the pairs of points and , respectively. The second pair depends on the state estimate and error covariance . Therefore, Remark 2 is also valid for models (22) and (23). For example, if the estimate is far enough from zero and the variance is small, then . The simulation results in Figure 8, with and very close values of the average RMSEs, and , confirm this fact.
In addition, we are interested in the following new scenarios:
Case 1: both points and are fixed, and their positions are measured with small noises
Case 2: the first point is fixed, but the movement of the second one is subject to a small noise
Case 3: the movement of both points is subject to a medium noise
The model parameters and simulation results for these scenarios are given in Table 4. From Table 4, we observe a strong difference between the average RMSEs and , i.e., . It is not a surprise that the optimal estimator (24) is better than the suboptimal one, .
5. MMSE Estimators for Bilinear and Quadratic Forms
5.1. Optimal Closed-Form MMSE Estimator for Quadratic Form
Consider a quadratic form (QF) of the state vector :
In this case, the optimal MMSE estimator (4) can be explicitly calculated in terms of the Kalman estimate and error covariance .
Theorem 2. (MMSE estimator for QF). Let $x$ be a normal random vector, and let $\hat{x}$ and $P$ be the Kalman estimate and error covariance, respectively. Then, the optimal MMSE estimator for the QF has the following closed-form structure:

$\widehat{x^{\top}Ax} = \hat{x}^{\top}A\hat{x} + \operatorname{tr}(AP).$
Proof. Using the formulas $x^{\top}Ax = \operatorname{tr}(Axx^{\top})$ and $E[xx^{\top}\mid \cdot\,] = \hat{x}\hat{x}^{\top} + P$, we obtain
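The closed form in Theorem 2 is the standard Gaussian second-moment identity: if the conditional law of x is N(x̂, P), then E[x⊤Ax] = x̂⊤Ax̂ + tr(AP). A quick Monte Carlo sketch with hypothetical numbers:

```python
import numpy as np

def mmse_quadratic_form(x_hat, P, A):
    """Closed-form MMSE estimate of x^T A x: x_hat^T A x_hat + trace(A P)."""
    return x_hat @ A @ x_hat + np.trace(A @ P)

# Monte Carlo check: sample x ~ N(x_hat, P) and average x^T A x
rng = np.random.default_rng(0)
x_hat = np.array([1.0, -2.0])
P = np.array([[0.5, 0.2],
              [0.2, 1.0]])
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
samples = rng.multivariate_normal(x_hat, P, size=200_000)
mc = np.einsum('ni,ij,nj->n', samples, A, samples).mean()
```

With the numbers above, the closed form gives 2·1 + 3·4 + tr(AP) = 14 + 4 = 18, and the sample average agrees to Monte Carlo accuracy.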