Research Article | Open Access

# Performance Analysis for Distributed Fusion with Different Dimensional Data

**Academic Editor:** Ming Gao

#### Abstract

Different sensors or estimators may have different capabilities to provide data. Some sensors can provide relatively high-dimensional data, while other sensors can provide only part of it. Some estimators can estimate the full-dimensional quantity of interest, while others may estimate only part of it due to certain constraints. How can such data of different dimensions be fused? How do the common part and the uncommon part affect each other during fusion? To answer these questions, a fusion algorithm based on linear minimum mean-square error (LMMSE) estimation is provided in this paper. The fusion performance is then analyzed, which is the main contribution of this work. The conclusions are as follows. First, the fused common part is not affected by the uncommon part. Second, the fused uncommon part benefits from the common part through the cross-correlation. Finally, under certain conditions, both a more accurate common part and a stronger correlation result in a more accurate fused uncommon part. All conclusions are supported by tracking application examples.

#### 1. Introduction

Estimation of stochastic system states or parameters has wide applications. For example, in target tracking, the evolution of the target state can often be represented by a stochastic dynamic system whose state transition model is driven by process noise. The observations of the measurement model are, in general, also corrupted by measurement noise. Since the state model and the measurement model are both stochastic, the output of an estimator, for example, a Kalman filter, is also stochastic. When there are multiple sensors or estimators, data fusion techniques are usually used to obtain potentially better estimates.

Data fusion is the problem of how to utilize the useful information contained in multiple sets of data to estimate an unknown quantity (a parameter or a process) [1]. In the most common situation, the data to be fused have the same dimension, but in some cases data of different dimensions may need to be fused. The following examples illustrate different dimensional data fusion in target tracking applications.

*Measurement-to-Measurement Fusion.* Suppose that we have two radars, A and B. Radar A can sense target 1 and target 2 simultaneously, while radar B can only sense target 1. Then the measurement-to-measurement fusion for such a scenario is a fusion problem with different dimensionalities.

*Track-to-Track Fusion.* A constant velocity (CV) model based estimator can provide only position and velocity estimates, while a constant acceleration (CA) model based estimator can provide position, velocity, and acceleration estimates. Fusing these two estimators is also a fusion problem with different dimensionalities. This is very common in maneuvering target tracking with the interacting multiple model (IMM) algorithm.

*Measurement-to-Track Fusion.* A CV model based estimator provides the target’s state estimation of position and velocity, while a sensor (a radar or GPS) provides the target’s position measurement. This is a measurement-to-track fusion problem with different dimensional data.

The reason for such phenomena is that some sensors or estimators are subject to constraints that the full-dimensional data provider is not. In the above examples, radar B may have narrower coverage than radar A; the CV model based estimator cannot provide an acceleration estimate because of the model itself; and the sensor cannot measure target velocity because of its sensing capability.

For fusion with such different dimensional data, how to deal with the uncommon part is a problem that needs to be considered. A simple way is to discard the uncommon part when fusing. This is quite natural, but useful information is lost. To fully use all available information, an LMMSE estimator is provided in this work. In fact, if the uncommon part and the common part are cross-correlated, the correlation helps in fusion.

The relationship between correlation and estimator performance has been discussed in the literature. For example, a Doppler radar's range and range rate measurement errors are often correlated. Reference [2] concluded that negative correlation gives the best tracking performance. With more detailed simulation and analysis, [3] concluded that, for steady-state estimation, negative correlation gives the best tracking performance and that positive correlation is not always worse than no correlation. Reference [3] also discussed a coefficient selection strategy for one-step state estimation. Reference [4] proposed a fusion algorithm in which the local estimates are correlated. Reference [5] analyzed the fusion performance with correlation for the scalar case. References [6–9] also discussed fusion algorithms in the presence of correlation. Although these works discussed the relationship between correlation and fusion performance, performance analysis of different dimensional data fusion is very rare. To reveal the factors that affect the fusion performance, the performance is analyzed in this paper.

The rest of the paper is organized as follows. Section 2 formulates the problem. The fusion algorithm is proposed in Section 3. The performance analysis, which is the main contribution of this work, is given in Section 4. Examples are given in Section 5, and Section 6 concludes the paper.

#### 2. Problem Formulation

In general, a filter's or model's output can be seen as an estimator. In this work, for a unified problem formulation, a sensor's measurement is also treated as an “estimator” whose output equals its input, the original measurement.

The following problem is considered. There are two estimators. One can provide a full-dimensional estimate of an estimand (the quantity to be estimated), and the other can provide only a partial estimate of the estimand. In this paper, the estimators are stochastic, which means they are affected by noise.

Assume $x$ is the estimand, which can be written as $x = [x_1^T \; x_2^T]^T$.

Estimator 1 is as follows:
$$z_1 = x + v_1 = \begin{bmatrix} x_1 + v_{11} \\ x_2 + v_{12} \end{bmatrix}.$$

Estimator 2 is as follows:
$$z_2 = x_1 + v_2.$$
It can be seen that $x_1$ is the common part and $x_2$ is the uncommon part. The dimensions of those vectors are $x_1, v_{11}, v_2 \in \mathbb{R}^{n_1}$ and $x_2, v_{12} \in \mathbb{R}^{n_2}$. The mean, covariance, and cross-covariance of the noises are
$$E[v_1] = 0, \quad E[v_2] = 0, \quad \operatorname{cov}(v_1) = P_1 = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}, \quad \operatorname{cov}(v_2) = R_2, \quad \operatorname{cov}(v_1, v_2) = C_{12} = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix},$$
where $P_{21} = P_{12}^T$, and $P_1 > 0$, $R_2 > 0$ means $P_1$, $R_2$ are positive definite matrices.

#### 3. Fusion Algorithm with Different Dimensional Data

##### 3.1. Introduction to the LMMSE Estimator

The minimum mean-square error (MMSE) estimator is a Bayesian estimator that minimizes the expected value of a positive definite cost function. It is a tool which estimates a random variable $x$ in terms of another random variable $z$. The solution is the conditional mean $E[x \mid z]$.

Since the distributional information needed to evaluate the conditional mean is not always available, the linear minimum mean-square error (LMMSE) estimator is often used in practice. The LMMSE estimator yields the estimate as a linear function of the observation and requires only the first two moments. It is a widely used estimation method.

Consider the vector-valued random variables $x$ and $z$, where $z$ is a measurement of $x$. The best estimate of $x$ in terms of $z$ in the LMMSE sense [10] is
$$\hat{x} = \bar{x} + P_{xz} P_{zz}^{-1} (z - \bar{z}), \qquad P = P_{xx} - P_{xz} P_{zz}^{-1} P_{xz}^T,$$
where $\bar{x}$ is the prior mean of $x$, $P_{xx}$ is the prior covariance matrix of $x$, $\bar{z}$ is the prior mean of $z$, $P_{zz}$ is the prior covariance matrix of $z$, and $P_{xz}$ is the cross-covariance matrix between $x$ and $z$; $P$ is the covariance of the estimation error.
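As a numerical illustration of this update, the following sketch implements the LMMSE estimate and its error covariance; the moments and observation used here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def lmmse(xbar, Pxx, Pxz, zbar, Pzz, z):
    """LMMSE estimate of x given z, plus the posterior error covariance."""
    K = Pxz @ np.linalg.inv(Pzz)      # LMMSE gain Pxz * Pzz^{-1}
    xhat = xbar + K @ (z - zbar)      # linear update around the prior mean
    P = Pxx - K @ Pxz.T               # posterior covariance Pxx - Pxz Pzz^{-1} Pxz^T
    return xhat, P

# Illustrative first two moments (assumed, not from the paper).
xbar = np.array([0.0, 0.0])
Pxx = np.array([[2.0, 0.5], [0.5, 1.0]])
Pxz = np.array([[1.5], [0.3]])        # cross covariance between x and z
zbar = np.array([0.0])
Pzz = np.array([[2.5]])

xhat, P = lmmse(xbar, Pxx, Pxz, zbar, Pzz, np.array([1.0]))
```

Since only first and second moments appear, the same sketch applies regardless of the underlying distributions.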

The LMMSE estimator of one random vector in terms of another random vector (the measurement) is such that the estimation error is
(1) zero-mean,
(2) uncorrelated with the measurement.

The LMMSE estimator has the following properties.
(1) It is the best estimator (in the MMSE sense) for Gaussian random variables.
(2) It is the best estimator within the class of linear estimators.
LMMSE estimation is essentially known as best linear unbiased estimation (BLUE) [1], which has been proved identical to linear weighted least squares (WLS) estimation [11].
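The BLUE/WLS connection can be checked numerically: with a diffuse prior, the LMMSE estimate approaches the weighted least squares solution. The linear model and all numbers below are illustrative assumptions.

```python
import numpy as np

# Linear model z = H x + v, cov(v) = R (illustrative values).
H = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
R = np.diag([1.0, 2.0, 0.5])
z = np.array([1.1, -0.8, -2.1])

# WLS: minimize (z - H x)^T R^{-1} (z - H x).
W = np.linalg.inv(R)
x_wls = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

# LMMSE with zero prior mean and a very large (diffuse) prior covariance P0.
P0 = 1e9 * np.eye(2)
S = H @ P0 @ H.T + R                  # measurement prediction covariance
K = P0 @ H.T @ np.linalg.inv(S)       # LMMSE gain
x_lmmse = K @ z
```

As the prior covariance grows, the prior carries no information and the two solutions coincide up to numerical precision.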

##### 3.2. Fusion Algorithm Using the LMMSE Estimation

Since $z_1$ can provide the full estimate of $x$, $z_1$ can be regarded as the prior information.

The prior information is as follows:
$$\bar{x} = z_1, \qquad P_{xx} = P_1 = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}.$$
Next, $z_2$ is regarded as the measurement. Since $z_1$ is the prior information, the measurement prediction and its covariance are
$$\bar{z} = H z_1, \qquad P_{zz} = S = H P_1 H^T + R_2 - H C_{12} - C_{12}^T H^T, \qquad H = \begin{bmatrix} I & 0 \end{bmatrix},$$
where $C_{12} = \operatorname{cov}(v_1, v_2)$ is the cross-covariance between the two estimators' noises. The cross-covariance between the prior information and the measurement is then
$$P_{xz} = P_1 H^T - C_{12}.$$
Here it is assumed that $S > 0$, which means $z_2$ can also provide some new information.

The LMMSE fuser for this problem is the following:
$$\hat{x} = z_1 + \left( P_1 H^T - C_{12} \right) S^{-1} \left( z_2 - H z_1 \right),$$
$$P^f = P_1 - \left( P_1 H^T - C_{12} \right) S^{-1} \left( P_1 H^T - C_{12} \right)^T. \tag{10}$$
It is the updated covariance $P^f$ which is used for the performance analysis. $P^f$ can be rearranged as
$$P^f = \begin{bmatrix} P_{11}^f & P_{12}^f \\ P_{21}^f & P_{22}^f \end{bmatrix},$$
where $P_{11}^f$ stands for the updated $x_1$ part's (common data) covariance matrix:
$$P_{11}^f = P_{11} - \left( P_{11} - C_1 \right) S^{-1} \left( P_{11} - C_1 \right)^T, \tag{12}$$
with $C_1 = \operatorname{cov}(v_{11}, v_2)$ the common-part block of $C_{12}$. It is the same as the fusion algorithm in [4].

$P_{22}^f$ stands for the updated $x_2$ part's (uncommon data) covariance matrix:
$$P_{22}^f = P_{22} - \left( P_{21} - C_2 \right) S^{-1} \left( P_{21} - C_2 \right)^T, \tag{13}$$
with $C_2 = \operatorname{cov}(v_{12}, v_2)$ the uncommon-part block of $C_{12}$. It is affected by the $x_1$ part. The following performance analysis is on the updated uncommon part (the $x_2$ part).
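Specialized to independent estimators, the fuser's covariance update can be sketched in a few lines. The block notation is an assumption of this sketch: the full estimator's covariance `P1` lists the common part first, `R2` is the partial estimator's covariance, and `H = [I 0]` selects the common part; the numeric matrices are illustrative.

```python
import numpy as np

def fuse_covariance(P1, R2, n1):
    """Fused covariance for independent estimators: prior P1, partial measurement R2."""
    n = P1.shape[0]
    H = np.hstack([np.eye(n1), np.zeros((n1, n - n1))])  # selects the common part
    S = H @ P1 @ H.T + R2                                # innovation covariance
    K = P1 @ H.T @ np.linalg.inv(S)                      # fusion gain
    return P1 - K @ H @ P1                               # updated covariance

# Illustrative covariances: common part is the first two dimensions.
P1 = np.array([[4.0, 1.0, 0.8],
               [1.0, 2.0, 0.5],
               [0.8, 0.5, 3.0]])
R2 = np.eye(2)

Pf = fuse_covariance(P1, R2, 2)
```

The common block of `Pf` equals the information-form combination of the two common-part covariances, while the uncommon block is reduced only through the cross-covariance terms of `P1`.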

#### 4. Performance Analysis of the Uncommon Part

##### 4.1. The Uncommon Part’s Impact on the Fused Common Part

From (12), it is very clear that the fused common part will not be affected by the uncommon part.

##### 4.2. The Cross-Correlation’s Impact on the Fused Uncommon Part

From (13), it can be easily seen that the fused uncommon part is affected by the common part.

First, some properties of positive definite matrices are introduced. If $A$ and $B$ are positive definite matrices, then they have the following properties [12].
(I) For a symmetric matrix $A$ and a positive semidefinite matrix $D$, if $D > 0$, then $A - D < A$; otherwise $A - D \le A$.
(II) For any matrix $C$ and $A > 0$, $C A^{-1} C^T \ge 0$, and $C A^{-1} C^T > 0$ if and only if $C$ has full row rank.
(III) If $A > B > 0$, then $0 < A^{-1} < B^{-1}$.
Before fusion, the covariance matrix of the $x_2$ part is $P_{22}$. After fusion, it becomes $P_{22}^f$. From (13), it can be seen that
$$P_{22} - P_{22}^f = C S^{-1} C^T, \tag{14}$$
where $S$ is the innovation covariance of (10) and $C = P_{21} - C_2$ can be regarded as the cross-correlation matrix.

Theorem 1. *If $C$ has full row rank, then $P_{22}^f < P_{22}$; otherwise $P_{22}^f \le P_{22}$.*

*Proof.* Because $S > 0$, from Property (II), it follows that $C S^{-1} C^T \ge 0$, with $C S^{-1} C^T > 0$ when $C$ has full row rank.

The conclusion can then be directly obtained from (14) and Property (I).

It can be seen from (14) that if $C = 0$, then $P_{22}^f = P_{22}$.

The following are the conclusions from the above.
(1) If $C = 0$, which means there is no cross-correlation between the uncommon part $x_2$ and the common part $x_1$, the fused uncommon part is the same as the unfused one.
(2) If $C$ has full row rank, the fused uncommon part is definitely better than the unfused one.
If $C \ne 0$ but $C$ does not have full row rank, the following shows which components of $x_2$ benefit from the fusion. Assume that
$$C = \begin{bmatrix} c_1^T & c_2^T & \cdots & c_{n_2}^T \end{bmatrix}^T,$$
where $c_1, c_2, \ldots, c_{n_2}$ are row vectors. If only the $i$th component of $x_2$ is considered, the following corollaries can be obtained.

Corollary 2. *If $c_i \ne 0$, then $P_{22}^f(i,i) < P_{22}(i,i)$.*

*Proof.* It can be seen from (14) that $P_{22}(i,i) - P_{22}^f(i,i) = c_i S^{-1} c_i^T$.

Thus if $c_i \ne 0$, then $c_i S^{-1} c_i^T > 0$.

Furthermore, since $c_i S^{-1} c_i^T > 0$, from Property (I), it can be seen that $P_{22}^f(i,i) = P_{22}(i,i) - c_i S^{-1} c_i^T < P_{22}(i,i)$.

It can be seen from Corollary 2 that if one certain component of the uncommon part is cross-correlated with the common part , then its fused result is better than the unfused one.

Corollary 3. *If $n_2 = 1$ and $C \ne 0$, then $P_{22}^f < P_{22}$.*

*Proof.* If $n_2 = 1$, then $C$ is a row vector.

If $C \ne 0$, then $C S^{-1} C^T > 0$.

From Corollary 2, Corollary 3 can be directly achieved.

It can be seen from Corollary 3 that if the uncommon part is a scalar and the cross-correlation exists, the fused result is better than the unfused one.

##### 4.3. The Accuracy of the Independent Common Part’s Impact on the Fused Uncommon Part

Assume that estimator 2 can be obtained with different precisions. The covariance matrix of the higher precision is $R_2^h$ and the covariance matrix of the lower precision is $R_2^l$. The corresponding fused covariance matrices of $x_2$ are $P_{22}^{f,h}$ and $P_{22}^{f,l}$. Assume that $R_2^h < R_2^l$. If the two estimators $z_1$ and $z_2$ are independent, which means their noises are uncorrelated ($C_{12} = 0$, so $C_1 = 0$ and $C_2 = 0$), the following theorem can be obtained.

Theorem 4. *Under the condition that $z_1$ and $z_2$ are independent, if $C = P_{21}$ has full row rank, then $P_{22}^{f,h} < P_{22}^{f,l}$; otherwise $P_{22}^{f,h} \le P_{22}^{f,l}$.*

*Proof.* When $z_1$ and $z_2$ are independent, $S = P_{11} + R_2$ and $C = P_{21}$. From (14), the fused covariance of $x_2$ is the following:
$$P_{22}^f = P_{22} - P_{21} \left( P_{11} + R_2 \right)^{-1} P_{21}^T.$$
The difference between the two covariance matrices is
$$P_{22}^{f,l} - P_{22}^{f,h} = P_{21} \left[ \left( P_{11} + R_2^h \right)^{-1} - \left( P_{11} + R_2^l \right)^{-1} \right] P_{21}^T. \tag{17}$$
Since $R_2^h < R_2^l$, it thus follows that $P_{11} + R_2^h < P_{11} + R_2^l$.

From Property (III),
$$\left( P_{11} + R_2^h \right)^{-1} > \left( P_{11} + R_2^l \right)^{-1} > 0.$$
According to (17) and Properties (I) and (II), if $P_{21}$ has full row rank, then $P_{22}^{f,h} < P_{22}^{f,l}$; otherwise $P_{22}^{f,h} \le P_{22}^{f,l}$.
It can be seen from Theorem 4 that increasing the independent common part’s accuracy can improve the fused performance of uncommon part.

The following two corollaries can be easily obtained.

Corollary 5. *Under the condition that $z_1$ and $z_2$ are independent, if the $i$th row of $P_{21}$ is nonzero, then $P_{22}^{f,h}(i,i) < P_{22}^{f,l}(i,i)$.*

Corollary 6. *Under the condition that $z_1$ and $z_2$ are independent, if $n_2 = 1$ and $P_{21} \ne 0$, then $P_{22}^{f,h} < P_{22}^{f,l}$.*

The proof is similar to that of Corollaries 2 and 3 and will be omitted here.

Corollaries 5 and 6 supplement Theorem 4 for the single-component case and the scalar case; they likewise mean that increasing the independent common part's accuracy can improve the fused result of the uncommon part.
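A scalar sketch of Theorem 4, under the assumed independent-estimator form $P_{22}^f = P_{22} - P_{21}(P_{11}+R_2)^{-1}P_{12}$; the numbers are illustrative, not from the paper.

```python
import numpy as np

P11 = np.array([[2.0]])   # common part covariance of estimator 1
P12 = np.array([[0.9]])   # cross covariance between common and uncommon parts
P22 = np.array([[3.0]])   # uncommon part covariance before fusion

def fused_uncommon(R2):
    """Fused uncommon covariance when the estimators are independent."""
    return P22 - P12.T @ np.linalg.inv(P11 + R2) @ P12

high = fused_uncommon(np.array([[0.5]]))   # more accurate common estimator (smaller R2)
low = fused_uncommon(np.array([[2.0]]))    # less accurate common estimator (larger R2)
```

A smaller `R2` shrinks the innovation covariance, which enlarges the reduction term and thus shrinks the fused uncommon covariance.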

##### 4.4. The Level of Correlation’s Impact on the Fused Uncommon Part

Assume that $c_j$ is the $j$th component of the row vector $C$ (so $n_2 = 1$ here) and that it is the only nonzero component of $C$:
$$c_j = \rho \sqrt{P_{22} P_{11}(j,j)},$$
where $\rho$ is the correlation coefficient.

Theorem 7. *Under the condition that there is only one nonzero component in $C$, if the absolute value of the correlation coefficient increases, the fused covariance $P_{22}^f$ will decrease.*

*Proof.* If there is only one nonzero component in $C$,
$$P_{22} - P_{22}^f = C S^{-1} C^T = c_j^2 \left[ S^{-1} \right]_{jj} = \rho^2 P_{22} P_{11}(j,j) \left[ S^{-1} \right]_{jj}.$$
Thus when $|\rho|$ increases, $P_{22}^f$ will decrease.

It can be seen from Theorem 7 that, under certain conditions, stronger cross-correlation can result in a better fused result.

When $n_1 = 1$, $C$ is a scalar, and the corresponding correlation coefficient is $\rho$, with $C = \rho \sqrt{P_{11} P_{22}}$. The following corollary can be obtained.

Corollary 8. *If $n_1 = 1$, when $|\rho|$ increases, the fused result $P_{22}^f$ will decrease.*

The proof is the same as that of Theorem 7.

It can be seen from Corollary 8 that if the common part is a scalar, stronger cross-correlation can lead to a better fused result.
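A scalar sketch of the correlation effect, assuming the specialization $P_{22}^f = P_{22} - c^2/(P_{11}+R_2)$ with $c = \rho\sqrt{P_{11}P_{22}}$ for independent estimators; the numbers are illustrative.

```python
import numpy as np

p11, p22, r2 = 2.0, 3.0, 1.0   # illustrative scalar variances

def fused_var(rho):
    """Fused uncommon variance as a function of the correlation coefficient."""
    c = rho * np.sqrt(p11 * p22)        # cross covariance
    return p22 - c**2 / (p11 + r2)      # reduction grows with rho**2

vals = [fused_var(r) for r in (0.0, 0.3, 0.6, 0.9)]
```

Because the reduction term depends on $\rho^2$, only the magnitude of the correlation matters, not its sign.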

#### 5. Illustrative Examples

##### 5.1. The Example for Improving the Fusion Result by the Existence of Cross-Correlation

*Example 1.* In target tracking applications, a constant acceleration (CA) model based estimator can provide position, velocity, and acceleration estimates, while a constant velocity (CV) model based estimator can provide only position and velocity. The state vector of the CA model is $[p \;\, \dot{p} \;\, \ddot{p}]^T$ and the state vector of the CV model is $[p \;\, \dot{p}]^T$. When fusing the estimates from the two models, the position and velocity estimates are considered to be the common part and the acceleration is considered to be the uncommon part. Assume the two estimators are independent.

Assume there is a target moving with constant velocity. Two estimators are used to estimate the target's state: one uses the CA model and the other uses the CV model. The two estimators' initial covariance matrices are
Assume only the position can be observed by the sensors and the measurement noise variances are both . The sampling interval is . Both estimators' updated state covariance matrices are computed by the Kalman filter. Because the CV model cannot provide an estimate of the acceleration part, there are two ways to obtain the acceleration estimate. One way is to use the CA model's acceleration estimate directly, and the other is to use the fusion result. Figure 1 shows the acceleration variance for the two ways. The acceleration estimate from the CA model is always correlated with the velocity and position estimates because of the state equation, so the fusion result should benefit from this correlation; Figure 1 supports this analysis.
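This setup can be sketched at the covariance level: run covariance-only Kalman recursions for a CA and a CV filter, then fuse the CA state (prior) with the CV position/velocity estimate (partial measurement of the common part). The sampling interval, noise levels, and initial covariances below are illustrative assumptions, not the paper's values.

```python
import numpy as np

T = 1.0
F_ca = np.array([[1.0, T, T**2 / 2], [0.0, 1.0, T], [0.0, 0.0, 1.0]])  # CA dynamics
F_cv = np.array([[1.0, T], [0.0, 1.0]])                                # CV dynamics
H_ca = np.array([[1.0, 0.0, 0.0]])   # only position is measured
H_cv = np.array([[1.0, 0.0]])
Q_ca, Q_cv = 0.01 * np.eye(3), 0.01 * np.eye(2)
R = np.array([[1.0]])                # position measurement noise variance

def kf_cov_step(P, F, Q, H):
    """One predict/update cycle of the Kalman covariance recursion."""
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return P - K @ H @ P

P_ca, P_cv = 10.0 * np.eye(3), 10.0 * np.eye(2)
for _ in range(20):                  # iterate toward steady state
    P_ca = kf_cov_step(P_ca, F_ca, Q_ca, H_ca)
    P_cv = kf_cov_step(P_cv, F_cv, Q_cv, H_cv)

# Fuse: CA estimate is the prior; the CV pos/vel estimate is the partial measurement.
Hc = np.hstack([np.eye(2), np.zeros((2, 1))])   # selects pos/vel from the CA state
K = P_ca @ Hc.T @ np.linalg.inv(Hc @ P_ca @ Hc.T + P_cv)
P_fused = P_ca - K @ Hc @ P_ca
```

Because the CA dynamics couple acceleration with position and velocity, the acceleration variance in `P_fused` drops below the CA-only value even though the CV filter never estimates acceleration.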

The following are some analyses for one step fusion. Assume the covariance matrices of the two models are
The cross covariance vector between the common part and uncommon part is .

Using (10), the fusion result is

If there is no cross-correlation between acceleration and the other part,
the fusion result

It can be seen that without cross-correlation, the performance of the uncommon part cannot be improved.

However, with correlation, we have , which means that the existence of cross-correlation can help improve the fusion result.

##### 5.2. The Examples for Increasing the Accuracy of the Independent Common Part to Improve the Fusion Result

*Example 2.* The simulation setting is the same as in Example 1, a CA-CV fusion problem. The CA model is the same as in Example 1, but the CV model's measurement is more accurate than in Example 1; the measurement noise variance is .

Figure 2 shows the fusion results using the two different CV estimators. It is known that a more accurate measurement leads to a more accurate estimate, so the CV estimator in Example 2 is more accurate than the one in Example 1. Figure 2 supports the conclusion that a more accurate independent common part estimator leads to a more accurate fused uncommon part.

The following are some more analyses compared with Example 1. Here the covariance matrices of the two models are assumed to be
and the fusion result is

In Example 1, . Here .

Since , it can be easily seen that more accurate common part estimation can lead to better fusion result.

*Example 3.* There are two radars which observe the same target. One is a Doppler radar, which can provide range and range rate measurements. The other is a regular radar, which can provide only range measurements. A Doppler radar's range and range rate measurement errors are sometimes correlated, while the two radars' measurement errors are independent of each other. The state vectors are and , respectively. The corresponding covariance matrices are
After fusion,
When decreases, will also decrease.

When , .

Let
When , the covariance after fusion is

When , the covariance after fusion is

Since , it can be easily seen that more accurate common part estimation can lead to more accurate fusion result.

Figure 3 shows as a function of , which changes from to . From the figure, it can be clearly seen that improving the regular radar's range accuracy also improves the fused range rate accuracy.

##### 5.3. The Example for the Stronger Correlation to Improve the Fusion Result

*Example 4.* The simulation setting is the same as in Example 3, but the correlation coefficient is now a variable. From (29), it can be seen that the bigger , the smaller , which means stronger correlation can lead to a better fusion result.

When , .

Let
then

Let
then

Since , it can be easily seen that stronger correlation can lead to better fusion result.

Figure 4 shows as a function of , which changes from to .

It can be seen that the stronger the correlation, the better the fused result.

*Example 5.* Examples 3 and 4 are combined: the range accuracy and the correlation coefficient change simultaneously. From (29), when increases and decreases, will decrease.

And if and , .

Figure 5 shows as a function of and .

Figure 5 supports the conclusion that fusion result benefits from stronger correlation and more accurate common part.

#### 6. Conclusion

Some sensors or estimators can provide higher dimensional measurements or estimates, but due to constraints, others can provide only partial measurements or estimates. To fuse such data of different dimensions, a fusion algorithm based on LMMSE estimation is provided. To reveal the relationship between the common part and the uncommon part, the fusion performance is analyzed and the following four conclusions are obtained. The fused common part is not affected by the uncommon part. The fused uncommon part benefits from the common part through the cross-correlation. A more accurate independent common part results in better performance of the fused uncommon part. In some cases, stronger cross-correlation results in better performance of the fused uncommon part. All conclusions are supported by target tracking examples.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

This research work was supported by the National Key Fundamental Research & Development Programs (973) of China (2013CB329405), the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (61221063), the Natural Science Foundation of China (61203221, 61174138, and 61473217), the Ph.D. Programs Foundation of the Ministry of Education of China (20100201120036), and the Fundamental Research Funds for the Central Universities.

#### References

1. X. R. Li, Y. Zhu, J. Wang, and C. Han, “Optimal linear estimation fusion, Part I: unified fusion rules,” *IEEE Transactions on Information Theory*, vol. 49, no. 9, pp. 2192–2323, 2003.
2. Y. Bar-Shalom, “Negative correlation and optimal tracking with Doppler measurements,” *IEEE Transactions on Aerospace and Electronic Systems*, vol. 37, no. 3, pp. 1117–1120, 2001.
3. X. H. Yuan, C. Z. Han, and Z. S. Duan, “Performance analysis and correlation selection with Doppler measurements,” in *Proceedings of the 12th International Conference on Information Fusion*, pp. 373–379, Seattle, Wash, USA, July 2009.
4. Y. Bar-Shalom and L. Campo, “The effect of the common process noise on the two-sensor fused-track covariance,” *IEEE Transactions on Aerospace and Electronic Systems*, vol. 22, no. 6, pp. 803–805, 1986.
5. X. H. Yuan, C. Z. Han, and F. Lian, “Fusion performance analysis with the correlation,” in *Proceedings of the 13th International Conference on Information Fusion*, Edinburgh, UK, July 2010.
6. X. R. Li and P. Zhang, “Optimal linear estimation fusion, Part III: cross-correlation of local estimation errors,” in *Proceedings of the 4th International Conference on Information Fusion*, pp. WeB1.11–WeB1.18, Montreal, Canada, 2001.
7. S. J. Julier and J. K. Uhlmann, “Non-divergent estimation algorithm in the presence of unknown correlations,” in *Proceedings of the American Control Conference*, pp. 2369–2373, June 1997.
8. Y. C. Eldar, A. Beck, and M. Teboulle, “A minimax Chebyshev estimator for bounded error estimation,” *IEEE Transactions on Signal Processing*, vol. 56, no. 4, pp. 1388–1397, 2008.
9. Y. Wang and X. R. Li, “Distributed estimation fusion with unavailable cross-correlation,” *IEEE Transactions on Aerospace and Electronic Systems*, vol. 48, no. 1, pp. 259–278, 2012.
10. Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, *Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software*, Wiley, New York, NY, USA, 2001.
11. X. R. Li, K. Zhang, J. Zhao, and Y. Zhu, “Optimal linear estimation fusion, Part V: relationships,” in *Proceedings of the 5th International Conference on Information Fusion (FUSION '02)*, vol. 1, pp. 497–504, Annapolis, Md, USA, July 2002.
12. R. A. Horn and C. R. Johnson, *Matrix Analysis*, chapter 7, Cambridge University Press, 1985.

#### Copyright

Copyright © 2014 Xianghui Yuan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.