Abstract

This paper develops a robust extended-target multisensor multitarget multi-Bernoulli (ET-MS-MeMBer) filter to address the degraded quality of the measurement partitions produced by the classical ET-MS-MeMBer filter at increased clutter intensities. Specifically, the proposed method accounts for the influence of the clutter measurement set by introducing the ratio of the target likelihood to the clutter likelihood. With this constraint on the clutter measurement set, better multisensor measurement partitions are obtained under the original two-step greedy partitioning mechanism. The single-target multisensor likelihood function for the clutter case is also derived. Simulation results show that the proposed filter compares favorably to the ET-MS-MeMBer filter in terms of the accuracy of the estimated target cardinality and target states at increased clutter intensities.

1. Introduction

Multiple-target tracking (MTT) [1] estimates the number and states of moving targets from sensor observations. Traditional MTT algorithms adopt a point-target observation model [2], which assumes that each target generates at most one measurement per time step. However, with a high-resolution sensor, one target may occupy several resolution cells and thus produce multiple measurements. Such targets are known as extended targets [3]. The measurements generated by an extended target provide information not only about the target’s motion state but also about its shape. This shape information, e.g., the size and structure of the extended target, can further facilitate target classification and recognition tasks [4].

Conventional multiple-target tracking algorithms involving a data association step [5] have been widely used in medical image processing [6], network defense [7], and battlefield environment monitoring [8]. However, for extended-target tracking, the association between measurements and targets becomes more complicated and suffers from a “combinatorial explosion.” Therefore, the random finite set (RFS) framework [9, 10] has received much attention in the MTT domain, owing to its merit of avoiding explicit data association. This theory has led to the development of various MTT algorithms, including the probability hypothesis density (PHD) [11], cardinalized PHD (CPHD) [12], arithmetic average multi-Bernoulli (AA-MB) [13], and multisensor multitarget multi-Bernoulli (MS-MeMBer) [14] filters.

The abovementioned RFS-based filters have further been applied to extended-target tracking [15–23]. Based on the PHD filter, an extended-target PHD (ET-PHD) filter was proposed in [15], and a Gaussian implementation was given in [16]. As the ET-PHD filter can only estimate the centroid state of an extended target, PHD filters with shape estimation were proposed in [17, 18]. In [17], a Gaussian inverse Wishart PHD (GIW-PHD) filter was proposed, in which the target kinematic state is modeled as a Gaussian distribution and the target extension is assumed to follow an inverse Wishart distribution. As an improvement to the GIW-PHD filter, the gamma Gaussian inverse Wishart PHD (GGIW-PHD) filter proposed in [18] can additionally estimate the target measurement rates. Besides the PHD filter, other RFS-based filters [19–23] have also been modified to track extended targets. In [19], an extended-target CPHD filter that can estimate the centroid states of extended targets was proposed. Compared with the ET-PHD filter, this filter achieves better filtering results at the cost of higher computational complexity. Based on the labeled RFS, a GGIW implementation of the labeled multi-Bernoulli (LMB) filter was introduced in [20]. In [21–23], the Poisson multi-Bernoulli mixture (PMBM) filter was generalized to the extended-target case.

For the multisensor case, the extended-target MS-MeMBer (ET-MS-MeMBer) filter and its Gaussian inverse Wishart (GIW) mixture implementation were proposed in [24]. The ET-MS-MeMBer filter assumes that the observations of extended targets obey the approximate Poisson-Body (APB) model [25], and the update process of the MS-MeMBer filter is modified accordingly. However, this method ignores the constraint of the clutter measurement set in the multisensor measurement partitioning process, which may degrade the quality of the multisensor measurement partitions and, consequently, the filtering performance of the MS-MeMBer filter in high-clutter-density scenarios. To solve this problem, a robust ET-MS-MeMBer filter is proposed in this paper. By introducing the ratio of the target likelihood to the clutter likelihood, the influence of the clutter measurement set is taken into account, so that better measurement partitions can be obtained under the original two-step partitioning mechanism [14]. The single-target multisensor likelihood function for the clutter case is also derived. Simulation results show that, compared with the ET-MS-MeMBer filter, the proposed filter achieves higher accuracy in estimating the target cardinality and target states under high-clutter-density conditions.

The rest of this paper is organized as follows. Section 2 gives a review of the ET-MS-MeMBer filter. The proposed filter is derived in Section 3. The simulation results are presented in Section 4, and the conclusions are given in Section 5.

2. The ET-MS-MeMBer Filter

2.1. The Prediction Step

Let $x$ and $X_k$ denote a single-target state and the multitarget state, respectively, where $N_k = |X_k|$ is the cardinality of targets at time $k$. Assume that, at time $k-1$, the multitarget posterior density is a multi-Bernoulli
$$\pi_{k-1} = \left\{ \left( r_{k-1}^{(j)}, p_{k-1}^{(j)} \right) \right\}_{j=1}^{M_{k-1}},$$
where each pair $\left( r_{k-1}^{(j)}, p_{k-1}^{(j)} \right)$ represents an independent Bernoulli density with existence probability $r_{k-1}^{(j)}$ and spatial distribution $p_{k-1}^{(j)}$.

Then, the predicted multitarget density is also a multi-Bernoulli
$$\pi_{k|k-1} = \left\{ \left( r_{P,k|k-1}^{(j)}, p_{P,k|k-1}^{(j)} \right) \right\}_{j=1}^{M_{k-1}} \cup \left\{ \left( r_{\Gamma,k}^{(j)}, p_{\Gamma,k}^{(j)} \right) \right\}_{j=1}^{M_{\Gamma,k}},$$
where $\left( r_{\Gamma,k}^{(j)}, p_{\Gamma,k}^{(j)} \right)$ and $M_{\Gamma,k}$ are the Bernoulli densities and the cardinality of newborn targets at time $k$. The surviving Bernoulli densities follow
$$r_{P,k|k-1}^{(j)} = r_{k-1}^{(j)} \left\langle p_{k-1}^{(j)}, p_{S,k} \right\rangle, \qquad p_{P,k|k-1}^{(j)}(x) = \frac{\left\langle f_{k|k-1}(x \mid \cdot)\, p_{S,k},\, p_{k-1}^{(j)} \right\rangle}{\left\langle p_{k-1}^{(j)}, p_{S,k} \right\rangle},$$
where $p_{S,k}$ is the probability that a target survives to the next time step, $\langle \cdot, \cdot \rangle$ is the inner product, and $f_{k|k-1}(\cdot \mid \cdot)$ is the single-target state transition function.
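As a simple illustration of the prediction of the existence probability above, the following Python sketch evaluates the inner product with a weighted-sample approximation of the spatial density; the sample representation and function names are illustrative assumptions, since the implementation in this paper uses the GIW mixture of Section 3.2.

```python
import numpy as np

def predict_existence(r_prev, samples, weights, p_survive):
    """Sketch of r_pred = r_{k-1} * <p_{k-1}, p_{S,k}> for one Bernoulli component.

    r_prev    : existence probability at time k-1
    samples   : iterable of state samples approximating p_{k-1}
    weights   : normalized sample weights (NumPy array)
    p_survive : callable returning the survival probability p_{S,k}(x)
    """
    inner = float(np.sum(weights * np.array([p_survive(x) for x in samples])))
    return r_prev * inner
```

The Bernoulli components of the newborn targets are then simply appended to the predicted multi-Bernoulli set.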

2.2. The Distance Partition and Multisensor Measurement Partition

The update step of the ET-MS-MeMBer filter requires multisensor measurement partitions. Since each extended target may generate multiple measurements at each scan, our first task is to divide each sensor’s measurements into groups that contain measurements from the same target. Here, we utilize the distance partition method [17], which is based on the general assumption that measurements from the same extended target are spatially close to each other.

Let $Z_k^i = \{ z_k^{i,l} \}_{l=1}^{n}$ be the collection of all measurements generated by sensor $i$ at time $k$, where $z_k^{i,l}$ is the $l$-th element of $Z_k^i$ and $n$ is the number of measurements. A distance partition of $Z_k^i$ can be denoted as $P^i = \{ W^{i,0}, W^{i,1}, \ldots, W^{i,|P^i|-1} \}$, where $|\cdot|$ is the cardinality function. Each disjoint subset $W^{i,l}$ in partition $P^i$ ($W^{i,0}$ for the clutter measurement subset and $W^{i,l}$, $l \geq 1$, for the target measurement subsets) contains several measurements of $Z_k^i$, and the subsets jointly satisfy $\bigcup_{l} W^{i,l} = Z_k^i$. $\mathcal{P}^i$ is the collection of all possible distance partitions $P^i$.
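For concreteness, the following Python sketch computes one distance partition of a single sensor's measurement set by linking measurements whose mutual distance falls below a gating threshold and collecting the connected components; the threshold value and the clustering routine are illustrative assumptions (in [17], several thresholds are evaluated so that a set of alternative partitions is produced).

```python
import numpy as np

def distance_partition(Z, threshold):
    """Group measurements that are within `threshold` of each other.

    Z         : (n, 2) NumPy array of planar measurements from one sensor
    threshold : gating distance; measurements closer than this are linked

    Returns a list of index lists, one per cell of the partition.
    """
    n = len(Z)
    # Adjacency: measurements i and j are linked if their distance is small.
    dists = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    linked = dists < threshold

    labels = -np.ones(n, dtype=int)
    current = 0
    for seed in range(n):
        if labels[seed] >= 0:
            continue
        # Depth-first search over the link graph to collect one cell.
        stack, labels[seed] = [seed], current
        while stack:
            i = stack.pop()
            for j in np.flatnonzero(linked[i]):
                if labels[j] < 0:
                    labels[j] = current
                    stack.append(j)
        current += 1
    return [list(np.flatnonzero(labels == c)) for c in range(current)]
```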

Multisensor measurement partitioning aims to group all sensors’ measurements originating from the same target into the same subset. We consider the multisensor measurement set $Z_k = \{ Z_k^1, \ldots, Z_k^S \}$, where $S$ is the number of sensors. Suppose that, at time $k$, the predicted multitarget density contains $M_{k|k-1}$ Bernoulli components. For a given set of distance partitions $\{ P^1, \ldots, P^S \}$, a multisensor measurement partition can be defined as $\mathcal{U} = \{ \mathcal{W}^0, \mathcal{W}^1, \ldots, \mathcal{W}^{M_{k|k-1}} \}$, where each $\mathcal{W}^l$ is formed from subsets of the sensors’ distance partitions. Here, $\mathcal{W}^0$ is the clutter measurement set generated by all $S$ sensors, and $\mathcal{W}^l$ ($l = 1, \ldots, M_{k|k-1}$) is the target measurement set corresponding to the $l$-th Bernoulli component. The collection of all possible multisensor measurement partitions $\mathcal{U}$ is defined accordingly. Finally, we define a mapping function that, for each subset of $\mathcal{U}$, specifies the sensor label $i$ and the index of the corresponding subset in the distance partition $P^i$.
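The bookkeeping implied by this definition can be illustrated with the sketch below; the container layout and names are assumptions made for the illustration only, not the paper's data structures.

```python
# Illustrative bookkeeping for one multisensor measurement partition U.
# cells[i] is the distance partition of sensor i: a list of index lists.
# assignment[i][l] gives the Bernoulli index (1..M) or 0 (clutter) chosen
# for cell l of sensor i.

def build_multisensor_partition(cells, assignment, num_bernoulli):
    """Collect, for each Bernoulli component and for the clutter set,
    the measurement cells assigned to it across all sensors."""
    partition = {j: [] for j in range(num_bernoulli + 1)}  # key 0 is the clutter set
    for i, sensor_cells in enumerate(cells):
        for l, cell in enumerate(sensor_cells):
            j = assignment[i][l]
            partition[j].append((i, cell))  # keep the sensor label with the cell
    return partition
```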

2.3. The Update Step

Suppose that, at time $k$, the predicted multitarget density is a multi-Bernoulli $\pi_{k|k-1} = \{ ( r_{k|k-1}^{(j)}, p_{k|k-1}^{(j)} ) \}_{j=1}^{M_{k|k-1}}$ consisting of $M_{k|k-1}$ independent Bernoulli components.

Then, the posterior density can be approximated by a multi-Bernoulli whose parameters are given below.

The updated existence probability and spatial distribution of each Bernoulli component follow the multisensor update equations of [14, 24], which involve the probability that a target is not detected by any sensor and the single-target multisensor likelihood function. Each multisensor partition $\mathcal{U}$ also receives a weight, which is calculated as the score of $\mathcal{U}$ divided by the sum of the scores of all considered partitions. The partition score is built from factors that quantify, respectively, how likely the set $\mathcal{W}^0$ is a clutter set and how likely the set $\mathcal{W}^j$ is associated with the $j$-th Bernoulli component. The clutter factor involves the $n$-th derivative of the clutter probability generating function of sensor $i$ and the clutter intensity of sensor $i$.
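The normalization of the partition weights described above can be sketched as follows; the callables `clutter_score` and `assoc_score` are placeholders for the clutter and association terms of [14], and only the division of each score by the total score follows directly from the text.

```python
import numpy as np

def partition_weights(partitions, clutter_score, assoc_score):
    """Weight each multisensor partition by its normalized score.

    partitions    : list of partitions; partition[0] is the clutter set and
                    partition[j] (j >= 1) the set assigned to Bernoulli j
    clutter_score : placeholder for the clutter term of the partition score
    assoc_score   : placeholder for the association term of Bernoulli j
    """
    scores = []
    for U in partitions:
        score = clutter_score(U[0])
        for j in range(1, len(U)):
            score *= assoc_score(U[j], j)
        scores.append(score)
    scores = np.asarray(scores, dtype=float)
    return scores / scores.sum()  # weight = score / sum of all partition scores
```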

3. The Robust ET-MS-MeMBer Filter

3.1. The Constraint of the Clutter Measurement Set

In [14], a two-step greedy partitioning mechanism is proposed, which can obtain several high-scoring measurement partitions. In this mechanism, the target measurement set corresponding to each Bernoulli component is first obtained separately; then, the target measurement sets corresponding to different Bernoulli components are combined to obtain all possible measurement partitions. However, it can be seen from (9) that the two-step greedy partitioning mechanism ignores the influence of the clutter measurement set $\mathcal{W}^0$. Considering the constraint of the clutter measurement set, the partition score can be rewritten as (11).

We introduce the function given in (12), which denotes the ratio of the likelihood that a measurement set $\mathcal{W}^j$ is generated by a target to the likelihood that it originates from clutter.

According to [24], the single-target multisensor likelihood function follows (13), where $\gamma$ is the mean number of measurements generated by an extended target, $p_D^i$ is the target detection probability of sensor $i$, $g^i(\cdot \mid x)$ is the measurement likelihood of sensor $i$, and $\kappa^i(\cdot)$ is the clutter intensity.

Hence, (12) becomes (14).

In (14), the newly introduced term denotes the single-target multisensor likelihood function for the clutter case and follows (15).

Finally, (11) becomes (16).

It can be seen from (16) that the influence of the clutter measurement set has been introduced into the two-step greedy partitioning mechanism.
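The way the ratio in (14) enters the two-step greedy mechanism can be sketched as follows: candidate measurement cells for a Bernoulli component are ranked by the ratio of the target likelihood to the clutter likelihood before the per-component selections are combined into full partitions. The likelihood callables below are placeholders standing in for (13) and (15); this is a sketch of the scoring idea, not the authors' implementation.

```python
def target_to_clutter_ratio(W, target_likelihood, clutter_likelihood):
    """Score a candidate measurement set W by the ratio of the likelihood
    that W was generated by a target to the likelihood that W is clutter."""
    return target_likelihood(W) / clutter_likelihood(W)

def greedy_select(candidate_cells, target_likelihood, clutter_likelihood, top_n):
    """First step of the two-step mechanism for one Bernoulli component:
    keep the top_n candidate cells ranked by the target-to-clutter ratio."""
    ranked = sorted(
        candidate_cells,
        key=lambda W: target_to_clutter_ratio(W, target_likelihood, clutter_likelihood),
        reverse=True,
    )
    return ranked[:top_n]
```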

3.2. The Numerical Implementation

We define the extended-target state as the combination of a kinematic vector, which contains the target position, velocity, and acceleration, and an extension matrix, which represents the target shape. Utilizing the GIW mixture model from [17, 24], we can derive a numerical implementation of the proposed filter.
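For reference, one GIW component of a Bernoulli density is fully described by a weight, a Gaussian mean and covariance for the kinematic part, and the inverse Wishart degrees of freedom and inverse scale matrix for the extension; the field names in the sketch below are illustrative.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GIWComponent:
    """One Gaussian inverse Wishart (GIW) component of a Bernoulli density."""
    weight: float    # mixture weight of this component
    m: np.ndarray    # Gaussian mean of the kinematic state (position, velocity, acceleration)
    P: np.ndarray    # Gaussian covariance of the kinematic state
    v: float         # inverse Wishart degrees of freedom
    V: np.ndarray    # inverse Wishart inverse scale matrix (target extension)
```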

Suppose that the distributions of the Bernoulli components at time $k-1$ and of the newborn targets at time $k$ are both GIW mixtures, i.e., weighted sums of products of a Gaussian density over the kinematic state and an inverse Wishart density over the extension.

Here, the mixture sizes are the numbers of GIW components of the two densities, each GIW component has a corresponding weight, and $\otimes$ denotes the Kronecker product. $\mathcal{N}(\cdot\,; m, P)$ is a Gaussian density with mean $m$ and covariance $P$, and $\mathcal{IW}(\cdot\,; v, V)$ is an inverse Wishart distribution with $v$ degrees of freedom and inverse scale matrix $V$.

Then, the distribution of the predicted Bernoulli component is also a GIW mixture, and the legacy Bernoulli component at time $k$ follows (19).

In (19), the Gaussian and the inverse Wishart parameters satisfy the prediction equations of [17, 24], where $F$ is the state transition matrix, $Q$ is the process noise covariance matrix, $\tau$ is a temporal decay constant, $T$ is the sampling period, $\theta$ is the maneuver correlation time, and $I_d$ is the identity matrix of size $d$.
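To illustrate how the Kronecker structure and the decay constant typically enter a GIW prediction, the sketch below builds a plain constant-acceleration transition matrix lifted to $d$ spatial dimensions and applies an exponential forgetting of the inverse Wishart degrees of freedom governed by $\tau$; it keeps a full-dimensional kinematic covariance for simplicity and omits the exact parameterization used in (19) and the subsequent equations, including the role of the maneuver correlation time $\theta$, which follow [17, 24].

```python
import numpy as np

def ca_transition(T, d=2):
    """Constant-acceleration transition matrix lifted to d spatial
    dimensions with a Kronecker product, F = F_ca (Kronecker) I_d."""
    F_ca = np.array([[1.0, T, 0.5 * T ** 2],
                     [0.0, 1.0, T],
                     [0.0, 0.0, 1.0]])
    return np.kron(F_ca, np.eye(d))

def predict_giw(m, P, v, V, F, Q, T, tau):
    """Kalman-style prediction of the kinematic part of one GIW component
    and exponential forgetting of the extension information."""
    d = V.shape[0]                      # spatial dimension of the extension
    m_pred = F @ m
    P_pred = F @ P @ F.T + Q
    v_pred = np.exp(-T / tau) * v       # decay of the IW degrees of freedom
    # Scale V so that the expected extension V / (v - d - 1) is unchanged.
    V_pred = (v_pred - d - 1.0) / (v - d - 1.0) * V
    return m_pred, P_pred, v_pred, V_pred
```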

If the distribution of the predicted Bernoulli component takes the GIW mixture form, then the updated Bernoulli density follows (26).

In (26), the weight of each GIW component follows from the corresponding likelihood ratio.

The update of the GIW components is a recursive procedure, as given in (28).

In (28), the updated GIW parameters follow the GIW measurement-update equations of [17, 24], where $H_k$ is the observation matrix at time $k$.

4. Experimental Results

In this section, we consider tracking 4 extended targets moving within an observation area of [−1000 m, 1000 m] × [−1000 m, 1000 m]. The target shape is modeled as an ellipse [17] with fixed major and minor axes. The simulation duration is 80 s. In the simulation, the constant acceleration (CA) model is considered. The target trajectories are shown in Figure 1. The initial positions and the survival times of all the targets are listed in Table 1.

In (22), the sampling period $T$, the temporal decay constant $\tau$, and the maneuver correlation time $\theta$ are constant throughout the simulation. In the two-step measurement partitioning method [14], the number of subsets retained for each Bernoulli component and the number of retained partitions are both capped. A pruning threshold is applied to the Bernoulli components, the number of GIW components of each Bernoulli density is limited, and separate merge thresholds are used for the Gaussian and the inverse Wishart components. We use the optimal subpattern assignment (OSPA) distance [26] as the metric to evaluate the filtering performance. The OSPA distance accounts for estimation errors in both position and cardinality and is parameterized by the order $p$ and the penalty factor (cutoff) $c$.
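For completeness, a minimal Python implementation of the OSPA distance between an estimated and a true set of positions is given below (with cutoff $c$ and order $p$ as in [26]); it is a straightforward transcription of the metric, not the authors' evaluation code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c, p):
    """OSPA distance between point sets X (m x d) and Y (n x d), given as NumPy arrays."""
    m, n = X.shape[0], Y.shape[0]
    if m == 0 and n == 0:
        return 0.0
    if m == 0 or n == 0:
        return float(c)          # only the cardinality penalty remains
    if m > n:                    # by convention, let X be the smaller set
        X, Y, m, n = Y, X, n, m
    # Pairwise distances, cut off at c.
    D = np.minimum(np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1), c)
    row, col = linear_sum_assignment(D ** p)               # optimal subpattern assignment
    cost = (D[row, col] ** p).sum() + (c ** p) * (n - m)   # localization + cardinality terms
    return float((cost / n) ** (1.0 / p))
```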

Now, we consider observations from 4 sensors. The sensors are located at [−900 m, 0 m], [0 m, −900 m], [900 m, 0 m], and [0 m, 900 m], respectively. Every sensor uses the same detection probability and clutter density parameter, and the measurement noise standard deviations along the x-y coordinates are specified per sensor. The mean number of measurements generated by each extended target is also fixed. Figure 2 shows the measurements collected by sensor 1. The states estimated by the proposed method and by the ET-MS-MeMBer filter are shown in Figures 3 and 4, respectively. We can observe that, compared with the ET-MS-MeMBer filter, the proposed method obtains more accurate state estimates. We also compare the filtering performance of the ET-MS-MeMBer filter with that of the proposed method over 50 Monte Carlo runs. Figure 5 shows the cardinality estimates of the two methods. The black line is the true target cardinality, and the magenta dotted line and the blue dotted line represent the estimated standard deviations of the ET-MS-MeMBer filter and the proposed filter, respectively. We can observe that, compared with the ET-MS-MeMBer filter, the proposed method yields a smaller standard deviation of the estimated cardinality. Figure 6 shows the OSPA errors of the two methods. Note that, for both methods, the delayed response to target birth and target death leads to the high peaks at the corresponding times. Nevertheless, the OSPA error of the proposed method is still smaller than that of the ET-MS-MeMBer filter.

We also consider the case of a varying number of sensors, with the same detection probability and clutter density parameter for each sensor $i$. As before, 50 Monte Carlo runs are performed. Figure 7 shows the time-averaged OSPA errors of the two methods, and Figure 8 shows the running times. We observe that, as the number of sensors increases, the time-averaged OSPA error of both methods decreases, albeit at the cost of growing computational time. Comparatively, the proposed method performs better.

Figure 9 shows the time-averaged OSPA errors of the two methods in scenarios with different clutter intensities, and the corresponding running times are shown in Figure 10. The clutter density parameter is varied over {5, 20, 40, 60, 80, 100}. In these experiments, the number of sensors is 3 and the detection probability is kept fixed. It is observed that the OSPA errors and running times of the two methods are similar in the low-clutter-density case, i.e., for the smallest clutter density of 5. As the clutter intensity increases, the filtering performance of both methods degrades. However, the proposed method requires less time and yields smaller OSPA errors when the clutter intensity is high.

5. Conclusions

In this paper, we propose a robust MS-MeMBer filter for extended-target tracking in clutter. We modify the multisensor measurement partitioning process by considering the constraint of the clutter measurement set, making it robust under different clutter intensities. Accordingly, we derive the single-target multisensor likelihood function for the clutter case. Simulation results show that, compared with the ET-MS-MeMBer filter, the proposed method achieves smaller OSPA errors with less time consumption under high-clutter-density conditions.

Data Availability

The simulation data used to support the findings of this study are included within the article. No measured data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.