Multimodal Multiobject Tracking by Fusing Deep Appearance Features and Motion Information

Liwei Zhang, Jiahong Lai, Zenghui Zhang, Zhen Deng, Bingwei He, and Yucheng He

Complexity, vol. 2020, Article ID 8810340, 10 pages, 2020. https://doi.org/10.1155/2020/8810340

Research Article | Open Access | Special Issue: Unmanned Autonomous Systems in Complex Environments

Academic Editor: Guang Li
Received: 06 Aug 2020; Revised: 17 Aug 2020; Accepted: 13 Sep 2020; Published: 25 Sep 2020

Abstract

Multiobject Tracking (MOT) is one of the most important abilities of autonomous driving systems. However, most of the existing MOT methods only use a single sensor, such as a camera, which has the problem of insufficient reliability. In this paper, we propose a novel Multiobject Tracking method by fusing deep appearance features and motion information of objects. In this method, the locations of objects are first determined based on a 2D object detector and a 3D object detector. We use the Nonmaximum Suppression (NMS) algorithm to combine the detection results of the two detectors to ensure the detection accuracy in complex scenes. After that, we use Convolutional Neural Network (CNN) to learn the deep appearance features of objects and employ Kalman Filter to obtain the motion information of objects. Finally, the MOT task is achieved by associating the motion information and deep appearance features. A successful match indicates that the object was tracked successfully. A set of experiments on the KITTI Tracking Benchmark shows that the proposed MOT method can effectively perform the MOT task. The Multiobject Tracking Accuracy (MOTA) is up to 76.40% and the Multiobject Tracking Precision (MOTP) is up to 83.50%.

1. Introduction

The objective of Multiobject Tracking (MOT) is to track multiple objects at the same time and estimate their current states, such as locations, velocities, and sizes, while maintaining their identities. Hence, the MOT is one of the most important abilities of autonomous systems, but it remains challenging because target objects may be occluded or interfered with by objects of similar shape. Owing to the rapid development of object detectors, several tracking-by-detection methods [1–5] have been proposed to address the MOT problem. Typically, the existing tracking-by-detection methods involve two main computational steps: object detection and tracking. These methods first detect the locations of objects and then compute the trajectories of the objects based on the detection results [6–8]. The accuracy of object tracking is highly related to the performance of object detection. Hence, a key requirement of the MOT is to track new targets that appear at any time and to recover lost targets from the detections and associate them again. However, most tracking-by-detection methods rely on vision-based object detection. In the case of occlusion and overexposure, vision-based object detection may lead to false associations with existing trajectories. For example, Figure 1(a) shows the failure of vehicle detection on an image with occlusion by humans, and Figure 1(b) shows that the camera is disabled under overexposure.

The scene of autonomous driving may contain multiple objects, and the states of the objects are usually uncertain [9, 10]. In this case, vision-based object detection is susceptible to occlusion or overexposure, which easily leads to false detections or loss of target tracking. Besides, one major challenge of the MOT is how to reduce incorrect identity switching. Because the tracked objects often have high similarities, it is challenging to track objects correctly and perform correct Re-Identification (Re-ID).

Multimodal data fusion has the potential to improve the stability and accuracy of the MOT. However, a majority of traditional methods that use the camera, LiDAR, or radar need to design hand-crafted features [11]. Hand-crafted features are often not of high precision, and it is difficult to guarantee the tracking performance. Hence, it is necessary to design a feature learning method that can automatically learn appearance features from raw visual data. Moreover, in autonomous driving systems, since the objects are moving rather than stationary, the motion information of objects should be integrated with the appearance features to achieve the MOT tasks. In addition, some MOT methods include depth information in the tracking process by using a depth camera in order to improve tracking performance. For example, Mehner et al. [12] used an ordinary camera to obtain 2D information of objects and a depth camera to obtain depth information to assist in locating the objects in world coordinates. Although this can improve accuracy, the depth camera has a small field of view and high noise and is easily affected by sunlight, so it is not as effective as LiDAR. Moreover, they only use the Kalman Filter for tracking, which does not work well in complex scenarios.

In this paper, we propose a multimodal MOT method by fusing the motion information and the deep appearance features of objects. This paper employs a 2D object detector, i.e., You Only Look Once (YOLOv3) [3], and a 3D object detector, i.e., PointRCNN [5], to process the RGB image and the laser point cloud, respectively. The combination of 2D detection and 3D detection helps improve the robustness of object detection. Then, the MOT is achieved by associating the motion information and the deep appearance features of the target object. A set of experiments on the KITTI Tracking Benchmark is performed to demonstrate the effectiveness of the proposed MOT method. Our contributions are summarized as follows:

(1) The 2D object detection based on the image and the 3D object detection based on the laser point cloud are combined to detect the locations of objects, which is robust against light changes and occlusion.

(2) We apply a CNN that is pretrained to discriminate vehicles on a large-scale vehicle Re-Identification dataset to automatically extract the deep appearance features of the target object without manually designing features.

(3) A multimodal MOT method is proposed by fusing the motion information and deep appearance features of the object to achieve the MOT task. In addition, the proposed method obtains competitive qualitative and quantitative tracking results on the KITTI tracking benchmark.

The rest of the paper is organized as follows. Section 2 introduces related works. Section 3 presents the proposed multimodal MOT method. Experiments and their results are presented in Section 4. Finally, the conclusion and future work are summarized in Section 5.

2. Related Work

This section provides an overview of two related research topics: multiobject tracking and object detection.

2.1. Multiobject Tracking

The problem of the MOT first appeared in the tracking of object trajectories, such as tracking multiple enemy aircraft or passing missiles. With the development of computer vision, researchers have proposed several MOT methods from different perspectives over the past few decades; for example, single-object tracking methods have been extended to support multiple objects. According to the data association, the existing MOT methods can be divided into two categories: offline and online methods. Offline methods [13–16] combine the detections of all frames in the sequence to obtain the object trajectories robustly. These methods need to construct a global graph structure, which leads to high computational complexity. In online MOT methods [17–20], the detections are associated with the existing trajectories frame by frame. Hence, online methods are more suitable for real-time tracking.

Most of the existing MOT methods rely on motion information produced by the Kalman Filter [21], the Hungarian algorithm with the Kalman Filter [17], the Particle Filter [22], or the probability hypothesis density filter [23]. However, in autonomous driving systems, due to the uncertainty of the scene, it is impossible to track objects stably using motion information alone. Therefore, more recent methods combine motion features with appearance features to improve the re-identification of target objects. Traditionally, the appearance features of objects were manually designed [24], which cannot provide reliable features, especially in complex scenes. Owing to the rapid development of deep learning, deep convolutional networks [9, 25, 26] have been widely used to extract appearance features from raw visual data. For example, Wojke et al. [17] used a CNN to extract pedestrian image features and measured the distance between features for pedestrian tracking.

2.2. Object Detection

Most of the existing 2D object detection methods are based on CNNs and can be divided into two-stage and one-stage detectors. Two-stage detectors, such as RCNN [27], Fast RCNN [28], Faster RCNN [1], and FPN [29], use Region Proposal Networks (RPN) to generate candidate regions and then perform bounding-box classification and regression. For example, RCNN starts by extracting a set of object proposals via selective search. Each proposal is then rescaled to a fixed-size image and fed into a CNN model trained on ImageNet. In this way, the presence of an object within each region is predicted and its category is recognized. Although two-stage detectors have made great progress, their main drawback is that redundant feature computation over a large number of overlapping proposals results in very slow detection.

One-stage detectors include YOLO [3, 30, 31], the Single Shot MultiBox Detector (SSD) [2], and RetinaNet [32]. These detectors do not need an RPN; they directly generate the category probabilities and bounding boxes of the objects in a single computation stage. For example, YOLO applies a single neural network to the whole image. This network divides the image into regions and simultaneously predicts the bounding boxes and probabilities for each region. Compared with two-stage detectors, one-stage detectors have a higher detection speed.

Because point-cloud data contains richer geometric features, 3D object detection has attracted more and more attention. Compared with 2D object detection, 3D object detection is more challenging because it needs to process the point clouds of the scene. Chen et al. [33] projected the point cloud to a bird's-eye view and used 2D CNNs to learn point-cloud features for 3D box generation. Song and Xiao [34, 35] divided the point cloud into equally spaced 3D voxels and used 3D CNNs to learn voxel features to generate 3D boxes. Shi et al. [36] used PointNet++ [37] to process the point-cloud inputs for 3D box generation. Besides, some methods [38, 39] estimate 3D bounding boxes based on images.

3. Method

This section introduces the proposed multimodal MOT method, which tracks multiple objects at the same time and records their trajectories. The proposed MOT method includes four main computations: object detection with Nonmaximum Suppression, motion information extraction, deep appearance feature learning, and object tracking with data association. Figure 2 shows an overview of the proposed MOT method. We combine the results of 2D object detection and 3D object detection so that the locations of objects can be detected robustly. Based on this, the motion information and appearance features of objects are computed, respectively. Finally, the motion information and appearance features of objects are associated to track the target objects.

3.1. Object Detection with NMS

The first task of the MOT is to detect the locations of objects in the scene. In this paper, we propose to combine the results of 2D object detection and 3D object detection for robust object detection. We use the 2D detector YOLOv3 [3], trained on the training set of the KITTI 2D object detection benchmark, and the 3D detector PointRCNN [5], trained on the training set of the KITTI 3D object detection benchmark.

The 2D detector processes the RGB image. The output of 2D object detection is a set of detections D_t = {d_1, ..., d_n}, where n is the number of objects at frame t. The 3D detector processes the point clouds collected from a LiDAR. The output of 3D object detection is B_t = {b_1, ..., b_m}, where m is the number of objects at frame t. For further calculation, we project each LiDAR point in 3D space into the 2D image space according to the combined camera and LiDAR calibration:

y = P_rect · R_rect · T_velo_to_cam · X, (1)

where y is the projected point in the RGB image and X denotes the 3D LiDAR point in homogeneous coordinates. P_rect and R_rect are the intrinsic camera parameters: P_rect is the camera projection matrix, and R_rect is the rectification matrix that makes the image planes co-planar. T_velo_to_cam projects the point X from the LiDAR coordinates onto the camera coordinate system. Both the intrinsic and extrinsic parameters are available in the KITTI dataset [40]. Figure 3 shows an example of point projections.
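As a rough sketch of this projection, the snippet below multiplies a homogeneous LiDAR point through the calibration chain of equation (1). The matrix values are invented placeholders for illustration, not the actual KITTI calibration (in KITTI these come from the per-sequence calibration files).

```python
import numpy as np

def project_lidar_to_image(X_velo, P_rect, R_rect, T_velo_to_cam):
    """Project one 3D LiDAR point (x, y, z) to 2D pixel coordinates (u, v)."""
    X_h = np.append(X_velo, 1.0)                  # homogeneous 4-vector
    y = P_rect @ R_rect @ T_velo_to_cam @ X_h     # 3-vector (u*w, v*w, w)
    return y[:2] / y[2]                           # perspective divide

# Identity extrinsics and a simple pinhole P_rect, purely illustrative
P_rect = np.array([[700.0, 0.0, 600.0, 0.0],
                   [0.0, 700.0, 180.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0]])
R_rect = np.eye(4)
T_velo_to_cam = np.eye(4)

uv = project_lidar_to_image(np.array([2.0, 1.0, 10.0]),
                            P_rect, R_rect, T_velo_to_cam)
```

With identity extrinsics, the point at depth 10 m maps to pixel (740, 250) under this toy intrinsic matrix.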

After the 3D point clouds are projected onto the image, two overlapping boxes will appear on the same object. This paper further uses the Nonmaximum Suppression (NMS) algorithm to remove the extra boxes. The NMS sorts all detection boxes by their scores and selects the box M with the highest score. All other detection boxes with a large overlapping area with M are suppressed by using a predefined threshold N_t:

IoU(M, b_i) ≥ N_t ⇒ remove b_i, (2)

where b_i is the detection box to be screened; when IoU(M, b_i) is greater than N_t, b_i will be removed. In our experiment, N_t is set to 0.7. Figure 4 shows a comparison of the detection results without NMS and with NMS.
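A minimal sketch of this NMS step, assuming boxes in (x1, y1, x2, y2) format with confidence scores and the paper's threshold of 0.7:

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, nt=0.7):
    """Keep the highest-scoring box, suppress overlaps with IoU > nt, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        m = order.pop(0)
        keep.append(m)
        order = [i for i in order if iou(boxes[m], boxes[i]) <= nt]
    return keep

# Two near-duplicate detections of the same car (e.g., one from the 2D
# detector and one projected from the 3D detector), plus a separate car
boxes = [(10, 10, 110, 60), (12, 12, 112, 62), (200, 30, 260, 80)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)
```

The near-duplicate pair overlaps with IoU above 0.7, so only the higher-scoring copy survives together with the non-overlapping box.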

3.2. Learning Object Appearance Features

Before implementing the MOT, we need to extract the appearance features of the object. This paper employs CNN to automatically learn the deep appearance features of objects from raw visual data. The CNN is trained on a large-scale benchmark dataset [41]. The dataset contains over 50,000 images of 776 vehicles captured by 20 cameras. Figure 5 shows several samples in this dataset.

Table 1 illustrates the architecture of the CNN used in this paper. The CNN model is inspired by the wide residual network [40, 42] and consists of two convolution layers and six residual blocks. The Dense layer 10 extracts a 128-dimensional global feature, and the final batch and L2-normalization layer projects the feature onto a unit hypersphere. The tracked vehicle image is resized to 224 × 224 and fed into the network. Finally, we obtain a 128-dimensional feature vector that is used as the deep appearance feature of the object.


Name | Patch size/stride | Output size

Conv 1 | 3 × 3/1 | 32 × 128 × 64
Conv 2 | 3 × 3/1 | 32 × 128 × 64
Max pool 3 | 3 × 3/2 | 32 × 64 × 32
Residual 4 | 3 × 3/1 | 32 × 64 × 32
Residual 5 | 3 × 3/1 | 32 × 64 × 32
Residual 6 | 3 × 3/2 | 64 × 32 × 16
Residual 7 | 3 × 3/1 | 64 × 32 × 16
Residual 8 | 3 × 3/2 | 128 × 16 × 8
Residual 9 | 3 × 3/1 | 128 × 16 × 8
Dense 10 | - | 128
Batch and L2 normalization | - | 128
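The final normalization layer of the table projects the 128-dimensional feature onto the unit hypersphere, which later lets cosine similarity be computed as a plain dot product. A small sketch of that step (the raw vector is a stand-in, not a real network output):

```python
import numpy as np

def l2_normalize(feat, eps=1e-12):
    """Project a feature vector onto the unit hypersphere (L2 norm = 1)."""
    return feat / max(np.linalg.norm(feat), eps)

raw = np.concatenate([[3.0, 4.0], np.zeros(126)])  # stand-in 128-d CNN output
emb = l2_normalize(raw)                            # now ||emb|| = 1
```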

3.3. Extraction of Object Motion Information

Since the objects are usually moving rather than stationary, it is necessary to extract the motion information of objects for the MOT. This paper employs the Kalman filter to predict the state of the object and then extract its motion information. We use eight parameters to describe the tracking state at frame k: the bounding box center position (u, v), the aspect ratio γ, the height h of the bounding box, and their four corresponding velocities in the image coordinate system.

Because the time interval between frames is very short, the motion can be regarded as a linear constant-velocity model. We obtain the predicted object state at the next frame and calculate the error covariance matrix between the predicted state and the true state:

x̂_k = A x_{k−1},  P̂_k = A P_{k−1} A^T + Q, (3)

where x̂_k is the predicted object state at frame k, A is the state transition matrix, x_{k−1} is the object state at frame k − 1, and Q is the covariance matrix of the process noise. Then, we can obtain the Kalman gain matrix K and calculate the estimated state x_k:

K = P̂_k H^T (H P̂_k H^T + R)^(−1),  x_k = x̂_k + K (z_k − H x̂_k), (4)

where z_k is the measured value and H is the conversion matrix from the state space to the measurement space. R is the covariance matrix of the measurement noise. Finally, the covariance matrix P_k is updated:

P_k = (I − K H) P̂_k. (5)
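The predict/update cycle above can be sketched as follows. For clarity this hedged example uses a reduced 2-state filter (position p and velocity v with a position-only measurement) rather than the paper's 8-dimensional state; the noise covariances are arbitrary illustrative values.

```python
import numpy as np

dt = 1.0
A = np.array([[1.0, dt],    # constant velocity: p' = p + v*dt, v' = v
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])  # we only measure position
Q = 0.01 * np.eye(2)        # process-noise covariance
R = np.array([[0.1]])       # measurement-noise covariance

def kf_step(x, P, z):
    """One predict + update cycle; returns the estimated state and covariance."""
    # Predict: equation-(3)-style state and error-covariance propagation
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: Kalman gain, state correction, covariance update
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_est = x_pred + K @ (z - H @ x_pred)
    P_est = (np.eye(2) - K @ H) @ P_pred
    return x_est, P_est

x, P = np.array([0.0, 1.0]), np.eye(2)       # start at p = 0 moving at v = 1
x, P = kf_step(x, P, np.array([1.05]))       # measurement near predicted p = 1
```

The corrected position lands between the prediction (1.0) and the measurement (1.05), weighted by the Kalman gain.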

3.4. Object Tracking Based on Data Association

The next step is to associate the deep appearance features and the motion information of the object for the MOT. First, this paper uses the Mahalanobis distance to measure the motion correlation between the predicted states of the Kalman Filter and the newly detected bounding boxes:

d^(1)(i, j) = (d_j − y_i)^T S_i^(−1) (d_j − y_i), (6)

where d_j denotes the jth bounding box detection, and y_i and S_i represent the mean and covariance of the ith predicted bounding box. A threshold t^(1) can be adjusted to control the minimum confidence of the motion information association between objects i and j. We denote this decision with an indicator b^(1)_{i,j}, as shown in equation (7). The indicator equals 1 if the Mahalanobis distance is smaller than or equal to the threshold t^(1), which is set to 9.4877 for our four-dimensional measurement space:

b^(1)_{i,j} = 1 if d^(1)(i, j) ≤ t^(1), and 0 otherwise. (7)
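A small sketch of this motion gate; the example uses a 2-D measurement with an identity covariance for brevity, while the paper gates a 4-D measurement space against the threshold 9.4877.

```python
import numpy as np

def mahalanobis_sq(d, y, S):
    """Squared Mahalanobis distance (d - y)^T S^-1 (d - y) between a
    detection d and a track's predicted measurement (mean y, covariance S)."""
    diff = d - y
    return float(diff @ np.linalg.inv(S) @ diff)

d1 = mahalanobis_sq(np.array([1.0, 0.0]),   # detection
                    np.array([0.0, 0.0]),   # predicted mean
                    np.eye(2))              # predicted covariance
gate_passed = d1 <= 9.4877                  # the paper's 4-D chi-square gate
```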

The above metric is a suitable association measure only when the motion uncertainty is low. In the image space, however, the Kalman filter framework provides only a rough prediction. Therefore, this paper also adopts a second metric, which measures the smallest cosine distance between the appearance features of the ith track and the jth detection:

d^(2)(i, j) = min{1 − r_j^T r_k^(i) | r_k^(i) ∈ R_i}, (8)

where r_j is the appearance feature vector of detection j and r_k^(i) represents a feature vector of the ith tracked object from the most recent frames; R_i is the gallery of the k most recent feature vectors of track i. In our experiment, the parameter k is set to a maximum of 100 available vectors. In addition, to determine whether the appearance features are related, we introduce a binary indicator b^(2)_{i,j}, as shown in equation (9). A threshold t^(2) is set for this indicator on a VeRi dataset:

b^(2)_{i,j} = 1 if d^(2)(i, j) ≤ t^(2), and 0 otherwise. (9)
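A sketch of this appearance metric: the smallest cosine distance between a detection's embedding and the gallery of a track's stored embeddings. Embeddings are assumed L2-normalized (as the network's final layer guarantees), so cosine similarity reduces to a dot product.

```python
import numpy as np

def smallest_cosine_distance(track_gallery, det_feat):
    """min over the track's stored features r of (1 - det_feat . r)."""
    return min(1.0 - float(np.dot(det_feat, r)) for r in track_gallery)

# Toy 2-D unit embeddings standing in for 128-d CNN features
gallery = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
d = smallest_cosine_distance(gallery, np.array([1.0, 0.0]))
```

The detection exactly matches the first stored embedding, so the smallest cosine distance is 0.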

The Mahalanobis distance determines whether the predicted position of the Kalman filter is related to a new detection, which is especially useful for short-term prediction, while the cosine distance considers the appearance of tracked objects, which is especially useful for recovering identities after a long period of occlusion. Therefore, this paper combines the two metrics using a weighted sum:

c_{i,j} = λ d^(1)(i, j) + (1 − λ) d^(2)(i, j), (10)

where we call an association admissible if b^(1)_{i,j} = 1 and b^(2)_{i,j} = 1. The hyperparameter λ controls the influence of each metric on the combined association. For example, when there is substantial object motion, the prediction of the constant-velocity motion model becomes less effective, so the appearance metric is made more significant by reducing the value of λ; on the contrary, when there are few vehicles on the road and no long-term partial occlusions, increasing the value of λ raises the importance of the motion distance metric.
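The gated, weighted combination of equation (10) can be sketched as below. The motion gate 9.4877 comes from the paper; the appearance threshold t2 = 0.3 and the weight lam = 0.25 are hypothetical placeholders, since the paper does not state its exact values here.

```python
def combined_cost(d1, d2, lam, t1=9.4877, t2=0.3):
    """Weighted cost lam*d1 + (1-lam)*d2, or None when either gate rejects.

    d1: squared Mahalanobis motion distance; d2: smallest cosine distance.
    t2 and lam are illustrative values, not the paper's tuned settings.
    """
    if d1 > t1 or d2 > t2:
        return None                       # association not admissible
    return lam * d1 + (1.0 - lam) * d2

c = combined_cost(d1=4.0, d2=0.2, lam=0.25)        # both gates pass
rejected = combined_cost(d1=12.0, d2=0.2, lam=0.25)  # motion gate fails
```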

Finally, our implementation considers a maximum number of frames for which a target is allowed to be lost. To avoid redundant computations, if a tracked object is not re-identified within this number of most recent frames since its last instantiation, it is assumed to have left the scene; if the object is seen again, a new ID is assigned to it. A new track is hypothesized whenever a detection cannot be associated with any existing track. If the predicted object position can be correctly associated with detections over a minimum number of consecutive frames, we confirm that a new tracked target has appeared.
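These lifecycle rules can be sketched with a minimal track class. The class and parameter names (min_hits, max_age) are hypothetical conveniences; the paper's own minimum-consecutive-match count is 3 (see the training parameters), while the maximum-lost-frames value is not restated here.

```python
class Track:
    """Toy track lifecycle: confirm after min_hits consecutive matches,
    delete after more than max_age consecutive missed frames."""

    def __init__(self, track_id, min_hits=3, max_age=30):
        self.id, self.min_hits, self.max_age = track_id, min_hits, max_age
        self.hits, self.misses, self.confirmed = 0, 0, False

    def mark_matched(self):
        self.hits += 1
        self.misses = 0
        if self.hits >= self.min_hits:
            self.confirmed = True          # candidate becomes a real track

    def mark_missed(self):
        self.misses += 1

    def is_deleted(self):
        return self.misses > self.max_age  # assumed to have left the scene

t = Track(track_id=1, min_hits=3, max_age=2)
for _ in range(3):
    t.mark_matched()   # matched in 3 consecutive frames -> confirmed
for _ in range(3):
    t.mark_missed()    # unmatched for more than max_age frames -> deleted
```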

4. Experiment

This section introduces the dataset, evaluation metric, training parameters, and experimental evaluation results in the experiments on the KITTI Tracking Benchmark.

4.1. Dataset

The proposed method was evaluated on the KITTI tracking benchmark [43]. The KITTI dataset was collected under 4 different scenarios: city, residential, road, and campus. Some samples of the KITTI dataset are shown in Figure 6. The dataset consists of 21 training sequences and 29 test sequences. For each sequence, LiDAR point clouds, RGB images, and calibration files are provided. In the training sequences, eight different classes are labeled, including car, pedestrian, and cyclist. The objects in the images are annotated with 3D and 2D bounding boxes across frames and have unique IDs. In this work, we used all 29 test sequences for model validation and evaluated only on the car subset because it has the most instances of all object types.

4.2. Evaluation Metric

The indexes used to evaluate the performance of the proposed MOT method were as follows:

(1) Mostly Tracked (MT): objects successfully tracked over at least 80% of their trajectories during their life span.
(2) Mostly Lost (ML): objects successfully tracked over less than 20% of their trajectories during their life span.
(3) Identity Switches (IDS): the number of times objects' identities change during their life span.
(4) Fragmentation (Frag): the number of times a trajectory is interrupted due to missing detections.
(5) FP and FN: the total numbers of false positives and false negatives (missed targets).
(6) Multiobject Tracking Accuracy (MOTA): combines three error sources, i.e., FP, FN, and IDS [44], as shown in equation (11), where t is the index of the frame and GT_t is the number of ground-truth objects:

MOTA = 1 − Σ_t (FN_t + FP_t + IDS_t) / Σ_t GT_t. (11)

(7) Multiobject Tracking Precision (MOTP): the alignment accuracy between the annotated and the predicted bounding boxes [44].
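The MOTA formula can be written out directly; the per-frame counts below are made-up numbers purely to exercise the computation.

```python
def mota(fn, fp, ids, gt):
    """MOTA = 1 - sum_t(FN_t + FP_t + IDS_t) / sum_t(GT_t).

    fn, fp, ids: per-frame error counts; gt: per-frame ground-truth counts.
    """
    return 1.0 - (sum(fn) + sum(fp) + sum(ids)) / sum(gt)

# Three frames with 10 ground-truth objects each and 5 total errors
score = mota(fn=[1, 0, 2], fp=[0, 1, 0], ids=[1, 0, 0], gt=[10, 10, 10])
```

Here 5 errors over 30 ground-truth objects give MOTA = 1 − 5/30 ≈ 0.833.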

4.3. Training Parameters

This paper trained the 2D detector, i.e., YOLOv3, on the training set of the KITTI 2D object detection benchmark [5] and trained the 3D detector, i.e., PointRCNN, on the training set of the KITTI 3D object detection benchmark [36]. The IOU threshold of the NMS module was set to N_t = 0.7. The minimum number of matched frames required to create a new trajectory was set to F_min = 3, and a maximum number of frames allowed to lose the target was also fixed. Because the prediction results of the Kalman Filter are rough and the KITTI dataset contains many scenes with long-term partial occlusions, we set λ to a small value so that the appearance metric carries more weight.

4.4. Qualitative Evaluation

We evaluated the proposed tracking method qualitatively by using the KITTI test sequence. Different scenarios including occlusions, clutter, parked vehicles, and false positives from detectors were considered in the qualitative evaluation.

Figure 7 shows an example from test sequence 0. Each vehicle was assigned a tracking ID as a reference. Despite the vehicles being parked compactly and in a cluttered manner, the proposed MOT method can continuously detect and track them. Moreover, from this figure we can see that, since the image is easily affected by the environment, such as illumination changes and partial occlusion, the shape of a detected target can change; in addition, the scale of the target object may vary greatly. In these cases, the proposed MOT method still obtained relatively high tracking performance. The experimental results show that our method can locate each car well, even in cluttered and strongly lit scenes, while keeping the ID of each car unchanged.

Figure 8 shows another example from the test sequence 1. Figure 8(a) shows that the object detector produces a false detection result, and Figure 8(b) shows the false positive of the detector is overcome by data association. In the case of transient errors in object detection, the proposed MOT method can still track the target stably. Hence, these experimental results demonstrated the robustness of the proposed MOT method.

4.5. Benchmark Results

We further evaluated the proposed MOT method on the KITTI Tracking Benchmark. In this evaluation, we compared against several published online MOT methods. The results are presented in Table 2. It can be seen that the proposed MOT method is very competitive. In particular, it returns the fewest identity switches while maintaining competitive MOTA scores, MOTP scores, and track fragmentations. The tracking accuracy is mainly affected by a large number of false positives; given their overall impact on the MOTA score, the combination of the 2D and 3D object detection results can significantly improve the performance of the MOT. Besides, because we limit the maximum number of frames a target may be lost and associate the object motion information with the appearance features, the proposed MOT method has the fewest identity switches. Therefore, the proposed MOT method can generate relatively stable trajectories of the target objects.


Method | MOTA (%)↑ | MOTP (%)↑ | MT (%)↑ | ML (%)↓ | IDS↓ | FRAG↓

SASN-MCF nano [45] | 70.86 | 82.65 | 58.00 | 7.85 | 443 | 975
SSP [46] | 72.72 | 78.55 | 53.85 | 8.00 | 185 | 932
CIWT [12] | 75.39 | 79.25 | 49.85 | 10.31 | 165 | 660
Complexer-YOLO [47] | 75.70 | 78.46 | 58.00 | 5.08 | 1186 | 2092
DSM [48] | 76.15 | 83.42 | 60.00 | 8.31 | 296 | 868

Ours | 76.40 | 83.50 | 47.38 | 14.00 | 147 | 608

4.6. Ablation Study

The ablation study evaluated the effects of the hyperparameters on the performance of the proposed MOT method. Table 3 shows the results of the ablation study on the KITTI benchmark. The hyperparameter N_t is the IOU threshold, and F_min denotes the minimum number of matched frames required to create a new trajectory. From the table, we can see that N_t = 0.6 may miss some correct detection results because the number of detected objects is reduced, whereas N_t = 0.8 may keep some wrong detection results, which is also why it has the most IDS. F_min = 1 means that tracking starts immediately when a new target is detected, which leads to more IDS and FRAG. F_min = 5 yields the minimum IDS, but a lower MOTA. Therefore, we finally set N_t = 0.7 and F_min = 3.


Parameter | MOTA (%)↑ | MOTP (%)↑ | MT (%)↑ | ML (%)↓ | IDS↓ | FRAG↓

N_t = 0.6 | 71.27 | 82.92 | 43.73 | 8.42 | 59 | 409
N_t = 0.8 | 70.09 | 83.11 | 49.64 | 6.81 | 289 | 559
F_min = 1 | 72.05 | 82.70 | 46.01 | 5.02 | 161 | 548
F_min = 5 | 68.13 | 83.30 | 32.97 | 13.26 | 35 | 229

Ours | 72.36 | 83.94 | 47.31 | 7.36 | 94 | 395

5. Conclusion

This paper proposed a multimodal MOT method by fusing the motion information and the deep appearance feature of objects. In this method, we use a Nonmaximum Suppression algorithm to combine a 2D object detector and a 3D object detector for robust object detection. Then, the deep appearance features of objects are learned by a CNN, and the motion information of objects is computed by the Kalman Filter. The MOT task is achieved by associating the appearance features and the motion information of the target object. The effectiveness of the proposed MOT method was demonstrated in a set of experiments. The proposed MOT method can track objects stably in crowded scenes and effectively avoid false detection. In the KITTI tracking benchmark, the proposed method also shows competitive results.

Although 3D object detection is used in the proposed MOT method, it serves only as auxiliary information for 2D object detection. 3D object detection can provide accurate position and size estimation for autonomous driving. Therefore, our future work will move toward 3D multiobject tracking that can adapt to more complex environments.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Project no. 61673115). This work was also partly funded by the German Science Foundation (DFG) and National Science Foundation of China (NSFC) in project Cross Modal Learning under contract Sonderforschungsbereich Transregio 169.

References

  1. S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” Neural Information Processing Systems, vol. 39, pp. 91–99, 2015. View at: Google Scholar
  2. W. Liu, “SSD: single Shot MultiBox detector,” in Proceedings of the European Conference on Computer Vision, pp. 21–37, Amsterdam, Netherlands, October 2016. View at: Google Scholar
  3. R. Joseph and A. Farhadi, “Yolov3: an incremental improvement,” 2018, https://arxiv.org/pdf/1804.02767.pdf. View at: Google Scholar
  4. C. Zhu, Y. He, and M. Savvides, “Feature selective anchor-free module for single-shot object detection,” in Proceedings of the Computer Vision and Pattern Recognition, pp. 840–849, Hubballi, India, December 2019. View at: Google Scholar
  5. D. Maturana and S. Scherer, “VoxNet: a 3D convolutional neural network for real-time object recognition,” in Proceedings of the Intelligent Robots and Systems, pp. 922–928, Hamburg, Germany, September 2015. View at: Google Scholar
  6. S. Sharma, J. A. Ansari, J. K. Murthy, and K. M. Krishna, “Beyond pixels: leveraging geometry and shape cues for online multi-object tracking,” in Proceedings of the International Conference on Robotics and Automation, pp. 3508–3515, Brisbane, Australia, May 2018. View at: Google Scholar
  7. M. D. Breitenstein, F. Reichlin, B. Leibe, E. Koller-Meier et al., “Online multiperson tracking-by-detection from a single, uncalibrated camera,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 9, pp. 1820–1833, 2011. View at: Publisher Site | Google Scholar
  8. H. Zhou, W. Ouyang, J. Cheng, X. Wang et al., “Deep continuous conditional random fields with asymmetric inter-object constraints for online multi-object tracking,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 4, pp. 1011–1022, 2019. View at: Publisher Site | Google Scholar
  9. W. Luo, B. Yang, and R. Urtasun, “Fast and furious: real time end-to-end 3D detection, tracking and motion forecasting with a single convolutional net,” in Proceedings of the Computer Vision and Pattern Recognition, pp. 3569–3577, Salt Lake City, UT, USA, June 2018. View at: Google Scholar
  10. N. Smolyanskiy, A. Kamenev, and S. Birchfield, “On the importance of stereo for accurate depth estimation: an efficient semi-supervised deep neural network approach,” in Proceedings of the Computer Vision and Pattern Recognition, pp. 1007–1015, Salt Lake City, UT, USA, June 2018. View at: Google Scholar
  11. A. Asvadi, P. Girao, P. Peixoto, and U. Nunes, “3D object tracking using RGB and LIDAR data,” in Proceedings of the IEEE International Conference on Intelligent Transportation Systems, Rio de Janeiro, Brazil, November 2016. View at: Google Scholar
  12. A. Osep, W. Mehner, M. Mathias, and B. Leibe, “Combined image- and world-space tracking in traffic scenes,” in Proceedings of the International Conference on Robotics and Automation, pp. 1988–1995, Singapore, June 2017. View at: Google Scholar
  13. E. Levinkov, J. Uhrig, S. Tang, M. Omran et al., “Joint graph decomposition & node labeling: problem, algorithms, applications – supplement –,” in Proceedings of the IEEE Conf. Comput. Vis. Pattern Recognit, pp. 1904–1912, Honolulu, HI, USA, July 2017. View at: Google Scholar
  14. M. Keuper, S. Tang, B. Andres, T. Brox et al., “Motion segmentation & multiple object tracking by correlation co-clustering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 1, pp. 140–153, 2020. View at: Publisher Site | Google Scholar
  15. B. Wang, G. Wang, K. L. Chan, and L. Wang, “Tracklet association with online target-specific metric learning,” in Proceedings of the Computer Vision and Pattern Recognition, pp. 1234–1241, Columbus, OH, USA, June 2014. View at: Google Scholar
  16. C. Wang, H. Liu, and Y. Gao, “Scene-adaptive hierarchical data association for multiple objects tracking,” IEEE Signal Processing Letters, vol. 21, no. 6, pp. 697–701, 2014. View at: Publisher Site | Google Scholar
  17. N. Wojke, A. Bewley, and D. Paulus, “Simple online and realtime tracking with a deep association metric,” in Proceedings of the International Conference on Image Processing, pp. 3645–3649, Beijing, China, September 2017. View at: Google Scholar
  18. L. Zhang and L. V. Der Maaten, “Structure preserving object tracking,” in Proceedings of the Computer Vision and Pattern Recognition, pp. 1838–1845, Cancun, Mexico, December 2013. View at: Google Scholar
  19. S. H. Bae and K. J. Yoon, “Robust online multi-object tracking based on tracklet confidence and online discriminative appearance learning,” in Proceedings of the Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, June 2014. View at: Google Scholar
  20. F. Poiesi, R. Mazzon, and A. Cavallaro, “Multi-target tracking on confidence maps: an application to people tracking,” Computer Vision and Image Understanding, vol. 117, no. 10, pp. 1257–1272, 2013. View at: Publisher Site | Google Scholar
  21. A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, “Simple online and realtime tracking,” in Proceedings of the International Conference on Image Processing, pp. 3464–3468, Phoenix, AZ, USA, September 2016. View at: Google Scholar
  22. B. Liu, S. Cheng, and Y. Shi, “Particle filter optimization: a brief introduction,” in Proceedings of the International Conference on Swarm Intelligence, pp. 95–104, Springer, 2016. View at: Google Scholar
  23. S. A. Goli, B. H. Far, and A. O. Fapojuwo, “An accurate multi-sensor multi-target localization method for cooperating vehicles,” in Information Reuse and Integration, pp. 197–217, Springer, 2016. View at: Google Scholar
  24. B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proceedings of the International Joint Conference on Artificial Intelligence, pp. 674–679, Vancouver, Canada, August 1981. View at: Google Scholar
  25. L. Chen, H. Ai, R. Chen, and Z. Zhuang, “Aggregate tracklet appearance features for multi-object tracking,” IEEE Signal Processing Letters, vol. 26, no. 11, pp. 1613–1617, 2019. View at: Publisher Site | Google Scholar
  26. X. Su, X. Qu, Z. Zou, P. Zhou et al., “K-reciprocal harmonious attention network for video-based person Re-identification,” IEEE Access, vol. 7, pp. 22457–22470, 2019. View at: Publisher Site | Google Scholar
  27. R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the Computer Vision and Pattern Recognition, pp. 580–587, Columbus, OH, USA, June 2014. View at: Google Scholar
  28. R. Girshick, “Fast R-CNN,” in Proceedings of the International Conference on Computer Vision, pp. 1440–1448, Santiago, Chile, December 2015. View at: Google Scholar
  29. T. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in Proceedings of the Computer Vision and Pattern Recognition, pp. 936–944, Honolulu, HI, USA, July 2017. View at: Google Scholar
  30. J. Redmon, S. K. Divvala, R. Girshick, and A. Farhadi, “You only look once: unified, real-time object detection,” in Proceedings of the Computer Vision and Pattern Recognition, pp. 779–788, Las Vegas, NV, USA, June 2016. View at: Google Scholar
  31. J. Redmon and A. Farhadi, “YOLO9000: better, faster, stronger,” in Proceedings of the Computer Vision and Pattern Recognition, pp. 6517–6525, Honolulu, HI, USA, July 2017. View at: Google Scholar
  32. T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar, “Focal loss for dense object detection,” in Proceedings of the International Conference on Computer Vision, pp. 2999–3007, Venice, Italy, October 2017. View at: Google Scholar
  33. X. Chen, H. Ma, J. Wan, B. Li, and T. Xia, “Multi-view 3D object detection network for autonomous driving,” in Proceedings of the Computer Vision and Pattern Recognition, pp. 6526–6534, Honolulu, HI, USA, July 2017. View at: Google Scholar
  34. S. Song and J. Xiao, “Deep sliding shapes for amodal 3D object detection in RGB-D images,” in Proceedings of the Computer Vision and Pattern Recognition, pp. 808–816, Las Vegas, NV, USA, June 2016. View at: Google Scholar
  35. Y. Zhou and O. Tuzel, “VoxelNet: end-to-end learning for point cloud based 3D object detection,” in Proceedings of the Computer Vision and Pattern Recognition, pp. 4490–4499, Salt Lake City, UT, USA, June 2018. View at: Google Scholar
  36. S. Shi, X. Wang, and H. Li, “PointRCNN: 3D object proposal generation and detection from point cloud,” in Proceedings of the Computer Vision and Pattern Recognition, pp. 770–779, Long Beach, CA, USA, June 2019. View at: Google Scholar
  37. C. R. Qi, L. Yi, H. Su, and L. J. Guibas, “PointNet++: deep hierarchical feature learning on point sets in a metric space,” in Proceedings of the NIPS, Long Beach, CA, USA, December 2017. View at: Google Scholar
  38. J. Ku, A. D. Pon, and S. L. Waslander, “Monocular 3D object detection leveraging accurate proposals and shape reconstruction,” in Proceedings of the Computer Vision and Pattern Recognition, pp. 11867–11876, Long Beach, CA, USA, June 2019. View at: Google Scholar
  39. Z. Deng and L. J. Latecki, “Amodal detection of 3D objects: inferring 3D bounding boxes from 2D ones in RGB-depth images,” in Proceedings of the Computer Vision and Pattern Recognition, pp. 398–406, Honolulu, HI, USA, July 2017. View at: Google Scholar
  40. S. Zagoruyko and N. Komodakis, “Wide residual networks,” in Proceedings of the BMVC, York, UK, September 2016. View at: Google Scholar
  41. X. Liu, W. Liu, T. Mei, and H. Ma, “PROVID: progressive and multimodal vehicle reidentification for large-scale urban surveillance,” IEEE Transactions on Multimedia, vol. 20, no. 3, pp. 645–658, 2018. View at: Publisher Site | Google Scholar
  42. N. Wojke and A. Bewley, “Deep cosine metric learning for person Re-identification,” in Proceedings of the Workshop on Applications of Computer Vision, pp. 748–756, Lake Tahoe, NV, USA, March 2018. View at: Google Scholar
  43. A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? the kitti vision benchmark suite,” in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, June 2012. View at: Google Scholar
  44. K. Bernardin and R. Stiefelhagen, “Evaluating multiple object tracking performance: the CLEAR MOT metrics,” EURASIP Journal on Image and Video Processing, vol. 2008, 10 pages, 2008. View at: Publisher Site | Google Scholar
  45. G. Gündüz and T. Acarman, “Efficient multi-object tracking by strong associations on temporal window,” IEEE Transactions on Intelligent Vehicles, 2019. View at: Google Scholar
  46. P. Lenz, A. Geiger, and R. Urtasun, “FollowMe: efficient online min-cost flow tracking with bounded memory and computation,” in Proceedings of the International Conference on Computer Vision, pp. 4364–4372, Santiago, Chile, December 2015. View at: Google Scholar
  47. M. Simon, “Complexer-YOLO: real-time 3D object detection and tracking on semantic point clouds,” in Proceedings of the Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, June 2019. View at: Google Scholar
  48. D. Frossard and R. Urtasun, “End-to-end learning of multi-sensor 3D tracking by detection,” in Proceedings of the International Conference on Robotics and Automation, pp. 635–642, Brisbane, Australia, May 2018. View at: Google Scholar

Copyright © 2020 Liwei Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
