Complexity, 2021 | Research Article | Open Access
Special Issue: Artificial Intelligence for Smart System Simulation

Zeqing Zhang, Weiwei Lin, Yuqiang Zheng, "Multidirection Object Detection in Aerial View of Traffic Target under Complex Scenes", Complexity, vol. 2021, Article ID 5597168, 9 pages, 2021. https://doi.org/10.1155/2021/5597168

Multidirection Object Detection in Aerial View of Traffic Target under Complex Scenes

Academic Editor: Abd El-Baset Hassanien
Received: 30 Jan 2021
Revised: 21 Feb 2021
Accepted: 31 Mar 2021
Published: 12 Apr 2021

Abstract

Focusing on DOTA, a multidirectional object dataset of vehicles in aerial view, this paper proposes CMDTD, a cascaded multidirectional object detection algorithm. The paper first analyzes why general object detection algorithms are difficult to apply to multidirectional object detection. On this basis, the detection principle of CMDTD, including its backbone network and its multidirectional multi-information detection end module, is studied. In addition, in view of the complexity of the scenes faced in aerial views of vehicles, a dedicated data expansion method is proposed. Finally, experiments on three datasets show that the cascaded multidirectional object detection algorithm is highly effective and superior to the compared methods.

1. Introduction

With the development of deep learning, rapid progress has been made in remote sensing and aerial image processing and analysis [1–5]. However, these methods cannot deal with multidirectional object detection. Unlike traditional object detection, in which the detection frame is generally a horizontal or vertical rectangle, the detection frame given by multidirectional object detection can be a rectangle in any direction. Recently, several popular general object detection algorithms have proven ineffective on vehicle object detection datasets, with the best result reaching only 52.93%. Judging from experimental results on the DOTA dataset [6], the rotation of the object bounding box, the large number of small vehicle objects in aerial images, and insufficient utilization of data information are the main causes of this poor performance.

To improve detection on multidirectional aerial-view vehicle datasets, CMDTD (a cascaded multidirectional object detection algorithm) takes Faster R-CNN as the baseline method and borrows the cascade idea of Cascade R-CNN. Vehicle objects are classified in a coarse-to-fine manner, and boundary prediction is refined via a multi-information cascade output end. To address sample imbalance, data augmentation is performed on the classes with few samples in the training data. This method effectively improves multidirectional object detection and therefore has research value for multidirectional vehicle object detection in complex scenes.

2. Research Models

2.1. ResNeXt Backbone Network

Compared with land vehicles, although there are few classes of aerial vehicle objects, the differences between classes are large. To extract better object features, ResNeXt [7], which is more powerful than ResNet, is selected as the backbone network of CMDTD, together with the feature fusion method of FPN [8]. The submodule structures of ResNet and ResNeXt are shown in Figure 1. The submodule of ResNet is composed of three convolutional layers. For an input feature map X, the three convolution operations extract a new feature F(X); the residual block then outputs the superposition of the new and old features. The process can be expressed as

Y = F(X) + X. (1)

According to equation (1), the feature F(X) is the difference between Y and X, which is called the residual feature, and the submodule is called a residual block. ResNeXt expands this residual structure.

As shown in Figure 1, ResNeXt divides the three convolutional layers into 32 groups of convolutional combinations with identical parameter sizes, whose total equals the size of the three original convolutional layers. The feature map X passes through the 32 groups of convolutions, and the resulting group features are summed; the new features are then superimposed on the old features to obtain the output, which is shown as

Y = X + F1(X) + F2(X) + ... + F32(X). (2)

Although the new combination does not increase the number of parameters, it increases the complexity of the feature transformation, strengthening the network's ability to express features.
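The claim that the grouped combination keeps the parameter count essentially unchanged can be checked with a quick count. The sketch below assumes the standard ResNeXt-50 bottleneck configuration (256 input channels, cardinality 32, group width 4), which the paper does not spell out:

```python
def conv_params(c_in, c_out, k, groups=1):
    # Weight count of a conv layer (biases omitted): each output
    # channel sees only c_in/groups input channels.
    return (c_in // groups) * c_out * k * k

# ResNet bottleneck: 256 -> 1x1, 64 -> 3x3, 64 -> 1x1, 256
resnet = (conv_params(256, 64, 1)
          + conv_params(64, 64, 3)
          + conv_params(64, 256, 1))

# ResNeXt bottleneck: 256 -> 1x1, 128 -> 3x3 in 32 groups -> 1x1, 256
resnext = (conv_params(256, 128, 1)
           + conv_params(128, 128, 3, groups=32)
           + conv_params(128, 256, 1))

print(resnet, resnext)  # 69632 70144: within about 1% of each other
```

Despite the near-identical parameter budget, the 32 parallel transformations yield a richer feature transformation, which is the effect described above.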

2.2. Multi-Information Cascade Output

The output end of a general detector and the multidirection, multi-information detection output end proposed in this paper are shown in Figure 2. As shown in Figure 2(a), the output end of a general detector performs class determination and border prediction after acquiring region proposals from the RPN (region proposal network). The multi-information cascade output end proposed in this paper is consistent with the general detector in the first half: it regresses to the horizontal outer border of the object, but at this stage class determination only decides whether the object is foreground or background. Once the horizontal frame of the object is obtained, its size information (length, width, and aspect ratio) can be calculated. Next, RoI pooling extracts new object features from the acquired horizontal frame. Finally, a second FCN (fully connected network) performs fine classification of the object based on the extracted features and the size information, while a third FCN predicts the positions of the object's four vertices from the extracted features, yielding the quadrilateral bounding box of the object.

Compared with land vehicle detection, object sizes in aerial views of vehicles are diverse, making position regression more difficult. A more precise object boundary can be obtained in the second stage than from the object proposal region, which aids subsequent boundary positioning. Regarding class determination, objects of different classes may share texture and color features yet differ in size. For example, small vehicles and large vehicles with similar color characteristics can be distinguished by object size and aspect ratio. Hence, introducing the aspect ratio information of the object during fine classification can improve classification accuracy. In the implementation, the length, width, and aspect ratio calculated from the horizontal frame are obtained at the detection end, reduced by a factor of 1000, and fed into the FCN together with the features.
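As a rough illustration of how these size cues enter the fine classifier, the sketch below scales the length, width, and aspect ratio of a horizontal box by 1/1000 and appends them to a pooled feature vector; the function names and the plain-list feature representation are illustrative, not the paper's implementation:

```python
def size_features(x1, y1, x2, y2, scale=1000.0):
    # Length, width, and aspect ratio of the horizontal frame,
    # reduced by a factor of `scale` before entering the FCN.
    w, h = x2 - x1, y2 - y1
    return [w / scale, h / scale, (w / h) / scale]

def with_size_info(roi_features, box):
    # Concatenate pooled RoI features with the scaled size cues;
    # the joint vector feeds the fine-classification FCN.
    return roi_features + size_features(*box)

feat = with_size_info([0.2, 0.7], (10, 20, 110, 70))
print(feat)  # [0.2, 0.7, 0.1, 0.05, 0.002]
```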

During training, the output end is supervised by four losses: two position losses and two class losses. The position losses are the prediction losses of the horizontal boundary and of the vertices, while the class losses are the foreground classification loss and the fine classification loss. Smooth L1 loss is applied to the position losses and cross-entropy loss to the classification losses.
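A minimal sketch of the four-term loss, using the standard elementwise smooth L1 definition and passing the two cross-entropy terms in as precomputed scalars for brevity:

```python
def smooth_l1(pred, target, beta=1.0):
    # Smooth L1: quadratic near zero, linear for large errors.
    d = abs(pred - target)
    return 0.5 * d * d / beta if d < beta else d - 0.5 * beta

def total_loss(horiz, horiz_gt, verts, verts_gt, ce_fg, ce_fine):
    # Two position losses (horizontal box and vertices, smooth L1)
    # plus foreground and fine-classification cross-entropy terms.
    l_box = sum(smooth_l1(p, t) for p, t in zip(horiz, horiz_gt))
    l_vert = sum(smooth_l1(p, t) for p, t in zip(verts, verts_gt))
    return l_box + l_vert + ce_fg + ce_fine

print(total_loss([0.5], [0.0], [3.0], [0.0], 0.25, 0.125))  # 3.0
```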

2.3. Prediction of Vertex Information

The positioning of the object boundary at the detection end is presented in Figure 3. The object box is regressed three times at the detection end, producing, in order, the object candidate box, the object horizontal bounding box, and the rotated rectangular box, which correspond to the light blue dashed box, the blue dotted box, and the green rectangular box, respectively.

The light blue dashed box is far from the real boundary of the object because the region proposal network regresses to the object boundary according to the anchor; if the anchor position deviates from the object boundary, the regression effect is poor. In the experiment, the anchor aspect ratios are [1 : 1, 1 : 2, 2 : 1, 1 : 3, 3 : 1] with a base size of 6 pixels, so as to adapt to dense small objects and elongated objects. The blue dotted box is the horizontal bounding box regressed in the second stage. This regression approaches the real boundary of the object, as the regression target is the horizontal rectangle circumscribing the object. The green rectangular box is obtained by regressing the four vertices from the horizontal bounding box, with the prediction given by

x1 = x + w · tx1, y1 = y + h · ty1,

where x, y, w, and h are the center coordinates, width, and height of the horizontal predicted frame, respectively, and the fully connected network obtains the first vertex of the object by regressing tx1 and ty1. During training, the point closest to the upper left corner of the object's horizontal bounding box is taken as the first vertex, and the second, third, and fourth vertices follow in clockwise order. The regression targets are calculated as

tx1* = (x1* − x) / w, ty1* = (y1* − y) / h,

where x1* and y1* are the real coordinates of the vertex and tx1* and ty1* are the corresponding targets; the smooth L1 loss between predictions and targets is computed during training. The other vertices are handled in the same manner.
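The vertex parameterization can be written as an encode/decode pair; the box and vertex values below are illustrative:

```python
def encode_vertex(vx, vy, box):
    # box = (x, y, w, h): center and size of the horizontal frame.
    # Training target: tx1* = (x1* - x) / w, ty1* = (y1* - y) / h.
    x, y, w, h = box
    return (vx - x) / w, (vy - y) / h

def decode_vertex(tx, ty, box):
    # Inverse mapping used at inference to recover the vertex
    # from the network's regressed offsets.
    x, y, w, h = box
    return x + tx * w, y + ty * h

box = (50.0, 40.0, 100.0, 20.0)
t = encode_vertex(10.0, 35.0, box)     # (-0.4, -0.25)
assert decode_vertex(*t, box) == (10.0, 35.0)
```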

2.4. Unbalanced Data Distribution and Data Augmentation

As can be observed from Figure 4, dense vehicles and ships appear in aerial images because of the many scenes such as ports and parking lots. In the training set after cutting, ships and vehicles account for high proportions, while the other classes account for extremely low proportions.

To cope with the problem of unbalanced training data, images of the 5 classes with the lowest proportions (football field, athletic field, rugby field, baseball field, and roundabout) are expanded. According to the expansion process presented in Figure 5, a horizontal flip, a vertical flip, and a simultaneous horizontal and vertical flip are performed on each image, so each image of a low-proportion class gains three flipped copies. Since instances of other classes also appear in the augmented images, those classes are augmented to a lesser degree as a side effect. The augmentation ratios of all classes are shown in Figure 6.
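Each flip must also be applied to the quadrilateral labels. A minimal sketch, assuming integer pixel vertex coordinates (the coordinate convention is an assumption):

```python
def flip_vertices(verts, img_w, img_h, horizontal, vertical):
    # Mirror quadrilateral vertices [(x, y), ...] together with the
    # image; setting both flags gives the third augmentation variant.
    out = []
    for x, y in verts:
        if horizontal:
            x = img_w - 1 - x
        if vertical:
            y = img_h - 1 - y
        out.append((x, y))
    return out

quad = [(10, 10), (30, 10), (30, 20), (10, 20)]
print(flip_vertices(quad, 100, 50, True, False))
# [(89, 10), (69, 10), (69, 20), (89, 20)]
```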

3. Experimental Results

3.1. Experimental Dataset and Evaluation Indicators

Verification is performed on two vehicle-based datasets, DOTA [6] and HRSC2016 [9], and CMDTD is also verified on the multidirectional scene-text dataset ICDAR2015 [10]. As a large-scale aerial-view vehicle image dataset, DOTA contains 1,411 training images, 937 test images, and 458 validation images. Image sizes range from 800 × 800 to 4000 × 4000, covering 15 classes and 188,282 object instances. The dataset provides both horizontal rectangle labels and vertex labels, with two open detection tasks: the first is multidirectional object detection, and the second is horizontal object detection.

HRSC2016 is a dataset of aerial-view maritime vehicle images collected from 6 major ports. The training, validation, and test sets contain 436, 181, and 444 images, respectively, with image sizes ranging from 300 × 300 to 500 × 900. The ICDAR2015 dataset originates from a detection task of the ICDAR 2015 Robust Reading Competition and consists of images taken in real scenes. Specifically, 1,000 of the 1,500 images are training images, and the remaining 500 are test images with a size of 720 × 1280.

To compare with other methods, CMDTD adopts the standard mAP evaluation on DOTA and HRSC2016 and the F-measure on ICDAR2015. The F-measure is calculated from the recall rate and precision as

F = 2 × Precision × Recall / (Precision + Recall),

where the IoU threshold for counting a detection as correct is generally set to 0.5.
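The comprehensive score reconstructs as the standard F-measure (the harmonic mean of precision and recall); reading the 0.5 as the IoU threshold for a correct detection is an interpretation of the garbled original:

```python
def f_measure(precision, recall):
    # Harmonic mean of precision and recall; a detection counts
    # as correct when its IoU with the ground truth exceeds 0.5.
    return 2 * precision * recall / (precision + recall)

# CMDTD's ICDAR2015 numbers from Table 4: recall 80.21, precision 87.92
print(f_measure(87.92, 80.21))  # close to the 83.88 reported
```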

3.2. Experimental Setup

The experimental environment configured for CMDTD is shown in Table 1.


Operating system: Ubuntu 16.04
Kernel version: 4.15.0-70-generic
Processor model: Intel(R) Core(TM) i7-6700K CPU @ 4.00 GHz
Graphics card model: GeForce GTX 1080
Programming language: Python
Framework: PyTorch 1.0.0, mmdetection

For the ICDAR2015 dataset, CMDTD cuts the images into 1088 × 1088 patches in the same way as for the DOTA dataset. The network input size is set to 1088 × 1088 during training, with training schedule and parameters consistent with HRSC2016. For testing, the network input size is set to 720 × 1280 so that images can be fed in at their original size.
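The 1088 × 1088 cutting can be sketched as a sliding window over the large image; the 200-pixel overlap below is an assumption, since the paper only states the tile size:

```python
def tile_origins(img_w, img_h, tile=1088, overlap=200):
    # Top-left corners of fixed-size tiles covering the image;
    # the final row/column is shifted back to stay inside it.
    step = tile - overlap
    xs = list(range(0, max(img_w - tile, 0) + 1, step))
    ys = list(range(0, max(img_h - tile, 0) + 1, step))
    if xs[-1] + tile < img_w:
        xs.append(img_w - tile)
    if ys[-1] + tile < img_h:
        ys.append(img_h - tile)
    return [(x, y) for y in ys for x in xs]

# A 4000 x 4000 DOTA image yields a 5 x 5 grid of tiles.
print(len(tile_origins(4000, 4000)))  # 25
```

Tiles from images smaller than 1088 pixels would be padded; per-tile detections are then mapped back to full-image coordinates and merged.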

3.3. Result Comparison

The comparison results of six methods, including CMDTD, RRPN [11], ICN [12], and SCRDet [13], on DOTA task 1 (multidirectional detection) are displayed in Table 2. The mAP of CMDTD is higher than that of the other methods, reaching 72.81%. In addition, concerning small object detection, CMDTD ranks first on ships (85.5%) and second on cars (69.51%).


Class | RRPN [11] | ICN [12] | Ding et al. [14] | R3Det [15] | SCRDet [13] | CMDTD
Aircraft | 88.52 | 81.40 | 88.64 | 89.54 | 89.98 | 88.15
Baseball field | 71.20 | 74.30 | 78.52 | 81.99 | 80.65 | 84.46
Bridge | 31.66 | 47.70 | 43.44 | 48.46 | 52.09 | 50.00
Athletic field | 59.30 | 70.30 | 75.92 | 62.52 | 68.36 | 69.98
Car | 51.85 | 64.90 | 68.81 | 70.48 | 68.36 | 69.51
Big car | 56.19 | 67.80 | 73.68 | 74.29 | 60.32 | 73.68
Ship | 57.25 | 70.00 | 83.59 | 77.54 | 72.41 | 85.50
Tennis court | 90.81 | 90.80 | 90.74 | 90.80 | 90.85 | 90.35
Rugby field | 72.84 | 79.10 | 77.27 | 81.39 | 87.94 | 81.09
Storage tank | 67.38 | 78.20 | 81.46 | 83.54 | 86.86 | 83.52
Football field | 56.69 | 53.60 | 58.39 | 61.97 | 65.02 | 59.10
Roundabout | 52.84 | 62.90 | 53.54 | 59.82 | 66.68 | 65.33
Port | 53.08 | 67.00 | 62.83 | 65.44 | 66.25 | 71.64
Swimming pool | 51.94 | 64.20 | 58.93 | 67.46 | 68.24 | 66.48
Helicopter | 53.58 | 50.20 | 47.67 | 60.05 | 65.21 | 54.28
mAP | 61.01 | 68.20 | 69.56 | 71.69 | 72.61 | 72.81

The comparison results of six methods, including CMDTD, R-FCN [16], ICN [12], and SCRDet [13], on DOTA task 2 (horizontal detection) are shown in Table 3. The horizontal detection result of CMDTD is obtained by taking the smallest enclosing axis-aligned rectangle of the rotated rectangle result. The method proposed in this paper ranks second, with mAP reaching 73.70%, surpassing most of the other methods.


Class | R-FCN [16] | FRH [17] | FPN [8] | ICN [12] | SCRDet [13] | CMDTD
Aircraft | 79.33 | 80.32 | 88.70 | 90.00 | 90.18 | 87.38
Baseball field | 44.26 | 77.55 | 75.10 | 77.70 | 81.88 | 83.96
Bridge | 36.58 | 32.86 | 52.60 | 53.40 | 55.30 | 51.33
Athletic field | 53.53 | 68.13 | 59.20 | 73.30 | 73.29 | 70.01
Car | 39.38 | 53.66 | 69.40 | 73.50 | 72.09 | 70.06
Big car | 34.15 | 52.49 | 78.80 | 65.00 | 77.65 | 74.47
Ship | 47.29 | 50.04 | 84.50 | 78.20 | 78.06 | 85.99
Tennis court | 45.66 | 90.41 | 90.60 | 90.80 | 90.91 | 90.36
Rugby field | 47.74 | 75.05 | 81.30 | 79.10 | 82.44 | 71.49
Storage tank | 65.84 | 59.59 | 82.60 | 84.80 | 86.39 | 83.60
Football field | 37.92 | 57.00 | 52.50 | 57.20 | 64.53 | 58.26
Roundabout | 44.23 | 49.81 | 62.10 | 62.10 | 63.45 | 65.57
Port | 47.23 | 61.69 | 76.60 | 73.50 | 75.77 | 75.60
Swimming pool | 50.64 | 56.46 | 66.30 | 70.20 | 78.21 | 73.31
Helicopter | 34.90 | 41.85 | 60.10 | 58.10 | 60.11 | 54.14
mAP | 47.24 | 60.46 | 72.00 | 72.50 | 75.35 | 73.70

The comparison results of seven methods on the ICDAR2015 dataset, including CMDTD, CTPN [18], and RRPN [11], are shown in Table 4. These are one-stage or two-stage detection methods. The comprehensive score (F-measure) of the method proposed in this paper ranks second, reaching 83.88%, an excellent performance.


Methods | Recall rate | Precision | Comprehensive score
CTPN [18] | 51.56 | 74.22 | 60.85
RRPN [11] | 82.17 | 73.23 | 77.44
EAST [19] | 78.33 | 83.27 | 80.72
R2CNN [20] | 79.68 | 85.62 | 82.54
FOTS RT [21] | 85.95 | 79.83 | 82.78
R3Det [15] | 83.54 | 86.43 | 84.96
CMDTD | 80.21 | 87.92 | 83.88

The comparison results of eight methods on the HRSC2016 dataset, including CMDTD, RRD [22], and R2CNN [20], are shown in Table 5. The mAP of the method proposed in this paper ranks first, reaching 89.68%.


Methods | mAP
R2CNN [20] | 73.07
RRPN [11] | 79.08
RetinaNet-H [10] | 82.89
RRD [22] | 84.30
RetinaNet-R [15] | 89.18
RoI-Transformer [14] | 86.20
R3Det [15] | 89.14
CMDTD | 89.68

3.4. The Influence of Different Modules on Model Results

The influence of different settings on the model results, including the effect of the cascade, the effect of location information on classification [23], and the effect of data augmentation, is investigated with CMDTD on DOTA task 1. The results are shown in Table 6.


Class | No cascade | No location information | No data augmentation | CMDTD
Aircraft | 83.60 | 88.18 | 88.73 | 88.15
Baseball field | 65.05 | 82.99 | 81.17 | 84.46
Bridge | 37.09 | 50.13 | 48.87 | 50.00
Athletic field | 66.29 | 71.63 | 71.79 | 69.98
Car | 64.09 | 70.20 | 69.03 | 69.51
Big car | 60.49 | 74.11 | 74.43 | 73.68
Ship | 76.00 | 85.08 | 85.39 | 85.50
Tennis court | 86.43 | 90.65 | 90.65 | 90.35
Rugby field | 60.67 | 77.53 | 77.76 | 81.09
Storage tank | 67.40 | 83.74 | 84.07 | 83.52
Football field | 44.23 | 53.32 | 53.30 | 59.10
Roundabout | 51.23 | 66.01 | 65.29 | 65.33
Port | 59.55 | 72.02 | 71.80 | 71.64
Swimming pool | 54.19 | 66.36 | 65.89 | 66.48
Helicopter | 37.73 | 51.38 | 58.03 | 54.28
mAP | 60.94 | 72.22 | 72.48 | 72.81

Regarding data augmentation, the model trained without it achieves an mAP of only 72.48%, and the detection results on football field, rugby field, and baseball field drop by 5.8%, 3.33%, and 3.29%, respectively.

Detection results of CMDTD on the two aerial-view image datasets are shown in Figures 7 and 8. There are large numbers of objects in the DOTA dataset with significant differences in size and aspect ratio, while objects in the HRSC2016 dataset are long rectangles. CMDTD effectively captures objects in various directions [24] and also achieves a satisfactory detection effect on small objects [25].

4. Conclusion

This paper first analyzed why general object detection algorithms are difficult to apply to multidirectional object detection in aerial-view vehicle images, and on this basis proposed CMDTD. The detection principle of CMDTD, including its backbone network, its multidirectional multi-information detection end module, and its data augmentation method for aerial-view vehicle images, was studied. Finally, experiments on three datasets showed that the cascaded multidirectional object detection algorithm is highly effective and superior to the compared methods.

Data Availability

The data used to support the findings of this study are included within [1, 2].

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. D. Hong, L. Gao, N. Yokoya et al., “More diverse means better: multimodal deep learning meets remote-sensing imagery classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 99, pp. 1–15, 2020. View at: Publisher Site | Google Scholar
  2. D. Hong, L. Gao, J. Yao, B. Zhang, A. Plaza, and J. Chanussot, “Graph convolutional networks for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, pp. 1–13, 2020. View at: Publisher Site | Google Scholar
  3. D. Hong, N. Yokoya, J. Chanussot, and X. X. Zhu, “An augmented linear mixing model to address spectral variability for hyperspectral unmixing,” IEEE Transactions on Image Processing, vol. 28, no. 4, pp. 1923–1938, 2019. View at: Publisher Site | Google Scholar
  4. X. Wu, D. Hong, J. Tian, J. Chanussot, W. Li, and R. Tao, “ORSIm detector: a novel object detection framework in optical remote sensing imagery using spatial-frequency channel features,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 7, pp. 5146–5158, 2019. View at: Publisher Site | Google Scholar
  5. X. Wu, D. Hong, J. Chanussot, Y. Xu, R. Tao, and Y. Wang, “Fourier-based rotation-invariant feature boosting: an efficient framework for geospatial object detection,” IEEE Geoscience and Remote Sensing Letters, vol. 17, no. 2, pp. 302–306, 2019. View at: Google Scholar
  6. G. S. Xia, X. Bai, J. Ding et al., “DOTA: a large-scale dataset for object detection in aerial view of images,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3974–3983, Salt Lake City, UT, USA, June 2018. View at: Publisher Site | Google Scholar
  7. S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, “Aggregated residual transformations for deep neural networks,” in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1492–1500, Honolulu, HI, USA, July 2017. View at: Publisher Site | Google Scholar
  8. T. Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2117–2125, Honolulu, HI, USA, July 2017. View at: Publisher Site | Google Scholar
  9. Z. Liu, L. Yuan, L. Weng, and Y. Yang, “A high resolution optical satellite image dataset for ship recognition and some new baselines,” in Proceedings of the International Conference on Pattern Recognition Applications and Methods, pp. 324–331, Porto, Portugal, February 2017. View at: Google Scholar
  10. D. Karatzas, L. Gomez-Bigorda, A. Nicolaou et al., “ICDAR 2015 competition on robust reading,” in Proceedings of the International Conference on Document Analysis and Recognition, pp. 1156–1160, Tunis, Tunisia, August 2015. View at: Publisher Site | Google Scholar
  11. J. Ma, W. Shao, H. Ye et al., “Arbitrary-Oriented scene text detection via rotation proposals,” IEEE Transactions on Multimedia, vol. 20, no. 11, pp. 3111–3122, 2018. View at: Publisher Site | Google Scholar
  12. S. M. Azimi, E. Vig, R. Bahmanyar, M. Körner, and P. Reinartz, “Towards multi-class object detection in unconstrained remote sensing imagery,” in Proceedings of the Asian Conference on Computer Vision, pp. 150–165, Perth, Australia, December 2018. View at: Google Scholar
  13. X. Yang, J. Yang, J. Yan et al., “SCRDet: towards more robust detection for small, cluttered and rotated objects,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 8232–8241, Seoul, Korea, November 2019. View at: Google Scholar
  14. J. Ding, N. Xue, Y. Long, G.-S. Xia, and Q. Lu, “Learning Roi transformer for detecting oriented objects in aerial view of images,” 2018, arXiv preprint arXiv:1812.00155. View at: Google Scholar
  15. X. Yang, Q. Liu, J. Yan, and A. Li, “R3DET: refined single-stage detector with feature refinement for rotating object,” 2019, arXiv preprint arXiv:1908.05612. View at: Google Scholar
  16. J. Dai, Y. Li, K. He, and J. Sun, “R-FCN: object detection via region-based fully convolutional net- works,” in Proceedings of the Advances in Neural Information Processing Systems, pp. 379–387, Barcelona, Spain, December 2016. View at: Google Scholar
  17. S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” Advances in Neural Information Processing Systems, vol. 39, no. 6, pp. 91–99, 2015. View at: Publisher Site | Google Scholar
  18. Z. Tian, W. Huang, T. He, P. He, and Y. Qiao, “Detecting text in natural image with connectionist text proposal network,” in Proceedings of the European Conference on Computer Vision, pp. 56–72, Amsterdam, The Netherlands, October 2016. View at: Publisher Site | Google Scholar
  19. X. Zhou, C. Yao, H. Wen et al., “EAST: an efficient and accurate scene text detector,” in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, pp. 5551–5560, Honolulu, HI, USA, July 2017. View at: Google Scholar
  20. Y. Jiang, X. Zhu, X. Wang et al., “R2CNN: rotational region CNN for orientation robust scene text detection,” 2017, arXiv preprint arXiv:1706.09579. View at: Google Scholar
  21. X. Liu, D. Liang, S. Yan, D. Chen, Y. Qiao, and J. Yan, “Fots: fast oriented text spotting with a unified network,” in Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, pp. 5676–5685, Salt Lake City, UT, USA, June 2018. View at: Google Scholar
  22. M. Liao, Z. Zhu, B. Shi, G.-S. Xia, and X. Bai, “Rotation-sensitive regression for oriented scene text detection,” in Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, pp. 5909–5918, Salt Lake City, UT, USA, June 2018. View at: Publisher Site | Google Scholar
  23. X. Xue, X. Wu, and J. Chen, “Optimizing biomedical ontology alignment through a compact multiobjective particle swarm optimization algorithm driven by knee solution,” Discrete Dynamics in Nature and Society, vol. 2020, no. 7, Article ID 4716286, pp. 1–10, 2020. View at: Publisher Site | Google Scholar
  24. X. Xue and J. Chen, “Using compact evolutionary tabu search algorithm for matching sensor ontologies,” Swarm and Evolutionary Computation, vol. 48, pp. 25–30, 2019. View at: Publisher Site | Google Scholar
  25. Z. Du, J. Pan, S. Chu, H. Luo, and P. Hu, “Quasi-affine transformation evolutionary algorithm with communication schemes for application of RSSI in wireless sensor networks,” IEEE Access, vol. 8, pp. 8583–8594, 2019. View at: Publisher Site | Google Scholar

Copyright © 2021 Zeqing Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
