Editorial | Open Access
Zhijun Fang, Jenq-Neng Hwang, Shih-Chia Huang, "Advanced Visual Analyses for Smart and Autonomous Vehicles", Advances in Multimedia, vol. 2018, Article ID 1762428, 2 pages, 2018. https://doi.org/10.1155/2018/1762428
Advanced Visual Analyses for Smart and Autonomous Vehicles
Thanks to major advances in sensing, communication, and computing, connected and automated vehicle (CAV) technologies have become the strong driving force behind the rapid development of intelligent transportation systems (ITS). Advanced visual analysis techniques based on camera/radar/lidar/IR sensing, spanning the fields of computer vision, image/video analysis, machine learning, etc., have become indispensable tools for CAV technologies, especially for enhancing the safe and autonomous operation of vehicles in traffic, based on the three main components (i.e., environment, vehicle, and driver, or EVD) of the overall driving context.
More specifically, advanced visual analyses can infer the dynamic environment outside the vehicle, such as roadway situations and weather conditions, the moving trajectories of pedestrians and other vehicles, and traffic lights/signs. Advanced visual analyses can also improve the detection accuracy of a vehicle’s driving states, such as vibration, speed, acceleration, and abrupt turns. Moreover, advanced visual analyses can serve as nonintrusive monitoring of drivers, who must safely maneuver the vehicle in its environment. The complex dynamics of various events and the interactions among the “EVD” system components affect the overall safety and comfort of driving, as well as the condition of the traffic flow. Real-time awareness of and dynamic response to these system components can proactively yield better driving safety systems and driving experiences, which can accurately, reliably, and quickly identify the conditions that would lead to an accident and trigger corrective actions so that the accident can be prevented.
The goal of this special issue is to share new advanced visual analysis techniques, bring forward challenges, and present comprehensive reviews for CAVs. More specifically, this special issue focuses on state-of-the-art research on integrating advanced visual analysis techniques that can be effectively applied to vehicle and driver sensing, road and pedestrian monitoring, data fusion analysis, and correction and response. After several rounds of review, five papers were accepted for this special issue, covering advances in visual analysis techniques for visual tracking, scene understanding, lane detection, vehicle classification, and power control.
More specifically, the paper entitled “Robust Visual Tracking with Discrimination Dictionary Learning” proposes an effective tracking algorithm based on a learned discrimination dictionary. With the learned dictionary, target candidates to be tracked obtain a more stable representation. Additionally, the observation likelihood is evaluated based on both the reconstruction error and the dictionary coefficients.

The paper entitled “Scene Understanding Based on High-Order Potentials and Generative Adversarial Networks” proposes a scene understanding framework based on a generative adversarial network (GAN) to implement a fully convolutional semantic segmentation model. High-order potentials are adopted to achieve fine details and consistency in the segmented semantic image.

The paper entitled “Lane Detection based on Connection of Various Feature Extraction Methods” presents a new preprocessing and ROI selection method for lane detection. In the preprocessing stage, the RGB color space is converted to the HSV color space and white features are extracted on the HSV model; preliminary edge feature detection is performed at the same time. The lower part of the image is then selected as the ROI based on the proposed preprocessing.

The paper entitled “Pretraining Convolutional Neural Networks for Image-based Vehicle Classification” proposes a convolutional neural network (CNN) for image-based vehicle classification with four categories, including motorcycle, transporter, and passenger. An unsupervised pretraining approach is introduced to initialize the CNN parameters for better classification performance.

Finally, the paper entitled “A Power Control Algorithm Based on Outage Probability Awareness in Vehicular Ad Hoc Networks” presents a power control algorithm to overcome the problems of random node mobility, multiuser interference, and high outage probability.
The proposed power control method aims to minimize the outage probability by exploiting the cumulative interference available at the transmitter of each terminal. Furthermore, the interference model is built using stochastic geometry, from which the relationship between outage probability and cumulative interference can be derived.
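The outage-probability formulation just described is standard in stochastic geometry; the derivation below is a generic illustration in our own notation (transmit power \(P\), Rayleigh fading gain \(h \sim \exp(1)\), SIR threshold \(\theta\), cumulative interference \(I\)), not an equation taken from the paper itself:

```latex
\[
P_{\mathrm{out}}
  = \Pr\!\left(\frac{P h}{I} < \theta\right)
  = \mathbb{E}_{I}\!\left[1 - e^{-\theta I / P}\right]
  = 1 - \mathcal{L}_{I}\!\left(\frac{\theta}{P}\right),
\]
where $\mathcal{L}_{I}(s) = \mathbb{E}\!\left[e^{-sI}\right]$ is the Laplace
transform of the cumulative interference $I$. When interferers are modeled as
a Poisson point process, $\mathcal{L}_{I}$ admits a closed form, which lets
the transmitter choose the smallest $P$ that meets a target outage constraint.
```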
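The lane-detection preprocessing summarized earlier (RGB-to-HSV conversion, white-feature extraction, lower-part ROI selection) can be sketched at toy scale as follows; the function name and the saturation/value thresholds are illustrative assumptions, not values taken from the paper:

```python
import colorsys


def lane_feature_mask(rgb_image, sat_max=0.25, val_min=0.7):
    """Mark white, lane-like pixels in the lower half of an RGB image.

    rgb_image: list of rows, each a list of (r, g, b) tuples in 0-255.
    Mirrors the paper's idea at a toy scale: convert RGB -> HSV, keep pixels
    that look white (low saturation, high value), and restrict attention to
    the lower part of the frame as the region of interest (ROI).
    Thresholds sat_max and val_min are illustrative, not from the paper.
    """
    height = len(rgb_image)
    roi_start = height // 2  # the lower part of the image is the ROI
    mask = []
    for y, row in enumerate(rgb_image):
        mask_row = []
        for r, g, b in row:
            # colorsys works on 0-1 channels; h, s, v are all in [0, 1]
            h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            is_white = s <= sat_max and v >= val_min
            mask_row.append(is_white and y >= roi_start)
        mask.append(mask_row)
    return mask
```

A real pipeline would operate on OpenCV/NumPy arrays and add the edge-feature stage the paper combines with this color mask; the sketch only shows how HSV thresholding and ROI selection interact.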
As we conclude this introduction to the special issue and the contents of the five selected papers, we would like to thank all the authors for their valuable contributions. We also express our deep gratitude to all the reviewers for their timely and insightful comments on the submitted papers. It is our sincere hope that the contents of this special issue prove informative and useful across the various aspects of connected and automated vehicle (CAV) technologies.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Copyright © 2018 Zhijun Fang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.