Mathematical Problems in Engineering
Volume 2015, Article ID 250461, 11 pages
http://dx.doi.org/10.1155/2015/250461
Research Article

An Automatic Traffic Sign Detection and Recognition System Based on Colour Segmentation, Shape Matching, and SVM

Department of Electrical, Electronic & Systems Engineering, Universiti Kebangsaan Malaysia, Jalan Reko, 43600 Bangi, Selangor, Malaysia

Received 12 June 2015; Revised 2 September 2015; Accepted 25 October 2015

Academic Editor: Pan Liu

Copyright © 2015 Safat B. Wali et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The main objective of this study is to develop an efficient TSDR system which contains an enriched dataset of Malaysian traffic signs. The developed technique is invariant to variable lighting, rotation, translation, and viewing angle and has a low computational time with a low false positive rate. The development of the system has three working stages: image preprocessing, detection, and recognition. The system demonstration using RGB colour segmentation and shape matching followed by a support vector machine (SVM) classifier led to promising results: an accuracy of 95.71%, a false positive rate of 0.9%, and a processing time of 0.43 s. The area under the receiver operating characteristic (ROC) curve was introduced to statistically evaluate the recognition performance. The accuracy of the developed system is relatively high and the computational time relatively low, which will be helpful for classifying traffic signs, especially on highways around Malaysia. The low false positive rate will increase the system's stability and reliability in real-time applications.

1. Introduction

In order to address concerns over road and transportation safety, automatic traffic sign detection and recognition (TSDR) systems have been introduced. An automatic TSDR system can detect and recognise traffic signs within images captured by cameras or imaging sensors [1]. In adverse traffic conditions, the driver may not notice traffic signs, which may cause accidents. In such scenarios, the TSDR system comes into action. The main objective of research on TSDR is to improve the robustness and efficiency of the TSDR system. Developing an automatic TSDR system is a tedious job given the continuous changes in the environment and lighting conditions. Other issues that also need to be addressed are partial obscuring, multiple traffic signs appearing at a single time, and blurring and fading of traffic signs, all of which can create problems for detection. For applying the TSDR system in a real-time environment, a fast algorithm is needed. As well as dealing with these issues, a recognition system should also avoid erroneous recognition of nonsigns.

The aim of this research is to develop an efficient TSDR system which can detect and classify traffic signs into different classes in a real-time environment. For detecting the red traffic signs, a combined colour and shape based algorithm is presented, which speeds up the detection stage, and for recognition SVMs with bagged kernels are introduced.

This paper is organized as follows: Section 2 presents the related works in the field of TSDR system development. In Section 3, the overall methodology is discussed. The experimental results and discussion are summarized in Section 4. Section 5 concludes and makes some suggestions for future improvement in the field of automatic traffic sign detection and recognition.

2. Related Work

According to [2], the first work on automated traffic sign detection was reported in Japan in 1984. This attempt was followed by several methods introduced by different researchers to develop an efficient TSDR system and minimize the issues stated above. An efficient TSDR system can be divided into several stages: preprocessing, detection, tracking, and recognition. In the preprocessing stage the visual appearance of images is enhanced. Different colour and shape based approaches are used to minimize the effect of the environment on the test images [3–6]. The goal of traffic sign detection is to identify the region of interest (ROI) in which a traffic sign is supposed to be found and verify the sign after a large-scale search for candidates within an image [7]. Different colour and shape based approaches are used by researchers to detect the ROI. The popular colour based detection methods are HSI/HSV Transformation [8, 9], Region Growing [10], Colour Indexing [11], and YCbCr colour space transform [12]. As colour information can be unreliable due to illumination and weather changes, shape based algorithms were introduced. The popular shape based approaches are Hough Transformation [13–15], Similarity Detection [16], Distance Transform Matching [17], and Edges with Haar-like features [18, 19].

The tracking stage is necessary to ensure real-time recognition. In addition, the information provided by the images of the traffic signs will help verify the correct identification and thus detect and follow the object [20]. The most common tracker adapted is the Kalman filter [18, 21, 22].

Several methods have been used by researchers for recognizing traffic signs. Ohara et al. [23] and Torresen et al. [24] used the Template Matching technique, which is a fast and straightforward method. A Genetic Algorithm was used by Aoyagi and Asakura [25] and de la Escalera et al. [26], which is said to be unaffected by the illumination problem. The main advantages of AdaBoost are its simplicity, feature selection for large datasets, and generalization [27]. Li et al. [28] used AdaBoost learning containing five classical Haar wavelets and four HOG (Histogram of Oriented Gradients) features. Greenhalgh and Mirmehdi [29, 30] showed a comparison between SVM, MLP, HOG-based classifiers, and Decision Trees and found that a Decision Tree has the highest accuracy rate and the lowest computational time. Its accuracy is approximately 94.2%, whereas the accuracy of the SVM is 87.8% and that of the MLP is 89.2%. Neural Networks are flexible, adaptive, and robust [31]. Hechri and Mtibaa [12] used a 3-layer MLP network whereas Sheng et al. [32] used a Probabilistic Neural Network for the recognition process. The Support Vector Machine (SVM) is another popular method, robust against illumination and rotation with very high accuracy. Yang et al. [33] and García-Garrido et al. [34] used SVMs with Gaussian kernels for recognition, whereas Park and Kim [35] used an advanced SVM technique that improved the computational time and the accuracy rate for grayscale images.

For improving the recognition rate of damaged or partially occluded signs, Soheilian et al. [36] used template matching followed by a 3D reconstruction algorithm. The distortion-invariant fringe-adjusted joint transform correlation (FJTC) was used by Khan et al. [37], and Principal Component Analysis (PCA) was used by Sebanja and Megherbi [38]; both have very high accuracy rates. In [39], Prieto and Allen used a self-organizing map (SOM) for recognition, applying the SOM at every level of the road signs, with a hit rate of 99%.

In our approach, RGB segmentation and shape matching based detection are used to reduce the processing time, and an SVM with a bagged kernel is used for recognizing the red traffic signs. Grey-scale images are used to make the detection and recognition algorithm more robust to changes in illumination.

3. Methodology

3.1. Image Acquisition

The samples were collected with an inexpensive on-board camera (Canon SX170 IS) connected to a laptop placed inside a vehicle (Figure 1). The images were taken on different roads and highways in Malaysia under various weather conditions (Table 1), from 8:00 A.M. to 8:00 P.M., every two seconds. The camera is placed on the left side of the dashboard so that it can capture traffic signs on the left side of the road. The aim of this stage is to create a database of traffic sign images under different variations.

Table 1: Environmental condition for image acquisition.
Figure 1: Model for sample collections (a) used car with a camera placed on the left side of the dashboard, (b) camera setup including a laptop, and (c) on road camera range and sign detection.
3.2. Image Preprocessing

Image preprocessing is an important part of the TSDR system; its main purpose is to remove low-frequency background noise, normalise the intensity of individual images, remove reflections, and mask portions of images. Below is a description of the selected image preprocessing techniques. The input image is split into its R, G, and B channels. In the proposed approach, a threshold filter is applied to each channel to select those regions of the image where the pixel values fall within the range of the target object. For example, for traffic signs with a red background (such as stop signs), the threshold for channel R is pixels with values in the range 90–255, and for channels G and B the range is 0–70. The region of interest (ROI) is the logical sum of the three filtered channels of R, G, and B.
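As a concrete sketch of this channel thresholding, a minimal NumPy illustration follows. It is not the authors' code: the function name is invented, and the "logical sum" of the three filtered channels is interpreted here as the conjunction of the per-channel tests (R in 90–255, G and B in 0–70).

```python
import numpy as np

def red_sign_mask(img):
    """Threshold an RGB image (H x W x 3, uint8) to isolate red-sign pixels.

    Thresholds follow the ranges given in the text: R in 90-255, G and B in 0-70.
    A pixel belongs to the ROI only if all three per-channel tests pass.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (r >= 90) & (g <= 70) & (b <= 70)

# Tiny synthetic check: one "red" pixel and one "white" pixel.
img = np.array([[[200, 30, 25], [240, 240, 240]]], dtype=np.uint8)
print(red_sign_mask(img))  # [[ True False]]
```

In a full pipeline the resulting binary mask would then be filtered and passed to the shape-matching stage described below.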

3.3. Shape Matching Based Detection

The idea is to use the colour characteristics of the preferred object to accelerate the procedure without employing model-based classifiers, which are time-consuming [40–42]. After filtering and analysing the features of the detected objects, the traffic sign candidates are selected based on shape matching. The flow chart of the system is shown in Figure 2.

Figure 2: The overall block diagram of the detection system.
3.3.1. Objects Features Analysing

An important step is to eliminate noise from the image in order to better isolate the ROI. Appropriate filters have an enormous effect on the accuracy and speed of the procedure without deleting any useful information. In the proposed system, a median filter is used for image smoothing and for filling up smaller regions when extracting the region of interest.
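A 3×3 median filter on a binary mask can be sketched in plain NumPy; this is an illustrative stand-in for whatever filter implementation the authors used, with an invented function name.

```python
import numpy as np

def median_filter3(mask):
    """3x3 median filter on a binary mask: removes isolated noise pixels
    and fills single-pixel holes inside a region (majority vote of the
    neighbourhood equals the median for binary data)."""
    m = mask.astype(np.uint8)
    p = np.pad(m, 1)  # zero-pad the border so every pixel has 9 neighbours
    # Stack the 3x3 neighbourhood of every pixel and take the median.
    stack = np.stack([p[i:i + m.shape[0], j:j + m.shape[1]]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0) >= 0.5

# An isolated speck of noise is removed entirely.
noise = np.zeros((5, 5), dtype=bool)
noise[2, 2] = True
print(median_filter3(noise).any())  # False
```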

3.3.2. Shape Matching and Candidate Selection

As almost all traffic signs containing red colour are round or octagonal, the proposed method draws on these common shapes to detect hypothetical shapes that are close to traffic signs. Those regions whose shape factor f = 4A/(πW²) falls in the range 0.7–1.3 are accepted as candidates for traffic signs, where A is the area of the region and W is its longest width.
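The acceptance test can be sketched as follows. The exact formula did not survive in the extracted text, so the standard circularity measure f = 4A/(πW²) is assumed here; it equals 1 for a perfect circle and about 0.90 for a regular octagon, both inside the stated 0.7–1.3 band.

```python
import math

def shape_factor(area, longest_width):
    """Shape factor f = 4A / (pi * W^2): 1.0 for a perfect circle,
    ~0.90 for a regular octagon with the same longest width."""
    return 4.0 * area / (math.pi * longest_width ** 2)

def is_sign_candidate(area, longest_width, lo=0.7, hi=1.3):
    """Accept a region as a traffic-sign candidate if f is in [lo, hi]."""
    return lo <= shape_factor(area, longest_width) <= hi

# A circle of diameter 10 (area = pi * 5^2) gives f == 1.0 exactly.
print(is_sign_candidate(math.pi * 25, 10))  # True
# A long thin bar (area 20, longest width 40) gives f ~ 0.016: rejected.
print(is_sign_candidate(20, 40))  # False
```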

3.3.3. Traffic Sign Detection

The area range for road signs determines the distance within which the system can detect a traffic sign; outside this range, objects with the same range of pixel values cannot be traffic signs. At this level, crucial information such as the centre, area, and longest width of each region is calculated and used to decide whether or not each region is a traffic sign. The detected traffic sign blob images are then passed to the SVM for recognition.

3.4. Support Vector Machine (SVM) Based Recognition

After the detection of a traffic sign, the region of interest (ROI) is passed to the SVM for recognition. The SVM is one of the most successful kernel methods, given a labeled training dataset {(x_i, y_i)}, i = 1, …, N, where x_i ∈ ℝ^d and y_i ∈ {−1, +1}. In the semisupervised SVM, the whole image set is clustered to build the bagged kernel, and the base kernel is then modified. A number of SVMs are trained separately using a bootstrap algorithm and are then aggregated via a suitable combination technique. A bagged kernel is a kernel function encoding the similarity between unlabeled samples [43]. Given a training dataset, bootstrapping builds replicate training datasets by repeated random resampling with replacement. Kernel methods compare training samples x_i and x_j via pairwise inner products between mapped samples, so the kernel matrix is K_ij = ⟨φ(x_i), φ(x_j)⟩. For the dataset formation, a set of 400 traffic sign images was generated from 100 traffic sign samples, and for nonsign images a set of 1000 images was generated from 250 nonsign samples; these were collected randomly by a camera attached to a car at different times of the day and in varying weather conditions. The dataset also includes partially occluded, slightly damaged, faded, and blurred signs to make the system more robust in real-time environments. All candidates are scaled down to 25 × 25 pixels, in steps by a factor of 1.2, to smooth the progress of the feature extraction process.
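The pairwise-inner-product construction of a base kernel matrix can be illustrated with the common RBF (Gaussian) kernel; this particular choice of base kernel is an assumption for the sketch, although Gaussian kernels are cited in the related work [33, 34].

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    """Base kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2),
    i.e. pairwise inner products between implicitly mapped samples."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_kernel_matrix(X, gamma=1.0)
print(K[0, 0], round(K[0, 1], 4))  # 1.0 0.3679
```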

The proposed algorithm proceeds in the following steps:
(1) Compute the base SVM kernel K.
(2) Run the k-means algorithm with different initializations t times, each with the same number of clusters k. The result is t cluster assignments for each sample x_i.
(3) Build a bagged kernel from the fraction of runs in which two samples are assigned to the same cluster: K_bag(x_i, x_j) = (1/t) Σ_{r=1}^{t} c_r(x_i, x_j), where c_r returns “1” if samples x_i and x_j belong to the same cluster according to the r-th realization of the clustering and “−1” otherwise.
(4) Combine the original and bagged kernels by their sum, K + K_bag, or their product, K · K_bag.
(5) Train an SVM with the resultant modified kernel. The flow chart of the overall SVM with bagged kernel is shown in Figure 3.
Different outcomes are obtained from step (2) because k-means gives a different solution in each run. In the semisupervised setting, a reduced dataset is used to compute the cluster centres. The test pixels can be assigned to the nearest cluster in each of the bagged runs to compute K_bag. This way, the assignment can be done sequentially or can be parallelized, and only the cluster centres have to be maintained. Intensity correction and histogram equalization are applied to the standard traffic sign images to reduce the effect of variable lighting and illumination; the corrected images are then used to train the SVM.
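Steps (2)–(4) above can be sketched as follows; this is a minimal illustration with a toy k-means, not the authors' implementation, and all function names are invented.

```python
import numpy as np

def kmeans_labels(X, k, rng, iters=20):
    """Minimal k-means returning only the cluster labels."""
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            pts = X[labels == c]
            if len(pts):
                centers[c] = pts.mean(0)
    return labels

def bagged_kernel(X, k=2, t=10, seed=0):
    """Step (3): K_bag[i, j] = (1/t) * sum_r c_r(i, j), where c_r is +1 if
    samples i and j fall in the same cluster on the r-th run, -1 otherwise."""
    rng = np.random.default_rng(seed)
    n = len(X)
    K = np.zeros((n, n))
    for _ in range(t):
        lab = kmeans_labels(X, k, rng)
        K += np.where(lab[:, None] == lab[None, :], 1.0, -1.0)
    return K / t

def combined_kernel(K_base, K_bag, mode="sum"):
    """Step (4): combine the base kernel with the bagged kernel."""
    return K_base + K_bag if mode == "sum" else K_base * K_bag

# Two well-separated blobs: same-blob pairs get bagged similarity +1,
# cross-blob pairs get -1, regardless of the k-means initialization.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
Kb = bagged_kernel(X, k=2, t=5)
print(Kb[0, 1], Kb[0, 2])  # 1.0 -1.0
```

The modified kernel from `combined_kernel` would then be passed to an SVM solver that accepts a precomputed kernel matrix.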

Figure 3: Flow chart of parallel SVM with bagged kernel.

4. Result and Discussion

4.1. Performance of Image Preprocessing

To save storage and reduce computational complexity, the original images are scaled down to 250 × 250 pixels. After the image acquisition process described in Section 3.1, image preprocessing is performed using the RGB segmentation approach: a threshold filter is applied to each channel to select just those regions of the image where the pixel values are in the range of the target object. The region of interest (ROI) is the logical sum of the three filtered channels of R, G, and B, as shown in Figure 4. The median filter is applied for image smoothing and for filling the smaller regions of the image, as shown in Figure 4(f).

Figure 4: Colour processing for traffic sign detection: (a) original image, (b) R channel after threshold, (c) G channel after threshold, (d) B channel after threshold, (e) logical sum of three channels, and (f) ROI after filtering and smoothing.
4.2. Performance of Traffic Sign Detection

The final candidates, selected by pixel-value range, area, and shape, are drawn on the image using the extracted data (centre and area) of each. The proposed method considers only traffic signs containing red. For images containing no traffic sign, the shape-matching stage outputs “no road sign is detected.” The results are classified into four categories. A false positive (FP) occurs when a nonsign region is incorrectly detected as a sign; a false negative (FN) occurs when a sign is missed and treated as a nonsign region. A true positive (TP) is a sign correctly detected, and a true negative (TN) is a nonsign region correctly recognised as a nonsign region. The contingency matrix of the detection performance is given in Table 2.

Table 2: Contingency matrix of the RGB segmentation and shape matching sign detection method.

From Table 2, the sensitivity and specificity values are calculated. Sensitivity is the ability to identify a condition correctly, whereas specificity is the ability to exclude a condition correctly:
(i) Sensitivity or recall = TP/(TP + FN) = 83.4%.
(ii) Specificity = TN/(TN + FP) = 100%.
(iii) Accuracy = (TP + TN)/(TP + TN + FP + FN) = 94.85%.
In the tests, it was concluded that several problems affected the detection performance. Varying lighting conditions, occlusion, and illumination of the traffic signs are the main causes of false detection. The detection method can fail when the red colour of a segmented traffic sign is directly illuminated by the sun; this happens because colour segmentation with the RGB model relies on comparing raw RGB values. In the developed system, the computational time is around 0.25 s and the accuracy rate is 94.85%. Figure 4 shows the steps of the traffic sign detection system. The results for detected traffic signs are given in Figures 5 and 6. In Figure 5, the first and second columns show true positives and true negatives, respectively; the third column shows false detections.

Figure 5: Examples of TP in variant lighting conditions (a), (b), and (c); example of TN in variant lighting conditions (d), (e), and (f); and examples of false detection (g), (h), and (i).
Figure 6: Final detection: (a) sample traffic sign, (b) closed curve obtained by colour thresholding, (c) after filtering and smoothing the candidate, and (d) detected ROI after shape matching and candidate selection.
4.3. Performance of Recognition

In the proposed system, after the colour segmentation and shape matching, a semisupervised SVM is applied: the whole image set is clustered to build the bagged kernel, after which the base kernel is modified. A number of SVMs are trained separately using a bootstrap algorithm and then aggregated via a suitable combination technique. Intensity correction and histogram equalization are applied to the standard traffic sign images to reduce the effect of variable lighting and illumination; the corrected images are then used to train the SVM. A total of 350 images comprising two different shapes, circular and octagonal, were used. Table 3 shows the database used to train the SVM.

Table 3: Modification of traffic signs used to train the SVM.
Table 4: Examples of traffic signs used to train the SVM.

Table 4 shows the final recognition results after the colour segmentation and shape matching techniques are applied. Among the 123 traffic signs, 79 are octagonal, considered as group G1, and 44 are circular, considered as group G2. Among the 44 circular signs, there are two different classes: “no parking” and “do not enter” signs.

According to these data, the evaluation parameters are sensitivity (recall), specificity, precision (PPV), FPR, and accuracy rate (AR), computed from the numbers of FP, FN, TP, and TN as follows:
(i) Sensitivity or recall = TP/(TP + FN) = 89.43%.
(ii) Specificity = TN/(TN + FP) = 99.12%.
(iii) Precision or PPV = TP/(TP + FP) = 98.21%.
(iv) False positive rate (FPR) = FP/(FP + TN) = 0.009.
(v) Accuracy = (TP + TN)/(TP + TN + FP + FN) = 95.71%.
The overall accuracy of traffic sign recognition is 95.71%, whereas the accuracy of the detection phase is 94.85%. According to the data analysis, the TPR is 89.43% and the FPR is 0.009 (0.9%). The octagonal “BERHENTI” sign has the highest recognition rate of 94.94% and the “no parking” sign has the lowest at 84.09%. The processing time of the recognition system is 0.18 s, and the overall processing time of the TSDR system is 0.43 s. To evaluate the system performance, the ROC curve and the area under the curve are shown in Figure 7.
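These five measures follow directly from confusion-matrix counts. The sketch below uses illustrative counts (TP = 110, FP = 2, FN = 13, TN = 225, totalling 350) chosen to be consistent with the percentages reported above; they are not taken from the paper's tables.

```python
def classification_metrics(tp, fp, fn, tn):
    """Evaluation measures used in the text, from confusion-matrix counts."""
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),   # recall / TPR
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),   # PPV
        "fpr":         fp / (fp + tn),   # false positive rate
        "accuracy":    (tp + tn) / total,
    }

# Illustrative counts only, matching the percentages quoted in the text.
m = classification_metrics(tp=110, fp=2, fn=13, tn=225)
print(round(m["precision"], 4))  # 0.9821
```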

Figure 7: ROC curve; FPR versus TPR.
4.4. Performance Comparison of SVM Based Recognition System

A comparison with previous studies in traffic sign detection is given in Table 5. From Table 5, it can be observed that the SVM used in [44] has the highest recall rate with a good overall accuracy of over 90%: 327 out of 340 signs are correctly classified. The MSER and HOG based SVM used in [30] has the highest overall accuracy of 97.6% with a false positive rate of 0.85, classifying 92 out of 104 signs correctly. The proposed system has the lowest false positive rate, 0.009, with an accuracy of 95.71%, a precision of 98.21%, and a recall of 89.43%; 112 of the 123 detected signs are classified correctly. In summary, the proposed method has the highest precision (98.21%) and the lowest FPR (0.009), and its accuracy of 95.71% compares well with the other systems.

Table 5: Comparison between proposed method and several existing methods.

The main limitation of the developed system is that it is applicable only to red traffic signs. However, the red-containing “warning” and “prohibitory” signs are the most important classes, as they are most often involved in traffic accidents. The proposed method can suffer a reduced detection rate because colour tends to be unreliable under factors such as illumination change, variable lighting, blurring, and fading, which in turn affects the overall accuracy of the recognition process. Another limitation is the lack of images in the Malaysian traffic sign database. The overall processing time is 0.43 s, which is still on the higher side compared to [44]. Recognizing all types of signs in Malaysia, reducing the processing time, and enlarging the Malaysian traffic sign database are proposed as future work.

5. Conclusion

The goal of this research was to develop an efficient TSDR system based on a Malaysian traffic sign dataset. In the image acquisition stage, the images were captured by an on-board camera under different weather conditions, and image preprocessing was done using RGB colour segmentation. The recognition process is performed by an SVM with a bagged kernel, used here for the first time for traffic sign classification. The developed system has shown promising results: an accuracy of 95.71%, a false positive rate of 0.009, and a processing time of 0.43 s. The recognition performance is evaluated using ROC curve analysis, and the simulation results are compared with existing methods, confirming the correctness of the implementation.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. J. P. C. Pascual, Advanced driver assistance system based on computer vision using detection, recognition and tracking of road signs [Ph.D. thesis], Charles III University of Madrid, Getafe, Spain, 2009.
  2. P. Paclík, J. Novovičová, and R. P. W. Duin, “Building road-sign classifiers using a trainable similarity measure,” IEEE Transactions on Intelligent Transportation Systems, vol. 7, no. 3, pp. 309–321, 2006.
  3. A. Ruta, Y. M. Li, and X. H. Liu, “Detection, tracking and recognition of traffic signs from video input,” in Proceedings of the 11th International IEEE Conference on Intelligent Transportation Systems (ITSC '08), pp. 55–60, IEEE, Beijing, China, December 2008.
  4. P. Gil Jiménez, S. M. Bascón, H. G. Moreno, S. L. Arroyo, and F. L. Ferreras, “Traffic sign shape classification and localization based on the normalized FFT of the signature of blobs and 2D homographies,” Signal Processing, vol. 88, no. 12, pp. 2943–2955, 2008.
  5. S. Lafuente-Arroyo, S. Salcedo-Sanz, S. Maldonado-Bascón, J. A. Portilla-Figueras, and R. J. López-Sastre, “A decision support system for the automatic management of keep-clear signs based on support vector machines and geographic information systems,” Expert Systems with Applications, vol. 37, no. 1, pp. 767–773, 2010.
  6. G. A. Tagunde and N. J. Uke, “Detection, classification and recognition of road traffic signs using color and shape features,” International Journal of Advanced Technology & Engineering Research, vol. 2, no. 4, pp. 202–206, 2012.
  7. H. Gündüz, S. Kaplan, S. Günal, and C. Akinlar, “Circular traffic sign recognition empowered by circle detection algorithm,” in Proceedings of the 21st Signal Processing and Communications Applications Conference (SIU '13), pp. 1–4, IEEE, New York, NY, USA, April 2013.
  8. S. Maldonado-Bascón, S. Lafuente-Arroyo, P. Gil-Jiménez, H. Gómez-Moreno, and F. López-Ferreras, “Road-sign detection and recognition based on support vector machines,” IEEE Transactions on Intelligent Transportation Systems, vol. 8, no. 2, pp. 264–278, 2007.
  9. G. A. Tagunde and N. J. Uke, “Detection, classification and recognition of road traffic signs using colour and shape features,” International Journal of Advance Technology & Engineering Research, vol. 2, no. 4, pp. 202–206, 2012.
  10. L. Priese and V. Rehrmann, “On hierarchical color segmentation and applications,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '93), pp. 633–634, IEEE, New York, NY, USA, June 1993.
  11. M. J. Swain and D. H. Ballard, “Color indexing,” International Journal of Computer Vision, vol. 7, no. 1, pp. 11–32, 1991.
  12. A. Hechri and A. Mtibaa, “Automatic detection and recognition of road sign for driver assistance system,” in Proceedings of the 16th IEEE Mediterranean Electrotechnical Conference (MELECON '12), pp. 888–891, Yasmine Hammamet, Tunisia, March 2012.
  13. G. Overett and L. Petersson, “Large scale sign detection using HOG feature variants,” in Proceedings of the IEEE Intelligent Vehicles Symposium (IV '11), pp. 326–331, Baden-Baden, Germany, June 2011.
  14. F. Zaklouta and B. Stanciulescu, “Real-time traffic-sign recognition using tree classifiers,” IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 4, pp. 1507–1514, 2012.
  15. A. Møgelmose, M. M. Trivedi, and T. B. Moeslund, “Vision-based traffic sign detection and analysis for intelligent driver assistance systems: perspectives and survey,” IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 4, pp. 1484–1497, 2012.
  16. S. Vitabile, G. Pollaccia, G. Pilato, and E. Sorbello, “Road signs recognition using a dynamic pixel aggregation technique in the HSV color space,” in Proceedings of the 11th International Conference on Image Analysis and Processing (ICIAP '01), pp. 572–577, Palermo, Italy, September 2001.
  17. D. M. Gavrila, “Traffic sign recognition revisited,” in Mustererkennung 1999: 21. DAGM-Symposium Bonn, 15.-17. September 1999, pp. 86–93, Springer, Berlin, Germany, 1999.
  18. B. Höferlin and K. Zimmermann, “Towards reliable traffic sign recognition,” in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 324–329, Xi'an, China, June 2009.
  19. V. A. Prisacariu, R. Timofte, K. Zimmermann, I. Reid, and L. Van Gool, “Integrating object detection with 3D tracking towards a better driver assistance system,” in Proceedings of the 20th International Conference on Pattern Recognition (ICPR '10), pp. 3344–3347, IEEE, Istanbul, Turkey, August 2010.
  20. M. A. Hannan, A. Hussain, and S. A. Samad, “Decision fusion via integrated sensing system for a smart airbag deployment scheme,” Sensors and Materials, vol. 23, no. 3, pp. 179–193, 2011.
  21. C.-Y. Fang, S.-W. Chen, and C.-S. Fuh, “Road-sign detection and tracking,” IEEE Transactions on Vehicular Technology, vol. 52, no. 5, pp. 1329–1341, 2003.
  22. S. Lafuente-Arroyo, S. Maldonado-Bascon, P. Gil-Jimenez, H. Gomez-Moreno, and F. Lopez-Ferreras, “Road sign tracking with a predictive filter solution,” in Proceedings of the 32nd Annual Conference on IEEE Industrial Electronics (IECON '06), vol. 1–11, pp. 3314–3319, IEEE, Paris, France, November 2006.
  23. Y. H. Ohara, I. Nishikawa, S. Miki, and N. Yabuki, “Detection and recognition of road signs using simple layered neural networks,” in Proceedings of the 9th International Conference on Neural Information Processing (ICONIP '02), vol. 2, pp. 626–630, IEEE, 2002.
  24. J. Torresen, J. W. Bakke, and L. Sekanina, “Efficient recognition of speed limit signs,” in Proceedings of the 7th International IEEE Conference on Intelligent Transportation Systems (ITSC '04), vol. 90, pp. 652–656, Washington, DC, USA, October 2004.
  25. Y. Aoyagi and T. Asakura, “A study on traffic sign recognition in scene image using genetic algorithms and neural networks,” in Proceedings of the IEEE 22nd International Conference on Industrial Electronics, Control, and Instrumentation (IECON '96), vol. 3, pp. 1838–1843, IEEE, Taipei, Taiwan, August 1996.
  26. A. de la Escalera, J. M. Armingol, and M. A. Salichs, “Traffic sign detection for driver support systems,” in Proceedings of the International Conference on Field and Service Robotics (FSR '01), Helsinki, Finland, June 2001.
  27. L. Chen, Q. Li, M. Li, L. Zhang, and Q. Mao, “Design of a multi-sensor cooperation travel environment perception system for autonomous vehicle,” Sensors, vol. 12, no. 9, pp. 12386–12404, 2012.
  28. Y. Li, P. Sharath, and W. Guan, “Real-time traffic sign detection: an evaluation study,” in Proceedings of the 20th International Conference on Pattern Recognition (ICPR '10), pp. 3033–3036, Istanbul, Turkey, August 2010.
  29. J. Greenhalgh and M. Mirmehdi, “Traffic sign recognition using MSER and Random Forests,” in Proceedings of the 20th European Signal Processing Conference (EUSIPCO '12), pp. 1935–1939, August 2012.
  30. J. Greenhalgh and M. Mirmehdi, “Real-time detection and recognition of road traffic signs,” IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 4, pp. 1498–1506, 2012.
  31. M. A. Hannan, S. B. Wali, T. J. Pin, A. Hussain, and S. A. Samad, “Traffic sign recognition based on neural network for advance driver assistance system,” Przegląd Elektrotechniczny, vol. 1, no. 12, pp. 169–172, 2014.
  32. Y. Sheng, K. Zhang, C. Ye, C. Liang, and J. Li, “Automatic detection and recognition of traffic signs in stereo images based on features and probabilistic neural networks,” in Optical and Digital Image Processing, vol. 7000 of Proceedings of SPIE, p. 12, April 2008.
  33. S. Yang, X. Wu, and Q. Miao, “Road-sign segmentation and recognition in natural scenes,” in Proceedings of the IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC '11), pp. 1–4, Xi'an, China, September 2011.
  34. M. A. García-Garrido, M. Ocaña, D. F. Llorca, M. A. Sotelo, E. Arroyo, and A. Llamazares, “Robust traffic signs detection by means of vision and V2I communications,” in Proceedings of the 14th International IEEE Conference on Intelligent Transportation Systems (ITSC '11), pp. 1003–1008, IEEE, Washington, DC, USA, October 2011.
  35. J.-G. Park and K.-J. Kim, “Design of a visual perception model with edge-adaptive Gabor filter and support vector machine for traffic sign detection,” Expert Systems with Applications, vol. 40, no. 9, pp. 3679–3687, 2013.
  36. B. Soheilian, N. Paparoditis, and B. Vallet, “Detection and 3D reconstruction of traffic signs from multiple view color images,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 77, pp. 1–20, 2013.
  37. J. F. Khan, S. M. A. Bhuiyan, and R. R. Adhami, “Image segmentation and shape analysis for road-sign detection,” IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 1, pp. 83–96, 2011.
  38. I. Sebanja and D. B. Megherbi, “Automatic detection and recognition of traffic road signs for intelligent autonomous unmanned vehicles for urban surveillance and rescue,” in Proceedings of the 10th IEEE International Conference on Technologies for Homeland Security (HST '10), pp. 132–138, IEEE, Waltham, Mass, USA, November 2010.
  39. M. S. Prieto and A. R. Allen, “Using self-organising maps in the detection and recognition of road signs,” Image and Vision Computing, vol. 27, no. 6, pp. 673–683, 2009.
  40. X. Gao, N. Shevtsova, K. Hong et al., “Vision models based identification of traffic signs,” in Proceedings of the 1st European Conference on Colour in Graphics, Imaging and Vision (CGIV '2002), pp. 47–51, April 2002.
  41. J. Miura, T. Kanda, and Y. Shirai, “An active vision system for real-time traffic sign recognition,” in Proceedings of the IEEE 2000 Intelligent Transportation Systems Conference, pp. 52–57, October 2000.
  42. B. Alefs, G. Eschemann, H. Ramoser, and C. Beleznai, “Road sign detection from edge orientation histograms,” in Proceedings of the IEEE Intelligent Vehicles Symposium (IV '07), pp. 993–998, Istanbul, Turkey, June 2007.
  43. D. Tuia and G. Camps-Valls, “Semisupervised remote sensing image classification with cluster kernels,” IEEE Geoscience and Remote Sensing Letters, vol. 6, no. 2, pp. 224–228, 2009.
  44. M. A. García-Garrido, M. Ocaña, D. F. Llorca, M. A. Sotelo, E. Arroyo, and A. Llamazares, “Robust traffic signs detection by means of vision and V2I communications,” in Proceedings of the 14th IEEE International Intelligent Transportation Systems Conference (ITSC '11), pp. 1003–1008, IEEE, Washington, DC, USA, October 2011.
  45. T. Bui-Minh, O. Ghita, P. F. Whelan, and T. Hoang, “A robust algorithm for detection and classification of traffic signs in video data,” in Proceedings of the International Conference on Control, Automation and Information Sciences (ICCAIS '12), pp. 108–113, Ho Chi Minh City, Vietnam, November 2012.