Scientific Programming
Volume 2019, Article ID 6897345, 16 pages
https://doi.org/10.1155/2019/6897345
Research Article

Effects of Challenging Weather and Illumination on Learning-Based License Plate Detection in Noncontrolled Environments

Faculty of Computer Science, University of Oviedo, Oviedo, Spain

Correspondence should be addressed to A. Rio-Alvarez; delrioalvarez@gmail.com

Received 21 December 2018; Revised 30 March 2019; Accepted 14 May 2019; Published 27 June 2019

Guest Editor: Vijender K. Solanki

Copyright © 2019 A. Rio-Alvarez et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

License Plate Detection (LPD) is one of the most important steps of an Automatic License Plate Recognition (ALPR) system because it is the seed of the entire recognition process. In indoor controlled environments, there are many effective methods for detecting license plates. However, outdoor LPD is still a challenge due to the large number of factors that may affect the process and the results obtained. It is well established that a complete training set of images, including as many license plate angles and sizes as possible, improves the performance of every classifier. Along this line of work, numerous training sets contain images taken under different weather conditions. However, no studies have tested the differences in the effectiveness of different descriptors under these different conditions. In this paper, various classifiers were trained with features extracted from a set of rainfall images using different kinds of texture-based descriptors. The accuracy of these specifically trained classifiers over a test set of rainfall images was compared with the accuracy of the same descriptor-classifier pair trained with features extracted from a set of images taken under ideal conditions. In the same way, we repeated the experiment with images affected by challenging illumination. The research concludes, on the one hand, that including images affected by rain, snow, or fog in the training sets does not improve the accuracy of the classifier when detecting license plates in images affected by these weather conditions. Classifiers trained with ideal-condition images improve the accuracy of license plate detection in images affected by rainfall by up to 19%, depending on the kind of extracted features. However, on the other hand, the results show that including images affected by low illumination increases the accuracy of the classifier by up to 29%, regardless of the kind of feature selected.

1. Introduction

This research work is aimed at studying the effect of two important issues for outdoor Automatic License Plate Recognition (ALPR) systems, namely rainfall (rain, fog, and snow) and the lack of light, on the training stage of these ALPR systems. Our main goal is to obtain new information for improving the composition of training sets, reaching conclusions valid for every kind of scenario and compatible with other image-processing techniques.

ALPR systems have played a prominent role in the literature over recent years due to their popular application in real-life scenarios like automatic toll collection, supervision of traffic regulations, parking access, or traffic control, among others. However, the efficiency of such approaches is usually limited to specific or controlled scenarios.

Figure 1 shows the traditional stages of any ALPR system. These stages are common both in controlled and uncontrolled environments. However, the algorithms that must be applied in each of these stages should be adapted to the particular conditions of the environment. The less controlled the environment, the more difficult the challenges faced by the ALPR system.

Figure 1: Stages of an ALPR system.

License Plate Detection (LPD) is the first stage of any ALPR system. A complete image or video frame is taken as input. The output is the set of Regions of Interest (ROIs) that potentially contain a license plate. Therefore, LPD comprises two phases: license plate localization and ROI cropping. The efficacy of the LPD stage significantly determines the accuracy of the entire ALPR system. Moreover, it is the most time-consuming stage.

LPD systems can be broadly categorized into two groups: those based on boundary/edge information and character detection, and those based on Machine-Learning (ML) algorithms working on local features, mainly boosting-based approaches. It is also important to mention another kind of LPD system based on locating specific areas of the vehicle close to the license plate (such as the brake lights). In any case, these context-aware methods are based on one of the previous alternatives [1].

When the ALPR system works in a controlled environment, the region of the image where license plates can appear, as well as the plate angle and size, is usually bounded. In addition, in many indoor scenarios such as parking accesses, lighting conditions and other meteorological factors can also be controlled. In these scenarios, LPD methods based on edge detection and morphological operations can achieve good performance. These approaches are intuitive and powerful in scenarios where license plates are not noisy [2].

Nevertheless, when the ALPR system works in an uncontrolled environment, no prior information can be used to support the detection process. The license plate can appear in any region of the image, and the detection algorithms usually require an approach based on ML algorithms with a training stage where all possible angles and sizes of license plates are taken into consideration. Traditionally, these kinds of approaches have handled the problem of angle and regional variations by using a learning-based algorithm and including a sufficient variety of images in the training set. To deal with the issue of scale, these systems usually apply single-scale classifiers sequentially over a pyramid of images. But, in outdoor environments, training can also be determined by environmental factors such as lighting and meteorological conditions.

An appropriate selection of the kind of descriptors is a determinant step for ML-based approaches. Likewise, the selection of the images that will be used for training is one of the most important steps of the entire process. In this paper, we evaluate the influence on the training process of the challenging weather and illumination that occur in outdoor uncontrolled environments. It is reasonable to think that a complete set of training images including light variations and different weather conditions would improve the accuracy of the classifier, especially if the system will be used in areas with frequent rainfall. In this research, we ask under which conditions this statement is true. To answer this question, we measure the accuracy of different classifiers trained with images captured in optimal conditions and compare it with that of the same classifiers trained with images affected by challenging weather or low illumination.

This research includes the testing of commonly used texture features such as Histogram of Oriented Gradients (HOG) [3], Local Binary Pattern (LBP) [4], and Haar-like features [5] in combination with a boosted cascade and a Support Vector Machine (SVM), in order to consider traditional object detection algorithms such as the algorithm of Viola and Jones [5], the HOG-based approach proposed by Dalal and Triggs [3], and the approach based on LBP features proposed by Ojala et al. [4]. In addition to the chosen representative descriptors, we consider various texture-based variants such as the HOG-LBP combination [6], Local Gradient Patterns (LGPs) [7], Multi-Block Local Binary Patterns (MB-LBPs) [8], Compound Local Binary Patterns (CLBPs) [9], Local Ternary Patterns (LTPs) [10], and features extracted from the Gray-Level Co-occurrence Matrix (GLCM) [11].

In addition, and in order to check the robustness of our research, we repeated our tests using three more classifiers: K-Nearest Neighbor (KNN), an Artificial Neural Network (ANN), and a Linear Regression (LR) approach.

The rest of the paper is organized as follows. In Section 2, we briefly review the related literature. Section 3 describes the methods and algorithms tested in this research and presents our proposed experiment in detail. The results of the experiment are discussed in Section 4, and finally, we conclude the paper in Section 5, explaining the limitations and future work in Section 6.

2. Related Work

Challenging weather and difficult illumination conditions are important issues that should be taken into consideration by every LPD system in outdoor uncontrolled environments.

Most of the existing LPD methods do not consider input images with challenging illumination. Some methods propose a contrast-enhancement step in their LPD stage [12, 13], using techniques such as the fuzzy-based contrast enhancement proposed by Raju and Nair [14] or the improved methods proposed by Xue et al. [15]. But these kinds of methods are not effective for very low-contrast regions such as night images. Several others consider uneven illumination and other low-contrast issues [16, 17] but do not consider all challenging illumination conditions, such as the severe lack of light that occurs at night in outdoor environments. The use of special hardware like IR cameras is a common solution employed by many methods for detecting license plates at night [18, 19].

In addition, preprocessing techniques based on Weber's Law can reduce the luminance effect and high-intensity impulse noise [20], providing better detection under illumination variation. Likewise, these techniques can be used in the description of the images, improving the performance of object detection techniques; an example is WLD [21], a texture descriptor based on Weber's Law that is robust to noise and illumination changes.

Regarding challenging weather, in 2016, Azam and Islam [22] considered that none of the LPD methods existing until then was able to handle the issue of weather conditions. To support this assertion, they provide a table that summarizes the most important LPD techniques together with their limitations from the viewpoint of hazardous conditions. Since then, few approaches have taken this issue into account. Azam and Islam themselves [22] presented an LPD system for hazardous conditions. This approach includes a novel rain-removal method that uses a frequency-domain mask to filter rain streaks from an image.

Recently, Panahi and Gholampour [23] presented a complete ALPR system capable of detecting and recognizing Persian license plates obfuscated by stains and several levels of dirtiness in different kinds of scenarios with variable weather and illumination. This approach is also assisted by a monochrome camera and an IR projector for plate detection and achieves accuracies of 98.7%, 99.2%, and 97.6% for plate detection, character segmentation, and plate recognition, respectively.

Raghunandan et al. [19] recently proposed a novel mathematical model based on the Riesz fractional operator for enhancing the edge information in license plate images. Performing this operation on each input image improves the quality of images affected by multiple factors, benefiting the detection and recognition methods applied afterwards.

The approaches listed above face the problem of challenging weather and illumination through different preprocessing techniques and specialized hardware. However, there is no research on how diverse weather conditions affect training with different descriptors. In the training process, several approaches simply incorporate different weather and illumination conditions into their datasets. In the present research, we ask whether any descriptor is capable of correctly describing challenging weather so that this information can be used in the detection process. In the same way, we analyze the influence of challenging illumination on the same descriptors.

3. Materials and Methods

In [24], He et al. consider Haar-like, HOG, and LBP to be highly representative descriptors for license plate detection because Haar-like and LBP features are appropriate for representing character corners, while HOG features are suitable for representing outlines, such as the horizontal and vertical relations of characters. For this reason, they proposed a fusion of LBP and HOG features as a suitable descriptor for representing license plates.

To study the influence of weather conditions and illumination changes on representative image descriptors, we decided to consider these three descriptors, motivated by the reasons given by He et al. [24] and supported by an extensive literature [10–18]. In addition, we consider various improved variants of the aforementioned representative descriptors proposed recently in the literature, such as Local Gradient Patterns (LGPs), Multi-Block Local Binary Patterns (MB-LBPs), Compound Local Binary Patterns (CLBPs), the combination of HOG and LBP (HOG-LBP), and Local Ternary Patterns (LTPs), as well as features extracted from the classical Gray-Level Co-occurrence Matrix (GLCM), one of the earliest techniques used for image texture analysis. Furthermore, we expect descriptors based on the same kind of feature to exhibit similar behaviour.

Color-based descriptors are also widely used in LPD but are not considered in the present research. Since some countries use specific colors for their license plates, the main idea of color-based LPD methods is to locate this color pattern in the image, for example, the blue rectangle that appears on the left side of European license plates. These kinds of methods, in addition to being closely tied to certain kinds of license plates, are not taken into consideration in this paper because color features are sensitive to illumination variations, so some approaches also require special lighting [25], and they are not considered a good option for uncontrolled environments.

Each descriptor was trained using the original classifier proposed in the literature. These feature-classifier combinations are given in Table 1 marked with a check (✓). In addition to these original combinations, three more classifiers (a Neural Network, K-Nearest Neighbor, and a Linear Regression) are included in this research in order to support the results obtained by the original combinations.

Table 1: Original combinations of features and classifiers.

SVM has been taken as the reference classifier for training the other descriptors due to its widespread use for texture classification. In addition, to ensure the robustness of the experiment, we repeated the tests of each variant using the above-mentioned three additional classifiers.

For each descriptor-classifier combination, one "generic" classifier was trained with a set of images obtained in ideal conditions (without challenging weather and with adequate lighting), and another "specific" one was trained with a set of images affected by rainfall (heavy rain, snow, or fog). Both classifiers were tested over another set of images affected by rainfall with the goal of comparing their effectiveness in detecting license plates under these challenging conditions. The objective is to determine which kind of descriptor can adapt better to challenging weather and how we could increase the performance of each descriptor by an appropriate selection of images for the training sets.

In the same way, we repeated the experiment using images affected by challenging illumination. One generic classifier and another specific classifier (trained with images taken without adequate lighting) were trained for each descriptor-classifier combination and tested over a set of images taken in poor illumination conditions. The goal is the same: obtain information about how each descriptor is affected by low illumination and improve our knowledge about the composition of training sets.

3.1. Feature Extractors and Classifiers
3.1.1. Histogram of Oriented Gradients (HOG)

The HOG descriptor was introduced by Dalal and Triggs [3] in 2005 and is based on evaluating well-normalized local histograms of image gradient orientations in a dense grid.

The image is divided into blocks, each consisting of several cells. For each cell, a histogram that summarizes the gradient directions of its pixels is calculated. The traditional steps of this process are shown in Figure 2.

Figure 2: Traditional steps of HOG calculation.

All histograms are concatenated into one vector. This vector is the HOG descriptor of the image. By modifying the size of cells and blocks, it is possible to adapt the descriptor to different scales.

Figure 3(b) is the visualization of a HOG descriptor of the input image (Figure 3(a)) calculated using 8 × 8 pixels per cell, the descriptor shown in Figure 3(c) is calculated using 16 × 16 pixels per cell, and, finally, the descriptor shown in Figure 3(d) is calculated using 32 × 32 pixels per cell. This kind of visualization shows, for each cell, the visual representation of the gradient vectors summarized in its histogram.

Figure 3: Visualization of three HOG descriptors (b–d) obtained from the same input image (a).
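As an illustration of the computation described above, the following NumPy sketch calculates magnitude-weighted orientation histograms per cell. It is a deliberately simplified version of HOG: it omits block normalization and the bilinear interpolation between neighbouring bins that full implementations apply, and the function name is ours, not from any of the cited systems.

```python
import numpy as np

def cell_histograms(img, cell=8, bins=9):
    """Per-cell, magnitude-weighted histograms of unsigned gradient
    orientations (0-180 degrees) -- the core step of the HOG pipeline."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # centred horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # centred vertical gradient
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    ny, nx = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((ny, nx, bins))
    for i in range(ny):
        for j in range(nx):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            # assign each pixel's orientation to one of the `bins` bins
            idx = np.minimum((a * bins / 180.0).astype(int), bins - 1)
            np.add.at(hist[i, j], idx, m)   # accumulate gradient magnitudes
    return hist
```

Concatenating (and, in a full implementation, block-normalizing) these per-cell histograms yields the final descriptor vector.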

The classifier selected by Dalal and Triggs is an SVM, a supervised machine-learning algorithm that can be used for both classification and regression challenges. The SVM was developed by Vladimir Vapnik and Alexey Ya. Chervonenkis in 1963 and, with further improvements, was published in 1995 by Cortes and Vapnik [26].

SVMs are binary classification systems: only two classes are considered. In the case of LPD, these classes are License Plate (LP) and Not License Plate (NLP). In their simplest form, SVMs are hyperplanes that separate the training data by a maximal margin. The training samples that lie closest to the hyperplane are called support vectors.

In other words, given the training data {(x_1, y_1), …, (x_n, y_n)}, the SVM finds the hyperplane leaving the largest possible number of samples of the same class on the same side, while maximizing the distance of either class from the hyperplane. Depending on the side of the hyperplane on which they are located, the samples are labelled with y_i = 1 or y_i = −1:

LP and NLP classes are considered. Each positive sample (LP) is labelled as 1, and negative samples (NLP) are labelled as −1.

Finding the optimal hyperplane implies solving a constrained optimization problem using quadratic programming. The distance between positive and negative samples is the optimization criterion. The decision function of the hyperplane is defined as f(x) = Σ_i α_i y_i k(x_i, x) + b, where k is the kernel function. Any data point x_i corresponding to a nonzero α_i is a support vector of the optimal hyperplane. Therefore, finding the optimal hyperplane is equivalent to finding all the nonzero α_i. When f(x) ≥ 0, x is classified as 1 (LP); otherwise, x is classified as −1 (NLP).

Several different kernels are used to solve different problems. In this research, a linear kernel is used due to its performance in the context of LPD [27].
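The decision function above can be sketched in a few lines of NumPy. The support vectors, labels, and multipliers below are illustrative toy values, not the result of an actual training run on the dataset of this paper.

```python
import numpy as np

# Illustrative toy support vectors in 2-D, with labels y_i and
# multipliers alpha_i (made-up values, not from a real training run).
sv    = np.array([[1.0, 1.0], [-1.0, -1.0]])
y_sv  = np.array([1.0, -1.0])
alpha = np.array([0.5, 0.5])
b     = 0.0

def linear_kernel(u, v):
    """The linear kernel k(u, v) = u . v used in this research."""
    return u @ v

def predict(x):
    """f(x) = sum_i alpha_i * y_i * k(x_i, x) + b; the sign decides LP/NLP."""
    f = sum(a * y * linear_kernel(s, x) for a, y, s in zip(alpha, y_sv, sv)) + b
    return 1 if f >= 0 else -1   # 1 = License Plate, -1 = Not License Plate
```

In practice, the α_i and b come out of the quadratic-programming training step; only the support vectors (nonzero α_i) need to be kept for prediction.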

Muhammad and Altun [28] used HOG features to detect license plates by means of genetic algorithms, with a success rate of over 98 percent. In [29], Sarfraz et al. proposed a method in which the license plate is first bounded in a region of interest and then localized by simple template matching using HOG descriptors. Khan et al. proposed an efficient method [30] using a fusion of HOG features and geometric features, followed by a selection of features using a novel entropy-based method. HOG descriptors are also widely used in ALPR systems for detecting the characters of the license plate in the recognition stage [31].

3.1.2. Local Binary Patterns (LBPs)

The original idea proposed by Ojala et al. [4] was to label each pixel of an image with an LBP code. The first step in calculating each LBP code is subtracting the center pixel value from the value of each of its eight neighbors in a 3 × 3 square. The resulting strictly negative values are encoded with 0, and the others with 1. Concatenating all these binary codes in a clockwise direction starting from the top-left produces the LBP code associated with the center pixel; this decimal value encodes the local structure around it. Figure 4 shows an example of the basic LBP operator.

Figure 4: Original LBP codes calculation.
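A minimal implementation of the basic 3 × 3 operator might look as follows. The clockwise ordering follows the description above; the example patch and the function name are ours, for illustration only.

```python
import numpy as np

def lbp_code(patch):
    """Basic LBP on a 3x3 patch: threshold each neighbour against the
    centre (neighbour - centre < 0 -> 0, otherwise 1) and concatenate
    the bits clockwise starting from the top-left neighbour."""
    c = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[i, j] - c >= 0 else 0 for i, j in order]
    # most significant bit first: top-left neighbour
    return sum(bit << (7 - k) for k, bit in enumerate(bits))
```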

Using the basic LBP operator, large-scale structures cannot be captured due to the small 3 × 3 square. To deal with textures at different scales, the size of the neighborhood is made variable [32] and is defined as a set of sampling points evenly spaced on a circle whose center is the pixel to be labelled. Sampling points that do not fall at pixel centers are interpolated, allowing any radius and any number of sampling points in the neighborhood [33]. The notation (P, R) denotes a neighborhood of P sampling points on a circle of radius R. Figure 5 shows three examples of LBP operators with different radii and numbers of sampling points.

Figure 5: Examples of different extended LBP operators.

Dividing the image into groups of pixels called blocks and summarizing the LBP values of the pixels of each block in a histogram produces a powerful texture descriptor. The resulting feature vector can then be processed using a Support Vector Machine (SVM), K-Nearest Neighbor (K-NN), or some other machine-learning algorithm.
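The block-histogram step can be sketched as follows (illustrative code with names of our own; real implementations often shorten each histogram using uniform patterns).

```python
import numpy as np

def block_lbp_histograms(lbp_img, block=16, n_bins=256):
    """Split an image of LBP codes into non-overlapping blocks and
    concatenate the per-block histograms into one feature vector."""
    h, w = lbp_img.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            hist, _ = np.histogram(lbp_img[i:i+block, j:j+block],
                                   bins=n_bins, range=(0, n_bins))
            feats.append(hist)
    return np.concatenate(feats)   # input to the SVM / K-NN classifier
```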

Since the publication of Ojala et al. [4], the LBP methodology has been extended with plenty of variations for improved performance in different applications, including license plate detection. Recently, Al-Shemarry et al. [34] proposed a novel LPD method based on AdaBoost cascade classifiers with three-level LBP (3L-LBP) features. Rashedi and Nezamabadi-pour proposed in [35] a complete LPD solution employing a combination of four methods, including one based on cascade classifiers and local binary pattern (LBP) features.

3.1.3. Haar-Like Features

Under the approach of Viola and Jones [5], rectangular regions with shaded and clear areas are extracted from the image. The four original kinds of features are shown in Figure 6. These features are designed to detect certain elements, like edges (Figures 6(a) and 6(b)), lines (Figure 6(c)), and diagonals (Figure 6(d)). The value of the feature is calculated by subtracting the sum of all pixels within the shaded rectangles from the sum of the clear rectangles.

Figure 6: Original Haar-like features.
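What makes these features fast is the integral image, which gives any rectangle sum in constant time. The following sketch (our own illustrative helper names) computes a two-rectangle vertical edge feature this way.

```python
import numpy as np

def integral(img):
    """Integral image: ii[r, c] = sum of img[:r+1, :c+1]."""
    return img.cumsum(0).cumsum(1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using four integral-image lookups."""
    s = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        s -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        s -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return s

def haar_edge_vertical(ii, r, c, h, w):
    """Two-rectangle edge feature: shaded left half minus clear right half."""
    left = rect_sum(ii, r, c, r + h, c + w // 2)
    right = rect_sum(ii, r, c + w // 2, r + h, c + w)
    return left - right
```

Because every feature reduces to a handful of such lookups, thousands of Haar-like features can be evaluated per window at detection time.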

These features were initially designed for the specific problem of face detection. In 2002, Lienhart et al. [36] presented an extended set of Haar-like features which add additional domain-knowledge to the framework.

The original work of Viola and Jones [5] was designed for detecting faces using a cascade classifier based on AdaBoost [37]. Its proven effectiveness in detecting faces made it quickly popular in every area of object detection. The Boosted Cascade of Simple Features is a supervised machine-learning method based on AdaBoost (Adaptive Boosting), which is required for training the cascade. Boosting techniques are based on the combination of weak classifiers to create a strong classifier with the desired precision.

AdaBoost was introduced by Freund and Schapire [37] in 1995 in order to solve many challenges associated with boosting processes. For creating a cascade, AdaBoost is used both for selecting a set of features and for training the classifier.

For selecting features, weak classifiers, each of them associated with a single feature, are trained. The main goal of these classifiers is to determine the threshold value that minimizes the number of misclassified samples. Therefore, a weak classifier h(x), where x is an input image window, is determined by a feature f, a threshold θ, and a polarity p: h(x) = 1 if p·f(x) < p·θ, and h(x) = 0 otherwise.

In each iteration of the AdaBoost algorithm (t = 1, …, T), one weak classifier, and thereby one feature, is selected. The strong classifier is computed as a linear combination of the selected weak classifiers, whose values are either 0 or 1, each weighted by a coefficient α_t: H(x) = 1 if Σ_t α_t·h_t(x) ≥ (1/2)·Σ_t α_t, and H(x) = 0 otherwise.

Instead of creating a single strong classifier with the algorithm described above, it is possible to create several efficient smaller classifiers capable of rejecting a high number of negative windows while passing almost all positive windows on for evaluation by further classifiers. In this way, a cascade of classifiers is obtained. This process is represented in Figure 7.

Figure 7: Cascade of classifiers.
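The early-rejection behaviour of the cascade can be sketched as a simple loop; here each stage is modelled, for illustration only, as a boolean predicate over the candidate window.

```python
def cascade_classify(window, stages):
    """Pass a candidate window through the cascade: any stage may reject
    it immediately; only windows accepted by every stage survive as ROIs."""
    for stage in stages:
        if not stage(window):
            return False   # rejected early: later stages never run
    return True            # accepted by all stages: candidate license plate
```

Because most windows in an image are background, most of them are discarded by the first cheap stages, which is what makes the cascade fast in practice.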

Regarding LPD, Zheng et al. proposed in [38] an efficient cascade detector whose first two stages are based on global features in order to discard most of the clear background areas, while the four following stages are based on Haar-like features. Furthermore, Wang et al. [39] presented a cascade-based classifier for detecting and tracking license plates using an extended set of Haar-like features.

3.1.4. Variants

In order to confirm the results obtained for the three selected representative descriptors, we added to our research various related texture descriptors and improved variants: Compound Local Binary Patterns (CLBPs), Local Ternary Patterns (LTPs), Local Gradient Patterns (LGPs), Multi-Block Local Binary Patterns (MB-LBPs), the HOG-LBP combination, and features extracted from the Gray-Level Co-occurrence Matrix (GLCM).

Compound Local Binary Patterns [9] increase the robustness of the simple LBP. In this method, a 2-bit code is used to encode the local texture property of the image. The first bit encodes the sign of the difference between the center and the neighboring pixel value, while the second bit encodes the magnitude of the difference with respect to a threshold. The main disadvantage of the CLBP algorithm is the size of the feature descriptor, which is larger than that of other LBP variants and makes the computation considerably harder. Reducing the feature dimension inevitably leads to a loss of texture information.

Local Ternary Patterns [10] were introduced in 2010 by Tan and Triggs for face recognition. Since then, several LTP methods have been developed to improve the original LTP ([40–43]). The original LTP uses a group of ternary codes to encode each pixel. Every ternary sequence is divided into two separate LBP sequences: upper patterns and lower patterns. The algorithm generates the texture features from these binary codes.

LTP is capable of encoding the relations "greater than," "equal to," and "less than" between a pixel and its neighbors; LBP, on the other hand, can only reflect two of them, "greater than" and "less than."
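A minimal sketch of the LTP encoding described above, with a hypothetical threshold t; the patch ordering mirrors the basic LBP example, and the code is illustrative rather than the original authors' implementation.

```python
import numpy as np

def ltp_codes(patch, t=2):
    """Basic 3x3 LTP with threshold t: each neighbour is encoded as
    +1 (greater than centre + t), 0 (within +-t of the centre), or
    -1 (less than centre - t); the ternary pattern is then split into
    an 'upper' and a 'lower' binary (LBP-like) code."""
    c = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    upper = lower = 0
    for k, (i, j) in enumerate(order):
        d = patch[i, j] - c
        if d > t:
            upper |= 1 << (7 - k)   # neighbour clearly brighter
        elif d < -t:
            lower |= 1 << (7 - k)   # neighbour clearly darker
    return upper, lower
```

Neighbours within ±t of the center contribute to neither code, which is what makes LTP more tolerant to noise than the plain LBP.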

Jun et al. [7] proposed Local Gradient Patterns in 2013. The main goal of LGP is to overcome the problem of local intensity variations along edge components. To achieve this, LGP considers the intensity gradient profile to emphasize the local variation in the neighborhood. If the intensity of the entire image is changed globally, there is no significant difference between the LGP and LBP operators (both produce invariant patterns). However, if the intensity of the background or the foreground is changed locally, LGP still generates invariant patterns, in contrast to the LBP operator, because it is based on gradient differences rather than intensity differences alone.

Multi-Block LBP, proposed by Zhang et al. [8], is an extension of LBP in which equally sized subblocks are used to compute the features. Instead of comparing single pixel values, MB-LBP compares the mean pixel values of these subblocks. It describes texture information at different scales well, and it can be computed at multiple scales in constant time using the integral image.

Combining Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) as the feature set, Wang et al. [6] proposed a novel approach for detecting pedestrians. The proposed method combines the HOG feature with the cell-structured LBP feature.

When the background contains a high amount of noisy edges, HOG performs poorly. However, LBP uses the concept of uniform patterns, which can filter out this kind of noise, making the two descriptors complementary in this aspect. This combination brings together the advantages of HOG and LBP for detecting license plates. Accordingly, He et al. [24] recently published a part-based model using HOG for detecting the car and the combination of HOG and LBP for detecting the license plate.

The Gray-Level Co-occurrence Matrix [11] is one of the earliest techniques used for image texture analysis. Given a grayscale image composed of pixels, each with a specific gray level (intensity), the GLCM is a tabulation of how often different combinations of gray levels co-occur in the image or in a subimage. Using the content of a GLCM, the associated descriptor calculates different texture properties such as contrast, dissimilarity, homogeneity, energy, and correlation.
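A minimal sketch of building a GLCM for the horizontal offset (0, 1) and deriving the contrast property from it; the function names and the small 4-level test image are illustrative, not tied to any particular library.

```python
import numpy as np

def glcm(img, levels=4):
    """Co-occurrence counts for the offset (0, 1): each pixel paired
    with its right-hand neighbour."""
    m = np.zeros((levels, levels), dtype=int)
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
    return m

def contrast(m):
    """Contrast = sum over (i, j) of (i - j)^2 * p(i, j), where p is the
    normalized co-occurrence matrix."""
    i, j = np.indices(m.shape)
    p = m / m.sum()
    return ((i - j) ** 2 * p).sum()
```

The other properties mentioned above (dissimilarity, homogeneity, energy, correlation) are computed analogously as different weighted sums over the normalized matrix.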

3.2. Dataset

An extensive and complete dataset provided by the City Council of Oviedo (Asturias, Spain), including more than 1,000,000 images captured by the traffic cameras of the city between 2013 and 2016, was used. All images include at least one license plate, and they were captured from 37 different locations: restricted access areas, urban and interurban roads of up to 4 lanes, intersections with traffic lights, and roundabouts.

Several camera positions were used. In some cases, such as restricted access areas, the camera is located on the side of the vehicle, while on roads and intersections, the camera is usually located hanging from poles, lampposts, or traffic lights.

Images were captured 24 hours a day, so several degrees of illumination and meteorological conditions are included in the dataset. From the dataset described above, 21,000 images were extracted and sorted into three groups depending on the environmental conditions: rainfall (rain, snow, and fog), low illumination, and optimal conditions (without rainfall and with good illumination). Example images of the low illumination set and the rainfall set are shown in Figures 8(a) and 8(b), respectively. Every group is composed of a training set of 5,000 images and a test set of 2,000 images. The percentage of images captured from the different cameras and locations is the same in each group. Table 2 summarizes the composition and purpose of each set of images.

Figure 8: Example images extracted from the low illumination set (a) and rainfall set (b).
Table 2: Description of the sets of images used for the experiments.

In addition, 7,000 negative images were extracted by cropping non-license-plate areas from images of the main set, in order to cover the same urban scenarios.

3.3. Methodology

The main goal of this experiment is to compare the accuracy of a classifier trained with images captured in optimal conditions with the same classifier trained with images affected by challenging weather or low illumination over the corresponding test images set.

Our experiment is composed of two phases, each one associated with one of the two issues considered in this paper: challenging weather and low illumination. First, we analysed the influence of challenging weather. To achieve this goal, we proceeded as follows. For each descriptor-classifier combination, one classifier was trained using the GenTrainingSet and another one was trained using the RainTrainingSet, composed of images affected by challenging weather. Both classifiers were tested over the RainTestSet, which is also composed of images affected by challenging weather. By comparing the accuracy of both classifiers in detecting license plates under these challenging conditions, it is possible to determine which kind of descriptor adapts better to challenging weather and how we could increase the performance of each descriptor by an appropriate selection of images for the training sets.

For the second phase, we repeated the experiment using images affected by challenging illumination. For each descriptor-classifier combination, one classifier was trained using the GenTrainingSet and another using the NightTrainingSet (images taken without adequate lighting), and both were tested over the NightTestSet. The goal is the same: to determine how each descriptor is affected by low illumination and to improve our knowledge about the composition of training sets.

The classifiers were named as follows: CLASSIFIER-DESCRIPTOR-{GEN/RAIN/NIGHT}, where the suffix -GEN indicates that the classifier was trained with the optimal conditions training set (GenTrainingSet), the suffix -RAIN denotes that it was trained with the rainfall training set (RainTrainingSet), and the suffix -NIGHT indicates that it was trained with challenging illumination images (NightTrainingSet).

As an example of the entire test for each classifier-descriptor pair, Figure 9 represents the model training step (Figure 9(a)) and the testing step (Figure 9(b)) for the SVM-HOG pair. This procedure is repeated for each possible classifier-descriptor pair.

Figure 9: Complete experiment for the pair SVM-HOG including classifiers creation step (a) and testing step (b).

The HOG feature extraction module was configured with the following parameters. The cell size was set to 8 × 8 pixels, and every block is composed of 2 × 2 cells. Nine orientations are considered: this is the number of orientation bins into which the gradients of the pixels of each cell are split in the histogram.
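As a concrete illustration, this configuration can be reproduced with scikit-image's hog function. The sketch below assumes a hypothetical 64 × 128 pixel candidate patch; the paper does not specify its detection window size:

```python
import numpy as np
from skimage.feature import hog

# Hypothetical 64 x 128 grayscale candidate patch (window size is an assumption)
patch = np.random.rand(128, 64)

features = hog(
    patch,
    orientations=9,          # 9 orientation bins per cell histogram
    pixels_per_cell=(8, 8),  # 8 x 8 pixel cells
    cells_per_block=(2, 2),  # each block groups 2 x 2 cells
    block_norm="L2-Hys",
)
# 16 x 8 cells -> 15 x 7 overlapping blocks of 2*2*9 = 36 values each
print(features.shape)  # (3780,)
```

For this window size, the resulting descriptor is a 3,780-dimensional vector, which then feeds the classifier.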

Regarding the extraction of LBP features, each LBP code is defined by 8 sampling points and a 2 px radius. Images are divided into 16 × 16 px blocks, and the LBP codes of each block are summarized in a histogram with a separate bin for every pattern. We decided to use uniform patterns [44] in order to reduce the length of the histogram and thus the dimension of the feature vector. Using uniform patterns, the length of the feature vector for a single block is reduced from 256 to 59.

Every Cascade classifier was trained with the same parameters: the training process was set to 20 stages, the minimal desired hit rate for each stage was set to 0.998, and the maximum desired false alarm rate to 0.5.
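In OpenCV's implementation, these parameters map onto the opencv_traincascade tool. The following is an assumed invocation for illustration only: the paths, sample counts, and the 48 × 16 window size are placeholders, not values reported in the paper:

```shell
# 20 stages; per-stage minimal hit rate 0.998; per-stage maximum false alarm rate 0.5
opencv_traincascade -data cascade_out -vec plates.vec -bg negatives.txt \
    -numPos 4500 -numNeg 7000 \
    -numStages 20 -minHitRate 0.998 -maxFalseAlarmRate 0.5 \
    -featureType HAAR -w 48 -h 16
```

With a hit rate of 0.998 and a false alarm rate of 0.5 per stage, a 20-stage cascade targets an overall hit rate of roughly 0.998^20 ≈ 0.96 and a false alarm rate of roughly 0.5^20 ≈ 10^-6.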

Regarding the linear SVM, all classifiers were trained using a regularization factor C set to 0.1.
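A minimal sketch of this setup with scikit-learn's LinearSVC follows; the feature matrix is random placeholder data standing in for descriptor vectors, not features from the paper's datasets:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3780))  # hypothetical HOG-sized feature vectors
y = np.repeat([1, 0], 100)        # 1 = license plate, 0 = background

clf = LinearSVC(C=0.1)            # regularization factor C = 0.1, as in the paper
clf.fit(X, y)

# Thresholding these confidence scores sweeps the miss rate / FPPI curve
scores = clf.decision_function(X)
print(scores.shape)  # (200,)
```

Varying the decision threshold on these scores is what produces the different operating points plotted in the evaluation.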

3.4. Evaluation

In order to compare detectors, the miss rate versus FPPI (false positives per image) is plotted by varying the threshold on detection confidence. Both values are plotted on log axes following the evaluation metrics proposed by Dollar et al. [45]. This is preferred to precision-recall curves for tasks in which there is an upper limit on the acceptable FPPI rate independent of license plate density [45].

Miss rate, or false negative rate (FNR), is the number of missed detections (license plates that the classifier failed to detect) relative to the total number of license plates. It is the complement of recall, or true positive rate (TPR).

Equation (5) shows the relation between recall, or TPR, and miss rate:

miss rate = FN/P = (P − TP)/P = 1 − recall,  (5)

where TP (true positives) is the number of correct detections, P is the total number of license plates, and FN (false negatives) is the number of missed detections. False positives per image (FPPI) is the number of false positives divided by the total number of images.

In this kind of graph, lower curves indicate better accuracy. The miss rate at 1 FPPI is taken as the common reference point for comparing results, considering that there are 1.2 license plates per image in the selected test sets. In addition, the range 10^−2–10^1 FPPI is considered the range of interest for evaluating which classifier performs better (the lower the curve, the better the performance).
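The two quantities plotted can be computed directly from raw detection counts. The counts in the sketch below are hypothetical, chosen only to land on the 1 FPPI reference point:

```python
def miss_rate_fppi(tp, fn, fp, n_images):
    """Miss rate (FN / P) and false positives per image at one confidence threshold."""
    p = tp + fn              # total number of ground-truth license plates
    return fn / p, fp / n_images

# Hypothetical operating point on a 2,000-image test set
mr, fppi = miss_rate_fppi(tp=1800, fn=600, fp=2000, n_images=2000)
print(round(mr, 3), fppi)  # 0.25 1.0
```

Sweeping the detection-confidence threshold and recomputing both values at each step traces the full log-log curve.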

4. Results and Discussion

This research focuses on two environmental factors: challenging weather and challenging illumination conditions. Eight tests were performed for each challenge, comparing the performance of specifically trained classifiers with that of generically trained classifiers according to the metric described in Section 3.4.

In order to analyze the effects of challenging weather conditions, eight tests were run. First, HOG features were tested, in accordance with the approach of Dalal and Triggs [3], by comparing the performance of the SVM trained in optimal conditions with that of the SVM trained with challenging weather (Figure 10).

Figure 10: Results of the challenging weather test for HOG descriptors using an SVM classifier.

The classifier trained with the GenTrainingSet performs better than the classifier that received specific training for challenging weather (RainTrainingSet): the generic classifier improves the recall at 1 FPPI by up to 19%.

Secondly, we tested the classical approach of LBP descriptor with an SVM classifier. Figure 11 shows the results of this test.

Figure 11: Results of the challenging weather test for two different kinds of LBP descriptors using an SVM classifier. The left-hand image (a) shows the results using the nonrotation-invariant uniform LBP extension, and the right-hand image (b) shows the results using the original simple LBP operator.

At first, we selected the nonrotation-invariant uniform LBP (NRI LBP-U) version as the representative LBP operator (Figure 11(a)). When using an SVM classifier, we did not detect a clear difference between the two curves: each one performs better than the other along different parts of the range of interest. In order to ensure reliable results, we repeated the experiment using the original LBP operator (Figure 11(b)) with the same parameters as NRI LBP-U except the number of histogram bins (256 instead of 59). The margin between both curves remained small, but in this last test, the classifier trained with the GenTrainingSet outperformed the specific one over the whole range of interest.

The following test compared the performance of the algorithm of Viola and Jones [5] trained with good weather conditions against the same algorithm trained with specific challenging weather images.

Figure 12 again shows that the curve of the classifier trained with optimal conditions images runs below the specific training classifier curve along the area of interest. The difference between them is not very large due to the high accuracy of this approach for LPD.

Figure 12: Results of the challenging weather test for the Viola and Jones [5] approach.

Regarding the variants, the results are in line with those of the three selected representative descriptors. It is important to note that every LBP variant (CLBP, LTP, LGP, and MB-LBP) obtained similar results over the range of interest. Our tests show (Figure 13) that the generic classifier of each LBP-based descriptor improves the recall of the model by between 5% and 10% at the 1 FPPI reference point, compared to the specific one trained with the RainTrainingSet.

Figure 13: Results of the challenging weather test for LBP variants trained with SVM classifiers: CLBP (a), LTP (b), LGP (c), and MB-LBP (d).

Figure 14 shows the comparison between both classifiers trained with HOG-LBP descriptors and an SVM classifier. Consistent with the results obtained by the HOG descriptor, the general performance of both curves outperforms the LBP variants. The results are similar to those of previous tests: the generic classifier trained with the GenTrainingSet improves the recall of the classifier trained with the RainTrainingSet by up to 7% at the 1 FPPI reference point and remains higher along the whole range of interest.

Figure 14: Results of the challenging weather test for HOG-LBP descriptor using an SVM classifier.

Finally, we repeated the test with the GLCM descriptor. Because of the simplicity of the extracted features, the general performance of both curves is significantly worse than in the rest of the tests. Irrespective of the general performance, Figure 15 shows that the curve of the generic classifier trained with the GenTrainingSet again outperforms the curve of the specifically trained classifier.

Figure 15: Results of the challenging weather test for GLCM descriptor using an SVM classifier.

The second challenging environmental condition evaluated in this research was the lack of light in ALPR scenarios. We used the same procedure as for evaluating the effects of challenging weather.

Again, the first test of this experiment was designed to evaluate the effects of the lack of light on HOG performance by comparing the SVM trained in optimal conditions with the SVM trained with challenging illumination (Figure 16). It is noticeable that the miss rate versus FPPI curve corresponding to the classifier that received specific training with the NightTrainingSet performs better than that of the classifier trained with the GenTrainingSet along the whole range of interest.

Figure 16: Results of the challenging illumination test for HOG descriptors using an SVM classifier.

Figure 17 shows the results of the same test using LBP descriptors. The curve corresponding to the classifier that received specific training with the NightTrainingSet performs clearly better than that of the classifier trained with good conditions images along the range of interest. LBP captures light differences with high accuracy, and the difference in recall between the classifiers is up to 19%, taking 1 FPPI as reference.

Figure 17: Results of the challenging illumination test for LBP descriptors using an SVM classifier.

Similar results were obtained in the next test, which again shows a clear difference between the curve corresponding to the classifier that received specific training with the NightTrainingSet and the curve corresponding to the classifier trained with good conditions images. Figure 18 shows the graph for the Cascade classifier of Haar features. In this test, the specifically trained classifier improves the recall by up to 29% at the 1 FPPI reference point.

Figure 18: Results of the challenging illumination test for the Viola and Jones [5] approach.

With regard to the variants, there are large differences between the curve corresponding to the specifically trained classifier and the generic one in every test performed. The results were conclusive, and every variant test (Figure 19) confirmed the results of the representative descriptor tests.

Figure 19: Results of every variant for the challenging illumination test using an SVM classifier: CLBP (a), LTP (b), HOG-LBP (c), GLCM (d), LGP (e), and MB-LBP (f).
4.1. Robustness Checks

In order to verify the robustness of our results, three additional classifiers were included in our tests: Logistic Regression (LR), K-Nearest Neighbors (K-NN), and an Artificial Neural Network (ANN).

The LR classifier implements a simple logistic regression with a regularization factor C = 1.

The K-NN classifier assigns the class of an element depending on the distance between that element and its neighbors. We selected 5 nearest neighbors and a Euclidean distance function. Regarding the ANN, we used a Multilayer Perceptron with one hidden layer.
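The three robustness-check classifiers might be configured in scikit-learn as sketched below. The training data are random placeholders, and the ANN's hidden-layer width is an assumption, since the paper only fixes the number of hidden layers:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 59))   # hypothetical LBP histograms
y = np.repeat([0, 1], 60)

classifiers = {
    "LR": LogisticRegression(C=1.0),                    # C = 1, as in the paper
    "K-NN": KNeighborsClassifier(n_neighbors=5,         # 5 nearest neighbors
                                 metric="euclidean"),   # Euclidean distance
    "ANN": MLPClassifier(hidden_layer_sizes=(50,),      # one hidden layer (width assumed)
                         max_iter=500, random_state=1),
}

for name, clf in classifiers.items():
    clf.fit(X, y)
    print(name, clf.predict(X[:3]).shape)
```

Each model is then evaluated with the same miss rate versus FPPI procedure used for the SVM and Cascade classifiers.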

Regardless of the kind of classifier, similar differences can be observed for each descriptor between the curve corresponding to the model that received specific training and the curve of the model trained with the generic GenTrainingSet. The results confirm that the curves associated with specific training tend to outperform the generic ones in the challenging illumination tests, whereas the curves associated with generic training tend to outperform the specific ones in the challenging weather tests.

Detailed data of every test performed are available at https://doi.org/10.6084/m9.figshare.7926920.

5. Conclusions

When an object detection system based on feature classification runs in outdoor scenarios, it is reasonable to think that the best way to obtain good performance is to consider every environmental condition in the training sets. If the system is located in a place where it often rains, a feasible strategy is to train the classifier using rain images in the proper proportion. In the same way, if the system operates during the night, it makes sense to use low-illumination images in proportion to the estimated time that the system works under these conditions or, if possible, to use different classifiers specifically trained for each condition.

The results obtained suggest that illumination and weather are two distinct problems with different origins; thus, their influence on the description of objects is completely different.

Regardless of the kind of feature, the effect of rainfall is not adequately reflected in the descriptors. After testing four different kinds of features in combination with two classifiers, only the combination LBP + SVM did not produce conclusive evidence. The remaining tests indicate that, by removing the rainfall images from the training sets, it is possible to improve the performance of the classifier by up to 19%. When rain, fog, or snow is captured by a camera in a 2D image, suspended particles create a random pattern between the camera and the target. These patterns differ from image to image, and the feature extraction process seems unable to recover relevant information from them to improve the classification. In fact, the inclusion of this random information over the license plates may worsen the performance of the classifier because its effect is similar to that of digital noise.

On the contrary, the results suggest that the influence of light should be considered in the training process. Texture, color, or gradient patterns produced under different illumination conditions are important information to extract because these patterns can be recognized by the classifier.

HOG is the least sensitive descriptor with regard to challenging illumination. HOG summarizes the gradient information into histograms grouped by direction, and these gradient directions remain constant regardless of the intensity of the light; therefore, the performance improvement of the classifiers is not large. With an LBP descriptor, it is possible to improve the classifier recall by up to 20% by performing a proper training that considers images affected by challenging illumination. In a similar way, the performance of the Viola and Jones algorithm can be improved by up to 29% by including different illumination conditions.
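The claim about gradient directions can be checked numerically: globally scaling image intensity leaves block-normalized HOG features essentially unchanged. The sketch below uses a synthetic patch, not an image from the paper's dataset:

```python
import numpy as np
from skimage.feature import hog

rng = np.random.default_rng(2)
patch = rng.random((64, 64))  # synthetic grayscale patch

# Same HOG configuration as in the experiments (8x8 cells, 2x2 blocks, 9 bins)
kwargs = dict(orientations=9, pixels_per_cell=(8, 8),
              cells_per_block=(2, 2), block_norm="L2-Hys")
f_bright = hog(patch, **kwargs)
f_dim = hog(0.3 * patch, **kwargs)   # globally dimmed version of the same patch

# Block normalization cancels the global intensity scale
print(np.allclose(f_bright, f_dim, atol=1e-4))  # True
```

Note that real low-light images also change local contrast and noise, not just global intensity, which is why the improvement from night-specific training is small for HOG rather than zero.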

Compared with other recent techniques [12, 17], which usually define all phases of LPD, our technique was tested with different texture-based descriptors and classifiers, allowing great adaptability to any ML-based algorithm. The conclusions of our study allow a correct selection of the images that comprise the training sets; therefore, our technique is compatible with many other preprocessing techniques [12–23] that try to avoid the adverse effects of meteorology and lack of lighting.

6. Limitations and Future Work

In the presented research, several texture-based descriptors were tested under the assumption that other descriptors based on the same kind of features exhibit similar behaviour. Expanding our work to include descriptors such as SURF, SIFT, FAST, or CSIFT is an interesting line of research for us. In addition, Content-Based Image Retrieval (CBIR) has attracted enormous attention over the last few years, and methods incorporating shape, spatial layout, and saliency to describe visual contents are gaining attention. Along this line, novel descriptors that incorporate various kinds of features have been developed recently; incorporating descriptors such as MTH, CDH, SED, or MSD into our work is another interesting direction.

We decided to test our classifiers using our own dataset because comparing the accuracy of the tested classifiers with the state of the art is not a goal of this paper. Instead, the objective is to assess the variation in the accuracy of each classifier when the training set is modified to include challenging illumination/weather conditions. Comparing the performance of different approaches on the widely used benchmarks, considering the conclusions extracted from this paper, is an important avenue for future research.

Data Availability

The image datasets that support the findings of this study are not publicly available because they contain information that could compromise research participant privacy. The rest of the data are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was funded by the European Union, through the European Regional Development Funds (ERDF), and the Principality of Asturias, through its Science, Technology and Innovation Plan. This work was partially funded by the Department of Science, Innovation and Universities (Spain) under the National Program for Research, Development and Innovation, project RTI2018-099235-B-I00.

References

  1. M. Molina-Moreno, I. Gonzalez-Diaz, and F. Diaz-de-Maria, “Efficient scale-adaptive license plate detection system,” IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 6, pp. 2109–2121, 2019.
  2. B.-G. Han, J. T. Lee, K.-T. Lim, and Y. Chung, “Real-time license plate detection in high-resolution videos using fastest available cascade classifier and core patterns,” ETRI Journal, vol. 37, no. 2, pp. 251–261, 2015.
  3. N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), vol. 1, pp. 886–893, San Diego, CA, USA, June 2005.
  4. T. Ojala, M. Pietikainen, and D. Harwood, “Performance evaluation of texture measures with classification based on Kullback discrimination of distributions,” in Proceedings of the 12th International Conference on Pattern Recognition, vol. 1, pp. 582–585, Jerusalem, Israel, October 1994.
  5. P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 1, pp. 511–518, Kauai, HI, USA, December 2001.
  6. X. Wang, T. X. Han, and S. Yan, “An HOG-LBP human detector with partial occlusion handling,” in Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, pp. 32–39, Kyoto, Japan, September 2009.
  7. B. Jun, I. Choi, and D. Kim, “Local transform features and hybridization for accurate face and human detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1423–1436, 2013.
  8. L. Zhang, R. Chu, S. Xiang, S. Liao, and S. Z. Li, “Face detection based on multi-block LBP representation,” in Proceedings of the International Conference on Biometrics, ICB 2007: Advances in Biometrics, pp. 11–18, Seoul, Republic of Korea, August 2007.
  9. F. Ahmed, H. Bari, and E. Hossain, “Person-independent facial expression recognition based on compound local binary pattern (CLBP),” International Arab Journal of Information Technology, vol. 11, no. 2, pp. 195–203, 2014.
  10. X. Y. Tan and B. Triggs, “Enhanced local texture feature sets for face recognition under difficult lighting conditions,” IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1635–1650, 2010.
  11. R. M. Haralick, “Statistical and structural approaches to texture,” Proceedings of the IEEE, vol. 67, no. 5, pp. 786–804, 1979.
  12. M. K. Saini and S. Saini, “Multiwavelet transform based license plate detection,” Journal of Visual Communication and Image Representation, vol. 44, pp. 128–138, 2017.
  13. V. Abolghasemi and A. Ahmadyfard, “An edge-based color-aided method for license plate detection,” Image and Vision Computing, vol. 27, no. 8, pp. 1134–1142, 2009.
  14. G. Raju and M. S. Nair, “A fast and efficient color image enhancement method based on fuzzy-logic and histogram,” AEU—International Journal of Electronics and Communications, vol. 68, no. 3, pp. 237–243, 2014.
  15. X. Xue, J. Ding, and Y. Shi, “Research and application of illumination processing method in vehicle color recognition,” in Proceedings of the 2017 3rd IEEE International Conference on Computer and Communications (ICCC), pp. 1662–1666, Chengdu, China, December 2017.
  16. Y. Wen, Y. Lu, J. Yan, Z. Zhou, K. M. von Deneen, and P. Shi, “An algorithm for license plate recognition applied to intelligent transportation system,” IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 3, pp. 830–845, 2011.
  17. D. Wang, Y. Tian, W. Geng, L. Zhao, and C. Gong, “LPR-Net: recognizing Chinese license plate in complex environments,” Pattern Recognition Letters, In press.
  18. Y.-T. Chen, J.-H. Chuang, W.-C. Teng, H.-H. Lin, and H.-T. Chen, “Robust license plate detection in nighttime scenes using multiple intensity IR-illuminator,” in Proceedings of the 2012 IEEE International Symposium on Industrial Electronics, pp. 893–898, Hangzhou, China, May 2012.
  19. K. S. Raghunandan, P. Shivakumara, H. A. Jalab et al., “Riesz fractional based model for enhancing license plate detection and recognition,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 9, pp. 2276–2288, 2018.
  20. H. Dawood, H. Dawood, and P. Guo, “Removal of high-intensity impulse noise by Weber’s law noise identifier,” Pattern Recognition Letters, vol. 49, pp. 121–130, 2014.
  21. J. Chen, S. Shan, C. He et al., “WLD: a robust local image descriptor,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1705–1720, 2010.
  22. S. Azam and M. M. Islam, “Automatic license plate detection in hazardous condition,” Journal of Visual Communication and Image Representation, vol. 36, pp. 172–186, 2016.
  23. R. Panahi and I. Gholampour, “Accurate detection and recognition of dirty vehicle plate numbers for high-speed applications,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 4, pp. 767–779, 2017.
  24. H. He, Z. Shao, and J. Tan, “Recognition of car makes and models from a single traffic-camera image,” IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 6, pp. 3182–3192, 2015.
  25. F. Delmar Kurpiel, R. Minetto, and B. T. Nassu, “Convolutional neural networks for license plate detection in images,” in Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), pp. 3395–3399, Beijing, China, September 2017.
  26. C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
  27. O. Bulan, V. Kozitsky, P. Ramesh, and M. Shreve, “Segmentation- and annotation-free license plate recognition with deep localization and failure identification,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 9, pp. 2351–2363, 2017.
  28. J. Muhammad and H. Altun, “Improved license plate detection using HOG-based features and genetic algorithm,” in Proceedings of the 2016 24th Signal Processing and Communication Application Conference (SIU), pp. 1269–1272, Zonguldak, Turkey, May 2016.
  29. M. S. Sarfraz, A. Shahzad, M. A. Elahi, M. Fraz, I. Zafar, and E. A. Edirisinghe, “Real-time automatic license plate recognition for CCTV forensic applications,” Journal of Real-Time Image Processing, vol. 8, no. 3, pp. 285–295, 2011.
  30. M. A. Khan, M. Sharif, M. Y. Javed, T. Akram, M. Yasmin, and T. Saba, “License number plate recognition system using entropy-based features selection approach with SVM,” IET Image Processing, vol. 12, no. 2, pp. 200–209, 2018.
  31. C. Gou, K. Wang, Y. Yao, and Z. Li, “Vehicle license plate recognition based on extremal regions and restricted Boltzmann machines,” IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 4, pp. 1096–1107, 2016.
  32. T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971–987, 2002.
  33. D. Huang, C. Shan, M. Ardabilian, Y. Wang, and L. Chen, “Local binary patterns and its application to facial image analysis: a survey,” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 41, no. 6, pp. 765–781, 2011.
  34. M. S. Al-Shemarry, Y. Li, and S. Abdulla, “Ensemble of adaboost cascades of 3L-LBPs classifiers for license plates detection with low quality images,” Expert Systems with Applications, vol. 92, pp. 216–235, 2018.
  35. E. Rashedi and H. Nezamabadi-pour, “A hierarchical algorithm for vehicle license plate localization,” Multimedia Tools and Applications, vol. 77, no. 2, pp. 2771–2790, 2018.
  36. R. Lienhart and J. Maydt, “An extended set of Haar-like features for rapid object detection,” in Proceedings of the International Conference on Image Processing, vol. 1, pp. 900–903, Rochester, NY, USA, September 2002.
  37. Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” Journal of Computer and System Sciences, vol. 55, no. 1, pp. 119–139, 1997.
  38. L. Zheng, X. He, B. Samali, and L. T. Yang, “An algorithm for accuracy enhancement of license plate recognition,” Journal of Computer and System Sciences, vol. 79, no. 2, pp. 245–255, 2013.
  39. R. Wang, N. Sang, R. Wang, and L. Jiang, “Detection and tracking strategy for license plate detection in video,” Optik, vol. 125, no. 10, pp. 2283–2288, 2014.
  40. M. Raja and V. Sadasivam, “Optimized local ternary patterns: a new texture model with set of optimal patterns for texture analysis,” Journal of Computer Science, vol. 9, no. 1, pp. 1–15, 2013.
  41. X. Wu, J. Sun, G. Fan, and Z. Wang, “Improved local ternary patterns for automatic target recognition in infrared imagery,” Sensors, vol. 15, no. 3, pp. 6399–6418, 2015.
  42. S. Wu, L. Yang, W. Xu, J. Zheng, Z. Li, and Z. Fang, “A mutual local-ternary-pattern based method for aligning differently exposed images,” Computer Vision and Image Understanding, vol. 152, pp. 67–78, 2016.
  43. X.-H. Han, Y.-W. Chen, and G. Xu, “Integration of spatial and orientation contexts in local ternary patterns for HEp-2 cell classification,” Pattern Recognition Letters, vol. 82, pp. 23–27, 2016.
  44. O. Barkan, J. Weill, L. Wolf, and H. Aronowitz, “Fast high dimensional vector multiplication face recognition,” in Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, December 2013.
  45. P. Dollar, C. Wojek, B. Schiele, and P. Perona, “Pedestrian detection: a benchmark,” in Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 304–311, Miami, FL, USA, June 2009.