Abstract

A promising key issue in automobile development is the self-driving technique. One of the challenges for intelligent self-driving is a lane-detecting and lane-keeping capability for advanced driver assistance systems. This paper introduces an efficient lane detection method based on a top view image transformation that converts an image from the front view to the top view space. After the top view image transformation, a Hough transformation technique is combined with a parabolic model of a curved lane in order to estimate a parametric model of the lane in the top view space. The parameters of the parabolic model are estimated by utilizing a least-square approach. The experimental results show that the newly proposed lane detection method with the top view transformation is very effective in estimating sharp and curved lanes, leading to a precise self-driving capability.

1. Introduction

In recent years, research on self-driving capabilities for advanced driver assistance systems (ADAS) has received great attention [1]. One of the key objectives of this research area is to provide safer and more intelligent functions to drivers by using electronic and information technologies. Accordingly, the development of an advanced self-driving car operating in hostile traffic environments has become a very interesting topic. In hostile road conditions, the ability to recognize and detect road signs, road lanes, and traffic lights is very important and plays a critical role in ADAS [2, 3]. The lane detection technique is used to control the self-driving car so that it keeps its lane in the designated direction, providing the driver with a more convenient and safe assistance function [2, 3].

In general, road lanes can be divided into two types of trajectories, a straight lane and a curved lane [4]. In the literature, several methods have been introduced for the lane detection process, as shown in Figure 1. However, most of those methods detect only a straight lane by using the original image obtained from a front view camera. With straight lane detection alone, only a near view road range can be recognized, which makes it difficult to recognize the road turning into a curve. In addition, when front view camera images are used as the source images for the detection process, detecting curved lanes becomes very difficult, leading to poor detection performance.

In this paper, an effective lane detection algorithm with improved curved lane detection performance is proposed based on a top view image transform approach [5–7] and a least-square estimation technique [8]. In the newly proposed method, the top view image transformation converts the original road image into a different image space, which makes the curved lane detection process effective and precise. First, a top view image is generated from the front view image by using the top view image transform technique. After the top view image transformation, the shape of a lane becomes almost the same as the real road lane with minimum distortion. Then, the transformed image is divided into two regions, a near section and a far section. In general, the road shape in the near section can be modeled as a straight lane, while the shape of the road in the far section is described with either a straight line model or a curved lane model [4, 9]. Therefore, in the near section, a straight line is detected with a Hough transform method [10, 11], whereas in the far section a polynomial (parabolic) curved lane model is used to find the correct shape of the lane and its parameters are estimated by using a least-square method. Finally, the near and far section models are combined, which leads to the construction of a realistic road profile used in ADAS. Figure 2 shows the flow of the proposed top view based lane detection algorithm in detail.

The remainder of the paper is organized as follows. In Section 2, the principle of the top view transformation is explained in detail. Section 3 describes how the straight line profile in the near section is found with the Hough transformation approach. In Section 4, a precise curved lane detection algorithm for the far image section is designed by using a parabolic lane model whose parameters are estimated with a least-square method. Finally, in Section 5, realistic experiments are carried out in order to verify the effectiveness and performance of the proposed method.

2. Top View Image Transformation

Top view image transformation is a very effective advanced image processing method. Some researchers have used the top view transformation approach to detect obstacles and even to measure distances to objects. The shape of an object standing on the road is distorted in the top view transformed image, whereas a lane marking or a sign painted on the road surface appears almost the same as the real lane and sign (Figure 5). Therefore, using the top view image transformation is very effective for lane detection and supports advanced and safe lane-keeping and control capabilities.

Figure 3 shows the basic principle of the top view transformation, where the real camera view is transformed into a virtual position with a direct top view angle. In order to determine the transformation relationship between the front view image and the top view image, some key parameters have to be computed first. Figure 4 illustrates the geometry of the top view transformed virtual image, which is defined by the vertical view angle, the horizontal view angle, the height at which the camera is located, and the tilt angle of the camera.

Figure 4 shows the geometry of the top view transformed image, where the height at which the camera is located is measured in metric units. Since the generated top view image is a digital image, this height has to be converted from metric units into pixels. Therefore, an inversion coefficient that transforms metric data into pixel data has to be found first. The width of the front view image is proportional to the width of the top view image field illustrated in Figures 3 and 4, respectively, and from this relation the coefficient can be determined by (1). The height of the camera expressed in pixel data is then calculated by (2).

According to the geometrical description shown in Figure 4, for each point on the front view image, the corresponding sampling point on the top view image can be calculated by using (3), (4), and (5). Equation (3) gives the angle that depends on the position of the point, (4) computes one coordinate of the sampling point in the top view image, and (5) computes the other coordinate, again in terms of an angle that depends on the position of the point. Then, the color data is copied from the computed position in the camera image to the corresponding position in the top view image according to (6).

Now, a more effective lane detection process can be carried out on the top view transformed image. The transformed image is divided into two sections, a near view section and a far view section. In the near view section, a straight line model combined with a Hough transformation is used to find the linear lane, while in the far view section a parabolic model is adopted for curved lane detection and its parameters are estimated by utilizing a least-square approach.
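As an illustration of this step, the following is a minimal sketch of a front view to top view warp. It does not reproduce the analytic mapping of (1)–(6); instead it uses OpenCV's perspective warp between four hand-picked road points, which is a common practical substitute. The function name, the example point values, and the output size are assumptions made only for illustration.

```python
# Minimal sketch of a front-view to top-view (bird's-eye) warp using OpenCV.
# The paper derives the mapping analytically from the camera height and tilt
# angle (equations (1)-(6)); here, for illustration only, the mapping is given
# by four hand-picked point correspondences on the road plane.
import cv2
import numpy as np

def to_top_view(front_img, src_pts, dst_size=(400, 600)):
    """Warp a front-view road image into a top-view image.

    front_img : input BGR image from the front camera.
    src_pts   : four points (trapezoid) on the road plane in the front view,
                ordered top-left, top-right, bottom-right, bottom-left.
    dst_size  : (width, height) of the generated top-view image in pixels.
    """
    w, h = dst_size
    dst_pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(front_img, M, (w, h))

# Example usage (the source points depend on the camera height and tilt angle):
# top = to_top_view(frame, [(300, 250), (340, 250), (600, 470), (40, 470)])
```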

3. Straight Line Detection with Hough Transform

In the near view image, a straight line detection algorithm is formulated by using a standard Hough transformation. The Hough transform method searches for lines using the normal line parameterization ρ = x·cos θ + y·sin θ, as can be seen in Figure 6.

It is necessary to choose the longest straight line from the lines detected by the Hough transformation. The applied Hough transformation returns the coordinates of the starting point and the ending point of each detected line, as can be seen in Figure 7.

Now, the straight line model is defined, and the parameters of the linear road model are calculated by using the starting and ending coordinates obtained at the boundary of the near section image. Equation (7) gives the straight line model used for the linear road detection, y = mx + n, where m is the slope and n is the intercept of the linear detection model. It is noted that the parameters m and n used in the linear line detection model are used again in the curved line detection process in the far view image space.
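As an illustration, a minimal sketch of this near-section step using OpenCV's probabilistic Hough transform is given below: candidate segments are detected, the longest one is selected, and the slope m and intercept n of the straight line model are computed from its endpoints. All thresholds shown are illustrative assumptions, not values taken from the paper.

```python
# Sketch of the near-section straight line detection: a probabilistic Hough
# transform returns candidate segments, the longest one is kept, and the
# slope m and intercept n of the line y = m*x + n are computed from its
# endpoints. Threshold values below are illustrative only.
import cv2
import numpy as np

def detect_near_line(top_view_gray):
    edges = cv2.Canny(top_view_gray, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=40, minLineLength=40, maxLineGap=10)
    if segments is None:
        return None
    # Keep the longest detected segment.
    x1, y1, x2, y2 = max((s[0] for s in segments),
                         key=lambda s: (s[2] - s[0]) ** 2 + (s[3] - s[1]) ** 2)
    if x2 == x1:                      # vertical segment: slope undefined
        return None
    m = (y2 - y1) / (x2 - x1)         # slope of the straight line model
    n = y1 - m * x1                   # intercept
    return m, n, (x1, y1), (x2, y2)
```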

4. Curved Line Detection

4.1. Curved Line Detection Based on Parabolic Model

In the far view image, curved line detection is necessary, and the previously obtained parameters of the straight line model are used again. Since the curved line is modeled as a continuous curve starting right after the straight line, it shares a common boundary condition with it, as can be seen in Figure 8.

At the common boundary point, the value of the straight line equation is equal to the value of the parabolic curved line equation, as expressed in (8), where f is the parabolic model used for the curved line detection. The derivative of f at the boundary point is also equal to the slope of the straight line, and the differential values are calculated by (10). Note that the m and n parameters are already obtained from the Hough transformation in the previous section, so it remains to compute the a, b, and c parameters of the curved parabolic model. From (10), the b and c parameters can be expressed in terms of a, and substituting these values back into (8) leaves a relation in which only the parameter a is undefined and needs to be resolved. Therefore, in order to find the parameter value a, it is first required to find all the white points beyond the boundary point in the curved line section, as can be seen in Figure 9.
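For concreteness, a minimal derivation sketch of this boundary step is given below, under the assumed notation that the near-section line is y = mx + n, the parabolic model is f(x) = ax² + bx + c, and x_b denotes the common boundary point; the exact forms of the paper's equations (8)–(12) may differ in notation.

```latex
% Notation assumed for illustration: near-section line y = m x + n,
% parabola f(x) = a x^2 + b x + c, common boundary point x_b.
% Value and slope continuity at x_b fix b and c in terms of a.
\begin{aligned}
  f(x)     &= a x^{2} + b x + c, \qquad f'(x) = 2 a x + b,\\
  f'(x_b)  &= m \;\Longrightarrow\; b = m - 2 a x_b,\\
  f(x_b)   &= m x_b + n \;\Longrightarrow\; c = m x_b + n - a x_b^{2} - b x_b = n + a x_b^{2}.
\end{aligned}
```

With b and c expressed through a in this way, only a remains unknown, which is why the white points of the curved section are collected next.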

Then, the coordinates of all the white points are used to determine the parameter a. Figure 10 shows the sequence of finding the white points.
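A minimal sketch of this step is shown below. It assumes the notation introduced above (line y = mx + n, boundary point x_b) and computes a single least-square value of a from all white points; since the paper's per-point relation (13) and main equation (14) are not reproduced here, this is one plausible self-consistent reading rather than the authors' exact formulation.

```python
# Sketch of estimating the remaining parabola coefficient a from the white
# points of the curved (far) section. With b and c already expressed through a
# by the boundary conditions, each white point (x_i, y_i) satisfies
#   y_i - (m*x_i + n) = a * (x_i - x_b)**2,
# so a single least-square value of a can be computed from all points.
import numpy as np

def estimate_a(white_xy, m, n, x_b):
    """white_xy : array of shape (N, 2) with the white point coordinates."""
    x, y = np.asarray(white_xy, dtype=float).T
    residual = y - (m * x + n)          # deviation from the extended straight line
    basis = (x - x_b) ** 2              # curvature basis beyond the boundary point
    denom = np.sum(basis ** 2)
    if denom == 0.0:
        return 0.0                      # all points lie on the boundary column
    return float(np.sum(residual * basis) / denom)
```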

Each (x, y) coordinate of a white point has a specific relation with the value of a, and (13) shows this relationship. Based on this relation, the main equation is formulated in (14). Finally, the value of the parameter a is computed by using all of these values. The effectiveness of the proposed parabolic model approach for curved line detection is shown in Figure 11. As can be seen, the curved line and the linear line match perfectly at their boundary. However, the parameterized curved model computed in the far view section is not perfectly aligned with the original curved line, because the parameters used in the parabolic model carry some bias and error. In order to compensate for this misalignment of the curved line in the far image section, an effective estimation technique is utilized in the next section.

4.2. Curved Line Detection Based on Least-Square Method

In the previous section, the parameters of the parabolic model were computed by using the white points in the curved line section. In this section, in order to increase the accuracy of the computed curved line parameters, an effective least-square estimation technique that uses all of the given data is integrated. First, the least-square problem is formulated from the white point data; (15) forms the linear matrix equation with the matrix A. Since all the data are given, the matrix A is calculated easily. After the computation of the matrix, the a, b, and c parameters of the curved parabolic line model are obtained from the least-square solution. Figure 12 shows the curved line detection result obtained with the least-square method. The detected curved line matches the original white line, but the boundary points of the linear line are not aligned well. Thus, the boundary conditions also need to be matched when the least-square method is used.
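A minimal sketch of this unconstrained least-square fit is shown below: each white point contributes one row [x_i², x_i, 1] to the matrix A, and the parameters (a, b, c) are obtained from a standard least-square solver. The row layout of A is an assumption consistent with the parabolic model; the paper's exact matrix form in (15) is not reproduced.

```python
# Sketch of the unconstrained least-square fit of Section 4.2: every white
# point (x_i, y_i) of the far section contributes one row [x_i^2, x_i, 1] to a
# design matrix A, and the parameters (a, b, c) of y = a*x^2 + b*x + c are
# obtained by solving min ||A p - y||^2.
import numpy as np

def fit_parabola_lsq(white_xy):
    x, y = np.asarray(white_xy, dtype=float).T
    A = np.column_stack([x ** 2, x, np.ones_like(x)])   # design matrix
    params, *_ = np.linalg.lstsq(A, y, rcond=None)       # least-square solution
    a, b, c = params
    return a, b, c
```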

4.3. Integration of Parabolic Model and Least-Square Method

It is noted that the parabolic model approach and the least-square method each have their own advantages and disadvantages in the curved line detection step. These observations lead to a new curved line detection methodology that integrates the two methods into an effective and precise curved line detection technique. In the new technique, the parabolic detection approach and the least-square method are combined by recomputing the parameters of the curved line model as in (18). As can be seen in (18), the parameters obtained by each detection method are averaged, which results in a more precise curved line detection performance, as can be seen in Figure 13, where the green line is the result of the integrated method. The integrated line is not only aligned with the original white line but also satisfies the boundary conditions of the linear line model.
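The integration step itself is straightforward; a minimal sketch is given below, assuming that (18) denotes simple per-parameter averaging as described above.

```python
# Sketch of the integration step: the parameters from the boundary-constrained
# parabolic model and from the unconstrained least-square fit are combined by
# simple averaging, following the description of (18).
def integrate_parameters(params_parabolic, params_lsq):
    return tuple((p + q) / 2.0 for p, q in zip(params_parabolic, params_lsq))

# Example: a, b, c = integrate_parameters((a1, b1, c1), (a2, b2, c2))
```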

5. Experiment Results

In this section, realistic road experiments are carried out. In the experiments, 10 images containing both straight and curved lines are used. Example results are shown in Figures 14 to 24. In addition, for the performance check, error plots measured in pixel units are given in Figures 20, 21, and 28.

5.1. Experiment Results Number 1

See Figures 14–21.

5.2. Experiment Results Number 2

See Figures 22–29.

The newly proposed detection algorithm requires 0.5–2 s per detection; the required computational time depends on the image size, tilt angle, and camera height. About 80% of this processing time is spent on the top view image transformation. If either a GPU or an FPGA processor were utilized for the top view image transformation, the processing time for the line detection could be reduced further. In future work, we will use a GPU or an FPGA processor for the top view transformation.

The most important advantage of the newly proposed curved line detection algorithm lies in the fact that the parameter values used in the line detection can be computed precisely, which results in more robust ADAS performance. Specifically, if the parameter value of a is greater than zero, the road is turning left; if it is less than zero, the road is turning right; and if it is around zero, the road is straight. The test results indicate that the new algorithm makes its application to the self-driving car more effective.
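As a small illustration of this interpretation, the sign of a can be mapped to a turning direction as follows; the tolerance used to decide "around zero" is an assumed value, not one taken from the paper.

```python
# Sketch of interpreting the curvature parameter a as reported in the paper:
# a > 0 -> road turning left, a < 0 -> road turning right, a ~ 0 -> straight.
# The tolerance EPS is an illustrative value only.
EPS = 1e-4

def road_direction(a):
    if a > EPS:
        return "left turn"
    if a < -EPS:
        return "right turn"
    return "straight"
```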

6. Conclusion

In this paper, an effective lane detection method is proposed by using the top view image transformation approach. In order to detect a precise line over the entire lane in the transformed image, the top view image is divided into two sections, a near image and a far image. In the near image section, straight line detection is performed by using the Hough transformation, while, in the far image section, an effective curved line detection method is proposed by integrating an analytic parabolic model approach and the least-square estimation method in order to precisely compute the parameters of the curved line model. Experiments are carried out to verify the newly proposed hybrid detection method. The results show that the curved shape of the white lines after the top view image transformation matches the real road's white lines almost perfectly. The proposed integrated lane detection method can be applied not only to self-driving car systems but also to advanced driver assistance systems in smart cars.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) (no. 2014-063396) and also was supported by the Human Resource Training Program for Regional Innovation and Creativity through the Ministry of Education and National Research Foundation of Korea (no. 2014-066733).