Advances in Mechanical Engineering
Volume 2014 (2014), Article ID 847406, 13 pages
http://dx.doi.org/10.1155/2014/847406
Research Article

Automatic Parking Based on a Bird’s Eye View Vision System

1Research Institute of Robotics, Shanghai Jiao Tong University, Shanghai 200240, China
2Department of Automation, Shanghai Jiao Tong University and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240, China
3College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha 410073, China
4Toyota Central R&D Labs., Inc., Nagakute 4801192, Japan

Received 1 September 2013; Revised 12 December 2013; Accepted 27 December 2013; Published 31 March 2014

Academic Editor: Heiner Bubb

Copyright © 2014 Chunxiang Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper aims at realizing an automatic parking method based on a bird's eye view vision system. With this method, vehicles can detect and recognize parking spaces robustly and in real time. During the parking process, omnidirectional information about the environment is obtained from four on-board fisheye cameras mounted around the vehicle, which form the main part of the bird's eye view vision system. To achieve this, a polynomial fisheye distortion model is first used for camera calibration. An image mosaicking method based on the Levenberg-Marquardt algorithm then combines the four individual fisheye images into one omnidirectional bird's eye view image. Secondly, features of the parking spaces are extracted with a Radon transform based method. Finally, double circular trajectory planning and a preview control strategy are used to realize autonomous parking. Experimental results show that the proposed method achieves effective and robust real-time performance in both parking space recognition and automatic parking.

1. Introduction

With population growth and the economic development of modern society, more and more people own cars. As a result, traffic jams occur more often than ever before. Worse still, in increasingly crowded cities, parking has become one of the most troublesome problems for drivers. People often waste a lot of time searching for free parking spaces, and parking a car in a limited space can be quite challenging. In this context, research on parking assistance systems (PAS) and automatic parking systems has become one of the hotspots in the field of intelligent vehicles. J. D. Power’s 2001 Emerging Technology Study shows that over 66% of consumers are likely to purchase parking assistance systems [1].

There are several categories of parking assistance systems [2]. The most prevalent approach is to use sensor-based techniques, such as laser scanners, ultrasonic radars, and vision sensors. Laser scanners have high stability and accuracy, but they are expensive, have a short service life, and are easily affected by rain and snow. Ultrasonic and short-range radars are low cost, long lived, and small. However, their accuracy is low and their detection range is short, so they cannot be applied in the vertical parking mode. Vision sensors, such as cameras, are low cost and long lived, and their precision is fairly high. In addition, they can provide real-time visual assistance and rich image information to drivers [3]. However, vision sensors perform poorly in dark conditions without additional light sources.

As a main trend for parking assistance systems, vision-based systems are promising, and many researchers and companies have developed systems using cameras. However, it is very difficult to maneuver a vehicle using a single camera, since blind spots are inevitable in complicated conditions, such as narrow alleys and reverse parking. In this paper, a low-cost bird’s eye view vision system is constructed by installing four fisheye cameras around the vehicle to provide an image covering all the vehicle surroundings.

Generally, an automatic parking system consists of three components: path planning, including free parking space detection; an automatic steering and braking system used to implement the planned trajectory; and an HMI (Human Machine Interface) that provides information (such as visual and audio feedback) about the ongoing parking process [4].

To find free parking spaces, various vision methods have been proposed, which can be classified into three categories. Some recognize adjacent vehicles by using the 3D structure of parking lots [5, 6]. Some detect the parking space markings [7, 8]. The others recognize both adjacent vehicles and parking space markings [4, 9]. For example, Fintzel et al. developed a stereovision-based method for parking lots [5]. Xiu et al. developed monocular vision-based parking lot marking recognition using neural networks [7]. The proposed method belongs to the second category.

The Hough transform is often used for detecting the line markings in parking spaces [8, 10–12]. However, the Hough transform is not robust to noise, clutter, and variations in illumination and weather conditions when detecting parallel line pairs [13]. Furthermore, in a parking assistance system, the wide-view images of the vehicle surroundings often cover multiple parking spaces, and almost all parking spaces are parallelograms. Under the influence of image noise, the Hough transform is inefficient in detecting multiple parallelograms simultaneously, compared with the Radon transform [14, 15]. In this paper, we employ the Radon transform rather than the Hough transform to enhance robustness and detection accuracy. We also introduce clustering and filtering [16] to improve robustness against challenges such as shadows.

The remainder of this paper is organized as follows. Section 2 introduces the bird’s eye view vision system based on four fisheye cameras. Section 3 describes the details of parking space detection based on the Radon transform. Section 4 introduces the methods of path planning and path tracking for automatic parking. The experimental results obtained in real scenes, which substantiate the effectiveness and robustness of the proposed method, are given in Section 5. Finally, the conclusion is drawn in Section 6.

2. Bird’s Eye View Vision System

The bird’s eye view vision system used in the proposed method is based on four fisheye cameras, each with a 180-degree field of view, mounted around the vehicle. The system consists of two phases: calibration and image mosaicking.

2.1. Calibration

The camera imaging model is a geometric mapping from three-dimensional space to two-dimensional pixel space. Camera calibration determines the parameters of this mapping, that is, the camera parameters, which can be divided into intrinsic parameters and extrinsic parameters.

In the real calibration process, we adopted the planar calibration method proposed by Zhang [17]. The original images captured from the fisheye cameras are distorted; see Figure 1. In order to achieve high accuracy, the following steps are performed. Firstly, the four fisheye cameras are calibrated using a chessboard to obtain their intrinsic and extrinsic parameters, respectively. Secondly, a method based on a polynomial distortion model [18] is used to correct each camera’s distortion. Lastly, according to the inverse perspective mapping (IPM) [19] from the image coordinate system to the world coordinate system, the undistorted images are transformed to IPM images in the same ground plane by using the extrinsic parameters. Furthermore, the nonlinear Levenberg-Marquardt optimization algorithm [20] is used to optimize the camera parameters.
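As a rough illustration of this pipeline (not the authors' exact implementation), the following Python/OpenCV sketch calibrates one camera with a chessboard and undistorts an image. The board layout, square size, and file paths are assumptions; OpenCV's calibrateCamera implements Zhang's planar method with a polynomial (rational) distortion model and refines all parameters with Levenberg-Marquardt internally, loosely corresponding to the steps described above.

```python
import glob
import cv2
import numpy as np

# Chessboard inner-corner layout and square size are assumptions.
PATTERN = (9, 6)
SQUARE_SIZE = 0.025  # meters

# 3D corner positions in the board frame (Z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points = [], []
for path in glob.glob("calib/front/*.png"):   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Zhang's planar calibration; the rational model adds higher-order
# polynomial distortion terms, refined by Levenberg-Marquardt.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None,
    flags=cv2.CALIB_RATIONAL_MODEL)

# Remove the lens distortion from one image with the estimated parameters.
undistorted = cv2.undistort(cv2.imread("calib/front/0001.png"), K, dist)
```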

Figure 1: The calibration images from different cameras.

From experiments we found that the above procedure obtains comparatively ideal results; the more control points are selected, the more accurate the intrinsic parameters and the better the calibration results. In this paper, 20 control points are selected on each image, and the final results, including the calibration effects as well as the intrinsic and extrinsic parameters, are shown in Figure 2 and Tables 1 and 2, respectively.

Table 1: The calibration result of camera intrinsic parameters.
Table 2: The calibration result of camera distortion coefficients.
Figure 2: The distortion-removed images from the different cameras.
2.2. Image Mosaicking

In order to obtain omnidirectional information, the four perspective images are combined into one synthesized image. We first apply a joint calibration between the vehicle coordinate system and the camera coordinate systems to determine the inverse perspective transformation parameters. Then, the Levenberg-Marquardt algorithm [20] is applied to optimize the inverse perspective parameters by minimizing the error of feature points. In addition, to improve image quality after the inverse perspective transformation, a bilinear interpolation algorithm and a white balance procedure are introduced. An example after inverse perspective transformation and an example of the final bird’s eye view image are shown in Figures 3 and 4, respectively.
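A minimal sketch of the warping step is shown below, assuming hypothetical point correspondences between an undistorted camera image and the ground plane. The Levenberg-Marquardt refinement of the IPM parameters over feature-point error is omitted; cv2.findHomography's least-squares fit stands in for it here, and bilinear interpolation is selected for the warp as in the text.

```python
import cv2
import numpy as np

def ipm_homography(img_pts, ground_pts):
    """Homography mapping undistorted image points to bird's eye
    (ground plane) coordinates from >= 4 point correspondences."""
    H, _ = cv2.findHomography(np.float32(img_pts), np.float32(ground_pts))
    return H

# Hypothetical correspondences from the vehicle-camera joint calibration:
# pixel positions of ground markers and their positions in the mosaic,
# scaled so that 1 px = 2 cm (the scale used in Section 5.4).
H_front = ipm_homography(
    [(102, 355), (530, 350), (80, 470), (560, 468)],
    [(150, 40), (350, 40), (150, 140), (350, 140)])

canvas = np.zeros((500, 500, 3), np.uint8)    # shared bird's eye canvas
front = cv2.imread("front_undistorted.png")   # hypothetical input image

# Warp with bilinear interpolation to improve image quality.
warped = cv2.warpPerspective(front, H_front, (500, 500),
                             flags=cv2.INTER_LINEAR)

# Paste the region this camera is responsible for into the canvas;
# the other three views are merged the same way.
mask = warped.sum(axis=2) > 0
canvas[mask] = warped[mask]
```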

Figure 3: The calibration result between the cameras and the car.
Figure 4: Bird’s eye view vision system. The four surrounding images are captured from the front, left, right, and back fisheye cameras mounted on the CyberC3 vehicle. The image in the center is the bird’s eye view image synthesized from the four fisheye images.

3. Parking Space Detection

3.1. Overview

Generally, the parking space is located on one side of the moving vehicle. Therefore, in order to improve the real-time performance of automatic parking, only the fisheye cameras on the parking space side are used to detect the free parking space, while all of the cameras are used when parking spaces surround the car. Subsequently, the IPM (inverse perspective mapping) images are used as the input for free parking space detection. Furthermore, in this paper, we assume that roads are flat. Since the visual field of the fisheye cameras is small, usually within 2 meters of the vehicle, this assumption generally holds.

3.2. Radon Transform

The Radon transform is named after the Austrian mathematician Johann Karl August Radon (December 16, 1887–May 25, 1956). Applying the Radon transform to an image for a given set of angles can be regarded as computing the projection of the image along those angles. The resulting projection is a line integral, that is, the sum of the intensities of the pixels in each direction. In other words, the line integral value is the projection of the image along the direction $\theta$, as shown in Figure 5.

Figure 5: Radon transform.

The formulation of the Radon transform is as follows:

$$R(\rho, \theta) = \iint f(x, y)\,\delta(\rho - x\cos\theta - y\sin\theta)\,dx\,dy,$$

where $\delta(\cdot)$ is the Dirac delta function, $(\rho, \theta)$ is the parameter space of the Radon space, and $R(\rho, \theta)$ is the value of the Radon space at the point $(\rho, \theta)$.
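As a minimal illustration of the transform itself (not the authors' implementation), the following Python sketch uses scikit-image's radon function, whose output is exactly the set of line integrals $R(\rho, \theta)$ defined above; the toy image and angle grid are assumptions.

```python
import numpy as np
from skimage.transform import radon

# Toy edge image containing one bright line; in the real system the
# input is the Canny edge image with gray values preserved.
img = np.zeros((200, 200))
img[100, 40:160] = 1.0            # a single horizontal line

theta = np.arange(0.0, 180.0)     # projection angles in degrees
sinogram = radon(img, theta=theta)  # R(rho, theta): line integrals

# The line appears as a single bright spot in Radon space.
rho_idx, theta_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
print("line direction ~", theta[theta_idx], "degrees")
```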

Both the Radon transform and the Hough transform map the two-dimensional image plane to the parameter space defined by $(\rho, \theta)$. Therefore, for both transforms, any line in the image plane has a corresponding point in the parameter space.

However, the Hough transform quantizes the parameter space into many small cells, each of which is an accumulator. Points in the image plane are accumulated in the corresponding cells through the parameter transform. After the transform, all the cells are examined, and the values in the accumulators reflect the evidence for the detected lines. Consequently, the detection result is easily influenced by noise, such as illumination changes and shadows, and the detection accuracy and robustness are not very high.

The Radon transform is a mapping from the image plane to the parameter space itself rather than to discrete quantized cells. In Radon space, the mapping value corresponding to a detected line in the image plane is the line integral value, that is, the projection of the pixels’ intensities in the image along each direction $\theta$. This projection value is closely related to the pixel intensities of the detected line. Therefore, the Radon transform has better noise tolerance, robustness, and accuracy in detecting lines with gray information, which is why it is employed here to detect the free parking space.

Figure 6 shows the comparison between the image transformation results derived from the Hough transform and the Radon transform, respectively. From the comparison, we can see that the features in Radon space, that is, the light spots in Figure 6(c), are apparently more notable than those in Hough space, and the number of light spots in Radon space is equal to the number of lines in the edge image. Therefore, we propose a method for detecting the free parking space based on this merit.

Figure 6: Comparison between the Hough and Radon transforms. (a) The edge image with gray information. (b) Hough space. (c) Radon space.

In China, parking line markings are usually either white or yellow lines, both of which have high intensity in the G channel of the RGB (Red, Green, Blue) color model. In this paper, the G channel is therefore used as the gray image for edge detection. The Canny edge detector is used to obtain the edge image from the gray image. Meanwhile, the intensity values of the edge points are preserved in the edge image. This is very important for making good use of the above-mentioned feature of the Radon transform, and it ensures the high accuracy and robustness of the proposed system against noise such as shadows or obstacles, compared with a system using the Hough transform.
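A short sketch of this preprocessing, under the assumptions of an OpenCV pipeline and hand-picked Canny thresholds, might look as follows; the key step is keeping the original gray value at each edge pixel instead of a binary 0/1 map.

```python
import cv2
import numpy as np

bgr = cv2.imread("ipm_view.png")   # hypothetical IPM input image
g = bgr[:, :, 1]                   # G channel (OpenCV stores BGR)

# Canny produces a binary edge map; the thresholds are assumptions.
edges = cv2.Canny(g, 50, 150)

# Preserve the gray intensity at edge pixels so the subsequent Radon
# transform integrates real intensities rather than 0/1 values.
edge_gray = np.where(edges > 0, g, 0).astype(np.float64)
```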

Furthermore, the parking space orientations can be obtained by voting in the angle histogram based on the line integral values in Radon space along each direction $\theta$ in the range $[0^\circ, 180^\circ)$; see Figure 7. From the figure we can see that, since there is a fixed angle between the parking space marking lines, the corresponding angle histogram has two main peaks, and the angle between these two peaks matches that fixed angular relation. Therefore, we can obtain the parking space orientations from the angle histogram.
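Continuing the sketch above, the angle histogram can be formed by summing the Radon space over $\rho$ for each $\theta$; this is one plausible reading of the voting scheme, not the authors' exact code.

```python
import numpy as np
from skimage.transform import radon

# 'edge_gray' is the intensity-preserving edge image from the
# previous sketch.
theta = np.arange(0.0, 180.0)
sinogram = radon(edge_gray, theta=theta, circle=False)

# Angle histogram: each direction votes with its summed line
# integral values over all rho.
angle_hist = sinogram.sum(axis=0)
principal_deg = theta[int(np.argmax(angle_hist))]
```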

Figure 7: The angle histogram.

However, under the influence of adjacent vehicle disturbance or noise, such as uneven light or the shadows of trees, several main peaks may exist in one angle histogram, and under severe noise the peaks of the real parking space marking lines may be submerged; see the third row of Figure 7. Therefore, the method described above must be modified. In this paper, we use the fixed angular relation between the principal and secondary directions of the parking space: the angle histogram is superimposed on a copy of itself shifted by this fixed angle, and the principal and secondary directions are then marked. The modified angle histogram is shown in Figure 8. From the figure, we can see that the accuracy and reliability of the modified angle histogram are very high.
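A tiny continuation of the previous sketch illustrates the superposition; here a 90-degree relation between the two directions is assumed, while the paper only requires it to be a known fixed angle.

```python
import numpy as np

# 'angle_hist' is the 180-bin angle histogram from the previous sketch.
DELTA = 90  # assumed angle between principal and secondary directions
combined = angle_hist + np.roll(angle_hist, DELTA)

# Bins where both direction peaks reinforce each other dominate.
principal = int(np.argmax(combined)) % 180
secondary = (principal + DELTA) % 180
```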

Figure 8: The modified angle histogram.

However, the real parking space marking lines do not necessarily follow a strictly perpendicular or fixed angular relation, and errors usually exist. Therefore, in order not to lose generality, we take the neighborhoods within a certain range around the principal and secondary directions as regions of interest, respectively.

3.3. Features Extraction

Vehicle parking space markings are usually constructed from colored parallel line segment pairs with fixed widths. Under the Radon transform, these lines are mapped to bright spots (BS) in the Radon space. As shown in Figure 9, they are in one-to-one correspondence.

Figure 9: (a) The parking space marking lines in the inverse perspective image. (b) The corresponding bright spot pairs of these lines in Radon space.

It should be noted that the parallel line segment pairs in the edge image produce BS pairs in the Radon space. The two BS points in one pair have the same angle $\theta$ and are separated by a fixed distance in $\rho$ corresponding to the line width. Based on this feature, a method to detect the center of the BS pair in the Radon space is proposed. Considering the influence of noise, clutter, and vehicles in the edge image, it is necessary to take the area around the BS as the region of interest. The detector structure is shown in Figure 10.

Figure 10: The structure of the BS pair detector. The yellow marks indicate the BS pair in Radon space.

HBS denotes the higher bright spot of the pair and LBS the lower one. PD is the pixel distance in $\rho$ between the two bright spots, corresponding to the fixed line width mapped from the line pair:

$$V(\rho, \theta) = k \left( R_{\mathrm{HBS}} + R_{\mathrm{LBS}} \right),$$

where $V(\rho, \theta)$ is the temporary value of the designed detector, $R(\rho, \theta)$ is the line integral value in the Radon space at $(\rho, \theta)$, and $R_{\mathrm{HBS}} = R(\rho + \mathrm{PD}/2, \theta)$ and $R_{\mathrm{LBS}} = R(\rho - \mathrm{PD}/2, \theta)$ are the line integral values in the Radon space at the two bright spots of the pair, respectively. $k$ is a factor determined by the extent of the difference between $R_{\mathrm{HBS}}$ and $R_{\mathrm{LBS}}$: the more similar the two values, the larger $k$, so that responses produced by a single strong edge are suppressed.
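The exact detector formula cannot be fully recovered from the source, so the following Python sketch implements the reconstructed form above, with an assumed ratio form for the factor $k$ and an assumed neighborhood size; it scans each $\theta$ column of the Radon space for spot pairs separated by PD.

```python
import numpy as np

def bs_pair_response(sinogram, pd, win=3):
    """Detector response for bright-spot pairs separated by 'pd'
    pixels along rho. A sketch of the reconstructed detector; the
    exact k factor and neighborhood handling are assumptions."""
    n_rho, n_theta = sinogram.shape
    resp = np.zeros_like(sinogram)
    half = pd // 2
    for t in range(n_theta):
        col = sinogram[:, t]
        for r in range(half + win, n_rho - half - win):
            # Local maxima near the expected spot positions tolerate
            # small line-width errors (see the paragraph below).
            hi = col[r + half - win: r + half + win + 1].max()
            lo = col[r - half - win: r - half + win + 1].max()
            # k shrinks when the two spots differ strongly, which
            # suppresses single-edge responses (assumed form of k).
            k = min(hi, lo) / (max(hi, lo) + 1e-9)
            resp[r, t] = k * (hi + lo)
    return resp
```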

However, the real line pairs of the parking space markings do not have a strictly constant width. In order to avoid the influence of line width errors, $R_{\mathrm{HBS}}$ and $R_{\mathrm{LBS}}$ are taken as the local maxima of their neighborhoods near the BS along the $\rho$ direction.

Finally, in order to extract the center of the bright spot pair, we limit the temporary value to the range $[0, 1]$ by the following normalization:

$$P(\rho, \theta) = \frac{V(\rho, \theta)}{\max V},$$

where $P$ is the probability that $(\rho, \theta)$ is the center of a BS pair. The higher $P$ is, the more likely a parking space exists there. With the parking space orientation known, a set of candidate feature points for the possible parking space line segment pairs can be obtained. The detection result is shown in Figure 11.

Figure 11: The center detection result of the BS pairs of parking space lines. (a) The candidate points detected by the proposed method in Radon space. (b) The red lines are the detection result of these points mapped back to image space.

It should be noted that a number of factors, such as the car body lines, shadows, and the wear of the line markings, may often affect the detection accuracy, as shown in Figure 12.

Figure 12: The body lines of a parked car interfere with detection in Radon space. (a) The points marked in the yellow area are the interfering points in Radon space. (b) The lines marked in the blue area are the interfering body lines of the car.

In order to get accurate parking space marking lines, some measures to remove the noise factors should be taken.

3.4. Clustering and Filtering

The line segment pairs of parking space markings are not always strictly parallel in the real world. To address this, a local maximum search along the $\theta$ direction is performed near the candidate points. Furthermore, although most candidate points in the set belong to parking space line segments, this alone cannot handle the challenges caused by shadows or the body lines of parked cars; see Figure 12. Therefore, we first perform the k-means clustering algorithm and then filter the interfering points of the set by using the geometric shape features of the parking space, as sketched below. Finally, the center points of the BS pairs of the parking lines are fixed in the set.
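One plausible realization of this clustering-and-filtering step, with toy data and assumed thresholds (the cluster count and the geometric test are not specified in the source), is shown here using scikit-learn's k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

# Candidate (rho, theta, score) points from the detector; toy data
# standing in for the real Radon-space candidates.
cands = np.array([[120, 31, 0.90], [122, 30, 0.80],
                  [240, 31, 0.85], [55, 120, 0.20]])

# Cluster the candidates; n_clusters would come from the expected
# number of marking lines in view (assumption).
labels = KMeans(n_clusters=3, n_init=10).fit_predict(cands[:, :2])
centers = np.array([cands[labels == i, :2].mean(axis=0) for i in range(3)])

# Filter clusters that violate the parking space geometry: keep a
# center only if some other center sits roughly one slot width away
# in rho (SLOT_PD and TOL are assumed values).
SLOT_PD, TOL = 120, 15
keep = [c for c in centers
        if any(abs(abs(c[0] - o[0]) - SLOT_PD) < TOL
               for o in centers if not np.allclose(c, o))]
```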

The final detection result is shown in Figure 13, which shows that the proposed combination of the designed line detector, clustering, and filtering can effectively and accurately detect parking spaces in various scenes. Furthermore, the proposed method demonstrates high robustness against challenges such as shadows.

Figure 13: Result of parking space detection after clustering and filtering. (a) The detected points are the centers of the BS pairs of the parking space marking lines in Radon space. (b) The red lines are the detected points mapped to the image coordinate system. (c) The green rectangles are the detected parking spaces.

Figure 14 shows that the proposed method performs well in different parking space detection scenes.

Figure 14: Parking space detection experiments of the proposed method in different scenes.
3.5. Empty Parking Space Extraction

The method described above detects both empty and occupied parking spaces. Since drivers are usually only concerned with empty parking spaces when parking, the empty ones must be extracted.

After parking space detection, the principal direction can be used for image rotation calibration; the effect after calibration is shown in Figure 15.

Figure 15: Principal direction correction.

After image angle calibration, a given parking space is taken as the object of study. Considering accuracy, robustness, and real-time performance, a quarter of the parking space depth is taken as the region of interest (see Figure 16), and the image content of the region of interest is taken as the feature for deciding whether a parking space is empty or occupied. The ratios of the image area (image pixel count) to the areas of the regions of interest P1 and P2 (total pixel counts in each region) are defined as $r_1$ and $r_2$, respectively, and a high threshold $T_{\mathrm{high}}$ and an average threshold $T_{\mathrm{avg}}$ are set. Firstly, compare $r_1$ with $T_{\mathrm{high}}$. If $r_1 > T_{\mathrm{high}}$, consider this parking space occupied and go to the next parking space; otherwise, go to region of interest P2. Secondly, compare $r_2$ with $T_{\mathrm{high}}$. If $r_2 > T_{\mathrm{high}}$, consider this parking space occupied and go to the next parking space; otherwise, calculate the average ratio $\bar{r} = (r_1 + r_2)/2$. Finally, compare $\bar{r}$ with $T_{\mathrm{avg}}$. If $\bar{r} > T_{\mathrm{avg}}$, consider this parking space occupied and go to the next parking space; otherwise, consider this parking space empty.
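The three-step decision reads naturally as a small function; the threshold values below are assumptions, and the pixel-ratio feature follows the reconstruction of the text above.

```python
def is_occupied(roi1, roi2, t_high=0.12, t_avg=0.08):
    """Decide whether a parking space is occupied from two ROIs,
    each a binary (0/1) image of a quarter of the space depth.
    Thresholds are assumed values, not taken from the paper."""
    r1 = roi1.mean()                 # pixel ratio in ROI P1
    if r1 > t_high:
        return True                  # clearly occupied, skip P2
    r2 = roi2.mean()                 # pixel ratio in ROI P2
    if r2 > t_high:
        return True
    r_avg = (r1 + r2) / 2.0          # combined evidence of both ROIs
    return r_avg > t_avg
```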

Figure 16: The region of interest for recognizing the free space.

4. Path Planning and Path Tracking

In this section, the method of path planning and tracking for vertical automatic parking based on the detection results of the free parking space is described.

4.1. Path Planning

Based on the detection results described above, the position information of the parking space is available. A path planning method is then needed for automatic parking. To date, two path planning methods are commonly used for parking vehicles: the single circular trajectory based method and the double circular trajectory based method.

The traditional single circular trajectory based path planning method can be divided into three parts: a straight line for the first part, a circular arc for the second part, and another straight line for the third part. However, this method is always constrained by the size of the parking space, obstacles, the available maneuvering space, and so forth, which can lead to failure of the automatic parking. Considering these defects, the traditional double circular trajectory based path planning method offers substantial improvement. Yet this method does not take into account the initial position of the car relative to the target parking space. As a result, it cannot plan a parking trajectory that can be easily controlled. Under these circumstances, we propose an improved double circular trajectory based path planning method.

Figure 17 shows the details of this method. The XOY coordinate system is the world coordinate system. P1 and P2 are the entrance guidance points obtained from the parking space detection result in the image coordinate system by using the inverse perspective mapping. Correspondingly, WD is the parking space depth. The planned path is based on a double circular trajectory, which has three switch points A, B, and C:
(a) before point A, drive the car along the straight path until it reaches point A;
(b) in the circular arc AB, drive the car with a certain steering angle for the circular motion;
(c) in arc BC, reverse the car with another steering angle for the circular motion until it reaches point C;
(d) after point C, drive the car along the straight path.

Figure 17: Parking path based on a double circular trajectory.

In order to improve the parking performance in a continuous operation, it is important to minimize the turning radius to fit the parking space. However, the smaller the turning radius is, the more likely the car is to hit neighboring vehicles. Therefore, the turning radius is calculated under the constraint of the geometric relation shown in Figure 18:

$$R_{1} = R_{\min} = \frac{L}{\tan \varphi_{\max}},$$

where $L$ is the wheelbase of the car, $W$ is the width of the car, and $\varphi_{\max}$ is the maximal steering wheel angle. $R_{1}$ is the turning radius of the first turn; here, it is the minimum turning radius of the vehicle. $R_{2}$ is the turning radius of the second turn, $d$ is the distance from the car coordinate center to the line between the parking space entrance guidance points, and $S$ indicates the whole space used by the car during the parking procedure. According to the above formulation, the positions of the switch points and the steering wheel angle can be obtained by setting an appropriate $d$.
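The original formula could not be recovered from the source; the relation above is the standard bicycle-model expression consistent with the definitions given, and the following sketch evaluates it under assumed vehicle parameters.

```python
import math

def min_turning_radius(wheelbase, max_steer_deg):
    """Minimum turning radius of the rear-axle center under a
    bicycle model; a standard relation consistent with the text,
    not necessarily the paper's exact formula."""
    return wheelbase / math.tan(math.radians(max_steer_deg))

L = 1.6          # wheelbase in meters, assumed for the CyberC3
PHI_MAX = 30.0   # maximal equivalent front-wheel angle, assumed
R1 = min_turning_radius(L, PHI_MAX)   # first-turn radius = R_min
print(f"R1 = {R1:.2f} m")
```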

Figure 18: Geometric relation in path planning. (a) Path planning. (b) The minimum turning radius.
4.2. Path Tracking

Path tracking mainly addresses how to determine the steering wheel angle and speed at each moment so that the vehicle follows the planned path. Generally, the speed of the vehicle is low during the whole parking process. Therefore, we only consider tracking of the steering wheel angle of the vehicle.

We employ the preview follow based on the PID (proportional-integral-derivative) strategy since it is simple and efficient for the vehicle control [21].

The principle is shown in Figure 19. A preview point is taken at a certain distance in front of the vehicle. The distance between the preview point and the target path is then taken as the input of the PID controller. However, this method often responds slowly and is difficult to control accurately when the curvature of the path is large. To overcome this problem, we propose an improved method: a PD controller with feedforward for vehicle control.

Figure 19: Improved method for circular motion.

Preview point A uses a typical straight-line preview method to obtain the minimum distance $d_A$ between point A and the target circular path. Subsequently, preview point B is estimated by dead reckoning (DR) from the current pose of the vehicle along the circular motion. Correspondingly, $d_B$ is the distance obtained with the same method as $d_A$.

The feedforward term is the steering wheel angle calculated from the previous formulation. The final steering wheel angle is thus calculated as follows:

$$\delta(t) = \delta_{\mathrm{ff}} + K_{p}\,e(t) + K_{d}\,\big(e(t) - e(t-1)\big),$$

where $\delta_{\mathrm{ff}}$ is the feedforward steering wheel angle, $e(t)$ is the preview distance error at the current moment $t$, $e(t-1)$ is the error at the previous moment, and $K_{p}$ and $K_{d}$ are the parameters of the PD controller.
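A minimal controller sketch following the reconstructed formula above is given below; the gains, the feedforward value, and the error sign convention are assumptions.

```python
class PreviewPDSteering:
    """Feedforward + PD steering from the preview distance error.
    Gains and sign conventions are assumed, not from the paper."""

    def __init__(self, kp, kd, delta_ff):
        self.kp, self.kd = kp, kd
        self.delta_ff = delta_ff   # steering angle from planned radius
        self.prev_e = 0.0

    def step(self, e):
        # e: signed distance between preview point and target path
        d_e = e - self.prev_e      # discrete derivative term
        self.prev_e = e
        return self.delta_ff + self.kp * e + self.kd * d_e

ctrl = PreviewPDSteering(kp=0.8, kd=0.3, delta_ff=15.0)
steer = ctrl.step(e=0.05)          # one control cycle
```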

5. Experiments and Results

5.1. Experimental Platform

The proposed system was implemented in our experimental platform based on the CyberC3 [22, 23], which has four fisheye cameras mounted around the vehicle. The angle encoder for measuring the steering wheel angle and the odometer encoder are also installed on the platform. Details are shown in Figure 20.

Figure 20: The CyberC3 vehicle with the four-fisheye-camera system. The cameras are mounted on the front, right side, left side, and back of the car.
5.2. Detection Accuracy

Figure 21 shows that the proposed method based on the Radon transform is more robust than those based on the Hough transform in noisy environments. Figure 22 shows that the Radon transform achieves good performance in detecting multiple parking spaces simultaneously. This was also verified in [13] for detecting rectangles and [15] for parallelograms.

Figure 21: The robustness of the Radon transform in detecting parking spaces is better than that of the Hough transform under the same conditions, including the same detector, clustering, and filtering. (a) Ground-crack noise in the edge image affects the detected points in Hough space. (b) The noise has no influence on the detection in Radon space.
Figure 22: The Radon transform achieves higher accuracy than the Hough transform under the same conditions, including the same detector, clustering, and filtering. (a) The detection of parking spaces in Hough space. (b) The detection of parking spaces in Radon space.

In the experiments, a total of 2626 frames were used to compare the performance of the proposed method against that of the Hough-space-based method under the same conditions. The comparison results are shown in Table 3.

Table 3: Comparison of the parking space detection of the proposed method with that based on Hough space.

The precision and recall used in Table 3 are computed as follows:

$$\mathrm{precision} = \frac{TP}{TP + FP}, \qquad \mathrm{recall} = \frac{TP}{TP + FN},$$

where the true positive count $TP$ is the number of correct detections of parking spaces, the false positive count $FP$ is the number of false detections, and the false negative count $FN$ is the number of missed detections.

From the experimental results we can see that the proposed method performs robustly and accurately despite the challenges posed by shadows and other vehicles.

5.3. Empty Parking Space Extraction

Figure 23 shows the experimental results of empty parking space extraction. In this figure, the red rectangles are occupied parking spaces, while the green ones indicate empty parking spaces.

Figure 23: Experiment result of free parking space.

In the empty parking space extraction experiment, we took 327 images of different environments; the quantitative evaluation results are shown in Table 4. In this table, the false positive detection number is the number of parking slots wrongly indicated as empty when they are not.

Table 4: The statistical result of empty parking space extraction.

Based on the experiment results, we can see that the empty parking space extraction method, which is based on regions of interest, has very high accuracy.

5.4. Automatic Parking Simulation Experiment

In order to evaluate the proposed path planning and path tracking methods, a simulation of the path planning was executed in MATLAB. The scale is determined by the inverse perspective image: the coordinates are transformed to the inverse perspective image coordinates, where one pixel represents 2 cm in real-world coordinates. Furthermore, the proposed path tracking simulation was executed in TORCS (The Open Racing Car Simulator).

From the simulation results (see Figure 24), it can be seen that the proposed path planning produces a good trajectory for automatic parking. Furthermore, the improved path tracking method is faster and more accurate than the traditional pure PID method.

Figure 24: The simulation result of the proposed method for path planning and path tracking.

6. Conclusion

In this paper, a low-cost bird’s eye view vision assistance system with four fisheye cameras has been developed, which provides the surrounding view of the host vehicle. The system can rectify the images captured by the fisheye cameras and mosaic them into a bird’s eye view image in real time. Furthermore, a method for detecting the free parking space based on the Radon transform has been proposed. The detector for the centers of bright spot pairs operates in the Radon space. By using clustering and filtering based on the shape features of the parking space, the effects of noise can be alleviated effectively. In particular, we compared the performance of the proposed system against that based on the Hough transform in the experiments. The experimental results show that the proposed method is more accurate and robust in detecting the free parking space. Finally, a simulation of the path planning and path tracking was executed to evaluate the proposed method for automatic parking.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the General Program of National Natural Science Foundation of China (61174178/51178268), the Major Research Plan of National Natural Science Foundation of China (91120018/91220301), and National Magnetic Confinement Fusion Science Program (2012GB102002).

References

  1. R. Frank, “Sensing in the ultimately safe vehicle,” in Proceedings of the Convergence Conference and Exhibition, 2004.
  2. H. Ichihashi, K. Miyagishi, and K. Honda, “Fuzzy c-means clustering with regularization by K-L information,” in Proceedings of the 10th IEEE International Conference on Fuzzy Systems, pp. 924–927, December 2001.
  3. R. Zhang, P. Ge, X. Zhou, T. Jiang, and R. Wang, “A method for vehicle-flow detection and tracking in real-time based on Gaussian mixture distribution,” Advances in Mechanical Engineering, vol. 2013, Article ID 861321, 8 pages, 2013.
  4. H. G. Jung, D. S. Kim, P. J. Yoon, and J. H. Kim, “3D vision system for the recognition of free parking site location,” International Journal of Automotive Technology, vol. 7, no. 3, pp. 361–367, 2006.
  5. K. Fintzel, R. Bendahan, C. Vestri, S. Bougnoux, and T. Kakinami, “3D parking assistant system,” in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 881–886, June 2004.
  6. J. K. Suhr and H. G. Jung, “Sensor fusion-based vacant parking slot detection and tracking,” IEEE Transactions on Intelligent Transportation Systems, no. 99, pp. 1–16, 2013.
  7. J. Xiu, G. Chen, M. Xie, et al., “Vision-guided automatic parking for smart car,” in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 725–730, 2000.
  8. G. J. Ho, S. K. Dong, J. Y. Pal, and K. Jaihie, “Parking slot markings recognition for automatic parking assist system,” in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 106–113, June 2006.
  9. C.-C. Huang and S.-J. Wang, “A hierarchical Bayesian generation framework for vacant parking space detection,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 12, pp. 1770–1785, 2010.
  10. P. Degerman, J. Pohl, and M. Sethson, “Hough transform for parking space estimation using long range ultrasonic sensors,” SAE Technical Paper 2006-01-0810, 2006.
  11. J. K. Suhr, K. Bae, J. Kim, and H. G. Jung, “Free parking space detection using optical flow-based Euclidean 3D reconstruction,” in Proceedings of the International Conference on Machine Vision Applications, pp. 563–566, 2007.
  12. C. R. Jung and R. Schramm, “Parallelogram detection using the tiled Hough transform,” in Proceedings of the 13th International Conference on Systems, Signals and Image Processing, pp. 177–180, 2006.
  13. H. Bhaskar, N. Werghi, and S. Al Mansoori, “Combined spatial and transform domain analysis for rectangle detection,” in Proceedings of the 13th Conference on Information Fusion, pp. 1–7, July 2010.
  14. Q. Zhang and I. Couloigner, “Comparing different localization approaches of the Radon transform for road centerline extraction from classified satellite imagery,” in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 138–141, August 2006.
  15. H. Bhaskar and N. Werghi, “Comparing Hough and Radon transform based parallelogram detection,” in Proceedings of the IEEE Conference and Exhibition (GCC '11), pp. 641–644, February 2011.
  16. Y. Zheng, C.-W. Yeh, C.-D. Yang, S.-S. Jang, and I.-M. Chu, “On the local optimal solutions of metabolic regulatory networks using information guided genetic algorithm approach and clustering analysis,” Journal of Biotechnology, vol. 131, no. 2, pp. 159–167, 2007.
  17. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000.
  18. C. Ricolfe-Viala and A.-J. Sanchez-Salmeron, “Lens distortion models evaluation,” Applied Optics, vol. 49, no. 30, pp. 5914–5928, 2010.
  19. M. Yang, C. Wang, F. Chen, B. Wang, and H. Li, “A new approach to high-accuracy road orthophoto mapping based on wavelet transform,” International Journal of Computational Intelligence Systems, vol. 4, no. 6, pp. 1367–1374, 2011.
  20. J. More, “Levenberg-Marquardt algorithm: implementation and theory,” in Proceedings of the Conference on Numerical Analysis, pp. 1–28, Dundee, UK, 1977.
  21. D. Yu and Y.-H. Wu, “Modeling of driver controlling at car-following based on preview follower theory,” in Proceedings of the International Conference on Intelligent Computation Technology and Automation (ICICTA '08), pp. 636–641, October 2008.
  22. T. Xia, M. Yang, R. Yang, and C. Wang, “CyberC3: a prototype cybernetic transportation system for urban applications,” IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 1, pp. 142–152, 2010.
  23. H. Fang, M. Yang, R. Yang, and C. Wang, “Ground-texture-based localization for intelligent vehicles,” IEEE Transactions on Intelligent Transportation Systems, vol. 10, no. 3, pp. 463–468, 2009.