Mathematical Problems in Engineering
Volume 2014, Article ID 531681, 13 pages
http://dx.doi.org/10.1155/2014/531681
Research Article

Research of Obstacle Recognition Technology in Cross-Country Environment for Unmanned Ground Vehicle

State Key Laboratory of Structural Analysis for Industrial Equipment, School of Automotive Engineering, Dalian University of Technology, Dalian, Liaoning 116023, China

Received 29 April 2014; Revised 18 June 2014; Accepted 19 June 2014; Published 10 July 2014

Academic Editor: Rui Mu

Copyright © 2014 Zhao Yibing et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Aimed at the obstacle recognition problem of unmanned ground vehicles in a cross-country environment, this paper uses a monocular vision sensor to recognize typical obstacles. Firstly, a median filtering algorithm is applied during image preprocessing to eliminate noise. Secondly, an image segmentation method based on the Fisher criterion function is used to segment the region of interest. Then, morphological methods are used to process the segmented image in preparation for the subsequent analysis. Next, color features and the edge feature "verticality" are extracted based on the HSI color space, the Lab color space, and the binary image. Finally, a multifeature fusion algorithm based on Bayes classification theory is used for obstacle recognition. Test results show that the algorithm has good robustness and accuracy.

1. Introduction

The unmanned ground vehicle (UGV) has been a research hotspot, and it has broad application prospects in military, civil, scientific research, and other fields [1]. The working environment of UGVs has expanded from indoor environments to the more complex cross-country environment. In order to drive safely and quickly and finish its task in an unknown environment, a UGV must be able to quickly and accurately detect all kinds of obstacles in the environment and identify their categories; namely, it must have the ability of obstacle detection and recognition. This provides the decision basis for path planning, so that an optimal path can then be planned according to a certain evaluation standard [2].

For the obstacle detection and recognition of UGVs in cross-country environments, domestic and foreign researchers have done a lot of research. Manduchi et al. put forward an obstacle detection method using the disparity map based on stereo vision, and test results proved that the method has good robustness [3]. Manduchi et al. also designed a remote obstacle detection system based on ultrasonic radar, which has good detection range and precision and can thus improve the environmental awareness of a UGV [4]. Hu and Wu used a laser rangefinder to realize obstacle detection and recognition, which can quickly detect the static and dynamic obstacles in the vehicle driving environment [5]. Huihai et al. combined machine vision and ultrasonic sensors to detect obstacles, and test results show that the obstacle detection algorithm is effective and practical [6]. Yanmin et al. proposed an obstacle detection algorithm based on stereo vision and laser radar, which can separate high grass from other obstacles (such as trunks and stones) [7]. According to the optical flow model, Zhao et al. proposed an obstacle detection method based on the optical flow field [8].

Various kinds of sensors are used for obstacle detection and recognition, and each has advantages and disadvantages. The principle and structure of a visual sensor are similar to those of the human sensory organs, and a machine vision sensor has the advantages of small volume, low cost, and convenient installation [9–11]. Compared with other sensors, because it performs passive measurement, its concealment is better, and it also has a wide detection range and provides rich information [12]. Based on the requirements of the research topic and the hardware, the main work of this paper is to research obstacle detection and recognition technology based on monocular vision in a cross-country environment.

This paper mainly researches the detection and recognition of trunks and shrubs, where shrubs are divided into high shrubs and dwarf shrubs. The types of obstacles researched in this paper are shown in Figure 1.

Figure 1: The type of obstacles studied in this paper.

The main contents of this paper include the following aspects.

Chapter 1 is the introduction; it presents the research status of unmanned vehicle obstacle detection and recognition technology and the meaning and main contents of this paper.

Chapter 2 covers obstacle detection; it introduces an obstacle detection method based on monocular visual images, including image preprocessing, image segmentation, and morphological image processing.

Chapter 3 covers feature extraction; it introduces feature extraction methods based on the HSI color space, the Lab color space, and the binary image.

Chapter 4 covers obstacle recognition; it introduces an obstacle recognition method based on Bayes classification theory and calculates the parameters of the Bayesian classifier by processing a large number of training samples.

Chapter 5 describes the test platform and the relevant conclusions.

2. Monocular Vision-Based Obstacle Detection Methods

2.1. Image Preprocessing Based on Median Filtering Method

In the process of image acquisition, because of differences in the acquisition environment, such as the degree of illumination and the equipment performance, the original image often contains noise and low contrast, which reduce the image quality. So image preprocessing is necessary before image segmentation, feature extraction, and pattern recognition [13, 14].

Median filtering is a nonlinear filtering method; it was first used to process one-dimensional data and was gradually applied to two-dimensional images. The template shape and size have a great influence on the filtering effect [15]. When the template size increases, the noise filtering effect improves, but the details of the image become increasingly blurred. So, according to the image quality and the actual requirements, different shapes and sizes of template should be selected to achieve the best possible noise filtering without losing the image edge details [16].

The main idea of the median filtering method is to make an abrupt pixel value take a value similar to its neighborhood, so as to eliminate the isolated noise points in the image. The specific steps of the method are as follows.

(1) Move the template (3 × 1, 1 × 3, or 3 × 3) to traverse the whole image, with the template center coinciding with the position of one pixel.
(2) Sort all of the pixel values under the template.
(3) Find the middle value in this sequence.
(4) Assign the middle value to the pixel corresponding to the center of the template.
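The steps above can be sketched as follows. This is a minimal NumPy sketch for the 3 × 3 template; the function name `median_filter` and the edge-replication padding are illustrative choices, not from the paper:

```python
import numpy as np

def median_filter(img, size=3):
    """Slide a size x size template over the image and replace each
    pixel with the median (middle value) of the pixels under it."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")   # replicate borders
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + size, x:x + size]
            out[y, x] = np.median(window)    # middle of sorted pixel values
    return out

# A single salt-noise pixel in a flat region is removed:
img = np.full((5, 5), 10, dtype=np.uint8)
img[2, 2] = 255
filtered = median_filter(img)
```

In practice a library routine (e.g. `scipy.ndimage.median_filter`) would replace the explicit loop; the loop is kept here only to mirror steps (1)–(4).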

In this paper, median filtering method is used to eliminate the noise interference of original image, and the example is shown in Figure 2.

Figure 2: The comparison diagram before and after median filtering.
2.2. The Improved Image Segmentation Method Based on Fisher Criterion Function

Image segmentation is one of the most important and basic steps in image processing, and it is also a key technology in pattern recognition, image analysis, and image understanding. Its main objective is to separate objects from a complex environment, and the quality of image segmentation has a great influence on target recognition [17].

Threshold-based methods are the simplest and most effective among image segmentation methods. Image segmentation based on a criterion function is one of the most widely applied approaches; its basic idea is that, for each candidate threshold, the criterion function value is calculated, and the threshold at which the criterion function reaches its maximum (or minimum) value is selected as the best threshold. The classical criterion-function-based segmentation methods include the minimum error criterion method, the Otsu method, and the maximum entropy method [18, 19]. Later, the Fisher criterion function was applied to image segmentation, and it is used in this paper. Its basic principle is as follows.

Assume the gray values are distributed over the range [0, L − 1] and that the image contains only the objective (foreground) and the background. Let p_i (i = 0, 1, …, L − 1) be the normalized histogram and t a candidate threshold. The prior probabilities of the objective and the background are, respectively,

P0(t) = Σ_{i=0}^{t} p_i  and  P1(t) = Σ_{i=t+1}^{L−1} p_i.

The mean values of the objective and the background are

μ0(t) = (1/P0(t)) Σ_{i=0}^{t} i·p_i  and  μ1(t) = (1/P1(t)) Σ_{i=t+1}^{L−1} i·p_i.

The variances of the objective and the background are

σ0²(t) = (1/P0(t)) Σ_{i=0}^{t} (i − μ0(t))²·p_i  and  σ1²(t) = (1/P1(t)) Σ_{i=t+1}^{L−1} (i − μ1(t))²·p_i.

The Fisher criterion function J(t) is expressed by

J(t) = P0(t)P1(t)(μ0(t) − μ1(t))² / (P0(t)σ0²(t) + P1(t)σ1²(t)).

According to the above formula, the optimal threshold is

t* = arg max_{0 ≤ t ≤ L−1} J(t).
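Assuming the standard form of the Fisher criterion, J(t) = P0·P1·(μ0 − μ1)² / (P0·σ0² + P1·σ1²), the exhaustive threshold search can be sketched as follows (function and variable names are illustrative):

```python
import numpy as np

def fisher_threshold(gray, levels=256):
    """Search the threshold t that maximizes the Fisher criterion
    J(t) = P0*P1*(mu0 - mu1)^2 / (P0*sigma0^2 + P1*sigma1^2)."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()                      # normalized histogram
    i = np.arange(levels)
    best_t, best_j = 0, -np.inf
    for t in range(1, levels - 1):
        P0, P1 = p[:t].sum(), p[t:].sum()
        if P0 == 0 or P1 == 0:
            continue
        mu0 = (i[:t] * p[:t]).sum() / P0       # mean below threshold
        mu1 = (i[t:] * p[t:]).sum() / P1       # mean above threshold
        var0 = (((i[:t] - mu0) ** 2) * p[:t]).sum() / P0
        var1 = (((i[t:] - mu1) ** 2) * p[t:]).sum() / P1
        denom = P0 * var0 + P1 * var1
        if denom == 0:
            continue
        j = P0 * P1 * (mu0 - mu1) ** 2 / denom
        if j > best_j:
            best_j, best_t = j, t
    return best_t

# Two gray-level clusters (around 10-12 and 198-200) get split between them:
gray = np.array([10] * 40 + [12] * 10 + [198] * 10 + [200] * 40)
t = fisher_threshold(gray)
```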

The image segmentation method based on the Fisher criterion function uses only the gray information, while color information is not used. So the improved method first converts the image to another color space, such as the HSI or Lab color space, and then applies the Fisher criterion function method to the resulting component image. The process of this method is shown in Figure 3.

Figure 3: The flowchart of improved Fisher segmentation method.

The effect of improved image segmentation method based on Fisher criterion function is shown in Figure 4.

Figure 4: The results of improved Fisher segmentation method.

In Figure 4, comparing (a) and (c) shows that the objective region has been segmented and the interference is obviously reduced, which is convenient for subsequent processing. Otherwise, the large interference would affect the shape of the objective region and the edge information, which would have a great influence on the final obstacle recognition results.

2.3. Image Morphology

After image segmentation, the region of interest has been segmented, but at the same time part of the region of noninterest is also segmented, which may influence the subsequent image analysis. So this paper adopts the outlier elimination method, the small areas elimination method, and the holes filling method to eliminate this interference.

2.3.1. Outlier Elimination Method

There will be some isolated spots in the segmented image; although their influence on image recognition is not large, they should be eliminated. The main idea of the outlier elimination method is to process the neighborhood of a single pixel, such as the 4-neighborhood or 8-neighborhood. Its specific steps are as follows.

(1) Move a 3 × 3 template to traverse the whole image and read the values of all the pixels under the template.
(2) Calculate the average of these gray values.
(3) Compare the average value with a threshold.
(4) If the average value is less than the threshold, the pixel value is set to 0 (background); otherwise, it is set to 1 (objective).
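The steps above can be sketched for a binary image as follows; the function name and the default threshold of 0.5 are illustrative assumptions, not values from the paper:

```python
import numpy as np

def remove_outliers(binary, thresh=0.5):
    """Set each pixel to 0 (background) or 1 (objective) according to
    the mean of its 3x3 neighbourhood, removing isolated spots."""
    pad = np.pad(binary.astype(float), 1, mode="constant")
    out = np.zeros_like(binary)
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            mean = pad[y:y + 3, x:x + 3].mean()   # step (2): neighbourhood average
            out[y, x] = 0 if mean < thresh else 1  # steps (3)-(4)
    return out

# A single isolated objective pixel is cleared (its 3x3 mean is 1/9):
b = np.zeros((5, 5), dtype=int)
b[2, 2] = 1
cleaned = remove_outliers(b)
```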

In this paper, the results of the outlier elimination method are shown in Figure 5; (b) shows that the isolated spots have been eliminated.

Figure 5: The results of outlier elimination.
2.3.2. Small Areas Elimination Method

Although the isolated points have been eliminated, many interference areas still remain, so eliminating outliers alone is not enough. This paper uses the small areas elimination method to eliminate the larger interference areas. The main idea of this method is that, according to the area of each connected region, the regions whose area is less than a certain threshold are eliminated. The specific steps of the small areas elimination method are as follows.

(1) Count all connected regions in the image.
(2) Calculate the area of each connected region.
(3) Find the largest area among all the connected regions, and set it as the threshold.
(4) Set all pixel values in every connected region whose area is smaller than the threshold to 0.
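With the largest area as the threshold, steps (1)–(4) amount to keeping only the largest connected region. A sketch using SciPy's connected-component labeling (an implementation choice, not the paper's code):

```python
import numpy as np
from scipy import ndimage

def keep_largest_region(binary):
    """Label the connected regions, measure their areas, and keep only
    the largest one; all smaller regions are cleared to 0."""
    labels, n = ndimage.label(binary)              # step (1): count regions
    if n == 0:
        return binary
    areas = ndimage.sum(binary, labels, index=range(1, n + 1))  # step (2)
    largest = int(np.argmax(areas)) + 1            # step (3): biggest region's label
    return (labels == largest).astype(binary.dtype)  # step (4)

# An 8-pixel region survives, a 1-pixel region is removed:
b = np.zeros((6, 6), dtype=int)
b[0:4, 0:2] = 1
b[5, 5] = 1
cleaned = keep_largest_region(b)
```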

In this paper, the results of the small areas elimination method are shown in Figure 6, which shows that the smaller connected regions in (a) have been eliminated and only the region of interest remains in (b).

Figure 6: The results of small areas elimination.
2.3.3. The Holes Filling Method

Because of uneven illumination, the presence of noise, and other factors, there will be holes in the segmented image, which make the region of interest discontiguous. So this paper uses the holes filling method to fill the holes; its main idea is to set all the pixels within a connected region to a specified gray value. The specific steps of this method are as follows.

(1) Scan the whole image in a particular order (left to right, top to bottom).
(2) In each scanning line, when a pixel value changes from 1 to 0, mark the point; continue scanning until the pixel value changes from 0 to 1, and mark this point too.
(3) After scanning the whole image, set the pixel values between each pair of marker points, in order from left to right, to 1.
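A simplified row-wise sketch of steps (1)–(3): within each scan line, every run of 0s enclosed by 1s on both sides is filled, which is equivalent to filling between the outermost 1s of the row. Note that a purely scan-line method can also fill concavities that are open vertically; the function name is illustrative:

```python
import numpy as np

def fill_holes_scanline(binary):
    """Row-by-row hole filling: in each scan line, 0-runs enclosed by
    1s on both sides (1->0 then 0->1 transitions) are set to 1."""
    out = binary.copy()
    for row in out:                        # each scan line, top to bottom
        ones = np.flatnonzero(row == 1)
        if ones.size >= 2:
            row[ones[0]:ones[-1] + 1] = 1  # fill between outermost marks
    return out

# The hole in the middle of a solid 3x3 block is filled:
ring = np.zeros((5, 5), dtype=int)
ring[1:4, 1:4] = 1
ring[2, 2] = 0
filled = fill_holes_scanline(ring)
```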

In this paper, the results of the holes filling method are shown in Figure 7; (b) shows that the holes in (a) have been effectively filled, without causing the loss of the shape features of the region of interest.

Figure 7: The results of the holes filling.

3. Monocular Vision-Based Feature Extractions of Obstacle Method

Monocular vision-based features include color, texture, and shape (edge, width, and height). Color and texture are internal features, so the segmented image and the original image should be combined to extract them. Shape is an external feature, so it can be extracted directly from the segmented image. This paper uses the HSI and Lab color spaces to extract color features and uses the binary image to extract the edge feature [20].

3.1. Color Feature of Obstacle Extraction

The HSI color space, which reflects human perception of color, was first proposed by Munsell; hue, saturation, and intensity are used to describe the characteristics of color [21]. In the HSI color space model, hue and saturation are independent concepts, so the model is well suited to image processing and analysis. Since hue represents the color wavelength, saturation represents the shade of the color, and intensity represents the reflection effect, hue and saturation are related to the image color information, while intensity is not [22].

The Lab color space was released by the CIE in 1976, and it describes human visual perception of color numerically. Three components, L, a, and b, are used to describe the characteristics of color. The L component represents luminosity and does not contain color information; the other components, a and b, carry the color information [23]. The Lab color space has the advantage of a wide color gamut, so almost all colors within human visual perception can be represented by the Lab color space model. In addition, it makes up for the uneven color distribution of the RGB color space model [24].

The RGB images were converted to the HSI and Lab color spaces, and then this paper counted many saturation (S) values, as shown in Figure 8(a), and Lab a-component values, as shown in Figure 8(b), of trunks, high shrubs, and dwarf shrubs. The horizontal axis represents the index of each obstacle sample, and the vertical axis represents the statistical characteristic value. The red dot represents the statistical values of trunks, the green plus sign (+) represents the statistical values of high shrubs, and the blue asterisk (∗) represents the statistical values of dwarf shrubs. As can be seen from Figures 8(a) and 8(b), the S-feature can be used to separate trunks from the other obstacles, and the a-feature can be used to separate high shrubs from the other obstacles.
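The paper does not give its color-space conversion formulas; assuming the common HSI definition S = 1 − 3·min(R, G, B)/(R + G + B), the per-pixel saturation feature can be sketched as follows:

```python
import numpy as np

def hsi_saturation(rgb):
    """Saturation component of the HSI model for an (H, W, 3) RGB array:
    S = 1 - 3 * min(R, G, B) / (R + G + B)."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1)
    total[total == 0] = 1e-9                     # avoid division by zero
    return 1.0 - 3.0 * rgb.min(axis=-1) / total

# A pure red pixel is fully saturated; a gray pixel has S = 0:
px = np.array([[[255, 0, 0], [128, 128, 128]]], dtype=np.uint8)
s = hsi_saturation(px)
```

The statistical S-values in Figure 8(a) would then be, e.g., the mean of this map over the segmented obstacle region.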

Figure 8: The statistical diagram of color feature.
3.2. Edge Feature of Obstacle Extraction

A binary image has only the pixel values 0 and 1, and the segmented image is already a binary image. In order to extract the edge information of the image, this paper uses an edge detection method and a boundary representation method for further processing.

In edge detection, noise immunity and edge localization are contradictory: enhancing one reduces the other. The main idea of the Canny operator is to search for the best trade-off between noise immunity and edge localization, so that both are as strong as possible.

In this paper, the result of the edge detection method based on the Canny operator is shown in Figure 9; Figure 9(b) shows that the edge has been fully extracted as a continuous single-pixel edge, which benefits the subsequent boundary representation. So the results show that this method is very effective.

Figure 9: The results of edge detection based on Canny operator.

The chain code was proposed by Freeman in 1961, and it is a representation of boundary points. Its main idea is to represent a boundary by a sequence of connected line segments, each with a specific length and direction [25]. Only the starting point needs to be represented by its coordinates, while each subsequent point is represented by a direction. Storing a direction requires less space than a coordinate, so representing a boundary with a chain code instead of coordinates can greatly reduce the amount of data. The commonly used chain codes include the 4-direction, 6-direction, 8-direction, and 16-direction chain codes, as shown in Figure 10.

Figure 10: The common forms of chain code.

This paper uses the 16-direction chain code and counts many "verticality" values of trunks, high shrubs, and dwarf shrubs, as shown in Figure 11. Verticality is defined as the ratio of the number of vertical directions to the total number of directions; that is, it is the proportion of vertical directions. The horizontal axis represents the index of each obstacle sample, and the vertical axis represents the statistical characteristic value. The red dot represents the statistical values of trunks, the green plus sign (+) represents the statistical values of high shrubs, and the blue asterisk (∗) represents the statistical values of dwarf shrubs. As can be seen from Figure 11, the "verticality" feature can be used to separate trunks from dwarf shrubs.
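Given a boundary already encoded as a chain code, the verticality ratio is a direct count. The sketch below assumes a numbering in which code 0 points east and codes increase counter-clockwise, so in a 16-direction code the vertical directions are 4 (north) and 12 (south); the paper does not state its numbering:

```python
def verticality(chain, directions=16):
    """Fraction of chain-code steps that point in the two vertical
    directions, e.g. codes 4 and 12 in a 16-direction code."""
    vertical = {directions // 4, 3 * directions // 4}   # {4, 12} for 16 dirs
    return sum(1 for c in chain if c in vertical) / len(chain)

# A mostly vertical boundary (such as a trunk edge) scores high:
trunk_like = [4, 4, 12, 4, 12, 4, 0, 4]
v = verticality(trunk_like)
```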

Figure 11: The statistical diagram of verticality.

4. Obstacle Recognition Methods Based on Bayes Classification Theory

The flowchart of statistical pattern recognition system is shown in Figure 12, and it can be seen that the recognition system includes two parts: the training stage and the recognition stage. The training stage is used to analyze known samples, so as to formulate the classification criterion, which is the basis of obstacle recognition of unknown samples. The recognition stage is used to achieve the classification and recognition of unknown samples.

Figure 12: The flowchart of statistical pattern recognition system.
4.1. Bayes Classifier
4.1.1. The Theory of Bayes Classifier

The main idea of the Bayes classifier is to minimize the classification error rate under the given conditions; namely, the error rate is set as the basis of classification and recognition [26]. Suppose there are c classes ω1, ω2, …, ωc; for a feature value x of a sample, the Bayes formula is

P(ωi | x) = p(x | ωi)P(ωi) / p(x),  i = 1, 2, …, c,  (6)

where, according to the total probability formula,

p(x) = Σ_{j=1}^{c} p(x | ωj)P(ωj).

In formula (6), P(ωi) represents the prior probability, which is the possibility of each class occurring without considering any conditions, and p(x | ωi) represents the conditional probability density function. According to former research, the statistical data of many problems approximately follow a normal distribution, so the normal density function is selected as the form of the conditional probability density function. The univariate normal probability density function is

p(x | ωi) = (1/(√(2π)·σi)) exp(−(x − μi)² / (2σi²)).  (8)

Here, μ represents the mathematical expectation, and its formula is

μ = (1/N) Σ_{k=1}^{N} x_k,  (9)

where x_k are the N training sample values of the feature. σ² represents the variance, and its formula is

σ² = (1/N) Σ_{k=1}^{N} (x_k − μ)².  (10)

According to formulas (8), (9), and (10), in order to determine the specific expression of a conditional probability density function, expectation and variance should be calculated, which can be achieved by studying a large number of samples.
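The parameter estimation of formulas (9) and (10) and the evaluation of the univariate normal density of formula (8) can be sketched as follows. The paper performs this step in MATLAB; this is an illustrative Python equivalent:

```python
import math

def fit_normal(samples):
    """Estimate expectation and variance of a feature from training
    samples (formulas (9) and (10))."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, var

def normal_pdf(x, mu, var):
    """Univariate normal conditional probability density (formula (8))."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

mu, var = fit_normal([1.0, 2.0, 3.0])
```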

In formula (6), P(ωi | x) represents the posterior probability, which is the criterion for the final recognition. The flowchart of the Bayes classifier based on the minimum error rate is shown in Figure 13.

Figure 13: The flowchart of Bayes classifier.

Figure 13 shows that the recognition process of the Bayes classifier can be briefly described as follows: calculate the posterior probabilities P(ωi | x) and then find their maximum,

P(ωi | x) = max_{j=1,…,c} P(ωj | x).

If class ωi has the maximum posterior probability, then the tested sample belongs to class ωi.

4.1.2. The Design of Bayes Classifier

The obstacles researched in this paper are trunks, high shrubs, and dwarf shrubs, which are denoted by ω1, ω2, and ω3, and their prior probabilities are denoted by P(ω1), P(ω2), and P(ω3). With equal numbers of samples for each class, the prior probabilities are

P(ω1) = P(ω2) = P(ω3) = 1/3.  (12)

For the same sample, this paper extracts three different features: x1 represents the saturation S, x2 represents the Lab color feature a, and x3 represents the "verticality" feature based on the 16-direction chain code. For each feature, three conditional probability density functions need to be calculated, one per class. MATLAB is used to estimate the parameters of the conditional probability density functions from the feature data. If all the features follow normal distributions, the conditional probability density functions have the form

p(xk | ωi) = (1/(√(2π)·σik)) exp(−(xk − μik)² / (2σik²)),  i, k = 1, 2, 3.

To see the distribution of the data more intuitively, the histogram is the most commonly used tool. Its main idea is to divide the data into intervals of equal width and draw a rectangle over each interval whose height reflects the frequency of the data falling in it. Figure 14(a) shows the histogram of the S-feature of trunks, and Figure 14(b) shows the normal distribution probability density curve fitted to the S-feature data of the trunk samples.

Figure 14: The fitting curve of normal distribution.

An intuitive method for testing whether the data follow a normal distribution is shown in Figure 15. The red line is derived from the normal distribution, and the blue points represent the observed data; data that follow a normal distribution fall approximately on the line. As can be seen from the figure, the data points mostly fall on the line, so the preliminary conclusion is that the data fit a normal distribution.

Figure 15: The normal distribution test of data.

The hypothesis testing results for the trunk sample data based on the Lilliefors test are shown in Table 1. The statistic is 0.0596, which is less than the critical value of 0.0947, so the original hypothesis is accepted, and the conclusion is that the data fit a normal distribution. According to the estimated expectation and variance, the S-feature values of trunks follow the normal distribution N(0.1960, 0.0753²), so the conditional probability density function is

p(x1 | ω1) = (1/(√(2π)·0.0753)) exp(−(x1 − 0.1960)² / (2·0.0753²)).  (14)

Table 1: The results of hypothesis testing.

In addition, the 95% confidence interval of the expectation is [0.1800, 0.2120], and the 95% confidence interval of the standard deviation is [0.0656, 0.0885].

Similarly, the S-feature values of high shrubs follow the normal distribution N(0.4591, 0.0799²), so the conditional probability density function is

p(x1 | ω2) = (1/(√(2π)·0.0799)) exp(−(x1 − 0.4591)² / (2·0.0799²)).  (15)

The S-feature values of dwarf shrubs follow the normal distribution N(0.3768, 0.0579²), so the conditional probability density function is

p(x1 | ω3) = (1/(√(2π)·0.0579)) exp(−(x1 − 0.3768)² / (2·0.0579²)).  (16)

The a-feature values of trunks follow the normal distribution N(2.4879, 1.2802²), so the conditional probability density function is

p(x2 | ω1) = (1/(√(2π)·1.2802)) exp(−(x2 − 2.4879)² / (2·1.2802²)).  (17)

The a-feature values of high shrubs follow the normal distribution N(−2.4449, 1.8926²), so the conditional probability density function is

p(x2 | ω2) = (1/(√(2π)·1.8926)) exp(−(x2 + 2.4449)² / (2·1.8926²)).  (18)

The a-feature values of dwarf shrubs follow the normal distribution N(1.5453, 1.6136²), so the conditional probability density function is

p(x2 | ω3) = (1/(√(2π)·1.6136)) exp(−(x2 − 1.5453)² / (2·1.6136²)).  (19)

The "verticality" values of trunks follow the normal distribution N(0.6826, 0.0822²), so the conditional probability density function is

p(x3 | ω1) = (1/(√(2π)·0.0822)) exp(−(x3 − 0.6826)² / (2·0.0822²)).  (20)

The "verticality" values of high shrubs follow the normal distribution N(0.4744, 0.0861²), so the conditional probability density function is

p(x3 | ω2) = (1/(√(2π)·0.0861)) exp(−(x3 − 0.4744)² / (2·0.0861²)).  (21)

The "verticality" values of dwarf shrubs follow the normal distribution N(0.2212, 0.0514²), so the conditional probability density function is

p(x3 | ω3) = (1/(√(2π)·0.0514)) exp(−(x3 − 0.2212)² / (2·0.0514²)).  (22)

The 95% confidence intervals of color and edge features for three kinds of obstacles are shown in Table 2.

Table 2: The range of color feature for three kinds of obstacles.
4.2. Multifeature Fusion Algorithm Based on the Bayes Classifier

According to the obtained prior probabilities and conditional probability density functions, combined with the Bayes formula, the posterior probability under a single feature value can be calculated as

P(ωi | xk) = p(xk | ωi)P(ωi) / p(xk),  (23)

where, according to the total probability formula,

p(xk) = Σ_{i=1}^{3} p(xk | ωi)P(ωi),

and ω1, ω2, and ω3 respectively represent trunks, high shrubs, and dwarf shrubs, while x1, x2, and x3 respectively represent the S-feature, the a-feature, and the "verticality."

Then, substituting the prior probabilities in formula (12) and the conditional probability density functions in formulas (14)~(22) into formula (23), the posterior probability can be calculated for each feature. The tested samples can then be classified based on these values; namely, the task of obstacle recognition is achieved.

The above method uses a single feature; this paper further presents obstacle recognition by multifeature fusion based on Bayes classification theory. Its basis is still Bayes classification theory, but the obstacle recognition operates on the fused features.

Suppose there are n features and c classes; the steps of this method are as follows.

(1) According to the actual situation, obtain the prior probability P(ωi) of each class.

(2) Calculate the conditional probability density function of the fused features. If the features are independent of each other, namely, they do not interfere with each other, the conditional probability density function is

p(x1, x2, …, xn | ωi) = Π_{k=1}^{n} p(xk | ωi).

(3) The total probability formula is employed to calculate the total probability:

p(x1, x2, …, xn) = Σ_{i=1}^{c} P(ωi) Π_{k=1}^{n} p(xk | ωi).

(4) The Bayes formula is employed to calculate the posterior probability:

P(ωi | x1, x2, …, xn) = P(ωi) Π_{k=1}^{n} p(xk | ωi) / p(x1, x2, …, xn).

(5) Classify according to the classification rule: if P(ωi | x1, x2, …, xn) = max_{j=1,…,c} P(ωj | x1, x2, …, xn), the sample belongs to class ωi.
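The multifeature fusion steps above can be sketched as a small naive-Bayes-style classifier. The numeric parameters below are read from Section 4.1.2, assuming the fitted values are means and standard deviations and that the priors are equal; all function and variable names are illustrative:

```python
import math

def normal_pdf(x, mu, sigma):
    """Univariate normal density with mean mu and std sigma."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def classify(features, priors, params):
    """Multifeature-fusion Bayes classifier assuming conditionally
    independent features: P(wi|x) ~ P(wi) * prod_k p(xk|wi).
    params[i][k] holds (mu, sigma) of feature k under class i."""
    posts = []
    for prior, cls in zip(priors, params):
        likelihood = 1.0
        for x, (mu, sigma) in zip(features, cls):
            likelihood *= normal_pdf(x, mu, sigma)   # step (2)
        posts.append(prior * likelihood)
    total = sum(posts)                     # step (3): total probability
    posts = [p / total for p in posts]     # step (4): normalized posteriors
    return posts.index(max(posts)), posts  # step (5): maximum posterior

params = [
    [(0.1960, 0.0753), (2.4879, 1.2802), (0.6826, 0.0822)],   # trunk
    [(0.4591, 0.0799), (-2.4449, 1.8926), (0.4744, 0.0861)],  # high shrubs
    [(0.3768, 0.0579), (1.5453, 1.6136), (0.2212, 0.0514)],   # dwarf shrubs
]
# A sample near the trunk means should be classified as a trunk (class 0):
label, posts = classify([0.20, 2.5, 0.68], [1 / 3] * 3, params)
```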

Combined with the research content of this paper, which includes three features and three kinds of obstacles, the posterior probability of multifeature fusion based on Bayes classification theory is

P(ωi | x1, x2, x3) = P(ωi) p(x1, x2, x3 | ωi) / p(x1, x2, x3),

where the conditional probability density function is

p(x1, x2, x3 | ωi) = p(x1 | ωi) p(x2 | ωi) p(x3 | ωi).

The total probability formula is as follows:

p(x1, x2, x3) = Σ_{i=1}^{3} P(ωi) p(x1 | ωi) p(x2 | ωi) p(x3 | ωi).

Finally, substituting the prior probabilities in formula (12) and the conditional probability density functions in formulas (14)~(22) into formula (29), the posterior probability combining the three features can be calculated. The tested samples can then be classified based on these values; namely, the task of obstacle recognition is achieved.

5. Test and Result Analysis

In this paper, the UGV platform of Dalian University of Technology is used for image acquisition. Then VC++ software is used for image processing and for segmenting the region of interest. Finally, MATLAB software is used for feature extraction and obstacle recognition.

5.1. Test Platform

The UGV platform of DLUT is shown in Figure 16. The perception system of the UGV includes two AVT F-033B/C color cameras, one real-time laser radar (SICK-221), one American UNIQ USS-301 infrared camera, and one ADVANTECH IPC-610H industrial PC. Both the color cameras and the infrared camera are mounted on the top of the UGV platform with shims that provide a 10° down tilt. The laser radar is mounted on the bumper with a horizontal forward-looking scanning field of view [27].

Figure 16: The test platform of UGV.
5.2. Test Results

This paper collected 300 obstacle images, including 100 images of trunks, 100 images of high shrubs, and 100 images of dwarf shrubs. For any obstacle image, its probabilities of belonging to the trunk, high shrub, and dwarf shrub classes can be calculated, respectively; when a probability value satisfies certain conditions, the recognition result can be determined; otherwise, the recognition result is pending. The conditions are as follows.

(1) It is the maximum posterior probability.
(2) The difference between it and every other posterior probability value is greater than a certain threshold (the threshold is set to 5%).
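The two acceptance conditions can be sketched as follows; the helper name `decide` is illustrative, while the 5% margin follows the paper:

```python
def decide(posteriors, margin=0.05):
    """Accept a recognition result only if (1) it is the maximum
    posterior and (2) it exceeds every other posterior by more than
    the margin; otherwise the result is pending (None)."""
    best = max(range(len(posteriors)), key=lambda i: posteriors[i])
    for i, p in enumerate(posteriors):
        if i != best and posteriors[best] - p <= margin:
            return None          # condition (2) violated: pending
    return best                  # index of the recognized class
```

For example, posteriors of (0.70, 0.20, 0.10) yield class 0, while (0.51, 0.46, 0.03) is pending because the top two classes differ by only 0.05.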

A part of recognition results is shown in Figure 17.

Figure 17: The results of obstacle recognition.
5.3. Result Analysis

This paper tested 300 images and obtained the recognition results. The accuracy of each single feature and of multifeature fusion can then be obtained, as shown in Table 3.

Table 3: The accuracy of obstacle recognition.

As can be seen from the table, when a single feature is used to recognize obstacles, the correct rate is relatively low, while the correct rate of multifeature fusion reaches more than 90%, the highest of the four methods. Therefore, the test results show that using multifeature fusion to recognize obstacles works better than using a single feature, and the recognition accuracy is significantly improved, which verifies the feasibility and validity of the method.

6. Conclusion

According to the types of obstacles that a UGV may encounter in the off-road environment, such as trunks, high shrubs, and dwarf shrubs, and drawing on a large number of related research achievements, this paper presented a monocular-vision obstacle recognition method based on Bayes classification theory. The main contents are the feature extraction methods based on the HSI color space, the Lab color space, and the binary image, and the multifeature fusion obstacle recognition method based on Bayes classification theory. Test results show that the proposed method has good robustness and accuracy. There are also some shortcomings in this paper: the range of researched obstacle types is limited, and the real-time performance of the obstacle recognition algorithm is not discussed. Subsequent work will consider rocks, pits, water, and other obstacles, which can improve the obstacle recognition system of the UGV in the off-road environment, and will address the real-time performance of this method.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is sponsored by the Specialized Research Fund for the Doctoral Program of Higher Education (20110041120024), the National Natural Science Foundation Project (51205038), and the Fundamental Research Funds for the Central Universities (DUT13JS14 and DUT13JS02). Finally, the authors are grateful to all the friends who cooperated with them during the research and to the journalists who covered this event.

References

  1. W. Wei, Obstacle Detection and Road Segmentation for Robots Based on Information Fusion, Zhejiang University, Zhejiang, China, 2010.
  2. Q. Weigao and X. Xuejin, “Development status and direction of driverless vehicle,” Shanghai Automotive, vol. 7, pp. 40–43, 2007.
  3. R. Manduchi, A. Castano, A. Talukder, and L. Matthies, “Obstacle detection and terrain classification for autonomous off-road navigation,” Autonomous Robots, vol. 18, no. 1, pp. 81–102, 2005.
  4. R. Manduchi, A. Castano, A. Talukder et al., “Real-time moving obstacle detection using optical flow models,” in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 466–471, Tokyo, Japan, 2006.
  5. T. Hu and T. Wu, “A fast and robust obstacle detection algorithm for off-road autonomous mobile robots,” Robot, vol. 33, no. 3, pp. 287–298, 2011.
  6. C. Huihai, L. Yan, and Z. Xin, “A large range sonar obstacle detection system for cross-country autonomous vehicle,” Electronic Design Engineering, vol. 19, no. 16, pp. 64–67, 2011.
  7. L. Yanmin, Z. Qidan, Z. Xunyu et al., “Study on obstacle detection based on laser range finder,” Computer Engineering and Design, vol. 33, no. 2, pp. 718–723, 2012.
  8. X. Zhao, P. Liu, M. Zhang, L. Yang, and J. Shi, “A fast obstacle detection algorithm for mobile robots and its application,” Robot, vol. 33, no. 2, pp. 198–214, 2011.
  9. B. Z. Yao, P. Hu, X. H. Lu, J. J. Gao, and M. H. Zhang, “Transit network design based on travel time reliability,” Transportation Research Part C, vol. 43, pp. 233–248, 2014.
  10. B. Yao, P. Hu, M. Zhang, and S. Wang, “Artificial bee colony algorithm with scanning strategy for the periodic vehicle routing problem,” Simulation, vol. 89, no. 6, pp. 762–770, 2013.
  11. B. Z. Yao, C. Y. Yang, J. B. Yao, and J. Sun, “Tunnel surrounding rock displacement prediction using support vector machine,” International Journal of Computational Intelligence Systems, vol. 3, no. 6, pp. 843–852, 2010.
  12. İ. K. İyidir, F. B. Tek, and D. Kırcalı, “Adaptive visual obstacle detection for mobile robots using monocular camera and ultrasonic sensor,” in Proceedings of the 12th European Conference on Computer Vision (ECCV '12), vol. 2, pp. 526–535, Florence, Italy, October 2012.
  13. B. Z. Yao, C. Y. Yang, J. B. Yao, J. J. Hu, and J. Sun, “An improved ant colony optimization for flexible job shop scheduling problems,” Advanced Science Letters, vol. 4, no. 6-7, pp. 2127–2131, 2011.
  14. B. Z. Yao, J. B. Yao, M. H. Zhang, and L. Yu, “Improved support vector machine regression in multi-step-ahead prediction for tunnel surrounding rock displacement,” Scientia Iranica, 2013.
  15. B. Yu, Z. Yang, K. Chen, and B. Yu, “Hybrid model for prediction of bus arrival times at next station,” Journal of Advanced Transportation, vol. 44, no. 3, pp. 193–204, 2010.
  16. Y. Bin, Y. Zhongzhen, and Y. Baozhen, “Bus arrival time prediction using support vector machines,” Journal of Intelligent Transportation Systems: Technology, Planning, and Operations, vol. 10, no. 4, pp. 151–158, 2006.
  17. B. Yu, Z. Z. Yang, and B. Z. Yao, “A hybrid algorithm for vehicle routing problem with time windows,” Expert Systems with Applications, vol. 38, no. 1, pp. 435–441, 2011.
  18. B. Yu, Z. Z. Yang, and J. B. Yao, “Genetic algorithm for bus frequency optimization,” Journal of Transportation Engineering, vol. 136, no. 6, pp. 576–583, 2010.
  19. B. Yu, H. B. Zhu, W. J. Cai, N. Ma, and B. Z. Yao, “Two-phase optimization approach to transit hub location—the case of Dalian,” Journal of Transport Geography, vol. 33, pp. 33–62, 2013.
  20. X. Pang, Z. Min, and J. Kan, “Color image segmentation based on HSI and LAB color space,” Journal of Guangxi University: Natural Science Edition, vol. 36, no. 6, pp. 976–980, 2011.
  21. H. Shaojia, L. Ziyang, and S. Jianqing, “Obstacle detection of indoor robots based on monocular vision,” Journal of Computer Applications, vol. 32, no. 9, pp. 2556–2559, 2012.
  22. P. Fei and W. Henghua, “Study on obstacle distance detection based on monocular humanoid robot,” Computer Systems & Applications, vol. 22, no. 8, pp. 88–91, 2013.
  23. B. Yu, L. Guo, X. Qian, T. Zhao, and G. Cheng, “An effective de-noising algorithm in CIE-lab color space using hybrid filtering,” Journal of Northwestern Polytechnical University, vol. 30, no. 6, pp. 941–945, 2012.
  24. C. Changtao, Q. Guoqing, and Y. Ping, “Application of Lab spaces color segmentation in fast vehicle license plate location,” Application Research of Computers, vol. 27, no. 8, pp. 3191–3193, 2010.
  25. D. Jingyi and Y. Jiao, “Using improved Freeman chain code to RMB denomination identification,” Computer Engineering and Design, vol. 33, no. 12, pp. 4643–4646, 2012.
  26. H. Song, D. He, and X. Xin, “Unstructured road detection and obstacle recognition algorithm based on machine vision,” Transactions of the Chinese Society of Agricultural Engineering, vol. 27, no. 6, pp. 225–230, 2011.
  27. Y. Zhao, J. Li, L. Li, M. Zhang, and L. Guo, “Environmental perception and sensor data fusion for unmanned ground vehicle,” Mathematical Problems in Engineering, vol. 2013, Article ID 903951, 12 pages, 2013.