Special Issue: Advanced Intelligent Fuzzy Systems Modeling Technologies for Smart Cities

Research Article | Open Access

Yuxiu Bai, Huanhuan Zheng, Jian Zhou, Dongmei Zhou, "A Lane Extraction Algorithm Based on Fuzzy Set", Mathematical Problems in Engineering, vol. 2021, Article ID 9390778, 6 pages, 2021. https://doi.org/10.1155/2021/9390778

A Lane Extraction Algorithm Based on Fuzzy Set

Revised: 07 May 2021
Accepted: 26 May 2021
Published: 02 Jun 2021

Abstract

Lanes are difficult to extract completely, so a lane extraction algorithm is proposed based on vehicle driving rules. Because vehicles are constantly moving, the foreground and background areas cannot be defined effectively in the image. Therefore, based on fuzzy set theory, multidimensional membership degrees are used to judge whether each point belongs to the foreground or the background, so that the moving area can be extracted accurately. Then, a logistic regression model is established to identify the moving vehicles. Finally, based on the vehicle trajectories, lane extraction is realized by region growing. The results show that the proposed algorithm can extract the road effectively.

1. Introduction

Generally, the main difficulties of road extraction are as follows: (1) it is difficult to establish a unified road segmentation model because the road has only limited features in the image, and (2) owing to the disorderly way roads extend, extraction is easily affected by shadows and other factors, which can result in incomplete segmentation. Therefore, we propose a complete road extraction process: (1) from the perspective of the image sequence, the driving direction of moving vehicles is analyzed to determine the road area indirectly, and (2) according to the trajectories of moving vehicles, the road is extracted completely.

According to the above analysis, the algorithm flowchart is established, as shown in Figure 1, and the images are input sequentially. (1) A multidimensional moving object detection method based on a fuzzy set is proposed to realize the accurate extraction of moving objects. (2) The vehicle detection module is established to detect and track the vehicle. (3) According to the vehicle trajectory, road extraction is realized.

2.1. Multidimensional Degree Moving Object Detection Based on Fuzzy Set

In a complex background, it is difficult to distinguish the moving object from the background. For this reason, scholars put forward fuzzy set theory to study data whose class is ambiguous. First, we define a fuzzy subset A, where μA(x) is called the membership of x to A. In moving object detection, every attribute is a fuzzy set, and the selection of its membership function is very important. According to the difference between intraclass similarity and interclass similarity, we design a membership function with parameters a < c and b = (a + c)/2. The slope of this function is small near the 0.5 boundary but large far away from it. Because of the fuzziness of the features, the middle part cannot clearly separate foreground from background, so it must be combined with the other membership relations. Therefore, we carry out the analysis along the dimensions of color (C), time (T), and space (S). The process is shown in Figure 2.
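The closed form of the membership function is not reproduced here. A minimal numpy sketch, assuming the standard fuzzy S-function (which matches the stated parameters a < c and b = (a + c)/2 and crosses 0.5 at b; the exact curve in the paper may differ), is:

```python
import numpy as np

def s_membership(x, a, c):
    """Fuzzy S-shaped membership on [a, c] with midpoint b = (a + c) / 2.

    Returns 0 below a, 1 above c, and a smooth quadratic ramp in between,
    crossing 0.5 exactly at b. This is an assumed form, not the paper's
    verbatim equation.
    """
    b = (a + c) / 2.0
    x = np.asarray(x, dtype=float)
    mu = np.zeros_like(x)
    left = (x > a) & (x <= b)           # lower half of the ramp
    right = (x > b) & (x <= c)          # upper half of the ramp
    mu[left] = 2.0 * ((x[left] - a) / (c - a)) ** 2
    mu[right] = 1.0 - 2.0 * ((x[right] - c) / (c - a)) ** 2
    mu[x > c] = 1.0
    return mu
```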

A pyramid model is constructed to compute the membership functions at different resolutions, forming a cluster of fuzzy vector sets. Upsampling is used to generate the foreground memberships μc0, μc1, and μc2 (for the three RGB channels) at the original resolution. According to the membership principle of the fuzzy vector set cluster, the membership degree of each point to the foreground along the color dimension C is μc = (μc0 + μc1 + μc2)/3. The memberships for T and S are calculated similarly.
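Leaving aside the pyramid construction, the channel fusion μc = (μc0 + μc1 + μc2)/3 described above amounts to a simple average of the per-channel membership maps (already upsampled to the original resolution):

```python
import numpy as np

def fuse_channel_memberships(mu_c0, mu_c1, mu_c2):
    """Colour membership mu_c = (mu_c0 + mu_c1 + mu_c2) / 3, where each
    argument is a per-pixel foreground membership map for one RGB channel."""
    return (np.asarray(mu_c0) + np.asarray(mu_c1) + np.asarray(mu_c2)) / 3.0
```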

Because the video is continuous, object motion does not mutate abruptly. If a point is foreground in the current frame, it is more likely to be foreground in the next frame; this probability is denoted PF. Similarly, if a point is background in the current frame, the probability PB that it remains background in the next frame is also large. Foreground and background can still switch because of the uncertainty of object motion. Given the numbers of foreground and background pixels counted up to frame Tn, PF and PB can be represented as fuzzy sets, and for a point in frame T − 1 the membership degrees of its vector to the foreground and to the background are obtained over a window of T frames.

The spatial feature refers to the concentration of points: within a small area, the attributes of points should be consistent, and most outliers are noise to be removed. Based on the preliminary color membership result, the number n of foreground pixels with membership greater than 0.5 in the 5 × 5 pixel neighborhood of a point is counted, and the membership degree of the point's space vector to the foreground is expressed in terms of n.
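A sketch of this spatial membership, assuming the natural normalization μs = n/25 over the 5 × 5 window (an assumption; the paper's exact formula is not shown). The sliding-window count is done with an integral image so the cost is independent of window size:

```python
import numpy as np

def spatial_membership(mu_color, size=5):
    """For each pixel, count foreground neighbours (colour membership > 0.5)
    in a size x size window and normalise by the window area: mu_s = n / size**2.
    The normalisation is an assumption, not the paper's verbatim equation."""
    fg = (np.asarray(mu_color) > 0.5).astype(float)
    H, W = fg.shape
    pad = size // 2
    padded = np.pad(fg, pad)
    # integral image: ii[i, j] = sum of padded[:i, :j]
    ii = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    # window sum for the size x size window centred on each original pixel
    n = ii[size:, size:] - ii[:-size, size:] - ii[size:, :-size] + ii[:-size, :-size]
    return n / float(size * size)
```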

Since a real moving object should have the following characteristics: a large color difference between the object and the background, a continuously changing position, and strong spatial concentration, the final membership is obtained by fusing the color, temporal, and spatial memberships.

2.2. Lane Extraction Algorithm Based on Vehicle Trajectory

Since vehicles drive only in lanes, we extract vehicles from all moving objects and then indirectly determine the lane. The LRM (logistic regression model) [32] is a classical classification algorithm in statistics; its process is as follows: the cost function is established, the optimal model parameters are obtained, and finally the model is verified. The conditional probability distribution of the standard binomial logistic regression model is P(Y = 1 | x) = exp(ω · x) / (1 + exp(ω · x)) and P(Y = 0 | x) = 1 / (1 + exp(ω · x)), where ω is the weight vector and ω · x is the inner product. At this time, the log-odds is linear in x: log[P(Y = 1 | x) / P(Y = 0 | x)] = ω · x.

The maximum likelihood estimation method is used to estimate the parameters. Writing π(x) = P(Y = 1 | x), the log-likelihood over the samples (xi, yi), i = 1, …, N, is L(ω) = Σi [yi log π(xi) + (1 − yi) log(1 − π(xi))].

On the basis of L(ω), in order to avoid overfitting, the cost function is defined with an L2 penalty as J(ω) = −(1/N) L(ω) + (λ/2)‖ω‖², where λ is the regularization coefficient. The ω that minimizes J(ω) is the optimal parameter, and the gradient descent method is used to find the minimum of the cost function.

The update process is as follows: ω ← ω − α∇J(ω), where α is the step size (learning rate).
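The training described in this subsection (sigmoid probabilities, L2-regularized cost, gradient descent updates) can be sketched as below. The function name `train_logistic`, the learning rate `lr`, and the 1/N normalization of the cost are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lam=0.1, lr=0.1, iters=2000):
    """L2-regularised logistic regression fitted by batch gradient descent.

    Minimises J(w) = -(1/N) * log-likelihood + (lam/2) * ||w||^2 with the
    update w <- w - lr * grad J(w)."""
    N, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = sigmoid(X @ w)                  # P(Y = 1 | x) for each sample
        grad = X.T @ (p - y) / N + lam * w  # gradient of the regularised cost
        w -= lr * grad
    return w
```

A usage sketch: stack one feature vector per moving-object candidate (with a leading 1 for the bias term) into `X`, with `y = 1` for vehicles, and threshold `sigmoid(X @ w)` at 0.5 to classify.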

According to the above algorithm, the vehicle area is extracted, and the vehicle trajectory set is determined by taking the vehicle's center of gravity as the trajectory point. Taking the trajectory points as seed points, the region growing method is used to realize the complete extraction of the lanes.
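A minimal 4-connected region-growing sketch from the trajectory seed points. The intensity tolerance `tol` and the homogeneity criterion (comparison against the seed's intensity) are assumptions; the paper does not state its growing criterion:

```python
from collections import deque
import numpy as np

def region_grow(image, seeds, tol=10.0):
    """Grow a region from seed pixels: a 4-connected neighbour joins the
    region when its intensity is within tol of the intensity of the seed
    it grew from. Returns a boolean mask of the grown region."""
    img = np.asarray(image, dtype=float)
    H, W = img.shape
    mask = np.zeros((H, W), dtype=bool)
    q = deque()
    for r, c in seeds:
        if not mask[r, c]:
            mask[r, c] = True
            q.append((r, c, img[r, c]))     # carry the seed intensity along
    while q:
        r, c, ref = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W and not mask[nr, nc]:
                if abs(img[nr, nc] - ref) <= tol:   # homogeneity criterion
                    mask[nr, nc] = True
                    q.append((nr, nc, ref))
    return mask
```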

3. Experiment Result and Analysis

Twenty groups of aerial videos form the experimental database, and 8 representative groups (43109 frames in total) are selected from it to verify the effectiveness of the proposed algorithm, as shown in Table 1. The program runs under Windows 7. In all images, the lane area is annotated at the pixel level by a professional annotator. Trilinear interpolation is used to normalize the image resolution; in this case, the frame rate does not need to be normalized.

Table 1

Data | Resolution | Frames | Feature
1 | 352 × 288 | 7500 | Constantly moving object
2 | 480 × 360 | 5210 | Large moving target
3 | 768 × 576 | 5412 | First frame contains the moving object
4 | 320 × 240 | 1025 | Aerial top view
5 | 512 × 288 | 7850 | Contains shadow
6 | 448 × 336 | 6245 | Aerial oblique view
7 | 1480 × 1320 | 7852 | Gate of the community
8 | 3840 × 2160 | 2015 | Crossroads

3.1. Comparison of Moving Object Extraction Algorithms

In order to verify the accuracy of moving object extraction, the area overlap measure (AOM) and the combination measure (CM) are introduced [33]: AOM(Rg, Rs) = |Rg ∩ Rs| / |Rg ∪ Rs|, where Rg is the gold-standard region and Rs is the segmentation result; both AOM and CM increase with segmentation quality.
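AOM as defined here (intersection over union of the gold standard Rg and the result Rs) can be computed directly from binary masks; CM's exact combination is not reproduced in the text, so only AOM is sketched:

```python
import numpy as np

def aom(gt, seg):
    """Area overlap measure |R_g ∩ R_s| / |R_g ∪ R_s| between the
    gold-standard mask gt and the segmentation mask seg."""
    gt = np.asarray(gt, dtype=bool)
    seg = np.asarray(seg, dtype=bool)
    inter = np.logical_and(gt, seg).sum()
    union = np.logical_or(gt, seg).sum()
    return inter / union if union else 1.0   # two empty masks overlap fully
```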

Comparing the segmentation results of different algorithms, as shown in Table 2, all algorithms perform well on data in which the object is moving throughout. Because the learning factor of GMM [34] is fixed, its object extraction is incomplete. The FD [35] algorithm uses a fixed background frame, which extracts large moving objects well, but the quality of the background frame directly affects the subsequent detection. EFD [36] reduces the influence of the background frame by fusing boundary information. DL [37] integrates prior knowledge, which better suppresses tailing. EGMM [38] models the light distribution, which suppresses shadow areas well. Our algorithm builds on the traditional GMM, updates the learning factor dynamically, and introduces the frame difference method to extract the motion region completely. Although its effect is slightly lower than DL [37] on the data whose first frame contains a moving object and than EGMM [38] on the data with shadow, the proposed algorithm performs well overall. Since the proposed algorithm relies on the area where vehicles travel, there is a risk of missed detection when no vehicle passes through a shadowed area; follow-up research will continue to strengthen this aspect.

Table 2

Metric | Algorithm | Data 1 | Data 2 | Data 3 | Data 4 | Data 5 | Data 6 | Data 7 | Data 8
AOM | GMM | 0.83 | 0.76 | 0.78 | 0.80 | 0.75 | 0.86 | 0.79 | 0.76
AOM | FD | 0.81 | 0.79 | 0.75 | 0.79 | 0.76 | 0.80 | 0.82 | 0.83
AOM | EFD | 0.86 | 0.82 | 0.77 | 0.83 | 0.79 | 0.83 | 0.85 | 0.87
AOM | DL | 0.92 | 0.89 | 0.91 | 0.92 | 0.87 | 0.81 | 0.90 | 0.89
AOM | EGMM | 0.85 | 0.80 | 0.82 | 0.85 | 0.93 | 0.80 | 0.86 | 0.85
AOM | Ours | 0.93 | 0.90 | 0.89 | 0.93 | 0.91 | 0.90 | 0.92 | 0.90
CM | GMM | 0.71 | 0.69 | 0.79 | 0.77 | 0.75 | 0.71 | 0.68 | 0.71
CM | FD | 0.76 | 0.73 | 0.81 | 0.81 | 0.78 | 0.73 | 0.74 | 0.70
CM | EFD | 0.79 | 0.75 | 0.85 | 0.84 | 0.81 | 0.76 | 0.78 | 0.76
CM | DL | 0.82 | 0.78 | 0.88 | 0.86 | 0.83 | 0.81 | 0.81 | 0.79
CM | EGMM | 0.86 | 0.82 | 0.87 | 0.88 | 0.87 | 0.85 | 0.84 | 0.83
CM | Ours | 0.89 | 0.86 | 0.85 | 0.90 | 0.86 | 0.87 | 0.86 | 0.85

3.2. Lane Extraction Result

A lane extraction algorithm is proposed in this paper: on the basis of moving vehicle detection, the vehicle trajectory graph is constructed to finally realize lane extraction, as shown in Figure 3. The algorithm can effectively extract the lane and distinguish pedestrian roads from vehicle roads.

4. Conclusion

As it is difficult to extract the lane completely, a lane extraction model is established based on moving vehicles. Based on the fuzzy set theory, the algorithm is proposed from the color, time, and space dimensions and takes the vehicle route as the benchmark to achieve accurate road extraction. On this basis, road condition analysis and road construction research can be carried out in the future.

Data Availability

All data used to support the findings of this study are available within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by Yulin Science and Technology Association Young Talents Promotion Program Project under Grant no. 20200214 and Yulin City Science and Technology Plan Project under Grant no. CXY-2020-012-02.

References

1. S. Qiu, K. Cheng, L. Cui, D. Zhou, and Q. Guo, “A moving vehicle tracking algorithm based on deep learning,” Journal of Ambient Intelligence and Humanized Computing, pp. 1–7, 2020. View at: Publisher Site | Google Scholar
2. T. Zhou, H. Lu, W. Wang, and Y. Xia, “GA-SVM based feature selection and parameter optimization in hospitalization expense modeling,” Applied Soft Computing, vol. 75, pp. 323–332, 2019. View at: Publisher Site | Google Scholar
3. X. Niu, “A geometric active contour model for highway extraction,” in Proceedings of ASPRS 2006 Annual Conference, Reno, Nevada, May 2006. View at: Google Scholar
4. P. Pongpaibool, P. Tangamchit, and K. Noodwong, “Evaluation of road traffic congestion using fuzzy techniques,” in Proceedings of the TENCON 2007-2007 IEEE Region 10 Conference, pp. 1–4, IEEE, Taipei, Taiwan, November 2007. View at: Publisher Site | Google Scholar
5. A. Grote, M. Butenuth, and C. Heipke, “Road extraction in suburban areas based on normalized cuts,” International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 36, no. 3, p. W49A, 2007. View at: Google Scholar
6. A. Grote and C. Heipke, “Road extraction for the update of road databases in suburban areas,” International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 37, no. 3, pp. 563–568, 2008. View at: Google Scholar
7. X. Lin, J. Zhang, Z. Liu, and J. Shen, “Integration method of profile matching and template matching for road extraction from high resolution remotely sensed imagery,” in Proceedings of the 2008 International Workshop on Earth Observation and Remote Sensing Applications, pp. 1–6, IEEE, Beijing, China, July 2008. View at: Publisher Site | Google Scholar
8. A. Grote, C. Heipke, F. Rottensteiner, and H. Meyer, “Road extraction in suburban areas by region-based road subgraph extraction and evaluation,” in Proceedings of the 2009 Joint Urban Remote Sensing Event, pp. 1–6, IEEE, Shanghai, China, May 2009. View at: Publisher Site | Google Scholar
9. J. Senthilnath, M. Rajeshwari, and S. N. Omkar, “Automatic road extraction using high resolution satellite image based on texture progressive analysis and normalized cut method,” Journal of the Indian Society of Remote Sensing, vol. 37, no. 3, pp. 351–361, 2009. View at: Publisher Site | Google Scholar
10. J. Guan, Z. Wang, and X. Yao, “A new approach for road centerlines extraction and width estimation,” in Proceedings of the IEEE 10th International Conference on Signal Proceedings, pp. 924–927, IEEE, Beijing, China, October 2010. View at: Publisher Site | Google Scholar
11. L. Zhao and X. Wang, “Road extraction in high resolution remote sensing images based on mathematic morphology and snake model,” in Proceedings of the 2010 3rd International Congress on Image and Signal Processing, vol. 3, pp. 1436–1440, IEEE, Yantai, China, October 2010. View at: Publisher Site | Google Scholar
12. M. Rajeswari, K. Gurumurthy, L. Reddy, S. N. Omkar, and J. Senthilnath, “Automatic road extraction based on level set, normalized cuts and mean shift methods,” International Journal of Computer Science Issues (IJCSI), vol. 8, no. 3, p. 250, 2011. View at: Google Scholar
13. A. Kirthika and A. Mookambiga, “Automated road network extraction using artificial neural network,” in Proceedings of the 2011 International Conference on Recent Trends in Information Technology (ICRTIT), pp. 1061–1065, IEEE, Chennai, India, June 2011. View at: Publisher Site | Google Scholar
14. E. Karaman, U. Cinar, E. Gedik, Y. Yardemci, and U. Halici, “A new algorithm for automatic road network extraction in multispectral satellite images,” in Proceedings of the 4th GEOBIA, pp. 455–459, Rio de Janeiro, Brazil, May 2012. View at: Google Scholar
15. A. Grote, C. Heipke, and F. Rottensteiner, “Road network extraction in suburban areas,” The Photogrammetric Record, vol. 27, no. 137, pp. 8–28, 2012. View at: Publisher Site | Google Scholar
16. Z. Miao, W. Shi, H. Zhang, and X. Wang, “Road centerline extraction from high-resolution imagery based on shape features and multivariate adaptive regression splines,” IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 3, pp. 583–587, 2012. View at: Publisher Site | Google Scholar
17. J. Wegner, J. Montoya-Zegarra, and K. Schindler, “A higher-order CRF model for road network extraction,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1698–1705, Portland, OR, USA, June 2013. View at: Publisher Site | Google Scholar
18. W. Shi, Z. Miao, Q. Wang, and H. Zhang, “Spectral-spatial classification and shape features for urban road centerline extraction,” IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 4, pp. 788–792, 2013. View at: Publisher Site | Google Scholar
19. P. P. Singh and R. D. Garg, “A two-stage framework for road extraction from high-resolution satellite images by using prominent features of impervious surfaces,” International Journal of Remote Sensing, vol. 35, no. 24, pp. 8074–8107, 2014. View at: Publisher Site | Google Scholar
20. M. Maboudi and J. Amini, “Object based segmentation effect on road network extraction from satellite images,” in Proceedings of the 36th Asian Conference on Remote Sensing, pp. 19–23, Manila, Philippines, October 2015. View at: Google Scholar
21. M. Saati, J. Amini, and M. Maboudi, “A method for automatic road extraction of high resolution sar imagery,” Journal of the Indian Society of Remote Sensing, vol. 43, no. 4, pp. 697–707, 2015. View at: Publisher Site | Google Scholar
22. M. Li, A. Stein, W. Bijker, and Q. Zhan, “Region-based urban road extraction from VHR satellite images using binary partition tree,” International Journal of Applied Earth Observation and Geoinformation, vol. 44, pp. 217–225, 2016. View at: Publisher Site | Google Scholar
23. F. Saba, M. J. Valadan Zoej, and M. Mokhtarzade, “Optimization of multiresolution segmentation for object-oriented road detection from high-resolution images,” Canadian Journal of Remote Sensing, vol. 42, no. 2, pp. 75–84, 2016. View at: Publisher Site | Google Scholar
24. Y. Wei, Z. Wang, and M. Xu, “Road structure refined CNN for road extraction in aerial image,” IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 5, pp. 709–713, 2017. View at: Publisher Site | Google Scholar
25. M.I. Sameen and B. Pradhan, “A two-stage optimization strategy for fuzzy object-based analysis using airborne LiDAR and high-resolution orthophotos for urban road extraction,” Journal of Sensors, vol. 2017, Article ID 6431519, 17 pages, 2017. View at: Publisher Site | Google Scholar
26. Y. Xu, Z. Xie, Y. Feng, and Z. Chen, “Road extraction from high-resolution remote sensing imagery using deep learning,” Remote Sensing, vol. 10, no. 9, Article ID 1461, 2018. View at: Publisher Site | Google Scholar
27. X. Gao, X. Sun, M. Yan et al., “Road extraction from remote sensing images by multiple feature pyramid network,” in Proceedings of the IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium, pp. 6907–6910, IEEE, Valencia, Spain, July 2018. View at: Publisher Site | Google Scholar
28. F. Bastani, S. He, S. Abbar et al., “Roadtracer: automatic extraction of road networks from aerial images,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4720–4728, Salt Lake City, UT, USA, June 2018. View at: Publisher Site | Google Scholar
29. M. Hong, J. Guo, Y. Dai, and Z. Yin, “A novel FMH model for road extraction from high-resolution remote sensing images in urban areas,” Procedia Computer Science, vol. 147, pp. 49–55, 2019. View at: Publisher Site | Google Scholar
30. Y. Li, L. Xu, J. Rao, L. Guo, Z. Yan, and S. Jin, “A Y-Net deep learning method for road segmentation using high-resolution visible remote sensing images,” Remote Sensing Letters, vol. 10, no. 4, pp. 381–390, 2019. View at: Publisher Site | Google Scholar
31. S. Qiu and X. Li, “Moving target extraction and background reconstruction algorithm,” Journal of Ambient Intelligence and Humanized Computing, pp. 1–9, 2020. View at: Google Scholar
32. A. Kim, Y. Song, M. Kim, K. Lee, and J. H. Cheon, “Logistic regression model training based on the approximate homomorphic encryption,” BMC Medical Genomics, vol. 11, no. 4, pp. 23–31, 2018. View at: Publisher Site | Google Scholar
33. S. Qiu, Y. Tang, Y. Du, and S. Yang, “The infrared moving target extraction and fast video reconstruction algorithm,” Infrared Physics and Technology, vol. 97, pp. 85–92, 2019. View at: Publisher Site | Google Scholar
34. Y. Sun and B. Yuan, “Hierarchical GMM to handle sharp changes in moving object detection,” Electronics Letters, vol. 40, no. 13, pp. 801-802, 2004. View at: Publisher Site | Google Scholar
35. C. Zhan, X. Duan, S. Xu, Z. Song, and M. Luo, “An improved moving object detection algorithm based on frame difference and edge detection,” in Proceedings of the Fourth International Conference on Image and Graphics (ICIG), pp. 519–523, IEEE, Chengdu, China, August 2007. View at: Publisher Site | Google Scholar
36. L. He and L. Ge, “CamShift target tracking based on the combination of inter-frame difference and background difference,” in Proceedings of the 2018 37th Chinese Control Conference (CCC), pp. 9461–9465, IEEE, Wuhan, China, July 2018. View at: Publisher Site | Google Scholar
37. P. Tokmakov, C. Schmid, and K. Alahari, “Learning to segment moving objects,” International Journal of Computer Vision, vol. 127, no. 3, pp. 282–301, 2019. View at: Publisher Site | Google Scholar
38. J. Cheng, Y. Gang, S. Bai, Y.-n. Guo, and D. Wang, “An improved GMM-based moving object detection method under sudden illumination change,” in Proceedings of the International Conference on Bio-Inspired Computing: Theories and Applications, pp. 178–187, Springer, Beijing, China, November 2018. View at: Publisher Site | Google Scholar

Copyright © 2021 Yuxiu Bai et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.