Abstract

This paper describes a novel recognition algorithm for the navigation of mobile robots with visual sensors, which combines a mean filter, a Gaussian filter, the Retinex enhancement method, and the Otsu threshold segmentation method (MGRO). The approach comprises visual obstacle recognition and navigation path planning. In the first part, a three-stage method for visual obstacle recognition is constructed. Stage 1 combines mean filtering and Gaussian filtering to remove random noise and Gaussian noise from the environmental image. Stage 2 increases image contrast using the Retinex enhancement method. Stage 3 uses the Otsu threshold segmentation method to extract obstacle features. In the second part, a navigation method based on auxiliary visual information is constructed. The method builds on the artificial potential field (APF) method and avoids falling into local minima by modifying the repulsion field function. Experimental results confirm that obstacle features can be extracted accurately and that the mobile robot can avoid obstacles safely and arrive at the target position correctly.

1. Introduction

Mobile robots integrate a number of technologies such as machinery, manipulation, sensors, and information processing [1]. Equipping mobile robots with sensors is essential for enabling them to navigate and function in unknown environments [2].

Electromagnetic navigation was widely used in the early stages of mobile robot navigation [3]. It requires burying navigation lines in the working area of the mobile robots ahead of time, which is costly and inflexible. Track navigation appeared later, in which mobile robots continuously calculate and correct their traveling paths according to information from onboard encoders and gyroscopes. However, this approach tends to suffer from large time-dependent accumulated errors [4]. In sensor navigation, sensors such as radar, sonar, ultrasound, and infrared are often integrated into mobile robots to detect obstacles in unknown environments and to calculate paths or build maps using path planning algorithms [5-7].

Visual sensors such as cameras can obtain a wealth of information and are often used in mobile robot navigation. In the visual navigation process, a mobile robot first obtains environmental images from its visual sensors, then extracts information via image processing, and finally determines the best path via path planning algorithms [8, 9]. Tan et al. proposed a visual navigation system with two modules: one handles vision detection, and the other is in charge of track planning. The vision detection module detects the edge of each area according to the brightness and geometric features of the image. The track planning module then updates the planning information in real time to guide the autonomous motion of a mobile robot [10]. Pei et al. studied the navigation of an underwater robot. Information about obstacles above the waterline was probed by stereo vision technology, and the static position information and dynamic velocity information were then calculated to realize collision avoidance and dynamic target tracking [11]. Wang et al. constructed a multiview vision system for navigation. This system included multiple cameras to obtain surface information of the navigation scene. The information used for navigation was then derived from the visual features using homography matrix decomposition and bias estimation. Experimental results proved the effectiveness and accuracy of the navigation [12].

In summary, there are two key tasks in the visual navigation of mobile robots. One is how to reliably extract features from the image information provided by visual sensors such as cameras, in order to provide the basis for subsequent path planning. The other is robust path planning that generates rational paths according to the visual features. Since mobile robot navigation tasks must run in real time or near real time, minimizing processing time is essential in addition to accuracy in order to meet the demand for rapid navigation. In this paper, we present an approach for feature extraction from navigation images and path planning in the navigation process to achieve satisfactory mobile robot navigation performance.

2. MGRO Recognition Algorithm

In order to achieve autonomous navigation of mobile robots, it is vital to obtain environmental images and recognize obstacle information using visual algorithms. In environmental images from cameras, several issues can impact the accuracy of obstacle recognition: random noise and Gaussian noise in the images, low contrast between obstacles and background, and the difficulty of recognizing obstacles whose features are inconspicuous.

In this work, we designed a three-stage approach, termed MGRO recognition, to improve the obstacle recognition accuracy of environmental images captured by the mobile robot. Stage 1 combines mean filtering and Gaussian filtering to remove random noise and Gaussian noise from the environmental image. Stage 2 increases image contrast using the Retinex enhancement method. Stage 3 uses the Otsu threshold segmentation method to extract obstacle features.
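For orientation, the following sketch chains the three stages using common OpenCV primitives. It is an illustration only: the built-in mean and Gaussian blurs and the single-scale Retinex shown here are simplified stand-ins for the specific procedures detailed in Sections 2.1-2.3, and all kernel sizes and scale parameters are assumed values, not ones taken from this paper.

```python
import cv2
import numpy as np

def mgro_pipeline(img_gray):
    """Sketch of the three MGRO stages on a grayscale image (uint8)."""
    # Stage 1: mean filter (random noise), then Gaussian filter (Gaussian noise).
    denoised = cv2.blur(img_gray, (3, 3))
    denoised = cv2.GaussianBlur(denoised, (5, 5), sigmaX=1.0)

    # Stage 2: single-scale Retinex, r = log(I) - log(F * I), rescaled to 8 bits.
    f = denoised.astype(np.float64) + 1.0               # +1 avoids log(0)
    surround = cv2.GaussianBlur(f, (0, 0), sigmaX=15.0)
    r = np.log(f) - np.log(surround)
    enhanced = cv2.normalize(r, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Stage 3: Otsu threshold segmentation to separate obstacles from background.
    _, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```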

2.1. Noise Removal

Image noise can significantly degrade image quality, and random noise and Gaussian noise account for the largest proportion of it. Since multiplicative image noise can be converted into additive noise, these two types of noise can be removed one after the other. We first remove random noise with a mean filter and then remove Gaussian noise with a Gaussian filter.

A template of $3 \times 3$ pixels is chosen, and the pixel at its center is taken as the one to be processed. Random noise is detected according to

$$\left| f(x,y) - \frac{1}{8} \sum_{(i,j) \in N_8(x,y)} f(i,j) \right| > T, \tag{1}$$

where $f(x,y)$ is the gray-scale value of the center pixel and $N_8(x,y)$ denotes its eight surrounding pixels.

Equation (1) indicates that if the gray-scale value of the center pixel differs greatly from the average gray-scale value of the surrounding 8 pixels, the center pixel is random noise; otherwise, it is not. The judgment is made against the preset threshold value $T$. A pixel determined to be random noise is replaced with the average gray-scale value of the surrounding 8 pixels, which removes the noise.
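A minimal sketch of this detection-and-replacement rule, assuming a 2-D grayscale numpy array; the default threshold T = 30 is an illustrative assumption, not a value from this paper.

```python
import numpy as np

def remove_random_noise(img, T=30):
    """Apply Equation (1): replace a pixel with the mean of its 8 neighbours
    when it deviates from that mean by more than the threshold T."""
    out = img.astype(np.float64)
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = img[y - 1:y + 2, x - 1:x + 2].astype(np.float64)
            mean8 = (window.sum() - window[1, 1]) / 8.0
            if abs(window[1, 1] - mean8) > T:      # judged to be random noise
                out[y, x] = mean8
    return out.astype(img.dtype)
```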

Removing Gaussian noise is performed with a two-dimensional Gaussian filter according to

$$G(x,y) = \frac{1}{2\pi\sigma^2} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right). \tag{2}$$

The two-dimensional Gaussian filter is a circularly symmetric function. Image smoothing is mainly controlled by $\sigma$, which is a parameter greater than zero. When performing denoising, the filter is convolved with the image to be processed.
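As a sketch, the kernel of Equation (2) can be sampled on a finite grid, normalized, and convolved with the image; the kernel size and sigma below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size=5, sigma=1.0):
    """Sample the 2-D Gaussian of Equation (2) on a size x size grid."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return g / g.sum()                 # normalise so image brightness is preserved

def remove_gaussian_noise(img, size=5, sigma=1.0):
    """Convolve the image with the sampled Gaussian kernel."""
    return convolve(img.astype(np.float64), gaussian_kernel(size, sigma))
```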

2.2. Retinex Enhancement

In actual images, the contrast between foreground and background tends to be low, which is not conducive to extracting obstacle information. Therefore, in this work we increase the contrast between foreground and background with the Retinex enhancement method.

The basis of Retinex enhancement is that, from the perspective of light intensity, image information can be divided into incident light intensity and reflected light intensity, which form the low-frequency and the high-frequency information of the image, respectively. If the proportion of incident to reflected light intensity can be changed, the contrast can be adjusted to enhance the image. The mathematical description of single-scale Retinex enhancement is

$$r(x,y) = \log I(x,y) - \log\left[ F(x,y) * I(x,y) \right], \qquad F(x,y) = K \exp\left( -\frac{x^2 + y^2}{c^2} \right), \tag{3}$$

where $r(x,y)$ indicates the reflected information of the original image, $I(x,y)$ indicates the overall information of the original image, $*$ denotes convolution, and $c$ and $K$ are the two parameters controlling the incident information of the image. Note that the proportion of incident and reflected information can be changed by adjusting these two parameters. Here, $K$ is a normalization constant of the surround function $F$ and is not the circumference ratio $\pi$.

The single-scale Retinex enhancement uses only one group of $K$ and $c$; therefore, its capacity to enhance detailed features in different regions is limited. Hence, we use a multiscale Retinex enhancement method in this work to achieve an improved effect by adjusting several groups of $K$ and $c$, the final result being the linear superposition of all the single-scale enhancements. The mathematical form of the multiscale enhancement is

$$r(x,y) = \sum_{k=1}^{N} w_k \left\{ \log I(x,y) - \log\left[ F_k(x,y) * I(x,y) \right] \right\}, \tag{4}$$

where there are $N$ groups of controlling parameters $K_k$ and $c_k$ in the enhancement, and $w_k$ is the weight coefficient of the $k$th group's enhancement result.
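A minimal sketch of Equation (4), assuming a grayscale numpy image; the three surround scales and the equal weights are illustrative assumptions, not values from this paper.

```python
import numpy as np
import cv2

def multiscale_retinex(img, sigmas=(15.0, 80.0, 250.0), weights=None):
    """Weighted sum of single-scale Retinex results, r_k = log I - log(F_k * I)."""
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)    # equal weights by default
    f = img.astype(np.float64) + 1.0                   # +1 avoids log(0)
    r = np.zeros_like(f)
    for w, s in zip(weights, sigmas):
        surround = cv2.GaussianBlur(f, (0, 0), sigmaX=s)   # F_k * I
        r += w * (np.log(f) - np.log(surround))
    # Rescale to 8 bits for the subsequent segmentation stage.
    return cv2.normalize(r, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```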

2.3. Improved Otsu Threshold Segmentation

Since different features exhibit different pixel gray scales in an image, threshold-based segmentation can extract individual features effectively. However, manually set thresholds often deviate substantially and are not amenable to automatic adjustment, making it difficult to satisfy the real-time requirement of the mobile robot navigation task.

Among the methods capable of automatically adjusting the threshold value, the one-dimensional Otsu method is widely used and is implemented as follows.

Step 1. Count the gray-scale values of all pixels in the original image. Let $f$ indicate the original image; let $L$ indicate the number of gray-scale levels; and let $N$ indicate the number of pixels in the image. With $n_i$ denoting the number of pixels at gray level $i$, the relationship among these variables is

$$N = \sum_{i=0}^{L-1} n_i. \tag{5}$$

Step 2. Normalize the pixel count of each gray-scale level according to

$$p_i = \frac{n_i}{N}, \qquad \sum_{i=0}^{L-1} p_i = 1. \tag{6}$$

Step 3. Generate a threshold value $t$ randomly, and divide the pixels of the original image into the target aggregate $C_0$ and the background aggregate $C_1$; then calculate their probabilities and means according to

$$\omega_0 = \sum_{i=0}^{t} p_i, \quad \omega_1 = \sum_{i=t+1}^{L-1} p_i, \quad \mu_0 = \frac{1}{\omega_0} \sum_{i=0}^{t} i\, p_i, \quad \mu_1 = \frac{1}{\omega_1} \sum_{i=t+1}^{L-1} i\, p_i. \tag{7}$$

Step 4. Revise the current threshold value according to the target aggregate $C_0$ and the background aggregate $C_1$ until the optimal threshold value is attained, that is, until the between-class variance $\omega_0 \omega_1 (\mu_0 - \mu_1)^2$ is maximized. A minimal sketch of this procedure is given below.
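The following sketch implements Steps 1-4 for an 8-bit grayscale numpy image (so $L = 256$); the exhaustive sweep over candidate thresholds stands in for the random initialization and revision described in Steps 3 and 4.

```python
import numpy as np

def otsu_threshold(img):
    """One-dimensional Otsu: return the threshold t that maximises the
    between-class variance  omega0 * omega1 * (mu0 - mu1)^2  (Steps 1-4)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                      # Step 2: normalised histogram
    levels = np.arange(256, dtype=np.float64)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):                    # Steps 3-4: sweep all candidates
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```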

Given the real-time requirement for the processing algorithm in mobile robot navigation, performing the above operation over the full image plane is time consuming. In addition, background information in the navigation environment is often complicated, since information from multiple obstacles may be present, so it is difficult to separate all the obstacles with a single threshold value over the full image plane. Therefore, we improve the one-dimensional Otsu segmentation method as follows. The original image is first partitioned into subimages.

Otsu segmentation is then performed in each subimage, the background is set to white, and obstacle information is set to black. Finally, all the subimages are recombined, and adjacent black areas are merged to form each obstacle's information according to

$$D = \frac{\left| \mu_1 - \mu_2 \right|}{\sqrt{\sigma_1^2 / N_1 + \sigma_2^2 / N_2}},$$

where $\mu_1$ and $\mu_2$ are the pixel averages of the two image areas, $\sigma_1^2$ and $\sigma_2^2$ are the pixel variances of these two image areas, and $N_1$ and $N_2$ are the pixel numbers of these two areas; two black areas are merged when $D$ is sufficiently small.
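A minimal sketch of the block-wise segmentation, reusing otsu_threshold from the previous sketch; the 4 x 4 partition is an illustrative assumption, and the region-merging step is omitted for brevity.

```python
import numpy as np

def blockwise_otsu(img, rows=4, cols=4):
    """Apply Otsu independently in each subimage: obstacles black (0),
    background white (255)."""
    h, w = img.shape
    mask = np.empty_like(img)
    for i in range(rows):
        for j in range(cols):
            ys, ye = i * h // rows, (i + 1) * h // rows
            xs, xe = j * w // cols, (j + 1) * w // cols
            sub = img[ys:ye, xs:xe]
            t = otsu_threshold(sub)            # per-subimage threshold
            mask[ys:ye, xs:xe] = np.where(sub <= t, 0, 255)
    return mask
```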

3. Artificial Potential Field (APF) Navigation Strategy Based on Auxiliary Visual Information

Using the visual sensors and the image processing method described above, we obtain information about the obstacles in front of the mobile robot and build a structured map. Among the mobile robot navigation methods for structured environments, the artificial potential field (APF) method is the most commonly used. Its basis is that the target position exerts an attractive (gravitational) force on the robot while each obstacle exerts a repulsive force; the resultant of these forces drives the robot toward the target position while avoiding obstacles. However, the APF method has several limitations. Firstly, the robot cannot find a path to the target position when the environment information is too complicated. Secondly, the robot may stop before reaching the target position when it falls into a local minimum.

Accordingly, we construct a new strategy for mobile robot navigation in this work, combining the obtained visual obstacle information with the APF method. Figure 1 illustrates this navigation method, which is implemented in the following steps.

Step 1. Determine the starting position and target position of the robot, and construct the straight-line path between them.

Step 2. Enclose the irregular obstacle areas with rectangular marquees according to the obstacle information.

Step 3. Guide the robot to move forward and avoid the obstacles under the attraction of the target position. This step can be divided into several strategies according to the arrangement of the obstacles.
(a) Take the peripheral rectangular marquee as the repulsion calculation region when there is only one obstacle in front of the robot. In order to prevent the APF method from falling into a local minimum, a new repulsion field function is constructed as follows:

$$U_{\mathrm{rep}} = \begin{cases} \dfrac{1}{2}\, \eta \left( \dfrac{1}{\rho} - \dfrac{1}{\rho_0} \right)^2 \rho_g^{\,n}, & \rho \le \rho_0, \\[2mm] 0, & \rho > \rho_0, \end{cases} \tag{8}$$

where $\eta$ is the repulsive field parameter related to the distance between the robot and the obstacles and is a positive number; $\rho$ indicates the minimum distance between the robot and the obstacle; $\rho_g$ indicates the distance between the robot and the target position; $n$ is used to adjust the influence of the repulsive force according to the distance from the robot to the target position and takes a value in a preset interval; and $\rho_0$ indicates the repulsion range of the obstacle, shown as the circular yellow region at the rectangle corner point in Figure 1. The radius of the circular region is set to half the width of the robot body.
Compared with the repulsion function of the conventional APF method, we introduce the relative distance between the robot and the target position, as shown in (8), which helps avoid falling into a local minimum before the robot reaches the target position. The corresponding gravitational field function is

$$U_{\mathrm{att}} = \frac{1}{2}\, \xi\, \rho_g^{\,2}, \tag{9}$$

where $\xi$ is the gravitational field factor related to the distance between the robot and the target position and is a positive number. A minimal sketch of these two potential functions is given after this list.
(b) When the distance between two obstacles is large enough, the contralateral corners of the two rectangular regions can be connected to determine the feasible passing range of the robot, and the best path is then determined.
(c) When the distance between the two obstacles allows the robot to pass but is too narrow, the rectangular marquee is no longer used. In this case, the bulges at the corresponding positions of the two obstacles are located, and the distance between them is calculated to search for a gap allowing the robot to pass.
(d) When the distance between the two obstacles does not allow the robot to pass, the robot is guided, following strategy (a), to avoid the obstacles from the left side of the left obstacle or the right side of the right obstacle.
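As referenced in strategy (a) above, the following is a minimal sketch of Equations (8) and (9); the default values of eta, rho0, n, and xi are illustrative assumptions, not values from this paper.

```python
def repulsive_potential(rho, rho_g, eta=1.0, rho0=0.5, n=0.5):
    """Improved repulsion of Equation (8): the conventional term is scaled by
    rho_g ** n so that the repulsion fades as the robot approaches the goal,
    which helps avoid local minima short of the target."""
    if rho > rho0:                     # outside the obstacle's repulsion range
        return 0.0
    return 0.5 * eta * (1.0 / rho - 1.0 / rho0) ** 2 * rho_g ** n

def attractive_potential(rho_g, xi=1.0):
    """Gravitational field of Equation (9), growing with distance to the goal."""
    return 0.5 * xi * rho_g ** 2
```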

4. Experimental Results

We performed experiments to verify the validity of the obstacle recognition and navigation method based on visual sensing. We chose the Srv-1 robot (Figure 2) as the mobile robot in the navigation experiment.

The Srv-1 is a tracked robot measuring 12 cm in length, 10 cm in width, and 8 cm in height. Its main sensor for receiving external information is a CMOS camera mounted at the front of the robot body with an adjustable resolution. This robot was chosen because of its small size and visual sensing capability, which make it suitable for testing the navigation method in the experimental environment (Figure 3).

As shown in Figure 3, there are five obstacles. The starting position of the mobile robot is A, and the target position is B. It can be seen that there is noise in the image. Meanwhile, the obstacles cast several shadow areas because of the light source position, interfering with the accurate recognition of the obstacles. The obstacle feature information recognized by the method described in Section 2 is shown in Figure 4.

As shown in Figure 4, the feature information of the five obstacles was recognized accurately. This is because the Retinex enhancement method effectively increased the contrast between the obstacles and the background and merged the shadows of the obstacles into the background. After the contrast was increased, the improved Otsu threshold segmentation method was able to separate the obstacle information accurately.

Based on the recognition result, the environmental information for the robot's moving process was constructed. Using the navigation algorithm described in Section 3, the path planned for the robot is shown in Figure 5.

It can be seen in Figure 5 that the APF navigation method based on auxiliary visual information was able to plan an ideal moving path for the robot. Starting from A, the repulsion field generated by the lower-right corner point (O1) of the rectangular marquee of the first obstacle deflected the ideal path from A to B, and the robot deviated to a new path to avoid the first obstacle before returning to the ideal path. Next, the robot avoided the repulsion field at O2 successfully. When entering the region between O3 and O4, because of its narrow width, the robot moved forward along a path parallel to the edge of the rectangular marquee. When reaching the repulsion field at O5, the robot avoided the last obstacle successfully and finally arrived at the target point B.

To further verify the performance in a complex environment, the proposed algorithm was compared with an existing algorithm. A more complex environment was constructed, as shown in Figure 6. The scene contains a grid and a number of obstacle blocks; each block is half the length and half the width of a grid cell, and the robot's width equals the block's width. Each obstacle is made up of one, two, or several blocks, and there are 18 obstacles in the scene.

In the experiment, the initial position of the robot was set at point C and the target position at point D. Three robots took part. Robot 1, equipped with vision sensors, carried out the task along the red path planned by the proposed algorithm. Robots 2 and 3, equipped with ultrasonic distance sensors, carried out the same task as robot 1 along the green and blue paths planned by the conventional artificial potential field algorithm.

As the comparison in Figure 6 shows, the entry positions of robots 2 and 3 are both far from the central axis of the scene map. This is because there are more obstacles in the middle of the scene; these obstacles disturbed the feedback from the ultrasonic sensors, making it hard for the robots to judge whether the situation was safe. Enlarged comparison results are shown in Figure 7.

As the partial enlargement in Figure 7 shows, robot 1 chose the center of the area with densely distributed obstacles as its entry and found a safe path through the area. This demonstrates that the proposed algorithm is effective and that the information acquired by the vision sensors is accurate. Robots 2 and 3 chose longer paths that avoided the whole complex-obstacle area in order to ensure safety.

The comparison between robots 1 and 3 is shown in Figure 8. Robot 1 kept to the optimal path under the combined action of the attraction of the target position and the local repulsion of the obstacles. Under the attraction of the target position, robot 3 tried to return to the central axis of the scene map; however, because of the ultrasonic sensor feedback and the safe-area setting of its path planning, robot 3 could not pass through the middle of the scene. After repeated failures, it turned back to a more distant position. This comparison shows that the proposed algorithm achieves higher effectiveness and better path planning.

5. Conclusion

Visual sensors are important for mobile robots to receive environmental information. By receiving environmental information with visual sensors and building maps according to the obstacle features of the environmental images, an ideal moving path for the mobile robot can be planned successfully. We divide the environmental image processing into three stages: noise removal, contrast enhancement, and obstacle feature extraction. After obtaining the obstacle information with visual sensors, we further design an APF-based navigation strategy using auxiliary visual information. In this method, the repulsion field function is improved so that the navigation algorithm avoids falling into local minima, and the rectangular marquees enclosing the obstacles are used to handle irregular visual feature areas. Experimental results demonstrate that the obstacle extraction method proposed in this work can effectively handle the influence of noise and shadows and accurately extract obstacle information. The navigation method makes full use of the visual feature information and achieves correct navigation for the mobile robot.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was financially supported by the International S&T Cooperation Program of China (Grant no. 2014DFA70470), the Instrument Development Major Program of the National Natural Science Foundation of China (Grant no. 61327811), and the Jiangsu Natural Science Funds for Distinguished Young Scholars (Grant no. BK2012005).