Advances in Mechanical Engineering
Volume 2013 (2013), Article ID 697415, 11 pages
http://dx.doi.org/10.1155/2013/697415
Research Article

A Hybrid Architecture for Vision-Based Obstacle Avoidance

Mehmet Serdar Güzel1 and Wan Zakaria2

1Department of Computer Engineering, Ankara University, 06830 Ankara, Turkey
2Faculty of Electrical and Electronic Engineering, UTHM, 86400 Batu Pahat, Johor, Malaysia

Received 11 February 2013; Accepted 20 August 2013

Academic Editor: Shao Zili

Copyright © 2013 Mehmet Serdar Güzel and Wan Zakaria. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper proposes a new obstacle avoidance method, termed the Hybrid Architecture, which uses a single monocular camera as its only sensor. The architecture integrates a high-performance appearance-based obstacle detection method into an optical flow-based navigation system. It was designed and implemented to run both methods simultaneously and combines their results through a novel arbitration mechanism. The proposed strategy fuses the two vision-based obstacle avoidance methods via this arbitration mechanism in order to provide a safer obstacle avoidance system. Accordingly, a series of experiments was conducted to establish the adequacy of the design of the obstacle avoidance system. The results demonstrate the characteristics of the proposed architecture and show that its performance is somewhat better than that of the conventional optical flow-based architecture. In particular, the robot employing the Hybrid Architecture avoids lateral obstacles in a smoother and more robust manner than when using the conventional optical flow-based technique.

1. Introduction

One of the key research problems in mobile robot navigation concerns methods for obstacle avoidance. To cope with this problem, most autonomous navigation systems rely on range data for obstacle detection. Ultrasonic sensors, laser rangefinders, and stereo vision techniques are widely used for estimating range. However, all of these have drawbacks. Ultrasonic sensors suffer from poor angular resolution, while laser rangefinders and stereo vision systems are relatively expensive; moreover, the computational complexity of stereo vision is a further key challenge [1]. In addition, range sensors are not capable of differentiating between types of ground surface, such as pavements and adjacent flat grassy areas. Overall, the computational complexity of the avoidance algorithms and the cost of the sensors are the most critical factors for real-time applications. Monocular vision-based systems avoid these problems and are able to provide appropriate solutions to the obstacle avoidance problem.

There are two general types of monocular vision-based obstacle avoidance techniques: those that compute apparent motion and those that rely on the appearance of individual pixels. The first group comprises the optical flow-based techniques, in which the main idea is to control the robot using optical flow data, from which the heading direction of the observer and time-to-contact values are obtained [2]. One way of using these values is to act so as to achieve a certain type of flow. For instance, to maintain ambient orientation, the desired optic flow is no flow at all; if some flow is detected, the robot should change the forces produced by its effectors so as to minimize this flow, based on the control law [3]. The second group comprises the appearance-based methods, which in essence rely on qualitative information. They utilize basic image processing techniques that detect pixels differing in appearance from the ground and classify them as obstacles. The algorithms perform in real time, provide a high-resolution obstacle image, and can operate in a variety of environments [4]. The main advantages of these two types of conventional methods are their ease of implementation and their ready availability for real-time applications.

2. Literature Survey

Optical flow, as illustrated in Figure 1, is an approximation of the motion field, summarizing the temporal changes in an image sequence. Optical flow estimation is one of the central problems in computer vision. Several methods can be employed to determine optical flow, namely, block-based, differential, phase-correlation, and variational methods [5, 6]. There has been wide interest in the use of optical flow for vision-based mobile robot navigation. The visual control of motion in flying insects has been shown to provide important clues for navigational tasks such as centred flight in corridors and the estimation of distance travelled, encouraging new biologically inspired approaches to mobile robot navigation using optical flow. Behaviours such as corridor centring, docking, and visual odometry have all been demonstrated in practice using visual motion for the closed-loop control of a mobile robot [7]. In recent years, there has been a growing body of literature on optical flow-based mobile robot navigation. Bernardino and Santos-Victor [8] used biologically inspired behaviours based on stereo vision for obstacle detection. A trinocular vision system for mobile robot navigation has also been proposed [9]. These methods, in some ways, emulate corridor-following behaviour; nevertheless, their main disadvantage is the need to employ more than one camera. Alternatively, a number of studies relying on monocular vision have proposed the employment of optical flow techniques for mobile robot navigation [2, 4, 10, 11].
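As a concrete illustration of the differential estimation step described above, the Python sketch below computes dense optical flow between two consecutive grayscale frames with OpenCV's Farnebäck method and summarizes the average flow magnitude in the left and right halves of the image. The use of the Farnebäck method and the parameter values shown are illustrative assumptions, not the implementation used in this paper.

```python
import cv2
import numpy as np

def dense_flow_summary(prev_gray, curr_gray):
    """Estimate dense optical flow between two 8-bit grayscale frames and
    return the mean flow magnitude in the left and right image halves.
    Minimal sketch using OpenCV's Farneback method; parameters are
    illustrative, not tuned values from the paper."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        0.5,   # pyramid scale
        3,     # pyramid levels
        15,    # averaging window size
        3,     # iterations per level
        5,     # pixel neighbourhood for polynomial expansion
        1.2,   # Gaussian sigma for the expansion
        0)     # flags
    mag, _ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    half = mag.shape[1] // 2
    return mag[:, :half].mean(), mag[:, half:].mean()
```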

Figure 1: Illustration of flow vectors and motion animation. (a) Destination image, (b) source image, and (c) movement animation.

Appearance-based methods, which identify locations on the basis of sensory similarities, are a promising potential solution to mobile robot navigation. One of the main ideas behind this strategy is to head the robot towards an obstacle-free position using similarities between the template and the active images [12]. This is called template matching and is discussed in the next section. The similarity between the image patterns can be obtained by using feature detectors, including corner-based detectors, region-based detectors, and distribution-based descriptors [13]. However, most of these techniques consume too much processing time to be appropriate for real-time systems. To address this performance problem, algorithms have been designed based on the appearance of individual pixels. The classification of obstacles is carried out using differences between pixels in the template and active image patterns, and any pixel that differs in appearance from the ground is classified as an obstacle. The method requires three assumptions that are reasonable for a variety of indoor and outdoor environments:
(i) obstacles must differ in appearance from the ground;
(ii) the ground must be flat;
(iii) there must be no overhanging obstacles.

The first assumption distinguishes obstacles from the ground, while the second and third assumptions are required to estimate the distances between detected obstacles and the robot. There are several models for representing colour. The principal model is the RGB (red, green, and blue) schema used in most image file formats; however, colour information in this model is very noisy at low intensity. The RGB format is therefore frequently converted to HSV (hue, saturation, and value) or HSI (hue, saturation, and intensity). Hue is what humans perceive as colour; saturation is determined by a combination of light intensity and the extent to which it is distributed across the spectrum of different wavelengths; and value is related to brightness. In HSI, I is an intensity value ranging from 0 to 1, where 0 represents black and 1 represents white. These colour spaces are assumed to be less sensitive to noise and lighting conditions.
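The pixel-level classification described above can be sketched as follows: a reference window assumed to contain only ground is taken from the bottom of the frame, normalised hue and value histograms are built from it, and any pixel whose histogram bin falls below a threshold is marked as an obstacle. This is a hedged sketch of the general appearance-based idea under the three assumptions listed above; the window size, bin counts, and threshold are illustrative assumptions, not the authors' exact implementation.

```python
import cv2
import numpy as np

def appearance_obstacle_mask(bgr, ref_rows=40, bin_thresh=0.005):
    """Classify pixels that differ in appearance from a ground reference
    strip as obstacles (1) and ground-like pixels as free (0).
    Sketch of the appearance-based idea; thresholds are assumptions."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    ref = hsv[-ref_rows:, :, :]            # bottom strip assumed to show only ground
    # Normalised hue and value histograms of the reference area.
    h_hist = cv2.calcHist([ref], [0], None, [32], [0, 180]).ravel()
    v_hist = cv2.calcHist([ref], [2], None, [32], [0, 256]).ravel()
    h_hist /= h_hist.sum()
    v_hist /= v_hist.sum()
    # Look up the histogram bin of every pixel in the full image.
    h_bins = (hsv[:, :, 0].astype(int) * 32) // 180
    v_bins = (hsv[:, :, 2].astype(int) * 32) // 256
    ground_like = (h_hist[h_bins] > bin_thresh) & (v_hist[v_bins] > bin_thresh)
    return (~ground_like).astype(np.uint8)  # 1 = obstacle, 0 = free
```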

3. Hybrid Architecture

Optical flow-based methods suffer from two major problems. The first and most important is illumination, as the methods are markedly affected by variations in lighting and shadows [3]. The second major issue is sensitivity to noise and distortion. Various integrated methods for solving these problems have been proposed; nevertheless, this remains a key challenge in employing optical flow methodologies for mobile robot navigation. Appearance-based methods have significant processing and performance advantages which make them a good alternative for vision-based obstacle avoidance. Nevertheless, these techniques still suffer from illumination problems and are highly sensitive to floor imperfections, as well as to the physical structure of the terrain. To overcome these drawbacks, an alternative method has been proposed which relies on a fusion of both techniques, as illustrated in Figure 2. The main strategy behind this proposal is to integrate the results obtained from an appearance-based method into the proposed optical flow-based architecture. In order to achieve this integration, the flow equations are updated with respect to an estimated binary image. However, the binary image, expressed in Boolean logic (0/1), needs to be converted into flow-compatible values before it can be reasoned over. The method used in this study obtains the extreme values (the highest and lowest average magnitude values) from the flow clusters; these then replace the Boolean values of the binary image, occupied ("1") members being replaced with the highest value and free ("0") members with the lowest value. Algorithm 1 illustrates how the estimated Boolean values from the appearance-based method are converted into flow values.

Algorithm 1: Conversion algorithm.
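Since Algorithm 1 is only reproduced as a figure, the sketch below restates the conversion as described in the text: each Boolean segment is replaced by one of the two extreme average flow magnitudes, occupied segments receiving the highest value and free segments the lowest. The function name and the occupied/free mapping are assumptions made for illustration.

```python
def convert_segments(binary_segments, flow_cluster_means):
    """Replace Boolean segments from the appearance-based detector with
    extreme optical-flow magnitudes, as described for Algorithm 1.
    binary_segments: 0/1 values (1 = occupied, 0 = free) -- assumed mapping.
    flow_cluster_means: average flow magnitudes of the current flow clusters."""
    highest = max(flow_cluster_means)   # extreme value assigned to occupied segments
    lowest = min(flow_cluster_means)    # extreme value assigned to free segments
    return [highest if seg else lowest for seg in binary_segments]
```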

Figure 2: Flowchart of the proposed hybrid architecture.

The conversion procedure for each segment can be formalized as
\[ S_i' = \begin{cases} \Phi_{\max}, & S_i = 1 \ (\text{occupied}) \\ \Phi_{\min}, & S_i = 0 \ (\text{free}) \end{cases} \tag{1} \]
where $S_i$ represents the $i$th segment extracted from the corresponding binary image, $S_i'$ is its updated equivalent, and $\Phi_{\max}$ and $\Phi_{\min}$ are the extreme (highest and lowest) average flow magnitudes.

Equation (2) is used to calculate the new heading angle, incorporating the corresponding members of the converted map. The new heading angle $\theta$ and the updated version of the control equation can be expressed as
\[ \theta = \Delta_L - \Delta_R, \tag{2} \]
where $\Delta_L$ and $\Delta_R$ are the sums of the magnitudes of the optical flow and of the converted map regions (with respect to the extreme flow values) in the visual hemifields on the left and right sides of the robot's body. These can be detailed as
\[ \Delta_L = \sum_{i=1}^{n} \left( F_{L,i} + S'_{L,i} \right), \qquad \Delta_R = \sum_{i=1}^{n} \left( F_{R,i} + S'_{R,i} \right), \tag{3} \]
where $F_{L,i}$ and $F_{R,i}$ represent the average magnitudes of flow vectors in the left and right clusters, respectively, whereas $S'_{L,i}$ and $S'_{R,i}$ represent the converted segments from the binary image ($n$ is the number of clusters and is set to 4).
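A hedged sketch of this fused control law is given below: the average flow magnitudes and the converted segments of each visual hemifield are summed, and the new heading angle is taken proportional to their imbalance. The gain, the sign convention, and the exact proportional form are assumptions for illustration; the paper's (2) defines the actual expression.

```python
def fused_heading_angle(flow_left, flow_right, seg_left, seg_right, gain=1.0):
    """Sketch of the fused control law: sum the average flow magnitudes and
    the converted segments in each visual hemifield and steer towards the
    side with the smaller combined value.
    gain and the proportional form are assumptions, not the paper's (2)."""
    delta_left = sum(flow_left) + sum(seg_left)     # combined left-hemifield value
    delta_right = sum(flow_right) + sum(seg_right)  # combined right-hemifield value
    return gain * (delta_right - delta_left)        # positive -> steer left (convention assumed)
```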

The flowchart of the overall control architecture is illustrated in Figure 2. The image sequence is used by the optical flow module to calculate flow vectors and the corresponding parameters, such as the focus of expansion (FOE) and the time to contact (TTC). Simultaneously, the most recently acquired image is correlated with a template in order to estimate the free ("0") and occupied ("1") parts of the current image using the appearance-based obstacle detection method. The conversion module converts the output of the appearance-based detection into flow-based values. The control law is generated from the inputs provided by both the optical flow and conversion modules (see (2)). Finally, the behaviour module selects the appropriate behaviour, based on its arbitration mechanism, to steer the robot towards free space.
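One possible way to organise the arbitration step described for Figure 2 is sketched below: when the time to contact indicates an imminent frontal collision, the Change Direction behaviour pre-empts Steering and a 90° avoidance manoeuvre is issued; otherwise the heading from the fused control law is followed. The 90° turn and the priority ordering come from the text; the function name and the TTC threshold value are illustrative assumptions.

```python
def behaviour_arbitration(ttc, heading_angle, ttc_threshold=2.0):
    """Minimal sketch of the arbitration mechanism: Change Direction
    pre-empts Steering when a frontal collision is imminent.
    ttc_threshold (seconds) is an assumed value, not from the paper."""
    if ttc is not None and ttc < ttc_threshold:
        return ("change_direction", 90.0)   # 90-degree avoidance manoeuvre (from the text)
    return ("steering", heading_angle)      # otherwise follow the fused control law
```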

Figures 3, 4, 5, and 6 present the output of both detection algorithms for different frames captured from a navigation scenario. The control parameters of each frame are included in Table 1.

Table 1: Estimated steering angles for experiments.
Figure 3: Frame 1, (a) flow vectors and (b) binary output from appearance-based method.
Figure 4: Frame 32, (a) flow vectors and (b) binary output from appearance-based method.
Figure 5: Frame 53, (a) flow vectors and (b) binary output from appearance-based method.
Figure 6: Frame 97, (a) flow vectors and (b) binary output from appearance-based method.

The hybrid technique is able to negotiate and avoid walls and doors by benefiting from the results of the optical flow-based navigation technique, which uses the frontal optic flow to estimate the so-called time to contact before a frontal collision is likely to occur. Furthermore, it is able to avoid lateral obstacles in a safer and smoother manner than the conventional optical flow technique. Figure 5 presents such a scenario, in which the robot using the optical flow-based method is not able to avoid the obstacle because the system does not generate an appropriate steering angle. The major difficulty with optical flow methods in mobile robot navigation is that, despite the assumption of constant illumination, lighting conditions remain vulnerable to environmental factors, which may cause miscalculations of the flow vectors. The hybrid system, however, integrates the results of the appearance-based method into the control law, which reinforces the overall control strategy. For this scenario, the hybrid system generates a sharper avoiding manoeuvre, which allows the robot to pass the obstacle without colliding with it. Figure 6 presents another scenario in which the hybrid method generates a safer avoidance manoeuvre than the conventional optical flow method. This is because the hybrid architecture merges the optical flow method with the appearance-based method, which results in a better response to the lateral obstacle. Figures 3 and 6 present scenarios in which the environments are partly open and safe; the results reveal that the control parameters generated by both methods for these scenarios are similar (see Table 1).
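The time to contact used for frontal obstacles can be approximated from the expansion of the flow field around the focus of expansion; the sketch below applies that standard approximation, taking the median ratio of each point's distance from the FOE to its radial flow speed. This is a hedged illustration of the general idea, not necessarily the exact TTC formula used in the paper.

```python
import numpy as np

def time_to_contact(points, flow_vectors, foe):
    """Approximate time to contact (in frames) as the median ratio of a
    point's distance from the focus of expansion (FOE) to its radial flow
    speed. Standard approximation; not necessarily the paper's formula.
    points, flow_vectors: arrays of shape (N, 2); foe: array of shape (2,)."""
    offsets = points - foe                           # pixel offsets from the FOE
    radial_dist = np.linalg.norm(offsets, axis=1)
    # Component of each flow vector along the radial direction from the FOE.
    radial_speed = np.sum(offsets * flow_vectors, axis=1) / np.maximum(radial_dist, 1e-6)
    valid = radial_speed > 1e-3                      # keep expanding (approaching) points only
    if not np.any(valid):
        return None
    return float(np.median(radial_dist[valid] / radial_speed[valid]))
```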

Despite their success with lateral obstacles, conventional appearance-based obstacle detection methods tend to fail to detect objects such as walls and doors that span the entire field of view. This is because appearance-based methods classify obstacles using differences between pixels in the template and the active image patterns, where any pixel that differs in appearance from the ground is classified as an obstacle. Region segmentation also has other drawbacks, one of which is that the thresholding technique requires significant contrast between the background and the foreground in order to succeed. The technique essentially works well in environments dominated by a single colour. Accordingly, if the colour of the doors or walls is similar to the floor pattern, the algorithm may easily fail to complete the navigation task. Figure 7 illustrates an example where the appearance-based method is not able to distinguish precisely between the door and the floor owing to the similarity of their colour patterns.

Figure 7: Similarity of views, (a) original image and (b) binary image.

Two additional examples are illustrated in Figures 8 and 9, where the path of the robot is obscured by large obstacles. The results shown in Figure 8 indicate that both techniques can detect the obstacles. However, owing to the lighting conditions, the second technique fails to segment some parts of the image where reflections are present on the white floor, as shown in Figure 8(b). The first technique, on the other hand, successfully estimates the obstacles using the magnitudes of the flow vectors. Figure 9 demonstrates another scenario, in which the obstacle is rather close to the goal. Here, the second method is more useful than the conventional optical flow-based technique, despite the spurious regions visible in Figure 9(b). This is because appearance-based methods rely on pixel differences, which provide image segmentation independent of the distance to the goal. As discussed above, the first method focuses on the practical use of optical flow and visual motion information in performing the obstacle avoidance task in real indoor environments. However, when an obstacle is very close to the robot, the gradients usually cannot be calculated accurately, which may result in incomplete calculation or allocation of flow vectors.

Figure 8: First large obstacle, (a) original image and (b) binary image.
Figure 9: Second large obstacle, (a) original image and (b) binary image.

A final example is included in this study in order to compare the results of the conventional appearance-based detection method and the proposed method under different illumination conditions. In normal lighting conditions, the appearance-based algorithm is able to separate the given obstacle from the ground surface successfully, as can be seen in Figures 10(a) and 10(b). However, once the illumination conditions in the working environment vary, the algorithm tends to fail. For instance, in Figure 10(c), the same room becomes brighter owing to the change in illumination, and Figure 10(d) shows how the algorithm consequently fails to segment the image properly. This experiment illustrates a key characteristic of conventional appearance-based approaches: they are quite vulnerable to changes in illumination. Figure 10(e), on the other hand, shows the response of the proposed hybrid algorithm under the brighter condition, in which the flow vectors are estimated successfully and the given obstacle is detected. To evaluate the performance of the proposed navigation method, a series of experiments is discussed in the following section.

Figure 10: Changing illumination, (a) original image under constant illumination, (b) binary image, (c) brighter image, (d) binary image, and (e) estimated flow vectors.

4. Discussion and Conclusions

The navigation systems were uploaded onto the Pioneer mobile robot shown in Figure 11. All experiments were conducted in and around an area of the Robotics and Automation Laboratory (RAL) at Newcastle University, illustrated in Figure 12. Hard-board panels were used to simulate walls during the experiments.

Figure 11: The Pioneer robot with additional sensors and peripheral devices.

This section presents the design of the experiments used to evaluate the proposed hybrid vision-based obstacle avoidance system. The experiments were conducted in the test environment shown in Figure 12. In order to verify the performance of the proposed system, the results for each scenario are compared with those of the conventional optical flow method. The main aim of these experiments is to navigate the robot through these environments, according to the designed scenarios, without hitting any obstacles before a certain amount of time has passed. The robot navigates at a linear speed of 0.15 m/s, and the time limit for fulfilling each scenario is 200 seconds; if the robot wanders through the environment without colliding until the end of the time limit, it is considered to have completed the task successfully. All overhead lights in the laboratory and corridor environment were turned on during the capture of both the snapshot and current images, in an attempt to maintain constant illumination over the entire experimental area. Images were captured in JPEG format and then converted to PGM format.

Figure 12: Robotics and Automation Research Laboratory, Newcastle University (including hard-board panels).
4.1. Definition of Scenarios

Several different scenarios were set up to evaluate the performance of the proposed system, arranged in increasing order of difficulty. The experiments were conducted in the test environment discussed previously.

Each individual test was repeated five times, and the average of each performance parameter was determined. The results of each series of tests were found to be very consistent, principally because the starting position of the robot and the sizes and positions of the obstacles are identical across runs of the same scenario. Two of these scenarios are discussed in this paper, together with an example demonstrating the limitations of the proposed architecture. Two vision-based obstacle avoidance techniques were employed, namely, the hybrid (FS) and the optical flow-based (OFB) techniques. In order to provide a precise comparison of the test results, each technique is integrated with the proposed control architecture and the behavioural strategy discussed in Section 3. Table 2 displays the initial parameters of the navigation algorithms used in the experiments.

Table 2: Initial parameters for experiments.

Figure 13 presents the first scenario, conducted in the laboratory, in which the robot was required to navigate an open environment. The results of this scenario employing the FS technique are shown in Figure 13(a). The robot navigates through the environment successfully without collision. It negotiates both the door and the wall, avoiding them with a 90° left-turn manoeuvre, and then proceeds forward along a left-curved trajectory, eventually returning to its starting point. The results demonstrate that the FS technique produces smooth and robust behaviour for this navigation task.

Figure 13: Estimated trajectories for scenario 1, (a) FS and (b) OFB.

The OFB technique performs the navigation without colliding with any obstacle in a smooth manner, as shown in Figure 13(b), negotiating the door and walls in turn. The results demonstrate that performance in these experiments is surprisingly reliable for this scenario. Table 3 presents the performance measures of each method for this scenario. The FS technique performs the task successfully in every repetition. The OFB fails once, and although its overall performance is better than expected, it generates a higher value of the corresponding performance measure than the FS method.

Table 3: Performance measures for scenario 1.

Figure 14 illustrates the second scenario, in which the robot is required to navigate the laboratory environment with three unexpected obstacles placed along its path. Figure 14(a) presents the navigation results of the FS method for this scenario. The robot begins its navigation and then detects the first obstacle, which it avoids successfully; subsequently it avoids the second obstacle. After this, the robot negotiates the walls and the third obstacle, all of which are successfully avoided by following a rectangular path. The navigation results of the OFB technique for this scenario are given in Figure 14(b), where the robot avoids the first obstacle but collides with the second. After this, the robot passes the second room and is stopped. The performance measurements for this scenario are given in Table 4.

Figure 14: Estimated trajectories for scenario 2, (a) FS and (b) OFB.
4.2. Comparison and Evaluation of Methods

In these test scenarios, the positions of all obstacles in the test environment are unknown to the robot. Two scenarios were selected for discussion in this section. The results reveal that the OFB technique addresses the use of optical flow to supervise the navigation of mobile robots. It essentially utilizes control laws aiming to detect the presence of obstacles close to the robot based on information about changes in image brightness. The technique performs better than expected in steering the robot effectively, especially in open environments, as illustrated in Figure 13(b). The major difficulty with employing optical flow in mobile robot navigation is that it cannot always be determined whether a change in the intensity value of a pixel is caused by actual motion or by a change in illumination. In addition, despite the assumption of constant illumination, lighting conditions may change significantly owing to environmental factors, which optical flow techniques are known to have difficulty in handling. These effects may cause the miscalculation of flow vectors and can result in collision, as illustrated in Figure 14(b). The OFB is capable of negotiating walls and doors successfully, which makes the method flexible in partially cluttered indoor environments. However, the OFB technique is not able to avoid external obstacles deliberately placed along the path of the robot as effectively as the FS technique.

It is proposed that the FS technique improves on the performance of the OFB method by fusing the results of the two techniques within the optical flow-based control law. Figure 14(a) reveals the capability of this technique in partially cluttered environments, including those with external obstacles. The aim of the technique is to integrate the results of the appearance-based detection technique into the optical flow-based navigation architecture. The technique is able to negotiate and avoid walls and doors by benefiting from the results of the optical flow-based navigation technique, employing the frontal optic flow to estimate the so-called time to contact before a frontal collision is likely to occur. It is also able to avoid lateral obstacles more smoothly than the conventional optical flow technique. The resulting balance strategy tends to maintain equal distances to obstacles on both sides of the robot, exploiting the results of the appearance-based detection technique. The test results reveal that the overall performance of the system is better than that of the conventional technique, but it is still vulnerable to lighting conditions, illumination problems, and floor imperfections.

The characteristic problems of these conventional methods may still affect the performance of the proposed method. Figure 14(b) illustrates the behaviour of the proposed algorithm in cases of frontal obstacles spanning the entire field of view. When the robot remains blind (makes no decision) and the TTC value indicates a high possibility of collision, the behavioural module is triggered, in which the Change Direction behaviour has a higher priority than the Steering behaviour; thus, a 90° turning manoeuvre is performed to avoid the obstacle. The FS method does not extract features from the images but only measures the differences between them, which makes the technique appropriate for real-time applications; however, this may be a disadvantage in more complex situations.

An example illustrating the limitations of the proposed technique is shown in Figure 15, where the robot is not able to avoid the obstacles and collides with the right wall. This case represents a typical trap situation for this method; the robot cannot avoid all obstacles in such complex situations. Table 5 highlights the percentage improvement in performance of the FS over the OFB for the Pioneer robot, and Tables 3 and 4 consistently demonstrate improved performance. Accordingly, the FS navigation method offers better overall performance in terms of safety and consistency of motion than the OFB method.

Table 4: Performance measures for scenario 2.
Table 5: Performance improvement of the FS over the OFB.
Figure 15: Experimental results for scenario 3 (trap situation), (a) scenario 3 and (b) FS method.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

One of the authors sincerely thanks his colleagues who have offered encouragement along this long way, especially all members of the robotics lab.

References

1. I. Ulrich and I. Nourbakhsh, “Appearance-based obstacle detection with monocular color vision,” in Proceedings of the AAAI National Conference on Artificial Intelligence, Austin, Tex, USA, July-August 2000.
2. M. S. Guzel and R. Bicker, “Optical flow based system design for mobile robots,” in Proceedings of the IEEE International Conference on Robotics, Automation and Mechatronics (RAM '10), pp. 545–550, Singapore, June 2010.
3. E. B. Contreras, “A biologically inspired solution for an evolved simulated agent,” in Proceedings of the 9th Annual Genetic and Evolutionary Computation Conference (GECCO '07), pp. 206–213, London, UK, July 2007.
4. G. N. DeSouza and A. C. Kak, “Vision for mobile robot navigation: a survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237–267, 2002.
5. J. L. Barron, D. J. Fleet, and S. S. Beauchemin, “Performance of optical flow techniques,” International Journal of Computer Vision, vol. 12, no. 1, pp. 43–77, 1994.
6. B. Atcheson, W. Heidrich, and I. Ihrke, “An evaluation of optical flow algorithms for background oriented schlieren imaging,” Experiments in Fluids, vol. 46, no. 3, pp. 467–476, 2009.
7. D. M. Szenher, Visual homing in dynamic indoor environments [Ph.D. thesis], University of Edinburgh, Edinburgh, UK, 2008.
8. A. Bernardino and J. Santos-Victor, “Visual behaviours for binocular tracking,” Robotics and Autonomous Systems, vol. 25, no. 3-4, pp. 137–146, 1998.
9. A. A. Argyros and F. Bergholm, “Combining central and peripheral vision for reactive robot navigation,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '99), pp. 646–651, Fort Collins, Colo, USA, June 1999.
10. S. Szabo, D. Coombs, M. Herman, T. Camus, and H. Liu, “A real-time computer vision platform for mobile robot applications,” Real-Time Imaging, vol. 2, no. 5, pp. 315–327, 1996.
11. K. Souhila and A. Karim, “Optical flow based robot obstacle avoidance,” International Journal of Advanced Robotic Systems, vol. 4, no. 1, pp. 13–16, 2007.
12. R. F. Vassallo, H. J. Schneebeli, and J. Santos-Victor, “Visual servoing and appearance for navigation,” Robotics and Autonomous Systems, vol. 31, no. 1, pp. 87–97, 2000.
13. A. Yilmaz, O. Javed, and M. Shah, “Object tracking: a survey,” ACM Computing Surveys, vol. 38, no. 4, article 13, 2006.