Mathematical Problems in Engineering

Volume 2015, Article ID 168645, 12 pages

http://dx.doi.org/10.1155/2015/168645

## Discrete-State-Based Vision Navigation Control Algorithm for One Bipedal Robot

School of Mechanical Engineering, Northwestern Polytechnical University, Xi’an, Shaanxi 710072, China

Received 28 November 2014; Revised 28 February 2015; Accepted 3 March 2015

Academic Editor: Victor Santibáñez

Copyright © 2015 Dunwen Wei. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Navigation toward a specific objective can be defined by specifying a desired timed trajectory. The concept of a desired direction field is proposed to deal with such navigation problems. To lay down a principled discussion of the accuracy and efficiency of navigation algorithms, strictly quantitative definitions of tracking error, actuator effect, and time efficiency are established. In this paper, a vision navigation control method based on the desired direction field is proposed. This method uses discrete image sequences to form a discrete state space, which is especially suitable for bipedal walking robots with a single camera walking on a barrier-free plane surface to track a specific objective without overshoot. The shortest path method (SPM) is proposed to design such a direction field with the highest time efficiency. In addition, an improved control method based on a canonical piecewise-linear function (PLF) is proposed. To restrain the noise disturbance from the camera sensor, a band width control method is presented that significantly decreases the influence of the error. The robustness and efficiency of the proposed algorithm are illustrated through a number of computer simulations that take the camera sensor error into account. Simulation results show that robustness and efficiency can be balanced by choosing a proper controlling value of the band width.

#### 1. Introduction

Vision navigation of mobile robots has been known as an open and challenging problem over the last few decades [1, 2]. Currently, most vision navigation systems are investigated in 3D space [3–5]. 3D vision navigation needs multiple cameras [3, 4, 6] or an omnidirectional camera [7] to provide the robot with data on its orientation and direction. In some cases, the hardware needed to implement the algorithm can be more costly than the robot itself. This fact makes the practical realization of such methods in most real-world robotic systems questionable. However, for a robot moving on a plane ground, it is feasible to navigate using a single-camera system [8–12]. The appeal of a single-camera navigation system is its low cost, and it keeps the image processing simple. For the real application considered here, a bipedal robot called the Cornell Ranger [13], the goal of the task is to track a specific objective (the specific objective is the tracked trajectory composed of a point, line, or curve, corresponding to a precise position, straight path, or curving path). However, the Cornell Ranger has its own inherent requirements. Firstly, the Cornell Ranger is a kneeless 4-legged bipedal robot that is energetically and computationally autonomous. Light weight and low energy cost are significant aspects and must be considered seriously. From this point of view, a single-camera navigation system is a better option than a multicamera navigation system, because a multicamera system would consume much more energy and its implementation is much more complex. Secondly, wheeled mobile robots can capture a sequence of views at the same level height, while for bipedal robots the discrete walking gaits mean that the vertical position of a camera mounted on the robot varies; a long sequence of images captured by the camera is therefore taken at different heights, which gives rise to more complex vision processing.

Vision navigation with a specific objective, usually named path following [14, 15], can be defined by specifying a desired timed trajectory and includes two aspects: path recognition and path planning. On the one hand, path recognition, usually called path finding [16], is to reconstruct the surrounding scene and to find the specific objective using the vision system. Although many environment features may be visible in the image captured by the camera, only a few of these features are necessary to estimate the robot's position and orientation [17]. How to extract the useful information from a single camera is a difficult problem. On the other hand, path planning [18] is a navigation algorithm that leads the robot to the specific objective according to the information from the vision system. Currently, various algorithms have been developed to solve the path planning problem for autonomous robot navigation, such as genetic algorithms [19] and fuzzy logic controllers [20]. Accuracy, efficiency, and robustness are three important indices for evaluating a navigation algorithm. The accuracy and robustness of the vision navigation algorithm proposed in [8] were evaluated qualitatively by experiments. In [21], a localization algorithm based on a single landmark was proposed and its effectiveness was shown by simulation. However, no quantitative definitions of accuracy, efficiency, and robustness have been established to evaluate vision navigation quantitatively. Moreover, few algorithms have dealt with the problem of how to balance robustness and efficiency.

This paper makes three contributions. Firstly, the concept of a desired direction field is proposed. In order to estimate the robustness and efficiency of a navigation algorithm, quantitative definitions of robustness and efficiency are given. This concept can be extended to all navigation problems of mobile robots with a specific objective. Secondly, to address the above-mentioned problem that the walking gaits of a bipedal robot are discrete, a discrete-state-based vision method using a single camera is presented to obtain the position and direction information between the bipedal robot and the specific objective in the path recognition phase. In the path planning phase, a piecewise-linear method (PLM) is proposed to design the desired direction field and realize a navigation algorithm that controls the bipedal robot walking on a plane surface. Lastly, combined with the improved method with band width, the robustness and efficiency of the proposed algorithm are studied and balanced by setting proper control parameters.

This paper is organized as follows. Section 2 defines the navigation problem with a specific objective. Section 3 gives the background of the Cornell Ranger and briefly introduces the vision navigation algorithm. Sections 4 and 5 deal with the problems of path recognition and the design of the direction field, respectively. Section 6 shows how the improved control method with band width can restrain the noise influence from the camera, and simulation results are presented and compared. Finally, conclusions are summarized in Section 7.

*Notation*. Objects in a manifold are denoted by unbold letters or symbols. Their local coordinate representations are denoted by bold letters and symbols. Thus, if $q \in Q$ and $Q$ is an $n$-dimensional configuration manifold, then $\mathbf{q}$ represents the local coordinates of $q$. In this paper, capital letters will generally be used to denote desired quantities and lowercase letters to denote actual quantities.

#### 2. Problem Definition

##### 2.1. Discrete State Space

Generally, for self-navigating robots, the vision system is one of the most important parts, and the camera is used to capture information about the surrounding environment. Since the vision system is mounted on the robot, the robot can intuitively be regarded as an observer that is always stationary relative to itself. An observer is always coupled with the concept of a state space, and different observers correspond to different state spaces. The surrounding environment composes a new time state space $S_t$ relative to the observer (robot) at the instant of time $t$. All of these state spaces compose the state space set, denoted by $\mathbb{S}$, over the whole time $T$. The state space set can be defined by

$$\mathbb{S} = \left\{ S_t \subseteq \mathbb{R}^n \mid S_t = f(t),\; t \in T \right\}.$$

Here, $n$ is the dimension of the state space, and $f$ is a mapping from the time space $T$ to the state space $S_t$. The range of each image captured by the camera at a given discrete interval is only a subset of a state space. For simplicity, although the picture information captured by the camera covers only a subset of a state space, it is sufficient to regard the picture information as a discrete time state space once its range is thought of as extending infinitely, as shown in Figure 1. The $n$-dimensional discrete state space set can then be defined as

$$\mathbb{S}_d = \left\{ S_{t_k} \mid S_{t_k} = f(t_k),\; k = 1, 2, \ldots, N \right\}.$$

Here $N$ is the number of elements, which is also the number of images from the camera. There exists a projective mapping $P$ between the state space $I_{t_k}$ affixed to the image and the state space $S_{t_k}$ with the robot as observer, which can be expressed by

$$S_{t_k} = P\left( I_{t_k} \right).$$

All the material treated here is general and applicable to $n$ dimensions. In this paper, we focus on a bipedal robot walking on level ground with discrete walking gaits. All of the environment information is embedded in the 2D level ground, so the state space is two-dimensional and $n = 2$. The time state space set is very useful for the analysis of path recognition in Section 4.
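As an illustrative sketch of this idea, the short Python example below treats each camera image as one element of the discrete state space set and realizes the projective mapping $P$ as a planar homography from pixel coordinates to the 2D ground plane, which is the standard model for a camera viewing a plane. The matrix values, feature coordinates, and function names are hypothetical placeholders for illustration, not the paper's actual calibration.

```python
# Hypothetical 3x3 homography H mapping image (pixel) coordinates to
# ground-plane coordinates; in a real system H would come from camera
# calibration, not from these illustrative values.
H = [
    [0.01, 0.0,   -1.6],
    [0.0,  0.012, -1.2],
    [0.0,  0.0005, 1.0],
]

def project_to_ground(u, v, H):
    """Projective mapping P from the image state space to the ground plane.

    (u, v) are pixel coordinates of a feature in the image at step t_k;
    the result is an (x, y) point in the 2D ground-plane state space (n = 2).
    """
    # Multiply H by the homogeneous pixel vector (u, v, 1).
    x, y, w = (sum(H[i][j] * p for j, p in enumerate((u, v, 1.0)))
               for i in range(3))
    return (x / w, y / w)  # normalize homogeneous coordinates

# The discrete state space set: one list of ground-plane points per image k,
# mirroring one state space S_{t_k} per captured image.
image_features = [
    [(320, 240), (400, 260)],  # features detected in the image at step t_1
    [(310, 250)],              # features detected in the image at step t_2
]
state_space_set = [
    [project_to_ground(u, v, H) for (u, v) in feats]
    for feats in image_features
]
```

Because the ground is planar, a single matrix suffices to relate every pixel to a unique ground point; this is what makes the single-camera setup viable for the 2D case discussed here.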