Abstract

This paper focuses on the problem of adaptive image-based leader-follower formation control of a mobile robot with an on-board omnidirectional camera. A calibrated omnidirectional camera is fixed on the follower in an arbitrary pose, and a feature point representing the leader can be chosen at an arbitrary position. An adaptive image-based controller that does not depend on the velocity of the leader is proposed based on a filtering technique. In other words, relying only on the projection of the feature point on the image plane, the follower can track the leader and achieve the formation. Moreover, an observer is introduced to estimate the unknown camera extrinsic parameters and the unknown parameters of the plane, expressed relative to the omnidirectional camera frame, in which the feature point moves. Finally, the Lyapunov method is applied to prove uniform semiglobal practical asymptotic stability (USPAS) of the closed-loop system. Simulation results are presented to validate the algorithm.

1. Introduction

The formation control problem has been a research focus for a long time. Multiple robots moving in formation have better collaboration abilities than robots moving independently; for example, robots in formation can accomplish more complex tasks in less time. The leader-follower strategy is the most popular one due to its decentralized structure, feasibility, and scalability. In this approach, each follower keeps a desired relative pose to its leader, so the formation control problem can be decomposed into several distributed tracking problems.

Many methods focusing on the leader-follower strategy have been proposed using on-board laser sensors or perspective cameras. Choi et al. [1] proposed an adaptive position-based controller in which the relative pose was measured by an on-board laser sensor and the unknown leader’s motion was estimated by a novel observer. Dani et al. [2] and Poonawala et al. [3] measured the relative pose by pose reconstruction using a perspective camera. Dani et al. [2] estimated the relative velocity using a nonlinear estimator. Poonawala et al. [3] eliminated the need for the leader’s velocity in the controller, so an observer to estimate the leader’s motion could be avoided. Wang et al. [4] proposed an adaptive image-based controller based on the backstepping technique; the measurement of the relative pose could be eliminated, and the unknown height of the feature was estimated by an observer, but the measurement of the object’s motion had to be accurate. However, lasers are more expensive than cameras, and both lasers and perspective cameras have a limited field of view, which restricts the robots’ motion.

Compared to a perspective camera, an omnidirectional camera offers a panoramic view and can detect objects in the full 360° surrounding scene. Due to this advantage, omnidirectional cameras have been applied in many visual servoing approaches [5–8]. Many methods using an on-board omnidirectional camera have been developed to cope with leader-follower formation control. Das et al. [9, 10] designed a position-based controller based on input-output linearization, and the leader’s motion was estimated by an extended Kalman filter (EKF). Mariottini et al. [11–13] achieved leader-follower formation control using only the relative bearing measured by an uncalibrated omnidirectional camera; the relative distance was estimated by an EKF in [11, 12] and by an immersion-and-invariance-based observer in [13]. Vidal et al. [14] proposed an image-based controller based on input-output linearization using an omnidirectional camera. The optical flow method [15] was exploited to compensate for the unknown leader’s motion, but this method depended on detecting a static point at all times. However, the above methods using omnidirectional cameras all require that the camera be mounted at the rotation center of the follower and that the image plane be parallel to the ground plane. These assumptions limit the placement of the camera and cause systematic errors due to model mismatch. What is more, most of these approaches need to measure the relative pose, which is not the most direct way to realize the leader-follower behavior: transforming the image information into position information is time-consuming and inaccurate.

In this paper, the omnidirectional camera can be fixed on the mobile robot in an arbitrary pose, and the image plane does not need to be parallel to the ground plane. The feature point on the leader can be chosen at an arbitrary position, and the position of the feature point can be unknown. An adaptive image-based controller is developed that relies on a filter to eliminate the measurement or estimation of the leader’s motion. Then, an observer is proposed to estimate the unknown pose of the omnidirectional camera relative to the follower and the unknown coefficients of the plane, expressed in the omnidirectional camera frame, in which the feature point moves. Finally, Lyapunov theory is used to prove uniform semiglobal practical asymptotic stability (USPAS) of the image error and the formation error. Simulation results validate the performance of the proposed algorithm.

2. Kinematics

2.1. Problem Statement

The coordinate frames are defined as shown in Figures 1 and 2. The world frame is fixed on the ground. Denote the follower frame by and the leader frame by . Define the omnidirectional camera frame as and the image plane frame as . and coincide with the robots’ rotation centers, and and are parallel with the robots’ forward direction. The planes and are parallel to the ground plane. Each robot’s linear velocity is aligned with its -axis, and its angular velocity is orthogonal to the ground plane. The origin of is located at the focal point of the mirror, and the axis is aligned with the symmetry axis of the mirror. The optical axis of the camera coincides with , and the image plane is parallel to the plane . Axis is parallel to axis . The problem addressed is defined as follows.

Problem 1. Given a desired position on the image plane, design an adaptive image-based controller that makes the feature point converge to an arbitrarily small circle around the desired position, while the extrinsic parameters of the omnidirectional camera, the position of the feature point on the leader, and the leader’s velocity are all unknown.

Assumption 2. The velocity of the leader is bounded, for practical reasons. The linear and angular velocities of the leader are and , respectively, and and are their upper bounds; that is,

Assumption 3. Suppose that the parameters of the omnidirectional camera’s mirror and the intrinsic parameters of the camera are known. Both robots move on the ground plane.

Notation. The notation used in this paper is as follows: a bold letter denotes a vector or a matrix; otherwise, it denotes a scalar. and denote the identity matrix and the zero matrix, respectively. and denote the minimum and maximum eigenvalues of a positive-definite diagonal gain matrix, respectively. denotes the Euclidean norm of a vector. The presuperscript represents the coordinate frame in which a variable is expressed (e.g., denotes the position of the feature point with respect to ).

2.2. The Unified Model of Omnidirectional Camera

As shown in Figure 2, the omnidirectional camera consists of a curved mirror and a camera: for example, a parabolic mirror is combined with an orthographic camera, and a hyperbolic mirror is combined with a perspective camera. Details of the structure of the omnidirectional camera can be found in [16, 17].

The 3D coordinates of the feature point relative to frame are . As shown in Figure 3, the intersection between the reflected ray and the plane is called the general-image point , and is the proportional coordinate corresponding to . The proportional relationship is where plays the role of the depth information that appears in the case of perspective cameras and . The mirror parameters , , and are listed in Table 1. represents the image coordinates of the image point. The mapping between and the image point is where , , , are the intrinsic parameters of the camera. can be calculated from when the omnidirectional camera is calibrated. Therefore, without loss of generality, , in place of , is considered as the output of the system.
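For concreteness, the following sketch implements the standard unified (sphere) model commonly used for calibrated central catadioptric cameras; it is only an illustration of the projection pipeline described above, not the paper's exact equations. The mirror parameter `xi`, the intrinsics `fx, fy, u0, v0`, and the example point are hypothetical placeholders.

```python
import numpy as np

def project_unified(P_c, xi, fx, fy, u0, v0):
    """Project a 3D point given in the camera/mirror frame with the unified
    catadioptric model (illustrative; not necessarily the paper's notation).

    xi is the mirror parameter (0 -> perspective, 1 -> parabolic mirror);
    fx, fy, u0, v0 are the camera intrinsics.  Returns the pixel coordinates
    and the general-image point on the normalized plane, which the paper
    takes as the system output."""
    X, Y, Z = P_c
    rho = np.sqrt(X**2 + Y**2 + Z**2)
    # Normalized (general-image) coordinates; the divisor (Z + xi*rho)
    # plays the role of the depth-like term of the perspective case.
    s = np.array([X / (Z + xi * rho), Y / (Z + xi * rho), 1.0])
    u = fx * s[0] + u0          # pixel coordinates via the intrinsics
    v = fy * s[1] + v0
    return np.array([u, v]), s

def back_project(uv, fx, fy, u0, v0):
    """Recover the general-image point from a pixel when the camera is
    calibrated, so the general-image point can serve as the output."""
    return np.array([(uv[0] - u0) / fx, (uv[1] - v0) / fy, 1.0])

# Hypothetical example: a feature point about 1 m in front of the mirror.
uv, s = project_unified(np.array([0.2, 0.0, 1.0]),
                        xi=0.96, fx=400.0, fy=400.0, u0=320.0, v0=240.0)
```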

2.3. Kinematics

The linear and angular velocities of the are and , respectively. The world linear velocity of the feature point expressed in frame is . Then, the relative velocity of with respect to frame is The differential of is The differential of is where To eliminate , (7) can be rewritten, according to (2), as Then, (9) can be rearranged in a more convenient matrix form where the Jacobian matrices are
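Since the equations here appear only symbolically, the LaTeX block below merely restates the standard rigid-body relation from which such a derivation typically starts; the symbols are illustrative and may not match the paper's notation.

```latex
% Relative kinematics of a point p observed from the moving camera frame C
% (illustrative notation): {}^{W}\dot{\mathbf p} is the world velocity of the
% point, {}^{C}\mathbf{v}_{C} and {}^{C}\boldsymbol{\omega}_{C} the camera's
% own linear and angular velocities expressed in C.
\dot{{}^{C}\mathbf{p}}
  = {}^{C}\mathbf{R}_{W}\,{}^{W}\dot{\mathbf{p}}
  - {}^{C}\mathbf{v}_{C}
  - {}^{C}\boldsymbol{\omega}_{C}\times{}^{C}\mathbf{p}
```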

Inspired by the depth-independent interaction matrix proposed in [18–23], a depth-like term is introduced. According to (2), Thus, (10) can be transformed to
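As a rough illustration of the depth-independent idea from [18–23], written in generic visual-servoing notation rather than the paper's symbols: the unknown depth-like scalar is moved from the denominator of the interaction matrix to the left-hand side, so the remaining matrix depends only on measurable image quantities and can later be parameterized linearly in the unknown constants.

```latex
% Generic depth-independent reformulation (illustrative):
\dot{\mathbf{s}} = \tfrac{1}{z}\,\mathbf{L}(\mathbf{s})\,\mathbf{u}
\quad\Longrightarrow\quad
z\,\dot{\mathbf{s}} = \mathbf{L}(\mathbf{s})\,\mathbf{u},
% where z is the depth-like term, s the general-image point, and u the
% velocity input; L(s) no longer contains the unknown depth.
```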

The nonholonomic kinematic model of the mobile robots can be described as where and denote the coordinates of with respect to . denotes the orientation of the mobile robot and is defined as the angle from axis to axis . and represent the linear and angular velocities, respectively. Moreover, let denote the rotation matrix of relative to as where , , and denote the -- rotation angles defined relative to the current frames and from to , respectively. Frame denotes the omnidirectional camera frame when , axis , and axis coincide with , axis , and axis , respectively. denotes the element of in the th row and th column. The constant denotes the 3D coordinates of the origin of with respect to . Note that and are caused by the motion of the follower only. Thus, the relation between and is as follows: where the constant matrix is
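For reference, the standard unicycle model that this passage describes reads, in generic symbols,

```latex
% Nonholonomic unicycle kinematics (x, y: position of the rotation center in
% the world frame; \theta: heading; v, \omega: linear and angular velocities):
\dot{x} = v\cos\theta, \qquad
\dot{y} = v\sin\theta, \qquad
\dot{\theta} = \omega .
```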

Moreover, the time-variant denotes the rotation matrix of with respect to . , , and the time-variant denotes the difference in orientation between the leader and the follower. The time-invariant is the 3D coordinates of the feature point with respect to . is caused by the motion of the leader only. Thus, the relation between and is as follows: where the time-variant matrix is

Substituting (16) and (18) into (13), (13) can be rewritten as The details of and are shown in the Appendix.

According to Assumption 3, the feature point and the omnidirectional camera each move in a plane. Therefore, the feature point moves in a fixed plane relative to . Based on the plane equation, the unknown depth can be represented in terms of , , : According to (2), (21) can be revised as To eliminate in (20), substituting (22) into (20), (20) can be revised as It is noted that , , and are unknown and that they do not appear in the controller. Let denote the unknown parameters, including the camera’s extrinsic parameters , , and the coefficients of plane equation (21). The parameterized is . In addition, can be parameterized in a linear form as The details of , , and the regressor matrix can be found in the Appendix.
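To illustrate the depth-elimination step with generic symbols (the actual coefficients and the regressor are those defined in the paper's Appendix): because the feature point stays in a fixed plane expressed in the camera frame, the inverse of the depth is an affine function of measurable image coordinates with constant unknown coefficients, which is what makes the linear parameterization possible.

```latex
% Illustrative plane constraint in the camera frame: a X + b Y + c Z = 1.
% Dividing by Z and writing the normalized coordinates (x, y) = (X/Z, Y/Z):
\frac{1}{Z} = a\,x + b\,y + c ,
% so every occurrence of the unknown depth can be replaced by measurable
% image data multiplied by constant unknown coefficients, giving a linear
% form  Y(\text{image data}, \mathbf{u})\,\boldsymbol{\theta} .
```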

3. Adaptive Image-Based Leader-Follower Approach with Omnidirectional Camera

In the image-based leader-follower approach, the position-based outputs, separation and bearing, can be transformed into an image point of the on-board omnidirectional camera, because there exists an injective mapping between an image point and a relative position with respect to . Then, the general-image point is considered as the output, due to the injective mapping between a general-image point and an image point. Therefore, when the general-image point converges to the desired one, the leader-follower formation is achieved. Furthermore, the desired general-image point can be recorded using the omnidirectional camera when the leader’s feature point is located at the desired position relative to the follower, which is known as the “teach-by-showing” approach.

3.1. Design of Controller and Observer

Proposition 4. To avoid the singular point of the matrix , the determinant should not be zero. Let denote the estimated unknown parameters, and let denote the matrix in (23) containing .

According to [18], a repulsive potential field is introduced, and the corresponding potential force is introduced as Here , , , and are all positive constants, and
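The exact potential and force from [18] are those given in the equations above; the sketch below only illustrates the mechanism with a hypothetical barrier-style potential that is zero away from the singular set and grows as the determinant of the estimated matrix approaches a small threshold, with the force taken as its negative numerical gradient. All names and constants are placeholders.

```python
import numpy as np

def repulsive_potential(det_value, delta=1e-3, k=1.0):
    """Hypothetical barrier-style potential: zero when det_value**2 is above
    the threshold delta, growing without bound as it approaches zero."""
    d2 = det_value**2
    if d2 >= delta:
        return 0.0
    return 0.5 * k * (1.0 / d2 - 1.0 / delta)**2

def repulsive_force(theta_hat, det_fn, eps=1e-6, **kw):
    """Negative numerical gradient of the potential with respect to the
    estimated parameters; det_fn maps theta_hat to the determinant of the
    estimated matrix (placeholder callable)."""
    grad = np.zeros_like(theta_hat)
    base = repulsive_potential(det_fn(theta_hat), **kw)
    for i in range(len(theta_hat)):
        perturbed = theta_hat.copy()
        perturbed[i] += eps
        grad[i] = (repulsive_potential(det_fn(perturbed), **kw) - base) / eps
    return -grad
```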

Define the general-image error as . The observer is proposed as which can be considered as the update law of the estimated parameters . The regressor matrix is calculated from (24). The second term in (28) serves as a repulsive force that pushes away from the singular point of the matrix . are all symmetric positive-definite gain matrices.

Inspired by [24, 25], a general-image-based filter is proposed, where can be regarded as a pseudoerror used to compensate for the general-image velocity caused by the leader’s motion. are positive-definite diagonal gain matrices. The controller is proposed as where is the inverse of the matrix . , and are positive-definite diagonal gain matrices. and are positive gains.
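Because the filter, controller, and observer are given symbolically, the following is only a structural sketch of one control cycle of a generic adaptive image-based scheme of this kind: a first-order filter producing the pseudoerror, a gradient-type parameter update driven by a regressor plus the repulsive force, and a velocity command obtained by inverting the estimated image Jacobian. Every function, gain, and dimension is a placeholder, not the paper's definition.

```python
import numpy as np

def control_step(s, s_des, z_filt, theta_hat, dt,
                 regressor, jacobian_hat, repulsive_force,
                 K_e=1.0, K_f=2.0, Gamma=0.5):
    """One cycle of a generic adaptive image-based leader-follower controller
    (structural sketch; all names are placeholders, not the paper's symbols).

    s, s_des        : measured and desired general-image points (2-vectors).
    z_filt          : state of the first-order filter (pseudoerror).
    theta_hat       : current estimate of the unknown parameters.
    regressor       : (theta_hat, s) -> regressor matrix Y          [placeholder]
    jacobian_hat    : (theta_hat, s) -> estimated 2x2 image Jacobian [placeholder]
    repulsive_force : theta_hat -> force keeping the Jacobian away
                      from singularity                               [placeholder]
    """
    e = s - s_des                                   # general-image error
    # First-order filter producing a pseudoerror that absorbs the image
    # velocity induced by the unmeasured leader motion.
    z_filt = z_filt + dt * (-K_f * z_filt + e)
    # Gradient-type parameter update (observer) with a repulsive term.
    Y = regressor(theta_hat, s)
    theta_hat = theta_hat + dt * (-Gamma * (Y.T @ e) + repulsive_force(theta_hat))
    # Follower velocity command from the inverse of the estimated Jacobian.
    J_hat = jacobian_hat(theta_hat, s)
    u = -np.linalg.solve(J_hat, K_e * e + z_filt)   # [v, omega] of the follower
    return u, z_filt, theta_hat
```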

3.2. Stability Analysis

Theorem 5. Using the controller (29) and the observer (28), the general-image error is uniformly semiglobally practically asymptotically stable (USPAS), which implies USPAS of the image error. Moreover, the image-based leader-follower system with the omnidirectional camera is stable under Assumptions 2 and 3, provided that the initial relative heading is bounded away from , .

Proof. The Lyapunov function is proposed as where , , and . The differential of is where . Equation (23) can be rewritten as Substituting (29) into (33), can be rewritten as Substituting (34) and the observer (28) into (32), (32) can be revised as According to (24), . Then, (35) can be revised as where is an odd function; thus the second term in (36) is nonpositive. There is a small positive , which can be adjusted by the gains , ensuring that . If , ; thus the first term in (36) is negative. Moreover, the gain matrix should satisfy ; thus the third term in (36) is nonpositive. Therefore, when , . The scalar of can be made arbitrarily small, so is USPAS, according to [26].
The differential of (31) is Due to USPAS of , is also USPAS. The differential of is where Substituting (39) into (37), (37) can be rewritten as The nominal system is exponentially stable when is satisfied. Obviously, is bounded. Therefore, the perturbed system (40) is stable, and is bounded when , based on the stability theory of perturbed systems [27].
In summary, is USPAS, which is equivalent to USPAS of the image error. Also, the relative heading is bounded. Therefore, the image-based leader-follower system with the omnidirectional camera is stable.

4. Simulation Results

In this section, the simulation results are presented to validate the performance of the proposed algorithm.

A nonholonomic two-wheeled mobile robot model is used in the simulation. An omnidirectional camera is fixed on the follower, and the follower detects a feature point fixed on the leader. The simulation is based on the kinematics of the vehicles: the camera is assumed to detect the object instantaneously, and the robot is assumed to respond to the control input instantly.

The upper bounds of the leader’s linear and angular velocities are and . The coordinates of the feature point with respect to are . Two followers are introduced in the simulation. Both mirrors are hyperbolic, and their parameters are both and . The camera intrinsic parameters are , , . The transfer angles are both , , and , respectively. The coordinates of with respect to are . The control gains are both and . The observer gains are , , , , , . The initial estimated parameters are both chosen as , . The initial pose of the leader is , , ; the initial poses of the two followers are , , , , , , respectively. Thus the initial image points on Follower1’s and Follower2’s image planes are and , respectively. The desired positions of the leader relative to frames Follower1 and Follower2 are , and , , respectively. Then the desired image points on Follower1’s and Follower2’s image planes are and , respectively.
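As a rough reproduction aid, the skeleton below shows how such a simulation can be organized under the stated assumptions (instantaneous sensing and actuation): integrate the unicycle kinematics of leader and follower and close the loop from an image-like measurement at each step. The camera projection and the adaptive controller/observer are replaced here by trivial stand-ins (a normalized relative position and a proportional law) purely to keep the sketch self-contained; all numerical values are placeholders, not the paper's settings.

```python
import numpy as np

def unicycle_step(pose, v, w, dt):
    """Integrate the unicycle kinematics; pose = [x, y, theta]."""
    x, y, th = pose
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])

def simulate(T=30.0, dt=0.01):
    """Skeleton of the leader-follower simulation (placeholder values)."""
    leader = np.array([0.0, 0.0, 0.0])        # hypothetical initial poses
    follower = np.array([-1.0, -0.5, 0.0])
    s_des = np.array([0.8, 0.6])              # hypothetical desired "image" point

    for _ in range(int(T / dt)):
        v_l, w_l = 0.3, 0.1                   # leader velocities (unknown to the controller)
        leader = unicycle_step(leader, v_l, w_l, dt)

        # Relative position of the leader's feature point in the follower frame.
        c, s_ = np.cos(follower[2]), np.sin(follower[2])
        rel = np.array([[c, s_], [-s_, c]]) @ (leader[:2] - follower[:2])

        # Stand-in for the omnidirectional projection: the normalized relative
        # position plays the role of the general-image point here; in the
        # paper's setup this would come from the calibrated camera model.
        s_img = rel / np.linalg.norm(rel)

        # Stand-in for the adaptive controller and observer: a plain
        # proportional law closes the loop in this sketch only.
        e = s_img - s_des
        v_f, w_f = -1.0 * e[0], -1.0 * e[1]
        follower = unicycle_step(follower, v_f, w_f, dt)

    return leader, follower

leader_final, follower_final = simulate()
```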

In the first simulation, the leader moves in a straight line with . The results are shown in Figure 4. In the second simulation, the leader moves along a circle with and . The results are shown in Figure 5. In the third simulation, the leader moves along an arbitrary trajectory with time-varying linear and angular velocities (Figure 6). The results are shown in Figure 6. All results validate Theorem 5.

The simulation results above validate the performance of the adaptive image-based leader-follower approach with the on-board omnidirectional camera. The image errors converge to approximately zero. The results show the convergence of the leader-follower formation error in three different situations. Furthermore, the results also validate the adaptive algorithm, by which the unknown extrinsic parameters as well as the unknown motion plane of the feature point can be estimated online.

5. Conclusions

In this paper, a new adaptive image-based controller using an omnidirectional camera and independent of the leader’s velocity has been proposed, together with an observer that estimates the unknown extrinsic parameters of the camera and the unknown motion plane of the feature point. The Lyapunov method is used to prove USPAS of the image error and the formation error. The simulation results have validated the performance of the algorithm. Future work will focus on experiments in a real environment.

Appendix

The details of the matrices and are where and denote the elements of and in the th row and th column, respectively. The detail of is The detail of the matrix is where , , , , , , , and , and denotes the element of .

The detail of regressor matrix is

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by Shanghai Rising-Star Program under Grant 14QA1402500, in part by International Cooperation Project of Science and Technology under Grant 2011DFA11780, in part by the Natural Science Foundation of China under Grants 61105095, 61473191, 61203361, and 61221003, and in part by China Domestic Research Project for the International Thermonuclear Experimental Reactor (ITER) under Grants 2012GB102001 and 2012GB102008.