Research Article  Open Access
Adaptive Image-Based Leader-Follower Approach of a Mobile Robot with an Omnidirectional Camera
Abstract
This paper addresses the adaptive image-based leader-follower formation control of a mobile robot with an onboard omnidirectional camera. A calibrated omnidirectional camera is fixed on the follower in an arbitrary position, and a feature point representing the leader can be chosen in an arbitrary position. An adaptive image-based controller that does not depend on the velocity of the leader is proposed based on a filtering technique. In other words, relying only on the projection of the feature point on the image plane, the follower can track the leader and achieve formation control. Moreover, an observer is introduced to estimate the unknown camera extrinsic parameters and the unknown parameters of the plane, relative to the omnidirectional camera frame, in which the feature point moves. Finally, the Lyapunov method is applied to prove uniform semiglobal practical asymptotic stability (USPAS) of the closed-loop system. Simulation results are presented to validate the algorithm.
1. Introduction
The formation control problem has long been a research focus. Multiple robots moving in formation collaborate better than robots moving independently; for example, robots in formation can accomplish more complex tasks in less time. The leader-follower strategy is the most popular one due to its decentralized structure, feasibility, and scalability. In this approach, each follower maintains a desired relative pose to its leader, so the formation control problem can be decomposed into several distributed control problems.
Many methods based on the leader-follower strategy have been proposed using onboard laser sensors or perspective cameras. Choi et al. [1] proposed an adaptive position-based controller in which the relative pose is measured by an onboard laser sensor and the leader's unknown motion is estimated by a novel observer. Dani et al. [2] and Poonawala et al. [3] measured the relative pose by pose reconstruction using a perspective camera. Dani et al. [2] estimated the relative velocity using a nonlinear estimator, while Poonawala et al. [3] eliminated the need for the leader's velocity in the controller, so no observer for the leader's motion is required. Wang et al. [4] proposed an adaptive image-based controller based on the backstepping technique; the measurement of the relative pose is eliminated and the unknown height of the feature is estimated by an observer, but the object's motion must be measured accurately. However, lasers are more expensive than cameras, and both lasers and perspective cameras have a limited field of view, which restricts the robots' motion.
Compared with a perspective camera, an omnidirectional camera offers the robot a panoramic view and can detect objects anywhere in the 360° surrounding scene. Due to this advantage, omnidirectional cameras have been applied in many visual servoing approaches [5–8], and several methods using an onboard omnidirectional camera have been developed for leader-follower formation control. Das et al. [9, 10] designed a position-based controller using input-output linearization, with the leader's motion estimated by an extended Kalman filter (EKF). Mariottini et al. [11–13] achieved leader-follower formation control using only the relative bearing measured by an uncalibrated omnidirectional camera; the relative distance was estimated by an EKF in [11, 12] and by an immersion-and-invariance-based observer in [13]. Vidal et al. [14] proposed an image-based controller using input-output linearization with an omnidirectional camera; the optical-flow method [15] was exploited to compensate for the leader's unknown motion, but it relies on detecting a static point at all times. However, the above methods using omnidirectional cameras all require the camera to be mounted at the rotation center of the follower and the image plane to be parallel to the ground plane. These assumptions limit the placement of the camera and introduce systematic errors due to model mismatch. Moreover, most of the approaches still measure the relative pose, which is not the most direct way to achieve leader-follower control: transforming image information into position information is time-consuming and inaccurate.
In this paper, the omnidirectional camera can be fixed on the mobile robot in an arbitrary pose, and the image plane need not be parallel to the ground plane. The feature point on the leader can be chosen in an arbitrary, and possibly unknown, position. An adaptive image-based controller is developed that relies on a filter to eliminate the measurement or estimation of the leader's motion. An observer is then proposed to estimate the unknown pose of the omnidirectional camera relative to the follower and the unknown coefficients of the plane in which the feature point moves, expressed relative to the omnidirectional camera frame. Finally, Lyapunov theory is used to prove uniform semiglobal practical asymptotic stability (USPAS) of the image error and the formation error. Simulation results validate the performance of the proposed algorithm.
2. Kinematics
2.1. Problem Statement
The coordinate frames are defined as shown in Figures 1 and 2. The world frame is fixed on the ground. Denote the follower frame by and the leader frame by . Define the omnidirectional camera frame as and the image plane frame as . and coincide with the robots' rotation centers, and and are parallel to the robots' forward directions. The planes and are parallel to the ground plane. Each robot's linear velocity is aligned with its axis, and its angular velocity is orthogonal to the ground plane. The origin of is located at the focal point of the mirror, and the axis is aligned with the symmetry axis of the mirror. The optical axis of the camera coincides with , and the image plane is parallel to the plane . Axis is parallel to axis . The problem addressed is defined as follows.
Problem 1. Given a desired position on the image plane, design an adaptive image-based controller that makes the feature point converge to an arbitrarily small circle around the desired position, while the extrinsic parameters of the omnidirectional camera, the position of the feature point on the leader, and the leader's velocity are all unknown.
Assumption 2. The velocity of the leader is bounded, for practical reasons. The linear and angular velocities of the leader are and , respectively, and and are their upper bounds; that is,
Assumption 3. Suppose that the parameters of the omnidirectional camera’s mirror and the intrinsic parameters of the camera are known. Both robots move on the ground plane.
Notation. The notation used in this paper is as follows: a bold letter denotes a vector or a matrix; otherwise, it denotes a scalar. and denote the identity matrix and the zero matrix, respectively. and denote the minimum and maximum eigenvalues of a positive-definite diagonal gain matrix, respectively. denotes the Euclidean norm of a vector. A pre-superscript (pre-subscript) indicates the coordinate frame to which a variable is referred (e.g., denotes the position of the feature point with respect to ).
2.2. The Unified Model of Omnidirectional Camera
As shown in Figure 2, the omnidirectional camera consists of a curved mirror and a camera: for example, a parabolic mirror is combined with an orthographic camera, and a hyperbolic mirror is combined with a perspective camera. Details of the structure of the omnidirectional camera can be found in [16, 17].
The 3D coordinates of the feature point relative to frame are . As shown in Figure 3, the intersection of the reflected ray with the plane is called the general-image point , and is the scaled coordinate corresponding to . The scaling relationship is , where is analogous to the depth information that appears in the perspective-camera case and . The mirror parameters , , and are listed in Table 1. represents the image coordinates of the image point. The mapping between and the image point is , where , , , and are the intrinsic parameters of the camera. can be calculated from once the omnidirectional camera is calibrated. Therefore, without loss of generality, , in place of , is taken as the output of the system.
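As an illustration, the unified central catadioptric model of Geyer and Daniilidis [16] can be sketched in a few lines. The intrinsic values in `K` and the mirror parameter `xi` below are hypothetical, chosen only to make the sketch runnable:

```python
import numpy as np

def unified_projection(X, xi, K):
    """Project a 3-D point X (mirror frame) to pixel coordinates using the
    unified central catadioptric model: project onto the unit sphere, then
    reproject from a point shifted by xi along the mirror axis.
    xi = 0 corresponds to a plain perspective camera."""
    X = np.asarray(X, dtype=float)
    # Step 1: project the point onto the unit sphere centred at the mirror focus.
    Xs = X / np.linalg.norm(X)
    # Step 2: reproject onto the normalised (general-image) plane.
    m = np.array([Xs[0] / (Xs[2] + xi), Xs[1] / (Xs[2] + xi), 1.0])
    # Step 3: apply the camera intrinsics to obtain the image point.
    p = K @ m
    return m, p[:2]

# Hypothetical intrinsics (focal lengths and principal point).
K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])
m, p = unified_projection([0.5, 0.0, 1.0], xi=0.8, K=K)
```

For `xi = 0` the mapping reduces to the standard perspective projection m = (x/z, y/z, 1), which is the sense in which the unified model generalizes the perspective case mentioned above.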
2.3. Kinematics
The linear and angular velocities of the are and , respectively. The world linear velocity of the feature point expressed in frame is . Then, the relative velocity of with respect to frame is . The differential of is . The differential of is , where . To eliminate , (7) can be rewritten, according to (2), as . Then, (9) can be rearranged in the more convenient matrix form , where the Jacobian matrices are
Inspired by the depth-independent interaction matrix proposed in [18–23], a depth-like term is introduced. According to (2), . Thus, (10) can be transformed to
The nonholonomic kinematic model of the mobile robots can be described as , where and denote the coordinates of with respect to , denotes the orientation of the mobile robot, defined as the angle from axis to axis , and and represent the linear and angular velocities, respectively. Moreover, let denote the rotation matrix of relative to as , where , , and denote rotation angles defined relative to the current frames and from to , respectively. Frame denotes the omnidirectional frame when , axis , and axis coincide with , axis , and axis , respectively. denotes the element of in the th row and th column. The constant denotes the 3D coordinates of the origin of with respect to . Note that and are caused by the motion of the follower only. Thus, the relation between and is as follows: , where the constant matrix is
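The nonholonomic unicycle kinematics just introduced can be integrated numerically. A minimal sketch, assuming the usual state order (x, y, θ) and a simple Euler discretization:

```python
import numpy as np

def unicycle_step(pose, v, w, dt):
    """One Euler step of the nonholonomic unicycle model:
    x' = v*cos(theta), y' = v*sin(theta), theta' = w,
    where v and w are the linear and angular velocity inputs."""
    x, y, th = pose
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])
```

The model captures the nonholonomic constraint: the robot cannot translate sideways, only move along its heading and rotate.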
Moreover, the time-variant denotes the rotation matrix of with respect to . , , and the time-variant denotes the orientation difference between the leader and the follower. The time-invariant is the 3D coordinates of the feature point with respect to . is caused by the motion of the leader only. Thus, the relation between and is as follows: , where the time-variant matrix is
Substituting (16) and (18) into (13), (13) can be rewritten as . The details of and are given in the Appendix.
According to Assumption 3, the feature point and the omnidirectional camera both move in planes. Therefore, the feature point moves in a fixed plane relative to . Based on a plane equation, the unknown depth can be represented in terms of , , : . According to (2), (21) can be revised as . To eliminate in (20), substitute (22) into (20); then (20) becomes . Note that , , and are unknown and do not appear in the controller. Let denote the unknown parameters, including the camera's extrinsic parameters , , and the coefficients of the plane equation (21). The parameterized is . In addition, can be parameterized in the linear form . The details of , and the regressor matrix can be found in the Appendix.
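The way a plane constraint eliminates the unknown depth can be illustrated with a perspective-style sketch (the paper works with the general-image point of the omnidirectional model instead, and the plane coefficients `a, b, c` below are hypothetical):

```python
import numpy as np

def depth_from_plane(m, plane):
    """Recover the depth z of a point constrained to the plane
    a*x + b*y + c*z = 1 (expressed in the camera frame), given only its
    normalised image point m = (x/z, y/z, 1).  Substituting x = z*m[0],
    y = z*m[1] into the plane equation gives z*(a*m[0] + b*m[1] + c) = 1,
    so the depth is a function of image coordinates and plane coefficients."""
    a, b, c = plane
    return 1.0 / (a * m[0] + b * m[1] + c)
```

For example, the point (1, 2, 4) lies on the plane 0.1x + 0.1y + 0.175z = 1 and projects to m = (0.25, 0.5, 1); the formula recovers z = 4 without ever measuring depth directly, which is the mechanism used in (21)-(23).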
3. Adaptive Image-Based Leader-Follower Approach with Omnidirectional Camera
In the image-based leader-follower approach, the position-based outputs, separation and bearing, can be transformed into an image point of the onboard omnidirectional camera, because there exists an injective mapping between an image point and a relative position with respect to . The general-image point is then taken as the output, owing to the injective mapping between a general-image point and an image point. Therefore, when the general-image point converges to the desired one, the leader-follower formation is achieved. Furthermore, the desired general-image point can be recorded with the omnidirectional camera when the leader's feature point is located at the desired position relative to the follower, which is known as the "teach-by-showing" approach.
3.1. Design of Controller and Observer
Proposition 4. To avoid the singular points of the matrix , the determinant must not be zero. Let denote the estimated unknown parameters, and let denote the matrix in (23) evaluated at .
Following [18], the repulsive potential field is introduced as , and the corresponding potential force is , where , , , and are all positive constants, and
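A minimal sketch of such a repulsive potential follows. The exact form used in the paper is not reproduced here; the threshold `d0` and gain `k` are assumed constants, and the classic artificial-potential shape from path planning is borrowed for illustration:

```python
def repulsive_potential(det_M, d0, k):
    """Repulsive potential that grows without bound as |det(M)| -> 0,
    pushing the parameter estimate away from the singular set.
    It is active only inside the band |det(M)| < d0 and vanishes outside,
    so it does not disturb the update law far from singularities."""
    d = abs(det_M)
    if d >= d0:
        return 0.0
    # Classic artificial-potential form: 0.5 * k * (1/d - 1/d0)^2.
    return 0.5 * k * (1.0 / d - 1.0 / d0) ** 2
```

The negative gradient of this potential with respect to the estimated parameters yields the repulsive force term that appears in the observer.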
Define the general-image error as . The observer is proposed as , which can be regarded as the update law for the estimated parameters . The regressor matrix is calculated from (24). The second term in (28) serves as a repulsive force that pushes away from the singular points of the matrix . are symmetric positive-definite gain matrices.
Inspired by [24, 25], a general-image-based filter is proposed, in which can be regarded as a pseudo-error used to compensate for the general-image velocity caused by the leader's motion. are positive-definite diagonal gain matrices. The controller is proposed as , where is the inverse of the matrix , and are positive-definite diagonal gain matrices, and and are positive gains.
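The structure of a gradient-type parameter update law of this kind, omitting the repulsive-force term of (28), can be sketched as follows. All names are illustrative: `Y` stands for the regressor matrix, `e` for the general-image error, and `Gamma` for the gain matrix:

```python
import numpy as np

def update_parameters(theta_hat, Y, e, Gamma, dt):
    """Gradient-type adaptive update, Euler-discretised:
    theta_hat' = -Gamma @ Y.T @ e.
    The regressor Y relates the parameter error linearly to the output,
    so descending along -Y.T @ e drives the image error's contribution
    from the parameter mismatch toward zero."""
    theta_dot = -Gamma @ Y.T @ e
    return theta_hat + theta_dot * dt
```

In the paper's observer an additional repulsive term keeps the estimate away from configurations where the matrix in (23) becomes singular.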
3.2. Stability Analysis
Theorem 5. Under the controller (29) and the observer (28), the general-image error is uniformly semiglobally practically asymptotically stable (USPAS), which implies USPAS of the image error. Moreover, the image-based leader-follower system with the omnidirectional camera is stable under Assumptions 2 and 3, provided the initial relative heading is bounded away from , .
Proof. The Lyapunov function is proposed as , where , , and . The differential of is , where . Equation (23) can be rewritten as . Substituting (29) into (33), can be rewritten as . Substituting (34) and the observer (28) into (32), (32) becomes . According to (24), . Then, (35) can be revised as , where is an odd function; thus the second term in (36) is nonpositive. There is a small positive , which can be adjusted by the gains , such that . If , ; thus the first term in (36) is negative. Moreover, the gain matrix should satisfy , so the third term in (36) is nonpositive. Therefore, when , . The scalar can be made arbitrarily small, so is USPAS according to [26].
The differential of (31) is . Due to the USPAS of , is also USPAS. The differential of is , where . Substituting (39) into (37), (37) can be rewritten as . The nominal system is exponentially stable when is satisfied. Obviously, is bounded. Therefore, the perturbed system (40) is stable, and is bounded when , by the stability theory of perturbed systems [27].
In summary, is USPAS, which is equivalent to USPAS of the image error, and the relative heading is bounded. Therefore, the image-based leader-follower system with the omnidirectional camera is stable.
4. Simulation Results
In this section, the simulation results are presented to validate the performance of the proposed algorithm.
A nonholonomic two-wheeled mobile robot is used in the simulation. An omnidirectional camera is fixed on the follower, which detects a feature point fixed on the leader. The simulation is based on the kinematics of the vehicles: the camera is assumed to detect the feature instantaneously, and the robot responds to control inputs without delay.
The upper bounds of leader’s linear and angular velocity are and . The coordinate of the feature point with respect to is . Two followers are introduced in the simulation. The mirror types are hyperbolic, and their parameters are both and . The camera intrinsic parameters are , , . The transfer angles are both , , and , respectively. The coordinates of with respect to are . The control gains are both and . The observer gains are , , , , , . The initial estimated parameters are both chosen as , . The initial pose of the leader is , , ; the initial poses of two followers are , , , , , , respectively. So the initial image points on Follower1’s and Follower2’s image plane are and , respectively. The desired positions of the leader relative to frame Follower1 and frame Follower2 are , and , , respectively. Then the desired image points on Follower1 and Follower2’s image plane are and , respectively.
In the first simulation, the leader runs in a straight line with . The results are shown in Figure 4. In the second simulation, the leader runs along a circle with and . The results are shown in Figure 5. In the third simulation, the leader runs along an arbitrary trajectory with time-varying linear and angular velocities (Figure 6). The results are shown in Figure 6. All results validate Theorem 5.
Figures 4-6: results of the first, second, and third simulations, respectively; each figure comprises panels (a)-(g).
The simulation results above validate the performance of the adaptive image-based leader-follower approach with the onboard omnidirectional camera. The image errors converge to approximately zero, and the results show the convergence of the leader-follower formation error in three different situations. Furthermore, the results validate the adaptive algorithm, by which the unknown extrinsic parameters and the unknown motion plane of the feature point can be estimated online.
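To make the closed-loop setup concrete, the following minimal sketch simulates a leader running in a straight line and a follower under proportional formation feedback. It illustrates only the simulation loop: unlike the paper's controller, it acts on the metric formation error and assumes the leader's speed is known; all gains and poses are illustrative:

```python
import numpy as np

def step(pose, v, w, dt):
    """Euler step of the unicycle model (x, y, heading)."""
    x, y, th = pose
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])

def simulate(steps=2000, dt=0.01):
    """Leader moves straight at 0.3 m/s; the follower regulates the
    leader's position, expressed in the follower frame, to a desired
    offset (1 m ahead) with proportional feedback plus a feedforward of
    the leader's speed.  Returns the final formation error."""
    leader = np.array([1.0, 0.0, np.pi / 2])
    follower = np.array([0.0, -1.0, np.pi / 2])
    desired = np.array([1.0, 0.0])   # leader 1 m ahead in the follower frame
    k_v, k_w = 1.0, 2.0
    err = None
    for _ in range(steps):
        leader = step(leader, 0.3, 0.0, dt)
        # Leader position in the follower frame (forward = local x-axis).
        c, s = np.cos(follower[2]), np.sin(follower[2])
        d = leader[:2] - follower[:2]
        rel = np.array([c * d[0] + s * d[1], -s * d[0] + c * d[1]])
        err = rel - desired
        v = k_v * err[0] + 0.3   # forward feedback + leader-speed feedforward
        w = k_w * err[1]         # steer toward the leader's bearing
        follower = step(follower, v, w, dt)
    return err
```

In the paper's scheme the feedforward of the leader's velocity is unnecessary: the filter-generated pseudo-error compensates for the image motion caused by the leader, and the feedback acts on the general-image error rather than on metric coordinates.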
5. Conclusions
In this paper, a new adaptive image-based controller using an omnidirectional camera, independent of the leader's velocity, has been proposed, together with an observer estimating the unknown extrinsic parameters of the camera and the unknown motion plane of the feature point. The Lyapunov method is used to prove USPAS of the image error and the formation error. Simulation results have validated the performance of the algorithm. Future work will focus on experiments in a real environment.
Appendix
The details of the matrices and are , where and denote the elements of and in the th row and th column, respectively. The detail of is . The detail of the matrix is , where , , , , , , , and , and denotes the element of .
The detail of regressor matrix is
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported in part by Shanghai RisingStar Program under Grant 14QA1402500, in part by International Cooperation Project of Science and Technology under Grant 2011DFA11780, in part by the Natural Science Foundation of China under Grants 61105095, 61473191, 61203361, and 61221003, and in part by China Domestic Research Project for the International Thermonuclear Experimental Reactor (ITER) under Grants 2012GB102001 and 2012GB102008.
References
[1] K. Choi, S. J. Yoo, J. B. Park, and Y. H. Choi, "Adaptive formation control in absence of leader's velocity information," IET Control Theory & Applications, vol. 4, no. 4, pp. 521–528, 2010.
[2] A. P. Dani, N. Gans, and W. E. Dixon, "Position-based visual servo control of leader-follower formation using image-based relative pose and relative velocity estimation," in Proceedings of the American Control Conference (ACC '09), pp. 5271–5276, June 2009.
[3] H. Poonawala, A. C. Satici, N. Gans, and M. W. Spong, "Formation control of wheeled robots with vision-based position measurement," in Proceedings of the American Control Conference (ACC '12), pp. 3173–3178, Montreal, Canada, June 2012.
[4] H. Y. Wang, S. Itani, T. Fukao, and N. Adachi, "Image-based visual adaptive tracking control of nonholonomic mobile robots," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 1, pp. 1–6, Maui, Hawaii, USA, November 2001.
[5] M. Liu, C. Pradalier, F. Pomerleau, and R. Siegwart, "Scale-only visual homing from an omnidirectional camera," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '12), pp. 3944–3949, 2012.
[6] M. Liu, C. Pradalier, and R. Siegwart, "Visual homing from scale with an uncalibrated omnidirectional camera," IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1353–1365, 2013.
[7] G. Caron, E. Marchand, and E. M. Mouaddib, "Photometric visual servoing for omnidirectional cameras," Autonomous Robots, vol. 35, no. 2-3, pp. 177–193, 2013.
[8] I. Markovic, F. Chaumette, and I. Petrovic, "Moving object detection, tracking and following using an omnidirectional camera on a mobile robot," in Proceedings of the IEEE International Conference on Robotics and Automation, 2014.
[9] A. K. Das, R. Fierro, V. Kumar, J. P. Ostrowski, J. Spletzer, and C. J. Taylor, "A vision-based formation control framework," IEEE Transactions on Robotics and Automation, vol. 18, no. 5, pp. 813–825, 2002.
[10] A. K. Das, R. Fierro, V. Kumar, B. Southall, J. Spletzer, and C. J. Taylor, "Real-time vision-based control of a nonholonomic mobile robot," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '01), pp. 1714–1719, May 2001.
[11] G. L. Mariottini, F. Morbidi, D. Prattichizzo, G. J. Pappas, and K. Daniilidis, "Leader-follower formations: uncalibrated vision-based localization and control," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 2403–2408, April 2007.
[12] G. L. Mariottini, G. Pappas, D. Prattichizzo, and K. Daniilidis, "Vision-based localization of leader-follower formations," in Proceedings of the 44th IEEE Conference on Decision and Control and the European Control Conference (CDC-ECC '05), pp. 635–640, December 2005.
[13] F. Morbidi, G. L. Mariottini, and D. Prattichizzo, "Observer design via immersion and invariance for vision-based leader-follower formation control," Automatica, vol. 46, no. 1, pp. 148–154, 2010.
[14] R. Vidal, O. Shakernia, and S. Sastry, "Formation control of nonholonomic mobile robots with omnidirectional visual servoing and motion segmentation," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 584–589, September 2003.
[15] O. Shakernia, R. Vidal, and S. Sastry, "Multibody motion estimation and segmentation from multiple central panoramic views," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '03), pp. 571–576, Taipei, Taiwan, September 2003.
[16] C. Geyer and K. Daniilidis, "A unifying theory for central panoramic systems and practical applications," in Proceedings of the European Conference on Computer Vision (ECCV '00), pp. 445–461, 2000.
[17] J. P. Barreto and H. Araujo, "Geometric properties of central catadioptric line images," in Proceedings of the European Conference on Computer Vision (ECCV '02), pp. 237–251, 2002.
[18] H. Wang, Y.-H. Liu, and D. Zhou, "Dynamic visual tracking for manipulators using an uncalibrated fixed camera," IEEE Transactions on Robotics, vol. 23, no. 3, pp. 610–617, 2007.
[19] Y.-H. Liu, H. Wang, C. Wang, and K. K. Lam, "Uncalibrated visual servoing of robots using a depth-independent interaction matrix," IEEE Transactions on Robotics, vol. 22, no. 4, pp. 804–817, 2006.
[20] H. Wang, Y. H. Liu, and D. Zhou, "Adaptive visual servoing using point and line features with an uncalibrated eye-in-hand camera," IEEE Transactions on Robotics, vol. 24, no. 4, pp. 843–857, 2008.
[21] Y. Liu, H. Wang, W. Chen, and D. Zhou, "Adaptive visual servoing using common image features with unknown geometric parameters," Automatica, vol. 49, no. 8, pp. 2453–2460, 2013.
[22] H. Wang, Y. H. Liu, W. Chen, and Z. Wang, "A new approach to dynamic eye-in-hand visual tracking using nonlinear observers," IEEE/ASME Transactions on Mechatronics, vol. 16, no. 2, pp. 387–394, 2011.
[23] H. Wang, Y.-H. Liu, and W. Chen, "Visual tracking of robots in uncalibrated environments," Mechatronics, vol. 22, no. 4, pp. 390–397, 2012.
[24] T. Burg, D. Dawson, J. Hu, and M. de Queiroz, "An adaptive partial state-feedback controller for RLED robot manipulators," IEEE Transactions on Automatic Control, vol. 41, no. 7, pp. 1024–1030, 1996.
[25] S. Purwar, I. N. Kar, and A. N. Jha, "Adaptive output feedback tracking control of robot manipulators using position measurements only," Expert Systems with Applications, vol. 34, no. 4, pp. 2789–2798, 2008.
[26] A. Chaillet and A. Loría, "Uniform semiglobal practical asymptotic stability for non-autonomous cascaded systems and applications," Automatica, vol. 44, no. 2, pp. 337–347, 2008.
[27] H. K. Khalil, Nonlinear Systems, 3rd edition, Prentice Hall, Upper Saddle River, NJ, USA, 2002.
Copyright
Copyright © 2015 Dejun Guo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.