Journal of Applied Mathematics
Volume 2015 (2015), Article ID 194251, 12 pages
http://dx.doi.org/10.1155/2015/194251
Research Article

Adaptive Image-Based Leader-Follower Approach of Mobile Robot with Omnidirectional Camera

Department of Automation, Shanghai Jiao Tong University and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240, China

Received 22 April 2014; Accepted 16 July 2014

Academic Editor: Guoqiang Hu

Copyright © 2015 Dejun Guo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper focuses on the problem of adaptive image-based leader-follower formation control of a mobile robot with an on-board omnidirectional camera. A calibrated omnidirectional camera is fixed on the follower in an arbitrary position, and a feature point representing the leader can be chosen at an arbitrary location. An adaptive image-based controller that does not depend on the velocity of the leader is proposed based on a filtering technique. In other words, relying only on the projection of the feature point onto the image plane, the follower can track the leader and achieve formation control. Moreover, an observer is introduced to estimate the unknown camera extrinsic parameters and the unknown parameters of the plane, expressed relative to the omnidirectional camera frame, in which the feature point moves. Finally, the Lyapunov method is applied to prove uniform semiglobal practical asymptotic stability (USPAS) of the closed-loop system. Simulation results are presented to validate the algorithm.

1. Introduction

The formation control problem has long been a research focus. Multiple robots moving in formation have better collaboration abilities than robots moving independently; for example, robots in formation can accomplish more complex tasks in less time. The leader-follower strategy is the most popular one because of its decentralized structure, feasibility, and scalability. In this approach, each follower maintains a desired relative pose to its leader, so the formation control problem can be decomposed into several distributed control problems.

Many methods focusing on the leader-follower strategy have been proposed using on-board laser sensors or perspective cameras. Choi et al. [1] proposed an adaptive position-based controller in which the relative pose is measured by an on-board laser sensor and the unknown leader's motion is estimated by a novel observer. Dani et al. [2] and Poonawala et al. [3] measured the relative pose by pose reconstruction using a perspective camera. Dani et al. [2] estimated the relative velocity with a nonlinear estimator, while Poonawala et al. [3] eliminated the need for the leader's velocity in the controller, so that no observer was required to estimate the leader's motion. Wang et al. [4] proposed an adaptive image-based controller based on the backstepping technique; the measurement of the relative pose could be eliminated and the unknown height of the feature was estimated by an observer, but the object's motion had to be measured accurately. However, lasers are more expensive than cameras, and both lasers and perspective cameras have a limited field of view, which restricts the robots' motion.

Compared with a perspective camera, an omnidirectional camera offers the robot a panoramic view and can detect objects anywhere in the 360° surrounding scene. Owing to this advantage, omnidirectional cameras have been applied in many visual servoing approaches [5–8], and many methods using an on-board omnidirectional camera have been developed for leader-follower formation control. Das et al. [9, 10] designed a position-based controller depending on input-output linearization, with the leader's motion estimated by an extended Kalman filter (EKF). Mariottini et al. [11–13] achieved leader-follower formation control using only the relative bearing measured by an uncalibrated omnidirectional camera; the relative distance was estimated by an EKF in [11, 12] and by an immersion and invariance-based observer in [13]. Vidal et al. [14] proposed an image-based controller based on input-output linearization using an omnidirectional camera; the optical flow method [15] was exploited to compensate for the unknown leader's motion, but this method relied on detecting a static point at all times. However, the above methods using omnidirectional cameras all require the camera to be mounted at the rotation center of the follower and the image plane to be parallel to the ground plane. These assumptions limit the placement of the camera and introduce system errors due to model mismatch. Moreover, most of these approaches still need to measure the relative pose, which is not the most direct way to achieve leader-follower control: transforming image information into position information is time-consuming and inaccurate.

In this paper, the omnidirectional camera can be fixed on the mobile robot in an arbitrary pose, and the image plane does not need to be parallel to the ground plane. The feature point on the leader can be chosen at an arbitrary location, and its position may be unknown. An adaptive image-based controller is developed that relies on a filter to eliminate the measurement or estimation of the leader's motion. An observer is then proposed to estimate the unknown pose of the omnidirectional camera relative to the follower and the unknown coefficients of the plane, expressed in the omnidirectional camera frame, in which the feature point moves. Finally, Lyapunov theory is used to prove uniform semiglobal practical asymptotic stability (USPAS) of the image error and the formation error. Simulation results validate the performance of the proposed algorithm.

2. Kinematics

2.1. Problem Statement

The coordinate frames are defined as shown in Figures 1 and 2. The world frame is fixed on the ground. Denote the follower frame by and the leader frame by . Define the omnidirectional camera frame as and the image plane frame as . and coincide with the robots' rotation centers, and and are parallel to the robots' forward directions. The planes and are parallel to the ground plane. Each robot's linear velocity is aligned with its -axis, and its angular velocity is orthogonal to the ground plane. The origin of is located at the focal point of the mirror, and its axis is aligned with the symmetry axis of the mirror. The optical axis of the camera coincides with , and the image plane is parallel to the plane . Axis is parallel to axis . The problem addressed is defined as follows.

Figure 1: Leader-follower system with omnidirectional camera.
Figure 2: The projection model of omnidirectional camera.

Problem 1. Given a desired position on the image plane, design an adaptive image-based controller that makes the feature point converge to an arbitrarily small circle around the desired position, while the extrinsic parameters of the omnidirectional camera, the position of the feature point on the leader, and the leader's velocity are all unknown.

Assumption 2. The velocity of the leader is bounded for practical reasons. The linear velocity and angular velocity of the leader are and , respectively, and and are their upper bounds; that is,

Assumption 3. Suppose that the parameters of the omnidirectional camera’s mirror and the intrinsic parameters of the camera are known. Both robots move on the ground plane.

Notation. The notation used in this paper is as follows: a bold letter denotes a vector or a matrix; otherwise, it denotes a scalar. and denote the identity matrix and the zero matrix, respectively. and denote the minimum and maximum eigenvalues of a positive-definite diagonal gain matrix, respectively. denotes the Euclidean norm of a vector. A presuperscript indicates the coordinate frame to which a variable is referred (e.g., denotes the position of the feature point with respect to ).

2.2. The Unified Model of Omnidirectional Camera

As shown in Figure 2, the omnidirectional camera consists of a curved mirror and a camera; for example, a parabolic mirror is combined with an orthographic camera, and a hyperbolic mirror is combined with a perspective camera. Details of the structure of the omnidirectional camera can be found in [16, 17].

The 3D coordinates of the feature point relative to frame are . As shown in Figure 3, the intersection between the reflected ray and the plane is called the general-image point , and is the proportional coordinate corresponding to . The proportional relationship is

where is similar to the depth information that appears in the case of perspective cameras and . The mirror parameters , , and are listed in Table 1. represents the image coordinates of the image point. The mapping between and the image point is

where , , , are the intrinsic parameters of the camera. can be computed from once the omnidirectional camera is calibrated. Therefore, without loss of generality, , in place of , is taken as the output of the system.
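For concreteness, the following is a minimal numerical sketch of the unified (sphere) catadioptric projection in the Geyer-Daniilidis form, with a single mirror parameter xi; the correspondence between xi and the paper's own mirror parameters in Table 1 is an assumption here, and all numerical values are hypothetical.

import numpy as np

def unified_projection(P, xi, K):
    """Project a 3D point P (in the mirror frame) with the unified catadioptric
    model: shift the projection centre by xi times the distance to the point,
    normalize to get the general-image point, then apply the intrinsics K."""
    X, Y, Z = P
    rho = np.linalg.norm(P)              # distance to the effective viewpoint
    m = np.array([X, Y, Z + xi * rho])   # general-image point, up to scale
    m = m / m[2]                         # normalize so the third coordinate is 1
    p = K @ m                            # homogeneous pixel coordinates
    return m, p[:2]

# Hypothetical intrinsic and mirror parameters, for illustration only.
K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])
m, uv = unified_projection(np.array([0.5, 0.2, 1.0]), xi=0.9, K=K)

Since K and the mirror parameter are known after calibration (Assumption 3), the general-image point m can always be recovered from the pixel measurement uv, which is why m can serve as the system output.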

Table 1: The parameters of mirrors.
Figure 3: (a) The projection model of parabolic mirror. (b) The projection model of hyperbolic mirror.
2.3. Kinematics

The linear and angular velocities of the are and , respectively. The world linear velocity of the feature point expressed in frame is . Then, the relative velocity of with respect to frame is

The differential of is

The differential of is

where

To eliminate , (7) can be rewritten, according to (2), as

Then, (9) can be rearranged in the more convenient matrix form

where the Jacobian matrices are
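For reference, the standard rigid-body relation behind this step can be stated in generic symbols (introduced here for illustration and possibly differing from the paper's notation): if a point P is observed from a camera frame C that moves with linear velocity v_C and angular velocity omega_C (both expressed in C), and the point itself has world velocity v_P, then

\[
{}^{C}\dot{\mathbf{P}} \;=\; -\,\mathbf{v}_{C} \;-\; \boldsymbol{\omega}_{C}\times{}^{C}\mathbf{P} \;+\; {}^{C}\mathbf{v}_{P}.
\]

Differentiating the projection equations of Section 2.2 along this relation yields the Jacobian form referred to above.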

Inspired by the depth-independent interaction matrix proposed in [18–23], a depth-like term is introduced. According to (2),

Thus, (10) can be transformed into
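To illustrate the idea behind the depth-independent interaction matrix, consider the simpler perspective, translation-only case rather than the paper's catadioptric model: for a normalized image point (u, v) = (X/Z, Y/Z), the classical interaction matrix contains the unknown inverse depth, but multiplying both sides by the depth removes it,

\[
\dot{\mathbf{s}} \;=\;
\begin{pmatrix} -\tfrac{1}{Z} & 0 & \tfrac{u}{Z} \\[2pt] 0 & -\tfrac{1}{Z} & \tfrac{v}{Z} \end{pmatrix}\mathbf{v}
\quad\Longrightarrow\quad
Z\,\dot{\mathbf{s}} \;=\;
\begin{pmatrix} -1 & 0 & u \\ 0 & -1 & v \end{pmatrix}\mathbf{v},
\]

so the right-hand side depends only on measurable image quantities and constant unknown parameters, which is what makes the adaptive design of Section 3 possible. The paper performs the analogous manipulation for the omnidirectional projection.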

The nonholonomic kinematic model of the mobile robots can be described as

where and denote the coordinates of with respect to , denotes the orientation of the mobile robot and is defined as the angle from axis to axis , and and represent the linear and angular velocities, respectively. Moreover, let denote the rotation matrix of relative to as

where , , and denote the -- rotation angles defined relative to the current frames and from to , respectively. Frame denotes the omnidirectional frame when , axis , and axis coincide with , axis , and axis , respectively. denotes the element of in the th row and th column. The constant denotes the 3D coordinates of the origin of with respect to . Note that and are caused by the motion of the follower only. Thus, the relation between and is as follows:

where the constant matrix is
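In the usual notation (symbols introduced here for illustration and possibly differing from the paper's), the unicycle model described above reads

\[
\dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = \omega,
\]

where (x, y) is the position of the rotation center, theta is the heading, and (v, omega) are the linear and angular velocity inputs.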

Moreover, the time-variant denotes the rotation matrix of with respect to ; , , and the time-variant denotes the difference in orientation between the leader and the follower. The time-invariant is the 3D coordinates of the feature point with respect to , and is caused by the motion of the leader only. Thus, the relation between and is as follows:

where the time-variant matrix is
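Since both robots move on the ground plane, the rotation between the leader and follower frames reduces to a rotation about the vertical axis by the heading difference, denoted here phi for illustration:

\[
R(\varphi) \;=\;
\begin{pmatrix}
\cos\varphi & -\sin\varphi & 0\\
\sin\varphi & \cos\varphi & 0\\
0 & 0 & 1
\end{pmatrix}.
\]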

Substituting (16) and (18) into (13), (13) can be rewritten as

The details of and are shown in the Appendix.

According to Assumption 3, the feature point and the omnidirectional camera both move in planes. Therefore, the feature point moves in a fixed plane relative to . Based on the plane equation, the unknown depth can be represented in terms of , , :

According to (2), (21) can be revised as

To eliminate in (20), substituting (22) into (20), (20) can be revised as

Note that , , and are unknown and are not included in the controller. Let denote the unknown parameters, including the camera's extrinsic parameters , , and the coefficient parameters of the plane equation (21). The parameterized is . In addition, can be parameterized in the linear form

The details of , and the regressor matrix can be found in the Appendix.
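The depth elimination above uses a standard trick; in generic symbols introduced here for illustration, if the feature point stays in a plane a X + b Y + c Z = 1 that is fixed in the camera frame, then dividing by Z gives

\[
\frac{1}{{}^{C}Z} \;=\; a\,\frac{{}^{C}X}{{}^{C}Z} \;+\; b\,\frac{{}^{C}Y}{{}^{C}Z} \;+\; c,
\]

so the inverse depth becomes a linear function of measurable normalized image coordinates with unknown constant coefficients a, b, c, which can therefore be grouped with the other unknown parameters and estimated online.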

3. Adaptive Image-Based Leader-Follower Approach with Omnidirectional Camera

In the image-based leader-follower approach, the position-based outputs, separation and bearing, can be transformed into an image point of the on-board omnidirectional camera, because there exists an injective mapping between an image point and a relative position with respect to . The general-image point is then taken as the output, owing to the injective mapping between a general-image point and an image point. Therefore, when the general-image point converges to the desired one, the leader-follower formation is achieved. Furthermore, the desired general-image point can be recorded by the omnidirectional camera when the leader's feature point is located at the desired position relative to the follower, which is known as the "teach-by-showing" approach.

3.1. Design of Controller and Observer

Proposition 4. To avoid the singular point of the matrix , the determinant must not be zero. Let denote the estimated unknown parameters, and let denote the matrix in (23) containing .

According to [18], the repulsive potential field is introduced as

and the potential force is introduced as

where , , , and are all positive constants, and
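One common barrier-type form of such a potential (an illustrative sketch only, not necessarily the exact expression used in [18] or in this paper) penalizes the estimate as the determinant of the estimated matrix approaches zero:

\[
U(\hat{\theta}) \;=\;
\begin{cases}
\dfrac{\eta}{2}\left(\dfrac{1}{|\det\hat{A}|}-\dfrac{1}{\epsilon}\right)^{2}, & |\det\hat{A}|<\epsilon,\\[8pt]
0, & |\det\hat{A}|\ge\epsilon,
\end{cases}
\qquad
F_{r} \;=\; -\,\frac{\partial U}{\partial \hat{\theta}},
\]

where epsilon > 0 is the activation threshold and eta > 0 a gain; the negative gradient pushes the parameter estimate away from the singular set while leaving it untouched elsewhere.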

Define the general-image error as . The observer is proposed as

which can be considered as the update law of the estimated parameters . The regressor matrix is calculated from (24). The second term in (28) serves as a repulsive force which pushes away from the singular point of the matrix . are all symmetric positive-definite gain matrices.
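In generic symbols (a structural sketch only; the paper's exact law is (28)), such an observer combines a gradient term driven by the image error with the repulsive force of Proposition 4:

\[
\dot{\hat{\theta}} \;=\; -\,\Gamma_{1}\,Y^{\top}e \;+\; \Gamma_{2}\,F_{r}(\hat{\theta}),
\]

where Y is the regressor, e is the general-image error, and Gamma_1, Gamma_2 are positive-definite gain matrices.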

Inspired by [24, 25], a general-image-based filter is proposed as

where can be regarded as a pseudoerror which compensates for the general-image velocity caused by the leader's motion, and are positive-definite diagonal gain matrices. The controller is proposed as

where is the inverse of the matrix , , and are positive-definite diagonal gain matrices, and and are positive gains.
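The closed-loop structure can be sketched as follows. This is a minimal Python illustration of the architecture only: regressor and interaction_matrix are placeholders standing in for the expressions given in the Appendix, and the update and control laws are generic gradient/feedback forms rather than the exact equations (28) and (29).

import numpy as np

# Placeholder models; the true regressor and estimated interaction matrix
# depend on the catadioptric projection model and are given in the Appendix.
def regressor(s):
    return np.array([[s[0], s[1], 1.0, 0.0],
                     [0.0, s[0], s[1], 1.0]])

def interaction_matrix(s, theta_hat):
    return np.eye(2) + 0.1 * np.outer(s, theta_hat[:2])

def control_step(s, s_d, theta_hat, Gamma, K, dt):
    """One iteration of a generic adaptive image-based servo loop:
    image error -> gradient-type parameter update -> velocity command."""
    e = s - s_d                                        # general-image error
    Y = regressor(s)
    A_hat = interaction_matrix(s, theta_hat)
    theta_hat = theta_hat - dt * Gamma @ (Y.T @ e)     # observer / update law
    u = -np.linalg.solve(A_hat, K @ e)                 # follower (v, omega) command
    return u, theta_hat

# Example call with hypothetical values.
u, th = control_step(np.array([0.3, -0.1]), np.array([0.2, 0.0]),
                     np.zeros(4), 0.5 * np.eye(4), np.diag([1.0, 1.0]), dt=0.01)

In the paper's scheme the pseudoerror produced by the filter, rather than the raw image error, drives the command, which is what removes the need to measure or estimate the leader's velocity.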

3.2. Stability Analysis

Theorem 5. Utilizing the controller (29) and the observer (28), the general-image error is uniformly semiglobally practically asymptotically stable (USPAS), which implies USPAS of the image error. Moreover, the image-based leader-follower system with the omnidirectional camera is stable under Assumptions 2 and 3, provided the initial relative heading is bounded away from , .

Proof. The Lyapunov function is proposed as

where , , and . The differential of is

where . Equation (23) can be refined as

Substituting (29) into (33), can be rewritten as

Substituting (34) and the observer (28) into (32), (32) can be revised as

According to (24), . Then, (35) can be revised as

where is an odd function; thus the second term in (36) is nonpositive. There is a small positive , which can be adjusted by the gains , such that . If , ; thus the first term in (36) is negative. Moreover, the gain matrix should satisfy ; thus the third term in (36) is nonpositive. Therefore, when , . The scalar can be made arbitrarily small, so is USPAS according to [26].
The differential of (31) is

Due to USPAS of , is also USPAS. The differential of is

where

Substituting (39) into (37), (37) can be rewritten as

The nominal system is exponentially stable when is satisfied. Obviously, is bounded. Therefore, the perturbed system (40) is stable, and is bounded when , based on the stability theory of perturbed systems [27].
In summary, is USPAS, which is equivalent to USPAS of the image error. Also, the relative heading is bounded. Therefore, the image-based leader-follower system with the omnidirectional camera is stable.

4. Simulation Results

In this section, the simulation results are presented to validate the performance of the proposed algorithm.

A nonholonomic two-wheeled mobile robot is used in the simulation. An omnidirectional camera is fixed on the follower, and the follower detects a feature point fixed on the leader. The simulation is based on the kinematics of the vehicles: the camera is assumed to detect the object instantaneously, and the robot is assumed to respond to the control input instantly.
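A minimal sketch of such a kinematics-only simulation is shown below (purely illustrative: the leader follows a preset circular command, and the follower's command, which in the paper comes from the controller of Section 3, is replaced here by a fixed placeholder).

import numpy as np

def unicycle_step(pose, v, w, dt):
    """Euler integration of the unicycle kinematics used for both robots."""
    x, y, th = pose
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])

leader = np.array([0.0, 0.0, 0.0])
follower = np.array([-1.0, -0.5, 0.0])
for _ in range(2000):
    leader = unicycle_step(leader, v=0.2, w=0.1, dt=0.01)      # circular leader motion
    follower = unicycle_step(follower, v=0.2, w=0.1, dt=0.01)  # placeholder command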

The upper bounds of the leader's linear and angular velocities are and . The coordinates of the feature point with respect to are . Two followers are introduced in the simulation. The mirror types are hyperbolic, and their parameters are both and . The camera intrinsic parameters are , , . The transfer angles are both , , and , respectively. The coordinates of with respect to are . The control gains are both and . The observer gains are , , , , , . The initial estimated parameters are both chosen as , . The initial pose of the leader is , , ; the initial poses of the two followers are , , and , , , respectively. Thus, the initial image points on Follower1's and Follower2's image planes are and , respectively. The desired positions of the leader relative to frame Follower1 and frame Follower2 are , and , , respectively. Then the desired image points on Follower1's and Follower2's image planes are and , respectively.

In the first simulation, the leader moves along a straight line with . The results are shown in Figure 4. In the second simulation, the leader moves along a circle with and . The results are shown in Figure 5. In the third simulation, the leader moves along an arbitrary trajectory with varying linear and angular velocities; the results are shown in Figure 6. All results validate Theorem 5.

Figure 4: Line tracking results. (a) Trajectory in world frame. (b) Image trajectory of Follower1. (c) Image trajectory of Follower2. (d) Image error of Follower1. (e) Image error of Follower2. (f) Position error in frame Follower1. (g) Position error in frame Follower2.
Figure 5: Circle tracking results. (a) Trajectory in world frame. (b) Image trajectory of Follower1. (c) Image trajectory of Follower2. (d) Image error of Follower1. (e) Image error of Follower2. (f) Position error in frame Follower1. (g) Position error in frame Follower2.
Figure 6: Arbitrary trajectory tracking results. (a) Trajectory in world frame. (b) Image trajectory of Follower1. (c) Image trajectory of Follower2. (d) Image error of Follower1. (e) Image error of Follower2. (f) Position error in frame Follower1. (g) Position error in frame Follower2.

The simulation results above validate the performance of the adaptive image-based leader-follower approach with the on-board omnidirectional camera. The image errors eventually converge to approximately zero, and the results show the convergence of the leader-follower formation error in three different situations. Furthermore, the results also validate the adaptive algorithm, by which the unknown extrinsic parameters as well as the unknown motion plane of the feature point can be estimated online.

5. Conclusions

In this paper, a new adaptive image-based controller using an omnidirectional camera and independent of the leader's velocity has been proposed, together with an observer that estimates the unknown extrinsic parameters of the camera and the unknown motion plane of the feature point. The Lyapunov method is used to prove the USPAS of the image error and the formation error. The simulation results have validated the performance of the algorithm. Future work will focus on experiments in real environments.

Appendix

The details of the matrices and are

where and denote the elements of and in the th row and th column, respectively. The detail of is

The detail of the matrix is

where , , , , , , , and , and denotes the element of .

The detail of the regressor matrix is

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by Shanghai Rising-Star Program under Grant 14QA1402500, in part by International Cooperation Project of Science and Technology under Grant 2011DFA11780, in part by the Natural Science Foundation of China under Grants 61105095, 61473191, 61203361, and 61221003, and in part by China Domestic Research Project for the International Thermonuclear Experimental Reactor (ITER) under Grants 2012GB102001 and 2012GB102008.

References

  1. K. Choi, S. J. Yoo, J. B. Park, and Y. H. Choi, “Adaptive formation control in absence of leader's velocity information,” IET Control Theory & Applications, vol. 4, no. 4, pp. 521–528, 2010.
  2. A. P. Dani, N. Gans, and W. E. Dixon, “Position-based visual servo control of leader-follower formation using image-based relative pose and relative velocity estimation,” in Proceedings of the American Control Conference (ACC '09), pp. 5271–5276, June 2009.
  3. H. Poonawala, A. C. Satici, N. Gans, and M. W. Spong, “Formation control of wheeled robots with vision-based position measurement,” in Proceedings of the American Control Conference (ACC '12), pp. 3173–3178, Montreal, Canada, June 2012.
  4. H. Y. Wang, S. Itani, T. Fukao, and N. Adachi, “Image-based visual adaptive tracking control of nonholonomic mobile robots,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 1, pp. 1–6, Maui, Hawaii, USA, November 2001.
  5. M. Liu, C. Pradalier, F. Pomerleau, and R. Siegwart, “Scale-only visual homing from an omnidirectional camera,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '12), pp. 3944–3949, 2012.
  6. M. Liu, C. Pradalier, and R. Siegwart, “Visual homing from scale with an uncalibrated omnidirectional camera,” IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1353–1365, 2013.
  7. G. Caron, E. Marchand, and E. M. Mouaddib, “Photometric visual servoing for omnidirectional cameras,” Autonomous Robots, vol. 35, no. 2-3, pp. 177–193, 2013.
  8. I. Markovic, F. Chaumette, and I. Petrovic, “Moving object detection, tracking and following using an omnidirectional camera on a mobile robot,” in Proceedings of the IEEE International Conference on Robotics and Automation, 2014.
  9. A. K. Das, R. Fierro, V. Kumar, J. P. Ostrowski, J. Spletzer, and C. J. Taylor, “A vision-based formation control framework,” IEEE Transactions on Robotics and Automation, vol. 18, no. 5, pp. 813–825, 2002.
  10. A. K. Das, R. Fierro, V. Kumar, B. Southall, J. Spletzer, and C. J. Taylor, “Real-time vision-based control of a nonholonomic mobile robot,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '01), pp. 1714–1719, May 2001.
  11. G. L. Mariottini, F. Morbidi, D. Prattichizzo, G. J. Pappas, and K. Daniilidis, “Leader-follower formations: uncalibrated vision-based localization and control,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 2403–2408, April 2007.
  12. G. L. Mariottini, G. Pappas, D. Prattichizzo, and K. Daniilidis, “Vision-based localization of leader-follower formations,” in Proceedings of the 44th IEEE Conference on Decision and Control and the European Control Conference (CDC-ECC '05), pp. 635–640, December 2005.
  13. F. Morbidi, G. L. Mariottini, and D. Prattichizzo, “Observer design via immersion and invariance for vision-based leader-follower formation control,” Automatica, vol. 46, no. 1, pp. 148–154, 2010.
  14. R. Vidal, O. Shakernia, and S. Sastry, “Formation control of nonholonomic mobile robots with omnidirectional visual servoing and motion segmentation,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 584–589, September 2003.
  15. O. Shakernia, R. Vidal, and S. Sastry, “Multibody motion estimation and segmentation from multiple central panoramic views,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '03), pp. 571–576, Taipei, Taiwan, September 2003.
  16. C. Geyer and K. Daniilidis, “A unifying theory for central panoramic systems and practical applications,” in Proceedings of the European Conference on Computer Vision, pp. 445–461, 2000.
  17. J. P. Barreto and H. Araujo, “Geometric properties of central catadioptric line images,” in Proceedings of the European Conference on Computer Vision, pp. 237–251, 2002.
  18. H. Wang, Y.-H. Liu, and D. Zhou, “Dynamic visual tracking for manipulators using an uncalibrated fixed camera,” IEEE Transactions on Robotics, vol. 23, no. 3, pp. 610–617, 2007.
  19. Y.-H. Liu, H. Wang, C. Wang, and K. K. Lam, “Uncalibrated visual servoing of robots using a depth-independent interaction matrix,” IEEE Transactions on Robotics, vol. 22, no. 4, pp. 804–817, 2006.
  20. H. Wang, Y. H. Liu, and D. Zhou, “Adaptive visual servoing using point and line features with an uncalibrated eye-in-hand camera,” IEEE Transactions on Robotics, vol. 24, no. 4, pp. 843–857, 2008.
  21. Y. Liu, H. Wang, W. Chen, and D. Zhou, “Adaptive visual servoing using common image features with unknown geometric parameters,” Automatica, vol. 49, no. 8, pp. 2453–2460, 2013.
  22. H. Wang, Y. H. Liu, W. Chen, and Z. Wang, “A new approach to dynamic eye-in-hand visual tracking using nonlinear observers,” IEEE/ASME Transactions on Mechatronics, vol. 16, no. 2, pp. 387–394, 2011.
  23. H. Wang, Y.-H. Liu, and W. Chen, “Visual tracking of robots in uncalibrated environments,” Mechatronics, vol. 22, no. 4, pp. 390–397, 2012.
  24. T. Burg, D. Dawson, J. Hu, and M. de Queiroz, “An adaptive partial state-feedback controller for RLED robot manipulators,” IEEE Transactions on Automatic Control, vol. 41, no. 7, pp. 1024–1030, 1996.
  25. S. Purwar, I. N. Kar, and A. N. Jha, “Adaptive output feedback tracking control of robot manipulators using position measurements only,” Expert Systems with Applications, vol. 34, no. 4, pp. 2789–2798, 2008.
  26. A. Chaillet and A. Loría, “Uniform semiglobal practical asymptotic stability for non-autonomous cascaded systems and applications,” Automatica, vol. 44, no. 2, pp. 337–347, 2008.
  27. H. K. Khalil, Nonlinear Systems, 3rd edition, Prentice Hall, Upper Saddle River, NJ, USA, 2002.