Research Article  Open Access
Huangsheng Xie, Guodong Li, Yuexin Wang, Zhihe Fu, Fengyu Zhou, "Research on Visual Servo Grasping of Household Objects for Nonholonomic Mobile Manipulator", Journal of Control Science and Engineering, vol. 2014, Article ID 315396, 13 pages, 2014. https://doi.org/10.1155/2014/315396
Research on Visual Servo Grasping of Household Objects for Nonholonomic Mobile Manipulator
Abstract
This paper focuses on the problem of visual servo grasping of household objects for a nonholonomic mobile manipulator. First, a new kind of artificial object mark based on the QR (Quick Response) Code is designed, which can be affixed to the surface of household objects. Second, after summarizing the vision-based autonomous mobile manipulation system as a generalized manipulator, the generalized manipulator’s kinematic model is established, its analytical inverse kinematic solutions are acquired, and a novel active-vision-based camera calibration method is proposed to determine the hand-eye relationship. Finally, a visual servo switching control law is designed to control the service robot to finish the object grasping operation. Experimental results show that the QR Code-based artificial object mark can overcome the difficulties brought by the variety of household objects and the complexity of their manipulation, and that the proposed visual servo scheme enables the service robot to grasp and deliver objects efficiently.
1. Introduction
A classical mobile manipulator system (MMS) consists of a manipulator mounted on a nonholonomic mobile platform. This arrangement considerably extends the manipulator’s workspace and is widely used in service robot applications [1, 2]. The development of the MMS mainly involves two classical topics, namely, motion planning [3–8] and coordinated control [9–13], which are used to overcome the mobile platform’s nonholonomic constraint and make the MMS move quickly and efficiently.
When robots operate in unstructured environments, it is essential to include exteroceptive sensory information in the control loop. In particular, visual information provided by vision sensors such as charge-coupled device (CCD) cameras guarantees accurate positioning, robustness to calibration uncertainties, and reactivity to environmental changes. Much of the work relating CCD cameras and manipulators has focused on the manipulator’s visual servo control, which specifies robotic tasks (such as object grasping and assembly) in terms of desired image features extracted from a target object. Overviews of visual servoing can be found in [14–16]. In general, visual servo approaches can be divided into three kinds, namely, position-based visual servoing (PBVS) [17, 18], image-based visual servoing (IBVS) [19, 20], and hybrid visual servoing (HYBVS) [21–23]. In PBVS, the feedback signal in the vision loop is the intuitive relative 3D pose between the current and desired cameras, estimated from the current and desired image features by homography or fundamental matrix estimation and decomposition. In IBVS, the feedback signals are image features whose rates of change are related to the velocity twist of the camera via the image Jacobian matrix (also called the interaction matrix). Compared with PBVS, IBVS is robust to perturbations of the robot/camera models and can keep the image features in the field of view (FOV) of the camera through path planning [24]. But the drawbacks are also obvious: the depth information in the interaction matrix needs to be estimated, and only local stability can be guaranteed for most IBVS schemes. HYBVS was proposed to combine the advantages of PBVS and IBVS. In HYBVS, the feedback signals consist of a relative 3D pose and image features; the former is used to control a subset of the camera configuration vector, while the latter regulate the remaining components.
Combining CCD cameras with mobile robots leads to applications in vision-based autonomous navigation control. Ma et al. [25] developed a vision-guided navigation system in which a nonholonomic mobile robot tracks an arbitrarily shaped continuous ground curve. Dixon et al. [26] presented an adaptive tracking controller for a wheeled mobile robot with an uncalibrated camera system, in which the controller copes with the parameter uncertainty of both the mechanical dynamics and the camera system. Amarasinghe et al. [27] developed a vision-based hybrid control scheme for autonomous parking of a mobile robot, whose controller consists of a discrete-event controller and a pixel-error-driven proportional controller. Vassallo et al. [28] presented a similar project in which a vision-based mobile robot attempts to navigate autonomously in a building.
The newest trend is to integrate CCD cameras into a mobile manipulator to form a vision-based mobile manipulation system (VBMMS). Thanks to the capabilities of the vision subsystem, a VBMMS can work in an unstructured environment and has wider applications than a fixed-base manipulator or a mobile platform alone. Due to the lack of accurate and robust positioning performance, however, very few physical implementations of VBMMS have been reported. De Luca et al. [29] considered the task-oriented modeling of the differential kinematics of nonholonomic mobile manipulators and developed an image-based controller for a VBMMS, but their approach is illustrated only through simulation, not physical implementation. Mansard et al. [30] attempted to control a humanoid robot to grasp an object while walking. Wang et al. [31] developed a robust vision-based mobile manipulation system for wheeled mobile robots, in which an innovative controller with machine learning based on Q-learning is proposed to guarantee the visibility of the visual features during the servoing process.
This paper presents a physical implementation of a VBMMS in a service robot intelligent space. It makes two basic contributions. First, after summarizing the VBMMS as a generalized manipulator, its kinematics is analyzed analytically, and an active-vision-based camera calibration method is then proposed to determine the hand-eye relationship. Second, a novel switching control strategy is proposed which switches between eye-fixed approximation and position-based static look-and-move grasping. The remainder of the paper is organized as follows. Section 2 introduces the design of the QR Code-based artificial object mark. In Section 3, the VBMMS is summarized as a generalized manipulator, and its kinematics, inverse kinematics, and hand-eye relationship determination are discussed. In Section 4, the switching control strategy which switches between eye-fixed approximation and static look-and-move grasping is designed. Two experiments are presented in Section 5 to validate the designed switching controller. Conclusions are drawn in Section 6.
2. Design of the QR Code-Based Artificial Object Mark
As shown in Figure 1, the QR Code-based artificial object mark is composed of two parts: the internal information representation part, which includes the object’s property and operation information, and the blue concentric ring region, which is called the external identification part. Because the external identification part is easy to detect, the mark can be recognized rapidly in a complex home environment using a vision sensor.
The coding of information stored in internal information representation part of the mark is shown in Table 1.

It can be seen from Table 1 that there are two different kinds of information, namely, the object’s property information and the object’s operation information. The property information includes the name, serial number, sizes, and material, in which the name and serial number serve as the unique identification of an object and the sizes tell the robot how wide to open its gripper. The operation information includes the operation force, position, and orientation, which help the robot finish the grasping operation in an appropriate way.
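To make this coding concrete, the mark’s payload can be sketched as a small serializable record. This is an illustrative sketch only: the field names, units, and the JSON encoding are assumptions for demonstration, not the exact layout of Table 1.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ObjectMark:
    # Property information: identification and geometry
    name: str
    serial_number: str
    size_mm: tuple          # (width, height, depth); tells the gripper how wide to open
    material: str
    # Operation information: how the object should be grasped
    grasp_force_n: float
    grasp_position_mm: tuple
    grasp_orientation_deg: tuple

    def encode(self) -> str:
        """Serialize to the JSON string stored inside the QR Code."""
        return json.dumps(asdict(self))

    @staticmethod
    def decode(payload: str) -> "ObjectMark":
        # JSON turns tuples into lists, so convert them back on the way in
        return ObjectMark(**{k: tuple(v) if isinstance(v, list) else v
                             for k, v in json.loads(payload).items()})

mark = ObjectMark("mug", "HH-0042", (80, 95, 80), "ceramic",
                  8.0, (0, 40, 0), (0, 0, 90))
restored = ObjectMark.decode(mark.encode())
```

The round trip through `encode`/`decode` is what the robot would perform after reading the QR Code off the object’s surface.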
3. Kinematics, Inverse Kinematics, and Hand-Eye Relationship Determination of the VBMMS
As shown in Figure 2, the eye-in-hand type VBMMS combines a nonholonomic tracked mobile robot (TMR), a 4-DOF (degree of freedom) manipulator, and a CMOS camera. The Grandar ASRF type TMR uses a differential drive structure, which simplifies steering control. The Schunk PowerCube 4-DOF manipulator is mounted on the TMR, and the Gsou V80 CMOS camera is mounted on the manipulator’s end-effector. In addition, the VBMMS is equipped with an onboard computer whose computational capability supports real-time performance of the system.
3.1. Kinematics
Due to the nonholonomic constraint of the TMR and the nonredundancy of the 4-DOF manipulator, using the VBMMS to complete a grasping task is very difficult, and so far hardly any related work can be found. Taking into account the difficulties of controlling the TMR and the manipulator separately, the VBMMS is summarized as the generalized manipulator shown in Figure 2. In the generalized manipulator, the TMR is considered as a 3-DOF (rotation-translation-rotation) manipulator, and the manipulator’s first degree of freedom is omitted because it coincides with the TMR’s third degree. The generalized manipulator therefore has six degrees of freedom, which guarantees that the end-effector can reach an arbitrary position with an arbitrary orientation.
Table 2 lists the modified DH parameters of each link of the serial-link manipulator, where α_{i−1}, a_{i−1}, θ_i, and d_i denote the twist, length, angle, and offset of the ith link, respectively. In the last column of the table, R stands for revolute while T stands for prismatic. For a prismatic joint, d_i is the joint variable, while for a revolute joint, θ_i is the joint variable; the value ranges are given in the table.

Table 2 yields a 4 × 4 homogeneous transformation matrix representing each link’s coordinate frame with respect to the previous link’s coordinate frame. Thereby, the representation of the end-effector coordinate frame with respect to the base coordinate frame is obtained as the product of the six link transformations, where (2)–(13) use the usual sine and cosine abbreviations of the joint angles.
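The chaining of link transformations described above can be sketched in code. The following is a minimal sketch of forward kinematics under the modified DH (Craig) convention used in Table 2; the function names are ours, and the actual parameter values of the generalized manipulator are not reproduced here.

```python
import numpy as np

def dh_transform(alpha, a, theta, d):
    """Homogeneous transform of link i w.r.t. link i-1 (modified DH, Craig convention)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[ct,      -st,     0.0,   a],
                     [st * ca,  ct * ca, -sa,  -d * sa],
                     [st * sa,  ct * sa,  ca,   d * ca],
                     [0.0,      0.0,     0.0,   1.0]])

def forward_kinematics(dh_rows):
    """Chain the per-link transforms: 0T6 = 0T1 . 1T2 . ... . 5T6."""
    T = np.eye(4)
    for alpha, a, theta, d in dh_rows:
        T = T @ dh_transform(alpha, a, theta, d)
    return T
```

Feeding the six rows of Table 2 (with the joint variables substituted in) into `forward_kinematics` gives the end-effector pose with respect to the base frame.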
It is well known that the generalized manipulator Jacobian matrix expressed in the end-effector coordinate frame transforms velocities in joint space into velocities of the end-effector in Cartesian space. For the 6-DOF generalized manipulator, the end-effector Cartesian velocity equals this Jacobian multiplied by the joint velocity vector, where the resulting 6-vector is the end-effector’s Cartesian velocity with respect to its own frame. Based upon the resolved link transformation matrices, the Jacobian can be determined as follows: for a prismatic joint, the ith column of the Jacobian can be constructed as shown below; for a revolute joint, the ith column is constructed analogously. The complete Jacobian can finally be assembled from these columns.
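As an illustration of how such a Jacobian is assembled column by column, the following sketch uses the standard geometric construction in the base frame (revolute joint: v = z × (p_e − p), ω = z; prismatic joint: v = z, ω = 0). This is the textbook-equivalent construction, not necessarily the paper’s exact per-column formulas, and a frame change would still be needed to express the result in the end-effector frame.

```python
import numpy as np

def geometric_jacobian(transforms, joint_types):
    """Base-frame Jacobian.
    transforms[i] = 0T_i for i = 0..n (with 0T_0 = I, last entry = end-effector).
    joint_types[i] in {'R', 'T'} for revolute / prismatic joint i+1."""
    p_e = transforms[-1][:3, 3]                  # end-effector position
    cols = []
    for i, jt in enumerate(joint_types):
        z = transforms[i][:3, 2]                 # joint axis: z of frame i
        p = transforms[i][:3, 3]                 # origin of frame i
        if jt == 'R':                            # revolute: v = z x (p_e - p), w = z
            cols.append(np.hstack([np.cross(z, p_e - p), z]))
        else:                                    # prismatic: v = z, w = 0
            cols.append(np.hstack([z, np.zeros(3)]))
    return np.column_stack(cols)
```

To express this Jacobian in the end-effector frame, premultiply by a block-diagonal matrix built from the transpose of the end-effector rotation.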
3.2. Inverse Kinematics
From (2)–(13), the manipulator has 6 joints, which allows an arbitrary end-effector pose. In order to reach a specified end-effector pose, the inverse kinematic solutions can be acquired by separating the unknown joint variables in (2)–(13).
By premultiplying both sides of (1) by the inverse of the first link transformation, two different expressions for the same matrix are obtained. Equating these two forms yields four sets of analytical inverse kinematic solutions:
Given an arbitrary end-effector pose whose orientation is expressed by the RPY (roll-pitch-yaw) description, the four sets of inverse kinematic solutions can be solved by (19), as shown in Table 3.

After all the solutions are obtained, they are checked against the joints’ value ranges, and the optimal set is then selected using an optimality criterion such as the shortest joint-space path.
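This selection step can be sketched as follows, assuming the candidate solutions, the joint limits, and the current configuration are available; the shortest-path criterion here is simply the Euclidean distance in joint space.

```python
import numpy as np

def select_ik_solution(solutions, limits, q_current):
    """Keep the solutions that respect the joint limits, then pick the one
    closest to the current configuration (shortest joint-space path)."""
    best, best_cost = None, np.inf
    for q in solutions:
        q = np.asarray(q, dtype=float)
        if all(lo <= qi <= hi for qi, (lo, hi) in zip(q, limits)):
            cost = np.linalg.norm(q - q_current)   # shortest-path criterion
            if cost < best_cost:
                best, best_cost = q, cost
    return best          # None if no solution is admissible
```

Passing the four analytical solution sets of (19) through this filter yields the configuration the controller actually commands.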
3.3. Hand-Eye Relationship Determination
Hand-eye relationship determination is a key issue in the visual servo control of a robot hand-eye system, and much work has been done on it. In this paper, we propose a novel method based on Zhang’s camera calibration method and tensor theory. Zhang’s algorithm, proposed in 1999, is representative because of its ease of use, flexibility, and high accuracy [32]. The algorithm only requires the camera to observe a planar pattern (such as a planar checkerboard) shown at a few different orientations. A closed-form solution of the camera intrinsic parameter matrix and of the pose of the planar object’s coordinate frame with respect to the camera frame can then be computed, followed by a nonlinear refinement based on the maximum likelihood criterion to improve the accuracy.
Figure 3 shows the scheme of hand-eye relationship determination, with the current end-effector frame and the current camera frame.
When the manipulator executes the ith known movement, the two frames move to a new end-effector frame and a new camera frame, respectively.
From Figure 3, the closed kinematic chain gives (20), in which the unknown term is the hand-eye relationship, the end-effector motion is known from the known movement of the manipulator, and the two camera poses are known from Zhang’s method.
Noting the decomposition in (21) and substituting it into (20), one obtains (22).
It can be seen from (22) that once the unknown rotation is solved, the translation will be solved easily with (23). On the other hand, (22) can be summarized in tensor form: considering a matrix to be a second-order mixed tensor whose element in the ith row and jth column is indexed accordingly, the equation can be written as (24), where two of the indexes are free and the other two are dummy (summation) indexes. Taking the Kronecker delta identity (25) into account, (24) can be written as (26).
According to the different values of the two free indexes, nine scalar equations are acquired from (26). Stacking the row vectors of the unknown rotation matrix into a single column vector, (26) can finally be written as the homogeneous linear system (27), whose coefficient matrix and elements are given in (28).
Considering that there are several sets of known manipulator movements, with one coefficient block corresponding to each movement, stacking them yields (29).
Based upon the least-squares solution of a homogeneous system of linear equations, the solution vector is the last column of V in the singular value decomposition UΣV⊤ of the stacked coefficient matrix. Because the unknown rotation matrix has a Frobenius norm of √3 and a determinant of +1, the singular vector can be rescaled and its sign fixed to determine the rotation as in (30) and (31). Furthermore, we choose the orthogonally refined rotation as the final solution of (22).
After solving the rotation, the unknown translation can also be solved using the following equation:
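The overall procedure — stack the Kronecker-product equations, take the least-squares null vector via SVD, rescale using the Frobenius-norm-√3 property of a rotation, fix the sign by the determinant, orthogonally refine, then solve the translation by linear least squares — can be sketched as follows for the classical AX = XB hand-eye formulation. The function and variable names are ours, and the code is a sketch of the same family of method, not a verbatim transcription of (20)–(31).

```python
import numpy as np

def solve_hand_eye(motions):
    """Solve A_i X = X B_i for the hand-eye transform X = (R, t).
    motions: list of (A, B) 4x4 homogeneous pairs
             (end-effector motion A_i, camera motion B_i)."""
    I3 = np.eye(3)
    # Rotation: R_A R = R R_B  =>  (I (x) R_A - R_B^T (x) I) vec(R) = 0
    # (column-major vec convention)
    M = np.vstack([np.kron(I3, A[:3, :3]) - np.kron(B[:3, :3].T, I3)
                   for A, B in motions])
    _, _, Vt = np.linalg.svd(M)
    R = Vt[-1].reshape(3, 3, order='F')          # least-squares null vector
    R *= np.sqrt(3.0) / np.linalg.norm(R)        # a rotation has Frobenius norm sqrt(3)
    if np.linalg.det(R) < 0:                     # fix the sign ambiguity
        R = -R
    U, _, Vt2 = np.linalg.svd(R)                 # orthogonal refinement onto SO(3)
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt2)]) @ Vt2
    # Translation: (R_A - I) t = R t_B - t_A, stacked least squares
    C = np.vstack([A[:3, :3] - I3 for A, _ in motions])
    d = np.concatenate([R @ B[:3, 3] - A[:3, 3] for A, B in motions])
    t, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4); X[:3, :3] = R; X[:3, 3] = t
    return X
```

At least two motions with non-parallel rotation axes are required for the rotation to be uniquely determined, which matches the multi-image procedure of the experiment in Section 5.1.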
4. Design of the Visual Servo Switching Control Scheme for Grasping
The visual servo control scheme designed in this section consists of two steps: eye-fixed approximation of the household object and static look-and-move grasping. Once the VBMMS is commanded to grasp a household object that is in its camera’s FOV, it starts to approach the object while keeping its camera gazing at it. When the distance between the VBMMS’s camera and the household object reaches a certain value, the approximation process switches to static look-and-move grasping.
4.1. Eye-Fixed Approximation
Figure 4 illustrates the scheme of eye-fixed approximation. In Figure 4, the zero-order moments of the blue concentric ring region of the object mark are extracted from the current image and the desired image, respectively, together with the nonhomogeneous projective coordinates of the center of the blue concentric ring region in the current and desired images.
It is well known that for a nonhomogeneous projective coordinate, its time derivative is linearly related to the joints’ velocities through the interaction matrix, which involves the coordinates of the 3D point relative to the current camera frame, the hand-eye relationship, and the generalized manipulator’s Jacobian in the end-effector coordinate frame. For the unknown depth in the interaction matrix, its estimate can be chosen as the constant depth of the 3D point relative to the desired camera frame.
We partition the interaction matrix so as to isolate the second degree of freedom of the generalized manipulator: one part includes only the second column, while the other includes the first, fourth, fifth, and sixth columns. We get
A simple approximation control law can then be acquired using the zero-order moment feature, as in (37), where sgn is the sign function and the gain is a time-invariant coefficient. The remaining error is driven to zero with an exponentially convergent law, as in (38).
The resulting vector can be considered as a modified error that incorporates the original error while taking into account the error that will be induced by the robot motion.
Under the influence of the control inputs shown in (37) and (38), the VBMMS approaches the object at a constant speed until the switching condition is met, and during the approximation process the object is maintained in the FOV from beginning to end.
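For reference, the classical point-feature IBVS law that this scheme builds on can be sketched as follows. This is the generic unpartitioned law v = −λL⁺(s − s*), not the paper’s partitioned switching law; the symbols are the usual ones (normalized image point, depth Z, gain λ).

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([[-1/Z,  0.0,  x/Z,  x*y,      -(1 + x*x),  y],
                     [ 0.0, -1/Z,  y/Z,  1 + y*y,  -x*y,       -x]])

def ibvs_velocity(s, s_star, Z, lam=0.5):
    """Camera velocity twist v = -lam * L^+ (s - s*)."""
    L = interaction_matrix(s[0], s[1], Z)
    return -lam * np.linalg.pinv(L) @ (np.asarray(s) - np.asarray(s_star))
```

Because L has full row rank for a generic point, this law makes the feature error decay exponentially (the induced feature velocity is exactly −λ times the error), which is the convergence behavior that (38) aims for on the retained error components.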
4.2. Static Look-and-Move Grasping
Figure 5 illustrates the scheme of static look-and-move grasping.
In Figure 5, the 3D object point is on a plane whose normal vector is shown in the figure. Meanwhile, there are three different camera frames: the desired camera frame, the initial camera frame, and the current camera frame, that is, the frame reached after the manipulator’s known movement from the initial position. The homogeneous transformation between the initial and current camera frames can be computed from the hand-eye relationship, the initial end-effector pose, and the current end-effector pose. Our ultimate objective is to determine the pose of the desired camera frame relative to the current camera frame. In order to solve it, two steps are needed.
Step 1. Structure reconstruction of the plane with respect to the current camera frame.
Taking the current camera frame as the reference frame, the plane is described by its normal and distance, and a point in the current frame is related to the corresponding point in the initial frame by the Euclidean homography, where the homography transforms one vector into the other up to a scale factor. Given at least four sets of image point correspondences, the homography can be derived using the direct linear transformation (DLT) method.
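The DLT estimation of a homography from point correspondences can be sketched as follows; this is the standard algorithm (two rows per correspondence, null vector via SVD), with names of our choosing.

```python
import numpy as np

def homography_dlt(pts_src, pts_dst):
    """Estimate H such that dst ~ H @ src (up to scale), from >= 4 correspondences."""
    rows = []
    for (x, y), (u, v) in zip(pts_src, pts_dst):
        # Each correspondence contributes two linear equations in the 9 entries of H
        rows.append([-x, -y, -1,  0,  0,  0, u * x, u * y, u])
        rows.append([ 0,  0,  0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)     # least-squares null vector
    return H / H[2, 2]           # fix the arbitrary scale factor
```

In practice the correspondences come from matched corners (as in Section 5.2, where RANSAC prunes the outliers before this linear estimate).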
After solving the homography, it can be decomposed and rewritten in the forms shown below. Furthermore,
By solving (42), the unknown scale can be determined. Finally, the structure of the plane with respect to the current camera frame is acquired.
Step 2. Computation of the rotation and translation of the current camera frame with respect to the desired camera frame.
Similar to Step 1, a point in the current frame is related to the corresponding point in the desired frame by a Euclidean homography, and the homography satisfying the normalization can also be derived using the DLT method, given at least four sets of image point correspondences.
The solved homography is adjusted so that it satisfies the normalization constraint; it then takes the form below, where the scale factor is positive. Furthermore,
As is well known, a rotation matrix with a given axis and angle can be described by Rodrigues’ formula. With the substitution chosen below, then
Substituting into (46) and applying Nanson’s equation, one obtains
Equation (50) means that the first two rows of the two matrices are equal, respectively. With the notation below, the positive scale factor and the rotation matrix are
To refine the solved rotation, apply the singular value decomposition UΣV⊤ to it and choose UV⊤ as the ultimate result. By substituting (52) and the refined rotation into (45), the unknown translation can finally be determined:
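This SVD-based refinement can be sketched as the standard projection onto SO(3); the determinant term guards against a reflection.

```python
import numpy as np

def project_to_so3(M):
    """Nearest rotation matrix (in Frobenius norm) to M, via SVD:
    R = U diag(1, 1, det(U V^T)) V^T."""
    U, _, Vt = np.linalg.svd(M)
    return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
```

Applying this to the linearly estimated rotation restores exact orthonormality before the translation is recovered.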
At this point, the homogeneous transformation from the current to the desired camera frame can be acquired. Furthermore, the transformation which represents the pose of the desired end-effector frame relative to the base frame can be computed as (56).
In (56), the current end-effector pose is known from the VBMMS’s kinematic model established in Section 3.1, and the hand-eye relationship is already known from Section 3.3. Finally, we can acquire the VBMMS’s control input from the analytical inverse kinematics proposed in Section 3.2.
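Assuming the three quantities named in the text are available as 4 × 4 homogeneous matrices, one natural way to compose the desired end-effector pose is the following sketch (variable names are ours): the hand-eye transform links both the current pair of frames and the desired pair, so it appears once forward and once inverted.

```python
import numpy as np

def desired_ee_pose(T_base_E, T_E_C, T_C_Cdes):
    """Pose of the desired end-effector frame w.r.t. the base:
    0T_E* = 0T_E . ET_C . CT_C* . (ET_C)^-1,
    where ET_C is the hand-eye transform, 0T_E the current end-effector
    pose, and CT_C* the camera displacement found in Step 2."""
    return T_base_E @ T_E_C @ T_C_Cdes @ np.linalg.inv(T_E_C)
```

The output of this composition is what the analytical inverse kinematics of Section 3.2 converts into joint commands.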
5. Experiments and Analysis
In this section, we discuss two experiments: hand-eye relationship determination and the VBMMS’s switching control scheme.
5.1. Experiment of Hand-Eye Relationship Determination
In order to simplify grid corner extraction, a model plane containing a pattern of 4 × 4 squares is chosen as the calibration object; the size of each square is 22 mm × 22 mm. An object frame is attached to it: the upper left corner of the first square is selected as the origin, one axis is perpendicular to the model plane pointing outward so that points on the plane have a zero coordinate along it, and the other two axes are parallel to the sides of the checkerboard’s squares, respectively.
With the TMR kept static, six images of the plane are taken under different orientations produced by several known manipulator movements. The images are shown in Figure 6; their resolutions are all 640 × 480, and the corners are detected as the intersections of straight lines fitted to each square.
Using Zhang’s calibration method, the refined camera intrinsic parameter matrix, together with the refined extrinsic parameters corresponding to the six images, respectively, can be acquired:
The known movements of the manipulator corresponding to six images are shown in Table 4.

From Table 4, the homogeneous transformations can be computed using the generalized manipulator’s kinematic model. The computed transformations and their camera-side counterparts are shown in Table 5.

Combining image 1 and image 2 and using the calibration method described in Section 3.3, a first coefficient matrix can be determined. Similarly, the remaining image combinations determine the other coefficient matrices, respectively. The hand-eye relationship can finally be determined:
5.2. Experiment of the VBMMS’s Switching Control Scheme
The images corresponding to the desired and initial camera positions, together with the object mark recognition results obtained using the Gaussian model and the Hough transformation, are illustrated in Figure 7.
(a) Desired image. (b) Initial image. (c) Desired binary image. (d) Initial binary image.
As can be seen in Figure 7, the blue concentric ring region of the object mark can be detected robustly and conveniently using a Gaussian model for color-based segmentation and the Hough transformation for ellipse fitting of the segmented region.
The gains involved in the approximation control law (37) and in the eye-fixed control law (38) are chosen accordingly; the estimated constant depth used in the interaction matrix is set to 25 cm while its true value is 31 cm. The computed eye-fixed approximation control input is given in Figure 8.
Figure 9 shows the images corresponding to the camera position where the eye-fixed approximation process terminated and the position reached after a known movement of the VBMMS, together with the extracted corner correspondences matched by the RANSAC (random sample consensus) method. For the corner extraction, the ROI (region of interest) is restricted to the area within the concentric ring.
(a) and (b) Images at the two camera positions. (c) The extracted corner correspondences matched by RANSAC.
The corner correspondences given in Figure 9 lead to the homography estimate. Using Step 1 of Section 4.2, the 3D structure of the object mark plane with respect to the camera frame can be computed:
Figure 10 shows the images corresponding to the current camera position and the desired camera position, together with the extracted corner correspondences matched by RANSAC.
(a) and (b) Images at the current and desired camera positions. (c) The extracted corner correspondences matched by RANSAC.
Applying the DLT to the corner correspondences, the homography can be acquired; then, using Step 2 of Section 4.2, the transformation between the current and desired camera frames can be computed. Finally, the desired end-effector pose and the control input of the VBMMS can be determined from the inverse kinematics discussed in Section 3.2:
Driven by the computed control input, the VBMMS moved to the position corresponding to the desired camera frame. Thereby, the visual servo grasping task was executed successfully.
6. Conclusion
It is nearly impossible for a VBMMS to grasp and deliver household objects without prior knowledge (such as the objects’ color, texture, sizes, and localization) provided by people, not only because of the variety of household objects and the complexity of their manipulation, but also because of the difficulties of the VBMMS’s kinematic modeling and the handling of its nonholonomic constraint. On the one hand, a new QR Code-based artificial object mark is designed, which can store an object’s property and operation information and can be easily distinguished in a complex family environment. On the other hand, in order to model the VBMMS, we summarize it as a generalized manipulator, acquire its analytical inverse kinematic solutions, and determine the hand-eye relationship based upon active vision. Meanwhile, in order to deal with the VBMMS’s nonholonomic constraint, a visual servo switching control law composed of an eye-fixed approximation part and a static look-and-move grasping part is designed. The proposed scheme solves the household object grasping and delivering problem well and makes it possible for a VBMMS-type service robot to provide better housekeeping service.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This project was supported by the National Natural Science Foundation of China (no. 61375084) and by the Advanced Mechanical Design and Manufacturing Public Service Platform Building, Longyan Science and Technology Plan Project (no. 2012ly01).
References
S. Ekvall, D. Kragic, and P. Jensfelt, “Object detection and mapping for service robot tasks,” Robotica, vol. 25, no. 2, pp. 175–187, 2007.
K. Severinson-Eklundh, A. Green, and H. Hüttenrauch, “Social and collaborative aspects of interaction with a service robot,” Robotics and Autonomous Systems, vol. 42, no. 3-4, pp. 223–234, 2003.
W. F. Carriker, P. K. Khosla, and B. H. Krogh, “Path planning for mobile manipulators for multiple task execution,” IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 403–408, 1991.
Q. Huang, K. Tanie, and S. Sugano, “Coordinated motion planning for a mobile manipulator considering stability and manipulation,” International Journal of Robotics Research, vol. 19, no. 8, pp. 732–742, 2001.
A. Mohri, S. Furuno, and M. Yamamoto, “Trajectory planning of mobile manipulator with end-effector’s specified path,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 4, pp. 2264–2269, Maui, Hawaii, USA, November 2001.
K. Tchoń, J. Jakubiak, and R. Muszyński, “Regular Jacobian motion planning algorithms for mobile manipulators,” in Proceedings of the 15th IFAC World Congress, vol. 15, Barcelona, Spain, 2002.
J. Vannoy and J. Xiao, “Real-time adaptive motion planning (RAMP) of mobile manipulators in dynamic environments with unforeseen changes,” IEEE Transactions on Robotics, vol. 24, no. 5, pp. 1199–1212, 2008.
S. Ide, T. Takubo, K. Ohara, Y. Mae, and T. Arai, “Real-time trajectory planning for mobile manipulator using model predictive control with constraints,” in Proceedings of the 8th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI '11), pp. 244–249, Incheon, Republic of Korea, November 2011.
J. H. Chung, S. A. Velinsky, and R. A. Hess, “Interaction control of a redundant mobile manipulator,” International Journal of Robotics Research, vol. 17, no. 12, pp. 1302–1309, 1998.
A. Mazur, “Hybrid adaptive control laws solving a path following problem for nonholonomic mobile manipulators,” International Journal of Control, vol. 77, no. 15, pp. 1297–1306, 2004.
S. Lin and A. A. Goldenberg, “Robust damping control of mobile manipulators,” IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 32, no. 1, pp. 126–132, 2002.
M. Mailah, E. Pitowarno, and H. Jamaluddin, “Robust motion control for mobile manipulator using resolved acceleration and proportional-integral active force control,” International Journal of Advanced Robotic Systems, vol. 2, no. 2, pp. 125–134, 2005.
M. Galicki, “Control of mobile manipulators in a task space,” IEEE Transactions on Automatic Control, vol. 57, no. 11, pp. 2962–2967, 2012.
S. Hutchinson, G. D. Hager, and P. I. Corke, “A tutorial on visual servo control,” IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651–670, 1996.
F. Chaumette and S. Hutchinson, “Visual servo control. I. Basic approaches,” IEEE Robotics and Automation Magazine, vol. 13, no. 4, pp. 82–90, 2006.
F. Chaumette and S. Hutchinson, “Visual servo control. II. Advanced approaches,” IEEE Robotics and Automation Magazine, vol. 14, no. 1, pp. 109–118, 2007.
W. J. Wilson, C. C. W. Hulls, and G. S. Bell, “Relative end-effector control using Cartesian position based visual servoing,” IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 684–696, 1996.
B. Thuilot, P. Martinet, L. Cordesses, and J. Gallice, “Position based visual servoing: keeping the object in the field of vision,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '02), vol. 2, pp. 1624–1629, Washington, DC, USA, May 2002.
P. I. Corke and S. A. Hutchinson, “A new partitioned approach to image-based visual servo control,” IEEE Transactions on Robotics and Automation, vol. 17, no. 4, pp. 507–515, 2001.
R. Mahony, P. Corke, and T. Hamel, “Dynamic image-based visual servo control using centroid and optic flow features,” Journal of Dynamic Systems, Measurement and Control, Transactions of the ASME, vol. 130, no. 1, Article ID 011005, 2008.
E. Malis, F. Chaumette, and S. Boudet, “2-1/2-D visual servoing,” IEEE Transactions on Robotics and Automation, vol. 15, no. 2, pp. 238–250, 1999.
E. Malis and F. Chaumette, “2-1/2-D visual servoing with respect to unknown objects through a new estimation scheme of camera displacement,” International Journal of Computer Vision, vol. 37, no. 1, pp. 79–97, 2000.
E. Malis and F. Chaumette, “Theoretical improvements in the stability analysis of a new class of model-free visual servoing methods,” IEEE Transactions on Robotics and Automation, vol. 18, no. 2, pp. 176–186, 2002.
Y. Mezouar and F. Chaumette, “Path planning in image space for robust visual servoing,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), vol. 3, pp. 2759–2764, April 2000.
Y. Ma, J. Košecká, and S. S. Sastry, “Vision guided navigation for a nonholonomic mobile robot,” IEEE Transactions on Robotics and Automation, vol. 15, no. 3, pp. 521–536, 1999.
W. E. Dixon, D. M. Dawson, E. Zergeroglu, and A. Behal, “Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system,” IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 31, no. 3, pp. 341–352, 2001.
D. Amarasinghe, G. K. I. Mann, and R. G. Gosine, “Vision-based hybrid control scheme for autonomous parking of a mobile robot,” Advanced Robotics, vol. 21, no. 8, pp. 905–930, 2007.
R. F. Vassallo, H. J. Schneebeli, and J. Santos-Victor, “Visual servoing and appearance for navigation,” Robotics and Autonomous Systems, vol. 31, no. 1, pp. 87–97, 2000.
A. de Luca, G. Oriolo, and P. R. Giordano, “Image-based visual servoing schemes for nonholonomic mobile manipulators,” Robotica, vol. 25, no. 2, pp. 131–145, 2007.
N. Mansard, O. Stasse, F. Chaumette, and K. Yokoi, “Visually-guided grasping while walking on a humanoid robot,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 3041–3047, April 2007.
Y. Wang, H. Lang, and C. W. de Silva, “A hybrid visual servo controller for robust grasping by wheeled mobile robots,” IEEE/ASME Transactions on Mechatronics, vol. 15, no. 5, pp. 757–769, 2010.
Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000.
Copyright
Copyright © 2014 Huangsheng Xie et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.