Journal of Control Science and Engineering

Research Article | Open Access

Volume 2014 | Article ID 315396 | https://doi.org/10.1155/2014/315396

Huangsheng Xie, Guodong Li, Yuexin Wang, Zhihe Fu, Fengyu Zhou, "Research on Visual Servo Grasping of Household Objects for Nonholonomic Mobile Manipulator", Journal of Control Science and Engineering, vol. 2014, Article ID 315396, 13 pages, 2014. https://doi.org/10.1155/2014/315396

Research on Visual Servo Grasping of Household Objects for Nonholonomic Mobile Manipulator

Academic Editor: Wuneng Zhou
Received: 23 Jun 2014
Accepted: 04 Sep 2014
Published: 24 Sep 2014

Abstract

This paper focuses on the problem of visual servo grasping of household objects for a nonholonomic mobile manipulator. Firstly, a new kind of artificial object mark based on QR (Quick Response) Code is designed, which can be affixed to the surface of household objects. Secondly, after summarizing the vision-based autonomous mobile manipulation system as a generalized manipulator, the generalized manipulator's kinematic model is established, its analytical inverse kinematic solutions are acquired, and a novel active-vision-based camera calibration method is proposed to determine the hand-eye relationship. Finally, a visual servo switching control law is designed to control the service robot to finish the object grasping operation. Experimental results show that the QR Code-based artificial object mark can overcome the difficulties brought by household objects' variety and operation complexity, and that the proposed visual servo scheme makes it possible for a service robot to grasp and deliver objects efficiently.

1. Introduction

A classical mobile manipulator system (MMS) consists of a manipulator mounted on a nonholonomic mobile platform. This type of arrangement considerably extends the manipulator's workspace and is widely used in service robot applications [1, 2]. The development of MMS mainly involves two classical topics, namely, motion planning [3–8] and coordinated control [9–13], which are used to overcome the mobile platform's nonholonomic constraint and make the MMS move quickly and efficiently.

When robots operate in unstructured environments, it is essential to include exteroceptive sensory information in the control loop. In particular, visual information provided by vision sensors such as charge-coupled device (CCD) cameras guarantees accurate positioning, robustness to calibration uncertainties, and reactivity to environmental changes. Much of the work relating CCD cameras and manipulators has focused on the manipulator's visual servo control, which specifies robotic tasks (such as object grasping and assembling) in terms of desired image features extracted from a target object. Overviews of visual servoing can be found in the literature [14–16]. In general, visual servo approaches can be divided into three kinds, namely, position-based visual servoing (PBVS) [17, 18], image-based visual servoing (IBVS) [19, 20], and hybrid visual servoing (HYBVS) [21–23]. In PBVS, the feedback signals in the vision loop are the relative 3D pose between the current and desired cameras, estimated from the current and desired image features using homography matrix or fundamental matrix estimation and decomposition. In IBVS, the feedback signals are image features whose velocities are related to the velocity twist of the camera via the image Jacobian matrix (also called the interaction matrix). Compared with PBVS, IBVS is robust to perturbations of the robot/camera models and can maintain the image features in the field of view (FOV) of the camera through path planning [24]. But the drawbacks are also obvious: the interaction matrix's depth information needs to be estimated, and only local stability can be guaranteed for most IBVS schemes. To combine the advantages of PBVS and IBVS, HYBVS has been proposed. In HYBVS, the feedback signals consist of the relative 3D pose and image features; the former is used to control a subset of the camera configuration vector, while the latter regulate the remaining components.

Relating CCD cameras to mobile robots leads to applications in vision-based autonomous navigation control. Ma et al. [25] have developed a vision-guided navigation system in which a nonholonomic mobile robot tracks an arbitrarily shaped continuous ground curve. Dixon et al. [26] present an adaptive tracking controller for a wheeled mobile robot via an uncalibrated camera system, which copes with the parameter uncertainty of both the mechanical dynamics and the camera system. Amarasinghe et al. [27] have developed a vision-based hybrid control scheme for autonomous parking of a mobile robot, whose controller consists of a discrete-event controller and a pixel-error-driven proportional controller. Vassallo et al. [28] present a similar project in which a vision-based mobile robot attempts to navigate autonomously in a building.

The newest trend is to integrate CCD cameras into a mobile manipulator to form a vision-based mobile manipulation system (VBMMS). Thanks to the capabilities of the vision subsystem, a VBMMS can work in an unstructured environment and has wider applications than a fixed-base manipulator or a mobile platform. Due to the lack of accurate and robust positioning performance of VBMMS, very few physical implementations have been reported. de Luca et al. [29] have considered the task-oriented modeling of the differential kinematics of nonholonomic mobile manipulators and have developed an image-based controller for VBMMS, but their approach is illustrated through simulation, not physical implementation. Mansard et al. [30] have attempted to control a humanoid robot to grasp an object while walking. Wang et al. [31] have developed a robust vision-based mobile manipulation system for wheeled mobile robots; in their research, an innovative controller with machine learning using Q-learning is proposed to guarantee visibility of the visual features during the servo process.

This paper presents a physical implementation of a VBMMS in a service robot intelligent space. It makes two basic contributions. First, after summarizing the VBMMS as a generalized manipulator, its kinematics is analyzed analytically, and an active-vision-based camera calibration method is proposed to determine the hand-eye relationship. Second, a novel switching control strategy is proposed which switches between eye-fixed approximation and position-based static look-and-move grasping. The remainder of the paper is organized as follows. Section 2 introduces the design of the QR Code-based artificial object mark. In Section 3, the VBMMS is summarized as a generalized manipulator; then the kinematics, inverse kinematics, and hand-eye relationship determination are discussed. In Section 4, the switching control strategy which switches between eye-fixed approximation and static look-and-move grasping is designed. Two experiments are presented in Section 5 to validate the designed switching controller. Conclusions are drawn in Section 6.

2. Design of QR Code-Based Artificial Object Mark

As shown in Figure 1, the QR Code-based artificial object mark is composed of two parts: the internal information representation part, which includes the object's property and operation information, and the blue concentric ring region, which is called the external identification part. Because the external identification part is easy to detect, the mark can be recognized rapidly in a complex home environment using a vision sensor.

The coding of information stored in internal information representation part of the mark is shown in Table 1.


Info type             | Description                                                          | Bytes

Name                  | Object's name's first two words                                      | 2
Serial number         | Such as                                                              | 1
Sizes                 | Length * width * height, 2 bytes for each item                       | 8
Material              | p (plastic), g (glass), z (paper), w (wood), s (metal), t (textile)  | 1
Operation force       | H (huge), L (large), M (middle), S (small), T (tiny)                 | 1
Operation position    | u (upper), m (middle), b (bottom)                                    | 1
Operation orientation | a (above), f (front), b (back), l (left), r (right)                  | 1

It can be seen from Table 1 that there are two different kinds of information, namely, the object's property information and the object's operation information. The property information includes name, serial number, sizes, and material: the name and serial number serve as the unique identification of an object, and the sizes let the robot determine the gripper's opening degree. The operation information includes operation force, position, and orientation, which assist the robot in finishing the grasping operation in an appropriate way.
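The coding scheme of Table 1 can be illustrated with a short sketch. The exact byte layout below is our assumption, not the paper's specification: big-endian fields, the field order of Table 1, and 2 padding bytes after the three 2-byte size fields to fill the 8 bytes the table allots to the sizes.

```python
import struct

# Hypothetical 15-byte layout for the mark's internal information
# part, following the field sizes of Table 1. The big-endian order
# and the 2 padding bytes after the three size fields are assumptions.
MARK_FMT = struct.Struct(">2sB3H2xcccc")

def encode_mark(name, serial, length, width, height,
                material, force, position, orientation):
    """Pack the object's property and operation info into bytes."""
    return MARK_FMT.pack(name.encode("ascii"), serial,
                         length, width, height,
                         material.encode(), force.encode(),
                         position.encode(), orientation.encode())

def decode_mark(payload):
    """Unpack a payload produced by encode_mark."""
    name, serial, l, w, h, mat, force, pos, ori = MARK_FMT.unpack(payload)
    return {"name": name.decode(), "serial": serial,
            "sizes": (l, w, h), "material": mat.decode(),
            "force": force.decode(), "position": pos.decode(),
            "orientation": ori.decode()}
```

For example, a plastic cup (hypothetical sizes in mm) grasped at its middle from the front with small force could be encoded as encode_mark("CP", 1, 80, 80, 120, "p", "S", "m", "f").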

3. Kinematics, Inverse Kinematics, and Hand-Eye Relationship Determination of VBMMS

As shown in Figure 2, the eye-in-hand VBMMS combines a nonholonomic tracked mobile robot (TMR), a 4-DOF (degree of freedom) manipulator, and a CMOS camera. The Grandar AS-RF TMR uses a differential drive structure, which makes steering control easy. The Schunk PowerCube 4-DOF manipulator is mounted on the TMR, and the Gsou V80 CMOS camera is mounted on the manipulator's end-effector. In addition, the VBMMS is equipped with an onboard computer whose computational capability can support real-time performance of the system.

3.1. Kinematics

Due to the nonholonomic constraint of the TMR and the nonredundancy of the 4-DOF manipulator, using the VBMMS to complete a grasping task is very difficult, and so far hardly any related work can be found. Taking into account the difficulty of controlling the TMR and manipulator separately, the VBMMS is summarized as the generalized manipulator shown in Figure 2. For the generalized manipulator, the TMR is considered a 3-DOF (rotation-translation-rotation) manipulator, and the manipulator's first degree of freedom is omitted because it coincides with the TMR's third. In that case, the generalized manipulator has six degrees of freedom, which guarantees that the end-effector can reach an arbitrary position with an arbitrary orientation.

Table 2 lists the modified D-H parameters of each link of the serial-link generalized manipulator, where α, a, θ, and d denote the twist, length, angle, and offset of each link, respectively. In the last column of the table, R stands for revolute while T stands for prismatic. For the prismatic joint, the offset d is the joint variable, and for a revolute joint, the angle θ is the joint variable; each is restricted to its admissible range.


Link | α (rad) | a (cm) | θ (rad) | d (cm) | Type

1    | 0       | 0      | θ1      | 0      | R
2    |         | 0      | 0       | d2     | T
3    |         | 0      | θ3      | 25     | R
4    |         | −6.5   | θ4      | 0      | R
5    |         | 6.5    | θ5      | 28.5   | R
6    |         | 0      | θ6      | 0      | R
Tool | 0       | 0      |         | 28     |

Each row of Table 2 yields a 4 × 4 homogeneous transformation matrix representing a link's coordinate frame with respect to the previous link's coordinate frame. Thereby, the representation of the end-effector coordinate frame with respect to the base coordinate frame can be obtained as the product of these link transformations, shown in (1), where the entries in (2)–(13) are written with the usual sine/cosine abbreviations of the joint variables.
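The chaining of the per-link transformations into the end-effector pose can be sketched under the modified (Craig) D-H convention of Table 2; the function names below are illustrative:

```python
import numpy as np

def mdh_transform(alpha, a, theta, d):
    """Transform of link frame i w.r.t. frame i-1 under the
    modified (Craig) D-H convention used in Table 2."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,  a],
        [st * ca,  ct * ca, -sa, -sa * d],
        [st * sa,  ct * sa,  ca,  ca * d],
        [0.0,      0.0,      0.0, 1.0]])

def forward_kinematics(dh_rows):
    """Chain the per-link transforms into the end-effector pose,
    i.e. the matrix product behind (1)."""
    T = np.eye(4)
    for alpha, a, theta, d in dh_rows:
        T = T @ mdh_transform(alpha, a, theta, d)
    return T
```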

The generalized manipulator Jacobian expressed in the end-effector coordinate frame transforms velocities in joint space into velocities of the end-effector in Cartesian space. For the 6-DOF generalized manipulator, the end-effector Cartesian velocity is the product of this Jacobian with the joint velocity vector, where the resulting 6-vector is the end-effector's Cartesian velocity with respect to its own frame. Based upon the resolved transformation matrices, the Jacobian can be determined column by column: the column for a prismatic joint is constructed from the joint axis direction, while the column for a revolute joint is constructed from the joint axis and its moment about the end-effector origin. The Jacobian is finally assembled from these columns.
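The column-by-column construction just described follows the standard geometric recipe; a minimal numpy sketch, with illustrative names and the link transforms supplied as precomputed 4 × 4 matrices, is:

```python
import numpy as np

def jacobian_ee(link_transforms, joint_types):
    """Geometric Jacobian of a serial chain, expressed in the
    end-effector frame.

    link_transforms[i] is the 4x4 transform of frame i+1 w.r.t.
    frame i; joint_types has one 'R' (revolute) or 'T' (prismatic)
    entry per joint. Extra trailing transforms (e.g. a fixed tool
    offset) only shift the end-effector point.
    """
    Ts = [np.eye(4)]                          # cumulative transforms
    for A in link_transforms:
        Ts.append(Ts[-1] @ A)
    p_e = Ts[-1][:3, 3]                       # end-effector origin
    cols = []
    for i, jt in enumerate(joint_types):
        z, p = Ts[i + 1][:3, 2], Ts[i + 1][:3, 3]
        if jt == 'R':                         # revolute: [z x (p_e - p); z]
            cols.append(np.hstack([np.cross(z, p_e - p), z]))
        else:                                 # prismatic: [z; 0]
            cols.append(np.hstack([z, np.zeros(3)]))
    J0 = np.column_stack(cols)                # Jacobian in the base frame
    R = Ts[-1][:3, :3]
    E = np.zeros((6, 6))                      # rotate the linear and angular
    E[:3, :3] = E[3:, 3:] = R.T               # parts into the EE frame
    return E @ J0
```

For the paper's generalized manipulator, joint_types would be ['R', 'T', 'R', 'R', 'R', 'R'].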

3.2. Inverse Kinematics

From (2)–(13) we can see that the manipulator has 6 joints, allowing an arbitrary end-effector pose. In order to reach a specified end-effector pose, the inverse kinematic solutions can be acquired by separating the unknown joint variables in (2)–(13).

By premultiplying both sides of (1) by the inverses of the leading link transformations, the unknown joint variables can be isolated; on the other hand, the same matrix can be computed directly from the given end-effector pose. Setting the two different forms equal, four sets of analytical inverse kinematic solutions can be achieved:

Given an arbitrary end-effector pose, whose orientation is expressed by the RPY (roll-pitch-yaw) description, the four sets of inverse kinematic solutions can be solved from (19), as shown in Table 3.


Roll (rad) | Pitch (rad) | Yaw (rad) | x (cm) | y (cm) | z (cm) | θ1 (rad) | d2 (cm) | θ3 (rad) | θ4 (rad) | θ5 (rad) | θ6 (rad)

0.4449 | 0.2566 | −1.2466 | 129.4143 | 15.7087 | 76.7968 | 0.2417 | 122.6 | −0.7854 | 0.1366 | −0.6283 | −0.3927
 | | | | | | 0.1922 | 123.7487 | 1.1579 | 0.1366 | −2.5133 | −0.6142
 | | | | | | 0.0880 | 127.4757 | −1.3771 | −0.5851 | 0.1454 | −1.0836
 | | | | | | −5.9305 | 121.3515 | 1.7428 | −0.5851 | 2.9962 | 0.0767

−0.2411 | −0.4617 | −0.2266 | 12.5929 | 22.4547 | 69.2228 | 1.0203 | 19.3710 | −1.0229 | 0.5987 | −0.2428 | 1.0851
 | | | | | | 0.2661 | 40.1966 | 2.4693 | 0.5987 | −2.8988 | −0.0848
 | | | | | | 0.1653 | 52.60 | −0.4488 | −1.0472 | 0.1571 | −0.5417
 | | | | | | −4.5806 | 20.1375 | 1.3136 | −1.0472 | 2.9845 | 1.5420

After all the solutions are computed, those violating the joints' value ranges are discarded, and the optimal set of solutions is then selected using an optimality criterion such as the shortest path.
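The selection step can be sketched as follows; taking the shortest-path criterion to mean the smallest joint-space distance from the current configuration is our reading of the paper's wording:

```python
import numpy as np

def select_solution(solutions, limits, q_current):
    """Discard inverse kinematic solutions outside the joint limits,
    then return the one with the smallest joint-space distance to
    the current configuration q_current."""
    feasible = [q for q in solutions
                if all(lo <= v <= hi for v, (lo, hi) in zip(q, limits))]
    if not feasible:
        return None                # no solution respects the limits
    return min(feasible,
               key=lambda q: np.linalg.norm(np.subtract(q, q_current)))
```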

3.3. Hand-Eye Relationship Determination

Hand-eye relationship determination is a key issue in the visual servo control of a robot hand-eye system, and much work has been done on it. In this paper, we propose a novel method based on Zhang's camera calibration method and tensor theory. Zhang's algorithm, proposed in 1999, is very representative because of its ease of use, flexibility, and high accuracy [32]. The algorithm only requires the camera to observe a planar pattern (such as a planar checkerboard) shown at a few different orientations. As a result, closed-form solutions for the camera intrinsic parameter matrix and for the pose of the planar object's coordinate frame with respect to the camera frame can be computed, followed by a nonlinear refinement based on the maximum likelihood criterion to improve the accuracy.

Figure 3 shows the scheme of hand-eye relationship determination, indicating the current end-effector frame and the current camera frame.

When the manipulator executes the ith known movement, the end-effector and camera frames move to new poses.

From Figure 3, the closure equation (20) holds. In (20), the unknown is the hand-eye relationship, the end-effector motion is known from the known movement of the manipulator, and the two camera extrinsic poses are known from Zhang's method.

Note that the relative camera motion between the two views can be formed from the two extrinsic poses, as in (21). By substituting (21) into (20), one obtains (22).

It can be seen from (22) that once we solve the unknown rotation part, the translation part will be solved easily with (23). On the other hand, the rotation part of (22) can be summarized as a homogeneous matrix equation. Considering a matrix to be a second-order mixed tensor whose element in the ith row and jth column carries one upper and one lower index, the equation can be written in index form as (24), with two free indexes and two dummy indexes. Taking the Kronecker delta into account, equation (24) can be written as (26).

According to the different values of the two free indexes, we can acquire nine scalar equations from (26). Stacking the row vectors of the unknown rotation matrix into a single vector, equation (26) can finally be written as a compact homogeneous linear system whose coefficient matrix elements follow from the Kronecker products above.

Consider that there are several sets of known manipulator movements, each contributing one such system for the ith movement; stacking them, we therefore obtain the full homogeneous system.

Based upon the least-squares solution of a homogeneous system of linear equations, the solution vector is the last column of V in the singular value decomposition of the stacked coefficient matrix. Because the unknown is a rotation matrix, whose Frobenius norm is √3 and whose determinant is +1, we can determine it up to sign as in (31).

The unique solution verifying the determinant constraint can then be distinguished from (31). Furthermore, we choose its orthogonal projection onto the rotation group as the final solution of (22).

After solving the rotation part, the unknown translation part can also be solved using the next equation:
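The rotation-then-translation procedure above amounts to the classical hand-eye equation A_i X = X B_i solved by Kronecker-product vectorization. A numpy sketch, under the assumption that the relative end-effector motions A_i and relative camera motions B_i are given as 4 × 4 homogeneous matrices, is:

```python
import numpy as np

def solve_ax_xb(As, Bs):
    """Solve A_i X = X B_i for the 4x4 hand-eye transform X.

    As: relative end-effector motions; Bs: relative camera motions
    (from the calibration extrinsics). At least two motions with
    non-parallel rotation axes are needed.
    """
    I3 = np.eye(3)
    # Rotation part: (I kron R_A - R_B^T kron I) vec(R_X) = 0,
    # stacked over all motions and solved by SVD.
    M = np.vstack([np.kron(I3, A[:3, :3]) - np.kron(B[:3, :3].T, I3)
                   for A, B in zip(As, Bs)])
    _, _, Vt = np.linalg.svd(M)
    Rx = Vt[-1].reshape(3, 3, order='F')      # column-stacked vec
    Rx *= np.sqrt(3) / np.linalg.norm(Rx)     # a rotation has ||R||_F = sqrt(3)
    if np.linalg.det(Rx) < 0:                 # fix the sign ambiguity
        Rx = -Rx
    U, _, Vt2 = np.linalg.svd(Rx)             # project onto SO(3)
    Rx = U @ Vt2
    # Translation part: (R_A - I) t_X = R_X t_B - t_A, least squares.
    C = np.vstack([A[:3, :3] - I3 for A in As])
    d = np.hstack([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X
```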

4. Design of Visual Servo Switch Control Scheme for Grasping

The visual servo control scheme designed in this section consists of two steps: eye-fixed approximation of the household object and static look-and-move grasping. Once the VBMMS is commanded to grasp a household object that is in its camera's FOV, the VBMMS starts to approach the object while keeping its camera gazing at it. When the distance between the VBMMS's camera and the household object reaches a certain value, the approximation process switches to static look-and-move grasping.

4.1. Eye-Fixed Approximation

Figure 4 illustrates the scheme of eye-fixed approximation. In Figure 4, the two area features correspond to the current image and the desired image, respectively; both are the zero-order moment of the blue concentric ring region of the object mark. The two point features are the nonhomogeneous projective coordinates of the center of the blue concentric ring region, again corresponding to the current image and the desired image, respectively.

It is well known that for a nonhomogeneous projective point coordinate, its time derivative is linearly related to the joints' velocity through the interaction matrix, where the depth involved is that of the 3D point relative to the current camera frame, the hand-eye relationship terms come from Section 3.3, and the Jacobian is the generalized manipulator's Jacobian in the end-effector coordinate frame. For the unknown depth in the interaction matrix, its estimate can be chosen as the constant depth of the 3D point relative to the desired camera frame.
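For a single normalized point feature, the interaction matrix has the well-known closed form below; this is the standard textbook expression, with the camera twist ordered as (v_x, v_y, v_z, w_x, w_y, w_z):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix L of a normalised image point (x, y) with
    depth Z: the image velocity is L @ v_c for camera twist v_c."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x]])
```

Replacing Z by the constant depth relative to the desired camera frame gives the estimate used in the text.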

We partition the interaction matrix so as to isolate the second degree of freedom of the generalized manipulator, in which diag denotes a diagonal matrix, one block includes only the second column of the interaction matrix, and the other block includes the first, fourth, fifth, and sixth columns. We get:

Introducing a feature error scaled by a constant coefficient, a simple approximation control law can be acquired, as in (37), where sgn is the sign function and the gain is a time-invariant coefficient. Letting the remaining error converge exponentially then yields (38).

The resulting vector can be considered a modified error that incorporates the original error while taking into account the error that will be induced by the robot motion.

Under the control input given in (37) and (38), the VBMMS approaches the object at a constant speed until the switching condition is reached, and during the approximation process the object is maintained in the FOV from beginning to end.

4.2. Static Look-and-Move Grasping

Figure 5 illustrates the scheme of static look-and-move grasping.

In Figure 5, the 3D object point lies on a plane with known normal. Meanwhile, there are three different camera frames: the desired camera frame, the initial camera frame, and the current camera frame, the last being reached after a known manipulator movement from the initial one. The homogeneous transformation between the initial and current camera frames can be computed from the hand-eye relationship, the initial end-effector pose, and the current end-effector pose. Our ultimate objective is to determine the pose of the current camera frame relative to the desired one. In order to solve it, two steps are needed.

Step 1. Structure reconstruction of the plane with respect to the current camera frame.

Taking the current camera frame as the reference frame, the plane can be described by its normal and distance, and a point in the initial frame is related to the corresponding point in the current frame by the Euclidean homography, where the relation holds up to a scale factor. Given at least four sets of image point correspondences, the homography satisfying the chosen normalization can be derived using the direct linear transformation (DLT) method.
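The DLT estimation of a homography from point correspondences can be sketched as follows (a minimal version without the usual coordinate normalization):

```python
import numpy as np

def homography_dlt(pts_src, pts_dst):
    """Estimate H such that p_dst ~ H p_src (up to scale) from at
    least four point correspondences, by stacking two linear
    constraints per correspondence and taking the SVD null vector."""
    A = []
    for (x, y), (u, v) in zip(pts_src, pts_dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                 # fix the arbitrary scale
```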

After solving the homography, it can be written in terms of the rotation, the scaled translation, and the plane normal; the corresponding relations follow, and furthermore (42) is obtained.

By solving (42), the scale factor can be determined. Finally, we can acquire the structure of the plane with respect to the current camera frame.

Step 2. Computation of the rotation and translation of the current camera frame with respect to the desired camera frame.

Similar to Step 1, a point in the current frame is related to the corresponding point in the desired frame by the Euclidean homography, and the homography satisfying the chosen normalization can also be derived using the DLT method if there are at least four sets of image point correspondences.

Adjusting the solved homography so that it satisfies the normalization gives the decomposition in (45), where the scale factor appears explicitly. Furthermore:

As is well known, the rotation matrix with a given axis and angle can be described by Rodrigues' formula. Choosing a suitable parameterization of the axis and angle then gives:
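The axis-angle rotation referred to above is Rodrigues' formula; a minimal numpy sketch:

```python
import numpy as np

def axis_angle_rotation(u, theta):
    """Rodrigues' formula:
    R = I + sin(theta) [u]_x + (1 - cos(theta)) [u]_x^2
    for a unit axis u and angle theta."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)
    K = np.array([[0.0, -u[2], u[1]],
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])   # skew-symmetric matrix [u]_x
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
```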

Substituting this into (46) and applying Nanson's equation, one obtains (50).

Equation (50) means that the first two rows of the two matrices are equal, respectively. From this, the positive scale factor and the rotation matrix are given in (52).

To refine the solved rotation, singular value decomposition is applied to it, and its orthogonal projection onto the rotation group is chosen as the ultimate result. By substituting (52) and the refined rotation into (45), the unknown translation can finally be determined:
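The SVD-based refinement is the standard projection of a near-rotation matrix onto SO(3); a minimal sketch:

```python
import numpy as np

def nearest_rotation(M):
    """Project a near-rotation 3x3 matrix onto SO(3): with the SVD
    M = U S V^T, the closest rotation in the Frobenius sense is
    U diag(1, 1, det(U V^T)) V^T."""
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # enforce det = +1
    return U @ D @ Vt
```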

At this point, the homogeneous transformation of the current camera frame with respect to the desired one can be acquired. Furthermore, the transformation representing the pose of the desired end-effector frame relative to the base frame can be computed as in (56).

In (56), the current end-effector pose is known from the VBMMS's kinematic model established in Section 3.1, and the hand-eye relationship is already known from Section 3.3. Finally, we can acquire the VBMMS's control input from the analytical inverse kinematics proposed in Section 3.2.

5. Experiments and Analysis

In this section, we discuss two experiments: hand-eye relationship determination and the VBMMS's switching control scheme.

5.1. Experiment of Hand-Eye Relationship Determination

In order to reduce the complexity of grid corner extraction, a model plane containing a pattern of 4 × 4 squares is chosen as the calibration object; the size of each square is 22 mm × 22 mm. An object frame is attached to it: the upper left corner of the first square is chosen as the origin, the z axis is perpendicular to the model plane pointing outward, the x-y plane of the object frame is aligned with the model plane, so that points on the plane have zero z-coordinate, and the x and y axes are parallel to the sides of the checkerboard's squares.

With the TMR static, six images of the plane are taken under different orientations produced by several known manipulator movements. The images are shown in Figure 6; their resolutions are all 640 × 480, and the corners are detected as the intersections of straight lines fitted to each square.

Using Zhang's calibration method, the refined camera intrinsic parameter matrix, together with the refined extrinsic parameters corresponding to each of the six images, can be acquired:

The known movements of the manipulator corresponding to six images are shown in Table 4.


Image | Joint motions (rad)

1 | 0 | 0.5756  | −0.0245 | −1.1241
2 | 0 | 0.5710  | −0.2224 | −1.0983
3 | 0 | 0.4560  | −0.2256 | −1.2930
4 | 0 | 0.2143  | 0.0360  | −1.5370
5 | 0 | −0.0012 | 0.1302  | −1.6249
6 | 0 | −0.3974 | −0.1872 | −1.9276

From Table 4, the homogeneous transformations of the end-effector motions can be computed using the generalized manipulator's kinematic model. The computed end-effector motions and the corresponding camera extrinsic poses are shown in Table 5.


Image | Roll (rad) | Pitch (rad) | Yaw (rad) | x (cm) | y (cm) | z (cm)

I1 | 2.9713  | 1.4401  | −2.9833 | 42.2281  | −0.6184  | 41.7720
   | −2.5304 | −0.1911 | −1.6459 | −6.2760  | −7.2463  | 45.1793
I2 | 1.9856  | 1.3546  | −2.0969 | 41.7209  | −5.4980  | 43.0438
   | −2.6322 | −0.0152 | −1.7485 | −17.8379 | −1.7928  | 44.5287
I3 | 2.2297  | 1.2953  | −2.3112 | 38.8304  | −6.0230  | 43.0616
   | −2.5396 | 0.0100  | −1.7451 | −20.4513 | −6.5954  | 45.0891
I4 | −2.9439 | 1.3868  | 2.9484  | 33.4385  | 1.0060   | 46.4443
   | −2.4884 | −0.2583 | −1.6244 | −2.9982  | −11.5786 | 54.2764
I5 | −1.9581 | 1.4303  | 1.9545  | 27.6891  | 3.6303   | 52.0272
   | −2.6579 | −0.3423 | −1.6116 | 2.7396   | −2.0687  | 63.8751
I6 | 1.3769  | 1.3922  | −1.3078 | 16.0177  | −4.8820  | 54.7535
   | −2.7377 | −0.0032 | −1.5332 | −23.9188 | 1.5681   | 71.5180

Combining image 1 and image 2 gives one relative movement; using the calibration method described in Section 3.3, the corresponding coefficient matrix can be determined. Similarly, the remaining image pair combinations determine the other coefficient matrices. The hand-eye relationship can finally be determined:

5.2. Experiment of VBMMS’s Switch Control Scheme

The images corresponding to the desired and initial camera positions, together with the object mark recognition using the Gaussian model and the Hough transformation, are illustrated in Figure 7.

As can be seen in Figure 7, the blue concentric ring region of the object mark can be detected robustly and conveniently, using a Gaussian model for color-based segmentation and the Hough transformation for ellipse fitting of the segmented region.

The gain involved in the approximation control law (37) and the gain involved in the eye-fixed control law (38) are chosen as constants; the depth estimate is set to 25 cm while its true value is 31 cm. The computed eye-fixed approximation control input is given in Figure 8.

Figure 9 shows the images corresponding to the camera position where the eye-fixed approximation process terminated and the position reached after a known movement, together with the extracted corner correspondences matched by the RANSAC (random sample consensus) method. For the corner extraction, the ROI (region of interest) is selected to be within the concentric ring.

The corner correspondences given in Figure 9 lead to the homography. Using Step 1 of Section 4.2, the 3D structure of the object mark plane with respect to the current camera frame can be computed:

Figure 10 shows the images corresponding to the camera position and the desired camera position , together with the extracted corner correspondences matched by RANSAC.

Applying DLT to the corner correspondences, the homography can be acquired; then, using Step 2 of Section 4.2, the transformation between the current and desired camera frames can be computed. Finally, the control input of the VBMMS can be determined from the inverse kinematics discussed in Section 3.2:

Driven by the computed control input, the VBMMS moved to the position corresponding to the desired camera frame. Thereby, the visual servo grasping task was executed successfully.

6. Conclusion

It is nearly impossible for a VBMMS to finish grasping and delivering household objects without some prior knowledge (such as the objects' color, texture, sizes, and localization) provided by people, not only because of the household objects' variety and operation complexity, but also because of the difficulties of the VBMMS's kinematic modeling and the handling of its nonholonomic constraint. On the one hand, a new QR Code-based artificial object mark is designed, which can store an object's property and operation information and can be easily distinguished in a complex family environment. On the other hand, in order to model the VBMMS, we summarize it as a generalized manipulator, then acquire its analytical inverse kinematic solutions and determine the hand-eye relationship based upon active vision. Meanwhile, in order to deal with the VBMMS's nonholonomic constraint, a visual servo switching control law composed of an eye-fixed approximation part and a static look-and-move grasping part is designed. The proposed scheme solves the household objects' grasping and delivering problem well and makes it possible for a VBMMS-type service robot to provide better housekeeping service.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This project was supported by the National Natural Science Foundation of China (no. 61375084) and the Advanced Mechanical Design and Manufacturing Public Service Platform Building, Longyan Science and Technology Plan Project (no. 2012ly01).

References

  1. S. Ekvall, D. Kragic, and P. Jensfelt, “Object detection and mapping for service robot tasks,” Robotica, vol. 25, no. 2, Article ID 00323, pp. 175–187, 2007. View at: Publisher Site | Google Scholar
  2. K. Severinson-Eklundh, A. Green, and H. Hüttenrauch, “Social and collaborative aspects of interaction with a service robot,” Robotics and Autonomous Systems, vol. 42, no. 3-4, pp. 223–234, 2003. View at: Publisher Site | Google Scholar | Zentralblatt MATH
  3. W. F. Carriker, P. K. Khosla, and B. H. Krogh, “Path planning for mobile manipulators for multiple task execution,” IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 403–408, 1991. View at: Publisher Site | Google Scholar
  4. Q. Huang, K. Tanie, and S. Sugano, “Coordinated motion planning for a mobile manipulator considering stability and manipulation,” International Journal of Robotics Research, vol. 19, no. 8, pp. 732–742, 2001. View at: Publisher Site | Google Scholar
  5. A. Mohri, S. Furuno, and M. Yamamoto, “Trajectory planning of mobile manipulator with end-effector's specified path,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 4, pp. 2264–2269, Maui, Hawaii, USA, November 2001. View at: Publisher Site | Google Scholar
  6. K. Tchon, J. Jakubiak, and R. Muszynski, “Regular Jacobian motion planning algorithms for mobile manipulators,” in Proceedings of the 15th IFAC World Congress, vol. 15, Barcelona, Spain, 2002. View at: Publisher Site | Google Scholar
  7. J. Vannoy and J. Xiao, “Real-time adaptive motion planning (RAMP) of mobile manipulators in dynamic environments with unforeseen changes,” IEEE Transactions on Robotics, vol. 24, no. 5, pp. 1199–1212, 2008. View at: Publisher Site | Google Scholar
  8. S. Ide, T. Takubo, K. Ohara, Y. Mae, and T. Arai, “Real-time trajectory planning for mobile manipulator using model predictive control with constraints,” in Proceedings of the 8th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI '11), pp. 244–249, Incheon, Republic of Korea, November 2011. View at: Publisher Site | Google Scholar
  9. J. H. Chung, S. A. Velinsky, and R. A. Hess, “Interaction control of a redundant mobile manipulator,” International Journal of Robotics Research, vol. 17, no. 12, pp. 1302–1309, 1998. View at: Publisher Site | Google Scholar
  10. A. Mazur, “Hybrid adaptive control laws solving a path following problem for non-holonomic mobile manipulators,” International Journal of Control, vol. 77, no. 15, pp. 1297–1306, 2004. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
  11. S. Lin and A. A. Goldenberg, “Robust damping control of mobile manipulators,” IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 32, no. 1, pp. 126–132, 2002. View at: Publisher Site | Google Scholar
  12. M. Mailah, E. Pitowarno, and H. Jamaluddin, “Robust motion control for mobile manipulator using resolved acceleration and proportional-integral active force control,” International Journal of Advanced Robotic Systems, vol. 2, no. 2, pp. 125–134, 2005. View at: Google Scholar
  13. M. Galicki, “Control of mobile manipulators in a task space,” IEEE Transactions on Automatic Control, vol. 57, no. 11, pp. 2962–2967, 2012. View at: Publisher Site | Google Scholar | MathSciNet
  14. S. Hutchinson, G. D. Hager, and P. I. Corke, “A tutorial on visual servo control,” IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651–670, 1996. View at: Publisher Site | Google Scholar
  15. F. Chaumette and S. Hutchinson, “Visual servo control. I. Basic approaches,” IEEE Robotics and Automation Magazine, vol. 13, no. 4, pp. 82–90, 2006. View at: Publisher Site | Google Scholar
  16. F. Chaumette and S. Hutchinson, “Visual servo control. II. Advanced approaches,” IEEE Robotics and Automation Magazine, vol. 14, no. 1, pp. 109–118, 2007. View at: Publisher Site | Google Scholar
  17. W. J. Wilson, C. C. W. Hulls, and G. S. Bell, “Relative end-effector control using cartesian position based visual servoing,” IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 684–696, 1996. View at: Publisher Site | Google Scholar
  18. B. Thuilot, P. Martinet, L. Cordesses, and J. Gallice, “Position based visual servoing: keeping the object in the field of vision,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '02), vol. 2, pp. 1624–1629, Washington, DC, USA, May 2002. View at: Google Scholar
  19. P. I. Corke and S. A. Hutchinson, “A new partitioned approach to image-based visual servo control,” IEEE Transactions on Robotics and Automation, vol. 17, no. 4, pp. 507–515, 2001. View at: Publisher Site | Google Scholar
  20. R. Mahony, P. Corke, and T. Hamel, “Dynamic image-based visual servo control using centroid and optic flow features,” Journal of Dynamic Systems, Measurement and Control, Transactions of the ASME, vol. 130, no. 1, Article ID 011005, 2008. View at: Publisher Site | Google Scholar
  21. E. Malis, F. Chaumette, and S. Boudet, “2-1/2-D visual servoing,” IEEE Transactions on Robotics and Automation, vol. 15, no. 2, pp. 238–250, 1999. View at: Publisher Site | Google Scholar
  22. E. Malis and F. Chaumette, “21/2D visual servoing with respect to unknown objects through a new estimation scheme of camera displacement,” International Journal of Computer Vision, vol. 37, no. 1, pp. 79–97, 2000. View at: Publisher Site | Google Scholar
  23. E. Malis and F. Chaumette, “Theoretical improvements in the stability analysis of a new class of model-free visual servoing methods,” IEEE Transactions on Robotics and Automation, vol. 18, no. 2, pp. 176–186, 2002. View at: Publisher Site | Google Scholar
  24. Y. Mezouar and F. Chaumette, “Path planning in image space for robust visual servoing,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), vol. 3, pp. 2759–2764, April 2000. View at: Google Scholar
  25. Y. Ma, J. Kosěcká, and S. S. Sastry, “Vision guided navigation for a nonholonomic mobile robot,” IEEE Transactions on Robotics and Automation, vol. 15, no. 3, pp. 521–536, 1999. View at: Publisher Site | Google Scholar
  26. W. E. Dixon, D. M. Dawson, E. Zergeroglu, and A. Behal, “Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system,” IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 31, no. 3, pp. 341–352, 2001. View at: Publisher Site | Google Scholar
  27. D. Amarasinghe, G. K. I. Mann, and R. G. Gosine, “Vision-based hybrid control scheme for autonomous parking of a mobile robot,” Advanced Robotics, vol. 21, no. 8, pp. 905–930, 2007. View at: Publisher Site | Google Scholar
  28. R. F. Vassallo, H. J. Schneebeli, and J. Santos-Victor, “Visual servoing and appearance for navigation,” Robotics and Autonomous Systems, vol. 31, no. 1, pp. 87–97, 2000. View at: Publisher Site | Google Scholar
  29. A. de Luca, G. Oriolo, and P. R. Giordano, “Image-based visual servoing schemes for nonholonomic mobile manipulators,” Robotica, vol. 25, no. 2, Article ID 00326, pp. 131–145, 2007. View at: Publisher Site | Google Scholar
  30. N. Mansard, O. Stasse, F. Chaumette, and K. Yokoi, “Visually-guided grasping while walking on a humanoid robot,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 3041–3047, April 2007. View at: Publisher Site | Google Scholar
  31. Y. Wang, H. Lang, and C. W. de Silva, “A hybrid visual servo controller for robust grasping by wheeled mobile robots,” IEEE/ASME Transactions on Mechatronics, vol. 15, no. 5, pp. 757–769, 2010. View at: Publisher Site | Google Scholar
  32. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000. View at: Publisher Site | Google Scholar

Copyright © 2014 Huangsheng Xie et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

