Advances in Modelling, Monitoring, and Control for Complex Industrial Systems (Special Issue)
Research Article | Open Access
Tao Li, Hui Zhao, Yu Chang, "Delay-Dependent Stability in Uncalibrated Image-Based Dynamic Visual Servoing Robotic System", Complexity, vol. 2018, Article ID 1360874, 14 pages, 2018. https://doi.org/10.1155/2018/1360874
Delay-Dependent Stability in Uncalibrated Image-Based Dynamic Visual Servoing Robotic System
This paper addresses the stability problem of uncalibrated image-based visual servoing robotic systems. Both the visual feedback delay and the uncalibrated visual parameters can be sources of instability for visual servoing robotic systems. To eliminate the negative effects caused by kinematic uncertainties and delays, we propose an adaptive controller built on a delay-affected Jacobian matrix and design a corresponding adaptive law. In addition, delay-dependent stability conditions are provided to characterize the relationship between system stability and the delay time, yielding less conservative results. A Lyapunov-Krasovskii functional is constructed, and a rigorous mathematical proof is given. Finally, simulation results are presented to show the effectiveness of the proposed control scheme.
For human beings, vision is an important sensory channel. Through visual sensors, robots can likewise monitor their surroundings and perform tasks. Nowadays, advanced visual processing techniques and high-speed image processors make vision-based robot systems capable of handling dynamic tasks, and vision-based control has been applied to many industrial robot systems, becoming a mainstream approach to robot control.
Vision-based control can be traced back to the 1980s. Look-and-move is one of the early vision-based technologies [1–3]. In this approach, two nested loops run simultaneously: the visual loop is the external loop and the joint-space loop is the internal loop. Due to its sensitivity to disturbances and errors, the look-and-move architecture is not suitable for high-performance control tasks. As an alternative, the visual servo (VS) technique was proposed. This control architecture generates the control inputs directly from the visual information. Such a simple and direct structure is favorable for high-speed servoing tasks. Considerable visual servoing approaches have been investigated for various robot systems and from many different aspects. Figure 1 shows two typical structures of visual servoing control.
(a) Position-based visual servoing
(b) Image-based visual servoing
In the existing literature, there are two challenges in the field of visual servoing control: (a) the difficulties of calibration and (b) the image feedback signals of inferior quality.
The calibration of visual servoing systems includes camera calibration, kinematic calibration, and dynamic calibration. To identify unknown or uncertain system parameters, periodic and highly accurate calibration work is usually required, which is tedious and demanding. Without such calibration, the system models cannot be accurately characterized and the closed-loop visual servoing systems could be unstable. To avoid this calibration work, uncalibrated control approaches have been proposed [6–10]. Some work [6–8] investigates robust controllers for eliminating the negative effects of calibration errors in the system model, replacing the uncertain system parameters with approximated ones. For the case of unknown or time-varying parameters, adaptive control techniques are proposed [11, 12]; in these methods, adaptive laws are designed to update such parameters online.
As another cause of system instability, visual signals of inferior quality are also nonnegligible. Generally speaking, noise and delays in the visual signals are the main causes. In this paper, we consider delays as the main source of inferior image signals. The visual signal flows are expected to be synchronized with the other system signals. However, asynchronization can arise for many reasons, including the limitations of image processing [13–17] and restrictions on visual signal transmission. Some early research studies the instability problem caused by image-processing (or image-sampling) delays [1, 2, 13]; these early efforts focused on reducing the image sampling time through parallel or pipelined approaches [2, 14–17]. With the development of advanced image processor chips, this problem has been resolved to a large extent. As visual servoing finds wider application, its configurations are becoming more varied. The connections between visual sensors and controllers can be wireless or over the Internet, which means the visual feedback path can itself be a source of delays due to transmission blocking in the intercomponent information exchanges [1, 2, 13, 18]. Improving the speed and reliability of the communication links is a straightforward way to address the issue, but it inevitably increases cost. Consequently, designing proper control schemes to handle delays is an alternative. In the IBVS control scheme design, the delay problem is studied in [19–23]. One approach replaces the present joint angle with the average of the past and present joint angles, obtaining predicted image feature positions through the Jacobian matrix to cope with the delay. One common flaw of the aforementioned methods is that they require accurate knowledge of the system parameters, and acquiring such information relies on calibration.
These two challenges make a visual servoing robotic system a typical complex industrial system. The mainstream uncalibrated techniques usually require accurate image signals to compensate for the parametric errors or to update the unknown parameters; under a delayed image feedback loop, no accurately synchronized visual feedback is available. In this context, the control of such systems is highly nonlinear and complex, and is consequently both worthwhile and challenging to investigate. This paper therefore concentrates on the influence of visual transmission delays on uncalibrated visual servoing robotic systems.
In the literature of this area, an online calibration method has been presented to overcome the time delay problem. Inoue and Hirai design a two-layer controller called STP to compensate for the delays, introducing the concept of a virtual trajectory. Gao and Su employ a local fitting Jacobian matrix based on polynomial fitting to obtain a more accurate Jacobian estimate and image precompensation for an uncalibrated IBVS robotic system. Unfortunately, the controller design in the above literature is based on kinematics and fails to consider the dynamics of robots. It is well known that the dynamics of a robot system plays an important role in its stability, especially at high speed. Much progress has been made on the control of uncalibrated dynamic-based visual servoing systems without delay effects [24–31]. For uncalibrated dynamic-based visual servoing systems with delay effects, the relevant work focuses on distributed cooperative control [32–35]. Liu and Chopra study an adaptive control algorithm to guarantee task-space synchronization of networked robotic manipulators in the presence of dynamic uncertainties and time-varying communication delays. Wang investigates the synchronization of networked robotic systems with kinematic and dynamic uncertainties under nonuniform constant communication delays. Liang et al. address the cooperative tracking control problem of networked robotic manipulators with communication delays under strongly connected directed graphs. However, the above work considers delays in the interagent information exchanges rather than delays in the visual feedback of a single dynamic-based visual servoing robotic system. To the best of the authors' knowledge, there is little literature considering the time delay problem in an uncalibrated dynamic-based visual servoing robotic system without using image-space velocity measurements.
To address the aforementioned issues, the following problems must be solved. First, the modeling of the system: delays, the lack of calibration, and the avoidance of velocity measurements must all be included simultaneously in the system model. Second, the handling of the time-varying parameters without using image velocity measurements. Third, delay-dependent stability conditions are expected, so as to obtain less conservative results. The contributions of this paper can be summarized as follows. (a) An uncalibrated dynamic-based visual servoing model is developed to visually track a feature point whose image depth is time varying, without using the image velocity and in the presence of unknown constant delayed visual feedback. (b) To handle the overlapped effects of uncalibrated parameter uncertainties and the visual feedback delay, a novel Jacobian matrix, called the delay-affected Jacobian matrix, is first proposed in this paper. (c) Lyapunov-Krasovskii stability theory is employed to analyze the stability of the delay-affected dynamic-based visual servoing system, and delay-dependent stability conditions are given to obtain less conservative stability results.
The paper is organized as follows. Section 2 gives some preliminary knowledge used throughout the paper. In Section 3, the kinematic and dynamic models of dynamic-based visual servoing robotic systems are formulated. In Section 4, the main results of this paper, the controller design and the adaptive laws, are proposed to address the stability problem of the uncalibrated dynamic-based visual servoing robotic system with visual feedback delays. In Section 5, rigorous stability analyses are provided. Section 6 presents simulation results to show the effectiveness of the proposed control scheme. Section 7 concludes the paper.
Lemma 1. Let , , and be real matrices with proper dimensions, where . For any constant , the following holds.
Lemma 2. Let be a uniformly continuous function on . Suppose that exists and is finite. Then,
Lemma 3. Consider the functional differential equation. Let be a mapping from (a bounded subset of ) to a bounded subset of , and let be continuous nondecreasing functions that are positive for any , with . If there exists a continuously differentiable functional such that and , then the zero solution of system (8) is uniformly stable. If the zero solution of the system is uniformly stable and holds for any , then the zero solution of system (8) is uniformly asymptotically stable. If the zero solution of the system is uniformly asymptotically stable and , then the zero solution of system (8) is globally uniformly asymptotically stable.
3. Kinematics and Dynamics
In this section, we present the mathematical modeling of delayed visual servoing robotic systems with the eye-in-hand configuration. In the modeling process, both kinematics and dynamics are considered. To illustrate the kinematics of the system, Figure 2 shows the transformation among different frames.
Let be the coordinates of a feature point’s projection on the camera image plane and be the Cartesian coordinates of the feature point with respect to the robot base frame. Based on the model developed in , the mapping between the image position and the Cartesian position can be formulated as where denotes the depth of the feature point with respect to the camera frame; denotes the homogeneous transformation matrix from the camera frame to the base frame; denotes the th row of the camera intrinsic parameter matrix ( is an intrinsic parameter matrix derived from the typical model introduced in ). In reality, the feature point is stationary with respect to the robot base, which results in a constant column vector .
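As a concrete, purely illustrative instance of this mapping, the sketch below projects a base-frame feature point onto the image plane through an assumed intrinsic matrix and an assumed base-to-camera transform (the inverse of the camera-to-base transform used in the text); none of the numbers come from the paper.

```python
import numpy as np

# Hypothetical intrinsic parameters (focal lengths fx, fy and principal
# point u0, v0, all in pixels) -- assumed values, not the paper's.
fx, fy, u0, v0 = 800.0, 800.0, 320.0, 240.0
M_proj = np.array([[fx, 0.0, u0, 0.0],
                   [0.0, fy, v0, 0.0],
                   [0.0, 0.0, 1.0, 0.0]])  # 3x4 perspective projection matrix

# Assumed base-to-camera transform: the feature sits 2 m along the optical axis.
T_base_to_cam = np.eye(4)
T_base_to_cam[2, 3] = 2.0

def project(p_base):
    """Map a feature point's base-frame coordinates to image coordinates."""
    p_h = np.append(p_base, 1.0)            # homogeneous coordinates
    uvw = M_proj @ T_base_to_cam @ p_h      # unnormalised image coordinates
    depth = uvw[2]                          # depth of the feature w.r.t. the camera
    return uvw[:2] / depth, depth

xy, depth = project(np.array([0.0, 0.0, 0.0]))  # lands on the principal point
```

Dividing by the third (depth) coordinate is exactly the perspective normalization that makes the image position depend nonlinearly on the Cartesian position.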
The relationship between and can be formulated by where is the third row of the perspective projection matrix .
Combining with (7), the derivative of (6) with respect to time satisfies where is short for . Note that can be divided into two parts: (the forward robot kinematics) and (the homogeneous matrix from the camera to the end effector). Due to the eye-in-hand configuration, is a constant matrix. Then, one has where denotes the rotation matrix, denotes the translation vector, and denotes the joint position. By letting the matrix be the left submatrix of , be the 1st and 2nd rows of , and be the 3rd row of , one can derive the mapping from joint velocities to image velocities as follows:
The dynamics of robots can be given by the Euler-Lagrange equation as follows: where is the vector of joint inputs of the manipulator; is the positive-definite and symmetric inertia matrix; is a skew-symmetric matrix such that for any vector of proper dimension,
On the left side of (12), the first term is the inertia force, the second term is the Coriolis and centrifugal forces, and the last term is the gravitational force.
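The structural properties quoted above (a symmetric positive-definite inertia matrix and the skew-symmetry relation) are what the later Lyapunov argument leans on. They can be checked numerically on a hypothetical planar 2-link arm; the lumped inertial constants below are made up for illustration.

```python
import numpy as np

# Hypothetical lumped inertial constants of a planar 2-link arm.
alpha, beta, delta = 3.5, 0.8, 1.1

def M(q):
    """Symmetric, positive-definite inertia matrix."""
    c2 = np.cos(q[1])
    return np.array([[alpha + 2*beta*c2, delta + beta*c2],
                     [delta + beta*c2,   delta]])

def C(q, qd):
    """Coriolis/centrifugal matrix in the standard (Christoffel) form."""
    s2 = np.sin(q[1])
    return beta * s2 * np.array([[-qd[1], -(qd[0] + qd[1])],
                                 [ qd[0],  0.0]])

def Mdot(q, qd):
    """Time derivative of M along the trajectory."""
    s2 = np.sin(q[1])
    return -beta * s2 * qd[1] * np.array([[2.0, 1.0],
                                          [1.0, 0.0]])

q, qd = np.array([0.3, 1.2]), np.array([0.7, -0.4])
S = Mdot(q, qd) - 2.0 * C(q, qd)
residual = qd @ S @ qd   # vanishes for every qd because S is skew-symmetric
```

The quadratic form of a skew-symmetric matrix is identically zero, which is why this term drops out of the derivative of the Lyapunov-Krasovskii functional later on.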
Remark 1. From Figure 2, it can be intuitively seen that the estimation of the Jacobian matrices determined by the homogeneous transformation matrices (, , and ) is directly affected by the delayed visual feedback. The complexity of the system mainly lies in the highly nonlinear relationship between the delayed image states and the joint states.
To facilitate the analysis, we present Figure 3 to show the closed-loop structure of a typical delayed VS robotic system.
4. The Adaptive Controller Design
In this section, we investigate the uncalibrated dynamic-based visual servoing robotic system with visual feedback delays and kinematic uncertainties. Our formulation of the uncalibrated VS robotic system is partly based upon the depth-independent Jacobian model developed in earlier work. This model allows the depth to be time varying, so that the visual servoing system can be stabilized even in the presence of fast-changing feature image depth.
From (10), we can easily split from and thereby obtain the depth-independent Jacobian matrix which is given by
Additionally, from (11), we define such a vector as follows:
In an uncalibrated dynamic-based visual servoing system, an estimate of the Jacobian matrix is usually used in place of the unknown exact Jacobian matrix. It can easily be seen from (14) and (15) that the components of the depth-independent Jacobian matrix and the matrix fall into two categories: the known and the unknown. The known components are , , and the unknown components are , , and . The estimate of the Jacobian matrix can be analytically derived through linear parameterization. From (14), it can be seen that the known and unknown components are coupled, and this coupling hinders the linear parameterization of these matrices. The following property is proposed to decouple them.
Property 1. For a vector , the product can be linearly parameterized as follows: where and are regressor matrices which consist of known parameters; is a vector which consists of unknown parameters; and denotes the number of unknown parameters, which satisfies .
Proof 1. Due to page limitations, the proof is given in Appendix A.
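The decoupling behind Property 1 is an instance of the standard "vec trick": a product of an unknown matrix and a known vector can always be rewritten as a known regressor matrix times a stacked vector of the unknown parameters. A small numerical illustration (the 2×3 shapes here are arbitrary, not the paper's dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))   # stands in for an unknown parameter matrix
x = rng.standard_normal(3)        # stands in for a known vector (e.g. joint data)

theta = A.reshape(-1)             # unknown parameters stacked row by row
Y = np.kron(np.eye(2), x)         # regressor built from known quantities only

# The coupled product A @ x equals the decoupled form Y @ theta.
assert np.allclose(Y @ theta, A @ x)
```

Once the unknowns are isolated in a single vector like this, an adaptive law can update them online without ever forming the unknown matrix itself.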
By Property 1, the Jacobian matrix can be expressed in a linear form: a known (regressor) matrix multiplied by an unknown vector. From (12), it can be clearly seen that the regressor matrix includes the current image position . Unfortunately, the feedback visual signals are delayed. We use to denote the coordinates of the delayed feature image position, where denotes the constant delay time. In this case, the matrix cannot be obtained; instead, we can only obtain . After substituting this regressor matrix, which includes the delayed visual feedback, into (14) and (12), we have where is named the delay-affected depth-independent Jacobian matrix. For brevity, we call it the delay-affected Jacobian matrix hereafter. The relationship between and is given by
Using the delay-affected Jacobian matrix and , we define a new composite Jacobian matrix as where denotes the vector which satisfies .
Based on all the above analyses, we now propose the controller for delay-affected uncalibrated VS robotic systems as follows: where and are positive definite symmetric matrices and denotes the estimate of . Note that the estimate of the new Jacobian matrix can be obtained from (21) by replacing the unknown matrices and with their estimates and , respectively, which yields
Additionally, recalling (12) and (18) in Property 1, we can easily derive the following linear parameterization form, where is the new regressor matrix including the delayed image state. To obtain , we propose the following adaptive law: where is a positive definite symmetric matrix of proper dimensions and is short for . Moreover, and can be derived accordingly; please refer to the Notation in the Introduction for the explanation. It is also not hard to roughly bound the unknown parameter vector according to and the feature Cartesian coordinates . Thereby, we assume that both and are known, i.e., . Based on the above analyses, the bound of follows readily from (24), i.e., and can be regarded as known. We define
Consequently, can be expressed in the interval matrix form as follows: where denotes the element at the th row and th column of . Likewise, is also bounded, and and are given by
Hence, can be expressed in the interval matrix form where denotes the column vector whose th element is 1 and whose other elements are 0; denotes the row vector whose th element is 1 and whose other elements are 0; and denotes the element at the th row and th column of .
Remark 2. From (24), one of the key points in deriving and is obtaining and . From practical experience, the range of actually depends on (1) the initial value of , which is set artificially, and (2) the real value , which is unknown. Even if is unknown, we can easily estimate its elements from other rough estimates. For more details, please refer to Appendix A.
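In discrete time, an adaptive law of this gradient type driven by a delayed regressor might be organized as below. This is a minimal sketch, not the paper's actual controller: the gains, bounds, dimensions, and the clipping-style projection (a common way to keep estimates inside known bounds such as those in Remark 2) are all assumptions.

```python
import numpy as np
from collections import deque

# All values below are assumptions for illustration, not the paper's settings.
dt = 0.001                        # control period: 1 kHz
delay_steps = 50                  # 50 ms constant visual feedback delay
Gamma = 0.5 * np.eye(4)           # positive-definite adaptation gain
theta_min, theta_max = -5.0, 5.0  # assumed known bounds on the parameters
theta_hat = np.zeros(4)           # current parameter estimate

# A bounded FIFO models the constant delay: the oldest entry is tau seconds old.
buffer = deque(maxlen=delay_steps + 1)

def adapt(y_row, error):
    """One gradient step of a delayed-regressor adaptive law (sketch)."""
    global theta_hat
    buffer.append(y_row)
    y_delayed = buffer[0]                       # regressor evaluated at t - tau
    theta_hat = theta_hat - dt * (Gamma @ y_delayed) * error
    theta_hat = np.clip(theta_hat, theta_min, theta_max)  # projection step
    return theta_hat

estimate = adapt(np.ones(4), 1.0)
```

The key point mirrored from the text is that the update can only consume the regressor evaluated at the delayed image state, which is why the stability analysis must account for the delay explicitly.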
5. Stability Analysis
Theorem 1. Consider the uncalibrated delayed visual servoing system described by (8), (11), (14), and (15) and the controller (22). For a given constant , if there exist symmetric matrices , , and positive constants , , , such that the following nonlinear matrix inequalities hold, where each and denotes an arbitrary positive constant, then the system is asymptotically stable, i.e., the image error of the feature point converges to zero, .
Proof 2. Combining (14), (19), and (21), we have
Substituting controller (22) into (12), we have the following closed-loop system. As aforementioned, the fact that and are all bounded implies that for some positive constants .
Let us consider the following nonnegative Lyapunov-Krasovskii functional candidate, where the employment of the term follows typical practice (see , p. 118).
The time derivative of along the system trajectory is given by Multiplying both sides of (35) from the left by yields Rewriting (34) and then multiplying from the left by , we have Taking the differential of and invoking (25) yields Substituting (39), (40), and (41) into (38), we obtain Likewise, with Lemma 1, the cross terms below yield Besides, from (10), can be rewritten as .
Having obtained the results in (43), we substitute them back and obtain the following inequality: We analyze the terms and one by one. First, we consider the term . In this term, both and are time-varying matrices. Note that, as aforementioned, we assume and are unknown. Using Lemma 1 and (29), we can easily derive where denotes an arbitrary positive constant.
With Lemma 1 and (27), we can readily expand as where denotes an arbitrary positive constant.
Substituting (45) and (46) into the term and invoking (30) yields where and are defined in (33).
Then, we consider the term . In an actual visual servoing robotic system, the depth-changing velocity is bounded. Hence, we can reasonably assume that is bounded, . Invoking (31), we have Combining (47), (48), and (32), we finally have in (44), which means that the Lyapunov-Krasovskii functional never increases and is therefore upper bounded. From (37), the boundedness of directly implies that the joint velocity , , the errors of , and the image error are bounded. The boundedness of the joint acceleration can then be concluded from the closed-loop dynamics (35). Therefore, the joint velocity is uniformly continuous. Note that it is not hard to derive from (10), with and bounded, that . Thereby, we can also conclude that is uniformly continuous. and yield , and hence the image delay error is uniformly continuous. From (25), it can be derived that . Thereby, is uniformly continuous. Invoking Lemma 2 and Lemma 3, we have , , and . This completes the proof.
Remark 3. It can be clearly seen that a delay-dependent stability condition is presented in Theorem 1. The stability analyses given in [33–35] are delay-independent results, meaning that the stability conditions impose no constraint on the system delays; hence, their stability results hold for delays of any magnitude. However, in reality, delays are usually bounded, and delay-independent results are therefore conservative. To obtain less conservative results, the magnitude of the delays should be taken into account, which is what makes delay-dependent conditions significant in delay stability research.
Remark 4. To fully control robots with 6 or more DOFs, more noncollinear feature points are needed. For instance, three noncollinear feature points should be considered for a 6-DOF manipulator. The scheme proposed in this paper can be readily generalized to the case of multiple feature points by a similar method. Considering the page limitation, we only present the single-feature-point case in this paper.
6. Simulation Results
The actual visual parameters are set as follows: m, pixels, pixels, pixels/m, pixels/m, and rad, where is the focal length; and are the coordinates of the camera principal point in the image frame; and denote the scale factors along the two image axes, respectively; and denotes the intersection angle between the two image axes. The intrinsic matrix can therefore be derived as
For the setting of the camera’s position and pose, the homogeneous transformation matrix is set as follows:
The gravitational acceleration is set as m/s². is time varying and is determined by the forward kinematics of the manipulator, whose parameters are given in Table 1.
Notes: denotes link length; denotes link twist; denotes link offset; denotes joint angle; denotes link mass; denotes the length between barycenter and its prior joint.
From Property 1 and according to the ranges of , , , , , in this paper, we may set and as and set and as where
The feature point’s coordinates w.r.t. the base frame are (150, 20)Tm. The initial position coordinates on the image plane are (140, 81.44)T, and the desired position coordinates on image plane are (160.7, 120.6)T.
Based on all the above settings, two simulations are conducted. In the first simulation, the proposed control scheme is used to track the desired position under two different constant delays: ms and ms. Figures 4(a), 5(a), 6(a), and 7(a) show the position errors, the position, the velocity, and the trajectory of the feature point on the image plane, respectively. It can be observed that the performance is almost identical even under the different delays, 198 ms and 98 ms, which verifies that convergence is achieved as long as the conditions given in Theorem 1 hold. Besides, Figure 7 also shows better position tracking performance with the 98 ms delay than with the 198 ms delay. To show the convergence of the estimates to the real values, we select some elements of the vector . Figure 8 shows the profiles of the estimated parameters from to . It should be noted that the kinematic parameters converge only when the persistent excitation (P.E.) condition is satisfied. In our simulation, we choose the initial estimates close to their real values so that the estimated parameters can converge to them. In most cases, the estimated parameters converge to the true values only up to a scale; however, this does not affect the convergence of the image errors.
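The qualitative effect of the two delay settings can be reproduced with a toy scalar stand-in for the image-error loop (this is not the authors' full dynamic model): the delayed loop ė(t) = −k_p·e(t − τ), which is stable when k_p·τ < π/2. In the sketch below the gain, initial error, and horizon are all assumed values.

```python
import numpy as np

def simulate(delay_ms, kp=2.0, dt=0.001, horizon_s=4.0):
    """Integrate the scalar delayed loop e_dot = -kp * e(t - tau)."""
    d = int(delay_ms / 1000.0 / dt)      # delay expressed in integration steps
    n = int(horizon_s / dt)
    e = np.zeros(n)
    e[0] = 20.0                          # assumed initial pixel error
    for k in range(n - 1):
        e_delayed = e[max(k - d, 0)]     # delayed feedback sample
        e[k + 1] = e[k] - kp * dt * e_delayed
    return e

final_98 = abs(simulate(98)[-1])
final_198 = abs(simulate(198)[-1])       # both decay: kp * tau < pi/2 holds
```

Both delay settings satisfy the toy stability bound (2 × 0.198 ≈ 0.40 < π/2), so both runs converge; pushing k_p·τ past the bound makes the loop oscillate and diverge, mirroring the role of a delay-dependent condition like the one in Theorem 1.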
(a) Scheme 1
(b) Scheme 2
(a) Scheme 1
(b) Scheme 2
(a) Scheme 1
(b) Scheme 2
(a) Scheme 1
(b) Scheme 2
To demonstrate the superiority of the proposed control scheme, we compare two control schemes: scheme 1 and scheme 2. Scheme 1 is the method proposed in this paper, and scheme 2, originating from earlier work, is modified accordingly in this simulation as follows.
It should be noted that the Jacobian matrix in scheme 2 does not consider the delay effects. We then conduct the second simulation, in which we use m/s, , , , , ms for scheme 2.
From Figures 4(b)–7(b), it can be clearly seen that the performance of scheme 2 in the presence of a delay of ms is unsatisfactory. Abnormal oscillations caused by the delays can be observed. In contrast, the proposed scheme still guarantees very satisfactory control performance, largely unaffected by the delayed signals. In conclusion, the second simulation shows that the proposed scheme eliminates the negative effects caused by delays to a great extent, demonstrating its superiority over existing schemes.
In this paper, we have proposed a control method for uncalibrated dynamic-based visual servoing robotic systems to cope with the delay problem existing in the visual feedback loops. To handle the unknown camera intrinsic and extrinsic parameters, we introduced the depth-independent Jacobian matrix and used linear parameterization to adaptively identify these uncertainties. We then took the delays into consideration and constructed a novel matrix called the delay-affected Jacobian matrix, on which the proposed adaptive controller is based. To prove the stability of the closed-loop system, a Lyapunov-Krasovskii functional is constructed and delay-dependent stability conditions are provided to obtain less conservative results. Simulation results were presented to show the effectiveness of the proposed control scheme. To further validate its performance, experimental tests on real networked visual servoing robotic systems would be the most appropriate choice, and this is one of our main objectives in future research.
A. Proof of Property 1
Proof 3. Let denote the th row of . Recalling (14), we can expand as follows:
where denotes the element of matrix , denotes the th element of , denotes the element of matrix , and denotes the th element of . Let and be the th elements of and , respectively. When , i.e., none of the elements and equals zero, the vector depends linearly on 36 unknown parameters, and can be derived. Define , , , and . Specifically, we have and
When is independent of , equals zero. can be obtained by removing the elements that equal zero, and can be obtained by removing the corresponding elements accordingly. When is independent of , and can be obtained by similar removal. In this way, we can derive the expressions of and for every .
Besides, because is a subset of , the linearization of is a direct result of the property. When , can be expressed as When , i.e., is independent of , and can be obtained by removing the corresponding elements. We can then derive the expressions of and for every . This completes the proof.
B. List of Notations & Symbols
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was financed by the Science and Technology Program of Tianjin, China under Grant 15ZXZNGX00290.
- S. Hutchinson, G. D. Hager, and P. I. Corke, “A tutorial on visual servo control,” IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651–670, 1996.
- M. Vincze, “Dynamics and system performance of visual servoing,” in Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065), pp. 644–649, San Francisco, CA, USA, 2000.
- S. Benhimane and E. Malis, “A new approach to vision-based robot control with omni-directional cameras,” in Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006, pp. 526–531, Orlando, FL, USA, 2006.
- R. Dahmouche, N. Andreff, Y. Mezouar, O. Ait-Aider, and P. Martinet, “Dynamic visual servoing from sequential regions of interest acquisition,” The International Journal of Robotics Research, vol. 31, no. 4, pp. 520–537, 2012.
- L. Weiss, A. Sanderson, and C. Neuman, “Dynamic sensor-based control of robots with visual feedback,” IEEE Journal on Robotics and Automation, vol. 3, no. 5, pp. 404–417, 1987.
- M. Jägersand, O. Fuentes, and R. Nelson, “Experimental evaluation of uncalibrated visual servoing for precision manipulation,” in Proceedings of International Conference on Robotics and Automation, pp. 2874–2880, Albuquerque, NM, USA, 1997.
- H. Hashimoto, T. Kubota, M. Sato, and F. Harashima, “Visual control of robotic manipulator based on neural networks,” IEEE Transactions on Industrial Electronics, vol. 39, no. 6, pp. 490–496, 1992.
- E. Zergeroglu, D. M. Dawson, M. S. de Querioz, and A. Behal, “Vision-based nonlinear tracking controllers with uncertain robot-camera parameters,” IEEE/ASME Transactions on Mechatronics, vol. 6, no. 3, pp. 322–337, 2001.
- H. Wang, B. Yang, Y. Liu, W. Chen, X. Liang, and R. Pfeifer, “Visual servoing of soft robot manipulator in constrained environments with an adaptive controller,” IEEE/ASME Transactions on Mechatronics, vol. 22, no. 1, pp. 41–50, 2017.
- H. Wang, “Adaptive control of robot manipulators with uncertain kinematics and dynamics,” IEEE Transactions on Automatic Control, vol. 62, no. 2, pp. 948–954, 2017.
- J. Chen, D. M. Dawson, W. E. Dixon, and A. Behal, “Adaptive homography-based visual servo tracking for a fixed camera configuration with a camera-in-hand extension,” IEEE Transactions on Control Systems Technology, vol. 13, no. 5, pp. 814–825, 2005.
- A. C. Leite and F. Lizarralde, “Passivity-based adaptive 3d visual servoing without depth and image velocity measurements for uncertain robot manipulators,” International Journal of Adaptive Control and Signal Processing, vol. 30, no. 8–10, pp. 1269–1297, 2016.
- P. I. Corke, High-performance visual closed-loop robot control, [Ph.D. thesis], Mechanical and Manufacturing Engineering, 1994.
- M. Vincze, M. Ayromlou, S. Chroust, M. Zillich, W. Ponweiser, and D. Legenstein, “Dynamic aspects of visual servoing and a framework for real-time 3D vision for robotics,” in Sensor Based Intelligent Robots, pp. 101–121, Springer, Berlin, Heidelberg, 2002.
- J. A. Gangloff and M. F. de Mathelin, “High speed visual servoing of a 6 DOF manipulator using MIMO predictive control,” in Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065), pp. 3751–3756, San Francisco, CA, USA, 2000.
- J. A. Gangloff and M. F. de Mathelin, “High-speed visual servoing of a 6-d.o.f. manipulator using multivariable predictive control,” Advanced Robotics, vol. 17, no. 10, pp. 993–1021, 2003.
- L. Cuvillon, E. Laroche, J. Gangloff, and M. de Mathelin, “GPC versus H ∞ control for fast visual servoing of a medical manipulator including flexibilities,” in Proceedings of the 2005 IEEE International Conference on Robotics and Automation, pp. 4044–4049, Barcelona, Spain, 2006.
- H. Wu, L. Lou, C. C. Chen, S. Hirche, and K. Kuhnlenz, “Cloud-based networked visual servo control,” IEEE Transactions on Industrial Electronics, vol. 60, no. 2, pp. 554–566, 2013.
- M. Nakadokoro, S. Komada, and T. Hori, “Stereo visual servo of robot manipulators by estimated image features without 3d reconstruction,” in IEEE SMC'99 Conference Proceedings. 1999 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No.99CH37028), pp. 571–576, Tokyo, Japan, 1999.
- N. Dai, M. Nakamura, S. Komada, and J. Hirai, “Tracking of moving object by manipulator using estimated image feature and its error correction on image planes,” in The 8th IEEE International Workshop on Advanced Motion Control, 2004. AMC '04, pp. 653–657, Kawasaki, Japan, 2004.
- I. Kinbara, S. Komada, and J. Hirai, “Visual servo of active cameras and manipulators by time delay compensation of image features with simple on-line calibration,” in 2006 SICE-ICASE International Joint Conference, pp. 5317–5322, Busan, Republic of Korea, 2006.
- T. Inoue and S. Hirai, “Robotic manipulation with large time delay on visual feedback systems,” in 2010 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 1111–1115, Montreal, ON, Canada, 2010.
- Z. Gao and J. Su, “Estimation of image Jacobian matrix with time-delay compensation for uncalibrated visual servoing,” Control Theory and Applications, vol. 26, no. 1, pp. 218–234, 2009.
- Y. H. Liu, H. Wang, and K. Lam, “Dynamic visual servoing of robots in uncalibrated environments,” in 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3131–3136, Edmonton, Canada, 2005.
- H. Wang and Y. H. Liu, “Uncalibrated visual tracking control without visual velocity,” in Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006, pp. 2738–2743, Orlando, FL, USA, 2006.
- Y. Shen, D. Sun, Y.-H. Liu, and K. Li, “Asymptotic trajectory tracking of manipulators using uncalibrated visual feedback,” IEEE/ASME Transactions on Mechatronics, vol. 8, no. 1, pp. 87–98, 2003.
- Y.-H. Liu, H. Wang, C. Wang, and K. K. Lam, “Uncalibrated visual servoing of robots using a depth-independent interaction matrix,” IEEE Transactions on Robotics, vol. 22, no. 4, pp. 804–817, 2006.
- H. Wang, Y.-H. Liu, and D. Zhou, “Dynamic visual tracking for manipulators using an uncalibrated fixed camera,” IEEE Transactions on Robotics, vol. 23, no. 3, pp. 610–617, 2007.
- F. Lizarralde, A. C. Leite, L. Hsu, and R. R. Costa, “Adaptive visual servoing scheme free of image velocity measurement for uncertain robot manipulators,” Automatica, vol. 49, no. 5, pp. 1304–1309, 2013.
- X. Liang, H. Wang, Y.-H. Liu, W. Chen, and J. Zhao, “A unified design method for adaptive visual tracking control of robots with eye-in-hand/fixed camera configuration,” Automatica, vol. 59, pp. 97–105, 2015.
- T. Li and H. Zhao, “Global finite-time adaptive control for uncalibrated robot manipulator based on visual servoing,” ISA Transactions, vol. 68, pp. 402–411, 2017.
- W. Qiao and R. Sipahi, “Consensus control under communication delay in a three-robot system: design and experiments,” IEEE Transactions on Control Systems Technology, vol. 24, no. 2, pp. 687–694, 2016.
- Y.-C. Liu and N. Chopra, “Controlled synchronization of heterogeneous robotic manipulators in the task space,” IEEE Transactions on Robotics, vol. 28, no. 1, pp. 268–275, 2012.
- H. Wang, “Passivity based synchronization for networked robotic systems with uncertain kinematics and dynamics,” Automatica, vol. 49, no. 3, pp. 755–761, 2013.
- X. Liang, H. Wang, Y. H. Liu, W. Chen, G. Hu, and J. Zhao, “Adaptive task-space cooperative tracking control of networked robotic manipulators without task-space velocity measurements,” IEEE Transactions on Cybernetics, vol. 46, no. 10, pp. 2386–2398, 2016.
- K. Hashimoto, K. Nagahama, and T. Noritsugu, “A mode switching estimator for visual servoing,” in Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No.02CH37292), pp. 1610–1615, Washington, DC, USA, 2002.
- J. M. Sebastián, L. Pari, L. Angel, and A. Traslosheros, “Uncalibrated visual servoing using the fundamental matrix,” Robotics and Autonomous Systems, vol. 57, no. 1, pp. 1–10, 2009.
- A. Shademan, A.-m. Farahmand, and M. Jägersand, “Robust Jacobian estimation for uncalibrated visual servoing,” in 2010 IEEE International Conference on Robotics and Automation, pp. 5564–5569, Anchorage, AK, USA, 2010.
- H. K. Khalil, Nonlinear Systems, Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 3rd edition, 2002.
- J.-J. E. Slotine and W. Li, “On the adaptive control of robot manipulators,” The International Journal of Robotics Research, vol. 6, no. 3, pp. 49–59, 1987.
- F. Garofalo, G. Celentano, and L. Glielmo, “Stability robustness of interval matrices via Lyapunov quadratic forms,” IEEE Transactions on Automatic Control, vol. 38, no. 2, pp. 281–284, 1993.
- R. Lozano, B. Brogliato, O. Egeland, and B. Maschke, “Dissipative systems analysis and control. Theory and applications,” Measurement Science and Technology, vol. 12, no. 12, p. 2211, 2001.
Copyright © 2018 Tao Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.