Advances in Modelling, Monitoring, and Control for Complex Industrial Systems
Delay-Dependent Stability in an Uncalibrated Image-Based Dynamic Visual Servoing Robotic System
Abstract
This paper addresses the stability problem of uncalibrated image-based visual servoing robotic systems. Both the visual feedback delay and the uncalibrated visual parameters can be sources of instability for visual servoing robotic systems. To eliminate the negative effects caused by kinematic uncertainties and delays, we propose an adaptive controller that includes a delay-affected Jacobian matrix and design an adaptive law accordingly. In addition, delay-dependent stability conditions are provided to characterize the relationship between system stability and the delay time, yielding less conservative results. A Lyapunov-Krasovskii functional is constructed, and a rigorous mathematical proof is given. Finally, simulation results are presented to show the effectiveness of the proposed control scheme.
1. Introduction
For human beings, vision is an important sensory channel. Through visual sensors, robots can likewise monitor their surroundings and perform tasks. Nowadays, advanced visual processing techniques and high-speed image processors make vision-based robot systems capable of handling dynamic tasks, and vision-based control has been applied to many industrial robot systems. It has become the mainstream of robot control.
Vision-based control can be traced back to the 1980s [1]. Look-and-move is one of the early vision-based technologies [1–3]. In this approach, two nested loops run simultaneously: the visual loop is the external loop and the joint-space loop is the internal loop. Due to its sensitivity to disturbances and errors, the look-and-move architecture is not suitable for high-performance control tasks [4]. As an alternative, the visual servo (VS) technique was proposed [5]. This control architecture directly generates the control inputs from the visual information. Such a simple and direct structure is favorable for high-speed servoing tasks. Numerous visual servoing approaches have been investigated for various robot systems and from many different aspects. Figure 1 shows two typical structures of visual servoing control.
(a) Position-based visual servoing
(b) Image-based visual servoing
In the existing literature, there are two challenges in the field of visual servoing control: (a) the difficulty of calibration and (b) image feedback signals of inferior quality.
The calibration of visual servoing systems includes camera calibration, kinematic calibration, and dynamic calibration. To identify unknown or uncertain system parameters, periodic and highly accurate calibration work is usually required, which is tedious and demanding. Without such calibration work, the system models cannot be accurately characterized, and the closed-loop visual servoing systems could be unstable. To avoid such calibration work, uncalibrated control approaches have been proposed [6–10]. Some work [6–8] investigates robust controllers for eliminating the negative effects of calibration errors in the system model, where the uncertain system parameters are replaced with approximated ones. For the case of unknown or time-varying parameters, adaptive control techniques have been proposed [11, 12]. In these methods, adaptive laws are designed to update such parameters online.
As another cause of system instability, visual signals of inferior quality are also non-negligible. Generally speaking, noise and delays in the visual signals are the main causes. In this paper, we consider delays as the main reason for inferior image signals. Ideally, the visual signal flows are synchronized with the other system signals. However, asynchronization can occur for many reasons, including limitations of image processing [13–17] and restrictions of visual signal transmission [18]. Some early research studies the instability problem caused by image-processing (or image-sampling) delays [1, 2, 13]. These early efforts focused on reducing the image sampling time through parallel or pipelined approaches [2, 14–17]. With the development of advanced image processor chips, this problem has been resolved to a large extent. With the wide application of visual servoing, its configurations are becoming more varied. The connections between visual sensors and controllers can be wireless or Internet-based, which means the visual feedback path can be a source of delays due to transmission blocking in the inter-component information exchanges [1, 2, 13, 18]. Improving the speed and reliability of communication links is a straightforward way to address the issue, but it inevitably increases cost. Consequently, designing proper control schemes to handle delays is an alternative. In the IBVS control scheme design, the delay problem is studied in [19–23]. Using the average of the past and present joint angles to replace the present joint angle, [19] obtains predicted image feature positions via the Jacobian matrix to cope with the delay problem. One common flaw of the aforementioned methods is that they require accurate knowledge of the system parameters, and acquiring such knowledge requires calibration.
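The prediction idea attributed to [19] can be sketched in a few lines. The following Python illustration is a minimal sketch only: the function name, the Jacobian J, and all numeric values are made up for illustration and are not taken from [19] or from this paper.

```python
import numpy as np

def predict_feature(s_delayed, q_now, q_delayed, J):
    """Predict the current image-feature position from delayed data.

    Sketch of the idea in [19]: average the past and present joint
    angles, then propagate the delayed feature position through the
    image Jacobian J (a first-order correction). All names here are
    illustrative, not the paper's notation.
    """
    q_avg = 0.5 * (q_now + q_delayed)           # averaged joint angle
    return s_delayed + J @ (q_avg - q_delayed)  # first-order prediction

# Toy numbers: 2-DOF joint space, 2-D image feature.
J = np.array([[10.0, 2.0], [1.0, 8.0]])
s_tau = np.array([100.0, 50.0])   # delayed feature measurement
q_t = np.array([0.30, 0.10])      # present joint angles
q_tau = np.array([0.20, 0.00])    # joint angles at the delayed instant
s_hat = predict_feature(s_tau, q_t, q_tau, J)
```

The correction term pushes the stale measurement forward using the joint motion that occurred during the delay, which is all the information available without an extra sensor.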
These two challenges make a visual servoing robotic system a typical complex industrial system. This is because the mainstream calibration-free techniques usually require accurate image signals to compensate for the parametric errors or to update the unknown parameters. With a delayed image feedback loop, no accurate synchronized visual feedback is available. In this context, the control of such systems is highly nonlinear and complex. Consequently, it is worthwhile and challenging to investigate. This paper therefore concentrates on the influence of visual transmission delays on uncalibrated visual servoing robotic systems.
In the literature of this area, [21] presents an online calibration method to overcome the time delay problem. Inoue and Hirai [22] design a two-layer controller called STP to compensate for the delays, introducing the concept of a virtual trajectory. Gao and Su [23] employ a local fitting Jacobian matrix based on polynomial fitting to obtain more accurate Jacobian estimation and image precompensation for uncalibrated IBVS robotic systems. Unfortunately, the controller design in the above literature is based on kinematics and fails to consider the dynamics of robots. It is well known that the dynamics of robot systems plays an important role in stability, especially at high speed. Much progress has been made on the control of uncalibrated dynamic-based visual servoing systems without delay effects [24–31]. As for uncalibrated dynamic-based visual servoing systems with delay effects, the relevant work focuses on distributed cooperative control [32–35]. Liu and Chopra [33] study an adaptive control algorithm to guarantee task-space synchronization of networked robotic manipulators in the presence of dynamic uncertainties and time-varying communication delays. Wang [34] investigates the synchronization of networked robotic systems with kinematic and dynamic uncertainties in the case of nonuniform constant communication delays. Liang et al. [35] address the cooperative tracking control problem of networked robotic manipulators in the presence of communication delays under strongly connected directed graphs. However, the above work considers delays in the inter-agent information exchanges rather than delays in the visual feedback of a single dynamic-based visual servoing robotic system. To the best of the authors' knowledge, there is little literature considering the time delay problem in an uncalibrated dynamic-based visual servoing robotic system without using image-space velocity measurements.
To address the aforementioned issues, the following problems must be solved. First, the modeling of the system: delays, the absence of calibration, and velocity measurements must all be included simultaneously. Second, the handling of the time-varying parameters and the avoidance of image velocity measurements. Third, delay-dependent stability conditions should be given to reduce conservatism. The contributions of this paper can be summarized as follows. (a) An uncalibrated dynamic-based visual servoing model is developed to visually track a feature point whose image depth is time varying, without using the image velocity and in the presence of unknown constant delayed visual feedback. (b) To handle the overlapped effects of uncalibrated parameter uncertainties and the visual feedback delay, a novel Jacobian matrix, called the delay-affected Jacobian matrix, is first proposed in this paper. (c) Lyapunov-Krasovskii stability theory is employed to analyze the stability of the delay-affected dynamic-based visual servoing system, and delay-dependent (d.d.) stability conditions are given to obtain less conservative stability results.
The paper is organized as follows. Section 2 gives some preliminary knowledge used throughout the paper. In Section 3, the kinematic and dynamic models of dynamic-based visual servoing robotic systems are formulated. In Section 4, the main results of this paper, the controller design and the adaptive laws, are proposed to address the stability problem of the uncalibrated dynamic-based visual servoing robotic system with visual feedback delays. In Section 5, rigorous stability analyses are provided. Section 6 presents simulation results to show the effectiveness of the proposed control scheme. Section 7 concludes the paper.
2. Preliminaries
Lemma 1. Let , , and be real matrices with proper dimensions, where . For any constant , the following holds.
Lemma 2. Let be a uniformly continuous function on . Suppose that exists and is finite. Then,
Lemma 3. Consider the functional differential equation. Let be a mapping from (a bounded subset of ) to a bounded subset of , and let be continuous nondecreasing functions that, for any , are positive with . If there exists a continuously differentiable functional such that , then the zero solution of system (8) is uniformly stable. If the zero solution of the system is uniformly stable and, for any , holds, then the zero solution of system (8) is uniformly asymptotically stable. If the zero solution of the system is uniformly asymptotically stable and , then the zero solution of system (8) is globally uniformly asymptotically stable.
3. Kinematics and Dynamics
In this section, we present the mathematical modeling of delayed visual servoing robotic systems with the eye-in-hand configuration. In the modeling process, both kinematics and dynamics are considered. To illustrate the kinematics of the system, Figure 2 shows the transformations among the different frames.
Let be the coordinates of a feature point's projection on the camera image plane and be the Cartesian coordinates of the feature point w.r.t. the robot base frame. Based on the model developed in [7], the mapping between the image position and the Cartesian position can be formulated as where denotes the depth of the feature w.r.t. the camera frame; denotes the homogeneous transformation matrix from the camera frame to the base frame; denotes the th row of the camera intrinsic parameter matrix ( is an intrinsic parameter matrix derived from the typical model introduced in [36]). In reality, the feature point is stationary with respect to the robot base, which results in a constant column vector .
The relationship between and can be formulated by where is the third row of the perspective projection matrix .
Combining with (7), the derivative of (6) w.r.t. time satisfies where is short for . It should be noted that can be divided into two parts: (the forward robot kinematics) and (the homogeneous matrix from the camera to the end effector). Due to the eye-in-hand configuration, is a constant matrix. Then, one has where denotes the rotation matrix, denotes the translation vector, and denotes the joint position. For more details, see [7]. By letting the matrix be the left submatrix of , be the 1st and 2nd rows of , and be the 3rd row of , one can derive the mapping from joint velocities to image velocities as follows:
The nonlinear mapping introduced in (10) is an important matrix in IBVS, known as the Jacobian matrix [37, 38]. The differential of (7) w.r.t. time satisfies
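The paper builds its Jacobian from homogeneous transformation matrices whose symbols were lost in extraction. As a concrete stand-in, the classical pinhole point-feature interaction matrix, which maps the 6-D camera twist to image velocities, can be sketched as follows. This is the textbook IBVS instance, not the paper's exact derivation, and all values are illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z, f=1.0):
    """Classical point-feature interaction matrix L such that
    [u_dot, v_dot]^T = L @ camera_twist (a 6-vector of linear and
    angular velocity). Standard pinhole-camera form from visual
    servoing tutorials; given only as an illustration of the kind
    of Jacobian discussed in the text.
    """
    return np.array([
        [-f / Z, 0.0,   x / Z, x * y / f,          -(f**2 + x**2) / f,  y],
        [0.0,   -f / Z, y / Z, (f**2 + y**2) / f,  -x * y / f,         -x],
    ])

# Feature at normalized image coordinates (0.1, 0.2), depth 2 m.
L = interaction_matrix(x=0.1, y=0.2, Z=2.0)
```

Note the depth Z appears only in the translational columns, which is why depth-dependent and depth-independent parts can be separated, a structure the paper exploits in Section 4.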
The dynamics of robots can be given by the Euler-Lagrange equation as follows [39]: where is the vector of joint inputs of the manipulator; is the positive-definite and symmetric inertia matrix; is a skew-symmetric matrix such that for any vector of proper dimension,
On the left side of (12), the first term is the inertia force, the second term comprises the Coriolis and centrifugal forces, and the last term is the gravitational force.
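The skew-symmetry property stated above can be verified numerically. The sketch below uses the standard 2-link planar arm model with an illustrative coupling parameter b (this is not the manipulator of Table 1; the state values are arbitrary) and checks that the quadratic form of the resulting matrix vanishes.

```python
import numpy as np

# Skew-symmetry check for a standard 2-link planar arm.
b = 0.5                        # coupling term m2*l1*lc2 (illustrative)
q2, dq1, dq2 = 0.7, 0.3, -0.2  # arbitrary joint state

s2 = np.sin(q2)
# Coriolis matrix from the Christoffel symbols of the standard model.
C = np.array([[-b * s2 * dq2, -b * s2 * (dq1 + dq2)],
              [ b * s2 * dq1,  0.0]])
# Time derivative of the inertia matrix (only the cos(q2) terms vary).
Mdot = np.array([[-2 * b * s2 * dq2, -b * s2 * dq2],
                 [-b * s2 * dq2,      0.0]])

N = Mdot - 2 * C               # should be skew-symmetric
v = np.array([1.3, -0.4])
quad = v @ N @ v               # v^T (Mdot - 2C) v, expected to be 0
```

This property is what makes the Coriolis term drop out of Lyapunov derivatives in the stability proof of Section 5.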
Remark 1. From Figure 2, it can intuitively be seen that the estimation of the Jacobian matrices determined by the homogeneous transformation matrices (, , and ) is directly affected by the delayed visual feedback. The complexity of the system mainly lies in the highly nonlinear relationship between the delayed image states and the joint states.
To facilitate the analysis, Figure 3 shows the closed-loop structure of a typical delayed VS robotic system.
4. The Adaptive Controller Design
In this section, we investigate the uncalibrated dynamic-based visual servoing robotic system with visual feedback delays and kinematic uncertainties. In our study, the formulation of the uncalibrated VS robotic system is partly based upon the depth-independent Jacobian model developed in [27]. This model allows the depth to be time varying, so the visual servoing system can be stabilized even in the presence of a fast-changing feature image depth.
From (10), we can easily split from and thereby obtain the depth-independent Jacobian matrix, which is given by
Additionally, from (11), we define the following vector:
Therefore, (10) and (11) can be rewritten as
In the uncalibrated dynamic-based visual servoing system, an estimate of the Jacobian matrix is usually used in place of the unknown exact Jacobian matrix. It can easily be seen from (14) and (15) that the components of the depth-independent Jacobian matrix and the matrix fall into two categories: known and unknown. The known components are , , and the unknown components are , , and . The estimate of the Jacobian matrix can be analytically derived through linear parameterization [40]. From (14), it can be seen that the known and unknown components are coupled, and this coupling hinders the linear parameterization of these matrices. The following property is proposed to decouple them.
Property 1. For a vector , the product can be linearly parameterized as follows: where and are regressor matrices which consist of known parameters; is a vector which consists of unknown parameters; and denotes the number of unknown parameters, which satisfies .
Proof 1. Due to page limitations, see the proof in Appendix A.
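The essence of Property 1, pulling the unknown parameters out of a matrix-vector product so that a known regressor multiplies an unknown parameter vector, can be illustrated with a toy decomposition. Here A(theta) depends linearly on two unknown scalars through known basis matrices A1 and A2; everything below is made up for illustration and is much smaller than the paper's 36-parameter case.

```python
import numpy as np

# If A(theta) = theta1*A1 + theta2*A2 with known basis matrices,
# then A(theta) @ x = Y(x) @ theta, where the regressor Y is built
# only from known quantities (the basis matrices and x).
A1 = np.array([[1.0, 0.0], [0.0, 2.0]])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
theta = np.array([0.7, -0.3])            # "unknown" parameters
x = np.array([2.0, 5.0])                 # known vector

lhs = (theta[0] * A1 + theta[1] * A2) @ x   # product with unknowns inside
Y = np.column_stack([A1 @ x, A2 @ x])       # known regressor matrix
rhs = Y @ theta                             # same product, unknowns factored out
```

The two expressions agree exactly, which is the decoupling that makes online parameter adaptation possible: the controller only ever evaluates Y, never the unknown matrix itself.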
By Property 1, the Jacobian matrix can be expressed in a linear form: a known (regressor) matrix multiplied by an unknown vector. From (12), it can clearly be seen that the regressor matrix includes the current image position . Unfortunately, the feedback visual signals are delayed in our setting. We use to denote the coordinates of the delayed feature image position, where denotes the constant delay time. In this case, the matrix cannot be obtained; instead, we can only obtain . After substituting this regressor matrix, which includes the delayed visual feedback, into (14) and (12), we have where is named the delay-affected depth-independent Jacobian matrix. For simplicity, we call it the delay-affected Jacobian matrix hereafter. The relationship between and is given by
Using the delay-affected Jacobian matrix and , we define a new composite Jacobian matrix as where denotes the vector which satisfies .
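In simulation, the constant visual-feedback delay that forces the regressor to be evaluated at the delayed image position can be emulated with a simple FIFO buffer. A sketch, where the class name and sample values are illustrative and the delay is expressed in sampling periods:

```python
from collections import deque
import numpy as np

class DelayedFeatureBuffer:
    """FIFO buffer returning the feature position measured
    `delay_steps` sampling periods ago, emulating a constant
    visual-feedback delay T = delay_steps * dt."""
    def __init__(self, delay_steps, s0):
        # Pre-fill so the first reads return the initial feature position.
        self.buf = deque([s0] * (delay_steps + 1), maxlen=delay_steps + 1)

    def push(self, s):
        self.buf.append(s)
        return self.buf[0]   # y(t - T): the delayed measurement

buf = DelayedFeatureBuffer(delay_steps=2, s0=np.zeros(2))
delayed = None
for k in range(5):
    delayed = buf.push(np.array([float(k), float(k)]))
# After pushing samples k = 0..4, the buffer returns the sample
# from 2 steps back.
```

The controller then feeds this delayed output, rather than the current measurement, into the regressor, which is exactly the substitution that produces the delay-affected Jacobian above.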
Based on all the above analyses, we now propose the controller for delay-affected uncalibrated VS robotic systems as follows: where and are positive-definite symmetric matrices and denotes the estimate of . Note that the estimate of the new Jacobian matrix can be obtained from (21) by replacing the unknown matrices and with their estimates and , respectively, which yields
Additionally, recalling (12) and (18) in Property 1, we can easily derive the following linear parameterization form, where is the new regressor matrix including the delayed image state. To obtain , we propose the following adaptive law: where is a positive-definite symmetric matrix with proper dimensions and is short for . It is also not hard to derive and accordingly; please refer to the Notation in the Introduction for the explanation. Additionally, a rough bound on the unknown parameter vector can be given according to the and the feature Cartesian coordinates [27]. Thereby, we assume that both and are known, i.e., . Based on the above analyses, we can readily obtain the bound of from (24), i.e., and can be regarded as known. We define
Consequently, can be expressed in the interval matrix form [41] as follows: where denotes the element at the th row and the th column of . Likewise, is also bounded. and are given by
Hence, can be expressed in the interval matrix form where denotes the column vector whose th element is 1 and the other elements are 0; denotes the row vector whose th element is 1 and the others are 0; denotes the element at the th row and th column of .
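The adaptive law introduced in (24) above has the general shape of a gradient update driven by the regressor and a tracking error. A discrete-time sketch of that generic shape is given below; Gamma, Y, e, and all values are placeholders, and the paper's exact regressor and error signals are not reproduced here.

```python
import numpy as np

def adaptive_step(theta_hat, Y, e, Gamma, dt):
    """One Euler step of the gradient-type adaptive law
    theta_hat_dot = -Gamma @ Y.T @ e, the generic form used to
    update parameter estimates online. All arguments stand in for
    the paper's actual quantities."""
    return theta_hat - dt * (Gamma @ Y.T @ e)

theta = np.array([1.0, 0.5])              # current estimate
Y = np.array([[2.0, 0.0], [0.0, 1.0]])    # toy regressor
e = np.array([0.2, -0.1])                 # toy tracking error
Gamma = 0.5 * np.eye(2)                   # adaptation gain
theta_next = adaptive_step(theta, Y, e, Gamma, dt=0.01)
```

The gain matrix Gamma trades adaptation speed against noise sensitivity; in the paper it is the positive-definite matrix appearing in the adaptive law.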
Remark 2. From (24), one of the key points in deriving and is obtaining and . From practical experience, the range of actually depends on (1) the initial value of , which is set by the user, and (2) the real value , which is unknown. Even if is unknown, we can easily give estimates of its elements from other rough estimates. For more details, please refer to Appendix A.
5. Stability Analysis
Theorem 1. Consider the uncalibrated delayed visual servoing system described by (8), (11), (14), and (15) and the controller (22). For a given constant , if there exist symmetric matrices , and positive constants , , such that the following nonlinear matrix inequalities hold, where each and denotes an arbitrary positive constant, then the system is asymptotically stable; i.e., the image error of the feature point converges to zero, .
Proof 2. Combining (14), (19), and (21), we have
where .
Substituting controller (22) into (12), we have the following closed-loop system:
As noted above, the fact that and are bounded yields the result that
for some positive constants .
Let us consider the following nonnegative Lyapunov-Krasovskii functional candidate,
where the employment of the term follows typical practice (see [42], p. 118).
The time derivative of along the trajectory of system is given by
Multiplying both sides of (35) from the left yields
Rewriting (34) and then multiplying from the left, we have
Taking the differential of and invoking (25) yields
Substituting (39), (40), and (41) into (38), we obtain
Likewise, with Lemma 1, the cross terms below yield
Besides, from (10), can be rewritten as .
Having obtained the results in (43), we substitute them back and obtain the following inequality:
We analyze the term and the term one by one. First, we consider the term . In this term, both and are time-varying matrices. It should be noted that we assume and to be unknown, as mentioned above. Using Lemma 1 and (29), we can easily derive
where denotes any positive constant.
With Lemma 1 and (27), we can extend as
where denotes any positive constant.
Substituting (45) and (46) into the term and invoking (30) yields
where and are defined in (33).
Then, we consider the term . In an actual visual servoing robotic system, the depth-changing velocity is bounded. Hence, we can reasonably assume that is bounded, . Invoking (31), we have
Combining (47), (48), and (32), we finally have in (44), which means that the Lyapunov-Krasovskii functional never increases and is thus upper bounded. From (37), the boundedness of directly implies that the joint velocity , , the errors of , and the image error . Then the boundedness of the joint acceleration can be concluded from the closed-loop dynamics (35). Therefore, the joint velocity is uniformly continuous. Note that it is not hard to derive from (10) with bounded and . Thereby, we can also conclude that is uniformly continuous. and yield , and hence the image delay error is uniformly continuous. From (25), it can be derived that . Thereby, is uniformly continuous. Invoking Lemma 2 and Lemma 3, we have , , and . This completes the proof.
Remark 3. It can clearly be seen that a delay-dependent stability condition is presented in Theorem 1. The stability analyses given in [33–35] are delay-independent, which means the stability conditions impose no constraint on the system delays; hence, their stability results hold for delays of any magnitude. However, in reality, delays are usually bounded, and delay-independent results are therefore conservative. To obtain less conservative results, the magnitude of the delays should be taken into account. This is significant for delay stability research because of the reduced conservativeness.
Remark 4. In order to fully control robots with 6 or more DOFs, more noncollinear feature points are needed. For instance, three noncollinear feature points should be considered for a 6-DOF manipulator. The scheme proposed in this paper can be readily generalized to the case of multiple feature points by a method similar to that described in [28]. Considering the page limitation, we only present the single feature point case in this paper.
6. Simulation Results
To show the effectiveness of the control scheme described in (22) and Theorem 1, we conduct the following simulations.
The actual visual parameters are set as follows: m, pixels, pixels, pixels/m, pixels/m, and rad, where is focal length; and are coordinates of camera principal point in the image frame; and denote scale factors along axis and , respectively; and denotes intersection angle between axis and axis . The intrinsic matrix therefore can be derived as
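Since the numeric parameter values did not survive extraction, the construction of the intrinsic matrix from these quantities can only be sketched. The sketch below uses one common convention for the axis-angle (skew) term (the paper cites [36] for its exact model, which may place the skew differently), and all numeric values are made up.

```python
import numpy as np

def intrinsic_matrix(f, u0, v0, ku, kv, theta):
    """Pinhole intrinsic matrix from focal length f (m), principal
    point (u0, v0) (pixels), scale factors ku, kv (pixels/m), and
    the angle theta between the image axes. One common convention
    (Faugeras-style); illustrative only."""
    return np.array([
        [f * ku, -f * ku / np.tan(theta), u0],
        [0.0,     f * kv / np.sin(theta), v0],
        [0.0,     0.0,                    1.0],
    ])

# Made-up values; theta = pi/2 means orthogonal axes (no skew).
K = intrinsic_matrix(f=0.01, u0=160.0, v0=120.0,
                     ku=20000.0, kv=20000.0, theta=np.pi / 2)
```

With orthogonal axes the skew entry vanishes and the diagonal reduces to the familiar effective focal lengths f*ku and f*kv in pixels.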
For the setting of the camera’s position and pose, the is set as follows:
The gravitational acceleration is set as m/s^{2}. is time varying and is determined by the forward kinematics of the manipulator, whose parameters are given in Table 1.
 
Notes: denotes the link length; denotes the link twist; denotes the link offset; denotes the joint angle; denotes the link mass; denotes the distance between the barycenter and its preceding joint.
From Property 1 and according to the ranges of , , , , , in this paper, we may set and as and set and as where
The feature point's coordinates w.r.t. the base frame are (150, 20)^{T} m. The initial position coordinates on the image plane are (140, 81.44)^{T}, and the desired position coordinates on the image plane are (160.7, 120.6)^{T}.
Besides, we set m/s here, and and are obtained by solving the feasibility problem of (30), (31), and (32) with the solver feasp. In this simulation, we use , , , .
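The paper solves its matrix inequalities with MATLAB's feasp. As a loose Python analogue of that kind of feasibility certification, the sketch below solves a Lyapunov equation for a made-up stable system and checks positive definiteness of the solution; it is a generic illustration, not the paper's actual conditions (30)-(32).

```python
import numpy as np

# Certify stability of a toy system by solving the Lyapunov equation
# A^T P + P A = -Q and checking P > 0. The system matrix A is made up.
A = np.array([[-2.0, 0.5],
              [0.0, -1.5]])
Q = np.eye(2)
n = A.shape[0]

# Vectorized Lyapunov equation (row-major vec convention):
# (kron(A.T, I) + kron(I, A.T)) vec(P) = vec(-Q)
M = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
P = np.linalg.solve(M, (-Q).flatten()).reshape(n, n)

eigs = np.linalg.eigvalsh(P)
is_feasible = bool(np.all(eigs > 0))  # positive-definite P => stable
```

A dedicated LMI solver (feasp, or a Python SDP package) additionally handles matrix inequalities rather than equalities, which is what the delay-dependent conditions of Theorem 1 require; the sketch only conveys the "find a positive-definite certificate" idea.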
Based on all the above settings, two simulations are conducted. In the first simulation, the proposed control scheme is used to track the desired position under two different constant delays: ms and ms. Figures 4(a), 5(a), 6(a), and 7(a) show the position errors, the position, the velocity, and the trajectory of the feature point on the image plane, respectively. It can be observed that the performance is almost identical even under the different delays, 198 ms and 98 ms. This verifies that the convergence of the system is achieved as long as the conditions given in Theorem 1 hold. Besides, Figure 7 also shows better position tracking performance with the 98 ms delay than with 198 ms. To show the convergence of the estimates to the real values, we select some elements of the vector . Figure 8 shows the profile of the estimated parameters from to . It should be noted that the kinematic parameters converge only when the persistent excitation (P.E.) condition is satisfied. In our simulation, we choose the close to their real values so that these estimated parameters can converge to them. In most cases, the estimated parameters converge to the true values only up to a scale. However, this does not affect the convergence of the image errors.
(a) Scheme 1
(b) Scheme 2
(a) Scheme 1
(b) Scheme 2
(a) Scheme 1
(b) Scheme 2
(a) Scheme 1
(b) Scheme 2
To demonstrate the superiority of the proposed control scheme, we compare two control schemes: scheme 1 and scheme 2. Scheme 1 is the method proposed in this paper, and scheme 2, originating from [30], is modified accordingly in this simulation as follows.
It should be noted that the Jacobian matrix in scheme 2 does not consider the delay effects. Then, we conduct the second simulation. In this simulation, we use m/s, , , , , ms for scheme 2.
From Figures 4(b)–7(b), it can clearly be seen that the performance of scheme 2 in the presence of a delay time of ms is unsatisfactory. Abnormal oscillations caused by the delays can be observed. In contrast, the proposed scheme still guarantees very satisfactory control performance, which is not affected much by the delayed signals. In conclusion, the second simulation shows the superiority of the proposed scheme over existing schemes, as it eliminates the negative effects caused by delays to a great extent.
7. Conclusions
In this paper, we have proposed a control method for uncalibrated dynamic-based visual servoing robotic systems to cope with the delay problem in the visual feedback loops. To handle the unknown camera intrinsic and extrinsic parameters, we introduced the depth-independent Jacobian matrix and used linear parameterization to adaptively identify these uncertainties. We then took the delays into consideration and constructed a novel matrix called the delay-affected Jacobian matrix, based on which we proposed the adaptive controller. To prove the stability of the closed-loop system, a Lyapunov-Krasovskii functional was constructed, and delay-dependent stability conditions were provided to obtain less conservative results. Simulation results were presented to show the effectiveness of the proposed control scheme. To further validate its performance, experimental tests on real networked visual servoing robotic systems would be the most appropriate choice, and this is one of our main objectives in future research.
Appendix
A. Proof of Property 1
Proof 3. Let denote the th row of . Recalling (14), we can expand as follows:
where denotes the element of matrix , denotes the th element of , denotes the element of matrix , and denotes the th element of . Let and be the th elements of and , respectively. When , i.e., none of the elements and equals zero, the vector linearly depends on 36 unknown parameters, and can be derived. Define , , , and . Specifically, we have and
When is independent of , equals zero. can be obtained by removing the elements that equal zero, and accordingly can be obtained by removing the corresponding elements. When is independent of , and can be obtained by similar removal. In this way, we can derive the expressions of and for every .
Besides, because is a subset of , the linearization of follows directly from the property. When , can be expressed as
When , i.e., is independent of , and can be obtained by removing the corresponding elements. Then, we can derive the expressions of and for every . This completes the proof.
B. List of Notations & Symbols

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was financed by the Science and Technology Program of Tianjin, China, under Grant 15ZXZNGX00290.
References
 S. Hutchinson, G. D. Hager, and P. I. Corke, “A tutorial on visual servo control,” IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651–670, 1996.
 M. Vincze, “Dynamics and system performance of visual servoing,” in Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065), pp. 644–649, San Francisco, CA, USA, 2000.
 S. Benhimane and E. Malis, “A new approach to vision-based robot control with omnidirectional cameras,” in Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006, pp. 526–531, Orlando, FL, USA, 2006.
 R. Dahmouche, N. Andreff, Y. Mezouar, O. Ait-Aider, and P. Martinet, “Dynamic visual servoing from sequential regions of interest acquisition,” The International Journal of Robotics Research, vol. 31, no. 4, pp. 520–537, 2012.
 L. Weiss, A. Sanderson, and C. Neuman, “Dynamic sensor-based control of robots with visual feedback,” IEEE Journal on Robotics and Automation, vol. 3, no. 5, pp. 404–417, 1987.
 M. Jägersand, O. Fuentes, and R. Nelson, “Experimental evaluation of uncalibrated visual servoing for precision manipulation,” in Proceedings of International Conference on Robotics and Automation, pp. 2874–2880, Albuquerque, NM, USA, 1997.
 H. Hashimoto, T. Kubota, M. Sato, and F. Harashima, “Visual control of robotic manipulator based on neural networks,” IEEE Transactions on Industrial Electronics, vol. 39, no. 6, pp. 490–496, 1992.
 E. Zergeroglu, D. M. Dawson, M. S. de Querioz, and A. Behal, “Vision-based nonlinear tracking controllers with uncertain robot-camera parameters,” IEEE/ASME Transactions on Mechatronics, vol. 6, no. 3, pp. 322–337, 2001.
 H. Wang, B. Yang, Y. Liu, W. Chen, X. Liang, and R. Pfeifer, “Visual servoing of soft robot manipulator in constrained environments with an adaptive controller,” IEEE/ASME Transactions on Mechatronics, vol. 22, no. 1, pp. 41–50, 2017.
 H. Wang, “Adaptive control of robot manipulators with uncertain kinematics and dynamics,” IEEE Transactions on Automatic Control, vol. 62, no. 2, pp. 948–954, 2017.
 J. Chen, D. M. Dawson, W. E. Dixon, and A. Behal, “Adaptive homography-based visual servo tracking for a fixed camera configuration with a camera-in-hand extension,” IEEE Transactions on Control Systems Technology, vol. 13, no. 5, pp. 814–825, 2005.
 A. C. Leite and F. Lizarralde, “Passivity-based adaptive 3d visual servoing without depth and image velocity measurements for uncertain robot manipulators,” International Journal of Adaptive Control and Signal Processing, vol. 30, no. 8–10, pp. 1269–1297, 2016.
 P. I. Corke, High-performance visual closed-loop robot control, [Ph.D. thesis], Mechanical and Manufacturing Engineering, 1994.
 M. Vincze, M. Ayromlou, S. Chroust, M. Zillich, W. Ponweiser, and D. Legenstein, “Dynamic aspects of visual servoing and a framework for real-time 3D vision for robotics,” in Sensor Based Intelligent Robots, pp. 101–121, Springer, Berlin, Heidelberg, 2002.
 J. A. Gangloff and M. F. de Mathelin, “High speed visual servoing of a 6 DOF manipulator using MIMO predictive control,” in Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065), pp. 3751–3756, San Francisco, CA, USA, 2000.
 J. A. Gangloff and M. F. de Mathelin, “High-speed visual servoing of a 6-d.o.f. manipulator using multivariable predictive control,” Advanced Robotics, vol. 17, no. 10, pp. 993–1021, 2003.
 L. Cuvillon, E. Laroche, J. Gangloff, and M. de Mathelin, “GPC versus H∞ control for fast visual servoing of a medical manipulator including flexibilities,” in Proceedings of the 2005 IEEE International Conference on Robotics and Automation, pp. 4044–4049, Barcelona, Spain, 2006.
 H. Wu, L. Lou, C. C. Chen, S. Hirche, and K. Kuhnlenz, “Cloud-based networked visual servo control,” IEEE Transactions on Industrial Electronics, vol. 60, no. 2, pp. 554–566, 2013.
 M. Nakadokoro, S. Komada, and T. Hori, “Stereo visual servo of robot manipulators by estimated image features without 3d reconstruction,” in IEEE SMC'99 Conference Proceedings. 1999 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No.99CH37028), pp. 571–576, Tokyo, Japan, 1999.
 N. Dai, M. Nakamura, S. Komada, and J. Hirai, “Tracking of moving object by manipulator using estimated image feature and its error correction on image planes,” in The 8th IEEE International Workshop on Advanced Motion Control, 2004. AMC '04, pp. 653–657, Kawasaki, Japan, 2004.
 I. Kinbara, S. Komada, and J. Hirai, “Visual servo of active cameras and manipulators by time delay compensation of image features with simple online calibration,” in 2006 SICE-ICASE International Joint Conference, pp. 5317–5322, Busan, Republic of Korea, 2007.
 T. Inoue and S. Hirai, “Robotic manipulation with large time delay on visual feedback systems,” in 2010 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 1111–1115, Montreal, ON, Canada, 2010.
 Z. Gao and J. Su, “Estimation of image Jacobian matrix with time-delay compensation for uncalibrated visual servoing,” Control Theory and Applications, vol. 26, no. 1, pp. 218–234, 2009.
 Y. H. Liu, H. Wang, and K. Lam, “Dynamic visual servoing of robots in uncalibrated environments,” in 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3131–3136, Edmonton, Canada, 2006. View at: Publisher Site  Google Scholar
 H. Wang and Y. H. Liu, “Uncalibrated visual tracking control without visual velocity,” in Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006, pp. 2738–2743, Orlando, FL, USA, 2006. View at: Publisher Site  Google Scholar
 Y. Shen, D. Sun, Y.H. Liu, and K. Li, “Asymptotic trajectory tracking of manipulators using uncalibrated visual feedback,” IEEE/ASME Transactions on Mechatronics, vol. 8, no. 1, pp. 87–98, 2003. View at: Publisher Site  Google Scholar
 Y.H. Liu, H. Wang, C. Wang, and K. K. Lam, “Uncalibrated visual servoing of robots using a depthindependent interaction matrix,” IEEE Transactions on Robotics, vol. 22, no. 4, pp. 804–817, 2006. View at: Publisher Site  Google Scholar
 H. Wang, Y.H. Liu, and D. Zhou, “Dynamic visual tracking for manipulators using an uncalibrated fixed camera,” IEEE Transactions on Robotics, vol. 23, no. 3, pp. 610–617, 2007. View at: Publisher Site  Google Scholar
 F. Lizarralde, A. C. Leite, L. Hsu, and R. R. Costa, “Adaptive visual servoing scheme free of image velocity measurement for uncertain robot manipulators,” Automatica, vol. 49, no. 5, pp. 1304–1309, 2013. View at: Publisher Site  Google Scholar
 X. Liang, H. Wang, Y.H. Liu, W. Chen, and J. Zhao, “A unified design method for adaptive visual tracking control of robots with eyeinhand/fixed camera configuration,” Automatica, vol. 59, pp. 97–105, 2015. View at: Publisher Site  Google Scholar
 T. Li and H. Zhao, “Global finitetime adaptive control for uncalibrated robot manipulator based on visual servoing,” ISA Transactions, vol. 68, pp. 402–411, 2017. View at: Publisher Site  Google Scholar
 W. Qiao and R. Sipahi, “Consensus control under communication delay in a threerobot system: design and experiments,” IEEE Transactions on Control Systems Technology, vol. 24, no. 2, pp. 687–694, 2016. View at: Publisher Site  Google Scholar
 Y.C. Liu and N. Chopra, “Controlled synchronization of heterogeneous robotic manipulators in the task space,” IEEE Transactions on Robotics, vol. 28, no. 1, pp. 268–275, 2012. View at: Publisher Site  Google Scholar
 H. Wang, “Passivity based synchronization for networked robotic systems with uncertain kinematics and dynamics,” Automatica, vol. 49, no. 3, pp. 755–761, 2013. View at: Publisher Site  Google Scholar
 X. Liang, H. Wang, Y. H. Liu, W. Chen, G. Hu, and J. Zhao, “Adaptive taskspace cooperative tracking control of networked robotic manipulators without taskspace velocity measurements,” IEEE Transactions on Cybernetics, vol. 46, no. 10, pp. 2386–2398, 2016. View at: Publisher Site  Google Scholar
 K. Hashimoto, K. Nagahama, and T. Noritsugu, “A mode switching estimator for visual servoing,” in Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No.02CH37292), pp. 1610–1615, Washington, DC, USA, 2002. View at: Publisher Site  Google Scholar
 J. M. Sebastián, L. Pari, L. Angel, and A. Traslosheros, “Uncalibrated visual servoing using the fundamental matrix,” Robotics and Autonomous Systems, vol. 57, no. 1, pp. 1–10, 2009. View at: Publisher Site  Google Scholar
 A. Shademan, A.m. Farahmand, and M. Jägersand, “Robust jacobian estimation for uncalibrated visual servoing,” in 2010 IEEE International Conference on Robotics and Automation, pp. 5564–5569, Anchorage, AK, USA, 2010. View at: Publisher Site  Google Scholar
 H. K. Khalil, Nonlinear Systems, PrenticeHall, Inc., Upper Saddle River, NJ, USA, 3rd edition, 2002.
 J.J. E. Slotine and W. Li, “On the adaptive control of robot manipulators,” The International Journal of Robotics Research, vol. 6, no. 3, pp. 49–59, 1987. View at: Publisher Site  Google Scholar
 F. Garofalo, G. Celentano, and L. Glielmo, “Stability robustness of interval matrices via Lyapunov quadratic forms,” IEEE Transactions on Automatic Control, vol. 38, no. 2, pp. 281–284, 1993. View at: Publisher Site  Google Scholar
 R. Lozano, B. Brogliato, O. Egeland, and B. Maschke, “Dissipative systems analysis and control: theory and applications,” Measurement Science and Technology, vol. 12, no. 12, p. 2211, 2001.
Copyright
Copyright © 2018 Tao Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.