Abstract

This paper investigates the stabilization and trajectory tracking problem of a wheeled mobile robot with a ceiling-mounted camera in complex environments. First, an adaptive visual servoing controller is proposed based on the uncalibrated kinematic model, motivated by the complex operation environment. Then, an adaptive controller is derived to provide a solution to the uncertain dynamic control of a wheeled mobile robot subject to parametric uncertainties. Furthermore, the proposed controllers apply to a more general situation where the parallelism requirement between the image plane and the operation plane is no longer needed. The overparameterization of regressor matrices is avoided by exploiting the structure of the camera-robot system, and thus the computational complexity of the controller is reduced. The Lyapunov method is employed to establish the stability of the closed-loop system. Finally, simulation results are presented to demonstrate the performance of the suggested controllers.

1. Introduction

In recent decades, wheeled mobile robots (WMRs) have received increasing attention due to their promising applications in transportation, health care, security, and so on, which has promoted research on high-accuracy tracking control and stability analysis of WMRs [1–4]. In particular, a WMR is a nonholonomic mechanical system that cannot be stabilized at an equilibrium by any continuous, static state-feedback controller [5–7], which greatly complicates the study of WMRs. A significant direction in the motion control of WMRs is to employ various kinds of sensors in a closed-loop controller. The visual sensor, a typical noncontact sensor, has particular advantages such as abundant visual information and high efficiency; hence, visual servoing control of WMRs has become a vigorous research field worldwide.

Numerous scientific achievements have been reported on visual servoing and vision-based manipulation [8, 9]. As with robot manipulators, the vision system of a mobile robot can take two kinds of configurations, namely, the eye-in-hand configuration [10, 11] and the fixed-camera configuration [12, 13]. In the first configuration, the camera is mounted on the end-effector. In contrast, when the camera is located on the ceiling, the setup is called a static-camera or fixed-camera configuration. To date, there has been a plethora of prominent literature concerning the visual servoing of nonholonomic mobile robots. To mention a few, in [14], position-based visual servoing (PBVS) was employed for visual tracking between a WMR and a multi-DOF crane. In [15], a visual servoing scheme was presented for a nonholonomic mobile robot to combine the merits of PBVS and image-based visual servoing (IBVS). In [16], a novel strategy was proposed for visual servoing of a mobile robot, and the difficult issue of automatic extrinsic calibration was addressed. It should be noted that the above-mentioned works require the camera mounted on the end-effector to be tediously calibrated beforehand. Unfortunately, the controllers are very sensitive to camera calibration errors, which may give rise to reduced accuracy. To obviate this limitation, the uncalibrated camera system has emerged as a valid tool for practical systems. In [17], two independent uncalibrated cameras were used to accomplish person tracking for a vision-based mobile robot subject to a nonholonomic constraint. The authors in [18] addressed a visual servo regulation approach which works well without a perfectly calibrated camera. To deal with imperfect calibration of the camera, visual servoing of nonholonomic mobile robots was proposed in [19], considering both unknown extrinsic parameters and the unknown depth from the camera to the motion plane. In [20], without calibrating the camera, an eye-in-hand visual trajectory tracking control strategy was constructed to ensure that the WMR is able to track the desired trajectory.

The aforesaid papers mainly discuss the visual servoing of nonholonomic mobile robots with the eye-in-hand configuration. The fixed-camera configuration has a global sight, enabling the camera system to keep the observed object always in the field of view. Therefore, many researchers have also devoted themselves to solutions for a WMR with a fixed uncalibrated camera. For instance, in [21], unified tracking and regulation visual servoing control of a WMR was studied, and the state information can be utilized to formulate the WMR kinematic model. In [22], a monocular camera with a fixed position and orientation was used to track the desired trajectory for a WMR, and the controller does not require the camera to be mounted on the robot. Taking the limited velocity of a WMR into account, a control scheme for tracking a moving target with a WMR was presented in [23]. Despite the significant progress of visual servoing with a fixed uncalibrated camera, the adaptability of these controllers is unsatisfactory since the camera plane is always required to be parallel to the motion plane of the robots. This means that the controllers in [21–23] are no longer effective when the camera is fixed at a general orientation on the ceiling. To overcome this drawback, the authors in [24, 25] proposed visual servoing of a mobile robot without the parallelism requirement. By employing an adaptive image-based visual servoing approach, the pose constraint between the camera image plane and the motion plane of the WMR is removed. However, all these methods suffer from overparameterization in the process of the decoupled linear transformation. In addition, the previous controllers are developed on a kinematics-based model, and the nonlinear dynamics are not taken into consideration in the controller design.

Dynamic model-based control methods [26–29] reflect the motion of real mobile robots with significant dynamics characterized by mass, inertia, and friction, which are not considered in kinematics-based control. The nonlinear dynamics of a mobile robot usually contain uncertain and time-varying parameters. Consequently, nonlinear dynamic controllers that deal with unmodeled robot dynamics deserve further research. Control methodologies such as adaptive control [6], sliding mode control [27], and neural network control [28] have been developed for dynamic models of mobile robots with uncertain parameters. So far, visual servoing control for mobile robots at the dynamic level can be found in [8, 30–32]. In [32], position/orientation tracking control of WMRs via an uncalibrated camera was considered, and an adaptive controller was designed to compensate for the dynamic and camera-system uncertainties. It is noteworthy that the preceding studies are confined to visual servoing of mobile robots based on the dynamic model, and these methods are invalid in a more general situation where the uncalibrated camera is fixed at an arbitrary position. Additionally, overparameterization limits the applicability of these controllers to a great extent.

In this paper, the stabilization and trajectory tracking problems of a wheeled mobile robot in complex environments are studied. The main contributions of this paper are threefold:
(1) Two visual servoing controllers are proposed to stabilize a wheeled mobile robot with a ceiling-mounted camera so that the desired trajectory tracking can be realized. First, an adaptive visual servoing controller is proposed based on the kinematic model. Then, an adaptive controller is derived to provide a solution for the uncertain dynamics of a wheeled mobile robot subject to parametric uncertainties related to the camera system.
(2) An uncalibrated visual servoing control strategy is proposed to realize trajectory tracking of a WMR, whose major superiority lies in avoiding both the requirement that the camera plane be parallel to the motion plane of the robot and the overparameterization of [24, 25]. Such a solution allows the controllers to be applied in a more general situation with a simpler structure and higher efficiency.
(3) In comparison with the existing works on visual servoing mobile robot control in [19, 33], the camera parameters, including the intrinsic and extrinsic parameters, need not be accurately calibrated, and tracking control can be ensured in the presence of uncertain dynamics.

2. Preliminaries and System Descriptions

Throughout this paper, a typical setup for the visually servoed wheeled mobile robot is considered, as shown in Figure 1, where the camera is mounted on the ceiling to observe the movement of feature point labeled on the mobile robot. Let be the base coordinate frame, be the camera coordinate frame, and be the mobile robot coordinate frame, respectively. Furthermore, let be the center of mass of wheeled mobile robot, be the feature point, and be the distance from to along the positive direction of axis . Without loss of generality, it is assumed that the robot moves in a specific plane. Note that both the image-based kinematic and dynamic control are fully considered in this paper.

2.1. Kinematics Model of Nonholonomic Mobile Robot in Task Space

Let us first review the kinematics model of a mobile robot. Denote the task-space position of the wheeled mobile robot with respect to the base coordinate frame by and the orientation by , whose forward rotation direction is set counterclockwise from axis . Then, the kinematic model of the mobile robot can be written as in [32, 34], where and denote the linear velocity and angular velocity of the wheeled mobile robot in task space, respectively. From [32], the nonholonomic constraint of the wheeled mobile robot can be formulated as follows:
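The model referenced here is the standard unicycle kinematics; since the displayed equations did not survive extraction, the sketch below reconstructs the commonly used form (function and variable names are our own, hypothetical choices):

```python
import math

def unicycle_step(x, y, theta, v, omega, dt):
    """One Euler step of the standard unicycle kinematics:
    x_dot = v*cos(theta), y_dot = v*sin(theta), theta_dot = omega."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

def constraint_residual(x_dot, y_dot, theta):
    """Residual of the nonholonomic (no lateral slip) constraint
    x_dot*sin(theta) - y_dot*cos(theta) = 0."""
    return x_dot * math.sin(theta) - y_dot * math.cos(theta)
```

Any velocity pair (v, omega) produced by a kinematic controller automatically satisfies the constraint, which is why the residual of (v cos θ, v sin θ) is identically zero.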

This nonholonomic constraint indicates that the velocity along the connected direction between the left and right driving wheels is restricted to be zero; that is, the wheeled mobile robot will not slip during task execution. Combining the definition of and mobile robot kinematics, the task-space position of with respect to the base coordinate frame can be described as [24]

Differentiating (4) with respect to time gives rise to

2.2. Transformation from Task Space to Image Space

Let be the position of the feature point on the image plane. Via the perspective projection model [8, 35], the mapping relation of from task space to image space is given by the following, where is the depth information of the feature point, is the so-called perspective projection matrix (see [8]), denotes the homogeneous transformation matrix from the base frame to the camera frame, and denotes the internal transformation matrix of the camera. It should be noted that and depend on the intrinsic and extrinsic parameters, respectively. In addition, the depth information is defined as follows, where denotes the 3rd row of matrix . Differentiating (6) and utilizing the definition of depth, we can obtain the following, where can be interpreted as the kinematic control input and is the left submatrix of . Note that depends on both the intrinsic and extrinsic parameters of the visual model. In addition, is called the depth-independent interaction matrix since the depth information is separated. By exploiting the structure of , we can further obtain
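The mapping described above is the standard perspective projection; as the displayed formulas were lost in extraction, a minimal numerical sketch of that standard model is given below (the names K, T_cb, and project are our own assumptions):

```python
import numpy as np

def project(K, T_cb, X_b):
    """Project a task-space point X_b (3-vector, base frame) to the image.
    K: 3x3 intrinsic matrix; T_cb: 4x4 homogeneous base-to-camera transform.
    Returns pixel coordinates (u, v) and the depth z, where z is the 3rd
    component of the projection applied to the homogeneous point, as in
    the standard perspective model."""
    P = K @ T_cb[:3, :]            # 3x4 perspective projection matrix
    Xh = np.append(X_b, 1.0)       # homogeneous coordinates
    s = P @ Xh
    z = s[2]                       # depth along the optical axis
    return s[:2] / z, z
```

With identity extrinsics, a point on the optical axis projects exactly to the principal point, and its depth equals its distance along the axis.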

Similarly, the time differential of depth information can be written as

The linearization properties, which are important to simplify the control design, are given as follows [34, 35].

Property 1. The products of and can be linearly decomposed and recombined as follows, where is a constant vector, is a diagonal matrix with , and are visual model parameter vectors, and and are the regressor matrices that do not depend on the parameter vectors and . Specifically, by observing (7), (9), and (10), it can further be obtained as follows, where is the depth regressor matrix. Note that the vector should involve all the depth parameters. By employing (9), (11), and (12), we have

Remark 1. In this paper, parameter uncertainties of the visual servoing robot system are addressed, which means that the real parameter values and in (11)–(15) are unknown in the control design. Moreover, the image depth is not required to be constant during robot operation as in [25, 36]; that is, the fixed-camera image plane need not be parallel to the operation plane, so a more realistic scenario is considered in both kinematic and dynamic control. In addition, the distance between the feature point and the origin of the coordinate system is assumed to be uncalibrated, which, together with the above parameter uncertainties, poses great complexity and challenges for visual tracking control.
Throughout this paper, the following assumptions hold.

Assumption 1. The feature point can always be detected throughout the entire robot workspace such that the image position is continuously available. Moreover, the orientation of the mobile robot can be measured by encoders or other optical sensors mounted on the actuators.

2.3. Dynamics Model of Nonholonomic Mobile Robot

The dynamic behavior of the wheeled mobile robot can be expressed by the Euler-Lagrange equation as follows [6, 37], where , is the symmetric positive-definite inertia matrix, is the Coriolis and centrifugal matrix, denotes the gravitational force, denotes the input transformation matrix, represents the dynamic input torque, is the so-called constraint vector with being the constraint force, and the constraint can further be represented as

It must be noted that matrices , , , , and do not depend on the actual position of and (more details of the robotic dynamics model can be found in [37]). Based on the kinematics (1) and (2), the following holds:

Differentiating both sides of (18), substituting into the robot dynamics (16), and premultiplying both sides by , we have the following, where (17) is utilized in the simplification and , , , and , respectively. To facilitate the control scheme, the dynamics properties of the WMR are employed [37].
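Since the displayed equations were lost, the standard reduction this passage describes (cf. [37]) can be sketched as follows, with S, v, and λ as the conventional, here hypothetical, symbol names:

```latex
% Constraint elimination: \dot q = S(q)v with A(q)S(q) = 0.
% Substituting into
%   M(q)\ddot q + C(q,\dot q)\dot q + G(q) = B(q)\tau + A^{T}(q)\lambda
% and premultiplying by S^{T}(q) cancels the constraint force term:
\bar{M}\dot{v} + \bar{C}v + \bar{G} = \bar{B}\tau,
\qquad
\bar{M} = S^{T} M S,\quad
\bar{C} = S^{T}\!\left(M\dot{S} + C S\right),\quad
\bar{G} = S^{T} G,\quad
\bar{B} = S^{T} B .
```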

Property 2. The inertia matrix is symmetric and positive definite and also satisfies the following, where and are positive constants and denotes the standard Euclidean norm.

Property 3. The matrix is skew-symmetric such that the following holds, with being a constant vector.

Property 4. The dynamic equation (19) can be linearly restructured as follows, where is a differentiable vector, denotes the constant parameter vector of the dynamics, which is unknown in the control design, and is the regressor matrix of the dynamics.

Remark 2. By observing (8) and (19), it can be found that the kinematic control and the dynamic control are related by and , respectively. If the designed kinematic input were actually achievable during task execution without any time delay, visual tracking control could be conveniently realized by the kinematic loop alone. However, most state-of-the-art research on wheeled mobile robot control [19, 32, 33, 38] stresses that the motors on the left and right wheels may not respond fast enough, with the result that the actual kinematic control values may lag behind the designed values. Thus, in this paper, dynamics control for the visual servoing WMR is also addressed, simultaneously taking the mechanical parameter uncertainties into consideration; that is, the precise parameter values (e.g., robot mass, inertia, and friction) are not required to be exactly measured.

2.4. Problem Statement

Based on the above system model and assumptions, control problems from two different perspectives, namely, kinematic and dynamic control, are addressed. Given a continuous desired trajectory on the image plane, this paper aims to solve the following problems:
P1: assuming that the WMR responds fast enough, design an adaptive visual servoing kinematic controller (AVSKC) such that precise trajectory tracking performance is obtained in the absence of a calibrated camera model; that is, (23) holds.
P2: when the kinematic input is not always achievable, design an adaptive visual servoing dynamic controller (AVSDC) such that (23) holds, simultaneously taking into account the uncalibrated camera-robot model.

3. Adaptive Visual Servoing Kinematic Control for Wheeled Mobile Robot under Uncalibrated Visual Model

In this part, we focus on the adaptive visual servoing kinematic control scheme for the wheeled mobile robot with an uncalibrated camera model, where the projection plane of the camera need not be parallel to the operation plane during task execution; the dynamic control will be presented in the next section. Since the parameters of the visual model are unknown, adaption laws are presented to estimate the real parameter values, and based on the estimated parameters, the AVSKC is developed to realize asymptotic image trajectory tracking.

3.1. Controller Design

Let , , and be the estimated values of , , and , obtained by replacing the unknown parameters and in , , and with the estimations and , respectively, where the estimations are provided by the adaption laws. Define as the image error. Then, inspired by [25], the AVSKC is designed as follows, where is a positive constant. In (24), the estimated visual model rather than a calibrated model is utilized, and the estimated depth and its differential are also introduced to compensate for the model error since the image plane and the operation plane are nonparallel. Now, we can further analyze the closed-loop kinematics with depth information as follows, where and , and Property 1 is used. Substituting the AVSKC (24) into (25) gives rise to

3.2. Unknown Parameter Estimation

By observing (24), it is obvious that the estimation of is employed, which requires that the parameters and be updated online. The kinematic parameter updating laws are presented as follows, where and are positive-definite diagonal matrices. Thus, by integrating (27) and (28), , , and in (24) become available.

3.3. Stability Analysis

At this point, we are going to formulate the first theorem.

Theorem 1. Consider the visual servoing wheeled mobile robot represented by (1), (2), (4), (6), and (8), satisfying the assumption that the estimated interaction matrix is nonsingular. In the case that the designed kinematic input is actually achievable in the task execution, the adaptive visual servoing kinematic controller (AVSKC) given by (24), together with the visual parameter adaption laws (27) and (28), ensures the global stability of (26) and the asymptotic convergence of to zero such that .

Proof. Construct the kinematic-based Lyapunov function candidate as in (29). Differentiating with respect to time yields (30). Substituting the closed-loop kinematics (26) and the parameter updating laws (27) and (28) into (30), the derivative of can be written as (31). Since and , we can conclude that is bounded; that is, , , and are bounded, which directly implies that and are both bounded since and are constants. Thus, , , and are all bounded, giving rise to the boundedness of from (24), which means that from (8), and the boundedness of (26) is guaranteed. From (29) and (31), we have . Therefore, we obtain . This completes the proof.
From the result in [34], it has been proven that the matrix is always nonsingular. Thus, if the parameters of in and are updated properly, it can be ensured that is full rank by modifying the parameter adaption laws. In this paper, the so-called parameter projection [39] is introduced to avoid singularity of . The adaption laws for the visual kinematic parameters are presented as follows, where , , , and are the th elements of , , , and . Furthermore, the projection function is given as in [39, 40], where and denote the upper and lower bounds of parameter . In this way, if the condition is satisfied, then the parameter remains in the region . We are now ready to state the following proposition.
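The projection function of [39, 40] typically takes the discontinuous clamped form below; this is a hedged sketch under that standard definition (the function and argument names are ours):

```python
def proj(theta_hat, w, lower, upper):
    """Parameter projection in the style of [39, 40]: return the nominal
    adaptation rate w unless the estimate theta_hat sits on a bound and
    w would push it outside, in which case the rate is clamped to zero.
    If theta_hat starts in [lower, upper], it never leaves that region."""
    if theta_hat >= upper and w > 0.0:
        return 0.0
    if theta_hat <= lower and w < 0.0:
        return 0.0
    return w
```

Applying this clamp componentwise to each scalar adaptation rate keeps every estimate inside its prescribed bounds, which is what rules out singular estimated interaction matrices.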

Proposition 1. Consider the visual servoing wheeled mobile robot represented by (1), (2), (4), (6), and (8). In the case that is achievable, the adaptive visual servoing kinematic controller (AVSKC) given by (24) together with the visual parameter adaption laws (33) and (35) ensures the global stability of (26) and the asymptotical convergence of image errors such that .

Proof. Choose the same Lyapunov function candidate (29) and compute its time derivative. The proof of Proposition 1 then follows that of Theorem 1.

Remark 3. Note that the considerations on the visual kinematic model in this paper are similar to those in [19, 24, 25, 32, 33, 36], where the visual model parameters are uncalibrated in the kinematic control design. However, the projection plane is set to be parallel to the operation plane in [32]. Also, note that the eye-in-hand configuration is addressed in [19, 33] rather than the eye-to-hand setup of this paper; in addition, only partial camera intrinsic parameters are taken into consideration in [33], and only the extrinsic camera parameters are considered in [19]. Furthermore, the overparametrization problem is still unresolved in [24, 25], where a regressor matrix needs to be determined. Two extra feature points are introduced in [36] for the purpose of preventing the direct use of the image Jacobian matrix in the control design. The major difference between the proposed AVSKC scheme and the uncalibrated visual tracking control schemes [19, 24, 25, 32, 33, 36] is that the parameters of the image projection matrix, including the intrinsic and extrinsic camera parameters, are estimated by adaption laws while avoiding the overparametrization problem. This is realized by exploiting the structure of the image Jacobian matrix, inspired by [35] (see Property 1). Specifically, the salient features of the proposed AVSKC scheme lie in (I) the structurally simple implementation of the control law (24); (II) the inexpensive computation of the regressor matrices and in the adaption laws (27) and (28) (or (33) and (35)); (III) the nonsingularity of and the handling of uncertain parameters by the adaption laws (33) and (35).

Remark 4. If the actuators of the WMR perform effectively, visual tracking on the image plane can be conveniently realized by the proposed AVSKC scheme. However, it is well recognized that the presence of dynamic uncertainties has a great negative impact on control performance. In the next section, we propose a dynamic control scheme for the visual servoing WMR that also handles dynamic uncertainties.

4. Adaptive Visual Servoing Dynamic Control for Wheeled Mobile Robot with Uncalibrated Visual Model and Dynamics

The focus of this section is to extend the wheeled mobile robot kinematic control to dynamic control in the presence of an uncalibrated visual model and uncertain dynamics; that is, neither the kinematic nor the dynamic parameters are required to be measured accurately. Note that, in this case, the designed controller becomes the dynamic input torque , so a deeper control loop is considered.

4.1. Controller Design

Define a referenced image velocity as follows, where is a positive constant. Then, the reference errors of the image velocity can be denoted as
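The referenced image velocity and reference error commonly take the Slotine-Li form; the sketch below is a plausible reconstruction under that assumption (the paper's exact expressions were lost in extraction, and alpha and the function names are ours):

```python
import numpy as np

def reference_velocity(pd_dot, p, pd, alpha):
    """Referenced image velocity: the desired velocity minus a position
    feedback term, p_r_dot = pd_dot - alpha * (p - pd)."""
    return pd_dot - alpha * (p - pd)

def reference_error(p_dot, pd_dot, p, pd, alpha):
    """Reference velocity error s = p_dot - p_r_dot
    = (p_dot - pd_dot) + alpha * (p - pd): it combines the image-velocity
    and image-position errors, so driving s to zero drives both to zero."""
    return p_dot - reference_velocity(pd_dot, p, pd, alpha)
```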

Thus, the reference errors contain both the image errors and image velocity errors. Furthermore, define a kinematic auxiliary variable

Differentiating (41) with respect to time, we have

The purpose of designing this auxiliary variable is to connect the kinematic control variable , which can be specified as

Note that, in (41), is the real response produced by the evolution of the robot dynamics (19) rather than a designed input. The relation between and can be derived as follows, where is a positive constant. Using Property 1, we can further obtain and . Thus, (44) can be rewritten as

Based on the kinematic control errors and reference errors , the adaptive visual servoing dynamic controller (AVSDC) is proposed as follows, where is a positive constant.

Remark 5. Since is related to the actuator dynamics rather than the robot dynamics, in this paper, we assume that the exact structure and parameters of are known. In particular, is defined as the identity matrix in [6, 38] if the actuators are free of operation faults.

Remark 6. Notably, in the proposed AVSDC scheme, both image errors and velocity errors are introduced in the control law, which strengthens the tracking performance and robustness of the dynamic closed-loop system. As will be shown in the stability analysis, the asymptotic convergence of leads to the asymptotic convergence of both and .
By substituting the AVSDC (46) into (19) and employing Property 4, one can obtain the closed-loop dynamics:

4.2. Unknown Parameter Estimation

In the AVSDC design (46), the estimated dynamics and visual kinematics are employed, and the estimated parameters are updated online as follows, where is a positive-definite diagonal matrix, the projection operation is given in (36) and (37), and

4.3. Stability Analysis

Based on the above system analysis of closed-loop dynamics, we have the following result.

Theorem 2. Consider the visually servoed WMR system consisting of the uncalibrated kinematics (1), (2), (4), (6), and (8) and uncertain dynamics (19), under the control of AVSDC (46) with parameter updating laws (48), (49), and (50). Then, the closed-loop dynamics system of WMR is globally bounded, and the image errors are convergent to zero asymptotically.

Proof. Similarly, consider the Lyapunov function candidate (52). Taking the derivative of yields (53). Premultiplying both sides of (47) by and then substituting into (53), we obtain (54), where Property 3 is used. Subsequently, substituting (45) and the parameter updating laws (48), (49), and (50) into (54), we obtain (55). Since and hold simultaneously, must be bounded; that is, , , , , and are all bounded, giving rise to the boundedness of , , , , and since , , and are all constants. Moreover, is bounded and nonsingular, and , , and by observing (40) and (43). From (42), we have , leading to from (46). According to the robot dynamics (19), we have , which directly implies that . Thus, the closed-loop dynamics of the WMR in (47) is globally bounded.
Furthermore, from the result of (52) and (55), we get and . Differentiating (8) with respect to time leads to . Additionally, from the above analysis, , which thus results in . Therefore, we have , since is defined by the projection function, which finally indicates that the image errors converge to zero such that and .

Remark 7. Compared with recent work on handling uncertain parameters for WMRs [19, 33, 34, 36], where only kinematic uncertainties are addressed, this paper extends uncalibrated visual servoing control to a dynamic control loop in the presence of parameter uncertainties and varying depth. It can be seen in the AVSDC (46) that both the reference image errors and the kinematic control errors , which contain the velocity errors and , are employed concurrently, giving rise to asymptotic convergence of both and , in contrast with the AVSKC (24). Furthermore, the parameter uncertainties of the visual kinematics and dynamics are adaptively compensated by the parameter updating laws (48), (49), and (50). In fact, the AVSDC (46) can, to some extent, be regarded as containing the kinematic control through the design of the kinematic auxiliary variable .

5. Numerical Simulations

To demonstrate the tracking performance of the AVSKC and AVSDC schemes, simulation studies are carried out. As in Figure 1, a two-wheeled mobile robot with a fixed camera is considered.

5.1. Trajectory Tracking for AVSKC

In the simulation task, we first address the AVSKC scheme under the parallel and unparallel camera cases. Assume that one feature point is marked on the WMR, and the distance is set to be . The simulated parameters of the perspective projection matrix in [8, 25] are given in Table 1, where denotes the focal length of the camera, and denote the scale factors of the two image axes, and are the coordinates of the principal point, is the included angle between the coordinate axes and is assumed to be known, and the rotation matrix in the transformation matrix is set as in the parallel camera case and in the unparallel camera case, respectively. Note that these parameters are only used to construct the simulated model and are unavailable in the control design. For simplicity, the detailed expressions of the system model in (6) and (7) and Property 1 can be found in [8, 35]. The control gains and adaption gains are set as and , respectively. The upper and lower bounds in the parameter projection are designed as , , , and , and the initial parameters are and , respectively. The initial states of the WMR are , where the superscript denotes the th initial condition in the simulation task, and the referenced trajectory is given as

5.1.1. Parallel Camera Case (PCC)

In this case, the image plane is parallel to the operation plane. The initial states of the WMR are set as , , and , respectively. The graphs in Figures 2(a)–2(c) show the corresponding simulation results, from which we observe that the real trajectory converges to the referenced trajectory in about 2.5 s. Note that, in this case, the desired velocity is time varying; nevertheless, a smooth real trajectory and bounded states are still achieved.

5.1.2. Unparallel Camera Case (UPCC)

In this case, the camera is placed at an orientation unparallel to the operation plane. The initial states of the WMR are the same as in the PCC. The simulation results are depicted in Figures 2(d)–2(f). Note that, in Figures 2(e) and 2(b), the initial points on the image plane do not coincide since the camera parameters take different values. Moreover, the image errors asymptotically converge to zero as expected, verifying the effectiveness of the AVSKC scheme.

5.2. Trajectory Tracking for AVSDC

In this subsection, we test the tracking performance of the AVSDC scheme under the tracking line and tracking circle cases. Due to space limitations, the parametric dynamics model of the WMR is omitted; its detailed expressions are given in [38], and the initial value of is set as . The control gains are set as , , , and ; apart from this, all the simulated model and system parameters are the same as in the UPCC.

5.2.1. Tracking Line Case (TLC)

The reference line is given as

In this case, the desired velocity is constant. As established in Theorem 2, the real trajectory on the image plane asymptotically converges to the desired trajectory in both position and velocity, which is confirmed by the simulation results in Figures 3(a)–3(c).

5.2.2. Tracking Circle Case (TCC)

In this case, the desired trajectory is chosen as in the UPCC, and an external disturbance is applied to the robot dynamics such that

The time histories of the corresponding results are plotted in Figures 3(d)–3(f). As predicted by Theorem 2, both and asymptotically converge to zero in about 1 s even under the influence of the external disturbance. Furthermore, faster responses are obtained, as expected, compared with the simulation results of the AVSKC scheme (see Figures 2(d) and 3(d)).

6. Conclusions

Two uncalibrated visual servoing control schemes for the wheeled mobile robot were developed from different perspectives, namely, kinematic control and dynamic control. By utilizing the linearization properties of the visual kinematics and robot dynamics, image-based tracking control laws (i.e., AVSKC and AVSDC) together with parameter adaption algorithms were proposed to realize asymptotic convergence of the image errors without knowledge of the visual model and robot parameters. Furthermore, the overparametrization problem is avoided by exploiting the structure of the depth-independent interaction matrix, giving lower-dimensional regressor matrices and a simple configuration of the parameter adaption laws. It was proven by Lyapunov theory that both the AVSKC and AVSDC schemes achieve global stability of the closed-loop system. Lastly, numerical simulations were carried out to confirm the performance of the AVSKC and AVSDC.

In this paper, we assume that the image trajectory is given in advance, and external forces on the robot system in complex environments are not considered. Furthermore, the applicability of visual servoing WMR control is worth further exploration. Thus, future work encompasses the deterministic learning and accurate identification of system dynamics [41], the applicability of WMRs with actuator constraints [42], and obstacle avoidance [43–45].

Data Availability

No underlying data are included.

Conflicts of Interest

The authors declare that they have no financial and personal relationships with other people or organizations that can inappropriately influence their work; there is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, the paper.

Acknowledgments

This work was supported in part by the Research start-up funds of DGUT (GC300501-113 and GC300501-111), in part by the Innovation Talent Program for Young Scholars of Guangdong Province (2018KQNCX252 and 2018KTSCX226), in part by the General program of Guangdong Natural Science Foundation (2019A1515010493), and in part by the Basic and Applied Basic Program of Guangdong Province (2019A1515110477, 2019A1515110476, and 2019B1515120076).