#### Abstract

Recent decades have witnessed the rapid evolution of robotic applications and their expansion into a variety of spheres with remarkable achievements. This article investigates a crucial technique of robot manipulators referred to as visual servoing, which relies on visual feedback to respond to external information. In this regard, the visual servoing problem is transformed into a quadratic programming problem with equality and inequality constraints. Differing from traditional methods, a gradient-based recurrent neural network (GRNN) for solving the visual servoing problem is newly proposed in this article in the light of the gradient descent method. A stability proof is then presented, showing that the pixel error converges exponentially to zero. Specifically, the proposed method drives the manipulator to the desired static point while keeping the physical constraints satisfied. After that, the feasibility and superiority of the proposed GRNN are verified by simulative experiments. Significantly, the proposed visual servoing method can be applied to medical robots and rehabilitation robots to further assist doctors in treating patients remotely.

#### 1. Introduction

As one of the greatest human inventions of the 20th century, robot technology has undoubtedly made great progress in the past decades with brilliant research achievements [1–4]. Having passed through birth, growth, and maturity, robots have become indispensable core equipment in the manufacturing industry due to their high automation and efficiency. In particular, as the rising stars of the robot family, redundant robots, which possess more degrees of freedom (DOFs) than a task requires, are capable of performing complicated tasks efficiently thanks to their versatility. In detail, the redundancy characteristic assists redundant robots in fulfilling additional task demands, for example, repetitive motion planning [5], physical constraint avoidance [6], and manipulability optimization [7, 8]. In combination with medical technology, various medical robots have been developed and explored for patient rehabilitation and surgical execution as an important application prospect. Relying on high reliability and flexibility, medical robots are able to perform complex medical tasks, thus reducing the burden on doctors and improving treatment. The learning and control abilities of various robots are also valued and explored by many scholars [9–11]. A novel learning framework enabling a robot to learn and generalize human-like variable impedance skills is developed in [9] with great research and practical value. Further, some adaptive control methods are presented for estimating the unknown model of manipulator dynamics, achieving great parameter estimation and tracking effects [10, 11].

In recent years, the kinematic control of redundant robots has become a research hotspot, drawing the attention of numerous scholars to expand their applications [12–16]. Zhang and Zhang present a minimum-velocity-norm (MVN) scheme for the redundancy resolution of redundant manipulators, which retains the robot joints within safe bounds [17]. A modified neural network approach in [18] is well designed for the precise control of the robot manipulator, which can eliminate the error accumulation with accurate results. Moreover, the authors in [19] research an ingenious transformation method to deal with the acceleration limitation problem from the velocity level, and the experimental results illustrate the superiority of the method. It is worth noting that the above investigations [15–19] all transform the kinematic control issue of redundant robots into quadratic programming and then exploit the Karush–Kuhn–Tucker (KKT) conditions [20] or the Lagrange multiplier method to solve the optimization schemes. In addition, the mentioned schemes in [15–19] are all velocity-level solutions, such that they cannot intervene directly at the acceleration level.

With the continuous development of sensors and Internet of Things technology, robot applications have become very rich owing to information acquisition and processing. A sensor can transmit external information directly to the control center of the robot, which gives appropriate feedback to the information through specific intelligent algorithms. As a greatly important robot application, visual servoing technology drives the robot to respond accurately to the external visual scene in real time through the visual information collected by the vision sensor [21–23]. This technology is already being used in industrial production and robotic surgery [24, 25]. However, it is worth pointing out that the existing techniques [26–28] for solving the visual servoing problem often rely on the pseudoinverse method to converge the errors, which has achieved great results in both acceleration-level and velocity-level schemes. By means of proportional-derivative control, an acceleration command for visual servoing control is generated with excellent stability [26]. Moreover, an effective method to detect and compensate for faults in visual servoing systems is presented in [27], which is verified by simulation and experimental results. Based on the pseudoinverse operation of the Jacobian matrix, the robotic ball-catching task is implemented in [28]. This method takes advantage of the eye-in-hand construction to establish a motion capture system for locating fast-moving objects. However, a large number of investigations do not consider the existence of joint constraints and thus risk damaging the robot manipulators [21–24, 26–28]. Due to the physical limitations of the robot motors and structure, the control signals need to be kept within a reasonable range to maintain the normal operation of the robot manipulators.
To this end, this paper formulates the visual servoing problem as a quadratic programming scheme with equality and inequality constraints in consideration of physical constraints.

The rise of intelligent algorithms in recent years has solved many difficult problems in the electronic and engineering fields [29–31]. Numerous intelligent algorithms have been designed for powerful performance, such as noise suppression [32], simplified computation [33, 34], and predictive learning [35, 36]. Among the intelligent algorithms for solving the visual servoing of manipulators, the neural network method stands out due to its fast parallel processing performance and learning ability [37–41]. In [42], a recurrent neural network is constructed for the visual servoing issue to force the feature point of the manipulator to approach the designed target point. Then, the extended research [43] eliminates the pseudoinversion operation and equips the neural network with powerful robustness. In addition, as a common optimization method, the gradient descent method has made some progress in the design of robot control algorithms in recent years [44, 45]. It can be used to accurately locate and control the robot by minimizing the position error [46]. Based on the above research, we establish the visual servoing issue based on acceleration commands and transform it into a quadratic programming scheme solved by the neural network method. The contributions of this paper are summarized below:

(1) The proposed method regards the visual servoing problem as a constrained quadratic programming scheme with acceleration commands and meanwhile considers the joint constraints to ensure the safety of the manipulator.

(2) This paper proposes a gradient-based recurrent neural network (GRNN) for dealing with robot visual servoing via the gradient descent method and a compensation item.

(3) A simulation example and an illustrative experiment demonstrate the feasibility and superiority of the proposed method.

The remainder of this paper is organized as follows. Section 2 covers the preliminaries and the visual servoing kinematics. In Section 3, the visual servoing problem is transformed into a constrained quadratic programming scheme at the acceleration level, with the corresponding GRNN deduced. The theoretical analyses of the proposed method are presented by using the Lyapunov method in Section 4. Section 5 carries out a simulation example to demonstrate the feasibility of the proposed method. Section 6 compares the proposed scheme with existing approaches. In the end, Section 7 concludes the whole paper.

#### 2. Preliminaries

In this section, the visual servoing kinematics is introduced, which records the conversion relationship between the joint space and the image space.

Primarily, in consideration of an eye-in-hand vision system [28], i.e., an $n$-DOF manipulator with a camera attached to the end effector, the forward kinematics of the manipulator is given as follows:

$$r = f(\theta), \tag{1}$$

where $f(\cdot)$ describes the transformation relationship between the joint space and Cartesian space; $\theta \in \mathbb{R}^{n}$ represents the joint angle of the manipulator; and $r \in \mathbb{R}^{m}$ denotes the Cartesian coordinates of the end effector. The investigation of the visual servoing issue always takes into account both the position and posture of the end effector, and thus $r$ is set as a six-dimensional vector hereinafter ($m = 6$). Taking the derivative with respect to time of formula (1) leads to

$$\dot{r} = J(\theta)\dot{\theta}, \tag{2}$$

where $J(\theta) \in \mathbb{R}^{m \times n}$ stands for the robot Jacobian matrix, which is determined by the manipulator structure; $\dot{\theta}$ signifies the joint velocity of the manipulator; and $\dot{r}$ is the end-effector velocity containing angular velocity and translational velocity. In addition, the physical constraints, involving joint velocity $\dot{\theta}$ and joint acceleration $\ddot{\theta}$, to maintain the safe operation of the manipulator system are provided as below:

$$\dot{\theta}^{-} \leq \dot{\theta} \leq \dot{\theta}^{+}, \quad \ddot{\theta}^{-} \leq \ddot{\theta} \leq \ddot{\theta}^{+}, \tag{3}$$

with $\dot{\theta}^{+}$ and $\dot{\theta}^{-}$ being the upper and lower bounds of joint velocity and $\ddot{\theta}^{+}$ and $\ddot{\theta}^{-}$ denoting the upper and lower bounds of joint acceleration. As for the camera frame and image frame, the corresponding relationship is deduced by means of similar triangles and given as follows [27, 42]:

$$x = \lambda\frac{X_{c}}{Z_{c}}, \quad y = \lambda\frac{Y_{c}}{Z_{c}}, \tag{4}$$

of which $[x, y]^{\mathrm{T}}$ is a point coordinate in the image frame with the superscript $\mathrm{T}$ denoting the transpose of a matrix or a vector; $[X_{c}, Y_{c}, Z_{c}]^{\mathrm{T}}$ stands for the coordinate in the camera frame; and $\lambda$ denotes the focal length of the camera. Besides, in the image frame, point coordinates can be converted to pixel coordinates $p = [u, v]^{\mathrm{T}}$ by the following formula [43]:

$$u = \frac{x}{\rho_{u}} + u_{0}, \quad v = \frac{y}{\rho_{v}} + v_{0}, \tag{5}$$

where $[u_{0}, v_{0}]^{\mathrm{T}}$ stands for the designed original point and $\rho_{u}$ and $\rho_{v}$ are the pixel standard sizes. Furthermore, the relationship between the camera velocity, i.e., the end-effector velocity $\dot{r}$, and the pixel coordinate velocity $\dot{p}$ can be introduced as

$$\dot{p} = J_{\mathrm{img}}\dot{r}, \tag{6}$$

where $J_{\mathrm{img}} \in \mathbb{R}^{2 \times 6}$ denotes the image Jacobian matrix [47, 48] with its expression being

$$J_{\mathrm{img}} = \begin{bmatrix} -\dfrac{\lambda}{\rho_{u}Z_{c}} & 0 & \dfrac{x}{\rho_{u}Z_{c}} & \dfrac{xy}{\rho_{u}\lambda} & -\dfrac{\lambda^{2} + x^{2}}{\rho_{u}\lambda} & \dfrac{y}{\rho_{u}} \\ 0 & -\dfrac{\lambda}{\rho_{v}Z_{c}} & \dfrac{y}{\rho_{v}Z_{c}} & \dfrac{\lambda^{2} + y^{2}}{\rho_{v}\lambda} & -\dfrac{xy}{\rho_{v}\lambda} & -\dfrac{x}{\rho_{v}} \end{bmatrix}, \tag{7}$$

with

$$x = \rho_{u}\left(u - u_{0}\right), \quad y = \rho_{v}\left(v - v_{0}\right). \tag{8}$$
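To make the pinhole relations above concrete, the following sketch implements the similar-triangle projection, the pixel conversion, and the standard 2×6 image (interaction) Jacobian for a point feature. All numerical values (focal length, pixel sizes, principal point) and function names are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Hypothetical camera parameters (illustrative, not the paper's settings):
# focal length lam [m], pixel sizes rho_u, rho_v [m/px], principal point (u0, v0).
lam, rho_u, rho_v, u0, v0 = 0.008, 1e-5, 1e-5, 320.0, 240.0

def project(P_cam):
    """Similar-triangle (pinhole) projection of a camera-frame point
    [Xc, Yc, Zc] to pixel coordinates [u, v]."""
    Xc, Yc, Zc = P_cam
    x = lam * Xc / Zc                  # image-frame coordinates (on the sensor)
    y = lam * Yc / Zc
    return np.array([x / rho_u + u0,   # pixel coordinates
                     y / rho_v + v0])

def interaction_matrix(x, y, Zc):
    """Standard 2x6 image (interaction) Jacobian for a point feature in
    image coordinates, mapping the camera twist [vx, vy, vz, wx, wy, wz]
    to [xdot, ydot]; divide the rows by the pixel sizes for pixel rates."""
    return np.array([
        [-lam / Zc, 0.0, x / Zc, x * y / lam, -(lam**2 + x**2) / lam, y],
        [0.0, -lam / Zc, y / Zc, (lam**2 + y**2) / lam, -x * y / lam, -x],
    ])
```

The interaction matrix here follows the common textbook form for a point feature; the paper's exact expression may differ in its pixel scaling.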

Based on the above instructions, especially formula (2) and formula (6), it can be readily obtained that $\dot{p} = J_{\mathrm{img}}J(\theta)\dot{\theta}$, which involves the relationship between the joint space and the image space. To simplify the presentation, one designs

$$\mathcal{J} = J_{\mathrm{img}}J(\theta) \in \mathbb{R}^{2 \times n}, \quad \text{i.e.,} \quad \dot{p} = \mathcal{J}\dot{\theta}. \tag{9}$$

Furthermore, the kinematic relationship at the acceleration level is derived by taking the time derivative of (9) as

$$\ddot{p} = \mathcal{J}\ddot{\theta} + \dot{\mathcal{J}}\dot{\theta}, \tag{10}$$

where $\ddot{p}$ represents the acceleration of the feature point in the image frame and $\dot{\mathcal{J}}$ denotes the time derivative of $\mathcal{J}$.
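The product-rule structure of this acceleration-level relation can be verified numerically. The sketch below (a toy time-varying Jacobian and values of our choosing) checks that the finite-difference derivative of the image velocity matches the two-term sum:

```python
import numpy as np

# Numerical check (toy example, names ours) of the product rule behind the
# acceleration-level kinematics:
# d/dt (Jc @ theta_dot) = Jc @ theta_ddot + Jc_dot @ theta_dot.
def Jc(t):
    """A time-varying 2x2 'combined Jacobian' chosen arbitrarily."""
    return np.array([[np.cos(t), 1.0], [0.5, np.sin(t)]])

def Jc_dot(t):
    """Its analytic time derivative."""
    return np.array([[-np.sin(t), 0.0], [0.0, np.cos(t)]])

t, h = 0.3, 1e-6
theta_dot = np.array([0.7, -0.4])
theta_ddot = np.array([0.2, 0.1])

pdot_now = Jc(t) @ theta_dot
pdot_next = Jc(t + h) @ (theta_dot + h * theta_ddot)   # Euler-advanced state
fd = (pdot_next - pdot_now) / h                        # finite-difference d/dt
analytic = Jc(t) @ theta_ddot + Jc_dot(t) @ theta_dot  # product-rule value
```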

#### 3. Acceleration-Level IBVS Scheme and Its Solution

Robot visual servoing controls the robot manipulator so that it interacts with its surroundings according to visual information. This issue can be simplified to reaching a static point in the image frame by feeding back the image information. To this end, we turn the visual servoing problem into a constrained quadratic programming problem and design a neural network-based solver.

##### 3.1. Quadratic Programming Scheme with Constraints

Above all, the visual servoing problem is formulated at the acceleration level into the following quadratic programming scheme:

$$\text{minimize} \quad \frac{1}{2}\ddot{\theta}^{\mathrm{T}}\ddot{\theta}, \tag{11}$$

$$\text{subject to} \quad \ddot{p} = \mathcal{J}\ddot{\theta} + \dot{\mathcal{J}}\dot{\theta}, \tag{12}$$

$$\ddot{p} = -(\alpha + \beta)\dot{p} - \alpha\beta\left(p - p_{d}\right), \tag{13}$$

$$\zeta^{-} \leq \ddot{\theta} \leq \zeta^{+}, \tag{14}$$

where $p_{d}$ denotes the desired feature point, which is a designed constant vector; $\alpha > 0$ and $\beta > 0$ are design parameters; $\mathcal{J}$ is the combined Jacobian designed in Section 2; and (14) is an inequality constraint corresponding to the physical limit (3) with $\zeta^{-}$ and $\zeta^{+}$ devised as

$$\zeta^{-} = \max\left\{\ddot{\theta}^{-}, \kappa\left(\dot{\theta}^{-} - \dot{\theta}\right)\right\}, \quad \zeta^{+} = \min\left\{\ddot{\theta}^{+}, \kappa\left(\dot{\theta}^{+} - \dot{\theta}\right)\right\}, \tag{15}$$

where $\kappa > 0$ stands for the design parameter. Via (15), the physical constraints on joint acceleration and joint velocity can be considered and kept within bounds simultaneously [15]. In this regard, take the upper limit of the physical constraint as an example. For the joint velocity, when the joint velocity $\dot{\theta}$ approaches the upper bound of the velocity-level joint constraint $\dot{\theta}^{+}$, the term $\kappa(\dot{\theta}^{+} - \dot{\theta})$ gets small and even close to zero. Afterwards, $\zeta^{+}$ becomes tiny or even zero, so that the joint velocity stops growing and stays within the joint constraints. Otherwise, the upper bound of the acceleration-level joint constraint $\ddot{\theta}^{+}$ is activated to realize the acceleration-level joint constraint. Similarly, $\zeta^{-}$ is able to realize the velocity-level joint constraint and the acceleration-level joint constraint simultaneously.
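The bound construction described above can be sketched as follows; `kappa` plays the role of the design parameter, and all names and values are ours:

```python
import numpy as np

def accel_bounds(dtheta, ddq_min, ddq_max, dq_min, dq_max, kappa=20.0):
    """Fold velocity-level limits into acceleration-level bounds.
    kappa is the design parameter; all symbols here are illustrative.
    As a joint velocity nears its upper limit dq_max, the candidate bound
    kappa * (dq_max - dtheta) shrinks toward zero, so the joint stops
    accelerating before the velocity limit is violated; far from the limit,
    the plain acceleration bound ddq_max is the active one."""
    lo = np.maximum(ddq_min, kappa * (dq_min - dtheta))
    hi = np.minimum(ddq_max, kappa * (dq_max - dtheta))
    return lo, hi
```

For example, with velocity limits of ±1 rad/s, acceleration limits of ±5 rad/s², and a joint already moving at 0.9 rad/s, the upper bound tightens from 5 to 2 rad/s².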

##### 3.2. Neural Network Solution

Differing from the traditional methods of dealing with equality constraints and inequality constraints, the gradient descent method [49] is exploited to derive the solution to the quadratic programming scheme (11)–(14). Design an error function $e = p - p_{d}$ to start the derivation. Utilizing the neural dynamic formula [50] $\dot{\eta} = -\alpha\eta$ and $\eta = \dot{e} + \beta e$, one can get

$$\ddot{e} + (\alpha + \beta)\dot{e} + \alpha\beta e = 0, \tag{16}$$

which, with the aid of the acceleration-level kinematics (10), can be arranged and rewritten in the form of a two-norm as follows:

$$\varepsilon = \frac{1}{2}\left\|\mathcal{J}\ddot{\theta} + h\right\|_{2}^{2}, \tag{17}$$

where $h = \dot{\mathcal{J}}\dot{\theta} + (\alpha + \beta)\dot{e} + \alpha\beta e$ is introduced for brevity.

Given the gradient descent formula [51]

$$\dot{y} = -\gamma\frac{\partial\varepsilon}{\partial y}, \tag{18}$$

with $\gamma > 0$, it would be readily deduced that

$$\dot{\ddot{\theta}} = -\gamma\mathcal{J}^{\mathrm{T}}\left(\mathcal{J}\ddot{\theta} + h\right). \tag{19}$$
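The essence of the gradient descent step is a flow that drives a residual norm down using only a matrix transpose. A minimal discrete-time sketch (our construction, with a toy linear system) illustrating convergence to the least-squares solution:

```python
import numpy as np

def gradient_flow_solve(A, b, gamma=0.2, steps=500):
    """Discrete-time gradient flow u_{k+1} = u_k - gamma * A^T (A u_k - b).
    It drives the residual norm ||A u - b||^2 down using only the transpose
    of A -- no pseudoinverse -- which is the step underlying the derivation
    above (names and values here are ours)."""
    u = np.zeros(A.shape[1])
    for _ in range(steps):
        u -= gamma * A.T @ (A @ u - b)
    return u

# Toy consistent system: the flow converges to the exact solution [2, 3].
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([4.0, 3.0])
u = gradient_flow_solve(A, b)
```

The step size `gamma` must be small enough relative to the largest eigenvalue of `A.T @ A` for the discrete flow to converge; the continuous-time neural network avoids this restriction.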

Then, a compensation item $\omega$ is presented to make up for the lagging error in equation (19) as below:

$$\dot{\ddot{\theta}} = -\gamma\mathcal{J}^{\mathrm{T}}\left(\mathcal{J}\ddot{\theta} + h\right) + \omega, \tag{20}$$

where, as in (17), $h = \dot{\mathcal{J}}\dot{\theta} + (\alpha + \beta)\dot{e} + \alpha\beta e$.

Via deliberating the final desired stable state, i.e., $\mathcal{J}\ddot{\theta} + h = 0$, one can simply get the expression of $\omega$ referring to the derivation below. Multiplying both sides of equation (20) by $\mathcal{J}$, one gains

$$\mathcal{J}\dot{\ddot{\theta}} = -\gamma\mathcal{J}\mathcal{J}^{\mathrm{T}}\left(\mathcal{J}\ddot{\theta} + h\right) + \mathcal{J}\omega. \tag{21}$$

Set $\mathcal{J}\ddot{\theta} + h = 0$, and it can be obtained that

$$\mathcal{J}\dot{\ddot{\theta}} = \mathcal{J}\omega. \tag{22}$$

Then, taking the time derivative of $\mathcal{J}\ddot{\theta} + h = 0$ leads to

$$\mathcal{J}\dot{\ddot{\theta}} + \dot{\mathcal{J}}\ddot{\theta} + \dot{h} = 0. \tag{23}$$

Comparing the two formulas above, one has

$$\mathcal{J}\omega = -\dot{\mathcal{J}}\ddot{\theta} - \dot{h}. \tag{24}$$

Hence, it can be easily got that

$$\omega = -\mathcal{J}^{+}\left(\dot{\mathcal{J}}\ddot{\theta} + \dot{h}\right), \tag{25}$$

with superscript $+$ being the pseudoinverse operator of a matrix, i.e., $\mathcal{J}^{+} = \mathcal{J}^{\mathrm{T}}(\mathcal{J}\mathcal{J}^{\mathrm{T}})^{-1}$. Consequently, the GRNN solver is structured for solving the quadratic programming scheme (11)–(14) as follows:

$$\dot{\ddot{\theta}} = -\gamma\mathcal{J}^{\mathrm{T}}\Phi\left(\mathcal{J}\ddot{\theta} + h\right) + \omega, \quad \ddot{\theta} \in \arg\min_{\zeta^{-} \leq \vartheta \leq \zeta^{+}}\left\|\vartheta - \ddot{\theta}\right\|_{2}, \tag{26}$$

where $\Phi(\cdot)$ can be regarded as a bounded activation function and the usage of $\arg\min$ can be referred to [52, 53], which is equivalent to the inequality constraint (14). As Figure 1 depicts, the visual servoing scheme (11)–(14) aided with the GRNN solver (26) integrates the robot frame and the image frame and can be regarded as a restricted online acceleration controller. For GRNN (26) and scheme (11)–(14), the following correspondence is given. Since GRNN (26) originates from the error function (17), the gradient descent formula is designed to reduce the image error, thus ultimately achieving equality constraint (13). In the next place, the output control command is established at the acceleration level, which corresponds to the acceleration-level kinematics formula (12). Note that the compensation item $\omega$ is the pseudoinverse solution of the system function in the stable state, i.e., the minimization of joint acceleration, which is equivalent to minimizing objective function (11). As for joint constraint (14), introducing the $\arg\min$ projection is able to impose restrictions on joint velocity and joint acceleration. In short, the proposed GRNN solver (26) corresponds to the quadratic programming scheme (11)–(14).
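To illustrate the flavor of such a transpose-based, bound-respecting acceleration controller, here is a self-contained toy simulation (our construction, not the paper's exact GRNN): a constant combined Jacobian, a double-integrator joint model, and a clipped Jacobian-transpose PD law on the pixel error:

```python
import numpy as np

# Toy closed loop (our construction, not the paper's exact GRNN): a constant
# combined Jacobian maps joint angles to image coordinates, the joints follow
# a double-integrator model, and the acceleration command is a clipped
# Jacobian-transpose PD law on the pixel error.
J = np.array([[1.0, 0.2], [0.1, 0.8]])     # toy combined Jacobian
p_des = np.array([0.5, -0.3])              # desired (static) feature point
theta = np.zeros(2)                        # joint angles
dtheta = np.zeros(2)                       # joint velocities
k1, k2, dt = 25.0, 10.0, 1e-3              # PD gains and integration step
ddq_max = 5.0                              # acceleration bound (toy value)

for _ in range(8000):                      # 8 s of simulated time
    e = J @ theta - p_des                  # pixel error
    de = J @ dtheta                        # pixel error rate
    ddtheta = np.clip(-J.T @ (k1 * e + k2 * de), -ddq_max, ddq_max)
    dtheta = dtheta + dt * ddtheta         # explicit Euler integration
    theta = theta + dt * dtheta
```

In this toy setting, the pixel error decays to zero while the commanded acceleration never exceeds the bound; the paper's GRNN additionally handles the time-varying Jacobian via the compensation item.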

Remarks. Compared with the existing visual servoing technologies, the innovations of this paper are worth emphasizing as follows. At the construction level of scheme (11)–(14), most of the previous strategies on visual servoing are controlled at the joint velocity level, and few are controlled and driven by joint acceleration. In addition, none of the existing acceleration-level visual servoing schemes takes joint limits into account, which are considered in the quadratic programming scheme (11)–(14). From the perspective of the intelligent algorithm, a majority of the existing techniques apply the pseudoinverse method to deal directly with the errors, which incurs additional computational overhead. In contrast, GRNN (26) is deduced according to the gradient descent method and a compensation term, which provides a novel approach to dealing with the visual servoing problem.

#### 4. Stability Proof

In this section, the stability proof is provided to demonstrate the feasibility and effectiveness of the proposed method (26) in handling the visual servoing issue. The relevant theorem is given as follows.

Theorem 1. *The error synthesized by GRNN (26) can approach zero globally, provided that .*

*Proof.* Declare that the setting of the precondition has two core functions. The first is to determine the minimum joint constraints, thus ensuring the safe operation of the manipulator. It is easy to imagine that forcing the joints to remain within the constraints may lead to an increase of the error, as reported in [54]. The second point worth mentioning is that the precondition is necessary for the proper derivation of the theorem. According to (10), one can get

In the light of the above definitions, equation (27) can be rearranged as

Let $V$ stand for a Lyapunov candidate. Therefore, calculating its time derivative results in

Consider the basic inequality relation between inner products and norms. We simply devise the corresponding auxiliary terms and get

Expanding the left side of the above equation generates

Observing the two formulas above, it can be easily gained that

Substituting equation (29) into equation (32) deduces

Evidently, one has

Recalling the neural dynamic formula, it is evident that

with the design parameter $\gamma > 0$ and $\lambda_{\min}$ denoting the minimum eigenvalue of the positive definite matrix. Therefore, it can be naturally concluded that the error is of great convergence. Referring to the LaSalle invariance principle [55], we derive the stable state and get the following two conditions:

Given the precondition, the solutions to the above two conditions can be gained. In this regard, a conclusion can be readily drawn that the error is convergent to zero globally. The proof is thus completed (Figure 2).
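For reference, a minimal instance of the exponential-convergence pattern invoked here, written in our own notation (not necessarily the paper's exact quantities): if an error signal $\eta$ obeys the neural dynamic formula $\dot{\eta} = -\alpha\eta$ with $\alpha > 0$, a quadratic Lyapunov candidate yields

```latex
V(t) = \tfrac{1}{2}\,\eta^{\mathrm{T}}\eta, \qquad
\dot{V} = \eta^{\mathrm{T}}\dot{\eta} = -\alpha\,\eta^{\mathrm{T}}\eta = -2\alpha V
\;\Longrightarrow\; V(t) = V(0)\,e^{-2\alpha t},
\qquad \|\eta(t)\| = \|\eta(0)\|\,e^{-\alpha t} \longrightarrow 0,
```

so the tracked error decays exponentially, matching the exponential pixel-error convergence claimed in the abstract.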

#### 5. Simulation Example

This section provides a simulation example to demonstrate the performance of GRNN (26) when confronted with the robot visual servoing issue. Specifically, the PUMA 560 manipulator (6-DOF) is modeled with a camera attached to its end effector to track the desired static point in the image frame. In addition, the structural information of the PUMA 560 manipulator can be found in the existing literature [43], with a photo of the PUMA 560 shown in Figure 2. It is worth pointing out that, when considering only one desired feature point, the kinematic control of the PUMA 560 manipulator amounts to using the 6-dimensional joint space to control the 2-dimensional image space, so the PUMA 560 manipulator can approximately be treated as a redundant manipulator.

In the first place, the simulation setting and the neural network parameters are introduced. Simply put, the parameters of the neural network and camera system are set as , , , , , , , and . As to the state and physical constraints of the PUMA 560 manipulator, the states are chosen as , the initial coordinate of feature point pixel, , and .

The simulation results are provided in Figure 3. As depicted in Figure 3, the PUMA 560 manipulator successfully reaches the desired feature point driven by GRNN (26). The errors in Figure 3(b) and Figure 3(c) converge to zero within 1 s. With regard to joint information, Figures 3(d) through 3(f) record the joint acceleration, joint velocity, and joint angle during the simulation, respectively. It is worth emphasizing that the joint acceleration and joint velocity are maintained within the designed physical constraints, which ensures the safe execution of the task. Overall, the above results indicate the feasibility and efficiency of the proposed GRNN (26) in handling the visual servoing issue.

**(a)**

**(b)**

**(c)**

**(d)**

**(e)**

**(f)**

To demonstrate the superiority of the proposed method, the traditional pseudoinverse method is employed to deal with the visual servoing problem, with the results provided in Figure 4. The control law adopted by the traditional pseudoinverse method is generalized as

$$\ddot{\theta} = \mathcal{J}^{+}\left(-k_{1}e - k_{2}\dot{e} - \dot{\mathcal{J}}\dot{\theta}\right),$$

with $e = p - p_{d}$ and $k_{1}, k_{2} > 0$ being feedback gains. It is worth pointing out that investigations of visual servoing based on the pseudoinversion operation of the Jacobian matrix are common and effective in the existing methods [21, 26, 28]. Nevertheless, the pseudoinversion operation of a matrix brings more computational complexity, and the conventional pseudoinverse methods do not take joint limits into account, which are regarded as deficiencies of the existing methods [21, 26, 28]. As depicted in Figure 4(a), the error quickly converges to zero within 1.5 s, i.e., the manipulator successfully tracks the desired feature point. However, Figure 4(b) indicates that, due to the large value of the initial error, the generated initial accelerations even exceed 10 rad/s², which would damage the PUMA 560 manipulator. On the contrary, the proposed method (26) limits the acceleration within the physical constraints, which emphasizes the superiority of the proposed method (26).
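The acceleration-spike issue noted above is easy to reproduce numerically. In the sketch below (toy Jacobian, error, and gains, all our numbers), the pseudoinverse law maps a large initial pixel error to joint accelerations far beyond a typical bound, whereas simple saturation keeps a clipped variant within limits:

```python
import numpy as np

# Toy reproduction (our numbers) of the pseudoinverse method's acceleration spike.
J = np.array([[1.0, 0.2], [0.1, 0.8]])       # toy combined Jacobian
e = np.array([200.0, -150.0])                # large initial pixel error
de = np.zeros(2)                             # initially at rest
k1, k2 = 25.0, 10.0                          # feedback gains (illustrative)

# Unconstrained pseudoinverse command: scales directly with the error.
ddtheta_pinv = np.linalg.pinv(J) @ (-k1 * e - k2 * de)

# A bound-respecting variant simply saturates the command.
ddq_max = 5.0                                # toy acceleration bound
ddtheta_clipped = np.clip(ddtheta_pinv, -ddq_max, ddq_max)
```

Mere clipping, of course, sacrifices the direction of the commanded acceleration; the constrained scheme instead keeps the bounds inside the optimization itself.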

**(a)**

**(b)**

Beyond that, an illustrative experiment is conducted on a UR5 manipulator (6-DOF) [25] with a visual sensor installed on its end effector, assisted by the Virtual Robot Experimentation Platform (V-REP). The experimental results plotted in Figure 5 are synthesized by the proposed GRNN (26). Note that in Figure 5(a), the measured object is regarded as the desired point, which can be captured by the visual sensor, and that the center of the sensor view is the feature point of the robot visual system. By constantly transmitting the error information to GRNN (26), the visual servoing issue can be solved with the feature point approaching the desired point, as described in Figures 5(b) and 5(c), which implies the validity of the proposed GRNN (26).

**(a)**

**(b)**

**(c)**

#### 6. Comparisons

In this section, some existing visual servoing approaches [21, 25–27, 42, 43, 48] are assembled in Table 1 to highlight the superiority of the proposed quadratic programming scheme (11)–(14). The following points can be determined. First, a majority of the existing techniques [21, 26, 27, 48] utilize the pseudoinverse method to carry out the research. Second, these approaches often take no account of joint physical constraints, which may lead to large generated control signals and even cause damage to the manipulator; moreover, the pseudoinverse operations involved are computationally onerous. Third, the present research on visual servoing at the acceleration level is relatively lacking [21, 26]. Therefore, in terms of joint acceleration, the quadratic programming scheme (11)–(14) avoids the pseudoinverse operation by utilizing the matrix transpose operation and meanwhile takes the joint constraints into account, which demonstrates the superiority of the proposed scheme (Table 1).

#### 7. Conclusion

In this paper, the visual servoing issue has been formulated as a constrained quadratic programming scheme at the acceleration level with physical constraints considered. Then, a GRNN has been proposed via the gradient descent method and a compensation term, with the stability analyses provided. After that, simulation examples have been carried out to demonstrate the correctness of the theoretical analyses and the validity of the proposed method. Note that the proposed method resolves the visual servoing issue at the acceleration level and also considers the joint constraints of the manipulator to guarantee its safe operation. As for further research directions, the authors are going to investigate uncertain conditions and optimization in the visual system, such as noise suppression [56], Jacobian estimation [57], and manipulability optimization [58].

#### Data Availability

The data in the paper are not made public online.

#### Conflicts of Interest

The authors declare that they have no conflicts of interest.

#### Acknowledgments

This work was supported in part by the Guangzhou Sport University Innovation and Strengthen Project under Grant 5200080589, in part by the Ministry of Education Industry-Academic Cooperation Collaborative Education Program of China under Grant 201901007048, in part by the Research and Development Foundation of Nanchong (China) under Grant 20YFZJ0018, and in part by the Fundamental Research Funds for the Central Universities under Grant lzujbky-2019-89.