Complexity
Volume 2017, Article ID 5860649, 14 pages
https://doi.org/10.1155/2017/5860649
Research Article

Dynamic Learning from Adaptive Neural Control of Uncertain Robots with Guaranteed Full-State Tracking Precision

School of Automation Science and Engineering, Guangzhou Key Laboratory of Brain Computer Interaction and Applications, South China University of Technology, Guangzhou 510641, China

Correspondence should be addressed to Min Wang; auwangmin@scut.edu.cn

Received 21 March 2017; Accepted 30 April 2017; Published 14 August 2017

Academic Editor: Yanan Li

Copyright © 2017 Min Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A dynamic learning method is developed for an uncertain n-link robot with unknown system dynamics, achieving predefined performance attributes on the link angular position and velocity tracking errors. For a known nonsingular initial robotic condition, performance functions and unconstrained transformation errors are employed to prevent the violation of the full-state tracking error constraints. By combining two independent Lyapunov functions and a radial basis function (RBF) neural network (NN) approximator, a novel and simple adaptive neural control scheme is proposed for the dynamics of the unconstrained transformation errors, which guarantees the uniform ultimate boundedness of all the signals in the closed-loop system. In the steady-state control process, the RBF NNs are verified to satisfy the partial persistent excitation (PE) condition. Subsequently, an appropriate state transformation is adopted to achieve the accurate convergence of the neural weight estimates. The corresponding experienced knowledge of the unknown robotic dynamics is stored in NNs with constant neural weight values. Using the stored knowledge, a static neural learning controller is developed to improve the full-state tracking performance. A comparative simulation study on a 2-link robot illustrates the effectiveness of the proposed scheme.

1. Introduction

In the past decades, the force/position tracking control problem of robots has attracted wide attention in both theory and applications [1, 2]. In the early stage of robotic tracking control, the system model was usually assumed to be accurately known, and corresponding model-based control methods were proposed in [3, 4]. With the diversity of robot working environments and the complexity of robot structures, force/position tracking control has been studied under different kinds of uncertainties. Nowadays, ignoring uncertainties to simplify the control design may cause large steady-state errors and/or a poor transient response [5]. For the case of parametric uncertainties, adaptive control methods have been presented in [6, 7] to make robots adapt to changing control environments. In order to enhance system robustness against parametric uncertainties in the presence of external disturbances, sliding mode control [8, 9] has been proposed to obtain the desired robotic tracking control performance. Owing to the universal approximation property [10–17], a great number of intelligent control schemes, such as adaptive neural/fuzzy control, have been developed for controlling robotic systems with uncertain nonlinearities [18–22].

Although intelligent control of robotic systems has attracted considerable attention in the past few years, relatively few robot control methods achieve human-like performance in a dynamic and uncertain environment. It is well known that intelligent control was initially inspired by the learning and control abilities of human beings, so intelligent control should at least possess the aforementioned properties of “learning by doing” and “doing with the learned knowledge” [23–26]. However, most existing intelligent control schemes can only ensure the stability of closed-loop systems, without achieving the acquisition, storage, and utilization of knowledge of the unknown system dynamics. This means that existing intelligent control schemes do not achieve accurate convergence of the estimated parameters, which usually requires guaranteeing the exponential stability of the derived closed-loop system. However, it is extremely difficult to verify the exponential stability of the derived closed-loop system for uncertain nonlinear systems. Recently, a deterministic learning method was proposed in [27] using RBF NNs for a second-order Brunovsky system, where the derived closed-loop system was described by a class of linear time-varying systems. By verifying that the RBF NNs satisfied the persistent excitation (PE) condition, the convergence of partial neural weights and accurate neural approximation of the unknown system dynamics were guaranteed in [27] because of the exponential convergence of the derived linear time-varying closed-loop systems. The deterministic learning method was further extended to nth-order Brunovsky systems with an unknown affine/nonaffine term [28, 29], where the derivative of the unknown affine term was assumed to be bounded. By combining backstepping with a system decomposition strategy, an elegant dynamic learning method was proposed to cope with the learning and control problem of third-order strict-feedback systems [30].
By incorporating dynamic surface control [31], the result in [30] was further extended to nth-order strict-feedback systems. The deterministic learning method has also been applied in many physical systems, such as marine surface vessels [32, 33] and robot manipulators [34].

To date, deterministic learning methods have mainly been used to solve the learning and control problem for single-input single-output nonlinear systems without any constraint. In practice, most physical systems are subject to various constraints, such as output or state constraints and tracking performance constraints. The violation of these constraints may cause severe performance degradation, safety problems, or system damage [35]. Therefore, it is of great importance to solve the control and learning problem of constrained systems. Based on Lyapunov stability theory, a barrier Lyapunov function (BLF) method has been presented to handle output constraints for strict-feedback nonlinear systems [36], output feedback nonlinear systems [37], flexible systems [38], and robotic manipulators [35, 39]. The BLF-based methods have also been extended to handle state constraints [40–42]. Although the aforementioned results on output or state constraints can guarantee that the system outputs or states converge to a predefined bounded set, the predefined performance requirements on the convergence rate, maximum overshoot, and steady-state error have not been fully studied. The predefined performance issue is an extremely challenging problem. Recently, an adaptive neural prescribed performance controller was proposed in [43, 44] for feedback linearizable nonlinear systems by means of transformation functions. The proposed method was also developed to deal with the constrained output tracking control problem in many applications, such as robotic systems [45], nonlinear servo mechanisms [46], marine surface vessels [33], nonlinear stochastic large-scale systems with actuator faults [47], and switched nonlinear systems [48]. To handle partial tracking error constraints, a fuzzy dynamic surface control design was developed in [49, 50] for a class of strict-feedback nonlinear systems by transforming the state tracking errors into new virtual variables.
However, the existing control schemes, such as [35–51], can only guarantee the stability of closed-loop systems with different constraints, but are not capable of learning the unknown system dynamics. The main reason is that the derived closed-loop error system is extremely complex, so that its exponential convergence is difficult to verify with existing stability analysis tools. To solve this problem, a neural learning control scheme with an output tracking error constraint was presented in [52, 53] for a class of nonlinear systems. However, the methods proposed in [52, 53] cannot be adopted to deal with the dynamic learning problem for multi-input multi-output nonlinear systems with full-state tracking error constraints.

Based on the above discussions, this paper proposes a novel dynamic learning method for a multi-input multi-output n-link robot with full-state tracking error constraints under a mild assumption. To prevent the violation of the full-state tracking error constraints, performance functions are first introduced to characterize the transient and steady-state performance of the full-state tracking errors. Then, using a nonlinear transformation method, the constrained tracking control problem is effectively transformed into the stabilization problem of equivalent unconstrained transformation error systems. By combining backstepping and Lyapunov stability, a novel adaptive neural control scheme is proposed to guarantee the uniform ultimate boundedness of all closed-loop signals and the prescribed full-state tracking error performance. It should be pointed out that the proposed control scheme differs from the traditional backstepping design. In our control design, the correlative interconnection term cannot be compensated in the next step of backstepping because of the full-state tracking error constraints. To overcome this difficulty, two independent Lyapunov functions are constructed, one in each step of the backstepping design. Based on these independent Lyapunov functions, the unconstrained transformation error of the link angular velocity tracking error is first proved to be bounded, and the corresponding link angular velocity tracking error is further proved to satisfy the prescribed performance. Invoking the boundedness of the link angular velocity tracking error, the link angular position tracking error is then derived backward to satisfy the prescribed performance. By means of the tracking convergence in the steady-state control process, the regression subvector consisting of the RBFs along the recurrent tracking orbit satisfies the partial PE condition.
Subsequently, an appropriate state transformation is introduced to transform the closed-loop system into a linear time-varying (LTV) system with small perturbation terms. Using the perturbation theory of LTV systems, the knowledge of the closed-loop robotic dynamics can be accurately stored by RBF NNs with constant weight values. The stored knowledge can then be reused to develop a static neural learning controller to achieve better control performance with smaller transient-state tracking errors, smaller control gains, and less computational time.

The rest of this paper is organized as follows. Section 2 introduces the system formulation, the full-state tracking error transformation, and some useful preliminaries about RBF networks. In Section 3, a novel adaptive neural control design is proposed for rigid robotic manipulators with constrained full-state tracking performance. Neural learning control is developed in Section 4, which achieves the acquisition, storage, and utilization of knowledge of the unknown robotic dynamics. In Section 5, simulation studies on a 2-link robotic system are given to show the effectiveness of the proposed method. Section 6 concludes the paper.

2. Problem Formulation and Preliminaries

2.1. Uncertain Robotic System

The dynamics of an n-link robotic system are described in the following Lagrange form [1]:

$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) + F(\dot{q}) + \tau_d = \tau$, (1)

where $q$, $\dot{q}$, $\ddot{q} \in R^n$ are the angular position, velocity, and acceleration, respectively, $\tau \in R^n$ refers to the input torque, $M(q) \in R^{n \times n}$ is an unknown symmetric positive definite inertia matrix, $C(q,\dot{q}) \in R^{n \times n}$ denotes unknown centripetal and Coriolis torques, $G(q) \in R^n$ is an unknown gravitational force vector, $F(\dot{q}) \in R^n$ is an unknown friction vector, and $\tau_d \in R^n$ denotes unknown disturbances.

Property 1 (see [54]). The matrix $\dot{M}(q) - 2C(q,\dot{q})$ is skew-symmetric.

Assumption 1. The unknown external disturbance $\tau_d$ is bounded; that is, there exists a constant $\bar{d} > 0$ such that $\|\tau_d\| \le \bar{d}$.

In this paper, we choose as a recurrent reference trajectory of the angular position , which is generated by the following reference model:where is the state vector of the reference model, which is assumed to be recurrent, and is a known smooth function. The reference orbit from the given initial condition is defined as . In this paper, is assumed to be a recurrent motion.
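As an illustration, a recurrent reference orbit of this kind can be generated numerically by integrating the reference model. The sketch below assumes a harmonic-oscillator reference model purely for concreteness; the paper's actual reference dynamics are not reproduced in this copy:

```python
import numpy as np

def reference_model(x):
    """Assumed reference model x_d' = f_d(x_d): a harmonic oscillator,
    which generates a recurrent (periodic) reference orbit."""
    return np.array([x[1], -x[0]])

def generate_orbit(x0, dt=0.01, steps=1000):
    """Integrate the reference model with 4th-order Runge-Kutta,
    returning the sampled reference orbit."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        x = xs[-1]
        k1 = reference_model(x)
        k2 = reference_model(x + 0.5 * dt * k1)
        k3 = reference_model(x + 0.5 * dt * k2)
        k4 = reference_model(x + dt * k3)
        xs.append(x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0)
    return np.array(xs)
```

Recurrence is easy to check: integrating over one period (here about 2π seconds) returns the state to a small neighborhood of the initial condition.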

2.2. Full-State Tracking Constraints

Define the full-state tracking errors as and , where are continuously differentiable virtual control signals. To guarantee the prescribed transient and steady-state bounds of the full-state tracking errors, and need to satisfy the following predefined condition:where , , and are positive design constants and is a smooth, strictly positive, bounded, and decreasing function, called a performance function, which satisfies . In this paper, is chosen as the following exponential performance function:where and are positive design constants.

For any given initial condition , the design constants and can be chosen appropriately such that satisfies the predefined condition (3). From (3) and (4), different selections of the design parameters and yield different performance requirements on the tracking error .
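For concreteness, the exponential performance function and the corresponding constraint check can be sketched as follows; the parameter values (initial bound 2.0, steady-state bound 0.05, decay rate 1.5) are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def perf_fn(t, rho0=2.0, rho_inf=0.05, kappa=1.5):
    """Exponential performance function (assumed standard form):
    rho(t) = (rho0 - rho_inf) * exp(-kappa * t) + rho_inf,
    which is smooth, strictly positive, bounded, and decreasing."""
    return (rho0 - rho_inf) * np.exp(-kappa * t) + rho_inf

def within_bounds(e, t, delta_lo=1.0, delta_hi=1.0):
    """Prescribed-performance constraint on a tracking error e(t):
    -delta_lo * rho(t) < e(t) < delta_hi * rho(t)."""
    rho = perf_fn(t)
    return -delta_lo * rho < e < delta_hi * rho
```

Tightening rho_inf tightens the allowed steady-state error, while kappa lower-bounds the convergence rate, which is exactly how the design constants encode the performance requirements.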

2.3. RBF Neural Networks

The RBF NNs can be described by , where are the NN input variables with being the NN input dimension and being a compact set, is the weight vector, is the NN node number, and is the radial basis function vector. The RBF neural networks have the following desirable properties.

(1) Universal Approximation

Lemma 2 (see [55]). With a sufficiently large node number , the RBF NNs can approximate any smooth function over a compact set to any arbitrary accuracy aswhere is the ideal weight vector of and is an arbitrarily small approximation error which satisfies .

(2) Spatially Localized Approximation Property. It should be noted that the radial basis function satisfies when . This property shows that the network output is only locally affected by each basis function. Therefore, any smooth function over a compact set can be approximated using a limited number of neurons located in a local region along the bounded input trajectory :where is the subvector of composed of the RBFs that are close to the trajectory , is the corresponding subvector of , denotes the region far away from the trajectory , and the resulting approximation error satisfies . Therefore, is close to , which is an arbitrarily small value.
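The localized approximation property can be illustrated with a small Gaussian RBF sketch (generic notation; the centers and width below are illustrative choices):

```python
import numpy as np

def rbf_vector(z, centers, eta=1.0):
    """Gaussian regressor s(z): entry i is exp(-||z - c_i||^2 / (2 eta^2)),
    which decays rapidly as z moves away from center c_i."""
    d2 = np.sum((centers - np.atleast_1d(z)) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * eta ** 2))

def nn_output(W, z, centers, eta=1.0):
    """RBF network output W^T s(z)."""
    return W.T @ rbf_vector(z, centers, eta)

# Locality: a center far from the input contributes almost nothing,
# so only neurons near the input trajectory matter.
centers = np.array([[0.0], [5.0]])
s = rbf_vector(0.1, centers, eta=1.0)  # near the first center only
```

Here the activation of the faraway center is negligible, which is why a trajectory excites, and trains, only the neurons in its neighborhood.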

(3) Partial PE Condition. Persistent excitation plays a key role in the accurate convergence of the neural weight estimates. To show clearly that RBF NNs satisfy the PE property, a definition of the PE condition is given as follows.

Definition 3 (see [55]). A continuous, uniformly bounded, vector-valued function is said to satisfy the persistent excitation condition if there exist positive constants , , and such that holds for every constant vector , where is a positive, σ-finite Borel measure on .

Lemma 4 (see [27]). Consider any recurrent orbit , remaining in a compact set with . Then, for the RBF network with centers placed on a regular lattice, which is large enough to cover the compact set , the regressor subvector in (6) (rather than the entire regressor vector ) is persistently exciting.
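Lemma 4 can be checked numerically: along a recurrent orbit, the time-averaged Gram matrix of the excited RBF subvector is positive definite, while faraway lattice neurons are barely excited. The sketch below uses illustrative choices (a 1-D cosine orbit, lattice spacing 0.5, width 0.4):

```python
import numpy as np

# Recurrent input orbit z(t) = cos(t), sampled over one period
ts = np.linspace(0.0, 2.0 * np.pi, 400)
orbit = np.cos(ts)

# Regular lattice of RBF centers covering a compact set around the orbit
centers = np.arange(-2.0, 2.01, 0.5)
eta = 0.4

# Activation matrix: row t holds the full regressor s(z(t))
S = np.exp(-(orbit[:, None] - centers[None, :]) ** 2 / (2.0 * eta ** 2))

# s_zeta: the subvector of neurons actually excited along the orbit
active = S.max(axis=0) > 0.5
# Time average of s_zeta s_zeta^T; positive definiteness of this matrix
# is a discrete analogue of the partial PE condition
gram = S[:, active].T @ S[:, active] / len(ts)
lam_min = float(np.linalg.eigvalsh(gram).min())
```

Only the centers inside the orbit's range [-1, 1] are excited; the entire regressor, including the faraway centers, is not persistently exciting, which is exactly the "subvector rather than the entire regressor vector" statement of the lemma.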

3. Adaptive Neural Control with Full-State Tracking Error Constraints

In this section, a novel stable adaptive neural tracking control scheme is developed for the system (1) with the full-state tracking error constraints (3), using nonlinear error transformations, independent Lyapunov functions, and backstepping. The proposed control scheme guarantees that all the signals in the closed-loop system are uniformly ultimately bounded and that the full-state tracking errors satisfy the prescribed performance (3).

In order to transform the constrained tracking control problem (3) into an equivalent unconstrained one, a performance transformation method is introduced as follows:where is a smooth and strictly increasing function and satisfies . In this paper, the transformation function is constructed as follows:

Owing to the properties and , the inverse transformation of exists and is well defined as follows:From (3) and (10), if is verified to be bounded, then satisfies the predefined error performance (3). Differentiating yieldswhere . Next, define the transformed unconstrained error vector . It follows from (11) thatwhere and . Noting that and , and using system (1), the dynamics of the transformed error vector can be rewritten aswhere is an unknown smooth function vector and is a computable variable. Next, a control scheme is developed for the system (13)-(14) based on backstepping.
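The error-transformation machinery can be sketched as follows. The paper's exact transformation function is not reproduced in this copy; a common choice with the stated properties (smooth, strictly increasing, with a well-defined inverse) is the symmetric log-ratio map, assumed here purely for illustration:

```python
import numpy as np

def transform(e, rho):
    """Unconstrained transformed error z = T(e / rho), with the assumed
    T(x) = 0.5 * ln((1 + x) / (1 - x)); maps e in (-rho, rho) onto R."""
    x = e / rho
    return 0.5 * np.log((1.0 + x) / (1.0 - x))

def inverse_transform(z, rho):
    """Inverse map: e = rho * tanh(z). For any finite z, the recovered
    error stays strictly inside (-rho, rho), which is why proving z
    bounded suffices to prove the prescribed error bound."""
    return rho * np.tanh(z)
```

This is exactly the mechanism used in the stability analysis: boundedness of the transformed error, however large its bound, already pins the original tracking error inside the performance envelope.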

Step 1. For the transformed error subsystem (13), design a virtual controlThen, we havewhere with and .

Constructing the following Lyapunov function candidate:whose derivative along (16) yields

Remark 5. From (18), the boundedness of the transformation error depends on the boundedness of the link angular velocity tracking error . In the traditional backstepping design, the term is called the correlative interconnection term and is usually compensated in the next step of backstepping. In our control design, however, it is impossible to deal with in Step 2. The main reason is that the full-state tracking error constraints considered in this paper yield the transformation error in Step 2 instead of the traditional error . Therefore, it is difficult to construct a Lyapunov function with to compensate for ; see Step 2 for details.

Step 2. By adding and subtracting , the transformed error subsystem (14) can be rewritten aswhere , andIt should be pointed out that the term introduced in (19) facilitates the stability analysis based on Property 1. Since , , and are unknown smooth function vectors, is also unknown and smooth and cannot be directly used to construct the controller. To solve this problem, the unknown dynamics are approximated by the RBF NN (5). Then, we havewhere the approximation error satisfies , is the unknown optimal NN weight vector with NN node number , and is a radial basis function vector. Define as the estimated value of , and let be the corresponding estimation error. Then, design the adaptive neural control law asand construct the neural weight update law aswhere and are positive design constants, is a positive definite diagonal matrix, and is a small value used to improve the robustness of the adaptive controller (22).
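Numerically, a σ-modification update law of this kind can be sketched as a discrete Euler step. This is generic notation patterned after (23), not the paper's exact law; Γ, σ, and the step size are illustrative:

```python
import numpy as np

def update_weights(W_hat, s, z2, Gamma=1.0, sigma=0.01, dt=0.001):
    """One Euler step of a sigma-modified adaptive law of the form
    d(W_hat)/dt = Gamma * (s(z) z2^T - sigma * W_hat).
    The leakage term -sigma * W_hat keeps the weight estimate bounded
    even without persistent excitation, at the cost of a small bias."""
    return W_hat + dt * Gamma * (np.outer(s, z2) - sigma * W_hat)
```

When the tracking error z2 is driven toward zero, the gradient term vanishes and the leakage term gently shrinks the estimates, which is the robustness role the small σ plays in the adaptive controller.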
Substituting (22) into (21), we havewhere . Noting that the inertia matrix is positive definite, we construct the following Lyapunov function candidate:

One of our main results is presented in the following theorem.

Theorem 6 (stability and tracking). Consider the closed-loop system consisting of the robotic system (1), the bounded reference trajectory (2), the full-state tracking performance condition (3), the transformed error (10), the proposed adaptive neural control law (22) with the virtual control law (15), and the weight updated law (23). Assume that the given bounded initial conditions satisfy condition (3) (this can be satisfied by choosing proper design parameters , and ). Then:
(1) all signals of the closed-loop system remain uniformly ultimately bounded;
(2) the constrained full-state tracking errors satisfy the prescribed performance (3) and converge to a small residual set around zero in a finite time .

Proof. Differentiating in (25) with respect to time and using (18) and (24) yieldsUsing Property 1 and substituting (23) into (26), the derivative of is rewritten asSubsequently, using the appropriate inequality and combining Assumption 1, we havewhere is a positive constant. Substituting (28) into (27) giveswhere . Let ; we have
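The bound elided here is presumably the standard comparison-lemma step: from an inequality of the form $\dot{V} \le -\lambda V + \mu$ with $\lambda, \mu > 0$, the comparison lemma gives

$V(t) \le \left( V(0) - \dfrac{\mu}{\lambda} \right) e^{-\lambda t} + \dfrac{\mu}{\lambda} \le V(0)\, e^{-\lambda t} + \dfrac{\mu}{\lambda}$,

so $V$ decays exponentially into the residual set $\{ V \le \mu / \lambda \}$, whose size shrinks as the control gains, and hence $\lambda$, grow.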

From the definition (25) of , the transformed error and the neural weight error are bounded. Noticing that and the error transformation relationship (8)–(10), we obtain that the tracking error is bounded and satisfies the prescribed tracking error constraints (3). This means that is an arbitrarily small value depending on the design parameters , and . Using the boundedness of , we can obtain thatNoting , (18) can be rewritten as where , . Further, let ; we haveSince , it follows from (33) that is bounded. Similarly, we can obtain that the tracking error is bounded and satisfies the prescribed tracking error constraints (3). By combining the boundedness of , and the designed bounded functions , , it can be verified that and are bounded. Because of , is also bounded. Because the desired weight is bounded, the weight estimate is also bounded. From (22), we can verify that is also bounded. Hence, all closed-loop signals are uniformly ultimately bounded.

Moreover, from (17), (25), (30), and (33), we can obtain the convergence region of the transformed error , as follows:

By choosing and , there exists a finite time such that, for , the transformed error satisfiesSubsequently, the convergence region after the finite time can be constructed as follows:where can be made arbitrarily small by choosing appropriate controller parameters , , and .

From (36), the transformed error converges to a small residual set in a finite time . Owing to the convergence of and the error transformation relationship (8)–(10), the full-state constrained tracking errors satisfy (3) in a finite time . From (3) and (4), the tracking error exponentially converges to the interval , which can be adjusted to be a small residual set around zero by choosing appropriate design parameters .

Remark 7. In order to verify the boundedness of the transformation errors and , two independent Lyapunov functions and are constructed in Steps 1 and 2. Using appropriate inequality techniques, we first prove that the transformation error is bounded. Although the boundedness of depends on in (30), which may be large, the tracking error can still satisfy the prescribed performance (3) based on the error transformation relationship (8)–(10). Subsequently, as indicated in Remark 5, the boundedness of backward derives the boundedness of the transformation error . Similarly, can be verified to satisfy the predefined tracking performance (3).

4. Neural Learning Control

Based on the stable adaptive neural control scheme developed in Section 3, this section will use the spatially localized approximation ability of RBF NNs to achieve the knowledge acquisition and storage of the unknown system dynamics . And then, the stored knowledge will be reused to develop a neural learning controller so that the improved control performance of the robotic system (1) can be achieved for the same or a similar control task.

4.1. Knowledge Acquisition, Expression, and Storage

In this section, the regression subvector of the RBF NN is first verified to satisfy the PE condition, which is the key to achieving the exponential convergence of the NN weight values and the accurate NN approximation of the unknown system dynamics . From Lemma 4, the NN input vector needs to be verified to be a recurrent signal, so that the regression subvector along the input orbit satisfies the partial PE condition. Based on Theorem 6, the system output converges to a small neighborhood of for . Since is a recurrent signal, is also recurrent. Since , , and are very small values, is recurrent with the same period as . In the steady-state control stage, is small and recurrent. Noting that is a function of the variables , , , and , it can likewise be verified to be a recurrent signal. According to Lemma 4, the regression subvector satisfies the partial PE condition.

Using the spatially localized approximation ability of RBF NNs, the closed-loop system from (23) and (24) can be given bywhere is a positive definite and symmetric matrix, is the subvector of composed of the RBFs that are close to the reference orbit , is the corresponding estimated weight subvector with and , denotes the region far away from the orbit , and are the approximation errors along the reference orbit.

Theorem 9. Consider the closed-loop system consisting of the robotic system (1) with Assumption 1, the bounded reference model (2), the full-state tracking error constraint condition (3), the transformed error (10), and the adaptive neural control law (22) with the virtual control law (15) and the NN updated law (23). Assume that the given bounded initial conditions satisfy (3) and , . Then, for any recurrent orbit , the NN weight estimates exponentially converge to a small neighborhood of their optimal values after , and the corresponding system dynamics along the recurrent signals is approximated by the stored neural knowledge aswhere approaches the desired error , and the constant weight values are chosen aswith representing a time segment of the steady-state stage.
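The storage step (39), a time average of the weight estimates over a steady-state segment, can be sketched as follows (the window and the synthetic data are illustrative):

```python
import numpy as np

def store_weights(W_history, ts, t_a, t_b):
    """Constant stored weights W_bar: the mean of the weight estimates
    over a steady-state segment [t_a, t_b], in the style of Eq. (39).
    W_history has shape (T, N): one weight vector per time sample."""
    mask = (ts >= t_a) & (ts <= t_b)
    return W_history[mask].mean(axis=0)

# Synthetic converged estimate: constant 2.0 plus a small steady ripple
ts = np.linspace(0.0, 10.0, 1001)
W_history = 2.0 + 0.1 * np.sin(5.0 * ts)[:, None] * np.ones((1, 3))
W_bar = store_weights(W_history, ts, 6.0, 10.0)
```

Averaging over a steady-state window smooths out the residual oscillation of the estimates, so the stored constants sit near the converged values and can be reused without further adaptation.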

Proof. Up to now, we have verified that satisfies the PE condition. Furthermore, in order to achieve the exponential convergence of the neural weight estimates , the closed-loop system (37) needs to be transformed into a class of linear time-varying (LTV) systems with small perturbations based on the lemma given in [56]. From (37), the perturbation term may be very large. The main reason lies in the fact that the term may be large owing to the possibly large , and the term may also be large owing to the possibly large and . Noting that is a positive definite and symmetric matrix, a state transformation is introduced to obtain the following class of LTV systems:where abbreviates , and

It is worth pointing out that and are small perturbations when is chosen large enough and small enough. Therefore, the system (40) can be regarded as a class of LTV systems with very small perturbations. It has been shown in [28] that the nominal part of the system (40) is guaranteed to be exponentially stable if the system (40) satisfies the following three conditions:
(i) there exists a positive constant such that, for all , the bound is satisfied;
(ii) there exist symmetric and positive definite matrices and such that ;
(iii) satisfies the PE condition.

From Section 3, all closed-loop signals remain uniformly ultimately bounded. Therefore, condition (i) is easily checked. Moreover, we have verified that satisfies the PE condition (see the analysis preceding Theorem 9 for details). Next, choose a matrix . Since and are positive definite and symmetric, the matrix is also symmetric and positive definite. Then, we haveThe inequality holds when the control parameter is chosen appropriately. Therefore, we have verified that the nominal part of the system (40) is uniformly exponentially stable. Further, based on the perturbation theory given in the lemma of [56], the weight estimation error converges exponentially to a small neighborhood of zero for . Noting that , converges exponentially to a small neighborhood of the desired weight value in a finite time , and the corresponding desired weight value can be stored by the constant neural weight values in (39).

Based on the spatially localized approximation property of the RBF NN (6) and the constant weight values , the unknown system dynamics can be expressed aswhere and are close to . From (43), the neural networks , containing the experienced knowledge , can be used to accurately approximate the unknown system dynamics .

Furthermore, the learned knowledge can be described as follows: for the experienced recurrent orbit , there exist positive constants and , which describe a local region along such thatwhere is close to . For a new task, once the NN inputs enter the region , the trained RBF networks can accurately approximate the uncertain nonlinearity .

4.2. Static Controller Design with Knowledge Utilization

By invoking the stored weight values (39), a static controller is developed in this section to guarantee the prescribed performance of the full-state tracking errors of the robotic system (1) for the same or a similar control task. Using the stored knowledge in (39), a static control law without online adjustment of the neural weight estimates, instead of the adaptive NN control law (22), is designed as follows:where and are positive design parameters and and are defined in (12). Moreover, the virtual control law is chosen the same as in Section 3; see (15) for details. Then, by combining (16), (19), and (45), we obtain the following closed-loop system:where . Subsequently, construct the following Lyapunov function candidate:Noting the condition (44) and applying backstepping steps similar to those in Section 3, we have the following results.
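The reuse step can be sketched as a static (non-adaptive) control law. This is a generic pattern modeled on (45), not the paper's exact expression, and the gains are illustrative:

```python
import numpy as np

def static_learning_control(z1, z2, W_bar, s, K1=10.0, K2=10.0):
    """Static neural learning controller sketch: error feedback on the
    transformed errors plus a learned feedforward term W_bar^T s(z).
    No weight integration runs online, which saves computation and
    avoids the transient re-adaptation phase of the adaptive law."""
    return -K1 * z1 - K2 * z2 - W_bar.T @ s
```

Because W_bar is a constant array, the per-step cost is one matrix-vector product, versus integrating the full weight ODE (23) in the adaptive controller; this is the practical payoff of storing the learned knowledge.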

Theorem 10. Consider the closed-loop system consisting of the robotic system (1), the bounded reference trajectory (2), the full-state tracking performance condition (3), the transformed error (10), the static neural learning control law (45) with the stored constant weights given in (39), and the virtual control law (15). Then, for the same or a similar recurrent reference orbit given in Theorem 6 and initial conditions satisfying the prescribed performance (3), it is guaranteed that all the closed-loop signals are uniformly ultimately bounded and that the constrained full-state tracking errors satisfy the prescribed performance (3) and converge to a small residual set around zero.

5. Simulation Results

To demonstrate the effectiveness of the proposed dynamic learning scheme, we consider the 2-link robot manipulator shown in Figure 1. In the n-link rigid robotic system (1), denotes the angular position of each joint and is the actuator input applied at the manipulator joints. Based on the system (1), the dynamic parameters of the 2-link robot manipulator are given bywhereand and denote the length and the mass of link , and denotes the gravitational acceleration. In this paper, the system parameters are chosen as , , , and , and the external disturbances , which are bounded and satisfy Assumption 1.
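The specific matrix entries are elided in this copy; for reference, the textbook point-mass 2-link planar model has the form below (an assumption about which variant the paper uses). It satisfies both the positive definiteness of the inertia matrix and the skew-symmetry of Property 1:

```python
import numpy as np

def two_link_dynamics(q, dq, m1=1.0, m2=1.0, l1=1.0, l2=1.0, g=9.8):
    """Standard point-mass 2-link planar manipulator matrices (textbook
    model with link masses concentrated at the link tips; illustrative,
    not necessarily the paper's exact parameterization)."""
    c2 = np.cos(q[1])
    M = np.array([
        [(m1 + m2) * l1**2 + m2 * l2**2 + 2.0 * m2 * l1 * l2 * c2,
         m2 * l2**2 + m2 * l1 * l2 * c2],
        [m2 * l2**2 + m2 * l1 * l2 * c2,
         m2 * l2**2],
    ])
    h = -m2 * l1 * l2 * np.sin(q[1])  # Coriolis/centripetal coefficient
    C = np.array([[h * dq[1], h * (dq[0] + dq[1])],
                  [-h * dq[0], 0.0]])
    G = np.array([
        (m1 + m2) * l1 * g * np.cos(q[0]) + m2 * l2 * g * np.cos(q[0] + q[1]),
        m2 * l2 * g * np.cos(q[0] + q[1]),
    ])
    return M, C, G
```

A quick numerical check confirms that M is symmetric positive definite and that dM/dt - 2C is skew-symmetric along any trajectory, matching Property 1.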

Figure 1: Diagram of a 2-link robotic manipulator.

The system output is required to track the following desired reference trajectory :

For full-state constrained tracking errors and , our target is to achieve the following prescribed transient and steady-state tracking error bounds:where , , , , and ; the performance function and the transformation function are designed as

5.1. ANC Results with Full-State Tracking Error Constraints

According to Theorems 6 and 9, the main objective of this section is to use the proposed stable adaptive neural controller (22), with the virtual control law (15) and the neural weight adaptation law (23), so that the full-state tracking errors and satisfy the prescribed performance (51), the neural weight estimates exponentially converge to the constant weight values , and the unknown system dynamics in (20) is accurately approximated by the constant RBF NN .

In the simulation studies, the RBF network consists of neurons whose centers are evenly spaced on with the width . The other design parameters are chosen as follows: , , , , and . The initial states are , , and . Simulation results are shown in Figures 2–10. Figures 2–5 show the constrained joint angular position and velocity tracking error performances, respectively. From Figures 2–5, it can be clearly seen that good transient performance has been achieved by adjusting the performance function (53) and the design parameters , . The control input response is given in Figure 6. The partial weight convergence of is presented in Figures 7 and 8. It can be seen from Figures 7 and 8 that only partial weights converge to relatively large values, which means that only the neurons along the recurrent input signals are activated. Based on Theorem 9, the constant neural weight is chosen in the simulation asFigures 9 and 10 show that the unknown system dynamics and along the periodic reference signals can be accurately approximated by the constant RBF NNs and .

Figure 2: Angular position tracking error : ANC with error transformation method (—), ANC in [30] (--), and the error bounds (- - -).
Figure 3: Angular position tracking error : ANC with error transformation method (—), ANC method in [30] (--), and the error bounds (- - -).
Figure 4: Angular velocity tracking error : ANC with error transformation method (—), ANC in [30] (--), and the error bounds (- - -).
Figure 5: Angular velocity tracking error : ANC with error transformation method (—), ANC method in [30] (--), and the error bounds (- - -).
Figure 6: Control input responses: (—), (--): (a) ANC with error transformation and (b) ANC in [30].
Figure 7: Partial parameter convergence of .
Figure 8: Partial parameter convergence of .
Figure 9: Function approximation: (—) and (--).
Figure 10: Function approximation: (—) and (--).

To further demonstrate the improved transient and steady-state tracking performance of the proposed control method with full-state tracking performance constraints, a simulation comparison is made between the proposed method and the existing method without prescribed performance [30]. For comparison purposes, the existing method [30] is used to control the same 2-link robot manipulator with the same initial condition and the same reference trajectory (50). For clarity, the existing method without prescribed performance proposed in [30] is recalled as follows: the control law is and the neural weight update laws are and . To make the comparison fair, the control parameters , , , and are chosen such that the control input signals of the two methods have similar amplitudes, as shown in Figure 6. The simulation results for comparison are given in Figures 2–5, which clearly show that the proposed adaptive neural control scheme with full-state tracking performance constraints achieves better transient and steady-state tracking performance, with smaller overshoot, a faster convergence rate, and smaller steady-state errors.

Remark 11. It is well known that the tracking performance depends on the choice of the control parameters and the structure of the RBF neural networks. However, how to choose the design parameters to achieve good tracking performance is still an open problem. In this simulation, the RBF networks are constructed such that the neurons cover the entire NN input trajectory space. Moreover, the controller parameters are chosen with , , , and sufficiently large and sufficiently small. It should be pointed out that these design parameters were chosen by a trial-and-error method.

5.2. Neural Learning Control Results

By using the constant weight values stored in (55), the objective of this section is to invoke the static neural learning controller (45) to achieve improved control performance under full-state tracking error constraints for the same or similar control tasks. For comparison purposes, the controlled plant, the reference trajectory, and the constrained tracking error performance are chosen the same as in Section 5.1, and the initial conditions and the neural network structure are unchanged. In the simulation, with the control gains selected as , , and , the results for the static neural learning controller (45) are shown in Figures 11–15. From Figures 11–14, it can be seen that smaller overshoot and faster convergence are obtained by using the learned knowledge in (55), while the full-state tracking errors satisfy the prescribed performance. It is worth pointing out that a smaller control signal suffices in neural learning control to achieve this improved tracking performance; see Figure 15 for details. Moreover, because the proposed static learning control scheme avoids the online adjustment of the neural weight values, the running time is reduced by nearly half for the same simulation time interval and the same computer configuration. In particular, the static neural learning controller (45) avoids the trial-and-error tuning of the control design parameters and NN parameters required in the adaptive neural control process. Therefore, the proposed learning control scheme effectively avoids a great deal of the time consumed by adaptive neural control.
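The structure of the static learning controller can be sketched as follows: the online weight update is dropped and the constant weights stored in (55) serve as a fixed feedforward lookup. The function name and gains below are illustrative, not the paper's exact equation (45).

```python
import numpy as np

# Sketch of a static neural learning controller: no run-time adaptation,
# the stored constant weights W_bar replace the adapted estimate W_hat.
def learning_control(z, S_Z, W_bar, K):
    """u = -K z - W_bar^T S(Z), with W_bar held constant."""
    return -K @ z - W_bar.T @ S_Z
```

Each control step is now a pair of matrix-vector products instead of integrating the weight ODEs, which is consistent with the reported near-halving of the running time.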

Figure 11: Angular position tracking error : neural learning control (—), ANC (--), and the error bounds (- - -).
Figure 12: Angular position tracking error : neural learning control (—), ANC (--), and the error bounds (- - -).
Figure 13: Angular velocity tracking error : neural learning control (—), ANC (--), and the error bounds (- - -).
Figure 14: Angular velocity tracking error : neural learning control (—), ANC (--), and the error bounds (- - -).
Figure 15: Control input responses: (—), (--): (a) neural learning control and (b) ANC.

6. Conclusions

This paper focused on the problem of full-state tracking error constraints for an -link rigid robot with unknown system dynamics and external disturbances. A performance transformation method was employed to transform the constrained full-state tracking errors into unconstrained ones. By combining backstepping design with two independent Lyapunov functions, a novel adaptive neural control scheme was presented which guarantees that all the signals in the closed-loop system are uniformly ultimately bounded, while achieving the predefined transient and steady-state performance on the link angular position and velocity tracking errors. In particular, during the steady-state control process, the proposed neural control scheme can acquire, express, and store knowledge of the unknown system dynamics. The stored knowledge was reused to complete the same or similar tasks, achieving improved control performance with a lower computational burden and better transient tracking. It should be pointed out that the considered -link rigid robot belongs to a class of relatively simple multi-input multioutput nonlinear systems. Therefore, extending the proposed method to more complex nonlinear systems, such as nonaffine nonlinear systems, switched nonlinear systems, and stochastic large-scale systems, presents a challenging opportunity for future work.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (nos. 61374119, 61611130214, and 61473121), the Royal Society Newton Mobility Grant IE150858, the Guangdong Natural Science Foundation under Grant 2014A030312005, the Science and Technology New Star of Zhujiang, and the Fundamental Research Funds for the Central Universities.

References

1. F. L. Lewis, C. T. Abdallah, and D. M. Dawson, Control of Robot Manipulators, Macmillan, New York, NY, USA, 1993.
2. S. S. Ge, T. H. Lee, and C. J. Harris, Adaptive Neural Network Control of Robotic Manipulators, World Scientific, London, UK, 1998.
3. M. W. Spong and R. Ortega, “On adaptive inverse dynamics control of rigid robots,” IEEE Transactions on Automatic Control, vol. 35, no. 1, pp. 92–95, 1990.
4. F. Cheng and G. Lee, “Robust control of manipulators using the computed torque plus H∞ compensation method,” IEE Proceedings - Control Theory and Applications, vol. 143, no. 1, pp. 64–72, 1996.
5. Z. Li, J. Li, and Y. Kang, “Adaptive robust coordinated control of multiple mobile manipulators interacting with rigid environments,” Automatica, vol. 46, no. 12, pp. 2028–2034, 2010.
6. N. Sadegh and R. Horowitz, “An exponentially stable adaptive control law for robot manipulators,” IEEE Transactions on Robotics and Automation, vol. 6, no. 4, pp. 491–496, 1990.
7. M. Namvar and F. Aghili, “Adaptive force-motion control of coordinated robots interacting with geometrically unknown environments,” IEEE Transactions on Robotics, vol. 21, no. 4, pp. 678–694, 2005.
8. D. Chwa, “Sliding-mode tracking control of nonholonomic wheeled mobile robots in polar coordinates,” IEEE Transactions on Control Systems Technology, vol. 12, no. 4, pp. 637–644, 2004.
9. J. Yang and J. Kim, “Sliding mode control for trajectory tracking of nonholonomic wheeled mobile robots,” IEEE Transactions on Robotics and Automation, vol. 15, no. 3, pp. 578–587, 1999.
10. R. M. Sanner and J. E. Slotine, “Gaussian networks for direct adaptive control,” IEEE Transactions on Neural Networks, vol. 3, no. 6, pp. 837–863, 1992.
11. S. S. Ge and C. Wang, “Direct adaptive NN control of a class of nonlinear systems,” IEEE Transactions on Neural Networks, vol. 13, no. 1, pp. 214–221, 2002.
12. Y.-J. Liu, L. Tang, S. Tong, and C. L. Chen, “Adaptive NN controller design for a class of nonlinear MIMO discrete-time systems,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 5, pp. 1007–1018, 2015.
13. M. Wang, X. Liu, and P. Shi, “Adaptive neural control of pure-feedback nonlinear time-delay systems via dynamic surface technique,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 41, no. 6, pp. 1681–1692, 2011.
14. T.-P. Zhang, H. Wen, and Q. Zhu, “Adaptive fuzzy control of nonlinear systems in pure feedback form based on input-to-state stability,” IEEE Transactions on Fuzzy Systems, vol. 18, no. 1, pp. 80–93, 2010.
15. H. Q. Wang, P. Shi, H. Li, and Q. Zhou, “Adaptive neural tracking control for a class of nonlinear systems with unmodeled dynamics,” IEEE Transactions on Cybernetics, 2016.
16. H. Wang, P. X. Liu, and P. Shi, “Observer-based fuzzy adaptive output-feedback control of stochastic nonlinear multiple time-delay systems,” IEEE Transactions on Cybernetics, 2017.
17. S. Tong, Y. Li, and P. Shi, “Observer-based adaptive fuzzy backstepping output feedback control of uncertain MIMO pure-feedback nonlinear systems,” IEEE Transactions on Fuzzy Systems, vol. 20, no. 4, pp. 771–785, 2012.
18. C. M. Kwan, A. Yesildirek, and F. L. Lewis, “Robust force/motion control of constrained robots using neural network,” Journal of Robotic Systems, vol. 16, no. 12, pp. 697–714, 1999.
19. Y. Karayiannidis, G. Rovithakis, and Z. Doulgeri, “Force/position tracking for a robotic manipulator in compliant contact with a surface using neuro-adaptive control,” Automatica, vol. 43, no. 7, pp. 1281–1288, 2007.
20. C. Yang, X. Wang, Z. Li, Y. Li, and C. Su, “Teleoperation control based on combination of wave variable and neural networks,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 8, pp. 2125–2136, 2017.
21. C. Yang, J. Luo, Y. Pan, Z. Liu, and C. Su, “Personalized variable gain control with tremor attenuation for robot teleoperation,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2017.
22. C. Yang, K. Huang, H. Cheng, Y. Li, and C. Su, “Haptic identification by ELM-controlled uncertain manipulator,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 8, pp. 2398–2409, 2017.
23. K. S. Fu, “Learning control systems and intelligent control systems: an intersection of artificial intelligence and automatic control,” IEEE Transactions on Automatic Control, vol. 16, no. 1, pp. 70–72, 1971.
24. B. Xu, C. Yang, and Z. Shi, “Reinforcement learning output feedback NN control using deterministic learning technique,” IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 3, pp. 635–641, 2014.
25. C. Yang, X. Wang, L. Cheng, and H. Ma, “Neural-learning-based telerobot control with guaranteed performance,” IEEE Transactions on Cybernetics, 2017.
26. B. Luo, H.-N. Wu, and T. Huang, “Off-policy reinforcement learning for H∞ control design,” IEEE Transactions on Cybernetics, vol. 45, no. 1, pp. 65–76, 2015.
27. C. Wang and D. J. Hill, “Learning from neural control,” IEEE Transactions on Neural Networks, vol. 17, no. 1, pp. 130–146, 2006.
28. T. Liu, C. Wang, and D. J. Hill, “Learning from neural control of nonlinear systems in normal form,” Systems & Control Letters, vol. 58, no. 9, pp. 633–638, 2009.
29. S.-L. Dai, C. Wang, and M. Wang, “Dynamic learning from adaptive neural network control of a class of nonaffine nonlinear systems,” IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 1, pp. 111–123, 2014.
30. C. Wang, M. Wang, T. Liu, and D. J. Hill, “Learning from ISS-modular adaptive NN control of nonlinear strict-feedback systems,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 10, pp. 1539–1550, 2012.
31. M. Wang and C. Wang, “Learning from adaptive neural dynamic surface control of strict-feedback systems,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 6, pp. 1247–1259, 2015.
32. S.-L. Dai, C. Wang, and F. Luo, “Identification and learning control of ocean surface ship using neural networks,” IEEE Transactions on Industrial Informatics, vol. 8, no. 4, pp. 801–810, 2012.
33. S.-L. Dai, M. Wang, and C. Wang, “Neural learning control of marine surface vessels with guaranteed transient tracking performance,” IEEE Transactions on Industrial Electronics, vol. 63, no. 3, pp. 1717–1727, 2016.
34. M. Wang and A. Yang, “Dynamic learning from adaptive neural control of robot manipulators with prescribed performance,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 8, pp. 2244–2255, 2017.
35. W. He, A. O. David, Z. Yin, and C. Sun, “Neural network control of a robotic manipulator with input deadzone and output constraint,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 46, no. 6, pp. 759–770, 2016.
36. K. P. Tee, S. S. Ge, and E. H. Tay, “Barrier Lyapunov functions for the control of output-constrained nonlinear systems,” Automatica, vol. 45, no. 4, pp. 918–927, 2009.
37. B. Ren, S. S. Ge, K. P. Tee, and T. H. Lee, “Adaptive neural control for output feedback nonlinear systems using a barrier Lyapunov function,” IEEE Transactions on Neural Networks, vol. 21, no. 8, pp. 1339–1345, 2010.
38. W. He, C. Sun, and S. S. Ge, “Top tension control of a flexible marine riser by using integral-barrier Lyapunov function,” IEEE/ASME Transactions on Mechatronics, vol. 20, no. 2, pp. 497–505, 2015.
39. C. Yang, Y. Jiang, Z. Li, W. He, and C.-Y. Su, “Neural control of bimanual robots with guaranteed global stability and motion precision,” IEEE Transactions on Industrial Informatics, vol. 13, no. 3, pp. 1162–1171, 2017.
40. K. P. Tee and S. S. Ge, “Control of nonlinear systems with partial state constraints using a barrier Lyapunov function,” International Journal of Control, vol. 84, no. 12, pp. 2008–2023, 2011.
41. W. He, Y. Chen, and Z. Yin, “Adaptive neural network control of an uncertain robot with full-state constraints,” IEEE Transactions on Cybernetics, vol. 46, no. 3, pp. 620–629, 2016.
42. Y.-J. Liu and S. Tong, “Barrier Lyapunov functions-based adaptive control for a class of nonlinear pure-feedback systems with full state constraints,” Automatica, vol. 64, pp. 70–75, 2016.
43. C. P. Bechlioulis and G. A. Rovithakis, “Robust adaptive control of feedback linearizable MIMO nonlinear systems with prescribed performance,” IEEE Transactions on Automatic Control, vol. 53, no. 9, pp. 2090–2099, 2008.
44. Y. Li, S. Tong, and T. Li, “Adaptive fuzzy output-feedback control for output constrained nonlinear systems in the presence of input saturation,” Fuzzy Sets and Systems, vol. 248, pp. 138–155, 2014.
45. A. K. Kostarigka, Z. Doulgeri, and G. A. Rovithakis, “Prescribed performance tracking for flexible joint robots with unknown dynamics and variable elasticity,” Automatica, vol. 49, no. 5, pp. 1137–1147, 2013.
46. J. Na, Q. Chen, X. Ren, and Y. Guo, “Adaptive prescribed performance motion control of servo mechanisms with friction compensation,” IEEE Transactions on Industrial Electronics, vol. 61, no. 1, pp. 486–494, 2014.
47. Y. Li, Z. Ma, and S. Tong, “Adaptive fuzzy output-constrained fault-tolerant control of nonlinear stochastic large-scale systems with actuator faults,” IEEE Transactions on Cybernetics, 2017.
48. Y. Li, S. Tong, L. Liu, and G. Feng, “Adaptive output-feedback control design with prescribed performance for switched nonlinear systems,” Automatica, vol. 80, pp. 225–231, 2017.
49. S. I. Han and J. M. Lee, “Partial tracking error constrained fuzzy dynamic surface control for a strict feedback nonlinear dynamic system,” IEEE Transactions on Fuzzy Systems, vol. 22, no. 5, pp. 1049–1061, 2014.
50. S. Tong, S. Sui, and Y. Li, “Fuzzy adaptive output feedback control of MIMO nonlinear systems with partial tracking errors constrained,” IEEE Transactions on Fuzzy Systems, vol. 23, no. 4, pp. 729–742, 2015.
51. C. P. Bechlioulis, Z. Doulgeri, and G. A. Rovithakis, “Neuro-adaptive force/position control with prescribed performance and guaranteed contact maintenance,” IEEE Transactions on Neural Networks, vol. 21, no. 12, pp. 1857–1868, 2010.
52. M. Wang, C. Wang, and X. Liu, “Dynamic learning from adaptive neural control with predefined performance for a class of nonlinear systems,” Information Sciences, vol. 279, pp. 874–888, 2014.
53. M. Wang, C. Wang, P. Shi, and X. Liu, “Dynamic learning from neural control for strict-feedback systems with guaranteed predefined performance,” IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 12, pp. 2564–2576, 2016.
54. R. Selmic and F. Lewis, “Deadzone compensation in motion control systems using neural networks,” IEEE Transactions on Automatic Control, vol. 45, no. 4, pp. 602–613, 2000.
55. A. J. Kurdila, F. J. Narcowich, and J. D. Ward, “Persistency of excitation in identification using radial basis function approximants,” SIAM Journal on Control and Optimization, vol. 33, no. 2, pp. 625–642, 1995.
56. H. K. Khalil, Nonlinear Systems, Prentice-Hall, Upper Saddle River, NJ, USA, 3rd edition, 2002.