Abstract

A dynamic learning method is developed for an uncertain $n$-link robot with unknown system dynamics, achieving predefined performance attributes for the link angular position and velocity tracking errors. For a known nonsingular initial robotic condition, performance functions and unconstrained transformation errors are employed to prevent violation of the full-state tracking error constraints. By combining two independent Lyapunov functions with a radial basis function (RBF) neural network (NN) approximator, a novel and simple adaptive neural control scheme is proposed for the dynamics of the unconstrained transformation errors, which guarantees uniform ultimate boundedness of all signals in the closed-loop system. In the steady-state control process, the RBF NNs are shown to satisfy the partial persistent excitation (PE) condition. Subsequently, an appropriate state transformation is adopted to achieve accurate convergence of the neural weight estimates. The corresponding experienced knowledge of the unknown robotic dynamics is stored in NNs with constant neural weight values. Using the stored knowledge, a static neural learning controller is developed to improve the full-state tracking performance. A comparative simulation study on a 2-link robot illustrates the effectiveness of the proposed scheme.

1. Introduction

In the past decades, the force/position tracking control problem of robots has attracted wide attention in both theory and applications [1, 2]. In the early stage of robotic tracking control, the system model was usually assumed to be accurately known, and corresponding model-based control methods were proposed in [3, 4]. With the increasing diversity of robot working environments and the complexity of robot structures, force/position tracking control has been studied under different kinds of uncertainties. Ignoring uncertainties to simplify the control design may cause large steady-state errors and/or poor transient response [5]. For the case of parametric uncertainties, adaptive control methods have been presented in [6, 7] to make robots adapt to a changing control environment. To enhance system robustness against parametric uncertainties in the presence of external disturbances, sliding mode control [8, 9] has been proposed to obtain the desired robotic tracking performance. Owing to the universal approximation property [10–17], a great number of intelligent control schemes, such as adaptive neural/fuzzy control, have been developed for controlling robotic systems with uncertain nonlinearities [18–22].

Although intelligent control of robotic systems has attracted considerable attention in the past few years, relatively few robot control methods can achieve human-like performance in a dynamic and uncertain environment. It is well known that intelligent control was initially inspired by the learning and control abilities of human beings; thus intelligent control should at least possess the properties of “learning by doing” and “doing with the learned knowledge” [23–26]. However, most existing intelligent control schemes can only ensure the stability of closed-loop systems, without achieving the acquisition, storage, and utilization of knowledge about the unknown system dynamics. This means that existing intelligent control schemes do not achieve accurate convergence of the estimated parameters, which usually requires exponential stability of the derived closed-loop system. Unfortunately, it is extremely difficult to verify the exponential stability of the derived closed-loop system for uncertain nonlinear systems. Recently, a deterministic learning method was proposed in [27] using RBF NNs for a second-order Brunovsky system, where the derived closed-loop system was described by a class of linear time-varying systems. By verifying that the RBF NNs satisfied the persistent excitation (PE) condition, the convergence of partial neural weights and accurate neural approximation of the unknown system dynamics were guaranteed in [27], owing to the exponential convergence of the derived linear time-varying closed-loop systems. The deterministic learning method was further extended to high-order Brunovsky systems with an unknown affine/nonaffine term [28, 29], where the derivative of the unknown affine term was assumed to be bounded. By combining backstepping with a system decomposition strategy, an elegant dynamic learning method was proposed to cope with the learning and control problem of third-order strict-feedback systems [30]. By further incorporating dynamic surface control technology [31], the result in [30] was extended to high-order strict-feedback systems. The deterministic learning method has also been applied to many physical systems such as marine surface vessels [32, 33] and robot manipulators [34].

To date, deterministic learning methods have mainly been used to solve the learning and control problem for single-input single-output nonlinear systems without any constraint. In practice, most physical systems are subject to various constraints, such as output or state constraints and tracking performance constraints. Violation of these constraints may cause severe performance degradation, safety problems, or system damage [35]. Therefore, solving the control and learning problem of constrained systems is of great importance. Based on the Lyapunov theorem, a barrier Lyapunov function (BLF) method has been presented to handle output constraints for strict-feedback nonlinear systems [36], output feedback nonlinear systems [37], flexible systems [38], and robotic manipulators [35, 39]. The BLF-based methods were also extended to handle state constraints [40–42]. Although the aforementioned results on output or state constraints can guarantee that system outputs or states converge to a predefined bounded set, predefined performance requirements on the convergence rate, maximum overshoot, and steady-state error have not been fully studied. The predefined performance issue is an extremely challenging problem. Recently, an adaptive neural prescribed performance controller was proposed in [43, 44] for feedback linearizable nonlinear systems by means of transformation functions. The proposed method was also developed to deal with the constrained output tracking control problem in many applications such as robotic systems [45], nonlinear servo mechanisms [46], marine surface vessels [33], nonlinear stochastic large-scale systems with actuator faults [47], and switched nonlinear systems [48]. To handle partial tracking error constraints, a fuzzy dynamic surface control design was developed in [49, 50] for a class of strict-feedback nonlinear systems by transforming the state tracking errors into new virtual variables. However, the existing control schemes, such as [35–51], can only guarantee the stability of closed-loop systems under different constraints and are not capable of achieving the learning of unknown system dynamics. The main reason is that the derived closed-loop error system is extremely complex, so that its exponential convergence is difficult to verify using existing stability analysis tools. To solve this problem, a neural learning control scheme with an output tracking error constraint was presented in [52, 53] for a class of nonlinear systems. However, the methods proposed in [52, 53] cannot be adopted to deal with the dynamic learning problem for multi-input multi-output nonlinear systems with full-state tracking error constraints.

Based on the above discussion, this paper proposes a novel dynamic learning method for a multi-input multi-output $n$-link robot with full-state tracking error constraints under a mild assumption. To prevent violation of the full-state tracking error constraints, performance functions are first introduced to characterize the transient and steady-state performance of the full-state tracking errors. Then, using a nonlinear transformation method, the constrained tracking control problem is effectively transformed into the stabilization problem of equivalent unconstrained transformation error systems. By combining backstepping and Lyapunov stability, a novel adaptive neural control scheme is proposed to guarantee the uniform ultimate boundedness of all closed-loop signals and the prescribed full-state tracking error performance. It should be pointed out that the proposed control scheme differs from the traditional backstepping design. In our control design, the correlative interconnection term cannot be compensated in the next step of the backstepping design because of the full-state tracking error constraints. To overcome this difficulty, two independent Lyapunov functions are constructed in the two steps of the backstepping design. Based on these independent Lyapunov functions, the unconstrained transformation error of the link angular velocity tracking error is first proved to be bounded, and the corresponding link angular velocity tracking error is then shown to satisfy the prescribed performance. Invoking the boundedness of the link angular velocity tracking error, the link angular position tracking error is in turn shown to satisfy the prescribed performance. By means of the tracking convergence in the steady-state control process, the regression subvector consisting of the RBFs along the recurrent tracking orbit satisfies the partial PE condition. Subsequently, an appropriate state transformation is introduced to transform the closed-loop system into a linear time-varying (LTV) system with small perturbation terms. With the perturbation theory of LTV systems, the knowledge of the closed-loop robotic dynamics can be accurately stored by RBF NNs with constant weight values. The stored knowledge can be reused to develop a static neural learning controller that achieves better control performance with smaller transient tracking errors, smaller control gains, and less computational time.

The rest of this paper is organized as follows. Section 2 introduces the system formulation, the full-state tracking error transformation, and some useful preliminaries about RBF networks. In Section 3, a novel adaptive neural control design is proposed for rigid robotic manipulators with constrained full-state tracking performance. Neural learning control is developed in Section 4, which achieves the acquisition, storage, and utilization of knowledge about the unknown robotic dynamics. In Section 5, simulation studies on a 2-link robotic system are given to show the effectiveness of the proposed method. Section 6 concludes the paper.

2. Problem Formulation and Preliminaries

2.1. Uncertain Robotic System

The dynamics of an $n$-link robotic system is described by the following Lagrange form [1]:

$$M(q)\ddot q + C(q,\dot q)\dot q + G(q) + F(\dot q) + \tau_d = \tau, \qquad (1)$$

where $q, \dot q, \ddot q \in \mathbb{R}^n$ are the angular position, velocity, and acceleration, respectively, $\tau \in \mathbb{R}^n$ refers to the input torque, $M(q) \in \mathbb{R}^{n \times n}$ is an unknown symmetric positive definite inertia matrix, $C(q,\dot q)\dot q \in \mathbb{R}^n$ denotes the unknown centripetal and Coriolis torques, $G(q) \in \mathbb{R}^n$ is an unknown gravitational force vector, $F(\dot q) \in \mathbb{R}^n$ is an unknown friction vector, and $\tau_d \in \mathbb{R}^n$ denotes unknown disturbances.

Property 1 (see [54]). The matrix $\dot M(q) - 2C(q,\dot q)$ is skew-symmetric.

Assumption 1. The unknown external disturbance $\tau_d$ is bounded; that is, there exists a constant $\bar\tau_d > 0$ such that $\|\tau_d(t)\| \le \bar\tau_d$.

In this paper, we choose $q_d(t)$ as a recurrent reference trajectory for the angular position $q$, generated by the following reference model: $\dot x_d = f_d(x_d)$ (2), where $x_d$ is the state vector of the reference model, which is assumed to be a recurrent signal, and $f_d(\cdot)$ is a known smooth function. The reference orbit generated from the given initial condition $x_d(0)$ is denoted by $\varphi_d(x_d(0))$ and is assumed to be a recurrent motion.
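As a concrete illustration, the following sketch generates a simple periodic (hence recurrent) reference trajectory. The sinusoidal form and its amplitude and frequency are illustrative assumptions made here; the development above only requires the reference orbit to be a recurrent motion.

```python
import numpy as np

# A minimal sketch of a recurrent reference trajectory generator. The sinusoidal
# form, amplitude, and frequency are illustrative assumptions; any recurrent
# (e.g., periodic) motion would serve as the reference orbit.
def reference_trajectory(t, amp=0.5, omega=1.0):
    q_d = amp * np.sin(omega * t)            # desired angular position
    dq_d = amp * omega * np.cos(omega * t)   # desired angular velocity
    return q_d, dq_d
```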

2.2. Full-State Tracking Constraints

Define the full-state tracking errors as $e_1 = q - q_d$ and $e_2 = \dot q - \alpha$, where $\alpha$ is a continuously differentiable virtual control signal. To guarantee prescribed transient and steady-state bounds, each component $e(t)$ of the full-state tracking errors $e_1$ and $e_2$ needs to satisfy the following predefined condition:

$$-\underline\delta\,\rho(t) < e(t) < \bar\delta\,\rho(t), \qquad (3)$$

where $\underline\delta$ and $\bar\delta$ are positive design constants and $\rho(t)$ is a smooth, strictly positive, bounded, and decreasing function, which satisfies $\lim_{t\to\infty}\rho(t) = \rho_\infty > 0$ and is called a performance function. In this paper, $\rho(t)$ is chosen as the following exponential performance function:

$$\rho(t) = (\rho_0 - \rho_\infty)\,e^{-l t} + \rho_\infty, \qquad (4)$$

where $\rho_0 > \rho_\infty$ and $l$ are positive design constants (each tracking error component is assigned its own set of these constants).
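To make the roles of these parameters concrete, the sketch below evaluates the exponential performance function and the constraint check of (3)–(4). The parameter names rho_0, rho_inf, decay, and the δ factors mirror the reconstruction above and are illustrative assumptions rather than the paper's simulation values.

```python
import numpy as np

# A minimal sketch of the exponential performance function (4) and the
# prescribed-performance check (3). Parameter values are illustrative
# assumptions, not the paper's simulation settings.
def performance_function(t, rho_0=1.0, rho_inf=0.05, decay=2.0):
    """rho(t) = (rho_0 - rho_inf) * exp(-decay * t) + rho_inf."""
    return (rho_0 - rho_inf) * np.exp(-decay * t) + rho_inf

def within_prescribed_bounds(e, t, delta_lower=1.0, delta_upper=1.0, **rho_kwargs):
    """Check -delta_lower * rho(t) < e(t) < delta_upper * rho(t)."""
    rho = performance_function(t, **rho_kwargs)
    return -delta_lower * rho < e < delta_upper * rho
```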

For any given initial condition $e(0)$, the design constants $\rho_0$, $\underline\delta$, and $\bar\delta$ can be chosen appropriately such that $e(0)$ satisfies the predefined condition (3). From (3) and (4), different selections of the design parameters $\rho_0$, $\rho_\infty$, $l$, $\underline\delta$, and $\bar\delta$ yield different performance requirements on the tracking error $e(t)$.

2.3. RBF Neural Networks

The RBF NNs can be described by $f_{nn}(Z) = W^T S(Z)$, where $Z \in \Omega_Z \subset \mathbb{R}^q$ is the NN input vector with $q$ being the NN input dimension and $\Omega_Z$ a compact set, $W = [w_1, \ldots, w_N]^T \in \mathbb{R}^N$ is the weight vector, $N$ is the NN node number, and $S(Z) = [s_1(Z), \ldots, s_N(Z)]^T$ is the radial basis function vector. RBF neural networks have the following desired properties.

(1) Universal Approximation

Lemma 2 (see [55]). Using a sufficiently large node number $N$, the RBF NN $W^{*T}S(Z)$ can approximate any smooth function $f(Z)$ over a compact set $\Omega_Z$ to arbitrary accuracy as $f(Z) = W^{*T}S(Z) + \epsilon(Z)$ (5), where $W^*$ is the ideal weight vector and $\epsilon(Z)$ is the approximation error, which satisfies $|\epsilon(Z)| \le \epsilon^*$ for an arbitrarily small constant $\epsilon^* > 0$.

(2) Spatially Localized Approximation Property. It should be noted that the Gaussian radial basis function $s_i(Z)$ decays to zero as the input $Z$ moves away from the corresponding center. This property shows that the network output is only locally affected by each basis function. Therefore, any smooth function over a compact set can be approximated using a limited number of neurons located in a local region along a bounded input trajectory $Z(t)$:

$$f(Z) = W_\zeta^{*T} S_\zeta(Z) + \epsilon_\zeta, \qquad (6)$$

where $S_\zeta(Z)$ is the subvector of $S(Z)$ composed of the RBFs that are close to the trajectory $Z(t)$, $W_\zeta^*$ is the corresponding subvector of $W^*$, and the approximation error is $\epsilon_\zeta = \epsilon - W_{\bar\zeta}^{*T} S_{\bar\zeta}(Z)$, where the subscript $\bar\zeta$ denotes the neurons far away from the trajectory, so that $S_{\bar\zeta}(Z)$ is small. Therefore, $\epsilon_\zeta$ is close to $\epsilon$, which is an arbitrarily small value.
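The following sketch illustrates the localized approximation idea numerically: evaluating a Gaussian RBF network and selecting the regressor subvector of neurons that are actually excited along a given input trajectory. The center layout, width, and activation threshold are illustrative assumptions, not the paper's network settings.

```python
import numpy as np

# A minimal sketch of a Gaussian RBF network W^T S(Z) and of the spatially
# localized approximation property: only neurons whose centers lie close to
# the input trajectory contribute significantly to the output.
def rbf_vector(z, centers, width):
    """S(Z): vector of Gaussian radial basis functions."""
    d2 = np.sum((centers - z) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def rbf_output(z, centers, width, weights):
    """Network output W^T S(Z)."""
    return weights @ rbf_vector(z, centers, width)

def active_neuron_indices(trajectory, centers, width, tol=1e-3):
    """Indices of the regressor subvector S_zeta: neurons excited along Z(t)."""
    active = np.zeros(len(centers), dtype=bool)
    for z in trajectory:
        active |= rbf_vector(z, centers, width) > tol
    return np.where(active)[0]
```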

(3) Partial PE Condition. Persistent excitation plays a key role in the accurate convergence of the neural weight estimates. To clearly show that RBF NNs satisfy the PE property, a definition of the PE condition is given as follows:

Definition 3 (see [55]). A continuous, uniformly bounded, vector-valued function is said to satisfy the persistent excitation condition if there exist positive constants such that the corresponding excitation inequality holds for every constant vector, where the measure involved is a positive, σ-finite Borel measure.

Lemma 4 (see [27]). Consider any recurrent orbit $Z(t)$ remaining in a compact set $\Omega_Z$. Then, for the RBF network with centers placed on a regular lattice large enough to cover the compact set $\Omega_Z$, the regressor subvector $S_\zeta(Z)$ in (6) (rather than the entire regressor vector $S(Z)$) is persistently exciting.

3. Adaptive Neural Control with Full-State Tracking Error Constraints

In this section, a novel stable adaptive neural tracking control scheme will be developed for the system (1) with the full-state tracking error constraints (3), using nonlinear error transformations, independent Lyapunov functions, and backstepping. The proposed control scheme guarantees that all signals in the closed-loop system are ultimately bounded and that the full-state tracking errors satisfy the prescribed performance (3).

In order to transform the constrained tracking control problem (3) into an equivalent unconstrained one, a performance transformation method is introduced as follows: each constrained tracking error component is expressed as $e(t) = \rho(t)\,T(s)$ (7), where $s$ is the transformed (unconstrained) error and $T(\cdot)$ is a smooth and strictly increasing function satisfying $-\underline\delta < T(s) < \bar\delta$ (8). In this paper, the transformation function $T(\cdot)$ is constructed as follows:

Owing to the smoothness and strict monotonicity of $T(\cdot)$, its inverse transformation exists and is well defined as $s = T^{-1}(e/\rho)$ (10). From (3) and (10), if the transformed error $s$ is verified to be bounded, then the tracking error $e$ satisfies the predefined error performance (3). Differentiating (10) yields the transformed error dynamics (11). Next, define the transformed unconstrained error vectors $s_1$ and $s_2$ associated with $e_1$ and $e_2$, respectively. It follows from (11) that these vectors obey the dynamics (12). Then, using system (1), the dynamics of the transformed error vector can be rewritten as (13)-(14), which involve an unknown smooth function vector and a computable variable. Next, a control scheme will be developed for the system (13)-(14) based on backstepping.
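Since the concrete transformation function is not reproduced in the text above, the sketch below uses a symmetric log-ratio transform, a common choice in the prescribed-performance literature, to illustrate how a constrained error is mapped to an unconstrained one and back; the function names and the symmetric bounds are assumptions.

```python
import numpy as np

# A minimal sketch of the constrained-to-unconstrained error transformation.
# The symmetric log-ratio form below is a common choice in the
# prescribed-performance literature and is an assumption here; the paper's
# own transformation function is not reproduced in this text.
def transform_error(e, rho):
    """s = T^{-1}(e/rho) = 0.5 * ln((1 + e/rho) / (1 - e/rho)); requires |e| < rho."""
    lam = e / rho
    if not -1.0 < lam < 1.0:
        raise ValueError("tracking error must lie strictly inside the performance bound")
    return 0.5 * np.log((1.0 + lam) / (1.0 - lam))

def inverse_transform(s, rho):
    """e = rho * T(s) = rho * tanh(s); |e| < rho for any finite s."""
    return rho * np.tanh(s)
```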

Step 1. For the transformed error subsystem (13), design the virtual control law (15). Substituting (15) into (13) then yields the closed-loop error dynamics (16).

Construct the Lyapunov function candidate (17), whose derivative along (16) yields (18).

Remark 5. From (18), the boundedness of the transformation error in Step 1 depends on the boundedness of the link angular velocity tracking error. In the traditional backstepping design, the corresponding term is called the correlative interconnection term and can usually be compensated in the next step of backstepping. However, in our control design, it is impossible to deal with this term in Step 2. The main reason is that the full-state tracking error constraints considered in this paper lead Step 2 to work with the transformation error rather than the traditional tracking error. Therefore, it is difficult to construct a Lyapunov function in Step 2 that compensates for the interconnection term; see Step 2 for details.

Step 2. By adding and subtracting an auxiliary term, the transformed error subsystem (14) can be rewritten as (19), with the lumped unknown dynamics defined in (20). It should be pointed out that the term introduced in (19) facilitates the stability analysis based on Property 1. Since the robotic dynamic terms involved are unknown smooth function vectors, the lumped dynamics in (20) are also unknown and smooth and cannot be used directly to construct the controller. To solve this problem, the unknown dynamics are approximated by the RBF NN (5). Then, we have (21), where the approximation error is bounded, the unknown optimal NN weight vector has node number $N$, and the radial basis function vector is evaluated at the NN input. Define the estimated weights of the optimal weight vector, and let their difference be the corresponding estimation error. Then, design the adaptive neural control law as (22) and construct the neural weight update law as (23), where the control gains are positive design constants, the adaptation gain is a positive diagonal matrix, and a small constant is included to improve the robustness of the adaptive controller (22).
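For illustration, the sketch below integrates one Euler step of a sigma-modified weight update law in the spirit of (23). Because the displayed equations are not reproduced here, the exact pairing of the regressor with the transformed error, and the names Gamma (adaptation gain) and sigma (leakage constant), are assumptions.

```python
import numpy as np

# A minimal sketch of one Euler integration step of a sigma-modified weight
# update law in the spirit of (23). The regressor/error pairing and the
# parameter names are assumptions, not the paper's exact law.
def weight_update_step(W_hat, S_Z, s2, Gamma, sigma, dt):
    """W_hat_dot = Gamma @ (outer(S(Z), s2) - sigma * W_hat); returns W_hat + dt * W_hat_dot."""
    W_dot = Gamma @ (np.outer(S_Z, s2) - sigma * W_hat)
    return W_hat + dt * W_dot
```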
Substituting (22) into (21), we obtain the closed-loop error dynamics (24). Noting that the inertia matrix is positive definite, we construct the Lyapunov function candidate (25).

In what follows, one of our main results is stated in the following theorem.

Theorem 6 (stability and tracking). Consider the closed-loop system consisting of the robotic system (1), the bounded reference trajectory (2), the full-state tracking performance condition (3), the transformed error (10), the proposed adaptive neural control law (22) with the virtual control law (15), and the weight update law (23). Assume that the given bounded initial conditions satisfy condition (3) (this can be ensured by choosing proper design parameters). Then, we have that (1) all signals of the closed-loop system remain uniformly ultimately bounded; (2) the constrained full-state tracking errors satisfy the prescribed performance (3) and converge to a small residual set around zero in a finite time.

Proof. Differentiating (25) with respect to time and using (18) and (24) yields (26). Using Property 1 and substituting (23) into (26), the derivative of (25) is rewritten as (27). Subsequently, applying an appropriate inequality together with Assumption 1, we obtain (28), where the bound involved is a positive constant. Substituting (28) into (27) gives (29). Defining appropriate positive constants, we then obtain (30).

From the definition (25), the transformed error and the neural weight estimation error are bounded. Noting the error transformation relationship (8)–(10), we thus obtain that the corresponding tracking error is bounded and satisfies the prescribed tracking error constraints (3), with a bound that is a small value depending on the design parameters. Using this boundedness property, we obtain (31). Consequently, (18) can be rewritten as (32), and we further obtain (33). It follows from (33) that the transformation error in Step 1 is bounded. Similarly, the corresponding tracking error is bounded and satisfies the prescribed tracking error constraints (3). Combining the boundedness of the tracking errors with the designed bounded performance functions, it can be verified that the virtual control signal and its derivative are bounded, and hence the remaining state-dependent quantities are bounded as well. Because the desired weight vector is bounded, the weight estimate is also bounded. From (22), the control input is also bounded. Hence, all closed-loop signals are uniformly ultimately bounded.
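The boundedness bounds invoked above follow from a standard comparison-lemma step; stated with generic constants $a, b > 0$ (an assumption standing in for the paper's own inequality symbols, which are not reproduced in this text), it reads:

```latex
% A standard comparison-lemma bound, stated with generic constants a, b > 0
% (an assumption standing in for the paper's own symbols):
\dot V(t) \le -a\,V(t) + b
\quad\Longrightarrow\quad
V(t) \le V(0)\,e^{-a t} + \frac{b}{a}\left(1 - e^{-a t}\right)
     \le V(0)\,e^{-a t} + \frac{b}{a}.
```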

Moreover, from (17), (25), (30), and (33), we can obtain the convergence region of the transformed errors as follows:

By choosing the design parameters appropriately, there exists a finite time such that thereafter the transformed error satisfies (35). Subsequently, the convergence region after this finite time can be constructed as (36), where the bound can be made arbitrarily small by choosing appropriate controller parameters.

From (36), the transformed error converges to a small residual set in a finite time. Owing to this convergence and the error transformation relationship (8)–(10), the full-state constrained tracking errors satisfy (3) after this finite time. From (3) and (4), the tracking error exponentially converges to the interval $(-\underline\delta\,\rho_\infty,\ \bar\delta\,\rho_\infty)$, which can be adjusted to be a small residual set around zero by choosing the appropriate design parameters.

Remark 7. In order to verify the boundedness of the two transformation errors, two independent Lyapunov functions are constructed in Steps 1 and 2. Using appropriate inequality techniques, we first prove that the transformation error of Step 2 is bounded. Although this bound depends on the constant in (30), which may be large, the link angular velocity tracking error can still satisfy the prescribed performance (3) based on the error transformation relationship (8)–(10). Subsequently, as indicated in Remark 5, the boundedness of the link angular velocity tracking error in turn implies the boundedness of the transformation error of Step 1, so that the link angular position tracking error can similarly be verified to satisfy the predefined tracking performance (3).

4. Neural Learning Control

Based on the stable adaptive neural control scheme developed in Section 3, this section uses the spatially localized approximation ability of RBF NNs to achieve the acquisition and storage of knowledge about the unknown system dynamics. Then, the stored knowledge is reused to develop a neural learning controller so that improved control performance of the robotic system (1) can be achieved for the same or a similar control task.

4.1. Knowledge Acquisition, Expression, and Storage

In this section, the regression subvector of the RBF NN is first verified to satisfy the PE condition, which is key to achieving the exponential convergence of the NN weight values and the accurate NN approximation of the unknown system dynamics. From Lemma 4, the NN input vector needs to be verified to be a recurrent signal, so that the regression subvector along the input orbit satisfies the partial PE condition. Based on Theorem 6, the system output converges to a small neighborhood of the reference trajectory after a finite time. Since the reference trajectory is a recurrent signal, the link angular position is also recurrent. Since the tracking errors and the performance bounds become very small in the steady state, the link angular velocity is recurrent with the same period as the reference. In the steady-state control process, the remaining error signals are small and recurrent. Noting that the NN input is a function of these recurrent variables, it can therefore be verified to be a recurrent signal. According to Lemma 4, the regression subvector then satisfies the partial PE condition.
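As a sanity check, the PE level of the regressor subvector can be estimated numerically: over a window of one recurrence period, the Gram matrix of the sampled regressor should have a strictly positive smallest eigenvalue. The helper below reuses the RBF sketch given earlier and is illustrative only.

```python
import numpy as np

# A minimal sketch of numerically checking the (partial) PE condition: the
# integral of S_zeta(t) S_zeta(t)^T over a window should be bounded below by
# alpha * I, i.e., its smallest eigenvalue should be strictly positive.
def pe_level(S_samples, dt):
    """Smallest eigenvalue of sum_k S_k S_k^T * dt over the sampled window."""
    gram = sum(np.outer(S, S) for S in S_samples) * dt
    return float(np.linalg.eigvalsh(gram).min())
```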

Using the spatially localized approximation ability of RBF NNs, the closed-loop system formed from (23) and (24) can be written as (37), where the matrix involved is positive definite and symmetric, the regressor subvector of $S(Z)$ is composed of the RBFs that are close to the reference orbit, the corresponding estimated weight subvector is defined accordingly, the complementary subscript denotes the region far away from the orbit, and the remaining terms are the approximation errors along the reference orbit.

Theorem 9. Consider the closed-loop system consisting of the robotic system (1) with Assumption 1, the bounded reference model (2), the full-state tracking error constrained condition (3), the transformed error (10), the adaptive neural control law (22) with the virtual control law (15), and the NN update law (23). Assume that the given bounded initial conditions satisfy (3). Then, for any recurrent orbit of the NN input, the NN weight estimates exponentially converge to a small neighborhood of their optimal values after a finite time, and the corresponding system dynamics along the recurrent signals are approximated by the stored neural knowledge as in (38), where the approximation error approaches the desired error level, and the constant weight values are chosen as in (39), with the averaging interval representing a time segment of the steady-state stage.

Proof. So far, we have verified that the regressor subvector satisfies the PE condition. Furthermore, in order to achieve the exponential convergence of the neural weight estimates, the closed-loop system (37) needs to be transformed into a class of linear time-varying (LTV) systems with small perturbations, based on the lemma given in [56]. In (37), the perturbation term may be very large; the main reason lies in the fact that one of its components may be large owing to a possibly large error term, and another may also be large owing to possibly large gains. Noting that the matrix involved is positive definite and symmetric, a state transformation is introduced to obtain the class of LTV systems (40), with the associated abbreviations and perturbation terms defined accordingly.

It is worth pointing out that the perturbation terms in (40) are small when the control gains are chosen large enough and the small design constant in (23) is chosen small enough. Therefore, the system (40) can be regarded as a class of LTV systems with very small perturbations. It has been shown in [28] that the nominal part of the system (40) is guaranteed to be exponentially stable if the system (40) satisfies the following three conditions: (i) there exists a positive constant such that the system matrices are bounded for all time; (ii) there exist symmetric and positive definite matrices such that the required matrix relation holds; (iii) the regressor subvector satisfies the PE condition.

From Section 3, all closed-loop signals remain uniformly ultimately bounded; therefore, the satisfaction of condition (i) is easily checked. Moreover, we have verified that the regressor subvector satisfies the PE condition (see the analysis preceding Theorem 9 for details), so condition (iii) holds. Next, choose a suitable matrix for condition (ii). Since the matrices involved are positive definite and symmetric, the chosen matrix is also symmetric and positive definite, and we obtain (42); the required inequality holds when the corresponding control parameter is chosen appropriately. Therefore, we have verified that the nominal part of the system (40) is uniformly exponentially stable. Further, based on the perturbation theory given in the lemma of [56], the weight estimation error converges exponentially to a small neighborhood of zero. Consequently, the weight estimate converges exponentially to a small neighborhood of the desired weight value in a finite time, and the corresponding desired weight value can be stored by the constant neural weight values in (39).

Based on the spatially localized approximation property of the RBF NN (6) and the constant weight values obtained above, the unknown system dynamics can be expressed as (43), where the resulting approximation errors are close to the optimal ones. From (43), the neural networks containing the experienced knowledge can be used to accurately approximate the unknown system dynamics.

Furthermore, the learned knowledge can be described as follows: for the experienced recurrent orbit, there exist positive constants describing a local region along the orbit such that (44) holds, where the corresponding approximation error is close to the optimal one. For a new task, once the NN inputs enter this region, the trained RBF networks can accurately approximate the uncertain nonlinearity.
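To make the storage step of (39) concrete, the sketch below time-averages the recorded weight estimates over a steady-state segment; the array names and the choice of interval [t_a, t_b] are assumptions for illustration.

```python
import numpy as np

# A minimal sketch of storing learned knowledge as constant weights by
# time-averaging the weight estimates over a steady-state segment [t_a, t_b],
# in the spirit of (39). W_hat_history holds the estimates sampled at times
# t_samples during the adaptive control run (illustrative names).
def store_constant_weights(W_hat_history, t_samples, t_a, t_b):
    mask = (t_samples >= t_a) & (t_samples <= t_b)
    return W_hat_history[mask].mean(axis=0)
```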

4.2. Static Controller Design with Knowledge Utilization

By invoking the stored weight values (39), a static controller is developed in this section to guarantee the prescribed performance of the full-state tracking errors of the robotic system (1) for the same or a similar control task. Using the stored knowledge in (39), a static control law without online adjustment of the neural weight estimates, used instead of the adaptive NN control law (22), is designed as (45), where the control gains are positive design parameters and the remaining quantities are defined in (12). Moreover, the virtual control law is chosen the same as in Section 3; see (15) for details. Then, by combining (16), (19), and (45), we obtain the closed-loop system (46). Subsequently, construct the Lyapunov function candidate (47). Noting condition (44) and applying a backstepping analysis similar to that in Section 3, we have the following results.

Theorem 10. Consider the closed-loop system consisting of the robotic system (1), the bounded reference trajectory (2), the full-state tracking performance condition (3), the transformed error (10), the static neural learning control law (45) with the stored constant weights given in (39), and the virtual control law (15). Then, for the same or a similar recurrent reference orbit as in Theorem 6 and initial conditions satisfying the prescribed performance (3), it is guaranteed that all closed-loop signals are uniformly ultimately bounded and that the constrained full-state tracking errors satisfy the prescribed performance (3) and converge to a small residual set around zero.
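The key difference from the adaptive controller is that no weight dynamics are integrated at run time; a hedged sketch of such a static learning control law is given below. The gain name K2 and the exact structure of the feedback term are assumptions, since the displayed form of (45) is not reproduced in this text.

```python
import numpy as np

# A minimal sketch of a static learning control law in the spirit of (45):
# the stored constant weights W_bar replace the online estimate, so no weight
# update law is integrated at run time. Gain names and the feedback structure
# are assumptions.
def static_learning_control(s2, S_Z, W_bar, K2):
    """tau = -K2 @ s2 - W_bar^T S(Z): error feedback plus learned feedforward."""
    return -K2 @ s2 - W_bar.T @ S_Z
```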

5. Simulation Results

To demonstrate the effectiveness of the proposed dynamic learning scheme, we consider the 2-link robot manipulator shown in Figure 1. In terms of the $n$-link rigid robotic system (1), $q$ denotes the angular positions of the joints and $\tau$ is the actuator input applied at the manipulator joints. Based on the system (1), the dynamic parameters of the 2-link robot manipulator are given by (48) and (49), where $l_i$ and $m_i$ denote the length and the mass of link $i$, $i = 1, 2$, and $g$ denotes the gravitational acceleration. In this paper, the system parameters are fixed to specific values, and the external disturbances are bounded and satisfy Assumption 1.
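For readers who want to reproduce a comparable setup, the sketch below implements the textbook point-mass model of a two-link planar manipulator in the Lagrange form (1). The specific matrix entries and the default parameter values are illustrative assumptions and are not necessarily the entries of (48)-(49).

```python
import numpy as np

# A minimal sketch of textbook point-mass two-link planar manipulator dynamics
# in the Lagrange form M(q)q_dd + C(q, q_d)q_d + G(q) = tau. The entries and
# default parameters are illustrative assumptions, not necessarily (48)-(49).
def two_link_dynamics(q, qd, m1=1.0, m2=1.0, l1=1.0, l2=1.0, g=9.81):
    q1, q2 = q
    c2, s2 = np.cos(q2), np.sin(q2)
    M = np.array([
        [(m1 + m2) * l1**2 + m2 * l2**2 + 2.0 * m2 * l1 * l2 * c2,
         m2 * l2**2 + m2 * l1 * l2 * c2],
        [m2 * l2**2 + m2 * l1 * l2 * c2,
         m2 * l2**2],
    ])
    h = m2 * l1 * l2 * s2
    C = np.array([
        [-h * qd[1], -h * (qd[0] + qd[1])],
        [ h * qd[0], 0.0],
    ])
    G = np.array([
        (m1 + m2) * g * l1 * np.cos(q1) + m2 * g * l2 * np.cos(q1 + q2),
        m2 * g * l2 * np.cos(q1 + q2),
    ])
    return M, C, G

# Example: forward dynamics q_dd = M^{-1} (tau - C q_d - G) at one state.
# q, qd, tau = np.array([0.1, 0.2]), np.zeros(2), np.zeros(2)
# M, C, G = two_link_dynamics(q, qd)
# qdd = np.linalg.solve(M, tau - C @ qd - G)
```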

The system output $q$ is required to track the following desired reference trajectory $q_d$:

For the full-state constrained tracking errors $e_1$ and $e_2$, our target is to achieve the prescribed transient and steady-state tracking error bounds (51), where the design constants are fixed to specific values; the performance function and the transformation function are designed as follows:

5.1. ANC Results with Full-State Tracking Error Constraints

According to Theorems 6 and 9, the main objective of this section is to use the proposed stable adaptive neural controller (22), with the virtual control law (15) and the neural weight adaptation law (23), such that the full-state tracking errors satisfy the prescribed performance (51), the neural weight estimates exponentially converge to constant weight values, and the unknown system dynamics in (20) are accurately approximated by the constant RBF NNs.

In the simulation studies, the RBF network consists of neurons whose centers are evenly spaced over the NN input domain with a fixed width. The other design parameters and the initial states are set to fixed values. Simulation results are shown in Figures 2–10. Figures 2–5 show the constrained joint angular position and velocity tracking error performances, respectively. From Figures 2–5, it can be clearly seen that good transient performance is achieved by adjusting the performance function (53) and the design parameters. The control input response is given in Figure 6. The partial weight convergence is presented in Figures 7 and 8, from which it can be seen that only some of the weights converge to relatively large values, which means that only the neurons along the recurrent input signals are activated. Based on Theorem 9, the constant neural weights are chosen in the simulation as in (55). Figures 9 and 10 display the unknown system dynamics along the periodic reference signals, which are accurately approximated by the constant RBF NNs.

To further show the improved transient and steady-state tracking performance of the proposed control method with full-state tracking performance constraints, a simulation comparison is given between the proposed method and the existing method without prescribed performance [30]. For comparison purposes, the existing method [30] is also used to control the same 2-link robot manipulator with the same initial conditions and the same reference trajectory (50). For clarity, the existing method without prescribed performance proposed in [30] consists of a control law and neural weight update laws with appropriately chosen control parameters. To be fair, the control input signals of the two methods are required to have similar amplitudes, as shown in Figure 6. The simulation results for comparison are given in Figures 2–5. It is clearly shown in Figures 2–5 that the proposed adaptive neural control scheme with full-state tracking performance constraints achieves better transient and steady-state tracking control performance, with smaller overshoot, faster convergence rate, and smaller steady-state error.

Remark 11. It is well known that the tracking performance relies on the choice of the control parameters and the structure of the RBF neural networks. However, how to choose the design parameters to achieve good tracking performance is still an open problem. In this simulation, the RBF networks are constructed such that the neurons cover the entire NN input trajectory space. Moreover, the controller gains are chosen large enough and the small design constant in (23) small enough. It should be pointed out that these design parameters are chosen in this simulation by a trial-and-error method.

5.2. Neural Learning Control Results

By using the stored constant weight values in (55), the objective of this section is to invoke the static neural learning controller (45) to achieve improved control performance with full-state tracking error constraints for the same or similar control tasks. For comparison purposes, the controlled plant, the reference trajectory, and the constrained tracking error performance are chosen the same as in Section 5.1, while the initial conditions and the neural network structures are unchanged. With the control gains set to fixed values, the simulation results for the static neural learning controller (45) are shown in Figures 11–15. From Figures 11–14, it can be seen that a smaller overshoot and faster convergence are obtained using the learned knowledge in (55), while the full-state tracking errors satisfy the prescribed performance. It is worth pointing out that a smaller control signal is used by the neural learning controller to achieve the aforementioned improved tracking performance; see Figure 15 for details. Moreover, because the proposed static learning control scheme avoids the online adjustment of the neural weight values, the running time is reduced by nearly one-half for the same simulation interval and the same computer configuration. In particular, the static neural learning controller (45) avoids the trial-and-error tuning of the control design parameters and NN parameters that is required in the adaptive neural control process. Therefore, the proposed learning control scheme effectively avoids a great deal of the time consumed by the adaptive neural control process.

6. Conclusions

This paper focused on the problem of full-state tracking error constraints for an $n$-link rigid robot with unknown system dynamics and external disturbances. A performance transformation method was employed to transform the constrained full-state tracking errors into unconstrained ones. By combining a backstepping design with two independent Lyapunov functions, a novel adaptive neural control scheme was presented to guarantee that all signals in the closed-loop system are uniformly ultimately bounded, while achieving predefined transient and steady-state tracking performance for the link angular position and velocity tracking errors. In particular, in the steady-state control process, the proposed neural control scheme can acquire, express, and store knowledge of the unknown system dynamics. The stored knowledge was reused to complete the same or similar tasks, so that improved control performance was achieved with less computational burden and better transient tracking performance. It should be pointed out that the considered $n$-link rigid robot is a class of relatively simple multi-input multi-output nonlinear systems. Therefore, how to extend the proposed method to more complex nonlinear systems, such as nonaffine nonlinear systems, switched nonlinear systems, and stochastic large-scale systems, presents a challenging opportunity for future work.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (nos. 61374119, 61611130214, and 61473121), the Royal Society Newton Mobility Grant IE150858, the Guangdong Natural Science Foundation under Grant 2014A030312005, the Science and Technology New Star of Zhujiang, and the Fundamental Research Funds for the Central Universities.