Abstract

This paper focuses on neural learning from adaptive neural control (ANC) for a class of flexible joint manipulators under output tracking constraints. To facilitate the design, a new transformation function is introduced to convert the constrained tracking error into an unconstrained error variable. A novel adaptive neural dynamic surface control scheme is then proposed by exploiting the universal approximation ability of neural networks. The proposed control scheme not only decreases the dimension of the neural inputs but also reduces the number of neural approximators. Moreover, it is verified that all the closed-loop signals are uniformly ultimately bounded and that the constrained tracking error converges to a small neighborhood around zero in a finite time. In particular, the reduction of the number of neural input variables simplifies the verification of the persistent excitation (PE) condition for the neural networks (NNs). Subsequently, the proposed ANC scheme is verified recursively to be capable of acquiring and storing knowledge of the unknown system dynamics in constant neural weights. By reusing the stored knowledge, a neural learning controller is developed to achieve better control performance. Simulation results on a single-link flexible joint manipulator and experimental results on the Baxter robot are given to illustrate the effectiveness of the proposed scheme.

1. Introduction

Due to the great demands of industrial applications, the tracking control problem of flexible joint robot (FJR) manipulators has attracted much attention in recent years. Unlike rigid-joint robots, the joint flexibility of an FJR leads to a more complicated control situation, so its control problem becomes considerably more difficult. In the past few decades, considerable effort has been devoted to the study of FJR systems. Based on the FJR model presented in [1], a variety of nonlinear control methods have been presented, such as the backstepping method [2–4], sliding-mode control [5–8], switching control [9], fuzzy control [10], and neural network control [11, 12]. In view of the problems caused by the inherent structure of FJRs in practical circumstances, such as friction, time delay, and variable stiffness, some researchers have proposed effective strategies to address these problems [13–15]. Moreover, teleoperation control methods are also widely used in robotics research [16, 17].

Backstepping control [18] is known as one of the most popular methods for designing control schemes for FJRs. Nevertheless, it should be pointed out that this method suffers from a drawback called the “explosion of complexity” [19]. This problem generally occurs in the design of neural networks (NNs) during the backstepping procedure. To alleviate this problem, some researchers used the intermediate variables as neural inputs to reduce the dimension of the neural network input vector [20]. The method in [20] worked well, but the problem remained unsolved. Other researchers then proposed the dynamic surface control (DSC) method, which introduces a first-order filter at each step of the backstepping procedure [21]. Owing to this property of the DSC method, many researchers have presented control schemes combined with DSC [22–27]. In [24], a new robust output feedback control approach for flexible joint electrically driven robots via an observer-based dynamic surface method was proposed, which requires only position measurements of the system.

Besides, the transient and steady-state tracking performance constraints on the system output are an important issue that needs to be taken into consideration [28, 29]. Depending on the practical operating environment, the manipulator is not only demanded to track the reference trajectory accurately but also required to keep the tracking error within a specified range. To satisfy this requirement, a performance function transformation was used to convert the “constrained” system into an “unconstrained” one [30]. Based on the idea in [30], further studies on prescribed performance for a variety of systems have been reported [31–36]. The authors in [31, 32] presented novel controllers for FJRs to achieve tracking control of link angles with arbitrary prescribed performance requirements. By incorporating neural learning control schemes, further results were given in [33–35]. In [36], an adaptive prescribed performance tracking control scheme was investigated for a class of output feedback nonlinear systems with input unmodeled dynamics based on the dynamic surface control method.

In addition, adaptive neural control of nonlinear systems has been widely studied for decades, but most traditional works focus on system stability through online adjustment of the neural weights, and fewer works discuss the acquisition, storage, and utilization of knowledge in the form of optimal neural weights. To achieve such a learning ability, the key problem is to verify the persistent excitation (PE) condition. Thanks to the results in [37], a deterministic learning mechanism was proposed, which proves that the PE condition is satisfied for localized radial basis function (RBF) NNs whose centers lie in a neighborhood along recurrent orbits. The result was extended to nonlinear systems satisfying matching conditions [38–40]. By combining recursive design techniques, such as backstepping control, with a system decomposition strategy, deterministic learning was also applied to solve the learning problems of accurate identification of ocean surface ships and of robot manipulation in uncertain dynamical environments [41–44]. However, due to the recursive nature of backstepping control, the convergence of the neural weights has to be verified recursively based on the system decomposition strategy. This is a tedious and complex process since the number of intermediate variables grows drastically as the order of the system increases. Therefore, it is difficult, with existing works, to prove the convergence of all neural weights for high-order systems.

This paper focuses on learning from adaptive neural control of flexible joint manipulators with unknown dynamics under prescribed constraints. A performance function is introduced to transform the constrained tracking error into an unconstrained variable. To avoid the curse of dimensionality of RBF NNs, first-order filters are introduced to reduce the number of NN approximators and decrease the dimension of the NN inputs. The control law is constructed based on Lyapunov stability, which guarantees closed-loop stability and that the tracking error satisfies the prescribed performance during the transient process. Subsequently, thanks to the properties of DSC and the structural features of the considered manipulator, a system decomposition strategy is employed to decompose the stable closed-loop system into two linear time-varying (LTV) perturbed subsystems according to the number of NNs in the whole system. Through the recursive design, the recurrent properties of the NN input variables are easily proven. Consequently, with the PE condition of the RBF NNs satisfied, the convergence of partial neural weights is verified, and the unknown system dynamics are approximated accurately in a local region along the recurrent orbits. By reusing the stored constant neural weights, a neural learning controller is developed to achieve closed-loop stability and better control performance under the prescribed constraints for the same or a similar control task. Compared with existing neural learning results, the proposed neural learning control scheme not only achieves better control performance with specified transient and steady-state constraints but also significantly reduces the dimension of the NN inputs and the number of NNs.

This paper is organized as follows. In Section 2, the problem formulation and preliminaries are stated before the control scheme design. In Section 3, a novel adaptive neural dynamic surface control scheme is proposed to guarantee that the constrained tracking error converges to a small neighborhood around zero with the prescribed performance in a finite time, and all the signals in the closed-loop system are uniformly ultimately bounded. Section 4 shows that the knowledge acquisition, expression, storage, and utilization of the manipulator’s unknown dynamics can be achieved after the steady-state control process. To verify the effectiveness of the proposed control scheme, simulation results on a single-link flexible joint manipulator and experiment results on Baxter robot are given in Section 5. Last but not least, the conclusions are drawn in Section 6.

2. Problem Formulation and Preliminaries

2.1. System Formulation

In this paper, we consider an -link manipulator with flexible joints, whose model is described by [1], where is the vector of links’ angle positions and is the vector of motors’ angle positions. is the link inertia matrix and is the diagonal and positive definite motor inertia matrix. Moreover, denotes the Coriolis and centrifugal matrix and represents the gravitational terms. is a diagonal and positive definite matrix of joint spring constants; thus is also positive definite. Finally, is the control input of system (1), and the output of system (1) is .

Property 1 (see [2]). The inertia matrix is symmetric and positive definite; both and are uniformly bounded.

Property 2 (see [2]). The Coriolis and centrifugal matrix can be defined such that is skew symmetric; that is, .

The reference trajectory vector is generated by the following smooth and bounded reference model, where and is the system output vector. is a smooth known nonlinear function. Assume that , are recurrent signals and that the reference orbit (denoted as ) is a recurrent motion. Moreover, , , , , and are assumed to be unknown terms.

Our goal is to design a neural learning controller which forces the tracking error vector (i.e., ) to converge to a small neighborhood around zero with prescribed performance in a finite time. Before the design of the learning control (LC) scheme, a stable adaptive neural dynamic surface controller with prescribed performance is developed to verify the feasibility of the ANC scheme. According to the deterministic learning theory, the unknown system dynamics are accurately approximated by localized RBF networks along the recurrent orbits of the NN inputs. Then, based on the ANC scheme and the approximation by the localized RBF networks, the knowledge of the unknown system dynamics is stored in static neural weights, which are then reused to develop a neural learning controller. This neural learning controller is verified to achieve closed-loop stability and better control performance with prescribed constraints for the same or similar tasks.

2.2. Prescribed Tracking Performance

In this paper, the output error vector of system (1) is defined as . To achieve the prescribed performance (i.e., overshoot, convergence rate, and convergence accuracy), each element of is constrained within the following prescribed region, where and are positive design constants and is a bounded, smooth, strictly positive, and decreasing performance function. In addition, is chosen in the following form by setting and , where , , and are positive constants. From (3) and (4), it can be concluded that the convergence rate of is constrained by the decreasing rate of , its maximum overshoot at the initial moment is constrained by and , and its steady-state error is constrained within a range from to .
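For intuition, the following minimal Python sketch evaluates an exponentially decaying performance function of the standard form and checks whether a tracking error lies inside the prescribed region. The exact expression and parameter values of (4) are not reproduced here, so all names and numbers below are illustrative assumptions.

```python
import numpy as np

def performance_bound(t, rho0=1.0, rho_inf=0.05, l=2.0):
    """Exponentially decaying performance function (assumed standard form):
        rho(t) = (rho0 - rho_inf) * exp(-l * t) + rho_inf
    rho0 bounds the initial error, rho_inf bounds the steady-state error,
    and l sets the minimum convergence rate."""
    return (rho0 - rho_inf) * np.exp(-l * t) + rho_inf

def inside_prescribed_region(e, t, delta_lower=1.0, delta_upper=1.0, **kw):
    """Check the constraint  -delta_lower*rho(t) < e(t) < delta_upper*rho(t)."""
    rho = performance_bound(t, **kw)
    return (-delta_lower * rho < e) and (e < delta_upper * rho)
```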

2.3. RBF Neural Network

According to [45], an RBF NN can approximate any continuous function vector over a compact set to arbitrary accuracy, where is the ideal weight matrix, is the NN node number, is the basis function vector with chosen as the Gaussian function , and is the approximation error vector which satisfies , with constant .
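As an illustrative sketch (not the paper's specific network), the Gaussian RBF regressor and the resulting approximation of the form W^T S(x) can be computed as follows; the center layout, width, and weight values are placeholders.

```python
import numpy as np

def rbf_regressor(x, centers, width):
    """Gaussian RBF regressor vector S(x).

    centers : (N, d) array of RBF centers (e.g., placed on a regular lattice)
    width   : common Gaussian width (assumed identical for all nodes)
    """
    diff = centers - np.asarray(x)                      # (N, d)
    return np.exp(-np.sum(diff**2, axis=1) / (2.0 * width**2))

def rbf_approximation(x, W, centers, width):
    """NN output W^T S(x) approximating an unknown continuous function;
    W is an (N, m) weight matrix for an m-dimensional output."""
    return W.T @ rbf_regressor(x, centers, width)
```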

On the other hand, it has been shown in [46] that, for any bounded trajectory over the compact set , the continuous and smooth function can be approximated to an arbitrary accuracy using the localized RBF NNs with a limited number of neurons located in a local region along the trajectory: where is the subvector of and with . is the approximation error which is close to .

Lemma 1 ((partial PE condition for RBF NNs) [46]). Consider any continuous recurrent trajectory . Assume that is a continuous map from into , and remains in a bounded compact set with . Then, for the RBF NN with centers placed on a regular lattice (large enough to cover the compact set ), the regression subvector consisting of RBFs with centers located in a small neighborhood of is persistently exciting.

3. Adaptive Neural DSC Design with Predefined Tracking Performance

In this section, a performance function is introduced to describe the constraints of system (1). Then an adaptive neural DSC scheme is developed, in which the adaptive control law is designed based on the transformed error. Meanwhile, RBF NNs are used to approximate the unknown dynamics.

Step 1. Similarly to the traditional backstepping design, we set

It should be pointed out that the errors defined in previous traditional designs are unrestricted [20], whereas is constrained to satisfy condition (3) in this paper, which is rewritten as

This implies that cannot be used directly in the design because of the limitation of the traditional design method. To solve the constrained tracking control problem, the constrained error should be transformed equivalently into an unconstrained one. Therefore, a new error vector, called the transformed error vector, is defined as . Define a smooth transformation function vector with chosen as follows, where , and . Moreover, for symmetric tracking error constraints , the transformation function (9) can be constructed as . For clarity, Figure 1 illustrates the relationship between and .
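To make the idea concrete, the sketch below uses one commonly adopted smooth, strictly increasing transformation that maps the real line onto the normalized constraint interval. It is an assumed illustrative form, not necessarily identical to the specific function (9) chosen in the paper.

```python
import numpy as np

def S(z, d_low=1.0, d_up=1.0):
    """A smooth, strictly increasing map from R onto (-d_low, d_up)
    (an assumed form for illustration).  Writing the constrained error as
    e = rho(t) * S(z) guarantees  -d_low*rho(t) < e(t) < d_up*rho(t)
    for any value of the unconstrained transformed error z."""
    ez, emz = np.exp(z), np.exp(-z)
    return (d_up * ez - d_low * emz) / (ez + emz)

def S_inverse(ratio, d_low=1.0, d_up=1.0):
    """Inverse transformation z = S^{-1}(e / rho), well defined for
    -d_low < ratio < d_up; this unconstrained z is what the controller uses."""
    return 0.5 * np.log((d_low + ratio) / (d_up - ratio))
```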

It can be concluded from (9) and Figure 1 that the transformation function is smooth and strictly increasing while possessing the following properties:

By combining (8) with (10), can be rewritten as

Since is a strictly monotonic increasing function and , the inverse function of exists, which can be expressed as

Noting that , its derivative can be presented as where , with being presented as

It is clear that is positive definite, which is helpful for the stability analysis. By introducing a new filter variable and noting , the virtual controller is constructed as follows, where is a diagonal and positive design matrix. Taking as the input of a first-order filter and as its output, a differential equation is constructed as follows, where is the filter time constant and we set ; then (13) can be rewritten as

Step 2. Let , its derivative can be obtained:

Define the unknown dynamics in (18) as where

According to the property of RBF NN, can be approximated accurately by RBF NN and (19) is rewritten as where is the bounded approximation error vector which satisfies . Define as the estimate of and set . Then a new filter variable is introduced and note ; (18) can be rewritten by combining (19) and (21):

Then the virtual controller is constructed as follows, where is a diagonal and positive design matrix. The update law of the NN weights is given as follows, with a diagonal matrix and a small value introduced to enhance the robustness of the controller (23).
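For illustration, a discretized sigma-modified adaptive law of the general type used here can be written as below. The exact update law (24), its sign conventions, and the gain values are not reproduced, so this is only a hedged sketch.

```python
import numpy as np

def nn_weight_update(W_hat, S_z, e_surf, Gamma, sigma, dt):
    """One Euler step of a sigma-modified adaptive law (illustrative form):

        W_hat_dot = Gamma @ (outer(S(z), e) - sigma * W_hat)

    S_z    : RBF regressor vector at the current NN input, shape (N,)
    e_surf : corresponding surface/tracking error vector, shape (m,)
    Gamma  : positive-definite (here diagonal) adaptation gain, shape (N, N)
    sigma  : small positive constant adding robustness (sigma-modification)
    """
    W_hat_dot = Gamma @ (np.outer(S_z, e_surf) - sigma * W_hat)
    return W_hat + dt * W_hat_dot
```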

Taking as the input of a first-order filter and as its output, a differential equation is constructed as follows, where is the filter time constant and we set ; then (22) can be rewritten as

Step 3. Define , its derivative can be obtained:

Introduce a new filter variable and note . Then the virtual controller is constructed as follows, where is a diagonal and positive design matrix. Taking as the input of a first-order filter and as its output, a differential equation is constructed as follows, where is the filter time constant and we set ; then (27) can be rewritten as

Step 4. Let ; the following can be obtained: Define the unknown dynamics of the system as where

According to the property of RBF NN, can be approximated accurately by RBF NN and (32) is rewritten as where is the ideal constant weight matrix with being the NN node number, is the basis function vector, and is the bounded approximation error vector which satisfies . Define as the estimate of and let . Then (31) can be rewritten by combining (32) and (34):

Then the control input is constructed as follows, where is a diagonal and positive design matrix. The update law of the NN weights is given as follows, with a diagonal matrix and a small value introduced to enhance the robustness of the controller (36). Then (35) can be rewritten as

Let us construct the following Lyapunov function candidate: where , , , and .

Remark 2. It should be pointed out that, in the adaptive neural backstepping design [20], the derivative of the virtual control in Step 2 is usually used to design the virtual control in Step 3. However, according to (1) and (23), it can be seen clearly that is not available because unknown terms, such as , , and , are included in . Therefore, a neural network would have to be employed in Step 3 of the backstepping to approximate the unknown dynamics in . However, employing too many neural networks makes the control scheme difficult to implement. To solve this problem, this paper introduces a new variable in (25) to design the virtual control using a first-order filter. From (25), it is easy to calculate that . The advantage of the proposed approach is that the unknown dynamics in the previous step do not affect the design of the virtual control in the next step, so that the number of NNs employed can be greatly reduced. Moreover, the proposed method uses , instead of the intermediate variables used in [20], as the neural input variable, so that there are only neural input variables for the neural networks and , respectively, which significantly reduces the number of neural input variables compared with [20], where and neural input variables are used in and .
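A minimal numerical sketch of the first-order filtering idea discussed above is given below; the variable names, time constant, and step size are illustrative. The filter simply replaces analytic differentiation of the virtual control.

```python
def dsc_filter_step(x_f, alpha, tau, dt):
    """One Euler step of the DSC first-order filter

        tau * x_f_dot + x_f = alpha,   x_f(0) = alpha(0),

    where alpha is the virtual control from the previous step.  The filter
    output x_f and its derivative x_f_dot = (alpha - x_f) / tau are used in
    the next design step, so the analytic derivative of alpha (which contains
    unknown dynamics) is never needed.
    """
    x_f_dot = (alpha - x_f) / tau
    return x_f + dt * x_f_dot, x_f_dot
```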

Theorem 3. Consider the manipulator model (1), the reference trajectory , the prescribed performance bounds (8), the state transformation (11), the adaptive NN control law (36), the weight update laws (24) and (37), and the Lyapunov function (39). Given any constant , for any bounded initial conditions satisfying the prescribed performance (8) and , there exist design parameters , , , , , , , , , , and , such that the proposed control scheme guarantees that () all the signals in the closed-loop system are uniformly ultimately bounded and () the tracking error converges to a small neighborhood around zero with the prescribed performance (8) in a finite time .

Proof. See Appendix A.

4. Dynamic Neural Learning

In this section, we will show the learning ability of RBF NNs for unknown system dynamics and in the case of the manipulator with predefined tracking performance. Subsequently, the stored knowledge on the unknown dynamics will be utilized to design a neural learning controller achieving better control performance, while satisfying the prescribed performance.

4.1. Learning from Adaptive Neural DSC

According to Lemma 1, it can be concluded that a recurrent orbit can make regression subvectors satisfy the partial PE condition, which is a key condition to ensure the accurate convergence of neural weights.

Theorem 4. Consider the closed-loop system consisting of the flexible joint manipulator model (1), the reference trajectory , the prescribed performance bounds (8), the state transformation (11), the adaptive NN control law (36), and the weight update laws (24) and (37). Then, for any recurrent orbit and initial condition ( is an appropriately chosen compact set) satisfying the prescribed performance (8) and , the neural weights converge to small neighborhoods around the optimal values , and the locally accurate approximation of the system dynamics is obtained by the stored knowledge : with , . Here and are time segments after the steady-state control process.

Proof. From Theorem 3, all the signals in the closed-loop system are uniformly ultimately bounded and the tracking error vector converges to a small neighborhood around zero with the prescribed performance in a finite time . Thus, the state converges closely to the recurrent signals for all . In addition, it can be obtained from the proof of Theorem 3 that the transformed error vector converges exponentially to a small neighborhood around zero in a finite time . From (15), it can be concluded that virtual control is recurrent with the same period as by combining the convergence of and . Noting with and being close to a small neighborhood around zero based on Theorem 3, then is a recurrent signal with the same period as . In addition, is also a recurrent signal with the same period as since and is a small value. From (16), it can be obtained that is a recurrent signal as well. Therefore, the NN inputs are recurrent for all , and a partial PE condition of is satisfied according to Lemma 1. By combining the convergence of and the localized RBF NN along the recurrent signals , it can be obtained from (24) and (26) that where the subscript stands for the region near the orbits , is the subvector of consisting of the corresponding RBFs, and is the corresponding weight submatrix of . Moreover, the subscript represents the region away from the orbits , and is the NN approximation error along the orbits . Since is small, is close to .

It should be noted that the perturbation term may be large, which makes accurate convergence of the neural weights difficult. To solve this problem, a state transformation is introduced to eliminate the influence of . Subsequently, setting and , (41) can be transformed into the following form, where

According to Theorem 3, is a small value, and can be made small by choosing a small design parameter . Therefore, system (43) can be considered as a linear time-varying (LTV) system with a small perturbation term. Choose ; then where , . Choose appropriately such that . Subsequently, based on the perturbation theory from the Lemma in [47], both and converge exponentially to a small neighborhood around zero in a finite time , and the size of the neighborhood is determined by and , respectively. Noting , it is clear that converges to a small neighborhood of the optimal weights in a finite time , and the constant weights can be obtained from (40). According to the localization property of RBF NNs, the system dynamics can be described as follows, where both and are close to due to the convergence of .

According to the above analysis and noting , there exists a constant , such that the virtual control can be rewritten as for all , where is a small value because of the convergence of . According to Theorem 3, is a recurrent signal. Since and is a small value, is also recurrent with the same period as . From (25), it can be obtained that is also a recurrent signal as well. From (28), by combining the convergence of and , virtual control is a recurrent signal with the same period as . Since and is a small value, is also recurrent with the same period as . From (29), it can be obtained that is also a recurrent signal as well. Therefore, the NN inputs are recurrent for all , and a partial PE condition of is satisfied according to Lemma 1. By combining the convergence of and the localized RBF NN along the recurrent signals , it can be obtained from (37) and (38) that

Similarly, is the subvector of consisting of the RBFs near the orbits , and is the corresponding weight submatrix of . Moreover, is the NN approximation error along the orbits . Since is small, is close to .

Similarly, the perturbation term may be large, which makes accurate convergence of the neural weights difficult. To solve this problem, a state transformation is introduced to eliminate the influence of . Subsequently, setting and , (46) can be transformed into the following form, where

Following similar steps and choosing , we have with . Then, it can be proven that converges to a small neighborhood of the optimal weights in a finite time , and the constant weights can be obtained from (40). According to the localization property of RBF NNs, the system dynamics can be described as follows, where is close to due to the convergence of . Therefore, the dynamics , can be accurately approximated by the constant RBF NNs with the stored knowledge obtained in (40).

4.2. Neural Learning Control Using the Stored Knowledge

Since the locally accurate NN approximation can be achieved by the constant RBF NNs , for the same or a similar control task we reuse this knowledge to design a neural learning controller for system (1), with the virtual controller given below, where and are the accurate neural approximators of the unknown system dynamics and , respectively, and and are the constant weight matrices obtained from the previous control process. Similar to the proof of Theorem 3, another Lyapunov function candidate is constructed as follows, where , , , and .
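To make the reuse of the stored knowledge concrete, the sketch below first forms constant weights by time-averaging the converged estimates over a post-transient interval (in the spirit of (40)) and then uses them, without further adaptation, in a feedback-plus-NN-compensation control law. All symbols and gains are placeholders; the exact expressions (53) and (54) are not reproduced.

```python
import numpy as np

def store_constant_weights(W_hat_history, t, t_a, t_b):
    """Constant weights taken as the time average of W_hat(t) over [t_a, t_b],
    a time segment after the steady-state control process (cf. (40))."""
    mask = (t >= t_a) & (t <= t_b)
    return W_hat_history[mask].mean(axis=0)

def learning_controller(z2, z4, S1, S2, W1_bar, W2_bar, K2, K4):
    """Illustrative learning controller: stored constant weights replace the
    online-adapted ones, so no adaptation law is run for the same or a
    similar task.  z2, z4 are surface errors, S1, S2 the RBF regressors,
    and K2, K4 positive-definite gain matrices (all placeholders)."""
    alpha = -K2 @ z2 - W1_bar.T @ S1  # virtual control compensating the first unknown dynamics
    u     = -K4 @ z4 - W2_bar.T @ S2  # actual control compensating the second unknown dynamics
    return alpha, u
```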

Theorem 5. Consider the closed-loop system consisting of the manipulator model (1), the reference trajectory , the prescribed performance bounds (8), the state transformation (11), the neural learning controller (53) with the constant weights given by (40), and the Lyapunov function (55). Then, for initial conditions which generate the same or a similar recurrent reference orbit as in Theorem 4 and which satisfy the prescribed performance (8) and with given , all the closed-loop signals remain uniformly ultimately bounded, and the tracking error converges to a small neighborhood around zero with the prescribed performance (8) when the initial conditions are in a close vicinity of .

Proof. See Appendix B.

Remark 6. For clarity, a block diagram of the proposed schemes is shown in Figure 2. As shown in Figure 2, the main difference between adaptive neural control and neural learning control lies in the adaptation of the NN weights. The neural weights and are updated online in the adaptive neural control process, while the stored constant weights and are reused in the neural learning process for the same or a similar control task. Without repeated online adjustment of the neural weights, the neural learning controller (53) and (54) can achieve better control performance with a faster tracking convergence rate and a smaller tracking error.

5. Simulation and Experiment

In this section, to illustrate the effectiveness of the proposed approach, a single-link manipulator system with a flexible joint is considered in the following form, where is the mass, is the gravitational acceleration, and is the length of the link. Figure 3 illustrates the structure of a single-link flexible joint manipulator. In the simulation, the system parameters are chosen as , , , , , and . The desired reference trajectory is then generated from the following equation:
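For reference, a minimal simulation sketch of the standard single-link flexible-joint model from [1], on which (56) is based, is given below; the parameter values are placeholders rather than the exact numbers used in the paper.

```python
import numpy as np

def flexible_joint_dynamics(x, u, I=1.0, J=1.0, K=40.0, M=1.0, g=9.8, L=1.0):
    """State derivative of the standard single-link flexible-joint model [1]:

        I*q1_dd + M*g*L*sin(q1) + K*(q1 - q2) = 0
        J*q2_dd - K*(q1 - q2)                 = u

    with state x = [q1, dq1, q2, dq2] (link angle/velocity, motor
    angle/velocity) and motor torque u.  Parameter values are illustrative.
    """
    q1, dq1, q2, dq2 = x
    ddq1 = (-M * g * L * np.sin(q1) - K * (q1 - q2)) / I
    ddq2 = (u + K * (q1 - q2)) / J
    return np.array([dq1, ddq1, dq2, ddq2])

# simple Euler integration of one step, e.g. with dt = 1e-3:
# x_next = x + 1e-3 * flexible_joint_dynamics(x, u)
```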

5.1. Numerical Simulation Results

Based on Theorem 3, the adaptive control law is chosen as (36), and the virtual controllers are chosen as (15), (23), and (28) with the weight update laws (24) and (37). In the performance function, , , , and . is constructed with 121 neurons whose centers are evenly spaced on with widths , . Similarly, is constructed with 2783 neurons whose centers are evenly spaced on with widths , , . The other design parameters are , , , , , , and . The initial states are and . The initial weights and are both set to zero.
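The regular lattice of RBF centers mentioned above can be generated as in the sketch below. The grid ranges and input dimensions here are assumptions (for example, 11 evenly spaced points per dimension in two dimensions would give 121 nodes); the actual intervals used in the paper are not reproduced.

```python
import numpy as np
from itertools import product

def lattice_centers(lows, highs, n_per_dim):
    """Regular lattice of RBF centers covering the box [lows, highs]
    (one evenly spaced grid per input dimension)."""
    axes = [np.linspace(lo, hi, n) for lo, hi, n in zip(lows, highs, n_per_dim)]
    return np.array(list(product(*axes)))

# e.g. 11 x 11 = 121 centers for a 2-D NN input (ranges are placeholders):
centers_W1 = lattice_centers([-2.0, -2.0], [2.0, 2.0], [11, 11])
```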

According to (40), the constant neural weights and in (53) and (54) are obtained as and . For comparison with the adaptive neural DSC results, the parameters of the performance function and the initial states in this simulation are set to the same values as in the adaptive neural DSC, while the control gains are set as , , , and .

The related simulation results are shown in Figures 4–10. Figure 4 illustrates that, for the proposed adaptive neural DSC scheme, the tracking error ultimately converges to a small neighborhood of zero while satisfying the prescribed performance; for the proposed learning control scheme, not only does the tracking error satisfy the prescribed performance, but the convergence rate is also faster under a similar control input amplitude, as shown in Figure 7. Moreover, the time consumption is decreased by 2/3 in comparison with the adaptive neural DSC scheme. Figure 5 shows that the output of system (56) tracks the reference trajectory quickly. Figure 6 shows that the other state variables are bounded under the proposed scheme. Figure 8 shows the NN approximation ability for the unknown dynamics. Figures 9 and 10 show that the NN weights converge to constant values under the weight update laws.

Remark 7. To compare the tracking performance under different parameter selections in (3), one set of parameters in (3) is chosen as , , , and , and the other is set as , , , and . Define the tracking error of the former as and that of the latter as . Figure 11 illustrates the two different tracking performances of the system output obtained by selecting different constraint parameters. It is evident that the tracking performance is affected by the parameter selection of the performance function: as the range of permissible error gets smaller, the tracking error is forced to converge faster and the tracking accuracy becomes higher.

5.2. Experiment Results on Baxter Robot

Moreover, to validate the effectiveness of the proposed control scheme, the Baxter bimanual robot is used in the experiment, as shown in Figure 12. It has two 7-DOF arms and advanced sensing technologies, including position, force, and torque sensors and control at every joint. The resolution of the joint sensors is 14 bits over 360 degrees (0.022 degrees per tick), while the maximum joint torques that can be applied are 50 Nm (the first four joints) and 15 Nm (the last three joints). For comparison with the simulation results, the desired reference trajectory in the experiment is the same as the one in the simulation, which is also generated from (57). In the experiment, one of the Baxter robot’s links (such as the wrist link of the robot’s right arm) is commanded to track the desired reference trajectory, and the tracking error between the link’s angle position and the reference trajectory is forced to converge to a small neighborhood around zero with prescribed performance in a finite time, while the other links of the robot stay still. Figure 13 illustrates the desired motion of the robot’s link.

To verify the effectiveness of the neural learning control scheme on the Baxter robot, the constant weights and obtained in Section 5.1 are reused in the experiment. For comparison, the parameters of the performance function and the initial states are set to the same values as in the neural learning control simulation, while the control gains are set as , , , and .

The related results are shown in Figures 14–16. Figure 14 shows the difference between the LC results in simulation and the LC results on the Baxter robot. Although the vibration of the robot affects the tracking performance, it is evident that the tracking error of the Baxter robot also ultimately converges to a small neighborhood of zero while satisfying the prescribed performance. Figure 15 shows that the output of the Baxter robot’s link (i.e., of system (56)) tracks the reference trajectory quickly and that the other state variables are bounded under the proposed learning control scheme. Figure 16 shows that the control input of the Baxter robot is also bounded with small overshoot and small mechanical vibration.

6. Conclusion

In this paper, we studied learning from adaptive neural dynamic surface control for a class of flexible joint manipulators with unknown dynamics under prescribed constraints. A novel error transformation function was utilized to transform the constrained tracking problem into the equivalent unconstrained one so as to facilitate the controller design. Furthermore, by combining the DSC method, which was used to reduce the number of NN approximators and decrease the dimension of the NN inputs, a novel adaptive neural control scheme was proposed to guarantee the prescribed performance during the transient process. Closed-loop stability and the desired control performance were then established via the construction of a Lyapunov function. After the stable control process, since only two NNs were used in the controller design, the recurrent property of the NN input variables and the partial PE condition of the RBF NNs were proved recursively. Therefore, locally accurate approximations of the unknown system dynamics by the RBF NNs were achieved, and the proposed control scheme was verified to be capable of storing the learned knowledge in constant RBF NNs. Finally, the stored knowledge was reused to develop the neural learning controller for the same system model and the same or a similar control task, so that closed-loop stability and better control performance were achieved under the prescribed constraint. Simulation results for a single-link flexible joint manipulator and experimental results for the Baxter robot were presented to demonstrate the effectiveness of the proposed control scheme.

Appendix

A. Proof of Theorem 3

Firstly, the derivative of , , and can be expressed as where . Similarly, , , and they are continuous functions. can be expressed as where , , , . From , the following inequality can be obtained:

In addition, since , and are recurrent signals, there exists a compact set with . Furthermore, the set is also compact by function (39), which implies that is compact as well. Consequently, , , and are in the compact set . Therefore, there exists a maximum constant for on such that . According to Young’s inequality, the following inequalities are obtained: where is a positive design parameter. Let , , and . Then the inequality for can be obtained: where , , , , . Choose appropriately such that . Moreover, set values of , , , , , , , appropriately and (A.5) can be rewritten as where and . Let ; then which means that, as , , thus

Note that can be made arbitrarily small by choosing appropriate design parameters , , , and . Thus, for a given , there exists a finite time such that for all . Based on the error transformation (11), the tracking error converges to a small neighborhood around zero with the guaranteed prescribed performance (3) in a finite time as well, and all the signals in the closed-loop system are uniformly ultimately bounded.

B. Proof of Theorem 5

Similar to the adaptive neural DSC design in the Section 3, the derivative of along (53) and (54) yields

According to Theorem 4, there exists a small positive constant for the recurrent orbit , such that . Thus, by Young’s inequality, the derivative of along (A.1) and (B.1) yields where , , , . Let , , and ; (B.2) can be rewritten as where and . Let ; then, based on the similar analysis in Theorem 3, it can be concluded that the tracking error converges to a small neighborhood around zero with the prescribed performance, and all the signals in the closed-loop system are uniformly ultimately bounded.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to thank Professor Chenguang Yang for the experiment platform. Moreover, this work was partially supported by the National Natural Science Foundation of China (nos. 61773169, 61374119, 61473121, and 61611130214), the Royal Society Newton Mobility Grant IE150858, the Guangdong Natural Science Foundation under Grants 2017A030313369 and 2017A030313381, the youth talent of Guangdong Province, and the Fundamental Research Funds for the Central Universities.