Abstract

This paper deals with the problem of trajectory tracking for a broad class of uncertain nonlinear systems with multiple inputs, each subject to an unknown symmetric deadzone. On the basis of a model of the deadzone as a combination of a linear term and a disturbance-like term, a continuous-time recurrent neural network is directly employed in order to identify the uncertain dynamics. By using a Lyapunov analysis, the exponential convergence of the identification error to a bounded zone is demonstrated. Subsequently, by a proper control law, the state of the neural network is compelled to follow a bounded reference trajectory. This control law is designed in such a way that the singularity problem is conveniently avoided and the exponential convergence to a bounded zone of the difference between the state of the neural identifier and the reference trajectory can be proven. Thus, the exponential convergence of the tracking error to a bounded zone and the boundedness of all closed-loop signals can be guaranteed. One of the main advantages of the proposed strategy is that the controller can work satisfactorily without any specific knowledge of an upper bound for the unmodeled dynamics and/or the disturbance term.

1. Introduction

After more than half a century of ongoing research, the adaptive control of linear and nonlinear systems with linearly parameterized unknown constants is currently a solid area within automatic control theory. In order to extend these results to more general classes of systems, intense research relying on the universal approximation capability of artificial neural networks has been carried out during the last twenty years [17].

An artificial neural network (ANN) can be simply considered as a generic nonlinear mathematical formula whose parameters are adjusted in order to represent the behavior of a static or dynamic system [5]. These parameters are called weights. Generally speaking, ANNs can be classified as feedforward (static) networks, based on the backpropagation technique [2], or as recurrent (dynamic) ones [4]. In the first type of network, the system dynamics is approximated by a static mapping. These networks have two major disadvantages: a slow learning rate and a high sensitivity to training data. The second approach (recurrent ANNs) incorporates feedback into the network structure. Due to this feature, recurrent neural networks can overcome many problems associated with static ANNs, such as the search for global extrema, and consequently have better approximation properties [8]. Depending on their structure, recurrent neural networks can be classified as discrete-time or continuous-time ones.

Much of the early research on the theory and application of control based on continuous-time recurrent neural networks was synthesized in [4, 6, 9, 10]. In [9], a strategy of indirect adaptive control based on a parallel recurrent neural network was presented. In that study, the asymptotic convergence of the average integral identification error to a bounded zone was guaranteed. In order to prove this result, a Riccati matrix equation was employed. Based on the neural model of the uncertain system, a local optimal-type controller was developed. In spite of the significant contributions presented in that study, the use of the Riccati matrix equation can be somewhat restrictive, and certain important questions, such as the possible singularity of the control law, were not considered. On the basis of this work, the exponential convergence of the identification error to a bounded zone could be guaranteed in [11–13]. However, the need for a Riccati matrix equation could not be avoided. In [10], a tracking controller based on a series-parallel neural network model was proposed. In that study, the assumptions about the uncertain system were less restrictive than in [9], a Riccati matrix equation was not necessary, and the possibility of the singularity problem for the control law was conveniently avoided. In contrast, the control law proposed in [10] is somewhat complex. In spite of the importance of the aforementioned works, the case when the presence of a deadzone degrades the performance of an automatic control system was not taken into account.

The deadzone is a nonsmooth nonlinearity commonly found in many practical systems, such as hydraulic positioning systems [14], pneumatic servo systems [15], and DC servo motors. When the deadzone is not considered explicitly during the design process, the performance of the control system can be degraded due to an increase of the steady-state error, the presence of limit cycles, or even instability [16–19]. A direct way of compensating for the deleterious effect of the deadzone is by calculating its inverse. However, this is not an easy task because, in many practical situations, both the parameters and the output of the deadzone are unknown. To overcome this problem, in a pioneering work [16], Tao and Kokotović proposed to employ an adaptive inverse of the deadzone. This scheme was applied to linear systems in a transfer function form. Cho and Bai [20] extended this work and achieved perfect asymptotic adaptive cancellation of the deadzone. However, their work assumed that the deadzone output was measurable. In [21], the work of Tao and Kokotović was extended to linear systems in state space form with a nonmeasurable deadzone output. In [22], a new smooth parameterization of the deadzone was proposed, and a class of SISO systems with completely known nonlinear functions and linearly parameterized unknown constants was controlled by using the backstepping technique. In order to avoid the construction of the adaptive inverse, in [23], the same class of nonlinear systems as in [22] was controlled by means of a robust adaptive approach and by modeling the deadzone as a combination of a linear term and a disturbance-like term. The controller design in [23] was based on the assumption that maximum and minimum values for the deadzone parameters are a priori known. However, a specific procedure to find such bounds was not provided.
Based on the universal approximation property of neural networks, a wider class of SISO systems in Brunovsky canonical form with completely unknown nonlinear functions and an unknown constant control gain was considered in [24–26]. At first glance, the generalization of these results to the case when the control gain is a varying, state-dependent function may seem trivial. Nevertheless, the solution to this problem is not so simple due to the possibility of singularity in the control law. In [27, 28], this problem was overcome satisfactorily.

All the aforementioned works about the deadzone studied a very particular class of systems, that is, systems in strict Brunovsky canonical form with a unique input. In this paper, by combining, in an original way, the design strategies from [9, 10, 23], we can handle a broad class of uncertain nonlinear systems with multiple inputs, each subject to an unknown symmetric deadzone. On the basis of a model of the deadzone as a combination of a linear term and a disturbance-like term, a continuous-time recurrent neural network is directly employed in order to identify the uncertain dynamics. By using a Lyapunov analysis, the exponential convergence of the identification error to a bounded zone is demonstrated. Subsequently, by a proper control law, the state of the neural network is compelled to follow a bounded reference trajectory. This control law is designed in such a way that the singularity problem is conveniently avoided as in [10] and the exponential convergence to a bounded zone of the difference between the state of the neural identifier and the reference trajectory can be proven. Thus, the exponential convergence of the tracking error to a bounded zone and the boundedness of all closed-loop signals can be guaranteed. To the best of our knowledge, this is the first time that recurrent neural networks are utilized in the context of the control of uncertain systems with deadzone.

2. Preliminaries

In this study, the system to be controlled consists of an unknown multi-input nonlinear plant with unknown deadzones in the following form: where is the measurable state vector for , is an unknown but continuous nonlinear vector function, is an unknown but continuous nonlinear matrix function, represents an unknown but bounded deterministic disturbance, the ith element of the vector , that is, , represents the output of the ith deadzone, is the input to the ith deadzone, and represent the right and left constant breakpoints of the ith deadzone, and is the constant slope of the ith deadzone. In accordance with [16, 17], the deadzone model (2.2) is a static simplification of diverse physical phenomena with negligible fast dynamics. Note that is the actual control input to the global system described by (2.1) and (2.2). Hereafter it is considered that the following assumptions are valid.

Assumption 2.1. The plant described by (2.1) is controllable.

Assumption 2.2. The ith deadzone output, that is, is not available for measurement.

Assumption 2.3. Although the ith deadzone parameters , , and are unknown constants, we can assure that , , and for all .

2.1. Statement of the Problem

The objective that we are trying to achieve is to determine a control signal such that the state follows a given bounded reference trajectory , and, at the same time, all closed-loop signals stay bounded.

Assumption 2.4. Without loss of generality, we consider that is generated by the following exosystem: where is an unknown but continuous nonlinear vector function.

2.2. Deadzone Representation as a Linear Term and a Disturbance-Like Term

The deadzone model (2.2) can alternatively be described as [23, 29]: where is given by Note that (2.5) is the negative of a saturation function. Thus, although may not be exactly known, its boundedness can be assured. Consider that the positive constant is an upper bound for , that is, .

Based on (2.4), the relationship between and can be expressed as where and is given by . Clearly, . Consider that the positive constant is an upper bound for .
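To make the decomposition concrete, the following sketch verifies numerically that the disturbance-like term, the difference between the deadzone output and its linear part, is bounded by a constant, as the text asserts. The slope and breakpoint values are illustrative stand-ins for the symbols of (2.2).

```python
import numpy as np

# Illustrative check of the decomposition (2.4): D(v) = m*v + d(v), where the
# disturbance-like term d(v) is the negative of a saturation function and is
# therefore bounded. The values m = 1, br = 0.5, bl = -0.5 are stand-ins.
m, br, bl = 1.0, 0.5, -0.5

def deadzone(v):
    if v >= br:
        return m * (v - br)
    if v <= bl:
        return m * (v - bl)
    return 0.0

def d_term(v):
    # d(v) = D(v) - m*v: equals -m*br for v >= br, -m*v inside the band,
    # and -m*bl for v <= bl, so |d(v)| never exceeds m*max(br, -bl)
    return deadzone(v) - m * v

rho = m * max(br, -bl)                      # upper bound for |d(v)|
assert all(abs(d_term(v)) <= rho + 1e-12 for v in np.linspace(-3.0, 3.0, 121))
```

Here `rho` plays the role of the constant upper bound for the disturbance-like term introduced after (2.5).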

3. Neural Identifier

In this section, the identification problem of the unknown global dynamics described by (2.1) and (2.2) using a recurrent neural network is considered.

Note that an alternative representation for (2.1) is given by where is a Hurwitz matrix, and are unknown constant weight matrices, is the activation vector function with sigmoidal components, that is, where is a sigmoidal function, that is, where , , and are positive constants which can be specified by the designer, and is the unmodeled dynamics, which can be defined simply as .

Assumption 3.1. On a compact set , the unmodeled dynamics is bounded by , that is, . The disturbance is also bounded, that is, . Both and are positive constants, not necessarily a priori known.

By substituting (2.6) into (3.1), we get

Remark 3.2. It can be observed that, by using the model (2.6), the actual control input now appears directly in the dynamics.

Since, by construction, is bounded, the term is also bounded. Let us define the following expression: . Clearly, this expression is bounded. Let us denote an upper bound for as . This bound is a positive constant not necessarily a priori known. Now, note that the term can alternatively be expressed as , where is an unknown weight matrix. In view of the above, (3.4) can be rewritten as

Now, consider the following series-parallel structure for a continuous-time recurrent neural network: where is the state of the neural network, is the control input, and and are the time-varying weight matrices. The problem of identifying the system (2.1)-(2.2) based on the recurrent neural network (3.6) consists of, given the measurable state and the input , adjusting online the weights and by proper learning laws such that the identification error can be reduced. Specifically, the following learning laws are used here: where , , , and are positive constants selectable by the designer.
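As a rough, non-authoritative illustration of how a series-parallel identifier of the form (3.6) can be simulated, the sketch below uses a generic bounded sigmoid and simple gradient-type weight updates. The sigmoid constants, the Hurwitz matrix `A`, the update gains, and the toy plant are all chosen here for illustration; the paper's actual learning laws (3.7) and (3.8) are defined with their own design constants and are not reproduced.

```python
import numpy as np

def sigmoid(z, a=2.0, b=1.0, c=1.0):
    # generic sigmoidal component a/(1 + exp(-b*z)) - c, bounded for all z
    return a / (1.0 + np.exp(-b * z)) - c

n = 2                                   # toy state dimension
A = -2.0 * np.eye(n)                    # Hurwitz (stable) design matrix
W1, W2 = np.zeros((n, n)), np.zeros((n, n))
xhat = np.zeros(n)                      # neural identifier state

def identifier_step(x, v, xhat, W1, W2, dt=1e-3, g1=5.0, g2=5.0):
    s = sigmoid(x)                      # series-parallel: use the measured x
    e = xhat - x                        # identification error
    xhat_dot = A @ xhat + W1 @ s + W2 @ (s * v)
    # illustrative gradient-type weight updates (stand-ins for (3.7)-(3.8)):
    W1_new = W1 - dt * g1 * np.outer(e, s)
    W2_new = W2 - dt * g2 * np.outer(e, s * v)
    return xhat + dt * xhat_dot, W1_new, W2_new

# drive the identifier with data generated by a toy stable plant x' = -x + v
x_meas, v = np.array([1.0, -1.0]), 1.0
for _ in range(1000):
    xhat, W1, W2 = identifier_step(x_meas, v, xhat, W1, W2)
    x_meas = x_meas + 1e-3 * (-x_meas + v)
```

Because `A` is Hurwitz and the activations are bounded, the identifier state and the weights remain bounded along the run, mirroring part (a) of the theorem that follows.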

Based on the learning laws (3.7) and (3.8), the following result is here established.

Theorem 3.3. If Assumptions 2.2, 2.3, and 3.1 are satisfied, the constant is selected greater than , and the weight matrices , of the neural network (3.6) are adjusted by the learning laws (3.7) and (3.8), respectively, then (a) the identification error and the weights of the neural network (3.6) are bounded: (b) the norm of the identification error, that is, , converges exponentially fast to a zone bounded by the term where , .

Proof of Theorem 3.3. First, let us determine the dynamics of the identification error. The first derivative of is simply Substituting (3.6) and (3.5) into (3.11) yields where and .
Consider the following Lyapunov function candidate: The first derivative of is Substituting (3.12) into (3.14), and taking into account that, for simplicity, can be selected as , where is a positive constant greater than 0.5 and is the identity matrix, yields

Since and , the first derivatives of and are clearly and , respectively. However, and are given by the learning laws (3.7) and (3.8). Therefore, by substituting (3.7) into and (3.8) into , and the corresponding expressions into the right-hand side of (3.15), it is possible to obtain We can see that Substituting (3.17) into (3.16) and collecting like terms yields

Now, it can be proven that [10] Likewise, it is easy to show that If (3.19) and the inequality (3.20) are substituted into (3.18), we obtain or In view of , , the following bound as a function of can finally be determined for : Equation (3.23) can be rewritten in the following form: Multiplying both sides of the last inequality by , it is possible to obtain The left-hand side of (3.25) can be rewritten as or, equivalently, as Integrating both sides of the last inequality from to yields Adding to both sides of the last inequality, we obtain Multiplying both sides of the inequality (3.29) by yields and, consequently,

As, by definition, and are positive constants, the right-hand side of the last inequality can be bounded by . Thus, and since by construction is a nonnegative function, the boundedness of , , and can be guaranteed. Because and are bounded, , and must be bounded too, and the first part of Theorem 3.3 has been proven.

With respect to the second part of the theorem, from (3.13), it is evident that . Taking this fact into account and from (3.31), we get By taking the limit as of the inequality (3.32), we can guarantee that converges exponentially fast to a zone bounded by the term , and the last part of Theorem 3.3 has been proven.
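The chain of inequalities in the proof follows the standard comparison-lemma pattern. With generic stand-ins $\alpha$ and $\beta$ for the positive constants obtained from the bounds preceding (3.23), and $V$ denoting the Lyapunov function candidate (3.13), the steps can be summarized as:

```latex
% alpha, beta > 0 are stand-ins for the constants produced by the proof;
% V is the Lyapunov function candidate (3.13).
\begin{aligned}
\dot V \le -2\alpha V + \beta
  \;&\Longrightarrow\;
  \frac{d}{dt}\!\left(e^{2\alpha t} V\right) \le \beta\, e^{2\alpha t}\\
  \;&\Longrightarrow\;
  V(t) \le V(0)\, e^{-2\alpha t}
        + \frac{\beta}{2\alpha}\left(1 - e^{-2\alpha t}\right)
        \le V(0)\, e^{-2\alpha t} + \frac{\beta}{2\alpha},
\end{aligned}
```

which is exactly the exponential convergence to a bounded zone claimed in part (b) of Theorem 3.3.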

Remark 3.4. It is very important to mention that the identification process based on Theorem 3.3 can be accomplished without a priori knowledge of , , and .

4. Controller Design

In this section, a proper control law in order to solve the tracking problem is determined.

Note that the dynamics of the exosystem (2.3) can be alternatively represented as where is the same Hurwitz matrix as in (3.6), is an unknown constant weight matrix, is an activation vector function with sigmoidal components, that is, where , , and are positive constants which can be specified by the designer, and is an error term which can be defined simply as

Assumption 4.1. On a compact set , the error term is bounded by the positive constant , not necessarily a priori known, that is, .

Let us define the virtual tracking error as The first derivative of (4.4) is simply Substituting (3.6) and (4.1) into (4.5) yields By adding and subtracting the term in (4.6), we obtain where .

Consider the following Lyapunov function candidate: where is a positive constant. The first derivative of is Substituting (4.7) into (4.9), and taking into account that was selected in Section 3 as , yields

If the learning law for is selected as where is a positive constant, and the control law is chosen as where and are positive constants, then, taking into account that , (4.10) becomes It can be proven that By substituting (4.14) into (4.13) and collecting like terms, we obtain Taking into account that for , and for , (4.15) becomes Note that

On the other hand, by construction, and are bounded. Consider that and are the corresponding upper bounds, that is, and (both and can be calculated). Likewise, by construction, is bounded, and is bounded from Theorem 3.3. Consider that is an upper bound for , that is, . In view of the above, and selecting and , where and are two positive constants, we can obtain or

Now, in accordance with Theorem 3.3, . Based on this fact together with Assumption 4.1, the boundedness of the term can be concluded. Consider that the unknown positive constant is an upper bound for that term, that is, . Thus, it is easy to show that On the other hand, if the constants and are selected in such a way that then the following can be established: Based on (4.23), it can be proven that Substituting (4.21) and (4.24) into (4.20) yields Defining , , (4.25) becomes This means that As, by definition, and are positive constants, the right-hand side of the last inequality is bounded by . Hence, , and consequently , and .

As, by hypothesis, , the boundedness of guarantees the boundedness of . Remember that Theorem 3.3 guarantees that . By the definition of , that is, , and considering that , the boundedness of can be concluded. From (4.12), we can see that the control law is selected in such a way that the denominator is never equal to zero, although and/or . Besides, we can verify that is formed by bounded elements. Hence, the control input must be bounded too.

On the other hand, note that the following is true: Taking into account (4.28) and from (4.27), we get

Now, the ultimate objective is to achieve that the state of the unknown system (2.1)-(2.2) follows the reference trajectory . Thus, we need to know whether the actual tracking error converges or not to some value. Note that Clearly, . Finally, by substituting (3.32) and (4.29) into (4.30), we have By taking the limit as of the last inequality, we can guarantee that converges exponentially fast to a zone bounded by the term . Thus, the following theorem has been proven.
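One common way to obtain a control law whose denominator can never vanish, in the spirit of the singularity-avoidance claim made for (4.12), is a regularized inverse of the estimated input gain. The construction below is a generic sketch, not the paper's exact law; the function name and the regularization constant `eps` are introduced here for illustration only.

```python
import numpy as np

def regularized_control(G_hat, nu, eps=1e-2):
    """Compute u = G^T (G G^T + eps*I)^{-1} nu. For eps > 0 the matrix
    G G^T + eps*I is positive definite, so the inverse always exists,
    even when the estimated gain G_hat is singular or identically zero."""
    m_dim = G_hat.shape[0]
    return G_hat.T @ np.linalg.solve(G_hat @ G_hat.T + eps * np.eye(m_dim), nu)

# even a zero gain estimate yields a well-defined (zero) control action
u = regularized_control(np.zeros((2, 2)), np.array([1.0, -1.0]))
```

The price of the regularization is a small residual error proportional to `eps`, which is consistent with convergence to a bounded zone rather than to zero.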

Theorem 4.2. Given Assumptions 2.1–4.1, if the control law (4.12) is used together with the learning laws (3.8) and (4.11), then it can be guaranteed that (a) the weight matrix , the virtual tracking error, the actual tracking error, the state of the neural network, the system state, and the control input are bounded: (b) the actual tracking error converges exponentially to a zone bounded by the term where and are defined as in Theorem 3.3 and , .

5. Numerical Example

In this section, a simple but illustrative simulation example is presented in order to show the feasibility of the suggested approach. Consider the first-order nonlinear system given by The initial condition for system (5.1) is ; is the deadzone output; the parameters of the deadzone are , , and ; , and the disturbance term is selected as . The following reference trajectory is employed: . The parameters for the neural identifier and the control law are selected by trial and error as , , , , , , , , , , , , , , , , , , and . The simulation is carried out by means of Simulink with the ode45 method, a relative tolerance equal to , and an absolute tolerance equal to . The results of the tracking process are presented in Figures 1–3 for the first 20 seconds. In Figure 1, the output of the nonlinear system (5.1), , is represented by a dashed line, whereas the reference trajectory is represented by a solid line. In Figure 2, the control signal acting as the input of the deadzone is shown. In Figure 3, a zoom of Figure 2 is presented. From Figure 3, we can appreciate that the control law properly compensates for the deadzone.
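The paper's simulation cannot be reproduced here because its numerical values are tied to the displayed equations, but the main mechanism, compensating only the linear part of the deadzone and absorbing the bounded disturbance-like term through feedback, can be illustrated with a self-contained toy example. All numbers below (plant, gains, deadzone parameters) are chosen for illustration and are not the paper's simulation values.

```python
import numpy as np

# Toy closed-loop illustration: plant x' = -x + D(v), reference r = sin(t).
# The deadzone is treated as D(v) = m*v + d(v) with |d| <= rho = m*br; the
# control compensates only the linear term and never inverts the deadzone.
m, br, bl, k, dt = 1.0, 0.5, -0.5, 5.0, 1e-3

def deadzone(v):
    if v >= br:
        return m * (v - br)
    if v <= bl:
        return m * (v - bl)
    return 0.0

x, t, max_err = 0.0, 0.0, 0.0
for _ in range(10000):                       # simulate 10 seconds
    r, rdot = np.sin(t), np.cos(t)
    v = (rdot + x + k * (r - x)) / m         # linear-part compensation only
    x += dt * (-x + deadzone(v))             # Euler step of the plant
    t += dt
    max_err = max(max_err, abs(np.sin(t) - x))
# the tracking error settles into a zone of size roughly rho/k = 0.1
```

The tracking error does not vanish, but it remains inside a bounded zone whose size shrinks as the feedback gain `k` grows, which mirrors the convergence-to-a-bounded-zone statement of Theorem 4.2.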

6. Conclusions

In this paper, an adaptive scheme based on a continuous-time recurrent neural network is proposed in order to handle the tracking problem for a broad class of nonlinear systems with multiple inputs, each subject to an unknown symmetric deadzone. The adaptive inverse commonly required in many previous works is conveniently avoided by considering the deadzone as a combination of a linear term and a disturbance-like term. Thus, the identification of the unknown dynamics together with the deadzone can be carried out directly by using a recurrent neural network. The exponential convergence of the identification error norm to a bounded zone is thoroughly proven by a Lyapunov analysis. Subsequently, the state of the neural network is compelled to follow a reference trajectory by using a control law designed in such a way that the singularity problem is conveniently avoided without the need for any projection strategy. By another Lyapunov analysis, the exponential convergence of the difference between the neural network state and the reference trajectory is demonstrated. As the tracking error is bounded by the identification error and the difference between the neural network state and the reference trajectory, the exponential convergence of the tracking error to a bounded zone is also proven. Besides, the boundedness of the system state, the neural network state, the weights, and the control signal can be guaranteed. The proposed control scheme presents two important advantages: (i) specific knowledge of a bound for the unmodeled dynamics and/or the disturbance term is not necessary; (ii) the determination of the first derivative of the reference trajectory is not required.

Acknowledgment

The first author would like to thank the Mexican National Council for Science and Technology (CONACYT) for financial support through a postdoctoral fellowship.