Research Article | Open Access

J. Humberto Pérez-Cruz, "Stabilization and Synchronization of Uncertain Zhang System by Means of Robust Adaptive Control", *Complexity*, vol. 2018, Article ID 4989520, 19 pages, 2018. https://doi.org/10.1155/2018/4989520

# Stabilization and Synchronization of Uncertain Zhang System by Means of Robust Adaptive Control

**Academic Editor:** Ludovico Minati

#### Abstract

Standard adaptive control is the preferred approach for the stabilization and synchronization of chaotic systems when the structure of such systems is known a priori but their parameters are unknown. However, in the presence of unmodeled dynamics and/or disturbance, this approach is no longer effective because the parameter estimations drift, which eventually destabilizes the closed-loop system. In this paper, a robustifying term, which consists of a saturation function, is added to the adaptive control law to avoid this problem, and the learning law is modified accordingly. The boundedness of the states and of the parameter estimations is rigorously proven by means of a Lyapunov-like analysis based on Barbalat's lemma. Under these new conditions, convergence to zero cannot be achieved due to the presence of unmodeled dynamics and/or disturbance; however, it is still possible to guarantee asymptotic convergence to a bounded zone around zero whose width can be adjusted by the designer. The performance of this robust approach is verified by numerical simulations. Although, for simplicity, the strategy is applied only to the stabilization and synchronization of the Zhang system, the procedure can easily be generalized to a broad class of chaotic systems.

#### 1. Introduction

Chaotic systems present interesting and peculiar features such as very high sensitivity to initial conditions, boundedness of solutions, and a rich dynamic behavior [1–6]. In particular, due to the first property, long-term prediction of the dynamic behavior of these systems is not possible. However, it is still possible to modify the behavior of such systems by means of a proper control input. This fact is the basis of successful applications in fields such as optics [7–10], secure communications [11–23], finance [5, 24–28], power systems [29–33], electrical machinery [34–36], and so on.

Basically, two cases can be distinguished in the control of chaotic systems: (a) chaos suppression and (b) synchronization [37, 38]. In the first case, the states of a chaotic system are driven to an equilibrium point, generally the origin, by means of a proper control law. In the second case, a chaotic system with control inputs, known as the slave, must follow the dynamics of an autonomous chaotic system, known as the master. Both systems should produce the same response in spite of different initial conditions [39]. This problem can be posed as the stabilization of the difference between the states of the slave system and the master system (the synchronization error). From this point of view, synchronization can simply be considered a generalization of the stabilization problem [40].

In the technical literature, various strategies have been proposed for the control of chaotic systems [41–48]. Three approaches have commonly been used when both the structure and the parameters of the system are known: linear feedback control [49–53], nonlinear control [54–58], and active control [59–68]. In the first approach, the control input is formed by proportional feedback of either the system state or the synchronization error, according to the objective: stabilization or synchronization, respectively. In nonlinear control, the control input is designed in such a way that, given a Lyapunov function candidate, its first time derivative is negative definite. In this approach, the control input may differ depending on the particular selection made by the designer. Active control consists of compensating the nonlinearities and decoupling the equations that describe the dynamics of the system or of the synchronization error in order to achieve stabilization [69]. This technique always produces the same results independently of the designer and can be considered a special kind of feedback linearization [70].

On the other hand, when the structure is known but the parameters of the system are unknown, the preferred approach is adaptive control [71–75]. This method can be considered a generalization of active control. Since the parameters are unknown in this case, the compensation of nonlinearities and the decoupling of the equations must be carried out using estimations of such parameters; consequently, an updating law must also be designed. A Lyapunov-like approach based on Barbalat's lemma is the main tool to show the closed-loop stability of this kind of system. Using this method, the Lorenz and Chen systems and modified Chua and Rössler systems were successfully synchronized in [76]. In [77], the stabilization of the Liu system with respect to the zero equilibrium and other, nonzero equilibria, as well as its synchronization, was accomplished by means of the adaptive approach. In [78], the Chen and Genesio systems were synchronized using active and adaptive control for the cases of known and unknown parameters, respectively. The synchronization of an uncertain hyperchaotic Lorenz system and an uncertain hyperchaotic Lü system was investigated in [79]. In [80], the adaptive control and synchronization of the Rössler prototype-4 system were considered. In [71, 73], adaptive control was applied to the synchronization of an uncertain chaotic Lorenz-Stenflo system and of uncertain TSUCS and Lü unified chaotic systems, respectively.

All the aforementioned works considered only the case when the parameters are unknown. However, in practical situations, unmodeled dynamics and/or disturbance could additionally be present. Under this more realistic condition, the control laws designed in [66–75] are no longer effective: the response deteriorates, the estimations of the parameters start to grow unboundedly, and eventually the closed-loop system becomes unstable. To overcome this problem, in this paper, a robustifying term, which consists of a saturation function, is used in the control law; consequently, the corresponding learning law for the parameter estimations is also modified. For simplicity, attention is focused on a new chaotic system proposed in [81]. However, the strategy explained here can easily be applied to a broad class of chaotic systems.

#### 2. Zhang System and Problem Description

Zhang presented in [81] a new third-order chaotic system formed by two linear terms, three cross-product terms, and a unique cubic term. This system is described by (1), where , , and are the system states and , , and are constant parameters. The system shows chaotic behavior for the parameter values , , , , , and and the initial condition , , and . This phenomenon can be appreciated qualitatively in Figure 1, where the phase planes and the phase space are presented for the four-scroll chaotic attractor of system (1). A more detailed description of the properties of system (1) can be found in [81].

[Figure 1: phase planes and phase space of the four-scroll chaotic attractor of system (1), panels (a)–(d).]

Certainly, if no external influence is applied to system (1), its chaotic behavior cannot be suppressed. Thus, to achieve this objective, it is first necessary to modify system (1) as in (2), where , , and are control inputs. The stabilization problem for system (2) consists of finding a proper control law such that the states of this system tend to zero independently of the initial condition. Moreover, the parameters of system (2) will be considered unknown throughout this paper.

Another problem (which can be considered a generalization of stabilization) is the synchronization of chaotic systems. In its simplest conception, that is, the master-slave configuration, a slave chaotic system with control inputs must follow the dynamic behavior of an autonomous master chaotic system. For system (1), the corresponding master system can be represented simply as (3), where the subscript *m* denotes "master." The corresponding slave system for system (1) is given by (4), where , , and are control inputs and the subscript *s* denotes "slave." The parameters of systems (3) and (4) are considered unknown. In this paper, a proper control law will be determined such that system (4) can follow the chaotic behavior of system (3) in spite of the lack of knowledge about the parameter values.

#### 3. Adaptive Stabilization

In this section, the stabilization problem of system (2) with unknown parameters is considered. First, the ideal case is solved by means of Lyapunov-like stability theory. Next, it is shown that, in the presence of unmodeled dynamics and/or disturbances, the deduced control law is no longer effective. Thus, such a control law must be modified to overcome this drawback.

##### 3.1. Ideal Case

Given system (2) and considering that the parameters are completely unknown, the following control law (5) can be proposed, where , , , , , and are the estimations of the unknown constant parameters , , , , , and , respectively, and , , and are positive constants selectable by the designer. By substituting (5) into (2), the closed-loop dynamics (6) is obtained, with the parameter estimation errors defined in (7). Note that, by taking the first time derivative of (7) and since each unknown parameter is constant, (8) is obtained. In order to analyze the stability of (6), the Lyapunov function candidate (9) is proposed, where , , , , , and are positive constants selectable by the designer. Note that (9) is positive definite on . The first time derivative of (9) can be calculated as (10). By substituting (6) into (10), taking into account (8), and after some operations, it is possible to obtain (11). In view of (11), the update law (12) can be deduced. If (12) is substituted into (11), then (13) results. Consequently, the first time derivative of is negative semidefinite. Thus, asymptotic stability cannot be concluded from the second theorem of Lyapunov, and an additional analysis is required. Alternatively, (13) can be expressed as (14), or equivalently (15). By integrating both sides of the last inequality from to , (16) follows, and since is a nonnegative function, . Consequently, , , and belong to . From (7), and since , , and are constant parameters, it can be concluded that , , , , , and also belong to . An inspection of (5) and (6) reveals that , , , , , and are formed by bounded terms; thus, . On the other hand, considering definition (17) and taking into account (13), inequality (18) can be established. By integrating both sides of (18) from zero to , (19), or equivalently (20), is obtained. Now, since is a nonnegative function, (21) is true. Substituting (21) into (20) yields (22). By taking the limit as of both sides of the last inequality, (23) is finally obtained. This means that .
Since and , it can be concluded from Barbalat's lemma [82] that , , and converge asymptotically to zero. Hence, the following result has been proven:

Lemma 1. *If the control law (5) with the learning law (12) is applied to Zhang system (2), then (a) the states, the estimations of the parameters, and the control signal are bounded, that is, (24); and (b) the states , , and converge globally and asymptotically to zero.*

Corollary 2. *Since the estimations of the parameters are bounded and, moreover, , each estimation converges to a constant value.*

The performance of the control law (5) can be tested by simulation. To this end, system (2) is first implemented in Simulink® with the parameter values provided in Section 2 and with the initial condition , , and . The loop is closed with the control law (5) and the learning law (12). For simplicity, the gains of the control law are selected as , , and , and the parameters of the learning law are selected as , with the initial condition . The simulation is carried out by means of the Runge-Kutta method (ode4) with a fixed step size of 0.0001. The results of the simulation process are presented in Figures 2, 3, and 4.

[Figure 2: stabilization of the three states of system (2), panels (a)–(c).]

[Figure 3: time evolution of the parameter estimations, panels (a)–(c).]

[Figure 4: control inputs (5), panels (a)–(c).]

In Figure 2, the stabilization process for the three states of system (2) is illustrated for the first 1.5 seconds (s). It can be appreciated that stabilization is attained in less than 1 s. In Figure 3, the estimations of the constant parameters , , and are shown. According to Corollary 2, each estimation tends to a constant value. However, it should be mentioned that this constant value is not necessarily the true value of the corresponding parameter. Finally, the control inputs (5) are depicted in Figure 4. Once stabilization is attained, each control input is equal to zero.
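Since the display equations (5)–(12) are not reproduced here, the mechanism they implement can still be sketched on a scalar analog (all names below — `theta`, `k`, `gamma` — are illustrative assumptions, not the paper's symbols). For a hypothetical plant x' = θx + u with θ unknown, the certainty-equivalence control u = −θ̂x − kx and the gradient update θ̂' = γx² play the roles of (5) and (12): the Lyapunov function V = x²/2 + (θ − θ̂)²/(2γ) gives V' = −kx², so x → 0 while θ̂ remains bounded, just as in Lemma 1.

```python
import numpy as np

theta = 3.0          # "unknown" plant parameter (used only by the simulator)
k, gamma = 2.0, 1.0  # controller gain and learning rate (designer choices)
dt, steps = 0.0001, 100_000  # fixed step, 10 s, echoing the ode4 setup

x, theta_hat = 2.0, 0.0
for _ in range(steps):
    u = -theta_hat * x - k * x      # adaptive control law (analog of (5))
    x_dot = theta * x + u           # scalar plant: x' = theta*x + u
    theta_hat_dot = gamma * x * x   # gradient learning law (analog of (12))
    x += dt * x_dot                 # forward-Euler integration
    theta_hat += dt * theta_hat_dot

print(x, theta_hat)  # state near zero; estimate settles to a constant
```

As Corollary 2 warns, `theta_hat` settles to a constant that need not equal the true value θ = 3; only the state, not the parameter error, is driven to zero.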

##### 3.2. Nonideal Case

Now, let us consider the more realistic case when unmodeled dynamics and/or disturbances are present in system (2). This effect can be described by means of the terms , , and as shown in (25).

*Assumption 3.* Although the terms , , and represent unknown unmodeled dynamics and/or disturbance, a bound , , and for each term, respectively, is a priori known; that is, , , and .

Under the existence of unmodeled dynamics and/or disturbance, the control law (5) presented in Lemma 1 is no longer effective. In order to illustrate this point, consider that , , and are given by . Under these new conditions, the simulation of system (25) under the control law (5) with the learning law (12) produces the results exhibited in Figure 5. Apparently, the main performance deterioration is the presence of a sinusoidal signal of about 0.5 amplitude in steady state. However, a simulation over 5000 s shows that the estimations no longer converge to a constant value (see Figure 6). This phenomenon is known as parameter drift and ultimately causes instability of the system. Consequently, the control law and the learning law must be modified to overcome this problem.

[Figure 5: states of system (25) under the control law (5), panels (a)–(c).]

[Figure 6: drift of the parameter estimations of system (25), panels (a)–(c).]
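The drift mechanism can be reproduced on the same hypothetical scalar analog used earlier (again, every symbol here is illustrative, not the paper's): adding a bounded disturbance d(t) to x' = θx + u keeps x from ever reaching zero, so the nonnegative update θ̂' = γx² keeps integrating and the estimate grows without bound.

```python
import numpy as np

theta, k, gamma = 3.0, 2.0, 2.0
dt, steps = 0.0005, 400_000          # 200 s of simulation
x, theta_hat = 2.0, 0.0
history = []

for i in range(steps):
    t = i * dt
    d = np.sin(2.0 * t)              # bounded disturbance, |d| <= 1
    u = -theta_hat * x - k * x       # same adaptive law as the ideal case
    x += dt * (theta * x + u + d)
    theta_hat += dt * gamma * x * x  # x*x >= 0: the estimate can only grow
    history.append(theta_hat)

# The estimate keeps climbing instead of settling: parameter drift.
print(history[40_000], history[-1])  # theta_hat at t = 20 s vs t = 200 s
```

The state itself only shows a small residual oscillation, which is why the drift is easy to miss over short simulations; it becomes evident only over long horizons, exactly as in the 5000 s run of Figure 6.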

Following the approach presented in [83], in this work, the control law (5) is modified by adding a robustifying term, which consists of a saturation function, resulting in (26), where , , and are positive constants; , , and are small positive constants, which can be selected by the designer; and represents the saturation function defined in (27). Substituting (26) into (25) and after some operations, the closed-loop dynamics is now given by (28), where , , and are defined as in (7). In order to analyze the closed-loop dynamics (28), the Lyapunov function candidate (29) is proposed, where , , , , , and are positive constants and (30) holds. The first time derivative of (29) is (31). It can be proven that (32) is true. Taking this fact into account and according to (8), (31) becomes (33). By substituting (28) into (33), and after accomplishing the corresponding operations and grouping like terms, (34) is obtained. In view of (34), the update law (35) can be deduced. If (35) is substituted into (34), then the first time derivative of *V* becomes (36). Now, from (30), (37) follows. Additionally, from Assumption 3, the terms , , and can be bounded as in (38). Taking into account (37) and (38), and then grouping like terms, (36) becomes (39). Furthermore, it can be shown that (40) holds. Substituting (40) into (39) and grouping like terms, we can get (41). If the constants , , and are selected in such a way that inequality (42) is satisfied, then, from (41), it can be seen that (43) holds. This means that , which implies that and . From the boundedness of and from (37), the following can be concluded: . Since the control signal (26) and the first time derivative of are formed by bounded terms, . From (17), (43) can be expressed as (44). By an argument similar to that of Section 3.1, it can be shown that (45) holds; that is, . Since and , Barbalat's lemma guarantees the asymptotic convergence of to zero. Taking this fact into account and from (37), we can finally conclude that converge asymptotically to a zone bounded by , , and , respectively. Thus, the following theorem has been demonstrated:

Theorem 4. *If Assumption 3 is satisfied, the constants , , and are selected in such a way that inequality (42) is verified, and the control law (26) with the learning law (35) is applied to uncertain Zhang system (25), then (a) ; and (b) the states , , and converge asymptotically to a region around zero bounded by , , and , respectively.*

It is worth mentioning that Theorem 4 only guarantees the *asymptotic* convergence of , , and to a bounded zone around zero. Thus, rigorously speaking, the convergence time could be infinite, and consequently this result does not provide any procedure to determine it. Fortunately, as will be seen later, numerical simulations show that this convergence time is finite. Although it is not possible to establish an analytical relationship, it can be mentioned that, in general, when the gains , , , , are increased, the convergence time decreases. However, it should be taken into account that the magnitude of the corresponding control signals , , can then reach excessively large values; therefore, a trade-off must be established. With respect to the parameters of the learning law (35), , and the initial conditions of the estimations of the unknown parameters , there is no completely clear relationship between these parameters and the performance of the closed-loop system. For this reason, they should be selected by a trial-and-error process. Given the large number of parameters to be tuned, and in order to avoid spending excessive time, this process should be stopped when a set of values produces an acceptable performance (although not necessarily the best one). Thus, for simplicity, the parameters of the control law (26) and the learning law (35) are selected here as , , , , and . Again, the initial condition for uncertain Zhang system (25) is , , and . The corresponding results are presented in Figures 7, 8, and 9. As can be appreciated from Figure 7, stabilization is attained in less than 0.5 s. Although convergence to zero cannot be obtained due to the presence of the unmodeled dynamics and/or disturbances, convergence to a bounded zone can be guaranteed. This convergence process is shown in Figure 8. As can be seen, the residual sinusoidal signal now has an amplitude of less than 0.001. Finally, Figure 9 shows that the estimations converge to constant values.

[Figure 7: stabilization of the states of uncertain system (25) under the robust control law (26), panels (a)–(c).]

[Figure 8: convergence of the states to a bounded zone around zero, panels (a)–(c).]

[Figure 9: time evolution of the parameter estimations, panels (a)–(c).]
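The effect of the robustifying term can be sketched on the same hypothetical scalar plant used above. The control from the drift example gains a term −k_r·sat(x/ε), with k_r chosen larger than the disturbance bound of Assumption 3, and, as a simple stand-in for the modified learning law (35) (which is not reproduced here), adaptation is switched off inside the band |x| ≤ ε, a dead-zone modification. The state then converges to the designer-chosen band and the estimate stops drifting.

```python
import numpy as np

def sat(v):
    # Saturation function in the spirit of (27): identity on [-1, 1],
    # clipped to +/-1 outside.
    return np.clip(v, -1.0, 1.0)

theta, k, gamma = 3.0, 2.0, 2.0
k_r, eps = 1.5, 0.05                 # k_r exceeds the disturbance bound D = 1
dt, steps = 0.0005, 400_000          # 200 s, same setup as the drift sketch
x, theta_hat = 2.0, 0.0
drift_check = []

for i in range(steps):
    t = i * dt
    d = np.sin(2.0 * t)                              # same bounded disturbance
    u = -theta_hat * x - k * x - k_r * sat(x / eps)  # robustified control
    x += dt * (theta * x + u + d)
    if abs(x) > eps:                 # dead-zone: no adaptation inside
        theta_hat += dt * gamma * x * x  # the target band |x| <= eps
    drift_check.append(theta_hat)

print(abs(x), drift_check[-1] - drift_check[200_000])
```

Compared with the previous sketch, the state now settles inside the band of width ε and the estimate is essentially frozen over the second half of the run, mirroring the bounded-zone convergence of Theorem 4.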

#### 4. Adaptive Synchronization

The synchronization of two systems can be considered a generalization of the stabilization problem studied in Section 3. Consequently, the theory developed in that section will be applied in terms of the dynamics of the synchronization error. Again, both cases will be considered: (a) the ideal case and (b) the presence of unmodeled dynamics and/or disturbances.

##### 4.1. Ideal Case

The synchronization error can be defined as in (46). Mathematically, the synchronization problem consists of finding a proper control law such that . In order to achieve this objective, first, the dynamics of the synchronization error must be determined. This can be done by differentiating (46) and substituting (3) and (4) into the corresponding expression, which yields (47). Once the dynamics of the synchronization error (47) has been obtained, the control law (48) can be proposed, where , , , , , and are estimations of the unknown constant parameters , , , , , and , respectively, and , , and are positive constant gains selectable by the designer. By substituting (48) into (47), the closed-loop synchronization error dynamics is obtained as (49), where , , , , , and are defined as in (7).

The closed-loop dynamics (49) must now be analyzed. For this purpose, the Lyapunov function candidate (50) is proposed, where , , , , , and are positive constants.

The first time derivative of (50) is (51) or, equivalently, according to (8), (52). By substituting (49) into (52), and after accomplishing the corresponding operations and grouping like terms, (53) is obtained. Thus, the learning law (54) can be suggested. By substituting (54) into (53), we get (55). It is easy to see that the first time derivative of (50) is negative semidefinite, so the stability properties of dynamics (47) cannot be concluded directly. From (55), it can be obtained that . This means that , , , , , , , , and are bounded. Now, as is well known, the states of an autonomous chaotic system are bounded, that is, . Since , and from (46), it is clear that are also bounded. Since the control law (48) is formed by bounded terms, its boundedness can be guaranteed. In a similar way, the boundedness of can also be proven. From (55) and (17), (56) is true. This means that . Since and , from Barbalat's lemma the asymptotic convergence of to zero can be concluded. Thus, the following result has been proven:

Lemma 5. *If the control law (48) with the learning law (54) is applied to Zhang slave system (4), then (a) the states of the slave system, the synchronization error, the estimations of the parameters, and the control signal are bounded, that is, (57); and (b) , , and converge asymptotically and globally to zero, and consequently the synchronization between slave system (4) and master system (3) is obtained.*

Corollary 2 is still valid in this new context. For the purpose of numerical simulation, consider that both slave system (4) and master system (3) have the parameter values , , and given in Section 2. These values are unknown to the controller. The initial condition for the master system is , , and , whereas the initial condition for the slave system is , , and . The gains of the controller (48) are selected as , whereas the gains of the learning law (54) are selected as . Finally, for simplicity, the initial condition for the learning law (54) is set to .

The results are shown in Figure 10. In this figure, it can be appreciated that the states of the slave system converge to the corresponding states of the master system. Clearly, synchronization is attained in less than 1 s. Once again, the estimations of the parameters converge to constant values; however, for brevity, this result is not presented.

[Figure 10: synchronization of the states of slave system (4) with master system (3), panels (a)–(c).]
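Because the Zhang equations are not reproduced here, the same master-slave construction can be illustrated on the classical Lorenz system, one of the systems synchronized adaptively in [76]. Everything below (gains, learning rates, initial guesses, the names `sh`, `rh`, `bh`) is an illustrative assumption, not the paper's design: the slave compensates the nonlinearities with parameter estimates, analogously to (48), and the gradient updates, obtained by cancelling the parameter-error terms in the Lyapunov derivative, play the role of (54).

```python
import numpy as np

def rk4(f, s, dt):
    # Fixed-step fourth-order Runge-Kutta (ode4) step.
    k1 = f(s); k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0   # true (unknown) parameters
k1g, k2g, k3g = 5.0, 5.0, 5.0              # feedback gains
g1, g2, g3 = 2.0, 2.0, 2.0                 # learning rates

def dynamics(s):
    # s = [master (3), slave (3), parameter estimates (3)]
    xm, ym, zm, xs, ys, zs, sh, rh, bh = s
    e1, e2, e3 = xs - xm, ys - ym, zs - zm
    # Control: compensate nonlinearities with estimates plus linear
    # feedback, mirroring the structure of (48).
    u1 = -sh * (e2 - e1) - k1g * e1
    u2 = -rh * e1 + (xs * zs - xm * zm) - k2g * e2
    u3 = -(xs * ys - xm * ym) + bh * e3 - k3g * e3
    master = [sigma * (ym - xm), xm * (rho - zm) - ym, xm * ym - beta * zm]
    slave = [sigma * (ys - xs) + u1,
             xs * (rho - zs) - ys + u2,
             xs * ys - beta * zs + u3]
    # Gradient learning laws (analog of (54)), chosen so the parameter
    # error terms cancel in the Lyapunov derivative.
    est = [g1 * e1 * (e2 - e1), g2 * e1 * e2, -g3 * e3 * e3]
    return np.array(master + slave + est)

s = np.array([1.0, 1.0, 1.0,     # master initial condition
              2.0, 3.0, 4.0,     # different slave initial condition
              5.0, 20.0, 1.0])   # initial parameter guesses
dt = 0.001
for _ in range(40_000):          # 40 time units
    s = rk4(dynamics, s, dt)

err = np.linalg.norm(s[3:6] - s[0:3])
print(err)                       # synchronization error after 40 s
```

With this choice, the Lyapunov derivative reduces to a negative-semidefinite quadratic form in the error, so the error converges to zero while the estimates remain bounded; as in Lemma 5, the estimates need not converge to the true parameter values.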

##### 4.2. Nonideal Case

In the presence of unmodeled dynamics and/or disturbance, the dynamics of the slave system can be described as in (58). It is considered that Assumption 3 is still valid for , , and . Under these new conditions, the dynamics of the synchronization error is given by (59). If the control law (48) is applied to the slave system (58), the drift of the estimations appears due to the unmodeled dynamics and/or disturbance. To avoid this problem, by generalizing the procedure used for stabilization in the presence of unmodeled dynamics/disturbances in Section 3.2, the control law (48) can be modified as in (60), where , , , , , and are estimations of the unknown constant parameters , , , , , and , respectively; , , , , are positive constants; , , and are small positive constants, which can be selected by the designer; and represents the saturation function defined in (27). By substituting (60) into (59), the closed-loop synchronization error dynamics is obtained as (61), where , , , , , and are defined as in (7). To analyze the stability of the dynamics (61), the Lyapunov function candidate (62) is considered, where , , , , , and are positive constants and (63) holds. Taking into account (64) and according to (8), the first time derivative of (62) can be calculated as (65). By substituting (61) into (65), and after some operations, (66) is true. Because the objective is to obtain , the learning law (67) is proposed, from which (68) can be obtained. Now, from (63), (69) is true. On the other hand, from Assumption 3,