Abstract

This paper addresses the synchronization of chaotic gyros with unknown parameters and external disturbance via an adaptive dynamic neural network control (ADNNC) system. The proposed ADNNC system is composed of a neural controller and a smooth compensator. The neural controller uses a dynamic RBF (DRBF) network to approximate an ideal controller online. The DRBF network creates new hidden neurons online when the input data fall outside the coverage of the existing hidden neurons and prunes hidden neurons online when they become insignificant. The smooth compensator is designed to compensate for the approximation error between the neural controller and the ideal controller. Moreover, variable learning rates for the parameter adaptation laws are derived from a discrete-type Lyapunov function to speed up the convergence of the tracking error. Finally, simulation results verify that two identical nonlinear chaotic gyros can be synchronized by the proposed ADNNC scheme.

1. Introduction

Radial basis function (RBF) networks are characterized by a simple structure, rapid computation, and good adaptive performance [1]. There has been considerable interest in applying RBF networks to handle nonlinearity and uncertainty in control systems [2–5]. One main advantage of these RBF-based adaptive neural controllers is that the online parameter adaptive laws are derived without requiring offline training. Although favorable control performance is achieved in [2–5], the structure of the RBF network must be determined by a trial-and-error tuning procedure, and it is difficult to balance the number of hidden neurons against the desired performance. To solve this problem, dynamic RBF (DRBF) networks were proposed to adapt the structure of the RBF network [6–9]. However, some of these structural learning algorithms are complex, and others cannot prevent the structure of the RBF network from growing unboundedly.

Another drawback of RBF-based adaptive neural controllers is the difficulty of choosing the learning rates of the parameter adaptive laws. With small learning rates, convergence of the tracking error is easily guaranteed, but the convergence is slow. With large learning rates, the parameter adaptive laws may make the system unstable. To attack this problem, variable learning rates were studied in [10–13]. A discrete-type Lyapunov function was utilized to determine the optimal learning rates in [10, 11]; however, the Jacobian term cannot be calculated exactly because the control dynamics are unknown. A genetic algorithm and a particle swarm optimization algorithm were used to determine the optimal learning rates in [12, 13]; however, the computational load is heavy and these schemes lack real-time adaptation ability.

In the last decade, control and synchronization of chaotic systems have become an important topic. Chaos synchronization can be applied in many areas of physics and engineering, such as chemical reactions, power converters, biological systems, information processing, and secure communication [14–16]. Many different methods have been applied to synchronize chaotic systems. Chang and Yan [17] proposed an adaptive robust PID controller based on the sliding-mode approach; however, chattering appears. An adaptive sliding-mode control was proposed to cope with fully unknown system parameters [18]; a continuous control law was used to eliminate the chattering, but system stability could then no longer be guaranteed. Adaptive control techniques were applied to chaos synchronization in [19]; however, adaptive control requires structural knowledge of the chaotic dynamic functions. Yau [20] proposed a nonlinear rule-based controller for chaos synchronization, but the fuzzy rules must be preconstructed through a time-consuming trial-and-error tuning procedure to achieve the required performance.

This paper proposes an adaptive dynamic neural network control (ADNNC) system to synchronize two identical nonlinear chaotic gyros. The proposed ADNNC system is composed of a neural controller and a smooth compensator. The neural controller uses a DRBF network to approximate an ideal controller, and the smooth compensator is designed to dispel the approximation error introduced by the neural controller. An online structural learning algorithm with a low computational load is developed for the DRBF network. To speed up the convergence of the tracking errors, an analytical method based on a discrete-type Lyapunov function is proposed to determine variable learning rates for the parameter adaptive laws. Finally, simulations are provided to verify the effectiveness of the proposed ADNNC system.

2. Problem Formulation

In this paper, a symmetric gyro with linear-plus-cubic damping, as shown in Figure 1 [15], is considered. The dynamics of a gyro is a very interesting nonlinear problem in classical mechanics. According to the study by Chen [15], the dynamics of the symmetrical gyro with linear-plus-cubic damping of the angle can be expressed as in (2.1), where the state is the rotation angle, the gyro is driven by a parametric excitation, the damping consists of a linear term and a cubic term, and the remaining term is a nonlinear resilience force. The open-loop system behavior was simulated to observe the chaotic, unpredictable behavior: for one set of parameter values the uncontrolled trajectory exhibits a period-2 motion, and for another set a quasiperiodic motion occurs [15]. The time responses of the uncontrolled chaotic gyro with initial condition (1, 1) under these two settings are shown in Figures 2(a) and 2(b), respectively. It is shown that the uncontrolled chaotic gyro has different types of trajectories for different system parameters.
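The chaotic open-loop behavior can be reproduced numerically. The sketch below integrates the gyro dynamics with SciPy; the parameter values used here are typical values quoted in the chaotic-gyro literature and are assumptions rather than the exact values behind Figures 2(a) and 2(b).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed parameter values (typical of the chaotic regime of this gyro model);
# the exact values used for Figures 2(a) and 2(b) may differ.
ALPHA2, BETA = 100.0, 1.0      # resilience-force coefficients
C1, C2 = 0.5, 0.05             # linear and nonlinear (cubic) damping
F, OMEGA = 35.5, 2.0           # parametric excitation amplitude and frequency

def gyro(t, state):
    """Symmetric gyro with linear-plus-cubic damping as a first-order system.

    Note: the resilience term is singular at theta = 0 and theta = pi, so the
    trajectory should stay away from those angles for this sketch to be valid.
    """
    theta, dtheta = state
    resilience = ALPHA2 * (1.0 - np.cos(theta))**2 / np.sin(theta)**3 - BETA * np.sin(theta)
    ddtheta = (-C1 * dtheta - C2 * dtheta**3 - resilience
               + F * np.sin(OMEGA * t) * np.sin(theta))
    return [dtheta, ddtheta]

# Initial condition (1, 1) as in Figure 2, integrated over a moderate horizon.
sol = solve_ivp(gyro, (0.0, 50.0), [1.0, 1.0], max_step=0.01, rtol=1e-8)
print(sol.y[:, -1])  # final state; plot sol.t vs. sol.y[0] to see the chaotic trajectory
```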

Generally, the two chaotic systems in a synchronization problem are called the drive system and the response system. The central problem in chaos synchronization is how to design a controller that drives the response chaotic gyro system to closely track the drive chaotic gyro system. Consider the following two nonlinear gyros, where the drive system and the response system are given as follows.

Drive System

Response System

where the control input and the coupling term enter the response system (2.3). To achieve the control objective, the tracking error between the response system (2.3) and the drive system (2.2) is defined in (2.4), and the error dynamic equation is obtained as (2.5). If the system dynamics were known exactly, an ideal controller could be constructed as in (2.6) [21], in which two nonzero constants appear. Applying the ideal controller (2.6) to the error dynamic equation (2.5) yields the closed-loop error equation (2.7). If the two constants are chosen to correspond to the coefficients of a Hurwitz polynomial, the tracking error converges to zero [21]. However, the system dynamics of these chaotic systems are in practice unknown, so the ideal controller cannot be implemented.
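For reference, the standard drive-response formulation for gyro synchronization can be sketched as follows; the state symbols, the control input u, the coupling term d, and the gains k_1, k_2 are assumed notation, since the original equations (2.2)-(2.7) are referenced above only by number.

```latex
% Standard drive-response formulation (assumed notation; cf. (2.2)-(2.7)).
\begin{align}
  &\text{Drive system:}    & \dot{x}_1 &= x_2, & \dot{x}_2 &= g(x_1, x_2),\\
  &\text{Response system:} & \dot{y}_1 &= y_2, & \dot{y}_2 &= g(y_1, y_2) + u + d,
\end{align}
where $g(\theta,\dot{\theta}) = -c_1\dot{\theta} - c_2\dot{\theta}^{3}
  - \alpha^{2}\,\frac{(1-\cos\theta)^{2}}{\sin^{3}\theta}
  + \bigl(\beta + f\sin\omega t\bigr)\sin\theta$ collects the gyro dynamics.
With the tracking error $e = y_1 - x_1$, the error dynamics become
\begin{equation}
  \ddot{e} = g(y_1, y_2) - g(x_1, x_2) + u + d .
\end{equation}
If the dynamics and the coupling term were known, the ideal controller
\begin{equation}
  u^{*} = -\bigl[g(y_1, y_2) - g(x_1, x_2) + d\bigr] - k_1\dot{e} - k_2 e
\end{equation}
would give $\ddot{e} + k_1\dot{e} + k_2 e = 0$, so $e(t)\to 0$ whenever
$k_1, k_2$ are the coefficients of a Hurwitz polynomial $\lambda^{2} + k_1\lambda + k_2$.
```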

3. Design of the ADNNC System

In this paper, an adaptive dynamic neural network control (ADNNC) system as shown in Figure 3 is introduced, where a sliding surface is defined in (3.1) using two nonzero positive constants. The proposed ADNNC system is composed of a neural controller and a smooth compensator, as expressed in (3.2): the neural controller uses a DRBF network to mimic the ideal controller, and the smooth compensator is designed to compensate for the difference between the ideal controller and the neural controller. The output of the DRBF network with its current number of hidden neurons is given in (3.3), in which each hidden neuron is connected to the output layer through a connection weight and is characterized by its firing weight, center, and width.
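As a concrete illustration, the sketch below evaluates such a network output, assuming (as is common for RBF networks) a Gaussian firing weight for each hidden neuron; the exact basis function used in (3.3) may differ.

```python
import numpy as np

def drbf_output(z, centers, widths, weights):
    """Output of an RBF network: u_nc = sum_j w_j * Phi_j(z).

    z       : network input vector, shape (d,)
    centers : hidden-neuron centers, shape (N, d)
    widths  : hidden-neuron widths,  shape (N,)
    weights : output-layer weights,  shape (N,)
    """
    # Gaussian firing weight of each hidden neuron (assumed basis function).
    phi = np.exp(-np.sum((z - centers) ** 2, axis=1) / widths ** 2)
    return float(weights @ phi), phi

# Example: a network with 3 hidden neurons and a scalar input (e.g., the sliding surface).
centers = np.array([[-1.0], [0.0], [1.0]])
widths = np.array([0.5, 0.5, 0.5])
weights = np.array([0.2, -0.1, 0.3])
u_nc, phi = drbf_output(np.array([0.3]), centers, widths, weights)
```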

3.1. Structural Learning of DRBF Network

To attack the problem of structure determination in the RBF network, this paper proposes a simple structural learning algorithm. In the growing process, the existing hidden neurons can be described mathematically as clusters [1, 22]. If a new input datum falls within the boundary of these clusters, the DRBF network does not generate a new hidden neuron but only updates the parameters of the existing hidden neurons. For each incoming datum, the maximum firing degree over the existing hidden neurons is computed [1]; this maximum degree becomes smaller as the incoming datum lies farther from the existing hidden neurons. If the maximum degree falls below a pregiven growing threshold, a new hidden neuron is generated, and the center and width of the new hidden neuron and its output action strength are selected accordingly, with the initial width set to a prespecified constant. Next, the structural learning phase determines whether to cancel existing hidden neurons and weights that have become inappropriate. A significance index measuring the importance of each hidden neuron is computed at every iteration [22]; its initial value is 1, and it decays according to a reduction threshold value and a reduction speed constant. If the significance index of a hidden neuron falls below a pregiven pruning threshold, that hidden neuron and its weight are cancelled. If the computational load is an important issue for practical implementation, the thresholds are chosen as large values so that more hidden neurons and weights can be cancelled.
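A minimal sketch of the grow-and-prune logic described above is given below; the Gaussian firing weight, the exponential decay of the significance index, and all threshold values are assumptions, since the exact expressions are not reproduced in the text.

```python
import numpy as np

class DRBFStructure:
    """Grow/prune bookkeeping for a dynamic RBF network (illustrative sketch)."""

    def __init__(self, grow_thr=0.3, prune_thr=0.1, init_width=0.5, decay=0.9):
        self.grow_thr = grow_thr      # assumed threshold on the maximum firing degree
        self.prune_thr = prune_thr    # assumed threshold on the significance index
        self.init_width = init_width  # prespecified width for newly created neurons
        self.decay = decay            # assumed reduction-speed constant for the index
        self.centers, self.widths, self.weights, self.signif = [], [], [], []

    def firing(self, z):
        """Gaussian firing weight of every existing hidden neuron (assumed basis)."""
        return np.array([np.exp(-np.linalg.norm(z - c) ** 2 / w ** 2)
                         for c, w in zip(self.centers, self.widths)])

    def grow_or_update(self, z):
        """Create a new neuron only when the incoming datum is far from all clusters."""
        phi = self.firing(z)
        if phi.size == 0 or phi.max() < self.grow_thr:
            self.centers.append(np.asarray(z, dtype=float))
            self.widths.append(self.init_width)
            self.weights.append(0.0)
            self.signif.append(1.0)

    def prune(self, z, reduce_thr=0.2):
        """Decay the significance of weakly firing neurons and drop insignificant ones."""
        phi = self.firing(z)
        for j, p in enumerate(phi):
            if p < reduce_thr:
                self.signif[j] *= self.decay
        keep = [j for j, s in enumerate(self.signif) if s >= self.prune_thr]
        for name in ("centers", "widths", "weights", "signif"):
            setattr(self, name, [getattr(self, name)[j] for j in keep])

# Example: feed a stream of scalar inputs through the grow/prune logic.
net = DRBFStructure()
for z in [0.0, 0.05, 1.5, 1.6, -2.0]:
    net.grow_or_update(np.array([z]))
    net.prune(np.array([z]))
print(len(net.centers))  # number of hidden neurons after structural learning
```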

3.2. Parameter Learning of DRBF Network

Substituting (3.2) into (2.5) and using (2.6) yields (3.7). Multiplying both sides of (3.7) by the sliding surface gives (3.8). According to the gradient descent method, the weights are updated by (3.9) [23], in which a weight learning rate appears. Moreover, the centers and widths of the hidden neurons are adjusted by (3.10), with their own learning rates, to increase the learning capability. For small values of the learning rates, convergence can be guaranteed, but the convergence of the tracking error is slow; on the other hand, if the learning rates are chosen too large, the algorithm becomes unstable. To determine the learning rates of the parameter adaptive laws, a cost function is defined in (3.11). According to the gradient descent method, the adaptive law of the weight can also be represented as (3.12), and comparing (3.9) with (3.12) yields the Jacobian term of the system. Then, the convergence analysis in the following theorems derives the variable learning rates that ensure convergence of the output tracking error.
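To make the update equations concrete, the following sketch performs one gradient-descent step on the DRBF parameters, assuming the common cost function E = s²/2, a Gaussian basis, and absorption of the unknown system Jacobian into the learning rates; since the exact expressions in (3.9)-(3.12) are not reproduced in the text, the signs and sensitivities below are assumptions.

```python
import numpy as np

def update_drbf_parameters(s, z, centers, widths, weights, eta_w, eta_c, eta_s):
    """One gradient-descent step for the DRBF parameters (illustrative sketch).

    s : sliding-surface value (plays the role of the training signal)
    z : network input vector, shape (d,)
    centers (N, d), widths (N,), weights (N,) as in the earlier RBF sketch.
    """
    diff = z - centers                                        # (N, d)
    phi = np.exp(-np.sum(diff ** 2, axis=1) / widths ** 2)    # Gaussian firing weights

    # Weight update: the system Jacobian is absorbed into the learning rate.
    weights = weights + eta_w * s * phi

    # Center and width updates follow from differentiating the Gaussian basis.
    common = (weights * phi * s)[:, None]                     # (N, 1)
    centers = centers + eta_c * common * 2.0 * diff / (widths ** 2)[:, None]
    widths = widths + eta_s * (weights * phi * s) * 2.0 * np.sum(diff ** 2, axis=1) / widths ** 3
    return centers, widths, weights
```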

Theorem 3.1. Let the learning rate for the weights of the DRBF network be given, and define the maximum sensitivity of the sliding surface with respect to the weights over all iterations, measured in the Euclidean norm. Then the convergence of the tracking error is guaranteed if the weight learning rate is chosen as in (3.13).

Theorem 3.2. Let the learning rates for the centers and widths of the DRBF network be given, and define the corresponding maximum sensitivities of the sliding surface with respect to the centers and widths in the same manner. Then the convergence of the tracking error is guaranteed if the center and width learning rates are chosen as in (3.14).
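Although the explicit bounds in (3.13) and (3.14) are not reproduced above, learning-rate bounds obtained from a discrete-type Lyapunov analysis of gradient learning typically take the form below; the sensitivity maxima and the symbols are assumed notation, and the paper's exact expressions may differ.

```latex
% Typical form of variable learning-rate bounds from a discrete-type Lyapunov analysis
% (a sketch; the exact expressions in (3.13)-(3.14) may differ).
\begin{align}
0 &< \eta_w < \frac{2}{\bigl(P^{w}_{\max}\bigr)^{2}}, &
P^{w}_{\max} &= \max_{k}\,\bigl\lVert \partial s(k)/\partial \mathbf{w}\bigr\rVert ,\\
0 &< \eta_c < \frac{2}{\bigl(P^{c}_{\max}\bigr)^{2}}, &
0 &< \eta_\sigma < \frac{2}{\bigl(P^{\sigma}_{\max}\bigr)^{2}} ,
\end{align}
% so that the change of the discrete Lyapunov function V(k) = s^2(k)/2
% satisfies Delta V(k) < 0 whenever s(k) is nonzero.
```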

3.3. Stability Analysis

Since the number of hidden neurons in the DRBF network is finite in real-time practical applications, an approximation error is inevitable. The ideal controller can therefore be reformulated as in (3.15), in which the first part is the optimal neural controller and the remaining part denotes the approximation error between the ideal controller and the optimal neural controller. This paper proposes a smooth compensator of the form (3.16), which involves the estimated value of the approximation error and a small positive smoothing constant. Substituting (3.15) and (3.16) into (3.7) yields the closed-loop error equation (3.17). Then, a Lyapunov function candidate is defined in the form of (3.18), in which the error-estimation learning rate is a positive constant. Differentiating (3.18) with respect to time and using (3.17) gives (3.19). To make the derivative of the Lyapunov function negative semidefinite, the error estimation law is designed as in (3.20); then (3.19) can be rewritten in a negative semidefinite form. Since the derivative of the Lyapunov function is negative semidefinite, the sliding surface and the estimation error are bounded. Integrating the derivative bound with respect to time shows that the sliding surface is square-integrable; because the Lyapunov function is bounded and nonincreasing and the derivative of the sliding surface is bounded, Barbalat's lemma [21] implies that the sliding surface converges to zero, and hence the tracking error converges to zero as time tends to infinity. As a result, the stability of the proposed ADNNC system can be guaranteed.
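The boundedness-and-convergence argument above follows the standard Barbalat-type pattern; its usual skeleton is sketched below, assuming the Lyapunov derivative bound takes the form $\dot V \le -\kappa s^2$ for some constant $\kappa > 0$ (the exact bound obtained from (3.19) and (3.20) may differ).

```latex
% Standard Barbalat-type argument (a sketch under the assumed bound \dot V <= -kappa s^2).
\begin{align}
\int_{0}^{t}\kappa\,s^{2}(\tau)\,d\tau \;\le\; V(0)-V(t) \;\le\; V(0) < \infty
  \quad &\Longrightarrow\quad s \in L_{2},\\
s \in L_{2}\cap L_{\infty},\ \dot{s}\in L_{\infty}
  \quad &\Longrightarrow\quad \lim_{t\to\infty} s(t) = 0
  \quad\text{(Barbalat's lemma)} .
\end{align}
% Since the sliding surface acts as a stable filter of the tracking error,
% s -> 0 implies that the tracking error and its derivative also converge to zero.
```

In summary, the design steps of the ADNNC are summarized as follows.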

Step 1. Initialize the predefined parameters of the DRBF network.

Step 2. The tracking error and the sliding surface are given in (2.4) and (3.1), respectively.

Step 3. Determine whether to add a new hidden neuron by the growing condition and whether to cancel an existing hidden neuron by the significance index, as described in Section 3.1.

Step 4. The control law is designed in (3.2), in which the neural controller and the smooth compensator are given as (3.3) and (3.16), respectively.

Step 5. Determine the variable learning rates of the weights, centers, and widths by (3.13) and (3.14), respectively.

Step 6. Update the parameters of the neural controller by (3.9) and (3.10), and update the parameter of the smooth compensator by (3.20).

Step 7. Return to Step 2. A sketch of the resulting control loop is given below.
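The seven design steps above can be combined into a single control loop. The following sketch is illustrative only: the sliding-surface form, the Gaussian basis, the tanh-smoothed compensator, the update laws, and every numerical constant are assumptions standing in for (3.1)-(3.20).

```python
import numpy as np

def adnnc_step(x, y, net, g, dt):
    """One iteration of the ADNNC loop (illustrative sketch, not the paper's exact laws).

    x, y : (angle, angular rate) of the drive and response gyros
    net  : dict holding 'centers' (N,1), 'widths' (N,), 'weights' (N,), 'e_hat' (scalar)
    g    : dict of design constants (all values below are illustrative assumptions)
    """
    # Step 2: tracking error and sliding surface (assumed form s = e_dot + lam * e).
    e, de = y[0] - x[0], y[1] - x[1]
    s = de + g["lam"] * e
    z = np.array([s])                                    # network input (assumed to be s)

    # Step 3: structural learning -- grow a neuron when the input is far from all centers
    # (pruning by the significance index is omitted here for brevity).
    phi = np.exp(-np.sum((z - net["centers"]) ** 2, axis=1) / net["widths"] ** 2)
    if phi.max() < g["grow_thr"]:
        net["centers"] = np.vstack([net["centers"], z[None, :]])
        net["widths"] = np.append(net["widths"], g["init_width"])
        net["weights"] = np.append(net["weights"], 0.0)
        phi = np.exp(-np.sum((z - net["centers"]) ** 2, axis=1) / net["widths"] ** 2)

    # Step 4: control law = neural controller + smooth compensator (tanh chosen for smoothness).
    u = net["weights"] @ phi + net["e_hat"] * np.tanh(s / g["eps"])

    # Steps 5-6: parameter and compensator updates (fixed illustrative learning rates).
    net["weights"] = net["weights"] + g["eta_w"] * s * phi * dt
    net["e_hat"] = net["e_hat"] + g["eta_e"] * s * np.tanh(s / g["eps"]) * dt
    return u, net

# Example initialization (one neuron, illustrative constants).
net = {"centers": np.zeros((1, 1)), "widths": np.array([0.5]),
       "weights": np.array([0.0]), "e_hat": 0.0}
g = {"lam": 2.0, "grow_thr": 0.3, "init_width": 0.5, "eps": 0.05,
     "eta_w": 5.0, "eta_e": 1.0}
u, net = adnnc_step((1.0, 0.0), (1.6, -0.8), net, g, dt=0.001)
```

In a full simulation, `adnnc_step` would be called at every integration step of the response gyro, with the drive gyro integrated in open loop.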

4. Simulation Results

In this section, the proposed ADNNC system is applied to synchronize two identical chaotic gyros. To investigate the effectiveness of the proposed ADNNC system, two simulation cases, covering parameter variation and initial-condition variation, are considered as follows.

Case 1. , , and .

Case 2. , , and .

According to Theorems 3.1 and 3.2, the variable learning rates are chosen within the ranges given by (3.13) and (3.14), respectively. The control parameter values are chosen in consideration of the requirement of stability. The simulation results of the proposed ADNNC system with variable learning rates are shown in Figures 4 and 5 for Cases 1 and 2, respectively. The tracking responses of the angular position states are shown in Figures 4(a) and 5(a); the tracking responses of the angular velocity states are shown in Figures 4(b) and 5(b); the associated control efforts are shown in Figures 4(c) and 5(c); and the numbers of hidden neurons are shown in Figures 4(d) and 5(d). The simulation results show that the proposed ADNNC system with variable learning rates not only achieves favorable synchronization performance but also obtains an appropriate DRBF network size, thanks to the proposed self-structuring mechanism and online learning algorithms. To demonstrate the robust control performance of the proposed ADNNC system with variable parameter learning rates, a coupling term is examined next. The simulation results of the proposed ADNNC system with the coupling term are shown in Figures 6 and 7 for Cases 1 and 2, respectively. The tracking responses of the angular position states are shown in Figures 6(a) and 7(a); the tracking responses of the angular velocity states are shown in Figures 6(b) and 7(b); the associated control efforts are shown in Figures 6(c) and 7(c); and the numbers of hidden neurons are shown in Figures 6(d) and 7(d). The simulation results show that the proposed ADNNC system with variable learning rates still achieves favorable synchronization performance in the presence of the coupling term.

In addition, since the selection of the learning rates for the online training of the DRBF network has a significant effect on the network performance, the performance measures for various learning rates are summarized in Table 1. The table shows that the proposed ADNNC system with variable parameter learning rates achieves the most accurate synchronization performance. To verify the effect of learning rates chosen outside the convergence range, the simulation results of the proposed ADNNC system with such learning rates are shown in Figures 8 and 9 for Cases 1 and 2, respectively. The tracking responses of the angular position states are shown in Figures 8(a) and 9(a); the tracking responses of the angular velocity states are shown in Figures 8(b) and 9(b); the associated control efforts are shown in Figures 8(c) and 9(c); and the numbers of hidden neurons are shown in Figures 8(d) and 9(d). From the simulation results, unstable tracking responses are induced by the selection of learning rates outside the convergence region.

5. Conclusion

In this paper, an adaptive dynamic neural network control (ADNNC) system is proposed to synchronize chaotic symmetric gyros with linear-plus-cubic damping. The proposed ADNNC system is composed of a neural controller and a smooth compensator. The neural controller uses a dynamic radial basis function (DRBF) network to mimic an ideal controller in which the DRBF network can automatically grow and prune the network structure. The smooth compensator is designed to dispel the approximation error between the ideal controller and neural controller. Moreover, to speed up the convergence of tracking error, a discrete-type Lyapunov function is utilized to determine the variable learning rates of the adaptation laws. Numerical simulations have verified the effectiveness of the proposed ADNNC method.

Appendices

A. Proof of Theorem 3.1

A discrete-type Lyapunov function is selected as in (A.1), and the change of the Lyapunov function is expressed as (A.2). Moreover, the sliding-surface difference can be represented by (A.3), where the difference is induced by the change of the weights in the DRBF network. Using (3.11), (3.12), and (A.1), the weight change can be computed, so (A.4) can be simplified; from (A.3) and (A.7), the change of the Lyapunov function can then be rewritten in terms of the weight learning rate. If the weight learning rate is chosen as in (3.13), the change of the discrete-type Lyapunov function is negative whenever the sliding surface is nonzero, so stability is guaranteed and the output tracking error converges to zero as the iteration number tends to infinity. This completes the proof of Theorem 3.1.
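The skeleton of this argument, in the form such proofs usually take, is sketched below; the Lyapunov function, the weight-sensitivity term, and the resulting bound are assumed notation consistent with the earlier learning-rate sketch rather than the paper's exact expressions.

```latex
% Sketch of the discrete-type Lyapunov convergence argument (assumed notation).
\begin{align}
V(k) &= \tfrac{1}{2}\,s^{2}(k), &
\Delta V(k) &= \tfrac{1}{2}\bigl[s^{2}(k+1)-s^{2}(k)\bigr],\\
\Delta s(k) &\approx
  \Bigl(\frac{\partial s(k)}{\partial \mathbf{w}}\Bigr)^{\!\top}\Delta\mathbf{w}
  = -\eta_w\, s(k)\,
    \Bigl\lVert \frac{\partial s(k)}{\partial \mathbf{w}} \Bigr\rVert^{2},\\
\Delta V(k) &= \Delta s(k)\Bigl[s(k)+\tfrac{1}{2}\Delta s(k)\Bigr]
  = -\tfrac{1}{2}\,s^{2}(k)\,
    \eta_w \Bigl\lVert \frac{\partial s(k)}{\partial \mathbf{w}} \Bigr\rVert^{2}
    \Bigl[\,2-\eta_w \Bigl\lVert \frac{\partial s(k)}{\partial \mathbf{w}} \Bigr\rVert^{2}\Bigr].
\end{align}
% Hence Delta V(k) < 0 for s(k) != 0 whenever 0 < eta_w < 2/(P^w_max)^2,
% with P^w_max = max_k || partial s(k) / partial w ||.
```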

B. Proof of Theorem 3.2

To prove Theorem 3.2, the following lemmas were used [9].

Lemma B.1. Let g(x) = x exp(−x²); then |g(x)| < 1 for all x ∈ ℝ.

Lemma B.2. Let g(x) = x² exp(−x²); then |g(x)| < 1 for all x ∈ ℝ.

(1) According to Lemma B.1, the sensitivity of the firing weight with respect to the center is bounded, as expressed in (B.1). Moreover, the sliding-surface difference can be represented by (B.2), where the difference is induced by the change of the center of a hidden neuron. Using (3.11), (3.12), and (B.1), the center change can be computed as (B.3); then, using (B.3) and (B.2), the change of the Lyapunov function follows, and the bound in (B.5) is obtained. If the center learning rate is chosen as in (3.14), the corresponding term in (B.5) is less than 1; therefore, the discrete-type Lyapunov stability, in the sense of (A.2) and (A.3), is guaranteed.

(2) According to Lemma B.2, the sensitivity of the firing weight with respect to the width is bounded, as expressed in (B.6). Moreover, the sliding-surface difference can be represented by (B.7), where the difference is induced by the change of the width of a hidden neuron. Using (3.11), (3.12), and (B.6), the width change can be computed as (B.8); then, using (B.8) and (B.7), the change of the Lyapunov function follows, and the bound in (B.10) is obtained. If the width learning rate is chosen as in (3.14), the corresponding term in (B.10) is less than 1; therefore, the discrete-type Lyapunov stability, in the sense of (A.2) and (A.3), is guaranteed.

Acknowledgments

The authors appreciate the partial financial support from the National Science Council of the Republic of China under Grant NSC 98-2221-E-216-040. The authors are grateful to the reviewers for their valuable comments.