Abstract

The drive-response synchronization of delayed neural networks with discontinuous activation functions is investigated via adaptive control. Synchronization in this paper means that the synchronization error approaches zero for almost all time as time goes to infinity. The discontinuous activation functions are assumed to be monotone increasing and may be unbounded. Owing to this mild condition on the discontinuous activations, an adaptive control technique is utilized to control the response system. Under the framework of Filippov solutions, by using a Lyapunov function and the chain rule for differential inclusions, rigorous proofs are given to show that adaptive control can realize complete synchronization of the considered model. The results of this paper are also applicable to continuous neural networks, since continuous functions are a special case of the activations considered here. Numerical simulations verify the effectiveness of the theoretical results. Moreover, when there are parameter mismatches between the drive and response neural networks with discontinuous activations, a numerical example is also presented to demonstrate complete synchronization by using discontinuous adaptive control.

1. Introduction

In the last decades, synchronization of coupled chaotic systems (including fractional-order and integer-order chaotic systems [1–3]) has received increasing research attention from different branches of science and various application fields, owing to its potential applications in secure communication, biological systems, and information science [4, 5]. Along with the introduction of different kinds of synchronization, such as complete synchronization [6], lag synchronization [7, 8], quasi-synchronization [9, 10], projective synchronization [11–13], and generalized synchronization [14, 15], many control methods have been developed, for instance, state feedback control [9, 16] and adaptive control [7, 15, 17]. The adaptive control technique deserves special attention since its control gains need not be known in advance and can self-adjust according to the designed adaptive law.

Delayed neural networks, as a class of important functional differential equations, have found many applications in areas such as signal processing, associative memories, pattern classification, and optimization. Therefore, investigating the dynamical behaviors of neural networks with various parameters has long been an intensive research topic, covering, for example, stability of equilibrium points [18] and chaos synchronization [7]. However, the activation functions in most known models, including those in [7, 18], are assumed to be continuous or even Lipschitz continuous. Actually, neural networks with discontinuous neuron activations are ideal models for the case where the gain of the neuron amplifiers is very high, a situation that frequently arises in applications [19, 20]. Therefore, there are some results in the literature on the dynamical behaviors of neural networks with discontinuous activation functions. For instance, Forti et al. investigated the stability and global convergence of delayed and nondelayed neural networks with discontinuous activations in [19, 20]; the authors in [21–23] studied the global robust stability of delayed neural networks with discontinuous neuron activations; and in [24, 25], the authors considered the existence and global convergence of periodic and almost periodic solutions of neural networks with discontinuous activations.

On the other hand, although there are many results concerning synchronization of chaotic neural networks, few published papers consider the same issue for neural networks with discontinuous activations; we found only [10, 26]. The difficulty comes from the discontinuity of the activations: the methods used to analyze the stability of neural networks with discontinuous activations cannot be extended directly to the chaos synchronization case. In [10], the authors investigated quasi-synchronization of discontinuous neural networks with and without parameter mismatches; that is, the synchronization error can only be driven into a small region around zero but cannot approach zero. The results of [10] revealed that complete synchronization is difficult to realize due to the discontinuity of the activation functions. It is known that one of the most important applications of chaos synchronization is secure communication, and there the transmitted signal can be recovered only when the drive and response systems achieve complete synchronization. Therefore, it is necessary to study complete synchronization of neural networks with discontinuous activations. In [26], complete synchronization of discontinuous neural networks was investigated via an approximation and linear matrix inequality (LMI) approach. However, with the approximation approach used in [26], the control gain is uncertain and may be very large, which makes it inapplicable in practice. Moreover, some results on synchronization and control of discontinuous dynamical systems are complex and difficult to verify. For instance, the synchronization criteria obtained in [27] were expressed in terms of integral inequalities, and the restrictive condition on the discontinuity was weakened in the sense that, as time goes to infinity, the discontinuous function approaches a continuous function. From the above analysis, investigating the synchronization of neural networks with discontinuous activations is a genuinely difficult task.

Motivated by the above analysis, this paper investigates asymptotic complete synchronization of neural networks with discontinuous activation functions via an adaptive control technique. Because of the discontinuity of the activation functions, solutions are understood in the sense of differential inclusions within the Filippov framework [28], and complete synchronization in this paper means that the state error between the drive (or master) and response systems approaches zero for almost all (a.a.) time as time goes to infinity. We do not impose a growth condition on the activation functions; the discontinuous activations are only assumed to be monotone increasing and may be unbounded. Owing to this mild condition on the discontinuous activations, a precise control gain is difficult to determine in advance, and state feedback control is less suitable than the adaptive control technique. Under the framework of Filippov solutions, by using a Lyapunov function and the chain rule for differential inclusions, rigorous proofs are given for the asymptotic stability of the error system of the coupled systems. Numerical simulations show the effectiveness of the theoretical results. Moreover, when there are parameter mismatches between the drive and response neural networks with discontinuous activations, a numerical example is presented to demonstrate complete synchronization by using discontinuous adaptive control.

Notations. In the sequel, if not explicitly stated, matrices are assumed to have compatible dimensions. $I_n$ stands for the identity matrix of dimension $n$. $\mathbb{R}$ is the space of real numbers and $\mathbb{R}^n$ is the $n$-dimensional Euclidean space. The Euclidean norm in $\mathbb{R}^n$ is denoted by $\|\cdot\|$; accordingly, for a vector $x \in \mathbb{R}^n$, $\|x\| = \sqrt{x^T x}$, where $T$ denotes transposition. $x = 0$ means that each component of $x$ is zero. $\mathbb{R}^{n \times n}$ denotes the set of $n \times n$ real matrices.

The rest of this paper is organized as follows. In Section 2, the model of delayed neural networks with discontinuous activation functions is described, and some necessary assumptions, definitions, and lemmas are given. Our main results and their rigorous proofs are presented in Section 3. In Section 4, two examples with numerical simulations are provided to show the effectiveness of our results. Conclusions are given in Section 5, followed by the acknowledgments.

2. Preliminaries

In this paper, we consider the delayed neural network described by
$$\dot{x}(t) = -Dx(t) + Af(x(t)) + Bf(x(t-\tau)) + I, \tag{1}$$
where $x(t) = (x_1(t), \ldots, x_n(t))^T \in \mathbb{R}^n$ is the state vector; $D = \operatorname{diag}(d_1, \ldots, d_n)$, in which $d_i > 0$, $i = 1, \ldots, n$, are the neuron self-inhibitions; $\tau > 0$ is the transmission delay; $A = (a_{ij})_{n \times n}$ and $B = (b_{ij})_{n \times n}$ are the connection weight matrix and the delayed connection weight matrix, respectively; the activation function $f(x) = (f_1(x_1), \ldots, f_n(x_n))^T$ represents the output of the network; and $I = (I_1, \ldots, I_n)^T$ is the external input vector.

For the neural network (1), we make the following assumption.

Assumption 1. For every $i = 1, \ldots, n$, $f_i$ is monotone nondecreasing and has at most a finite number of jump discontinuities in every compact interval.

Remark 2. Assumption 1 was used in [20]. A function satisfying this assumption need not be continuously differentiable on compact intervals, whereas continuous differentiability is required in [10, 26]. On the other hand, "monotone nondecreasing" can be relaxed to "monotonic": a monotone decreasing activation component can always be rewritten as a monotone increasing one by changing the signs of the corresponding columns of the connection matrices $A$ and $B$ (as in Example 13 of Section 4), so the rewritten system and the original system are identical; see the worked identity after this remark. Furthermore, the results of this paper are also applicable to neural networks with continuous monotone activation functions, since continuous functions are a special case of the activations considered here.
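To make the sign-flip argument concrete, the following identity (written with generic symbols rather than the specific matrices of Example 13) shows why a monotone decreasing component can be traded for a monotone increasing one without changing the system:
$$\sum_{j=1}^{n} a_{ij} f_j(x_j) \;=\; \sum_{j=1}^{n} \tilde{a}_{ij}\, \tilde{f}_j(x_j), \qquad \tilde{a}_{ij} = \sigma_j a_{ij}, \quad \tilde{f}_j = \sigma_j f_j, \quad \sigma_j \in \{-1, +1\}.$$
Choosing $\sigma_j = -1$ for every decreasing component $f_j$ (and applying the same $\sigma_j$ to the corresponding column of $B$) yields monotone nondecreasing activations $\tilde{f}_j$ while leaving the right-hand side of (1) unchanged.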

Since $f$ is discontinuous at isolated jump points, one cannot define a solution of (1) in the conventional sense. Therefore, we resort to the notion of Filippov solutions and stability results for differential inclusions [28]. The Filippov approach deals with the discontinuity by determining the solution on the discontinuity surface through a set-valued map.

The Filippov set-valued map of $f(x)$ at $x \in \mathbb{R}^n$ is defined as follows [28]:
$$K[f](x) = \bigcap_{\delta > 0} \; \bigcap_{\mu(N) = 0} \overline{\operatorname{co}}\,\big[ f\big(B(x, \delta) \setminus N\big) \big], \tag{4}$$
where $\overline{\operatorname{co}}\,[E]$ is the closure of the convex hull of the set $E$, $B(x, \delta) = \{ y \in \mathbb{R}^n : \|y - x\| \le \delta \}$, and $\mu(N)$ is the Lebesgue measure of the set $N$.

When $f$ satisfies Assumption 1, it is not difficult to get from (4) that
$$K[f_i](x) = \big[ f_i(x^-),\, f_i(x^+) \big], \quad i = 1, \ldots, n,$$
where $f_i(x^-) = \lim_{\rho \to x^-} f_i(\rho)$ and $f_i(x^+) = \lim_{\rho \to x^+} f_i(\rho)$.
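As a quick illustration (with a hypothetical activation, not the one used later in Example 13), consider the scalar function $f_i(x) = x + \operatorname{sign}(x)$, which is monotone increasing with a single jump at $x = 0$. Then
$$K[f_i](x) = \begin{cases} \{\, x + \operatorname{sign}(x) \,\}, & x \neq 0, \\ [-1, 1], & x = 0, \end{cases}$$
so the set-valued map is single-valued away from the jump and fills in the jump interval at the discontinuity.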

Definition 3 (see [19]). A function $x : [-\tau, T) \to \mathbb{R}^n$, $T \in (0, +\infty]$, is a solution (in the sense of Filippov) of the discontinuous system (1) on $[-\tau, T)$ if (i) $x$ is continuous on $[-\tau, T)$ and absolutely continuous on $[0, T)$; (ii) there exists a measurable function $\gamma : [-\tau, T) \to \mathbb{R}^n$ with $\gamma(t) \in K[f](x(t))$ for almost all (a.a.) $t \in [-\tau, T)$ such that
$$\dot{x}(t) = -Dx(t) + A\gamma(t) + B\gamma(t-\tau) + I \quad \text{for a.a. } t \in [0, T). \tag{6}$$
Note that the measurable function $\gamma$ is a single-valued function, called a measurable selection of $K[f](x)$. Any function $\gamma$ satisfying (6) is called an output associated to $x$. In this paper, we assume that the trajectory of the solution of neural network (1) is chaotic.

The next definition concerns the initial value problem (IVP) associated with system (1).

Definition 4 ((IVP), see [20]). For any continuous function $\phi : [-\tau, 0] \to \mathbb{R}^n$ and any measurable selection $\psi : [-\tau, 0] \to \mathbb{R}^n$ such that $\psi(s) \in K[f](\phi(s))$ for a.a. $s \in [-\tau, 0]$, by an initial value problem associated to (1) with initial condition $(\phi, \psi)$, one means the following problem: find a couple of functions $[x, \gamma] : [-\tau, T) \to \mathbb{R}^n \times \mathbb{R}^n$ such that $x$ is a solution of (1) on $[-\tau, T)$ for some $T > 0$, $\gamma$ is an output associated to $x$, and $x(s) = \phi(s)$, $\gamma(s) = \psi(s)$ for a.a. $s \in [-\tau, 0]$.

Lemma 5 (see [20]). Suppose that Assumption 1 is satisfied. Then any IVP for (1) has at least a local solution $[x, \gamma]$ defined on $[-\tau, T)$ for some $T > 0$.
Since a chaotic system has a strange attractor, there exists a bounded region containing the attractor such that every orbit of the system never leaves this region. Hence, in view of Lemma 5, the solution of (1) can be extended to $[0, +\infty)$.

Lemma 6 ((Chain rule), see [29]). If $V(x) : \mathbb{R}^n \to \mathbb{R}$ is C-regular and $x(t) : [0, +\infty) \to \mathbb{R}^n$ is absolutely continuous on any compact subinterval of $[0, +\infty)$, then $x(t)$ and $V(x(t)) : [0, +\infty) \to \mathbb{R}$ are differentiable for a.a. $t \in [0, +\infty)$, and
$$\frac{d}{dt} V(x(t)) = \langle \zeta(t), \dot{x}(t) \rangle, \quad \forall \zeta(t) \in \partial V(x(t)),$$
where $\partial V(x(t))$ is the Clarke generalized gradient of $V$ at $x(t)$.

Lemma 7 (see [30, page 174]). Let , . Assume , is a measurable function on . If is monotone on , then is differentiable for a.a. and (i), for ;(ii), where , .

Consider the neural network model (1) as the drive system; the controlled response system is
$$\dot{y}(t) = -Dy(t) + Af(y(t)) + Bf(y(t-\tau)) + I + u(t), \tag{9}$$
where $y(t) = (y_1(t), \ldots, y_n(t))^T$ is the state of the response system, $u(t) = (u_1(t), \ldots, u_n(t))^T$ is the controller to be designed, and the other parameters are the same as those defined in system (1).

Definition 8. The neural network (9) with discontinuous activations is said to be asymptotically synchronized with system (1) if, for any initial values, there holds
$$\lim_{t \to +\infty} \| y(t) - x(t) \| = 0.$$

3. Main Results

In this section, rigorous mathematical proofs of complete synchronization between systems (9) and (1) under adaptive control are given. Remarks are provided to explain why, under Assumption 1, state feedback control is not suitable.

By virtue of the above preparations, in order to study the synchronization between (1) and (9), we only need to consider the same problem for the following Filippov forms of the two systems:
$$\dot{x}(t) = -Dx(t) + A\gamma(t) + B\gamma(t-\tau) + I, \tag{11}$$
$$\dot{y}(t) = -Dy(t) + A\tilde{\gamma}(t) + B\tilde{\gamma}(t-\tau) + I + u(t), \tag{12}$$
where $\gamma(t) \in K[f](x(t))$ and $\tilde{\gamma}(t) \in K[f](y(t))$ are outputs associated to $x$ and $y$, respectively.

Let $e(t) = y(t) - x(t)$. Subtracting (11) from (12) yields the following error system:
$$\dot{e}(t) = -De(t) + A\big( \tilde{\gamma}(t) - \gamma(t) \big) + B\big( \tilde{\gamma}(t-\tau) - \gamma(t-\tau) \big) + u(t), \tag{13}$$
where $e(t) = (e_1(t), \ldots, e_n(t))^T$.

Obviously, $e(t) = 0$ is an equilibrium point of the error system (13) in the absence of control. If system (13) is globally asymptotically stable at the origin for any given initial condition, then global asymptotical synchronization between (11) and (12) (or (1) and (9)) is achieved.

Theorem 9. Suppose that Assumption 1 is satisfied. Then the neural networks (1) and (9) achieve global asymptotical synchronization under the adaptive controller (14), in which the adaptation constant appearing in the adaptive law is an arbitrary positive constant.
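As an illustrative sketch (not necessarily the exact controller (14)), an adaptive controller of the kind commonly used for this type of result, with error feedback and gains that grow according to an adaptive law ($\xi_i > 0$ denoting arbitrary positive constants), reads
$$u_i(t) = -\varepsilon_i(t)\, e_i(t), \qquad \dot{\varepsilon}_i(t) = \xi_i\, e_i^2(t), \qquad i = 1, \ldots, n.$$
With such a law each gain $\varepsilon_i(t)$ is nondecreasing and stops growing once $e_i(t)$ has been driven to zero, which is consistent with the gains settling to constants as reported at the end of the proof of Theorem 9 and in Figure 3.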

Proof. Since satisfies Assumption 1, is a single-valued measurable function satisfying . Therefore are monotone increasing and measurable functions on . In view of Lemma 7, is differentiable for a.a and there exist positive constants such that ,  . Consequently, for a.a. , there holds where .
Define the following Lyapunov functional candidate: where and are positive constants to be determined.
Then, for a.a. , computing the derivative of along trajectories of error system (13), we get from Lemma 6 and the calculus for differential inclusion in [31] that where , .
It follows from (15) and (17) that Take . Then one derives from (18) that
Take . Then Therefore, for a.a. we have According to Definition 8, the neural networks (1) and (9) achieve global asymptotical synchronization. Moreover, from (16), , , approach to some constants as . This completes the proof.
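For concreteness, a Lyapunov functional consistent with the description above can be sketched as follows (this is an assumption-laden sketch: it presumes the error-feedback adaptive law sketched after Theorem 9, with $p_i$ and $q$ playing the role of the positive constants to be determined):
$$V(t) = \frac{1}{2} \sum_{i=1}^{n} e_i^2(t) + \sum_{i=1}^{n} \frac{1}{2\xi_i} \big( \varepsilon_i(t) - p_i \big)^2 + q \int_{t-\tau}^{t} e^T(s) e(s)\, ds.$$
The first term penalizes the synchronization error, the second absorbs the difference between the adaptive gains and sufficiently large target values $p_i$, and the integral term compensates for the delayed coupling; differentiating $V$ along (13) and choosing $p_i$ and $q$ large enough is the standard route to the negative semidefinite estimate used in the proof.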

Remark 10. Although the positive constants appearing in the proof of Theorem 9 exist for a.a. $t$, they are usually unknown because the activation function is uncertain. Hence, for neural networks with discontinuous activations, using state feedback control to synchronize (1) and (9) is not practical, since the maximum value of the control gain cannot be ascertained in advance. However, the adaptive control technique can synchronize this class of neural networks, because the control gains increase automatically according to the adaptive laws. This is the main reason why we choose the adaptive control method to study the synchronization of the considered model.

Remark 11. Under Assumption 1, complete synchronization of neural networks with discontinuous activation functions is achieved in this paper. By contrast, based on the growth condition used in [32, 33], the authors in [10] obtained only quasi-synchronization criteria for systems (1) and (9) via state feedback control. Therefore, the results of this paper improve the corresponding results of [10].

Remark 12. The synchronization criteria in this paper are simple and can be easily verified in practice. In [27], new conditions on synchronization of linearly coupled dynamical networks with non-Lipschitz right-hand sides were derived, but the discontinuous functions were assumed to be weak-QUAD or semi-QUAD, which means that the discontinuous function approaches a continuous function, and the criteria were expressed as integral inequalities. Such synchronization criteria may not be easily verified in practice, especially when the discontinuous functions have countably many discontinuities. Hence, the results of this paper improve those in [27].

4. Numerical Examples

In this section, we provide two examples to show the effectiveness of the theoretical results obtained above. Example 15 also shows that, when the discontinuous neural networks have parameter mismatches, synchronization can still be realized under the discontinuous adaptive control developed in our previous works.

Example 13. Consider the delayed neural network model (1) with the following parameters: ,  ,  ,   is identity matrix of 2-dimension, and the activation function is with
Figure 1 shows the chaotic-like trajectory of (1) with the chosen initial condition.

Obviously, the activation function in this example is monotone increasing and has a jump discontinuity, so it satisfies Assumption 1. It follows from Theorem 9 that system (9) can synchronize with the drive system (1) under the adaptive controller (14).

In the numerical simulations, we use the forward Euler method, which was used in [34] to obtain numerical solutions of differential inclusions. The step length in the simulations is taken as 0.01. The simulation results are shown in Figures 2 and 3. Figure 2 depicts the trajectories of the error states, and Figure 3 shows the time response of the adaptive control gains. Figures 2 and 3 show that the synchronization error approaches zero quickly as time evolves, and the control gains settle to constants once synchronization has been realized. The numerical simulations thus verify the theoretical results.
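To make the simulation procedure reproducible in spirit, the following Python sketch implements the forward Euler scheme described above for a drive-response pair with a discontinuous monotone activation and an error-feedback adaptive gain. All matrices, the delay, the activation, and the initial data are hypothetical placeholders, not the actual parameters of Example 13.

```python
import numpy as np

# Illustrative parameters only -- NOT the exact values of Example 13;
# matrices, delay, activation, and initial data are hypothetical placeholders.
n = 2                              # network dimension
h = 0.01                           # Euler step length (Example 13 uses 0.01)
tau = 1.0                          # transmission delay (hypothetical)
D = np.eye(n)                      # neuron self-inhibitions
A = np.array([[2.0, -0.1],
              [-5.0, 3.0]])        # connection weights (hypothetical)
B = np.array([[-1.5, -0.1],
              [-0.2, -2.5]])       # delayed connection weights (hypothetical)
I_ext = np.zeros(n)                # external input
xi = 1.0                           # adaptation rate (arbitrary positive constant)

def f(v):
    """Monotone increasing, discontinuous activation (hypothetical)."""
    return np.tanh(v) + 0.05 * np.sign(v)

T = 50.0
steps = int(T / h)
d = int(tau / h)                   # delay expressed in Euler steps
total = steps + d + 1

x = np.zeros((total, n))           # drive state, index k <-> time (k - d) * h
y = np.zeros((total, n))           # response state
x[:d + 1] = np.array([0.4, 0.6])   # constant initial history on [-tau, 0] (placeholder)
y[:d + 1] = np.array([-0.5, 0.3])
eps = np.zeros(n)                  # adaptive control gains

for k in range(d, total - 1):
    e = y[k] - x[k]                                            # synchronization error
    u = -eps * e                                               # adaptive error feedback
    dx = -D @ x[k] + A @ f(x[k]) + B @ f(x[k - d]) + I_ext
    dy = -D @ y[k] + A @ f(y[k]) + B @ f(y[k - d]) + I_ext + u
    x[k + 1] = x[k] + h * dx                                   # forward Euler step (drive)
    y[k + 1] = y[k] + h * dy                                   # forward Euler step (response)
    eps = eps + h * xi * e ** 2                                # adaptive law for the gains

print("final error:", np.abs(y[-1] - x[-1]), "final gains:", eps)
```

Plotting the stored error components and gains over time produces pictures analogous to Figures 2 and 3.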

Remark 14. In Example 13, the activation function is neither generalized differentiable at its jump point nor globally Lipschitz. However, these two conditions are required in [26]. Hence, this example demonstrates that the results of this paper improve the corresponding results in [26].
When there are parameter mismatches between the drive and response systems, only quasi-synchronization can be realized if a state feedback or continuous adaptive control technique is used. In our previous research, a simple yet versatile discontinuous adaptive controller was designed to synchronize chaotic systems with uncertain perturbations. The following example demonstrates that this discontinuous adaptive control is also applicable to synchronizing neural networks with discontinuous activations; for more details of the discontinuous adaptive controller, see [6, 7, 15]. The models used in the following example are taken from the example in [10].

Example 15. Consider the delayed neural network model (1) with a different discontinuous activation function; the other parameters are the same as those in Example 13. We take system (1) with this activation as the drive system. The chaotic-like trajectory of the drive system can be seen in Figure 4.
The response system with parameter mismatches, denoted by (25), is assumed to be the same as that in [10].
The chaotic-like trajectory of system (25) is shown in Figure 5, which is different from that in Figure 4.
According to the analysis in [7, 15], the drive system and the response system (25) can realize synchronization under a discontinuous adaptive controller whose adaptation rates are arbitrary positive constants; a sketch of its typical form is given below.
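As a hedged sketch of the kind of discontinuous adaptive controller meant here (following the general form used in [7, 15]; the symbols below are illustrative, with $\xi_i, \zeta_i > 0$ standing for the arbitrary positive constants mentioned above), one may take
$$u_i(t) = -\varepsilon_i(t)\, e_i(t) - \eta_i(t)\, \operatorname{sign}\big(e_i(t)\big), \qquad \dot{\varepsilon}_i(t) = \xi_i\, e_i^2(t), \qquad \dot{\eta}_i(t) = \zeta_i\, |e_i(t)|.$$
The discontinuous $\operatorname{sign}$ term is what allows the controller to dominate the bounded mismatch between the drive and response parameters, so that the error is driven to zero rather than merely into a neighborhood of zero.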
In the numerical simulations, we again use the forward Euler method, now with a step length of 0.001. The simulation results are shown in Figures 6–8. Figure 6 depicts the trajectories of the error states as time evolves, and Figures 7 and 8 show the time responses of the two adaptive control gains. The simulations demonstrate that the neural networks with discontinuous activations and parameter mismatches achieve synchronization by utilizing the discontinuous adaptive control technique.

Remark 16. It can be seen from Figures 7 and 8 that the final control gains are much smaller than the fixed gain utilized in [10] to control system (25), where only quasi-synchronization was achieved. This example demonstrates that the designed discontinuous adaptive controller is indeed useful. Since this adaptive controller has been discussed in detail in our previous works, we use it here without repeating the proof.

5. Conclusions

In this paper, a new definition of synchronization for discontinuous dynamical systems has been proposed. Under this definition, synchronization of delayed neural networks with discontinuous activation functions via adaptive control has been studied. The discontinuous activations in the neural networks are assumed to be monotone increasing and may be unbounded. Within the framework of Filippov solutions, by using a Lyapunov function and the chain rule for differential inclusions, sufficient conditions guaranteeing asymptotic complete synchronization of the considered model have been derived. Numerical simulations verify the effectiveness of the theoretical results. When there are parameter mismatches between the drive and response neural networks with discontinuous activations, a discontinuous adaptive controller can achieve the same goal. The results of this paper are also applicable to neural networks with continuous monotone activation functions.

Acknowledgments

This work was jointly supported by the National Natural Science Foundation of China (NSFC) under Grants nos. 61263020, 61272530, and 11072059, the Natural Science Foundation of Jiangsu Province of China under Grant no. BK2012741, the Scientific Research Fund of Yunnan Province under Grant no. 2010ZC150, and the Scientific Research Fund of Chongqing Normal University under Grants nos. 12XLB031 and 940115.