#### Abstract

A novel and effective approach to synchronization analysis of neural networks is developed using the nonlinear operator known as the generalized Dahlquist constant together with general intermittent control. The proposed approach yields a design procedure for synchronizing a large class of neural networks. Numerical simulations, in which the theoretical results are applied to typical neural networks with and without a delayed term, demonstrate the effectiveness and feasibility of the proposed technique.

#### 1. Introduction

Since its introduction by Pecora and Carroll in 1990, synchronization of chaotic systems [1–10] has been of great practical significance and has attracted considerable interest in recent years. In the literature cited above, the approach to stability analysis is essentially Lyapunov's method. As is well known, constructing a proper Lyapunov function usually requires considerable skill, and Lyapunov's method does not describe the convergence rate near the equilibrium of the system. Hence, there is little compatibility among the stability criteria obtained so far.

The generalized Dahlquist constant [11] has been applied to the analysis of impulsive synchronization [12, 13].

Intermittent control [14–18] has been used for a variety of purposes in engineering fields such as manufacturing, transportation, air-quality control, and communication. Synchronization using an intermittent control method has been discussed [15–18]. Compared with continuous control methods [2–10], intermittent control is more efficient when the system output is measured intermittently rather than continuously. Our interest focuses on the class of intermittent control with time duration, wherein the control is activated in certain nonzero time intervals and is off in other time intervals. A special case of such a control law is of the form

where denotes the control strength, denotes the switching width, and denotes the control period. In this paper, based on the generalized Dahlquist constant and the Gronwall inequality, a general intermittent controller

is designed, where is a strictly monotone increasing function on with . Sufficient yet generic criteria for synchronization of typical neural networks with and without a delayed term are then obtained.
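The displayed formulas in this paragraph did not survive extraction. For concreteness, a standard special case of such an intermittent law, written in our own (assumed) notation with control strength k, control period T, and switching width δ, is:

```latex
% Intermittent linear feedback (assumed standard form, not the authors' exact display):
u(t) =
\begin{cases}
-k\,\bigl(y(t) - x(t)\bigr), & nT \le t < nT + \delta,\\
0, & nT + \delta \le t < (n+1)T,
\end{cases}
\qquad n = 0, 1, 2, \ldots
```

The general controller of this paper replaces the linear term by a strictly monotone increasing function of the error (the paper's exact conditions on that function were lost in extraction).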

This paper is organized as follows. In Section 2, some necessary background materials are presented, and a simple configuration of coupled neural networks is formulated. Section 3 deals with synchronization. The theoretical results are applied to typical chaotic neural networks, and numerical simulations are shown in this section. Finally, some concluding remarks are given in Section 4.

#### 2. Formulations

Let be a Banach space endowed with the Euclidean norm , that is, , where denotes the inner product, and let be an open subset of . We consider the following system: where are nonlinear operators defined on , , is a positive constant time delay, and .

*Definition 1. *System (3) is said to be exponentially stable on a neighborhood of the equilibrium point if there exist constants , such that
where is any solution of (3) initiated from .
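The displayed inequality of Definition 1 was lost in extraction; in the usual notation (the symbols x*, M, λ, and D below are our assumptions) it reads:

```latex
% Exponential stability on a neighborhood D of the equilibrium x^*:
\exists\, M \ge 1,\ \lambda > 0:\quad
\lVert x(t) - x^{*} \rVert \;\le\; M e^{-\lambda t}\, \lVert x_{0} - x^{*} \rVert,
\qquad t \ge 0,\ x_{0} \in D.
```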

*Definition 2 (see [11]). *Suppose is an open subset of the Banach space , and is an operator.

The constant is called the generalized Dahlquist constant of on , where ; here denotes the operator mapping every point onto .
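The displayed formula defining the constant was also lost; the generalized Dahlquist constant of an operator F on Ω is usually written as follows (our reconstruction of the standard definition from [11]; the notation is assumed):

```latex
% Generalized Dahlquist constant of F on \Omega:
\alpha(F) \;=\; \lim_{h \to 0^{+}} \;
\sup_{\substack{u,\,v \in \Omega \\ u \neq v}}
\frac{\lVert u - v + h\bigl(F(u) - F(v)\bigr)\rVert - \lVert u - v \rVert}
     {h\,\lVert u - v \rVert}.
```

The computation that follows verifies that the function of h under the limit is monotone decreasing, so the limit exists.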

For ,

where

where . According to the Cauchy–Bunyakovsky–Schwarz inequality, we obtain

Therefore,

That is, . So the function is monotone decreasing; thus, the limit

exists.

#### 3. Synchronization Analysis and Examples

Theorem 3. *If the operator in the system (3) satisfies
**
for any , where is a positive constant, then any two solutions and , initiated from , satisfy
**
where .*

*Proof. *Assume and are the solutions of (3) under the initial conditions and , respectively. We have
for all and .

For all ,

where .

So

where .

Then for all , we infer that . Therefore, we obtain

Letting and integrating inequality (19) over , we have

That is

Using the Gronwall inequality [19, 20], we have

Then
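For reference, the integral form of the Gronwall inequality [19, 20] invoked in the step above states (standard statement; the symbols u, β, a are ours):

```latex
% Gronwall inequality (integral form), for continuous u and \beta \ge 0:
u(t) \le a + \int_{t_0}^{t} \beta(s)\, u(s)\, ds \quad (t \ge t_0)
\;\Longrightarrow\;
u(t) \le a \exp\!\left( \int_{t_0}^{t} \beta(s)\, ds \right).
```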

Let system (3) be the drive system, and consider the response system where are the state variables, are nonlinear operators, is the feedback control term, and where denotes the control strength, is the control period, is the control width, and is a strictly monotone increasing function on with .

In this paper, our goal is to design a suitable function and suitable parameters , , and such that system (24) synchronizes with system (3).

Subtracting (3) from (24), we obtain the error system where . Then we have the following result.

Theorem 4. *Suppose that the operator in systems (3) and (24) satisfies condition (12), is defined as in Definition 2, and . Then the synchronization of (3) and (24), given in (26), is asymptotically stable if the parameters , , and are such that
**
where , and is the inverse function of .*

*Proof. *From Theorem 3, we obtain the following:
for any ,
for any .

Combining conditions (28) and (29), we conclude that
When , is obtained under condition (27), and (26) becomes asymptotically stable.

Corollary 5. *Let , be defined as in Definition 2, and let condition (27) be satisfied; then a result similar to Theorem 4 is obtained.*

Corollary 6. *Suppose that , the operator in systems (3) and (24) satisfies condition (12), and is defined as in Definition 2. Then the synchronization of (3) and (24), given in (26), is asymptotically stable if the parameters , , and are such that
**
where .*

In the simulations of the following examples, we always choose and use the norm , where .

*Example 7. *Consider a typical delayed Hopfield neural network [21–23] with two neurons:
where , , and , , with .

It should be noted that the network is actually a chaotic delayed Hopfield neural network.

Equation (32) is considered as the drive system, and the response system is defined as follows:

We calculate and obtain the value , where . Choosing , it is easy to verify that condition (31) is satisfied. Let the initial condition be . It can then be clearly seen in Figure 1 that the drive system (32) synchronizes with the response system (33).
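Since the numerical values of this example did not survive extraction, the experiment can at least be sketched in code. The following minimal simulation assumes the classic chaotic two-neuron delayed Hopfield parameters (tanh activation, delay tau = 1) and a linear intermittent gain; the weight matrices and the values k = 50, T = 1, delta = 0.8 are illustrative assumptions, not the paper's actual data.

```python
import numpy as np

# Drive system: x'(t) = -x(t) + A*tanh(x(t)) + B*tanh(x(t - tau))
# Weights below are the classic chaotic two-neuron choice (an assumption,
# since the paper's values were lost in extraction).
A = np.array([[2.0, -0.1], [-5.0, 3.0]])
B = np.array([[-1.5, -0.1], [-0.2, -2.5]])
tau, dt = 1.0, 0.001
d = int(round(tau / dt))                 # delay measured in Euler steps

k, T, delta = 50.0, 1.0, 0.8             # gain, control period, switching width

def rhs(x, x_delayed):
    return -x + A @ np.tanh(x) + B @ np.tanh(x_delayed)

def control_on(t):
    return (t % T) < delta               # controller active on [nT, nT + delta)

# Constant initial functions on [-tau, 0]; response starts offset from drive.
x_buf = [np.array([0.4, 0.6])] * (d + 1)
y_buf = [x + 0.5 for x in x_buf]

for n in range(int(40.0 / dt)):          # forward Euler over t in [0, 40]
    t = n * dt
    x, y = x_buf[-1], y_buf[-1]
    xd, yd = x_buf[0], y_buf[0]          # states at time t - tau
    u = -k * (y - x) if control_on(t) else 0.0
    x_buf.append(x + dt * rhs(x, xd))
    y_buf.append(y + dt * (rhs(y, yd) + u))
    x_buf.pop(0); y_buf.pop(0)           # slide the delay window

err = float(np.linalg.norm(y_buf[-1] - x_buf[-1]))
print(f"final synchronization error: {err:.3e}")
```

With a sufficiently large gain and duty cycle the intermittent controller drives the error down during each on-interval faster than chaos can amplify it during each off-interval, so the final error is negligible.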


*Example 8. *Consider a typical delayed chaotic neural network (29) with two neurons [24, 25] as the drive system and (31) as the response system, where , , , , with .

It is easily seen that the operator is differentiable on in Example 7, but the operator in this example is not.

We calculate and obtain the value , where . Choosing , it is easy to verify that condition (31) is satisfied. Let the initial condition be . The synchronization behavior of this example can then be clearly seen in Figure 2.


*Example 9. *Consider an autonomous Hopfield neural network with four neurons [26, 27]:
where , and

Das II et al. [26] have reported that system (34) possesses chaotic behavior.

Equation (34) is considered as the drive system, and the response system is defined as follows:

We calculate and obtain the value , where , and choose . It is easy to verify that condition (31) is satisfied. Let the initial condition be . It can then be clearly seen in Figure 3 that the drive system (34) synchronizes with the response system (36).


*Example 10. *Consider a typical hyperchaotic neural network (32) with two neurons [28] as the drive system and (33) as the response system, where , , and

We calculate and obtain the value , where , and choose . It is easy to verify that condition (27) is satisfied. Let the initial condition be . The synchronization behavior of this example can then be clearly seen in Figure 4.


#### 4. Conclusion

Approaches to the synchronization of two coupled neural networks, based on the nonlinear operator known as the generalized Dahlquist constant and on general intermittent control, have been presented in this paper. Strong global and asymptotic synchronization properties have been achieved in a finite number of steps. The techniques have been successfully applied to typical neural networks, and numerical simulations have verified the effectiveness of the method.

#### Acknowledgment

This work was supported by the Research Fund Project of Heze University under Grant XY10KZ01.