Complexity

Volume 2017, Article ID 6290646, 11 pages

https://doi.org/10.1155/2017/6290646

## Centralized and Decentralized Data-Sampling Principles for Outer-Synchronization of Fractional-Order Neural Networks

Jin-E Zhang

Hubei Normal University, Hubei 435002, China

Correspondence should be addressed to Jin-E Zhang; zhang86021205@163.com

Received 24 December 2016; Revised 6 February 2017; Accepted 21 February 2017; Published 8 March 2017

Academic Editor: Olfa Boubaker

Copyright © 2017 Jin-E Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper investigates the outer-synchronization of fractional-order neural networks. Using centralized and decentralized data-sampling principles and the theory of fractional differential equations, sufficient criteria for outer-synchronization of the controlled fractional-order neural networks are derived for structure-dependent centralized data-sampling, state-dependent centralized data-sampling, and state-dependent decentralized data-sampling, respectively. A numerical example is also given to illustrate the effectiveness of the theoretical results.

#### 1. Introduction

The fractional operator has become visible in many application domains [1–15]. Under demanding performance expectations with uncertainty, the fractional operator offers designers more degrees of freedom to meet predefined performance indexes. After the importance of the fractional operator was gradually recognized, it was found that fractional-order models give descriptions that are more accurate and qualitatively different from those of the corresponding integer-order models. As a direct application, this characteristic of fractional-order models can be used to identify possible behavior of electrical signals from neurons. In the physical implementation of neurodynamic systems, the arbitrary-order analog fractance circuit is most appropriate, as it reveals profoundly the relationships among neural circuit elements [9–11]. Accordingly, real neurodynamic systems should be addressed by fractional-order models. Fractional-order neurodynamic systems can better describe how action potentials in neurons are launched and propagated. In addition, fractional-order neurodynamic systems possess infinite memory, whereas integer-order neurodynamic systems do not [3–8, 12–15]. Therefore, fractional-order neurodynamic systems have the potential to accomplish what integer-order ones cannot. More feasible analysis methods and easy-to-use techniques for dealing with fractional-order neurodynamic systems are worth looking into.

Synchronization, a coherent behavior within nonlinear systems, has attracted phenomenal worldwide attention. Many studies have shown that the synchronization mechanism is a universal phenomenon with a wide range of applications in engineering systems. Generally, two schemes are frequently used: inner-synchronization and outer-synchronization. In inner-synchronization, all nodes within a single network achieve a coherent behavior, whereas in outer-synchronization, the corresponding individuals in two networks achieve identical behaviors. In many application fields, outer-synchronization is the more practical notion [16–23]. For example, in heuristic computational intelligence, outer-synchronization is rooted in brain-inspired computing, from evolutionary strategies to cognitive tasks. Nevertheless, results focusing on outer-synchronization of complex control systems have seldom been reported [19]. Control strategies for outer-synchronization therefore deserve more investigation.

Sampled-data control, which uses only local information at sampling instants, has recently generated significant research interest [24–38]. Unlike continuous-time control, which requires continuous communication of data, sampled-data control is more appropriate in a networked environment. For control systems, once effective sampling policies and schedules are in place, sampled-data control can dramatically reduce communication and save energy. Thus, how to develop high-efficiency, heuristic information-based sampled-data control with the ultimate aim of maximizing the value of the data collected is worth studying [38]. However, relevant studies of data-sampling strategies for control systems are still at an early stage.

Motivated by the above discussions, in this paper we introduce centralized and decentralized data-sampling principles to achieve outer-synchronization between coupled fractional-order neural networks. Allocating the limited energy resources of centralized and decentralized data-sampling so as to maximize the information value of the data collected is clearly a step forward. Meanwhile, to design the sampling method more efficiently, we merge structure and state information through the centralized and decentralized data-sampling principles and then select the best sampling times. On the basis of analytical tools from fractional differential equations, a series of criteria on outer-synchronization is derived. It should be noted that these criteria capture information on the sampling pattern and may have a much wider range of application.

The rest of the paper is organized as follows. In Section 2, we present the preliminaries and problem formulation. In Section 3, we state the main results in detail. In Section 4, a simulation example is given. Finally, Section 5 concludes the paper.

#### 2. Preliminaries and Problem Formulation

First, some preliminaries of the fractional operator are given.

The fractional integral of a function $x(t)$ with order $q > 0$ is described as
$$
{}_{t_0}D_t^{-q}\,x(t) = \frac{1}{\Gamma(q)} \int_{t_0}^{t} (t-s)^{q-1}\, x(s)\,\mathrm{d}s,
$$
where $\Gamma(\cdot)$ is the Gamma function and $t_0$ is the initial time.

The Caputo fractional derivative of $x(t)$ with order $q > 0$ is described as
$$
{}^{C}_{t_0}D_t^{q}\,x(t) = \frac{1}{\Gamma(n-q)} \int_{t_0}^{t} (t-s)^{n-q-1}\, x^{(n)}(s)\,\mathrm{d}s, \quad n-1 < q < n,
$$
where $\Gamma(\cdot)$ is the Gamma function, $n$ is a positive integer, and $t_0$ is the initial time.

The one-parameter Mittag-Leffler function is described as
$$
E_q(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(kq+1)},
$$
where $\Gamma(\cdot)$ is the Gamma function, $q > 0$, and $z$ is a complex number.
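As a quick numerical illustration, the series above can be evaluated by truncation; the function name and truncation length below are illustrative choices, not from the paper, and direct summation is adequate only for moderate $|z|$.

```python
from math import gamma

def mittag_leffler(q, z, terms=100):
    """One-parameter Mittag-Leffler function E_q(z) = sum_k z^k / Gamma(qk + 1),
    approximated by truncating the series after `terms` terms."""
    return sum(z ** k / gamma(q * k + 1) for k in range(terms))
```

A quick sanity check is the classical special case $E_1(z) = e^z$, which the truncated series reproduces to machine precision.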

Consider a class of fractional-order neural networks
$$
D^{q}x_i(t) = -c_i(t)\,x_i(t) + \sum_{j=1}^{n} a_{ij}(t)\, f_j\big(x_j(t)\big) + I_i(t), \quad i = 1, 2, \ldots, n, \tag{4}
$$
where $0 < q < 1$, $c_i(t)$, $a_{ij}(t)$, and $I_i(t)$ are piecewise continuous and bounded, and the feedback function $f_j(\cdot)$ satisfies
$$
\big|f_j(u) - f_j(v)\big| \le l_j\,|u - v|, \quad u, v \in \mathbb{R}, \tag{5}
$$
in which $l_j > 0$.
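Although the paper proceeds analytically, a model of the form (4) can be explored numerically. The sketch below simulates a scalar instance with a Grünwald–Letnikov discretization applied to $x(t) - x(t_0)$ so that the Caputo initial condition is respected; the step size, horizon, and parameter values are illustrative assumptions, and a production simulation would rather use a predictor–corrector scheme.

```python
import math

def simulate_caputo_fnn(q, c, a, f, I, x0, h=0.01, steps=500):
    """Simulate the scalar Caputo system D^q x(t) = -c x + a f(x) + I by a
    Grunwald-Letnikov discretization of z(t) = x(t) - x0 (Caputo-consistent).
    A minimal sketch with illustrative accuracy, not production code."""
    # Grunwald-Letnikov binomial weights: w_0 = 1, w_j = w_{j-1} * (1 - (q+1)/j).
    w = [1.0]
    for j in range(1, steps + 1):
        w.append(w[-1] * (1.0 - (q + 1.0) / j))
    z = [0.0]          # z_k approximates x(t_k) - x0
    xs = [x0]
    for k in range(1, steps + 1):
        rhs = -c * xs[-1] + a * f(xs[-1]) + I
        memory = sum(w[j] * z[k - j] for j in range(1, k + 1))
        z.append(h ** q * rhs - memory)
        xs.append(x0 + z[-1])
    return xs

# Illustrative run: order q = 0.9, Lipschitz activation tanh, zero input.
xs = simulate_caputo_fnn(q=0.9, c=1.0, a=0.5, f=math.tanh, I=0.0, x0=1.0)
```

With these illustrative parameters the unique equilibrium is the origin, so the trajectory decays toward zero with the slow algebraic tail typical of fractional dynamics.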

For the centralized data-sampling principle, (4) is rewritten as
$$
D^{q}x_i(t) = -c_i(t)\,x_i(t) + \sum_{j=1}^{n} a_{ij}(t)\, f_j\big(x_j(t_k)\big) + I_i(t), \quad t \in [t_k, t_{k+1}), \tag{6}
$$
where $t_k$ is a simple notation for the $k$th sampling instant, with $t_0 < t_1 < t_2 < \cdots$, and the sampling time sequence is uniform for all the system states. Every neuron sends its state to its out-neighbors and receives the state information from its in-neighbors at the same time point $t_k$.

For the decentralized data-sampling principle, (4) is rewritten as
$$
D^{q}x_i(t) = -c_i(t)\,x_i(t) + \sum_{j=1}^{n} a_{ij}(t)\, f_j\big(x_j(t_k^{j})\big) + I_i(t), \quad t \in [t_k^i, t_{k+1}^i), \tag{7}
$$
where $t_k^i$ is a simple notation for the $k$th sampling instant of neuron $i$, with $t_0^i < t_1^i < \cdots$, and the sampling time sequences are distributed over the neurons. Each neuron pushes its state information to its out-neighbors at time $t_k^i$ when it updates its state, and it receives the information of an in-neighbor's state at time $t_k^j$ when that neighbor updates its state.
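In code, the distinction between the two principles is simply whose information triggers a new sample: one shared clock for the whole network versus one clock per neuron. The threshold functions below are illustrative placeholders, not the triggering rules derived later in the paper.

```python
def centralized_trigger(error_norm, state_norm, sigma):
    """Centralized rule: all neurons resample together when the
    network-wide measurement-error norm grows too large relative to the
    network-wide state norm (sigma is an illustrative threshold)."""
    return error_norm > sigma * state_norm

def decentralized_trigger(error_i, state_i, sigma_i):
    """Decentralized rule: neuron i resamples on its own clock, comparing
    only its local measurement error against its local state."""
    return abs(error_i) > sigma_i * abs(state_i)
```

The centralized predicate needs global quantities gathered at one point, whereas the decentralized predicate can be evaluated at each neuron with purely local data, which is the communication advantage stressed in the text.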

Now we state the definition and problem formulation.

*Definition 1 (see [19]). *For any two trajectories $x(t)$ and $y(t)$ of (4) starting from different initial values $x(t_0)$ and $y(t_0)$, if there exists some control scheme such that
$$
\lim_{t \to +\infty} \|x(t) - y(t)\| = 0,
$$
then we say that system (4) achieves outer-synchronization, where $\|\cdot\|$ denotes a norm.

Let $x(t)$ and $y(t)$ be two trajectories of (6) starting from different initial values $x(t_0)$ and $y(t_0)$. Defining the error $e(t) = x(t) - y(t)$, it follows that
$$
D^{q}e_i(t) = -c_i(t)\,e_i(t) + \sum_{j=1}^{n} a_{ij}(t)\, g_j\big(e_j(t_k)\big), \quad t \in [t_k, t_{k+1}), \tag{9}
$$
where $g_j(e_j(\cdot)) = f_j(x_j(\cdot)) - f_j(y_j(\cdot))$, for all $i = 1, 2, \ldots, n$.

When we adopt the centralized data-sampling principle via structure to achieve outer-synchronization of (6), according to Definition 1, we need to design a control strategy based on the system structure of (9) such that
$$
\lim_{t \to +\infty} \|e(t)\| = 0,
$$
where $\|\cdot\|$ denotes a norm.

When we adopt the centralized data-sampling principle via state to achieve outer-synchronization of (6), we consider the state measurement error
$$
\tilde e_i(t) = e_i(t_k) - e_i(t), \quad t \in [t_k, t_{k+1}). \tag{11}
$$
According to Definition 1, we need to design a control strategy based on the state measurement error (11) such that $\lim_{t \to +\infty} \|e(t)\| = 0$, where $\|\cdot\|$ denotes a norm.

Let $x(t)$ and $y(t)$ be two trajectories of (7) starting from different initial values $x(t_0)$ and $y(t_0)$. Defining the error $e(t) = x(t) - y(t)$, it follows that
$$
D^{q}e_i(t) = -c_i(t)\,e_i(t) + \sum_{j=1}^{n} a_{ij}(t)\, g_j\big(e_j(t_k^{j})\big), \quad t \in [t_k^i, t_{k+1}^i), \tag{13}
$$
where $g_j(e_j(\cdot)) = f_j(x_j(\cdot)) - f_j(y_j(\cdot))$, for all $i = 1, 2, \ldots, n$.

When we adopt the decentralized data-sampling principle via state to achieve outer-synchronization of (7), we consider the state measurement error
$$
\tilde e_i(t) = e_i(t_k^i) - e_i(t), \quad t \in [t_k^i, t_{k+1}^i). \tag{14}
$$
According to Definition 1, we need to design a control strategy based on the state measurement error (14) such that $\lim_{t \to +\infty} \|e(t)\| = 0$, where $\|\cdot\|$ denotes a norm.

Next, we present relevant lemmas.

Lemma 2 (see [1]). *Let $0 < q < 1$. If $x(t) \in C^{1}[t_0, +\infty)$, then
$$
x(t) = x(t_0) + \frac{1}{\Gamma(q)} \int_{t_0}^{t} (t-s)^{q-1}\, D^{q}x(s)\,\mathrm{d}s,
$$
where $\Gamma(\cdot)$ is the Gamma function.*

Lemma 3 (see [39]). *Given $\beta > 0$, let $a(t)$ be nonnegative and locally integrable on $[0, T)$; let $g(t)$ be continuous, bounded, nonnegative, and nondecreasing on $[0, T)$. Assuming $u(t)$ to be nonnegative and locally integrable on $[0, T)$ with
$$
u(t) \le a(t) + g(t) \int_{0}^{t} (t-s)^{\beta-1} u(s)\,\mathrm{d}s,
$$
then
$$
u(t) \le a(t) + \int_{0}^{t} \left[ \sum_{n=1}^{\infty} \frac{\big(g(t)\Gamma(\beta)\big)^{n}}{\Gamma(n\beta)} (t-s)^{n\beta-1} a(s) \right] \mathrm{d}s.
$$
Moreover, if $a(t)$ is nondecreasing on $[0, T)$, then
$$
u(t) \le a(t)\, E_{\beta}\big(g(t)\Gamma(\beta)\, t^{\beta}\big),
$$
where $\Gamma(\cdot)$ is the Gamma function and $E_{\beta}(\cdot)$ is the one-parameter Mittag-Leffler function.*

In the following, we end this section with some notations that are needed later.

Throughout this paper, let $\xi_1, \xi_2, \ldots, \xi_n$ be positive constants, and for a vector $e = (e_1, \ldots, e_n)^{T}$, define the weighted vector norm $\|e\| = \sum_{i=1}^{n} \xi_i |e_i|$. In addition, by the boundedness of $a_{ij}(t)$ and $c_i(t)$, there exist positive constants $\bar a_{ij}$ and $\bar c_i$ such that $|a_{ij}(t)| \le \bar a_{ij}$ and $|c_i(t)| \le \bar c_i$ for all $t \ge t_0$.

#### 3. Main Results

For the problem formulation in the preceding section, we propose in this section the corresponding control schemes for the centralized and decentralized data-sampling principles, respectively.

To facilitate the narrative, we first address the control designs and then analyze the theoretical results.

##### 3.1. Centralized Data-Sampling Principle

Theorem 4. *Let the constants involved be positive and satisfy the stated relations, and assume that there exist positive constants $\xi_1, \ldots, \xi_n$ such that the structural condition holds for all $i$ and all $t$. Set $t_{k+1}$ as a time point determined by the structure-dependent sampling rule (24) for $k = 0, 1, 2, \ldots$. Then system (6) reaches outer-synchronization.*

*Proof. *From the assumed bounds, together with (23), the preliminary estimates hold for all $t$ and any $k$. According to (9), the sampled state will not update until rule (24) is violated at time point $t_{k+1}$; thus the interevent interval $t_{k+1} - t_k$ admits a strictly positive lower bound (28), and therefore the Zeno behavior can be excluded. Combining (24) and (28) yields a bound valid on each sampling interval. By the definition of the vector norm in this paper, from (9), we now estimate the error norm on $[t_k, t_{k+1})$. According to (5), the Lipschitz bound holds for all $j$, and hence, for any $t \in [t_k, t_{k+1})$, the sampled terms can be controlled. By (32) and (36), the error norm contracts over successive sampling intervals, and recalling system (9), we arrive at a decay estimate in which $M$ is defined in (22). It can be concluded that system (6) achieves outer-synchronization.

*Remark 5. *From inequality (28), we can see that $t_{k+1} - t_k$ is bounded below by a positive constant for all $k$, which excludes the Zeno behavior for rule (24).

Theorem 6. *Let $\omega(t)$ be a positive and continuous function on $[t_0, +\infty)$. Set $t_{k+1}$ as a time point determined by the state-dependent sampling rule (42), where the state measurement error is defined in (11). If there exist positive constants such that the corresponding growth condition holds for some $k$ and all $t$, then system (6) reaches outer-synchronization.*

*Proof. *According to Lemma 2, from (9) and (42), one obtains the integral estimate (43) for the error norm, in which $M$ is defined in (22). On the other hand, by (43), the inequality (45) holds for $t \in [t_k, t_{k+1})$.

Using Lemma 3, from (45), it follows that the error norm satisfies a Mittag-Leffler-type bound, which implies that $\|e(t)\|$ converges to zero along the sampling time sequence $\{t_k\}$. Therefore, system (6) reaches outer-synchronization.
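A state-dependent rule such as the one in Theorem 6 can be realized computationally as an event-detection loop: scan forward in time and declare $t_{k+1}$ at the first instant the measurement error violates the threshold. The function below is a hypothetical sketch; `err`, `state_norm`, and `omega` stand in for the quantities whose exact forms are given by (11) and (42), and the step size is an illustrative assumption.

```python
def next_sample_time(err, state_norm, omega, t_now, h=0.01, t_max=100.0):
    """Scan forward in steps of h and return the first time t > t_now at
    which the measurement error err(t) exceeds the state-dependent
    threshold omega(t) * state_norm(t); that instant is taken as t_{k+1}."""
    t = t_now + h
    while t < t_max and err(t) <= omega(t) * state_norm(t):
        t += h
    return t

# Toy usage: the error grows linearly after the last sampling instant t_k = 1,
# the threshold is constant, so the event fires near t = 1.5.
t_next = next_sample_time(err=lambda t: t - 1.0,
                          state_norm=lambda t: 1.0,
                          omega=lambda t: 0.5,
                          t_now=1.0)
```

In practice `err(t)` is not available in closed form and is computed alongside the numerical integration of (9); the loop above only isolates the triggering logic.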

##### 3.2. Decentralized Data-Sampling Principle

Theorem 7. *Let $\omega_i(t)$, $i = 1, \ldots, n$, be positive and continuous on $[t_0, +\infty)$. Set $t_{k+1}^i$ as a time point determined by the state-dependent decentralized sampling rule (47) for each $i$, where the state measurement error is defined in (14). If there exist positive constants such that the corresponding growth condition holds for some $k$, all $t$, and all $i$, then system (7) reaches outer-synchronization.*

*Proof. *According to Lemma 2, from (13) and (47), one obtains the integral estimate (48) for the error norm, in which $M$ is defined in (22). On the other hand, by (48), the inequality (50) holds for $t \in [t_k^i, t_{k+1}^i)$, $i = 1, \ldots, n$.

Using Lemma 3, from (50), it follows that the error norm satisfies a Mittag-Leffler-type bound, which implies that $\|e(t)\|$ converges to zero along the sampling time sequences $\{t_k^i\}$, $i = 1, \ldots, n$. Therefore, system (7) reaches outer-synchronization.

*Remark 8. *As in the theorem of [19], under the data-sampling rule in Theorem 6 or Theorem 7, the interevent interval of each system state is strictly positive and possesses a common positive lower bound. Furthermore, the Zeno behavior is excluded.

*Remark 9. *For sampled-data control, choosing a proper scheme that maximizes the value of the data collected is challenging. For example, as revealed in [9, 10], it is extremely difficult to design the sampling time points inherited from the sampled-data control strategy. However, according to Theorems 4–7, this difficulty can be effectively resolved if the centralized and decentralized data-sampling principles are cleverly utilized.

*Remark 10. *The three control schemes in Theorems 4–7 differ in type rather than in merit. Theorem 4 is entirely focused on the centralized data-sampling principle via structure; Theorem 6 is concerned with the centralized data-sampling principle via state; Theorem 7 places emphasis on the decentralized data-sampling principle via state.

*Remark 11. *Note that the sampled-data control in Theorems 4–7 acts only at the sampling time points; that is, each system state employs only its neighbors’ information at $t_k$ or $t_k^j$. Thus, compared with a continuous-time control strategy, the control schemes in Theorems 4–7 can effectively save bandwidth and reduce communication cost. Moreover, the results obtained here are the first on centralized and decentralized data-sampling principles for outer-synchronization of fractional-order neural networks.

*Remark 12. *The key features of outer-synchronization in Theorems 4–7 are as follows. (1) Each outer-synchronization scheme is closely related to the sampling time points: once the sampling time points are given, the states of the controlled fractional-order neural networks achieve outer-synchronization. (2) The centralized data-sampling principle via structure makes full use of the characteristics of the system itself, while the centralized or decentralized data-sampling principle via state skillfully combines the features of the state measurement error.

*Remark 13. *The analytical methods for outer-synchronization in Theorems 4–7 are quite different from conventional complete synchronization, projective synchronization, phase synchronization, distributed synchronization, pinning synchronization, and cluster synchronization.

#### 4. A Numerical Example

In this section, a numerical example is utilized to show the effectiveness of the results obtained.

Consider a class of fractional-order neural networks of the form (4), hereafter referred to as system (52), with the order $q$ and the parameters $c_i(t)$, $a_{ij}(t)$, $I_i(t)$, and $f_j(\cdot)$ specified numerically.

By direct calculation, the constants required in Theorem 4 can be obtained, and suitable positive constants can be chosen so that the hypotheses of Theorem 4 hold. According to Theorem 4, system (52) reaches outer-synchronization. Figures 1 and 2 depict the dynamics of the two trajectories and the synchronization error at the triggering time points given by Theorem 4, respectively. Figure 3 describes the release time points and release intervals.
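Since the parameter values of (52) are not reproduced here, the sketch below only illustrates the qualitative claim: two copies of a scalar fractional-order system under periodic zero-order-hold sampling, started from different initial values, see their difference shrink toward zero. All parameter values, the activation `tanh`, and the sampling period are illustrative assumptions, not the data of system (52).

```python
import math

def gl_weights(q, n):
    """Grunwald-Letnikov binomial weights w_0 = 1, w_j = w_{j-1}*(1-(q+1)/j)."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (q + 1.0) / j))
    return w

def simulate_sampled(q, c, a, x0, h=0.01, steps=800, sample_every=10):
    """One scalar Caputo system D^q x = -c x + a tanh(x(t_k)) under periodic
    zero-order-hold sampling, discretized via Grunwald-Letnikov on x - x0."""
    w = gl_weights(q, steps)
    z, xs, hold = [0.0], [x0], x0
    for k in range(1, steps + 1):
        if (k - 1) % sample_every == 0:   # sampling instant t_k
            hold = xs[-1]
        rhs = -c * xs[-1] + a * math.tanh(hold)
        z.append(h ** q * rhs - sum(w[j] * z[k - j] for j in range(1, k + 1)))
        xs.append(x0 + z[-1])
    return xs

# Two trajectories from different initial values; their difference is the
# outer-synchronization error.
x = simulate_sampled(q=0.9, c=2.0, a=0.5, x0=1.0)
y = simulate_sampled(q=0.9, c=2.0, a=0.5, x0=-1.0)
errors = [abs(u - v) for u, v in zip(x, y)]
```

Plotting `errors` against time reproduces, for this toy setting, the qualitative behavior shown in Figures 1 and 2: the synchronization error decays between and across sampling instants.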