Research Article | Open Access

Zhenyu Lu, Kai Li, Yan Li, "Global Asymptotic Stability of Switched Neural Networks with Delays", *Mathematical Problems in Engineering*, vol. 2015, Article ID 717513, 11 pages, 2015. https://doi.org/10.1155/2015/717513

# Global Asymptotic Stability of Switched Neural Networks with Delays

**Academic Editor:** Yuan Fan

#### Abstract

This paper investigates the global asymptotic stability of a class of switched neural networks with delays. Several new criteria ensuring global asymptotic stability, expressed in terms of linear matrix inequalities (LMIs), are obtained via a Lyapunov-Krasovskii functional. We adopt the quadratic convex approach, which differs from the linear and reciprocal convex combinations extensively used in the recent literature. In addition, the proposed criteria are easy to verify and apply. Finally, a numerical example is provided to illustrate the effectiveness of the results.

#### 1. Introduction

In the past thirty years, neural networks have found extensive applications in associative memory, pattern recognition, and image processing [1–3]. Most of these applications depend heavily on the dynamic behaviors of neural networks, especially on their global asymptotic stability. On the other hand, time delays are inevitably encountered in hardware implementations due to the finite switching speed of amplifiers; such delays may degrade system performance and become a source of oscillation or instability in neural networks. Therefore, the stability of neural networks with delays has attracted increasing attention, and many stability criteria have been reported in the literature [4, 5].

As a special class of hybrid systems, a switched system consists of a family of subsystems together with a switching rule that orchestrates the switching among them. In reality, neural networks sometimes have finitely many modes that switch from one to another at different times according to a switching law. In [6, 7], the authors studied the stability problem for several kinds of switched neural networks with time delays. Different from the models in those works, in this paper we consider a class of neural networks with state-dependent switching. Our switched neural network model is general and includes conventional neural networks as a special case.

Recently, convex analysis has been widely employed in the stability analysis of time-delay systems [8–17]. According to the features of different convex functions, different convex combination approaches have been adopted in the literature, such as the linear convex combination [8–10], the reciprocal convex combination [11–13], and the quadratic convex combination [15–17]. In [8, 9, 12, 14, 16, 17], convex combination techniques were successfully used to derive stability criteria for neural networks with time delays. It should be pointed out that the lower bound of the time delay in [8, 9, 16] is zero, which means that information on the lower bound of the time delay cannot be fully exploited. In other words, the conditions obtained in [9, 14, 16] are not applicable to the stability of neural networks when the lower bound of the time delay is strictly greater than zero.

In this paper, some delay-dependent stability criteria in terms of LMIs are derived. The advantages are as follows. Firstly, differential inclusions and set-valued maps are employed to handle the switched neural networks with discontinuous right-hand sides. Secondly, our results employ the quadratic convex approach, which differs from the linear and reciprocal convex combinations extensively used in the recent stability literature. Thirdly, the lower bound of the time-varying delay is not restricted to zero, and its information is fully used in constructing the Lyapunov-Krasovskii functional. Fourthly, compared with previous results, we resort to neither Jensen's inequality with the delay-dividing approach nor the free-weighting matrix method.

The organization of this paper is as follows. Some preliminaries are introduced in Section 2. In Section 3, based on the quadratic convex approach, delay-dependent stability criteria in terms of LMIs are established for switched neural networks with time-varying delays. Then, an example is given to demonstrate the effectiveness of the obtained results in Section 4. Finally, conclusions are given in Section 5.

*Notations.* Throughout this paper, $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space. $A^T$ and $A^{-1}$ denote the transpose and the inverse of a matrix $A$, respectively. $P > 0$ ($P \ge 0$) means that the matrix $P$ is symmetric and positive definite (positive semidefinite). The symbol $*$ represents the elements below the main diagonal of a symmetric matrix. The identity and zero matrices of appropriate dimensions are denoted by $I$ and $0$, respectively. $\mathrm{diag}\{\cdots\}$ denotes a block-diagonal matrix.

#### 2. System Description and Preliminaries

In this paper, we consider a class of switched neural networks with delays as follows:

$$\dot{x}_i(t) = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij}(x_i(t)) f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij}(x_i(t)) f_j(x_j(t-\tau(t))), \quad i = 1, 2, \ldots, n, \tag{1}$$

where $x_i(t)$ is the state variable of the $i$th neuron, and $a_{ij}(x_i(t))$ and $b_{ij}(x_i(t))$ denote the feedback connection weight and the delayed feedback connection weight, respectively. $f_j(\cdot)$ is a bounded continuous function; $\tau(t)$ corresponds to the transmission delay and satisfies $\tau_1 \le \tau(t) \le \tau_2$. The state-dependent switching is described by

$$a_{ij}(x_i(t)) = \begin{cases} \hat{a}_{ij}, & |x_i(t)| \le T_i, \\ \check{a}_{ij}, & |x_i(t)| > T_i, \end{cases} \qquad b_{ij}(x_i(t)) = \begin{cases} \hat{b}_{ij}, & |x_i(t)| \le T_i, \\ \check{b}_{ij}, & |x_i(t)| > T_i, \end{cases}$$

where $c_i > 0$, the switching thresholds $T_i > 0$, and the weights $\hat{a}_{ij}$, $\check{a}_{ij}$, $\hat{b}_{ij}$, $\check{b}_{ij}$ are all constant numbers. The initial condition of system (1) is $x_i(s) = \phi_i(s)$, $s \in [-\tau_2, 0]$.

The following assumptions are imposed on system (1):

(H1) For each $j \in \{1, 2, \ldots, n\}$, $f_j$ is bounded, $f_j(0) = 0$, and there exist constants $l_j^-$ and $l_j^+$ such that
$$l_j^- \le \frac{f_j(u) - f_j(v)}{u - v} \le l_j^+$$
for all $u, v \in \mathbb{R}$, $u \ne v$.

(H2) The transmission delay $\tau(t)$ is a differentiable function, and there exist constants $\tau_1$, $\tau_2$, and $\mu$ such that
$$\tau_1 \le \tau(t) \le \tau_2, \qquad \dot{\tau}(t) \le \mu$$
for all $t \ge 0$.
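As a quick numerical illustration of (H1), take $f_j(u) = \tanh(u)$, a standard activation choice used here only as an example (the paper does not prescribe it): every difference quotient lies in the sector $[l_j^-, l_j^+] = [0, 1]$.

```python
import numpy as np

# Check the sector condition of (H1) for f(u) = tanh(u) by sampling
# difference quotients (f(u) - f(v)) / (u - v) over random pairs u != v.
u = np.random.default_rng(0).uniform(-5, 5, size=1000)
v = u + 0.1                                   # ensures u != v pairwise
q = (np.tanh(u) - np.tanh(v)) / (u - v)
print(q.min(), q.max())                       # all quotients fall in (0, 1)
```

By the mean value theorem each quotient equals $\mathrm{sech}^2(\xi)$ for some intermediate $\xi$, which explains why the samples stay strictly inside $(0, 1]$.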

Obviously, system (1) is a discontinuous system; its solution therefore differs from the classical one and cannot be defined in the conventional sense. In order to obtain the solution of system (1), some definitions and lemmas are given.

*Definition 1. *For the system
$$\dot{x}(t) = f(t, x(t)), \tag{5}$$
where $f$ is discontinuous in $x$, a set-valued map is defined as
$$F(t, x) = \bigcap_{\delta > 0} \; \bigcap_{\mu(N) = 0} \overline{\mathrm{co}}\big[f\big(t, B(x, \delta) \setminus N\big)\big],$$
where $\overline{\mathrm{co}}[E]$ is the closure of the convex hull of the set $E$, $B(x, \delta) = \{y : \|y - x\| \le \delta\}$, and $\mu(N)$ is the Lebesgue measure of the set $N$.

A solution in Filippov's sense of system (5) with initial condition $x(0) = x_0$ is an absolutely continuous function $x(t)$ which satisfies $x(0) = x_0$ and the differential inclusion
$$\dot{x}(t) \in F(t, x(t)) \quad \text{for a.e. } t.$$
If $f$ is bounded, then the set-valued function $F$ is upper semicontinuous with nonempty, convex, and compact values [18]. Then the solution of system (5) with initial condition $x(0) = x_0$ exists and can be extended to the interval $[0, +\infty)$ in the sense of Filippov.

By applying the theories of set-valued maps and differential inclusions [18–20], system (1) can be rewritten as the following differential inclusion:
$$\dot{x}_i(t) \in -c_i x_i(t) + \sum_{j=1}^{n} \mathrm{co}\big[\underline{a}_{ij}, \overline{a}_{ij}\big] f_j(x_j(t)) + \sum_{j=1}^{n} \mathrm{co}\big[\underline{b}_{ij}, \overline{b}_{ij}\big] f_j(x_j(t - \tau(t))), \tag{8}$$
where $\mathrm{co}[\underline{a}_{ij}, \overline{a}_{ij}]$ is the convex hull of $\{\hat{a}_{ij}, \check{a}_{ij}\}$, with $\underline{a}_{ij} = \min\{\hat{a}_{ij}, \check{a}_{ij}\}$, $\overline{a}_{ij} = \max\{\hat{a}_{ij}, \check{a}_{ij}\}$, $\underline{b}_{ij} = \min\{\hat{b}_{ij}, \check{b}_{ij}\}$, and $\overline{b}_{ij} = \max\{\hat{b}_{ij}, \check{b}_{ij}\}$. The other parameters are the same as in system (1).
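To make the switched model concrete, the following sketch simulates a two-neuron instance of system (1) by the explicit Euler method. All numerical values below (weights, thresholds, delay, activation, initial condition) are hypothetical choices for illustration only, not the paper's example; they are picked so that the self-feedback dominates the connection weights and the state settles at the origin.

```python
import numpy as np

# Euler simulation of a 2-neuron switched neural network with a constant
# delay; all parameters are hypothetical and chosen only for illustration.
n = 2
c = np.array([1.0, 1.2])                      # self-feedback rates c_i > 0
T = np.array([1.0, 1.0])                      # switching thresholds T_i
A_hat = np.array([[0.2, -0.1], [0.1,  0.2]])  # weights when |x_i| <= T_i
A_chk = np.array([[0.1, -0.2], [0.2,  0.1]])  # weights when |x_i| >  T_i
B_hat = np.array([[0.1,  0.1], [-0.1, 0.1]])
B_chk = np.array([[0.2, -0.1], [0.1, -0.1]])
f = np.tanh                                   # activation satisfying (H1)

dt, tau, steps = 0.01, 0.5, 4000              # constant delay tau(t) = 0.5
d = int(tau / dt)                             # delay measured in steps
hist = [np.array([0.8, -0.6])] * (d + 1)      # constant initial condition phi

for _ in range(steps):
    x, x_del = hist[-1], hist[-1 - d]
    # state-dependent switching: row i uses the "hat" weights iff |x_i| <= T_i
    A = np.where((np.abs(x) <= T)[:, None], A_hat, A_chk)
    B = np.where((np.abs(x) <= T)[:, None], B_hat, B_chk)
    dx = -c * x + A @ f(x) + B @ f(x_del)
    hist.append(x + dt * dx)

print(np.abs(hist[-1]).max())                 # trajectory settles near the origin
```

With these values each row satisfies $c_i > \sum_j(|a_{ij}| + |b_{ij}|)$ in both modes, so the simulated trajectory contracts toward the zero equilibrium, consistent with the stability analysis that follows.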

*Definition 2. *A constant vector $x^* = (x_1^*, x_2^*, \ldots, x_n^*)^T$ is called an equilibrium point of system (1) if, for $i = 1, 2, \ldots, n$,
$$0 \in -c_i x_i^* + \sum_{j=1}^{n} \mathrm{co}\big[\underline{a}_{ij}, \overline{a}_{ij}\big] f_j(x_j^*) + \sum_{j=1}^{n} \mathrm{co}\big[\underline{b}_{ij}, \overline{b}_{ij}\big] f_j(x_j^*).$$
It is easy to verify that the origin is an equilibrium point of system (1).

*Definition 3 (see [18]). *A function $x(t)$ is a solution of (1) with the initial condition $x(s) = \phi(s)$, $s \in [-\tau_2, 0]$, if $x(t)$ is an absolutely continuous function and satisfies differential inclusion (8).

Lemma 4 (see [18]). *Suppose that assumption (H1) is satisfied; then the solution $x(t)$ of (1) with initial condition $x(s) = \phi(s)$, $s \in [-\tau_2, 0]$, exists and can be extended to the interval $[0, +\infty)$.*

Before giving our main results, we present the following important lemmas that will be used in the proof to derive the stability conditions of the switched neural networks.

Lemma 5 (see [21]). *Given constant matrices $\Omega_1$, $\Omega_2$, and $\Omega_3$, where $\Omega_1 = \Omega_1^T$ and $\Omega_2 = \Omega_2^T > 0$, the inequality $\Omega_1 + \Omega_3^T \Omega_2^{-1} \Omega_3 < 0$ is equivalent to the following conditions:*
$$\begin{bmatrix} \Omega_1 & \Omega_3^T \\ \Omega_3 & -\Omega_2 \end{bmatrix} < 0 \quad \text{or} \quad \begin{bmatrix} -\Omega_2 & \Omega_3 \\ \Omega_3^T & \Omega_1 \end{bmatrix} < 0.$$
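Lemma 5 is the classical Schur complement. A quick numerical check on small hypothetical matrices (chosen only for illustration) shows the two conditions agreeing:

```python
import numpy as np

# Schur complement check on hypothetical 2x2 matrices:
# Omega1 + Omega3^T Omega2^{-1} Omega3 < 0  holds exactly when the block
# matrix [[Omega1, Omega3^T], [Omega3, -Omega2]] is negative definite.
O1 = np.array([[-3.0, 0.5], [0.5, -2.0]])     # Omega1 = Omega1^T
O2 = np.array([[2.0, 0.3], [0.3, 1.5]])       # Omega2 = Omega2^T > 0
O3 = np.array([[0.4, -0.2], [0.1, 0.3]])

def is_neg_def(M):
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

lhs = is_neg_def(O1 + O3.T @ np.linalg.solve(O2, O3))
block = np.block([[O1, O3.T], [O3, -O2]])
print(lhs, is_neg_def(block))                 # both conditions agree
```

This equivalence is what lets a nonlinear matrix inequality involving $\Omega_2^{-1}$ be recast as a linear matrix inequality in the block form.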

Lemma 6 (see [17]). *Let real symmetric matrices $X_0$, $X_1$, and $X_2$ and the scalar continuous function $f(s) = s^2 X_2 + s X_1 + X_0$ satisfy $f(\tau_1) < 0$ and $f(\tau_2) < 0$, where $\tau_1$ and $\tau_2$ are constants satisfying $0 \le \tau_1 < \tau_2$. If $X_2 \ge 0$, then $f(s) < 0$ for all $s \in [\tau_1, \tau_2]$.*

Lemma 7 (see [16]). *Let $\tau_1 \le \tau(t) \le \tau_2$, and let $\xi$ be a vector of appropriate dimension. Then, one has the facts (i), (ii), and (iii) below for any scalar function $\tau(t)$, $\tau_1 \le \tau(t) \le \tau_2$, where the matrices $M_i$ ($i = 1, 2, 3$) and the vector $\xi$, both independent of the integral variable, are arbitrary ones of appropriate dimensions.*

#### 3. Main Results

For presentation convenience, we introduce the following notation.

Theorem 8. *Suppose that assumptions (H1) and (H2) hold; then the origin of system (1) is globally asymptotically stable if there exist matrices of appropriate dimensions such that the following two LMIs hold.*
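In practice, LMI conditions such as those in Theorem 8 are checked with a semidefinite-programming solver. As a minimal, solver-free stand-in (a sketch, not the paper's actual LMIs), the following solves the Lyapunov equation $A^T P + P A = -I$ for a hypothetical stable comparison matrix $A$ via vectorization and confirms that $P > 0$, i.e., that $V(x) = x^T P x$ is a valid quadratic Lyapunov function:

```python
import numpy as np

# Solve A^T P + P A = -I by vectorization for a hypothetical stable matrix A,
# then verify P > 0. Uses vec(AXB) = (B^T kron A) vec(X) in column-major order.
A = np.array([[-2.0, 0.5], [0.3, -1.5]])      # hypothetical Hurwitz matrix
n = A.shape[0]
Q = -np.eye(n)

M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(M, Q.reshape(-1, order="F")).reshape(n, n, order="F")
P = (P + P.T) / 2                             # symmetrize against round-off

print(np.linalg.eigvalsh(P))                  # all eigenvalues positive
```

For the full LMIs of Theorem 8, with their extra decision matrices and endpoint conditions, one would instead pose a feasibility problem to an SDP solver; the Lyapunov equation above is the simplest special case of that machinery.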

*Proof. *Define an augmented state vector and consider a Lyapunov-Krasovskii functional candidate built from quadratic, single-integral, and double-integral terms over the delay intervals.

Calculating the time derivative of the Lyapunov-Krasovskii functional along the trajectories of system (8), we obtain its expression term by term. It is easy to obtain the following identities, from which the corresponding estimates follow; similar estimates hold for the remaining terms. Applying Lemma 7 to the integral terms, we obtain upper bounds on them. On the other hand, from assumption (H1), we obtain a quadratic constraint on the activation functions.

From (27), it is easy to obtain the corresponding estimate. It then follows from (19)–(26) and (28) that the required inequality holds, where the relevant matrix is defined in the statement of Theorem 8.