Abstract

This paper investigates the global asymptotic stability of a class of switched neural networks with delays. Several new criteria ensuring global asymptotic stability, expressed in terms of linear matrix inequalities (LMIs), are obtained via the Lyapunov-Krasovskii functional method. We adopt the quadratic convex approach, which differs from the linear and reciprocal convex combinations extensively used in the recent literature. In addition, the proposed results are easy to verify and apply. Finally, a numerical example is provided to illustrate their effectiveness.

1. Introduction

In the past thirty years, neural networks have found extensive applications in associative memory, pattern recognition, and image processing [1–3]. Most of these applications depend heavily on the dynamic behaviors of neural networks, especially on their global asymptotic stability. On the other hand, time delays are inevitably encountered in hardware implementations due to the finite switching speed of amplifiers; such delays may degrade system performance and become a source of oscillation or instability in neural networks. Therefore, the stability of neural networks with delays has attracted increasing attention, and many stability criteria have been reported in the literature [4, 5].

As a special class of hybrid systems, switched systems consist of a family of subsystems together with a switching rule that orchestrates the switching among them. In reality, neural networks sometimes have finitely many modes that switch from one to another at different times according to a switching law. In [6, 7], the authors studied the stability problem for different kinds of switched neural networks with time delays. Different from the models in these works, in this paper we consider a class of neural networks with state-dependent switching. Our switched neural network model is general and includes conventional neural networks as a special case.

Recently, convex analysis has been widely employed in the stability analysis of time-delay systems [8–17]. Depending on the features of the convex functions involved, different convex combination approaches have been adopted in the literature, such as the linear convex combination [8–10], the reciprocal convex combination [11–13], and the quadratic convex combination [15–17]. In [8, 9, 12, 14, 16, 17], convex combination techniques were successfully used to derive stability criteria for neural networks with time delays. It should be pointed out that the lower bound of the time delay in [8, 9, 16] is zero, which means that the information on the lower bound of the time delay cannot be fully exploited. Namely, the conditions obtained in [9, 14, 16] fail to apply to the stability of neural networks when the lower bound of the time delay is strictly greater than zero.

In this paper, some delay-dependent stability criteria in terms of LMIs are derived. The advantages are as follows. Firstly, differential inclusions and set-valued maps are employed to deal with switched neural networks with discontinuous right-hand sides. Secondly, our results employ the quadratic convex approach, which differs from the linear and reciprocal convex combinations extensively used in the recent literature on stability. Thirdly, the lower bound of the time-varying delays is not zero, and this information is fully used in constructing the Lyapunov-Krasovskii functional. Fourthly, in contrast with previous results, we resort to neither Jensen's inequality with the delay-dividing approach nor the free-weighting matrix method.

The organization of this paper is as follows. Some preliminaries are introduced in Section 2. In Section 3, based on the quadratic convex approach, delay-dependent stability criteria in terms of LMIs are established for switched neural networks with time-varying delays. Then, an example is given to demonstrate the effectiveness of the obtained results in Section 4. Finally, conclusions are given in Section 5.

Notations. Throughout this paper, $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space. $A^T$ and $A^{-1}$ denote the transpose and the inverse of the matrix $A$, respectively. $P > 0$ ($P \ge 0$) means that the matrix $P$ is symmetric and positive definite (positive semidefinite). The symbol $*$ represents the elements below the main diagonal of a symmetric matrix. The identity and zero matrices of appropriate dimensions are denoted by $I$ and $0$, respectively. For real numbers $a$ and $b$, $\mathrm{co}\{a, b\}$ is defined as the closure of the convex hull of $\{a, b\}$, that is, $[\min\{a, b\}, \max\{a, b\}]$. $\mathrm{diag}\{\cdots\}$ denotes a block-diagonal matrix.

2. System Description and Preliminaries

In this paper, we consider a class of switched neural networks with delays, written as system (1); a representative sketch is given below. Here $x_i(t)$ is the state variable of the $i$th neuron, and $a_{ij}(x_i(t))$ and $b_{ij}(x_i(t))$ denote the feedback connection weight and the delayed feedback connection weight, respectively. Each $f_j(\cdot)$ is a bounded continuous function; $\tau(t)$ corresponds to the transmission delay and satisfies $\tau_1 \le \tau(t) \le \tau_2$. The switched values of the self-feedback and connection weights are all constant numbers. The initial condition of system (1) is $x(s) = \phi(s)$, $s \in [-\tau_2, 0]$.
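As a minimal sketch (assuming memristive-type, state-dependent switching with thresholds $T_i > 0$; the hatted and checked parameter names here are illustrative rather than the authors' exact notation), system (1) can be pictured as
\[
\dot{x}_i(t) = -d_i(x_i(t))\, x_i(t) + \sum_{j=1}^{n} a_{ij}(x_i(t))\, f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij}(x_i(t))\, f_j(x_j(t - \tau(t))), \quad i = 1, \dots, n,
\]
with state-dependent switching laws such as
\[
a_{ij}(x_i(t)) = \begin{cases} \hat{a}_{ij}, & |x_i(t)| \le T_i, \\ \check{a}_{ij}, & |x_i(t)| > T_i, \end{cases}
\]
and analogously for $d_i(\cdot)$ and $b_{ij}(\cdot)$.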

The following assumptions are given for system (1):
(H1) For $j = 1, \dots, n$, $f_j(\cdot)$ is bounded and there exist constants $l_j^-$ and $l_j^+$ such that
\[
l_j^- \le \frac{f_j(u) - f_j(v)}{u - v} \le l_j^+
\]
for all $u, v \in \mathbb{R}$, $u \ne v$.
(H2) The transmission delay $\tau(t)$ is a differentiable function and there exist constants $\tau_1$, $\tau_2$, and $\mu$ such that $\tau_1 \le \tau(t) \le \tau_2$ and $\dot{\tau}(t) \le \mu$ for all $t \ge 0$.
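For instance, the commonly used activation function $f_j(u) = \tanh(u)$ is bounded and satisfies the sector condition in (H1) with $l_j^- = 0$ and $l_j^+ = 1$, since
\[
0 \le \frac{\tanh(u) - \tanh(v)}{u - v} \le 1 \quad \text{for all } u \ne v,
\]
and any constant delay $\tau(t) \equiv \tau \in [\tau_1, \tau_2]$ satisfies (H2) with $\mu = 0$.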

Obviously, system (1) is a discontinuous system; hence its solutions differ from classical solutions and cannot be defined in the conventional sense. In order to define the solutions of system (1), some definitions and lemmas are given.

Definition 1. For a system $\dot{x} = f(x)$ with a discontinuous right-hand side, a set-valued map is defined as
\[
F(x) = \bigcap_{\delta > 0} \bigcap_{\mu(N) = 0} \overline{\mathrm{co}}\big[f\big(B(x, \delta) \setminus N\big)\big],
\]
where $\overline{\mathrm{co}}[E]$ is the closure of the convex hull of the set $E$, $B(x, \delta) = \{y : \|y - x\| \le \delta\}$, and $\mu(N)$ is the Lebesgue measure of the set $N$.
A solution in Filippov's sense of system (5) with initial condition $x(s) = \phi(s)$, $s \in [-\tau_2, 0]$, is an absolutely continuous function $x(t)$ which satisfies the differential inclusion
\[
\dot{x}(t) \in F(x(t)) \quad \text{for a.e. } t.
\]
If $f$ is bounded, then the set-valued function $F(x)$ is upper semicontinuous with nonempty, convex, and compact values [18]. Then, a solution of system (5) with the given initial condition exists and can be extended to the interval $[0, +\infty)$ in the sense of Filippov.
By applying the theories of set-valued maps and differential inclusions [18–20], system (1) can be rewritten as the following differential inclusion:
\[
\dot{x}_i(t) \in -\mathrm{co}\{\hat{d}_i, \check{d}_i\}\, x_i(t) + \sum_{j=1}^{n} \mathrm{co}\{\hat{a}_{ij}, \check{a}_{ij}\}\, f_j(x_j(t)) + \sum_{j=1}^{n} \mathrm{co}\{\hat{b}_{ij}, \check{b}_{ij}\}\, f_j(x_j(t - \tau(t))), \quad i = 1, \dots, n, \tag{8}
\]
where $\mathrm{co}\{\hat{a}_{ij}, \check{a}_{ij}\}$ is the convex hull of $\{\hat{a}_{ij}, \check{a}_{ij}\}$, and similarly for the other pairs. The other parameters are the same as in system (1).

Definition 2. A constant vector $x^* = (x_1^*, \dots, x_n^*)^T \in \mathbb{R}^n$ is called an equilibrium point of system (1) if, for $i = 1, \dots, n$,
\[
0 \in -\mathrm{co}\{\hat{d}_i, \check{d}_i\}\, x_i^* + \sum_{j=1}^{n} \mathrm{co}\{\hat{a}_{ij}, \check{a}_{ij}\}\, f_j(x_j^*) + \sum_{j=1}^{n} \mathrm{co}\{\hat{b}_{ij}, \check{b}_{ij}\}\, f_j(x_j^*).
\]
It is easy to find that the origin is an equilibrium point of system (1).
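Indeed, assuming (as is standard in this setting) that $f_j(0) = 0$ for each $j$, the inclusion above holds trivially at the origin, since
\[
-\mathrm{co}\{\hat{d}_i, \check{d}_i\} \cdot 0 + \sum_{j=1}^{n} \mathrm{co}\{\hat{a}_{ij}, \check{a}_{ij}\}\, f_j(0) + \sum_{j=1}^{n} \mathrm{co}\{\hat{b}_{ij}, \check{b}_{ij}\}\, f_j(0) = \{0\}.
\]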

Definition 3 (see [18]). A function $x(t)$ is a solution of (1) on $[-\tau_2, +\infty)$ with the initial condition $x(s) = \phi(s)$, $s \in [-\tau_2, 0]$, if $x(t)$ is an absolutely continuous function and satisfies the differential inclusion (8).

Lemma 4 (see [18]). Suppose that assumption (H1) is satisfied; then a solution $x(t)$ of (1) with initial condition $x(s) = \phi(s)$, $s \in [-\tau_2, 0]$, exists and can be extended to the interval $[0, +\infty)$.

Before giving our main results, we present the following important lemmas that will be used in the proof to derive the stability conditions of the switched neural networks.

Lemma 5 (see [21]). Given constant matrices $\Sigma_1$, $\Sigma_2$, and $\Sigma_3$, where $\Sigma_1 = \Sigma_1^T$ and $\Sigma_2 = \Sigma_2^T > 0$, then $\Sigma_1 + \Sigma_3^T \Sigma_2^{-1} \Sigma_3 < 0$ is equivalent to the following conditions:
\[
\begin{pmatrix} \Sigma_1 & \Sigma_3^T \\ \Sigma_3 & -\Sigma_2 \end{pmatrix} < 0
\quad \text{or} \quad
\begin{pmatrix} -\Sigma_2 & \Sigma_3 \\ \Sigma_3^T & \Sigma_1 \end{pmatrix} < 0.
\]
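As a quick scalar sanity check with hypothetical numbers $\Sigma_1 = -1$, $\Sigma_2 = 2$, and $\Sigma_3 = 1$:
\[
\Sigma_1 + \Sigma_3^T \Sigma_2^{-1} \Sigma_3 = -1 + \tfrac{1}{2} = -\tfrac{1}{2} < 0
\quad \Longleftrightarrow \quad
\begin{pmatrix} -1 & 1 \\ 1 & -2 \end{pmatrix} < 0,
\]
and indeed the $2 \times 2$ matrix on the right has trace $-3 < 0$ and determinant $1 > 0$, so it is negative definite.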

Lemma 6 (see [17]). Let real symmetric matrices $A_0$, $A_1$, $A_2$ and a scalar continuous function $\tau(t)$ satisfy $\tau_1 \le \tau(t) \le \tau_2$, where $\tau_1$ and $\tau_2$ are constants satisfying $\tau_1 \le \tau_2$. If $A_2 \ge 0$, then
\[
\tau^2(t) A_2 + \tau(t) A_1 + A_0 < 0 \quad \text{for all } \tau(t) \in [\tau_1, \tau_2]
\]
holds if and only if $\tau_i^2 A_2 + \tau_i A_1 + A_0 < 0$ for $i = 1, 2$.
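The scalar case conveys the idea behind Lemma 6: a convex quadratic attains its maximum over a closed interval at an endpoint. As a sketch, for $g(\tau) = a_2 \tau^2 + a_1 \tau + a_0$ with $a_2 \ge 0$,
\[
\max_{\tau \in [\tau_1, \tau_2]} g(\tau) = \max\{g(\tau_1), g(\tau_2)\},
\]
so $g(\tau_1) < 0$ and $g(\tau_2) < 0$ imply $g(\tau) < 0$ on the whole interval. The matrix version follows by applying this argument to $x^T f(\tau) x$ for every fixed vector $x$.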

Lemma 7 (see [16]). Let $Z > 0$, and let $\zeta_0$ be a vector of appropriate dimension. Then, for any scalar function $\tau(t)$ with $\tau_1 \le \tau(t) \le \tau_2$, one has the three integral inequalities (i)-(iii) of [16], which bound integrals of $\dot{x}^T(s) Z \dot{x}(s)$ over the delay intervals, where the matrices involved and the vector $\zeta_0$, independent of the integration variable, are arbitrary ones of appropriate dimensions.
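While the exact statements of (i)-(iii) are those of [16], the flavor of such bounds can be seen from the following standard completing-the-square estimate (a sketch; $M$ is an arbitrary matrix of appropriate dimensions and $h > 0$, not necessarily the exact form used in Lemma 7). Since $Z > 0$,
\[
0 \le \int_{t-h}^{t} \big(M^T \zeta_0 + Z \dot{x}(s)\big)^T Z^{-1} \big(M^T \zeta_0 + Z \dot{x}(s)\big)\, ds,
\]
and expanding the integrand and integrating term by term yields
\[
-\int_{t-h}^{t} \dot{x}^T(s) Z \dot{x}(s)\, ds \le h\, \zeta_0^T M Z^{-1} M^T \zeta_0 + 2\, \zeta_0^T M \big(x(t) - x(t-h)\big).
\]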

3. Main Results

For presentation convenience, in the following we introduce shorthand notation for the delay bounds, the augmented state vectors, and the block matrices used in the sequel.

Theorem 8. Suppose assumptions (H1) and (H2) hold; then the origin of system (1) is globally asymptotically stable if there exist symmetric positive definite matrices and auxiliary matrices of appropriate dimensions such that the two LMIs in (16) hold, where the block entries of (16) are assembled from the system data and the decision matrices.

Proof. Define an augmented state vector $\zeta(t)$ and consider a Lyapunov-Krasovskii functional candidate $V(t)$ composed of quadratic terms in $\zeta(t)$ together with single and multiple integral terms over the delay intervals, whose kernels are polynomials in $(s - t + \tau_2)$.
Calculating the time derivative of $V(t)$ along the trajectories of system (8), we obtain an expression that we bound term by term. It is easy to obtain suitable integral identities; thus, the single-integral terms can be rewritten, and similarly for the multiple-integral terms. Applying Lemma 7 to the remaining integral terms yields the corresponding upper bounds. On the other hand, from assumption (H1), we obtain a quadratic bound involving the activation terms.
From (27), the corresponding estimate follows directly. It then follows from (19)–(26) and (28) that $\dot{V}(t)$ is bounded above by a quadratic form in $\zeta(t)$ whose matrix is defined in the theorem context, and it is easy to see that this matrix is a quadratic convex combination of matrices on $[\tau_1, \tau_2]$.
Applying Lemma 5 to (16) yields equivalent conditions. Since the coefficient matrix of the quadratic term is positive semidefinite, from Lemma 6, if the LMIs in (16) are true, then $\dot{V}(t) < 0$. Then, we can see that the origin of system (1) is asymptotically stable.
The proof is completed.

Remark 9. In [16], stability analysis for neural networks with a time-varying delay was studied by using the quadratic convex combination. Our results have two advantages over the results of that paper. On the one hand, the information on the lower bound of the time-varying delays is taken into account. On the other hand, the augmented vector includes the distributed delay terms.

Remark 10. We use the three inequalities in Lemma 7 combined with the quadratic convex combination implied by Lemma 6, rather than Jensen's inequality and the linear convex combination. In addition, our proof does not rely on the free-weighting matrix method.

Remark 11. To use the quadratic convex approach, we construct the Lyapunov-Krasovskii functional with multiple integral terms whose kernels are polynomials in $(s - t + \tau_2)$. Increasing the degree of the kernel by 1 increases the multiplicity of the integral by 1, which is what ultimately produces the quadratic dependence on $\tau(t)$.
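Assuming the kernel takes the standard form $(s - t + \tau_2)^k$ (our notation for this sketch), the degree-versus-integral relation follows from the identity
\[
\frac{d}{dt} \int_{t-\tau_2}^{t} (s - t + \tau_2)^k\, \dot{x}^T(s) Z \dot{x}(s)\, ds
= \tau_2^k\, \dot{x}^T(t) Z \dot{x}(t) - k \int_{t-\tau_2}^{t} (s - t + \tau_2)^{k-1}\, \dot{x}^T(s) Z \dot{x}(s)\, ds,
\]
so each extra degree of the kernel contributes one more integral term; splitting such integrals at $t - \tau(t)$ and bounding the kernels then produces terms that depend on $\tau(t)$ polynomially, which is exactly what the quadratic convex approach exploits.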
In a special case of assumption (H2), we have the following result.

Corollary 12. Suppose assumptions (H1) and (H2) hold in this special case; then the origin of system (1) is globally asymptotically stable if there exist symmetric positive definite matrices and auxiliary matrices of appropriate dimensions such that the corresponding two LMIs hold, with block entries assembled as in Theorem 8.

In addition, when the information on the time derivative of the delay is unknown, or the derivative does not exist, we have the following result.

Corollary 13. Suppose assumption (H1) holds and the delay satisfies only the bound condition $\tau_1 \le \tau(t) \le \tau_2$ in (H2); then the origin of system (1) is globally asymptotically stable if there exist symmetric positive definite matrices and auxiliary matrices of appropriate dimensions such that the corresponding two LMIs hold.

Remark 14. It is worth noting that when we consider system (1) without switching, that is, $\hat{d}_i = \check{d}_i$, $\hat{a}_{ij} = \check{a}_{ij}$, and $\hat{b}_{ij} = \check{b}_{ij}$, then Corollary 12 reduces to the main theorem of [16].

Remark 15. Compared with the results on the stability of neural networks with continuous right-hand sides [6], our results address neural networks with discontinuous right-hand sides. Hence, the results of this paper are less conservative and more general.

4. Numerical Example

In this section, an example is provided to verify the effectiveness of the results obtained in the previous section.

Example 1. Consider a two-dimensional switched neural network with time-varying delays of the form (36), with state-dependent switched connection weights and an activation function satisfying (H1). From the system parameters, the bounds required by assumptions (H1) and (H2) can be computed. Using the Matlab LMI Control Toolbox, a feasible solution to LMIs (32) is obtained (part of the solution is omitted since its dimension is too large). Therefore, according to Corollary 12, we can conclude that the origin of system (36) is globally asymptotically stable. The state trajectories of the two state variables are shown in Figure 1.
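As a minimal illustration of how such LMI feasibility checks can be run numerically, the following Python/CVXPY sketch solves a simple delay-free Lyapunov LMI; the matrix A is hypothetical stand-in data, not the parameters of system (36), and CVXPY plays the role of the Matlab LMI Control Toolbox used above.

```python
import numpy as np
import cvxpy as cp

# Hypothetical stand-in system matrix (not the parameters of system (36)).
A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])
n = A.shape[0]

# Decision variable: a symmetric matrix P.
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6  # small margin to enforce strict inequalities numerically

# Lyapunov LMIs: P > 0 and A^T P + P A < 0.
constraints = [
    P >> eps * np.eye(n),
    A.T @ P + P @ A << -eps * np.eye(n),
]

# Pure feasibility problem: any feasible P certifies stability of dx/dt = A x.
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()

print("status:", problem.status)  # 'optimal' here means the LMIs are feasible
print("P =\n", P.value)
```

The same workflow (declare matrix variables, impose the LMIs, run a semidefinite feasibility solve) applies to the larger delay-dependent LMIs of Theorem 8 and its corollaries, only with more decision matrices and larger block structures.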

Remark 16. Because the parameters of system (1) are discontinuous, the results obtained in [6] about neural networks with continuous right-hand sides cannot be used here. In addition, the lower bounds of the delays of system (36) are not zero, so the results obtained in [8, 9, 16] cannot be used here.

5. Conclusions

In this paper, the delay-dependent stability of a class of switched neural networks with time-varying delays has been studied by using the quadratic convex combination. Some delay-dependent criteria in terms of LMIs have been obtained. The lower bound of the time-varying delays is allowed to be nonzero, so that the information on this lower bound can be used adequately. It is worth noting that, in contrast with previous results, we resort to neither Jensen's inequality with the delay-dividing approach nor the free-weighting matrix method.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to acknowledge the financial support from the National Natural Science Foundation of China (Grants nos. 61304068 and 61473334), the Jiangsu Qing Lan Project, and PAPD.