
Journal of Applied Mathematics

Volume 2013 (2013), Article ID 710741, 10 pages

http://dx.doi.org/10.1155/2013/710741

## Piecewise Convex Technique for the Stability Analysis of Delayed Neural Network

^{1}College of Computer Science and Information, Guizhou University, Guiyang 550025, China
^{2}School of Mathematics and Statistics, Guizhou University of Finance and Economics, Guiyang 550004, China
^{3}College of Science, Guizhou University, Guiyang 550025, China

Received 12 May 2013; Accepted 2 July 2013

Academic Editor: Chong Lin

Copyright © 2013 Zixin Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

On the basis of the fact that the neuron activation function is sector bounded, this paper transforms the original delayed neural network under study into a linear uncertain system. Combined with a delay partitioning technique, and by using a convex combination between the decomposed time delay and positive matrices, a novel Lyapunov function is constructed to derive new, less conservative stability criteria. The benefit of this method is that it can utilize more information on the slope of the activations and on the time delays. To illustrate the effectiveness of the newly established stability criteria, one numerical example and an application example are presented for comparison with some recent results.

#### 1. Introduction

As a special class of nonlinear dynamical systems, neural networks (NNs) have attracted considerable attention due to their extensive applications in pattern recognition, signal processing, associative memories, combinatorial optimization, and many other fields. However, time delay is frequently encountered in NNs due to the finite switching speed of amplifiers and the inherent communication time of neurons, especially in artificial neural networks, and it is often an important source of instability and oscillation. It has been shown that the existence of time delay can change the topology of a neural network and hence its dynamic behavior, giving rise to phenomena such as oscillation and chaos. Thus, it is significant to introduce time delay into the neural network model. Additionally, stochastic disturbances and parameter uncertainties can also destroy the convergence of a neural network system, which makes the design and performance analysis of the corresponding closed-loop systems difficult. Therefore, the equilibrium and stability properties of NNs with time delay have been widely considered by many researchers. Up to now, various stability conditions have been obtained, and many excellent papers and monographs are available (see [1–8]). The stability results obtained so far fall into two types: delay independent and delay dependent. Since they make fuller use of the information on the time delays, delay-dependent criteria may be less conservative than delay-independent ones when the time delay is small, and much attention has therefore been paid to the delay-dependent category [9–12]. In order to utilize more information on the time delay, the delay interval is often divided into two or more subintervals of equal size [13, 14]. It has been shown that this delay partitioning technique is effective: the more subintervals the delay is divided into, the less conservative the resulting stability criterion may be.
However, too many delay subintervals inevitably increase the computational burden, and how to balance these two conflicting aspects is an important issue. To solve this problem, weighting delay and convex analysis methods have been widely employed [8, 13].
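The destabilizing effect of delay described above is easy to reproduce numerically. The following sketch (not from the paper; a generic scalar delayed-feedback neuron with a tanh activation and hypothetical gain and delay values) integrates the system by the Euler method: without delay the state settles to the origin, while a delay beyond the critical value produces a sustained oscillation.

```python
import numpy as np

def simulate_dde(tau, gain=2.0, x0=0.5, dt=0.01, T=60.0):
    """Euler integration of the scalar delayed-feedback neuron
       x'(t) = -x(t) - gain * tanh(x(t - tau)),
       with constant history x(t) = x0 for t <= 0."""
    n = int(T / dt)
    d = int(round(tau / dt))
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x_del = x[k - d] if k - d >= 0 else x0
        x[k + 1] = x[k] + dt * (-x[k] - gain * np.tanh(x_del))
    return x

# Without delay the origin is globally attracting.
x_nodelay = simulate_dde(tau=0.0)
# For gain = 2 the linearization loses stability near tau ~ 1.21,
# so tau = 2 yields a sustained oscillation instead of convergence.
x_delayed = simulate_dde(tau=2.0)
```

For the linearized feedback with gain 2, the critical delay solves $\omega = \sqrt{3}$, $\omega\tau^* = 2\pi/3$, i.e. $\tau^* \approx 1.21$; the chosen `tau=2.0` lies well past it.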

Additionally, as pointed out by Li et al. [15], the choice of an appropriate Lyapunov-Krasovskii functional (LKF) and the utilization of the neuron activation function’s information are very important for deriving less conservative stability criteria. Thus, many authors have recently devoted themselves to proposing new techniques for establishing less conservative stability results, such as the discretized LKF, augmented LKF, free-weighting-matrix LKF, weighting-delay LKF, and delay-slope-dependent LKF.

In view of the previous discussion, one can see that, to reduce a criterion’s conservatism, the crucial problem is how to effectively utilize the information on the time delays and on the neuron activation function. Motivated by this, the present paper mainly considers the effective utilization of the time delay and of the neuron activation function’s sector bounds. By using a convex representation of the sector bounds, we first transform the original nonlinear delayed system into a linear uncertain system. Then, a new LKF is constructed to derive less conservative stability criteria. Different from previous LKFs, this new LKF fully exploits the convex combination between the decomposed time delay and positive matrices. Finally, one numerical example and an application example are presented to illustrate the validity of the main results.

*Notation. *The following notation is used throughout the paper unless otherwise specified: $\|\cdot\|$ denotes a vector or matrix norm; $\mathbb{R}$ and $\mathbb{R}^n$ are the sets of real numbers and of $n$-dimensional real vectors, respectively; $\mathrm{diag}\{\cdots\}$ denotes a block diagonal matrix. For a real symmetric matrix $X$, $X > 0$ ($X < 0$) denotes that $X$ is positive definite (negative definite). $I$ denotes the identity matrix of appropriate dimension.

#### 2. Preliminaries

Consider the following delayed neural network:
$$\dot{x}(t) = -C x(t) + A f(x(t)) + B f(x(t - h(t))) + J, \tag{1}$$
where $x(t) = (x_1(t), \ldots, x_n(t))^T \in \mathbb{R}^n$ denotes the neural state vector; $f(x(t)) = (f_1(x_1(t)), \ldots, f_n(x_n(t)))^T$ denotes the neuron activation function; $J$ is the external input vector; $C = \mathrm{diag}\{c_1, \ldots, c_n\}$ with $c_i > 0$ describes the rate with which the $i$th neuron resets its potential to the resting state in isolation when disconnected from the network and external inputs; $A$ and $B$ represent the weighting and delayed weighting matrices, respectively; the time-varying delay $h(t)$ is a continuous function satisfying $h_1 \le h(t) \le h_2$ and $\mu_1 \le \dot{h}(t) \le \mu_2$, where $h_1$, $h_2$, $\mu_1$, and $\mu_2$ are given constants. Additionally, we always assume that each neuron activation function $f_i$ satisfies the sector condition
$$k_i^- \le \frac{f_i(\alpha) - f_i(\beta)}{\alpha - \beta} \le k_i^+, \quad \forall \alpha, \beta \in \mathbb{R},\ \alpha \neq \beta,$$
where $k_i^-$ and $k_i^+$ are known constant scalars. As pointed out in [16], under this assumption, system (1) has an equilibrium point. Assume that $x^*$ is an equilibrium point of system (1), and set $z(t) = x(t) - x^*$, $g(z(t)) = f(z(t) + x^*) - f(x^*)$. Then, system (1) can be transformed into the following form:
$$\dot{z}(t) = -C z(t) + A g(z(t)) + B g(z(t - h(t))), \tag{2}$$
where $g(z(t)) = (g_1(z_1(t)), \ldots, g_n(z_n(t)))^T$. From the previous assumption, for any $\alpha \neq 0$, each function $g_i$ satisfies $g_i(0) = 0$ and $k_i^- \le g_i(\alpha)/\alpha \le k_i^+$.
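For a concrete activation such as the hyperbolic tangent, the sector condition above holds with lower bound 0 and upper bound 1, since the difference quotient of $\tanh$ always lies in $(0, 1]$. A quick randomized numerical check (an illustration, not part of the paper):

```python
import numpy as np

# tanh satisfies the sector condition with k_minus = 0, k_plus = 1:
# its difference quotient (tanh a - tanh b) / (a - b) lies in (0, 1].
rng = np.random.default_rng(0)
a = rng.uniform(-10, 10, 1000)
b = rng.uniform(-10, 10, 1000)
quot = (np.tanh(a) - np.tanh(b)) / (a - b)
assert np.all(quot >= 0.0) and np.all(quot <= 1.0 + 1e-12)
```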

Notice that the nonlinear function $g_i(\cdot)$ can be rewritten in a convex combination form with respect to its sector bounds $k_i^- \le g_i(z_i)/z_i \le k_i^+$ as follows:
$$g_i(z_i(t)) = \big[\lambda_i(t) k_i^+ + (1 - \lambda_i(t)) k_i^-\big] z_i(t), \tag{3}$$
where $\lambda_i(t)$ satisfies $0 \le \lambda_i(t) \le 1$. Namely, $g(z(t)) = \Lambda(t) z(t)$, where $\Lambda(t) = \mathrm{diag}\{\lambda_1(t) k_1^+ + (1 - \lambda_1(t)) k_1^-, \ldots, \lambda_n(t) k_n^+ + (1 - \lambda_n(t)) k_n^-\}$ is an element of the convex hull $\mathrm{co}\{K^-, K^+\}$, with $K^- = \mathrm{diag}\{k_1^-, \ldots, k_n^-\}$ and $K^+ = \mathrm{diag}\{k_1^+, \ldots, k_n^+\}$.

Set , where . Obviously, .

Define $K_0 = (K^+ + K^-)/2$ and $K_d = (K^+ - K^-)/2$, where $K^{\pm} = \mathrm{diag}\{k_1^{\pm}, \ldots, k_n^{\pm}\}$; then the nonlinearities $g(z(t))$ and $g(z(t - h(t)))$ can be expressed as $g(z(t)) = [K_0 + F_1(t) K_d] z(t)$ and $g(z(t - h(t))) = [K_0 + F_2(t) K_d] z(t - h(t))$, where the diagonal uncertainty matrices $F_1(t)$ and $F_2(t)$ satisfy $F_1^T(t) F_1(t) \le I$, $F_2^T(t) F_2(t) \le I$. System (2) can then be rewritten as the following delayed uncertain system:
$$\dot{z}(t) = \big[-C + A K_0 + A F_1(t) K_d\big] z(t) + \big[B K_0 + B F_2(t) K_d\big] z(t - h(t)). \tag{4}$$

*Remark 1. *Different from previous work, by using the convex expression of the activation nonlinearity, this paper transforms the original nonlinear system (2) into a linear system with parameter uncertainty, system (4). As a result, the stability problem of the delayed neural network (1) is transformed into the robust stability problem of the uncertain linear system (4).
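Remark 1’s convex rewriting can be illustrated numerically for the scalar tanh activation, whose sector bounds are $k^- = 0$ and $k^+ = 1$. The sketch below (illustrative notation, not the paper’s) recovers the convex coefficient $\lambda(z) \in [0, 1]$ from the slope $g(z)/z$ and verifies that the convex combination reproduces the nonlinearity:

```python
import numpy as np

k_minus, k_plus = 0.0, 1.0   # sector bounds of tanh

def lam(z):
    """Convex coefficient lambda(z) in [0, 1] such that
       g(z) = [lam(z)*k_plus + (1 - lam(z))*k_minus] * z."""
    slope = np.tanh(z) / z if z != 0 else 1.0  # g(z)/z, with limit 1 at z = 0
    return (slope - k_minus) / (k_plus - k_minus)

rng = np.random.default_rng(0)
for z in rng.uniform(-5, 5, 100):
    l = lam(z)
    assert 0.0 <= l <= 1.0
    # the convex combination reproduces g(z) = tanh(z)
    g = (l * k_plus + (1 - l) * k_minus) * z
    assert abs(g - np.tanh(z)) < 1e-12
```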

Let , . For further discussion, the following lemmas are needed.

Lemma 2 (see [16]). *Let $f_1, f_2, \ldots, f_N : \mathbb{R}^m \to \mathbb{R}$ have positive values in an open subset $D$ of $\mathbb{R}^m$. Then, the reciprocally convex combination of $f_i$ over $D$ satisfies
$$\min_{\{\alpha_i \,\mid\, \alpha_i > 0,\ \sum_i \alpha_i = 1\}} \sum_i \frac{1}{\alpha_i} f_i(t) = \sum_i f_i(t) + \max_{g_{i,j}(t)} \sum_{i \neq j} g_{i,j}(t)$$
subject to
$$g_{i,j} : \mathbb{R}^m \to \mathbb{R}, \quad g_{j,i}(t) = g_{i,j}(t), \quad \begin{bmatrix} f_i(t) & g_{i,j}(t) \\ g_{i,j}(t) & f_j(t) \end{bmatrix} \ge 0.$$*
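For the two-term scalar case ($N = 2$), Lemma 2 reduces to the bound $f_1/\alpha + f_2/(1-\alpha) \ge f_1 + f_2 + 2g$ whenever $\begin{bmatrix} f_1 & g \\ g & f_2 \end{bmatrix} \ge 0$, i.e. $g^2 \le f_1 f_2$. A randomized numerical check of this special case (an illustrative sketch only):

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(1000):
    f1, f2 = rng.uniform(0.1, 5, 2)            # positive values f_i
    g = rng.uniform(-1, 1) * np.sqrt(f1 * f2)  # ensures [[f1, g], [g, f2]] >= 0
    a = rng.uniform(0.01, 0.99)                # convex weight alpha
    # the reciprocally convex combination is bounded below by f1 + f2 + 2g
    assert f1 / a + f2 / (1 - a) >= f1 + f2 + 2 * g - 1e-9
```

The bound follows from $(1-\alpha)f_1/\alpha + \alpha f_2/(1-\alpha) \ge 2\sqrt{f_1 f_2} \ge 2g$ by the arithmetic–geometric mean inequality.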

Lemma 3 (see [17]). *Given the symmetric matrix $\Omega$ and any real matrices $\Gamma$, $\Xi$ of appropriate dimensions, then
$$\Omega + \Gamma F(t) \Xi + \Xi^T F^T(t) \Gamma^T < 0$$
for all $F(t)$ satisfying $F^T(t) F(t) \le I$ if and only if there exists a scalar $\varepsilon > 0$ such that
$$\Omega + \varepsilon \Gamma \Gamma^T + \varepsilon^{-1} \Xi^T \Xi < 0.$$*

Lemma 4 ([18], Jensen inequality). *Consider two scalars $a < b$ and a positive definite matrix $M > 0$. For any continuous function $f : [a, b] \to \mathbb{R}^n$ and any strictly positive continuous function $\rho : [a, b] \to \mathbb{R}$, the following inequality holds:
$$\left(\int_a^b f(s)\, ds\right)^T M \left(\int_a^b f(s)\, ds\right) \le \left(\int_a^b \rho(s)\, ds\right) \int_a^b \frac{f^T(s) M f(s)}{\rho(s)}\, ds.$$*
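In its unweighted form ($\rho \equiv 1$), the Jensen inequality of Lemma 4 reduces to $(\int_a^b f)^T M (\int_a^b f) \le (b-a) \int_a^b f^T M f$. The following sketch verifies this on a discretized example with a random positive definite $M$ (illustration only; the function $f$ and interval are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, n = 0.0, 2.0, 2000
s = np.linspace(a, b, n)
ds = s[1] - s[0]

# a random positive definite M and a smooth vector function f(s) in R^3
Q = rng.standard_normal((3, 3))
M = Q @ Q.T + 3 * np.eye(3)
c1, c2 = rng.standard_normal(3), rng.standard_normal(3)
f = np.outer(np.sin(3 * s), c1) + np.outer(np.cos(s), c2)  # shape (n, 3)

F = f.sum(axis=0) * ds                       # integral of f over [a, b]
lhs = F @ M @ F                              # (int f)^T M (int f)
rhs = (b - a) * np.sum((f @ M) * f) * ds     # (b - a) * int f^T M f
assert lhs <= rhs + 1e-9
```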

Lemma 5 (see [19]). *For any constant matrix $Z = Z^T > 0$ and scalars $h_2 > h_1 \ge 0$ such that the following integrations are well defined,
$$-(h_2 - h_1) \int_{t-h_2}^{t-h_1} x^T(s) Z x(s)\, ds \le -\left(\int_{t-h_2}^{t-h_1} x(s)\, ds\right)^T Z \left(\int_{t-h_2}^{t-h_1} x(s)\, ds\right).$$*

#### 3. Stability Analysis

Set , , and let , , , , , be positive matrices. Let

Consider a new class of Lyapunov functional candidates as follows: where

Define ; we can obtain + + + + , where ,

Notice that

Set , , , , , , , , , , , . It yields that

We start with the case , where .

Therefore

Notice that . , , + ; from Lemma 5, it yields that

Notice that

Thus, from Lemma 5, it yields that Set , , where are arbitrary matrices. If , since , from Lemma 2, one can obtain

Note that inequality (21) holds for all . When and , inequality (21) still holds; it yields that

where

Using the fact that and that is a strictly positive continuous function on , by Lemma 4, one can obtain

Furthermore, from (4), for arbitrary matrices and of appropriate dimensions, the following equalities hold:

Define

Therefore

Moreover

From (16)–(28b), we get that, along (4), for some scalar if

Inequality (30) leads, for , to the following two inequalities:

Inequalities (31a) and (31b) imply (30), because is convex in . At the same time, inequalities (31a) and (31b) lead, for , to the following inequalities:

Obviously, inequalities (32a)–(32d) imply (31a) and (31b) since + , and + are convex in .
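The vertex argument used here — a matrix inequality that is affine (hence convex) in the delay $h$ need only be checked at the endpoints $h_1$ and $h_2$ — can be tested numerically. The sketch below uses randomly generated symmetric matrices and hypothetical endpoint values (not the paper’s data):

```python
import numpy as np

rng = np.random.default_rng(3)
h1, h2 = 0.5, 1.5          # hypothetical delay bounds
hits = 0
for _ in range(200):
    # a symmetric matrix function Omega(h) = O0 + h * O1, affine in h
    X0 = rng.standard_normal((3, 3))
    X1 = rng.standard_normal((3, 3))
    O0 = X0 + X0.T - 6.0 * np.eye(3)
    O1 = X1 + X1.T
    if all(np.all(np.linalg.eigvalsh(O0 + h * O1) < 0) for h in (h1, h2)):
        hits += 1
        # negative definiteness at the two vertices implies it on all of [h1, h2],
        # since Omega(h) is a convex combination of Omega(h1) and Omega(h2)
        for h in np.linspace(h1, h2, 11):
            assert np.all(np.linalg.eigvalsh(O0 + h * O1) < 0)
assert hits > 0
```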

Similarly, there exists scalar such that if

Similarly, set , , , , , , , , , , , .

If , then, .

Therefore

Moreover

Set where are arbitrary matrices. If , from Lemmas 5 and 2, similar to the proof of (19)–(21), one can obtain

When and , inequality (38) still holds; it yields that

where

One has

Furthermore, from (4), for arbitrary matrices and of appropriate dimensions, the following equalities hold:

Define

Therefore

Moreover

From (35)–(45b), similar to (30)–(32d), there exist and such that if

or

Theorem 6. *For given scalars , , , , , , system (1) is globally asymptotically stable if there exist , of appropriate dimensions such that the following conditions hold:
**
where . *

*Proof. *By Lemma 3, the conditions in Theorem 6 are equivalent to (32a)–(32d) or (47a)–(47d). In view of the previous analysis from (30) to (32d) and from (46) to (47d), one can see that there exist such that . By Lyapunov stability theory, system (1) is globally asymptotically stable, which completes the proof.
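The conditions of Theorem 6 are matrix inequalities that would in practice be checked with an SDP solver. As a minimal illustration of the underlying Lyapunov argument, the sketch below checks a necessary condition on hypothetical network data ($C$, $A$, $B$, and the sector bound matrix are made up for illustration and are not the paper’s example): the delay-free vertex system must be Hurwitz, in which case the Lyapunov equation yields a positive definite $P$.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical network data (illustration only)
C = np.diag([2.0, 2.5])
A = np.array([[0.2, -0.1], [0.1, 0.3]])
B = np.array([[-0.3, 0.1], [0.2, -0.2]])
K = np.eye(2)          # upper sector bound for tanh-type activations

# Delay-free vertex system z' = (-C + (A + B) K) z; a necessary condition
# for the delayed system's stability is that this matrix be Hurwitz.
Abar = -C + (A + B) @ K
assert np.all(np.linalg.eigvals(Abar).real < 0)

# The Lyapunov equation Abar^T P + P Abar = -Q with Q > 0 then has a
# unique solution P > 0, so V(z) = z^T P z is a Lyapunov function.
Q = np.eye(2)
P = solve_continuous_lyapunov(Abar.T, -Q)
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)
```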

*Remark 7. *Different from previous work, the LKF in this paper is constructed by using the convex combination between the decomposed time delay and positive matrices, which may reduce the conservatism of the criterion.

*Remark 8. *From the proof of Theorem 6, one can see that, by using different combinations among , , and , , we can establish different stability criteria as follows.

Corollary 9. * For given scalars , , , , , , system (1) is globally asymptotically stable if there exist , of appropriate dimensions such that the following conditions hold:
*

Corollary 10. *For given scalars , , , *