New Results on Passivity Analysis for Uncertain Neural Networks with Time-Varying Delay
The paper investigates the stability and passivity analysis problems for a class of uncertain neural networks with time-varying delay via the delta operator approach. Both parameter uncertainty and generalized activation functions are considered. By constructing an appropriate Lyapunov-Krasovskii functional, new stability and passivity conditions are obtained in terms of linear matrix inequalities (LMIs). The main contribution of this paper is a set of novel stability and passivity analysis criteria for uncertain neural networks with time-varying delay in the delta operator system framework. A numerical example is presented to demonstrate the effectiveness of the proposed results.
Neural networks have attracted considerable attention due to their applications in wide-ranging areas such as associative memory [1, 2], pattern recognition [1, 3], and optimization problems [4–7]. Some stability conditions [8–12] and passivity analysis results for neural networks [13–15] have been reported in the literature. The effect of time-delays [16–25] cannot be ignored in real systems, because delays can lead to instability [26–29], oscillation, or chaos. Recently, stability results for time-delay neural networks have been presented in [30–36]. In addition, many results on the passivity of neural networks with time-delay have been proposed [37–41].
It is well known that discrete systems are often used for computer implementation, whereas continuous systems are frequently used for theoretical analysis. Sampling a continuous system yields a corresponding discrete system. Under fast sampling with the traditional shift operator, however, the poles of the discretized system cluster near the stability boundary, so the discrete system can lose stability under finite-word-length computation. Goodwin proposed the delta operator approach to replace the shift operator when sampling continuous systems, which unifies some previous results on continuous and discrete systems within the framework of delta operator systems. The delta operator is defined by $\delta x(t_k) = \frac{x(t_k + T) - x(t_k)}{T}$, where $T$ is the sampling period. More recently, much attention has been focused on the stability and stabilization problems for delta operator systems [43–47]. However, there are few results on passivity analysis for uncertain discrete neural networks with time-varying delay via the delta operator approach, and instability can arise in applications of neural networks when the sampling rate is high, which motivates this research.
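The limiting behavior of the delta operator can be checked numerically. The sketch below (plain Python, with an illustrative signal and pole value assumed) samples x(t) = exp(a*t) and shows that the delta operator of the samples approaches the continuous-time derivative a*x as the sampling period T shrinks, unlike the shift operator, whose pole exp(a*T) merely drifts toward 1:

```python
import math

def delta_op(x, k, T):
    """Forward-difference delta operator at sample k: (x[k+1] - x[k]) / T."""
    return (x[k + 1] - x[k]) / T

a = -0.5  # continuous-time pole of x(t) = exp(a*t) (illustrative value)
approximations = []
for T in (0.1, 0.01, 0.001):
    samples = [math.exp(a * k * T) for k in range(2)]
    # For this signal the delta operator gives (exp(a*T) - 1) / T,
    # which tends to the derivative a as T -> 0.
    approximations.append(delta_op(samples, 0, T))
print(approximations)  # each value approaches a = -0.5
```

This is why delta-domain conditions, such as those derived in this paper, remain well conditioned at high sampling rates.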
In this paper, the stability and passivity problems are investigated for uncertain neural networks with time-varying delay via the delta operator approach. Both parameter uncertainty and generalized activation functions are considered. By choosing a new type of Lyapunov functional in the delta domain and employing some novel methods to handle the delays, stability and passivity criteria are proposed. The proposed conditions are expressed in terms of linear matrix inequalities (LMIs) and are dependent on the sampling period. The main contribution of this paper is a set of stability and passivity analysis criteria for uncertain neural networks with time-varying delay in the delta operator system framework. Finally, a numerical example is given to demonstrate the effectiveness of the developed results.
Notation. Throughout this paper, for the sake of convenience, we use to denote , where is the sampling period. denotes the -dimensional Euclidean space. The superscript stands for matrix transposition. The notation denotes a block-diagonal matrix. For real symmetric matrices and , the notation (resp., ) means that the is positive semidefinite (resp., positive-definite). is the identity matrix with appropriate dimensions. The symbol “*” stands for the symmetric term in a matrix. Matrices, if their dimensions are not explicitly stated, are assumed to have compatible dimensions.
2. Problem Formulation
Consider the following uncertain neural network with time-varying delay: where stands for the network state at time ; denotes the activation at time ; is the external input at time ; is the output vector; is a positive diagonal matrix; , , and are unknown matrices; is a time-varying delay , with and ; and are two known positive and finite integers; is the sampling period. The time-varying parameter uncertainties , , and are assumed to be in the following form: where , , , and are known constant matrices; is an unknown time-varying matrix satisfying The activation function satisfies the following condition: with ,
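The model equation itself did not survive extraction. For orientation only, a common form for this class of delta operator neural networks, with illustrative symbol names that are not necessarily the paper's, reads:

```latex
\delta x(t_k) = -A x(t_k) + W_0 f\big(x(t_k)\big)
              + W_1 f\big(x(t_k - d(t_k)T)\big) + u(t_k),
\qquad
y(t_k) = f\big(x(t_k)\big),
```

where $A$ would be the positive diagonal matrix, $W_0$ and $W_1$ the (uncertain) connection weight matrices, $u$ the external input, and $d(t_k)$ the time-varying delay bounded by the two known positive integers.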
Before ending this section, some preliminaries are recalled which are used to prove the main results in the next section.
Definition 1 (see ). A delta operator system is asymptotically stable if the following conditions hold: (i) , with equality if and only if ; (ii) , where is a Lyapunov function in the delta domain.
Lemma 3 (see ). The delta operator satisfies the following product rule for any time functions $x(t_k)$ and $y(t_k)$: $\delta\big(x(t_k) y(t_k)\big) = y(t_k)\,\delta x(t_k) + x(t_k)\,\delta y(t_k) + T\,\delta x(t_k)\,\delta y(t_k)$, where $T$ is the sampling period.
Lemma 4 (see ). For any constant positive semidefinite symmetric matrix , two positive integers and satisfy ; the following inequality holds:
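The inequality statement was lost in extraction. The sketch below numerically checks the discrete Jensen-type summation inequality commonly invoked at this point, for a positive semidefinite matrix R and hypothetical sample vectors: the quadratic form of the sum is bounded by the number of terms times the sum of the quadratic forms.

```python
import numpy as np

# Hypothetical data illustrating the Jensen-type summation inequality:
#   (sum_i x_i)^T R (sum_i x_i)  <=  N * sum_i (x_i^T R x_i)   for R >= 0.
R = np.array([[2.0, 0.3],
              [0.3, 1.0]])  # positive semidefinite weight matrix
xs = [np.array([1.0, -0.5]),
      np.array([0.2, 0.7]),
      np.array([-0.4, 0.1])]

s = sum(xs)                                   # sum of the sample vectors
lhs = s @ R @ s                               # quadratic form of the sum
rhs = len(xs) * sum(x @ R @ x for x in xs)    # N times the summed forms
print(lhs <= rhs)  # the inequality holds
```

In the proofs, this kind of bound converts a summed cross term in the delta of the Lyapunov functional into an LMI-compatible quadratic term.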
Lemma 5 (see ). Let , , , and be real matrices of appropriate dimensions with satisfying ; then for all , if and only if there exists a scalar such that .
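This is the standard matrix bounding lemma used to eliminate the uncertain factor from the LMIs. A quick numerical check, with hypothetical D, E, F chosen so that F^T F <= I and with the scalar set to 1, confirms that eps*D*D^T + (1/eps)*E^T*E dominates D*F*E + (D*F*E)^T in the positive semidefinite sense:

```python
import numpy as np

# Hypothetical matrices of compatible dimensions; F satisfies F^T F <= I.
D = np.array([[0.5], [0.2]])   # 2x1
E = np.array([[0.3, -0.1]])    # 1x2
F = np.array([[0.9]])          # 1x1, |F| <= 1
eps = 1.0                      # the scalar from the lemma

lhs = D @ F @ E + (D @ F @ E).T                 # uncertain symmetric term
rhs = eps * D @ D.T + (1.0 / eps) * E.T @ E     # uncertainty-free bound
gap = rhs - lhs
print(np.linalg.eigvalsh(gap))  # all nonnegative: the bound holds
```

The proofs of Theorems 8 and 10 apply exactly this bound to remove the time-varying uncertainty before invoking the Schur complement.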
Lemma 6 (Schur complement; see ). Given constant matrices , , and with appropriate dimensions, where and , then , if and only if
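A small numerical illustration of the Schur complement equivalence, with hypothetical blocks: the symmetric block matrix [[Q, S], [S^T, -R]] is negative definite exactly when R > 0 and the Schur complement Q + S R^{-1} S^T is negative definite.

```python
import numpy as np

def is_neg_def(A):
    return bool(np.all(np.linalg.eigvalsh(A) < 0))

def is_pos_def(A):
    return bool(np.all(np.linalg.eigvalsh(A) > 0))

# Hypothetical symmetric blocks for the check.
Q = np.array([[-3.0, 0.5], [0.5, -2.0]])
S = np.array([[0.4, 0.1], [0.0, 0.3]])
R = np.array([[2.0, 0.2], [0.2, 1.5]])

M = np.block([[Q, S], [S.T, -R]])            # full block matrix
schur = Q + S @ np.linalg.inv(R) @ S.T       # Schur complement w.r.t. -R

# Both sides of the equivalence agree on this example.
print(is_neg_def(M), is_pos_def(R) and is_neg_def(schur))
```

This equivalence is what lets the nonlinear matrix inequalities arising in the proofs be rewritten as the linear matrix inequalities stated in the theorems.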
3. Main Results
In this section, the stability and passivity results for the discrete-time uncertain neural network with time-varying delay are derived via the delta operator approach. First, the stability conditions are presented.
3.1. Stability Analysis
In order to consider the stability condition for the uncertain neural network with time-delay (2), we define and in (2). Then, the following neural network with time-delay is obtained: For the neural network with time-delay in (12), the stability criterion is given in the following theorem.
Theorem 7. For given scalars , neural network (12) with (6) is asymptotically stable, if there exist , , , and , positive definite diagonal matrices , , , and , such that the following LMI holds: where
Proof. Choose a Lyapunov-Krasovskii functional in delta domain as follows:
Applying Lemma 3, the delta-domain form of can be obtained as
From (6), for the scalars , one can have
which can be equivalently denoted as
Then, we obtain the following inequality:
Taking the delta operator manipulations of , we can obtain that
Taking the delta operator manipulations of , the following results can be obtained:
Taking the delta operator manipulations of and using Lemma 4, it can be found that For a given positive definite matrix with appropriate dimensions, one has that
Combining (18) and (21)–(26), the following inequality holds: where
It can be seen from Theorem 7 that , which means that . Then, based on Definition 1, neural network (12) is asymptotically stable. The proof is completed.
In this subsection, we continue to consider the robust stability problem of neural network (2) without inputs ; equivalently, we have For neural network (29), the robust stability criterion is given in the following theorem.
Theorem 8. For given scalars , neural network (29) under the conditions (4)–(6) is robustly asymptotically stable, if there exist , , , positive definite diagonal matrices , , and , such that the following LMI holds: where with , , , , , , , , and defined in Theorem 7.
Proof. We choose the same Lyapunov-Krasovskii functional as Theorem 7. According to neural network (2), we have the following equation:
Similar to the proof of Theorem 7, we can have that
with , , , and ; the terms , , , , , , , and have been defined in Theorem 7.
Applying Schur complement to (30), we have By Lemma 5, from the inequality (35), we can easily obtain Consequently, ; from Definition 1, neural network in (29) is robustly asymptotically stable. The proof is completed.
3.2. Passivity Analysis
In this subsection, the passivity analysis results are given in the following part. We first consider (2) and (3) without the parameter uncertainties , . Then, the following neural network can be obtained:
Theorem 9. For given scalars , neural network (37) is passive, if there exist , , , positive definite diagonal matrices , , , and , such that the following LMI holds: where and , , , and have been defined in Theorem 7.
Proof. In order to present the passivity condition for neural network (37), we choose the same Lyapunov-Krasovskii functional as Theorem 7. By following the same line of proof of Theorem 7 and considering the following inequality: it can be seen from the LMI condition (40) that which means that neural network in (37) is passive. This finishes the proof.
Theorem 10. For given scalars , neural network in (2) and (3) is passive, if there exist , , , positive definite diagonal matrices , , , and , such that the following LMI holds: where and , , , and have been defined in Theorem 8.
Proof. In order to present the passivity condition for neural network in (2) and (3), we choose the same Lyapunov-Krasovskii functional as Theorem 7. By following the same line of proof of Theorems 8 and 9, Theorem 10 can be proved.
4. A Numerical Example
In this section, the following numerical example is presented to demonstrate the effectiveness of the proposed results.
In order to illustrate the effectiveness of the obtained results, we choose , , and . Then, using the Matlab LMI toolbox to solve the LMI in (42), we obtain a solution as follows:
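The paper's numerical matrices and the resulting LMI solution are not reproduced here. As a sketch of the kind of feasibility check involved, assuming a hypothetical stable 2x2 system and a candidate P, the snippet below verifies the delta-domain Lyapunov inequality A^T P + P A + T A^T P A < 0 with NumPy eigenvalue tests in place of the Matlab LMI toolbox:

```python
import numpy as np

# Hypothetical data: the paper's example matrices are not reproduced here.
A = np.array([[-1.0, 0.2],
              [0.1, -0.8]])   # system matrix of the delta operator system
T = 0.1                        # sampling period
P = np.eye(2)                  # candidate Lyapunov matrix, P > 0

# Delta-domain Lyapunov inequality: A^T P + P A + T A^T P A < 0,
# which is the delta operator counterpart of the continuous/discrete
# Lyapunov conditions unified by the framework.
lyap = A.T @ P + P @ A + T * (A.T @ P @ A)

print(np.linalg.eigvalsh(P))     # all positive: P is positive definite
print(np.linalg.eigvalsh(lyap))  # all negative: stability certified
```

For the genuine LMIs of Theorems 7–10, where P itself is a decision variable, a semidefinite programming solver (e.g., the Matlab LMI toolbox used by the authors, or CVXPY in Python) would search for feasible matrices instead of checking a fixed candidate.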
5. Conclusions
In this paper, the problems of stability and passivity analysis for discrete-time neural networks with time-varying delay have been studied via the delta operator approach. Both the parameter uncertainty and the generalized activation functions have been considered. By constructing an appropriate Lyapunov-Krasovskii functional, some novel stability and passivity criteria have been proposed in the delta operator system framework. The obtained conditions have been expressed in terms of LMIs, which can be easily solved by standard software. A numerical example has been given to illustrate the effectiveness of the proposed results.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work was partially supported by the National Natural Science Foundation of China (nos. 61304003 and 11226138).
References
G. A. Carpenter, "Neural network models for pattern recognition and associative memory," Neural Networks, vol. 2, no. 4, pp. 243–257, 1989.
A. N. Michel, J. A. Farrell, and H.-F. Sun, "Analysis and synthesis techniques for Hopfield type synchronous discrete time neural networks with application to associative memory," IEEE Transactions on Circuits and Systems, vol. 37, no. 11, pp. 1356–1366, 1990.
M. Galicki, H. Witte, J. Dörschel, M. Eiselt, and G. Griessbach, "Common optimization of adaptive preprocessing units and a neural network during the learning period. Application in EEG pattern recognition," Neural Networks, vol. 10, no. 6, pp. 1153–1163, 1997.
C. Peterson and B. Soderberg, "A new method for mapping optimization problems onto neural networks," International Journal of Neural Systems, vol. 1, no. 1, pp. 3–22, 1989.
S. Yin, H. Luo, and S. X. Ding, "Real-time implementation of fault-tolerant control systems with performance optimization," IEEE Transactions on Industrial Electronics, vol. 61, no. 5, pp. 2402–2411, 2013.
S. Yin, S. X. Ding, A. Haghani, H. Hao, and P. Zhang, "A comparison study of basic data-driven fault diagnosis and process monitoring methods on the benchmark Tennessee Eastman process," Journal of Process Control, vol. 22, no. 9, pp. 1567–1581, 2012.
X. Li and H. Gao, "A new model transformation of discrete-time systems with time-varying delay and its application to stability analysis," IEEE Transactions on Automatic Control, vol. 56, no. 9, pp. 2172–2178, 2011.
H. Li, X. Jing, and H. R. Karimi, "Output-feedback based control for active suspension systems with control delay," IEEE Transactions on Industrial Electronics, vol. 61, no. 1, pp. 436–446, 2014.
M. Liu, P. Shi, L. Zhang, and X. Zhao, "Fault-tolerant control for nonlinear Markovian jump systems via proportional and derivative sliding mode observer technique," IEEE Transactions on Circuits and Systems I, vol. 58, no. 11, pp. 2755–2764, 2011.
A. Wu and Z. Zeng, "Exponential passivity of memristive neural networks with time delays," Neural Networks, vol. 49, pp. 11–18, 2014.
G. C. Goodwin, R. L. Leal, D. Q. Mayne, and R. H. Middleton, "Rapprochement between continuous and discrete model reference adaptive control," Automatica, vol. 22, no. 2, pp. 199–207, 1986.
X. Jiang, Q.-L. Han, and X. Yu, "Stability criteria for linear discrete-time systems with interval-like time-varying delay," in Proceedings of the American Control Conference (ACC '05), pp. 2817–2822, June 2005.