Abstract

The problem of passivity analysis for neural networks with time-varying delays and parameter uncertainties is considered. By constructing newly augmented Lyapunov-Krasovskii functionals, improved sufficient conditions guaranteeing the passivity of the concerned networks are proposed within the framework of linear matrix inequalities (LMIs), which can be solved easily by various efficient convex optimization algorithms. The enlargement of the feasible region of the proposed criteria is shown via two numerical examples by comparing maximum allowable delay bounds.

1. Introduction

Neural networks are networks of mutually interconnected elements that behave like biological neurons and can be described mathematically by difference or differential equations. For this reason, during the past few decades, neural networks have been extensively applied in many areas such as reconstruction of moving images, signal processing, pattern recognition, associative memories, and fixed-point computations [1–10]. Moreover, the stability analysis of the concerned neural networks is a very important and prerequisite job because the application of neural networks heavily depends on the dynamic behavior of their equilibrium points.

On the other hand, keen attention must be paid to time delays and passivity. It is well known that time delay is a natural concomitant of the finite speed of information processing in the implementation of networks and often causes undesirable dynamic behaviors such as oscillation and instability. In various scientific and engineering problems, stability issues are often linked to the theory of dissipative systems, which postulates that the energy dissipated inside a dynamic system is less than the energy supplied from the external source [11]. Based on this concept of energy, passivity is a property of dynamical systems that describes the energy flow through the system. It is also an input/output characterization and is related to the Lyapunov method. In the field of nonlinear control, the concept of dissipativeness was first introduced by Willems [12] in the form of an inequality involving the supply rate and the storage function. The main idea of passivity theory is that the passive properties of a system can keep the system internally stable. Therefore, passivity analysis for uncertain neural networks with time delay has been widely investigated in [13–19], since parameter uncertainties, which sometimes affect the stability of systems, are also undesirable dynamics in the hardware implementation of neural networks: the connection weights of the neurons depend on the values of certain resistances and capacitances, which are subject to variations and fluctuations [20]. In [16], two types of time-varying delays were considered in the passivity analysis of uncertain neural networks. Recently, in [17], by considering some useful terms which were ignored in the previous literature and utilizing free-weighting matrix techniques, an enlargement of the feasible region of passivity criteria was shown. In [18], by proposing a complete delay-decomposing approach and utilizing a segmentation technique, improved conditions for the passivity of neural networks were presented. In the authors' previous work [19], some less conservative conditions for the passivity of neural networks were derived by taking more information on the states into account. All the works [13–19] demonstrate the advantages of their proposed methods via comparison of maximum delay bounds with previous works, since delay bounds for guaranteeing the passivity of the concerned networks are recognized as one of the most important indices for checking the conservatism of criteria. Very recently, one of the most remarkable methods for reducing the conservatism of stability criteria has been the Wirtinger-based integral inequality [21], which effectively reduces Jensen's gap. Therefore, there is room for further improvement in the passivity analysis of neural networks with both time delay and parameter uncertainties.

With the motivation mentioned above, in this paper, the problem of passivity for uncertain neural networks with time-varying delays is addressed. In Theorem 6, by utilizing the Wirtinger-based integral inequality [21], a passivity condition for neural networks with time-varying delays and parameter uncertainties is introduced within the framework of LMIs. Based on the result of Theorem 6, a newly constructed Lyapunov-Krasovskii functional is introduced and further improved results are derived in Theorem 7. Inspired by the works of [22, 23], the reciprocally convex approach and some zero equalities are utilized in Theorems 6 and 7. Finally, through two numerical examples, it is shown that Theorems 6 and 7 yield less conservative results.

Notation. Throughout this paper, the notations used are standard. $\mathbb{R}^n$ is the $n$-dimensional Euclidean vector space and $\mathbb{R}^{m \times n}$ denotes the set of all $m \times n$ real matrices. For symmetric matrices $X$ and $Y$, $X > Y$ means that the matrix $X - Y$ is positive definite, whereas $X \geq Y$ means that the matrix $X - Y$ is nonnegative. $I_n$, $0_n$, and $0_{m \times n}$ denote the $n \times n$ identity matrix and the $n \times n$ and $m \times n$ zero matrices, respectively. $\operatorname{diag}\{\cdots\}$ denotes the block diagonal matrix. For a square matrix $S$, $\operatorname{Sym}\{S\}$ means the sum of $S$ and its transpose $S^T$; that is, $\operatorname{Sym}\{S\} = S + S^T$. $X_{[f(t)]}$ means that the elements of the matrix $X_{[f(t)]}$ include the scalar value of $f(t)$; that is, $X_{[f_0]} = X_{[f(t) = f_0]}$.

2. Preliminaries and Problem Statement

Consider the following uncertain neural networks with time-varying delays:
$$\begin{aligned}
\dot{x}(t) &= -(A + \Delta A(t))\,x(t) + (W_0 + \Delta W_0(t))\,f(x(t)) + (W_1 + \Delta W_1(t))\,f(x(t - h(t))) + u(t),\\
y(t) &= W_2\, f(x(t)) + W_3\, f(x(t - h(t))),
\end{aligned} \tag{1}$$
where $n$ denotes the number of neurons in the neural network, $x(t) = [x_1(t), \ldots, x_n(t)]^T \in \mathbb{R}^n$ is the neuron state vector, $f(x(t)) = [f_1(x_1(t)), \ldots, f_n(x_n(t))]^T \in \mathbb{R}^n$ denotes the neuron activation function vector, $y(t) \in \mathbb{R}^n$ is the output vector, $u(t) \in \mathbb{R}^n$ is the input vector, $A = \operatorname{diag}\{a_1, \ldots, a_n\}$ is a positive diagonal matrix, $W_i$ $(i = 0, 1)$ are the interconnection weight matrices, $W_i$ $(i = 2, 3)$ are known constant matrices, and $\Delta A(t)$ and $\Delta W_i(t)$ $(i = 0, 1)$ are the parameter uncertainties of the form
$$[\Delta A(t),\ \Delta W_0(t),\ \Delta W_1(t)] = D\,F(t)\,[E_a,\ E_0,\ E_1], \tag{2}$$
where $D$, $E_a$, $E_0$, and $E_1$ are constant matrices and $F(t)$ is a time-varying nonlinear function satisfying
$$F^T(t)\,F(t) \leq I. \tag{3}$$
The delay $h(t)$ is a time-varying function satisfying
$$0 \leq h(t) \leq h_M, \qquad \dot{h}(t) \leq h_D, \tag{4}$$
where $h_M$ and $h_D$ are known positive scalars.

It is assumed that the neuron activation functions satisfy the following condition.

Assumption 1. The neuron activation functions $f_i(\cdot)$ are continuous, bounded, and satisfy
$$k_i^- \leq \frac{f_i(u) - f_i(v)}{u - v} \leq k_i^+, \qquad u \neq v,\ u, v \in \mathbb{R},\ i = 1, \ldots, n, \tag{5}$$
where $k_i^-$ and $k_i^+$ are constants.
From (5), if $v = 0$ and $f_i(0) = 0$, then we have
$$k_i^- \leq \frac{f_i(u)}{u} \leq k_i^+, \qquad u \neq 0. \tag{6}$$
Also, the conditions (5) and (6) are, respectively, equivalent to
$$\left[f_i(u) - f_i(v) - k_i^-(u - v)\right]\left[f_i(u) - f_i(v) - k_i^+(u - v)\right] \leq 0, \tag{7}$$
$$\left[f_i(u) - k_i^- u\right]\left[f_i(u) - k_i^+ u\right] \leq 0. \tag{8}$$
The system (1) can be rewritten as
$$\begin{aligned}
\dot{x}(t) &= -A x(t) + W_0 f(x(t)) + W_1 f(x(t - h(t))) + u(t) + D p(t),\\
y(t) &= W_2 f(x(t)) + W_3 f(x(t - h(t))),\\
p(t) &= F(t)\, q(t), \qquad q(t) = -E_a x(t) + E_0 f(x(t)) + E_1 f(x(t - h(t))).
\end{aligned} \tag{9}$$

The objective of this paper is to investigate delay-dependent passivity conditions for system (9). Before deriving our main results, the following definition and lemmas are introduced.

Definition 2. The system (1) is called passive if there exists a scalar $\gamma \geq 0$ such that
$$2\int_0^{t_p} y^T(s)\,u(s)\,ds \geq -\gamma \int_0^{t_p} u^T(s)\,u(s)\,ds \tag{10}$$
for all $t_p \geq 0$ and for all solutions of (1) with $x(0) = 0$.

Lemma 3 (see [21]). For a given matrix $M > 0$, the following inequality holds for all continuously differentiable functions $\omega : [a, b] \to \mathbb{R}^n$:
$$\int_a^b \dot{\omega}^T(s)\,M\,\dot{\omega}(s)\,ds \geq \frac{1}{b - a}\,(\omega(b) - \omega(a))^T M\,(\omega(b) - \omega(a)) + \frac{3}{b - a}\,\Omega^T M\,\Omega, \tag{11}$$
where $\Omega = \omega(b) + \omega(a) - \frac{2}{b - a}\int_a^b \omega(s)\,ds$.
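As a quick illustration of Lemma 3, the following MATLAB snippet checks the inequality numerically for an arbitrary smooth test function; the function $\omega$, the interval $[a, b]$, and the matrix $M$ below are illustrative choices, not data from the paper.

% Numerical sanity check of the Wirtinger-based integral inequality (Lemma 3).
% The test function w, the interval [a,b], and M are arbitrary choices.
a = 0; b = 2;
M = [2 0.5; 0.5 1];                              % any positive definite matrix
s  = linspace(a, b, 1e4);
w  = [sin(s); s.^2];                             % smooth test function w(s)
wd = [cos(s); 2*s];                              % its derivative
lhs = trapz(s, sum(wd .* (M*wd), 1));            % int_a^b wdot' M wdot ds
d   = w(:,end) - w(:,1);                         % w(b) - w(a)
Om  = w(:,end) + w(:,1) - (2/(b-a))*trapz(s, w, 2);
rhs = (d'*M*d + 3*(Om'*M*Om)) / (b - a);
fprintf('LHS = %.4f >= RHS = %.4f\n', lhs, rhs)  % the inequality should hold

Because the extra term $\frac{3}{b-a}\,\Omega^T M \Omega$ is nonnegative, this bound is never weaker than Jensen's inequality, which is why it reduces Jensen's gap.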

Lemma 4 (see [22]). For any vectors $x_1$, $x_2$, constant matrices $M_1$, $M_2$, $S$, and real scalars $\alpha \geq 0$, $\beta \geq 0$ satisfying $\alpha + \beta = 1$ and $\left[\begin{smallmatrix} M_1 & S \\ \star & M_2 \end{smallmatrix}\right] \geq 0$, the following inequality holds:
$$-\frac{1}{\alpha}\,x_1^T M_1 x_1 - \frac{1}{\beta}\,x_2^T M_2 x_2 \leq -\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}^T \begin{bmatrix} M_1 & S \\ \star & M_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}. \tag{12}$$
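The following short MATLAB check illustrates Lemma 4; all matrices and vectors are illustrative and chosen so that the hypothesis of the lemma is satisfied.

% Numerical check of the reciprocally convex combination lemma (Lemma 4).
n  = 3;
M1 = diag([2 3 4]);                            % M1 > 0 (illustrative)
M2 = diag([3 2 5]);                            % M2 > 0 (illustrative)
S  = 0.2*ones(n);                              % small coupling block
assert(min(eig([M1 S; S' M2])) >= 0)           % hypothesis: [M1 S; * M2] >= 0
x1 = randn(n,1); x2 = randn(n,1);
alpha = 0.3; beta = 1 - alpha;                 % alpha, beta >= 0, sum to 1
lhs = x1'*M1*x1/alpha + x2'*M2*x2/beta;
rhs = [x1; x2]' * [M1 S; S' M2] * [x1; x2];
fprintf('LHS = %.4f >= RHS = %.4f\n', lhs, rhs)

In the proofs, this lemma bounds the reciprocally convex terms $1/\alpha$ and $1/\beta$ jointly instead of separately, which is the source of its conservatism reduction.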

Lemma 5 (see [24]). Let $\zeta \in \mathbb{R}^n$, $\Phi = \Phi^T \in \mathbb{R}^{n \times n}$, and $\Upsilon \in \mathbb{R}^{m \times n}$ such that $\operatorname{rank}(\Upsilon) < n$. Then, the following statements are equivalent: (i) $\zeta^T \Phi \zeta < 0$ for all $\Upsilon \zeta = 0$, $\zeta \neq 0$; (ii) $(\Upsilon^{\perp})^T \Phi\, \Upsilon^{\perp} < 0$, where $\Upsilon^{\perp}$ is a right orthogonal complement of $\Upsilon$; (iii) there exists a matrix $X \in \mathbb{R}^{n \times m}$ such that $\Phi + X \Upsilon + \Upsilon^T X^T < 0$.
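In practice, statement (iii) of Lemma 5 (Finsler's lemma) is the form used to obtain LMIs, since the free matrix enters linearly. The following MATLAB sketch illustrates the agreement of (ii) and (iii) on small random data; it assumes YALMIP and SeDuMi, the same tools used in Section 4, and all matrices are illustrative.

% Illustration of Lemma 5 (Finsler's lemma): statement (ii) versus (iii).
rng(2); n = 4; m = 2;
U     = randn(m, n);                        % rank(U) = m < n
Uperp = null(U);                            % right orthogonal complement of U
Phi   = randn(n); Phi = (Phi + Phi')/2 - 2*eye(n);   % symmetric test matrix
ii_holds = max(eig(Uperp' * Phi * Uperp)) < 0;       % statement (ii)
X   = sdpvar(n, m, 'full');                          % free matrix of (iii)
sol = optimize([Phi + X*U + U'*X' <= -1e-8*eye(n)], [], ...
               sdpsettings('solver', 'sedumi', 'verbose', 0));
iii_holds = (sol.problem == 0);                      % feasible <=> (iii)
fprintf('(ii) holds: %d, (iii) holds: %d\n', ii_holds, iii_holds)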

3. Main Results

In this section, new passivity criteria for the system (9) will be proposed in Theorems 6 and 7.

For the sake of simplicity of matrix representation, $e_i$ $(i = 1, 2, \ldots)$ are defined as block entry matrices; for example, $e_2 = [0, I, 0, \ldots, 0]^T$, so that $e_2^T \zeta(t)$ picks out the second block component of the augmented vector $\zeta(t)$. The several matrices used in the statement of Theorem 6 are then defined in terms of these block entry matrices.

Then, the following theorem is given as the first main result.

Theorem 6. For given positive scalars $h_M$ and $h_D$ and diagonal matrices $K_m = \operatorname{diag}\{k_1^-, \ldots, k_n^-\}$ and $K_p = \operatorname{diag}\{k_1^+, \ldots, k_n^+\}$, the system (9) is passive for $0 \leq h(t) \leq h_M$ and $\dot{h}(t) \leq h_D$ if there exist positive scalars $\gamma$ and $\epsilon$, positive diagonal matrices, positive definite matrices, symmetric matrices, and free-weighting matrices satisfying the LMIs (14)–(16), where the notation $0$ in (14)–(16) means a zero matrix with an appropriate dimension.

Proof. Let us consider the Lyapunov-Krasovskii functional candidate $V = \sum_{i=1}^{6} V_i$ in (17), where the terms $V_i$ are given in (18). The time derivatives of $V_1$, $V_2$, and $V_3$ can be calculated directly as (19)–(21). By the use of Lemma 3, $\dot{V}_4$ is bounded as (22). Furthermore, if $0 < h(t) < h_M$, then applying Lemma 4 to (22) leads to (23), where $S$ is any matrix.
By the use of Lemma 3 and Jensen's inequality [25], if $0 < h(t) < h_M$, then $\dot{V}_5$ can be bounded as (24), where the matrix introduced there is any matrix.
An upper bound of $\dot{V}_6$ can be obtained with Jensen's inequality [25] as (26). Before estimating $\dot{V}$, inspired by the work of [23], the zero equalities (27) with any symmetric matrices $T_1$ and $T_2$ are considered as a tool for reducing the conservatism of the criterion. Adding (27) to $\dot{V}$, the bound (28) can be obtained. Here, the bound of $\dot{V}$ presented in (28) is valid when the LMIs (15) hold.
By utilizing the authors’ work [26], from (7), choosing $u$ and $v$ as $x(t)$ and $x(t - h(t))$ leads to (29), where $H_1$ and $H_2$ are positive diagonal matrices.
Also, from (8), the inequality (30) holds, where $H_3$, $H_4$, and $H_5$ are positive diagonal matrices.
Lastly, using the relational expression between $p(t)$ and $q(t)$ in the system (9), there exists a scalar $\epsilon > 0$ satisfying the following inequality:
$$\epsilon\left[q^T(t)\,q(t) - p^T(t)\,p(t)\right] \geq 0. \tag{31}$$
From (19)–(31) and by applying the S-procedure [27], an upper bound of $\dot{V}(t) - 2y^T(t)u(t) - \gamma u^T(t)u(t)$ can be obtained as (32). By applying (i) and (iii) of Lemma 5, (32) together with the constraint induced by the system (9) is equivalent to (33) for any free matrix with appropriate dimension.
Lastly, by utilizing (ii) and (iii) of Lemma 5, one can confirm that the inequality (33) is equivalent to (34). Therefore, if the LMIs (14), (15), and (16) hold, then (34) holds, which means
$$\dot{V}(t) - 2y^T(t)\,u(t) - \gamma\,u^T(t)\,u(t) \leq 0. \tag{35}$$
By integrating (35) with respect to $t$ over the time period from $0$ to $t_p$, we have
$$V(t_p) - V(0) - 2\int_0^{t_p} y^T(s)\,u(s)\,ds - \gamma \int_0^{t_p} u^T(s)\,u(s)\,ds \leq 0$$
for $t_p \geq 0$. Since $V(t_p) \geq 0$ and $V(0) = 0$, the inequality (10) in Definition 2 holds. This implies that the neural networks (1) are passive in the sense of Definition 2. This completes our proof.

In the second place, an improved passivity criterion for the system (9) will be derived in Theorem 7 by utilizing a modified $V_1$. For simplicity, the notations of several matrices are redefined accordingly, and the other notations of Theorem 6 will be reused in Theorem 7.

Theorem 7. For given positive scalars $h_M$ and $h_D$ and diagonal matrices $K_m = \operatorname{diag}\{k_1^-, \ldots, k_n^-\}$ and $K_p = \operatorname{diag}\{k_1^+, \ldots, k_n^+\}$, the system (9) is passive for $0 \leq h(t) \leq h_M$ and $\dot{h}(t) \leq h_D$ if there exist positive scalars $\gamma$ and $\epsilon$, positive diagonal matrices, positive definite matrices, symmetric matrices, and free matrices satisfying the LMI (16) together with the counterparts of the LMIs (14) and (15) built from the redefined matrices, where the remaining notation follows Theorem 6.

Proof. By choosing a modified $V_1$, a new Lyapunov-Krasovskii functional is constructed. Its new upper bound can be calculated as (41), where the accompanying integral inequality is used in (41). Since the estimation of the other terms is very similar to the proof of Theorem 6, it is omitted here.

Remark 8. In Theorem 6, Lemma 3 (the Wirtinger-based integral inequality) was applied only to the integral term obtained by calculating the time derivative of $V_4$. The other integral terms, such as those arising from $\dot{V}_5$ and $\dot{V}_6$, were estimated by using Jensen's inequality. In the authors' future work, further improved stability or passivity criteria for neural networks with time-varying delays will be proposed by utilizing Lemma 3 in estimating these other integral terms.

Remark 9. Unlike Theorem 6, by utilizing an augmented quadratic form as one of the terms of the Lyapunov-Krasovskii functional, some new cross terms among the state variables are included, which may reduce the conservatism of the passivity criterion of Theorem 6. In the next section, the effectiveness of the proposed Lyapunov-Krasovskii functional will be shown by comparing the maximum delay bounds which guarantee the passivity of the numerical examples.

Remark 10. When the information on the delay-derivative bound $h_D$ is unknown, Theorems 6 and 7 can still provide passivity criteria for the system (9) by setting the matrix variable of the $h_D$-dependent term of the Lyapunov-Krasovskii functional to zero.

4. Numerical Examples

In this section, two numerical examples are introduced to show the improvements achieved by the proposed theorems. In the examples, MATLAB with YALMIP and SeDuMi 1.3 is used to solve the LMI problems.
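Since the LMIs (14)–(16) are too lengthy to reproduce here, the snippet below only sketches the generic YALMIP/SeDuMi feasibility workflow that such tests follow; the decision variables, system data, and the single constraint are placeholders, not the actual conditions of Theorems 6 and 7.

% Generic LMI feasibility workflow with YALMIP and SeDuMi (placeholder LMIs).
n = 2;                                   % number of neurons (example size)
A = diag([2.2, 1.8]);                    % placeholder positive diagonal matrix
P    = sdpvar(n, n);                     % symmetric decision variable
gam  = sdpvar(1);                        % passivity performance scalar
Cons = [P >= 1e-6*eye(n), gam >= 0, ...
        A'*P + P*A >= 1e-6*eye(n)];      % placeholder standing in for (14)-(16)
opts = sdpsettings('solver', 'sedumi', 'verbose', 0);
sol  = optimize(Cons, [], opts);         % feasibility problem (no objective)
if sol.problem == 0
    disp('LMIs feasible: passivity is guaranteed for the tested bounds.');
else
    disp('LMIs infeasible for the tested bounds.');
end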

Example 11. Consider the neural networks (1) with the following parameters:

The maximum delay bounds guaranteeing the passivity of the above neural networks with different $h_D$, obtained by Theorems 6 and 7, are listed in Table 1. One can see that Theorem 6 gives larger maximum delay bounds for this example than those of [13–15, 19]. This indicates that the presented sufficient conditions reduce the conservatism caused by time delay and parameter uncertainties. Furthermore, Theorem 7 provides larger delay bounds than those of Theorem 6. This means that the newly constructed Lyapunov-Krasovskii functional plays an important role in reducing the conservatism of Theorem 6.
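The maximum allowable delay bounds reported in Table 1 are typically obtained by a bisection search over $h_M$ for each fixed $h_D$. A minimal sketch follows, assuming a hypothetical helper theorem6_feasible(hM, hD) that assembles the LMIs (14)–(16) for the given bounds and returns true when they are feasible.

% Bisection search for the maximum allowable delay bound h_M (sketch).
% theorem6_feasible is a hypothetical helper implementing the LMIs (14)-(16).
hD = 0.5;                          % fixed delay-derivative bound
lo = 0; hi = 10;                   % search bracket; hi assumed infeasible
while hi - lo > 1e-4
    mid = (lo + hi)/2;
    if theorem6_feasible(mid, hD)
        lo = mid;                  % feasible: the true bound is larger
    else
        hi = mid;                  % infeasible: shrink from above
    end
end
fprintf('Maximum allowable delay bound: h_M = %.4f\n', lo)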

Example 12. Consider the neural networks (1) with the following parameters:

In Table 2, the maximum allowable delay bounds for guaranteeing passivity are compared with those of existing works. From Table 2, it can be seen that the maximum delay bounds for guaranteeing the passivity of the above neural networks are significantly larger than those of [16–18].

5. Conclusions

In this paper, two passivity criteria for neural networks with time-varying delays and parameter uncertainties have been proposed by the use of the Lyapunov method and the LMI framework. In Theorem 6, by constructing a suitable Lyapunov-Krasovskii functional and utilizing the Wirtinger-based inequality, a sufficient condition for the passivity of the concerned networks was derived. Based on the result of Theorem 6, an improved criterion for the networks was proposed in Theorem 7 by introducing a newly augmented Lyapunov-Krasovskii functional. Via two numerical examples treated in previous works, the improvements of the proposed passivity criteria have been successfully verified. Based on the proposed methods, future works will focus on solving various problems such as state estimation [28, 29], passivity analysis for neural networks [30], stabilization of BAM neural networks [31], synchronization of complex networks [32], and stability analysis and filtering for dynamic systems with time delays [33–37]. Moreover, in [38], triple-integral forms of the Lyapunov-Krasovskii functional were proposed to reduce the conservatism of sufficient stability conditions, and their effectiveness was shown. Thus, by grafting such an approach onto the ideas proposed in this paper, further improved results will be investigated in the near future.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2008-0062611) and by a grant of the Korea Healthcare Technology R & D Project, Ministry of Health & Welfare, Republic of Korea (A100054).