#### Abstract

This paper investigates the problem of stability analysis for neural networks with time-varying delays. By utilizing the Wirtinger-based integral inequality and constructing a suitable augmented Lyapunov-Krasovskii functional, two less conservative delay-dependent criteria guaranteeing the asymptotic stability of the concerned networks are derived in terms of linear matrix inequalities (LMIs). Three numerical examples are included to demonstrate the superiority of the proposed methods by comparing the maximum delay bounds with recent results published in the literature.

#### 1. Introduction

Since neural networks are generally recognized as simplified models of neural processing in the human brain and offer good performance and a strong capability for information processing, they have been successfully applied in many fields such as image and signal processing, pattern recognition, fixed-point computations, optimization, feedback control, medical diagnosis, financial applications, and other scientific areas [1–3]. Due to the finite switching speed of amplifiers and the inherent communication time between neurons, it is well known that time delays exist and may cause oscillation or deteriorate system performance. Therefore, during the last few decades, many researchers [4–9] have devoted considerable effort to the stability analysis of neural networks with time delays, because checking whether the equilibrium point of the concerned networks is stable is a prerequisite: the application of these networks depends heavily on the dynamic behavior of their equilibrium points. Methodologically, delay-derivative-dependent [10], weighting-delay-based [11], delay-slope-dependent [12], and delay-partitioning [13] analyses have been carried out by exploiting information on the time delay. Also, Faydasicok and Arik [14, 15] addressed the stability of neural networks with multiple time delays. Asymptotic stability was dealt with in the aforementioned works, while exponential stability was studied in [16–19]. In addition, a stochastic perturbation condition was considered in [20]. Furthermore, delay-dependent stability analysis has received growing attention, since the time delays encountered in neural networks are usually not very large [21].

One of the hot issues in delay-dependent stability analysis for dynamic systems such as linear systems, neutral systems, and neural networks is to reduce the conservatism of stability criteria, that is, to enlarge their feasible region. To assess such enlargement, the maximum delay bounds guaranteeing the asymptotic stability of the concerned systems are compared with those obtained by other methods. The Jensen inequality [22], Park's inequality [23], model transformation [24, 25], free-weighting-matrix techniques [26, 27], the convex combination technique [28], and reciprocally convex optimization [29] are well recognized and have been utilized in many fields as tools for reducing the conservatism of stability and stabilization criteria. Very recently, Seuret and Gouaisbaut [30] proposed the Wirtinger-based integral inequality, and its advantages were shown via comparisons of maximum delay bounds for various systems, such as systems with a constant and known delay, systems with a time-varying delay, and sampled-data systems.

Another remarkable approach to reducing the conservatism of stability criteria is the delay-partitioning idea first proposed by Gu [22]. The advantage of this method is that larger delay bounds are obtained as the delay-partitioning number increases. The idea of [22] has been utilized in the stability analysis of neural networks with time delays by many researchers [4, 6, 7, 9–11, 13, 21]. In [9], combined with recent techniques such as free-weighting matrices and reciprocally convex optimization, a delay-partitioning approach was presented that considers the time-varying delay on each subinterval. Zhang et al. [11] proposed a new delay-partitioning stability analysis by introducing a tuning parameter that adjusts the delay interval. Zhang et al. [21] proposed new Lyapunov-Krasovskii functionals and a delay-partitioning method to investigate delay-dependent stability for neural networks with two additive time-varying delays, together with some discussion of recent works. However, as the delay-partitioning number increases, the computational burden and time consumption grow while the rate of improvement of the maximum delay bounds diminishes.

In addition to the techniques mentioned above, the choice of the Lyapunov-Krasovskii functional and of augmented state vectors also plays an important role in enlarging the feasible region of stability criteria. Since the introduction of the triple-integral Lyapunov-Krasovskii functional [32, 33], some new results on the stability analysis of neural networks with time-varying delays have been presented in [4, 19]. Furthermore, in the authors' previous works [5, 8], with the addition of zero equalities [34], it was shown that larger delay bounds can be obtained by constructing a newly augmented Lyapunov-Krasovskii functional and introducing some new treatments of the activation function condition. However, the application of the Wirtinger-based integral inequality to the terms obtained by differentiating the augmented Lyapunov-Krasovskii functional has not been fully investigated yet, so there is room for further reduction of conservatism.

With the above motivation, this paper proposes two improved delay-dependent stability criteria for neural networks with time-varying delays. First, in Theorem 6, by constructing a suitable augmented Lyapunov-Krasovskii functional, an improved condition under which the considered neural networks are asymptotically stable is derived in terms of linear matrix inequalities (LMIs), obtained by applying the Wirtinger-based integral inequality to the augmented quadratic integral term and utilizing the zero equalities of [34]. Second, based on the results of Theorem 6 and motivated by the works [35–37], a further improved stability criterion is proposed in Theorem 9 by ensuring the positiveness of the Lyapunov-Krasovskii functional as a whole and again utilizing the Wirtinger-based integral inequality. Through three numerical examples used in many previous works to check the conservatism of stability criteria, it will be shown that the proposed criteria provide larger delay bounds than recent existing results. By extension, the developed methods can be applied to networked control [38, 39], filtering problems [40, 41], uncertain systems of neutral type [42], and so on.

*Notation*. is the -dimensional Euclidean space, and is the set of all real matrices. For symmetric matrices and , (resp., ) means that the matrix is positive definite (resp., nonnegative). denotes a basis for the null space of . , , and denote the identity matrix and zero matrices of appropriate dimensions, respectively. refers to the Euclidean vector norm or the induced matrix norm. denotes the block diagonal matrix. For a square matrix , denotes the sum of and its transpose; that is, . represents the elements below the main diagonal of a symmetric matrix. means that the elements of the matrix include the scalar value .

#### 2. Problem Statement and Preliminaries

Consider the following neural networks with discrete time-varying delays: where is the neuron state vector, denotes the number of neurons in a neural network, denotes the neuron activation function, , is a positive diagonal matrix, and are the interconnection matrices representing the weight coefficients of the neurons, and represents a constant input vector.

The delay, , is a time-varying continuous function satisfying where is a known positive scalar and is an arbitrary constant.
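Since the original symbols were lost in extraction, the following restates the standard delay assumption used in this line of work with hypothetical names: $h(t)$ for the delay, $h_M$ for its upper bound, and $h_D$ for the derivative bound:

```latex
0 \le h(t) \le h_M, \qquad \dot{h}(t) \le h_D .
```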

The neuron activation functions satisfy the following assumption.

*Assumption 1. *The neuron activation functions , , with are continuous and bounded, and they satisfy
where and are constants.
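In the form commonly used in such works (the symbols $f_i$, $k_i^-$, and $k_i^+$ are hypothetical stand-ins for the stripped notation), the sector condition of Assumption 1 reads:

```latex
k_i^- \;\le\; \frac{f_i(u) - f_i(v)}{u - v} \;\le\; k_i^+,
\qquad u \ne v, \quad i = 1, \dots, n,
```

where $k_i^-$ and $k_i^+$ are the constants mentioned above.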

*Remark 2. *In Assumption 1, and are allowed to be positive, negative, or zero. As mentioned in [12], Assumption 1 describes the class of globally Lipschitz continuous and monotone nondecreasing activation functions when and , and the class of globally Lipschitz continuous and monotone increasing activation functions when .

For simplicity, in stability analysis of the neural networks (1), the equilibrium point whose uniqueness has been reported in [17] is shifted to the origin by utilizing the transformation , which leads the system (1) to the following form: where is the state vector of the transformed system, , and with .
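With hypothetical symbols ($y(t)$ for the shifted state, $y^{*}$ for the equilibrium point, and $A$, $W_0$, $W_1$ for the matrices of (1)), the transformed system typically takes the form:

```latex
\dot{y}(t) = -A\, y(t) + W_0\, g\bigl(y(t)\bigr) + W_1\, g\bigl(y(t - h(t))\bigr),
\qquad g(y) = f(y + y^{*}) - f(y^{*}), \quad g(0) = 0 .
```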

It should be noted that the activation functions satisfy the following condition [16]:

The objective of this paper is to investigate the delay-dependent stability of system (4); the main results will be presented in Section 3.

The following lemmas will be utilized in deriving the main results.

Lemma 3 (see [30]). *Consider a given matrix . Then, for all continuously differentiable functions in , the following inequality holds:
*
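For reference, the Wirtinger-based integral inequality of [30] is commonly stated as follows: for a symmetric positive definite matrix $R$ and a continuously differentiable function $\omega : [a,b] \to \mathbb{R}^n$,

```latex
\int_a^b \dot{\omega}^{T}(s)\, R\, \dot{\omega}(s)\, ds
\;\ge\; \frac{1}{b-a}\, \Omega_1^{T} R\, \Omega_1
\;+\; \frac{3}{b-a}\, \Omega_2^{T} R\, \Omega_2,
\qquad
\begin{aligned}
\Omega_1 &= \omega(b) - \omega(a),\\
\Omega_2 &= \omega(b) + \omega(a) - \frac{2}{b-a}\int_a^b \omega(s)\, ds .
\end{aligned}
```

It refines Jensen's inequality by the additional second term, which is the source of the reduced conservatism exploited in this paper.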

Lemma 4 (see [43]). *Let , , and such that . Then, the following statements are equivalent:*(1)*, , ,*(2)*, where is a right orthogonal complement of .*
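Lemma 4 is the well-known Finsler's lemma; in a commonly stated form (the symbols here are generic, since the original notation was stripped), for $\zeta \in \mathbb{R}^n$, $\Xi = \Xi^{T} \in \mathbb{R}^{n \times n}$, and $\Gamma \in \mathbb{R}^{m \times n}$, the following are equivalent:

```latex
(1)\;\; \zeta^{T} \Xi\, \zeta < 0 \;\;\text{for all } \zeta \ne 0 \text{ such that } \Gamma \zeta = 0;
\qquad
(2)\;\; \bigl(\Gamma^{\perp}\bigr)^{T} \Xi\, \Gamma^{\perp} < 0,
```

where $\Gamma^{\perp}$ is a right orthogonal complement of $\Gamma$, that is, $\Gamma \Gamma^{\perp} = 0$.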

Lemma 5 (see [44]). *For the symmetric appropriately dimensional matrices , , matrix , the following two statements are equivalent:*(1)*,*(2)*there exists a matrix of appropriate dimension such that
*

#### 3. Main Results

In this section, two new stability criteria for system (4) will be proposed. For the sake of simplicity of matrix and vector representation, which will be used in Theorems 6 and 9 are defined as block entry matrices (e.g., ). The other notations for some vectors and matrices are defined as follows:

Now, the following theorem is given as the first main result.

Theorem 6. *For given scalars , , and diagonal matrices and , system (4) is asymptotically stable for and , if there exist positive diagonal matrices , , and , positive definite matrices , , , , and , any matrices , , and , and any symmetric matrices satisfying the following LMIs:
**
where , , , and other notations are defined in (9).*
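Criteria of this type are checked numerically as LMI feasibility problems (e.g., with semidefinite programming toolboxes). As a minimal sketch — not the LMIs of Theorem 6 themselves, whose block entries depend on the definitions in (9) — the following snippet, with hypothetical matrix values, certifies asymptotic stability of the delay-free linear part of (4) by solving a Lyapunov equation and checking positive definiteness:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical 2-neuron example: the linear part of (4) is dy/dt = -A y,
# with A a positive diagonal matrix as assumed for system (1).
A = np.diag([2.0, 1.5])
Q = np.eye(2)  # any positive definite right-hand side

# Solve (-A)^T P + P (-A) = -Q.  A positive definite solution P certifies
# asymptotic stability of the delay-free linear part.
# (solve_continuous_lyapunov(a, q) solves a @ x + x @ a.T = q,
#  so we pass a = (-A).T and q = -Q.)
P = solve_continuous_lyapunov(-A.T, -Q)

# For diagonal A, P works out to diag(1/(2*2), 1/(2*1.5)).
eigs = np.linalg.eigvalsh(P)
print("P =", P)
print("P is positive definite:", bool(np.all(eigs > 0)))
```

The full criteria of Theorem 6 would instead declare the decision matrices as semidefinite-program variables and impose the LMIs (10)–(12) as constraints.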

*Proof. *Let us consider the following appropriate Lyapunov-Krasovskii functional candidate:
where
It should be noted that

From (15), can be represented as
The result of the time-derivative of is as follows:
By calculating , it follows that
Inspired by the work of [34], the following two zero equalities with any symmetric matrices are considered:
Summing the two zero equalities presented at (19) leads to
The result of the is given by
The integral term with the addition of the two integral terms at (20) can be described as
Here, by applying Lemma 3, we have
where
With a process similar to that presented in (23), it can be obtained that
where
With (23)–(26) and utilizing reciprocally convex optimization [29], if the condition , which is defined in (9), holds, then an upper bound of the term (22) can be obtained as
where and are defined in (9).
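The reciprocally convex optimization lemma of [29] invoked here is commonly stated as follows (the symbols $R$ and $S$ are generic, not the specific blocks of (9)): for $R = R^{T} > 0$, vectors $x, y$, and any $\alpha \in (0,1)$,

```latex
\begin{bmatrix} R & S \\ S^{T} & R \end{bmatrix} \ge 0
\;\;\Longrightarrow\;\;
\frac{1}{\alpha}\, x^{T} R\, x + \frac{1}{1-\alpha}\, y^{T} R\, y
\;\ge\;
\begin{bmatrix} x \\ y \end{bmatrix}^{T}
\begin{bmatrix} R & S \\ S^{T} & R \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}.
```

This removes the delay-dependent weights $1/\alpha$ and $1/(1-\alpha)$ at the cost of a single slack matrix $S$, yielding a delay-independent LMI condition.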

Thus, from (21) to (27), the following inequality holds:
By utilizing Jensen’s inequality [22] and reciprocally convex optimization [29], if the inequality (12) holds, then the estimation of can be
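The Jensen inequality [22] used in this estimation is commonly stated as follows: for $R = R^{T} > 0$ and a continuously differentiable $x(\cdot)$,

```latex
\int_a^b \dot{x}^{T}(s)\, R\, \dot{x}(s)\, ds
\;\ge\; \frac{1}{b-a}\,\bigl(x(b) - x(a)\bigr)^{T} R\, \bigl(x(b) - x(a)\bigr).
```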
From (6), for any positive diagonal matrices , the following inequality holds:
where is defined in (9).

Inspired by the authors’ work of [5], from (5), the following conditions hold:
Therefore, for any positive diagonal matrices , the following inequality holds:
From (13) to (32) and by application of the -procedure [45], an upper bound of with the addition of (20) can be written as
By Lemma 4, with is equivalent to
Then, by Lemma 5, condition (34) is equivalent to the following inequality with any matrix :
The above condition is affinely dependent on . Therefore, if inequalities (10) and (11) hold, then inequality (35) is satisfied, which means that system (4) is asymptotically stable for and . It should be noted that holds if inequalities (11) and (12) are feasible. This completes our proof.

*Remark 7. *As mentioned in the Introduction section, Lemma 3 was first introduced in [30] to reduce the conservatism of delay-dependent stability criteria. In [30], an upper bound of the integral form of was obtained by Lemma 3 with augmented vectors and . However, unlike the method presented in [30], the augmented integral term of was estimated in (23)–(26) with the consideration of the two integral terms obtained by zero equality (20). Thus, by utilizing the newly introduced state vectors such as and , more relaxed conditions can be expected, since more information about the past history of the states and some new cross-terms, which may play a role in reducing the conservatism of the stability condition, were considered in Theorem 6. In the authors' future works, this method will be extended to various problems such as state estimation, performance analysis, filtering, synchronization between two chaotic systems, and stability and stabilization of other dynamic systems, which are receiving much attention in the control community.

*Remark 8. *Another novelty of Theorem 6 is introduced in (13). In many works dealing with the stability of neural networks with time-varying delays, the proposed Lyapunov-Krasovskii functionals carrying information on the time-varying delay have been of the form or . However, the proposed Lyapunov-Krasovskii functional is , which differs from the previous works. Thus, the time-derivative of the proposed contains some new cross-terms such as
which were presented in (18) and have not been used in existing works.

In Theorem 6, the positiveness of is ensured term by term through the conditions , , , , , , and , each of which guarantees the positiveness of the corresponding . However, as mentioned in [35–37], by incorporating some functionals of , this term-wise positiveness requirement can be relaxed, which will be done in Theorem 9. For the sake of simplicity of matrix and vector representation in Theorem 9, which will be used are defined as block entry matrices (e.g., ). Assume that , , , , , and . Then, has a lower bound as follows:

By Lemma 3, a lower bound of the second integral term on the right-hand side of inequality (35) can be obtained as

where

Therefore, if the following inequality holds