Abstract

This paper is concerned with the problem of robust stabilization and H∞ control for a class of uncertain neural networks. For the robust stabilization problem, sufficient conditions are derived based on the quadratic convex combination property together with Lyapunov stability theory. The feedback controller we design ensures the robust stability of uncertain neural networks with mixed time delays. We further design a robust H∞ controller which guarantees the robust stability of the uncertain neural networks with a given H∞ performance level. The delay-dependent criteria are derived in terms of linear matrix inequalities (LMIs). Finally, numerical examples are provided to show the effectiveness of the obtained results.

1. Introduction

Neural networks have received a great deal of attention due to their successful applications in various engineering fields such as associative memory [1], pattern recognition [2], adaptive control, and optimization. When designing or implementing a neural network such as a Hopfield neural network or a cellular neural network, the occurrence of time delays is unavoidable in the processes of storage and transmission. Since the existence of time delays is usually one of the main sources of instability and oscillation, the stability problem of neural networks with time delays has been widely considered by many researchers (see [3–13]). Generally speaking, stability criteria of neural networks with time delays are classified into two categories: delay-independent criteria and delay-dependent criteria. Because delay-dependent criteria are usually less conservative than delay-independent ones, attention has mainly focused on delay-dependent stability criteria. Neural networks usually have a spatial extent due to the presence of many parallel pathways with a variety of axon sizes and lengths [7]. Thus, there will be a distribution of conduction velocities along these pathways and a distribution of propagation delays [14], and both discrete and distributed delays should be considered in the neural network model [6, 7, 15–18].

However, in practical applications of neural networks, uncertainties are inevitable because of modeling errors and external disturbances. Parameter uncertainties may destroy stability, so taking uncertainty into account is important when studying the dynamical behaviors of neural networks (see [12, 19–21]). To facilitate the design of neural networks, it is important to consider neural networks with various activation functions, because the conditions to be imposed on the neural network are determined by the characteristics of the activation functions as well as the network parameters [22]. The generalization of activation functions provides a wider scope for neural network designs and applications [23]. Stability and stabilization results for delayed neural networks with various activation functions can be found in [22–26]. References [24, 25] investigated the stability problem of neural networks with various activation functions. Phat and Trinh [23] dealt with the exponential stabilization problem for neural networks with various activation functions via the Lyapunov-Krasovskii functional. Nevertheless, the results reported therein do not consider parameter uncertainties and disturbances. Sakthivel et al. [26] studied the problem of robust stabilization and H∞ control for a class of uncertain neural networks with various activation functions and mixed time delays by employing the Lyapunov functional method and the matrix inequality technique. In recent years, the H∞ control of time-delay systems has become a subject of both practical and theoretical importance. The performance of a neural control system is influenced by external disturbances, so it is important to use the H∞ robust technique to attenuate the effect of external disturbances. The H∞ control problem for time-delay systems has been addressed in [6, 26–34]. However, to the best of our knowledge, the robust stabilization and H∞ control of uncertain systems with time-varying delays have not yet been fully investigated.

In this paper, we consider the problem of robust stabilization and H∞ control for a class of uncertain neural networks by employing a new augmented Lyapunov-Krasovskii functional and estimating its derivative from a novel viewpoint. Our aim is to obtain a control law that guarantees the robust stability of the closed-loop system with parameter uncertainties and a given disturbance attenuation level γ. The results employ the quadratic convex combination technique, which is different from the linear convex combination and reciprocally convex combination techniques extensively used in the literature. The criteria are derived within the framework of LMIs, which can be easily solved by the MATLAB LMI control toolbox. Numerical examples are provided to illustrate the effectiveness of the results.

Notations. The notations used throughout the paper are fairly standard. R^n denotes the n-dimensional Euclidean space; R^{m×n} is the set of all m × n real matrices; the notation P > 0 (P < 0) means that P is a symmetric positive (negative) definite matrix; A^{-1} and A^T denote the inverse and the transpose of a matrix A, respectively; I represents the identity matrix with proper dimensions; a symmetric term in a symmetric matrix is denoted by *; diag{...} stands for a block-diagonal matrix. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations.

2. Problem Formulation

We consider the following uncertain neural networks with discrete and distributed time-varying delays: where the state vector, the control input vector, the disturbance input vector, and the output vector of the neural networks are as indicated; the neuron activation functions are denoted accordingly; the self-feedback matrix is a positive diagonal matrix; the output matrix, the connection weights, the delayed connection weights, the distributed connection weights, and the disturbance input weights are constant matrices subject to the uncertainties described below. The discrete and distributed time-varying delays satisfy the condition where the delay bounds and the delay-derivative bound are constants. The initial function is continuous and defined on the corresponding delay interval.
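The displayed system equations did not survive extraction. Under assumed notation (x(t), u(t), ω(t), and z(t) for the state, control input, disturbance input, and output; A for the positive diagonal self-feedback matrix; W0, W1, W2, B, C, D for the nominal weight matrices; τ(t) and d(t) for the discrete and distributed delays), a representative form consistent with the description above, though not necessarily the authors' exact display, is

```latex
\begin{aligned}
\dot{x}(t) &= -\big(A+\Delta A(t)\big)x(t) + \big(W_0+\Delta W_0(t)\big)f\big(x(t)\big)
             + \big(W_1+\Delta W_1(t)\big)g\big(x(t-\tau(t))\big) \\
           &\quad + \big(W_2+\Delta W_2(t)\big)\int_{t-d(t)}^{t} h\big(x(s)\big)\,\mathrm{d}s
             + B\,u(t) + C\,\omega(t), \\
z(t) &= D\,x(t), \qquad 0 \le \tau(t) \le \tau,\quad \dot{\tau}(t) \le \mu,\quad 0 \le d(t) \le d .
\end{aligned}
```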

In order to conduct the analysis, the following assumptions are necessary.

Assumption 1. The parametric uncertainties of the system matrices are time-varying matrices and satisfy a norm-bounded structure, where the factor matrices are given constant matrices with appropriate dimensions and the unknown time-varying matrix satisfies a norm-bound condition for any t; a representative form is given below.
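The explicit uncertainty structure was lost in extraction; the norm-bounded form commonly used for such assumptions (the matrix names H, E_a, E_0, E_1, E_2, E_b and the uncertain factor F(t) are assumed here, not taken from the original) is

```latex
\big[\Delta A(t)\;\; \Delta W_0(t)\;\; \Delta W_1(t)\;\; \Delta W_2(t)\;\; \Delta B(t)\big]
   \;=\; H\,F(t)\,\big[E_a\;\; E_0\;\; E_1\;\; E_2\;\; E_b\big],
\qquad F^{T}(t)\,F(t) \le I \quad \text{for all } t \ge 0 .
```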

Assumption 2. The neuron activation functions are bounded and satisfy sector-type conditions in which the bounds are known constants, and we denote the corresponding bound matrices accordingly; a representative form of the sector conditions is given below.
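The sector bounds themselves were dropped in extraction; the type of condition typically meant here (with assumed bound constants for the three activation functions) is, for all a ≠ b and i = 1, …, n,

```latex
k_i^{-} \le \frac{f_i(a)-f_i(b)}{a-b} \le k_i^{+}, \qquad
l_i^{-} \le \frac{g_i(a)-g_i(b)}{a-b} \le l_i^{+}, \qquad
m_i^{-} \le \frac{h_i(a)-h_i(b)}{a-b} \le m_i^{+} .
```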

Definition 3 (see [26]). Given a prescribed level of disturbance attenuation γ > 0, the uncertain neural networks are said to be robustly asymptotically stable with disturbance attenuation γ if they are robustly stable and the response under zero initial conditions satisfies the H∞ norm bound below for every nonzero disturbance input.
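The attenuation condition referenced in Definition 3 is the standard H∞ norm bound; in the assumed notation it reads

```latex
\int_{0}^{\infty} z^{T}(t)\,z(t)\,\mathrm{d}t \;\le\; \gamma^{2}\int_{0}^{\infty} \omega^{T}(t)\,\omega(t)\,\mathrm{d}t ,
\qquad \text{i.e. } \ \|z\|_{2} \le \gamma\,\|\omega\|_{2}
\quad \text{for all nonzero } \omega \in \mathcal{L}_{2}[0,\infty).
```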

Lemma 4 (see [35]). For any constant positive definite matrix, any scalars, and any vector function such that the following integrations are well defined, the integral inequality below holds.
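The inequality of Lemma 4 did not survive extraction; the standard single-integral Jensen inequality it most likely denotes (with assumed symbols M > 0, scalars a < b, and vector function ω(·)) is

```latex
-(b-a)\int_{a}^{b} \omega^{T}(s)\,M\,\omega(s)\,\mathrm{d}s \;\le\;
-\left(\int_{a}^{b}\omega(s)\,\mathrm{d}s\right)^{T} M \left(\int_{a}^{b}\omega(s)\,\mathrm{d}s\right).
```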

Lemma 5 (see [36]). Let an appropriately dimensioned vector and the matrices concerned be given. Then the following facts hold for any scalar function, where the matrices and the vector independent of the integral variable are arbitrary ones of appropriate dimensions.

Lemma 6 ((Schur complement) [26]). Given constant symmetric matrices with the indicated definiteness, the nonlinear matrix inequality concerned holds if and only if the corresponding block matrix inequality holds.
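For completeness, the standard Schur complement equivalence (written with generic block names, which need not match the authors' symbols) is

```latex
\begin{bmatrix} S_{11} & S_{12} \\ S_{12}^{T} & S_{22} \end{bmatrix} < 0
\quad\Longleftrightarrow\quad
S_{22} < 0 \ \text{ and } \ S_{11} - S_{12}\,S_{22}^{-1}\,S_{12}^{T} < 0 ,
\qquad S_{11}=S_{11}^{T},\; S_{22}=S_{22}^{T}.
```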

Lemma 7 (see [36]). For symmetric matrices, a positive semidefinite matrix, and a nonzero vector, a necessary and sufficient condition for the associated quadratic function to be negative over the whole delay interval is that the corresponding inequalities hold simultaneously at the two endpoints of the interval.
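The content of Lemma 7 is the quadratic convex combination property; in a form consistent with the cited literature (symbol names assumed), it states that for a positive semidefinite matrix Ξ₂ the scalar quadratic

```latex
\varphi(s) \;=\; s^{2}\,\chi^{T}\Xi_{2}\,\chi \;+\; s\,\chi^{T}\Xi_{1}\,\chi \;+\; \chi^{T}\Xi_{0}\,\chi
```

satisfies φ(s) < 0 for all s in [0, h] if and only if φ(0) < 0 and φ(h) < 0, since a convex quadratic attains its maximum over an interval at one of the endpoints.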

3. Robust Stabilization

We use the following control law to tackle the robust stabilization problem in this paper: where the gain matrix of the controller is to be designed.

When the disturbance input is zero, the neural networks (1) can be rewritten in the form
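The display itself was lost in extraction; under the representative model sketched after (1) and an assumed memoryless state-feedback law u(t) = Kx(t), a form consistent with the description (not necessarily the authors' exact display) is

```latex
u(t) = K\,x(t)
\quad\Longrightarrow\quad
\dot{x}(t) = -\big(A+\Delta A(t)-BK\big)x(t)
 + \big(W_0+\Delta W_0(t)\big)f\big(x(t)\big)
 + \big(W_1+\Delta W_1(t)\big)g\big(x(t-\tau(t))\big)
 + \big(W_2+\Delta W_2(t)\big)\!\int_{t-d(t)}^{t}\! h\big(x(s)\big)\,\mathrm{d}s .
```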

Theorem 8. Under Assumptions 1 and 2, for given scalars, the system (12) is robustly asymptotically stabilizable via the control law if there exist positive diagonal matrices, positive definite matrices, a positive scalar matrix, and matrices with appropriate dimensions such that the following LMIs hold, where the associated block matrices are defined accordingly.

Proof. Construct a new class of Lyapunov-Krasovskii functional as follows, where the component functionals are defined accordingly. We then define an augmented vector as indicated.
Remark 1. Our paper uses the idea of the second-order convex combination, and the property of the quadratic convex function is given in Lemma 7.
Remark 2. We fully consider the various activation functions in constructing the Lyapunov-Krasovskii functional, so the augmented vector uses more information about the activation functions than that in [26]. The Lyapunov functional in our paper is therefore more general than that in [26], and the resulting criteria may be more widely applicable.
Remark 3. In our paper, the augmented vector utilizes more information on the state variables than that in [26]. This helps to reduce the conservatism of the stabilization condition.
The time derivative of the functional along the trajectory of the system is given by where and, according to Lemma 4, we can obtain It is easy to obtain the following identities: Therefore, we can split the integral into two parts as follows: where It is easy to show the following relation: Applying Lemma 5, we get According to Assumption 2, we have which is equivalent to where the corresponding unit column vectors are used.
Let the corresponding quantities be defined accordingly; then and it is equivalent to Similarly, we obtain By using (31)-(32), we have The following equality holds: which is equivalent to where the matrix involved is an arbitrary one of appropriate dimensions.
From Assumption 1, the following inequality holds: Furthermore, there exists a positive scalar matrix such that the following inequality holds: Combining (16)–(27) and (33), (35), and (37), we obtain where the block matrices are defined in the statement of the theorem. Note that the resulting expression is a quadratic function of the delay, and applying Lemma 6 to its second-order coefficient yields a condition equivalent to the corresponding LMI.
Finally, employing Lemma 7, we get the required inequality. Thus, we can obtain from (38) and (42) that the derivative of the Lyapunov-Krasovskii functional is negative, which means that the closed-loop system is asymptotically stable. This completes the proof.

4. Controller Design

In this section, we study the H∞ control problem for the considered neural networks with a given disturbance attenuation level γ. The neural networks (1) can be rewritten in the form
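In the assumed notation, the H∞ performance requirement corresponds to the negativity, under zero initial conditions, of the index

```latex
J \;=\; \int_{0}^{\infty}\Big[\, z^{T}(t)\,z(t) \;-\; \gamma^{2}\,\omega^{T}(t)\,\omega(t) \,\Big]\mathrm{d}t \;<\; 0
\qquad \text{for all nonzero } \omega \in \mathcal{L}_{2}[0,\infty),
```

which is the standard way such a bound is established and is consistent with the proof sketch of Theorem 9 below.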

Theorem 9. Under Assumptions 1 and 2, for a given disturbance attenuation level γ and given scalars, the system (44) is robustly asymptotically stabilizable with H∞ performance γ under the control law if there exist positive diagonal matrices, positive definite matrices, a positive scalar matrix, and matrices with appropriate dimensions such that the following LMIs hold, where the associated block matrices are defined accordingly.

Proof. By using the Lyapunov-Krasovskii functional constructed in Theorem 8, we can obtain the bound below, where the corresponding terms are defined analogously. If (45) holds, the inequality holds, and we can easily obtain the required estimate. Since the Lyapunov-Krasovskii functional is nonnegative and vanishes under the zero initial condition, we obtain the H∞ norm bound. Hence the considered neural networks (44) are robustly stable with the given disturbance attenuation level γ according to Definition 3. This completes the proof.

5. Numerical Examples

In this section, numerical examples are provided to illustrate the effectiveness of the developed method for uncertain neural networks with discrete and distributed time-varying delays.

Example 1. We consider the neural networks (12) when the disturbance input is zero. The parameters are as follows: and the activation functions are chosen so that the sector bounds of Assumption 2 are satisfied. We set the upper bounds of the discrete and distributed time delays and the delay-derivative bound to the chosen values. By solving the LMIs of Theorem 8 through the MATLAB LMI toolbox, we obtain the gain matrix of the stabilization controller. Figures 1 and 2 present the state responses of the considered neural networks. Figure 1 shows the time response of the state variables of the open-loop system from the given initial values, and Figure 2 shows the time response of the state variables of the closed-loop system from the same initial values. The open-loop system means the system without feedback control, and the closed-loop system means the system with the feedback control. It is clear that the state variables converge rapidly to zero under the feedback control law, whereas they cannot converge to zero without the feedback control. The simulation results reveal that the considered system with discrete and distributed time-varying delays is robustly asymptotically stable under the feedback control law.
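The paper solves its LMIs with the MATLAB LMI toolbox, and the example data were lost in extraction. As an illustrative alternative, the following minimal sketch shows how a state-feedback gain can be obtained from an LMI feasibility problem in Python with CVXPY, using hypothetical system matrices and the standard change of variables Q = P^{-1}, Y = KQ; this is not the authors' LMI from Theorem 8, only the same computational pattern.

```python
import numpy as np
import cvxpy as cp

# Hypothetical 2-state, 1-input nominal system (NOT the paper's example data).
A = np.array([[-2.0, 0.5],
              [ 0.3, -1.5]])
B = np.array([[1.0],
              [0.5]])
n, m = A.shape[0], B.shape[1]

# Standard change of variables for state-feedback design:
# with Q = P^{-1} > 0 and Y = K Q, the Lyapunov inequality
# (A + B K)^T P + P (A + B K) < 0 becomes linear in (Q, Y).
Q = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))

eps = 1e-6
lmi = A @ Q + Q @ A.T + B @ Y + Y.T @ B.T
# Symmetrize explicitly so the solver's PSD check accepts the expression.
lmi = (lmi + lmi.T) / 2

constraints = [Q >> eps * np.eye(n),
               lmi << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

if prob.status in ("optimal", "optimal_inaccurate"):
    K = Y.value @ np.linalg.inv(Q.value)  # recover the gain K = Y Q^{-1}
    print("Stabilizing gain K =", K)
else:
    print("LMI infeasible for the chosen data")
```

If the problem is feasible, K = Y Q^{-1} stabilizes the nominal linear part; the full delay-dependent LMIs of Theorems 8 and 9 would be assembled analogously, with the additional decision matrices and block structure described there.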

Example 2. We consider the neural networks (44) with a nonzero disturbance input. The parameters are as follows: and the activation functions are chosen so that the sector bounds of Assumption 2 are satisfied. We set the disturbance attenuation level, the upper bounds of the discrete and distributed time delays, and the delay-derivative bound to the chosen values. By solving the LMIs of Theorem 9 through the MATLAB LMI toolbox, we obtain the gain matrix of the stabilization controller with the guaranteed H∞ performance level. Figures 3 and 4 present the state responses of the considered neural networks with the disturbance input. Figure 3 shows the time response of the state variables of the open-loop system from the given initial values, and Figure 4 shows the time response of the state variables of the closed-loop system from the same initial values. It is clear that the state variables converge rapidly to zero under the feedback control law, whereas they cannot converge to zero without the feedback control. The simulation results reveal that the considered system with discrete and distributed time-varying delays is robustly asymptotically stable under the feedback control law.

6. Conclusions

In this paper, we investigated the robust stabilization and H∞ control problems for a class of uncertain neural networks. By combining the quadratic convex combination technique with the Lyapunov-Krasovskii functional approach, new delay-dependent conditions were established. The stabilization criterion was derived via an augmented Lyapunov-Krasovskii functional and ensures the robust stability of the considered uncertain neural networks with various activation functions. Furthermore, the result was extended to the design of a robust H∞ controller, which guarantees that the closed-loop system is robustly asymptotically stable with a prescribed H∞ performance level. The criteria were derived in terms of LMIs, which can be easily solved by the MATLAB LMI toolbox. Numerical examples were provided to illustrate the effectiveness of the obtained results.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Basic Research Program of China (2010CB732501) and the Natural Science Foundation of Hainan Province (111002).