Abstract

A new robust training law, called an input/output-to-state stable training law (IOSSTL), is proposed for dynamic neural networks with external disturbance. Based on a linear matrix inequality (LMI) formulation, the IOSSTL is presented to not only guarantee exponential stability but also reduce the effect of an external disturbance. It is shown that the IOSSTL can be obtained by solving an LMI, which can be facilitated easily by standard numerical packages. Numerical examples are presented to demonstrate the validity of the proposed IOSSTL.

1. Introduction

The past few years have witnessed remarkable progress in the research of neural networks due to their advantages of parallel computation, learning ability, function approximation, and fault tolerance. Many exciting applications have been developed in signal processing, control, combinatorial optimization, pattern recognition, and other areas. It has been shown that these applications depend greatly on the stability and performance of the underlying neural networks [1].

Stability of neural networks is a prerequisite for their successful application as either associative memories or optimization solvers. Stability problems in neural networks can be classified into two main categories: stability of neural networks [2-9] and stability of learning algorithms [10-16]. In this paper, we focus on deriving a new learning algorithm for neural networks. Stable learning algorithms can be developed by analyzing identification or tracking errors in neural networks. In [11], stability conditions were studied for learning laws in situations where neural networks are used to identify and control a nonlinear system. In [17], dynamic backpropagation was modified with stability constraints. In [13, 15], passivity-based learning laws were proposed for neural networks. Since neural networks cannot match unknown nonlinear systems exactly, extensive modifications must be made either to the normal gradient algorithm or to the backpropagation algorithm [10, 11, 14, 16]. Despite these advances in neural network learning, most research results have been restricted to learning algorithms for neural networks without external disturbance; in real physical systems, however, one is faced with model uncertainties and a lack of statistical information on the signals. Thus, a learning algorithm for neural networks with external disturbance is of considerable practical importance. Unfortunately, the existing works do not allow one to design a learning algorithm for neural networks with external disturbance. In this paper, we propose a new robust learning algorithm for neural networks with external disturbance.

The input/output-to-state stability (IOSS) approach [18, 19] is an effective tool for analyzing the stability of physical nonlinear systems affected by disturbances. It can deal with nonlinear systems using only the general characteristics of the input-output dynamics, and it offers elegant solutions for the proof of stability. The IOSS framework is a promising approach to the stability analysis of neural networks because it can lead to general conclusions on stability using only input-output characteristics. It has been widely accepted as an important concept in control engineering, and several research results have been reported in recent years [18-23]. A natural question arises: can we obtain an IOSS-based robust training algorithm for dynamic neural networks with external disturbance? This paper answers this question. To the best of our knowledge, no result on an IOSS training law for dynamic neural networks with external disturbance has appeared in the literature so far; the problem remains open and challenging.

In this paper, for the first time, the IOSS approach is used to create a new robust training law for dynamic neural networks with external disturbance. This training law is called an IOSS training law (IOSSTL). The IOSSTL is a new contribution to the topic of neural network learning. A sufficient condition for the existence of the IOSSTL is presented such that neural networks are exponentially stable and input/output-to-state stable for any bounded disturbance. Based on a linear matrix inequality (LMI) formulation, the design of the proposed training law can be realized by solving an LMI, which can be facilitated readily via standard numerical algorithms [24, 25]. The result of this paper opens a new path for the application of the IOSS technique to weight learning of dynamic neural networks. It is expected that the proposed scheme can be extended to study learning algorithms for neural networks with time-delay.

This paper is organized as follows. In Section 2, we formulate the problem. In Section 3, an LMI problem for the IOSSTL of dynamic neural networks is proposed. In Section 4, numerical examples are given, and finally, conclusions are presented in Section 5.

2. Problem Formulation

Consider the following differential equation:

$$\dot{x}(t) = f(x(t), u(t)), \qquad y(t) = h(x(t)), \tag{2.1}$$

where $x(t) \in \mathbb{R}^n$ is the state variable, $u(t) \in \mathbb{R}^m$ is the external input, and $y(t) \in \mathbb{R}^p$ is the output variable. $f$ and $h$ are continuously differentiable and satisfy $f(0, 0) = 0$ and $h(0) = 0$. We first recall the following definitions.

Definition 2.1. A function $\gamma : [0, \infty) \to [0, \infty)$ is a class $\mathcal{K}$ function if it is continuous and strictly increasing and if $\gamma(0) = 0$. It is a class $\mathcal{K}_\infty$ function if it is a class $\mathcal{K}$ function and $\gamma(s) \to \infty$ as $s \to \infty$. A function $\beta : [0, \infty) \times [0, \infty) \to [0, \infty)$ is a class $\mathcal{KL}$ function if, for each fixed $t \geq 0$, the function $\beta(\cdot, t)$ is a class $\mathcal{K}$ function and, for each fixed $s \geq 0$, the function $\beta(s, \cdot)$ is decreasing and $\beta(s, t) \to 0$ as $t \to \infty$.

Definition 2.2. The system (2.1) is said to be input/output-to-state stable if there exist class $\mathcal{K}$ functions $\gamma_1$, $\gamma_2$ and a class $\mathcal{KL}$ function $\beta$ such that, for each input $u(\cdot)$, each output $y(\cdot)$, and each initial state $x(0)$, it holds that

$$\|x(t)\| \leq \beta(\|x(0)\|, t) + \gamma_1\left(\|u\|_{[0, t]}\right) + \gamma_2\left(\|y\|_{[0, t]}\right) \tag{2.2}$$

for each $t \geq 0$, where $\|\cdot\|_{[0, t]}$ denotes the supremum norm on the interval $[0, t]$.

Now we introduce a useful result that is employed for obtaining the IOSSTL.

Lemma 2.3 (see [18, 19]). A continuous function $V : \mathbb{R}^n \to \mathbb{R}$ is called an IOSS-Lyapunov function for the system (2.1) if there exist class $\mathcal{K}_\infty$ functions $\alpha_1$, $\alpha_2$, $\alpha_3$ and class $\mathcal{K}$ functions $\sigma_1$ and $\sigma_2$ such that

$$\alpha_1(\|x\|) \leq V(x) \leq \alpha_2(\|x\|) \tag{2.3}$$

for any $x \in \mathbb{R}^n$ and

$$\frac{\partial V}{\partial x} f(x, u) \leq -\alpha_3(\|x\|) + \sigma_1(\|u\|) + \sigma_2(\|y\|) \tag{2.4}$$

for any $x \in \mathbb{R}^n$, any $u \in \mathbb{R}^m$, and any $y = h(x) \in \mathbb{R}^p$. Then the system (2.1) is input/output-to-state stable if and only if it admits an IOSS-Lyapunov function.

Consider the following dynamic neural network:

$$\dot{x}(t) = A x(t) + W_1 \phi(x(t)) + W_2 \psi(x(t)) u(t) + B d(t), \tag{2.5}$$

$$y(t) = C x(t), \tag{2.6}$$

where $x(t) \in \mathbb{R}^n$ is the state vector, $u(t) \in \mathbb{R}^m$ is the bounded control input, $y(t) \in \mathbb{R}^p$ is the output vector, $d(t) \in \mathbb{R}^q$ is the disturbance vector, $A = \operatorname{diag}\{-a_1, \ldots, -a_n\}$ ($a_i > 0$, $i = 1, \ldots, n$) is the self-feedback matrix, $W_1 \in \mathbb{R}^{n \times n}$ and $W_2 \in \mathbb{R}^{n \times n}$ are the weight matrices, $\phi(x(t)) \in \mathbb{R}^n$ is the nonlinear vector field, $\psi(x(t)) \in \mathbb{R}^{n \times m}$ is the nonlinear matrix function, and $B \in \mathbb{R}^{n \times q}$ and $C \in \mathbb{R}^{p \times n}$ are known constant matrices. The element functions $\phi_i$ and $\psi_{jk}$ ($i, j = 1, \ldots, n$, $k = 1, \ldots, m$) are usually selected as sigmoid functions.
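To make the model concrete, the following Python fragment sketches the right-hand side of (2.5)-(2.6). It is only an illustration: the dimensions, the tanh-type sigmoid choices for $\phi$ and $\psi$, and the identity matrices $B$ and $C$ are our assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of the dynamic neural network (2.5)-(2.6).
# All concrete values below (n, m, A, B, C, tanh activations) are assumptions.
n, m = 2, 1                       # state and input dimensions (placeholders)
A = np.diag([-3.0, -4.0])         # self-feedback matrix diag{-a_i}, a_i > 0
B = np.eye(n)                     # disturbance input matrix (assumed identity)
C = np.eye(n)                     # output matrix (assumed identity)

def phi(x):
    """Nonlinear vector field, R^n -> R^n (sigmoid-type elements)."""
    return np.tanh(x)

def psi(x):
    """Nonlinear matrix function, R^n -> R^(n x m) (sigmoid-type elements)."""
    return np.tanh(np.outer(x, np.ones(m)))

def f(x, u, d, W1, W2):
    """Right-hand side of (2.5): dx/dt = A x + W1 phi(x) + W2 psi(x) u + B d."""
    return A @ x + W1 @ phi(x) + W2 @ (psi(x) @ u) + B @ d

def output(x):
    """Output equation (2.6): y = C x."""
    return C @ x
```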

Remark 2.4. It can be seen that the Hopfield model [26] is a special case of the neural network (2.5) with $a_i = 1/(R_i C_i)$, $W_1$ the (scaled) connection matrix, $\psi(x(t)) = I$, and $d(t) = 0$, where $R_i$ and $C_i$ are the resistance and capacitance at the $i$th node of the network, respectively.

The purpose of this study is to derive a new training law such that the neural network (2.5)-(2.6) with the external disturbance $d(t)$ is input/output-to-state stable and exponentially stable for $d(t) = 0$.

3. Main Results

In this section, we design a new input/output-to-state stable training law for dynamic neural networks with external disturbance. The following theorem gives an LMI condition for the existence of the desired training law.

Theorem 3.1. Assume that there exist matrices $P = P^T > 0$ and $Y$ such that

$$\begin{bmatrix} A^T P + P A + Y + Y^T + I - C^T C & P B \\ B^T P & -I \end{bmatrix} < 0. \tag{3.1}$$

If the weight matrices $W_1$ and $W_2$ are updated as

$$W_1(t) = \frac{P^{-1} Y x(t) \phi^T(x(t))}{\phi^T(x(t)) \phi(x(t)) + u^T(t) \psi^T(x(t)) \psi(x(t)) u(t)}, \qquad W_2(t) = \frac{P^{-1} Y x(t) u^T(t) \psi^T(x(t))}{\phi^T(x(t)) \phi(x(t)) + u^T(t) \psi^T(x(t)) \psi(x(t)) u(t)}, \tag{3.2}$$

then the neural network (2.5)-(2.6) is input/output-to-state stable.

Proof. The neural network (2.5) can be represented by

$$\dot{x}(t) = A x(t) + \bar{W}(t) \Phi(t) + B d(t), \qquad \bar{W}(t) = \begin{bmatrix} W_1(t) & W_2(t) \end{bmatrix}, \quad \Phi(t) = \begin{bmatrix} \phi(x(t)) \\ \psi(x(t)) u(t) \end{bmatrix}. \tag{3.3}$$

In order to guarantee the IOSS, the following relation must be satisfied:

$$\bar{W}(t) \Phi(t) = K x(t), \tag{3.4}$$

where $K$ is the gain matrix of the IOSSTL. Then we obtain

$$\dot{x}(t) = (A + K) x(t) + B d(t). \tag{3.5}$$

One of the possible weight selections to fulfil (3.4) (perhaps excepting a subspace of a smaller dimension) is given by

$$\bar{W}(t) = K x(t) \Phi^+(t), \tag{3.6}$$

where $\Phi^+(t)$ stands for the pseudoinverse matrix in the Moore-Penrose sense [27]. This training law is just an algebraic relation depending on $x(t)$ and $\Phi(t)$, which can be evaluated directly. Taking into account that

$$\Phi^+(t) = \frac{\Phi^T(t)}{\Phi^T(t) \Phi(t)}, \tag{3.7}$$

the training law (3.6), see [27], can be rewritten as follows:

$$W_1(t) = \frac{K x(t) \phi^T(x(t))}{\Phi^T(t) \Phi(t)}, \qquad W_2(t) = \frac{K x(t) u^T(t) \psi^T(x(t))}{\Phi^T(t) \Phi(t)}. \tag{3.8}$$

To obtain the gain matrix $K$ of the IOSSTL, consider a Lyapunov function $V(x(t)) = x^T(t) P x(t)$ with $P = P^T > 0$. Note that $V(x(t))$ satisfies the following Rayleigh inequality [28]:

$$\lambda_{\min}(P) \|x(t)\|^2 \leq V(x(t)) \leq \lambda_{\max}(P) \|x(t)\|^2, \tag{3.9}$$

where $\lambda_{\max}(P)$ and $\lambda_{\min}(P)$ are the maximum and minimum eigenvalues of the matrix $P$. The time derivative of $V(x(t))$ along the trajectory of (3.5) is

$$\dot{V} = x^T(t) \left[ (A + K)^T P + P (A + K) \right] x(t) + d^T(t) B^T P x(t) + x^T(t) P B d(t). \tag{3.10}$$

Adding and subtracting $x^T(t) x(t) - d^T(t) d(t) - y^T(t) y(t)$, we obtain

$$\dot{V} = z^T(t) \Sigma z(t) - x^T(t) x(t) + d^T(t) d(t) + y^T(t) y(t), \tag{3.11}$$

where

$$z(t) = \begin{bmatrix} x(t) \\ d(t) \end{bmatrix}, \qquad \Sigma = \begin{bmatrix} (A + K)^T P + P (A + K) + I - C^T C & P B \\ B^T P & -I \end{bmatrix}. \tag{3.12}$$

If the following matrix inequality is satisfied:

$$\Sigma < 0, \tag{3.13}$$

we have

$$\dot{V} \leq -x^T(t) x(t) + d^T(t) d(t) + y^T(t) y(t) = -\|x(t)\|^2 + \|d(t)\|^2 + \|y(t)\|^2. \tag{3.14}$$

Define functions $\alpha_1$, $\alpha_2$, $\alpha_3$, $\sigma_1$, and $\sigma_2$ as

$$\alpha_1(s) = \lambda_{\min}(P) s^2, \quad \alpha_2(s) = \lambda_{\max}(P) s^2, \quad \alpha_3(s) = s^2, \quad \sigma_1(s) = s^2, \quad \sigma_2(s) = s^2. \tag{3.15}$$

Note that $\alpha_1$, $\alpha_2$, and $\alpha_3$ are class $\mathcal{K}_\infty$ functions and $\sigma_1$ and $\sigma_2$ are class $\mathcal{K}$ functions. From (3.9) and (3.15), we can obtain

$$\alpha_1(\|x(t)\|) \leq V(x(t)) \leq \alpha_2(\|x(t)\|), \tag{3.16}$$

$$\dot{V} \leq -\alpha_3(\|x(t)\|) + \sigma_1(\|d(t)\|) + \sigma_2(\|y(t)\|). \tag{3.17}$$

According to Lemma 2.3, we can conclude that $V(x(t))$ is an IOSS-Lyapunov function. Introducing the change of variable $Y = P K$, (3.13) is equivalently changed into the LMI (3.1). Then the gain matrix of the IOSSTL is given by $K = P^{-1} Y$. With this gain, the training laws (3.8) are changed to (3.2). This completes the proof.
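Because the update (3.8) is a closed-form algebraic relation, it can be evaluated directly at every integration step. The following Python sketch (reusing the placeholder model above) is one way to implement it; the small regularizer eps, which guards the measure-zero subspace where $\Phi(t) = 0$, is our addition.

```python
import numpy as np

def iosstl_weights(K, x, phi_x, psi_x, u, eps=1e-12):
    """Evaluate the training law (3.8): W1, W2 from the Moore-Penrose
    pseudoinverse of Phi = [phi(x); psi(x) u], cf. (3.6)-(3.7).

    eps is an added guard for the (measure-zero) subspace where Phi = 0.
    """
    v = psi_x @ u                            # psi(x(t)) u(t), an n-vector
    denom = phi_x @ phi_x + v @ v + eps      # Phi^T(t) Phi(t), a scalar
    Kx = K @ x
    W1 = np.outer(Kx, phi_x) / denom         # K x phi^T(x) / (Phi^T Phi)
    W2 = np.outer(Kx, v) / denom             # K x (psi(x) u)^T / (Phi^T Phi)
    return W1, W2
```

With these weights, W1 @ phi_x + W2 @ v equals K @ x up to the eps regularization, which is exactly the relation (3.4) the proof requires.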

Corollary 3.2. Assume that there exist matrices $P = P^T > 0$ and $Y$ such that

$$\begin{bmatrix} A^T P + P A + Y + Y^T + I - C^T C & P B \\ B^T P & -I \end{bmatrix} < 0, \tag{3.18}$$

$$I - C^T C > 0. \tag{3.19}$$

If the weight matrices $W_1$ and $W_2$ are updated as (3.2), the neural network (2.5)-(2.6) with the external disturbance $d(t)$ is input/output-to-state stable and exponentially stable for $d(t) = 0$.

Proof. From Theorem 3.1, the LMI (3.18) guarantees that the neural network (2.5)-(2.6) with the external disturbance $d(t)$ is input/output-to-state stable. Next, we show that, under the additional LMI condition (3.19), the neural network (2.5)-(2.6) is exponentially stable for $d(t) = 0$. When $d(t) = 0$, from (3.14) and (2.6), we obtain

$$\dot{V} \leq -x^T(t) x(t) + y^T(t) y(t) = -x^T(t) \left( I - C^T C \right) x(t). \tag{3.20}$$

From (3.20) and (3.9), we have

$$\dot{V} \leq -\kappa V(x(t)), \tag{3.21}$$

where

$$\kappa = \frac{\lambda_{\min}(I - C^T C)}{\lambda_{\max}(P)}, \tag{3.22}$$

so that $V(x(t)) \leq V(x(0)) e^{-\kappa t}$. Using (3.9), we have

$$\|x(t)\|^2 \leq \frac{\lambda_{\max}(P)}{\lambda_{\min}(P)} \|x(0)\|^2 e^{-\kappa t}. \tag{3.23}$$

Since the LMI (3.19) guarantees $\kappa > 0$, the relation (3.23) guarantees the exponential stability. This completes the proof.

Remark 3.3. The LMI problems given in Theorem 3.1 and Corollary 3.2 determine whether solutions exist or not; such problems are called feasibility problems. They can be solved efficiently by recently developed convex optimization algorithms [24]. In this paper, in order to solve the LMI problems, we utilize the MATLAB LMI Control Toolbox [25], which implements state-of-the-art interior-point algorithms.
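As a rough open-source alternative to the MATLAB LMI Control Toolbox, the same feasibility problem can be posed in CVXPY. The sketch below encodes the LMIs (3.18)-(3.19) of Corollary 3.2; CVXPY is not used in the paper, and the matrices A, B, C below are placeholders rather than the paper's example data.

```python
import cvxpy as cp
import numpy as np

n = 2
A = np.diag([-3.0, -4.0])      # placeholder self-feedback matrix
B = np.eye(n)                  # placeholder disturbance matrix
C = 0.5 * np.eye(n)            # placeholder output matrix
I = np.eye(n)

# LMI (3.19) involves problem data only, so check it up front.
assert np.all(np.linalg.eigvalsh(I - C.T @ C) > 0), "I - C^T C > 0 fails"

P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, n))

# LMI (3.18): [[A^T P + P A + Y + Y^T + I - C^T C, P B], [B^T P, -I]] < 0.
lmi = cp.bmat([[A.T @ P + P @ A + Y + Y.T + I - C.T @ C, P @ B],
               [B.T @ P, -I]])
eps = 1e-6                     # enforce strict inequalities numerically
constraints = [P >> eps * I, lmi << -eps * np.eye(2 * n)]

cp.Problem(cp.Minimize(0), constraints).solve()   # pure feasibility problem
K = np.linalg.solve(P.value, Y.value)             # IOSSTL gain K = P^{-1} Y
print("gain matrix K =\n", K)
```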

4. Numerical Examples

4.1. Example  1

Consider the neural network (2.5)-(2.6). Applying Corollary 3.2 yields the gain matrix $K = P^{-1} Y$ of the IOSSTL. Figure 1 shows the state trajectories for the given initial conditions when the external disturbance $d(t)$ is a Gaussian noise with mean 0 and variance 1. Figure 1 shows that the IOSSTL reduces the effect of the external disturbance on the state vector $x(t)$. The evolutions of the weights $W_1$ and $W_2$ are shown in Figures 2 and 3, respectively. Although the weights are not convergent, they are bounded. This result is very important for robust identification and control using neural networks [29].
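A simulation loop in the spirit of this example can be sketched as follows, reusing the placeholder model, the iosstl_weights routine, and the gain K from the sketches above. The forward-Euler integrator, the step size, and the initial condition are our assumptions; the paper's actual example parameters are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)          # Gaussian disturbance source
dt, T = 1e-3, 10.0                      # assumed step size and horizon
steps = int(T / dt)
x = np.array([1.0, -1.0])               # assumed initial condition
u = np.array([1.0])                     # bounded control input (assumed)
X = np.empty((steps, x.size))

for k in range(steps):
    d = rng.normal(0.0, 1.0, size=x.size)        # mean 0, variance 1
    W1, W2 = iosstl_weights(K, x, phi(x), psi(x), u)
    x = x + dt * f(x, u, d, W1, W2)              # forward-Euler step of (2.5)
    X[k] = x
# X holds state trajectories analogous to those plotted in Figure 1.
```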

4.2. Example  2

In this subsection, we apply the proposed scheme to the identification problem of the Lorenz model presented in [30]. The Lorenz model is used to describe fluid convection, especially some features of atmospheric dynamics. The uncontrolled model is given by

$$\dot{x}_1 = \sigma (x_2 - x_1), \qquad \dot{x}_2 = \rho x_1 - x_2 - x_1 x_3, \qquad \dot{x}_3 = x_1 x_2 - \beta x_3,$$

where $x_1$, $x_2$, and $x_3$ represent measures of fluid velocity and horizontal and vertical temperature variations, respectively. $\sigma$, $\rho$, and $\beta$ are positive parameters representing the Prandtl number, Rayleigh number, and geometric factor, respectively. As in the commonly studied case, we select $\sigma = 10$, $\rho = 28$, and $\beta = 8/3$. Consider the neural network (2.5)-(2.6) for this identification problem. Solving the LMIs (3.18)-(3.19) by the convex optimization technique software [24, 25] gives the gain matrix $K$ of the IOSSTL. Figure 4 shows the state trajectories for the given initial conditions when the external disturbance $d(t)$ is a Gaussian noise with mean 0 and variance 1. Figure 5 shows that, by the proposed scheme, the identification errors between the states of the Lorenz model and the states of the neural network are bounded around the origin. These results appear to be of little use for the storage of patterns, but they are very important for identification and control using neural networks [29].
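For reference, the Lorenz plant with this standard parameter choice can be written down directly. The sketch below only generates the trajectory to be identified; the paper's neural-network parameters for this example are not reproduced, and the integrator, step size, and initial condition are our assumptions.

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0   # Prandtl, Rayleigh, geometric factor

def lorenz(s):
    """Uncontrolled Lorenz model; s = (x1, x2, x3)."""
    x1, x2, x3 = s
    return np.array([SIGMA * (x2 - x1),
                     RHO * x1 - x2 - x1 * x3,
                     x1 * x2 - BETA * x3])

dt, steps = 1e-3, 50_000                   # assumed step size and horizon
s = np.array([1.0, 1.0, 1.0])              # assumed initial condition
target = np.empty((steps, 3))
for k in range(steps):
    s = s + dt * lorenz(s)                 # forward-Euler integration
    target[k] = s
# 'target' is the trajectory the dynamic neural network identifies.
```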

5. Conclusion

This paper has proposed the IOSSTL, a new robust training law for weight adjustment of dynamic neural networks with external disturbance. Based on an LMI approach, the IOSSTL was developed to guarantee exponential stability and to reduce the effect of the external disturbance on the state vector. It was also shown that the IOSSTL can be obtained by solving an LMI. Numerical simulations were performed to demonstrate the effectiveness of the proposed IOSSTL. It is expected that the results obtained in this study can be extended to discrete-time dynamic neural networks.

The scheme proposed in this paper can be used in several control applications. For example, with the proposed learning law, a dynamic neural network can be applied to model an unknown nonlinear system, and the neuron states estimated by suitable filters can then be utilized to achieve certain design objectives by the control law. This application of the proposed scheme remains as future work.

Acknowledgment

This work was supported by a grant from the Korean Ministry of Education, Science and Technology (the Regional Core Research Program/Center for Healthcare Technology Development).