Abstract

The problems of existence, uniqueness, and global asymptotic stability are considered for the class of neutral-type neural network models with discrete time delays. By employing a suitable Lyapunov functional and the homeomorphism mapping theorem, we derive some new delay-independent sufficient conditions for the existence, uniqueness, and global asymptotic stability of the equilibrium point for this class of neutral-type systems. The conditions obtained basically establish some norm and matrix inequalities involving the network parameters of the neural system. The main advantage of the proposed results is that they can be expressed in terms of the network parameters only. Some comparative examples are also given to compare our results with previous corresponding results and to demonstrate the effectiveness of the results presented.

1. Introduction

In recent years, dynamical neural networks have been employed in solving many practical engineering problems such as signal and image processing, pattern recognition, associative memories, parallel computation, and optimization and control problems [1–10]. In such applications, it is important to know the dynamics of the designed neural networks. In addition, when using delayed neural networks, time delays may affect the transmission rate and cause instability. Therefore, the stability analysis of neural networks with time delays is indispensable for solving engineering system problems. In the recent literature, many papers have studied the global stability of different classes of neural networks by exploiting various analysis techniques and methods and have presented some useful stability results for delayed neural networks. In practice, in order to determine the equilibrium and stability properties of neural networks precisely, information about the time derivatives of the past states must be introduced into the state equations of the neural networks. Neural networks of this type are called neutral-type neural networks. Some global stability results for various classes of neural networks with time delays have been reported in [1–33]. The goal of our paper is to present some new and alternative stability results for neutral-type neural networks with discrete time delays under Lipschitz continuous activation functions.

Throughout this paper we will use the following notation: for any matrix $Q$, $Q > 0$ will denote that $Q$ is symmetric and positive definite; $Q^{T}$, $Q^{-1}$, $\lambda_{\min}(Q)$, and $\lambda_{\max}(Q)$ will denote the transpose of $Q$, the inverse of $Q$, the minimum eigenvalue of $Q$, and the maximum eigenvalue of $Q$, respectively. We will use the matrix norm $\|Q\|_{2} = \sqrt{\lambda_{\max}(Q^{T}Q)}$. For any two positive definite matrices $P$ and $Q$, if $P \ge Q$, then $P \ge Q$ will imply that $x^{T}Px \ge x^{T}Qx$ for all $x$. For $v = (v_{1}, v_{2}, \ldots, v_{n})^{T}$, we will use the vector norms $\|v\|_{1} = \sum_{i=1}^{n} |v_{i}|$ and $\|v\|_{2} = \sqrt{\sum_{i=1}^{n} v_{i}^{2}}$.

2. Problem Statement

The class of neutral-type neural network models with discrete time delays is described by the following set of nonlinear differential equations:
$$\dot{x}_{i}(t) = -c_{i}x_{i}(t) + \sum_{j=1}^{n} a_{ij}f_{j}(x_{j}(t)) + \sum_{j=1}^{n} b_{ij}f_{j}(x_{j}(t-\tau_{j})) + \sum_{j=1}^{n} e_{ij}\dot{x}_{j}(t-\tau_{j}) + u_{i}, \quad i = 1, 2, \ldots, n, \qquad (1)$$
where $n$ is the number of the neurons in the network, $x_{i}(t)$ denotes the state of the $i$th neuron, and the parameters are some constants: the constants $a_{ij}$ denote the strengths of the neuron interconnections within the network; the constants $b_{ij}$ denote the strengths of the neuron interconnections with time delay parameters $\tau_{j}$; the $e_{ij}$ are coefficients of the time derivative of the delayed states; the functions $f_{j}(\cdot)$ denote the neuron activations; and the constants $u_{i}$ are some external inputs. In system (1), $\tau_{j}$ represents the delay parameter with $\tau = \max_{1 \le j \le n} \tau_{j}$, and $c_{i} > 0$. Accompanying the neutral system (1) is an initial condition of the form $x_{i}(t) = \phi_{i}(t) \in C([-\tau, 0], \mathbb{R})$, where $C([-\tau, 0], \mathbb{R})$ denotes the set of all continuous functions from $[-\tau, 0]$ to $\mathbb{R}$.

We will assume that the activation functions $f_{i}$, $i = 1, 2, \ldots, n$, are Lipschitz continuous; that is, there exist some constants $\ell_{i} > 0$ such that
$$|f_{i}(x) - f_{i}(y)| \le \ell_{i}|x - y|, \quad \forall x, y \in \mathbb{R}, \; i = 1, 2, \ldots, n. \qquad (2)$$
Neural network model (1) can be written in the vector-matrix form as follows:
$$\dot{x}(t) = -Cx(t) + Af(x(t)) + Bf(x(t-\tau)) + E\dot{x}(t-\tau) + u, \qquad (3)$$
where $x(t) = (x_{1}(t), \ldots, x_{n}(t))^{T}$, $C = \mathrm{diag}(c_{1}, \ldots, c_{n})$, $A = (a_{ij})_{n \times n}$, $B = (b_{ij})_{n \times n}$, $E = (e_{ij})_{n \times n}$, $u = (u_{1}, \ldots, u_{n})^{T}$, $f(x(t)) = (f_{1}(x_{1}(t)), \ldots, f_{n}(x_{n}(t)))^{T}$, and $f(x(t-\tau)) = (f_{1}(x_{1}(t-\tau_{1})), \ldots, f_{n}(x_{n}(t-\tau_{n})))^{T}$.
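As an illustrative aid only, the vector-matrix form (3) can be simulated with a fixed-step Euler scheme that buffers both the delayed states and the delayed derivatives needed for the neutral term. All parameter values below are hypothetical (they are not taken from this paper's examples), a single common delay is assumed, and `np.tanh` is used as a Lipschitz activation with $\ell_i = 1$:

```python
import numpy as np

# Hypothetical parameters for a 2-neuron neutral-type network
# (illustrative only; not the paper's example values).
C = np.diag([2.0, 2.0])
A = np.array([[0.1, -0.2], [0.2, 0.1]])
B = np.array([[0.1, 0.1], [-0.1, 0.1]])
E = np.array([[0.05, 0.0], [0.0, 0.05]])
u = np.array([1.0, -1.0])
tau, h, T = 0.5, 0.01, 20.0
f = np.tanh                      # Lipschitz activation, l_i = 1

d = int(round(tau / h))          # delay expressed in grid steps
steps = int(round(T / h))
x = np.zeros((steps + d + 1, 2)) # zero initial history on [-tau, 0]
dx = np.zeros_like(x)            # stored derivatives for the neutral term

for k in range(d, steps + d):
    # Right-hand side of (3): -Cx + A f(x) + B f(x delayed) + E dx delayed + u
    dx[k] = -C @ x[k] + A @ f(x[k]) + B @ f(x[k - d]) + E @ dx[k - d] + u
    x[k + 1] = x[k] + h * dx[k]

x_star = x[-1]  # approximate equilibrium if the trajectory settles
```

When the trajectory settles, the stored derivative tends to zero, so the final state approximately satisfies the equilibrium equation $Cx^{*} = (A + B)f(x^{*}) + u$.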

In order to obtain our main results, the following lemma will be needed.

Lemma 1 (see [23]). If a map $H(x): \mathbb{R}^{n} \to \mathbb{R}^{n}$ satisfies the following conditions: (i) $H(x) \ne H(y)$ for all $x \ne y$, (ii) $\|H(x)\| \to \infty$ as $\|x\| \to \infty$, then $H(x)$ is a homeomorphism of $\mathbb{R}^{n}$.

3. Existence and Uniqueness Analysis

This section deals with obtaining sufficient conditions that ensure the existence and uniqueness of the equilibrium point for the neutral-type neural network model (1). The main result is given in the following theorem.

Theorem 2. For the neutral-type neural network model (1), let and the activation functions satisfy (2). Then, the system (1) has a unique equilibrium point for each input vector $u$ if there exist positive diagonal matrices and and positive definite matrices , , and such that the following conditions hold: where .
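The explicit inequalities of Theorem 2 involve positive definite matrices that must be determined; once candidate matrices are chosen, each condition reduces to verifying that a symmetric matrix built from the network parameters is positive definite. A minimal numerical sketch of such a check, with a purely hypothetical test matrix, is:

```python
import numpy as np

def is_positive_definite(M, tol=1e-10):
    """Check symmetric positive definiteness via the minimum eigenvalue."""
    M = 0.5 * (M + M.T)  # symmetrize to guard against round-off asymmetry
    return bool(np.min(np.linalg.eigvalsh(M)) > tol)

# Hypothetical matrix standing in for one of the theorem's conditions.
Omega = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
print(is_positive_definite(Omega))  # True for this example
```

The same helper can be applied to each matrix condition of a theorem in turn; if all checks return `True`, the chosen candidate matrices certify the conditions.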

Proof. We will make use of Lemma 1 to prove the existence and uniqueness of the equilibrium point for system (1). Let us define the following mapping associated with system (1): If $x^{*}$ is an equilibrium point of (1), then $x^{*}$ satisfies the equilibrium equation: Clearly, every solution of the equation $H(x) = 0$ is an equilibrium point of (1). Therefore, in the light of Lemma 1, we can conclude that, for the system defined by (1), there exists a unique equilibrium point for every input vector if $H(x)$ is a homeomorphism of $\mathbb{R}^{n}$. We will now show that the conditions of Theorem 2 imply that $H(x)$ is a homeomorphism of $\mathbb{R}^{n}$. To this end, we choose any two vectors $x$ and $y$ such that $x \ne y$. When the activation functions satisfy (2), for $x \ne y$, we have two cases: the first case is $x \ne y$ and $f(x) \ne f(y)$, and the second case is $x \ne y$ and $f(x) = f(y)$. Let us carry out the existence and uniqueness analysis for the first case, where $x \ne y$ and $f(x) \ne f(y)$. In this case, for $H(x)$ defined by (5), we can write: If we multiply both sides of (7) by the term , and then add the terms and to the right-hand side of the resulting equation, we get: We note the following inequalities: Using (9) in (8) results in: which is of the form: or equivalently: Since and , the conditions , , , and imply that , from which it follows that and . Therefore, it directly follows that $H(x) \ne H(y)$. Hence, we conclude that $H(x) \ne H(y)$ for all $x \ne y$ with $f(x) \ne f(y)$.
Now consider the case where $x \ne y$ and $f(x) = f(y)$. In this case, $H(x)$ defined by (5) satisfies ; and imply that: Based on the analysis carried out for the previous case, we conclude that $H(x) \ne H(y)$ for all $x \ne y$ in this case as well.
It is now shown that the conditions of Theorem 2 imply that $\|H(x)\| \to \infty$ as $\|x\| \to \infty$. For , we can write: Taking the absolute value of both sides of the above inequality, we obtain: from which it follows that: which yields: We note that . Hence, from (21), it follows that: Since is bounded, and , then $\|H(x)\| \to \infty$ as $\|x\| \to \infty$. Hence, the conditions of Theorem 2 ensure that $H(x)$ is a homeomorphism of $\mathbb{R}^{n}$, proving that the neutral system defined by (1) has a unique equilibrium point for each input vector $u$.
Choosing , , , , and in the conditions of Theorem 2 as , , , , and , we can express some special cases of Theorem 2 as follows.

Corollary 3. For the neutral-type neural network model (1), let and the activation functions satisfy (2). Then, the system (1) has a unique equilibrium point for each input vector $u$ if there exist some positive constants , , , , and such that the following conditions hold: where .

A special case of Corollary 3 is the following result.

Corollary 4. For the neutral-type neural network model (1), let and the activation functions satisfy (2). Then, the system (1) has a unique equilibrium point for each input vector $u$ if there exist some positive constants , , , , and such that the following conditions hold: where , , and .

4. Stability Analysis

In this section, we will prove that the conditions obtained in Theorem 2 for the existence and uniqueness of the equilibrium point are also sufficient for the global stability of the equilibrium point of the neutral system defined by (1). In order to simplify the proofs, we will first shift the equilibrium point $x^{*}$ of system (1) to the origin. By using the transformation $z_{i}(t) = x_{i}(t) - x_{i}^{*}$, $i = 1, 2, \ldots, n$, the neutral-type neural network model (1) can be put in the form:
$$\dot{z}_{i}(t) = -c_{i}z_{i}(t) + \sum_{j=1}^{n} a_{ij}g_{j}(z_{j}(t)) + \sum_{j=1}^{n} b_{ij}g_{j}(z_{j}(t-\tau_{j})) + \sum_{j=1}^{n} e_{ij}\dot{z}_{j}(t-\tau_{j}),$$
which can be written in vector-matrix form as follows:
$$\dot{z}(t) = -Cz(t) + Ag(z(t)) + Bg(z(t-\tau)) + E\dot{z}(t-\tau), \qquad (25)$$
where $z(t) = (z_{1}(t), \ldots, z_{n}(t))^{T}$ is the state vector of the transformed neural system, $g(z(\cdot)) = (g_{1}(z_{1}(\cdot)), \ldots, g_{n}(z_{n}(\cdot)))^{T}$ represents the new nonlinear activation functions, and $g_{i}(z_{i}) = f_{i}(z_{i} + x_{i}^{*}) - f_{i}(x_{i}^{*})$ with $g_{i}(0) = 0$. The activation functions $g_{i}$ in (25) satisfy
$$|g_{i}(z_{i})| \le \ell_{i}|z_{i}|, \quad \forall z_{i} \in \mathbb{R}, \; i = 1, 2, \ldots, n. \qquad (27)$$
We can now state the following stability result.

Theorem 5. For the neutral-type neural network model (25), let and the activation functions satisfy (27). Then, the origin of system (25) is globally asymptotically stable if there exist positive diagonal matrices and and positive definite matrices , , and such that the following conditions hold: where .

Proof. Define the following positive definite Lyapunov functional $V(z(t))$: where and are some positive constants. The time derivative of $V(z(t))$ along the trajectories of the system (25) is obtained as follows: Since , we can write: We can also write the following inequalities: where , , and are some positive definite matrices. Using (32) in (31) yields
Equation (27) implies that: Hence, we have: which can be written as: or equivalently: Clearly, , , , and imply that $\dot{V}(z(t)) < 0$ if any of the vectors , , , and is nonzero; thus, $\dot{V}(z(t)) = 0$ if and only if , which is the origin of system (25). On the other hand, $V(z(t)) \to \infty$ as $\|z\| \to \infty$, meaning that the Lyapunov functional used for the stability analysis is radially unbounded. Thus, it can be concluded from the standard Lyapunov theorems [34] that the origin of system (25), or equivalently the equilibrium point of system (1), is globally asymptotically stable.
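The specific functional used in the proof above cannot be recovered from the text; purely for orientation, Lyapunov functionals employed for neutral-type systems of the form (25) typically combine a quadratic term in the state with integral terms over the delay interval, for example:

```latex
V(z(t)) = z^{T}(t)\,P\,z(t)
        + \int_{t-\tau}^{t} z^{T}(s)\,Q\,z(s)\,ds
        + \int_{t-\tau}^{t} \dot{z}^{T}(s)\,R\,\dot{z}(s)\,ds
        + \int_{t-\tau}^{t} g^{T}(z(s))\,S\,g(z(s))\,ds
```

with $P$, $Q$, $R$, and $S$ positive definite (these names are illustrative, not the paper's). The quadratic term guarantees radial unboundedness, while the integral terms generate, upon differentiation, the delayed-state, delayed-derivative, and delayed-activation terms that must be dominated for the derivative of the functional to be negative definite.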

We can directly state the following corollaries.

Corollary 6. For the neutral-type neural network model (25), let and the activation functions satisfy (27). Then, the origin of system (25) is globally asymptotically stable if there exist some positive constants , , , , and such that the following conditions hold: where .

Corollary 7. For the neutral-type neural network model (25), let and the activation functions satisfy (27). Then, the origin of system (25) is globally asymptotically stable if there exist some positive constants , , , , and such that the following conditions hold: where , , and .

5. A Comparative Example

In this section, we will give a numerical example to make a comparison between our results and some previous corresponding results derived in the literature. We should point out here that stability results for neutral-type neural networks involve complicated relationships between the network parameters and some positive definite matrices to be determined, which is a difficult task to achieve. Therefore, the example we give will show that, in this particular case, our results are essentially equivalent to the previous corresponding literature results. We now state some of these previous results.

Theorem 8 (see [23]). For the neutral-type neural network model (1), let and the activation functions satisfy (2). Then, system (1) is globally asymptotically stable if there exist some positive constants , , , and such that the following conditions hold: where .

Theorem 9 (see [22]). For the neutral-type neural network model (1), let and the activation functions satisfy (2). Then, system (1) is globally asymptotically stable if there exist positive constants , , , and such that the following conditions hold: where , , and .

Theorem 10 (see [24]). For the neutral-type neural network model (1), let and the activation functions satisfy (2). Then, system (1) is globally asymptotically stable if the following condition holds: where , .

We now consider the following example.

Example 11. Assume that the network parameters of neutral-type neural system (1) are given as follows: where is a real number. Assume that and . We have .
For sufficiently small values of and and sufficiently large values of , , and , the conditions of Corollary 7 can be approximately stated as follows: The two required conditions for stability are and , implying that .
In the case of Theorem 8, for a sufficiently small value of and sufficiently large values of and , , and , the conditions of Theorem 8 can be approximately stated as follows: The required condition for stability is .
In the case of Theorem 9, for a sufficiently small value of and sufficiently large values of and , , and , the conditions of Theorem 9 can be approximately stated as follows: The two required conditions for stability are and , implying that .
In the case of Theorem 10, for a sufficiently small value of , the condition of Theorem 10 can be approximately stated as follows: The required condition for stability is .

6. Conclusions

In this paper, we have obtained some sufficient conditions for the existence, uniqueness, and global asymptotic stability of the equilibrium point for the class of neutral-type systems with discrete time delays. The results we obtained establish various relationships between the network parameters of the system. We have also given an example to show the applicability of our results and to compare them with some previous corresponding results derived in the literature. Most of the literature results express the stability conditions in terms of LMIs (linear matrix inequalities), which are then solved by software tools. Such conditions may be less conservative; however, their computational burden can be high. Our results, in contrast, establish less complex relationships between the network parameters of the system. We should also point out that delay-independent conditions may be more conservative than delay-dependent ones. Our stability conditions are independent of the time delays; this is due to the Lyapunov functional that we have employed in the analysis of our network model. In order to obtain delay-dependent conditions with our techniques, one would need to modify the Lyapunov functional so that the time delays appear in the conditions, which could be the subject of another study.