#### Abstract

Robustness refers to the ability of a system to maintain its original state under continuous disturbance conditions. A deviation argument (DA) and stochastic disturbances (SDs) are enough to disrupt a system and drive it off course. Therefore, it is of great significance to explore the interval length of the deviation function and the noise intensity under which a system remains exponentially stable. In this paper, the robust stability of Hopfield neural network (HNN) models based on differential algebraic systems (DASs) is studied for the first time. By using integral inequalities, expectation inequalities, and basic methods of control theory, upper bounds on the interval length of the deviation function and on the noise intensity are found, and the system is guaranteed to remain exponentially stable under these disturbances. It is shown that as long as the deviation and disturbance of a system are within a certain range, no unstable consequences will arise. Finally, several simulation examples are used to verify the effectiveness of the approach.

#### 1. Introduction

Life is full of nonlinear phenomena, for example, the air resistance acting on a plane and the starting of a car. Accordingly, scholars have never stopped comprehensively analysing nonlinear systems and have mainly focused on system control, for example, adaptive neural finite-time stabilization [1], adaptive control [2, 3], repetitive control [4], output feedback stabilization [5, 6], system commutativity issues [7], control [8], and fuzzy second-order-like sliding mode control [9]. As with p-normal forms, planar switched nonlinear systems, and semi-Markov jump nonlinear systems, the modeling and analysis of some nonlinear systems are very complex. The control of system transient processes and the restriction of nonlinear functions are the challenges of this line of study. Hence, it is meaningful to devise appropriate control methods for the study of nonlinear systems.

A deviation argument (DA) is a function of a variable that deviates from the current time to a certain degree; it is also a generalization of a time delay. This kind of complex system shows both continuity and discreteness during operation. Systems with DAs are usually composed of differential equations and difference equations.
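To make the notion concrete, the following minimal sketch simulates a scalar system with a piecewise-constant deviating argument, restricted to the purely retarded case gamma(t) = floor(t) so that a forward Euler scheme applies. The coefficients `a` and `b` are illustrative and are not taken from any system in this paper:

```python
import math

# Illustrative scalar system with a piecewise-constant argument:
#     x'(t) = -a*x(t) + b*x(floor(t)),
# simulated with forward Euler; a, b, h, T are hypothetical values.
a, b = 1.0, 0.2          # decay rate and deviation-coupling strength
h, T = 0.01, 10.0        # step size and horizon
xs = [1.0]               # x(0) = 1
for k in range(int(T / h)):
    t = k * h
    j = int(round(math.floor(t) / h))  # grid index of the point floor(t)
    x_dev = xs[j]                      # x(gamma(t)) = x(floor(t))
    xs.append(xs[-1] + h * (-a * xs[-1] + b * x_dev))
```

When the coupling `b` is small relative to the decay rate `a`, the trajectory still decays despite the deviation, which is exactly the kind of robustness margin quantified later in the paper.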

This special structure compensates for local problems in the practical application of artificial neural networks and biomathematics. Therefore, research on this kind of system has been increasing. In [10], a sufficient condition for the existence and uniqueness of the solution of such a system was given. In [11], the control scheme of a sequence of hybrid systems was improved by using sampled-data control and impulsive control. In [12], the optimal control problem of such a system was solved. A series of notable issues has been comprehensively analysed, including out-synchronization and matrix measure approaches for stability and synchronization [13–15].

Stochastic disturbances (SDs) also inevitably appear in nonlinear systems. For instance, during the transmission of information along a channel, the signal is inevitably affected by various internal and external noise, which degrades the quality of the information transmission. In a radar tracking system, stochasticity is embodied in the irregular movement of the tracked object and a large number of external disturbances. Obviously, nonlinear models with SDs make modeling of the real world more reasonable. However, such models are very difficult to analyse and design schemes for, which has always been a popular area of research. In [16], the exponential attractor of an MJDS with SDs was obtained by IC; it is worth noting that the IC condition involved is relatively loose. In [17], the issue of boundary state output feedback control with collocated boundary measurements was explored. An adaptive neural network state-feedback controller was designed by building an appropriate Lyapunov function in [18]. Other conclusions about the systematic control design of SD systems can be found in [19, 20]. Extending these approaches to DASs is a challenging topic.
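As a toy illustration of how noise intensity enters such models, the following sketch applies the Euler–Maruyama scheme to the scalar SDE dx = -a x dt + sigma x dW. All parameters are hypothetical and unrelated to system (5); when 2a > sigma^2 the second moment E[x^2] decays, which is the mean-square stability notion used later:

```python
import math, random

# Euler-Maruyama for dx = -a*x dt + sigma*x dW (illustrative values).
random.seed(0)
a, sigma = 1.0, 0.2
h, T, x0 = 0.01, 5.0, 1.0
n = int(T / h)

def sample_path():
    x = x0
    for _ in range(n):
        dW = random.gauss(0.0, math.sqrt(h))  # Brownian increment
        x += h * (-a * x) + sigma * x * dW
    return x

# Monte Carlo estimate of the mean-square state E[x(T)^2]
ms = sum(sample_path() ** 2 for _ in range(200)) / 200
```

Because 2a = 2.0 exceeds sigma^2 = 0.04 here, the mean-square estimate at time T is far below the initial value x0^2 = 1.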

Describing neural networks with DAEs is an innovation. A neural network is a typical nonlinear dynamic system that can be used to describe, recognize, and make decisions about intelligent behaviour and to control it; the core is the cognition and simulation of intelligence. Neural networks have refreshed the concept and function of computing and given scholars a new understanding of it. Furthermore, neural networks have a wide range of applications, such as speech recognition, image recognition, intelligent robots, and even medicine [21, 22]. Koppe et al. [23] used recurrent neural networks to evaluate the dynamics of neuroimaging data. Yuksel et al. [24] predicted the ground-state binding energies of atomic nuclei with a neural network model. Researchers are mainly interested in the dynamic evolution of neural systems. In [25, 26], fuzzy logic and neural network control problems were explored. As is well known, in the early 1980s, Hopfield combined statistical mechanics with systematic learning and proposed the classical Hopfield neural network model, which can also be derived from microelectrical components. The feasibility of artificial neural networks was thus demonstrated, and qualitative research on artificial neural networks subsequently increased.

Although research on nonlinear systems is very mature, there are few conclusions about the robustness of disturbed systems. For example, the Lyapunov stability theorem, controller design, the semigroup theorem, and linear matrix inequality (LMI) methods have difficulty establishing robustness for nonlinear systems with disturbances. Hence, there is still much room for improvement on this issue.

DASs are also known as DAEs; they are generalized equations that use differential systems to represent complex systems. The algebraic equations in DAE systems are constraints that describe actual motion through mathematical theory. Owing to the application of DAEs in chemical processes [27], power systems [28], computer algebra systems [29], etc., their system control problems have received unprecedented attention. An observer in the form of a DAE is designed for nonlinear DASs and can improve system performance [30]. A sampled-data controller is set with an appropriate sampling period and scale gain to guarantee the stability of the whole closed-loop system [31]. By interpreting a DAS as a feedback interconnection between a pure differential equation and an algebraic equation, the stability analysis reduces to a small-gain-like condition [32]. A symplectic indirect method is proposed for the optimal control problem with index-1 DASs in [33].
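The semi-explicit index-1 structure can be sketched numerically as follows: at each integration step the algebraic equation is solved for the algebraic variable, after which the differential variable is advanced. The concrete functions `f` and `g` below are illustrative and do not come from the cited works:

```python
import math

# Semi-explicit index-1 DAE:  x' = f(x, y),  0 = g(x, y).
# Illustrative choice:  x' = -x + y,  0 = y + 0.5*x  =>  y = -0.5*x,
# so the reduced ODE is x' = -1.5*x with a known exact solution.
def f(x, y):
    return -x + y

def solve_algebraic(x):
    # g(x, y) = y + 0.5*x is linear in y, so y is explicit here;
    # in general a Newton iteration would be used at each step
    # (index-1 guarantees local solvability for y).
    return -0.5 * x

h, T, x = 0.001, 2.0, 1.0
for _ in range(int(T / h)):
    y = solve_algebraic(x)  # enforce the algebraic constraint
    x += h * f(x, y)        # advance the differential variable

exact = math.exp(-1.5 * T)  # exact solution of the reduced ODE
```

The numerical solution tracks the exact decay of the reduced ODE, showing that for index-1 problems the algebraic constraint can be eliminated step by step.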

The study of the stability of this class of DASs is still in its initial stage. The Lyapunov direct method and small-gain-like arguments are commonly used. However, DASs with deviation functions and SDs are difficult to study with these methods. When the interval length of the DA or the intensity of the SDs exceeds a certain range, they will destabilize a nonlinear system. Therefore, whether a system remains stable depends on its own strength: if the influence of the deviation and disturbance is small, the system will remain stable. Consequently, the interval length of the deviation and a quantitative index of the disturbance intensity are of great significance for the study of nonlinear systems. In fact, there are few direct studies on the robustness of DASs with deviations and disturbances. In this paper, the robustness of index-1 DASs is studied by using the theory of parallel differential systems. Based on a concrete model, upper bounds on the deviation function and the disturbance are given.

The remainder of this paper is laid out as follows. Section 2 describes the origin of the model; some definitions, conditions, and lemmas are given; and the proof process of these lemmas is provided. Section 3 provides an analysis of the robustness of a system. In Section 4, two numerical examples are given. Finally, Section 5 provides a brief summary and future work directions.

#### 2. Preliminaries and Model Description

##### 2.1. Notations

The following notation will be used throughout the text. represents the natural numbers, . is the space of *n*-dimensional real vectors. is in , as . Define two real-valued sequences , where , such that .

##### 2.2. Model

The model considered in this paper is introduced as follows.

As is well known, traditional Hopfield neural network models are usually expressed by differential equations,

Among them, the resistor *R* and capacitor *C* connected in parallel mimic the time constant of the biological neuron's output, and the transconductance *T* simulates the synaptic characteristics of the interconnections between neurons.

When capacitor *C* breaks down, this phenomenon can be expressed by the following static equation:

The neural network model described by a DAS is as follows:where represents the synaptic strength from neuron to neuron at time , which is continuous and bounded, are constants, and are nonlinear activation functions.

Suppose that system (3) has a unique solution for any initial values and , where , and that there is a trivial solution .

According to the above description, this paper presents a DAS model with a deviation argument:where , represents the synaptic strength from neuron to neuron at time , are constants, and are nonlinear activation functions. Here, (3) can be viewed as the undisturbed model of (4).
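To illustrate the structure of a model of this kind, the following hypothetical two-neuron sketch couples one dynamic (differential) neuron x with one static (algebraic) neuron y under a retarded piecewise-constant argument gamma(t) = floor(t). All weights, rates, and the tanh activation are illustrative choices, not the coefficients of system (4):

```python
import math

# Hypothetical Hopfield-type DAS with a deviating argument:
#     x'(t) = -a*x(t) + w11*tanh(x(floor(t))) + w12*tanh(y(t))
#      0    = -b*y(t) + w21*tanh(x(t))          (algebraic constraint)
a, b = 1.0, 1.0
w11, w12, w21 = 0.2, 0.2, 0.3   # illustrative synaptic strengths
h, T = 0.01, 10.0
xs = [1.0]                       # x(0) = 1
for k in range(int(T / h)):
    t = k * h
    x = xs[-1]
    y = (w21 / b) * math.tanh(x)       # solve the algebraic equation for y
    j = int(round(math.floor(t) / h))  # grid index of floor(t)
    xs.append(x + h * (-a * x + w11 * math.tanh(xs[j]) + w12 * math.tanh(y)))
```

With these small illustrative weights the state decays to the trivial solution, matching the intuition that a mild deviation leaves the exponential stability of the undisturbed model intact.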

Next, a DAS with deviation functions and stochastic disturbances is introduced.

Here, are the deviation functions, and represents the noise intensity.

*Remark 1. * enables the system to have both advanced and deferred characteristics. System (4) is of advanced type when ; when , (4) is of deferred type. The DAS can be approximately regarded as a mixed system because of the strong dependence of the algebraic variables on the differential variables.

##### 2.3. Preliminaries

The following eight hypotheses are presented: where , and . It is worth noting that satisfy local Lipschitz conditions. For any . There exists a constant such that . . . . .

*Remark 2. *It is worth noting that the DASs studied in this paper are all of index-1 if and only if Based on the existence and uniqueness theorem for solutions of differential equations and existing research results, if and hold, the index-1 DASs in this research have unique solutions from any initial value .

##### 2.4. Properties

*Definition 1. *Suppose that for any , there exist constants such that Then, system (4) is globally exponentially stable.

*Definition 2. *System (5) is almost surely exponentially stable if, for any , there exist , such that holds a.e.

*Definition 3. *System (5) is said to be mean-square exponentially stable if, for any , there exist , such that

#### 3. Main Results

The following lemma gives the relationship between the state and the deviation function in system (4).

Lemma 1. *Let hold and let be the solution of system (4); then holds for , where*

*Proof. *Fix . For any , for the dynamic equation, we obtain

For the static equation,

then

where

Combining (14) with (16), we obtain

According to the Gronwall–Bellman formula, we can get

In the same way, for , we obtain

Then, we have

Together with (19) and (21),

where .

For any , because of the arbitrariness of and , (9) is valid for all .

Lemma 2. *Let hold and let be the solution of system (5); then holds for , where , and*

*Proof. *For any , there exists such that , and then

Applying the Gronwall–Bellman formula to (25), we can get

In the same way, for any , we have

where

From (27), it follows that

where .

##### 3.1. System Stability

Next, the robustness of the deviation term to the global exponential stability of system (4) is discussed.

Theorem 1. *Suppose that conditions hold and that system (3) is globally exponentially stable. If , where is the only positive solution of the transcendental equation and is the only positive solution of the transcendental equation where , then system (4) is globally exponentially stable.*

*Proof. *Let be the solution to (4) and be the solution to system (3). Combining these with Lemma 1, for any , for the dynamic equation,

For the static equation,

then

The following can be obtained from (32) and (34):

Due to the properties of system (3), from (35), when , then

Applying the Gronwall–Bellman inequality to (36), for any , we have

Then

For any , by (23), , so

where

Fix ; then

System (4) has a unique solution that satisfies the following equation:

where is a positive integer; therefore, combining (41) and (26), for all , we have

From what has been discussed above, when , we have

When , the above equation is also true, so system (4) achieves global exponential stability, which completes the proof.

*Remark 3. *Theorem 1 shows that the disturbed system remains globally exponentially stable when the interval length of its deviation function is less than the upper bound, that is, .

Theorem 2. *Assume that hold, that is the solution of system (3) from , and that system (3) is globally exponentially stable. Then system (5) is mean-square exponentially stable when , where is the only positive solution of the following equation:*

is the only positive solution of the transcendental equation: where ,

*Proof. *Fix . For any , we have

When , by (48), then

For any , from (31), we have

Therefore, for any ,

where

Together with (45) and (46), when , we have . Fix ; then

According to the uniqueness of the solution of system (5),

where is a positive integer. Combining (53) and (54), for any , we have

So, there exists a positive integer such that when ,

The proof is completed.

*Remark 4. *Theorem 2 shows that as long as the deviation function and the noise intensity are within a certain range, system (5) is mean-square exponentially stable on the premise that system (3) is globally exponentially stable. This range can also be obtained by solving the transcendental equations in MATLAB.
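As an alternative to the MATLAB solver mentioned above, such a bound can be located by simple bisection once the transcendental equation is written down. The equation c·tau·e^(lam·tau) = 1 below is only a hypothetical stand-in for the actual equations of Theorems 1 and 2, whose coefficients are elided here:

```python
import math

# Bisection for the positive root of a stand-in transcendental equation
#     f(tau) = c*tau*exp(lam*tau) - 1 = 0   (c, lam illustrative).
c, lam = 2.0, 1.5

def f(tau):
    return c * tau * math.exp(lam * tau) - 1.0

lo, hi = 0.0, 1.0          # f(lo) < 0 < f(hi) brackets the unique root
for _ in range(60):        # halve the bracket 60 times
    mid = 0.5 * (lo + hi)
    if f(mid) < 0.0:
        lo = mid
    else:
        hi = mid
tau_star = 0.5 * (lo + hi)
```

Since f is strictly increasing for tau > 0, the bracketed root is the unique positive solution, mirroring the uniqueness claims in the theorems.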

*Remark 5. *Although the research in this article is based on DASs, the treatment relies heavily on the differential variables and handles the differential and algebraic variables separately. As a result, the index of the system is eventually reduced to zero. However, there are differential algebraic equations that cannot be reduced to differential equations. Therefore, a method that solves the stability problem directly in differential algebraic form still needs to be developed.

*Remark 6. *Continuous Hopfield neural networks (CHNNs) are usually used to solve optimization problems. The objective function is converted into a network energy function, and the variables of the problem correspond to the states of the network neurons. In this way, the minimum point of the energy function is transformed into an equilibrium point of the system. When the CHNN model is used for optimization, only the final state of the network evolution is needed, and there is no need to pay attention to the state trajectory along the way. That is to say, as long as the time horizon of the simulation is long enough, there is no need to care whether the state stays close to a given trajectory during the numerical simulation.
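The energy-descent mechanism described above can be sketched with a tiny discrete Hopfield network, a common simplification of the continuous model: asynchronous updates never increase the energy E(s) = -1/2 sᵀWs, so the network settles at an equilibrium regardless of the path taken. The stored pattern and one-pattern Hebbian weights below are illustrative:

```python
# Tiny discrete Hopfield network used as an optimizer (illustrative).
p = [1, 1, -1, -1]                       # stored pattern (hypothetical)
n = len(p)
# Hebbian rule for one pattern: W[i][j] = p[i]*p[j], zero diagonal.
W = [[p[i] * p[j] if i != j else 0 for j in range(n)] for i in range(n)]

def energy(s):
    # Network energy E(s) = -1/2 * s^T W s; minima are equilibria.
    return -0.5 * sum(W[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(n))

s = [-1, 1, -1, -1]                      # corrupted version of p
energies = [energy(s)]
for _ in range(3):                       # a few asynchronous sweeps
    for i in range(n):
        field = sum(W[i][j] * s[j] for j in range(n))
        s[i] = 1 if field >= 0 else -1   # sign update of one neuron
        energies.append(energy(s))
```

Each single-neuron update can only lower (or preserve) the energy, so the recorded `energies` sequence is non-increasing and the final state is an equilibrium, here the stored pattern.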

#### 4. Example

In this part, two numerical examples are given to verify the validity of the theorems.

##### 4.1. System Description

*Example 1. *Consider the following two-dimensional case of a DAS,