Abstract

This work analyzes the passivity of a class of Markov jumping reaction-diffusion neural networks subject to time-varying delays under Dirichlet and Neumann boundary conditions, respectively, in which Markov jumping is used to describe the variations among system parameters. To overcome difficulties originating from the partial differential terms, a Lyapunov–Krasovskii functional that introduces a new integral term is proposed, and some inequality techniques are adopted to obtain delay-dependent stability conditions in terms of linear matrix inequalities, which ensure that the designed neural networks satisfy the specified passivity performance. Finally, the advantages and effectiveness of the obtained results are verified through two illustrative examples.

1. Introduction

In the last few decades, research on neural networks (NNs) has received widespread attention, and abundant results have been obtained. NNs have been successfully applied in various areas, such as signal processing, voiceprint recognition, nonlinear dynamics, and image compression [1–3]. Additionally, time-varying delays (TVDs) are omnipresent in reality, so their effects on NNs cannot be ignored, mainly because the existence of time delays (TDs) may cause instability and poor performance in practical systems [4–10]. Correspondingly, omitting TDs leads to inaccurate values when NNs are modeled mathematically. Therefore, much research attention has been devoted to the stability analysis of NNs with TDs, and a great number of works have accordingly been published (see, e.g., [11] and the references therein).

It is worth noting that, from an engineering perspective, the behavior of networks depends not only on time but also on spatial position, as in liquid flow and diffusion processes. In reality, reaction-diffusion phenomena exist in many systems, such as chemical systems and NNs [12]. The node states of such systems should therefore involve both space and time variables, which can be described by partial differential equations with reaction-diffusion terms (RDTs). Thus, it is essential to introduce the RDT into NNs to achieve a better approximation of spatiotemporal behavior. To name a few examples, the synchronization problem of complex spatiotemporal networks with space-varying coefficients was addressed in [13], and the synchronization of coupled partial differential systems with space-independent coefficients was discussed in [14]. However, finding a suitable method to handle the RDT of neural networks is a difficult challenge. Although it is very tricky, this challenge has driven the development of RDNNs. Up to now, a number of notable works on the stability analysis of NNs with RDTs have been published [15, 16]. For example, the synchronization of hybrid-coupled RDNNs with TDs was discussed in [17], and the passivity of coupled RDNNs with nonlinear coupling was considered in [18]. A careful review shows that most existing methods handle the RDT within the framework of the Friedrichs inequality. In this case, a certain degree of conservatism arises, and the approach is difficult to extend to the general matrix form. To tackle this problem, appropriate matrix inequalities have been investigated to reduce the conservatism. The above analysis prompts us to consider the stability problem of RDNNs by using proper matrix inequalities.
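To make the space-and-time dependence concrete, the following toy sketch integrates a single delayed reaction-diffusion state u(x, t) on [0, L] by explicit finite differences in pure Python. All parameter names and values (L, D, a, tau, the activation tanh) are illustrative assumptions, not the paper's model.

```python
import math

# Toy 1D delayed reaction-diffusion dynamics (illustrative parameters only):
#   du/dt = D * u_xx - a * u + 0.5 * tanh(u(t - tau)),  u = 0 on the boundary.
L, D, a, tau = 1.0, 0.1, 1.0, 0.2      # domain length, diffusion, decay, delay
nx, dt = 21, 0.001                     # spatial grid points and time step
dx = L / (nx - 1)
n_delay = round(tau / dt)              # number of stored past time slices

def f(u):
    return math.tanh(u)                # a typical bounded activation function

# Constant history over [-tau, 0]; Dirichlet-type zero boundary values.
profile = [math.sin(math.pi * i * dx) for i in range(nx)]
history = [profile[:] for _ in range(n_delay + 1)]

def step(history):
    u, u_del = history[-1], history[0]     # current and tau-delayed profiles
    new = [0.0] * nx                       # boundary entries stay at zero
    for i in range(1, nx - 1):
        lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
        new[i] = u[i] + dt * (D * lap - a * u[i] + 0.5 * f(u_del[i]))
    history.append(new)
    history.pop(0)
    return new

for _ in range(2000):                      # integrate for 2 time units
    u = step(history)
```

The explicit scheme is stable here because D * dt / dx**2 = 0.04 is well below 1/2; the delayed term is handled by keeping a sliding window of past profiles.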

On the other hand, NNs often exhibit a special switching characteristic of the network mode, which makes RDNNs difficult to model directly [19, 20]. Fortunately, the Markov process has been proved capable of depicting such switching (or jumping) effectively [16, 21–25]. Hence, applying the Markov jumping model to RDNNs with TVDs, in which each state represents a jumping mode, is practical and reasonable. Recently, many works on RDNNs with Markov jumping parameters have been published [26–28]. For instance, stochastic delayed NNs with RDTs and Markov jumping parameters were studied in [29], and the state estimation of Markov jump delayed NNs with RDTs was discussed in [30]. In this context, the study of RDNNs with TVDs and Markov jumping is of significance. Besides, passivity theory is recognized as a powerful tool and plays a critical role in diverse areas such as energy management and flow control [31–35]. In its early days, passivity was widely applied to systems whose input and output variables depend only on time and rarely to systems with RDTs. Therefore, it is highly necessary to explore the passivity of RDNNs with TVDs. As far as we know, only a small number of investigators have studied the passivity analysis of Markov jumping RDNNs, which strongly motivates the present study.
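A mode-jumping process of the kind described above can be sampled directly from a transition-rate matrix. The sketch below uses only the standard library; the two-mode rate matrix Q and all names are illustrative assumptions, not values from the paper.

```python
import random

# Illustrative transition-rate matrix Q for a two-mode Markov process:
# rows sum to zero and off-diagonal entries are nonnegative.
Q = [[-0.3, 0.3],
     [0.5, -0.5]]

def simulate_modes(Q, t_end, mode=0, seed=42):
    """Sample one trajectory of a continuous-time Markov chain on [0, t_end].

    Returns a list of (jump_time, mode) pairs, starting at (0.0, mode).
    """
    rng = random.Random(seed)
    t, path = 0.0, [(0.0, mode)]
    while True:
        rate = -Q[mode][mode]             # total exit rate of the current mode
        t += rng.expovariate(rate)        # exponentially distributed sojourn
        if t >= t_end:
            return path
        # Choose the next mode proportionally to the off-diagonal rates.
        u, acc = rng.random() * rate, 0.0
        for j, q in enumerate(Q[mode]):
            if j == mode:
                continue
            acc += q
            if u <= acc:
                mode = j
                break
        path.append((t, mode))

path = simulate_modes(Q, t_end=50.0)
```

Each entry of `path` marks a jump instant and the mode entered there, so the piecewise-constant trajectory r(t) can be read off directly.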

Based on the above discussions, this paper concentrates on the stability and passivity analysis of Markov jumping RDNNs with TVDs. In addition, the main contributions include the following three sides:(1)Compared with some existing literature [11, 36], in this paper, Jensen’s inequality and the Wirtinger-type integral inequality, a new inequality dealing with the reaction-diffusion terms and the reciprocally convex method, are introduced to derive criteria based on linear matrix inequalities (LMIs), which is beneficial for reducing the conservativeness of the results.(2)Furthermore, by using some inequality techniques, several passive criteria for RDNNs with TVDs and Markov jumping parameters are obtained, and the passivity of Markov jumping RDNNs with TVDs is analyzed.(3)The Markov jumping model is employed to the research of RDNNs, and the passivity of Markov jumping RDNNs is investigated under the different boundary conditions (BCs), which determines the common internal stability of Markov jumping RDNNs.

2. Preliminaries

Consider the RDNNs with Markov jumping parameters and TVDs as follows: where , denotes the state variable at time and in space ; denotes the transmission diffusion coefficient of the th neuron; is the neuron charging time constant; and denote the connection weight coefficients of the th neuron on the th neuron; and represent the neuron activation functions of the th neuron at time ; is the control input; is bounded and continuous on , ; indicates the transmission delay and satisfies and , in which and are given constants; is a right-continuous Markov process taking values in a finite state space with the transition probability matrix given by where , and for is the transition rate from mode at time to mode at time .
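For reference, the transition probabilities of such a right-continuous Markov process are conventionally written in the following standard form from the Markov jump systems literature (this is the generic form matching the verbal description above, with $\Delta > 0$ a small time increment and $q_{ij}$ the transition rates; it is not reproduced from the paper's own display):

```latex
\Pr\{r(t+\Delta)=j \mid r(t)=i\} =
\begin{cases}
q_{ij}\,\Delta + o(\Delta), & i \neq j,\\[2pt]
1 + q_{ii}\,\Delta + o(\Delta), & i = j,
\end{cases}
\qquad q_{ii} = -\sum_{j \neq i} q_{ij}.
```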

In our paper, two kinds of BCs are introduced:

(1) Dirichlet BCs:

(2) Neumann BCs:
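In their standard zero-boundary form (stated here for orientation; the paper's own displays use its specific state notation), the two kinds of BCs read:

```latex
\text{Dirichlet BCs:}\quad u(x,t) = 0, \quad x \in \partial\Omega, \ t \ge 0;
\qquad
\text{Neumann BCs:}\quad \frac{\partial u(x,t)}{\partial \nu} = 0, \quad x \in \partial\Omega, \ t \ge 0,
```

where $\nu$ denotes the outward unit normal on the boundary $\partial\Omega$.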

Besides, and are defined as the activation functions, which are assumed to satisfy the following condition.

Assumption 1. (see [30]). The activation functions and satisfy the bounded conditions and there exist positive matrices and such thatwhere , , and , for any , and .
In this paper, for each , system (1) can be rewritten in the following vector–matrix form: where with The output vector of system (6) is chosen as where , are known constant matrices.

Definition 1. (see [37]). For , if the input and the output satisfy the following inequality:in which , , , and , and , where is the storage function, then network (6) is said to be input-strictly passive.
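For orientation, input-strict passivity is commonly expressed through a dissipation inequality of the following generic form, where $V$ is the storage function, $u$ and $y$ are the input and output, and $\gamma > 0$ is the passivity margin (this is the standard textbook form, not the paper's exact display):

```latex
V(t_p) - V(0) \;\le\; \int_{0}^{t_p} y^{T}(s)\,u(s)\,\mathrm{d}s
\;-\; \gamma \int_{0}^{t_p} u^{T}(s)\,u(s)\,\mathrm{d}s, \qquad t_p \ge 0.
```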

Lemma 1 (see [38]). For the compact set and system (6), if satisfies , that is, it vanishes on the boundary of , then
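A representative scalar, one-dimensional instance of this type of inequality, for a function $u$ vanishing on the boundary of $\Omega = (0, L)$, is the Poincaré/Wirtinger-type bound (given here for intuition; the paper's lemma is stated in its general matrix form):

```latex
\int_{0}^{L} u^{2}(x)\,\mathrm{d}x \;\le\; \frac{L^{2}}{\pi^{2}}
\int_{0}^{L} \left(\frac{\partial u(x)}{\partial x}\right)^{\!2} \mathrm{d}x.
```

The constant $L^{2}/\pi^{2}$ is sharp for Dirichlet boundary values, which is what allows such bounds to absorb the reaction-diffusion terms with less conservatism.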

Remark 1. It is worth noting that, compared with other methods of coping with the RDT, such as Lemma 1 in [39], utilizing (11) better reduces conservativeness, which is significant for deriving our main results. The proof is omitted here but can be found in [38].

3. Main Results

In this section, we consider dynamical NNs consisting of identical nodes with spatial diffusion, where each node of the Markov jumping RDNNs is -dimensional. A sufficient condition ensuring the asymptotic stability of Markov jumping RDNNs is derived, and passivity criteria for RDNNs (6) under the different BCs are established.

Theorem 1. Given scalars and , the solution of RDNNs (6) is input-strictly passive under the Dirichlet boundary condition if there exist some diagonal matrices , , , and , real positive matrices , , , , , and , and matrix of appropriate dimensions, such that , the following inequalities hold:where

Proof. We first choose the following Lyapunov–Krasovskii functional: where Then, calculating the time derivative of , one has From (11), we can obtain For diagonal matrices , from (6), we find that On the basis of Lemma 3 in [37], we obtain From (24) and (25), one has By Jensen’s inequality, we can obtain From conditions (21) and (27), it follows that According to the Wirtinger-type integral inequality in [40] and Lemma 5 of [41], we have By (23) and (29), one has Furthermore, for positive diagonal matrices and , from (5), one obtains By (15)–(31), for each , we get which, together with (13), implies in which .
From (33), one can see that and . This completes the proof.

Remark 2. It is noted that the method of Lemma 2 in [42] for dealing with the integrals reduces the conservatism of the system to a certain degree compared with the general Jensen’s inequality. Therefore, this technique is adopted in our paper to estimate the integral:
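For comparison, the general (first-order) Jensen inequality that serves as the baseline here reads, for any matrix $R > 0$ and integrable vector function $x(\cdot)$ on $[a, b]$:

```latex
-\,(b-a)\int_{a}^{b} x^{T}(s)\,R\,x(s)\,\mathrm{d}s
\;\le\;
-\left(\int_{a}^{b} x(s)\,\mathrm{d}s\right)^{\!T} R
 \left(\int_{a}^{b} x(s)\,\mathrm{d}s\right).
```

Refined bounds of the Wirtinger type tighten this estimate by adding correction terms built from weighted integrals of $x$, which is the source of the reduced conservatism.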

Remark 3. Compared with the traditional Wirtinger-based integral inequality in [42], Lemma 2 of [40] is an improved version of the Wirtinger-based integral inequality that further reduces conservativeness. Some delay-dependent stability criteria are then obtained by applying the abovementioned inequalities and are computed using standard LMI techniques.

Theorem 2. Given scalars and , the solution of RDNNs (6) is input-strictly passive under the Neumann boundary conditions if there exist some diagonal matrices , , , and , real positive matrices , , , , , and , and matrix of appropriate size, such that , the following inequalities hold:whereand , , , , , and are defined the same as in Theorem 1.

Proof. Consider the same Lyapunov–Krasovskii functional as in (15). Calculating its derivative along the trajectory of (6) under the Neumann BCs (4), we obtain Then, following the same procedure as in Theorem 1, the desired results follow immediately.

4. Numerical Examples

In this section, two numerical examples are provided to illustrate the effectiveness of the main results derived above.

Example 1. Consider continuous-time Markov jump RDNNs with TVDs under the Dirichlet BCs as follows: in which , and some system matrices of system (6), containing two modes, are given as follows:

Mode 1:

Mode 2:

where and both satisfy Assumption 1. Moreover, we set , , , and . The transition probability matrix of the Markov process is taken as By solving the LMIs (12) and (13) in Theorem 1, a feasible solution is readily obtained. For the sake of simplicity, only some of the solution matrices are listed: Therefore, one can conclude that the Markov jump RDNNs (6) with TVDs under Dirichlet BCs are input-strictly passive according to Definition 1. Here, we assume the input function and . Then, the state trajectories of variables and are plotted in Figures 1 and 2, respectively. On the other hand, Figures 3 and 4 describe the state trajectories of variables and , respectively, where we choose and , which means that system (40) is globally stable under zero input.

Example 2. Consider the RDNNs with Markov jumping parameters and TVDs under Neumann BCs as follows: in which the remaining values are the same as in Example 1, and some system matrices containing two modes are given as follows:

Mode 1:

Mode 2:

In this case, we set and . Afterwards, the maximum allowable upper bound is obtained by solving the LMIs (36) and (37) in Theorem 2. It follows that the NNs (6) with RDTs and Markov jumping parameters under Neumann BCs are also input-strictly passive. Here, only some of the feasible solutions are shown below: Figures 5 and 6 depict the state trajectories of variables and under Neumann BCs, respectively, where the input function is taken as and . In the other case, consider the input function ; the numerical simulations of system (45) are presented in Figures 7 and 8, respectively.
Furthermore, from Tables 1 and 2, we find that the upper bound of decreases as increases in the interval under both the Dirichlet BCs and the Neumann BCs, which means that increasing has a negative impact on . Moreover, comparing the different cases in Table 3, it is easy to see that the maximum allowed value of the time-varying delay in Theorem 1 is larger than the delay upper bound in Theorem 2, from which two observations follow:

(i) The assumptions on the BCs are limited to the Neumann zero-boundary condition. The main reason is that those results are obtained without considering the influence of the RDT on the NNs, which increases the conservatism of the system.

(ii) Correspondingly, to avoid this problem, the RDNN model with Dirichlet BCs introduces a new inequality, and the results take the influence of the diffusion terms into account. Thus, the conservatism of the results obtained in this paper is reduced.

5. Conclusions

In this paper, we have analyzed the passivity problem for Markov jumping reaction-diffusion neural networks, in which the time-varying delay phenomenon and reaction-diffusion terms have been taken into consideration. By introducing several novel Lyapunov–Krasovskii functionals and employing a new inequality, some sufficient conditions expressed as linear matrix inequalities have been established, which guarantee the passivity of the considered reaction-diffusion neural networks. Two numerical examples have been provided to demonstrate the effectiveness of the proposed approaches. In future work, we will try to apply the above results to other practical systems.

The notations used throughout this paper are standard, which are presented in the following.

Notations

:Diagonal matrix
:Transpose of a matrix
:Set of nonnegative integers
:Set of dimensional real matrices
:Symmetric block of a symmetric matrix
:Identity matrix with appropriate dimensions
:Space of square-summable infinite sequences
:Positive (positive semi-) definite matrix
:A compact set in the vector space with smooth boundary .

Data Availability

All data sets used in this study are hypothetical.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Key University Science Research Project of Anhui Province (Grant no. KJ2017A064) and the National Innovation and Entrepreneurship Training Program for College Students, China.