
Abstract and Applied Analysis

Volume 2013 (2013), Article ID 169214, 12 pages

http://dx.doi.org/10.1155/2013/169214

## Convergence and Stability of the Split-Step θ-Milstein Method for Stochastic Delay Hopfield Neural Networks

^{1}Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
^{2}Division of Computational Science, E-Institute of Shanghai Universities, 100 Guilin Road, Shanghai 200234, China
^{3}Department of Mathematical Sciences, Faculty of Science and Engineering, Doshisha University, Kyoto 610-0394, Japan

Received 8 December 2012; Accepted 26 February 2013

Academic Editor: Chengming Huang

Copyright © 2013 Qian Guo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A new splitting method designed for the numerical solution of stochastic delay Hopfield neural networks is introduced and analysed. Under Lipschitz and linear growth conditions, this split-step *θ*-Milstein method is proved to have strong convergence of order 1 in the mean-square sense, which is higher than that of the existing split-step *θ*-method. Further, the mean-square stability of the proposed method is investigated. Numerical experiments and comparisons with existing methods illustrate the computational efficiency of our method.

#### 1. Introduction

Hopfield neural networks, which originated with Hopfield in the 1980s [1], have been successfully applied in many areas such as combinatorial optimization [2, 3], signal processing [4], and pattern recognition [5, 6]. In the last decade, neural networks subject to signal transmission delay and stochastic perturbations, also known as stochastic delay Hopfield neural networks (SDHNNs), have gained considerable research interest (see, e.g., [7–9] and the references therein). So far, most work on SDHNNs has focused on the stability analysis of the analytical solutions, including mean-square exponential stability [7], global asymptotic stability [9], and so forth. However, simulation is an important tool for exploring the dynamics of various kinds of Hopfield neural networks (HNNs) (see, e.g., [10] and the references therein), and parameter estimation in dynamical systems based on HNNs (see, e.g., [11]) also requires solving HNNs numerically. Moreover, because most SDHNNs do not have explicit solutions, the numerical analysis of SDHNNs has recently attracted some initial research attention. For example, Li et al. [12] investigated the exponential stability of the Euler method and the semi-implicit Euler method for SDHNNs. Rathinasamy [13] introduced a split-step θ-method (SST) for SDHNNs and analysed its mean-square stability, although the SST is given only for the case of commensurable delays. To the best of our knowledge, these authors mainly discussed the stability of numerical solutions for stochastic Hopfield neural networks with discrete time delays but omitted the details of the convergence analysis.

The split-step Euler method for stochastic differential equations (SDEs) was proposed by Higham et al. [14]; subsequently, splitting Euler-type algorithms were derived for stochastic delay differential equations (SDDEs) [15, 16]. In this paper, we present a splitting method with higher-order convergence for SDHNNs. Specifically, we carry out a detailed convergence analysis and compare the stability of our method with the split-step θ-method of [13].

The rest of this paper is organized as follows. In Section 2, we recall the stochastic delay neural network model and present a split-step θ-Milstein method. In Section 3, we derive the convergence results of the split-step θ-Milstein method for the model. In Section 4, the numerical stability analysis is performed. In Section 5, some numerical examples are given to confirm the theory. In the last section, we draw some conclusions.

#### 2. Model and the Split-Step θ-Milstein Method

##### 2.1. Model

Consider the stochastic delay Hopfield neural networks of the form where is the state vector associated with the neurons, , the diagonal matrix has positive entries, and represents the rate at which the unit resets its potential to the resting state in isolation when disconnected from the network and the external stochastic perturbation. The matrices and are the connection weight matrix and the discretely delayed connection weight matrix, respectively. Furthermore, the vector functions and denote the neuron activation functions, with the conditions for all positive .

On the initial segment the state vector satisfies , where is a given function in and stands for .

Moreover, is a diagonal matrix with and is an -dimensional Wiener process defined on the complete probability space with a filtration satisfying the usual conditions (i.e., it is increasing and right continuous while contains all -null sets).

Let and be functions in and be in . Here denotes the family of continuously -times differentiable real-valued function defined on , while denotes the family of all real-valued measurable -adapted stochastic processes such that .

##### 2.2. Numerical Scheme

We define the mesh with a uniform step-size on the interval ; that is, and .

Let denote the increment of the Wiener process. The split-step θ-Milstein (SSTM) scheme for the solution of SDHNNs (1) is given by where the merging parameter satisfies , is an approximation to , and for . Moreover, we adopt the symbols = and = , the Hadamard product means , and . When , we define .

Then scheme (2) can be written in an equivalent form. Substituting (4a) into (4b), we obtain a stochastic explicit single-step method with an increment function ; that is,
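The displayed formulas of scheme (2) did not survive the conversion of this article, so the sketch below illustrates only the general shape of a split-step θ-Milstein step for a scalar delay equation dx(t) = f(x(t), x(t−τ)) dt + g(x(t), x(t−τ)) dW(t); the stage/update structure and all names here are an illustrative reconstruction following the split-step pattern of Higham et al. [14], not a transcription of (2).

```python
def sstm_step(x, x_del, h, dW, theta, f, g, dgdx, iters=50):
    """One illustrative split-step theta-Milstein step (scalar case).

    x, x_del : current state and delayed state x(t - tau)
    f, g     : drift and diffusion coefficients f(x, x_del), g(x, x_del)
    dgdx     : partial derivative of g with respect to its first argument
    """
    # Implicit stage: solve y = x + theta*h*f(y, x_del) by fixed-point iteration
    y = x
    for _ in range(iters):
        y = x + theta * h * f(y, x_del)
    # Explicit update: remaining drift, diffusion, and the Milstein
    # correction term (1/2) g g_x (dW^2 - h) that lifts the strong
    # order from 1/2 to 1
    milstein = 0.5 * g(y, x_del) * dgdx(y, x_del) * (dW * dW - h)
    return y + (1.0 - theta) * h * f(y, x_del) + g(y, x_del) * dW + milstein
```

For θ = 0 the stage is explicit and the step reduces to the ordinary Milstein method; for θ = 1 the full drift is treated implicitly, which is what typically improves the stability behaviour.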

#### 3. Order and Convergence Results for SSTM

In this section we consider the global error of SSTM (2) as applied to SDHNNs (1) with the initial condition. In what follows, denotes the Euclidean norm in .

For the convergence analysis we make the following standard assumptions.

*Assumption 1. *Assume that , , , and satisfy the Lipschitz condition
for every and the linear growth condition
where is a positive constant and is the maximal operator. We also define as .

We also need the following assumption on the initial condition.

*Assumption 2. * Assume that the initial function is Lipschitz continuous from to , that is, there is a positive constant satisfying

Now we give the definition of local and global errors.

*Definition 1. *Let denote the exact solution of (1). The local approximate solution starting from , computed by SSTM (2), is given by
where denotes the evaluation of (3) using the exact solution; this yields the difference
Then the local error of SSTM is defined by , whereas its global error is , where .

*Definition 2. *If the global error satisfies
with positive constants and and a finite , then we say that the order of mean-square convergence of the method is . Here denotes the expectation with respect to .

We then give the following lemmas that are useful in deriving the convergence results.

Lemma 3 (see also [17]). *Let the linear growth condition (7) hold, and assume that the initial function is -measurable and right continuous; put . For any given positive , there exist positive numbers and such that the solution of (1) satisfies
where the constant is independent of the step-size but dependent on . Moreover, for any , , the estimate
holds.*

The estimate (13) then follows from (12) by the Jensen inequality.

Lemma 4. *For , one has
Here the constant is independent of the step-size .*

*Proof. *If and , under Assumption 2 we have
If and , with (13) we obtain
If and , we assume without loss of generality. Hence,
by using inequality (13).

Lemma 5. *Let denote the exact solution of (1), and assume conditions (6) and (7). Then for the local intermediate value , one has the estimate.*

*Proof. *The difference between the components of and leads to
whose expectation, together with , , the Lipschitz condition (6), and the estimation (12), gives

Now we discuss local error estimates.

Theorem 6. *Under Assumptions 1 and 2 and the conditions of Lemma 3, there exist positive constants and such that
as .*

*Proof. *The Itô integral form of the component of (1) on implies
By utilizing the previous identity, the component of the difference introduced in Definition 1 can be calculated as
where and .

Taking expectations on both sides of (25),
by (24) and the Itô formula, where := . Under the conditions of this theorem, we have by (14), the Jensen inequality, the triangle inequality, and properties of the definite integral. Then we have from the relation between and .

Now we prove (23). By the Itô formula,
From (25) and (27), we have
Finally, it is easy to prove .

Thanks to Theorem 1 in [18], we can conclude that the mean-square order of the global error of the SSTM is 1.

#### 4. Stability of SSTM

We are concerned with the stability of the SSTM solution. Since (1) has an equilibrium solution , we discuss whether the SSTM solution with a positive step-size can attain similar stability as goes to infinity. First we give a sufficient condition for the exponential stability of the equilibrium solution in the mean-square sense. References [13, 19] give the condition as for every .

*Definition 7. *A numerical method is said to be mean-square stable (MS-stable) if there exists an such that any application of the method to problem (1) generates numerical approximations which satisfy
for all .
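MS-stability in the sense of Definition 7 can also be probed empirically by Monte Carlo estimation of the second moment at a fixed time. The sketch below does this for a scalar linear test SDE dX = μX dt + σX dW discretized by Euler-Maruyama, used here only as a stand-in (the paper's delayed test systems are not reproduced); μ, σ, and all numerical values are illustrative assumptions.

```python
import math
import random

def second_moment_at_T(mu, sigma, x0, h, T, n_paths, seed=0):
    """Sample estimate of E|X(T)|^2 for dX = mu*X dt + sigma*X dW,
    simulated by Euler-Maruyama with step-size h over n_paths paths."""
    rng = random.Random(seed)
    n_steps = int(round(T / h))
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            dW = rng.gauss(0.0, math.sqrt(h))
            x += mu * x * h + sigma * x * dW
        total += x * x
    return total / n_paths

# For 2*mu + sigma**2 < 0 the equilibrium x = 0 is mean-square stable,
# and for small enough h the sample moment should decay toward zero.
```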

Theorem 8. *Assume that (6), (30), and are satisfied; then the SSTM (2) is mean-square stable if , where
Here is the smallest positive root of the cubic equation in given by
where the coefficients are
for .*

*Proof. *Squaring both sides of (4b) and (4a), we have
Taking expectations on both sides of (35), we get
Together with (36), we have
Thus, we obtain
where
Note that the assumptions of the theorem imply the nonnegativity of .

Obviously, when , if the inequality holds, which is equivalent to the inequality .

Furthermore, it is easy to prove and by virtue of (30). By Vieta's formulas, the product of the three roots of (33) satisfies . This means that (33) has at least one positive root; let denote the smallest positive root of the equation. Moreover, note that the right-hand-side polynomial of (33) is negative at the origin. This completes the proof.
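The key quantity in Theorem 8, the smallest positive root of the cubic (33), can be located numerically. Since the coefficients of (33) are not reproduced here, the sketch below uses a hypothetical cubic with the same qualitative property the proof establishes (negative at the origin, hence at least one positive root), scanning for the first sign change and refining it by bisection.

```python
def smallest_positive_root(p, upper=100.0, steps=100_000):
    """Return the smallest positive root of p on (0, upper], assuming
    p(0) < 0, by scanning for the first sign change and bisecting."""
    dt = upper / steps
    prev_t, prev_v = 0.0, p(0.0)
    for i in range(1, steps + 1):
        t = i * dt
        v = p(t)
        if prev_v < 0.0 <= v:  # sign change bracketed in (prev_t, t]
            lo, hi = prev_t, t
            for _ in range(80):  # bisection to near machine precision
                mid = 0.5 * (lo + hi)
                if p(mid) < 0.0:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)
        prev_t, prev_v = t, v
    return None  # no sign change found on (0, upper]

# Hypothetical stand-in for (33): roots at 1, 2, 3 and negative at 0.
def cubic(x):
    return (x - 1.0) * (x - 2.0) * (x - 3.0)
```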

#### 5. Numerical Results

Now, we apply the proposed SSTM method to two test cases of SDHNNs and compare its performance with the split-step θ-method of [13], which has strong convergence order 0.5.

The mean-square error of the numerical approximations at time versus the step-size is depicted in log-log diagrams, where . Here stands for the value of the exact solution of (1) at time and is its numerical approximation along the th sample path . We compute a reference solution using the split-step θ-Milstein method (2) with a small step-size , and we will call this the *“exact solution.”*
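Reading the error measure above as the root of the averaged squared endpoint deviation over M sample paths (our interpretation, since the formula itself did not survive extraction), it can be computed with a small helper; the function name and the tuple-based state representation are our own conventions.

```python
import math

def ms_error(ref_vals, num_vals):
    """Sample mean-square error sqrt((1/M) * sum_j |x_ref_j - x_num_j|^2),
    where each entry is the d-dimensional state at time T on one path."""
    M = len(ref_vals)
    total = 0.0
    for xr, xn in zip(ref_vals, num_vals):
        total += sum((a - b) ** 2 for a, b in zip(xr, xn))  # squared Euclidean norm
    return math.sqrt(total / M)
```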

*Example 9. *Consider the following two-dimensional stochastic delay Hopfield neural network of the form
on with the initial condition and .

*Case 1. * Let ,

*Case 2. * Let ,

In Figure 1, SSTM is applied with 7 different step-sizes: for . Two pairs of time delays are set: and . The first pair has the common factor , whereas the second pair is incommensurable by . The computational errors versus step-sizes are plotted on a log-log scale, with reference lines of slope added. The figure illustrates that SSTM raises the strong order of the split-step θ-method [13] to at least 1 for SDHNNs.
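The slope read off Figure 1 can be computed rather than eyeballed: a least-squares fit of log ε against log h estimates the observed order. The helper below is a generic sketch, not code from the paper.

```python
import math

def fit_order(hs, errs):
    """Least-squares slope of log(err) versus log(h); for a method of
    strong order p with err ~ C*h**p, the slope estimates p."""
    xs = [math.log(h) for h in hs]
    ys = [math.log(e) for e in errs]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den
```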

Next, Table 1 compares the stability intervals of the SST and the SSTM for (43). The two sets of intervals in the table are calculated via Theorem 8 of this paper and Theorem 5.1 of [13], respectively. It is easy to see that the stability intervals of the two methods are similar.

Theorem 5.1 in [13] and Theorem 8 in this paper give only sufficient conditions for mean-square stability; therefore the stability intervals provided by these theorems are only subsets of the true ones. To examine this, we calculated the sample moments of the approximate solution and plotted them against time . Here the sample moment means for the numerical solution approximating along the th sample path . Figures 2, 3, 4, 5, and 6 depict the results of SST and SSTM on a log-scaled vertical axis. These figures give a rough estimate of the stability interval in each case.

#### 6. Concluding Remarks

We have introduced the split-step θ-Milstein method (SSTM), which exhibits a higher strong convergence rate than the split-step θ-method (SST; see [13]) for stochastic delay Hopfield neural networks; moreover, the proposed scheme can handle incommensurable time delays, which were not considered in [13]. We give a proof of the convergence results, which has generally been omitted in previous works on this subject. Comparing the stability intervals of step-size for the SST and the SSTM on a test example, we find that they exhibit similar mean-square stability.

In this paper, we have found a delay-independent sufficient condition for the mean-square stability of the split-step θ-Milstein method applied to nonlinear stochastic delay Hopfield neural networks. Furthermore, Figure 6 suggests that the value of , the right end-point of the stability interval, given by Theorem 5.1 in [13] and Theorem 8 in this paper is much smaller than the true value when is close to unity. In this case, other techniques are needed for the stability analysis of this kind of stochastic delay differential system; to the best of our knowledge, the works [20, 21] represent good attempts. On the other hand, for stochastic delay differential equations, several other types of stability have been successfully analysed for Euler-type schemes, for example, mean-square exponential stability [12], delay-dependent stability [22], delay-dependent exponential stability [23], and almost sure exponential stability [24]. For Milstein-type schemes, in view of the more involved derivations, these issues remain challenging topics for future research.

#### Acknowledgment

The authors would like to thank Dr. E. Buckwar and Dr. A. Rathinasamy for their valuable suggestions. The authors also thank the editor and the anonymous referees, whose careful reading and constructive feedback have improved this work. The first author is partially supported by E-Institutes of Shanghai Municipal Education Commission (no. E03004), National Natural Science Foundation of China (no. 10901106), Natural Science Foundation of Shanghai (no. 09ZR1423200), and Innovation Program of Shanghai Municipal Education Commission (no. 09YZ150). The third author is partially supported by the Grants-in-Aid for Scientific Research (no. 24540154) supplied by Japan Society for the Promotion of Science.

#### References

- J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” *Proceedings of the National Academy of Sciences of the United States of America*, vol. 79, no. 8, pp. 2554–2558, 1982.
- J. J. Hopfield and D. W. Tank, “Neural computation of decisions in optimization problems,” *Biological Cybernetics*, vol. 52, no. 3, pp. 141–152, 1985.
- G. Joya, M. A. Atencia, and F. Sandoval, “Hopfield neural networks for optimization: study of the different dynamics,” *Neurocomputing*, vol. 43, no. 1–4, pp. 219–237, 2002.
- Y. Sun, “Hopfield neural network based algorithms for image restoration and reconstruction. I. Algorithms and simulations,” *IEEE Transactions on Signal Processing*, vol. 48, no. 7, pp. 2105–2118, 2000.
- S. Young, P. Scott, and N. Nasrabadi, “Object recognition using multilayer Hopfield neural network,” *IEEE Transactions on Image Processing*, vol. 6, no. 3, pp. 357–372, 1997.
- G. Pajares, “A Hopfield neural network for image change detection,” *IEEE Transactions on Neural Networks*, vol. 17, no. 5, pp. 1250–1264, 2006.
- L. Wan and J. H. Sun, “Mean square exponential stability of stochastic delayed Hopfield neural networks,” *Physics Letters A*, vol. 343, no. 4, pp. 306–318, 2005.
- Z. Wang, H. Shu, J. Fang, and X. Liu, “Robust stability for stochastic Hopfield neural networks with time delays,” *Nonlinear Analysis*, vol. 7, no. 5, pp. 1119–1128, 2006.
- Z. D. Wang, Y. R. Liu, K. Fraser, and X. H. Liu, “Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays,” *Physics Letters A*, vol. 354, no. 4, pp. 288–297, 2006.
- J. Šíma and P. Orponen, “General-purpose computation with neural networks: a survey of complexity theoretic results,” *Neural Computation*, vol. 15, no. 12, pp. 2727–2778, 2003.
- J. R. Raol and H. Madhuranath, “Neural network architectures for parameter estimation of dynamical systems,” *IEE Proceedings*, vol. 143, no. 4, pp. 387–394, 1996.
- R. H. Li, W. Pang, and P. Leung, “Exponential stability of numerical solutions to stochastic delay Hopfield neural networks,” *Neurocomputing*, vol. 73, no. 4–6, pp. 920–926, 2010.
- A. Rathinasamy, “The split-step θ-methods for stochastic delay Hopfield neural networks,” *Applied Mathematical Modelling*, vol. 36, no. 8, pp. 3477–3485, 2012.
- D. J. Higham, X. Mao, and A. M. Stuart, “Strong convergence of Euler-type methods for nonlinear stochastic differential equations,” *SIAM Journal on Numerical Analysis*, vol. 40, no. 3, pp. 1041–1063, 2002.
- H. Zhang, S. Gan, and L. Hu, “The split-step backward Euler method for linear stochastic delay differential equations,” *Journal of Computational and Applied Mathematics*, vol. 225, no. 2, pp. 558–568, 2009.
- X. Wang and S. Gan, “The improved split-step backward Euler method for stochastic differential delay equations,” *International Journal of Computer Mathematics*, vol. 88, no. 11, pp. 2359–2378, 2011.
- X. R. Mao, *Stochastic Differential Equations and Applications*, Horwood, Chichester, UK, 2nd edition, 2007.
- E. Buckwar, “One-step approximations for stochastic functional differential equations,” *Applied Numerical Mathematics*, vol. 56, no. 5, pp. 667–681, 2006.
- Q. Zhou and L. Wan, “Exponential stability of stochastic delayed Hopfield neural networks,” *Applied Mathematics and Computation*, vol. 199, no. 1, pp. 84–89, 2008.
- Y. Saito and T. Mitsui, “Mean-square stability of numerical schemes for stochastic differential systems,” *Vietnam Journal of Mathematics*, vol. 30, pp. 551–560, 2002.
- E. Buckwar and C. Kelly, “Towards a systematic linear stability analysis of numerical methods for systems of stochastic differential equations,” *SIAM Journal on Numerical Analysis*, vol. 48, no. 1, pp. 298–321, 2010.
- C. Huang, S. Gan, and D. Wang, “Delay-dependent stability analysis of numerical methods for stochastic delay differential equations,” *Journal of Computational and Applied Mathematics*, vol. 236, no. 14, pp. 3514–3527, 2012.
- X. Qu and C. Huang, “Delay-dependent exponential stability of the backward Euler method for nonlinear stochastic delay differential equations,” *International Journal of Computer Mathematics*, vol. 89, no. 8, pp. 1039–1050, 2012.
- F. Wu, X. Mao, and L. Szpruch, “Almost sure exponential stability of numerical solutions for stochastic delay differential equations,” *Numerische Mathematik*, vol. 115, no. 4, pp. 681–697, 2010.