Abstract

This paper is devoted to the stability and convergence analysis of additive Runge-Kutta methods with Lagrangian interpolation (ARKLMs) for the numerical solution of multidelay-integro-differential equations (MDIDEs). GDN-stability and D-convergence are introduced and proved. It is shown that strong algebraic stability implies D-convergence, and that DA-, DAS-, and ASI-stability together imply GDN-stability. A numerical example is given to illustrate the theoretical results.

1. Introduction

Delay differential equations arise in a variety of fields such as biology, economics, control theory, and electrodynamics (see, e.g., [1-5]). When considering the applicability of numerical methods for the solution of DDEs, it is necessary to analyze the stability of the numerical methods. In the last three decades, many works have dealt with these problems (see, e.g., [6]). For the case of nonlinear delay differential equations, this kind of methodology was first introduced by Torelli [7] and then developed in [8-12].

In this paper, we consider the following nonlinear multidelay-integro-differential equations (MDIDEs) with $m$ delays:
\[
\begin{aligned}
y'(t) &= \sum_{v=1}^{m} f^{[v]}\Big(t,\,y(t),\,y(t-\tau_v),\,\int_{t-\tau_v}^{t} g^{[v]}(t,s,y(s))\,ds\Big), & t&\in[t_0,T],\\
y(t) &= \varphi(t), & t&\in[t_0-\tau,\,t_0],
\end{aligned}
\tag{1.1}
\]
where $\tau_1\le\tau_2\le\cdots\le\tau_m=\tau$, $f^{[v]}\colon[t_0,T]\times\mathbb{C}^N\times\mathbb{C}^N\times\mathbb{C}^N\to\mathbb{C}^N$, $g^{[v]}\colon D\times\mathbb{C}^N\to\mathbb{C}^N$ with $D=\{(t,s)\colon t_0-\tau\le s\le t\le T\}$, $v=1,2,\dots,m$, and $\varphi\colon[t_0-\tau,t_0]\to\mathbb{C}^N$ are continuous functions such that (1.1) has a unique solution. Moreover, we assume that there exist an inner product $\langle\cdot,\cdot\rangle$ and the induced norm $\|\cdot\|$ such that
\[
\mathrm{Re}\big\langle f^{[v]}(t,y_1,u_1,w_1)-f^{[v]}(t,y_2,u_2,w_2),\,y_1-y_2\big\rangle
\le \alpha_v\|y_1-y_2\|^2+\beta_v\|u_1-u_2\|^2+\sigma_v\|w_1-w_2\|^2,
\tag{1.2}
\]
\[
\big\|f^{[v]}(t,y,u_1,w)-f^{[v]}(t,y,u_2,w)\big\|\le r_v\|u_1-u_2\|,
\tag{1.3}
\]
\[
\big\|g^{[v]}(t,s,w_1)-g^{[v]}(t,s,w_2)\big\|\le \tilde r_v\|w_1-w_2\|,\qquad (t,s)\in D,
\tag{1.4}
\]
for all $t\in[t_0,T]$ and all $y,y_1,y_2,u,u_1,u_2,w,w_1,w_2\in\mathbb{C}^N$, $v=1,2,\dots,m$, where $-\alpha_v,\beta_v,\sigma_v,r_v,\tilde r_v$ are nonnegative constants. Throughout this paper, we assume that problem (1.1) has a unique exact solution $y(t)$. Space discretization of some time-dependent delay partial differential equations gives rise to such delay differential equations containing additive terms with different stiffness properties. In these situations, additive Runge-Kutta (ARK) methods are used; for recent work on ARK methods, see [13, 14]. For the additive MDIDEs (1.1), similar to the proof of Theorem 2.1 in [7], it is straightforward to show that under conditions (1.2)-(1.4) the analytic solutions satisfy
\[
\|y(t)-z(t)\|\le\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|,
\tag{1.5}
\]
where $z(t)$ is the solution of the perturbed problem corresponding to (1.1), with initial function $\psi$.
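Condition (1.2) is a one-sided Lipschitz (dissipativity) bound, while (1.3) and (1.4) are classical Lipschitz bounds in the delayed and integral arguments. As a minimal sketch of what (1.2) asserts, the following Python snippet samples a made-up scalar function $f(t,y,u,w)=-5y+0.5\sin u+0.25w$ (our own illustration, not the paper's example) and checks the candidate constants $\alpha=-4$, $\beta=0.5$, $\sigma=0.25$:

```python
import numpy as np

# Hypothetical scalar right-hand side used only to illustrate condition (1.2):
# the -5*y part is strongly dissipative; sin(u) and w enter with small
# Lipschitz constants, so a negative one-sided constant alpha survives.
def f(t, y, u, w):
    return -5.0 * y + 0.5 * np.sin(u) + 0.25 * w

alpha, beta, sigma = -4.0, 0.5, 0.25  # candidate constants for (1.2)

rng = np.random.default_rng(0)
ok = True
for _ in range(10000):
    t = rng.uniform(0.0, 1.0)
    y1, y2, u1, u2, w1, w2 = rng.normal(size=6)
    # Re< f(t,y1,u1,w1) - f(t,y2,u2,w2), y1 - y2 > for real scalars:
    lhs = (f(t, y1, u1, w1) - f(t, y2, u2, w2)) * (y1 - y2)
    rhs = alpha * (y1 - y2) ** 2 + beta * (u1 - u2) ** 2 + sigma * (w1 - w2) ** 2
    ok = ok and lhs <= rhs + 1e-12
print(ok)  # -> True
```

The check succeeds because $0.5(\sin u_1-\sin u_2)(y_1-y_2)\le 0.25(u_1-u_2)^2+0.25(y_1-y_2)^2$ and similarly for the $w$-term, which leaves a margin of $-4.625\le\alpha=-4$ on the $\|y_1-y_2\|^2$ coefficient.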

To demand that the discrete numerical solutions preserve the convergence properties of the analytic solutions, Torelli [7] introduced the concepts of RN- and GRN-stability for numerical methods applied to dissipative nonlinear systems of DDEs such as (1.1) with $g^{[v]}(t,s,y(s))=0$, $v=1,2,\dots,m$; these are straightforward generalizations of the well-known concept of BN-stability of numerical methods with respect to dissipative systems of ODEs (see also [9]). More recently, there has been growing interest in the analysis of delay integro-differential equations (DIDEs). This type of equation has been investigated in various fields, such as mathematical biology and control theory (see [15-17]). The theory of computational methods for DIDEs has been studied by many authors, and a great deal of interesting results have been obtained (see [18-22]). Koto [23] dealt with the linear stability of Runge-Kutta (RK) methods for systems of DIDEs; Huang and Vandewalle [24] gave sufficient and necessary stability conditions for exact and discrete solutions of linear scalar DIDEs. However, little attention has been paid to nonlinear multidelay-integro-differential equations (MDIDEs).

The aim of this paper is therefore to study the stability and convergence properties of ARK methods applied to nonlinear multidelay-integro-differential equations (MDIDEs) with $m$ delays.

2. The GDN-Stability of the Additive Runge-Kutta Methods

An additive Runge-Kutta method with Lagrangian interpolation (ARKLM) of $s$ stages and $m$ levels can be organized in the Butcher tableau
\[
\begin{array}{c|c|c|c}
C & A^{[1]} & \cdots & A^{[m]}\\ \hline
  & b^{[1]T} & \cdots & b^{[m]T}
\end{array}
=
\begin{array}{c|ccc|c|ccc}
c_1 & a_{11}^{[1]} & \cdots & a_{1s}^{[1]} & \cdots & a_{11}^{[m]} & \cdots & a_{1s}^{[m]}\\
\vdots & \vdots & & \vdots & & \vdots & & \vdots\\
c_s & a_{s1}^{[1]} & \cdots & a_{ss}^{[1]} & \cdots & a_{s1}^{[m]} & \cdots & a_{ss}^{[m]}\\ \hline
 & b_1^{[1]} & \cdots & b_s^{[1]} & \cdots & b_1^{[m]} & \cdots & b_s^{[m]}
\end{array}
\tag{2.1}
\]
where $C=[c_1,c_2,\dots,c_s]^T$, $b^{[v]}=[b_1^{[v]},b_2^{[v]},\dots,b_s^{[v]}]^T$, and $A^{[v]}=(a_{ij}^{[v]})_{i,j=1}^{s}$.

The adoption of method (2.1) for solving problem (1.1) leads to
\[
\begin{aligned}
y_{n+1} &= y_n + h\sum_{v=1}^{m}\sum_{j=1}^{s} b_j^{[v]}\, f^{[v]}\big(t_j^{(n)},\,y_j^{(n)},\,\tilde y_j^{[v](n)},\,w_j^{[v](n)}\big),\\
y_i^{(n)} &= y_n + h\sum_{v=1}^{m}\sum_{j=1}^{s} a_{ij}^{[v]}\, f^{[v]}\big(t_j^{(n)},\,y_j^{(n)},\,\tilde y_j^{[v](n)},\,w_j^{[v](n)}\big),
\end{aligned}
\tag{2.2}
\]
where $h$ is the step size, $t_n=t_0+nh$, $t_j^{(n)}=t_n+c_jh$, and $y_n$, $y_j^{(n)}$, $\tilde y_j^{[v](n)}$ are approximations to the analytic solutions $y(t_n)$, $y(t_n+c_jh)$, $y(t_n+c_jh-\tau_v)$ of (1.1), respectively. The argument $\tilde y_j^{[v](n)}$ is determined by
\[
\tilde y_j^{[v](n)}=
\begin{cases}
\varphi(t_n+c_jh-\tau_v), & t_n+c_jh-\tau_v\le t_0,\\[2pt]
\displaystyle\sum_{P_v=-d}^{r} L_{P_v}(\delta_v)\, y_j^{(n-m_v+P_v)}, & t_n+c_jh-\tau_v> t_0,
\end{cases}
\tag{2.3}
\]
with $\tau_v=(m_v-\delta_v)h$, $\delta_v\in[0,1)$, integers $m_v\ge r+1$, $r,d\ge 0$, and
\[
L_{P_v}(\delta_v)=\prod_{\substack{k=-d\\ k\ne P_v}}^{r}\frac{\delta_v-k}{P_v-k},\qquad P_v=-d,-d+1,\dots,r.
\tag{2.4}
\]
The assumption $m_v\ge r+1$ guarantees that no (unknown) values $y_j^{(i)}$ with $i\ge n$ are used in the interpolation procedure. The quantity $w_j^{[v](n)}$ is an approximation to
\[
w_j^{(n)}=\int_{t_j^{(n-m_v)}}^{t_j^{(n)}} g^{[v]}\big(t_j^{(n)},s,y(s)\big)\,ds,
\tag{2.5}
\]
which can be computed by an appropriate compound quadrature rule
\[
w_j^{[v](n)}=h\sum_{q=0}^{m_v} d_q\, g^{[v]}\big(t_j^{(n)},t_j^{(n-q)},y_j^{(n-q)}\big),\qquad v=1,2,\dots,m,\; j=1,2,\dots,s.
\tag{2.6}
\]
For the quadrature rule (2.6) we usually adopt the compound trapezoidal rule, the compound Simpson rule, the compound Newton-Cotes rule, and so forth, according to the convergence requirement of the method (see [19]). We denote $M=\max_{1\le v\le m} m_v$ and $\eta=\max_{1\le v\le m}\eta_v$, where $\eta_v$ satisfies $\sum_{q=0}^{m_v}|d_q|<\eta_v$, $v=1,2,\dots,m$.
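As an illustrative sketch (the function names are ours, not the paper's), the interpolation coefficients (2.4) and the weights $d_q$ of the compound trapezoidal rule used in (2.6) can be computed and sanity-checked as follows:

```python
from functools import reduce

def lagrange_weights(delta, d, r):
    """Interpolation coefficients L_P(delta) of (2.4):
    L_P(delta) = prod_{k=-d, k != P}^{r} (delta - k) / (P - k)."""
    ks = range(-d, r + 1)
    return {P: reduce(lambda acc, k: acc * (delta - k) / (P - k),
                      (k for k in ks if k != P), 1.0)
            for P in ks}

def trapezoid_weights(m):
    """Weights d_q of the compound trapezoidal rule on m subintervals:
    integral over [0, m*h] ~ h * (f_0/2 + f_1 + ... + f_{m-1} + f_m/2)."""
    return [0.5] + [1.0] * (m - 1) + [0.5]

w = lagrange_weights(0.3, d=1, r=2)
# A Lagrange basis reproduces constants exactly, so the weights sum to 1 ...
assert abs(sum(w.values()) - 1.0) < 1e-12
# ... and reproduces polynomials up to degree d + r; check p(x) = x^2 at x = 0.3.
assert abs(sum(w[P] * P**2 for P in w) - 0.3**2) < 1e-12

dq = trapezoid_weights(4)
assert abs(sum(dq) - 4.0) < 1e-12  # the weights integrate the constant 1 exactly
```

Note that $\sum_q|d_q|$ is bounded (here by $m_v+1$), which is exactly the role of the constant $\eta_v$ in the text.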

In addition, we always set $y_j^{(n)}=\varphi(t_n+c_jh)$ and $y_n=\varphi(t_n)$ whenever $n\le 0$.

In order to write (2.2), (2.3), (2.5), and (2.6) in a more compact way, we introduce some notation. The $N\times N$ identity matrix is denoted by $I_N$, $e=(1,1,\dots,1)^T\in\mathbb{R}^s$, and $\tilde G=G\otimes I_N$ denotes the Kronecker product of a matrix $G$ with $I_N$. For $u=(u_1^T,u_2^T,\dots,u_s^T)^T$, $v=(v_1^T,v_2^T,\dots,v_s^T)^T\in\mathbb{C}^{Ns}$, we define the inner product and the induced norm in $\mathbb{C}^{Ns}$ as
\[
\langle u,v\rangle=\sum_{i=1}^{s}\langle u_i,v_i\rangle,\qquad
\|u\|=\Big(\sum_{i=1}^{s}\|u_i\|^2\Big)^{1/2}.
\tag{2.7}
\]

Moreover, we also write
\[
y^{(n)}=\begin{pmatrix}y_1^{(n)}\\ \vdots\\ y_s^{(n)}\end{pmatrix},\quad
\tilde y^{[v](n)}=\begin{pmatrix}\tilde y_1^{[v](n)}\\ \vdots\\ \tilde y_s^{[v](n)}\end{pmatrix},\quad
w^{[v](n)}=\begin{pmatrix}w_1^{[v](n)}\\ \vdots\\ w_s^{[v](n)}\end{pmatrix},\quad
T^{(n)}=\begin{pmatrix}t_1^{(n)}\\ \vdots\\ t_s^{(n)}\end{pmatrix},
\]
\[
f^{[v]}\big(T^{(n)},y^{(n)},\tilde y^{[v](n)},w^{[v](n)}\big)=
\begin{pmatrix}
f^{[v]}\big(t_1^{(n)},y_1^{(n)},\tilde y_1^{[v](n)},w_1^{[v](n)}\big)\\
\vdots\\
f^{[v]}\big(t_s^{(n)},y_s^{(n)},\tilde y_s^{[v](n)},w_s^{[v](n)}\big)
\end{pmatrix}.
\tag{2.8}
\]

With the above notation, method (2.2), (2.3), (2.5), (2.6) can be written as
\[
\begin{aligned}
y_{n+1} &= y_n + h\sum_{v=1}^{m}\tilde b^{[v]T} f^{[v]}\big(T^{(n)},y^{(n)},\tilde y^{[v](n)},w^{[v](n)}\big),\\
y^{(n)} &= \tilde e\, y_n + h\sum_{v=1}^{m}\tilde A^{[v]} f^{[v]}\big(T^{(n)},y^{(n)},\tilde y^{[v](n)},w^{[v](n)}\big),\\
\tilde y_j^{[v](n)} &=
\begin{cases}
\varphi(t_n+c_jh-\tau_v), & t_n+c_jh-\tau_v\le t_0,\\
\sum_{P_v=-d}^{r} L_{P_v}(\delta_v)\, y_j^{(n-m_v+P_v)}, & t_n+c_jh-\tau_v> t_0,
\end{cases}\\
w_j^{[v](n)} &= h\sum_{q=0}^{m_v} d_q\, g^{[v]}\big(t_n+c_jh,\,t_{n-q}+c_jh,\,y_j^{(n-q)}\big).
\end{aligned}
\tag{2.9}
\]
In 1997, Zhang and Zhou [25] introduced the extension of RN-stability to GDN-stability as follows.

Definition 2.1. An ARKLM (2.1) for (1.1) is called GDN-stable if the numerical approximations $y_n$ and $z_n$ to the solutions of (1.1) and of its perturbed problem, respectively, satisfy
\[
\|y_n-z_n\|\le C\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|,\qquad n\ge 0,
\tag{2.10}
\]
where the constant $C>0$ depends only on the method, the parameters $\alpha_v,\beta_v,\sigma_v,r_v,\tilde r_v$, and the interval length $T-t_0$, and $\psi(t)$ is the initial function of the perturbed problem of (1.1).

Definition 2.2. An ARKLM (2.1) is called strongly algebraically stable if the matrices $M^{\gamma\mu}$ are nonnegative definite, where
\[
M^{\gamma\mu}=B^{[\gamma]}A^{[\mu]}+A^{[\gamma]T}B^{[\mu]}-b^{[\gamma]}b^{[\mu]T},\qquad
B^{[\gamma]}=\mathrm{diag}\big(b_1^{[\gamma]},b_2^{[\gamma]},\dots,b_s^{[\gamma]}\big),
\tag{2.11}
\]
for $\mu,\gamma=1,2,\dots,m$.
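Definition 2.2 can be checked numerically. The sketch below builds the matrices $M^{\gamma\mu}$ of (2.11) and tests nonnegative definiteness of their symmetric parts. As input we use a hypothetical two-level ARK assembled from two copies of the 2-stage Gauss method (chosen only because its $M$ matrices vanish); this is our own test input, not the method used in Section 4:

```python
import numpy as np

def ark_stability_matrices(As, bs):
    """M^{gamma,mu} = B^[gamma] A^[mu] + A^[gamma]^T B^[mu] - b^[gamma] b^[mu]^T, cf. (2.11)."""
    Bs = [np.diag(b) for b in bs]
    return {(g, u): Bs[g] @ As[u] + As[g].T @ Bs[u] - np.outer(bs[g], bs[u])
            for g in range(len(As)) for u in range(len(As))}

def strongly_algebraically_stable(As, bs, tol=1e-12):
    """True if every M^{gamma,mu} is nonnegative definite (via its symmetric part)."""
    return all(np.linalg.eigvalsh((M + M.T) / 2).min() >= -tol
               for M in ark_stability_matrices(As, bs).values())

# Hypothetical 2-level ARK: two copies of the 2-stage Gauss method, for which
# B A + A^T B - b b^T = 0, so the check must succeed.
g = np.sqrt(3) / 6
A_gauss = np.array([[0.25, 0.25 - g], [0.25 + g, 0.25]])
b_gauss = np.array([0.5, 0.5])
print(strongly_algebraically_stable([A_gauss, A_gauss], [b_gauss, b_gauss]))  # -> True
```

By contrast, the explicit Euler tableau ($A=[[0]]$, $b=[1]$) gives $M=[-1]$ and fails the test, as expected for a non-algebraically-stable method.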
Let
\[
\big\{y_n,\,y_j^{(n)},\,\tilde y_j^{[1](n)},\dots,\tilde y_j^{[m](n)},\,w_j^{[1](n)},\dots,w_j^{[m](n)}\big\}_{j=1}^{s},\qquad
\big\{z_n,\,z_j^{(n)},\,\tilde z_j^{[1](n)},\dots,\tilde z_j^{[m](n)},\,\tilde w_j^{[1](n)},\dots,\tilde w_j^{[m](n)}\big\}_{j=1}^{s}
\tag{2.12}
\]
be two sequences of approximations obtained by applying method (2.1) with the same step size $h$ to problem (1.1) and to its perturbed problem, respectively, and write
\[
U_i^{(n)}=y_i^{(n)}-z_i^{(n)},\qquad \tilde U_i^{[v](n)}=\tilde y_i^{[v](n)}-\tilde z_i^{[v](n)},\qquad U_0^{(n)}=y_n-z_n,
\]
\[
Q_i^{[v](n)}=f^{[v]}\big(t_i^{(n)},y_i^{(n)},\tilde y_i^{[v](n)},w_i^{[v](n)}\big)-f^{[v]}\big(t_i^{(n)},z_i^{(n)},\tilde z_i^{[v](n)},\tilde w_i^{[v](n)}\big),\qquad i=1,2,\dots,s,\; v=1,2,\dots,m.
\tag{2.13}
\]
Then (2.2) and (2.3) read
\[
U_0^{(n+1)}=U_0^{(n)}+h\sum_{v=1}^{m}\sum_{j=1}^{s} b_j^{[v]} Q_j^{[v](n)},
\tag{2.14}
\]
\[
U_i^{(n)}=U_0^{(n)}+h\sum_{v=1}^{m}\sum_{j=1}^{s} a_{ij}^{[v]} Q_j^{[v](n)}.
\tag{2.15}
\]

Our main results about GDN-stability are contained in the following theorem.

Theorem 2.3. Assume the ARK method (2.1) is strongly algebraically stable. Then the corresponding ARKLM is GDN-stable and satisfies
\[
\|y_n-z_n\|^2\le C\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|^2,\qquad n\ge 0,
\tag{2.16}
\]
where
\[
C=\exp\Big(6(T-t_0)\,ms\sum_{v=1}^{m}\beta_vL_v^2\,(m_v+d+1)\Big),\qquad
L_v=\max_{-d\le p_v\le r}\big|L_{p_v}(\delta_v)\big|.
\tag{2.17}
\]

Proof. From (2.14) and (2.15) we get
\[
\begin{aligned}
\|U_0^{(n+1)}\|^2
&=\Big\langle U_0^{(n)}+h\sum_{v=1}^{m}\sum_{i=1}^{s}b_i^{[v]}Q_i^{[v](n)},\;U_0^{(n)}+h\sum_{v=1}^{m}\sum_{i=1}^{s}b_i^{[v]}Q_i^{[v](n)}\Big\rangle\\
&=\|U_0^{(n)}\|^2+2h\sum_{v=1}^{m}\sum_{i=1}^{s}b_i^{[v]}\,\mathrm{Re}\big\langle Q_i^{[v](n)},U_0^{(n)}\big\rangle
+h^2\sum_{u,v=1}^{m}\sum_{i,j=1}^{s}b_i^{[u]}b_j^{[v]}\big\langle Q_i^{[u](n)},Q_j^{[v](n)}\big\rangle\\
&=\|U_0^{(n)}\|^2+2h\sum_{v=1}^{m}\sum_{i=1}^{s}b_i^{[v]}\,\mathrm{Re}\Big\langle Q_i^{[v](n)},\,U_i^{(n)}-h\sum_{u=1}^{m}\sum_{j=1}^{s}a_{ij}^{[u]}Q_j^{[u](n)}\Big\rangle
+h^2\sum_{u,v=1}^{m}\sum_{i,j=1}^{s}b_i^{[u]}b_j^{[v]}\big\langle Q_i^{[u](n)},Q_j^{[v](n)}\big\rangle\\
&=\|U_0^{(n)}\|^2+2h\sum_{v=1}^{m}\sum_{i=1}^{s}b_i^{[v]}\,\mathrm{Re}\big\langle Q_i^{[v](n)},U_i^{(n)}\big\rangle
-h^2\sum_{u,v=1}^{m}\sum_{i,j=1}^{s}\big(b_i^{[u]}a_{ij}^{[v]}+b_j^{[v]}a_{ji}^{[u]}-b_i^{[u]}b_j^{[v]}\big)\big\langle Q_i^{[u](n)},Q_j^{[v](n)}\big\rangle.
\end{aligned}
\tag{2.18}
\]
If the matrices $M^{\gamma\mu}$ are nonnegative definite, then
\[
\|U_0^{(n+1)}\|^2\le\|U_0^{(n)}\|^2+2h\sum_{v=1}^{m}\sum_{j=1}^{s}b_j^{[v]}\,\mathrm{Re}\big\langle Q_j^{[v](n)},U_j^{(n)}\big\rangle.
\tag{2.19}
\]
Furthermore, by conditions (1.2)-(1.4) and the Schwarz inequality we have
\[
\mathrm{Re}\big\langle Q_j^{[v](n)},U_j^{(n)}\big\rangle
\le\alpha_v\|U_j^{(n)}\|^2+\beta_v\big\|\tilde U_j^{[v](n)}\big\|^2+\sigma_v\big\|w_j^{[v](n)}-\tilde w_j^{[v](n)}\big\|^2
\le\alpha_v\|U_j^{(n)}\|^2+\beta_v\big\|\tilde U_j^{[v](n)}\big\|^2+h^2\tilde r_v^2\eta_v^2\sigma_v\sum_{q=0}^{m_v}\|U_j^{(n-q)}\|^2.
\tag{2.20}
\]
If $t_n+c_jh-\tau_v>t_0$, then by (2.3),
\[
\tilde U_j^{[v](n)}=\sum_{p_v=-d}^{r}L_{p_v}(\delta_v)\,U_j^{(n-m_v+p_v)},
\tag{2.21}
\]
so that
\[
\mathrm{Re}\big\langle Q_j^{[v](n)},U_j^{(n)}\big\rangle
\le\alpha_v\|U_j^{(n)}\|^2+\beta_v\big\|\varphi(t_n+c_jh-\tau_v)-\psi(t_n+c_jh-\tau_v)\big\|^2+h^2\tilde r_v^2\eta_v^2\sigma_v\sum_{q=0}^{m_v}\|U_j^{(n-q)}\|^2,\quad t_n+c_jh-\tau_v\le t_0,
\tag{2.22}
\]
\[
\mathrm{Re}\big\langle Q_j^{[v](n)},U_j^{(n)}\big\rangle
\le\alpha_v\|U_j^{(n)}\|^2+2\beta_vL_v^2\sum_{p_v=-d}^{r}\|U_j^{(n-m_v+p_v)}\|^2+h^2\tilde r_v^2\eta_v^2\sigma_v\sum_{q=0}^{m_v}\|U_j^{(n-q)}\|^2,\quad t_n+c_jh-\tau_v>t_0.
\tag{2.23}
\]
For (2.23), note that $\alpha_v\le 0$, that the indices $n-q$ ($0\le q\le m_v$) occur among the indices $n-m_v+p_v$ ($0\le p_v\le m_v$), and that $h^2\tilde r_v^2\eta_v^2\sigma_v\le\beta_vL_v^2$ for $h\in(0,h_0]$ with $h_0$ small enough; hence
\[
(2.23)\le 2\beta_vL_v^2\sum_{p_v=-d}^{r}\|U_j^{(n-m_v+p_v)}\|^2+h^2\tilde r_v^2\eta_v^2\sigma_v\sum_{q=0}^{m_v}\|U_j^{(n-q)}\|^2
\le 3\beta_vL_v^2\sum_{p_v=-d}^{m_v}\|U_j^{(n-m_v+p_v)}\|^2.
\tag{2.24}
\]
In the same way we also get
\[
(2.22)\le 3\beta_vL_v^2\sum_{p_v=-d}^{m_v}\|U_j^{(n-m_v+p_v)}\|^2,
\tag{2.25}
\]
where, for superscripts $k\le 0$, $U_j^{(k)}$ is understood via the convention $y_j^{(k)}=\varphi(t_k+c_jh)$, $z_j^{(k)}=\psi(t_k+c_jh)$ of Section 2. Substituting (2.24) and (2.25) into (2.19) yields
\[
\|U_0^{(n+1)}\|^2\le\|U_0^{(n)}\|^2+2h\sum_{v=1}^{m}\sum_{j=1}^{s}3\beta_vL_v^2\,b_j^{[v]}\sum_{p_v=-d}^{m_v}\|U_j^{(n-m_v+p_v)}\|^2
\tag{2.26}
\]
\[
\le\|U_0^{(n)}\|^2+6hms\sum_{v=1}^{m}\beta_vL_v^2\,(m_v+d+1)\max_{(j,p_v)\in E_v}\|U_j^{(n-m_v+p_v)}\|^2
\le\Big(1+6hms\sum_{v=1}^{m}\beta_vL_v^2\,(m_v+d+1)\Big)\max\Big\{\|U_0^{(n)}\|^2,\;\max_{(j,p_v)\in E_v}\|U_j^{(n-m_v+p_v)}\|^2\Big\},
\tag{2.27}
\]
where $E_v=\{(j,p_v)\colon 1\le j\le s,\ -d\le p_v\le m_v\}$.
Similarly to (2.27), the inequalities
\[
\|U_i^{(n)}\|^2\le\Big(1+6hms\sum_{v=1}^{m}\beta_vL_v^2\,(m_v+d+1)\Big)\max\Big\{\|U_0^{(n)}\|^2,\;\max_{(j,p_v)\in E_v}\|U_j^{(n-m_v+p_v)}\|^2\Big\}
\tag{2.28}
\]
hold for $i=1,2,\dots,s$.
In the following, with the help of inequalities (2.27), (2.28), and induction, we shall prove the inequalities
\[
\|U_i^{(n)}\|^2\le\Big(1+6hms\sum_{v=1}^{m}\beta_vL_v^2\,(m_v+d+1)\Big)^{n+1}\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|^2
\tag{2.29}
\]
for $n\ge 0$, $i=0,1,2,\dots,s$.
In fact, it is clear from (2.27), (2.28), and $m_v\ge r+1$ that
\[
\|U_i^{(0)}\|^2\le\Big(1+6hms\sum_{v=1}^{m}\beta_vL_v^2\,(m_v+d+1)\Big)\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|^2,\qquad i=0,1,2,\dots,s.
\tag{2.30}
\]
Suppose that for $n\le k$ ($k\ge 0$),
\[
\|U_i^{(n)}\|^2\le\Big(1+6hms\sum_{v=1}^{m}\beta_vL_v^2\,(m_v+d+1)\Big)^{n+1}\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|^2,\qquad i=0,1,2,\dots,s.
\tag{2.31}
\]
Then from (2.27), (2.28), $m_v\ge r+1$, and $1+6hms\sum_{v=1}^{m}\beta_vL_v^2(m_v+d+1)>1$, we conclude that
\[
\|U_i^{(k+1)}\|^2\le\Big(1+6hms\sum_{v=1}^{m}\beta_vL_v^2\,(m_v+d+1)\Big)^{k+2}\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|^2,\qquad i=0,1,2,\dots,s.
\tag{2.32}
\]
This completes the proof of the inequalities (2.29). In view of (2.29), we get for $n\ge 0$
\[
\begin{aligned}
\|U_0^{(n)}\|^2
&\le\Big(1+6hms\sum_{v=1}^{m}\beta_vL_v^2\,(m_v+d+1)\Big)^{n+1}\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|^2\\
&\le\exp\Big((n+1)h\cdot 6ms\sum_{v=1}^{m}\beta_vL_v^2\,(m_v+d+1)\Big)\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|^2\\
&\le\exp\Big(6(T-t_0)\,ms\sum_{v=1}^{m}\beta_vL_v^2\,(m_v+d+1)\Big)\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|^2.
\end{aligned}
\tag{2.33}
\]
As a result, we know that method (2.1) is GDN-stable.
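The passage from the product bound (2.29) to the exponential bound (2.33) only uses $1+x\le e^x$, hence $(1+x)^{n+1}\le e^{(n+1)x}$ for $x\ge 0$. A quick numerical confirmation of this elementary inequality:

```python
import math

# Check (1 + x)**(n + 1) <= exp((n + 1) * x) for a range of x >= 0 and n,
# the inequality used to pass from (2.29) to (2.33).
for x in [0.0, 1e-3, 0.05, 0.5, 2.0]:
    for n in [0, 1, 10, 100]:
        assert (1 + x) ** (n + 1) <= math.exp((n + 1) * x) + 1e-9
print("ok")
```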

3. D-Convergence

In order to study the convergence of numerical methods for MDIDEs, we first recall the concept of convergence for stiff ODEs.

In 1981, Frank et al. [26] introduced the important concept of B-convergence for numerical methods applied to nonlinear stiff initial value problems of ordinary differential equations. Since then there have been rapid developments in the study of B-convergence, and a significant number of important results have been obtained for Runge-Kutta methods. In fact, a B-convergence result is nothing but a realistic global error estimate based on the one-sided Lipschitz constant [27]. In this section, we discuss the convergence of the ARKLM (2.1) for MDIDEs (1.1) with conditions (1.2)-(1.4). The approach to the derivation of these estimates is similar to that used in [25]. We assume that the analytic solution $y(t)$ of (1.1) is smooth enough and that the derivatives used later are bounded by
\[
\big\|D^{(i)}y(t)\big\|\le M_i,\qquad t\in[t_0-\tau,\,T],
\tag{3.1}
\]

where
\[
D^{(i)}y(t)=
\begin{cases}
y^{(i)}(t), & t\in\big(t_0+(j-1)h,\;t_0+jh\big),\\
y^{(i)}(t_0+jh-0), & t=t_0+jh.
\end{cases}
\tag{3.2}
\]
We introduce the notations
\[
Y^{(n)}=\begin{pmatrix}y(t_n+c_1h)\\ \vdots\\ y(t_n+c_sh)\end{pmatrix},\qquad
Y^{[v](n)}=\begin{pmatrix}y(t_n+c_1h-\tau_v)\\ \vdots\\ y(t_n+c_sh-\tau_v)\end{pmatrix},\qquad
w^{[v](n)}=\begin{pmatrix}w(t_n+c_1h-\tau_v)\\ \vdots\\ w(t_n+c_sh-\tau_v)\end{pmatrix}.
\tag{3.3}
\]
With the above notations, the local errors in (2.9) can be defined by
\[
y(t_{n+1})=y(t_n)+h\sum_{v=1}^{m}\tilde b^{[v]T}f^{[v]}\big(T^{(n)},Y^{(n)},Y^{[v](n)},w^{[v](n)}\big)+Q_n,
\tag{3.4}
\]
\[
Y^{(n)}=\tilde e\,y(t_n)+h\sum_{v=1}^{m}\tilde A^{[v]}f^{[v]}\big(T^{(n)},Y^{(n)},Y^{[v](n)},w^{[v](n)}\big)+r_n,
\tag{3.5}
\]
\[
Y^{[v](n)}=\big(Y_1^{[v](n)},Y_2^{[v](n)},\dots,Y_s^{[v](n)}\big)^T,
\tag{3.6}
\]
with
\[
Y_j^{[v](n)}=
\begin{cases}
\varphi(t_n+c_jh-\tau_v), & t_n+c_jh-\tau_v\le t_0,\\
\sum_{P_v=-d}^{r}L_{P_v}(\delta_v)\,y(t_{n-m_v+P_v}+c_jh)+\rho_j^{[v](n)}, & t_n+c_jh-\tau_v> t_0,
\end{cases}
\tag{3.7}
\]
\[
w_j^{[v](n)}=h\sum_{q=0}^{m_v}d_q\,g^{[v]}\big(t_n+c_jh,\,t_{n-q}+c_jh,\,y(t_{n-q}+c_jh)\big)+R_j^{[v](n)}.
\tag{3.8}
\]
We now take $\breve y_n=y(t_n)$, $\breve y^{(n)}=Y^{(n)}$, $\breve{\tilde y}^{[v](n)}=Y^{[v](n)}$, and $\breve w^{[v](n)}=w^{[v](n)}$.

Then we can get the perturbed scheme of (2.9),
\[
\breve y_{n+1}=\breve y_n+h\sum_{v=1}^{m}\tilde b^{[v]T}f^{[v]}\big(T^{(n)},\breve y^{(n)},\breve{\tilde y}^{[v](n)},\breve w^{[v](n)}\big)+Q_n,
\tag{3.9}
\]
\[
\breve y^{(n)}=\tilde e\,\breve y_n+h\sum_{v=1}^{m}\tilde A^{[v]}f^{[v]}\big(T^{(n)},\breve y^{(n)},\breve{\tilde y}^{[v](n)},\breve w^{[v](n)}\big)+r_n,
\tag{3.10}
\]
\[
\breve{\tilde y}_j^{[v](n)}=
\begin{cases}
\varphi(t_n+c_jh-\tau_v), & t_n+c_jh-\tau_v\le t_0,\\
\sum_{P_v=-d}^{r}L_{P_v}(\delta_v)\,\breve y_j^{(n-m_v+P_v)}+\rho_j^{[v](n)}, & t_n+c_jh-\tau_v> t_0,
\end{cases}
\tag{3.11}
\]
\[
\breve w_j^{[v](n)}=h\sum_{q=0}^{m_v}d_q\,g^{[v]}\big(t_n+c_jh,\,t_{n-q}+c_jh,\,\breve y_j^{(n-q)}\big)+R_j^{[v](n)},
\tag{3.12}
\]
with perturbations $Q_n\in\mathbb{C}^N$, $r_n=(r_1^{(n)T},r_2^{(n)T},\dots,r_s^{(n)T})^T$, $R^{[v](n)}=(R_1^{[v](n)},R_2^{[v](n)},\dots,R_s^{[v](n)})^T$, and $\rho^{(n)}=(\rho_1^{(n)T},\rho_2^{(n)T},\dots,\rho_s^{(n)T})^T\in\mathbb{C}^{Ns}$. According to Taylor's formula and the formulas in [28, pages 205-212], $Q_n$, $r_n$, and $\rho_n$ can be determined, respectively, as
\[
Q_n=\sum_{l=1}^{p}h^l\Big(\frac{1}{l!}-\frac{1}{(l-1)!}\sum_{v=1}^{m}\sum_{j=1}^{s}b_j^{[v]}c_j^{\,l-1}\Big)D^{(l)}y(t_n)+R_0^{(n)},
\tag{3.13}
\]
\[
r_i^{(n)}=\sum_{l=1}^{p}h^l\Big(\frac{c_i^l}{l!}-\frac{1}{(l-1)!}\sum_{v=1}^{m}\sum_{j=1}^{s}a_{ij}^{[v]}c_j^{\,l-1}\Big)D^{(l)}y(t_n)+R_i^{(n)},
\tag{3.14}
\]
\[
\rho_i^{(n)}=\frac{h^{q+1}}{(q+1)!}\sum_{v=1}^{m}\prod_{P_v=-d}^{r}\big(\delta_v-P_v\big)\,D^{(q+1)}y\big(\xi_i^{(n)}\big),\qquad
\xi_i^{(n)}\in\big(t_{n-m_v-d}+c_ih,\;t_{n-m_v+r}+c_ih\big),
\tag{3.15}
\]
where $q=d+r$ and $R_i^{(n)}$ satisfies $\|R_i^{(n)}\|\le\bar M_i h^{p+1}$, $i=0,1,2,\dots,s$, for $h\in(0,h_0]$; here $h_0$ depends only on the method, and $\bar M_i$ ($i=0,1,2,\dots,s$) depends only on the method and some $M_i$ in (3.1).

Combining (2.2), (2.3), (2.5), and (2.6) with (3.9)-(3.12) yields the following recursion scheme for $\varepsilon_0^{(n+1)}=\breve y_{n+1}-y_{n+1}$:
\[
\begin{aligned}
\varepsilon_0^{(n+1)}&=\varepsilon_0^{(n)}+h\sum_{v=1}^{m}\tilde b^{[v]T}\Big[f^{[v]}\big(T^{(n)},\breve y^{(n)},\breve{\tilde y}^{[v](n)},\breve w^{[v](n)}\big)-f^{[v]}\big(T^{(n)},\breve y^{(n)},\tilde y^{[v](n)},\breve w^{[v](n)}\big)\\
&\qquad\qquad+g^{[v](n)}\varepsilon^{(n)}+H^{[v](n)}\big(\breve w^{[v](n)}-w^{[v](n)}\big)\Big]+Q_n,\\
\varepsilon^{(n)}&=\tilde e\,\varepsilon_0^{(n)}+h\sum_{v=1}^{m}\tilde A^{[v]}\Big[f^{[v]}\big(T^{(n)},\breve y^{(n)},\breve{\tilde y}^{[v](n)},\breve w^{[v](n)}\big)-f^{[v]}\big(T^{(n)},\breve y^{(n)},\tilde y^{[v](n)},\breve w^{[v](n)}\big)\\
&\qquad\qquad+g^{[v](n)}\varepsilon^{(n)}+H^{[v](n)}\big(\breve w^{[v](n)}-w^{[v](n)}\big)\Big]+r_n,
\end{aligned}
\tag{3.16}
\]
where $\varepsilon^{(n)}=(\varepsilon_1^{(n)T},\varepsilon_2^{(n)T},\dots,\varepsilon_s^{(n)T})^T=\breve y^{(n)}-y^{(n)}$ and
\[
g_i^{[v](n)}=\int_0^1 f_2^{[v]}\big(t_n+c_ih,\;y_i^{(n)}+\theta(\breve y_i^{(n)}-y_i^{(n)}),\;\breve{\tilde y}_i^{[v](n)},\;\breve w_i^{[v](n)}\big)\,d\theta,\qquad i=1,2,\dots,s,
\]
\[
H_i^{[v](n)}=\int_0^1 f_4^{[v]}\big(t_n+c_ih,\;y_i^{(n)},\;\breve{\tilde y}_i^{[v](n)},\;w_i^{[v](n)}+\theta(\breve w_i^{[v](n)}-w_i^{[v](n)})\big)\,d\theta;
\tag{3.17}
\]
here $f_i(x_1,x_2,x_3,x_4)$ denotes the Jacobian matrix $\partial f(x_1,x_2,x_3,x_4)/\partial x_i$, $i=1,2,3,4$. Moreover,
\[
\begin{aligned}
H^{[v](n)}\big(\breve w^{[v](n)}-w^{[v](n)}\big)
&=hH^{[v](n)}\sum_{q=1}^{m_v}d_q\Big[g^{[v]}\big(t_n,t_{n-q},\breve y^{(n-q)}\big)-g^{[v]}\big(t_n,t_{n-q},y^{(n-q)}\big)\Big]
+H^{[v](n)}R^{[v](n)}\\
&\quad+hH^{[v](n)}d_0\Big[g^{[v]}\big(t_n,t_n,\breve y^{(n)}\big)-g^{[v]}\big(t_n,t_n,y^{(n)}\big)\Big]\\
&=hH^{[v](n)}\sum_{q=1}^{m_v}d_q\Big[g^{[v]}\big(t_n,t_{n-q},\breve y^{(n-q)}\big)-g^{[v]}\big(t_n,t_{n-q},y^{(n-q)}\big)\Big]
+H^{[v](n)}R^{[v](n)}\\
&\quad+hH^{[v](n)}d_0\int_0^1 g_3^{[v]}\big(t_n,t_n,y^{(n)}+\theta(\breve y^{(n)}-y^{(n)})\big)\,d\theta\;\varepsilon^{(n)}.
\end{aligned}
\tag{3.18}
\]
For brevity, write $K^{[v](n)}=g^{[v](n)}+hH^{[v](n)}d_0\,\bar g_3^{[v](n)}$, where $\bar g_3^{[v](n)}$ denotes the integral mean value of $g_3^{[v]}$ appearing in (3.18), and assume that $\big(I_s\otimes I_N-h\sum_{v=1}^{m}\tilde A^{[v]}K^{[v](n)}\big)$ is regular. From (3.16)-(3.18) we then get
\[
\begin{aligned}
\varepsilon_0^{(n+1)}
&=\Big[I_N+h\sum_{v=1}^{m}\tilde b^{[v]T}K^{[v](n)}\Big(I_s\otimes I_N-h\sum_{v=1}^{m}\tilde A^{[v]}K^{[v](n)}\Big)^{-1}\tilde e\Big]\varepsilon_0^{(n)}\\
&\quad+h\sum_{v=1}^{m}\tilde b^{[v]T}K^{[v](n)}\Big(I_s\otimes I_N-h\sum_{v=1}^{m}\tilde A^{[v]}K^{[v](n)}\Big)^{-1}
\Big[\tilde r_n+h\sum_{v=1}^{m}\tilde A^{[v]}\Delta^{[v](n)}\Big]
+h\sum_{v=1}^{m}\tilde b^{[v]T}\Delta^{[v](n)}+\tilde Q_n,
\end{aligned}
\tag{3.19}
\]
where
\[
\Delta^{[v](n)}=f^{[v]}\big(T^{(n)},\breve y^{(n)},\breve{\tilde y}^{[v](n)},\breve w^{[v](n)}\big)-f^{[v]}\big(T^{(n)},\breve y^{(n)},\tilde y^{[v](n)},\breve w^{[v](n)}\big)
+hH^{[v](n)}\sum_{q=1}^{m_v}d_q\Big[g^{[v]}\big(t_n,t_{n-q},\breve y^{(n-q)}\big)-g^{[v]}\big(t_n,t_{n-q},y^{(n-q)}\big)\Big],
\]
$\tilde Q_n=Q_n+h\sum_{v=1}^{m}\tilde b^{[v]T}H^{[v](n)}R^{[v](n)}$, and $\tilde r_n=r_n+h\sum_{v=1}^{m}\tilde A^{[v]}H^{[v](n)}R^{[v](n)}$. Now, we introduce the concept of D-convergence from [25].

Definition 3.1. An ARKLM (2.1) with $y_n=y(t_n)$ ($n\le 0$), $y_i^{(n)}=y(t_n+c_ih)$ ($n<0$), and $\tilde y_i^{[v](n)}=y(t_n+c_ih-\tau_v)$ ($n<0$) is called D-convergent of order $p$ if this method, when applied to any given MDIDEs (1.1) subject to (1.2)-(1.4), produces an approximation sequence $\{y_n\}$ whose global error satisfies a bound of the form
\[
\|y(t_n)-y_n\|\le C(t_n)\,h^{p},\qquad h\in(0,h_0],
\tag{3.20}
\]
where the maximum step size $h_0$ depends on the characteristic parameters $\alpha_v,\beta_v,\sigma_v,r_v,\tilde r_v$ and the method, and the function $C(t)$ depends only on some $M_i$ in (3.1), the delays $\tau_v$, the characteristic parameters $\alpha_v,\beta_v,\sigma_v,r_v,\tilde r_v$, $v=1,2,\dots,m$, and the method.

Definition 3.2. The ARKLM (2.2), (2.3), (2.5), and (2.6) is said to be DA-stable if the matrix $\big(I_s-\sum_{\nu=1}^{m}A^{[\nu]}\xi\big)$ is regular for $\xi\in\mathbb{C}^{-}=\{\xi\in\mathbb{C}\colon\mathrm{Re}\,\xi\le 0\}$ and $|R_i(\xi)|\le 1$ for all $\xi\in\mathbb{C}^{-}$, $i=0,1,\dots,s$, where
\[
R_i(\xi)=1+\sum_{v=1}^{m}A_i^{[v]T}\xi\Big(I_s-\sum_{\nu=1}^{m}A^{[\nu]}\xi\Big)^{-1}e,\qquad
A_0^{[v]}=b^{[v]},\quad A_i^{[v]}=\big(a_{i1}^{[v]},a_{i2}^{[v]},\dots,a_{is}^{[v]}\big)^T,\quad i=0,1,\dots,s.
\tag{3.21}
\]

Definition 3.3. The ARKLM (2.2), (2.3), (2.5), and (2.6) is said to be ASI-stable if the matrix $\big(I_s-\sum_{\nu=1}^{m}A^{[\nu]}\xi\big)$ is regular for $\xi\in\mathbb{C}^{-}$ and $\big(I_s-\sum_{\nu=1}^{m}A^{[\nu]}\xi\big)^{-1}$ is uniformly bounded for $\xi\in\mathbb{C}^{-}$.

Definition 3.4. The ARKLM (2.2), (2.3), (2.5), and (2.6) is said to be DAS-stable if the matrix $\big(I_s-\sum_{\nu=1}^{m}A^{[\nu]}\xi\big)$ is regular for $\xi\in\mathbb{C}^{-}$ and $\sum_{\nu=1}^{m}A_i^{[\nu]T}\xi\big(I_s-\sum_{\nu=1}^{m}A^{[\nu]}\xi\big)^{-1}$ ($i=0,1,\dots,s$) is uniformly bounded for $\xi\in\mathbb{C}^{-}$.

Lemma 3.5. Suppose the ARKLM (2.2), (2.3), (2.5), and (2.6) is DA-, DAS-, and ASI-stable. Then there exist positive constants $h_0,\gamma_1,\gamma_2,\gamma_3$, which depend only on the method and the parameters $\alpha_v,\beta_v,\sigma_v,r_v,\tilde r_v$, such that
\[
\Big\|\Big(I_s-\sum_{\nu=1}^{m}\tilde A^{[\nu]}\xi\Big)^{-1}\Big\|\le\gamma_1,\qquad
\Big\|I_N+\sum_{v=1}^{m}\tilde A_i^{[v]T}\xi\Big(I_s-\sum_{v=1}^{m}\tilde A^{[v]}\xi\Big)^{-1}\tilde e\Big\|\le 1+\gamma_2 h,
\]
\[
\Big\|\sum_{v=1}^{m}\tilde A_i^{[v]T}\xi\Big(I_s-\sum_{v=1}^{m}\tilde A^{[v]}\xi\Big)^{-1}v\Big\|\le\gamma_3\|v\|,\qquad
\forall v\in\mathbb{C}^{Ns},\ h\in(0,h_0],\ i=0,1,2,\dots,s.
\tag{3.22}
\]

Proof. This lemma can be proved in a similar way to [29, Lemmas 3.5-3.7].

Theorem 3.6. Suppose the ARKLM (2.2), (2.3), (2.5), and (2.6) is DA-, DAS-, and ASI-stable. Then there exist positive constants $h_0,\gamma_3,\gamma_4,\gamma_5$, which depend only on the method and the parameters $\alpha_v,\beta_v,\sigma_v,r_v,\tilde r_v$, such that for $h\in(0,h_0]$,
\[
\|\varepsilon_0^{(n+1)}\|\le(1+\gamma_4 h)\max\Big\{\|\varepsilon_0^{(n)}\|,\ \max_{(i,p_v)\in E}\|\varepsilon_i^{(n-m_v+p_v)}\|,\ \max_{(i,q)\in E_q}\|\varepsilon_i^{(n-q)}\|\Big\}
+\gamma_5 h\max_{1\le i\le s}\|\rho_i^{(n)}\|+\|\tilde Q_n\|+\gamma_3\|\tilde r_n\|,
\]
\[
\|\varepsilon_i^{(n)}\|\le(1+\gamma_4 h)\max\Big\{\|\varepsilon_0^{(n)}\|,\ \max_{(j,p_v)\in E}\|\varepsilon_j^{(n-m_v+p_v)}\|,\ \max_{(j,q)\in E_q}\|\varepsilon_j^{(n-q)}\|\Big\}
+\gamma_5 h\max_{1\le j\le s}\|\rho_j^{(n)}\|+\|\tilde Q_n\|+\gamma_3\|\tilde r_n\|,\qquad i=1,2,\dots,s,
\tag{3.23}
\]
where $\varepsilon_0^{(n)}=\breve y_n-y_n$, $\varepsilon_i^{(n)}=\breve y_i^{(n)}-y_i^{(n)}$, $E=\{(i,p_v)\colon 1\le i\le s,\ -d\le p_v\le r\}$, $E_q=\{(i,q)\colon 1\le i\le s,\ 1\le q\le m_v\}$, $\tilde Q_n=Q_n+h\sum_{v=1}^{m}\tilde b^{[v]T}H^{[v](n)}R^{[v](n)}$, and $\tilde r_n=r_n+h\sum_{v=1}^{m}\tilde A^{[v]}H^{[v](n)}R^{[v](n)}$.

Proof. For brevity, write
\[
\Delta_j^{[v](n)}=f^{[v]}\big(t_j^{(n)},y_j^{(n)},\breve{\tilde y}_j^{[v](n)},\breve w_j^{[v](n)}\big)-f^{[v]}\big(t_j^{(n)},y_j^{(n)},\tilde y_j^{[v](n)},\breve w_j^{[v](n)}\big)
+hH_j^{[v](n)}\sum_{q=1}^{m_v}d_q\Big[g^{[v]}\big(t_n,t_{n-q},\breve y_j^{(n-q)}\big)-g^{[v]}\big(t_n,t_{n-q},y_j^{(n-q)}\big)\Big].
\]
Using (3.19) and Lemma 3.5, for $h\in(0,h_0]$ we obtain
\[
\|\varepsilon_0^{(n+1)}\|\le(1+\gamma_2h)\|\varepsilon_0^{(n)}\|+\gamma_3\|\tilde r_n\|+\|\tilde Q_n\|
\tag{3.25a}
\]
\[
\quad+h\gamma_3\Big[\sum_{v=1}^{m}\sum_{i=1}^{s}\Big(\sum_{j=1}^{s}\big|a_{ij}^{[v]}\big|\,\big\|\Delta_j^{[v](n)}\big\|\Big)^2\Big]^{1/2}
\tag{3.25b}
\]
\[
\quad+h\sum_{v=1}^{m}\sum_{j=1}^{s}\big|b_j^{[v]}\big|\,\big\|\Delta_j^{[v](n)}\big\|.
\tag{3.25c}
\]
By conditions (1.3) and (1.4),
\[
\big\|\Delta_j^{[v](n)}\big\|\le r_v\big\|\breve{\tilde y}_j^{[v](n)}-\tilde y_j^{[v](n)}\big\|
+h\big|H_j^{[v](n)}\big|\,\tilde r_v\sum_{q=1}^{m_v}|d_q|\,\big\|\varepsilon_j^{(n-q)}\big\|,
\]
so that, estimating (3.25b) and (3.25c) term by term as in (3.26)-(3.29) (using $(\sum_j x_j)^2\le 2\sum_j x_j^2$ repeatedly), we arrive at
\[
\begin{aligned}
\|\varepsilon_0^{(n+1)}\|&\le(1+\gamma_2h)\|\varepsilon_0^{(n)}\|+\gamma_3\|\tilde r_n\|+\|\tilde Q_n\|
+h\sum_{v=1}^{m}r_v\Big(\sqrt 2\,\gamma_3\sum_{i,j=1}^{s}\big|a_{ij}^{[v]}\big|+\sum_{j=1}^{s}\big|b_j^{[v]}\big|\Big)\big\|\breve{\tilde y}_j^{[v](n)}-\tilde y_j^{[v](n)}\big\|\\
&\quad+h^2\sum_{v=1}^{m}\tilde r_v\sum_{q=1}^{m_v}|d_q|\sum_{j=1}^{s}\Big(\sqrt 2\,\gamma_3\sum_{i=1}^{s}\big|a_{ij}^{[v]}\big|\,\big|H_j^{[v](n)}\big|+\big|b_j^{[v]}\big|\,\big|H_j^{[v](n)}\big|\Big)\big\|\varepsilon_j^{(n-q)}\big\|.
\end{aligned}
\tag{3.30}
\]
Moreover, it follows from (2.3) and (3.11) that
\[
\big\|\breve{\tilde y}_j^{[v](n)}-\tilde y_j^{[v](n)}\big\|\le\sup_{\delta_v\in[0,1)}\sum_{P_v=-d}^{r}\big|L_{P_v}(\delta_v)\big|\;\max_{-d\le P_v\le r}\big\|\varepsilon_j^{(n-m_v+P_v)}\big\|+\big\|\rho_j^{[v](n)}\big\|.
\tag{3.31}
\]
Substituting (3.31) into (3.30), we get
\[
\|\varepsilon_0^{(n+1)}\|\le\big(1+\gamma_4^{(0)}h+\gamma_6^{(0)}h^2\big)\max\Big\{\|\varepsilon_0^{(n)}\|,\ \max_{(j,P_v)\in E}\|\varepsilon_j^{(n-m_v+P_v)}\|,\ \max_{(j,q)\in E_q}\|\varepsilon_j^{(n-q)}\|\Big\}
+\|\tilde Q_n\|+\gamma_3\|\tilde r_n\|+\gamma_5^{(0)}h\max_{(j,v)\in E_m}\|\rho_j^{[v](n)}\|,\qquad h\in(0,h_0],
\tag{3.32}
\]
where
\[
\gamma_4^{(0)}=\gamma_2+\gamma_5^{(0)}\sup_{\delta_v\in[0,1)}\sum_{P_v=-d}^{r}\big|L_{P_v}(\delta_v)\big|,\qquad
\gamma_5^{(0)}=\sum_{v=1}^{m}r_v\Big(\sqrt 2\,\gamma_3\sum_{i,j=1}^{s}\big|a_{ij}^{[v]}\big|+\sum_{j=1}^{s}\big|b_j^{[v]}\big|\Big),
\]
\[
\gamma_6^{(0)}=\sum_{v=1}^{m}\tilde r_v\sum_{q=1}^{m_v}\sum_{j=1}^{s}|d_q|\Big(\sqrt 2\,\gamma_3\sum_{i=1}^{s}\big|a_{ij}^{[v]}H_j^{[v](n)}\big|+\big|b_j^{[v]}H_j^{[v](n)}\big|\Big),
\]
\[
E=\big\{(j,P_v)\colon 1\le j\le s,\ -d\le P_v\le r\big\},\qquad
E_q=\big\{(j,q)\colon 1\le j\le s,\ 1\le q\le m_v\big\},\qquad
E_m=\big\{(j,v)\colon 1\le j\le s,\ 1\le v\le m\big\}.
\tag{3.33}
\]
By Lemma 3.5, similarly to (3.32) we can obtain the inequalities
\[
\|\varepsilon_i^{(n)}\|\le\big(1+\gamma_4^{(i)}h+\gamma_6^{(i)}h^2\big)\max\Big\{\|\varepsilon_0^{(n)}\|,\ \max_{(j,P_v)\in E}\|\varepsilon_j^{(n-m_v+P_v)}\|,\ \max_{(j,q)\in E_q}\|\varepsilon_j^{(n-q)}\|\Big\}
+\gamma_5^{(i)}h\max_{(j,v)\in E_m}\|\rho_j^{[v](n)}\|+\|\tilde Q_n\|+\gamma_3\|\tilde r_n\|,\qquad i=1,2,\dots,s,
\tag{3.34}
\]
where $\gamma_4^{(i)},\gamma_5^{(i)},\gamma_6^{(i)}$ are defined analogously to (3.33) with the weights $b_j^{[v]}$ replaced by $a_{ij}^{[v]}$ (3.35). Set $\gamma_4=\max_{0\le i\le s}\gamma_4^{(i)}$, $\gamma_5=\max_{0\le i\le s}\gamma_5^{(i)}$, $\gamma_6=\max_{0\le i\le s}\gamma_6^{(i)}$.
Combining (3.32) with (3.34), we immediately obtain the conclusion of this theorem.
Now we turn to the convergence of the ARKLM (2.1) for (1.1). It is always assumed that the analytic solution $y(t)$ of (1.1) is smooth enough on each interval of the form $\big(t_0+(j-1)h,\ t_0+jh\big)$ ($j$ a positive integer), as defined in (3.2).

Theorem 3.7. Assume the ARKLM (2.1) with stage order $p$ is DA-, DAS-, and ASI-stable. Then the ARKLM (2.1) is D-convergent of order $\min\{p,\,q+1,\,s+1\}$, where $q=d+r$.

Proof. By Theorem 3.6, we have for $h\in(0,h_0]$
\[
\|\varepsilon_i^{(n)}\|\le
\begin{cases}
\big(1+\gamma_4h+2\gamma_6h^2\big)\max\Big\{\|\varepsilon_0^{(n-1)}\|,\ \displaystyle\max_{(i,p_v)\in E}\|\varepsilon_i^{(n-1-m_v+p_v)}\|,\ \max_{(j,q)\in E_q}\|\varepsilon_j^{(n-q)}\|\Big\}+T_1h^{p+1}+T_2h^{q+2}+T_3h^{s+2}, & i=0,\\[6pt]
\big(1+\gamma_4h+2\gamma_6h^2\big)\max\Big\{\|\varepsilon_0^{(n)}\|,\ \displaystyle\max_{(i,p_v)\in E}\|\varepsilon_i^{(n-m_v+p_v)}\|,\ \max_{(j,q)\in E_q}\|\varepsilon_j^{(n-q)}\|\Big\}+T_1h^{p+1}+T_2h^{q+2}+T_3h^{s+2}, & i=1,2,\dots,s,
\end{cases}
\tag{3.36}
\]
where
\[
T_1=\bar M_0+\gamma_3\Big(\sum_{i=1}^{s}\bar M_i^2\Big)^{1/2},\qquad
T_2=\frac{\gamma_5\,s}{(q+1)!}\prod_{p_v=-d}^{r}\big|\delta_v-p_v\big|\,M_{q+1},
\tag{3.37}
\]
and $T_3$ is a constant bounding the quadrature perturbation terms $\sum_{v=1}^{m}\tilde b^{[v]T}H^{[v](n)}R^{[v](n)}$ and $\sum_{v=1}^{m}\tilde A^{[v]}H^{[v](n)}R^{[v](n)}$, which involve $g^{[v](s)}(\xi)$. An induction on $n$ in (3.36) gives
\[
\|\varepsilon_i^{(n)}\|\le
\begin{cases}
\displaystyle\sum_{j=0}^{n}\big(1+2\gamma_4h\big)^{j}\big(T_1h^{p+1}+T_2h^{q+2}+T_3h^{s+2}\big), & i=0,\\[6pt]
\displaystyle\sum_{j=0}^{n+1}\big(1+2\gamma_4h\big)^{j}\big(T_1h^{p+1}+T_2h^{q+2}+T_3h^{s+2}\big), & i=1,2,\dots,s.
\end{cases}
\tag{3.38}
\]
Hence, for $h\in(0,h_0]$, we arrive at
\[
\begin{aligned}
\|y(t_n)-y_n\|=\|\varepsilon_0^{(n)}\|
&\le\sum_{j=0}^{n}\big(1+2\gamma_4h\big)^{j}\big(T_1h^{p+1}+T_2h^{q+2}+T_3h^{s+2}\big)
=\frac{(1+2\gamma_4h)^{n+1}-1}{2\gamma_4h}\big(T_1h^{p+1}+T_2h^{q+2}+T_3h^{s+2}\big)\\
&\le\frac{\exp\big(2(n+1)\gamma_4h\big)}{2\gamma_4h}\big(T_1h^{p+1}+T_2h^{q+2}+T_3h^{s+2}\big)
\le\frac{\exp\big(2(T-t_0)\gamma_4\big)\exp\big(2h_0\gamma_4\big)}{2\gamma_4}\big(T_1h^{p}+T_2h^{q+1}+T_3h^{s+1}\big).
\end{aligned}
\tag{3.39}
\]
Therefore, the ARKLM (2.1) is D-convergent of order $\min\{p,\,q+1,\,s+1\}$ ($q=r+d$).

4. Some Examples

Consider the following initial value problem of multidelay-integro-differential equations:
\[
\begin{aligned}
y'(t)={}&\frac{\big(-10^4+99i\big)\big[1+y(t)\big]^2}{1+\big[1+y(t)\big]^2}
+500\big[y(t-1)-y(t-2)\big]
-500\int_{t-1}^{t}y(s)\,ds+500\int_{t-2}^{t}y(s)\,ds\\
&-\exp(-t)+\frac{\big(10^4-99i\big)\big[1+\exp(-t)\big]^2}{1+\big[1+\exp(-t)\big]^2},\qquad 0\le t\le 3,\\
y(t)={}&\exp(-t),\qquad -2\le t\le 0,
\end{aligned}
\tag{4.1}
\]
and its perturbed problem:
\[
\begin{aligned}
z'(t)={}&\frac{\big(-10^4+99i\big)\big[1+z(t)\big]^2}{1+\big[1+z(t)\big]^2}
+500\big[z(t-1)-z(t-2)\big]
-500\int_{t-1}^{t}z(s)\,ds+500\int_{t-2}^{t}z(s)\,ds\\
&-\exp(-t)+\frac{\big(10^4-99i\big)\big[1+\exp(-t)\big]^2}{1+\big[1+\exp(-t)\big]^2},\qquad 0\le t\le 3,\\
z(t)={}&\exp(-t)+0.01,\qquad -2\le t\le 0.
\end{aligned}
\tag{4.2}
\]
It can be easily verified that $\alpha_1=\alpha_2=-9.5\times10^{3}$, $\beta_1=\beta_2=250$, $\sigma_1=\sigma_2=250$, and $\tilde r_1=\tilde r_2=1$, and that the analytic solution is $y(t)=\exp(-t)$, where the inner product is the standard inner product. We apply the two-stage, second-order additive Runge-Kutta method (4.3), written in the tableau form (2.1) with $s=m=2$:
\[
\begin{array}{c|c|c}
C & A^{[1]} & A^{[2]}\\ \hline
  & b^{[1]T} & b^{[2]T}
\end{array}
\tag{4.3}
\]

We apply this method to problem (4.1) and its perturbed problem (4.2). Since the order of the method is 2, we adopt the compound trapezoidal rule for computing the integral part. According to the results of Theorems 2.3 and 3.7, the corresponding method for DIDEs is GDN-stable and D-convergent. We denote the numerical solutions of problems (4.1) and (4.2) by $y_n$ and $z_n$, where $y_n$ and $z_n$ are approximations to $y(t_n)$ and $z(t_n)$, respectively. The values $y_n$ and $z_n$ with $h=0.1$ are listed in Figure 1 (where the abscissa and ordinate denote the variable $n$ and $y_n$, resp.) and Figure 2 (where the abscissa and ordinate denote the variable $n$ and $z_n$, resp.). It is shown in Table 1 that the numerical solutions tend to the exact solutions as $h\to 0$.
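For readers who want a runnable illustration, the following simplified sketch solves a real-valued analogue of (4.1), namely $y'(t)=-2y(t)+y(t-1)-\int_{t-1}^{t}y(s)\,ds$ with history $y(t)=e^{-t}$ (whose exact solution is $e^{-t}$), using backward Euler in place of the two-stage ARK method but keeping the method-of-steps treatment of the delay and the compound trapezoidal quadrature of (2.6). The model equation and all names here are our own simplifications, not the scheme actually tested in the paper:

```python
import math

def solve(M, T=3.0):
    """Backward Euler + compound trapezoidal rule for
    y'(t) = -2 y(t) + y(t-1) - int_{t-1}^t y(s) ds,  y(t) = exp(-t) for t <= 0,
    whose exact solution is exp(-t). Step h = 1/M, so the delay is M steps."""
    h = 1.0 / M
    N = int(round(T * M))
    # y[i] approximates y(t) at t = (i - M) * h; indices 0..M hold the history.
    y = [math.exp(-(i - M) * h) for i in range(M + 1)]
    for n in range(M, M + N):
        # Trapezoidal sum over [t-1, t] excluding the unknown endpoint y_{n+1}:
        partial = h * (0.5 * y[n + 1 - M] + sum(y[n + 2 - M:n + 1]))
        # Backward Euler: y_{n+1}(1 + 2h + h^2/2) = y_n + h (y(t-1) - partial),
        # the h^2/2 coming from the implicit endpoint of the trapezoidal rule.
        y.append((y[n] + h * (y[n + 1 - M] - partial)) / (1 + 2 * h + 0.5 * h * h))
    return abs(y[-1] - math.exp(-T))  # global error at t = T

e1, e2 = solve(20), solve(40)
print(e1 > e2)  # halving h reduces the error -> True
```

Halving $h$ roughly halves the global error here, consistent with first-order convergence of backward Euler; the paper's second-order ARK scheme would instead show errors decreasing by about a factor of four, which is the behavior reported in Table 1.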

It is obvious that the corresponding method for MDIDEs is GDN-stable and D-convergent.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (11101109) and the Natural Science Foundation of Heilongjiang Province of China (A201107).