Mathematical Problems in Engineering
Volume 2009, Article ID 925276, 15 pages
http://dx.doi.org/10.1155/2009/925276
Research Article

Computationally Efficient Technique for Solving ODE Systems Exhibiting Initial and Boundary Layers

School of Mathematical Sciences, University of KwaZulu-Natal, Westville Campus, Private Bag X54001, Durban 4000, South Africa

Received 29 May 2009; Accepted 13 October 2009

Academic Editor: Irena Trendafilova

Copyright © 2009 N. Parumasur et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A computational technique based on asymptotic analysis for solving singularly perturbed ODE systems involving a small parameter $\varepsilon$ is considered. The focus is on second-order systems, but the procedure is also applicable to first-order systems. Both initial value and boundary value problems are solved. The application of the method is considered over the entire time domain for a wide range of $\varepsilon$, and the resulting approximation is compared with the direct numerical solution. The convection-diffusion problem from fluid mechanics and the telegraph equation from electrical engineering are considered.

1. Introduction

Singularly perturbed problems involving a small parameter are ubiquitous in engineering and present a formidable challenge in their numerical solution. In fluid mechanics, for example, such problems appear as convection-diffusion partial differential equations, which are used in the computation of temperature in compressible flows, as model equations for the concentration of pollutants in fluids, and to describe the momentum relation of the Navier-Stokes equations. In electrical engineering an important application is the telegraph equation, which describes the voltage or the current as a function of time and position along a cable. The discretization of such systems usually leads to stiff systems of ordinary differential equations (ODEs) containing a small parameter, and these require special care in their numerical solution. On the other hand, such systems can be treated by methods based on asymptotic analysis, which yield a reasonable approximate decomposition of the original system into two new systems, for the slow and fast modes, respectively, that are no longer stiff. Of special interest are asymptotic methods based on the Chapman-Enskog procedure (CEP), in which the bulk part of the slow mode is left unexpanded. This technique is especially popular amongst mathematical physicists interested in obtaining the diffusion approximation for a wide range of evolution equations, for example, the Boltzmann equation, the telegraph equation, and the Fokker-Planck equation [1]. However, obtaining diffusion approximations is rather complicated and involved and is confined to specialists in the field. In the 1990s Mika and coworkers [2, 3] considered the application of the CEP to solving ODEs. The mathematical theory was put on a sound footing but, subsequently, no attempts were made to promote the numerical application of the method, despite very promising initial results. One reason for this is that the method based on the CEP requires initial conditions for the unexpanded slow part of the solution, which have to be recovered from additional algebraic relations. These, combined with the initial layer functions, guarantee a uniformly smooth solution over the entire domain. For such a solution to be comparable to a direct numerical solution of the original system this step is crucial. In [3] the authors focused on obtaining uniformly convergent $O(\varepsilon^2)$ approximations in the bulk region. In this paper we propose a method for finding an $O(\varepsilon^3)$ correction to the approximate initial data. Further, we examine the method for obtaining a solution over the entire domain that is comparable to a direct numerical solution. The method is suitable for a larger range of the small parameter $\varepsilon$ than that considered previously. An adaptation of the CEP was also considered in [4, 5] for solving problems in chemical kinetics and nuclear engineering, respectively.

Consider the initial value problem

πœ€π‘‘2π‘₯𝑑𝑑2+𝐴𝑑π‘₯𝑑𝑑+𝑓(π‘₯)=0,π‘₯(0)=𝛼,𝑑π‘₯𝑑𝑑(0)=𝑠,(1.1) where π‘‘βˆˆ[0,𝑑1], 𝑑1>0,π‘₯(𝑑),𝑓(π‘₯),𝛼,π‘ βˆˆβ„π‘›,𝑛β‰₯1, and πœ€ is a small positive parameter. The eigenvalues of the matrix 𝐴 have all positive real parts; hence 𝐴 is invertible. By letting 𝑧=𝑑π‘₯/𝑑𝑑, the second-order system (1.1) is converted to the first-order system:

πœ€π‘‘π‘§π‘‘π‘‘=βˆ’π΄π‘§βˆ’π‘“(π‘₯),𝑑π‘₯𝑑𝑑=𝑧,π‘₯(0)=𝛼,𝑧(0)=𝑠.(1.2) The stiff differential equation (1.1) can be solved directly by using numerical methods, for example the finite difference or finite element method. With a finite difference scheme one could use a standard method on a special mesh or a standard mesh and a special method [6]. This is to guarantee uniformly convergent schemes for all values of πœ€. These techniques, involving discretization on standard meshes and layer-adapted meshes, are well established when 𝑓 is a linear function of π‘₯ [7]. However, in our case 𝑓 is a nonlinear function of the dependent variable π‘₯ and hence is much more difficult to analyse and solve using these methods. On the other hand, an alternative approach commonly adopted, is to convert to the first-order system (1.2) and then solve by using an implicit method, applicable for stiff systems, since the solution of such systems is well established. It is well known that the solution obtained in this way is usually costly. Alternatively, one could use approximation methods from singular perturbation theory [8, 9] for solving (1.1) or (1.2). The drawback of methods in this category is that they are based on heuristic derivations such as matching of the inner solution in the initial layer with the outer solution in the outer layer. The idea is to create an approximate solution by using the inner solution and then switching to an outer solution ensuring that the inner and outer solutions coincide as nearly as possible at the switch point. The problem is finding a suitable place to make the switch from one solution to the other.

The method proposed in this paper aims to avoid the difficulties presented by the preceding two approaches. We follow the asymptotic expansion algorithm first proposed by Mika and Palczewski [2] for solving resonance-type equations in kinetic theory and subsequently applied to singularly perturbed systems of second-order ODEs by Mika and Kozakiewicz [3]. As already mentioned, the method is based on the CEP, in which the bulk solution for the slow variable $x$ remains unexpanded. The expansions are truncated at first order in $\varepsilon$ in order to derive the first-order version of the steady-state approximation. Here we derive new initial conditions for the resulting differential equations by using a Neumann expansion. The main point of the present exposition is that there is no additional cost involved in deriving the new initial condition, whilst at the same time the numerical results are much improved for a larger range of the small parameter $\varepsilon$. The application of the method is also considered for solving boundary value problems. In the numerical investigation we apply the new algorithm to systems of initial value problems (IVPs) and boundary value problems (BVPs). The numerical results for IVPs show a significant improvement in the error and reaffirm the first-order rate of convergence, consistent with the theory [3], whilst the numerical results for BVPs demonstrate fast convergence of the bulk solution.

2. Derivation of the Asymptotic Procedure

In order to keep the present exposition self-contained, we consider some aspects of the derivation of the asymptotic method presented in [3], together with the procedure for obtaining the new initial conditions. The functions $x$ and $z$ in (1.2) are each decomposed into a bulk solution depending on $t$ and an initial layer solution depending on $\tau=t/\varepsilon$. Hence

$$z(t)=\bar z(t)+\tilde z(\tau)+O\!\left(\varepsilon^2\right),\qquad x(t)=w(t)+\tilde x(\tau)+O\!\left(\varepsilon^2\right),\tag{2.1}$$

where

$$\bar z(t)=\bar z_0(t)+\varepsilon\bar z_1(t),\qquad \tilde z(\tau)=\tilde z_0(\tau)+\varepsilon\tilde z_1(\tau),\qquad \tilde x(\tau)=\tilde x_0(\tau)+\varepsilon\tilde x_1(\tau).\tag{2.2}$$

The characteristic feature of the algorithm is that the bulk solution $w$ for the slow variable $x$ remains unexpanded. Substituting (2.1) into (1.2) and equating functions of $t$ and $\tau$ separately, we obtain

πœ€π‘‘π‘§π‘‘π‘‘=βˆ’π΄π‘§βˆ’π‘“(𝑀),(2.3)𝑑𝑀=𝑑𝑑𝑧,(2.4)𝑑̃π‘₯π‘‘πœ=πœ€π‘§,(2.5)π‘‘Μƒπ‘§π‘‘πœ=βˆ’π΄Μƒπ‘§+𝑓(𝑀(πœ€πœ))βˆ’π‘“(𝑀(πœ€πœ)+Μƒπ‘₯).(2.6) Now a postulate of the asymptotic method is that the bulk solution for the fast variable, namely, 𝑧(𝑑), depends on 𝑑 through its functional dependence on 𝑀. To this end, we let πœ™0(𝑀(𝑑)), πœ™1(𝑀(𝑑))βˆˆβ„π‘› be two functions and write

$$\bar z_0(t)=\phi_0(w(t)),\qquad \bar z_1(t)=\phi_1(w(t)).\tag{2.7}$$

Hence

𝑧(𝑑)=πœ™0(𝑀)+πœ€πœ™1(𝑀).(2.8) Substituting (2.8) into (2.3) and (2.4) we obtain

πœ€ξ‚Έπ‘‘πœ™0𝑑𝑀𝑑𝑀𝑑𝑑+πœ€π‘‘πœ™1π‘‘π‘€π‘‘π‘€ξ‚Ήξ€·πœ™π‘‘π‘‘=βˆ’π΄0+πœ€πœ™1ξ€Έβˆ’π‘“(𝑀),(2.9)𝑑𝑀𝑑𝑑=πœ™0(𝑀)+πœ€πœ™1(𝑀).(2.10) Substituting (2.10) into (2.9) and equating coefficients of powers of πœ€ we obtain

πœ™0(𝑀)=βˆ’π΄βˆ’1𝑓(𝑀),(2.11)π‘‘πœ™0πœ™π‘‘π‘€0(𝑀)=βˆ’π΄πœ™1(𝑀).(2.12) and solving the above for πœ™1(𝑀) gives

πœ™1(𝑀)=βˆ’π΄βˆ’2𝑑𝑓(𝑀)π΄π‘‘π‘€βˆ’1𝑓(𝑀).(2.13) Hence, from (2.4) we obtain 𝑀(𝑑) as the solution of the first-order system:

$$\frac{dw}{dt}=-A^{-1}\left[I+\varepsilon A^{-1}\frac{df(w)}{dw}A^{-1}\right]f(w).\tag{2.14}$$

The initial condition $w(0)$ is found by considering the initial conditions in (1.2). For this we require the initial layer solutions. These are obtained by substituting (2.2) into (2.5) and equating powers of $\varepsilon$, yielding
$$\frac{d\tilde x_0}{d\tau}=0,\qquad \frac{d\tilde x_1}{d\tau}=\tilde z_0.\tag{2.15}$$
Clearly $\tilde x_0(\tau)\equiv 0$, since $\lim_{\tau\to\infty}\tilde x_0(\tau)=0$ for an initial layer function. Upon substituting (2.2) into (2.6) and using $\tilde x_0(\tau)\equiv 0$ we further obtain

$$\frac{d\tilde z_0}{d\tau}=-A\tilde z_0.\tag{2.16}$$

Hence

$$\tilde z_0(\tau)=e^{-A\tau}\tilde z_0(0),\tag{2.17}$$
$$\tilde x_1(\tau)=-\int_\tau^\infty\tilde z_0(u)\,du=-\int_\tau^\infty e^{-Au}\tilde z_0(0)\,du=-A^{-1}e^{-A\tau}\tilde z_0(0).\tag{2.18}$$

Using the initial conditions $x(0)=\alpha$ and $z(0)=s$ in (2.1) we obtain, to first order in $\varepsilon$,

$$\alpha=w(0)+\varepsilon\tilde x_1(0),\tag{2.19}$$
$$s=\phi_0(w(0))+\varepsilon\phi_1(w(0))+\tilde z_0(0)+\varepsilon\tilde z_1(0).\tag{2.20}$$

Mika and Kozakiewicz [3] now expand $w(0)$ in powers of $\varepsilon$, namely $w(0)=w(0)|_0+\varepsilon w(0)|_1$. Upon equating coefficients of $\varepsilon$ in (2.19)-(2.20) and using the initial layer functions just derived, they obtained the expression
$$w(0)=\alpha+\varepsilon A^{-1}\left[s+A^{-1}f(\alpha)\right].\tag{2.21}$$
Here $w(0)$ is obtained in a different manner. From (2.20) and (2.11), to zeroth order in $\varepsilon$ we obtain
$$\tilde z_0(0)=s-\phi_0(w(0))=s+A^{-1}f(w(0)).\tag{2.22}$$
Using (2.19) and (2.18) we have
$$w(0)=\alpha-\varepsilon\tilde x_1(0)=\alpha+\varepsilon A^{-1}\left[s+A^{-1}f(w(0))\right].\tag{2.23}$$
Hence $w(0)$ is the root of the, in general, nonlinear vector equation (2.23). The solution of (2.23) may be accomplished using the standard Newton method; however, this is in general an expensive process. In order to avoid it, we derive an explicit expression for $w(0)$ by letting $w(0)=\alpha+\delta(\varepsilon)$, where $\delta(\varepsilon)$ is a function of $\varepsilon$ and $\|\delta(\varepsilon)\|\ll\|\alpha\|$. Upon substitution into (2.23), using Taylor's theorem and ignoring terms of order $\|\delta(\varepsilon)\|^2$, we obtain the new initial condition

$$w(0)=\alpha+\left[I-\varepsilon A^{-2}f'(\alpha)\right]^{-1}\varepsilon A^{-1}\left[s+A^{-1}f(\alpha)\right].\tag{2.24}$$

To simplify the evaluation of (2.24) we use the first-order Neumann expansion of $\left[I-\varepsilon A^{-2}f'(\alpha)\right]^{-1}$, assuming that $\varepsilon\|A^{-2}f'(\alpha)\|<1$ for some compatible matrix norm. Hence, (2.24) simplifies to
$$w(0)=\alpha+\varepsilon A^{-1}\left[s+A^{-1}f(\alpha)\right]+\varepsilon^2A^{-2}f'(\alpha)A^{-1}\left[s+A^{-1}f(\alpha)\right].\tag{2.25}$$
No additional cost is incurred in forming the Jacobian term $f'(\alpha)$ in (2.25), since this derivative is already required in (2.14). We note that (2.25) reduces to (2.21) when the $\varepsilon^2$ term is ignored.
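A minimal NumPy sketch of the three initial conditions (2.21), (2.24), and (2.25) is given below; the $2\times 2$ matrix $A$, the cubic nonlinearity, and the values of $\alpha$, $s$, and $\varepsilon$ are illustrative choices only, not data from the paper.

```python
import numpy as np

# Illustrative data (not from the paper): a 2x2 system with a cubic nonlinearity.
A = np.array([[2.0, 0.5], [0.0, 1.5]])
f = lambda x: x**3
fp = lambda x: np.diag(3.0 * x**2)        # Jacobian f'(x) of the componentwise cube
alpha = np.array([1.0, 0.5])
s = np.array([0.0, 1.0])
eps = 0.05

Ainv = np.linalg.inv(A)
g = Ainv @ (s + Ainv @ f(alpha))          # A^{-1}[s + A^{-1} f(alpha)]

w0_221 = alpha + eps * g                                          # (2.21)
M = np.eye(2) - eps * Ainv @ Ainv @ fp(alpha)                     # I - eps A^{-2} f'(alpha)
w0_224 = alpha + np.linalg.solve(M, eps * g)                      # (2.24), exact linear solve
w0_225 = alpha + eps * g + eps**2 * Ainv @ Ainv @ fp(alpha) @ g   # (2.25), Neumann expansion

print(w0_221, w0_224, w0_225)
```

The difference between the last two values indicates how much accuracy the first-order Neumann expansion gives up relative to the exact solve of (2.24) for the chosen $\varepsilon$.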

The asymptotic convergence of the approximate solutions derived above is proved in [2]. The condition on the nonlinear function in (1.2) is that $f(x)\in C^4(\mathbb{R}^n)$, which supplements the conditions necessary for the system to possess a unique solution on $[0,t_1]$. Furthermore, all eigenvalues of the matrix $A$ have positive real parts, so that the initial layer solutions decay exponentially.

3. Standard Approach

In the standard first-order approach to solving (1.2), described, for example, in [3], the bulk solution $w(t)$ in (2.1) is replaced by the expansion

π‘₯(𝑑)=π‘₯0(𝑑)+πœ€π‘₯1(𝑑).(3.1) With this substitution in (2.3)–(2.6) one obtains the system

πœ€π‘‘ξ€·π‘§π‘‘π‘‘0+πœ€π‘§1𝑧=βˆ’π΄0+πœ€π‘§1ξ€Έξ€·π‘₯βˆ’π‘“0+πœ€π‘₯1𝑧=βˆ’π΄0+πœ€π‘§1ξ€Έξ€·π‘₯βˆ’π‘“0ξ€Έβˆ’π‘“ξ…žξ€·π‘₯0ξ€Έπœ€π‘₯1ξ€·πœ€+𝑂2ξ€Έ,𝑑π‘₯𝑑𝑑0+πœ€π‘₯1ξ€Έ=𝑧0+πœ€π‘§1,π‘‘ξ€·π‘‘πœΜƒπ‘₯0+πœ€Μƒπ‘₯1ξ€Έξ€·=πœ€Μƒπ‘§0+πœ€Μƒπ‘§1ξ€Έ,π‘‘ξ€·π‘‘πœΜƒπ‘§0+πœ€Μƒπ‘§1ξ€Έξ€·=βˆ’π΄Μƒπ‘§0+πœ€Μƒπ‘§1ξ€Έξ€·π‘₯+𝑓0+πœ€π‘₯1ξ€Έξ€·π‘₯βˆ’π‘“0+πœ€π‘₯1+Μƒπ‘₯0+πœ€Μƒπ‘₯1ξ€Έξ€·=βˆ’π΄Μƒπ‘§0+πœ€Μƒπ‘§1ξ€Έξ€·π‘₯+𝑓0ξ€Έξ€·π‘₯+𝑓′0ξ€Έπœ€π‘₯1ξ€·π‘₯βˆ’π‘“0+Μƒπ‘₯0ξ€Έξ€·π‘₯βˆ’π‘“β€²0+Μƒπ‘₯0ξ€Έπœ€ξ€·π‘₯1+Μƒπ‘₯1ξ€Έξ€·πœ€+𝑂2ξ€Έ.(3.2) Upon equating coefficients of πœ€ and using the initial conditions, we obtain the following systems of ODEs to be solved for the bulk solution for the slow variable π‘₯(𝑑):

𝑑π‘₯0𝑑𝑑=βˆ’π΄βˆ’1𝑓π‘₯0ξ€Έ,π‘₯0(0)=𝛼,(3.3)𝑑π‘₯1𝑑𝑑=βˆ’π΄βˆ’2π‘“ξ…žξ€·π‘₯0ξ€Έπ΄βˆ’1𝑓π‘₯0ξ€Έβˆ’π΄βˆ’1π‘“ξ…žξ€·π‘₯0ξ€Έπ‘₯1π‘₯,(3.4)1(0)=π΄βˆ’1𝑠+π΄βˆ’1ξ€».𝑓(𝛼)(3.5) As before Μƒπ‘₯0≑0 and Μƒπ‘₯1(𝜏)=βˆ’π΄βˆ’1π‘’βˆ’π΄πœΜƒπ‘§0(0)=βˆ’π΄βˆ’1π‘’βˆ’π΄πœξ€Ίπ‘ +π΄βˆ’1𝑓,(𝛼)̃𝑧0(𝜏)=π‘’βˆ’π΄πœΜƒπ‘§0(0).(3.6)

4. Boundary Value Problems

Consider the boundary value problem:

πœ€π‘‘2π‘₯𝑑𝑑2+𝐴𝑑π‘₯𝑑𝑑+𝑓(π‘₯)=0,π‘₯(0)=𝛼,π‘₯(1)=𝛽.(4.1)

The boundary value problem (4.1) can be converted to the initial value problem (1.1) by dropping the second boundary condition $x(1)=\beta$ and replacing it with the initial condition $x'(0)=s$. The function $f(x)$ and the matrix $A$ are assumed to satisfy the same conditions as in Section 2, now on $[0,1]$.

The initial value problem has a uniquely determined solution $x(t)=x(t,s)$ which depends on the choice of the initial value $s$ for $x'(0)$. To determine the value of $s$ consistent with the right-hand boundary condition we must find a zero $s=\bar s$ of the function $F(s)=x(1,s)-\beta$. Assuming that $F(s)\in C^2$, one can use Newton's method to determine $\bar s$. Starting with an initial approximation $s_0$, one iteratively computes values $s_i$ according to

$$s_{i+1}=s_i-\left[F'\!\left(s_i\right)\right]^{-1}F\!\left(s_i\right),\tag{4.2}$$

where, for $n=1$, $F'(s_i)$ is approximated by the forward difference formula

𝐹𝑠𝑖𝑠+β„Žβˆ’πΉπ‘–ξ€Έβ„Ž,(4.3) with β„Žβ‰ͺ|𝑠𝑖|. For the case 𝑛>1, the matrix πΉξ…ž(𝑠𝑖) is determined by the following useful technique. The π‘˜th column of πΉξ…ž(𝑠𝑖) is approximated by

𝐹𝑠𝑖+β„Žπ‘’π‘˜ξ€Έξ€·π‘ βˆ’πΉπ‘–ξ€Έβ„Ž,(4.4) where π‘’π‘˜ is the standard unit vector in ℝ𝑛 with unity in the π‘˜th position. Then, π‘₯(1,𝑠𝑖) is determined by solving an initial-value problem:

πœ€π‘‘2π‘₯𝑑𝑑2+𝐴𝑑π‘₯𝑑𝑑+𝑓(π‘₯)=0,π‘₯(0)=𝛼,π‘₯β€²(0)=𝑠𝑖(4.5) up to 𝑑=1.

In this paper, instead of applying the shooting method to the stiff second-order system (4.1), we first apply the first-order asymptotic procedure to the system and then adapt the shooting method to the resulting nonstiff first-order system. Taking $t=1$ in (2.1) and (2.18), it can be shown that

$$w(1):=w(1,s)=\beta+\varepsilon A^{-1}e^{-A/\varepsilon}\left[s+A^{-1}f(w(0,s))\right].\tag{4.6}$$

We solve the first-order system (2.14), with the estimate (2.25) for $w(0)=w(0,s)$, numerically and denote the resulting solution at $t=1$ by $\hat w(1,s)$. It remains to determine $s$ so that $\hat w(1,s)$ agrees with $w(1,s)$ from (4.6). Thus we define $F(s)=\hat w(1,s)-w(1,s)$ and iteratively obtain a zero of $F(s)$ using (4.2). A convenient starting value for the iteration is obtained by setting $\varepsilon=0$ and $t=0$ in (4.1), which yields $s_0=-A^{-1}f(\alpha)$. Essentially, we seek an optimal $s$ such that both $w(0,s)$ and $w(1,s)$ lie on the solution curve $w(t)$ of (2.14).
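The adapted shooting procedure can be sketched for a scalar ($n=1$) problem as follows, assuming SciPy; the boundary data, $A$, $f$, the fixed number of Newton steps, and the difference step $h$ are illustrative choices and do not correspond to any example in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Adapted shooting on the nonstiff bulk system (2.14) for an illustrative
# scalar BVP: eps*x'' + A*x' + f(x) = 0, x(0) = alpha, x(1) = beta.
eps, A, alpha, beta = 0.01, 1.0, 1.0, 0.48
f, fp = (lambda w: w**2), (lambda w: 2.0 * w)

def w_hat_1(s):
    """Integrate (2.14) from the estimate (2.25) for w(0,s); return (w(1,s), w(0,s))."""
    g = (s + f(alpha) / A) / A
    w0 = alpha + eps * g + eps**2 * fp(alpha) * g / A**2              # (2.25)
    rhs = lambda t, w: [-(1.0 + eps * fp(w[0]) / A**2) * f(w[0]) / A]  # (2.14), scalar form
    sol = solve_ivp(rhs, (0.0, 1.0), [w0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1], w0

def F(s):
    """Mismatch between the computed bulk value at t = 1 and the target (4.6)."""
    w1_num, w0 = w_hat_1(s)
    w1_target = beta + eps * np.exp(-A / eps) * (s + f(w0) / A) / A    # (4.6)
    return w1_num - w1_target

s = -f(alpha) / A            # starting guess s0 = -A^{-1} f(alpha)
h = 1e-6
for _ in range(6):           # Newton iteration (4.2) with forward difference (4.3)
    Fs = F(s)
    s -= Fs / ((F(s + h) - Fs) / h)
print("shooting parameter s =", s)
```

Since $w(0,s)$ depends on $s$ only through the $O(\varepsilon)$ terms in (2.25), $F$ is nearly linear in $s$ and the iteration typically settles after a few steps.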

5. Numerical Examples

Example 5.1. Here we choose $n=1$, $A=1$, $f(x)=x^2$ with $\alpha=s=2$ in (1.1). Hence, (3.3)–(3.5) reduce to the ODEs
$$\frac{d\bar x_0}{dt}=-\bar x_0^2,\quad \bar x_0(0)=2,\qquad \frac{d\bar x_1}{dt}=-2\bar x_0\bar x_1-2\bar x_0^3,\quad \bar x_1(0)=6,\tag{5.1}$$
to be solved for the bulk approximation $\bar x(t)=\bar x_0(t)+\varepsilon\bar x_1(t)$ according to the standard approach. The resulting first-order asymptotic solution of (1.1) is
$$\bar x(t)=\frac{\alpha}{1+\alpha t}+\varepsilon\left[\frac{-2\alpha^2\ln(1+\alpha t)+s+\alpha^2}{(1+\alpha t)^2}\right].\tag{5.2}$$
According to the new approach we solve (2.14), which reduces to
$$\frac{dw}{dt}=-\left(1+2\varepsilon w\right)w^2.\tag{5.3}$$
From (2.21) we obtain
$$w(0)=2+6\varepsilon,\tag{5.4}$$
and using (2.25) we obtain the improved initial condition
$$w(0)=2+6\varepsilon+24\varepsilon^2.\tag{5.5}$$
The graphical solutions depicted in Figure 1 are for $\varepsilon=0.1$. The direct numerical solution of (1.1) is denoted by $x$, and the first-order solution corresponding to (5.2), with the initial layer incorporated, is denoted by $x^{(0)}$. When (5.3) is solved using the initial condition (5.4), the solution obtained from (2.1) is denoted by $x^{(1)}$; likewise, when (5.3) is solved using the initial condition (5.5), the solution from (2.1) is denoted by $x^{(2)}$.
Figure 2 shows how the error varies as a function of $\varepsilon$ at $t=0.175$. It is clear that the present algorithm, incorporating the new initial condition (2.25), gives superior results, even for large $\varepsilon$.

Figure 1: Red (dashed): $x$ (direct numerical solution of (1.1)); Green (dash-dotted): $x^{(1)}$ (solution of (5.3) using (5.4)); Blue (solid): $x^{(2)}$ (solution of (5.3) using (5.5)); Black (dotted): $x^{(0)}$ (standard first-order solution (5.2)).

Figure 2: Comparison of the error as a function of $\varepsilon$. Red (dashed): new initial condition (2.25); Blue (solid): old initial condition (2.21).
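The comparison behind Figures 1 and 2 can be sketched as follows, assuming SciPy; the solver tolerances and the use of the Radau method for the direct solution are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Example 5.1 data: A = 1, f(x) = x^2, alpha = s = 2, eps = 0.1.
eps, alpha, s = 0.1, 2.0, 2.0
f, fp = (lambda x: x**2), (lambda x: 2.0 * x)
t = np.linspace(0.0, 1.0, 201)

# Direct (stiff) solution of the first-order system (1.2).
direct = solve_ivp(lambda _, y: [y[1], (-y[1] - f(y[0])) / eps],
                   (0.0, 1.0), [alpha, s], method="Radau",
                   t_eval=t, rtol=1e-10, atol=1e-12)

# Bulk equation (5.3) with the improved initial condition (5.5),
# then the O(eps) initial layer from (2.18) and (2.22) is added back.
w0 = alpha + eps * (s + f(alpha)) + eps**2 * fp(alpha) * (s + f(alpha))   # (5.5)
bulk = solve_ivp(lambda _, w: [-(1.0 + 2.0 * eps * w[0]) * w[0]**2],      # (5.3)
                 (0.0, 1.0), [w0], t_eval=t, rtol=1e-10, atol=1e-12)
z0_tilde0 = s + f(w0)                                                     # (2.22)
x2 = bulk.y[0] - eps * np.exp(-t / eps) * z0_tilde0                       # x^(2) of Figure 1

print(np.max(np.abs(direct.y[0] - x2)))   # maximum deviation from the direct solution on [0, 1]
```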

Example 5.2. Since this paper is concerned with systems of ODEs, the following example is directed at such a case. The technique is illustrated for a system of coupled ODEs by choosing $n=21$, $A=I$, and
$$f_i(x)=x_i^3+x_{i+1}^3,\quad i=1,2,\ldots,n-1,\qquad f_n(x)=x_n^3+x_1^3,$$
$$\alpha_i=\sin\!\left(\frac{\pi(i-1)}{n-1}\right),\qquad s_i=\pi\cos\!\left(\frac{\pi(i-1)}{n-1}\right),\quad i=1,2,\ldots,n,\tag{5.6}$$
in (1.1). We solve (2.14) using an explicit second-order Runge-Kutta method (RK2) with fixed step size 0.01, subject to the initial conditions (2.21) and (2.25), and designate the solutions by $w^{(1)}$ and $w^{(2)}$, respectively. The corresponding solutions in (2.1) are denoted by $x^{(1)}$ and $x^{(2)}$, respectively. Let $x$ denote a highly accurate solution of (1.1) obtained using Mathematica 6.0, which is treated as the exact solution for comparison purposes. Equation (1.1) is also solved using the implicit Euler method with fixed step size 0.01, and this solution is denoted by $x^{(0)}$. Table 1 shows the errors $\|x-w^{(1)}\|_\infty$ and $\|x-w^{(2)}\|_\infty$. The CPU time for computing $x^{(0)}$ was 0.17 seconds, compared with 0.02 seconds for computing both $x^{(1)}$ and $x^{(2)}$, which amounts to a significant saving in computational effort.

Table 1: Errors in $w^{(1)}$ and $w^{(2)}$.
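A minimal sketch of the bulk computation in Example 5.2 is given below; Heun's method is used as one possible explicit RK2 scheme, and the final time $T=1$ is an illustrative choice since the integration interval is not restated here.

```python
import numpy as np

# Example 5.2: n = 21, A = I, cyclic cubic coupling (5.6).
n, eps = 21, 0.01
i = np.arange(n)
alpha = np.sin(np.pi * i / (n - 1))
s = np.pi * np.cos(np.pi * i / (n - 1))

def f(x):
    return x**3 + np.roll(x, -1)**3           # f_i = x_i^3 + x_{i+1}^3 (cyclic)

def fp(x):
    # Jacobian of f: df_i/dx_i = 3 x_i^2, df_i/dx_{i+1} = 3 x_{i+1}^2 (cyclically).
    J = np.diag(3.0 * x**2)
    nxt = (np.arange(n) + 1) % n
    J[np.arange(n), nxt] += 3.0 * x[nxt]**2
    return J

def rhs(w):
    # Bulk equation (2.14) with A = I: dw/dt = -(f(w) + eps * f'(w) f(w)).
    return -(f(w) + eps * fp(w) @ f(w))

# Initial conditions (2.21) and (2.25); with A = I the formulas simplify.
g = s + f(alpha)
w1 = alpha + eps * g                            # (2.21)
w2 = alpha + eps * g + eps**2 * fp(alpha) @ g   # (2.25)

def heun(w, dt, nsteps):
    # Heun's method: an explicit second-order Runge-Kutta scheme with fixed step.
    for _ in range(nsteps):
        k1 = rhs(w)
        k2 = rhs(w + dt * k1)
        w = w + 0.5 * dt * (k1 + k2)
    return w

dt, T = 0.01, 1.0
print(np.linalg.norm(heun(w1, dt, int(T / dt)) - heun(w2, dt, int(T / dt)), np.inf))
```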

Example 5.3. Consider the boundary value problem
$$\varepsilon\frac{d^2x}{dt^2}+2\frac{dx}{dt}-e^x=0,\qquad x(0)=0,\quad x(1)=0.\tag{5.7}$$
Choose πœ€=0.01. Figures 3, 4, and 5 show a plot of the solution profiles π‘₯(𝑑) (numerical solution) and 𝑀(𝑑) (asymptotic bulk solution), corresponding to 𝑠0, 𝑠1, and 𝑠2 used in (4.3) respectively. It is seen that the solution profiles are almost identical (in the bulk region) after the second iteration.

Figure 3: Numerical solution $x(t)$ (dotted) versus first-order asymptotic solution $w(t)$ (solid) ($s=s_0$).

Figure 4: Numerical solution $x(t)$ (dotted) versus first-order asymptotic solution $w(t)$ (solid) after the first Newton iteration ($s=s_1$).

Figure 5: Numerical solution $x(t)$ (dotted) versus first-order asymptotic solution $w(t)$ (solid) after the second Newton iteration ($s=s_2$).

Example 5.4. We consider the following boundary layer problem from fluid mechanics:
$$\varepsilon\Delta u+\gamma\cdot\nabla u=1\quad\text{on }\Omega,\qquad u=0\quad\text{on }\partial\Omega,\tag{5.8}$$
where $u=u(x,y)$, $\gamma$ is a constant, and $\partial\Omega$ is the boundary of the square $\Omega=(0,1)\times(0,1)$. To apply the present algorithm we replace the variable $y$ by $t$ and divide the spatial domain $x\in[0,1]$ into $n+2$ equally spaced points with spacing $h=1/(n+1)$. Letting $X_i=u(x_i,t)$ and using the method of lines to discretize the above equation results in the system of ODEs
$$\varepsilon\frac{d^2X_i}{dt^2}+\gamma\frac{dX_i}{dt}+\gamma\left[\frac{X_{i+1}-X_{i-1}}{2h}\right]+\frac{\varepsilon}{h^2}\left[X_{i+1}-2X_i+X_{i-1}\right]-1=0,\tag{5.9}$$
$i=1,2,\ldots,n$. If we let $X=[X_1,X_2,\ldots,X_n]^T$ and $f(X)=[f_1,f_2,\ldots,f_n]^T$, where
$$f_i=\gamma\left[\frac{X_{i+1}-X_{i-1}}{2h}\right]+\frac{\varepsilon}{h^2}\left[X_{i+1}-2X_i+X_{i-1}\right]-1,\tag{5.10}$$
then the system of ODEs can be written as
$$\varepsilon\frac{d^2X}{dt^2}+\gamma\frac{dX}{dt}+f(X)=0,\tag{5.11}$$
with $X(0)=0$, $X(1)=0$, and $n=10$.
Choose πœ€=0.01 and 𝛾=1. Figure 6 shows a plot of the solution profiles 𝑋5(𝑑) and 𝑀5(𝑑) (asymptotic solution) after the first iteration in 𝑠. It is seen that the solution profiles are almost identical (in the bulk region).

Figure 6: Numerical solution $X_5(t)$ (dotted) versus first-order asymptotic solution $w_5(t)$ (solid) after the first Newton iteration ($s=s_1$).
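A minimal sketch of the method-of-lines right-hand side (5.10) and its tridiagonal Jacobian (needed in the bulk equation (2.14)) is given below; the grid size and parameter values follow the example, while the vectorized padding with the zero lateral boundary values is an implementation choice.

```python
import numpy as np

# Method-of-lines right-hand side (5.10) for Example 5.4: interior nodes
# i = 1..n on a uniform x-grid with spacing h = 1/(n+1); the lateral boundary
# values u(0, t) = u(1, t) = 0 enter as zero ghost values.
n, eps, gamma = 10, 0.01, 1.0
h = 1.0 / (n + 1)

def f(X):
    Xp = np.concatenate(([0.0], X, [0.0]))                    # pad with boundary values
    conv = gamma * (Xp[2:] - Xp[:-2]) / (2.0 * h)             # gamma (X_{i+1} - X_{i-1})/(2h)
    diff = eps * (Xp[2:] - 2.0 * Xp[1:-1] + Xp[:-2]) / h**2   # eps second difference
    return conv + diff - 1.0                                  # (5.10)

def fp(X):
    # Constant tridiagonal Jacobian of (5.10), used when forming (2.14).
    lower = (-gamma / (2.0 * h) + eps / h**2) * np.ones(n - 1)
    main = (-2.0 * eps / h**2) * np.ones(n)
    upper = (gamma / (2.0 * h) + eps / h**2) * np.ones(n - 1)
    return np.diag(lower, -1) + np.diag(main) + np.diag(upper, 1)

print(f(np.zeros(n)))   # at X = 0 the right-hand side is just the constant -1
```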

Example 5.5. Consider the singularly perturbed telegraph equation
$$\varepsilon\frac{\partial^2u}{\partial t^2}+\gamma\frac{\partial u}{\partial t}=\mu\frac{\partial^2u}{\partial x^2}-\sigma(t)u,\tag{5.12}$$
where $u=u(x,t)$, $\gamma$ and $\mu$ are constants, and $\sigma$ is an arbitrary function of $t$. Consider homogeneous Neumann boundary conditions
$$\frac{\partial u}{\partial x}(0,t)=\frac{\partial u}{\partial x}(1,t)=0\tag{5.13}$$
with initial conditions
$$u(x,0)=\sin\pi x,\qquad \frac{\partial u}{\partial t}(x,0)=\pi\cos\pi x.\tag{5.14}$$
As in the previous example we used the method of lines to obtain the discretized system of ODEs:
πœ–π‘‘2𝑋𝑑𝑑2+𝛾𝑑𝑋𝑑𝑑+𝑓(𝑋)=0,(5.15) where 𝑋=[𝑋0,𝑋1,…,𝑋𝑁+1]𝑑, 𝑓(𝑋)=[𝑓0,𝑓1,…,𝑓𝑁+1]𝑑, 𝑓𝑖=(βˆ’π‘’/β„Ž2)[𝑋𝑖+1βˆ’2𝑋𝑖+π‘‹π‘–βˆ’1]+πœŽπ‘‹π‘–, 𝑖=0,1,…,𝑛+1, with π‘‹βˆ’1=𝑋1 and 𝑋𝑁+2=𝑋𝑁. The initial conditions are 𝑋𝑖(0)=sinπœ‹π‘₯𝑖, 𝑋′𝑖(0)=πœ‹cosπœ‹π‘₯𝑖, 𝑖=0,1,…,𝑛+1, and 𝑛=51. Choose πœ–=0.01, 𝛾=1 and 𝜎=1. Figure 7 shows the error in the various solution components. For example, 𝑒6(1) corresponds to the error in the sixth solution component using (2.21) and 𝑒6(2) corresponds to the error in the sixth solution component using (2.25), respectively. It is evident that using the new algorithm proposed in this paper gives superior numerical results.

Figure 7: Errors for various solution components. Red (dotted): using (2.21); Blue (solid): using (2.25).
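A minimal sketch of the semidiscrete right-hand side used in Example 5.5 is given below, with the Neumann conditions imposed through the ghost values $X_{-1}=X_1$ and $X_{n+2}=X_n$; the parameter values follow the example.

```python
import numpy as np

# Method-of-lines right-hand side for the telegraph equation of Example 5.5:
# f_i = -(mu/h^2)(X_{i+1} - 2 X_i + X_{i-1}) + sigma X_i, i = 0, ..., n+1.
n, mu, sigma = 51, 1.0, 1.0
h = 1.0 / (n + 1)
xg = np.linspace(0.0, 1.0, n + 2)              # grid x_0, ..., x_{n+1}

def f(X):
    Xp = np.concatenate(([X[1]], X, [X[-2]]))  # ghost nodes enforce zero normal derivative
    lap = (Xp[2:] - 2.0 * Xp[1:-1] + Xp[:-2]) / h**2
    return -mu * lap + sigma * X

alpha = np.sin(np.pi * xg)                     # X_i(0) = sin(pi x_i), cf. (5.14)
s = np.pi * np.cos(np.pi * xg)                 # X_i'(0) = pi cos(pi x_i)
print(f(alpha)[:3])
```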

6. Conclusion

The numerical solution of singularly perturbed second-order ODEs of the form (1.1) involving a single variable and a linear function 𝑓(π‘₯) is well established. On the other hand, the solution of the system (1.1) with a nonlinear function presents a formidable numerical challenge. The procedure presented in this paper gives an efficient algorithm for solving such systems. The improved algorithm is capable of handling a wider class of problems and at the same time produces a uniform solution over the entire time domain of interest. Furthermore, the numerical examples selected are representative of typical application areas including problems exhibiting multiple boundary layers.

Acknowledgments

The authors would like to thank Professor Janusz Mika for providing useful suggestions during the preparation of this manuscript. The authors would also like to extend their appreciation to the anonymous referees, whose feedback contributed to improving the manuscript.

References

1. J. R. Mika and J. Banasiak, Singularly Perturbed Evolution Equations with Applications to Kinetic Theory, vol. 34 of Series on Advances in Mathematics for Applied Sciences, World Scientific, River Edge, NJ, USA, 1995.
2. J. R. Mika and A. Palczewski, "Asymptotic analysis of singularly perturbed systems of ordinary differential equations," Computers & Mathematics with Applications, vol. 21, no. 10, pp. 13–32, 1991.
3. J. R. Mika and J. M. Kozakiewicz, "First-order asymptotic expansion method for singularly perturbed systems of second-order ordinary differential equations," Computers & Mathematics with Applications, vol. 25, no. 3, pp. 3–11, 1993.
4. M. Galli, M. Groppi, R. Riganti, and G. Spiga, "Singular perturbation techniques in the study of a diatomic gas with reactions of dissociation and recombination," Applied Mathematics and Computation, vol. 146, no. 2-3, pp. 509–531, 2003.
5. R. Beauwens and J. Mika, "On the improved prompt jump approximation," in Proceedings of the 17th IMACS World Congress Scientific Computation, Applied Mathematics and Simulation, Paris, France, July 2005.
6. C. Grossmann, H. Roos, and M. Stynes, Numerical Treatment of Partial Differential Equations, Universitext, Springer, Berlin, Germany, 2007.
7. T. Linss, Layer-Adapted Meshes for Convection-Diffusion Problems, Ph.D. thesis, Technische Universität Dresden, 2007.
8. J. A. Murdock, Perturbations: Theory and Methods, vol. 27 of Classics in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1999.
9. R. E. O'Malley Jr., Introduction to Singular Perturbations, Applied Mathematics and Mechanics, Academic Press, New York, NY, USA, 1974.