Abstract

A computational technique based on asymptotic analysis for solving singularly perturbed ODE systems involving a small parameter 𝜀 is considered. The focus is on second-order systems, but the procedure is also applicable to first-order systems. Both initial value and boundary value problems are solved. The method is applied over the entire time domain for a wide range of 𝜀, and the resulting approximation is compared with the direct numerical solution. The convection-diffusion problem from fluid mechanics and the telegraph equation from electrical engineering are considered.

1. Introduction

Singularly perturbed problems involving a small parameter are ubiquitous in engineering and present a formidable challenge in their numerical solution. For example, in fluid mechanics such problems appear in the form of convection-diffusion partial differential equations, which arise in the computation of temperature in compressible flows, in model equations for the concentration of pollutants in fluids, and in the momentum relations of the Navier-Stokes equations. In electrical engineering an important application is the telegraph equation, describing the voltage or the current as a function of time and position along a cable. The discretization of such systems usually leads to stiff systems of ordinary differential equations (ODEs) containing a small parameter, which require special care in their numerical solution. On the other hand, such systems can be treated by methods based on asymptotic analysis, which yield a reasonable approximate decomposition of the original system into two new systems, for the slow and fast modes, respectively, which are no longer stiff. Of special interest are asymptotic methods based on the Chapman-Enskog procedure (CEP), in which the bulk part of the slow mode is left unexpanded. This technique is especially popular amongst mathematical physicists interested in obtaining the diffusion approximation for a wide range of evolution equations, for example, the Boltzmann equation, telegraph equation, and Fokker-Planck equation [1]. On the other hand, obtaining diffusion approximations is rather involved and is confined to specialists in the field. In the 1990s Mika and coworkers [2, 3] considered the application of the CEP to solving ODEs. The mathematical theory was put on a sound footing, but subsequently no attempts were made to promote the numerical application of the method, despite very promising initial results. One reason for this can be attributed to the fact that their method, based on the CEP, requires initial conditions for the unexpanded slow part of the solution, which have to be recovered from additional algebraic relations. These, combined with the initial layer functions, guarantee a uniformly smooth solution over the entire domain. This step is crucial if the solution is to be comparable to a direct numerical solution of the original system. In [3] the authors focused on obtaining uniformly convergent 𝑂(𝜀²) approximations in the bulk region. In this paper we propose a method for finding an initial condition for the unexpanded slow part of the solution that is accurate to 𝑂(𝜀³). Further, we examine the method for obtaining a solution over the entire domain that is comparable to a direct numerical solution. The method is suitable for a larger range of the small parameter 𝜀 than that considered previously. An adaptation of the CEP was also considered in [4, 5] for solving problems in chemical kinetics and nuclear engineering, respectively.

Consider the initial value problem

$$\varepsilon\frac{d^{2}x}{dt^{2}} + A\frac{dx}{dt} + f(x) = 0, \qquad x(0)=\alpha, \qquad \frac{dx}{dt}(0)=s, \tag{1.1}$$

where 𝑡 ∈ [0,𝑡₁], 𝑡₁ > 0, 𝑥(𝑡), 𝑓(𝑥), 𝛼, 𝑠 ∈ ℝⁿ, 𝑛 ≥ 1, and 𝜀 is a small positive parameter. The eigenvalues of the matrix 𝐴 all have positive real parts; hence 𝐴 is invertible. By letting 𝑧 = 𝑑𝑥/𝑑𝑡, the second-order system (1.1) is converted to the first-order system:

$$\varepsilon\frac{dz}{dt} = -Az - f(x), \qquad \frac{dx}{dt} = z, \qquad x(0)=\alpha, \quad z(0)=s. \tag{1.2}$$

The stiff differential equation (1.1) can be solved directly using numerical methods, for example the finite difference or finite element method. With a finite difference scheme one could use a standard method on a special mesh, or a special method on a standard mesh [6]; this guarantees uniformly convergent schemes for all values of 𝜀. These techniques, involving discretization on standard meshes and layer-adapted meshes, are well established when 𝑓 is a linear function of 𝑥 [7]. However, in our case 𝑓 is a nonlinear function of the dependent variable 𝑥, and hence the problem is much more difficult to analyse and solve using these methods. An alternative approach commonly adopted is to convert to the first-order system (1.2) and then solve with an implicit method suitable for stiff systems, since the solution of such systems is well established. However, the solution obtained in this way is usually computationally costly. Alternatively, one could use approximation methods from singular perturbation theory [8, 9] for solving (1.1) or (1.2). The drawback of methods in this category is that they are based on heuristic derivations such as matching of the inner solution in the initial layer with the outer solution in the outer region. The idea is to construct an approximate solution by using the inner solution and then switching to an outer solution, ensuring that the inner and outer solutions coincide as nearly as possible at the switch point. The difficulty lies in finding a suitable place to make the switch from one solution to the other.
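As a point of reference, a direct numerical treatment of the converted system (1.2) with an off-the-shelf stiff integrator can be sketched as follows (Python/SciPy); the data 𝐴, 𝑓, 𝛼, 𝑠 and the value of 𝜀 below are illustrative placeholders mirroring Example 5.1, not part of the method.

```python
# Direct solution of the stiff first-order system (1.2) with an implicit
# (stiff) integrator.  All data below are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-2
A = np.array([[1.0]])            # eigenvalues must have positive real parts
f = lambda x: x**2               # example nonlinearity
alpha = np.array([2.0])          # x(0)
s = np.array([2.0])              # dx/dt(0)

def rhs(t, y):
    n = len(y) // 2
    x, z = y[:n], y[n:]
    dx = z                               # dx/dt = z
    dz = (-A @ z - f(x)) / eps           # eps dz/dt = -A z - f(x)
    return np.concatenate([dx, dz])

sol = solve_ivp(rhs, (0.0, 1.0), np.concatenate([alpha, s]),
                method="Radau", rtol=1e-8, atol=1e-10)
```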

The method proposed in this paper aims to avoid the difficulties presented by the preceding two approaches. Here, we follow the algorithm of the asymptotic expansion first proposed by Mika and Palczewski [2] for solving resonance type equations in kinetic theory and subsequently applied to solving singularly perturbed systems of second-order ODEs by Mika and Kozakiewicz [3]. As already mentioned, the method is based on the CEP, in which the bulk solution for the slow variable 𝑥 remains unexpanded. The expansions are truncated at first order in 𝜀 in order to derive the first-order version of the steady state approximation. Here we derive new initial conditions for the resulting differential equations by using a Neumann expansion. The main point of the present exposition is that there is no additional cost involved in deriving the new initial condition, whilst at the same time the numerical results are much improved for a larger range of the small parameter 𝜀. The application of the method is also considered for solving boundary value problems. In performing a numerical investigation we apply the new algorithm to systems of initial value problems (IVPs) and boundary value problems (BVPs). The numerical results for IVPs show a significant improvement in the error and reaffirm the first-order rate of convergence, which is consistent with the theory [3], whilst the numerical results for BVPs demonstrate fast convergence of the bulk solution.

2. Derivation of the Asymptotic Procedure

In order to keep the present exposition self-contained, we consider some aspects of the derivation of the asymptotic method presented in [3] together with the procedure for obtaining new initial conditions. The functions 𝑥 and 𝑧 in (1.2) are each decomposed into a bulk solution depending on 𝑡 and an initial layer solution depending on 𝜏=𝑡/𝜀. Hence

$$z(t) = \bar{z}(t) + \tilde{z}(\tau) + O\!\left(\varepsilon^{2}\right), \qquad x(t) = w(t) + \tilde{x}(\tau) + O\!\left(\varepsilon^{2}\right), \tag{2.1}$$

where

$$\bar{z}(t) = \bar{z}_{0}(t) + \varepsilon\bar{z}_{1}(t), \qquad \tilde{z}(\tau) = \tilde{z}_{0}(\tau) + \varepsilon\tilde{z}_{1}(\tau), \qquad \tilde{x}(\tau) = \tilde{x}_{0}(\tau) + \varepsilon\tilde{x}_{1}(\tau). \tag{2.2}$$

The characteristic feature of the algorithm is that the bulk solution 𝑤 for the slow variable 𝑥 remains unexpanded. Substituting (2.1) into (1.2) and equating functions of 𝑡 and 𝜏 separately, we obtain

$$\varepsilon\frac{d\bar{z}}{dt} = -A\bar{z} - f(w), \tag{2.3}$$
$$\frac{dw}{dt} = \bar{z}, \tag{2.4}$$
$$\frac{d\tilde{x}}{d\tau} = \varepsilon\tilde{z}, \tag{2.5}$$
$$\frac{d\tilde{z}}{d\tau} = -A\tilde{z} + f(w(\varepsilon\tau)) - f(w(\varepsilon\tau) + \tilde{x}). \tag{2.6}$$

Now a postulate of the asymptotic method is that the bulk solution for the fast variable, namely z̄(𝑡), depends on 𝑡 through its functional dependence on 𝑤. To this end, we let 𝜙₀(𝑤(𝑡)), 𝜙₁(𝑤(𝑡)) ∈ ℝⁿ be two functions and write

$$\bar{z}_{0}(t) = \phi_{0}(w(t)), \qquad \bar{z}_{1}(t) = \phi_{1}(w(t)). \tag{2.7}$$

Hence

$$\bar{z}(t) = \phi_{0}(w) + \varepsilon\phi_{1}(w). \tag{2.8}$$

Substituting (2.8) into (2.3) and (2.4) we obtain

$$\varepsilon\frac{d\phi_{0}}{dw}\frac{dw}{dt} + \varepsilon^{2}\frac{d\phi_{1}}{dw}\frac{dw}{dt} = -A\left(\phi_{0} + \varepsilon\phi_{1}\right) - f(w), \tag{2.9}$$
$$\frac{dw}{dt} = \phi_{0}(w) + \varepsilon\phi_{1}(w). \tag{2.10}$$

Substituting (2.10) into (2.9) and equating coefficients of powers of 𝜀 we obtain

$$\phi_{0}(w) = -A^{-1}f(w), \tag{2.11}$$
$$\frac{d\phi_{0}}{dw}\,\phi_{0}(w) = -A\phi_{1}(w), \tag{2.12}$$

and solving the above for 𝜙₁(𝑤) gives

$$\phi_{1}(w) = -A^{-2}\,\frac{df(w)}{dw}\,A^{-1}f(w). \tag{2.13}$$

Hence, from (2.4) we obtain 𝑤(𝑡) as the solution of the first-order system:

$$\frac{dw}{dt} = -A^{-1}\left[I + \varepsilon A^{-1}\frac{df(w)}{dw}A^{-1}\right]f(w). \tag{2.14}$$

The initial condition 𝑤(0) is found by considering the initial conditions in (1.2). For this we require the initial layer solutions. These are obtained by substituting (2.2) into (2.5) and equating powers of 𝜀, yielding

$$\frac{d\tilde{x}_{0}}{d\tau} = 0, \qquad \frac{d\tilde{x}_{1}}{d\tau} = \tilde{z}_{0}. \tag{2.15}$$

Clearly x̃₀(𝜏) ≡ 0, since x̃₀(𝜏) → 0 as 𝜏 → ∞ for an initial layer function. Upon substituting (2.2) into (2.6) and using x̃₀(𝜏) ≡ 0 we further obtain

$$\frac{d\tilde{z}_{0}}{d\tau} = -A\tilde{z}_{0}. \tag{2.16}$$

Hence

$$\tilde{z}_{0}(\tau) = e^{-A\tau}\tilde{z}_{0}(0), \tag{2.17}$$
$$\tilde{x}_{1}(\tau) = -\int_{\tau}^{\infty}\tilde{z}_{0}(u)\,du = -\int_{\tau}^{\infty}e^{-Au}\tilde{z}_{0}(0)\,du = -A^{-1}e^{-A\tau}\tilde{z}_{0}(0). \tag{2.18}$$

Using the initial conditions 𝑥(0)=𝛼 and 𝑧(0)=𝑠 in (2.1) we obtain to first-order in 𝜀 that

$$\alpha = w(0) + \varepsilon\tilde{x}_{1}(0), \tag{2.19}$$
$$s = \phi_{0}(w(0)) + \varepsilon\phi_{1}(w(0)) + \tilde{z}_{0}(0) + \varepsilon\tilde{z}_{1}(0). \tag{2.20}$$

Mika and Kozakiewicz [3] now expand 𝑤(0) into powers of 𝜀, namely 𝑤(0) = 𝑤(0)|₀ + 𝜀𝑤(0)|₁. Upon equating coefficients of 𝜀 in (2.19)-(2.20) and using the initial layer functions just derived, they obtained the expression

$$w(0) = \alpha + \varepsilon A^{-1}\left[s + A^{-1}f(\alpha)\right]. \tag{2.21}$$

Here 𝑤(0) is obtained in a different manner. From (2.20) and (2.11), to zeroth order in 𝜀 we obtain

$$\tilde{z}_{0}(0) = s - \phi_{0}(w(0)) = s + A^{-1}f(w(0)). \tag{2.22}$$

Using (2.19) and (2.18) we have

$$w(0) = \alpha - \varepsilon\tilde{x}_{1}(0) = \alpha + \varepsilon A^{-1}\left[s + A^{-1}f(w(0))\right]. \tag{2.23}$$

Hence 𝑤(0) is the root of the, in general, nonlinear vector equation (2.23). The solution of (2.23) may be accomplished using the standard Newton method; however, this is in general an expensive process. To avoid it we derive an explicit expression for 𝑤(0) by letting 𝑤(0) = 𝛼 + 𝛿(𝜀), where 𝛿(𝜀) is a function of 𝜀 and ‖𝛿(𝜀)‖ ≪ ‖𝛼‖. Upon substitution into (2.23), using Taylor's theorem and ignoring terms of order 𝛿(𝜀)², we obtain the new initial condition

$$w(0) = \alpha + \left[I - \varepsilon A^{-2}f'(\alpha)\right]^{-1}\varepsilon A^{-1}\left[s + A^{-1}f(\alpha)\right]. \tag{2.24}$$

To simplify the evaluation of (2.24) we use the first-order Neumann expansion of [𝐼 − 𝜀𝐴⁻²𝑓′(𝛼)]⁻¹, assuming that 𝜀‖𝐴⁻²𝑓′(𝛼)‖ < 1 for any compatible matrix norm. Hence, (2.24) simplifies to

$$w(0) = \alpha + \varepsilon A^{-1}\left[s + A^{-1}f(\alpha)\right] + \varepsilon^{2}A^{-2}f'(\alpha)A^{-1}\left[s + A^{-1}f(\alpha)\right]. \tag{2.25}$$

It is clear that no additional cost is incurred in forming the Jacobian matrix 𝑓′(𝛼) in (2.25), since this derivative is already required in (2.14). We note that (2.25) reduces to (2.21) when the 𝜀² term is ignored.
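A minimal sketch (in Python, under the stated assumptions and not the authors' code) collecting the ingredients derived so far: the nonstiff bulk system (2.14), the corrected initial condition (2.25), and the first-order initial layer correction built from (2.18) and (2.22). The matrix 𝐴, the nonlinearity 𝑓, and its Jacobian df are assumed to be supplied by the user; all names are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def bulk_rhs(t, w, A, f, df, eps):
    # dw/dt = -A^{-1} [ I + eps A^{-1} f'(w) A^{-1} ] f(w)                (2.14)
    Ainv = np.linalg.inv(A)
    return -Ainv @ (np.eye(len(w)) + eps * Ainv @ df(w) @ Ainv) @ f(w)

def corrected_w0(alpha, s, A, f, df, eps):
    # w(0) = alpha + eps A^{-1}[s + A^{-1} f(alpha)]
    #              + eps^2 A^{-2} f'(alpha) A^{-1}[s + A^{-1} f(alpha)]   (2.25)
    Ainv = np.linalg.inv(A)
    v = Ainv @ (s + Ainv @ f(alpha))
    return alpha + eps * v + eps**2 * Ainv @ Ainv @ df(alpha) @ v

def layer_correction(t, w0, s, A, f, eps):
    # eps * x~_1(t/eps), with x~_1(tau) = -A^{-1} e^{-A tau} z~_0(0)      (2.18)
    # and z~_0(0) = s + A^{-1} f(w(0))                                    (2.22)
    Ainv = np.linalg.inv(A)
    z0 = s + Ainv @ f(w0)
    return -eps * Ainv @ expm(-A * t / eps) @ z0
```

The first-order approximation of (2.1) is then 𝑤(𝑡) plus the layer correction, where 𝑤 is obtained by integrating bulk_rhs from corrected_w0 with any explicit (nonstiff) scheme.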

The asymptotic convergence of the approximate solutions derived above is proved in [2]. The condition on the nonlinear function in (1.2) is that 𝑓(𝑥) ∈ 𝐶⁴(ℝⁿ), which supplements the conditions necessary for the system to possess a unique solution on [0,𝑡₁]. Furthermore, all eigenvalues of the matrix 𝐴 have positive real parts, so that the initial layer solutions decay exponentially.

3. Standard Approach

In the standard first-order approach to solving (1.2), described, for example, in [3], 𝑤(𝑡) in (2.1) is replaced by

$$\bar{x}(t) = \bar{x}_{0}(t) + \varepsilon\bar{x}_{1}(t). \tag{3.1}$$

With this substitution in (2.3)–(2.6) one obtains the system

$$\begin{aligned}
\varepsilon\frac{d}{dt}\left(\bar{z}_{0}+\varepsilon\bar{z}_{1}\right) &= -A\left(\bar{z}_{0}+\varepsilon\bar{z}_{1}\right) - f\left(\bar{x}_{0}+\varepsilon\bar{x}_{1}\right) \\
&= -A\left(\bar{z}_{0}+\varepsilon\bar{z}_{1}\right) - f\left(\bar{x}_{0}\right) - f'\left(\bar{x}_{0}\right)\varepsilon\bar{x}_{1} + O\!\left(\varepsilon^{2}\right), \\
\frac{d}{dt}\left(\bar{x}_{0}+\varepsilon\bar{x}_{1}\right) &= \bar{z}_{0}+\varepsilon\bar{z}_{1}, \\
\frac{d}{d\tau}\left(\tilde{x}_{0}+\varepsilon\tilde{x}_{1}\right) &= \varepsilon\left(\tilde{z}_{0}+\varepsilon\tilde{z}_{1}\right), \\
\frac{d}{d\tau}\left(\tilde{z}_{0}+\varepsilon\tilde{z}_{1}\right) &= -A\left(\tilde{z}_{0}+\varepsilon\tilde{z}_{1}\right) + f\left(\bar{x}_{0}+\varepsilon\bar{x}_{1}\right) - f\left(\bar{x}_{0}+\varepsilon\bar{x}_{1}+\tilde{x}_{0}+\varepsilon\tilde{x}_{1}\right) \\
&= -A\left(\tilde{z}_{0}+\varepsilon\tilde{z}_{1}\right) + f\left(\bar{x}_{0}\right) + f'\left(\bar{x}_{0}\right)\varepsilon\bar{x}_{1} - f\left(\bar{x}_{0}+\tilde{x}_{0}\right) - f'\left(\bar{x}_{0}+\tilde{x}_{0}\right)\left(\varepsilon\bar{x}_{1}+\varepsilon\tilde{x}_{1}\right) + O\!\left(\varepsilon^{2}\right).
\end{aligned} \tag{3.2}$$

Upon equating coefficients of 𝜀 and using the initial conditions, we obtain the following systems of ODEs to be solved for the bulk solution of the slow variable x̄(𝑡):

$$\frac{d\bar{x}_{0}}{dt} = -A^{-1}f\left(\bar{x}_{0}\right), \qquad \bar{x}_{0}(0)=\alpha, \tag{3.3}$$
$$\frac{d\bar{x}_{1}}{dt} = -A^{-2}f'\left(\bar{x}_{0}\right)A^{-1}f\left(\bar{x}_{0}\right) - A^{-1}f'\left(\bar{x}_{0}\right)\bar{x}_{1}, \tag{3.4}$$
$$\bar{x}_{1}(0) = A^{-1}\left[s + A^{-1}f(\alpha)\right]. \tag{3.5}$$

As before, x̃₀ ≡ 0 and

$$\tilde{x}_{1}(\tau) = -A^{-1}e^{-A\tau}\tilde{z}_{0}(0) = -A^{-1}e^{-A\tau}\left[s + A^{-1}f(\alpha)\right], \qquad \tilde{z}_{0}(\tau) = e^{-A\tau}\tilde{z}_{0}(0). \tag{3.6}$$
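For comparison, a sketch of the standard approach (3.3)–(3.5) in the same style as the earlier sketch, again assuming 𝐴, 𝑓, and the Jacobian df are supplied; the bulk approximation is then x̄ = x̄₀ + 𝜀x̄₁.

```python
import numpy as np

def standard_rhs(t, y, A, f, df):
    # y stacks x0 and x1; the right-hand sides are (3.3) and (3.4)
    n = len(y) // 2
    x0, x1 = y[:n], y[n:]
    Ainv = np.linalg.inv(A)
    dx0 = -Ainv @ f(x0)                                                # (3.3)
    dx1 = -Ainv @ Ainv @ df(x0) @ Ainv @ f(x0) - Ainv @ df(x0) @ x1    # (3.4)
    return np.concatenate([dx0, dx1])

def standard_init(alpha, s, A, f):
    # initial data x0(0) = alpha and x1(0) from (3.5)
    Ainv = np.linalg.inv(A)
    return np.concatenate([alpha, Ainv @ (s + Ainv @ f(alpha))])
```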

4. Boundary Value Problems

Consider the boundary value problem:

$$\varepsilon\frac{d^{2}x}{dt^{2}} + A\frac{dx}{dt} + f(x) = 0, \qquad x(0)=\alpha, \qquad x(1)=\beta. \tag{4.1}$$

The boundary value problem (4.1) can be converted to an initial value problem of the form (1.1) by dropping the second boundary condition 𝑥(1) = 𝛽 and replacing it with the initial condition 𝑑𝑥/𝑑𝑡(0) = 𝑠. Hence, 𝑓(𝑥) and 𝐴 are assumed to satisfy conditions similar to those of Section 2, now on [0,1].

The initial value problem has a uniquely determined solution 𝑥(𝑡) = 𝑥(𝑡,𝑠), which depends on the choice of the initial value 𝑠 for 𝑑𝑥/𝑑𝑡(0). To determine the value of 𝑠 consistent with the right-hand boundary condition we must find a zero 𝑠 = s̄ of the function 𝐹(𝑠) = 𝑥(1,𝑠) − 𝛽. Assuming that 𝐹 is a 𝐶² function of 𝑠, one can use Newton's method to determine s̄. Starting with an initial approximation 𝑠₀, one iteratively computes values 𝑠ᵢ according to

$$s_{i+1} = s_{i} - \left[F'\left(s_{i}\right)\right]^{-1}F\left(s_{i}\right), \tag{4.2}$$

where for 𝑛 = 1, 𝐹′(𝑠ᵢ) is approximated by the forward difference formula

$$F'\left(s_{i}\right) \approx \frac{F\left(s_{i}+\Delta s\right) - F\left(s_{i}\right)}{\Delta s}, \tag{4.3}$$

with |Δ𝑠| ≪ |𝑠ᵢ|. For the case 𝑛 > 1, the matrix 𝐹′(𝑠ᵢ) is determined by the following useful technique. The 𝑘th column of 𝐹′(𝑠ᵢ) is approximated by

$$\frac{F\left(s_{i}+\Delta s\,e_{k}\right) - F\left(s_{i}\right)}{\Delta s}, \tag{4.4}$$

where 𝑒ₖ is the standard unit vector in ℝⁿ with unity in the 𝑘th position. Then, 𝑥(1,𝑠ᵢ) is determined by solving the initial value problem

$$\varepsilon\frac{d^{2}x}{dt^{2}} + A\frac{dx}{dt} + f(x) = 0, \qquad x(0)=\alpha, \qquad \frac{dx}{dt}(0)=s_{i} \tag{4.5}$$

up to 𝑡 = 1.
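A sketch of the forward-difference construction (4.3)-(4.4) of 𝐹′(𝑠ᵢ), written for a generic residual function F; the increment delta is an illustrative choice rather than a value prescribed in the text.

```python
import numpy as np

def shooting_jacobian(F, s, delta=1e-6):
    # Approximate F'(s) column by column via (4.4); each column requires one
    # extra evaluation of F, i.e. one extra initial value solve.
    n = len(s)
    Fs = F(s)
    J = np.empty((n, n))
    for k in range(n):
        e_k = np.zeros(n)
        e_k[k] = 1.0
        J[:, k] = (F(s + delta * e_k) - Fs) / delta
    return J
```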

In this paper instead of applying the shooting method to the stiff second-order system of equations (4.1) we first apply the first-order asymptotic procedure to the system and then adapt the shooting method to the resulting nonstiff first-order system. Now taking 𝑡=1 in (2.1) and (2.18) it can be shown that

$$w(1) = w(1,s) = \beta + \varepsilon A^{-1}e^{-A/\varepsilon}\left[s + A^{-1}f(w(0,s))\right]. \tag{4.6}$$

We solve the first-order system (2.14), with the estimate (2.25) for 𝑤(0) = 𝑤(0,𝑠), numerically, and denote the resulting solution at 𝑡 = 1 by ŵ(1,𝑠). It remains to determine 𝑠 so that ŵ(1,𝑠) agrees with 𝑤(1,𝑠) from (4.6). Thus we define 𝐹(𝑠) = ŵ(1,𝑠) − 𝑤(1,𝑠) and iteratively obtain a zero of 𝐹(𝑠) using (4.2). A convenient starting value for the iteration is obtained by setting 𝜀 = 0 and 𝑡 = 0 in (4.1), which yields 𝑠₀ = −𝐴⁻¹𝑓(𝛼). Essentially, we seek an optimal 𝑠 such that both 𝑤(0,𝑠) and 𝑤(1,𝑠) lie on the solution curve 𝑤(𝑡) of (2.14).
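A sketch of this adapted shooting procedure, reusing the bulk_rhs and corrected_w0 helpers from the Section 2 sketch and the shooting_jacobian helper above; the integrator settings and the fixed number of Newton iterations are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

def residual(s, alpha, beta, A, f, df, eps):
    # F(s): integrate the nonstiff bulk system (2.14) from the estimate (2.25)
    # and compare the value at t = 1 with the target (4.6).
    w0 = corrected_w0(alpha, s, A, f, df, eps)
    sol = solve_ivp(bulk_rhs, (0.0, 1.0), w0, args=(A, f, df, eps),
                    method="RK45", rtol=1e-8)
    w_hat_1 = sol.y[:, -1]
    Ainv = np.linalg.inv(A)
    target = beta + eps * Ainv @ expm(-A / eps) @ (s + Ainv @ f(w0))   # (4.6)
    return w_hat_1 - target

def shoot(alpha, beta, A, f, df, eps, iters=5):
    s = -np.linalg.inv(A) @ f(alpha)       # starting value s0 = -A^{-1} f(alpha)
    for _ in range(iters):
        F = lambda v: residual(v, alpha, beta, A, f, df, eps)
        s = s - np.linalg.solve(shooting_jacobian(F, s), F(s))  # Newton step (4.2)
    return s
```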

5. Numerical Examples

Example 5.1. Here we choose 𝑛 = 1, 𝐴 = 1, 𝑓(𝑥) = 𝑥² with 𝛼 = 𝑠 = 2 in (1.1). Hence, (3.3)–(3.5) reduce to the ODEs

$$\frac{d\bar{x}_{0}}{dt} = -\bar{x}_{0}^{2}, \quad \bar{x}_{0}(0)=2, \qquad \frac{d\bar{x}_{1}}{dt} = -2\bar{x}_{0}\bar{x}_{1} - 2\bar{x}_{0}^{3}, \quad \bar{x}_{1}(0)=6, \tag{5.1}$$

to be solved for the bulk approximation x̄(𝑡) = x̄₀(𝑡) + 𝜀x̄₁(𝑡) according to the standard approach. The first-order asymptotic bulk solution of (1.1) obtained with the standard approach is

$$\bar{x}(t) = \frac{\alpha}{1+\alpha t} + \varepsilon\,\frac{-2\alpha^{2}\ln(1+\alpha t) + s + \alpha^{2}}{(1+\alpha t)^{2}}. \tag{5.2}$$
According to the new approach we solve (2.14) which reduces to
$$\frac{dw}{dt} = -(1+2\varepsilon w)\,w^{2}. \tag{5.3}$$

From (2.21) we obtain

$$w(0) = 2 + 6\varepsilon, \tag{5.4}$$

and using (2.25) we obtain the improved initial condition

$$w(0) = 2 + 6\varepsilon + 24\varepsilon^{2}. \tag{5.5}$$
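A sketch of this example under the new approach: (5.3) is integrated from the corrected initial value (5.5) and the first-order layer term is added back as in (2.1); the value 𝜀 = 0.1 matches Figure 1, and the integrator and its tolerances are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.1
w0 = 2 + 6 * eps + 24 * eps**2                           # (5.5)
sol = solve_ivp(lambda t, w: -(1 + 2 * eps * w) * w**2,  # (5.3)
                (0.0, 1.0), [w0], dense_output=True, rtol=1e-10)

def x_approx(t):
    # x(t) ~ w(t) + eps * x~_1(t/eps), with z~_0(0) = s + f(w(0)) since A = 1
    w = sol.sol(t)[0]
    z0 = 2 + w0**2                                       # (2.22)
    return w - eps * np.exp(-t / eps) * z0               # layer term (2.18) added per (2.1)
```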
The graphical solutions depicted in Figure 1 are for 𝜀=0.1. The direct numerical solution of (1.1) is denoted by 𝑥 and the first-order solution corresponding to (5.2) with the initial layer incorporated is denoted by 𝑥(0). When (5.3) is solved using the initial condition (5.4), the solution from (2.1) is denoted by 𝑥(1); likewise when (5.3) is solved using the initial condition (5.5) the solution from (2.1) is denoted by 𝑥(2).
Figure 2 shows how the error varies as a function of 𝜀 for 𝑡 = 0.175. It is clear that the present algorithm, incorporating the new initial condition (2.25), gives superior results, even for large 𝜀.

Example 5.2. Since this paper is concerned with systems of ODEs, the following example is directed to such a case. The technique is illustrated for a system of coupled ODEs by choosing 𝑛 = 21, 𝐴 = 𝐼, and

$$f_{i}(x) = x_{i}^{3} + x_{i+1}^{3}, \quad i=1,2,\ldots,n-1, \qquad f_{n}(x) = x_{n}^{3} + x_{1}^{3},$$
$$\alpha_{i} = \sin\frac{\pi(i-1)}{n-1}, \qquad s_{i} = \pi\cos\frac{\pi(i-1)}{n-1}, \quad i=1,2,\ldots,n, \tag{5.6}$$

in (1.1). We solve (2.14) using an explicit Runge-Kutta method of order 2 (RK2) with fixed step size 0.01, subject to the initial conditions (2.21) and (2.25), and designate the solutions by 𝑤(1) and 𝑤(2), respectively. The corresponding solutions in (2.1) are denoted by 𝑥(1) and 𝑥(2), respectively. Let 𝑥 denote a highly accurate solution of (1.1) obtained using Mathematica 6.0, which is treated as the exact solution for comparison purposes. Equation (1.1) is also solved using the implicit Euler method with fixed step size 0.01, and that solution is denoted by 𝑥(0). Table 1 shows the errors 𝑥 − 𝑤(1) and 𝑥 − 𝑤(2), respectively. The CPU time for computing 𝑥(0) was 0.17 second, as compared to 0.02 second for computing both 𝑥(1) and 𝑥(2), which amounts to a significant saving in computational effort.
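A sketch of the data of this example (with 𝑛 = 21 and 𝐴 = 𝐼), together with the Jacobian of 𝑓 needed by (2.14) and (2.25); the 0-based indexing is an implementation detail and the helper names are illustrative.

```python
import numpy as np

n = 21
A = np.eye(n)
idx = np.arange(n)                               # i - 1 = 0, 1, ..., n-1
alpha = np.sin(np.pi * idx / (n - 1))            # alpha_i = sin(pi (i-1)/(n-1))
s = np.pi * np.cos(np.pi * idx / (n - 1))        # initial slope data from (5.6)

def f(x):
    # f_i = x_i^3 + x_{i+1}^3, with the last component wrapping around to x_1
    return x**3 + np.roll(x, -1)**3

def df(x):
    # Jacobian of f: df_i/dx_i = 3 x_i^2 and df_i/dx_{i+1} = 3 x_{i+1}^2 (cyclic)
    J = np.diag(3.0 * x**2)
    for k in range(n):
        J[k, (k + 1) % n] = 3.0 * x[(k + 1) % n]**2
    return J
```

These data can be passed to the bulk_rhs and corrected_w0 sketches of Section 2 and integrated with a fixed-step RK2 scheme as described above.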

Example 5.3. One has

$$\varepsilon\frac{d^{2}x}{dt^{2}} + 2\frac{dx}{dt} - e^{x} = 0, \qquad x(0)=0, \qquad x(1)=0. \tag{5.7}$$
Choose 𝜀 = 0.01. Figures 3, 4, and 5 show plots of the solution profiles 𝑥(𝑡) (numerical solution) and 𝑤(𝑡) (asymptotic bulk solution) corresponding to the iterates 𝑠₀, 𝑠₁, and 𝑠₂ of (4.2), respectively. It is seen that the solution profiles are almost identical (in the bulk region) after the second iteration.

Example 5.4. We consider the following boundary layer problem from fluid mechanics:

$$\varepsilon\Delta u + \gamma\left(\frac{\partial u}{\partial x} + \frac{\partial u}{\partial y}\right) = 1 \quad\text{on } \Omega, \qquad u = 0 \quad\text{on } \partial\Omega, \tag{5.8}$$

where 𝑢 = 𝑢(𝑥,𝑦), 𝛾 is a constant, and 𝜕Ω is the boundary of the square Ω = (0,1)×(0,1). To apply the present algorithm we replace the variable 𝑦 by 𝑡 and divide the spatial domain 𝑥 ∈ (0,1) into 𝑛 + 2 equally spaced points with spacing ℎ = 1/(𝑛+1). Letting 𝑋ᵢ = 𝑢(𝑥ᵢ,𝑡) and using the method of lines to discretize the above equation results in the system of ODEs

$$\varepsilon\frac{d^{2}X_{i}}{dt^{2}} + \gamma\frac{dX_{i}}{dt} + \gamma\,\frac{X_{i+1}-X_{i-1}}{2h} + \frac{\varepsilon}{h^{2}}\left(X_{i+1}-2X_{i}+X_{i-1}\right) - 1 = 0, \tag{5.9}$$

𝑖 = 1,2,…,𝑛. If we let 𝑋 = [𝑋₁,𝑋₂,…,𝑋ₙ]ᵗ and 𝑓(𝑋) = [𝑓₁,𝑓₂,…,𝑓ₙ]ᵗ, where

$$f_{i}(X) = \gamma\,\frac{X_{i+1}-X_{i-1}}{2h} + \frac{\varepsilon}{h^{2}}\left(X_{i+1}-2X_{i}+X_{i-1}\right) - 1, \tag{5.10}$$

then the system of ODEs can be written as

$$\varepsilon\frac{d^{2}X}{dt^{2}} + \gamma\frac{dX}{dt} + f(X) = 0, \tag{5.11}$$

with 𝑋(0) = 0, 𝑋(1) = 0, and 𝑛 = 10.
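A sketch of the method-of-lines nonlinearity (5.10), with the homogeneous Dirichlet values at 𝑥 = 0 and 𝑥 = 1 supplied as zero padding; the values of 𝑛, 𝛾, and 𝜀 are those quoted in the text, and the vectorized implementation is an illustrative choice.

```python
import numpy as np

n, gamma, eps = 10, 1.0, 0.01
h = 1.0 / (n + 1)

def f(X):
    # f_i from (5.10), for the interior unknowns X_1, ..., X_n
    Xp = np.concatenate([[0.0], X, [0.0]])                   # u = 0 at x = 0 and x = 1
    conv = gamma * (Xp[2:] - Xp[:-2]) / (2 * h)              # gamma * u_x
    diff = eps * (Xp[2:] - 2 * Xp[1:-1] + Xp[:-2]) / h**2    # eps * u_xx
    return conv + diff - 1.0
```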
Choose 𝜀=0.01 and 𝛾=1. Figure 6 shows a plot of the solution profiles 𝑋5(𝑡) and 𝑤5(𝑡) (asymptotic solution) after the first iteration in 𝑠. It is seen that the solution profiles are almost identical (in the bulk region).

Example 5.5. Consider the singularly perturbed telegraph equation:

$$\varepsilon\frac{\partial^{2}u}{\partial t^{2}} + \gamma\frac{\partial u}{\partial t} = \mu\frac{\partial^{2}u}{\partial x^{2}} - \sigma(t)\,u, \tag{5.12}$$

where 𝑢 = 𝑢(𝑥,𝑡), 𝛾 and 𝜇 are constants, and 𝜎 is an arbitrary function of 𝑡, subject to the Neumann boundary conditions

$$\frac{\partial u}{\partial x}(0,t) = \frac{\partial u}{\partial x}(1,t) = 0 \tag{5.13}$$

and the initial conditions

$$u(x,0) = \sin\pi x, \qquad \frac{\partial u}{\partial t}(x,0) = \pi\cos\pi x. \tag{5.14}$$
As in the previous example we used the method of lines to obtain the discretized system of ODEs:
$$\varepsilon\frac{d^{2}X}{dt^{2}} + \gamma\frac{dX}{dt} + f(X) = 0, \tag{5.15}$$

where

$$X = \left[X_{0},X_{1},\ldots,X_{N+1}\right]^{t}, \qquad f(X) = \left[f_{0},f_{1},\ldots,f_{N+1}\right]^{t},$$
$$f_{i} = -\frac{\mu}{h^{2}}\left[X_{i+1}-2X_{i}+X_{i-1}\right] + \sigma X_{i}, \quad i=0,1,\ldots,N+1,$$

with the ghost values $X_{-1}=X_{1}$ and $X_{N+2}=X_{N}$ arising from the Neumann boundary conditions. The initial conditions are 𝑋ᵢ(0) = sin 𝜋𝑥ᵢ, 𝑋ᵢ′(0) = 𝜋 cos 𝜋𝑥ᵢ, 𝑖 = 0,1,…,𝑁+1, with 𝑁 = 51. Choose 𝜀 = 0.01, 𝛾 = 1, and 𝜎 = 1. Figure 7 shows the error in the various solution components. For example, 𝑒₆(1) corresponds to the error in the sixth solution component using (2.21), and 𝑒₆(2) corresponds to the error in the sixth solution component using (2.25). It is evident that the new algorithm proposed in this paper gives superior numerical results.
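A sketch of this discretization, with the Neumann conditions imposed through the ghost values; since the text does not specify 𝜇, the value 𝜇 = 1 below is an assumption, and the remaining parameters are those quoted above.

```python
import numpy as np

N, mu, sigma, gamma, eps = 51, 1.0, 1.0, 1.0, 0.01   # mu = 1 is an assumption
h = 1.0 / (N + 1)
xg = np.linspace(0.0, 1.0, N + 2)                    # grid points x_0, ..., x_{N+1}

def f(X):
    # f_i = -(mu/h^2)(X_{i+1} - 2 X_i + X_{i-1}) + sigma * X_i, with ghost
    # values X_{-1} = X_1 and X_{N+2} = X_N from the Neumann conditions
    Xp = np.concatenate([[X[1]], X, [X[-2]]])
    lap = (Xp[2:] - 2 * Xp[1:-1] + Xp[:-2]) / h**2
    return -mu * lap + sigma * X

X0 = np.sin(np.pi * xg)                              # X_i(0) = sin(pi x_i)
dX0 = np.pi * np.cos(np.pi * xg)                     # X_i'(0) = pi cos(pi x_i)
```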

6. Conclusion

The numerical solution of singularly perturbed second-order ODEs of the form (1.1) involving a single variable and a linear function 𝑓(𝑥) is well established. On the other hand, the solution of the system (1.1) with a nonlinear function presents a formidable numerical challenge. The procedure presented in this paper gives an efficient algorithm for solving such systems. The improved algorithm is capable of handling a wider class of problems and at the same time produces a uniform solution over the entire time domain of interest. Furthermore, the numerical examples selected are representative of typical application areas including problems exhibiting multiple boundary layers.

Acknowledgments

The authors would like to thank Professor Janusz Mika for providing them with useful suggestions during the preparation of this manuscript. The authors would also like to extend their appreciation to the anonymous referees, whose feedback contributed to improving the manuscript.