Abstract

A new method for inverting the Laplace transform from the real axis is formulated. The method is based on a quadrature formula. We assume that the unknown function $f$ is continuous with (known) compact support. An adaptive iterative method and an adaptive stopping rule, which yield the convergence of the approximate solution to $f$ as the noise level of the data tends to zero, are proposed in this paper.

1. Introduction

Consider the Laplace transform

$F(p) := (Lf)(p) := \int_0^\infty e^{-pt} f(t)\,dt, \quad p > 0,$   (1.1)

where

$f \in C_0([0,b]) := \{ g \in C([0,\infty)) : \operatorname{supp} g \subseteq [0,b] \}.$   (1.2)

We assume in (1.2) that $f$ has compact support. This is not a restriction practically. Indeed, if $\lim_{t \to \infty} f(t) = 0$, then $|f(t)| \le \epsilon$ for $t > b$, where $\epsilon > 0$ is an arbitrarily small number. Therefore, one may assume that $\operatorname{supp} f \subseteq [0,b]$ and treat the values of $f(t)$ for $t > b$ as noise. One may also note that if $f \in L^1(b,\infty)$, then the contribution of the tail satisfies $\left|\int_b^\infty e^{-pt} f(t)\,dt\right| \le e^{-pb} \int_b^\infty |f(t)|\,dt$. Therefore, the contribution of the “tail” of $f$ can be considered as noise if $b$ is large and $\int_b^\infty |f(t)|\,dt$ is small. We assume in (1.2) that $f$ is continuous. One may also assume that $f \in L^2([0,b])$, or that $|f(t)| \le c_1 e^{c_2 t}$, where $c_1$ and $c_2$ are positive constants. If the last assumption holds, then one may define the function $g(t) := e^{-\kappa t} f(t)$ with $\kappa > c_2$. Then $g$ decays exponentially as $t \to \infty$, and its Laplace transform is known on an interval of the real axis whenever the Laplace transform of $f$ is known on the interval shifted by $\kappa$. Therefore, our inversion methods are applicable to these more general classes of functions as well.
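In display form, the reduction just described rests on the standard shift property of the Laplace transform. The following lines are our reconstruction of this argument, with an auxiliary constant $\kappa > c_2$ that does not appear in the surviving text:

```latex
% Assumption: |f(t)| <= c_1 e^{c_2 t} with positive constants c_1, c_2.
% Set g(t) := e^{-\kappa t} f(t) for some \kappa > c_2. Then
|g(t)| \le c_1 e^{-(\kappa - c_2)t} \longrightarrow 0 \quad (t \to \infty),
\qquad
(Lg)(p) = \int_0^\infty e^{-(p+\kappa)t} f(t)\,dt = F(p + \kappa),
% so Lg is known on [p_1 - \kappa, p_2 - \kappa] whenever F is known on an
% interval [p_1, p_2] of the real axis with p_1 > \kappa.
```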

The operator $L$ in (1.1) is compact. Therefore, the inversion of the Laplace transform (1.1) is an ill-posed problem (see [1, 2]). Since the problem is ill-posed, a regularization method is needed to obtain a stable inversion of the Laplace transform. There are many methods to solve (1.1) stably: variational regularization, quasisolutions, and iterative regularization (see, e.g., [1–4]). In this paper we propose an adaptive iterative method based on the Dynamical Systems Method (DSM) developed in [2, 4]. Some methods have been developed earlier for the inversion of the Laplace transform (see [5–8]). In many papers the data are assumed exact and given on the complex axis. In [9] it is shown that the results of the inversion of the Laplace transform from the complex axis are more accurate than those of the inversion of the Laplace transform from the real axis. The reason is the ill-posedness of the Laplace transform inversion from the real axis. A survey of the methods for the Laplace transform inversion, comparing several types of inversion methods, is given in [6]. The well-known inversion formula for the Laplace transform,

$f(t) = \frac{1}{2\pi i} \int_{\sigma - i\infty}^{\sigma + i\infty} e^{pt} F(p)\,dp,$

is used in some of these methods; the integral is then computed by quadrature formulas, many of which can be found in [10, 11]. Moreover, the ill-posedness of the Laplace transform inversion is not discussed for the methods compared in [6]. The approximate solution obtained by these methods when the data are noisy may differ significantly from the exact function $f$. There are some papers in which the inversion of the Laplace transform from the real axis was studied (see [9, 12–19]). In [12, 17] a method based on the Mellin transform is developed: the Mellin transform of the data is calculated first and then inverted to recover $f$. In [13] a Fourier series method for the inversion of the Laplace transform from the real axis is developed. The drawback of this method comes from the ill-conditioning of the discretized problem: it is shown in [13] that the problem becomes extremely ill-conditioned once the number of basis functions used in the expansion becomes large. In [15] a reproducing kernel method is used for the inversion of the Laplace transform. In the numerical experiments in [15] the authors use double and multiple precision to obtain a high-accuracy inversion of the Laplace transform. As observed in [15], the usage of multiple precision increases the computation time significantly, so this method may not be efficient in practice. A detailed description of the multiple precision technique can be found in [20, 21]. Moreover, the Laplace transform inversion with perturbed data is not discussed in [15]. In [19] the authors develop an inversion formula based on the eigenfunction expansion of the Laplace transform. The difficulties with this method are that (i) the inversion formula is not applicable when the data are noisy, and (ii) even for exact data the inversion formula is not suitable for numerical implementation.

The Laplace transform, considered as an operator acting between $L^2$ spaces, is studied in [14]. The finite difference method is used in [14] to discretize the problem; the size of the linear algebraic system obtained by this method is fixed at each iteration, so the computation time increases if one uses large linear algebraic systems. A method of choosing the size of the linear algebraic system is not given in [14]. Moreover, the inversion of the Laplace transform when the data are given only on a finite interval of the real axis is not discussed in [14].

The novel points in our paper are the following:

(1) the representation (2.69) of the approximate solution of the function $f$, which depends only on the kernel of the Laplace transform;
(2) the adaptive iterative scheme (2.72) and the adaptive stopping rule (2.83), which generate the regularization parameter, the discrete data, and the number of terms in (2.69) needed for obtaining an approximation of the unknown function $f$.

We study the inversion problem for the pair of spaces introduced above, with the class of admissible functions $f$ defined in (1.2); we develop an inversion method which can be easily implemented numerically, and demonstrate in the numerical experiments that our method yields results comparable in accuracy with the results presented in the literature, for example, with the double-precision results given in [15].

The smoothness of the kernel $e^{-pt}$ allows one to use the compound Simpson's rule in approximating the Laplace transform. Our approach yields the representation (2.69) of the approximate inversion of the Laplace transform. The number of terms in the approximation (2.69) and the regularization parameter are generated automatically by the proposed adaptive iterative method. Our iterative method is based on the iterative method proposed in [22]. The adaptive stopping rule we propose here is based on the discrepancy-type principle established in [23, 24]. This stopping rule yields convergence of the approximation (2.69) to $f$ as the noise level $\delta \to 0$.

A detailed derivation of our inversion method is given in Section 2. In Section 3 some results of the numerical experiments are reported. These results demonstrate the efficiency and stability of the proposed method.

2. Description of the Method

Let $L$ denote the operator in (1.1). Then (1.1) can be written as the operator equation

$Lf = F.$   (2.1)

Let us assume that the data, the values of the Laplace transform of $f$, are known only on a finite interval of the real axis. Consider the mapping (2.2), obtained by discretizing the integral in (1.1) over the support $[0, b]$ of $f$ with the quadrature nodes defined in (2.3), where $n$ is an even number which will be chosen later. Then the unknown function $f$ can be obtained from the finite-dimensional operator equation (2.2). Let

$\langle u, v \rangle := \sum_{j=0}^{n} w_j u_j v_j, \qquad \|u\| := \langle u, u \rangle^{1/2},$   (2.4)

be the inner product and norm in $\mathbb{R}^{n+1}$, respectively, where the $w_j$ are the weights of the compound Simpson's rule (see [10, page 58]), that is,

$w_0 = w_n = \frac{h}{3}, \qquad w_{2j-1} = \frac{4h}{3}, \quad 1 \le j \le \frac{n}{2}, \qquad w_{2j} = \frac{2h}{3}, \quad 1 \le j \le \frac{n}{2} - 1, \qquad h := \frac{b}{n},$   (2.5)

where $n$ is an even number. Then

where

It follows from (2.2) and (2.7) that

where

From (2.7) we get a description of the range of the finite-dimensional operator, where

Lemma 2.1. Let the weights $w_j$ be defined in (2.5). Then $\sum_{j=0}^{n} w_j = b$ for any even number $n$.

Proof. From definition (2.5) one gets $\sum_{j=0}^{n} w_j = \frac{h}{3}\left(2 + 4 \cdot \frac{n}{2} + 2\left(\frac{n}{2} - 1\right)\right) = nh = b$. Lemma 2.1 is proved.

Lemma 2.2. The matrix defined in (2.11) is positive semidefinite and self-adjoint with respect to the inner product (2.4).

Proof. Let the weights $w_j$ be defined in (2.5). A direct computation with the inner product (2.4) shows that the matrix defined in (2.11) is self-adjoint with respect to this inner product. Moreover, its entries can be written as inner products of the vectors defined in (2.8); this shows that the matrix is a Gram matrix, and therefore it is positive semidefinite. Thus, the matrix defined in (2.11) is a positive semidefinite and self-adjoint matrix with respect to the inner product (2.4).
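The following minimal numerical sketch (Python with NumPy; the collocation points and all variable names are ours, not the paper's) illustrates Lemmas 2.1 and 2.2: it builds the compound Simpson weights as reconstructed in (2.5), checks that they sum to $b$, forms the Gram matrix of the kernel samples $e^{-p_i t_j}$ with respect to the weighted inner product (2.4), and verifies that this matrix is self-adjoint and positive semidefinite.

```python
import numpy as np

def simpson_weights(n, b):
    """Compound Simpson weights w_0, ..., w_n on [0, b]; n must be even."""
    assert n % 2 == 0, "n must be even"
    h = b / n
    w = np.full(n + 1, 2 * h / 3)
    w[1:-1:2] = 4 * h / 3            # weights at the odd-indexed nodes
    w[0] = w[-1] = h / 3
    return w

b, n = 1.0, 20
w = simpson_weights(n, b)
print(abs(w.sum() - b) < 1e-12)      # Lemma 2.1: the weights sum to b

t = np.linspace(0.0, b, n + 1)       # quadrature nodes on [0, b]
p = np.linspace(0.5, 10.0, 12)       # sample collocation points (ours)
K = np.exp(-np.outer(p, t))          # K[i, j] = e^{-p_i t_j}
H = K @ (w[:, None] * K.T)           # Gram matrix w.r.t. the inner product (2.4)
print(np.allclose(H, H.T))           # self-adjoint (Lemma 2.2)
print(np.linalg.eigvalsh(H).min() > -1e-10)   # positive semidefinite (Lemma 2.2)
```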

Lemma 2.3. Let the operator be defined in (2.9). Then it is a self-adjoint and positive semidefinite operator with respect to the inner product (2.8).

Proof. From definition (2.9) and the inner product (2.8) it follows that the operator is self-adjoint with respect to the inner product (2.8). Let us prove that it is positive semidefinite; this follows from (2.9), (2.5), (2.4), and (2.8). Lemma 2.3 is proved.

Let us approximate the unknown function $f$ as follows:

where the nodes are defined in (2.3), the parameter is defined in (2.30), and the coefficients are constants obtained by solving the linear algebraic system:

where the matrix is defined in (2.10).
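Since this Gram matrix is only positive semidefinite and the underlying problem is ill-posed, the linear algebraic system for the coefficients has to be solved in regularized form. The following sketch (our notation throughout; the shift $a > 0$ plays the role of the regularization parameter, and the test function is hypothetical) solves a regularized version of the collocation system and assembles an approximation of the form $\sum_j c_j e^{-p_j t}$, which, as in (2.69), depends only on the kernel of the Laplace transform.

```python
import numpy as np

def simpson_weights(n, b):
    h = b / n
    w = np.full(n + 1, 2 * h / 3)
    w[1:-1:2] = 4 * h / 3
    w[0] = w[-1] = h / 3
    return w

b, n = 1.0, 100
t = np.linspace(0.0, b, n + 1)
w = simpson_weights(n, b)
p = np.linspace(0.5, 20.0, 30)            # sample collocation points (ours)
f_true = lambda s: np.sin(np.pi * s)      # hypothetical test function

K = np.exp(-np.outer(p, t))               # kernel samples e^{-p_i t_j}
H = K @ (w[:, None] * K.T)                # Gram matrix of the collocation system
q = K @ (w * f_true(t))                   # Simpson approximation of F(p_i)

a = 1e-6                                  # regularization parameter (ad hoc)
c = np.linalg.solve(H + a * np.eye(len(p)), q)

f_rec = np.exp(-np.outer(t, p)) @ c       # f(t) ~ sum_j c_j e^{-p_j t}, cf. (2.69)
print(np.max(np.abs(f_rec - f_true(t))))  # reconstruction error on the grid
```

The severe ill-conditioning of the Gram matrix is the reason a plain solve of the unregularized system would be useless here.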

To prove the convergence of the approximate solution (2.23), we use the following estimates, which are proved in [4], so their proofs are omitted.

Estimates (2.26) and (2.27) are used in proving inequality (2.88), while estimates (2.28) and (2.29) are used in the proofs of Lemmas 2.9 and 2.10, respectively.

Lemma 2.4. Let the operators be defined in (2.9) and (2.10), respectively. Then, for $a > 0$, estimates (2.26)–(2.29) hold, where $I$ is the identity operator.

Let us formulate an iterative method for obtaining the approximate solution of $f$ with the exact data. Consider the following iterative scheme:

where the adjoint of the operator appears, and the remaining notation is defined in (2.22).
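The precise form of the scheme (2.31) is not reproduced here. As an illustration of the underlying idea, the following sketch implements a nonstationary iterated Tikhonov iteration, a standard iteratively regularized scheme that serves as a stand-in for (2.31) rather than a verbatim transcription of it, with the geometrically decaying regularization sequence $a_n = a_1 q^{n-1}$ suggested in Section 3.1:

```python
import numpy as np

def iterated_tikhonov(T, q, a1=1.0, qfac=0.5, n_iter=20):
    """Nonstationary iterated Tikhonov iteration (a stand-in for (2.31)):
    u_{k+1} = u_k + (T + a_k I)^{-1} (q - T u_k),  a_k = a1 * qfac**k,
    where T is a symmetric positive semidefinite matrix and q the exact data."""
    u = np.zeros(len(q))
    I = np.eye(len(q))
    for k in range(n_iter):
        a_k = a1 * qfac ** k
        u += np.linalg.solve(T + a_k * I, q - T @ u)
    return u
```

For exact data $q$ in the range of $T$, each spectral component $s$ of the error is multiplied by $a_k/(s + a_k)$ at step $k$, so the iterates converge to the minimal-norm solution; this is the same kind of convergence as asserted in Theorem 2.6 for the scheme (2.31).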

Lemma 2.5 together with Lemma 2.7 yields the following convergence result.

Lemma 2.5. Let the operator be defined in (2.34), and let the element under consideration be orthogonal to the null space of the operator. Then

Proof. Since the operator is self-adjoint and nonnegative, it follows from the spectral theorem that the assertion holds, where $\{E_s\}$ is the resolution of the identity corresponding to the operator, and the limit operator is the orthogonal projector onto its null space.
Lemma 2.5 is proved.

Theorem 2.6. Let the iterates and the parameters be defined in (2.31). Then

Proof. By induction we get a representation of the iterates, where the operator is defined in (2.34) and the auxiliary quantities are given in (2.40). Using standard operator identities we arrive at estimate (2.43). To prove relation (2.38) the following lemma is needed.

Lemma 2.7. Let the function be continuous on a finite interval, and let the constants appearing in (2.44) be positive. If condition (2.44) holds, then relation (2.45) follows.

Proof. Split the quantity to be estimated into two sums. Take $\epsilon > 0$ arbitrarily small. For a sufficiently large fixed splitting index one can make the first sum smaller than $\epsilon$, because of the continuity of the function. Fix the splitting index so that the corresponding terms are small for all larger indices; this is possible because of (2.44). Then the second sum is smaller than $\epsilon$ if the iteration number is sufficiently large; here the limiting relation for the constants was used. Since $\epsilon$ is arbitrarily small, relation (2.45) follows.
Lemma 2.7 is proved.

This together with estimate (2.43) and condition (2.44) yields relation (2.38). Theorem 2.6 is proved.

Lemma 2.9 below leads to the adaptive iterative scheme (2.52), where the quantities are defined in (2.35), the parameter is defined in (2.30), and the finite-dimensional operator is defined in (2.2).

Lemma 2.8. Let the operators be defined in (2.33) and (2.9), respectively. Then estimate (2.53) holds.

Proof. From definitions (2.33) and (2.9) we get the required bound, where the following upper bound for the error of the compound Simpson's rule was used (see [10, page 58]): for a function with a continuous fourth derivative on an interval of length $b$, the error of the compound Simpson's rule with step $h$ does not exceed $\frac{b h^4}{180} \max |f^{(4)}|$. Applied to the kernel, this implies estimate (2.53).
Lemma 2.8 is proved.
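The fourth-order error bound for the compound Simpson's rule invoked in the proof is easy to check numerically. In the following sketch (our test values) the quadrature error for the kernel $e^{-pt}$ stays below the bound $\frac{b h^4}{180} \max_{[0,b]} |\partial_t^4 e^{-pt}| = \frac{b h^4 p^4}{180}$ and shrinks by roughly a factor of $2^4 = 16$ each time the step is halved:

```python
import numpy as np

def simpson(g, b, n):
    """Compound Simpson approximation of int_0^b g(t) dt (n even)."""
    t = np.linspace(0.0, b, n + 1)
    h = b / n
    w = np.full(n + 1, 2 * h / 3)
    w[1:-1:2] = 4 * h / 3
    w[0] = w[-1] = h / 3
    return w @ g(t)

p, b = 5.0, 1.0
exact = (1.0 - np.exp(-p * b)) / p            # int_0^b e^{-pt} dt
for n in (8, 16, 32, 64):
    err = abs(simpson(lambda t: np.exp(-p * t), b, n) - exact)
    bound = b * (b / n) ** 4 / 180 * p ** 4   # max |d^4/dt^4 e^{-pt}| = p^4
    print(n, err, err <= bound)               # error is O(h^4) and below the bound
```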

Lemma 2.9. Under the assumptions above, inequality (2.59) holds, where the operators are defined in (2.33) and (2.9), respectively.

Proof. Inequality (2.59) follows from estimate (2.53) and formula (2.58).

Here the nodes are defined in (2.3). In the iterative scheme (2.52) we have used a finite-dimensional operator approximating the original operator. Convergence of the iterative scheme (2.52) to the solution of the equation is established in the following lemma.

Lemma 2.10. Let the iterates be defined in (2.52). If the discretization parameters are chosen by the rule (2.61), in which each parameter is the smallest even number not less than the quantity prescribed there, then the iterates converge to the solution.

Proof. Consider the estimate obtained by the triangle inequality, which splits the error into two terms. By Theorem 2.6, the first term tends to zero. Let us prove that the second term tends to zero as well. From definitions (2.31) and (2.52) and by induction we obtain a representation involving the quantities defined in (2.40). Using standard operator identities together with the rule (2.61), estimate (2.28), and Lemma 2.8, we bound this representation. Applying Lemmas 2.5 and 2.7, we obtain the desired convergence.
Lemma 2.10 is proved.
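Only a fragment of the rule (2.61) survives in this copy: each discretization parameter is the smallest even number not less than a prescribed quantity. That rounding operation, with the quantity to be rounded left as an input, can be coded as follows:

```python
import math

def smallest_even_not_less(x):
    """The smallest even integer n with n >= x, as required by rule (2.61)."""
    n = math.ceil(x)
    return n if n % 2 == 0 else n + 1

print(smallest_even_not_less(7.3))   # 8
print(smallest_even_not_less(8))     # 8
```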

2.1. Noisy Data

When the data are noisy, the approximate solution (2.23) is written as

where the coefficients are obtained by solving the following linear algebraic system:

Here the parameter is defined in (2.30), the weights are defined in (2.5), and the nodes are defined in (2.3).

To get the approximate solution of the function $f$ from the noisy data, we consider the following iterative scheme:

where the parameter is defined in (2.30), the quantities are defined in (2.35), the operator is defined in (2.71), and the discretization parameters are chosen by the rule (2.61). Let us assume that

where the perturbations are random quantities generated from some statistical distribution, for example, the uniform distribution on the interval $[-1, 1]$, and $\delta$ is the noise level of the data. It follows from assumption (2.73), definition (2.5), Lemma 2.1, and the inner product (2.4) that the weighted norm of the perturbation of the data is at most $\delta \sqrt{b}$ (estimate (2.74)).
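Assumption (2.73) is easy to simulate. A sketch, assuming (as stated in Section 3.2) that the random quantities are uniform on $[-1, 1]$ and that the perturbation is additive with amplitude $\delta$; the final check illustrates the bound that follows from (2.73), definition (2.5), Lemma 2.1, and the inner product (2.4), namely that the weighted norm of the perturbation is at most $\delta\sqrt{b}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(q, delta):
    """Additive noise in the spirit of (2.73): each data value is perturbed
    by delta * eps_j with eps_j drawn from the uniform distribution on [-1, 1]."""
    return q + delta * rng.uniform(-1.0, 1.0, size=q.shape)

# Since |eps_j| <= 1 and the Simpson weights sum to b (Lemma 2.1), the
# weighted norm (2.4) of the perturbation is at most delta * sqrt(b).
b, n, delta = 1.0, 50, 1e-2
h = b / n
w = np.full(n + 1, 2 * h / 3)
w[1:-1:2] = 4 * h / 3
w[0] = w[-1] = h / 3
q = np.ones(n + 1)                   # placeholder data vector
q_delta = add_noise(q, delta)
print(np.sqrt(w @ (q_delta - q) ** 2) <= delta * np.sqrt(b))   # True
```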

Lemma 2.11. Let the iterates with exact and with noisy data be defined in (2.52) and (2.72), respectively. Then estimate (2.75) holds, where the quantities are defined in (2.35).

Proof. From definitions (2.52) and (2.72) and by induction we obtain a representation of the difference of the iterates involving the quantities defined in (2.40). Using estimates (2.74) and inequality (2.29), one gets the desired bound.
Lemma 2.11 is proved.

Theorem 2.12. Suppose that the conditions of Lemma 2.10 hold and that the stopping index satisfies conditions (2.79). Then relation (2.80) holds.

Proof. Consider the estimate obtained by the triangle inequality. This together with Lemma 2.11 yields estimate (2.82). Applying relations (2.79) in estimate (2.82), one gets relation (2.80).
Theorem 2.12 is proved.

In the following subsection we propose a stopping rule which implies relations (2.79).

2.2. Stopping Rule

In this subsection a stopping rule which yields relations (2.79) in Theorem 2.12 is given. We propose the stopping rule

where the norm is defined in (2.4), the weights and the nodes are defined in (2.5) and (2.3), respectively, and the coefficients are obtained by solving the linear algebraic system (2.70).

We observe that

Thus, the sequence (2.84) can be written in the following form:

where the norm is defined in (2.4), and the coefficient vector solves the linear algebraic system (2.70).
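In schematic form the stopping rule is of discrepancy type: iterate while a residual-type functional of the iterates exceeds a threshold proportional to the noise level, and stop at the first index at which it drops below. The following sketch uses a simplified criterion $\|Hu_n - q_\delta\| \le C\delta\sqrt{m}$ (ours, chosen so that the threshold dominates the norm of the noise vector) together with the iterated Tikhonov stand-in from above; the actual rule (2.83) uses the functional (2.87):

```python
import numpy as np

def solve_with_discrepancy_stop(H, q_delta, delta, C=1.5, a1=1.0, qfac=0.5,
                                max_iter=50):
    """Iterated Tikhonov iteration (a stand-in for the scheme (2.72)) stopped
    by a discrepancy-type rule in the spirit of (2.83): stop at the first n
    with ||H u_n - q_delta|| <= C * delta * sqrt(m), where m = len(q_delta)."""
    m = len(q_delta)
    u = np.zeros(m)
    I = np.eye(m)
    tau = C * delta * np.sqrt(m)
    for k in range(max_iter):
        if np.linalg.norm(H @ u - q_delta) <= tau:
            return u, k                       # k plays the role of n(delta)
        u += np.linalg.solve(H + a1 * qfac ** k * I, q_delta - H @ u)
    return u, max_iter
```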

It follows from estimates (2.74), (2.26), and (2.27) that

This together with (2.87) yields

or

Lemma 2.13. The sequence (2.87) satisfies estimate (2.90), where the quantities are defined in (2.35).

Proof. After an auxiliary definition, estimate (2.90) can be rewritten in the form (2.91). Let us prove estimate (2.91) by induction. For the initial index it is verified directly. Supposing that estimate (2.91) holds for some index, one obtains it for the next index by the same relation.
Lemma 2.13 is proved.

Lemma 2.14. Suppose that condition (2.96) holds, where the quantities are defined in (2.87). Then there exists a unique integer satisfying the stopping rule (2.83).

Proof. From Lemma 2.13 we get estimate (2.97), where the quantities are defined in (2.35). This together with condition (2.96) yields the existence of the desired integer. The uniqueness of this integer follows from its definition.
Lemma 2.14 is proved.

Lemma 2.15. Suppose that the conditions of Lemma 2.14 hold and the stopping index is chosen by the rule (2.83). Then

Proof. From the stopping rule (2.83) and estimate (2.97) we get an inequality for the stopping index. This implies the assertion as the noise level $\delta$ tends to zero.
Lemma 2.15 is proved.

Lemma 2.16. Consider the stopping rule (2.83), where the parameters are chosen by rule (2.61). If the stopping index is chosen by the rule (2.83), then

Proof. From the stopping rule (2.83), with the sequence defined in (2.87), one gets an inequality in which the coefficient vector is obtained by solving the linear algebraic system (2.70). Suppose that the assertion fails; then the quantity in question does not tend to zero, and there exists a positive lower bound for it, expressed through the resolution of the identity $\{E_s\}$ corresponding to the operator. For a fixed number one obtains from (2.107) a bound which holds for all sufficiently small $\delta$ with a constant which does not depend on $\delta$; here the continuity of the operator and the convergence of the perturbed data as $\delta \to 0$ were used. Moreover, there exists a subsequence, with the parameters obtained by rule (2.61), along which (2.107) and (2.111) yield a contradiction with relation (2.106). Thus, the assertion holds.
Lemma 2.16 is proved.

It follows from Lemmas 2.15 and 2.16 that the stopping rule (2.83) yields the relations (2.79). We have proved the following theorem.

Theorem 2.17. Suppose that all the assumptions of Theorem 2.12 hold, the parameters are chosen by rule (2.61), the stopping index is chosen by rule (2.83), and condition (2.96) holds, where the quantities are defined in (2.87). Then the approximate solution converges to $f$ as the noise level $\delta \to 0$.

2.3. The Algorithm

Let us formulate the algorithm for obtaining the approximate solution:

(1) The data: the values of the Laplace transform of $f$ on a finite interval of the real axis, the support $[0, b]$ of the function $f$, and the noise level $\delta$.
(2) Initialization: choose the parameters entering the rule (2.61) and the stopping rule (2.83), and set the iteration index and the regularization parameter to their initial values.
(3) Iterate, starting with the initial approximation, and stop when condition (2.120) (see below) holds:
(a) update the regularization parameter;
(b) choose the discretization parameter by the rule (2.61);
(c) construct the vector of the discrete data;
(d) construct the matrices entering the linear algebraic system (2.117), where the weights are defined in (2.5);
(e) solve the linear algebraic system (2.117);
(f) update the coefficients of the approximate solution defined in (2.69) by the iterative formula.

Stop when for the first time the inequality

holds, and get the approximation of the function $f$ by formula (2.118). A schematic implementation of these steps is sketched below.
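Putting the pieces together, the following end-to-end sketch mimics steps (1)–(3) on a synthetic test. All concrete choices, the test function $f(t) = \sin(\pi t)$, $b = 1$, the collocation points, and the parameter values, are ours and made for illustration only; the paper's own parameter choices are discussed in Section 3.1.

```python
import numpy as np

rng = np.random.default_rng(1)

def simpson_weights(n, b):
    h = b / n
    w = np.full(n + 1, 2 * h / 3)
    w[1:-1:2] = 4 * h / 3
    w[0] = w[-1] = h / 3
    return w

# Step (1): the data. We synthesize F(p_i) on a finite interval and add
# noise of level delta.
b, n, delta = 1.0, 100, 1e-4
t = np.linspace(0.0, b, n + 1)
w = simpson_weights(n, b)
p = np.linspace(0.5, 25.0, 40)
f_true = lambda s: np.sin(np.pi * s)
K = np.exp(-np.outer(p, t))
q = K @ (w * f_true(t))                            # Simpson values of F(p_i)
q_delta = q + delta * rng.uniform(-1, 1, len(p))   # noise model, cf. (2.73)

# Steps (2)-(3): a regularized iteration (a stand-in for (2.72)) with a
# geometric sequence a_k, stopped by a discrepancy-type rule, cf. (2.83).
H = K @ (w[:, None] * K.T)
c = np.zeros(len(p))
I = np.eye(len(p))
tau = 1.5 * delta * np.sqrt(len(p))
for k in range(30):
    if np.linalg.norm(H @ c - q_delta) <= tau:
        break
    c += np.linalg.solve(H + 1.0 * 0.5 ** k * I, q_delta - H @ c)

# The approximate solution has the form (2.69): f(t) ~ sum_j c_j e^{-p_j t}.
f_rec = np.exp(-np.outer(t, p)) @ c
print("iterations:", k, " MAE:", np.mean(np.abs(f_rec - f_true(t))))
```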

3. Numerical Experiments

3.1. The Parameters

From definition (2.35) and the rule (2.61) we conclude that the discretization parameter grows as the regularization parameter decreases. Therefore, one needs to control the value of the discretization parameter so that it will not grow too fast as the regularization parameter decreases. The role of the parameter in (2.61) is to control the value of the discretization parameter so that it will not be too large. Since for a sufficiently small noise level $\delta$ the regularization parameter obtained by the stopping rule (2.83) remains bounded, we suggest choosing the corresponding tuning parameter in a moderate interval; for larger noise levels a smaller value can be chosen. To reduce the number of iterations we suggest choosing the geometric sequence $a_n := a_1 q^{n-1}$, where $a_1 > 0$ and $0 < q < 1$.

One may assume without loss of generality that $b = 1$, because a scaling transformation reduces the integral over $[0, b]$ to an integral over $[0, 1]$. We have assumed that the data are defined on a finite interval of the real axis. If the data are given on a different interval, the constant in estimates (2.59), (2.74), (2.75), (2.78), (2.90), (2.91), and (2.97) is replaced with the corresponding constant for that interval.

If $f(0) \neq 0$, then one has to take the right endpoint of the data interval not too large. Indeed, if $f \in C^1([0, b])$, then an integration by parts yields

$F(p) = \frac{f(0)}{p} + \frac{1}{p} \int_0^b e^{-pt} f'(t)\,dt = \frac{f(0)}{p} + O\!\left(\frac{1}{p^2}\right), \quad p \to \infty.$

If the data are noisy and the noise level is $\delta$, then the data become indistinguishable from the noise for the values of $p$ at which $|F(p)|$ falls below $\delta$. Therefore it is useless to keep the data for such $p$. In practice one may get a satisfactory accuracy of inversion by the method proposed in this paper when one uses the data on a bounded interval chosen according to the noise level; the same choice of this interval was used in all the numerical examples. Given the data interval, the proposed method generates automatically the discrete data over this interval which are needed to get the approximation of the function $f$.
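A compact way to encode the parameter choices discussed above is sketched below; the concrete defaults and the cutoff are placeholders of ours, not the paper's values, which are partly lost in this copy:

```python
import numpy as np

def make_parameters(delta, a1=1.0, qfac=0.5, n_max=50):
    """Geometric regularization sequence a_n = a1 * qfac**(n-1), 0 < qfac < 1,
    as suggested above; truncated once a_n falls below delta**2 (an ad hoc
    safeguard of ours against needlessly small regularization parameters)."""
    a = a1 * qfac ** np.arange(n_max)
    return a[a > delta ** 2]

print(make_parameters(1e-2)[:5])     # [1.  0.5  0.25  0.125  0.0625]
```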

3.2. Experiments

To test the proposed method we consider some examples proposed in [5–7, 9, 12, 13, 15, 16, 19, 25]. To illustrate the numerical stability of the proposed method with respect to the noise, we use noisy data with various noise levels $\delta$. The random quantities in (2.73) are obtained from the uniform probability density function over the interval $[-1, 1]$. In Examples 3.1–3.12 the same values of the auxiliary parameters are used for the larger noise levels; for the smallest noise level the parameters are chosen so that their values do not become very large. Therefore, the computation time for solving the linear algebraic system (2.117) can be reduced significantly. We assume that the support of the function $f$ is contained in the interval $[0, b]$. In the stopping rule (2.83) fixed values of the parameters are used. In Example 3.13 a function without compact support is used to test the applicability of the proposed method to such functions. The results are given in Table 13 and Figure 13.

For a comparison with the exact solutions we use the mean absolute error

$\mathrm{MAE} := \frac{1}{m} \sum_{i=1}^{m} \left| f(t_i) - f_{\mathrm{app}}(t_i) \right|,$

where $f$ is the exact solution, $f_{\mathrm{app}}$ is the approximate solution, and the $t_i$ are grid points in the support of $f$. The computation time (CPU time) for obtaining the approximation of $f$, the number of iterations (Iter.), and the parameters generated by the proposed method are given in each experiment (see Tables 1–12). All the calculations are done in double precision in MATLAB.
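The error measure is straightforward to compute on a common grid; a minimal sketch matching the MAE formula above:

```python
import numpy as np

def mean_absolute_error(f_exact, f_approx):
    """MAE = (1/m) * sum_i |f_exact(t_i) - f_approx(t_i)| on a common grid."""
    f_exact = np.asarray(f_exact)
    f_approx = np.asarray(f_approx)
    return np.mean(np.abs(f_exact - f_approx))
```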

Example 3.1 (see [15]). The test function is taken from [15]. The reconstruction of the exact solution for different values of the noise level $\delta$ is shown in Figure 1. For the smallest noise level our result is comparable with the double-precision results shown in [15]. The proposed method is stable with respect to the noise, as shown in Table 1.

Example 3.2 (see [13, 15]). The test function is taken from [13, 15]. The reconstruction of the function is plotted in Figure 2. In [15] a high-accuracy result is obtained by means of multiple precision but, as reported in [15], it takes 7 hours to get such high-accuracy results. From Table 2 and Figure 2 we can see that the proposed method yields a stable solution with respect to the noise level $\delta$. The reconstruction of the exact solution obtained by the proposed method is better than the reconstruction shown in [13], and the result is comparable with the double-precision results given in [15]. For two of the noise levels the value of the parameter generated by the method is bounded by a fixed constant.

Example 3.3 (see [6, 12, 13, 16, 19]). The test function is the one used in [6, 12, 13, 16, 19]. We get an excellent agreement between the approximate solution and the exact solution for the smaller noise levels, as shown in Figure 3. The results obtained by the proposed method are better than the results given in [13]. The mean absolute error decreases as the noise level decreases, which shows the stability of the proposed method. Our results are more stable with respect to the noise than the results presented in [19]. The value of the parameter is bounded by a fixed constant for two of the noise levels.

Example 3.4 (see [13, 15]). The test function is taken from [13, 15]. As in Example 3.3, for the smaller noise levels we get a satisfactory agreement between the approximate solution and the exact solution. Table 4 gives the results on the stability of the proposed method with respect to the noise level $\delta$. Moreover, the reconstruction of the function obtained by the proposed method is better than the reconstruction shown in [13], and is comparable with the double-precision reconstruction obtained in [15].
In this example the value of the parameter remains bounded for two of the noise levels, as shown in Table 4.

Example 3.5 (see [5, 7, 13]). This is an example of a damped sine function. In [5, 7] the knowledge of the exact data in the complex plane is required to get the approximate solution. Here we only use the knowledge of the discrete perturbed data, and we get a satisfactory result, comparable with the results given in [5, 7], at a moderate noise level. The reconstruction of the exact solution obtained by our method is better than that of the method given in [13]. Moreover, our method yields a stable solution with respect to the noise level $\delta$, as shown in Figure 5 and Table 5. In this example the value of the parameter remains bounded for the noise levels reported in Table 5.

Example 3.6 (see [15]). This example represents a class of piecewise continuous functions. Figure 6 shows that the exact solution cannot be well approximated near the points where the function is not differentiable at the given noise levels. For the smallest noise level our result is comparable with the results given in [15]. Table 6 reports the stability of the proposed method with respect to the noise. It is shown in Table 6 that the value of the parameter generated by the proposed adaptive stopping rule is bounded by the constant 54 for two of the noise levels, which gives a relatively small computation time.

Example 3.7 (see [15]). The test function is taken from [15]. For the smaller noise levels we get numerical results which are comparable with the double-precision results given in [15]. Figure 7 and Table 7 show the stability of the proposed method as the noise level $\delta$ decreases.

Example 3.8 (see [13, 25]). The test function is taken from [13, 25]. The results of this example are similar to the results of Example 3.3. The exact solution is well reconstructed by the approximate solution obtained by our method at the smaller noise levels (see Figure 8). Table 8 shows that the MAE decreases as the noise level decreases, which demonstrates the stability of the proposed method with respect to the noise. At all noise levels the computation time of the proposed method for obtaining the approximate solution is relatively small. We get better reconstruction results than the results shown in [13]. Our results are comparable with the results given in [25].

Example 3.9 (see [16]). The test function is taken from [16]. As in Example 3.6, the error of the approximate solution at the point where the function is not differentiable dominates the error of the approximation. The reconstruction of the exact solution can be seen in Figure 9. The detailed results are presented in Table 9. Using double precision, we get results comparable with those shown in [16].

Example 3.10 (see [6]). The test function is taken from [6]. Table 10 shows the stability of the solution obtained by our method with respect to the noise level $\delta$. We get an excellent agreement between the exact solution and the approximate solution for all the noise levels, as shown in Figure 10.

Example 3.11 (see [6, 9]). Here the test function represents the class of periodic functions. It is mentioned in [9] that an oscillating function can be recovered with acceptable accuracy only for relatively small values of the argument. In this example the best approximation is obtained at the smallest noise level, and it is comparable with the results given in [6, 9]. The reconstructions of the function for various noise levels are given in Figure 11. The stability of the proposed method with respect to the noise is shown in Table 11. In this example the parameter is bounded by a fixed constant for two of the noise levels.

Example 3.12 (see [6, 25]). Here we take an increasing function which oscillates as the variable increases over the interval of interest. A poor approximation is obtained at the largest noise level. Figure 12 shows that the exact solution can be approximated very well at the smaller noise levels. The results of our method are comparable with those of the methods given in [6, 25]. The stability of our method with respect to the noise level $\delta$ is shown in Table 12.

Example 3.13. Here the support of the function $f$ is not compact. From the Laplace transform formula one gets a splitting of the data into the contribution of $f$ on $[0, b]$ and a tail term; the tail term can be considered as noise in the data. In this example the parameters are chosen depending on the value of $b$. Table 13 shows that the error decreases as the parameter $b$ increases. The approximate solution obtained by the proposed method converges to the function $f$ as $b$ increases (see Figure 13).

4. Conclusion

We have tested the proposed algorithm on a wide class of examples considered in the literature. Using the rule (2.61) and the stopping rule (2.83), the number of terms in the representation (2.69), the discrete data, and the regularization parameter, which are used in computing the approximation (2.69) of the unknown function $f$, are obtained automatically. Our numerical experiments show that the computation time (CPU time) for approximating the function $f$ is small, of the order of seconds, and that the proposed iterative scheme and the proposed adaptive stopping rule yield a stable solution given noisy data with noise level $\delta$. The proposed method also works for functions $f$ without compact support, as shown in Example 3.13. Moreover, the proposed method uses only the simple representation (2.69), based on the kernel of the Laplace transform integral, so it can be easily implemented numerically.