Research Article | Open Access

# Inversion of the Laplace Transform from the Real Axis Using an Adaptive Iterative Method

**Academic Editor:** Thomas Witelski

#### Abstract

A new method for inverting the Laplace transform from the real axis is formulated. The method is based on a quadrature formula. We assume that the unknown function is continuous with (known) compact support. An adaptive iterative method and an adaptive stopping rule, which together yield convergence of the approximate solution to the unknown function, are proposed in this paper.

#### 1. Introduction

Consider the Laplace transform

$$F(p) := (Lf)(p) = \int_0^{\infty} e^{-pt} f(t)\,dt, \quad p > 0, \tag{1.1}$$

where ,

We assume in (1.2) that has compact support. Practically, this is not a restriction. Indeed, if , then for , where is an arbitrarily small number. Therefore, one may assume that supp , and treat the values of for as noise. One may also note that if , then and , where . Therefore, the contribution of the "tail" of can be considered as noise if is large and is small. We assume in (1.2) that . One may also assume that , or that , where are positive constants. If the last assumption holds, then one may define the function . Then , and its Laplace transform is known on an interval of the real axis whenever the Laplace transform of is known on the interval . Therefore, our inversion methods are applicable to these more general classes of functions as well.
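The tail argument above can be checked numerically. The sketch below uses the illustrative choice f(t) = e^{-t} (an assumption, not taken from the paper): the contribution of the tail beyond a truncation point T is exponentially small compared with the full transform.

```python
import math

def laplace_exp_tail(p, T):
    """Tail contribution int_T^infty e^(-p t) * e^(-t) dt for the toy choice f(t) = e^(-t)."""
    return math.exp(-(p + 1.0) * T) / (p + 1.0)

p, T = 1.0, 20.0
full = 1.0 / (p + 1.0)              # exact transform of e^(-t) at p is 1/(p+1)
tail = laplace_exp_tail(p, T)
print(tail / full)                  # relative size of the neglected tail
```

Already for T = 20 the relative tail is below machine precision, i.e., far below any realistic noise level in the data.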

The operator is compact. Therefore, the inversion of the Laplace transform (1.1) is an ill-posed problem (see [1, 2]). Since the problem is ill-posed, a regularization method is needed to obtain a stable inversion of the Laplace transform. There are many methods for solving (1.1) stably: variational regularization, quasisolutions, and iterative regularization (see, e.g., [1–4]). In this paper we propose an adaptive iterative method based on the Dynamical Systems Method (DSM) developed in [2, 4]. Some methods have been developed earlier for the inversion of the Laplace transform (see [5–8]). In many papers the data are assumed exact and given on the complex axis. In [9] it is shown that the results of the inversion of the Laplace transform from the complex axis are more accurate than those of the inversion from the real axis. The reason is the ill-posedness of the Laplace transform inversion from the real axis. A survey of Laplace transform inversion methods is given in [6], where several types of inversion methods are compared. The well-known inversion formula for the Laplace transform,

$$f(t) = \frac{1}{2\pi i} \int_{\sigma - i\infty}^{\sigma + i\infty} e^{pt} F(p)\,dp,$$

is used in some of these methods, and then is computed by quadrature formulas; many of these formulas can be found in [10, 11]. Moreover, the ill-posedness of the Laplace transform inversion is not discussed in any of the methods compared in [6]. The approximate solution obtained by these methods when the data are noisy may differ significantly from the exact one. There are some papers in which the inversion of the Laplace transform from the real axis was studied (see [9, 12–19]). In [12, 17] a method based on the Mellin transform is developed: the Mellin transform of the data is calculated first and then inverted for . In [13] a Fourier series method for the inversion of the Laplace transform from the real axis is developed. The drawback of this method comes from the ill-conditioning of the discretized problem: it is shown in [13] that if one uses some basis functions in , the problem becomes extremely ill-conditioned once the number of basis functions exceeds a certain value. In [15] a reproducing kernel method is used for the inversion of the Laplace transform. In the numerical experiments in [15] the authors use double- and multiple-precision arithmetic to obtain a high-accuracy inversion of the Laplace transform. As observed in [15], the use of multiple precision increases the computation time significantly, so this method may not be efficient in practice. A detailed description of the multiple-precision technique can be found in [20, 21]. Moreover, the Laplace transform inversion with perturbed data is not discussed in [15]. In [19] the authors develop an inversion formula based on the eigenfunction expansion of the Laplace transform. The difficulties with this method are that (i) the inversion formula is not applicable when the data are noisy, and (ii) even for exact data the inversion formula is not suitable for numerical implementation.

The Laplace transform as an operator from into , where , , is considered in [14]. The finite-difference method is used in [14] to discretize the problem; the size of the linear algebraic system obtained by this method is fixed at each iteration, so the computation time increases if one uses large linear algebraic systems. A method for choosing the size of the linear algebraic system is not given in [14]. Moreover, the inversion of the Laplace transform when the data are given only on a finite interval , , is not discussed in [14].

The novel points in our paper are the following:

(1) the representation (2.69) of the approximate solution of the function , which depends only on the kernel of the Laplace transform;
(2) the adaptive iterative scheme (2.72) and the adaptive stopping rule (2.83), which generate the regularization parameter, the discrete data , and the number of terms in (2.69) needed for obtaining an approximation of the unknown function .

We study the inversion problem using the pair of spaces , where is defined in (1.2), develop an inversion method which can be easily implemented numerically, and demonstrate in the numerical experiments that our method yields results comparable in accuracy with those presented in the literature, for example, with the double-precision results given in [15].

The smoothness of the kernel allows one to use the compound Simpson's rule in approximating the Laplace transform. Our approach yields a representation (2.69) of the approximate inversion of the Laplace transform. The number of terms in approximation (2.69) and the regularization parameter are generated automatically by the proposed adaptive iterative method. Our iterative method is based on the iterative method proposed in [22]. The adaptive stopping rule proposed here is based on the discrepancy-type principle established in [23, 24]. This stopping rule yields convergence of the approximation (2.69) to the unknown function as the noise level tends to zero.

A detailed derivation of our inversion method is given in Section 2. In Section 3 some results of the numerical experiments are reported. These results demonstrate the efficiency and stability of the proposed method.

#### 2. Description of the Method

Let . Then (1.1) can be written as

Let us assume that the data , the Laplace transform of , are known only for . Consider the mapping , where

and is an even number which will be chosen later. Then the unknown function can be obtained from a finite-dimensional operator equation (2.2). Let

be the inner product and norm in , respectively, where are the weights of the compound Simpson's rule (see [10, page 58]), that is,

where is an even number. Then

where

It follows from (2.2) and (2.7) that

where

From (2.7) we get Range where
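The displayed formulas (2.4)–(2.11) did not survive extraction; the sketch below therefore only illustrates their standard ingredients, namely the compound Simpson's-rule weights and the weighted discrete inner product built from them. The interval and grid size are illustrative assumptions.

```python
import numpy as np

def simpson_weights(a, b, n):
    """Compound Simpson's-rule weights on [a, b] with n (even) subintervals."""
    assert n % 2 == 0, "Simpson's rule requires an even number of subintervals"
    h = (b - a) / n
    w = np.full(n + 1, 2.0)
    w[1::2] = 4.0                    # odd-indexed (midpoint) nodes get weight 4
    w[0] = w[-1] = 1.0               # endpoints get weight 1
    return (h / 3.0) * w

def weighted_inner(u, v, w):
    """Discrete inner product <u, v> = sum_k w_k u_k v_k (the assumed form of (2.4))."""
    return float(np.sum(w * u * v))

a, b, n = 0.0, 1.0, 10
t = np.linspace(a, b, n + 1)
w = simpson_weights(a, b, n)
# Simpson's rule integrates cubics exactly: int_0^1 t^3 dt = 1/4.
print(weighted_inner(t**3, np.ones_like(t), w))
```

The weights sum to the length of the interval, and the induced inner product reproduces integrals of smooth functions with fourth-order accuracy.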

Lemma 2.1. *Let be defined in (2.5). Then

for any even number .*

*Proof.* From definition (2.5) one gets
Lemma 2.1 is proved.

Lemma 2.2. *The matrix , defined in (2.11), is positive semidefinite and self-adjoint in with respect to the inner product (2.4).*

*Proof.* Let
are defined in (2.5). Then where
We have
Thus, is self-adjoint with respect to inner product (2.4). We have
where is defined in (2.8). This shows that is a Gram matrix. Therefore,
This implies
Thus, is a positive semidefinite and self-adjoint matrix with respect to the inner product (2.4).
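The conclusion of Lemma 2.2 can be illustrated numerically: a matrix of Gram form built from kernel samples with positive quadrature weights is symmetric and positive semidefinite up to roundoff. The grids below are illustrative assumptions, not the paper's.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 11)                 # quadrature nodes (n = 10, even)
p = np.linspace(0.5, 5.0, 8)                  # sample points of the transform
h = t[1] - t[0]
w = np.full(t.size, 2.0)
w[1::2] = 4.0
w[0] = w[-1] = 1.0
w *= h / 3.0                                  # Simpson weights on the t-grid

K = np.exp(-np.outer(p, t))                   # kernel samples e^(-p_i t_k)
G = K @ np.diag(w) @ K.T                      # Gram form: G_ij = sum_k w_k e^(-p_i t_k) e^(-p_j t_k)

print(np.allclose(G, G.T))                    # symmetric
print(np.linalg.eigvalsh(G).min())            # nonnegative up to roundoff
```

Such Gram matrices of exponentials are severely ill-conditioned, which is the numerical face of the ill-posedness discussed in the introduction.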

Lemma 2.3. *Let be defined in (2.9). Then is self-adjoint and positive semidefinite operator in with respect to inner product (2.8).*

*Proof.* From definition (2.9) and inner product (2.8) we get
Thus, is a self-adjoint operator with respect to inner product (2.8). Let us prove that is positive semidefinite. Using (2.9), (2.5), (2.4), and (2.8), one gets
Lemma 2.3 is proved.

Let us approximate the unknown as follows:

where are defined in (2.3), is defined in (2.30), and are constants obtained by solving the linear algebraic system:

where is defined in (2.10),

To prove the convergence of the approximate solution , we use the following estimates, which are proved in [4]; their proofs are therefore omitted.

Estimates (2.26) and (2.27) are used in proving inequality (2.88), while estimates (2.28) and (2.29) are used in the proof of Lemmas 2.9 and 2.10, respectively.

Lemma 2.4. *Let and be defined in (2.9) and (2.10), respectively. Then, for , the following estimates hold:
where is the identity operator and *

Let us formulate an iterative method for obtaining the approximate solution of with the exact data . Consider the following iterative scheme:

where is the adjoint of the operator , that is,

is defined in (2.22),

Lemma 2.5 together with Lemma 2.7 with yields

Lemma 2.5. *Let be defined in (2.34), , and , where is the null space of . Then
*

*Proof.* Since , it follows from the spectral theorem that
where is the resolution of the identity corresponding to , and is the orthogonal projector onto .

Lemma 2.5 is proved.

Theorem 2.6. *Let , and be defined in (2.31). Then
*

*Proof.* By induction we get
where is defined in (2.34), and
Using the identities
we get
Therefore,
To prove relation (2.38) the following lemma is needed.

Lemma 2.7. *Let be a continuous function on , and constants. If
then
*

*Proof.* Let
where . Then
Take arbitrarily small. For sufficiently large fixed one can choose , such that
because . Fix such that for . This is possible because of (2.44). One has
if is sufficiently large. Here we have used the relation
Since is arbitrarily small, relation (2.45) follows.

Lemma 2.7 is proved.

This together with estimate (2.43) and condition yields relation (2.38). Theorem 2.6 is proved.
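Since the update formula (2.31) was lost in extraction, the following sketch shows only the general shape of an iteratively regularized scheme of this type: a damped iterated-Tikhonov step with a geometrically decaying regularization parameter a_n = a_0 q^n. The toy problem, the damping factor q, and the decay rate are assumptions, not the paper's choices.

```python
import numpy as np

# Toy discretization of an ill-posed problem A u = f with a Laplace-type kernel.
t = np.linspace(0.0, 1.0, 40)
p = np.linspace(0.5, 10.0, 30)
A = np.exp(-np.outer(p, t)) * (t[1] - t[0])    # crude quadrature of the kernel
u_true = np.sin(np.pi * t)
f = A @ u_true                                 # exact data

q, a = 0.9, 1.0                                # damping and a_n = a_0 * q^n
u = np.zeros_like(u_true)
AtA, Atf = A.T @ A, A.T @ f
res0 = np.linalg.norm(A @ u - f)
for _ in range(50):
    # Damped step toward the Tikhonov solution with the current parameter a.
    u = q * u + (1 - q) * np.linalg.solve(AtA + a * np.eye(t.size), Atf)
    a *= q
print(np.linalg.norm(A @ u - f) < res0)        # residual has decreased
```

With exact data the residual decreases monotonically in practice as the regularization parameter decays, which is the behavior Theorem 2.6 formalizes for the paper's scheme.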

Lemma 2.9 leads to the adaptive iterative scheme (2.52), where , are defined in (2.35), is defined in (2.30), is defined in (2.2), and

Lemma 2.8. *Let and be defined in (2.33) and (2.9), respectively. Then
*

*Proof.* From definitions (2.33) and (2.9) we get
where the following upper bound for the error of the compound Simpson's rule was used (see [10, page 58]). For ,
where
This implies
so estimate (2.53) is obtained.

Lemma 2.8 is proved.

Lemma 2.9. *Let ,

Then
where and are defined in (2.33) and (2.9), respectively.*

*Proof.* Inequality (2.59) follows from estimate (2.53) and formula (2.58).

are defined in (2.3). In the iterative scheme (2.52) we have used the finite-dimensional operator approximating the operator . Convergence of the iterative scheme (2.52) to the solution of the equation is established in the following lemma.

Lemma 2.10. *Let and be defined in (2.52). If are chosen by the rule
where is the smallest even number not less than , then
*

*Proof.* Consider the estimate
where and . By Theorem 2.6, we get as . Let us prove that . Let . Then, from definitions (2.31) and (2.52), we get
By induction we obtain
where are defined in (2.40). Using the identities , ,
one gets
This together with the rule (2.61), estimate (2.28), and Lemma 2.8 yields
Applying Lemmas 2.5 and 2.7 with , we obtain

Lemma 2.10 is proved.

##### 2.1. Noisy Data

When the data are noisy, the approximate solution (2.23) is written as

where the coefficients are obtained by solving the following linear algebraic system:

is defined in (2.30),

are defined in (2.5), and are defined in (2.3).

To get the approximate solution of the function with the noisy data , we consider the following iterative scheme:

where is defined in (2.30), are defined in (2.35), , is defined in (2.71), and are chosen by the rule (2.61). Let us assume that

where are random quantities generated from some statistical distributions, for example, the uniform distribution on the interval , and is the noise level of the data . It follows from assumption (2.73), definition (2.5), Lemma 2.1, and the inner product (2.4) that

Lemma 2.11. *Let and be defined in (2.52) and (2.72), respectively. Then
where are defined in (2.35).*

*Proof.* Let . Then, from definitions (2.52) and (2.72),
By induction we obtain
where are defined in (2.40). Using estimates (2.74) and inequality (2.29), one gets
where are defined in (2.40).

Lemma 2.11 is proved.

Theorem 2.12. *Suppose that the conditions of Lemma 2.10 hold, and satisfies the following conditions:
Then
*

*Proof.* Consider the estimate
This together with Lemma 2.11 yields
Applying relations (2.79) in estimate (2.82), one gets relation (2.80).

Theorem 2.12 is proved.

In the following subsection we propose a stopping rule which implies relations (2.79).

##### 2.2. Stopping Rule

In this subsection a stopping rule which yields relations (2.79) in Theorem 2.12 is given. We propose the stopping rule

where

is defined in (2.4),

and are defined in (2.5) and (2.3), respectively, and are obtained by solving linear algebraic system (2.70).

We observe that

Thus, the sequence (2.84) can be written in the following form:

where is defined in (2.4), and solves the linear algebraic system (2.70).

It follows from estimates (2.74), (2.26), and (2.27) that

This together with (2.87) yields

or

Lemma 2.13. *The sequence (2.87) satisfies the following estimate:
where are defined in (2.35).*

*Proof.* Define
Then estimate (2.90) can be rewritten as
where the relation was used. Let us prove estimate (2.91) by induction. For we get
Suppose that estimate (2.91) is true for . Then
where the relation was used.

Lemma 2.13 is proved.

Lemma 2.14. *Suppose
where are defined in (2.87). Then there exists a unique integer , satisfying the stopping rule (2.83) with . *

*Proof.* From Lemma 2.13 we get the estimate
where are defined in (2.35). Therefore,
where the relation was used. This together with condition (2.96) yields the existence of the integer . The uniqueness of the integer follows from its definition.

Lemma 2.14 is proved.
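The existence of the stopping index guaranteed by Lemma 2.14 can be illustrated with a discrepancy-type rule: stop at the first index at which the residual on the noisy data drops below C times the noise level, C > 1. The toy problem, the iteration, and the constant C = 1.5 are illustrative assumptions; the true noise norm is used here in place of the (lost) quantity defined in (2.84).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ill-posed problem with noisy data f_delta = f + noise.
t = np.linspace(0.0, 1.0, 40)
p = np.linspace(0.5, 10.0, 30)
A = np.exp(-np.outer(p, t)) * (t[1] - t[0])
u_true = t * (1.0 - t)
f = A @ u_true
f_delta = f + 1e-3 * rng.uniform(-1.0, 1.0, f.size)
noise_norm = np.linalg.norm(f_delta - f)       # known here; bounded by delta in practice

C, q, a = 1.5, 0.9, 1.0                        # discrepancy constant C > 1
u = np.zeros_like(u_true)
AtA, Atf = A.T @ A, A.T @ f_delta
n_stop = None
for n in range(1, 200):
    u = q * u + (1 - q) * np.linalg.solve(AtA + a * np.eye(t.size), Atf)
    a *= q
    if np.linalg.norm(A @ u - f_delta) <= C * noise_norm:
        n_stop = n                             # first index satisfying the rule
        break
print(n_stop)
```

Stopping at this first crossing prevents the iteration from fitting the noise, which is the purpose of the rule (2.83).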

Lemma 2.15. *Suppose that the conditions of Lemma 2.14 hold and is chosen by the rule (2.83). Then
*

*Proof.* From the stopping rule (2.83) and estimate (2.97) we get
where . This implies
so, for , and , one gets

Lemma 2.15 is proved.

Lemma 2.16. *Consider the stopping rule (2.83), where the parameters are chosen by rule (2.61). If is chosen by the rule (2.83), then
*

*Proof.* From the stopping rule (2.83) with the sequence defined in (2.87) one gets
where is obtained by solving linear algebraic system (2.70). This implies
Thus,
If , then there exists a such that
where is the resolution of the identity corresponding to the operator . Let
For a fixed number we obtain
Since is a continuous operator, and , it follows from (2.107) that
Therefore, for the fixed number we get
for all sufficiently small , where is a constant which does not depend on . Suppose . Then there exists a subsequence as such that
where rule (2.61) was used to obtain the parameters . This together with (2.107) and (2.111) yields