Abstract

A hybrid algorithm and a regularization method are proposed, for the first time, to solve the one-dimensional degenerate inverse heat conduction problem of estimating the initial temperature distribution from point measurements. The evolution of the heat is governed by a degenerate parabolic equation with a singular potential. The problem can be formulated in a least-squares framework: an iterative procedure minimizes the difference between the given measurements and the values, at the sensor locations, of a reconstructed field. The mathematical model leads to a nonconvex minimization problem. To solve it, we prove the existence of at least one solution of the problem and propose two approaches: the first is based on Tikhonov regularization, while the second is based on a hybrid genetic algorithm (a genetic algorithm coupled with a gradient-descent-type method). Some numerical experiments are given.

1. Introduction

An inverse problem arises when the PDE solution is measured or specified and one seeks to determine some properties of the system (coefficients, forcing term, boundary data, or initial condition) from partial knowledge of it over a limited time interval (see [1, 2]).

In recent years, increasing interest has been devoted to degenerate parabolic equations. Indeed, many problems coming from physics (boundary layer models in [3], models of Kolmogorov type in [4], etc.), biology (Wright-Fisher models in [5] and Fleming-Viot models in [6]), and economics (Black-Merton-Scholes equations in [7]) are described by degenerate parabolic equations [8].

The identification of the initial state of nondegenerate parabolic problems is well studied in the literature (see [9–11]). However, as far as we know, the degenerate case has not been analysed.

In this paper, we are interested in estimating the initial condition, by the variational method of data assimilation, for the degenerate/singular parabolic equation: where , , , , and , with initial and boundary conditions

The mathematical model leads to a nonconvex minimization problem where the functional is defined as follows: subject to being the weak solution of the parabolic problem (1) with initial state , an observation of in , and the observation operator. The space is the set of admissible initial states.

Problem (3) is ill-posed in the sense of Hadamard. To solve this problem, we propose two approaches.

The first approach is based on Tikhonov regularization, applied for the first time to a degenerate inverse problem. The problem thus consists of minimizing a functional of the form

Here, the last term in (5) is the so-called Tikhonov-type regularization ([12, 13]), being a small regularizing coefficient that provides extra convexity to the functional and being a priori (background) knowledge of the true state (the state to estimate). We assume that the values of are given at each point of the analysis grid.
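For concreteness, a generic form of such a Tikhonov-regularized functional is recalled below; the symbols are illustrative stand-ins for the paper's elided notation (C the observation operator, y^obs the observations, u_b the background state, ε the regularization coefficient):

```latex
J_{\varepsilon}(u_0)
  = \frac{1}{2}\int_{0}^{T}\bigl\| C\,u(t;u_0) - y^{\mathrm{obs}}(t) \bigr\|^{2}\,dt
  + \frac{\varepsilon}{2}\,\bigl\| u_0 - u_b \bigr\|^{2}.
```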

The second approach is applied when only partial knowledge of the values of is available (e.g., 20%); in this case the regularization parameter is very difficult to determine. To overcome this problem, we propose a new approach based on a hybrid genetic algorithm (a genetic algorithm coupled with a gradient-descent-type method). Finally, we compare the two approaches (with 20% of ).

First of all, we prove that problem (3) has at least one solution. The gradient of the functional is calculated with the adjoint method. Numerical experiments are presented to show the performance of our approaches.

2. Problem Statement and Main Result

Consider the following problem: where , , and is the operator defined as with , , and .

We want to estimate from an observation of in . The minimization problem associated with this problem is where the functional is as follows: subject to being the weak solution of the parabolic problem (6) with initial state , the background state, and the observation operator. The space is the set of admissible initial states (to be defined later).

We now specify some notation. Let us introduce the following functional spaces (see [14–16]): with . We recall that (see [16]) is a Hilbert space and it is the closure of for the norm . If , then the injections are compact.

First, we prove that problem (6) is well-posed, the functional is continuous, and is -derivable in .

The weak formulation of problem (6) is . Let . We discuss the following cases.

(1) Noncoercive Case (see [14], ). In this case, the bilinear form becomes . We have at , whence the bilinear form is noncoercive.

Let

We recall the following theorem.

Theorem 1 (see [14, 17, 18]). For all and , there exists a unique weak solution of problem (6) such that , and there is a constant such that, for any solution of (6), . If, moreover, , then and there is a constant such that

Theorem 2. Let be the weak solution of (6) with initial state . In the noncoercive case, the function is continuous, and the functional has at least one minimum in .

Theorem 3. Let be the weak solution of (6) with initial state . The function is -derivable in .

(2) Subcritical Potential Case (see [19, 20], ). Then the bilinear form becomes

Since at and , the bilinear form is noncoercive and not continuous at .

Consider the unbounded operator where

Let

We recall the following theorem.

Theorem 4 (see [15, 19]). If , then, for all , problem (6) has a unique weak solution . If, moreover, , then . If , then, for all , problem (6) has a unique solution

Theorem 5. Let be the weak solution of (6) with initial state . In the subcritical potential case, the function is continuous, and the functional is continuous in .

Theorem 6. Let be the weak solution of (6) with initial state . The function is -derivable in .

3. Proof

Proof of Theorem 2. Let be a small variation such that .
Consider , with being the weak solution of (6) with initial state and being the weak solution of (6) with initial state .
Consequently, is the solution of the variational problem: Hence, is the weak solution of (6) with . We apply the estimate in Theorem 1 with . This gives the following.
There is a constant such that ; therefore . From (32) we have . Hence, . In addition, from (32) we have . Equations (34), (36), and (41) imply the continuity of the function . Hence, the functional is continuous in . We have , where , which gives . Since the set is bounded in , is compact in . Therefore, has at least one minimum in .

Proof of Theorem 3. Let and such that ; we define the function where is the solution of the variational problem and we set . We want to show that . We easily verify that the function is the solution of the following variational problem: In the same way as in the proof of continuity, we deduce . Hence, the function is -derivable in , and we deduce the existence of the gradient of the functional .

Proof of Theorem 5. Let be a small variation such that .
Consider , with being the weak solution of (6) with initial state , and being the weak solution of (6) with initial state .
Consequently, is the solution of the variational problem . Take ; this gives . Since is independent of , this gives . By integrating between and with , we obtain , and since and , , we obtain ; this gives . Hence . This gives the continuity of the function . Hence, the functional is continuous in .

Proof of Theorem 6. Let and such that ; we define the function where is the solution of the variational problem and we set . We want to show that . We easily verify that the function is the solution of the following variational problem: In the same way as in the proof of continuity, we deduce . Hence, in all cases, the function is -derivable in , and we deduce the existence of the gradient of the functional .

Now, we are going to compute the gradient of with the adjoint state method.

4. Gradient of

We define the Gâteaux derivative of at in the direction by where is the weak solution of (6) with initial state , and is the weak solution of (6) with initial state .
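As a reminder, the standard definition of this directional derivative, written here with generic symbols J, u_0, and h (the paper's own notation is not reproduced above), is:

```latex
\hat{J}(u_0)[h] \;=\; \lim_{s \to 0^{+}} \frac{J(u_0 + s\,h) - J(u_0)}{s}.
```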

We compute the Gâteaux (directional) derivative of (6) at in some direction , and we get the so-called tangent linear model:

We introduce the adjoint variable , and we integrate

Let us take ; then we may write .

With , we may now rewrite (69) as ; this gives . The Gâteaux derivative of the functional at in the direction is given by

After some calculations, we arrive at . The adjoint model is

Problem (75) is backward in time; we make the change of variable , which gives with .
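In generic terms (the symbols p, q, τ, and T below are illustrative, not the paper's elided ones), the time reversal sets τ = T − t and q(x, τ) = p(x, T − τ), turning the backward adjoint equation with a terminal condition into a forward parabolic problem with an initial condition:

```latex
\partial_{\tau} q(x,\tau) = -\,\partial_{t} p(x,t)\big|_{t = T-\tau},
\qquad q(x,0) = p(x,T),
```

so the same forward solver used for the state equation can be reused for the adjoint.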

From (71), (74), and (75) the gradient of is given by

With the change of variable , the gradient becomes

To compute the gradient of , we solve two problems: (6) and (76). The solution of (6) is used in the right-hand side of problem (76).
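As a sketch of this two-solve structure, here is a minimal Python illustration; the callables solve_forward and solve_adjoint are hypothetical placeholders for the discretizations of (6) and (76) described in Section 5:

```python
import numpy as np

def gradient_of_J(u0, eps, u_b, solve_forward, solve_adjoint):
    """Gradient of the regularized functional by the adjoint method:
    one forward solve of (6), then one solve of (76) whose right-hand
    side is built from the forward solution."""
    u = solve_forward(u0)      # state u on the space-time grid, shape (nx, nt)
    p = solve_adjoint(u)       # adjoint p, driven by the observation misfit
    # Adjoint evaluated at t = 0 plus the Tikhonov term; the sign of the
    # adjoint contribution depends on the convention chosen in (76).
    return p[:, 0] + eps * (np.asarray(u0) - np.asarray(u_b))
```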

5. Discretization of Problem

Step 1 (full discretization). To solve problems (6) and (76), we use the θ-scheme in time; a code sketch follows this step. This method is unconditionally stable for θ ≥ 1/2.
Let be the step in space and the step in time.
Let ; we put . Let . Therefore, is approximated by . Let us define . Letting , we finally get where
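A minimal sketch of one θ-scheme time step for a generic semidiscrete system M u' = A u is given below; M and A stand for the mass and stiffness matrices of the discretized degenerate operator, whose assembly is assumed, not shown:

```python
import numpy as np

def theta_step(u_n, M, A, dt, theta=0.5):
    """One step of the theta-scheme for the semidiscrete system M u' = A u.

    theta = 0 gives explicit Euler, theta = 1 implicit Euler, and
    theta = 0.5 Crank-Nicolson; the scheme is unconditionally stable
    for theta >= 1/2.
    """
    lhs = M - dt * theta * A                    # implicit part
    rhs = (M + dt * (1.0 - theta) * A) @ u_n    # explicit part
    return np.linalg.solve(lhs, rhs)
```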

Step 2 (discretization of the functional). We recall that Simpson's rule for computing an integral is with , , , ; a code sketch of this quadrature follows this step.
Let the functions . This gives where with , , , .
Therefore,
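A direct implementation of composite Simpson's rule, under the assumption of an even number of equal subintervals of width h, might read:

```python
import numpy as np

def simpson(f_values, h):
    """Composite Simpson's rule on an even number of equal subintervals.

    f_values: samples f(x_0), ..., f(x_n) with n even; h: grid spacing.
    """
    n = len(f_values) - 1
    if n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of subintervals")
    return (h / 3.0) * (f_values[0] + f_values[-1]
                        + 4.0 * np.sum(f_values[1:-1:2])    # odd nodes, weight 4
                        + 2.0 * np.sum(f_values[2:-1:2]))   # even interior nodes, weight 2
```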

Step 3 (discretization of ). The adjoint problem (76) is discretized in the same way as (85), so,

6. Numerical Experiments and Results

In this section, we discuss two cases:

When we have a priori knowledge of at each point of the analysis grid, we apply the Tikhonov approach to solve the minimization problem (8). The data are assumed to be corrupted by measurement errors, which we refer to as noise. In particular, we suppose that . Here, we study the impact of () on the reconstruction of the solution.
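Since the exact perturbation is elided above, the following is an assumed example of a noise model (relative uniform noise of level sigma), not necessarily the paper's exact choice:

```python
import numpy as np

def add_noise(y_obs, sigma, seed=0):
    """Assumed noise model (for illustration only): relative uniform
    perturbation of level sigma applied to the clean observations."""
    rng = np.random.default_rng(seed)
    return y_obs * (1.0 + sigma * rng.uniform(-1.0, 1.0, size=y_obs.shape))
```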

When we have only partial knowledge of the values of (e.g., 20%), we first apply the hybrid approach to rebuild the initial state and then compare the hybrid and Tikhonov approaches.

The tests have been performed in MATLAB R2012a on a Windows 7 platform.

6.1. Regularization Approach

The differentiability and continuity in of the functional is deduced from the differentiability and continuity of the functional , and we have where is the solution of (76).

The main steps of the descent method at each iteration are the following (a code sketch follows this list):
(i) Calculate , the solution of (6) with initial condition .
(ii) Calculate , the solution of (76).
(iii) Calculate the descent direction .
(iv) Find .
(v) Update the variable .
The algorithm ends when , where is a given small precision.
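A skeleton of this descent loop, with hypothetical callables for the gradient (the two adjoint solves of Section 4) and the line search, could look like:

```python
import numpy as np

def descent(u0, grad_J, step_size, tol=1e-6, max_iter=500):
    """Steepest-descent iteration for the discretized functional.

    grad_J(u): gradient obtained from the forward/adjoint solves;
    step_size(u, d): step length, e.g., by the Armijo-Goldstein rule.
    Stops when the gradient norm falls below the precision tol.
    """
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(max_iter):
        g = grad_J(u)
        if np.linalg.norm(g) < tol:
            break                    # stationary point reached
        d = -g                       # descent direction (steepest descent)
        u = u + step_size(u, d) * d  # line search, then update
    return u
```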

The step is chosen by an inexact line search using the Armijo-Goldstein rule, as follows: let and ; if and , stop; if not, .
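A common backtracking form of the Armijo sufficient-decrease test is sketched below; the parameter values are conventional defaults, not necessarily those used in the paper, and the Goldstein (lower-bound) half of the rule is omitted for brevity:

```python
import numpy as np

def armijo_step(J, grad_J, u, d, rho0=1.0, c=1e-4, tau=0.5, max_backtracks=30):
    """Backtracking line search enforcing the Armijo condition
    J(u + rho d) <= J(u) + c * rho * <grad J(u), d>."""
    rho = rho0
    J_u = J(u)
    slope = float(np.dot(grad_J(u), d))  # negative for a descent direction
    for _ in range(max_backtracks):
        if J(u + rho * d) <= J_u + c * rho * slope:
            return rho                   # sufficient decrease achieved
        rho *= tau                       # shrink the step and retry
    return rho
```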

All tests were run on a PC with the following configuration: Intel Core i3 CPU at 2.27 GHz; 4 GB RAM (2.93 GB usable).

In all figures, the observed function is drawn in red and the reconstructed function in blue.

Let be the number of points in space and the number of points in time.

6.1.1. The Noncoercive Case

Let and parameters , .

(i) Tests with . See Figures 1, 2, 3, and 4.

(ii) Tests with . In Figures 5, 6, 7, and 8, is drawn in red and (the reconstructed initial condition) in blue.

6.1.2. Subcritical Potential Case

Let , , and the parameters , .

(i) Tests with . See Figures 9, 10, 11, and 12.

(ii) Tests with . See Figures 13, 14, 15, and 16.

6.2. Hybrid Algorithm

Genetic algorithms (GA) are adaptive search and optimization methods based on the genetic processes of biological organisms. Their principles were first laid down by Holland. The aim of a GA is to optimize a problem-defined function, called the fitness function. To do this, the GA maintains a population of individuals (suitably represented candidate solutions) and evolves this population over time. At each iteration, called a generation, the new population is created by selecting individuals according to their level of fitness in the problem domain and breeding them together using operators borrowed from natural genetics, for instance, crossover and mutation. As the population evolves, the individuals in general tend toward the optimal solution [21–24]. The basic structure of a GA is the following:
Initialize a population of individuals;
Evaluate each individual in the population;
While the stop criterion is not reached do
Select individuals for the next population;
Apply genetic operators (crossover, mutation) to produce new individuals;
Evaluate the new individuals;
Return the best individual.
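The skeleton above, transcribed as a Python sketch (the selection, crossover, and mutation operators are placeholders to be supplied; minimization of the fitness is assumed):

```python
import random

def genetic_algorithm(init_population, fitness, select, crossover, mutate,
                      n_generations=100):
    """Basic GA loop: evaluate, select, breed with crossover and mutation,
    and return the fittest individual (fitness is minimized here)."""
    population = init_population()
    for _ in range(n_generations):                         # stop criterion: generation budget
        parents = select(sorted(population, key=fitness))  # evaluate and select
        children = []
        while len(children) < len(population):
            a, b = random.sample(parents, 2)               # breed two selected parents
            children.append(mutate(crossover(a, b)))
        population = children                              # the new generation
    return min(population, key=fitness)                    # best individual
```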

The hybrid methods combine principles from genetic algorithms and other optimization methods. In this approach, we combine the genetic algorithm with a descent method (the steepest descent algorithm (FP)).

We assume that we have partial knowledge of the background state at certain points , .

We assume the individual is a vector ; the population is a set of individuals.

The initialization of an individual is as follows: Starting from an initial population, we apply genetic operators (crossover, mutation) to produce a new population in which each individual is an initial point for the descent method (FP). When a specific number of generations is reached without improvement of the best individual, only the fittest individuals (e.g., the first fittest individuals in the population) survive; the remaining ones die and their places are taken by new individuals with new genes ( are chosen randomly; the others are chosen as in (96)). At each generation, we keep the best individual. The algorithm ends when or , where is a given precision (see Figure 17). A condensed code sketch of this hybrid loop is given below.
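In this sketch, all callables are placeholders; local_descent stands for the FP descent applied to each new individual, and fitness is minimized:

```python
import random

def hybrid_ga(init_population, fitness, crossover, mutate, local_descent,
              random_individual, n_keep, stall_limit=50, max_gen=2000, tol=1e-6):
    """Hybrid GA sketch: every offspring is polished by the descent method
    (local_descent); after stall_limit generations without improvement,
    only the n_keep fittest survive and the rest are replaced by new
    random individuals."""
    population = init_population()
    best, stall = min(population, key=fitness), 0
    for _ in range(max_gen):
        population = [local_descent(mutate(crossover(*random.sample(population, 2))))
                      for _ in range(len(population))]   # breed, then local descent
        candidate = min(population, key=fitness)
        if fitness(candidate) < fitness(best):
            best, stall = candidate, 0                   # keep the best of each generation
        else:
            stall += 1
        if stall >= stall_limit:                         # stagnation: partial restart
            survivors = sorted(population, key=fitness)[:n_keep]
            population = survivors + [random_individual()
                                      for _ in range(len(population) - n_keep)]
            stall = 0
        if fitness(best) < tol:                          # stop on small functional value
            break
    return best
```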

The main steps of the descent method (FP) at each iteration are the following:
(i) Calculate , the solution of (6) with initial condition .
(ii) Calculate , the solution of (76).
(iii) Calculate the descent direction .
(iv) Find .
(v) Update the variable .
The algorithm ends when , where is a given small precision.

The step is chosen by an inexact line search using the Armijo-Goldstein rule, as in Section 6.1: let and ; if and , stop; if not, .

Assume that we know of the values of the background state (); in this test we try to reconstruct the solution with the hybrid method.

In the figures below, the observed function is drawn in red and the reconstructed function in blue.

Let be the number of points in space and the number of points in time.

6.2.1. The Noncoercive Case

Let and parameters , , number of individuals , and number of generations .

The test with simple descent gives Figure 18.

The test with genetic algorithm gives Figure 19.

We now turn to the hybrid algorithm. This gives Figure 20.

6.2.2. Subcritical Potential Case

Let , , and parameters , , number of individuals = 30, and number of generations = 2000.

The test with simple descent gives Figure 21.

The test with genetic algorithm gives Figure 22.

We now turn to the hybrid algorithm. This gives Figure 23.

6.3. Comparison between Hybrid Approach and Tikhonov Approach

Here, we assume that we know of the values of the background state ().

(i) Noncoercive Case. See Tables 1 and 2.

The minimum value of reached by the Tikhonov algorithm was , whereas with the hybrid algorithm it was possible to reach the value in 11 h and 20 min with knowledge of 20% of ; if we take more than 20% of , the elapsed time decreases.

(ii) Subcritical Potential Case. See Tables 3 and 4.

The minimum value of reached by the Tikhonov algorithm was , whereas with the hybrid algorithm it was possible to reach the value in 13 h and 40 min with knowledge of 20% of ; if we take more than 20% of , the elapsed time decreases.

7. Conclusion

In this paper, we have presented a regularization method and a hybrid method, applied to determine an initial state from point measurements for a degenerate/singular parabolic problem. These methods have proven efficient at reconstructing the solution. The proposed reconstruction algorithms are easy to implement.

The elapsed time of the hybrid method is rather long. To reduce it, in future work we will use parallel programming to run the two approaches concurrently.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.