The inverse problem of using measurements to estimate unknown parameters of a system arises frequently in engineering practice and scientific research. This paper proposes a Collage-based parameter inversion framework for a class of partial differential equations. After the partial differential equation is approximated by a differential dynamical system, the Collage method is used to convert the parameter estimation inverse problem into a minimization problem for a function of several variables. Numerical schemes for solving this minimization problem are then proposed, including grid approximation and ant colony optimization. The proposed schemes are applied to a parameter estimation problem for the Belousov-Zhabotinskii equation, and the results show that the approximation method is efficient whether the equation is linear or nonlinear in the unknown parameters. At worst, the presented method provides an excellent starting point for traditional inversion methods that require a good initial guess.

1. Introduction

In industrial and engineering applications there are broad classes of inverse problems that seek to work backwards from measurements to estimated parameter values [1, 2]. In this paper we concentrate on the following partial differential equation (PDE) with unknown parameters: where and are n-dimensional vector functions, and is a K-dimensional vector consisting of the spatial partial derivatives of first or higher order involved in (1.1). A detailed description of (1.1) is given in the next section. The parameter estimation problem of (1.1) can be phrased as follows.

Let be a target solution. Find parameters such that (1.1) admits as a solution or an approximate solution, where may be linear or nonlinear with respect to the unknown parameters.

Most numerical methods for solving this kind of inverse problem rely on numerous executions of the forward problem, each time with different parameter values, so the numerical method for the forward problem must be fast. This paper proposes a new framework for solving the above parameter inversion problem by numerical approximation based on the Collage method, with the aim of avoiding repeated solution of the forward problem.

In the proposed framework the Collage method is used to convert the parameter inversion problem into a function optimization problem. The motivation for our treatment comes from the use of contraction maps in fractal-based approximation methods such as fractal interpolation [3–5] and fractal image compression [6, 7]. The mathematical methods that underlie fractal image compression were first introduced to inverse problems for ODEs by Kunze and Vrscay [8], who set up a framework for solving inverse problems based on the Picard contraction map associated with ODEs. This framework has been successfully applied to further inverse problems for ODEs (see [9–11]). Recently, Kunze et al. [12] developed a Collage-based approach for inverse problems of PDEs, in which boundary value inverse problems were solved using the Lax-Milgram representation theorem and a generalized Collage theorem. Deng et al. [13] proposed a framework for solving parameter estimation problems for reaction-diffusion equations by means of an approximate Picard contraction map and the Collage method. In this framework, the fixed point of the contractive Picard integral operator is viewed as an approximation of the target solution . The inverse problem becomes one of finding the unknown parameters that define the Picard operator by minimizing the squared Collage distance .

In the frameworks proposed in [8, 13], the stationarity conditions yield a set of linear equations under the assumption that the vector field is linear with respect to the unknown parameters; the resulting algorithms are remarkably simple in both concept and form. However, when the vector field is nonlinear with respect to the unknown parameters, the stationarity conditions yield a nonlinear system, which is in general very difficult to solve.

Differing from [8, 13], in this paper the parameter estimation problem of (1.1) is viewed as a global minimization problem for the function determined by the squared Collage distance , and both grid approximation and ant colony optimization are proposed to solve the minimization problem of . The methods presented in this paper are suitable for complicated parameter estimation problems, such as when is nonlinear with respect to the unknown parameters or involves a large number of them.

The structure of this paper is as follows. In Section 2, we provide a simple review of the Collage method for inverse problems of ODEs, and give the theoretical framework for converting the parameter estimation problem of (1.1) into a minimization problem of a function of several variables. In Section 3, we describe an algorithm for computing the function of several variables determined by the squared Collage distance. In Section 4, the grid approximation and ant colony optimization schemes for parameter estimation are applied with our method in order to solve a parameter estimation problem for the Belousov-Zhabotinskii equation.

2. Formulation from Parameter Estimation to Minimization Problem

In this section, we keep technical details to a minimum; the reader is referred to [8, 13] for fuller mathematical detail. The framework presented in this paper is an extension of the Picard contraction mapping method for a class of inverse problems of ordinary differential equations [8], whose theoretical basis is the Collage theorem [14].

Proposition 2.1 (Collage theorem). Let (X, d) be a complete metric space, and let T be a contractive map on X with fixed point x̄ and contraction factor c ∈ [0, 1). Then, for all x ∈ X, d(x, x̄) ≤ d(x, Tx)/(1 − c).
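The bound can be recovered in one line from the triangle inequality and the contractivity of the map T with fixed point x̄:

```latex
d(x,\bar{x}) \;\le\; d(x,Tx) + d(Tx,T\bar{x})
          \;\le\; d(x,Tx) + c\,d(x,\bar{x})
\quad\Longrightarrow\quad
d(x,\bar{x}) \;\le\; \frac{1}{1-c}\,d(x,Tx).
```

In inverse problems the bound is read in reverse: making the Collage distance d(x, Tx) small forces the fixed point x̄ of T to be close to the target x.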

In [8], a framework for solving inverse problems of ODEs via the Collage theorem was set up: seek an ODE initial value problem that admits as either a solution or an approximate solution, where is restricted to a class of functional forms, for example, affine or quadratic. Associated with the initial value problem is the Picard integral operator : it is well known that, subject to appropriate conditions on , the operator is contractive over an appropriate Banach function space . Taking as the target solution, the approximate vector field of associated with the operator is found by minimizing the squared Collage distance .
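For concreteness, a sketch in the standard notation of the framework of [8] (the symbols below are reconstructed from that framework, not taken verbatim from the original display equations): for the initial value problem x'(t) = f(x(t); λ), x(0) = x₀ with parameter vector λ, the Picard operator and the resulting minimization read

```latex
(T_\lambda u)(t) \;=\; x_0 + \int_0^t f\bigl(u(s);\lambda\bigr)\,ds,
\qquad
\min_{\lambda}\ \Delta^2(\lambda)
  \;=\; \min_{\lambda}\int_0^{t_f}\bigl|u_T(t)-(T_\lambda u_T)(t)\bigr|^2\,dt,
```

where u_T is the target solution and [0, t_f] the time interval of interest.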

Now we turn to discuss the parameter estimation problem of (1.1) by the use of the Collage method. Firstly some basic assumptions on (1.1) are listed as follows.

(i) , where is a bounded region; and are two positive constants satisfying .
(ii) is a vector function of the form , where is a differentiable function, , and is the highest order of spatial partial derivative involved in (1.1).
(iii) are, for the moment, continuous.
(iv) The exact solution of the system (1.1) exists and is unique.

By replacing the term of (1.1) with , we obtain an approximate dynamical model of (1.1): and the solution of (2.3) satisfies the equivalent integral equation Define the Picard operator associated with the model (2.4) as follows: It is clear that . The parameter estimation problem of (1.1) will be converted into a minimization problem based on (2.5) by the Collage method.

In [13], we showed that, subject to appropriate conditions on the vector field , the Picard operator is contractive over a complete space of functions supported on the domain . The space is equipped with the norm where Let Then the following inequality is obtained (see [13] for details): where are two constants and

Note that the metric is defined similarly to (2.6)–(2.9) for ; the only difference between and is their dimension.

In the inequality (2.10), the true approximation error is bounded by the spatial derivative approximation error and the Collage distance . It is clear that when , so one can find estimates of the unknown parameters by minimizing the squared Collage distance . However, in many practical problems the target function will be generated by interpolating observational or experimental data points , collected at various locations at various times . Further discussion is therefore needed before the minimization of the squared Collage distance can be applied to such practical problems.

Proposition 2.2. Let satisfy be differentiable, and let be continuous for . Then where , and is a positive constant.

Proof. It follows from the differential mean-value theorem that for From the continuity assumption on , we have that where We find from the definition of the norm that where ; here is the area (or volume) of the domain . Thus the inequality (2.12) holds.

Proposition 2.3. Let and be the exact solution and the target solution of (1.1), respectively. Assume that and are continuous for . Then there exists a positive constant such that where .

Proof. First, from Proposition 2.2, there are two positive constants and such that We then have that Letting , we obtain the result of Proposition 2.3.

The following theorem follows immediately from the inequality (2.10) and Proposition 2.3.

Theorem 2.4. Let and be the exact solution and the target solution of (1.1), respectively. Denote by , and denote by . Assume that and are continuous for . Then where

From Theorem 2.4, the true approximation error is controlled by , the spatial derivative approximation error , and the Collage distance . For a given target solution , the first two terms on the right-hand side of (2.20) are fixed, so the smallest upper bound on associated with the inequality (2.20) is obtained by minimizing . Thus, Theorem 2.4 provides a theoretical basis for finding the unknown parameters of (1.1) by minimizing the squared Collage distance. At worst, the presented method can provide an excellent starting point for traditional inversion methods.

In a real problem, it is important to make the error bound on obtained from (2.20) as small as possible. There is no difficulty with the first term on the right-hand side of (2.20), which approaches zero as approaches . To guarantee the effectiveness of the proposed minimization method, it is necessary to construct the target solution from the known measurements of (1.1) in such a way that is as small as possible. If the target solution satisfies , then the target function and the exact solution have the same spatial derivatives at the initial time point , and . We then have from (2.20) that

In general, the Hermite interpolation method can be used to construct the target solution . When the exact solution is given in the form of data points , a small value of can be expected by matching the spatial derivative values of the exact solution at the initial time point .
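The Hermite construction above can be sketched in a few lines. The following is a minimal illustration under hypothetical data (the exact solution at the initial time is taken to be sin(x), with spatial derivative cos(x); in practice both columns would come from measurements); a piecewise-cubic Hermite interpolant matches both values and first derivatives at the nodes, so the spatial-derivative mismatch at the initial time vanishes at the sample points.

```python
import math

def hermite_interp(x_nodes, u_nodes, du_nodes, x):
    """Piecewise-cubic Hermite interpolant matching values and first
    derivatives at the nodes (one way to build the target solution at
    the initial time so that its spatial derivatives agree with data)."""
    for i in range(len(x_nodes) - 1):
        if x_nodes[i] <= x <= x_nodes[i + 1]:
            h = x_nodes[i + 1] - x_nodes[i]
            s = (x - x_nodes[i]) / h
            # standard cubic Hermite basis polynomials on [0, 1]
            h00 = 2*s**3 - 3*s**2 + 1
            h10 = s**3 - 2*s**2 + s
            h01 = -2*s**3 + 3*s**2
            h11 = s**3 - s**2
            return (h00*u_nodes[i] + h*h10*du_nodes[i]
                    + h01*u_nodes[i + 1] + h*h11*du_nodes[i + 1])
    raise ValueError("x outside interpolation range")

# hypothetical data: exact initial-time solution sin(x) sampled on [0, pi]
xs  = [k * math.pi / 8 for k in range(9)]
us  = [math.sin(x) for x in xs]           # measured values
dus = [math.cos(x) for x in xs]           # measured/estimated derivatives

# interpolation error between the nodes stays O(h^4)
err = max(abs(hermite_interp(xs, us, dus, 0.01*k*math.pi)
              - math.sin(0.01*k*math.pi)) for k in range(101))
```

Because the derivative data enter the interpolant directly, refining the spatial sampling drives both the value error and the derivative mismatch to zero together.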

3. Algorithm for Function of Several Variables

Differing from the ideas proposed in [8, 13], the unknown parameters of (1.1) will be estimated by finding the minimum of the function of several variables determined by . Let be the vector function defined as follows: then where and denote the part of the vector function and , respectively. Let We have that Obviously, a function of unknown parameters will be obtained by computing the integrals involved in . The obtained function is denoted by throughout the rest of this paper, that is,
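As a self-contained illustration of how the function of several variables determined by the squared Collage distance can be evaluated numerically, the sketch below uses a hypothetical scalar model x'(t) = λ₁x + λ₂x² with target solution x_T(t) = eᵗ on [0, 1] (so the true parameters are λ₁ = 1, λ₂ = 0); the Belousov-Zhabotinskii terms of Example 3.1 are not reproduced here. All integrals are approximated by the trapezoid rule.

```python
import math

def collage_sq(lam1, lam2, n=2000):
    """Squared Collage distance for the model x' = lam1*x + lam2*x**2
    with target solution x_T(t) = exp(t) on [0, 1].  Returns the
    trapezoid-rule approximation of  int_0^1 (x_T - T x_T)^2 dt."""
    h = 1.0 / n
    ts = [k * h for k in range(n + 1)]
    x = [math.exp(t) for t in ts]                 # target solution samples
    f = [lam1*xi + lam2*xi*xi for xi in x]        # vector field on the target
    # Picard operator: (Tx)(t) = x(0) + int_0^t f(x(s)) ds (cumulative trapezoid)
    Tx = [x[0]]
    for k in range(n):
        Tx.append(Tx[-1] + 0.5*h*(f[k] + f[k+1]))
    # squared L2 Collage distance
    d2 = [(x[k] - Tx[k])**2 for k in range(n + 1)]
    return sum(0.5*h*(d2[k] + d2[k+1]) for k in range(n))
```

Evaluating `collage_sq` on a grid of (λ₁, λ₂) values yields exactly the kind of function of several variables minimized in Section 4; at the true parameters the value is zero up to quadrature error.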

Example 3.1. To demonstrate the above algorithm, we consider the Belousov-Zhabotinskii equation where . Suppose that is the target solution satisfying the condition , and that and are unknown parameters. Let Then Denoting by and , respectively, we have that Let denote the integral and . Then

4. Numerical Approximation Methods

From the previous section, the function of several variables obtained from the Collage method has the form of a sum, each term of which depends on only a few variables. Consequently, many parameter estimation problems for PDEs can be solved exactly in the manner known from classical analysis. However, when the function is especially complicated, for example when associated with is nonlinear, the problem can only be solved by approximate numerical methods. Approximate numerical methods are also suitable when the number of variables is large. In this paper we are interested in grid approximation and ant colony optimization, and these methods will be applied to the estimation of the unknown parameters of (1.1).

Note that the ranges of the unknown parameters may be assumed from physical understanding of the problem and modified in light of the numerical approximation results. In this section we assume that , where is a bounded domain of the form Thus, the continuous optimization problem associated with the parameter estimation of (1.1) can be phrased as

Example 4.1. We demonstrate the methods for the system (3.6) under the following assumptions: the domain , the parameter domain , the initial condition , and the target solution . By applying the algorithm presented in Section 3, the coefficients of (3.12) and (3.13) are obtained. The estimates of the unknown parameters and can be obtained by solving the optimization problems for and , respectively.

4.1. Grid Approximation

We first describe a partition scheme for the parameter domain . For , the intervals are partitioned with step , that is, Let . We define the spatial grid by the formula where are basis vectors satisfying .

With the above GR(S) grid, the approximate estimate of the unknown parameter vector of (1.1) is determined by
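A minimal sketch of this grid search follows. The objective used here is a hypothetical stand-in (a simple quadratic with minimum at (1.0, 0.5)); in the paper's setting it would be the squared-Collage-distance function of Section 3, and the box bounds and steps are illustrative choices.

```python
import itertools

def grid_minimize(F, bounds, steps):
    """Exhaustive search of F over a uniform grid on a box.
    bounds: list of (lo, hi) per parameter; steps: grid step per parameter."""
    axes = []
    for (lo, hi), s in zip(bounds, steps):
        n = int(round((hi - lo) / s))
        axes.append([lo + k*s for k in range(n + 1)])
    best_p, best_v = None, float("inf")
    for p in itertools.product(*axes):          # visit every grid node
        v = F(p)
        if v < best_v:
            best_p, best_v = p, v
    return best_p, best_v

# stand-in objective with minimum at (1.0, 0.5)
F = lambda p: (p[0] - 1.0)**2 + (p[1] - 0.5)**2
p_hat, v = grid_minimize(F, bounds=[(0.0, 2.0), (0.0, 1.0)], steps=[0.1, 0.1])
```

The cost grows as the product of the per-axis node counts, which is why the stationarity reduction discussed below, or a stochastic method such as ACO, becomes attractive as the number of parameters grows.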

To test the effectiveness of the grid approximation method, the minimization problems for (3.12) and (3.13) are solved with ; the results are shown in Figures 1 and 2. Note that the parameter estimation problem for (3.6) cannot be solved by the framework proposed in [13], since is nonlinear with respect to the unknown parameters.

In Figure 1, the red point is the global minimum position of , where and . Similarly, is the global minimum point of with a minimum (see Figure 2).

Sometimes the stationarity conditions can be used to reduce the computational complexity. For example, it follows from that The minimum of can then be found by viewing as a function of the variable ; the result is shown in Figure 3.

4.2. Ant Colony Optimization Approximation

The ant colony optimization (ACO) algorithm was inspired by the observation of real ant colonies; its inspiring source is the foraging behavior of real ants, which enables them to find the shortest paths between nest and food sources [15, 16]. Recently, ACO algorithms for continuous optimization problems have received increasing attention in swarm computation; many studies have shown that ACO algorithms have great potential for solving a wide range of optimization problems, including continuous optimization [17–22]. These ACO algorithms for continuous domains can be used directly to solve the minimization problem (4.2).

In [17], Shelokar et al. proposed a particle swarm optimization (PSO) method hybridized with an ant colony approach (PSACO) for the optimization of multimodal continuous functions, which applies PSO for global optimization and uses the ant colony idea to update the positions of particles so as to reach the feasible solution space rapidly (see [17] for details). When the PSACO algorithm is applied to the minimization problem for (3.12), the results shown in Figures 4, 5, and 6 are obtained.
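For orientation, the sketch below implements only the bare PSO stage of such a hybrid (the ant-colony position update of [17] is omitted), applied to a hypothetical stand-in objective with minimum at (1.0, 0.5); all hyperparameters are illustrative defaults, not values from [17].

```python
import random

def pso_minimize(F, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Bare-bones particle swarm optimization over a box."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                     # personal best positions
    Pv = [F(x) for x in X]
    gv = min(Pv)
    g = P[Pv.index(gv)][:]                    # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive + social velocity update
                V[i][d] = (w*V[i][d]
                           + c1*rng.random()*(P[i][d] - X[i][d])
                           + c2*rng.random()*(g[d] - X[i][d]))
                # move and clamp to the parameter box
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            v = F(X[i])
            if v < Pv[i]:
                P[i], Pv[i] = X[i][:], v
                if v < gv:
                    g, gv = X[i][:], v
    return g, gv

# stand-in objective with minimum at (1.0, 0.5)
F = lambda p: (p[0] - 1.0)**2 + (p[1] - 0.5)**2
p_hat, v = pso_minimize(F, bounds=[(0.0, 2.0), (0.0, 1.0)])
```

Unlike the grid search, the cost here is set by the swarm size and iteration count rather than by the grid resolution, which is what makes swarm methods attractive when the number of unknown parameters is large.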


Acknowledgment

This work is supported by the National Natural Science Foundation of China under Grant no. 50875104.