Abstract

We present an extended version of the Generalized Collage Theorem to deal with inverse problems for vector-valued Lax-Milgram systems. Numerical examples show how the method works in practical cases.

1. Inverse Problems via the Collage Theorem

In recent years a great deal of attention has been paid to inverse problems in distributed systems, that is, the determination of unknown parameters in the functional form of the governing model of the phenomenon under study [14]. The literature is rich in papers studying ad hoc methods to address ill-posed inverse problems by minimizing a suitable approximation error along with suitable regularization techniques [5-7].

Many inverse problems may be recast as the approximation of a target element $x$ in a complete metric space $(X,d)$ by the fixed point $\bar{x}$ of a contraction mapping $T : X \to X$. Thanks to a simple consequence of Banach's fixed point theorem known as the Collage Theorem, most practical methods of solving the inverse problem for fixed point equations seek an operator $T$ for which the collage distance $d(x,Tx)$ is as small as possible.

Theorem 1 (“Collage Theorem” [8]). Let $(X,d)$ be a complete metric space and $T : X \to X$ a contraction mapping with contraction factor $c \in [0,1)$. Then, for any $x \in X$,
$$d(x,\bar{x}) \le \frac{1}{1-c}\,d(x,Tx),$$
where $\bar{x}$ is the fixed point of $T$.

This vastly simplifies this type of inverse problem, as it is much easier to estimate $d(x,Tx)$ than to find the fixed point $\bar{x}$ of $T$ and then compute $d(x,\bar{x})$. One now seeks a contraction mapping $T$ that minimizes the so-called collage error $d(x,Tx)$, in other words, a mapping that sends the target $x$ as close as possible to itself. This is the essence of the method of collage coding, which has been the basis of most, if not all, fractal image coding and compression methods. Barnsley [8, 9] was the first to see the potential of using the Collage Theorem above for the purpose of fractal image approximation and fractal image coding. In practical applications, from a family of contraction mappings $T_\lambda$, $\lambda \in \Lambda$, with fixed points $x_\lambda$, one wishes to find the parameter $\bar{\lambda}$ for which the approximation error $d(x_{\bar{\lambda}}, x)$ is as small as possible. In practice the feasible set is often taken to be $\Lambda_c = \{\lambda \in \Lambda : c_\lambda \le c < 1\}$, where $c_\lambda$ is the contraction factor of $T_\lambda$, which guarantees the contractivity of $T_\lambda$ for any $\lambda \in \Lambda_c$. A difference between the “collage” approach and the one based on Tikhonov regularization is the following: in the collage approach, the constraint $c_\lambda \le c < 1$ guarantees that $T_\lambda$ is a contraction and, therefore, replaces the effect of the regularization term in the Tikhonov approach (see [4, 7]). The collage approach works particularly well for low-dimensional parametrizations, while Tikhonov regularization is a fundamentally nonparametric methodology. The collage-based inverse problem can be formulated as an optimization problem as follows:
$$\min_{\lambda \in \Lambda_c} d(x, T_\lambda x).$$
This is a nonlinear and nonsmooth optimization model that can often be reduced to a quadratic optimization program. Several algorithms can be used to solve it, including penalization methods and particle swarm and ant colony techniques.
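To make the optimization above concrete, here is a minimal numerical sketch for a hypothetical one-parameter affine family $T_\lambda(x) = Ax + \lambda b$ on $\mathbb{R}^2$ (the matrix `A`, direction `b`, and target `x` are illustrative choices, not taken from the paper): the collage error is quadratic in $\lambda$, its minimizer is a one-dimensional least-squares formula, and the Collage Theorem bound can be checked directly.

```python
import numpy as np

# Hypothetical one-parameter family T_lam(x) = A x + lam * b on R^2, with a
# fixed contraction matrix A; the collage error ||x - T_lam(x)||_2 is a
# quadratic in lam and is minimized by a 1D least-squares formula.
A = np.array([[0.3, 0.1],
              [0.0, 0.4]])
b = np.array([1.0, 1.0])
c = np.linalg.norm(A, 2)                      # contraction factor, < 1 here

x = np.array([1.0, 2.0])                      # target element
r = x - A @ x
lam = b @ r / (b @ b)                         # argmin_lam ||x - A x - lam*b||

x_bar = np.linalg.solve(np.eye(2) - A, lam * b)   # fixed point of T_lam
collage_err = np.linalg.norm(r - lam * b)
true_err = np.linalg.norm(x - x_bar)
print(true_err <= collage_err / (1 - c))      # Collage Theorem guarantee: True
```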

This method of collage coding may be applied in other situations where contractive mappings are encountered. These ideas have been extended to inverse problems for Initial Value Problems (IVPs) in [10]. In this setting, the contractive Picard operator plays the role of $T$, and the space $X$ contains continuous and appropriately bounded functions on a closed interval of observation. Given a target function, perhaps the interpolation of observational data points, the Collage Theorem can be applied to find the Picard operator within a prescribed class that minimizes the collage distance. We have applied this technique to inverse problems involving several families of differential equations, with applications to different areas (see [10-15]).
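As a toy illustration of collage coding for IVPs, the sketch below assumes the simple model $x'(t) = \lambda x(t)$ (an assumed example, not one of the families studied in the cited papers); the Picard operator is affine in $\lambda$, so the squared $L^2$ collage distance is quadratic in $\lambda$ and its minimizer is available in closed form.

```python
import numpy as np

# Collage coding for the assumed IVP model x'(t) = lam * x(t): the Picard
# operator is (T_lam x)(t) = x(0) + lam * int_0^t x(s) ds, so the squared L2
# collage distance is quadratic in lam and has a closed-form minimizer.
t = np.linspace(0.0, 1.0, 401)
dt = t[1] - t[0]
rng = np.random.default_rng(1)
x = np.exp(0.8 * t) + 0.005 * rng.standard_normal(t.size)   # noisy target data

X = np.concatenate(([0.0], np.cumsum((x[1:] + x[:-1]) / 2 * dt)))  # int_0^t x(s) ds
g = x - x[0]                                      # x(t) - x(0)
lam = np.trapz(g * X, t) / np.trapz(X * X, t)     # argmin_lam of the collage distance
print(lam)                                        # close to the true rate 0.8
```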

Example 2. We present the results of an IVP inverse problem solved using collage coding, Tikhonov regularization, and the Landweber-Fridman method. Consider the steady-state heat equation
$$-\frac{d}{dx}\left(K(x)\frac{du}{dx}(x)\right) = f(x),$$
where $K(x)$ is the variable thermal diffusivity of the medium at $x$, $f(x)$ represents a heat source or sink at each $x$, and $u(x)$ denotes the temperature at $x$. The inverse problem we look at is to estimate the variable thermal diffusivity $K(x)$ given uniformly distributed sampled values of $u(x)$, with low-amplitude Gaussian noise added, and the forcing function $f(x)$. For this example, particular choices of $f(x)$ and of the true diffusivity $K(x)$, with the corresponding true solution $u(x)$, are assumed. Morozov's discrepancy principle was used to find the value of the Tikhonov regularization parameter, with the discrepancy below a prescribed tolerance level. Table 1 presents the results; the subscripts indicate the method used to solve the inverse problem.
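The sketch below illustrates only the generic idea of Morozov's discrepancy principle for selecting the Tikhonov parameter (a bisection on the regularization parameter so that the residual matches the noise level); the matrix `A`, the data, and the noise level are synthetic stand-ins, not the discretization actually used in this example, and the helper names are hypothetical.

```python
import numpy as np

# Generic sketch of Morozov's discrepancy principle for the Tikhonov parameter,
# for a linear model A @ k = b standing in for a discretized inverse problem.
def tikhonov_solve(A, b, alpha):
    # minimizer of ||A k - b||^2 + alpha * ||k||^2
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def morozov_alpha(A, b, delta, lo=1e-12, hi=1e2, iters=60):
    # bisection (in log scale) so that ||A k_alpha - b|| is approximately delta
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        k = tikhonov_solve(A, b, mid)
        if np.linalg.norm(A @ k - b) > delta:
            hi = mid          # residual too large: decrease regularization
        else:
            lo = mid          # residual below the noise level: increase it
    return np.sqrt(lo * hi)

# toy data: mildly ill-conditioned A, noisy right-hand side with noise level delta
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 20)) @ np.diag(np.logspace(0, -4, 20))
k_true = rng.standard_normal(20)
noise = 1e-3 * rng.standard_normal(50)
b = A @ k_true + noise
delta = np.linalg.norm(noise)

alpha = morozov_alpha(A, b, delta)
k_est = tikhonov_solve(A, b, alpha)
print(alpha, np.linalg.norm(A @ k_est - b))   # residual matches delta approximately
```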

In a manner analogous to the Collage Theorem, we have also formulated a Generalized Collage Theorem for solving inverse problems for Boundary Value Problems (BVPs), replacing the minimization of the true error by the minimization of something akin to the collage distance. In place of Banach's fixed point theorem for contraction maps on a complete metric space, we appeal to the Lax-Milgram representation theorem.

These results have been extended to a wider class of elliptic problems in [16, 17], by considering not only Hilbert but also reflexive Banach spaces, and even by replacing the primal variational formulation of such a problem with a more general constrained variational one. Let us mention that this kind of formulation arises, for instance, when the boundary constraints are imposed weakly.

The paper is organized as follows. Section 2 is concerned with an extension of the Collage Theorem stated in [16, Corollary ] to a finite-dimensional vector-valued context, as well as a discretization scheme based on the use of suitable Schauder bases. Section 3 presents three different numerical examples which show how to solve inverse problems for systems of elliptic differential equations.

2. Vector-Valued Lax-Milgram and the Inverse Problem

In this section we deal with a Lax-Milgram theorem stated in terms of a system of suitable variational equations and with a collage-type result that follows from it. We refer to [18-21] for some recent vectorial versions of the Lax-Milgram theorem and some applications to the study of mixed variational equations.

The first result is the following vector-valued version of the Lax-Milgram theorem, which is a direct consequence of the characterization of the solvability of systems with infinitely many variational equations given in [22, Theorem ] (specifically of its finite-dimensional case [22, Corollary ]) and of the fact that if $E$, $F_1,\dots,F_N$ are real vector spaces, $a_i : E \times F_i \to \mathbb{R}$ ($i=1,\dots,N$) are bilinear forms, and $\varphi_i : F_i \to \mathbb{R}$ ($i=1,\dots,N$) are linear forms such that the system
$$a_i(u,v_i) = \varphi_i(v_i), \qquad v_i \in F_i,\ i=1,\dots,N,$$
admits a solution $u \in E$, then such a solution is unique if, and only if, the corresponding homogeneous problem (with $\varphi_1 = \cdots = \varphi_N = 0$) has one and only one solution.

Given a real normed space $X$, we write $X^*$ for its topological dual space.

Theorem 3. Suppose that $E$ is a real reflexive Banach space, $N \ge 1$, $F_1,\dots,F_N$ are real Banach spaces, and $a_i : E \times F_i \to \mathbb{R}$ ($i=1,\dots,N$) are continuous bilinear forms. Then for all $(\varphi_1,\dots,\varphi_N) \in F_1^* \times \cdots \times F_N^*$ there exists a unique $u \in E$ such that
$$a_i(u,v_i) = \varphi_i(v_i), \qquad v_i \in F_i,\ i=1,\dots,N,$$
if, and only if,
$$\bigl[u \in E \ \text{and}\ a_i(u,\cdot) = 0 \ \text{for}\ i=1,\dots,N\bigr] \ \Longrightarrow\ u = 0$$
and there exists $\rho > 0$ satisfying
$$(v_1,\dots,v_N) \in F_1 \times \cdots \times F_N \ \Longrightarrow\ \rho \max_{1\le i\le N}\|v_i\|_{F_i} \le \Bigl\|\sum_{i=1}^{N} a_i(\cdot,v_i)\Bigr\|_{E^*}.$$
Moreover, if these equivalent conditions hold and $u \in E$ is the unique solution, then
$$\|u\|_E \le \frac{1}{\rho}\sum_{i=1}^{N}\|\varphi_i\|_{F_i^*}.$$

The aforementioned generalization of the collage type result [16, Corollary ] for a finite number of variational equations is stated in these terms.

Corollary 4. Let $E$ be a real reflexive Banach space, let $N \ge 1$, let $F_1,\dots,F_N$ be real Banach spaces, and let $\Lambda$ be a nonempty set such that for all $\lambda \in \Lambda$ there exist continuous bilinear forms $a_i^\lambda : E \times F_i \to \mathbb{R}$ and functionals $\varphi_i^\lambda \in F_i^*$ ($i=1,\dots,N$) satisfying the conditions of Theorem 3 with constant $\rho_\lambda > 0$. Let us also suppose that, for all $\lambda \in \Lambda$, $u_\lambda \in E$ is the unique solution of the variational system
$$a_i^\lambda(u_\lambda,v_i) = \varphi_i^\lambda(v_i), \qquad v_i \in F_i,\ i=1,\dots,N.$$
Then for each $\lambda \in \Lambda$ and for all $u \in E$ the inequality
$$\|u - u_\lambda\|_E \le \frac{1}{\rho_\lambda}\sum_{i=1}^{N}\bigl\|a_i^\lambda(u,\cdot) - \varphi_i^\lambda\bigr\|_{F_i^*}$$
is valid.

Proof. The unisolvency and the continuous dependence of the solution on the data in Theorem 3 imply the announced result. Indeed, let $\lambda \in \Lambda$ and $u \in E$, and notice that $u - u_\lambda$ is the solution of the system of variational equations
$$a_i^\lambda(u - u_\lambda, v_i) = a_i^\lambda(u,v_i) - \varphi_i^\lambda(v_i), \qquad v_i \in F_i,\ i=1,\dots,N.$$
Then, in view of Theorem 3, we conclude that
$$\|u - u_\lambda\|_E \le \frac{1}{\rho_\lambda}\sum_{i=1}^{N}\bigl\|a_i^\lambda(u,\cdot) - \varphi_i^\lambda\bigr\|_{F_i^*}.$$

Let us observe that if one wants to approximate the solution $u_\lambda$ to the target $u$ in the sense of the collage distance, that is, to minimize $\|u - u_\lambda\|_E$, then, according to Corollary 4, it suffices to minimize
$$\frac{1}{\rho_\lambda}\sum_{i=1}^{N}\bigl\|a_i^\lambda(u,\cdot) - \varphi_i^\lambda\bigr\|_{F_i^*},$$
although if $\inf_{\lambda \in \Lambda}\rho_\lambda > 0$ then we only need to minimize
$$\sum_{i=1}^{N}\bigl\|a_i^\lambda(u,\cdot) - \varphi_i^\lambda\bigr\|_{F_i^*}.$$
Under such an assumption, suppose that each space $F_i$ ($i=1,\dots,N$) has a Schauder basis $\{v_j^i\}_{j\ge1}$, in such a way that if $\{(v_j^i)^*\}_{j\ge1}$ denotes its sequence of biorthogonal functionals, then the nonrestrictive normalization condition $\|v_j^i\|_{F_i}=1$ ($j\ge1$) holds. In order to discretize our optimization problem, let us also assume that $E$ admits a Schauder basis and, for each $i=1,\dots,N$ and $n\ge1$, let $P_n^i$ denote the $n$th projection of $F_i$ onto $\operatorname{span}\{v_1^i,\dots,v_n^i\}$; that is, for all $v \in F_i$,
$$P_n^i(v)=\sum_{j=1}^{n}(v_j^i)^*(v)\,v_j^i.$$
We also suppose that, for all $\lambda\in\Lambda$, $i=1,\dots,N$, and $n\ge1$, the bilinear forms $a_i^\lambda$ and the functionals $\varphi_i^\lambda$, restricted to these finite-dimensional subspaces, still satisfy the hypotheses of Theorem 3; then Theorem 3 guarantees the existence of a unique solution of the corresponding discretized variational system. Corollary 4, when applied to this vector-valued variational problem, implies that it suffices to minimize the truncated collage term
$$\sum_{i=1}^{N}\bigl\|\bigl(a_i^\lambda(u,\cdot)-\varphi_i^\lambda\bigr)\circ P_n^i\bigr\|_{F_i^*}$$
or, equivalently, the discrete objective function
$$F_n(\lambda)=\sum_{i=1}^{N}\sum_{j=1}^{n}\bigl(a_i^\lambda(u,v_j^i)-\varphi_i^\lambda(v_j^i)\bigr)^2,$$
which is easier to minimize.
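As a minimal computational illustration of a discrete collage objective of this type, the sketch below treats the scalar model problem $-(\lambda u')' = f$ on $(0,1)$ with homogeneous Dirichlet conditions (an assumed toy problem, not one of the examples of Section 3), testing the weak form against hat functions; each residual is linear in $\lambda$, so the objective is quadratic and is minimized in closed form.

```python
import numpy as np

# Discretized collage objective for the assumed model -(lam * u')' = f on (0,1),
# u(0) = u(1) = 0: testing the weak form against the first n hat functions gives
# residuals r_j(lam) = lam * int u' e_j' dx - int f e_j dx, and the objective
# F_n(lam) = sum_j r_j(lam)^2 is quadratic in lam.
n = 15                                   # number of hat test functions
xs = np.linspace(0.0, 1.0, 1001)
u_target = np.sin(np.pi * xs)            # target (e.g., fit to observed data)
f = 2.0 * np.pi**2 * np.sin(np.pi * xs)  # forcing, consistent with lam_true = 2

nodes = np.linspace(0.0, 1.0, n + 2)     # interior nodes carry the hat functions

def hat(j, x):
    # piecewise-linear hat centered at nodes[j], j = 1..n
    return np.clip(1.0 - np.abs(x - nodes[j]) / (nodes[1] - nodes[0]), 0.0, None)

A = np.empty(n)   # coefficients of lam in each residual: int u' e_j' dx
b = np.empty(n)   # right-hand sides: int f e_j dx
du = np.gradient(u_target, xs)
for j in range(1, n + 1):
    e = hat(j, xs)
    de = np.gradient(e, xs)
    A[j - 1] = np.trapz(du * de, xs)
    b[j - 1] = np.trapz(f * e, xs)

lam_opt = (A @ b) / (A @ A)              # minimizer of the quadratic objective
print(lam_opt)                           # approximately 2
```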

3. Numerical Examples

In this section we present three different numerical examples.

Example 1. We consider a linear system of differential equations with unknown coefficients. We solve the system numerically, sample each solution component at uniformly distributed data points, add relative noise, and fit 6th-degree polynomials to the resulting data to produce a target solution. Figure 1 shows the numerical solution and the target functions. We consider the inverse problem: given the target solution and the forcing functions, approximate the coefficients such that the resulting system admits the target as an approximate solution. Taking the test functions to be the $n$-dimensional “hat basis” of piecewise-linear functions, for various values of the number of sample points, the noise level, the fitting degree, and the basis dimension $n$, we construct the objective function in (26) and minimize it to find the coefficient estimates. The results are presented in Table 2.
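The following sketch illustrates the target-construction step described above (the grid size, the 2% noise level, and the stand-in solution component are illustrative assumptions): sample a solution component on uniformly spaced points, perturb each sample by relative Gaussian noise, and fit a 6th-degree polynomial.

```python
import numpy as np

# Target construction: sample, add relative noise, fit a 6th-degree polynomial.
rng = np.random.default_rng(3)
xs = np.linspace(0.0, 1.0, 21)                 # uniformly distributed sample points
u_samples = np.sin(np.pi * xs) * np.exp(-xs)   # stand-in for a solved component
eps = 0.02                                     # assumed 2% relative noise
noisy = u_samples * (1.0 + eps * rng.standard_normal(xs.size))

coeffs = np.polyfit(xs, noisy, deg=6)          # 6th-degree polynomial fit
u_target = np.poly1d(coeffs)                   # callable target u(x)
print(u_target(0.5))
```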

Example 2. We consider a coupled system whose forcing functions are not Hilbertian, living instead in a non-Hilbert Lebesgue space. We therefore work in the reflexive Banach space framework of Section 2. (In [16], we observed that the Hilbert space solution framework failed to work for a single equation with such a forcing function.) Following [23, Proposition ], we can construct a Schauder basis of the relevant Sobolev space by integrating the Haar system of the underlying Lebesgue space.
The BVP has singularities at the endpoints of the interval, since at each endpoint one of the coefficient functions is undefined. The forward problem can be solved numerically by using collocation techniques; we use COMSOL to solve it. Figure 2 presents the numerical solution. Next, we represent the solution components in the subspace generated by the first terms of the Schauder basis. We consider the inverse problem: find the unknown parameters such that this representation is admitted as an approximate solution to (30), with the true solution replaced by its basis representation.
We solve the inverse problem by constructing the objective function in (26), with the test functions taken to be elements of the Schauder basis. For each choice of the truncation level, upon minimizing the objective function we obtain the parameter estimates to four decimal places.
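The basis construction mentioned in this example can be sketched as follows, under the assumption that the spaces involved are $L^p(0,1)$ and $W^{1,p}(0,1)$: integrating the Haar functions produces piecewise-linear "tent" functions which, together with the constant function and the identity, form a Schauder basis of $W^{1,p}(0,1)$.

```python
import numpy as np

# Build the first few integrated Haar functions on a grid: primitives of the
# Haar system, which (with the constant and identity functions) give a
# Schauder basis of the Sobolev space W^{1,p}(0,1).
def haar(k, j, x):
    # Haar wavelet h_{k,j} on [0,1]: level k >= 0, shift j = 0,...,2^k - 1
    y = (2.0 ** k) * x - j
    return (2.0 ** (k / 2)) * (((0 <= y) & (y < 0.5)).astype(float)
                               - ((0.5 <= y) & (y < 1)).astype(float))

def integrated_haar(k, j, x):
    # primitive int_0^x h_{k,j}(s) ds, a piecewise-linear "tent" function
    dx = x[1] - x[0]
    return np.concatenate(([0.0], np.cumsum(haar(k, j, x)[:-1] * dx)))

x = np.linspace(0.0, 1.0, 1025)
basis = [np.ones_like(x), x]                        # constant and identity
basis += [integrated_haar(k, j, x) for k in range(3) for j in range(2 ** k)]
print(len(basis), max(abs(b).max() for b in basis))
```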

Example 3. We consider a 2D linear system in which the forcing functions have been chosen so that the actual solution to the system is known in closed form. Analogous to the process followed in Example 1, we sample each solution component at uniformly distributed data points in the domain and add relative noise to each of these data points. A target solution is constructed using these data points together with our basis functions (hexagonal-based pyramids in this 2D case). We consider the inverse problem: given the target solution and the forcing functions, approximate the coefficients such that the resulting system admits the target as an approximate solution. For various values of the number of sample points, the noise level, the degree, and the basis dimension, we construct the objective function in (26) and minimize it to find the coefficient estimates. The results are presented in Table 3.
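A sketch of the kind of pyramid basis used here can be built from piecewise-linear nodal functions on a triangulated grid (the grid size and node placement below are illustrative assumptions); on a regular triangulation each interior nodal function has hexagonal support, the 2D analogue of the hat basis of Example 1.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# Piecewise-linear nodal ("pyramid") basis functions on a triangulated grid.
nx = ny = 9
gx, gy = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
nodes = np.column_stack([gx.ravel(), gy.ravel()])
tri = Delaunay(nodes)

def pyramid(i):
    # nodal basis function: 1 at node i, 0 at every other node, linear on each triangle
    vals = np.zeros(len(nodes))
    vals[i] = 1.0
    return LinearNDInterpolator(tri, vals, fill_value=0.0)

phi = pyramid(len(nodes) // 2)          # basis function at the central node
print(phi(0.5, 0.5), phi(0.0, 0.0))     # 1.0 at its own node, 0.0 far away
```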

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.