
Advances in Mathematical Physics

Volume 2010 (2010), Article ID 457635, 37 pages

http://dx.doi.org/10.1155/2010/457635

## The Partial Inner Product Space Method: A Quick Overview

Jean-Pierre Antoine^{1} and Camillo Trapani^{2}

^{1}Institut de Recherche en Mathématique et Physique, Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
^{2}Dipartimento di Matematica ed Applicazioni, Università di Palermo, 90123 Palermo, Italy

Received 16 December 2009; Accepted 15 April 2010

Academic Editor: S. T. Ali

Copyright © 2010 Jean-Pierre Antoine and Camillo Trapani. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Many families of function spaces play a central role in analysis, in particular in signal processing (e.g., wavelet or Gabor analysis). Typical are $L^p$ spaces, Besov spaces, amalgam spaces, or modulation spaces. In all these cases, the parameter indexing the family measures the behavior (regularity, decay properties) of particular functions or operators. It turns out that all these space families are, or contain, scales or lattices of Banach spaces, which are special cases of *partial inner product spaces* (PIP-*spaces*). In this context, it is often said that such families should be taken as a whole and that operators, bases, and frames on them should be defined globally, for the whole family, instead of on individual spaces. In this paper, we give an overview of PIP-spaces and operators on them, illustrating the results by space families of interest in mathematical physics and signal analysis. The interesting fact is that PIP-spaces allow a global definition of operators, and various operator classes on them have been defined.

#### 1. Motivation

In the course of their curriculum, physics and mathematics students are usually taught the basics of Hilbert space, including operators of various types. The justification of this choice is twofold. On the mathematical side, Hilbert space is the example of an infinite-dimensional topological vector space that most closely resembles the familiar Euclidean space and thus it offers the student a smooth introduction into functional analysis. On the physics side, the fact is simply that Hilbert space is the daily language of quantum theory; therefore, mastering it is essential for the quantum physicist.

However, the tool in question is actually insufficient. A pure Hilbert space formulation of quantum mechanics is both inconvenient and foreign to the daily behavior of most physicists, who stick to the more suggestive version of Dirac, although it lacks a rigorous formulation. On the other hand, the interesting solutions of most partial differential equations are seldom smooth or square integrable. Physically meaningful events correspond to changes of regime, which mean discontinuities and/or distributions. Shock waves are a typical example. Actually this state of affairs was recognized long ago by authors like Leray or Sobolev, which led them to introduce the notion of *weak solution*. Thus it is no coincidence that many textbooks on PDEs begin with a thorough study of distribution theory [1–4].

All this naturally leads to the introduction of Rigged Hilbert Spaces (RHS) [5]. In a nutshell, a RHS is a triplet
$$\Phi \subset \mathcal{H} \subset \Phi^{\times}, \qquad (1.1)$$
where $\mathcal{H}$ is a Hilbert space, $\Phi$ is a dense subspace of $\mathcal{H}$, equipped with a locally convex topology finer than the norm topology inherited from $\mathcal{H}$, and $\Phi^{\times}$ is the space of continuous conjugate linear functionals on $\Phi$, endowed with the strong dual topology. By duality, each space in (1.1) is dense in the next one and all embeddings are linear and continuous. In addition, the space $\Phi$ is in general required to be reflexive and nuclear. Standard examples of rigged Hilbert spaces are the Schwartz distribution spaces over $\mathbb{R}$ or $\mathbb{R}^n$, namely, $\mathcal{S} \subset L^2 \subset \mathcal{S}^{\times}$ or $\mathcal{D} \subset L^2 \subset \mathcal{D}^{\times}$ [5–8].

The problem with the RHS (1.1) is that, besides the Hilbert space vectors, it contains only two types of elements: “very good” functions in $\Phi$ and “very bad” ones in $\Phi^{\times}$. If one wants a fine control on the behavior of individual elements, one has to interpolate somehow between the two extreme spaces. In the case of the Schwartz triplet $\mathcal{S} \subset L^2 \subset \mathcal{S}^{\times}$, a well-known solution is given by a chain of Hilbert spaces, the so-called Hermite representation of tempered distributions [9].

In fact, this is not at all an isolated case. Indeed many function spaces that play a central role in analysis come in the form of families, indexed by one or several parameters that characterize the behavior of functions (smoothness, behavior at infinity, etc.). The typical structure is a chain or a scale of Hilbert spaces, or a chain of (reflexive) Banach spaces. (A discrete chain $\{\mathcal{H}_n\}_{n \in \mathbb{Z}}$ of Hilbert spaces is called a *scale* if there exists a self-adjoint operator $A \geq 1$ such that $\mathcal{H}_n = D(A^n)$, with the graph norm $\|f\|_n = \|A^n f\|$. A similar definition holds for a continuous chain $\{\mathcal{H}_\alpha\}_{\alpha \in \mathbb{R}}$.) Let us give two familiar examples.

(i) First, consider the Lebesgue spaces $L^p$ on a finite interval, for example, $I = [0,1]$:
$$L^\infty(I) \subset \cdots \subset L^q(I) \subset \cdots \subset L^p(I) \subset \cdots \subset L^1(I), \qquad (1.2)$$
where $1 \leq p < q \leq \infty$. Here $L^p$ and $L^{\overline{p}}$ are dual to each other ($1/p + 1/\overline{p} = 1$), and similarly are $L^q, L^{\overline{q}}$. By the Hölder inequality, the ($L^2$) inner product
$$\langle f | g \rangle = \int_0^1 \overline{f(x)}\, g(x)\, dx \qquad (1.3)$$
is well defined if $f \in L^p, \; g \in L^{\overline{p}}$. However, it is *not* well defined for two arbitrary functions $f, g \in L^1$. Take, for instance, $f(x) = g(x) = x^{-1/2}$: then $f, g \in L^1$, but $\langle f | g \rangle$ diverges. Thus, on $L^1(I)$, (1.3) defines only a *partial* inner product. The same result holds for any compact subset of $\mathbb{R}$ instead of $[0,1]$.

(ii) As a second example, take the scale of Hilbert spaces built on the powers of a positive self-adjoint operator $A \geq 1$ in a Hilbert space $\mathcal{H}_0$. Let $\mathcal{H}_n$ be $D(A^n)$, the domain of $A^n$, equipped with the graph norm $\|f\|_n = \|A^n f\|$, for $n \in \mathbb{N}$ or $n \in \mathbb{R}^+$, and $\mathcal{H}_{-n} := \mathcal{H}_n^{\times}$ (conjugate dual):
$$\cdots \subset \mathcal{H}_2 \subset \mathcal{H}_1 \subset \mathcal{H}_0 \subset \mathcal{H}_{-1} \subset \mathcal{H}_{-2} \subset \cdots \qquad (1.4)$$
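The failure of (1.3) on all of $L^1$ is easy to check numerically. The following sketch (plain Python; the helper `midpoint_sum` is our own illustration, not part of the original treatment) approximates $\int_\varepsilon^1 x^{-1/2}\,dx$ and $\int_\varepsilon^1 x^{-1}\,dx$: as $\varepsilon \to 0$, the first stabilizes near $2$, while the second grows like $\ln(1/\varepsilon)$, mirroring the fact that $f(x) = x^{-1/2}$ lies in $L^1([0,1])$ while $\langle f | f \rangle$ diverges.

```python
# Midpoint Riemann sums on [eps, 1]: f(x) = x**-0.5 is integrable on [0, 1],
# but <f|f> = integral of 1/x diverges logarithmically as eps -> 0.
def midpoint_sum(f, eps, n=100_000):
    h = (1.0 - eps) / n
    return h * sum(f(eps + (k + 0.5) * h) for k in range(n))

f = lambda x: x ** -0.5

for eps in (1e-2, 1e-4, 1e-6):
    norm_l1 = midpoint_sum(f, eps)                    # stabilizes near 2
    pairing = midpoint_sum(lambda x: f(x) ** 2, eps)  # grows like ln(1/eps)
    print(f"eps={eps:g}  int f = {norm_l1:.4f}  int f^2 = {pairing:.2f}")
```

Refining $\varepsilon$ further only changes the first integral in the third decimal, while the second keeps growing without bound.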

Note that, in the second example (ii), the index $n$ could also be taken as real, the link between the two cases being established by the spectral theorem for self-adjoint operators. Here again the inner product of $\mathcal{H}_0$ extends to each pair $\mathcal{H}_n, \mathcal{H}_{-n}$, but on $V = \bigcup_n \mathcal{H}_{-n}$ it yields only a partial inner product. The following examples are standard: (i) $(A_P f)(x) = (1 + x^2)^{1/2} f(x)$ in $L^2(\mathbb{R}, dx)$, (ii) $(A_M f)(x) = (1 - d^2/dx^2)^{1/2} f(x)$ in $L^2(\mathbb{R}, dx)$, (iii) $(A_{\rm osc} f)(x) = (1 + x^2 - d^2/dx^2) f(x)$ in $L^2(\mathbb{R}, dx)$.

(The notation is suggested by the operators of position, momentum, and harmonic oscillator energy in quantum mechanics, resp.) Note that both $D^\infty(A_{\rm osc}) := \bigcap_n D(A_{\rm osc}^n)$ and $D^\infty(A_P) \cap D^\infty(A_M)$ coincide with the Schwartz space $\mathcal{S}$ of smooth functions of fast decay, and their conjugate duals with the space $\mathcal{S}^{\times}$ of tempered distributions (considered here as continuous *conjugate linear* functionals on $\mathcal{S}$). As for the operator $A_M$, it generates the scale of Sobolev spaces $H^s(\mathbb{R})$.

However, a moment's reflection shows that the total-order relation inherent in a chain is in fact an unnecessary restriction; partially ordered structures are sufficient, and indeed necessary in practice. For instance, in order to get a better control on the behavior of individual functions, one may consider the lattice built on the powers of $A_P$ and $A_M$ simultaneously. Then the extreme spaces are still $\mathcal{S}$ and $\mathcal{S}^{\times}$. Similarly, in the case of several variables, controlling the behavior of a function in each variable separately requires a nonordered set of spaces. This is in fact a statement about tensor products (remember that $L^2(\mathbb{R}^2) \simeq L^2(\mathbb{R}) \otimes L^2(\mathbb{R})$). Indeed the tensor product of two chains of Hilbert spaces is naturally a lattice of Hilbert spaces. For instance, in the example above, for two variables $x, y$, that would mean considering intermediate Hilbert spaces corresponding to the tensor product of two operators.

Thus the structure to analyze is that of *lattices of Hilbert or Banach spaces*, interpolating between the extreme spaces of an RHS, as in (1.1). Many examples can be given, for instance, the lattice generated by the spaces $L^p(\mathbb{R}, dx)$, the amalgam spaces $W(L^p, \ell^q)$, the mixed-norm spaces $L^{p,q}_m$, and many more. In all these cases, which contain most families of function spaces of interest in analysis and in signal processing, a common structure emerges for the “large” space $V$, defined as the union of all individual spaces. There is a lattice of Hilbert or reflexive Banach spaces $V_r$, with an (order-reversing) involution $V_r \leftrightarrow V_{\overline{r}}$, where $V_{\overline{r}} = V_r^{\times}$ (the space of continuous conjugate linear functionals on $V_r$), a central Hilbert space $V_o \simeq V_{\overline{o}}$, and a partial inner product on $V$ that extends the inner product of $V_o$ to pairs of dual spaces $V_r, V_{\overline{r}}$.

Moreover, many operators should be considered globally, for the whole scale or lattice, instead of on individual spaces. In the case of the spaces $L^p$, such are, for instance, operators implementing translations ($f(x) \mapsto f(x - a)$) or dilations ($f(x) \mapsto f(x/a)$), convolution operators, the Fourier transform, and so forth. In the same spirit, it is often useful to have a *common* basis for the whole family of spaces, such as the Haar basis for the spaces $L^p([0,1]), \; 1 < p < \infty$. Thus we need a notion of operator and basis defined globally for the scale or lattice itself.

This state of affairs prompted A. Grossmann and one of us (the first author) to systematize this approach, and this led to the concept of *partial inner product space* or PIP-space [10–13]. After many years and various developments, we devoted a full monograph [14] to a detailed survey of the theory. The aim of this paper is to present the formalism of PIP-spaces, which indeed answers these questions. In a first part, the structure of PIP-space is derived systematically from the abstract notion of compatibility and then particularized to the examples listed above. In a second part, operators on PIP-spaces are introduced and illustrated by several operators commonly used in Gabor or wavelet analysis. Finally we describe a number of applications of PIP-spaces in mathematical physics and in signal processing. Of course, the treatment is sketchy, for lack of space. For complete information, we refer the reader to our monograph [14].

#### 2. Partial Inner Product Spaces

##### 2.1. Basic Definitions

The basic question is how to generate PIP-spaces in a systematic fashion. In order to answer it, we may reformulate it as follows: given a vector space $V$ and two vectors $f, g \in V$, when does their inner product make sense? A way of formalizing the answer is given by the idea of *compatibility*.

*Definition 2.1. * A *linear compatibility relation* on a vector space $V$ is a symmetric binary relation $f \# g$ which preserves linearity:
$$f \# g \;\Longleftrightarrow\; g \# f, \qquad f \# g, \; f \# h \;\Longrightarrow\; f \# (\alpha g + \beta h), \quad \forall f, g, h \in V, \; \forall \alpha, \beta \in \mathbb{C}. \qquad (2.1)$$
As a consequence, for every subset $S \subset V$, the set $S^{\#} = \{g \in V : g \# f, \; \forall f \in S\}$ is a vector subspace of $V$ and one has
$$S^{\#\#} = (S^{\#})^{\#} \supseteq S, \qquad S^{\#\#\#} = S^{\#}. \qquad (2.2)$$
Thus one gets the following equivalences:
$$f \# g \;\Longleftrightarrow\; f \in \{g\}^{\#} \;\Longleftrightarrow\; \{f\}^{\#\#} \subseteq \{g\}^{\#}. \qquad (2.3)$$
From now on, we will call *assaying subspace* of $V$ a subspace $S$ such that $S^{\#\#} = S$ and denote by $\mathcal{F}(V, \#)$ the family of all assaying subspaces of $V$, ordered by inclusion. Let $F$ be the isomorphy class of $\mathcal{F}(V, \#)$, that is, $\mathcal{F}(V, \#)$ considered as an abstract partially ordered set. Elements of $F$ will be denoted by $q, r, \ldots$, and the corresponding assaying subspaces by $V_q, V_r, \ldots$. By definition, $q \leq r$ if and only if $V_q \subseteq V_r$. We also write $V_{\overline{r}} = V_r^{\#}$. Thus the relations (2.3) mean that $f \# g$ if and only if there is an index $r$ such that $f \in V_r, \; g \in V_{\overline{r}}$. In other words, vectors should not be considered individually, but only in terms of assaying subspaces, which are the building blocks of the whole structure.

It is easy to see that the map $S \mapsto S^{\#\#}$ is a closure, in the sense of universal algebra, so that the assaying subspaces are precisely the “closed” subsets. Therefore one has the following standard result.
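The closure property depends only on the symmetry of the relation, so it can be verified exhaustively on a toy model. The sketch below (our own illustration; the names `compat` and `sharp` are ours, and the relation on a 6-element set is an arbitrary symmetric choice) checks $S \subseteq S^{\#\#}$ and $S^{\#\#\#} = S^{\#}$ for all 64 subsets.

```python
from itertools import combinations

# Toy model: an arbitrary symmetric "compatibility" relation on {0,...,5}.
# For ANY symmetric relation, S -> S## is a closure: S is contained in S##,
# and applying # three times is the same as applying it once.
U = list(range(6))

def compat(i, j):
    return (i + j) % 3 != 0          # symmetric, since i + j = j + i

def sharp(S):
    """S# = set of elements compatible with every element of S."""
    return frozenset(g for g in U if all(compat(f, g) for f in S))

for r in range(len(U) + 1):
    for S in map(frozenset, combinations(U, r)):
        assert S <= sharp(sharp(S))                 # S is contained in S##
        assert sharp(S) == sharp(sharp(sharp(S)))   # S### = S#
print("closure properties verified for all 64 subsets")
```

The same two assertions hold verbatim for any other symmetric `compat`, which is the abstract content of (2.2).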

Theorem 2.2. *The family $\mathcal{F}(V, \#)$, ordered by inclusion, is a complete involutive lattice, that is, it is stable under the following operations, arbitrarily iterated:*
(i) *involution: $V_r \leftrightarrow V_{\overline{r}} = (V_r)^{\#}$,*
(ii) *infimum: $V_{p \wedge q} = V_p \cap V_q$,*
(iii) *supremum: $V_{p \vee q} = (V_p + V_q)^{\#\#}$.*

The smallest element of $\mathcal{F}(V, \#)$ is $V^{\#} = \bigcap_r V_r$ and the greatest element is $V = \bigcup_r V_r$. By definition, the index set $F$ is also a complete involutive lattice; for instance,
$$(p \wedge q)^{-} = \overline{p} \vee \overline{q}, \qquad (p \vee q)^{-} = \overline{p} \wedge \overline{q}.$$

*Definition 2.3. * A *partial inner product* on $(V, \#)$ is a Hermitian form $\langle \cdot | \cdot \rangle$ defined exactly on compatible pairs of vectors. A *partial inner product space* (PIP-space) is a vector space $V$ equipped with a linear compatibility and a partial inner product.

Note that the partial inner product is not required to be positive definite.

The partial inner product clearly defines a notion of *orthogonality*: $f \perp g$ if and only if $f \# g$ and $\langle f | g \rangle = 0$.

*Definition 2.4. * The PIP-space $(V, \#, \langle \cdot | \cdot \rangle)$ is *nondegenerate* if $(V^{\#})^{\perp} = \{0\}$, that is, if $\langle f | g \rangle = 0$ for all $f \in V^{\#}$ implies that $g = 0$.

We will assume henceforth that our PIP-space $(V, \#, \langle \cdot | \cdot \rangle)$ is nondegenerate. As a consequence, $(V^{\#}, V)$ and every couple $(V_r, V_{\overline{r}})$ are dual pairs in the sense of topological vector spaces [15]. We also assume that the partial inner product is positive definite.

Now one wants the topological structure to match the algebraic structure; in particular, the topology $\tau_r$ on $V_r$ should be such that its conjugate dual be $V_{\overline{r}}$: $(V_r[\tau_r])^{\times} = V_{\overline{r}}$. This implies that the topology $\tau_r$ must be finer than the weak topology $\sigma(V_r, V_{\overline{r}})$ and coarser than the Mackey topology $\tau(V_r, V_{\overline{r}})$. From here on, we will assume that every $V_r$ carries its Mackey topology $\tau(V_r, V_{\overline{r}})$. This choice has two interesting consequences. First, if $V_r$ is a Hilbert space or a reflexive Banach space, then $\tau(V_r, V_{\overline{r}})$ coincides with the norm topology. Next, $r \leq s$ implies that $V_r \subseteq V_s$, and the embedding operator $E_{sr} : V_r \to V_s$ is continuous and has dense range. In particular, $V^{\#}$ is dense in every $V_r$.

##### 2.2. Examples

###### 2.2.1. Sequence Spaces

Let $V$ be the space $\omega$ of *all* complex sequences $x = (x_n)$ and define on it (i) a compatibility relation by $x \# y \Longleftrightarrow \sum_n |x_n y_n| < \infty$ and (ii) a partial inner product $\langle x | y \rangle = \sum_n \overline{x_n}\, y_n$.

Then $\omega^{\#} = \varphi$, the space of finite sequences, and the complete lattice $\mathcal{F}(\omega, \#)$ consists of Köthe's perfect sequence spaces [15, §30]. Among these, typical assaying subspaces are the weighted Hilbert spaces
$$\ell^2(r) = \Big\{ (x_n) : \sum_n |x_n|^2 \, r_n^{-2} < \infty \Big\}, \qquad (2.6)$$
where $r = (r_n)$ is a sequence of positive numbers. The involution is $\ell^2(r) \leftrightarrow \ell^2(r)^{\#} = \ell^2(\overline{r})$, where $\overline{r}_n = 1/r_n$. In addition, there is a central, self-dual Hilbert space, namely, $\ell^2(1) = \ell^2$, where $1 = (1, 1, 1, \ldots)$.
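The mechanism of the partial inner product can be illustrated numerically. In the sketch below (our own example, with the weight convention of (2.6) and the toy choice $r_n = n$), the sequence $x_n = n^{0.4}$ is *not* in $\ell^2$, yet it lies in $\ell^2(r)$, the sequence $y_n = n^{-1.6}$ lies in the dual space $\ell^2(1/r)$, and the pairing $\sum_n x_n y_n$ converges.

```python
# Toy weights r_n = n, so (with the convention of (2.6)):
#   l2(r)   = {x : sum |x_n|^2 / n^2 < infinity},
#   l2(1/r) = {y : sum |y_n|^2 * n^2 < infinity}   (its conjugate dual).
# x_n = n**0.4 is NOT in l2, but x in l2(r), y = n**-1.6 in l2(1/r),
# and the partial inner product <x|y> = sum x_n y_n converges.
def partial_sums(term, checkpoints=(1_000, 10_000, 100_000)):
    s, out = 0.0, []
    for n in range(1, max(checkpoints) + 1):
        s += term(n)
        if n in checkpoints:
            out.append(s)
    return out

x = lambda n: n ** 0.4
y = lambda n: n ** -1.6

print(partial_sums(lambda n: x(n) ** 2))           # diverges: x not in l2
print(partial_sums(lambda n: x(n) ** 2 / n ** 2))  # converges: x in l2(r)
print(partial_sums(lambda n: y(n) ** 2 * n ** 2))  # converges: y in l2(1/r)
print(partial_sums(lambda n: x(n) * y(n)))         # converges: <x|y> defined
```

The three convergent sums barely move between the last two checkpoints, while the first one keeps growing, exactly the dichotomy the compatibility $\#$ encodes.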

###### 2.2.2. Spaces of Locally Integrable Functions

Let now $V$ be $L^1_{\rm loc}(\mathbb{R}, dx)$, the space of Lebesgue measurable functions, integrable over compact subsets, and define a compatibility relation on it by $f \# g \Longleftrightarrow \int_{\mathbb{R}} |f(x)\, g(x)| \, dx < \infty$ and a partial inner product $\langle f | g \rangle = \int_{\mathbb{R}} \overline{f(x)}\, g(x) \, dx$.

Then $V^{\#} = L^\infty_c(\mathbb{R})$, the space of bounded measurable functions of compact support. The complete lattice $\mathcal{F}(V, \#)$ consists of the Köthe function spaces [16, 17]. Here again, typical assaying subspaces are the weighted Hilbert spaces
$$L^2(r) = \Big\{ f \in L^1_{\rm loc} : \int_{\mathbb{R}} |f(x)|^2 \, r(x)^{-2} \, dx < \infty \Big\}, \qquad (2.7)$$
with $r(x) > 0$ a.e. The involution is $L^2(r) \leftrightarrow L^2(\overline{r})$, with $\overline{r} = 1/r$, and the central, self-dual Hilbert space is $L^2(\mathbb{R}, dx)$.

###### 2.2.3. Nested Hilbert Spaces

This is the original construction of Grossmann [18] for finding an “easy” substitute to distributions, and actually one of the motivations for introducing PIP-spaces. And indeed the two are closely related; see [14, Section ].

###### 2.2.4. Rigged Hilbert Spaces

This is the simplest example of PIP-space, but it is a rather poor one. Indeed, in the RHS (1.1), two elements are compatible if both belong to $\mathcal{H}$, or one of them belongs to $\Phi$. Thus the three defining spaces are the only assaying subspaces. The partial inner product is, of course, simply that of $\mathcal{H}$, provided the sesquilinear form expressing the duality between $\Phi$ and $\Phi^{\times}$ has been correctly normalized.

#### 3. Lattices of Hilbert or Banach Spaces

From the previous examples, we learn that $\mathcal{F}(V, \#)$ is a huge lattice (it is complete!) and that assaying subspaces may be complicated, such as Fréchet spaces, nonmetrizable spaces, and so forth. This situation suggests to choose an involutive sublattice $\mathcal{I}$ of $\mathcal{F}$, indexed by $I$, such that
(i) $\mathcal{I}$ is *generating*:
$$f \# g \;\Longleftrightarrow\; \exists \, r \in I \text{ such that } f \in V_r, \; g \in V_{\overline{r}}, \qquad (3.1)$$
(ii) every $V_r, \; r \in I$, is a Hilbert space or a reflexive Banach space,
(iii) there is a unique self-dual assaying subspace $V_o = V_{\overline{o}}$, which is a Hilbert space.

In that case, the structure $V_I := (V, \mathcal{I}, \langle \cdot | \cdot \rangle)$ is called, respectively, a *lattice of Hilbert spaces* (LHS) or a *lattice of Banach spaces* (LBS). Both types are particular cases of the so-called indexed PIP-spaces [14]. Note that $V^{\#}$ and $V$ themselves usually do *not* belong to the family $\{V_r, \; r \in I\}$, but they can be recovered as
$$V^{\#} = \bigcap_{r \in I} V_r, \qquad V = \sum_{r \in I} V_r.$$

In the LBS case, the lattice structure takes the following form:
(i) $V_{p \wedge q} = V_p \cap V_q$, with the *projective* norm
$$\|f\|_{p \wedge q} = \|f\|_p + \|f\|_q,$$
(ii) $V_{p \vee q} = V_p + V_q$, with the *inductive* norm
$$\|f\|_{p \vee q} = \inf_{f = g + h} \big( \|g\|_p + \|h\|_q \big), \quad g \in V_p, \; h \in V_q.$$

These norms are usual in interpolation theory [19]. In the LHS case, one takes similar definitions with squared norms, in order to get Hilbert norms throughout.

In the rest of this section, we will list a series of concrete examples of LHS/LBSs. Some more examples, which are of particular interest in signal processing, will be given in Section 6.2. For simplicity, we will restrict ourselves to one dimension, although most spaces may be defined on $\mathbb{R}^n$, as well.

##### 3.1. Chains of Hilbert or Banach Spaces

Typical are the two examples described in Section 1. (1) The chain of Lebesgue spaces $\{L^p([0,1]), \; 1 \leq p \leq \infty\}$. The chain (1.2) is a (totally ordered) lattice. The corresponding lattice completion is obtained by adding suitable “nonstandard” spaces, built as intersections and sums of the $L^p$ spaces. (2) The scale (1.4) of Hilbert spaces built on powers of $A$. The lattice completion is similar to the previous one, introducing analogous “nonstandard” spaces [14, Section ].

##### 3.2. Sequence Spaces

###### 3.2.1. A LHS of Weighted $\ell^2$ Spaces

In $\omega$, with the compatibility $\#$ and the partial inner product defined in Section 2.2.1, we may take the lattice $\mathcal{I} = \{\ell^2(r)\}$ of the weighted Hilbert spaces defined in (2.6), with lattice operations:
(i) infimum: $\ell^2(p) \wedge \ell^2(q) = \ell^2(p \wedge q)$, with $(p \wedge q)_n = \min(p_n, q_n)$,
(ii) supremum: $\ell^2(p) \vee \ell^2(q) = \ell^2(p \vee q)$, with $(p \vee q)_n = \max(p_n, q_n)$,
(iii) duality: $\ell^2(r)^{\times} = \ell^2(\overline{r})$, with $\overline{r}_n = 1/r_n$.

As a matter of fact, the norms above are equivalent to the projective and inductive norms, respectively. Then, it is easy to show that the lattice is generating in .

###### 3.2.2. Köthe Perfect Sequence Spaces

We have already noticed that the complete lattice $\mathcal{F}(\omega, \#)$ consists precisely of all Köthe perfect sequence spaces. Indeed, these are defined as the assaying subspaces corresponding to the compatibility $\#$, which is called *$\alpha$-duality* [15]. Among these, there is an interesting class, the so-called spaces $\ell_\Phi$ associated to symmetric norming functions.

*Definition 3.1. * A real-valued function $\Phi$ defined on the space $\varphi$ of finite sequences is said to be a *norming function* if
(n1) $\Phi(x) > 0$ for every sequence $x \in \varphi, \; x \neq 0$,
(n2) $\Phi(\alpha x) = |\alpha|\, \Phi(x)$, for every $x \in \varphi$ and $\alpha \in \mathbb{C}$,
(n3) $\Phi(x + y) \leq \Phi(x) + \Phi(y)$, for all $x, y \in \varphi$,
(n4) $\Phi(1, 0, 0, \ldots) = 1$.
A norming function is *symmetric* if
(n5) $\Phi(x_1, x_2, \ldots, x_n, 0, 0, \ldots) = \Phi(|x_{j_1}|, |x_{j_2}|, \ldots, |x_{j_n}|, 0, 0, \ldots)$, where $(j_1, j_2, \ldots, j_n)$ is an arbitrary permutation of $(1, 2, \ldots, n)$.

From property (n5), it is clear that a symmetric norming function is entirely determined by its values on the set of finite, positive, nonincreasing sequences. Hence, from conditions (n2), (n3), and (n4), we deduce that
$$\Phi_\infty(x) \leq \Phi(x) \leq \Phi_1(x), \quad x \in \varphi,$$
where $\Phi_\infty(x) = \max_j |x_j|$ and $\Phi_1(x) = \sum_j |x_j|$.

To every symmetric norming function $\Phi$, one can associate a Banach space as follows. Given a sequence $x = (x_n)$, define its $n$th section as $x^{(n)} = (x_1, x_2, \ldots, x_n, 0, 0, \ldots)$. Then the sequence $\big(\Phi(x^{(n)})\big)_n$ is nondecreasing, so that one can define
$$\ell_\Phi = \Big\{ x \in \omega : \sup_n \Phi(x^{(n)}) < \infty \Big\}$$
and then extend the norming function to the whole of $\ell_\Phi$ by putting $\Phi(x) = \lim_{n \to \infty} \Phi(x^{(n)})$. This relation defines a norm on $\ell_\Phi$, for which it is complete, hence, a Banach space. In other words, we can also say that $\ell_\Phi$ is the natural domain of definition of the extended norming function $\Phi$. Clearly, one has $\ell_{\Phi_1} = \ell^1$ and $\Phi_1(x) = \|x\|_1$. Similarly, $\ell_{\Phi_\infty} = \ell^\infty$, where $\Phi_\infty(x) = \|x\|_\infty$. Thus every space $\ell_\Phi$ contains $\ell^1$ and is contained in $\ell^\infty$.
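A familiar concrete instance is the $\ell^2$ norm, restricted to finite sequences. The sketch below (our own illustration; the names `phi_1`, `phi_2`, `phi_inf` are ours) checks the sandwich bounds $\Phi_\infty \leq \Phi_2 \leq \Phi_1$ and the permutation invariance (n5) on random finite sequences.

```python
import math
import random

# Phi_1 (sum of moduli), Phi_2 (l^2 norm) and Phi_inf (max modulus) are all
# symmetric norming functions on finite sequences; (n5) and the bounds
# Phi_inf <= Phi <= Phi_1 are checked numerically below for Phi = Phi_2.
phi_1   = lambda x: sum(abs(t) for t in x)
phi_2   = lambda x: math.sqrt(sum(abs(t) ** 2 for t in x))
phi_inf = lambda x: max(abs(t) for t in x)

random.seed(0)
for _ in range(100):
    x = [random.uniform(-5, 5) for _ in range(random.randint(1, 10))]
    assert phi_inf(x) <= phi_2(x) + 1e-12 <= phi_1(x) + 2e-12
    shuffled = random.sample(x, len(x))          # arbitrary permutation
    assert abs(phi_2(shuffled) - phi_2(x)) < 1e-9   # property (n5)
print("Phi_inf <= Phi_2 <= Phi_1 and (n5) verified")
```

The normalization (n4) also holds: $\Phi_2(1, 0, 0, \ldots) = 1$.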

In addition, the set of Banach spaces $\{\ell_\Phi\}$ constitutes a lattice. Given two symmetric norming functions $\Phi$ and $\Psi$, one defines their infimum and supremum, exactly as for the general case:
(i) $\Phi \wedge \Psi$, which defines on the space $\ell_{\Phi \wedge \Psi} = \ell_\Phi \cap \ell_\Psi$ a norm equivalent to the projective norm,
(ii) $\Phi \vee \Psi$, which defines on the space $\ell_{\Phi \vee \Psi} = \ell_\Phi + \ell_\Psi$ a norm equivalent to the inductive norm.

It remains to analyze the relationship of the spaces $\ell_\Phi$ with the PIP-space structure of $\omega$. Define, for any finite, positive, nonincreasing sequence $x$,
$$\Phi'(x) = \sup_{\Phi(y) \leq 1} \sum_j x_j\, y_j,$$
the supremum being taken over all finite, positive, nonincreasing sequences $y$. The function $\Phi'$ thus defined is a symmetric norming function; hence, it can be extended to the corresponding Banach space $\ell_{\Phi'}$. The function $\Phi'$ is said to be *conjugate* to $\Phi$ and the space $\ell_{\Phi'}$ is the conjugate dual of $\ell_\Phi$ with respect to the partial inner product, that is, $\ell_{\Phi'} = (\ell_\Phi)^{\#}$. Clearly one has $\Phi'' = \Phi$; hence, $(\ell_\Phi)^{\#\#} = \ell_\Phi$.

In addition, it is easy to show that $\ell_{\Phi \wedge \Psi} = \ell_\Phi \cap \ell_\Psi$ and $\ell_{\Phi \vee \Psi} = \ell_\Phi + \ell_\Psi$. In other words, one gets the following result.

Proposition 3.2. *The family of Banach spaces $\{\ell_\Phi\}$, where $\Phi$ runs over the symmetric norming functions, is an involutive sublattice of the lattice $\mathcal{F}(\omega, \#)$ and a LBS.*

Actually, since every $\ell_\Phi$ satisfies the inclusions $\ell^1 \subseteq \ell_\Phi \subseteq \ell^\infty$, the family $\{\ell_\Phi\}$ is also an involutive sublattice of the lattice obtained by restricting to $\ell^\infty$ the PIP-space structure of $\omega$.

These spaces may be generalized further to what is called the theory of Banach ideals of sequences. See [14, Section ] for more details.

##### 3.3. Spaces of Locally Integrable Functions

###### 3.3.1. A LHS of Weighted $L^2$ Spaces

In $V = L^1_{\rm loc}(\mathbb{R}, dx)$, we may take the lattice $\mathcal{I} = \{L^2(r)\}$ of the weighted Hilbert spaces defined in (2.7), with
(i) infimum: $L^2(p) \wedge L^2(q) = L^2(p \wedge q)$, with $(p \wedge q)(x) = \min(p(x), q(x))$,
(ii) supremum: $L^2(p) \vee L^2(q) = L^2(p \vee q)$, with $(p \vee q)(x) = \max(p(x), q(x))$,
(iii) duality: $L^2(r)^{\times} = L^2(\overline{r})$, with $\overline{r} = 1/r$.

Here too, these norms are equivalent to the projective and inductive norms, respectively.

###### 3.3.2. The Spaces $L^p$

The spaces $\{L^p(\mathbb{R}, dx), \; 1 \leq p \leq \infty\}$ do not constitute a scale, since one has only the inclusions $L^p \cap L^q \subset L^s \subset L^p + L^q$, for $p < s < q$. Thus one has to consider the lattice they generate, with the following lattice operations:
(i) $L^p \wedge L^q = L^p \cap L^q$, with projective norm,
(ii) $L^p \vee L^q = L^p + L^q$, with inductive norm.

For $1 < p, q < \infty$, both spaces $L^p \cap L^q$ and $L^p + L^q$ are reflexive Banach spaces and their conjugate duals are, respectively, $(L^p \cap L^q)^{\times} = L^{\overline{p}} + L^{\overline{q}}$ and $(L^p + L^q)^{\times} = L^{\overline{p}} \cap L^{\overline{q}}$.

It is convenient to introduce the following unified notation:
$$L^{(p,q)} = \begin{cases} L^p \cap L^q, & \text{if } p \geq q, \\ L^p + L^q, & \text{if } p \leq q. \end{cases}$$
Then, for $1 < p, q < \infty$, $L^{(p,q)}$ is a reflexive Banach space, with conjugate dual $(L^{(p,q)})^{\times} = L^{(\overline{p}, \overline{q})}$.

Next, if we represent the space $L^{(p,q)}$ by the point of coordinates $(1/p, 1/q)$, we may associate all the spaces $L^{(p,q)}$ ($1 \leq p, q \leq \infty$) in a one-to-one fashion with the points of a unit square $J$ (see Figure 1). Thus, in this picture, the spaces $L^p$ are on the main diagonal, intersections above it and sums below.

The space $L^{(p,q)}$ is contained in $L^{(p',q')}$ if $(p,q)$ is on the left and/or above $(p',q')$. Thus the smallest space, $L^{(\infty,1)} = L^\infty \cap L^1$, corresponds to the upper-left corner, while the largest one, $L^{(1,\infty)} = L^1 + L^\infty$, corresponds to the lower-right corner. Inside the square, duality corresponds to (geometrical) symmetry with respect to the center $(1/2, 1/2)$ of the square, which represents the space $L^2$. The ordering of the spaces corresponds to the following rule:
$$L^{(p,q)} \subseteq L^{(p',q')} \;\Longleftrightarrow\; p \geq p' \text{ and } q \leq q'.$$
With respect to this ordering, $J$ is an involutive lattice with the operations
$$(p,q) \wedge (p',q') = (p \vee p', \, q \wedge q'), \qquad (p,q) \vee (p',q') = (p \wedge p', \, q \vee q'), \qquad \overline{(p,q)} = (\overline{p}, \overline{q}),$$
where $1/p + 1/\overline{p} = 1$. It is remarkable that the lattice generated by the chain $\{L^p\}$ is obtained at the first “generation”: further intersections and sums of the spaces $L^{(p,q)}$ yield no new spaces, both as sets and as topological vector spaces.

###### 3.3.3. Mixed-Norm Lebesgue Spaces

An interesting class of function spaces, close relatives to the Lebesgue spaces, consists of the so-called spaces with mixed norm. Let $(X, \mu)$ and $(Y, \nu)$ be two $\sigma$-finite measure spaces and $1 \leq p, q \leq \infty$ (in the general case, one considers $n$ such spaces and $n$-tuples $(p_1, \ldots, p_n)$). Then, a function $f(x, y)$ measurable on the product space $X \times Y$ is said to belong to $L^{(p,q)}(X \times Y)$ if the number obtained by taking successively the $p$-norm in $x$ and the $q$-norm in $y$, in that order, is finite (exchanging the order of the two norms leads in general to a different space). If $p, q < \infty$, the norm reads
$$\|f\|_{(p,q)} = \left( \int_Y \left( \int_X |f(x,y)|^p \, d\mu(x) \right)^{q/p} d\nu(y) \right)^{1/q}.$$
The analogous norm for $p = \infty$ or $q = \infty$ is obvious. For $p = q$, one gets the usual space $L^p(X \times Y)$.

These spaces enjoy a number of properties similar to those of the $L^p$ spaces: (i) each space $L^{(p,q)}$ is a Banach space and it is reflexive if and only if $1 < p, q < \infty$; (ii) the conjugate dual of $L^{(p,q)}$ is $L^{(\overline{p},\overline{q})}$, where, as usual, $1/p + 1/\overline{p} = 1$ and $1/q + 1/\overline{q} = 1$; thus the topological conjugate dual coincides with the Köthe dual; (iii) the mixed-norm spaces satisfy a generalized Hölder inequality and have nice interpolation properties.

The case $X = Y = \mathbb{R}$ with Lebesgue measure is the important one for signal processing [20, Section ]. More generally, one can add a weight function and obtain the spaces $L^{p,q}_m$ (we switch to a notation more suitable for the applications):
$$L^{p,q}_m(\mathbb{R}^2) = \left\{ F \text{ measurable} : \left( \int_{\mathbb{R}} \left( \int_{\mathbb{R}} |F(x, \omega)|^p \, m(x, \omega)^p \, dx \right)^{q/p} d\omega \right)^{1/q} < \infty \right\}.$$
Here the weight function $m$ is a nonnegative locally integrable function on $\mathbb{R}^2$, assumed to be $v$-moderate, that is, $m(z_1 + z_2) \leq C\, v(z_1)\, m(z_2)$, for all $z_1, z_2 \in \mathbb{R}^2$, with $v$ a submultiplicative weight function, that is, $v(z_1 + z_2) \leq v(z_1)\, v(z_2)$, for all $z_1, z_2 \in \mathbb{R}^2$. The typical weights are of polynomial growth: $v_s(z) = (1 + |z|)^s, \; s \geq 0$.

The space $L^{p,q}_m$ is a Banach space for the norm $\|\cdot\|_{L^{p,q}_m}$. The duality property is, as expected, $(L^{p,q}_m)^{\times} = L^{\overline{p},\overline{q}}_{1/m}$. Of course, things simplify when $p = q$: $L^{p,p}_m = L^p_m$, a weighted $L^p$ space.

Concerning lattice properties of the family of spaces $\{L^{p,q}_m\}$, we cannot expect more than for the $L^p$ spaces. Two such spaces are never comparable, even for the same weight $m$, so one has to take the lattice generated by intersection and duality.

A different type of mixed-norm space is obtained if one takes $X = Y = \mathbb{Z}$, with the counting measure. Thus one gets the space $\ell^{p,q}$, which consists of all sequences $c = (c_{m,n})$ for which the following norm is finite:
$$\|c\|_{\ell^{p,q}} = \left( \sum_n \left( \sum_m |c_{m,n}|^p \right)^{q/p} \right)^{1/q}.$$
Contrary to the continuous case, here we do have inclusion relations: if $p_1 \leq p_2$ and $q_1 \leq q_2$, then $\ell^{p_1,q_1} \subseteq \ell^{p_2,q_2}$.
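For a finitely supported double sequence, the mixed norm can be computed directly. The sketch below (the helper `mixed_norm` is our own) shows that the order of the two norms matters, and exhibits the monotonicity in $(p, q)$ that underlies the inclusion relations.

```python
# Discrete mixed norm: inner l^p over the first index m, outer l^q over n.
def mixed_norm(c, p, q):
    cols = range(len(c[0]))
    inner = [sum(abs(c[m][n]) ** p for m in range(len(c))) ** (1.0 / p)
             for n in cols]
    return sum(v ** q for v in inner) ** (1.0 / q)

c = [[1.0, 2.0, 0.5],
     [0.0, 1.0, 3.0]]

# Exchanging the order of the two norms gives a different value in general:
print(mixed_norm(c, 1, 2), mixed_norm(c, 2, 1))

# Norms decrease as p or q increases; for infinite sequences this is what
# gives the inclusions l^{p1,q1} within l^{p2,q2} for p1 <= p2, q1 <= q2.
assert mixed_norm(c, 2, 2) <= mixed_norm(c, 1, 2) <= mixed_norm(c, 1, 1)
```

For $p = q$ the computation collapses to the usual $\ell^p$ norm of the doubly indexed sequence, in line with the continuous case.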

Discrete mixed-norm spaces have been used extensively in functional analysis and signal processing. For instance, they are key to the proof that certain operators are bounded between two given function spaces, such as modulation spaces (see below) or $\ell^p$ spaces. In general, a mixed-norm space will prove useful whenever one has a signal consisting of sequences labeled by two indices that play different roles. An obvious example is time-frequency or time-scale analysis: a Gabor or wavelet basis (or frame) is written as $\{\psi_{m,n}\}$, where $m$ indexes the scale or frequency and $n$ the time. More generally, this applies whenever signals are expanded with respect to a dictionary with two indices.

###### 3.3.4. Köthe Function Spaces

The mixed-norm Lebesgue spaces are special cases of a very general class, the so-called *Köthe function spaces*. These have been introduced (and given that name) by Dieudonné [16] and further studied by Luxemburg-Zaanen [21]. The procedure here is entirely parallel to that used in Section 3.2.2 above for introducing the sequence spaces $\ell_\Phi$.

Let $(X, \mu)$ be a $\sigma$-finite measure space and $\mathcal{M}^+$ the set of all measurable, nonnegative functions on $X$, where two functions are identified if they differ at most on a $\mu$-null set. A *function norm* is a mapping $\rho : \mathcal{M}^+ \to [0, \infty]$ such that (i) $0 \leq \rho(f) \leq \infty$, and $\rho(f) = 0$ if and only if $f = 0$; (ii) $\rho(f_1 + f_2) \leq \rho(f_1) + \rho(f_2)$; (iii) $\rho(\alpha f) = \alpha\, \rho(f)$, for every $\alpha \geq 0$; (iv) $f_1 \leq f_2$ implies $\rho(f_1) \leq \rho(f_2)$.

A function norm $\rho$ is said to have the *Fatou property* if $f_n \in \mathcal{M}^+$ and $f_n \uparrow f$ pointwise implies that $\rho(f_n) \uparrow \rho(f)$.

Given a function norm $\rho$, it can be extended to all complex measurable functions $f$ on $X$ by defining $\rho(f) := \rho(|f|)$. Denote by $L_\rho$ the set of all measurable $f$ such that $\rho(f) < \infty$. With the norm $\|f\|_\rho := \rho(f)$, $L_\rho$ is a normed space and a subspace of the vector space of all measurable, $\mu$-a.e. finite, functions on $X$. Furthermore, if $\rho$ has the Fatou property, $L_\rho$ is complete, that is, a Banach space.

A function norm $\rho$ is said to be *saturated* if, for any measurable set $Y \subseteq X$ of positive measure, there exists a measurable subset $Y_0 \subseteq Y$ such that $\mu(Y_0) > 0$ and $\rho(\chi_{Y_0}) < \infty$ ($\chi_{Y_0}$ is the characteristic function of $Y_0$).

Let $\rho$ be a saturated function norm with the Fatou property. Define
$$\rho'(g) := \sup \left\{ \int_X f g \, d\mu : f \in \mathcal{M}^+, \; \rho(f) \leq 1 \right\}, \quad g \in \mathcal{M}^+.$$
Then $\rho'$ is a saturated function norm with the Fatou property and $\rho'' = \rho$. Hence, $L_{\rho'}$ is a Banach space. Moreover, one has also the Hölder-type inequality
$$\int_X |f g| \, d\mu \leq \|f\|_\rho \, \|g\|_{\rho'}.$$
For each $\rho$ as above, $L_\rho$ is a Banach space and $(L_\rho)^{\#} = L_{\rho'}$, that is, each $L_\rho$ is assaying. The pair $(L_\rho, L_{\rho'})$ is actually a dual pair, although, in general, $L_{\rho'}$ is not the whole conjugate dual of $L_\rho$. The space $L_{\rho'}$ is called the *Köthe dual* or *$\alpha$-dual* of $L_\rho$.

However, $L_{\rho'}$ is in general only a closed subspace of the Banach conjugate dual $L_\rho^{\times}$; thus, the Mackey topology $\tau(L_\rho, L_{\rho'})$ is coarser than the $\rho$-norm topology, which is $\tau(L_\rho, L_\rho^{\times})$. This defect can be remedied by further restricting $\rho$. A function norm $\rho$ is called *absolutely continuous* if $\rho(f_n) \downarrow 0$ for every sequence $(f_n) \subset \mathcal{M}^+$ such that $f_n \downarrow 0$ pointwise a.e. on $X$. For instance, the Lebesgue $L^p$-norm is absolutely continuous for $1 \leq p < \infty$, but the $L^\infty$-norm is *not*! Also, even if $\rho$ is absolutely continuous, $\rho'$ need not be. Yet, this is the appropriate concept, in view of the following results: (i) $L_\rho^{\times} = L_{\rho'}$ if and only if $\rho$ is absolutely continuous; (ii) $L_\rho$ is reflexive if and only if $\rho$ *and* $\rho'$ are absolutely continuous and $\rho$ has the Fatou property.
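The standard counterexample behind the $L^\infty$ claim is $f_n = \chi_{[0, 1/n]}$, which decreases to $0$ a.e. while its sup-norm stays equal to $1$. The discretized sketch below (our own illustration, sampling on a uniform grid of $[0,1]$) makes this visible.

```python
# f_n = indicator of [0, 1/n], sampled on a grid of [0, 1]:
# the L^1 norm tends to 0 (absolute continuity), the L^inf norm stays 1.
N = 10_000                              # grid points on [0, 1]
xs = [(k + 0.5) / N for k in range(N)]

def f(n):
    return [1.0 if x <= 1.0 / n else 0.0 for x in xs]

for n in (1, 10, 100, 1000):
    fn = f(n)
    l1 = sum(fn) / N                    # approx 1/n, tends to 0
    linf = max(fn)                      # equals 1 for every n
    print(f"n={n:5d}  L1 ~ {l1:.4f}  Linf = {linf}")
```

Thus $\rho(f_n) \downarrow 0$ holds for the $L^1$-norm but fails for the $L^\infty$-norm, which is exactly the failure of absolute continuity.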

Let $\rho$ be a saturated, absolutely continuous function norm on $\mathcal{M}^+$, with the Fatou property and such that $\rho'$ is also absolutely continuous. Then $(L_\rho, L_{\rho'})$ is a reflexive dual pair of Banach spaces. In addition, the set of all function norms with these properties is an involutive lattice with respect to the following partial order: $\rho_1 \preceq \rho_2$ if and only if $\rho_1(f) \leq \rho_2(f)$ for every $f \in \mathcal{M}^+$. The lattice operations are the following:
(i) infimum: $(\rho_1 \wedge \rho_2)(f) := \rho_1(f) + \rho_2(f)$,
(ii) supremum: $(\rho_1 \vee \rho_2)(f) := \inf \{ \rho_1(f_1) + \rho_2(f_2) : f = f_1 + f_2, \; f_1, f_2 \in \mathcal{M}^+ \}$,
(iii) involution: $\rho \leftrightarrow \rho'$.

For the corresponding Banach spaces, we have the relations
$$L_{\rho_1 \wedge \rho_2} = L_{\rho_1} \cap L_{\rho_2}, \qquad L_{\rho_1 \vee \rho_2} = L_{\rho_1} + L_{\rho_2}.$$

Consider now the space $V = L^1_{\rm loc}(\mathbb{R}, dx)$, with the compatibility and partial inner product defined in Section 2.2.2, so that $V^{\#} = L^\infty_c(\mathbb{R})$. Then the construction outlined above provides $V$ with the structure of a LBS. Indeed, one has the following result.

Proposition 3.3. *Let $\mathcal{N}$ be the set of saturated, absolutely continuous function norms $\rho$ on $\mathcal{M}^+$, with the Fatou property and such that $\rho'$ is also absolutely continuous. Let $\mathcal{I}$ denote the set $\{L_\rho : \rho \in \mathcal{N}\}$. Then $(V, \mathcal{I})$ is a LBS, with the lattice operations defined above.*

More general situations may be considered, for which we refer to [14, Section ].

#### 4. Comparing PIP-Spaces

The definition of LBS/LHS given in Section 3 leads to the proper notion of comparison between two linear compatibilities on the same vector space $V$. Namely, we shall say that the compatibility $\#_1$ is *finer* than $\#_2$, or that $\#_2$ is *coarser* than $\#_1$, if $\mathcal{F}(V, \#_2)$ is an involutive cofinal sublattice of $\mathcal{F}(V, \#_1)$ (given a partially ordered set $F$, a subset $K$ is *cofinal* to $F$ if, for any element $x \in F$, there is an element $k \in K$ such that $x \leq k$).

Now, suppose that a linear compatibility $\#$ is given on $V$. Then, every involutive cofinal sublattice of $\mathcal{F}(V, \#)$ defines a coarser PIP-space, and *vice versa*. Thus coarsening is always possible, and will ultimately lead to a minimal PIP-space, consisting of $V^{\#}$ and $V$ only, that is, the situation of distribution spaces. However, the operation of refining is not always possible; in particular there is no canonical solution, *a fortiori* no unique maximal solution. There are exceptions, however, for instance, when one is given explicitly a larger set of assaying subspaces that also form, or generate, a larger involutive sublattice. To give an example, the weighted $L^2$ spaces of Section 3.3.1 form an involutive sublattice of the involutive lattice of Köthe function spaces of Section 3.3.4; thus, the latter is a genuine refinement of the original LHS.

In the case of a LHS, refining is possible, with infinitely many solutions, by use of interpolation methods or the spectral theorem for self-adjoint operators, which are essentially equivalent in this case. In particular, one may always refine a *discrete* scale of Hilbert spaces into a (nonunique) *continuous* one. Indeed, for the scale described in Section 1, Example (ii), one has, by definition, $\mathcal{H}_n = D(A^n)$, the domain of $A^n$, equipped with the graph norm $\|f\|_n = \|A^n f\|$, for $n \in \mathbb{N}$. Then, for each $\alpha \geq 0$, one may define
$$\mathcal{H}_\alpha = D(A^\alpha), \qquad \langle f | g \rangle_\alpha = \int_1^\infty \lambda^{2\alpha} \, d\langle E(\lambda) f | g \rangle,$$
where $\{E(\lambda)\}$ is the spectral family of $A$. With the inner product $\langle \cdot | \cdot \rangle_\alpha$, $\mathcal{H}_\alpha$ is a Hilbert space and one has the continuous embeddings
$$\mathcal{H}_\beta \subset \mathcal{H}_\alpha, \quad 0 \leq \alpha \leq \beta.$$

One may go further, as follows. Let $\phi$ be any continuous, positive function on $[1, \infty)$ such that $\phi(\lambda)$ is unbounded for $\lambda \to \infty$, but increases slower than any power $\lambda^\varepsilon, \; \varepsilon > 0$. An example is $\phi(\lambda) = \ln \lambda$. Then $\phi(A)$ is a well-defined self-adjoint operator, with domain
$$D(\phi(A)) = \Big\{ f \in \mathcal{H} : \int_1^\infty \phi(\lambda)^2 \, d\langle E(\lambda) f | f \rangle < \infty \Big\}.$$
With the corresponding inner product, $D(\phi(A))$ becomes a Hilbert space $\mathcal{H}_\phi$. For every $\alpha \geq 0$ and every $\varepsilon > 0$, one has, with proper inclusions and continuous embeddings,
$$\mathcal{H}_{\alpha + \varepsilon} \subset D(A^\alpha \phi(A)) \subset \mathcal{H}_\alpha.$$
This can be continued as far as one wants, with the result that every scale of Hilbert spaces possesses infinitely many proper refinements which are themselves chains of Hilbert spaces [14, Chapter ].
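A toy realization of this refinement (our own construction, not from the paper) takes the diagonal operator $A e_n = n\, e_n$ on $\ell^2$, so that $\|x\|_\alpha^2 = \sum_n n^{2\alpha} |x_n|^2$, and $\phi(\lambda) = \ln \lambda$ yields the intermediate norm $\|x\|_{\alpha,\phi}^2 = \sum_n n^{2\alpha} (1 + \ln n)^2 |x_n|^2$. The sketch checks the continuous embeddings on a sample vector.

```python
import math

# Diagonal model: A e_n = n e_n on l^2, so H_alpha has the weighted norm
#   ||x||_a^2 = sum n^{2a} |x_n|^2,
# and the log-refined norm (phi(lambda) = ln(lambda), shifted by 1 so that
# the weight is >= 1) sits strictly between two levels of the scale.
def norm_alpha(x, a):
    return math.sqrt(sum(n ** (2 * a) * x[n - 1] ** 2
                         for n in range(1, len(x) + 1)))

def norm_alpha_log(x, a):
    return math.sqrt(sum(n ** (2 * a) * (1 + math.log(n)) ** 2 * x[n - 1] ** 2
                         for n in range(1, len(x) + 1)))

x = [1.0 / n for n in range(1, 200)]
a, eps = 0.5, 0.25

# Embedding H_{a,phi} -> H_a: the weight (1 + ln n) is >= 1.
assert norm_alpha(x, a) <= norm_alpha_log(x, a)

# Embedding H_{a+eps} -> H_{a,phi}: (1 + ln n) <= C * n**eps, so the
# refined norm is dominated by a constant times the (a+eps)-norm.
C = max((1 + math.log(n)) / n ** eps for n in range(1, 200))
assert norm_alpha_log(x, a) <= C * norm_alpha(x, a + eps)
print("refined norm lies between the H_a and H_{a+eps} norms")
```

Since $(1 + \ln n)$ grows slower than any power $n^\varepsilon$, the refined space is a proper intermediate space for every $\varepsilon > 0$, which is the point of the construction.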

Another type of refinement consists in refining a RHS $\Phi \subset \mathcal{H} \subset \Phi^{\times}$, by inserting a number of intermediate spaces, called *interspaces*, namely, spaces $\mathcal{E}$ such that $\Phi \hookrightarrow \mathcal{E} \hookrightarrow \Phi^{\times}$ (which implies that the conjugate dual $\mathcal{E}^{\times}$ is also an interspace). Upon some additional conditions, the most important of which being that $\Phi$ be dense in $\mathcal{E} \cap \mathcal{F}$, with its projective topology, for any pair $\mathcal{E}, \mathcal{F}$ of interspaces, one obtains in that way a proper refinement of the original RHS. With this construction, which goes under the name of *multiplication framework*, one succeeds, for instance, in defining a valid (partial) multiplication between distributions. A thorough analysis may be found in [14, Section ].

#### 5. Operators on PIP-Spaces

##### 5.1. General Definitions

As already mentioned, the basic idea of (indexed) PIP-spaces is that vectors should not be considered individually, but only in terms of the subspaces $V_r$ ($r \in I$ or $r \in F$), the building blocks of the structure; see (3.1). Correspondingly, an operator on a PIP-space should be defined in terms of assaying subspaces only, with the proviso that only bounded operators between Hilbert or Banach spaces are allowed. Thus an operator is a *coherent collection* of bounded operators. More precisely, one has the following.

*Definition 5.1. * Given a LHS or LBS $V_I = \{V_r, \; r \in I\}$, an *operator* on $V_I$ is a map $A : \mathcal{D}(A) \to V$, such that
(i) $\mathcal{D}(A) = \bigcup_{q \in \mathsf{d}(A)} V_q$, where $\mathsf{d}(A)$ is a nonempty subset of $I$,
(ii) for every $q \in \mathsf{d}(A)$, there exists a $p \in I$ such that the restriction of $A$ to $V_q$ is linear and continuous into $V_p$ (we denote this restriction by $A_{pq}$),
(iii) $A$ has no proper extension satisfying (i) and (ii).

The linear bounded operator $A_{pq} : V_q \to V_p$ is called a *representative* of $A$. In terms of the latter, the operator $A$ may be characterized by the set $\mathsf{j}(A) = \{(q, p) \in I \times I : A_{pq} \text{ exists}\}$. Thus the operator $A$ may be identified with the collection of its representatives:
$$A \simeq \{ A_{pq} : V_q \to V_p : (q, p) \in \mathsf{j}(A) \}.$$
By condition (ii), the set $\mathsf{d}(A)$ is obtained by projecting $\mathsf{j}(A)$ on the “first coordinate” axis. The projection $\mathsf{i}(A)$ of $\mathsf{j}(A)$ on the “second coordinate” axis plays, in a sense, the role of the range of $A$. More precisely,
$$\mathsf{d}(A) = \{ q \in I : \exists \, p \text{ such that } A_{pq} \text{ exists} \}, \qquad \mathsf{i}(A) = \{ p \in I : \exists \, q \text{ such that } A_{pq} \text{ exists} \}.$$
The following properties are immediate (see Figure 2):
(i) $\mathsf{d}(A)$ is an initial subset of $I$: if $q \in \mathsf{d}(A)$ and $q' \leq q$, then $q' \in \mathsf{d}(A)$, and $A_{pq'} = A_{pq} E_{qq'}$, where $E_{qq'}$ is a representative of the unit operator (this is what we mean by a “coherent” collection),
(ii) $\mathsf{i}(A)$ is a final subset of $I$: if $p \in \mathsf{i}(A)$ and $p' \geq p$, then $p' \in \mathsf{i}(A)$ and $A_{p'q} = E_{p'p} A_{pq}$,
(iii) $\mathsf{j}(A) \subseteq \mathsf{d}(A) \times \mathsf{i}(A)$, with strict inclusion in general.

We denote by $\mathrm{Op}(V_I)$ the set of all operators on $V_I$. Of course, a similar definition may be given for operators between two LHSs or LBSs.

Since $V^{\#}$ is dense in $V_r$ for every $r \in I$, an operator may be identified with a separately continuous sesquilinear form on $V^{\#} \times V^{\#}$. Indeed, the restriction of any representative to $V^{\#} \times V^{\#}$ is such a form, and all these restrictions coincide. Equivalently, an operator may be identified with a continuous linear map from $V^{\#}$ into $V$ (continuity with respect to the respective Mackey topologies).

But the idea behind the notion of operator is to keep also the *algebraic operations* on operators; namely, we define the following operations. (i) *Adjoint*: every $A \in \mathrm{Op}(V_I)$ has a unique adjoint $A^{\times} \in \mathrm{Op}(V_I)$, defined by the relation
$\langle A^{\times}x \mid y \rangle = \overline{\langle Ay \mid x \rangle}$, for $y \in V_r$, $x \in V_{\bar{s}}$, $(r, s) \in \mathsf{j}(A)$,
that is, $(A^{\times})_{\bar{r}\bar{s}} = (A_{sr})^{*}$ (usual Hilbert/Banach space adjoint). It follows that $A^{\times\times} = A$ for every $A \in \mathrm{Op}(V_I)$: no extension is allowed, by the maximality condition (iii) of Definition 5.1. (ii) *Partial Multiplication*: the product $AB$ is defined if and only if there is a $t \in \mathsf{i}(B) \cap \mathsf{d}(A)$, that is, if and only if there is a continuous factorization through some $V_t$:
$V_r \xrightarrow{\;B\;} V_t \xrightarrow{\;A\;} V_s$, that is, $(AB)_{sr} = A_{st}B_{tr}$.

It is worth noting that, for a LHS/LBS, the domain $\mathcal{D}(A)$ is always a vector subspace of $V$ (this is not true for a general PIP-space). Therefore, $\mathrm{Op}(V_I)$ is a vector space and a *partial *-algebra* [22].

The concept of PIP-space operator is very simple, yet it is a far-reaching generalization of bounded operators. Indeed, it allows one to treat on the same footing all kinds of operators, from bounded ones to very singular ones. By this, we mean the following, loosely speaking. Let $V_0 = \mathcal{H}_0$ be the central Hilbert space. Three cases may arise: (i) if $A_{00}$ exists, then $A$ corresponds to a bounded operator on $\mathcal{H}_0$; (ii) if $A_{00}$ does not exist, but only $A_{0r}$, with $V_r \subset \mathcal{H}_0$, then $A$ corresponds to an unbounded operator, with domain containing $V_r$; (iii) if no $A_{0r}$ exists, but only $A_{sr}$, with $V_r \subset \mathcal{H}_0 \subset V_s$, then $A$ corresponds to a singular operator, with Hilbert space domain possibly reduced to $\{0\}$.
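These three cases can be illustrated numerically. The sketch below (our own toy example, not taken from the paper) truncates the diagonal operator $(Ax)_k = k\,x_k$ on sequences: its operator norm on $\ell^2$ grows without bound with the truncation size, so no bounded representative $A_{00}$ exists, yet it acts isometrically from the weighted space with weight $w_k = k^2$ into $\ell^2$, so a representative of type $A_{0r}$ does exist, as in case (ii).

```python
import numpy as np

# Finite truncation (N terms) of the diagonal operator (A x)_k = k x_k.
# On l^2 the truncated norm is N, diverging as N grows: A is unbounded.
# From the weighted space l^2(w), ||x||_w^2 = sum_k k^2 |x_k|^2, into l^2
# the same operator is an isometry: its matrix in unweighted coordinates
# is A W^{-1/2} = diag(k) diag(1/k) = I, of norm 1 for every N.
N = 200
k = np.arange(1, N + 1)
A = np.diag(k.astype(float))

norm_l2 = np.linalg.norm(A, 2)              # operator norm on l^2: equals N
W_inv_half = np.diag(1.0 / k)
norm_weighted = np.linalg.norm(A @ W_inv_half, 2)   # norm as map l^2(w) -> l^2
```

The weight $w_k = k^2$ is our choice for illustration; any weight growing at least like $k^2$ would give a bounded representative of the same kind.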

##### 5.2. Special Classes of Operators on PIP-Spaces

Exactly as for Hilbert or Banach spaces, one may define various types of operators between PIP-spaces, in particular LBS/LHSs. We discuss briefly the most important classes, namely, regular operators, homomorphisms and isomorphisms, unitary operators, symmetric operators, and orthogonal projections. Further details may be found in the monograph [14].

###### 5.2.1. Regular and Totally Regular Operators

An operator $A$ on a nondegenerate PIP-space $V$ with positive-definite partial inner product, in particular a LBS/LHS, is called *regular* if $\mathsf{d}(A) = \mathsf{i}(A) = I$ or, equivalently, if $A$ maps $V^{\#}$ into $V^{\#}$ and $V$ into $V$ continuously for the respective Mackey topologies. This notion depends only on the pair $(V^{\#}, V)$, not on the particular compatibility. The set of all regular operators is denoted by $\mathrm{Reg}(V)$. Thus a regular operator may be multiplied both on the left and on the right by an arbitrary operator. Clearly, the set $\mathrm{Reg}(V)$ is a *-algebra and can often be identified with an O*-algebra [22, 23].

We give two examples. (1) If $V = \omega$, the space of all complex sequences, then $\mathrm{Op}(\omega)$ consists of arbitrary infinite matrices and $\mathrm{Reg}(\omega)$ of infinite matrices with finite rows and finite columns. (2) If $V = \mathcal{S}^{\times}$, the space of tempered distributions, then $\mathrm{Op}(\mathcal{S}^{\times})$ consists of arbitrary tempered kernels, while $\mathrm{Reg}(\mathcal{S}^{\times})$ contains those kernels that can be extended to $\mathcal{S}^{\times}$ and map $\mathcal{S}$ into itself. A nice example is the Fourier transform.
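Example (1) can be made concrete with banded matrices, the simplest matrices with finite rows and columns. The following sketch (our construction, not from the paper) checks that each output entry of such a matrix is a finite sum, hence defined on arbitrary sequences, and that the product of two banded matrices is again banded, with bandwidths adding, so the class is stable under multiplication, as a *-algebra should be.

```python
import numpy as np

# Banded matrices as a toy model of Reg(omega): finite rows and columns
# mean each entry (A x)_i = sum_j a_ij x_j is a finite sum, so A x is
# defined for every sequence x, and products stay in the class.

def bandwidth(M):
    """Largest |i - j| over the nonzero entries of M."""
    idx = np.argwhere(M != 0)
    return 0 if idx.size == 0 else int(np.max(np.abs(idx[:, 0] - idx[:, 1])))

rng = np.random.default_rng(0)
N = 30
A = np.triu(np.tril(rng.normal(size=(N, N)), 2), -2)   # bandwidth <= 2
B = np.triu(np.tril(rng.normal(size=(N, N)), 3), -3)   # bandwidth <= 3
bw_product = bandwidth(A @ B)                           # <= 2 + 3 = 5
```

By contrast, a matrix with even one full row already fails to act on all of $\omega$: the corresponding sum need not converge for an arbitrary sequence.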

An operator $A$ is called *totally regular* if $\mathsf{j}(A)$ contains the diagonal of $I \times I$, that is, $A_{rr}$ exists for every $r \in I$, or, equivalently, $A$ maps every $V_r$ into itself continuously.

###### 5.2.2. Homomorphisms

Let $V_I, Y_K$ be two LHSs or LBSs. An operator $A \in \mathrm{Op}(V_I, Y_K)$ is called a *homomorphism* if (i) the representatives of $A$ and of its adjoint $A^{\times}$ cover the whole of both index sets, that is, $\mathsf{d}(A) = I$ and $\mathsf{i}(A) = K$, and similarly for $A^{\times}$; (ii) $x \# y$ implies $Ax \# Ay$, that is, $A$ maps compatible vectors onto compatible vectors.

We denote by $\mathrm{Hom}(V_I, Y_K)$ the set of all homomorphisms from $V_I$ into $Y_K$. The following properties are easy to prove: (i) $A \in \mathrm{Hom}(V_I, Y_K)$ if and only if $A^{\times} \in \mathrm{Hom}(Y_K, V_I)$; (ii) if $A \in \mathrm{Hom}(V_I, Y_K)$, then $\mathsf{j}(A)$ contains the diagonal of $I \times K$ and $\mathsf{j}(A^{\times})$ contains the diagonal of $K \times I$.

The homomorphism $M \in \mathrm{Hom}(V, Y)$ is a *monomorphism* if $MA = MB$ implies $A = B$, for any two elements $A, B$ of $\mathrm{Hom}(W, V)$, where $W$ is any PIP-space. Typical examples of monomorphisms are the inclusion maps resulting from the restriction of a support. Take, for instance, $V = L^1_{\mathrm{loc}}(X, \mathrm{d}\mu)$, the space of locally integrable functions on a measure space $(X, \mu)$. Let $\Omega$ be a measurable subset of $X$ and $\Omega'$ its complement, both of nonzero measure, and construct the space $L^1_{\mathrm{loc}}(\Omega, \mathrm{d}\mu)$, which is a PIP-subspace of $L^1_{\mathrm{loc}}(X, \mathrm{d}\mu)$ (see Section 5.2.5). Given $f \in L^1_{\mathrm{loc}}(\Omega, \mathrm{d}\mu)$, define $f_{\mathrm{ext}} = \chi_{\Omega} f$, where $\chi_{\Omega}$ is the characteristic function of $\Omega$. Then we obtain an injection monomorphism $M_{\Omega} : L^1_{\mathrm{loc}}(\Omega, \mathrm{d}\mu) \to L^1_{\mathrm{loc}}(X, \mathrm{d}\mu)$, $M_{\Omega} f = f_{\mathrm{ext}}$.
If we consider the lattice of weighted Hilbert spaces in this PIP-space, then the correspondence between a weighted space on $\Omega$ and the weighted space on $X$ with the same weight is a bijection between the corresponding involutive lattices.

The homomorphism $A \in \mathrm{Hom}(V_I, Y_K)$ is an *isomorphism* if there exists a homomorphism $B \in \mathrm{Hom}(Y_K, V_I)$ such that $BA = 1_V$ and $AB = 1_Y$, where $1_V$ and $1_Y$ denote the identity operators on $V_I$ and $Y_K$, respectively.

For instance, the Fourier transform is an isomorphism of the Schwartz RHS $\mathcal{S} \subset L^2 \subset \mathcal{S}^{\times}$ onto itself and, similarly, of the Feichtinger triplet (6.16) onto itself. In both cases, the property extends to several scales or lattices interpolating between the two extreme spaces, for instance, the Hilbert chain of the Hermite representation of tempered distributions.

###### 5.2.3. Unitary Operators and Group Representations

The operator $U \in \mathrm{Op}(V_I, Y_K)$ is *unitary* if $U^{\times}U$ and $UU^{\times}$ are defined and $U^{\times}U = 1_V$, $UU^{\times} = 1_Y$. We emphasize that unitary operators need *not* be homomorphisms! In fact, unitarity is a rather weak property, and it is insufficient for group representations.

Thus a unitary representation of a group $G$ into a PIP-space $V_I$ is defined as a homomorphism of $G$ into the *unitary isomorphisms* of $V_I$. Given such a unitary representation $U$ of $G$ into $V_I$, where the latter has the central Hilbert space $V_0 = \mathcal{H}_0$, consider the representative $U_{00}(g)$ of $U(g)$ in $\mathcal{H}_0$. Then $U_{00}$ is a unitary representation of $G$ into $\mathcal{H}_0$, in the usual sense.

To give an example, let $V_I$ be the scale built on the powers of the operator (Hamiltonian) $H = -\Delta + V(r)$ on $L^2(\mathbb{R}^3)$, where $\Delta$ is the Laplacian on $\mathbb{R}^3$ and $V(r)$ is a (nice) rotation-invariant potential. The system admits SO(3) as symmetry group (the full symmetry group might be larger; for instance, the Coulomb potential admits SO(4) as symmetry group for its bound states), and the representation $U$ is the natural representation of SO(3) in $L^2(\mathbb{R}^3)$: $(U(\rho)f)(x) = f(\rho^{-1}x)$, $\rho \in \mathrm{SO}(3)$. Then $U$ extends to a unitary representation by totally regular isomorphisms of $V_I$. Angular momentum decompositions, corresponding to irreducible representations of SO(3), extend to $V_I$ as well. In addition, this is a good setting also for representations of the Lie algebra $\mathfrak{so}(3)$. Notice that the representation $U$ is totally regular here, but this need not be the case. For instance, if the potential is not rotation invariant, $U(\rho)$ will no longer be totally regular, although it is still an isomorphism.

###### 5.2.4. Symmetric Operators

An operator $A \in \mathrm{Op}(V_I)$ is *symmetric* if $A^{\times} = A$. Since $A^{\times\times} = A$, no extension is allowed, by the maximality condition. Thus symmetric operators behave essentially like bounded self-adjoint operators in a Hilbert space. Yet, they can be very singular, as indicated above, for a chain $V_r \subset \mathcal{H}_0 \subset V_{\bar{r}}$.
In this case, the question is whether a symmetric operator has a self-adjoint restriction to the central Hilbert space $\mathcal{H}_0$. In a Hilbert space context, an answer is given by the celebrated KLMN theorem (KLMN stands for Kato, Lax, Lions, Milgram, Nelson). Actually, this classical result already has a distinct PIP-space flavor. Thus it is not surprising that the KLMN theorem has a natural generalization to a LHS or a PIP-space with positive-definite partial inner product and central Hilbert space $\mathcal{H}_0$, and a quadratic form version as well [14, Section ].

An interesting application is a correct description of very singular Hamiltonians in quantum mechanics, typically with $\delta$ or $\delta'$ interactions. For instance, one can treat in this way the Kronig-Penney crystal model, which consists of a $\delta$ potential at each node of a lattice, in one, two, or three dimensions [24, 25].

###### 5.2.5. Orthogonal Projections, Bases, Frames

An operator $P$ on a nondegenerate PIP-space $V$, respectively, a LBS/LHS $V_I$, is an *orthogonal projection* if $P^2 = P$ and $P^{\times} = P$. It follows immediately from the definition that an orthogonal projection is totally regular, that is, $\mathsf{j}(P)$ contains the diagonal of $I \times I$, or, equivalently, that $P$ leaves every assaying subspace invariant. Equivalently, $P$ is an orthogonal projection if $P$ is an idempotent operator (that is, $P^2 = P$) such that $PV_r \subseteq V_r$ for every $r \in I$ and $\langle Px \mid y \rangle = \langle x \mid Py \rangle$ whenever $x \# y$. We denote by $\mathrm{Proj}(V)$ the set of all orthogonal projections in $V$ and similarly for $\mathrm{Proj}(V_I)$.

These projection operators enjoy several properties similar to those of Hilbert space projectors. Two of them are of special interest in the present context. (i) Given a nondegenerate PIP-space $V$, there is a natural notion of subspace, called *orthocomplemented*, which guarantees that such a subspace $W$ of $V$ is again a nondegenerate PIP-space with the induced compatibility relation and the restriction of the partial inner product. There are equivalent topological conditions, so that orthocomplemented subspaces are the proper PIP-subspaces [26]. Then the basic theorem about projections states that a subspace $W$ of $V$ is orthocomplemented if and only if $W$ is the range of an orthogonal projection $P \in \mathrm{Proj}(V)$, that is, $W = PV$. Then $V = W \oplus Z$, where $Z = (1 - P)V$ is another orthocomplemented subspace. (ii) An orthogonal projection $P$ is of finite rank if and only if $W = PV \subset V^{\#}$ and the partial inner product is nondegenerate on $W$ (this last condition is, of course, superfluous when the partial inner product is positive definite).

There is a natural partial order on the set of projections, namely, $P \preceq Q$ if and only if $PV \subseteq QV$, but the lattice properties of $\mathrm{Proj}(V)$ are unknown. Thus we expect that quantum logic may be reformulated in a PIP-space language only under additional restrictions on $V$.

Property (ii) has important consequences for the structure of bases. First we recall that a sequence $\{e_n\}$ of vectors in a Banach space $E$ is a *Schauder basis* if, for every $f \in E$, there exists a unique sequence of scalar coefficients $\{c_n\}$ such that $\lim_{N \to \infty} \| f - \sum_{n=1}^{N} c_n e_n \| = 0$. Then one may write
$f = \sum_{n=1}^{\infty} c_n e_n$. (5.10)
The basis is *unconditional* if the series (5.10) converges unconditionally in $E$ (i.e., it keeps converging after an arbitrary permutation of its terms).

A standard problem is to find, for instance, a sequence of functions that is an unconditional basis for *all* the spaces $L^p(\mathbb{R}, \mathrm{d}x)$, $1 < p < \infty$. In the PIP-space language, this statement means that the basis vectors must belong to $\bigcap_{1 < p < \infty} L^p$. Also, since (5.10) means that $f$ may be approximated by finite sums, property (ii) of orthogonal projections implies that all the basis vectors must belong to $V^{\#}$. Some examples are given in Section 6.2.5.
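A classical example, standard in the literature though not spelled out at this point of the text, is the Haar system: together with the constant function, the functions

```latex
h_{j,k}(x) \;=\; 2^{j/2}\Bigl(\chi_{[\,2k\,2^{-j-1},\,(2k+1)2^{-j-1})}(x)
\;-\;\chi_{[\,(2k+1)2^{-j-1},\,(2k+2)2^{-j-1})}(x)\Bigr),
\qquad j\ge 0,\quad 0\le k<2^{j},
```

form an unconditional basis of $L^p(0,1)$ for every $1 < p < \infty$ (but only a conditional Schauder basis of $L^1$). Each $h_{j,k}$ is bounded with compact support, hence belongs to every $L^p$, in agreement with the requirement that basis vectors lie in $V^{\#}$.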

Actually, in the context of signal processing, orthogonal (in the Hilbert sense) bases are not enough; one needs also biorthogonal bases and, more generally, *frames*. We recall that a countable family of vectors $\{\psi_n\}$ in a Hilbert space $\mathcal{H}$ is called a *frame* if there are two positive constants $\mathrm{m}, \mathrm{M}$, with $0 < \mathrm{m} \leq \mathrm{M} < \infty$, such that
$\mathrm{m}\,\|f\|^2 \leq \sum_{n} |\langle f \mid \psi_n \rangle|^2 \leq \mathrm{M}\,\|f\|^2, \quad \text{for all } f \in \mathcal{H}.$
The two constants $\mathrm{m}, \mathrm{M}$ are called *frame bounds*. If $\mathrm{m} = \mathrm{M}$, the frame is said to be *tight*. Consider the analysis operator $C : \mathcal{H} \to \ell^2$ defined by $(Cf)_n = \langle f \mid \psi_n \rangle$ and the frame operator $S = C^{*}C$, that is, $Sf = \sum_n \langle f \mid \psi_n \rangle\,\psi_n$. Then the vectors $\widetilde{\psi}_n = S^{-1}\psi_n$ also constitute a frame, called the *canonical dual frame*, and one has the (strongly converging) expansions
$f = \sum_n \langle f \mid \widetilde{\psi}_n \rangle\,\psi_n = \sum_n \langle f \mid \psi_n \rangle\,\widetilde{\psi}_n, \quad \text{for all } f \in \mathcal{H}.$
Then the considerations made above for bases apply to frame vectors as well; that is, the vectors $\psi_n$ and $\widetilde{\psi}_n$ should also belong to $V^{\#}$.
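The frame definitions can be checked directly on a small finite example. The sketch below (standard textbook material, our own choice of example) uses the three-vector "Mercedes-Benz" frame in $\mathbb{R}^2$, a tight frame with frame bounds $\mathrm{m} = \mathrm{M} = 3/2$; it computes the frame operator $S$, the canonical dual frame, and verifies the reconstruction formula.

```python
import numpy as np

# Three unit vectors at 120-degree angles: a tight frame in R^2.
angles = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
psi = np.stack([np.cos(angles), np.sin(angles)])   # columns are psi_n

# Frame operator S = sum_n psi_n psi_n^T; tightness means S = (3/2) I,
# so the frame inequality holds with m = M = 3/2.
S = psi @ psi.T

# Canonical dual frame: psi~_n = S^{-1} psi_n.
psi_dual = np.linalg.solve(S, psi)

# Reconstruction f = sum_n <f, psi_n> psi~_n for an arbitrary vector.
f = np.array([0.3, -1.2])
coeffs = psi.T @ f                 # analysis: <f, psi_n>
f_rec = psi_dual @ coeffs          # synthesis with the dual frame
```

For a tight frame the dual is simply $\widetilde{\psi}_n = (2/3)\,\psi_n$, so the reconstruction is a redundant, nonorthogonal expansion.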

#### 6. Applications of PIP-Spaces

##### 6.1. Applications in Mathematical Physics

PIP-spaces have found many applications in mathematical physics, mostly in quantum physics. We will sketch a few of them in this section. Most of what follows is described in detail in [14].

###### 6.1.1. Dirac Formalism in Quantum Mechanics

The mathematical description of a quantum system rests on three basic principles: (i) the *superposition principle*, which implies that the set of states of the system has a linear structure; (ii) the notion of *transition amplitude*, given by an inner product, $A(\psi \to \phi) = \langle \phi \mid \psi \rangle$, which yields transition probabilities by $P(\psi \to \phi) = |\langle \phi \mid \psi \rangle|^2$; and (iii) the *probabilistic interpretation*, which requires that $\langle \phi \mid \phi \rangle > 0$, whenever $\phi \neq 0$.

Combining these basic principles implies that the set of states of the system is a positive-definite inner product space $\Phi$, that is, a pre-Hilbert space. On this basis, Dirac developed a formalism for quantum physics with great computational capacity and broad predictive power. The essential features of Dirac's formalism are the following. (i) Physical observables are represented by linear operators in the space $\Phi$, and these operators form an algebra; therefore, it makes sense to add and multiply operators arbitrarily to form new operators. (ii) For a given quantum physical system, there exist complete systems of commuting observables (CSCO) in the algebra of observables. The system of eigenvectors for a chosen CSCO provides a basis for the space $\Phi$; that is, every vector can be expanded into the eigenvectors of the CSCO.

In the simplest case, a CSCO consists of only one observable $A$, with a mixed spectrum consisting of discrete eigenvalues $\lambda_n$ and a continuous part $\lambda \in \sigma_c$. The corresponding eigenvectors, written as $|\lambda_n\rangle$ and $|\lambda\rangle$, respectively, obey “orthogonality" relations $\langle \lambda_n \mid \lambda_m \rangle = \delta_{nm}$, $\langle \lambda \mid \lambda' \rangle = \delta(\lambda - \lambda')$, $\langle \lambda_n \mid \lambda \rangle = 0$. Then every $\phi \in \Phi$ can be expanded as $\phi = \sum_n |\lambda_n\rangle \langle \lambda_n \mid \phi \rangle + \int_{\sigma_c} |\lambda\rangle \langle \lambda \mid \phi \rangle\,\mathrm{d}\lambda$. Clearly the eigenvectors $|\lambda\rangle$ cannot belong to the pre-Hilbert space $\Phi$, nor to its completion $\mathcal{H}$. Thus Dirac's formalism, while extremely practical and used by physicists on a daily basis, is not mathematically well defined.

For that reason, von Neumann formulated a rigorous version of quantum mechanics, in a pure Hilbert space language. His formulation consists in the following two axioms: (i) *pure states* are represented by rays (i.e., one-dimensional subspaces) in a Hilbert space $\mathcal{H}$; and (ii) *observables* are represented by self-adjoint operators in $\mathcal{H}$. This formulation is mathematically well defined, but too restrictive. Nonnormalizable eigenvectors, corresponding to points of a continuous spectrum, cannot belong to $\mathcal{H}$, yet they are extremely useful and often have a clear physical meaning (plane waves, for instance). Observables may be unbounded, so that domain considerations must be taken into account. In particular, unbounded operators may not always be multiplied. Thus it is understandable that the large majority of physicists stay with Dirac's formalism.

This has prompted several authors [27, 28, 30, 31] to propose a rigorous version in terms of a RHS $\Phi \subset \mathcal{H} \subset \Phi^{\times}$. In this scheme, the space $\Phi$ is constructed from the basic observables (*labeled* observables) of the system at hand and is interpreted as the space of all physically preparable states. The conjugate dual $\Phi^{\times}$ contains idealized states (probes), identified with measurement devices. In that context, let $A$ be an observable, represented by a self-adjoint operator in $\mathcal{H}$ that maps $\Phi$ into itself continuously. Then $A$ may be transposed by duality to a linear operator $A^{\times} : \Phi^{\times} \to \Phi^{\times}$, which is an extension of $A^{*} = A$, where $A^{*}$ is the usual Hilbert space adjoint operator, namely,
$\langle A^{\times} F \mid \phi \rangle = \langle F \mid A\phi \rangle, \quad \text{for all } F \in \Phi^{\times},\ \phi \in \Phi.$
For such an operator, the vector $\xi_{\lambda} \in \Phi^{\times}$ is called a *generalized eigenvector* of $A$, with eigenvalue $\lambda$, if it satisfies
$\langle A^{\times} \xi_{\lambda} \mid \phi \rangle = \lambda\,\langle \xi_{\lambda} \mid \phi \rangle, \quad \text{for all } \phi \in \Phi.$
Then the spectral theorem of Gel'fand-Maurin [5, 6] states that $A$ possesses in $\Phi^{\times}$ a complete orthonormal set of generalized eigenvectors $\{\xi_{\lambda}\}$. This means that, for any two $\phi, \psi \in \Phi$, one has (we split again into the discrete and the continuous part of the spectrum of $A$)
$\langle \phi \mid \psi \rangle = \sum_n \langle \phi \mid \xi_{\lambda_n} \rangle \langle \xi_{\lambda_n} \mid \psi \rangle + \int_{\sigma_c} \langle \phi \mid \xi_{\lambda} \rangle \langle \xi_{\lambda} \mid \psi \rangle\,\mu(\lambda)\,\mathrm{d}\lambda,$
where $\mu$ is a non-negative integrable function. In that way one recovers essentially Dirac's bra-and-ket formalism. This approach is based on a RHS, but the construction is such that a PIP-space version is easily obtained, and is in fact closer to Dirac's spirit. For more details, see [14, Section ].
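A standard illustration of these notions (textbook material, not specific to this paper) is the position operator $(Q\phi)(x) = x\,\phi(x)$ on the Schwartz triplet $\mathcal{S}(\mathbb{R}) \subset L^2(\mathbb{R}) \subset \mathcal{S}^{\times}(\mathbb{R})$: for every $\lambda \in \mathbb{R}$, the delta distribution $\delta_{\lambda}$ is a generalized eigenvector, since

```latex
\langle Q^{\times}\delta_{\lambda}\mid\phi\rangle
 \;=\; \langle \delta_{\lambda}\mid Q\phi\rangle
 \;=\; \lambda\,\phi(\lambda)
 \;=\; \lambda\,\langle\delta_{\lambda}\mid\phi\rangle,
 \qquad \phi\in\mathcal{S}(\mathbb{R}),
```

and the completeness relation reduces to $\langle \phi \mid \psi \rangle = \int_{\mathbb{R}} \overline{\phi(\lambda)}\,\psi(\lambda)\,\mathrm{d}\lambda$. Plane waves play the same role for the momentum operator $P = -\mathrm{i}\,\mathrm{d}/\mathrm{d}x$.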

###### 6.1.2. Symmetries, Singular Interactions in Quantum Mechanics

Several other topics in quantum mechanics can be advantageously formulated in a RHS or PIP-space language, for instance, the implementation of symmetries, with the two dual points of view, the active one and the passive one [32]. A symmetry group $G$ is represented by a unitary representation $U$ in $\mathcal{H}$ that extends to a unitary representation in the enveloping PIP-space, in the sense defined in Section 5.2.3. Then, in accordance with the physical interpretation given above, the active point of view corresponds to the action of $U$ on $\Phi$, the passive one to the action on $\Phi^{\times}$.

Another problem is a correct definition of a Hamiltonian with a singular interaction, already mentioned in Section 5.2.4. In the simplest case, the standard definition is $H = H_0 + V$, where the interaction $V$ is given by some reasonable function (potential). However, there are cases where a singular interaction is needed, for instance, when $V$ is replaced formally by a $\delta$ function or a $\delta'$ function, with support in a point (or several points) or in a manifold of lower dimension. Then the usual formulation is based on von Neumann's theory of self-adjoint extensions of symmetric operators, sometimes coupled with Krein's formula [33]. But here the PIP-space approach is a convenient substitute for that approach, as shown in [24, 25] and [14, Section ].

###### 6.1.3. Quantum Scattering Theory

In scattering theory, it is common to use scales of Hilbert spaces built on the powers of the operator $\langle x \rangle = (1 + |x|^2)^{1/2}$ or of $\langle p \rangle = (1 - \Delta)^{1/2}$, and the LHS obtained by combinations of both. This example contains the Sobolev spaces $\mathcal{H}^{s}$ (the scale built on $\langle p \rangle$), the weighted spaces $L^2_t$ (the scale built on $\langle x \rangle$), and spaces of mixed type. In particular, operators of the form $f(x)\,g(-\mathrm{i}\nabla)$, for suitable functions $f$ and $g$, play an essential role in the so-called phase-space approach to scattering theory, and they may be controlled by this LHS. For instance, their trace ideal properties may be derived in this way, and they are used for proving the absence of singular continuous spectrum by the limiting absorption principle.
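In the notation adopted here (standard, though the precise symbols are our choice), the two scales in question read

```latex
\begin{aligned}
\mathcal{H}^{s} &= D\bigl((1-\Delta)^{s/2}\bigr), &
\|f\|_{\mathcal{H}^{s}} &= \bigl\|(1-\Delta)^{s/2}f\bigr\|_{L^{2}}
&&\text{(Sobolev scale)},\\
L^{2}_{t} &= D\bigl(\langle x\rangle^{t}\bigr), &
\|f\|_{L^{2}_{t}} &= \bigl\|\langle x\rangle^{t}f\bigr\|_{L^{2}},
\quad \langle x\rangle = (1+|x|^{2})^{1/2}
&&\text{(weighted scale)},
\end{aligned}
```

and the LHS of mixed type is generated from the intersections $\mathcal{H}^{s} \cap L^{2}_{t}$ and their duals by the lattice operations.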

On the other hand, the Weinberg-van Winter (WVW) formulation of scattering theory [34–36] has a very natural interpretation in terms of a LHS, whose components, including the extreme ones, are Hilbert spaces consisting of functions analytic in a sector; thus the indexing parameter is the opening angle of that sector. This technique has made it possible to show that the WVW formalism is a particular case of the Complex Scaling Method [14, Section ], a result hitherto unknown.

###### 6.1.4. Quantum Field Theory

Mathematically rigorous formulations of QFT rely heavily on a RHS or a PIP-space approach, primarily Wightman's axiomatic formulation. There, indeed, a quantum field is defined as an operator-valued distribution $A(f)$, which is customarily written in terms of an unsmeared field (field at a point) $A(x)$, as $A(f) = \int A(x)\,f(x)\,\mathrm{d}x$. Under quite reasonable assumptions, the unsmeared field $A(x)$ can be defined as a map from spacetime into $\mathrm{Op}(V_I)$, where $V_I$ is a conveniently chosen PIP-space. This makes it possible to give the previous formula a rigorous mathematical meaning [14, Section ].

Another PIP-space version of QFT is the Fock construction (tensor algebra) over the RHS $\mathcal{S}(V_m) \subset L^2(V_m, \mathrm{d}\mu) \subset \mathcal{S}^{\times}(V_m)$, where $V_m$ denotes the forward mass shell and $\mu$ the Lorentz invariant measure on it. Write $\mathcal{S}_1 = \mathcal{S}(V_m)$, the space of “good" one-particle states. Then define $\mathcal{S}_n$ as the symmetrized tensor product of $n$ copies of $\mathcal{S}_1$, corresponding to $n$-boson states. Again $\mathcal{S}_n$ is reflexive, complete, and nuclear with respect to its natural topology, and it can be described as the end space of a scale of Hilbert spaces. Finally, define $\mathcal{S}_{\oplus} = \bigoplus_{n \geq 0} \mathcal{S}_n$, that is, the topological direct sum of the component spaces. Elements of $\mathcal{S}_{\oplus}$ are finite sequences $(\phi_0, \phi_1, \ldots, \phi_N, 0, 0, \ldots)$, that is, totally symmetric functions of Schwartz type. The space $\mathcal{S}_{\oplus}$ is reflexive, complete, and nuclear with respect to the direct sum topology. Its dual is the topological product $\mathcal{S}_{\oplus}^{\times} = \prod_{n \geq 0} \mathcal{S}_n^{\times}$. Thus we get a suitable RHS, in which the central Hilbert space is Fock space, that is, the tensor algebra over $L^2(V_m, \mathrm{d}\mu)$.

Other examples are the construction of QFT via the Borchers algebra, Nelson's Euclidean field theory, or the precise treatment of unsmeared fields (fields at a point). See [14, Section ] for a detailed presentation.

###### 6.1.5. Representations of Lie Groups and Lie Algebras

Let us return to the situation described in Section 5.2.3. We start with a strongly continuous unitary representation $U$ of a Lie group $G$ in a Hilbert space $\mathcal{H}$ and seek to build a PIP-space $V_I$, with $\mathcal{H}$ as its central Hilbert space, such that $U$ extends to a unitary representation into $V_I$.

The solution of this problem is well known from Nelson's theory of analytic vectors. Let $\Delta$ be the closure of the Nelson operator $\Delta = 1 - \sum_{j=1}^{d} X_j^2$, where $X_1, \ldots, X_d$ are the representatives under $U$ of the elements of a basis of the Lie algebra $\mathfrak{g}$ of $G$. The Nelson operator is essentially self-adjoint on the so-called Gårding domain, its closure $\Delta$ is self-adjoint, and $\Delta \geq 1$. Define $V_I$ as the canonical scale of Hilbert spaces generated by the operator $\Delta$. First, one has $\bigcap_n \mathcal{H}_n = \mathcal{H}^{\infty}(U)$, the space of $C^{\infty}$-vectors of $U$. Next, for every $g \in G$, $U(g)$ leaves each $\mathcal{H}_n$ invariant and its restriction is continuous; thus it can be transposed to a continuous map on each conjugate dual $\mathcal{H}_{\bar{n}}$. It follows that $U$ extends to a unitary representation in the LHS $V_I$. Corresponding to the triplet (6.11), we have three representations of $G$, namely, $U$, its restriction to $\mathcal{H}^{\infty}(U)$, and the dual of the latter, which is an extension of the first two. All three are continuous. Moreover, if one of the three is topologically irreducible (i.e., there is no proper invariant closed subspace), so are the other two.
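Concretely, with notation of our choosing, the canonical scale generated by the closed Nelson operator $\Delta$ is

```latex
\mathcal{H}_{n} \;=\; D(\Delta^{n}),\qquad
\langle f\mid g\rangle_{n} \;=\; \langle \Delta^{n}f\mid \Delta^{n}g\rangle,
\qquad n\in\mathbb{N},
```

with $\mathcal{H}_{\bar n}$ the conjugate dual of $\mathcal{H}_n$, yielding the chain $\bigcap_{n}\mathcal{H}_{n} \subset \cdots \subset \mathcal{H}_{1} \subset \mathcal{H}_{0} = \mathcal{H} \subset \mathcal{H}_{\bar 1} \subset \cdots \subset \bigcup_{n}\mathcal{H}_{\bar n}$, on which the triplet (6.11) and the three representations of $G$ live.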

In addition to the representations of the group $G$, the scale $V_I$ is the natural tool for studying the properties of the operators representing elements of the Lie algebra $\mathfrak{g}$ or the universal enveloping algebra $\mathcal{U}(\mathfrak{g})$ of $G$. For every element $x \in \mathfrak{g}$ or $u \in \mathcal{U}(\mathfrak{g})$, the representative $U(x)$, respectively $U(u)$, originally defined on the Gårding domain, extends to a regular operator on $V_I$. These regular operators have in general no $\mathcal{H}_0$-representative, since $\mathfrak{g}$ and $\mathcal{U}(\mathfrak{g})$ are represented in $\mathcal{H}_0$ by unbounded operators. As in the case of the group $G$, one gets three *-representations of the enveloping algebra $\mathcal{U}(\mathfrak{g})$, and in particular of the Lie algebra $\mathfrak{g}$, in the three spaces of the triplet (6.11), intertwined by the involution $u \mapsto u^{*}$ on $\mathcal{U}(\mathfrak{g})$. These representations have the same irreducibility properties as the corresponding ones of the group. See [14, Section ] for further details.

##### 6.2. Applications in Analysis and Signal Processing

Many families of function spaces of interest in analysis or signal processing are, or contain, lattices of Banach spaces. To quote a few: amalgam spaces, modulation spaces, Besov spaces, $\alpha$-modulation spaces, and coorbit spaces, which contain many of the previous cases. We shall describe them briefly in succession. For further information about those spaces, we refer to our monograph [14, Chapters and ].

###### 6.2.1. Amalgam Spaces

A situation intermediate between the mixed-norm spaces