Abstract

We turn a given filter bank into a filtering scheme that provides perfect reconstruction, whose synthesis is the adjoint of the analysis part (a so-called unitary filter bank), whose filters all have equal norm, and that preserves the essential features of the original filter bank. Unitary filter banks with perfect reconstruction are induced by tight generalized frames, which enable signal decomposition by means of a set of linear operators. If, in addition, the frame elements have equal norm, then the signal energy is spread through the various filter bank channels in a uniform fashion, which is often more suitable for further signal processing. We start with a given generalized frame whose elements allow for fast matrix-vector multiplication, as, for instance, convolution operators, and compute a normalized tight frame for which signal analysis and synthesis still preserve those fast algorithmic schemes.

1. Introduction

Increasingly detailed data are acquired in all sorts of measurements nowadays, so that fast algorithms are an important factor for successful signal processing. The concept of generalized frames has a long tradition in signal processing, and many unitary filter bank schemes with the perfect reconstruction property are induced by tight generalized frames. Frames themselves are basis-like systems that span a vector space but allow for linear dependencies. The inherent redundancy of frames can yield advantageous features unavailable within the basis concept [14]. If the frame is tight and its elements have unit norm, then it resembles an orthonormal basis, with the add-on of useful redundancy, and the frame coefficients measure the signal energy in a uniform fashion. Generalized frames were introduced in [5] as a tool for signal decomposition using a set of linear operators. In [6], collections of orthogonal projectors were considered under the name fusion frames; fusion frame filter banks were considered in [7]; and the concept of tight p-fusion frames was developed in [8]. As convolution operators are linear, most filter banks can be thought of as pairs of generalized frames, one for analysis and the other for synthesis. Hence, from the filter bank perspective, it is not sufficient to deal with ordinary frames; we must inevitably consider their generalized counterpart. Tightness of a generalized frame means that the induced unitary filter bank provides perfect reconstruction. As with frames, we seek unit norm tight generalized frames because the signal energy is then spread through the various channels in a more uniform fashion. The latter was used in [9] to verify robustness of tight fusion frames against erasures, meaning that it is beneficial to have tight fusion frames with equal norm elements when dealing with distortions and loss of data. To keep the filter bank perspective, we will focus on generalized frames consisting of convolution operators enabling fast algorithms.

In the present paper, we start with a generalized frame whose elements allow for fast matrix-vector multiplications (e.g., convolution operators) and construct a unit norm tight generalized frame that induces a filter bank scheme preserving those fast algorithms. The latter is related to the so-called Paulsen problem for frames, where one is given a unit norm frame and asks for the closest tight frame with unit norm and for an algorithm to find it. This problem has been partially solved for frames in [10–12]. Note that if we are given a unit norm generalized frame whose elements allow for fast matrix-vector multiplications, then the closest tight generalized frame with unit norm may not provide such fast algorithmic schemes in general. Here, we aim to find a related tight unit norm generalized frame in such a way that signal analysis and synthesis can still benefit from the underlying fast matrix-vector multiplications.

We should point out that we use the term filter bank in a broader sense than sets of convolution operators, similar to [7], where weighted orthogonal projectors are considered. Nonetheless, if the starting generalized frame consists of convolution operators, then the resulting scheme still applies convolution operators in each channel, but we require one additional linear operator for global pre- and postmultiplication. As this operator has a special structure, being the inverse of a convolution frame operator, fast computation schemes are still available [13].

Our construction is inspired by pseudocovariance estimators of elliptical distributions in [14]; see also [15, 16]. We derive an iterative algorithm on positive definite matrices, for which we prove convergence, so that we obtain a positive definite matrix that enables us to construct the tight unit norm generalized frame.

For related research topics, such as the optimal rescaling of filter banks, we refer to [17, 18]. Preconditioning in the context of Gabor frames is addressed in [19]. In order to assess the benefits of our approach in image analysis, we suggest the use of the structural similarity proposed in [20].

The outline is as follows: In Section 2, we introduce the concept of generalized frames and motivate the construction of unit norm tight generalized frames. In Section 3, we present our iterative algorithm, for which we verify convergence, enabling us to construct tight generalized frames with unit norm that preserve fast analysis and synthesis due to their special structure. In Section 4, we provide a few examples of random matrices whose samples satisfy the required convergence assumptions. We also point out examples of convolution operators and further operators enabling fast matrix-vector multiplications. In Section 5, we discuss the structure of our construction when the underlying generalized frame is a sample from an elliptical distribution. Some concluding remarks are contained in Section 6.

2. Generalized Frames

Let $\mathbb{K}$ denote either $\mathbb{R}$ or $\mathbb{C}$. We follow [5] and call a collection $\{V_j\}_{j=1}^{n} \subset \mathbb{K}^{r \times d}$ a generalized frame (or a $g$-frame for short) if there are two constants $0 < A \leq B$ such that
$$A\|x\|^{2} \leq \sum_{j=1}^{n} \|V_j x\|^{2} \leq B\|x\|^{2}, \quad \text{for all } x \in \mathbb{K}^{d}.$$
If the constants can be chosen as $A = B$, then $\{V_j\}_{j=1}^{n}$ is called a tight $g$-frame, and it is called a Parseval $g$-frame if $A = B = 1$. For $r = 1$, we have $V_j x = v_j^{*} x$ for some $v_j \in \mathbb{K}^{d}$, so that we recover the concept of frames (cf. [1]). It turns out that a collection $\{V_j\}_{j=1}^{n}$ is a $g$-frame if and only if the ranges of $V_1^{*}, \dots, V_n^{*}$ together span $\mathbb{K}^{d}$.

If $\{V_j\}_{j=1}^{n}$ is a $g$-frame, then the analysis operator $F \colon \mathbb{K}^{d} \to (\mathbb{K}^{r})^{n}$ is given by $Fx = (V_j x)_{j=1}^{n}$. Its adjoint is the synthesis operator $F^{*} \colon (\mathbb{K}^{r})^{n} \to \mathbb{K}^{d}$, defined by $F^{*}\big((y_j)_{j=1}^{n}\big) = \sum_{j=1}^{n} V_j^{*} y_j$, such that the generalized frame operator is $S = F^{*}F = \sum_{j=1}^{n} V_j^{*} V_j$. The collection $\{V_j S^{-1}\}_{j=1}^{n}$ is called the canonical dual $g$-frame and yields the expansion $x = \sum_{j=1}^{n} S^{-1} V_j^{*} V_j x$, which simply follows from $S^{-1}S = I_d$, where $I_d$ denotes the identity matrix.
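These operators are easy to check numerically. The following minimal NumPy sketch, with illustrative dimensions of our own choosing, builds a random $g$-frame, its frame operator, and verifies the reconstruction via the canonical dual:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 4, 2, 6          # signal dimension, channel output length, number of channels
V = [rng.standard_normal((r, d)) for _ in range(n)]   # hypothetical g-frame elements

# Generalized frame operator S = sum_j V_j^* V_j
S = sum(Vj.T @ Vj for Vj in V)

# Reconstruction x = sum_j S^{-1} V_j^* (V_j x) via the canonical dual g-frame
x = rng.standard_normal(d)
coeffs = [Vj @ x for Vj in V]                          # analysis: channel outputs V_j x
x_rec = sum(np.linalg.solve(S, Vj.T @ c) for Vj, c in zip(V, coeffs))

assert np.allclose(x, x_rec)
```

With $n r \geq d$ random Gaussian rows, the frame operator is almost surely invertible, so the expansion recovers the signal exactly up to floating-point error.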

Proposition 1. If $\{V_j\}_{j=1}^{n}$ is a $g$-frame with $g$-frame operator $S$, then $\{V_j S^{-1/2}\}_{j=1}^{n}$ is a Parseval $g$-frame and, for any other Parseval $g$-frame $\{U_j\}_{j=1}^{n}$, one has
$$\sum_{j=1}^{n} \|V_j - U_j\|_{\mathrm{HS}}^{2} \geq \sum_{j=1}^{n} \|V_j - V_j S^{-1/2}\|_{\mathrm{HS}}^{2}.$$
Equality holds if and only if $U_j = V_j S^{-1/2}$, for $j = 1, \dots, n$.

Here, $\|\cdot\|_{\mathrm{HS}}$ denotes the Hilbert-Schmidt (Frobenius) norm. Most parts of the proof of Proposition 1 can follow the lines in [21], where the case $r = 1$ is considered, so we omit the proof.

Remark 2. We have supposed that the linear operators of a $g$-frame all have the same dimensions, which simplifies notation but is not necessary. The entire paper could also deal with sets of linear operators $\{V_j\}_{j=1}^{n}$ with $V_j \in \mathbb{K}^{r_j \times d}$, where the $r_j$ may differ, for $j = 1, \dots, n$. Then the $g$-frame operator $\sum_{j=1}^{n} V_j^{*} V_j$ is still a well-defined matrix in $\mathbb{K}^{d \times d}$, and this is all we need.

Tight frames are desirable because synthesis is simply the adjoint of the analysis part. For signal processing purposes, we are interested in tight $g$-frames that additionally have unit norm, because those more closely resemble orthonormal bases, and the magnitude of each frame coefficient then carries more information about the signal energy in the direction of the corresponding frame element; see [22] for the case $r = 1$.

Given some $g$-frame, say with unit norm elements, let us seek a tight $g$-frame with equal norm elements that is nearby. If we give up the equal norm requirement and $S$ is the frame operator of some $g$-frame $\{V_j\}_{j=1}^{n}$, then the collection $\{V_j S^{-1/2}\}_{j=1}^{n}$ is a Parseval frame closest to $\{V_j\}_{j=1}^{n}$; see Proposition 1. In general, however, its elements may not have equal norms. The search for the closest Parseval frame with equal norm elements has become known as the Paulsen problem. It is essentially the same problem if we restrict ourselves to the sphere; that is, given a unit norm $g$-frame, we aim for the closest unit norm tight $g$-frame. For $r = 1$, this problem was partially solved in [10–12].

Suppose now that we are given a $g$-frame $\{V_j\}_{j=1}^{n}$ such that fast matrix-vector multiplications are available for each $V_j$ and $V_j^{*}$. The closest equal norm Parseval $g$-frame may not preserve such features. From a computational point of view, it would be preferable to find an equal norm Parseval $g$-frame that still allows for fast analysis and synthesis schemes, and this is indeed our topic in the subsequent sections.

3. Constructing Unit Norm Tight g-Frames That Preserve Fast Algorithms

The $g$-frame operator $S$ of some $g$-frame $\{V_j\}_{j=1}^{n}$ and hence also $S^{-1/2}$ are positive definite, and Proposition 1 yields the tight $g$-frame $\{V_j S^{-1/2}\}_{j=1}^{n}$, which may not have unit norm, whereas $\{V_j/\|V_j\|_{\mathrm{HS}}\}_{j=1}^{n}$ has unit norm but may not be tight. To construct a unit norm tight $g$-frame that preserves fast matrix-vector multiplications, we take inspiration from Proposition 1 and aim to find a positive definite matrix $M$ such that
$$\Big\{ \frac{V_j M}{\|V_j M\|_{\mathrm{HS}}} \Big\}_{j=1}^{n} \qquad (5)$$
is a unit norm tight $g$-frame. As opposed to Proposition 1, we replace $S^{-1/2}$ with $M$ and normalize. The unit norm $g$-frame (5) is tight if and only if
$$\sum_{j=1}^{n} \frac{M V_j^{*} V_j M}{\|V_j M\|_{\mathrm{HS}}^{2}} = \frac{n}{d} I_d. \qquad (6)$$
Signal analysis and synthesis require pre- and postmultiplication by $M$, but in between we can use the fast algorithms provided by the $V_j$ and $V_j^{*}$ in each of the $n$ channels (cf. Figure 1). Now, each element of (5) has unit norm, so that the signal energy better relates to the magnitudes of the channel outputs. Thus, the special structure (5) can be advantageous over other unit norm tight $g$-frames that may be closer to the original one.

Remark 3. The filtering scheme in Figure 1 can preserve many properties of the original $g$-frame beyond fast matrix-vector multiplications, such as the elements being orthogonal projectors or sparse matrices, as long as the application of $M$ is implemented separately and we do not form the products $V_j M$ directly.

Remark 4. We point out that structure (5) is different from the approach in [23], where rescalings are sought to derive tight frames. The authors in [24] discuss the setting in which a linear operator exists that maps a frame into a unit norm tight frame. We are more general here because we join both approaches: we apply a linear operator and allow for rescaling.

Note that (6) is equivalent to
$$M^{-2} = \frac{d}{n} \sum_{j=1}^{n} \frac{V_j^{*} V_j}{\|V_j M\|_{\mathrm{HS}}^{2}}. \qquad (7)$$
Thus, $M^{2}$ is the inverse of the generalized frame operator of $\{\sqrt{d/n}\, V_j/\|V_j M\|_{\mathrm{HS}}\}_{j=1}^{n}$. Since this equation is invariant under scalings, we can look for a solution with trace one. Let $\mathcal{H}^{+}$ be the collection of hermitian positive definite matrices in $\mathbb{K}^{d \times d}$ and denote by $\mathcal{H}^{+}_{1}$ the same space with the additional requirement that the trace is $1$. The fixed point equation (7) gives rise to an iterative scheme that was already considered in [14, 15] for $r = 1$ to estimate the covariance of elliptical distributions. As initialization we choose $S_{0} = \frac{1}{d} I_d$ and define
$$S_{k+1} = \frac{\sum_{j=1}^{n} V_j^{*} V_j / \operatorname{tr}(V_j S_k^{-1} V_j^{*})}{\operatorname{tr}\big(\sum_{j=1}^{n} V_j^{*} V_j / \operatorname{tr}(V_j S_k^{-1} V_j^{*})\big)}. \qquad (8)$$
Note that $S_k \in \mathcal{H}^{+}_{1}$, and, to verify convergence, we will follow the ideas of the technical procedure used in [14, 15] for $r = 1$. For analysis purposes, we will introduce the mapping
$$h(S) = \sum_{j=1}^{n} \frac{V_j^{*} V_j}{\operatorname{tr}(V_j S^{-1} V_j^{*})}, \qquad (9)$$
so that
$$S_{k+1} = \frac{h(S_k)}{\operatorname{tr}(h(S_k))}. \qquad (10)$$
We will first check that the mapping $h$ is injective up to scalings, which generalizes [14, Theorem ] from $r = 1$ to the general case.
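To make the scheme concrete, here is a minimal NumPy sketch of iteration (10), assuming the Tyler-type update $S_{k+1} = h(S_k)/\operatorname{tr}(h(S_k))$ with $h$ as reconstructed above; the function name, stopping rule, and dimensions are our own illustrative choices:

```python
import numpy as np

def unit_norm_tight_gframe(V, iters=1000):
    """Fixed-point sketch: iterate S_{k+1} = h(S_k)/tr(h(S_k)) on trace-one
    positive definite matrices, where h(S) = sum_j V_j^* V_j / tr(V_j S^{-1} V_j^*).
    The limit S gives M = S^{-1/2}, and {V_j M / ||V_j M||_HS} is then a
    unit norm tight g-frame."""
    d = V[0].shape[1]
    S = np.eye(d) / d                           # initialization with trace one
    for _ in range(iters):
        h = sum(Vj.T @ Vj / np.trace(Vj @ np.linalg.solve(S, Vj.T)) for Vj in V)
        S = h / np.trace(h)
    w, U = np.linalg.eigh(S)                    # inverse square root of the limit
    M = (U * w ** -0.5) @ U.T
    W = [Vj @ M for Vj in V]
    return [Wj / np.linalg.norm(Wj) for Wj in W]    # unit Hilbert-Schmidt norm

rng = np.random.default_rng(1)
d, r, n = 4, 2, 8
V = [rng.standard_normal((r, d)) for _ in range(n)]
W = unit_norm_tight_gframe(V)

frame_op = sum(Wj.T @ Wj for Wj in W)
assert np.allclose(frame_op, (n / d) * np.eye(d), atol=1e-6)    # tight, cf. (6)
assert all(abs(np.linalg.norm(Wj) - 1.0) < 1e-12 for Wj in W)   # unit norm
```

For a generic Gaussian sample the assumptions of Theorem 7 hold almost surely, and the final assertions check tightness and the unit norm property numerically.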

Lemma 5. Let be a -frame and let be positive definite. Then if and only if there is a positive constant such that .

Proof. Without loss of generality, we can assume that . Otherwise, replace with . Let be the largest eigenvalue of and let be the associated eigenprojector. Moreover, let be one-dimensional eigenprojectors of associated with eigenvalues such that , for , and . Note that do not need to be pairwise distinct. Since and , we obtain Now, implies that either , for all and , or and, hence, . Since is a generalized frame, cannot vanish simultaneously for all , so that, indeed, is a nonzero multiple of the identity.

The following result says that if we find a proper tight -frame with unit norm based on (5), then this tight -frame is unique.

Proposition 6. Let be a -frame and suppose that there are two positive definite matrices and such that both and are tight. Then those two tight -frames are identical.

Proof. The tightness assumptions imply that . According to Lemma 5, there is a positive constant such that . Therefore, the two tight -frames are identical.

Next, we use scheme (10) to compute a unit norm tight -frame.

Theorem 7. Let $\{V_j\}_{j=1}^{n}$ satisfy the following points:
(i) It is a $g$-frame.
(ii) If and , then spans for either or .
(iii) If is a proper linear subspace of , then .
(iv) If is a proper linear subspace of , then , where and is the $g$-frame operator of .
Then the recursive scheme (10) with $S_0 = \frac{1}{d} I_d$ converges towards a positive definite $S$, and
$$\Big\{ \frac{V_j S^{-1/2}}{\|V_j S^{-1/2}\|_{\mathrm{HS}}} \Big\}_{j=1}^{n} \qquad (12)$$
is a tight $g$-frame.

This theorem generalizes results in [14, 15], where convergence is verified for $r = 1$. The conditions in Theorem 7 are partially redundant. Condition (ii) clearly implies (i). For , (iii) yields (ii). In fact, conditions (i), (ii), and (iii) depend on the range of each but not on their norm. Note that condition (iii) can only be satisfied by some if , which yields . Condition (iv) is independent of global scalings since multiplication of all by some constant means that the inverse frame operator needs to be divided by . It requires , which is, in fact, quite weak.

Proposition 8. Suppose that is a -frame. One then has , for all , and if there is with , then does not have any zero columns and , for all .

Proof of Proposition 8. We first choose an orthonormal basis for and define, for some , the index set . We apply Proposition 1  -times to derive

The assumption implies that the two above inequalities become equalities, which yield the required statements.

Note that Proposition 8 bounds the worst case scenario. Since , taking the trace on both sides yields that . If has unit norm and is close to being tight, meaning , then . If is sufficiently generic or in sufficiently general position, then (i)–(iv) are satisfied for sufficiently large .

If is a sample of a continuous distribution on and is sufficiently large, then with probability one all of the assumptions in Theorem 7 are satisfied.

Proof of Theorem 7. Since is a -frame, the sequence is well defined. It is also clear that is hermitian positive definite and , for all . If we suppose that converges towards , then must hold, and a direct computation yields that is a tight -frame with , for .
It remains to verify convergence, which we check in two steps.
Step 1 (refers to Lemma in [14]). Let and be the largest and smallest eigenvalues of , respectively. We observeThe positive definite square root of has the form , where is an orthogonal matrix. Since , we obtainAccording to (10), we have which yields According to (10), we have the identity , so thatholds. Since each of the matrices is positive semidefinite, the definition of in (9) with its largest and smallest eigenvalues implies The left inequality yields and the right inequality implies . Thus, as required, the sequence is increasing, is decreasing, and both converge towards and , respectively.
Step 2 (refers to Theorem and Corollary in [14]). Since is positive definite with trace , there is a subsequence that converges towards some positive semidefinite matrix . We must now verify that is positive definite and that the entire sequence converges.
If , then let be such that For , we observe that the subsequence converges toStep ensures that is invertible since its smallest eigenvalue is positive. Let be the orthogonal matrices in Step , so that and is satisfied. According to definition (10), converges towards and a short calculation yields that the sequence converges to where .
Manipulations as in Step applied to the formulas for and imply thatas well asAccording to Step , the largest and smallest eigenvalue of both and are and , respectively. Let and be the eigenprojectors of and , respectively, associated with and and . As in [14], without loss of generality, we can suppose .
By multiplying both sides from the left and the right by , the relations and yieldThus, for , we either have The first option yields . The second option implies , which yields after some computations .
For , we obtain Similar to the above considerations, the first option yields . The second option implies .
We now premultiply both sides of (24) by and postmultiply by . The above four options and using that and commute imply , which is equivalent to . Since , we obtain , so that . The latter implies with the above that, for , either Hence, we can split into two disjoint index sets and such that Condition (ii) yields that equals for either or . If this holds for , then we must have . If it holds for , then we derive .
Suppose now that holds. The same arguments as in the previous paragraph yield, for , that either or ; see also [14]. Pre- and postmultiplying both sides in (21) by yieldswhere . Next, we take the trace on both sides and use that to derive where is the number of whose range is contained in the null space of . Condition (iii) yields , which is a contradiction to the results of Step . Thus, we must have , so thatSince the ranks of the two summations in (21) are additive (see also [14]) the ranks of the two summations in (33) are additive. Hence, the two terms themselves must be orthogonal projections. According to condition (i), the rank of the first term equals . If , then taking the trace of the second term implies with condition (iii) that , which is a contradiction to Step . Therefore, is empty and . Taking the trace of the first term in (33) yieldsWe obtain , so that (34) impliesAt this point, we claim that assumption (iv) implies for at least one thatholds, for all proper linear subspaces , but postpone the verification to the end of this proof.
Since is an increasing sequence, (36) implies if . This violates (35), so that and hence must have full rank and, therefore, is positive definite. Also, must have full rank implying and . Since the eigenvalues are monotone, the entire sequence converges towards . The latter can be used with Banach’s fixed point theorem to verify that also must converge, hence, towards . By continuity, we obtain .
We still need to verify (36). We observe and define , which yields . By using as in (iv), this implies , so thatwhere the last inequality is due to (iv). This concludes the proof.

Remark 9. The inversion of in iterative scheme (10) is numerically stable because there is a lower positive bound on the smallest eigenvalues of . Therefore, we can expect that our scheme is quite stable overall.

Let us first have a look at a few pathological examples.

Example 10. If is already a unit norm tight -frame, then and .
If the -frame consists of a single matrix , hence, and is regular, then and (12) yields , which is a unit norm tight -frame.

Next, we illustrate Theorem 7 with a few numerical examples.

Example 11. Let , , and . We pick from a uniform distribution on and define , . By multiplication by and rotation of all vectors, we can restrict the angles to lie between and . For each random choice , we compute a unit norm tight frame using our proposed algorithm. Up to rotations and multiplication by , there is only a single unit norm tight frame with three elements. We choose , where and , , and . Therefore, to find the tight frame with unit norm that is closest to , we minimize the distance to over all rotations, that is, and define the closest tight frame by . Note that we can suppress the multiplication by because the angles of only run in . The average error of over realizations is ; see also Figure 2 for a visualization of a few examples. In our numerical experiments, we observed that our proposed algorithm finds a tight frame that is almost identical to the closest tight frame if all pairs and , for , are far enough from each other.
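The essential uniqueness of the three-element unit norm tight frame in the plane can be checked directly. In this small sketch of our own, the "Mercedes-Benz" directions at angles $0$, $2\pi/3$, $4\pi/3$ form such a frame:

```python
import numpy as np

# Angles 0, 2*pi/3, 4*pi/3 give the Mercedes-Benz frame; up to rotation and
# sign flips it is the unique unit norm tight frame of three vectors in R^2.
thetas = np.arange(3) * 2 * np.pi / 3
V = np.stack([np.cos(thetas), np.sin(thetas)])   # columns are the frame vectors
S = V @ V.T                                      # frame operator
assert np.allclose(S, 1.5 * np.eye(2))           # tight with frame bound n/d = 3/2
```

The frame bound $n/d = 3/2$ is exactly the constant any unit norm tight frame of three vectors in the plane must attain.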

Example 12. For , define and let denote the associated -frame operator. Note that satisfies the assumptions of Theorem 7, for all . If , then we have a tight generalized frame with unit norms. Our algorithm provides a Parseval -frame with equal -norm; see Figure 3 for the errors and .

Example 13. We choose each entry of each element in independently according to a uniform distribution on and normalize so that . All numbers in the following are averaged over 10,000 realizations. Let denote the generalized frame operator of . According to Proposition 1, is the Parseval generalized frame that is closest to , and we compute . However, the elements of may not have equal norm. Based on Theorem 7, the collection is a Parseval generalized frame, its elements have equal norm, and we compute . Thus, the additional property of having equal norm costs . It remains open, though, whether there are other Parseval generalized frames whose elements have equal norm and that are closer to .

Let us also illustrate when the algorithm fails to converge.

Example 14. For , the collection , where violates the conditions in Theorem 7, and, indeed, the iterative scheme does not converge towards a positive definite matrix . For , on the other hand, we observe convergence numerically.

The subsequent sections are dedicated to providing examples of random samples satisfying the assumptions of Theorem 7. We will also provide examples that allow for fast matrix-vector multiplications, such as convolution operators, and we support the intuition that the resulting matrix is close to the identity if the sample is close to being tight.

4. Examples of Random Matrices Satisfying the Assumptions for Convergence

We first fuse the concepts of generalized frames and probabilistic frames as developed in [5] and [4], respectively; see also [25].

Definition 15. Let be an integer. One says that a random matrix is a random $g$-frame of order if there are positive constants and such that . A random $g$-frame of order is called tight if the two constants can be chosen to be equal.

Following the lines of the proof for rank one projectors considered in [2] yields that any random $g$-frame of order satisfies . Similar to finite frames, if is a random $g$-frame of order , then the random $g$-frame operator is positive, self-adjoint, and invertible. Thus, we obtain the reconstruction formula . Moreover, is a tight random $g$-frame of order if and only if , where .

Note that the case of the following result is already explicitly contained in [26]; see [27] for related results on orthogonal projectors.

Theorem 16. Let be independent copies of a tight random -frame of order with for some positive constant . For fixed , there are positive constants and such that, for all , the -frame operator of the scaled collection satisfies with probability at least .

Proof. Let and denote the smallest and largest eigenvalue of , respectively. The matrix Chernoff bounds as stated in [28] yield, for all , . Some calculus yields , so that we derive . We can further compute . Since , for all , we can find a suitable constant if is sufficiently large.

Remark 17. The constants and in Theorem 16 can be explicitly computed. By using , we can choose and .

Next, we discuss a few examples.

Example 18 (Gaussian matrices). Let and consider the random matrix whose entries are i.i.d. Gaussian. Its joint element density is given by (48). The resulting self-adjoint matrix is a singular Wishart matrix (cf. [29]). According to (48), the distribution of is invariant under orthogonal transformations, so that is a tight random $g$-frame of order , for all integers . By using the moments of the chi-squared distribution, we see that the bounds satisfy .
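The tightness of the Gaussian random $g$-frame can be sanity-checked by Monte Carlo. This small sketch (dimensions are our own choice) estimates the expected frame operator of i.i.d. Gaussian matrices, which for order one should be a multiple of the identity:

```python
import numpy as np

rng = np.random.default_rng(4)
d, r, N = 3, 2, 100000
X = rng.standard_normal((N, r, d))          # N i.i.d. Gaussian r x d matrices

# Monte Carlo estimate of the expected frame operator E[X^* X] = r I,
# reflecting the orthogonal invariance of the Gaussian distribution.
S = np.einsum('nji,njk->ik', X, X) / N
assert np.allclose(S, r * np.eye(d), atol=0.1)
```

The exact expectation is $r I_d$, since each diagonal entry sums $r$ unit-variance squares and off-diagonal entries average products of independent mean-zero variables.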

Example 19 (fusion frames). If a matrix , , has orthonormal columns, then we can identify it with a subspace , where denotes the Grassmann space, that is, the collection of -dimensional subspaces of . The Haar measure on then induces a random $g$-frame of order , for all integers .

Example 20 (Gabor). Time-frequency structured matrices were considered in [30] in relation to compressed sensing, in which some window vector is modulated and shifted. We use cyclic shifts, which can be performed by applying a matrix having ones in the lower secondary diagonal, another one in the upper right corner, and zeros everywhere else. The modulation operator on is given by For any nonzero , the full Gabor system has cardinality and forms a tight frame for (cf. [31]). We will use the matrix , whose rows are formed by the tight frame vectors. A short computation yields that if is chosen at random as a Rademacher sequence, then is a tight random $g$-frame of order . Moreover, each is an orthogonal projector, so that corresponds to a tight random fusion frame. The same holds when is a Steinhaus sequence; that is, each entry is uniformly distributed on the complex unit circle.
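As a quick numerical check of the tightness claim, the following sketch (toy dimensions of our own) builds the full Gabor system of a Rademacher window from the cyclic shift and modulation operators and verifies that its frame operator is the expected multiple of the identity:

```python
import numpy as np

d = 5
rng = np.random.default_rng(2)
g = rng.choice([-1.0, 1.0], size=d).astype(complex)   # Rademacher window

T = np.roll(np.eye(d), 1, axis=0)   # cyclic shift: ones on the lower secondary
                                    # diagonal plus one in the upper right corner
def M(l):                           # modulation operator on C^d
    return np.diag(np.exp(2j * np.pi * l * np.arange(d) / d))

# Full Gabor system: the d^2 vectors M(l) T^k g form a tight frame for C^d
G = np.column_stack([M(l) @ np.linalg.matrix_power(T, k) @ g
                     for l in range(d) for k in range(d)])
S = G @ G.conj().T                  # frame operator
assert np.allclose(S, d * np.linalg.norm(g) ** 2 * np.eye(d))
```

The frame bound $d\,\|g\|^2$ holds for any nonzero window; for a Rademacher window this equals $d^2$.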

Next, we have an example that indeed allows for fast matrix vector multiplication.

Example 21 (circulant matrices). Given a vector , the corresponding circulant matrix is Each column of is a cyclic shift of the previous one. The left block of the matrix was used as a compressed sensing measurement matrix in [32]. If the entries of are i.i.d. with zero mean and nonvanishing second moments, then is a tight random $g$-frame of order with . For instance, if is the Rademacher sequence, that is, entries are independent and equal to with probability , then is tight of order but not of order in general. It is well known that the discrete Fourier matrices diagonalize circulant matrices, so that fast matrix-vector multiplications are available. In fact, the terms “filter bank” and “filtering” are usually associated with the application of convolution operators, so that each channel corresponds to a circulant matrix with potentially some subsampling involved.
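The diagonalization by the discrete Fourier matrix is what enables the fast matrix-vector product. A minimal sketch of our own compares the explicit circulant product with the FFT route:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8
c = rng.standard_normal(d)
# Explicit circulant matrix: column k is the k-fold cyclic shift of c
C = np.column_stack([np.roll(c, k) for k in range(d)])

x = rng.standard_normal(d)
# Fast matvec via the diagonalization C = F^{-1} diag(F c) F, i.e., the
# convolution theorem: C x is the circular convolution of c and x
y_fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real
assert np.allclose(C @ x, y_fast)
```

The FFT route costs $O(d \log d)$ instead of $O(d^2)$, which is precisely the fast algorithmic scheme the construction of Section 3 preserves in each channel.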

Remark 22. Samples of all of the above examples satisfy the conditions (i)–(iv) with high probability for sufficiently large sample size. The circulant matrices represent convolution operators and hence correspond to a proper filter bank scheme. They enable fast matrix vector multiplications; hence, the circulant samples in Example 21 are indeed suitable for our construction in Section 3 that preserves this fast algorithmic scheme using the filter bank shown in Figure 1. Each channel corresponds to filtering, but we require one additional linear operator for pre- and postmultiplication.

It must be mentioned that filter banks usually involve some subsampling. Let the matrix , , be a random matrix with a single one in each column, whose position is chosen independently at random in a uniform fashion. Then each matrix of the sample corresponds to a sampling operator, so that we derive samplings of length . Indeed, is a tight random -frame, but it may not satisfy all other conditions in Theorem 7. Nonetheless, subsampling operators in a filter bank are used in combination with more sophisticated filters, say , so that it is possible that the conditions are satisfied by .

5. Closeness to the Original -Frame

To relate the algorithm of the previous section to the Paulsen problem, we would need estimates on the distance between the original and the resulting -frame. In particular, if the original unit norm -frame is close to being tight, then we aim to verify that the computed unit norm tight -frame is nearby. We do not derive any estimates for fixed but will provide some framework for random samples that supports such intuition.

Theorem 23. Let be a random matrix continuously distributed on the set of matrices in and an associated i.i.d. sample with being the corresponding limit of the iterative algorithm (10). Then converges almost surely towards some positive definite , so that satisfies

Before we provide the proof, let us add some discussion. As in [14], we observe that results of the previous section applied to a continuously distributed random matrix yield that (51) has a solution among the symmetric positive definite matrices and is unique up to multiplication by a positive constant.

For elliptical distributions, (51) has a very special meaning. Here, we call a probability distribution on elliptical if it has a density with respect to the standard volume element on and where is hermitian positive definite, , and is some nonnegative function not dependent on and with . For instance, the Gaussian random matrix in Example 18 is elliptically distributed. A direct computation yields that the matrix of an elliptically distributed random matrix with satisfies (51). For simplicity, we will restrict ourselves to the case and point out that general can be handled in a similar fashion; see [14] for .

Theorem 23 directly implies the following.

Corollary 24. If is elliptically distributed with and is a multiple of the identity, then, for any sample , the associated matrices , for , converge towards .

To verify Theorem 23, we follow the ideas in [14], where was considered, so we need some notation and two lemmas. Let us define and we denote and .

Lemma 25. Let be a compact set of positive definite matrices with implying and , with some fixed . Then holds almost surely.

Lemma 26. The hermitian positive definite matrix is a critical value of if and only if .

Proof of Theorem 23. Simple arithmetic yields that implies , , for all positive definite matrices . Furthermore, or if and only if or , respectively.
As mentioned above, (51) has a solution among the symmetric positive definite matrices and is unique up to multiplication by a positive constant; see also [14]. Without loss of generality, we can assume that , which implies . Choose as in Lemma 25 with being contained in its interior. For all with , we must have . Since is continuous, Lemma 25 yields that, for any on the boundary of , we have with probability one if is sufficiently large.
Note that , so that Lemmas 5 and 26 imply that is eventually contained in for sufficiently large . Since can be chosen arbitrarily small, it follows that almost surely, which concludes the proof.

It remains to prove the two Lemmas 25 and 26.

Proof of Lemma 25. We follow [14, Proof of Statement ]. For , we define As already mentioned in [14] for , is equicontinuous on meaning that, for , there is not dependent on nor on , , such that implies . Next, the same covering argument for as in [14] used with the equicontinuity and the strong law of large numbers implies (54). We omit the details.

Proof of Lemma 26. We can simply follow the lines of [14, Proof of Statement ], where is discussed. A first order expansion of with the frame property and Kantorovich’s inequality yields Lemma 26. No new ideas are involved when dealing with , so we refer to [14] for the details.

6. Some Concluding Remarks

For some signal processing tasks, the most attractive filter bank schemes are those that provide perfect reconstruction, whose synthesis is the adjoint of the analysis part (so-called unitary filter banks), and whose filters have equal norm. Tight fusion frames, for instance, correspond to perfect reconstruction filter banks in which each channel corresponds to an orthogonal projection, and it was verified in [9] that robustness of tight fusion frames against distortions and erasures is maximized when the tight fusion frame has equal norm elements. Our aim was to turn a given filter bank into such a more attractive scheme while preserving the essential features of the original filtering process. In terms of frames, we turned a given generalized frame into a tight $g$-frame with unit norm by rescaling and then applying the inverse square root of the new $g$-frame operator. Due to our special focus on filter banks, we started with a generalized frame consisting of convolution operators, hence allowing for fast matrix-vector multiplications. Through an iterative scheme, we constructed a generalized tight frame with unit norm, which induces a filter bank that preserves the convolution structure, hence the fast algorithmic scheme, in each channel. Only one additional global pre- and postmultiplication is necessary. Naturally, the application of this extra operator needs special care because it may be structured but is not exactly a convolution operator.

We observed that the assumptions of our algorithm are satisfied by any sufficiently large sample drawn from any continuous distribution or drawn from random convolution operators. Fields of application are filter banks, in which the additional computation costs of the application of or , respectively, can be tolerated, as, for instance, when the number of channels is large or when computations are completely offline.

Our findings provide a tool to design new filter banks with improved properties on a theoretical level. Substantial numerical verification goes beyond the scope of the present paper and will be provided in future work. We hope that our theoretical findings can provide the basis for its use in more elaborate signal processing methods.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The author has been funded by the Vienna Science and Technology Fund (WWTF) through Project VRG12-009.