Abstract

Prony type methods are used in many engineering applications to determine the exponential fit corresponding to a dataset. In this paper we study a variant of Prony's method that was used by Martín-Landrove et al. in a process of segmentation of $T_2$-weighted MRI brain images. We show the equivalence between that method and the classical Prony method and study the stability of the computed solutions with respect to noise in the data set. In particular, we show that the relative error in the calculation of the exponential fit parameters is linear with respect to the noise in the data. Our analysis is based on classical results from linear algebra, matrix computation theory, and the theory of stability for roots of polynomials.

1. Introduction

In this paper we consider the problem of recovering the parameters $n$, $a_j$, and $\lambda_j$, $j = 1, \ldots, n$, from a set of measurements $y_k$, with $k = 0, 1, \ldots, m$, $m \ge 2n$, and a given exponential model

\[ y(t) = a_0 + \sum_{j=1}^{n} a_j\,e^{-\lambda_j t}. \tag{1.1} \]

The measurement $y_k$ corresponds to the value of expression (1.1) for $t = k\,\Delta t$, $k = 0, \ldots, m$. Hence we get the set of equations

\[ y_k = a_0 + \sum_{j=1}^{n} a_j\,e^{-\lambda_j k \Delta t}, \quad k = 0, \ldots, m. \tag{1.2} \]

This kind of exponential fit appears in the case of $T_2$-weighted magnetic resonance images of the brain. $T_2$ is the transverse relaxation time in the process of magnetic resonance; it measures how long the transverse magnetization lasts in a uniform external magnetic field. Two parameters, TR (repetition time) and TE (echo time), characterize the acquisition of the signal; $T_2$-weighted refers to a signal produced with long TR and TE.
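To fix notation for what follows, here is a minimal Python sketch (ours, with illustrative parameter values not taken from the paper) that evaluates the model (1.1) on a uniform grid and produces a vector of noiseless measurements:

```python
import numpy as np

def exponential_model(t, a0, a, lam):
    """Evaluate y(t) = a0 + sum_j a_j * exp(-lambda_j * t), i.e., model (1.1)."""
    t = np.asarray(t, dtype=float)
    return a0 + sum(aj * np.exp(-lj * t) for aj, lj in zip(a, lam))

# Illustrative parameters (hypothetical, chosen only for the example).
a0, a, lam = 0.05, [0.6, 0.35], [1.2, 4.0]   # linear and nonlinear parameters
dt, m = 0.1, 20                              # sampling step and number of samples
t = dt * np.arange(m + 1)
y = exponential_model(t, a0, a, lam)         # noiseless measurements y_0, ..., y_m
```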

The exponents $\lambda_j$ are the nonlinear parameters and correspond to the relaxation rates associated with the different tissues present in the images. The coefficients $a_j$ are the linear parameters and are related to the noise and the proportions of those tissues in the given images. In previous works, Martín-Landrove et al. [1, 2] considered a variant of the Prony method to recover the relaxation rates, the noise, and the proportions. Paluszny et al. [3] compared that variant of the Prony method with the Variable Projection method of Golub and Pereyra [4–6]. The method proved to be reliable for low noise levels and a small number of exponential terms. The Variable Projection method proved to be more robust in the presence of larger noise levels, but required ten times more computational time to get results comparable to those of the variant of the Prony method. In this paper we first show that the method proposed by Martín-Landrove et al. is equivalent to the Prony method described by Kahn et al. in [7], and then we study the behavior of the linear systems and the conditioning of the polynomial roots that have to be computed to obtain the model parameters using the method.

2. Prony Method

In the model formulation (1.1) set $\mu_j = e^{-\lambda_j \Delta t}$, $j = 1, \ldots, n$, and $\mu_0 = 1$, and write

\[ y_k = \sum_{j=0}^{n} a_j\,\mu_j^k, \quad k = 0, \ldots, m. \tag{2.1} \]

The Prony type methods, also known as polynomial methods, use the fact that $y_k$ satisfies a difference equation of the form

\[ \sum_{i=0}^{n+1} \phi_i\,E^i y_k = 0, \tag{2.2} \]

where $E$ is the forward shift operator,

\[ E\,y_k = y_{k+1}, \tag{2.3} \]

and the values $\mu_j$ are the roots of the polynomial

\[ \phi(\mu) = \sum_{i=0}^{n+1} \phi_i\,\mu^i, \tag{2.4} \]

which is called the characteristic polynomial of the associated difference equation (2.2). Evaluating (2.2) for $k = 0, \ldots, m-n-1$, we get the following set of linear equations:

\[ \sum_{i=0}^{n+1} \phi_i\,y_{k+i} = 0, \quad k = 0, \ldots, m-n-1. \tag{2.5} \]

Let $\phi = (\phi_0, \ldots, \phi_{n+1})^T$, and

\[ Y = \begin{pmatrix} y_0 & y_1 & \cdots & y_{n+1} \\ y_1 & y_2 & \cdots & y_{n+2} \\ \vdots & & & \vdots \\ y_{m-n-1} & y_{m-n} & \cdots & y_m \end{pmatrix}. \tag{2.6} \]

Then the above system of linear equations can be rewritten in matrix form as

\[ Y\phi = 0. \tag{2.7} \]

Alternatively, if $H_{n+2}(y)$ is the Hankel matrix

\[ \bigl[H_{n+2}(y)\bigr]_{k+1,\,i+1} = y_{k+i}, \quad k = 0, \ldots, m-n-1, \;\; i = 0, \ldots, n+1, \tag{2.8} \]

then

\[ Y = H_{n+2}(y). \tag{2.9} \]

If there is no noise in the observation data, then $\operatorname{rank}(Y) = n+1$ and the coefficients $\phi_i$ can be determined from the equivalent system of equations

\[ \sum_{i=0}^{n} \phi_i\,y_{k+i} = -\,y_{k+n+1}, \quad k = 0, \ldots, m-n-1, \tag{2.10} \]

where $\phi_{n+1} = 1$. Then the $\mu_j$ are computed as the roots of $\phi$ and finally

\[ \lambda_j = -\frac{\ln \mu_j}{\Delta t}, \quad j = 1, \ldots, n. \tag{2.11} \]

In the presence of noisy data, instead of solving the system (2.10), we consider the nonlinear optimization problem

\[ \min_{\phi}\;\|Y\phi\|_2^2. \tag{2.12} \]

In order to obtain a nontrivial solution, it is necessary to impose restrictions on the parameters $\phi_i$; each choice of restrictions characterizes a particular version of the Prony method. For example, the modified Prony method described in Osborne and Smyth [8, 9] uses

\[ \min_{\|\phi\|_2 = 1}\;\phi^T Y^T \bigl[C(\phi)\,C(\phi)^T\bigr]^{+}\,Y\phi, \tag{2.13} \]

where $C(\phi)$ is the banded matrix generated by the coefficients of $\phi$ and $[C(\phi)C(\phi)^T]^{+}$ is the Moore–Penrose generalized inverse of $C(\phi)C(\phi)^T$. Osborne et al. considered $C(\phi)C(\phi)^T$ to be full rank, hence $[C(\phi)C(\phi)^T]^{+} = [C(\phi)C(\phi)^T]^{-1}$.
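As a concrete illustration of (2.10) and (2.11), here is a minimal sketch of this classical Prony step, continuing the previous code; it assumes the Hankel structure written above and is not the authors' implementation.

```python
def prony(y, n):
    """Solve the monic Hankel system (2.10) by least squares and return the
    roots mu_j of the characteristic polynomial; one of the n + 1 roots
    should be close to mu_0 = 1, which accounts for the constant term a_0."""
    y = np.asarray(y, dtype=float)
    p = n + 1                                  # polynomial degree
    rows = len(y) - p                          # equations k = 0, ..., m - n - 1
    A = np.array([y[k:k + p] for k in range(rows)])
    rhs = -y[p:p + rows]
    phi = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return np.roots(np.concatenate(([1.0], phi[::-1])))
```

For the noiseless data of the previous sketch, `prony(y, 2)` returns roots close to $\{1,\, e^{-0.12},\, e^{-0.40}\}$, after which (2.11) recovers $\lambda_1$ and $\lambda_2$.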

We consider a different optimization problem, which appears in the literature, to compare with the Martín-Landrove method:

\[ \min_{\phi}\;\|Y\phi\|_2 \quad \text{subject to} \quad \phi_{n+1} = 1, \;\; \phi(1) = 0. \tag{2.14} \]

The above methods and others, such as the classical Prony method, Pisarenko's method, and the linear predictor method, are described in [7–11]. Once the nonlinear parameters have been found, the linear ones are computed as the least squares solution of the linear system obtained by replacing the nonlinear parameters in (1.2): a separation of variables approach.
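This separation step is an ordinary linear least squares problem. Continuing the sketch started in the introduction (helper name ours):

```python
def linear_parameters(t, y, lam):
    """Recover (a_0, a_1, ..., a_n) as the least squares solution of the
    linear system (1.2), once the nonlinear parameters lambda_j are known."""
    t = np.asarray(t, dtype=float)
    V = np.column_stack([np.ones_like(t)] +
                        [np.exp(-lj * t) for lj in lam])
    return np.linalg.lstsq(V, y, rcond=None)[0]
```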

3. An Alternative Formulation of the Prony Method

To simplify the explanation let us consider the case $n = 2$ and measurements $y_0, \ldots, y_m$. Then, the system of equations (1.2) can be written as

\[ y_k = a_0 + a_1 e^{-\lambda_1 k\Delta t} + a_2 e^{-\lambda_2 k\Delta t}, \quad k = 0, \ldots, m. \tag{3.1} \]

From (3.1) and defining

\[ u_j = e^{-\lambda_j \Delta t}, \quad j = 1, 2, \tag{3.2} \]

we get

\[ y_k = a_0 + a_1 u_1^k + a_2 u_2^k, \quad k = 0, \ldots, m. \tag{3.3} \]

The dimension of the system can be reduced by using the transformation $z_k = y_{k+1} - y_k$, which eliminates $a_0$, to get

\[ z_k = a_1(u_1 - 1)\,u_1^k + a_2(u_2 - 1)\,u_2^k, \quad k = 0, \ldots, m-1. \tag{3.4} \]

Then

\[ z_k = b_1 u_1^k + b_2 u_2^k, \qquad b_j = a_j(u_j - 1). \tag{3.5} \]

We now apply the transformation $w_k = z_{k+1} - u_1 z_k$, and the new set can be written as

\[ w_k = b_2\,(u_2 - u_1)\,u_2^k, \quad k = 0, \ldots, m-2. \tag{3.6} \]

The equations above are equivalent to

\[ w_{k+1} - u_2\,w_k = 0, \quad k = 0, \ldots, m-3. \tag{3.7} \]

Finally we use the transformation $s_1 = u_1 + u_2$, $s_2 = u_1 u_2$, observing that

\[ w_{k+1} - u_2 w_k = z_{k+2} - (u_1 + u_2)\,z_{k+1} + u_1 u_2\,z_k. \tag{3.8} \]

At this final step we get the following system:

\[ z_{k+2} = s_1\,z_{k+1} - s_2\,z_k, \quad k = 0, \ldots, m-3. \tag{3.9} \]

In matrix notation we have

\[ M s = b, \qquad M = \begin{pmatrix} z_1 & -z_0 \\ z_2 & -z_1 \\ \vdots & \vdots \\ z_{m-2} & -z_{m-3} \end{pmatrix}, \quad s = \begin{pmatrix} s_1 \\ s_2 \end{pmatrix}, \quad b = \begin{pmatrix} z_2 \\ z_3 \\ \vdots \\ z_{m-1} \end{pmatrix}. \tag{3.10} \]

If the data values are noiseless, the values $s_1$ and $s_2$ may be obtained from the previous system of equations and the $u_j$ are the roots of the polynomial $q(u) = u^2 - s_1 u + s_2$. Once the roots have been computed, the nonlinear parameters can be calculated using (2.11), and likewise the linear parameters, as stated before.

In general, for an arbitrary $n$, let $M$ be the $(m-n) \times n$ matrix

\[ M = \begin{pmatrix} z_{n-1} & -z_{n-2} & \cdots & (-1)^{n+1} z_0 \\ z_n & -z_{n-1} & \cdots & (-1)^{n+1} z_1 \\ \vdots & & & \vdots \\ z_{m-2} & -z_{m-3} & \cdots & (-1)^{n+1} z_{m-n-1} \end{pmatrix}. \tag{3.11} \]

In this case the coefficients of the polynomial are the symmetric functions of $u_1, \ldots, u_n$ defined as

\[ s_i = \sum_{1 \le j_1 < j_2 < \cdots < j_i \le n} u_{j_1} u_{j_2} \cdots u_{j_i}, \quad i = 1, \ldots, n. \tag{3.12} \]

These coefficients are determined by the solution of the system

\[ M s = b, \qquad s = (s_1, \ldots, s_n)^T, \quad b = (z_n, z_{n+1}, \ldots, z_{m-1})^T. \tag{3.13} \]

Finally, the $u_j$ are the roots of the polynomial

\[ q(u) = u^n - s_1 u^{n-1} + s_2 u^{n-2} - \cdots + (-1)^n s_n. \tag{3.14} \]

Next we will study the relationship between the solution obtained by the procedure described above and the Prony method described in Section 2.
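Before turning to that relationship, here is a compact sketch of the procedure just described, continuing the previous code; the indexing follows (3.11)–(3.14) as reconstructed above, and the helper name `alternative_prony` is ours.

```python
def alternative_prony(y, n, dt):
    """Alternative formulation: difference the data to remove a_0, solve
    (3.13) for the symmetric functions s_i by least squares, and take the
    roots of the polynomial (3.14)."""
    z = np.diff(np.asarray(y, dtype=float))        # z_k = y_{k+1} - y_k
    rows = len(z) - n                              # equations k = 0, ..., m-1-n
    M = np.array([[(-1) ** (i + 1) * z[k + n - i]
                   for i in range(1, n + 1)] for k in range(rows)])
    b = z[n:]                                      # right-hand side, b_k = z_{k+n}
    s, *_ = np.linalg.lstsq(M, b, rcond=None)
    # q(u) = u^n - s_1 u^{n-1} + ... + (-1)^n s_n, coefficients highest degree first
    coeffs = np.concatenate(([1.0], (-1.0) ** np.arange(1, n + 1) * s))
    u = np.roots(coeffs).real                      # noiseless roots are real, in (0, 1)
    return -np.log(u) / dt                         # nonlinear parameters, via (2.11)
```

For the noiseless data of the first sketch, `alternative_prony(y, 2, dt)` recovers $\lambda_1 = 1.2$ and $\lambda_2 = 4.0$ up to rounding.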

Theorem 3.1. Let $T$ be the $(n+2) \times (n+1)$ matrix defined as follows: for $i = 1, \ldots, n+1$ set

\[ T_{i,i} = -1, \tag{3.15} \]

and for $i = 2, \ldots, n+2$,

\[ T_{i,i-1} = 1, \tag{3.16} \]

with all other entries equal to zero, so that if $c$ is the coefficient vector of a polynomial $p$ of degree $n$, then $Tc$ is the coefficient vector of $(\mu - 1)\,p(\mu)$. In addition, let $\phi$ and $q$ be the polynomials defined in (2.4) and (3.14), respectively. If $\hat\phi$ is the solution of the optimization problem (2.14), then $\hat\phi$ satisfies

\[ \hat\phi = T\,\hat c, \qquad \hat c = \bigl((-1)^n \hat s_n,\,(-1)^{n-1}\hat s_{n-1},\,\ldots,\,-\hat s_1,\,1\bigr)^T, \tag{3.17} \]

for a unique vector $\hat s = (\hat s_1, \ldots, \hat s_n)^T$. Moreover,

\[ \|Y\hat\phi\|_2 = \|M\hat s - b\|_2. \tag{3.18} \]

Proof. The solution of (2.14) satisfies $\hat\phi_{n+1} = 1$ and $\hat\phi(1) = 0$. In the case we are considering, $\mu = 1$ is a root of $\hat\phi$, which implies that

\[ \hat\phi(\mu) = (\mu - 1)\,\hat q(\mu), \tag{3.19} \]

with $\hat q$ monic of degree $n$. Then we have

\[ \hat\phi = T\,\hat c, \tag{3.20} \]

where $\hat c$ is the coefficient vector of $\hat q$, written in terms of $\hat s$ as in (3.17). Then, using (2.9), it follows that

\[ (Y\hat\phi)_k = (Y T \hat c)_k = \sum_{i=0}^{n} \hat c_i\,z_{k+i} = -\,(M\hat s - b)_k, \quad k = 0, \ldots, m-n-1. \tag{3.21} \]

Now, for the polynomial $\hat q$ we have

\[ \|Y\hat\phi\|_2 = \|M\hat s - b\|_2. \tag{3.22} \]

Theorem 3.2. Let us suppose that there exists a unique solution to the optimization problem (2.14). A vector $\hat\phi$ is the solution of the optimization problem (2.14) if and only if the vector $\hat s$ associated with it through (3.17) is the least squares solution of the linear equation (3.13).

Proof. Let $\hat\phi$ be the solution of the optimization problem (2.14), let $\hat s$ be the vector associated with $\hat\phi$ through (3.17), and let $s^*$ be the least squares solution of the linear system (3.13). From Theorem 3.1 it follows that

\[ \|Y\hat\phi\|_2 = \|M\hat s - b\|_2. \tag{3.23} \]

Let us consider $\phi^*$ given by

\[ \phi^* = T\,c^*, \tag{3.24} \]

and $c^*$ defined as

\[ c^* = \bigl((-1)^n s_n^*, \ldots, -s_1^*, 1\bigr)^T. \tag{3.25} \]

By Theorem 3.1 we have

\[ \|Y\phi^*\|_2 = \|M s^* - b\|_2 \le \|M \hat s - b\|_2 = \|Y\hat\phi\|_2, \tag{3.26} \]

and therefore $\phi^*$ is also a solution of the optimization problem. By hypothesis $\hat\phi$ is the only solution of the optimization problem, thus $\phi^* = \hat\phi$, which implies that $c^* = \hat c$. Now we have $\hat s = s^*$ and thus $\hat s$ is the least squares solution of (3.13).
In a similar way we can prove that $\phi^*$ in (3.24) is the solution of the optimization problem (2.14) whenever $s^*$ is the least squares solution of the linear equation (3.13).

4. Stability of the Alternative Prony Method

In (3.13), the arrays $M$ and $b$ depend on the vector of measurements $y = (y_0, \ldots, y_m)^T$. In the presence of noisy data the equation that really holds is

\[ (M + \delta M)(s + \delta s) = b + \delta b, \tag{4.1} \]

where $\delta M$ and $\delta b$ depend on the level of noise. Then it is necessary to determine the condition number of the matrix $M$ in order to see the accuracy in the coefficients given by $s + \delta s$. Let us consider (1.2) for $m = 2n$. In this situation $M$ is a nonsingular square matrix of dimension $n$. In the following lemma we establish a factorization of the matrix $M$ that will be used in the stability analysis of the system (3.13). Our analysis is similar to the one described in [12].

Lemma 4.1. Let us consider $m = 2n$. Let $P$ and $Q$ be the matrices defined by

\[ P_{k,j} = u_j^{\,k-1}, \qquad Q_{j,i} = (-1)^{i+1}\,u_j^{\,1-i}, \qquad k, i, j = 1, \ldots, n, \tag{4.2} \]

and let $D$ be the diagonal matrix given by

\[ D = \operatorname{diag}(d_1, \ldots, d_n), \qquad d_j = a_j\,(u_j - 1)\,u_j^{\,n-1}. \tag{4.3} \]

Then

\[ M = P\,D\,Q. \tag{4.4} \]

Proof. By definition, $z_k = \sum_{j=1}^{n} a_j (u_j - 1)\,u_j^{\,k}$ for $k = 0, \ldots, 2n-1$. Then

\[ M_{k,i} = (-1)^{i+1} z_{\,n+k-1-i} = \sum_{j=1}^{n} u_j^{\,k-1}\;d_j\;(-1)^{i+1} u_j^{\,1-i} = \sum_{j=1}^{n} P_{k,j}\,d_j\,Q_{j,i}. \tag{4.5} \]

Thus $M = P D Q$.

Lemma 4.2. Let us consider $m = 2n$. Let $\kappa_\infty(M)$, $\kappa_\infty(P)$, and $\kappa_\infty(Q)$ be the condition numbers in the infinity norm of the matrices $M$, $P$, and $Q$, respectively. Then

\[ \kappa_\infty(M) \le \kappa_\infty(P)\,\kappa_\infty(Q)\,\frac{\max_{j} |d_j|}{\min_{j} |d_j|}. \tag{4.6} \]

Proof. Let $\kappa_\infty(D)$ be the condition number in the infinity norm of the matrix $D$. Using Lemma 4.1 it follows that

\[ \kappa_\infty(M) = \|P D Q\|_\infty\,\|Q^{-1} D^{-1} P^{-1}\|_\infty \le \kappa_\infty(P)\,\kappa_\infty(D)\,\kappa_\infty(Q). \tag{4.7} \]

To get (4.6) it is enough to observe that

\[ \kappa_\infty(D) = \frac{\max_{j} |d_j|}{\min_{j} |d_j|}. \tag{4.8} \]
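The factorization is easy to check numerically. The following sketch, continuing the previous code, assumes the entry conventions for $M$, $P$, $D$, and $Q$ reconstructed above (with an illustrative $a_0$); it builds $M$ from noiseless data with $m = 2n$ and compares $\kappa_\infty(M)$ with the product bound of Lemma 4.2.

```python
def check_factorization(a, lam, dt, a0=0.05):
    """Numerical check of Lemmas 4.1 and 4.2: verify M = P D Q and compare
    kappa_inf(M) with kappa_inf(P) * kappa_inf(D) * kappa_inf(Q)."""
    n = len(lam)
    u = np.exp(-np.asarray(lam, dtype=float) * dt)       # u_j = exp(-lambda_j dt)
    kk = np.arange(2 * n + 1)                            # sample indices, m = 2n
    y = a0 + sum(aj * uj ** kk for aj, uj in zip(a, u))  # noiseless samples
    z = np.diff(y)                                       # z_k = y_{k+1} - y_k
    # M_{k,i} = (-1)^{i+1} z_{n+k-1-i},  k, i = 1, ..., n
    M = np.array([[(-1) ** (i + 1) * z[n + k - 1 - i]
                   for i in range(1, n + 1)] for k in range(1, n + 1)])
    P = np.vander(u, n, increasing=True).T               # P_{k,j} = u_j^{k-1}
    Q = np.array([[(-1) ** (i + 1) * uj ** (1 - i)
                   for i in range(1, n + 1)] for uj in u])
    D = np.diag([aj * (uj - 1) * uj ** (n - 1) for aj, uj in zip(a, u)])
    assert np.allclose(M, P @ D @ Q)                     # Lemma 4.1
    kappa = lambda A: np.linalg.cond(A, np.inf)
    return kappa(M), kappa(P) * kappa(D) * kappa(Q)      # Lemma 4.2: first <= second
```

For closely spaced $\lambda_j$ both returned numbers grow rapidly, which anticipates the discussion at the end of this section.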

The following theorem establishes an estimate for the variation of the solution vector $s$ in (3.13), in terms of the noise level in the vector of measurements $y$.

Theorem 4.3. Let $y = (y_0, \ldots, y_{2n})^T$, $\hat y = y + \delta y$, $\varepsilon = \|\delta y\|_\infty$, and let $\hat M = M + \delta M$, $\hat b = b + \delta b$ be the arrays in (3.13) computed from $\hat y$. Set

\[ \eta = \max\left\{ \frac{2n}{\|M\|_\infty},\;\frac{2}{\|b\|_\infty} \right\}. \tag{4.9} \]

Let $\varepsilon$ be such that

\[ \eta\,\kappa_\infty(M)\,\varepsilon \le \frac{1}{2}. \tag{4.10} \]

If $M$ is nonsingular, then the variation $\delta s$ in the solution of the perturbed system

\[ \hat M\,(s + \delta s) = \hat b \tag{4.11} \]

satisfies

\[ \frac{\|\delta s\|_\infty}{\|s\|_\infty} \le 4\,\eta\,\kappa_\infty(M)\,\varepsilon. \tag{4.12} \]

Proof. Let $\delta z_k = \delta y_{k+1} - \delta y_k$. If $\varepsilon = \|\delta y\|_\infty$, then $|\delta z_k| \le 2\varepsilon$, and every entry of $\delta M$ is, up to sign, one of the $\delta z_k$. Therefore

\[ \|\delta M\|_\infty \le 2n\,\varepsilon = \frac{2n\,\varepsilon}{\|M\|_\infty}\,\|M\|_\infty. \tag{4.13} \]

In a similar way we get

\[ \|\delta b\|_\infty \le 2\,\varepsilon = \frac{2\,\varepsilon}{\|b\|_\infty}\,\|b\|_\infty. \tag{4.14} \]

By using (4.6) we see that $M = PDQ$ is nonsingular, and by (4.10)

\[ r = \eta\,\kappa_\infty(M)\,\varepsilon \le \frac{1}{2} < 1. \tag{4.15} \]

From (4.13), (4.14), and (4.15), and using Theorem 2.7.2 in [13], it follows that

\[ \frac{\|\delta s\|_\infty}{\|s\|_\infty} \le \frac{2\,\eta\,\kappa_\infty(M)}{1 - r}\,\varepsilon \le 4\,\eta\,\kappa_\infty(M)\,\varepsilon. \tag{4.16} \]

Once we have estimated the vector $\hat s = s + \delta s$, it remains to consider the corresponding impact in the calculation of the roots of the polynomial

\[ \hat q(u) = u^n - \hat s_1 u^{n-1} + \hat s_2 u^{n-2} - \cdots + (-1)^n \hat s_n. \tag{4.17} \]

Let $u_j$ be one of the roots of the polynomial $q$ and suppose that $\hat u_j$ is the root of the polynomial $\hat q$ closest to $u_j$. The following is an estimate for $|u_j - \hat u_j|$ which follows from the theory of stability for simple roots of polynomials; see [14].

Lemma 4.4. Let $u_j$ be a simple root of $q$, let $\rho_j = \sum_{i=0}^{n-1} |u_j|^{\,i}$, and let $\delta s = \hat s - s$ be such that

\[ \frac{2\,\rho_j}{|q'(u_j)|}\,\|\delta s\|_\infty \le \frac{1}{2}\,\min_{k \neq j} |u_j - u_k|. \tag{4.18} \]

Then there exists a root $\hat u_j$ of $\hat q$ such that

\[ |u_j - \hat u_j| \le \frac{2\,\rho_j}{|q'(u_j)|}\,\|\delta s\|_\infty. \tag{4.19} \]
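The lemma is easy to probe numerically: perturb the coefficients $s_i$, recompute the roots, and compare the observed displacement of each root with the first-order bound, in the form reconstructed above. The helper below continues the previous sketches and is ours, for illustration only.

```python
def root_shift(s, ds):
    """Perturb the coefficients of q(u) = u^n - s_1 u^{n-1} + ... + (-1)^n s_n
    and compare each root's displacement with the bound of Lemma 4.4."""
    n = len(s)
    signs = (-1.0) ** np.arange(1, n + 1)
    coeffs = np.concatenate(([1.0], signs * np.asarray(s)))
    coeffs_hat = np.concatenate(([1.0], signs * (np.asarray(s) + np.asarray(ds))))
    roots, roots_hat = np.roots(coeffs), np.roots(coeffs_hat)
    dq = np.polyder(coeffs)                          # q'(u), highest degree first
    for u in map(complex, roots):
        shift = np.min(np.abs(roots_hat - u))        # distance to the closest root
        rho = np.sum(np.abs(u) ** np.arange(n))      # 1 + |u| + ... + |u|^{n-1}
        bound = 2 * rho * np.max(np.abs(ds)) / abs(np.polyval(dq, u))
        print(f"root {u:.4f}: shift {shift:.2e}, bound {bound:.2e}")

# Example: q(u) = (u - 0.9)(u - 0.6), i.e., s = (1.5, 0.54), small perturbation:
# root_shift([1.5, 0.54], [1e-4, -1e-4])
```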

From (4.12) we see that there is a constant $c_1 = 4\,\eta\,\kappa_\infty(M)\,\|s\|_\infty$ such that

\[ \|\delta s\|_\infty \le c_1\,\varepsilon. \tag{4.20} \]

Thus, if $\varepsilon$ in Theorem 4.3 is sufficiently small such that

\[ \frac{2\,\rho_j\,c_1}{|q'(u_j)|}\,\varepsilon \le \frac{1}{2}\,\min_{k \neq j} |u_j - u_k|, \tag{4.21} \]

then the conditions of Lemma 4.4 are satisfied and therefore

\[ |u_j - \hat u_j| \le \frac{2\,\rho_j\,c_1}{|q'(u_j)|}\,\varepsilon = c_2\,\varepsilon. \tag{4.22} \]

With the computed $\hat u_j$, $j = 1, \ldots, n$, we should analyze the least squares solution of the perturbed system

\[ (V + \delta V)\,\hat a \cong y + \delta y, \tag{4.23} \]

which will be written as $\hat V \hat a \cong \hat y$; here $V$ is the matrix of the linear system (1.2), with entries $V_{k+1,1} = 1$ and $V_{k+1,j+1} = e^{-\lambda_j k\Delta t}$, and $\hat V$ is built in the same way from the computed nonlinear parameters. The following theorem provides an estimate of $\|\hat a - a\|_2/\|a\|_2$, $\hat a$ being the solution of the perturbed system. Note that the following result is the first one in this paper where we require the nonlinear parameters in the model (1.1) to be positive.

Theorem 4.5. Let $y$, $\delta y$, $\varepsilon$, and $M$ be as in the statement of Theorem 4.3, with $\lambda_j > 0$ for $j = 1, \ldots, n$. Let $\kappa_2(V)$ be the condition number of the matrix $V$ in the Euclidean norm and consider

\[ \hat\varepsilon = \max\left\{ \frac{\|\delta V\|_2}{\|V\|_2},\;\frac{\|\delta y\|_2}{\|y\|_2} \right\}. \tag{4.24} \]

Let us suppose that $\varepsilon$ satisfies (4.10) and (4.21) and let $C$ be a constant, independent of $\varepsilon$, such that

\[ \hat\varepsilon \le C\,\varepsilon < \frac{1}{\kappa_2(V)}. \tag{4.25} \]

Then

\[ \frac{\|\hat a - a\|_2}{\|a\|_2} \le C\,\varepsilon\left( \frac{2\,\kappa_2(V)}{\cos\theta} + \tan\theta\;\kappa_2(V)^2 \right) + O(\varepsilon^2), \tag{4.26} \]

where $\sin\theta = \|y - P_V\,y\|_2 / \|y\|_2$ and $P_V\,y$ is the orthogonal projection of the vector $y$ on the subspace spanned by the columns of $V$.

Proof. Let $\delta V = \hat V - V$. Using the mean value theorem we see that

\[ \hat\lambda_j - \lambda_j = -\frac{1}{\Delta t}\bigl(\ln \hat u_j - \ln u_j\bigr) = -\frac{1}{\Delta t}\,\frac{\hat u_j - u_j}{\xi_j}, \tag{4.27} \]

with $\xi_j$ between $u_j$ and $\hat u_j$, and $\xi_j > 0$. Then

\[ \bigl|\delta V_{k+1,\,j+1}\bigr| = \bigl|e^{-\hat\lambda_j k\Delta t} - e^{-\lambda_j k\Delta t}\bigr| \le k\,\Delta t\;e^{-\zeta_j k\Delta t}\,\bigl|\hat\lambda_j - \lambda_j\bigr| \le k\,\Delta t\,\bigl|\hat\lambda_j - \lambda_j\bigr|, \tag{4.28} \]

with $\zeta_j$ between $\lambda_j$ and $\hat\lambda_j$; the positivity of the nonlinear parameters guarantees $e^{-\zeta_j k\Delta t} \le 1$. From (4.20) and (4.22) it follows that

\[ \bigl|\hat\lambda_j - \lambda_j\bigr| \le \frac{c_2}{\Delta t\,\min_j \xi_j}\,\varepsilon = c_3\,\varepsilon. \tag{4.29} \]

On the other hand,

\[ \frac{\|\delta V\|_2}{\|V\|_2} \le \frac{\sqrt{(2n+1)(n+1)}\;2n\,\Delta t\,c_3}{\|V\|_2}\,\varepsilon = C_V\,\varepsilon. \tag{4.30} \]

By definition,

\[ \frac{\|\delta y\|_2}{\|y\|_2} \le \frac{\sqrt{2n+1}}{\|y\|_2}\,\varepsilon = C_y\,\varepsilon, \tag{4.31} \]

and therefore

\[ \hat\varepsilon \le \max\{C_V, C_y\}\,\varepsilon = C\,\varepsilon. \tag{4.32} \]

Now (4.26) follows from (4.30), (4.31), and (4.32), by Theorem 5.3.1 in [13].

If the value of $C$ in (4.25) is given by $C = \max\{C_V, C_y\}$, as in the proof above, and the residual angle $\theta$ is small, then

\[ \frac{\|\hat a - a\|_2}{\|a\|_2} \lesssim 2\,C\,\kappa_2(V)\,\varepsilon. \tag{4.33} \]

This estimate is similar to the one obtained in [12], in which the stability of a confluent Prony system is studied. Let us suppose, without loss of generality, that $1 > u_1 > u_2 > \cdots > u_n > 0$. From [15] we see that

\[ \|P^{-1}\|_\infty = \max_{1 \le j \le n}\;\prod_{k \neq j} \frac{1 + u_k}{|u_j - u_k|}, \tag{4.34} \]

where equality holds because all the nodes $u_j$ lie on the positive real axis; since the first row of $P$ consists of ones, $\|P\|_\infty = n$. Moreover, $Q = W\Sigma$, where $W$ is the Vandermonde matrix with nodes $1/u_j$ and $\Sigma = \operatorname{diag}(1, -1, \ldots, (-1)^{n+1})$ is a diagonal matrix, therefore $\kappa_\infty(Q) = \kappa_\infty(W)$. We have

\[ \kappa_\infty(M) \le n\,\max_{1 \le j \le n} \prod_{k \neq j}\frac{1 + u_k}{|u_j - u_k|}\;\frac{\max_j |d_j|}{\min_j |d_j|}\;\kappa_\infty(W). \tag{4.35} \]

We do not know a reasonable upper bound for the condition number $\kappa_\infty(W)$.

It follows that the method is well conditioned if each one of the following conditions is satisfied: there is $\delta > 0$ such that $u_j \ge \delta$ for $j = 1, \ldots, n$; the values $u_j$ are widely separated; and the number $n$ of nonlinear parameters remains small. The proximity of two values $u_i$ and $u_j$ leads to a large value of the condition number $\kappa_\infty(P)$, and hence to the ill-conditioning of the algorithm. This characteristic is consistent with the observations of Varah [16], who remarked on the ill-conditioned behaviour of any exponential fitting algorithm whenever two of the nonlinear parameters are very close.

5. Numerical Results

In this section we present some numerical experiments to see the actual behavior of our implementation of the alternative Prony method described above. We have results for models with two and three exponentials.

Since the application which motivated our work is related to magnetic resonance brain images, we will take the linear parameters to be positive and, since they represent tissue proportions, normalized so that they sum to one; the nonlinear ones are taken in the range of relaxation rates reported for brain tissues; see [17].

Given a particular model, the data values are generated by evaluating (1.1) at $t = k\Delta t$, $k = 0, \ldots, m$, and adding noise. We have used two different kinds of noise: Gaussian and Rician.

In MRI, data are intrinsically complex valued and corrupted with zero mean Gaussian distributed noise of equal variance in each channel [18]. MR magnitude images are formed by taking, pixel by pixel, the square root of the sum of squares of two independent Gaussian random variables (the real and imaginary parts). For an MR magnitude image, the Rician probability density function of the measured pixel intensity $x$ is given by

\[ p(x \mid \nu, \sigma) = \frac{x}{\sigma^2}\,\exp\!\left(-\frac{x^2 + \nu^2}{2\sigma^2}\right) I_0\!\left(\frac{x\,\nu}{\sigma^2}\right), \quad x \ge 0, \tag{5.1} \]

where $I_0$ is the modified zeroth-order Bessel function of the first kind, $\nu$ is the underlying noise-free signal amplitude, and $\sigma$ denotes the standard deviation of the Gaussian noise in the real and imaginary parts.
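Rician data with a prescribed $\sigma$ can be simulated exactly as described: add independent zero mean Gaussian noise to the real and imaginary channels and take the magnitude. A minimal sketch, continuing the previous code (helper name ours):

```python
def add_rician_noise(signal, sigma, rng=None):
    """Corrupt a noise-free magnitude signal with Rician noise: Gaussian noise
    of standard deviation sigma in each of the two channels, then magnitude."""
    rng = np.random.default_rng(rng)
    signal = np.asarray(signal, dtype=float)
    real = signal + rng.normal(0.0, sigma, size=signal.shape)
    imag = rng.normal(0.0, sigma, size=signal.shape)
    return np.hypot(real, imag)                 # sqrt(real**2 + imag**2)
```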

The figures are log-log graphs: the horizontal axis corresponds to the noise level, and the vertical axis corresponds to the relative errors in the computation of the linear and nonlinear parameters,

\[ e_a = \frac{\|\hat a - a\|_2}{\|a\|_2}, \qquad e_\lambda = \frac{\|\hat\lambda - \lambda\|_2}{\|\lambda\|_2}. \tag{5.2} \]

For each example, we first show the results for one run corresponding to a particular value of the noise, and then we show the average value of the relative errors over one hundred runs. The Gaussian noise has zero mean, and its variance $\sigma^2$ varies over the range displayed on the horizontal axis of the figures. For the Rice noise we use (5.1), with the noise-free signal as the amplitude $\nu$ and with $\sigma$ varying over the same range, as the parameters for each simulation.
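For reference, a Monte Carlo loop of this kind can be sketched as follows, reusing `exponential_model`, `prony`, `linear_parameters`, and `add_rician_noise` from the previous sketches; the root filtering and the ad hoc cutoff are ours, not the authors'.

```python
def run_experiment(a0, a, lam, dt, m, sigma, runs=100, seed=0):
    """Average relative errors (5.2) of the linear and nonlinear parameters
    over repeated Rician-noise realizations; lam is assumed sorted ascending."""
    rng = np.random.default_rng(seed)
    t = dt * np.arange(m + 1)
    y_clean = exponential_model(t, a0, a, lam)
    n, errs = len(lam), []
    for _ in range(runs):
        y = add_rician_noise(y_clean, sigma, rng)
        mu = prony(y, n)
        # Keep the real roots in (0, 1); 0.99 is an ad hoc cutoff that
        # separates the decaying modes from the root mu_0 = 1.
        mu = np.array(sorted(r.real for r in mu
                             if abs(r.imag) < 1e-6 and 0.0 < r.real < 0.99))
        if mu.size != n:
            continue                             # skip a failed run
        lam_hat = np.sort(-np.log(mu) / dt)
        a_hat = linear_parameters(t, y, lam_hat)[1:]
        errs.append((np.linalg.norm(a_hat - a) / np.linalg.norm(a),
                     np.linalg.norm(lam_hat - lam) / np.linalg.norm(lam)))
    return np.mean(errs, axis=0)                 # (linear, nonlinear) mean errors
```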

For Figures 1, 2, 3, and 4 we used a first test model with two well separated exponential terms. Note the linear dependence of the errors with respect to the level of noise. We also noticed that, in general, the errors in the nonlinear parameters are slightly smaller than the errors in the linear parameters. In Figures 1 and 2 we show the results using Gaussian noise. Figures 3 and 4 show the results for the same model with Rice noise; as the behaviour is similar for Gaussian and Rice noise, we only consider Rice noise for the rest of the experiments.

Next we present the results for a second model, whose nonlinear parameters are closer than in the previous example. In Figures 5 and 6 the graphs show a deterioration, as the theory suggests. In this case the errors in the linear parameters are greater, by more than one order of magnitude, than the errors in the nonlinear parameters.

Finally, we use a model with three exponential terms. The results are shown in Figures 7 and 8. Again, there is a deterioration with respect to the first example, caused by the addition of one term to the exponential model.

Table 1 presents the computed condition numbers of the relevant matrices for each one of the three examples.

6. Conclusions

In this work we have developed a stability analysis for the alternative formulation of the Prony method presented by Martín-Landrove et al. [1, 2]. The analysis shows a linear dependence between the perturbation of the data and the relative errors in the computed values of the linear and nonlinear parameters of the model. It is also shown that the errors in the linear parameters depend on both the number and the closeness of the nonlinear parameters.

Acknowledgment

This research was partially supported by Vicerrectoría de Investigación de la Universidad Nacional de Colombia, grant numbers 12427 and 16084.