Abstract

We present a new method, based on the method of maximum entropy in the mean, which builds upon the standard method of maximum entropy, to improve the parametric estimation of a decay rate when the measurements are corrupted by a large level of noise and, more importantly, when the number of measurements is small. The method is developed in the context of a concrete example: the estimation of the parameter of an exponential distribution. We show how to obtain an estimator with the noise filtered out and, using simulated data, we compare the performance of our method with the Bayesian and maximum likelihood approaches.

1. Introduction

Suppose that you want to measure the half-life of a decaying nucleus, the lifetime of some elementary particle, or some other quantity modeled by an exponential distribution, say a decay time or the lifetime of a process. Assume as well that the noise in the measurement process can be modeled by a centered Gaussian random variable whose variance may be of the same order of magnitude as that of the decay rate to be measured; we assume that the variance of the noise is known. To make things worse, assume that you can only collect very few measurements.

We want to emphasize that the method developed here is tailored, on the one hand, to this one model, which is important in applications, and, on the other hand, to the fact that the samples are small and contaminated by observational noise (on top of their inherent randomness). What the method provides in general is a technique for filtering the noise out.

Thus, if $x_i$ denotes a realized value of the variable, one can only measure $y_i = x_i + \epsilon_i$, for $i = 1, \ldots, n$, where $n$ is a small number, say 2 or 3, and $\epsilon_i$ denotes the additive measurement noise. When one knows that the distribution of the $x_i$ is exponential, the parameter $\lambda$ should be estimated by the reciprocal of the sample mean of the (unobservable) $x_i$. In other words, if you know that the sample comes from a specific parametric distribution but is contaminated by additive noise, then your estimator of the relevant parameters is contaminated by the measurement noise. What to do? One possible approach is to apply to the small sample the standard statistical estimation procedures, like maximum likelihood. But these work well when the sample size is larger than what concerns us here. In our particular example, the MLE is $1/\bar{y}$, in which the noise may be important (unless $n$ is large). Thus, apart from the issues arising from the smallness of the sample, we have the issue of the presence of the observational noise. We direct the reader to the work of Rousseeuw and Verboven [1], in which issues relating to estimation in small samples are discussed.
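To fix ideas, here is a minimal simulation sketch of this measurement setup; the true rate, noise level, sample size, and variable names below are illustrative choices of ours, not values taken from the text. It shows how the naive estimate $1/\bar{y}$ is computed from the noisy measurements.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

lam_true = 1.0   # illustrative true exponential rate
sigma = 0.5      # known standard deviation of the Gaussian measurement noise
n = 3            # very small sample, as in the situation discussed above

x = rng.exponential(scale=1.0 / lam_true, size=n)  # unobserved decay times
eps = rng.normal(0.0, sigma, size=n)               # additive measurement noise
y = x + eps                                        # what is actually recorded

# Naive MLE of the rate, computed as if there were no noise: 1 / sample mean.
lam_naive = 1.0 / y.mean()
print("noisy sample:", y, " naive estimate of the rate:", lam_naive)
```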

Still another possibility, the one we want to explore here, is to apply a maxentropic filtering method to estimate both the unknown variable and the noise level. For this we recast the problem as a typical inverse problem, consisting of solving for the unknown a linear system subject to convex constraints, where the constraint set is a convex subset of a finite-dimensional space and the matrix of the system depends on how we rephrase our problem. We could, for example, consider the following generic problem (1.1): find $x \in C$ such that $Ax = y$.

In our case the datum $y$ is the single number $\bar{y}$, the unknown has a signal component and a noise component, and we set $C = [0,\infty) \times \mathbb{R}$, so that (1.1) becomes the noisy decomposition $\bar{y} = x + \epsilon$ with $x \ge 0$, which is the relation (1.2) we shall work with. Or we could consider a collection of such problems, one for every measurement, and then proceed to carry out the estimation. Once we have solved the generic problem (1.1), the variations on the theme are easy to write down. What is important to keep in mind here is that the output of the method is a filtered version of $\bar{y}$, which is itself an estimator of (the reciprocal of) the unknown parameter. The novelty then is to filter out the noise in (1.2).

The method of maximum entropy in the mean (MEM) is rather well suited for solving problems like (1.1). See Navaza [2] for an early development and Dacunha-Castelle and Gamboa [3] for a full mathematical treatment. We shall briefly review what the method is about and then apply it to obtain an estimator from (1.2). In Section 3 we obtain the maxentropic estimator, and in Section 4 we examine some of its properties; in particular, we examine what the results would be if either the noise level were small or the number of measurements were large. We devote Section 5 to some simulations in which the method is compared with the Bayesian and maximum likelihood approaches.

2. The Basics of MEM

MEM is a technique for transforming a possibly ill-posed linear problem with convex constraints into a simpler (usually unconstrained) but nonlinear minimization problem. The number of variables in the auxiliary problem equals the number of equations in the original problem, as in the case of problem (1.1). To carry out the transformation, one thinks of the unknown $x$ there as the expected value of a random variable $X$ with respect to some measure to be determined. The basic datum is a sample space on which $X$ is to be defined. In our setup the natural choice is to take $[0,\infty)$ with its Borel subsets, and $X$ the identity map. Similarly, we think of the noise component $\epsilon$ as the expected value of a random variable taking values in $\mathbb{R}$. The natural choice of sample space here is $\mathbb{R}$ with its Borel subsets.

To continue we need to select prior measures $Q_1$ and $Q_2$ on $[0,\infty)$ and on $\mathbb{R}$. The only restriction that we impose on them is that the closure of the convex hull of the support of $Q_1$ (resp., of $Q_2$) is $[0,\infty)$ (resp., $\mathbb{R}$). These prior measures embody knowledge that we may have about the signal and the noise, but they are not priors in the Bayesian sense. Actually, the model for the noise component describes the characteristics of the measurement device or process, and it is a datum. The two pieces are put together by setting $\Omega = [0,\infty)\times\mathbb{R}$ and $Q = Q_1 \otimes Q_2$, and to get going we define the class $\mathcal{P}$ of probability measures on $\Omega$ which are absolutely continuous with respect to $Q$ and whose means reproduce the data, that is, satisfy the constraint in (1.1).

Note that any measure having a strictly positive density with respect to $Q$ satisfies the support condition above as well; for this standard result in analysis, see Rudin's book [4]. The procedure to explicitly produce such measures is known as the maximum entropy method. Its first step is to assume that the class $\mathcal{P}$ is not empty, which amounts to saying that our inverse problem (1.1) has a solution, and to define the entropy $S_Q(P) = -\int_\Omega \ln\left(\frac{dP}{dQ}\right) dP$ whenever the function $\ln(dP/dQ)$ is $P$-integrable, and $S_Q(P) = -\infty$ otherwise. This entropy functional is concave on the convex set $\mathcal{P}$. The way to guess the form of the density of the measure that maximizes $S_Q$ is to consider a family of exponential measures on $\Omega$, obtained by tilting $Q$ with the constraint map and normalizing; the logarithm of the normalization factor, plus a term linear in the data, defines the dual entropy function, which is finite whenever the normalization factor is.

It is easy to prove that the entropy of any measure in $\mathcal{P}$ is bounded above by the dual entropy function evaluated at any multiplier. Thus, if we were able to find a multiplier whose exponential measure lies in $\mathcal{P}$, we are done. To find such a multiplier it suffices to minimize the (convex) dual entropy function over the (convex) set on which it is finite. We leave it for the reader to verify that if the minimum is reached in the interior of that set, then the corresponding exponential measure does satisfy the data constraint. We direct the reader to [4, 5] for all about this, and much more. For a review of the use of maximum entropy in the mean for solving linear inverse problems, the reader might want to look at [6].
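To fix notation for what follows, the objects just described can be written explicitly. This is the standard formulation of MEM (as in [3, 6]), stated here in our own symbols for the generic problem (1.1):

```latex
% Standard MEM construction for the constraint A E_P[\xi] = y with prior Q on \Omega.
S_Q(P) = -\int_{\Omega} \ln\!\Big(\frac{dP}{dQ}\Big)\, dP ,
\qquad
dP_{\lambda}(\xi) = \frac{e^{-\langle \lambda, A\xi\rangle}}{Z(\lambda)}\, dQ(\xi),
\qquad
Z(\lambda) = \int_{\Omega} e^{-\langle \lambda, A\xi\rangle}\, dQ(\xi).

% Dual entropy function and the duality bound:
\Sigma(\lambda) = \ln Z(\lambda) + \langle \lambda, y\rangle ,
\qquad
S_Q(P) \le \Sigma(\lambda)
\quad \text{for every } P \in \mathcal{P} \text{ and every admissible } \lambda .

% If \lambda^* minimizes \Sigma in the interior of its domain, then
\nabla \Sigma(\lambda^*) = 0
\;\Longleftrightarrow\;
A\, E_{P_{\lambda^*}}[\xi] = y ,
\qquad \text{and the maxentropic solution is } x^* = E_{P_{\lambda^*}}[\xi].
```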

3. Entropic Estimators

Let us now turn our attention to (1.2). Since our estimator $\bar{y}$ is the sample mean of exponentials (of unknown parameter), it is natural to assume, for the method described in Section 2, that the prior for the signal component is of the same type, with a parameter $\alpha$ which encodes our best (or prior) guess of the unknown parameter; hereinafter we propose a criterion for the best choice of $\alpha$. Similarly, we shall choose the distribution of a centered Gaussian random variable, with the known noise variance, as prior for the noise component. Things are rather easy under these assumptions. To begin with, the normalization factor and the typical member of the exponential family can be written in closed form, and it is also easy to verify that the dual entropy function is given explicitly; its minimum is reached at a multiplier satisfying a quadratic equation and, discarding one of the two roots (because it leads to a negative estimator of a positive quantity), we are left with (3.4), from which we obtain the filtered signal reconstruction $x^*$ in (3.6) as well as the noise reconstruction $\epsilon^*$ in (3.7).

Comment. Clearly, from (3.4) it follows that $x^* + \epsilon^* = \bar{y}$. Thus it makes sense to think of $x^*$ as the estimator with the noise filtered out, and to think of $\epsilon^*$ as the residual noise.
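As an illustration of the computations described above, the following sketch solves the one-dimensional dual problem numerically under the simplest priors consistent with this discussion: an exponential prior with rate $\alpha$ for the signal component and a centered Gaussian with the known noise variance for the noise component. These priors, and the names in the code, are our assumptions for the sketch; at the minimizer the output reproduces $\bar{y}$ as the sum $x^* + \epsilon^*$ (up to the optimizer's tolerance), in line with the comment above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mem_filter(y_bar, sigma, alpha):
    """Maxentropic filtering of a single noisy datum y_bar = x + eps.

    Assumed priors (illustrative, not necessarily those of the paper):
      x   ~ Exponential(rate=alpha)  on [0, infinity)
      eps ~ Normal(0, sigma**2)      on the real line
    Dual entropy function:
      Sigma(lam) = log Z(lam) + lam * y_bar,
      Z(lam) = alpha / (alpha + lam) * exp(lam**2 * sigma**2 / 2),  lam > -alpha.
    """
    def dual(lam):
        if lam <= -alpha:
            return np.inf
        return np.log(alpha / (alpha + lam)) + 0.5 * (lam * sigma) ** 2 + lam * y_bar

    res = minimize_scalar(dual, bounds=(-alpha + 1e-9, 1e3), method="bounded")
    lam_star = res.x
    x_star = 1.0 / (alpha + lam_star)      # mean of the signal under the tilted prior
    eps_star = -lam_star * sigma ** 2      # mean of the noise under the tilted prior
    return x_star, eps_star

y_bar, sigma = 1.3, 0.5                    # illustrative numbers
x_star, eps_star = mem_filter(y_bar, sigma, alpha=1.0 / y_bar)
print(x_star, eps_star, x_star + eps_star)  # x_star + eps_star reproduces y_bar
```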

4. Properties of the Maxentropic Estimator

Let us now spell out some of the notation underlying the probabilistic model behind (1.1). We shall assume that the $x_i$ and the $\epsilon_i$ of the first section are values of random variables $X_i$ and $\epsilon_i$ defined on some underlying sample space. For each $\lambda > 0$, we assume to be given a probability law $\mathbb{P}_\lambda$ with respect to which the sequences $\{X_i\}$ and $\{\epsilon_i\}$ are both i.i.d. and independent of each other, with the $X_i$ exponentially distributed with parameter $\lambda$ and the $\epsilon_i$ distributed as $N(0,\sigma^2)$. That is, we take the underlying model for the noise as our prior model for it; this is just a minimal consistency requirement. From (3.6) and (3.7), the following basic results are easy to obtain.

Lemma 4.1. If we take the prior parameter $\alpha$ so that the prior guess coincides with the sample mean $\bar{y}$, then the minimizing multiplier vanishes, $x^* = \bar{y}$, and $\epsilon^* = 0$.

Comment. Actually, it is easy to verify that this is the only choice of $\alpha$ for which the residual noise $\epsilon^*$ vanishes.

To examine the case in which large data sets are available, let us add a superscript and write $x^{*(n)}$ to emphasize the size of the sample. If $\bar{y}_n$ denotes the arithmetic mean of an i.i.d. sequence of random variables having the law of the noisy observations as common law, the following is a consequence of the LLN.

Lemma 4.2. As $n \to \infty$, one has the limit stated in (4.1).

Proof. Start from (3.7), invoke the LLN to conclude that $\bar{y}_n$ tends to $1/\lambda$, and obtain (4.1).

Corollary 4.3. The true parameter $\lambda$ is the solution of the limiting equation obtained from (4.1).

Proof. Just look at the right-hand side of (4.1) to conclude the assertion.

Comment. What this asserts is that when the number of measurements is large, to find the right value of the parameter it suffices to solve this limiting equation. And when the noise level goes to zero, we have the following.

Lemma 4.4. With the notations introduced above, as $\sigma^2 \to 0$ the maxentropic reconstruction tends to the standard noiseless estimator.

Proof. When $\sigma^2 \to 0$, the prior for the noise component tends to $\delta_0$, the Dirac point mass at $0$. In this case, we just set $\sigma = 0$ in (3.4) and the conclusion follows.

When we choose the prior guess to be the sample mean, the estimator happens to be unbiased.

Lemma 4.5. Let $\lambda$ denote the true but unknown parameter of the exponential, so that the $X_i$ have density $\lambda e^{-\lambda t}$ for $t \ge 0$ and $0$ otherwise. With the notations introduced above, one has $E_\lambda[x^*] = 1/\lambda$ whenever the prior guess for the maxent procedure is the sample mean $\bar{y}$.

Proof. It drops out easily from Lemma 4.1, from (1.2), and from the fact that the density of the observations is a convolution.

But the right choice of the parameter $\alpha$ is still a pending issue. To settle it we consider once more the identity $x^* + \epsilon^* = \bar{y}$. In our particular case we shall see that a specific choice of $\alpha$ minimizes the residual term in this identity. Thus, we propose to choose $\alpha$ so as to minimize the residual or reconstruction error.

Lemma 4.6. With the same notations as above, $\epsilon^*$ happens to be a monotone function of $\alpha$, and it has finite limits as $\alpha \to 0$ and as $\alpha \to \infty$.

Proof. Recall from the first lemma that when the prior guess equals the sample mean, the residual vanishes. A simple algebraic manipulation identifies the limits at the two extremes; to compute the limit of $\epsilon^*$ as $\alpha \to \infty$, note that for large $\alpha$ we can neglect the constant term under the square root sign, and then the result drops out. It is also easy to check the positivity of the derivative of $\epsilon^*$ with respect to $\alpha$.

To sum up, with this choice of $\alpha$, the entropic estimator and the residual error take the closed forms recorded in (4.3), which are the ones used in the simulations below.
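Under the same illustrative priors as in the sketch of Section 3 (an assumption, not necessarily the exact choice made above), the residual has a closed form whose monotonicity in $\alpha$ and unique zero at the sample-mean choice can be checked numerically, in line with Lemmas 4.1 and 4.6:

```python
import numpy as np

def residual(alpha, y_bar, sigma):
    # eps*(alpha) under the illustrative priors of the earlier sketch:
    # exponential prior with rate alpha for the signal, N(0, sigma^2) for the noise.
    a = y_bar - alpha * sigma**2
    return 0.5 * (y_bar + alpha * sigma**2 - np.sqrt(a**2 + 4.0 * sigma**2))

y_bar, sigma = 1.3, 0.5                              # illustrative numbers
alphas = np.linspace(0.01, 50.0, 200)
eps = residual(alphas, y_bar, sigma)
assert np.all(np.diff(eps) > 0)                      # monotone in alpha
print(residual(1.0 / y_bar, y_bar, sigma))           # vanishes when the prior guess is y_bar
print(eps[0], eps[-1])                               # limits: negative, approaching y_bar
```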

5. Simulation and Comparison with the Bayesian and Maximum Likelihood Approaches

In this section we compare the proposed maximum entropy in the mean procedure with the Bayesian and maximum likelihood estimation procedures. We do that by simulating data, carrying out the three procedures, and plotting the histograms of the corresponding estimators. First, we generate histograms that describe the statistical nature of the estimator as a function of the number of measurements. For that we generate a data set of 5000 samples of 1, 3, 5, and a larger number of measurements, and for each of them we obtain the estimate from (4.3). Also, for each data point we apply a Bayesian estimation method, a simple-minded maximum likelihood estimation, and a full maximum likelihood method, and plot the resulting histograms.

5.1. The Simple-Minded MLE

This consists of applying the MLE as if there were no measurement noise. We carried this out for the sake of comparison, to verify that when the sample size becomes larger, the effect of the measurement noise is washed away on average. The plot of the results for the smallest sample size is too scattered, and we do not display it. The results of the simulations are displayed in Figure 1.

5.2. The Maxentropic Estimator

The simulated data process goes as follows. For each sample size, the data points are obtained in the following way.

(i) Simulate a value $x$ from an exponential distribution with the true parameter $\lambda$.
(ii) Simulate a value $\epsilon$ from a normal distribution $N(0, \sigma^2)$.
(iii) Sum $x$ and $\epsilon$ to get $y = x + \epsilon$; if $y < 0$, repeat the first two steps until $y \ge 0$.
(iv) Do this for $i = 1, \ldots, n$.
(v) Compute the maximum entropy estimator given by (4.3) (a code sketch of these steps follows).
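A compact sketch of this recipe is given below; the true rate, the noise level, and the number of repetitions are illustrative, and mem_estimator is a placeholder for the estimator (4.3), here simply returning the reciprocal of the sample mean as an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def mem_estimator(y, sigma):
    # Placeholder for the maximum entropy estimator (4.3); as a stand-in we use
    # the reciprocal of the sample mean of the recorded values.
    return 1.0 / np.mean(y)

lam_true, sigma, n_rep = 1.0, 0.5, 5000              # illustrative values
for n in (1, 3, 5):
    estimates = []
    for _ in range(n_rep):
        y = []
        while len(y) < n:
            x = rng.exponential(scale=1.0 / lam_true)   # step (i)
            eps = rng.normal(0.0, sigma)                # step (ii)
            if x + eps >= 0.0:                          # step (iii): keep nonnegative sums
                y.append(x + eps)                       # step (iv): repeat until n values
        estimates.append(mem_estimator(np.array(y), sigma))
    print(n, np.mean(estimates), np.std(estimates))     # step (v), summarized per sample size
```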

We then display the resulting histogram in Figure 2.

5.3. The Bayesian Estimator

In this section we derive the algorithm for a Bayesian inference of the model given by $y_i = x + \epsilon_i$, for $i = 1, \ldots, n$, where $x$ denotes the unknown mean. The classical likelihood estimator of $x$ is given by $\bar{y}$. As we know that the unknown mean $x$ has an exponential probability distribution with parameter $\lambda$, the joint density of the $y_i$ and of $x$ is proportional to the product of the Gaussian likelihood with the density $\lambda e^{-\lambda x}$ of the unknown mean and with the Jeffreys noninformative prior distribution for the parameter $\lambda$ [5].

In order to derive the Bayesian estimator, we need to get the posterior probability distribution of $\lambda$, which we do with the following Gibbs sampling scheme [7].

(i) Draw $x$ from its full conditional distribution given $\lambda$ and the data.
(ii) Draw $\lambda$ from its full conditional distribution given $x$.

Repeat this algorithm many times in order to obtain a large sample from the joint posterior distribution, and hence the posterior distribution of the parameter $\lambda$. For our application, we simulate data with the same true parameter value as in the previous subsections, which determines the expected value of the observations.
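For concreteness, here is a sketch of one such Gibbs scheme under our reading of the model (an assumption): a single unknown mean $x$ with density $\lambda e^{-\lambda x}$, observations $y_i = x + \epsilon_i$ with $\epsilon_i \sim N(0, \sigma^2)$, and the Jeffreys prior $\pi(\lambda) \propto 1/\lambda$. Under these assumptions the full conditional of $x$ is a normal with mean $\bar{y} - \lambda\sigma^2/n$ and variance $\sigma^2/n$, truncated to $(0, \infty)$, and the full conditional of $\lambda$ is exponential with rate $x$; the data and the number of iterations are illustrative.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(seed=3)

def gibbs(y, sigma, n_iter=5000, burn_in=1000):
    """Gibbs sampler for the model assumed above:
       x ~ Exp(lam), y_i | x ~ N(x, sigma^2), pi(lam) proportional to 1/lam."""
    n, y_bar = len(y), np.mean(y)
    x, lam = max(y_bar, 0.1), 1.0             # crude initial values
    s = sigma / np.sqrt(n)                    # std of the full conditional of x
    lam_draws = []
    for t in range(n_iter):
        # (i) draw x | lam, y : N(y_bar - lam*sigma^2/n, sigma^2/n) truncated to (0, inf)
        m = y_bar - lam * sigma**2 / n
        a = (0.0 - m) / s                     # standardized lower truncation point
        x = truncnorm.rvs(a, np.inf, loc=m, scale=s, random_state=rng)
        # (ii) draw lam | x : density proportional to (1/lam) * lam * exp(-lam*x) = exp(-lam*x)
        lam = rng.exponential(scale=1.0 / x)
        if t >= burn_in:
            lam_draws.append(lam)
    return np.array(lam_draws)

# Illustrative data: three noisy measurements of a single exponential mean.
y = np.array([1.4, 0.6, 1.1])
draws = gibbs(y, sigma=0.5)
print(draws.mean())                           # posterior mean as the Bayesian estimate
```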

We get the histogram displayed in Figure 3 for the estimates of $\lambda$, after a suitable number of iterations, when simulating data for the sample sizes considered above.

5.4. The Maximum Likelihood Estimator

The problem of obtaining an ML estimator is complicated in this setup because the data points are distributed like $y_i = x_i + \epsilon_i$, where $x_i$ is exponential with parameter $\lambda$ and $\epsilon_i \sim N(0, \sigma^2)$, so the density of each observation is the convolution of an exponential and a Gaussian density. Therefore, after observing $y_1, \ldots, y_n$, we get the likelihood (5.3), which we maximize numerically. If we attempted to obtain the ML estimator analytically, we would need to solve the corresponding likelihood equation.

Notice that as the noise level tends to zero this equation reduces to the noiseless likelihood equation, as expected. We can move forward a bit and integrate by parts each numerator, and after some calculations we arrive at a somewhat more explicit equation.

Trying to solve this equation for $\lambda$ is rather hopeless. That is the reason why we carried out a numerical maximization of (5.3). To understand what happens when the noise is small, we drop the last term in the last equation; solving what remains shows that the effect of the noise is to increase the ML estimator. In Figure 4 we plot the histogram of the estimates obtained by numerically maximizing (5.3) for each simulated data point.
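One way to carry out this numerical maximization is sketched below, under our reading of the model: each observation has the standard convolution density of an exponential and an independent centered Gaussian, $f(y) = \lambda e^{\lambda^2\sigma^2/2 - \lambda y}\,\Phi((y - \lambda\sigma^2)/\sigma)$, and the resulting log-likelihood is maximized over $\lambda$ by a one-dimensional search; the data values are illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def neg_log_lik(lam, y, sigma):
    # Density of y = x + eps with x ~ Exp(lam), eps ~ N(0, sigma^2):
    #   f(y) = lam * exp(lam^2 sigma^2 / 2 - lam y) * Phi((y - lam sigma^2) / sigma)
    # (exponentially modified Gaussian); norm.logcdf keeps the tail numerically stable.
    ll = (np.log(lam) + 0.5 * (lam * sigma) ** 2 - lam * y
          + norm.logcdf((y - lam * sigma**2) / sigma))
    return -np.sum(ll)

def ml_estimate(y, sigma):
    res = minimize_scalar(neg_log_lik, bounds=(1e-6, 50.0), args=(y, sigma),
                          method="bounded")
    return res.x

y = np.array([1.4, 0.6, 1.1])          # illustrative noisy measurements
print(ml_estimate(y, sigma=0.5))
```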

The results are summarized in Tables 1, 2, and 3.

When simulating data for the smallest sample sizes, the MEM, maximum likelihood, and Bayesian histograms are all skewed to the right, and all three yield a mean close to 1. The tables report the sample mean and sample standard deviation of each estimator for the various sample sizes. All three methods produce right-skewed histograms for small samples. The MEM and Bayesian methods provide similar results, more accurate than those of the maximum likelihood method, but keep in mind that we carried out an approximate calculation.

Note as well that for moderate sample sizes the variability of the (approximate) MLE decreases, but its mean is still far from the true value; this could be due to the numerical approximation that we used, whereas for larger samples the estimator improves considerably. We owe this observation to one of our referees, who pointed out that for very small samples the MEM estimator outperforms the MLE estimator, which is biased, and that this advantage disappears as the sample size becomes large.

6. Concluding Remarks

We exhibited the usefulness of the method of maximum entropy in the mean for dealing with estimation problems in which the samples are small and contaminated by noise, which adds an extra source of randomness that has to be filtered out. The problem we chose, while real, is one in which the Lagrange multiplier could be determined analytically and the properties of the resulting estimators could be studied analytically as well. This possibility appears as well when the observed signal is Gaussian. In general, to obtain the filtered estimator, one has to determine the Lagrange multipliers numerically.

On one hand, MEM backs up the intuitive belief according to which, if the $y_i$ are all the data that you have, it is all right to compute your estimator of the mean $1/\lambda$ from them. The MEM and Bayesian methods yield results closer to the true parameter value than the maximum likelihood estimator for a small number of measurements.

On the other hand, and this depends on your choice of priors, MEM provides us with a way of modifying those priors and obtaining representations like $\bar{y} = x^* + \epsilon^*$, where of course $x^* \ge 0$. What we saw above is that there is a choice of prior distributions such that $x^* = \bar{y}$ and $\epsilon^* = 0$.

The important thing is that this is actually true regardless of what the “true” probability law describing the data is.

Acknowledgment

The authors want to thank the two referees for their comments and suggestions, which improved the presentation.