Abstract

A simple recursive method is presented for performing the inverse dispersion modeling of an unknown number of (localized) sources, given a finite number of noisy concentration data acquired by an array of detectors. Bayesian probability theory is used to address the problem of selecting the source model which is most plausible in view of the given concentration dataset and all the available prior information. The recursive algorithm involves subtracting a predicted concentration signal arising from a source model consisting of 𝑁 localized sources from the measured concentration data for increasing values of 𝑁 and examining the resulting residual data to determine if the residuals are consistent with the estimated noise level in the concentration data. The method is illustrated by application to a real concentration dataset obtained from an atmospheric dispersion experiment involving the simultaneous release of a tracer from four sources.

1. Introduction

Significant advances in sensing technology for concentration measurements of contaminants (e.g., toxic industrial materials; chemical, biological, and radiological agents) released into the atmosphere, either accidentally or deliberately, have fostered interest in exploiting this information for the reconstruction of the contaminant source distribution responsible for the observed concentration. Indeed, innovative methods for measuring and collecting concentration data in formal observation (or monitoring) networks of ever more compact (nanotechnology-enabled) sensors, including the adoption of new wireless sensor network technologies for acquisition of data in situ and for rapid delivery of this information through wireless transmission [1], have placed the emphasis on data assimilation and inverse dispersion modeling.

This should not be too surprising, because inverse dispersion modeling enables the full realization of the benefits of this new data context, allowing the innovative fusion of sensor data with predictive models of atmospheric dispersion for determination of the unknown source characteristics. In turn, this will lead to a greatly improved situational awareness and result in a significantly enhanced common operational picture required for making more informed decisions for mitigation of the consequences of a (toxic) contaminant release. In this context, inverse dispersion modeling for source reconstruction has important implications for public safety and security.

In the past, two different methodologies have been used to address the inverse dispersion modeling problem, namely, deterministic optimization and stochastic Bayesian approaches. In the optimization method, the parameters $\theta$ describing the source model are obtained as the solution of a nonlinear least-squares optimization problem:
$$\hat{\theta} = \arg\min_{\theta \in \mathbb{R}^n} \left\{ f_O(\theta) + f_R(\theta) \right\}, \tag{1.1}$$
where $n$ is the number of parameters (unknowns) used to parameterize the source model, $f_O(\theta)$ is an objective functional that measures the total mismatch (usually taken as the sum-of-squared differences) between the measured concentration data and the synthetic (predicted) concentration data associated with the current source model $\theta$, and $f_R(\theta)$ is a regularization functional that is used either to impose an additional constraint on the solution or to incorporate a priori information (which is required to produce a mathematically unique solution). Most efforts in application of the optimization approach to inverse dispersion modeling have been focused on the development of numerical methods for solution of the optimization problem or on the prescription of the regularization functional for incorporation of various forms of a priori information. The optimization method for inverse dispersion modeling has been used by Robertson and Langner [2], Bocquet [3], Thomson et al. [4], Allen et al. [5], and Issartel et al. [6] (among others), based on various numerical methods for solution of the optimization problem (e.g., variational data assimilation, genetic algorithm, conjugate gradient algorithm with line search, augmented Lagrangian method) and a number of different forms of regularization (e.g., energy, entropy, smoothness, or flatness).

While the optimization approach for inverse dispersion modeling seeks to provide a single optimal solution for the problem, the stochastic Bayesian approach seeks to generate multiple solutions to the problem with the evaluation of the degree of plausibility for each solution. In contrast to the optimization approach, the stochastic Bayesian approach allows the quantification of the uncertainty in the source reconstruction (arising as such from the use of incomplete and noisy concentration data and imperfect models of atmospheric dispersion for the inverse dispersion modeling). This approach has been developed and used by a number of researchers (e.g., [7–10]).

In all the previous studies cited above, the inverse dispersion modeling involved the problem of identification of source parameters for a single localized source. The determination of the characteristics of multiple localized sources was briefly addressed by Yee [11] and by Sharan et al. [12] using, respectively, a stochastic Bayesian approach and an optimization (least-squares) approach that were similar to those applied previously for the recovery of a single source. In both these cases, this was possible because it was assumed in these investigations that the number of sources was known a priori. The problem of reconstruction of an unknown number of localized sources using a finite number of noisy concentration measurements is a significantly more difficult problem. A solution for this problem was proposed by Yee [13, 14] who approached the problem as a generalized parameter estimation problem in which the number of localized sources, 𝑁, in the (unknown) source distribution was included explicitly in the parameter vector 𝜃, in addition to the usual parameters that characterize each localized source (e.g., location, emission rate, source-on time, source-off time).

Solving the reconstruction of an unknown number of localized sources as a generalized parameter estimation problem posed a number of conceptual and technical difficulties. The primary conceptual difficulty resided in the fact that when the number of sources is unknown a priori, the dimension of the parameter space (or, equivalently, the dimension or length of the associated parameter vector $\theta$) is an unknown that needs to be estimated using the noisy concentration data. Yee [13, 14] overcame this conceptual difficulty by using Bayesian probability theory to formulate the full joint probability density function (PDF) for the number of sources $N$ and the parameters of the $N$ localized sources and then demonstrated how a reversible-jump Markov chain Monte Carlo (RJMCMC) algorithm can be designed to draw samples of candidate source models from this joint PDF, allowing for changes in the dimensionality of the source model (or, equivalently, changes in the number of localized sources in the source distribution). However, it was found that the RJMCMC algorithm sampled the parameter space of the unknown source distribution rather inefficiently. To overcome this technical difficulty, Yee [13, 14] showed how the RJMCMC algorithm can be combined either with a form of parallel tempering based on a Metropolis-coupled MCMC algorithm [13] or with a simpler and computationally more efficient simulated annealing scheme [14] to improve significantly the sampling efficiency (or mixing rate) of the Markov chain in the variable dimension parameter space.

In this paper, we address the inverse dispersion modeling problem for the difficult case of multiple sources when the number of sources is unknown a priori as a model selection problem, rather than as a generalized parameter estimation problem as described by Yee [13, 14]. The model selection problem here is formulated in the Bayesian framework, which involves the evaluation of model selection statistics that gauge rigorously the tradeoff between the goodness-of-fit of the source model structure to the concentration data and the complexity of the source model structure. The model selection approach proposed herein for reconstruction of an unknown number of localized sources is conceptually and algorithmically simpler than that based on a generalized parameter estimation approach [13, 14], leading as such to a simpler and more efficient computer implementation. More specifically, the model selection approach is simpler than the generalized parameter estimation approach in that it does not need to deal with the complexity of the variable dimensionality of the parameter space, which generally requires the development of sophisticated and computationally intensive algorithms.

2. Bayesian Model Selection

Let $m_N$ denote a source distribution (model) which consists of a known number, $N$, of localized sources. The model $m_N$ is characterized by a set of parameters $\theta$. Bayesian inference for $\theta$, when given the concentration data $\mathcal{D}$, is based on Bayes' theorem (rule):
$$p(\theta \mid m_N, \mathcal{D}, I) = \frac{p(\theta \mid m_N, I)\, p(\mathcal{D} \mid m_N, \theta, I)}{p(\mathcal{D} \mid m_N, I)}, \tag{2.1}$$
where $I$ denotes the background (contextual) information that defines the multiple source reconstruction problem (e.g., background meteorology, atmospheric dispersion model used to determine the source-receptor relationship, etc.) and the vertical bar denotes "conditional upon" or "given." Furthermore, in (2.1), $p(\theta \mid m_N, \mathcal{D}, I) \equiv p(\theta)$ is the posterior probability distribution of the parameters $\theta$, $p(\theta \mid m_N, I) \equiv \pi(\theta)$ is the prior probability distribution of the parameters $\theta$ before the data $\mathcal{D}$ was made available, and $p(\mathcal{D} \mid m_N, \theta, I) \equiv \mathcal{L}(\theta)$ is the likelihood function and defines the probabilistic model of how the data were generated. Finally, $p(\mathcal{D} \mid m_N, I)$ is the evidence and, in the context of parameter estimation, ensures proper normalization of the posterior probability distribution (assuming $\theta \in \mathbb{R}^n$):
$$p(\mathcal{D} \mid m_N, I) = \int_{\mathbb{R}^n} \mathcal{L}(\theta)\, \pi(\theta)\, d\theta \equiv Z(m_N). \tag{2.2}$$

However, in our problem, the number of sources ($N$) is unknown a priori, so the relevant question that needs to be addressed is as follows: given a set of possible source models $S \equiv \{m_0, m_1, \ldots, m_{N_{\max}}\}$ ($m_N$ denotes the source model consisting of $N$ localized sources), which source model is the most plausible (probable) given the concentration data $\mathcal{D}$? This question is addressed rigorously in the Bayesian framework by using Bayes' theorem to compute the posterior probability for source model $m_N$ ($N = 0, 1, \ldots, N_{\max}$) to give
$$p(m_N \mid \mathcal{D}, I) = \frac{p(m_N \mid I)\, p(\mathcal{D} \mid m_N, I)}{p(\mathcal{D} \mid I)}. \tag{2.3}$$
Here, $p(m_N \mid I)$ is the prior probability of source model $m_N$ given only the information $I$, and $p(\mathcal{D} \mid m_N, I)$ is the global likelihood of the concentration data $\mathcal{D}$ given the source model $m_N$ and $I$ and represents the goodness-of-fit of the model to the data. Note that the global likelihood of (2.3) is identical to the evidence $Z(m_N)$ defined in (2.2). Finally, the marginal probability of the data given the background information, $p(\mathcal{D} \mid I)$, is a normalization constant over all source models:
$$p(\mathcal{D} \mid I) = \sum_{N=0}^{N_{\max}} p(m_N \mid I)\, p(\mathcal{D} \mid m_N, I). \tag{2.4}$$

Given any two source models $m_{N_1}$ and $m_{N_0}$ from the set $S$, the "odds" in favor of model $m_{N_1}$ relative to model $m_{N_0}$ are given by the ratio of the posterior probabilities of the two models, which, on using (2.3) and (2.2), reduces to
$$K_{10} \equiv \frac{p(m_{N_1} \mid \mathcal{D}, I)}{p(m_{N_0} \mid \mathcal{D}, I)} = \frac{p(\mathcal{D} \mid m_{N_1}, I)}{p(\mathcal{D} \mid m_{N_0}, I)}\, \frac{p(m_{N_1} \mid I)}{p(m_{N_0} \mid I)} = \frac{Z(m_{N_1})}{Z(m_{N_0})}\, \frac{p(m_{N_1} \mid I)}{p(m_{N_0} \mid I)}. \tag{2.5}$$
Note that the posterior odds ratio $K_{10}$ is the product of the evidence ratio $Z(m_{N_1})/Z(m_{N_0})$ and the prior odds ratio $p(m_{N_1} \mid I)/p(m_{N_0} \mid I)$. If every source model in the set $S$ is equally probable (namely, $p(m_N \mid I) = 1/(N_{\max} + 1)$ for $N = 0, 1, \ldots, N_{\max}$), then the prior odds ratio is unity, and the posterior odds ratio is identically equal to the evidence ratio. When viewed as a model selection problem, the inverse dispersion modeling of an unknown number of sources is conceptually simple: compute the evidence $Z(m_N)$ for each source model $m_N$ in $S$ given the input concentration data $\mathcal{D}$ and the background information $I$, select the model with the largest value of the evidence as the most probable model, and recover the parameters $\theta$ corresponding to this most probable model.
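Since (2.5) involves only ratios, it is convenient in practice to work with log-evidences. The following minimal Python sketch (illustrative only; the function names are ours, and the log-evidences themselves would be supplied by a procedure such as the nested sampling of Section 5) shows the computation of the posterior odds ratio and the selection of the most probable model under equal prior odds:

```python
import numpy as np

def log_posterior_odds(log_Z1, log_Z0, log_prior_odds=0.0):
    """Log posterior odds ratio log(K_10), cf. (2.5).

    log_Z1, log_Z0 : log-evidences of the two source models m_N1, m_N0.
    log_prior_odds : log of p(m_N1|I)/p(m_N0|I); zero when all source
                     models in S are taken as equally probable a priori.
    """
    return (log_Z1 - log_Z0) + log_prior_odds

def most_probable_model(log_Z):
    """Select the most probable source model from a list of log-evidences
    [log Z(m_0), log Z(m_1), ..., log Z(m_Nmax)], assuming equal priors."""
    return int(np.argmax(log_Z))
```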

The key to the model selection problem reduces to the computation of the evidence $Z(m_N)$ for an arbitrary value of $N$. A perusal of (2.2) shows that the evidence for the source model structure $m_N$ (namely, a source model consisting of exactly $N$ localized sources) involves a computation of the overlap integral (or inner product) of the likelihood $\mathcal{L}(\theta)$ and the prior distribution $\pi(\theta)$ over the entire parameter space for model $m_N$. This inner product can be interpreted as a metric (statistic) for model selection and, as such, is an intrinsic element of the source model structure $m_N$ in the sense that it depends on both the set of parameters to be varied and the prior ranges for those parameters but is independent of the most probable (preferred) values for the parameters.

Note that the model selection metric selects preferentially a source model structure with the largest overlap of the likelihood and prior distribution. For a more complex source model (with a necessarily higher-dimensional parameter space), the prior information is smeared over a larger hypervolume in the parameter space, so a correspondingly greater overlap with the likelihood is required before such a model can be selected. This mechanism embodies automatically an Occam's razor in that a simpler source model structure is preferentially selected unless the likelihood of the data for a more complex source model increases significantly more than what would be expected from simply fitting the noise (residual between the measured concentration and the predicted concentration provided by the model).

The solution of the model selection problem is conceptually simple involving, as such, the computation of $Z(m_N)$ ($m_N \in S$). Unfortunately, this computation is a technically difficult problem for two reasons. Firstly, as $N$ (or, equivalently, the number of localized sources) increases, the overlap integral in (2.2) needs to be evaluated over a parameter space of increasing dimensionality $n$. More specifically, as $N$ increases, the dimensionality of the parameter space $n$ increases linearly with $N$. Secondly, the integrand of the overlap integral of (2.2) becomes ever more complex with increasing $N$, owing to a counting degeneracy in the problem. In particular, it is noted that $\mathcal{L}(\theta)$ is invariant under a reordering (relabelling) of the identifiers used to label the individual localized sources in the source model. This degeneracy simply corresponds to different (but physically equivalent) identifications of what is meant by a particular localized source in the source distribution, implying that the number of modes in $\mathcal{L}(\theta)$ for model $m_N$ increases as $N!$. The factor $N!$ here corresponds simply to the number of possible permutations of the labels for the localized sources in the source model $m_N$. In summary, the increase in the dimensionality of the parameter space and in the complexity of $\mathcal{L}(\theta)$ with increasing $N$ makes the problem of computation of $Z(m_N)$ technically difficult.

In view of the difficulty arising from the numerical evaluation of $Z(m_N)$ for increasing values of $N$, we consider an alternative (but closely related) methodology for addressing the model selection problem in the context of inverse dispersion modeling of an a priori unknown number of localized sources. To this purpose, the question that needs to be posed is as follows: after the removal of a predicted concentration for a source model $m_N$ from the measured concentration data, are the residuals that remain consistent with the estimated noise level, or is there further evidence for the existence of an additional localized source in the residuals? This question is easily addressed using Bayes' theorem as follows.

Towards this purpose, the following model is assumed for the measured concentration data $d_i$ ($i = 1, 2, \ldots, N_m$):
$$d_i = C_i + e_i^{(1)} = \bar{C}_i + e_i^{(1)} + e_i^{(2)} \equiv \bar{C}_i + e_i, \tag{2.6}$$
where $C_i$ is the true (albeit unknown) mean concentration, $e_i^{(1)}$ is the measurement error, $\bar{C}_i$ is the predicted mean concentration obtained from an atmospheric dispersion model, $e_i^{(2)}$ is the model error incurred by using the (necessarily) imperfect atmospheric model to predict $C_i$, and $e_i$ is the composite error that includes both the measurement and model error. Furthermore, it is assumed that the expectation value $\langle e_i \rangle$ of the composite error is zero, and the variance $\langle e_i^2 \rangle$ is assumed (for now) to be known and given by $\sigma_i^2$. Hence, it is assumed that we are given the concentration data $\mathcal{D} \equiv \{d_1, d_2, \ldots, d_{N_m}\}$ and corresponding uncertainties $\{\sigma_1, \sigma_2, \ldots, \sigma_{N_m}\}$, where $\sigma_i$ is the standard deviation (square root of the variance) for the $i$th concentration datum $d_i$. Normally, exact values for the uncertainties $\sigma_i$ are unknown, so that all we would have available would be some estimated values $s_i$ for these uncertainties, rather than the true values $\sigma_i$.

If we assume a priori that there are $N$ localized sources in the domain, we can analyse the concentration data $\mathcal{D}$ given that $m_N$ is the correct model structure and draw samples of source models with exactly $N$ localized sources (encoded by $\theta$) from the posterior distribution given by (2.1). Assume that $N_s$ samples of source models with $N$ localized sources have been drawn from the posterior distribution; namely, we have available the samples $\theta_k(m_N)$ ($k = 1, 2, \ldots, N_s$), where $\theta_k(m_N)$ is the $k$th source model sample consisting of exactly $N$ localized sources. To determine if $N$ is the correct number of localized sources, we can compute the residual data $\mathcal{E} \equiv \{\bar{\epsilon}_1, \bar{\epsilon}_2, \ldots, \bar{\epsilon}_{N_m}\}$, where $\bar{\epsilon}_i$ is the mean residual for the $i$th concentration datum (with realizations of the residuals obtained by subtracting the predicted concentration $\bar{C}_i(m_N)$ for a sample of the source model $m_N$ encoded in $\theta_k(m_N)$ from the measured concentration $d_i$). The mean residual datum $\bar{\epsilon}_i$ is estimated from the ensemble of source model samples $\theta_k(m_N)$ ($k = 1, 2, \ldots, N_s$) as follows:
$$\bar{\epsilon}_i = \frac{1}{N_s} \sum_{k=1}^{N_s} \left[ d_i - \bar{C}_i\left(\theta_k(m_N)\right) \right]. \tag{2.7}$$
Similarly, the uncertainty $s_{\epsilon_i}$ (error) in the residual datum $\bar{\epsilon}_i$ is estimated as $s_{\epsilon_i} = \left[ \overline{\epsilon_i^2} - (\bar{\epsilon}_i)^2 \right]^{1/2}$, where
$$\overline{\epsilon_i^2} = \frac{1}{N_s} \sum_{k=1}^{N_s} \left[ d_i - \bar{C}_i\left(\theta_k(m_N)\right) \right]^2. \tag{2.8}$$
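The estimates (2.7) and (2.8) amount to simple sample moments over the posterior ensemble. A Python sketch (assuming the posterior samples have already been pushed through the forward model to give the predicted concentrations) follows:

```python
import numpy as np

def residual_statistics(d, C_pred):
    """Mean residuals and their uncertainties, cf. (2.7) and (2.8).

    d      : (N_m,) measured concentration data.
    C_pred : (N_s, N_m) predicted concentrations C_i(theta_k(m_N)) for the
             N_s source-model samples drawn from the posterior.
    Returns (eps_bar, s_eps), each of shape (N_m,).
    """
    resid = d[None, :] - C_pred          # realizations d_i - C_i(theta_k)
    eps_bar = resid.mean(axis=0)         # (2.7): posterior-mean residual
    eps_sq = (resid ** 2).mean(axis=0)   # second moment, cf. (2.8)
    s_eps = np.sqrt(np.maximum(eps_sq - eps_bar ** 2, 0.0))
    return eps_bar, s_eps
```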

Now, given the residual data $\mathcal{E}$, we evaluate the evidence for the presence of an additional source by considering two models (hypotheses) for $\mathcal{E}$: (1) $H_0$: there is no additional source, and the residual data $\bar{\epsilon}_i = \bar{C}_i(m_0) + e_i = e_i$ (where $\bar{C}_i(m_0)$ is the predicted concentration for a source model with no localized sources and hence has zero value); and (2) $H_1$: there is an additional source, and the residual data $\bar{\epsilon}_i = \bar{C}_i(m_1) + e_i$ (where $\bar{C}_i(m_1)$ is the predicted concentration for a source model consisting of one localized source). Under hypothesis $H_0$ (null hypothesis), the residuals are consistent with the estimated noise level $e_i$. To decide which hypothesis is favoured by the data, we calculate the posterior odds ratio for $H_1$ relative to $H_0$. From (2.5), and assuming that hypotheses $H_0$ and $H_1$ are equally probable so that $p(H_0 \mid I) = p(H_1 \mid I) = 1/2$, we get
$$K_{10}^{\epsilon} \equiv \frac{p(H_1 \mid \mathcal{E}, I)}{p(H_0 \mid \mathcal{E}, I)} = \frac{p(\mathcal{E} \mid m_1, I)}{p(\mathcal{E} \mid m_0, I)} = \frac{Z_\epsilon(m_1)}{Z_\epsilon(m_0)}. \tag{2.9}$$
If $K_{10}^{\epsilon}$ exceeds some preassigned threshold, then there is evidence for an additional source (namely, hypothesis $H_1$ is favoured with respect to hypothesis $H_0$). If this is the case, then the number of sources $N$ is increased by one, and we repeat the analysis of the concentration data $\mathcal{D}$ given that $m_{N+1}$ is the correct model structure. This analysis would involve drawing samples from the posterior distribution of (2.1) with $m_N$ replaced by $m_{N+1}$, computing the residual data (obtained now by subtracting the predicted concentration $\bar{C}_i(m_{N+1})$ from the measured concentration $d_i$), and then determining once more the evidence in favour of $H_1$ relative to that of $H_0$. When this test fails (namely, $K_{10}^{\epsilon}$ is less than or equal to the preassigned threshold value), the algorithm terminates. At this point, the number of sources $\hat{N}$ has been determined, and the samples drawn from the posterior distribution of (2.1) for $m_{\hat{N}}$ can be used to estimate any posterior statistic of interest for the source parameters $\theta$.

To summarize, the algorithm for the inverse dispersion modeling of an unknown number of sources consists of the following steps (a code sketch of this recursive loop is given after the list).

(1) Input concentration data $\mathcal{D} = \{d_1, d_2, \ldots, d_{N_m}\}$ and associated estimated uncertainties $\{s_1, s_2, \ldots, s_{N_m}\}$.
(2) Set $N = 1$ and specify a threshold $K^*$ for the posterior odds ratio. Draw $N_s$ samples of source models (encoded by the parameter vectors $\theta_k(m_N)$, $k = 1, 2, \ldots, N_s$) from the posterior distribution $p(\theta \mid m_N, \mathcal{D}, I)$ given by (2.1).
(3) Compute the evidences $Z(m_N)$ and $Z(m_{N-1})$, and use these values to determine $K_{10}$ in accordance with (2.5). If $K_{10} \le K^*$, stop (no sources are present in the domain, so $\hat{N} = 0$).
(4) Use the samples $\theta_k(m_N)$ ($k = 1, 2, \ldots, N_s$) to estimate the residual data $\mathcal{E} = \{\bar{\epsilon}_1, \bar{\epsilon}_2, \ldots, \bar{\epsilon}_{N_m}\}$ and the associated uncertainties $\{s_{\epsilon_1}, s_{\epsilon_2}, \ldots, s_{\epsilon_{N_m}}\}$ in accordance with (2.7) and (2.8), respectively.
(5) Using the information from the previous step, compute $K_{10}^{\epsilon}$ in accordance with (2.9). If $K_{10}^{\epsilon} \le K^*$, set $\hat{N} = N$, output $\theta_k(m_{\hat{N}})$ ($k = 1, 2, \ldots, N_s$), and stop.
(6) Increase $N \to N + 1$. Draw $N_s$ samples $\theta_k(m_N)$ ($k = 1, 2, \ldots, N_s$) of source models from $p(\theta \mid m_N, \mathcal{D}, I)$ given by (2.1). Continue from Step 4.
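The following Python sketch shows one possible organization of this loop. The helper callables are hypothetical placeholders: draw_posterior would return the $N_s$ posterior samples $\theta_k(m_N)$ together with their predicted concentrations (e.g., via nested sampling, Section 5), log_K_detect would return $\log K_{10}$ of (2.5), and log_K_residual would evaluate $\log K_{10}^{\epsilon}$ of (2.9) from the residual data; residual_statistics is the routine sketched after (2.8). The default threshold corresponds to the choice $\log(K^*) = 1$ adopted in Section 5:

```python
import numpy as np

def reconstruct_sources(d, s, draw_posterior, log_K_residual,
                        log_K_detect, K_star=np.exp(1.0), N_max=10):
    """Recursive multiple-source reconstruction loop (Steps 1-6), a sketch.

    d, s           : measured concentrations and estimated uncertainties.
    draw_posterior : callable(N) -> (theta_samples, C_pred) for model m_N.
    log_K_residual : callable(eps_bar, s_eps) -> log K_10^eps, cf. (2.9).
    log_K_detect   : callable() -> log K_10 = log Z(m_1) - log Z(m_0), (2.5).
    """
    # Detection phase (Steps 2-3): any evidence for at least one source?
    if log_K_detect() <= np.log(K_star):
        return 0, None                       # N_hat = 0: no sources present
    N = 1
    theta, C_pred = draw_posterior(N)
    while N <= N_max:
        # Step 4: residuals after removing the m_N predicted signal
        eps_bar, s_eps = residual_statistics(d, C_pred)
        # Step 5: is there evidence for an additional source?
        if log_K_residual(eps_bar, s_eps) <= np.log(K_star):
            return N, theta                  # residuals consistent with noise
        # Step 6: add one more source and resample the posterior
        N += 1
        theta, C_pred = draw_posterior(N)
    return N_max, theta
```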

The algorithm summarized previously determines the minimum number of localized sources needed to represent the concentration data $\mathcal{D}$ down to the estimated noise level, without the requirement to compute the evidence $Z(m_N)$ for $N \ge 2$ (a technically difficult problem computationally). Steps 2 and 3 in the algorithm correspond to the signal detection phase, which addresses the specific question: is there evidence in the concentration data $\mathcal{D}$ for the existence of one or more localized sources in the domain? On the other hand, Steps 4 to 6 in the algorithm address the estimation phase and answer the following question: given that there is evidence for one or more localized sources in the domain (detection has been confirmed), how many localized sources are present, and what are the values of the source parameters that best characterize each of these localized sources?

3. Source-Receptor Relationship (Dispersion Modeling)

In this paper, we are interested in the reconstruction of an unknown number $N$ of localized sources. In view of this, the model for the source density function has the following explicit form (for source model $m_N$):
$$\mathcal{S}(\mathbf{x}, t) = \sum_{j=1}^{N} Q_j\, \delta(\mathbf{x} - \mathbf{x}_{s,j}) \left[ U(t - T_{b,j}) - U(t - T_{e,j}) \right], \tag{3.1}$$
where $\delta(\cdot)$ and $U(\cdot)$ are the Dirac delta and Heaviside unit step functions, respectively, and $Q_j$, $\mathbf{x}_{s,j}$, $T_{b,j}$, and $T_{e,j}$ are the emission rate, vector position (location), source-on time, and source-off time, respectively, of the $j$th source ($j = 1, 2, \ldots, N$). For a source model (or molecule) composed of $N$ localized sources (or source atoms), it is convenient to define the parameter vector as follows: $\theta \equiv (\mathbf{x}_{s,1}, T_{b,1}, T_{e,1}, Q_1, \ldots, \mathbf{x}_{s,N}, T_{b,N}, T_{e,N}, Q_N) \in \mathbb{R}^{6N}$.

To apply Bayesian probability theory, it is necessary to relate the hypotheses of interest about the unknown source model $\mathcal{S}$ to the modeled (predicted) concentration $\bar{C}$ [cf. (2.6)]. The predicted concentration $\bar{C}$ can be compared directly to the measured concentration $d$ "seen" by a sensor at receptor location $\mathbf{x}_r$ and time $t_r$, averaged over the sensor volume and sampling time. The comparison can be effected by averaging the mean concentration $\bar{C}(\mathbf{x}, t)$ over this sensor volume and sampling time to give
$$\bar{C}(\mathbf{x}_r, t_r) \equiv \int_0^T dt \int_{\mathcal{R}} d\mathbf{x}\; \bar{C}(\mathbf{x}, t)\, h(\mathbf{x}, t \mid \mathbf{x}_r, t_r) \equiv \langle \bar{C} h \rangle (\mathbf{x}_r, t_r), \tag{3.2}$$
where $h(\mathbf{x}, t \mid \mathbf{x}_r, t_r)$ is the spatial-temporal filtering function for the sensor concentration measurement at $(\mathbf{x}_r, t_r)$ and $\mathcal{R} \times [0, T]$ corresponds to the space-time volume that contains the source distribution and the receptors (sensors). Note that the mean concentration $\bar{C}(\mathbf{x}_r, t_r)$ "seen" by a sensor can be expressed succinctly as the inner or scalar product $\langle \bar{C} h \rangle$ of the mean concentration $\bar{C}$ and the sensor response function $h$.

A source-oriented forward-time Lagrangian stochastic (LS) model can be used to predict $\bar{C}(\mathbf{x}, t)$. In this approach, $\bar{C}$ is estimated from the statistical characteristics of "marked" particle trajectories described by the following stochastic differential equation [15]:
$$d\mathbf{X}(t) = \mathbf{V}(t)\, dt, \qquad d\mathbf{V}(t) = \mathbf{a}(\mathbf{X}(t), \mathbf{V}(t), t)\, dt + \left[ C_0\, \epsilon(\mathbf{X}(t), t) \right]^{1/2} d\mathbf{W}(t), \tag{3.3}$$
where $\mathbf{X}(t) \equiv (X_i(t)) = (X_1(t), X_2(t), X_3(t))$ is the Lagrangian position and $\mathbf{V}(t) \equiv (V_i(t)) = (V_1(t), V_2(t), V_3(t))$ is the Lagrangian velocity of a "marked" fluid element at time $t$ (marked by the source as the fluid element passes through it at some earlier time $t'$). The state of the fluid particle at any time $t$ after its initial release from the source density function $\mathcal{S}$ is defined by $(\mathbf{X}, \mathbf{V})$. In (3.3), $C_0$ is the Kolmogorov "universal" constant that is associated with the Kolmogorov similarity hypothesis for the form of the second-order Lagrangian velocity structure function in the inertial subrange; $\epsilon$ is the mean dissipation rate of turbulence kinetic energy; $d\mathbf{W}(t) \equiv (dW_i(t)) = (dW_1(t), dW_2(t), dW_3(t))$ are the increments of a vector-valued (three-dimensional) Wiener process; and $\mathbf{a} \equiv (a_i) = (a_1, a_2, a_3)$ is the drift coefficient vector (or, more precisely, the conditional mean acceleration vector).
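For concreteness, a single Euler-Maruyama step of (3.3) might look as follows in Python (a sketch only; the drift function a and mean dissipation rate eps are user-supplied model inputs, here taken to be time-independent for brevity):

```python
import numpy as np

def ls_step(X, V, a, eps, C0, dt, rng):
    """Advance the forward LS model (3.3) by one Euler-Maruyama step.

    X, V : (3,) Lagrangian position and velocity of the marked particle.
    a    : callable a(X, V) -> (3,) drift (conditional mean acceleration).
    eps  : callable eps(X) -> mean dissipation rate of turbulence kinetic
           energy at X.
    """
    dW = rng.normal(0.0, np.sqrt(dt), size=3)        # Wiener increments
    V_new = V + a(X, V) * dt + np.sqrt(C0 * eps(X)) * dW
    X_new = X + V * dt
    return X_new, V_new
```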

Unfortunately, for the current application, the computational demands of the source-oriented approach make it impractical as a direct tool for sampling from the posterior distribution because this involves necessarily exploring a large number of source parameter hypotheses. This is highly computer intensive, as the simulation-based Bayesian inference procedure requires a large number of forward calculations of the mean concentration to be performed, each of which involves the numerical solution of (3.2) and (3.3). In this particular case, it may be useful to construct an emulator for the simulation model and use this emulator as an inexpensive surrogate to replace the forward model [16–18]. Fortunately, applying this type of approximation method for the forward model is not required for the current problem. It was shown by Keats et al. [7] and Yee et al. [10] that an exact, computationally efficient procedure (appropriate for use in a Bayesian inference scheme) exists in the form of a receptor-oriented strategy for the representation of the source-receptor relationship.

Towards this purpose, the modeled sensor concentration $\bar{C}(\mathbf{x}_r, t_r)$ can be computed alternatively using the following dual representation:
$$\bar{C}(\mathbf{x}_r, t_r) \equiv \int_0^T dt \int_{\mathcal{R}} d\mathbf{x}\; \bar{C}(\mathbf{x}, t)\, h(\mathbf{x}, t \mid \mathbf{x}_r, t_r) = \int_0^{t_r} dt_0 \int_{\mathcal{R}} d\mathbf{x}_0\; C^*(\mathbf{x}_0, t_0 \mid \mathbf{x}_r, t_r)\, \mathcal{S}(\mathbf{x}_0, t_0) \equiv \langle C^* \mathcal{S} \rangle (\mathbf{x}_r, t_r), \tag{3.4}$$
where $C^*(\mathbf{x}_0, t_0 \mid \mathbf{x}_r, t_r)$ is the adjunct (or dual) concentration at space-time point $(\mathbf{x}_0, t_0)$ associated with the concentration datum at location $\mathbf{x}_r$ at time $t_r$ (with $t_0 \le t_r$). A comparison of (3.2) and (3.4) implies the duality relationship $\langle \bar{C} h \rangle = \langle C^* \mathcal{S} \rangle$ between $\bar{C}$ and $C^*$ through the source functions $h$ and $\mathcal{S}$. Moreover, $C^*$ is uniquely defined in the sense that it is explicitly constructed so that it verifies this duality relationship.

The adjunct field $C^*$ in the dual representation can be estimated from the statistical characteristics of "marked" particle trajectories corresponding to a receptor-oriented backward-time LS trajectory model, defined as the solution to the following stochastic differential equations (with $dt > 0$):
$$d\mathbf{X}^b(t) = \mathbf{V}^b(t)\, dt, \tag{3.5}$$
$$d\mathbf{V}^b(t) = \mathbf{a}^b(\mathbf{X}^b(t), \mathbf{V}^b(t), t)\, dt + \left[ C_0\, \epsilon(\mathbf{X}^b(t), t) \right]^{1/2} d\mathbf{W}(t), \tag{3.6}$$
where at any given time $t$, $(\mathbf{X}^b, \mathbf{V}^b)$ is a point in the phase space along the backward trajectory of the "marked" fluid element (here assumed to be marked or tagged as a fluid particle which at time $t_r$ passed through the spatial volume of the detector at location $\mathbf{x}_r$). Here, $(\mathbf{X}^b, \mathbf{V}^b)$ defines the state of a fluid particle at any time $t < t_r$ before its "final" release from the sensor space-time volume at time $t_r$. It can be shown [15, 19] that $C^*$ obtained from (3.5) and (3.6) for a detector with the filtering function $h$, and $\bar{C}$ obtained from (3.2) for a release from the source density $\mathcal{S}$, are exactly consistent with the duality relationship $\langle \bar{C} h \rangle = \langle C^* \mathcal{S} \rangle$ provided that: (1) $\mathbf{V}^b$ in (3.5) is related to $\mathbf{V}$ in (3.3) as $\mathbf{V}^b(t) = -\mathbf{V}(t)$, and (2) $\mathbf{a}^b$ in (3.6) is related to $\mathbf{a}$ in (3.3) as
$$a_i^b(\mathbf{x}, \mathbf{u}, t) = -a_i(\mathbf{x}, \mathbf{u}, t) + C_0\, \epsilon(\mathbf{x}, t)\, \frac{\partial}{\partial u_i} \ln P_E(\mathbf{u}), \tag{3.7}$$
where $P_E(\mathbf{u})$ is the background Eulerian velocity PDF. Thomson [15] provides one particular solution for the drift coefficient vector $\mathbf{a}$ that is consistent with the well-mixed criterion for the case where $P_E(\mathbf{u})$ has a Gaussian functional form (namely, for Gaussian turbulence).

Finally, if we substitute (3.1) into (3.4), the predicted concentration $\bar{C}(\theta) \equiv \bar{C}(\mathbf{x}_r, t_r; \theta)$ "seen" by the sensor at $(\mathbf{x}_r, t_r)$ is given explicitly by the following expression:
$$\bar{C}(\theta) = \sum_{j=1}^{N} Q_j \int_{T_{b,j}}^{\min(T,\, T_{e,j})} C^*(\mathbf{x}_{s,j}, t_s \mid \mathbf{x}_r, t_r)\, dt_s. \tag{3.8}$$
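Equation (3.8) reduces the forward calculation for a candidate source model to a one-dimensional quadrature of the precomputed adjunct field over each source's on-time. A Python sketch follows (the container format for theta and the interface of the adjunct field C_star are illustrative assumptions on our part):

```python
import numpy as np

def predicted_concentration(theta, C_star, t_grid, T):
    """Predicted sensor concentration for source model m_N, cf. (3.8).

    theta  : list of N sources, each a dict with keys 'Q', 'xs', 'Tb', 'Te'.
    C_star : callable C_star(xs, t) -> adjunct concentration for this sensor.
    t_grid : (n_t,) monotone time grid spanning [0, T] for the quadrature.
    """
    C = 0.0
    for src in theta:
        lo, hi = src['Tb'], min(T, src['Te'])
        ts = t_grid[(t_grid >= lo) & (t_grid <= hi)]
        if ts.size < 2:
            continue                   # source off (or unresolved on-time)
        vals = np.array([C_star(src['xs'], t) for t in ts])
        # trapezoidal quadrature of the adjunct field over the on-time
        C += src['Q'] * np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(ts))
    return C
```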

4. Prior and Likelihood

4.1. Prior

Assuming that each parameter in $\theta$ (for source model $m_N$) is logically independent, the prior $\pi(\theta)$ can be factored as follows:
$$\pi(\theta) = \prod_{j=1}^{N} \pi(Q_j)\, \pi(\mathbf{x}_{s,j})\, \pi(T_{b,j})\, \pi(T_{e,j}). \tag{4.1}$$
In this paper, the component prior distributions are assigned uniform distributions over an appropriate range for each source parameter. Furthermore, it is noted that the parameter ranges specified in the prior define the region in $\mathbb{R}^n$ ($n \equiv 6N$) over which the evidence integral is carried out [cf. (2.2)]. More specifically, for $j = 1, 2, \ldots, N$: $Q_j \sim \mathcal{U}([0, Q_{\max}])$, where $Q_{\max}$ is an a priori upper bound on the emission rate; $\mathbf{x}_{s,j} \sim \mathcal{U}(\mathcal{R})$, where $\mathcal{R} \subset \mathbb{R}^3$ is the a priori specified spatial domain that is assumed to contain the source $\mathcal{S}$; and $T_{b,j} \sim \mathcal{U}([0, T_{\max}])$ and $T_{e,j} \sim \mathcal{U}([T_{b,j}, T_{\max}])$, where $T_{\max}$ is the upper bound on the time at which the source was turned on or turned off. Here, the symbol $\sim$ denotes "is distributed as" and $\mathcal{U}([a, b])$ is the uniform distribution defined on the closed interval $[a, b]$. Finally, note in the specification of the prior for $T_{e,j}$ that the lower bound of the parameter range is $T_{b,j}$, encoding the fact that the time at which the source is turned off must necessarily occur after it has been turned on.
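A draw from the factored uniform prior (4.1) is straightforward to implement. The sketch below (the parameter container and bounding-box format are our own illustrative choices) also enforces the ordering constraint $T_{e,j} \ge T_{b,j}$:

```python
import numpy as np

def sample_prior(N, Q_max, R_bounds, T_max, rng):
    """Draw one source-model sample theta(m_N) from the uniform prior (4.1).

    R_bounds : ((x_lo, x_hi), (y_lo, y_hi), (z_lo, z_hi)), the bounding box
               of the spatial domain R assumed to contain the sources.
    """
    theta = []
    for _ in range(N):
        Tb = rng.uniform(0.0, T_max)         # source-on time
        theta.append({
            'Q':  rng.uniform(0.0, Q_max),   # emission rate
            'xs': np.array([rng.uniform(lo, hi) for lo, hi in R_bounds]),
            'Tb': Tb,
            'Te': rng.uniform(Tb, T_max),    # source-off time follows source-on
        })
    return theta

# usage: rng = np.random.default_rng(0); theta = sample_prior(2, 100.0,
#        ((0, 100), (0, 500), (2, 2)), 600.0, rng)
```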

4.2. Likelihood

In the absence of a detailed knowledge of the distribution of the noise $e_i$ in (2.6), other than the fact that it has a finite variance $\sigma_i^2$, the application of the principle of maximum entropy [20] informs us that a Gaussian distribution is the most conservative choice for the direct probability (likelihood) of the concentration data $\mathcal{D}$. In consequence, the likelihood function $\mathcal{L}(\theta)$ has the following form:
$$\mathcal{L}(\theta) \equiv \mathcal{L}(\theta \mid \mathcal{D}, \varsigma) = \left[ \prod_{i=1}^{N_m} \frac{1}{\sqrt{2\pi}\, \sigma_i} \right] \exp\left( -\frac{1}{2} \chi^2(\theta) \right), \tag{4.2}$$
where
$$\chi^2(\theta) \equiv \sum_{i=1}^{N_m} \left( \frac{d_i - \bar{C}_i(\theta)}{\sigma_i} \right)^2. \tag{4.3}$$
In (4.2), the notation for the likelihood function was adjusted to include the standard deviations for the noise, $\varsigma \equiv (\sigma_1, \sigma_2, \ldots, \sigma_{N_m})$, in the conditioning to emphasize the fact that $\{\sigma_i\}_{i=1}^{N_m}$ are assumed to be known quantities.

Unfortunately, as mentioned previously, the noise term $e_i$ in (2.6) is rather complicated, arising as such from a superposition of both model and measurement errors. In consequence, reliable estimates for $\sigma_i$ ($i = 1, 2, \ldots, N_m$) are difficult to obtain in practical applications. In view of this, let us denote by $s_i$ the quoted estimate of the standard deviation for the noise term $e_i$, for which the true (but unknown) standard deviation is $\sigma_i$. Now, let us characterize the uncertainty in the specification of the standard deviation of $e_i$ with an inverse gamma distribution of the following form (or, equivalently, prior distribution for $\sigma_i$):
$$\pi(\sigma_i \mid s_i, \alpha, \beta) = \frac{2 \alpha^{\beta}}{\Gamma(\beta)} \left( \frac{s_i}{\sigma_i} \right)^{2\beta} \exp\left( -\frac{\alpha s_i^2}{\sigma_i^2} \right) \frac{1}{\sigma_i}, \quad i = 1, 2, \ldots, N_m, \tag{4.4}$$
where $\Gamma(x)$ denotes the gamma function and $\alpha$ and $\beta$ are scale and shape parameters, respectively, that define the inverse gamma distribution. The inverse gamma distribution for $\sigma_i$ has mean $\langle \sigma_i \rangle = s_i \alpha^{1/2}\, \Gamma(\beta - 1/2)/\Gamma(\beta)$ and variance $\mathrm{Var}[\sigma_i] = s_i^2 \alpha \left[ 1/(\beta - 1) - \Gamma^2(\beta - 1/2)/\Gamma^2(\beta) \right]$. Again, the parameters $\alpha$ and $\beta$ have been added to the PDF of the noise uncertainty in (4.4) to indicate that the values for these parameters are assumed to be known.

In view of (4.4), the true but unknown standard deviations $\sigma_i$ of $e_i$ that appear in (4.2) can be treated as nuisance parameters and eliminated by considering the integrated likelihood
$$\mathcal{L} = \mathcal{L}(\theta \mid \mathcal{D}, \mathbf{s}, \alpha, \beta) = \int \mathcal{L}(\theta \mid \mathcal{D}, \varsigma)\, \pi(\varsigma \mid \mathbf{s}, \alpha, \beta)\, d\varsigma = \int \mathcal{L}(\theta \mid \mathcal{D}, \varsigma) \prod_{i=1}^{N_m} \frac{2 \alpha^{\beta}}{\Gamma(\beta)} \left( \frac{s_i}{\sigma_i} \right)^{2\beta} \exp\left( -\frac{\alpha s_i^2}{\sigma_i^2} \right) \frac{1}{\sigma_i}\, d\varsigma, \tag{4.5}$$
where $\mathbf{s} \equiv (s_1, s_2, \ldots, s_{N_m})$ are the estimated standard deviations for the noise $(e_1, e_2, \ldots, e_{N_m})$ and $d\varsigma \equiv d\sigma_1\, d\sigma_2 \cdots d\sigma_{N_m}$. Now, substituting the form for $\mathcal{L}(\theta \mid \mathcal{D}, \varsigma)$ from (4.2) and (4.3) into (4.5) and performing the integration with respect to $\varsigma$, we obtain the integrated likelihood function given by
$$\mathcal{L}(\theta \mid \mathcal{D}, \mathbf{s}, \alpha, \beta) = \prod_{i=1}^{N_m} \frac{\alpha^{\beta}\, \Gamma(\beta + 1/2)}{\sqrt{2\pi}\, s_i\, \Gamma(\beta)} \left[ \alpha + \frac{\left( d_i - \bar{C}_i(\theta) \right)^2}{2 s_i^2} \right]^{-(\beta + 1/2)}. \tag{4.6}$$
The integrated likelihood of (4.6) can be interpreted simply as an average over all conditional likelihoods given $\varsigma$, weighted by their prior probabilities. In so doing, the integrated likelihood incorporates the uncertainties regarding the standard deviations for $e_i$ ($i = 1, 2, \ldots, N_m$).

The integrated likelihood function given by (4.6) depends explicitly on the hyperparameters $\alpha$ and $\beta$, for which values need to be assigned. In this paper, the values of $\alpha$ and $\beta$ are assigned as $\alpha = \pi^{-1}$ and $\beta = 1$. The assignment $\alpha = \pi^{-1}$ ensures that $\langle \sigma_i \rangle = s_i$, encoding our belief that our estimates $s_i$ of the standard deviation of $e_i$ are unbiased. Furthermore, the assignment $\beta = 1$ results in a very heavy-tailed distribution for $\pi(\sigma_i \mid s_i, \alpha, \beta)$, which allows significant deviations of the noise uncertainty from the quoted value of $s_i$ (provided by the user). Indeed, with the choice $\beta = 1$, the variance associated with $\pi(\sigma_i \mid s_i, \alpha, \beta)$ in (4.4) becomes infinite. The heavy tail of the distribution is chosen to account for possibly significant underestimations of the actual uncertainty (namely, the quoted uncertainty $s_i < \sigma_i$). This could arise from inconsistencies in the model concentration predictions owing to structural model error, or from "outliers" in the measured concentration data owing either to measurement error or perhaps to distortion of the measured concentration data by some unrecognized spurious (background) source.
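In practice, (4.6) is evaluated in log form to avoid numerical underflow. A Python sketch, with the hyperparameter assignments $\alpha = \pi^{-1}$ and $\beta = 1$ as defaults:

```python
import numpy as np
from scipy.special import gammaln

def log_integrated_likelihood(d, C_pred, s, alpha=1.0 / np.pi, beta=1.0):
    """Log of the integrated likelihood (4.6).

    d, C_pred, s : (N_m,) arrays of data d_i, model predictions C_i(theta),
                   and estimated noise standard deviations s_i.
    """
    # per-datum normalization constant of (4.6), in log form
    log_norm = (beta * np.log(alpha) + gammaln(beta + 0.5)
                - 0.5 * np.log(2.0 * np.pi) - np.log(s) - gammaln(beta))
    core = alpha + (d - C_pred) ** 2 / (2.0 * s ** 2)
    return np.sum(log_norm - (beta + 0.5) * np.log(core))
```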

In order to compute $Z_\epsilon(m_1)$, required in the evaluation of $K_{10}^{\epsilon}$ in (2.9), we need to specify a functional form for the likelihood $\mathcal{L}(\theta(m_1) \mid \mathcal{E}, \mathbf{s}_\epsilon)$ of the estimated mean residual data $\mathcal{E} \equiv \{\bar{\epsilon}_1, \bar{\epsilon}_2, \ldots, \bar{\epsilon}_{N_m}\}$ and the associated uncertainties in these data, $\mathbf{s}_\epsilon \equiv \{s_{\epsilon_1}, s_{\epsilon_2}, \ldots, s_{\epsilon_{N_m}}\}$. To this end, the integrated likelihood function of (4.6) is also used for the specification of $\mathcal{L}(\theta(m_1) \mid \mathcal{E}, \mathbf{s}_\epsilon)$:
$$\mathcal{L}(\theta(m_1) \mid \mathcal{E}, \mathbf{s}_\epsilon) \equiv \mathcal{L}(\theta(m_1) \mid \mathcal{E}, \mathbf{s}_\epsilon, \alpha, \beta) = \prod_{i=1}^{N_m} \frac{\alpha^{\beta}\, \Gamma(\beta + 1/2)}{\sqrt{2\pi}\, s_{\epsilon_i}\, \Gamma(\beta)} \left[ \alpha + \frac{\left( \bar{\epsilon}_i - \bar{C}_i(\theta(m_1)) \right)^2}{2 s_{\epsilon_i}^2} \right]^{-(\beta + 1/2)}. \tag{4.7}$$
Here, $\theta(m_1)$ are the parameters of a source model consisting of a single localized source, used in the test to determine if the mean residual data contain evidence for the presence of an additional source. Towards this objective, the likelihood function of (4.7) is used to calculate the evidence $Z_\epsilon(m_1)$ for this test as
$$Z_\epsilon(m_1) = \int_{\mathbb{R}^n} \mathcal{L}(\theta(m_1) \mid \mathcal{E}, \mathbf{s}_\epsilon, \alpha, \beta)\, \pi(\theta(m_1))\, d\theta(m_1), \tag{4.8}$$
where $n = 6$ (corresponding to the dimension of the parameter space for one source).

5. Computational Methods

There are two major computational problems that need to be addressed in order to fully specify the algorithm summarized in Section 2, namely: (1) specification of a method to draw samples from the posterior distribution $p(\theta \mid m_N, \mathcal{D}, I)$ (Steps 2 and 6) and (2) specification of a method for computation of the evidence $Z(m_1)$ (Step 3, cf. (2.2)) or $Z_\epsilon(m_1)$ (Step 5, cf. (4.8)).

The most popular method that can be used for sampling from $p(\theta \mid m_N, \mathcal{D}, I)$ (see (2.1)) is a stochastic algorithm referred to as Markov chain Monte Carlo (MCMC). There has been considerable effort expended to improve the efficiency and the chain mixing efficacy of MCMC sampling procedures in order to allow rapid chain convergence to, and efficient sampling from, the posterior distribution of $\theta$. Improvements to MCMC sampling methods include the introduction of an adaptive Metropolis algorithm by Haario et al. [21], the formulation of the differential evolution Monte Carlo algorithm by Ter Braak [22], the development of a differential evolution adaptive Metropolis algorithm (DREAM) by Vrugt et al. [23], and the design of multiple-try Metropolis algorithms by Liu et al. [24].

There are a number of options available for the computation of the evidences $Z(m_1)$ and $Z_\epsilon(m_1)$. Owing to the fact that the determination of $Z(m_1)$ and $Z_\epsilon(m_1)$ involves only the evaluation of a multidimensional integral in a low-dimensional space ($\mathbb{R}^6$ in this case), it is in principle possible to apply brute-force numerical integration [25] to address this problem. An alternative would be to apply a methodology that is applicable to the evaluation of the evidence in the general case, involving the evaluation of an overlap integral in a high-dimensional parameter space $\mathbb{R}^n$ ($n \gg 1$). These methodologies include importance sampling estimators [26] and thermodynamic integration [27].

Although there are various alternatives (leading to many different combinations) that could be used potentially to draw samples from $p(\theta \mid m_N, \mathcal{D}, I)$ and to evaluate $Z(m_1)$ and $Z_\epsilon(m_1)$, we have used instead a single methodology to do both. The methodology that is used in this paper to address both these problems is nested sampling, developed by Skilling [28] for the efficient evaluation of the evidence $Z$ in the general case. The nested sampling algorithm transforms the multidimensional evidence integral of (2.2) into a simple one-dimensional representation:
$$Z \equiv Z(m_N) = \int_0^1 \mathcal{L}(\chi)\, d\chi, \tag{5.1}$$
where
$$\chi(\lambda) = \int_{\mathcal{L}(\theta) > \lambda} \pi(\theta)\, d\theta \tag{5.2}$$
is the prior mass in the parameter (hypothesis) space enclosed within the likelihood contour $\mathcal{L}(\theta) = \lambda$ (namely, with likelihood greater than $\lambda$), and $\mathcal{L}(\chi)$ is the inverse function which labels the likelihood contour that encloses a prior mass $\chi$. If we evaluate the likelihood $\mathcal{L}(\chi)$ at a sequence of $m$ points $\chi_i$ ($i = 1, 2, \ldots, m$) with $0 < \chi_m < \chi_{m-1} < \cdots < \chi_2 < \chi_1 < 1$ and $\mathcal{L}(\chi_i) > \mathcal{L}(\chi_j)$ for $i > j$, the evidence $Z$ can be approximated from the likelihood-ordered samples as the following weighted sum:
$$\hat{Z} = \sum_{i=1}^{m} \mathcal{L}(\chi_i)\, \delta\chi_i, \qquad \delta\chi_i = \chi_{i-1} - \chi_i. \tag{5.3}$$

If the prior mass points $\chi_i$ are sampled in a logarithmic manner as $\chi_i = \prod_{j=1}^{i} t_j$, where $t_j \in (0, 1)$ is a shrinkage ratio, then the nested sampling algorithm consists of the following steps. The reader is referred to Skilling [28] for further details of the algorithm.

(1) Set $i = 0$, $\chi_0 = 1$, $Z_0 = 0$, and $f = 0.5$ (preset fraction used in the stopping criterion). Randomly draw $M$ samples $\theta$ from the prior $\pi(\theta)$ to give an ensemble of samples. Evaluate the likelihood $\mathcal{L}(\theta)$ for each of the samples in the ensemble.
(2) Increase $i \to i + 1$. Select the sample having the lowest likelihood (which we label as $\mathcal{L}_i$) in the ensemble and remove (discard) it. Shrink the prior mass to $\chi_i = \chi_{i-1} e^{-1/M}$.
(3) Draw a new sample $\theta$ from the prior $\pi(\theta)$ subject to the hard likelihood constraint $\mathcal{L}(\theta) > \mathcal{L}_i$, and add this sample to the ensemble of samples.
(4) Increment the evidence: $Z_i = Z_{i-1} + \mathcal{L}_i (\chi_{i-1} - \chi_i)$.
(5) If $\mathcal{L}_{\max} \chi_i < f Z_i$ ($\mathcal{L}_{\max}$ is the largest value of the likelihood in the ensemble of samples), add in the contributions to the evidence $Z_i$ from the ensemble of samples (the remaining $M$ samples that have not been discarded), and stop; otherwise, continue from Step 2.

In Step 5 of the algorithm, if the stopping criterion is satisfied, the estimate for the evidence $Z$ is completed by adding the contribution of the remaining $M$ samples in the ensemble to $Z_i$; namely, $\hat{Z} \approx Z_i + M^{-1} \left[ \mathcal{L}(\theta_1) + \mathcal{L}(\theta_2) + \cdots + \mathcal{L}(\theta_M) \right] \chi_i$, where $\mathcal{L}(\theta_k)$ ($k = 1, 2, \ldots, M$) are the likelihood values evaluated at the remaining $M$ samples. It is noted that the algorithm summarized above for estimation of $Z$ (the overlap integral of $\mathcal{L}(\theta)$ and $\pi(\theta)$) automatically provides weighted samples $\theta_k(m_N)$ drawn from the posterior distribution $p(\theta \mid m_N, \mathcal{D}, I) \propto \mathcal{L}(\theta)\, \pi(\theta)$ (cf. (2.1) and (2.2)). More specifically, the $i$th discarded sample in Step 2 of the algorithm can be interpreted as a sample drawn from the posterior distribution of $\theta$ with weight given by $p_i = \mathcal{L}_i (\chi_{i-1} - \chi_i)/\hat{Z}$, where $\hat{Z}$ is the estimate of the evidence obtained in Step 5 on termination of the algorithm.
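Putting the pieces together, a bare-bones version of the nested sampling loop (Steps 1 to 5 above, including the final contribution of the live ensemble and the posterior weights $p_i$) might be written as follows. This is a sketch only: for clarity the likelihoods are handled in linear space (a production code would work in logs throughout), and the constrained-prior sampler is left as a user-supplied callable, standing in for the MULTINEST-style procedure discussed next:

```python
import numpy as np

def nested_sampling(log_L, sample_prior, sample_constrained, M=300, f=0.5):
    """Nested sampling (Skilling) for the evidence Z: a minimal sketch.

    log_L              : callable(theta) -> log-likelihood.
    sample_prior       : callable() -> one draw theta from the prior pi(theta).
    sample_constrained : callable(logL_min) -> prior draw theta subject to the
                         hard constraint log_L(theta) > logL_min.
    Returns (Z, dead_samples, posterior_weights).
    """
    live = [sample_prior() for _ in range(M)]
    logL = np.array([log_L(th) for th in live])
    chi_prev, Z, i = 1.0, 0.0, 0
    dead, weights = [], []
    while True:
        i += 1
        k = int(np.argmin(logL))              # worst (lowest-likelihood) sample
        L_i = np.exp(logL[k])
        chi_i = np.exp(-i / M)                # logarithmic prior-mass shrinkage
        w = L_i * (chi_prev - chi_i)          # evidence increment, cf. (5.3)
        Z += w
        dead.append(live[k])
        weights.append(w)
        logL_min = logL[k]
        live[k] = sample_constrained(logL_min)  # replace the discarded point
        logL[k] = log_L(live[k])
        if np.exp(logL.max()) * chi_i < f * Z:  # stopping criterion (Step 5)
            break
        chi_prev = chi_i
    Z += chi_i * np.mean(np.exp(logL))          # remaining live-sample term
    return Z, dead, np.array(weights) / Z       # p_i = L_i * dchi_i / Z
```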

The key part of the algorithm is Step 3, involving drawing a sample from the prior $\pi(\theta)$ subject to a prescribed hard likelihood constraint $\mathcal{L}(\theta) > \mathcal{L}^*$. To this purpose, Feroz et al. [29] developed a very efficient algorithm (which the authors refer to as MULTINEST) for sampling from a prior within a hard likelihood constraint, involving a very sophisticated procedure for decomposition of the support of the likelihood above a given bound into a set of overlapping ellipsoids and then sampling from the resulting ellipsoids. The algorithm is appropriate for sampling from posterior distributions with multiple modes and with pronounced curving degeneracies in a high-dimensional parameter space.

Finally, the posterior odds ratio $K_{10}^{\epsilon}$ in (2.9) is a summary of the evidence for $H_1$ (additional source is present in the residuals) against $H_0$ (no source is present in the residuals). To interpret how strong (or weak) the evidence for $H_1$ against $H_0$ provided by the residual data $\mathcal{E}$ is, we use a reference scale suggested by Jeffreys [30]. In this scale, $\log(K_{10}^{\epsilon}) < 1$ corresponds to inconclusive evidence for $H_1$ against $H_0$ (or, the evidence for $H_1$ is not worth more than a bare mention and corresponds to a posterior odds ratio of less than about 3 to 1 in favor of $H_1$). On this same scale, $\log(K_{10}^{\epsilon}) > 5$ corresponds to very strong evidence for $H_1$ against $H_0$ (associated, as such, with a posterior odds ratio greater than about 150 to 1). In view of this scale for interpreting $K_{10}^{\epsilon}$, we choose the threshold $K^*$ for the posterior odds ratio (required to be specified in Step 2 of the algorithm summarized in Section 2) to be $\log(K^*) = 1$ (or, equivalently, $K^* \approx 2.7$).

6. Example: An Application of the Methodology

In this section, we apply the proposed methodology for inverse dispersion modeling of an unknown number of sources to a real dispersion data set obtained from a specific experiment conducted under the FUsing Sensor Information from Observing Networks (FUSION) Field Trial 2007 (FFT-07). The scientific objective of this field campaign was to acquire a comprehensive meteorological and dispersion dataset that can be used to validate methodologies developed for source reconstruction. Details of the instrumentation deployed and the experiments conducted in FFT-07 are given in Storwold [31], so only a brief summary of FFT-07 will be presented here. In particular, only the relevant details of the particular experiment that are required for the interpretation of the results in this paper are emphasized.

The experiments in FFT-07 were carried out in September 2007 at Tower Grid on US Army Dugway Proving Ground, Utah, about 2 km west of Camel Back Ridge on the Great Salt Lake Desert. The easterly through southerly drainage flows that predominate during the early morning hours at this site originate on the higher terrain to the southeast and are channeled by Camel Back Ridge. Generally, the terrain was flat, uniform, and homogeneous, consisting primarily of short grass interspersed with low shrubs with a height between about 0.25 and 0.75 m. The momentum roughness length, $z_0$, was estimated to be $z_0 = 1.3 \pm 0.2$ cm.

In all the experiments, propylene (C3H6) was used as the tracer gas. The concentration detectors used were fast-response digital photoionization (dPID) detectors. In the experiments, a plume was formed in the atmospheric surface layer by releasing propylene from one or more (up to a maximum of four) purpose-designed gas dissemination systems. The network (or array) of concentration detectors consisted of up to 100 dPIDs, arranged in a staggered configuration of 10 rows of 10 detectors, with the rows of detectors separated by 50 m and the detectors along each row spaced 50 m apart. The concentration detectors along the ten sampling lines in the array were placed at a height, 𝑧𝑑, of 2.0 m.

In the experiment used to test the inverse dispersion modeling methodology proposed herein, tracer gas was released continuously over a period of 10 min from four source locations at a height, 𝑧𝑠, of 2.0 m: (1) source 1 is at (𝑥𝑠,𝑦𝑠)=(33.0,171.0) m; (2) source 2 is at (𝑥𝑠,𝑦𝑠)=(33.8,240.7) m; (3) source 3 is at (𝑥𝑠,𝑦𝑠)=(30.0,312.9) m; (4) source 4 is at (𝑥𝑠,𝑦𝑠)=(26.0,384.4) m. The coordinates reported here for the source locations are referenced with respect to a local Cartesian coordinate system. Unfortunately, in this experiment, only the mass flow controller for source 3 functioned properly. The mass flow controllers for sources 1, 2, and 4 failed to properly regulate the flow, so the emission rates from these sources were unknown.

A three-dimensional sonic anemometer, placed at the 2 m level of a lattice tower located upwind of the array of concentration detectors, was used to characterize the background micrometeorological state of the atmospheric surface layer. For this experiment, the horizontal mean wind speed $S_2$ at the 2 m level, the friction velocity ($u_*$), the atmospheric stability (Obukhov length $L$), and the standard deviations of the velocity fluctuations in the alongwind ($\sigma_u$), crosswind ($\sigma_v$), and vertical ($\sigma_w$) directions were $S_2 = 3.61$ m s$^{-1}$, $u_* = 0.282$ m s$^{-1}$, $L = 27.3$ m, $\sigma_u/u_* = 2.33$, $\sigma_v/u_* = 1.86$, and $\sigma_w/u_* = 1.10$. The mean wind direction in the experiment was normally incident on the detector array (namely, the mean wind direction corresponded to a wind from the $+x$-direction and, hence, was perpendicular to the sampling lines of detectors along the $y$-axis).

The wind velocity and turbulence statistics were used in conjunction with Monin-Obukhov similarity theory relationships [32] as input to the atmospheric dispersion model (namely, the backward-time LS particle trajectory model described briefly in Section 3) for provision of the predicted concentration $\bar{C}$ (cf. (3.4)). More specifically, the backward-time LS model of (3.5) and (3.6) was applied with the Kolmogorov constant $C_0 = 4.8$, a value which was recommended by Wilson et al. [33] from a calibration of the model against concentration data obtained from the Project Prairie Grass atmospheric dispersion experiments.

The example used here to illustrate the inverse dispersion methodology involves continuously emitting sources, so the relevant source parameters for source model $m_N$ are $\theta \equiv \theta(m_N) = (\mathbf{x}_{s,1}, Q_1, \ldots, \mathbf{x}_{s,N}, Q_N)$. Furthermore, it is assumed that the height of the sources above ground level ($z_s = 2.0$ m) is known a priori, so the only unknown location parameters are the $(x_s, y_s)$ coordinates of the sources. Owing to the horizontal homogeneity of the mean flow and turbulence statistics in the current example, the adjunct concentration $C^*(\mathbf{x}_s \mid \mathbf{x}_d)$ in (3.4) can be precalculated for one detector location $\mathbf{x}_d$ (for the known source height $z_s$), with the adjunct concentration $C^*$ (considered as a function of $\mathbf{x}_s$) at all other detector locations obtained simply by a linear translation of $C^*(\mathbf{x}_s \mid \mathbf{x}_d)$ in the horizontal $(x, y)$-plane.

In order to calculate the likelihood function given by (4.6), we need to provide values for the estimated standard deviations $s_i$ of the noise $e_i$. The noise error variance $\sigma_i^2$ includes the sensor sampling error variance, $\sigma_{d,i}^2$, in the measurement of the concentration datum $d_i$ and the model error variance, $\sigma_{m,i}^2$, in the prediction of $\bar{C}_i$. The two contributions are combined in quadrature to give the noise error variance $\sigma_i^2 = \sigma_{d,i}^2 + \sigma_{m,i}^2$. The measurement error standard deviation is estimated as $\sigma_{d,i} \approx s_{d,i} = \max(0.05, 0.02\, d_i)$ ppm (parts per million by volume), where the lower limit of 0.05 ppm represents the precision in the concentration measurements using the dPIDs. The model error standard deviation is estimated to be 20% of the predicted value of the concentration $\bar{C}_i(\theta)$ (namely, $\sigma_{m,i} \approx s_{m,i} = 0.20\, \bar{C}_i(\theta)$). In consequence, the estimated noise error standard deviation is given by $\sigma_i \approx s_i = (s_{d,i}^2 + s_{m,i}^2)^{1/2}$.
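For this example, the estimated noise standard deviations can be assembled in a few lines (a sketch; units are ppm, matching the concentration data):

```python
import numpy as np

def noise_std(d, C_pred):
    """Estimated noise standard deviations s_i for the FFT-07 example.

    Combines, in quadrature, the measurement error
    s_d = max(0.05, 0.02*d) ppm and the model error s_m = 0.20*C_pred.
    """
    s_d = np.maximum(0.05, 0.02 * d)
    s_m = 0.20 * C_pred
    return np.sqrt(s_d ** 2 + s_m ** 2)
```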

The algorithm described in Section 2 was used for the source reconstruction, applied to the concentration data $\mathcal{D} \equiv \{d_1, d_2, \ldots, d_{N_m}\}$ obtained from 62 detectors in the array ($N_m = 62$) (see Figure 1, which shows the locations of the 62 detectors as filled squares). The hyperparameters defining the prior $\pi(\theta)$ have been chosen as follows: $Q_j \sim \mathcal{U}([0, Q_{\max}])$ with $Q_{\max} = 100$ g s$^{-1}$, and $(x_{s,j}, y_{s,j}) \sim \mathcal{U}(\mathcal{R})$ with $\mathcal{R} = [0, 100]\ \mathrm{m} \times [0, 500]\ \mathrm{m}$ for $j = 1, 2, \ldots, N$.

After Steps 2 and 3 of the algorithm were completed, we found $\log(K_{10}) = \log(Z(m_1)/Z(m_0)) = 64.0 \pm 0.2$, implying that the concentration data strongly support the source model $m_1$ (one localized source, or "signal") against the alternative source model $m_0$ (no localized source, or "no signal"). In other words, at this point in the algorithm, a source for the concentration has been detected in the data $\mathcal{D}$. Owing to the fact that $\log(K_{10}) > \log(K^*) = 1$, the algorithm continues to the iterative loop involving Steps 4 to 6. This iterative loop executes four times before it is terminated. The results of the execution of each of these four iterations are exhibited in Figure 1(a) ($N = 1$), Figure 1(b) ($N = 2$), Figure 1(c) ($N = 3$), and Figure 1(d) ($N = 4$). The caption of Figure 1 summarizes the values for $\log(K_{10}^{\epsilon}) = \log(Z_\epsilon(m_1)/Z_\epsilon(m_0))$ obtained from testing the alternative hypothesis $H_1$ (additional source present in the residual data) against the "null" hypothesis $H_0$ (no additional source present in the residual data). Each panel in Figure 1 also displays a density plot of samples of the source model $m_N$ drawn from the posterior distribution $p(\theta \mid m_N, \mathcal{D}, I)$ and projected onto the $(x, y)$ horizontal plane (namely, each point in the plot corresponds to the $(x, y)$ location of a sample of the source model $m_N$ drawn from the posterior distribution). Note that the algorithm terminates with $N = \hat{N} = 4$ and $\log(K_{10}^{\epsilon}) = 0.92 \le \log(K^*) = 1$, providing the correct number of localized sources in the source distribution ($N = 4$ in this case).

The samples of source models $m_{\hat{N}}$ ($\hat{N} = 4$) drawn from the posterior distribution $p(\theta \mid m_{\hat{N}}, \mathcal{D}, I)$ in the last iteration of the algorithm (see Figure 1(d)) can be used to determine the characteristics (location, emission rate) of the four localized sources. Towards this objective, Figure 2 displays the univariate (diagonal) and bivariate (off-diagonal) marginal posterior distributions for the parameters of each source. For each univariate marginal posterior distribution of a parameter for a given source, the solid vertical line indicates the true value of the parameter (for this source), and the dashed vertical line corresponds to the best estimate of the parameter (for this source) obtained as the posterior mean. For the bivariate marginal posterior distribution of parameters of a given source, the solid square represents the position of the true source parameter values, and the solid circle indicates the best estimate of the true source parameter values obtained as the posterior means. The posterior mean, the posterior standard deviation, and the lower and upper bounds of the 95% highest posterior density (HPD) interval of the source parameters for each identified source are summarized in Table 1. Comparing these estimated values of the source parameters, it can be seen that the proposed algorithm has adequately recovered the true parameters for each source (where these are known) to within the stated uncertainties.

7. Conclusions

In this paper, we have proposed a Bayesian inference approach for addressing the inverse dispersion modeling of an unknown number of sources using model comparison (selection). The integrations required to compute the evidence $Z(m_N)$ for $N > 1$ can be very computationally demanding (as well as technically difficult), and, as a consequence, we developed an efficient and robust algorithm for model comparison that recursively removes the influence of a source model $m_N$ from the measured concentration data $\mathcal{D}$ and tests the resulting residual data to determine if these residuals are consistent with the estimated noise level. This test requires nothing more than the computation of the evidence $Z_\epsilon(m_N)$ for $N = 1$, which is usually computationally simple. The procedure finds the minimum number of localized sources necessary to represent the concentration signal in the data $\mathcal{D}$ down to the estimated noise level. Furthermore, the uncertainty in the estimated noise level (which includes contributions from both measurement and model errors) is treated by using an integrated likelihood, implemented with a specific prior (inverse gamma distribution) to represent the uncertainty in the estimated noise level. Nested sampling is used for the evidence computation, as well as for sampling from the posterior distribution $p(\theta \mid m_N, \mathcal{D}, I)$, in the proposed algorithm for inverse dispersion modeling.

The new algorithm has been applied successfully to a real concentration dataset obtained from an atmospheric dispersion experiment conducted in the FFT-07 field campaign, consisting of the simultaneous release of a tracer from four sources. It is shown that the proposed algorithm performed well for this example: the number of sources was determined correctly ($\hat{N} = 4$), and for each of the identified sources the corresponding parameters (e.g., location, emission rate) were estimated reliably, along with a determination of the uncertainty in each parameter (in the form of either a standard deviation or a credible interval that encloses a prespecified posterior probability mass). The methodology proposed herein offers a simpler alternative for the inverse dispersion modeling of an unknown number of sources than that addressed previously [13, 14] as a generalized parameter estimation problem, the latter of which necessarily involves the complexities in the design of an appropriate reversible-jump MCMC sampling algorithm to allow transdimensional jumps in the generalized parameter space (namely, jumps between the parameter spaces of source models $m_N$ of different dimensions involving different numbers of sources $N$).