Abstract
We develop a mathematical modeling approach to evaluate the effectiveness of a Bayesian search for objects in cases where the target exhibits ancillary dependencies. These dependencies occur in situations where there are multiple search passes of the same region, and they represent a change in search probability from that predicted using an assumption of independent scans. This variation from independent scans is typically found in situations of advanced detection processing due to fusion and/or collaboration between searchers. The framework developed is based upon the evaluation of a recursion process over spatial search cells, and the dependencies appear as additive utility components within the recursion. We derive expressions for evaluating this utility and illustrate in detail some specific instantiations of the dependency. Computational examples are provided to demonstrate the capabilities of the method.
1. Introduction
The planning of searches for objects of uncertain disposition is a classical problem in military operations research. Historically, such searches are conducted by a single platform examining different regions of the space over time. This has led to a classical search theory methodology that provides an analytical basis for evaluating potential searches a priori. When such searches are represented parametrically, the evaluation can be computationally very efficient. The problem of optimal search involves the mathematical determination of these search parameters in order to maximize this search effectiveness. These modeling approaches have been limited by the requirement to obtain analytical solutions for computational exigency, yet have served well as appropriate-fidelity models of historical search practice. Modern search platforms, however, can store past search information and, thus, fuse the overlapping “looks” of the same region to improve performance. Unfortunately, these multipass search dependencies are not consistent with the independence assumptions that are explicit in the conventional analytical formulations of search theory.
The independence-based perspective of conventional approaches to modeling search effectiveness considers the target as an object whose presence can be ascertained only through the proximity of a searcher to that object. However, modern search systems exhibit many more dependencies, beyond simple proximity, that affect search performance. The type of search dependency we are concerned with occurs when the target exhibits some sort of ancillary dependency on the particulars of the search platform's engagement. These dependencies violate the independence assumptions of the classical analytical approach to search theory.
With modern computing capabilities, there is an opportunity to consider a numerical approach to search evaluation that incorporates a Bayesian update of the likelihood of finding an object under a grid representation of the search region. Previous computational limitations prohibited the examination of these grid approaches, which necessarily require extensive computer storage. In this paper, we develop a mathematical model of search that allows for the incorporation of multiple pass dependencies. The model is based on recursively updating a geometric likelihood structure that represents the search success. We illustrate an efficient computational process for determining search effectiveness utilizing this modeling framework. Examples that illustrate the model are provided for some notional dependencies, and the results are demonstrated with computer simulations.
2. Classical Approaches to Search Modeling
The classical theory of search was initially developed by Koopman [1] to examine the search for randomly located objects within a large search space. That work was furthered by many others over many years, as summarized in Benkoski et al. [2] and the references therein. From a modeling perspective, these extensions allowed the examination of more complicated scenarios, such as accounting for the effects of motion and for the effects of multiple targets. While the extension to two-sided games for evading targets is well studied, we focus only on fixed nonreactive search objects, and thus do not consider those extensions. However, the classical one-sided search problem still encompasses a variety of probability questions, as noted by Nakai [3]. These problems include the detection search problem, the information search problem, and the whereabouts search problem. While different from a design and optimization standpoint, from an evaluation standpoint all of the preceding problems focus on the sequential evaluation of object detection likelihood over the search space.
From a system design perspective, search theory allows the development of improved courses of action for limited search resources. Given models that determine the effectiveness of arbitrary search distributions, one can formulate the problems of optimal search, which lead to “best” allocations of search effort for maximizing the search goal. As clearly pointed out by Washburn [4], the problem of optimizing the search for a stationary object becomes a distribution of effort problem, for which a number of solutions exist (see [5, 6] for an overview). However, many of these search optimization problems are computationally difficult [7], and approximation methods are often employed; in particular, computationally efficient cell-based methods for search allocation are common [8].
When the searchers are moving yet the target remains fixed, the kinematics of the search platform limit the achievable states and thus provide a constraint on the optimal solution. Many practical problems involve long durations with relatively narrow search swaths. This leads to problems of path formulation, as in Reber search theory (as described in [9]) which examines the achievable performance over long times given a relatively narrow search swath. Even when paths are fixed, benefits can be achieved if one adjusts the sensor gains dynamically [10]. However, all of these approaches to improved search performance hinge on the underlying mathematical model of probabilistic search performance that is employed.
When modeling the expected performance of a given search, density representations of search objects are often employed. This has been done either due to physical complications of multiple objects [11], uncertainty of the number of discrete objects [12], or a desire to search for an object whose natural representation is density-based [13]. In all these cases, the density approaches provide a natural likelihood structure for the underlying process of search. In the search context, the density approach extends to more complicated search problems, such as the introduction of false target objects [14] or the added uncertainty of unknown searcher performance [15]. Furthermore, the likelihood formulations extend readily into the problem of (non-reactive) moving targets [16], although that complication leads to problems of optimal control which are beyond the scope of this paper. We examine the problem of one or more searchers seeking a set of objects with uncertain disposition. As opposed to other decision-theoretic methods [17], we focus on creating a sequential likelihood update process for given search paths and anticipated searcher performance. These sequential likelihood updates are similar to other approaches to sequential likelihood updating as found in receding horizon estimation [18]. When applied to geographic maps of performance, the sequential likelihood update process creates a geographic form of Bayesian estimation, which has been successfully applied to areas such as robot localization [19] and search-and-rescue [20, 21]. By formulating our numerical approach as a sequential likelihood update over a common geographical partition, we have developed a model that is scalable with respect to complex application-specific variabilities. This capability augments the limited parametric considerations found in other approaches.
This new approach to recursive search performance prediction accounts for complex multiple pass search operations, and thus provides a foundation for future work on optimal planning of coordinated search efforts.
3. Search Modeling for Multiple Search Passes
Performance evaluation models that are applicable to multiple pass search operations must possess enough flexibility to account for dependencies inherent within the dynamics of collaborative search yet be simple enough to promote the computational efficiency necessary for extended usage in planning. We model the search as an interrogation over a set of geometric grid cells. We choose a grid partition of the search space as a means to account for variability encountered during the search that is not readily articulated in closed form. This variability may present itself as spatial variations in object placement likelihood or in the sensor's capacity to detect objects. The variability may also be exhibited in the spatial coverage projected by various search plans. It can be manifested by irregularity in hypothesized search path trajectories or as a distribution in the number of search passes conducted over the space. The extent of the variability dictates the specification of the grid such that the quantities are approximately static within each grid cell. This enables us to avoid any need for segmentation within the evaluation process and to keep the numeric calculation of performance to its simplest realization. We do not impose any kinematic constraints on the cell structure as search paths can be considered an input to the model. Rather, the kinematics of searcher motion are naturally translated into a sequence of cell visitations.
While we employ the grid construct in a two-dimensional search paradigm in this paper, the approach readily extends to higher dimensions in any of the searcher parameters subject to optimization. In particular, three-dimensional spatial constructs are a natural extension of the approach provided that likelihood variability restrictions on grid specification are maintained. By using a Bayesian update framework, we develop expressions for the sequential update of search probabilities over the cell visitation sequence in a manner that retains the ability to include nontrivial multipass search dependencies. We furthermore restrict our attention to cases of fixed search objects; the extension to moving search objects is a subject of future study.
This modeling approach to search evaluation is intended to address the search for multiple objects. In the following subsections, we provide a quantification of search effort in multiple pass searches. For this development, we revert to a single object placement density as a fundamental cell characteristic applicable to either a set of distinct object density functions or to a common density representative of objects that are independent and identically distributed.
3.1. Cell-Based Representations of Performance
Let represent the event that a search has successfully located the object of search (i.e., when search is successful and otherwise). Define the global detection probability map as the spatial representation of the search detection likelihood function . This function (defined on the subset of that corresponds to our search region) represents the probability that an object located at would be found when the searcher conducts a search at location (as in ). For an object of search that is located in the search region according to the density function , the probability of the search being successful is then given by
Equation (3.1) represents the search effectiveness as a simple marginalization of the global detection probability map over the search object location density . The development of prior representations of these search object location density functions for problems of practical interest has been previously reported by the authors [22]. Thus, by maintaining careful geometric representations of the evolution of these spatial densities throughout the search evaluation, we develop a search model with flexibility to handle a variety of modeling complexities.
Fundamentally, the evaluation of search dependency is a problem in spatial processing of multiple looks over regions. As such, we consider a cell-based decomposition of the finite search region into a finite set of cells , such that the complete set of cells forms a partition on the search region . Thus, this implies the relationships and . In simple convex geometries such as those typically found in spatial search problems, these regions generally form a simple grid of the space . However, any finite partitioning of the search region is allowed, and a particular choice of partitioning is application-dependent. Consider a two-dimensional search evaluation over the cells . We assume that the object is located somewhere in the search region, and specifically concern ourselves with examining the probability that a search of the cell that contains the object is successful. By focusing on the cell that contains the object, the object location density may be mapped to the cell-specific object location density as
We note that, by this definition, the cell-specific object location density is necessarily equal to zero in cells that have no likelihood of containing the object, as expected.
The search evaluation function of (3.1) now reduces to
where
represents the search effectiveness of the use of the search effort against target object over the specific cell . We note that this resulting value denotes a weighted spatial average of the detection likelihood function over the grid cells, where the weights represent the likelihood of the object being located in each cell. For a cell that is known to specifically contain the object, the integral is equal to one for and zero for all other 's, such that , as expected. Thus, the decomposition of (3.3) separates the problem of overall search evaluation into one of independent examination of search performance in each cell.
We shall assume that the grid resolution is fine enough that the variation in both the detection likelihood and the placement probability over a grid cell is small, so that a nominal constant value can be presumed for the cell. Observe that for , there is a probability of that an object will not be detected on the first search opportunity. It may, however, be detected on subsequent passes if a future segment of the search path covers this cell.
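As an illustrative sketch of this cell-wise decomposition (the function name and numbers below are our own, not the paper's), the overall search effectiveness is a placement-weighted sum of per-cell detection probabilities:

```python
# Minimal sketch of the cell decomposition (notation ours): overall search
# effectiveness as a placement-weighted sum of per-cell detection values.
def search_effectiveness(cell_placement, cell_detection):
    """cell_placement[i]: probability the object lies in cell i.
    cell_detection[i]: probability a search of cell i finds an object there."""
    return sum(P_i * p_i for P_i, p_i in zip(cell_placement, cell_detection))

# If the object is known to lie in one cell, the effectiveness reduces to
# that cell's detection probability, matching the remark above.
value = search_effectiveness([0.0, 0.0, 1.0, 0.0], [0.1, 0.2, 0.7, 0.3])
```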
3.2. Likelihood Functions for Multiple Passes
Let denote position within a sequence of search scans on the cell position obtained by a traveling observer. Let denote the event that first detection of an object occurs somewhere within the sequence of scans of cell . Furthermore, define the first detection probability as the probability that the first detection occurs within scan . The succession of these first occurrence probabilities develops sequentially as multiple scans of the cell materialize from the search plan, leading to
By modeling each scan's detection observation as an independent Bernoulli trial, the waiting time (i.e., the number of scans before detection occurs) for each cell follows a geometric distribution [23]. Then, the first detection probability becomes
for cell detection probabilities that are independent from pass to pass. This probability expression naturally incorporates both the temporal and spatial aspects of the search process (the spatial through and the temporal through ).
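The geometric waiting-time form for independent scans can be sketched numerically as follows (a minimal illustration; the function name is ours):

```python
# Sketch: first-detection probabilities over scans of one cell when each
# scan is an independent Bernoulli trial with detection probability p.
def first_detection_probs(p, n_scans):
    """P(first detection occurs at scan n) = p * (1 - p)**(n - 1)."""
    return [p * (1.0 - p) ** (n - 1) for n in range(1, n_scans + 1)]

probs = first_detection_probs(0.3, 5)
cumulative = sum(probs)  # equals 1 - (1 - p)**5, the multi-scan coverage
```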
When there exists a dependency between the multiple passes of a cell , the independence assumption of the Bernoulli trial is no longer valid. Let us assume that the cell detection probability varies from scan to scan for a given grid cell , such that . This may be due to an ancillary dependency, such as on sensor type or on proximity to the sensor. We define the complementary event of no detections through a sequence of scans as . Then the probability of no detections through the first scans of cell is given by . At a given scan number , the probability of achieving a first detection event in cell is given as the probability product of detecting during scan and not having detected up through scan . This leads to the relation
Similarly, the probability of continuing to not detect at scan is given by the probability product of not detecting during scan and not having detected up through scan , as in
Equation (3.8) is the fundamental recursion relation that guides the search evaluation. The initial value for this recursion relation (with scans designating the unsearched condition) is given by
Since the recursion is defined only on the nondetection probability (and not on the detection probability ), the initial probability for is not explicitly required. However, we note that (3.7) and (3.9) imply , as expected.
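The recursion pair in (3.7) and (3.8) can be sketched as follows (a minimal illustration in our own notation, with q the nondetection probability and the entries of p_seq the scan-dependent cell detection probabilities):

```python
# Sketch of the nondetection recursion: q_n = (1 - p_n) * q_{n-1}, q_0 = 1,
# with first-detection probability P1_n = p_n * q_{n-1} at scan n.
def multipass_recursion(p_seq):
    q = 1.0                           # q_0 = 1: unsearched condition
    first_detect = []
    for p_n in p_seq:
        first_detect.append(p_n * q)  # detect at this scan, missed before
        q *= (1.0 - p_n)              # still undetected after this scan
    return first_detect, q

# With a constant p the recursion reproduces the geometric form p(1-p)**(n-1).
fd, q_final = multipass_recursion([0.3, 0.3, 0.3])
```

The first-detection probabilities and the final nondetection probability always sum to one, which serves as a simple consistency check on an implementation.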
By computing the evolution of the grid cell detection function over successive passes as the search progresses, (3.8) is used to recursively update the probability of search object nondetection on a per cell basis. To obtain the first detection probability of any given cell at a given scan, the nondetection probability is applied to (3.7). The spatial aggregation of these per cell first detection probabilities (3.7) is then a summation (as in (3.3)) to obtain the aggregate performance at any time step within the search process. Thus, the probability likelihood maps given by and provide the fundamental mechanism for capturing the search performance information for multiple scans of a search region, whereby all other aggregate search performance measures can be simply derived.
We note that, in the case of independent scans, , such that the recursion of (3.8) is a linear homogeneous recursion equation with general solution form for some constants and . In this form, (3.8) is solved with , and the initial condition (3.9) is met with , leading to
and, correspondingly, the complementary first detection probability is given by
which is the same expression as (3.6) that was found by the Bernoulli trials for independent scans, as expected.
3.3. Utility Functions for Likelihood Updates
We next extend the detection likelihood modeling to include dependency on ancillary parameters that describe the interrelation between searcher and object properties. Such modeling may articulate random dependencies such as orientation angle of the search object or particular dependencies categorizing the capability of specific searchers to detect objects of a given type. Let denote a random variable that corresponds to the ancillary parameter that is an object property that is independent of both the scan and the placement of the search object (such as an orientation angle of an object). Furthermore, let denote the deterministic ancillary parameters of the searcher that are specific to the th scan (such as a specific searcher type). Let represent the probability that an object located at with random parameter would be found when the th scan of a search is conducted at given the scan parameter . Given a probability distribution of the random parameter , the marginal search detection likelihood function is given by
We note that the overbar in is used to differentiate it from which retains the dependency. Observe that, when , we have , and the detection likelihood depends only on placement . In such cases, if there are no additional scan-specific dependencies, then , and the expression for search detection is as previously defined, such that .
We presume (as indicated in (3.12)) that the ancillary parameter and the location at which the object is placed are independent random variables. The consequence of this assumption is that the probability likelihood may be represented as a mean component with a zero-mean perturbation; that is,
Here the search detection likelihood function is decomposed into a nominal value that varies over the search space (and may vary according to as well) and a perturbation that depends only on the ancillary parameters and . Again, for the simple case with no ancillary parameters, the search detection likelihood reverts to .
We next consider the evaluation of this search detection likelihood over a region that has been partitioned into subregions as described in Section 3.1. We focus our attention on a specific grid cell , such that the object location density has been rescaled to as in (3.2). Within this grid cell, the cell first detection probability associated with the first () pass of cell is now given as
where explicitly shows the dependence on . Since we generally expect any dependence on to be implied in the th pass detection probability, we simplify notation to with an implied dependence on . To further facilitate the exposition, we define an ancillary cell detection function that serves as a decision-theoretic utility function in for the th search pass of a location. Specifically, we let
However, has been defined in (3.13) to be a zero-mean perturbation term, so its integral over goes to zero, leading to . With that simplification, (3.14) becomes
In similar fashion, the first pass nondetection probability for cell is now given as
We note that the separation in (3.14) and (3.17) is enabled by the separation of terms in (3.13), and that these expressions are equivalent to the first terms of (3.7) and (3.8). While the additional definition of the ancillary cell detection function seems to be unnecessary, it will become useful in the following recursion terms.
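A small discrete stand-in for this first-pass marginalization (values and the function name are ours) shows why only the nominal detection value survives on the first pass: the zero-mean perturbation integrates out.

```python
# Sketch: on the first pass, marginalizing a zero-mean perturbation of the
# detection probability over its distribution leaves only the mean value.
def first_pass_detection(b, deltas, weights):
    """Discrete stand-in: detection probability b + delta, with delta a
    zero-mean random perturbation taking values `deltas` w.p. `weights`."""
    return sum(w * (b + d) for d, w in zip(deltas, weights))

p1 = first_pass_detection(0.4, [0.2, -0.2], [0.5, 0.5])  # recovers b = 0.4
```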
For subsequent passes over the grid cell, the perturbed likelihood equations are slightly more complicated. We assume the grid cell size is chosen to be small enough that is approximately constant over a cell, as in (3.18). Then, for the second pass, the equation for first detection takes the form
where
represents the second-order ancillary cell detection function. Recalling that for any , we have that
Note that (3.21) is similar to (3.7) with ; however, there is now an additional term (given by ) to account for the effects of the ancillary parameters defining the search. Similarly, the second pass recursion equation for nondetection becomes
In summary, for the second scan pass of the cell , we have
where the utility function represents the added utility of the search scan over that obtained with traditional independent passes. It is a result of marginalization over the control parameter for the given search scan perturbation function . This second pass utility function has two arguments, one for each ancillary parameter corresponding to each scan of the grid cell. More generally, we construct a set of functions that are readily calculated to assess search utility for any number of passes given the search path. It is desirable that these utility functions do not present unduly computational storage requirements associated with the detection and nondetection maps developed by the search evaluation.
The general form for the th scan cell nondetection probability becomes where we note that
and is defined as the th pass utility function. Note that the approximation in (3.24) comes from the approximation of (3.18) for spatial integrations over a grid cell. Similarly, for the th scan cell first detection probability, we have
with the same utility function . Thus, the fundamental nondetection recursion of (3.8) is now generalized to the form of (3.24), and the complementary equation for first detection of (3.7) is generalized by (3.26).
The form of the utility function found in (3.26) and (3.24) is explicitly given by
By multiplying out the product term, taking the integral over , and then rearranging terms, this function is written in the form with
representing the th ancillary function. Here represents an th pass general utility function with representing the set of all -tuples of indices from to (i.e., ), and representing a specific -tuple. For convenience, we rewrite the utility in the form , where the component utility function denotes the contribution of the th ancillary function to the total utility.
An important simplification of the utility function can be found when the nominal value of the search detection likelihood is independent of the scan parameter . In particular, for those cases when , (3.4) implies that for all , such that the component utility functions reduce to
a form that is found to be convenient in many practical computational examples. Because the ancillary functions may be computed and stored prior to any specific search evaluation, the forms in (3.28) and (3.30) are extremely computationally efficient.
3.4. Properties of Multipass Utility Functions
We next note some useful properties of the utility function that illustrate some features of ancillary dependency in search and also aid in the numerical evaluation. We first consider the case of noninteracting scans, that is, events whereby the detection performance of each scan is independent of the other scans. In such cases, we have the following lemma.
Lemma 3.1. For searches in which there is no scan-specific dependency , the utility function is a linear combination of the moments of the random perturbation component of detection likelihood .
Proof. Assume a search with no scan-specific dependencies . Then which leads to for all via (3.4). Furthermore, when there is no , we have which leads to from (3.29). Thus, each component is the th moment of . The form of the utility function in (3.30) now holds, and , where is the number of terms in the sum. Thus, the component utilities are given by , where is a constant that depends on and . Now, , which is a linear combination of the moments of .
This lemma naturally leads to the following theorem about the construction of zero-utility functions.
Theorem 3.2. If a search has no scan-specific dependencies , then the utility is zero through the th scan if the first moments of the random perturbation are zero.
Proof. The proof of this theorem follows from Lemma 3.1. Assume a search has no scan-specific dependencies . Furthermore, assume the first moments of are zero. Let be a vector of the first moments of . From Lemma 3.1, it is known that there exists a vector such that . However, so that .
An obvious case of the conditions in Theorem 3.2 is the case of no ancillary dependency at all. In such cases, there are no scan-specific dependencies and the random perturbation term for all . Thus, the conditions of the theorem are met and we have zero utility, as expected. However, there are conditions under which we may have no scan-specific dependencies , but still have a non-trivial , for which we have the following important corollary to Theorem 3.2.
Corollary 3.3. In searches with no scan-specific dependencies , there may still exist a non-zero utility if there are non-zero moments of the random perturbation .
The importance of this corollary is that a model may be constructed to incorporate effects that vary randomly over the scans, but have no scan-specific dependency associated with them. These effects are naturally modeled with the dependency in and can lead to non-zero utility, thus showing a change in search performance relative to the situation with no ancillary dependencies.
For the special case of repeated events, which are more restrictive than independent events, the utility can be used to show that the search effectiveness actually decreases. This is illustrated by the following theorem.
Theorem 3.4. For a search component comprised of repeated events, there is non-positive utility, that is, .
Proof. Assume a search component comprised of repeated events, such that for all . From (3.29), we have . Since is, by definition, a zero-mean real-valued function, we have that all of the odd moments of are also zero, specifically for odd. Since for repeated events, we have the form of (3.30) for component utility. From (3.30), we then have for odd. Thus, for even, and for odd. Since , we have . Furthermore, for all arguments. Thus, for all positive integer values of , and therefore .
This theorem is important since utility is an additive component of the standard independent event recursions. If a search is performed without independent examination, but instead a repeatable examination, then the benefits of multiple independent scans are lost, yet the utility formulation can be utilized to quantify this decrease in performance.
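The performance loss from repeated (fully correlated) looks can be checked numerically. The following sketch is our own discrete example, not the paper's model: with a shared zero-mean perturbation realized once and applied to every scan, the marginal nondetection probability exceeds the independent-scan prediction by Jensen's inequality, so detection can only decrease.

```python
# Sketch: with a shared perturbation delta (the same realization on every
# scan), the marginal nondetection probability is E[(1 - b - delta)**n],
# which by Jensen's inequality is at least (1 - b)**n. Detection therefore
# drops relative to independent scans, consistent with Theorem 3.4's
# non-positive utility.
def correlated_detection(b, deltas, weights, n):
    """Marginal detection probability after n fully correlated scans for a
    discrete perturbation taking values `deltas` w.p. `weights`."""
    nondet = sum(w * (1.0 - b - d) ** n for d, w in zip(deltas, weights))
    return 1.0 - nondet

b = 0.4
corr = correlated_detection(b, [0.2, -0.2], [0.5, 0.5], 3)
indep = 1.0 - (1.0 - b) ** 3
# corr <= indep: repeated looks forfeit the benefit of independent scans.
```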
The computation of utility becomes combinatorially complex as the number of passes of a cell increases. This is due to the summation over the components of , which has size . To reduce this computational burden, we utilize the following theorem that gives bounds on the magnitude of the component utility functions .
Theorem 3.5. For a search with cell detection probabilities that are independent of scan number, and with random perturbation bounded by , the component utility functions are bounded by
Proof. Consider a search with scan independent detection probabilities, such that . Then the component utility form of (3.30) holds. The function is a zero-mean function over a probability space that is bounded by , so the integral expression . Furthermore, the integral composed of the product of of these terms under the integrand is also bounded by . Thus, we have . The summation in (3.30) contains terms, such that the summation is bounded by . Substituting this into (3.30) yields , thus demonstrating the bound in the theorem.
4. Applications
In this section, we articulate the application of the utility-based likelihood structure for the evaluation of search performance. To do this, we first establish a constructive baseline whereby no ancillary dependency is exhibited. This is done to demonstrate the efficacy of the grid-based numerical calculation and to validate the asserted modeling assumptions. We follow this with exemplary cases exhibiting a respective discrete or continuous ancillary dependency. The discussion within the examples highlights the corresponding distinct considerations that these respective modeling paradigms present.
4.1. Example: Generic Search with Overlapping Scans
We first consider an example in which there are known analytical solutions. Consider the search for objects within a rectangular region using a ladder-type (or mowing-the-lawn) search pattern. In this case, the searcher is a simple searcher with no ancillary dependencies. The absence of an ancillary dependency allows the recursion in detection likelihood to be based solely upon the single search pass expected probability of detection.
Let the search over the partitioned placement space be defined to occur as a sequence of partial searches , where denotes a set of grid cell indices (for grid cells ) covered during the time interval over which the partial search is conducted. The time interval for partial search is chosen small enough so that no grid cell is visited more than once in that time interval. Sequence then corresponds to a temporal partitioning of the total search trajectory into nonoverlapping segments. For each interval , define a region centered about the partial search trajectory segment where detection is possible. Unfortunately, these detection regions generally overlap across adjacent search intervals. To preserve the notion of independent persistent detection observations within the search paradigm whereby the observation is interrogated only once during the partial search, we define the index sets as
Thus, we restrict the set of indices to be that set of grid cell indices that are newly covered by the time interval. This “sliver” of cells provides the subregion of the search space that has been additionally searched in the new partial search time interval. Multiple independent detection events are allowed to occur at a given cell only when the search path doubles back over itself (separated by at least one partial search time interval) or when distinct sensor platforms perform a coordinated search of the cell, which is not a concern in this example.
When multipass searches occur, a cell index appears at most once within any single partial search time interval but recurs over the course of multiple intervals. We consider cell-based search detection probabilities that are independent of , so that . Furthermore, by Theorem 3.2, the utility for this problem . Therefore, for this search, the cell-referenced recursions of (3.26) and (3.24) are given by
where we have replaced the approximation () with equality for ease of exposition. As shown in Section 3.2, this special case of constant-coefficient linear homogeneous recursion equations can be solved analytically to arrive at
The cumulative probability of detecting the target within cell up through the th scan of that cell is then given by
where is the number of searches of cell , and is the probability of the target being located in cell . The aggregate search probability for the search plan is now given by summing these individual cell probabilities
where the summation is performed over both the partial search time intervals and the spatially distributed grid cells . Furthermore, explicitly records how many times cell has been searched up to (and including) search interval . Thus, the search evaluation properly accounts for both the temporal and spatial aspects of the complex search problem.
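The aggregation above can be sketched numerically. This is a minimal illustration under assumed values, not the paper's computation: each cell i carries a placement prior, a pass count, and a constant single-pass detection probability p, and the cumulative detection probability after n independent passes of a cell is 1 - (1 - p)^n.

```python
# Illustrative sketch of the cell-referenced aggregation: per-cell cumulative
# detection after n independent passes is 1 - (1 - p)^n, weighted by the
# placement prior and summed over cells. All values below are made up.

def aggregate_search_probability(prior, passes, p):
    """prior[i]: probability the target is located in cell i.
    passes[i]: number of completed scans of cell i.
    p: single-pass detection probability (constant over cells)."""
    return sum(g * (1.0 - (1.0 - p) ** n) for g, n in zip(prior, passes))

# Example: four equally likely cells, two of which have been scanned twice.
prior = [0.25, 0.25, 0.25, 0.25]
passes = [1, 1, 2, 2]
print(aggregate_search_probability(prior, passes, 0.5))  # prints 0.625
```

The twice-scanned cells contribute 0.25 × 0.75 each rather than 0.25 × 1.0, which is the reduced multipass accumulation rate discussed below.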
We next consider the numerical evaluation of the search probability compared to known theoretical benchmarks. The search path employed is the vertical ladder path depicted in Figure 1. Nominally, such search path construction would extend beyond the search region so that all the space is covered. We employ the internal ladder-type search paths shown in Figure 1 to allow a comparison of the grid-based numeric calculation with theoretical results. We consider a region with objects placed according to a distribution function . The theoretical baseline comprises the probability of detecting objects within the search path, given by (see [24])

where is the area of the search region.
In Figure 2, we show the performance of the theoretical and aggregated numerical search for this problem as black and red curves, respectively. The individual curves illustrate different values of for the probability of detection for a scenario with . The baseline theoretical curve is derived under an assumption of single-pass coverage. Initially, the theoretical and numerical curves are nearly identical, as the paths do not overlap. After 4 hours of search time, however, path overlap commences and the curves deviate. From that point on, search probability accumulates at the reduced rate given by the multipass recursion probability.

4.2. Example: Search for Multiple Object Types
This example illustrates utility functionals that apply over extended discrete likelihood structures. These extensions arise from a variation in sensor detection performance due to a specialization in detection characteristics according to search object type. That is, certain sensors perform better against certain target types, and collaboration between sensor platforms may be utilized to maximize the overall detection performance of the search group.
In this case, the ancillary random variable is the search object type; that is, for discrete object types. The ancillary deterministic parameter represents the searcher type that has been deployed to conduct search. The dependency manifests itself as a conditional probability of detection for each of the search object types. The discrete ancillary random variable (equivalent to (3.12)) for this sensor-specific detection likelihood takes the form of a mixture over possible search object types, as
where denotes the searcher type that is deployed for the th search pass. The resulting detection likelihood function represents a marginalization over search object type aggregating the searcher/object-specific combinations. In this case, the placement likelihood over the space may vary for each of the respective search object types. The conditioning on acknowledges a possible variation in search object composition over the search space.
Analogous to (3.13), the detection likelihood function is formulated as a mean value with an additive variational quantity symptomatic of the ancillary dependency being modeled. That is, the likelihood function becomes
Assume as before that the grid resolution is selected such that likelihood variations within the grid cell are insignificant, yet the scale is large enough to ensure the independence of detection events over the grid. Assume as well that the mixture weights do not vary within the grid cell; that is, . Then, the first pass grid cell detection probability attained when deploying searcher becomes
with the ancillary dependency condition
holding due to the definition of as a zero-mean perturbation term. The corresponding first pass grid cell probability of nondetection becomes
yielding similar results to the previous sections.
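The first-pass mixture computation can be made concrete with a short sketch. The weights and per-type detection probabilities below are assumed for illustration only; the paper's symbols for them are not reproduced here.

```python
# Hedged sketch of the type-mixture detection model: the single-pass detection
# probability marginalizes the searcher/object-specific probabilities over the
# (cell-constant) object-type weights. Values are illustrative, not the paper's.

def first_pass_detection(w, pd):
    """w[j]: probability the search object is of type j (sums to 1).
    pd[j]: deployed searcher's detection probability against type j."""
    return sum(wj * pj for wj, pj in zip(w, pd))

w = [1 / 3, 1 / 3, 1 / 3]   # equally likely object types
pd = [0.7, 0.5, 0.3]        # searcher specialized toward type 0
p1 = first_pass_detection(w, pd)
print(p1, 1.0 - p1)         # detection and nondetection probabilities
```

Here the zero-mean perturbation terms cancel in the marginalization, so the first pass sees only the mean value, consistent with the ancillary dependency condition above.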
To develop the probability functions for further passes, we proceed with the utility function development by constructing a set of ancillary functions . The ancillary function is explicitly given by
and the general form for the term is given by
where is a -tuple used in the utility form of (3.28). These terms are calculated directly and applied to either (3.24) or (3.26) to realize the likelihood recurrence over the search field.
As a numerical example of searcher specialization, consider the case where a single grid cell is searched multiple times by the same searcher with mean detection probability . Let represent a maximum deviation from this mean value as the search object type is varied, where serves as a scale factor indicative of the variability. Let the search paradigm consist of finding three possible object types with the variation in detection probability given by . Figure 3 illustrates the negative impact that searching the grid cell with the same searcher type can have on the resulting multipass search effectiveness. In this example, each object type is equally likely (i.e., for ). The search probability (as given by the sequence ) and the corresponding utility are depicted for each value in the set . The intent is to show the impact of the size of the variation from the mean detection likelihood on the multipass detection probability. As this example presumes only one searcher type (i.e., is constant), all associated utilities are negative.


4.3. Example: Search with Target Orientation Dependency
We next introduce an example in which the detection likelihood dependency takes the form of a continuous random variable. Here, we impart a dependency on detection due to the angular separation between the searcher and search object orientations. For instance, in optical sensing, objects that present a significant shadow are considered more detectable, and that shadow depends on object orientation relative to the searcher look direction. As an example, consider a sinusoidal representation of detection likelihood in the form of (3.13) given by
where is a cell-specific constant indicative of the size of the variation, denotes search object orientation, and denotes searcher orientation during search pass of cell . This functional representation allows for maximum detection probability when the object orientation is aligned with the searcher motion axis (i.e., when the scans are perpendicular to the object orientation).
Let search object orientation angle be denoted as with over that interval. Observe for these conditions that
That is, as expected, there is no utility in specifying the searcher axis to address this random object orientation for the first pass over the grid cell.
Using the trigonometric identity
the ancillary function takes the form
with corresponding utility . We note that, in general, the second pass detection probability is maximized when utility is maximized. This occurs for this sinusoidal model of orientation dependency when or simply when and the first two passes of the searcher are at right angles to each other.
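Both claims, zero first-pass utility and maximal second-pass utility at right angles, can be verified numerically. The concrete functional form below is an assumption chosen to match the stated sinusoidal behavior (mean value plus a zero-mean orientation term); the amplitude and grid are illustrative.

```python
import math

# Hedged numerical check of the orientation-dependent model, using an assumed
# concrete form p_k(theta) = p_bar + b * cos(2 * (theta - phi_k)), where theta
# is the (uniform) object orientation and phi_k the pass-k searcher orientation.
# p_bar and b are illustrative values, not taken from the paper.

p_bar, b = 0.5, 0.2
thetas = [math.pi * (i + 0.5) / 1000 for i in range(1000)]  # uniform on [0, pi)

def pdet(theta, phi):
    return p_bar + b * math.cos(2.0 * (theta - phi))

def mean_first_pass(phi):
    """Expected single-pass detection, averaged over object orientation."""
    return sum(pdet(t, phi) for t in thetas) / len(thetas)

def mean_two_pass_nondetection(phi1, phi2):
    """Expected two-pass nondetection for searcher orientations phi1, phi2."""
    return sum((1 - pdet(t, phi1)) * (1 - pdet(t, phi2)) for t in thetas) / len(thetas)

print(mean_first_pass(0.0))                         # ~p_bar: zero first-pass utility
print(mean_two_pass_nondetection(0.0, 0.0))         # same orientation: worst case
print(mean_two_pass_nondetection(0.0, math.pi / 2)) # right angles: minimized
```

Under this assumed form, the cosine term averages to zero on the first pass, while the two-pass cross term averages to (b²/2)·cos 2(φ₁ - φ₂), so nondetection is minimized, and utility maximized, when the passes are orthogonal.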
We note that the third ancillary function for the likelihood function of (4.14) is given by
and thus . However, the utility function for the third pass still has non-zero components, and from (3.30) we have
where we recall that represents the contribution of the ancillary function to the third pass utility function . In general, the contribution for the th pass utility function using this sinusoidal model takes the form (see (3.30))
Similarly, for this sinusoidal model, the fourth ancillary function is generally non-zero, becoming
The corresponding utility component aggregating the contribution of the ancillary function on the th pass utility where takes the form
As before, the odd ancillary function integrates to zero. The analytic calculation of utility function components can continue until a specified degree of accuracy has been reached, as given in Theorem 3.5. For the purpose of this specific example, we terminate our discussion at .
Naturally, an issue that is paramount in any extended likelihood structure is the amount of processing and storage necessary to calculate utility function components. Ideally, we wish to minimize both storage and computational loading. Table 1 provides useful recursions for calculating the and components. Indicated in the table is the pass number in which the terms become non-zero or non-empty in the case of vector quantities. For the component, observe from (4.17) that temporary variables defined in Table 1 reduce storage requirements to a two-parameter grid cell attribute. This same information must be stored in vector format, however, to apply it properly for the component. The recursion increases storage requirements in that three vectors of size and one scalar attribute must now be saved for each grid cell location along with the stored quantities for the calculation. Observe how the recursions in Table 1 separate the contribution of the th pass control parameter from the preceding parameter values. This methodology generalizes for higher order recursions.
We illustrate the impact of orientation and its utility calculation on search performance via the construction of search plans representing limiting cases on utility. Figure 1, with its pattern continued, depicts a vertical search plan in which the detection likelihood remains relatively fixed over the plan. Here, the first searcher starts at the lower left corner of the space and commences a vertical ladder search. The second searcher starts at the upper right corner and similarly performs a vertical ladder search. Figure 4 depicts the second limiting-case search plan. The first part of the search remains unchanged from the vertical ladder search. However, when the two searchers reach proximity to each other, they jointly turn at right angles to alter their orientation, that is, as above, while maintaining coverage of the space. Hence, this plan seeks to achieve a positive utility through orientation diversity. It also demonstrates the capacity of the search evaluation modeling technique to account for arbitrary searcher motion in collaborative, multipass search plans.

The utilities achieved in the two search plans are illustrated in Figures 5 and 6, respectively. Note that what is depicted is the average utility over the cells in the sequential update. For the vertical search plan, noninteraction between searchers results in a zero utility prior to search path overlap. When multipass coverage does occur, the resulting utility is negative, following Theorem 3.4, as the searchers have the same orientation (incidental “spikes” into positive utility occur as the searchers maneuver between search legs). Orientation switching in the second plan yields the desired generally positive utility for both searchers as multipass coverage is achieved over the cells.


These geometries represent two-pass search strategies developed over a benign search environment under which uniformity in placement preference and detection likelihood is assumed. More generally, spatial variability in these quantities may induce a variability in the number of search passes conducted over the space. To illustrate the capacity of the algorithm to accommodate higher order search pass sequences, we include a deviated coverage plan in which a local assessment of the direction of highest coverage induces the searchers to deviate from the standard ladder search. While the searchers attempt to fulfill a coverage strategy, the resulting search paths take on a random appearance. Further, in the process of deviating towards cells of high , a natural variability in the number of cell search passes occurs over the cell grid construct. A partial realization of the search geometry is depicted in Figure 7 with a histogram of the achieved search pass number provided in Figure 8. The average utility that results for the entire search path set is shown in Figure 9.



The impact of these paths on the selected search criteria is shown in Figure 10. For these evaluations, only one object type is considered. The uniform placement prior has an expected number of objects of over the space. The probability that objects will be detected along the search paths is calculated as a function of search time for for each of the search plans. The limiting utility realizations yield corresponding search performance curves with the orientation switching plan clearly outperforming both the vertical ladder search and the deviated coverage plan. It is predictable that the deviated coverage search strategy yields a lower performance than the ladder searches during the initial (single pass) phase of the plan [5]. However, during the latter (multipass) phase of the plan, the deviated coverage strategy surpasses the vertical search strategy (but not the orientation switching plan) due to the negative utility incurred by the vertical ladder search (as in Figure 5).

We finish by presenting the results of a Monte Carlo experiment for the deviated coverage search plan. An ensemble of 1000 runs of a simulated placement scenario is executed, and the frequency-of-occurrence results are calculated. For the experiment, objects are placed uniformly, with a random object orientation assigned independently for each object. The detection event is simulated in a sequential realization of the search trajectories by a random draw governed by (4.14). The results are illustrated in Figure 11. In the figure, the mean frequency of occurrence is plotted against the predicted successful search probability for the detection events. Standard deviation bounds at the level are also depicted in the figure to indicate the estimation uncertainty in the experiment. The recursions of Table 1 are validated under these operating conditions, as the prediction is contained within the uncertainty interval.
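The structure of such a validation can be sketched in miniature. The scenario below is deliberately simplified, a single cell scanned twice with an assumed independent per-pass detection probability, so that the prediction has a closed form; it is not the paper's deviated coverage experiment.

```python
import random

# Minimal Monte Carlo validation sketch in the spirit of the experiment above:
# compare the empirical detection frequency over an ensemble of runs with the
# predicted probability, using a binomial standard deviation as the uncertainty
# scale. The scenario (one cell, two passes, p = 0.5) is illustrative only.

random.seed(1)
p, runs = 0.5, 1000
predicted = 1.0 - (1.0 - p) ** 2        # two independent passes of one cell

# Each trial: detection on pass 1, or (having survived pass 1) on pass 2.
hits = sum(1 for _ in range(runs)
           if random.random() < p or random.random() < p)
freq = hits / runs

sigma = (predicted * (1.0 - predicted) / runs) ** 0.5  # binomial std. dev.
print(freq, predicted, 2.0 * sigma)  # prediction should lie within ~2 sigma
```

As in Figure 11, agreement is declared when the predicted probability falls inside the standard-deviation bounds around the empirical frequency.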

5. Conclusion
A mathematical model of search that allows for the incorporation of multiple-pass dependencies has been developed. The model is based on recursively updating a geometric likelihood structure that directly impacts search performance, and it provides a general framework for modeling arbitrary ancillary search dependencies. The example problems studied include standard overlapping scans, a multiple-searcher, multiple-object-type problem in which each searcher is tuned to a specific object type, and two search geometries in which searcher performance varies with relative aspect to the object. The latter examples show that the method provides an approach to examine the impacts of complex dependencies at the planning stage, without resorting to extensive simulation studies. Future efforts will examine the utilization of this evaluation model in the development of optimal multisearcher coordination strategies.
Acknowledgment
This work was sponsored by the Maritime Sensing office (code 321MS) of the Office of Naval Research.