Abstract
Many planning applications must address conflicting plan objectives, such as cost, duration, and resource consumption, and decision makers want to know the possible tradeoffs. Traditionally, such problems are solved by invoking a single-objective algorithm (such as A*) on multiple, alternative preferences over the objectives to identify nondominated plans. The less-popular alternative is to delay such reasoning and directly optimize multiple plan objectives with a search algorithm like multiobjective A* (MOA*). The relative performance of these two approaches hinges upon the number of heuristic values computed for individual search nodes. A* may revisit a node several times and compute a different value each time. MOA* visits each node once and may compute several values (each estimating the value of a different nondominated solution constructed from the node). While A* does not share values between searches for different solutions, MOA* can sometimes find multiple solutions while computing a single value per node. The results of an extensive empirical comparison show that (i) multiple invocations of single-objective A* are often worse in both time and solution quality than a single invocation of MOA* and (ii) techniques for balancing per-node cost and exploration are promising.
1. Introduction
Most realistic planning problems have multiple competing objectives. It is common in practice to select a preference over the objectives and solve the problem with respect to this preference [1]. It is also common that the preference is highly subjective, and the solution is found without knowledge of possibly better, alternative solutions. Instead, by finding multiple solutions, a human can apply their subjective preference over the objectives to the solution set (with full knowledge of the available tradeoffs). In this sense, finding multiple solutions can facilitate, or circumvent entirely, preference elicitation. The approach taken in many applications is to apply multiobjective reasoning [2-7], and in this work we study multiobjective search for planning.
1.1. Finding Sets of Plans
There are two ways to find a set of solutions that trade off the objectives differently: iterate a single-objective algorithm over multiple preferences, or use a multiobjective algorithm with no assumptions about the preferences. However, the poor scalability of multiobjective heuristic search algorithms, such as multiobjective A* (MOA*) [8], has led to favoring the iteration of a single-objective algorithm (such as A*) over different aggregations or bounds (i.e., preferences) on the objectives. Upon closer examination, we notice a fundamental inefficiency with multiple invocations of a single-objective algorithm: each search episode may expand many of the same search nodes and recompute their heuristic values (a relatively large cost in planning). The high number of redundant node expansions is especially pronounced when the nondominated plans positively interact (i.e., the plans share a common subsequence of actions). Moreover, with multiple invocations of the single-objective algorithm, there may be no guarantee that each solution will be nondominated with respect to the other solutions.
1.2. MOA*
MOA* generalizes A* to find multiple nondominated solutions in a very straightforward manner. A* searches for a single "best" solution in terms of one optimization metric captured by each node's f-value. MOA* searches for multiple "best" solutions by permitting multiple f-values (a set of value vectors) per node. Each MOA* f-value is associated with a different nondominated solution. If a node is in the open list and at least one of its f-values is nondominated with respect to the f-values of the other nodes, then the node can be expanded.
1.3. Computing Values
Noticing that a single nondominated f-value is enough to keep a node on the nondominated search frontier is an important observation that we exploit in this work. Computing multiple h-values (to get multiple f-values) for a search node can be both good and bad. Having more f-values improves a node's chance of being on the nondominated search frontier, but also increases the per-node cost. Ideally, we would like to compute only those h-values that are needed to keep solution-bearing nodes on the nondominated frontier. Our intuition is that many plans positively interact, and if we can compute a single h-value (or a small set of h-values) for a set of interacting plans, then search will find these plans at a lower cost.
We explore several approaches to computing heuristics for MOA*: (i) compute a single h-value per node that estimates the longest solution path, hoping that other solutions will positively interact, (ii) compute a uniform grid of h-values per search node to avoid missing solutions that do not positively interact, and (iii) with some probability compute a uniform grid of h-values (as in (ii)), and otherwise compute h-values that are similar to those parent-node values that were nondominated in the open list. We find (in our experiments) that the third approach is the most useful because it balances exploration with preservation of nondominated partial solutions. We compare to the baseline approach of computing several solutions with A* by setting different bounds (preferences) on the plan objectives.
1.4. Probabilistic Planning
We compare A* and MOA* in probabilistic planning, where the plan length and probability of goal satisfaction are the competing objectives. Probabilistic planning greatly simplifies our analysis because in partial solutions, one objective (plan length) is free to change, but the other objective (probability of goal satisfaction) is fixed (i.e., no partial plan accrues probability of goal satisfaction until it is a complete plan). The effect is that there is a single best g-value for each node, and the only way to obtain multiple f-values is to compute multiple h-values. Hence, we can focus solely on how to compute multiple h-values without considering multiple g-values. While we do not study partial satisfaction planning [9] in this work, we note that it largely resembles probabilistic planning from the perspective of a search algorithm, and we believe our techniques are applicable to this problem as well.
In the following, we present the MOA* algorithm, provide the intuition for investigating which values are computed in MOA*, discuss several approaches to computing h-values for MOA*, describe how to formulate probabilistic planning as A* and MOA* search, study the empirical performance of the techniques on several domains, discuss related work, and conclude with future research directions. Our contribution is to evaluate the relative strengths of single-objective and multiobjective search, and to show that multiobjective search is preferable when nondominated solutions share common search nodes.
2. MOA*
MOA* [8] is a search algorithm that finds a set of paths, and it relies on established notions of solution (non)dominance in multiobjective problem solving.
Let S denote a set of solutions for some problem. Each solution s is associated with a value vector v(s) = (v_1(s), ..., v_k(s)) defining its quality in each of k objectives (all minimized). A solution s dominates solution s', denoted s ≺ s', if it is no worse than s' in all objectives, v_i(s) ≤ v_i(s') for all i ∈ {1, ..., k}, and strictly better in at least one, v_j(s) < v_j(s') for some j. Let S* ⊆ S denote the set of efficient (also called nondominated) solutions, such that for no pair of solutions s, s' ∈ S*, s ≺ s' or s' ≺ s. As such, we seek efficient solutions that minimize the objectives differently. We characterize the quality of a set of solutions by computing its hypervolume [10], the size of the objective space dominated by the solutions. As the efficient set gains quality, the hypervolume increases. Let HV(S) denote the hypervolume (size of the dominated objective space) of a set of solutions S, calculated as (our implementation uses an optimized, but equivalent, computation for two objectives):

HV(S) = vol( ⋃_{s ∈ S} [v_1(s), u_1] × ⋯ × [v_k(s), u_k] ),

where u_i and l_i denote the respective upper and lower bounds on the i-th objective, used to normalize the space (e.g., the lower and upper bounds on a solution's probability of success are zero and one, resp.).
Consider the example in Figure 1, where five solutions are shown. This example shows a normalized objective space (with two objectives) so that all solution values fall in the interval [0, 1] in each objective. Two of the solutions are dominated because each is greater in both objectives than some other solution. The efficient set contains the remaining three nondominated solutions. The rectangles encompassing the regions that are greater in both objectives than these three solutions denote the hypervolumes (in this case, areas) dominated by each solution. The union of the hypervolumes dominated by the solutions in a set is the hypervolume of the set, and its magnitude indicates the quality of the set. Notice that the space dominated by the efficient set is larger than the space dominated by any single solution, making the full efficient set the better solution set.
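To make the hypervolume measure concrete, the following sketch (an illustrative reimplementation, not the paper's optimized C code; the function name and the sweep formulation are our own) filters a set of two-objective minimization values down to its efficient frontier and sums the disjoint vertical strips of the dominated area:

```python
def hypervolume_2d(values, upper=(1.0, 1.0)):
    """Area dominated by a set of 2-objective (minimization) value
    vectors, measured against the upper-bound point `upper`."""
    # Keep only the efficient frontier: sort ascending in objective 1
    # and keep points that strictly improve objective 2.
    frontier, best = [], float("inf")
    for x, y in sorted(set(values)):
        if y < best:
            frontier.append((x, y))
            best = y
    # Sweep left to right: each efficient point owns the strip between
    # its first-objective value and the next point's (or the bound).
    area = 0.0
    for i, (x, y) in enumerate(frontier):
        next_x = frontier[i + 1][0] if i + 1 < len(frontier) else upper[0]
        area += (next_x - x) * (upper[1] - y)
    return area
```

For the three nondominated points (0.2, 0.8), (0.5, 0.5), and (0.8, 0.2), the dominated area is 0.06 + 0.15 + 0.16 = 0.37; adding a dominated point such as (0.6, 0.9) leaves the result unchanged, mirroring the figure's discussion.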
MOA* [8] (Algorithm 1) and its variations [11] generalize the traditional, single-objective A* heuristic search algorithm by operating on graphs whose edge costs are vectors. Each efficient path from a start node to a terminal node (a solution) has an associated nondominated value vector (equal to the sum of the edge costs). MOA* finds a set of efficient paths from a source node to one of the terminal nodes by maintaining efficient sets for the familiar, scalar A* constructs (g-, h-, and f-values, and backpointers). The efficient set of g-values for each node represents the cost of all efficient paths reaching the node, the set of h-values represents the estimated cost of all efficient paths from the node to terminal nodes, and the set of f-values contains all efficient members of the cross-product of g- and h-values (i.e., each f-value is the sum of a g-value and an h-value). Nodes in the MOA* Open list follow a strict partial order and have potential for expansion if one of their f-values is efficient with respect to the f-values of other Open list nodes and previously found solutions. Our implementation expands a random efficient node from the Open list in each iteration.

MOA* is guaranteed to terminate with an optimal efficient set of solutions (a Pareto optimal set) when (i) edge cost vector components are all greater than zero, (ii) the graph is locally finite, (iii) the h-values are admissible, and (iv) there are no cycles. In this work, we violate two of these assumptions: (i) is violated in probabilistic planning because the goal satisfaction component of the cost is zero for nonterminal actions, and (iii) is violated because the heuristics employed are inadmissible. As such, it is not guaranteed that MOA* will terminate, nor that it will find the optimal set of solutions. These properties are not theoretically appealing, but we note that they are necessary in practice because scalability requirements and time constraints often prohibit optimality and termination. We adopt an empirical anytime approach to evaluating MOA* and characterize its performance over time. While our case study violates the requirements for MOA* optimality, these limitations are easily overcome by employing an admissible heuristic and enforcing nonzero edge costs.
3. Computing Values
The intuition for judiciously selecting which values to compute is based on the following observations. A* and MOA* can, and often will, expand the same search nodes to find a set of solutions. To find multiple solutions (each with respect to a different preference over the plan objectives), A* must compute a new heuristic value for reexpanded search nodes under each new preference. MOA* does not usually reexpand nodes (nodes removed from the Closed list in step 6 of Algorithm 1 are technically reexpanded, but values are only added, not recomputed), and it does not necessarily compute a different value for each preference over the plan objectives (nor does it require any preference).
Since it is possible for A* and MOA* to expand the same nodes and find the same solutions, whichever algorithm has the lower combined per-node and per-iteration cost will perform better. A*'s per-node cost depends upon how many times the node is reexpanded (i.e., its heuristic value is recomputed for a new preference), and its per-iteration cost relies on sorting the Open list. MOA*'s per-node cost includes the cost of computing multiple h-values, and its per-iteration cost is predominantly incurred while finding the set of nodes with efficient f-values (its equivalent of sorting the Open list). (Our implementation of MOA* uses Kung's algorithm [12] to find the efficient set in each iteration, with complexity O(n log n) when there are two objectives, where n is the number of f-values of all nodes in Open.)
With MOA*, it is possible to bootstrap a node's h-values to find multiple efficient solutions. For example, if a node must be expanded to find all efficient solutions, then it is enough that one of its f-values places it within the set of efficient nodes in the Open list. If an oracle could determine which h-value, among the set of possible h-values, will make the node most competitive within the efficient set (i.e., give the node the best chance of being efficient), then computing only this h-value will lead to a lower per-node cost. This single h-value represents a set of positively interacting solutions to be found by expanding the node.
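This frontier test can be sketched as follows for two objectives (the data structures and names are illustrative, not the planner's interface): a Kung-style sort-and-sweep finds, in O(n log n), which Open-list nodes own at least one efficient f-value and therefore remain expandable.

```python
def expandable(open_list):
    """Return the nodes with at least one efficient f-value.
    open_list maps node -> list of 2-objective f-value vectors
    (minimization). A node stays expandable even when some of its
    other f-values are dominated."""
    # Flatten to (f-value, node) pairs and sort by the first objective.
    tagged = sorted((f, node) for node, fs in open_list.items() for f in fs)
    nodes, best = set(), float("inf")
    for (f1, f2), node in tagged:
        if f2 < best:  # efficient among all f-values seen so far
            nodes.add(node)
            best = f2
    return nodes
```

In the usage below, node "b" is kept by its efficient f-value (1, 0.9) even though its other f-value (4, 0.9) is dominated, which is exactly the bootstrapping observation above.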
Without an oracle, there are several options for computing the h-values. All options balance two competing desires: lowering the per-node cost and giving nodes the best chance of staying on the efficient frontier. In the following section, we examine three approaches to computing h-values for MOA* that address these competing desires.
4. Computing MOA* h-Values
While there are many ways to compute f-values for MOA*, such as restricting the g-values or the combinations of g- and h-values, we focus solely on methods for computing h-values. These methods include computing (i) a single h-value per node (the single-value method), (ii) a uniform grid of h-values (the grid method), and (iii) a probabilistic choice between computing a uniform grid of h-values and a set of h-values derived from the nondominated values of the parent node (the probabilistic method). Each of these methods involves placing an upper bound on all but one objective and then computing the value of the free objective with respect to the bounds. We obtain multiple h-values by computing the free objective value with respect to different bounds. The following explores the intuition behind, and computation of, each method.
4.1. Single Value
Computing a single h-value (i.e., selecting a single bound for all but one objective) is difficult because it must represent the "cost to go" of an efficient solution found via the node under consideration. Our desires from the previous section are to lower the per-node cost (which could only be decreased further by computing no h-value at all) and to compute a most competitive value. Without computing additional h-values for comparison (e.g., by comparing the hypervolume dominated by each), it is difficult to determine which is the most competitive. Instead, we take an approach based on the intuition that, by estimating the cost of the solution that is deepest in the search graph, we can maximize the potential for other solutions to be found while searching for the deepest solution. Each bounded objective in the h-value is bound by its estimated deepest-solution value, and the last, free objective is estimated with respect to the bounds. Subsequently, the h-values of different nodes are compared solely on the basis of the free objective because the same bounds are used for the h-value of each node.
4.2. Multiple Values
Extending the single-value method to compute several h-values can improve the competitiveness of a search node, albeit at increased cost. While one h-value may never lead to an efficient f-value, another computed for the same node might. Similar to the single-value method, the grid method bounds the value of all but one objective and computes an h-value with respect to the bounds. Each of the h-values bounds the values of the objectives differently. We evaluate this method by spacing the bounds uniformly, but note that other approaches that bias the spacing may be viable as well.
4.3. Probabilistically Nondominated Values
The single-value and grid methods are two extremes that either use very few h-values to keep the per-node cost low or use many h-values to seek out many different plans. A simple way to combine the approaches is to probabilistically select between them when computing the h-values for a node. While the single-value method saves some effort when expanding the node, it does not account for which values caused the current node's parent to be expanded. There are likely to be only very few efficient values that caused the parent to be expanded, so instead of computing a single h-value, we can compute a set of h-values similar to the parent's efficient values (i.e., those h-values whose associated f-values were nondominated with respect to all other f-values of nodes in the Open list). The h-values are similar to the parent's values because they use the same bounds on the fixed objectives and recompute the free objective. In this manner, we can compute the h-values that are likely to be most competitive. By probabilistically selecting between the grid method and computing h-values similar to the parent node's, we can both explore and preserve efficient values.
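The three strategies amount to a choice of which bounds to compute h-values for (here, bounds on one objective, anticipating the probability-of-goal-satisfaction bounds of Section 5). The sketch below is ours, not the planner's code; the strategy labels, the probability p, and the grid resolution are illustrative assumptions:

```python
import random

def choose_bounds(strategy, parent_bounds=None, p=0.5, grid_n=10):
    """Pick the objective bounds at which to compute a node's h-values.
    'single'        -- one bound aimed at the deepest solution
    'grid'          -- a uniform grid of bounds
    'probabilistic' -- with probability p explore via the grid;
                       otherwise reuse the bounds behind the parent's
                       nondominated f-values."""
    grid = [i / grid_n for i in range(1, grid_n + 1)]
    if strategy == "single":
        return [1.0]
    if strategy == "grid":
        return grid
    # probabilistic strategy
    if parent_bounds and random.random() >= p:
        return sorted(parent_bounds)  # preserve efficient partial solutions
    return grid                       # explore
```

The probabilistic branch degrades gracefully: with no parent information (e.g., at the root), it falls back to the exploratory grid.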
5. Case Study of Probabilistic Planning
Probabilistic planning is a naturally multiobjective problem, where, at its simplest, the plan objectives are the plan length and the probability of goal satisfaction. More precisely, we use the risk (one minus the probability of goal satisfaction) so that we can minimize each objective. We choose to study probabilistic planning because it has one property that greatly simplifies our analysis and otherwise aids the empirical comparison of MOA* and A*. As previously noted, MOA* associates cost vectors with search graph edges; in probabilistic planning, all edges leading to nonterminal nodes incur unit cost with respect to plan length and zero cost with respect to risk, while edges leading to terminal nodes incur zero cost for the plan length and a possibly nonzero risk cost. The effect of this property is that there is a single efficient g-value for each nonterminal node, meaning that multiple f-values for a search node can only arise from computing multiple h-values.
The following subsections define the conformant probabilistic planning problem, formulate it as a graph search problem in both A* and MOA*, and discuss an existing reachability heuristic used in the search algorithms.
5.1. Conformant Probabilistic Planning
A conformant probabilistic planning (CPP) problem is given by the tuple (P, A, b_I, G), where P is a set of propositions, A is a set of actions, b_I is an initial belief state (a probability distribution over initial states), and G is the goal description (a conjunctive set of propositions).
A belief state b is a probability distribution over all states, where each state s is a set of propositions. The probability of a state s in a belief state b is denoted b(s). We say that a state s is in b (s ∈ b) if b(s) > 0. The probability that a belief state satisfies the goal G, denoted b(G), is the sum of the probabilities of states where the goal is satisfied (b(G) = Σ_{s: G ⊆ s} b(s)).
An action a is a tuple (ρ_e, Φ), where ρ_e is an enabling precondition and Φ is a set of outcomes. The enabling precondition is a conjunctive set of propositions that determines action applicability. An action a is applicable in belief state b if it is applicable in each state in the belief state, ρ_e ⊆ s for all s ∈ b. The causative outcomes Φ are a set of tuples (w_i, Φ_i) representing possible outcomes (indexed by i), where w_i is the probability of outcome i being realized and Φ_i is a mutually exclusive and exhaustive set of conditional effects (indexed by j). Each conditional effect is of the form ρ_ij → ε_ij, where both the antecedent (secondary precondition) ρ_ij and the positive and negative consequents ε_ij are conjunctive sets of propositions. This representation of effects follows the 1ND normal form presented by Rintanen [13]. As outlined in the probabilistic PDDL (PPDDL) standard [14], it is possible to use the effects of every action a to derive a state transition function T(s, a, s') that defines the probability that executing a in state s will result in state s'. Executing action a in belief state b, denoted exec(a, b) = b', defines the successor belief state b' such that b'(s') = Σ_{s ∈ b} T(s, a, s') b(s).
A sequence of actions (a_1, ..., a_n), executed in belief state b_0, results in a belief state b_n, where b_i = exec(a_i, b_{i-1}) and each action is executable in the appropriate belief state. The probability that the sequence of actions satisfies the goal is the probability b_n(G) that the final belief state satisfies the goal. The number of actions n in the sequence is its length. As long as the sequence of actions respects the definition of applicability of each action, the sequence (including the empty sequence) is a feasible plan, but not necessarily an efficient plan.
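Belief-state progression and goal probability can be sketched as follows (a simplified model in which each outcome is a deterministic function from state to state, standing in for the conditional-effect machinery above; the names are illustrative assumptions):

```python
def goal_prob(belief, goal):
    """b(G): total probability of states satisfying the goal."""
    return sum(p for state, p in belief.items() if goal <= state)

def exec_action(belief, outcomes):
    """Successor belief state exec(a, b).
    belief:   {frozenset of propositions: probability}
    outcomes: [(weight, effect)] with weights summing to one and
              each effect mapping a state to its successor state."""
    succ = {}
    for state, p in belief.items():
        for w, effect in outcomes:
            s2 = frozenset(effect(state))
            # Probability mass from different (state, outcome) pairs
            # that land on the same successor state is accumulated.
            succ[s2] = succ.get(s2, 0.0) + p * w
    return succ
```

For example, executing a move that succeeds with probability 0.8 from a known initial state yields a belief state whose goal probability is 0.8, while the total probability mass remains one.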
5.2. Formulating CPP as Graph Search
We formulate CPP as both A* and MOA* search over the belief state space. Each search node is a belief state, and each edge is an action.
A* search associates a unit cost with each edge and defines terminal nodes as those nodes where the belief state's probability of goal satisfaction is no less than a given threshold τ. The actions associated with the edges leading to a terminal node identify the plan. We find multiple solutions with A* by invoking A* with different values of τ.
MOA* search associates a cost vector with each edge. The first component of the vector is the action execution cost (as in A*), and the second component is the risk. Risk is only incurred when transitioning to terminal nodes, and only action execution cost is incurred when transitioning to nonterminal nodes. MOA* treats terminal nodes differently than A* because it does not exit upon finding a solution (i.e., MOA* does not use a goal satisfaction threshold and pursues multiple solutions). MOA* adds a single, unique terminal node and unique edges for "quit" actions that allow the search to transition from any belief state to the terminal node. The quit actions (i) are applicable in all belief states, (ii) incur zero action execution cost, (iii) incur a risk cost equal to one minus the probability of goal satisfaction of the belief state where applied, and (iv) are not added to the plan extracted from the edges leading to the terminal node. By using quit actions, MOA* can continue searching through nodes that satisfy the goal (with some probability), yet retain the solutions.
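The edge-cost convention and the quit actions amount to the following accounting (a minimal sketch; the (length, risk) tuple layout matches the objectives above, but the helper names are ours):

```python
def edge_cost(is_quit, goal_prob=0.0):
    """Cost vector (plan length, risk) of a search edge.
    Ordinary action edges cost (1, 0); a quit edge costs
    (0, 1 - P(goal)) at the belief state where it is applied."""
    return (0, 1.0 - goal_prob) if is_quit else (1, 0.0)

def path_value(costs):
    """g-vector of a path: componentwise sum of its edge costs."""
    length = sum(c[0] for c in costs)
    risk = sum(c[1] for c in costs)
    return (length, risk)
```

For instance, a three-action plan that quits at a belief state satisfying the goal with probability 0.9 has the value vector (3, 0.1), and continuing to search past that node can still yield longer, lower-risk plans.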
5.3. Heuristics for CPP
The most effective heuristics for CPP involve estimating the cost to achieve the goal with probability no less than τ [15, 16]. We employ the McLUG technique described by Bryce et al. [15] to compute relaxed plans, using the number of actions in the relaxed plan as the heuristic. The approach taken by Bryce et al. [15] to compute the heuristic for a belief state and a value of τ is to build a set of deterministic planning graphs and extract a relaxed plan that achieves the goals in at least a τ-fraction of the planning graphs. Each planning graph is deterministic because it is built with respect to a state sampled from the belief state and sampled outcomes of each probabilistic action in each action layer. Symbolic techniques make the construction of multiple planning graphs and the relaxed plan efficient.
In MOA*, the single-value heuristic is computed once, for the bound τ = 1.0. For example, when τ = 1.0, the relaxed plan might contain ten actions; in this case, MOA* would add the h-value vector (10, 0.0) (i.e., ten actions and zero risk). The grid method computes the heuristic for each bound τ in a uniform grid, adding the set of h-value vectors {(n_τ, 1 - τ)}, where the n_τ are the numbers of actions in the relaxed plans for each value of τ. With probability p, the probabilistic heuristic uses the grid heuristic and, with probability 1 - p, computes the relaxed plan heuristic for the same values of τ used by the parent's nondominated h-values.
6. Empirical Evaluation
We compare A* to multiple versions of MOA* (using different heuristic computation strategies) on several CPP problems across four domains. The questions that we attempt to answer are as follows: (i) will multiple invocations of A* with different preferences or one invocation of MOA* with no preferences find a better set of solutions, find the set faster, or both? (ii) Which method for computing h-values in MOA* will perform best?
The following describes the evaluation metrics, domains, the test environment, and results.
6.1. Evaluation Metrics
As previously mentioned, we measure the quality of a set of solutions by its hypervolume. We can measure the hypervolume over time with MOA*, but because A* finds a single solution at a time, measuring only its hypervolume over time is not as appealing. Thus, we compute a set of solutions using A* and also report the total time taken by MOA* to find a plan set with the same or better hypervolume. We also compare the maximum hypervolume found by each technique, within twenty minutes for MOA*, and for the fixed number of solutions found by A* (where each invocation to find a solution is given a twenty-minute limit).
6.2. Domains
The evaluation domains include an artificial domain, called Grid and Ladder (GL), and several domains from the CPP literature, including Logistics, Gripper, and Sand Castle.
The GL domain is an adaptation of the Grid domain presented by Hyafil and Bacchus [17] that adjusts the degree of positive interaction and independence among the nondominated plans. GL includes five actions: move right, move left, move up, move down, and climb. The move actions have the effect of moving along the intended axis with 0.8 probability and of moving laterally along the other axis with 0.1 probability in each direction. The initial state places the agent at one corner of the grid (with certainty), and the goal is to reach the opposite corner and the top of the ladder. The deterministic climb action can only be performed in the destination corner, once for each rung of the ladder, and, once executed, prevents further move actions (it is possible to climb the ladder, but not to go back down and move about the grid). The probability of achieving the goal is equal to the probability of reaching the corner with the ladder because the climb action does not change the probability of goal satisfaction (assuming the goal of being at the top of the ladder is attained by repeated climb actions). Alternative solutions perform different sequences of grid moves to reach the corner with different probabilities and plan lengths, and all plans must climb the ladder. The problems vary the size of the grid and the height of the ladder. A larger grid equates to longer positively interacting plan prefixes, and a ladder with more rungs equates to longer independent plan suffixes. GL presents challenges to MOA* because computing too many h-values during the grid traversal phase of the plan is costly, but additional h-values are required prior to the ladder climbing phase to push the search to find alternative plans. GL also challenges A* because it must recompute the heuristic value for many of the same nodes within the plan prefix.
The other domains are not modified from their original versions, to help gauge the MOA* approaches in problems without clear structure. Logistics [17] involves probabilistic (un)load actions and initial belief states where package locations are uncertain; we use the instance p2-2-2, with two cities, two packages, and two possible locations for each package. Gripper [18] involves several machining operations to manufacture widgets that work probabilistically. Sand Castle [19] involves two actions, to build a castle or dig a moat, both of which are probabilistic and affect the success of the other. GL and Logistics are relatively challenging domains that cause state-of-the-art CPP planners to struggle, whereas Gripper and Sand Castle are fairly simple.
6.3. Environment
All experiments were conducted on a 3 GHz Xeon processor running Linux with 8 GB of RAM. All code was written in C and is based on the POND planner [15], and all hypervolume computations were done offline after planning. The McLUG heuristic used 32 planning graphs per h-value computation in the GL domain and 128 planning graphs per h-value computation in the other domains. The comparisons of hypervolume across instances were with respect to the same lower and upper bounds on each objective (computed from the planner output). The results for A* are averaged over five runs of computing plans for the set of thresholds, and the MOA* results are averaged over five invocations on each problem. A* is invoked with a fixed set of thresholds τ for each problem. Invocations not returning a solution are not counted in the results.
6.4. Results
Figure 2 shows the hypervolume achieved over time for nine problems in the GL domain, where each has a grid of size three, six, or ten and a ladder of height two, six, or ten. Each row of plots in the figure shares a common grid size (increasing with each row), and each column shares a common ladder height (increasing from left to right). Four methods are shown in each plot: A* and MOA* with the single-value, grid, and probabilistic heuristics. Additional method variants described in the accompanying Table 1 are not shown in the plots because they were not as competitive. The data in Table 1 are the average time taken in seconds to exceed the average hypervolume found by A* (T(s)) and the average maximum hypervolume found by each method (HV). The T(s) and HV results for A* are the average time taken to find the average hypervolume using A*, and the T(s) results for all other methods are the time taken to exceed the A* HV. The HV for all other methods is the maximum hypervolume found before the timeout. The "-" entries indicate that either no solution set could be found that exceeds the A* hypervolume, in the case of T(s), or that no single solution was found, in the case of HV.
[Figure 2, panels (a)-(i): hypervolume over time for each combination of grid size and ladder height.]
The plots of hypervolume over time indicate that the probabilistic method tends to find the most hypervolume of the MOA* methods, outperforming the single-value and grid methods, especially as the problems become larger. We also see that, as the problems become larger, the probabilistic method is required to find more hypervolume. A* tends to find its first solutions very quickly, and it takes longer for the MOA* methods to find their first solutions. However, MOA* tends to attain quite a bit of hypervolume earlier than A*. As the problems become more difficult, MOA* takes increasingly longer than A* to find initial solutions, but finds considerably more hypervolume within a short period thereafter. Note that small improvements in hypervolume later in the search are not insignificant because they represent new and improved solutions that may be difficult to find despite the small hypervolume they add.
Figure 3 and Table 2 present results for the other domains that show trends similar to those seen in the GL domain. The MOA* approaches tend to find solutions later than A*, but often find much better solutions as measured by maximum hypervolume. The difference in hypervolume is especially pronounced in the Gripper and Sand Castle problems because these problems have many solutions whose probability of goal satisfaction falls in the interval [0.9, 1.0]; because we use only the extreme points of this interval as values of τ in A*, it misses many solutions within the interval. MOA* does not use bounds on the objectives and can seek out all the solutions within the intervals that such bounds create, leading to more hypervolume. By selecting an a priori set of preferences over the objectives, A* cannot find some solutions.
[Figure 3, panels (a)-(c): hypervolume over time for the remaining domains.]
6.5. Summary
From our analysis, we have seen the following trends in comparing MOA* and A*. (i) MOA* improves the quality of solution sets over A* because it is not limited to finding a predetermined number of solutions, and it continues to search and improve upon the solutions. (ii) The probabilistic method for computing h-values is the most effective MOA* technique because it balances exploration with the retention of efficient partial solutions. (iii) The single-value method does not pursue enough different solutions to attain high hypervolume, or the domains tested did not have a large enough degree of positive interaction among the solution sets. (iv) The grid method was useful for finding high hypervolume in smaller problems, but failed to scale well. (v) A* is fast to find a first solution, but much slower than MOA* at finding a set with large hypervolume.
7. Related Work and Discussion
Multiobjective problem solving has been previously studied in planning [1, 9, 20-22]. However, to our knowledge, all prior works study how to best formulate preferences over the objectives and solve a single-objective problem. As discussed in the previous section, iteratively solving the problem with different preferences as a single-objective problem can not only miss solutions, but may take considerably longer, or fail, to find a set of solutions of comparable quality.
The work of Van Den Briel et al. [9] on partial satisfaction planning (PSP) is especially close to our study of probabilistic planning. PSP allows for the satisfaction of a subset of goals with utilities, where CPP allows for probabilistic satisfaction of all goals. Thus, both problems have the same interesting property in MOA*: there is only one g-value for each search node, and the only way to obtain multiple f-values is by computing multiple h-values. We expect that applying MOA* to PSP would attain results similar to those presented in this work.
Finding a set of diverse solutions to planning problems is an important problem recently studied in planning [23]. Srivastava et al. [23] construct solutions that are diverse with respect to a distance measure, defined in terms of the causal structure of the plans. We take a different approach, where we find plans that are diverse in their objectives. These two approaches can be seen as complementary, the former finding diversity in the decision space, and the latter finding diversity in the objective space.
MOA* was originally studied by Stewart and White [8], who showed conditions for optimality and termination. More recently, Mandow and Pérez de la Cruz [11] modified the algorithm to expand the different values of a node within each iteration, where the original algorithm expanded the node itself. Our implementation is based on the original algorithm, and our work explores which values to compute, a topic not discussed by prior work on MOA*.
8. Conclusion and Future Work
We have shown that MOA* can find a better set of solutions than A* by balancing the explorative capability attained by increasing the number of h-values per node against the decreased per-node cost associated with computing fewer values. The most effective technique for computing a node's h-values randomly chooses between computing a uniform grid of values and recomputing the values proven efficient by the node's parent. By finding a better set of solutions more quickly, MOA* is a viable choice for computing multiple solutions to a problem. Moreover, the efficient solutions are naturally diverse in the objective space. The limitations of MOA* are that it is rendered unnecessary when users have clear preferences and that, as found in our evaluation, it can be more efficient to perform multiple A* searches when the nondominated solutions share few common search nodes.
In future work, we intend to explore additional techniques for computing h-values, methods for managing multiple g-values per search node, other types of planning problems (such as PSP), and planning with more objectives. We are also interested in combining our approach with techniques for finding plans that are diverse in the decision space (i.e., causally diverse).