Advances in Decision Sciences
Volume 2014, Article ID 519512, 8 pages
http://dx.doi.org/10.1155/2014/519512
Research Article

Implementing Second-Order Decision Analysis: Concepts, Algorithms, and Tool

1Department of Computer and Systems Sciences, Stockholm University, Postbox 7003, 164 07 Kista, Sweden
2Department of Information and Communications System, Mid Sweden University, 851 70 Sundsvall, Sweden
3Institute for Information Processing, Leibniz University Hannover, Appelstrasse 9A, 30167 Hannover, Germany
4Uppsala Monitoring Centre, Box 1051, 75140 Uppsala, Sweden
5International Institute for Applied Systems Analysis, Schlossplatz 1, 2361 Laxenburg, Austria

Received 30 June 2014; Accepted 24 October 2014; Published 30 November 2014

Academic Editor: Shelton Peiris

Copyright © 2014 Aron Larsson et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We present implemented concepts and algorithms for a simulation approach to decision evaluation with second-order belief distributions in a common framework for interval decision analysis. The rationale behind this work is that decision analysis with interval-valued probabilities and utilities may lead to overlapping expected utility intervals, yielding difficulties in discriminating between alternatives. By allowing for second-order belief distributions over interval-valued utility and probability statements, these difficulties may not only be remedied; the richer representation also allows for decision evaluation concepts and techniques providing additional insight into a decision problem. The approach is based upon sets of linear constraints together with the generation of random probability distributions and utility values from implicitly stated uniform second-order belief distributions over the polytopes given by the constraints. The result is an interactive method for decision evaluation with second-order belief distributions, complementing earlier methods for decision evaluation with interval-valued probabilities and utilities. The method has been implemented for trial use in a user-oriented decision analysis software tool.

1. Introduction

In recent decades, decision analysis with imprecise or incomplete information has received considerable attention within the area of utility theory based decision analysis. Stemming from philosophical concerns regarding the ability of decision-making agents to provide precise estimates of probabilities and utilities, as well as pragmatic concerns regarding the applicability of decision analysis, several approaches have been suggested, for example, approaches based on sets of probability measures [1] and interval probabilities [2].

With respect to methods for practical decision evaluation with imprecise input statements, a number of methods have been developed and some of them have also been implemented in computer software tools. Early works include the approach to decision making with linear partial information about input statements [3]. This approach promotes the conservative maximin decision rule together with the use of imprecise probabilities modelled by means of linear constraints, suggesting evaluation algorithms to obtain minimum expected values. However, imprecision is restricted to probability assignments, and it is not possible to allow for constraints between different alternatives. Related methods aim at investigating whether stochastic dominance holds between decision alternatives when probabilities (and weights) are ranked with linear inequalities [4].

For multiattribute decision making, a number of approaches have been suggested. These include WINPRE, supporting the preference programming approach and the PRIME method [5], Interval SMART/SWING [6], and GMAA [7]. However, the procedures employed in these tools yield limited support when it comes to discriminating between alternatives whose interval-valued utilities overlap. The Delta framework, which we extend in this paper, combines multiattribute value trees with decision trees in a common model, also supporting interval-valued weights, probabilities, and utilities together with a quantitative representation of qualitative statements such as “better than” and “more probable.” It handles overlapping expected utility intervals by means of an embedded form of sensitivity analysis; see Section 2 for a presentation.

Decision analysis with second-order information has been advocated in, for example, [8], in which reasonable rationales for supporting a discrimination of beliefs in different probability assessments are promoted. Second-order information can be used for expressing various beliefs over multidimensional spaces where each dimension corresponds to, for instance, possible probabilities or utilities of consequences. These ideas were collected in a conceptual model for decision analysis in [9], from which investigations on the implications for decision evaluation followed in, for example, [10, 11]. As the integrals obtained when expressing distributions over expected utilities become very hard, if at all possible, to compute analytically, approximate methods are called for. Therefore, this paper involves turning these ideas into practice using a simulation approach to decision evaluation with second-order information, ready to be employed in a common framework for decision analysis with imprecise information. We emphasize how second-order information provides added value for decision evaluation in practice and provide an illustrative example together with some performance measures of the employed algorithms.

2. Concepts

The representational issues are of two kinds: a decision structure, modelled by means of a conventional decision analysis decision tree, together with input statements. A decision tree is a way of modelling a decision situation where the alternatives are represented as branches from a decision root node and the set of final consequences is the set of nodes without children; see Figure 1. Intermediary nodes are here called events. For convenience we use the notation that the children of a node $x_i$ are denoted $x_{i1}, \ldots, x_{im}$, the children of the node $x_{ij}$ are denoted $x_{ij1}, \ldots, x_{ijm'}$, and so forth. For presentational purposes, we will denote a consequence node of an alternative $A_i$ simply with $c_{ij}$. Over each set of event node children, as well as over the set of all consequence nodes, functions such as probability distributions and utility functions can be defined.

Figure 1: A conventional decision analysis decision tree.
2.1. Interval and Comparative Statements

For interval statements, the probability (or utility) of a consequence $c_{ij}$ being between the numbers $a_{ij}$ and $b_{ij}$ is expressed as $p_{ij} \in [a_{ij}, b_{ij}]$ (or $u_{ij} \in [a_{ij}, b_{ij}]$). In addition to interval statements, the approach also includes comparative statements, such that the utility of $c_{ij}$ being lower than the utility of $c_{kl}$ is expressed as a less relation $u_{ij} < u_{kl}$, and the utility of $c_{ij}$ being equal to the utility of $c_{kl}$ is expressed as an equality relation $u_{ij} = u_{kl}$. Given such statements, imprecision may be modelled as sets of candidates of possible probability distributions and utility functions. These are expressed as vectors in polytopes that are solution sets to the statements.

Definition 1. Given a decision tree $T$, a utility base $U$ is a set of linear constraints of the types $u_{ij} \in [a_{ij}, b_{ij}]$, $u_{ij} < u_{kl}$, or $u_{ij} = u_{kl}$, and, for all consequences $c_{ij}$ in $T$, $u_{ij} \in [0, 1]$. A probability base $P$ has the same structure but, for all intermediate nodes (except the root node) $x_i$ in $T$, also includes the normalization constraint $\sum_j p_{ij} = 1$ for the children of $x_i$.

The solution sets formed by the probability and utility bases are polytopes in hypercubes. A probability base can be interpreted as constraints defining the set of all possible probability distributions over the set of child nodes emanating from an intermediate node. Similarly, a utility base consists of constraints defining the set of all possible utility values for each consequence. The bases $P$ and $U$ together with the decision tree $T$ constitute the information frame $\langle T, P, U \rangle$. An information frame then represents a decision problem.
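As an illustration of how such bases can be handled programmatically, the following is a minimal sketch in Java (the language of the tool described in Section 5) of the constraint types from Definition 1 and a feasibility check against a candidate point. The class names and the encoding of variables as array indices are assumptions of this sketch, not DecideIT's internal model.

```java
import java.util.List;

// Illustrative sketch of the constraint types of Definition 1.
interface Constraint {
    boolean satisfiedBy(double[] x); // one value per variable
}

final class Interval implements Constraint {     // x[var] in [lower, upper]
    final int var; final double lower, upper;
    Interval(int var, double lower, double upper) { this.var = var; this.lower = lower; this.upper = upper; }
    public boolean satisfiedBy(double[] x) { return x[var] >= lower && x[var] <= upper; }
}

final class Less implements Constraint {         // x[lhs] < x[rhs]
    final int lhs, rhs;
    Less(int lhs, int rhs) { this.lhs = lhs; this.rhs = rhs; }
    public boolean satisfiedBy(double[] x) { return x[lhs] < x[rhs]; }
}

final class Equal implements Constraint {        // x[lhs] = x[rhs]
    final int lhs, rhs;
    Equal(int lhs, int rhs) { this.lhs = lhs; this.rhs = rhs; }
    public boolean satisfiedBy(double[] x) { return x[lhs] == x[rhs]; }
}

final class Base {                               // a utility or probability base
    final List<Constraint> constraints;
    Base(List<Constraint> constraints) { this.constraints = constraints; }
    boolean satisfiedBy(double[] x) {
        return constraints.stream().allMatch(c -> c.satisfiedBy(x));
    }
}
```

A point satisfying every constraint in a base is one element of the corresponding polytope.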

2.2. Comparing Alternatives

In decision analysis, decision evaluation is mainly performed by comparing the expected utilities of alternatives.

Definition 2. Given an information frame $\langle T, P, U \rangle$ and an alternative $A_i$, the expression
$$E(A_i) = \sum_{i_1=1}^{m_{i}} p_{ii_1} \sum_{i_2=1}^{m_{ii_1}} p_{ii_1i_2} \cdots \sum_{i_s=1}^{m_{ii_1\cdots i_{s-1}}} p_{ii_1\cdots i_s}\, u_{ii_1\cdots i_s},$$
where $s$ is the depth of the tree corresponding to $A_i$, $m_{ii_1\cdots i_k}$ is the number of possible outcomes following the event with probability $p_{ii_1\cdots i_k}$, the $p$'s denote probability variables, and the $u$'s denote utility variables as above, is the expected utility of alternative $A_i$ in $\langle T, P, U \rangle$.
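To make the definition concrete, the following minimal sketch (assuming a simple Node representation, not the tool's actual data model) computes the expected utility recursively for a point-valued tree, which is what the simulation in Section 4 evaluates once concrete probabilities and utilities have been sampled.

```java
// Hedged sketch: the nested sums of Definition 2 are this recursion
// unrolled level by level. Node and its fields are illustrative.
final class Node {
    final double utility;          // used only by leaves (consequence nodes)
    final Node[] children;         // empty for consequence nodes
    final double[] childProbs;     // p_ij for each child, summing to 1

    Node(double utility) {                       // consequence node
        this.utility = utility;
        this.children = new Node[0];
        this.childProbs = new double[0];
    }

    Node(Node[] children, double[] childProbs) { // event node
        this.utility = 0.0;
        this.children = children;
        this.childProbs = childProbs;
    }

    // E(node) = sum_j p_j * E(child_j); E(leaf) = u(leaf).
    double expectedUtility() {
        if (children.length == 0) return utility;
        double e = 0.0;
        for (int j = 0; j < children.length; j++) {
            e += childProbs[j] * children[j].expectedUtility();
        }
        return e;
    }
}
```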

Alternatives are then compared and ranked according to their expected utility. For evaluation purposes with imprecise information, the notion of strength of an alternative $A_i$ relative to $A_j$ is used, simply being the difference $\delta_{ij} = E(A_i) - E(A_j)$, where $\delta_{ij} > 0$ would mean that $A_i$ is preferred to $A_j$ according to the utility principle. However, if $E(A_i)$ and $E(A_j)$ are interval-valued expected utilities, $\delta_{ij}$ is interval-valued as well, which may lead to overlapping utility intervals, meaning that preference is not that straightforward to conclude. To handle this within the Delta framework, the concept of contraction has been proposed as an embedded form of sensitivity analysis when dealing with overlapping expected utility intervals [12, 13]. The contraction analysis consists of (proportionally) shrinking the range of each probability and utility interval while studying $\max \delta_{ij}$ and $\min \delta_{ij}$ at different contraction levels. This evaluation thus involves a nonlinear optimization problem in finding these bounds, which is treated in [14]. The level of contraction is indicated as a percentage, so that at a 50% level of contraction the range of each variable (probability, utility) interval has been reduced to half its initial length, and at a 100% level of contraction to a single point (the contraction point); see Figure 2 for a visualization of the concept. This point may be set by the decision maker as long as it is consistent with the constraints. If not set by the decision maker, a contraction point may be suggested from the center of mass of each polytope; see, for example, [13].
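The proportional shrinking itself is simple to express; the following is an illustrative helper, assuming a contraction point m inside the interval, while the optimization of $\delta_{ij}$ over the contracted polytopes remains the nonlinear problem treated in [14].

```java
// Illustrative sketch (not the optimization step of [14]) of
// contracting an interval [a, b] toward a contraction point m at
// level c in [0, 1]: c = 0.5 halves the interval's length,
// c = 1 collapses it to the single point m.
final class Contraction {
    static double[] contract(double a, double b, double m, double c) {
        if (m < a || m > b) throw new IllegalArgumentException("contraction point must lie in [a, b]");
        return new double[] { a + c * (m - a), b - c * (b - m) };
    }
}
```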

Figure 2: Contraction analysis of $A_i$ versus $A_j$, studying $\max \delta_{ij}$ and $\min \delta_{ij}$ at contraction levels between 0% and 100% on the horizontal axis. It can be seen that, above a certain contraction level, there is no overlap. This level is called the intersection level.

In addition, for a decision problem with $n$ alternatives, we may compare one alternative $A_i$ with the average of the other alternatives by the difference $\delta_{i\bar{\imath}} = E(A_i) - \frac{1}{n-1} \sum_{j \neq i} E(A_j)$. Studying $\delta_{i\bar{\imath}}$ may also be done under different contraction levels and will delimit the number of comparisons needed in order to find a ranking of alternatives; see, for example, [15]. The contraction approach to evaluating interval-valued decision trees yields upper and lower bounds for the difference in expected utilities between alternatives under different levels of contraction. The underlying idea behind the contraction analysis is that there is less belief in the outer endpoints of the intervals than in points closer to the centroid. In other words, points closer to the centroid are more interesting in the study of $\max \delta$ and $\min \delta$, as the belief distribution over the interval has its mass concentrated in regions close to the expected utility point obtained from each polytope centroid; see [11].

However, in real-world decision situations it is often hard to discriminate between the alternatives, for two reasons: (i) the intersection level may be regarded as too high for a decision maker to conclude preference, and (ii) the optimization and contraction approach, together with the cones visualized, is cognitively demanding for a decision maker. Therefore, it is worthwhile to extend the representation of the decision situation, allowing for true second-order distributions over classes of probability and utility measures, in order to search for more decisive and comprehensible methods.

3. Including Second-Order Information

When including second-order information, interval estimates and relations (comparative statements) can be considered special cases of representations based on distributions over polytopes. For instance, a distribution can be defined to have positive support only where $u_{ij} < u_{kl}$ (consistent with a less relation). More formally, the solution set to a probability or utility base is a subset of a unit cube, since both variable sets have $[0, 1]$ as their ranges. This subset can be represented by the support of a distribution over the cube.

Definition 3. By a second-order belief distribution over a unit cube $B = [0, 1]^k$, we denote a positive distribution $f$ defined on $B$ such that $\int_B f(x)\, dV_B(x) = 1$, where $V_B$ is the $k$-dimensional Lebesgue measure on $B$.

As an information frame has two separate constraint sets, $P$ holding constraints on probability variables and $U$ holding constraints on utility variables, it is suitable to distinguish between cubes in the same fashion. A unit cube holding probability variables is denoted by $B_p$ and a unit cube holding utility variables is denoted by $B_u$. The normalization constraint for probabilities implies that for a belief distribution over $B_p$ there can be positive support only for tuples $(x_1, \ldots, x_k)$ where $\sum_i x_i = 1$.

Definition 4. A probability unit cube $B_p$ for an intermediate event node is a unit cube $[0, 1]^k$ over the probability variables of the node's $k$ children, where $f(x_1, \ldots, x_k) > 0$ only when $\sum_i x_i = 1$ and all constraints are satisfied. A utility unit cube, $B_u$, lacks the latter normalization.

Example 5. Given an information frame $\langle T, P, U \rangle$, constraints in the bases can be defined through a belief distribution. Given a unit cube $B$ and a distribution $f$ over $B$ that is uniform over the solution set to the constraints and zero elsewhere, $f$ is a second-order (belief) distribution in our sense, and the support of $f$ is the solution set. See Figure 3.

Figure 3: The support of $f$ is the solution set of the set of constraints.

4. A Simulation Approach to Decision Evaluation

For the purpose of providing an interactive method for decision evaluation with second-order probabilities, a set of algorithms for generating random values for a given decision tree is used. The approach is to generate one random vector from each of the solution sets formed by the probability and utility bases. Thereby we generate a random point-valued decision tree consistent with the bases; the $k$th generated decision tree is labeled $T_k$. The random vectors may be generated from various second-order distributions over each solution set. Representative samples from the resulting belief distributions over quantities of interest, such as $E(A_i)$, $\delta_{ij}$, and $\delta_{i\bar{\imath}}$, are obtained by generating a large number of trees and calculating for each $T_k$ the corresponding point values $E(A_i)$, $\delta_{ij}$, or $\delta_{i\bar{\imath}}$. It is also meaningful to keep track of which alternative had the highest expected utility for each $T_k$.
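A rough sketch of this loop is given below. The DecisionModel interface is an assumption of this sketch, standing in for the generators of Sections 4.1 and 4.2 and the evaluation of Definition 2; it is not the published tool's API.

```java
import java.util.Random;

// Hedged sketch of the Monte Carlo loop described above.
interface DecisionModel {
    int alternatives();
    double[][] sampleProbabilities(Random rng); // one vector per event node (Section 4.1)
    double[] sampleUtilities(Random rng);       // one value per consequence (Section 4.2)
    double[] expectedUtilities(double[][] p, double[] u); // E(A_i) per Definition 2
}

final class Simulation {
    // Returns the fraction of sampled trees T_k in which each
    // alternative had the highest expected utility, while collecting
    // samples of delta_12 that can be binned into a histogram.
    static double[] run(DecisionModel model, int iterations, Random rng) {
        int n = model.alternatives();
        int[] wins = new int[n];
        double[] delta12 = new double[iterations]; // E(A_1) - E(A_2) per tree T_k
        for (int k = 0; k < iterations; k++) {
            double[][] p = model.sampleProbabilities(rng);
            double[] u = model.sampleUtilities(rng);
            double[] e = model.expectedUtilities(p, u);
            delta12[k] = e[0] - e[1];
            int best = 0;
            for (int i = 1; i < n; i++) if (e[i] > e[best]) best = i;
            wins[best]++;
        }
        double[] support = new double[n];
        for (int i = 0; i < n; i++) support[i] = (double) wins[i] / iterations;
        return support;
    }
}
```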

4.1. Generating Probability Vectors

Let $\mathbf{p} = (p_1, \ldots, p_m)$ be a vector of probability values for the $m$ children of a node in a decision tree such that $\sum_{i=1}^{m} p_i = 1$ and $p_i \geq 0$. A fast sampling algorithm for such vectors was suggested in [16]. The resulting distribution of vectors is the Dirichlet distribution.

Definition 6. Let the notation be as above. Then the probability density function of the Dirichlet distribution is defined as
$$f(p_1, \ldots, p_m; \alpha_1, \ldots, \alpha_m) = \frac{\Gamma\left(\sum_{i=1}^{m} \alpha_i\right)}{\prod_{i=1}^{m} \Gamma(\alpha_i)} \prod_{i=1}^{m} p_i^{\alpha_i - 1}$$
on the set $\{(p_1, \ldots, p_m) : p_i \geq 0, \sum_{i=1}^{m} p_i = 1\}$, where $\alpha = (\alpha_1, \ldots, \alpha_m)$ is a parameter vector in which each $\alpha_i$ is a positive parameter and $\Gamma$ is the Gamma function.

Generating vectors from the uniform Dirichlet distribution (all $\alpha_i = 1$) can be done by sampling $m - 1$ independent uniformly distributed variables in the interval $[0, 1]$. The variables are then ordered in ascending order, so that they divide the interval into $m$ parts. These parts are identically distributed (each part follows a Beta$(1, m - 1)$ distribution) with expectation equal to $1/m$.
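A minimal sketch of this spacings construction (a standard one; names are illustrative):

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of the spacings method described above: m - 1 independent
// U(0, 1) draws, sorted, divide [0, 1] into m parts whose lengths
// are jointly Dirichlet(1, ..., 1) distributed, i.e., uniform over
// the probability simplex.
final class SimplexSampler {
    static double[] sampleSimplex(int m, Random rng) {
        double[] cuts = new double[m + 1];
        cuts[0] = 0.0;
        cuts[m] = 1.0;
        for (int i = 1; i < m; i++) cuts[i] = rng.nextDouble();
        Arrays.sort(cuts);                    // the endpoints 0 and 1 stay in place
        double[] p = new double[m];
        for (int i = 0; i < m; i++) p[i] = cuts[i + 1] - cuts[i];
        return p;                             // sums to 1 by construction
    }
}
```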

Handling constraints in the form of inequalities between probabilities can be done by reordering interval sizes according to the inequality constraints. However, to deal with interval restrictions, rejection sampling techniques must be used. Unfortunately, the performance of the algorithm then depends heavily on the intervals themselves. Suppose, for example, that one of the intervals is specified very narrowly relative to the solution set; the vast majority of samples will then be rejected, which drastically slows down the whole simulation. Therefore, it is suggested to use only inequalities to specify relations between probabilities.
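The following hedged sketch combines both devices, assuming the inequality constraints form a single ascending chain and that interval restrictions are plain per-component bounds; it reuses sampleSimplex from the previous sketch.

```java
import java.util.Arrays;
import java.util.Random;

// Hedged sketch: order is enforced by sorting the sampled spacings
// (p[0] <= ... <= p[m-1]); interval bounds by rejection sampling.
final class ConstrainedSampler {
    static double[] sample(int m, double[] lower, double[] upper, Random rng) {
        while (true) {                                          // rejection loop
            double[] p = SimplexSampler.sampleSimplex(m, rng);  // previous sketch
            Arrays.sort(p);                                     // ascending chain
            boolean ok = true;
            for (int i = 0; i < m && ok; i++) {
                ok = p[i] >= lower[i] && p[i] <= upper[i];
            }
            if (ok) return p;  // narrow bounds reject most samples (see text)
        }
    }
}
```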

4.2. Generating Utility Vectors

Let $\mathbf{u} = (u_1, \ldots, u_N)$ be a vector of utility values for all $N$ consequences in a decision tree such that $u_i \in [0, 1]$. Generation of utility vectors is done under the assumption that each utility variable $u_i$ is uniformly distributed on the corresponding interval $[a_i, b_i]$. To cope with the comparative statements, utility variables related to each other by the same relation are collected in variable groups. Groups with variables related through the equality relation are formed, and the simulation is done for only one utility variable from each such group. Groups with variables related through the less relation are identified, and the utility variables within them are sorted in ascending order.

Two utilities $u_i$ and $u_j$ are collected in one group through the less relation if and only if either of the following holds:
(1) $u_i$ and $u_j$ are directly connected with the less relation;
(2) there exists a sequence of utility variables $u_{k_1}, \ldots, u_{k_n}$ such that $u_i < u_{k_1} < \cdots < u_{k_n} < u_j$ or $u_j < u_{k_1} < \cdots < u_{k_n} < u_i$.

By analogy with a group formed through less relations, two utilities $u_i$ and $u_j$ are in one group through the equality relation if and only if either $u_i$ and $u_j$ are directly connected with the equality relation, or there exists a sequence of utility variables $u_{k_1}, \ldots, u_{k_n}$ such that $u_i = u_{k_1} = \cdots = u_{k_n} = u_j$.

Example 7. Let $u_1, u_2, u_3, u_4, u_5, u_6$ be six utility variables with, say, the following relations between them: $u_1 < u_2$, $u_2 < u_3$, and $u_4 = u_5$. Then there exist three groups: $\{u_1, u_2, u_3\}$, $\{u_4, u_5\}$, and $\{u_6\}$.
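Forming such groups amounts to finding connected components in the graph whose edges are the stated relations. A small illustrative sketch using a disjoint-set (union-find) structure, with utility variables encoded as indices 0 to n − 1 (an assumption of the sketch):

```java
import java.util.*;

// Illustrative sketch: each stated relation (less or equal) is an
// edge; the groups are the connected components, found here with a
// small disjoint-set (union-find) structure.
final class Groups {
    private final int[] parent;

    Groups(int n) {
        parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
    }

    int find(int i) {                       // root lookup with path halving
        while (parent[i] != i) {
            parent[i] = parent[parent[i]];
            i = parent[i];
        }
        return i;
    }

    void relate(int i, int j) {             // record u_i < u_j or u_i = u_j
        parent[find(i)] = find(j);
    }

    Map<Integer, List<Integer>> groups() {  // component root -> member variables
        Map<Integer, List<Integer>> g = new HashMap<>();
        for (int i = 0; i < parent.length; i++) {
            g.computeIfAbsent(find(i), k -> new ArrayList<>()).add(i);
        }
        return g;
    }
}
```

With the variables of Example 7 encoded as indices 0 to 5, calling relate(0, 1), relate(1, 2), and relate(3, 4) yields exactly the three groups of the example.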

It has previously been demonstrated how to simulate from the joint distribution of a group of ordered utility variables with arbitrary uniform distributions [17]. The case where all variables are equidistributed can be solved by factorizing the joint distribution into a series of univariate conditional distributions and simulating from them in turn using inverse transformation sampling. The extension to arbitrary uniform distributions can be achieved by splitting the unit interval into subintervals and then considering all possible ways in which the variables can be distributed across those subintervals.
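For the equidistributed case, the backward recursion can be sketched as follows (a standard order-statistics construction, not code from [17]): the largest of k independent U(0, 1) variables has CDF $x^k$, so inverse transformation gives it as $U^{1/k}$; conditioned on its value, the next largest is a scaled maximum of k − 1 uniforms, and so on.

```java
import java.util.Random;

// Sketch of inverse-transform sampling of k ordered, equidistributed
// U(0, 1) variables via the order-statistics recursion above.
final class OrderedUniformSampler {
    static double[] sample(int k, Random rng) {
        double[] u = new double[k];
        double upper = 1.0;
        for (int i = k; i >= 1; i--) {
            upper *= Math.pow(rng.nextDouble(), 1.0 / i); // largest value still to place
            u[i - 1] = upper;                             // filled from the top down
        }
        return u;                                          // u[0] <= ... <= u[k-1]
    }
}
```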

5. Decision Evaluations and Tool

The above-mentioned simulation approach allows for efficient sampling of probability and utility vectors and has been implemented in the Java programming language as a part of the DecideIT (http://www.preference.bz/DecideIT/) decision tool [13]. This tool is an implementation of the Delta framework and as such already allows for modelling of decision trees and comparison of alternatives by means of the expected utility principle together with contraction analysis of $\delta_{ij}$ and $\delta_{i\bar{\imath}}$. With respect to the simulation approach, a user can specify lower bounds for probabilities together with interval-valued utilities as well as comparative statements between utility variables. In the simulation, during each iteration, random probabilities and utilities satisfying the user's constraints are generated in a Monte Carlo fashion. When the simulation is done, a histogram is visualized that depicts the distribution of an alternative's expected utility or the distribution over the strength measure between alternatives. This histogram can then be used for decision evaluation, where not only quantitative characteristics such as the mean value but also the shape of the distribution can influence the evaluation. In addition, we may compare all alternatives in one evaluation by investigating the support for each of the alternatives being the optimal choice according to the utility principle, that is, the amount of second-order belief in favor of each alternative having the highest expected utility. Such an evaluation method has no counterpart when delimiting the model to first-order information only.

6. Example and Performance

This example is inspired by a similar example that was used as a case for illustrating decision evaluation with imprecise information; see [12]. The tree in Figure 4 represents a simplified decision tree for the situation when a company is to decide upon how to acquire a new system. There are three alternatives: to develop an in-house system, to order a new system from consultants, or to live with no system. The objective is to identify which alternative should be chosen according to the utility principle. For each alternative we have an uncertain event with several possible consequences, and in total we have a consequence set of nine consequences ($c_1$ to $c_9$).
(i) No system: there will be no legal requirements to install the system and thus no additional costs ($c_1$); legal requirements appear and the company will have to make investments later and break-even is not reached ($c_2$); legal requirements appear and the company will have to make investments later but break-even is still reached ($c_3$).
(ii) Installing an in-house system: break-even may not be reached ($c_4$) or be reached ($c_5$).
(iii) Ordering the system from consultants: delivered but break-even is not reached ($c_6$); delivered and break-even is reached ($c_7$); not delivered, which leads to installation of an in-house system, and break-even is reached ($c_8$); not delivered and an in-house system must be installed, but break-even is not reached ($c_9$).

Figure 4: The decision tree for the example, with interval probabilities and utilities denoted with $p$: $[\cdot, \cdot]$ and $u$: $[\cdot, \cdot]$, respectively, for each consequence node.

Some comparative statements are also assessed. These derive from the fact that if the consultants do not deliver a system, it would have been better to develop an in-house system from the start. Based upon this, $c_5$ is better than $c_8$ and $c_4$ is better than $c_9$ in the tree shown in Figure 4. This is modelled as $u_8 < u_5$ and $u_9 < u_4$.

Figures 5 and 6 contrast the output from the contraction analysis with that of the second-order analysis. Both methods suggest that installation of an in-house system (Alt. 2) is the preferred choice, although the second-order analysis more clearly discriminates between the alternatives. For example, when comparing Alt. 2 to Alt. 1 in Figure 5, about 79% of the simulated second-order belief is in favor of Alt. 2, whereas full separation between the alternatives requires a contraction level almost as high as 85%. In addition, Figure 7 presents a simultaneous comparison of all three alternatives with regard to their respective support, that is, the simulated belief that each alternative is optimal according to the expected utility principle. It can be seen that approximately 78% of the belief support is in favor of Alt. 2 having the highest expected utility.

Figure 5: Pairwise comparison of Alt. 2 and Alt. 1 using contraction analysis (a) and second-order analysis (b). At a contraction level of about 85% the expected utility intervals cease to overlap, although about 79% of the belief mass is in favor of Alt. 2 in this evaluation. The mean of the simulated values and the contraction point coincide.
Figure 6: Comparing Alt. 2 with the average of the other alternatives. At a sufficiently high contraction level the expected utility intervals cease to overlap, although most of the belief mass is in favor of Alt. 2 in this evaluation. The mean of the simulated values and the contraction point coincide.
Figure 7: Comparing the belief support for all three alternatives in a circle diagram.
6.1. Performance Testing

Since the number of samples needed to conduct a proper comparison of alternatives based upon the belief distribution over the expected utility is usually large, performance becomes crucial for ensuring interactive usage. Profiling showed that the most time-consuming operation is the generation of random values, so it was important to minimize the number of generated random variables. We compared two trees with an identical structure consisting of three alternatives with eight consequence nodes each. For the first tree, utility intervals but no comparative statements were used. For the second tree, three ordered groups of utility variables connected by the less relation were used. The time was measured for the whole generation of expected utilities of the alternatives.

From the performance testing results shown in Figure 8, one can see that there is no large difference in simulation time between the two types of trees as long as they have the same size, that is, the same number of probability and utility nodes. The performance testing also shows that the time required to perform a simulation with a large sample size for a relatively large tree is tolerable and allows working with the tool in real time.

Figure 8: Performance test results. Lower graph (red) corresponds to the first tree; upper graph (blue) corresponds to the second tree.

7. Discussion and Concluding Remarks

This paper presents an implementation of a simulation approach to decision analysis with second-order belief distributions. The approach relies on an existing framework for interval decision analysis, here extended with decision evaluation methods assuming belief distributions over probability and utility variables. The main advantage of the framework, in comparison with the traditional approach of specifying exact probabilities and utilities, is that for real-world problems these exact values are often unknown, whereas imprecise estimates can be derived.

We provide some performance measurements to ensure that the software can be used for interactive decision analysis, together with an illustrative example showing the increase in discriminative power obtained when allowing for second-order distributions. The applicability of the approach can be increased by finding methods for managing upper probability bounds, so that the user is not restricted to lower bounds alone when second-order evaluations are deemed necessary. Further, larger degrees of freedom, in terms of allowing for groups of utility variables that are not totally ordered together with a possibility to state different belief distributions for different variables, should be pursued.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

1. I. Levi, “On indeterminate probabilities,” Journal of Philosophy, vol. 71, no. 13, pp. 391–418, 1974.
2. K. Weichselberger, “The theory of interval-probability as a unifying concept for uncertainty,” in Proceedings of the 1st International Symposium on Imprecise Probabilities and Their Applications, 1999.
3. E. Kofler, Z. W. Kmietowicz, and A. D. Pearman, “Decision making with linear partial information (L.P.I.),” Journal of the Operational Research Society, vol. 35, no. 12, pp. 1079–1090, 1984.
4. A. D. Pearman, “Establishing dominance in multiattribute decision making using an ordered metric method,” Journal of the Operational Research Society, vol. 44, no. 5, pp. 461–469, 1993.
5. A. A. Salo and R. P. Hämäläinen, “Preference ratios in multiattribute evaluation (PRIME)—elicitation and decision procedures under incomplete information,” IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 31, no. 6, pp. 533–545, 2001.
6. J. Mustajoki, R. P. Hämäläinen, and A. Salo, “Decision support by interval SMART/SWING—incorporating imprecision in the SMART and SWING methods,” Decision Sciences, vol. 36, no. 2, pp. 317–339, 2005.
7. A. Jiménez, S. Ríos-Insua, and A. Mateos, “A generic multi-attribute analysis system,” Computers and Operations Research, vol. 33, no. 4, pp. 1081–1101, 2006.
8. P. Gärdenfors and N.-E. Sahlin, “Unreliable probabilities, risk taking, and decision making,” Synthese, vol. 53, no. 3, pp. 361–386, 1982.
9. L. Ekenberg and J. Thorbiörnson, “Second-order decision analysis,” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 9, no. 1, pp. 13–37, 2001.
10. M. Danielson, L. Ekenberg, and A. Larsson, “Distribution of expected utility in decision trees,” International Journal of Approximate Reasoning, vol. 46, no. 2, pp. 387–407, 2007.
11. D. Sundgren, M. Danielson, and L. Ekenberg, “Warp effects on calculating interval probabilities,” International Journal of Approximate Reasoning, vol. 50, no. 9, pp. 1360–1368, 2009.
12. M. Danielson and L. Ekenberg, “A framework for analysing decisions under risk,” European Journal of Operational Research, vol. 104, no. 3, pp. 474–484, 1998.
13. M. Danielson, L. Ekenberg, J. Idefeldt, and A. Larsson, “Using a software tool for public decision analysis: the case of Nacka municipality,” Decision Analysis, vol. 4, no. 2, pp. 76–90, 2007.
14. M. Danielson and L. Ekenberg, “Computing upper and lower bounds in interval decision trees,” European Journal of Operational Research, vol. 181, no. 2, pp. 808–816, 2007.
15. M. Danielson, “Generalized evaluation in decision analysis,” European Journal of Operational Research, vol. 162, no. 2, pp. 442–449, 2005.
16. T. Tervonen and R. Lahdelma, “Implementing stochastic multicriteria acceptability analysis,” European Journal of Operational Research, vol. 178, no. 2, pp. 500–513, 2007.
17. O. Caster and L. Ekenberg, “Combining second-order belief distributions with qualitative statements in decision analysis,” in Managing Safety of Heterogeneous Systems, Y. Ermoliev, M. Makowski, and K. Marti, Eds., vol. 658 of Lecture Notes in Economics and Mathematical Systems, pp. 67–87, Springer, 2012.