Computational and Mathematical Methods in Medicine

Volume 2015 (2015), Article ID 347501, 8 pages

http://dx.doi.org/10.1155/2015/347501

## Community-Based Decision Making and Priority Setting Using the R Software: The Community Priority Index

^{1}Department of Family and Community Medicine, Baylor College of Medicine, 3701 Kirby Drive, Suite 600, Houston, TX 77098, USA

^{2}REACHUP, Inc., 2902 N. Armenia Avenue, Suite 100, Tampa, FL 33607, USA

^{3}Department of Epidemiology and Biostatistics, College of Public Health, University of South Florida, 13201 Bruce B. Downs Boulevard, MDC 56, Tampa, FL 33612, USA

Received 5 January 2015; Revised 6 February 2015; Accepted 9 February 2015

Academic Editor: Thomas Desaive

Copyright © 2015 Hamisu M. Salihu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper outlines how to compute community priority indices in the context of multicriteria decision making in community settings. A simple R function was developed and validated with community needs assessment data. Specifically, the first part of this paper briefly reviews existing methods for priority setting and discusses the utility of a multicriteria decision-making approach for community-based prioritization. The second part illustrates how community priority indices can be calculated from community data with the freely available R program, showing the computational and mathematical steps of the CPI (Community Priority Index) with bootstrapped 95% confidence intervals.

#### 1. Introduction

Providing public health practitioners and community development advocates with reliable measures for priority setting is a necessary step to foster accountability of the decision-making process in community settings. Community engagement is considered to be the pivotal element for a successful community-based organization [1, 2], particularly during the implementation of needs assessment projects and the selection of priorities for community action. By involving all relevant community stakeholders in the development of community action plans, community-based organizations not only ensure an equitable decision-making process but also enhance the cultural acceptability of interventions [3–5]. Although priority setting is an essential decision-making step for community-based organizations and participatory action research, there is little guidance on how to approach priority setting with quantifiable indicators while adopting community engagement principles [6].

Techniques for prioritization in community settings range from simple voting to more complex consensus-building techniques, such as straight voting, weighted voting, the nominal group technique, consensus panels, focus groups, the Delphi technique, and others. The simplest form of community prioritization often occurs in town hall or board meetings using simple voting, which typically means giving each stakeholder the opportunity to vote on a list of issues. A variation of simple voting is to assign each stakeholder a fixed number of votes (e.g., 3 votes) and then sort or rank the ideas to select the top one. Although democratic, this form of prioritization is practical only when the number of choices is small, it is sensitive to issues of representativeness and generalizability, and it becomes increasingly cumbersome as the number of candidate priorities grows. Another issue with straight voting is that it captures only the majority opinion and may inadvertently alienate a minority group, which can have detrimental consequences for the community partnership and the engagement process. Some community advocates also use weighted voting, in which stakeholders assign different point values (e.g., 1 = low importance, 2 = medium importance, and 3 = high importance) to a list of community issues in order to rank the items afterward. Although such a process tends to be more equitable, this method assumes that decision-makers are capable of mentally assigning reliable weights to diverse issues. This assumption is unrealistic because unguided stakeholders will reflect their personal preferences, and there is no guarantee that they will apply uniform, consistent, and defensible criteria every time they vote.

Because of the limitations of simple and weighted voting methods, community development scholars recommend that voting be complemented with group discussions aimed at building a consensus that captures the community perspective rather than personal preferences. The two most frequently used consensus-building methods are the nominal group technique (NGT) [7, 8] and the Delphi technique [9–15]. By combining ranking procedures and participatory discussions, these techniques can be very effective in building consensus across diverse groups of stakeholders in a democratic and unbiased manner [10–12, 14–16]. The nominal group technique uses a single round of rank-ordered feedback, followed by a discussion that results in community consensus. However, a group of stakeholders that exceeds ten or twelve members cannot be easily managed, and consensus may not be achieved. In contrast, the Delphi technique can accommodate larger numbers of stakeholders and includes several iterations of ranking and reranking (typically three or four) interspersed with consensus discussions. Because of its iterative nature and capacity to incorporate larger numbers of stakeholders, the Delphi technique is more robust than other methods. However, it can result in a lengthy process lasting several weeks or months, and because its implementation often requires highly skilled facilitators, it may be difficult to implement in community settings [9–11, 14, 15].

Community practitioners may also utilize qualitative techniques such as focus groups or key informant interviews. Qualitative methods provide credible and transferable contextual data, but achieving generalizability requires mixed-methods approaches. The value of qualitative techniques lies in gathering culturally relevant and richly experiential data, whereas quantitative techniques can complement qualitative findings (e.g., focus group themes) by generating measurable indicators that are comparable across settings (priority scores), populations (e.g., mothers, children, and different geographical areas), and periods of time (longitudinal/repeated-measures assessment).

None of the techniques mentioned above explicitly differentiates between the criteria of importance and changeability for the issues under consideration. For instance, stakeholders may decide to address highly important topics that are very difficult to change, which will result in ineffective projects with discouraging results for the community. Conversely, stakeholders may prioritize issues that are highly changeable but of relatively low importance. The latter situation would result in inefficient use of scarce resources. We consider that community-based organizations must aim to address issues that are both highly changeable and highly important. Since there is little guidance for community-based organizations on how to integrate these two criteria, we felt the need to develop a combined measure that indicates priority based on both importance and changeability.

#### 2. The Need for the Development of the Community Priority Index

There is a clear need for quantifiable priority-setting indicators that integrate importance and changeability and permit cross-setting comparisons while allowing wide participation of community stakeholders. Therefore, we developed the Community Priority Index using the following stepwise approach.

##### 2.1. Adoption of Multiple Decision-Making Criteria

The adoption of a multicriteria decision-making approach in community-based priority setting has been widely recommended [17–19]. We recommend that decision-makers adopt at least two criteria: importance and changeability. We selected importance and changeability because these are commonly used decision criteria in community-based program planning, but their utilization remains judgment-based and difficult to replicate owing to the lack of quantifiable and comparable measures [20, 21]. Importance pertains to how relevant the issue is to the community context, which could be based on the magnitude of a particular problem (e.g., its prevalence, its healthcare cost burden, its contribution to life expectancy or quality of life, or its relevance to the community under discussion). It is important to note that decision-makers can adopt separate importance criteria, such as importance based on cost, on the number of people dying from associated diseases, or on the impact on community quality of life. To simplify the present analysis, we use only one overall importance criterion. The second criterion we recommend is changeability, which refers to how easily the issue could be changed in the community if a designated intervention were made available within the scope of a particular community-based organization.

##### 2.2. Importance and Changeability Ratings for Each Decision-Making Criterion

Stakeholders rate each community issue separately on the criteria of importance and changeability, using weighted numerical scores (from 1 = low to 3 = high). We used a 3-point Likert-type scale as follows: for importance, 1 = not important, 2 = intermediate importance, and 3 = very important; for changeability, 1 = not changeable, 2 = intermediate changeability, and 3 = highly changeable.

The mathematical computation consists of the following steps. Let $n$ be the number of stakeholders (interviewers or decision-makers); each interviewer prioritizes $q$ questions (issues) using $c$ criteria per question. Let $x_{ijk}$ be the 3-point Likert score of the $k$th criterion of the $j$th question given by the $i$th interviewer; thus, $x_{ijk} \in \{1, 2, 3\}$ for all $i = 1, \ldots, n$; $j = 1, \ldots, q$; $k = 1, \ldots, c$.

##### 2.3. Computation of Item Average Scores by Importance and by Changeability

An important caveat is that, in community settings, it is typical to obtain an unequal number of responses per item. This situation occurs because some items are answered by all members, while others are answered by only a subset (e.g., some stakeholders may leave blanks to abstain from voting, or responses may simply be missing at random). The use of a simple sum of item scores is inappropriate in such a situation. Thus, we used the arithmetic mean and computed mean importance scores and mean changeability scores. Accordingly, each sum of item scores was divided by the number of respondents for that particular item, yielding the item mean importance and the item mean changeability. Forced responses are not recommended, since they may be perceived as coercion and a threat to the democratic process.

The mean of the $k$th criterion of the $j$th question, $\bar{x}_{jk}$, is calculated as follows:

$$\bar{x}_{jk} = \frac{1}{n_{jk}} \sum_{i=1}^{n_{jk}} x_{ijk},$$

where $n_{jk}$ is the number of stakeholders who responded to criterion $k$ of question $j$.
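As a minimal R sketch of this step (not the authors' published function; the matrix layout and the name `item_means` are illustrative), the item means with unequal response counts can be computed with `colMeans(..., na.rm = TRUE)`, which divides each column sum by the number of actual respondents rather than the full panel size:

```r
# Assumed layout: a stakeholder-by-question matrix of Likert scores (1-3)
# for ONE criterion (e.g., importance); NA marks an abstention.
item_means <- function(ratings) {
  # na.rm = TRUE divides each column sum by n_jk, the number of
  # stakeholders who actually responded to that item
  colMeans(ratings, na.rm = TRUE)
}

imp <- rbind(c(3, 2, 1),
             c(3, NA, 2),   # second stakeholder abstained on question 2
             c(2, 2, 3))
item_means(imp)  # per-question mean importance: 8/3, 2, 2
```

Using the mean rather than the sum keeps items with abstentions comparable to fully answered items.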

##### 2.4. Multiplication of Mean Importance and Mean Changeability Item Scores to Generate a Summary Statistic That We Refer to Here as Community Priority Index (CPI)

We used the following formula: $\mathrm{CPI}_j = \bar{x}_{j1} \times \bar{x}_{j2}$, where $\bar{x}_{j1}$ is the mean importance and $\bar{x}_{j2}$ the mean changeability of question $j$. A single summary index was thus computed for each item or issue, integrating both perceived importance and perceived changeability, with higher values indicating higher priority.

The CPI is the product of the criterion means of the $j$th question, calculated as follows:

$$\mathrm{CPI}_j = \prod_{k=1}^{2} \bar{x}_{jk} = \bar{x}_{j1} \cdot \bar{x}_{j2}.$$
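The product of the two criterion means can be sketched in R as follows (an illustrative sketch, not the authors' published function; `imp` and `chg` are assumed stakeholder-by-question rating matrices):

```r
# CPI_j = mean importance of question j * mean changeability of question j.
# imp and chg are stakeholder-by-question matrices (NA = abstention).
cpi <- function(imp, chg) {
  colMeans(imp, na.rm = TRUE) * colMeans(chg, na.rm = TRUE)
}

imp <- rbind(c(3, 1),   # hypothetical ratings for two questions
             c(3, 2))
chg <- rbind(c(2, 3),
             c(3, 3))
cpi(imp, chg)  # 3.0 * 2.5 = 7.5 and 1.5 * 3.0 = 4.5
```

Note how an issue with top importance but modest changeability (7.5) still outranks a fully changeable but less important one (4.5), which is exactly the trade-off the index is meant to capture.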

##### 2.5. Stratification by Target Population

If two or more subpopulations are targeted, then we recommend the stratification of CPI scores by types of population to identify priorities for action. This is a final step in which the issues are organized by type of population and ordered in descending fashion to identify the top highly important and highly changeable issues by target population. In this manner, community stakeholders will be able to better determine the scope of community-based strategies and to allocate project resources more effectively. Notably, the process is systematic and democratic, from beginning to the final selection of top priorities and by diverse populations separately (e.g., women, children, men, and the elderly).

##### 2.6. Construction of 95% Confidence Intervals and Bootstrapping

This step pertains to evaluating the precision of asymptotic approximations, which is important when the number of stakeholders is relatively small (e.g., fewer than 30). For this purpose, we constructed 95% confidence intervals.

The lower bound (LB) of $\mathrm{CPI}_j$ can be calculated by assuming $x_{ijk} = 1$ for all $i$, $j$, $k$; thus, the LB of the CPI is $1 \times 1 = 1$. Similarly, the upper bound (UB) of $\mathrm{CPI}_j$ can be calculated by assuming $x_{ijk} = 3$ for all $i$, $j$, $k$; thus, the UB of the CPI is $3 \times 3 = 9$. That is, the range of $\mathrm{CPI}_j$ is $[1, 9]$.

It is important to highlight that the traditional confidence interval estimator, which relies on the normality of the sampling distribution, cannot be used with small samples [22]. To overcome this limitation, we complemented the classic analysis with bootstrap methods to construct 95% confidence intervals [23]. Bootstrap samples were created by drawing ten thousand samples with replacement from the original dataset, and the 2.5th and 97.5th percentiles of the resulting statistics were taken as the 95% CI of the CPI. In bootstrapping, data collected in a single experiment are used to simulate what the results would have been had the experiment been repeated over and over with new samples (i.e., sampling with replacement from the original dataset). Specifically, we used bootstrap samples to estimate the mean score of the 3-point Likert-type scaled items and its 95% confidence interval. With bootstrap methods, the sampling distribution of the mean approaches normality, permitting the use of the mean as a reference cut point [23]. Therefore, we generated 10,000 bootstrap samples of community stakeholders' ratings via computer program (S+ 8.2) [24]. The following algorithm was used to generate the bootstrap samples.

(1) We constructed an empirical distribution function, $\hat{F}$, from the observed data; $\hat{F}$ places equal probability on each observed data point $x_i$.

(2) We then drew a bootstrap sample of size 6 with replacement from $\hat{F}$ and calculated its mean; across repetitions, these bootstrap means are approximately normally distributed.

(3) Step (2) was repeated 10,000 times. The percentile method was used to compute a 95% confidence interval around the mean by ranking the bootstrap sample means and selecting the 2.5th percentile as the lower confidence limit and the 97.5th percentile as the upper confidence limit. In other words, the lower bound is the smallest CPI score within the 95% confidence interval, while the upper bound is the largest.
In this regard, the mean value of CPI scores for each issue represents the group consensus, whereas the width of 95% confidence intervals indicates the range of agreement.
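The percentile bootstrap described above can be sketched in R for a single item as follows (an illustrative sketch, not the authors' published code; the function name, the example ratings, and the default of 10,000 replicates are assumptions). Importance and changeability vectors are resampled as pairs, one entry per stakeholder:

```r
# Percentile-bootstrap 95% CI for one item's CPI.
# imp, chg: paired stakeholder ratings for a single question.
boot_cpi_ci <- function(imp, chg, B = 10000, conf = 0.95) {
  n <- length(imp)
  stats <- replicate(B, {
    idx <- sample.int(n, n, replace = TRUE)  # resample stakeholders
    mean(imp[idx]) * mean(chg[idx])          # CPI of the bootstrap sample
  })
  alpha <- 1 - conf
  # percentile method: 2.5th and 97.5th percentiles of the bootstrap CPIs
  quantile(stats, c(alpha / 2, 1 - alpha / 2))
}

set.seed(1)  # for reproducibility
boot_cpi_ci(c(3, 2, 3, 3, 2, 3), c(2, 2, 3, 3, 2, 3))
```

Both bounds necessarily fall within the theoretical CPI range of $[1, 9]$, and a narrow interval signals strong agreement among stakeholders.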

##### 2.7. Standardization of CPI Scores

Up to this point, CPI results are still scale-dependent and cannot be compared across community settings if different Likert-type scales are used (e.g., a 3-point versus a 5-point or 7-point scale). Comparability becomes particularly important for nationwide programs or coalitions that have local or county chapters. Thus, we standardized each CPI indicator to have a range from 0 to 1 by applying the following conceptual formula:

standardized score = (observed score − minimum possible score) / (maximum possible score − minimum possible score).

Accordingly, the mathematical formula standardizes each CPI indicator to range from 0 to 1:

$$\mathrm{CPI}_j^{\mathrm{std}} = \frac{\mathrm{CPI}_j - \mathrm{LB}}{\mathrm{UB} - \mathrm{LB}} = \frac{\mathrm{CPI}_j - 1}{9 - 1}.$$

Given the above formula, the standardized CPI can only range from 0 to 1 and is scale-free, which permits comparisons across different studies and populations. The entire computational process of the CPI is summarized in Figure 1.
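The standardization step can be sketched in R as follows (an illustrative sketch under the assumption of a 1-to-3 Likert scale; the function name and argument `L` are not from the paper). For an $L$-point scale scored 1 to $L$, the theoretical bounds are $\mathrm{LB} = 1$ and $\mathrm{UB} = L^2$:

```r
# Rescale raw CPI scores to [0, 1] using the theoretical bounds
# of an L-point Likert scale scored 1..L (LB = 1, UB = L^2).
standardize_cpi <- function(cpi, L = 3) {
  lb <- 1
  ub <- L^2
  (cpi - lb) / (ub - lb)
}

standardize_cpi(c(1, 5, 9))  # 0.0, 0.5, 1.0
```

Because the result no longer depends on the scale used, a chapter scoring issues on a 3-point scale and another using a 5-point scale (pass `L = 5`) produce directly comparable standardized indices.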