Abstract

Drug self-administration procedures have played a critical role in the experimental analysis of psychoactive compounds, such as cocaine, for over 50 years. While there are numerous permutations of this procedure, this paper will specifically focus on choice procedures using concurrent schedules of intravenous drug self-administration. The aims of this paper are to first highlight the evolution of drug choice procedures and then review the subsequent preclinical body of literature utilizing these choice procedures to understand the environmental, pharmacological, and biological determinants of the reinforcing stimulus effects of drugs. A main rationale for this paper is our proposition that choice schedules are underutilized in investigating the reinforcing effects of drugs in assays of drug self-administration. Moreover, we will conclude with potential future directions and unexplored scientific space for the use of drug choice procedures.

1. The Evolution of Drug Choice Procedures

Drug self-administration procedures have played a critical role in the experimental analysis of psychoactive compounds, such as cocaine, for more than 50 years. In general, preclinical drug self-administration procedures are utilized for two main scientific purposes. One purpose is abuse liability testing of psychoactive compounds for potential scheduling as controlled substances by the Drug Enforcement Administration, and there are already excellent reviews on the use of drug self-administration procedures for this purpose (see [1, 2]). The other main purpose of drug self-administration procedures is understanding the pharmacological, environmental, and biological determinants of drug-taking behavior as a model of drug addiction. This paper will focus on the use of concurrent-choice schedules of drug self-administration to address this latter purpose.

Although there are numerous permutations of drug self-administration procedures, all use the classic 3-term contingency of operant conditioning to investigate the stimulus properties of drugs [3]. This 3-term contingency can be diagrammed as S^D → R → S^C, where S^D designates a discriminative stimulus, R designates a response on the part of the organism, and S^C designates a consequent stimulus. The arrows specify the contingency that, in the presence of the discriminative stimulus S^D, performance of the response R will result in delivery of the consequent stimulus S^C. As a simple and common example from a preclinical laboratory, a rat implanted with a chronic indwelling catheter might be connected to an infusion pump containing a dose of a psychoactive drug and placed into an experimental chamber that contains a stimulus light and a response lever. Contingencies can be programmed such that, if the stimulus light is illuminated (the discriminative stimulus), then depression of the response lever (the response) will result in delivery of a drug injection (the consequent stimulus). Conversely, if the stimulus light is not illuminated, then responding does not result in the delivery of the drug injection. Under these conditions, subjects typically learn to respond when the discriminative stimulus is present. Consequent stimuli that increase responding leading to their delivery are operationally defined as reinforcers, whereas stimuli that decrease responding leading to their delivery are defined as punishers. The contingencies that relate discriminative stimuli, responses, and consequent stimuli are defined by the schedule of reinforcement [4].
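To make this arrangement concrete, the sketch below illustrates how such a single-response, fixed-ratio contingency might be programmed. It is a minimal toy simulation, not code from any of the cited studies; hardware input/output is replaced by a simulated subject, and all parameter values are arbitrary assumptions.

```python
import random

# Minimal sketch of a single-response, fixed-ratio (FR) contingency:
# responses emitted while the discriminative stimulus (S^D, e.g., a stimulus
# light) is present produce a drug infusion (S^C); responses in its absence
# have no programmed consequence. The "subject" is simulated, and all
# parameter values are arbitrary.

FIXED_RATIO = 1        # responses required per infusion (FR1)
N_TRIALS = 20          # discrete trials in this toy session

def run_session(p_respond_sd=0.9, p_respond_no_sd=0.1):
    infusions = 0
    for trial in range(N_TRIALS):
        sd_present = (trial % 2 == 0)                 # alternate S^D on/off across trials
        p_respond = p_respond_sd if sd_present else p_respond_no_sd
        responses = sum(random.random() < p_respond for _ in range(FIXED_RATIO))
        if sd_present and responses >= FIXED_RATIO:   # contingency met only when S^D is on
            infusions += 1                            # deliver the consequent stimulus
    return infusions

if __name__ == "__main__":
    print("Infusions earned:", run_session())
```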

The first published reports of intravenous drug self-administration used exactly the type of single-response procedure described above to examine morphine self-administration in morphine-dependent rats [5, 6]. Furthermore, these seminal studies demonstrated three principles that have been commonly observed in single-response drug self-administration procedures ever since. First, these studies demonstrated that intravenous morphine could maintain schedule-appropriate rates and patterns of responding leading to its delivery, indicating that morphine functioned as a reinforcer. Second, this single-response procedure produced a bitonic, “inverted U-shaped” dose-effect function relating the unit dose of morphine in each injection to measures of rate (either rates of responding or injection delivery). Thus, maximal rates of self-administration were maintained by intermediate morphine doses, and lower rates were maintained not only by lower morphine doses, but also by higher doses. Importantly, this pattern of responding indicates a dissociation between rates of self-administration maintained by a given consequent stimulus and the reinforcing efficacy of that stimulus [7, 8], because if the research subject is given a choice between a lower and a higher dose of the same drug, the subject will almost always choose the higher drug dose, indicating that the higher drug dose is the preferred or more efficacious reinforcer [9, 10]. Why would rates of drug self-administration decrease as dose increases above some apparent optimal level? As is often the case in other pharmacology domains, the presence of a bitonic dose-effect function indicates that multiple and/or opposing drug effects are being integrated into a common dependent variable. For example, measures of self-administration rate can be influenced not only by the reinforcing effects of a drug (which would tend to increase rates), but also by other effects of the self-administered drug that can either increase or decrease rates (e.g., effects that improve or impair motor competence or information processing). These other drug effects will be collectively referred to as “reinforcement-independent rate-altering effects” in this paper to distinguish them from reinforcing effects, and one goal of more recently developed procedures is to dissociate reinforcing drug effects from reinforcement-independent rate-altering effects.

The third and final principle revealed by these early studies was that rates of morphine self-administration could be altered by treatment with other drugs. These effects were interpreted to suggest treatment effects on drug reinforcement (and by extension, to provide evidence regarding mechanisms of drug reinforcement). However, just as self-administration rates can be influenced by multiple effects of the self-administered drug, so too can these rates be influenced by multiple effects of a treatment drug (or of any other experimental manipulation, such as a lesion or genetic modification) [11, 12]. More specifically, these experimental manipulations can alter rates of self-administration not only by changing the reinforcing effects of the self-administered drug, but also by changing the reinforcement-independent rate-altering effects of the self-administered drug, or by producing their own reinforcement-independent rate-altering effects. Overall, these early studies illustrated the promise of drug self-administration as a model of drug addiction, but they also provided a glimpse of the challenges to interpretation of rate-based measures generated by single-response procedures.

Since the early 1960s, preclinical drug self-administration research has flourished, and techniques for intravenous drug self-administration were rapidly extended to studies with other drug classes and in other species of experimental subjects [13, 14]. However, over the decades, drug reinforcement research has evolved along three divergent paths. One path of self-administration research has retained the single-response procedures introduced by Weeks and colleagues [5, 6] but has employed more demanding and complex schedules of reinforcement, such as progressive-ratio and second-order schedules, in place of the simple fixed-ratio schedules used in those early studies. In general, studies using these approaches have demonstrated that numerous drug classes can maintain rates and patterns of responding consistent with the hypothesis that drugs can function as reinforcers [15]. However, these approaches have been less successful in generating dependent measures that clearly dissociate reinforcing drug effects from reinforcement-independent rate-altering drug effects. To highlight the prevalence of single-response procedures in the current drug self-administration literature, we used the keyword “self-administration” in PubMed on April 11, 2012, to retrieve the 50 most recent preclinical studies using intravenous drug self-administration procedures as the primary independent variable. This “snapshot” of the literature revealed that 15 of the 50 most recent studies used a single-response drug self-administration procedure.

A second path of self-administration research has retained the simple fixed-ratio schedules utilized by Weeks and colleagues on one response lever, but incorporated rudimentary aspects of choice by introducing an “inactive” response option in addition to the “active” drug option. For example, an early study by Pickens and Thompson [16] used a two-lever self-administration procedure in which responding on one “active” lever produced intravenous cocaine delivery under a fixed-ratio (FR) 1 schedule of reinforcement, whereas responding on a second “inactive” lever had no scheduled consequences. Schedule-appropriate responding was maintained exclusively on the “active” lever, and when the contingencies on the two levers were reversed, rats rapidly reallocated their responding to the newly “active” lever. In our snapshot analysis of the current self-administration literature, the majority of drug self-administration studies (32 out of 50) used an “inactive” manipulandum. Differential rates of responding on “active” and “inactive” manipulanda are useful for investigating the reinforcing effects of consequent stimuli associated with the “active” manipulandum; however, the utility of this simple type of choice procedure is limited for at least two main reasons. First, although “active/inactive-response” procedures technically employ a concurrent schedule capable of generating measures of response allocation and choice, such measures are rarely computed or reported. Rather, investigators more commonly report measures of response or reinforcement rate on the “active” manipulandum as if it were the only response option available, and such rate-based measures of drug reinforcement are vulnerable to all the reinforcement-independent rate-altering drug effects described above. Second, baseline rates of behavior on the “active” and “inactive” manipulanda are normally vastly different, with rates on the “inactive” manipulandum being very low. As a result, data on “inactive” responding are primarily useful only for detecting reinforcement-independent rate-increasing effects. However, because “inactive” rates are already low, they are insensitive to reinforcement-independent rate-decreasing effects of experimental manipulations. This is a critical issue, because most drug self-administration studies are designed to evaluate the ability of experimental manipulations, such as pharmacological, environmental, or genetic variables, to decrease drug reinforcement as indicated by decreases in drug self-administration rates. Thus, procedures that use “active” and “inactive” manipulanda are not much different from the single-response self-administration procedures described above.

The third and least common path of drug self-administration research has used concurrent schedules in which responding is maintained on two or more manipulanda by two or more motivationally relevant consequent stimuli. For example, responding on one manipulandum might result in delivery of a particular drug dose, and responding on a different, concurrently available manipulandum might result in delivery of a different dose of the same drug, a different drug, or a qualitatively different consequent stimulus such as food (Figure 1). These procedures are often referred to as “choice” procedures, because subjects allocate their behavior, or “choose,” between the available consequent stimuli, and the relative reinforcing effects of the drug in comparison to an alternative are derived from measures of behavioral allocation (or “drug choice”) rather than behavioral rate. As with any other type of self-administration procedure, the self-administered drug or other experimental manipulations might also influence overall self-administration rate by producing reinforcement-independent rate-altering effects; however, the impact of these other effects on choice measures of drug reinforcement can be minimized by appropriate selection of manipulanda, discriminative stimuli, alternative reinforcing stimuli, and schedules of reinforcement. A specific example of this dissociation is shown in Figure 1. Cocaine versus food choice increases as the unit cocaine dose increases; however, rates of responding display the prototypic inverted U-shaped dose-effect function. Moreover, choice procedures generate distinct measures of behavioral allocation and behavioral rate that permit dissociation of reinforcing effects from reinforcement-independent rate-altering effects. For example, an experimental manipulation that decreases the reinforcing efficacy of a drug might be expected to reduce drug choice but increase choice of the alternative and produce no net change in overall reinforcement rates. A specific example of this selective effect from the literature is shown in Figure 2, which examines cocaine versus food choice during chronic treatment with the dopamine (DA)-selective releaser m-fluoroamphetamine [17]. Conversely, a manipulation that produces reinforcement-independent rate-altering effects (e.g., motor impairment) might be expected to reduce overall reinforcement rates without altering drug choice. An example of this effect from the literature is shown in Figure 3, which examines cocaine versus food choice during chronic treatment with the mu-opioid agonist methadone [18]. These distinct dependent measures of reinforcing effects (represented in measures of drug choice) and reinforcement-independent rate-altering effects (represented in measures of overall self-administration rates) are analogous to the use of concurrent schedules in the closely related field of drug discrimination research to generate dependent measures that permit dissociation of discriminative stimulus effects (represented in measures of drug-appropriate responding) from discrimination-independent rate-altering effects (represented in measures of overall rates of responding or reinforcement). Despite this apparent advantage, our PubMed search indicated that only 3 of the 50 most recent intravenous drug self-administration studies used a concurrent schedule of reinforcement.
This paucity of concurrent-schedule research on the reinforcing stimulus effects of drugs stands in striking contrast to the almost exclusive use of concurrent schedules in drug discrimination research and suggests that concurrent schedules are underutilized in studies of drug self-administration [19].
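To make the distinction between these two dependent measures concrete, the sketch below shows how behavioral allocation (percent drug choice) and behavioral rate (total reinforcers per unit time) might be computed from the event counts of a choice session. The function names and numbers are illustrative assumptions, not data from the studies shown in Figures 1–3.

```python
# Illustrative sketch only: computing the two dependent measures that choice
# procedures dissociate -- behavioral allocation (percent drug choice) and
# behavioral rate (reinforcers earned per minute). All numbers are invented.

def percent_drug_choice(drug_choices, food_choices):
    """Behavioral allocation: drug choices as a percentage of all completed choices."""
    total = drug_choices + food_choices
    return 100.0 * drug_choices / total if total else float("nan")

def reinforcement_rate(drug_choices, food_choices, session_min):
    """Behavioral rate: total reinforcers (drug + food) earned per minute."""
    return (drug_choices + food_choices) / session_min

# A treatment that selectively reduces drug reinforcement would be expected to
# shift allocation toward the food option without changing the overall rate.
baseline = dict(drug_choices=8, food_choices=2, session_min=60.0)
treated = dict(drug_choices=2, food_choices=8, session_min=60.0)

for label, counts in (("baseline", baseline), ("treatment", treated)):
    print(label,
          f"% drug choice = {percent_drug_choice(counts['drug_choices'], counts['food_choices']):.0f}",
          f"reinforcers/min = {reinforcement_rate(**counts):.2f}")
```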

Although choice procedures have been underutilized, their value has long been appreciated. As noted above, the earliest studies of intravenous drug self-administration used single-response procedures, but these studies were predated by choice studies in which drug was delivered by other routes of administration. For example, more than two decades before the studies by Weeks and colleagues, Spragg evaluated choice between intramuscular morphine and fruit in morphine-dependent chimpanzees and demonstrated that choice was largely influenced by the state of morphine withdrawal (such that morphine withdrawal was associated with an increased probability of morphine choice) [20]. Similarly, Nichols and colleagues established responding for oral morphine in rats and found that morphine withdrawal increased choice of morphine over water [21]. Intravenous drug delivery subsequently gained prominence in drug self-administration research because it promotes a rapid onset of drug action that facilitates learning of the contingency between responding and drug delivery. However, the rise of intravenous drug self-administration was also accompanied by a growing reliance on single-response and “active/inactive”-response procedures, perhaps because the limited lifespan of intravenous catheters favored procedures that require the least initial training. Nonetheless, the use of choice procedures persisted, especially in studies of oral drug self-administration [13, 22] and in a small but steady series of intravenous drug self-administration studies. The goal here is to review the history and major findings of research on intravenous drug choice.

Before proceeding, two other points are worthy of mention. First, although choice procedures are sparingly used in preclinical studies of drug reinforcement, they have emerged as the standard approach in clinical studies of drug reinforcement [23, 24]. Consequently, increased preclinical use of choice procedures might facilitate translational research on drug reinforcement. Second, scientific interest in drug reinforcement derives in large part from its presumed role in drug addiction, and drug addiction can be defined as a disorder of choice and behavioral allocation [25, 26]. Moreover, 5 of the 7 diagnostic criteria for substance dependence in the revised fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) are defined by allocation of behavior towards procurement and use of the substance compared to other behavior maintained by nondrug and presumably more adaptive alternative reinforcers [27]. In the fifth edition of the DSM, which is still under development, 6 of the 11 diagnostic criteria are defined by allocation of behavior toward the procurement and use of the substance [28]. Thus, addiction implies excessive drug choice at the expense of more adaptive behaviors. The pharmacological, environmental, and genetic determinants that influence drug choice and contribute to drug addiction can be directly studied using choice procedures.

2. Determinants of Drug Choice

2.1. Overview

Although the first drug choice procedure was published by Spragg in 1940 [20], the first intravenous drug choice procedure was not published until 1972, approximately 10 years after intravenous drug self-administration procedures were introduced [5]. In their seminal study, Findley et al. [29] examined the effects of dependence and withdrawal on secobarbital and chlordiazepoxide preference. Since 1972, there have been 66 publications examining the determinants of drug choice, and these publications are summarized in Table 1. The predominant drugs examined have been cocaine (80%) and heroin (15%). Furthermore, nonhuman primates have been the predominant research subjects (81%) in these studies, with rhesus monkeys (70%) being the most commonly used species. The results and implications of this literature will be reviewed in more detail below.

2.2. Choice between Drug and Itself
2.2.1. Effect of Dose

One of the first fundamental research questions to be answered was whether drug choice was dose-dependent. This question was important for determining whether drug choice varied independently of response rates as drug dose increased, that is, whether choice would increase even as response rates decreased with increasing drug doses. Although this specific experimental question has only been explicitly examined using intravenous cocaine as the reinforcer and rhesus monkeys as the research subjects, animals almost always choose the larger drug dose [9, 30–33]. Furthermore, in the studies that did report rates of responding, there was no systematic relationship between cocaine choice and rates of cocaine-maintained responding [30, 34]. Overall, this body of literature supports the conclusion that drug choice is dose-dependent and that drug choice may be less sensitive than single-response procedures to reinforcement-independent rate-altering drug effects.

2.2.2. Effect of Temporal Parameters of Reinforcer Delivery

Another fundamental question to be answered was whether drug choice was sensitive to manipulations of the temporal parameters of drug delivery. In general, when equal drug doses are available as the consequence for two response options, research subjects will allocate their behavior equally between the two response options. However, if infusion rate is varied between the two response options, subjects will almost exclusively choose the dose delivered at the faster rate (i.e., over the shorter infusion duration) [32, 35, 36]. The delay between response and drug delivery is a related variable that has been manipulated in choice studies [31, 37]. For example, Woolverton and Anderson [37] systematically varied the delay between completion of the response requirement and delivery of the intravenous drug injection. When the delay was 0 sec and the choice was between a low (0.025 mg/kg/injection) and high (0.05 mg/kg/injection) unit cocaine dose, the subjects almost exclusively chose the high cocaine dose. However, increasing delays in the delivery of the high cocaine dose produced a monotonic decrease in high cocaine dose choice and a reciprocal increase in choice of the alternative low cocaine dose. Overall, these data support the notion that drug choice is highly sensitive to manipulations of the timing of reinforcer delivery and that subjects prefer reinforcers that are delivered rapidly and without delay.

2.2.3. Effect of Schedule of Reinforcement

Finally, a third fundamental question to be addressed was whether the programmed schedule of reinforcement influenced drug choice. For example, when rhesus monkeys were given a choice between identical cocaine doses, but the probability of reinforcement was decreased such that every other FR5 (probability 50%) or every fourth FR5 (probability 25%) completion resulted in delivery of the cocaine injection on one of the response options, monkeys consistently chose the response option associated with the higher probability of reinforcement [38]. Several laboratories have examined the effects of schedule manipulations under concurrent variable-interval (VI):VI schedules. Under a VI schedule of reinforcement, the first response after a variable amount of time has passed results in presentation of the reinforcer. The individual intervals vary around an average value programmed by the investigator (e.g., 600 sec). Most of these studies have examined cocaine versus cocaine choice [30, 33, 39, 40], but a few have also examined other drugs, such as the mu-opioid agonists alfentanil [41] or remifentanil [42] and the barbiturate methohexital [41, 42]. In general, these studies have demonstrated that drug self-administration procedures adhere well to the predictions of the matching law, which posits that the allocation of behavior between two response options will match the relative frequency of reinforcement associated with those options [30, 33, 39–41]. Moreover, as predicted, subjects choose reinforcers delivered under shorter (richer) VI schedules over those delivered under longer VI schedules. In contrast to concurrent VI:VI schedules, only one published study has examined the effects of manipulating the response requirement under a concurrent fixed-ratio (FR):FR schedule of choice between a drug and itself [31]. In that study, only one of the three monkeys was sensitive to FR manipulations, such that increases in the FR requirement decreased choice of the higher unit cocaine dose and produced a reciprocal increase in choice of the lower unit cocaine dose. Thus, choice behavior maintained under concurrent FR:FR schedules appears to be more quantal than choice behavior maintained under concurrent VI:VI schedules. Overall, this body of literature suggests that subjects prefer schedules of reinforcement that provide the higher frequency or probability of reinforcement.
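For reference, the matching relationship invoked in these studies can be written in its standard strict and generalized forms (the notation below follows the conventional behavior-analytic formulation rather than any equation reported in the cited studies):

$$\frac{B_1}{B_1+B_2}=\frac{R_1}{R_1+R_2}, \qquad \log\!\left(\frac{B_1}{B_2}\right)=a\,\log\!\left(\frac{R_1}{R_2}\right)+\log b$$

Here B1 and B2 denote responses (or time) allocated to the two options, R1 and R2 denote the reinforcement frequencies obtained from them, and, in the generalized form, a and b describe sensitivity to relative reinforcement and bias toward one option, respectively.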

2.3. Choice between Drug and Alternatives
2.3.1. Behavioral Economic Considerations

One method to understand how concurrently available reinforcers interact is to apply conceptual frameworks employed by behavioral economics [43, 44]. Based on economic theories, concurrently available reinforcers can interact in one of three ways [44, 45]. First, concurrently available reinforcers can function as substitutes, such that, as the price of one reinforcer increases, choice of that reinforcer decreases and is replaced by choice of the substitute. The perfect substitute for a commodity is itself, so the studies summarized above that considered choice between a drug and itself could be conceptualized as choice between substitutes. However, different commodities can also function as economic substitutes for each other. For example, potato chips and pretzels can function as substitutes, such that, as the price of potato chips increases, consumption of potato chips decreases and consumption of pretzels increases. Second, concurrently available reinforcers can function as complements; under this condition, as the price of one reinforcer increases, choice of that reinforcer and of its complement both decrease. For example, peanut butter and jelly can function as complements, such that, as the price of peanut butter increases, choice of both peanut butter and jelly decreases. Finally, concurrently available reinforcers can function as independents, such that changes in the price and consumption of one reinforcer have no effect on choice of an independent reinforcer. For example, peanut butter and shoes typically function as independents, such that, as the price of peanut butter increases and choice of peanut butter decreases, consumption of shoes is unlikely to change. The interaction between two concurrently available reinforcers will be an important consideration in the following sections. In general, alternative reinforcers used in studies of drug choice are expected to function as substitutes, but this is an empirical question.
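One conventional way to quantify these three interaction types is cross-price elasticity: the proportional change in consumption of one reinforcer when only the price of the other is changed (positive for substitutes, negative for complements, near zero for independents). The sketch below is a hypothetical illustration with invented numbers and is not an analysis taken from the cited behavioral economic studies.

```python
import math

# Illustrative sketch (hypothetical numbers): classifying two reinforcers by
# cross-price elasticity, i.e., the change in consumption of reinforcer B
# when only the price of reinforcer A changes.

def cross_price_elasticity(qb_before, qb_after, pa_before, pa_after):
    """Change in log consumption of B per unit change in log price of A."""
    return (math.log(qb_after) - math.log(qb_before)) / (math.log(pa_after) - math.log(pa_before))

def classify(elasticity, tol=0.1):
    if elasticity > tol:
        return "substitute"
    if elasticity < -tol:
        return "complement"
    return "independent"

# Doubling the "price" (e.g., response requirement) of reinforcer A:
print(classify(cross_price_elasticity(qb_before=10, qb_after=18, pa_before=2, pa_after=4)))  # substitute
print(classify(cross_price_elasticity(qb_before=10, qb_after=6,  pa_before=2, pa_after=4)))  # complement
print(classify(cross_price_elasticity(qb_before=10, qb_after=10, pa_before=2, pa_after=4)))  # independent
```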

2.3.2. Drug versus Other Drugs

To ascertain the relative reinforcing efficacy of two different drugs, a choice procedure can be arranged in which each response option produces a different drug. As stated earlier, a significant advantage of choice procedures is that the primary dependent measure (behavioral allocation or choice) is less confounded by reinforcement-independent rate-altering drug effects. Critical factors to consider when assessing the relative reinforcing efficacy of two different drugs are dose, pharmacokinetics, and pharmacodynamics. The entire literature on choice between two drugs has employed cocaine versus “drug X” choice procedures, with “X” being the dopamine (DA) uptake inhibitor methylphenidate [10, 31], the sodium channel blocker procaine [46], the monoamine releaser cathinone [47], DA agonists [48], nicotine [49], the DA uptake inhibitor 2β-propanoyl-3β-(4-tolyl)-tropane (PTT) [50], or the mu-opioid agonists remifentanil [51] and heroin [52]. In general, these studies have reported that as the dose of the alternative drug reinforcer increased, cocaine choice decreased. Such results are consistent with the conclusion that the alternative drug functioned as a substitute for cocaine. However, there were two notable exceptions. When the choice was between cocaine and procaine [46] or between cocaine and nicotine [49], cocaine choice predominated despite increasing doses of procaine or nicotine or decreasing doses of cocaine. Moreover, only one study has explicitly examined whether two drugs function as substitutes, complements, or independents. In monkeys choosing between cocaine and the mu-opioid agonist remifentanil, remifentanil was found to function as a behavioral economic substitute for cocaine [51]. Overall, drug versus drug choice procedures can be useful for assessing the relative reinforcing efficacy of two different compounds, and these procedures may also hold utility in abuse liability testing.

2.3.3. Drug versus Nondrug Reinforcers

Analysis of the choice studies in Table 1 revealed that 61% (41/67) used a nondrug alternative reinforcer, with 93% (38/41) of these studies using food and 7% (3/41) using saccharin or sucrose as the alternative. In the first drug versus nondrug choice procedure, rhesus monkeys were allowed to choose between cocaine injections and food in a closed economy, such that no other food source was available outside of the choice procedure [53]. Over the 8 experimental days, monkeys almost exclusively chose cocaine over food despite body weight decreases of 6 to 10%. More recent studies have utilized other experimental designs to evaluate choice between multiple cocaine doses and food using a within-session choice procedure [17, 54–56]. For example, Figure 1 demonstrates that a complete cocaine versus food dose-effect function can be determined within a single daily experimental session. Cocaine choice increased as a monotonic function of unit dose, demonstrating that the relative reinforcing efficacy of cocaine versus food is dose-dependent. Furthermore, this monotonic increase in cocaine choice was in contrast to rates of responding, which displayed the prototypic inverted U-shaped dose-effect function. Thus, the example shown in Figure 1 clearly demonstrates the dissociation between dependent measures of behavioral allocation (choice) and behavioral rate discussed above as a main rationale for the use of drug choice procedures.
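The sketch below illustrates the logic of such a within-session design: successive components pair ascending unit cocaine doses against a fixed food alternative, and percent cocaine choice is computed per component. The doses, choice probabilities, and trial counts are arbitrary placeholders, not the parameters or data underlying Figure 1.

```python
import random

# Hypothetical sketch of a within-session choice design: each component pairs
# a different unit cocaine dose against a fixed food alternative, and percent
# cocaine choice is computed per component. All values are invented.

UNIT_DOSES = [0.0, 0.0032, 0.01, 0.032, 0.1]   # mg/kg/injection, example values
TRIALS_PER_COMPONENT = 10

def simulated_p_cocaine(dose, dose_for_half_max=0.01):
    """Toy dose-dependent probability of choosing cocaine over food."""
    return dose / (dose + dose_for_half_max) if dose > 0 else 0.05

for dose in UNIT_DOSES:
    p = simulated_p_cocaine(dose)
    cocaine_choices = sum(random.random() < p for _ in range(TRIALS_PER_COMPONENT))
    pct = 100.0 * cocaine_choices / TRIALS_PER_COMPONENT
    print(f"{dose:.4f} mg/kg/inj: {pct:.0f}% cocaine choice")
```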

As was evident in the drug versus drug choice procedures described above, the magnitude of the alternative nondrug reinforcer, the programmed schedule of reinforcement, and the delay to reinforcement were also important independent variables that could impact drug choice [37, 55, 57–60]. For example, increasing the magnitude of the alternative food reinforcer was shown to decrease cocaine choice [55, 57]. In another example, heroin versus food choice was decreased by increasing the intertrial interval [60]. These results suggest that under economic conditions in which access to reinforcers was restricted, baboons chose food over a low dose of heroin. Moreover, if the reinforcing value of food was decreased by providing supplemental access to food before the cocaine versus food choice procedure, cocaine choice increased [55, 59].

While most of the drug versus nondrug choice procedures discussed so far used nonhuman primates as research subjects, there is a small but growing body of literature on drug versus food choice procedures in rodents. For example, a recent study by Thomsen et al. [61] established a within-session cocaine dose-effect versus Ensure (a liquid food) choice procedure similar to the within-session cocaine dose-effect versus food choice procedure shown in Figure 1. Other rodent studies have demonstrated that introduction of an alternative nondrug reinforcer, such as a glucose/saccharin solution [62] or food [63, 64], will attenuate cocaine or methamphetamine choice. In contrast to these other rodent studies, rats choosing between cocaine injections and 0.2% saccharin never chose cocaine over the saccharin solution [65]. This latter result suggests that 0.2% saccharin is a very strong and highly preferred reinforcer in rats. Overall, this body of literature demonstrates that nondrug reinforcers can decrease drug choice, but that reinforcer selection, reinforcer magnitude, delay of reinforcement, and reinforcer access conditions are all key independent variables to be considered in drug versus nondrug choice procedures.

2.3.4. Drug versus Compound Consequent Stimuli

In general, two categories of studies have investigated drug choice in the context of compound consequent stimuli. One category has involved choices in which at least one option was a drug combined with another putative reinforcer (e.g., drug + drug or drug + food). For example, when contingencies were programmed such that one response option produced a high heroin dose and a second response option produced a lower heroin dose delivered in combination with food, monkeys reallocated their behavior away from the high heroin dose towards the lower heroin doses plus food [66]. Other studies have examined choice between (a) food and either cocaine alone or cocaine + a mu-opioid agonist or (b) food and either a mu agonist alone or a delta-opioid + mu agonist combination [67–70] and, in general, reported that the relative reinforcing efficacies of these drug mixtures were additive.

The other category has involved pairing one of the choices with a putative punisher, such as electric shock [71] or intravenous histamine [72, 73]. For example, Johanson [71] examined choice between cocaine alone and cocaine + electric shock. In all of these studies, the punisher was effective in decreasing choice of the reinforcer + punisher option and increasing choice of the alternative reinforcer [71–73]. However, effects of punishment on drug choice could be mitigated by increasing the drug dose associated with the punisher [71], decreasing the intensity of the punisher [72, 73], or increasing the delay between delivery of the reinforcer and delivery of the punisher [73]. Furthermore, pairing the punisher with the alternative reinforcer can also increase drug choice. For example, in a study of cocaine versus food choice, histamine injections paired with cocaine decreased cocaine choice, but if the histamine injections were paired with food, cocaine choice increased [72]. These findings highlight the potential for cocaine use to be influenced by environmental contingencies that may govern choice of nondrug alternative reinforcers. Overall, these results demonstrate the utility of concurrent schedules of reinforcement for understanding relatively sophisticated behaviors maintained by complex, compound consequent stimuli. Moreover, the use of choice procedures to understand abuse of multiple drugs, drug mixtures, and drug + other consequent stimuli is a scientific space that remains relatively unexplored.

2.4. Other Factors Affecting Drug Choice
2.4.1. Effect of Drug Dependence and Withdrawal

Most published studies examining effects of dependence and withdrawal on drug choice have focused on opioids [66, 74–78]. In contemporary experimental designs examining choice of mu agonists such as heroin or the short-acting opioid remifentanil, the total amount of opioid available during daily choice sessions is sufficiently limited to prevent development of significant opioid dependence. Under these nondependent conditions, opioid choice can be effectively reduced by treatment with opioid antagonists like naloxone [76]. However, if opioid dependence is established by chronic noncontingent opioid treatment or by permitting supplemental daily access to contingent opioid self-administration, then drug effects on opioid choice change dramatically. So long as dependence is maintained by opioid agonist exposure, opioid choice during choice sessions is generally maintained (when the alternative is food [76, 79]) or reduced in some subjects (when cocaine is the alternative [78]). However, either spontaneous withdrawal or antagonist-precipitated withdrawal produces robust increases in opioid choice, and this withdrawal-associated increase in opioid choice can be blocked by opioid agonists such as methadone, which are effective maintenance medications for treatment of opioid addiction [75, 76, 78, 79]. Overall, then, opioid withdrawal in opioid-dependent subjects increases opioid choice, and maintenance medications such as methadone can block this withdrawal-associated increase.

In contrast to the opioid dependence literature cited above, minimal research has examined the effects of dependence and withdrawal on drug choice maintained by other drug classes. Consistent with the opioid studies described above, Findley and colleagues [29] reported that withdrawal from the barbiturate secobarbital in secobarbital-dependent subjects increased choice of lower secobarbital doses versus food. However, similar results have not been demonstrated with cocaine. For example, Banks and Negus [80] evaluated cocaine versus food choice when subjects were exposed to and withdrawn from supplemental daily access to cocaine self-administration under conditions identical to those used to establish opioid dependence [76]. During supplemental cocaine access, daily cocaine intake increased more than 10-fold and was sufficient to disrupt performance during choice sessions; however, neither exposure to nor withdrawal from supplemental cocaine significantly altered cocaine versus food choice. The extent to which dependence and withdrawal alter choice of drugs from other classes, such as benzodiazepines, monoamine releasers, or N-methyl-D-aspartate antagonists, remains to be elucidated.

2.4.2. Effect of Pharmacological Variables

Another important body of literature has examined effects of pharmacological manipulations on drug choice, either to evaluate effects of candidate antiaddiction medications or to evaluate mechanisms of drug reinforcement. Effects of opioid agonists and antagonists on opioid choice in nondependent and opioid-dependent rhesus monkeys were described above, and additional studies have investigated potential mechanisms that may underlie withdrawal-associated increases in opioid choice. For example, opioid withdrawal functions as a stressor to activate endogenous release of the stress-related neurotransmitters dynorphin and corticotropin-releasing factor (CRF). These observations suggested the hypothesis that either dynorphin acting at kappa-opioid receptors or CRF acting at CRF1 receptors might contribute to withdrawal-associated increases in opioid reinforcement; however, neither the kappa antagonist 5′-guanidinonaltrindole nor the CRF antagonist antalarmin was as effective as morphine in blocking withdrawal-associated increases in opioid choice [79].

A total of 10 studies have investigated pharmacological modulation of cocaine versus food choice. In studies examining candidate “agonist” medications for cocaine addiction, DA-selective monoamine releasers, such as d-amphetamine and phenmetrazine, significantly decreased cocaine choice, whereas mixed DA-serotonin (5HT) releasers or 5HT-selective releasers did not [17, 55, 81]. Importantly, these studies demonstrated a selective decrease in cocaine choice without also decreasing rates of behavior, and a representative example of this selective effect during treatment with the DA-selective releaser m-fluoroamphetamine is shown in Figure 2. The atypical antipsychotic and DA D2 receptor partial agonist aripiprazole also decreased cocaine choice in rats after acute treatment, although this effect was not sustained during repeated treatment, and neither acute nor repeated aripiprazole altered cocaine choice in rhesus monkeys [61, 82]. Treatment with the benzodiazepine agonist diazepam also decreased cocaine choice in rats [83]. In contrast to these studies showing decreases in cocaine choice, cocaine choice was increased by treatment with a 5HT1A agonist, a kappa-opioid agonist, and high doses of dopamine receptor antagonists [55, 84–86]. Finally, other treatments that have failed to alter cocaine choice up to doses that suppress responding include methadone and lithium [18, 87]. Overall, this body of literature suggests that drug choice is sensitive to both acute and chronic pharmacological manipulations, that pharmacological effects on drug choice can be dissociated from other drug effects, and that there is a clear need for more research on the pharmacological mechanisms of drug choice.

One final point regarding the effects of pharmacological variables on drug choice is worth mentioning. An overarching rationale for preclinical studies investigating the pharmacological determinants of drug self-administration is the development of candidate medications to treat drug addiction. Moreover, a main goal of pharmacotherapy should be not only to decrease drug-taking behavior, but also to reallocate behavior to activities maintained by more adaptive reinforcers [88]. Thus, in preclinical studies that aspire to evaluate candidate medications for drug addiction, choice procedures can play a critical role in the preclinical drug development process to determine whether a given experimental manipulation produces this critical reallocation of behavior [19, 24]. Furthermore, human laboratory studies provide an additional critical step in drug development between animal studies and clinical trials, and human laboratory research to evaluate effects of candidate medications on drug self-administration relies exclusively on drug versus nondrug choice procedures [24]. Consequently, the use of choice procedures in preclinical studies may facilitate translation of results to choice procedures in human laboratory studies at this critical juncture in the drug development process, and existing data suggest excellent concordance between medication effects on drug choice in animals and humans [55, 85, 89–91].

2.4.3. Effect of Other Environmental Variables

An emerging body of research has also addressed the degree to which nonpharmacological environmental variables might alter drug choice. In one example, monkeys were housed in a social context to establish a dominant-subordinate hierarchy, and the effects of social rank on cocaine versus food choice were examined [92]. One main rationale for this study was that the initial differences between dominant and subordinate monkeys in cocaine-maintained responding under a simple FR schedule disappeared over time as the cocaine self-administration history progressed. Under the concurrent FR:FR schedule of cocaine and food reinforcement, the differences between dominant and subordinate monkeys were recaptured. Furthermore, cocaine choice in socially housed monkeys was decreased by environmental manipulations that were perceived to be enriching (increased cage size), whereas cocaine choice was increased by environmental manipulations that were perceived to be stressful (exposure to a rubber snake) [56]. Another study examined the effects of ambient temperature on choice between 3,4-methylenedioxymethamphetamine (MDMA) and food in rhesus monkeys [93]. Compared to room temperature, cool ambient temperatures decreased MDMA choice and warm ambient temperatures increased MDMA choice. Similar studies on these and other environmental variables will play a key role in future research to identify environmental mechanisms that may differentially affect the reinforcing strength of drugs and underlie vulnerability to or protection from drug addiction.

2.4.4. Effect of Subject-Related Biological Variables

Intravenous drug choice procedures have been conducted in various species, including rats [62], squirrel monkeys [82], cynomolgus [92] and rhesus [29] macaques, and baboons [66]. However, there is substantial opportunity for more systematic research on the role of these and other subject-related variables, such as sex, genotype, or physiological state. To the best of our knowledge, there is only one example in which a subject-related variable was manipulated in the context of drug choice. In this study, exogenous thyroid hormone was administered to induce a hyperthyroid state during MDMA versus food choice [94]. Although thyroid hormone treatment enhanced the thermogenic effects of MDMA, this treatment did not significantly alter MDMA choice. Moreover, drug choice studies can be expected to contribute important insights that might not be apparent from single-response or active/inactive-response procedures. Specifically, as has been emphasized repeatedly above, drug choice is strongly determined by factors that influence the reinforcing strength of alternative reinforcers. Consequently, it should be anticipated that some subject-related variables would have profound effects on drug choice by modulating the reinforcing strength of alternative reinforcers while producing little or no direct change in the reinforcing strength of the drug.

3. Implications and Future Directions

Research on the reinforcing effects of drugs has been slow to adopt concurrent schedules of reinforcement; however, this paper has argued that choice procedures can play a useful role in preclinical research on drug reinforcement and the determinants of drug use. There are still critical gaps in our knowledge, and there remains much intellectual space to be explored. One future direction might be the establishment of drug versus nondrug choice procedures involving abused drugs other than the classical compounds cocaine and heroin. The degree to which drug choice can be maintained by other drug classes, such as benzodiazepines, cannabinoids, and nicotine, remains to be elucidated. Another future direction might be to examine the impact of nondrug alternative reinforcers other than food. Food is an easy alternative reinforcer to control in preclinical laboratories, but there are certainly other nondrug reinforcers (e.g., access to a receptive mate or to social interaction) that are available as research tools and have yet to be fully explored. As one example of a choice procedure using a nondrug alternative reinforcer other than food, one intriguing study examined effects of putative anorectic drugs on choice between food and visual access to a room containing other monkeys [95]. This type of social reinforcer has yet to be manipulated in studies of drug choice. Finally, a third potential future direction is the integration of drug reinforcers into decision-making “choice” tasks commonly used to assess cognitive function. For example, excessively steep delay discounting (i.e., impulsive choice) is a cognitive trait commonly linked to drug abuse [96, 97], and delay discounting is often assessed preclinically in assays that compare choice between a delayed high-magnitude reinforcer and an immediate low-magnitude reinforcer [98]. Strikingly, preclinical research with this type of assay has relied exclusively on food as the consequent stimulus. The introduction of drugs as reinforcers into delay discounting and other cognitive tasks may provide new insights into the relationship between cognitive function and drug use. Overall, the body of literature cited in this paper supports the notion that choice procedures can facilitate data interpretation by providing a rate-independent measure of drug reinforcement, improve concordance between preclinical and clinical studies in translational research, and provide experimental access to critical independent variables that influence drug choice and drug addiction in natural environments.
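For context, the delay-discounting assays mentioned above are typically summarized with the standard hyperbolic discounting function (this is the conventional formulation, not an equation taken from the cited studies):

$$V = \frac{A}{1 + kD}$$

where V is the present (discounted) value of a reinforcer of magnitude A delivered after delay D, and k is a subject-specific discounting parameter; steeper discounting (larger k) is the pattern most often associated with drug abuse.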

Acknowledgments

The authors would like to acknowledge funding from the National Institute on Drug Abuse, National Institutes of Health under Grants R01-DA026946 and R01-DA031718.