Abstract

The risks of genetically modified organisms (GMOs) are traditionally evaluated by combining hazard identification and exposure estimates to provide decision support for regulatory agencies. We question the utility of the classical risk paradigm and discuss its evolution in GMO risk assessment. First, we consider the problem of uncertainty by comparing risk assessment for environmental toxins in the public health domain with that for genetically modified organisms in the environment; we use the specific comparison of an insecticide to a transgenic, insecticidal food crop. Next, we examine normal accident theory (NAT) as a heuristic for considering runaway effects of GMOs, such as negative community-level consequences of gene flow from transgenic, insecticidal crops. These examples illustrate how risk assessments are made more complex and contentious by both their inherent uncertainty and the inevitability of failure beyond expectation in complex systems. We emphasize the value of conducting decision-support research, embracing uncertainty, increasing transparency, and building interdisciplinary institutions that can address the complex interactions between ecosystems and society. In particular, we argue against black boxing risk analysis, and for a program to educate policy makers about uncertainty and complexity, so that eventually decision making is not a burden that falls upon scientists alone but is assumed by the public at large.

1. Introduction

Public debates about the environmental risks of genetically modified organisms (GMOs) and their products have much in common with those involving the prospective risks of other advanced technologies. For example, in controversies involving toxins in the environment, as with GMOs, the possibility of quick and easy scientific risk assessments has been undermined by inadequate data, contentious political-economic contexts, and emotional, often passionate responses by the various stakeholders [1, 2]. Also, in discussions about the safety of large technological systems, as with GMOs, questions of organizational complexity make easy resolution a difficult proposition [3–6].

The basis of this essay is a series of conversations between the two authors—respectively, a social scientist who studies the organizational and institutional contexts of risk and disasters; and an ecologist who investigates ecological risks of GMOs. Over the course of these interactions, we realized that practitioners and scholars involved with managing the risks of GMOs might find the “lessons” afforded by the social scientific literature on risks useful and relevant. This essay explores two such resonances. Firstly, we explore what GMO risk analysts can learn about the limits of the classical risk paradigm [7]. Secondly, we discuss the relevance, for GMO risk management, of the literature on risk in complex systems.

2. The Classical Risk Paradigm

An excellent definition of the classical risk paradigm is provided by the US Environmental Protection Agency (EPA), which describes it as two interrelated processes: risk assessment and risk management [8]. The purpose of risk assessment, according to the US EPA, is to “evaluate the degree and probability of harm to human health and the environment from stressors such as pollution or habitat loss” [8]. The purpose of risk management is to identify and prioritize environmental risks, and then to coordinate an economically optimal application of resources “to sustainably minimize, monitor, and control the [impact of] adverse events or to maximize the realization of opportunities” [9]. Risk management, according to the US EPA, is based on the results of the risk assessment, as well as on social and economic factors. Moreover, according to EPA procedures, risk management actions must “then be monitored so that any necessary adjustments can be made” [8]. Risk management, thus, is an iterative process that draws on risk assessments to minimize environmentally undesirable outcomes.

In 1983, the National Research Council (NRC) formulated a risk assessment procedure by outlining the following four steps:

(i) exposure assessment: describing the populations or ecosystems exposed to stressors and the magnitude, duration, and spatial extent of the exposure;
(ii) hazard identification: identifying adverse effects (e.g., cancer, short-term illness) that may occur from exposure to environmental stressors;
(iii) dose-response assessment: determining the toxicity or potency of stressors;
(iv) risk characterization: using the data collected in the first three steps to estimate and describe the effects of human or ecological exposure to stressors [8].
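As a concrete, simplified illustration of how these four steps combine in practice, the sketch below (Python, with invented numbers) computes a screening-level hazard quotient: an exposure estimate (step (i)) divided by a reference dose derived from hazard identification and dose-response data (steps (ii) and (iii)), yielding a crude risk characterization (step (iv)). This is a sketch of one common screening convention, not the NRC procedure itself.

```python
from dataclasses import dataclass

@dataclass
class StressorAssessment:
    exposure_dose: float   # step (i): estimated intake, mg/kg body weight/day
    reference_dose: float  # steps (ii)-(iii): dose below which no adverse
                           # effect is expected, mg/kg body weight/day

def hazard_quotient(a: StressorAssessment) -> float:
    """Step (iv) in its simplest screening form. A quotient above 1 flags
    potential concern; it is a ratio, not a probability of harm."""
    return a.exposure_dose / a.reference_dose

# Invented numbers for illustration only.
print(hazard_quotient(StressorAssessment(exposure_dose=0.004,
                                         reference_dose=0.02)))  # -> 0.2
```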

In the ensuing three decades, this process has evolved. Crucially, the 1998 EPA Guidelines for Ecological Risk Assessment [10] represent a sophisticated approach to evaluating uncertainties. These risk assessment guidelines recognize and attempt to address the fact that ecological risk characterization is complex and difficult. Departing from the classical risk paradigm in this regard, the EPA ecological risk guidelines point out that “risk characterization includes a summary of assumptions, scientific uncertainties, and strengths and limitations of the analyses. The final product is a risk description in which the results of the integration are presented, including an interpretation of ecological adversity and descriptions of uncertainty and lines of evidence” [10]. Further, the EPA guidelines acknowledge that “descriptions of the likelihood of adverse effects may range from qualitative judgments to quantitative probabilities. Although risk assessments may include quantitative risk estimates, quantitation of risks is not always possible. It is better to convey conclusions (and associated uncertainties) qualitatively than to ignore them because they are not easily understood or estimated” [10].

Compared to the 1983 NRC procedure, the 1998 risk assessment guidelines [10] are thus much more transparent about factors such as unclear communication, descriptive errors, variability, data gaps, uncertainty about a quantity’s true value, model structure uncertainty (process models), and uncertainty about a model’s form (empirical models). Moreover, they are more explicit and specific in recognizing and considering social, economic, political, and legal concerns in the risk management process. Consider, for example, the following extract from Section 6 of the 1998 guidelines: “risk managers need to know the major risks to assessment endpoints and have an idea of whether the conclusions are supported by a large body of data or if there are significant data gaps. Insufficient resources, lack of consensus, or other factors may preclude preparation of a detailed and well-documented risk characterization. If this is the case, the risk assessor should clearly articulate any issues, obstacles, and correctable deficiencies for the risk manager’s consideration” [10].

Without doubt, the 1998 guidelines mark a significant improvement over their predecessor in being nuanced, aware, and reflexive. This evolution in perspective, approach, and method mirrors that in other areas, such as public health. And yet, as the next section will elaborate, the experience with risk assessment in public health suggests that such sophisticated insights are, in practice, difficult to implement.

3. The Limits of the Classical Risk Paradigm

The classical risk paradigm is elegant in principle. It is science based and neatly separates factual characterizations and assessments of risk from the evaluative and normative processes that address issues of control and management. In practice, however, the paradigm has important limitations, some of which were recognized at the very outset. For example, in an article reflecting on EPA’s approach to risk management, the agency’s first administrator, William Ruckelshaus, wrote that:

The relationships among basic science, applied science, and improvements in daily life are usually regarded as simple, being much like those [among] growing trees, cutting lumber, and building houses. This concept feeds the notion that when we want something from science, we can order it, as we order lumber to build the house. If there is not enough lumber, we can grow and cut more trees. It follows that there is no way to “manage” this orderly process so as to make it more efficient or more suitable to our current needs…Even though a scientific explanation may appear to be a model of rational order, we should not infer from that order that the genesis of the explanation was itself orderly. Science is only orderly after the fact; in process, and especially at the advancing edge of some field, it is chaotic, and fiercely controversial [11].

The reason that science can be chaotic and controversial lies in the relationship between scientific uncertainty, on the one hand, and the expectation of the public and policy makers that science will serve as a dispassionate arbiter, on the other. One way to appreciate this point is with the aid of Figure 1, based on a schematic from Douglas and Wildavsky [12]. In essence, they argue that when knowledge, as in scientific data, is certain and its interpretation uncontested within the scientific community, risk assessment is simply an exercise in “calculation”: applying mathematics to find solutions to technical problems. When the interpretive frameworks are uncontested but the data do not exist, the issue becomes one of research, again not an insurmountable problem. The EPA [10] includes methods to estimate the uncertainty associated with incomplete or imprecise data, such as calculating confidence intervals, using fuzzy set theory, or applying Bayesian statistics. However, the ecological risk guidelines do not explicitly acknowledge issues of consent. When the data exist but the methods of interpreting them are contested, for any number of reasons, the issue at stake is disagreement over how to interpret the data themselves or over the adequacy of those data for decision making. Finally, when both knowledge and consent are problematic, the issue becomes very difficult to address within the bounds of reason and often reduces to political haggling.
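To make the distinction between missing data and contested interpretation concrete, the following is a minimal sketch (Python, assuming SciPy is installed; all numbers invented) of one of the methods the guidelines mention: a Bayesian estimate of a hazard probability from sparse trial data. It shows how honestly reported uncertainty remains wide when the data are thin, which is a research problem rather than a consent problem.

```python
from scipy import stats

def hazard_posterior(k: int, n: int, a: float = 1.0, b: float = 1.0):
    """Beta-binomial update: k adverse outcomes observed in n trials,
    starting from a uniform Beta(1, 1) prior on the hazard probability.
    Returns the posterior mean and a 95% credible interval."""
    posterior = stats.beta(a + k, b + n - k)
    return posterior.mean(), posterior.ppf(0.025), posterior.ppf(0.975)

# With only 10 trials and no observed harm, the interval stays wide:
mean, lo, hi = hazard_posterior(k=0, n=10)
print(f"mean={mean:.3f}, 95% interval=({lo:.3f}, {hi:.3f})")
# -> mean=0.083, 95% interval=(0.002, 0.285)
```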

To understand the full import of this argument, it is important first to identify the sources of uncertainty in GMO risk analysis. The following sources of uncertainty in determining the effects of environmental toxins on human health involve incomplete knowledge and illustrate some of the fundamental issues that have emerged from the social science literature on uncertainty in risk assessments.

(1) Body’s Past Exposures [13]. When medical histories of people do not take into account the full extent of their exposures to environmental toxins, vital knowledge may be inaccessible. Consider, for example, a scenario in which common household chemicals are not recorded. In cases like this, some illnesses that stem from exposure to environmental stressors prove difficult to diagnose and treat, because tests or diagnostic procedures arguably cannot account for the cumulative effect of prior stressors.

(2) Dose-Response Relationships (e.g., [14]). The problem of determining valid dose-response curves is among the most difficult and controversial issues in risk analyses (see the sketch after this list). As some public health sociologists have argued, “threshold levels of exposure are difficult to accurately assess in terms of health and safety, because the relationship between dosage and response or probability of harm is rarely linear, with patterns varying from exponential, to asymptotic to parabolic curves. Extrapolation of data from proxy test subjects may add another layer of uncertainty” [15].

(3) Synergistic Effects, and Etiological and Diagnostic Uncertainties [15]. The question of how the various chemicals ingested or stresses imposed on human bodies combine can be a source of disagreement among scientists. Are the effects of, say, carcinogens such as those in tobacco smoke, asbestos, and radon additive or synergistic in producing lung cancer? Understanding such interactions among stressors has proven difficult, at times intractable, in toxicology. Again, there is considerable difficulty, if not impossibility, in documenting conclusively that a specific disease is caused by exposure to specific environmental toxins, an issue referred to in the literature as “etiological uncertainty.” A related issue is diagnostic uncertainty, which stems from the fact that physicians typically possess neither the requisite technology nor the interdisciplinary expertise to make the link between exposure to adverse environments and a specific disease. In both cases, the problem of making such causal connections is exacerbated by subjective factors, such as broader belief systems about illness and the environment [15].
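The dose-response difficulty in item (2) can be made concrete with a small sketch (Python with NumPy; all parameter values invented). The log-logistic family below is one common way to model dose-response data; curves with different slopes can agree closely at tested mid-range doses yet diverge sharply at low doses, which is precisely where regulatory extrapolation usually has to operate.

```python
import numpy as np

def log_logistic(dose, bottom=0.0, top=1.0, ed50=1.0, slope=1.0):
    """Four-parameter log-logistic curve, a common dose-response model.
    Response runs from `bottom` to `top`, with half-maximal effect at ed50."""
    return bottom + (top - bottom) / (1.0 + (ed50 / dose) ** slope)

doses = np.logspace(-2, 1, 4)        # 0.01, 0.1, 1, 10 (arbitrary units)
for slope in (0.5, 1.0, 4.0):        # shallow through steep curves
    print(slope, np.round(log_logistic(doses, slope=slope), 3))
# All three curves pass through 0.5 at the ED50, but their low-dose
# predictions differ by orders of magnitude.
```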

Over and above these three types of uncertainty in determining the effects of environmental toxins on human health are a host of other problems related to conflicting or controversial analytical approaches. One such problem, and a common source of disagreement, relates to heuristic biases [16–21]. Other sources of contention are consequences of cultures and social organization [22]. Yet others are related to the interpretation of mathematical models: complex models are excellent analytic devices, especially in cases that are unique and unreplicable, or where experiments are not possible, but models also rely on assumptions that may not be valid or that may reinforce bias [22–30]. There is also a growing literature that draws attention to the circularity of models stemming from benchmarking rather than correlation with data [30–32]. Conflicting disciplinary perspectives can also produce uncertainty, as when experts who perceive phenomena at different scales (e.g., molecules versus ecosystems, or clinical versus epidemiological), who apply different methods, or who represent different cosmologies disagree on what data are relevant and how to interpret them.

The consequence of these uncertainties in public health risk assessment is stated succinctly in the following paragraph by Brown et al.:

“Without the benefit of exposure histories, accurate dose-response predictions, knowledge of synergistic effects, valid etiology models, and diagnostic capabilities, there is [a] considerable amount of guessing, speculation, and editorializing among both medical professionals and those whose lives are turned inside out by fear of environmental diseases” [15].

Arguably, there are clear parallels between risk assessments for public health and for ecosystems. Consider the elements of uncertainty listed in Table 1, analogous to those discussed above, comparing the public health effects of insecticide exposure with the environmental health effects of transgenic insecticidal plants.

There are three common reasons for lacking the data needed for GMO risk assessments. First, field tests with regulated GMOs suffer from practical constraints. For example, small-scale field experiments using initial transformants of a transgenic crop plant or its products constitute a prudent practice in risk assessment, but they do not supply the data required to characterize the risks of the marketable end product of subsequent crop breeding regimes [37]. The impracticality of testing subsequent transformation events in Bt-corn, for example, forces the assumption that the expression and consequences of a random introduction of the insect-toxicity transgene into a limited number of individual plants predict its expression and consequences in subsequent random introductions of this transgene into multiple commercial varieties. Another practical limitation is that the desired reduction of Type II error (failing to detect a significant effect when it is indeed present, and thus underestimating the probability of a hazard occurring, i.e., the risk) in risk assessment tests requires a large number of independent replications, ideally also representing a range of different environments and conditions (see the sketch below). When appropriate data involve rare events, adequate testing can be prohibitively expensive and open-ended. Thus, even when risk assessments are done on a case-by-case basis, they involve extrapolation that is difficult to explain and rely on tests that are incapable of detecting small effect sizes.
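The replication burden implied by Type II error control can be sketched quickly (Python standard library only; the effect sizes are illustrative). Using the usual normal approximation for a two-group comparison, the required number of independent replicates grows roughly with the inverse square of the standardized effect size:

```python
import math
from statistics import NormalDist

def replicates_per_group(effect_size: float, alpha: float = 0.05,
                         power: float = 0.8) -> int:
    """Approximate replicates per group for a two-group comparison,
    via the standard normal approximation, for standardized effect d."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return math.ceil(n)

for d in (1.0, 0.5, 0.2, 0.1):   # large through very small effects
    print(f"d={d}: ~{replicates_per_group(d)} replicates per group")
# d=1.0 needs ~16 replicates; d=0.2 needs ~393; d=0.1 needs ~1570.
# Small or rare effects quickly become prohibitively expensive to test.
```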

Second, field tests with regulated GMOs are not ethical without biosafety practices; yet these practices, such as containment or the use of unregulated surrogates, restrict realism by not taking into account the complexity of interactions, even as they increase precision. Conversely, realistic field testing with surrogates lacks precision. If these two approaches yield similar results, however, the data are more likely to be considered robust, and uncertainty is reduced. For example, plant fitness consequences of insect-resistance genes in wild relatives of Bt-canola indicate similar patterns for isolated transgenic plants and in situ surrogates [38, 39].

Third, because funding and personnel allocations for risk research are a very small fraction of what is available for the development of GMOs, comprehensive data for decision support may not be available. According to US Department of Agriculture (USDA) farm bill documents, at least $220 million is spent by the USDA annually on biotechnology-related research, while only $4 million, less than 2% of that sum, was available for USDA biotechnology risk assessment research grants in fiscal year 2012. Some of the uncertainty caused by lack of knowledge is thus the result of risk assessment research being undervalued. Although regulators can and do base determinations on “no evidence” of harm, it is especially important to distinguish between having data that demonstrate biosafety and not having data because the research is constrained or has not been conducted [40].

The issue of conflicts stemming from incongruent disciplinary perspectives is as true in the case of GMO risk assessment as it is in public health. The scope of knowledge may differ substantially among disciplines, which can have profound effects on the use of what has been termed “familiarity” in GMO risk assessment [41]. Because familiarity with an organism, the trait, and the accessible environment is derived by experts from preexisting knowledge, experimental results, and experience over time, expert judgments may vary. For example, in evaluating the uncertainty introduced with a new trait, a microbiologist who is intimately familiar with the culturing requirements, DNA sequence, and behavioral properties of a bacterium in vitro may evaluate data differently from a microbial ecologist who has investigated its interactions with other organisms in the field. Lack of consent among experts also arises from divisions over emergent properties in transgenic organisms. Central to this problem is the concept of substantial equivalence of GMOs to their unmodified counterparts.

Basing risk assessments on the similarity of a genetically modified food to existing unmodified products used as foods or food components implies the absence of emergent properties that might pose additional risk to the consumer [42]. Thus, the use of substantial equivalence as a framework for collecting and interpreting decision-supporting risk assessment data is contentious not so much because of disagreements about the accuracy of the data as because of disagreements about what constitutes suitable equivalence. For example, a safety assessment based on separate toxicity tests of the unmodified organism and an expressed trait, such as a soybean cultivar and the enzyme that confers herbicide tolerance, makes basic assumptions about emergent qualities and consequent farming practices. In the case of insect-resistant Bt-crops, nontarget toxicity tests on isolated, surrogate proteins are extrapolated to indicate biosafety levels for the whole organism, including the indirect, delayed, and cumulative effects it may have in the environment [33]. Experts have little consensus about substantial equivalence, whether from a statistical [43], physiological [44], or ecological/evolutionary perspective [35]. Uncertainty in the risk assessment process for GMOs, then, like that articulated by public health researchers, can be attributed to measurement errors, bias related to the conditions of observation, inadequacies of models [45], and matters of consent (Figure 1).

4. Risk and Complex Systems

A new field in risk studies, called normal accident theory (NAT), emerged after the Three Mile Island nuclear accident in 1979. The concept of “normal accidents” [4] was introduced by Charles Perrow while analyzing the organizational and institutional factors underlying industrial accidents. Perrow began by defining two characteristics of large technological systems such as chemical or nuclear plants. The first of these, “interactive complexity,” refers to “a systemic characteristic in a technological system with a lot of components (parts, procedures, and operators), wherein two or more failures among components interact in some unexpected way. For example, when [one component] fails, [another] would also be out of order and the two failures would interact so as to start a fire and put out the fire alarm” [4].

Perrow’s second term, “tight coupling,” refers to “another systemic characteristic in a technological system wherein one event or process affects another event or process directly and quickly, thus making human intervention difficult when something goes wrong” [4]. Perrow argued that when interactive complexity and tight coupling, which are characteristics of the technological system, inevitably combine to produce an accident, such an accident is not “accidental” [4]. According to the Oxford English Dictionary (OED), an accident, in Aristotelian thought, is “a property or quality not essential to a substance or object; something that does not constitute an essential component or attribute” [4]. The OED also defines an accident as “something that happens, by chance or without expectation; an event that is without apparent or deliberate cause” [4]. When an accident is a consequence of characteristics of the system, Perrow argues, it is anything but unpredictable, unusual, or a result of an unknown cause. It is, in a clearly comprehensible sense, “normal.” This way of analyzing technology has normative consequences: if potentially disastrous technologies, such as nuclear power or biotechnology, cannot be made entirely “disaster proof,” we must consider abandoning them altogether because they are, according to Perrow, not worth the risk.

The NAT analysis, however, has its detractors. The most important of these is the high reliability organization (HRO) approach, which argues that many ostensibly highly vulnerable systems are in reality very robust and reliable, and do not fail often. In essence, HRO theorists claim that it is possible to create cultures of high reliability in decentralized and continually practiced operations, and to build multiple levels of redundancy that make systems and organizations safer. Although NAT theorists counter that redundancy systems in complex organizations can hide system failures and human errors, their HRO interlocutors argue that complexity, in effect, affords opportunities for variations in actions and “allows for multiple strategies of resilience” [46–52].

While this debate rages, it is easy to see its broader relevance for GMO risks. At the very outset, it is worth recognizing that inherent complexity, rare events, and thereby novelty, when GMOs are released into the environment, might combine to cause a runaway, irreversible outcome. For example, in the case of GM crops, Bergelson and Purrington [37] discussed the process by which a related, noncrop plant could become more weedy or invasive through the acquisition of transgenes from its crop relatives, thus creating what has been termed a superweed: a genotype that is no longer controlled by conventional methods. Their analysis requires that there be a conduit for the introduced gene to enter the wild population. This could happen either via transgenic crop-wild hybridization and production of fertile offspring (e.g., wild and crop cotton species) or simply if a weed and the crop are the same species (e.g., Brassica rapa). The next step would involve a fitness advantage, conferred on the weedy plant by expression of the transgenic trait, that allows the transgene to spread through the weed population over multiple generations. Finally, this alteration of the genetic composition of the weed population must be such that it enables the weed to increase in density or to expand in geographic range. All three of these “coupled” steps are required for a weed problem to be created. Yet once the rare event (hybridization) occurs, the process can be impossible to reverse at the landscape level (see the sketch below). Hilbeck [53] cautioned, for example, that even the most aggressive monitoring programs may miss the detection of environmental problems in time to avoid long-term or irreversible harm. Once introduced, organisms can continue to increase exponentially, resist management, and lead to substantial losses of biodiversity or sustained negative economic consequences. While this example is not a textbook case of a normal accident, for it is not clearly evident that the consequences are tightly coupled, it does serve to illustrate the potential “normality” of undesirable outcomes in complex systems.
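A toy model makes the one-way nature of this process vivid. The sketch below (Python; every parameter value is invented, not an estimate for any real crop) iterates the standard one-locus haploid selection recursion after a rare hybridization event seeds a transgene at a tiny frequency:

```python
def transgene_trajectory(s: float = 0.1, p0: float = 1e-4,
                         generations: int = 100) -> list[float]:
    """Transgene frequency under a relative fitness advantage s, starting
    from a rare introduction at frequency p0. Illustrative values only."""
    freqs = [p0]
    for _ in range(generations):
        p = freqs[-1]
        freqs.append(p * (1 + s) / (1 + s * p))  # standard selection step
    return freqs

traj = transgene_trajectory()
print(f"gen 0: {traj[0]:.4%}  gen 50: {traj[50]:.2%}  gen 100: {traj[100]:.2%}")
# A 10% advantage turns a 0.01% introduction into a majority genotype
# within ~100 generations, and nothing in the dynamics runs in reverse.
```

Each term corresponds to one of the three coupled steps above: p0 is the hybridization conduit, s is the fitness advantage, and the rising frequency is the population-level change that monitoring would have to catch in time.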

Another theme in normal accident theory, unforeseen problems, arises whenever a regulatory assessment of GMOs, given systemic complexity, fails to address all of the possible sources of failure or routes to hazardous outcomes. An early example of such an accidental outcome was the inadvertent allowance of the expression of Bt endotoxins in corn pollen, which was then deposited by wind onto the host plants of insects previously assumed to be safe from harm because they do not feed on corn plants. Environmental impact assessments and regulatory guidelines took no account of the fate of toxic pollen and thus did not foresee any potential hazard to organisms that feed on other plants in the agroecosystem [54].

An additional source of unforeseen problems is human error farther along the process chain. Perrow [4] uses a wide range of examples to illustrate exactly how complex mechanical and human failures can interactively combine. He argues that if the system is sufficiently complex (an enormous number of nodes connecting things), everything may work just fine, but under some (presumably rare) combinations of interactions, there can be a failure simply because no designer (let alone operator or monitor) could have anticipated this set of combinations. Because of the tight coupling of systems such as refineries (no slack, no way to reverse or stop a process, no substitutions possible, etc.), the failure will cascade and bring down the system or a major part of it. Subsequent investigations will neither reveal the cause of the failure nor make similar failures unlikely, because nothing actually failed, though one might say the designer failed to take everything into account ahead of time. This kind of analysis is relevant to GM crop and food risk assessments because of the complexity of the production, processing, and delivery system, which involves a number of natural and human factors, including cultural and institutional ones. Historically, there have been a number of unforeseen mistakes in the manner in which Bt crops have been handled. For example, regulated Bt-corn (StarLink™), approved only for animal feed, was erroneously mixed in with corn meant for human consumption; it has since been found in processed foodstuffs in other countries [55]. Bt mustard weeds have been discovered along roadsides where loosely contained Bt crop seed was dispersed by trucks far from the region in which the crops are grown [56–58]. Transgenes have also been detected in crop centers of origin [59] when Bt crop seed is exported as grain for food or sold on the black market. In each of these cases, regulatory solutions can be found in hindsight. However, the sheer complexity of the natural-human-regulatory-institutional system implies that there are likely to be several other potential risks as yet undiscovered or not thought about.
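Perrow’s point about unanticipated combinations is ultimately combinatorial, and a back-of-the-envelope sketch (Python standard library) shows why enumeration fails: the number of possible interaction channels among components grows far faster than any designer’s checklist can.

```python
from math import comb

# Pairwise and three-way interaction channels among n components.
for n in (10, 100, 1000):
    print(f"{n} components: {comb(n, 2):,} pairs, {comb(n, 3):,} triples")
# 10 components: 45 pairs, 120 triples
# 1000 components: 499,500 pairs, 166,167,000 triples
```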

It is this combination of multiple pathways for risk and potentially catastrophic outcomes (including the breakdown of ecosystems, among other consequences) that makes it important for analysts of GMO risk to take into account the significance and import of both NAT and HRO theories. It could be argued that NAT might not be directly applicable to GMO risks, primarily because of the difficulty of applying the concept of tight coupling to landscape- and ecosystem-level changes over varying periods of time. Yet the analogues, even if not exact, can be disturbing. It can also be claimed that it is possible to build highly resilient systems to manage GMO risks. However, this is easier said than done, for all the aforementioned reasons, especially as GM crops enter the next generations of stacked genes for multiple agronomic traits, foods, and pharmaceutical products.

5. Costs versus Benefits

It is reasonable to contend that regardless of all these considerations of uncertainty in risk assessments, risk acceptance is subject to cost-benefit tradeoffs. It has been argued that if a GM product has little demonstrable benefit, even the minimal risk associated with the product would not be acceptable to the general public and society; on the other hand, if it has tremendous demonstrable benefit, some level of risk (even with uncertainty) would be acceptable. However, risk perception is more nuanced in practice. Firstly, the perception of risk is difficult to model according to rational behavioristic assumptions [60, 61]. For example, stigma can skew an otherwise safe prognosis and its perception [62]. Conversely, positive bias can make people risk tolerant in a manner not warranted by the data alone. Secondly, it is extremely difficult to grasp, let alone compute rigorously, the will of “the general public and society.” Although the EPA guidelines on GMO risk assessment [10] include consultation with “interested parties” in some parts of the process, the fact that public controversies about GM regulations have not ceased in the United States, Europe, and elsewhere indicates that not everyone has their views represented adequately. While it is plausible to argue that such dissenters are irrational or biased, it is equally plausible to allege that processes of consultation are not transparent, especially over the question of who bears the costs and who benefits. It is, therefore, difficult to decouple bias from cost-benefit analysis, especially in cases, such as GM regulation, that are complex for all the other reasons we have discussed. The NRC 2002 report [40] recognizes at least a part of this problem when it argues that using only present conditions as a baseline for assessing risks (costs or benefits) is inadequate, because GM crops, for example, are only one of a number of alternative futures.

6. Conclusion

Where exactly to go with GMO risk management, given what we know about the limitations of conventional risk-analytical approaches and about complex systems, is up for debate, and reasonable people can disagree. Uncertainty, whether it involves unpredictable synergies among environmental toxins, side effects of medications, radically different future climate scenarios, or GMOs, is a problem that is difficult to address. It is unfair to expect scientists to solve the problem of uncertainty without providing the resources to conduct the research and collect the data necessary to address it. Ultimately, uncertainty is not just a problem for scientists but for societies at large, for it is the latter who stand to gain (and in some cases, to lose) from public policy decisions made on judgments about risk. It is, therefore, critically important that scientists and risk analysts, when they interact with lay publics and policy makers, explain the difficulties involved in producing risk estimates in the first place (e.g., [63]). Rather than sustain the fiction that science can be ordered off the shelf to understand GMO risks adequately, or that robust and reliable systems for risk management can be built easily, it might be more prudent to explain the complexities involved throughout the regulatory process. Such an approach, which we might call the public understanding of uncertainty (echoing the field of public understanding of science and science communication more generally), should also highlight the importance of providing resources to collect various kinds of risk data and of building interdisciplinary institutions that can iteratively understand and address the complex interactions between ecosystems and society. Rather than black box risk analysis, this approach allows for a more honest and transparent process for deciding what risks to accept, under what conditions, and on whose authority.

Acknowledgments

The authors are grateful to their colleagues and graduate students in the Department of Environmental Studies at UC Santa Cruz, especially Joy Hagen, Anna Zivian, and Karen Holl, and to special issue editor Joel Ochieng for helpful comments on earlier versions of the manuscript. A grant from the France-Berkeley Fund supported the conference where the draft paper was first presented and improved by insights from conference participants.