
Research Article | Open Access

Volume 2019 | Article ID 3758159 | 31 pages | https://doi.org/10.1155/2019/3758159

Mixing Dyadic and Deliberative Opinion Dynamics in an Agent-Based Model of Group Decision-Making

Academic Editor: José Manuel Galán
Received: 15 Dec 2018
Revised: 06 May 2019
Accepted: 10 Jun 2019
Published: 07 Aug 2019

Abstract

In this article, we propose an agent-based model of opinion diffusion and voting in which influence among individuals and deliberation in a group are mixed. The model is inspired by social modeling: it describes an iterative process of collective decision-making that repeats a series of interindividual influences and collective deliberation steps, and it studies the evolution of opinions and decisions in a group. It also aims at laying the foundations of a comprehensive model that describes collective decision-making as a combination of two different paradigms, argumentation theory and ABM-influence models, which are not straightforward to combine because a formal link between them is required. In our model, we find that deliberation, through the exchange of arguments, reduces the variance of opinions and the proportion of extremists in a population as long as not too much deliberation takes place in the decision processes. Additionally, if we define the correct collective decisions in the system in terms of the arguments that should be accepted, allowing for more deliberation favors convergence towards the correct decisions.

1. Introduction

In a group, opinions are formed over affinities and conflicts among the individuals that compose it. Axelrod [1], a pioneer in opinion dynamics, cast light on two key factors required to model the processes of diffusion, namely, social influence (i.e., individuals become more similar when they interact) and homophily (i.e., individuals interact preferentially with similar others). He was the first to show that a radical differentiation of culture in a group could emerge from simple imitation through dyadic interactions. His results suggested that interactions through homophily and social influence could lead to collective states or collective opinions whose explanation, in many situations, went beyond the individual or micro level. Further, it was found that these collective states could be characterized by quantities like statistical distributions and averages, which explains why, in recent years, a growing body of research has endeavored to identify the conditions under which social influence at the micro (dyadic) level translates into macropatterns of diffusion through repeated iterations [2]. In particular, several models have been developed to reproduce the emergent properties of opinion diffusion, which may be classified into two groups: on the one hand, the discrete opinion models, where opinions, or other ontological equivalents, take discrete values [3, 4]; on the other, the continuous opinion models, where opinions are represented by real numbers [5–10].

In the context of continuous opinion dynamics introduced by Deffuant et al. [6], and later extended by other authors to include network effects [10, 11], trust [12], and many other social phenomena [7], individuals meet in random pair-wise encounters and then converge to a common opinion if and only if their respective opinions are sufficiently close to each other, in a kind of bounded confidence mechanism based on confirmation bias. After some transient evolution and social dynamics, this leads to final states in which either full consensus is reached or the population splits into a finite number of clusters such that all individuals in one cluster share the same opinion. So far, these models have been mostly applied to political issues such as societal cleavages and the emergence of extremism.

However, these representations of social interactions in opinion dynamics fail to take into account the everyday communication settings that characterize democracies, such as meetings, debate arenas, and the media, in which individuals exchange points of view and can influence one another in a collective manner. In effect, individual actions that translate into collective decisions, such as voting, are shaped by factors related to the structure and size of the channels of communication and deliberation. When a group engages in a collective discussion, group size, what arguments are advanced, how discussion is organized over time, and the acceptability criteria for proposals may lead to a transformation of preferences [13] and play a crucial role in consensus formation [14–16]. For instance, the authors of [14] argue that deliberation polarizes individuals in the direction of their initial opinions due to social pressure and to limited knowledge (a biased or unbalanced argument pool) within the discussing group. In contrast, from several empirical studies of deliberation processes, other authors infer that deliberation can have a stabilizing or moderating effect on opinions [16, 17], which they interpret as opinions becoming more informed [18], balanced, and/or confused [13]. The authors of [15] argue that deliberation may encourage moderate opinion consensuses if it is procedural, or be polarizing if it ego-involves the participants. All these phenomena may introduce some degree of discrepancy into the otherwise well-known steady states of opinions (clusters) obtained in classical opinion dynamics and should be modeled.

Opinion diffusion has also been used to track convergence towards “correct” or accurate opinions in groups. Although the authors of [19] study how interactions among agents diffuse true information, they focus their analyses on network effects and noisy signaling, not on deliberation protocols. In [20], Rouchier and Tanimura explore the diffusion of information about an exogenous “true state” of the world (represented by bits) through social interactions and learning, but assume the interactions to be only dyadic and not deliberative. In our context, a correct decision corresponds to one derived from a dialectical situation in which all the arguments for and against a proposed alternative are taken into account [15]. The existence of an ideal criterion of correctness of collective decisions can be used to evaluate the outcome of deliberative procedures and democratic decision-making. Since deliberation imposes regulatory conditions on decision-making processes, one may attribute a rational and democratic value to the collective decisions obtained from it. Deliberation is a way of getting closer to the ideal state in which a group judges propositions as if it had all arguments at its disposal. In [18], Barabas shows empirical evidence that deliberation increases knowledge and is correlated to correct responses to objective questions. Up until now, opinion diffusion models have not taken this dimension into account, and models mixing deliberative and dyadic interactions may well do so. In the same manner, collective truth-seeking models can benefit from more insight into processes of peer-to-peer information diffusion like those studied in continuous opinion dynamics models.

From a social welfare perspective, a claim in favor of deliberation is that promoting dialogue leads to better decision-making, where “better” means improving social welfare. Preference structures that are rationally untenable or unjustifiable are eliminated outright from the pool of admissible preferences [21]. We argue that different ways of organizing deliberation, e.g., the structure and protocol of decision-making processes, may allow for more accurate collective decision-making.

In this direction, agent-based modeling proves to be an interesting method to study and observe the effect of deliberation on opinions. For one, it helps infer knowledge from models in which a multiplicity of different modes of communication among heterogeneous agents is analytically difficult to describe; second, given that empirically based conclusions on the topic may be costly and difficult to ascertain due to confronting ideologies and theories (see [15, 16]), it provides an alternative way of testing the effects of collective decisions and deliberation protocols on public opinion; and third, it furnishes an interesting modeling environment for collective choice analysis by making it possible to account for different levels of decision-making and a diversity of influence loops.

The aim of our model is to bridge the gap between deliberation and opinion diffusion. We model dyadic or ego-involved dynamics using an opinion diffusion model based on social judgment theory [9, 22], whereas formal or deliberative discussion is modeled using abstract argumentation theory [23–25]. We build a process of collective decision-making in which deliberation and voting are necessary conditions for collective choice, inspired by results from the deliberative democracy [16, 17, 26] and social psychology [14, 15] literature. We present the effects of deliberation on the opinions of a group and on its ability to correctly judge propositions. We call the latter the group’s judgment accuracy. We also observe how deliberation impacts a group’s ability to eventually vote in favor of proposals that are discussed and accepted during deliberation, in other words, the group’s coherence in decision-making. Furthermore, since a collective decision in the model is the outcome of a structured group decision-making process, a sequence of deliberative and dyadic interactions among agents, we seek to study the impact of its structure on the group’s opinions, judgment accuracy, and coherence. The frequency of deliberative interactions within a decision-making process, the size of deliberation (the number of agents that deliberate), and the majority voting rule used to determine the collective acceptance of proposals are considered to this end. Ultimately, we strive to create a tool that helps policy-makers analyze deliberation protocols and make studied decisions about them.

Our model allows us to explore a new paradigm in opinion diffusion and answer the following research questions concerning decision-making in groups:
(1) What effect can deliberation have on a group’s opinion distribution when the opinion dynamics are described using bounded confidence models? In what way are opinion dynamics through deliberation alone interesting?
(2) How do the structure and protocol of deliberative decision-making processes affect a group’s distribution of opinions, coherence, and judgment accuracy?
(3) In what way can controlling for protocol relate to social situations in which collective decisions are coherent and accurate?

Simulations show that deliberation yields, on average, qualitative loose consensus and group polarization while reducing the number of different opinion clusters over the distribution of opinions. In effect, our model shows that deliberation has a significant overall impact on the distribution of opinions (on its variance) and on the shifts in individual opinion. In particular, when specifying opinion dynamics as only deliberative, the proportion of extremists and the variance of opinions are lower and the shifts of opinions greater than in a nondeliberative, dyadic opinion dynamics model. However, when considering a mix of deliberation and dyadic interactions, parameters that promote bigger or more frequent deliberation during decision-making processes increase the variance of opinions and the proportion of extremists and limit the effect of deliberation on opinions. The majority voting quota rule to accept a proposal plays a preponderant role as it determines if the consensus-driving power of deliberation outweighs the propensity to dissensus observed in bounded-confidence models with rejection.

The model sheds light on the fact that a group’s coherence and judgment accuracy may depend significantly on how decision processes are structured. The number of debates allowed and the number of agents that may participate in them increase judgment accuracy in a marginally decreasing fashion but have little or no effect on the group’s coherence in decision-making. Last, we point out that results are conditioned on how many arguments agents have at their disposal and on how they advance them during deliberation.

The remainder of this paper is organized as follows: in Section 2, we present the model and provide some necessary basics to understand its implementation; in the subsequent section, we introduce the metrics of interest and the calibration of the model. In Section 4, we report and discuss the results of the simulations; and in Sections 5 and 6, respectively, we survey related work and we conclude the article.

2. A Model for Collective Decision-Making with Deliberation

We propose a system of collective decision-making among agents built from three types of objects: arguments, agents, and tables for deliberation. Agents have agency, arguments represent pieces of information, and tables validate collective decisions while organizing all deliberative interactions among agents.

2.1. Model Overview

In this section, we propose an overview of the model. We quickly introduce the agents and the objects that make collective decisions possible in the multiagent system. We also provide a basic interpretation of the collective decision-making procedure, which we illustrate with an example.

2.1.1. Overview of Agents and Objects in the Model

Arguments are objects that relate to each other by a defeat relation. They are characterized by their support of some value or principle. Agents are characterized by their sensitivity to deliberated ideas, by the arguments they possess, which reflect their opinions, and by their knowledge about arguments. They communicate one-to-one or collectively by dint of arguments in a public arena. Communication leads them to an eventual update of their opinions. Tables are entities that contain both agents and arguments. They are controlled by a central authority (CA) [27] that fixes the rules of the deliberation process, the frequency of deliberative interactions, and the conditions for collective acceptability of deliberated information. The existence of a central authority, namely, a person or a machine that can reveal the correct epistemic status of any set of arguments, can be equated to Habermas’ claim that the “unforced force of the better argument” will triumph in an ideal speech situation1. A central authority may also be associated with Rancière’s notion of “police” that he defines as “an order of bodies that defines the allocation of ways of doing, ways of being, and ways of saying, and that sees that those bodies are assigned by name to a particular place and task” [28] (p.29).

2.1.2. Overview of the Decision-Making Process

Let N be a group of agents that is asked to deliberate and vote on a proposal p. Given an argument a_p in favor of p, they have to judge whether p is desirable or not. The argument a_p, or proposal argument, determines how much the proposal supports a principle P or its opposite ¬P. A proposal is a sentence that indicates a way of attaining a goal or solving a problem. A principle derives from the notion of value: values seen as fundamental social or personal goods that are desirable in themselves [29]. Environmentalism and patriotism are examples of values; to choose proposals that minimize environmental impact or maximize welfare are examples of two not mutually exclusive principles.

Agents are assumed to decide on the acceptance of the proposal on the basis of their opinions or their adherence to the said principle, whether they argue formally or informally about p. When agents are not deliberating, they are subject to random pair-wise influence; when the deliberative exchange of arguments concludes, they are influenced by the acceptance status of the proposal that is being discussed. Only a fraction of all agents deliberate in each deliberative exchange; all agents are prone to pair-wise discussion. They vote for proposals according to their opinions. A proposal is accepted if and only if the argument that is given with it is accepted after deliberation and the proposal is voted for favorably by a majority of agents. To vote in favor of a proposal is considered to be equivalent to accepting the argument that comes along with it.

Example 1. P = “Protect the environment” is a principle; a proposal may be p = “Reduce carbon emissions by 2030 using electric cars.” To justify the proposal, one may advance the argument a = “Electric cars will reduce society’s dependency on fossil fuels and will result in a reduction of carbon emissions by 2030 while protecting the environment,” which expresses the degree to which the proposal is in line with environmental protection. An argument that tackles the proposal argument and that opposes the principle could be b = “Electric cars may protect the environment by reducing effective carbon emissions, but the batteries they depend on are very pollutant.” The proposal may be accepted in the deliberation arena if some other argument that defeats b, say c = “Battery recycling businesses are hatching everywhere. By 2020, it is very likely that scientists will find a viable solution to chemical pollution due to batteries,” is advanced. It is not accepted otherwise. If a is accepted, then p is collectively accepted if a majority of agents vote favorably for it.

2.2. Arguments as the Basic Units of Collective Discussion among Agents

In this subsection, we introduce the simplest object in our system: arguments. We present abstract argumentation theory and Caminada’s labeling approach to argumentation [30] that allows us to track the epistemic status of arguments during deliberation.

2.2.1. Arguments

Arguments are objects that represent pieces of information that agents can understand. They are informational cues that enable agents to discuss with one another in a public, collective context and to make decisions on the acceptability of other pieces of information. They are assumed to be nonfallacious. In our approach, each argument a is modeled by a real number v_a ∈ [−1, 1] that stands for how much a respects or supports the principle P. v_a = 1 means that argument a is totally coherent with the principle P, whereas v_a = −1 reads that a is totally incoherent with the principle P (equivalently, totally coherent with ¬P). Arguments relate to each other through an incompatibility relation that states that one cannot stand behind two conflicting arguments. Arguments are also characterized by their acceptability status, which indicates whether they are accepted, refuted, or undecided in a given discursive context. Arguments have an epistemic reach (ER), a maximum number of arguments they can attack. This can be interpreted as the argument’s level of generality or as the potential level of argumentative conflict within the argument pool.

2.2.2. Abstract Argumentation Theory

Deliberation, defined as an exchange of arguments, is modeled by confronting representations of different, eventually contending, arguments. In our model, we use abstract argumentation theory [23] to represent deliberation, where an argument is just a node in a graph, like arguments a, b, and c in Figure 1. Abstract argumentation theory models incompatibility between arguments and abstracts away from their internal structure. The intuition is that if an argument b attacks an argument a (represented by an arrow from node b to node a, as in Figure 1), a rational agent cannot accept both a and b.

Formally, let A be a finite set of arguments and R ⊆ A × A a binary relation over A called the attack relation. The attack relation is intransitive, and we note aRb the fact that argument a attacks or is incompatible with argument b. One says that an argument c defends an argument a if there exists an argument b such that cRb and bRa (see Figure 1). An argumentation framework AF = (A, R) is a digraph in which the nodes represent the arguments and the arcs represent the attacks among them. Given an argumentation framework, the classic problem in abstract argumentation resides in finding which of the arguments in A can be accepted, rejected, or left undecided.

In the labeling approach [30], a label L(a) of an argument a denotes the epistemic status of a. Intuitively, an argument is labeled in if it is justifiable and out if it is not. If a is labeled und, it is considered to be in abeyance due to, for example, insufficient grounds for it to be labeled in or out. Furthermore, given an argumentation framework AF = (A, R), one calls a labeling a total function L that assigns a label to each argument in A. A labeling is written as a triplet (in(L), out(L), und(L)), where in(L) = {a ∈ A : L(a) = in} stands for the set of arguments in A that are labeled in in the labeling L, and analogously for out(L) and und(L). A labeling is said to be legal if every argument it labels is legally labeled. An argument a is said to be legally labeled:
(i) in, if for every b such that bRa, L(b) = out: that is, if a is not attacked or is only attacked by arguments that are themselves labeled out;
(ii) out, if there exists b such that bRa and L(b) = in, that is, if a is attacked by at least one argument that is labeled in;
(iii) und, if there exists b such that bRa and L(b) = und, and there is no b such that bRa and L(b) = in; or equivalently, if there exists at least one argument labeled und that attacks a and there is no argument labeled in attacking a.

Roughly speaking, a semantics (denoted by σ) is a rationality criterion used to decide which arguments to accept given an argumentation framework. The basic normative requirements for labeling-based semantics and argument acceptability in abstract argumentation repose on conflict-free labelings, in which no two in-labeled arguments attack each other, and admissible labelings, which are conflict-free labelings that require arguments defended by the in-labeled arguments to be themselves labeled in. The family of admissibility-based labelings goes from complete labelings to preferred and grounded labelings, which are complete labelings that capture properties such as credulity and skepticism in argumentation. Formally, a labeling L for the arguments in an argumentation framework AF = (A, R) is said to be:
(i) conflict-free if there are no a, b ∈ in(L) such that aRb;
(ii) admissible if it is conflict-free and every in-labeled argument is legally in and every out-labeled argument is legally out;
(iii) complete if it is admissible and every und-labeled argument is legally labeled und;
(iv) preferred if it is complete and maximizes in(L);
(v) grounded if it is complete and maximizes und(L).
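To make the labeling machinery concrete, the following is a minimal sketch (ours, not an implementation from the article) of the grounded labeling of an argumentation framework: arguments whose attackers are all labeled out become in, arguments with an in-labeled attacker become out, and the procedure iterates to a fixed point, leaving the remaining arguments und. The function name and data layout are our own.

```python
def grounded_labeling(arguments, attacks):
    """Compute the grounded labeling of an argumentation framework.

    arguments: iterable of argument identifiers
    attacks:   set of (attacker, target) pairs
    Returns a dict mapping each argument to "in", "out", or "und".
    """
    attackers = {a: set() for a in arguments}
    for (b, a) in attacks:
        attackers[a].add(b)

    label = {}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in label:
                continue
            # Legally "in": every attacker is already labeled "out"
            # (vacuously true for unattacked arguments).
            if all(label.get(b) == "out" for b in attackers[a]):
                label[a] = "in"
                changed = True
            # Legally "out": at least one attacker is labeled "in".
            elif any(label.get(b) == "in" for b in attackers[a]):
                label[a] = "out"
                changed = True
    # Arguments never forced to "in" or "out" stay undecided.
    for a in arguments:
        label.setdefault(a, "und")
    return label
```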

We choose admissibility-based semantics because, for one, they supply a comprehensive model of collective reasoning; second, they allow for a meaningful parameterization of credulous and skeptical collective reasoning; and, last, they are relatively easy to interpret since the differences between them are well documented in the literature (e.g., [18]). Additionally, and in contrast to other rank-based or graded semantics in abstract argumentation, admissibility-based semantics assume that all arguments have the same weight.

2.2.3. Abstract Argumentation and Discursive Situations in Collective Decision-Making

Abstract argumentation is a convenient formalism for argument-based decision-making since it ignores difficulties relative to the nature, generation, and number of arguments and posits the possibility of using graph theoretic tools to model (collective) reasoning in a clear, coherent, and easy way [31]. Example 2 below provides a simple overview of a debate between two agents over the acceptability of the proposal “Tax the rich.”

Example 2. Let p = “Tax the rich” and P = “Liberalism.” Figure 1 represents the abstract argumentation framework obtained from the following arguments:
(i) a = “Only the rich should be taxed because they possess most of the capital in the country”;
(ii) b = “If you only tax the rich, the rich will leave and then you’ll have no one to tax”;
(iii) c = “The rich will not leave because they have their livelihoods here and it would cost them more to leave than to pay the taxes.”
Agent 1 advances the proposal argument a; agent 2 advances b, arguing that, given it, a is not tenable. Agent 1 defends her proposal by advancing c and defeating the argument b. The conclusion is that holding a and c is a tenable position, so the rich should be taxed. Notice that simply accepting a and taking no position on b or c is conflict-free and still corresponds to the position advocating that the rich should be taxed.
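As a quick check, Example 2’s framework (b attacks a, c attacks b) can be encoded with the sketch above; the grounded labeling accepts a and c and rejects b, matching the conclusion that the rich should be taxed.

```python
args = {"a", "b", "c"}             # a: tax the rich; b: the rich leave; c: they stay
atts = {("b", "a"), ("c", "b")}    # b attacks a, c attacks b
labels = grounded_labeling(args, atts)
print(labels)                      # a -> 'in', b -> 'out', c -> 'in'
```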

Abstract argumentation comes as an immediate application to our model in the construction of an ideal argumentation framework that loosely models Habermas’ ideal speech situation. The ideal speech situation is important in our work because it corresponds to a normative state that is, in practice, difficult to reach and that allows us to observe how different deliberation protocols affect deliberative outcomes.

In the model, a label given to an argument, provided a semantics σ, is said to be ideal if it is obtained from a state of affairs in which all arguments are presented during deliberation. We call the argumentation framework containing all arguments in the system and all consensual attacks among them the ideal or consensual argumentation framework. In a multiagent approach, a consensual attack between two arguments is a couple (a, b) such that aRb if and only if a certain majority of agents recognizes that such an attack exists. In our model, all agents agree on the attack relation over A. It follows that if two agents advance two distinct arguments a and b such that aRb, then all agents will recognize that such a conflict exists. An immediate consequence of this assumption is that all deliberated results are consensual, even if an opinion consensus2 on the principle is not necessarily reached.

2.3. Deliberative Social Agents

In this section we present the agents in our model. We define them on the basis of their opinions on the principle , the arguments they possess, their knowledge of the relationship among them, their behavior during deliberation, and their sensitivity to deliberated proposals.

2.3.1. Agents in Dyadic Social Interactions

Every agent i has an opinion o_i ∈ [−1, 1], a relative position or degree of adherence to a principle P, and a couple (u_i, t_i) of latitudes of acceptance and rejection, respectively, of informational cues. The latitudes live in [0, 2] since 0 and 2 are, respectively, the minimum and maximum distances between any two informational cues in the system. The idea behind the couple (u_i, t_i) is that there exist levels of relative tolerance from which informational cues have either an attractive or a repulsive effect on the individual [22, 32]. An o_i close to 1 implies that agent i fully supports the principle P, and an o_i close to −1 that she rejects the principle P or, equivalently, fully supports ¬P. Moreover, if an agent i’s opinion lies close enough to either extreme (|o_i| above a fixed extremism threshold), i is considered to have an extreme opinion, and a moderate one otherwise. u_i may be considered as i’s uncertainty about her own opinion [33] or as the limit below which the object she judges may attract her. t_i may be seen as i’s bound of tolerance from which informational cues disgust her and confirm her initial position. Different combinations of (u_i, t_i) can be associated with agent i’s ego or personal involvement in discussion processes, as vividly explained in [32]. Finally, agents are assumed to be sincere and precise when communicating their positions to one another (no noise in the interactions).

2.3.2. Agents in Deliberative Social Interactions

Agents vote and participate in deliberation because they are aware of the potential changes an accepted proposal may induce in the opinion of the group. After all, proposals promote a principle that potentially leads to a shift in other agents’ opinions. Agents’ incentive to participate in deliberation is based on the idea that every single one of them wants to make her point across and, at the same time, reach a correct collective decision. Agents are endowed with a probability p_a of being attracted to a deliberated cue and a probability p_r of being repulsed by it. The two probabilities are assumed to be independent. They are a function of the distance between the agent’s opinion and the proposal argument’s support for the principle and of the group’s sensitivity3 to deliberation (γ_a, γ_r). The former is decreasing in the distance and increasing in γ_a, while the latter is increasing in both the distance and γ_r. Agents are also assumed
(i) to be capable of assessing the degree of support for P of all arguments;
(ii) to trust4 one another when they utter informational cues.

2.3.3. Agents and Voting

At the end of the decision-making process, agents vote on whether they agree or not with the proposal argument that has been discussed during deliberation. Voting, for an agent, is the expression of her opinion in the final phase of the decision-making process. An agent i is said to vote favorably for a proposal p with justification argument a_p if and only if o_i and v_ap are of the same sign or, equivalently, if the proposal argument does not adhere to the opposite of the principle i supports5.

2.3.4. Agent Knowledge of Arguments

Let A be a finite set of arguments. Each agent i has a sack of arguments S_i ⊆ A of size |S_i| whose content reflects her relative position, o_i, on P (see Figure 2). For extreme-opinion agents, a majority of the arguments in their sacks are of the same adherence to the principle as their opinions, whereas for moderate agents the arguments in their sacks are balanced between the two valences. The hypothesis draws on results found in [22] stating that “an individual places a verbal statement on an issue both in terms of the item’s relative proximity to his own position and the latitude which is acceptable to him around that focal point of acceptance.”

The arguments in an agent i’s sack are those that she knows how to use and advance in a deliberative interaction. The size of the sack represents agent i’s ability to communicate in a deliberative context. It follows that each agent possesses partial knowledge (K_i) of the attack relation between the arguments in A, which she derives from the arguments in her sack and from what she observes in deliberation. Knowledge of arguments is assumed to be “attack-oriented”: an agent knows an attack (b, a) if she knows, upon observation, that neither she nor the group can rationally accept b alongside a. She may become aware of the existing defense relations among arguments by concatenating the information she accumulates on their attack relation during deliberation. She may use this information strategically in deliberative contexts.

Let d_i(b, a) be the length of the shortest directed path between two arguments in the argumentation framework induced by an agent i’s knowledge K_i. Agent i sees an argument b as an attacker (defender) of an argument a if d_i(b, a) is odd (even). An argument that is at an odd distance from another argument attacks it, since either it directly attacks the argument or it attacks a defender of the argument. Likewise, if b is at an even distance from a, it attacks an attacker of a and thus defends it. If no such path exists, then the distance is undefined and the agent does not see either argument as an attacker or defender of the other. Further, let AF_t denote the current state of affairs in deliberation. An agent i’s knowledge of AF_t is then the union of the attacks and defenses she can infer from the arguments she knows (those in her sack) and the attacks and defenses she can infer from what has been advanced in deliberation.
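A minimal sketch of this parity rule, assuming the agent’s known attacks are stored as directed edges (names and data layout are ours): a breadth-first search returns the shortest directed path length from b to a, and its parity classifies b as an attacker (odd) or a defender (even) of a.

```python
from collections import deque

def shortest_path_length(known_attacks, src, dst):
    """Length of the shortest directed path from src to dst, or None."""
    succ = {}
    for (u, v) in known_attacks:
        succ.setdefault(u, []).append(v)
    queue, seen = deque([(src, 0)]), {src}
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in succ.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

def perceived_role(known_attacks, b, a):
    """Classify b as 'attacker' or 'defender' of a by path parity, else None."""
    d = shortest_path_length(known_attacks, b, a)
    if d is None or d == 0:
        return None
    return "attacker" if d % 2 == 1 else "defender"
```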

Notice that agents face no restriction on the amount of information they can carry and use during deliberation. The model implies that agents are able to use and process all information on the attack relations, provided they can observe the attacks, and that they all have equally high cognitive capacity. Agent knowledge resets at the end of each decision-making process, but argument sacks stay untouched. In other words, argument sacks are static in the model.

2.3.5. Agent Behavior during Deliberation

Agents may behave in two different ways in deliberation. They may behave naively or focusedly. Naive agents use deliberation to voice their opinions on the principle through arguments. Focused agents strategically argue in favor of proposal arguments that support the principle they favor. In both cases, agents advance arguments in terms of the arguments’ relative proximity to their own positions [22].

Let m_i denote the argument an agent i advances in a debate (of a deliberation process) over a central argument a_p, and val(a) the valence of an argument a. Let:
(i) SameVal(a, i), the formula that is satisfied if the argument a is of the same valence as the opinion of agent i;
(ii) Closest(a, i), the formula that is satisfied only if the argument a is in i’s sack, not yet in the debate, and the closest, in terms of adherence to the principle P, to i’s opinion;
(iii) Best(a, i), the conjunction of the preceding formulas, indicating that an argument a is in agent i’s sack and not in the debate, of the same sign as o_i, and closest to o_i in absolute value.

Naive agents choose m_i such that Best(m_i, i) is true given any proposal argument a_p. Agents that behave strategically, that is to say, focusedly, choose their moves in the debate as follows (m_i = ∅ indicates no move or null argument):

(a) If agent i’s position with respect to P is of the same sign as the proposal argument’s adherence to P, she advances an argument m_i such that:
(i) Best(m_i, i) is true and m_i defends a_p. If no such argument exists,
(i1) she chooses an argument not of the same sign as her opinion but closest to it and defending a_p. If no such argument exists,
(ii) she anticipates an attack from an argument b on a_p and advances an argument m_i that attacks b6. If no such argument exists,
(ii1) she does the same with an argument not of the same sign as her opinion. If no such argument exists,
(iii) she advances an argument that simply does not attack a_p. If no such argument exists,
(iii1) she does the same with an argument not of the same sign as her opinion. If no such argument exists,
(iv) m_i = ∅ (the agent will advance no argument).

(b) If agent i’s position with respect to P is of the opposite sign of the proposal argument’s adherence to P, she plays the argument m_i such that:
(i) Best(m_i, i) is true and m_i attacks a_p7. If no such argument exists,
(i1) she chooses an argument not of the same sign as her opinion but closest to it and attacking a_p. If no such argument exists,
(ii) she anticipates a defense from an argument b to a_p and advances an argument m_i that attacks b. If no such argument exists,
(ii1) she does the same with an argument not of the same sign as her opinion. If no such argument exists,
(iii) she advances an argument that avoids attacking any attacker of a_p. If no such argument exists,
(iii1) she does the same with an argument not of the same sign as her opinion. If no such argument exists,
(iv) m_i = ∅.

In words, a focused agent that is opposed to the proposal attempts either to attack it directly or to attack an argument that is defending it, whereas an agent that agrees with the proposal attempts either to defend it or to avoid attacking it altogether. This behavioral approach is similar to debate protocols in abstract argumentation, except that agents may advance arguments that do not result in an “advancement” of the debate. In our case, the debates stop for other reasons (see Section 2.4).
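The naive rule reduces to a simple selection: among the arguments in the agent’s sack that share the valence of her opinion and are not yet in the debate, advance the one whose support is closest to her opinion. A sketch with our own field names:

```python
def naive_move(sack, opinion, debate_args):
    """Rule for naive agents: closest same-valence argument not yet debated.

    sack:        dict mapping argument id -> support value v_a in [-1, 1]
    opinion:     the agent's opinion o_i in [-1, 1]
    debate_args: set of argument ids already advanced in the debate
    Returns an argument id, or None when no admissible move exists
    (focused agents would then fall through the cascade of rules above).
    """
    candidates = [a for a, v in sack.items()
                  if a not in debate_args and v * opinion > 0]
    if not candidates:
        return None
    return min(candidates, key=lambda a: abs(sack[a] - opinion))
```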

Two important points deserve to be highlighted. Firstly, focused agents with small argument sacks will regularly find themselves applying the fallback rules at the bottom of the cascade. In effect, if they do not have enough arguments to infer attacks and defenses, their behavior in deliberation is likely to be similar to that of the naive agents. Secondly, the naive and focused behavioral assumptions presented here are very different in terms of computational complexity. The first assumes that the agent observes the debate but that what she sees has no effect on her course of action; i.e., the argument she voices is uniquely determined by how she feels about the principle behind the proposal argument. In the second, an agent debates, essentially, to knock out any proposal argument she disagrees with8.

2.4. Tables for Deliberation

Tables are the physical or virtual places where the exchange of arguments occurs. Agents deliberate at these tables to determine whether the proposal argument is acceptable or not and are, thus, subject to the deliberation procedure imposed by the table’s central authority (CA) [27]. The CA decides on the structure and length of the collective decision-making process and the deliberation procedure. It controls the proportion of agents from the population that actively participates in the debate (n_d) and the labeling-based semantics used to extract acceptable arguments from the framework (σ). The proportion n_d denotes either the share of agents in the population that gets to advance arguments in a population-scale deliberation or an independent sample of agents summoned to actively participate in the debate. σ stands for the procedure used to conclude on the epistemic status of the proposal argument and the arguments advanced during deliberation.

The CA has the ability to stop debates at will using a stop rule S that inherently depends on the minimum number of debates that ought to take place before a decision is deemed sufficiently discussed (m), the maximum number of debates that can take place before abandoning deliberation (M), and the label given to the proposal argument (L(a_p)). S is a Boolean function whose value “true” signals the call for a vote and/or the end of the decision-making process. m is associated with a minimal dialectical or epistemic requirement for considering a proposal for voting and with a lower bound on the length of the deliberation process; M provides an upper bound. Moreover, the CA controls the size of the time interval between debates (t_D), the collective decision rule (e.g., whether there is voting on the deliberated proposals), and the majority quota rule for accepting proposal arguments (α) in the decision-making process. t_D may represent the frequency or density of pair-wise interactions in a decision-making process.

2.4.1. The Construction of a Decision-Making Process

A deliberation process or debate in our model is a tool to obtain labels for proposal arguments that are as close as possible to the ideal or consensual ones. To define a deliberation process formally, we introduce the notion of a debate step as a constituent of a deliberation process. Informally, a debate step is a time step in which a debate occurs. Formally, and more event-oriented, a debate step t of a debate on a_p is a quadruplet (N_t, A_t, R_t, f_t) composed of a set of agents N_t, a set of arguments A_t, a set of attack relations R_t, and a mapping f_t that adds the arguments in A_t and some attacks in R_t to the framework resulting from the previous step. In the same spirit, we define a deliberation process on a proposal argument a_p as a sequence of debate steps such that:
(i) the initial framework contains only the proposal argument a_p;
(ii) no reflexive attack from a_p to itself is allowed;
(iii) any argument newly added to the framework has yet to be added to it;
(iv) no attack can be declared among arguments that are not in the framework;
(v) the system is stable: no arguments are created during deliberation.

Finally, let L_t(a_p) denote the label given to argument a_p at a debate step t during a deliberation process. A decision-making process over a proposal p whose justification argument is a_p is a sequence of debate and nondebate steps such that:
(i) debate steps are as defined above, and nondebate steps correspond to time steps at which there are no debates;
(ii) on debate steps, the framework is updated by the debate mapping; otherwise, it remains unchanged;
(iii) the subsequence of debate steps constitutes a deliberation process on a_p;
(iv) the process pursues deliberation as long as the stop rule S is not satisfied under the semantics σ;
(v) the length of the sequence, or, equivalently, the duration of the process, is bounded below by the minimum number of debates m and above by the maximum number of debates M (and the time t_D between them);
(vi) the final labeling of the proposal argument, L(a_p), is determined by a combination of its deliberated label and a majority vote with quota α.

A decision-making process ends when a final decision has been taken concerning the acceptability of the proposal p, e.g., when a_p has been deemed unacceptable (L(a_p) = out) or when a majority of agents has voted against p. For a representation of a deliberative interaction at a table, see Figure 3; for one of a decision-making process, see Figure 4.

Please note that every collective decision can be seen as a “time step” inasmuch as it describes how agents update opinions and make collective decisions. In the model, the decision-making process and its parameters define the substeps or events that occur within the time step. Henceforth, comparing simulations on the basis of different decision-making processes translates into comparing these collective decision steps.

2.4.2. Deliberation Protocol

To define the deliberation protocol held at the table, either as a consequence of the definition process or as a statement, we assume the following:
(i) Agents may decide not to contribute to the debate.
(ii) All agents have the same probability, conditional on their opinions, of being picked to participate in a debate or deliberation step.
(iii) There is no restriction on the number of times each agent can participate in a deliberation process.
(iv) Each agent may only place one argument per debate step.
(v) Arguments that have already been advanced in the debate may attack newly placed arguments9.

The deliberation or debate protocol goes as follows:
(1) The CA randomly generates and makes public a central argument or proposal argument a_p and informs all agents about the rules of the decision-making process.
(2) The CA randomly draws two sets of agents with divergent views10 on P and merges them to create the set of debaters D.
(3) Each agent in D advances an argument from her sack S_i. The CA makes sure that there are no repeated arguments (agents already take argument repetition into account).
(4) The CA establishes the debate step’s argumentation framework and computes its labeling, L_t, using σ.
(5) If the computed label for a_p is und, or if the number of debate steps at time t is less than or equal to m, then the CA pauses the debate and resumes it t_D time steps later by repeating (2), (3), and (4).
(6) Let L(a_p) be the final label given to the proposal argument a_p. If voting is allowed, then p is accepted (L(a_p) = in) if more than a proportion α of agents agree with it, and it is rejected (L(a_p) = out) if strictly fewer than α agree with it. When there is a tie, the deliberated label settles the decision. When voting is not part of the decision-making process, L(a_p) is the label obtained from deliberation alone.
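Putting the steps together, a schematic driver of one decision-making process might look as follows. The helpers sample_debaters, collect_moves, semantics, and agent_votes_for are hypothetical stand-ins for steps (2)-(4) and the vote; this is a simplified sketch, not the article’s implementation.

```python
def decision_process(agents, proposal_arg, m, max_debates, alpha,
                     semantics, sample_debaters, collect_moves, agent_votes_for):
    """Schematic decision-making process: repeated debate steps, then a vote.

    Hypothetical helpers:
    sample_debaters(agents) -> subset of agents with divergent views;
    collect_moves(debaters, framework) -> iterable of (argument, attacks) moves;
    semantics(arguments, attacks) -> labeling dict (e.g., grounded_labeling);
    agent_votes_for(agent, proposal_arg) -> bool.
    """
    arguments, attacks = {proposal_arg}, set()
    label, debates = "und", 0
    # Step (5): continue while the label is undecided or fewer than m debates
    # have occurred, up to the upper bound M on the number of debates.
    while (label == "und" or debates <= m) and debates < max_debates:
        debaters = sample_debaters(agents)
        for arg, new_attacks in collect_moves(debaters, (arguments, attacks)):
            arguments.add(arg)
            attacks |= new_attacks
        label = semantics(arguments, attacks)[proposal_arg]
        debates += 1
    if label == "out":
        return "rejected"          # not eligible for voting
    # Step (6): majority vote with quota alpha.
    favorable = sum(agent_votes_for(ag, proposal_arg) for ag in agents)
    return "accepted" if favorable > alpha * len(agents) else "rejected"
```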

One important point to notice about this protocol is that it induces a stop rule (cf. steps (5) and (6) of the protocol) for the decision process and, thus, the process always ends. Either agents debate and agree on the proposal’s acceptability through argumentation or, after M debate steps, they directly vote on it, so that a decision concerning the proposal is always reached. Refer to Figure 5 for a sweeping description of the deliberation protocol and the decision-making process.

2.5. Opinion Dynamics for Social Interactions

In this subsection, we describe the opinion diffusion model based on pair-wise interactions among agents and deliberation. Sherif and Hovland’s social judgment theory [22] motivates part of our approach. It describes how individuals’ opinions change on the basis of their attitude structures. Attitude structures refer to the relative scope, width, or latitude of the categories used by individuals when evaluating information, namely, the latitudes of acceptance, rejection, and noncommitment [32]. The idea behind this theory is that individuals change their positions only according to how far or close the communicative cues they receive are from (to) their anchor positions. It holds that if communicative cues are far from (close to) an agent’s position, say beyond her latitude of rejection (within her latitude of acceptance), then the agent shifts her position away from (towards) the position defended by the cues. In the case where the cues fall within the agent’s latitude of noncommitment, her position does not change (see Figure 6 and Equation (5)).

2.5.1. Pair-Wise or Dyadic Opinion Dynamics

As agents may communicate and deliberate collectively, they may also engage in one-to-one conversations with other agents to ponder their positions. We loosely associate this type of communication with dyadic nonargumentative exchange or discussion based on fallacious arguments and persuasion. In the light of social judgment theory and the description of agents in the system, we model pair-wise symmetric interactions following the opinion dynamics model in [9]. An agent j’s influence on an agent i’s opinion at time t is governed by the following difference equation:

o_i(t+1) = o_i(t) + μ(o_j(t) − o_i(t)) if |o_i(t) − o_j(t)| < u_i (attraction),
o_i(t+1) = o_i(t) − μ(o_j(t) − o_i(t)) if |o_i(t) − o_j(t)| > t_i (rejection),   (5)
o_i(t+1) = o_i(t) otherwise (noncommitment),

with opinions clipped so that they remain in [−1, 1], where the parameter μ controls for the strength of the attraction and repulsion in social influence and t_i and u_i are the latitudes of rejection and acceptance for agent i, respectively. The parameter μ may be thought of as the relative importance agent i gives to the opinions of her peers or, analogously, the weight she gives to her own opinion when updating. μ ≤ 0.5 means that agents will never give more weight to the opinions of other agents than to their own (egocentric bias). When two agents i and j discuss, if, for instance, j advances an informational cue (argument, persuasion tactic) and it happens to be close enough to i’s opinion anchor, then i shifts her opinion towards the direction of j’s informational cue. The symmetric influence from agent j to agent i takes effect in the same way (see Figure 6).
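A direct transcription of Equation (5) as reconstructed above, with the clipping of opinions to [−1, 1] made explicit (a standard precaution in this family of models):

```python
def dyadic_update(o_i, o_j, mu, u_i, t_i):
    """One-sided social judgment update of o_i after meeting agent j (Eq. (5)).

    mu:  strength of attraction/repulsion, 0 < mu <= 0.5
    u_i: latitude of acceptance; t_i: latitude of rejection
    """
    d = abs(o_i - o_j)
    if d < u_i:                     # cue within the latitude of acceptance
        o_i += mu * (o_j - o_i)
    elif d > t_i:                   # cue within the latitude of rejection
        o_i -= mu * (o_j - o_i)
    # otherwise: latitude of noncommitment, no change
    return max(-1.0, min(1.0, o_i))
```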

We add to the dyadic dynamics a rule of encounters: at each step of pair-wise interactions, each agent meets exactly one other agent at random. This rule may be associated with random day-to-day encounters among agents. Steps of social influence correspond to the nondebate steps of decision-making processes.

2.5.2. Deliberative Opinion Dynamics

We define an opinion update equation that links opinions to the proposal arguments advanced during deliberation. We combine the uncertain and probabilistic nature of the effect of deliberation, be it a moderating [15, 16] or a polarizing [14] one, with a mechanism similar to that of social judgment theory, based on the distance between informational cues and opinion anchors [32]. The probabilistic modeling of the opinion update can be related to deliberation encouraging open-mindedness11 during collective discussion [18]. From this choice, it follows that deliberated cues can affect even the most extreme of agents, as opposed to some classic opinion diffusion models (e.g., [5, 8]) where, once agents become extremists, they may no longer become moderate.

Let p be a proposal, v_ap the proposal argument’s level of support for a principle P, and o_i(t) an agent i’s opinion at time t. Then, given the distance d_i = |o_i(t) − v_ap|, we define i’s probability of being attracted to a decision’s informational cue as a function p_a(d_i, γ_a), decreasing in d_i, where γ_a denotes a general probability parameter that characterizes how important deliberated results are for the group. The parameter γ_a may also be interpreted as the group’s tendency to be swayed by a decisional majority. Similarly, we define i’s probability of being repulsed from a decision’s informational cue as a function p_r(d_i, γ_r), increasing in d_i, where γ_r denotes a general probability parameter that characterizes the group’s dislike of deliberated results. γ_r can also be thought of as the group’s distrust of the system symbolized by deliberation and democracy.

Let L(a_p) denote the epistemic status of the proposal argument a_p at the end of a decision process over p; every agent i updates her opinion as follows:
(i) If L(a_p) = in:

o_i(t+1) = o_i(t) + λ(v_ap − o_i(t)) with probability p_a(d_i, γ_a),
o_i(t+1) = o_i(t) − λ(v_ap − o_i(t)) with probability p_r(d_i, γ_r),   (6)
o_i(t+1) = o_i(t) otherwise;

(ii) If L(a_p) ≠ in: o_i(t+1) = o_i(t),

where λ is the strength of repulsion and attraction in the dynamics. The meaning of λ is analogous to that of μ in the dyadic interaction model, except that λ weights an agent’s opinion relative to an argument rather than to another opinion. The interpretation of these dynamics is straightforward. If the deliberated proposal argument is close to an agent’s opinion, then it is very likely that the agent shifts her opinion towards it. Please note that the probabilities p_a (p_r) that an agent is attracted to (repelled from) an accepted deliberated proposal argument are bounded by the parameters γ_a and γ_r, respectively. Steps of deliberative opinion dynamics are associated with debate steps in which a concluding collective decision is made.
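A sketch of the deliberative update of Equation (6); since the model only constrains the monotonicity of the attraction and repulsion probabilities, they are passed in as functions (p_attract, p_repulse), and the two random draws are treated as exclusive for simplicity:

```python
import random

def deliberative_update(o_i, v_ap, label, lam, p_attract, p_repulse, rng=random):
    """Update opinion o_i after a collective decision on a proposal argument.

    label:     final label of the proposal argument ('in', 'out', 'und')
    lam:       strength lambda of attraction/repulsion (analogous to mu)
    p_attract: function of the distance d_i, decreasing (scaled by gamma_a)
    p_repulse: function of the distance d_i, increasing (scaled by gamma_r)
    """
    if label != "in":
        return o_i                   # only accepted proposals move opinions here
    d = abs(o_i - v_ap)
    if rng.random() < p_attract(d):
        o_i += lam * (v_ap - o_i)    # shift towards the deliberated cue
    elif rng.random() < p_repulse(d):
        o_i -= lam * (v_ap - o_i)    # shift away from it
    return max(-1.0, min(1.0, o_i))
```

One admissible choice consistent with the stated monotonicity (an assumption of ours, not the article’s formula) would be p_attract = lambda d: gamma_a * (1 - d / 2) and p_repulse = lambda d: gamma_r * d / 2, with d ranging over [0, 2].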

2.5.3. Mixed Opinion Dynamics

Opinion dynamics in the context of decision-making processes can best be described as a combination of the two preceding dynamics. In this opinion dynamics model, agents are engaged in a democratic system in which they deliberate, vote for proposals, and occasionally discuss with one another the principle supported by the proposals. The opinion dynamics for an agent can be written as a sequence of dyadic and deliberative opinion updates, where S is the decision process’s stop rule, t_D the number of time steps between deliberation steps, Equation (5) describes the dynamics of the dyadic interactions, and Equation (6) posits the changes in opinions due to the deliberated cues. Equation (5) applies when agents are not deliberating: each agent encounters another agent and exchanges information that may lead to local opinion updates. When there is deliberation, a handful of agents deliberate and, if they reach the stop condition imposed by the table, all agents vote on the proposal. They then update their opinions (see Equation (6)) on the basis of the proposal argument’s support for the principle and the result of the scrutiny. Otherwise, there is no change in the opinions of the agents. A vote indicates the end of the decision-making process.
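The mixed dynamics then amount to a scheduler that interleaves t_D steps of random pair-wise encounters between debate steps. A compressed sketch, with the debate itself abstracted into a single hypothetical run_debate call that returns the final label and the proposal argument’s support:

```python
import random

def mixed_step(opinions, t, t_d, run_debate, dyadic, deliberative):
    """One time step of the mixed dynamics over a list of opinions.

    Every (t_d + 1)-th step is a debate step handled by run_debate
    (hypothetical: returns the final label and support v_ap of the
    proposal argument); all other steps are rounds of random
    pair-wise encounters. Assumes an even number of agents.
    """
    n = len(opinions)
    if t % (t_d + 1) == 0:                 # debate step
        label, v_ap = run_debate(opinions)
        return [deliberative(o, v_ap, label) for o in opinions]
    order = random.sample(range(n), n)     # each agent meets exactly one other
    new = list(opinions)
    for i, j in zip(order[::2], order[1::2]):
        new[i] = dyadic(opinions[i], opinions[j])
        new[j] = dyadic(opinions[j], opinions[i])
    return new
```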

3. Experiments and Calibration

In this section, we introduce the metrics that enable us to observe the simulations and characterize the calibration of the model.

3.1. Observations and Initialization

In this section we describe the simulations and the protocol used to test and observe the results of our model. We introduce the metrics of interest, explain the calibration of the model, describe some results obtained from the simulation data used for calibration, and conclude with a brief discussion on the expected outcomes of the mixed opinion dynamics model.

3.1.1. Metrics or Statistics of Interest

Let T denote the end of a simulation. We are interested in the effect of deliberation and of the model’s procedural parameters on the following metrics or statistics:
(i) Variance of opinions (Var(T)): the variance of opinions at time T. Since opinions live in the union of the positive and negative unit intervals, Var(T) ∈ [0, 1]. The higher the variance of the distribution, the more “diverse” the opinions are.
(ii) Proportion of extremists in the population (Ext(T)): the proportion of agents in the population whose opinions lie beyond the extremism threshold (|o_i| close to 1). A high proportion of extremists makes “healthy” consensus12 difficult to reach and deliberation more or less informative.
(iii) Shifts of opinions (Sh(T)) [33]: a statistic that measures the aggregated change in individual opinions at time T with respect to the individual opinions at the beginning of the simulation. It is positive and bounded above, since opinions are bounded. A low shift statistic implies that the process has a small impact on opinions.
(iv) Judgment or consensual inaccuracy (In(T)): the consensual accuracy of a group consists of an ad hoc statistic measuring the group’s ability to infer correct labels for proposal arguments from a decision-making process, given the ideal consensual labeling based on full information. We use a Hamming-based distance on labelings [34] to define the statistic over the set of all discussed proposal arguments: each proposal argument whose final label differs from its ideal label contributes to the distance. The statistic lives in the interval [0, 1]. An inaccuracy statistic close to 1 indicates that agents, subject to a particular decision-making process, make many mistakes in judging the labels of the proposal arguments. Note that, when there is voting, all proposal arguments are labeled either in or out after deliberation.
(v) Coherence (Coh(T)): let L_d(a_p) be the label obtained for a_p from the deliberation process without voting. The coherence statistic measures how well voting results adjust to the results obtained during deliberation only. We use the proportion of proposal arguments that have been labeled in in the debate and that agents have voted favorably for. The coherence statistic’s domain is [0, 1]. If, after the debates, no deliberated central argument has been labeled in, or if voting is not part of the procedure of collective decision-making, then the statistic equals 1 or, said differently, agents are perfectly coherent. This comes from the fact that if the central argument is labeled out, then it is not even eligible for voting, and if it is labeled und, then the debate will always be coherent with the preferences of the agents since the result will be a simple aggregation of their votes. If the statistic equals 0, then none of the proposal arguments labeled in are voted for favorably by the agents. It follows that a high coherence statistic implies that, when agents vote for acceptable proposal arguments, the results of the scrutiny reflect the consensual rationality expressed in the deliberation process.
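The first three statistics are straightforward to compute from the opinion vector; a sketch, with the extremism threshold left as a parameter and the shift statistic normalized by the maximum possible aggregate change (our normalization):

```python
def variance_of_opinions(opinions):
    """Var(T): variance of the opinion distribution, in [0, 1]."""
    mean = sum(opinions) / len(opinions)
    return sum((o - mean) ** 2 for o in opinions) / len(opinions)

def proportion_of_extremists(opinions, threshold):
    """Ext(T): share of agents with |o_i| at or beyond the threshold."""
    return sum(1 for o in opinions if abs(o) >= threshold) / len(opinions)

def shift_of_opinions(now, start):
    """Sh(T): aggregate absolute opinion change, normalized to [0, 1]."""
    return sum(abs(a - b) for a, b in zip(now, start)) / (2 * len(now))
```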

3.1.2. Parameters of Interest

We recall the parameters of interest in our study that are linked to the structure of the decision-making processes:
(i) m: the minimum number of time steps in which a debate or a scrutiny occurs in a decision-making process before a final deliberated decision is submitted to a vote;
(ii) n_d: the number of agents that deliberate, as a proportion of the population;
(iii) t_D: the number of time steps between debates, in which pair-wise interactions among agents may occur;
(iv) α: the proportional majority requirement for the acceptance of a proposal. When α = 0, it stands for no voting: “any proposal argument that is labeled in during deliberation is accepted.”

Please recall that, in terms of the definition of a decision-making process, each parameter controls either the length or the content of the sequence (m, t_D) or the rules that are applied during the debate steps (n_d, α) (refer to Figure 4).

3.1.3. Initialization

All agents start off with an opinion o_i drawn from a uniform distribution over [−1, 1]13. Given o_i, every agent randomly draws a set of arguments (S_i) from a balanced14 argument pool of nonneutral (v_a ≠ 0) arguments on the basis of o_i: if the opinion is moderate, then agent i randomly fills half of her argument sack with arguments such that v_a > 0 and the other half with arguments such that v_a < 0. Otherwise, she randomly fills a fixed majority of her sack with arguments such that v_a and o_i are of the same sign and the remaining share with arguments of the opposite sign15.
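A sketch of this initialization, with the extremism threshold and the extremists’ same-sign share left as parameters (their exact values are fixed in the article’s footnotes, which are not reproduced here):

```python
import random

def init_agent(pool, sack_size, extremism_threshold, same_sign_share):
    """Draw an opinion o_i and fill an argument sack S_i reflecting it.

    pool: dict mapping argument id -> support value v_a (nonneutral).
    Assumes the pool is balanced and large enough for the samples below.
    """
    o = random.uniform(-1, 1)
    pos = [a for a, v in pool.items() if v > 0]
    neg = [a for a, v in pool.items() if v < 0]
    if abs(o) < extremism_threshold:       # moderate: balanced sack
        sack = (random.sample(pos, sack_size // 2)
                + random.sample(neg, sack_size // 2))
    else:                                  # extremist: mostly same-sign arguments
        same, other = (pos, neg) if o > 0 else (neg, pos)
        k = int(sack_size * same_sign_share)
        sack = random.sample(same, k) + random.sample(other, sack_size - k)
    return o, set(sack)
```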

As for the opinions of agents, every argument a’s adherence to the principle P, v_a, is drawn from a uniform distribution over [−1, 1]. The attack relation that gives birth to the ideal argumentation framework (AF_ideal) is established on the basis of the v_a’s and the arguments’ epistemic reach, ER.

Let g be an auxiliary positive real-valued function that takes two adherence values (v_a, v_b) and a positive real number (ε) as input. The probability that any argument a creates a link to any other argument b is given by g(v_a, v_b, ε), so that arguments supporting opposing views of the principle always attack each other. We fix ε, the epistemic correlation parameter, to 0.15. If ε were greater than 0.15, then arguments of the same sign would attack each other too often (and focused agents would rarely be incited to advance favorable arguments during deliberation). If ε were lower, then the arguments of the same sign would induce an almost empty graph, which is unrealistic. The number of arguments that any argument can attack is bounded by its epistemic reach, ER, which we fix to 15. A higher value of the epistemic reach makes the argument lattice too conflicting and nearly bipartite. A lower epistemic reach makes the lattice insufficiently conflicting, in the sense that too many arguments can attack the proposal argument relative to the few that can defend it. If any argument attacks the proposal argument, the chances that another argument attacks the attacker are low. Hence, the proposal argument is almost never accepted and deliberative opinion updates become rare.
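The generation of the ideal framework can be sketched as follows. The form of g is an assumption on our part (attack probability 1 for opposite-valence pairs, ε for same-valence pairs, subject to the epistemic-reach cap); any positive function with these properties fits the description above.

```python
import random

def build_ideal_framework(values, epsilon=0.15, reach=15):
    """Generate the attack relation of AF_ideal from adherence values.

    values: dict mapping argument id -> v_a in [-1, 1], v_a != 0.
    Assumed form of g: attack probability 1.0 for opposite-valence
    pairs and epsilon for same-valence pairs, subject to the
    epistemic-reach cap on each argument's out-degree.
    """
    attacks = set()
    out_degree = {a: 0 for a in values}
    pairs = [(a, b) for a in values for b in values if a != b]
    random.shuffle(pairs)       # avoid biasing which attacks hit the cap
    for a, b in pairs:
        if out_degree[a] >= reach:
            continue
        p = 1.0 if values[a] * values[b] < 0 else epsilon
        if random.random() < p:
            attacks.add((a, b))
            out_degree[a] += 1
    return attacks
```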

The arguments in the resulting argumentation framework, AF_ideal, are given a permanent labeling, L_ideal, using grounded semantics. We choose the grounded labeling-based semantics because it provides a unique admissible labeling, respects minimal rationality constraints while simplifying the model, and models a skeptical approach to accepting arguments (see [23, 25, 31]), as it maximizes the cardinality of the set und(L). To some extent, if we consider proposals as committing because they may guide collective action, choosing a skeptical semantics seems reasonable insofar as it labels an argument in or out only if it has no reason to label it und. For instance, if a proposal is a public policy that requires large amounts of resources and engages future courses of action, then it may also determine policy cycles and heavily burden a group and its future decisions. In important situations like these, it seems reasonable to lengthen deliberation and to ask for more demanding and “grounded” criteria for policy argument acceptability.

On the proposal’s side, we create an argument a_p whose adherence to P, v_ap, is also drawn from a uniform distribution over [−1, 1], and we label it in. a_p is interpreted as the main argument justifying the discussed proposal and v_ap as its support for P. a_p cannot attack other arguments, but other arguments can attack it. For each argument a, a directed arc or attack from a to a_p is activated with a probability drawn from a fixed interval. When this interval’s lower bound is too high, the proposal argument is almost always defeated and, thus, deliberative opinion updates do not happen. When the lower bound is lower than 0.03, the opposite occurs: the proposal argument is always accepted due to an absence of attacks towards it. Finally, we set the maximum number of debates, M, used in the decision-making processes’ stop rules16.

3.2. Calibration and Simulation Protocol

In this subsection, we discuss the calibration of the models, the termination conditions for runs in the deliberative and mixed model, and the expected outcomes in terms of the observations.

3.2.1. Calibrating Dyadic Opinion Dynamics

Dyadic opinion dynamics correspond to a space of parameters in which argumentation and deliberation spaces are not taken into account. Deliberated arguments and informational cues have no effect on agents’ opinions, but agents still vote on the proposal argument17. We use and calibrate this model for comparability in terms of all our metrics, since they are not explicitly observed in [9], a reference opinion dynamics model. Furthermore, for simplicity and comparability again, we suppose that agents are homogeneous in terms of their attitude structures and opinion weightings: we fix u_i = u, t_i = t, and μ_i = μ for all i, for some triplet (u, t, μ).

Experiments suggest that the strength of the dyadic interactions is an explanatory factor of the time of convergence and of the dynamics’ steady states. Larger values of this strength speed up convergence towards a stable set of opinion clusters and affect the size and the relative position of these clusters in the opinion distribution. It does not, however, play a major role in the number of opinion clusters observed at the steady state of the dynamics.

We set the strength of influence to the value given to it in Jagger et al.’s opinion dynamics model [9]. For one, it is the smallest real number for which most results found in [9] hold and the dynamics are smooth; for another, it seems a reasonable strength of influence, considering that we would not want pair-wise social interactions to completely shadow or mask an eventual effect of deliberation on the opinion dynamics.
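
For concreteness, a pair-wise update in the spirit of the social-judgment-based model of [9] can be sketched as below; the parameter names (mu for the strength of influence, U and T for the latitudes of acceptance and rejection) and the clamping to [-1, 1] are our illustrative assumptions, not the calibrated values.

```python
def dyadic_update(x_i, x_j, mu, U, T):
    """One pair-wise influence step in the spirit of [9] (our sketch).

    mu is the strength of influence; U and T are the latitudes of
    acceptance and rejection (U < T). Opinions live in [-1, 1]."""
    d = x_j - x_i
    if abs(d) < U:        # latitude of acceptance: assimilation
        x_i += mu * d
    elif abs(d) > T:      # latitude of rejection: contrast
        x_i -= mu * d
    # in the latitude of noncommitment, the opinion does not change
    return max(-1.0, min(1.0, x_i))
```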

For different values of the attitude structures, we get exactly the same results as in [9] in terms of the number and density of the opinion clusters. The different couples of attitude values model the size of the latitude of noncommitment and, thus, the agents’ propensity to update opinions. Whenever the latitudes are in the relevant ranges, regularities in convergence and opinion clustering appear: for some couples there is always central convergence; for others, there is always bipolar convergence with clusters of similar sizes. Relatively low latitudes of acceptance combined with relatively high latitudes of rejection always yield extreme bipolar convergence with a relatively small cluster of moderate agents (bipolar-central convergence). For a sufficiently high latitude of acceptance and a sufficiently low latitude of rejection, in other words, for not very ego-involved agents, there is a wider spectrum of steady states. For this reason, we studied the metrics of interest for these value differences and kept the couples depicted in Table 1. For these couples, central, bipolar, bipolar-central (3 clusters), and multicluster (4 or 5 clusters) convergence are all possible and result from meaningfully different attitude structures. Judgment accuracy varies significantly across the four attitude structures we retain, as does the variance of opinions. These scenarios provide us with reference results to build upon.


Table 1: Calibration of the parameters (length of the decision processes; social influence; deliberation influence; cognitive parameters).


3.2.2. Calibrating Deliberative Social Interactions

Deliberative social interactions correspond to social situations in which individuals are not influenced by pair-wise discussion with peers but are sensitive to collectively deliberated informational cues. In these terms, parameters such as the size of the agents’ argument sacks, the distribution of arguments within them, the minimum number of debates, and the proportion of debaters drawn from the population are no longer mute. We assume, for simplicity, that the deliberative update formula holds with common parameter values for all agents.

The chosen domains for these parameters (see Table 1) are justified by the size of the population and the size of the argument pool, which we set from the start. If the argument sacks are too big and/or too many agents are allowed into the debate, then individuals always disprove the central argument and deliberation is never or rarely taken into account in the update of opinions. Similar effects occur when the minimum number of debates is high, yet we calibrate that parameter only with respect to the mean running time of a simulation. Different distributions of arguments across the sacks do not appear to explain, directly or significantly, any of the statistics we analyze, so we set the distribution conveniently. We also choose the sack size to be divisible by 4, as this is convenient given the initial conditions regarding the distribution of arguments in the agents’ argument sacks.

We fix the value space for the parameters as in Table 1 to account for different intensities of the effect of deliberative voting retroaction on the system. For example, the scenarios in which the three parameters take their lowest values correspond to the scenarios with the smallest fixed prior effect of deliberation in the model. Increasing any of these parameters should, mechanically, be associated with a world in which agents are more sensitive to collective decisions. The values for the sensitivity to deliberation are taken from [13], where the author states that, by means of deliberation, 7% to 28% of individuals changed their opinion from agreeing to disagreeing, or vice versa, on a referendum question about Denmark’s participation in the Euro. We choose two values that account for high (0.3) and very high (0.5) sensitivity to deliberation in a group. The proportion of contrarian agents, on the other hand, is taken to be small (equal to 0.1), since we believe that the agents who always go against the reached consensus exist but are meager in number.

For simplicity, we assume that all agents share the same deliberation weight; in effect, we posit that the heterogeneity of agents in the deliberative context derives only from their arguments and their opinions. We calibrate this weight by reference to the strength of the social dynamics in pair-wise interactions. In doing so, we limit ourselves to three scenarios in which deliberation has, respectively, half as much, as much, and twice as much opinion-shifting power as one-to-one social influence. Finally, we include the acceptability voting quota rule, whose domain is inspired by classical majority rules [35] observed empirically (Table 1).
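
Putting these pieces together, a collective decision step might look as follows; the same-sign approval vote is our simplification, while the deliberation weight gamma (0.3 or 0.5), the contrarian share (0.1), and the quota alpha come from the calibration above.

```python
import random

def deliberative_update(opinions, proposal_pos, accepted_in_debate,
                        gamma=0.3, alpha=0.5, p_contrarian=0.1, rng=None):
    """Sketch: a proposal that survives deliberation is put to a vote
    under quota `alpha`; if it passes, agents shift by `gamma` towards
    the position it defends, except for a small contrarian share that
    moves away. Illustrative, not the authors' code."""
    rng = rng or random.Random()
    if not accepted_in_debate:
        return opinions
    in_favor = sum(1 for x in opinions if x * proposal_pos >= 0)
    if in_favor / len(opinions) < alpha:
        return opinions                    # quota not met: no update
    for i, x in enumerate(opinions):
        shift = gamma * (proposal_pos - x)
        if rng.random() < p_contrarian:
            shift = -shift                 # contrarians defy the consensus
        opinions[i] = max(-1.0, min(1.0, x + shift))
    return opinions
```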

3.2.3. Calibrating the Mixed Social Interactions Model

The mixed interaction model corresponds to a parameter space in which the effect of collective choices on our metrics is nontrivial and in which deliberation and voting on proposals determine their acceptability. Since the mixed model amounts to periodic iterations of pair-wise and collective social interactions, the calibration of the parameters of both preceding models carries over (see Table 1). The first reason for this is comparability; the second is that the previous calibration already takes into account the fact that the two dynamics are to be combined. We add the frequency of debates as a parameter controlling the amount of pair-wise interaction among agents between two distinct deliberation steps.
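
A minimal sketch of the resulting main loop, reusing dyadic_update and deliberative_update from the sketches above; the constants are illustrative, not the calibrated values of Table 1, and the debate itself is reduced to a coin flip for brevity.

```python
import random

# Illustrative constants (not the calibrated values of Table 1).
MU, U, T = 0.1, 0.4, 1.2

def run_mixed_model(opinions, n_proposals=100, t_between=10, seed=None):
    """Sketch of the mixed dynamics: t_between random pair-wise
    encounters between two consecutive collective decision steps,
    stopping after n_proposals proposals (the termination rule of
    Section 3.2.4)."""
    rng = random.Random(seed)
    n = len(opinions)
    for _ in range(n_proposals):
        for _ in range(t_between):                # dyadic phase
            i, j = rng.sample(range(n), 2)
            opinions[i] = dyadic_update(opinions[i], opinions[j], MU, U, T)
        proposal_pos = rng.uniform(-1.0, 1.0)     # deliberative phase
        accepted = rng.random() < 0.5             # stand-in for the debate
        opinions = deliberative_update(opinions, proposal_pos, accepted,
                                       rng=rng)
    return opinions
```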

3.2.4. Termination Conditions for Runs

Simulations stop once 100 proposals have been deliberated on and/or voted for, in other words, after 100 collective decision steps have occurred. The number of proposals discussed may seem arbitrary, but it is high enough to observe the effects of deliberation on opinion distributions and on the other metrics related to coherence and judgment accuracy. We choose the number of runs as a function of our research questions, which give prominence to the procedural parameters in the mixed model rather than to the parameters describing the population and its behavior.

3.2.5. Expected Outcomes of the Simulations

We expect more deliberation, in terms of a higher frequency of debates and a larger proportion of debaters, to increase judgment accuracy and, at the same time, to reduce the variance of opinions. In turn, a smaller variance of opinions implies a smaller argument pool for deliberation and, therefore, judgment accuracy should eventually be lower. Moreover, variance-increasing dyadic interactions (rejection) should also increase judgment accuracy and coherence, since bipolarization and dissensus foster argument diversity in deliberation. The coherence statistic should be higher in simulations in which central convergence appears quickly and agents are naive. Attitude structures, sensitivities to informational cues, and weights given to deliberated cues that shorten the paths to bipolar convergence or lengthen those to central convergence should be associated with higher judgment accuracy. Shifts of opinion should be more visible in scenarios in which extreme agents are pulled away from the extremes. Hence, the sensitivity to deliberated cues and the weight given to them should explain the shifts; crossed with a high frequency of debates and a large proportion of debaters, the shifting power of deliberation should be at its highest.

In the end, we do not know how these mechanisms will play out. The results of the subsequent experiments give us insight into the interplay of the aforementioned effects.

4. Results

Before presenting any results obtained from the mixed interactions model, we describe and compare the dyadic and deliberative interactions models on the parameter space obtained from the calibration in Section 3 (see Table 1). Primarily, we require our results to describe two orthogonal types of opinion dynamics: the pair-wise dyadic and the deliberative. The latter comprises scenarios in which individuals do not influence each other by means of pair-wise discussion and update their opinions only through deliberation; the former, scenarios in which only pair-wise interactions among agents drive the dynamics. We describe both on the parameter space given in Table 1 by performing at least 20 simulations per scenario. Quantitative results from the pair-wise interaction model alone were described during calibration and are not treated here, as the deliberative parameters have no effect on that model. Moreover, we show how these different model specifications yield qualitatively different opinion distributions and compare them on the basis of the metrics of interest. To this end, we perform independent two-sample Student’s t-tests18 or Welch t-tests19 and compute the corresponding confidence intervals. Last but not least, we comment on ordinary least squares (OLS) regression estimates to account for the direction and magnitude20 of the effects of the parameters on the metrics. Estimates are declared significant at the retained level of risk.
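
For readers who wish to reproduce the comparison, both tests are available in standard tooling; the samples below are synthetic placeholders, not our simulation output.

```python
import numpy as np
from scipy import stats

# Two hypothetical samples of a metric (e.g., judgment accuracy) over runs.
rng = np.random.default_rng(0)
accuracy_dyadic = rng.normal(0.60, 0.05, size=20)
accuracy_deliberative = rng.normal(0.65, 0.08, size=20)

# equal_var=True gives the classic Student's t-test; equal_var=False gives
# Welch's t-test, which does not assume equal variances across groups.
t_stat, p_value = stats.ttest_ind(accuracy_dyadic, accuracy_deliberative,
                                  equal_var=False)
print(f"Welch t = {t_stat:.3f}, p = {p_value:.3f}")
```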

We perform the same analysis on the mixed interactions model, comparing it to the pair-wise dyadic and deliberative interactions models and discussing the marginal effects of the parameters on the metrics. More precisely, we obtain two types of results for the mixed interactions model: the first compares the mixed scenarios to their monolithic counterparts with respect to each metric and to the regression estimates; the second gives a clear idea of the marginal effect of each governance parameter (or parameter of interest) on our observations. For the first, we allow control and procedural parameters to vary on a parameter space similar to those of the dyadic and deliberative opinion dynamics (see Table 1). For the second, we fix all of our control parameters, execute 36,000 balanced21 runs, and focus only on the one-way, pooled effects of our procedural and behavioral (all agents are focused or naive) parameters. We generate comprehensive graphs and tables to account for the obtained results.

Please note the slight change in the size of the parameter space for the procedural parameters, and in the values of the nonprocedural ones, between the first and second types of results (refer to Tables 1 and 2, respectively). Since we want a finer idea of the marginal effect of each procedural parameter on how well a group decides on proposals, we make the value jumps for each parameter small enough to detect significant differences in the estimates and meaningful for interpretation. For the fixed, one-valued parameters in Table 2, we use Table 1 to set them to their minimum, mean, or median values.


Table 2: Procedural parameters and other (fixed) parameters.


4.1. Comments on Dyadic and Deliberative Opinion Dynamics

In this subsection, we present our first results regarding the deliberative opinion dynamics described in Equation (6) and their differences from the dyadic dynamics (Equation (5)) in terms of our observations.

4.1.1. Qualitative Analysis of Pair-Wise Opinion Dynamics

Pair-wise opinion dynamics alone produce stable multicluster convergence with variable cluster sizes (see Figures 7(a) and 7(d)). Insights into the complexity of these dynamics were discussed in Section 3.2.1.

In Figures 7(a) and 7(d), agents discuss randomly with one another in pairs. Clusters at the extremes form quickly, since agents close to the extremes either attract one another or are convinced to stay close to the extremes by interacting with agents they disagree with. Other agents with near-extreme positions may simply be attracted to one of the several moderate foci of agents in the opinion distribution. Moderate agents, on the other hand, are either attracted to the closest opinion focus or pushed towards the extremes of the distribution. Over time, agents in the central focus either attract other agents into it or ignore the opinions of the extreme agents that interact with them. These dynamics yield a multiclustered convergence in which the opinion foci are at least a latitude of acceptance away from one another, and only the extreme foci are separated by at least the latitude of rejection. The foci at the bounds of the opinion distribution are very stable and, in analogy to the 1/(2ε) rule in assimilation bounded-confidence models [33], the number of clusters that form is roughly inversely proportional to the latitude of acceptance.
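
A rough way to count the clusters at a steady state, used here only for illustration, is to split the sorted opinion profile at large gaps; the threshold is an illustrative choice, not the paper's definition of a cluster.

```python
def count_opinion_clusters(opinions, gap=0.05):
    """Count opinion clusters by splitting the sorted opinions wherever
    two neighbours are more than `gap` apart (illustrative threshold)."""
    xs = sorted(opinions)
    if not xs:
        return 0
    return 1 + sum(1 for a, b in zip(xs, xs[1:]) if b - a > gap)
```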

4.1.2. Qualitative Analysis of Deliberative Opinion Dynamics

Qualitatively, deliberative opinion dynamics yield a loose unipolar convergence of opinions near the center (central convergence, see Figure 7(b)), near the center of either the left or right portion of the distribution (group polarization, see Figure 7(e)), or, if mixed with pair-wise interactions, a sparse bipolar opinion distribution with clusters at the center and at the extremes (see Figure 7(c)). This is probably one reason to expect the variance of opinions and the proportion of extremists (the two are almost perfectly correlated) to be low in these scenarios. The side towards which the opinion cluster skews depends, essentially, on the valence of the first argument that is collectively accepted.

In Figures 7(e) and 7(b), agents that update their opinions do so at the same time and only when deliberation is successful. In these scenarios, every time a decision is collectively accepted, agents update their opinions towards the position the accepted argument defends. Hence, if there is voting and agents happen to approve proposal arguments, they inexorably cluster towards one or the other side of the opinion distribution, resulting in opinion convergence reminiscent of group polarization (see Figure 7(e)). Furthermore, as agents cluster on the same side of the opinion spectrum, the arguments that may be advanced in deliberation become fewer and more skewed towards that side of the spectrum. Agents become more easily persuaded in deliberation, and only by arguments supporting one side of the spectrum; convergence of opinions towards one loose opinion focus becomes faster and more certain. If there is no vote (see Figure 7(b)), the dynamics are similar, except that the uniform randomness of the proposal argument guarantees a central convergence of opinions.

4.1.3. Quantitative Analysis of Pair-Wise Opinion Dynamics

The three parameters that allow for quantitative analysis are the two attitude-structure parameters and the majority voting quota rule α. Table 3 supports the claim that going from α = 0.5 to α = 0.66 affects neither the variance nor the shifts of opinions. However, judgment accuracy is significantly higher when accepting proposals becomes more difficult. This is rather surprising, yet also intuitive, since decisions taken using more restrictive methods of scrutiny should be more accurate.


Table 3: OLS regression estimates of the effects of α (reference category α = 0.5) and of the attitude structures on the metrics; standard errors in parentheses.