Paths to Polarization: How Extreme Views, Miscommunication, and Random Chance Drive Opinion Dynamics
Understanding the social conditions that tend to increase or decrease polarization is important for many reasons. We study a network-structured agent-based model of opinion dynamics, extending a model previously introduced by Flache and Macy (2011), who found that polarization appeared to increase with the introduction of long-range ties but decrease with the number of salient opinions, which they called the population’s “cultural complexity.” We find the following. First, polarization is strongly path dependent and sensitive to stochastic variation. Second, polarization depends strongly on the initial distribution of opinions in the population. In the absence of extremists, polarization may be mitigated. Third, noisy communication can drive a population toward more extreme opinions and even cause acute polarization. Finally, the apparent reduction in polarization under increased “cultural complexity” arises via a particular property of the polarization measurement, under which a population containing a wider diversity of extreme views is deemed less polarized. This work has implications for understanding the population dynamics of beliefs, opinions, and polarization as well as broader implications for the analysis of agent-based models of social phenomena.
Diversity of opinions in a community is often difficult to maintain. Iterative exposure, norm enforcement, and psychological biases for conformity can drive consensus within a group [1–6]. On the other hand, in-group bias, out-group aversion, and the tendency to further differentiate ourselves from those deemed different may lead to the emergence of strong intergroup differences [7–15]. Such differences can lead to polarization in opinions under certain conditions. Understanding the social conditions that tend to increase or decrease polarization is important for many reasons. Primary among these is that a functioning democratic society depends on clear communication among the citizenry, which is impeded by the mismatch in norms, the differential interpretation of facts, and the dehumanization that polarization can engender (see Pew Research Center for a current analysis of these dynamics in the United States). The maintenance of social differences in the form of cliques and clubs may be inevitable, but cooperation depends on transcending differences.
We take a network-theoretic approach to studying the conditions for polarization in an agent-based model of opinion dynamics. Empirical research on the population dynamics of opinions is challenging and must be supplemented by formal modeling. Models reduce complex systems to ones that are tractable using mathematical or computational analysis and allow for the exploration of replicate and counterfactual scenarios. Of course, the conclusions we draw from our models depend critically on the assumptions of those models, and so caution must be taken when using model results to make inferences about empirical phenomena. For example, Smaldino and Schank analyzed models of human mate choice and showed that very different individual decision rules could be fit to almost any empirical outcome by modulating assumptions about the population structure that had been ignored in prior analyses. When considering an important phenomenon such as polarization, similar caution must be exercised, as we will demonstrate.
Our analysis extends the work of Flache and Macy, who used a network-structured model of opinions and biased influence (hereafter the FM model) to study polarization. Network ties in this model exist between individuals as an indicator of social influence. Like several other models of opinions and beliefs, they operationalized the well-known phenomenon of biased assimilation [9, 11]: the tendency for an individual to become more similar to those to whom they are already similar and to become more distinct from those with whom they already differ. Some empirical studies support the assumption of both positive and negative biased assimilation (e.g., [20, 21]). Other empirical studies failed to find evidence of negative biased assimilation at work where computational studies suggested it would be (e.g., [22, 23]). Of course, if further empirical research turns out to invalidate that assumption, then our model conclusions must also be reexamined, as with any theoretical model. Flache and Macy found that, when compared with a highly clustered population structure, the addition of long-range ties could dramatically increase polarization. When individuals were clustered into relatively isolated groups, they tended to converge to local consensus while maintaining diversity in the population at large. However, the addition of long-range ties increased exposure to substantially different opinions. Whether by attractive or repulsive forces, these long-range ties tended to drive opinions toward their extreme values, resulting in increased polarization. Another important result was that the extent of “cultural complexity”—the number of orthogonal traits that are important to individuals in assessing their similarities and differences with others—mitigated polarization. When the number of traits was large, polarization was reduced.
DellaPosta, Shi, and Macy  used a variant of the FM model to explain data from the General Social Survey indicating that arbitrary traits tend to become associated with polarized identity groups, leading to often-puzzling stereotypes such as “latte-drinking liberals” and “bird-hunting conservatives.”
If we take the results of Flache and Macy at face value, two possible recommendations for the reduction of polarization readily emerge. First, we might try to reduce the number of long-range ties in our social networks. This is made difficult by the pervasive influence of internet social media [26, 27]. Second, we might attempt to broaden the number of domains in the public discussion, so that points of agreement are easier to discover. This is also challenging, due to the increasingly fractured media landscape in which niche interests are increasing and common knowledge is diminishing. However, challenging is not the same thing as impossible. We must ask, then: how seriously should we take these recommendations? Might there be other solutions available?
To address these questions, we perform new analyses of the FM model and reveal several additional factors influencing polarization. First, polarization is almost always a probabilistic occurrence. Even when parameter exploration appears to reveal regularities in polarization, specific outcomes are strongly path dependent. Indeed, there is often a wide range of possible outcomes even given identically repeatable starting conditions, due to stochasticity in the dynamics of interactions. This result highlights potential limits of our ability to make reliable predictions about polarization in any particular social system. Complex systems are often stochastic, and something that increases or decreases average polarization in a simulation is not guaranteed to do so in reality. Second, resultant polarization depends strongly on the initial distribution of opinions in the population. In the absence of extremists, polarization may be mitigated. This highlights the well-known danger of extremists and suggests new routes to avoiding polarization. More broadly, we show that too much diversity of extreme opinions makes polarization more likely. Third, noisy communication can drive a population toward more extreme opinions and even cause acute polarization. Cooperation and consensus building depend on individuals finding common ground, which can be jeopardized even in the presence of unbiased error. Finally, we show that the apparent reduction in polarization under increased “cultural complexity” arises via a particular property of the polarization measurement, under which a population containing a wider diversity of extreme views is deemed less polarized. Although this may often be a reasonable assumption, it highlights the need for caution in our measurement of complex social phenomena.
2.1. Modeling Individuals and Their Opinions
Our model is an extension of one presented by Flache and Macy and shares many general features with other models of opinion dynamics in structured populations [7–9, 12, 25, 30–32]. The population is modeled as a network of individuals (or agents), each of whom is defined by a vector of opinions. The size of this vector, K, is called the “cultural complexity,” and may be more descriptively explained as the number of opinions that are important to individuals in assessing their similarities and differences with others. Opinions can represent political views, religious or moral values, artistic tastes, or myriad other beliefs. The opinion of agent i on issue k, denoted s_ik, is operationalized as a real number implicitly bounded in [-1, 1] by smoothing (equation (3)). In Flache and Macy’s original analysis, all opinions were initialized as random draws from the uniform distribution U(-1, 1). In order to study the importance of initially extreme opinions, each initial opinion is here drawn instead from U(-S, S), where 0 < S ≤ 1.
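To make the initialization concrete, the following is a minimal sketch in Python. The function name and use of NumPy are our own; the repository code may differ.

```python
import numpy as np

def init_opinions(n_agents, K, S, seed=None):
    """Draw K opinions per agent uniformly from [-S, S].

    S (0 < S <= 1) controls initial extremism; S = 1 recovers
    Flache and Macy's original initialization over [-1, 1].
    """
    rng = np.random.default_rng(seed)
    return rng.uniform(-S, S, size=(n_agents, K))
```

With S = 0.5, for example, no agent begins within 0.5 of an extreme, which is the kind of "no initial extremists" condition examined below.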
2.2. Modeling Social Influence
The aggregation of the opinions held by an agent determines its coordinates in opinion space. We adopt the FM model’s measure of distance between agents i and j as follows:

\[ d_{ij} = \frac{1}{K} \sum_{k=1}^{K} | s_{jk} - s_{ik} | \tag{1} \]
Distance thus defined measures the average absolute difference across opinion coordinates. Agents are nodes in a network, with an edge between agents reflecting a relationship and an opportunity for the agents to influence one another. The magnitude and direction of that influence is characterized by the weight of each edge. Weights are determined by the relative opinions of the two agents, as measured by their distance, and so can change dynamically. Positive weights represent positive influence, in which agents become closer in their opinions, while negative weights represent the tendency toward differentiation. For descriptive convenience, if two agents are connected with a positive weight, they could be considered “friends” and if the weight is negative they could be considered “enemies.” In reality, no assumptions about such clear social roles are necessary. The weight of an edge between agents i and j is given by

\[ w_{ij} = 1 - d_{ij} \tag{2} \]
So, if the opinions of agents i and j are separated by a distance less than 1, the weight is positive: the agents are friends and will harmonize their opinions. If the distance exceeds 1, the weight is negative: the agents are enemies and will drive each other’s opinions to more extreme levels. This weighting rule embodies the psychological phenomenon of biased assimilation, in which similar individuals grow more similar and dissimilar individuals grow further apart after interacting. This is a common assumption in models of social influence [9, 19, 33]. It should be noted that while the empirical evidence for biased assimilation is quite strong and spans almost four decades, it is less clear how coherence on various opinions or beliefs affects influence on orthogonal opinions or beliefs. The assumption in this model is that only average distance in opinions matters.
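Under these definitions, distance and weight reduce to a few lines. The sketch below (our own naming) assumes opinion vectors lie in [-1, 1]^K:

```python
import numpy as np

def distance(s_i, s_j):
    """Average absolute difference across the K opinion dimensions."""
    return float(np.mean(np.abs(np.asarray(s_j) - np.asarray(s_i))))

def weight(s_i, s_j):
    """Edge weight: positive ('friends') when distance < 1,
    negative ('enemies') when distance > 1."""
    return 1.0 - distance(s_i, s_j)
```

For example, two agents at opposite corners of one-dimensional opinion space have distance 2 and weight -1, the maximally repulsive relationship.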
At time t, agents update their opinions by adding the average influence from all neighboring agents. For each opinion k, agent i uses the following update rule:

\[ s_{ik}(t+1) = s_{ik}(t) + \begin{cases} \Delta_{ik}(t)\,(1 - s_{ik}(t)) & \text{if } \Delta_{ik}(t) > 0 \\ \Delta_{ik}(t)\,(1 + s_{ik}(t)) & \text{otherwise} \end{cases} \tag{3} \]

where

\[ \Delta_{ik}(t) = \frac{1}{2 N_i} \sum_{j \in n(i)} w_{ij}(t) \big( s_{jk}(t) + \epsilon - s_{ik}(t) \big) \tag{4} \]
Here, N_i is the number of agents with which agent i shares an edge, and ε is a noise term that reflects errors in the communication of opinions. This term is in each instance drawn at random from a normal distribution with a mean of zero and a standard deviation of σ. We conceptualize updating as the result of agents sensing the communicated opinions of neighbors; the noise term may therefore represent error in an agent sensing the opinions of other agents, error in agents communicating their opinions, or both. In their original study, Flache and Macy considered only scenarios without noise (σ = 0). Time in the model progresses in discrete steps. At each time step, each agent’s opinions are updated asynchronously in random order to avoid well-known artefacts that often accompany simultaneous agent updating.
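A single asynchronous update of one agent might be sketched as follows. This is our own illustrative implementation of the rule described above, not the repository code; with sigma = 0 it reduces to the noiseless case.

```python
import numpy as np

def update_agent(s, i, neighbors, sigma=0.0, rng=None):
    """Return agent i's updated K-vector of opinions.

    s: (N, K) array of opinions in [-1, 1]; neighbors: indices of i's
    network neighbors; sigma: std. dev. of the communication noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    new = s[i].copy()
    for k in range(s.shape[1]):
        raw = 0.0
        for j in neighbors:
            w_ij = 1.0 - np.mean(np.abs(s[j] - s[i]))  # biased assimilation
            eps = rng.normal(0.0, sigma)               # communication noise
            raw += w_ij * (s[j, k] + eps - s[i, k])
        raw /= 2.0 * len(neighbors)
        # smoothing: changes shrink as opinions approach the extremes,
        # which keeps noiseless dynamics inside [-1, 1]
        new[k] += raw * (1.0 - new[k]) if raw > 0 else raw * (1.0 + new[k])
    return new
```

Note that a "friend" pulls the focal agent's opinion toward its own, while an "enemy" (negative weight) pushes it away, toward the nearer extreme.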
It is worth noting a few immediate consequences of these update equations. First, agents with extreme opinions in dimension k will tend to make smaller changes to those opinions because of the smoothing factor in equation (3). In other words, extreme opinions will be harder to change. Second, there are two opposing factors that modulate the magnitude of influence between two agents. On the one hand, the edge weight is maximal when agents’ opinions are very similar. On the other hand, Δ_ik (which Flache and Macy refer to as the “raw” state change) increases the more agents’ opinions differ, presumably because larger distances provide larger room for change, with a mathematical form drawn from psychological models of reinforcement learning [34, 35]. Influence will therefore be maximal for agents who are an intermediate distance apart in opinion space. To facilitate an intuitive understanding of dyadic interactions, we illustrate the strength of influence on agent opinions in opinion space in Figure 1. We see that an agent with opinions at the origin of opinion space has only a moderate, attractive influence on other agents’ opinions. Agents at the corners of opinion space are barely influenced by a central opinion vector. When we consider the influence of an agent whose opinion vector lies nearer to a corner, we see that there is a clear line where relationships switch from friend to enemy. Due to the commingling of effects described above, there is a varied and nonmonotonic landscape of influence.
2.3. Measuring Polarization
There are a multitude of measures of polarization, and no single measure is widely agreed upon. We follow Flache and Macy and define polarization at time t to be the variance of all pairwise distances between agents:

\[ P(t) = \frac{2}{N(N-1)} \sum_{i < j} \big( d_{ij}(t) - \bar{d}(t) \big)^2 \tag{5} \]

where \bar{d}(t) is the mean of the pairwise distances at time t.
This metric has the advantage of simple interpretation. If half of all agents are in one corner of opinion space and the other half of agents are in the opposite corner, then the population is maximally polarized. As agent opinions spread to other corners and to other regions of opinion space, polarization will decrease. One disadvantage is that more general patterns of clustering, as would be detected using various machine learning clustering algorithms, will go undetected. In the final subsection of our results, we illustrate another limitation of this metric. Nonetheless, we generally find that it is a useful and suitable operationalization for the concept of polarization.
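Operationally, this measure is just the variance of the upper triangle of the pairwise-distance matrix. A sketch (our own naming):

```python
import numpy as np

def polarization(s):
    """Variance of all pairwise opinion distances (the FM measure).

    s: (N, K) array of opinions; distance is the mean absolute
    difference across the K dimensions.
    """
    s = np.asarray(s, dtype=float)
    d = np.mean(np.abs(s[:, None, :] - s[None, :, :]), axis=2)
    iu = np.triu_indices(len(s), k=1)
    return float(np.var(d[iu]))
```

A population in full consensus scores exactly 0, while a population split evenly between the two poles of one-dimensional opinion space approaches the maximum of 1.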
2.4. Network Structure
Our network structures are taken from Flache and Macy’s experiment 2. We begin with the connected caveman network structure introduced by Watts. Specifically, we consider a network of 100 agents, grouped into 20 fully connected clusters (caves) of five agents each. These caves are arranged on a circle, and for each cave, one edge is selected at random and rewired to connect to a random agent in the cave immediately to the right of the focal cave. This network has the appearance of tight-knit communities with weak ties to neighboring communities. The connected caveman network is highly clustered, meaning that if two agents are both neighbors of another single agent, there is a high probability that those two agents are also neighbors. However, relative path length is considerably greater in a connected caveman graph than in a totally random graph.
To assess the influence of adding long-range ties, we then consider a network for which 20 additional edges are added between randomly selected pairs of agents from across the entire network (Figure 2). Long-range ties are added at iteration 2000 to give the local communities (caves) time to yield enclaves of conformity that differ slightly from their neighboring enclaves, following Flache and Macy. The long-range ties reduce the average path length of the network while retaining high clustering, yielding networks with “small-world” properties (Watts).
(a) Connected caveman graph before long-range ties added
(b) After long-range ties added
Finally, as a way to control for the effect of simply adding additional ties, we also consider the connected caveman network with short-range ties. In this case, a randomly selected agent from each cave (who is not already connected to another cave) is connected to a random agent in the cave immediately to the right of the focal cave. Unless stated otherwise, all of our analyses were restricted to the connected caveman network with long-range ties, as this was the network structure found by Flache and Macy  to maximize polarization.
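The connected caveman construction and the addition of long-range ties can be sketched with plain adjacency sets (our own helper names; the repository code may differ):

```python
import random

def connected_caveman(n_caves=20, cave_size=5, rng=None):
    """Adjacency sets for a connected caveman graph: fully connected
    caves on a ring, with one within-cave edge per cave rewired to a
    random agent in the cave immediately to the right."""
    rng = rng or random.Random()
    adj = {v: set() for v in range(n_caves * cave_size)}
    for c in range(n_caves):
        cave = range(c * cave_size, (c + 1) * cave_size)
        for a in cave:                      # fully connect the cave
            for b in cave:
                if a < b:
                    adj[a].add(b)
                    adj[b].add(a)
        a, b = rng.sample(list(cave), 2)    # pick a within-cave edge...
        adj[a].discard(b)
        adj[b].discard(a)
        nxt = (c + 1) % n_caves             # ...and rewire it rightward
        target = rng.randrange(nxt * cave_size, (nxt + 1) * cave_size)
        adj[a].add(target)
        adj[target].add(a)
    return adj

def add_long_range_ties(adj, n_ties=20, rng=None):
    """Add n_ties new edges between uniformly random agent pairs."""
    rng = rng or random.Random()
    nodes = list(adj)
    added = 0
    while added < n_ties:
        a, b = rng.sample(nodes, 2)
        if b not in adj[a]:
            adj[a].add(b)
            adj[b].add(a)
            added += 1
    return adj
```

Since rewiring removes one edge per cave and adds one back, the 20-cave, 5-agent graph always has exactly 200 edges, and 220 after the long-range ties are added.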
2.5. Computational Experiments
Below, we present the results of our computational experiments. For all parameter combinations, we ran 100 simulations of the model, with data collected at the end of each run. The runs were always long enough for the system to settle into a relatively stable pattern (true equilibria were not always reached due to the stochasticity inherent in the model). By calculating the change in polarization over the final time steps of all simulations and finding it to be sufficiently small in every case, we confirmed that the run length was sufficient to achieve stable behavior across all simulations. We first replicate the major result of Flache and Macy that polarization increases with the addition of long-range ties but decreases with increasing cultural complexity, K. We then perform three sets of experiments:
(1) Quantifying variation. We take a closer look at the variation among simulation runs and explore path dependence on the road to polarization.
(2) Reducing extremism. We investigate values of S < 1, in which the initial distribution of opinions is less extreme.
(3) Adding noise. We investigate values of σ > 0, in which communication about opinions is noisy and influence is therefore more stochastic.
Unless stated otherwise, all simulations used a connected caveman network with random long-range ties. Model and analysis code is available on GitHub at https://github.com/mt-digital/polarization.
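Putting the pieces together, a single run can be sketched end to end. This is a compact re-implementation for illustration only, not the repository code; parameter names follow the text.

```python
import numpy as np

def run_model(adj, K=2, S=1.0, sigma=0.0, steps=100, seed=0):
    """One model run on a fixed network; returns final polarization.

    adj: dict mapping each agent index to its set of neighbor indices.
    """
    rng = np.random.default_rng(seed)
    agents = list(adj)
    N = len(agents)
    s = rng.uniform(-S, S, size=(N, K))        # initial opinions
    for _ in range(steps):
        rng.shuffle(agents)                    # asynchronous random order
        for i in agents:
            nbrs = list(adj[i])
            for k in range(K):
                raw = 0.0
                for j in nbrs:
                    w = 1.0 - np.mean(np.abs(s[j] - s[i]))
                    raw += w * (s[j, k] + rng.normal(0.0, sigma) - s[i, k])
                raw /= 2.0 * len(nbrs)
                # smoothing keeps noiseless opinions in [-1, 1]
                s[i, k] += raw * (1 - s[i, k]) if raw > 0 else raw * (1 + s[i, k])
    d = np.mean(np.abs(s[:, None, :] - s[None, :, :]), axis=2)
    return float(np.var(d[np.triu_indices(N, k=1)]))
```

On a small complete graph with moderate initial opinions and no noise, all weights are nonnegative, so the run should collapse to consensus and return a polarization near zero.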
In their original analysis of the FM model, Flache and Macy found two main causes of polarization. First, random long-range ties decreased the average path length of the network and increased the average polarization of the system across trials. Second, average polarization across trials decreased with increasing cultural complexity, K. We replicated these results, as illustrated in Figure 3. The remainder of this section is dedicated to novel results. The first three subsections show results of new analyses of the original FM model. The final subsection shows our analysis of the FM model modified to include communication noise.
3.1. Polarization Is Probabilistic and Path Dependent
Averages do not carry information about variation between trials. Here, we explore that variation. Figure 4 shows the polarization for each of the individual trials averaged in Figure 3. We see substantial variation around those averages: although polarization was low on average for large K, there are still individual trials for which polarization was high across all three network structures.
(a) Non-random connected caveman network
(b) Randomized connected caveman network with long-range random ties added at iteration 2000
In addition to the demonstrated influence of the overall network structure, three possible sources of variation in system polarization are (1) the initial distribution of agent opinions, (2) the initial distribution of how agent opinions are clustered on the network, and (3) the update path—the order in which weights or agent opinions are updated. We performed additional analyses to investigate the contributions from each of these three factors, focusing on the initial distribution of agent opinions. We studied the nonrandom connected caveman network so as to keep network structure constant across trials, and for simplicity, we restricted this analysis to a single value of K. Due to the nature of our polarization measure, at initialization the system will have some nonzero degree of polarization, which will vary depending on the random draws of agents’ initial opinions. Over 100 trials, we compared the initial polarization of the system to the final polarization. We found a significant, if relatively small, correlation between the initial and final polarization of agent opinions (Figure 5): the level of initial polarization accounts for only about 14% of the variation in final polarizations. It seems, then, that the initial clustering of agent opinions and the stochasticity of the update path account for a large portion of the variability. In order to delineate the contributions of these two remaining factors to the overall variability in polarization, we ran 100 replicate trials with the initial conditions taken from the previously discussed trials with the lowest and highest initial polarization. In other words, for each of two conditions, we ran replicate simulations with exactly the same starting conditions across trials. Any variation in outcomes must therefore be due to stochasticity in the update paths. For example, if two opposing extremists influence a disjoint set of moderates disproportionately often, polarization will increase.
The results are shown in Figure 6. Final polarization was clearly biased by the initial polarization (average final polarization across trials was 0.66 for the larger initial polarization and 0.29 for the smaller), but showed considerable variability. In other words, a large proportion of the variation between trials was due not to stochasticity in the initial configuration of the population, but to stochasticity in the transient dynamics of agent interactions.
3.2. The Absence of Initially Extreme Opinions Reduces Polarization
Next, we extend our analysis of initial conditions by studying the breadth of opinions initially present in the population. Specifically, initial opinions were drawn from the uniform distribution U(-S, S). Figures 7 and 8 show the mean and median polarization of the population as a function of S, for several values of K. In general, the average final polarization decreased with smaller S for all values of K. The lines are not perfectly smooth due to the large variation in outcomes described in the previous section (see Figure 9).
We again examined the within-condition variation in final polarization (Figure 9). Even when the average polarization was very small, we nevertheless saw instances of strongly polarized outcomes across all values of S and K. For small values of S, much more polarization occurred with small K. This further highlights the fact that initial conditions, in conjunction with the cultural complexity, bias the system towards larger or smaller levels of polarization but do not eliminate the possibility of either conformity or extreme polarization.
3.3. The Meaning of Polarization in High-Dimensional Opinion Space
Clearly, extreme positions are important in the FM model. Extremists are more stubborn (and therefore more influential) than centrists due to smoothing. Our analysis indicates that under a wide range of conditions, all opinions are likely to end up at extreme values. Indeed, the only stable states of the model are complete consensus, which can be at any point in opinion space in the absence of noise, or for all opinions to be at extreme values. This brings us back to a key result of the FM model, which is that increased cultural complexity, K, decreases polarization. Recall that polarization is measured as the variance among distances between agent opinions. To what extent is this decrease in polarization with increased cultural complexity driven by the fact that, for larger K, there are simply more “corners” (vertices of extreme opinion values) for agent opinions to settle on?
We investigated this question by comparing polarization emerging from the dynamics of the FM model with polarization that occurs when agents are artificially placed on a random vertex of the K-dimensional opinion hypercube. We estimated the polarization for this combinatorial condition via Monte Carlo sampling with 100 agents and 1000 trials for each K. In the Appendix, we derive a formal proof that the combinatorial polarization equals 1/K exactly in the limit as the number of agents grows large.
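The Monte Carlo estimate is straightforward to reproduce (a sketch with our own naming; fewer trials than the 1000 reported already reveal the 1/K trend):

```python
import numpy as np

def hypercube_polarization(K, n_agents=100, n_trials=200, seed=0):
    """Mean variance of pairwise distances when agents occupy uniformly
    random vertices of the K-dimensional opinion hypercube {-1, 1}^K."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(n_trials):
        s = rng.choice([-1.0, 1.0], size=(n_agents, K))
        # pairwise mean absolute distances between all agents
        d = np.mean(np.abs(s[:, None, :] - s[None, :, :]), axis=2)
        results.append(np.var(d[np.triu_indices(n_agents, k=1)]))
    return float(np.mean(results))
```

Intuitively, each opinion dimension contributes an absolute difference of 0 or 2 with equal probability, so the distance between two random vertices has mean 1 and variance 1/K, which is what the estimate converges to for large populations.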
When we compare the combinatorial result to the FM model results, we find that the observed decrease in polarization with increased K follows the combinatorial result very closely (Figure 10). The connected caveman condition results in a lower polarization, on average, than the combinatorial value for all K that we tested. The random long-range condition results in an average polarization roughly equal to the combinatorial value for the smallest K, higher than it for intermediate K, and lower than it for the largest K we tested. The source of this shift from above-combinatorial to below-combinatorial polarization is not clear, but it is an interesting avenue for future work.
3.4. Noisy Communication Increases Polarization, Particularly in the Absence of Initially Extreme Opinions
Up to this point, we have assumed that agents accurately express their own opinions and accurately receive information concerning the opinions of others. As this assumption is unlikely to fully hold in most cases of human interaction, it is important to assess the model’s robustness to noisy communication. To do this, we introduced random error into the opinion update equation, so that every cultural feature communication channel, for every connected dyad, was modulated by a noise term, ε, drawn from a normal distribution with mean 0 and standard deviation σ. We call σ the “noise level.” We varied the noise level from 0 to 0.2 in increments of 0.02. For each of these noise levels, we also varied S from 0.5 to 1.0 in steps of 0.05, for a total of 121 parameter pairs for each K. Note that we did not explicitly bound opinion components in the presence of noise. This led us to discard 19 of the 60,500 runs due to runaway opinions that diverged to infinity, which occurred only for the highest noise levels (0.18 or 0.2). Most parameter settings had at most one discarded run, with one setting (K = 5, S = 0.95, noise level 0.2) having three discarded runs, lowering the number of samples from 100 to 97 for that setting. The lack of explicit bounding had no effect on the polarization outcomes of nondivergent model runs, as polarization was less than or equal to 1.0 for all of them.
These experiments reveal an interesting pattern of results. A sufficiently large amount of noise produced high levels of polarization for low values of S, which never produced polarization in the absence of noise. Indeed, there appears to be a phase transition point for the noise level under low S, below which the system collapses to complete conformity and above which we see high levels of polarization (Figure 11). Across the values of K we tested, this threshold appeared at roughly the same noise level, below which we never saw any polarization for low S (Figure 12). As S increases, however, the system behavior becomes less sensitive to noise, appearing to be completely insensitive to noise as S approaches 1.
Even though polarization is rare at moderate noise levels, extremism is not. A noise level above 0.1 was required to reliably drive the system to polarization in our simulations, but lower noise levels led to consensus around an extreme location in opinion space rather than a centrist position. We infer this because the average agent distance from the center increases to its maximum of 1.0 at noise levels well below the polarization threshold (Figure 13). Thus, we obtain the interesting result that even small amounts of communication noise can move the population to extremist positions.
Figures 11 and 13 also illustrate a curious interaction between the noise level and initial extremism, S. For smaller S, we observe clear phase transitions from centrist conformity to extremist conformity to polarization. For larger S, the population’s responses are less clearly delineated. To help explain this, we present illustrations of the spatiotemporal dynamics of the model for exemplar trials. Consider first a case of very low initial extremism (Figure 14). In the absence of noise, the system collapses around the center of opinion space and quickly reaches full consensus (Figure 14(a)). At the other extreme, under high levels of noise, agents reach a near consensus early on and remain there until iteration 2000, when random long-range ties are added. At this point, agents are exposed to individuals with very slightly different sets of opinions, and those differences are amplified by the noise, leading to repulsion. This is sufficient to jolt the system away from conformity and into opposing camps moving towards opposing corners (Figure 14(c)).
For intermediate noise levels, we found that most simulations end in extreme consensus. That is, all opinions were at the extremes rather than closer to zero, but these opinions were universally shared, so that final polarization was zero. One such trial is shown in Figure 14(b). This occurs because the noise is sufficient to move the population toward the extremes (from which it is difficult to return to center), but agents remain sufficiently clustered that all forces remain attractive rather than repulsive.
When initial opinions are drawn from the full range of possibilities (S = 1), the system always achieves some degree of polarization. Because noise only serves to increase the likelihood of extreme opinions, this condition is unaffected by noise. Typical cases are shown in Figure 15. The behavior before the addition of long-range ties is similar in all three cases: each cave reaches a local consensus, and the network of caves reaches a stable configuration. Some of the caves find consensus values at the corners. When random ties are added, the stable configuration is broken, and agents are pulled towards one of the four corners, where some caves have already been stably established. The caves in the corners do not move. Recall that a key assumption of the FM model is that extremist opinions influence centrist opinions more than centrists influence extremists. The noise is not strong enough to move extremists from extreme positions. In other words, in the presence of extreme opinions, network structure, not noise, dominates the dynamics. We extend this intuition to higher dimensions of opinion space using parallel coordinate plots, visualizing time series of opinion dynamics for larger K (Figure 16).
(a) Moderate noise, extreme consensus
(b) Moderate noise, extreme polarization
Humans are the quintessential cultural species. Our instinct to learn from others is a key reason for our domination of the planet [38, 39]. An underappreciated component of cultural learning concerns exacerbating differences and rejecting opinions when individuals are unlikely to share one’s current norms and beliefs. When those differences occur within a community, they can lead to discord. Many of us live in multicultural societies requiring cooperation and common ground, and so it is natural to ask when we should expect polarization and whether there is anything we can do about it. Any suggestions based on our modeling efforts here should of course be compared with empirical studies. We hope these results stimulate further empirical work to understand when and why polarization emerges in real-world situations. One such opportunity for future work is to connect our findings to the political science literature on polarization, especially in relation to communication. If agents had different roles, such as elite agents (politicians and media) and common agents, we could model the effects of ideologically biased news on political polarization [28, 41]. Our results show that in the presence of sufficiently large communication noise and small-world networks, a situation we are arguably in today, a state of polarization is the only stable state (Figures 14–16). It is interesting to consider this in light of one recent analysis suggesting that the United States Constitution was designed not just to accommodate polarization but to foster it for the sake of stability.
We have highlighted the stochastic nature of the system being modeled. A key conclusion is that empirical observations of opinions on social networks may, when taken on a case-by-case basis, exhibit trends that bear little resemblance to those predicted by the model. This is not necessarily an invalidation of the model, but merely a consequence of the variability inherent in complex systems. That said, given enough data, key trends should emerge. We have confirmed Flache and Macy’s result that long-range ties increase polarization. As such, we might emphasize the importance of local communities being allowed to reach their own consensus. We have shown that decreasing initial extremism can reduce polarization, as one might expect. Achieving consensus in a community relies heavily on the absence of opinions at the extremes. However, this result is quite sensitive to noise in communication. A little bit of noise can shift consensus from centrist or ambivalent positions to more extreme views, while more noise can lead to polarization. Even if polarization is to be avoided, what about the intermediate case of “extreme consensus”? While it may be natural to view extreme opinions as undesirable, an alternative perspective is that they represent a more stable system of cultural coherence. Note that these findings contradict computational and mathematical studies of the bounded confidence model under the influence of noise, in which sufficient noise breaks polarization and leads to disordered opinion spreading [43–45]. This is because in the bounded confidence model, agents that are too far from one another do not interact at all, whereas in the FM model, connected agents always interact, and the further apart they are in opinion space, the more strongly they repel one another.
We confirmed Flache and Macy’s result that increased “cultural complexity”—the number of opinions that are important to individuals in assessing their similarities and differences with others—decreased overall polarization. We also showed that this result stems directly from an increase in the number of permutations of extreme opinions individuals can hold when there are more items on which one can hold opinions. This might be viewed as a flaw in the metric of polarization used here. Alternatively, we believe it is reasonable to posit that a community with a wider diversity of views should be considered less polarized than a community with only a few suites of clustered opinions. In any case, this finding highlights the importance of a thorough understanding of one’s distance measure when dealing with multidimensional opinions. Our analysis may in fact cast doubt on the interpretation by Flache and Macy that cultural complexity decreases opinion polarization, if one also rejects the interpretation that adding arbitrary traits on which actors are indifferent should reduce their opinion distance.
As noted, the model we have studied is a simplified abstraction and does not include many details that are important to the empirical reality of opinion dynamics. In general, theoretical modeling work should start simple and gradually add heterogeneity as the simpler versions of the system in question become fully described. Future work should explore these sources of heterogeneity. First, we did not distinguish between private opinions and public productions representing those opinions. Our operationalization of communication noise could be interpreted as a modulation of private opinion, but communication noise could also be interpreted as misunderstanding of perfectly reproduced, publicly voiced opinions. People often communicate public opinions that differ from their private opinions when incentives for the parties involved are not aligned [46–48]. Second, we ignored the structural influence of explicit identity groups. It could be argued that clustering of agent opinions implicitly defines an identity group. For example, one study measured network autocorrelation to explain why people’s preferences cluster together. This data-driven approach was offered as an attempt to explain arbitrary opinion clustering, as indicated by the paper’s title, “Why do liberals drink lattes?”. Nevertheless, explicit identification with groups and roles influences human behavior far beyond homophilic clustering [49–51]. Third, we ignored individual differences in how individuals influence and are influenced. Some people may be stubborn while others are easily swayed. Some prestigious or charismatic individuals may have outsized influence while others are ineffective at communicating their opinions. Relatedly, individuals may also vary in their confidence in their opinions, which will influence the extent of their mutability and persuasion. The assumption that as agents become more extreme, their opinions become more stubborn, as formalized in (3), may not always hold.
Indeed, our work highlights the need for additional empirical work on how individuals alter their opinions as a function of how extreme those opinions are. Finally, the social networks used in our model are simplistic in both dynamics and structure. Ties in many real-world networks change with greater frequency than we modeled, providing new opportunities for social influence. Moreover, interactions and opinions are contextual. Individuals are embedded in multilayered social networks, in which the dynamics of opinions may be considerably more nuanced than indicated by our relatively static, single-layer network [30, 52].
In our study of the FM model, we have found rich behaviors and theoretical lessons for understanding opinion dynamics. This work highlights the potential for complexity even in a very simple model of individual behavior, because network structure provides for path-dependent effects and can be further influenced by initial conditions and noise. Our analytic approach highlights the value of systematic investigation of a model’s explicit and tacit assumptions.
Proof that Polarization Scales with $1/K$
We hypothesized that the decrease in polarization with increasing $K$ observed in simulations of the FM model was driven by an increase in the number of permutations of binary vectors of length $K$, in which each element was $-1$ or $1$. We supported this hypothesis in the main text with simulations in which agents were randomly initialized at such extreme positions in opinion space. Here, we derive a formal proof that polarization in the FM model scales with $1/K$ if we assume that agents are randomly assigned a vector of “extreme” opinions, such that $s_{ik} \in \{-1, 1\}$ for all $k \in \{1, \dots, K\}$. To do this, we exactly calculate the polarization of a population where each agent occupies one of the corners of opinion space with $K$ cultural features.
Recall that polarization is defined as the variance in pairwise distances between all agents. We define the combinatorial polarization, $P_c$, as the polarization that arises from randomly placing each agent at one of the corners of opinion space with $K$ cultural features, which is a $K$-hypercube, denoted $Q_K$. “Corners” of opinion space are simply vertices in the graph of $Q_K$. We computed this value numerically and found it tracks closely to $1/K$ (see Figure 10). Here, we demonstrate that $P_c = 1/K$ exactly in the large-$N$ limit. To calculate $P_c$, we need three elements. First, we need to calculate the distance between pairs of agents at different corners of $Q_K$. Second, we must count the number of agent pairs separated by the distance from one corner to another. We do this by first counting the number of subcubes of dimension $m$, or $m$-subcubes. Then, we count the number of maximally separated pairs in an $m$-subcube. Finally, we calculate the distance between maximally separated, or antipodal, pairs of agents in an $m$-subcube. We can then calculate the expected value of pairwise distance, $\langle d \rangle$, and the expected square of pairwise distance, $\langle d^2 \rangle$, from which we will have the combinatorial polarization
$$P_c = \langle d^2 \rangle - \langle d \rangle^2. \tag{A.1}$$
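As a concrete reading of this definition, the following sketch computes polarization from an $N \times K$ array of opinions. It assumes the FM convention that the distance between two agents is the mean absolute difference across the $K$ opinion dimensions; the function and variable names are ours:

```python
import numpy as np

def polarization(opinions):
    """Polarization = variance of pairwise distances between all agents.

    opinions: (N, K) array; row i holds agent i's K opinions in [-1, 1].
    Pairwise distance is the mean absolute difference over the K features.
    """
    n = opinions.shape[0]
    dists = [
        np.mean(np.abs(opinions[i] - opinions[j]))
        for i in range(n)
        for j in range(i + 1, n)
    ]
    return np.var(dists)
```

For example, four agents split evenly between the two corners of a one-dimensional opinion space ($K = 1$) give pairwise distances $(0, 2, 2, 2, 2, 0)$ and hence polarization $8/9$, while any population in perfect consensus gives polarization 0.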
We will show that $P_c = 1/K$ by showing that $\langle d \rangle = 1$ and $\langle d^2 \rangle = 1 + 1/K$. Before we do that, we will derive functions to help us count the number of pairs separated by a particular distance and to calculate distances between vertices on subcubes $Q_m \subseteq Q_K$. First, we denote the total number of pairwise distances as $N_d = \binom{N}{2} \approx N^2/2$, where $N$ is the number of agents. The number of $m$-subcubes is
$$N_{Q_m} = 2^{K-m} \binom{K}{m}. \tag{A.2}$$
This results from the fact that at each of the $2^K$ vertices of $Q_K$, $m$-subcubes can be created by choosing $m$ of the $K$ nodes adjacent to the vertex. This gives us $2^K \binom{K}{m}$ subcubes. This overcounts, since each generated subcube was generated once for each of its $2^m$ vertices. So, we must divide by a factor of $2^m$, giving us the expression in (A.2). Within $Q_m$, the number of pairwise distances where agents occupy antipodal vertices is
$$n_m = 2^{m-1} \left(\frac{N}{2^K}\right)^2. \tag{A.3}$$
There are $2^{m-1}$ pairs of antipodal vertices in $Q_m$. In the large-$N$ limit, agents are distributed in equal number to each vertex of $Q_K$. Then, the number of agents at a single vertex is $N/2^K$, so the number of pairwise distances between any two antipodal vertices is $(N/2^K)^2$. The total number of antipodal pairs across all $m$-subcubes is then
$$N_m = N_{Q_m} n_m = 2^{K-m} \binom{K}{m}\, 2^{m-1} \left(\frac{N}{2^K}\right)^2 = 2^{K-1} \binom{K}{m} \left(\frac{N}{2^K}\right)^2. \tag{A.4}$$
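This counting can be verified by brute force for small $K$: the number of vertex pairs of $Q_K$ that differ in exactly $m$ coordinates should equal $2^{K-m}\binom{K}{m}\,2^{m-1} = 2^{K-1}\binom{K}{m}$, the per-vertex-pair factor of the count above. A sketch (the function name is ours):

```python
from itertools import product, combinations
from math import comb

def antipodal_pair_count(K, m):
    """Count pairs of vertices of the K-hypercube {-1, 1}^K that differ
    in exactly m coordinates, i.e., antipodal pairs of some m-subcube."""
    vertices = list(product([-1, 1], repeat=K))
    return sum(
        1
        for u, v in combinations(vertices, 2)
        if sum(a != b for a, b in zip(u, v)) == m
    )

# Brute-force count agrees with the closed form 2^(K-1) * C(K, m):
for K in range(1, 7):
    for m in range(1, K + 1):
        assert antipodal_pair_count(K, m) == 2 ** (K - 1) * comb(K, m)
```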
Finally, the distance between agent opinions $s_i$ and $s_j$ at antipodal vertices of $Q_m$ is $d_m = 2m/K$, since any antipodal vertices of $Q_m$ share $K - m$ opinion coordinates, and the maximum magnitude of difference on a single opinion dimension is 2.
With these quantities, we can write the expected value of pairwise distance
$$\langle d \rangle = \frac{1}{N_d} \sum_{m=1}^{K} N_m d_m = \frac{2}{N^2} \sum_{m=1}^{K} 2^{K-1} \binom{K}{m} \left(\frac{N}{2^K}\right)^2 \frac{2m}{K}. \tag{A.5}$$
Simplifying and taking $N \to \infty$, this becomes
$$\langle d \rangle = \frac{2^{1-K}}{K} \sum_{m=1}^{K} m \binom{K}{m}. \tag{A.6}$$
Using the identity $\sum_{m=0}^{K} m \binom{K}{m} = K 2^{K-1}$, we find $\langle d \rangle = 1$. Calculating $\langle d^2 \rangle$ proceeds similarly, beginning with
$$\langle d^2 \rangle = \frac{1}{N_d} \sum_{m=1}^{K} N_m d_m^2 = \frac{2}{N^2} \sum_{m=1}^{K} 2^{K-1} \binom{K}{m} \left(\frac{N}{2^K}\right)^2 \left(\frac{2m}{K}\right)^2. \tag{A.7}$$
Simplifying and taking $N \to \infty$, this becomes
$$\langle d^2 \rangle = \frac{2^{2-K}}{K^2} \sum_{m=1}^{K} m^2 \binom{K}{m}. \tag{A.8}$$
With the identity $\sum_{m=0}^{K} m^2 \binom{K}{m} = K(K+1) 2^{K-2}$, we find $\langle d^2 \rangle = (K+1)/K = 1 + 1/K$. So,
$$P_c = \langle d^2 \rangle - \langle d \rangle^2 = 1 + \frac{1}{K} - 1 = \frac{1}{K}. \tag{A.9}$$
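As a numerical check on this result, the combinatorial polarization can be computed exactly for small $K$ by spreading agents uniformly over the $2^K$ corners of opinion space (the large-$N$ limit, in which unordered agent pairs are distributed like ordered pairs of corners) and taking the variance of pairwise distances. The sketch below assumes the FM distance convention (mean absolute difference over the $K$ features); the function name is ours:

```python
import numpy as np
from itertools import product

def combinatorial_polarization(K):
    """Exact variance of pairwise distance when agents occupy the
    2^K corners of opinion space in equal numbers (large-N limit)."""
    corners = np.array(list(product([-1, 1], repeat=K)), dtype=float)
    # Distance matrix over all ordered pairs of corners:
    # mean absolute difference across the K opinion dimensions.
    d = np.abs(corners[:, None, :] - corners[None, :, :]).mean(axis=2)
    return d.var()

# Matches the derived value 1/K:
for K in range(1, 8):
    assert abs(combinatorial_polarization(K) - 1 / K) < 1e-12
```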
Data Availability
Data used for our analyses are available for download (14 GB) at http://mt.digital/static/data/polarization_v0.1-data.tar.
Conflicts of Interest
There are no conflicts of interest for either of the authors.
Acknowledgments
Computational experiments were performed on the MERCED computing cluster, which is supported by the National Science Foundation (Grant no. ACI-1429783).
K. M. Carley, “Group stability: a socio-cognitive approach,” Advances in Group Processes, vol. 7, no. 1, p. 44, 1990.
Pew Research Center, The Partisan Divide on Political Values Grows Even Wider, Pew Research Center, 2017.
P. E. Smaldino, “Models are stupid, and we need more of them,” in Computational Models in Social Psychology, R. R. Vallacher, A. Nowak, and S. J. Read, Eds., Psychology Press, 2017.
Pew Research Center, News Use Across Social Media Platforms 2016, Pew Research Center, 2016.
Pew Research Center, Anger Beat Love in Facebook Reactions to Lawmaker Posts after 2016 Election, Pew Research Center, 2018.
Pew Research Center, Political Polarization and Media Habits, Internet Project, 2014.
H. Clark, Using Language, Cambridge University Press, Cambridge, UK, 5th edition, 1996.
R. Hegselmann and U. Krause, “Opinion dynamics and bounded confidence models, analysis, and simulation,” Journal of Artificial Societies and Social Simulation, vol. 5, no. 3, 2002.
R. A. Rescorla and A. R. Wagner, “A theory of Pavlovian conditioning: variations on the effectiveness of reinforcement and non-reinforcement,” in Classical Conditioning II: Current Research and Theory, pp. 64–99, Appleton-Century-Crofts, New York, NY, USA, 1972.
R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, MIT Press, Cambridge, MA, USA, 1998.
J. Henrich, The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter, Princeton University Press, 2015.
K. N. Laland, Darwin’s Unfinished Symphony: How Culture Made the Human Mind, Princeton University Press, 2017.
J. Sides and D. J. Hopkins, Eds., Political Polarization in American Politics, Bloomsbury, New York, NY, USA, 2015.
F. Barth, Ethnic Groups and Boundaries: The Social Organization of Culture Difference, Little, Brown, 1969.