Abstract

How can individual acts amount to coherent systems of interaction? In this paper, we attempt to answer this key question by suggesting that there is a place for cities in the way we coordinate seemingly chaotic decisions. We look into the elementary processes of social interaction exploring a particular concept, “social entropy,” or how social systems deal with uncertainty and unpredictability in the transition from individual actions to systems of interaction. Examining possibilities that (i) actions rely on informational differences latent in their environments and that (ii) the city itself is an information environment to actions, we propose that (iii) space becomes a form of creating differences in the probabilities of interaction. We investigate this process through simulations of distinct material scenarios, to find that space is a necessary but not sufficient condition for the reduction of entropy. Finally, we suggest that states and fluctuations of entropy are a vital part of social reproduction and reveal a deep connection between social, informational, and spatial systems.

1. Introduction: Challenges for Social Reproduction

Among a number of questions that might linger in the sociological imagination [1], one refers to social systems in a particularly acute way: how do we put our actions together in a way to create a society? How can individual actions develop into something like a working, coherent system of interactions? These questions feel more crucial if we think of the seemingly increasing challenges faced by contemporary societies. Part of these challenges has to do with the growing profusion of information and messages circulating within social systems (see [2]). For instance, computational processing power doubles every eighteen months, while the volume of data doubles every twelve months [3, 4], making the processing of available information impossible in the long run. Interestingly, Luhmann [5] sees this problem bringing increasing difficulties for dealing with possibilities of action and how we put them together as interaction systems, a challenge of selection that seems more and more imposed on our daily experiences. According to Luhmann, a key question faced by social systems is as follows: “among the possibilities of information and interaction, which ones will be actualized?” Considering that we have more possibilities than we can know about, the very way we build our interactions depends on this selection. Crucially, having more options ends up bringing more difficulty in anticipating what will happen next. For all intents and purposes, more channels of information and possibilities of action mean less certainty and predictability.

This apparent difficulty has a name: entropy, a measure of the probability of events and of how hard it is to recognize order in the face of unpredictability and uncertainty. In fact, social systems face entropy all the time. Our daily actions are riddled with uncertainty, from the daily choices we have to make to the way they will play out once they merge with the actions of other people. But seeing that requires a somewhat unusual perspective, one aware of the conditions through which choices are made, conditions that are prior to our actions and have to be there so that these possibilities may be known and decided upon.

We suggest that among all systems of communication created to give us information about what is available out there, so that we can have counterparts in interaction, there is a crucial, rather old one: the city. We will propose that possibilities of interaction are presented to us and to some extent preselected for us by our own environment and that this environment is deeply spatial, shaped in the form of cities. Cities have historically been a key part of the process of information, selection, and actualization of actions. We discuss in this article how this is the case: how cities play a key role in how seemingly chaotic decisions amount to coherent systems of interaction. In short, we are interested in knowing how social organization emerges once its material and informational conditions are taken simultaneously into account, namely, in the form of an urban environment.

Essentially, we propose to track the passage from the possibilities of action surrounding our decisions to the performance of action in our daily lives, and the variation of entropy that these passages entail. We suggest that looking carefully into the place of cities in those passages will clarify the constant ordering/disordering of action, helping to explain social reproduction more clearly. We do so arguing that city spaces have the effect of reducing uncertainty and unpredictability in the transition from individual actions to ensembles of interaction. As social agents, we would unconsciously engage in processes of increasing and decreasing entropy whenever we perform socially in the city, that is, whenever our actions have effects on the world. Since the organization of actions in our daily lives transcends local contexts and is largely beyond observation, we designed simulations in silico to try to clarify the effect of cities. These computational experiments are intended to explore how, every time we go about the city, we participate in the fragile but recursive emergence of interaction systems.

Once the problem is set, Section 2 discusses entropy, concentrating on its developments in information and social theories in order to reach a little explored instance of social entropy active in the coordination of action. Section 3 establishes a presence for cities as an information environment, while Section 4 advances the city as a frame of reference for actions and how they coevolve into systems. In turn, Section 5 hypothesizes how the reduction of entropy mediated by the city becomes an essential part of the self-organization of action. We examine this process in Section 6, proposing an agent based model (ABM) able to assess the role of different social and spatial factors in entropy, such as personal orientations and an extensive space that creates friction for mobile agents who create and retrieve information from their environment. In the final section, we discuss what our theory and model say about the role of cities in social interaction.

2. What Is Social Entropy?

The concept of entropy, a term derived from the Greek tropē, “transformation,” and introduced by the German physicist Rudolf Clausius in 1865, originated in the context of physical systems, specifically the second law of thermodynamics, which describes the irreversible process of energy dissipation. It was advanced by Ludwig Boltzmann’s connection of entropy and probability in 1877, addressing nonequilibrium processes and linking the macroscopic properties of a system to its microscopic disorder. There are more possible disordered states than ordered ones, so systems are likely to move toward disorder. Among the infinite variety of dynamic states, entropy is a measure of the propensity for certain states. It characterizes each macroscopic state in terms of the number of ways of achieving this state [6].

However, entropy is not an exclusive property of physical systems. The concept was translated into “human affairs” by Shannon [7], in the context of information transmission. Shannon relates entropy to the probability of observing certain events over time and to the tendency of entities to become more variable, leading to uncertainty and unpredictability, as we shall see below. In turn, a number of theorists have seen entropy in societies as well. They have explored Shannon’s measure of information since the 1960s in a field that came to be known as Social Entropy Theory (SET) (e.g., [8, 9]). Charvat et al. [10, 11] proposed notions like “semantic entropy” in decision making and “entropy of behaviour” to measure the homogeneity of needs and the dependence between systems, while Horan [12] developed a measure of the proportional reduction of uncertainty. Threats to the social system come not only from the accumulation of internally produced entropy, but also from the external environment. Entropy was essentially seen as the opposite of information, something to be controlled by system boundaries (see [13]).

Certain arguments are particularly interesting to our approach. Galtung [14] analyzed entropy at micro- and macroscopic levels using two basic types, actor entropy and interaction entropy. Strong forces would push a social system into a pendulum, oscillating between low- and high-entropy states. The overall outcome of actions is not random, since individual agents would act within generalized roles, follow generalized norms, and pursue generalized goals. Social norms, rules, culture, language, and other “steering media” of a social system [15] would be constraints on behaviour and keep it from reaching maximum entropy. The degree of entropy would fluctuate cyclically or noncyclically, rather than maintain a constant level. Bailey [16, 17] advanced this idea by posing a question: how can social systems increase their organizational complexity or decrease their entropy over time? He argues that “society does face a number of recurring needs and thus has recurring goals, which are met through recurring actions. Inasmuch as these actions are replicated, they are said to be orderly […]. It is this degree of order, which results from the patterned replication of social action over time that results in a degree of entropy less than maximum. If the constraints on action were removed, it is reasonable to expect that society would not decrease its entropy levels but might in fact move in the direction of increased entropy levels” [16, page 127]. We shall see through our model that the reduction of entropy does not have to depend on routine or constraints over action as Galtung and Bailey argue but may emerge out of the very coordination of actions between agents pursuing changing goals, mediated by an informational space.

Other approaches take societies as collections of agents interacting in space-time within geographic boundaries [18]. Bailey [19, 20] also related entropy to measures of city size, population size, and territory. Features from social cybernetics were also introduced, such as context-dependency and the self-reflexivity of agents and applied to issues like the division of labour, problematic communication, adaptation in societies of growing complexity, the role of information in decision making, and the reduction of entropy [21, 22]. However, such conceptualizations have a thin spatiality. In them, space is mostly a background, not a part of the problem of entropy or its resolution in time. Cities are virtually absent as part of the environment of social systems and as complex systems in their own right. In turn, we will not find much support in spatial disciplines either: they have mostly ignored the conditions of production of actions and, therefore, the place of cities in it. Beyond other means to stimulate social organization usually seen in social theory (see [23]), our aim is to deal with space as an active environmental condition and a steering medium in the coordination of action. In order to include cities systemically, we suggest getting back to our initial question: how can seemingly unpredictable individual acts amount to interconnected systems of action?

We find a way to answer this question in Luhmann [5]: a social system faces a number of possibilities of action larger than it can convert into actual actions, which in turn imposes the need for selection. Luhmann understands societies as a network of interconnected subsystems where events are communicatively formed. The structural elements that enable these subsystems are fragile: the fleeting but successive moments of selection and connection between actions. The reproduction of a social system requires the ability to produce these connections. In societies with growing amounts of information and agency, the number of possible interactions grows exponentially. Accordingly, we have a growing difficulty in knowing these possibilities and choosing among them. We have an increase in a type of complexity that Luhmann calls “unstructured”: an entropic informational complexity that may at any moment lead to semantic and organizational loss. Rather than describing the actual course of events, this is an approach designed to throw light on counterfactual possibilities and risks that social systems face all the time. Luhmann uncovers the effort we make every day, choosing agents and activities to interact with in order to reduce the risk of not performing at all [5, page 287].

In order to fully appreciate this scenario of social systems prone to entropy, let us get back to the pioneering definition of Shannon [7]. Shannon defined entropy in the context of communication. The fundamental problem of communication is that of reproducing at one point a message selected at another point, either exactly or approximately. Messages have meanings, correlated with certain physical or conceptual entities according to some system. If the number of messages in a set is finite, entropy is a measure of the distribution of probabilities that certain messages will occur.
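Formally, for a finite set of messages occurring with probabilities $p_1, \dots, p_n$, Shannon’s measure takes the form

$$H = -\sum_{i=1}^{n} p_i \log_2 p_i,$$

which is maximal when all messages are equally probable and decreases as some messages become more likely than others.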

An example will help clarify what this means as shown in the following:

Reduction of Entropy in the Construction of Language [7]

(1) Zero-order approximation (symbols independent and equiprobable): XFOML RXKHRJFFJUJ ZLPWCFWKCYJ FFJEYVKCQSGHYD QPAAMKBZAACIBZLHJQD.

(2) First-order approximation (symbols independent but with frequencies of English text): OCRO HLI RGWR NMIELWIS EU LL NBNESEBYA TH EEI ALHENHTTPA OOBTTVA NAH BRL.

(3) Second-order approximation (digram structure as in English): ON IE ANTSOUTINYS ARE T INCTORE ST BE S DEAMY ACHIN D ILONASIVE TUCOOWE AT TEASONARE FUSO TIZIN ANDY TOBE SEACE CTISBE.

(4) Third-order approximation (trigram structure as in English): IN NO IST LAT WHEY CRATICT FROURE BIRS GROCID PONDENOME OF DEMONSTURES OF THE REPTAGIN IS REGOACTIONA OF CRE.

(5) First-order word approximation. Rather than continue with tetragram, …, n-gram structure, it is easier and better to jump at this point to word units. Here words are chosen independently but with their appropriate frequencies: REPRESENTING AND SPEEDILY IS AN GOOD APT OR COME CAN DIFFERENT NATURAL HERE HE THE A IN CAME THE TO OF TO EXPERT GRAY COME TO FURNISHES THE LINE MESSAGE HAD BE THESE.

(6) Second-order word approximation. The word transition probabilities are correct but no further structure is included: THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED.

In Shannon’s example, line (1) shows independent letters with the same probability of occurrence. In the first-order approximation (line (2)), letters are independent of each other and have the same probability of appearing in sentences that they have in English. In line (3), we add the frequency with which letters follow other letters in pairs (digrams) in English. In the third-order approximation (line (4)), letters are randomly ordered according to the likelihood of association with others in trios (trigrams). Line (5) shows words chosen independently of each other but with the same probability of appearing in sentences that they have in English. Finally, line (6) incorporates the probability that a word is followed by another specific word. Even without decoding the meanings of words, a nearly intelligible message is emerging. Differences in frequency and sequence in letters and words produce differences in the probability of occurrence and an increased probability of certain combinations. In practice, this means that the message components become more predictable and intelligible. The reduction of entropy allows communication.
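To make this concrete, the following minimal sketch (in Python; the function name and the choice of samples are ours) computes the per-symbol Shannon entropy of the empirical letter distribution for two of the lines above. The printed samples are short, so the values are only rough, but the structured, English-like sample is expected to show lower entropy than the near-equiprobable one.

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Per-symbol Shannon entropy (bits) of the empirical symbol distribution."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Line (1): letters behaving as if independent and (roughly) equiprobable.
line1 = "XFOML RXKHRJFFJUJ ZLPWCFWKCYJ FFJEYVKCQSGHYD QPAAMKBZAACIBZLHJQD"
# Line (6): word transition probabilities as in English.
line6 = ("THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER "
         "OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME "
         "OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED")

print(f"line (1): {shannon_entropy(line1):.2f} bits/symbol")  # near-uniform letters: higher entropy
print(f"line (6): {shannon_entropy(line6):.2f} bits/symbol")  # English-like frequencies: lower entropy expected
```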

One could argue that Shannon entropy is anything but semantic. This is a crucial point. Shannon indeed defines information irrespective of meaning. However, Shannon’s coauthor in their 1949 book The Mathematical Theory of Communication, Warren Weaver, had already attempted to incorporate semantic information into Shannon’s theory. The argument, recently expanded by Haken and Portugali [24], shows that Shannon information participates in semantic information and vice versa: as the mind relates to the environment deflating/inflating information, variations in Shannon information entail different meanings. Furthermore, different meanings affect the quantity of information, as they are produced and retrieved through differences in events or entities, however subject to ambiguities. As we shall see below, for the sake of assessing entropy pragmatically, differences in semantic information can be sufficiently captured through Shannon information. This allows one to estimate semantic entropy through Shannon entropy.

But what does this have to do with social entropy? Societies are interaction systems. People’s actions are mediated by information: they have meanings, from personal orientation [25] to informational [5] and practical contents [26]. Meanings produce informational difference. We recognize actions through such informational differences, say, in the contents of utterances or in the tasks we perform with someone. In principle, the greater the diversity of orientations and actions, the harder it is to predict what someone’s next action will be. But there is more to this temporal condition. A state of high entropy surrounds actions in a potential state, before they come into being, when a number of possibilities of action lie ahead. Bringing Shannon’s and Luhmann’s insights together, the informational problem that societies face on a daily basis lies in dealing with the selection of actions.

We experience entropy whenever we deal with uncertainty, options, decisions, unpredictable situations, or too much information, but we rarely realize we do so. Entropy is a phenomenon “beyond observation”: we cannot see or touch entropy, even though we experience it on a daily basis. We are not used to thinking about the challenges we face in order to make choices or put our actions together in any workable sense. The key question here is how such a heightened field of choices and the taken-for-granted process of selection involve our environment. We need to understand how cities are part of the semantic exchanges that constitute collective action. And the first step here is to understand city space as information in its own right.

3. The City as Information Environment

Luhmann shows that social systems are immersed in semantic production: meaning becomes their environment. We wish to further develop this idea exploring urban space as part of such an environment. In fact, the idea of space as an environment able to contain social information has gained great support recently. From Vygotsky [27] to Wilson [28] and Haken and Portugali [24], a number of cognitive properties of space and spatial properties of cognition have been identified.

(i) Cognition Is Situated and Extended. Our cognitive activity occurs in the context of a real environment and inherently involves perception. Internal cognitive processes are shaped by their coordination with external resources [27]. While cognitive processes occur, perceptual information continues to be captured, so as to affect our actions. The ways in which we dive into cognitive activity are linked to our continuous interaction with the environment [28]. Theories of the extended mind assert a causal flow as the mind uses resources in the environment and vice versa, a two-way interaction in a coupled cognitive system [29]. Order and systematicity in human cognition and action derive in part from the stability of our environment (Michaelian and Sutton, 2013).

(ii) We Load the Built Environment with Information. Certain cognitive processes trigger associations with elements of the environment through the incorporation of socially acquired information [30]. In turn, information is classified into potentially shared categories [31]. Clustering processes connect pieces of information through similarities of physical aspects of the environment or through relations between their meanings [32]. Environmental elements (say, buildings or places) are interpreted through such socially shared meanings [33]. Visible elements of a city convey different amounts of information [24]. Small details (like a building entrance) can be related to higher orders of information and refer to larger spatial formations (such as a main street or a city centre), evolving to rich, multilevel hierarchical structures in which the interactions between information units in the lower levels generate patterns of environmental elements in higher levels and vice versa. The probability of a building or place evoking a mental representation shared by people is enhanced by its physical appearance and visual identity, along with its visibility and location in the environment and the social information associated with activities performed there ([34], cf. [35]).

(iii) Spatial Information Relieves Our Cognitive Work. We deal with the environment also through memory. However, our short-term memory is constrained in its capability to process information [24]. Luckily, our knowledge of spatial properties and patterns can integrate semantic, visual, and configurational aspects projected into an urban environment as symbolic off-loading. The spatial environment becomes an external memory carrying information about activities and agencies found in it for future retrieval [28]. Instead of trying to keep all the relevant details about activities in our short-term memory, we retrieve these details from the environment itself as an extended memory and information source [36]. Environmental information is a means to reduce the effort of memorization. Other mnemonic features such as kinesthetic images have been shown to functionally preserve spatial and semantic properties of the external world [37]. This external semantic resource relieves the cognitive load and what Wilson [28] calls the “representational bottleneck”: the limits faced by our internal memory, which lead us to extend our mnemonics to the environment in the first place. The roles of long-term and short-term memories for agents able to retrieve social information from space will be of particular interest to our approach.

(iv) Cognition Is Pressed for Time. Daily life requires response capacity. More cognitive capacity can be built up from successive layers of real-time interaction with our spatial context. We create mental models of the environment from which we create action plans. Sophisticated forms of situated cognition happen in any activity that involves the continuous updating of plans in response to changing conditions of action. Time pressure demands spatial decisions [38–40].

(v) Spatial Information Serves Action. Agents build instructions about events in the environment in an indexical form [41]. The practical aims of agents can thus be directly connected to their situation [42, 43]. We manipulate the environment as a way of dealing with our practical problems. Embodied cognition includes mechanisms serving adaptive activities [44]. Our cognitive memory evolved in the service of perception and action in a three-dimensional environment [28]. Cognition serves action through a flexible and sophisticated strategy, where information is stored for future use without firm commitments on what that future use may be. Spatial information can be absorbed through a variety of uses for which it was not originally encoded. This means that new uses may be derived from a stored representation of space. They need not be triggered by direct observation of the environment and its affordances [45]. Environmental representations, either schematic or detailed, appear to be largely free of a particular purpose, or at least contain information beyond what is necessary for a specific action. This is certainly an adaptive cognitive strategy. The fact that humans encode the physical world using spatially and semantically structured mental models offers a huge advantage in solving problems.

This nondiscursive knowledge includes spatial properties and heterogeneities and enables us to build inferences, for example, when we try to imagine a likely street where a certain activity might be found. In turn, Lakoff and Johnson [46] argue that mental models are based on a modelling of the physical world and rest on analogies between abstract and concrete domains. Other works tie semantics to image schemas able to incorporate knowledge of the physical world as a way to encode relationships between events. In perceiving something, we perceive not only its observed form, but also the potential information enfolded in it [24]. This very property is crucial to the presence of urban space in the selection of actions to be performed.

So our actions encode space with social information, and such an informationally differentiated space plays a part in easing the representational bottleneck in our memories. We are able to retrieve information from this semantic arrangement that takes the form of the city, before and during our actions and experiences. Such a semantic layer is stored in activity places and in the spatial relations between them and may be related to physical and functional heterogeneities, such as accessibility and centrality patterns [34]. Such layers of spatial information can be evoked in our memories and used when we need to make inferences, such as where we could find a particular social situation or place. However, most theories of situated cognition still do not recognize how this informational space is part of the way we create action systems. Let us see how this could be the case.

4. The City as a Frame of Reference

Our aim has strong parallels with Hutchins’s [47, 48] view of cultural ecosystems operating cognitively at larger spatial and temporal scales than an individual person. According to Hutchins, all instances of cognition can be seen as emerging from distributed processes, that is, from interactions between elements in a system. Cognitive properties are not predictable from individual cognitive capacities. Humans create their cognitive powers by creating the environment in which they exercise those powers. This socially distributed cognition is a collective operation produced by interactions among agents in active relation to their environment, in contexts of ongoing activity. Possibilities for individual learning depend on the structure of the environment, to the point that the distribution of cognitive skills is determined by the distribution of practices engaged in by people. Like Hutchins, we are concerned with the tasks people confront in their everyday world, taking part in processes of coordination of action as they move through physical spaces. Ultimately, our aim relates well to his proposition that “cultural practices decrease entropy and increase the predictability of experience” [48, page 46]. Our work follows parallel lines, through a systems approach able to encompass large-scale systems of action and spaces.

We propose to explore the informational role of city space in how agents build connections between actions. We all know that urban space is produced to express human activity. But theories of cognition suggest much more: space would be semanticized by our actions. That means that activity places are informationally differentiated and acquire a level of semantic definition similar to types of action [49, 50]. This differentiation is enacted in and associated with the very structure of urban space. Space is differentiated in at least three levels: (i) its physical properties of extensity and configuration, such as accessibility and centrality patterns in a city; (ii) recognizable shapes in its visual and tridimensional form [24]; and (iii) the semantic contents of activities performed in buildings and places (cf. [34, 35]) (Figure 1).

How can this informational space become a part of action? There is a remarkable lack of theoretical and empirical work on this problem. First, we seem to retrieve meanings associated with our actions in architectural and urban spaces. We arguably grasp useful information about the activity performed in a place not only through visual features, but also by inferring what people do there. Space not only represents activity: it is enacted and thus loaded with meanings related to performance. This is a semantic dimension of space: space “means” as much as our actions do, precisely because it is semanticized by our actions. This enacted information is very important. Action is temporally related to spatial locations.

Second, these locations allow us to access potential counterparts in interaction. As we shall see below, knowing a city means that we can find places where certain activities are performed. Recognizing spatial differences and patterns allows us to infer such locations. Apart from other sources of information such as linguistic exchanges, we do so because we can retrieve environmental information about activities and location patterns.

Third, places are means of connection: we can actualize connections with certain people or activities during a period of time. These spatial links allow us to create complexes of interaction. In practice, places “draw” courses of action with different orientations and contents and tie them together momentarily. Flows of convergent and divergent actions happen all the time, when people leave their homes to work, to seek services or go about to socialize, and do so often without prior knowledge of activities, counterparts or places they wish to visit (Figure 2). Urban space becomes what Parsons [51] would call an “action frame of reference,” a connective fabric which both enables and constrains choices materially and socially.

5. Entropy as a Way into Social Organization

In order to be a part of selection, the urban environment needs to be sufficiently informative, not just suggesting particular options but also allowing unforeseen changes in the course of our actions. Luhmann suggests that a desired social activity must be within reach. “Because complete interdependence is unattainable, however, interdependencies only come about by selection. […] Successfully established interdependencies then serve as perspectives for and constraints on the structural selections that connect onto them” [5, page 284]. As a social event, this connection is produced through communication. But before communication comes into being, certain conditions have to be there, spatially there. Places must be chosen as conjunctive pieces that materialize connections.

Could cities stimulate connections? If they could, and given the fact that they are formed by space and that space is not fully malleable, is it reasonable to suppose that certain spatial formations could ease such momentary connections? We know that proximity increases the intensity of interaction [52, 53]. But what form should urbanized space take in order to meet such systemic expectations? We may understand such spatial conditions through a counterfactual scenario. Imagine a fully chaotic urban formation with no recognizable concentrations or paths. Visual elements could not convey information, as spatial differences would not develop into structures emerging out of absolute heterogeneity. References to activities of interest would have to rely only on our memory. Selection of actions would become dependent on very detailed mental representations of such a chaotic formation. Inferences about more likely places to find particular activities would be very hard to make. We would be immersed in a spatial environment without patterns to guide our choices, reduce our efforts, and amplify our interactions. In a world of unstructured spaces, actions would face lack of information, noise, and relentless entropy. The possibility that space may have an informational presence in interaction expresses the possibility that the physical environment may have informational properties and that the world is rich in information.

Now, not only does our daily experience deny that cities are such completely chaotic, structureless places, but spatial economics and urban studies also offer plenty of theories and evidence that, as complex as they may be, cities are (at least partially) structured places. From the location patterns described since Alonso [54] and the accessibility patterns described from Hansen [55] to Hillier [56], we know that urban patterns matter: they imply that activities can be arranged in ways that are recognizable and accessible. And from the point of view of selection, the realization that activities are somehow arranged along a spatial structure is a form of preselection. Activities tend to be distributed according to levels of accessibility, land values, and practical dependencies, facilitating complementary actions. Intentionally or not, and however subject to contingencies, urban space is also structured (physically and informationally), which allows possibilities of action to be more easily known, selected, and made viable. In short, at first, the urban structure affords a range of possibilities for selection. In a second moment, the structure itself suggests gradients of attractive possibilities.

In a view of social systems constituted by connections of actions, this means that space can be part of the constant transition from individual acts to complexes of actions (individual acts → places as connections → action systems). Our hypothesis here, to be tested in computational experiments in the following section, is that cities become a crucial part of this connectivity. They would reduce risks such as the lack of information or courses of action too costly to emerge.

What does this say about the effect of cities on entropy? Starting from Luhmann, interactions have structural value because they represent selections from combinatorial possibilities. If we relate this to Shannon’s entropy, the very emergence of urban structures eliminates spatial scenarios where every possible connection would have the same probability. Another counterfactual scenario might help here. Entirely homogeneous cities (or entirely heterogeneous ones, for that matter), free from structure, would present even probabilities of choice between actions. Of course this would be a problem for social systems: connections potentially interesting to most people would be as difficult to find as any other. If urban space were purely homogeneous or purely random, the probability of finding an activity would be distributed homogeneously in space. That is a scenario of maximum entropy. The spatial environment would be useless in stimulating certain sought-after connections which could otherwise be there, say, when we concentrate activities like final supply or input-output exchanges in production. Effort, time spent, and the economic costs involved in finding activities of interest would rise.
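A worked example with hypothetical numbers illustrates the contrast. If eight places could host a sought-after activity with equal probability, the entropy of that choice is

$$H = -\sum_{i=1}^{8} \tfrac{1}{8}\log_2\tfrac{1}{8} = 3 \text{ bits}.$$

If location patterns concentrate the activity so that one place is chosen half of the time and the remaining seven share the other half equally,

$$H = -\tfrac{1}{2}\log_2\tfrac{1}{2} - 7 \cdot \tfrac{1}{14}\log_2\tfrac{1}{14} \approx 2.4 \text{ bits},$$

so the structured arrangement makes the outcome of the search more predictable.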

On the other hand, when urban space develops differentiated contents and structures, say, location patterns along accessible paths, it naturally creates differences in the probability of interaction and decreases in entropy. Location patterns are in fact material and informational expressions of interaction systems. The differences engendered by such patterns generate differences in the chances of finding an agent, a service, or commerce. If a city did not have recognizable internal formations such as centralities or high streets, agents would have more difficulty in knowing where they are most likely to find what they are after. A partially structured environment allows us to build inferences about actions and agencies available in a social system. When these agencies are also related temporally through proximity and functional complementarity, there is an even greater potential for differences in probability to emerge, suggesting new connections and sequences. What all this means is that cities, internally differentiated yet far from absolute heterogeneity, have the effect of reducing social entropy.

Before experimenting with these relationships in silico, let us attempt an introductory description of what these differences in probability entail in the way social systems handle entropy (Figure 3). Linearizing for the sake of simplicity processes that in fact occur simultaneously, the reduction of social entropy would take the following form. (a) In an initial state free of space, think of the actions as lines moving in time. Agents can do anything, and we cannot foresee what they will do, a potential state of high entropy. Colors of the lines in Figure 4 represent different informational contents in personal orientations. (b) Then agents start selecting actions and converging into different positions in a spatial system, arranged as places or buildings (represented by colors in the vertical strip), in order to connect with others. Action lines might have subtle differences in relation to the places they approach, so they converge into places by informational proximity. Connections can only happen if all other possibilities of connection are excluded. (c) As these convergences happen, the initially chaotic maze becomes a momentarily coordinated system, where agents cooperate. The overall entropy of actions is reduced. (d) After each spatially held event, actions may change again, according to changing orientations. Accordingly, the colors of lines change. Entropy increases as actions go into another momentary state of unpredictability, as new possibilities are presented to agents and some must be selected if actions are to be actualized. (e) And then actions “refer” again to meanings and positions in space, starting a new cycle of entropy (Figure 4).

These distinct moments are theoretical, of course. In reality, as Prigogine and Stengers remind us [6, page 124], this “multitude of events” takes place simultaneously and may render any clearly emergent cycle invisible. But that neither means that individual cycles do not happen or aggregate in bunches, nor that they compensate one another like peaks and troughs in colliding sound waves. In fact, routinization in daily life leads to rhythms and cycles unfolding from or folding over the seemingly chaotic maze of actions. But even if actions were far from synchronic, the fabric of interaction still faces the challenge of entropy, that is, the challenge of producing differences in the probability of connections as a way to ease organization whenever agents feel compelled to create new interactions, and this process is remarkably recursive. We converge to places to act with others and do so on a daily basis. Once engaged in an activity place, interaction requires levels of cooperation. We may bring our own distinct tendencies or latent orientations, in the form of long-term memories of how to act, along with short-term memories of a previous action, both shaping our next actions. As we act socially, we bring our orientations into contact in a way that opens them up in the communication process, regulating collective action. A result of this coordination is to reduce initial differences. Our actions “align” within an activity place (Figure 5).

All this suggests that social life involves a continuous although unconscious handling of entropy, a vital oscillation between unpredictability and predictability. As this process engages an enormous number of agents and events and transcends places and contexts and therefore cannot be observed as a whole, entropy can only be fully grasped in theoretical representations. A useful way to examine space as an information environment for action is through a computational model, a means to examine a counterfactual world where other steering media such as language or social rules were “switched off,” keeping only properties of space active along with agents’ cognitive abilities to align orientations with activities. Once we abstract from other media, we may assess whether space could have any presence in the self-organization of action. Let us examine this possibility through some experiments.

6. Digital Experiments: A Unidimensional Model

6.1. Design Concepts

An agent based model (ABM) is proposed. Agents are defined by the actions they perform at each time-step, and places by corresponding activity types. The decision on which action to perform next may be influenced by three different conditions: namely, (i) a latent orientation $v_i$, representing the tendency of a single agent to act around a particular type of action, initially randomly distributed; this condition remains over time; (ii) the current action $a_i(t)$ performed by an agent as he/she selects an activity place in order to perform a new action; this means an influence of the current action, while allowing gradual changes of orientation in time, influenced by other agents and activities; and (iii) the types of activity places $e_x(t)$ where agents perform, supporting specific types of action. Agents follow simple rules, selecting activity places closer to their latent orientations and current actions, while being influenced by the activities of those places. Activity places are also influenced by visiting agents, but they change at a slower rate. In short, agents coevolve with their spatial and social environment. The model simulates different situations under this basic structure. Differences in the weight of these factors on the next action may lead to quite different levels of social entropy, suggesting new possibilities for understanding the phenomenon. Although there is a tradition in modelling daily activities, including a growing literature using digital locational data (e.g., [57]), our ABM is not designed to represent empirical sequences of actions or actual routines. Instead, it focuses on trends that might emerge from the interfaces of simplified systems of action, information, and space.

6.2. Variables and Scale

Consider a unidimensional city formed by $N$ agents and $L$ places. Places form a ring with length (or perimeter) $L$, as shown in Figure 6. Our choice for representing the city by a ring is justified by the minimal sufficient representation of spatial distance as a factor whose role in social organization is to be assessed. This is allowed by a unidimensional model (e.g., [58–60]). The ring form allows continuous movement across a linear sequence of locations, eliminating centrality factors while taking into account periodic boundary conditions in order to reduce border effects, eliminating the role of topology while isolating the problem of distance (see [61–63]). This stylized city houses $N$ agents, which in each time-step select and visit a specific place located at a position $x$ within the city. Consider that the position of the $i$-th agent at time $t$ is represented by $x_i(t)$. The position of an agent can assume the integer values $x_i(t) \in \{1, 2, \dots, L\}$, according to the place that she/he chooses to visit. This means that, at every time-step, each agent will choose one activity place to perform her next action.
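Because the ring has periodic boundary conditions, the distance between two positions is the shortest way around the circle. A minimal sketch of this distance function (in Python; the name is ours) is:

```python
def ring_distance(x1: int, x2: int, L: int) -> int:
    """Shortest distance between positions x1 and x2 on a ring of L places,
    respecting the periodic boundary condition (positions 1..L wrap around)."""
    gap = abs(x1 - x2)
    return min(gap, L - gap)

# Example: on a ring of L = 100 places, positions 2 and 99 are only 3 steps apart.
assert ring_distance(2, 99, 100) == 3
```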

We consider $a_i(t)$ as the current action of the $i$-th agent and $e_x(t)$ the activity performed in a place located at position $x$ of the city, both at time $t$. We quantify action orientations by weighting the three variables (latent orientation, current action, and activity place). First, variables are translated into different numeric values. Let us assume that an orientation is an integer between $1$ and $m$, that each number represents a particular type of action, and that numbers in their vicinity mean similar actions. The resulting orientation of each agent at time $t$ is the weighted average of those three parameters (the weights are presented in Section 6.3). The difference $|a_i(t) - e_x(t)|$ gives us the difference in orientation between an agent and the activity place at $x$. At every time-step, each agent assesses the difference between her orientation and every activity place in the city. An agent chooses a specific place based on her orientation affinity with this place, that is, the minimum difference.

As we hypothesized above, the urban structure has a role in this selection. In this experiment, we reduced “urban structure” to distance. In order to investigate how distance may or may not interfere in this selection, we propose two scenarios. In the first one, the agent selects her next activity based only on the similarity in orientations between herself and the activity place. Agents can move across a space free of friction. Space is not a constraint. This scenario considers only differences in orientation. The parameter $I_{i,x}(t)$ estimates the interaction between agent $i$ and the place located at $x$ at time $t$. The agent selects activity locations that minimize this quantity. We propose

$$I_{i,x}(t) = \left| a_i(t) - e_x(t) \right|. \qquad (1)$$

In the second, more realistic scenario, space imposes friction on movement. The agent takes into account the physical distance between her current location in the city and the location of her next activity and opts to minimize this distance along with the difference between her orientation and that of the activity place. We propose

$$I_{i,x}(t) = \left| a_i(t) - e_x(t) \right| + d\bigl(x_i(t), x\bigr), \qquad (2)$$

where $d(x_i(t), x)$ is the distance between the agent’s current position and the place at $x$, measured along the ring.

Summing up, an agent selects at time $t$ a particular activity place located at $x$ that minimizes the function $I_{i,x}(t)$. In the first scenario, she will choose activity places closer to her orientation. In the second, she will choose activity places closer in both orientation and physical distance. This brings a form of “energy cost” into the factors considered. Space takes on a functional role because of the energy used in “visiting.” Agents move in this linear city seeking to minimize this energy, while searching for social information latent in places close to their changing orientations. Changes in entropy levels are derived from this energy function. In its spatial version, the model assesses the effects of the effort to reduce the energy of action by minimizing informational differences (between orientations and activity places) and spatial distances (between agents and activity places).
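A sketch of this selection step could read as follows (in Python; variable names are ours, and the frictionless scenario simply drops the distance term of equation (2)):

```python
def select_place(i, positions, actions, activities, L, with_friction=True):
    """Return the position x that minimizes I_{i,x}(t) for agent i.

    positions[i]  : current position x_i(t) of agent i (1..L)
    actions[i]    : current action/orientation a_i(t)
    activities[x] : activity e_x(t) of the place at position x (indexable by 1..L)
    """
    def cost(x):
        orientation_gap = abs(actions[i] - activities[x])   # |a_i(t) - e_x(t)|
        if not with_friction:
            return orientation_gap                           # equation (1)
        gap = abs(positions[i] - x)
        return orientation_gap + min(gap, L - gap)           # equation (2), ring distance
    return min(range(1, L + 1), key=cost)

# Tiny example: 5 places on a ring, activities indexed 1..5 (index 0 unused).
activities = [None, 10, 40, 70, 20, 55]
positions, actions = {0: 1}, {0: 52}
print(select_place(0, positions, actions, activities, L=5))  # picks place 5: adjacent and with activity close to 52
```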

6.3. The Evolution of Action Orientations

Orientation evolves in time according to the following rule. When an agent chooses and joins a particular place, both agent and place become a little closer in terms of orientation. This means that activity places are not immune to what agents perform in them and that the urban activity system also changes in time. To illustrate how orientations are constantly updated, consider that agent $i$ selects an activity place located at position $x^*$. Orientations will be updated according to the rule

$$a_i(t+1) = \frac{A\,v_i(t) + B\,a_i(t) + C\,e_{x^*}(t)}{A + B + C}. \qquad (3)$$

Selection also depends on three additional parameters: the latent orientation is weighted by parameter $A$; the current orientation by parameter $B$; and the type of activity place by parameter $C$. This means that agents will consider the three factors previously described ($v_i$, $a_i$, $e_x$) with different weights ($A$, $B$, $C$) to estimate which activity she wants to perform in the next time-step.

The latent orientation evolves from an initial, randomly distributed value, $\bar{v}_i$. We consider that $v_i(t)$ is normally distributed around $\bar{v}_i$, meaning that every agent has an “average” latent orientation, whose value at each time-step will be drawn from a normal distribution peaking at the agent’s average orientation. If $A$ is sufficiently small in relation to $B$ and $C$, then the latent orientation does not play a role in the agent’s action. However, if $A$ is sufficiently large (again in relation to $B$ and $C$), then the action becomes strongly dependent on the latent orientation. This would lead to a rather conservative city where inhabitants are not open to changes in behaviour.

The current action of an agent may influence a new action with an intensity that depends on parameter $B$. If this parameter is small in comparison to $A$ and $C$, then the agent has no memory of an ongoing orientation. However, if $B$ is sufficiently large, then the new action is strongly dependent on the previous one (a Markovian behaviour).

The intensity of the activity place’s influence on the agent’s action depends on the value of $C$. If this parameter is small in comparison to $A$ and $B$, then the activity place does not affect her next action, meaning that the place does not change behaviours. However, if this parameter is sufficiently large, then the place exerts a strong influence on future actions. The activity place will be updated according to the rule

$$e_{x^*}(t+1) = e_{x^*}(t) + \epsilon \bigl( \bar{a}_{x^*}(t+1) - e_{x^*}(t) \bigr), \qquad (4)$$

where $\bar{a}_{x^*}(t+1)$ is the average orientation of the agents visiting the place at $x^*$ and $\epsilon$ is a sufficiently small parameter. That is, places are less influenced by agents than the other way round. This means that, at every time-step, a place has its type of activity move closer to the average of the orientations of its visitors. Therefore, the main activity of the place will change slightly according to its visiting agents, while those agents change their own orientations.
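Under the reconstruction above, the two coupled updates can be sketched as follows (in Python; function and variable names are ours, and epsilon is kept small so that places change more slowly than agents):

```python
def update_agent(a_i, v_i, e_xstar, A, B, C):
    """Equation (3): new action as the weighted average of latent orientation,
    current action, and the activity of the chosen place."""
    return (A * v_i + B * a_i + C * e_xstar) / (A + B + C)

def update_place(e_x, visitor_actions, epsilon=0.05):
    """Equation (4): the place's activity drifts slightly toward the average
    orientation of its visitors; a small epsilon keeps places 'slower' than agents."""
    if not visitor_actions:
        return e_x
    mean_action = sum(visitor_actions) / len(visitor_actions)
    return e_x + epsilon * (mean_action - e_x)
```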

6.4. Frequency of Orientations and Entropy Levels

We calculate entropy by assessing the distribution of the resulting orientations of all agents in the social system at each time-step, using Shannon entropy. Our quantitative interpretation of information removes ambiguities from the relation between Shannon information and semantic information, as different orientations, actions, and types of activity places are represented by numerical values. This allows us to estimate social entropy through Shannon entropy. The entropy of actions is calculated as follows: consider $N_k(t)$ as the number of agents with orientation $k$ at time $t$ (note that the total population is $N = \sum_k N_k(t)$). We compute the frequency (or density) of this orientation within the population with

$$f_k(t) = \frac{N_k(t)}{N}. \qquad (5)$$

Equation (5) calculates the probability of observing orientation $k$ at time $t$. We compute the entropy level for any distribution of orientations with

$$S(t) = -\sum_{k=1}^{m} f_k(t)\,\log f_k(t). \qquad (6)$$

Equation (6) describes how evenly the probability of finding different orientations is distributed. Higher values mean that different orientations have almost the same probability of happening, while lower values indicate a system with clear orientation trends. The reduction of entropy implies that the probability of certain actions increases, that is, actions grow in similarity. In the limit, as entropy falls to zero, all agents in the system would reach the same orientation.
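Equations (5) and (6) translate directly into code. A minimal sketch (in Python; names are ours, and orientations are rounded to their integer types before counting) is:

```python
import math
from collections import Counter

def action_entropy(actions):
    """Shannon entropy S(t) of the distribution of agents' orientations (eqs. (5)-(6))."""
    counts = Counter(round(a) for a in actions)      # N_k(t): number of agents holding orientation k
    N = len(actions)
    freqs = [n / N for n in counts.values()]         # f_k(t) = N_k(t) / N
    return sum(-f * math.log(f) for f in freqs)

# Identical orientations give S = 0; fully spread-out orientations give higher S.
print(action_entropy([5, 5, 5, 5]))      # 0.0
print(action_entropy([1, 20, 43, 77]))   # log(4) ≈ 1.39
```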

6.5. Model Procedure

The model performs the following procedures:

(1) The city is created by generating the following:
(a) the types of activity in places, $e_x(0)$ for $x = 1, \dots, L$, generated from a random distribution; this step creates the activity places in the ring-city and assigns them randomly distributed activity values;
(b) the initial positions of agents, $x_i(0)$ for $i = 1, \dots, N$, and their respective orientations $a_i(0)$ and $\bar{v}_i$, generated from a random distribution; this step creates the moving agents in random locations and assigns them latent orientations based on the initial, randomly distributed orientation $\bar{v}_i$.

(2) At each time-step, every agent chooses a place to act. This place should minimize the function $I_{i,x}(t)$. Suppose that the place chosen by agent $i$ is located at $x^*$. Then agent and place update their orientations $a_i(t+1)$ and $e_{x^*}(t+1)$, as described above.

(3) Update the distribution of orientations $f_k(t)$.

(4) Compute the Shannon entropy $S(t)$.
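Putting these steps together, the following self-contained sketch runs the spatial (friction) scenario end to end. All parameter values, names, and the 0-based indexing of positions are illustrative assumptions of this sketch, not the calibration used in the experiments reported below:

```python
import math
import random
from collections import Counter, defaultdict

# Illustrative parameters (assumed, not the paper's calibration).
L, N, M = 100, 200, 100          # places on the ring, agents, number of orientation types
A, B, C = 1.0, 1.0, 1.0          # weights of latent orientation, current action, place activity
EPSILON, SIGMA = 0.05, 2.0       # place inertia and spread of the latent-orientation draw
STEPS = 200

random.seed(1)
activities = [random.uniform(1, M) for _ in range(L)]   # e_x(0), positions x = 0..L-1
positions  = [random.randrange(L) for _ in range(N)]    # x_i(0)
actions    = [random.uniform(1, M) for _ in range(N)]   # a_i(0)
latent_avg = [random.uniform(1, M) for _ in range(N)]   # \bar{v}_i

def ring_distance(x1, x2):
    gap = abs(x1 - x2)
    return min(gap, L - gap)

def entropy(values):
    counts = Counter(round(v) for v in values)
    return sum(-(n / len(values)) * math.log(n / len(values)) for n in counts.values())

for t in range(STEPS):
    visitors = defaultdict(list)
    for i in range(N):
        v_i = random.gauss(latent_avg[i], SIGMA)   # v_i(t) drawn around \bar{v}_i
        # Step 2: choose the place minimizing I_{i,x}(t) = |a_i - e_x| + d(x_i, x)  (eq. (2))
        x_star = min(range(L),
                     key=lambda x: abs(actions[i] - activities[x]) + ring_distance(positions[i], x))
        # Equation (3): new action as the weighted average of the three factors
        actions[i] = (A * v_i + B * actions[i] + C * activities[x_star]) / (A + B + C)
        positions[i] = x_star
        visitors[x_star].append(actions[i])
    # Equation (4): places drift slowly toward the mean orientation of their visitors
    for x, acts in visitors.items():
        activities[x] += EPSILON * (sum(acts) / len(acts) - activities[x])
    # Steps 3-4: distribution of orientations and Shannon entropy S(t)
    if t % 50 == 0:
        print(f"t={t:3d}  S(t)={entropy(actions):.3f}")
```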

6.6. Results

We have developed a number of observations on the behaviour of social systems working under different parameters.

(i) Entropy reduction is only found in scenarios where space imposes friction on movement, that is, where the spatial distance between agents and activity places is an active factor in selection. Entropy requires an extensive space in order to be reduced. But space cannot do it alone: surprisingly, space is a necessary but not sufficient condition. Figure 7 compares two scenarios. In the first, distance is a factor in the selection of activities (red line). The distribution of orientations changes from a homogeneous one at the start of the simulation (a) to a nearly normal distribution at the end of the simulation (b), where certain kinds of action are more likely to happen. Informational contents in extensive space help align the contents of actions. In the second scenario, agents move free of spatial friction (blue line). Orientations are initially randomly distributed (a) and continue to be so at the end of the simulation (b), as similar numbers of agents are distributed along different orientations. This result suggests that a materially active space becomes a means for increasing the probability of certain interactions, easing the collective coordination of action.

Now assessing the relative influence of social and personal factors on the selection of the next action, namely, the social information in activity places, the latent orientation, and the current action, under the influence of spatial friction (Figure 8), we can say the following:

(i) There is structure in the relation between selection factors and the reduction of entropy. Entropy begins to fall and fluctuate around specific values, according to different combinations of factors. For some combinations, there is a great reduction of entropy. For others, the reduction is minimal, almost negligible. Fluctuations derive from the random component in the decision making of the agent (under the influence of latent orientation).

(ii) A strong latent orientation ($A$ sufficiently large) leads to increasing entropy (blue lines in Figures 8 and 9), since agents cannot align their actions with other agents and activity places. Like a long-term memory, systems whose actions are dictated by latent orientations are likely to preserve initial orientations, which were randomly distributed and, therefore, homogeneous. This condition is responsible for limiting the reduction of entropy.

(iii) Current action alone does not shape new actions in any specific way (green lines in Figures 8 and 9). It just keeps what is already happening, working as “reinforcement feedback” for whichever direction the agent (and the system) is going. Current actions are means to conserve tendencies in the system.

(iv) Activity places play a key role in the reduction of entropy (red lines in Figures 8 and 9). The social information in space “contaminates” agents: they align their actions through the social contents of places. We have seen that space matters as extension. Now we see that space also has a very active informational presence.

(v) Different combinations of factors have different effects on entropy. Figure 8 shows the transitions between the three parameters and how nuanced weights resulting from blending factors matter in the reduction of entropy. For instance, blending the weight of activity places and latent orientation on the next action produces a strong reduction of entropy (pink lines in Figure 8). That means that when agents do not keep information from previous selections (i.e., $B$ close to zero), the action system reduces entropy more than in other parameter combinations. A strong short-term memory leads the system into conserving itself and into a poorer capability to coordinate actions. Also, when current actions and activity places share a similar weight over the next action (orange lines on top), the social system does not experience a great reduction of entropy, as agents tend to reproduce their actions.

(vi) Finally, the reduction of entropy implies that the probability of certain actions and interactions increases. In practical terms, this means more alignments between agents and more connections between actions (interactions). However, if all agents in the system reached the same orientation and entropy dropped to zero, the system would lose internal differentiation. Agents would behave in the same way, say, in a world with no personal differentiation, specialization, or division of labour. New orientations (therefore, entropy) are necessary if the social system is to keep differentiated agencies. On the other hand, a system with maximum entropy would create differences in actions to a point where no action coordination or interaction would be possible. Clearly this situation cannot be the case.
A social system requires balances, neither full entropy nor total predictability.

These results also suggest that we are dealing with distinct “social memories”: a long-term memory is active in what we call latent orientations. Short-term memory is active in the influence of a current orientation over a new action. In turn, the social information latent in activity places is a result of social arrangements (say, firms, a local economy, and so on) which find certain stability, changing much slower than actions. It is an extension of the social system projected onto urban space, stabilizing the system to some extent.

7. Cities and Social Interaction: Conclusions

What does this approach bring to the state of the art on the relations of social interaction, information, and space and on social entropy and ABM in particular? We have seen that previous approaches overlooked the coordination of action as a key empirical and analytical problem. Furthermore, they tend to have the thin spatiality of a passive territorial background (e.g., [16, 20]). In turn, although the concept of “collective action” is finally getting attention in urban studies [64], the problem of social organization is still largely underestimated. The problem of entropy has been dealt with since Wilson’s [65] work on spatial interaction but has not reached the mainstream of the discipline and deals mostly with urban form (e.g., [24]) and spatial distributions [66], not with social information and action. Regarding agent based models, we are not aware of an approach that deals with the problem of how cities and space are part of social organization. In a sense, our model is closest to Axelrod’s [67] ABM of the dissemination of culture. His agents exchange “culture” through direct contact with neighbours in a cellular automata model, whereas our agents are mobile and actively deal with distance and social information in space, recognizing types of activities in places. The intensity of exchanges between agents in Axelrod’s model is a function of similarities between them. In our model, exchanges are mediated by the selection of and interaction with space. In our understanding, our contribution in terms of ABM lies in the fact that a new and simple model is proposed to deal with the material and informational dimensions of social organization, not just as an illustration, but as a proof of concept: space can have a causal presence if agents generate preferences in a domain that coevolves with them, as detailed above.

Now, does the simulation model corroborate the theory introduced in this paper? Our proposition cannot be empirically validated at this stage, given that it is very hard to assess people’s choices and the whole panorama of actions in a city. Entropy may well be beyond observation. In these circumstances, simulations of the behaviour of agents under different spatial conditions become useful to assess the entities, events, and entropic forces at play. We expected the model to show that space matters for coordinating actions, but the simulations surprisingly showed that space is a necessary but not sufficient condition for the reduction of entropy. Neither material cognitive resources nor the individual powers of agents are enough. Information in activity places and a bit of agents’ personal history also play roles in the way social systems deal with their own entropy. That would mean a causal but nondeterministic presence of space.

Substantively, this approach explores a subtle but fundamental presence of cities in social organization. We argue that urban space materializes gradients of difference in potential interactions, from less to more recognizable, costly, or likely. By placing simple criteria, including distance, in the selection of activities, our ABM showed that space becomes a means for producing differences in the probabilities of interaction, increasing the chances of certain selections and of convergences in collective action.
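
One simple way to express this gradient formally, under an assumed exponential friction (this is an illustration, not our model’s exact specification), is a distance-discounted choice probability:

```latex
% Illustrative, assumed specification: an agent at position x selects activity
% place i with probability proportional to the place's recognizability a_i,
% discounted by an exponential friction on the distance d_i(x).
\[
  P(i \mid x) = \frac{a_i \, e^{-\beta d_i(x)}}{\sum_{j} a_j \, e^{-\beta d_j(x)}},
  \qquad \beta > 0 .
\]
% Worked example: with a_1 = a_2 = 1, \beta = 1, d_1 = 0.5 and d_2 = 1.5,
% P(1 \mid x) / P(2 \mid x) = e^{1} \approx 2.7, so the nearer place is almost
% three times as likely to be selected: extension alone already differentiates
% the probabilities of interaction.
```

Under this reading, the friction parameter measures how strongly space differentiates choices: as it approaches zero, all places become equally probable and space loses its selective effect.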

In reality, people recognize the social information of activity places, along with the uses of busy streets or local centralities. We are able to retrieve these differences from the built environment and relate to them. One of the key points of our argument is that these differences help us make selections. Every time we move through the city and select places in which to perform activities, we participate in the large-scale coordination of action. We enact space as a “referential system” [49, 50], a set of semantic bits indexing social activities, informing and guiding daily decisions. Space becomes inherently part of the conversion of orientations into interactions, that is, connections that crisscross sequences of actions, re/creating the interaction system. By means of a structure that both materializes and restricts the quasi-endless combinatorial possibilities of interaction, this system can acquire sufficient internal guidance to make its own reproduction possible. By distributing sufficiently recognizable differences in the probability of interaction, cities express and release local forces of reproduction of social systems. The interfaces between action, cognitive, and spatial systems transform entropic complexity into structured complexity, constantly reordering actions in time.

This materialistic viewpoint can be integrated with other views of social organization. Further work may include the relational roles of cultures, language, social norms, and other steering media, along with explorations of possibly similar effects of communication technologies on entropy. At this stage, we attempted to integrate a social dimension (agents engaged in coordination), a spatial dimension (an extensive environment), and an informational dimension (differences in action and in the environment itself), a demanding interdisciplinary effort. The approach identifies a central role for coevolving agents actualizing aims as they coordinate (cf. [68]) and a central role for the environment, exterior to but responsive to agents, leading to cumulative changes throughout their history. The model also identifies roles for cognition and memory in the ways agents deal with their own orientations and choose their actions and activity places, changing their spatial environment, with consequences for the overall levels of entropy. All this suggests that the behaviour of the model cannot be reduced to or fully predicted from the behaviour of individual variables, which is a key aspect of complexity.

So one of the main aims of this paper was to assess whether space, as an information environment, had any effect on the coordination and entropy of interaction. However, if physical distance and social information in activity places matter, what about city size and the internal spatial structures of cities? What about rural areas or rarefied suburbs? Since our model explores an abstract circular city, it only begins to answer such questions. Our findings indicate that shorter distances between activity places tend to reduce social entropy. Density seems to matter: denser spatialities (as opposed to rarefied ones) have a role to play. A path for further development here is Jacobs’s [69] idea that the diversity of activities generates positive externalities. And diversity, as spatial economics shows us, has to do with the size of cities and the density of population (e.g., [70, 71]). Entropy may also be influenced by diversity. Batty et al. [66] have shown that information increases as cities get bigger. The remaining question is whether larger, denser, and more internally structured cities could create, process, and reduce entropy more intensely than smaller cities, transforming large pools of activities into differences in the probability of interaction. Although we proposed a unidimensional model, our agents’ behaviour can be explored in more realistic representations of cities, testing the roles of density, topology, and diversity, while going beyond perfectly informed agents. By proposing the ring model, we intended to simplify the city to a minimal system in which the spatial dimension of social organization becomes intelligible, without losing fundamental properties.
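
One way to begin probing these questions computationally is to compare action entropy under denser and more rarefied layouts of activity places on the same ring. The snippet below is only an illustrative experimental setup (its layouts, friction function, and parameters are assumptions, not our model’s specification, and it asserts no result); it isolates the spatial factor by letting agents select actions from friction-weighted place information alone.

```python
# Illustrative experiment: compare action entropy under two assumed layouts of
# activity places on a ring ("dense" = places packed in a short arc,
# "rarefied" = places spread around the whole ring).
import math
import random

K, N_AGENTS, N_PLACES, RING, BETA, STEPS = 5, 200, 40, 1.0, 5.0, 30

def ring_distance(a, b):
    # Shortest distance between two positions on the ring.
    d = abs(a - b)
    return min(d, RING - d)

def entropy(counts):
    # Shannon entropy (in bits) of the distribution of actions across agents.
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def run(layout, seed=1):
    random.seed(seed)
    if layout == "dense":
        positions = [random.random() * 0.1 for _ in range(N_PLACES)]   # short arc
    else:
        positions = [random.random() * RING for _ in range(N_PLACES)]  # whole ring
    places = [{"pos": p, "type": random.randrange(K)} for p in positions]
    agent_pos = [random.random() * RING for _ in range(N_AGENTS)]
    actions = [random.randrange(K) for _ in range(N_AGENTS)]
    for _ in range(STEPS):
        for i, pos in enumerate(agent_pos):
            weights = [0.0] * K
            for pl in places:
                weights[pl["type"]] += math.exp(-BETA * ring_distance(pos, pl["pos"]))
            actions[i] = random.choices(range(K), weights=weights)[0]
    return entropy([actions.count(k) for k in range(K)])

for layout in ("dense", "rarefied"):
    print(layout, round(run(layout), 3))
```

A fuller experiment along these lines would also vary the number of places, the number of activity types (diversity), and the ring size, which is how the questions about city size and internal structure raised above could be addressed.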

Should we conclude from all this that entropy is a problem for social systems? Our approach suggests that it is not. Entropy is a necessary force. It means that novel orientations are entering the interaction system and must be dealt with. Entropy only becomes a problem if it is not converted into organization, that is, if numerous potential actions are not converted into smaller sets of actual interactions. The effects of new actions on entropy would be positive once actualized, having to do with the diversity of agency. Conservative systems are likely to face less entropy, but they are also likely to produce less novelty, having homogeneity as a problematic horizon. Fluctuations of entropy seem vital to complex societies.

Finally, our intention was to explore entropy as a means to think about the conditions of social organization “from another angle,” so to speak, from the viewpoint of challenges involved in social reproduction. Hence, the approach focused on the idea of uncertainty surrounding agents dealing with aims and decisions and the different probabilities of interaction that follow, an attempt to bring social, cognitive, informational, and spatial theories under a single roof. Our approach sees cities as connective systems produced to create the delicate fabric of interaction that keeps large numbers of agents coherently living together. It also suggests that the varying states of entropy reveal deep connections between social, informational, and spatial systems.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors would like to thank the Ecole Polytechnique Federale de Lausanne (EPFL), CAPES, FAPEMIG, FAPESP, and CNPq for financial support, Maíra Pinheiro, Henrique Lorea, and colleagues at ABMUS2017 for earlier discussions, and Mike Batty for exchanges and inspiration. Finally, the authors thank Romulo Krafta, maverick of the complexity studies of cities in Brazil, for his input and support throughout the years. This work is dedicated to him.