Abstract

The ongoing global pandemic has highlighted the need for comprehensive approaches to issues that transcend geographical and cultural boundaries. This article therefore provides a general yet abstract review that allows a broad spectrum of readers to learn the basic principles of three related concepts: systems, cybernetics, and complexity. To exemplify these concepts, we also review works from the last decade that build on systems theory, complexity, and cybernetics. The result of this review should help break down the barriers of reductionist silos of knowledge and foster multidisciplinary and interdisciplinary dialogue.

1. Introduction

Recently, the literature has highlighted that optimisation, multiattribute decision-making, and human factors are the driving themes of current research in engineering and systems science [1]. While these areas play a fundamental role in solving problems in our society, their theoretical substratum is barely visible to technicians and invisible to many nontechnicians. This work makes that conceptual substratum visible through three interrelated concepts: systems, cybernetics, and complexity. Our review is general but at the same time abstract, establishing a background so that a broad spectrum of readers may grasp the basic principles of this critical subject. In addition, to give updated examples of these concepts, we review works from the last decade that build on systems theory, complexity, and cybernetics.

The initial motivation stems from the challenge of integrating systems thinking into engineering and science, especially during the ongoing global pandemic, which has highlighted the need for comprehensive approaches to issues that transcend geographical and cultural boundaries. An essential aspect of cybernetics and the systems approach is analysing and defining strategies for solving problems regardless of their nature, that is, defining and describing every problem with the same conceptualisation: abstracting and observing its structure, dynamics, and components, its relationships with the environment, and its level of complexity and, with all this, evaluating the best strategy for a solution. These aspects are elaborated upon in this review.

The historical journey of these concepts goes far back. Ancient Greek thinkers had already observed that the whole is more than the sum of its parts. Plato coined the term cybernetics, with an interdisciplinary outlook, in association with the art of navigating or steering. The concept refers to navigation, and its use implies permanent knowledge of the ship's direction; this was first called regulation and later became known as communication and control. Subsequently, many philosophers contributed to systemic views, scientific methods, the theory of evolution, and the social sciences. However, von Bertalanffy's general systems theory decisively brought these different currents of thought together, proposing a comprehensive approach for analysing problems that, by their nature, are complex and multidisciplinary.

Cybernetics reveals that, to achieve a goal, a system must use signals about deviations from that goal to direct its behaviour. Wiener's cybernetics appeared in 1948 [2], and this fundamental contribution emerged almost simultaneously with two others: Shannon's information theory [3] and von Neumann and Morgenstern's theory of games [4]. Wiener took the concepts of cybernetics, feedback, and information far beyond technology and generalised them into the biological and social realms. In the beginning, there was significant development of first-order cybernetics, closely associated with the mechanistic models more typical of engineering. Later, von Foerster extended the field to psychic and social systems, including the observer in the system and giving rise to second-order cybernetics. Moreover, cybernetics as a theoretical field continues to develop; examples are the Principia Cybernetica Project, the American Society for Cybernetics, and applications to social systems and psychological therapy.

We will approach the complexity of systems through two notions: distinction, which is the recognition of a difference, and complexity, understood as that which is difficult to understand. The elements provided by British cybernetics, besides exemplifying fundamental notions of its theoretical framework such as variety and regulation, will give us an initial vision of the concept of systems complexity. A first approach is through variety, the number of possible states that a system can have. However, given the difficulty of defining systemic complexity, we will also examine further characteristics of complex systems: emergence, feedback, nonlinearity, spontaneous order, interdependence and self-organisation, noncentral control, hierarchies, numerousness, and diversity.

Finally, we will focus on a general analysis of complexity measures according to different currents of thought. For this path, we follow Lloyd's compilation, discussed in [5], which groups the many measures of complexity around three questions: how difficult is the system to describe? How difficult is it to create? And what is its degree of organisation? Our exploration considers complexity, following Ladyman et al. [5], as disorder and diversity, feedback, computational measures, thermodynamic depth and statistical complexity, effective complexity, and logical depth.

2. Fundamentals

The representation of reality in models that allow the study of its behaviour is influenced by context. In premodern times, Western culture was based on the thought of Greek teachers and philosophers and on the principles of Christianity established in the Bible. This context of preconceived truths created a hierarchical ontology in which God came first. Moving forward to the modern era, between the 15th and 18th centuries, thought was influenced by empiricist philosophers such as Francis Bacon [6]. Bacon established that knowledge of the world could be achieved through empirical observation, and thus began the rise of scientific thought: the establishment of hypotheses, the search for data, and the validation of truths. In this way, new knowledge is generated that can be accumulated and put at the service of humanity.

Although Aristotle already established the ideas of systems thinking with his well-known dictum that "the whole is more than the sum of its parts," we cannot fail to mention other thinkers. Philosophers such as Descartes, Kant, and Hegel established important guidelines for the interpretation of reality and its representation as knowledge, which significantly impacted the systems approach [7].

Rationalism, the view that knowledge should be based solely on reason and not on superstition and tradition, represents one of the fundamental philosophical currents in the development of scientific thought. In this development, Descartes (1596–1650) plays a fundamental role, above all as the author of the Discourse on the Method [6], in which he presents his "methodical doubt." For Descartes, the "I think" (cogito) is the first truth because it is the one thing that cannot be doubted: one can doubt everything except that one doubts, and since doubting is an exercise of thinking, there is no way to doubt thinking itself.

The importance of Descartes for systemic thinking rests on two essential aspects of Cartesian thought. The first is his emphasis on the essential unity of knowledge, which contrasts sharply with the Aristotelian conception of the sciences as a series of separate disciplines. The second is captured by his simile of the tree, which refers to the usefulness of philosophy for ordinary life: the tree is valued for its fruits, which are gathered from the branches, while metaphysics, or philosophy, is located at the roots. This image perfectly captures the Cartesian belief in what has come to be called foundationalism, the view that knowledge is to be constructed from the bottom up. Descartes' method is grounded in mathematics, since it provides truths that are certain and secure, based on strict reasoning, and has four rules: evidence, analysis, synthesis, and verification. Even so, "innate ideas" are necessary, ideas that we do not make but that come already given and are part of our reason; Descartes alludes to the self, the world, and God as the main ideas, later treated as substances.

David Hume (1711–1776), the Scottish philosopher and author of A Treatise of Human Nature, sustained another significant current of modern philosophy, empiricism, and was a strong opponent of Cartesian ideas. Hume argued that knowledge, contrary to what Descartes thought, is based not on the mathematical operation of reason but on experience. Furthermore, this knowledge contains, again in contrast to Descartes, no innate ideas, since for Hume ideas come from experience and from impressions of reality received through the senses. Hume does not despise reason, but he does relegate it to second place, maintaining that on the ethical plane reason is not the guide of life but rather occupies the place of "slave of the passions." Like Leibniz, Hume distinguished two kinds of knowledge, logical truths and matters of fact, and he stated that philosophy cannot go beyond experience: if a hypothesis claims to have discovered the ultimate original qualities of human nature, it must be rejected out of hand as chimerical and presumptuous. Taking a sceptical view, Hume builds a science of human nature based on observation and experience, characterised by a comprehensive and constructive treatment of human nature [6].

According to Jackson [7], p. 1, Kant's work is significant for systems thinking for three reasons. First, he felt that science could acquire greater recognition, as had already happened with Newtonian physics, and wished to contribute to it, while also believing it imperative to understand the limitations of science. Second, he was interested in "organicism" as a complementary approach to mechanistic thinking, especially in the study of nature. Third, he argued that humans can generate principles of moral conduct because they uniquely possess the autonomy of freedom. Rationalists such as Descartes believed it possible to arrive at knowledge about the nature of things through cogent thought alone. For Kant, however, rational thought alone leads to contradictions, such as proofs that God both exists and does not exist. Therefore, reason must be grounded in experience if it is to produce true knowledge.

At the beginning of the 19th century, Hegel criticised Kant for his ahistorical description of the mind. For Hegel, reason gives rise to reality but is itself historically conditioned. The process by which the mind can overcome its historical limitations and gain a holistic understanding of itself is dialectical. Hegel frequently states that the central theme of philosophy is reason, or the absolute, understood as an interpretation of the totality of which the natural world and human ends are parts [6]. Understanding the whole, the absolute, is obtained through a systemic unfolding of partial truths in thesis, antithesis, and synthesis, where the synthesis embraces the positive aspects of the thesis and antithesis and overcomes them. With each synthesis becoming a new thesis, every movement through this cycle gradually enriches our understanding of the whole system.

Husserl is another philosopher who has had a significant influence on systemic thinkers. He wrote his major works on phenomenology in the early years of the 20th century. The term phenomenology indicates that his interest was in phenomena, namely, all conscious mental activities, whether linked to sensory perception, imagination, or emotion; every such activity is the thinking of something. Philosophy is about discovering how the mind directs itself and gives meaning to the world through intentionality. Husserl distinguishes the apophantic and ontological domains: the apophantic is the domain of senses and propositions, while the ontological is the domain of things, states of affairs, and relations. Apophantic analytics examines the logical and formal structures of the former; formal ontology examines the formal structures of the latter. In his later work, this thinking brought him closer to Hegel's philosophy [6] as he became interested in the historicity of consciousness and began to see experiences as historically conditioned. Husserl's attention to the experience of the world attracted many philosophers and established a phenomenological tradition. His disciple Heidegger, following the same line, tried to recognise through language the being hidden amid its environment, transforming phenomenology into an investigation of being, and especially of being in the world in a particular social context. First, the being, Dasein, is thrown into a world it does not choose; others establish it. Second, it has to adopt a posture in order to act in that world: human existence. Finally, Dasein is discourse and must consistently articulate, either discoursing on or discussing the entities it encounters in ordinary situations.

We cannot fail to mention Piaget, who stated that cognitive development in children occurs when mental processes are reorganised due to the interaction between biological maturation and environmental experience.

In the more scientific realm, the rise of Newton established the first approaches to a linear view of the world. Science aimed to discover the laws of nature and model them mathematically, an idea accepted as absolute truth until the birth of quantum physics. The reductionist approach divides a problem into parts as far as possible and solves the resulting problems independently, as established by Descartes, in order to study its components. However, the first approaches to modelling nonlinear systems appeared in social systems, which possess high complexity and where traditional modelling tools are no longer sufficient. In this nonlinear scenario, the approach is oriented to studying interactions rather than being reductionist; attention turns to how the world is seen, with emphasis on people and their interactions. In the reductionist analysis approach, a problem or system is broken down into parts and each is studied separately. The systems approach, instead, follows a synthesis strategy: from the elements or components, it explores their interactions and infers the behaviour of the whole system. However, it is indispensable to establish, from a conceptual point of view, what is meant by a system in order to develop the systems approach strategy. Brent D. Ruben emphasises in the preface to Ackoff [8]: "Nature does not come to us in a disciplinary form. Phenomena are not physical, chemical, or biological. The disciplines are the ways in which we study the phenomena; they emerge from points of view and not from what is viewed. Hence, the disciplinary nature of science is a filing system of knowledge. Its organisation is not to be confused with the organisation of nature itself."

2.1. Systems

The most general conception of a system is that of a set of interacting parts. According to the Oxford English Dictionary, a system is "a set of things that work together as parts of a mechanism or an interconnecting network; a complex whole."

Notably, Bertalanffy himself ([9], p. 37) refers to systems by emphasising that similar conceptions and general points of view have been developed in various disciplines of modern science. According to Bertalanffy, science in the past tried to explain observable phenomena by reducing them to the interplay of elementary units that could be investigated independently of one another; systems of various orders were studied by investigating their respective parts in isolation. Hence the significant impact of his general systems theory on the integral treatment of natural, artificial, and social phenomena.

However, when speaking of systems, one must necessarily speak of the environment or surroundings: a system is distinguished from its environment, and it is a form that can be either physical or abstract. The concept of distinction, formally defined by Spencer-Brown [10], states that "a distinction is a boundary that separates two sides so that one side cannot reach the other side without crossing the boundary." It is therefore essential to highlight that a system has an edge that separates it from its environment, distinguishing the interior, the exterior, and the edge. Observing our environment through cognitive operations allows us to conceptualise it through different distinctions; that is, we can observe cars, hills, animals, and other forms that capture the exterior. Distinctions encompass all the perceptions we have through our senses, and the distinctions we make are determined by our biological structure, owing to our nervous system [11]. A deeper treatment of these elements, with good examples concerning the concept of systems, is given in Espejo and Reyes [12].

An analysis of the German theorist Luhmann [13] is provided by Eguzki Urteaga of the University of the Basque Country Department of Sociology [14], who emphasises that Luhmann breaks with the assumption that there is an actor or an action behind social communication. Luhmann goes further by conceiving his theoretical project not as an identity (the system) but as a difference (between the system and its environment): the system does not exist in itself but exists and is maintained only through its distinction from its environment. It is thus vital to highlight the relationship Luhmann draws between system and distinction; a system is a distinction and, therefore, has an interior, an edge, and an exterior. Luhmann distinguishes three types of systems: psychic, biological, and social. The social system consists only of communications; people are outside the social system, since it is conceived only as communications between people, and people belong to the environment (p. 249 of [13]).

To interact with the environment, a system must be organised, configured as an organisation, which raises the question: what is an organisation? One of the first to establish the notion of systems that organise themselves was Ashby [15], who states that parts are organised when communication occurs between them and when dependencies or conditions are established in their interaction. For example, if what happens in A is correlated with what happens in B, then only some events can occur in B; otherwise, there is no communication between them.

von Foerster [16] clarifies that the existence of such systems presupposes that they are always inserted in an environment that possesses order and available energy, at whose expense they live. Thus, according to von Foerster, self-organisation is sustained if (1) a self-organised system is a system that consumes energy and order from its environment; (2) there is an environmental reality in the sense suggested by acceptance of the principle of relativity, which states that if a hypothesis applicable to a set of objects holds for one object and simultaneously holds for another, it is then acceptable for all objects in the set, and the reality so accepted is a system environment consistent for two observers; and (3) the environment has structure, because it is itself a system outside of the system under study; of course, it may also be constituted by other systems that are themselves highly likely to be complex.

A system can interact with and adapt to the environment to the extent that it learns from it and manages that knowledge. Knowledge is the fundamental question that defines the domain of epistemology. Specifically, the Principia Cybernetica Web group, in its metasystem transition theory (MST), states that knowledge consists of models that allow a cybernetic system to adapt to its environment by anticipating possible perturbations. Models function as recursive generators of predictions about the world and the self. A model is necessarily simpler than the environment it represents, which allows it to run faster, i.e., to anticipate processes in the environment.

Thus, concerning system and environment, Luhmann states that self-reference refers to how a system can establish relations with itself and differentiate these relations from its relations with the environment (p. 44 of [17]). Self-reference implies the system's closure in itself, but this does not mean that it cannot establish relations with the environment. On the contrary, closure means that the system's operations are made recursively possible by the results of its own operations (p. 101 of [18]). An essential element of operational closure is that the system can define its limits through its operations, thus distinguishing itself from its environment; this is how it comes to observe itself as a system. From Spencer-Brown's [10] point of view, one can say that the system always operates within the form, inside it and not outside it. Operations run from start to finish within the system, which cannot intervene in the environment; whatever crosses the edge is not an operation of the system. Perhaps one could say that knowledge is possible precisely because the system is operationally closed (p. 63 of [18]). For Luhmann [19], autopoietic systems (living, psychic, and social) are operationally closed. All operationally closed systems react only to internal operations, which give rise to other operations, which in turn give rise to further operations (and so on), but always within the system's limits.

However, the above does not imply that the system cannot communicate with the environment or with other systems. On the contrary, there is a selective relationship, named interpenetration (p. 196 of [18]). This term, used to describe the especially close structural coupling between psychic and social systems, means that a system's active operation depends on complex conditions and achievements that must be guaranteed in the environment. Nevertheless, these conditions are not themselves active operations in the system; in other words, environmental conditions are neither part of the system nor independent operations of it. Thus, although it is indeed related to interpenetration, structural coupling has more to do with an external view that asks how systems are connected. How, then, can the system operate in an environment despite being autopoietic, that is, despite reproducing itself through its own operations? Maturana, who coined the concept of autopoiesis, holds that structural development depends on structural coupling to the extent that it does not produce structures incompatible with the environment [11].

Jackson (p. 63 of [7]) summarises that, in Luhmann's theory, systems are significantly differentiated from each other (for a social system, all the others are in its environment), so relationships develop between them. To explain how this can occur, Luhmann again turns to the work of Maturana and Varela, this time using the concept of interpenetration to describe extreme structural coupling. Social systems are operationally closed and, therefore, develop according to their own structural logics. However, they can be perturbed or irritated by other systems in their environments in ways that cause structure-determined changes. Over time, frequent irritations between two social systems can cause them to resonate with each other continually and become structurally coupled, in the sense that their relationship reaches a certain stability and they become dependent on each other. The association between the function systems of politics and economics, for example, is signalled by taxation and central banking. Both function systems retain their general autonomy, but integration reduces the freedom that each has individually.

2.2. Cybernetics

Before developing the ideas presented in this section, we emphasise one particular fact. In 1948, Norbert Wiener's cybernetics appeared as a result of developments in computer technology, information theory, and self-regulating machines (p. 15 of [9]). It was one of those coincidences that occur when ideas are circulating in the community: three fundamental contributions appeared at almost the same time, Wiener's cybernetics [2], Shannon and Weaver's information theory [3], and von Neumann and Morgenstern's game theory [4]. Wiener took the concepts of cybernetics, feedback, and information far beyond the technological fields and generalised them into the biological and social realms. von Bertalanffy clarified that systems theory is often identified with cybernetics and control theory, which is incorrect: cybernetics, founded on information and feedback, is but a part of general systems theory, namely, the theory of control mechanisms in technology and nature. Cybernetic systems are a particular case of systems that exhibit self-regulation. Additionally, Heylighen and Joslyn [20], in the Encyclopedia of Physical Science and Technology, state that the term cybernetics, derived from the Greek kubernetes, or "helmsman," first appeared in antiquity with Plato and reappeared in the 19th century with Ampère, who saw it as the science of effective government. The concept was revived by Wiener in Cybernetics, or Control and Communication in the Animal and the Machine [2]. Wiener defined cybernetics as the science of control and communication in the animal and the machine, i.e., the art of steersmanship (p. 1 of [21]). Coordination, regulation, and control are thus its themes. Cybernetics is therefore not concerned with what systems consist of but with how they work: it focuses on how systems use information, models, and control actions to orient themselves and maintain their objectives while counteracting various disturbances.

Cybernetics received contributions from different disciplines. It emerged from a series of interdisciplinary meetings held from 1944 to 1953 that brought together several notable postwar intellectuals, including Wiener, John von Neumann, Warren McCulloch, Claude Shannon, Heinz von Foerster, W. Ross Ashby, Gregory Bateson, and Margaret Mead. It is consistent with the principles of the general systems theory developed by von Bertalanffy [9]. Many concepts are related to cybernetics, such as self-regulation, feedback, self-replication, and goal orientation.

During the 1950s, cybernetic thinkers became aligned with the school of general systems theory (GST), founded at about the same time by von Bertalanffy, in seeking to build a unified science by discovering common principles governing open, evolving systems. GST studies systems at every level of generality, from static structures to metaphysical systems [22], whereas cybernetics focuses more specifically on goal-directed, functional systems with some form of control relationship [2]. The development of second-order cybernetics includes not only the system being observed but also the observer in the system [16], which significantly impacts applications to psychic and social systems. First-order cybernetics, identified more with mechanical engineering systems and of deep interest to automatic control engineers, computer scientists, and creators of automata, is oriented to the fulfilment of objectives; in early examples such as Wiener's antiaircraft work, the observer has no significant relevance. In systems from the social sciences, however, the observer is highly relevant, mainly because of the bias this effect may introduce into the results of the analysis.

One crucial aspect of cybernetics and the systems approach is that they allow the definition of strategies to solve problems independently of their nature. They permit every problem to be defined and described with the same conceptualisation, i.e., by abstracting and observing its structure, dynamics, and components, as well as its relationships with the environment, and by establishing its complexity in order to evaluate the best solution strategy. The study of components and their interactions is essential, as are adaptation processes and knowledge management, which is the key to system learning [23]. Concepts such as order, organisation, complexity, hierarchy, structure, information and control, system-level interactions, boundary, distinction, environment, and homeostasis, among others, are manifested in systems of different types.

The primary analysis in cybernetics concerns the difference between one phenomenon and another: the phenomenon itself is not so important; what matters is what makes one phenomenon different from another. This approach has its origin in Leibniz and is best expressed by Bateson [24], who observes that where there is relevant information, there is a difference that makes one object different from another. This distinguishing characteristic later gave rise to the development of inheritance, fundamental in the programming and design of object-oriented systems. Cybernetics as a theoretical field continues to develop; examples are the Principia Cybernetica Project and the American Society for Cybernetics. In addition, many applications in social systems and psychological therapy have been developed.

2.3. Variety and Complexity

Let us consider that a distinction is the recognition of a difference and that complexity is that which is difficult to understand. Thus, we could say that something is complex when it is challenging to understand and when many differences can be recognised in it. Below, we introduce concepts that allow a first view of complexity, exemplified by variety and regulation, fundamental elements of cybernetics. A critical centre for the integral, multidisciplinary study of complexity is the Santa Fe Institute.

One first approach is that of the British school of cybernetics. According to Pickering [25] and p. 306 of [7], the unique feature of British cybernetics is that it abandons the search for objective knowledge in the traditional sense and instead advocates a process of discovering what is possible when systems interact with the world. Thus, Beer, one of its most renowned representatives and inventor of the viable system model (VSM), converts Ashby's brain model into an "embodied organ" interested in interpretation rather than representation. One cannot ignore the importance that Beer [26, 27] gives in the VSM to the organisation's interaction with the environment. Beer [27] also emphasises that complexity is relative to the observer: for an entrepreneur, complexity will be associated with human resources, materials, equipment, and capital, whereas for a computer programmer it is associated with lines of code, logic, number of inputs and outputs, and interaction with other systems, among others. As a first approach to the idea of complexity, we will use the formulation of Ashby [15, 21] and consider variety as a way to measure complexity. For example, suppose we have the following set, which contains 12 elements but only three distinct ones, so that its variety is three:

$$S = \{a, b, c, a, b, c, a, b, c, a, b, c\}.$$
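
To make the idea concrete, here is a minimal Python sketch; the elements a, b, and c are illustrative stand-ins, not data from the reviewed works. Variety is simply the count of distinct elements:

```python
# Variety as the number of distinct elements in a collection.
observations = ["a", "b", "c"] * 4  # 12 elements, only 3 distinct

variety = len(set(observations))
print(variety)  # -> 3
```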

It is important to emphasise that variety depends on the observer. If we have two lights, we will have a variety of 4, considering all the possibilities of on and off for each of them. However, if we move away and can no longer distinguish them individually, we will only distinguish on or off, and thus we will have a variety of 2. A related fundamental concept is constraint, a relation between two sets: it occurs when the variety that exists under one condition is smaller than the variety that exists under another [15].

Variety V is a measure of complexity defined as the number of possible states that a system can adopt. We assume that the variables representing a particular state are discrete and that a state is represented by a set of state variables. The variables used to describe a system may be neither discrete nor independent. For example, if a particular cat is either small and restless or large and aggressive, then the variables size and character are related. If size takes two values, small and large, and character takes two values, restless and aggressive, then four states are conceivable, but because the variables are related, the system has only two possible states. If size is instead measured in kilograms, it can still be categorised into two levels.

More generally, if the variables that describe a system yield a total number of feasible states smaller than the number of states we can conceive, the system is said to be constrained [21, 28]: it cannot be in all potentially conceivable states because some internal or external laws, relationships, or controls prohibit certain combinations of values for the variables. The constraint C can be defined as the difference between the maximum and the actual variety, that is,

$$C = V_{\max} - V.$$

The constraint reduces our uncertainty about the state of the system and thus allows us to make nontrivial predictions. In the example above, if we detect that the cat is small, we can predict that it will be restless. The constraint also allows us to formally model relationships, dependencies, or couplings between different systems or aspects of systems. Variety and its complement, constraint, can be generalised to a probabilistic framework, in which they are replaced by entropy and information. Here we consider complexity as equivalent to the feasible variety; it is a measure that allows us to compare the complexity of two or more systems. It is a rather theoretical concept, but it is an excellent way to understand the meaning of complexity through variety, as a first approach to the concept.
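
A short sketch of the cat example above, assuming the two feasible states named in the text, shows how the constraint is computed:

```python
from itertools import product

# The two variables of the (hypothetical) cat example.
sizes = ["small", "large"]
characters = ["restless", "aggressive"]

conceivable = list(product(sizes, characters))               # 4 conceivable states
feasible = [("small", "restless"), ("large", "aggressive")]  # the 2 states that occur

v_max = len(conceivable)     # maximum variety
v = len(feasible)            # actual variety
constraint = v_max - v       # C = V_max - V
print(v_max, v, constraint)  # -> 4 2 2
```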

Suppose we are comparing two higher-education institutions according to their mission functions: teaching, research, extension, internationalisation, and management. We could measure each through other metrics, but following Ashby [15], we essentially measure complexity, which allows us to compare the two institutions from a more objective point of view. For example, if each mission function has three possible levels, the total space of possibilities is $3^5 = 243$ states.

As another example, consider a company with capital, human resources, materials, and equipment. We would then compare complexities according to this point of view. If the items are measured in different units, we can discretise them and establish three categories for each; the total space of possibilities is then $3^4 = 81$. A restriction could be to consider only category 1 or 2 in human resources, in which case the total number of possibilities would be $2 \times 3^3 = 54$. Beer [26] states that we tend to hold low-variety representations in our minds to represent a high variety of situations in reality. We cannot attenuate the variety of new system states because they proliferate too quickly, which leads to significant errors in our judgments.
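
These counts can be checked by brute-force enumeration; the sketch below assumes, purely for illustration, that the first position of each company state holds the human-resources category:

```python
from itertools import product

levels = (1, 2, 3)

# Institution: five mission functions, three levels each -> 3**5 states.
institution_states = len(list(product(levels, repeat=5)))

# Company: capital, human resources, materials, equipment -> 3**4 states.
company_states = list(product(levels, repeat=4))

# Restriction: human resources (assumed at index 0) limited to category 1 or 2.
restricted = [s for s in company_states if s[0] in (1, 2)]

print(institution_states, len(company_states), len(restricted))  # -> 243 81 54
```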

However, we must adjust our mental models to absorb the variety of new situations. Therefore, a fundamental element of our systems analysis is to quantify the variety of systems represented as black boxes, where we know only the input and output variables. The mathematical way to calculate the variety of a black box as a function of its input and output variables is to raise the variety of the output variables to the power of the variety of the input variables. For example, if each variable can take the values zero and one and there are two input variables and one output variable, the input variety is $2^2 = 4$ and the output variety is 2. The variety of the black box is then $2^4 = 16$; these combinations reflect the static variety, which measures all possible mappings between input and output (see Figure 1).

Suppose now that the inputs arrive over time. Each state corresponds to a bit pattern of 0s and 1s; at a particular instant, a combination of 0s and 1s enters, and a response of 0 or 1 is output. For example, the input strings at times T1, T2, T3, and T4 could be 00, 01, 10, and 11, and the corresponding outputs 0, 0, 1, and 0. The total number of possible patterns is again $2^4 = 16$, now a dynamic variety. The variety is calculated considering that n is the number of input variables and m the number of output variables; in our example, the output variety 2 is raised to the input variety $2^2 = 4$, which gives $2^4 = 16$ possible patterns. Figure 2 shows the different patterns generated.

In any case, if the input’s possible values are q and the output is p and the total input and output variables are n and m, then the variety would be

Finally, according to Ashby [21], the law of requisite variety indicates that a system is viable when it can cope with the complexity of the environment in which it operates. Controlling a situation implies being able to cope with its complexity, which is measured by its variety. Thus, Ashby's law states that only variety can absorb or destroy variety: control is only possible if the variety of the controller is at least equivalent to the variety of the situation under control, in this case, the environment in which the system operates.

Thus, Ashby considers a system S that receives perturbations, represented by a vector D with a particular variety, from an environment A. S has a regulator R, a vector of actions taken for each perturbation. R acts on D and produces results Z; if R and D are vectors, then R × D = Z. Furthermore, there is an additional mapping from the set Z of results to a set E of values, which can be as simple as the two-element set {good, bad} and represents the purpose of the system.

In this case, the regulator limits the results to a particular subset, keeps some variables within certain limits, or even keeps them constant. Suppose the varieties are measured logarithmically, and let the varieties of D, R, and the actual results be $V_d$, $V_r$, and $V_o$, respectively. Then the minimum value of $V_o$ is $V_d - V_r$. If $V_d$ is given, the minimum of $V_o$ can only be decreased by a corresponding increase in $V_r$. This is the law of requisite variety: restricting the results to the subset valued as good requires a corresponding variety in R.
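
As a compact restatement in the logarithmic convention above (the formatting is ours),

$$V_o \geq V_d - V_r,$$

so, for instance, if the disturbances carry $V_d = 10$ bits of variety and the regulator can muster only $V_r = 7$ bits, the outcomes cannot be confined to fewer than 3 bits of variety, however the regulator is designed.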

Elements developed by Wiener, including feedback, control, communication, self-regulation, and the coordination of action, have allowed a better understanding of natural phenomena. These ideas have also enabled a wide range of applications in engineering and management [26, 27]. A practical example from the engineering field, described by Espejo and Reyes (p. 24 in [12]), is the centrifugal regulator, a mechanical model of self-regulation that maintains a constant gas load. Another example by the same authors (p. 26 in [12]), referring to applications in organisations, highlights that control in organisations does not mean its naive interpretation as a simple process of coercion; on the contrary, it refers primarily to self-regulation, a homeostatic process similar to the one just described. We can thus say that, in a complex system of any kind, interactions occur between a vast number of elements, and innumerable phenomena of communication, feedback, and self-regulation develop among them.

3. Systems and Complexity

The review and scope of complexity in systems began in the 19th century, when Carnot and other scientists initiated the development of thermodynamics on the basis of Newton's principles and laws, and predictions were made according to the laws of thermodynamics. Since then, several schools of thought on the complexity of systems have developed, each addressing systems with a different orientation towards their complexity [29]. Warfield [29] indicates that there is no agreement in the community in this respect; he also states that the quality of what emerges from a science depends on its support infrastructure, and if that infrastructure is not adequate for its development, the science will be subjected to profound criticism. Warfield [29] highlights the schools of thought of system dynamics, adaptive systems theory, chaos theory, the structure-based school, and those that are not classifiable. We review these proposals in the following sections.

3.1. System Dynamics

System dynamics, based on the early ideas of Forrester [30], is a methodology for analysing and modelling behaviour over time in complex environments. Based on identifying feedback loops between elements and information and material delays within the system, a model of level variables and flows is developed, treating the modelled phenomenon as a system of stocks and flows connected by feedback loops. Limits to Growth, one of the applications of system dynamics commissioned by the Club of Rome from a group of Massachusetts Institute of Technology (MIT) scientists, was published in 1972, before the oil crisis; the report's lead author, Meadows [31], a biophysicist and environmental scientist specialising in system dynamics, prepared it with a team of 17 scientists. Senge's fifth discipline [32] represents a more qualitative approach that bases its action on five driving ideas: (1) personal mastery: the ability to clarify vision, focus energy, be patient, and act with objectivity; (2) mental models: understanding the deep-seated mental images that influence actions; (3) building a shared vision: the ability to develop a vision of the future that everyone shares; (4) team learning: the ability of a team to collaborate to produce exceptional results; and (5) systems thinking: the ability to see patterns of change and how the parts affect the whole.
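
To indicate how level (stock) variables and flows translate into a simulation, here is a minimal stock-and-flow sketch; the population model, parameter names, and values are illustrative assumptions rather than a model from the reviewed works:

```python
# A single stock (population) with one inflow (births) and one outflow (deaths).
def simulate_population(initial=100.0, birth_rate=0.04, death_rate=0.02,
                        dt=1.0, steps=50):
    stock = initial
    history = [stock]
    for _ in range(steps):
        inflow = birth_rate * stock        # flow feeding the stock
        outflow = death_rate * stock       # flow draining the stock
        stock += (inflow - outflow) * dt   # the level integrates the net flow
        history.append(stock)
    return history

print(round(simulate_population()[-1], 1))  # net 2% growth compounded over 50 steps
```

Feedback enters through the dependence of both flows on the current level; adding delays between information and action is what produces the oscillations and overshoots that system dynamics studies.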

3.2. Adaptive Systems Theory

This proposal was predominantly associated with the Santa Fe Institute but is now also associated with many business schools. The best way to understand this school of thought is to review how Mitchell [33] frames it: how is it possible for those systems in nature that we call complex and adaptive (brains, insect colonies, the immune system, cells, the global economy, and biological evolution) to produce such complex, adaptive behaviour from simple underlying rules? How can interrelated but selfish organisms collaborate to solve problems that affect their overall survival? Are there any overarching principles or laws that apply to these phenomena? Can life, intelligence, and adaptation be considered mechanical and computational? If they can, could we build genuinely intelligent, living machines? And if we could, would we want to?

3.3. Chaos Theory

This school of thought’s origins correspond to various groups, especially in the field of physics. However, chaos is a characteristic of a complex system; Aristotle already conducted studies of chaos. An excellent start to understanding this school is to review what many scientists and mathematicians who study such things have used. This is a more straightforward form of the logistic model called a logistic map, which is perhaps the most famous equation in the science of dynamical systems and chaos. There are also essential contributions from physics and those made by the ones who received Nobel Prize in Chemistry, Ilya Prigogine [34].

3.4. The Structure-Based School

Warfield developed this proposal with his colleagues and associates, emphasising the collaborative, computer-assisted construction of the structure of a problem situation as a fundamental step in resolving the complexity of the phenomenon under study [29, 35]. Warfield adds that the differences among the schools lie in the particular formalisms underlying their thinking and in the extent to which their metaphors (e.g., "chaos" or "adaptive systems") are replaced by the specific results arising from applications of these formalisms.

3.5. Unclassifiable

This vein of thought corresponds to a vast school formed by professors linked to academia and by professionals of different specialisations. According to Warfield, these subgroups are characterised either by interdisciplinary approaches (e.g., those fostered by integrative studies, predominantly among liberal arts faculty) or by postmodern approaches that often challenge organised knowledge. Moreover, none of them openly acknowledges complexity in its philosophy or practice.

4. Complexity

According to Ladyman et al. [5] and Ladyman and Wiesner [36], complexity science is a new science or knowledge area still under development; it is studied in different areas of science, acquiring new conceptualisations and approaches relative to each of these areas or knowledge subjects. They add that a single phenomenon called complexity is being developed in different branches of science. Mitchell [33] states that a complex system is one in which large networks of components with no central control and simple rules of operation give rise to complex collective behaviour, sophisticated information processing, and adaptation through learning or evolution. An alternative short definition is a system that exhibits nontrivial emergent and self-organising behaviours.

The importance of physical systems makes it necessary to incorporate elements established by Prigogine and Stengers [34]. Many social, biological, and physical systems have characteristics rooted in nonequilibrium thermodynamics. Cooperation phenomena are also found in inanimate nature through the formation of ordered structures in physics and chemistry. These structures, which seem to be more the rule than the exception, arise from strongly irreversible and dissipative phenomena in energy and matter and appear in systems that exchange energy and matter with the environment, i.e., open systems, unlike the closed systems frequently presented in thermodynamics. In these open systems, the entropy balance considers two terms, one due to exchanges with the environment by mass or energy transfer, which can be negative or positive, and the other due to irreversible internal processes; their sum gives the total entropy change. Unlike under equilibrium conditions [37], depending on its parameters and on initial and environmental conditions, an open system can adopt a wide variety of forms and structures, showing a specific adaptation to the environment. This situation appears in highly dissipative conditions far from equilibrium, which require a permanent supply of energy from outside to be maintained. When a system X is coupled to an external system Y, and Y guides X to meet some established goals by sending steering signals, then system X either corrects its behaviour or does not; X's organisation is then monitored from outside, and X does not have its own organisation. If the control system, or regulator as Ashby calls it, is within the system itself, the system is said to be self-regulating and becomes self-organising.
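
In the standard notation of nonequilibrium thermodynamics (the formula itself does not appear in the text and is added here for clarity), the entropy balance of an open system reads

$$dS = d_e S + d_i S, \qquad d_i S \geq 0,$$

where $d_e S$ is the entropy exchanged with the environment through mass or energy transfer, which can take either sign, and $d_i S$ is the entropy produced by irreversible internal processes, which is never negative.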

Many physical systems offer examples of self-organisation, such as the Van der Pol oscillator [37], showing nonlinear behaviour when their parameters are varied. Most importantly, over a given period, the balance of absorption and dissipation of energy is zero: the system starts to oscillate spontaneously, sustaining and stabilising itself through a continuous balance between energy absorption and dissipation. These characteristics can be projected onto other systems, such as ecological systems or societies, to produce hierarchies of self-organising elements. These elements can themselves be open systems, so there will be multiple positive or negative feedback loops. In general, a system can be stable over a long initial period and still develop new equilibrium states after a specific instant; despite this, the system moves away from each new equilibrium and advances to further equilibrium states, so that a kind of succession of equilibrium gaps appears as the whole process evolves [34]. Finally, Boulton et al. [38], cited on p. 127 of [7], state that dissipative structures lend themselves to rich interpretation in the fields of social science and project management. Prigogine's models encompass internal and external "fluctuations" and microscopic diversity that fall within the "realm of complex evolutionary models" relevant to both the social and natural sciences.
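
A minimal numerical sketch of the Van der Pol oscillator, $\ddot{x} - \mu(1 - x^2)\dot{x} + x = 0$, illustrates this self-sustained balance; the parameter values and the simple Euler scheme are our illustrative choices:

```python
import numpy as np

# Van der Pol oscillator integrated with an explicit Euler scheme.
def van_der_pol(mu=1.0, x0=0.1, v0=0.0, dt=0.001, steps=50_000):
    x, v = x0, v0
    xs = np.empty(steps)
    for i in range(steps):
        a = mu * (1.0 - x * x) * v - x  # nonlinear damping plus restoring force
        x += v * dt
        v += a * dt
        xs[i] = x
    return xs

xs = van_der_pol()
# A small initial perturbation grows and settles onto a stable limit cycle
# (amplitude close to 2 for mu = 1): absorbed and dissipated energy balance out.
print(round(xs[:5_000].max(), 3), round(xs[-10_000:].max(), 3))
```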

The following sections present a very general first approach to the question of which dimensions could characterise a complex system and help us understand what a complex system is. This approach is supported by the work of Ladyman et al. [5], Ladyman and Wiesner [36], Mitchell [33], Page [39], Warfield's classification [35], and other authors who have contributed to the development of complexity.

4.1. Emergence

Emergence is visible in different areas of human knowledge. Similar behaviours can be glimpsed in fields as disparate as physics, biology, epidemiology, sociology, political science, and computer science, among others; Mitchell [33], Jackson [7], and Ladyman and Wiesner [36] highlight this phenomenon in their books in a cross-cutting manner, with examples taken from different fields of knowledge. These fields are viewed from a higher level of abstraction in pursuit of explanatory and predictive mathematical theories that formalise the similarities between complex systems and that can describe and predict emergence phenomena. In this way, researchers seek a formal expression of the fact that the behaviour of systems is unique and that knowledge is one.

Emergence in systems arises from unexpected behaviours resulting from the interaction of the components or parts of a system, which may give rise to forms of adaptation through differentiation phenomena [40]. An essential element of emergence, as Ladyman et al. [5] indicate, is that an emergent object, property, or process can exhibit downward causation, while upward causation can produce degradation, in the sense that, for example, a subatomic element can produce radiation in a cell that eventually leads to the degradation of the whole system. Bottom-up and top-down causation often go together, as do interactions between different system levels. Put simply, the cause-and-effect chain can run from the whole to the parts and from the parts to the whole, also producing feedback in the interaction [41, 42]. The emergence to which we refer is viewed from the perspective of the evolution of nature and of practical approaches, such as fractal formation, the organisation of ant colonies, and the way levels of organisation in nature arise from fundamental physics and from the physical parts of more complex systems, including socio-technical systems, all treated as excellent examples by Mitchell [33].

From a more pragmatic point of view, Miller and Page [43] state that many of our most profound experiences of emergence come from systems in which local behaviour seems so wholly disconnected from the resulting aggregate that the latter appears to have magically emerged, echoing Clarke's observation on advanced technology. Some statistical behaviours point in the same direction: the central limit theorem indicates that the distribution of sample means of a population, independent of the distribution of origin, approaches a normal distribution as the sample size grows. Finally, Page (p. 24 of [39]) provides a sound synthesis of emergence: emergence refers to higher-order structures and functions resulting from the interactions of entities. Ant bridges, market failures, domestic cultures, and collective wisdom are all examples. Emergence underpins the idea of scientific scale: physics becomes chemistry; chemistry becomes biology; biology becomes psychology; and so on. In other words, cells are born from the interactions of atoms, organs from the interactions of cells, and societies from the interactions of persons. Each level of emergence yields higher-order features: cells divide, hearts beat, people think, and societies mobilise. As a result, emergence generates behaviours that are very difficult to predict from the interaction of the parts, and this occurs in both physical and social systems.
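
A standard toy illustration of complex aggregate behaviour arising from simple local rules, added here by way of example rather than drawn from the cited works, is an elementary cellular automaton; each cell updates from its three-cell neighbourhood alone, yet intricate global patterns emerge:

```python
# Elementary cellular automaton (rule 110) on a ring of cells.
def step(cells, rule=110):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 40 + [1] + [0] * 40  # a single live cell in the middle
for _ in range(25):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```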

4.2. Feedback

Wiener [2] states that a fundamental issue in cybernetics is that everything boils down to messages (i.e., information) sent and responded to (i.e., feedback); the effectiveness of the behaviour of society, or of any system, depends on the quality of these messages. He exemplifies this with the way we drive a car on an icy road: all our driving behaviour depends on knowledge of the slipperiness of the road surface, and when we use the steering wheel, there is an interaction with the road through signals that come and go until the vehicle is stabilised through a process of permanent checking. The same happens when we take a shower and regulate the water temperature: with our hands, we check whether the temperature is adequate; if it is low, we open the hot-water tap; if it is too hot, we open the cold-water tap; then we check again, continuing a process of trial and error. Another classic example is the antiaircraft project assigned to Norbert Wiener, a brilliant mathematician working at MIT [44]. The main problem was predicting the position of an aircraft: given the limited velocity of the cannon's projectiles, the operator could not aim directly at the aircraft, since by the time the projectile arrived, the aircraft would no longer be there; in addition, pilots were likely to move erratically to avoid being hit. Wiener's approach was to develop a mathematical theory for predicting future events by extrapolating incomplete information from the past, which ultimately became the basis of modern statistical communication theory [44]. Working with a young engineer, Julian Bigelow, he built an antiaircraft machine by connecting a cannon to the newly developed radar. Feedback is also a fundamental feature of adaptation, a property considered characteristic of complex systems [45]. Although feedback is classically associated with automatic control systems in engineering, the adaptation process involves very intense interaction between the different components and is very difficult to achieve without feedback processes that allow the fundamental adjustments required for adaptation. Mitchell [33] states that all these systems adapt; that is, they change their behaviour to improve their chances of survival or success through evolution or learning. Though the concept of feedback was coined in cybernetics, it is closely related to the process of adaptation, which requires feedback to achieve adjustments: in cybernetic machines these are short steps, but in organisms they take long periods of evolution, using information from the environment and from the system itself.
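
The shower example can be written as a tiny proportional feedback loop; the setpoint, gain, and instantaneous plant response below are illustrative assumptions:

```python
# Proportional feedback: act on the deviation between goal and measurement.
def regulate_temperature(setpoint=38.0, temp=20.0, gain=0.3, steps=12):
    for t in range(steps):
        error = setpoint - temp  # deviation from the goal (the signal fed back)
        temp += gain * error     # corrective action on the tap, then re-check
        print(f"step {t:2d}: temperature = {temp:5.2f}")

regulate_temperature()  # the error shrinks geometrically towards the setpoint
```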

4.3. Nonlinearity

Two variables have a linear relationship when a variation in one causes a constant proportional change in the other. Reductionist approaches, which decompose a problem in order to solve it, assume that the whole is the sum of its parts; however, nonlinear behaviours can generate much more complex dynamics that are very difficult to predict. The example of rabbits described by Mitchell [33], whose population doubles each generation and therefore grows exponentially, is very illustrative in this regard:

$$n_{t+1} = 2\,n_t,$$

where $n_t$ is the population in generation $t$.

Many researchers have used a simplified version of the logistic map, or logistic equation, which became well known through a scientific paper by the biologist Robert May and was further studied by the physicist Feigenbaum [46]. It represents a demographic model in which births and deaths are captured by the parameter R, and x represents the fraction of the territory occupied relative to the maximum habitation capacity of the species; in simple terms, it describes the behaviour of a population approaching a limit established by the capacity of the place it inhabits:

$x_{t+1} = R\,x_t\,(1 - x_t).$

May observed and demonstrated that slight variations in the parameter R cause very different behaviours in the values of x. The model is therefore used as an example of a system that, as R changes, undergoes a transition to chaos: it is representative of nonlinearity and, for particular values of R, behaves chaotically and becomes very difficult to predict. As R increases past specific points, the long-run fraction of occupancy x no longer settles down, bifurcating into oscillations and eventually into values that are difficult to predict. Also, Ladyman and Wiesner (p. 48 in [36]) add that the universe contains many components that interact with each other in a nonlinear manner: "There is a nesting of emergence structure on many spatial scales. Each galactic structure represents the history of the early universe and the symmetry breaking that gave rise to the fundamental forces and subatomic particles, as well as the more specific history of galaxy formation itself." From the natural sciences, Page [39] indicates that Stuart Kauffman, a physician and theoretical biologist influenced by the cyberneticists McCulloch and Ashby, presents an attractive idea: building computer models to illustrate spontaneous emergence in biological systems. Kauffman represented the interaction of agents in models of coevolution, considering the environment as a rugged landscape over which they move, with peaks of different heights separated by valleys arising from the nonlinear interactions between agents possessing different attributes. Thus, nonlinear interactions in complex systems generate unexpected and very difficult-to-predict behaviours in the system's overall dynamics.
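The logistic map is easy to explore numerically. The short Python sketch below iterates $x_{t+1} = R\,x_t(1 - x_t)$ for three illustrative values of R, showing the long-run behaviour moving from a fixed point, to a period-2 cycle, to chaos; the particular R values and the initial condition are chosen only for illustration.

```python
# Iterating the logistic map x_{t+1} = R * x * (1 - x) for several R values.
# R = 2.8 settles on a fixed point, R = 3.3 on a period-2 cycle,
# and R = 3.9 wanders chaotically, illustrating sensitivity to R.

def logistic_orbit(R, x0=0.2, transient=200, keep=6):
    x = x0
    for _ in range(transient):      # discard transient behaviour
        x = R * x * (1.0 - x)
    orbit = []
    for _ in range(keep):           # record the long-run values of x
        x = R * x * (1.0 - x)
        orbit.append(round(x, 4))
    return orbit

for R in (2.8, 3.3, 3.9):
    print(f"R = {R}: {logistic_orbit(R)}")
```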

4.4. Spontaneous Order, Interdependence, and Self-Organisation

Ladyman et al. [5] indicate that a fundamental idea in complex systems research is that order in the behaviour of a system can arise from the aggregate of a large number of uncoordinated interactions between elements. However, it is not at all easy to say what order is; related notions include symmetry, organisation, periodicity, determinism, and pattern. Ladyman also points out that total order undermines a system's complexity, because it implies a bureaucracy that controls the system's behaviour. Interdependence is an element that emerges and exerts influence in the ongoing search for order in systems, including systems with operations specific to the armed forces [45]. Order and interdependence are permanently related: in dynamic processes, forces are established that persistently seek the attractors of equilibrium.

Concerning complex interacting systems, an ecosystem consists of organisms belonging to many different species, competing or cooperating within their shared physical environment [47]. Another example is the market, where different producers compete and exchange money and goods with consumers. Although the market is highly chaotic and nonlinear, this system generally reaches an approximate equilibrium in which the changing and conflicting demands of consumers are satisfied (a toy sketch of such a price-adjustment process is given at the end of this subsection). Beinhocker [48] indicates that traditional rules cannot explain the economy because of three factors: first, wealth has grown explosively; second, complexity has grown explosively; and third, no one directs these events, and no one is responsible for them. A different explanation is therefore required, more like a complex adaptive system in which agents interact through inductive rules, with much peer-to-peer interaction, using imperfect information in an environment of high computational power and fast learning; from these interactions, unexpected patterns of behaviour often emerge. The economy is thus seen as a dynamic system with heterogeneous agents that learn over time, adapt and change, interact with institutions, and modify not only economic mechanisms but also behavioural ones, generating evolutionary dynamics that permanently create innovation.

From the field of physical chemistry, an essential contribution to the conception of spontaneous order and self-regulation is that of Ilya Prigogine [34, 49], Nobel Prize winner in Chemistry, who introduced the concept of dissipative structures in open systems with an extensive exchange of energy and matter with the environment, corresponding to irreversible thermodynamic systems far from equilibrium. The most relevant characteristic of these structures is that they are self-organised, coherent structures in systems far from equilibrium, associating the ideas of order and dissipation. Prigogine observed that new types of structure appear spontaneously far from equilibrium: from chaos arise ordered structures that require an input of energy to sustain themselves, that do not maintain linear relationships, and that are impossible to predict accurately. These structures generate momentary equilibria that can lead to new expansions into situations qualitatively different from those near equilibrium. A classic example is the Bénard instability, in which a liquid is held in a container whose lower surface is heated, creating a temperature difference between the bottom and the top. There is, therefore, a temperature gradient, and heat is conducted from the bottom to the top. Instability occurs when the gradient exceeds a specific threshold: heat transport by conduction is then augmented by convective transport due to the collective movement of the particles. Vortices form temporarily and disappear when the heat source is removed, returning the system to its initial condition.
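Returning to the market example above, spontaneous order can be caricatured in a few lines of code: a price that adjusts in proportion to excess demand settles near equilibrium with no central coordinator. The demand and supply functions and the adjustment gain below are invented purely for illustration.

```python
# Toy illustration of spontaneous order in a market: a price adjusted
# in proportion to excess demand settles near equilibrium without any
# central planner. Demand/supply functions are invented for illustration.

def demand(p):
    return max(100.0 - 2.0 * p, 0.0)

def supply(p):
    return 3.0 * p

price = 5.0
for step in range(15):
    excess = demand(price) - supply(price)   # local signal, no central control
    price += 0.05 * excess                   # simple adjustment rule
    print(f"step {step:2d}  price {price:6.2f}  excess demand {excess:+8.2f}")
# Equilibrium where 100 - 2p = 3p, i.e., p = 20
```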

4.5. Noncentral Control

A widely agreed characteristic of complex systems is the lack of centralised control [5]. Mitchell (p. 8 in [33]) indicates that the immune system consists of many different types of cells distributed throughout the body (in the blood, bone marrow, lymph nodes, and other organs), and this collection of cells works together effectively and efficiently without any central control. Mitchell (p. 12 in [33]) also states that large networks of individual components (ants, B cells, neurons, stock buyers, and website creators, among others) follow relatively simple rules with no central control or leader; it is the collective actions of many components that give rise to the complex, hard-to-predict, and changing patterns of behaviour that fascinate us. Often these interaction rules are elementary, yet they generate complex systemic behaviours. There is no centralised information system guiding the behaviour of the whole, so the system's global behaviour must be inferred from the interactions. Heylighen and Gershenson [47] emphasise the same point in artificial systems: in a neural network, all neurons are connected directly or indirectly to each other, but none is in control. They add that the various studies they reviewed have uncovered many fundamental features or "signatures" that distinguish self-organising systems from the more traditional mechanical systems studied in physics and engineering; some features, such as the absence of centralised control, are shared by all self-organised systems and can therefore be seen as part of what defines them. Also, Beer (p. 25 in [27]) states the following: "The first principle of control is that the controller is part of the system under control. The controller is not something attached to a system by a higher authority who then grants it a management prerogative. In any natural system, whether we are talking about an animal population or the inner workings of some living organism, the control function operates mainly through the architecture of the system; it is not identifiable, but its existence is somehow inferred from the behaviour of the system. In addition, the controller grows with the system and, if we look back in time, we see that the control also evolves with the system."
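A minimal sketch of order without a central controller: nodes in a ring repeatedly average their value with their immediate neighbours, and the group converges on a common value even though no node ever sees, or directs, the whole system. The ring topology and update rule are simplifying assumptions chosen for illustration.

```python
# Decentralised coordination sketch: each node in a ring repeatedly averages
# its value with its two neighbours. The group converges on a common value
# (the mean) although no node controls, or even observes, the whole system.

values = [10.0, 2.0, 7.0, 1.0, 9.0, 4.0]

for _ in range(30):
    n = len(values)
    values = [(values[(i - 1) % n] + values[i] + values[(i + 1) % n]) / 3.0
              for i in range(n)]

print([round(v, 3) for v in values])   # all values close to the mean, 5.5
```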

4.6. Hierarchical Organisation

Ladyman and Wiesner [36] posit that complex systems often contain many levels of organisation forming a hierarchy of systems and subsystems, as proposed by Herbert Simon [50] in "The Architecture of Complexity." Emergence occurs because the order arising from interactions between parts at a lower level is robust. One way to absorb variety is through organisation into hierarchies, for which work must be performed and the entropy of the environment must be increased [15]. Simon examines complexity and its structure from four points of view. The first is that a complex system is composed of subsystems, which have their own subsystems, and so on. The second concerns the relationship between the structure of a complex system and the time required for that structure to emerge through evolutionary processes; it holds that hierarchical systems evolve much faster than nonhierarchical systems of comparable size. The third explores the dynamic properties of hierarchically organised systems and shows how they can be decomposed into subsystems in order to analyse their behaviour. The fourth examines the relationship between complex systems and their description, i.e., simple descriptions that are important for human knowledge and for understanding how the system reproduces itself.

From the point of view of philosophy, hierarchies that absorb variety are also explained by the principle of ascending and descending causation; Jáuregui [42] and Morales [41] present a more detailed explanation of this principle, with good graphical examples of the interrelationship between hierarchical levels, the influence of one level on another, and how levels are related when an event is triggered. All processes at a lower hierarchical level are constrained by, and act according to, the laws of the higher level. Heylighen and Joslyn [20] indicate that cybernetics is concerned with the properties of systems irrespective of their material or concrete components. In this way, very different systems can be described and isomorphisms sought between them, for which it is essential to study the relationships between their components and how they are transformed into one another. To address these issues, it is essential to study order, organisation, complexity, hierarchy, structure, information, and control, and to investigate how these manifest themselves in systems of different types. These concepts are relational; they allow us to analyse and model different abstract properties of systems formally and to study the behaviour of their complexity over time.

Heylighen and Joslyn [20] also state that, in complex control systems such as organisms or organisations, goals are organised in hierarchies, where higher-level goals control the configuration of subsidiary goals. Endorsing the concept of ascending and descending causal mechanisms, Morales [41] and Jáuregui [42] note that if the objective of a living being is to escape from danger, the brain, through the nervous system, will activate the legs to flee from the situation. Heylighen and Joslyn add another example: if your primary survival goal involves the lower-order goal of maintaining sufficient hydration, this may trigger the goal of drinking a glass of water. This goal, in turn, will activate the goal of bringing the glass to your lips. At the lowest level, this involves the goal of keeping your hand steady without spilling water.
In relation to hierarchies and systems, Heylighen [28] indicates: "We have seen that evolution constantly generates higher levels of supersystems. The more complex the system, the later its emergence. Therefore, any system can be analysed or 'decomposed' into its constituent subsystems, which in turn can be reduced to their constituents, progressively reaching, at the lowest level, the elementary particles... such consecutive layers or levels of 'subsystems in systems in supersystems in ...' are called a hierarchy."
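Heylighen and Joslyn's hydration example above can be sketched as a small goal hierarchy in which a higher-level goal recursively activates its subsidiary goals; the concrete goals and tree structure below are illustrative assumptions, not part of the cited works.

```python
# Sketch of a goal hierarchy in the spirit of Heylighen and Joslyn's example:
# a higher-level goal recursively activates subsidiary goals. The concrete
# goals and their nesting are illustrative assumptions.

GOAL_TREE = {
    "survive": ["maintain hydration"],
    "maintain hydration": ["drink a glass of water"],
    "drink a glass of water": ["bring glass to lips"],
    "bring glass to lips": ["keep hand steady"],
    "keep hand steady": [],
}

def activate(goal, depth=0):
    print("  " * depth + goal)        # higher levels constrain lower levels
    for subgoal in GOAL_TREE[goal]:
        activate(subgoal, depth + 1)  # descending causation: goal -> subgoal

activate("survive")
```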

4.7. Numeracy and Diversity

Usually, when many elements are involved in a system, higher levels of complexity can be implied, as higher levels of organisation are likely to be required. Ladyman et al. [5] add that the elements must be not only many but also similar, as prerequisites for the interaction condition: in order for systems to interact or communicate (in the broadest sense) with each other, they must be able to exchange energy, matter, or information. Many examples have been cited from different areas of knowledge, such as neurons in the brain, ant communities, and economic systems where agents with similar characteristics operate in a common market. According to Page (p. 16 in [39]), however, scientists speak of diversity when referring to one of three characteristics of a population: variation in some attribute, such as differences in size or weight; a diversity of types, such as types of vehicles; or differences in configuration, such as the composition of atoms in a molecule or other compounds. Diversity is common in complex systems, especially where people, machines, and equipment interact, and in such cases the concept of diversity also plays an important role. Thus, Page states that diversity applies to populations or sets of entities: a ball bearing cannot be diverse, nor can a flower; diversity requires multitudes. Cities are diverse: they contain many people, organisations, buildings, roads, and so on. Ecosystems are diverse because they contain multiple types of flora and fauna. Page distinguishes three types of diversity. The first is diversity within a type, or variation, corresponding to differences in the value of some attribute or characteristic, such as the height of giraffes. The second is diversity of types and classes, or of species in biological systems, which refers to differences in kind, such as the different types of food stored in a refrigerator. The third is compositional diversity, which refers to differences in how types are organised; examples include recipes and molecules. Diversity arouses interest in different disciplines, among sociologists, ecologists, and specialists in the management of integrated systems of people, machines, and equipment, whether in information technology or industry, because such systems reflect diversity and not only the complexity that arises from the interaction of many similar elements.

5. Measures of Complexity

So far, we have selected and described different views and characteristics of a complex system. In the following sections, we continue with a very general synthesis of some measures proposed in the literature to establish, more quantitatively, the level of complexity of a system from different perspectives. A detailed description of these measures can be found in Appendix A.

The physicist Seth Lloyd published an article proposing three dimensions along which the complexity of an object or process can be measured: How difficult is it to describe? How difficult is it to create? And how organised is it? [51]. Lloyd catalogued forty measures of complexity proposed by different researchers, each addressing one or more of these three questions using concepts from dynamical systems, thermodynamics, information theory, and computation. Mitchell [33] provides a complete summary of some of these measures, and Ladyman and Wiesner [36] offer a more formal explanation.

5.1. Complexity as Disorder and Diversity

The degree of order of a system is always related to its complexity; a complex system dwells somewhere between order and disorder. Scott Page differentiates three types of diversity measures: diversity within a type, corresponding to measures of variation; diversity between types, measured by entropy, distance, and attributes; and diversity of the population composition of a community [39]. Variance can be used for numerical attributes and measures variability within a particular type, such as the number of edges per node in a network, but not across types, such as species in a population; for measuring differences between types, entropy is a better indicator. Shannon's entropy [3] is used in many engineering applications to measure the complexity of systems, not only in the field of communications but also in other areas such as manufacturing [52] and supply chains. Weitzman [53] posits a reasonably general concept of distance, derived from the idea of traditional taxonomy, and an associated characterisation of the diversity of a population based on that distance.

5.2. Feedback

There are no specific measures for feedback in a system. However, feedback is intrinsic to complex systems, as it refers to the self-regulation of interactions; the concept originated at the beginning of cybernetics but is widespread in automatic control engineering. System dynamics is a well-known and widely used approach [30, 31, 54] that allows these interactions to be modelled and analysed, reflecting temporal behaviour in complex environments.

5.3. The Lotka-Volterra Equations

Well known in the literature as the predator-prey (or prey-predator) equations, these equations reflect in a simple way the dynamic characteristics of biological systems in which two species interact, one as prey and the other as predator. The species are modelled through a pair of nonlinear first-order differential equations that describe behaviours representative of a wide range of similar situations across various disciplines.

5.4. Computational Measurements

These theoretical measures help establish concepts that represent the characteristics of a complex system and underline the importance of incorporating measurements that enable us to understand its behaviour better. For example, Lloyd [51] compiles more than 40 measures that view different aspects of a complex system from different perspectives and schools of thought. Some of these measures, which have been studied historically and have inspired practical applications, are reviewed below [52, 55]; their essential characteristic is that they treat complex systems as computational devices with memory and processing power.

5.5. Thermodynamic Depth

Lloyd and Pagels [56] consider the most plausible sequence of scientifically determined events leading to the formation of an object and measure the total amount of thermodynamic and informational resources used to constitute the object itself. Ladyman and Wiesner [36] note that the trajectory defining the system's states is not unique; if a probability is assigned to each trajectory of states leading to the object, its thermodynamic depth is defined as an average over all possible trajectories for the constitution of that object.

5.6. Statistical Complexity

A complex system is an entity that stores and processes information. This occurs in nature: our brain stores and processes information, and this structures all our behaviour. Thus, the behaviour of a system can always be regarded as the result of a computation [36]. Measuring the system's behaviour assumes that its output is manifested in sequences. These sequences are measured through instruments and procedures that depend on the nature of the problem. An algorithm then represents the main regularities of the chain. The sizes of the automata that generate these sequences define the statistical complexity of the system, based on the information provided by the sequences.

5.7. Effective Complexity

An entity's effective complexity is the length of a highly compressed description of its regularities; the regularities are the part of the string considered for the effective complexity, and the rest is treated as random characters, a concept established by Gell-Mann and Lloyd [57]. Kolmogorov and, independently, Chaitin and Solomonoff proposed that, for such strings, one can use the algorithmic information content (AIC), which is a kind of minimum description length [33]. A string possessing many regularities on different length scales, which is what we believe a complex system to be, will be assigned a high effective complexity. The shortest computer-programme description acts as a universal code that is uniformly good for all possible strings; algorithmic complexity is thus a conceptual precursor of entropy.

5.8. Logical Depth

According to Bennett and Herken [58], the logical depth of an object is a measure of how difficult that object is to construct. Mitchell [33] adds that "logically deep objects contain internal evidence of having been the result of a long computation or a slow-to-simulate dynamical process and could not have originated otherwise." For example, a highly ordered sequence of A, C, G, and T is easy to construct; likewise, producing a random sequence of A, C, G, and T is easy enough. The logical depth of a system, represented as a string, therefore depends on the running time of the programmes that produce it. Thus, Bennett and Herken [58] draw on algorithmic information theory, proposing that the shortest programme to generate a string represents the most plausible and logically comprehensible a priori description.

6. Examples of Application

Reviewing recent works that jointly use systems theory, complexity, and cybernetics helps to exemplify these concepts. Figure 3 describes the method used. Between 2012 and 2022, thirty-two indexed articles meeting these criteria were published in the Web of Science database.

The review of these studies is detailed below.

6.1. Social Systems and Postindustrial Society

The first set of studies examines how this vision is applied in social systems for understanding phenomena in postindustrial society. O'Sullivan and Manson [59] studied how geography can use techniques and methodologies from physics to carry out research amid the increasing datafication of society; together with the traditional methods of geographers, such studies could improve our understanding of a world far more complex than even physicists imagine. Building on the work of human geographers who have reported how the relationship between human societies and their environments has come to be framed in terms of adaptive capacity, Adams [60] explains how the current discourse centred on adaptive capacity emerged and, on this basis, explores its meaning for climate change adaptation policies, on the understanding that the scope of human action is circumscribed by the adaptive dynamics of the socioecological system. Other authors have drawn on the theoretical bases of Niklas Luhmann in this area. In response to the growing use of natural language processing within artificial intelligence, Straeubig [61] proposes using Niklas Luhmann's social systems theory, which places communication firmly within social systems, to develop comprehensive models for the practical implementation of this communication process. In the same vein, these proposals have also been used to understand problems associated with communication in cyberspace, as Clark and Zhang did [62]; these authors used Niklas Luhmann's social systems theory to explain Internet censorship in China.

These concepts also serve as a basis for understanding natural and imaginary societies. Krispin [63] proposes combining the culture-behaviour approach and metacontingency to understand humanity. The science of culture and behaviour aims to expand our understanding beyond individual behaviour to include the complex interactions between individuals, integrating concepts from behaviour analysis with ideas from fields outside it, including systems theory, anthropology, and biology. Metacontingency, in turn, is a contingency relationship between a set of interlocking behavioural contingencies, their aggregate product, and the consequences of selection. These concepts can likewise be applied to fictional societies created in literature and art, such as that of James Joyce's Finnegans Wake, which can be interpreted through the elements of cybernetics and systems. Indeed, Ball [64] claims that Finnegans Wake anticipates and provides a narrative foreshadowing of the work of early cyberneticians such as Humberto Maturana and Francisco Varela and of systems theorists such as Niklas Luhmann, in its depiction of how meaning is produced out of overwhelming complexity through self-reference and the selection of semantic connections. Appignanesi [65] shows some of the fundamentals of general systems theory examined through the lens developed by Spencer Brown and von Foerster, with emphasis on the late works of Luhmann, all through the analysis of Escher's artworks and Calvino's literary works. This theoretical framework has also been proposed to enable intervention in social phenomena such as the family and the community.

In the field of family therapy, Becvar and Becvar [66] call for marriage and family therapists to conduct therapy from a systems theory perspective rather than using the individual as the primary unit of analysis. This proposal states that family therapists should use the basic principles of a holistic metatheory that moves beyond first-order cybernetics into the realm of second-order cybernetics, with its social-constructionist orientation. In a related vein, Almaguer-Kalixto et al. [67] propose using sociocybernetics to analyse and intervene in complex social problems; through a case promoting the recovery of collective memory and understanding of the impact of associations on the development of a community, they exemplify the use of first- and second-order cybernetics concepts and of general systems theory applied to the social sciences. Finally, it is possible to apply these concepts to the development of technical systems. According to Xu [68], systems science is indispensable for dealing with the overwhelming complexity of systems in Industry 4.0 and the surrounding industrial ecosystem. Industry 4.0 is an interconnected system that integrates technical systems such as smart factories, creating a complex system; one example of the application of systems concepts is that Industry 4.0 can be defined from multiple perspectives, such as those relating to function, structure, and organisation.

6.2. Management and Business

The second group of studies focuses on the use of these concepts in management and business; these papers are characterised by examples of organisations with high interaction with their environment. In this sense, Bartscht [69] establishes that, to remain viable, a system permanently needs to balance exploration and exploitation activities, and many organisations today fail to find an adequate tradeoff between them; during the adaptation process, the system gains new knowledge that is incorporated as part of its identity. Bojnec [70] introduces cybernetic systems into defence management to address the unique challenges of the information society and of systems modelling for decision-making; the author presents an evidence-based analysis of the defence system and a new way of thinking that influences defence planning and management. Umpleby [71] summarises a vision of these concepts from a business perspective and, among other things, notes the lack of academic programmes that encompass this type of thinking. Nachbagauer and Schirl-Boeck [72] explore an application in project management, arguing that managing risks and uncertainty in megaprojects is an emerging topic; they clarify concepts based on second-order cybernetics and systems theory, transferring knowledge from organisation theory to project management. Their article shows that managing the unexpected in megaprojects requires a balance between structure and self-organisation in fields such as planning, communication, hierarchy, experience, and organisational culture, and it draws relevant conclusions about plans, communication, management of system structures, accountability, the exercise of leadership, and corporate culture. Finally, in this line, Kandjani and colleagues [73] propose a model for the evolution of systems using concepts from enterprise architecture (EA), cybernetics, and systems theory to develop the coevolution path model (CePM), which explains how a company coevolves with its environment; depending on the variety of the system and of the environment, the states of the system can be classified into four groups: viable states, vulnerable states, inefficient states, and states that can coevolve in equilibrium.

6.3. Exploring the Unknown

The third class of investigations explores the unknown and identifies the complex and chaotic dynamics that drive volatile, uncertain, complex, and ambiguous modern situations. For example, Hieronymi [74] proposes using elements of systems theory to understand and frame the creative process, and another proposal seeks to understand health systems from a complexity perspective, moving away from the linear, the rigid, and the directional [75]; in the latter, health systems are characterised by the emergence of unexpected behaviours. Wallis and Wright [76] analyse the theories that explain poverty in terms of their systemic and complexity levels; their proposal rates sociological theory highest in this explanation. Katina and associates [77] develop a systems-based framework to support the rigorous design, analysis, and transformation of the structure of research and development organisations; the proposed viable system model provides stability while allowing change in response to changing circumstances. In this line, Laszlo and associates [78] focus their work on the complexity of the great themes of humankind (population growth, social inequities, hunger, armed conflict, water shortages, pollution, and climate change), which imply even greater complexity when treated together; to address this systemic problem, they highlight the need to develop empathy-oriented education. Pečarič [79] describes and draws parallels between the principal elements of systems theory and cybernetics and the characteristics and behaviour of legal systems; he also includes a section on Bayesian network theory and its possible applications in legal systems, giving particular importance to the concepts of regulation and adaptation. Finally, Keating and Katina [6] explore three perspectives on complex system governance (CSG). First, they show the influence of systems theory, management cybernetics, and system governance on CSG and provide a model and general characteristics for CSG. Second, they develop the role and nature of CSG pathologies as deviations from normal or healthy system conditions. Last, they establish that the success factor for developing CSG corresponds to the design, execution, and evolution of the CSG metasystem functions.

6.4. Models and Complexity

The last type of research focuses on models and complexity. First, to contribute to the general theory of models, Ashby [15] presented three perspectives from model theory that can be used in systems research based on systems theory, cybernetics, and constructivism: general model theory, the general morphological approach, and the Cynefin framework. Additionally, Leendertz [80] analysed how the concept of social complexity emerged in the social sciences and when scholars transferred and adapted elements of complexity theory from mathematics, computer science, cybernetics, and general systems theory to refine social theory. The conditions under which this shift occurred intertwine with public-political discourses and public policy in advanced Western democracies; the discourse on complexity among social scientists carried meaning in academic debate but was also used as a buzzword and a metaphor. In this sense, Koopmans [81] defines the term complexity by considering perspectives encountered in education, such as information theory, cybernetics, and general systems theory. The paper includes Morin's thinking, which emphasises the temporal aspects of systemic behaviour, the relationship between a system's behaviour and its constituent elements, the search for causality as a recursive rather than a linear process, and emergence; it adds that novelty and complexity perspectives must be seen as the behaviour of individuals in their systemic context. An essential characteristic of a complex system is its capacity for anticipation; in this sense, Nechansky [82] analyses the main differences between the cybernetic structures necessary for elementary anticipation, seen as the repetition of a known pattern, and those necessary for complex anticipation, seen as the repetition of known sequences of patterns. Contributions to more quantitative tools appear in two papers by Wang [83, 84]: in the first, he proposes, based on grey system theory, a connection analysis method for analysing incomplete sequences of information; in the second, he suggests a grey linear control system for regulating the price of China's real estate and provides support to assist the relevant management departments with their policymaking, creating a grey state equation of the real estate market price that can reflect both the market supply-demand price mechanism and the production price mechanism using the principles of economic cybernetics. Finally, Yolles and Fink present three papers [85-87]. They develop a generic modelling theory of simplex orders using principles from Schwarz's living systems and conceptualisations from cultural agency theory, considering Rosen's and Dubois' concepts of anticipation. In the first paper, Yolles and Fink [85] introduce generic modelling for living systems theory and assign the number of generic constructs to orders of simplex modelling; they present a generic modelling theory of higher orders of simplexity, where simplexity refers to the dialectic between simplicity and complexity, and each higher order corresponds to the generic constructs involved. In the second paper, Yolles [87] explains the need for an adaptive model that can respond effectively to complex situations in wicked problems and identifies its essential aspects.
A reasonable conclusion of this paper is the introduction of the relational paradigm, which includes conceptualisations, theory, strategic processes, and operative decision processes (involving methods of communication and agreement among stakeholders) essential for responding to the needs of a wicked problem. Finally, Yolles and Fink [86] present a fourth-order simplex model and explore the potential for higher orders using recursive techniques through cultural agency theory. Indeed, it is worth highlighting that cultural agency helps structure complex problems with both top-down and bottom-up approaches and, given an appropriate modelling approach, takes behavioural anticipation into account; this paper includes examples of first-, second-, third-, and fourth-order simplexity.

7. Conclusion

This article aimed to present comprehensively the concepts of systems, cybernetics, and complexity in order to establish a conceptual basis for systems thinking in science and engineering, written in language accessible to specialists in other areas. The complex issues facing the world today, whether in natural or artificial systems, are multidisciplinary across all areas of knowledge and practice. In problems associated with science, engineering, technology development, ecology, climate change, crises, and social change, common paradigms coexist that characterise isomorphic situations of systemic and cybernetic behaviour and that display the characteristics of complexity presented in this article.

To better illustrate the ideas of this review, we explored the last decade of articles that combine systems theory, complexity, and cybernetics. The selected papers were categorised into four main topics: (1) social systems and postindustrial society, (2) management and business, (3) exploring the unknown, and (4) models and complexity. The articles on social systems and postindustrial society examine how this vision is applied in social systems and in understanding the phenomena of postindustrial society. The articles on management and business, in contrast, are characterised by examples of organisations with high interaction with their surroundings. The papers on exploring the unknown identify the complex and chaotic dynamics that lead to volatile, uncertain, complex, and ambiguous modern situations. Finally, the articles on models and complexity aim to contribute to the general theory of models and complexity using conceptual and quantitative tools. Two significant conclusions can be drawn from this literature analysis. First, the number of papers in a decade is meagre, an average of three per year, and a high percentage of these studies have a single author; we believe this indicates the need for more work and more researchers on the integration of these concepts. Second, although most of the papers analysed appear in journals whose scope covers systems, complexity, and cybernetics, there are also contributions in journals from other areas, such as health, geography, and sociology. This shows the cross-cutting nature of these ideas in current science.

After 1900, a conceptual movement began, almost simultaneously and in parallel, in which von Bertalanffy established the ideas of systems theory, Ashby and Wiener established the ideas of cybernetics, and the Santa Fe Institute later consolidated and formally integrated the study of the complexity of systems and the measurement of complex systems. This movement rests on essential earlier ideas: Aristotle's notion that the whole is more than the sum of its parts; the empiricists, such as Bacon; the rationalists, such as Descartes, who proposed the integral development of science; Hume, who insisted that ideas come from experience; and Hegel, who highlighted the role of the mind, its historical conditioning, and the interpretation of the whole.

For a scientist, a technologist, or a manager, studying the systemic approach provides a frame of reference for facing the complexity of controlling higher-order systems in large-scale organisations and projects, where preparation for management is often associated with practical learning. In management, the organisation often moves away from equilibrium, seeking new states associated with temporary equilibria; this forces managers to monitor possible future events and patterns that could affect the organisation's structure and dynamics and to take corrective action accordingly. Such situations frequently cannot be made visible, as they generate criticism from stakeholders. This is a recursive situation that spreads through the organisation, with different repercussions and internal reactions depending on the levels at which the event's impact is received, triggering reactions that force those affected to study how the problem will be solved. This scenario describes an organisation that permanently faces different events and has the permanent duty of monitoring its environment in order to be prepared for possible contingencies. Imbalances, however, occur permanently in natural and artificial systems, and the logic described above repeats itself. Hence, multidisciplinarity, communication protocols between disciplines, and coordination between different actors in the face of surrounding change are among the most challenging issues to resolve when facing complex environmental phenomena and the imminent change of era in our society.

Also, in technology, innovation, and research, specialised knowledge is no longer sufficient, since society faces increasingly multidisciplinary or interdisciplinary problems. Knowing isomorphic behaviours allows us to understand natural phenomena better, and thus to devise better strategies for facing them. The significant problems of humanity in natural systems (climate change, environmental degradation, and sustainability), as well as the need to handle crises in modern societies and to manage large and complex artificial technological systems in the different fields of engineering, make it imperative to know the general principles of systems, cybernetics, and complexity. Moreover, one of the difficulties observed in problem-solving processes is establishing, in the first place, what the problem is; most of the time, the process of understanding is interactive, with permanent feedback from users or stakeholders. This interactive process of systemic thinking involves intense feedback loops and an integration of concepts that make up the final story or conceptual model.

These days, introducing systems thinking into the education of scientists and engineers is a huge challenge. There are studies at different levels, whether in elementary and secondary education or at university and postgraduate levels [88, 89]. These experiments suggest that preparation should begin with the instructors and that strategies should be differentiated according to the education level. Despite the novelty of these proposals, it is clear that, given the problems humanity faces today, systems thinking is a fundamental tool that must be introduced into the training of scientists and engineers so that they can support the development of humanity. In this context, how can we prepare our communities to face complex problems of higher dimensional orders, and how can we introduce the systems approach into our curricula? A particular answer for STEM teaching may be the following: first, convince the people who exercise leadership; then recruit some instructors and give them general knowledge such as that presented in this article; and then formulate a strategy for implementation. After this, work can proceed at four levels: basic undergraduate, intermediate undergraduate, upper undergraduate, and postgraduate. At the first levels, concepts of interaction and their dynamics should be introduced through essential tools, and at the upper levels, a more formal method such as object-process modelling should be presented. The curricular reforms implemented in engineering alongside the systemic approach could also be complemented with curricular strategies such as transversal workshops or other techniques used in specific subjects. In any case, it will be a learning process that bears fruit through a long process of continuous improvement.

The initial motivation of this article was to integrate systems thinking into engineering and science, especially amid the ongoing global pandemic, which has underscored the need for approaches that transcend geographical and cultural frontiers. This thinking should start at the first educational levels. Although it may appear paradoxical, it should begin with simple examples that allow the incorporation of concepts of boundary, environment, and, above all, the interactions between components: think first at a conceptual level, then define component interactions and feedback, and only afterwards variables and equations. Beyond this initial orientation, we believe that education in general should start by disseminating these concepts to the public, complemented by structured and specialised academic training in specific fields. This will allow us to initiate transversal conversations on these issues, break down the barriers of reductionist silos of knowledge, and encourage multidisciplinary and interdisciplinary dialogue.

Appendix

A. Measures of Complexity

So far, we have spoken of complexity indirectly, through what characterises a complex system. The following is a very general synthesis of some measures proposed in the literature to establish, more quantitatively, the level of complexity of a system from different points of view. The physicist Seth Lloyd published an article in 2001 proposing three dimensions along which to measure the complexity of an object or process: How difficult is it to describe? How difficult is it to create? And what is its degree of organisation? [51]. He catalogued forty measures of complexity proposed by different people, each addressing one or more of these three questions using concepts from dynamical systems, thermodynamics, information theory, and computation. Mitchell provides a very good summary of some of these measures [33], and Ladyman and Wiesner a more formal explanation [36].

A.1. Complexity as Disorder and Diversity

A good way to start this discussion is with Mitchell's observation that living systems are complex: they exist somewhere between order and disorder [33]. She adds that, over the long history of life, living systems have become much more complex and intricate rather than more disordered and entropic; thus, a complex system dwells somewhere between order and disorder. Ladyman and Wiesner [36] and Page [39] differentiate three types of diversity measures: diversity within a type, corresponding to measures of variation; diversity between types, measured by entropy, distance, and attributes; and diversity of community composition, i.e., of the population.

The variance of a random variable X measures variability within a particular type and is defined as $\mathrm{Var}(X) = E\bigl[(X - E[X])^2\bigr]$. Entropy is a better measure of differences between types because it takes into account the frequency with which the types occur, and in this way complexity can be measured more faithfully.

Shannon's entropy [3] is the measure used in many engineering applications, not only in the field of communications but also in other areas such as manufacturing [52] and supply chains, to measure the complexity of systems. If X is a random variable whose values $x_i$ occur with probability $p_i$, then the entropy of the system represented by this variable is

$H(X) = -\sum_i p_i \log_2 p_i.$

The definition of this concept comes from information theory and is a way of measuring uncertainty: Shannon's entropy measures the amount of uncertainty in the probability distribution P. If the probabilities are equal, the uncertainty is maximal; for the flip of a fair two-sided coin, where each face has probability 1/2, the entropy is

$H(X) = -\tfrac{1}{2}\log_2\tfrac{1}{2} - \tfrac{1}{2}\log_2\tfrac{1}{2} = 1 \text{ bit},$

and, in general, for n equally probable events, $H(X) = \log_2 n$.

Thus, if all events are equally probable, the uncertainty, and therefore the Shannon entropy, is maximal; H(X) then quantifies the difficulty of prediction, which in this case is greatest. Shannon entropy is zero when one of the events has probability one and the rest have zero probability of occurrence. In manufacturing or production systems, being able to measure complexity in this way supports the design of strategies to improve system performance.
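The following sketch computes $H(X) = -\sum_i p_i \log_2 p_i$ for a few type distributions, confirming that entropy is maximal for equiprobable types and near zero for a nearly certain outcome; the example distributions are arbitrary.

```python
# Computing Shannon entropy H(X) = -sum p_i log2 p_i for type distributions.
# A uniform distribution (maximum uncertainty) yields the highest entropy;
# a concentrated one yields entropy near zero.

from math import log2

def shannon_entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))                 # fair coin: 1.0 bit
print(shannon_entropy([0.25] * 4))                 # four equiprobable types: 2.0 bits
print(shannon_entropy([0.97, 0.01, 0.01, 0.01]))   # nearly certain: ~0.24 bits
```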

Measuring the distance between two points in space seems simple, but defining a more general notion of distance that can capture the difference between a dog and a cat is harder. Weitzman posits a fairly general concept of distance, derived from the idea of ancestral taxonomy, and an associated characterisation of the diversity of a population based on that distance [53].

A.2. Feedback

There are no specific measures for feedback in a system; however, feedback is intrinsic to complex systems, since it refers to the self-regulation of interactions. Its origins go back to the beginnings of cybernetics, but it is a very common concept in automatic control engineering. It is widely used in the modelling of systems in which, when the results obtained do not achieve the objective, the inputs of the system are adjusted in order to steer its behaviour.

Complex systems have innumerable interactions between their parts, with permanent adjustments due to processes of adaptation to the environment. System dynamics is a tool for modelling and analysing these interactions, reflecting temporal behaviour in complex environments [30, 31, 54]. It identifies feedback loops between elements, as well as delays in information and materials within the system, and distinguishes level, flow, and auxiliary variables. In this way, system dynamics structures the behaviour of these systems through mathematical models of their dynamics. Simulations of these models can currently be performed with the help of specific computer programmes such as Powersim [54] or AnyLogic [90]. Requirements are specified through causal diagrams, which are then converted into blocks in the chosen software tool. Applications arise in industry and services as well as in ecological systems.

A well-known problem is that of the Lotka-Volterra equations, also known as the predator-prey (or prey-predator) equations: a pair of nonlinear first-order differential equations used to describe the dynamics of biological systems in which two species interact, one as prey and the other as predator. They were proposed independently by Lotka in 1925 and Vito Volterra in 1926 and can be written as

$\dfrac{dx}{dt} = \alpha x - \beta x y, \qquad \dfrac{dy}{dt} = \delta x y - \gamma y,$

where y is the number of predators (e.g., wolves), x is the number of prey (e.g., rabbits), dx/dt and dy/dt represent the growth of the two populations over time, and t represents time. The first equation states that the growth of the prey per unit time is proportional to the amount of prey existing at that time, minus the interaction with the predators. The second states that the growth of the predators is proportional to the prey-predator interaction, minus the natural death of the predators.
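A simple Euler integration, sketched below, is enough to see the characteristic cycling of the Lotka-Volterra model, with predator peaks lagging prey peaks. The rates and initial populations are illustrative assumptions, and forward Euler is used only for simplicity (it slowly distorts the orbits over long runs).

```python
# Forward Euler integration of the Lotka-Volterra equations
#   dx/dt = a*x - b*x*y   (prey)
#   dy/dt = d*x*y - g*y   (predators)
# All rates and initial populations are illustrative assumptions.

a, b, d, g = 1.0, 0.1, 0.02, 0.5
x, y = 40.0, 9.0          # initial prey and predator populations
dt = 0.01                 # time step

for step in range(5001):
    dx = (a * x - b * x * y) * dt
    dy = (d * x * y - g * y) * dt
    x, y = x + dx, y + dy
    if step % 1000 == 0:
        print(f"t = {step * dt:5.1f}  prey = {x:7.2f}  predators = {y:6.2f}")
```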

A.3. Computational Measures

These theoretical measures help to establish concepts that capture characteristics of a complex system, to understand what a complex system means, and to appreciate the importance of incorporating measurements that enable us to understand its behaviour better. Lloyd compiles more than 40 measures that permit viewing different aspects of a complex system, though not a general conception [51]. Some measures that have been studied historically and have inspired practical applications are reviewed below.

A.4. Thermodynamic Depth

Lloyd and Pagels define a measure of complexity for macroscopic states of physical systems that is universal, applying to all physical systems. Thermodynamic depth considers the most plausible sequence of scientifically determined events leading to the object itself and measures the total amount of thermodynamic and informational resources used to constitute it [56]. Mitchell adds that, to determine the thermodynamic depth of the human genome, we could start with the genome of the first living creature and list all the evolutionary genetic events (random mutations, recombinations, gene duplications, etc.) that led to modern humans; since humans evolved billions of years later than amoebae, the human genome's thermodynamic depth is presumably much greater [33].

From a more formal point of view, let the physical state of a system at time $t_n$ be $s_n$ [36]. The trajectory defining the states of the system between $t_1$ and $t_{n-1}$ is not unique. If we assign a probability $\Pr(s_1, s_2, \ldots, s_{n-1} \mid s_n)$ to each trajectory of states leading to $s_n$, then the thermodynamic depth of the state $s_n$ is defined as $-k \ln \Pr(s_1, s_2, \ldots, s_{n-1} \mid s_n)$ averaged over all possible trajectories $s_1, s_2, \ldots, s_{n-1}$:

$D(s_n) = -k\,\bigl\langle \ln \Pr(s_1, s_2, \ldots, s_{n-1} \mid s_n) \bigr\rangle,$

where k is the Boltzmann constant. It is a difficult measure to implement, but many articles in the literature address this subject.

A.5. Statistical Complexity

This measure has its origin in the computational mechanics established by the physicists James Crutchfield and Karl Young [36], who summarised its principles. It is assumed that a complex system is an entity that stores and processes information. This occurs in nature: our brain stores and processes information, and this structures all our behaviour, so the behaviour of a system can always be regarded as the result of a computation. Measuring the behaviour of such a system assumes that its output is manifested in sequences; these sequences are measured through instruments and procedures that depend on the nature of the problem, and the main regularities of the chain are represented through an algorithm. The data from these sequences are what one sees of the system, so the challenge is to find the static and dynamic structure of the system reflected in a model and thus determine its complexity.

The principles of computational mechanics allow us to address directly the problems of pattern, structure, and organisation and to infer a model of the hidden process that generated the observed behaviour. This representation, the state machine, captures the patterns and regularities in the observations in a way that reflects the causal structure of the process. In addition, this machine is the unique, maximally efficient model of the observed data-generation process.

The causal states are equivalence classes of behaviours, and the structure of the transitions between causal states defines the state machine. The sizes of these automata define the statistical complexity of the system on the basis of the information delivered. This set of causal states S, with probabilistic transitions between them, summarised in the so-called minimal and optimal machine, represents our ability to predict the future behaviour of the process [36]. The mathematical structure of the machine is that of a hidden Markov model or a finite-state stochastic automaton.
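As a rough illustration only (not the full Crutchfield-Young reconstruction), the sketch below groups length-L histories of a binary sequence into candidate causal states according to their empirical next-symbol statistics and takes the entropy of the resulting state weights: a period-2 sequence yields two causal states (about 1 bit), while an i.i.d. random sequence collapses into a single state (about 0 bits). The history length, tolerance, and merging rule are ad hoc choices.

```python
# Crude sketch in the spirit of statistical complexity: group length-L
# histories of a binary sequence into candidate causal states by their
# empirical next-symbol distribution, then take the entropy of the state
# weights. A rough approximation, not a true epsilon-machine reconstruction.

from collections import defaultdict
from math import log2
import random

def crude_statistical_complexity(seq, L=2, tol=0.1):
    counts = defaultdict(lambda: [0, 0])          # history -> [#next=0, #next=1]
    for i in range(len(seq) - L):
        history, nxt = seq[i:i + L], int(seq[i + L])
        counts[history][nxt] += 1
    states = []                                   # each state: [p(next=1), weight]
    for n0, n1 in counts.values():
        p1, w = n1 / (n0 + n1), n0 + n1
        for s in states:                          # merge histories with similar stats
            if abs(s[0] - p1) < tol:
                s[1] += w
                break
        else:
            states.append([p1, w])
    total = sum(w for _, w in states)
    h = -sum((w / total) * log2(w / total) for _, w in states)
    return h if h > 0 else 0.0

periodic = "01" * 500
random.seed(0)
noise = "".join(random.choice("01") for _ in range(1000))
print(crude_statistical_complexity(periodic))  # two causal states -> ~1 bit
print(crude_statistical_complexity(noise))     # one causal state -> ~0 bits
```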

A.6. Effective Complexity

A concept introduced by Murray Gell-Mann and Seth Lloyd (1996) states that, in a nontechnical way, the effective complexity (EC) of an entity can be defined as the length of a highly compressed description of its regularities, which is the part of the string that counts towards the effective complexity; the rest is treated as random characters [57]. It is useful to encode the entity's description as a bit string. Although the choice of an encoding scheme depends on the context, researchers have proposed alternatives to entropy as a measure of complexity. Andrey Kolmogorov and, independently, Gregory Chaitin and Ray Solomonoff proposed that, for such strings, one can use the algorithmic information content (AIC), which is a kind of minimum description length: the AIC of a bit string describing the entity is the length of the shortest programme that causes a given universal computer U to print the string and then halt. The effective complexity ε(s) of a string is defined as the algorithmic complexity of the set E of which it is a typical member, $\varepsilon(s) = K_U(E)$; for a perfectly random string, this set is the set of all strings of the same length [36], where $K_U(s)$ is the algorithmic complexity of the string s on a particular universal computer. A chain that has many regularities on different length scales, which is what we believe a complex system to be, will be assigned a high effective complexity. This concept is close to Kolmogorov complexity and, from a more practical point of view, can be assimilated to Lempel-Ziv complexity [91]. The expected length of the shortest binary computer description of a random variable is approximately equal to its entropy [91].

Thus, the shortest computer programme describing the data acts as a universal code that is uniformly good for all probability distributions. In this sense, algorithmic complexity is a conceptual precursor to entropy.

To understand Kolmogorov complexity, following [91] and Mitchell [33], let us review the three following chains:

(A) 0101010101010101010101010101010101010101010101010101010101010101010
(B) 0110101000001001111001100110011111110011101111001100100100001000
(C) 110111110101011111011111111111010101011101110111111101010101110111011111111111010

What are the shortest binary computer programmes for each of these sequences? The first sequence is definitely simple: it consists of thirty-two 01s, so a programme of constant length suffices to print it. The second sequence looks random and passes most tests for randomness, but it is the initial segment of the binary expansion of $\sqrt{2} - 1$, so it too has a constant-length programme. The third also looks random, yet it contains a marked excess of ones and is therefore not as simple as a truly random sequence; unlike the other two, it has no constant-length programme. In fact, its complexity is approximately $\log(n) + nH(k/n)$ bits, where $H$ is the binary Shannon entropy function and $k$ is the number of ones in a sequence of length $n$.
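Since Kolmogorov complexity is uncomputable, in practice one can only upper-bound it, for example with a Lempel-Ziv-family compressor in the spirit of [91]. The following sketch, added here purely for illustration, compresses the three chains with Python's zlib; the highly regular chain (A) should compress markedly better than the biased-but-irregular chain (C):

```python
import zlib

chains = {
    "A": "0101010101010101010101010101010101010101010101010101010101010101010",
    "B": "0110101000001001111001100110011111110011101111001100100100001000",
    "C": "110111110101011111011111111111010101011101110111111101010101110111011111111111010",
}

for name, s in chains.items():
    # Compressed size is a practical upper bound on algorithmic content.
    compressed = len(zlib.compress(s.encode(), 9))
    print(f"chain {name}: length {len(s)}, compressed {compressed} bytes")
```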

A.7. Logical Depth

The logical depth of an object is a measure of how difficult that object is to construct [58]. A highly ordered sequence of A, C, G, and T is obviously easy to construct; likewise, if asked to produce a random sequence of A, C, G, and T, one could do so fairly easily with the help of a coin to flip or a die to roll. Mitchell, in agreement with Bennett, adds that logically deep objects contain internal evidence of having been the result of a long computation or a slow-to-simulate dynamical process and could not have originated otherwise [33]. Or, as Seth Lloyd says, "It is an attractive idea to identify the complexity of a thing with the amount of information processed in the most plausible method of its creation."

Bennett and Herken draw on algorithmic information theory, proposing that the shortest programme to generate a string represents its most plausible a priori description, while a "print" programme, by contrast, offers no explanation; it is equivalent to saying "it just happened" and is thus effectively a null hypothesis [58].

The logical depth of a system, represented as a string, then depends on the execution time of the programmes that produce it [5, 36]. Bennett's example of the digits of π illustrates the difference between algorithmically complex and logically "deep." A "print" programme that stores the digits may have high algorithmic content, but it is logically very "shallow" because it runs very fast, whereas an algorithm that calculates the digits of π is a comparatively short programme with a much longer execution time.

In addition to considering the length of a programme as a measure of the plausibility of a causal history, Bennett also takes into account its execution time. A programme that runs for a long time before generating its result indicates that the chain has a complicated order that needs to be untangled. The definition of logical depth is the following. Let x be a finite string and $K_U(x)$ its algorithmic complexity [5]. The logical depth of x at significance level s is defined as the minimum time $T(p)$ required for a programme p to compute x and halt, where the length $l(p)$ of the programme p may not differ from $K_U(x)$ by more than s bits.
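The following toy sketch contrasts the two programmes in Bennett's π example: a short generator (here Gibbons' unbounded spigot algorithm, one standard way to compute decimal digits of π, used as an illustrative choice) takes appreciable time, while "printing" a stored copy of the same digits is nearly instantaneous. The timing comparison is illustrative only and machine-dependent:

```python
import time

def pi_digits(n):
    # Gibbons' unbounded spigot algorithm for the decimal digits of pi.
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    digits = []
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            digits.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return digits

start = time.perf_counter()
computed = pi_digits(1000)       # short programme, long computation: "deep"
compute_time = time.perf_counter() - start

stored = list(computed)          # stands in for a long "print" programme's data
start = time.perf_counter()
copied = list(stored)            # "printing" is a fast copy: "shallow"
print_time = time.perf_counter() - start

print(f"first digits: {computed[:6]}")   # [3, 1, 4, 1, 5, 9]
print(f"compute: {compute_time:.4f} s, print: {print_time:.6f} s")
```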

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Universidad Católica del Norte (Chile).