Abstract

Automata are machines that receive inputs, update their internal state accordingly, and produce outputs; they are a common abstraction for the basic building blocks used in engineering and science to describe and design complex systems. These arbitrarily simple machines can be wired together—so that the output of one is passed to another as its input—to form more complex machines. Indeed, both modern computers and biological systems can be described in this way, as assemblies of transistors or assemblies of simple cells. The complexity lies in the network, i.e., in the connection patterns between the simple machines. The main result of this paper is to show that the trade-off between simplicity of parts and complexity of wholes is in some sense complete: the most complex automaton can be obtained by wiring together direct-output memoryless components. The model we use—discrete-time automata sending each other messages from a fixed set of possibilities—is certainly more appropriate for computer systems than for biological systems. However, the result leads one to wonder what might be the simplest sort of machines, broadly construed, that can be assembled to produce the behaviour found in biological systems, including the brain.

1. Introduction

Automata represent systems that receive inputs, alter their internal states, and produce outputs. The state set of an automaton is to be interpreted as the set of all potential memories or storable experiences. In automata theory, the state set is typically finite; in this case, the memory capacity can be viewed as limited. By contrast, when the memory of the automaton is not assumed to be limited (as with the human brain), or when its capacity can always be extended (as with RAM machines or Turing machines as models of computers in computation theory), the automaton should have an infinite state set.

In the theory of dynamical systems, we use a generalisation of automata in which the size of the state space is not restricted to finiteness, or even to countability. Dynamical systems with the behaviour of an automaton, that is, taking inputs in discrete time, are called discrete systems. The state space of such a system acts as a sort of memory of the inputs. Each input influences the current state of the automaton, and the current state is the result of the system’s own form—how it deals with inputs—together with the system’s history.

One can imagine a dynamical system whose state space is that of all possible input histories; a new input is simply appended to the existing history to form a new history. By contrast, one can imagine the "opposite" kind of system: one that completely forgets the previous inputs. Such systems are referred to as "simple reflex" in [1] (p. 49); in this paper, we call them reactive or memoryless. The transitions of these automata depend only on the input, as no experience is stored. The system decides according to its current perception of the world, rather than its current perception together with past perceptions. In fact, these memoryless systems could act by making a single distinction in the input—a yes/no Boolean response—and nothing more; we call these Boolean reactive systems.

In this paper, we study the links between systems that have memory and those that do not. More precisely, we prove that systems with memory can be simulated by wiring together systems without memory. Our result provides a theoretical framework that supports artificial neural network approaches. Memory is carried by connections, and not only by individuals, within a compositional hierarchy of parts; in particular, feedback generates memory. This result is well known to electronics engineers and computer scientists (transistors), but this article formally proves and generalises this intuitive result to any kind of automaton in discrete time, taking any kind of inputs and returning any kind of outputs. As a special case, when the automata are Boolean (comparable to transistors), we generate the class of finite automata (comparable to computers).

This article lies between two fields of mathematics: category theory and dynamical systems. Regarding category theory, we need no more than the basic definitions of categories, functors, natural transformations, (symmetric) monoidal categories, and monoidal functors. Regarding dynamical systems from a category-theoretic point of view, Section 2 recalls the notions of $\mathcal{C}$-typed finite sets, $\mathcal{C}$-boxes, wiring diagrams, and discrete systems inside a $\mathcal{C}$-box, with reference to [2].

In Section 3, we introduce discrete systems and a specific mapping that will serve our purposes (Section 3.1). We then introduce two equivalence relations between discrete systems. Both are coarser than (i.e., contain, in the sense of set inclusion) the usual bisimulation used in automata theory. One corresponds to an external point of view: two systems are equivalent if they transform input streams into output streams in the same way (Section 3.2). The other corresponds to an internal point of view: two systems are equivalent if they have "the same structure" (in a sense made precise in Section 3.3). We prove that these are just two perspectives on the same relation.

This equivalence relation plays a crucial role in the two results of Section 4. First, we show that any discrete system is equivalent to some wiring together of memoryless systems (Section 4.2). Second, we show that any discrete system with a finite state set is equivalent to a combination of finitely many Boolean reactive systems (Section 4.3).

1.1. Notation

In this article, we will use the following notation.
(i) Let $\mathbb{N}$ denote the set of all natural numbers, $\mathbb{N} = \{0, 1, 2, \ldots\}$.
(ii) By default, the variable n will refer to a natural number: $n \in \mathbb{N}$. We will also see the integers n as their set-theoretic counterparts, that is, $0 = \emptyset$ and $n = \{0, 1, \ldots, n-1\}$; in that context, $i \in n$ simply means $0 \leq i \leq n-1$. Note that the set n contains exactly n elements, and this is what really matters in this notation.
(iii) When the size of a sequence does not matter, it will be denoted $(x_i)_i$, which makes it easier to write and read. If each $x_i$ is an element of the same set X, then we will write $(x_i)_i \in X$ instead of fixing an n and writing $(x_i)_{i \in n} \in X^n$. If the $x_i$ belong to possibly different sets, and if there exists a compact notation for their product, then we will use the analogous shorthand as well.
(iv) $\mathbf{Set}$ is the usual category of sets.

2. Boxes and Wiring Diagrams

In this section, we present the background necessary for understanding this paper, namely, dynamical systems from a categorical point of view, reduced to the absolute minimum used in this article. For more background on monoidal categories and functors, refer to [3] and [4]. Our approach is different from the one in [5]. The dynamical systems presented here are defined as a generalisation of automata whose input and output spaces are predetermined. We will define a category of lists, a category of boxes, and diverse operations on them.

In this section, $\mathcal{C}$ will be any category with finite products; that is, for any finite sequence of objects of $\mathcal{C}$, their product always exists in $\mathcal{C}$ (typically, $\mathcal{C} = \mathbf{Set}$). Most of the following notions were already defined in [2]; we only recall them without proving their properties. Examples can be found in the longer version of this article [6].

2.1. The Category of Typed Finite Sets

Before defining proper boxes, we need to define the notion of input and output ports. These will eventually be the sides of our boxes.

Definition 1 (category of $\mathcal{C}$-typed finite sets [2]). The category of $\mathcal{C}$-typed finite sets is defined as follows:
(i) Objects: an object is any pair $(P, \tau)$ such that P is a finite set and $\tau \colon P \to \mathrm{Ob}(\mathcal{C})$ is a function.
(ii) Morphisms: a morphism from $(P, \tau)$ to $(Q, \sigma)$ is a function $\gamma \colon P \to Q$ such that $\tau = \sigma \circ \gamma$.
(iii) Identities: the identity morphism on $(P, \tau)$ is the identity function of the set P.
(iv) Composition: the composition of morphisms is the usual composition of functions.
An object in this category is called a $\mathcal{C}$-typed finite set; a morphism is called a $\mathcal{C}$-typed function.
We can rewrite a $\mathcal{C}$-typed finite set $(P, \tau)$ as the finite sequence $(X_p)_{p \in P}$, where $X_p = \tau(p)$. A $\mathcal{C}$-typed finite set is simply a list of objects in $\mathcal{C}$, indexed by a finite set P. If $\mathcal{C} = \mathbf{Set}$, a $\mathcal{C}$-typed finite set is a list of sets.
A $\mathcal{C}$-typed function $\gamma \colon (P, \tau) \to (Q, \sigma)$ can then be seen as a means to obtain the former list $(\tau(p))_{p \in P}$ from the latter list $(\sigma(q))_{q \in Q}$, by reordering, duplicating, or even ignoring its elements. As $\tau = \sigma \circ \gamma$, the list $(\tau(p))_{p \in P}$ can be rewritten as $(\sigma(\gamma(p)))_{p \in P}$. Beware of the inversion: γ goes from $(P, \tau)$ to $(Q, \sigma)$, and we see it as a transformation of the list indexed by Q into the list indexed by P.

Definition 2 (sum of typed finite sets [2]). Let $(P, \tau)$ and $(Q, \sigma)$ be two $\mathcal{C}$-typed finite sets.
We define their sum $(P, \tau) + (Q, \sigma)$ as $(P + Q, \tau + \sigma)$, where $P + Q$ is the usual disjoint union of sets and $\tau + \sigma$ is defined as τ on P and as σ on Q.

Definition 3 (sum of typed functions [2]). Let $\gamma \colon (P, \tau) \to (P', \tau')$ and $\delta \colon (Q, \sigma) \to (Q', \sigma')$ be two $\mathcal{C}$-typed functions.
We define their sum $\gamma + \delta$ as the $\mathcal{C}$-typed function from $(P, \tau) + (Q, \sigma)$ to $(P', \tau') + (Q', \sigma')$ that acts as γ on P and as δ on Q.
We can view the sum of $\mathcal{C}$-typed finite sets as the concatenation of the corresponding lists, and the sum of $\mathcal{C}$-typed functions as an action on each part of the concatenated list.

Proposition 1. The category of $\mathcal{C}$-typed finite sets has the following properties:
(i) The sum of $\mathcal{C}$-typed finite sets is a coproduct.
(ii) There is only one $\mathcal{C}$-typed finite set whose underlying finite set is empty. We denote it by 0.
(iii) The category of $\mathcal{C}$-typed finite sets has a symmetric monoidal structure for the sum, with 0 as the unit.

Proof. Refer to [2].

2.2. Dependent Products

In this subsection, we define the dependent product functor. If a $\mathcal{C}$-typed finite set can be viewed as a list of objects of $\mathcal{C}$, then the dependent product of this list is simply the product of its elements.

Definition 4 (dependent product [2]). We define the dependent product as the functor with the following actions:
(i) Action on objects: a $\mathcal{C}$-typed finite set $(P, \tau)$ is sent to the product $\prod_{p \in P} \tau(p)$ of the objects appearing in the list.
(ii) Action on morphisms: a $\mathcal{C}$-typed function $\gamma \colon (P, \tau) \to (Q, \sigma)$ is sent to the morphism $\prod_{q \in Q} \sigma(q) \to \prod_{p \in P} \tau(p)$ whose component at $p \in P$ is the projection onto the factor indexed by $\gamma(p)$.
The interpretation of the dependent product is actually quite straightforward: the dependent product of a $\mathcal{C}$-typed finite set, viewed as a list, is the product of the elements of the list in the same order as they appear in the list. The dependent product is thus a functor that packages the usual operations of diagonal, projection, and swapping.
Recall that $\mathcal{C}$ has finite products; as a consequence, the dependent product always exists.

Proposition 2. There is a natural isomorphism between the dependent product of a sum of $\mathcal{C}$-typed finite sets and the product of their dependent products; in other words, the dependent product functor sends coproducts of $\mathcal{C}$-typed finite sets to products in $\mathcal{C}$.

Proof. Refer to [2].

This property is also quite intuitive: if one views the coproduct in as the concatenation of lists and the dependent product as the product of the elements of the list, then the dependent product of the concatenation of two lists is the product of the dependent products of each list.
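To make the list picture concrete, here is a small Python sketch with $\mathcal{C} = \mathbf{Set}$, encoding a typed finite set simply as a list of finite sets; the particular sets, the map gamma, and the helper names are ours, introduced only for illustration.

```python
from itertools import product

# Two Set-typed finite sets, written as lists of finite sets.
A = [{0, 1}, {"a", "b", "c"}]            # a list of two sets
B = [{"a", "b", "c"}, {0, 1}, {0, 1}]    # a list of three sets

# A typed function gamma from B to A sends each position of B to a position
# of A carrying the same set; here it reorders and duplicates A's entries.
gamma = {0: 1, 1: 0, 2: 0}               # position in B  ->  position in A
assert all(B[q] == A[gamma[q]] for q in gamma)

def dependent_product(typed_finset):
    """The product of the entries of the list, as a set of tuples."""
    return set(product(*typed_finset))

def induced_map(gamma, a_tuple):
    """Beware of the inversion: gamma goes from B to A, but it turns a tuple
    over A into a tuple over B by reading off the coordinate gamma[q]."""
    return tuple(a_tuple[gamma[q]] for q in sorted(gamma))

for a in dependent_product(A):                             # e.g. a = (0, "b")
    assert induced_map(gamma, a) in dependent_product(B)   # gives ("b", 0, 0)
```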

2.3. The Category of Boxes and Wiring Diagrams

The category introduced in this subsection is not the main purpose of this article; however, its properties will be useful in what follows.

In the following, by abuse of notation, we will write for and for .

Definition 5 ($\mathcal{C}$-box [2]). We call a $\mathcal{C}$-box any pair $X = (X^{in}, X^{out})$ of $\mathcal{C}$-typed finite sets.

A $\mathcal{C}$-box is thus a pair $(X^{in}, X^{out})$ of $\mathcal{C}$-typed finite sets, where $X^{in}$ represents the list of input ports and $X^{out}$ represents the list of output ports.

Definition 6 (wiring diagram [2]). Let $X = (X^{in}, X^{out})$ and $Y = (Y^{in}, Y^{out})$ be $\mathcal{C}$-boxes.

A wiring diagram $\varphi \colon X \to Y$ is a pair $(\varphi^{in}, \varphi^{out})$ of $\mathcal{C}$-typed functions such that (i) $\varphi^{in} \colon X^{in} \to Y^{in} + X^{out}$ and (ii) $\varphi^{out} \colon Y^{out} \to X^{out}$.

The $\mathcal{C}$-typed function $\varphi^{in}$ tells what feeds the input ports of the box X: each input port of X is either connected to an input port of Y or to an output port of X (in case of feedback); the $\mathcal{C}$-typed function $\varphi^{out}$ tells what feeds the output ports of Y: each output port of Y is connected to some output port of X.
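For illustration only (this is not notation from [2]), a wiring diagram can be stored as two tables: one recording what feeds each input port of X, and one recording which output port of X feeds each output port of Y. A minimal Python sketch with hypothetical port names:

```python
# Inner box X: input ports x_in1, x_in2; output ports x_out1, x_out2.
# Outer box Y: input port y_in; output port y_out.

# What feeds each input port of X: either an input port of Y
# or an output port of X (a feedback wire).
phi_in = {
    "x_in1": ("Y_in", "y_in"),      # supplied from outside the box
    "x_in2": ("X_out", "x_out2"),   # feedback from X's own output
}

# What feeds each output port of Y: some output port of X.
phi_out = {
    "y_out": "x_out1",
}
```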

We can now compose the wiring diagrams:

Definition 7 (composition of wiring diagrams [2]). Let $\varphi \colon X \to Y$ and $\psi \colon Y \to Z$ be two wiring diagrams. We define their composition as the pair whose input component is defined such that the first of two diagrams commutes and whose output component is defined such that the second commutes; both diagrams are given in [2].

Definition 8 (category of $\mathcal{C}$-boxes and wiring diagrams [2]). The category of $\mathcal{C}$-boxes and wiring diagrams is defined as follows:
(i) Objects: an object is a $\mathcal{C}$-box.
(ii) Morphisms: a morphism between two $\mathcal{C}$-boxes X and Y is a wiring diagram from X to Y.
(iii) Identities: the identity morphism on X is the identity wiring diagram.
(iv) Composition: the composition of wiring diagrams is the composition defined in Definition 7.

2.4. Monoidal Structure of the Category of Boxes

The category has a monoidal structure for the parallel composition of boxes that corresponds to the intuitive idea of parallelising boxes.

Definition 9 (parallel composition of boxes [2]). Let and be two -boxes.

The parallel composition, or sum, of X and Y, denoted as , is the box , where is the sum of -typed finite sets (cf. Definition 2).

The parallel composition of two $\mathcal{C}$-boxes amounts to the concatenation of their input ports and of their output ports.

Definition 10 (parallel composition of wiring diagrams [2]). Let and be two wiring diagrams.

The parallel composition, or sum, of φ and ψ, denoted as , is the wiring diagram , where is the sum of -typed functions (cf. Definition 3).

Proposition 3. The category of $\mathcal{C}$-boxes has the following properties:
(i) The closed box, defined as the pair $(0, 0)$, where 0 is the $\mathcal{C}$-typed finite set defined in Proposition 1, is the unit for the sum of boxes.
(ii) The category of $\mathcal{C}$-boxes has a symmetric monoidal structure for the sum of boxes, with the closed box as the unit.

Proof. Refer to [2].

2.5. Dependent Product of Boxes

The aim of this section is to extend the notion of dependent product (Definition 4) to -boxes and wiring diagrams.

Definition 11 (dependent product of a -box [2]). The dependent product of the -box is the pair .

Remark 1. The dependent product of is .

Definition 12 (dependent product of wiring diagrams [2]). The dependent product of the wiring diagram is the pair .

Remark 2. The dependent product is .

Proposition 4. Let and . The dependent product of is the pair , where (i) and (ii) .

Proof. Refer to [2].

Remark 3. The dependent product of -boxes and wiring diagrams could be described in terms of monoidal functors; however, the codomain of this functor is not as expected, but a category that has the same objects (pairs of objects of ) but whose morphisms are pairs of morphisms such that is the morphism in and is the morphism in . The composition law is the one given in Proposition 4.

Until now, we have only defined a category of $\mathcal{C}$-boxes, with interesting properties. These $\mathcal{C}$-boxes are exactly as their name suggests: empty boxes. The extension of the dependent product to $\mathcal{C}$-boxes is a necessary step in order to define the "inhabitants" of $\mathcal{C}$-boxes.

3. Discrete Systems and Their Equivalences

In this section and in the rest of this paper, we will consider the special case where $\mathcal{C} = \mathbf{Set}$. Thus, in general, we will simply call "boxes" what we introduced as "$\mathcal{C}$-boxes". We denote the symmetric monoidal category of boxes as .

3.1. Definition and Basic Properties

The notions introduced in this section come from [2]. The properties stated here are proven in the same article.

Definition 13 (discrete systems [2]). Let $X = (X^{in}, X^{out})$ be a box.

A discrete system for the box X, or discrete system for short, is a 4-tuple $F = (S, f^{rdt}, f^{upd}, s_0)$, where
(i) S is the state set of F;
(ii) $f^{rdt}$, a function from S to the dependent product of $X^{out}$, is its readout function;
(iii) $f^{upd}$, a function from the dependent product of $X^{in}$ times S to S, is its update function;
(iv) $s_0 \in S$ is its initial state.

We denote by $\mathrm{DS}(X)$ the set of all discrete systems for the box X.

Remark 4. In Proposition 3, we defined the closed box as the pair $(0, 0)$, where 0 denotes the empty typed finite set. Its dependent product is a one-element set (the empty product). As a consequence, an inhabitant of a closed box is a dynamical system with no inputs and no outputs, just a set S and a function from S to S.

Remark 5. From a set-theoretic point of view, the collection of all discrete systems for a box is too big to be a set. A potential solution is to require the state sets to lie within a set big enough for our purposes, for example, a level of the von Neumann hierarchy of sets that contains the usual sets, vector spaces, measurable spaces, Hausdorff spaces, fields, etc., used in mathematics (a suitable level suffices [6, Lemma 2.9]).

In the following, we will continue to write the sets of discrete systems (and similarly for mappings) with the state set ranging over arbitrary sets for the sake of understandability; in case set-theoretic problems emerge, the state sets should instead be restricted to such a bounded universe of sets.

Note that discrete systems can be viewed as a generalisation of automata: they have no final states; the transition function is a genuine function, defined on every input and every state, so all discrete systems are deterministic; and the input alphabet can be infinite. Discrete systems are not automata that recognise a language, but rather automata that take any input stream and return an output stream based on the states they pass through; that is, discrete systems are a generalisation of transducers as defined in [7]. Alternatively, discrete systems exactly correspond to the sequential automata in [8].

We previously viewed general boxes (objects of the category of boxes) as empty frames. Discrete systems are the objects that "live" inside. One can draw a parallel with programming: a box is the signature of a function, that is, its accepted types of inputs and outputs, and the discrete system is the actual code of the function.
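Continuing the programming parallel, the following Python sketch encodes a discrete system as a readout function, an update function, and an initial state. The encoding is our own illustration, not notation from [2], and we take the update to act on an (input, state) pair.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class DiscreteSystem:
    readout: Callable[[Any], Any]       # state -> output
    update: Callable[[Any, Any], Any]   # (input, state) -> state
    initial: Any                        # initial state s0

# Example: a running total modulo 10.  The state set is {0, ..., 9},
# the readout is the identity, and the update adds the input to the state.
running_total = DiscreteSystem(
    readout=lambda s: s,
    update=lambda i, s: (s + i) % 10,
    initial=0,
)
```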

In the rest of the article, we will often represent a discrete system as the following two-arrow graph:

The first function describes how a state and an input are transformed into a new state; the second describes how the state is output, or “read out”. In general, the initial state will not be represented in these diagrams though it is implicitly there.

Discrete systems are part of the more general class of dynamical systems. We can define other types of dynamical systems depending on the category $\mathcal{C}$ that we are interested in. If $\mathcal{C}$ is the category of Euclidean spaces, then we will refer to continuous systems. For more examples, refer to [2].

Definition 14 (DS application of a wiring diagram [2]). Let be a wiring diagram. Let .

The DS application of φ to F, denoted as , is the discrete system such that (i) , (ii) , (iii) , and (iv) .

We can view the result of the DS application as the discrete system that we obtain from F by implementing the wiring diagram φ.

Definition 15 (parallel composition of discrete systems [2]). Let and be boxes and let be discrete systems.
The parallel composition of and , denoted as , is the discrete system such that (i) , (ii) , (iii) , and (iv) makes the following diagram commute:We also define the parallel composition of and , denoted as , by

Proposition 5. Parallel composition provides a natural map .

Proof. Refer to [2].

Theorem 1. Definitions 13–15 define a lax monoidal functor .

Proof. Refer to [2].

3.2. An External Equivalence Relation on Dynamical Systems

Via the monoidal functor DS, a box contains a specified sort of discrete system (depending on the ports of the box). For an external observer, the content of the box does not matter; what matters is the way it transforms input streams into output streams. Thus, even if two boxes contain different discrete systems, for example, one with an infinite state set and the other with a finite state set, as long as they both give the same output in response to the same input, they are viewed as "equivalent" from an external point of view.

The following definitions formalise this idea.

Definition 16 (input and output streams). Let .

An input stream (for X) is a finite sequence , where .

The output stream produced by F when given , denoted , is the stream defined by the following recursive system:

We refer to the state s that F reaches after having processed the input stream as the resulting state of F and denote it . Formally, if , then according to the previous recursive system, the resulting state of F is .

Remark 6. According to the notation proposed in Section 1.1, will be written as .

Remark 7. Definition 16 is a continuation of the definitions of run maps and behaviours in [8], which are functions that assign, respectively, the resulting state and the last output of the automaton given an input stream. The results we obtain with our notations are similar to those in [8].
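Computationally, producing the output stream is a left fold over the input stream. The sketch below reads the state out after each transition, one convention consistent with the informal description above; the precise recursion is the one given in Definition 16.

```python
def output_stream(readout, update, initial, inputs):
    """Fold an input stream through a discrete system, collecting outputs."""
    s = initial
    outputs = []
    for x in inputs:
        s = update(x, s)            # transition on the next input
        outputs.append(readout(s))  # read the reached state out
    return outputs, s               # the output stream and the resulting state

# Running total modulo 10, as before.
outs, final = output_stream(lambda s: s, lambda i, s: (s + i) % 10, 0, [3, 4, 5])
assert outs == [3, 7, 2] and final == 2
```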

Definition 17 (equivalence as stream transducers). Let and be two discrete systems.

We say that F and G are equivalent as stream transducers, and we write , when F and G produce the same output stream on every input stream.

It is easy to see the following.

Proposition 6. The relation is an equivalence relation on the set , for any box X.

3.3. An Internal Equivalence Relation on Dynamical Systems

The relation defined above does not give any information on the links between two discrete systems that are equivalent as stream transducers. In this subsection, we define another equivalence relation that provides an internal point of view. We then prove that the two equivalence relations are the same.

In the following, is any box.

Definition 18 (simulation relation). Suppose given and in .

We say that F simulates G, and we write , if there exists a function α from the state set of F to the state set of G that preserves the initial state and makes the following two diagrams commute, one relating the update functions and one relating the readout functions:

We refer to α as a simulation function: it witnesses the simulation .

A priori, the simulation relation does not relate the output of the two discrete systems F and G (though this does follow; see Lemma 2); it only declares a correspondence between both their state sets and update and readout functions. Both discrete systems can work in parallel; their state sets need not be the same, nor even of the same cardinality, but they somehow coordinate via the map α. The function α draws the parallel between the internal machinery of F and that of G.
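On finite examples, the conditions of Definition 18 can be checked by brute force. In the sketch below (our own encoding), F counts inputs modulo 4 while G only tracks their parity, and the candidate simulation function alpha collapses the states of F onto those of G; we assume, as in the discussion above, that alpha goes from the state set of F to that of G and that the update acts on an (input, state) pair.

```python
from itertools import product

# F: counts inputs modulo 4, but reads out only the parity of its state.
SF, f_upd, f_rdt, f_init = range(4), lambda x, s: (s + x) % 4, lambda s: s % 2, 0
# G: tracks the parity of the inputs directly.
SG, g_upd, g_rdt, g_init = range(2), lambda x, s: (s + x) % 2, lambda s: s, 0

alpha = lambda s: s % 2          # candidate simulation function S_F -> S_G
inputs = [0, 1]

assert alpha(f_init) == g_init                              # preserves the initial state
assert all(g_rdt(alpha(s)) == f_rdt(s) for s in SF)         # readouts agree through alpha
assert all(alpha(f_upd(x, s)) == g_upd(x, alpha(s))         # updates commute with alpha
           for x, s in product(inputs, SF))
# All checks pass: F simulates G (indeed, the two are also stream equivalent).
```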

For the rest of the article, we will be more interested in the simulation relation than any particular simulation function witnessing it: any one will do.

Remark 8. Definition 18 refers to the existence of morphisms between two automata as described in the automata theory literature [8]. The existence of such morphisms suffices for our purposes. We are a bit more restrictive here, as the outputs need to be the same in both automata, while in the usual definition of morphisms, automata can have different output alphabets, as long as there is a function to convert one output into the other.

The simulation relation is not necessarily an equivalence relation and is not enough for our purpose, but we can use it to generate the equivalence relation we actually need.

Definition 19 (internal equivalence relation on ). Let .

We say that F and G are simulation equivalent, and we write , if there exists a finite sequence of discrete systems starting at F and ending at G in which each consecutive pair is related by the simulation relation, in one direction or the other.

It is not hard to check the following.

Theorem 2. The equivalence relation is the equivalence relation generated by ; that is, is the smallest equivalence relation R such that .

Finally, we need to show that the equivalence relation actually groups discrete systems that have the same behaviour as a stream transducer, in the sense of Definition 17; that is, the external and the internal equivalence relation are the same.

Lemma 1. Let F and G be discrete systems for the same box. If F and G are equivalent as stream transducers, then there exists a discrete system H such that H simulates F and H simulates G.

Proof. Let and .

Take such that (i) , (ii) , (iii) , and (iv) .

Take as simulation functions the respective projections and .

It is easy to see that the required diagrams in Definition 18 do commute. For all , we have by definition of . Also, for all , because there exists some stream such that F results in s and G results in ; besides, as , we have , which implies . Consequently, the diagrams commute and H simulates both F and G.

Lemma 2. Let . If , then .

Proof. Follows by induction on the length of an input stream .

Theorem 3. ; or equivalently: .

Proof. To prove , we need and .

Suppose first that are dynamical systems such that . According to Lemma 1, there exists a such that and , and hence by Definition 19. This establishes .

We now show that . According to Proposition 6, is an equivalence relation, and according to Lemma 2, contains . Theorem 2 states that is the smallest equivalence relation that contains ; necessarily, we have .

The goal of this article is to show that the behaviour of a general discrete system can be emulated by some specific wiring of some other discrete system, chosen with constraints (for example, on its internal structure). As far as we know, this result cannot be obtained with genuine equality. However, we have a description of what it means to be equivalent, both from an internal and from an external point of view, with the assurance that, seen as a black box, the "inhabited" box remains unchanged.

As we are not using real equalities, we need to define relations between sets that correspond to the usual inclusion and equality.

Definition 20 (inclusion/equality up to equivalence). Let . We consider the equivalence relation from Definition 19 (or, equivalently, in Definition 17).

We say that A is a subset of B up to equivalence, and we write , when .

We say that A is equal to B up to equivalence, or A is equivalent to B, and we write , when and .

If are functors, then we write when, for all box X, we have . We write , when and .

If are mappings (not necessarily functors), then we write when, for all boxes X and Y, we have and .

4. Main Results

Before we introduce the actual results of the paper, we need a few more notions.

4.1. Algebras and Closures

Definition 21 (algebra). Given a monoidal category, a functor from it to $\mathbf{Set}$ is called an algebra over that monoidal category when it is a lax monoidal functor.
In our case, the monoidal category is the category of boxes, and DS is an algebra over it by Theorem 1.

Definition 22 (subalgebra). Let be an algebra over . Let denote its first coherence map (we recall that is the parallel composition of boxes (cf. Definition 9)).

A functor is called a subalgebra of A when (i) , and (ii) and .

Here, A and B are functors that transform boxes into sets. In our setting, the conditions can be interpreted as follows:
(i) First item: discrete systems generated by B are included in those generated by A.
(ii) Second item: the parallel composition of two discrete systems F and G generated by B is also generated by B.
(iii) Third item: B is stable through wiring diagrams: wiring a discrete system generated by B gives another discrete system generated by B.

Note that a subalgebra is itself an algebra.

Definition 23 (closure). Let be an algebra over .

Let B be any map assigning to each box X a set $B(X) \subseteq A(X)$. The closure of B is the intersection of all subalgebras of A that contain B(X) for every box X (any intersection of subalgebras is a subalgebra).

The closure of a map B can be understood as the minimal lax monoidal functor (or algebra) containing B.

4.2. Memoryless Systems

Our first main result concerns the subclass of discrete systems that we call memoryless. We show that wiring together memoryless systems can lead to systems that have memory.

Definition 24 (memoryless discrete systems). Let X be a box.

A memoryless discrete system for the box X, or memoryless system for short, is a discrete system $F = (S, f^{rdt}, f^{upd}, s_0)$ whose update function immediately discards the previous state and uses only the current input; formally, $f^{upd}(x, s) = g(x)$ for some function g from the dependent product of $X^{in}$ to S.

We denote by the set of all memoryless discrete systems for the box X: .

We call these discrete systems memoryless because we see the states as a kind of memory. The discrete systems defined above transition from one state to another without checking their current state, i.e., without checking their memory.
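In code, "memoryless" simply means that the update ignores its state argument; a minimal sketch (our own illustrative encoding, with the readout taken to be the identity):

```python
# A memoryless discrete system: the next state depends only on the input.
# Here the state merely caches the sign of the last input, which the
# readout (the identity) reports.
memoryless_sign = dict(
    readout=lambda s: s,
    update=lambda i, s: 1 if i >= 0 else -1,   # ignores s entirely
    initial=1,
)

# Feeding it a stream shows that earlier inputs leave no trace.
s, outs = memoryless_sign["initial"], []
for x in [-5, 2, 7, -1]:
    s = memoryless_sign["update"](x, s)
    outs.append(memoryless_sign["readout"](s))
assert outs == [-1, 1, 1, -1]
```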

The following definition is a natural restriction of memoryless discrete systems; as these systems are memoryless, the only goal of their states is to produce the output via their readout function. The simplest case is when the readout function is the identity.

Definition 25 (direct-output discrete systems). Let be a box.

A direct-output memoryless discrete system for the box X, or direct-output discrete system for short, is a discrete system $F = (S, f^{rdt}, f^{upd}, s_0)$ such that (i) the state set S is the dependent product of $X^{out}$, (ii) the readout function $f^{rdt}$ is the identity, and (iii) the update function depends only on the current input, i.e., $f^{upd}(x, s) = g(x)$ for some g.

We denote by the set of all direct-output discrete systems for the box X:

Remark 9. The maps and are not functors because they are not closed under wiring. Indeed, the whole point is that the result of wiring together memoryless systems is not necessarily memoryless.

We can now prove one of the main results of this paper: every discrete system can be obtained (up to equivalence) from a memoryless system and a feedback loop. The feedback loop is responsible for holding the state that was originally in the discrete system.

Here is the formal statement.

Theorem 4. .

Proof. We have , so ; thus, . We need the opposite inclusion (up to equivalence) .

Let and let . We will find , and such that .

Let be the list with one element, , and consider the box with only that port on the left and the right. We define X as the parallel composition of this box and Y; that is,

Note that and . Thus, if , then . Similarly, if , then .

We choose as the pair of coproduct inclusions: (i) and (ii) .

It follows from Definition 12 that their dependent products and are projections.

Recall that the goal is to find such that . So define F as follows: (i) , (ii) , (iii) , and (iv) .

It is easy to see that F is in because and have the correct form. So let ; we need to show it is equivalent to G. We compute each part of according to Definition 14.

Its state set is as follows:

Its readout function is defined on an arbitrary as follows:

Its update function is defined on an arbitrary as follows:

Finally, its start state is as follows:

Consequently, the following diagram commutes. Here, . This yields , and hence , which concludes the proof.

Corollary 1. .

Corollary 2. For all , if G has a finite state set, then there exists with a finite state set such that .

Proof. In the proof of Theorem 4, take , but instead of , take . If is finite, so is .

In that case, H is no longer in but in .

Corollary 3. (assuming the axiom of choice) For all , if G has an infinite state set, then there exists with a state set of the same cardinality as G, such that .

Proof. In the proof of Theorem 4, take , but instead of , take . The axiom of choice gives .

In that case, H is no longer in but in .

Theorem 4 states that systems without memory can be wired together to form systems with memory. In fact, the result is more subtle: it states that, for any discrete system, we can find (or build) a memoryless discrete system and a certain wiring such that both systems are equivalent as stream transducers. The internal equivalence relation described in Theorem 3 is instrumental in proving Theorem 4, while the result is stated with regard to the external equivalence relation.

Another interpretation is the following. The mapping produces very "degenerate" discrete systems: they do not remember anything, and they do not have real state spaces; it really produces nothing more than mathematical functions. The point to see here is that the only thing a discrete system needs to generate memory is really its last output, not necessarily the whole list of inputs and outputs.
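The idea can be imitated concretely. In the sketch below (an illustration of the idea only, not the construction used in the proof), G is a running sum modulo 4, and F is memoryless: its update depends only on its input, which the wiring diagram enlarges with a feedback wire carrying F's previous output. The two systems produce the same output stream.

```python
def run_G(inputs):
    """G: a running sum modulo 4 (a system with memory)."""
    s, outs = 0, []
    for x in inputs:
        s = (s + x) % 4
        outs.append(s)
    return outs

def run_wired_F(inputs):
    """F is memoryless: its input is a pair (fed_back, x) and its update
    never consults its own stored state.  The wiring diagram feeds F's
    first output port back into its first input port."""
    fed_back, state, outs = 0, 0, []
    for x in inputs:
        state = (fed_back + x) % 4     # update((fed_back, x), state): state is ignored
        readout = (state, state)       # (value for the feedback wire, visible output)
        fed_back = readout[0]          # routed back in by the wiring diagram
        outs.append(readout[1])        # exposed as the outer box's output
    return outs

stream = [1, 2, 3, 1, 0, 3]
assert run_G(stream) == run_wired_F(stream)   # equivalent as stream transducers
```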

4.3. Finite-State Systems

The second result is a refinement of Theorem 4 and is somewhat similar to it. We show that wiring together two-state discrete systems can generate a finite-state system with memory.

We can view the result as a generalisation of the way transistors are wired together to build a computer, or neurons are wired together to form a brain with finite memory.

Definition 26 (finite-state systems). Let be a box.

A finite-state discrete system for the box X, or finite-state system for short, is a discrete system whose state set is a finite set.

We denote by the set of all finite-state discrete systems for the box X: . For a wiring diagram ϕ, we set .

It is easy to see the following.

Proposition 7. The map is a subalgebra of DS.

Proof. Follows from Definition 22.

Definition 27 (Boolean systems). Let be a box.
A Boolean memoryless discrete system for the box X, or Boolean system for short, is a discrete system F such that F is memoryless and its state set has exactly two elements.
We denote by the set of all Boolean memoryless discrete systems for the box X: .

Remark 10. The map is not a functor, for the same reason as in Remark 9.
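Concretely, a Boolean system behaves like a logic gate read out directly; for instance, a NAND gate can be encoded as follows (our own illustrative encoding, consistent with the definition above):

```python
# A NAND gate as a Boolean memoryless system: two states {0, 1}, an update
# that ignores the current state, and the identity as readout.
nand_gate = dict(
    readout=lambda s: s,
    update=lambda pair, s: 0 if pair == (1, 1) else 1,   # pair: two input bits
    initial=1,
)

assert nand_gate["update"]((1, 1), nand_gate["initial"]) == 0
assert nand_gate["update"]((0, 1), nand_gate["initial"]) == 1
```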

Lemma 3. .

Proof. By construction, , so . We need to show the other inclusion, so let , and it suffices to show that there is with .

We have and finite. Let . There exists an injection and a surjection such that . This is just a binary encoding of .

Define such that (i) , (ii) , and (iii) and .

Then, the following diagram commutes:

We have and (with i as simulation function), so , and hence the result.
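The binary encoding used in the proof is the familiar one. The following sketch (with hypothetical state names and helper functions of our own) encodes a five-element state set into three bits and decodes it back:

```python
from math import ceil, log2

S = ["idle", "load", "run", "halt", "error"]   # a finite state set, |S| = 5
n = max(1, ceil(log2(len(S))))                 # number of bits needed: here 3

def encode(state):
    """An injection S -> 2^n: the binary expansion of the state's index."""
    k = S.index(state)
    return tuple((k >> b) & 1 for b in range(n))

def decode(bits):
    """A surjection 2^n -> S with decode(encode(s)) == s for every s;
    unused codes are collapsed onto the last state."""
    k = sum(bit << b for b, bit in enumerate(bits))
    return S[min(k, len(S) - 1)]

assert all(decode(encode(s)) == s for s in S)
```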

Lemma 4. .

Proof. Observe that . By Corollary 1, we have . In particular, , so . This gives one inclusion, .

As for the reverse inclusion (up to equivalence), let X be a box and let . By Corollary 1, there exists such that . By Corollary 2, we can choose G so that and hence the result.

Theorem 5. .

Proof. Clearly, . By Lemma 3, in order to prove , it suffices to prove . Furthermore, since (Lemma 4), we reduce to proving that .

Let be a box, and let . We have , so according to Corollary 1, there exists a box , a wiring diagram , and a such that . Furthermore, according to Corollary 2, we can choose F with the finite state set. Finally, , and , so and hence the result.
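Theorem 5 is the formal counterpart of building sequential hardware from combinational gates plus feedback wires. As a toy illustration of ours (not a construction from the paper), a one-bit register arises from memoryless Boolean gates once its output is fed back into its input:

```python
# Memoryless Boolean gates.
AND = lambda a, b: a & b
OR  = lambda a, b: a | b
NOT = lambda a: 1 - a

def one_bit_register(commands, q0=0):
    """commands: a list of (load, data) bit pairs; returns the stored bit
    after each step.  The next value of q is a purely combinational
    function of (load, data, q); the memory comes from feeding q back in:
        q_next = (load AND data) OR (NOT load AND q)
    """
    q, outs = q0, []
    for load, data in commands:
        q = OR(AND(load, data), AND(NOT(load), q))   # the fed-back wire carries q
        outs.append(q)
    return outs

# Store 1, hold twice, store 0, hold.
assert one_bit_register([(1, 1), (0, 0), (0, 1), (1, 0), (0, 1)]) == [1, 1, 1, 0, 0]
```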

5. Conclusion

Boxes are empty frames that condition the inputs and outputs of their content, a generalisation of automata called discrete systems. Such systems come with a state set that represents their memory of previous inputs. In a sense, discrete systems can learn. However, we can define a subclass of discrete systems that do not store any experience of their past. We see these as reactive, in the sense that they still react to any input, but their past experience does not influence that reaction. Unlike typical discrete systems, they do not keep a memory of the previous inputs.

In this paper, we use a category-theoretic framework to give a constructive proof that any discrete system with memory can be simulated by some correctly wired memoryless system. This result can be understood as a phenomenon of emergence in a complex system.

This construction opens a number of new questions. A possible question might consist in finding the “best” memoryless system, where “best” could depend on the definition of some valuation function, e.g., the most parsimonious in terms of the state set. A similar question could be asked with respect to wiring diagrams, whose number of feedback loops could be bounded by a cost function.

Possible extensions of this work could concern dynamical systems other than DS. We can establish the same kind of results when considering measurable or continuous dynamical systems.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Authors’ Contributions

Erwan Beurier, Dominique Pastor, and David I. Spivak contributed equally to this work.

Acknowledgments

David I. Spivak was supported by the AFOSR (grant nos. FA9550-14-1-0031 and FA9550-17-1-0058) and NASA (grant no. NNH13ZEA001N-SSAT) while working on this project.