Advances in Mathematical Physics

Volume 2016, Article ID 5030593, 21 pages

http://dx.doi.org/10.1155/2016/5030593

## Fractal Dimension versus Process Complexity

^{1}Departamento de Lògica, Història i Filosofia de la Ciència, Universitat de Barcelona, 08001 Barcelona, Spain
^{2}Algorithmic Nature Group, LABORES, 75006 Paris, France
^{3}Departamento de Filosofía, Lógica y Filosofía de la Ciencia, Universidad de Sevilla, 41018 Seville, Spain
^{4}Unit of Computational Medicine, SciLifeLab, Department of Medicine Solna, Center for Molecular Medicine, Karolinska Institute, 171 76 Stockholm, Sweden
^{5}Department of Computer Science, University of Oxford, Oxford OX1 3QD, UK

Received 1 May 2016; Accepted 29 June 2016

Academic Editor: Joao Florindo

Copyright © 2016 Joost J. Joosten et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We look at small Turing machines (TMs) that work with just two colors (alphabet symbols) and either two or three states. For any particular such machine and any particular input, we consider what we call the *space-time* diagram, which is basically the collection of consecutive tape configurations of the computation. In our setting, it makes sense to define a fractal dimension for a Turing machine as the limiting fractal dimension of the corresponding space-time diagrams. It turns out that there is a very strong relation between the fractal dimension of a Turing machine of the above-specified type and its runtime complexity. In particular, a TM with three states and two colors runs in at most linear time if and only if its dimension is 2, and its dimension is 1 if and only if it runs in superpolynomial time and uses polynomial space. If a TM runs in time $O(x^n)$, we have empirically verified that the corresponding dimension is $(n+1)/n$, a result that we can only partially prove. We find the results presented here remarkable because they relate two completely different complexity measures: the geometrical fractal dimension on one side versus the time complexity of a computation on the other.

#### 1. Part I: Theoretical Setting

In the first part of the paper, we will define the basic notions we work with. In particular, we will fix a computational model: small Turing machines with a one-way infinite tape. For these machines, we will define the so-called *space-time* diagrams, which are a representation of the memory state throughout time. For these diagrams, we will define a notion of fractal dimension. Next, some theoretical results are proven about this dimension.

##### 1.1. Complexity Measures

Complexity measures are designed to capture complex behavior and quantify *how* complex, according to that measure, that particular behavior is. It can be expected that different complexity measures from possibly entirely different fields are related to each other in a nontrivial fashion. This paper explores the relation between two rather different but widely studied concepts and measures of complexity. On the one hand, there is a geometrical framework in which the complexity of spatiotemporal objects is measured by their fractal dimension. On the other hand, there is the standard framework of computational (resources) complexity where the complexity of algorithms is measured by the amount of time and memory they take to be executed.

The relation we establish between both frameworks is as follows. We start in the framework of computations and algorithms and for simplicity assume that they can be modeled as using discrete time steps. Now, suppose we have some computer $M$ that performs a certain task on input $x$. We can assign a spatiotemporal object to the computation of $M$ on $x$ as follows.

We look at $M_0(x)$: the spatial representation of the memory when the machine $M$ starts on input $x$. Next we look at $M_1(x)$: the spatial representation of the memory after one step in the computation, and so forth for $M_2(x), M_3(x), \ldots$. Then, we “glue” these spatial objects together into one object by putting each output in time next to the other: $D_M(x) := M_0(x) M_1(x) M_2(x) \cdots$. Each $M_i(x)$ can be seen as a slice of $D_M(x)$: the memory at one particular time in the computation. This is why we call $D_M(x)$ the space-time diagram of the computation. It is for these spatiotemporal objects, and in particular in the limit of $x$ going to infinity, that we can sometimes compute or estimate the fractal dimension, which we shall denote by $d(M)$.
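As an illustration, the gluing construction can be sketched in a few lines of Python. The machine encoding and the names (`rules`, `space_time_diagram`) are our own illustrative choices, not the paper's enumeration; the tape model (one-way infinite to the left, head starting on the rightmost cell, halting when the head falls off on the right) anticipates the conventions of Section 1.2.

```python
def space_time_diagram(rules, tape, state=1, max_steps=100):
    """Run a TM and collect one tape snapshot per computation step.

    rules maps (state, color) -> (new_state, written_color, move),
    with move = -1 (left) or +1 (right).  The tape is one-way
    infinite to the left; the head starts on the rightmost cell.
    """
    tape = list(tape)
    head = len(tape) - 1                 # head on the rightmost cell
    diagram = [tuple(tape)]              # the input configuration
    for _ in range(max_steps):
        state, tape[head], move = rules[(state, tape[head])]
        head += move
        if head < 0:                     # lazily extend the tape to the left
            tape.insert(0, 0)
            head = 0
        diagram.append(tuple(tape))      # one slice of the diagram per step
        if head == len(tape):            # head falls off on the right: halt
            break
    return diagram

# Example machine: sweep left over the black block, turn around on the
# first white cell, then sweep right erasing everything and fall off.
rules = {(1, 1): (1, 1, -1), (1, 0): (2, 0, +1),
         (2, 1): (2, 0, +1), (2, 0): (2, 0, +1)}
diagram = space_time_diagram(rules, [1, 1, 1])
```

Stacking the successive rows of `diagram` on top of each other yields exactly the kind of two-dimensional black-and-white picture whose fractal dimension is studied below.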

One can set this up in such a way that this dimension becomes a well-defined quantity. Thus, we have a translation from the computational framework to the geometrical framework. Next, one can investigate the relation between these two frameworks and, in particular, whether complex algorithms (in terms of time and space complexity) get translated to complex (in the sense of fractal dimension) space-time diagrams.

It is this main question that is being investigated in this paper. The computational model that we choose is that of Turing machines. In particular, we look at small one-way infinite Turing machines (TMs) with just two or three states and a binary tape alphabet.

For these particular machines, we define a notion of dimension along the lines sketched above. In exhaustive computer experiments, we compute the dimensions of all machines with at most three states. Among the various relations that we uncover is the fact that such a TM runs in at most linear time if and only if the corresponding dimension is 2. Likewise, if a TM (in general) runs in superpolynomial time and uses polynomial space, we see that the corresponding dimension is 1.

Admittedly, the way in which fractal geometry measures complexity is not entirely clear, and one could even defend the view that fractal geometry measures something else entirely. Nonetheless, dimension is clearly related to degrees of freedom and as such to an amount of information storage.

In [1], space-time diagrams of Turing machines and one-dimensional cellular automata were investigated in the context of algorithmic information theory. Notably, an incompressibility test on the space-time diagrams led to a classification of the behavior of CAs and TMs thereby identifying nontrivial behavior [2]. The same type of space-time diagrams was also investigated in connection to two other seminal measures of complexity [3–5] connected to Kolmogorov complexity, namely, Solomonoff’s algorithmic probability [2, 6] and Bennett’s logical depth [7, 8]. Interesting connections between fractal dimension and spatiotemporal parameters have also been explored in the past [9–11], delivering a range of applications in landscape analysis and even medicine in the study of time series.

The results presented in this paper were found by computer experiments and proven in part. To the best of our knowledge, this is the first time that a relation of this nature between computational complexity and fractal geometry has been studied.

*Outline*. The current paper is naturally divided into three parts. In the first part (Sections 1.2–1.4), we define the ideas and concepts and prove various theoretical results. In the second part (Sections 2.1–2.2), we describe the experiment we performed to investigate those cases where none of our theoretical results apply, together with its results. Finally, in the third part, we present a literature study in which we mention various results that link fractal dimension to other complexity notions.

In more detail, in Section 1.2, we describe the kind of TMs we will work with. This paper can be seen as part of a larger project in which the authors mine and study the space of small TMs. As such, various previous results and data sets could be reused in this paper, and in Section 1.2 we describe the data sets and results that we reuse.

In Section 1.3, we revisit the box-counting dimension and define a suitable analogous notion of fractal dimension $d(M)$ for TMs $M$. We prove that $d(M) = 2$ in case $M$ runs in at most linear time in the size of the input. Next, in Section 1.4, we prove an upper and a lower bound for the dimension of Turing machines. The Upper Bound Conjecture is formulated to the effect that the proven upper bound is actually always attained. For special cases, this can be proved. Moreover, under some additional assumptions, it can also be proven in general. In our experiment, we test whether in our test space the sufficient additional assumptions are also necessary ones, and they turn out to be so.
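Since Section 1.3 builds on the classical box-counting dimension, a minimal sketch of that estimator may be useful. The function names are ours, and a least-squares slope of $\log N(s)$ against $\log(1/s)$ over a few box sizes $s$ is only a crude stand-in for the limit that defines the dimension; $N(s)$ counts the $s \times s$ boxes that contain at least one black cell.

```python
import math

def box_count(image, s):
    """Number of s-by-s boxes containing at least one black (1) cell."""
    boxes = set()
    for r, row in enumerate(image):
        for c, cell in enumerate(row):
            if cell:
                boxes.add((r // s, c // s))
    return len(boxes)

def box_dimension(image, sizes=(1, 2, 4, 8)):
    """Least-squares slope of log N(s) against log(1/s)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(image, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A completely filled square has box-counting dimension 2,
# and a single straight line of black cells has dimension 1.
filled = [[1] * 16 for _ in range(16)]
line = [[1] * 16]
```

Applied to space-time diagrams, the two sanity checks above correspond exactly to the extreme cases discussed in the paper: diagrams that fill the plane (dimension 2) and diagrams that degenerate to a line (dimension 1).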

Section 2.1 describes how we performed the experiment, what difficulties we encountered, and how they were overcome, and also some preliminary findings are given. The main findings are presented in Section 2.2.

We conclude the paper with Section 3.1 where we present various results from the literature that link different notions of complexity to put our results within this panorama.

##### 1.2. The Space of Small Turing Machines

As mentioned before, this paper forms part of a larger project where the authors exhaustively mine and investigate a set of small Turing machines. In this section, we will briefly describe the raw data that was used for the experiments in this paper and refer for details to the relevant sources.

###### 1.2.1. The Model

A TM can be conceived as both a computational device and a dynamical system. In our studies, a TM is represented by a *head* moving over a *tape* consisting of discrete *tape cells*, where the tape extends infinitely in one direction. In our pictures and diagrams, we will mostly depict the tape as extending infinitely to the left. Each tape cell can contain a symbol from an *alphabet*. Instead of symbols, we speak of *colors*, and in the current paper we will work with just two colors: black and white.

The head of a TM can be in various *states* as it moves over the cells of the tape. We will refer to the collection of TMs that use $n$ states and $k$ symbols/colors as the $(n,k)$-space of TMs. We will always enumerate the states from 1 to $n$ and the colors from 0 to $k-1$. In this paper, we work with just two symbols, so we represent a cell containing a 0 as a white cell and a cell containing a 1 as a black cell.

A computation of a TM proceeds in discrete time steps. The tape content at the start of the computation is called the *input*. By definition, our TMs will always start with the head at the position of the first tape cell, that is, the tape cell next to the edge of the tape; in our pictures, this is normally the rightmost tape cell. Moreover, by definition, our TMs will always commence their computation in the default start State 1.

A TM in $(n,k)$ space is completely specified by its *transition table*. This table tells us what *action* the head should perform when it is in a state $s$ at some tape cell and reads there some symbol $c$. Such an action in turn consists of three aspects: changing to some state (possibly the same one); moving the head either one cell left or one cell right, but never staying still; and writing some symbol to the cell (possibly the same symbol as before). Consequently, each $(n,k)$-space consists of $(2nk)^{nk}$ many different TMs. We number these machines according to Wolfram’s enumeration scheme [12, 13], which is similar to the lexicographical enumeration.
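To make the size of these spaces concrete, here is a short Python sketch with our own helper names: each of the $n \cdot k$ transition-table entries (state, read color) independently chooses one of the $2 n k$ possible actions (next state, written color, move), so the spaces grow very quickly.

```python
from itertools import product

def count_tms(n, k):
    """Number of TMs in (n, k)-space: (2*n*k) actions per table entry,
    chosen independently for each of the n*k entries."""
    return (2 * n * k) ** (n * k)

def all_tms(n, k):
    """Yield every transition table in (n, k)-space -- tiny spaces only."""
    entries = list(product(range(1, n + 1), range(k)))           # (state, color)
    actions = list(product(range(1, n + 1), range(k), (-1, 1)))  # (state', write, move)
    for choice in product(actions, repeat=len(entries)):
        yield dict(zip(entries, choice))
```

Already the $(2,2)$-space contains $(2 \cdot 2 \cdot 2)^{4} = 4096$ machines, and the $(3,2)$-space contains $12^{6} = 2{,}985{,}984$, which is why exhaustive mining of these spaces is feasible only for very small $n$ and $k$.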

Clearly, each TM in $(n,k)$ space is also present in $(n',k)$ space for $n' > n$, by just not using the extra states, since they are “inaccessible” from State 1. Many rules in a space are trivially equivalent in the computational sense up to a simple transformation of the underlying geometry, for example, by relabeling states, by reflection, or by complementation, and are hence identical for all practical purposes. In the literature, machines that have such equivalents are sometimes called *amphicheiral*; we will sometimes refer to them as machine *twins*.

We say that a TM *halts* when the head “falls off” the tape on the right-hand side, in other words, when the head is at the rightmost position and receives an instruction to move right. The tape configuration upon termination of a computation is called the *output*.

We will refer to the input consisting of the first $n$ tape cells colored black on an otherwise white tape as the input $n$ (this is in slight discrepancy with the convention in [14]). In this context, a *function* is a map sending an input $n$ to some output tape configuration. We call the function where the output is always identical to the input the *tape identity* function.
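The input and halting conventions just described can be made concrete with a short Python sketch; the encoding and names (`input_tape`, `run`, `identity_rules`) are our own illustrative choices rather than the paper's notation.

```python
def input_tape(n):
    """Input n: n black (1) cells; the white remainder of the one-way
    infinite tape is materialized lazily during the run."""
    return [1] * n

def run(rules, n, max_steps=10_000):
    """Return the output tape, or None if the TM does not halt in time.

    rules maps (state, color) -> (new_state, written_color, move),
    with move = -1 (left) or +1 (right).
    """
    tape, head, state = input_tape(n), n - 1, 1   # head on the rightmost cell
    for _ in range(max_steps):
        state, tape[head], move = rules[(state, tape[head])]
        head += move
        if head == len(tape):        # fell off the right-hand edge: halt
            return tape
        if head < 0:                 # extend the one-way infinite tape
            tape.insert(0, 0)
            head = 0
    return None

# A machine that always moves right falls off immediately and leaves the
# input untouched: it computes the tape identity function.
identity_rules = {(1, 0): (1, 0, +1), (1, 1): (1, 1, +1)}
```

Conversely, a machine that always moves left never reaches the right-hand edge and therefore never halts, so `run` reports `None` for it.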

By Rice’s theorem, it is in principle undecidable whether two TMs compute the same function. Nonetheless, for $(n,k)$ spaces with $n$ and $k$ small, no universal computation is yet present [15, 16]. In [14], the authors completely classify the TMs in (3,2) space according to the functions they compute, taking pragmatic approaches that may introduce small errors in order to deal with undecidability and unfeasibility issues.

###### 1.2.2. Space-Time Diagrams

As previously mentioned, a central role in this paper is played by the so-called *space-time diagrams*. A space-time diagram for some computation is nothing more than the joint collection of consecutive memory configurations. We have included a picture of the space-time diagrams of a particular TM for inputs 1 to 14 in Figure 1.