ISRN Computational Mathematics
Volume 2012 (2012), Article ID 321372, 15 pages
Physical Portrayal of Computational Complexity
Arto Annila

Department of Physics, Institute of Biotechnology and Department of Biosciences, University of Helsinki, 00014 Helsinki, Finland
Received 3 October 2011; Accepted 3 November 2011
Academic Editor: L. Pan
Copyright © 2012 Arto Annila. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Computational complexity is examined using the principle of increasing entropy. To consider computation as a physical process from an initial instance to the final acceptance is motivated because information requires physical representations and because many natural processes complete in nondeterministic polynomial time (NP). The irreversible process with three or more degrees of freedom is found intractable when, in terms of physics, flows of energy are inseparable from their driving forces. In computational terms, when solving a problem in the class NP, decisions among alternatives will affect subsequently available sets of decisions. Thus the state space of a nondeterministic finite automaton is evolving due to the computation itself, hence it cannot be efficiently contracted using a deterministic finite automaton. Conversely when solving problems in the class P, the set of states does not depend on computational history, hence it can be efficiently contracted to the accepting state by a deterministic sequence of dissipative transformations. Thus it is concluded that the state set of class P is inherently smaller than the state set of class NP. Since the computational time needed to contract a given set is proportional to dissipation, the computational complexity class P is a proper (strict) subset of NP.
1. Introduction

Currently it is unclear whether every problem whose solution can be efficiently checked by a computer can also be efficiently solved by a computer [1, 2]. On one hand, decision problems in the computational complexity class P can be solved efficiently by a deterministic algorithm within a number of steps bounded by a polynomial function of the input's length. An example of a problem in P is that of the shortest path: what is the least-cost one-way path through a given network of cities to the destination? On the other hand, solving problems in class NP efficiently seems to require some nondeterministic parallel machine; yet solutions can be verified as correct in a deterministic manner. An example of an NP-complete problem is that of the traveling salesman: what is the least-cost round-trip path via a given network of cities, visiting each exactly once?
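The contrast between the two examples can be sketched in code. The snippet below is my illustration, not the article's; the four-city graph and its costs are invented. The shortest path is found deterministically in polynomial time by Dijkstra's algorithm, whereas the least-cost round trip is obtained here only by exhausting every ordering of the cities.

```python
# Illustrative sketch (invented instance): shortest path vs. traveling salesman.
import heapq
from itertools import permutations

def shortest_path_cost(graph, src, dst):
    """Dijkstra's algorithm: polynomial in the size of the network (class P)."""
    dist = {src: 0}
    queue = [(0, src)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(queue, (nd, v))
    return float("inf")

def tsp_cost(graph, home):
    """Brute-force round trip visiting each city exactly once: factorial time."""
    cities = [c for c in graph if c != home]
    best = float("inf")
    for order in permutations(cities):
        tour = [home, *order, home]
        best = min(best, sum(graph[a][b] for a, b in zip(tour, tour[1:])))
    return best

# A complete, symmetric toy network of four cities.
graph = {"a": {"b": 1, "c": 4, "d": 7},
         "b": {"a": 1, "c": 2, "d": 5},
         "c": {"a": 4, "b": 2, "d": 1},
         "d": {"a": 7, "b": 5, "c": 1}}
```

On this instance the shortest one-way cost from "a" to "d" is 4, while the round-trip search must examine all orderings of the intermediate cities, a count that grows factorially with the number of cities.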
It appears, although it has not been proven, that the traveling salesman problem and numerous other problems in mathematics, physics, biology, economics, optimization, artificial intelligence, and so forth cannot be solved in a deterministic manner in polynomial time, unlike the shortest path problem and other problems in P. Yet the initial instances of the traveling salesman and the shortest path problem seem to differ at most polynomially from one another. Could it therefore be that there are, after all, algorithms for the NP problems as efficient as those for the P problems, and these simply have not been found yet?
In this study insight into the P versus NP question is obtained by considering computation as a physical process [5–8] that follows the 2nd law of thermodynamics [9–11]. The natural law was recently written as an equation of motion that complies with the principle of least action and Newton's second law [12–15]. The ubiquitous imperative to consume free energy, known also as the principle of increasing entropy, describes a system in evolution toward more probable states in least time. Here it is of particular interest that evolution is in general a nondeterministic process, as is class NP computation. Furthermore, the end point of evolution, that is, the thermodynamically stable stationary state itself, can be efficiently validated as the free energy minimum in a similar manner as the solution to a computation can be verified as accepting.
The recent formulation of the 2nd law as an equation of motion based on the statistical mechanics of open systems has rationalized diverse evolutionary courses that result in skewed distributions whose cumulative curves are open-form integrals [16–26]. Several of these natural processes, for example, protein folding, which directs down along intractable trajectories to diminish free energy, have been recognized as among the hardest problems in class NP. Although many other NP-complete problems do not seem to concern physical reality, the concept of NP-completeness encourages one to consider computation as an energy transduction process that follows the 2nd law of thermodynamics. The physical portrayal of computational complexity allows one to use the fundamental theorems concerning conserved currents [31, 32] and gradient systems [27, 33] in the classification of computational complexity. Specifically, it is found that circuit currents remain tractable during class P computation because the accessible states of a computer do not depend on the processing steps themselves. Thus the class P state set can be efficiently contracted using a deterministic finite automaton to the accepting set along the dissipative path without additional degrees of freedom. Physically speaking, the boundary conditions remain fixed. In contrast, the circuit currents are intractable during class NP computation because each step of the problem-solving process depends on the computational history and affects future decisions. Thus the contraction of states along alternative but interdependent computational paths to the accepting set remains a nondeterministic process. Physically speaking, the boundary conditions are changing due to the process itself.
The adopted physical perspective on computation is consistent with the standpoint that no information exists without its physical representation [5, 6] and that information processing itself is governed by the 2nd law. The connection between computational complexity and the natural law also yields insight into the abundance of natural problems in class NP. In the following, the description of computation as an evolutionary process is first outlined and then developed into mathematical form to make the distinction between the computations that belong to classes P and NP.
2. Computation as a Physical Process
According to the 2nd law of thermodynamics a computational circuit, just as any other physical system, evolves by diminishing energy density differences within the system and relative to its surroundings. The consumption of free energy is generally referred to as evolution, where flows of energy naturally select [36, 37] the steepest directional descents in the free energy landscape to abolish the energy density differences in least time. At first sight it may appear that the physical representations of computational states, in particular as they are realized by modern computers, would be too insignificant to play any role in computational complexity. However, since no representation of information can escape the laws of physics, computation too must ultimately comply with them. A clocked circuit, as a physical realization of a finite automaton, is an energy transduction network. Likewise, Boolean components and shift register nodes are components of a thermodynamic network. In accordance with the network notion, the P versus NP question can be phrased in terms of graphs. In this study it will be shown that the computations in the two computational complexity classes differ from each other in thermodynamic terms. It follows that no algorithm can abolish this profound distinction.
Computation is, according to the principle of increasing entropy, a probable physical process. The sequence of computational steps begins when an energy density difference, representing an input, appears at the interface between the computational system and its surroundings. Thus the input, by its physical representation, places the automaton at the initial state of the computation, that is, physically speaking, of evolution. A specific input string of alphabetic symbols is presented to the circuit by a particular physical influx, for example, as a train of voltages. Importantly, no instance is without physical realization.
The algorithmic execution is an irreversible thermalization process where the energy absorbed at the input interface begins to disperse within the circuit. Eventually, after a series of dissipative transformations from one state to another, more probable one, the computational system arrives at a thermodynamic steady state, the final acceptance, by emitting an output, for example, writing a solution on a tape. No solution can be produced without physical representation. Although it may seem secondary, the condition of termination must ultimately be the physical free energy minimum state; otherwise, there would still be free energy to drive the computational process further.
Physically speaking, the most effective problem solving is about finding the path of least action, which is equivalent to the maximal energy transduction from the initial instance down along the most voluminous gradients of energy to the final acceptance. However, the path for the optimal conductance, that is, for the most rapid reduction of free energy, is tricky to find in a circuit with three or more degrees of freedom because flows (currents) and forces (voltages) are inseparable. In contrast, when the process has no additional degrees of freedom in dissipation, the minimal resistance path corresponding to the solution can be found in a deterministic manner.
In the general case the computational path is intractable because the state space keeps changing due to the search itself. A particular decision to move from the present state to another depends on the past decisions and will also affect accessible states in the future. For example, when the traveling salesman decides on the next destination, the decision will depend on the past path, except at the very end, when there is no choice but to return home. The path is directed because revisits are not allowed (or are eventually restricted by costs). This class, referred to as NP, contains intractable problems that describe irreversible (directional) processes (Figure 1) with additional (three or more) degrees of freedom.
In the special case the computational path is tractable as decisions are independent of computational history. For example, when searching for the shortest path through a network, the entire invariant state space is, at least in principle, visible from the initial instance, that is, the problem is deterministic. A decision at any node is independent of the traversed paths. This class, referred to as P, contains tractable problems that describe irreversible processes without additional degrees of freedom. Moreover, when the search among alternatives is not associated with any costs, the process is reversible (nondirectional), that is, indifferent to the total conductance from the input node to the output node.
Finally, it is of interest to note the particular case when a physical system has no mechanism to proceed from one state to any other by transforming absorbed quanta into emission. Since the dispersion relations of physical systems are revealed only when interacting with them [39, 40], it is impossible to know for a given circuit and finite influx, a priori, without interacting, whether the system will arrive at the free energy minimum state, finishing with emission, or remain at an excited state without output forever. This is the physical rationale of the halting problem. It is impossible to decide for a given program and finite input, a priori, without processing, whether the execution will arrive at the accepting state, finishing with output, or remain in a running state without output forever. These processes that acquire but do not yield relate to problems that cannot be decided. They are beyond class NP and will not be examined further. Here the focus is on the principal difference between the truly tractable and the inherently intractable problems.
3. Self-Similar Circuits
The physical portrayal of problem processing according to the principle of increasing entropy is based on the hierarchical and holistic formalism. It recognizes that circuits are self-similar in energy transduction (Figure 2) [21, 44, 45]. A circuit is composed of circuits or, equivalently, there are networks within nodes of networks. The most elementary physical entity is the single quantum of action [15, 46].
Each node of a transduction network is a physical entity associated with energy. A set of identical nodes, representing, for example, a memory register, is associated, following Gibbs, with a density-in-energy defined relative to the average energy density of the system. The self-similar formalism assigns to a set of indistinguishable nodes a probability measure [12, 46] in a recursive manner, so that each k-node is a product of its embedded n-nodes, each distinct type available in its own numbers. The combinatorial configurations of identical n-nodes in the k-node are counted by the corresponding degeneracy. Likewise, the identical k-nodes are indistinguishable from each other in the network. The internal differences and the external fluxes denote the quanta of (interaction) energy.
The computational system proceeds from one state to another, more probable one, when energy flows down along gradients through the network from one node to another with concurrent dissipation to the surroundings. For example, a j-node can be driven from its present state, defined by its potential, to another state by an energy influx from a preceding k-node at a higher potential and by an energy efflux to the surroundings (Figure 2). Subsequently the j-node may transform anew from its current high-energy state to a stationary state by yielding an efflux to a connected i-node at a lower potential, coupled with emission to the surroundings. Any two states are distinguished from each other as different only when the transformation from one to the other is dissipative [12–14]. When thermalization has abolished all density differences, the irreversible process has arrived at a dynamic steady state where reversible, to-and-fro flows of energy (currents) are conserved and, on average, the densities remain invariant.
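The cascade described above can be caricatured numerically. The following sketch is my illustration under invented parameters (the conductance, the loss fraction, and the initial potentials are not from the article): energy flows down the potential differences along a chain of nodes, a fixed fraction of each transfer is dissipated to the surroundings, and the process ends when the differences, and hence the net flows, have vanished.

```python
# Toy relaxation of a k -> j -> i chain toward the thermalized steady state.
def relax(potentials, conductance=0.25, loss=0.1, steps=2000):
    """Level the potential differences along a chain of nodes.

    Each step, the flow on an edge is proportional to the potential
    difference across it; a fraction `loss` of every transfer is
    dissipated to the surroundings instead of reaching the next node.
    """
    u = list(potentials)
    for _ in range(steps):
        # Flows computed from the current state of the whole chain.
        flows = [conductance * (u[n] - u[n + 1]) for n in range(len(u) - 1)]
        for n, f in enumerate(flows):
            u[n] -= f                    # efflux from the higher node
            u[n + 1] += f * (1 - loss)   # influx minus dissipation
    return u

final = relax([9.0, 3.0, 0.0])  # k-, j-, i-node potentials
```

After many steps the retained potentials have leveled off (no net flows remain), and the total retained energy is below the initial total because part of every transfer was emitted to the surroundings.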
It is convenient to measure the state space of computation by associating each j-system with a logarithmic probability, in analogy to (1), involving the potential differences between the j-node and all other connected k-nodes in degenerate (equal-energy) numbers. Stirling's approximation implies that kBT is a sufficient statistic for the average energy, so that the system may accept (or discard) a quantum without a marked change in its total energy content, that is, in the free energy. Otherwise a high influx, such as a voltage spike from the preceding k-node or heat from the surroundings, might "damage" the j-system, for example, "burn" a memory register, by forcing the embedded n-nodes into evolution (Figure 2). Such a nonstatistical phenomenon may manifest itself even as chaotic motion, but this is no obstacle for the adopted formalism, because then the same self-similar equations are used at a lower level of hierarchy to describe processes involving sufficiently statistical systems.
According to the scale-independent formalism the network is a system in the same way as its constituent nodes are systems themselves. Any two networks, just as any two nodes, are distinguishable from each other when there is some influx sequence of energy such that exactly one of the two systems is transforming. In computational terms, any two states of a finite automaton are distinguishable when there is some input string such that exactly one of the two transition functions is accepting. Those nodes that are distinguishable from each other by mutual density differences are nonequivalent. These distinct physical entities of a circuit are represented by disjoint sets and indexed separately in the total additive measure of the entire circuit defined in (3). The affine union of disjoint sets is depicted as a graph that is merged from subgraphs by connections.
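The quoted distinguishability criterion is the standard one from automata theory, and it can be checked mechanically. The sketch below is my illustration, applying the classic table-filling method to an invented three-state machine: two states are marked distinguishable exactly when some input string drives one, but not the other, to an accepting state.

```python
# Table-filling test for DFA state distinguishability (standard method).
from itertools import product

def distinguishable(delta, accepting, alphabet):
    """Return the set of distinguishable state pairs of a DFA.

    delta: dict mapping (state, symbol) -> state
    """
    states = {s for s, _ in delta}
    # Base case: a pair differing in acceptance is distinguishable by "".
    marked = {frozenset((p, q)) for p, q in product(states, states)
              if p != q and (p in accepting) != (q in accepting)}
    changed = True
    while changed:
        changed = False
        for p, q in product(states, states):
            pair = frozenset((p, q))
            if p == q or pair in marked:
                continue
            # If some symbol leads to an already-distinguished pair, mark.
            for a in alphabet:
                if frozenset((delta[p, a], delta[q, a])) in marked:
                    marked.add(pair)
                    changed = True
                    break
    return marked

# Invented example machine over the alphabet {0, 1} with accepting state C.
delta = {("A", "0"): "B", ("A", "1"): "C",
         ("B", "0"): "B", ("B", "1"): "C",
         ("C", "0"): "C", ("C", "1"): "C"}
pairs = distinguishable(delta, accepting={"C"}, alphabet="01")
```

In this example the states "A" and "B" are never marked, so no input string separates them and a smaller automaton could merge them; both are distinguishable from the accepting state "C".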
In the general case the calculation of the measure (3) implies a complicated energy transduction network, indexing numerous nodes as well as the differences between them and with respect to the surroundings. In a sufficiently statistical system the changes in occupancies balance, since the influx to the j-node results from the effluxes from the k-nodes (or vice versa). The flows along the jk-edges are proportional to the free energy by an invariant conductance (4). This form ensures continuity, so that, when a particular jk-flow is increasing the occupancy of the j-node, the very same flow is decreasing the occupancies at the k-nodes (or vice versa). Importantly, owing to the other affine connections, the jk-transformation will affect the occupancies of other nodes, which in turn affect the driving forces. Consequently, when there are, among three or more interdependent nodes, alternative paths of conduction, the problem of finding the optimal path becomes intractable [12, 14]. As long as free energy remains, the gradient system with three or more degrees of freedom does not enclose integrable (tractable) orbits.
Conversely, in the special case when the reduction of a difference does not affect other differences, that is, there are no additional degrees of freedom, the changes in occupancies remain tractable. The conservation of energy requires that, when there are only two degrees of freedom, the flow from one node will inevitably arrive exclusively at the other node. Therefore it is not necessary to explore all these integrable paths to their very ends. The outcome can be predicted and the particular path in question can be found efficiently. Moreover, when there are no differences, there are no net variations in occupancies, that is, no net flows either. These conserved, reversible flows are statistically predictable even in a complicated but stationary network with many degrees of freedom. When the currents are conserved, the network is idle, that is, not transforming. In accordance with Noether's theorem, the Poincaré-Bendixson theorem also holds for the stationary system [27, 33].
The overall transduction processes, both intractable and tractable, direct toward more probable states, that is, toward increasing probability. However, when a natural process with three or more degrees of freedom is examined in a deterministic manner, it is necessary to explore all conceivable transformation paths to their ends. The paths cannot be integrated in closed form (predicted) because each decision will affect the choice of future states. The set of conceivable states that is generated by decisions at consecutive branching points of the computation can be enormous.
The physical portrayal of computational complexity reveals that it is the noninvariant, evolving state space of class NP computation that prevents completing the contraction by dissipative transformations in a deterministic manner in polynomial time. Since the dissipated flow of energy during the computation relates directly to the irreversible flow of time, the class NP completion time is inherently longer than that of class P. Thus it is concluded that P is a proper subset of NP.
4. Computation as a Probable Process
When computation is described as a probable physical process, the additive logarithmic probability measure is increasing as the dissipative transformations level the energy differences. When the definitions in (4) are used, the change is found to be nonnegative (5), since the squared flow terms are necessarily nonnegative and the absolute temperature is positive.
The definition of entropy yields from (5) the principle of increasing entropy. Equation (5) says that entropy is increasing when free energy is decreasing, in agreement with the thermodynamic maxim, the Gouy-Stodola theorem [49, 50], and the mathematical foundations of thermodynamics [51–53]. In other words, as long as the process generator is nonzero, there is free energy for the computation to commence from the initial state toward the accepting state, where the output will thermalize the circuit. Admittedly, dissipation is often small; however, it is not negligible but necessary for any computation to advance and to yield an output [5, 6, 34].
During the computational process the state space accessible by dissipative transformations is contracting toward the free energy minimum state, where no further changes of state are possible. Consistently, when the probability is increasing due to the changing occupancies, the change in the process generator is found to decrease almost everywhere (6), by the definition in (4), because the squared terms are necessarily nonnegative for any spatially confined energy density. Equations (5) and (6) show that during the computation the state space is contracting toward the stationary state.
The free energy minimum partition corresponds to the solution. It is a stable state of the computational process in its surroundings because any variation below (above) the steady-state occupancy will reintroduce free energy that drives the system back to the stationary state by invoking a returning flow. Explicitly, the maximum-entropy system is Lyapunov stable [27, 33] according to the definitions available from (5) and (6). The dynamic steady state is maintained by frequent to-and-fro flows between the system's constituents and the surroundings. These nondissipative processes do not amount to any change in P.
In general the trajectories of natural processes cannot be solved analytically because the flows and their driving forces are inseparable in (5) at any j-node connected to three or more other densities. Nonetheless, the inherently intractable trajectories can be mapped by simulations where the temperature, forces, and flows are updated after each change of state. The occupancies keep changing due to the changing driving forces that, in turn, are affected by the changes in occupancies. In terms of physics the non-Hamiltonian system is without invariants of motion, and Liouville's theorem is not satisfied because the open dissipative system is subject to an influx (efflux) from (to) its surroundings. The nonconserved gradient system is without norm. Thus the evolving (cf. Bayesian) distribution of probabilities cannot be normalized. The dissipative equation of motion for the class NP of irreversible processes cannot be integrated in a closed form or transformed to a time-independent frame to obtain a solution efficiently.
According to the maximum entropy production principle [54–66], energy differences will be reduced most effectively when entropy increases most rapidly, that is, when the most voluminous currents direct along the steepest descents of free energy. However, when choosing at every instance the particular descent that appears steepest, there is no guarantee that the optimal overall path will be found, because the transformations themselves will affect the future states between the initial instance and the final acceptance. To be sure about the optimal trajectory takes time, that is, dissipation, because the deterministic algorithmic execution of the class NP problem will have to address, by conceivable transformations, the entire power set of states, one member for each distinct path of energy dispersal.
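The point that a locally steepest choice need not find the globally optimal trajectory can be seen on a toy tour. The code below is my illustration with an invented distance table: a greedy nearest-neighbor salesman, taking the cheapest next leg at every city, pays 17 on this instance, while exhaustive search over all orderings finds a tour of cost 10.

```python
# Greedy "steepest descent" vs. exhaustive search on an invented TSP instance.
from itertools import permutations

D = {("A", "B"): 1, ("A", "C"): 2, ("A", "D"): 10,
     ("B", "C"): 2, ("B", "D"): 3, ("C", "D"): 4}

def d(x, y):
    """Symmetric lookup into the distance table."""
    return D.get((x, y)) or D[(y, x)]

def greedy_tour(home, cities):
    """Always take the locally cheapest next leg (nearest neighbor)."""
    tour, left = [home], set(cities)
    while left:
        nxt = min(left, key=lambda c: d(tour[-1], c))
        tour.append(nxt)
        left.remove(nxt)
    return tour + [home]

def cost(tour):
    return sum(d(a, b) for a, b in zip(tour, tour[1:]))

def optimal_cost(home, cities):
    """Exhaustive search over every ordering of the cities."""
    return min(cost([home, *p, home]) for p in permutations(cities))

g = cost(greedy_tour("A", {"B", "C", "D"}))
opt = optimal_cost("A", ["B", "C", "D"])
```

The greedy route A-B-C-D-A ends with the expensive leg back home, precisely because the early cheap choices reshaped what remained available later.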
In the special case when the currents are separable from the driving forces, the energy transduction network will remain invariant. In terms of physics the Hamiltonian system has invariants of motion and Liouville’s theorem is satisfied. The deterministic computation as a tractable energy transduction process will solve the problem in question because the dissipative steps are without additional degrees of freedom. The conceivable courses can be integrated (predicted). Hence the solution can be obtained efficiently, for example, by an algorithm that follows the steepest descent and does not waste time in wandering along paths that can be predicted to be futile.
5. Manifold in Motion
Further insight into the distinction between computations in the classes P and NP is obtained when the computation as a physical process is described in terms of an evolving energy landscape [67–69]. To this end the discrete differences Δ, which properly denote the transforming forces and quantized flows, are replaced by differentials ∂ of continuous variables. A spatial gradient is a convenient way to relate a density at one continuum coordinate with another one displaced by dissipation [13, 14]. When the j-system evolves down along the scalar potential gradient in the field, the conservation of energy requires a corresponding transforming current. The dissipation is an efflux of photons at the speed of light to the surrounding medium (or vice versa).
The continuum equation of motion corresponding to (5) is obtained from (3) by differentiating and using the chain rule, where directional derivatives span an affine manifold of energy densities (Figure 3). The total potential is decomposed into orthogonal scalar and vector parts. All distinguishable densities and flows are indexed. The evolving energy landscape is concisely given by the total change in kinetic energy (8) [13, 14], where the transforming flows with three or more degrees of freedom are indexed separately from the flow without additional degrees of freedom. In fact the derivative should be denoted as inexact (đ), because in general the entered state depends on the past path.
The equation for the flows of energy can also be obtained from the familiar Newton's 2nd law for the change in momentum by multiplying with velocities (9). The gradient is again decomposed into spatial and temporal parts, with the same sign convention as above. Since momenta are at all times tangential to the manifold, Newton's 2nd law (9) requires that the corresponding flow at any moment is proportional to the driving force, in accordance with the continuity across the jk-edges between the nodes of the network (4). The linear relationship in (10), which is reminiscent of the Onsager reciprocal relations, is consistent with the previous notion that the densities in energy (the nodes) are sufficiently statistical systems. Otherwise, a high current would force the underlying conducting system (the jk-edge), parameterized by its conductance coefficient, to evolve. In such a case the channel's conductance would depend on the transmitted bits.
A particular flow funnels by dissipative transformations down along the steepest descent, that is, along the shortest path known as the geodesic [51, 73, 74]. At any given moment the positive definite resistance in (10) identifies with the mass that, as the metric tensor, defines the geometry of the free energy landscape (cf. a Lorentzian manifold). Formally the path can be denoted as an integral; however, in the general case of the evolving non-Euclidean landscape it cannot be integrated in a closed form. The curved landscape is shrinking (or growing) because the surroundings are draining it by a net efflux (or supplying it with a net influx) of radiation and/or a material flow. When the forces and flows are inseparable, the noninvariant landscape is, at any given locus and moment, a result of its evolutionary history. The rate of net emission (or net absorption) declines as the system steps, quantum by quantum, toward the free energy minimum, which is the stationary state in the respective surroundings. Only in the special case, when the forces and flows are separable, can the trajectories be integrated in a closed form.
Finally, when all density differences have vanished, the manifold has flattened to the stationary state. The state space has contracted to a single stationary state where no free energy remains. In agreement with Noether's theorem the currents are conserved and tractable throughout the invariant manifold. Also, in accordance with Poincaré's recurrence theorem, the steady-state reversible dynamics are exclusively on bound and (statistically) predictable orbits. Moreover, the conserved currents bring about no net changes in the total energy content of the system. Hence (9) reduces to a stationary balance which implies, in accordance with the virial theorem, that the components of kinetic energy 2K match the components of the potential everywhere.
According to the geometric description of computational processes, the flattening (evolving) non-Euclidean landscape represents the state space of class NP computation, whereas the flat Euclidean manifold represents the state space of class P computation. The geodesics that span the class NP landscape are arcs, whereas those that span the class P manifold are straight lines. According to (8) the class NP state space is, due to its three or more degrees of freedom, larger in dissipation than the class P state space without additional degrees of freedom. In other words, class NP is larger than P because the curved manifold cannot be embedded in the plane. The measure of the non-Euclidean landscape is simply larger, by the additional degrees of freedom in dissipation, than the measure of the Euclidean manifold.
The argument for the failure to map the larger manifold one-to-one onto the smaller manifold is familiar from the pigeonhole principle applied to manifolds. The quanta that are dissipated during evolution from diverse density loci of the curved, evolving landscape are not mapped anywhere on the flat, invariant landscape. Thus it is concluded that P is a proper subset of NP.
6. Intractability in the Degrees of Freedom
The transduction path between two nodes can be represented by only one edge; hence the currents (4) between three or more densities are interdependent. The degrees of freedom are fewer than the number of densities by one, because it takes at least two densities to have a difference. In the general case of three or more densities, there are alternative paths for the currents from the initial state via alternative states toward the accepting state. The intractable evolutionary courses are familiar from the n-body (n ≥ 3) problems [76, 77]. Accordingly, the satisfiability problem of a Boolean expression (n-SAT) belongs to class NP when there are three or more literals (n ≥ 3) per clause. In the special case (n = 2), the energy dispersal process is deterministic as there are no alternative dissipative paths for the current. When only one path is conducting, the problem of the maximal conduction is 1-separable and tractable. The two-body problem does not present a challenge. Accordingly, 2-SAT is deterministic and 1-SAT is trivial, essentially only a statement.
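The asymmetry between 2-SAT and n-SAT for n ≥ 3 cited here is standard complexity theory, and the tractable half can be sketched compactly. The code below is my illustration: a clause (a or b) yields the implications (not a implies b) and (not b implies a), and the formula is unsatisfiable exactly when some literal and its negation are mutually reachable in the implication graph, a polynomial-time check; no analogous reduction is known for three or more literals per clause.

```python
# Polynomial-time 2-SAT decision via the implication graph (standard method).
from collections import defaultdict

def satisfiable_2sat(clauses):
    """clauses: list of pairs of nonzero ints; -x encodes the negation of x."""
    graph = defaultdict(set)
    for a, b in clauses:
        graph[-a].add(b)   # not a implies b
        graph[-b].add(a)   # not b implies a

    def reaches(src, dst):
        """Depth-first reachability in the implication graph."""
        seen, stack = {src}, [src]
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            for v in graph[u] - seen:
                seen.add(v)
                stack.append(v)
        return False

    variables = {abs(lit) for clause in clauses for lit in clause}
    # Unsatisfiable iff some x and -x lie on a common implication cycle.
    return all(not (reaches(x, -x) and reaches(-x, x)) for x in variables)
```

For example, the formula (x1 or x2) and (not x1 or x2) is satisfiable, whereas adding all four sign combinations of two variables forces x1 and its negation onto one cycle and is not.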
For example, the problem of maximizing the shortest path by two or more interdicts is intractable. When the first interdict is placed, flows will be redirected and, in turn, affect the decision to place the second interdict. Similarly the search history of the traveling salesman for the optimal round-trip path is intractable. A decision to visit a particular city will irreversibly narrow the available state space by excluding that city from the subsequent choices. Thus, at any particular node one cannot consider decisions as if not knowing the specific search history that led to that node. When each decision opens a new set of future decisions, the computational state space of class NP is a tedious power set of deterministic decisions. On the other hand, when optimizing the shortest path, a choice of a particular path will not affect, in any way, the future exploration of other paths. At any particular node one may consider decisions irrespective of the search history. In the deterministic case it is not necessary to explore all conceivable choices because the trajectories are tractable (predictable). Likewise, the problem of maximizing the shortest path by a single interdict can be solved efficiently. Any particular decision to place the interdict does not affect future decisions because there are no more interdicts to be placed. When the state space is not affected by the problem-solving process itself, at most a polynomial array of invariant circuits, that is, deterministic finite automata, will compute class P problems.
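The single-interdict case can be illustrated directly. The sketch below is mine, on an invented unweighted network: because no removal influences any later removal, it suffices to try each edge independently and recompute the shortest path, a polynomial procedure.

```python
# Single-interdict shortest-path maximization on an invented toy network.
from collections import deque

EDGES = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "c"), ("c", "t")]

def shortest(edges, src, dst):
    """Breadth-first search: hop count of the shortest src-dst path."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist.get(dst, float("inf"))

def best_single_interdict(edges, src, dst):
    # Each candidate removal is independent of the others, so a plain
    # loop over the edges suffices: no removal reshapes later choices.
    return max(shortest([e for e in edges if e != cut], src, dst)
               for cut in edges)

best = best_single_interdict(EDGES, "s", "t")
```

Here the undisturbed shortest path s-a-t has two hops; cutting either of its edges forces the three-hop detour s-b-c-t, so the best single interdict lengthens the shortest path from 2 to 3.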
The P versus NP question is not only a fundamental but also a practical problem, for no computational machinery exists without a physical representation. A particular input instance is imposed on the computational circuit by the surroundings, and a particular output is accepted as a solution by the surroundings. The communication between the automaton and its surroundings relates to information processing, which was understood early on to be equivalent to the (impedance) matching of circuits for optimal energy transmission [78, 79]. When the matching of a circuit affects the matching of two or more connected circuits, the total matching of the interdependent circuits for the optimal overall transduction is intractable. Although in practice the iterative process may converge rapidly in a nondeterministic manner, the conceivable set of circuit states is a power set of the tuning operations. Conversely, when the matching does not involve additional degrees of freedom, the tuning for optimal transduction is tractable.
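The classical matching result alluded to here is the maximum power transfer theorem: a load extracts the most power when its impedance is the complex conjugate of the source impedance. A minimal numeric check with illustrative values (not from the paper):

```python
def delivered_power(v_source, z_source, z_load):
    """Average power dissipated in the load of a single source-load loop."""
    current = v_source / (z_source + z_load)
    return 0.5 * abs(current) ** 2 * z_load.real

z_s = 50 + 10j                     # assumed source impedance (ohms)
candidates = [50 - 10j, 50 + 10j, 75 + 0j, 25 - 10j]

# the conjugate match 50 - 10j should win
best = max(candidates, key=lambda z: delivered_power(10.0, z_s, z))
```

For two or more coupled stages, retuning one stage shifts the optimum of its neighbors, which is the interdependence that makes the overall matching problem intractable.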
In summary, the class NP problem-solving process is inherently nondeterministic because the contraction process will itself affect the set of future states accessible from a particular instance. The course toward acceptance cannot be accelerated by prediction; instead, the state space must be explored. On the other hand, when the dissipative steps between the input and output operations have no additional degrees of freedom, the search for the class P problem solution will itself not affect the set of states accessible at any instance. The invariant state set can be contracted efficiently by prediction rather than by exploring all conceivable paths. Therefore, the completion time of the deterministic class P computation is shorter than that of NP. Thus it is concluded that P is a proper subset of NP.
7. State Spaces of Automata
The computational complexity classification into P and NP by the differing degrees of freedom in dissipation relates to algorithmic execution times, which are proportional to circuit sizes. A Boolean circuit that simulates a Turing machine is commonly represented as a (directed, acyclic) graph: a tree with gates (functions) assigned to its vertices (nodes) (Figure 2).
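A Boolean circuit given as such a DAG can be evaluated by visiting its gates in topological order, as the minimal sketch below illustrates (the gate list and input names are hypothetical):

```python
def evaluate_circuit(gates, inputs):
    """Evaluate a Boolean circuit given as a DAG.
    gates: list of (name, op, operand_names) in topological order;
    inputs: dict mapping input names to booleans."""
    values = dict(inputs)
    ops = {"AND": all, "OR": any, "NOT": lambda xs: not xs[0]}
    for name, op, args in gates:          # each gate sees settled operands
        values[name] = ops[op]([values[a] for a in args])
    return values

# illustrative circuit: out = (x AND y) OR (NOT z)
circuit = [("g1", "AND", ["x", "y"]),
           ("g2", "NOT", ["z"]),
           ("out", "OR", ["g1", "g2"])]
result = evaluate_circuit(circuit, {"x": True, "y": False, "z": False})
```

One pass over the gates suffices because the acyclic structure guarantees each operand is computed before it is consumed.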
The class NP problems are represented by circuits where forces (voltages) are inseparable from currents. Since there are no invariants of motion, the ceteris paribus assumption does not hold when solving the class NP problems. Consistently, no deterministic algorithms are available for the class of nonconserved flow problems; instead, for example, brute-force optimization, simulated annealing, and dynamic programming are employed.
The class NP problems can be considered to be computed by a nondeterministic Turing machine (NTM), where for each pair of state and input symbol there may be several possible states to be accessed by a subsequent transition. The NTM 5-tuple (Φ, Δ, Λ, ϕ1, ϕss) consists of a finite set of states Φ, a finite set of input symbols Δ including the blank, an initial state ϕ1, a set of accepting (stationary) states ϕss, and a transition function Λ that assigns to each state-symbol pair a set of alternatives, each comprising a next state, a symbol to write, and a left or right shift of the input tape. Since a Turing machine has an unlimited amount of storage space for computations, and eventually an infinite input as well, such a machine cannot be realized. Therefore, it is better motivated to consider computational complexity by the physical principle in the context of a finite state machine, without, however, compromising the conclusions. For example, a read-only, right-moving Turing machine is equivalent to a nondeterministic finite automaton (NFA), where for each pair of state and input symbol there may be several possible states to be accessed by a subsequent transition. The NFA 5-tuple (Φ, Δ, Λ, ϕ1, ϕss) consists of a finite set of states Φ, a finite set of input symbols Δ, an initial state ϕ1, a set of accepting (stationary) states ϕss, and a transition function Λ: Φ × Δ → P(Φ), where P(Φ) denotes the power set of Φ. A circuit for the nondeterministic computation can also be constructed from an array of deterministic finite automata (DFA). Each DFA is a finite state machine where for each pair of state and input symbol there is one and only one transition to the next state. The DFA 5-tuple (Φ, Δ, Λ, ϕ1, ϕss) consists of a finite set of states Φ, a finite alphabet Δ, an initial state ϕ1, a set of accepting states ϕss, and a transition function Λ: Φ × Δ → Φ.
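The standard way to contract an NFA into a DFA is the subset construction, in which each DFA state is a set of NFA states; in the worst case the construction needs the full power set, which is the blow-up referred to in the text. A minimal sketch with illustrative state names:

```python
from itertools import chain

def nfa_to_dfa(states, alphabet, delta, start, accepting):
    """Subset construction: every DFA state is a frozenset of NFA states,
    so up to 2^|states| DFA states may be needed."""
    start_set = frozenset([start])
    dfa_delta, frontier, seen = {}, [start_set], {start_set}
    while frontier:
        current = frontier.pop()
        for sym in alphabet:
            nxt = frozenset(chain.from_iterable(
                delta.get((q, sym), ()) for q in current))
            dfa_delta[(current, sym)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    dfa_accepting = {s for s in seen if s & accepting}
    return seen, dfa_delta, start_set, dfa_accepting

# NFA accepting strings over {0,1} whose second-to-last symbol is 1
delta = {("a", "0"): {"a"}, ("a", "1"): {"a", "b"},
         ("b", "0"): {"c"}, ("b", "1"): {"c"}}
dfa_states, dfa_delta, q0, acc = nfa_to_dfa(
    {"a", "b", "c"}, "01", delta, "a", {"c"})
```

This contraction is efficient only when the reachable subsets stay few; when every subset is reachable, the deterministic circuit is exponentially larger than the nondeterministic one.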
In the general case, when the forces are inseparable from the flows, the execution time of the DFA array grows super-polynomially as a function of the input length n, for example, as 2ⁿ. For example, when maximizing the shortest path by two or more interdicts, any two alternative choices will give rise to two circuits that differ from each other as much as the currents of the two DFAs differ from each other. These two sets are nonequivalent due to the difference in dissipation, and one cannot be reduced to the other. Accordingly, the circuit for the NFA is adequately constructed from the entire power set of distinct DFAs to cover the entire conceivable set of states of the nondeterministic computation (Figure 4). The union of DFAs is nonreducible; that is, each DFA is distinguished from all other DFAs by its distinct transition function.
The class P problems are represented by circuits where forces are separable from currents. When the proposed questions do not depend on previous decisions (answers), the problem can be computed efficiently by a DFA. Consistently, in the class of flow conservation problems many deterministic methods deliver the solution corresponding to the maximum flow in polynomial time. For example, during the search for the maximally conducting path through the network, currents will disperse from the input node to diverse alternative nodes, but only the flow along the steepest descent will arrive at the output node and establish the single most voluminous flow. The other paths of energy dispersal will terminate at dead ends and will not contribute to or affect the maximum flow at all. Importantly, on an invariant landscape these inferior paths do not have to be followed to their very ends, as is exemplified by Dijkstra's algorithm. The search terminates at the accepting state, whereas other paths end up in nil states; these particular sequences of states have "died." The shortest path problem can be presented by a single DFA because the nonaccepting dead states that keep going to themselves belong to ∅, the empty set of states. However, as has been accurately pointed out, technically this automaton is a nondeterministic finite automaton, which reflects the understanding that the single flow without additional degrees of freedom is the special deterministic subclass of the generally nondeterministic class. Likewise, the special case of maximizing the shortest path by a single interdict is deterministic, in contrast to the general case of two or more interdicts. The special 1-separable problem can be represented by a linear set of distinct circuits, in contrast to the general inseparable problem that requires a power set of distinct circuits.
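Dijkstra's algorithm exemplifies how, on an invariant landscape, inferior paths can be abandoned early: once a node has been reached at a lower cost, any later, costlier arrival is discarded without further exploration. A minimal sketch with illustrative graph values:

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path on an invariant landscape: superseded partial paths
    are dropped as soon as a cheaper arrival at the same node is known."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue                      # a dead, superseded path
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")

# illustrative weighted graph: adjacency lists of (neighbor, weight)
graph = {"s": [("a", 1), ("b", 4)],
         "a": [("b", 2), ("t", 6)],
         "b": [("t", 3)]}
```

The pruning is sound precisely because a choice at one node never changes the costs seen at another, i.e., the landscape does not evolve during the search.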
Accordingly, the automaton for the special cases of deterministic problems is adequately constructed from at most a polynomial set of distinct DFAs, and the corresponding deterministic computation is completed in polynomial time.
Since the varying state space of class NP is, due to its additional degrees of freedom, larger than the invariant state space of class P, it is concluded that P is a proper subset of NP.
8. The Measures of States
To measure the difference between the classes P and NP, the thermodynamic formalism of computation will be transcribed into mathematical notation. Consistently with the reasoning presented in Sections 2–7, the computational complexity class NP will be distinguished from P by measuring the difference in dissipative computation due to the difference in degrees of freedom. Moreover, since the computation does not advance by nondissipative (reversible) transitions, these exchanges of quanta do not affect the measure.
To maintain a connection to practicalities, it is worth noting that tractable problems are often idealizations of intractable natural processes. For example, when determining the shortest path for a long-haul trucker to take through a network of cities to the destination, it is implicitly assumed that, when the computed optimal path is actually taken, the traffic itself would not congest the current and cause a need for rerouting and finding a new, best possible route under the changing circumstances.
The state space of a finite energy system is represented by the elements ϕ of the set Φ. Transformations from one state to another are represented by the elements λ, referred to as process generators, of the set Λ. The computation is a series of transformations along a piecewise continuous path in the state space. According to the 2nd law, the paths of energy dispersal that span the affine manifold M are shortening until the free energy minimum state has been attained. By then the state space has contracted during the transformation process to the accepting state.
Definition 1. A system is a pair (Φ, Λ), with Φ a set whose elements ϕ are called states and Λ a set whose elements λ are called process generators, together with two functions. The first function assigns to each λ a transformation whose domain and range are nonempty subsets of Φ, such that for each ϕ in Φ the condition of accessibility holds: (i) Λϕ, the entire set of states accessible from ϕ, equals the entire state space Φ. Furthermore, the second function assigns to each pair of process generators the (extended) process generator of their successive application, with the property that the composite transformation agrees with the successive application of its parts on their common domain. The extended process generators formalize the successive transformations with less than three degrees of freedom. When a transformation is emissive, its inverse is absorptive.
Definition 2. A process of (Φ, Λ) is a pair (λ, ϕ) such that ϕ lies in the domain of the transformation assigned to λ. The process generators transform the system from an initial state via intermediate states to the final state. The set of all processes of (Φ, Λ) comprises all such pairs. According to Definitions 1 and 2 the states and process generators are interdependent (Figure 5), so that: (i) when the system has transformed from the state ϕ to the next state, the process generator λ has vanished; (ii) when the system has transformed away from ϕ, it is no longer available at ϕ for another transformation by another process generator; (iii) when the system has transformed from the initial state ϕ to an intermediate state and subsequently onward, the final state is identical to the state resulting from the extended transformation only when the intermediate state is not in the domain of any other transformation.
Definition 3 (see ). Let and let be piecewise continuous, and define to be the set of states such that the differential equation
has a solution that satisfies the initial condition and follows the trajectory which is entirely in Φ. In other words, if and only if is in Φ for every .
When (14) is compared with (5), is understood in the continuum limit to generate a transformation from the initial density (cf. the definition of energy density in Section 3) to a succeeding density during a step via the flow that consumes the free energy.
Definition 4 (see ). Define Λ to be the set of functions for which . For each , define by the formula
If denotes the path determined by , , then is taken to be the final point of . Moreover .
The step of evolution along the oriented and piecewise smooth curve from ϕ to the next state is the path determined by the formal integration from 0 to τ (15). In the general case of dissipative transformations the integration is not closed. An open system is spiraling along an open trajectory, either losing quanta to its surroundings or acquiring them from there. Consequently the state space is contracting by successive applications of the process generators that diminish the free energy almost everywhere. The dissipation ceases only at the free energy minimum state, where the orbits are closed and the domain and range are indistinguishable for any process.
Definition 5 (see ). After a series of successive applications of the process generators the evolving system arrives at the free energy minimum. Then the open system is in a dynamic state, defined as the ε-steady state by a fixed nonzero set ε such that the net flux over the period of integration τ vanishes. At the ε-steady state the probability may fluctuate due to sporadic influx and efflux, but its absolute value may not exceed the bound, so that the system continues to reside within ε. The set value ε defines the acceptable state of computation; otherwise, in the continuum limit ε → 0, the state space would contract indefinitely. In practice the state space sampling by brute-force algorithms or simulated annealing methods is limited by ε, for example, according to the available computational resources.
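The role of the fixed nonzero tolerance as a stopping criterion can be sketched with a toy simulated-annealing routine; all parameter values below are illustrative assumptions, not from the paper.

```python
import math
import random

def anneal(energy, neighbor, x0, eps=1e-3, t0=1.0, cooling=0.95, steps=2000):
    """Toy simulated annealing: contract the search until the temperature
    falls inside the fixed nonzero tolerance eps, then stop -- the analogue
    of accepting any member of the epsilon-steady set as a solution."""
    random.seed(0)                        # reproducible sampling
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        ey = energy(y)
        # accept downhill moves always, uphill moves with Boltzmann weight
        if ey < e or random.random() < math.exp((e - ey) / t):
            x, e = y, ey
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling
        if t < eps:                       # stop contracting inside epsilon
            break
    return best_x, best_e

# minimize (x - 3)^2 starting from x = 0 with small random steps
best_x, best_e = anneal(lambda x: (x - 3.0) ** 2,
                        lambda x: x + random.uniform(-0.5, 0.5),
                        x0=0.0)
```

Without the cutoff the sampling would continue indefinitely; the tolerance bounds the computational resources spent on the contraction.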
Definition 6 (see ). A family Σ of subsets of the state space Φ is an algebra if it has the following properties: (i), (ii), (iii). From these it follows that (i), (ii) the algebra Σ is closed under countable intersections and subtraction of sets, and (iii) if Σ is also closed under countable unions, then Σ is said to be a sigma-algebra.
Definition 7 (see ). A function μ is a measure if it is additive for any countable subfamily of Σ consisting of mutually disjoint sets, so that the measure of the disjoint union equals the sum of the measures. It follows that the measure of the empty set vanishes and that the measure is monotone and subtractive on nested sets. Moreover, if Σ is a sigma-algebra, then μ is said to be sigma-additive. The triple (Φ, Σ, μ) is a measure space.
Definition 8 (see ). An energy density manifold M is a set whose elements ϕ are called energy densities, together with a set Σ of functions μ called the energy scale, satisfying: (i) the range of each μ is an open interval, (ii) each μ is defined for every energy density, and (iii) each μ is a continuous, strictly increasing function. Property (i) asserts that each energy scale takes on all values in an open interval, while (ii) guarantees that each such scale establishes a one-to-one correspondence between energy levels and real numbers in its range. By means of (iii) the set determines an order relation on M. Physically speaking, the energy densities are in relation to each other on the energy scale given in units of k_BT.
Definition 9. Entropy is defined as in (3), where T denotes the absolute temperature and k_B the Boltzmann constant.
Definition 10. The change in occupancy is defined to be proportional to the free energy, in accordance with (4).
Theorem 11 (the principle of increasing entropy). The condition for the stationary state of an open system is that its entropy attains the maximum.
Proof. From Definitions 9 and 10 it follows that
because the squares are nonnegative and the conductance, as well as its inverse, that is, the resistance, is positive.
The proof agrees with the equation of motion (5). The principle of increasing entropy has been proven alternatively by variations δ using the principle of least action, where the Lagrangian integrand (kinetic energy), defined by the Gouy-Stodola theorem, is necessarily positive.
Theorem 12. The state space Φ contracts in dissipative transformations.
Proof. As a consequence of Definition 10 and Theorem 11 it follows that
because the squares are nonnegative, the occupancies are positive for nonzero densities-in-energy, and the conductance is positive.
When entropy is increasing, the state space accessible by the process generators is decreasing. In the continuum limit the theorem of contraction has been proven earlier. In practice the contraction of the state space by a finite automaton is limited to a fixed nonzero set ε. Then any member of ε qualifies as a solution.
Definition 13. The definition of the class P state space measure follows from Definitions 7 and 9: The nondissipative (reversible) and dissipative (irreversible) components have been denoted separately. In fact, the indexing is redundant, because for indistinguishable sets there is, per definition, no difference. The conserved term is invariant according to Noether's theorem [31, 32]. The nonzero dissipative term defines class P to contain at least one irreversible deterministic decision with two degrees of freedom.
Definition 14. The definition of the class NP state space measure follows from Definitions 7 and 9: The conserved components have been denoted separately from the dissipative components, which have been decomposed further into those with two degrees of freedom and those with three or more degrees of freedom. The conserved and dissipative components with only two degrees of freedom are the same as those in Definition 13. The nonzero dissipative term defines class NP to contain at least one irreversible decision between at least two choices, that is, with three or more degrees of freedom.
Definition 15. The NP-complete problem contains only dissipative processes with three or more degrees of freedom and none with only two degrees of freedom.
Theorem 16. One has P ⊊ NP.
Proof. It follows from Definitions 13 and 14 that the state space set of class NP is larger than that of class P, measured by the difference
The measure of the difference vanishes if and only if every dissipative term with three or more degrees of freedom vanishes, but this contradicts Definition 14, according to which class NP contains at least one irreversible decision with three or more degrees of freedom. Thus class P is a proper (strict) subset of class NP.
The difference between the classes can also be measured in accordance with the noncommutative measure known as Gibbs' inequality or the Kullback-Leibler divergence, which gives the difference between two probability distributions.
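The Kullback-Leibler divergence mentioned here is easily computed; the sketch below (illustrative distributions, not from the paper) also exhibits its noncommutativity, D(P‖Q) ≠ D(Q‖P) in general.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P||Q) = sum_i p_i log(p_i / q_i).
    Nonnegative by Gibbs' inequality, zero iff P = Q, and asymmetric
    in its arguments."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]     # illustrative uniform distribution
q = [0.9, 0.1]     # illustrative skewed distribution
d_pq = kl_divergence(p, q)
d_qp = kl_divergence(q, p)
```

The asymmetry is what makes the measure suitable for time-ordered, irreversible comparisons: swapping the roles of the two distributions changes the value.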
A class NP problem can be reduced to an NP-complete problem by removing the deterministic steps, that is, by polynomial-time reduction [30, 84, 85]. In graphical terms, the reduction of a problem to the NP-complete problem involves the removal of nodes with less than three degrees of freedom (Figure 6). In geometric terms, the non-Euclidean landscape is reduced to a manifold covered by nonequivalent triangles, each having a local Lorentzian metric. In summary, the computational complexity classes are related to each other as shown in Figure 7, with P a proper subset of NP and the NP-complete problems contained in NP.
At first sight it may appear strange to some that the distinction between the computational complexity classes P and NP was made on the basis of a natural law, because both classes contain many abstract problems without an apparent physical connection. However, the view is not new [86–90]. The adopted approach to classifying computational complexity is motivated because practical computation is a thermodynamic process and hence inevitably subject to the 2nd law of thermodynamics. Of course, some may still argue that the distinction between tractable and intractable problems ought to be proven without any reference to physics. Indeed, the physical portrayal can be regarded merely as a formal notation expressing that the computation is a series of time-ordered (i.e., dissipative) operations that are intractable when there are three or more degrees of freedom among interdependent operations. Noncommutative operations and non-abelian groups also formalize time series [91, 92]. The essential character of nondeterministic problems, irrespective of physical realization, is that decisions affect the set of future decisions; that is, the driving forces of computation depend on the process itself. The process formulation by the 2nd law of thermodynamics is a natural expression because the free energy and the flow of energy are interdependent.
The natural law may well be an invaluable ingredient in rationalizing the distinction between the computational complexity classes P and NP. It serves not only to prove that P ⊊ NP but also to account for the computational course itself. For both classes of problems the natural process of computation is directed toward increasingly more probable states. When there are three or more degrees of freedom, decisions influence the choice of future decisions and the computation is intractable. The set of conceivable states generated at the branching points can be enormous, similar to a causal Bayesian network. Finally, when the maximum entropy state has been attained, it can be validated, independent of the path, as the free energy minimum stationary state. The corresponding solution is verifiable independently of the computational history in a deterministic manner in polynomial time.
Furthermore, the crossing from class P to NP is found precisely where the n-SAT, n-coloring, and n-clique problems, as well as maximizing the shortest path with interdicts, become intractable, that is, when the degrees of freedom are three or more. The efficient reduction of problems to NP-complete problems is also understood as operations that remove the deterministic dissipative steps and eventual redundant reversible paths. Moreover, when a problem is beyond class NP, the natural process does not terminate at the accepting state with emission; for example, the halting problem belongs to the class NP-hard. Importantly, the natural law relates computational time directly to the flow of energy, that is, to the amount of dissipation. Thus the 2nd law implies that nondissipative processing protocols are deemed futile.
The practical value of classifying computational complexity by the natural law of maximal energy dispersal is that no deterministic algorithm can be found that would complete the class NP problems in polynomial time. The conclusion is anticipated; nonetheless, its premises imply that there is no all-purpose algorithm to trace the maximal flow paths through noninvariant landscapes. Presumably the most general and efficient algorithms balance execution between exploration of the landscape and progression down along the steepest gradients in time. Perhaps most importantly, the universal law provides us with a holistic understanding of the phenomena themselves, so as to formulate questions and computational tasks in the most meaningful way.
The author is grateful to Mahesh Karnani, Heikki Suhonen, and Alessio Zibellini for valuable corrections and instructive comments.
- S. A. Cook, “The P vs. NP problem,” CLAY Mathematics Foundation Millennium Problems, http://www.claymath.org/millennium/.
- M. Sipser, Introduction to the Theory of Computation, PWS Publishing, New York, NY, USA, 2001.
- D. L. Applegate, R. E. Bixby, V. Chvátal, and W. J. Cook, The Traveling Salesman Problem: A Computational Study, Princeton University Press, Princeton, NJ, USA, 2006.
- M. R. Garey and D. S. Johnson, Computers and Intractability, A Guide to the Theory of NP-Completeness, Freeman, New York, NY, USA, 1999.
- R. Landauer, “Irreversibility and heat generation in the computing process,” IBM Journal of Research and Development, vol. 5, pp. 183–191, 1961.
- R. Landauer, “Minimal energy requirements in communication,” Science, vol. 272, no. 5270, pp. 1914–1918, 1996.
- C. H. Bennett, “Notes on Landauer's principle, reversible computation, and Maxwell's Demon,” Studies in History and Philosophy of Science Part B, vol. 34, no. 3, pp. 501–510, 2003.
- J. Ladyman, “Physics and computation: the status of Landauer's principle,” in Proceedings of the 3rd Conference on Computability in Europe (CiE '07), S. B. Cooper, B. Löwe, and A. Sorbi, Eds., vol. 4497 of Lecture Notes in Computer Science, pp. 446–454, Springer, Siena, Italy, June 2007.
- S. Carnot, Réflexions Sur la Puissance Motrice du feu et sur les Machines Propres à Développer Cette Puissance, Bachelier, Paris, France, 1824.
- L. Boltzmann, “Populäre Schriften (Leipzig: J. A. Barth, 1905); partially translated,” in Theoretical Physics and Philosophical Problems, B. McGuinness, Ed., Reidel, Dordrecht, The Netherlands, 1974.
- A. S. Eddington, The Nature of the Physical World, Macmillan, New York, NY, USA, 1928.
- V. Sharma and A. Annila, “Natural process—Natural selection,” Biophysical Chemistry, vol. 127, no. 1-2, pp. 123–128, 2007.
- V. R. I. Kaila and A. Annila, “Natural selection for least action,” Proceedings of the Royal Society A, vol. 464, no. 2099, pp. 3055–3070, 2008.
- P. Tuisku, T. K. Pernu, and A. Annila, “In the light of time,” Proceedings of the Royal Society A, vol. 465, no. 2104, pp. 1173–1198, 2009.
- Z. K. Silagadze, “Citation entropy and research impact estimation,” Acta Physica Polonica B, vol. 41, no. 11, pp. 2325–2333, 2010.
- S. Jaakkola, V. Sharma, and A. Annila, “Cause of chirality consensus,” Current Chemical Biology, vol. 2, no. 2, pp. 153–158, 2008.
- S. Jaakkola, S. El-Showk, and A. Annila, “The driving force behind genomic diversity,” Biophysical Chemistry, vol. 134, no. 3, pp. 232–238, 2008.
- T. Grönholm and A. Annila, “Natural distribution,” Mathematical Biosciences, vol. 210, no. 2, pp. 659–667, 2007.
- P. Würtz and A. Annila, “Roots of diversity relations,” Biophysical Journal, vol. 2008, Article ID 654672, 8 pages, 2008.
- M. Karnani and A. Annila, “Gaia again,” BioSystems, vol. 95, no. 1, pp. 82–87, 2009.
- A. Annila and E. Kuismanen, “Natural hierarchy emerges from energy dispersal,” BioSystems, vol. 95, no. 3, pp. 227–233, 2009.
- A. Annila and E. Annila, “Why did life emerge?” International Journal of Astrobiology, vol. 7, no. 3-4, pp. 293–300, 2008.
- P. Würtz and A. Annila, “Ecological succession as an energy dispersal process,” BioSystems, vol. 100, no. 1, pp. 70–78, 2010.
- A. Annila and S. Salthe, “Economies evolve by energy dispersal,” Entropy, vol. 11, no. 4, pp. 606–633, 2009.
- T. Mäkelä and A. Annila, “Natural patterns of energy dispersal,” Physics of Life Reviews, vol. 7, no. 4, pp. 477–498, 2010.
- J. Anttila and A. Annila, “Natural games,” Physics Letters, Section A, vol. 375, no. 43, pp. 3755–3761, 2011.
- D. Kondepudi and I. Prigogine, Modern Thermodynamics, John Wiley & Sons, New York, NY, USA, 1998.
- V. Sharma, V. R. I. Kaila, and A. Annila, “Protein folding as an evolutionary process,” Physica A, vol. 388, no. 6, pp. 851–862, 2009.
- A. S. Fraenkel, “Complexity of protein folding,” Bulletin of Mathematical Biology, vol. 55, no. 6, pp. 1199–1210, 1993.
- S. A. Cook, “The complexity of theorem proving procedures,” in Proceedings of the 3rd Annual ACM Symposium on Theory of Computing (STOC '71), pp. 151–158, 1971.
- E. Noether, “Invariante Variationsprobleme,” Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse, pp. 235–257, 1918.
- M. A. Tavel, “Invariant variation problem,” Transport Theory and Statistical Physics, vol. 1, pp. 183–207, 1971, English translation of E. Noether's paper.
- S. H. Strogatz, Nonlinear Dynamics and Chaos with Applications to Physics, Biology, Chemistry and Engineering, Westview, Cambridge, Mass, USA, 2000.
- M. Karnani, K. Pääkönen, and A. Annila, “The physical character of information,” Proceedings of the Royal Society A, vol. 465, no. 2107, pp. 2155–2175, 2009.
- P. W. Atkins and J. de Paula, Physical Chemistry, Oxford University Press, New York, NY, USA, 2006.
- C. Darwin, On the Origin of Species, John Murray, London, UK, 1859.
- A. Annila and S. Salthe, “Physical foundations of evolutionary theory,” Journal of Non-Equilibrium Thermodynamics, vol. 35, no. 3, pp. 301–321, 2010.
- M. Sipser, “History and status of the P versus NP question,” in Proceedings of the 24th Annual ACM Symposium on the Theory of Computing, pp. 603–618, May 1992.
- L. Brillouin, Science and Information Theory, Academic Press, New York, NY, USA, 1963.
- A. J. Leggett and A. Garg, “Quantum mechanics versus macroscopic realism: is the flux there when nobody looks?” Physical Review Letters, vol. 54, no. 9, pp. 857–860, 1985.
- A. Turing, “On computable numbers, with an application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, vol. 2, no. 42, pp. 230–265, 1936.
- R. E. Ladner, “On the structure of polynomial time reducibility,” Journal of the Association for Computing Machinery, vol. 22, no. 1, pp. 155–171, 1975.
- S. N. Salthe, Evolving Hierarchical Systems: Their Structure and Representation, Columbia University Press, New York, NY, USA, 1985.
- R. P. Feynman and A. R. Hibbs, Quantum Mechanics and Path Integrals, McGraw-Hill, New York, NY, USA, 1965.
- R. D. Mattuck, A guide to Feynman Diagrams in the Many-Body Problem, Dover, New York, NY, USA, 1992.
- M. Alonso and E. J. Finn, Fundamental University Physics, vol. 3, Addison-Wesley, Reading, Mass, USA, 1983.
- J. W. Gibbs, The Scientific Papers of J. Willard Gibbs, Ox Bow Press, Woodbridge, Conn, USA, 1993-1994.
- S. Kullback, Information Theory and Statistics, John Wiley & Sons, New York, NY, USA, 1959.
- L. G. Gouy, “Sur l'énergie utilisable,” Journal de Physique, vol. 8, pp. 501–518, 1889.
- A. Stodola, Steam and Gas Turbines, McGraw-Hill, New York, NY, USA, 1910.
- B. H. Lavenda, Nonequilibrium Statistical Thermodynamics, John Wiley & Sons, New York, NY, USA, 1985.
- D. R. Owen, A First Course in the Mathematical Foundations of Thermodynamics, Springer, New York, NY, USA, 1984.
- U. Lucia, “Probability, ergodicity, irreversibility and dynamical systems,” Proceedings of the Royal Society A, vol. 464, no. 2093, pp. 1089–1104, 2008.
- E. T. Jaynes, “Information theory and statistical mechanics,” Physical Review, vol. 106, no. 4, pp. 620–630, 1957.
- H. Ziegler, An Introduction to Thermomechanics, North-Holland, Amsterdam, The Netherlands, 1983.
- R. E. Ulanowicz and B. M. Hannon, “Life and the production of entropy,” Proceedings of the Royal Society B, vol. 232, pp. 181–192, 1987.
- D. R. Brooks and E. O. Wiley, Evolution as Entropy: Toward a Unified Theory of Biology, University of Chicago Press, Chicago, Ill, USA, 1988.
- R. Swenson, “Emergent attractors and the law of maximum entropy production: foundations to a theory of general evolution,” Systems Research, vol. 6, pp. 187–198, 1989.
- S. N. Salthe, Development and Evolution: Complexity and Change in Biology, MIT Press, Cambridge, Mass, USA, 1993.
- E. D. Schneider and J. J. Kay, “Life as a manifestation of the second law of thermodynamics,” Mathematical and Computer Modelling, vol. 19, no. 6–8, pp. 25–48, 1994.
- A. Bejan, Advanced Engineering Thermodynamics, John Wiley & Sons, New York, NY, USA, 1977.
- E. J. Chaisson, Cosmic Evolution: The Rise of Complexity in Nature, Harvard University Press, Cambridge, Mass, USA, 2001.
- R. D. Lorenz, “Planets, life and the production of entropy,” International Journal of Astrobiology, vol. 1, pp. 3–13, 2002.
- R. Dewar, “Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states,” Journal of Physics A, vol. 36, no. 3, pp. 631–641, 2003.
- C. H. Lineweaver, “Cosmological and biological reproducibility: limits of the maximum entropy production principle,” in Non-Equilibrium Thermodynamics and the Production of Entropy: Life, Earth and Beyond, A. Kleidon and R. D. Lorenz, Eds., Springer, Heidelberg, Germany, 2005.
- L. M. Martyushev and V. D. Seleznev, “Maximum entropy production principle in physics, chemistry and biology,” Physics Reports, vol. 426, no. 1, pp. 1–45, 2006.
- M. Berry, Principles of Cosmology and Gravitation, Cambridge University Press, Cambridge, UK, 2001.
- S. Weinberg, Gravitation and Cosmology, Principles and Applications of the General Theory of Relativity, John Wiley & Sons, New York, NY, USA, 1972.
- E. F. Taylor and J. A. Wheeler, Spacetime Physics, Freeman, New York, NY, USA, 1992.
- J. M. Lee, Introduction to Smooth Manifolds, Springer, New York, NY, USA, 2003.
- D. Griffiths, Introduction to Quantum Mechanics, Prentice Hall, Upper Saddle River, NJ, USA, 1995.
- I. Newton, The Principia, Daniel Adee, New York, NY, USA, 1687, translated by A. Motte, 1846.
- A. Annila, “Least-time paths of light,” Monthly Notices of the Royal Astronomical Society, vol. 416, pp. 2944–2948, 2011.
- M. Koskela and A. Annila, “Least-time perihelion precession,” Monthly Notices of the Royal Astronomical Society, vol. 417, pp. 1742–1746, 2011.
- S. Carroll, Spacetime and Geometry: An Introduction to General Relativity, Addison-Wesley, Essex, UK, 2004.
- J. H. Poincaré, “Sur le problème des trois corps et les équations de la dynamique. Divergence des séries de M. Lindstedt,” Acta Mathematica, vol. 13, pp. 1–270, 1890.
- K. F. Sundman, “Mémoire sur le problème des trois corps,” Acta Mathematica, vol. 36, pp. 105–179, 1912.
- C. E. Shannon and W. Weaver, The Mathematical Theory of Communication, The University of Illinois Press, Urbana, Ill, USA, 1962.
- C. E. Shannon, “The mathematical theory of communication,” Bell System Technical Journal, vol. 27, pp. 379–423, 623–656, 1948.
- S. J. Gould, The Structure of Evolutionary Theory, Harvard University Press, Cambridge, Mass, USA, 2002.
- T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, MIT Press & McGraw-Hill, Cambridge, Mass, USA, 2001.
- E. W. Dijkstra, “A note on two problems in connexion with graphs,” Numerische Mathematik, vol. 1, no. 1, pp. 269–271, 1959.
- P. Billingsley, Probability and Measure, John Wiley & Sons, New York, NY, USA, 1979.
- L. Levin, “Universal search problems,” Problems of Information Transmission, vol. 9, no. 3, pp. 115–116, 1973 (Russian).
- B. A. Trakhtenbrot, “A survey of Russian approaches to perebor (brute-force searches) algorithms,” Annals of the History of Computing, vol. 6, pp. 384–400, 1984; includes an English translation of Levin's paper.
- S. Aaronson, “NP-complete problems and physical reality,” Electronic Colloquium on Computational Complexity, Report no. 26, 2005.
- A. A. Razborov and S. Rudich, “Natural proofs,” Journal of Computer and System Sciences, vol. 55, no. 1, pp. 24–35, 1997.
- M. Franzén, The P versus NP brief, 2007, http://arxiv.org/ftp/arxiv/papers/0709/0709.1207.pdf.
- S. N. Coppersmith, “The computational complexity of Kauffman nets and the P versus NP problem,” Physical Review E, vol. 75, 4 pages, 2007.
- J. Ladyman, “What does it mean to say that a physical system implements a computation?” Theoretical Computer Science, vol. 410, no. 4-5, pp. 376–383, 2009.
- A. Connes, Noncommutative Geometry (Géométrie non commutative), Academic Press, San Diego, Calif, USA, 1994.
- D. Hestenes and G. Sobczyk, Clifford Algebra to Geometric Calculus. A Unified Language for Mathematics and Physics, Reidel, Dordrecht, The Netherlands, 1984.
- J. Pearl, Causality: Models, Reasoning, and Inference, Cambridge University Press, New York, NY, USA, 2000.
- R. Landauer, “The physical nature of information,” Physics Letters, Section A, vol. 217, no. 4-5, pp. 188–193, 1996.
- W. I. Gasarch, “The P=?NP poll,” SIGACT News, vol. 33, no. 2, pp. 34–47, 2002.