ISRN Computational Mathematics
Volume 2012 (2012), Article ID 321372, 15 pages
Research Article

Physical Portrayal of Computational Complexity

Department of Physics, Institute of Biotechnology and Department of Biosciences, University of Helsinki, 00014 Helsinki, Finland

Received 3 October 2011; Accepted 3 November 2011

Academic Editor: L. Pan

Copyright © 2012 Arto Annila. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Computational complexity is examined using the principle of increasing entropy. Considering computation as a physical process from an initial instance to the final acceptance is motivated because information requires physical representations and because many natural processes complete in nondeterministic polynomial time (NP). An irreversible process with three or more degrees of freedom is found intractable because, in terms of physics, flows of energy are inseparable from their driving forces. In computational terms, when solving a problem in the class NP, decisions among alternatives will affect the sets of decisions available subsequently. Thus the state space of a nondeterministic finite automaton evolves due to the computation itself and hence cannot be efficiently contracted using a deterministic finite automaton. Conversely, when solving problems in the class P, the set of states does not depend on computational history, and hence it can be efficiently contracted to the accepting state by a deterministic sequence of dissipative transformations. Thus it is concluded that the state set of class P is inherently smaller than the state set of class NP. Since the computational time needed to contract a given set is proportional to dissipation, the computational complexity class P is a proper (strict) subset of NP.

1. Introduction

Currently it is unclear whether every problem whose solution can be efficiently checked by a computer can also be efficiently solved by a computer [1, 2]. On one hand, decision problems in the computational complexity class P can be solved efficiently by a deterministic algorithm within a number of steps bounded by a polynomial function of the input's length. An example of a P problem is the shortest-path problem: what is the least-cost one-way path through a given network of cities to the destination? On the other hand, solving problems in class NP efficiently seems to require a nondeterministic parallel machine; yet solutions can be verified as correct in a deterministic manner. An example of an NP-complete problem is the traveling salesman problem: what is the least-cost round-trip path via a given network of cities, visiting each exactly once?
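To make the contrast concrete, the following sketch (with an assumed four-city toy cost table; all names and costs are purely illustrative) finds the least-cost one-way path by a polynomial-time Dijkstra-style search and the least-cost round trip by exhaustive enumeration:

```python
import heapq
import itertools

# Toy symmetric cost table between four cities (illustrative values).
COST = {('A', 'B'): 1, ('A', 'C'): 4, ('A', 'D'): 3,
        ('B', 'C'): 2, ('B', 'D'): 5, ('C', 'D'): 1}
CITIES = ['A', 'B', 'C', 'D']

def cost(a, b):
    return COST[(a, b)] if (a, b) in COST else COST[(b, a)]

def shortest_path(src, dst):
    """Least-cost one-way path (class P): polynomial-time Dijkstra search."""
    dist, heap = {src: 0}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        for v in CITIES:
            if v != u and d + cost(u, v) < dist.get(v, float('inf')):
                dist[v] = d + cost(u, v)
                heapq.heappush(heap, (dist[v], v))
    return float('inf')

def tour_cost(tour):
    return sum(cost(a, b) for a, b in zip(tour, tour[1:]))

def tsp(start):
    """Least-cost round trip (NP-complete): exhaustive factorial-time search."""
    rest = [c for c in CITIES if c != start]
    return min(tour_cost((start,) + p + (start,))
               for p in itertools.permutations(rest))
```

Checking a proposed round trip against a claimed cost is a single `tour_cost` evaluation, whereas finding the optimum above enumerates all (n − 1)! tours.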

It appears, although it has not been proven, that the traveling salesman problem [3] and numerous other NP problems in mathematics, physics, biology, economics, optimization, artificial intelligence, and so forth [4] cannot be solved in a deterministic manner in polynomial time, unlike the shortest-path problem and other P problems. Yet the initial instances of the traveling salesman problem and the shortest-path problem seem to differ at most polynomially from one another. Could it therefore be that there are, after all, algorithms for the NP problems as efficient as those for the P problems, and they simply have not been found yet?

In this study, insight into the P versus NP question is obtained by considering computation as a physical process [5–8] that follows the 2nd law of thermodynamics [9–11]. The natural law was recently written as an equation of motion that complies with the principle of least action and Newton's second law [12–15]. The ubiquitous imperative to consume free energy, also known as the principle of increasing entropy, describes a system evolving toward more probable states in least time. Here it is of particular interest that evolution is in general a nondeterministic process, as is class NP computation. Furthermore, the end point of evolution, that is, the thermodynamically stable stationary state itself, can be efficiently validated as the free energy minimum, in a similar manner as the solution to an NP computation can be verified as accepting.

The recent formulation of the 2nd law as an equation of motion based on the statistical mechanics of open systems has rationalized diverse evolutionary courses that result in skewed distributions whose cumulative curves are open-form integrals [16–26]. Several of these natural processes [27], for example, protein folding, which proceeds down along intractable trajectories to diminish free energy [28], have been recognized as the hardest problems in class NP [29]. Although many other NP-complete problems do not seem to concern physical reality, the concept of NP-completeness [30] encourages one to consider computation as an energy transduction process that follows the 2nd law of thermodynamics. The physical portrayal of computational complexity allows one to use the fundamental theorems concerning conserved currents [31, 32] and gradient systems [27, 33] in the classification of computational complexity. Specifically, it is found that circuit currents remain tractable during class P computation because the accessible states of the computer do not depend on the processing steps themselves. Thus the class P state set can be efficiently contracted by a deterministic finite automaton to the accepting set along the dissipative path without additional degrees of freedom; physically speaking, the boundary conditions remain fixed. In contrast, the circuit currents are intractable during class NP computation because each step of the problem-solving process depends on the computational history and affects future decisions. Thus the contraction of states along alternative but interdependent computational paths to the accepting set remains a nondeterministic process; physically speaking, the boundary conditions change due to the process itself.

The adopted physical perspective on computation is consistent with the standpoint that no information exists without its physical representation [5, 6] and that information processing itself is governed by the 2nd law [34]. The connection between computational complexity and the natural law also yields insight into the abundance of natural problems in class NP [4]. In the following, the description of computation as an evolutionary process is first outlined and then developed into mathematical form to make the distinction between the computations that belong to classes P and NP.

2. Computation as a Physical Process

According to the 2nd law of thermodynamics, a computational circuit, just as any other physical system, evolves by diminishing energy density differences within the system and relative to its surroundings. The consumption of free energy [35] is generally referred to as evolution, where flows of energy naturally select [36, 37] the steepest directional descents in the free energy landscape to abolish the energy density differences in least time [14]. At first sight it may appear that the physical representations of computational states, in particular as realized by modern computers, would be too insignificant to play any role in computational complexity. However, since no representation of information can escape the laws of physics, computation, too, must ultimately comply with them. A clocked circuit as a physical realization of a finite automaton is an energy transduction network. Likewise, Boolean components and shift register nodes are components of a thermodynamic network. In accordance with the network notion, the P versus NP question can be phrased in terms of graphs [38]. In this study it will be shown that computations in the two complexity classes differ from each other in thermodynamic terms. It then follows that no algorithm can abolish this profound distinction.

Computation is, according to the principle of increasing entropy, a probable physical process. The sequence of computational steps will begin when an energy density difference, representing an input, appears at the interface between the computational system and its surroundings. Thus, the input by its physical representation places the automaton at the initial state of computation, that is, physically speaking evolution. A specific input string of alphabetic symbols is represented to the circuit by a particular physical influx, for example, as a train of voltages. Importantly no instance is without physical realization.

The algorithmic execution is an irreversible thermalization process where the energy absorbed at the input interface will begin to disperse within the circuit. Eventually, after a series of dissipative transformations from one state to another, more probable one, the computational system arrives at a thermodynamic steady state, the final acceptance, by emitting an output, for example, writing a solution on a tape. No solution can be produced without physical representation. Although it may seem secondary, the condition of termination must ultimately be the physical free energy minimum state; otherwise there would still be free energy that would drive the computational process further.

Physically speaking, the most effective problem solving is about finding the path of least action, which is equivalent to the maximal energy transduction from the initial instance down along the most voluminous gradients of energy to the final acceptance. However, the path for the optimal conductance, that is, for the most rapid reduction of free energy, is tricky to find in a circuit with three or more degrees of freedom because flows (currents) and forces (voltages) are inseparable. In contrast, when the process has no additional degrees of freedom in dissipation, the minimal resistance path corresponding to the solution can be found in a deterministic manner.

In the general case the computational path is intractable because the state space keeps changing due to the search itself. A particular decision to move from the present state to another depends on the past decisions and will also affect the states accessible in the future. For example, when the traveling salesman decides on the next destination, the decision will depend on the past path, except at the very end, when there is no choice but to return home. The path is directed because revisits are not allowed (or are eventually restricted by costs). This class, referred to as NP, contains intractable problems that describe irreversible (directional) processes (Figure 1) with additional (n ≥ 3) degrees of freedom.

Figure 1: Computation is considered as a dissipative process. The input as an influx of energy disperses from the input interface (top) through the network, which evolves during the computation, according to the 2nd law of thermodynamics, by dissipative transitions that acquire high (blue) and yield low (red) density in energy, toward the stationary state (bottom). Reversible transitions, that is, conserved currents (purple), do not bring about changes of state and do not advance the computation. Driving forces (free energy between the nodes) and flows (between the nodes) are inseparable when there are additional degrees of freedom (n ≥ 3), that is, alternative yet interdependent paths for the dissipative processes to proceed along. Then the flows are intractable and the corresponding algorithmic execution is nondeterministic.

In the special case the computational path is tractable as decisions are independent of computational history. For example, when searching for the shortest path through a network, the entire invariant state space is, at least in principle, visible from the initial instance, that is, the problem is deterministic. A decision at any node is independent of traversed paths. This class, referred to as 𝑃, contains tractable problems that describe irreversible processes without additional degrees of freedom. Moreover, when the search among alternatives is not associated with any costs, the process is reversible (nondirectional), that is, indifferent to the total conductance from the input to output node.
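The difference in the growth of the two decision sets can be illustrated with a small count. As a sketch, the round-trip search is counted here via the Held-Karp state formulation (an assumption used only for bookkeeping): a history-independent shortest-path search needs one state per city, whereas a history-dependent round-trip search needs one state per (visited subset, current city) pair.

```python
import math

def shortest_path_states(n):
    """History-independent search: one state per city."""
    return n

def round_trip_states(n):
    """History-dependent search: one state per (visited subset, current
    city) pair, as in the Held-Karp recursion -- exponential in n."""
    return sum(k * math.comb(n - 1, k) for k in range(1, n))

# For 10 cities: 10 shortest-path states versus 2304 round-trip states.
counts = [(n, shortest_path_states(n), round_trip_states(n))
          for n in (5, 10, 15)]
```

The round-trip count equals (n − 1)·2^(n−2), so each added city roughly doubles the set of states the deterministic search must address.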

Finally, it is of interest to note the particular case where a physical system has no mechanism to proceed from one state to any other by transforming absorbed quanta into emission. Since the dispersion relations of physical systems are revealed only by interacting with them [39, 40], it is impossible to know, for a given circuit and finite influx, a priori and without interacting, whether the system will arrive at the free energy minimum state, finishing with emission, or remain at an excited state without output forever. This is the physical rationale of the halting problem [41]: it is impossible to decide, for a given program and finite input, a priori and without processing, whether the execution will arrive at the accepting state, finishing with output, or remain in a running state without output forever. These processes that acquire but do not yield relate to problems that cannot be decided. They are beyond class NP [42] and will not be examined further. Here the focus is on the principal difference between the truly tractable and the inherently intractable problems.

3. Self-Similar Circuits

The physical portrayal of problem processing according to the principle of increasing entropy is based on the hierarchical and holistic formalism [43]. It recognizes that circuits are self-similar in energy transduction (Figure 2) [21, 44, 45]. A circuit is composed of circuits or, equivalently, there are networks within nodes of networks. The most elementary physical entity is the single quantum of action [15, 46].

Figure 2: According to the self-similar formulation of energy transduction, the nodes of a network are themselves networks. Any two densities φ_j and φ_k at the nodes j and k are distinguished from each other by a dissipative jk-transformation ΔQ_jk ≠ 0.

Each node of a transduction network is a physical entity associated with energy G_k. A set of identical nodes in numbers N_k > 0, representing, for example, a memory register, is associated, following Gibbs [47], with a density-in-energy defined by φ_k = N_k exp(G_k/k_B T) relative to the average energy density k_B T. The self-similar formalism assigns to a set of indistinguishable nodes in numbers N_k a probability measure P_k [12, 46]

P_k = [ ∏_n (N_n e^{(−ΔG_kn + ΔQ_kn)/k_B T})^{g_kn} / g_kn! ]^{N_k} / N_k!   (1)

in a recursive manner, so that each node k in numbers N_k is a product of embedded n-nodes, each distinct type available in numbers N_n. The combinatorial configurations of identical n-nodes in the k-node are numbered by g_kn. Likewise, the identical k-nodes in numbers N_k are indistinguishable from each other in the network. The internal difference ΔG_kn = G_k − G_n and the external flux ΔQ_kn denote the quanta of (interaction) energy.

The computational system is processing from one state to another, more probable one, when energy is flowing down along gradients through the network from one node to another with concurrent dissipation to the surroundings. For example, a j-node can be driven from its present state, defined by the potential μ_j = k_B T ln φ_j [35], to another state by an energy flow from a preceding k-node at a higher potential μ_k and by an energy efflux ΔQ_jk to the surroundings (Figure 2). Subsequently the j-node may transform anew from its current high-energy state to a stationary state by yielding an efflux to a connected i-node at a lower potential, coupled with emission to the surroundings. Any two states are distinguished from each other as different only when the transformation from one to the other is dissipative, ΔQ_jk ≠ 0 [12–14]. When thermalization has abolished all density differences, the irreversible process has arrived at a dynamic steady state where reversible, to-and-fro flows of energy (currents) are conserved and, on average, the densities remain invariant.

It is convenient to measure the state space of computation by associating each j-system with the logarithmic probability

ln P_j ≈ N_j (1 − Σ_k (Δμ_jk − ΔQ_jk)/k_B T) = N_j (1 − Σ_k ΔV_jk/k_B T)   (2)

in analogy to (1), where Δμ_jk/k_B T = ln φ_j − Σ_k ln(φ_k^{g_jk}/g_jk!) is the potential difference between the j-node and all other connected k-nodes in degenerate (equal-energy) numbers g_jk. Stirling's approximation implies that k_B T is a sufficient statistic for the average energy [48], so that the system may accept (or discard) a quantum without a marked change in its total energy content, that is, in the free energy ΔV_jk = Δμ_jk − ΔQ_jk. Otherwise a high influx ΔV_jk ≫ k_B T, such as a voltage spike from the preceding k-node or heat from the surroundings, might "damage" the j-system, for example, "burn" a memory register, by forcing the embedded n-nodes into evolution (Figure 2). Such a nonstatistic phenomenon may manifest itself even as chaotic motion, but this is no obstacle for the adopted formalism, because then the same self-similar equations are used at a lower level of hierarchy to describe processes involving sufficiently statistical systems.
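The reduction from (1) to (2) leans on Stirling's approximation, ln N! ≈ N ln N − N. A quick numeric check (the occupancy value is an arbitrary toy choice) shows how small the relative error already is for a modest register size:

```python
import math

def stirling(n):
    """Leading-order Stirling approximation: ln n! ≈ n ln n − n."""
    return n * math.log(n) - n

n = 10_000                      # toy occupancy N_j (assumed)
exact = math.lgamma(n + 1)      # exact ln(n!)
rel_err = abs(exact - stirling(n)) / exact
```

The neglected correction grows only like (1/2) ln(2πn), which is why a sufficiently statistical system can accept or discard single quanta without a marked change in its measure.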

According to the scale-independent formalism, the network is a system in the same way as its constituent nodes are systems themselves. Any two networks, just as any two nodes, are distinguishable from each other when there is some influx sequence of energy such that exactly one of the two systems is transforming. In computational terms, any two states of a finite automaton are distinguishable when there is some input string such that exactly one of the two transition functions is accepting [2]. Those nodes that are distinguishable from each other by mutual density differences are nonequivalent. These distinct physical entities of a circuit are represented by disjoint sets and indexed separately in the total additive measure of the entire circuit, defined as

ln P = Σ_j ln P_j = Σ_j N_j (1 − Σ_{k≠j} ΔV_jk/k_B T).   (3)

The affine union of disjoint sets is depicted as a graph that is merged from subgraphs by connections.

In the general case the calculation of the measure ln P (3) implies a complicated energy transduction network by indexing numerous nodes as well as the differences between them and with respect to the surroundings. In a sufficiently statistical system the changes in occupancies balance as ΔN_j = −Σ_k ΔN_k, since the influx to the j-node results from the effluxes from the k-nodes (or vice versa). The flows along the jk-edges are proportional to the free energy by an invariant conductance σ_jk > 0 defined as [12]

ΔN_j = Σ_k σ_jk ΔV_jk/k_B T.   (4)

The form ensures continuity, so that when a particular jk-flow is increasing the occupancy ΔN_j > 0 of the j-node, the very same flow is decreasing the occupancies Σ ΔN_k < 0 at the k-nodes (or vice versa). Importantly, owing to the other affine connections, the jk-transformation will affect the occupancies of other nodes, which in turn affect ΔV_jk. Consequently, when there are, among interdependent nodes (n ≥ 3), alternative paths (k ≥ 2) of conduction, the problem of finding the optimal path becomes intractable [12, 14]. As long as ΔV_jk ≠ 0, the gradient system with n ≥ 3 degrees of freedom does not enclose integrable (tractable) orbits [33].
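A minimal numeric sketch of (4) (with assumed unit conductances, k_B T = 1, and toy potentials μ_j = ln N_j standing in for the full densities-in-energy) exhibits both the continuity ΔN_j = −Σ_k ΔN_k and the interdependence of the flows when n ≥ 3 pools are coupled:

```python
import math

KT = 1.0  # thermal energy k_B*T in natural units (assumption)

def step(N, dt=0.01):
    """Explicit-Euler update of (4): dN_j = sum_k sigma_jk (mu_k - mu_j)/kT,
    with unit conductances and toy potentials mu_j = kT ln N_j."""
    mu = [KT * math.log(n) for n in N]
    dN = [sum((mu[k] - mu[j]) / KT for k in range(len(N)) if k != j)
          for j in range(len(N))]
    return [n + dt * d for n, d in zip(N, dN)]

N = [100.0, 10.0, 1.0]   # three coupled pools: n >= 3 degrees of freedom
total = sum(N)           # continuity: the total occupancy is conserved
for _ in range(20000):
    N = step(N)
```

The force on any one edge depends on occupancies that the other flows keep changing, so with n ≥ 3 the trajectory has to be stepped numerically, updating the potentials after every change of state, rather than integrated in closed form.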

Conversely, in the special case when the reduction of one difference does not affect the other differences, that is, when there are no additional degrees of freedom, the changes in occupancies remain tractable. The conservation of energy requires that, when there are only two degrees of freedom, the flow from one node will inevitably arrive exclusively at the other node. Therefore it is not necessary to explore all these integrable paths to their very ends; the outcome can be predicted and the particular path in question can be found efficiently. Moreover, when there are no differences, ΔV_jk = 0, there are no net variations in occupancies, that is, no net flows either. These conserved, reversible flows are statistically predictable even in a complicated but stationary (Δln P = 0) network with many degrees of freedom. When the currents are conserved, the network is idle, that is, not transforming. In accordance with Noether's theorem, the Poincaré-Bendixson theorem also holds for the stationary system [27, 33].

The overall transduction processes, both intractable and tractable, direct toward more probable states, that is, Δln P > 0. However, when a natural process with three or more degrees of freedom is examined in a deterministic manner, it is necessary to explore all conceivable transformation paths to their ends. The paths cannot be integrated in closed forms (predicted) because each decision will affect the choice of future states. The set of conceivable states that is generated by decisions at consequent branching points of computation can be enormous.

The physical portrayal of computational complexity reveals that it is the noninvariant, evolving state space of class NP computation that prevents the contraction by dissipative transformations from being completed in a deterministic manner in polynomial time. Since the dissipated flow of energy during the computation relates directly to the irreversible flow of time [14], the class NP completion time is inherently longer than that of class P. Thus it is concluded that P is a proper subset of NP.

4. Computation as a Probable Process

When computation is described as a probable physical process, the additive logarithmic probability measure ln P increases as the dissipative transformations level off the differences ΔV_jk → 0 (ΔV_jj = 0). When the definitions in (4) and Δμ_jk(ΔN_j)/k_B T ≈ ΔN_j/N_j are used, the change

L = Δln P = Σ_j ΔN_j Σ_k ΔV_jk/k_B T = Σ_{j,k} σ_jk (ΔV_jk/k_B T)² ≥ 0   (5)

is found to be nonnegative, since the squares (ΔV_jk)² and (ΔN_j)² are necessarily nonnegative and the absolute temperature T > 0, σ_jk ≥ 0, and k_B > 0.

The definition of entropy S = k_B ln P yields from (5) the principle of increasing entropy, ΔS = Σ_j ΔN_j Σ_k ΔV_jk/T ≥ 0. Equation (5) says that entropy increases when free energy decreases, in agreement with the thermodynamic maxim [35], the Gouy-Stodola theorem [49, 50], and the mathematical foundations of thermodynamics [51–53]. In other words, when the process generator L > 0, there is free energy for the computation to proceed from the initial state toward the accepting state, where the output will thermalize the circuit and L = 0. Admittedly, dissipation is often small; however, it is not negligible but necessary for any computation to advance and to yield an output [5, 6, 34].

During the computational process the state space accessible by L > 0 is contracting toward the free energy minimum state, where L = 0 and no further changes of state are possible. Consistently, when ln P is increasing due to the changing occupancies ΔN_j, the change in the process generator [27]

ΔL = −2 Σ_j (ΔN_j/N_j) Σ_k σ_jk ΔV_jk/k_B T = −2 Σ_j (ΔN_j)²/N_j ≤ 0   (6)

is found to decrease almost everywhere using the definition in (4), because the squares (ΔN_j)² and (ΔV_jk)² are necessarily nonnegative and N_j > 0 for any spatially confined energy density [14]. Equations (5) and (6) show that during the computation the state space contracts toward the stationary state where L = 0.
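Inequalities (5) and (6) can be spot-checked on a toy network (unit conductances, k_B T = 1, and potentials μ_j = ln N_j, all assumed for illustration): the generator L stays nonnegative and decreases monotonically along the relaxation.

```python
import math

EDGES = [(0, 1), (1, 2), (0, 2)]   # three pools, unit conductances (toy)

def mu(N):
    return [math.log(n) for n in N]   # k_B*T = 1 (assumption)

def generator(N):
    """L of (5): sum over edges of sigma*(dV/kT)^2, necessarily >= 0."""
    m = mu(N)
    return sum((m[k] - m[j]) ** 2 for j, k in EDGES)

def step(N, dt=0.01):
    """One relaxation step of (4): each edge moves occupancy downhill."""
    m, dN = mu(N), [0.0] * len(N)
    for j, k in EDGES:
        f = m[k] - m[j]          # flow into j from the higher potential
        dN[j] += f
        dN[k] -= f
    return [n + dt * d for n, d in zip(N, dN)]

N = [100.0, 10.0, 1.0]
Ls = [generator(N)]
for _ in range(2000):
    N = step(N)
    Ls.append(generator(N))
```

The recorded sequence of L values contracts toward zero, mirroring the statement that the accessible state space shrinks toward the stationary state.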

The free energy minimum partition ln P_max = Σ_j N_j^ss corresponds to the solution. It is a stable state of the computational process in its surroundings, because any variation δN_j below (above) the steady-state occupancy N_j^ss will reintroduce ΔV_jk < 0 (> 0), which will drive the system back to the stationary state by invoking a returning flow ΔN_j > 0 (< 0). Explicitly, the maximum-entropy system is Lyapunov stable [27, 33] according to the conditions δln P(δN_j) < 0 and δL(δN_j) > 0 that are available from (5) and (6). The dynamic steady state is maintained by frequent to-and-fro flows between the system's constituents and the surroundings. These nondissipative processes do not amount to any change in P.

In general the trajectories of natural processes cannot be solved analytically, because the flows ΔN_j and the forces ΔV_jk are inseparable in L (5) at any j-node where the cardinality of {j, k} ≥ 3. Nonetheless, the inherently intractable trajectories can be mapped by simulations where T, ΔV_jk, and N_j are updated after each change of state. The occupancies N_j keep changing due to the changing driving forces ΔV_jk that, in turn, are affected by the changes ΔN_j. In terms of physics, the non-Hamiltonian system is without invariants of motion, and Liouville's theorem is not satisfied because the open dissipative system is subject to an influx (efflux) from (to) its surroundings. The nonconserved gradient system is without norm; thus the evolving (cf. Bayesian) distribution of probabilities P_j cannot be normalized. The dissipative equation of motion ΔP/Δt = LP for the class of irreversible processes cannot be integrated in a closed form or transformed to a time-independent frame [14] to obtain a solution efficiently.

According to the maximum entropy production principle [54–66], energy differences are reduced most effectively when entropy increases most rapidly, that is, when the most voluminous currents direct along the steepest paths of free energy. However, when choosing at every instance the particular descent that appears steepest, there is no guarantee that the optimal overall path will be found, because the transformations themselves will affect the future states between the initial instance and the final acceptance. To be sure about the optimal trajectory takes time, that is, dissipation [14], because a deterministic algorithmic execution of a class NP problem will have to address by conceivable transformations the entire power set of states, one member for each distinct path of energy dispersal.
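A tiny worked example (with distances invented to make the point) illustrates why following the locally steepest descent need not find the globally optimal path: a nearest-neighbour round trip commits to cheap first moves that force an expensive closing leg.

```python
import itertools

# Invented distances: the nearest-neighbour heuristic is deliberately trapped.
D = {('A', 'B'): 1.0, ('B', 'C'): 1.0, ('C', 'D'): 10.0,
     ('A', 'D'): 1.5, ('A', 'C'): 2.0, ('B', 'D'): 2.0}

def d(a, b):
    return D[(a, b)] if (a, b) in D else D[(b, a)]

def tour_cost(tour):
    return sum(d(a, b) for a, b in zip(tour, tour[1:] + tour[:1]))

def greedy(start='A'):
    """Always take the locally steepest descent: the nearest unvisited city."""
    tour, left = [start], ['B', 'C', 'D']
    while left:
        nxt = min(left, key=lambda c: d(tour[-1], c))
        tour.append(nxt)
        left.remove(nxt)
    return tour

def optimum(start='A'):
    """Exhaustive search over all round trips."""
    return min(([start] + list(p) for p in itertools.permutations('BCD')),
               key=tour_cost)
```

Here the greedy tour A-B-C-D costs 13.5 while the exhaustive optimum costs 6.5: each cheap early decision reshaped the set of remaining options.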

In the special case when the currents are separable from the driving forces, the energy transduction network will remain invariant. In terms of physics the Hamiltonian system has invariants of motion and Liouville’s theorem is satisfied. The deterministic computation as a tractable energy transduction process will solve the problem in question because the dissipative steps are without additional degrees of freedom. The conceivable courses can be integrated (predicted). Hence the solution can be obtained efficiently, for example, by an algorithm that follows the steepest descent and does not waste time in wandering along paths that can be predicted to be futile.

5. Manifold in Motion

Further insight into the distinction between computations in the classes P and NP is obtained when computation as a physical process is described in terms of an evolving energy landscape [67–69]. To this end the discrete differences Δ, which properly denote the transforming forces and quantized flows, are replaced by differentials of continuous variables. A spatial gradient ∂U_jk/∂x_j is a convenient way to relate a density labeled by j at a continuum coordinate x_j with another one labeled by k but displaced by dissipation ∂Q_jk/∂t at x_k [13, 14]. When the j-system at x_j evolves down along the scalar potential gradient ∂U_jk/∂x_j in the field ∂Q_jk/∂x_j, the conservation of energy requires for the transforming current that v_j = dx_j/dt = Σ_k dx_k/dt. The dissipation ∂Q_jk/∂t is an efflux of photons at the speed of light c to the surrounding medium (or vice versa).

The continuum equation of motion corresponding to (5) is obtained from (3) by differentiating and using the chain rule (dP_j/dx_j)(dx_j/dt) [14]

L = −Σ_{j,k} D_j V_jk/k_B T,   (7)

where the directional derivatives D_j = (dx_j/dt)(∂/∂x_j) span an affine manifold [70] of energy densities (Figure 3). The total potential V_jk = U_jk − iQ_jk is decomposed into the orthogonal scalar U_jk and vector Q_jk parts [71]. All distinguishable densities and flows are indexed by j ≠ k. The evolving energy landscape is concisely given by the total change in kinetic energy ∂(2K)/∂t = k_B T L = T ∂S/∂t [13, 14]

∂(2K)/∂t = Σ_{j,k} v_j ∂(m_jk v_k)/∂t = Σ_{j,k} v_j m_jk ∂v_k/∂t + Σ_{j,k} v_j (∂m_jk/∂t) v_k = −Σ_{j,k} v_j ∂U_jk/∂x_j + Σ_{j,k} ∂Q_jk/∂t,   (8)

where the transforming flows with three or more degrees of freedom (n ≥ 3) are indexed as j ≠ k ± 1. Conversely, the flow without additional degrees of freedom (n < 3) is indexed as j = k ± 1. In fact the derivative should be denoted as inexact (đ), because in general the entered state depends on the past path.

Figure 3: The curved energy landscape, covered by triangles, represents the state set of intractable computation. The non-Euclidean manifold is evolving, by the contraction process itself, toward the optimal path of maximal conduction (red arrows) corresponding to the solution. During the contraction, the path with additional degrees of freedom (exemplified at a branching point) from the initial instance (top) toward the final acceptance (bottom) is shortening but remains nonintegrable (unpredictable) due to the dissipation. In contrast, the paths (blue arrows) on the invariant Euclidean plane (grey) do not mold the landscape, and thus they do not have to be followed to their ends but can be integrated (predicted).

The equation for the flows of energy can also be obtained from the familiar Newton's 2nd law [72] for the change in momentum p_jk = m_jk v_k,

∂/∂t Σ_{j,k} p_jk = Σ_{j,k} m_jk a_k + Σ_{j,k} (∂m_jk/∂t) v_k = −Σ_{j,k} ∂V_jk/∂x_j = −Σ_{j,k} ∂U_jk/∂x_j + Σ_{j,k} (∂Q_jk/∂t)(1/v_j),   (9)

from which (8) follows by multiplying with the velocities. The gradient ∂V_jk/∂x_j is again decomposed into the spatial and temporal parts. The sign convention is the same as above, that is, when ∂U_jk/∂x_j < 0, then v_j > 0. Since the momenta are at all times tangential to the manifold, Newton's 2nd law (9) requires that the corresponding flow at any moment,

v_j = −Σ_k (σ_jk/k_B T) ∂V_jk/∂x_j,   (10)

is proportional to the driving force, in accordance with the continuity v_j = Σ_k v_k across the jk-edges between the nodes of the network (4) [12]. The linear relationship in (10), which is reminiscent of the Onsager reciprocal relations [51], is consistent with the previous notion that the densities in energy (the nodes) are sufficiently statistical systems. Otherwise a high current between x_k and x_j would force the underlying conducting system (the jk-edge), parameterized by the coefficient σ_jk, into evolution. In such a case the channel's conductance would depend on the transmitted bits [34].

A particular flow v_j funnels by dissipative transformations down along the steepest descent ∂V_jk/∂x_j, that is, along the shortest path s_jk = ∫ v_j d(m_jk v_k), known as the geodesic [51, 73, 74]. At any given moment the positive definite resistance r_jk = k_B T σ_jk⁻¹ > 0 in (10) is identified with the mass m_jk > 0 that, as the metric tensor, defines the geometry of the free energy landscape [75] (cf. a Lorentzian manifold). Formally s_jk can be denoted as an integral; however, in the general case of the evolving non-Euclidean landscape it cannot be integrated in a closed form [33]. The curved landscape is shrinking (or growing) because the surroundings are draining it by a net efflux (or supplying it with a net influx) of radiation ∂Q_jk/∂t ≠ 0 and/or a material flow ∂U_jk/∂t ≠ 0. When the forces and flows are inseparable in L, the noninvariant landscape is, at any given locus and moment, a result of its evolutionary history. The rate of net emission (or net absorption) declines as the system steps, quantum by quantum, toward the free energy minimum, which is the stationary state in the respective surroundings. Only in the special case, when the forces and flows are separable, can the trajectories be integrated in a closed form.

Finally, when all density differences have vanished, the manifold has flattened to the stationary state (dS/dt = 0). The state space has contracted to a single stationary state where L = 0. In agreement with Noether's theorem, the currents are conserved and tractable throughout the invariant manifold. Also, in accordance with Poincaré's recurrence theorem, the steady-state reversible dynamics are exclusively on bound and (statistically) predictable orbits. Moreover, the conserved currents, that is, ∂m_jk/∂t = 0, bring about no net changes in the total energy content of the system. Hence (9) reduces to

∂(2K)/∂t = Σ_{j,k} v_j m_jk ∂v_k/∂t = −Σ_{j,k} v_j ∂U_jk/∂x_j,   (11)

which implies, in accordance with the virial theorem, that the components of kinetic energy 2K match the components of potential U everywhere.

According to the geometric description of computational processes, the flattening (evolving) non-Euclidean landscape represents the state space of class NP computation, whereas the flat Euclidean manifold represents the state space of class P computation. The geodesics that span the class NP landscape are arcs, whereas those that span the class P manifold are straight lines. According to (8), the class NP state space is, due to its three or more degrees of freedom (n ≥ 3), larger in dissipation by the terms Σ v_j d(m_jk v_k) > 0 indexed with j ≠ k ± 1 than the class P state space without additional degrees of freedom (n < 3), whose dissipation is given by the terms v_j d(m_jk v_k) > 0 indexed with j = k ± 1. In other words, class NP is larger than P because the curved manifold cannot be embedded in the plane. The measure ln P_NP of the non-Euclidean landscape is simply larger, by the degrees of freedom (n ≥ 3) in dissipation, than the measure ln P_P of the Euclidean manifold.

The argument for the failure to map the larger 𝑁𝑃 manifold one-to-one onto the smaller 𝑃 manifold is familiar from the pigeonhole principle (PHP), here applied to the manifold measures ln P_NP > ln P_P. The quanta that are dissipated during evolution from diverse density loci of the curved, evolving 𝑁𝑃 landscape are not mapped anywhere on the flat, invariant 𝑃 landscape. Thus it is concluded that 𝑃 is a proper subset of 𝑁𝑃.

6. Intractability in the Degrees of Freedom

The transduction path between two nodes can be represented by only one edge, hence there are k = n − 1 interdependent currents (4) between the n densities [27]. The degrees of freedom are less than n by 1 because it takes at least two densities to have a difference. In the general case n ≥ 3, there are alternative paths for the currents from the initial state via alternative states toward the accepting state. The intractable evolutionary courses are familiar from the n-body (n ≥ 3) problems [76, 77]. Accordingly, the satisfiability problem of a Boolean expression (n-SAT) belongs to class 𝑁𝑃 when there are three or more literals (n ≥ 3) per clause [30]. In the special case n = 2, the energy dispersal process is deterministic as there are no alternative dissipative paths for the current. When only one path is conducting, the problem of maximal conduction is 1-separable and tractable. The two-body problem does not present a challenge. Accordingly, 2-SAT is deterministic and 1-SAT is trivial, essentially only a statement.
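The divide between 2-SAT and 3-SAT can be made concrete in code. The sketch below is the standard implication-graph construction (in the spirit of Aspvall, Plass, and Tarjan), not anything taken from the article: because each two-literal clause yields two fixed implications, the state space never branches and satisfiability reduces to a polynomial-time strongly-connected-components check. The function name `two_sat` and the integer literal encoding are illustrative choices.

```python
def two_sat(n_vars, clauses):
    """Decide satisfiability of a 2-CNF formula in polynomial time.

    Literals are integers: +i for x_i, -i for NOT x_i (1-based).
    Each clause (a, b) means (a OR b), which yields the implications
    NOT a -> b and NOT b -> a in the implication graph.
    """
    idx = lambda lit: 2 * (abs(lit) - 1) + (lit < 0)   # node index of a literal
    n_nodes = 2 * n_vars
    adj = [[] for _ in range(n_nodes)]     # implication graph
    radj = [[] for _ in range(n_nodes)]    # reversed edges for Kosaraju
    for a, b in clauses:
        adj[idx(-a)].append(idx(b)); radj[idx(b)].append(idx(-a))
        adj[idx(-b)].append(idx(a)); radj[idx(a)].append(idx(-b))

    # Kosaraju's algorithm, pass 1: order nodes by finish time.
    seen, order = [False] * n_nodes, []
    for u in range(n_nodes):
        if seen[u]:
            continue
        stack, seen[u] = [(u, iter(adj[u]))], True
        while stack:
            v, it = stack[-1]
            for w in it:
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, iter(adj[w])))
                    break
            else:
                order.append(v)
                stack.pop()

    # Pass 2: label strongly connected components on the reversed graph.
    comp, c = [-1] * n_nodes, 0
    for u in reversed(order):
        if comp[u] == -1:
            stack, comp[u] = [u], c
            while stack:
                v = stack.pop()
                for w in radj[v]:
                    if comp[w] == -1:
                        comp[w] = c
                        stack.append(w)
            c += 1

    # Unsatisfiable iff some variable shares a component with its negation.
    return all(comp[2 * i] != comp[2 * i + 1] for i in range(n_vars))
```

With three literals per clause the implications are no longer fixed, and no comparable reduction is known; the search must branch.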

For example, the problem of maximizing the shortest path by two or more interdicts (k ≥ 2) is intractable. When the first interdict is placed, flows will be redirected and, in turn, affect the decision to place the second interdict. Similarly the search history of the traveling salesman for the optimal round-trip path is intractable. A decision to visit a particular city will narrow irreversibly the available state space by excluding that city from the subsequent choices. Thus, at any particular node one cannot consider decisions as if not knowing the specific search history that led to that node. When each decision will open a new set of future decisions, the computational state space of class 𝑁𝑃 is a tedious power set of deterministic decisions. On the other hand, when optimizing the shortest path, a choice for a particular path will not affect, in any way, the future explorations of other paths. At any particular node one may consider decisions irrespective of the search history. In the deterministic case it is not necessary to explore all conceivable choices because the trajectories are tractable (predictable). Likewise, the problem of maximizing the shortest path by a single interdict (k = 1) can be solved efficiently. Any particular decision to place the interdict does not affect future decisions because there are no more interdicts to be placed. When the state space is not affected by the problem-solving process itself, at most a polynomial array of invariant circuits, that is, deterministic finite automata, will compute class 𝑃 problems.
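The contrast drawn above can be sketched in code. In the shortest-path search a settled choice never needs revisiting, so a greedy deterministic pass suffices; in the round-trip search every visiting order must be enumerated, because each decision removes a city from all subsequent choices. The toy graph, its weights, and the function names are illustrative assumptions.

```python
import heapq
from itertools import permutations

# A toy weighted graph over four "cities" (all weights are assumed costs).
GRAPH = {
    "A": {"B": 2, "C": 9, "D": 10},
    "B": {"A": 2, "C": 6, "D": 4},
    "C": {"A": 9, "B": 6, "D": 3},
    "D": {"A": 10, "B": 4, "C": 3},
}

def shortest_path_cost(graph, src, dst):
    """Dijkstra: greedy choices never need revision (class P behavior)."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")

def tsp_cost(graph, start):
    """Brute force: every visiting order must be explored because each
    decision excludes a city from all subsequent choices."""
    others = [c for c in graph if c != start]
    best = float("inf")
    for order in permutations(others):    # (n-1)! candidate tours
        tour = (start, *order, start)
        cost = sum(graph[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, cost)
    return best
```

The shortest-path routine touches each node a bounded number of times, while the tour routine visits all (n − 1)! orderings, mirroring the polynomial versus super-polynomial divide discussed above.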

The 𝑃 versus 𝑁𝑃 question is not only fundamental but also practical, for no computational machinery exists without a physical representation. A particular input instance is imposed on the computational circuit by the surroundings and a particular output is accepted as a solution by the surroundings. The communication between the automaton and its surroundings relates to information processing, which was understood already early on to be equivalent to the (impedance) matching of circuits for optimal energy transmission [78, 79]. When the matching of a circuit will affect the matching of two or more connected circuits, the total matching of the interdependent circuits for the optimal overall transduction is intractable. Although in practice the iterative process may converge rapidly in a nondeterministic manner, the conceivable set of circuit states is a power set of the tuning operations. Conversely, when the matching does not involve additional degrees of freedom, the tuning for optimal transduction is tractable.

In summary, the class 𝑁𝑃 problem-solving process is inherently nondeterministic because the contraction process will itself affect the set of future states accessible from a particular instance. The course toward acceptance cannot be accelerated by prediction but the state space must be explored. On the other hand when dissipative steps between the input and output operations have no additional degrees of freedom, the search for the class 𝑃 problem solution will itself not affect the accessible set of states at any instance. The invariant state set can be contracted efficiently by predicting rather than exploring all conceivable paths. Therefore, the completion time of the class 𝑃 deterministic computation is shorter than that of 𝑁𝑃. Thus it is concluded that 𝑃 is a proper subset of 𝑁𝑃.

7. State Spaces of Automata

The computational complexity classification into 𝑃 and 𝑁𝑃 by the differing degrees of freedom in dissipation relates to the algorithmic execution times, which are proportional to circuit sizes. A Boolean circuit that simulates a Turing machine is commonly represented as a (directed, acyclic) graph structure of a tree with the assignments of gates (functions) to its vertices (nodes) (Figure 2).

The class 𝑁𝑃 problems are represented by circuits where forces (voltages) are inseparable from currents. Since there are no invariants of motion, the ceteris paribus assumption does not hold when solving the class 𝑁𝑃 problems [80]. Consistently, no deterministic algorithms are available for the class of nonconserved flow problems; instead, for example, brute-force optimization, simulated annealing, and dynamic programming are employed [81].

The class 𝑁𝑃 problems can be considered to be computed by a nondeterministic Turing machine (NTM). For each pair of state and input symbol there may be several possible states to be accessed by a subsequent transition. The NTM 5-tuple (Φ, Δ, Λ, φ_1, φ_ss) consists of a finite set of states Φ, a finite set of input symbols Δ including the blank, an initial state φ_1 ∈ Φ, a set of accepting (stationary) states φ_ss ⊂ Φ, and a transition function Λ: Φ × Δ → P(Φ × Δ × {R, L}), where L denotes a left and R a right shift of the input tape. Since a Turing machine has an unlimited amount of storage space for computations, and eventually an infinite input as well, such a machine cannot be realized. Therefore it is more motivated to consider computational complexity by the physical principle in the context of a finite state machine, without compromising the conclusions. For example, a read-only, right-moving Turing machine is equivalent to a nondeterministic finite automaton (NFA), where for each pair of state and input symbol there may be several possible states to be accessed by a subsequent transition. The NFA 5-tuple (Φ, Δ, Λ, φ_1, φ_ss) consists of a finite set of states Φ, a finite set of input symbols Δ, an initial state φ_1 ∈ Φ, a set of accepting (stationary) states φ_ss ⊂ Φ, and a transition function Λ: Φ × Δ → P(Φ), where P(Φ) denotes the power set of Φ. A circuit for the nondeterministic computation can also be constructed from an array of deterministic finite automata (DFA). Each DFA is a finite state machine where for each pair of state and input symbol there is one and only one transition to the next state. The DFA 5-tuple (Φ, Δ, Λ, φ_1, φ_ss) consists of a finite set of states Φ, a finite alphabet Δ, an initial state φ_1 ∈ Φ, a set of accepting states φ_ss ⊂ Φ, and a transition function Λ: Φ × Δ → Φ.
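A minimal sketch of the textbook subset construction makes the relation between the NFA tuple and an equivalent deterministic machine concrete: each reachable DFA state is an element of the power set P(Φ). The example NFA is a standard one, not taken from the article: it accepts binary strings whose third symbol from the end is "1", for which any equivalent DFA needs 2^3 states.

```python
def nfa_to_dfa(states, alphabet, delta, start, accepting):
    """Subset construction: the transition relation
    Lambda: Phi x Delta -> P(Phi) is contracted into a deterministic
    machine whose states are subsets of Phi (up to 2**len(states))."""
    start_set = frozenset([start])
    seen = {start_set}          # reachable DFA states (subsets of Phi)
    todo = [start_set]
    dfa_delta = {}
    while todo:
        S = todo.pop()
        for a in alphabet:
            # Union of all NFA successors of the states in S on symbol a.
            T = frozenset(q for s in S for q in delta.get((s, a), ()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    dfa_accepting = {S for S in seen if S & accepting}
    return seen, dfa_delta, start_set, dfa_accepting

# Classic blow-up example (illustrative): "third symbol from the end is 1".
NFA_DELTA = {
    ("q0", "0"): {"q0"}, ("q0", "1"): {"q0", "q1"},
    ("q1", "0"): {"q2"}, ("q1", "1"): {"q2"},
    ("q2", "0"): {"q3"}, ("q2", "1"): {"q3"},
}
dfa_states, _, _, _ = nfa_to_dfa(
    {"q0", "q1", "q2", "q3"}, "01", NFA_DELTA, "q0", {"q3"})
```

For this 4-state NFA the construction yields 8 reachable subset states; in the worst case the blow-up reaches 2^|Φ|, the power-set growth invoked in the text.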

In the general case when the forces are inseparable from the flows, the execution time by the DFA array grows super-polynomially as a function of the input length n, for example, as O(N^n). For example, when maximizing the shortest path by interdicts (k ≥ 2), any two alternative choices will give rise to two circuits that differ from each other as much as the currents of the two DFAs differ from each other. These two sets are nonequivalent due to the difference in dissipation, and one cannot be reduced to the other. Accordingly, the circuit for the NFA is adequately constructed from the entire power set of distinct DFAs to cover the entire conceivable set of states of the nondeterministic computation (Figure 4). The union of DFAs is nonreducible; that is, each DFA is distinguished from all other DFAs by its distinct transition function.

Figure 4: A circuit (O) containing nodes with degrees of freedom (n ≥ 3) represents an NFA. The computation steps from one state to another when currents are driven from the input instance (top) down along alternative but interdependent paths toward the output acceptance (bottom). Since the currents affect each other by affecting the driving forces, the circuit corresponds to the NFA having a power set of states. It can be decomposed into the distinct circuits (A–E), one member for each conceivable current without additional degrees of freedom, representing an array of DFAs each having at most a polynomial set of states.

The class 𝑃 problems are represented by circuits where forces are separable from currents. When the proposed questions do not depend on previous decisions (answers), the problem can be computed efficiently by a DFA. Consistently, in the class 𝑃 of flow conservation problems many deterministic methods deliver the solution corresponding to the maximum flow in polynomial time. For example, during the search for the maximally conducting path through the network, currents will disperse from the input node k to diverse alternative nodes l, but only the flow along the steepest descent will arrive at the output node j and establish the only and most voluminous flow. The other paths of energy dispersal will terminate at dead ends and will not contribute to or affect the maximum flow at all. Importantly, on an invariant landscape these inferior paths do not have to be followed to their very ends, as is exemplified by Dijkstra's algorithm [82]. The search terminates at the accepting state whereas other paths end up at nil states. These particular sequences of states have "died." The shortest path problem can be presented by a single DFA because the nonaccepting dead states that keep going to themselves belong to ∅, the empty set of states. However, as has been accurately pointed out [2], technically this automaton is a nondeterministic finite automaton, which reflects the understanding that the single flow without additional degrees of freedom (n = 2) is the special deterministic subclass of the generally (n ≥ 3) nondeterministic class. Likewise, the special case of maximizing the shortest path by a single interdict (k = 1) is deterministic, in contrast to the general case of two or more interdicts (k ≥ 2). The special 1-separable problem can be represented by a linear set of distinct circuits, in contrast to the general inseparable problem that requires a power set of distinct circuits.
Accordingly, the automaton for the special cases of deterministic problems is adequately constructed from at most a polynomial set of distinct DFAs, and the corresponding deterministic computation is completed in polynomial time.
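The tractable special case of a single interdict (k = 1) can be sketched directly: each candidate edge removal is an independent trial, so polynomially many shortest-path computations suffice. The toy network and the helper names `dijkstra` and `best_single_interdict` are illustrative assumptions, not the article's construction.

```python
import heapq

# A small undirected network (dict of dicts); weights are assumed costs.
NETWORK = {
    "s": {"a": 1, "b": 4},
    "a": {"s": 1, "b": 2, "t": 5},
    "b": {"s": 4, "a": 2, "t": 1},
    "t": {"a": 5, "b": 1},
}

def dijkstra(graph, src, dst):
    """Plain Dijkstra shortest-path cost; inf when dst is unreachable."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")

def best_single_interdict(graph, src, dst):
    """k = 1 interdiction: remove each edge in turn and keep the removal
    that maximizes the remaining shortest path. The trials are
    independent, since no second interdict reacts to the first."""
    edges = {(min(u, v), max(u, v)) for u in graph for v in graph[u]}
    best_edge, best_cost = None, -1
    for u, v in sorted(edges):
        # Copy the graph with both directions of the edge (u, v) removed.
        pruned = {a: {b: w for b, w in nbrs.items() if {a, b} != {u, v}}
                  for a, nbrs in graph.items()}
        cost = dijkstra(pruned, src, dst)
        if cost > best_cost:
            best_edge, best_cost = (u, v), cost
    return best_edge, best_cost
```

With k ≥ 2 the trials are no longer independent: the best second removal depends on which removal was made first, and the enumeration branches into the power set described in the text.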

Since the varying state space of class 𝑁𝑃 is larger, due to its additional degrees of freedom, than the invariant state space of class 𝑃, it is concluded that 𝑃 is a proper subset of 𝑁𝑃.

8. The Measures of States

To measure the difference between the classes 𝑃 and 𝑁𝑃, the thermodynamic formalism of computation will be transcribed into mathematical notation [52]. Consistently with the reasoning presented in Sections 2–7, the computational complexity class 𝑃 will be distinguished from 𝑁𝑃 by measuring the difference in dissipative computation due to the difference in degrees of freedom. Moreover, since the computation does not advance by nondissipative (reversible) transitions, these exchanges of quanta do not affect the measure.

To maintain a connection to practicalities, it is worth noting that tractable problems are often idealizations of intractable natural processes. For example, when determining the shortest path for a long-haul trucker to take through a network of cities to the destination, it is implicitly assumed that, when the computed optimal path is actually taken, the traffic itself will not congest the flow and cause a need for rerouting, that is, for finding a new best possible route under the changed circumstances.

The state space of a finite energy system is represented by elements φ of the set Φ [52]. Transformations from one state to another are represented by elements λ, referred to as process generators, of the set Λ. The computation is a series of transformations along a piecewise continuous path s(λ, φ) in the state space. According to the 2nd law the paths of energy dispersal that span the affine manifold M are shortening until the free energy minimum state has been attained. Then the state space has contracted during the transformation process to the accepting state.

Definition 1. A system is a pair (Φ, Λ), with Φ a set whose elements φ are called states and Λ a set whose elements λ are called process generators, together with two functions. The function λ ↦ S_λ assigns to each λ a transformation S_λ, whose domain D(λ) and range R(λ) are nonempty subsets of Φ, such that for each φ in Φ the condition of accessibility holds:
(i) Λφ = {S_λ φ : λ ∈ Λ, φ ∈ D(λ)} = Φ, (12)
where Λφ is the entire set of states accessible from φ, with the assertion that, for every state φ, Λφ equals the entire state space Φ. Furthermore, the function (λ′, λ) ↦ λ′λ assigns to each pair (λ′, λ) the (extended) process generator λ′λ for the successive application of λ and λ′, with the property:
(ii) if D(λ′) ∩ R(λ) ≠ ∅, then D(λ′λ) = S_λ^−1(D(λ′)) and, for each φ in D(λ′λ), there holds S_λ′λ φ = S_λ′ S_λ φ, provided that for any other λ″, D(λ″) ∩ D(λ′) = ∅.
The extended process generators λ′λ formalize the successive transformations with less than three degrees of freedom. When the transformation S_λ is emissive, its inverse S_λ^−1 is absorptive.

Definition 2. A process of (Φ, Λ) is a pair (λ, φ) such that φ ∈ D(λ). The process generators transform the system from an initial state via intermediate states to the final state. The set of all processes of (Φ, Λ) is
ΛΦ = {(λ, φ) : λ ∈ Λ, φ ∈ D(λ)}. (13)
According to Definitions 1 and 2 the states and process generators are interdependent (Figure 5) so that: (i) when the system has transformed from the state φ to the state S_λ φ, the process generator λ has vanished; (ii) when the system has transformed from φ to S_λ φ, the system is no longer available at φ for another transformation S_λ′ by another process generator λ′ to S_λ′ φ; (iii) when the system has transformed from the initial state φ to an intermediate state S_λ φ, and subsequently from S_λ φ to S_λ′ S_λ φ, the final state S_λ′ S_λ φ is identical to the state resulting from the extended transformation from φ to S_λ′λ φ only when S_λ φ is not in the domain D(λ″) of any other transformation S_λ″.

Figure 5: (a) The system evolves, according to Definitions 1 and 2, from an initial state (top) to other states by a sequence of transformations S (arrows) that are directional, that is, dissipative, due to the distinct domains D and ranges R of the distinct elements λ of the process generators. (b) The successive transformations S_λ and S_λ′ can be reduced to S_λ′λ only when the intermediate state cannot be transformed by any other process generator λ″.

Definition 3 (see [52]). Let t > 0 and let λ_t : [0, t) → P be piecewise continuous, and define D(λ_t) to be the set of states φ = (N, G) ∈ Φ such that the differential equation
(dN(τ)/dt, dG(τ)/dt) = λ_t(τ) (14)
has a solution τ ↦ (N(τ), G(τ)) that satisfies the initial condition (N(0), G(0)) = φ and follows a trajectory {(N(τ), G(τ)) : τ ∈ [0, t]} that lies entirely in Φ. In other words, φ ∈ D(λ_t) if and only if φ + ∫_0^τ λ_t(ξ) dξ is in Φ for every τ ∈ [0, t].
When (14) is compared with (5), λ_t is understood, in the continuum limit, to generate a transformation from the initial density φ = (N(0), G(0)) (cf. the definition of energy density in Section 3) to a succeeding density φ_τ = (N(τ), G(τ)) during a step τ ∈ [0, t] via the flow v = dN/dt that consumes the free energy.

Definition 4 (see [52]). Define Λ to be the set of functions λ_t for which D(λ_t) ≠ ∅. For each λ_t ∈ Λ, define S_λt : D(λ_t) → Φ by the formula
S_λt φ = φ + ∫_0^t λ_t(ξ) dξ. (15)
If s(λ_t, φ) denotes the path determined by τ ↦ φ + ∫_0^τ λ_t(ξ) dξ, τ ∈ [0, t], then S_λt φ is taken to be the final point of s(λ_t, φ). Moreover, φ ∈ D(λ_t) ⇒ s(λ_t, φ) ⊂ Φ.
The step of evolution along the oriented and piecewise smooth curve from φ to S_λt φ is the path s(λ_t, φ) ⊂ Φ determined by the formal integration from 0 to τ in (15). In the general case of dissipative transformations with degrees of freedom (n ≥ 3), the integral does not close. An open system spirals along an open trajectory, either losing quanta to its surroundings or acquiring them. Consequently the state space φ ∈ D(λ_t) is contracting by successive applications of λ_t and λ′_t that diminish the free energy almost everywhere, such that R(λ_t) ⊂ D(λ_t). The dissipation ceases only at the free energy minimum state, where the orbits are closed and the domain and range are indistinguishable for any process.

Definition 5 (see [53]). After a series of successive applications of λ_t and λ′_t the evolving system arrives at the free energy minimum. Then the open system is in a dynamic state, defined as the ε-steady state by a fixed nonzero set ε = {ε_S}, if and only if, for all S ∈ Φ, there exists ζ_S ∈ P such that for all τ ∈ [0, t]
|S_τ − ζ_S| ≤ ε_S. (16)
At the ε-steady state there is no net flux over the period of integration τ ∈ [0, t]. Thus the probability P may fluctuate due to sporadic influx and efflux, but the fluctuation may not exceed ε_S, so that the system continues to reside within ε. The set value ε_S defines the acceptable state of computation; otherwise, in the continuum limit ε → 0, the state space would contract indefinitely. In practice the state space sampling by brute-force algorithms or simulated annealing methods is limited by ε_S, for example, according to the available computational resources.

Definition 6 (see [83]). A family Σ of subsets of the state space Φ is an algebra if it has the following properties: (i) Φ ∈ Σ; (ii) Φ_0 ∈ Σ ⇒ Φ_0^c ∈ Σ; (iii) {Φ_i}_{i ∈ [1,k]} ⊂ Σ ⇒ ∪_{i=1}^k Φ_i ∈ Σ. From these it follows that: (i) ∅ ∈ Σ; (ii) the algebra Σ is closed under countable intersections and subtraction of sets; (iii) if k → ∞, then Σ is said to be a sigma-algebra.

Definition 7 (see [83]). A function μ_C : Σ → [0, ∞) is a measure if it is additive for any countable subfamily {Φ_i, i ∈ [1, n]} ⊂ Σ consisting of mutually disjoint sets, such that
μ_C(∪_{i=1}^n Φ_i) = Σ_{i=1}^n μ_C(Φ_i). (17)
It follows that: (i) μ_C(∅) = 0; (ii) if Φ_α, Φ_β ∈ Σ and Φ_α ⊂ Φ_β, then μ_C(Φ_α) ≤ μ_C(Φ_β); (iii) if Φ_1 ⊂ Φ_2 ⊂ ⋯ ⊂ Φ_n with {Φ_i, i ∈ [1, n]} ⊂ Σ, then μ_C(∪_{i=1}^n Φ_i) = sup_i μ_C(Φ_i). Moreover, if Σ is a sigma-algebra and n → ∞, then μ_C is said to be sigma-additive. The triple (Φ, Σ, μ_C) is a measure space.

Definition 8 (see [52]). An energy density manifold is a set M whose elements φ are called energy densities, together with a set Σ of functions μ_i : M → P called energy scales, satisfying: (i) the range of μ is an open interval for each μ_i ∈ Σ; (ii) for every φ_A, φ_B ∈ M and μ ∈ Σ, μ(φ_A) = μ(φ_B) ⇒ φ_A = φ_B; (iii) for every μ_A, μ_B ∈ Σ, θ ↦ μ_B(μ_A^−1(θ)) is a continuous, strictly increasing function. Property (i) asserts that each energy scale takes on all values in an open interval in P, while (ii) guarantees that each such scale establishes a one-to-one correspondence between energy levels and real numbers in its range. By means of (iii) the set Σ determines an order relation on M, written as
φ_A ≺ φ_B ⇔ there exists μ ∈ Σ such that μ(φ_A) < μ(φ_B). (18)
Physically speaking, the energy densities are in relation to each other on the energy scale given in the units of θ = k_B T.

Definition 9. Entropy is defined as
S = Σ_j k_B ln P_j = Σ_j k_B N_j (1 − Σ_k ΔV_jk/k_B T), (19)
where the absolute temperature T > 0 and Boltzmann's constant k_B > 0, in accordance with (3).

Definition 10. The change in occupancy N_j is defined to be proportional to the free energy,
ΔN_j = −Σ_k σ_jk ΔV_jk/k_B T, (20)
in accordance with (4).

Theorem 11 (the principle of increasing entropy). The condition for the stationary state of the open system is that its entropy has reached the maximum.

Proof. From Definitions 9 and 10 and Δμ_jk(ΔN_j)/k_B T = ΔN_j/N_j, it follows that
ΔS = k_B L = −k_B Σ_j ΔN_j Σ_k ΔV_jk/k_B T = k_B Σ_{j,k} σ_jk^−1 (ΔN_j)^2 = k_B Σ_{j,k} σ_jk (ΔV_jk/k_B T)^2 ≥ 0, (21)
because the squares are nonnegative, the conductance σ_jk > 0 as well as its inverse, that is, the resistance σ_jk^−1 = m_jk/k_B T > 0, and k_B > 0.
The proof is in agreement with ΔS = k_B Δln P = k_B L ≥ 0 given by (5). The principle of increasing entropy has been proven alternatively by variations δ using the principle of least action, δA = δ∫_0^t 2K dt = δ∫_0^t T (dS/dt) dt ≥ 0 [53], where the Lagrangian integrand (kinetic energy), defined by the Gouy-Stodola theorem, is necessarily positive.
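A small numeric check of the proof can be run under the simplifying assumption that each flow between a pair (j, k) is treated independently, in natural units k_B = T = 1 (both choices are illustrative, not from the article). For arbitrary positive conductances and arbitrary free-energy differences, the flux form and the quadratic form of (21) then agree and are non-negative.

```python
import random

random.seed(1)
kB = T = 1.0   # natural units, an illustrative assumption
n = 4          # number of density indices

# Positive conductances sigma_jk and arbitrary free-energy differences
# dV_jk for each ordered pair (j, k), treated here as independent flows.
sigma = {(j, k): random.uniform(0.1, 2.0)
         for j in range(n) for k in range(n) if j != k}
dV = {jk: random.uniform(-1.0, 1.0) for jk in sigma}

# Definition 10, applied per flow: dN_jk = -sigma_jk * dV_jk / (kB*T).
dN = {jk: -sigma[jk] * dV[jk] / (kB * T) for jk in sigma}

# Theorem 11: the flux form -kB * sum dN_jk * dV_jk/(kB*T) equals the
# manifestly non-negative quadratic form kB * sum sigma_jk*(dV_jk/(kB*T))**2.
dS_flux = -kB * sum(dN[jk] * dV[jk] / (kB * T) for jk in sigma)
dS_quad = kB * sum(sigma[jk] * (dV[jk] / (kB * T)) ** 2 for jk in sigma)
```

Whatever the signs of the free-energy differences, the entropy change comes out non-negative, which is the content of the squares argument in the proof.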

Theorem 12. The state space Φ contracts in dissipative transformations.

Proof. As a consequence of Definition 10 and Theorem 11 it follows that
Δ(ΔS) = k_B ΔL = 2 Σ_j (ΔN_j/N_j) Σ_k σ_jk ΔV_jk/T = −2 k_B Σ_j (ΔN_j)^2/N_j ≤ 0, (22)
because the squares are nonnegative, the occupancies N_j > 0 for nonzero densities-in-energy, the conductance σ_jk > 0, T > 0, and k_B > 0.
When entropy S is increasing, the state space accessible by the process generator L is decreasing. In the continuum limit the theorem of contraction has been proven earlier [53]. In practice the contraction of the state space by a finite automaton is limited to a fixed nonzero set ε = {ε_S}. Then any member of ε qualifies as a solution.
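The contraction can also be watched in a toy relaxation of a two-density system, the tractable n = 2 case, driven by a flow proportional to its free-energy difference. The logarithmic potential dV = ln(N1/N2) and all numeric values are illustrative assumptions; the run terminates at the ε-steady state of Definition 5, with the driving force shrinking monotonically.

```python
import math

# Two occupancies far from equilibrium; sigma is the conductance and eps
# the epsilon-steady-state tolerance (all values illustrative).
N1, N2 = 100.0, 1.0
sigma, eps = 0.2, 1e-6

gaps = []                          # record |dV| after each step
while True:
    dV = math.log(N1 / N2)         # assumed logarithmic potential difference
    gaps.append(abs(dV))
    if abs(dV) < eps:              # inside epsilon: accept (Definition 5)
        break
    flow = sigma * dV              # Definition 10: the flow follows the force
    N1 -= flow                     # quanta move from the higher density...
    N2 += flow                     # ...to the lower one; the total is conserved
```

Each pass shrinks the remaining free energy, so the set of states still accessible contracts step by step until the two densities are indistinguishable within ε.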

Definition 13. The definition of the class 𝑃 state space measure μ_P follows from Definitions 7 and 9:
μ_P = ln P_P = Σ_{j=1}^n N_j (1 − Σ_{k≠j}^n Δμ_jk/k_B T) + Σ_{j=1}^n N_j Σ_{k=j±1}^n ΔQ_jk/k_B T. (23)
The nondissipative (reversible) and dissipative (irreversible) components have been denoted separately. In fact, the indexing k ≠ j is redundant because for the indistinguishable sets k = j there is no difference; per definition Δμ_jj = 0. The conserved term Σ_j N_j (1 − Σ_{k≠j} Δμ_jk/k_B T) is invariant according to Noether's theorem [31, 32]. The nonzero dissipative term Σ_j N_j Σ_{k=j±1} ΔQ_jk defines class 𝑃 to contain at least one irreversible deterministic decision with two degrees of freedom (n = 2).

Definition 14. The definition of the class 𝑁𝑃 state space measure μ_NP follows from Definitions 7 and 9:
μ_NP = ln P_NP = Σ_{j=1}^n N_j (1 − Σ_{k≠j}^n Δμ_jk/k_B T) + Σ_{j=1}^n N_j Σ_{k=j±1}^n ΔQ_jk/k_B T + Σ_{j=1}^n N_j Σ_{k≠j±1}^n ΔQ_jk/k_B T. (24)
The conserved components have been denoted separately from the dissipative components, which have been decomposed further into those with two degrees of freedom, using the indexing notation k = j ± 1, and those with three or more degrees of freedom, using the indexing notation k ≠ j ± 1. The conserved and dissipative components with only two degrees of freedom are the same as those in Definition 13. The nonzero dissipative term Σ_j N_j Σ_{k≠j±1} ΔQ_jk defines class 𝑁𝑃 to contain at least one irreversible decision between at least two choices, that is, with three or more degrees of freedom.

Definition 15. The 𝑁𝑃-complete problem contains only dissipative processes with three or more degrees of freedom, that is, Σ_j N_j Σ_{k≠j±1} ΔQ_jk > 0, and none with two degrees of freedom, Σ_j N_j Σ_{k=j±1} ΔQ_jk = 0.
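A toy evaluation of the two measures, with made-up occupancies and dissipation terms in units where k_B T = 1 (all values are illustrative assumptions), shows how the extra degrees of freedom add a strictly positive term to μ_NP.

```python
# Toy evaluation of the measures in Definitions 13 and 14; all numbers
# are illustrative placeholders in units where kB*T = 1.
N = [3.0, 2.0, 1.0]                    # occupancies N_j (assumed values)
dmu = [[0.0, 0.1, 0.2],                # reversible differences, with
       [0.1, 0.0, 0.1],                # Delta mu_jj = 0 on the diagonal
       [0.2, 0.1, 0.0]]
dQ_det = 0.4                           # dissipation with k = j +/- 1
dQ_extra = 0.3                         # dissipation with k != j +/- 1

# Conserved (reversible) component, common to both measures.
conserved = sum(N[j] * (1 - sum(dmu[j][k] for k in range(3) if k != j))
                for j in range(3))

mu_P = conserved + sum(N[j] * dQ_det for j in range(3))
mu_NP = conserved + sum(N[j] * (dQ_det + dQ_extra) for j in range(3))
```

Whatever the shared conserved and deterministic terms, the difference μ_NP − μ_P reduces to the dissipation indexed by k ≠ j ± 1 alone, which is the quantity appearing in the proof that follows.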

Theorem 16. One has 𝑃 ⊊ 𝑁𝑃.

Proof. It follows from Definitions 13 and 14 that the state space set of class 𝑁𝑃 is larger than that of class 𝑃, as measured by the difference
μ_NP−P = μ_NP − μ_P = Σ_{j=1}^n N_j Σ_{k≠j±1}^n ΔQ_jk/k_B T > 0. (25)
If and only if ΔQ_jk = 0 for all k ≠ j ± 1 would the measure μ_NP−P = 0, but this contradicts Definition 14, by which class 𝑁𝑃 contains at least one irreversible decision with three or more degrees of freedom, that is, Σ_j N_j Σ_{k≠j±1} ΔQ_jk > 0. Thus class 𝑃 is a proper (strict) subset of class 𝑁𝑃.
The difference between the classes can also be measured by Σ P_NP ln(P_NP/P_P) > 0, in accordance with the asymmetric (noncommutative) measure known from Gibbs' inequality, the Kullback-Leibler divergence, which gives the difference between two probability distributions.
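The Kullback–Leibler form of the comparison can be checked numerically: Gibbs' inequality guarantees non-negativity, with zero only for identical distributions, and the measure is asymmetric. The two toy distributions standing in for P_NP and P_P below are assumed values chosen only for illustration.

```python
import math

def kl_divergence(p, q):
    """D(p||q) = sum p_i * ln(p_i/q_i); non-negative by Gibbs' inequality
    and zero only when the distributions coincide (assuming q_i > 0)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy occupancy distributions standing in for P_NP and P_P (assumed values).
p_np = [0.7, 0.1, 0.1, 0.1]
p_p = [0.25, 0.25, 0.25, 0.25]

d_forward = kl_divergence(p_np, p_p)    # D(P_NP || P_P) > 0
d_backward = kl_divergence(p_p, p_np)   # generally != D(P_NP || P_P)
```

The strictly positive forward divergence plays the same role as the measure difference in (25): a nonzero gap between the two distributions.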
The class 𝑁𝑃 problem can be reduced to the class 𝑁𝑃-complete problem by removing the deterministic steps denoted by k = j ± 1, that is, by polynomial-time reduction [30, 84, 85]. In graphical terms the reduction of the 𝑁𝑃 problem to the 𝑁𝑃-complete problem involves the removal of nodes with less than three degrees of freedom (Figure 6). In geometric terms the non-Euclidean landscape is reduced to a manifold covered by nonequivalent triangles, each having a local Lorentzian metric. In summary, the computational complexity classes are related to each other as 𝑃 ∪ 𝑁𝑃-C ⊂ 𝑁𝑃 (Figure 7).

Figure 6: The network representing the class 𝑁𝑃 problem (O) is reduced (O → A → B) to the network representing the class 𝑁𝑃-complete problem by removing nodes along deterministic dissipative paths to yield a network of triangles.
Figure 7: Venn diagram of the computational complexity classes 𝑃, 𝑁𝑃-complete, and 𝑁𝑃 based on the thermodynamic analysis of computation. The class 𝑃 problems can be computed by dissipative processes that have less than three degrees of freedom, whereas the class 𝑁𝑃 problem computation also involves dissipative processes with three or more degrees of freedom. The class 𝑁𝑃-complete problem computation contains only dissipative processes with three or more degrees of freedom.

9. Discussion

At first sight it may appear strange to some that the distinction between the computational complexity classes 𝑃 and 𝑁𝑃 is made on the basis of a natural law, because both classes contain many abstract problems without apparent physical connection. However, the view is not new [86–90]. The adopted approach to classifying computational complexity is motivated because practical computation is a thermodynamic process, hence inevitably subject to the 2nd law of thermodynamics. Of course, some may still argue that the distinction between tractable and intractable problems ought to be proven without any reference to physics. Indeed, the physical portrayal can be regarded merely as a formal notation expressing that computation is a series of time-ordered (i.e., dissipative) operations, which are intractable when there are three or more degrees of freedom among interdependent operations. Noncommutative operations and non-abelian groups also formalize time series [91, 92]. The essential character of nondeterministic problems, irrespective of physical realization, is that decisions affect the set of future decisions; that is, the driving forces of the computation depend on the process itself. The process formulation by the 2nd law of thermodynamics is a natural expression because the free energy and the flow of energy are interdependent.

The natural law may well be an invaluable ingredient in rationalizing the distinction between the computational complexity classes 𝑃 and 𝑁𝑃. It serves not only to prove that 𝑃 ≠ 𝑁𝑃 but also to account for the computational course itself. For both classes of problems the natural process of computation is directed toward increasingly more probable states. When there are three or more degrees of freedom, decisions influence the choice of future decisions and the computation is intractable. The set of conceivable states generated at the branching points can be enormous, similar to a causal Bayesian network [93]. Finally, when the maximum entropy state has been attained, it can be validated, independently of the path, as the free energy minimum stationary state. The corresponding solution is verifiable independently of the computational history, in a deterministic manner, in polynomial time.

Furthermore, the crossing from class 𝑃 to 𝑁𝑃 is found precisely where the n-SAT, n-coloring, and n-clique problems, as well as maximizing the shortest path with interdicts, become intractable, that is, when the degrees of freedom n ≥ 3. The efficient reduction of 𝑁𝑃 problems to 𝑁𝑃-complete problems is also understood as operations that remove the deterministic dissipative steps and eventual redundant reversible paths. Besides, when a problem is beyond class 𝑁𝑃, the natural process does not terminate at the accepting state with emission. For example, the halting problem is 𝑁𝑃-hard. Importantly, the natural law relates computational time directly to the flow of energy, that is, to the amount of dissipation [14]. Thus the 2nd law implies that nondissipative processing protocols are deemed futile [94].

The practical value of classifying computational complexity by the natural law of maximal energy dispersal is the conclusion that no deterministic algorithm can be found that would complete the class 𝑁𝑃 problems in polynomial time. The conclusion is anticipated [95]; nonetheless, its premises imply that there is no all-purpose algorithm to trace the maximal flow paths through noninvariant landscapes. Presumably the most general and efficient algorithms balance execution between exploration of the landscape and progression down along the steepest gradients in time. Perhaps most importantly, the universal law provides holistic understanding of the phenomena themselves, so that questions and computational tasks can be formulated in the most meaningful way.


Acknowledgment

The author is grateful to Mahesh Karnani, Heikki Suhonen, and Alessio Zibellini for valuable corrections and instructive comments.


References

  1. S. A. Cook, "The P vs. NP Problem," Clay Mathematics Institute Millennium Problems.
  2. M. Sipser, Introduction to the Theory of Computation, PWS Publishing, New York, NY, USA, 2001.
  3. D. L. Applegate, R. E. Bixby, V. Chvátal, and W. J. Cook, The Traveling Salesman Problem: A Computational Study, Princeton University Press, Princeton, NJ, USA, 2006.
  4. M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, New York, NY, USA, 1999.
  5. R. Landauer, "Irreversibility and heat generation in the computing process," IBM Journal of Research and Development, vol. 5, pp. 183–191, 1961.
  6. R. Landauer, "Minimal energy requirements in communication," Science, vol. 272, no. 5270, pp. 1914–1918, 1996.
  7. C. H. Bennett, "Notes on Landauer's principle, reversible computation, and Maxwell's Demon," Studies in History and Philosophy of Science Part B, vol. 34, no. 3, pp. 501–510, 2003.
  8. J. Ladyman, "Physics and computation: the status of Landauer's principle," in Proceedings of the 3rd Conference on Computability in Europe (CiE '07), S. B. Cooper, B. Löwe, and A. Sorbi, Eds., vol. 4497 of Lecture Notes in Computer Science, pp. 446–454, Springer, Siena, Italy, June 2007.
  9. S. Carnot, Réflexions sur la puissance motrice du feu et sur les machines propres à développer cette puissance, Bachelier, Paris, France, 1824.
  10. L. Boltzmann, Populäre Schriften, J. A. Barth, Leipzig, Germany, 1905; partially translated in Theoretical Physics and Philosophical Problems, B. McGuinness, Ed., Reidel, Dordrecht, The Netherlands, 1974.
  11. A. S. Eddington, The Nature of the Physical World, Macmillan, New York, NY, USA, 1928.
  12. V. Sharma and A. Annila, "Natural process—natural selection," Biophysical Chemistry, vol. 127, no. 1-2, pp. 123–128, 2007.
  13. V. R. I. Kaila and A. Annila, "Natural selection for least action," Proceedings of the Royal Society A, vol. 464, no. 2099, pp. 3055–3070, 2008.
  14. P. Tuisku, T. K. Pernu, and A. Annila, "In the light of time," Proceedings of the Royal Society A, vol. 465, no. 2104, pp. 1173–1198, 2009.
  15. Z. K. Silagadze, "Citation entropy and research impact estimation," Acta Physica Polonica B, vol. 41, no. 11, pp. 2325–2333, 2010.
  16. S. Jaakkola, V. Sharma, and A. Annila, "Cause of chirality consensus," Current Chemical Biology, vol. 2, no. 2, pp. 153–158, 2008.
  17. S. Jaakkola, S. El-Showk, and A. Annila, "The driving force behind genomic diversity," Biophysical Chemistry, vol. 134, no. 3, pp. 232–238, 2008.
  18. T. Grönholm and A. Annila, "Natural distribution," Mathematical Biosciences, vol. 210, no. 2, pp. 659–667, 2007.
  19. P. Würtz and A. Annila, "Roots of diversity relations," Journal of Biophysics, vol. 2008, Article ID 654672, 8 pages, 2008.
  20. M. Karnani and A. Annila, "Gaia again," BioSystems, vol. 95, no. 1, pp. 82–87, 2009.
  21. A. Annila and E. Kuismanen, "Natural hierarchy emerges from energy dispersal," BioSystems, vol. 95, no. 3, pp. 227–233, 2009.
  22. A. Annila and E. Annila, "Why did life emerge?" International Journal of Astrobiology, vol. 7, no. 3-4, pp. 293–300, 2008.
  23. P. Würtz and A. Annila, "Ecological succession as an energy dispersal process," BioSystems, vol. 100, no. 1, pp. 70–78, 2010.
  24. A. Annila and S. Salthe, "Economies evolve by energy dispersal," Entropy, vol. 11, no. 4, pp. 606–633, 2009.
  25. T. Mäkelä and A. Annila, "Natural patterns of energy dispersal," Physics of Life Reviews, vol. 7, no. 4, pp. 477–498, 2010.
  26. J. Anttila and A. Annila, "Natural games," Physics Letters A, vol. 375, no. 43, pp. 3755–3761, 2011.
  27. D. Kondepudi and I. Prigogine, Modern Thermodynamics, John Wiley & Sons, New York, NY, USA, 1998.
  28. V. Sharma, V. R. I. Kaila, and A. Annila, "Protein folding as an evolutionary process," Physica A, vol. 388, no. 6, pp. 851–862, 2009.
  29. A. S. Fraenkel, "Complexity of protein folding," Bulletin of Mathematical Biology, vol. 55, no. 6, pp. 1199–1210, 1993.
  30. S. A. Cook, "The complexity of theorem proving procedures," in Proceedings of the 3rd Annual ACM Symposium on Theory of Computing (STOC '71), pp. 151–158, 1971.
  31. E. Noether, "Invariante Variationsprobleme," Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse, pp. 235–257, 1918.
  32. M. A. Tavel, “Invariant variation problem,” Transport Theory and Statistical Physics, vol. 1, pp. 183–207, 1971, English translation: E. Noether. View at Google Scholar
  33. S. H. Strogatz, Nonlinear Dynamics and Chaos with Applications to Physics, Biology, Chemistry and Engineering, Westview, Cambridge, Mass, USA, 2000.
  34. M. Karnani, K. Pääkönen, and A. Annila, “The physical character of information,” Proceedings of the Royal Society A, vol. 465, no. 2107, pp. 2155–2175, 2009.
  35. P. W. Atkins and J. de Paula, Physical Chemistry, Oxford University Press, New York, NY, USA, 2006.
  36. C. Darwin, On the Origin of Species, John Murray, London, UK, 1859.
  37. A. Annila and S. Salthe, “Physical foundations of evolutionary theory,” Journal of Non-Equilibrium Thermodynamics, vol. 35, no. 3, pp. 301–321, 2010.
  38. M. Sipser, “History and status of the P versus NP question,” in Proceedings of the 24th Annual ACM Symposium on the Theory of Computing, pp. 603–618, May 1992.
  39. L. Brillouin, Science and Information Theory, Academic Press, New York, NY, USA, 1963.
  40. A. J. Leggett and A. Garg, “Quantum mechanics versus macroscopic realism: is the flux there when nobody looks?” Physical Review Letters, vol. 54, no. 9, pp. 857–860, 1985.
  41. A. Turing, “On computable numbers, with an application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, ser. 2, vol. 42, pp. 230–265, 1936.
  42. R. E. Ladner, “On the structure of polynomial time reducibility,” Journal of the Association for Computing Machinery, vol. 22, no. 1, pp. 155–171, 1975.
  43. S. N. Salthe, Evolving Hierarchical Systems: Their Structure and Representation, Columbia University Press, New York, NY, USA, 1985.
  44. R. P. Feynman and A. R. Hibbs, Quantum Mechanics and Path Integrals, McGraw-Hill, New York, NY, USA, 1965.
  45. R. D. Mattuck, A Guide to Feynman Diagrams in the Many-Body Problem, Dover, New York, NY, USA, 1992.
  46. M. Alonso and E. J. Finn, Fundamental University Physics, vol. 3, Addison-Wesley, Reading, Mass, USA, 1983.
  47. J. W. Gibbs, The Scientific Papers of J. Willard Gibbs, Ox Bow Press, Woodbridge, Conn, USA, 1993-1994.
  48. S. Kullback, Information Theory and Statistics, John Wiley & Sons, New York, NY, USA, 1959.
  49. L. G. Gouy, “Sur l’énergie utilisable,” Journal de Physique, vol. 8, pp. 501–518, 1889.
  50. A. Stodola, Steam and Gas Turbines, McGraw-Hill, New York, NY, USA, 1910.
  51. B. H. Lavenda, Nonequilibrium Statistical Thermodynamics, John Wiley & Sons, New York, NY, USA, 1985.
  52. D. R. Owen, A First Course in the Mathematical Foundations of Thermodynamics, Springer, New York, NY, USA, 1984.
  53. U. Lucia, “Probability, ergodicity, irreversibility and dynamical systems,” Proceedings of the Royal Society A, vol. 464, no. 2093, pp. 1089–1104, 2008.
  54. E. T. Jaynes, “Information theory and statistical mechanics,” Physical Review, vol. 106, no. 4, pp. 620–630, 1957.
  55. H. Ziegler, An Introduction to Thermomechanics, North-Holland, Amsterdam, The Netherlands, 1983.
  56. R. E. Ulanowicz and B. M. Hannon, “Life and the production of entropy,” Proceedings of the Royal Society B, vol. 232, pp. 181–192, 1987.
  57. D. R. Brooks and E. O. Wiley, Evolution as Entropy: Toward a Unified Theory of Biology, University of Chicago Press, Chicago, Ill, USA, 1988.
  58. R. Swenson, “Emergent attractors and the law of maximum entropy production: foundations to a theory of general evolution,” Systems Research, vol. 6, pp. 187–198, 1989.
  59. S. N. Salthe, Development and Evolution: Complexity and Change in Biology, MIT Press, Cambridge, Mass, USA, 1993.
  60. E. D. Schneider and J. J. Kay, “Life as a manifestation of the second law of thermodynamics,” Mathematical and Computer Modelling, vol. 19, no. 6–8, pp. 25–48, 1994.
  61. A. Bejan, Advanced Engineering Thermodynamics, John Wiley & Sons, New York, NY, USA, 1997.
  62. E. J. Chaisson, Cosmic Evolution: The Rise of Complexity in Nature, Harvard University Press, Cambridge, Mass, USA, 2001.
  63. R. D. Lorenz, “Planets, life and the production of entropy,” International Journal of Astrobiology, vol. 1, pp. 3–13, 2002.
  64. R. Dewar, “Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states,” Journal of Physics A, vol. 36, no. 3, pp. 631–641, 2003.
  65. C. H. Lineweaver, “Cosmological and biological reproducibility: limits of the maximum entropy production principle,” in Non-Equilibrium Thermodynamics and the Production of Entropy: Life, Earth and Beyond, A. Kleidon and R. D. Lorenz, Eds., Springer, Heidelberg, Germany, 2005.
  66. L. M. Martyushev and V. D. Seleznev, “Maximum entropy production principle in physics, chemistry and biology,” Physics Reports, vol. 426, no. 1, pp. 1–45, 2006.
  67. M. Berry, Principles of Cosmology and Gravitation, Cambridge University Press, Cambridge, UK, 2001.
  68. S. Weinberg, Gravitation and Cosmology, Principles and Applications of the General Theory of Relativity, John Wiley & Sons, New York, NY, USA, 1972.
  69. E. F. Taylor and J. A. Wheeler, Spacetime Physics, Freeman, New York, NY, USA, 1992.
  70. J. M. Lee, Introduction to Smooth Manifolds, Springer, New York, NY, USA, 2003.
  71. D. Griffiths, Introduction to Quantum Mechanics, Prentice Hall, Upper Saddle River, NJ, USA, 1995.
  72. I. Newton, The Principia, Daniel Adee, New York, NY, USA, 1687, translated by A. Motte, 1846.
  73. A. Annila, “Least-time paths of light,” Monthly Notices of the Royal Astronomical Society, vol. 416, pp. 2944–2948, 2011.
  74. M. Koskela and A. Annila, “Least-time perihelion precession,” Monthly Notices of the Royal Astronomical Society, vol. 417, pp. 1742–1746, 2011.
  75. S. Carroll, Spacetime and Geometry: An Introduction to General Relativity, Addison-Wesley, Essex, UK, 2004.
  76. J. H. Poincaré, “Sur le problème des trois corps et les équations de la dynamique. Divergence des séries de M. Lindstedt,” Acta Mathematica, vol. 13, pp. 1–270, 1890. View at Google Scholar
  77. K. F. Sundman, “Mémoire sur le problème des trois corps,” Acta Mathematica, vol. 36, pp. 105–179, 1912.
  78. C. E. Shannon and W. Weaver, The Mathematical Theory of Communication, The University of Illinois Press, Urbana, Ill, USA, 1962.
  79. C. E. Shannon, “A mathematical theory of communication,” Bell System Technical Journal, vol. 27, pp. 379–423 and 623–656, 1948.
  80. S. J. Gould, The Structure of Evolutionary Theory, Harvard University Press, Cambridge, Mass, USA, 2002.
  81. T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, MIT Press & McGraw-Hill, Cambridge, Mass, USA, 2001.
  82. E. W. Dijkstra, “A note on two problems in connexion with graphs,” Numerische Mathematik, vol. 1, no. 1, pp. 269–271, 1959.
  83. P. Billingsley, Probability and Measure, John Wiley & Sons, New York, NY, USA, 1979.
  84. L. Levin, “Universal search problems,” Problems of Information Transmission, vol. 9, no. 3, pp. 265–266, 1973 (in Russian).
  85. B. A. Trakhtenbrot, “A survey of Russian approaches to perebor (brute-force searches) algorithms,” Annals of the History of Computing, vol. 6, pp. 384–400, 1984; includes an English translation of L. Levin [84].
  86. S. Aaronson, “NP-complete problems and physical reality,” Electronic Colloquium on Computational Complexity, Report no. 26, 2005.
  87. A. A. Razborov and S. Rudich, “Natural proofs,” Journal of Computer and System Sciences, vol. 55, no. 1, pp. 24–35, 1997.
  88. M. Franzén, The P versus NP brief, 2007.
  89. S. N. Coppersmith, “The computational complexity of Kauffman nets and the P versus NP problem,” Physical Review E, vol. 75, 4 pages, 2007.
  90. J. Ladyman, “What does it mean to say that a physical system implements a computation?” Theoretical Computer Science, vol. 410, no. 4-5, pp. 376–383, 2009.
  91. A. Connes, Noncommutative Geometry (Géométrie non commutative), Academic Press, San Diego, Calif, USA, 1994.
  92. D. Hestenes and G. Sobczyk, Clifford Algebra to Geometric Calculus. A Unified Language for Mathematics and Physics, Reidel, Dordrecht, The Netherlands, 1984.
  93. J. Pearl, Causality: Models, Reasoning, and Inference, Cambridge University Press, New York, NY, USA, 2000.
  94. R. Landauer, “The physical nature of information,” Physics Letters A, vol. 217, no. 4-5, pp. 188–193, 1996.
  95. W. I. Gasarch, “The P=?NP poll,” SIGACT News, vol. 33, no. 2, pp. 34–47, 2002.