Abstract

Computational complexity is examined using the principle of increasing entropy. Considering computation as a physical process from an initial instance to the final acceptance is motivated because information requires physical representation and because many natural processes complete in nondeterministic polynomial time (NP). The irreversible process with three or more degrees of freedom is found intractable when, in terms of physics, flows of energy are inseparable from their driving forces. In computational terms, when solving a problem in the class NP, decisions among alternatives affect the sets of decisions available subsequently. Thus the state space of a nondeterministic finite automaton evolves due to the computation itself and hence cannot be efficiently contracted using a deterministic finite automaton. Conversely, when solving problems in the class P, the set of states does not depend on the computational history and hence can be efficiently contracted to the accepting state by a deterministic sequence of dissipative transformations. It is therefore concluded that the state set of class P is inherently smaller than the state set of class NP. Since the computational time needed to contract a given set is proportional to dissipation, the computational complexity class P is a proper (strict) subset of NP.

1. Introduction

Currently it is unclear whether every problem whose solution can be efficiently checked by a computer can also be efficiently solved by a computer [1, 2]. On one hand, decision problems in the computational complexity class P can be solved efficiently by a deterministic algorithm within a number of steps bounded by a polynomial function of the input's length. An example of a P problem is that of the shortest path: what is the least-cost one-way path through a given network of cities to the destination? On the other hand, solving problems in class NP efficiently seems to require some nondeterministic parallel machine; yet solutions can be verified as correct in a deterministic manner. An example of an NP-complete problem is that of the traveling salesman: what is the least-cost round-trip path via a given network of cities, visiting each exactly once?

It appears, although it has not been proven, that the traveling salesman problem [3] and numerous other NP problems in mathematics, physics, biology, economics, optimization, artificial intelligence, and so forth [4] cannot be solved in a deterministic manner in polynomial time, unlike the shortest path problem and other P problems. Yet the initial instances of the traveling salesman and the shortest path problem seem to differ at most polynomially from one another. Could it therefore be that there are, after all, algorithms for the NP problems as efficient as those for the P problems, and that they simply have not been found yet?

In this study insight into the P versus NP question is obtained by considering computation as a physical process [5–8] that follows the 2nd law of thermodynamics [9–11]. The natural law was recently written as an equation of motion that complies with the principle of least action and Newton's second law [12–15]. The ubiquitous imperative to consume free energy, known also as the principle of increasing entropy, describes a system in evolution toward more probable states in least time. Here it is of particular interest that evolution is in general a nondeterministic process, as is class NP computation. Furthermore, the end point of evolution, that is, the thermodynamically stable stationary state itself, can be efficiently validated as the free energy minimum in a similar manner as the solution to an NP computation can be verified as accepting.

The recent formulation of the 2nd law as an equation of motion based on the statistical mechanics of open systems has rationalized diverse evolutionary courses that result in skewed distributions whose cumulative curves are open-form integrals [16–26]. Several of these natural processes [27], for example, protein folding, which directs down along intractable trajectories to diminish free energy [28], have been recognized as the hardest problems in class NP [29]. Although many other NP-complete problems do not seem to concern physical reality, the concept of NP-completeness [30] encourages one to consider computation as an energy transduction process that follows the 2nd law of thermodynamics. The physical portrayal of computational complexity allows one to use the fundamental theorems concerning conserved currents [31, 32] and gradient systems [27, 33] in the classification of computational complexity. Specifically, it is found that circuit currents remain tractable during class P computation because the accessible states of the computer do not depend on the processing steps themselves. Thus the class P state set can be efficiently contracted using a deterministic finite automaton to the accepting set along the dissipative path without additional degrees of freedom. Physically speaking, the boundary conditions remain fixed. In contrast, the circuit currents are intractable during class NP computation because each step of the problem-solving process depends on the computational history and affects future decisions. Thus the contraction of states along alternative but interdependent computational paths to the accepting set remains a nondeterministic process. Physically speaking, the boundary conditions are changing due to the process itself.

The adopted physical perspective on computation is consistent with the standpoint that no information exists without its physical representation [5, 6] and that information processing itself is governed by the 2nd law [34]. The connection between computational complexity and the natural law also yields insight into the abundance of natural problems in class NP [4]. In the following, the description of computation as an evolutionary process is first outlined and then developed into mathematical form to make the distinction between the computations that belong to classes P and NP.

2. Computation as a Physical Process

According to the 2nd law of thermodynamics a computational circuit, just as any other physical system, evolves by diminishing energy density differences within the system and relative to its surroundings. The consumption of free energy [35] is generally referred to as evolution, where flows of energy naturally select [36, 37] the steepest directional descents in the free energy landscape to abolish the energy density differences in least time [14]. At first sight it may appear that the physical representations of computational states, in particular as they are realized in modern computers, would be too insignificant to play any role in computational complexity. However, since no representation of information can escape the laws of physics, computation too must ultimately comply with them. A clocked circuit, as a physical realization of a finite automaton, is an energy transduction network. Likewise, Boolean components and shift register nodes are components of a thermodynamic network. In accordance with the network notion, the P versus NP question can be phrased in terms of graphs [38]. In this study it will be shown that the computations in the two computational complexity classes do differ from each other in thermodynamic terms. Thus it follows that no algorithm can abolish this profound distinction.

Computation is, according to the principle of increasing entropy, a probable physical process. The sequence of computational steps begins when an energy density difference, representing an input, appears at the interface between the computational system and its surroundings. Thus the input, by its physical representation, places the automaton at the initial state of computation, that is, physically speaking, of evolution. A specific input string of alphabetic symbols is represented to the circuit by a particular physical influx, for example, as a train of voltages. Importantly, no instance is without physical realization.

The algorithmic execution is an irreversible thermalization process where the energy absorbed at the input interface begins to disperse within the circuit. Eventually, after a series of dissipative transformations from one state to another, more probable one, the computational system arrives at a thermodynamic steady state, the final acceptance, by emitting an output, for example, writing a solution on a tape. No solution can be produced without physical representation. Although it may seem secondary, the condition of termination must ultimately be the physical free energy minimum state; otherwise there would still be free energy to drive the computational process further.

Physically speaking, the most effective problem solving is about finding the path of least action, which is equivalent to the maximal energy transduction from the initial instance down along the most voluminous gradients of energy to the final acceptance. However, the path of optimal conductance, that is, of the most rapid reduction of free energy, is difficult to find in a circuit with three or more degrees of freedom because flows (currents) and forces (voltages) are inseparable. In contrast, when the process has no additional degrees of freedom in dissipation, the minimal resistance path corresponding to the solution can be found in a deterministic manner.

In the general case the computational path is intractable because the state space keeps changing due to the search itself. A particular decision to move from the present state to another depends on past decisions and will also affect the states accessible in the future. For example, when the traveling salesman decides on the next destination, the decision will depend on the path taken so far, except at the very end, when there is no choice but to return home. The path is directed because revisits are not allowed (or are eventually restricted by costs). This class, referred to as NP, contains intractable problems that describe irreversible (directional) processes (Figure 1) with additional (n ≥ 3) degrees of freedom.

In the special case the computational path is tractable as decisions are independent of the computational history. For example, when searching for the shortest path through a network, the entire invariant state space is, at least in principle, visible from the initial instance, that is, the problem is deterministic. A decision at any node is independent of the traversed paths. This class, referred to as P, contains tractable problems that describe irreversible processes without additional degrees of freedom. Moreover, when the search among alternatives is not associated with any costs, the process is reversible (nondirectional), that is, indifferent to the total conductance from the input to the output node.

Finally, it is of interest to note the particular case in which a physical system has no mechanism to proceed from one state to any other by transforming absorbed quanta into emission. Since the dispersion relations of physical systems are revealed only when interacting with them [39, 40], it is impossible to know for a given circuit and finite influx, a priori, without interacting, whether the system will arrive at the free energy minimum state, finishing with emission, or remain at an excited state without output forever. This is the physical rationale of the halting problem [41]. It is impossible to decide for a given program and finite input, a priori, without processing, whether the execution will arrive at the accepting state, finishing with output, or remain at a running state without output forever. These processes that acquire but do not yield relate to problems that cannot be decided. They are beyond class NP [42] and will not be examined further. Here the focus is on the principal difference between the truly tractable and the inherently intractable problems.

3. Self-Similar Circuits

The physical portrayal of problem processing according to the principle of increasing entropy is based on the hierarchical and holistic formalism [43]. It recognizes that circuits are self-similar in energy transduction (Figure 2) [21, 44, 45]. A circuit is composed of circuits or, equivalently, there are networks within nodes of networks. The most elementary physical entity is the single quantum of action [15, 46].

Each node of a transduction network is a physical entity associated with energy G_k. A set of identical nodes N_k > 0, representing, for example, a memory register, is associated, following Gibbs [47], with a density-in-energy defined by φ_k = N_k exp(G_k/k_BT) relative to the average energy density k_BT. The self-similar formalism assigns to a set of indistinguishable nodes in numbers N_k a probability measure P_k [12, 46]
$$P_k = \left[\prod_n \left[N_n \exp\left((-\Delta G_{kn} + \Delta Q_{kn})/k_B T\right)\right]^{g_{kn}} / g_{kn}!\right]^{N_k} / N_k! \qquad (1)$$
in a recursive manner, so that each node k in numbers N_k is a product of embedded n-nodes, each distinct type available in numbers N_n. The combinatorial configurations of identical n-nodes in the k-node are numbered by g_kn. Likewise, the identical k-nodes in numbers N_k are indistinguishable from each other in the network. The internal difference ΔG_kn = G_k − G_n and the external flux ΔQ_kn denote the quanta of (interaction) energy.

The computational system is processing from one state to another, more probable one, when energy is flowing down along gradients through the network from one node to another with concurrent dissipation to the surroundings. For example, a j-node can be driven from its present state, defined by the potential μ_j = k_BT ln φ_j [35], to another state by an energy flow from a preceding k-node at a higher potential μ_k and by an energy efflux ΔQ_jk to the surroundings (Figure 2). Subsequently the j-node may transform anew from its current high-energy state to a stationary state by yielding an efflux to a connected i-node at a lower potential coupled with emission to the surroundings. Any two states are distinguished from each other as different only when the transformation from one to the other is dissipative, ΔQ_jk ≠ 0 [12–14]. When thermalization has abolished all density differences, the irreversible process has arrived at a dynamic steady state where reversible, to-and-fro flows of energy (currents) are conserved and, on the average, the densities remain invariant.

It is convenient to measure the state space of computation by associating each j-system with the logarithmic probability
$$\ln P_j \approx N_j\left(1 - \sum_k \frac{\Delta\mu_{jk} - \Delta Q_{jk}}{k_B T}\right) = N_j\left(1 - \sum_k \frac{\Delta V_{jk}}{k_B T}\right) \qquad (2)$$
in analogy to (1), where Δμ_jk/k_BT = ln φ_j − Σ g_jk ln(φ_k/g_jk!) is the potential difference between the j-node and all other connected k-nodes in degenerate (equal-energy) numbers g_jk. Stirling's approximation implies that k_BT is a sufficient statistic for the average energy [48] so that the system may accept (or discard) a quantum without a marked change in its total energy content, that is, the free energy ΔV_jk = Δμ_jk − ΔQ_jk ≪ k_BT. Otherwise, a high influx ΔV_jk ≫ k_BT, such as a voltage spike from the preceding k-node or heat from the surroundings, might "damage" the j-system, for example, "burn" a memory register, by forcing the embedded n-nodes into evolution (Figure 2). Such a non-statistic phenomenon may manifest itself even as chaotic motion, but this is no obstacle for the adopted formalism because then the same self-similar equations are used at a lower level of hierarchy to describe processes involving sufficiently statistical systems.
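
To make the bookkeeping of (2)–(3) concrete, the following minimal sketch (hypothetical occupancies and free-energy differences, not data from the paper) evaluates the Stirling-approximated log-probability of each j-node and the additive measure ln P of a small circuit.

```python
kBT = 1.0  # average energy density, used here as the unit of energy

def ln_P_j(N_j, dV):
    """Stirling-approximated log-probability of a j-node, cf. (2):
    ln P_j ~ N_j * (1 - sum_k dV_jk / kBT), with dV_jk = dmu_jk - dQ_jk."""
    return N_j * (1.0 - sum(dV) / kBT)

def ln_P(nodes):
    """Additive measure of the whole circuit, cf. (3)."""
    return sum(ln_P_j(N_j, dV) for N_j, dV in nodes)

# hypothetical occupancies N_j and free-energy differences dV_jk for three nodes
nodes = [(100.0, [0.02, -0.01]), (80.0, [0.03]), (120.0, [-0.02, 0.01])]
print(ln_P(nodes))
```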

According to the scale-independent formalism the network is a system in the same way as its constituent nodes are systems themselves. Any two networks, just as any two nodes, are distinguishable from each other when there is some influx sequence of energy so that exactly one of the two systems is transforming. In computational terms, any two states of a finite automaton are distinguishable when there is some input string so that exactly one of the two transition functions is accepting [2]. Those nodes that are distinguishable from each other by mutual density differences are nonequivalent. These distinct physical entities of a circuit are represented by disjoint sets and indexed separately in the total additive measure of the entire circuit, defined as
$$\ln P = \sum_{j=1} \ln P_j = \sum_{j=1} N_j\left(1 - \sum_{k\neq j} \frac{\Delta V_{jk}}{k_B T}\right). \qquad (3)$$
The affine union of disjoint sets is depicted as a graph that is merged from subgraphs by connections.

In the general case the calculation of the measure ln P (3) implies a complicated energy transduction network by indexing numerous nodes as well as the differences between them and with respect to the surroundings. In a sufficiently statistical system the changes in occupancies balance as ΔN_j = −Σ ΔN_k since the influx to the j-node results from the effluxes from the k-nodes (or vice versa). The flows along the jk-edges are proportional to the free energy by an invariant conductance σ_jk > 0, defined as [12]
$$\Delta N_j = -\sum_k \sigma_{jk}\frac{\Delta V_{jk}}{k_B T}. \qquad (4)$$
The form ensures continuity so that, when a particular jk-flow is increasing the occupancy ΔN_j > 0 of the j-node, the very same flow is decreasing the occupancies Σ ΔN_k < 0 at the k-nodes (or vice versa). Importantly, owing to the other affine connections, the jk-transformation will affect the occupancies of other nodes that in turn affect ΔV_jk. Consequently, when there are, among interdependent nodes (n ≥ 3), alternative paths (k ≥ 2) of conduction, the problem of finding the optimal path becomes intractable [12, 14]. As long as ΔV_jk ≠ 0 the gradient system with n ≥ 3 degrees of freedom does not enclose integrable (tractable) orbits [33].
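
A minimal sketch of the flow law (4) under stated assumptions: the conductances σ_jk and potentials μ_j below are hypothetical, and the free energy ΔV_jk is approximated by the bare potential difference. One such step already hints at why three or more coupled nodes make the search intractable: the step changes the occupancies, and thereby the very forces that drive the next step.

```python
def flow_step(N, mu, sigma, kBT=1.0):
    """One dissipative step of (4): dN_j = -sum_k sigma_jk * dV_jk / kBT.
    Here dV_jk is approximated by mu[j] - mu[k]; dissipation to the
    surroundings is left out of this toy update."""
    n = len(N)
    dN = [0.0] * n
    for j in range(n):
        for k in range(n):
            if k != j:
                dN[j] -= sigma[j][k] * (mu[j] - mu[k]) / kBT
    return [Nj + dNj for Nj, dNj in zip(N, dN)]
```

Flows run from high to low potential; after the step the potentials μ_j = k_BT ln φ_j would have to be recomputed from the new occupancies before the next step can be taken.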

Conversely, in the special case when the reduction of a difference does not affect other differences, that is, there are no additional degrees of freedom, the changes in occupancies remain tractable. The conservation of energy requires that, when there are only two degrees of freedom, the flow from one node will inevitably arrive exclusively at the other node. Therefore, it is not necessary to explore all these integrable paths to their very ends. Then the outcome can be predicted and the particular path in question can be found efficiently. Moreover, when there are no differences ΔV_jk = 0, there are no net variations in occupancies, that is, no net flows either. These conserved, reversible flows are statistically predictable even in a complicated but stationary (Δln P = 0) network with degrees of freedom. When the currents are conserved, the network is idle, that is, not transforming. In accordance with Noether's theorem also the Poincaré-Bendixson theorem holds for the stationary system [27, 33].

The overall transduction processes, both intractable and tractable, direct toward more probable states, that is, Δln P > 0. However, when a natural process with three or more degrees of freedom is examined in a deterministic manner, it is necessary to explore all conceivable transformation paths to their ends. The paths cannot be integrated in closed forms (predicted) because each decision will affect the choice of future states. The set of conceivable states that is generated by decisions at consequent branching points of computation can be enormous.

The physical portrayal of computational complexity reveals that it is the noninvariant, evolving state space of class NP computation that prevents the contraction by dissipative transformations from being completed in a deterministic manner in polynomial time. Since the dissipated flow of energy during the computation relates directly to the irreversible flow of time [14], the class NP completion time is inherently longer than that of class P. Thus it is concluded that P is a proper subset of NP.

4. Computation as a Probable Process

When computation is described as a probable physical process, the additive logarithmic probability measure ln P is increasing as the dissipative transformations are leveling the differences ΔV_jk ≠ 0 (ΔV_jj = 0). When the definitions in (4) and Δμ_jk(ΔN_j)/k_BT = ΔN_j/N_j are used, the change
$$L = \Delta\ln P = -\sum_{j=1} \Delta N_j \sum_k \frac{\Delta V_{jk}}{k_B T} = \sum_{j,k} \sigma_{jk}\left(\frac{\Delta V_{jk}}{k_B T}\right)^2 \geq 0 \qquad (5)$$
is found to be nonnegative since the squares (ΔV_jk)² and (ΔN_j)² are necessarily nonnegative and the absolute temperature T > 0, σ_jk ≥ 0, and k_B > 0.

The definition of entropy S = k_B ln P yields from (5) the principle of increasing entropy ΔS = −Σ_j ΔN_j Σ_k ΔV_jk/T ≥ 0. Equation (5) says that entropy is increasing when free energy is decreasing, in agreement with the thermodynamic maxim [35], the Gouy-Stodola theorem [49, 50], and the mathematical foundations of thermodynamics [51–53]. In other words, when the process generator L > 0, there is free energy for the computation to commence from the initial state toward the accepting state, where the output will thermalize the circuit and L = 0. Admittedly, dissipation is often small; however, it is not negligible but necessary for any computation to advance and to yield an output [5, 6, 34].

During the computational process the state space accessible by L > 0 is contracting toward the free energy minimum state where L = 0 and no further changes of state are possible. Consistently, when ln P is increasing due to the changing occupancies ΔN_j, the change in the process generator [27]
$$\Delta L = 2\sum_{j=1} \frac{\Delta N_j}{N_j}\sum_k \sigma_{jk}\frac{\Delta V_{jk}}{k_B T} = -2\sum_{j=1} \frac{\left(\Delta N_j\right)^2}{N_j} \leq 0 \qquad (6)$$
is found to decrease almost everywhere using the definition in (4), because the squares (ΔN_j)² and (ΔV_jk)² are necessarily nonnegative and N_j > 0 for any spatially confined energy density [14]. Equations (5) and (6) show that during the computation the state space is contracting toward the stationary state where L = 0.

The free energy minimum partition ln P_max = Σ N_j^ss corresponds to the solution. It is a stable state of the computational process in its surroundings because any variation δN_j below (above) the steady-state occupancy N_j^ss will reintroduce ΔV_jk < 0 (> 0) that will drive the system back to the stationary state by invoking a returning flow ΔN_j > 0 (< 0). Explicitly, the maximum entropy system is Lyapunov stable [27, 33] according to the definitions δ ln P = L(δN_j) < 0 and δL(δN_j) > 0 that are available from (5) and (6). The dynamic steady state is maintained by frequent to-and-fro flows between the system's constituents and the surroundings. These nondissipative processes do not amount to any change in P.

In general the trajectories of natural processes cannot be solved analytically because the flows ΔN_j and ΔV_jk are inseparable in L (5) at any j-node where the cardinality of {j, k} ≥ 3. Nonetheless, the inherently intractable trajectories can be mapped by simulations where T, ΔV_jk, and N_j are updated after each change of state. The occupancies N_j keep changing due to the changing driving forces ΔV_jk that, in turn, are affected by the changes ΔN_j. In terms of physics, the non-Hamiltonian system is without invariants of motion and Liouville's theorem is not satisfied because the open dissipative system is subject to an influx (efflux) from (to) its surroundings. The nonconserved gradient system is without norm. Thus the evolving (cf. Bayesian) distribution of probabilities P_j cannot be normalized. The dissipative equation of motion ΔP/Δt = LP for the class of irreversible processes cannot be integrated in a closed form or transformed to a time-independent frame [14] to obtain a solution efficiently.
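
Along the lines of the simulations mentioned above, the following toy sketch (a hypothetical three-node network, not the paper's algorithm) recomputes the potentials after every step, so the driving forces ΔV_jk evolve with the very occupancies N_j that they drive, and it prints the process generator L of (5), which decays toward zero as the free energy is consumed.

```python
import math

def simulate(N, G, sigma, kBT=1.0, steps=201, dt=0.01):
    """Map a dissipative trajectory step by step; cf. (4)-(6)."""
    n = len(N)
    for step in range(steps):
        mu = [kBT * math.log(N[j]) + G[j] for j in range(n)]  # mu_j = kBT ln N_j + G_j
        dV = [[mu[j] - mu[k] for k in range(n)] for j in range(n)]
        # process generator, right-hand side of (5); nonnegative, decays toward zero
        L = sum(sigma[j][k] * (dV[j][k] / kBT) ** 2
                for j in range(n) for k in range(n) if k != j)
        # flow law (4), scaled by a small time step for numerical stability
        dN = [-dt * sum(sigma[j][k] * dV[j][k] / kBT for k in range(n) if k != j)
              for j in range(n)]
        N = [max(N[j] + dN[j], 1e-12) for j in range(n)]
        if step % 50 == 0:
            print(f"step {step:4d}  L = {L:.6f}")
    return N

# hypothetical three-node network with uniform conductances
sigma = [[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]]
simulate([5.0, 1.0, 0.5], [0.0, 0.2, -0.1], sigma)
```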

According to the maximum entropy production principle [54–66] energy differences will be reduced most effectively when entropy increases most rapidly, that is, when the most voluminous currents direct along the steepest paths of free energy. However, when choosing at every instance the particular descent that appears steepest, there is no guarantee that the optimal overall path will be found, because the transformations themselves will affect the future states between the initial instance and the final acceptance. To be sure about the optimal trajectory takes time, that is, dissipation [14], because the deterministic algorithmic execution of the class NP problem will have to address by conceivable transformations the entire power set of states, one member for each distinct path of energy dispersal.

In the special case when the currents are separable from the driving forces, the energy transduction network will remain invariant. In terms of physics the Hamiltonian system has invariants of motion and Liouville’s theorem is satisfied. The deterministic computation as a tractable energy transduction process will solve the problem in question because the dissipative steps are without additional degrees of freedom. The conceivable courses can be integrated (predicted). Hence the solution can be obtained efficiently, for example, by an algorithm that follows the steepest descent and does not waste time in wandering along paths that can be predicted to be futile.

5. Manifold in Motion

Further insight into the distinction between computations in the classes P and NP is obtained when computation as a physical process is described in terms of an evolving energy landscape [67–69]. To this end the discrete differences Δ, which properly denote the transforming forces and quantized flows, are replaced by differentials ∂ of continuous variables. A spatial gradient ∂U_jk/∂x_j is a convenient way to relate a density labeled by j at a continuum coordinate x_j with another one labeled by k but displaced by dissipation ∂Q_jk/∂t at x_k [13, 14]. When the j-system at x_j evolves down along the scalar potential gradient ∂U_jk/∂x_j in the field ∂Q_jk/∂x_j, the conservation of energy requires that the transforming current v_j = dx_j/dt = −Σ dx_k/dt. The dissipation ∂Q_jk/∂t is an efflux of photons at the speed of light c to the surrounding medium (or vice versa).

The continuum equation of motion corresponding to (5) is obtained from (3) by differentiating and using the chain rule (dP_j/dx_j)(dx_j/dt) [14]
$$L = -\sum_{j,k} \frac{D_j V_{jk}}{k_B T}, \qquad (7)$$
where the directional derivatives D_j = (dx_j/dt)(∂/∂x_j) span an affine manifold [70] of energy densities (Figure 3). The total potential V_jk = U_jk − iQ_jk is decomposed into the orthogonal scalar U_jk and vector Q_jk parts [71]. All distinguishable densities and flows are indexed by j ≠ k. The evolving energy landscape is concisely given by the total change in kinetic energy ∂(2K)/∂t = k_BT L = T ∂S/∂t [13, 14]
$$\sum_{j,k} v_j \frac{\partial}{\partial t}\left(m_{jk} v_k\right) = \sum_{j,k} v_j m_{jk} \frac{\partial v_k}{\partial t} + \sum_{j,k} v_j \frac{\partial m_{jk}}{\partial t} v_k \;\Longleftrightarrow\; \frac{\partial}{\partial t}\,2K = -\sum_{j,k} v_j \frac{\partial U_{jk}}{\partial x_j} + \sum_{j,k} \frac{\partial Q_{jk}}{\partial t}, \qquad (8)$$
where the transforming flows with three or more degrees of freedom (n ≥ 3) are indexed as j ≠ k ± 1. Conversely, the flow without additional degrees of freedom (n < 3) is indexed as j = k ± 1. In fact the derivative should be denoted as inexact (đ) because in general the entered state depends on the past path.

The equation for the flows of energy can also be obtained from the familiar Newton's 2nd law [72] for the change in momentum p_jk = m_jk v_k
$$\frac{\partial}{\partial t}\sum_{j,k} p_{jk} = \sum_{j,k} m_{jk} a_k + \sum_{j,k} \frac{\partial m_{jk}}{\partial t} v_k = -\sum_{j,k} \frac{\partial V_{jk}}{\partial x_j} = -\sum_{j,k} \frac{\partial U_{jk}}{\partial x_j} + \sum_{j,k} \frac{1}{v_j}\frac{\partial Q_{jk}}{\partial t} \qquad (9)$$
by multiplying with velocities. The gradient ∂V_jk/∂x_j is again decomposed into the spatial and temporal parts. The sign convention is the same as above, that is, when ∂U_jk/∂x_j < 0, then v_j > 0. Since momenta are at all times tangential to the manifold, Newton's 2nd law (9) requires that the corresponding flow at any moment
$$v_j = -\sum_k \frac{\sigma_{jk}}{k_B T}\frac{\partial V_{jk}}{\partial x_j} \qquad (10)$$
is proportional to the driving force in accordance with the continuity v_j = −Σ v_k across the jk-edges between the nodes of the network (4) [12]. The linear relationship in (10), reminiscent of the Onsager reciprocal relations [51], is consistent with the previous notion that the densities in energy (the nodes) are sufficiently statistical systems. Otherwise, a high current between x_k and x_j would force the underlying conducting system (the jk-edge), parameterized by the coefficient σ_jk, into evolution. In such a case the channel's conductance would depend on the transmitted bits [34].

A particular flow v_j funnels by dissipative transformations down along the steepest descent −∂V_jk/∂x_j, that is, along the shortest path s_jk = ∫√(v_j m_jk v_k) dt, known as the geodesic [51, 73, 74]. At any given moment the positive definite resistance r_jk = k_BT σ_jk⁻¹ > 0 in (10) is identified with the mass m_jk > 0 that, as the metric tensor, defines the geometry of the free energy landscape [75] (cf. a Lorentzian manifold). Formally s_jk can be denoted as an integral; however, in the general case of the evolving non-Euclidean landscape it cannot be integrated in a closed form [33]. The curved landscape is shrinking (or growing) because the surroundings are draining it by a net efflux (or supplying it with a net influx) of radiation ∂Q_jk/∂t ≠ 0 and/or a material flow ∂U_jk/∂t ≠ 0. When the forces and flows are inseparable in L, the noninvariant landscape is, at any given locus and moment, a result of its evolutionary history. The rate of net emission (or net absorption) declines as the system steps, quantum by quantum, toward the free energy minimum, which is the stationary state in the respective surroundings. Only in the special case, when the forces and flows are separable, can the trajectories be integrated in a closed form.
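
As a loose illustration only (a hypothetical, fixed two-dimensional potential rather than the evolving landscape of the text), the sketch below follows the locally steepest descent; on an invariant landscape the gradient field can be reused as is, whereas on an evolving landscape it would have to be recomputed after every step.

```python
def steepest_descent(grad, x, rate=0.1, steps=100):
    """Follow the locally steepest descent -grad(x) of a potential landscape."""
    for _ in range(steps):
        g = grad(x)
        x = [xi - rate * gi for xi, gi in zip(x, g)]
    return x

# hypothetical bowl-shaped potential U(x, y) = (x - 1)^2 + 2*(y + 0.5)^2
grad_U = lambda p: [2.0 * (p[0] - 1.0), 4.0 * (p[1] + 0.5)]
print(steepest_descent(grad_U, [3.0, 2.0]))  # approaches the minimum near (1, -0.5)
```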

Finally, when all density differences have vanished, the manifold has flattened to the stationary state (dS/dt = 0). The state space has contracted to a single stationary state where L = 0. In agreement with Noether's theorem the currents are conserved and tractable throughout the invariant manifold. Also, in accordance with Poincaré's recurrence theorem, the steady-state reversible dynamics are exclusively on bound and (statistically) predictable orbits. Moreover the conserved currents, that is, ∂m_jk/∂t = 0, bring about no net changes in the total energy content of the system. Hence (9) reduces to
$$\sum_{j,k} v_j \frac{\partial}{\partial t}\left(m_{jk} v_k\right) = \sum_{j,k} v_j m_{jk} \frac{\partial v_k}{\partial t} \;\Longleftrightarrow\; \frac{\partial}{\partial t}\,2K = -\sum_{j,k} v_j \frac{\partial U_{jk}}{\partial x_j}, \qquad (11)$$
which implies, in accordance with the virial theorem, that the components of kinetic energy 2K match the components of the potential U everywhere.

According to the geometric description of computational processes, the flattening (evolving) non-Euclidean landscape represents the state space of class NP computation whereas the flat Euclidean manifold represents the state space of class P computation. The geodesics that span the class NP landscape are arcs whereas those that span the class P manifold are straight lines. According to (8) the class NP state space is, due to its three or more degrees of freedom (n ≥ 3), larger in dissipation by the terms Σ v_j dm_jk v_k > 0 indexed with j ≠ k ± 1, than the class P state space without additional degrees of freedom (n < 3) for dissipation, given by the terms Σ v_j dm_jk v_k > 0 indexed with j = k ± 1. In other words, class NP is larger than P because the curved manifold cannot be embedded in the plane. The measure ln P_NP of the non-Euclidean landscape is simply larger, by the degrees of freedom (n ≥ 3) in dissipation, than the measure ln P_P of the Euclidean manifold.

The argument for the failure to map the larger NP manifold one-to-one onto the smaller P manifold is familiar from the pigeonhole principle PHP_P^NP applied to the manifolds, ln P_NP > ln P_P. The quanta that are dissipated during evolution from diverse density loci of the curved, evolving NP landscape are not mapped anywhere on the flat, invariant P landscape. Thus it is concluded that P is a proper subset of NP.

6. Intractability in the Degrees of Freedom

The transduction path between two nodes can be represented by only one edge; hence there are k = n − 1 interdependent currents (4) between n densities [27]. The degrees of freedom are less than n by one because it takes at least two densities to have a difference. In the general case n ≥ 3, there are alternative paths for the currents from the initial state via alternative states toward the accepting state. The intractable evolutionary courses are familiar from the n-body (n ≥ 3) problems [76, 77]. Accordingly, the satisfiability problem of a Boolean expression (n-SAT) belongs to class NP when there are three or more literals (n ≥ 3) per clause [30]. In the special case n = 2, the energy dispersal process is deterministic as there are no alternative dissipative paths for the current. When only one path is conducting, the problem of maximal conduction is 1-separable and tractable. The two-body problem does not present a challenge. Accordingly, 2-SAT is deterministic and 1-SAT is trivial, essentially only a statement.
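
As a point of reference for the n-SAT boundary, the brute-force sketch below (with a hypothetical clause encoding) enumerates the full power set of 2^n truth assignments, which is what a naive deterministic search faces for n ≥ 3; for 2-SAT, by contrast, polynomial-time procedures such as the implication-graph method are known.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Check satisfiability by exploring all 2**n_vars assignments.
    A clause is a list of nonzero integers: literal k means variable |k|
    is True if k > 0 and False if k < 0 (e.g. [1, -2, 3] = x1 or not-x2 or x3)."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return dict(enumerate(bits, start=1))   # a satisfying assignment
    return None                                     # unsatisfiable

# a small hypothetical 3-SAT instance with three literals per clause
print(brute_force_sat([[1, 2, 3], [-1, -2, 3], [1, -3, 2]], 3))
```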

For example, the problem of maximizing the shortest path by two or more interdicts (k ≥ 2) is intractable. When the first interdict is placed, flows will be redirected and will, in turn, affect the decision of where to place the second interdict. Similarly the traveling salesman's search for the optimal round-trip path is intractable. A decision to visit a particular city will irreversibly narrow the available state space by excluding that city from the subsequent choices. Thus, at any particular node one cannot consider decisions as if not knowing the specific search history that led to that node. When each decision opens a new set of future decisions, the computational state space of class NP is a tedious power set of deterministic decisions. On the other hand, when optimizing the shortest path, a choice of a particular path will not affect, in any way, the future exploration of other paths. At any particular node one may consider decisions irrespective of the search history. In the deterministic case it is not necessary to explore all conceivable choices because the trajectories are tractable (predictable). Likewise, the problem of maximizing the shortest path by a single interdict k = 1 can be solved efficiently. Any particular decision to place the interdict does not affect future decisions because there are no more interdicts to be placed. When the state space is not affected by the problem-solving process itself, at most a polynomial array of invariant circuits, that is, deterministic finite automata, will compute class P problems.
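
The history dependence shows up directly in the state a search has to carry. In the minimal sketch below (hypothetical distance matrix), the round-trip search must thread the ordered set of already visited cities through every candidate tour, so the conceivable states form a power set; a shortest-path search needs no such memory (see the Dijkstra sketch in Section 7).

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustive round-trip search: every partial decision (which cities have
    already been visited, and in what order) constrains all remaining ones."""
    n = len(dist)
    best, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):          # fix city 0 as home
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best:
            best, best_tour = cost, tour
    return best, best_tour

# hypothetical symmetric distances between four cities
d = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(tsp_brute_force(d))  # (18, (0, 1, 3, 2, 0))
```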

The P versus NP question is not only a fundamental but also a practical problem, for which no computational machinery exists without physical representation. A particular input instance is imposed on the computational circuit by the surroundings and a particular output is accepted as a solution by the surroundings. The communication between the automaton and its surroundings relates to information processing, which was understood already early on to be equivalent to the (impedance) matching of circuits for optimal energy transmission [78, 79]. When the matching of a circuit will affect the matching of two or more connected circuits, the total matching of the interdependent circuits for the optimal overall transduction is intractable. Although in practice the iterative process may converge rapidly in a nondeterministic manner, the conceivable set of circuit states is a power set of the tuning operations. Conversely, when the matching does not involve additional degrees of freedom, the tuning for optimal transduction is tractable.

In summary, the class NP problem-solving process is inherently nondeterministic because the contraction process will itself affect the set of future states accessible from a particular instance. The course toward acceptance cannot be accelerated by prediction; the state space must be explored. On the other hand, when the dissipative steps between the input and output operations have no additional degrees of freedom, the search for the class P problem solution will itself not affect the accessible set of states at any instance. The invariant state set can be contracted efficiently by predicting rather than exploring all conceivable paths. Therefore, the completion time of the class P deterministic computation is shorter than that of NP. Thus it is concluded that P is a proper subset of NP.

7. State Spaces of Automata

The computational complexity classification into P and NP by the differing degrees of freedom in dissipation relates to the algorithmic execution times, which are proportional to circuit sizes. A Boolean circuit that simulates a Turing machine is commonly represented as a (directed, acyclic) graph structure of a tree with the assignments of gates (functions) to its vertices (nodes) (Figure 2).

The class NP problems are represented by circuits where forces (voltages) are inseparable from currents. Since there are no invariants of motion, the ceteris paribus assumption does not hold when solving the class NP problems [80]. Consistently, no deterministic algorithms are available for the class of nonconserved flow problems; instead, for example, brute-force optimization, simulated annealing, and dynamic programming are employed [81].

The class NP problems can be considered to be computed by a nondeterministic Turing machine (NTM). For each pair of state and input symbol there may be several possible states to be accessed by a subsequent transition. The NTM 5-tuple (Φ, Δ, Λ, φ_1, φ_ss) consists of a finite set of states Φ, a finite set of input symbols Δ including the blank, an initial state φ_1 ∈ Φ, a set of accepting (stationary) states φ_ss ⊆ Φ, and a transition function Λ: Φ × Δ → Φ × Δ × {R, L}, where L is a left and R a right shift of the input tape. Since a Turing machine has an unlimited amount of storage space for computations, and eventually an infinite input as well, such a machine cannot be realized. Therefore it is better motivated to consider computational complexity by the physical principle in the context of a finite state machine, without compromising the conclusions. For example, a read-only, right-moving Turing machine is equivalent to a nondeterministic finite automaton (NFA), where for each pair of state and input symbol there may be several possible states to be accessed by a subsequent transition. The NFA 5-tuple (Φ, Δ, Λ, φ_1, φ_ss) consists of a finite set of states Φ, a finite set of input symbols Δ, an initial state φ_1 ∈ Φ, a set of accepting (stationary) states φ_ss ⊆ Φ, and a transition function Λ: Φ × Δ → P(Φ), where P(Φ) denotes the power set of Φ. A circuit for the nondeterministic computation can also be constructed from an array of deterministic finite automata (DFA). Each DFA is a finite state machine where for each pair of state and input symbol there is one and only one transition to the next state. The DFA 5-tuple (Φ, Δ, Λ, φ_1, φ_ss) consists of a finite set of states Φ, a finite alphabet Δ, an initial state φ_1 ∈ Φ, a set of accepting states φ_ss ⊆ Φ, and a transition function Λ: Φ × Δ → Φ.
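
The 5-tuples above translate directly into data structures. In the minimal sketch below (hypothetical alphabet and transitions), the DFA's transition function returns exactly one next state, whereas the NFA's returns a subset of Φ.

```python
# Deterministic finite automaton: Λ maps (state, symbol) to exactly one state.
DFA = {
    "states": {"q0", "q1"},
    "alphabet": {"0", "1"},
    "delta": {("q0", "0"): "q0", ("q0", "1"): "q1",
              ("q1", "0"): "q0", ("q1", "1"): "q1"},
    "start": "q0",
    "accept": {"q1"},
}

def run_dfa(dfa, word):
    state = dfa["start"]
    for symbol in word:
        state = dfa["delta"][(state, symbol)]
    return state in dfa["accept"]

# Nondeterministic finite automaton: Λ maps (state, symbol) to a subset of states.
NFA_DELTA = {("q0", "0"): {"q0"}, ("q0", "1"): {"q0", "q1"}, ("q1", "1"): {"q1"}}

def run_nfa(delta, start, accept, word):
    current = {start}
    for symbol in word:
        current = set().union(*(delta.get((s, symbol), set()) for s in current))
    return bool(current & accept)

print(run_dfa(DFA, "1011"), run_nfa(NFA_DELTA, "q0", {"q1"}, "1011"))
```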

In the general case, when the forces are inseparable from the flows, the execution time of the DFA array grows super-polynomially as a function of the input length n, for example, as O(N^n). For example, when maximizing the shortest path by interdicts (k ≥ 2), any two alternative choices will give rise to two circuits that differ from each other as much as the currents of the two DFAs differ from each other. These two sets are nonequivalent due to the difference in dissipation, and one cannot be reduced to the other. Accordingly, the circuit for the NFA is adequately constructed from the entire power set of distinct DFAs to cover the entire conceivable set of states of the nondeterministic computation (Figure 4). The union of DFAs is nonreducible, that is, each DFA is distinguished from all other DFAs by its distinct transition function.
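
For reference, the standard subset construction, which is not part of the paper's argument, makes the power-set growth concrete: determinizing an NFA produces DFA states that are subsets of Φ, in the worst case members of the entire power set P(Φ). A minimal sketch reusing the NFA transitions of the previous example:

```python
from itertools import chain

def subset_construction(delta, start, alphabet):
    """Determinize an NFA: each DFA state is a frozen subset of NFA states."""
    start_set = frozenset({start})
    dfa_delta, frontier, seen = {}, [start_set], {start_set}
    while frontier:
        current = frontier.pop()
        for symbol in alphabet:
            nxt = frozenset(chain.from_iterable(
                delta.get((s, symbol), set()) for s in current))
            dfa_delta[(current, symbol)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return dfa_delta, seen

delta = {("q0", "0"): {"q0"}, ("q0", "1"): {"q0", "q1"}, ("q1", "1"): {"q1"}}
_, dfa_states = subset_construction(delta, "q0", {"0", "1"})
print(len(dfa_states))  # number of reachable subset-states
```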

The class P problems are represented by circuits where forces are separable from currents. When the proposed questions do not depend on previous decisions (answers), the problem can be computed efficiently by a DFA. Consistently, in the class P of flow conservation problems many deterministic methods deliver the solution corresponding to the maximum flow in polynomial time. For example, during the search for the maximally conducting path through the network, currents will disperse from the input node k to diverse alternative nodes l, but only the flow along the steepest descent will arrive at the output node j and establish the only and most voluminous flow. The other paths of energy dispersal will terminate at dead ends and will not contribute to or affect the maximum flow at all. Importantly, on an invariant landscape these inferior paths do not have to be followed to their very ends, as is exemplified by Dijkstra's algorithm [82]. The search terminates at the accepting state whereas other paths end up at nil states. These particular sequences of states "died." The shortest path problem can be presented by a single DFA because the nonaccepting dead states that keep going to themselves belong to ∅, the empty set of states. However, as has been accurately pointed out [2], technically this automaton is a nondeterministic finite automaton, which reflects the understanding that the single flow without additional degrees of freedom (n = 2) is the special deterministic subclass of the generally (n ≥ 3) nondeterministic class. Likewise, the special case of maximizing the shortest path by a single interdict (k = 1) is deterministic in contrast to the general case of two or more interdicts (k ≥ 2). The special 1-separable problem can be represented by a linear set of distinct circuits in contrast to the general inseparable problem that requires a power set of distinct circuits. Accordingly, the automaton for the special cases of deterministic problems is adequately constructed from at most a polynomial set of distinct DFAs, and the corresponding deterministic computation is completed in polynomial time.
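
Dijkstra's algorithm, referred to above, makes the pruning concrete: once a node has been settled, no later decision can improve it, so inferior paths are abandoned without being followed to their ends. A minimal sketch with a hypothetical weighted network:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances on an invariant landscape; settled nodes are
    never revisited, so inferior paths need not be followed to their ends."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # a shorter route to u was already settled: prune
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# hypothetical weighted network of cities
graph = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)]}
print(dijkstra(graph, "A"))  # {'A': 0.0, 'B': 2.0, 'C': 3.0, 'D': 4.0}
```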

Since the class NP varying state space is larger, due to its additional degrees of freedom, than the class P invariant state space, it is concluded that P is a proper subset of NP.

8. The Measures of States

To measure the difference between the classes P and NP, the thermodynamic formalism of computation will be transcribed into mathematical notation [52]. Consistently with the reasoning presented in Sections 2–7, the computational complexity class P will be distinguished from NP by measuring the difference in dissipative computation due to the difference in degrees of freedom. Moreover, since the computation does not advance by nondissipative (reversible) transitions, these exchanges of quanta do not affect the measure.

To maintain a connection to practicalities, it is worth noting that tractable problems are often idealizations of intractable natural processes. For example, when determining the shortest path for a long-haul trucker to take through a network of cities to the destination, it is implicitly assumed that, when the computed optimal path is actually taken, the traffic itself would not congest the current and cause a need for rerouting and finding a new, best possible route under the changing circumstances.

The state space of a finite energy system is represented by elements φ of the set Φ [52]. Transformations from one state to another are represented by elements λ, referred to as process generators, of the set Λ. The computation is a series of transformations along a piecewise continuous path s(λ, φ) in the state space. According to the 2nd law the paths of energy dispersal that span the affine manifold M are shortening until the free energy minimum state has been attained. Then the state space has contracted during the transformation process to the accepting state.

Definition 1. A system is a pair (Φ, Λ), with Φ a set whose elements φ are called states and Λ a set whose elements λ are called process generators, together with two functions. The function λ → S_λ assigns to each λ a transformation S_λ whose domain D(λ) and range R(λ) are non-empty subsets of Φ such that for each φ in Φ the condition of accessibility holds:
(i) Λφ := {S_λφ : λ ∈ Λ, φ ∈ D(λ)} = Φ, (12)
where Λφ is the entire set of states accessible from φ, with the assertion that, for every state φ, Λφ equals the entire state space Φ. Furthermore, the function (λ′, λ″) → λ″λ′ assigns to each pair (λ′, λ″) the (extended) process generator λ″λ′ for the successive application of λ″ and λ′ with the property:
(ii) if D(λ″) ∩ R(λ′) ≠ ∅, then D(λ″λ′) = S_{λ′}⁻¹(D(λ″)) and there holds S_{λ″λ′}φ = S_{λ″}S_{λ′}φ for all φ ∈ D(λ″λ′) when, for any other λ*, D(λ′) ∩ D(λ*) = ∅.
The extended process generators λ″λ′ formalize the successive transformations with less than three degrees of freedom. When the transformation S_{λ′} is emissive, its inverse S_{λ′}⁻¹ is absorptive.

Definition 2. A process of (Φ, Λ) is a pair (λ, φ) such that φ ∈ D(λ). The process generators transform the system from an initial state via intermediate states to the final state. The set of all processes of (Φ, Λ) is
Λ ⋄ Φ = {(λ, φ) : λ ∈ Λ, φ ∈ D(λ)}. (13)
According to Definitions 1 and 2 the states and process generators are interdependent (Figure 5) so that
(i) when the system has transformed from the state φ to the state S_λφ, the process generator λ has vanished;
(ii) when the system has transformed from φ to S_λφ, the system is no longer at φ, available for another transformation S_{λ*} by another process generator λ* to S_{λ*}φ;
(iii) when the system has transformed from the initial state φ to an intermediate state S_{λ′}φ and subsequently from S_{λ′}φ to S_{λ″}S_{λ′}φ, the final state S_{λ″}S_{λ′}φ is identical to the state S_{λ″λ′}φ resulting from the extended transformation from φ only when S_{λ′}φ is not in the domain D(λ*) of any other transformation S_{λ*}.

Definition 3 (see [52]). Let t > 0 and let λ_t : [0, t) → P be piecewise continuous, and define D(λ_t) to be the set of states φ = (N, G) ∈ S such that the differential equation
$$\left(\frac{dN(\tau)}{dt}, \frac{dG(\tau)}{dt}\right) = \lambda_t(\tau) \qquad (14)$$
has a solution τ → (N(τ), G(τ)) that satisfies the initial condition (N(0), G(0)) = φ and follows the trajectory {(N(τ), G(τ)) | τ ∈ [0, t]}, which lies entirely in Φ. In other words, φ ∈ D(λ_t) if and only if φ + ∫_0^τ λ_t(ξ) dξ is in Φ for every τ ∈ [0, t].
When (14) is compared with (5), λ_t is understood in the continuum limit to generate a transformation from the initial density φ = (N(0), G(0)) (cf. the definition of energy density in Section 3) to a succeeding density φ_τ = (N(τ), G(τ)) during a step τ ∈ [0, t] via the flow v = dN/dt that consumes the free energy.

Definition 4 (see [52]). Define Λ to be the set of functions λ_t for which D(λ_t) ≠ ∅. For each λ_t ∈ Λ, define S_{λ_t}φ : D(λ_t) → Φ by the formula
$$S_{\lambda_t}\phi = \phi + \int_0^t \lambda_t(\xi)\, d\xi. \qquad (15)$$
If s(λ_t, φ) denotes the path determined by τ → φ + ∫_0^τ λ_t(ξ) dξ, τ ∈ [0, t], then S_{λ_t}φ is taken to be the final point of s(λ_t, φ). Moreover φ ∈ D(λ_t) ⇔ s(λ_t, φ) ⊂ Φ.
The step of evolution along the oriented and piecewise smooth curve from φ to S_{λ_t}φ is the path s(λ_t, φ) ⊂ Φ determined by the formal integration from 0 to τ in (15). In the general case of dissipative transformations with additional degrees of freedom (n ≥ 3) the integration is not closed. An open system is spiraling along an open trajectory either by losing quanta to or acquiring them from its surroundings. Consequently the state space φ ∈ D(λ_t) is contracting by successive applications of λ′_t and λ″_t that diminish the free energy almost everywhere such that R(λ″_t) ⊆ D(λ′_t). The dissipation ceases first at the free energy minimum state where the orbits are closed and the domain and range are indistinguishable for any process.

Definition 5 (see [53]). After a series of successive applications of λ″_t and λ′_t the evolving system arrives at the free energy minimum. Then the open system is in a dynamic state defined as the ε-steady state by a fixed nonzero set ε = {ε_S} such that during τ, if and only if, for all S ∈ Φ, there exists τ_S ∈ P such that for all τ ∈ Φ, it follows that
$$\left|\langle S\rangle_\tau - \tau_S\right| \leq \varepsilon_S. \qquad (16)$$
At the ε-steady state there is no net flux over the period of integration τ ∈ [0, t]. Thus the probability P may fluctuate due to sporadic influx and efflux, but its absolute value may not exceed ε_S, so that the system continues to reside within ε. The set value ε_S defines the acceptable state of computation; otherwise, in the continuum limit ε → 0, the state space would contract indefinitely. In practice the state space sampling by brute-force algorithms or simulated annealing methods is limited by ε_S, for example, according to the available computational resources.
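
In the continuum notation of Definitions 3–5, the transformation S_{λ_t}φ is the end point of a path integral that can be approximated step by step. A minimal forward-Euler sketch (with a hypothetical generator λ_t) accumulates φ + ∫λ_t dξ and stops once the increments fall below a tolerance, in the spirit of the ε-steady state:

```python
def transform(phi, lam, t=10.0, dt=0.01, eps=1e-6):
    """Approximate S phi = phi + integral_0^t lam(xi) dxi by forward Euler,
    stopping once the increment stays below the tolerance eps (cf. (15)-(16))."""
    tau = 0.0
    while tau < t:
        step = lam(tau, phi) * dt
        phi += step
        tau += dt
        if abs(step) < eps:      # effectively at the epsilon-steady state
            break
    return phi

# hypothetical generator: relaxation of phi toward a free energy minimum at 1.0
print(transform(phi=5.0, lam=lambda tau, phi: -(phi - 1.0)))
```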

Definition 6 (see [83]). A family Σ of subsets of the state space Φ is an algebra if it has the following properties:
(i) Φ ∈ Σ,
(ii) Φ_0 ∈ Σ ⇒ Φ_0^c ∈ Σ,
(iii) {Φ_i}, i ∈ [1, k] ⊂ Σ ⇒ ⋃_{i=1}^k Φ_i ∈ Σ.
From these it follows that
(i) ∅ ∈ Σ,
(ii) the algebra Σ is closed under countable intersections and subtraction of sets,
(iii) if k ≡ ∞ then Σ is said to be a sigma-algebra.

Definition 7 (see [83]). A function μ_C : Σ → [0, ∞) is a measure if it is additive for any countable subfamily {Φ_i, i ∈ [1, n]} ⊆ Σ consisting of mutually disjoint sets, such that
$$\mu_C\left(\bigcup_{i=1}^n \Phi_i\right) = \sum_{i=1}^n \mu_C\left(\Phi_i\right). \qquad (17)$$
It follows that
(i) μ_C(∅) = 0,
(ii) if Φ_α, Φ_β ∈ S and Φ_α ⊂ Φ_β, then μ_C(Φ_α) ≤ μ_C(Φ_β),
(iii) if Φ_α, Φ_β ∈ S and Φ_1 ⊂ Φ_2 ⊂ ⋯ ⊂ Φ_n, {Φ_i, i ∈ [1, n]} ∈ S, then μ_C(⋃_{i=1}^n Φ_i) = sup_i μ_C(Φ_i).
Moreover, if S is a sigma-algebra and n ≡ {∞}, then μ_C is said to be sigma-additive. The triple (Φ, S, μ_C) is a measure space.

Definition 8 (see [52]). An energy density manifold is a set M whose elements φ are called energy densities, together with a set Σ of functions μ_i : M → P called energy scales, satisfying
(i) the range of μ_i is an open interval for each μ_i ∈ Σ,
(ii) for every φ_A, φ_B ∈ M and μ ∈ Σ, μ_A(φ_A) = μ_B(φ_B) ⇒ φ_A = φ_B,
(iii) for every μ_A, μ_B ∈ Σ, θ ↦ μ_B(μ_A⁻¹(θ)) is a continuous, strictly increasing function.
Property (i) asserts that each energy scale takes on all values in an open interval in P, while (ii) guarantees that each such scale establishes a one-to-one correspondence between energy levels and real numbers in its range. By means of (iii) the set Σ determines an order relation ≺ on M, written as
$$\phi_A \prec \phi_B \iff \text{there exists } \mu_i \in \Sigma \text{ such that } \mu_A\left(\phi_A\right) < \mu_B\left(\phi_B\right). \qquad (18)$$
Physically speaking, the energy densities are in relation to each other on the energy scale given in the units of θ = k_BT.

Definition 9. Entropy is defined as
$$S = \sum_{j=1} k_B \ln P_j = \sum_{j=1} k_B N_j\left(1 - \sum_{k\neq j} \frac{\Delta V_{jk}}{k_B T}\right), \qquad (19)$$
where the absolute temperature T > 0 and Boltzmann's constant k_B > 0, in accordance with (3).

Definition 10. The change in occupancy N_j is defined to be proportional to the free energy
$$\Delta N_j = -\sum_k \sigma_{jk}\frac{\Delta V_{jk}}{k_B T} \qquad (20)$$
in accordance with (4).

Theorem 11 (the principle of increasing entropy). The condition of stationary state for the open system is that its entropy reaches the maximum.

Proof. From Definitions 9 and 10 and Δμ_jk(ΔN_j)/k_BT = ΔN_j/N_j, it follows that
$$\Delta S = k_B L = -k_B \sum_{j=1} \Delta N_j \sum_k \frac{\Delta V_{jk}}{k_B T} = k_B \sum_{j,k} \sigma_{jk}^{-1}\left(\Delta N_j\right)^2 = k_B \sum_{j,k} \sigma_{jk}\left(\frac{\Delta V_{jk}}{k_B T}\right)^2 \geq 0 \qquad (21)$$
because the squares are nonnegative, the conductance σ_jk > 0 and its inverse, that is, the resistance, σ_jk⁻¹ = m_jk/k_BT > 0, and k_B > 0.
The proof is in agreement with ΔS = k_B Δln P = k_B L ≥ 0 given by (5). The principle of increasing entropy has been proven alternatively by variations δ using the principle of least action δA := δ∫_0^t ℒ dt = −δ∫_0^t TS dt ≤ 0 [53], where the Lagrangian integrand ℒ (kinetic energy), defined by the Gouy-Stodola theorem, is necessarily positive.

Theorem 12. The state space Φ contracts in dissipative transformations.

Proof. As a consequence of Definition 10 and Theorem 11 it follows that
$$\Delta(\Delta S) = k_B \Delta L = 2\sum_{j=1} \frac{\Delta N_j}{N_j}\sum_k \sigma_{jk}\frac{\Delta V_{jk}}{T} = -2 k_B \sum_{j=1} \frac{\left(\Delta N_j\right)^2}{N_j} \leq 0 \qquad (22)$$
because the squares are nonnegative, the occupancies N_j > 0 for nonzero densities-in-energy, the conductance σ_jk ≥ 0, T > 0, and k_B > 0.
When entropy S is increasing, the state space accessible by the process generator L is decreasing. In the continuum limit the theorem for contraction has been proven earlier [53]. In practice the contraction of the state space by a finite automaton is limited to a fixed nonzero set ε = {ε_S}. Then any member in ε is qualified as a solution.

Definition 13. The definition of the class P state space measure μ_P follows from Definitions 7 and 9:
$$\mu_P = \ln P_P = \sum_{j=1}^n N_j\left(1 - \sum_{k\neq j}^n \frac{\Delta\mu_{jk}}{k_B T}\right) + \sum_{j=1}^n N_j \sum_{k=j\pm1}^n \frac{\Delta Q_{jk}}{k_B T}. \qquad (23)$$
The nondissipative (reversible) and dissipative (irreversible) components have been denoted separately. In fact, the indexing k ≠ j is redundant because for the indistinguishable sets k = j there is no difference, per definition Δμ_jj = 0. The conserved term Σ_j N_j(1 − Σ_{k≠j} Δμ_jk) is invariant according to Noether's theorem [31, 32]. The nonzero dissipative term Σ_j N_j Σ_{k=j±1} ΔQ_jk defines class P to contain at least one irreversible deterministic decision with two degrees of freedom (n = 2).

Definition 14. The definition of the class NP state space measure μ_NP follows from Definitions 7 and 9:
$$\mu_{NP} = \ln P_{NP} = \sum_{j=1}^n N_j\left(1 - \sum_{k\neq j}^n \frac{\Delta\mu_{jk}}{k_B T}\right) + \sum_{j=1}^n N_j \sum_{k=j\pm1}^n \frac{\Delta Q_{jk}}{k_B T} + \sum_{j=1}^n N_j \sum_{k\neq j\pm1}^n \frac{\Delta Q_{jk}}{k_B T}. \qquad (24)$$
The conserved components have been denoted separately from the dissipative components, which have been decomposed further into those with two degrees of freedom, using the indexing notation k = j ± 1, and those with three or more degrees of freedom, using the indexing notation k ≠ j ± 1. The conserved and dissipative components with only two degrees of freedom are the same as those in Definition 13. The nonzero dissipative term Σ_j N_j Σ_{k≠j±1} ΔQ_jk defines class NP to contain at least one irreversible decision between at least two choices, that is, with three or more degrees of freedom.

Definition 15. The NP-complete problem contains only dissipative processes with three or more degrees of freedom, that is, Σ_j N_j Σ_{k≠j±1} ΔQ_jk > 0, and none with two degrees of freedom, that is, Σ_j N_j Σ_{k=j±1} ΔQ_jk = 0.

Theorem 16. One has P ⊂ NP.

Proof. It follows from Definitions 13 and 14 that the state space set of class NP is larger than that of class P, as measured by the difference
$$\mu_{NP-P} = \mu_{NP} - \mu_P = \sum_{j=1}^n N_j \sum_{k\neq j\pm1}^n \frac{\Delta Q_{jk}}{k_B T} > 0. \qquad (25)$$
If and only if ΔQ_jk = 0 for all k ≠ j ± 1, the measure μ_{NP−P}(∅) = 0, but this contradicts Definition 14, according to which class NP contains at least one irreversible decision with three or more degrees of freedom, that is, Σ_j N_j Σ_{k≠j±1} ΔQ_jk > 0. Thus class P is a proper (strict) subset of class NP.
The difference between the classes can also be measured by P_NP ln(P_NP/P_P) > 0 in accordance with the noncommutative measure known as Gibbs' inequality, or the Kullback-Leibler divergence, which gives the difference between two probability distributions.
The class NP problem can be reduced to the class NP-complete problem by removing the deterministic steps denoted by k = j ± 1, that is, by polynomial-time reduction [30, 84, 85]. In graphical terms the reduction of the NP problem to the NP-complete problem involves the removal of nodes with less than three degrees of freedom (Figure 6). In geometric terms the non-Euclidean landscape is reduced to a manifold covered by nonequivalent triangles, each having a local Lorentzian metric. In summary, the computational complexity classes are related to each other as P ⊂ NP-C ⊆ NP (Figure 7).

9. Discussion

At first sight it may appear strange to some that the distinction between the computational complexity classes P and NP is made on the basis of the natural law, because both classes contain many abstract problems without apparent physical connection. However, the view is not new [86–90]. The adopted approach to classifying computational complexity is motivated because practical computation is a thermodynamic process, hence inevitably subject to the 2nd law of thermodynamics. Of course, some may still argue that the distinction between tractable and intractable problems ought to be proven without any reference to physics. Indeed, the physical portrayal can be regarded merely as a formal notation to express that computation is a series of time-ordered (i.e., dissipative) operations that are intractable when there are three or more degrees of freedom among interdependent operations. Noncommutative operations and non-abelian groups also formalize time series [91, 92]. The essential character of nondeterministic problems, irrespective of physical realization, is that decisions affect the set of future decisions, that is, the driving forces of computation depend on the process itself. The process formulation by the 2nd law of thermodynamics is a natural expression because the free energy and the flow of energy are interdependent.

The natural law may well be the invaluable ingredient in rationalizing the distinction between the computational complexity classes P and NP. It serves not only to prove that P ⊂ NP but also to account for the computational course itself. For both classes of problems the natural process of computation is directing toward increasingly more probable states. When there are three or more degrees of freedom, decisions influence the choice of future decisions and the computation is intractable. The set of conceivable states generated at the branching points can be enormous, similar to a causal Bayesian network [93]. Finally, when the maximum entropy state has been attained, it can be validated, independently of the path, as the free energy minimum stationary state. The corresponding solution can be verified, independently of the computational history, in a deterministic manner in polynomial time.

Furthermore, the crossing from class P to NP is found precisely where the n-SAT, n-coloring, and n-clique problems, as well as maximizing the shortest path with interdicts, become intractable, that is, when the degrees of freedom n ≥ 3. The efficient reduction of NP problems to NP-complete problems is also understood as operations that remove the deterministic dissipative steps and possible redundant reversible paths. Besides, when the problem is beyond class NP, the natural process does not terminate at the accepting state with emission. For example, the halting problem belongs to the class NP-hard. Importantly, the natural law relates computational time directly to the flow of energy, that is, to the amount of dissipation [14]. Thus the 2nd law implies that nondissipative processing protocols are deemed futile [94].

The practical value of the computational complexity classification by the natural law of maximal energy dispersal is that no deterministic algorithm can be found that would complete the class NP problems in polynomial time. The conclusion is anticipated [95]; nonetheless, its premises imply that there is no all-purpose algorithm to trace the maximal flow paths through noninvariant landscapes. Presumably the most general and efficient algorithms balance execution between exploration of the landscape and progression down along the steep gradients in time. Perhaps most importantly, the universal law provides us with a holistic understanding of the phenomena themselves, so as to formulate questions and computational tasks in the most meaningful way.

Acknowledgments

The author is grateful to Mahesh Karnani, Heikki Suhonen, and Alessio Zibellini for valuable corrections and instructive comments.