Complexity

Volume 2018, Article ID 7982851, 13 pages

https://doi.org/10.1155/2018/7982851

## Evidence of Exponential Speed-Up in the Solution of Hard Optimization Problems

^{1}MemComputing Inc., San Diego, CA 92130, USA
^{2}San Diego Supercomputer Center, La Jolla, CA 92093, USA
^{3}Department of Physics, University of California San Diego, La Jolla, CA 92093, USA

Correspondence should be addressed to Massimiliano Di Ventra; diventra@physics.ucsd.edu

Received 17 April 2018; Accepted 29 May 2018; Published 3 July 2018

Academic Editor: Viet-Thanh Pham

Copyright © 2018 Fabio L. Traversa et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Optimization problems pervade essentially every scientific discipline and industry. A common form requires identifying a solution that satisfies the maximum possible number of a set of many conflicting constraints. Often, these problems are particularly difficult to solve, requiring resources that grow exponentially with the size of the problem. Over the past decades, research has focused on developing heuristic approaches that attempt to find an approximation to the solution. However, despite numerous research efforts, in many cases even approximations to the optimal solution are hard to find, as the computational time for further refining a candidate solution also grows exponentially with input size. In this paper, we show a *noncombinatorial* approach to hard optimization problems that achieves an *exponential speed-up* and finds better approximations than the current state of the art. First, we map the optimization problem into a Boolean circuit made of specially designed, *self-organizing* logic gates, which can be built with (nonquantum) electronic elements with memory. The equilibrium points of the circuit represent the approximation to the problem at hand. Then, we numerically integrate the *nonlinear* ordinary differential equations of the associated circuit until it reaches the equilibrium points. We demonstrate this exponential gain by comparing a sequential MATLAB implementation of our solver with the winners of the 2016 Max-SAT competition on a variety of hard optimization instances. We show empirical evidence that our solver scales *linearly* with the size of the problem, both in time and memory, and argue that this property derives from the *collective* behavior of the simulated physical circuit. Our approach can be applied to other types of optimization problems, and the results presented here have far-reaching consequences in many fields.

#### 1. Introduction

In real-life applications, it is common to encounter problems where one needs to find the best solution within a vast set of possible solutions. These *optimization problems* are routinely faced in many commercial segments, including transportation, goods delivery, software packages or hardware upgrades, network traffic and congestion management, and circuit design, to name just a few [1, 2]. Many of these problems can be easily mapped into *combinatorial optimization problems*, namely, they can be written as Boolean formulas with many constraints (clauses) among different variables (either negated or not, i.e., literals) with the constraints themselves related by some logical proposition [1].

It is typical to write the Boolean formulas as *conjunctions* (the logical ANDs, represented by the symbol ∧) of *disjunctions* (the logical ORs, represented by the symbol ∨), in the so-called *conjunctive normal form* (CNF). The CNF representation is universal in that any Boolean formula can be written in this form [3].

As a simple example, consider a CNF formula with four variables, v₁, …, v₄, five clauses, and fourteen literals (the symbol ¬ indicates negation). The problem is then to find an assignment satisfying the maximum number of clauses, that is, in which as many clauses as possible have at least one literal that is true. Such a clause is then said to be satisfied; otherwise, it is unsatisfied [3]. The problem itself is known as Max-SAT (maximum satisfiability).
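To make the clause counting concrete, the sketch below evaluates an illustrative CNF instance with four variables, five clauses, and fourteen literals (an example of our own construction, not the formula from the original text) by brute force over all 2⁴ assignments, reporting the maximum number of simultaneously satisfiable clauses.

```python
from itertools import product

# Illustrative Max-SAT instance in the DIMACS-like convention:
# a positive integer k stands for variable v_k, and -k for ¬v_k.
# (This formula is our own illustration, not the paper's example.)
clauses = [
    [1, 2, 3],
    [-1, 2, 4],
    [1, -3, 4],
    [-2, -4, 3],
    [-1, 4],
]  # 5 clauses, 14 literals over 4 variables

def satisfied(assignment, clauses):
    """Count clauses with at least one true literal.
    `assignment` maps variable index -> bool."""
    return sum(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

def max_sat_bruteforce(clauses, n_vars):
    """Exhaustively search all 2^n assignments (feasible only
    for tiny instances; the point is the problem statement)."""
    best, best_assignment = -1, None
    for bits in product([False, True], repeat=n_vars):
        a = {i + 1: b for i, b in enumerate(bits)}
        s = satisfied(a, clauses)
        if s > best:
            best, best_assignment = s, a
    return best, best_assignment

best, model = max_sat_bruteforce(clauses, 4)
print(best)  # maximum number of simultaneously satisfiable clauses
```

The exponential cost of this exhaustive search is precisely what makes Max-SAT hard at scale: the loop body is trivial, but the number of assignments doubles with every added variable.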

A Max-SAT problem whose CNF representation has exactly *k* literals per clause is called Max-E*k*SAT. Max-E*k*SAT is a ubiquitous optimization problem with widespread industrial applications. We will focus on its solution as a test bed in the main text and refer the reader to the appendix, where we apply our approach to a wide range of optimization problems, including weighted Max-SAT; see [4] for its application to machine learning and [5] for the solution of the worst cases of a satisfiable problem known as the subset-sum problem.

Max-E*k*SAT lies in the NP-hard class, meaning that any problem in NP can be reduced to it in polynomial time [1]. More informally, we expect that worst-case instances will require resources which grow (at least) exponentially in the input size to solve; moreover, unlike decision problems in NP, there is no known polynomial-time procedure for verifying that a proposed assignment is optimal. Due to this, complete algorithms that attempt to solve Max-E*k*SAT instances exactly quickly become infeasible for large problems. Much research has instead focused on incomplete solvers that perform a stochastic local search, generating an initial assignment and iteratively improving upon it. This approach has proven effective at approximating, and sometimes solving, large instances of SAT and other problems. For instance, in recent Max-SAT competitions [6], incomplete solvers outpace complete solvers by two orders of magnitude on random and crafted benchmarks. However, they too suffer from the same exponential time dependence as complete solvers for sufficiently large or hard instances [7–9].
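For reference, the incomplete solvers mentioned above typically follow a pattern like the WalkSAT-style sketch below: pick an unsatisfied clause, then flip either a random variable in it or the variable whose flip satisfies the most clauses. This is a generic illustration of stochastic local search, not any specific competition solver; the parameters and heuristics are assumptions chosen for brevity.

```python
import random

def walksat_maxsat(clauses, n_vars, max_flips=10_000, p_random=0.5, seed=0):
    """WalkSAT-style local search: repeatedly pick an unsatisfied clause,
    then flip either a random variable in it or the best-gain variable."""
    rng = random.Random(seed)
    assign = {v: rng.choice([False, True]) for v in range(1, n_vars + 1)}

    def clause_sat(c):
        return any((lit > 0) == assign[abs(lit)] for lit in c)

    def num_sat():
        return sum(clause_sat(c) for c in clauses)

    def score_if_flipped(var):
        # Tentatively flip, measure, flip back.
        assign[var] = not assign[var]
        s = num_sat()
        assign[var] = not assign[var]
        return s

    best_score, best = num_sat(), dict(assign)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not clause_sat(c)]
        if not unsat:
            return len(clauses), assign  # every clause satisfied
        clause = rng.choice(unsat)
        if rng.random() < p_random:       # random-walk move (escapes optima)
            var = abs(rng.choice(clause))
        else:                             # greedy move
            var = max((abs(lit) for lit in clause), key=score_if_flipped)
        assign[var] = not assign[var]
        if num_sat() > best_score:
            best_score, best = num_sat(), dict(assign)
    return best_score, best

# Small illustrative instance (DIMACS-style signed integers).
score, model = walksat_maxsat(
    [[1, 2, 3], [-1, 2, 4], [1, -3, 4], [-2, -4, 3], [-1, 4]], 4)
print(score)
```

On hard instances, the number of flips needed for such a search to keep improving its best assignment is exactly where the exponential time dependence cited above reappears.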

It has further been shown, using probabilistically checkable proofs [10], that many classes of combinatorial optimization problems (including Max-E*k*SAT) have an *inapproximability gap*. This means that no polynomial-time algorithm can guarantee an approximation better than a certain problem-dependent fraction of the optimal solution, unless NP = P [10, 11]. In other words, for heuristics to improve on their approximation beyond this limit would require exponentially increasing time. For example, for Max-E3SAT, it has been proved that, if P ≠ NP, then there is no algorithm that can give an approximation better than 7/8 of the optimal number of satisfied clauses [11].
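The 7/8 threshold is tight in a simple sense: a uniformly random assignment already satisfies each 3-literal clause with probability 1 − 2⁻³ = 7/8, since only one of the eight joint literal outcomes falsifies it. The short check below estimates this empirically on a random Max-E3SAT instance; this is a standard textbook fact, not an experiment from the paper, and the instance sizes are arbitrary.

```python
import random

rng = random.Random(42)
n_vars, n_clauses, n_trials = 50, 2000, 200

# Random E3SAT instance: each clause has 3 distinct variables, random signs.
clauses = []
for _ in range(n_clauses):
    vs = rng.sample(range(1, n_vars + 1), 3)
    clauses.append([v if rng.random() < 0.5 else -v for v in vs])

# Average fraction of satisfied clauses over random assignments.
total = 0
for _ in range(n_trials):
    a = {v: rng.choice([False, True]) for v in range(1, n_vars + 1)}
    total += sum(any((l > 0) == a[abs(l)] for l in c) for c in clauses)

fraction = total / (n_trials * n_clauses)
print(fraction)  # close to 7/8 = 0.875
```

So the inapproximability result says something strong: no polynomial-time algorithm can beat, in guarantee, what coin flipping already achieves on average.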

Despite these difficulties, it is often necessary to solve or approximate optimization problems such as these as quickly as possible, and the quality of the approximation can have direct outcomes on the cost to businesses, the speed of our internet connections, or the efficiency of our shipping, to name a few important cases. In what follows, we outline a novel approach to generating approximations to Max-E*k*SAT and demonstrate its efficacy on a variety of instances, both generated to provide the worst cases within the inapproximability gap and drawn from Max-SAT competitions [6].

#### 2. The Memcomputing Approach

In this work, we consider a radically different *noncombinatorial* approach to hard optimization problems. Our approach is based on the *simulation* of *digital memcomputing machines* (DMMs) [5, 12, 13]. A brief introduction to these machines is provided in the appendix; the reader interested in a more in-depth discussion is referred to the extensive papers [5, 12]. The practical realization of DMMs can be accomplished using standard circuit elements and those with memory (time nonlocality, hence the name “memcomputing” [14]).

Time nonlocality allows us to build logic gates that *self-organize* into their logical proposition, *irrespective* of whether the signal comes from the traditional input or output [12]. We call them *self-organizing logic gates* (SOLGs), and circuits built out of them, *self-organizing logic circuits* (SOLCs). Our approach then follows these steps.
(1) We first construct the Boolean circuit that represents the problem at hand (e.g., the Max-E*k*SAT of Figure 1).
(2) We replace the traditional (unidirectional) Boolean gates of this Boolean circuit with SOLGs.
(3) We feed the appropriate terminals with the required output of the problem (e.g., the logical 1 if we are interested in checking its satisfiability).
(4) Finally, the electronic circuit built out of these SOLGs can be described by *nonlinear* ordinary differential equations, which can be solved to find the equilibrium (steady-state) points. These equilibria represent the approximation to the optimization problem [12].
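The actual SOLC equations of motion are given in [5, 12]. As a purely illustrative stand-in for step (4), the sketch below integrates a much simpler continuous relaxation of a small CNF formula: gradient flow on a smooth "unsatisfaction" energy, stepped with forward Euler, whose equilibria at the corners of the hypercube encode Boolean assignments. The energy function, the example formula, and all parameters here are our own assumptions for illustration, not the paper's circuit equations.

```python
# Continuous variables v_k in [-1, 1]; v_k = +1 means "true".
# For a literal with sign s (+1 plain, -1 negated), the factor
# (1 - s*v)/2 tends to 0 when the literal is true; a clause's
# "unsatisfaction" is the product of its factors, and the total
# energy sums over clauses. Equilibria of the gradient flow at
# hypercube corners correspond to assignments.

clauses = [[1, 2, 3], [1, -2, 3], [-1, 2, 3], [1, 2, -3], [-1, -2, 3]]
n = 3
v = [0.0] * n  # deterministic start at the center of the cube

def energy(v):
    E = 0.0
    for c in clauses:
        term = 1.0
        for lit in c:
            s = 1 if lit > 0 else -1
            term *= (1 - s * v[abs(lit) - 1]) / 2
        E += term
    return E

def grad(v):
    g = [0.0] * n
    for c in clauses:
        for i, lit in enumerate(c):
            s = 1 if lit > 0 else -1
            partial = -s / 2  # d/dv of this literal's factor
            for j, other in enumerate(c):
                if j == i:
                    continue
                so = 1 if other > 0 else -1
                partial *= (1 - so * v[abs(other) - 1]) / 2
            g[abs(lit) - 1] += partial
    return g

# Forward-Euler integration of dv/dt = -grad E, clipped to [-1, 1].
dt = 0.1
for _ in range(3000):
    g = grad(v)
    v = [min(1.0, max(-1.0, x - dt * gx)) for x, gx in zip(v, g)]

assignment = [x > 0 for x in v]
sat = sum(any((l > 0) == assignment[abs(l) - 1] for l in c) for c in clauses)
print(sat, "of", len(clauses), "clauses satisfied")
```

The contrast with the local-search picture is the point: here no discrete configurations are enumerated; the trajectory of a single dynamical system moves all variables *collectively* toward an equilibrium that encodes an assignment. The SOLC dynamics of [5, 12] are far richer (they include memory variables and are engineered so that only solutions are stable equilibria), but the overall pattern is the same.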