Stress-Testing Memcomputing on Hard Combinatorial Optimization Problems
- Authors: Forrest Sheldon, Pietro Cicotti, Fabio L. Traversa, Massimiliano Di Ventra
Published Web Location: https://doi.org/10.1109/tnnls.2019.2927480
Memcomputing is a novel computing paradigm that employs time non-local dynamical systems to compute with and in memory. The digital version of these machines (digital memcomputing machines, or DMMs) is scalable and is particularly suited to solving combinatorial optimization problems. One of its possible realizations is by means of standard electronic circuits, with and without memory. Since these elements are non-quantum, they can be described by ordinary differential equations. Therefore, the circuit representation of DMMs can also be simulated efficiently on traditional computers. We have indeed previously shown that these simulations require only time and memory resources that scale linearly with the problem size when applied to finding a good approximation to the optimum of hard instances of the maximum-satisfiability problem. The state-of-the-art algorithms, instead, require exponential resources for the same instances. However, in that work, we did not push the simulations to the limit of the processor used. Since linear scalability at smaller problem sizes cannot guarantee linear scalability at much larger sizes, we have extended these results in a stress test up to 64×10⁶ variables (corresponding to about 1 billion literals), namely the largest case that we could fit on a single core of an Intel Xeon E5-2860 with 128 GB of dynamic random-access memory (DRAM). For this test, we have employed a commercial simulator, Falcon of MemComputing, Inc. We find that the simulations of DMMs still scale linearly in both time and memory up to these very large problem sizes, versus the exponential requirements of the state-of-the-art solvers. These results further reinforce the advantages of the physics-based memcomputing approach compared with traditional ones.
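The key point that non-quantum circuit elements with memory obey ordinary differential equations, and can therefore be stepped forward numerically on a conventional processor, can be illustrated with a minimal sketch. This toy model is an assumption for illustration only: it is not the Falcon simulator and not the actual DMM equations from the paper, just a generic memory-dependent element integrated with forward Euler.

```python
import math

def simulate_toy_memristive_element(v_drive, dt=1e-3):
    """Forward-Euler integration of a toy circuit element with memory.

    Hypothetical model, not the DMM dynamics from the paper. It shows
    why elements described by ODEs can be simulated step by step on a
    traditional computer, with cost linear in the number of time steps:
        dx/dt = v - x        (toy internal-state equation)
        i     = (1 + x) * v  (state-dependent, Ohm-like response)
    """
    x = 0.0          # internal memory state of the element
    currents = []
    for v in v_drive:
        g = 1.0 + x              # conductance depends on the state
        currents.append(g * v)   # response is history-dependent
        x += dt * (v - x)        # one explicit Euler step
    return currents

# Drive with a sine wave; memory makes the response history-dependent.
n = 1000
dt = 10.0 / (n - 1)
drive = [math.sin(k * dt) for k in range(n)]
currents = simulate_toy_memristive_element(drive, dt=dt)
```

Each time step touches every element once, so the work per step, and the memory for the state variables, grows linearly with the number of elements; this is the mechanism behind the linear scaling reported in the abstract.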