Smoothing the gap between NP and ∃ℝ

With Ivor van der Hoog and Tillmann Miltzow.

🔥 Proceedings of the 61st Annual IEEE Symposium on Foundations of Computer Science, 1022-1033, 2020.

We study algorithmic problems that belong to the complexity class of the existential theory of the reals (∃ℝ). A problem is ∃ℝ-complete if it is as hard as the problem ETR and if it can be written as an ETR formula. Traditionally, these problems are studied in the real RAM, a model of computation that assumes that real-valued numbers can be stored and compared in constant space and time, with infinite precision. The complexity class ∃ℝ is often called a real RAM analogue of NP, since the problem ETR can be viewed as the real-valued variant of SAT. The real RAM assumption that we can represent and compare irrational values in constant space and time is not very realistic. Yet this assumption is vital, since some ∃ℝ-complete problems exhibit an "exponential bit phenomenon": there exist inputs for which every witness of a solution requires geometric coordinates that need an exponential number of bits when represented in bit notation. The problems that exhibit this phenomenon are NP-hard (since ETR is NP-hard), but it is unknown whether they lie in NP. NP membership is often shown using the famous Cook-Levin theorem, which states that the existence of a polynomial-time verification algorithm for the problem witness is equivalent to NP membership. The exponential bit phenomenon prohibits a straightforward application of the Cook-Levin theorem.
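To make the definitions above concrete, here is a standard textbook illustration (not taken from the paper itself): an ETR instance is an existentially quantified sentence over the reals, and the exponential bit phenomenon is already visible in the classic repeated-squaring family of formulas.

```latex
% An ETR instance: an existentially quantified sentence over the reals,
% where \Phi is a quantifier-free Boolean combination of polynomial
% equalities and inequalities with integer coefficients.
\exists x_1 \, \exists x_2 \cdots \exists x_n :\; \Phi(x_1, \ldots, x_n)

% A standard family illustrating the exponential bit phenomenon via
% repeated squaring: the only witness satisfies x_n = 2^{2^{n-1}},
% whose binary representation requires roughly 2^{n-1} bits.
\exists x_1 \cdots \exists x_n :\;
  (x_1 = 2) \wedge (x_2 = x_1^2) \wedge \cdots \wedge (x_n = x_{n-1}^2)
```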

In this paper we first present a result which we believe to be of independent interest: we prove a real RAM analogue to the Cook-Levin theorem, showing that ∃ℝ membership is equivalent to having a verification algorithm that runs in polynomial time on a real RAM. This makes proofs of ∃ℝ-membership much easier, as verification algorithms on a real RAM are far more versatile than ETR formulas.

We use this result to construct a framework for studying ∃ℝ-complete problems under smoothed analysis. We show that for a wide class of ∃ℝ-complete problems, the witness can be represented with logarithmic bit-precision by applying smoothed analysis to the problem's real RAM verification algorithm. This shows in a formal way that the boundary between NP and ∃ℝ (formed by inputs whose solutions need high bit-precision) consists of contrived inputs.
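For context, the smoothed-analysis setting invoked here follows the standard Spielman-Teng formulation (a sketch of the general definition, not the paper's exact statement): an adversary fixes a worst-case input, which is then randomly perturbed before the cost is measured in expectation.

```latex
% Smoothed complexity in the Spielman--Teng style: the adversary fixes a
% worst-case input x of size n, which is perturbed by random noise g of
% magnitude \delta before the cost measure T is taken in expectation.
T_{\mathrm{smoothed}}(n, \delta) \;=\;
  \max_{x :\, |x| = n} \; \mathbb{E}_{g}\!\left[ T\bigl(x + \delta \cdot g\bigr) \right]
```

In this paper the cost measure of interest is the bit-precision needed by the witness rather than running time, but the adversary-then-perturbation structure is the same.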

We apply our framework to well-studied ∃ℝ-complete recognition problems that exhibit the exponential bit phenomenon, such as the recognition of realizable order types and the Steinitz problem in fixed dimension. Interestingly, our techniques also generalize to problems with a natural notion of resource augmentation (geometric packing, the art gallery problem).

Publications - Jeff Erickson, 28 Jan 2021