Is P=NP a Grave Matter?


This Halloween we discuss a different matter: Is work on P vs. NP whistling past the graveyard?

And a second question is suggested by the punning last words of Mercutio in Romeo and Juliet. Always a jokester despite a mortal wound, he says:

“Ask for me tomorrow, and you shall find me a grave man.”

The pun is on “grave” meaning “serious.” Certainly P vs. NP has been central to our field’s formative period. But will it stay that way tomorrow?

Consider, for instance, the career of Adam Tauman Kalai, whom we just featured. His DBLP page shows recent papers on a variety of applied topics.

These are all vital areas, and it is wonderful that theory opens a window on them. But they are not going to break through on P vs. NP. Well, low-rank matrices are important to lower bounds, so that is a possibility. We’ve kept our ears to the ground for word of new ideas on P vs. NP. The closest we’ve seen is recent work on the Minimum Circuit Size Problem, which is covered at the end of a wonderful long recent article in Quanta on the state of complexity theory.

Complexity theory used to represent a check on scaling up computers to solve moderately larger cases of important hard computational problems. The famous 1979 book by Michael Garey and David Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, opened with a parable of a project worker needing to explain to the boss why a task is hard.

Well, nothing since has prevented computers from scaling up majorly. Right now Google Chrome is telling me (Ken) that one of my tabs — just one of a dozen tabs I have open — is taking up 419 megabytes of memory. A tab I just opened for tracking tonight’s World Series game grabbed 130 MB.

What’s more, the hard problems themselves have yielded in many significant, sizable instances. We noted the success of SAT solvers already seven years ago, and likewise for new Traveling Salesman algorithms. An analogy is that the game of Go is PSPACE-hard on n×n boards, but the standard 19×19 board is plenty complex for us, and yet conquerable by computers.
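To give a feel for how routine serious SAT solving has become, here is a minimal sketch using the python-sat (pysat) package. The package choice and the toy formula are our own illustration, not taken from the solver work noted above.

```python
# Minimal sketch: handing a CNF formula to a modern SAT solver.
# Assumes python-sat is installed: pip install python-sat
from pysat.solvers import Glucose3

# Clauses in DIMACS-style integer form: variable i is literal i, -i its negation.
# This encodes (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3).
clauses = [[1, 2], [-1, 3], [-2, -3]]

with Glucose3(bootstrap_with=clauses) as solver:
    if solver.solve():
        # A satisfying assignment as a list of signed literals, e.g. [1, -2, 3]
        print("SAT:", solver.get_model())
    else:
        print("UNSAT")
```

The same few lines scale to industrial instances with millions of clauses, which is the kind of practical success the paragraph above refers to.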

Other disciplines have found that they can grow around problems that are “Big” but not intractable. Weather forecasting has long regarded kilometer-scale fineness as its “holy grail,” but limitations apart from computing power have steered progress into other areas that our computers are ready for. Deep learning results come from long training computations. The underlying model may be a small number of levels of threshold circuits, yet we have not clamped any tight upper bounds on the corresponding complexity class TC⁰ either. Large language models (LLMs), ones that result from months of computation, are able to solve problems in a universal manner, with varying degrees of confidence, to be sure.
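For readers who want the class pinned down, here is the standard definition of TC⁰, our gloss added for reference:

```latex
% TC^0: constant-depth, polynomial-size threshold circuits.
\mathsf{TC}^0 = \bigl\{\, L : L \text{ is decided by circuit families of }
\mathrm{poly}(n) \text{ size and } O(1) \text{ depth}
\text{ built from unbounded fan-in MAJORITY and } \neg \text{ gates} \,\bigr\}
```

Despite how shallow such circuits are, to our knowledge it has not even been ruled out that problems as hard as any in NEXP have TC⁰ circuits; that is the sense in which no tight upper bounds on the class have been clamped.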

President Joe Biden just rolled out a sweeping executive order Monday that aims to monitor and regulate the risks of artificial intelligence while also harnessing its potential. This marks the latest effort to address a rapidly evolving technology that has sparked concern among world leaders. He did not say the same about complexity theory. The AI warnings are quite apart from P vs. NP. Perhaps he should have included complexity? Perhaps marrying LLMs to SAT solvers and game programs will change the world even more radically than AI is expected to do now. Is this possible?

The focus on whether SAT is in polynomial time hides a more desperate reality: In over half a century of trying, we really have not proved a super-linear lower bound on the complexity of any natural problem.
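To make “super-linear” precise: for an explicit Boolean function family on n inputs with circuit complexity C(f_n), the goal and the current state of the art (to our knowledge) contrast as follows:

```latex
\text{Goal: } C(f_n) = \omega(n)
\qquad\text{vs. known: } C(f_n) \ge c \cdot n
\text{ for a constant } c \text{ only slightly above } 3
```

The best general-circuit lower bounds for explicit functions remain of this linear form, over the full binary basis.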

This question involves us in models and “worlds” that are a lot less robust and approachable than the complexity notions by which our field developed.

One essence is that natural attempts to prove lower bounds for one hard problem, if they succeed, turn around to become actual algorithms that give upper bounds for another hard problem. In the case of discrete logarithm, the two problems are the same. Thus the effort to push lower bounds by “natural” means can actually be self-defeating.
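As a concrete instance of an algorithm giving a nontrivial upper bound for discrete logarithm, here is the classic baby-step giant-step method, a sketch of our own choosing rather than the specific construction alluded to above:

```python
import math

def baby_step_giant_step(g, h, p):
    """Return x with g**x == h (mod p), or None. Takes about sqrt(p) steps."""
    m = math.isqrt(p - 1) + 1
    # Baby steps: tabulate g^j mod p for j = 0..m-1.
    table = {pow(g, j, p): j for j in range(m)}
    # Giant steps: walk h, h*g^-m, h*g^-2m, ... looking for a table hit.
    g_inv_m = pow(g, -m, p)  # modular inverse power; needs Python 3.8+
    gamma = h % p
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = (gamma * g_inv_m) % p
    return None  # h is not a power of g modulo p

# Example: 3^x = 13 (mod 17) has x = 4, since 3^4 = 81 = 13 (mod 17).
print(baby_step_giant_step(3, 13, 17))  # -> 4
```

Any genuinely faster such algorithm would weaken cryptosystems built on the hardness of discrete log, which is the self-defeating tension the paragraph describes.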

We have noticed this self-defeating phenomenon in other veins. Completeness and hardness results often exploit succinct representations of core predicates. Those predicates embody expansive power in small programs or circuits — power that makes it hard to prove that those small programs or circuits lack general solving power.

We have remarked on particular attempts with self-defeating elements. A prominent one is Ketan Mulmuley’s geometric complexity theory (GCT) program.

Mulmuley’s program had real novelty and seeming potential to resolve P vs. NP when it was introduced, for reasons I gave in my own 2002 survey of it. But Ketan himself admitted that he didn’t see resolution on any shorter timescale than 150 years. Maybe there is hope in efforts like this by David Zuckerman that relax the problem in a way that may dodge some barriers and apply high-powered combinatorics.

The vital meta-problems are: how long will the P versus NP problem stay open — and how long will it stay inviting?

[some word and link fixes; added mention of Zuckerman’s work at end]
