Here's a question I'm hoping my readers will help out with. It seems that every week someone comes out with a "proof" that P=NP, and only slightly less frequently that P!=NP. Most of these are written by amateurs who don't even understand the problem.

Have there been any attempts by reputable mathematicians to resolve the issue? Lindemann had produced several flawed proofs of Fermat's last theorem a century before Wiles got it right, and he was certainly no amateur. Does anyone know of any credible proof attempts, with subtle, nontrivial mistakes?


## 8 comments:

I don't know of proof attempts with subtle bugs. What I do know of are entire research programs that were ultimately shown not to work. Two examples:

* using oracles: Baker, Gill, and Solovay showed in the 1970s that the answer to P vs NP can go either way relative to an oracle; i.e., no proof method that relativizes can resolve it

* using circuit lower bounds: this is the natural proofs paper that just got the Gödel Prize, showing that there are serious obstacles to extending most "standard" circuit lower bound arguments to prove, for example, that NP doesn't have poly-size circuits

But it's an interesting question. As far as I know, failed proofs on either side of the issue usually founder on simple errors, rather than on some substantial point that would be enlightening.

Thanks, Suresh. I'd like to have a better understanding of oracle-free proofs. If I recall correctly, a SAT oracle is simply a "free" (i.e., O(1)-time) call to a SAT solver; thus any problem in NP becomes polynomial-time solvable with a SAT oracle.
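To make the "free call" idea concrete, here's a toy Python sketch (my own, purely illustrative): given a SAT oracle, we can not only decide but actually recover a satisfying assignment with polynomially many oracle calls, by self-reduction — pin one variable per call. The oracle itself is faked by brute force here, since of course we don't have a real one.

```python
# Toy sketch: self-reduction with a SAT oracle. Clauses are lists of
# nonzero ints; literal v means "x_v is true", -v means "x_v is false".
from itertools import product

def sat_oracle(clauses, n):
    """Stand-in for the O(1) oracle: decide satisfiability by brute force."""
    return any(all(any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
                   for clause in clauses)
               for assign in product([False, True], repeat=n))

def find_assignment(clauses, n):
    """Recover a satisfying assignment with n + 1 oracle calls."""
    if not sat_oracle(clauses, n):
        return None
    values = []
    for var in range(1, n + 1):
        # Pin var to True by adding the unit clause [var]; keep the
        # pin only if the formula stays satisfiable.
        if sat_oracle(clauses + [[var]], n):
            clauses = clauses + [[var]]
            values.append(True)
        else:
            clauses = clauses + [[-var]]
            values.append(False)
    return values

# (x1 or x2) and (not x1 or x2) is satisfied by x1 = x2 = True
print(find_assignment([[1, 2], [-1, 2]], 2))  # -> [True, True]
```

Each iteration commits one variable, so the search part is polynomial; all the exponential work is hidden inside the oracle calls.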

What does it mean for a problem to be independent of oracles?

What Baker, Gill and Solovay showed is that there exist oracles A and B such that P^A = NP^A and P^B != NP^B. In other words, any proof technique that "relativizes", i.e., remains invariant under the invocation of oracles, cannot be used to resolve P vs NP.

Scott Aaronson has a nice survey of P vs NP attempts in this paper:

http://www.scottaaronson.com/papers/pnp.pdf

The other sort of attempt that I've heard of depends on the Blum/Shub/Smale model of computation over arbitrary fields. We can treat a Turing machine as having the two-element field as its alphabet, with any field operation allowed as a basic operation. Seen this way, SAT is essentially the decision form of Hilbert's Nullstellensatz: deciding whether a system of polynomials has a common zero (or, under the complementary encoding, a valuation making them all non-zero). I seem to remember that over certain fields the Nullstellensatz problem is actually in P, but I can't recall the details.
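Here's one way the SAT-to-polynomials translation can go (my own encoding, not necessarily the one the comment has in mind): each CNF clause becomes a polynomial over GF(2) that vanishes exactly when the clause is satisfied, so satisfiability is the existence of a common zero.

```python
# Hedged sketch: CNF clauses as polynomials over GF(2).
from itertools import product

def clause_poly(clause):
    """Return p with p(assign) = 0 iff the clause is satisfied.
    A positive literal v contributes the factor (1 + x_v) mod 2
    (zero when x_v = 1); a negative literal -v contributes x_v."""
    def p(assign):
        val = 1
        for lit in clause:
            x = assign[abs(lit) - 1]
            val = (val * ((1 + x) % 2 if lit > 0 else x)) % 2
        return val
    return p

def has_common_zero(clauses, n):
    """SAT check: do all clause polynomials vanish simultaneously?"""
    polys = [clause_poly(c) for c in clauses]
    return any(all(p(a) == 0 for p in polys)
               for a in product([0, 1], repeat=n))

# (x1 or x2) and (not x1): satisfied by x1 = 0, x2 = 1
print(has_common_zero([[1, 2], [-1]], 2))  # -> True
```

The product over literals is where the clause's "or" hides: it is 1 only when every literal is falsified.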

I thought the BSS model allows arbitrary real constants (including uncomputable ones, such as Chaitin's Omega) -- making it distinct from the standard Turing-computable model?..

Regarding the former, check out Mark Braverman's more realistic model of computing with real-valued functions:

http://www.arxiv.org/abs/cs.CC/0502066

Mulmuley and Sohoni have a daunting algebraic-geometry approach, aimed at exploring the differences between permanent and determinant, that appears well-motivated and non-crackpot. Here's a survey by Ken Regan appearing in a mainstream journal:

www.math.cas.cz/~pudlak/alg-geom/gctulm.pdf

This is not to say that I understand the approach, or that it's expected to succeed.
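For readers who haven't seen the two polynomials side by side, here's a small illustrative sketch (mine, not from Regan's survey): the permanent and determinant differ only in the signs attached to each permutation, yet the determinant is computable in polynomial time (e.g., by Gaussian elimination) while computing the permanent is #P-hard (Valiant). That sign-induced gap is what the Mulmuley-Sohoni program tries to explain geometrically.

```python
# Both functions computed straight from the definitions, in n! time;
# only the determinant is known to have a polynomial-time algorithm.
from itertools import permutations
from math import prod

def sign(p):
    """Sign of a permutation via its inversion count."""
    inversions = sum(1 for i in range(len(p))
                     for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def determinant(A):
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def permanent(A):
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[1, 2], [3, 4]]
print(determinant(A), permanent(A))  # -> -2 10
```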

Thanks for the fascinating link, Andy. Maybe we can chat about it in San Diego next week -- will you be around?

Yeah, I should be at STOC. Hope to see you.
