Monday, March 26, 2007

Is math tautological?

Scott throws the following teaser:

One example is the surprisingly common view that "all mathematical propositions are tautologies," and therefore can’t convey any new information
and of course I can't help but take the bait. As you surely recall from this discussion, I'm a firm Platonist:
Pythagoras's theorem is a statement about objects that have no width, mass, or time duration. It is not a statement about depressions in sand, sticks, or strings. [...] The fact that in a right triangle, the sum of the squares of the legs equals the square of the hypotenuse was true long before Pythagoras or even planet Earth was around; that it was discovered by some humans (long before Pythagoras, actually) has no bearing on its validity.
However, I had also dug myself into a bit of a hole:
Yes, the boundary between "discovery" and "invention" is indeed blurry; I am not sure I can give a meaningful answer to whether chess was invented or discovered.
And now, thanks to Scott, I think I can dig myself out of that hole. We are going to define two realms: E (for Euclid) and B (for Borges). E contains all the mathematical "tautologies". Thus, if you seed E with the definition of a group, E will also contain all the facts about groups, including the theorems we've discovered, ones we've yet to discover, ones we'll never discover, and ones that are true but unprovable. B is a much more boring set -- it is the collection of all possible statements, true and false, about anything. It includes a statement and proof of Pythagoras's theorem (and its negation with a false proof), a description of the game of chess (and its infinite variations), as well as lots of pure gibberish.

Now I can make a meaningful distinction between invention and discovery. We discover elements of E, but invent elements of B. We discover mathematical truths, but invent proof techniques. The game of chess belongs squarely in B, and thus is an invention.

And what bearing does this have on Scott's comment? Well, E consists of self-contained truths, or tautologies. We can only access a tautology via a proof. The heart of math isn't making true statements, it's finding clever proofs!

18 comments:

... said...

While I find myself generally agreeing with the whole of your thought, I can't help but think the distinction between invention and discovery is still just as blurry as it was, unless we account for the context in which the discovery was made.

Can I assume from your description above that the rules and definitions that seed E are themselves inventions--members of B?

At the risk of grossly misinterpreting what you said, what if I were to seed E with some funky definitions and rules such that, through a particular set of applications, the game of chess somehow emerged as a consequence (now that would be something!)?

Certainly we would have to invent a way to prove that this was so, but then would chess itself be a discovery even though we had already invented it and been playing with its consequences for years?

Forgive me if that sounds too abstract or doesn't make sense in the spirit of the discussion...just a thought.

Kenny said...

What about the axioms of groups? Are they in E (as consequences of something else?) or in B? Are groups "natural" structures that people would have run into anyway, or are they arbitrary the way chess is? Could semigroups or monoids have been the fundamental concept the way groups have turned out to be?

I'd say that there's something objectively natural about groups in a way that there isn't about chess, but I'm actually committed to anti-Platonism and anti-realism about mathematics. I'm not quite sure how to make sense of this naturalness, but it's something I'm thinking about.

Leo said...

Poster #1 and Kenny are raising a point which I glossed over, namely: are the axioms used to seed E elements of B? The answer is yes. There are a lot of boring theories out there (try studying integers that are simultaneously even and odd). The point is that B contains all the junk you could possibly conceive of (and more), while E is pure gold (even when vacuous).

Kenny, what's "natural" about groups is that such simple axioms lead to such a rich theory. The axioms of chess aren't that simple to begin with, and you can modify them to create the many known variants of chess (e.g., turn the board into a torus instead of a grid with boundaries).
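As an aside, the axioms really are as simple as claimed: here is a minimal Python sketch (the helper name is_group and the brute-force approach are my own, purely for illustration; it is only feasible for tiny finite sets) that checks closure, associativity, identity, and inverses directly:

```python
# Brute-force check of the group axioms on a small finite set.
# Feasible only for tiny examples, but it shows how little the axioms ask for.

def is_group(elements, op):
    """Return True iff (elements, op) satisfies the group axioms."""
    elements = list(elements)
    # Closure: op must stay inside the set.
    if any(op(a, b) not in elements for a in elements for b in elements):
        return False
    # Associativity: (a op b) op c == a op (b op c).
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in elements for b in elements for c in elements):
        return False
    # Identity: some e with e op a == a op e == a for all a.
    ids = [e for e in elements
           if all(op(e, a) == a and op(a, e) == a for a in elements)]
    if not ids:
        return False
    e = ids[0]
    # Inverses: every a has some b with a op b == b op a == e.
    return all(any(op(a, b) == e and op(b, a) == e for b in elements)
               for a in elements)

print(is_group(range(12), lambda a, b: (a + b) % 12))     # True: Z/12Z under addition
print(is_group(range(1, 12), lambda a, b: (a * b) % 12))  # False: 2*6 = 0 mod 12 escapes the set
```

A handful of lines of definitions, and already questions like "which subsets of Z/12Z form subgroups?" have determinate answers -- the kind of richness the chess axioms don't deliver.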

Platonist/anti-Platonist debates are a bit odd since we're not arguing about anything empirical or mathematically provable. How does an anti-Platonist view the validity of Pythagoras's theorem before humans even appeared? Did it not exist then? Did it spring into existence when someone first wrote it down?

And what is mathematical (anti-) realism?

Geet said...

Ah...fixed the "..." to actually show my name, something I glossed over when editing the profile.

It would seem there is a degree of "artificiality" in the axioms for chess that does not exist in those for groups, which you argue is justified by the simplicity of the axioms resulting in such a rich theory, whereas the axioms of chess will likely result in a less generally applicable theory. No complaints there.

Not being a mathematician myself (other than the occasional excursion), I'm curious, from a historical perspective, whether there are some global patterns to the motivation behind defining axioms the way they have been for a variety of mathematical objects over the years.

My hunch says that the majority of mathematical objects are defined to account for a deficiency in a previously defined axiomatic system, but I'd be interested in hearing what an actual mathematician's take on that is.

Look forward to more of your posts!

Leo said...

Geet,

you raise an excellent question -- one that I, to my chagrin, am incompetent to answer. Anybody out there know how the axioms of group theory evolved? The Wikipedia entry seems fairly reliable and informative, but one must take these things with a grain of salt.

I can tell you one thing with absolute certainty. The mathematician's quest for abstraction is never frivolous. To put it more colloquially, we never just invent funky s[tuff] for the he[ck] of it. Whenever we discover a new fact, we try to see if it's a special case of a more general phenomenon. If much of linear algebra can be done without referring to the coordinates of vectors, then the results also hold in infinite-dimensional spaces. Now much of linear algebra evolved (I'm assuming) out of the need to solve linear equations. As it matured, it became the study of linear operators. By the time Hilbert and Banach came around, the operators were playing the central role, and it's probably no coincidence that the emerging quantum mechanics was an eager customer for infinite-dimensional tools.

Mathematicians seek nothing more and nothing less than to speak the truth and only the truth (enough pathos and bombast there for you?). Abstraction is a way of seeing older truths in greater generality -- i.e., saying more true things with relatively little effort.

Kenny said...

The "naturalness" of groups is at least in part about simple axioms leading to a rich theory, but it's also in part because groups come up all over the place. If you're interested in the historical stuff, I believe that the first study of groups was only as automorphism groups of field extensions (hence, Galois theory). A bit later they started thinking about permutations of arbitrary sets as groups, and I think they had the notion of group isomorphism at that point too. But I'm pretty sure it wasn't until the 1880's that they started thinking of groups that didn't come together with an action. And it's also only in the 1880's that really any areas of mathematics got axiomatized in the modern sense. (Euclidean geometry always used non-axiomatic reasoning along with the axioms until Hilbert finally gave the first full axiomatization in 1899.)

Anti-realism is just the idea that there isn't a separate realm of entities that mathematics talks about. In some sense, this makes all mathematical existence claims false, though I think the relevant standards we normally judge mathematical claims on are more like the standards for truth in fiction. (It's not actually true that Sherlock Holmes lived in London, but it is true according to the stories, and false in both senses that he lived in Antarctica.)

As for the Pythagorean theorem, I'd say that it would have been a useful tool earlier, just as it is now. But there aren't any mathematical triangles now and there never were, so there's not really something for it to be true about (except in the trivial sense that any statement about all triangles is true if there aren't any).

The reason it's relevant is just that one might think that naturalness of a theory just means that it describes some existing structure that is either very common or somehow very fundamental. Earlier versions of mathematical anti-realism (like formalism) ran into lots of trouble with the objection that they seemed to suggest that any set of axioms whatsoever should be just as worthwhile to study.

Leo said...

Kenny, I don't find it satisfying or meaningful to say that "all mathematical existence claims [are] false". When we say that there exists a continuous nowhere-differentiable function but no monotone nowhere-differentiable function, we are not talking about arbitrary conventions like where Sherlock Holmes lived, but rather about facts that are quite independent of who discovers them, if anyone does at all.

Don't you agree that Sherlock Holmes, while a brilliant creation, is a completely arbitrary one? He didn't have to smoke a pipe or have a sidekick named Dr. Watson, and his catchphrase didn't have to be "elementary, my dear". On the other hand, as soon as you've defined continuity and differentiability, those definitions entail Lebesgue's theorem (the fact I quoted above) -- whether the definer realizes it or not!

Let me put it another way. Suppose Sir Arthur Conan Doyle, instead of dying, went into a deep coma. In the meantime, an enthusiastic reader decided to write a Sherlock Holmes story in his style (I believe there have been such stories after Doyle's death). Now suppose Doyle wakes up from the coma, reads the story, and vehemently protests that Sherlock Holmes is completely out of character, that he couldn't have possibly behaved that way and that the amateur writer had totally misunderstood what Holmes is all about.

Could this happen in math? The delicious fact of the matter is: absolutely not! We now know more about Hilbert spaces than Hilbert did, understand Galois theory better than Galois, and Borel-measurability better than Borel. And there's no fear that a mathematician will rise from the dead and claim that we're misusing or misunderstanding his theory.

Once you define some terms, you spawn into existence an entire theory -- and if it's a deep theory, its implications will continue to be worked out long after your death.

You see my point?

Kenny said...

I do see your point, and this is a major problem for the fictionalist position. But there are major problems for every other position as well. For instance, someone who says that these things really exist has to explain how it is that we manage to come to have knowledge of them. After all, for most things, in order to have knowledge of them we have to either directly experience them, or have experience of things that are sufficiently like them. Gödel suggested that we do have something like perception of mathematical objects, borrowing some of Husserl's ideas about phenomenology, and Penelope Maddy tried to develop this into a more plausible position in her book Realism in Mathematics, but the idea does strike most people as being quite crazy.

Maddy now (well, in her book from ten years ago) takes what she calls a "naturalist" position, according to which mathematical claims are only subject to mathematical questions - but this ends up making seemingly sensible statements (like the continuum hypothesis) look meaningless.

The problem is just that there isn't a non-crazy position about mathematics that's available. So I lean towards one based on the fact that it seems to me to make the fewest crazy claims (though the arguments in favor of this are quite complicated).

Leo said...

someone who says that these things really exist has to explain how it is that we manage to come to have knowledge of them

I have been familiar with this "stumbling block" for some time and quite frankly don't see what the problem is. In what sense do the natural numbers "really exist"? They clearly can't exist in any physical representation, since the universe is finite. On the other hand, we can reason about them in rigorous, well-defined, and compelling ways. Once I've defined the naturals and the primes, I can claim that there are either finitely or infinitely many primes. You could refuse to answer the question on the grounds that it's meaningless, as there aren't any natural numbers and a fortiori no primes. This is a rather boring, cop-out stance. Life turns out to be much more interesting if you engage the question on its terms. Euclid's proof of the infinitude of the primes is an excellent example of crisp, compelling mathematical reasoning -- still admired as a clever and elegant argument over 2000 years later.
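For what it's worth, Euclid's argument is concrete enough to run mechanically. A sketch (the naive trial-division helper smallest_prime_factor is my own, purely for illustration): given any finite list of primes, N = (their product) + 1 leaves remainder 1 under division by each listed prime, so its smallest prime factor must be a new prime.

```python
# Euclid's proof, executed: repeatedly extend a finite list of primes
# with a prime factor of (product of the list) + 1.

def smallest_prime_factor(n):
    """Naive trial division; the returned value is always prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no divisor up to sqrt(n), so n itself is prime

primes = [2]
for _ in range(5):
    N = 1
    for p in primes:
        N *= p
    N += 1                        # N leaves remainder 1 mod every listed prime
    q = smallest_prime_factor(N)  # hence q is a prime not on the list
    assert q not in primes
    primes.append(q)

print(primes)  # [2, 3, 7, 43, 13, 53] -- the list can be extended forever
```

No appeal to any metaphysical realm is needed to run the loop, yet it yields a fresh prime on every pass, exactly as the 2000-year-old argument promises.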

Which do you choose? The nihilistic "nothing exists so I know nothing" or the commonsensical "let's roll up our sleeves and do some math" approach?

mathematical claims are only subject to mathematical questions - but this ends up making seemingly sensible statements (like the continuum hypothesis) end up looking meaningless

To me it's so obvious that "mathematical claims are only subject to mathematical questions" that I have trouble seeing how this needs a whole book to flesh out :)
Why does it make the continuum hypothesis meaningless? It's a perfectly ordinary set-theoretic axiom, which may be either accepted or negated to create a consistent theory of numbers and sets. I am genuinely puzzled by this latter claim -- elaborate?

Kevembuangga said...

To me it's so obvious that "mathematical claims are only subject to mathematical questions" that I have trouble seeing how this needs a whole book to flesh out

Then we are done...
Because "truth" and "exist" are concepts which are meaningful ONLY in mathematics, or at least in some discourse about the world.

The Platonist stance isn't wrong; it is meaningless!

Leo said...

I've stated over and over again that strictly speaking, math has nothing to do with the real world (even though mathematical research is often inspired by real-world problems and has had an enormous impact on mankind's development).

I claim that we can and do meaningfully reason about objects that don't exist in any physical sense and that we don't have any direct experience with.

Kevembuangga -- if Platonism is "meaningless" then how do you make sense of the infinitude of the primes?

And just what would it mean for a self-consistent, non-empirical philosophy to be "wrong"?

Kevembuangga said...

"I claim that we can and do meaningfully reason about objects that don't exist in any physical sense and that we don't have any direct experience with."

Agreed, but then in what sense can you say these non-physical objects exist, and how can you claim that they "exist long before Pythagoras or even planet Earth was around"?
In this statement you are surreptitiously using TWO different meanings of the word "exist", one being the "everyday" evidence, the other being about the proof of an existentially quantified statement.
Those two meanings are about different realms and should not be confused.
My take on this is that the mathematical meaning of "exist" is some sort of language abuse built upon the lay meaning.
Please show me that the mathematical meaning entails MORE than just the fact that someone ACTUALLY exhibited a proof of existence for an object akin to Hilbert's Tau symbol, which only "exists" (in the lay meaning) in the universe of discourse.
That is, if you stick to Platonism you imply that the "Universe of all possible discourses" exists (in a way akin to the lay meaning) and already contains ALL possible symbols and discourses, including of course all possible mathematical statements ever.
As an anti-Platonist I find it useless and even meaningless to conflate the possible occurrence of a proof statement with some sort of "existence" in a non-physical realm.

if Platonism is "meaningless" then how do you make sense of the infinitude of the primes?

Infinitude of the primes is a true statement in the universe of discourse (with the proper axioms, of course) which entails NO MORE than what it says: if you have pinned down a prime in whatever notation, you can build a notation for a larger prime.
I don't feel any need for a more "metaphysical" meaning.
The power of mathematics comes from this juggling with hypothetical notations that you almost never need to actually build, but on which you can rely for consistency and maintenance of the "truth".
Unless of course your axioms and/or rules of inference are inconsistent, you never know, he, he...
Platonism is just FAITH that correctness has been and will be maintained forever, even if at the cost of tweaking the axioms or rules according to current needs and "mathematical evidence", and therefore that "mathematical objects", once proved to exist in the mathematical sense, are "eternal", i.e. no matter which quirks could ever be discovered in the axioms and rules, the objects will be "saved" in the revamping of the theory.
This is probably why Platonists think that this "eternity" extends also to the past, and that these mathematical "objects" existed long before they were discovered.
Just a useless illusion, but if you are comfortable with it, that's fine.
Beware, however, of being fooled by subconscious assimilation between "mathematical existence" and the everyday evidence of physical objects; this is the source of all the fuss and puzzlement about higher-order infinities.

Kenny said...

I don't claim that "nothing exists so I know nothing" - you're right that this would be a pointless sort of game to play. But at the same time, you say that once we have defined the natural numbers and primes, then we can prove things about them - but this seems to suggest that we can define things into existence! This sort of reasoning seems to be at least in some sense related to that which gives you ontological arguments for the existence of god.

The reason this seems to make the continuum hypothesis (or at least some undecidable sentences) meaningless is that there are no methods to determine the truth or falsity of the claim. It seems that the Platonist should say that there is a truth value and we just can't come up with a proof either way. (Though, following Hugh Woodin, this doesn't mean the Platonist has to concede that we have no means of knowing the truth value.) But if Woodin's program fails, and no mathematical methods will reveal the truth value, then the naturalist approach suggests not that we can choose whether CH is true or false, but that there just is no fact of the matter.

Leo said...

But that's the thing -- and I think this addresses Kevembuangga's point also -- I don't think we define things (e.g., primes) into existence, but rather discover a fundamental object, give it a name, and operate with it.

It's perfectly reasonable to imagine an alien intelligence discovering the primes on their own (though the thought of them "discovering" Sherlock Holmes would be rather far-fetched).

Existence of God -- whoa! Where'd that come from? :) The various "proofs" of the existence of God are an embarrassment to the otherwise excellent mathematicians who proffered them (Pascal, Descartes); my only hope is that they were doing so in jest or under pressure from the Church.

I think claiming that the primes exist independently of their human discoverers is a far cry from proving God's existence.

As for CH -- it tends to be shrouded by a veil of mystery, but since Gödel's and Cohen's work, we know that it cannot be decided within ZFC, so ZFC may be augmented with either CH or its negation. The frequent interpretation that CH is "either true or false but we'll never know" is simply inaccurate.
See here for a probabilistic argument for rejecting CH.

Incidentally, I am not aware of any important result in analysis that relies on CH (unlike the axiom of choice, without which key results like Hahn-Banach would collapse). Anyone know of a non-trivial use of CH in analysis?

Kenny said...

I agree - we can't be defining things into existence. But it's really not clear how it could even be possible to "discover a fundamental object" like the natural numbers, because they're not the sort of thing that are subject to empirical observation. If it's possible to prove the existence of these objects through logic alone (as Frege and Russell thought long ago, and people like Bob Hale and Crispin Wright have argued more recently), then it seems at least conceivable that some sort of ontological argument could prove the existence of something like god, regardless of whether or not previous ontological arguments have been any good. (Gödel actually gave an ontological argument as well - it's in Vol. 3 of his collected works. Of course, the thing it claims to prove the existence of bears little resemblance to traditional concepts of god.)

I agree that we can very likely expect alien intelligences to have come up with the concept of prime number - but it also wouldn't be a shock if they had a concept of "greatest possible being", and that certainly wouldn't demonstrate that such a thing exists.

The only reason I brought up proofs of the existence of god is that both sorts of arguments have the same form - from reasoning purely about concepts, we come up with a claim that certain objects must necessarily exist, independently of humans. I find all such reasoning questionable, and therefore interpret mathematics in a way that isn't committed to the existence of special mathematical objects.

I'm not sure why you say "the frequent interpretation that CH is "either true or false but we'll never know" is simply inaccurate." Are you denying that it's either true or false, or saying that we'll know some time? If numbers and sets exist independently of their human discoverers, then CH has got to be either true or false.* It does seem that we can know things mathematically without being able to prove them from the axioms. (For instance, I think it's safe to say that we know that ZFC is consistent.) The question is just whether CH is one of these things we can know independently of the axioms.

(*There is one qualification to this claim that platonism entails a unique truth value. John Steel has suggested the following scenario. Consider two models V and W of ZFC. V is isomorphic to a forcing extension of W, and W is isomorphic to a forcing extension of V, so each is also isomorphic to a submodel of the other. V satisfies CH and W satisfies ~CH. If the actual universe of set theory is like V, then our word "set" may be ambiguous between the things that make up V, and the things that make up the smaller copy of W - the usage of our word may well be fixed entirely by its usage on the hereditarily finite part, which of course is the same in both. So CH would be true on one interpretation of the word "set" and false on the other. Of course, it doesn't seem that something like this is possible with "natural number", because we've got a clear enough understanding of "finite" that every non-standard model clearly contains things that don't count.)

As for results that depend on CH, I know that Wodzicki has shown that some results about the cohomology of something or other depend on the value of the continuum. (Something or other is n-dimensional iff 2^{\aleph_0}=\aleph_n.) I only saw a talk on this once, a couple years ago, so I don't really remember what. And Freiling's arguments are interesting, but I think most set theorists find them much less convincing than the arguments that support large cardinals.

Kevembuangga said...

Kenny and Leo, could you both clarify what you mean by "exist".
May be you disagree because you don't have the same definition.

Leo said...

Kenny -- thanks for the interesting pointers and discussion. Re: God -- I daresay any attempt to construct God via a set-theoretic argument would constitute sacrilege and an affront to any person who takes his religion seriously; it certainly does to yours truly (a Jew). The whole point of a belief in God is that it is based entirely on faith and needs no empirical or logical confirmation; anything within the latter realm is subject to observation and refutation, while God certainly is not.

Re: CH:
I'm not sure why you say "the frequent interpretation that CH is "either true or false but we'll never know" is simply inaccurate." Are you denying that it's either true or false, or saying that we'll know some time? If numbers and sets exist independently of their human discoverers, then CH has got to be either true or false.* It does seem that we can know things mathematically without being able to prove them from the axioms. (For instance, I think it's safe to say that we know that ZFC is consistent.) The question is just whether CH is one of these things we can know independently of the axioms.

Ack -- I wish I had Rudin's memoir in front of me; he goes through a very nice discussion. If both CH and its negation are consistent with ZFC, doesn't that mean that it's meaningless to pose the question in ZFC without additional axioms?

Kevembuangga said...
Kenny and Leo, could you both clarify what you mean by "exist".
May be you disagree because you don't have the same definition.


Well, I was being intentionally vague. Clearly, the integers don't exist in any physical sense. On the other hand, their existence is not as arbitrary or "brittle" as that of Sherlock Holmes.

Consider the following thought experiment. Suppose Euclid had died in his early childhood, and had never proved the infinitude of the primes. Is there any doubt in anyone's mind that this fact would have been discovered by someone else?

Now consider an analogous scenario with Arthur Conan Doyle. Can you really claim that someone else would've conjured up Sherlock Holmes?

Without being able to put my finger on it precisely, there is certainly something persistent about mathematical facts -- and this persistence is quite independent of the discoverer!

Anonymous said...

The reality remains that at the end of the day, saying 2 + 2 = 4 is the same thing as saying 4 = 4, which can be reduced further to "four is four," which can then be reduced further to "four."

Complex mathematical theorems attempt to connect A with B; yes, those proofs (such as Fermat's Last Theorem, etc.) are inventive, and there are real-world applications, I admit. However, at the end of the day, once something is encapsulated in a mathematical proof, the end result is this: "Four."