Some incomplete thoughts about Gödel

I saw another article on Gödel’s incompleteness theorems linked from Reddit today. It’s a topic I’ve wanted to write about for some time. Although many articles do a decent job of giving an idea of what the big deal is (and this one is pretty good), they can sometimes leave a misleading impression of what the theorems actually imply. I’m by no means an expert, but hopefully these notes are useful.

[Image: Kurt Gödel]

A common interpretation of Gödel is that he managed to come up with a clever sentence of arithmetic that is “true” but can’t be proven (in the logical system of Russell and Whitehead), and that he further showed that there will always exist such sentences in any consistent formal system that can express some basic principles of arithmetic. For example, here’s Douglas Hofstadter in I Am a Strange Loop talking about the incompleteness theorems:

This weird situation is utterly unprecedented and profoundly perverse. It flies in the face of the Mathematician’s Credo, which asserts that truth and provability are just two sides of the same coin — that they always go together, because they entail each other. Instead, we’ve just encountered a case where, astoundingly, truth and unprovability entail each other. Now isn’t that a fine how-do-you-do?

I Am a Strange Loop, p. 165

But truth in logic is not really that straightforward. A statement in logic is not true or false absolutely, but only with respect to some interpretation. For example, the statement “it is raining” may be true or not depending on the time and place you are interpreting it to apply to. Even apparently obvious truths like “1 + 1 = 2” depend on what you interpret the symbols “1”, “+”, “=”, and “2” to mean.
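To make that concrete, here’s a tiny sketch of my own (nothing to do with Gödel’s actual construction) that evaluates the sentence “1 + 1 = 2” under two different interpretations of its symbols. The names holds, standard and strings are invented purely for the illustration:

```python
# Toy illustration: the same sentence "1 + 1 = 2" evaluated under two
# different interpretations of the symbols "1", "2", "+" and "=".

def holds(interp):
    """Evaluate the sentence 1 + 1 = 2 under a given interpretation.

    An interpretation assigns a meaning (a Python object or function)
    to each non-logical symbol in the sentence.
    """
    one, two = interp["1"], interp["2"]
    plus, eq = interp["+"], interp["="]
    return eq(plus(one, one), two)

# The intended interpretation: ordinary integer arithmetic.
standard = {
    "1": 1,
    "2": 2,
    "+": lambda a, b: a + b,
    "=": lambda a, b: a == b,
}

# A perfectly coherent alternative: the domain is strings, "+" is
# concatenation, and "1" and "2" name the one-character strings.
strings = {
    "1": "1",
    "2": "2",
    "+": lambda a, b: a + b,
    "=": lambda a, b: a == b,
}

print(holds(standard))  # True:  1 + 1 really is 2 here
print(holds(strings))   # False: "1" + "1" is "11", not "2"
```

Neither interpretation is “wrong”; the sentence just isn’t true absolutely, only relative to one of them.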

What Gödel demonstrated was not really a sentence that is true but unprovable, but rather a sentence that is true in some interpretations but false in others.

Proof is concerned with valid statements: those that are true in all possible interpretations. So such a statement is indeed unprovable in any sound proof system (one that proves only valid statements). But doesn’t the statement assert its own unprovability, and so mustn’t it be true?! In short, no. The sentence only asserts its own unprovability under a particular interpretation of that sentence. There are other interpretations in which it doesn’t state anything of the sort, and in fact states something that is false.
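It’s worth pausing on what “valid” means here. First-order interpretations can’t be exhaustively enumerated, but propositional ones can, so a brute-force sketch gives the flavour (the helper names below are my own): a formula counts as valid only if every assignment of truth values makes it true.

```python
# Brute-force validity in propositional logic: a formula is valid
# only if it comes out true under every assignment of truth values.

from itertools import product

def interpretations(variables):
    """Yield every truth assignment for the given variable names."""
    for values in product([False, True], repeat=len(variables)):
        yield dict(zip(variables, values))

def is_valid(formula, variables):
    """True iff the formula holds in every interpretation."""
    return all(formula(**i) for i in interpretations(variables))

# "p or not p" is valid: no interpretation makes it false.
print(is_valid(lambda p: p or not p, ["p"]))            # True

# "p implies q" is true in some interpretations and false in others,
# so it is not valid, and a sound proof system will not prove it.
print(is_valid(lambda p, q: (not p) or q, ["p", "q"]))  # False
```

“p or not p” survives every interpretation; “p implies q” doesn’t, so a sound proof system won’t prove it, even though plenty of interpretations happen to make it true.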

(We know that this must be the case because of Gödel’s earlier, and confusingly named, completeness theorem. This theorem states that for any first-order theory, all valid sentences (those true in every interpretation of the theory) can be proved from its axioms. Gödel’s sentence can’t be proved, so it can’t be valid, and so must be false in at least one interpretation. The two uses of “completeness” here mean different things. The completeness theorem is concerned with semantic completeness, while the incompleteness theorems are concerned with syntactic completeness, which is closely related to the notion of decidability in computing).

Implications for mathematics

When mathematicians try to formalise some area of mathematics, as Russell and Whitehead did for arithmetic, they usually have in mind a particular interpretation. For example, one in which 1 + 1 = 2 means what we expect it to mean, rather than being a statement about the favourite colour of donkeys (green, thanks for asking). Douglas Hofstadter refers to this as “the Standard Model”. They then try to formalise the theory such that all possible interpretations agree with this intended model. That way, the true statements of the intended model are also the valid statements, true in all interpretations. This is useful because, as I said before, proof systems prove valid statements, and we ultimately want to prove things in order to discover which statements are true in our intended model.
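As a toy illustration of proving something without ever appealing to the intended model, here’s a sketch of my own that derives 1 + 1 = 2 purely by rewriting with two Peano-style axioms for addition (a + 0 = a and a + S(b) = S(a + b)). The class names and the add function are made up for the example; real proof systems manipulate formulas rather than Python objects, but the spirit is the same: the symbols get pushed around mechanically, and the intended meaning never enters into it.

```python
# Toy formalisation: derive 1 + 1 = 2 by rewriting with the Peano-style
# axioms a + 0 = a and a + S(b) = S(a + b), never asking what the
# symbols "mean".

from dataclasses import dataclass

@dataclass(frozen=True)
class Zero:
    """The numeral 0."""
    def __repr__(self):
        return "0"

@dataclass(frozen=True)
class S:
    """The successor of another numeral."""
    pred: object
    def __repr__(self):
        return f"S({self.pred!r})"

def add(a, b):
    """Rewrite a + b using only the two addition axioms."""
    if isinstance(b, Zero):
        return a                  # axiom: a + 0 = a
    return S(add(a, b.pred))      # axiom: a + S(b) = S(a + b)

one = S(Zero())
two = S(one)

print(add(one, one))         # S(S(0))
print(add(one, one) == two)  # True: "1 + 1 = 2" falls out of the axioms
```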

What Gödel’s incompleteness theorems show is that this plan breaks down beyond a certain level of complexity. Once a theory reaches a certain level of expressiveness, we can never formalise it in such a way that all of the possible interpretations agree. There will therefore be statements that are true in our intended interpretation but not true in every interpretation (not valid), and so can be neither proven nor refuted by a sound proof system.

(Even this needs some clarification. You can for example take all the truths of arithmetic as axioms and avoid Gödel that way. But the truths of arithmetic are not “effectively enumerable”, so such a theory is not useful – you could never tell what the axioms actually are. We are therefore usually only concerned with “effectively axiomatisable” theories, to which Gödel’s incompleteness theorems apply).
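The point of “effectively axiomatisable” is that checking whether something is a correct proof has to be mechanical. If it is, the theorems can at least be enumerated by brute force, as in this conceptual sketch (all_strings and is_valid_proof are placeholder names of my own, not anything from Gödel’s construction). With all the truths of arithmetic as axioms there is no such checker, and the enumeration never gets off the ground.

```python
# Conceptual sketch of what "effectively axiomatisable" buys you: if a
# proof checker is mechanical, the theorems can be enumerated by brute
# force: generate every candidate string and keep the ones that check
# out. all_strings and is_valid_proof are placeholders, not Gödel's.

from itertools import count, islice, product

ALPHABET = "01"  # pretend formulas and proofs are encoded in binary

def all_strings():
    """Enumerate every finite string over the alphabet, shortest first."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

def is_valid_proof(candidate):
    """Placeholder for a mechanical proof checker.

    For an effectively axiomatised theory this check is computable.
    With all the truths of arithmetic as axioms, no such checker exists.
    """
    return False  # stand-in: accepts nothing

def theorems():
    """Yield the conclusion of every correct proof (here: none at all)."""
    for candidate in all_strings():
        if is_valid_proof(candidate):
            yield candidate

# The enumeration machinery itself works fine:
print(list(islice(all_strings(), 6)))  # ['0', '1', '00', '01', '10', '11']
```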

One consequence of the incompleteness theorems is that there must be nonstandard models of arithmetic that don’t agree with the intended model, and indeed this is the case. The same fact also follows from other results, such as the compactness and Löwenheim–Skolem theorems, but Gödel showed that this is a widespread problem.

The second incompleteness theorem showed that a sufficiently expressive formal system can’t prove its own consistency, and so certainly can’t be proven consistent by anything weaker. You can only prove consistency in a “larger”, more powerful system. This was a bit of a bummer for the project to formalise all of mathematics from a minimal logic that could itself be shown to be consistent.

Implications for philosophy and AI

It is sometimes argued that Gödel’s incompleteness theorems have profound implications for the development of artificial intelligence. Computers are formal systems, so the argument goes, and Gödel has shown that formal systems are inherently limited. Because humans can grasp the obvious truth of Gödel’s problematic sentences, humans presumably are not limited in this way.

The problem is that “grasping” is not the same thing as proving. As I said before, proofs are concerned with validity: statements that are true in all possible interpretations. But to act intelligently in the world you don’t need to be concerned with all possible interpretations, just the one that corresponds to the actual world. Rodney Brooks once said “the world is its own best model”. I would rephrase that as “the world is the model”.

An implication of what I am arguing here is that the reasoning systems of an AI need not be sound. That is, they need not produce only valid conclusions (true in all interpretations), so long as the conclusions they produce are true in the real world. But of course, even that is unnecessary – humans make plenty of mistakes.

Summary

Gödel’s incompleteness theorems are fascinating but also endlessly misunderstood. I think the misunderstandings stem from using the word “true” instead of the more precise term “valid” when talking about them. If you think of the results in terms of validity then they express a relatively straightforward idea: that an expressive language will always allow some amount of ambiguity. There will be statements that are neither always true nor always false, but whose truth depends upon the interpretation.

I suspect this desire to speak in terms of (singular) “truth” is a lingering sign of Platonism in mathematics, but that’s another article for another day.
