Re-evaluating Searle’s arguments against machine consciousness

Like many students of artificial intelligence, I have long viewed John Searle’s Chinese Room argument with slight annoyance. It seems so obviously wrong-headed, and yet it persists. So I was pleasantly surprised to find myself re-evaluating my interpretation of the argument after watching some old videos of Searle. This is not to say that I suddenly believe it is a good argument, but I now think I understand a little better why Searle makes it.

I find myself suddenly thinking about the Chinese Room again due, in a roundabout way, to the recent death of Bryan Magee. Through the reaction to Magee’s death on Twitter I came across this fairly comprehensive list of TV interviews he conducted on various philosophical subjects. While I had watched some of these before, I had never seen his interview with Searle on the philosophy of language. I recommend watching it, as it’s an engaging introduction to a subject I haven’t previously spent much time on. It’s a testament to how well Searle explains his topic that Magee mostly just lets him talk, without many of the clarifying interruptions that pepper his other interviews.

To understand why I think Searle’s philosophy of language is key to understanding his view of the Chinese Room, watch this talk he gave at Google a few years ago on the topic. In it, he claims that the real distinction between a computing machine and a human is that the computer is purely syntactic, while human understanding is semantic. (This discussion starts at around 15:13 in the video.) He uses this to dismiss what is known as the “systems reply” to the Chinese Room: that even if Searle inside the Chinese Room doesn’t understand Chinese, the system as a whole does. Searle’s objection to this reply is “how does the room get from the syntax of the [Chinese] symbols to the semantics of the understanding [of Chinese]?”.

I’ve heard this objection from Searle before, and it has always puzzled me why he thinks it is a good argument. Consider the (verbal) thought “Donald Trump is a terrible golfer”. My understanding of semantics comes via formal logic, in which such a statement is broken down into constituent parts and the meaning of the statement is a function of the meanings of those parts. For example, the meaning (denotation) of the name “Donald Trump” is the man who is the current occupant of the White House. But when I think this thought, Donald Trump doesn’t actually appear in my brain (thank goodness!). So how exactly does human conscious thought “get to” the semantics? If the semantics of that thought involves a physical human being on the other side of the planet from me, in what sense does my thought involve this semantics in a way that an artificial intelligence could not?
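To make that formal-logic picture concrete, here is a minimal sketch in Python (all names, such as entity_42, are invented for illustration) of how the truth value of a sentence can be computed compositionally from the denotations of its parts, relative to a model. Note that the model only ever contains stand-in tokens: the real Donald Trump no more appears inside this program than he does inside my brain.

```python
# A minimal model-theoretic semantics sketch. The model contains only
# stand-in tokens, not the actual objects denoted -- which is exactly
# the puzzle: the real man never appears in the machine.

# Denotations of names: each name maps to an entity in the domain.
names = {
    "DonaldTrump": "entity_42",  # a stand-in token, not the man himself
}

# Denotations of predicates: each predicate maps to the set of
# entities that satisfy it in this (made-up) model.
predicates = {
    "TerribleGolfer": {"entity_42"},
}

def denotes(sentence: tuple[str, str]) -> bool:
    """The truth value of an atomic sentence (Predicate, Name) is a
    function of the denotations of its parts (compositionality)."""
    predicate, name = sentence
    return names[name] in predicates[predicate]

print(denotes(("TerribleGolfer", "DonaldTrump")))  # True -- in this model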

I think that part of the answer comes from understanding Searle’s background in the philosophy of language. In the Magee interview that I linked to earlier, Searle traces two distinct strands of the philosophy of language (and a third from Chomsky). The first, which connects to the logical semantics I just used, descends from early Wittgenstein’s picture theory of language and the logical positivists. This strand views the meaning of a sentence as being primarily about the conditions under which the sentence is true, hence the connection to logic.

The second strand stems from the later Wittgenstein, running through J. L. Austin to Searle’s own work on speech acts. This strand views the meaning of an utterance as related to the way it is used in practice. It’s not (just) about the truth conditions of statements, but about how words are used to perform actions in the world: asking, promising, ordering, and so on. As Searle says:

For me in this tradition, the fundamental question is: how do we get from the noises that come out of my mouth to all these semantic properties that we attribute to them?

[…]

The way that language relates to the world is a matter of how people do that relating. And the basic term of that […] is the notion of a speech act.

So for Searle the way that language relates to the world (its meaning) is through the way we use language to interact (performing speech acts) within and with that world. Perhaps the same is true of his conception of conscious thought: the semantics of thought has less to do with how thoughts in the brain connect with objects in reality, and more to do with the causal role of those thoughts in producing behaviour.
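As a rough illustration of the contrast between the two strands (purely a sketch, with all names invented for this post): on the truth-conditional view an utterance denotes a truth value, as in the model above, while on the speech-act view its meaning lies in what the utterance does to the world.

```python
# A toy contrast with the earlier sketch: on the speech-act view, the
# "meaning" of an utterance is not a truth value computed against a
# model, but the action it performs. All names here are invented.

from dataclasses import dataclass, field

@dataclass
class World:
    promises: list = field(default_factory=list)
    questions: list = field(default_factory=list)

def perform(speech_act: str, content: str, world: World) -> None:
    """An utterance's meaning shows up as its effect on the world,
    not as a denotation."""
    if speech_act == "promise":
        world.promises.append(content)   # the speaker is now committed
    elif speech_act == "ask":
        world.questions.append(content)  # the hearer is invited to answer

w = World()
perform("promise", "I'll watch the Magee interviews", w)
perform("ask", "Does the room understand Chinese?", w)
print(w.promises, w.questions)
```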

Such a view would suggest that Searle is a functionalist, but in fact he is strongly opposed to functionalism. The crucial difference appears to be that while many functionalists subscribe to multiple realisability, Searle believes that consciousness is fundamentally tied to its realisation as a biological process, like digestion. It is not just that the brain implements a particular causal mechanism, but that it implements a specific biological mechanism. As for what that biological basis might be (or how we’d recognise it when we found it), Searle is mostly silent, leaving the matter to future scientific discoveries. I don’t find this aspect of Searle’s position very convincing, and I’m not sure it represents a falsifiable scientific claim: at what point could we give up looking and declare that there is no specific biological basis for consciousness? Searle could always claim that we just haven’t found the mechanism yet. (But then, is there any convincing falsifiable theory of consciousness?)

My own view is that any account of the meaning of a thought like “Donald Trump is a terrible golfer” must involve the physical object that is Donald Trump, and so a human mind is no more able to “get to” this semantics than any purely formal machine. Human thoughts are meaningful to the extent that they give rise to behaviour consistent with that meaning, but that doesn’t make human minds somehow intrinsically semantic in nature. An embodied artificial mind would have just as much ability to produce meaningful behaviour, and so its thoughts would be meaningful in the same way as a human’s.

Author: Neil Madden
