Another entry from the I-already-posted-this-on-Stack-Exchange Dept. It's rare that I can be clear about a subject, so I wanted to record this for my ego's benefit.

Does Searle's Chinese Room model computers correctly?

Searle invented a thought experiment, the Chinese Room, which he proposes as an argument against Strong AI (that machines think) but not against Weak AI (that machines simulate thinking). In it, a man in a room manipulates Chinese symbols by following an instruction book written in English.

My question is: where does this instruction book come from? We're all aware that humans write the code that drives a computer, or write code that writes more code to drive a computer (i.e. a compiler), and so on.

My clarification (that is, if it is one) of Searle's Chinese Room thought experiment is to have a man (John) who doesn't understand Chinese in a room with two windows. At one window, someone (Mai) submits questions in Chinese; at the other window stands a man (Lao) who does understand Chinese. When a question is submitted, John takes it, walks across the room, and passes it through the window to Lao, who reads it and answers it; John then walks back to the first window and hands the answer to Mai.

To Mai, it appears John understands Chinese (even if he rather strangely refuses to speak it). She is not aware that he has a secret human collaborator, Lao.

I think this models what actually happens in a computer much more clearly. But am I right?

My Answer:

I'm very familiar with the argument Searle makes with his Chinese Room, and he's extremely consistent about what he means it to portray: that our concept of what it means to understand language is mistaken when we try to apply the term to any machine which operates only syntactically. It's primarily a refutation of the notion that passing a Turing Test is sufficient to claim that conscious understanding is present.

As a system administrator by day and an aspiring philosophy student by night, I can tell you with full confidence that yes, John Searle is correct when he claims that computers operate purely syntactically. All they do is manipulate symbols; we still require a human agent to imbue those symbols with meaning. Still, the realization that syntax alone can have such incredible power is the great lesson of our age.
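To make that concrete, here's a tiny Python illustration of my own (nothing from Searle): the same four bytes get "read" as text, as an integer, or as a floating-point number depending entirely on the interpretation we choose to apply. The machine shuffles the same symbols either way; the meaning comes from us.

    import struct

    # One and the same string of bytes. The machine handles these bits
    # identically no matter what we take them to stand for.
    raw = b"\x48\x69\x21\x00"

    # Three different "meanings", each supplied by the interpretation a human
    # chooses to apply, not by anything the machine does with the bytes.
    as_text = raw.decode("latin-1")            # the characters 'H', 'i', '!', NUL
    as_integer = struct.unpack("<I", raw)[0]   # an unsigned 32-bit integer
    as_float = struct.unpack("<f", raw)[0]     # a (tiny, subnormal) IEEE-754 float

    print(repr(as_text), as_integer, as_float)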

The problem with the example you gave above is that it sidesteps the very point of the reductio that the Chinese Room makes.

In the original example, Mai submits her question to a great big box and receives intelligible responses from it (about whose occupant she knows nothing) reasonably quickly. From Mai's perspective, then, the box has passed the Turing Test: Mai believes she's been understood by a conscious being. On John's side of it, he has a set of drawers which contain all sorts of responses and phrases for different questions, and the guidebook he carries simply directs him to an appropriate drawer based on the Chinese message he receives.
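If it helps, here's a minimal Python sketch of how I picture that drawers-plus-guidebook setup. The particular phrases and the chinese_room name are my own invented illustration, not anything Searle specifies; the point is only that the "guidebook" is a bare lookup table mapping symbol strings to symbol strings, with no representation of meaning anywhere in the program.

    # My own toy rendering of the box Mai is talking to: a lookup table of
    # canned Chinese responses. The program matches the incoming symbols
    # against stored patterns and hands back whatever sits in the matching
    # "drawer". Nothing in here represents what the symbols are about.
    GUIDEBOOK = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "你叫什么名字？": "我叫约翰。",        # "What's your name?" -> "My name is John."
        "今天天气怎么样？": "今天天气很好。",  # "How's the weather today?" -> "It's lovely today."
    }

    def chinese_room(question: str) -> str:
        # Purely syntactic: compare the incoming symbols, fetch the matching drawer.
        return GUIDEBOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(chinese_room("你好吗？"))  # 我很好，谢谢。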

The intuition Searle latches on to here is that John doesn't understand Chinese, so Mai's belief that her words are being understood by a conscious being must be wrong. Trying to replay the thought experiment with Lao playing the role of the conscious, understanding responder thus just circumvents the whole argument without addressing the problem it presents.

There are plenty of deep disagreements to be had at this point: we could defend Mai by claiming that John + box + guidebook together make a system which understands Chinese, for instance. Searle himself denies this position is coherent, but not everyone buys his opinion. There's also the issue Daniel Dennett raises, that Searle makes a category mistake in his use of the word "understanding". In Dennett's view, semantics are unnecessary to understanding language, and syntactic operations are all there is to explaining consciousness.

You could also try leveling the charge that Searle's mistake is in thinking that there could even be a set of rules to which a living language such as Chinese could be reduced. This argument, however, has the consequence of denying any possibility that a Turing Test could ever succeed. As a result, leveling this charge requires that you already agree with the result of the thought experiment: that rule-following alone cannot account for our normative understanding of what constitutes "understanding".