Talk:Chinese Room

Moved from talk:Turing_Test:

I do agree that Turing made no claim about what it means if a machine were to pass the Turing Test, but the implications are clear enough. Searle coined the term Strong AI to describe the belief that a machine passing the Turing Test is indeed thinking, in the same sense that we attribute thought to another human. And Searle's Chinese Room refutes that.

Searle's Chinese room doesn't refute anything, because it's a nonsense argument. Even he only defends it half-heartedly these days because Hofstadter and others have so thoroughly debunked it. --Lee Daniel Crocker

What a silly thing to write with nothing backing it up!

OK, I'll bite... Since Hofstadter's reply was the one Searle called the Systems Reply, and since you edited it out for no apparent reason, I'll put it and the other responses here and then we'll talk. My paper was actually on the whole drawn-out debate between Searle and Hofstadter/Dennett. The next part of my original paper was on Hofstadter's reply, but I left that out, since it has little bearing on the Chinese Room article. --Eventi

I was only cleaning up the language of this article--if I removed something, it was probably a mistake (something like a cut with the intention to paste at another location, which was forgotten). Thanks for putting it back. I agree that responses to Searle ought to go into other article(s). I'll see if I can come up with a few. I admit that my totally unsupported comment is just that--it's on a Talk page, after all--and I may well choose to back it up if I can find some time, but the impression I get from those in the AI field I know well--Minsky, Kurzweil, and others with whom I have conversed--is that no one takes Searle seriously except Searle. --LDC


I'm digging up an old report I wrote on a Searle v. Hofstadter/Dennett debate which took place in The New York Review of Books, through Searle's review of The Mind's I and subsequent letters from Dennett. I'm going to cut and paste in the Chinese Room content, and I'd appreciate your review of it, since you're familiar with the argument.

I'll be glad to look at the Chinese Room article. The most difficult part of writing it will be to achieve neutral point of view. -LC

I think the essay was well written, though a little out of date. It's far from neutral point of view, which would be the hardest part to fix. -LC
Thanks... What do you think is out of date? --Eventi

Replies to the Chinese Room

The first of the six replies is called the "Systems Reply," and comes from Berkeley. Those who support this response claim that it is true that the individual in the room does not understand Chinese, but that the system as a whole does: the operator of the room is merely one part of a system that understands Chinese.

Searle's reply to this is, in his words, "quite simple." If a person were to internalize the entire system, memorizing the rules for manipulating Chinese symbols and doing all the necessary manipulations in his head, he would still be unable to understand the Chinese questions, though he could give the correct answers using the system. "All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him." The user of the mental Chinese system still does not understand Chinese, so neither could the subsystem.
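To make the purely formal character of the room concrete, here is a minimal sketch in Python of what the operator does (the two-entry rule book is invented for illustration; Searle's rule book would be vastly larger): match the incoming symbol string against a rule and copy out the prescribed reply, with no step that depends on what the symbols mean.

    # A toy "Chinese Room": the operator mechanically matches input
    # symbols against a rule book and copies out the prescribed reply.
    # The rule book entries are invented for illustration; nothing in
    # the procedure depends on what the symbols mean.

    RULE_BOOK = {
        "你好吗": "我很好",         # a purely formal rule: see these squiggles, emit those
        "你会说中文吗": "会一点",   # another formal rule
    }

    def operate_room(input_symbols: str) -> str:
        """Follow the rules; no knowledge of Chinese is required."""
        return RULE_BOOK.get(input_symbols, "请再说一遍")  # fallback: 'please say that again'

    print(operate_room("你好吗"))  # -> 我很好

On this picture, Searle's internalization move amounts to the operator memorizing RULE_BOOK and running operate_room in his head; since the procedure is unchanged, he argues, no understanding has been added.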

The second reply presented by Searle is called the "Robot Reply," and it comes from Yale. The reply supposes that the Chinese room could be attached to a robot, so that certain symbols coming into the room would come from "sensory organs" on the robot, such as cameras and microphones. Furthermore, some of the symbols passed out of the room would cause the robot to perform various activities, enabling it to interact with the world much as a human does. According to supporters of this reply, such a machine would have genuine understanding.

Searle's reply to this suggestion is that the person inside the room still has no understanding of the input he receives or the output he gives. It makes no difference to the operator of the room whether the input symbols come from a video camera or an interviewer, or whether the output symbols are replies to a question or commands to the robot.

The third reply, coming from Berkeley and MIT, is called the "Brain Simulator" reply. Suppose that the program acting as a mind is not intended simply to answer questions at the level of human understanding, but instead is designed to simulate the functioning of the brain. Such a computer would duplicate the sequence of neuron firings at the synapses of the brain of a native Chinese speaker, in massively parallel fashion, "in the manner that actual human brains presumably operate when they process natural language" (Searle 1980). With this intricate simulation of the brain, wouldn't the causal powers of the brain also be duplicated? Then wouldn't the machine truly understand?

Searle first points out that this reply is actually inconsistent with the strong AI belief that one does not have to know how the brain works to know how the mind works, since the mind is a program that runs on the brain's hardware: "On the assumption of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology" (363). Searle's answer to this suggestion is yet another thought experiment. Imagine that instead of using a computer to simulate the brain's operation, we use a complex system of water pipes and valves to simulate it. A person receives Chinese symbols as before, but instead of looking up rules and calculating the answer, he adjusts the valves in the plumbing. The water pipes return their output, and the person passes the correct symbol back out. The person still has no understanding.

Searle foresees objections to his water-pipe computer, namely the systems response: that it is the conjunction of the operator and the pipes that understands. Again he answers that, in principle, the operator could internalize the system as before, and there would still be no understanding. Searle says that this is because the system simulates the wrong things about the brain: it reproduces only the formal description of the brain's operation, and not its ability to produce "intentional states".
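For concreteness, what the brain-simulator reply imagines is, formally, just a state-update rule applied to numbers, and that is all Searle's water-pipe machine implements too. The sketch below is a hypothetical leaky integrate-and-fire loop in Python (the parameters and random wiring are invented for illustration, not taken from Searle or his critics); whether it runs on silicon or on pipes and valves, every step is a formal operation on quantities.

    # A minimal sketch of what "simulating neuron firings" means formally:
    # each step integrates input, applies a leak, and records which units
    # cross threshold and fire. All parameters and wiring are invented.

    import random

    N = 5                # number of simulated neurons
    THRESHOLD = 1.0      # firing threshold
    LEAK = 0.9           # membrane potential decays each step
    weights = [[random.uniform(0.0, 0.5) for _ in range(N)] for _ in range(N)]
    potential = [0.0] * N

    def step(external_input):
        """One update: fire, integrate, leak, reset. Purely formal."""
        global potential
        fired = [p >= THRESHOLD for p in potential]
        incoming = [
            sum(weights[j][i] for j in range(N) if fired[j]) + external_input[i]
            for i in range(N)
        ]
        potential = [0.0 if fired[i] else LEAK * potential[i] + incoming[i]
                     for i in range(N)]
        return fired

    for t in range(10):
        spikes = step([random.uniform(0.0, 0.3) for _ in range(N)])
        print(t, "".join("*" if s else "." for s in spikes))

Searle's point is that duplicating this formal structure, on whatever substrate, is not the same as duplicating the brain's causal power to produce intentional states.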

One reply, which comes from Yale, takes a different tack from the others, and really asks a different philosophical question. Searle calls it the "Other Minds" reply. How does anyone know that other people understand Chinese, or for that matter, anything at all? The only way to tell whether anything has cognitive ability is by its behavior. Therefore, if a machine can pass the same behavioral tests as a person, one must attribute understanding to the machine.

Searle's response to this objection is that to study a cognitive science, as proponents of strong AI claim to do, you must assume that cognition exists, just as in the physical sciences you assume that there are real, knowable physical objects (366). Cognitive states, such as we assume our own minds create, are not demonstrable by conventional empirical means, unlike the objects of physical science. These cognitive states are demonstrable only by cognition; we only know we think because we know we think.

The last reply printed in The Mind's I is called the "Many Mansions" reply, and it also comes from Berkeley. The proponents of this reply state that Searle's argument presupposes that the hypothetical machine uses only the technology available today. They believe that there will some day be devices manufactured to duplicate the "causal" powers of the brain, which Searle believes to be the real difference between machines and brains. Such machines would have artificial intelligence, and with it cognition.

Searle writes that he has no objection to this claim, but argues that it is fundamentally different from the claim made by strong AI. Searle states that his interest was in challenging the thesis that "mental processes are computational processes over formally defined elements." Since this response redefines the goals of strong AI, it also trivializes the original claim: the new claim is that strong AI is "whatever artificially produces and explains cognition," and not that a formal program can produce and explain cognition.


These hypothetical situations remind me of a common, everyday scenario which has to do with what it means to understand something. When you count, for instance, your footsteps, you do not understand the numbers. You are probably only mentally manipulating the words, and perhaps the symbols, for the numbers. If you truly *understood* the numbers, you would be able to give the number as easily in, say, octal as in decimal. So do you know the number of steps you have walked, or only the name and the symbol for the number in your native language and native script? I have tried counting my steps in Japanese, a language foreign to me, and noticed that I was quite often merely reciting number *words* in the language without thinking of the numbers they represent. I mention that it was Japanese because if I had been counting in, say, German or Polish, the number words would have been nearly identical to English for the purposes of this experiment, and thus the experiment would have proven nothing.
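To illustrate the numeral/number distinction in the comment above, here is a trivial Python snippet: the same count has many surface representations, and knowing one string does not by itself give you the others.

    # The same number of steps rendered in different numeral systems.
    # Someone who has only memorized the decimal string "42" cannot
    # produce the octal string, any more than the man in the room can
    # produce the meaning of a Chinese symbol.

    steps = 42
    print(f"decimal: {steps}")    # -> decimal: 42
    print(f"octal:   {steps:o}")  # -> octal:   52
    print(f"binary:  {steps:b}")  # -> binary:  101010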

When you multiply largish numbers, you almost certainly do not understand what you are doing. The only reason 6*7=42 looks better than 6*7=38 is memorization; if you had not memorized addition and multiplication tables, both would look equally plausible. You do not really multiply the numbers, yet you say you do. The Chinese room experiment is redundant: do not ask about the Chinese room when you have something better to work with.
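In the same spirit, a "multiplier" built from a memorized table answers correctly without doing any arithmetic at recall time, which is exactly the point about 6*7=42 (a minimal Python sketch, with the table standing in for rote-learned school arithmetic):

    # Multiplication "by memorization": the table is built once (the
    # analogue of learning it at school); at recall time no arithmetic
    # happens, and 42 is just the symbol the table prescribes.

    TIMES_TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

    def recite(a: int, b: int) -> int:
        """Look the answer up; nothing is computed at recall time."""
        return TIMES_TABLE[(a, b)]

    print(recite(6, 7))  # -> 42, correct only because the table says so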


