The first of the six replies is called the "Systems Reply," and it comes from Berkeley. Those who support this reply claim that while the individual in the room does not understand Chinese, the system as a whole does. The operator of the room is merely one part of a system that understands Chinese.
Searle's reply to this is, in his words, "quite simple." If a person were to internalize the entire system, memorizing the rules for manipulating Chinese symbols and doing all the necessary manipulations in his head, he would still not understand the Chinese questions, though he could give the correct answers using the system. "All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him." The user of the mental Chinese system still does not understand Chinese, so neither can the internalized subsystem.
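As a concrete picture of what such purely formal symbol manipulation amounts to, here is a minimal sketch in Python. The rule table and example strings are invented for illustration and stand in for Searle's vastly larger rule book:

```python
# A toy "Chinese room": answers are produced by matching and copying
# uninterpreted strings. The table is invented for illustration and
# stands in for Searle's rule book.

RULE_BOOK = {
    "你好吗?": "我很好，谢谢。",   # "How are you?" -> "I am fine, thanks."
    "你会说中文吗?": "会。",       # "Do you speak Chinese?" -> "Yes."
}

def chinese_room(symbols: str) -> str:
    """Return whatever output symbols the rules pair with the input.

    Nothing here represents meaning: the operator (this function) only
    matches shapes and copies strings, and would work identically if
    the characters were arbitrary squiggles.
    """
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗?"))  # a correct answer, with zero understanding
```

The sketch also makes Searle's rejoinder vivid: memorizing RULE_BOOK adds nothing to the system that is not already in the memorizer.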
The second reply presented by Searle is called "The Robot Reply," and it comes from Yale. The reply supposes that the Chinese room could be attached to a robot, and that certain symbols coming into the room would come from "sensory organs" on the robot, such as cameras and microphones. Furthermore, some of the symbols passed out of the room would cause the robot to perform various activities, enabling it to interact with the world much as a human does. According to supporters of this reply, such a machine would have genuine understanding.
Searle's reply to this suggestion is that the person inside the room still has no understanding of the input he receives or the output he gives. It makes no difference to the operator of the room whether the input symbols are coming from a video camera or an interviewer, or whether the output symbols are replies to a question or commands to the robot.
The third reply, coming from Berkeley and MIT, is called the "Brain Simulator" reply. Suppose that the program acting as a mind is not designed simply to answer questions at the level of human understanding, but instead to simulate the functioning of the brain itself. Such a computer would duplicate the sequence of neuron firings at the synapses of the brain of a native Chinese speaker, in massive parallel, "in the manner that actual human brains presumably operate when they process natural language" (Searle 1980). With this intricate simulation of the brain, wouldn't the causal powers of the brain also be duplicated? Then wouldn't the machine truly understand?
Searle first points out that this reply is actually inconsistent with the strong AI belief that one does not have to know how the brain works to know how the mind works, since the mind is a program that runs on the brain's hardware. "On the assumption of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology" (363). Searle's answer to this suggestion is yet another thought experiment. Imagine that instead of using a computer to simulate the brain's operation, we use a complex system of water pipes and valves. A person receives Chinese symbols as before, but instead of looking up rules and calculating the answer, he adjusts the valves in the plumbing. The water pipes return their output, and the person passes the correct symbol back out. The person still has no understanding.
Searle foresees an objection to his water pipe computer, namely the systems reply: that it is the conjunction of the operator and the pipes that understands. Again he answers that, in principle, the operator could internalize the system as before, and there would still be no understanding. Searle says that this is because the system simulates the wrong things about the brain: only the formal structure of its neuron firings, and not its ability to produce "intentional states."
One reply, which comes from Yale, takes a different tack from the others and really asks a different philosophical question. Searle calls it the "Other Minds" reply. How does anyone know that other people understand Chinese, or for that matter, anything at all? The only way to tell whether anything has cognitive ability is by its behavior. Therefore, if a machine could pass the same behavioral tests as a person, you must attribute understanding to the machine.
Searle's response to this objection is that to do cognitive science, as proponents of strong AI claim to do, you must assume that cognition exists, just as in the physical sciences you assume that there are real, knowable physical objects (366). Cognitive states, such as we assume our own minds produce, are not demonstrable by conventional empirical means, unlike the objects of physical science. These cognitive states are demonstrable only by cognition itself; we know we think only because we think.
The last reply printed in The Mind's I is called the "Many Mansions" reply, and it also comes from Berkeley. Its proponents state that Searle's argument presupposes that the hypothetical machine uses only the technology available today. They believe that devices will some day be manufactured that duplicate the "causal" powers of the brain, the powers Searle believes to be the real difference between machines and brains. Such machines would have artificial intelligence, and with it cognition.
Searle writes that he has no objection to this claim, but argues that this is fundamentally different from the claim made by strong AI. Searle states that his interest was in challenging the thesis that "mental processes are computational processes over formally defined elements." Since this response redefines the goals of strong AI, it also trivializes the original claim. The new claim is that strong AI is "whatever artificially produces and explains cognition," and not that a formal program can produce and explain cognition.
These hypothetical situations remind me of a common, everyday sort of scenario that has to do with what it means to understand something. When you count your footsteps, for instance, you do not understand the numbers. You are probably only mentally manipulating the words, and perhaps the symbols, for the numbers. If you truly *understood* the numbers, you would be able to give the count as easily in, say, octal as in decimal. So do you know the number of steps you have walked, or only the name and the symbol for that number in your native language and script? I have tried counting my steps in Japanese, a language foreign to me, and noticed that I was quite often merely reciting number *words* in the language without thinking of the numbers they represent. I mention that it was Japanese because if I had been counting in, say, German or Polish, the number words would have been too close to their English counterparts for the purposes of this experiment and thus would have proven nothing.
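A small sketch of this distinction, using Python's built-in base formatting: the count itself is one thing, and its decimal or octal name is another (the step count 1000 is an arbitrary example):

```python
# The number of steps is one value; "1000" (decimal) and "1750" (octal)
# are merely two different names for it.

steps = 1000                     # the count itself
print(format(steps, "d"))        # "1000": its decimal name
print(format(steps, "o"))        # "1750": its octal name
print(int("1750", 8) == steps)   # True: both names denote the same number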
When you multiply largish numbers, you almost certainly do not understand what you are doing. The only reason 6*7=42 looks better than 6*7=38 is memorization: if you had not memorized the addition and multiplication tables, both would look equally plausible. You do not really multiply the numbers, yet you say you do. In this light the Chinese room experiment is redundant; do not ask about the Chinese room when you have better, more familiar examples to work with.
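To put the same point in code, here is a sketch using a hypothetical rote_multiply that answers single-digit products purely by recall from a prebuilt table, the way memorized times tables do; no arithmetic happens when a question is answered:

```python
# Multiplication by memorization: the table is filled once (standing in
# for rote learning), and every answer afterwards is a lookup, not a
# computation.

TIMES_TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

def rote_multiply(a: int, b: int) -> int:
    """Recall, do not compute, the product of two single digits.

    To this function, 42 is just the symbol filed under (6, 7); had 38
    been memorized there instead, it would look exactly as "correct".
    """
    return TIMES_TABLE[(a, b)]

print(rote_multiply(6, 7))  # 42, retrieved rather than multiplied
```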