Chinese Room

The Chinese room argument is a thought experiment devised by John Searle (1980) to refute the stronger claims made on behalf of strong AI (and, by extension, functionalism).

The basic belief of strong AI is that if a machine were to pass a Turing test, then it could be regarded as "thinking" in the same sense as human thought; put another way, the human mind is a computer (of a sort) running a program. Adherents of this view further believe that systems demonstrating these abilities help us to explain human thought. A third belief, required by the first two, is that the biological material of the brain is not necessary for thought. Searle summarizes this viewpoint, which he opposes, as follows:

The computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in a sense that computers given the right programs can be literally said to understand and have other cognitive states. (Hofstadter and Dennett, 353)

In the Chinese room thought experiment, a person who understands no Chinese sits in a room into which written Chinese characters are passed. The person uses a complex set of rules (established ahead of time) to manipulate these characters and pass other characters out of the room. The idea is that a Chinese-speaking interviewer would pass questions written in Chinese into the room and receive the corresponding answers back, so that from the outside it would appear as if there were a native Chinese speaker in the room.
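The rule-following at the heart of the experiment can be illustrated with a short sketch. The Python fragment below is a toy, hypothetical rule book of the kind Searle describes, not anything from his paper: a lookup table that maps incoming character strings to outgoing ones purely by their form. The phrases and the name operate_room are illustrative assumptions; a convincing rule set would be vastly larger, but the principle is the same.

    # A toy "rule book": incoming Chinese strings are matched purely by
    # shape and mapped to prescribed replies. Nothing in the program
    # represents what any of the characters mean.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
        "你会说中文吗？": "会，说得很流利。",    # "Do you speak Chinese?" -> "Yes, fluently."
    }

    def operate_room(incoming: str) -> str:
        # Apply the rules exactly as the person in the room would:
        # look up the incoming shape and emit the prescribed reply.
        return RULE_BOOK.get(incoming, "请再说一遍。")  # fallback: "Please repeat that."

    print(operate_room("你好吗？"))  # prints 我很好，谢谢。

The mapping from symbols to meanings appears nowhere in the program, only in the comments written for the reader; that gap between fluent-seeming output and absent understanding is precisely what Searle exploits.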

It is Searle's belief that such a system could indeed pass a Turing test, yet the person who manipulated the symbols would obviously understand Chinese no better than he did before entering the room. In the article, Searle proceeds to refute the claims of strong AI one at a time by positioning himself as the one who manipulates the Chinese symbols.

The first claim is that a system which can pass the Turing test understands the input and output. Searle replies that as the "computer" in the Chinese room, he gains no understanding of Chinese by simply manipulating the symbols according to the formal program, in this case the complex set of rules. The operator of the room need not have any understanding of what the interviewer is asking or of the replies that he is producing; he may not even know that a question-and-answer session is going on outside the room.

The second claim of strong AI to which Searle objects is that the system explains human understanding. Searle asserts that since the system functions (it passes the Turing test) even though the operator understands nothing, the system does not understand, and therefore cannot explain human understanding.

Searle's primary argument remains the distinction between the simulation of intelligence and "real" intelligence. The Chinese room attempts to put this distinction on a firm footing, and in the discussion Searle makes the point vivid by noting that no one would expect to get wet by jumping into a swimming pool full of ping-pong balls simulating water.

Critics of the argument are many and varied. One key problem is the use of the term simulation as opposed to emulation, an unstated assumption of the argument. The difference is crucial: if the human mind literally is a program running on a computer, then the claims of strong AI arguably follow of necessity (see Turing machine).

More telling is that the term "understanding" is left undefined, likely deliberately. Many have pointed out (the "systems reply") that it makes no difference whether the person inside the room "understands" what is taking place, because the system as a whole does: the room, the operator, the set of rules, and the database that drives them together understand the problem. Searle focuses on only one part of the overall system, perhaps the least important part, inviting the reader to conclude that he has made his point.

When first published, the argument generated a heated flow of counterclaims and articles. Today most of the AI world appears to consider it unimportant.

Further Reading

  • John Searle (1980). "Minds, Brains, and Programs". In Behavioral and Brain Sciences.

External link

  • Dissertation by Larry Stephen Hauser (http://members.aol.com/wutsamada/disserta); likely the "most exhaustive study of Searle's Chinese room argument and its surrounding intellectual terrain" and "among the most unsympathetic treatments". [Quotes are from the dissertation's preface]


