Saturday, January 22, 2011

Chinese Room

Comments
Chris Kam
Ryan Kerbow

References
Minds, Brains, and Programs by John R. Searle. Behavioral and Brain Sciences 3 (3): 417-457
Chinese Room. Wikipedia.

Summary
The Chinese Room is a thought experiment devised by John R. Searle that aims to show the limitations of strong AI: that a computer program, by itself, is not enough to produce understanding or intentionality. The experiment can be summarized as follows:
Suppose that AI researchers have succeeded in constructing a computer that acts as if it understands Chinese. This computer takes Chinese characters as an input, processes these characters following a set of rules (program), and gives Chinese characters as an output. Does the machine understand Chinese?
To answer that question, Searle goes on to make the following argument:
Suppose that a man who speaks no Chinese is locked in a room. In this room, he is given rules (a program) for processing Chinese symbols so that he can mimic the writing of a native Chinese speaker. As he is handed paper with Chinese symbols, he processes them using the rules inside the room and passes the output back.
Source: http://www.jimpryor.net/teaching/courses/mind/notes/searle.html
Even though the room (and thus the man) can process information written in Chinese -- even supposing it can do so as well as any native Chinese speaker -- the man understands no Chinese.
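The rule-following setup described above can be sketched in code. This is a minimal illustration, not anything from Searle's paper: the rule book here is a hypothetical lookup table, and the example phrases are my own placeholders. The point it demonstrates is that the program maps input symbols to output symbols with no semantics attached anywhere.

```python
# A minimal sketch of the room as pure symbol manipulation.
# RULE_BOOK is a hypothetical "program": it maps input symbol
# strings to output symbol strings. No meaning is represented.
RULE_BOOK = {
    "你好吗": "我很好",   # hypothetical rules; the operator (or the
    "你是谁": "我是人",   # machine) need not know what they mean
}

def chinese_room(symbols: str) -> str:
    """Apply the rule book to an input string of symbols.

    The function 'answers' whenever a rule matches, yet nothing
    here understands Chinese -- it is syntax only, no semantics.
    """
    return RULE_BOOK.get(symbols, "")
```

However convincing the outputs look from outside the room, the function is doing nothing but table lookup, which is exactly the intuition the thought experiment trades on.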

Discussion
This thought experiment is one of the strongest critiques of strong AI. What I noticed throughout Searle's paper is that he makes clear which goals strong AI can reach, provided its proponents adhere to the idea that the mind is to the brain what programs are to computers. I really liked the way the author distinguishes simulation from duplication, since it highlights that strong AI is only possible if a system reproduces the "causal powers" of the brain. That said, I agree for the most part that true understanding cannot be achieved by means of computer programs alone. However, I do think that machines can achieve some level of understanding. For example, a thermostat knows to lower the temperature if it is too hot in the room. This is not to say that a thermostat has feelings or beliefs the same way humans do.
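The thermostat example can be made concrete with a small sketch (my own illustration, with an assumed setpoint, not anything from the paper). It shows how thin the thermostat's "knowledge" really is: a single comparison against a setpoint.

```python
def thermostat(temp_c: float, setpoint_c: float = 22.0) -> str:
    """Decide an action from the current temperature.

    Purely mechanical rule-following: compare and act.
    Nothing here resembles a feeling or a belief.
    """
    if temp_c > setpoint_c:
        return "cool"  # "it is too hot" -> lower the temperature
    if temp_c < setpoint_c:
        return "heat"
    return "idle"
```

Whether we call this "some level of understanding" or just attribute intentionality to it as observers is precisely the question Searle raises with examples like this one.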
