Proposed by the philosopher John Searle in 1980, the argument holds that a computer executing a program cannot have a mind, understanding, or consciousness, regardless of how intelligent or human-like the program may make the computer's behavior appear.[1]

Thought experiment

  1. Imagine a person who does not understand Chinese isolated in a room with a book containing detailed instructions for manipulating Chinese symbols.
  2. When Chinese text is passed into the room, the person follows the book's instructions to produce Chinese text that fluent speakers outside the room accept as an appropriate response.
  3. According to Searle, the person is simply following syntactic rules without semantic comprehension.
  4. When a computer executes a program, it is likewise just applying syntactic rules, without understanding or thinking (see the sketch after this list).
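To make the purely syntactic character of the process concrete, here is a minimal Python sketch (a hypothetical illustration, not anything from Searle): the rule book is modeled as a lookup table that pairs input strings with output strings by form alone, with no representation of meaning anywhere in the program.

```python
# A minimal sketch of the Chinese room as pure symbol manipulation.
# The "rule book" is a hypothetical lookup table: it matches input
# strings to output strings purely by their shape, never their meaning.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "The weather is nice."
}

def room(incoming: str) -> str:
    """Follow the book's instructions: match the symbols, copy the response.

    Nothing here encodes what any character means; the English translations
    appear only in comments, for the reader, not for the program.
    """
    return RULE_BOOK.get(incoming, "请再说一遍。")  # fallback: "Please say that again."

if __name__ == "__main__":
    # To a fluent speaker outside the room the reply looks appropriate,
    # yet the procedure is identical whether the symbols are Chinese or noise.
    print(room("你好吗？"))
```

On Searle's view, scaling this table up, or replacing it with any more sophisticated program, only increases the complexity of the syntax; it never adds semantics.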

Strong AI

Searle identifies Strong AI as the claim that “the appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.”

The definition hinges on the distinction between simulating a mind and actually having one. Searle writes that “according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind.”

Responses

Well-known replies include the systems reply (understanding is a property of the whole room, not of the person inside), the robot reply (the symbols would acquire meaning if grounded in sensorimotor interaction with the world), and the brain simulator reply; Searle rejects each, maintaining that syntax alone is never sufficient for semantics.

Footnotes

  1. “Chinese room,” Wikipedia: https://en.wikipedia.org/wiki/Chinese_room