The Chinese Room: Can Computers Be Conscious?

Can computers think? Can they understand their actions? Can they ever be conscious? Philosophers asked these questions even before computers were invented, but in the 20th century, a handful of computer and cognitive scientists began to think about them much more deeply. As computers have grown more powerful with the advent of artificial intelligence and machine learning, these questions have become more relevant than ever. No longer just entertaining disquisitions, they’ve become essential to establishing new ethics for the kinds of technology we create and how we interact with that technology.

Today, we’re going to discuss one of these thought experiments, a follow-up to the famed Turing Test. The Chinese Room attempts to answer whether a man-made machine can think and feel just like you or me and, perhaps, gain consciousness. Let’s explore.

The Turing Test

Alan Turing, age 16. By PhotoColor, licensed under CC BY-SA

With the rise of digital computers in the mid-20th century came one of the most influential thinkers in computer science and, by extension, cognitive theory. Alan Turing’s impact on the development of modern computing is immeasurable, but his most famous contribution is the Turing Test.

In 1950, Turing published a paper called “Computing Machinery and Intelligence,” where he introduced the “imitation game.” The game set out to answer a single question: can machines think? Turing recognized that this question posed some challenges based on the difficulty of defining the word “think,” so he instead focused on designing an experiment where a computer must indistinguishably imitate a thinking human.

The test involves three participants: a human interrogator, a human responder, and a machine responder. The three parties are placed in separate rooms, and the interrogator sends both responders written questions. After a series of questions and answers, the interrogator is asked whether they noticed any differences between the responders, for instance, whether one or the other seemed to be male or female. Finally, the interrogator is informed that one of their conversation partners was a machine and is asked to identify which one. If the interrogator cannot reliably tell the difference, the computer has succeeded in the imitation game.
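To make the setup concrete, here is a minimal sketch, in Python, of how such a blind, text-only exchange might be organized. The class names, the canned machine reply, and the interrogator_guess callback are invented for illustration; none of this comes from Turing’s paper.

```python
import random

class HumanResponder:
    """Stands in for the human participant; answers are typed in live."""
    def answer(self, question: str) -> str:
        return input(f"{question}\n> ")

class MachineResponder:
    """Stands in for the machine; any chatbot could sit behind this interface."""
    def answer(self, question: str) -> str:
        return "I would rather not say."  # placeholder reply

def imitation_game(questions, interrogator_guess):
    """Run one round. The interrogator sees only text labelled 'A' and 'B',
    so nothing about the medium reveals which party is the machine."""
    parties = [HumanResponder(), MachineResponder()]
    random.shuffle(parties)                          # hide which label is which
    labelled = dict(zip("AB", parties))
    transcript = {label: [p.answer(q) for q in questions]
                  for label, p in labelled.items()}
    guess = interrogator_guess(transcript)           # interrogator returns "A" or "B"
    machine = next(l for l, p in labelled.items()
                   if isinstance(p, MachineResponder))
    return guess == machine                          # True means the machine was identified
```

The only channel between the parties is text, which is exactly what lets the question of “which one is the machine?” be asked at all.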

Since its inception, the Turing Test has become arguably the most influential concept in the theory of artificial intelligence. It’s been widely praised and criticized, but it’s also been misunderstood. Remember, Turing recognized that attributing thought to machines was a messy prospect. His test assumes that convincingly imitating thought is, in effect, the same as thinking. But his claim has been misconstrued and broadened to apply to an even murkier concept: consciousness.

Turing presciently predicted that his test would be used to identify consciousness in computers, so he did his best to make the distinction himself. He stated that he did “not wish to give the impression that [he thought] there [was] no mystery about consciousness.” After all, the year was 1950. His thought experiment was already decades ahead of the computer technology of his time. Still, the Turing Test led to countless criticisms, many of which also took the form of thought experiments. The Chinese Room is the most famous or, perhaps, infamous of those critiques.

The Chinese Room Thought Experiment

John Searle. By Matthew Breindel, licensed under CC BY-SA

John Searle is a philosophy professor at the University of California, Berkeley. Since the 1960s, Searle has devoted his career to advancing the conversation around computer consciousness and the theory of mind. This work culminated in a 1980 paper called “Minds, Brains, and Programs,” which introduced the Chinese Room to the world. The thought experiment was based on a hypothetical computer that understood Chinese. Users could input questions or sentences in Chinese, and the computer could respond in kind and pass the Turing Test. But Searle posed the question, “Does the machine literally ‘understand’ Chinese? Or is it merely simulating the ability to understand [the language]?” Searle posited that it was the latter and that, therefore, the computer could not think.

To prove this point, Searle asks us to suppose that a non-Chinese speaker is placed in a room with the materials needed to respond to the Chinese inputs, essentially a written version of the computer program. The man receives written communication in Chinese through a slit in the door and, after following the program’s English instructions, produces a response in seemingly fluent Chinese. Searle then asks the same question: does the man understand Chinese, or is he simulating understanding?
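A toy sketch can make Searle’s point vivid: the short Python program below produces replies purely by matching input symbols against a rule table, just as the man follows his written instructions, with no representation of meaning anywhere in the system. The specific phrases in the rule table are invented for illustration.

```python
# A deliberately simple "Chinese Room": replies are produced by pure symbol
# matching. Nothing in this program represents what the characters mean.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "It's nice today."
}

def chinese_room(message: str) -> str:
    # Follow the rule book mechanically; neither this function nor a person
    # executing it by hand needs to know what the characters mean.
    return RULE_BOOK.get(message, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # a fluent-looking reply, produced by lookup alone
```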

In Searle’s eyes, the answer was clear: the man is simulating understanding. Searle then argued that, without understanding its purpose and actions, we cannot describe what the machine is doing as thinking. He then took his argument one step further by discussing the future of artificial intelligence.

He separated AI into two distinct categories: strong and weak. Strong AI genuinely comprehends what it is doing rather than merely simulating it. Searle argued that only a computer with the ability to understand its actions could have a mind and, therefore, consciousness. Weak AI, on the other hand, can only simulate understanding and thus cannot think, comprehend, or possess consciousness. Based on this logic, Searle concluded that strong AI is impossible.

Replies to the Chinese Room

The Chinese Room immediately became a defining thought experiment for philosophers of cognitive science and artificial intelligence. It’s been hailed in modern times as the most widely discussed philosophical argument in cognitive science in the last 40 years, but much of the discussion has been criticism. Dozens of critics have rejected it for several key reasons. While the basis for the objections is broad, most of the responses can be categorized into a few arguments.

The first category of responses holds that Searle’s definition of a mind is restricted by its basis in human biology and anatomy. The human mind comprises several parts, but it is often incorrectly thought of as a single cohesive organ: the brain. Many cognitive scientists claim that the Chinese Room is built on that fallacy. The man in the room may not understand Chinese, but that’s because he’s just one piece of the puzzle, one component of the mind. The man is a part of the system, and the system understands the language. As individual components, they are useless, but together, as a single mind, they comprehend. If, as Searle argues, the Chinese Room is a perfect analog for the computer, then it stands to reason that the computer, as a complete system, also understands Chinese, even if no individual component does.

The next important category of rebuttal concerns the meaning of symbols. Part of Searle’s argument was that computers have a syntactic knowledge of symbols, in this case the Chinese characters, but lack a semantic understanding. In other words, the machine knows that a particular Chinese character refers to fire, and it may even be able to spit out a string of English words that describe fire, but it still lacks real knowledge of what fire is. However, opponents point out that, while the man in the room may look at the Chinese characters and see random, meaningless squiggles, the symbols still carry semantic meaning. The humans who designed the program understood the semantics, and whoever is in the Chinese Room, man or machine, derives the meaning from those creators. This rebuttal is linked to the previous one because it relies on an expanded interpretation of the mind as something that isn’t limited by its physical borders.

One particularly intriguing category of responses is that the Chinese Room is not an accurate representation of a mind because it lacks the automatic biological feedback of a system like the human brain. Proponents of this critique claim the system needs to be redesigned to reflect the mind’s intuitive, autonomous nature. Searle responded with a different variation of the thought experiment, which he explained in detail:

“Imagine that instead…we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program…which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after…turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes. Now, where is the understanding in this system? It takes Chinese as input, it simulates the formal structure of the brain’s synapses, and it gives Chinese as output. But the man certainly doesn’t understand Chinese, and neither do the water pipes.”

This particular argument tends to hold weight with a certain kind of cognitive scientist who believes that there is some indescribable phenomenon in organic brains that creates knowledge and consciousness. Still, while all of the above arguments are valid, there is one more argument that most deftly punctures Searle’s claim.

The Problem of Other Minds

The most essential reply to Searle’s Chinese Room is based on a cognitive theory called the problem of other minds. Searle claims that human consciousness is self-evident, but this claim runs up against a long-standing puzzle in the philosophy of mind.

The problem of other minds posits that it’s impossible to know the nature of any consciousness besides one’s own. We can observe others’ behavior and assume that, like ourselves, they are conscious, but how can that be proven? Researchers point out that the human mind may merely be responding to symbols and information inputs in a complex way, much like a computer. So we can only study other minds through behavior, for example, by giving them a version of the Turing Test. Alan Turing similarly noted that we never consider the problem of other minds when dealing with each other. He wrote that “instead of arguing continually over this point, it is usual to have the polite convention that everyone thinks.” The Turing Test extends this “polite convention” to machines.

Searle’s critics argue that he’s holding computers to a higher standard than we would hold an ordinary person. After all, has anyone ever asked you to prove your own consciousness?

So, if we can’t prove the existence of consciousness in humans, how can we ever prove it in machines? The prevailing scientific view is that we can’t, and that Searle’s claims rest on assumptions that violate this standard.

Similarly, many people believe that simulating a task can amount to actually doing it. For example, when asked what three times three is, you probably know the answer is nine without running through the multiplication in your head. A child learning multiplication for the first time may need to count out the problem, but eventually, they memorize the answer and begin, in effect, simulating multiplication. Perhaps simulation of a task is simply the next step in knowledge after learning. Computers like the one Searle described don’t need to learn; they’re programmed with all of the knowledge they will ever need. So, when a machine simulates a task, perhaps it is, in fact, doing that task.
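As a loose illustration (a hypothetical sketch, not anything drawn from Searle’s paper or its replies), the difference between working an answer out and “simulating” it from memory resembles the difference between computing a product step by step and recalling it from a lookup table; from the outside, the behavior is identical.

```python
def multiply_by_counting(a: int, b: int) -> int:
    """The child's method: work the answer out by repeated addition."""
    total = 0
    for _ in range(b):
        total += a
    return total

# The practiced adult's method: the answer is simply recalled, not computed.
MEMORIZED = {(a, b): multiply_by_counting(a, b) for a in range(10) for b in range(10)}

def multiply_from_memory(a: int, b: int) -> int:
    return MEMORIZED[(a, b)]

# Both behave identically from the outside:
assert multiply_by_counting(3, 3) == multiply_from_memory(3, 3) == 9
```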

With modern AI, however, the problems with the Chinese Room become even more resonant. Through machine learning, today’s AI systems can find new solutions to problems they once failed at. This requires a computer to diagnose circumstances it doesn’t comprehend and then grow to understand them. As the problem of other minds suggests, we will probably always be incapable of proving whether an AI is conscious, but it seems the question of understanding may yet be resolved.

To Searle’s credit, he readily admits that the advances in computer technology have changed the argument. Still, he holds that old-school computers that simply executed programs without ever dramatically changing or learning never understood their processes and lacked any possibility of consciousness. He believes that progress in artificial intelligence may produce consciousness, but it would require a different model of computing.

So, what do you think? Despite the extensive pushback, many philosophers and cognitive scientists agree with Searle’s claim that computers lack understanding and consciousness. But do you believe his argument is sufficient to show that computers lack understanding? Do the problem of other minds and the inability to prove consciousness render Searle’s argument irrelevant?

Looking to the future, will computers ever be conscious? If they are, how will we know? With the current rate of advancement, we will undoubtedly watch AI dramatically eclipse human capabilities, but does extreme intelligence prove thought? Let us know what you think in the comments and, as always, thanks for watching.
