In his article “Minds, Brains, and Programs,” Searle argues against what he calls strong AI, claiming that a computer program can never have understanding. He gives the example of the Chinese Room: the room will never understand Chinese because it only manipulates symbols (a highly simplified explanation). In the article, he anticipates the counterarguments of his critics to his claim that a program will never truly understand. The fifth reply is given as:
“The other minds reply (Yale). How do you know that other people understand Chinese or anything else? Only by their behaviour. Now the computer can pass the behavioural tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers.”
What this signifies, in simplified terms, is: once we build a computer that simulates the brain, why can we not say that it is conscious, understands, or has intentionality? Searle does not take his answer very far, and this is where I want to offer my critique of his arguments and of strong AI in general.
It is my argument that this problem cannot be solved because it is not a philosophical problem. It is a problem of language. Take the example:
The dolphin painted the Mona Lisa.
You know this is not possible: painting requires “hands” to hold a brush and “air” to dry the paint, while a dolphin has flippers and lives under water. Thus we can say with relative ease that the claim that a dolphin painted the Mona Lisa is impossible. The words “painted/painting” cannot be linked to “dolphin” in this way, because the “doer” of a painting needs to have hands, the work needs to be done on land, and so on. (This critique does not go into the problems of defining what art is; it is a simplified example.)
Now the same can be said about a computer that simulates the brain or mind. It cannot be conscious, understand, or have intentionality, because all these words are defined by “things” not associated with a mechanical computer or its programs. It might seem pedantic or unimportant, but saying a computer is “conscious” is, in a way, like saying a dolphin can make a painting. The problem is not that a computer cannot be conscious or understand; it may well have intentionality one day. Rather, by describing the system in terms that we ascribe to humans, we are giving computers anthropomorphic qualities.
My claim, then, is that we cannot apply anthropomorphic terms to computers; it is an “abuse” of language and creates the wrong connotations. My proposal is that new terms be coined for the revolutions occurring in computer science. When computers first came into being, terms were created that are in everyday use today; creating new terms for strong AI is not an impossible task. The act of anthropomorphizing computers seems odd in the age we live in.