The readings presented two different ways to view artificial intelligence. The first saw AI as a “machine brain” that would surpass the human brain; the Notre Dame Magazine article called this the “singularity” and warned that it would herald a machine apocalypse. The second view was that machines can be taught to do a few things very well, better than any human in fact, but are really bad at everything else, so using the word “intelligence” to describe these sorts of algorithms is a misnomer.
I am in the second camp. We have demonstrated that we can “teach” machines to do difficult tasks like play chess or an Atari game better than any human, but these AIs cannot do other things, like identify images, that humans handle easily. I suppose that someone could combine the best of the specialized AIs into one super AI that does a lot of things well, but I still do not think that would make it comparable to human intelligence. As a few of the articles pointed out, many image recognition algorithms can be duped into thinking a picture of television static is actually a picture of a cat. Maybe this just indicates that there is still room for growth in computer vision, but I think there are human qualities that cannot be captured by a machine: not necessarily vision, but compassion or curiosity, for example.
One of the articles claimed that AlphaGo is not an AI because it lacks many of the qualities the author believes are required to qualify something as intelligent. I found his argument interesting and, for the most part, convincing. I don’t know that I would go so far as to claim that AlphaGo and AlphaZero are not artificial intelligence, but I do not think that these sophisticated programs that can play board games indicate that artificial general intelligence is on the way.
Watson is a more interesting example, as this AI demonstrated an ability to understand tricky, nuanced questions and, more often than not, answer them correctly. The articles that looked at the Watson case acknowledged that its natural language processing and its ability to access huge repositories of information incredibly quickly would lend themselves rather easily to other domains such as medicine. Still, I do not think that Watson is the herald of AGI. Referring back to the AlphaGo article, there is a lot more that goes into “intelligence,” such as the ability to interact with the world and a demonstration of curiosity.
Before reading about the Chinese Room problem, I thought that the Turing test was an adequate test of intelligence. The test measures a machine’s capacity to carry on a conversation without the human participant becoming aware that the machine is a machine. The Chinese Room counter-argument is that an AI may be able to accept natural language input and spit out valid output simply by translating the input into something it can process and then following internal rules to generate an acceptable response. The argument holds that this is no different from a person alone in the room pretending to be the AI: he or she would accept natural Chinese input, translate it into something they could understand, generate an acceptable response, translate it back into Chinese, and send it as output.
Proponents of the Chinese Room claim that, just as the person in the room does not understand Chinese, a computer cannot truly “understand” natural language; the machine is simply translating the input and producing the output, blindly following the rules set in place. When I was reading this article, I found the argument quite convincing, but after some thought, I feel that the hole in the argument is the definition of “understanding.” The human may not understand Chinese, but once they translate it into some other language, they can. Similarly, the machine may not understand the English input, but it has rules, like human grammar rules, that dictate how to translate the natural language into code that it can comprehend. I don’t know that the Turing test is a conclusive test of intelligence, but it is certainly a large step toward one.
Personally, I am not worried about AI taking over our lives. I think there are a lot of domains in our lives that AI could augment, but I do not think there is any real worry about AGI taking over. I am convinced by the skeptics who see these AIs that can do spectacular things, like play chess, Go, or Jeopardy, as simply specialized machines. They may be able to perform great feats in the domains they were designed for, but I do not see that translating into a general intelligence anytime soon.
Some of the articles about the mind noted that many experts say the brain can be reduced to a biological computer: there is no “vital spark,” as one article put it, that separates a mechanical computer from our biological ones. I think that computers can be taught many of the things that humans can, but there are many examples of things humans learn, know, or understand that cannot be taught to machines in any way we know of yet. Morality is one such thing. I do not think there is currently any way to give a machine morality, because morality can vary radically between two people. Because humans cannot agree on what constitutes a moral or immoral action, I do not see an AI having morality.