Once upon a time, I believed that someday soon, computers would be so complex, the programs so sophisticated, the interconnectedness of networks so all-encompassing, that it was only a matter of time before a true artificial intelligence emerged from this vast computational creation. I read everything I could on the subject, focusing much of my attention on the emerging field of neural networks. I lapped up science fiction stories about intelligent computers (in truth, I lapped up science fiction in general). I also developed a real fondness for any and all attempts at creating a machine that could pass the Turing test.
Alan Turing died in June 1954, less than three weeks before his 42nd birthday. Nevertheless, the computer you are using today, along with many of the programs you use on a day-to-day basis, owes a lot to Alan Turing: computer scientist, mathematician, and cryptographer extraordinaire. Thanks in large part to his work, the Allies were able to break the German Enigma cipher in the Second World War, an important step toward defeating the Nazis. You could also call him the spiritual father of modern artificial intelligence research. Anyone working in the field of artificial intelligence knows about the Turing test.
For many in the field of artificial intelligence research, Turing's famous test proposes a means of determining whether a machine, or a program, could show intelligence -- whether it could think. Here's the short version of the Turing test, which Turing himself actually called "The Imitation Game". A human subject, who will act as judge, is placed in front of a keyboard in an isolated room. Another person, in another location, takes part in what we would today call an instant messaging conversation. There is a third participant, a computer program. The conversation begins with the computer program and the other person chatting with our judge. The human will obviously converse as a human. The computer will imitate a human being engaged in conversation. If the judge cannot tell the human from the machine, the machine passes the test.
Turing's original "Imitation Game" involved a man and a woman hidden in isolation. The idea was to see whether the judge could tell the man from the woman, strictly from the typed conversation.
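The structure of the test lends itself to a toy sketch. Everything below is invented for illustration: the canned "human" and "machine" responders, the naive judge, and the function names are all mine, not anything Turing proposed.

```python
import random

# A toy sketch of the Imitation Game setup described above.
# The two responders are stand-ins with canned replies.

def human_responder(prompt):
    return "Honestly, I'd have to think about that one."

def machine_responder(prompt):
    return "That is an interesting question. Please elaborate."

def run_imitation_game(judge, rounds=3):
    """Hide both responders behind anonymous labels, let the judge
    converse with each, then ask the judge to name the machine."""
    labels = {"A": human_responder, "B": machine_responder}
    # Shuffle so the judge can't rely on ordering.
    if random.random() < 0.5:
        labels = {"A": machine_responder, "B": human_responder}
    transcript = []
    for i in range(rounds):
        prompt = f"Question {i + 1}: what do you make of the weather?"
        for label, responder in labels.items():
            transcript.append((label, prompt, responder(prompt)))
    guess = judge(transcript)  # the judge returns "A" or "B"
    actually_machine = "A" if labels["A"] is machine_responder else "B"
    return guess == actually_machine  # True: the judge caught the machine

# A naive judge: flag the participant whose replies sound stilted.
def naive_judge(transcript):
    for label, _prompt, reply in transcript:
        if "Please elaborate" in reply:
            return label
    return "A"

print(run_imitation_game(naive_judge))  # True: the canned machine gives itself away
```

Of course, the whole point of the real test is that a convincing machine would leave a judge with nothing better than a coin flip; this toy machine fails the test by design.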
It is amazing, really: more than sixty years have passed since Turing proposed his famous test, and we are still trying to create these wonderful thinking machines. There's even a formal competition with a $100,000 prize and an 18-carat solid gold medal for the first person to create a machine whose responses are indistinguishable from a human being's. It's called the Loebner Prize for artificial intelligence, and as yet no one has claimed the grand prize.
For the record, I don't believe that true AI, a sentient, self-aware computer intelligence, is actually possible or will ever happen. I used to think it was inevitable. If you wish to argue with me on the subject, feel free to comment. I could be wrong (it wouldn't be the first time) and a real AI may yet emerge from the complexity that is the Internet. My friend Rob Sawyer wrote a marvelous trilogy ("Wake", "Watch", and "Wonder") about that very idea. His AI emerges from the background noise of the Internet, so to speak. By the way, if you haven't read the series, or read Rob's work, you are truly missing out.
Let's pretend that an AI is possible and that such an intelligence will, some day soon, emerge. Having been wrong before, I could be wrong about this too. Should we fear this emerging intelligence? What should we do about it once we become aware of its existence? Will it be a force for good, or the ruin of the human race? Would an AI, gifted with limitless knowledge and access to the world's computer resources, be humanity's greatest foe?
My gut instinct has always been to treat it as a foe, a sadly human response I admit, but given the price of error, a prudent one. I have said for some time, as many will attest, that if we ever create a real AI, our first priority is to kill it. Or words to that effect. We would still have to decide whether the existence of intelligence qualifies as life, but that's a different argument for a different day. Nevertheless, my feelings have been unwavering for years now. Pull the plug! Turn it off. I've recently softened that stance . . . a little.
I recently watched a documentary on Ray Kurzweil called Transcendent Man. I've also been reading "The Moral Landscape" by Sam Harris. In a sense, both these works have given me a little new insight, feeding, as it were, off each other.
In "The Moral Landscape", Harris argues that we define our moral relationship with other life forms based on our understanding of their capacity to experience pain and suffering, as well as ecstasy and joy. We crush an insect without thought because we don't believe that an insect is able to experience the depth of feeling that a mouse or a bird or a dog can, never mind a human. While we do occasionally sacrifice animals for research, or labour, or food, we do think twice about the treatment they receive while they live.
Perhaps an AI, with its vast intelligence, would examine us through a similar moral lens, understanding that we humans, with our strange and sometimes extreme passions, and our capacity for experiencing everything from great joy to the deepest sadness, aren't merely annoyances that must be done away with. Maybe the very nature of intelligence demands that we examine everything through the prism of morality, seeking first to understand rather than destroy. In assuming that an AI must naturally be humanity's enemy, might we not be closing the door on our greatest friend?
Kurzweil sees AI as inevitable, the natural extension of our own intelligence and part of the Singularity he expects to emerge within his own lifetime. Not the end of the human race, but the next step in our evolution. In his view, artificial intelligence isn't what kills us all, but what allows us all to live forever.
While I may still harbor doubts about the possibility of artificial intelligence, I view the old question differently. Friend or foe? I still don't know, but I'm not as convinced as I once was that the prudent response to the emergence of AI is its destruction.
And so I turn to you . . . assuming that an AI did come into being, what would you do about it?