It may be on a go-slow, or do something I didn’t ask it to, or simply freeze and refuse to function at all. Whatever the trouble may be, sometimes I think my computer has got it in for me. Then, after the moment has passed, I console myself with the fact that an inanimate object can’t consciously be out to get me.
Obviously there’s “AI” to consider here but, for now, I don’t think my desktop PC falls into the HAL-esque category of cunning computers … not by any stretch of the imagination.
Or maybe this is not such a stretch. It has been suggested that the internet is creating some sort of “World Wide Computer”, a giant grid of computing power that could have a dark side. According to the FBI, millions of personal computers have been commandeered into criminal “botnets”, controlled by hackers and programmed with the potential for mischief.
I was talking to an acquaintance the other day, who recounted how he had recently added a new printer to his office network and it had proceeded to swipe the IP address from another piece of hardware. He described the whole event as though it had actively and consciously “bullied” the other piece of kit off the network.
Bullied? It is an intriguing thought, but what’s really missing here is any actual intent on the part of the machines involved. The perpetrators are human (whether through malevolence or error). Through some misplaced anthropomorphisation we might imagine some sort of malice in our machines. Yet, from naughtiness to criminal deception, we’re still simply projecting uniquely human characteristics and bad behaviour onto an inanimate object. Sure, computers are getting smarter all the time, but can they really think and act for themselves?
Until now, accepted wisdom has been that to catch them out all you have to do is ask: “How do you feel?” They can calculate pi to a million decimal places, but can’t tell you whether they’re happy or not. Unfortunately, that may no longer be the case. Research last year showed that computers are becoming increasingly skilled at disguising themselves as humans.
It may only have been text-based communication, but in a five-minute “chat” one machine (called Elbot, if you were wondering) fooled 25 percent of judges into thinking he – or she? – lived and breathed.
Elbot’s accomplishment brings computers closer to passing what is known as the Turing Test, devised by Alan Turing in 1950. By his measure, a machine that fools more than 30 percent of human judges can be considered to be thinking and could, therefore, be attributed with intelligence.
Of course, this may be as much a commentary on the sad decline of interpersonal communication skills as on the advancement of artificial intelligence. Nevertheless, times are clearly changing. One day (and maybe quite soon) we probably will be able to ask our desktop how it feels. What worries me is this: will we really want to know?
Just imagine coming into work only to find your PC is having a “bad circuit day”, as well as being in a sulk about something your laptop said!