The day I realised that machines were going to take over the world was March 29, 2011, when I saw a YouTube video of two quadcopter drones playing ping-pong.
Yes, I know, ping-pong is an unlikely choice of combat technique. But it was creepy. They were obviously tracking the movement of the ball, responding swiftly and accurately to its movement, and taking appropriate steps to return it. There was intelligence (albeit of a limited sort) and an ability to respond to and interact with the real, physical universe in a controlled, purposeful way – all in machines about the size of a side-plate. Today ping-pong, I thought: tomorrow, auto-tracking turret-mounted plasma cannons, fiery death blazing from implacable faceless robots, and all that. The human world coming to an end not with a bang, or a whimper, but the irritating mosquito whine of electric-powered rotor blades.
Anyway. A mere three years after my insight into our terrible future, Elon Musk, the man behind the Tesla electric car company, has finally caught up. “We need to be super careful with AI,” he tweeted. “Potentially more dangerous than nukes.” He’d been reading a book entitled Superintelligence, by an Oxford University professor called Nick Bostrom, and it had clearly given him the heebie-jeebies.
The thing is, I’ve read it too, and he’s right to have the heebie-jeebies. We’ve all been hoodwinked by science fiction for too long. We’ve seen machine intelligences conquer the world, only to be defeated because they made some startlingly stupid error, such as building all their killer robots as fragile humanoids, instead of, say, as a heavily armed orbital platform which can drop lots of hydrogen bombs on the hero’s head from the comfort of space. But that’s because we, as humans, think that human intelligence is the best. Hollywood is filled with feel-good messages about how robotic logic is no match for fuzzy, warm, human irrationality, and how the power of love will overcome pesky obstacles such as a malevolent superintelligent computer. Unfortunately there isn’t a great deal of cause to think this is the case, any more than there is to think that noble gorillas can defeat evil human poachers with the power of chest-beating and the ability to use rudimentary tools. The superintelligences – should they come to exist – will be superintelligent. They will be cleverer than us. If they want us out of the way, for whatever reason, then there probably won’t be an awful lot we can do about it.
The good news is that, for the moment at least, there’s no imminent danger of us successfully building one of these things (although, as Bostrom points out, once something a bit smarter than us is built, it won’t take long for that smarter thing to improve itself, because it’s smarter than us). But probably, one day, we will, and then the survival of our species will depend on whether or not the Silicon Valley geeks who make the first one have successfully programmed it not to turn all the matter in the entire solar system, us included, into spare parts for itself.
It is possible that we will avoid that fate – Bostrom considers ways in which we could programme values and morality into such a machine, although it would be a lot trickier than Isaac Asimov’s old Three Laws of Robotics (“Do not harm humans”, etc). But it’s also possible that attempts to do so would completely backfire. Essentially, at some point in the next century or so, we’ll find out whether we’ve built humanity’s ultimate servant, or the thing that kills us all, and we won’t know which it is until it happens.
All of which is exciting, and a little terrifying, in the way that existential threats to the species can be. I for one will never look at ping-pong in the same way again.