Can somebody please have a chat with Elon Musk?
Because he’s been stirring up a mass panic about an AI killer robot apocalypse lately. And as we all know, the media LOVES them some mass panic! He’s been preaching a doom and gloom prophecy for some time now. He’s starting to inspire others to tell spook stories about AI bogeymen. He’s starting to get people nervous enough to seriously ask whether we’re all doomed.
This is pretty frustrating, if not for everybody in the tech field, then certainly for the present author. We are all struggling every day for progress in our society. We want to think forward, not backward. We want to encourage more students to turn to STEM careers, not scare them away. We want people to embrace the Information Age, not reject it. We want to improve the image of engineers, not paint them as cackling evil monsters unleashing horrors upon the world.
We know Elon Musk has many fans in the technology community. After all, he’s got lots of money and he gets quoted in shiny magazines, so he’s always right about everything. That’s fine, you can defend Elon Musk if you want to. But when hordes of paranoid survivalists kick down your door and smash all your gadgets with sledgehammers before dragging you out for a speedy heresy trial on your front lawn, don’t come crying to us.
A few brave columnists have begun to debunk Elon Musk. Over at Fortune, Michael L. Littman, who IS a professor of computer science, calls Musk “wrong,” and damn the backlash. The Verge dismantles the notion that Musk’s AI players conclusively beat humans at a video game. The real story is that while a competent bot did learn to mimic human players’ behavior well enough to be challenging, the human players responded with inventive strategies of their own and beat the bot anyway. And here’s a top MIT AI scientist flat-out stating that Musk doesn’t know what he’s talking about, damn the backlash again. Even Politico agrees that it’s just the Luddites at it again.
We Need To Explode Some Technology Myths
Here is why we will not ever have human-like AI:
#1: Moore’s Law doesn’t lead to human-like AI.
It’s a common misconception that our current ramp of computing power gains will simply continue until it produces intelligence. If robots are beating humans at Go, one could easily conclude that it’s only a matter of time until they’re Terminators. The problem with this reasoning is that our gains have been almost entirely in processing speed. We haven’t solved fundamentally new kinds of problems in computing in quite some time; what’s changed is that machines are now fast enough to attack old problems by brute force and get a solution in a reasonable amount of time.
An old xkcd strip explains this concept nicely. Given enough time and resources, you, too, could have built an AI Go-playing robot out of nothing but rows of rocks on an endless plain of sand. Each move would have taken approximately the age of the universe to compute, but that’s merely a matter of processing power.
The thing is, not every problem in the world yields to brute force and speed. Even if you take a brain like a worm’s (which is still many times more sophisticated than our best computers, in its own way) and speed it up 1000x, all you end up with is a tweaked-out worm. Get back to us when it’s solving NP-Complete problems; then we’ll take it seriously. Those problems grow exponentially harder with size, so raw speedups barely dent them, as the sketch below shows.
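To put rough numbers on that, here’s a minimal sketch (the checking rate and the problem sizes are illustrative assumptions, not anyone’s benchmark). Brute-forcing an NP-Complete problem like subset sum doubles in cost with every item you add, so even a 1000x faster machine only buys you about ten more items:

```python
from itertools import combinations

def subset_sum_exists(items, target):
    """Brute force: try every subset -- O(2^n) work for n items."""
    return any(sum(combo) == target
               for r in range(len(items) + 1)
               for combo in combinations(items, r))

# Assume a (generous, made-up) machine checking a billion subsets per second.
CHECKS_PER_SECOND = 1e9

for n in (40, 60, 80):
    seconds = 2 ** n / CHECKS_PER_SECOND
    print(f"n={n} items: ~{seconds:,.0f} seconds of brute force")

# n=40: ~18 minutes. n=60: ~37 years. n=80: ~38 million years.
# A 1000x speedup is roughly 2^10, so it shifts each figure by only ~10 items.
```

The speedup is linear; the problem is exponential. The worm stays a worm.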
#2: You have to understand how to perform a task before you can program a computer to perform it.
The next time somebody confidently assures you that computers, after a certain event horizon, will be able to design “smarter versions of themselves,” just draw this problem on the nearest napkin and hand it to them:
// FIXME: code goes here
Can you make it go?
Somehow, the hard, inescapable obstruction of not being able to teach a computer how to do something, which blocks us from designing computers “smarter” than ourselves, just doesn’t sink in with most people. Yes, we can teach a computer to recognize faces, understand speech, and play Go – all of which, not coincidentally, we can also explain to any four-year-old how to do.
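Here’s the same point as code, with made-up function names for illustration. Tasks whose rules are fully written down can be programmed; the napkin task can’t even be stated as a test:

```python
# Tasks we can fully specify, we can program. The mechanics of Go fit
# in a page of code, because the rules are completely written down:
def on_board(x, y, size=19):
    return 0 <= x < size and 0 <= y < size

def neighbors(x, y, size=19):
    """The orthogonally adjacent points of a Go intersection."""
    candidates = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(a, b) for a, b in candidates if on_board(a, b, size)]

# The napkin task has no spec at all. Before any machine could search
# for a "smarter" successor, somebody would have to write this test,
# and nobody can say what it should check:
def is_smarter_than_its_designer(program_source: str) -> bool:
    raise NotImplementedError("FIXME: code goes here")
```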
There’s a corollary we need to address:
#2(a): We don’t even understand how human intelligence works yet.
We often hear the argument, “Just hook up a bunch of processors in a neural net and turn it loose – it will teach itself!” The problem with this argument is that there is much more to the human brain – or any brain, come to that – than electrical signals traveling along connections. Besides the network of neurons itself, there are the neurotransmitter systems, the specialized functions of the various brain regions, and… well, here’s a rough outline. We would need to simulate all THAT just to get the beginning of a working model.
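To make the gap concrete, here’s a minimal sketch. The “neuron” of a typical artificial net is a one-line weighted sum; even the crudest textbook model of a biological neuron (leaky integrate-and-fire, which still ignores neurotransmitters, dendritic geometry, glia, and regional specialization) has to be stepped through time. The parameters below are illustrative, not a calibrated model:

```python
# An artificial "neuron": the entire computation is one weighted sum.
def artificial_neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU activation

# Leaky integrate-and-fire: about the crudest biological neuron model
# there is, and it already needs membrane dynamics simulated over time.
def leaky_integrate_and_fire(current, ms=100.0, dt=0.1, tau=10.0,
                             v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    v, spike_times, t = v_rest, [], 0.0
    while t < ms:
        v += dt * (-(v - v_rest) + current) / tau  # leak toward rest + drive
        if v >= v_thresh:                          # fire, then reset
            spike_times.append(t)
            v = v_reset
        t += dt
    return spike_times

print(artificial_neuron([1.0, 2.0], [0.5, -0.25], 0.1))            # 0.1
print(len(leaky_integrate_and_fire(current=20.0)), "spikes in 100 ms")
```

And remember, that second model is itself a cartoon that neuroscientists consider laughably oversimplified.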
Google’s AI guru himself has pointed this out. He also barks his shins on the factor that makes so many people so wrong about AI development: it’s difficult enough to be an expert in either artificial intelligence or neuroscience, let alone in both. Whenever you encounter somebody insisting that computers are right around the corner from human-like AI, you may be sure that they’re not an expert in both fields.
Never mind humans, we still don’t fully understand how a chimpanzee’s brain works… or a cat’s, or a mouse’s, or even a butterfly’s.
#3: Emergent learning isn’t the magic genie either.
Recently, a Facebook experiment set this whole human-like-AI claptrap going again. Facebook tested some bots in a language-learning experiment, which proved unsuccessful. By the time the story had been filtered through the media panic machine, it had become “Facebook made a robot that started learning on its own and almost destroyed the world before they pulled the plug!!! EVERYBODY PANIC!!!!!11111oneoneoneone”
Sure, emergent learning has made some headway on very specific problems. Self-driving cars keep ironing quirks out of their performance, and automated phone systems are starting to need you to scream your account number only five times instead of six. But consider actual language comprehension. That’s easy to test: head on over to Cleverbot and have a chat. I just asked it whether Elon Musk is right about human-like AI and got back: “It’s pretty cool, how is being a bot?” That not only fails to parse the question, it’s a complete non sequitur. Six years and hundreds of millions of conversations, and it still can’t sound as coherent as a Cheech and Chong script.
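Cleverbot, reportedly, works by replaying what humans typed in similar-looking past conversations: no grammar, no model of meaning, just surface resemblance. Here’s a toy sketch of that retrieval idea, with a made-up three-line log standing in for its millions of logged exchanges:

```python
import difflib

# A made-up miniature log standing in for millions of recorded exchanges.
PAST_EXCHANGES = {
    "is musk right about robots": "It's pretty cool, how is being a bot?",
    "do you like music": "Yes! What bands do you listen to?",
    "what is the weather": "I don't go outside much.",
}

def retrieval_bot(user_input):
    """Reply with whatever followed the most similar-LOOKING past input.
    Nothing here ever parses the question or represents its meaning."""
    best_match = max(PAST_EXCHANGES, key=lambda past: difflib.SequenceMatcher(
        None, user_input.lower(), past).ratio())
    return PAST_EXCHANGES[best_match]

print(retrieval_bot("Is Elon Musk right about human-like AI?"))
# -> "It's pretty cool, how is being a bot?"
# The most similar past input wins, but the reply never engaged the
# question -- which is exactly where the non sequiturs come from.
```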
Moore’s law doesn’t seem to be helping much there, does it?
The Real Threat To The World Is Cargo Cult Thinking
We’ll have to dig into the subject of cargo cults sometime, because it’s funny and involves lots of Richard Feynman anecdotes. But for now, even Wired pins it down as “The Myth of Superhuman AI.”
But please, can we chill out on the scaremongering? Otherwise those hordes of paranoid survivalists will be at our doors sooner than we think. That is one event that does have historical precedent.