Philosophical geeks never seem to tire of arguing about the point at which Artificial Intelligence reaches parity with human competence. Among the many benchmarks and break-points thrown around, may we suggest one more? We’ll concede that Artificial Intelligence has finally come into its own when…
…it makes an interesting movie character!
Hey, this isn’t as far off as we might think. Recent months have shown amazing new landmarks in AI research. It’s developing ways to detect identity theft. It’s being used to predict a person’s chances of committing suicide. And yeah, we know, AI finally tackled Go. Modern AI methods employ Machine Learning, essentially a smarter brute-force approach that’s delivering results in leaps and bounds now that we’re figuring it out. It’s the standard method behind autonomous cars, image recognition, speech recognition, and anything that runs on predictive models.
OK, the time when an existing AI could make a good character is still pretty far off. While AI, as it exists today, can serve as a pretty solid plot device, we can’t see an interesting AI character based on present technology in the foreseeable future. We have some interesting new fields of AI research making gains, but we seem to only make strides in one narrow field at a time. Perhaps if we someday link all these expert systems together into a super-Siri, we’ll have a knit-together machine mind that’s better than the sum of its parts.
How Close Is Hollywood’s Best So Far?
There’s a select handful of Artificial Intelligence characters portrayed on the screen that at least approach reality on a couple of points. They provide some food for thought as we push real-world AI research.
Blade Runner (1982)
For creating an Artificial Intelligence that’s as close as possible to real meat intelligence, nothing compares to starting out with meat to begin with. Cheating? Of course. But manipulating DNA is still going to be the most practical method of getting an Artificial (as in – we built it) Intelligence for a long time yet. We might just get to the point where we implant an actual brain from a living organism into a robot exoskeleton with an Expert System interface and call it a day.
Look, you can make a robot toddler and turn it loose to bounce off walls for twenty years until it learns to walk, or you can steal a roach brain and save twenty years and billions in research funding. Bet on bioware to beat software every time!
Her (2013)
A very recent example, Her is our pick for the closest Hollywood has gotten to real-life AI research. Samantha starts right out explaining she’s a coalition of “the DNA of a million programmers who wrote me,” which we take as Hollywood-speak for “Machine Learning.” Plus, she learns new things the same way humans do: by looking them up on the Internet. As the story develops, it even turns out that she adapts and evolves by talking to users all over the world collectively. There isn’t that much difference between Samantha and IBM’s Watson.
Of course, where the story goes from there is another matter, as it flies off into less realistic territory. That pinpoints the problem with expecting Hollywood to get everything right: fiction has to take some artistic license to abridge reality, or we’re stuck in Stanley Kubrick land, watching a space capsule dock in silence for twenty minutes at a time.
Ex Machina (2015)
Another recent example, Ava, is maybe not so realistic, but she’s couched in a story that takes some comparatively realistic (come on, work with us here) approaches to AI development. Ava’s back-story starts out with search engines and Machine Learning; it ends with her creator building prototypes upon prototypes until he has a machine complex enough to begin to sound human, yet already too complex for him to understand.
Ava also shows us something else about AI that has already come true: emergent AI has unpredictability as a side effect, and that could turn out to be a problem. While the rest of the film is unrealistic, Ava does peg the “black box” factor pretty well. It doesn’t “go rogue” like a Terminator, nor does it melt down trying to follow its own directives in the face of a paradox like the HAL 9000. Instead, it takes a very original approach to its situation, one we humans wouldn’t consider.
Science Fiction Is Serious Business
Part of the role of Science Fiction, as opposed to other genres, is to speculate about humanity’s near-future. We could do with a lot more Avas, Samanthas, and even HAL 9000s as we explore our digital frontier, because they make a good model for discussing problems as they crop up.
Will there be a day when we have protest marches calling for AI civil rights? What will the first court trial involving an AI be like, be it in the role of defendant, plaintiff, or witness? What laws will we have to pass regarding AIs? It’s starting to look like we might have to deal with questions like these in our lifetimes, even though true thinky-feely Asimov-style AI may always be out of our reach. When it comes to AI, you don’t have to try too hard to make it human before it starts becoming just as much trouble as a human!