In a recent Slashdot poll, readers were asked “I think my current job will be replaced by a robot/software…” with answers ranging from “within 5 years” to “it happened already” to “never,” plus the obligatory “CowboyNeal” option. Despite being one of the most tech-savvy audiences on the web, a 43% plurality confidently asserted that they’d never be replaced by a robot.
But as history has shown us, never say never. We’re a little overdue around here for a roundup of Artificial Intelligence news, and 2017 has shown a drastic leap in the field so far. What’s curious is how quickly and readily AI is being accepted on faith by society, in places where you’d expect people to start getting leery. Where are all those squawking Chicken Littles we had just a few years ago, panicking that Skynet would take over the world?
Now, the present author is on record in many places as saying that human-like AI is never going to happen, and I stand by that. That is, a computer with “consciousness,” “free will,” etc., no matter how many computers you hook together in parallel nor how many you let loose to evolve in a playpen. Before we get to that point, it will be much more practical to invent more sophisticated intelligences through DNA engineering. Think Blade Runner rather than Terminator. But AI as we mean it today is just very advanced problem-solving, and Moore’s Law is holding solid in this area.
AIs Can Selectively Breed Humans Now
According to Sean Rad, CEO of Tinder, we might be ready to take another stab at computer dating. Everybody groan! Algorithmic matching has notoriously been rotten at pairing people up romantically. That’s because there’s more to love than liking the same band. Dating algorithms do at least weed out some obvious mismatches, and that’s about where it’s going to end up. So far, the most ambitious feature discussed for Tinder is being able to find single people in real life using Augmented Reality on your phone. Big whoop.
How about computers ruling over decisions in court and law enforcement? Durham Constabulary in England has created the Harm Assessment Risk Tool (HART), which analyses crime data to determine whether someone should be ticketed and released or held for bail, and what the bail amount should be. It’s not quite a precog, but it’s getting closer. One of the concerns is that racial profiling could be built into such a system. We’ll find out for sure when Texas starts using it.
To make it a bit scarier, there’s AI software called COMPAS, made by Northpointe Inc., already being used to sentence convicted offenders in court. It’s scary because the software is proprietary, and at least one offender has challenged his sentence on the grounds that “his right to due process was violated as neither he nor his representatives were able to scrutinize or challenge the algorithm behind the recommendation.” Software just like this is showing up in courtrooms around the US.
How About An AI President?
With all the problems we already have with Russians hacking our elections, isn’t an AI president the last thing we need? Well, this Wired article is enthusiastic about the idea, arguing that at least an AI would perform better than some recent occupants of the Oval Office. Got us there!
We’ve seen this movie a hundred times. SPOILER ALERT for the ending of Colossus: The Forbin Project, the great-granddaddy of this genre:
Good luck getting that harsh Cylon voice out of your head now.
While it could be argued that AI would be useful in helping to make some decisions, the way doctors use AI-assisted diagnosis now, you probably don’t want Colossus to have the nuclear codes. Domestic policy is bad enough, but do you really want to ask a computer to handle foreign policy? Stephen Hawking and Elon Musk are already on record saying that starting an AI arms race, with military defense systems incorporating AI, is a bad idea.
Google Ups The AI Stakes
A lot of press has been going to Google’s latest advances in AI. This week, they announced a new AI chip and supercomputer aimed at deep learning. Far from controlling the world, its ultimate ambition is to differentiate between a picture with a hot dog in it and a picture with no hot dog in it.
At the heart of Google’s AI pursuits is AutoML, developed by the Google Brain artificial intelligence research group. It’s an AI that tries to build better AIs. Before the Singularity cult out there has a stroke, we should remind everybody that the problem space is still things like finding the hot dog in a photo. There’s some very dangerous and irresponsible reporting around all this, because gullible people will hear “the AI already outperformed its designers” and think “that’s it! We have Skynet!” No, it means the software developed a better method of creating an AI that could “read” photos, which is about the same as saying that CAD software can design a microchip faster than a human can.
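If you want a feel for what that actually looks like, here’s a toy sketch in Python (not anything Google uses) of the “AI that builds AIs” pattern: an outer search proposes candidate model designs, an inner loop trains each one, and a validation score picks the winner. The polynomial-degree “design space” here is purely illustrative; real neural architecture search explores network layouts the same way.

```python
# Toy sketch of an "AI that builds AIs" loop (illustrative, not AutoML):
# outer loop proposes model designs, inner loop trains and scores them.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + rng.normal(0, 0.1, 200)        # noisy target function
x_tr, y_tr, x_va, y_va = x[:150], y[:150], x[150:], y[150:]

def train_and_score(degree):
    """Inner loop: fit one candidate model, return its validation error."""
    coeffs = np.polyfit(x_tr, y_tr, degree)        # the "training" step
    pred = np.polyval(coeffs, x_va)
    return np.mean((pred - y_va) ** 2)

# Outer loop: the "architect" tries candidate designs and keeps the best.
candidates = range(1, 11)
best = min(candidates, key=train_and_score)
print("best degree:", best, "validation MSE:", train_and_score(best))
```

The point is that the outer loop is just another optimizer grinding through a search space, not a spark of self-awareness.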
Indy UK shivers: “So impressive, it’s scary.” Fine, have your giggles. I, for one, do not fear the advance of AI; I fear the misunderstanding and misapplication of AI. I’m fine with my car driving itself, right up until it drives itself into the lake.
The Deeper AIs Learn, The Less Transparent They Are
There’s a dark secret at the heart of deep learning AI. Every programmer knows the problem with machine-generated code: As you get more dependent on it, the code produced becomes more dense and impossible for humans to parse. Deep learning algorithms, by necessity, become dense forests of data generated by the machine. If we write AIs that deep learn in order to write more AIs that deep learn, we can arrive at a program that makes decisions for completely impenetrable reasons.
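To make that concrete, here’s a minimal sketch: a toy two-layer network trained on XOR with NumPy, nothing resembling a production deep net. The trained model gets the right answers, but its “reasoning” is nothing more than a pile of weight values that no human can read a justification out of.

```python
# Toy illustration of the opacity problem: a tiny network learns XOR,
# but its decisions live in unreadable weight matrices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)    # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)    # output layer

for _ in range(20000):                            # plain gradient descent
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)           # backprop squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;  b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3).ravel())   # should settle near [0, 1, 1, 0]: it works...
print(W1); print(W2)          # ...but the "why" is just these numbers
```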
The present author will ask you to indulge an anecdote from his own experience: way back in my grasshopper days in the ’80s, I was tinkering with BASIC and Lisp and reading up on rudimentary AI research. I tried writing a tic-tac-toe program which would keep a record of each game it had played, then avoid patterns in future games which had led to losses. It had no built-in strategy and no look-ahead routine; it just knew the rules. I was surprised at how easy this was to do, and more surprised at how fast it worked: it only took about 300 games for it to always make the best move, leading to a win or tie. Reading its data record, I could even clearly see its progress.
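For the curious, the idea translates to a few dozen lines of modern Python. This is a from-memory reconstruction, not the original BASIC, and the “learning” is deliberately crude: record every (board, move) pair, and blacklist any pair that appeared in a lost game.

```python
# Crude sketch of the learner described above: no strategy, no look-ahead,
# just a blacklist of (board, move) pairs that previously ended in losses.
import random

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def play_one_game(bad_moves):
    """Self-play one game, avoiding moves blacklisted in earlier losses."""
    board = [' '] * 9
    history = {'X': [], 'O': []}
    player = 'X'
    while True:
        legal = [i for i in range(9) if board[i] == ' ']
        snapshot = ''.join(board)
        # Prefer moves we haven't blacklisted; fall back to anything legal.
        fresh = [m for m in legal if (snapshot, m) not in bad_moves]
        move = random.choice(fresh or legal)
        history[player].append((snapshot, move))
        board[move] = player
        win = winner(board)
        if win:
            loser = 'O' if win == 'X' else 'X'
            bad_moves.update(history[loser])   # "learn": never repeat these
            return win
        if ' ' not in board:
            return 'tie'
        player = 'O' if player == 'X' else 'X'

bad_moves = set()   # the program's entire "memory" of past games
results = [play_one_game(bad_moves) for _ in range(300)]
print(results[-20:])   # late games trend toward ties as the blacklist grows
```

Blacklisting every move from a losing game throws away some good moves along with the bad, but tic-tac-toe’s state space is small enough that the blacklist converges anyway.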
Encouraged, I tried doing the same thing with chess (yes, that’s hilarious, I was a kid). Without doing the math exactly, I knew the problem space was far bigger for chess than for tic-tac-toe (tic-tac-toe has fewer than 3^9 = 19,683 board states, while chess’s game tree runs to something like 10^120), so I “cloned” the program into two parts so that it could play both sides. I set it running and watched some amusingly naive games, then just left it running. Days passed. No apparent progress was happening. Finally I patched in an instruction to sound a beep when it at least recognized a non-losing play pattern, and left it running for yet more days.
Finally one day, I came home from school to be greeted by a beeping computer! Excited, I rushed over to the screen to see the result. There, in the middle of an otherwise empty board, were two kings moving back and forth between the same two squares. Both halves of the program had annihilated every other piece, then cheerfully reported that they had discovered how not to lose.
And that’s what can go wrong with black box AI.