There has been a rash of commentary lately about the potential for strong AI — AGI, or artificial general intelligence as it’s often referred to — to turn around and bite the human species that created it. Paypal/Tesla/SpaceX founder Elon Musk and physicist Stephen Hawking have recently opined publicly and apocalyptically on what Musk described as “our greatest existential threat.” Hawking, in a BBC interview, said that “full AI” could “spell the end of the human race.” Naturally, there are equivalently authoritative opposing views; Ray Kurzweil, for example, sees the future of AI in much more optimistic terms, as expounded at length in several books.

EconTalk, a consistently excellent weekly podcast by the always insightful Prof. Russ Roberts, has aired two hour-long audio interviews on the subject in recent weeks, the first with philosophy professor Nick Bostrom of Oxford University (who is perhaps better known for his paper exploring the possibility that what we perceive as reality is actually a simulation), and the second with Gary Marcus, a cognitive psychologist at NYU. Both have recent books out: Superintelligence: Paths, Dangers, Strategies by Bostrom, and The Future of the Brain: Essays by the World’s Leading Neuroscientists, co-edited by Marcus.

Obviously, the current crop of luminaries is by no means the first to perceive a potential problem with the idea of creating machines that are smart enough to out-think their creators. And for those willing to undertake a harder philosophical slog, the prolific and often thought-provoking Eliezer Yudkowsky and his collaborators at the Machine Intelligence Research Institute and on the website lesswrong.com have been exploring the implications of strong AI, and possible ways to confine it, for more than a decade. One may suspect that the current surge in awareness relates to recent improvements in the apparent “intelligence” of widely used software products such as Siri and Cortana (see, for example, this quite impressive demo video showing Cortana’s new abilities).

My take is that the public debate is focusing on the wrong question. I’m not too worried about a super-AI deciding that its overriding purpose is to turn everything in the universe into paper clips (one of Nick Bostrom’s examples). Long before we ever reach the point of having to deal with a single-minded, paper-clip-manufacturing rogue AI that we can’t unplug, we will face two other problems, both of which will be upon us very soon, and neither of which requires AI any stronger than what we’re already capable of creating.

The first problem is the effect of AI, weak or strong, as a tool for amplifying human intellectual capability. Even fairly weak AI, used as a tool to gain an advantage over those who don’t have access to it, is enough to let those who do quickly wipe everyone else off the game board. The divide between those with access to good computational tools and those without is already very visible even in America, and much more so in the developing world, where legions of teenagers leave school barely able to make change correctly, much less able to control sophisticated computational tools. Meanwhile, those tools are being used more and more effectively to manipulate the have-nots and bend them to the will of those who control the resources. This is my biggest AI-related fear: that we bifurcate into a kind of feudal society of intellectual haves and have-nots, in which the difference in effective intellectual horsepower is amplified to the point where those with the best tools are in total control and no meaningful challenge to them is possible.

The second problem is that even fairly weak AI can be given enough decision-making power to be dangerous to humans. We have probably already seen examples of this in the form of sudden financial market crashes. And consider the current enthusiasm for drone warfare: today (as far as we know), when a drone engages a potential target, some human somewhere makes the final decision to fire the missile, but it certainly doesn’t have to be done that way. It would be technologically possible to let a computer make that decision, based on face-recognition technology and/or whatever other inputs are deemed sufficient. And it is a very short step from there, well within current technological capabilities, to a system in which a drone flies around looking for targets and automatically fires on any it identifies.
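To make the market-crash example concrete, here is a minimal toy sketch in Python of how a collection of very dumb automated decision rules can interact badly: each “trader” below is nothing but a stop-loss threshold, yet a modest shock can cascade into a rout. Every number in it (the thresholds, the price impact, the size of the initial shock) is an assumption chosen purely for illustration; this is not a model of any real market or trading system.

```python
# Toy cascade: each automated "trader" is just a stop-loss threshold.
# When the price falls below a trader's threshold, it sells, which pushes
# the price down further and can trigger the next trader's threshold.
import random

random.seed(1)

price = 100.0
impact_per_sale = 0.8                                  # assumed price drop per forced sale
thresholds = sorted(random.uniform(90.0, 99.0) for _ in range(50))
sold = [False] * len(thresholds)

price -= 2.0                                           # a modest external shock starts things off

round_no = 0
while True:
    triggered = [i for i, t in enumerate(thresholds) if not sold[i] and price < t]
    if not triggered:
        break
    for i in triggered:
        sold[i] = True
    price -= impact_per_sale * len(triggered)          # collective selling moves the price further
    round_no += 1
    print(f"round {round_no}: {len(triggered)} sell orders, price now {price:.2f}")

print(f"final: {sum(sold)} of {len(thresholds)} traders sold out")
```

No rule in that loop is smarter than a thermostat, and none of them is malevolent; the damage comes entirely from delegating the sell decision and letting the rules interact at machine speed.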

So, personally, I’m much more concerned about the use of less-than-strong AI by humans as a tool for gaining advantage than I am about a strong AI developing a mind of its own. You don’t need a malevolent computer to make even modestly intelligent machines run amok and start harming humans; there are plenty of malevolent humans who will happily supply the required malevolent intent. The big risk isn’t what the machines will decide to do; the big risk is what humans will decide to use the machines to do when presented with an opportunity to gain an advantage. The philosophizing about how to ensure that an AI does not spontaneously evolve into something harmful to humans is, in my view, not very interesting, because I don’t think there will be anything spontaneous about it. The risks posed by strong AI are fundamentally a game theory problem about humans, not a philosophical problem about machines.
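For what it’s worth, here is a minimal sketch of the kind of game theory problem I mean, written as a two-player game in Python. The payoff numbers are pure assumptions, chosen only to illustrate the structure: if deploying ever-stronger AI tools against the other side is each player’s best response no matter what the other does, the equilibrium is mutual deployment, even though mutual restraint would leave both players better off.

```python
# A two-player "restrain vs. deploy" game with illustrative (assumed) payoffs.
# payoffs[(a, b)] = (payoff to player A, payoff to player B)
payoffs = {
    ("restrain", "restrain"): (3, 3),  # both hold back: a decent outcome for both
    ("restrain", "deploy"):   (0, 5),  # the deployer dominates the restrained side
    ("deploy",   "restrain"): (5, 0),
    ("deploy",   "deploy"):   (1, 1),  # arms race: worse than mutual restraint
}
strategies = ("restrain", "deploy")

def best_response(player, opponent_choice):
    """The strategy maximizing this player's payoff, holding the opponent fixed."""
    if player == "A":
        return max(strategies, key=lambda s: payoffs[(s, opponent_choice)][0])
    return max(strategies, key=lambda s: payoffs[(opponent_choice, s)][1])

# Pure-strategy equilibria: profiles where neither player wants to switch.
equilibria = [
    (a, b)
    for a in strategies
    for b in strategies
    if best_response("A", b) == a and best_response("B", a) == b
]
print("equilibria:", equilibria)  # -> [('deploy', 'deploy')]
```

Nothing in that outcome depends on what the machines want; it is driven entirely by the humans’ incentives, which is exactly why I think the interesting questions are about people rather than about spontaneous machine malevolence.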

I don’t know what the solution is for society — there may not be one — but I know what I’m telling the kids in my household: computational tools matter, and you’d be wise to learn all you can about how to use them.

 
