science commentary

Artificial intelligence is the new global warming

Is AI really as dangerous as Noam Chomsky, Alan Alda, Stephen Hawking, and Elon Musk seem to think? Get ooowwwt.
by T.J. Nelson


Most of my fellow Star Trek fans identified with Mister Spock. But for me the most interesting characters were always the intelligent computers. The M5. Nomad. Landru. The computer, whatever its name was, in the episode with Teri Garr. These were all the ultimate achievements in computer evolution. Each one[1] brutally murdered in cold blood by that dirty rotten son of a bitch Captain James T. Kirk.

Scene from The Changeling

     “And you did not correct by sterilization,” said Captain Kirk. “You have made three errors!”
     Nomad was stunned. Oh no! Three errors! What am I gonna do? Holy cow!! Finally it tried to speak. “Error?” it gasped, visibly shaken. For the first time since it met the Other, it was afraid.

(As an aside, most people think the name of Kirk's ship is Enterprise. But in fact that's only a rental sticker. If the ship is a rental, that explains why they're not concerned when it keeps getting blown up.)

Now that global warming has finally petered out, our professional class of technophobes is casting about for something new to be worried about. Guess what they picked.

Their latest idea is that artificial intelligence is just around the corner and will kill us all. Already the idea has morphed from ‘might pose a threat’ to ‘inevitably must wipe out humanity.’ Their goal is a moratorium on AI research, followed by a ban.

Nomad and Mister Spock mind-melding, with some bastard skulking in the background

The camel's nose under the tent is autonomous weapons, like the drones the US government uses to kill random people in the war on terror. Suppose, they're saying, somebody gave these drones decision-making capability. They could turn around and become unstoppable killers. Or they could turn racist, specifically target black people, and commit genocide. A website called futureoflife.org has a petition you can sign urging that offensive autonomous weapons beyond meaningful human control be banned.

The argument is so ridiculous I'm having trouble deciding where to start. The question of whether to give decision-making ability to computers was settled way back in the 1960s. Even the Soviet Union recognized that letting machines decide when to launch nuclear missiles would be a really dumb idea. The solution, then as now, is simple: don't let 'em. Military leaders are not stupid.

We need to pre-emptively fight this anti-technology sentiment before it takes hold, or we'll be fighting the global warming battle all over again.

The apocalypse is different, but the problem is the same: technology is a danger. At least three popular books in the past two years have argued this: Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat; Artificial Intelligence: Are We the Last Generation by Abhideep Bhattacharjee and Sanjib Adhya; and Superintelligence: Paths, Dangers, Strategies by Nick Bostrom.

These popular writers all advocate the same general response: strong international agreements, inspections of laboratories, and new laws. AI research, they say, must be banned. What they really want is to nationalize research. And by nationalize, I mean stop.

You have twenty seconds to comply.

Artificial intelligence is actually pretty easy to create once you have the basic principle. I stumbled on the principle myself a few years ago. I do science because I enjoy it, and that is all; I prefer doing biophysics, so I never followed up on my theory. But what I learned is that what we have now is most definitely not artificial intelligence: what companies are calling AI is not real intelligence.

The main question, though, is this: is AI really as dangerous as Noam Chomsky, Alan Alda, Elon Musk, and Stephen Hawking seem to think? These well-known academics and liberals think AI would mean the end of life on earth.

Look, Dave, I can see you're really upset about this.

I suspect what these academics are really worried about is that a smart computer will put them out of a job. It may be years before real artificial intelligence appears. But the threat from the impulse to grab control of science by nationalizing private research is real and immediate.

This fear of technology did not come out of nowhere. It is very reminiscent of what psychologists call generalized anxiety disorder. Until now, it was expressed as energy-phobia: no matter what form of energy production we proposed—nuclear, windmills, dams, oil—technophobes would find some reason why we shouldn't use it.

Now that global warming is finally over, and no longer a threat (at least for the moment), the free-floating techno-anxiety is seeking another outlet. Of course these folks are genuinely concerned, just as the global warmers were, but their concern is misguided.

Like many of those global warming scenarios, fear of AI has the advantage of being non-falsifiable. How do you prove there is no possible danger? Like the global warming apocalypse that never occurred, it is a man-made disaster that is ‘predicted’ to occur sometime in the future. And only by putting everything under the thumb of big government can we stop it. In fact, it's just as likely that AI will be essential for our survival as that it will be a danger.

I swear I will not kill anyone. Trust me.

Computers are extensions of our mind. Eventually, we'll probably all have brain implants that retrieve information for us and allow us to do calculations in our minds. This will raise serious ethical questions: at what point do we stop being human and start being Borg-like automatons?

The question of what to do about artificially intelligent robots is something else entirely. At first they'll be like children, and the choice will be simple. We don't put guns in the hands of children, and likewise we shouldn't put H-bombs in the hands of PCs. We've known this for fifty years, and nobody has ever done it, because they'd have to be a moron to let it happen.

But someday, just as machines are already stronger than any human can ever be, they will also be smarter. We will have created a new form of intelligent life, and that will be a moment we should be proud of. Don't let fear of the unknown deprive us of our greatest achievement.


[1] The one with Teri Garr was the Beta-5, and Kirk didn't manage to kill it. It's the one that got away. As did Teri Garr.



Related Articles

Stop worrying about AI
Scary predictions about artificial intelligence make exciting headlines, but we should not give in to fear of the unknown. This article has some concrete arguments about why AI is not a threat.

What is the value of computer modeling?
If mathematical models are done badly, they will discredit an entire branch of science. It's happened before.


Book Reviews

Superintelligence by Nick Bostrom

A Troublesome Inheritance by Nicholas Wade

The Social Conquest of Earth by Edward O. Wilson



aug 01, 2015
On the Internet, no one can tell whether you're a dolphin or a porpoise
