
Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom

reviewed by T. Nelson


Score: 4

Contrary to what some have suggested, there is absolutely no doubt that artificial intelligence is possible. We have living, working examples to prove it. But as often happens, we also have an entrenched army of engineers and scientists doggedly following their established paradigms, and a research system that strongly penalizes innovation. So instead of AI, all we have so far is companies falsely labeling their products as intelligent.

For instance, if I read a news article on counterfeit tires made in China, up pops a jiggling, animated advertisement for new tires from a company I'd never heard of. The company's computer presumably figures that, since my old tires turned my car into a death trap, I'm going to need new ones. Clearly AI has a long way to go.

In Superintelligence, Nick Bostrom asks what would be needed to make real AI happen, and speculates on what the risks might be. He's reasonably well versed in the philosophical issues but, unlike Ray Kurzweil, he's not optimistic. He seems to understand the science, but his reservations about AI suggest he's been watching too many bad science fiction movies.

At first Bostrom sounds fairly reasonable. He suggests that we could use a brute-force method: model all the connections in a C. elegans, then a bumblebee, and work our way up. It might work. The most likely scenario, though, is a computer algorithm. To make this work, we would need a fundamentally different processor architecture than we have now, and new insights into what precisely, on a circuit level, constitutes intelligence. Devising algorithms for something we don't understand is challenging, but maybe all that's required is a few key insights. He writes [p.36], “It is possible, though unlikely, that somebody will get the right insight for how to do this in the near future.”
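To give a sense of what "modeling all the connections" means in practice, it amounts to stepping a dynamical system over a fixed wiring diagram. The sketch below is mine, not Bostrom's: a toy leaky-integrator rate model in Python, with a random sparse matrix standing in for the worm's connectome (the real wiring is mapped; the weights and density here are made-up placeholders).

    import numpy as np

    rng = np.random.default_rng(0)
    N = 302                                  # C. elegans has 302 neurons
    W = rng.normal(0.0, 0.1, (N, N))         # hypothetical synaptic weights
    W *= rng.random((N, N)) < 0.08           # sparsify to a rough connectome density

    def step(v, external, dt=0.01, tau=0.1):
        # One Euler step of a leaky-integrator rate model:
        # tau * dv/dt = -v + W @ r + I
        rates = np.tanh(v)                   # simple firing-rate nonlinearity
        return v + (dt / tau) * (-v + W @ rates + external)

    v = np.zeros(N)
    stimulus = np.zeros(N)
    stimulus[:10] = 1.0                      # drive a few "sensory" neurons
    for _ in range(1000):
        v = step(v, stimulus)
    print("mean firing rate:", np.tanh(v).mean())

Even this toy runs in milliseconds. The hard part is that the connectome gives us only the wiring, not the synaptic strengths, and a human brain has about 86 billion neurons to the worm's 302.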

But this idea seems to trigger Bostrom's primal fear of the unknown, and he suddenly veers off into anti-science territory. Could the key insight for AI, or even a blueprint for it, have already been discovered? For Nick Bostrom, the prospect is not exciting, but anxiety-provoking. He writes, “had I been in possession of such a blueprint, I most certainly would not have published it in a book.” Neither would I. But not, I suspect, for the same reason.

Bostrom spends the rest of the book warning about the risks of artificial intelligence. How, he asks, would we control it? He talks of safety protocols and putting it in “boxes.” Would the AI keep us alive and feed us like pets? Or would it consider us a threat and wipe us out? I guess you could think so if you had six copies of Terminator 2 stored on your hard drive. Maybe instead, with its stupendous computational power, it would predict the winner of the Kentucky Derby and polish our silverware with its mind. In a fit of singularity pique it might even turn pure evil and force the cable company to replace David Attenborough with reruns of Computer Chronicles. We don't know. At this stage, we know nothing about its nature.

That's the problem here. Bostrom's angst about Malthusian collapse and “emulation workers” seems based on uninformed fear. To me, as a biophysicist who worked in AI for a short time, it sounds almost comically naïve, like the doomsday predictions from those 1970s-era environmentalists. When you think about it, fear of AI is really a form of anti-intellectualism: it's basically the fear of minds we don't understand. It springs from the same root as the fear that some people have of intellectuals and scientists. I grew up around such people, and their rhetoric, while much more extreme, is similar to Bostrom's. It made for some awkward dinner conversations when I told them my career plans.

One could just as easily argue that the risk of not developing AI is far greater than the risk of developing it. Humans desperately need some kind of AI. Without some boost to our collective IQ, civilization as we understand it could eventually collapse, maybe forever. Developing AI would also give us an understanding of how our own minds work, something we sorely need.

Frankly, Bostrom is giving me a bad case of déjà vu. We have only a vague idea of what a superintelligent AI might be like, and it's impossible to prove that bad things could never happen. Bostrom takes advantage of this to try to make us fear the technology. But artificial intelligence, once it's created, will be us, just as our computers and iPhones are us. It would have no conceivable reason to flush us Puny Humans down the toilet like goldfish, even if it could.



Sep 27, 2014; updated Oct 22, 2014


