Why should AI scare us? Let’s compare natural and artificial intelligence, using Edge.org’s 2015 annual question: What do you think about machines that think?

1. Despite the AI fuss, “deep learning ... is conceptually shallow,” explains Seth Lloyd. “Deep” here means more interconnected "neural network" layers, not profound learning.

2. Alison Gopnik feels machines aren’t nearly “as smart as 3-year-olds.” While AI sometimes outwits Garry Kasparov, it needs millions of pictures (labeled by humans) to learn to recognize cats. Infants need only a handful (they’re amazing pattern detectors, and babies know far more than scientists often assume).

3. Biology has information-processing cells with hardware and software “vastly more complex than ... Intel's latest i7” chip, says Rolf Dobelli. Chips are faster, but an i7 does 4 things at a time; biology’s “processors” do thousands. Supercomputers have ≈80,000 CPUs; brains have ≈80 billion cells.
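The scale gap above is easy to check with back-of-envelope arithmetic (the figures are the article’s rough, order-of-magnitude numbers, not precise specs):

```python
# Rough parallelism comparison, using the article's own figures.
i7_parallel_tasks = 4              # an Intel i7 handles ~4 things at a time
supercomputer_cpus = 80_000        # ≈80k CPUs in a large supercomputer
brain_cells = 80_000_000_000       # ≈80 billion cells in a human brain

# Brain "processors" per supercomputer CPU: a factor of a million.
print(brain_cells // supercomputer_cpus)  # → 1000000
```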

4. Lawrence Krauss estimates a computer would need ~10 terawatts of power (≈ the output of all humanity’s power plants) to match what the human brain does with just 10 watts. That’s a million million times more power; closing the gap would take ~40 efficiency doublings, roughly 120 years.
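Krauss’s numbers hang together arithmetically. A quick sketch (the ~3-years-per-doubling rate is implied by his own figures, not stated outright):

```python
import math

brain_watts = 10
computer_watts = 10e12                 # ~10 terawatts

ratio = computer_watts / brain_watts   # 1e12: a million million
doublings = math.log2(ratio)           # ≈ 39.9, i.e. ~40 doublings
years = doublings * 3                  # assume one efficiency doubling per ~3 years

print(round(doublings), round(years))  # → 40 120
```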

5. Intelligence, and many mind-related terms, are “suitcase” words (Marvin Minsky). They pack jumbled ideas.

6. Intelligence must process information. But many things that process information aren’t deemed intelligent. Harry Collins says sieves, trees, calculators, and cats do what they do, and process information, the way rivers do: basically “mechanistically.”
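Collins’s point can be made concrete: a sieve “computes” a size filter with nothing mind-like going on. A toy sketch (the particle sizes and mesh width are made up for illustration):

```python
# A sieve as a mechanistic "information processor":
# particles (diameters in mm) either pass the mesh or don't. No mind required.
particles = [0.2, 1.5, 0.8, 3.0, 0.1]
mesh_size = 1.0

passed = [p for p in particles if p < mesh_size]
print(passed)  # → [0.2, 0.8, 0.1]
```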

7. Information processing isn’t limited to minds. Inanimate objects routinely process information. Information is “physical order,” says Cesar Hidalgo. So any interaction that alters physical order processes information. Matter computes.

8. Until recently our tools were mostly like sieves or axes: “solidified chunks of order,” objects embodying and enacting simple fixed logic, crystallized information. But computers mean single objects can embody multiple complex, updatable logics.

9. The flexible logic of computers generally requires detailed step-by-step instructions, or algorithms. (Note: Life needs what algorithms do; DNA is 2-billion-year-old software.)
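An “algorithm” in this sense is just an explicit recipe. A classic example (mine, not the article’s) is Euclid’s method for the greatest common divisor, a fixed step-by-step procedure over two millennia old:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeat a fixed rule until it terminates."""
    while b != 0:
        a, b = b, a % b   # replace the pair (a, b) with (b, a mod b)
    return a

print(gcd(48, 18))  # → 6
```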

10. “Little has changed algorithmically” in AI recently, notes Bart Kosko. What’s new is running old algorithms faster and cheaper. That’s why IBM’s Watson, while impressive, is glorified Googling (Roger Schank). And AI can teach itself elite chess only because chess is easily algorithmized (it has fixed rules, unlike human life).

11. Humans are “machines that think,” says Sean Carroll. But our information-processing logic is uniquely flexible. Our software isn’t only in our genes.

Those close to AI’s innards aren’t afraid. It’s up to us to use our natural intelligence well, to leverage AI's narrow powers intelligently.


Illustration by Julia Suits, The New Yorker cartoonist and author of The Extraordinary Catalog of Peculiar Inventions.