The study of nonhuman intelligence could be missing major insights

From machines to animals, there are many kinds of possible minds.
Key Takeaways
  • In 1984, the computer scientist Aaron Sloman wrote a paper proposing that, in terms of studying intelligence, scientists should get rid of distinctions between things or beings with the essence of a mind and those without.
  • Instead, he suggested examining the many detailed similarities and differences between systems.
  • To Sloman, the “space of possible minds” is not a dichotomy or a spectrum, but rather a complex map with “not two but many extremes.”

Reprinted with permission from The Book of Minds: How to Understand Ourselves and Other Beings, from Animals to AI to Aliens by Philip Ball, published by The University of Chicago Press. © 2022 by Philip Ball. All rights reserved.

In 1984 the computer scientist Aaron Sloman, of the University of Birmingham in England, published a paper arguing for more systematic thinking on the vague yet intuitive notion of mind. It was time, he said, to admit into the conversation what we had learned about animal cognition, as well as what research on artificial intelligence and computer systems was telling us. Sloman’s paper was titled “The structure of the space of possible minds”.

“Clearly there is not just one sort of mind,” he wrote: 

“Besides obvious individual differences between adults there are differences between adults, children of various ages and infants. There are cross-cultural differences. There are also differences between humans, chimpanzees, dogs, mice and other animals. And there are differences between all those and machines. Machines too are not all alike, even when made on the same production line, for identical computers can have very different characteristics if fed different programs.”

Now an emeritus professor, Sloman is the kind of academic who can’t be pigeon-holed. His ideas ricochet from philosophy to information theory to behavioural science, along a trajectory that is apt to leave fellow-travellers dizzy. Ask him a question and you’re likely to find yourself carried far from the point of departure. He can sound dismissive of, even despairing about, other efforts to ponder the mysteries of mind. “Many facts are ignored or not noticed,” he told me, “either because the researchers don’t grasp the concepts needed to describe them, or because the kinds of research required to investigate them are not taught in schools and universities.”

But Sloman shows deep humility about his own attempt four decades ago to broaden the discourse on mind. He thought that his 1984 paper barely scratched the surface of the problem and had made little impact. “My impression is that my thinking about these matters has largely been ignored,” he says – and understandably so, “because making real progress is very difficult, time-consuming, and too risky to attempt in the current climate of constant assessment by citation counts, funding, and novel demonstrations.”

But he’s wrong about that. Several researchers at the forefront of artificial intelligence now suggest that Sloman’s paper had a catalytic effect. Its blend of computer science and behaviourism must have seemed eccentric in the 1980s but today it looks astonishingly prescient. 

“We must abandon the idea that there is one major boundary between things with and without minds,” he wrote. “Instead, informed by the variety of types of computational mechanisms already explored, we must acknowledge that there are many discontinuities, or divisions within the space of possible systems: the space is not a continuum, nor is it a dichotomy.”

Part of this task of mapping out the space of possible minds, Sloman said, was to survey and classify the kinds of things different sorts of minds can do: 

“This is a classification of different sorts of abilities, capacities or behavioural dispositions – remembering that some of the behaviour may be internal, for instance recognizing a face, solving a problem, appreciating a poem. Different sorts of minds can then be described in terms of what they can and can’t do.”

The task is to explain what it is that enables different minds to acquire their distinct abilities.

“These explorations can be expected to reveal a very richly structured space,” Sloman wrote, “not one-dimensional, like a spectrum, not any kind of continuum. There will be not two but many extremes.” These might range from mechanisms so simple – like thermostats or speed controllers on engines – that we would not conventionally liken them to minds at all, to the kinds of advanced, responsive, and adaptive behaviour exemplified by simple organisms such as bacteria and amoebae. “Instead of fruitless attempts to divide the world into things with and things without the essence of mind, or consciousness,” he wrote, “we should examine the many detailed similarities and differences between systems.” 

This was a project for (among others) anthropologists and cognitive scientists, ethologists and computer scientists, philosophers, and neuroscientists. Sloman felt that AI researchers should focus less on the question of how close artificial cognition might be brought to that of humans, and more on learning about how cognition evolved and how it manifests in other animals: squirrels, weaver birds, corvids, elephants, orangutans, cetaceans, spiders, and so on. “Current AI,” he said, “throws increasing memory and speed and increasing amounts of training data at the problem, which allows progress to be reported with little understanding or replication of natural intelligence.” In his view, that isn’t the right way to go about it. 

Although Sloman’s concept of a Space of Possible Minds was stimulating to some researchers thinking about intelligence and how it might be created, the cartography has still scarcely begun. The relevant disciplines he listed were too distant from one another in the 1980s to make much common cause, and in any case we were then only just beginning to make progress in unravelling the cognitive complexities of our own minds. In the mid-1980s, a burst of corporate interest in so-called expert-system AI research was soon to dissipate, creating a lull that lasted through the early 1990s. The notion of “machine minds” became widely regarded as hyperbole. 

Now the wheel has turned, and there has never been a better time to consider what Sloman’s “Mindspace” might look like. Not only has AI at last started to prove its value, but there is a widespread perception that making further improvements – and perhaps even creating the kind of “artificial general intelligence,” with human-like capabilities, that the field’s founders envisaged – will require a close consideration of how today’s putative machine minds differ from our own. 
