Who's in the Video
Lawrence Maxwell Krauss is a Canadian-American theoretical physicist who is a professor of physics, and the author of several bestselling books, including The Physics of Star Trek and A Universe from Nothing. He is an advocate of scientific skepticism, science education, and the science of morality. Krauss is[…]

All new technology is frightening, says physicist Lawrence Krauss. But there are many more reasons to welcome machine consciousness than to fear it.


Right now, says Krauss, robots can’t even fold laundry. But when they do learn to think (which he considers very likely), there is also reason to believe they will develop consciences.

Lawrence Krauss: I see no obstacle to computers eventually becoming conscious in some sense. That will be a fascinating experience, and as a physicist I’ll want to know whether those computers do physics the same way humans do physics. And there’s no doubt that those machines will be able to evolve computationally, potentially at a faster rate than humans. And in the long term, the highest forms of consciousness on the planet may not be purely biological. But that’s not necessarily a bad thing. We always present computers as if they don’t have capabilities of empathy or emotion, but I would think that any intelligent machine would ultimately have experience. It’s a learning machine, and it would ultimately learn from its experience like a biological conscious being. And therefore it’s hard for me to believe that it would not be able to have many of the characteristics that we now associate with being human.

Elon Musk, Stephen Hawking, and others who have expressed concern are friends of mine, and I understand their concerns, but frankly I’m not as concerned about AI, in the near term at the very least, as many of my friends and colleagues are. It’s far less powerful than people imagine. I mean, try to get a robot to fold laundry; I’ve just been told you can’t even get robots to fold laundry. Someone just wrote to me, surprised that I had offered the elevator as an old example: when you get in an elevator, you’re using a primitive form of computer, and you’re giving up control, trusting that it will take you where you want to go. Cars are the same thing. Machines are useful because they’re tools that help us do what we want to do, and I think computational machines are good examples of that.

One has to be very careful in creating machines not to assume they’re more capable than they are. That’s true of cars. That’s true of the vehicles we make, the weapons we create, and the defensive mechanisms we create. And so to me the dangers of AI are mostly due to the fact that people may assume the devices they create are more capable than they are, and therefore don’t need more control and monitoring.

I guess I find the opportunities to be far more exciting than the dangers. The unknown is always dangerous, but ultimately machines, and computational machines in particular, are improving our lives in many ways. We, of course, have to realize that the rate at which machines are evolving in capability may far exceed the rate at which society is able to deal with them. Teenagers aren’t talking to each other but are always looking at their phones, and not just teenagers: I was just in a restaurant here in New York this afternoon, and half the people were not talking to the people they were with but were staring at their phones. That may not be a good thing for social interaction, and people may have to come to terms with it. But I don’t think people view their phones as a danger. They view their phones as a tool that in many ways allows them to do what they would otherwise do, more effectively.
