Get paid to be a good human being? That's the future AI will deliver
A.I. will bring a series of social and financial changes, and it will force us to confront a problem we've been avoiding for much too long, says Joscha Bach.
Dr. Joscha Bach (MIT Media Lab and the Harvard Program for Evolutionary Dynamics) is an AI researcher who works and writes about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He is founder of the MicroPsi project, in which virtual agents are constructed and used in a computer model to discover and describe the interactions of emotion, motivation, and cognition of situated agents. Bach’s mission to build a model of the mind is the bedrock research in the creation of Strong AI, i.e. cognition on par with that of a human being. He is especially interested in the philosophy of AI and in the augmentation of the human mind.
Joscha Bach: I think the question of whether we should be afraid of strong A.I. taking over and squashing us like bugs because it doesn’t need us for the things that it’s doing is exactly the same question as whether we should be afraid of big corporations taking over and squashing us like bugs. Because big corporations are already agents: they are already intelligent agents in some sense. They’re not sentient. They borrow humans right now for their decision making. But they do have goals of their own that are different from the goals of the humans that they employ. They usually live longer. They’re much more powerful than people. And it’s very hard for a person to do anything against a corporation.
Usually if you want to fight a corporation you have to become some major organization or corporation or nation state yourself. So in some sense the agency of an A.I. is going to be the agency of the system that builds it, that employs it. And of course most of the A.I.s that we are going to build will not be little Roombas that clean your floors, but it’s going to be very intelligent systems—corporations, for instance—that will perform exactly according to the logic of these systems. So if we want to have these systems built in such a way that they treat us nicely you have to start right now. And it seems to be a very hard problem to do so.
The job loss because of automation has several aspects. I think the most obvious thing that we should be seeing is: if our jobs can be done by machines, that’s a very, very good thing. It’s not a bug, it’s a feature.
If I don’t need to clean the street, if I don’t need to drive a car for other people, if I don’t need to work a cash register for other people, if I don’t need to pick goods in a big warehouse and put them into boxes, it’s an extremely good thing.
And the trouble that we have with this is that right now this mode of labor, that people sell their lifetime to some kind of corporation or employer, is not only the way that we are protected, it’s also the way we allocate resources. This is how we measure how much bread you deserve in this world. And I think this is something that we need to change.
Some people suggest that we need a Universal Basic Income. I think it might be good to be able to pay people to be good citizens, which means massive public employment. There are going to be many jobs that can only be done by people and these are those jobs where we are paid for being good, interesting people. For instance good teachers, good scientists, good philosophers, good thinkers, good social people, good nurses, for instance. Good people that raise children. Good people that build restaurants and theaters. Good people that make art. And for all these jobs people have enough productivity to make sure that enough bread comes on the table. The question is how we can distribute this.
There’s going to be much, much more productivity in our future. Actually we already have enough productivity to give everybody in the U.S. an extremely good life. And we haven’t fixed the problem of allocating it, how to distribute these things in the best possible way.
And this is something that we need to deal with in the future, and A.I. is going to accelerate this need, and I think by and large it might turn out to be a very good thing that we are forced to do this and to address this problem.
If the past is any evidence of the future it might be a very bumpy road, but who knows. Maybe when we are forced to understand that actually we live in an age of abundance it might turn out to be easier than we think.
Right now we are living in a world where we do certain things the way we’ve done them in the past decades—and sometimes in the past centuries—and we perceive that this is the way it “has” to be done. And we often don’t question these ways, so we might think, “If I do work at this particular factory and this is how I earn my bread, how can we keep that state? How can we prevent A.I. from making my job obsolete? How is it possible that I can keep up my standard of living and so on in this world?”
Maybe this is the wrong question to ask. Maybe the right question is: how can we reorganize societies so that I can do the things that I want to do most, that I think are useful to me and other people, that I really, really want to? Because there will be other ways that I can get my bread made and how I can get money or how I can get a roof over my head, that are going to be more awesome and abundant than the ways that we have now.
We are going to be able to build better cars in the future, better houses in the future, better roads or better ways of transporting people. We are going to have a cleaner environment, if we want to and if we can pull it off, because we have more productivity. And people can have better food, better healthcare and a better way of living. And it’s not because we give them jobs that require more work and require them to lean in harder. They need to work less. They need to sell less of their lifetime for things that they don’t want.
And that’s actually a very, very good thing. But it means that we have to change the way our current economy works which means we have to reinvent a lot of our labor market systems. We have to reinvent the way we distribute money and allocate resources in society. And when that happens people have a much better life than we can currently imagine.
To know whether or not we should fear A.I., we first have to understand how it will behave in the world. Cognitive scientist Joscha Bach believes A.I. has the potential to mistreat humans—but no worse than big corporations already do. The future won't be filled with Roombas and anthropomorphized house-help robots, he says, so a physical threat is not the main concern. A.I. will take the form of intelligent systems that operate as corporations, and they will adopt the ethics of whatever company builds them. "If we want to have these systems built in such a way that they treat us nicely you have to start right now. And it seems to be a very hard problem to do so," Bach says. And yet he appears to be optimistic about society's other main A.I. fear: job automation. He frames it like this: if a job is you selling the best years of your life to a corporation, automating as many manual tasks as possible is really a release from that contract—but how will we afford to live, and what will we do with our days? Many suggest Universal Basic Income, but Bach sees it a little differently: mass public employment. Pay people to be good humans: good at teaching and at raising their children. Pay them to be good scientists, good philosophers, good architects and chefs—the things that make us most human. Job automation will also force us to confront one of our most difficult and uncomfortable problems: that we are living in an age of abundance, but fail to distribute resources so that everyone can live a decent life. "It might turn out to be a very good thing if you are forced... to address this problem," he says. Joscha Bach's latest book is Principles of Synthetic Intelligence.