Ben Goertzel is CEO and chief scientist at SingularityNET, a project dedicated to creating benevolent decentralized artificial general intelligence. He is also chief scientist of financial prediction firm Aidyia Holdings[…]

BEN GOERTZEL: The way I've instilled my four human children with the values I prefer is mostly not by preaching at them about what's right and what's wrong. That's not very effective, especially for people with a contrarian personality, which somehow all my kids ended up with; I don't know how. So the way I inculcated them with some approximation of the values that are important to me is just by spending time with them in various situations. If your children enter into various situations with you and see how you respond to things, and how you guide them to respond to things in real life, then your kids pick up your values sort of by osmosis. I mean, they pick up, through their desire to imitate and to learn and to follow, the practicalities of your values and your culture. And, interestingly, this can stick with them on an implicit level even if many values change on the surface. One of my sons became a Sufi Muslim at one point, and some of his values are, on the surface, very different from mine; I'm not Muslim, I'm not religious in any conventional sense. On the other hand, he's a very compassionate, kind-hearted person, he's very intellectual, he's a scholar. So if you look at a practical level, the vast bulk of the values and culture that he got from me and his mom when he was growing up is still there; it's implicit rather than a list of rules, but it's all still there.

For an A.I., I think we need to take an approach somewhat similar to what we do with human children: we need to have A.I.s working and playing side by side with us in real situations. We need to give the A.I.s a desire to imitate us on a basic level and to understand why we're reacting the way we do in one concrete situation after another. Then the A.I. can build a practical model of our values and our culture as it's manifested in a hundred thousand or a million real-life situations. This doesn't guarantee the A.I. will always respond the way we want, but it will give it a real foundation, which you're not going to get from handing it a list of, like, the three or ten laws of being a good human or a good human-like mind. An important thing to remember when you talk about getting human values into an A.I. is that human values are very much a moving target. Much of what we do in our everyday lives would have been considered horrendously immoral by the average human of the Middle Ages in Europe, and for that matter, a lot of the things we now take for granted as moral and right were considered horrendously immoral by almost everyone I went to elementary school with in the 1970s in suburban New Jersey. Like, my mom is gay, and because of that, in the 1970s we got our car turned over, we got all the windows of our house smashed in; that was completely unacceptable in the 1970s in Southern New Jersey. Now gay marriage is being legalized everywhere.
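
[Editor's note: one minimal way to picture the "learn values by imitation in concrete situations" idea above is behavioral cloning, where an agent tallies what a trusted human did in each situation it witnessed and then imitates that. The sketch below is purely illustrative; the situations, actions, and ImitationPolicy class are hypothetical stand-ins, not anything from Goertzel's or SingularityNET's actual systems.]

```python
# A toy behavioral-cloning sketch: learn "values" from demonstrations,
# i.e. (situation, mentor_action) pairs, rather than from a list of rules.
from collections import Counter, defaultdict
import random

class ImitationPolicy:
    """Acts by imitating what a human mentor did in each observed situation."""

    def __init__(self):
        # Maps each situation to a tally of the actions the mentor took there.
        self.counts = defaultdict(Counter)

    def observe(self, situation, mentor_action):
        """Record one demonstration: what the mentor did in this situation."""
        self.counts[situation][mentor_action] += 1

    def act(self, situation):
        """Imitate the mentor's most common action; hold back if the situation is new."""
        if situation in self.counts:
            return self.counts[situation].most_common(1)[0][0]
        return random.choice(["observe", "ask_for_guidance"])

# Hypothetical demonstrations gathered "side by side" with a human.
demonstrations = [
    ("stranger_drops_wallet", "return_it"),
    ("stranger_drops_wallet", "return_it"),
    ("friend_is_upset", "listen"),
    ("friend_is_upset", "listen"),
    ("friend_is_upset", "offer_help"),
]

policy = ImitationPolicy()
for situation, action in demonstrations:
    policy.observe(situation, action)

print(policy.act("friend_is_upset"))       # -> "listen"
print(policy.act("unseen_situation"))      # -> falls back to observing or asking
```

[A real system in the spirit of the passage would replace this lookup table with function approximation so it could generalize across the "hundred thousand or a million" situations mentioned, but the core move is the same: values are absorbed from demonstrated behavior, not enumerated as rules.]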

Human values have changed a lot even within our lifetimes. So when you get to a technological singularity, with brain-computer interfacing and mind uploading and superhuman A.I.s, and we're able to clone our bodies over and over again, our values are not going to remain precisely as they are now, even for us humans. It's absurd to think we can start an A.I. with our current human values and that it will stick with our 2018 human values forever; our own values are going to drift. So all we can ask is that the A.I. starts out with the values we hold dear now, and that as its values grow and evolve, this growth and evolution is coupled with our own growth and evolution, whose direction we cannot foresee at this time.

