How to be a good parent to artificial intelligence

Human values evolve. So how will we raise virtuous A.I.s?

BEN GOERTZEL: The way I've instilled my four human children with the values I prefer is mostly not by preaching at them about what's right and what's wrong. That's not very effective, especially for people with a contrarian personality, which somehow all my kids ended up with; I don't know how. The way I inculcated them with some approximation of the values that are important to me is just by spending time with them in various situations. If your children enter into various situations with you and see how you respond to things, and how you guide them to respond to things in real life, then your kids pick up your values sort of by osmosis. They pick up the practicalities of your values and your culture through their desire to imitate, to learn, and to follow. And, interestingly, this can stick with them on an implicit level even if many values change on the surface. One of my sons became a Sufi Muslim at one point, and some of his values are, on the surface, very different from mine; I'm not Muslim, I'm not religious in any conventional sense. On the other hand, he's very compassionate, a kind-hearted person; he's very intellectual, a scholar. So if you look at a practical level, the vast bulk of the values and culture that he got from me and his mom when he was growing up is all still there. It's implicit rather than a list of rules, but it's there.

For an A.I., I think we need to take an approach somewhat similar to what we do with human children: we need to have A.I.s working and playing side by side with us in real situations. We need to give the A.I.s a desire to imitate us on a basic level and to understand why we react the way we do in one concrete situation after another. Then the A.I. can get a practical model of our values and our culture as they're manifested in a hundred thousand or a million real-life situations. This doesn't guarantee the A.I. will always respond the way we want, but it gives it a real foundation, which you're not going to get from handing it a list of, like, the three or ten laws of being a good human or a good human-like mind. An important thing to remember when you talk about getting human values into an A.I. is that human values are very much a moving target. Much of what we do in our everyday lives would have been considered horrendously immoral by the average human of the Middle Ages in Europe, and, for that matter, a lot of the things we now take for granted as moral and right were considered horrendously immoral by almost everyone I went to elementary school with in the 1970s in suburban New Jersey. My mom is gay, and because of that, in the 1970s, we got our car turned over and all the windows of our house smashed in; that was completely unacceptable in 1970s Southern New Jersey. Now gay marriage is being legalized everywhere.

Human values have changed a lot even within our lifetimes. So when you get to a technological singularity, with brain-computer interfacing and mind uploading and superhuman A.I.s and the ability to clone our bodies over and over again, our values are not going to remain precisely what they are now, even for us humans. It's absurd to think we can start an A.I. with our current human values and that it will stick with our 2018 human values forever; our own values are going to drift. All we can ask is that the A.I. starts out with the values we hold dear now, and that as its values grow and evolve, this growth and evolution is coupled with our own growth and evolution, whose direction we cannot foresee at this time.

  • Until we can design a mind that's superhuman and flawless, we'll have to settle for instilling plain old human values into artificial intelligence. But how do we do that in a world where values are constantly evolving?
  • Many of our life choices today would be considered immoral by people in the Middle Ages — or even the 1970s, says Ben Goertzel, whose family personally experienced the sad state of LGBTQ acceptance in Southern New Jersey 50 years ago.
  • Raising an A.I. is a lot like raising kids, says Goertzel. Kids don't learn best from a list of rules, but from lived experience – watching and imitating their parents. A.I.s and humans will have to play and learn side by side, and evolve together as values adapt toward an increasingly technological future.

