Artificial general intelligence: The domain of the patient, philosophical coder

Sure, some expert-level knowledge is needed if you want to program artificial intelligence. But AI expert Ben Goertzel posits that you also need something that Guns N' Roses sang about: a lil' patience.

Ben Goertzel: My cousin who lives in Hong Kong is a game programmer, and he loves what I’m doing, but he just tells me when we discuss it, “I need immediate gratification.” He codes something and he sees a game character do something cool, right? And if you need that, if you really need to see something cool happen every day, AGI is not for you. In AGI you may work six months and nothing interesting happens. And then something really interesting happens. So I think if someone doesn’t have that kind of stubborn, pigheaded persistence, I will tend to employ them doing, for example, data analysis, because that gives immediate gratification. You get a data set from a customer, you run a machine learning algorithm on it, and you get a result which is interesting. The customer is happy. Then you go on to the next data set.

And if you explain the different types of work available, most people are actually pretty good at choosing what won’t drive them crazy. So some people are like, “Yeah, I want to do stuff that seems cool every day.” And other people are like, “Well, I really want to understand how thinking works. I want to understand how cognition and vision work together, and that’s much more interesting to me than applying an existing vision algorithm to solve someone’s problem.”

So I tend to throw the issue at the potential employee or volunteer themselves, and sometimes that works, sometimes it doesn’t. But I trust them to know themselves better than I know them anyway.

There are many different types and levels of problems that one encounters in doing AI work. There are sort of low-level algorithmic problems or software design problems which are solved via clever tricks. And then there are deeper problems, like: how do you architect a perception system? How should perception and cognition work together in an AI system? If a system knows one language, how do you leverage that knowledge to help it learn another language? I find personally that these deeper problems are the kind of thing you solve while you’re walking the dog in the forest or taking a shower or driving down the highway or something.

And it seems to be that the people who make headway on these deeper problems have the personality type that carries the problem in their head all the time. You’ll think about this thing when you go to sleep, you’re still thinking about it when you wake up, and you just keep chewing on this issue a hundred times a day. It could go on for days or weeks or years, or even decades. And then the solution pops up for you.

And not everyone has the inclination or personality to be obsessive about keeping a problem like an egg in your mind, in your focus, until the solution hatches out. But that’s a particular cognitive style or habit or skill which I see in everyone I know who’s really making headway on the AGI problem.

If you want instant gratification, this isn't the line of work for you. Ben's most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.

Russia sends humanoid robot to space, fails to dock with ISS

The Russian-built FEDOR was launched on a mission to help ISS astronauts.

  • Russia launched a spacecraft carrying FEDOR, a humanoid robot.
  • Its mission is to help astronauts aboard the International Space Station.
  • Such androids could eventually help with dangerous missions like spacewalks.

Human extinction! Don't panic; think about it like a philosopher.

Most people think human extinction would be bad. These people aren't philosophers.

  • A new opinion piece in The New York Times argues that humanity is so horrible to other forms of life that our extinction wouldn't be all that bad, morally speaking.
  • The author, Dr. Todd May, is a philosopher who is known for advising the writers of The Good Place.
  • The idea of human extinction is a big one, with lots of disagreement on its moral value.

This incredibly rich machinery – with Antonio Damasio

Picking up where we left off a year ago, a conversation about the homeostatic imperative as it plays out in everything from bacteria to pharmaceutical companies—and how the marvelous apparatus of the human mind also gets us into all kinds of trouble.

Think Again Podcasts
  • "Prior to nervous systems: no mind, no consciousness, no intention in the full sense of the term. After nervous systems, gradually we ascend to this possibility of having to this possibility of having minds, having consciousness, and having reasoning that allows us to arrive at some of these very interesting decisions."
  • "We are fragile culturally and socially…but life is fragile to begin with. All that it takes is a little bit of bad luck in the management of those supports, and you're cooked…you can actually be cooked—with global warming!"


