
How to build an A.I. brain that can surpass human intelligence

Artificial intelligence has the capability to far surpass our intelligence in a relatively short period of time, but we have to lay the right groundwork now while we still can.

Ben Goertzel: If you think much about physics and cognition and intelligence it’s pretty obvious the human mind is not the smartest possible general intelligence any more than humans are the highest jumpers or the fastest runners. We’re not going to be the smartest thinkers.

If you are going to work toward AGI rather than focusing on some narrow application, there are a number of different approaches you might take. I’ve spent some time surveying the AGI field as a whole and organizing an annual conference on AGI, and I’ve spent a bunch more time on a specific AGI approach based on OpenCog, an open source software platform. In the big picture, one way to approach AGI is to try to emulate the human brain at some level of precision. This is the approach I see Google DeepMind taking, for example. They’ve taken deep neural networks, which in their common form are mostly a model of visual and auditory processing in the human brain. And now in their recent work, such as the DNC, or differentiable neural computer, they’re taking these deep networks that model visual or auditory processing and coupling them with a memory matrix that models some aspect of what the hippocampus does, the part of the brain that deals with working memory and short-term memory, among other things. So this illustrates an approach where you take neural networks emulating different parts of the brain, and maybe you take more and more neural networks emulating different parts of the human brain, and you try to get them all to work together, not necessarily doing computational neuroscience but trying to emulate the way different parts of the brain do their processing and the way they talk to each other.

A totally different approach is being taken by Marcus Hutter at the Australian National University. He wrote a beautiful book on universal AI in which he showed how to write a superhuman, infinitely intelligent thinking machine in something like 50 lines of code. The problem is it would take more computing power than there is in the entire universe to run. So it’s not practically useful, but researchers are then trying to scale down from this theoretical AGI to find something that will really work.

Now the approach we’re taking in the OpenCog project is different from either of those. We’re attempting to emulate, at a very high level, the way the human mind seems to work as an embodied, social, generally intelligent agent, one which comes to grips with hard problems in the context of coming to grips with itself and its life in the world. We’re not trying to model the way the brain works at the level of neurons or neural networks. We’re looking at the human mind from a high-level cognitive point of view. What kinds of memory are there? Well, there’s semantic memory of abstract knowledge or concrete facts. There’s episodic memory of our autobiographical history. There’s sensory-motor memory. There’s associative memory of things that have been related to each other in our lives. There’s procedural memory of how to do things.

And we then look at the different kinds of learning and reasoning the human mind can do. We can do logical deduction sometimes. We’re not always good at it. We make emotional intuitive leaps and strange creative combinations of things. We learn by trial and error and habit. We learn socially by imitating, mirroring, emulating or opposing others. These different kinds of memory and learning that the human mind has – one can attempt to achieve each of those with a cutting-edge computer science algorithm, rather than trying to achieve each of those functions and structures in the way the brain does.

So in OpenCog we have a central knowledge repository, which is very dynamic and lives in RAM on a large network of computers, called the AtomSpace. For the mathematicians or computer scientists in the audience, the AtomSpace is what you’d call a weighted labeled hypergraph. So it has nodes and it has links. A link can go between two nodes, or a link could go between three, four, five or 50 nodes. Different nodes and links have different types, and the nodes and links can have numbers attached to them. A node or link could have a weight indicating a probability or a confidence. It could have a weight indicating how important it is to the system right now, or how important it is in the long term, so that it should be kept around in the system’s memory.
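The structure Goertzel describes can be sketched in a few lines of code. This is a minimal illustration of a weighted labeled hypergraph, not OpenCog's actual API; all class and field names here are invented for the demo.

```python
# Minimal sketch of a weighted labeled hypergraph in the spirit of the
# AtomSpace described above. Names are illustrative, not OpenCog's real API.
from dataclasses import dataclass, field

@dataclass
class Atom:
    name: str
    atom_type: str            # e.g. "ConceptNode", "InheritanceLink"
    strength: float = 1.0     # probability-like weight
    confidence: float = 1.0   # how much evidence backs the strength
    importance: float = 0.0   # short-term attention value

@dataclass
class Link(Atom):
    # A hyperedge: may join two, three, or fifty atoms, not just pairs.
    targets: list = field(default_factory=list)

# Two typed nodes...
cat = Atom("cat", "ConceptNode")
animal = Atom("animal", "ConceptNode")

# ...joined by a typed link carrying a probability and a confidence.
inherits = Link("cat-isa-animal", "InheritanceLink",
                strength=0.95, confidence=0.8, targets=[cat, animal])

print(inherits.targets[0].name)  # prints "cat"
```

Because links are themselves atoms, a link can in turn appear as a target of another link, which is how higher-level symbolic expressions can sit in the same store as low-level nodes.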

On this AtomSpace, this weighted labeled hypergraph, we can have a lot of different AI processes working together cooperatively. The AtomSpace, the memory store, is what we would call neural-symbolic. That means we can represent nodes and links that are like neurons in the brain, which is fairly low level. But we can also represent higher-level nodes and links representing pieces of symbolic logic expressions. So we can do explicit logical reasoning, which is pretty abstract, and low-level neural net stuff in the same hypergraph, the same AtomSpace. Acting on this AtomSpace we have deep neural networks for visual and auditory perception. We have a probabilistic logic engine that does abstract reasoning. We have an evolutionary learning algorithm that uses genetic-algorithm-type methods to try to evolve radically new ideas and concepts and look for data patterns. And we have a neural-net-type dynamic that spreads activity and importance throughout the network. There are a few other algorithms as well, such as a pattern mining algorithm that just scans through the whole AtomSpace looking for surprising stuff. And the trick is that all these different cognitive algorithms have to work together cooperatively, helping each other rather than hurting each other.
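The "neural-net-type dynamic that spreads activity and importance" can be illustrated with a toy spreading-activation step. This is a hypothetical sketch, not OpenCog's attention-allocation code; the decay factor and graph structure are assumptions chosen for the demo.

```python
# Toy spreading-activation step: each atom passes a decayed share of its
# importance to its neighbors. A stand-in for the attention dynamic above.
def spread_importance(importance, neighbors, decay=0.5):
    """One update step over an importance map {atom: value}."""
    updated = dict(importance)
    for atom, value in importance.items():
        outgoing = neighbors.get(atom, [])
        share = value * decay / max(len(outgoing), 1)
        for nb in outgoing:
            updated[nb] = updated.get(nb, 0.0) + share
    return updated

graph = {"cat": ["animal", "pet"], "animal": [], "pet": []}
imp = {"cat": 1.0, "animal": 0.0, "pet": 0.0}
imp = spread_importance(imp, graph)
print(imp)  # cat keeps 1.0; animal and pet each gain 0.25
```

Atoms that accumulate importance this way would be the ones the system keeps in memory and attends to, as described above.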

See, the bottleneck in essentially every AI approach ever taken, be it a neural net, a logic engine, a genetic algorithm, whatever, has been what we call a combinatorial explosion. What that means is you have a lot of data items: a lot of perceptions coming into your eye, or a lot of possible moves on the chess board, or a lot of possible ways to turn the wheel of the car. And there are so many combinations of possible data items and possible things you could do that sifting through all those combinations becomes an exponential problem. If you have a thousand things, there are two to the one-thousandth power ways to combine them, and that’s way too many. So how to sift through combinatorial explosions is the core problem everyone has to deal with. In a deep neural network as currently pursued, it’s solved by making the network have a very specific structure that reflects the structure of visual and auditory streams. In a logic engine you don’t have that sort of luxury, because a logic engine has to deal with anything, not just sensory data. But what we do in OpenCog is we’ve worked out a system where each of the cognitive processes can help another one out when it gets stuck in some combinatorial explosion problem. So if a deep neural network trying to perceive things gets confused, because it’s dark or it’s looking at something it never saw before, maybe the reasoning engine can come in and do some inference to cut through that confusion.
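The arithmetic behind the explosion is easy to verify: with n items there are 2 to the nth power subsets to consider, which outgrows any feasible search almost immediately.

```python
# The combinatorial explosion in numbers: n items admit 2**n subsets.
def subset_count(n):
    return 2 ** n

print(subset_count(10))               # 1024 -- trivially searchable
print(len(str(subset_count(1000))))   # 302 -- a 302-digit number of combinations
```

A brute-force search over a 302-digit number of combinations is hopeless, which is why every approach needs some structure, prior, or helper process to prune the space.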

If logical reasoning is getting confused and doesn’t know what step to take next, because there are just so many possibilities out there and not much information about them, well, maybe you fish into your sensory-motor memory and use deep learning to visualize something you saw before, and that gives you a clue for paring down the many possibilities the logic engine is seeing. Now, you can model this kind of cognitive synergy mathematically using a branch of mathematics called category theory, which is something I’ve been working on lately. But what’s even more interesting is to build a system that manifests this and achieves general intelligence as a result, and that’s what we’re doing in the OpenCog project.
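The fallback pattern described here, where one process rescues another that has stalled on a combinatorial explosion, can be sketched as a toy control loop. Both "solvers" below are stand-ins invented for illustration, not real OpenCog components.

```python
# Toy sketch of cognitive synergy: when one process stalls, another is
# invoked to prune the search space. Both solvers are hypothetical stand-ins.
def logic_engine(candidates, max_branch=3):
    """Gives up (returns None) when the branching factor is too large."""
    if len(candidates) > max_branch:
        return None  # combinatorial explosion: too many possibilities
    return min(candidates)  # pretend the smallest candidate is the proven answer

def perception_prior(candidates):
    """Stand-in for sensory-motor memory pruning implausible options."""
    return [c for c in candidates if c % 2 == 0]  # keep a small subset

def solve(candidates):
    answer = logic_engine(candidates)
    if answer is None:
        # Synergy step: ask another cognitive process to cut the space down,
        # then hand the reduced problem back to the logic engine.
        answer = logic_engine(perception_prior(candidates))
    return answer

print(solve([5, 2, 9, 4, 7]))  # engine stalls on 5 options; pruned to [2, 4] -> 2
```

The point of the design is that neither process has to solve the whole problem alone: each one only needs to shrink the other's search space enough to make it tractable.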

We’re not there yet with general intelligence, but we’re getting there step by step. We’re using our open source OpenCog platform to control David Hanson’s beautiful, incredibly realistic humanoid robots, like the Sophia robot, which has gotten a lot of media attention in the last year. We’re using OpenCog to analyze biological data related to the genetics of longevity, and we’re doing a host of other consulting projects with it. So we’re proceeding on an R&D track and an application track at the same time. But our end goal with the system is to use cognitive synergy on our neural-symbolic knowledge store to achieve human-level AI, though that’s just an early-stage goal, and then AI far beyond the human level.

And that is another advantage of taking an approach that doesn’t adhere slavishly to the human brain. The brain is pretty good at recognizing faces because millions of years of evolution went into that part of the brain. But for doing science or math or logical reasoning or strategic planning we’re pretty bad. And these are things that we’ve started doing only recently in evolutionary time as a result of modern culture. So I think actually OpenCog and other AI systems have potential to be far better than human beings at the sort of logical and strategic side of things. And I think that’s quite important because if you take a human being and upgrade them to like 10,000 IQ the outcome might not be what you want, because you’ve got a motivational system and an emotional system that basically evolved in prehuman animals. Whereas if you architect a system where rationality and empathy play a deeper role in the architecture then as its intelligence ramps way up we may find a more beneficial outcome.

Artificial intelligence has the capability to far surpass our intelligence in a relatively short period of time. But AI expert Ben Goertzel knows that the foundation has to be strong for that artificial brain power to grow exponentially. It's all good to be super-intelligent, he argues, but if you don't have rationality and empathy to match it the results will be wasted and we could just end up with an incredible number-cruncher. In this illuminating chat, he makes the case for thinking bigger. Ben Goertzel's most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.


How to catch a glimpse of Comet NEOWISE before it’s gone

Unless you plan to try again in 6,800 years, this week is your shot.

  • Comet NEOWISE will be most visible in the U.S. during the evenings from July 14-19, 2020.
  • After July 23rd, NEOWISE will be visible only through good binoculars and telescopes.
  • Look in the northwestern sky below the Big Dipper after dusk while there's a chance.

UPDATE: NASA is broadcasting a NASA Science Live episode highlighting Comet NEOWISE. NASA experts will discuss and answer public questions beginning at 3PM EST on Wednesday, July 15. Tune in via the agency's website, Facebook Live, YouTube, Periscope, LinkedIn, Twitch, or USTREAM.

Before last evening, July 14, 2020, the easiest way to see Comet NEOWISE from the United States was to catch it about an hour before sunrise. (It's the brightest comet to zoom past Earth since Comet Hale-Bopp in 1997.) Now, however, you can see it in the evening, where it will remain until the 19th. This is a definite don't-miss event: NEOWISE won't be coming back our way for another 6,800 years. It's the first major comet of the millennium, and by all accounts, it's unforgettable.

NEOWISE just got back from the Sun

Comet NEOWISE is named after the NASA infrared space telescope that first spotted it on March 27th. Its official moniker is C/2020 F3. It's estimated that the icy comet is about three miles across, not counting its tail.

NEOWISE is now heading away from our Sun, having made its closest approach to our star, 27.4 million miles, on July 3. The heat from that encounter is what gave NEOWISE its tail: it caused gas and dust to be released from the icy object, creating the trail of debris that looks so magical from here.

As NEOWISE moves closer to Earth, paradoxically, it will be less and less visible. By about July 23rd, you'll need binoculars or a telescope to see it at all. All of which makes this week prime time.

An evening delight


First, find an unobstructed view of the northwest sky, free of streetlights, car headlights, apartment lights, and so on. And then, according to Sky & Telescope:

"Start looking about one hour after sunset, when you'll find it just over the northwestern horizon as the last of twilight fades into darkness."

It should be easy to spot since it's near one of the most recognizable constellations up there, the Big Dipper. "Look about three fists below the bottom of the Big Dipper, which is hanging down by its handle high above, and from there perhaps a little to the right." Et voilà: Comet NEOWISE.

Says Sky & Telescope's Diana Hannikainen, "Look for a faint, fuzzy little 'star' with a fainter, fuzzier little tail extending upward from it."

The comet should be visible with the naked eye, though binoculars and a simple telescope may reveal more detail.

You may also be able to snap a photo of this special visitor, though you'll need the right gear to do so. A dedicated camera is more likely to capture a good shot than a phone, but in either case, you'll need a tripod or some other means of holding the camera dead still as it takes a timed exposure of several seconds (not all phones can do this).