Human value is tied to the job market. Will automation be a full-on crisis?
- Andrew Yang is running for president in 2020 as a Democrat. In this video, he discusses the greatest challenge of our time: automation and human-centered capitalism.
- Currently, a person's value is linked to the economic value of their job. So the soon-to-be-extinct coal miner should find new purpose by becoming a software coder, right?
- That's shortsighted, says Yang. One day soon, even the best coders will be outpaced by AI. We need to prepare for the inevitable future by shifting how we fundamentally think about human value.
When it comes to raising superintelligent A.I., kindness may be our best bet.
- We have no guarantee that a superintelligent A.I. is going to do what we want. Once we create something many times more intelligent than we are, it may be "insane" to think we can control what it does.
- What's the best bet to ensure superintelligent A.I. remains compliant with humans and does good works, such as advancing medicine? Raise it in a way that imbues it with compassion and understanding, says Goertzel.
- One way to limit "people doing bad things out of frustration": it may be advantageous for the entire world to be plugged into the A.I. economy so that developers, from whatever country, can monetize their code.
The plan to stop megacorps from owning superintelligence is already underway.
- A.I. technology is often developed within the proprietary silos of big tech companies. What if there was an open, decentralized hub for A.I. developers to share their creations? Enter SingularityNET.
- The many A.I.s in the network could compete with each other to provide services for users, but they could also cooperate, giving way to an emergent-level mind: artificial general intelligence.
- SingularityNET is powered by blockchain technology, meaning whatever "digital organism" emerges will not be owned or controlled by any one person, company, or government.
The Dr. Data Show is a new web series that breaks the mold for data science infotainment, captivating the planet with short webisodes that cover the very best of machine learning and predictive analytics.
- All the hype around artificial intelligence misunderstands what intelligence really is.
- And A.I. is definitely, definitely not going to kill you, ever.
- Machine learning as a process and a concept, however, holds more promise.
Human values evolve. So how will we raise virtuous A.I.s?
- Until we can design a mind that's superhuman and flawless, we'll have to settle for instilling plain old human values into artificial intelligence. But how to do this in a world where values are constantly evolving?
- Many of our life choices today would be considered immoral by people in the Middle Ages — or even the 1970s, says Ben Goertzel, whose family personally experienced the sad state of LGBTQ acceptance in Southern New Jersey 50 years ago.
- Raising an A.I. is a lot like raising kids, says Goertzel. Kids don't learn best from a list of rules, but from lived experience: watching and imitating their parents. A.I.s and humans will have to play and learn side by side, and evolve together as values adapt toward an increasingly technological future.