Elon Musk, Sam Harris, Ray Kurzweil and other visionaries discuss AI superintelligence at a recent conference.
A fascinating conference on artificial intelligence was recently hosted by the Future of Life Institute, an organization aimed at promoting “optimistic visions of the future” while anticipating “existential risks” from artificial intelligence and other directions.
The conference “Superintelligence: Science or Fiction?” featured a panel including Elon Musk of Tesla Motors and SpaceX, futurist Ray Kurzweil, Demis Hassabis of Google’s DeepMind, neuroscientist and author Sam Harris, philosopher Nick Bostrom, philosopher and cognitive scientist David Chalmers, Skype co-founder Jaan Tallinn, as well as computer scientists Stuart Russell and Bart Selman. The discussion was led by MIT cosmologist Max Tegmark.
The conference participants offered a number of prognostications and warnings about the coming superintelligence, an artificial intelligence that will far surpass the brightest human.
Most agreed that such an AI (or AGI, for Artificial General Intelligence) will come into existence; it is just a matter of when. Predictions of the timing varied widely, with Elon Musk saying that one day an AI will reach “a threshold where it's as smart as the smartest most inventive human,” which it will then surpass in a “matter of days,” becoming smarter than all of humanity.
Ray Kurzweil’s view is that however long it takes, AI will be here before we know it:
“Every time there is an advance in AI, we dismiss it as 'oh, well, that's not really AI': chess, Go, self-driving cars. AI, as you know, is the field of things we haven't done yet. That will continue when we actually reach AGI. There will be lots of controversy. By the time the controversy settles down, we will realize that it's been around for a few years," says Kurzweil [5:00].
Neuroscientist and author Sam Harris acknowledges that his perspective comes from outside the AI field, but he sees valid concerns about how to control AI. He thinks people don’t yet take the potential issues with AI seriously. Many assume it won’t affect them in their lifetime - what he calls the “illusion that the time horizon matters.”
“If you feel that this is 50 or a hundred years away, that is totally consoling. But there is an implicit assumption there: the assumption is that you know how long it will take to build this safely, and that 50 or a hundred years is enough time,” he says [16:25].
On the other hand, Harris points out that what is at stake here is how much intelligence humans actually need. If we had more intelligence, wouldn’t we be able to solve more of our problems, like cancer? In fact, if AI could help us get rid of diseases, then humanity is currently suffering the “pain of not having enough intelligence.”
Elon Musk’s approach is to look for the best possible future - the “good future,” as he calls it. He thinks we are headed either for “superintelligence or civilization ending,” and it’s up to us to envision the world we want to live in.
“We have to figure out, what is a world that we would like to be in where there is this digital superintelligence?” says Musk [at 33:15].
He also brings up an interesting perspective: we are already cyborgs, because we utilize “machine extensions” of ourselves like phones and computers.
Musk expands on his vision of the future by saying it will require two things - “solving the machine-brain bandwidth constraint and democratization of AI”. If these are achieved, the future will be “good” according to the SpaceX and Tesla Motors magnate [51:30].
By the “bandwidth constraint,” he means that as we become more cyborg-like, humans will need a high-bandwidth neural interface to the cortex in order to achieve a true symbiosis with machines, so that the “digital tertiary layer” can send and receive information quickly.
At the same time, it’s important for AI to be available equally to everyone; otherwise, a small group with such powers could become “dictators”.
He brings up an illuminating quote about how he sees the future going:
“There was a great quote by Lord Acton which is that 'freedom consists of the distribution of power and despotism in its concentration.' And I think as long as we have - as long as AI powers, like anyone can get it if they want it, and we've got something faster than meat sticks to communicate with, then I think the future will be good,” says Musk [51:47].
You can see the whole great conversation here:
Philosopher and cognitive scientist David Chalmers warns about an AI-dominated future world without consciousness at a recent conference on artificial intelligence that also included Elon Musk, Ray Kurzweil, Sam Harris, Demis Hassabis and others.
Recently, a conference on artificial intelligence, tantalizingly titled “Superintelligence: Science or Fiction?”, was hosted by the Future of Life Institute, which works to promote “optimistic visions of the future”.
The conference offered a range of opinions on the subject from a variety of experts, including Elon Musk of Tesla Motors and SpaceX, futurist Ray Kurzweil, Demis Hassabis of Google's DeepMind, neuroscientist and author Sam Harris, philosopher Nick Bostrom, philosopher and cognitive scientist David Chalmers, Skype co-founder Jaan Tallinn, as well as computer scientists Stuart Russell and Bart Selman. The discussion was led by MIT cosmologist Max Tegmark.
The conversation centered on the future benefits and risks of artificial superintelligence, with everyone generally agreeing that it’s only a matter of time before AI becomes paramount in our lives. Eventually, AI will surpass human intelligence, with the ensuing risks and transformations. And Elon Musk, for one, thinks it’s rather pointless to worry, as we are already cyborgs, considering all the technological extensions of ourselves that we depend on daily.
A worry for Australian philosopher and cognitive scientist David Chalmers is the creation of a world devoid of consciousness. He observes that discussions of future superintelligences often presume that AIs will eventually become conscious. But what if that sci-fi possibility - that we will create completely artificial humans - never comes to fruition? Instead, we could be creating a world endowed with artificial intelligence but no actual consciousness.
David Chalmers speaking. Credit: Future of Life Institute.
Here’s how Chalmers describes this vision (starting at 22:27 in the YouTube video below):
“For me, that raises the possibility of a massive failure mode in the future: the possibility that we create human or superhuman level AGI, and we've got a whole world populated by superhuman level AGIs, none of whom is conscious. And that could potentially be a world of great intelligence, no consciousness, no subjective experience at all. Now, I think many, many people, with a wide variety of views, take the view that basically subjective experience or consciousness is required in order to have any meaning or value in your life at all. So therefore, a world without consciousness could not possibly be a positive outcome. Maybe it wouldn't be a terribly negative outcome, it would just be a zero outcome, and among the worst possible outcomes.”
Chalmers is known for his work on the philosophy of mind and has delved particularly into the nature of consciousness. He famously formulated the idea of a “hard problem of consciousness,” which he describes in his 1995 paper “Facing Up to the Problem of Consciousness” as the question of “why does the feeling which accompanies awareness of sensory information exist at all?”
His solution to this issue of an AI-run world without consciousness? Create a world of AIs with human-like consciousness:
“I mean, one thing we ought to at least consider doing there is, given that we don't understand consciousness - we don't have a complete theory of consciousness - maybe we can be most confident about consciousness when it's similar to the case that we know best, namely human consciousness... So therefore, maybe there is an imperative to create human-like AGI in order that we can be maximally confident that there is going to be consciousness,” says Chalmers (starting at 23:51).
By making it our explicit goal to fully recreate ourselves in all of our human characteristics, we may be able to avoid a destiny as a soulless world of machines. It's a warning, and an objective, worth considering while we can. Yet Chalmers’s own words suggest that since we don’t understand consciousness, this may be a goal doomed to failure.
Please check out the excellent conference in full here:
Robots ready to produce the new Mini Cooper are pictured during a tour of the BMW's plant at Cowley in Oxford, central England, on November 18, 2013. (Photo credit: ANDREW COWIE/AFP/Getty Images)
A recent conference on the future of artificial intelligence features a visionary debate among Elon Musk, Ray Kurzweil, Sam Harris, Nick Bostrom, David Chalmers, Jaan Tallinn and others.
A fascinating conference on artificial intelligence was recently hosted by the Future of Life Institute, an organization aimed at promoting “optimistic visions of the future”. The conference “Superintelligence: Science or Fiction?” included such luminaries as Elon Musk of Tesla Motors and SpaceX, futurist Ray Kurzweil, Demis Hassabis of Google’s DeepMind, neuroscientist and author Sam Harris, philosopher Nick Bostrom, philosopher and cognitive scientist David Chalmers, Skype co-founder Jaan Tallinn, as well as computer scientists Stuart Russell and Bart Selman. The discussion was led by MIT cosmologist Max Tegmark.
The group touched on a number of topics about the future benefits and risks of coming artificial superintelligence, with everyone generally agreeing that it’s only a matter of time before AI becomes ubiquitous in our lives. Eventually, AI will surpass human intelligence, with the risks and transformations that such a seismic event would entail.
Elon Musk has not always been an optimistic voice on AI, having warned of its dangers to humanity. But here he sounds more muted about the threat. He sees the AI future as inevitable, with dangers to be mitigated through government regulation, even though he concedes that regulations can be a “bit of a buzzkill”.
He also brings up an interesting perspective: our fears of the technological changes the future will bring are largely irrelevant. According to Musk, we are already cyborgs, utilizing “machine extensions” of ourselves like phones and computers.
“By far you have more power, more capability, than the President of the United States had 30 years ago. If you have an Internet link you have an oracle of wisdom, you can communicate to millions of people, you can communicate to the rest of Earth instantly. I mean, these are magical powers that didn’t exist, not that long ago. So everyone is already superhuman, and a cyborg,” says Musk [at 33:56].
He sees humans as information-processing machines that pale in comparison to the powers of a computer. What is necessary, according to Musk, is to create a greater integration between man and machine, specifically altering our brains with technology to make them more computer-like.
“I think the two things that are needed for a future that we would look at and conclude is good, most likely, is, we have to solve that bandwidth constraint with a direct neural interface. I think a high bandwidth interface to the cortex, so that we can have a digital tertiary layer that’s more fully symbiotic with the rest of us. We’ve got the cortex and the limbic system, which seem to work together pretty well - they’ve got good bandwidth, whereas the bandwidth to the additional tertiary layer is weak,” explained Musk [at 35:05].
Once we solve that issue, AI will spread everywhere. It’s important to do so because, according to Musk, if only a small group had such capabilities, its members would become “dictators” with “dominion over Earth”.
What would a world filled with such cyborgs look like? Visions of Star Trek’s Borg come to mind.
Musk thinks it will be a society full of equals:
“And if we do those things, then it will be tied to our consciousness, tied to our will, tied to the sum of individual human will, and everyone would have it so it would be sort of still a relatively even playing field, in fact, it would be probably more egalitarian than today,” points out Musk [at 36:38].
The whole conference is immensely fascinating and worth watching in full. Check it out here:
Job automation won't be as bad as we think, so we need to learn how to stop working and prepare so we're not dragged into the future kicking and screaming.
The average employed American works about nine hours a day, according to the Bureau of Labor Statistics. Twice as many of them are unhappy at work as are happy. The commute to and from work, which can also be considered work, only deepens the misery. That’s why most of us are excited by the prospect of robots taking over the job market in the near future – even if we’re scared of being left jobless.
That fear is justified. IT research firm Gartner predicts that one-third of jobs will be replaced by software, robots, and smart machines by 2025. Our own Ray Kurzweil backs that up by insisting that robots will reach human levels of intelligence by 2029. Both projections are stark, and difficult to verify given the inconsistent advancement of artificial intelligence, as we’ve told you before. But the technology continues to advance in that direction, so some day in the near-ish future you should expect a machine to take over your job. MIT researcher Andrew McAfee explains how – even for highly skilled workers:
As McAfee points out, we used to think that automating jobs would create a permanent underclass of unskilled cheap labor. Recent advancements in AI and deep learning are proving that not to be the case. It’s now simply a matter of time before a robot takes your job. If you want to prepare for that, you need to redefine what work means to you.
Right now, work signifies different things to different people. Economically speaking, work is a means of earning and distributing purchasing power. Personally speaking, work is a major source of identity, purpose, and even self-fulfillment. In a post-work world, work will be none of these things. Without work to give us meaning, we might drive ourselves nuts. In fact, we already do: 20 percent of Americans who’ve been unemployed for at least a year report having depression, according to a Gallup poll. That’s double the rate for working Americans. The Atlantic found even more research, including reports suggesting that “the explanation for rising rates of mortality, mental-health problems, and addiction among poorly-educated, middle-aged people is a shortage of well-paid jobs.” Another study shows that people are often happier at work than in their free time.
Combine that self-torture with our cultural tendencies to demonize people who avoid work and we’ve got a society that has no idea how not to work. “People who avoid work are viewed as parasites and leeches,” John Danaher of National University of Ireland told The Atlantic. “Perhaps as a result of this cultural attitude, for most people, self-esteem and identity are tied up intricately with their job, or lack of job.”
That’s only going to get worse when robots do all the work.
As The Guardian reports, we’re not good at coping with that now:
Labour [sic] markets have coped [with robotic automation] the only way they are able: workers needing jobs have little option but to accept dismally low wages. Bosses shrug and use people to do jobs that could, if necessary, be done by machines. Big retailers and delivery firms feel less pressure to turn their warehouses over to robots when there are long queues of people willing to move boxes around for low pay. Law offices put off plans to invest in sophisticated document scanning and analysis technology because legal assistants are a dime a dozen. People continue to staff checkout counters when machines would often, if not always, be just as good. Ironically, the first symptoms of a dawning era of technological abundance are to be found in the growth of low-wage, low-productivity employment.
James Manyika of The White House Global Development Council agrees, and explained the nuts and bolts of it to us here:
As a society, we need to figure out how to live without work – not just for our sanity and self-esteem, but for the future of our species. There are tons of benefits to being a post-work society. The biggest immediate benefit is that we can reinvest the time we spend at work in pursuits that make us happier. According to research out of Stanford, families that spend more time together experience a higher degree of happiness and life satisfaction than people who don’t. “Researchers [at Harvard] have found that having close relationships is the number-one predictor of happiness, and the social connections that a work-free world might enable could well displace the aimlessness that so many futurists predict,” The Atlantic reports.
Without work, we could finally fix our educational system and make it work for every single child. “The primary purpose of the educational system is to teach people to work. I don’t think anybody would want to put our kids through what we put our kids through now,” psychologist Peter Gray told The Atlantic. Education may become a one-on-one approach that best fits each child, instead of a factory pumping out future workers.
We could also make strides to end income inequality. Without work to define and distribute income, it could be distributed by the state “through the payment of a basic income, for instance, or direct public provision of services such as education, healthcare and housing. Or, perhaps, everyone could be given a capital allotment at birth,” The Guardian speculates.
Best of all, we can find deeper, meaningful ways of contributing to the world around us. “If greater numbers of people were using their leisure to run the country, that would give people a sense of purpose,” Randolph Trumbach of Baruch College told The Atlantic.
But all of those options will only be possible if we figure out how to make them happen. And if we’re going to do that, we need to do it now, before we’re thrust into it. If we don’t, “pushing people out of work will simply redirect the flow of income from workers to firm-owners: the rich will get richer,” as The Guardian puts it.
In a post-work world, that outcome might spell disaster.