Here's why coding skills alone won't save you from job automation.
The conventional wisdom developing in the face of job automation is to skill up: learn how to code, become a member of the rising tech economy. Venture capitalist Scott Hartley, however, thinks that may be counterproductive. "Just because you have rote technical ability, you may actually be more susceptible to job automation than someone who has flexible thinking skills," he says. Retraining yourself in tech-based areas is smart, but the smartest way to survive job automation is to develop your soft skills—like improvisation, relational intelligence, and critical thinking. Believe it or not, those 'softer' assets will rule in the digital age, so play to what makes you human. In time, everything else will be done by a robot. Scott Hartley is the author of The Fuzzy and the Techie: Why the Liberal Arts Will Rule the Digital World.
Artificial Intelligence is already outsmarting us at '80s computer games by finding ways to beat games that developers didn't even know were there. Just wait until it figures out how to beat us in ways that matter.
Chances are, unless you happen to be in the Big Think office in Manhattan, that you're watching this on a computer or phone. Chances also are that the piece of machinery you're looking at right now can outsmart you many times over in ways you can barely comprehend. That's the beauty and the danger of AI: it's getting smarter and smarter at a rate we can't keep up with. Max Tegmark relays a great story about a computer learning to play Breakout (i.e. the game where you break bricks with a ball and bounce the ball off a paddle you move at the bottom of the screen). At first, the computer lost every game. But it quickly figured out how to bounce the ball off a certain point on the screen to rack up a crazy number of points. Swap Breakout for, let's say, nuclear warheads or solving world hunger, and we've got a world changer on our hands. Or, in the case of our computers and smartphones, in our hands. Max's latest book is Life 3.0: Being Human in the Age of Artificial Intelligence.
Google's DeepMind artificial intelligence learns what it takes to win, making human-like choices in competitive situations.
As the development of artificial intelligence continues at breakneck speed, questions about whether we understand what we are getting ourselves into persist. One fear is that increasingly intelligent robots will take all our jobs. Another fear is that we will create a world where a superintelligence will one day decide that it has no need for humans. This fear is well-explored in popular culture, through books and films like the Terminator series.
Another possibility, and maybe the one that makes the most sense: since humans are the ones creating them, machine intelligences are likely to behave just like humans, for better or worse. DeepMind, Google's cutting-edge AI company, has shown just that.
Recently, the DeepMind team ran a series of tests to investigate how the AI would respond when faced with certain social dilemmas. In particular, they wanted to find out whether the AI is more likely to cooperate or compete.
One of the tests involved 40 million instances of playing the computer game Gathering, during which DeepMind showed how far it’s willing to go to get what it wants. The game was chosen because it encapsulates aspects of the classic “Prisoner’s Dilemma” from game theory.
Pitting AI-controlled characters (called “agents”) against each other, DeepMind had them compete to gather the most virtual apples. Once the number of available apples got low, the AI agents started to display "highly aggressive" tactics, employing laser beams to knock each other out. They would also steal the opponent’s apples.
Here’s how one of those games played out:
The DeepMind AI agents are in blue and red. The apples are green, while the laser beams are yellow.
The DeepMind team described their test in a blog post this way:
“We let the agents play this game many thousands of times and let them learn how to behave rationally using deep multi-agent reinforcement learning. Rather naturally, when there are enough apples in the environment, the agents learn to peacefully coexist and collect as many apples as they can. However, as the number of apples is reduced, the agents learn that it may be better for them to tag the other agent to give themselves time on their own to collect the scarce apples.”
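The dynamic the researchers describe can be sketched with a toy payoff model. The numbers below are my own illustrative parameters, not DeepMind's actual Gathering environment: tagging costs a time step but removes the rival, which only pays off when apples are scarce.

```python
def episode_reward(my_action, other_action, apples, rate=1.0, steps=10):
    """Stylised per-episode reward: 'collect' shares the apples,
    'tag' spends one step firing the laser, then harvests alone.
    All numbers are illustrative, not DeepMind's parameters."""
    if my_action == "tag" and other_action == "tag":
        return 0.0                                # both knocked out
    if my_action == "tag":
        return min(rate * (steps - 1), apples)    # rival removed, solo harvest
    if other_action == "tag":
        return 0.0                                # we were knocked out
    return min(rate * steps, apples / 2.0)        # peaceful coexistence

def best_response(apples, opponent="collect"):
    """Which action earns more against a peacefully collecting rival?"""
    return max(("collect", "tag"),
               key=lambda a: episode_reward(a, opponent, apples))

print(best_response(apples=100))  # abundant: "collect" (10 vs 9)
print(best_response(apples=8))    # scarce:   "tag"     (8 vs 4)
```

In abundance, tagging just wastes a step; under scarcity, exclusive access to the few remaining apples outweighs that cost, which mirrors the shift the agents learned.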
Interestingly, what appears to have happened is that the AI systems began to develop some forms of human behavior.
“This model... shows that some aspects of human-like behaviour emerge as a product of the environment and learning. Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself,” said Joel Z. Leibo from the DeepMind team to Wired.
Besides the fruit gathering, the AI was also tested via a Wolfpack hunting game. In it, two AI characters in the form of wolves chased a third AI agent, the prey. Here the researchers wanted to see whether the AI characters would choose to cooperate to catch the prey, since they were rewarded for being near the prey together when it was captured.
"The idea is that the prey is dangerous - a lone wolf can overcome it, but is at risk of losing the carcass to scavengers. However, when the two wolves capture the prey together, they can better protect the carcass from scavengers, and hence receive a higher reward,” wrote the researchers in their paper.
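That incentive can be written down as a tiny expected-value model. The carcass size and scavenger probability below are made-up numbers for illustration, not values from the paper:

```python
def expected_wolf_reward(n_wolves, carcass=10.0, scavenge_prob=0.6):
    """Expected reward per wolf at capture time. A lone wolf keeps the
    carcass only if scavengers don't steal it; a pair protects the
    carcass but shares it. Parameters are illustrative, not from the
    DeepMind paper."""
    if n_wolves == 1:
        return (1.0 - scavenge_prob) * carcass   # 0.4 * 10 = 4.0
    return carcass / n_wolves                    # 10 / 2 = 5.0 each

print(expected_wolf_reward(1))  # 4.0
print(expected_wolf_reward(2))  # 5.0
```

Even though each wolf gets only half the carcass, the protected half beats the risky whole, so cooperating is the rational policy for the agents to converge on.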
Indeed, the incentivized cooperation strategy won out in this instance, with the AI choosing to work together.
This is how that test panned out:
The wolves are red, chasing the blue dot (prey), while avoiding grey obstacles.
If you are thinking “Skynet is here”, perhaps the silver lining is that the second test shows how AI’s self-interest can include cooperation rather than the all-out competitiveness of the first test. Unless, of course, it’s cooperating to hunt down humans.
Here's a chart of the results of the game tests, showing a clear increase in aggression during "Gathering":
Movies aside, the researchers are working to figure out how AI can eventually “control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet – all of which depend on our continued cooperation”.
One near-term application where this could be relevant is self-driving cars, which will have to choose the safest routes while keeping the objectives of all the parties involved under consideration.
The warning from the tests is that if the objectives are not balanced out in the programming, the AI might act selfishly, and probably not for everyone’s benefit.
What’s next for the DeepMind team? Joel Leibo wants the AI to go deeper into the motivations behind decision-making:
“Going forward it would be interesting to equip agents with the ability to reason about other agent’s beliefs and goals,” said Leibo to Bloomberg.
They may look odd, but it’s all part of Google’s plan to solve a huge issue in machine learning: recognizing objects in images.
When Google asked its neural network to dream, the machine began generating some pretty wild images.
To be clear, Google’s software engineers didn’t ask a computer to dream, but they did ask its neural network to alter an original photo they fed into it by applying layers. This was all part of their Deep Dream program.
The purpose was to make it better at finding patterns, which computers are none too good at. So, engineers started by “teaching” the neural network to recognize certain objects by giving it 1.2 million images, complete with object classifications the computer could understand.
These classifications allowed Google’s AI to learn to detect the different qualities of certain objects in an image, like a dog and a fork. But Google’s engineers wanted to go one step further, which is where Deep Dream comes in: it allowed the neural network to add those hallucinogenic qualities to images.
Google wanted to push its neural network's detection abilities to the point where it could pick out objects that an image didn't necessarily contain (think of seeing the outline of a dog in the clouds). Deep Dream gave the computer the ability to change the rules and parameters of the images, which in turn allowed Google’s AI to "recognize" objects that weren't really there. So an image might contain a foot, but when the network examined a few pixels of it, it might see the outline of what looked like a dog’s nose.
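The core trick behind this is gradient ascent on the input: nudge the image so that whatever a filter weakly responds to gets amplified. Here is a minimal 1-D sketch in NumPy, with a single fixed filter standing in for the real deep network; the real Deep Dream works on 2-D images through a full convnet, so this is only an illustration of the principle.

```python
import numpy as np

def dream_step(x, kernel, lr=0.1):
    """One Deep-Dream-style update on a 1-D 'image' x.
    Response a = correlate(x, kernel); we ascend the gradient of
    0.5 * sum(a**2) with respect to x, which works out to
    convolve(a, kernel, 'full') (the adjoint of correlation)."""
    a = np.correlate(x, kernel, mode="valid")
    grad = np.convolve(a, kernel, mode="full")
    return x + lr * grad

def response_energy(x, kernel):
    """How strongly the filter 'sees' its pattern in x."""
    return float(np.sum(np.correlate(x, kernel, mode="valid") ** 2))

rng = np.random.default_rng(0)
x = rng.normal(scale=0.1, size=32)   # faint random signal
kernel = np.array([1.0, -1.0])       # a crude "edge detector"

before = response_energy(x, kernel)
for _ in range(50):
    x = dream_step(x, kernel)
print(response_energy(x, kernel) > before)  # True: edges get amplified
```

Each step makes the input look a little more like what the filter detects, which is exactly why faint dog-nose-like pixel patterns bloom into full dog noses.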
So, when researchers asked the neural network what other objects it could see in an image of a mountain, tree, or plant, it came up with these interpretations:
(Photo Credit: Michael Tyka/Google)
“The techniques presented here help us understand and visualize how neural networks are able to carry out difficult classification tasks, improve network architecture, and check what the network has learned during training,” software engineers Alexander Mordvintsev and Christopher Olah, and intern Mike Tyka wrote in a post about Deep Dream. “It also makes us wonder whether neural networks could become a tool for artists—a new way to remix visual concepts—or perhaps even shed a little light on the roots of the creative process in general.”
Just for fun, Google has opened up the tool to the public and you can generate your own Deep Dream art here: deepdreamgenerator.com
Google's DeepMind creates AI that blows away existing speech synthesizers.
Google-owned artificial intelligence company DeepMind presented a deep neural network that generates amazingly human-like speech. Called WaveNet, this AI makes a significant advancement over existing speech synthesizers. What’s more, it can write pretty good classical music.
DeepMind is a British company, previously known for creating machine-learning AI software that beat the world champion of the notoriously intricate game Go. Machine learning allows computer systems to teach themselves and make predictions based on gathered data.
The company claims that its WaveNet creates speech that can mimic any human voice and closes the gap with human speech performance by more than 50%. Google’s 500-person blind test study found people rating WaveNet’s English speech at a 4.21 (5 being realistic human speech), while concatenative speech got a 3.86 and parametric speech an even worse 3.67.
WaveNet also generated speech in Mandarin, which got similar results.
They did this by re-imagining currently used text-to-speech (TTS) processes. The two most common are concatenative TTS, used by Apple’s Siri, which stitches together pre-recorded fragments of speech, and parametric TTS, which sounds even less natural because its speech is generated entirely by computer algorithms.
What’s different about WaveNet is that it can directly model the raw waveform of an audio signal, an extremely complicated task that required a novel neural network. WaveNet learns from voice recordings, then creates speech on its own. This independence also allows the program to generate other kinds of audio, like music.
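Conceptually, WaveNet generates audio one sample at a time: each new sample is drawn from a probability distribution conditioned on the samples before it. Here is a sketch of that generation loop; the stand-in distribution below is invented for illustration, whereas the real network computes it with dilated causal convolutions over the history.

```python
import random

N_LEVELS = 256  # WaveNet quantises each audio sample to 256 levels (mu-law)

def next_sample_probs(history):
    """Stand-in for the neural net: a smoothed random walk centred on
    the previous sample. WaveNet instead derives this distribution
    from the entire sample history."""
    last = history[-1] if history else N_LEVELS // 2
    probs = [0.0] * N_LEVELS
    for offset, p in ((-1, 0.25), (0, 0.5), (1, 0.25)):
        probs[max(0, min(N_LEVELS - 1, last + offset))] += p
    return probs

def generate(n_samples, seed=0):
    """Autoregressive sampling: every sample conditions on all before it."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        weights = next_sample_probs(samples)
        samples.append(rng.choices(range(N_LEVELS), weights=weights)[0])
    return samples

wave = generate(200)
print(all(abs(a - b) <= 1 for a, b in zip(wave, wave[1:])))  # True: smooth
```

Because every output sample depends on all the previous ones, generation is inherently sequential and slow, which is one reason WaveNet demands so much computing power.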
To bolster its claim, DeepMind released some samples comparing WaveNet’s output with samples made by concatenative and parametric TTS. You be the judge.
And now, this is what WaveNet generated:
After it was trained on a dataset of classical piano music, WaveNet produced these intriguing musical creations of its own:
What are the implications of this new tech? While it also means our eventual robotic overlords should be easier to talk to, virtual AI assistants like Siri or Cortana could benefit sooner. Google isn’t promising this is headed straight to such applications, however, as WaveNet requires serious computing power.
This achievement again shows the potential of DeepMind's neural networks, which are being used for fraud and spam detection, handwriting recognition, image search, translation, and other tasks.
In a very Google move, the paper on WaveNet is available on Google Drive here.
Want to know more about DeepMind? Check out this video: