Here's why coding skills alone won't save you from job automation.
The conventional wisdom developing in the face of job automation is to skill up: learn how to code, become a member of the rising tech economy. Venture capitalist Scott Hartley, however, thinks that may be counterproductive. "Just because you have rote technical ability, you may actually be more susceptible to job automation than someone who has flexible thinking skills," he says. Retraining yourself in tech-based areas is smart, but the smartest way to survive job automation is to develop your soft skills—like improvisation, relational intelligence, and critical thinking. Believe it or not, those 'softer' assets will rule in the digital age, so play to what makes you human. In time, everything else will be done by a robot. Scott Hartley is the author of The Fuzzy and the Techie: Why the Liberal Arts Will Rule the Digital World.
If we could jump 50 years into the future, what would our world look like? Flying cars? Hologram phones? Bill Nye sees two technological paths ahead – and we're standing at the fork between them at this very moment.
Bill Nye is always hesitant to make predictions about the future, but especially now, when America is at such a fork in the road. What happens in the next four years will affect the technology we fund and develop – will we pioneer clean energy systems, or stay bedded down with coal? Will we prioritize oil profits over electric cars? Will the promised tax cuts narrow the wealth gap, or widen it? All these decisions will shape what life looks like 50 years from now. A lot hangs in the balance of the next U.S. election in 2020: will Americans re-elect Trump, or someone like Trump, or will they swing back toward a liberal alternative? There are more questions about the future right now than answers, but Bill Nye is confident that if young people get involved in politics and science, and show up to vote, then life in 2060 and 2070 can be one of greater equality, with technology like we’ve never seen. Bill Nye's most recent book is Unstoppable: Harnessing Science to Change the World.
Tesla's Elon Musk gives a grave warning to those trying to hold back self-driving car technology. According to him, we have it all backwards.
According to Elon Musk, we have it all backwards. It’s not self-driving cars that we should be worried about — it’s human-driven cars. In a recent call with reporters, he expressed his view that skeptics of self-driving vehicles are essentially “killing people.”
Each time a critic argues against self-driving technology, Musk said, he or she stands in the way of safer roadways. Musk expressed frustration that malfunctions of self-driving vehicles draw a disproportionate amount of attention at a time when there are so many fatalities caused by human drivers: 2015 saw the highest number of roadway deaths and injuries in 50 years, with 38,300 fatalities and 4.4 million injuries.
When a man riding in a Tesla Model S outfitted with the company’s semi-autonomous Autopilot system died in a crash (while he was watching a Harry Potter movie), U.S. safety regulators launched an examination of 25,000 Tesla vehicles. Musk points out that this was the first fatality in 130 million Autopilot-driven miles in the U.S., while there’s a fatality caused by a human driver every 94 million miles.
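Musk's comparison comes down to simple per-mile arithmetic. A minimal sketch, using only the figures quoted above (and noting that a single Autopilot fatality is far too small a sample for a confident rate):

```python
# Fatality rates per mile, from the figures quoted above.
AUTOPILOT_MILES_PER_FATALITY = 130_000_000  # one fatality so far
HUMAN_MILES_PER_FATALITY = 94_000_000       # U.S. average for human drivers

autopilot_rate = 1 / AUTOPILOT_MILES_PER_FATALITY  # fatalities per mile
human_rate = 1 / HUMAN_MILES_PER_FATALITY

# On these numbers, human driving is ~1.38x as deadly per mile.
ratio = round(human_rate / autopilot_rate, 2)
print(ratio)  # → 1.38
```

The ratio is just 130/94; the statistical caveat is that one data point tells us very little about the true Autopilot rate.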
By equipping Tesla’s newest Model S and Model X cars with eight cameras, 12 new sensors and upgraded radar, the company hopes to have the vehicles capable of full autonomy by year’s end, "without the need for a single touch" once the car is on its way.
It’s easy to see why people are reluctant to hand over their safety to automated vehicles for which even the optimally ethical rules of the road are tricky to work out. And the technology is as yet unfinished. But it’s also easy to imagine a world in which cars communicate with each other to reliably stay safely out of each other’s way, respond effectively to unexpected hazards, use fuel more efficiently, and even eliminate current nuisances like traffic jams by coordinating their movements with mathematical precision. One Model X has already transported its owner suffering a pulmonary embolism to the emergency room for care — the man credits his Tesla with saving his life.
Musk says self-driving cars are the future and that future is coming. Every day that we cling to a comfortable, familiar system that he views as inherently more dangerous, we’re merely exposing more drivers, passengers, and pedestrians to the risk of death. According to Musk, it’s time to let someone — or something — else drive.
A new study highlights the new ethical dilemmas caused by the rise of robotic and autonomous technology, like self-driving cars.
As robots and robotic contraptions like self-driving cars become increasingly ubiquitous in our lives, we are having to address significant ethical issues that arise.
One area of immediate concern is the moral dilemmas that might be faced by self-driving cars, which may soon be on a road near you. You can program them with all kinds of safety features, but it's easy to imagine scenarios in which the programmed rules such a car operates by would come into conflict with each other.
For example, what if a car had to choose between hitting a pedestrian and hurting its own passengers? Or between two equally dangerous maneuvers in which people would get hurt either way, such as hitting a bus or hitting a motorcyclist?
A new study demonstrates that the public is having a hard time deciding what choice the car should make in such situations. People would prefer to minimize casualties, and would hypothetically rather have the car swerve and harm one driver than hit 10 pedestrians. But the same people would not want to buy and drive such a vehicle: they don't want a car that doesn't treat their own safety as its prime directive.
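The utilitarian preference reported in the study, minimizing total casualties, can be written down as a toy decision rule. This is a hypothetical illustration of the survey scenario, not how any real vehicle is programmed:

```python
def choose_maneuver(options):
    """Pick the option with the fewest expected casualties.

    `options` maps a maneuver name to its expected casualty count.
    A toy utilitarian rule, not a real control policy.
    """
    return min(options, key=options.get)

# The study's scenario: swerve (harm 1 occupant) vs. stay (hit 10 pedestrians).
print(choose_maneuver({"swerve": 1, "stay_course": 10}))  # → swerve
```

The study's tension is precisely that respondents endorse this rule in the abstract but reject it when they are the one occupant the rule might sacrifice.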
"Most people want to live in in a world where cars will minimize casualties," says Iyad Rahwan, the co-author of a paper on the study and an associate professor in the MIT Media Lab. "But everybody want their own car to protect them at all costs. If everybody does that, then we would end up in a tragedy... whereby the cars will not minimize casualties".
Check out this great animation that ponders the questions raised by autonomous cars:
The numbers work out this way: 76% of respondents thought it more moral for a self-driving car to sacrifice one passenger rather than hit 10 pedestrians. But when asked whether they would ride in such a car themselves, that figure dropped by a third. Most respondents also opposed any kind of government regulation of such vehicles, afraid that the government would essentially be choosing who lives and dies in various situations.
The researchers themselves do not have an easy answer. They think that:
"For the time being, there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest."
Still, as self-driving vehicles have great potential to eliminate human error, and with it a large share of car accidents, there is a need to figure this out.
The researchers point out that:
"This is a challenge that should be on the mind of carmakers and regulators alike."
And long deliberation might itself be counterproductive, as it:
"may paradoxically increase casualties by postponing the adoption of a safer technology."
You can read their paper "The social dilemma of autonomous vehicles" in the journal Science. Besides Rahwan, the paper was written by Jean-Francois Bonnefon of the Toulouse School of Economics and Azim Shariff, an assistant professor of psychology at the University of Oregon.
Sci-fi writer Isaac Asimov famously formulated “The Three Laws of Robotics” all the way back in 1942. Their ethical implications still resonate today. The Three Laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Perhaps in anticipation of a Skynet/Terminator-style robotic takeover, Asimov later added a zeroth law that supersedes all the others: “0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
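The laws' strict precedence, with the Zeroth Law overriding all the others, can be illustrated as an ordered rule check. This is a hypothetical sketch (real machine ethics would be nowhere near this tidy, and the flags below are invented for illustration):

```python
# Asimov's laws as an ordered veto list: a proposed action is checked
# against each law in priority order, and the first violated law blocks it.
LAWS = [
    ("Zeroth", "harms_humanity"),
    ("First", "harms_human"),
    ("Second", "disobeys_order"),
    ("Third", "endangers_self"),
]

def permitted(action):
    """Return (allowed, reason). `action` is a dict of boolean flags."""
    for name, flag in LAWS:
        if action.get(flag):
            return False, f"violates the {name} Law"
    return True, "permitted"

print(permitted({"disobeys_order": True}))  # → (False, 'violates the Second Law')
print(permitted({}))                        # → (True, 'permitted')
```

Because the list is checked in priority order, an action that both obeys an order and harms a human is vetoed by the First Law before the Second is ever consulted, which is exactly the conflict structure Asimov built into the laws.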
Of course, while we debate such questions and figure out who is going to program them into our robotic helpers, another challenge looms: how do you prevent hackers, or the robot itself, from changing the code? And who controls the code: the government, the corporation, or the individual?
Other social questions will arise with the further integration of technology into our lives. For example:
Is it cheating if you sleep with a sex robot?
Photo: The 'True Companion' sex robot, Roxxxy, billed as a world first — a life-size robotic girlfriend complete with artificial intelligence and flesh-like synthetic skin — on display at the TrueCompanion.com booth at the AVN Adult Entertainment Expo in Las Vegas, Nevada, January 9, 2010. (Photo by ROBYN BECK/AFP/Getty Images)
What if you are yourself part robot, a human with cybernetic implants or robotic enhancements? What are your responsibilities toward “unaltered” humans? Will a new caste system arise along the scale from human to robot?
Surely, you can come up with more such quandaries, and you can be sure you'll have to ponder them, because we already live in the future we once envisioned.