Why creating an AI that has free will would be a huge mistake

Giving human rights to a being with unlimited knowledge? Probably not a good idea.

Joanna Bryson argues that people confuse artificial intelligence with human clones, largely thanks to Hollywood films like Blade Runner and Steven Spielberg's A.I., both of which feature very humanoid beings. Strip away the somewhat cuddly ideas those movies have given us about artificial intelligence and you're left with this: hyper-smart machines with virtually no limit to their knowledge.

She posits that granting artificial intelligence the same rights a human has could have dire consequences, because AI has already proven it can pick up negative human characteristics when those characteristics are present in its data. It's not crazy at all, then, to think that an AI could scan all of Twitter in one afternoon and absorb all the negativity we've unloaded there. If AI has already shown that it is not only capable of making the wrong decision but eventually will make the wrong decision when it comes to mining and acting on data, why give it the same powers as us in the first place?
