Is it ethical for A.I. to have a human voice?

Google's recent AI technology that can mimic humans is raising ethical concerns.

On May 8th, Google unveiled Duplex, a new AI technology able to hold human-sounding phone conversations. The demo drew astonishment across a wide spectrum of the internet as the AI interacted with an unsuspecting human to book a haircut appointment. Duplex adjusted to the information it was given and even inserted a few fillers like “uh” and “hmm” into its speech, coming across quite realistically. Better still, the AI accomplished its task and booked the appointment.

While undeniably impressive, Google’s demonstration also raised a host of concerns about what it means for an AI to sound convincingly human. Is it ethical for an AI to have a human voice and impersonate a human? Or should there be a way for humans to always know they are conversing with an AI?

Duplex is built to sound like a regular person on the other end of the line. Trained to perform well in narrow domains, it is directed at specific tasks, such as scheduling appointments. That is to say, Duplex can’t chat with you about just anything yet, but as part of the Google Assistant, it could be quite useful.

It achieves this naturalness by synthesizing speech modeled on the imperfect speech of humans, full of corrections, omissions, and undue verbosity. As Google explains on its blog, Duplex is at its core a recurrent neural network (RNN) built using the TensorFlow Extended (TFX) machine learning platform.

The network was trained on anonymized phone conversation data. It takes into account factors such as features of the audio, the goal of the call, and the history of the conversation. The speech it generates “controls intonation depending on the circumstance,” as Google’s blog describes.
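
To make that concrete, here is a minimal sketch of the kind of conditioned network Google describes: an RNN that fuses audio features, conversation history, and a task goal into a single prediction. The layer sizes, input names, and overall wiring are illustrative assumptions; Duplex’s actual architecture has not been published.

```python
# Minimal sketch (not Google's code) of an RNN conditioned on audio features,
# the task goal, and conversation history. All dimensions and layer choices
# below are assumptions for illustration only.
import tensorflow as tf

VOCAB_SIZE = 8000   # assumed token vocabulary size
AUDIO_DIM = 128     # assumed dimensionality of per-frame audio features
GOAL_DIM = 16       # assumed size of the task-goal vector

# Inputs: a sequence of audio feature frames, the tokens of the conversation
# so far, and a fixed-size vector encoding the goal (e.g. "book a haircut").
audio = tf.keras.Input(shape=(None, AUDIO_DIM), name="audio_features")
history = tf.keras.Input(shape=(None,), dtype="int32", name="conversation_history")
goal = tf.keras.Input(shape=(GOAL_DIM,), name="task_goal")

# Encode each input stream with its own recurrent encoder.
audio_enc = tf.keras.layers.LSTM(256)(audio)
hist_emb = tf.keras.layers.Embedding(VOCAB_SIZE, 64)(history)
hist_enc = tf.keras.layers.LSTM(256)(hist_emb)

# Fuse audio, history, and goal into one context vector, then predict
# the next response token from it.
context = tf.keras.layers.Concatenate()([audio_enc, hist_enc, goal])
hidden = tf.keras.layers.Dense(256, activation="relu")(context)
next_token = tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax")(hidden)

model = tf.keras.Model(inputs=[audio, history, goal], outputs=next_token)
model.summary()
```

In a real system the text prediction would then be handed to a speech synthesizer that shapes intonation; the sketch stops at the point the article describes, where audio, goal, and history jointly determine the response.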

The AI also varies the timing of its responses based on the words the other speaker uses and the overall situation. Most tasks are completed fully autonomously, without any human intervention.
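
The two humanizing tricks the article mentions, inserted disfluencies and variable response latency, can be illustrated with a toy heuristic. This is not Google’s method; the thresholds, filler list, and probabilities below are invented for the example.

```python
import random
import time

# Toy illustration of human-feeling response timing and filler words.
# Every constant here is an assumption made up for this sketch.
FILLERS = ["uh", "hmm", "um"]

def respond(heard: str, reply: str) -> str:
    words = heard.split()
    # Simple utterances ("OK", "sure") get a fast reply; longer or more
    # complex ones get a longer, more natural-feeling pause.
    delay = 0.2 if len(words) <= 3 else min(0.3 * len(words), 2.0)
    time.sleep(delay)
    # Occasionally prepend a filler so the speech sounds less robotic.
    if random.random() < 0.3:
        reply = f"{random.choice(FILLERS)}, {reply}"
    return reply

print(respond("What time on Tuesday works for you?", "12 pm works."))
```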

Google CEO Sundar Pichai played a recording of the AI’s call to the hair salon during the demonstration.
