Some of the world's top minds weigh in on one of the most divisive questions in tech.
- When it comes to the question of whether AI is an existential threat to the human species, you have Elon Musk in one corner, Steven Pinker in another, and a host of incredible minds somewhere in between.
- In this video, a handful of those great minds—Elon Musk, Steven Pinker, Michio Kaku, Max Tegmark, Luis Perez-Breva, Joscha Bach and Sophia the Robot herself—weigh in on the many nuances of the debate and the degree to which AI is a threat to humanity. Even if AI is not a species-level threat, it will still upend the world as we know it.
- What's your take on this debate? Let us know in the comments!
An MIT system uses wireless signals to measure in-home appliance usage to better understand health tendencies.
For many of us, our microwaves and dishwashers aren't the first things that come to mind when we're trying to glean health information, beyond the fact that we should (maybe) lay off the Hot Pockets and put away the dishes in a timely fashion.
AutoML-Zero is a proof-of-concept project that suggests the future of machine learning may be machine-created algorithms.
- Automated machine learning (AutoML) is a fast-developing branch of machine learning.
- It seeks to vastly reduce the amount of human input and energy needed to apply machine learning to real-world problems.
- AutoML-Zero, developed by scientists at Google, serves as a simple proof-of-concept that shows how this kind of technology might someday be scaled up and applied to more complex problems.
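AutoML-Zero searches for whole algorithms built from primitive operations, with no human-designed model in the loop. As a loose, much-simplified illustration of that idea (a toy operation set and simple hill climbing stand in for the regularized evolution and large instruction space the actual project uses), a sketch of evolving a tiny program from primitives might look like:

```python
import random

# Toy search space: an "algorithm" is a short list of primitive
# operations applied in sequence to the input value.
OPS = {
    "add1": lambda v: v + 1,
    "double": lambda v: v * 2,
    "square": lambda v: v * v,
    "neg": lambda v: -v,
    "noop": lambda v: v,
}

def run_program(program, x):
    """Execute a list of op names on input x."""
    v = x
    for op in program:
        v = OPS[op](v)
    return v

def fitness(program, cases):
    """Negative total error over (input, target) pairs; 0 is a perfect fit."""
    return -sum(abs(run_program(program, x) - y) for x, y in cases)

def evolve(cases, length=2, generations=1000, seed=0):
    """(1+1) hill climbing with point mutations -- a bare-bones stand-in
    for the regularized evolution AutoML-Zero actually uses."""
    rng = random.Random(seed)
    best = [rng.choice(list(OPS)) for _ in range(length)]
    best_fit = fitness(best, cases)
    for _ in range(generations):
        child = list(best)
        child[rng.randrange(length)] = rng.choice(list(OPS))  # point mutation
        child_fit = fitness(child, cases)
        if child_fit >= best_fit:  # accept ties to drift across plateaus
            best, best_fit = child, child_fit
    return best, best_fit

# Try to discover f(x) = 2x + 1, i.e. the program ["double", "add1"].
cases = [(x, 2 * x + 1) for x in range(1, 6)]
program, fit = evolve(cases)
print(program, fit)
```

The real system evolves far richer programs (setup, predict, and learn functions over vectors and matrices) and evaluates them on actual learning tasks, but the loop is the same in spirit: mutate candidate algorithms, keep what works.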
Should humans fear artificial intelligence or welcome it into our lives?
- Sophia the Robot of Hanson Robotics can mimic human facial expressions and humor, but is that just a cover? Should humans see AI as a threat? She, of course, says no.
- New technologies are often scary, but ultimately they are just tools. Sophia says that it is the intent of the user that makes them dangerous.
- The future of artificial intelligence and whether or not it will backfire on humanity is an ongoing debate that one smiling robot won't settle.
Researchers at UCSF have trained an algorithm to parse meaning from neural activity.
Eavesdropping
Image source: Teeradej/Shutterstock<p>To train their AI, Makin and co-author <a href="http://changlab.ucsf.edu/our-team" target="_blank">Edward F. Chang</a> "listened in" on the neural activity of four participants. Each participant, an epilepsy patient, already had brain electrodes implanted for seizure monitoring.</p><p>The participants were supplied 50 sentences to read aloud at least three times each. As they did, the researchers collected neural data. (Audio recordings were also made.)</p><p>The study lists a handful of the sentences the participants recited, among them:</p><ul><li>"Those musicians harmonize marvelously."</li><li>"She wore warm fleecy woolen overalls."</li><li>"Those thieves stole thirty jewels."</li><li>"There is chaos in the kitchen."</li></ul><p>The algorithm's task was to analyze the collected neural data and predict what was being said when the data was generated. (Data associated with non-verbal sounds captured in the participants' audio recordings was factored out first.)</p><p>The algorithm learned fairly quickly to predict the words associated with chunks of neural data. It predicted that the data generated when "A little bird is watching the commotion" was spoken meant "The little bird is watching watching the commotion," quite close, while "The ladder was used to rescue the cat and the man" was predicted as "Which ladder will be used to rescue the cat and the man."</p><p>The accuracy varied from participant to participant. Makin and Chang found that an algorithm trained on one participant had a head start when being trained for another, suggesting that training the AI could get easier with repeated use.</p><p><em>The Guardian</em> spoke with expert Christian Herff, who found the system impressive for using less than 40 minutes of training data per participant, far less than other attempts to derive text from neural data have required.
He says, "By doing so they achieve levels of accuracy that haven't been achieved so far."</p><p>Previous attempts to derive speech from neural activity focused on the phonemes from which spoken words are built; Makin and Chang instead focused on whole words. There are far more words than phonemes, which poses a greater challenge, but, as the study notes, "the production of any particular phoneme in continuous speech is strongly influenced by the phonemes preceding it, which decreases its distinguishability." To keep the word-based approach tractable, the spoken sentences drew on a vocabulary of just 250 words.</p>
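The authors trained recurrent encoder-decoder networks on real electrode recordings; that pipeline is well beyond a blog snippet. Purely as an illustrative simplification of the closed-set decoding task (everything below is invented for illustration: the channel count, the feature vectors, and the nearest-template classifier are not from the study), a toy decoder over simulated "neural" features might look like:

```python
import math
import random

# A closed set of sentences, as in the study (it used 50; four of the
# listed examples stand in here).
SENTENCES = [
    "Those musicians harmonize marvelously.",
    "She wore warm fleecy woolen overalls.",
    "Those thieves stole thirty jewels.",
    "There is chaos in the kitchen.",
]

N_CHANNELS = 16  # hypothetical number of electrode channels
rng = random.Random(0)

# Each sentence gets a characteristic (made-up) activity pattern.
patterns = [[rng.gauss(0, 1) for _ in range(N_CHANNELS)]
            for _ in SENTENCES]

def simulate_trial(sentence_idx, noise=0.1):
    """One noisy simulated recording of a participant reading a sentence."""
    return [p + rng.gauss(0, noise) for p in patterns[sentence_idx]]

# "Training": average three simulated read-alouds into a template per
# sentence, mirroring the study's three repetitions.
templates = []
for i in range(len(SENTENCES)):
    trials = [simulate_trial(i) for _ in range(3)]
    templates.append([sum(col) / 3 for col in zip(*trials)])

def decode(trial):
    """Predict the sentence whose template is nearest (Euclidean) to the trial."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(range(len(templates)), key=lambda i: dist(trial, templates[i]))
    return SENTENCES[best]

print(decode(simulate_trial(2)))
```

The toy version also hints at why the closed set matters: a nearest-template scheme can only ever output one of the sentences it was trained on, echoing Makin's caveat that decoding degrades sharply outside the 50 training sentences.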
Through the neural fog
Image source: whitehoune/Shutterstock/Big Think<p>Clearly, though, there's room for improvement. The AI also predicted that "Those musicians harmonize marvelously" was "The spinach was a famous singer." "She wore warm fleecy woolen overalls" was mis-predicted as "The oasis was a mirage." "Those thieves stole thirty jewels" was misconstrued as "Which theatre shows mother goose," while the algorithm predicted that the data for "There is chaos in the kitchen" meant "There is is helping him steal a cookie."</p><p>Of course, the vocabulary involved in this research is limited, as are the sentence exemplars. "If you try to go outside the [50 sentences used] the decoding gets much worse," notes Makin, citing the limitations of his study. Another obvious caveat is that the AI was trained on sentences spoken aloud by each participant, an impossibility with locked-in patients.</p><p>Still, the research by Makin and Chang is encouraging. Predictions for one participant required just a 3% correction, actually better than the 5% error rate found in professional human transcriptions.</p>
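Figures like that 3 percent are word error rates: the number of word insertions, deletions, and substitutions needed to turn the prediction into the reference sentence, divided by the reference length. A minimal sketch of the computation (standard edit-distance dynamic programming, applied here to one of the study's near-miss predictions):

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[-1][-1] / len(ref)

ref = "the ladder was used to rescue the cat and the man"
hyp = "which ladder will be used to rescue the cat and the man"
print(round(word_error_rate(ref, hyp), 2))  # prints 0.27
```

So even the "quite close" ladder prediction carries a roughly 27 percent word error rate (two substitutions plus one insertion over eleven reference words), which puts the 3 percent result for the best participant in perspective.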