Some of the world's top minds weigh in on one of the most divisive questions in tech.
- When it comes to the question of whether AI is an existential threat to the human species, you have Elon Musk in one corner, Steven Pinker in another, and a host of incredible minds somewhere in between.
- In this video, a handful of those great minds—Elon Musk, Steven Pinker, Michio Kaku, Max Tegmark, Luis Perez-Breva, Joscha Bach and Sophia the Robot herself—weigh in on the many nuances of the debate and the degree to which AI is a threat to humanity. Even if it's not a species-level threat, AI will still upend the world as we know it.
- What's your take on this debate? Let us know in the comments!
Researchers are making progress in the effort to develop safe and practical supernumerary robotic limbs.
- Unlike exoskeletons or prostheses, supernumerary robotic limbs function independently of the human skeleton.
- This new example of the technology attaches to the wearer's hips, and can lift 11 pounds.
- The arm currently isn't autonomous. Before AI can control supernumerary limbs, researchers first have to figure out how to make the technology understand and execute the wearer's intent.
Supernumerary robotic limbs<p>When movies depict wearable robots, they usually show exoskeletons ("Iron Man") or prostheses (<a href="https://www.youtube.com/watch?v=cik8cl_n9AE" target="_blank">Luke Skywalker's robotic hand</a>). But supernumerary robotic limbs — like the new robotic arm — seem to be an underrepresented genre, at least in the popular consciousness. This genre describes robotic limbs that function independently of the human skeleton, and which "actively perform tasks similar to or beyond natural human capabilities," as a <a href="https://www.medien.ifi.lmu.de/pubdb/publications/pub/alsada2017amplify/alsada2017amplify.pdf" target="_blank">2017 research paper</a> states.<br></p><p>One hurdle in developing safe and effective supernumerary robotic limbs is figuring out how to attach the technology to the body so that it doesn't interfere with the wearer. For example, a robotic arm could throw someone off balance if it swings its arm too fast, or it could become uncomfortable if it's not attached strategically.</p><p>With the new robotic arm, the researchers attached the device to the wearer's hips with a rigid harness, close to the center of mass. It seems to work well enough, though you can see how someone could be thrown off balance. There's also the fact that it must be physically tethered to a nearby power system.</p>
Robotic limbs and human intent<p>But the biggest obstacle in developing supernumerary robotic limbs lies in artificial intelligence. For a robotic arm (or legs, fingers, etc.) to be practical, the device has to understand and execute what the wearer wants it to do. Here's how <a href="https://www.linkedin.com/in/catherine-v%C3%A9ronneau-7710a3140/?originalSubdomain=ca" target="_blank">Catherine Véronneau</a>, the lead author of a recent paper about the technology, described this problem to <a href="https://spectrum.ieee.org/automaton/robotics/robotics-hardware/robotic-third-arm-can-smash-through-walls" target="_blank">IEEE Spectrum</a>:</p><p style="margin-left: 20px;">"For instance, if the job of a supernumerary pair of arms is opening a door while the user is holding something, the controller should detect when is the right moment to open the door. So, for one particular application, it's feasible. But if we want that SRL to be multifunctional, it requires some AI or intelligent controller to detect what the human wants to do, and how the SRL could be complementary to the user (and act as a coworker). So there are a lot of things to explore in that vast field of 'human intent.'"</p>
The programming giant exits the space due to ethical concerns.
- IBM sent a letter to Congress stating it will no longer research, develop, or sell facial recognition software.
- AI-based facial recognition software remains widely available to law enforcement and private industry.
- Facial recognition software is far from infallible, and often reflects its creators' bias.
In what strikes one as a classic case of shutting the stable door long after the horse has bolted, IBM's CEO Arvind Krishna has announced the company will no longer sell general-purpose facial recognition software, citing ethical concerns, in particular with the technology's potential for use in racial profiling by police. The company will also cease research and development of the technology.
While laudable, this announcement arguably arrives about five years later than it might have, as numerous companies sell AI-based facial recognition software, often to law enforcement. Anyone who uses Facebook or Google also knows all about this technology, as we watch both companies tag friends and associates for us. (Facebook recently settled a lawsuit regarding the unlawful use of facial recognition for $550 million.)
It's worth noting that no one other than IBM has offered to cease developing and selling facial recognition software.
Image source: Tada Images/Shutterstock
Krishna made the announcement in a public letter to Senators Cory Booker (D-NJ) and Kamala Harris (D-CA), and Representatives Karen Bass (D-CA), Hakeem Jeffries (D-NY), and Jerrold Nadler (D-NY). Democrats in Congress are considering legislation to ban facial-recognition software as reported abuses pile up.
IBM's letter states:
"IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies."
Prior to its complete exit from facial recognition, IBM had a mixed record. The company scanned nearly a million Creative Commons images from Flickr without their owners' consent. On the other hand, IBM released a public data set in 2018 in an attempt at transparency.
Image source: Best-Backgrounds/Shutterstock
Privacy issues aside — and there definitely are privacy concerns here — the currently available software is immature and prone to errors. Worse, it often reflects the biases of its programmers, who work for private companies with little regulation or oversight. And since commercial facial recognition software is sold to law enforcement, the frequent identification errors and biases are dangerous: They can ruin the lives of innocent people.
The website Gender Shades offers an enlightening demonstration of the type of inaccuracies to which facial recognition is inclined. The page was put together by Joy Buolamwini and Timnit Gebru in 2018, and doesn't reflect the most recent iterations of the software it tests from three companies: Microsoft, the now-presumably-late IBM Watson, and Face++. Nonetheless, it's telling. To begin with, all three programs performed significantly better at identifying men than women. And when gender identification (simplified to binary designations) was broken down by skin color, the results were genuinely troubling for the bias they reflected.
Amazon's Rekognition facial recognition software is the one most frequently sold to law enforcement, and an ACLU test run in 2018 revealed it also to be pretty bad: It incorrectly identified 28 members of Congress as people in a public database of 28,000 mugshots.
Update, 6/11/2020: Amazon today announced a 12-month moratorium on law-enforcement use of Rekognition, expressing the company's hope that Congress will in the interim enact "stronger regulations to govern the ethical use of facial recognition technology."
In 2019, a federal study by the National Institute of Standards and Technology reported empirical evidence of bias relating to age, gender, and race in the 189 facial recognition algorithms they analyzed. Members of certain groups were up to 100 times more likely to be misidentified than others. This study is ongoing.
Facial rec's poster child
Image source: Gian Cescon/Unsplash
The company most infamously associated with privacy-invading facial recognition software has to be Clearview AI, about which we've previously written. The company scraped over 3 billion images from social media without posters' permission to develop identification software sold to law enforcement agencies.
The ACLU sued Clearview AI in May of 2020 for engaging in "unlawful, privacy-destroying surveillance activities" in violation of Illinois' Biometric Information Privacy Act. The organization wrote to CNN, "Clearview is as free to look at online photos as anyone with an internet connection. But what it can't do is capture our faceprints — uniquely identifying biometrics — from those photos without consent." The ACLU's complaint alleges "In capturing these billions of faceprints and continuing to store them in a massive database, Clearview has failed, and continues to fail, to take the basic steps necessary to ensure that its conduct is lawful."
The longer term
Though it undoubtedly sends a chill down the spine, the onrush of facial recognition technologies — encouraged by the software industry's infatuation with AI — suggests that we can't escape being identified by our faces for long, legislation or not. Advertisers want to know who we are, law enforcement wants to know who we are, and as our lives revolve ever more decisively around social media, many will no doubt welcome technology that automatically brings us together with friends and associates old and new. Concerns about the potential for abuse may wind up taking a back seat to convenience.
It's been an open question for some time whether privacy is even an issue for those who've grown up surrounded by connected devices. These generations don't care so much about privacy because they — realistically — don't expect it, particularly in the U.S. where very little is legally private.
IBM's principled stand may ultimately prove more symbolic than consequential.
Mathematicians studied 100 billion tweets to help computer algorithms better understand our colloquial digital communication.
- A group of mathematicians from the University of Vermont used Twitter to examine how young people intentionally stretch out words in text for digital communication.
- Analyzing the language in roughly 100 billion tweets generated over eight years, the team developed two measurements to assess patterns in the tweets: balance and stretch.
- The words people stretch are not arbitrary but rather have patterned distributions such as what part of the word is stretched or how much it stretches out.
Balance and Stretch<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMzM2NTg3My9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYxMTEwMjM4NH0.P2pcKvbcsKKi8_0RTsDrsIABnxSHybXZYOLxHYT-KZk/img.jpg?width=1245&coordinates=6%2C0%2C6%2C0&height=700" id="df914" class="rm-shortcode" data-rm-shortcode-id="ddc7d5797ec2a42182452a971813111e" data-rm-shortcode-name="rebelmouse-image" />
Photo credit: Dole777 / Unsplash<p>Over the last two decades, social media has provided scientists with a trove of free information about human behavior and language. A group of mathematicians from the University of Vermont used Twitter to examine how young people intentionally stretch out words in text for digital communication. They created a method to essentially quantify the semantic nuances in between stretched words, like "right" vs. "riiiiiight," with the aim of teaching future AI algorithms human digital colloquialisms.</p><p>"Written communication has recently begun encoding new forms of expression, including the emotional emphasis delivered by stretching words out," <a href="https://www.techrepublic.com/article/sayyy-whatttt-researchers-analyze-strange-human-tweets-to-build-better-ai/" target="_blank">said Chris Danforth</a>, professor of Mathematics & Statistics in the Vermont Complex Systems Center and member of the research team behind the study.</p><p>In their study, published last week in the journal PLOS One, the team analyzed the language in roughly 100 billion tweets generated from 2008 to 2016. They developed two measurements to assess patterns in the tweets: balance and stretch. For example, "hahahaha" would be considered a stretched word high on balance, while a term like "wtffffff" has stretch but little balance, since only one letter, "f," contributes to the stretchiness, placing all of the emphasis on the word abbreviated by that letter. </p><p>"With so much communication happening electronically these days, we're all trying to find ways to convey emotion through text. Emojis are helping, but the visual effect of 30 consecutive vowels in a curse word turns a bland profanity into a form of art," Danforth said.</p><p>Interestingly, the use of elongated words was found across languages. For example, "kkkkkkk" signifies laughter in Brazilian Portuguese while "wkwkwkwkwkwk" expresses it in Indonesian, according to the researchers. </p>
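The article doesn't reproduce the study's formal definitions, but the intuition behind the two measurements can be sketched in a few lines of Python. In this hypothetical illustration (the function name and formulas are assumptions of this sketch, not the paper's), stretch counts how many letters a word repeats beyond its distinct characters, and balance uses normalized entropy to measure how evenly that repetition is spread across the letters:

```python
import math
from collections import Counter

def stretch_and_balance(word):
    """Rough sketch: stretch = total repeated letters; balance = how
    evenly the repetition is spread across the word's distinct letters."""
    counts = Counter(word)                    # letter frequencies
    kernel = list(counts)                     # distinct letters, in order seen
    extras = [counts[c] - 1 for c in kernel]  # repeats beyond one occurrence
    stretch = sum(extras)
    if stretch == 0 or len(kernel) == 1:
        # unstretched words get balance 0; a single-letter word is trivially balanced
        return stretch, 1.0 if stretch else 0.0
    # Shannon entropy of the repeat distribution, normalized to [0, 1]
    probs = [e / stretch for e in extras if e > 0]
    entropy = -sum(p * math.log2(p) for p in probs)
    return stretch, entropy / math.log2(len(kernel))

print(stretch_and_balance("hahahaha"))  # high stretch, balance 1.0 (h and a repeat equally)
print(stretch_and_balance("wtffffff"))  # high stretch, balance 0.0 (only f repeats)
```

Under these assumed definitions, "hahahaha" scores maximal balance because its repeats are split evenly between two letters, while "wtffffff" scores zero because a single letter carries all the stretch, matching the contrast the researchers describe.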
Beyond the dictionary<p>Ultimately, this project could help artificial intelligence algorithms understand critical intrinsic meanings contained in the idiosyncratic variations in our communicative text or other linguistic symbols, such as punctuation and emojis.</p> <p>Dictionary definitions hardly reflect the way that we actually communicate with one another digitally. What the researchers found, though, is that the words people stretch out aren't arbitrary. Rather, they have patterned distributions, such as which part of the word is stretched or how much it stretches out. Colloquial digital language is, after all, a system of symbols, and for it to transfer meaning we must all be "in" on the patterns. </p> <p>This research suggests that understanding the stretched words used on social media opens more doors to helping AI better understand our slang. The team also developed tools and methods that could be useful in future studies, for example investigations of intentional mistypings and misspellings. </p> <p>What benefits come from AI algorithms better understanding our digital lingo? For one, it's possible that new tools could be applied to improve natural language processing, search engines, and spam filters. </p> <p>"We were able to comprehensively collect and count stretched words like 'gooooooaaaalll' and 'hahahaha'," the researchers <a href="https://www.sciencedaily.com/releases/2020/05/200527150155.htm" target="_blank">said in a press release</a>, "and map them across the two dimensions of overall stretchiness and balance of stretch, while developing new tools that will also aid in their continued linguistic study, and in other areas, such as language processing, augmenting dictionaries, improving search engines, analyzing the construction of sequences, and more."</p>
We'd like to think that judging people's worth based on the shape of their head is a practice that's behind us.
'Phrenology' has an old-fashioned ring to it. It sounds like it belongs in a history book, filed somewhere between bloodletting and velocipedes.