8 powerful voices share what it's like to be black in America, and why white people must break the racist status quo.
- Black communities have been telling the nation, for more than a century, that they have been targeted, beaten, falsely accused and killed by the police and other institutions meant to protect them.
- They have not been believed until recently, when the rise of camera phones and social media finally enabled them to show and disseminate proof.
- Even after the video of George Floyd's death on May 25, 2020, defensiveness and denial persist among white Americans and institutions—a defensiveness that prevents change to the root of the problem: systemic racism. In this video, eight powerful voices share perspectives on being black in America, and why white inaction and white politeness must end.
The computing giant exits the space due to ethical concerns.
- IBM sent a letter to Congress stating it will no longer research, develop, or sell facial recognition software.
- AI-based facial recognition software remains widely available to law enforcement and private industry.
- Facial recognition software is far from infallible, and often reflects its creators' bias.
In what strikes one as a classic case of shutting the stable door long after the horse has bolted, IBM CEO Arvind Krishna has announced that the company will no longer sell general-purpose facial recognition software, citing ethical concerns, in particular the technology's potential for use in racial profiling by police. The company will also cease research and development of the technology.
While laudable, this announcement arguably arrives about five years later than it might have, as numerous companies sell AI-based facial recognition software, often to law enforcement. Anyone who uses Facebook or Google also knows all about this technology, as we watch both companies tag friends and associates for us. (Facebook recently settled a lawsuit regarding the unlawful use of facial recognition for $550 million.)
It's worth noting that, so far, no other company has offered to stop developing and selling facial recognition software.
Image source: Tada Images/Shutterstock
Krishna made the announcement in a public letter to Senators Cory Booker (D-NJ) and Kamala Harris (D-CA), and Representatives Karen Bass (D-CA), Hakeem Jeffries (D-NY), and Jerrold Nadler (D-NY). Democrats in Congress are considering legislation to ban facial-recognition software as reported abuses pile up.
IBM's letter states:
"IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies."
Prior to its complete exit from facial recognition, IBM had a mixed record. The company scanned nearly a million Creative Commons images from Flickr without their owners' consent. On the other hand, IBM released a public data set in 2018 in an attempt at transparency.
Image source: Best-Backgrounds/Shutterstock
Privacy issues aside — and there definitely are privacy concerns here — the currently available software is immature and prone to errors. Worse, it often reflects the biases of its programmers, who work for private companies with little regulation or oversight. And since commercial facial recognition software is sold to law enforcement, the frequent identification errors and biases are dangerous: They can ruin the lives of innocent people.
The website Gender Shades offers an enlightening demonstration of the kinds of inaccuracies to which facial recognition is prone. The page was put together by Joy Buolamwini and Timnit Gebru in 2018, and doesn't reflect the most recent iterations of the software it tests, which comes from three companies: Microsoft, the now-presumably-late IBM Watson, and Face++. Nonetheless, it's telling. To begin with, all three programs identified men significantly more accurately than women. And when the results of gender identification — simplified to binary designations — were broken down by skin color as well, they were genuinely troubling for the bias they reflected.
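The Gender Shades approach boils down to a simple idea: compute a classifier's accuracy separately for each demographic subgroup rather than reporting one overall number. Here's a minimal sketch of that kind of audit; the group names, labels, and figures below are invented for illustration and are not the study's actual data.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns {group: fraction correct}, making disparities that an
    overall accuracy number would hide visible per subgroup."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Invented audit data: 100 faces per group, gender-classification results.
records = (
    [("lighter-skinned men", "male", "male")] * 99         # correct
    + [("lighter-skinned men", "female", "male")] * 1      # error
    + [("darker-skinned women", "female", "female")] * 65  # correct
    + [("darker-skinned women", "male", "female")] * 35    # errors
)

rates = accuracy_by_group(records)
# Overall accuracy is 82%, but the breakdown shows 99% vs. 65%.
```

The point of the breakdown is that a vendor can truthfully advertise a high aggregate accuracy while the error burden falls almost entirely on one subgroup.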
Amazon's Rekognition facial recognition software is the one most frequently sold to law enforcement, and an ACLU test run in 2018 revealed it, too, to be pretty bad: It incorrectly matched 28 members of Congress to people in a public database of 25,000 mugshots.
Update, 6/11/2020: Amazon today announced a 12-month moratorium on law-enforcement use of Rekognition, expressing the company's hope that Congress will in the interim enact "stronger regulations to govern the ethical use of facial recognition technology."
In 2019, a federal study by the National Institute of Standards and Technology reported empirical evidence of bias relating to age, gender, and race in the 189 facial recognition algorithms it analyzed. Members of certain groups were up to 100 times more likely to be misidentified. This study is ongoing.
Facial rec's poster child
Image source: Gian Cescon/Unsplash
The company most infamously associated with privacy-invading facial recognition software has to be Clearview AI, about which we've previously written. The company scraped over 3 billion images from social media without posters' permission in order to develop software it sells to law enforcement agencies.
The ACLU sued Clearview AI in May 2020 for engaging in "unlawful, privacy-destroying surveillance activities" in violation of Illinois' Biometric Information Privacy Act. The organization wrote to CNN, "Clearview is as free to look at online photos as anyone with an internet connection. But what it can't do is capture our faceprints — uniquely identifying biometrics — from those photos without consent." The ACLU's complaint alleges, "In capturing these billions of faceprints and continuing to store them in a massive database, Clearview has failed, and continues to fail, to take the basic steps necessary to ensure that its conduct is lawful."
The longer term
Though it undoubtedly sends a chill down the spine, the onrush of facial recognition technologies — encouraged by the software industry's infatuation with AI — suggests that we can't escape being identified by our faces for long, legislation or not. Advertisers want to know who we are, law enforcement wants to know who we are, and as our lives revolve ever more decisively around social media, many will no doubt welcome technology that automatically brings us together with friends and associates old and new. Concerns about the potential for abuse may wind up taking a back seat to convenience.
It's been an open question for some time whether privacy is even an issue for those who've grown up surrounded by connected devices. These generations don't care so much about privacy because they — realistically — don't expect it, particularly in the U.S. where very little is legally private.
IBM's principled stand may ultimately prove more symbolic than consequential.
In classical liberal philosophy, individual pursuit of happiness is made possible by a framework of law.
- The rule of law has a philosophical history that predates its popularization by classical liberalism, one that can be traced back to the Greek philosopher Aristotle.
- The classical liberal conception of law draws upon this pre-history but differs slightly: yes, the end goal is the common good, but "goodness" varies by individual.
- In this way of thinking, instead of telling us what will make us happy, law serves as the framework that allows us to pursue our own unique happiness.
An algorithm produced every possible melody. Now its creators want to destroy songwriter copyrights.
A computer coder and a lawyer decide they have a right to speak for all the songwriters that ever lived, those who are alive today, and all those yet to be born.
- A computer coder calculated all of the possible 8-note, 12-beat melodies that can be built from Western music's notes.
- The coder and a lawyer then decided to claim ownership of every song melody ever written.
- The two released all of these melodies into the public domain so that, in theory, no one could ever again be found in court to have plagiarized a song.
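The enumeration the bullets describe is conceptually simple: treat a melody as 12 beats, each filled by one of 8 pitches, and walk through every combination. Below is a minimal sketch under that assumption; the specific 8-pitch set is invented for illustration, and since the full space of 8^12 (about 68.7 billion) melodies is far too large to hold in memory, melodies are generated lazily.

```python
from itertools import islice, product

PITCHES = ["C", "D", "E", "F", "G", "A", "B", "C'"]  # assumed 8-pitch set
BEATS = 12

def all_melodies():
    """Lazily yield every 12-beat melody over the 8 pitches."""
    return product(PITCHES, repeat=BEATS)

# Size of the space: one of 8 pitches at each of 12 beats.
total = len(PITCHES) ** BEATS  # 8**12 = 68,719,476,736 melodies

# Peek at the first few without materializing the whole space.
first_three = list(islice(all_melodies(), 3))
```

Writing the combinations to disk, as the project reportedly did, is then just a matter of iterating the generator and serializing each tuple.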
Why on Earth would they do this?<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMjg0NTk2NS9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYzMzg5Nzc4N30.FE-P-Ikqxj_BGpUTVIjT0lKelQEYOi8tl94NckuRSbQ/img.jpg?width=980" id="da00b" class="rm-shortcode" data-rm-shortcode-id="0d07bb21eae451c2feaa8a6ea35831dd" data-rm-shortcode-name="rebelmouse-image" />
"Uptown Funk" producer Jeff Bhasker and Bruno Mars with their Grammy award
Image source: Robyn Beck/Getty<p>The motivation for Rubin and Riehl's project was putatively the duo's sympathy for famous — often wealthy — music stars who are sued for compensation by the original composer of melodies upon which their hits are based. One might suspect the duo are covertly engaging in a theft of their own, trying to steal a bit of fame from defendants. They even have a <a href="https://youtu.be/sJtm0MoOgiU" target="_blank">TED Talk</a>.</p><p>Successful musicians go through this all the time. Sometimes the claims of plagiarism are valid, sometimes ridiculous, but which is which is for courts to decide, and similarities between songs may be subtle or obvious.</p><p>The problem goes way back. <a href="https://www.rollingstone.com/politics/politics-lists/songs-on-trial-12-landmark-music-copyright-cases-166396/george-harrison-vs-the-chiffons-1976-64089/" target="_blank">George Harrison</a> turned the Chiffons' "He's So Fine" into "My Sweet Lord," while his former bandmate John Lennon <a href="https://wpdh.com/the-beatles-come-together-was-stolen-from-a-chuck-berry-song/" target="_blank">pinched</a> much of Chuck Berry's "You Can't Catch Me" for The Beatles' "Come Together." Berry's music was also, um, "borrowed from" by the Beach Boys: Their breakthrough hit "Surfin' USA" was <a href="https://www.mentalfloss.com/article/93432/when-chuck-berry-became-beach-boy" target="_blank">nearly identical</a> to his "Sweet Little Sixteen." "U Can't Touch This" by M.C. Hammer <a href="https://www.whosampled.com/sample/40/MC-Hammer-U-Can%27t-Touch-This-Rick-James-Super-Freak/" target="_blank">was built over a phrase from</a> Rick James' "Super Freak."</p><p>More recently, Sam Smith <a href="https://www.rollingstone.com/music/music-news/tom-petty-on-sam-smith-settlement-no-hard-feelings-these-things-happen-35541/" target="_blank">was sued</a> over the similarity of "Stay With Me" to "I Won't Back Down," written by Tom Petty and Jeff Lynne. 
And though "Uptown Funk" can make people at a funeral get up and dance, there's no question that there are <a href="https://www.forbes.com/sites/michellefabio/2017/12/30/bruno-mars-and-mark-ronsons-uptown-funk-faces-yet-another-copyright-infringement-suit/#4cb7350270c0" target="_blank">little bits of various other songs in there</a>, and lawsuits have caused changes to the record's credits and royalty fees in recognition of the track's sources.</p><p>The list goes <a href="https://www.rollingstone.com/politics/politics-lists/songs-on-trial-12-landmark-music-copyright-cases-166396/george-harrison-vs-the-chiffons-1976-64089/" target="_blank">on and on</a>. These are all examples of one well-known party suing another, and that's definitely a frequent scenario. However, copyrights also protect unknown songwriters from plagiarism in those rare cases when a songwriter can actually afford to hire counsel to press for restitution. It's therefore pretty weak protection to start with.</p><p><em>Full disclosure: I'm an unknown <a href="http://robbybermanmusic.com" target="_blank">songwriter</a>.</em></p>
Why this happens so much<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMjg0NjA2MS9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYzODA1MzI0NX0.ZnviYz7inGZwgraZWOus9b0aw1g0FNwvJl8wFwSf0x8/img.jpg?width=980" id="d00b7" class="rm-shortcode" data-rm-shortcode-id="a8c9ce135e4e1bd0caa0af53a63f4cb4" data-rm-shortcode-name="rebelmouse-image" />
Image source: Eamonn McCabe/Popperfoto/Getty<p>Creativity — in whatever area — involves a re-synthesis of an artist's influences into something new. All of the songs a songwriter has heard are the ingredients from which new songs are made. Songwriters are usually avid, if not rabid, music fans. The re-synthesis is typically unconscious, and in court defendants are often found liable for "unconscious plagiarism."</p><p>Clearly a fine balance must be struck in assessing plagiarism. A songwriter must be free to mash together and rework everything they've heard, just so long as they're not seen to be simply reusing someone else's composition. <a href="https://youtu.be/aU_zMvaX05Q" target="_blank">Accidents will happen</a>, as will outright theft.</p><p>If it weren't already crushingly hard for a financially struggling artist to derive any compensation for their creations, maybe what Rubin and Riehl have done wouldn't be so outrageous and offensive. As it is, the two have decided — on their own — to deprive every single songwriter with a U.S. copyright of the one meager tool they have to address being plagiarized by others.</p><p>How serious a threat their project poses is unclear. Not only are Rubin and Riehl implicitly claiming ownership of all song melodies yet to be written — a claim likely to be contested — they're also claiming it for every song that<em> exists</em> — which will definitely come as an unwelcome surprise to the actual composers. (And generate more attention for Rubin and Riehl.)</p>
Who really owns a song?<p>Rubin and Riehl's database is aimed squarely at Western music and U.S. copyrights. In this capitalist country, it's assumed that ownership confers financial rights. This isn't true everywhere in the world. Nonetheless, unless American society wants to provide for its songwriters some other way, financial reward remains their only possible compensation, and it's already almost impossibly difficult to acquire.</p><p>To be fair, not all Americans agree with the idea of song ownership. As legendary folk singer Woody Guthrie<a href="https://en.wikipedia.org/wiki/Anti-copyright_notice" target="_blank"> once put it</a> on a song's sheet music:</p><p style="margin-left: 20px;"><em>"This song is Copyrighted in U.S., under Seal of Copyright # 154085, for a period of 28 years, and anybody caught singin it without our permission, will be mighty good friends of ourn, cause we don't give a dern. "</em></p>
Can AI make better predictions about future crimes?
- A new study finds algorithmic predictions of recidivism more accurate than human authorities.
- Researchers are trying to construct tests of such AI that accurately mirror real-world deliberations.
- What level of reliability should we demand of AI in sentencing?
RAIs, NG?<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMjc3MzAwMC9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYxMDQxMjM4N30.ByZIs2U0SqXx4bf6u9LvqCAk-rwUuM33ClPkxuQhEk8/img.jpg?width=980" id="aa986" class="rm-shortcode" data-rm-shortcode-id="30eabf6c00ea57d1257274d16330fd0d" data-rm-shortcode-name="rebelmouse-image" />
Image source: Andrey Suslov/Shutterstock<p>The new study, led by computational social scientist <a href="https://5harad.com" target="_blank">Sharad Goel</a> of Stanford University, is in a sense a reply to a <a href="https://advances.sciencemag.org/content/4/1/eaao5580?ijkey=acb268ff69558c41f4083ae815a5c7a262232a5d&keytype2=tf_ipsecsha" target="_blank">recent work</a> by programming expert Julia Dressel and digital image specialist Hany Farid. In that earlier research, participants attempted to predict whether each of 50 individuals would commit a new crime of any kind within the next two years, based on short descriptions of their case histories. (No images or racial/ethnic information were provided to participants, to avoid skewing the results through related biases.) The average accuracy rate participants achieved was 62%.</p><p>The same case histories were also processed through a widely used risk assessment instrument (RAI) called COMPAS, for "Correctional Offender Management Profiling for Alternative Sanctions." The accuracy of its predictions was about the same: 65%, leading Dressel and Farid to conclude that COMPAS "is no more accurate … than predictions made by people with little or no criminal justice expertise."</p>
Taking a second look<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMjc3MzAwNi9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTY0MDg4MjA1MX0._jJsmArbQFUQquUG5Hnpu8xLaXG0qxSQa1zRx2GcL-E/img.jpg?width=980" id="c6964" class="rm-shortcode" data-rm-shortcode-id="69dbbb9244e916e4e4e2bf1a7efb6189" data-rm-shortcode-name="rebelmouse-image" /><p>Goel felt that two aspects of the testing method used by Dressel and Farid didn't reproduce closely enough the circumstances in which humans are called upon to predict recidivism during sentencing:</p><ol><li>Participants in that study learned how to improve their predictions, much as an algorithm might, as they were provided feedback as to the accuracy of each prognostication. However, as Goel points out, "In justice settings, this feedback is exceedingly rare. Judges may never find out what happens to individuals that they sentence or for whom they set bail."</li><li>Judges and other authorities often have a great deal of information in hand as they make their predictions, not short summaries in which only the most salient information is presented. In the real world, it can be hard to ascertain which information is the most relevant when there's arguably too much of it at hand.</li></ol><p>Both of these factors put participants on a more equal footing with an RAI than they would be in real life, perhaps accounting for the similar levels of accuracy encountered.</p><p>To that end, Goel and his colleagues performed several of their own, slightly different, trials.</p><p>The first experiment closely mirrored Dressel and Farid's — with feedback and short case descriptions — and indeed found that humans and COMPAS performed pretty much equally well. Another experiment asked participants to predict the future occurrence of <em>violent</em> crime, not just any crime, and again the accuracy rates of humans and COMPAS were comparable, though much higher than in the first trial. 
Humans scored 83%, while COMPAS achieved 89%.</p><p>When participant feedback was removed, however, humans fell far behind COMPAS in accuracy, down to around 60% as opposed to COMPAS's 89%, as Goel had hypothesized they might.</p><p>Finally, humans were tested against a different RAI tool called LSI-R. In this case, both had to predict an individual's future using a large amount of case information, similar to what a judge may have to wade through. Again, the RAI outperformed humans in predicting future crimes, 62% to 57%. When asked to predict who would wind up going back to prison for future misdeeds, the results were even worse for participants, who got it right just 58% of the time, as opposed to 74% for LSI-R.</p>
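To make the comparisons above concrete, here is a toy sketch of the two pieces involved: an RAI-style model that maps case features to a probability of reoffending, and the accuracy metric behind the percentages reported in the study. The features and weights are invented for illustration; COMPAS and LSI-R are proprietary tools with far richer inputs, and this is not their actual method.

```python
import math

# Invented weights: more prior convictions raise risk, older age lowers it.
WEIGHTS = {"prior_convictions": 0.35, "age_at_release": -0.05, "bias": -0.5}

def risk_score(prior_convictions, age_at_release):
    """Toy logistic model mapping two case features to a 0-1 risk estimate."""
    z = (WEIGHTS["prior_convictions"] * prior_convictions
         + WEIGHTS["age_at_release"] * age_at_release
         + WEIGHTS["bias"])
    return 1 / (1 + math.exp(-z))

def accuracy(predictions, outcomes):
    """Fraction of binary predictions matching observed outcomes,
    the metric behind figures like '62% vs. 57%' above."""
    return sum(p == o for p, o in zip(predictions, outcomes)) / len(outcomes)
```

A score above some threshold (say, 0.5) becomes a "will reoffend" prediction, and comparing those predictions against two-year follow-up outcomes yields the accuracy rates the studies report.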
Good enough?<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMjc3MzAxNS9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYyNzk5MTc5OH0.kq0yWKlclL3emX-xxqeLxN53v1czSUhKBDEmglY6VZ0/img.jpg?width=980" id="b3c58" class="rm-shortcode" data-rm-shortcode-id="d00f0ba7c26d9fdada23607448ffdc33" data-rm-shortcode-name="rebelmouse-image" />
Image source: klss/Shutterstock<p>Goel concludes, "our results support the claim that algorithmic risk assessments can often outperform human predictions of reoffending." Of course, this isn't the only important question. There's also this: Is AI yet reliable enough to make its prediction count for more than that of a judge, correctional authority, or parole board member?</p><p><a href="https://www.sciencenews.org/article/ai-can-predict-criminals-repeat-offenders-better-than-humans" target="_blank"><em>Science News</em></a> asked Farid, and he said no. When asked how he'd feel about an RAI that could be counted on to be right 80% of the time, he responded, "you've got to ask yourself, if you're wrong 20 percent of the time, are you willing to tolerate that?"</p><p>As AI technology improves, we may one day reach a state in which RAIs are reliably accurate, but no one is claiming we're there yet. For now, then, the use of such technologies in an advisory role for authorities tasked with making sentencing decisions may make sense, but only as one more "voice" to consider.</p>