If its claims are true, Clearview AI has quietly blown right past privacy norms to become the nightmare many have been fearing.
- Recent reporting has revealed the existence of a company that has probably scraped your personal data for its facial recognition database.
- Though social platforms forbid it, the company has nonetheless collected personal data from everywhere it can.
- The company's claims of accuracy and of popularity with law enforcement agencies are a bit murky.
Is this legal? And does it matter?<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMjY0NTY5OS9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYyMzg5ODg2M30.DMdhtKuBNY4_ZOJA7_kNumGSHavaG4CHT47KX2-k51M/img.jpg?width=980" id="75d49" class="rm-shortcode" data-rm-shortcode-id="0b3a34d5ed3f4971dd26579ee9277c5e" data-rm-shortcode-name="rebelmouse-image" />
Image source: Anton Watman/Shutterstock<p>In terms of Federal law protecting one's personal data, the regulations are way behind today's digital realities. The controlling legislation appears to be the anti-hacking <a href="https://www.wired.com/2014/11/hacker-lexicon-computer-fraud-abuse-act/" target="_blank">Computer Fraud and Abuse Act</a> (CFAA) enacted in 1984, well before the internet we know today. Prior to a Ninth Circuit Court of Appeals ruling last year, the law had been used to fight automated data-scraping. However, that ruling determined that this type of scraping doesn't violate the CFAA.</p><p>Social media sites generally include anti-scraping stipulations in their user agreements, but these are hard — and perhaps impossible given programmers' ingenuity — to enforce. Twitter, whose policies explicitly forbid automated scraping for the purposes of constructing a database, <a href="https://www.nytimes.com/2020/01/22/technology/clearview-ai-twitter-letter.html" target="_blank">recently ordered</a> Clearview AI to knock it off. Given last year's CFAA ruling, though, sites have little legal recourse when their policies are violated. In any event, tech is a troublingly incestuous industry — for example, a Facebook board member, <a href="https://bigthink.com/robby-berman/hulk-hogans-suit-against-gawker-justice-or-a-billionaires-revenge" target="_self">Peter Thiel</a>, is one of Clearview AI's primary investors, so how motivated would such people really be to block mining of their data?</p>
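Anti-scraping rules live in user agreements, but sites also publish machine-readable crawl rules in a robots.txt file, which cooperative crawlers are expected to honor and which a determined scraper can simply ignore. As a minimal sketch (the robots.txt contents and bot name below are invented for illustration), here is how a well-behaved crawler would check those rules using Python's standard library:

```python
from urllib.robotparser import RobotFileParser

# A robots.txt of the kind many social platforms publish
# (contents invented for this example):
robots_txt = """\
User-agent: *
Disallow: /users/
Allow: /public/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())  # normally rp.set_url(...) + rp.read()

# can_fetch() reports whether a given agent may crawl a given URL.
rp.can_fetch("AnyBot", "https://example.com/users/profile")  # disallowed
rp.can_fetch("AnyBot", "https://example.com/public/about")   # allowed
```

Nothing enforces this check, which is the article's point: robots.txt and user agreements express a site's wishes, and absent legal teeth like the CFAA, compliance is voluntary.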
Is Clearview AI legit?<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMjY0NTczNC9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYyMDk1NjI0MX0._f15F1S9pw6yLmXnVXVFODHLeFypl_L8RYJi474yrLA/img.jpg?width=980" id="a6e08" class="rm-shortcode" data-rm-shortcode-id="9f6e33faf37a61164bec61e0f497a5cb" data-rm-shortcode-name="rebelmouse-image" />
Image source: Clearview AI, through Atlanta public-records request by New York Times<p>Clearview has taken pains to remain off the public's radar, at least until the <em>New York Times</em> article appeared. Its co-founders long ago scrubbed their own social identities from the web, though one of them, Hoan Ton-That, has since reemerged online.</p><p>In efforts to remain publicly invisible while simultaneously courting law enforcement as customers for Clearview's services, the company has been quietly publishing an array of targeted promotional materials (The <em>Times</em>, BuzzFeed, and <em>WIRED</em> have acquired a number of these materials via Freedom of Information requests and through private individuals). The ads make some extraordinary and questionable claims regarding Clearview's accuracy, successes, and the number of law enforcement agencies with which it has contracts. Not least among questions about the company's integrity, of course, is its extensive scraping of data from sites whose user agreements forbid it.</p><p>According to Clearview, over 600 law enforcement agencies have used its product in the last year, though the company won't supply a list of them. There <em>are</em> a handful of confirmed clients, however, including the Indiana State Police. According to the department's then-captain, the police were able to identify the perpetrator in a shooting case in just 20 minutes thanks to Clearview's ability to find a video the man had posted of himself on social media. The department itself has officially declined to comment on the case to <em>The New York Times</em>. Police departments in Gainesville, Florida, and Atlanta, Georgia, are also among its confirmed customers.</p><p>Clearview has tried to impress potential customers with case histories that apparently aren't true. 
For example, the company sent an email to prospective clients with the title "How a Terrorism Suspect Was Instantly Identified With Clearview," describing how its software cracked a New York subway terrorism case. The NYPD says Clearview had nothing to do with it and that the department used its own facial recognition system. Clearview even posted a video on Vimeo telling the story; the video has since been removed. Clearview has also claimed several other successes that have been denied by the police departments involved. </p><p>There is skepticism regarding Clearview's claims of accuracy, a critical concern given that in this context a false positive can send an innocent person to jail. Clare Garvie, of Georgetown University's Center on Privacy and Technology, <a href="https://www.buzzfeednews.com/article/ryanmac/clearview-ai-nypd-facial-recognition" target="_blank">tells BuzzFeed</a>, "We have no data to suggest this tool is accurate. The larger the database, the larger the risk of misidentification because of the doppelgänger effect. They're talking about a massive database of random people they've found on the internet."</p><p>Clearview has not submitted its results for independent verification, though a FAQ on its site claims that an "independent panel of experts rated Clearview 100% accurate across all demographic groups according to the ACLU's facial recognition accuracy methodology." In addition, the accuracy rating of facial recognition is usually derived from a combination of variables, including the system's ability to detect a face in an image, its correct-match rate, reject rate, non-match rate, and false-match rate. As for the FAQ claim, Garvie notes that "whenever a company just lists one accuracy metric, that is necessarily an incomplete view of the accuracy of their system."</p>
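To see why a single "100% accurate" figure is incomplete, here is a minimal sketch of the separate rates a full facial-recognition evaluation would report. All counts below are invented for illustration; this is not Clearview's methodology or data:

```python
def recognition_metrics(detected, total_images,
                        true_matches, false_matches,
                        true_rejects, false_rejects):
    """Return the distinct rates a full evaluation would report separately."""
    # Probes whose identity really is in the database:
    genuine_attempts = true_matches + false_rejects
    # Probes whose identity is NOT in the database:
    impostor_attempts = false_matches + true_rejects
    return {
        "detection_rate": detected / total_images,          # faces found at all
        "correct_match_rate": true_matches / genuine_attempts,
        "false_match_rate": false_matches / impostor_attempts,  # wrongful IDs
        "reject_rate": true_rejects / impostor_attempts,
        "false_non_match_rate": false_rejects / genuine_attempts,
    }

# Hypothetical evaluation: 1,000 images, half enrolled, half not.
m = recognition_metrics(detected=980, total_images=1000,
                        true_matches=450, false_matches=10,
                        true_rejects=490, false_rejects=50)
```

A system can score well on one of these rates while performing poorly on another, which is Garvie's point: quoting a single metric hides, for instance, the false-match rate that can implicate an innocent person.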
Image source: Andre_Popov/Shutterstock<p>Clearview may or may not be doing everything it claims, and its technology may or may not be as accurate and as widely used by police departments as advertised. Regardless, there can be little doubt that the company, and likely others, are working toward the goal of making reliable facial recognition available to law enforcement and other government agencies (Clearview also reportedly pitches its product to private detectives).</p><p>This has many people concerned, as it represents a major blow to personal privacy. A bipartisan effort in the U.S. Senate has seemingly failed: in November 2019, Democrats introduced their own privacy bill of rights in the <a href="https://www.cantwell.senate.gov/imo/media/doc/COPRA%20Bill%20Text.pdf" target="_blank">Consumer Online Privacy Rights Act</a> (COPRA), while Republicans introduced their <a href="https://aboutblaw.com/NaZ" target="_blank">United States Consumer Data Privacy Act of 2019</a> (CDPA). States have also enacted or are in the process of considering new privacy legislation. Preserving personal privacy without unnecessarily constraining acceptable uses of data collection is complicated, and the law is likely to continue lagging behind technological reality.</p><p>In any event, the exposure of Clearview AI's system is pretty chilling, setting off alarms for anyone hoping to hold onto what's left of their personal privacy, at least for as long as it's possible to do so.<br><br><strong>UPDATE</strong>: The ACLU announced on Thursday that it is suing Clearview in the state of Illinois. <a href="https://www.cnet.com/news/clearview-ai-faces-lawsuit-over-gathering-peoples-images-without-consent/" target="_blank">CNET reports</a> that Illinois is the only state with a biometric privacy law, the Biometric Information Privacy Act, which requires "informed written consent" before companies can use someone's biometrics. 
"Clearview's practices are exactly the threat to privacy that the legislature intended to address, and demonstrate why states across the country should adopt legal protections like the ones in Illinois," the ACLU said in a statement. <br><br>For more on the suit, head over to the <a href="https://www.aclu.org/news/privacy-technology/were-taking-clearview-ai-to-court-to-end-its-privacy-destroying-face-surveillance-activities/" target="_blank">ACLU website</a>.</p>