Amazon is selling real-time facial-recognition technology to police for wide-net surveillance
“People should be free to walk down the street without being watched by the government.” — ACLU and a coalition of civil rights groups
The American Civil Liberties Union of Northern California has obtained documents showing that Amazon has been all but giving away facial-recognition tools to police departments in Washington County, Oregon, and Orlando, Florida, essentially beta testing the tools, which run in the cloud on Amazon Web Services. The package, called Rekognition, has been deployed in some capacity, including alpha and beta testing, since late 2016.
Today, a coalition of civil rights groups signed a joint letter calling on Amazon to stop selling this technology. Why? The letter opens with:
“The undersigned coalition of organizations are dedicated to protecting civil rights and liberties and safeguarding communities.”
In a separate petition, the ACLU states: “Facial recognition is not a neutral technology, no matter how Amazon spins this. It automates mass surveillance, threatens people's freedom to live their private lives outside the government's gaze, and is primed to amplify bias and inequality in the criminal justice system.”
In response, an Amazon spokesperson pointed out that there are some upsides to such facial recognition technology, including finding lost children at amusement parks, as well as locating people who have been abducted.
Of course, the problem is that the same tools can also be used for nefarious purposes: tracking political protesters, members of activist groups such as Black Lives Matter, and immigrants, or simply spying on neighborhoods without any established reasonable suspicion.
Banks of television monitors display a fraction of London's CCTV camera network in the Metropolitan Police's new Special Operations Room on April 20, 2007 in London, England. (Photo by Matt Cardy/Getty Images)
In the era of so-called predictive policing, technology such as this is only as unbiased as the people interpreting the data and the software that processes it.
One police department currently using Rekognition is Washington County, Oregon, which uses it for tasks such as matching jail booking photos against video footage or photos of suspects involved in crimes. With cameras now ubiquitous, it is all but guaranteed that images of anybody out in public will be captured. According to the ACLU, this kind of technology can recognize up to 100 people in a single image.
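To give a sense of how this kind of matching works in practice, here is a minimal sketch of the comparison workflow using the AWS Rekognition CompareFaces API via the boto3 library. The S3 object names are hypothetical placeholders, and the small response-parsing helper is our own; it assumes the documented CompareFaces response shape, where matches arrive under a "FaceMatches" key with per-face "Similarity" scores.

```python
def strong_matches(response, min_similarity=90.0):
    """Pull out similarity scores from a CompareFaces-style response
    that meet or exceed a threshold (our own helper, not an AWS API)."""
    return [m["Similarity"] for m in response.get("FaceMatches", [])
            if m["Similarity"] >= min_similarity]

def compare_booking_photo(source_s3, target_s3):
    """Ask Rekognition how closely faces in two S3-hosted images match.
    Requires AWS credentials; bucket/key values are placeholders."""
    import boto3  # imported here so the rest of the sketch runs without AWS
    client = boto3.client("rekognition")
    return client.compare_faces(
        SourceImage={"S3Object": source_s3},   # e.g. a jail booking photo
        TargetImage={"S3Object": target_s3},   # e.g. a surveillance still
        SimilarityThreshold=80.0,
    )

# The parsing step works the same on a canned response, so it can be
# exercised without AWS credentials:
sample = {"FaceMatches": [{"Similarity": 99.1}, {"Similarity": 85.0}],
          "UnmatchedFaces": []}
print(strong_matches(sample))  # [99.1]
```

The notable design point is how little code this takes: the heavy lifting happens server-side in the cloud, which is exactly why the ACLU argues the barrier to deploying mass surveillance has dropped so sharply.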
However, the question boils over into civil rights territory when, for example, images of a citizen booked on suspicion of a crime are retained by law enforcement even after that person is cleared. More broadly, it amounts to always-on surveillance: as more cameras capture more images, the databases grow exponentially.
Edmond O'Brien (1915 - 1985), as Winston Smith and Jan Sterling (1921 - 2004) as Julia, during the filming of an adaptation of George Orwell's novel, '1984'. (Photo by Harry Todd/Fox Photos/Getty Images)
Matt Cagle of the ACLU of Northern California says he's disturbed by what he sees as a lack of transparency and public engagement, as police and tech companies work together to bring this new tool to American streets.
"Amazon is handing governments a surveillance system primed for abuse," Cagle says. "And that's why we're blowing the whistle right now."
From an article the ACLU posted announcing the joint letter to Amazon:
“With Rekognition, a government can now build a system to automate the identification and tracking of anyone. If police body cameras, for example, were outfitted with facial recognition, devices intended for officer transparency and accountability would further transform into surveillance machines aimed at the public. With this technology, police would be able to determine who attends protests. ICE could seek to continuously monitor immigrants as they embark on new lives. Cities might routinely track their own residents, whether they have reason to suspect criminal activity or not. As with other surveillance technologies, these systems are certain to be disproportionately aimed at minority communities.”
"Deepfakes" and "cheap fakes" are becoming strikingly convincing — even ones generated on freely available apps.
- A writer named Magdalene Visaggio recently used FaceApp and Airbrush to generate convincing portraits of early U.S. presidents.
- "Deepfake" technology has improved drastically in recent years, and some countries are already experiencing how it can be weaponized for political purposes.
- It's currently unknown whether it'll be possible to develop technology that can quickly and accurately determine whether a given video is real or fake.
The future of deepfakes<p>In 2018, Gabon's president Ali Bongo had been out of the country for months receiving medical treatment. As his absence from public view stretched on, rumors began swirling about his condition. Some suggested Bongo might even be dead. In response, Bongo's administration released a video that seemed to show the president addressing the nation.</p><p>But the <a href="https://www.facebook.com/watch/?v=324528215059254" target="_blank">video</a> is strange, appearing choppy and blurry in parts. After political opponents declared the video to be a deepfake, Gabon's military attempted an unsuccessful coup. What's striking about the story is that, to this day, experts in the field of deepfakes can't conclusively verify whether the video was real. </p><p>The uncertainty and confusion generated by deepfakes poses a "global problem," according to a <a href="https://www.brookings.edu/research/is-seeing-still-believing-the-deepfake-challenge-to-truth-in-politics/#cancel" target="_blank">2020 report from The Brookings Institution</a>. In 2018, the U.S. Department of Defense released some of the first tools able to successfully detect deepfake videos. The problem, however, is that deepfake technology keeps improving, meaning forensic approaches may forever be one step behind the most sophisticated forms of deepfakes. </p><p>As the 2020 report noted, even if the private sector or governments create technology to identify deepfakes, they will:</p><p style="margin-left: 20px;">"...operate more slowly than the generation of these fakes, allowing false representations to dominate the media landscape for days or even weeks. "A lie can go halfway around the world before the truth can get its shoes on," warns David Doermann, the director of the Artificial Intelligence Institute at the University of Buffalo. 
And if defensive methods yield results short of certainty, as many will, technology companies will be hesitant to label the likely misrepresentations as fakes."</p>
Context is everything.
The COVID-19 pandemic has introduced a number of new behaviours into daily routines, like physical distancing, mask-wearing and hand sanitizing. Meanwhile, many old behaviours such as attending events, eating out and seeing friends have been put on hold.
A new study looks at how images of coffee's origins affect the perception of its premiumness and quality.
- Images can affect how people perceive the quality of a product.
- In a new study using virtual reality, researchers show that images of farms positively influence subjects' experience of coffee.
- The results provide insights on the psychology and power of marketing.