The attack on the Capitol forces us to confront an existential question about privacy.
- The insurrection attempt at the Capitol was captured by thousands of cell phones and security cameras.
- Many protestors have been arrested after their identities were reported to the FBI.
- Surveillance experts warn about the dangers of using facial recognition to monitor protests.
If ever there were a reason to wear masks, the insurrection at the Capitol last week would have been it. But many of those present had bought into the anti-mask rhetoric used to distract from the nation's skyrocketing death rate. In fact, the day might yet prove to have been a superspreader event, with at least two members of Congress testing positive after the siege.
Those involved in the attempted coup d'état were not concerned about a virus. Nor, apparently, were they worried about shielding themselves from the tens of thousands of hours of video recorded by thousands of phones. In a strange collision of social media and dark-web chat rooms come to life, separating actual insurrectionists from revolutionary tourists could prove a cumbersome task. One thing is certain: identifying them is not difficult.
Instagram-worthy sieges bring us to a longstanding existential question: should law enforcement be allowed to use AI and cell phone data to prosecute offenders?
Of the many security failures that day, one stood out: the small number of arrests for a breach of outsized magnitude. As the nation gawked at an unemployed actor turned conspiracy shaman behind the speaker's chair in real time, scenes of horrendous violence took hours, even days, to be released. In a game of seemingly futile catch-up, federal agencies opened tip lines to identify insurrectionists who should have easily been in their grasp.
But the public responded.
There's the ex-wife of a retired Air Force lieutenant colonel whose neck gaiter was pulled down; the patriotic cohort of Internet detectives crowd-sourcing information for the FBI; the director of the infamous pseudoscience film, "Plandemic," praising the "patriots" that breached the building moments after he left the siege himself; and that unemployed actor who regularly attended QAnon events leaving the most public trail imaginable, and who is currently in custody facing serious charges.
Fish in barrels, all of them. What of the remaining thousands?
This privacy discussion is not new. Arthur Holland Michel, founder and co-director of the Center for the Study of the Drone at Bard College, warned Big Think in 2019 about the dangers of surveillance technology—specifically, in this case, a camera known as Gorgon Stare.
"Say there is a big public protest. With this camera, you can follow thousands of protesters back to their homes. Now you have a list of the home addresses of all the people involved in a political movement. If on their way home you witness them committing some crime—breaking a traffic regulation or frequenting a location that is known to be involved in the drug trade—you can use that surveillance data against them to essentially shut them up. That's why we have laws that prevent the use of surveillance technologies because it is human instinct to abuse them. That's why we need controls."
Late last year, University of Miami students pushed back against school administrators using facial recognition software for potentially insidious ends—a protest not limited to that campus. Can you equate students refusing to attend in-person classes during a pandemic with armed insurrectionists attempting to overturn the results of a democratic election? Not even close. More to the point, however, we should leave political leanings out of the equation when deciding who we think should be monitored.
Protesters enter the U.S. Capitol Building on January 06, 2021 in Washington, DC. Congress held a joint session today to ratify President-elect Joe Biden's 306-232 Electoral College win over President Donald Trump.
Credit: Win McNamee/Getty Images
Shortly after the siege, the New Yorker's Ronan Farrow helped reveal the identity of the aforementioned lieutenant colonel, even as conservatives claimed the rioters were actually antifa—a conspiracy theory that's been peddled before. Politics simply can't be avoided in this age. Still, Albert Fox Cahn, founder of the Surveillance Technology Oversight Project, doesn't believe the insurrection attempt justifies an uptick in facial recognition technology.
"We don't need a cutting-edge surveillance dragnet to find the perpetrators of this attack: They tracked themselves. They livestreamed their felonies from the halls of Congress, recording each crime in full HD. We don't need facial recognition, geofences, and cell tower data to find those responsible, we need police officers willing to do their job."
The New Orleans City Council recently banned similar surveillance technologies over fears that they would unfairly target minorities. San Francisco was the first city to ban facial recognition outright, nearly two years ago. Cahn's point is that the FBI shouldn't be using AI to cover for the government's failure to protect the Capitol. Besides, the insurrectionists outed themselves on their own social media feeds.
When Pandora's box cracks open, it's hard to push the monster back in. Naomi Klein detailed the corporate takeover of New Orleans after Hurricane Katrina in "The Shock Doctrine." Real estate brokers, charter school companies, and government agencies didn't cause the flood, but they certainly profited from it. The fear is that companies like Clearview AI, which saw a 26 percent spike in usage of its facial recognition service following the attack, will be incentivized to expand, and that police departments will use such technology for whatever ends they choose.
Cahn comes to a similar conclusion: don't expose American citizens to the "anti-democratic technology" known as facial recognition. New Yorkers had to endure subway backpack checks for nearly a decade after 9/11; this slope is even slipperier.

As the US braces for further "armed protests" in all 50 states over the coming week, phones need to keep capturing footage. Bystanders need to remain safe, of course. But if last week was any indication, the insurrectionists have difficulty distinguishing between social media and real life. Their feeds should reveal enough.
Stay in touch with Derek on Twitter and Facebook. His most recent book is "Hero's Dose: The Case For Psychedelics in Ritual and Therapy."
A heated debate is occurring at the University of Miami.
- Students say they were identified with facial recognition technology after a protest at the University of Miami; campus police claim this isn't true.
- Over 60 universities nationwide have banned facial recognition; a few colleges, such as USC, regularly use it.
- Civil rights groups in Miami have called for the University of Miami to have talks on this topic.
Silicon Valley has led the technological revolution since the early seventies. Journalist Don Hoefler coined the term "Silicon Valley USA" to describe the region that came to dominate the growing computer industry. The products imagined and invented in that small area of Northern California have changed the world, yet a deep mistrust of technology has always taken root in the very same locale. Dystopian fervor never lies far from utopian bliss.
Case in point: in May, the San Francisco Board of Supervisors blocked the use of facial recognition technology by police forces by a vote of 8-to-1. To the board, protecting civil liberties mattered more than using this emerging tech to identify criminals. Critics have long feared the potential emergence of a surveillance state. The board agreed.
Arthur Holland Michel has been studying surveillance technology for years. In his book, "Eyes in the Sky," he warns that such tech in the hands of police, even if promoted as protective of citizens, "could engender a style of relentless activity-based intelligence that treats all individuals as unknown unknowns—possible criminals who can only be discerned through persistent surveillance."
There's already creepy precedent. Insurance companies employ drones to spy on claimants suspected of lying about home claims or injury compensation—a practice legal in America. As Michel says of high-tech surveillance, "Everyone I've met who has been involved, even peripherally, with the all-seeing eye believes that they have created a force for good."
Americans don't always agree with that assessment, especially on college campuses. Over 60 universities—Harvard, MIT, and UCLA are on the list—have banned facial recognition. Of the few schools that utilize it, USC lets students enter their rooms via face scans; the software also ensures intruders cannot access buildings.
These are great uses of this technology. You could argue it's how any progress with our devices should work: in service of people. The problem, of course, is that those in power don't tend to stop when they have a little taste of the possibilities.
University of Miami is the latest school to be embroiled in a battle over facial recognition. The ACLU of Florida, joined by 21 other groups, requested that the university hold an open forum so that students can express their concerns. An excerpt from their letter appears below.
This call for action was inspired after a September incident in which students protested returning for in-person classes during the pandemic. The students, concerned about their health, predominantly wore face masks. Still, a number of them were identified, leading to concerns that facial recognition was used. Campus police denied it—the chief even claimed the tech "doesn't work," though that notion has been refuted—yet civil liberties groups are worried that an invasion of privacy occurred.
Lia Holland, a member of the digital rights nonprofit Fight for the Future, wants answers from school administrators.
"UMiami is struggling to answer to their creepy surveillance practices, and clarify whether they are using their own facial recognition system, or Florida's state facial recognition database."
The police chief in question, David Rivero, claims overhead surveillance cameras provided identification at the protest. Yet speaking of another case involving facial-recognition software, he's on the record stating, "We were able to [easily] identify and arrest him. We've [detected] a few bad guys that way."
The letter sent to the Board of Administrators includes the following demands:
- Issue a campus-wide policy banning non-personal use of facial recognition technology, and issue a statement that you have done so.
- Immediately schedule an open forum with students and faculty/staff to discuss community concerns and clarify how student activists who participated in First Amendment protected protest activities were identified by campus police.
- Immediately schedule a meeting with the UMiami Employee Student Alliance (UMESA) to address their COVID-19 safety concerns, the subject of the original protest.
There's no doubt facial-recognition technology has a place in law enforcement. Victims of unsolved crimes are relieved when the perpetrators are brought to justice, regardless of the means. As Michel writes, some police forces already surveil large regions of their districts using the Gorgon Stare, a camera system flown on airplanes. Cameras are ubiquitous, and that's not going to change.

As a society, we need honest discussions regarding the application of surveillance. Nearly every citizen in China has already been logged by facial recognition software, which has led to human rights abuses. While American police claim pure intentions for this tech, good intentions are known to pave the way...well, we know how that ends.
A new interactive documentary "How Normal Am I?" helps reveal the shortcomings of facial recognition technology.
- The website is part of SHERPA, a European Union-funded "project which analyses how AI and big data analytics impact ethics and human rights."
- The interactive documentary uses your webcam to analyze your face, predicting metrics like age, attractiveness, gender, body mass index and life expectancy.
- Despite the shortcomings of facial recognition, there's currently no set of national laws regulating the use of the technology by governments or private companies.
Facial recognition technology has sparked controversy since law enforcement began using it two decades ago. But as facial recognition continues to creep into our public spaces and private devices, there's one fact law enforcement, tech companies and privacy advocates can agree on: the technology isn't perfect.
Sure, facial recognition can identify people with near-perfect accuracy — if the system is fed high-quality images, with good positioning and lighting. But the accuracy rates drop significantly when the systems use photos of people "in the wild," as the Center for Strategic & International Studies recently noted.
Accuracy rates drop even further when facial recognition tries to identify African-American and Asian faces. In January, this deficit ostensibly led to the wrongful arrest of a Detroit man named Robert Julian-Borchak Williams, who was held in jail for 30 hours.
An interactive facial recognition experience
Want to see for yourself how well these systems work? Check out a new interactive mini-documentary called "How Normal Am I?", created by Tijmen Schep, a technology critic and privacy designer.
The documentary is part of SHERPA, a European Union-funded "project which analyses how AI and big data analytics impact ethics and human rights." To experience it, you'll need to grant the website permission to access your webcam, though "no personal data is collected." (You can also access the website and then disconnect your computer from the internet; it should still work fine.)
"How Normal Am I?" uses facial recognition to predict your age, attractiveness, body mass index, life expectancy and gender. Don't get upset if you get a low attractiveness or a high age score: Tilting your head, moving closer to the camera, or just running the program a second time can produce different results.
And that's sort of the point: If facial recognition technology is unreliable on a broad range of measures, to what extent should governments and the private sector be using it? Even if it does become reliable, to what extent should governments be allowed to use it on citizens?
The future of facial recognition technology
In a 2019 Pew Research Center survey, a majority of U.S. respondents said it's acceptable for law enforcement agencies to use facial recognition to scan for threats in public spaces. However, far fewer said it's acceptable for advertisers to use facial recognition to do things like analyze how people respond to commercials in real time.
What could change how facial recognition operates in the U.S. is a set of national laws, which currently don't exist. (Although, some states and cities do regulate the technology.) There are currently more than a dozen bills addressing facial recognition technology, ranging from legislation that would outlaw warrantless usage of facial recognition, to banning federal agencies from using it altogether.
The system is basically facial recognition technology, but for cars.
- Some police departments use automatic license plate readers to track suspects.
- A company called Flock Safety is now allowing police departments to opt in to a national network, which shares data on car movements.
- Privacy advocates are concerned about the potential for errors and abuse.
Earlier this month, a man shot a police officer in Pell City, Alabama, and fled the scene. Police didn't know who the shooter was, but witnesses were able to describe his car and license plate. The suspect was arrested just hours later.
One factor that helped officers make the quick arrest was the Flock Safety system, which operates a network of cameras that track car movements by reading license plates. After plugging in the suspect's tag number, Pell City officers were not only able to see where the car went in their own city, but also in nearby Heflin.
"We can actually look at Heflin's Flock camera," Pell City Police Chief Paul Irwin told the St. Clair Times. "That's a huge advantage."
About 400 U.S. law enforcement agencies currently use Flock Safety cameras. Now, the company wants to give those agencies the option to connect to one network, the "Total Analytics Law Officers Network," or TALON. This opt-in network would potentially allow law enforcement agencies nationwide to track a car as it drives from coast to coast, so long as it passes through any of the roughly 700 American cities that have installed Flock Safety cameras.
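A networked plate lookup of the kind TALON describes can be sketched in a few lines. This is a hypothetical illustration, not Flock Safety's actual system: it assumes each camera logs a (plate, city, timestamp) record to a shared store, and that reconstructing a route is just a filter-and-sort over those records.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlateRead:
    plate: str         # plate text as decoded by the camera
    city: str          # city where the reporting camera is installed
    seen_at: datetime  # time of the sighting

def route_for(plate: str, reads: list[PlateRead]) -> list[tuple[str, datetime]]:
    """Chronological list of (city, time) sightings for one plate."""
    hits = sorted((r for r in reads if r.plate == plate), key=lambda r: r.seen_at)
    return [(r.city, r.seen_at) for r in hits]

# Toy data loosely modeled on the Pell City example above (plate is invented).
reads = [
    PlateRead("1ABC234", "Pell City", datetime(2020, 10, 5, 9, 15)),
    PlateRead("9XYZ888", "Heflin", datetime(2020, 10, 5, 9, 40)),
    PlateRead("1ABC234", "Heflin", datetime(2020, 10, 5, 10, 2)),
]

for city, when in route_for("1ABC234", reads):
    print(city, when.isoformat())
```

The privacy concern follows directly from the simplicity: the same query, run against a nationwide pool of sightings, turns scattered camera hits into a travel history.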
Map tracking the car movements of a murder suspect in Alabama.
Flock Safety says its cameras help police solve more crimes. The company website notes that "70% of crime involves a vehicle" and law enforcement agencies say "a license plate is the best piece of evidence to track leads and solve crimes."
But critics of Flock Safety have raised concerns over the potential for errors and abuse. In August, for example, police in Colorado held a family at gunpoint after a license plate reader flagged a car as stolen. It turned out to be the wrong vehicle.
With TALON, police would also have unprecedented information about the movements of citizens. It's not hard to see how this data could be abused. Think, for example, of the Florida police officer who used the Driver and Vehicle Information Database (D.A.V.I.D.) to get women's contact information so he could ask them out on dates.
Flock Safety says it designed its system under an "ethical framework," noting that it deletes data on car movements every 30 days. The company also says its customers (which include private neighborhoods, in addition to police departments) own 100 percent of the data.
But the company can't guarantee that the public gets a say in whether to install the system. What's more, even if constituents had accepted Flock Safety cameras in their own community, they might not want police to join TALON, which would put their local data on the national radar.
It's currently unclear how many police departments plan to join TALON. But like the advent of facial recognition technologies, the spread of automatic license plate reader technology highlights how mass surveillance isn't always driven by the state.
"We often think of dystopian surveillance as something that's imposed by an authoritarian government," Evan Greer, deputy director of the digital rights group Fight for the Future, told CNET. "It's clearer every day that there is an enormous threat posed by privately owned and managed surveillance regimes, which will be weaponized by the rich and powerful to protect not just their wealth but the exploitative system that helped them amass it."
New documents confirm that the government agency—one of many—has been using a tracking company.
- Documents reveal that the Secret Service used Locate X as part of a social media tracking package.
- The service "allows investigators to draw a digital fence around an address or area, pinpoint mobile devices that were within that area, and see where else those devices have traveled, going back months."
- Other agencies that have used this service include the Immigration and Customs Enforcement, the Customs and Border Protection, the Coast Guard, and the Drug Enforcement Administration.
A popular, recurrent conspiracy theory: all humans will soon be microchipped so the government can track our every movement. This year, that idea was briefly attached to a potential COVID-19 vaccine, though the fear has long been present.
The irony, of course, is that we're already chipped. We just call them phones.
We probably shouldn't be surprised that the U.S. Secret Service contracted with a company for a tracking service, Locate X, to monitor Americans' movements. But it did, and once again we're forced to distinguish between baseless conspiracies and the real-world consequences of government surveillance.
According to internal Secret Service documents, obtained from a Freedom of Information Act request, the agency paid the Virginia-based company, Babel Street, roughly $36,000 to include Locate X in a social media monitoring package that ultimately cost $2 million.
According to the trademark, Locate X provides:
"online, non-downloadable software used to collect, analyze, filter, store, track, manage, convert, interpret, categorize, index, extrapolate, compare, prioritize and produce databases, images, emails, files, documents, information, and data from online sources, websites and social media sites and to create reports in the fields of client-driven commercial, legal and governmental inquiries and investigations."
And that's only the first third of Locate X's stated goods and services.
As early as March, Protocol reported on the dangers of Locate X. The deal with Babel Street "allows investigators to draw a digital fence around an address or area, pinpoint mobile devices that were within that area, and see where else those devices have traveled, going back months."
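Protocol's description maps onto a simple two-step query. The sketch below is a hypothetical reconstruction under stated assumptions (a pool of per-device location pings with coordinates and timestamps), not Babel Street's actual implementation: step one finds devices inside a bounding box during a time window, step two pulls a matched device's full location history.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Ping:
    device_id: str  # advertising/device identifier from commercial location data
    lat: float
    lon: float
    at: datetime

def devices_in_fence(pings, lat_rng, lon_rng, start, end):
    """Step 1: IDs of devices seen inside the bounding box during the window."""
    return {
        p.device_id for p in pings
        if lat_rng[0] <= p.lat <= lat_rng[1]
        and lon_rng[0] <= p.lon <= lon_rng[1]
        and start <= p.at <= end
    }

def history(device_id, pings):
    """Step 2: every recorded location for one device, oldest first."""
    return sorted((p for p in pings if p.device_id == device_id), key=lambda p: p.at)

# Toy data: device "a" enters the fenced area; device "b" never does.
pings = [
    Ping("a", 38.8899, -77.0091, datetime(2021, 1, 6, 14, 0)),   # inside the fence
    Ping("a", 39.2904, -76.6122, datetime(2020, 12, 20, 9, 0)),  # months earlier, elsewhere
    Ping("b", 40.7128, -74.0060, datetime(2021, 1, 6, 14, 0)),   # outside the fence
]

hits = devices_in_fence(pings, (38.88, 38.90), (-77.02, -77.00),
                        datetime(2021, 1, 6, 12, 0), datetime(2021, 1, 6, 18, 0))
print(hits)  # {'a'}
```

The second step is what makes the capability so broad: once a device matches the fence, its entire retained trail, "going back months," is in scope.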
Young players walk through the city centre of Hanover while holding their smartphones and playing "Pokemon Go" on July 15, 2016 in Hanover, Germany.
Photo by Alexander Koerner/Getty Images
The Secret Service is not Babel Street's only client. Others include Immigration and Customs Enforcement, Customs and Border Protection, the Coast Guard, and the Drug Enforcement Administration.
The Secret Service has apparently used Locate X to identify credit card skimming thieves. According to former employees of location-based companies that provide data to Babel Street, "the sale of personal location data from commercial firms to the government is more widespread and has been going on longer than previously known."
In 1985, Neil Postman wrote about Orwell's "1984" prophecy coming and going. In "Amusing Ourselves to Death," the cultural critic noted that Orwell got a lot right, but with "Brave New World," Aldous Huxley may have been the real winner when it comes to understanding the future.
"What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture."
There's no need for a secret government plot to microchip us. Tech companies sold chips that we willingly bought; governments are simply purchasing the rights to locate them. That's not a conspiracy. We're staring straight into the truth every single day.