The attack on the Capitol forces us to confront an existential question about privacy.
- The insurrection attempt at the Capitol was captured by thousands of cell phones and security cameras.
- Many protesters have been arrested after their identities were reported to the FBI.
- Surveillance experts warn about the dangers of using facial recognition to monitor protests.
Brad Templeton: Today's Surveillance Society is Beyond Orwellian<span style="display:block;position:relative;padding-top:56.25%;" class="rm-shortcode" data-rm-shortcode-id="06eb4a2ab19b644f5a3c0bf35ac2f42b"><iframe type="lazy-iframe" data-runner-src="https://www.youtube.com/embed/awFrWxfDA30?rel=0" width="100%" height="auto" frameborder="0" scrolling="no" style="position:absolute;top:0;left:0;width:100%;height:100%;"></iframe></span><p>There's the <a href="https://www.thecut.com/2021/01/capitol-rioter-larry-rendall-brock-identified-to-fbi-by-ex.html" target="_blank">ex-wife of a retired Air Force lieutenant colonel</a> whose neck gaiter was pulled down; the <a href="https://www.inquirer.com/news/nation-world/capitol-insurrectionists-losing-jobs-social-media-identification-20210108.html" target="_blank">patriotic cohort of Internet detectives</a> crowd-sourcing information for the FBI; the director of the infamous pseudoscience film, "Plandemic," <a href="https://conspirituality.net/transmissions/plandemics-mikki-willis-joins-praises-violent-capitol-mob/" target="_blank">praising the "patriots" that breached the building</a> moments after he left the siege himself; and that unemployed actor who regularly attended QAnon events leaving the most public trail imaginable, and who is <a href="https://www.bbc.com/news/world-us-canada-55606044" target="_blank">currently in custody</a> facing serious charges.</p><p>Fish in barrels, all of them. What of the remaining thousands? </p><p>This privacy discussion is not new. Arthur Holland Michel, founder and co-director of the Center for the Study of the Drone at Bard College, <a href="https://bigthink.com/technology-innovation/gorgon-stare-surveillance" target="_self">warned Big Think in 2019</a> about the dangers of surveillance technology—specifically, in this case, a camera known as Gorgon Stare. </p><p style="margin-left: 20px;">"Say there is a big public protest. 
With this camera, you can follow thousands of protesters back to their homes. Now you have a list of the home addresses of all the people involved in a political movement. If on their way home you witness them committing some crime—breaking a traffic regulation or frequenting a location that is known to be involved in the drug trade—you can use that surveillance data against them to essentially shut them up. That's why we have laws that prevent the use of surveillance technologies because it is human instinct to abuse them. That's why we need controls."</p><p>Late last year, University of Miami students <a href="https://bigthink.com/technology-innovation/facial-recognition-software" target="_self">pushed back against school administrators</a> using facial recognition software for potentially insidious ends—a protest not limited to that campus. Can you equate students refusing to attend classes during a pandemic with armed insurrectionists attempting to change the results of a democratic election? Not even close. More to the point, however, we should leave political leanings out of the equation when deciding who we think should be monitored. </p>
Protesters enter the U.S. Capitol Building on January 06, 2021 in Washington, DC. Congress held a joint session today to ratify President-elect Joe Biden's 306-232 Electoral College win over President Donald Trump.
Credit: Win McNamee/Getty Images<p>Shortly after the siege, the New Yorker's Ronan Farrow <a href="https://www.newyorker.com/news/news-desk/an-air-force-combat-veteran-breached-the-senate" target="_blank">helped reveal the identity</a> of the aforementioned lieutenant colonel while conservatives <a href="https://www.washingtonpost.com/nation/2021/01/07/antifa-capitol-gaetz-trump-riot/" target="_blank">claimed the riots were actually antifa</a>—a conspiracy theory that's <a href="https://apnews.com/article/virus-outbreak-race-and-ethnicity-suburbs-health-racial-injustice-7edf9027af1878283f3818d96c54f748" target="_blank">been peddled before</a>. Politics simply can't be avoided in this age. Still, Albert Fox Cahn, founder of the Surveillance Technology Oversight Project, doesn't believe the insurrection attempt <a href="https://www.wired.com/story/opinion-the-capitol-attack-doesnt-justify-expanding-surveillance/" target="_blank">justifies an uptick in facial recognition technology</a>.</p><p style="margin-left: 20px;">"We don't need a cutting-edge surveillance dragnet to find the perpetrators of this attack: They tracked themselves. They livestreamed their felonies from the halls of Congress, recording each crime in full HD. We don't need facial recognition, geofences, and cell tower data to find those responsible, we need police officers willing to do their job."</p><p>The New Orleans City Council recently <a href="https://thelensnola.org/2020/12/18/new-orleans-city-council-approves-ban-on-facial-recognition-predictive-policing-and-other-surveillance-tech/" target="_blank">banned similar surveillance technologies</a> due to fears that they would unfairly target minorities. San Francisco was the <a href="https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html" target="_blank" rel="noopener noreferrer">first city to outright ban facial recognition</a> nearly two years ago. 
Cahn's point is that the FBI shouldn't be using AI to cover for the government's failure to protect the Capitol. Besides, the insurrectionists outed themselves on their own social media feeds. </p><p>When Pandora's box cracks open, it's hard to push the monster back in. Naomi Klein detailed the corporate takeover of New Orleans after Hurricane Katrina in "The Shock Doctrine." Real estate brokers, charter school companies, and government agencies didn't cause the flood, but they certainly profited from it. The fear is that companies like Clearview AI, which saw a <a href="https://www.nytimes.com/2021/01/09/technology/facial-recognition-clearview-capitol.html" target="_blank">26 percent spike in usage of its facial recognition service</a> following the attack, will be incentivized, as will police departments, to use such technology however they choose.</p><p>Cahn comes to a similar conclusion: don't expose American citizens to the "anti-democratic technology" known as facial recognition. New Yorkers had to endure subway backpack checks for nearly a decade after 9/11; this slope is even slipperier. </p>As the US braces for <a href="https://www.cnn.com/2021/01/11/politics/fbi-bulletin-armed-protests-state-us-capitol/index.html" target="_blank" rel="noopener noreferrer">further "armed protests"</a> in all 50 states over the coming week, phones need to keep capturing footage. Bystanders need to remain safe, of course. But if last week was any indication, the insurrectionists have difficulty distinguishing between social media and real life. Their feeds should reveal enough.<p>--</p><p><em>Stay in touch with Derek on <a href="http://www.twitter.com/derekberes" target="_blank">Twitter</a> and <a href="https://www.facebook.com/DerekBeresdotcom" target="_blank" rel="noopener noreferrer">Facebook</a>. 
His most recent book is</em> "<em><a href="https://www.amazon.com/gp/product/B08KRVMP2M?pf_rd_r=MDJW43337675SZ0X00FH&pf_rd_p=edaba0ee-c2fe-4124-9f5d-b31d6b1bfbee" target="_blank" rel="noopener noreferrer">Hero's Dose: The Case For Psychedelics in Ritual and Therapy</a>."</em></p>
Here's why you may want to opt out of Amazon's new shared network.
- Speaking at Web Summit 2020, David Limp, the senior vice president of Devices and Services at Amazon, suggested that the company is aiming to build smart devices that would operate in neighborhoods.
- Amazon recently began rolling out Sidewalk, which aims to create a shared, intermediate-range network powered by Amazon devices.
- Sidewalk, which lets nearby devices access your Wi-Fi, raises numerous privacy and security concerns.
Amazon Sidewalk<p>Sidewalk, which began rolling out in the U.S. in November, aims to create a shared network powered by Amazon devices. It's something like a community of Wi-Fi hotspots to which compatible devices can automatically connect.</p><p>Amazon Echo and Ring devices would serve as so-called Sidewalk Bridges that "share a small portion of your internet bandwidth which is pooled together to provide these services to you and your neighbors," Amazon writes.</p><p>Say you have a security camera in your detached garage that's not working because it's just out of your router's range. With Sidewalk, it'd be able to siphon off some of the Wi-Fi connection emitted from your neighbor's devices (if they enable Sidewalk). But that also means you'd be offering up some of your Wi-Fi bandwidth (<a href="https://www.amazon.com/Amazon-Sidewalk/b?ie=UTF8&node=21328123011" target="_blank">500MB monthly maximum</a>) to any compatible devices within about a half-mile.</p><p>Amazon frames Sidewalk as a sort of first step toward creating smart neighborhoods that utilize the so-called internet of things:</p><p style="margin-left: 20px;">"When your device is connected to the Sidewalk network, you can connect to Alexa and offer more experiences for customers like keeping track of the things they care about most, whether it's their puppy, their keys, or their bicycle. Later this year, Tile devices will be supported on the Sidewalk network, enabling customers to keep track of their things by asking Alexa questions like 'Alexa, where are my keys?'"</p>
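The monthly cap Amazon describes can be pictured with a small sketch. This is a hypothetical model, not Amazon's actual implementation (the `SidewalkBridge` class and its accounting logic are assumptions for illustration): a bridge relays neighbors' traffic only while it is opted in and under the 500MB monthly ceiling.

```python
MONTHLY_CAP_BYTES = 500 * 1024 * 1024  # Amazon's stated 500MB monthly maximum


class SidewalkBridge:
    """Toy model of a Sidewalk Bridge's bandwidth accounting.

    Hypothetical sketch only; Amazon's real enforcement logic is not public.
    """

    def __init__(self, enabled: bool = True):
        # Sidewalk is enabled by default on Echo/Ring devices; users may opt out.
        self.enabled = enabled
        self.shared_this_month = 0  # bytes relayed for neighbors' devices

    def relay(self, nbytes: int) -> bool:
        """Relay a neighbor's traffic if opted in and still under the cap."""
        if not self.enabled:
            return False
        if self.shared_this_month + nbytes > MONTHLY_CAP_BYTES:
            return False
        self.shared_this_month += nbytes
        return True


bridge = SidewalkBridge()
assert bridge.relay(100 * 1024 * 1024)      # 100MB: within the cap
assert not bridge.relay(450 * 1024 * 1024)  # would push the total past 500MB
bridge.enabled = False                      # opting out stops all sharing
assert not bridge.relay(1)
```

The point of the sketch is simply that the sharing is bounded and revocable: the cap limits how much of your bandwidth neighbors can consume, and opting out cuts it to zero.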
Amazon<p>When Amazon first announced Sidewalk in 2019, CEO Jeff Bezos <a href="https://www.geekwire.com/2019/amazon-ceo-jeff-bezos-sees-new-opportunity-yard-sidewalk-beyond/" target="_blank">told</a> reporters the system would fill a gap in wireless connectivity:</p><p style="margin-left: 20px;">"It's a completely new way of thinking about intermediate-range wireless. There are a lot of things where Bluetooth is way too short-range, WiFi is way too high power, and so to have something that's still low-power, but that has much longer range is really a gap in the marketplace. [...] People don't even realize yet how important that intermediate range is going to be."</p><p>On the neighborhood level, this could prove convenient for people with compatible devices, as Jonathan Collins, research director at ABI Research, told <a href="https://cities-today.com/what-amazons-new-neighbourhood-network-means-for-smart-cities/" target="_blank">Cities Today</a>:</p><p style="margin-left: 20px;">"Amazon Sidewalk has the potential to deliver low-cost, low-data, citywide mesh connectivity. It could certainly benefit from adoption within smart-city projects. For example, integrating Sidewalk into parking meters would bring low-cost network access for the city, but it will also provide more mesh nodes to strengthen network coverage and capacity."</p>
Security concerns<p>Sidewalk raises no shortage of red flags in terms of security and privacy.<br></p><p style="margin-left: 20px;">"The initial concern is really about what all the devices connected to Sidewalk collect," Forrester security analyst Jeff Pollard <a href="https://www.cnet.com/news/amazon-sidewalk-extends-your-network-but-security-is-already-in-question/" target="_blank">told</a> CNET. "If use cases like home automation or IoT devices make use of this technology, they generate telemetry data. Connected devices -- especially in the home -- give vast amounts of information about your behaviors and activities, which could also go to Amazon with this connectivity."</p>
Illustration from Amazon's Sidewalk security white paper
Amazon<p>Anticipating concerns, Amazon published a <a href="https://m.media-amazon.com/images/G/01/sidewalk/privacy_security_whitepaper_final.pdf" target="_blank">white paper</a> describing how Sidewalk uses three layers of encryption, doesn't reveal users' identities, and makes "it difficult for anyone, including Amazon, to piece together [a user's] activity history over time."</p><p>Difficult, but not impossible. Amazon said it's automatically enabling Sidewalk on Echo and Ring devices, though users can opt out of the system. If you'd rather not serve as a guinea pig in Amazon's new shared network, here's how to disable it:</p>
A new study finds that some people just want privacy.
- Despite its reputation as a tool for criminals, only a small percentage of Tor users were actually going to the dark web.
- The rate was higher in free countries and lower in countries with censored internet access.
- The findings are controversial, and the study's methodology may limit them to broad assumptions.
What do half of those words mean?<p> For those who don't spend all of their time on the internet, a few of these terms might be new to you. We'll go over them first before we continue. If you do know all of these terms, you can skip ahead to the next section.<br> <br> <em>Surface Web:</em> The regular internet that you can find with a search engine. You're on it right now, unless these articles are shared in places we don't know about. <br> <br> <em>Deep Web</em>: The part of the internet not indexed by search engines. This includes things like your email inbox; you can't get there from Google or Bing, but instead have to enter a password to find it from another page. You've probably visited the deep web today, too. </p><p><em>Dark Web</em>: A subsection of the deep web that requires special software to access. Not everything there is bad: there are social media sites, email services, hidden forums, and even puzzle games down there. But this is also where you would find illegal markets and other, far more nefarious, things.</p><p> <em>Tor:</em> A kind of software that allows users to browse the internet in near-total anonymity. It does this by encrypting connection data and scrambling the route a computer takes to connect to a site, thus making it difficult, but not impossible, to find who is using a particular website. The potential value of this to criminals should be evident. <br> <br> While it often gets bad press for how it can be used for illicit purposes, it should be said that Tor was created by the United States government and is often used for entirely banal purposes. The leaders of the Tor Project often remind the public that "normal people" use Tor for everyday internet activities as well.</p><p> As a personal example, I once used it to get around the <a href="https://www.wired.com/1997/06/china-3/" target="_blank">Great Firewall of China</a> when I wanted to get to the regular, uncensored internet.</p>
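The layered encryption behind Tor can be sketched in a few lines. This is a toy illustration of the onion-routing idea only, not Tor's actual protocol: the keystream construction below is deliberately simple and insecure, and the function names are made up. A message is wrapped in one encryption layer per relay; each relay on the path can peel exactly one layer, so no single relay sees both the sender and the destination.

```python
import hashlib


def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream built by repeated hashing (illustration only, NOT secure)."""
    out = b""
    block = key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]


def xor_layer(data: bytes, key: bytes) -> bytes:
    """Apply (or remove) one encryption layer; XOR is its own inverse."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))


def onion_wrap(message: bytes, relay_keys: list[bytes]) -> bytes:
    """Wrap the message in one layer per relay, innermost layer for the exit."""
    for key in reversed(relay_keys):
        message = xor_layer(message, key)
    return message


def onion_peel(cell: bytes, relay_keys: list[bytes]) -> bytes:
    """Each relay along the path removes exactly one layer, in path order."""
    for key in relay_keys:
        cell = xor_layer(cell, key)
    return cell


keys = [b"entry-relay", b"middle-relay", b"exit-relay"]
cell = onion_wrap(b"GET /index.html", keys)
assert cell != b"GET /index.html"               # on the wire it is opaque
assert onion_peel(cell, keys) == b"GET /index.html"
```

The entry relay knows who you are but only sees ciphertext; the exit relay sees the destination but not who asked. That separation is what makes the routing "difficult, but not impossible" to trace.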
Back to the study<p> The study observed the final destination of a random selection of Tor users to determine if they went to surface websites or more hidden areas of the internet after connecting to the Tor network. This was done by monitoring the data from entry points in the Tor network, which would allow an observer to see where someone was going, but not who they were.</p><p> Those going to surface websites were assumed just to be using Tor for anonymity and security, while those going into the dark web were presumed more likely to be using it for illegal reasons. <br> </p><p> Despite the popular conception of Tor as a tool for criminals looking to cover their tracks, only 6.7 percent of these users went to sites defined as the dark web, which were themselves not necessarily devoted to illegal <a href="https://www.sciencealert.com/only-a-small-fraction-of-the-dark-web-is-being-used-for-hidden-activity-study-finds" target="_blank" rel="noopener noreferrer">activity</a>. </p><p> The results were further broken down by country, which revealed another layer of information. The authors noted that in countries deemed "not free" by Freedom House, the rate of possible malicious use drops to 4.8 percent. In countries considered free, it rises to 7.8 percent.</p>
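The study's classification step boils down to a simple partition, which can be sketched as follows. This is a simplification with made-up names, not the researchers' actual instrumentation: any observed destination ending in ".onion" is counted as dark web, everything else as surface web.

```python
def dark_web_fraction(destinations: list[str]) -> float:
    """Fraction of observed destinations that are onion (dark web) addresses.

    Simplified stand-in for the study's classification: '.onion' suffix means
    dark web, everything else means surface web.
    """
    onion = sum(1 for d in destinations if d.endswith(".onion"))
    return onion / len(destinations)


# Hypothetical sample of destinations seen from a Tor entry point
observed = [
    "wikipedia.org",
    "facebook.com",
    "nytimes.com",
    "facebookcorewwwi.onion",  # an onion mirror of a surface site
]
assert dark_web_fraction(observed) == 0.25
```

The sample also shows the methodology's blind spot the Tor Project later raised: the one onion address here mirrors a legitimate surface site, yet the partition would count it as potentially illicit.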
What does this mean for the internet?<iframe width="730" height="430" src="https://www.youtube.com/embed/MBh7K5ooF2s" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><p> The dark web might be a little lighter than previously suggested. While it is true that there is some horrible stuff down there, this study suggests the people getting to it using the Tor network are mostly using it for legal, and perhaps even banal, purposes. This interpretation is additionally supported by the difference in usage across countries judged free and not free. In those countries with censorship, where a variety of tools must be used to get to sites like Facebook or Wikipedia, the percentage of users going towards locations on the dark web was smaller.</p><p>The authors conclude:</p><p style="margin-left: 20px;"> "The Tor anonymity network can be used for both licit and illicit purposes. Our results provide a clear, if probabilistic, estimation of the extent to which users of Tor engage in either form of activity. Generally, users of Tor in politically 'free' countries are significantly more likely to be using the network in likely illicit ways."</p><p> Additionally, they mention that the Tor network's infrastructure is predominantly in free countries, which consequently see higher rates of its use to reach places that could advance illegal activities. This finding may be of interest to policymakers looking to balance the promotion of autonomy and the freedom of information with the goal of preventing crime.</p>
What’s the catch?<iframe width="730" height="430" src="https://www.youtube.com/embed/2UNUMgM9Gwo" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><p> It has been suggested that the internet is the first thing humanity ever created that we don't fully understand. If that is true, it should surprise no one that there are objections to the methods used to study it. <br> <br> The executive director of the Tor Project, Isabela Bagueros, explained their objection to the study's methodology and assumptions to <a href="https://arstechnica.com/gadgets/2020/11/does-tor-provide-more-benefit-or-harm-new-paper-says-it-depends/" target="_blank" rel="noopener noreferrer">Ars Technica</a>:</p><p style="margin-left: 20px;"> <em>"The authors of this research paper have chosen to categorize all .onion sites and all traffic to these sites as "illicit" and all traffic on the "Clear Web" as 'licit.'</em></p><p style="margin-left: 20px;"><em>This assumption is flawed. Many popular websites, tools, and services use onion services to offer privacy and censorship-circumvention benefits to their users. For example, Facebook offers an onion service. Global news organizations, including The New York Times, BBC, Deutsche Welle, Mada Masr, and Buzzfeed, offer onion services.</em></p><p style="margin-left: 20px;"><em>Whistleblowing platforms, filesharing tools, messaging apps, VPNs, browsers, email services, and free software projects also use onion services to offer privacy protections to their users, including Riseup, OnionShare, SecureDrop, GlobaLeaks, ProtonMail, Debian, Mullvad VPN, Ricochet Refresh, Briar, and Qubes OS…</em></p><p style="margin-left: 20px;"><em>Writing off traffic to these widely-used sites and services as "illicit" is a generalization that demonizes people and organizations who choose technology that allows them to protect their privacy and circumvent censorship. 
In a world of increasing surveillance capitalism and internet censorship, online privacy is necessary for many of us to exercise our human rights to freely access information, share our ideas, and communicate with one another. Incorrectly identifying all onion service traffic as "illicit" harms the fight to protect encryption and benefits the powers that be that are trying to weaken or entirely outlaw strong privacy technology."</em><br> </p><p>The critique here is justified; there are legitimate websites hidden behind layers of security which were deemed "illicit" by this study's methods. Many people are just trying to protect their anonymity when using them. However, the study's authors based their assumption on previous studies that demonstrate that these hidden sites are used for illegal activities at a higher rate than other parts of the <a href="https://www.cigionline.org/sites/default/files/no20_0.pdf" target="_blank" rel="noopener noreferrer">internet</a>.</p><p>Until a more rigorous (and more ethically fraught) method of determining what people using the network are doing on these dark websites is utilized, the findings of studies like this will remain general and based on broad assumptions. </p><p>Despite all of this, we can take a few things from this study: most people using Tor to explore the internet aren't using it for evil, those using it in places with limited freedom of information are even less likely to use it for such purposes, and external factors can have significant impacts on how people use a tool such as the internet. <br></p>
A new study explores how wearing a face mask affects the error rates of popular facial recognition algorithms.
- The study measured the error rates of 89 commercial facial recognition technologies as they attempted to match photos of people with and without masks.
- Wearing a mask raised error rates to between 5 and 50 percent among the algorithms.
- The researchers said they expect facial recognition technology to get better at recognizing people wearing masks. But it's not clear that that's what Americans want.
NIST digitally applied mask shapes to photos and tested the performance of face recognition algorithms developed before COVID appeared. Because real-world masks differ, the team came up with variants that included differences in shape, color and nose coverage.
Credit: B. Hayes/NIST<p>But not all masks thwarted the software equally. For example, black masks led to higher error rates than blue masks (though the researchers said they weren't able to completely explore how color affected the software). Error rates were also higher when people wore wide masks (as opposed to rounder ones) that covered most of the nose.</p><p style="margin-left: 20px;">"With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces," said Mei Ngan, a NIST computer scientist and an author of the report. "We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks. Later this summer, we plan to test the accuracy of algorithms that were intentionally developed with masked faces in mind."</p><p>The researchers said they expect facial-recognition software will get better at recognizing people wearing masks.</p><p style="margin-left: 20px;">"But the data we've taken so far underscores one of the ideas common to previous FRVT tests: Individual algorithms perform differently," Ngan said.</p>
American opinion on facial recognition<p>But do Americans even want better facial recognition technology? The answer depends on who's deploying the software. A <a href="https://www.pewresearch.org/internet/2019/09/05/more-than-half-of-u-s-adults-trust-law-enforcement-to-use-facial-recognition-responsibly/" target="_blank">2019 survey from Pew Research Center</a> found that 56 percent of Americans would trust law enforcement to use facial recognition technology responsibly, while 59 percent said it's acceptable for officials to use the software to monitor public spaces for threats.</p><p>Americans are more wary of trusting the private sector with facial recognition. For example, 36 percent of respondents said they'd trust technology companies to use the software responsibly, while only 16 percent said they'd trust advertisers to do the same.</p>
(Photo by Steffi Loos/Getty Images)<p>No matter how Americans feel about facial recognition, it's probably here to stay. After all, the FBI already has a database of more than <a href="https://nymag.com/intelligencer/2019/11/the-future-of-facial-recognition-in-america.html" target="_blank">641 million facial images</a>, many of which simply come from publicly accessible social media posts. And even though cities like San Francisco have banned the technology, police across the country are using it with increasing frequency.</p><p>Georgetown Law School's Center on Privacy and Technology <a href="https://www.perpetuallineup.org/findings/deployment" target="_blank">estimates</a> that "more than one in four of all American state and local law enforcement agencies can run face recognition searches of their own databases, run those searches on another agency's face recognition system, or have the option to access such a system."</p>