"Our data should be ours no matter what platforms and apps we use," Yang said.
- In November, Californians will vote on Proposition 24, which aims to expand data privacy laws in the state.
- Proposition 24 aims to strengthen the California Consumer Privacy Act (CCPA), which went into effect this year.
- However, some privacy advocates say Proposition 24 doesn't go far enough, and in some cases actually erodes the CCPA.
Critiques of Prop. 24

Still, some privacy advocates, including the ACLU of California, the Consumer Federation of California and the Electronic Frontier Foundation (EFF), say even these additions to the CCPA don't go far enough.

Calling it a "mixed bag of partial steps backwards and forwards," the EFF [said](https://www.eff.org/deeplinks/2020/07/why-eff-doesnt-support-cal-prop-24) it wouldn't support Proposition 24 because (to name a few reasons) it:

- Would expand "[pay for privacy](https://www.eff.org/deeplinks/2019/02/payoff-californias-data-dividend-must-be-stronger-privacy-laws)" schemes by allowing a company to withhold discounts from consumers in loyalty clubs unless they let it harvest certain data. This could lead to a society of privacy "haves" and "have-nots," the EFF wrote.
- Fails to establish an "opt-in" model of data collection. Under the CCPA, consumers have to opt out of collection, which places the burden of protecting privacy on consumers. "Privacy should be the default," the EFF wrote.
- Would expand companies' power to refuse a consumer's request to delete their data.
As for Yang? It's unclear what the former presidential hopeful, whose campaign was based in part on data privacy, thinks of these critiques. But in a recent interview with [KSRO](https://www.ksro.com/2020/09/02/interview-andrew-yang-on-prop-24/), Yang said the U.S. lags far behind European nations in terms of data privacy laws, and that Proposition 24 would be a huge step towards our data dignity. He added that other states beyond California would likely follow suit if the proposal passes.
The Data Dividend Project

Yang is also spearheading the [Data Dividend Project](https://www.datadividendproject.com/), a "movement dedicated to establishing and enforcing data property rights and to getting you compensated when companies monetize your data." The project, which operates under the laws established by the CCPA, aims to tax tech companies when they use consumer data and to support new data privacy legislation across the country. (Some critics have [questioned the efficacy of the project](https://www.vice.com/en_us/article/935358/andrew-yangs-data-dividend-isnt-radical-its-useless).)

In an op-ed about his data dividend proposal published in the [Los Angeles Times](https://www.latimes.com/opinion/story/2020-06-23/andrew-yang-data-dividend-tech-privacy), Yang wrote:

> "If Congress and other states adopt legislation like the CCPA, millions more would be able to band together with even greater bargaining power to hold tech companies accountable and, ultimately, demand that they share some of the revenue generated from consumers' personal data."
The system is basically facial recognition technology, but for cars.
- Some police departments use automatic license plate readers to track suspects.
- A company called Flock Safety is now allowing police departments to opt in to TALON, a national network that shares data on car movements.
- Privacy advocates are concerned about the potential for errors and abuse.
(Image: Map tracking the car movements of a murder suspect in Alabama. Credit: Flock Safety)

Flock Safety says its cameras help police solve more crimes. The company [website](https://www.flocksafety.com/flock-safety-resources) notes that "70% of crime involves a vehicle" and law enforcement agencies say "a license plate is the best piece of evidence to track leads and solve crimes."

But critics of Flock Safety have raised concerns over the potential for errors and abuse. In August, for example, [police in Colorado held a family at gunpoint](https://gizmodo.com/cops-terrorize-black-family-but-blame-license-plate-rea-1844602731) after a license plate reader flagged a car as stolen. It turned out to be the wrong vehicle (a failure mode sketched below).

With TALON, police would also have unprecedented information about the movements of citizens. It's not hard to see how this data could be abused. Think, for example, of the Florida police officer who used the Driver and Vehicle Information Database (D.A.V.I.D.) to get women's contact information so he could [ask them out on dates](https://www.mercurynews.com/2019/03/11/police-in-florida-allege-officer-used-database-to-gets-dates/).
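The Colorado incident reflects a failure mode inherent to exact-match hotlist lookups: a single character misread by the camera's OCR can map an innocent vehicle's plate onto a stolen one. The Python sketch below is purely illustrative (it is not Flock Safety's code; the plate values and the `check_plate` helper are invented for demonstration):

```python
# Illustrative sketch only: not Flock Safety's actual system.
# Shows how an exact-match hotlist lookup can turn a single OCR
# misread into a false "stolen vehicle" alert.

HOTLIST = {"8ABC123", "4XYZ987"}  # plates reported stolen (made-up values)

def check_plate(ocr_reading: str) -> bool:
    """Flag a vehicle if the OCR'd plate string appears on the hotlist."""
    return ocr_reading in HOTLIST

true_plate = "8ABC128"   # the plate actually on the car (not stolen)
ocr_reading = "8ABC123"  # the camera misreads the final '8' as a '3'

print(check_plate(true_plate))   # False: the real plate is clean
print(check_plate(ocr_reading))  # True: the misread triggers a false alert
```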
It's currently unclear how many police departments plan to join TALON. But like the advent of facial recognition technologies, the spread of automatic license plate reader technology highlights how mass surveillance isn't always driven by the state.

> "We often think of dystopian surveillance as something that's imposed by an authoritarian government," Evan Greer, deputy director of the digital rights group Fight for the Future, told [CNET](https://www.cnet.com/news/license-plate-tracking-for-police-set-to-go-nationwide/). "It's clearer every day that there is an enormous threat posed by privately owned and managed surveillance regimes, which will be weaponized by the rich and powerful to protect not just their wealth but the exploitative system that helped them amass it."
A report from the New York Times raises questions over how the teletherapy startup Talkspace handles user data.
- In the report, several former employees said that "individual users' anonymized conversations were routinely reviewed and mined for insights."
- Talkspace denied using user data for marketing purposes, though it acknowledged that it looks at client transcripts to improve its services.
- It's still unclear whether teletherapy is as effective as traditional therapy.
Former employees also questioned the legitimacy of certain interventions by the company into client-therapist interactions. For example, after one therapist sent a client a link to an online anxiety worksheet, a company representative instructed her to try to keep clients inside the app.

> "I was like, 'How do you know I did that?'" Karissa Brennan, a therapist who worked with Talkspace from 2015 to 2017, told the Times. "They said it was private, but it wasn't."

Other former employees said the company would pay special attention to its "enterprise partner" clients, who worked at companies like Google. One therapist said Talkspace contacted her for taking too long to respond to Google clients.

Talkspace responded to the Times with a [Medium post](https://medium.com/@founders_22883/talkspace-founders-respond-to-a-new-york-times-article-78d6f5c45c59), which claimed the Times report contained false and "uninformed assertions."

> "Talkspace is a HIPAA/HITECH and SOC2 approved platform, audited annually by external vendors, and has deployed additional technologies to keep its data safe, exceeding all existing regulatory requirements," the post states.
HIPAA concerns

However, if the claims in the Times report are true, Talkspace may have violated the [Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule](https://www.hhs.gov/sites/default/files//hipaa-privacy-rule-and-sharing-info-related-to-mental-health.pdf), which prohibits providers from disclosing patients' medical data for marketing purposes unless the patient gives [authorization](https://www.hhs.gov/hipaa/for-individuals/guidance-materials-for-consumers/index.html).

> "If it is true that Talkspace used information from private therapy sessions for marketing purposes, that is a clear violation of trust with their customers," Hayley Tsukayama, legislative activist at the Electronic Frontier Foundation, told [Salon](https://www.salon.com/2020/08/10/therapy-app-talkspace-allegedly-data-mined-patients-conversations-with-therapists/). "All companies should be very clear with their customers about how they use personal information, make sure that they don't use information in ways that consumers don't expect, and give them the opportunity to withdraw consent for those purposes on an ongoing basis. Talkspace trades on its trustworthiness and mentions privacy frequently in its ad campaigns. Its actions should be in line with its promises."

(It's also worth noting that Talkspace recently threatened legal action against a security researcher who wrote a blog post about a bug he had found that allowed him to get a year's subscription for free. A report from [TechCrunch](https://techcrunch.com/2020/03/09/talkspace-cease-desist/) notes that Talkspace rejected the findings, and that the company does not offer a way for researchers to submit potential security bugs.)

Beyond privacy concerns, the report also raises questions about the efficacy of teletherapy, especially within a corporate model.

> "The app-ification of mental health care has real problems," Hannah Zeavin, a lecturer at the University of California and author of an upcoming book on teletherapy, told the [Times](https://www.nytimes.com/2020/08/07/technology/talkspace.html). "These are corporate platforms first. And they offer therapy second."

The main problem with judging the efficacy of teletherapy is the lack of solid research: it's too new to comprehensively compare with in-person therapy. Still, some [studies](https://www.theraplatform.com/blog/284/is-telemental-health-effective-how-does-it-measure-up) suggest it could be useful for at-risk populations, or for people in the wake of a disaster.
'It's just not therapy'

But others remain skeptical.

> "Maybe [teletherapy] products and services are helpful to certain people," [said](https://www.nytimes.com/2020/08/07/technology/talkspace.html) Linda Michaels, a founder of the Psychotherapy Action Network, a therapist advocacy group. "But it's just not therapy."

Proper therapy or not, it's worth considering how platforms like Talkspace use, and possibly even depend on, user data. In a 2019 [opinion piece published in the Times](https://www.nytimes.com/2019/10/02/opinion/health-care-data-privacy.html), Talkspace co-founder Oren Frank wrote:

> "The vast amount of information each of us possesses is far too important to be left under the control of just a few entities — private or public. We can think of our health care data as a contribution to the public good and equalize its availability to scientists and researchers across disciplines, like open source code. From there, imagine better predictive models that will in turn allow better and earlier diagnoses, and eventually better treatments.
>
> Your health care data could help people who are, at least in some medical aspects, very similar to you. It might even save their lives. The right thing to do with your data is not to guard it, but to share it."

Would you?
A new study explores how wearing a face mask affects the error rates of popular facial recognition algorithms.
- The study measured the error rates of 89 commercial facial recognition algorithms as they attempted to match photos of people with and without masks.
- Wearing a mask pushed error rates to between 5 and 50 percent among the algorithms.
- The researchers said they expect facial recognition technology to get better at recognizing people wearing masks. But it's not clear that's what Americans want.
(Image: NIST digitally applied mask shapes to photos and tested the performance of face recognition algorithms developed before COVID appeared. Because real-world masks differ, the team came up with variants that included differences in shape, color and nose coverage. Credit: B. Hayes/NIST)

But not all masks thwarted the software equally. For example, black masks led to higher error rates than blue masks (though the researchers said they weren't able to completely explore how color affected the software). Error rates were also higher when people wore wide masks (as opposed to rounder ones) that covered most of the nose.

> "With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces," said Mei Ngan, a NIST computer scientist and an author of the report. "We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks. Later this summer, we plan to test the accuracy of algorithms that were intentionally developed with masked faces in mind."

The researchers said they expect facial recognition software will get better at recognizing people wearing masks.

> "But the data we've taken so far underscores one of the ideas common to previous FRVT tests: Individual algorithms perform differently," Ngan said.
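To make "error rate" concrete: the headline figure in tests like these is a false non-match rate (FNMR), the share of same-person photo pairs an algorithm fails to match once a mask is applied. Below is a minimal Python sketch of that calculation; the random stand-in scorer and file names are hypothetical, not NIST's actual FRVT tooling:

```python
import random

def fnmr(genuine_pairs, similarity, threshold):
    """False non-match rate: the fraction of same-person (masked,
    unmasked) photo pairs scored below the match threshold."""
    misses = sum(1 for masked, unmasked in genuine_pairs
                 if similarity(masked, unmasked) < threshold)
    return misses / len(genuine_pairs)

# Stand-in "model": uniform random scores. A real evaluation would call
# an actual face recognition algorithm on each image pair.
random.seed(0)
pairs = [(f"masked_{i}.png", f"unmasked_{i}.png") for i in range(10_000)]
score = lambda a, b: random.random()

# With a 0.05 threshold and uniform scores, roughly 5% of genuine pairs
# are missed: the low end of the 5-50 percent range NIST observed.
print(fnmr(pairs, score, threshold=0.05))
```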
American opinion on facial recognition

But do Americans even want better facial recognition technology? The answer depends on who's deploying the software. A [2019 survey from Pew Research Center](https://www.pewresearch.org/internet/2019/09/05/more-than-half-of-u-s-adults-trust-law-enforcement-to-use-facial-recognition-responsibly/) found that 56 percent of Americans would trust law enforcement to use facial recognition technology responsibly, while 59 percent said it's acceptable for officials to use the software to monitor public spaces for threats.

Americans are more wary of trusting the private sector with facial recognition: 36 percent of respondents said they'd trust technology companies to use the software responsibly, while only 16 percent said the same of advertisers.
No matter how Americans feel about facial recognition, it's probably here to stay. After all, the FBI already has a database of more than [641 million facial images](https://nymag.com/intelligencer/2019/11/the-future-of-facial-recognition-in-america.html), many of which simply come from publicly accessible social media posts. And even though cities like San Francisco have banned the technology, police across the country are using it with increasing frequency.

Georgetown Law's Center on Privacy & Technology [estimates](https://www.perpetuallineup.org/findings/deployment) that "more than one in four of all American state and local law enforcement agencies can run face recognition searches of their own databases, run those searches on another agency's face recognition system, or have the option to access such a system."
Innovative uses of blockchain tech, data trusts, and algorithm assessments abound, alongside broader cultural shifts.
- A study published last year by the Pew Research Center found that most Americans distrust the federal government, and there's plenty of evidence to suggest the situation has yet to improve.
- Governments have more access than ever to our private information, creating an inherent tension between using data for the public good and ensuring citizens' privacy rights aren't abused.
- As emerging technologies mature, it will become clearer to the public which models most effectively deliver the transparency governments have committed to.