Companies can identify you from your music preferences, as well as influence and profit from your behavior.
- New research discovered that you can be identified from just three song choices.
- This type of information can be exploited by streaming services through targeted advertising.
- The researchers are calling for musical preference to be considered in regulations regarding online privacy.
While music piracy dominated media coverage for years, an equally important (and far less discussed) phenomenon occurred during the transition from broadcast radio to streaming: people were no longer beholden to the gatekeepers known as DJs. Today, listeners have the entire history of music at their fingertips. Each person is now their own DJ.
If it's free, you are the product
Though this might appear empowering, every advancement comes at a cost. Because listeners changed how they consumed music (namely, from radio broadcasts to personalized online streams), companies had to change their monetization strategy. Now, you are the product.
When you curate a playlist, you are inadvertently sending tons of data to different companies, with Spotify, YouTube, and Apple Music leading the way. As it turns out, according to a new study from Israeli researchers — Ariel University's Dr. Ron Hirschprung and Tel Aviv University's Dr. Ori Leshman — your musical tastes reveal more about your personality than you likely ever imagined.
Musical selection is a quasi-identifier
There are different ways in which you can be identified. Identifiers, such as your social security number, are highly specific and unique to you. But then there are quasi-identifiers — things like age, gender, and occupation — that can also give away your identity. The authors claim that musical selection is a quasi-identifier, and they argue that, as with other forms of sensitive data, our playlists should be considered when constructing privacy laws.
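To see why quasi-identifiers are dangerous, it helps to make the idea concrete. The sketch below (Python, with entirely made-up records) shows the core mechanic: strip out the names and ID numbers, and any combination of ordinary attributes that maps to exactly one record still re-identifies that person. The same logic applies when the attributes are three favorite songs.

```python
from collections import Counter

# Hypothetical records with direct identifiers (name, SSN) already removed.
# Each row keeps only quasi-identifiers: gender, age, occupation.
records = [
    ("male", 34, "teacher"),
    ("female", 29, "nurse"),
    ("male", 34, "engineer"),   # unique combination -> re-identifiable
    ("female", 29, "nurse"),    # shared combination -> still anonymous
    ("male", 61, "retired"),    # unique combination -> re-identifiable
]

# Count how many people share each quasi-identifier combination.
counts = Counter(records)

for combo, n in counts.items():
    status = "UNIQUE -- effectively re-identified" if n == 1 else f"hidden among {n} people"
    print(combo, "->", status)
```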
In their paper, they write, "[T]he combination of Big-Data, together with the availability of computational power — which is notoriously known for its potential of privacy violation — introduces a privacy threat from an unexpected angle: listening to music."
To prove their point, the researchers divided undergraduate students into four groups of roughly 35 volunteers each. Every member submitted three songs from their playlist of favorite tracks. The researchers then picked five members at random from each group, and the remaining volunteers voted on which playlist belonged to which of those five members.
Even to the surprise of the researchers, the participants were right between 80 percent and 100 percent of the time. Incredibly, these students did not know one another well and were not aware in advance of anyone's musical preferences.
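How surprising is that hit rate? The paper's exact voting protocol isn't reproduced here, but under a simplifying assumption – each voter pairs the five selected members with the five playlists completely at random, one playlist per person – a quick simulation shows that chance performance sits around 20 percent, far below the 80 to 100 percent the volunteers achieved.

```python
import random

# Chance baseline for the matching task, assuming a voter randomly
# assigns 5 playlists to 5 people (a one-to-one matching).
trials = 100_000
correct = total = 0

for _ in range(trials):
    people = list(range(5))
    guess = people[:]
    random.shuffle(guess)  # a random permutation of playlist assignments
    correct += sum(p == g for p, g in zip(people, guess))
    total += 5

print(f"Expected accuracy of random matching: {correct / total:.1%}")  # ~20.0%
```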
There are many outward signs that mark us in the eyes of others: what we wear, what we eat, how we style our hair, our mannerisms and posture, and even where we stand at parties. Other people pick up on these subtle clues, which in turn allows them to predict our personalities. In this study, the volunteers were able to identify the musical preferences of strangers simply by observing their outward appearances.
Of course, companies notice similar things and are able to exploit what they learn about us. In a press release, the authors stated:
"Music can become a form of characterization, and even an identifier. It provides commercial companies like Google and Spotify with additional and more in-depth information about us as users of these platforms. In the digital world we live in today, these findings have far-reaching implications on privacy violations, especially since information about people can be inferred from a completely unexpected source, which is therefore lacking in protection against such violations."Musical preference isn't the only way in which you can be identified online. For instance, your browsing history can give away your identity. Listening to your favorite tunes while searching Google for a new recipe isn't as innocuous as you might think.
Do you sound friendly? Hostile? And which voice would be more likely to buy something?
Imagine you phone a company's customer service line. The call-center software retrieves its analysis of the speaking style you used when you phoned other companies that the software firm services. The computer has concluded you are "friendly and talkative." Using predictive routing, it connects you to a customer service agent whom company research has identified as especially good at getting friendly and talkative customers to buy more expensive versions of the goods they're considering.
This hypothetical situation may sound as if it's from some distant future. But automated voice-guided marketing activities like this are happening all the time.
If you hear "This call is being recorded for training and quality control," it isn't just the customer service representative they're monitoring.
It can be you, too.
When conducting research for my forthcoming book, "The Voice Catchers: How Marketers Listen In to Exploit Your Feelings, Your Privacy, and Your Wallet," I went through over 1,000 trade magazine and news articles on the companies connected to various forms of voice profiling. I examined hundreds of pages of U.S. and EU laws applying to biometric surveillance. I analyzed dozens of patents. And because so much about this industry is evolving, I spoke to 43 people who are working to shape it.
It soon became clear to me that we're in the early stages of a voice-profiling revolution that companies see as integral to the future of marketing.
Thanks to the public's embrace of smart speakers, intelligent car displays and voice-responsive phones – along with the rise of voice intelligence in call centers – marketers say they are on the verge of being able to use AI-assisted vocal analysis technology to achieve unprecedented insights into shoppers' identities and inclinations. In doing so, they believe they'll be able to circumvent the errors and fraud associated with traditional targeted advertising.
Not only can people be profiled by their speech patterns, but they can also be assessed by the sound of their voices – which, according to some researchers, is unique and can reveal their feelings, personalities and even their physical characteristics.
Flaws in targeted advertising
Top marketing executives I interviewed said that they expect their customer interactions to include voice profiling within a decade or so.
Part of what attracts them to this new technology is a belief that the current digital system of creating unique customer profiles – and then targeting them with personalized messages, offers and ads – has major drawbacks.
A simmering worry among internet advertisers, one that burst into the open during the 2010s, is that customer data often isn't up to date, profiles may be based on multiple users of a device, names can be confused and people lie.
These are all barriers to understanding individual shoppers.
Voice analysis, on the other hand, is seen as a solution that makes it nearly impossible for people to hide their feelings or evade their identities.
Building out the infrastructure
Most of the activity in voice profiling is happening in customer support centers, which are largely out of the public eye.
But there are also hundreds of millions of Amazon Echoes, Google Nests and other smart speakers out there. Smartphones also contain such technology.
All are listening and capturing people's individual voices. They respond to your requests. But the assistants are also tied to advanced machine learning and deep neural network programs that analyze what you say and how you say it.
Amazon and Google – the leading purveyors of smart speakers outside China – appear to be doing little voice analysis on those devices beyond recognizing and responding to individual owners. Perhaps they fear that pushing the technology too far will, at this point, lead to bad publicity.
Nevertheless, the user agreements of Amazon and Google – as well as Pandora, Bank of America and other companies that people access routinely via phone apps – give them the right to use their digital assistants to understand you by the way you sound. Amazon's most public application of voice profiling so far is its Halo wristband, which claims to know the emotions you're conveying when you talk to relatives, friends and employers.
The company assures customers it doesn't use Halo data for its own purposes. But it's clearly a proof of concept – and a nod toward the future.
Patents point to the future
The patents from these tech companies offer a vision of what's coming.
In one Amazon patent, a device with the Alexa assistant picks up irregularities in a woman's speech that suggest a cold, using "an analysis of pitch, pulse, voicing, jittering, and/or harmonicity of a user's voice, as determined from processing the voice data." From that conclusion, Alexa asks if the woman wants a recipe for chicken soup. When she says no, it offers to sell her cough drops with one-hour delivery.
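Amazon's production system is not public, but the acoustic quantities the patent names are standard signal-processing features that anyone can approximate with open-source tools. Here is a minimal sketch (Python, using the librosa library; the filename is a hypothetical stand-in for captured audio) that estimates two of them, pitch and jitter. Elevated jitter is one of the cues a hoarse, congested voice tends to give off.

```python
import numpy as np
import librosa  # open-source audio-analysis library

# Stand-in for whatever audio a voice assistant captured.
y, sr = librosa.load("voice_sample.wav", sr=None)

# Frame-by-frame fundamental frequency (pitch) via the pYIN algorithm.
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Keep only voiced frames (f0 is NaN where no pitch was detected).
voiced_f0 = f0[voiced_flag & ~np.isnan(f0)]

# Local jitter: average cycle-to-cycle variation of the glottal period.
periods = 1.0 / voiced_f0
jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)

print(f"mean pitch:   {np.mean(voiced_f0):.1f} Hz")
print(f"local jitter: {jitter:.2%}")  # hoarse voices tend to show more jitter
```

A deployed system would feed many such features into a trained classifier rather than eyeball a single number; the point is only that the raw signals the patent describes are this easy to extract.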
Another Amazon patent suggests an app to help a store salesperson decipher a shopper's voice and plumb unconscious reactions to products. The contention is that how people sound indicates what they like better than their words do.
And one of Google's proprietary inventions involves tracking family members in real time using special microphones placed throughout a home. Based on the pitch of voice signatures, Google circuitry infers gender and age information – for example, one adult male and one female child – and tags them as separate individuals.
The company's patent asserts that over time the system's "household policy manager" will be able to compare life patterns, such as when and how long family members eat meals, how long the children watch television, and when electronic game devices are working – and then have the system suggest better eating schedules for the kids, or offer to control their TV viewing and game playing.
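The pitch-based tagging the patent describes can be caricatured in a few lines. The following is a deliberately crude, illustrative sketch; the frequency bands are ballpark figures from the phonetics literature, not anything taken from Google.

```python
def rough_demographic_guess(mean_f0_hz: float) -> str:
    """Map an average speaking pitch to a coarse demographic label.

    Real systems would use trained models over many features; these
    bands are rough textbook ranges, used here only for illustration.
    """
    if mean_f0_hz < 155:
        return "likely adult male"
    if mean_f0_hz < 255:
        return "likely adult female or older child"
    return "likely young child"


print(rough_demographic_guess(120.0))  # -> likely adult male
print(rough_demographic_guess(290.0))  # -> likely young child
```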
In the West, the road to this advertising future starts with firms enticing users to buy inexpensive voice technologies, gaining permission to gather voice data in the process.
When tech companies have further developed voice analysis software – and people have become increasingly reliant on voice devices – I expect the companies to begin widespread profiling and marketing based on voice data. Hewing to the letter if not the spirit of whatever privacy laws exist, the companies will, I expect, forge ahead into their new incarnations, even if most of their users joined before this new business model existed.
This classic bait and switch marked the rise of both Google and Facebook. Only when the numbers of people flocking to these sites became large enough to attract high-paying advertisers did their business models solidify around selling ads personalized to what Google and Facebook knew about their users.
By then, the sites had become such important parts of their users' daily activities that people felt they couldn't leave, despite their concerns about data collection and analysis that they didn't understand and couldn't control.
This strategy is already starting to play out as tens of millions of consumers buy Amazon Echoes at giveaway prices.
The dark side of voice profiling
Here's the catch: It's not clear how accurate voice profiling is, especially when it comes to emotions.
It is true, according to Carnegie Mellon voice recognition scholar Rita Singh, that the activity of your vocal nerves is connected to your emotional state. However, Singh told me that she worries that with the easy availability of machine-learning packages, people with limited skills will be tempted to run shoddy analyses of people's voices, leading to conclusions that are as dubious as the methods.
She also argues that inferences that link physiology to emotions and forms of stress may be culturally biased and prone to error. That concern hasn't deterred marketers, who typically use voice profiling to draw conclusions about individuals' emotions, attitudes and personalities.
While some of these advances promise to make life easier, it's not difficult to see how voice technology can be abused and exploited. What if voice profiling tells a prospective employer that you're a bad risk for a job that you covet or desperately need? What if it tells a bank that you're a bad risk for a loan? What if a restaurant decides it won't take your reservation because you sound low class, or too demanding?
Consider, too, the discrimination that can take place if voice profilers follow some scientists' claims that it is possible to use an individual's vocalizations to tell the person's height, weight, race, gender and health.
People are already subjected to different offers and opportunities based on the personal information companies have collected. Voice profiling adds an especially insidious means of labeling. Today, some states such as Illinois and Texas require companies to ask for permission before conducting analysis of vocal, facial or other biometric features.
But other states expect people to learn about the information that's collected about them from privacy policies or terms of service – which means they rarely will. And the federal government hasn't enacted a sweeping marketing surveillance law.
With the looming widespread adoption of voice analysis technology, it's important for government leaders to adopt policies and regulations that protect the personal information revealed by the sound of a person's voice.
One proposal: While the use of voice authentication – or using a person's voice to prove their identity – could be allowed under certain carefully regulated circumstances, all voice profiling should be prohibited in marketers' interactions with individuals. This prohibition should also apply to political campaigns and to government activities without a warrant.
That seems like the best way to ensure that the coming era of voice profiling is constrained before it becomes too integrated into daily life and too pervasive to control.
And is anyone protecting children's data?
- The market for smart toys is rapidly expanding and could grow to $18 billion by 2023.
- Smart toys can help with learning but pose risks if they are not designed to protect children's data and safety.
- Many companies are developing smart toys ethically and responsibly, with makers of AI-powered smart toys encouraged to apply to the Smart Toy Awards.
Imagine a child, Sophie, born this year, who will be surrounded by technology at every phase of her childhood. When she is three years old, her parents buy her a smart doll that uses facial recognition and artificial intelligence (AI) to watch, listen to, and learn from her.
Like many children, Sophie will come to love this toy. And like previous generations of children with a favorite doll or teddy bear, she will carry it around with her, talk with it, and sleep with it beside her for many years.
If the smart doll is designed responsibly, this toy could be her best friend; if not, it will be a surveillance tool that records her every move, along with every word spoken in its presence by her, her friends, and even her parents.
Smart toys use AI to learn about the child user and personalize the play or learning experience for them. They can learn a child's favorite color and song, and they can learn to recognize that child and other familiar people in the child's life. While this may sound futuristic, many smart toys already provide these capabilities. The market for these toys is rapidly expanding and is projected to reach $18 billion by 2023.
To address this urgent use of AI, the World Economic Forum recently launched the Smart Toy Awards to recognize ethically and responsibly designed AI-powered toys that create an innovative and healthy play experience for children.
Smart toys provide enormous promise for children. They can customize learning based on data they gather about children; they can teach computer programming skills to children; and they can help children with disabilities develop cognitive, motor, and social skills.
But at the same time, smart toys provide large potential risks if they are not designed to protect children's data, safety, and cybersecurity.
A cautionary tale
The example of Sophie's smart doll is not far-fetched. In 2017, My Friend Cayla – an early smart toy that used facial and voice recognition – was declared an illegal surveillance tool in many countries.
If the Cayla doll was connected to a phone, data was sent to the manufacturer and a third-party company for processing and storage. And because the doll's Bluetooth connection reportedly required no authentication, anyone with the My Friend Cayla app on their phone within 30 feet of a toy could access it and listen to the child user.
Germany issued a "kill order" for the doll and required parents to destroy it "with a hammer." Today, the only surviving Cayla dolls in Germany reside in the Spy Museum in Berlin.
The risks posed by smart toys
Imagine Sophie applying to college at age 18. If her smart doll collected data on her from age 3 to 9, the company that built the toy could know her better than her parents do. Without adequate data protections, the company could also sell this data to the colleges she is applying to, or to other third parties.
After college, Sophie applies for a job. If the employer bought data gathered on Sophie as a child, it could learn about her strengths and weaknesses. What if Sophie bullied her younger sister, yelled at her parents, or refused to do her homework as a child? All of these actions, conducted in the privacy of the family's home, could be known to the company and sold to third parties who could use the information to discriminate against Sophie. The family's life is no longer private.
Today, data is gold, but gathering data on children is inherently problematic. As a company gathers data about a child through a toy like Sophie's doll, it may also acquire a responsibility to act or intervene. Imagine that Sophie tells her doll about suicidal thoughts and self-harm. Should the company be required to alert her parents and call 911?
The more data a smart toy gathers, the more complex the scenarios its maker will likely face. Every company designing a smart toy capable of gathering this kind of information must consider these worst-case scenarios as it develops toys, in order to protect the safety of the child user and those around them.
Developing responsible and ethical smart toys
Despite these significant risks, ethical and responsible smart toys are being developed. The Smart Toy Awards have developed four key governance criteria for companies developing AI-powered toys: data privacy and cybersecurity; accessibility and transparency; age appropriateness; and healthy play.
Sophie's smart doll illustrates the importance of strong data privacy and of clearly communicating to the adults buying the toy what the smart doll does and how it operates. This must be communicated in the Terms of Service in language understandable to non-technologically literate audiences. At minimum, smart toys should meet the requirements of the Children's Online Privacy Protection Act (COPPA) in the US and the GDPR in the EU.
Parents and guardians should understand with whom children's data is being shared and for what purpose. Companies should empower parents, guardians, and children to make their own decisions about how children's data is being used. And companies should not sell children's data to third parties.
Data privacy is a foundation for ethical and responsible smart toys, but they must also be designed to be accessible, transparent, age appropriate, and promote healthy play and children's mental health.
The future of childhood
Sophie's doll doesn't have to be a source of concern for her and her parents, and the data collected on her won't hinder her future if it is carefully protected. In the EU, the GDPR provides the right to be forgotten, and a similar policy could allow children like Sophie to request that all the data their smart toys collected on them be deleted when they turn 18, giving them a fresh start as they begin adulthood.
Sophie and all children should have a fair shot at childhood, education, careers, and life. The data collected on them as children should not be used to discriminate against them in the future.
Smart toys like Sophie's doll can play a pivotal role in childhoods, catalyzing creativity and critical thinking skills. Many companies are developing smart toys with careful consideration for ethics and responsibility. We urge companies to adopt our governance criteria as they're designing and developing smart toys.
Childhood is a sacred time and parents will do everything they can to protect their children's experiences. This won't be possible unless stakeholders work together across the private, public, and nonprofit sectors to develop ethical, responsible, and innovative smart toys that protect and foster the essence of childhood.
The attack on the Capitol forces us to confront an existential question about privacy.
- The insurrection attempt at the Capitol was captured by thousands of cell phones and security cameras.
- Many protesters have been arrested after their identities were reported to the FBI.
- Surveillance experts warn about the dangers of using facial recognition to monitor protests.
If ever there were a reason to wear masks, the insurrection at the Capitol last week would have been it. But many of those present believed the anti-mask rhetoric being used as a distraction from the nation's skyrocketing death rate. In fact, the day might even prove to have been a superspreader event, with at least two congresspeople becoming infected after the siege.
Those involved in the attempted coup d'état were not concerned about a virus. Nor, apparently, were they worried about shielding themselves from the tens of thousands of hours of video recorded by thousands of phones. With social media and dark-web chat rooms merging in plain sight, separating actual insurrectionists from revolutionary tourists could prove a cumbersome task. One thing is certain: identifying them is not difficult.
Instagram-worthy sieges bring us to a longstanding existential question: should law enforcement be allowed to use AI and cell phone data to prosecute offenders?
Of the many security failures that day, one stood out: the small number of arrests for a breach of outsized magnitude. As the nation gawked in real time at an unemployed actor turned conspiracy shaman behind the speaker's chair, scenes of horrendous violence took hours, even days, to be released. In a game of seemingly futile catch-up, federal agencies opened tip lines to identify insurrectionists who should easily have been in their grasp.
But the public responded.
There's the ex-wife of a retired Air Force lieutenant colonel whose neck gaiter was pulled down; the patriotic cohort of Internet detectives crowd-sourcing information for the FBI; the director of the infamous pseudoscience film, "Plandemic," praising the "patriots" that breached the building moments after he left the siege himself; and that unemployed actor who regularly attended QAnon events leaving the most public trail imaginable, and who is currently in custody facing serious charges.
Fish in barrels, all of them. What of the remaining thousands?
This privacy discussion is not new. Arthur Holland Michel, founder and co-director of the Center for the Study of the Drone at Bard College, warned Big Think in 2019 about the dangers of surveillance technology—specifically, in this case, a camera known as Gorgon Stare.
"Say there is a big public protest. With this camera, you can follow thousands of protesters back to their homes. Now you have a list of the home addresses of all the people involved in a political movement. If on their way home you witness them committing some crime—breaking a traffic regulation or frequenting a location that is known to be involved in the drug trade—you can use that surveillance data against them to essentially shut them up. That's why we have laws that prevent the use of surveillance technologies because it is human instinct to abuse them. That's why we need controls."
Late last year, University of Miami students pushed back against school administrators' reported use of facial recognition software for potentially insidious ends – a protest not limited to that campus. Can you equate students refusing to attend classes during a pandemic with armed insurrectionists attempting to overturn the results of a democratic election? Not even close. More to the point, however, we should leave political leanings out of the equation when deciding who we think should be monitored.
Photo: Protesters enter the U.S. Capitol Building on January 6, 2021, in Washington, DC, during the joint session of Congress held to ratify President-elect Joe Biden's 306-232 Electoral College win over President Donald Trump. Credit: Win McNamee/Getty Images
Shortly after the siege, the New Yorker's Ronan Farrow helped reveal the identity of the aforementioned lieutenant colonel, while conservatives claimed the rioters were actually antifa—a conspiracy theory that has been peddled before. Politics simply can't be avoided in this age. Still, Albert Fox Cahn, founder of the Surveillance Technology Oversight Project, doesn't believe the insurrection attempt justifies an uptick in facial recognition technology.
"We don't need a cutting-edge surveillance dragnet to find the perpetrators of this attack: They tracked themselves. They livestreamed their felonies from the halls of Congress, recording each crime in full HD. We don't need facial recognition, geofences, and cell tower data to find those responsible, we need police officers willing to do their job."
The New Orleans City Council recently banned similar surveillance technologies due to fears that it would unfairly target minorities. San Francisco was the first city to outright ban facial recognition nearly two years ago. Cahn's point is that the FBI shouldn't be using AI to cover for the government's failure to protect the Capitol. Besides, the insurrectionists outed themselves on their own social media feeds.
When Pandora's box cracks open, it's hard to push the monster back in. Naomi Klein detailed the corporate takeover of New Orleans after Hurricane Katrina in "The Shock Doctrine." Real estate brokers, charter school companies, and government agencies didn't cause the flood, but they certainly profited from it. The fear is that companies like Clearview AI, which saw a 26 percent spike in usage of its facial recognition service following the attack, will be incentivized, as will police departments, to use such technology however they choose.
Cahn comes to a similar conclusion: don't expose American citizens to the "anti-democratic technology" known as facial recognition. New Yorkers had to endure subway backpack checks for nearly a decade after 9/11; this slope is even slipperier.

As the US braces for further "armed protests" in all 50 states over the coming week, phones need to keep capturing footage. Bystanders need to remain safe, of course. But if last week was any indication, the insurrectionists have difficulty distinguishing between social media and real life. Their feeds should reveal enough.
A heated debate is occurring at the University of Miami.
- Students say they were identified with facial recognition technology after a protest at the University of Miami; campus police claim this isn't true.
- Over 60 universities nationwide have banned facial recognition; a few colleges, such as USC, regularly use it.
- Civil rights groups in Miami have called on the University of Miami to hold open talks on the topic.
Silicon Valley has led the technological revolution since the early seventies. Journalist Don Hoefler coined the term "Silicon Valley USA" to describe the region that came to dominate the growing computer industry. The products imagined and invented in that small coastal area of Northern California have changed the world, yet a deep mistrust of technology has always taken root in the very same locale. Dystopian fervor never lies far from utopian bliss.
Case in point: in May, the San Francisco Board of Supervisors blocked the use of facial recognition technology by police forces by a vote of 8-to-1. To the board, protecting civil liberties mattered more than using this emerging tech to identify criminals. Critics have long feared the potential emergence of a surveillance state. The board agreed.
Arthur Holland Michel has been studying surveillance technology for years. In his book, "Eyes in the Sky," he warns that such tech in the hands of police, even if promoted as protective of citizens, "could engender a style of relentless activity-based intelligence that treats all individuals as unknown unknowns—possible criminals who can only be discerned through persistent surveillance."
There's already creepy precedent. Insurance companies employ drones to spy on claimants suspected of lying about home claims or injury compensation—a practice legal in America. As Michel says of high-tech surveillance, "Everyone I've met who has been involved, even peripherally, with the all-seeing eye believes that they have created a force for good."
Americans don't always agree with that assessment, especially on college campuses. Over 60 universities—Harvard, MIT, and UCLA are on the list—have banned facial recognition. Of the few schools that utilize it, USC lets students enter their rooms via face scans; the software also ensures intruders cannot access buildings.
These are great uses of this technology. You could argue it's how any progress with our devices should work: in service of people. The problem, of course, is that those in power don't tend to stop when they have a little taste of the possibilities.
The University of Miami is the latest school to be embroiled in a battle over facial recognition. The ACLU of Florida, joined by 21 other groups, has requested that the university hold an open forum where students can express their concerns. An excerpt from their letter appears below.
This call for action was inspired after a September incident in which students protested returning for in-person classes during the pandemic. The students, concerned about their health, predominantly wore face masks. Still, a number of them were identified, leading to concerns that facial recognition was used. Campus police denied it—the chief even claimed the tech "doesn't work," though that notion has been refuted—yet civil liberties groups are worried that an invasion of privacy occurred.
Lia Holland, a member of the digital rights nonprofit Fight for the Future, wants answers from school administrators.
"UMiami is struggling to answer to their creepy surveillance practices, and clarify whether they are using their own facial recognition system, or Florida's state facial recognition database."
The police chief in question, David Rivero, claims that overhead surveillance cameras provided the identifications at the protest. Yet, speaking of another case involving facial-recognition software, he is on record stating, "We were able to [easily] identify and arrest him. We've [detected] a few bad guys that way."
The letter sent to the Board of Administrators includes the following demands:
- Issue a campus-wide policy banning non-personal use of facial recognition technology, and issue a statement that you have done so.
- Immediately schedule an open forum with students and faculty/staff to discuss community concerns and clarify how student activists who participated in First Amendment protected protest activities were identified by campus police.
- Immediately schedule a meeting with the UMiami Employee Student Alliance (UMESA) to address their COVID-19 safety concerns, the subject of the original protest.
There's no doubt facial-recognition technology has a place in law enforcement. Victims of unsolved crimes are relieved when the perpetrators are brought to justice, regardless of the means. As Michel writes, some police forces already surveil large regions of their districts using the Gorgon Stare, a wide-area camera system flown on aircraft. Cameras are ubiquitous, and that's not going to change.

As a society, we need honest discussions regarding the application of surveillance. Nearly every citizen in China has already been logged by facial recognition software, which has led to human rights abuses. While the stated intention of this tech by American police is pure, good intentions are known to pave the way...well, we know how that ends.