Companies can identify you from your music preferences, as well as influence and profit from your behavior.
- New research discovered that you can be identified from just three song choices.
- This type of information can be exploited by streaming services through targeted advertising.
- The researchers are calling for musical preference to be considered in regulations regarding online privacy.
While the focus on music piracy dominated the media for years, an equally important (and far less discussed) phenomenon occurred during the transition from broadcast radio to streaming. People were no longer beholden to the gatekeepers known as DJs. Today, listeners have the entire history of music at their fingertips. Each person is now their own DJ.
If it's free, you are the product
Though this might appear empowering, every advancement comes at a cost. Because listeners changed how they consumed music (namely, from radio broadcasts to personalized online streams), companies had to change their monetization strategy. Now, you are the product.
When you curate a playlist, you are inadvertently sending tons of data to different companies, with Spotify, YouTube, and Apple Music leading the way. As it turns out, according to a new study from Israeli researchers — Ariel University's Dr. Ron Hirschprung and Tel Aviv University's Dr. Ori Leshman — your musical tastes reveal more about your personality than you likely ever imagined.
Musical selection is a quasi-identifier
There are different ways in which you can be identified. Identifiers, such as your social security number, are highly specific and unique to you. But then there are quasi-identifiers — things like age, gender, and occupation — that can also give away your identity. The authors claim that musical selection is a quasi-identifier, and they argue that, as with other forms of sensitive data, our playlists should be considered when constructing privacy laws.
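The power of quasi-identifiers comes from combination: each attribute alone is shared by many people, but their intersection can be shared by very few. The toy example below (illustrative only; the population and attributes are invented, not taken from the study) sketches how the "anonymity set" — the crowd of people you are indistinguishable from — shrinks as quasi-identifiers are stacked:

```python
# A toy "population": each record is (age, gender, occupation).
# None of these fields alone identifies a person, but combined
# they can shrink the crowd you hide in to a handful of people.
population = [
    (34, "F", "teacher"),
    (34, "F", "engineer"),
    (34, "M", "teacher"),
    (51, "F", "teacher"),
    (34, "F", "teacher"),  # a second 34-year-old female teacher
]

def anonymity_set_size(record, population):
    """How many people share this exact combination of quasi-identifiers?"""
    return sum(1 for p in population if p == record)

# Age 34 alone matches four of the five people...
print(sum(1 for p in population if p[0] == 34))              # 4
# ...but age + gender + occupation together match only two.
print(anonymity_set_size((34, "F", "teacher"), population))  # 2
```

The researchers' claim is that a handful of favorite songs behaves the same way: each song is popular with many listeners, but the specific combination of three can be nearly unique to you.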
In their paper, they write, "[T]he combination of Big-Data, together with the availability of computational power — which is notoriously known for its potential of privacy violation — introduces a privacy threat from an unexpected angle: listening to music."
To prove their point, the researchers divided undergraduate students into four groups of roughly 35 volunteers each. Every member submitted three songs from their playlist of favorite tracks. The researchers then picked five members at random from each group, and the remaining volunteers were asked to match those five members to their submitted songs.
To the surprise of the researchers, the participants were right between 80 and 100 percent of the time. Incredibly, these students did not know one another well and were not aware of anyone's musical preferences in advance.
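To see why 80 to 100 percent accuracy is striking, compare it against pure chance. If five playlists are assigned to five people completely at random, the expected number of correct pairings is exactly one (a 20 percent hit rate). The simulation below is a minimal baseline sketch, not a model of the group-voting procedure the study actually used:

```python
import random

def random_matching_accuracy(n=5, trials=100_000):
    """Fraction of correct pairings when n playlists are assigned
    to n people uniformly at random."""
    correct = 0
    for _ in range(trials):
        guess = list(range(n))
        random.shuffle(guess)  # a random assignment of playlists to people
        correct += sum(1 for person, playlist in enumerate(guess)
                       if person == playlist)
    return correct / (n * trials)

# The expected value is exactly 1/n = 0.2 for n = 5; the simulation
# should land close to that, far below the 80-100% observed.
print(random_matching_accuracy())
```

Guessing blindly, the volunteers should have matched roughly one pair in five; matching four or five out of five suggests the songs carried real identifying signal.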
There are many outward signs that mark us in the eyes of others: what we wear, what we eat, how we style our hair, our mannerisms and posture, and even where we stand at parties. Other people pick up on these subtle clues, which in turn allows them to predict our personalities. In this study, the volunteers were able to identify the musical preferences of strangers simply by observing their outward appearances.
Of course, companies notice similar things and are able to exploit what they learn about us. In a press release, the authors stated:
"Music can become a form of characterization, and even an identifier. It provides commercial companies like Google and Spotify with additional and more in-depth information about us as users of these platforms. In the digital world we live in today, these findings have far-reaching implications on privacy violations, especially since information about people can be inferred from a completely unexpected source, which is therefore lacking in protection against such violations."

Musical preference isn't the only way in which you can be identified online. For instance, your browsing history can give away your identity. Listening to your favorite tunes while searching Google for a new recipe isn't as innocuous as you might think.
Stay in touch with Derek on Twitter and Facebook. His most recent book is "Hero's Dose: The Case For Psychedelics in Ritual and Therapy."
And is anyone protecting children's data?
- The market for smart toys is rapidly expanding and could grow to $18 billion by 2023.
- Smart toys can help with learning but pose risks if they are not designed to protect children's data and safety.
- Many companies are developing smart toys ethically and responsibly, with makers of AI-powered smart toys encouraged to apply to the Smart Toy Awards.
Imagine Sophie, a child born this year, who will be surrounded by technology at every phase of her childhood. When she is three years old, her parents buy her a smart doll that uses facial recognition and artificial intelligence (AI) to watch, listen to, and learn from her.
Like many children, Sophie will come to love this toy. And like previous generations of children with a favorite doll or teddy bear, she will carry it around with her, talk with it, and sleep with it beside her for many years.
If the smart doll is designed responsibly, this toy could be her best friend; if not, it will be a surveillance tool that records her every move and word spoken in its presence by her, her friends, and even her parents.
Smart toys use AI to learn about the child user and personalize the play or learning experience. They can learn a child's favorite color and song, and learn to recognize that child and other familiar people in the child's life. While this may sound futuristic, many smart toys already provide these capabilities. The market for these toys is rapidly expanding and is projected to grow to $18 billion by 2023.
To address this urgent use of AI, the World Economic Forum recently launched the Smart Toy Awards to recognize ethically and responsibly designed AI-powered toys that create an innovative and healthy play experience for children.
Smart toys provide enormous promise for children. They can customize learning based on data they gather about children; they can teach computer programming skills to children; and they can help children with disabilities develop cognitive, motor, and social skills.
But at the same time, smart toys provide large potential risks if they are not designed to protect children's data, safety, and cybersecurity.
A cautionary tale
The example of Sophie's smart doll is not far-fetched. In 2017, My Friend Cayla – an early smart toy that used facial and voice recognition – was declared an illegal surveillance tool in many countries.
When the Cayla doll was connected to a phone, data was sent to the manufacturer and a third-party company for processing and storage. And anyone with the My Friend Cayla app on their phone within 30 feet of the toy could access it and listen to the child user.
Germany issued a "kill order" for the doll and required parents to destroy it "with a hammer." Today, the only surviving Cayla dolls in Germany reside in the Spy Museum in Berlin.
The risks posed by smart toys
Imagine Sophie applying to college at 18. If her smart doll collected data on her from the age of 3 to 9, the company that built the toy could know her better than her parents do. Without adequate data protections, the company could also sell this data to the colleges to which she is applying, or to other third parties.
After college, Sophie applies for a job. If the employer bought data gathered on Sophie as a child, it could learn about her strengths and weaknesses. What if Sophie bullied her younger sister, yelled at her parents, or refused to do her homework as a child? All these actions, conducted in the privacy of the family's home, could be known by the company and sold to third parties who could use this information to discriminate against Sophie. The family's life is no longer private.
Today, data is gold, but gathering data on children is inherently problematic. As a company gathers data about children through Sophie's doll, it may have a responsibility to act or intervene. Imagine that Sophie tells her doll about suicidal thoughts and self-harm. Should the company be required to alert her parents and call 911?
The more data a smart toy gathers, the more complex the scenarios smart toy companies will face. Every company designing a smart toy capable of gathering this information must consider these worst-case scenarios as it develops toys, to protect the safety of the child user and those around them.
Developing responsible and ethical smart toys
Despite these significant risks, ethical and responsible smart toys are being developed. The Smart Toy Awards have developed four key governance criteria for companies developing AI-powered toys: data privacy and cybersecurity; accessibility and transparency; age appropriateness; and healthy play.
Sophie's smart doll illustrates the importance of strong data privacy and clearly communicating to adults buying the toy what the smart doll does and how it operates. This must be communicated in the Terms of Service in language understandable by non-technologically literate audiences. At minimum, Smart Toys should meet COPPA requirements in the US and GDPR in the EU.
Parents and guardians should understand with whom children's data is being shared and for what purpose. Companies should empower parents, guardians, and children to make their own decisions about how children's data is being used. And companies should not sell children's data to third parties.
Data privacy is a foundation for ethical and responsible smart toys, but they must also be designed to be accessible, transparent, age appropriate, and promote healthy play and children's mental health.
The future of childhood
Sophie's doll doesn't have to pose a threat to her and her parents, and the data collected on her won't hinder her future if it is carefully protected. In the EU, the GDPR provides the right to be forgotten, and a similar policy could allow children like Sophie to request that all data their smart toys collected on them be deleted when they turn 18, giving them a fresh start as they begin adulthood.
Sophie and all children should have a fair shot at childhood, education, careers, and life. The data collected on them as children should not be used to discriminate against them in the future.
Smart toys like Sophie's doll can play a pivotal role in childhoods, catalyzing creativity and critical thinking skills. Many companies are developing smart toys with careful consideration for ethics and responsibility. We urge companies to adopt our governance criteria as they're designing and developing smart toys.
Childhood is a sacred time and parents will do everything they can to protect their children's experiences. This won't be possible unless stakeholders work together across the private, public, and nonprofit sectors to develop ethical, responsible, and innovative smart toys that protect and foster the essence of childhood.
For only $70, you can get two years of protection when you browse or stream online with Private Internet Access.
- Whether it's for work, school, or pleasure, it's no secret that the world spends the majority of the day online.
- There's bound to be sensitive information floating around on your laptop or mobile device.
- To ensure you have the fastest, most secure connection every time you log on, you can snag a two-year subscription to Private Internet Access VPN for just $69.95.
Do you ever go a day without being online? Unless you're taking a conscious break, the answer is most likely no. This is why it's crucial for you to grab a two-year subscription to Private Internet Access VPN, which is on sale for a limited time.
The service boasts over ten years of experience in the virtual private network space, resulting in a VPN that's one of the best-reviewed and highest-rated platforms around. For example, it's earned 4.7 out of 5 stars on the App Store, 4.5 out of 5 on Google Play, and CNET named it one of the Best VPN Services of 2021.
With Private Internet Access, your connection will be paired with the largest global server network so you'll have access to whatever you want, no matter where you are. For instance, if you're back to traveling and there's online information that's restricted in the area you're in, this VPN is a way around that.
On the flip side, anonymity is another major feature, ultimately protecting you from any hackers attempting to gain access to your private data. Every time you get online, you'll have an invisibility cloak covering you. You won't have to worry about all these fancy benefits slowing your connection down either. The VPN guarantees tip-top speed and also allows up to 10 devices to simultaneously connect to it.
There's no doubt this is what your WFH setup has been missing all along. Don't wait; take advantage of the over 70% discount on a two-year subscription to Private Internet Access VPN (regularly $258).
Prices subject to change.
When you buy something through a link in this article or from our shop, Big Think earns a small commission. Thank you for supporting our team's work.
The attack on the Capitol forces us to confront an existential question about privacy.
- The insurrection attempt at the Capitol was captured by thousands of cell phones and security cameras.
- Many protestors have been arrested after their identity was reported to the FBI.
- Surveillance experts warn about the dangers of using facial recognition to monitor protests.
If ever there were a reason to wear masks, the insurrection at the Capitol last week would have been it. But many of those present believed the anti-mask rhetoric being used as a distraction from the nation's skyrocketing death rate. In fact, the day might even prove to have been a superspreader event, with at least two congresspeople becoming infected after the siege.
Those involved in the attempted coup d'état were not concerned about a virus. Nor, apparently, were they worried about shielding themselves from the tens of thousands of hours of video recorded by thousands of phones. In a strange merging of social media and dark web chat rooms come to life, separating actual insurrectionists from revolutionary tourists could prove a cumbersome task. One thing is certain: identifying them is not difficult.
Instagram-worthy sieges bring us to a longstanding existential question: should law enforcement be allowed to use AI and cell phone data to prosecute offenders?
Of the many security failures that day, one stood out: the small number of arrests for a breach of outsized magnitude. As the nation ogled an unemployed actor turned conspiracy shaman behind the speaker's chair in real time, scenes of horrendous violence took hours, even days, to be released. In a game of seemingly futile catch-up, federal agencies opened tip lines to identify insurrectionists who should have easily been in their grasp.
But the public responded.
There's the ex-wife of a retired Air Force lieutenant colonel whose neck gaiter was pulled down; the patriotic cohort of Internet detectives crowd-sourcing information for the FBI; the director of the infamous pseudoscience film, "Plandemic," praising the "patriots" that breached the building moments after he left the siege himself; and that unemployed actor who regularly attended QAnon events leaving the most public trail imaginable, and who is currently in custody facing serious charges.
Fish in barrels, all of them. What of the remaining thousands?
This privacy discussion is not new. Arthur Holland Michel, founder and co-director of the Center for the Study of the Drone at Bard College, warned Big Think in 2019 about the dangers of surveillance technology—specifically, in this case, a camera known as Gorgon Stare.
"Say there is a big public protest. With this camera, you can follow thousands of protesters back to their homes. Now you have a list of the home addresses of all the people involved in a political movement. If on their way home you witness them committing some crime—breaking a traffic regulation or frequenting a location that is known to be involved in the drug trade—you can use that surveillance data against them to essentially shut them up. That's why we have laws that prevent the use of surveillance technologies because it is human instinct to abuse them. That's why we need controls."
Late last year, University of Miami students pushed back against school administrators using facial recognition software for potentially insidious means — a protest not limited to that campus. Can you equate students refusing to attend classes during a pandemic with armed insurrectionists attempting to change the results of a democratic election? Not even close. More to the point, however, we should leave political leanings out of the equation when deciding who we think should be monitored.
Protesters enter the U.S. Capitol Building on January 06, 2021 in Washington, DC. Congress held a joint session today to ratify President-elect Joe Biden's 306-232 Electoral College win over President Donald Trump.
Credit: Win McNamee/Getty Images
Shortly after the siege, the New Yorker's Ronan Farrow helped reveal the identity of the aforementioned lieutenant colonel, while conservatives claimed the rioters were actually antifa — a conspiracy theory that has been peddled before. Politics simply can't be avoided in this age. Still, Albert Fox Cahn, founder of the Surveillance Technology Oversight Project, doesn't believe the insurrection attempt justifies an uptick in facial recognition technology.
"We don't need a cutting-edge surveillance dragnet to find the perpetrators of this attack: They tracked themselves. They livestreamed their felonies from the halls of Congress, recording each crime in full HD. We don't need facial recognition, geofences, and cell tower data to find those responsible, we need police officers willing to do their job."
The New Orleans City Council recently banned similar surveillance technologies due to fears that it would unfairly target minorities. San Francisco was the first city to outright ban facial recognition nearly two years ago. Cahn's point is that the FBI shouldn't be using AI to cover for the government's failure to protect the Capitol. Besides, the insurrectionists outed themselves on their own social media feeds.
When Pandora's box cracks open, it's hard to push the monster back in. Naomi Klein detailed the corporate takeover of New Orleans after Hurricane Katrina in "The Shock Doctrine." Real estate brokers, charter school companies, and government agencies didn't cause the flood, but they certainly profited from it. The fear is that companies like Clearview AI, which saw a 26 percent spike in usage of its facial recognition service following the attack, will be incentivized to expand, as will police departments emboldened to use such technology for any ends they choose.
Cahn comes to a similar conclusion: don't expose American citizens to the "anti-democratic technology" known as facial recognition. New Yorkers had to endure subway backpack checks for nearly a decade after 9/11; this slope is even slipperier.

As the US braces for further "armed protests" in all 50 states over the coming week, phones need to keep capturing footage. Bystanders need to remain safe, of course. But if last week was any indication, the insurrectionists have difficulty distinguishing between social media and real life. Their feeds should reveal enough.
Neuroscientists and ethicists want to ensure that neurotechnologies remain benevolent.
- Columbia University neuroscience professor Rafael Yuste is advocating for the UN to adopt "neuro-rights."
- Neurotechnology is a growing field that includes a range of technologies that influence higher brain activities.
- Ethicists fear that these technologies will be misused and abuses of privacy and even consciousness could follow.
Out-of-body experiences recur throughout spiritual literature. Thought to signify a spiritual "essence" co-existing alongside biology, OBEs began to be viewed in a different light when they were replicated in a laboratory in 2007. University College London researchers induced OBEs in volunteers through the use of head-mounted video displays. Other means for inducing OBEs include electrical and magnetic stimulation of the brain.
If a well-placed magnet causes you to "leave" your body, what else is possible with a little transcranial stimulation?
This question is of growing concern as wearable brain scanners become increasingly common. Last week, Columbia University neuroscience professor Rafael Yuste advocated for the United Nations to add "neuro-rights" to its Universal Declaration of Human Rights, in response to a burgeoning industry promising to alter (some would say manipulate) consciousness.
As Yuste phrased it during an online conference,
"If you can record and change neurons, you can in principle read and write the minds of people. This is not science fiction. We are doing this in lab animals successfully."
Neurotechnology is a growing field that includes a range of technologies that influence higher brain activities. Therapeutics designed to repair and improve brain function are included in this discipline—interventions for sleep problems, overstimulation, motor coordination, epilepsy, even depression.
More insidious intentions are included as well, however. You can imagine such devices in the hands of a cult leader, for example, or a political leader galvanizing their base. If the human imagination can create an idea, it can be transformed into reality, and not all humans are benevolent.
The ethical question is not new. For example, the debate over embryonic stem cells raged for years. Promises of trait enhancement concerned people who thought scientists would play the role of a god. While that debate has mostly died down, the use of neurotechnology by militaries and tech companies—particularly concerning privacy—will be contentious for decades.
Cognitive liberty is the principle that every individual must be allowed to maintain agency over their own mind. An extension of the concept of freedom of thought, cognitive liberty is defined as "the right of each individual to think independently and autonomously, to use the full power of his or her mind, and to engage in multiple modes of thought," in the words of neuroethicist Dr. Wrye Sententia and legal theorist Richard Glen Boire.
The challenges to cognitive liberty include privacy, which they argue must encompass the domain of inner thought; autonomy, so that thought processes remain the province of the individual; and choice, provided that the individual is not harming others.
Yuste believes the U.N.'s declaration, created in the wake of World War II in 1948, needs immediate revision. Deep brain stimulation is already an FDA-approved procedure. Whereas social media creates its own addiction and mental health problems, a sense of agency still exists there. When tech has the capability to get "under the skull and get at our neurons," as Johns Hopkins professor of neurology and neuroscience John Krakauer puts it, the matter becomes urgent.
For Yuste it's completely a matter of agency—and liberty.
"This is the first time in history that humans can have access to the contents of people's minds. We have to think very carefully about how we are going to bring this into society."
Stay in touch with Derek on Twitter and Facebook. His new book is "Hero's Dose: The Case For Psychedelics in Ritual and Therapy."