Machine learning is a powerful and imperfect tool that should not go unmonitored.
- When you harness the power and potential of machine learning, there are also some drastic downsides that you've got to manage.
- Deploying machine learning, you face the risk that it will be discriminatory, biased, inequitable, exploitative, or opaque.
- In this article, I cover six ways that machine learning threatens social justice and reach an incisive conclusion: The remedy is to take on machine learning standardization as a form of social activism.
When you use machine learning, you aren't just optimizing models and streamlining business. You're governing. In essence, the models embody policies that control access to opportunities and resources for many people. They drive consequential decisions as to whom to investigate, incarcerate, set up on a date, or medicate – or to whom to grant a loan, insurance coverage, housing, or a job.
For the same reason that machine learning is valuable—that it drives operational decisions more effectively—it also wields power in the impact it has on millions of individuals' lives. Threats to social justice arise when that impact is detrimental, when models systematically limit the opportunities of underprivileged or protected groups.
Here are six ways machine learning threatens social justice:
1) Blatantly discriminatory models are predictive models that base decisions partly or entirely on a protected class. Protected classes include race, religion, national origin, gender, gender identity, sexual orientation, pregnancy, and disability status. By taking one of these characteristics as an input, the model's outputs – and the decisions driven by the model – are based at least in part on membership in a protected class. Although models rarely do so directly, there is precedent and support for doing so.
This would mean that a model could explicitly hinder, for example, black defendants for being black. So, imagine sitting across from a person being evaluated for a job, a loan, or even parole. When they ask you how the decision process works, you inform them, "For one thing, our algorithm penalized your score by seven points because you're black." This may sound shocking and sensationalistic, but I'm only literally describing what the model would do, mechanically, if race were permitted as a model input.
2) Machine bias. Even when protected classes are not provided as a direct model input, we find, in some cases, that model predictions are still inequitable. This is because other variables end up serving as proxies to protected classes. This is a bit complicated, since it turns out that models that are fair in one sense are unfair in another.
For example, some crime risk models succeed in flagging both black and white defendants with equal precision – each flag tells the same probabilistic story, regardless of race – and yet the models falsely flag black defendants more often than white ones. COMPAS, a crime-risk model sold to law enforcement across the US, falsely flags white defendants at a rate of 23.5% and black defendants at 44.9%. In other words, black defendants who don't deserve to be flagged are erroneously flagged almost twice as often as white defendants who don't deserve it.
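This tension between fairness criteria is not a quirk of one product; it falls out of the arithmetic whenever base rates differ between groups. A minimal sketch, using made-up counts rather than the actual COMPAS data, shows how two groups can receive flags with identical precision while one group suffers a far higher false positive rate:

```python
def rates(tp, fp, tn, fn):
    """Precision of the flags and false positive rate for one group."""
    precision = tp / (tp + fp)   # how often a flag turns out to be correct
    fpr = fp / (fp + tn)         # share of non-reoffenders wrongly flagged
    return precision, fpr

# Hypothetical counts (illustrative only): two groups of 100 defendants
# with different base rates of reoffending, flagged by the same model.
group_a = rates(tp=30, fp=10, tn=50, fn=10)   # base rate of reoffending: 40%
group_b = rates(tp=45, fp=15, tn=25, fn=15)   # base rate of reoffending: 60%

print(group_a)  # precision 0.75, false positive rate 10/60 ≈ 0.167
print(group_b)  # precision 0.75, false positive rate 15/40 = 0.375
```

Both groups get flags that are right 75% of the time, yet innocent members of group B are flagged more than twice as often as innocent members of group A – the same shape of disparity reported for COMPAS.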
3) Inferring sensitive attributes—predicting pregnancy and beyond. Machine learning predicts sensitive information about individuals, such as sexual orientation, whether they're pregnant, whether they'll quit their job, and whether they're going to die. Researchers have shown that it is possible to predict race based on Facebook likes. These predictive models deliver dynamite.
In a particularly extraordinary case, officials in China use facial recognition to identify and track the Uighurs, a minority ethnic group systematically oppressed by the government. This is the first known case of a government using machine learning to profile by ethnicity. One Chinese start-up valued at more than $1 billion said its software could recognize "sensitive groups of people." Its website said, "If originally one Uighur lives in a neighborhood, and within 20 days six Uighurs appear, it immediately sends alarms" to law enforcement.
4) A lack of transparency. A computer can keep you in jail, or deny you a job, a loan, insurance coverage, or housing – and yet you cannot face your accuser. The predictive models generated by machine learning to drive these weighty decisions are generally kept locked up as a secret, unavailable for audit, inspection, or interrogation. Such models, inaccessible to the public, perpetuate a lack of due process and a lack of accountability.
Two ethical standards oppose this shrouding of electronically-assisted decisions: 1) model transparency, the standard that predictive models be accessible, inspectable, and understandable; and 2) the right to explanation, the standard that consequential decisions driven or informed by a predictive model always be held to that standard of transparency. Meeting those standards would mean, for example, that a defendant be told which factors contributed to their crime risk score – which aspects of their background, circumstances, or past behavior caused the defendant to be penalized. This would provide the defendant the opportunity to respond accordingly, establishing context, explanations, or perspective on these factors.
5) Predatory micro-targeting. Powerlessness begets powerlessness – and that cycle can magnify for consumers when machine learning increases the efficiency of activities designed to maximize profit for companies. Improving the micro-targeting of marketing and the predictive pricing of insurance and credit can magnify the cycle of poverty. For example, highly-targeted ads are more adept than ever at exploiting vulnerable consumers and separating them from their money.
And insurance pricing can lead to the same result. With insurance, the name of the game is to charge more for those at higher risk. Left unchecked, this process can quickly slip into predatory pricing. For example, a churn model may find that elderly policyholders don't tend to shop around and defect to better offers, so there's less of an incentive to keep their policy premiums in check. And pricing premiums based on other life factors also contributes to a cycle of poverty. For example, individuals with poor credit ratings are charged more for car insurance. In fact, a low credit score can increase your premium more than an at-fault car accident.
6) The coded gaze. If a group of people is underrepresented in the data from which the machine learns, the resulting model won't work as well for members of that group. This results in exclusionary experiences and discriminatory practices. This phenomenon can occur for both facial image processing and speech recognition.
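The mechanism behind this failure can be sketched with synthetic data. In the toy example below (all numbers and group labels are illustrative, not drawn from any real system), a single model is fit to a sample in which one group makes up only 5% of the data, and the pattern that predicts the label for that group differs from the majority's pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the feature that determines the label points in a
# different direction for the minority group, which supplies only 5%
# of the training sample.
def make_group(n, direction):
    x = rng.normal(size=(n, 2))
    y = (x @ direction > 0).astype(int)
    return x, y

maj_x, maj_y = make_group(950, np.array([1.0, 0.0]))  # majority: label follows feature 1
min_x, min_y = make_group(50,  np.array([0.0, 1.0]))  # minority: label follows feature 2

X = np.vstack([maj_x, min_x])
y = np.concatenate([maj_y, min_y])

# Fit one linear rule by least squares (a stand-in for any learner
# trained on the pooled data).
w, *_ = np.linalg.lstsq(X, 2 * y - 1, rcond=None)

def accuracy(x, t):
    return ((x @ w > 0).astype(int) == t).mean()

print(accuracy(maj_x, maj_y))  # high: the rule fits the majority's pattern
print(accuracy(min_x, min_y))  # near chance: the minority's pattern is drowned out
```

The learner minimizes average error, so it caters to the 95% and effectively ignores the group it rarely sees – the statistical core of the exclusionary experiences described above.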
Recourse: Establish machine learning standards as a form of social activism
To address these problems, take on machine learning standardization as a form of social activism. We must establish standards that go beyond nice-sounding yet vague platitudes such as "be fair", "avoid bias", and "ensure accountability". Without precise definitions, these catchphrases are subjective and do little to guide concrete action. Unfortunately, such broad language is fairly common among the principles released by many companies. With it, companies protect their public image more than they protect the public.
People involved in initiatives to deploy machine learning have a powerful, influential voice. This relatively small group of people molds and sets the trajectory for systems that automatically dictate the rights and resources that great numbers of consumers and citizens gain access to.
Famed machine learning leader and educator Andrew Ng drove it home: "AI is a superpower that enables a small team to affect a huge number of people's lives... Make sure the work you do leaves society better off."
And Allan Sammy, Director, Data Science and Audit Analytics at Canada Post, clarified the level of responsibility: "A decision made by an organization's analytic model is a decision made by that entity's senior management team."
Implementing ethical data science is as important as ensuring a self-driving car knows when to hit the brakes.
Establishing well-formed ethical standards for machine learning will be an intensive, ongoing process. For more, watch this short video, in which I provide some specifics meant to kick-start the process.
Eric Siegel, Ph.D., is a leading consultant and former Columbia University professor who makes machine learning understandable and captivating. He is the founder of the long-running Predictive Analytics World and Deep Learning World conference series and the instructor of the end-to-end, business-oriented Coursera specialization Machine Learning for Everyone. Stay in touch with Eric on Twitter @predictanalytic.
Antisocial is a deep dive into extremist views online.
- The New Yorker's Andrew Marantz spent three years embedded with leading alt-right voices.
- His book, Antisocial, carries you deep inside the mindset and motivation behind online trolling.
- To get back on track, Marantz believes we need a "new moral vocabulary."
Barack Obama loved one of Martin Luther King, Jr.'s quotes so much he had it woven into an Oval Office rug. You've likely heard it: "The arc of the moral universe is long, but it bends toward justice." The idea itself was borrowed from an 1853 sermon by abolitionist minister Theodore Parker. King's truncation of Parker's sentiment is worth noting, as the sentence was part of a longer text in which the nineteenth-century minister expressed doubt about whether the moral universe can even be understood. His sentiment was more inquisitive than declarative.
Hoping for justice is part of our biological design, a function more of prayer than certainty. Applying it to reality runs you into problems. For example, what moral lesson can we pull from humans playing a part in causing the devastating brush fires in Australia, which thus far have killed an estimated 500 million animals? How does one even begin to make the case for justice? This question isn't limited to one continent. Not a week passes that doesn't include dozens if not hundreds of cases that will never resolve in a manner bending toward justice.
Andrew Marantz, a New Yorker staff writer and author of the new book, Antisocial: Online Extremists, Techno-Utopians, and the Hijacking of the American Conversation, asked his own questions about justice in relation to the growing influence of technology on American political and social discourse. Embedding himself with leading figures in the alt-right over the course of three years, he writes that if King had thought justice was merely a metaphysical construct, the Civil Rights leader would never have marched across bridges. As Marantz puts it, "The arc of history bends the way people bend it."
Justice isn't objective. It is determined by social mores and national laws. What one finds abhorrent another believes morally justified. Marantz takes a step back: the loose-knit online contingent of folks that comprise the alt-right felt castrated by multiculturalism, betrayed by the notion of a white America. (Their memories are short, as they conveniently overlook how whites came to populate this land.)
Upon being handed powerful tools to broadcast their voice, they never stopped to question if it was a good idea. They just hit "post." As Marantz notes in his TED Talk below, one leading figure has nothing more than a phone, a laptop, an iPad, and a belligerent, racist attitude. From those pieces he constructs a six-figure "career" from his living room.
Inside the bizarre world of internet trolls and propagandists | Andrew Marantz
If you think there's a coherent plan behind this overrepresented minority broadcasting on Twitter, Facebook, Periscope, and YouTube, rethink that assumption. Marantz begins his book at the DeploraBall, an unofficial inaugural celebration organized by alt-right conspiracy theorists and internet trolls in 2017. Commenting on the movement from the bird's-eye view, Marantz sums up the motivation behind the political momentum that placed Donald Trump into office.
"They took for granted that the old institutions ought to be burned to the ground, and they used the tools at their disposal—new media, especially social media—to light as many matches as possible. As for what kind of society might emerge from the ashes, they had no coherent vision and showed little interest in developing one. They were not, like William Buckley, standing athwart history, yelling 'Stop'; they were holding liberal democracy in a headlock, yelling 'Stop or I'll shoot!'"
Marantz does his best to sympathize with the characters he writes about, a commendable feat in itself. He approaches reporting in what is now considered an old-school style, built on credibility: he didn't accept gifts (including Uber rides or coffee), let his subjects speak in their own voices, and asked pointed questions while hearing out their grievances. Indeed, the strongest parts of the book, and ironically the most frustrating, occur when you're in the living room of one of these aspirational provocateurs as they play with their children.
Frustrating because, as with Twitter fights and trolling in general, you're reminded that all of us share one nation. We have the capability to be so much better than this. Yet debatable policy disagreements are regularly broadcast as existential threats for clickbait to drive ad revenue. The real targets of our collective anger, corporate leaders and the politicians they purchase, own much of the blame for this polarization. It just seems impossible to remember that fact while scrolling on a six-inch screen.
That said, Marantz does not give a free pass to the white nationalist movement. Being Jewish, he recognized the personal danger he placed himself in. Marantz also considers the role of the modern journalist. He might pay for breakfast to avoid conflicts of interest, but that doesn't make supporting leaders of this movement easy. Some ideologies simply do not bend toward justice.
"To treat these as legitimate topics of debate is to be not neutral but complicit. Sometimes, even for a journalist, there is no such thing as not picking a side."
Andrew Marantz (via Twitter)
King's quote is a recurring theme throughout the book; so is the Overton window. Named after Joseph P. Overton, a former senior VP of the Mackinac Center for Public Policy, this window is the range of policies a politician can discuss without appearing too extreme. The window shifts as we become desensitized to more extreme ideas. What seemed impossible a decade ago becomes commonplace: open discussion of racist and xenophobic policies that would once have seemed unthinkable.
Don't mistake this window for critical thinking. If, at times, it feels like social media is ruled by emotionally incompetent and intellectually stymied adults who never took the opportunity to mature from grade school, you're not far off. Sometimes all Marantz has to do is stick a microphone in front of their mouths and let them speak. It's maddening, listening to them shrug off thoughtfulness and honest debate. Defaulting to "free speech," which they all do, is to forget (or be ignorant of) the fact that with free speech comes responsibility.
We cannot troll our way out of this mess. As Marantz concludes, we need a "new moral vocabulary" to address the scourge of anti-Semitic, racist, and xenophobic garbage being lightly disguised (or not at all) in our national discourse. I purposely avoided naming the figures in his book because they already receive too much oxygen. One high point is that many have been de-platformed in recent years, cutting off their precious revenue streams.
No book has captured the alt-right as powerfully and honestly as Antisocial. It is a reminder of how badly we need to redefine the Overton window with a new vocabulary. Teaching everyone this language will be one of our greatest challenges in this new decade.
Study identifies predictors of which students are likely to do well in education.
- Researchers looked at data from 5,000 students and found two factors that were strongly linked to academic success.
- Students with genetic predisposition towards academics were much more likely to go to University.
- Equally important was having well-educated parents with wealth.
Will your child be a good student? A new study claims it's possible to predict how successful kids will be in academics at the moment of their birth.
An international research team discovered that genetic differences and the socioeconomic status of the parents were key predictors of future success in school. Interestingly, just having good genes is not the most important factor. Having parents with their own great education and wealth has more of an impact.
The study, which looked at data from 5,000 children born in the UK between 1994 and 1996, found that among children who had a genetic predisposition for education but came from poorer backgrounds, about 47% made it to University. Tellingly, compare that to the 62% of kids who made it to University despite a low genetic predisposition for academics, because they had parents with money.
The kids who did the best, with 77% going to University, had both rich, well-educated parents, and were blessed with good genes for academics.
On the flip side, among the children with less genetic propensity and whose families were on the low end of prosperity, only 21% made it to University.
For their analysis, the researchers looked at test results at key stages of the children's education, data about their parents' work and education, as well as genome-wide polygenic scores to gauge the effects of inherited genetic differences.
The study's lead author, Professor Sophie von Stumm from U.K.'s University of York, said their study captured "the effects of both nature and nurture".
She noted that their research also indicated that growing up with privilege can have a "protective effect", adding: "Having a genetic makeup that makes you more inclined to education does make a child from a disadvantaged background more likely to go to university, but not as likely as a child with a lower genetic propensity from a more advantaged background."
How can we best help students? Cultivate their love for learning.
Professor von Stumm also pointed out that ultimately the study showed how unequal access to education can be among children. "Where you come from has a huge impact on how well you do in school," she said.
The researchers, who hailed from University College London and King's College London in the UK, as well as the University of New Mexico in the US, hope to use the study to identify the children most at risk of getting a poor education.
Transformation of big companies is essential if we want to create a system that is fairer, more sustainable, and less unequal.
- Large companies can and should ask themselves "Where can we collaborate? Where can we pre-compete?"
- Both collaboration and competition can help big business be a force for good.
- B Corp certifications can lead to companies being more purposeful and transparent.