Six ways machine learning threatens social justice
Machine learning is a powerful and imperfect tool that should not go unmonitored.
- When you harness the power and potential of machine learning, there are also some drastic downsides that you've got to manage.
- Deploying machine learning, you face the risk that it will be discriminatory, biased, inequitable, exploitative, or opaque.
- In this article, I cover six ways that machine learning threatens social justice and reach an incisive conclusion: The remedy is to take on machine learning standardization as a form of social activism.
When you use machine learning, you aren't just optimizing models and streamlining business. You're governing. In essence, the models embody policies that control access to opportunities and resources for many people. They drive consequential decisions as to whom to investigate, incarcerate, set up on a date, or medicate – or to whom to grant a loan, insurance coverage, housing, or a job.
For the same reason that machine learning is valuable—that it drives operational decisions more effectively—it also wields power in the impact it has on millions of individuals' lives. Threats to social justice arise when that impact is detrimental, when models systematically limit the opportunities of underprivileged or protected groups.
Here are six ways machine learning threatens social justice:
1) Blatantly discriminatory models are predictive models that base decisions partly or entirely on a protected class. Protected classes include race, religion, national origin, gender, gender identity, sexual orientation, pregnancy, and disability status. By taking one of these characteristics as an input, the model's outputs – and the decisions driven by the model – are based at least in part on membership in a protected class. Although deployed models rarely do so directly, there is precedent and support for allowing it.
This would mean that a model could explicitly penalize, for example, black defendants for being black. So, imagine sitting across from a person being evaluated for a job, a loan, or even parole. When they ask how the decision process works, you inform them, "For one thing, our algorithm penalized your score by seven points because you're black." This may sound shocking and sensationalistic, but I'm only describing, literally, what the model would do, mechanically, if race were permitted as a model input.
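To make those mechanics concrete, here is a minimal sketch of a hypothetical linear risk model in which a protected attribute is permitted as an input. Every feature name and weight is invented for illustration; this is not any real system's scoring formula.

```python
# Hypothetical linear risk model, for illustration only: allowing a
# protected attribute as an input lets the model apply a fixed penalty
# for membership in that class. All names and weights are invented.
def risk_score(features, weights):
    return sum(weights[f] * features[f] for f in weights)

# A 7-point penalty tied directly to protected-class membership.
weights = {"prior_convictions": 4, "protected_class": 7}

person_a = {"prior_convictions": 1, "protected_class": 1}
person_b = {"prior_convictions": 1, "protected_class": 0}

# Identical histories; the scores differ only by class membership.
print(risk_score(person_a, weights))  # 11
print(risk_score(person_b, weights))  # 4
```

The seven-point gap between two otherwise identical people is exactly the mechanical penalty described above.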
2) Machine bias. Even when protected classes are not provided as a direct model input, model predictions can still be inequitable. This happens because other variables end up serving as proxies for protected classes. The issue is subtle, because models that are fair in one sense can turn out to be unfair in another.
For example, some crime-risk models succeed in flagging both black and white defendants with equal precision – each flag tells the same probabilistic story, regardless of race – and yet the models falsely flag black defendants more often than white ones. COMPAS, a crime-risk model sold to law enforcement across the US, falsely flags white defendants at a rate of 23.5% and black defendants at 44.9%. In other words, black defendants who don't deserve it are erroneously flagged almost twice as often as white defendants who don't deserve it.
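The tension between these two fairness notions can be checked with simple arithmetic on a confusion matrix. In the sketch below, the counts are fabricated so that precision is equal across groups while the false positive rates mirror the reported 23.5% and 44.9%; they are not the actual COMPAS figures.

```python
# Fabricated confusion-matrix counts (NOT real COMPAS data), chosen so
# that precision is equal across groups while false positive rates diverge.
def precision(tp, fp):
    # Of those flagged, what fraction truly reoffended?
    return tp / (tp + fp)

def false_positive_rate(fp, tn):
    # Of those who did not reoffend, what fraction was flagged anyway?
    return fp / (fp + tn)

groups = {
    "white": {"tp": 50, "fp": 50, "tn": 163},
    "black": {"tp": 80, "fp": 80, "tn": 98},
}

for name, c in groups.items():
    p = precision(c["tp"], c["fp"])
    fpr = false_positive_rate(c["fp"], c["tn"])
    print(f"{name}: precision={p:.3f}, false positive rate={fpr:.3f}")
```

Both groups' flags carry the same probabilistic meaning (equal precision), yet one group is wrongly flagged nearly twice as often – when base rates differ between groups, the two fairness criteria generally cannot be satisfied at once.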
3) Inferring sensitive attributes—predicting pregnancy and beyond. Machine learning predicts sensitive information about individuals, such as sexual orientation, whether they're pregnant, whether they'll quit their job, and whether they're going to die. Researchers have shown that it is possible to predict race based on Facebook likes. These predictive models deliver dynamite.
In a particularly extraordinary case, officials in China use facial recognition to identify and track the Uighurs, a minority ethnic group systematically oppressed by the government. This is the first known case of a government using machine learning to profile by ethnicity. One Chinese start-up valued at more than $1 billion said its software could recognize "sensitive groups of people." Its website said, "If originally one Uighur lives in a neighborhood, and within 20 days six Uighurs appear, it immediately sends alarms" to law enforcement.
4) A lack of transparency. A computer can keep you in jail, or deny you a job, a loan, insurance coverage, or housing – and yet you cannot face your accuser. The predictive models generated by machine learning to drive these weighty decisions are generally kept locked up as a secret, unavailable for audit, inspection, or interrogation. Such models, inaccessible to the public, perpetuate a lack of due process and a lack of accountability.
Two ethical standards oppose this shrouding of electronically assisted decisions: 1) model transparency, the standard that predictive models be accessible, inspectable, and understandable; and 2) the right to explanation, the standard that consequential decisions driven or informed by a predictive model always be held to that standard of transparency. Meeting those standards would mean, for example, that a defendant be told which factors contributed to their crime-risk score – which aspects of their background, circumstances, or past behavior caused the defendant to be penalized. This would give the defendant the opportunity to respond accordingly, establishing context, explanations, or perspective on these factors.
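For a simple scoring model, a right to explanation is straightforward to honor: in a linear model, each factor's contribution is just its weight times its value, so the penalizing factors can be enumerated outright. The features and weights below are hypothetical, chosen only to show the shape of such an explanation.

```python
# Hypothetical linear crime-risk score: each feature's contribution is
# weight * value, so an explanation can list exactly which factors
# penalized (or helped) the individual. All names and weights are invented.
weights = {"prior_arrests": 2.0, "age_under_25": 3.0, "stable_employment": -1.5}
defendant = {"prior_arrests": 2, "age_under_25": 1, "stable_employment": 1}

contributions = {f: weights[f] * defendant[f] for f in weights}
score = sum(contributions.values())

# List contributing factors, largest penalty first.
for feature, points in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {points:+.1f} points")
print(f"total risk score: {score:.1f}")
```

An explanation in this form tells the defendant that prior arrests and age drove the score up while stable employment pulled it down, giving them something concrete to contextualize or contest.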
5) Predatory micro-targeting. Powerlessness begets powerlessness – and that cycle can magnify for consumers when machine learning increases the efficiency of activities designed to maximize profit for companies. Improving the micro-targeting of marketing and the predictive pricing of insurance and credit can magnify the cycle of poverty. For example, highly-targeted ads are more adept than ever at exploiting vulnerable consumers and separating them from their money.
And insurance pricing can lead to the same result. With insurance, the name of the game is to charge more for those at higher risk. Left unchecked, this process can quickly slip into predatory pricing. For example, a churn model may find that elderly policyholders don't tend to shop around and defect to better offers, so there's less of an incentive to keep their policy premiums in check. And pricing premiums based on other life factors also contributes to a cycle of poverty. For example, individuals with poor credit ratings are charged more for car insurance. In fact, a low credit score can increase your premium more than an at-fault car accident.
6) The coded gaze. If a group of people is underrepresented in the data from which the machine learns, the resulting model won't work as well for members of that group. This results in exclusionary experiences and discriminatory practices. This phenomenon has been documented in both facial image processing and speech recognition.
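One concrete safeguard is a per-group audit of error rates on held-out data. The sketch below uses made-up predictions and labels purely to show the shape of such an audit; real audits would run the deployed model on an annotated test set.

```python
from collections import defaultdict

# Made-up (group, predicted, actual) records standing in for a held-out
# test set with group annotations; real audits would use model output.
records = [
    ("majority", 1, 1), ("majority", 0, 0), ("majority", 1, 1),
    ("majority", 0, 0), ("majority", 1, 1), ("majority", 0, 1),
    ("minority", 1, 0), ("minority", 0, 1), ("minority", 1, 1),
    ("minority", 0, 1),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, predicted, actual in records:
    totals[group] += 1
    errors[group] += int(predicted != actual)

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.2f} over {totals[group]} examples")
```

A gap like the one in these synthetic numbers – a far higher error rate for the underrepresented group – is the quantitative signature of the coded gaze, and catching it requires deliberately collecting group labels for evaluation.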
Recourse: Establish machine learning standards as a form of social activism
To address these problems, take on machine learning standardization as a form of social activism. We must establish standards that go beyond nice-sounding yet vague platitudes such as "be fair", "avoid bias", and "ensure accountability". Without being precisely defined, these catchphrases are subjective and do little to guide concrete action. Unfortunately, such broad language is fairly common among the principles released by many companies. By sticking to vague language, companies protect their public image more than they protect the public.
People involved in initiatives to deploy machine learning have a powerful, influential voice. This relatively small group of people molds and sets the trajectory for systems that automatically dictate the rights and resources that great numbers of consumers and citizens can access.
Famed machine learning leader and educator Andrew Ng drove it home: "AI is a superpower that enables a small team to affect a huge number of people's lives... Make sure the work you do leaves society better off."
And Allan Sammy, Director, Data Science and Audit Analytics at Canada Post, clarified the level of responsibility: "A decision made by an organization's analytic model is a decision made by that entity's senior management team."
Implementing ethical data science is as important as ensuring a self-driving car knows when to put on the brakes.
Establishing well-formed ethical standards for machine learning will be an intensive, ongoing process. For more, watch this short video, in which I provide some specifics meant to kick-start the process.
Eric Siegel, Ph.D., is a leading consultant and former Columbia University professor who makes machine learning understandable and captivating. He is the founder of the long-running Predictive Analytics World and the Deep Learning World conference series and the instructor of the end-to-end, business-oriented Coursera specialization Machine Learning for Everyone. Stay in touch with Eric on Twitter @predictanalytic.