Machine learning is a powerful and imperfect tool that should not go unmonitored.
- When you harness the power and potential of machine learning, you must also manage its serious downsides.
- Deploying machine learning, you face the risk that it will be discriminatory, biased, inequitable, exploitative, or opaque.
- In this article, I cover six ways that machine learning threatens social justice and reach an incisive conclusion: The remedy is to take on machine learning standardization as a form of social activism.
Here are six ways machine learning threatens social justice
Credit: metamorworks via Shutterstock<p><strong>1) Blatantly discriminatory models</strong> are predictive models that base decisions partly or entirely on a protected class. Protected classes include race, religion, national origin, gender, gender identity, sexual orientation, pregnancy, and disability status. By taking one of these characteristics as an input, the model's outputs – and the decisions driven by the model – are based at least in part on membership in a protected class. Although models rarely do so directly, there is <a href="https://www.youtube.com/watch?v=eSlzy1x6Fy0" target="_blank">precedent</a> and <a href="https://www.youtube.com/watch?v=wfpNN8ASIq4" target="_blank">support</a> for doing so.</p><p>This would mean that a model could explicitly hinder, for example, black defendants for being black. So, imagine sitting across from a person being evaluated for a job, a loan, or even parole. When they ask you how the decision process works, you inform them, "For one thing, our algorithm penalized your score by seven points because you're black." This may sound shocking and sensationalistic, but I'm only literally describing what the model would do, mechanically, if race were permitted as a model input. </p><p><strong>2) Machine bias</strong>. Even when protected classes are not provided as a direct model input, we find, in some cases, that model predictions are still inequitable. This is because other variables end up serving as proxies to protected classes. This is <a href="https://coursera.org/share/51350b8fb12a5937bbddc0e53a4f207d" target="_blank" rel="noopener noreferrer">a bit complicated</a>, since it turns out that models that are fair in one sense are unfair in another. 
</p><p>For example, some crime risk models succeed in flagging both black and white defendants with equal precision – each flag tells the same probabilistic story, regardless of race – and yet the models falsely flag black defendants more often than white ones. A crime-risk model called COMPAS, which is sold to law enforcement across the US, falsely flags white defendants at a rate of 23.5% and black defendants at a rate of 44.9%. In other words, black defendants who don't deserve it are <a href="https://coursera.org/share/df6e6ba7108980bb7eeae0ba22123ac1" target="_blank" rel="noopener noreferrer">erroneously flagged almost twice as often</a> as white defendants who don't deserve it.</p><p><strong>3) Inferring sensitive attributes</strong>—predicting pregnancy and beyond. Machine learning predicts sensitive information about individuals, such as sexual orientation, whether they're pregnant, whether they'll quit their job, and whether they're going to die. Researchers have shown that it is possible to <a href="https://youtu.be/aNwvXhcq9hk" target="_blank" rel="noopener noreferrer">predict race based on Facebook likes</a>. These predictive models deliver dynamite.</p><p>In a particularly extraordinary case, officials in China use facial recognition to <a href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html" target="_blank" rel="noopener noreferrer">identify and track the Uighurs, a minority ethnic group</a> systematically oppressed by the government. This is the first known case of a government using machine learning to profile by ethnicity. One Chinese start-up valued at more than $1 billion said its software could recognize "sensitive groups of people." Its website said, "If originally one Uighur lives in a neighborhood, and within 20 days six Uighurs appear, it immediately sends alarms" to law enforcement.</p>
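The tension described under point 2 can be made concrete with a toy calculation. The sketch below uses invented numbers (not actual COMPAS data) to show how a model can be equally precise for two groups yet falsely flag the group with the higher base rate far more often:

```python
def precision_and_fpr(y_true, y_pred):
    """Return (precision, false positive rate) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return precision, fpr

def make_group(n_pos, n_neg, tp, fp):
    """Build hypothetical labels/flags: tp of n_pos positives flagged,
    fp of n_neg negatives flagged."""
    y_true = [1] * n_pos + [0] * n_neg
    y_pred = [1] * tp + [0] * (n_pos - tp) + [1] * fp + [0] * (n_neg - fp)
    return y_true, y_pred

# Invented numbers: group A has a higher base rate than group B.
group_a = make_group(n_pos=60, n_neg=40, tp=48, fp=16)
group_b = make_group(n_pos=30, n_neg=70, tp=15, fp=5)

prec_a, fpr_a = precision_and_fpr(*group_a)  # precision 0.75, FPR 0.40
prec_b, fpr_b = precision_and_fpr(*group_b)  # precision 0.75, FPR ~0.071
```

Both groups get precision 0.75, so each flag "tells the same probabilistic story." Yet group A's innocent members are flagged at several times the rate of group B's, which is the same structural pattern ProPublica reported for COMPAS.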
Recourse: Establish machine learning standards as a form of social activism<p>To address these problems, take on machine learning standardization as a form of social activism. We must establish standards that go beyond nice-sounding yet vague platitudes such as "be fair", "avoid bias", and "ensure accountability". Without being precisely defined, these catchphrases are subjective and do little to guide concrete action. Unfortunately, such broad language is fairly common among the principles released by many companies. With such vague language, companies protect their public image more than they protect the public.<br></p><p>People involved in initiatives to deploy machine learning have a powerful, influential voice. These relatively small numbers of people mold and set the trajectory for systems that automatically dictate the rights and resources that great numbers of consumers and citizens gain access to.</p><p>Famed machine learning leader and educator Andrew Ng drove it home: "AI is a superpower that enables a small team to affect a huge number of people's lives... Make sure the work you do leaves society better off."</p><p>And Allan Sammy, Director, Data Science and Audit Analytics at Canada Post, clarified the level of responsibility: "A decision made by an organization's analytic model is a decision made by that entity's senior management team."</p><p>Implementing ethical data science is as important as ensuring a self-driving car knows when to put on the brakes.</p><p>Establishing well-formed ethical standards for machine learning will be an intensive, ongoing process. For more, <a href="https://youtu.be/ToSj0ZkJHBQ" target="_blank">watch this short video</a>, in which I provide some specifics meant to kick-start the process.</p>
In spite of a government mandate, females are often treated as afterthoughts in scientific research.
- A new study finds that though more females are included in experiments, sex-specific data often goes unanalyzed.
- Only about a third of studies analyzed published participant breakdown by sex.
- Some researchers say considering females more fully as research subjects is logistically too challenging.
In 2016, the National Institutes of Health (NIH) issued a directive that scientists receiving NIH funding must consider sex as a biological variable in pre-clinical research on vertebrate animals and human cells and tissues. According to a new study published in eLife that looked at over 700 journal articles, the share of pre-clinical studies including subjects of both sexes jumped from 28 percent in 2009 to 49 percent in 2019. However, few studies actually analyze sex as a biological influence that may affect outcomes, and data from women participants continues to be simply combined with data from men.
Study co-author Nicole C. Woitowich of Northwestern University's Feinberg School of Medicine tells Inside Higher Ed, "In the last 10 years, there has been a major increase in sex inclusion, but it's still not where it needs to be."
What's missing in current research
Image source: Hush Naidoo/Unsplash
Woitowich and others see two particularly problematic aspects to the continuing disregard of sex as a meaningful biological research variable.
First, female-specific data is rarely considered in study conclusions, despite the fact that it may have implications for women's health. According to L. Syd M Johnson of SUNY Upstate Medical University, who was not involved with the study, "This becomes highly problematic both scientifically and ethically, because women, children, and the elderly also need medical care, and they shouldn't be treated as if they have adult, male bodies. When they are excluded from research, and from the reported results, treatment for them becomes, effectively, off-label."
Second, Woitowich tells Inside Higher Ed it's "troublesome to me as a scientist [that] a little under one-third [of studies] did not even report the number of males and females used as subjects." This makes it impossible for scientists to replicate the results. "If I don't have all the information," Woitowich says, "I'm left guessing."
On top of that, Woitowich laments that too much of the female-focused research that is undertaken is what's been called "bikini science," research surrounding issues related to female reproductive organs.
Why is this happening?
Image source: Image Point Fr/Shutterstock
"Many scientists, I don't even know if this is on their radar," says Woitowich. She proposes, therefore, that in the short term it may be the research gatekeepers — the funding entities, journal editors, and peer reviewers — who will have to step up and demand more inclusive science. She expresses surprise that they aren't already doing more to enforce the NIH's mandate. In the longer term, training for medical students should include a fuller awareness of the role that can be played by sex differences in research.
In a 2014 letter to the journal Nature, Janine A. Clayton and Francis S. Collins of the NIH admitted the problem extends even to female researchers. Noting that roughly half of the scientists doing NIH-funded research are women, they wrote: "There has not been a corresponding revolution in experimental design and analyses in cell and animal research — despite multiple calls to action."
Another possible explanation
Image source: Ousa Chea/Unsplash
There are some researchers who feel that greater inclusion of women and their data in studies would unnecessarily complicate the already difficult work of designing research and getting it funded.
In a 2015 letter to the journal Science, a group of researchers wrote that sex considerations added an additional investigational layer to research, one that was often irrelevant to the purpose of a research project. They asserted that, "nonhypothesis-driven documentation of sex differences in basic laboratory research is more likely to introduce conceptual and empirical problems in research on sex and gender than bring new clarity to differences in men's and women's health outcomes."
The writers also suggested that sex may be less of a biological variable than gender and weight. If, for example, women are more likely to be taking multiple pharmaceuticals than men and tend to be lighter in weight, these factors may be more influential on experiment outcomes than sex. Reluctant to commit to considering sex as a variable, they suggested instead two generalized studies to determine if it should be, writing, "we see a stronger empirical basis for directed funding initiatives in two areas: scientific validation of preclinical models for studying human sex differences, and human studies of the interaction of sex- and gender-related variables in producing health outcomes that vary by sex."
Image source: Valeriy Lebedev/Shutterstock
A 2019 analysis by Harvard University's GenderSci Lab found that basic science researchers, "repeated again and again that their experiments were in large part constrained by practicalities of various sorts. These practicalities were often used to explain why they don't or can't account for sex in their research," says the lab's Annika Gompers. Among the practicalities noted were the acquisition of study materials such as cells from deceased patients, test animals, fat from cosmetic surgery patients, and so on. Gompers said researchers often simply work with what they can get.
She adds, "While my participants recognize that considering sex can be important for the generalizability of results, in practice it is often impractical if not impossible to incorporate sex as a variable into biomedical research. Such a finding is consistent with scholars who have long looked at science as practice and observed how practicalities — as mundane as the availability of materials — are often central to the reduction of complexity into 'doable problems.'"
As far as sample composition goes, the choice of subjects may have to do with researchers wanting to avoid the constraints and costs of the safety regulations that accompany studies of pregnant women, women of child-bearing age who may become pregnant, children, and the elderly.
Finally, having enough females in a sample to draw valid conclusions might seem to require larger participant cohorts. But Woitowich's co-author, Smith College's Anneliese Beery, says that fears of doubled sample sizes are overblown, asserting that such increases in participant numbers are "not actually necessary."
Avoiding wasted research opportunities
One of the authors of that Science letter was Harvard's Sarah S. Richardson, who suggests a sort of middle path, though it does give researchers license to ignore the NIH requirement as they see fit. Richardson proposes something she calls "sex contextualism," which is the "simple view that the definition of sex and sex-related variables, and whether they are relevant in biological research, depends on the research context."
Science journalist Angela Saini agrees, saying, "While it's valuable to include a broad spectrum of people in studies, it doesn't necessarily follow that the sex differences will be significant or important. So disaggregating for sex, while useful sometimes, doesn't always matter."
The above points, however, don't seem to acknowledge the potential for findings important specifically to female health, and seem more concerned with protecting the efficacy of studies that benefit males.
In any event, Woitowich finds that things are progressing more slowly than the NIH and others may have hoped. While Beery says it's "exciting to see increased inclusion of female subjects across so many different fields of biology," there are potentially meaningful scientific insights being lost. The disinclination toward fully collecting and analyzing female data for research experiments "means we are still missing out on the opportunity to understand when there are sex differences and losing statistical power when sex differences go unnoticed."
Do you know the implicit biases you have? Here are some ways to find them out.
- A study finds that even becoming aware of your own implicit bias can help you overcome it.
- We all have biases. Some of them are helpful — others not so much.
How we can curb the effects of implicit biases
Image source: Radachynskyi Serhii / Shutterstock / Big Think<p>New research, <a href="https://go.redirectingat.com/?id=66960X1516588&xs=1&url=https%3A%2F%2Fwww.nature.com%2Farticles%2Fs41562-019-0686-3" target="_blank">published</a> in <em>Nature Human Behaviour </em>on August 26, suggests that gender bias, which continues to prevent women from advancing in science, stems in part from a hidden underbelly: human blind spots. During the study, French researchers discovered that more women were promoted after the scientists in charge of awarding research positions became consciously aware of the impact of their implicit bias. </p><p>When the bias was no longer being highlighted, its discriminatory effect re-asserted itself, with award grants regressing to their traditional, pro-male pattern. Other research suggests that diversity training <a href="https://www.washingtonpost.com/news/on-leadership/wp/2016/07/01/to-improve-diversity-dont-make-people-go-to-diversity-training-really-2/" target="_blank">doesn't really help</a> and may even exacerbate the problem it seeks to address. </p><p>We can glean a new approach, though — one that could result in better outcomes — from the new research.</p>
About the study
Image source: Tartila/Shutterstock/Big Think
How do I know if implicit bias is affecting my judgement?
Image source: AlexandreNunes / Shutterstock / Big Think<p>While the study looked at gender bias, of course, it's not the only variety to be concerned about; others pervade our culture: race bias, ethnicity bias, anti-LGBTQ bias, age bias, anti-Muslim bias, and so on. There are a couple of online methods available for sussing out our own. Note that if the researchers are correct, then just making yourself aware of your implicit biases can help you combat them.</p><p>The IAT mentioned above is one widely used way to identify your own bias issues. <a href="https://implicit.harvard.edu/implicit/index.jsp" target="_blank">Project Implicit</a> — from psychologists at Harvard, the University of Virginia, and the University of Washington — offers a <a href="https://implicit.harvard.edu/implicit/takeatest.html" target="_blank">self-test</a> you can take. Be aware, though, that the IAT requires multiple tests to produce <a href="https://www.psychologicalscience.org/observer/the-iat-how-and-when-it-works" target="_blank">a meaningful result</a>.</p><p>If you're willing to invest a little time, there's also the <a href="http://www.lookdifferent.org/what-can-i-do/bias-cleanse" target="_blank">"bias cleanse"</a> offered by MTV in partnership with the <a href="http://kirwaninstitute.osu.edu/" target="_blank">Kirwan Institute for the Study of Race and Ethnicity</a>. It's a seven-day program aimed at helping you sort out implicit gender, race, or anti-LGBTQ biases you may be harboring. Each day you receive three eye-opening email thought exercises, one for each type of bias. </p><p>Side note: Did you know that more people die in female-named hurricanes because they're typically perceived as less threatening? We didn't.</p>
Step 1<p>It's a well-worn bromide that simply acknowledging you have a problem is the first step to solving it, but the new study provides supporting evidence that this is especially true when dealing with implicit biases — a pernicious, stubborn problem in our society. Our brains are clever beasties, silently putting together shortcuts that reduce our cognitive load. We just need to be smarter about seeing and consciously assessing them if we can ever hope to be the people that we hope to be. That may mean, on occasion, being humble enough to receive feedback in the form of callouts. </p>
Stereotyping isn't about "bad people doing bad things." It's about our subconscious biases, and how they sneak into organizational structures.
Psychologist Valerie Purdie Greenaway is the first African American to be tenured in the sciences at Columbia University, in its entire 263-year history. Despite her celebrated position—and, in fact, perhaps because of it—she still struggles with perception, subtle stereotyping, and the enormous stakes of being one of few women of color in a leadership role. Here, Valerie Purdie Greenaway speaks with diversity and inclusion expert Jennifer Brown about being "the only" in a workplace, whether that is along lines of gender, race, culture, or sexual orientation, and how organizations and individuals can do more to recognize and address their biases. That also means letting go of the idea that stereotyping is a malevolent case of "bad people doing bad things." What does discrimination really look like day to day? Most of it is subconscious, subtle, and deeply embedded in the structure of organizations, which can have an impact on performance, mentorship, and staff turnover. Do you recognize any of your own behavior in this discussion? This live conversation was part of a recent New York panel on diversity, inclusion, and collaboration at work.
AI is leaving human needs and democracy behind in its race to accomplish its current profit-generating goals.
It doesn't have to be this way, but for now it is: AI's primary purpose is to maximize profits. For all of the predictions of its benefits to society, right now, that's just window-dressing—a pie-in-the-sky vision of a world we don't actually inhabit. While some like Elon Musk issue dire warnings against finding ourselves beneath the silicon thumbs of robot overlords, the fact is we're already under threat. As long as AI is dedicated to economic goals and not societal concerns, its tunnel vision is a problem. And as so often seems to be the case these days, the benefits will go to the already wealthy and powerful.