Six ways machine learning threatens social justice
Machine learning is a powerful and imperfect tool that should not go unmonitored.
- When you harness the power and potential of machine learning, there are also some drastic downsides that you've got to manage.
- Deploying machine learning, you face the risk that it will be discriminatory, biased, inequitable, exploitative, or opaque.
- In this article, I cover six ways that machine learning threatens social justice and reach an incisive conclusion: The remedy is to take on machine learning standardization as a form of social activism.
When you use machine learning, you aren't just optimizing models and streamlining business. You're governing. In essence, the models embody policies that control access to opportunities and resources for many people. They drive consequential decisions as to whom to investigate, incarcerate, set up on a date, or medicate – or to whom to grant a loan, insurance coverage, housing, or a job.
For the same reason that machine learning is valuable—that it drives operational decisions more effectively—it also wields power in the impact it has on millions of individuals' lives. Threats to social justice arise when that impact is detrimental, when models systematically limit the opportunities of underprivileged or protected groups.
Here are six ways machine learning threatens social justice:
1) Blatantly discriminatory models are predictive models that base decisions partly or entirely on a protected class. Protected classes include race, religion, national origin, gender, gender identity, sexual orientation, pregnancy, and disability status. By taking one of these characteristics as an input, the model's outputs – and the decisions driven by the model – are based at least in part on membership in a protected class. Although models rarely do so directly, there is precedent and support for doing so.
This would mean that a model could explicitly hinder, for example, black defendants for being black. So, imagine sitting across from a person being evaluated for a job, a loan, or even parole. When they ask you how the decision process works, you inform them, "For one thing, our algorithm penalized your score by seven points because you're black." This may sound shocking and sensationalistic, but I'm only literally describing what the model would do, mechanically, if race were permitted as a model input.
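To make the mechanics concrete, here is a minimal sketch of such a model, with features and weights invented purely for illustration (they are not any real system's coefficients): a linear score in which membership in a protected class carries its own coefficient, producing exactly the kind of explicit penalty described above.

```python
# A deliberately simplified, hypothetical scoring model. The features and
# weights below are invented for illustration; they are not any real
# system's coefficients.
weights = {
    "prior_offenses": -3.0,   # each prior offense lowers the score
    "years_employed": 1.5,    # steady employment raises the score
    "is_black": -7.0,         # a protected class used directly as an input
}

def score(person: dict) -> float:
    """Linear score: sum of each feature value times its weight."""
    return sum(weights[name] * person.get(name, 0.0) for name in weights)

person = {"prior_offenses": 1, "years_employed": 4, "is_black": 1}
print(score(person))  # -4.0; seven of those points come from race alone
```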
2) Machine bias. Even when protected classes are not provided as a direct model input, we find, in some cases, that model predictions are still inequitable. This is because other variables end up serving as proxies to protected classes. This gets subtle, because a model that is fair by one measure can be unfair by another.
For example, some crime risk models succeed in flagging both black and white defendants with equal precision – each flag tells the same probabilistic story, regardless of race – and yet the models falsely flag black defendants more often than white ones. A crime risk model called COMPAS, which is sold to law enforcement across the US, falsely flags white defendants at a rate of 23.5% and black defendants at a rate of 44.9%. In other words, black defendants who don't deserve it are erroneously flagged almost twice as often as white defendants who don't deserve it.
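The tension between these two fairness measures can be verified with simple arithmetic. The sketch below uses invented confusion-matrix counts (not actual COMPAS data) for two groups with different base rates of reoffense; precision comes out equal while the false positive rate does not.

```python
# Precision and false positive rate from confusion-matrix counts.
# The counts below are invented to illustrate the trade-off.
def rates(tp, fp, fn, tn):
    precision = tp / (tp + fp)                   # of those flagged, fraction who reoffended
    fpr = fp / (fp + tn)                         # of non-reoffenders, fraction wrongly flagged
    base_rate = (tp + fn) / (tp + fp + fn + tn)  # group's overall reoffense rate
    return precision, fpr, base_rate

print(rates(tp=40, fp=10, fn=10, tn=40))  # group A: precision 0.80, FPR 0.20, base rate 0.50
print(rates(tp=16, fp=4,  fn=4,  tn=76))  # group B: precision 0.80, FPR 0.05, base rate 0.20
```

When base rates differ between groups, a model calibrated to equal precision will generally produce unequal false positive rates; that arithmetic tension is at the heart of the COMPAS debate.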
3) Inferring sensitive attributes—predicting pregnancy and beyond. Machine learning predicts sensitive information about individuals, such as sexual orientation, whether they're pregnant, whether they'll quit their job, and whether they're going to die. Researchers have shown that it is possible to predict race based on Facebook likes. These predictive models deliver dynamite.
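A hedged sketch of the mechanism, on entirely synthetic data: when behavioral features (here standing in for "likes") correlate with a sensitive attribute, an off-the-shelf classifier can recover that attribute even though it was never collected directly. The data generation and feature names below are invented.

```python
# Synthetic illustration: a sensitive attribute leaks through proxy features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
sensitive = rng.integers(0, 2, n)                      # hidden attribute, never "asked for"
likes = rng.random((n, 5)) + 0.4 * sensitive[:, None]  # behavioral features correlate with it

model = LogisticRegression().fit(likes, sensitive)
print(model.score(likes, sensitive))  # accuracy well above the 0.5 chance level
```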
In a particularly extraordinary case, officials in China use facial recognition to identify and track the Uighurs, a minority ethnic group systematically oppressed by the government. This is the first known case of a government using machine learning to profile by ethnicity. One Chinese start-up valued at more than $1 billion said its software could recognize "sensitive groups of people." Its website said, "If originally one Uighur lives in a neighborhood, and within 20 days six Uighurs appear, it immediately sends alarms" to law enforcement.
4) A lack of transparency. A computer can keep you in jail, or deny you a job, a loan, insurance coverage, or housing – and yet you cannot face your accuser. The predictive models generated by machine learning to drive these weighty decisions are generally kept locked up as a secret, unavailable for audit, inspection, or interrogation. Such models, inaccessible to the public, deny due process and preclude accountability.
Two ethical standards oppose this shrouding of electronically assisted decisions: 1) model transparency, the standard that predictive models be accessible, inspectable, and understandable; and 2) the right to explanation, the standard that consequential decisions driven or informed by a predictive model always be held to that standard of transparency. Meeting those standards would mean, for example, that a defendant be told which factors contributed to their crime risk score: which aspects of their background, circumstances, or past behavior caused the defendant to be penalized. This would give the defendant the opportunity to respond accordingly, establishing context, explanations, or perspective on these factors.
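For a simple linear model, the right to explanation can be met almost mechanically: report each feature's contribution to the score. The sketch below uses hypothetical feature names and weights; for non-linear models, attribution methods such as SHAP serve a similar purpose.

```python
# Per-feature contributions of a hypothetical linear risk score.
weights = {"prior_offenses": -3.0, "age_at_first_arrest": -0.5, "years_employed": 1.5}
defendant = {"prior_offenses": 2, "age_at_first_arrest": 17, "years_employed": 1}

contributions = {name: weights[name] * defendant[name] for name in weights}
for feature, points in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"{feature}: {points:+.1f} points")  # the factors a defendant could contest
```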
5) Predatory micro-targeting. Powerlessness begets powerlessness – and that cycle can intensify for consumers when machine learning increases the efficiency of activities designed to maximize profit for companies. Improving the micro-targeting of marketing and the predictive pricing of insurance and credit can magnify the cycle of poverty. For example, highly targeted ads are more adept than ever at exploiting vulnerable consumers and separating them from their money.
And insurance pricing can lead to the same result. With insurance, the name of the game is to charge more for those at higher risk. Left unchecked, this process can quickly slip into predatory pricing. For example, a churn model may find that elderly policyholders don't tend to shop around and defect to better offers, so there's less of an incentive to keep their policy premiums in check. And pricing premiums based on other life factors also contributes to a cycle of poverty. For example, individuals with poor credit ratings are charged more for car insurance. In fact, a low credit score can increase your premium more than an at-fault car accident.
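To see how easily churn modeling slides into the pricing pattern described above, consider this deliberately simplified sketch of the mechanism (all numbers invented): an unchecked optimizer raises renewal premiums most for the customers least likely to shop around.

```python
# Simplified sketch of churn-based price optimization; numbers are invented.
def renewal_premium(base_premium: float, churn_probability: float,
                    max_uplift: float = 0.25) -> float:
    """Raise premiums most for customers least likely to defect."""
    return base_premium * (1 + max_uplift * (1 - churn_probability))

print(renewal_premium(1000.0, churn_probability=0.05))  # loyal policyholder: 1237.50
print(renewal_premium(1000.0, churn_probability=0.80))  # likely shopper:     1050.00
```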
6) The coded gaze. If a group of people is underrepresented in the data from which the machine learns, the resulting model won't work as well for members of that group. This results in exclusionary experiences and discriminatory practices. This phenomenon can occur for both facial image processing and speech recognition.
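A first diagnostic here is a representation audit: compare each group's share of the training data with its error rate at test time. The group labels and counts below are invented for illustration.

```python
# Invented numbers: an underrepresented group tends to see higher error rates.
from collections import Counter

train_groups = ["A"] * 900 + ["B"] * 100   # group B is 10% of training data
test_counts = {"A": 450, "B": 150}         # test examples per group
errors = {"A": 45, "B": 30}                # invented misclassifications per group

print(Counter(train_groups))               # training representation by group
for group in errors:
    print(group, errors[group] / test_counts[group])  # A: 0.10, B: 0.20
```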
Recourse: Establish machine learning standards as a form of social activism
To address these problems, take on machine learning standardization as a form of social activism. We must establish standards that go beyond nice-sounding yet vague platitudes such as "be fair", "avoid bias", and "ensure accountability". Without being precisely defined, these catch phrases are subjective and do little to guide concrete action. Unfortunately, such broad language is fairly common among the principles released by many companies. In so doing, companies protect their public image more than they protect the public.
People involved in initiatives to deploy machine learning have a powerful, influential voice. This relatively small group of people molds and sets the trajectory of systems that automatically dictate the rights and resources to which great numbers of consumers and citizens gain access.
Famed machine learning leader and educator Andrew Ng drove it home: "AI is a superpower that enables a small team to affect a huge number of people's lives... Make sure the work you do leaves society better off."
And Allan Sammy, Director, Data Science and Audit Analytics at Canada Post, clarified the level of responsibility: "A decision made by an organization's analytic model is a decision made by that entity's senior management team."
Implementing ethical data science is as important as ensuring a self-driving car knows when to put on the brakes.
Establishing well-formed ethical standards for machine learning will be an intensive, ongoing process. For more, watch this short video, in which I provide some specifics meant to kick-start the process.
Eric Siegel, Ph.D., is a leading consultant and former Columbia University professor who makes machine learning understandable and captivating. He is the founder of the long-running Predictive Analytics World and Deep Learning World conference series and the instructor of the end-to-end, business-oriented Coursera specialization Machine Learning for Everyone. Stay in touch with Eric on Twitter @predictanalytic.