Six ways machine learning threatens social justice
Machine learning is a powerful and imperfect tool that should not go unmonitored.
- When you harness the power and potential of machine learning, there are also some drastic downsides that you've got to manage.
- When you deploy machine learning, you face the risk that it will be discriminatory, biased, inequitable, exploitative, or opaque.
- In this article, I cover six ways that machine learning threatens social justice and reach an incisive conclusion: The remedy is to take on machine learning standardization as a form of social activism.
When you use machine learning, you aren't just optimizing models and streamlining business. You're governing. In essence, the models embody policies that control access to opportunities and resources for many people. They drive consequential decisions as to whom to investigate, incarcerate, set up on a date, or medicate – or to whom to grant a loan, insurance coverage, housing, or a job.
For the same reason that machine learning is valuable—that it drives operational decisions more effectively—it also wields power in the impact it has on millions of individuals' lives. Threats to social justice arise when that impact is detrimental, when models systematically limit the opportunities of underprivileged or protected groups.
Here are six ways machine learning threatens social justice
1) Blatantly discriminatory models are predictive models that base decisions partly or entirely on a protected class. Protected classes include race, religion, national origin, gender, gender identity, sexual orientation, pregnancy, and disability status. By taking one of these characteristics as an input, the model's outputs – and the decisions driven by the model – are based at least in part on membership in a protected class. Although models rarely do so directly, there is precedent and support for doing so.
This would mean that a model could explicitly penalize, for example, black defendants for being black. So, imagine sitting across from a person being evaluated for a job, a loan, or even parole. When they ask you how the decision process works, you inform them, "For one thing, our algorithm penalized your score by seven points because you're black." This may sound shocking and sensationalistic, but I'm only describing, literally, what the model would do, mechanically, if race were permitted as a model input.
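To make the mechanics concrete, here is a minimal sketch in Python – with hypothetical feature names and made-up weights, not any real scoring system – of what it means for a protected class to be a model input: the score shifts by a fixed amount purely because of group membership, and dropping the attribute from the feature set removes that direct penalty (though proxies can remain, as the next section covers).

```python
# Minimal sketch (hypothetical feature names and illustrative weights) of what
# "race as a model input" means mechanically: a linear scoring model simply
# adds the coefficient attached to that input to the final score.
def risk_score(applicant: dict, weights: dict, intercept: float = 0.0) -> float:
    return intercept + sum(weights[k] * applicant[k] for k in weights)

weights = {
    "prior_offenses": 3.0,   # a behavioral factor
    "is_black": 7.0,         # a protected class used directly as an input
}
applicant = {"prior_offenses": 1, "is_black": 1}

# The 7-point penalty exists only because the protected attribute is a feature.
print(risk_score(applicant, weights))  # 10.0

# Dropping protected classes from the feature set removes the *direct* penalty;
# proxy variables can still leak the same information, which is point 2 below.
safe_weights = {k: v for k, v in weights.items() if k != "is_black"}
print(risk_score(applicant, safe_weights))  # 3.0
```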
2) Machine bias. Even when protected classes are not provided as a direct model input, we find, in some cases, that model predictions are still inequitable. This is because other variables end up serving as proxies to protected classes. This is a bit complicated, since it turns out that models that are fair in one sense are unfair in another.
For example, some crime risk models succeed in flagging both black and white defendants with equal precision – each flag tells the same probabilistic story, regardless of race – and yet the models falsely flag black defendants more often than white ones. A crime-risk model called COMPAS, which is sold to law enforcement across the US, falsely flags white defendants at a rate of 23.5% and black defendants at a rate of 44.9%. In other words, black defendants who don't deserve it are erroneously flagged almost twice as often as white defendants who don't deserve it.
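The arithmetic behind this seeming paradox is worth seeing directly. The sketch below uses made-up confusion-matrix counts for two hypothetical groups – not the actual COMPAS data – to show how a model's flags can be equally precise for both groups while its false positive rate is far higher for one of them.

```python
# Hypothetical confusion-matrix counts for two groups, chosen so that the
# model's flags are equally precise for both, while the false positive rate
# (people who don't deserve a flag but get one anyway) differs sharply.
groups = {
    "group_a": {"tp": 60, "fp": 40, "fn": 20, "tn": 80},
    "group_b": {"tp": 30, "fp": 20, "fn": 10, "tn": 140},
}

for name, c in groups.items():
    precision = c["tp"] / (c["tp"] + c["fp"])  # how often a flag is correct
    fpr = c["fp"] / (c["fp"] + c["tn"])        # how often the innocent are flagged
    print(f"{name}: precision={precision:.2f}, false positive rate={fpr:.2%}")

# group_a: precision=0.60, false positive rate=33.33%
# group_b: precision=0.60, false positive rate=12.50%
```

Both groups' flags carry the same probabilistic meaning, yet members of one group are wrongly flagged far more often – fairness by one metric, unfairness by another.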
3) Inferring sensitive attributes—predicting pregnancy and beyond. Machine learning predicts sensitive information about individuals, such as sexual orientation, whether they're pregnant, whether they'll quit their job, and whether they're going to die. Researchers have shown that it is possible to predict race based on Facebook likes. These predictive models deliver dynamite.
In a particularly extraordinary case, officials in China use facial recognition to identify and track the Uighurs, a minority ethnic group systematically oppressed by the government. This is the first known case of a government using machine learning to profile by ethnicity. One Chinese start-up valued at more than $1 billion said its software could recognize "sensitive groups of people." Its website said, "If originally one Uighur lives in a neighborhood, and within 20 days six Uighurs appear, it immediately sends alarms" to law enforcement.
4) A lack of transparency. A computer can keep you in jail, or deny you a job, a loan, insurance coverage, or housing – and yet you cannot face your accuser. The predictive models generated by machine learning to drive these weighty decisions are generally kept locked up as a secret, unavailable for audit, inspection, or interrogation. Such models, inaccessible to the public, perpetrate a lack of due process and a lack of accountability.
Two ethical standards oppose this shrouding of electronically assisted decisions: 1) model transparency, the standard that predictive models be accessible, inspectable, and understandable; and 2) the right to explanation, the standard that consequential decisions driven or informed by a predictive model are always held up to that standard of transparency. Meeting those standards would mean, for example, that a defendant be told which factors contributed to their crime risk score – which aspects of their background, circumstances, or past behavior caused the defendant to be penalized. This would provide the defendant the opportunity to respond accordingly, establishing context, explanations, or perspective on these factors.
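What such an explanation could look like is easiest to see for a simple linear scoring model. The sketch below – with hypothetical factors and weights, not any deployed system – itemizes each factor's contribution to a risk score; for more complex models, a feature-attribution method would be needed to produce a comparable readout.

```python
# A minimal sketch of a "right to explanation" readout for a linear risk
# model, using hypothetical factors and weights: each factor's contribution
# to the final score is listed so the person scored can see what penalized them.
weights = {"prior_offenses": 3.0, "age_under_25": 2.0, "employed": -1.5}
defendant = {"prior_offenses": 2, "age_under_25": 1, "employed": 1}

contributions = {k: weights[k] * defendant[k] for k in weights}
score = sum(contributions.values())

print(f"risk score: {score}")
for factor, points in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {factor}: {points:+.1f} points")
```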
5) Predatory micro-targeting. Powerlessness begets powerlessness – and that cycle can magnify for consumers when machine learning increases the efficiency of activities designed to maximize profit for companies. Improving the micro-targeting of marketing and the predictive pricing of insurance and credit can magnify the cycle of poverty. For example, highly-targeted ads are more adept than ever at exploiting vulnerable consumers and separating them from their money.
And insurance pricing can lead to the same result. With insurance, the name of the game is to charge more for those at higher risk. Left unchecked, this process can quickly slip into predatory pricing. For example, a churn model may find that elderly policyholders don't tend to shop around and defect to better offers, so there's less of an incentive to keep their policy premiums in check. And pricing premiums based on other life factors also contributes to a cycle of poverty. For example, individuals with poor credit ratings are charged more for car insurance. In fact, a low credit score can increase your premium more than an at-fault car accident.
6) The coded gaze. If a group of people is underrepresented in the data from which the machine learns, the resulting model won't work as well for members of that group. This results in exclusionary experiences and discriminatory practices. This phenomenon can occur for both facial image processing and speech recognition.
Recourse: Establish machine learning standards as a form of social activism
To address these problems, take on machine learning standardization as a form of social activism. We must establish standards that go beyond nice-sounding yet vague platitudes such as "be fair", "avoid bias", and "ensure accountability". Without being precisely defined, these catch phrases are subjective and do little to guide concrete action. Unfortunately, such broad language is fairly common among the principles released by many companies. In so doing, companies protect their public image more than they protect the public.
People involved in initiatives to deploy machine learning have a powerful, influential voice. These relatively small numbers of people mold and set the trajectory for systems that automatically dictate the rights and resources that great numbers of consumers and citizens gain access to.
Famed machine learning leader and educator Andrew Ng drove it home: "AI is a superpower that enables a small team to affect a huge number of people's lives... Make sure the work you do leaves society better off."
And Allan Sammy, Director, Data Science and Audit Analytics at Canada Post, clarified the level of responsibility: "A decision made by an organization's analytic model is a decision made by that entity's senior management team."
Implementing ethical data science is as important as ensuring a self-driving car knows when to put on the brakes.
Establishing well-formed ethical standards for machine learning will be an intensive, ongoing process. For more, watch this short video, in which I provide some specifics meant to kick-start the process.
Eric Siegel, Ph.D., is a leading consultant and former Columbia University professor who makes machine learning understandable and captivating. He is the founder of the long-running Predictive Analytics World and Deep Learning World conference series and the instructor of the end-to-end, business-oriented Coursera specialization Machine Learning for Everyone. Stay in touch with Eric on Twitter @predictanalytic.
So much for rest in peace.
- Australian scientists found that bodies kept moving for 17 months after being pronounced dead.
- Researchers photographed the body at 30-minute intervals during the day to capture the movement.
- This study could help better identify time of death.
We're learning more new things about death every day. Much has been said and theorized about the great divide between life and the Great Beyond. While everyone and every culture has their own philosophies and unique ideas on the subject, we're beginning to learn a lot of new scientific facts about the deceased corporeal form.
An Australian scientist has found that human bodies move for more than a year after being pronounced dead. These findings could have implications for fields ranging from pathology to criminology.
Dead bodies keep moving
Researcher Alyson Wilson studied and photographed the movements of corpses over a 17-month timeframe. She recently told Agence France-Presse about the shocking details of her discovery.
Reportedly, she and her team trained a camera on a corpse for 17 months at the Australian Facility for Taphonomic Experimental Research (AFTER), taking images every 30 minutes during the day. For the entire 17-month duration, the corpse continually moved.
"What we found was that the arms were significantly moving, so that arms that started off down beside the body ended up out to the side of the body," Wilson said.
The researchers mostly expected some kind of movement during the very early stages of decomposition, but Wilson further explained that their continual movement completely surprised the team:
"We think the movements relate to the process of decomposition, as the body mummifies and the ligaments dry out."
During one of the studies, arms that had started next to the body eventually ended up splayed out at its side.
The team's subject was one of the bodies stored at the "body farm," which sits on the outskirts of Sydney. (Wilson took a flight every month to check in on the cadaver.) Her findings were recently published in the journal Forensic Science International: Synergy.
Implications of the study
The researchers believe that understanding these postmortem movements and the rate of decomposition could help better estimate the time of death. Police, for example, could benefit by matching an unidentified corpse's estimated time of death against missing-persons reports. According to the team:
"Understanding decomposition rates for a human donor in the Australian environment is important for police, forensic anthropologists, and pathologists for the estimation of PMI to assist with the identification of unknown victims, as well as the investigation of criminal activity."
While scientists haven't found any evidence of necromancy, the discovery remains a curious new insight into what happens to the body after we die.
Metal-like materials have been discovered in a very strange place.
- Bristle worms are odd-looking, spiky, segmented worms with super-strong jaws.
- Researchers have discovered that the jaws contain metal.
- It appears that biological processes could one day be used to manufacture metals.
Bristle worms, also known as polychaetes, have been around for an estimated 500 million years. Scientists believe these super-resilient creatures have survived five mass extinctions, and there are some 10,000 species of them.
Be glad if you haven't encountered a bristle worm. Getting stung by one is an extremely itchy affair, as people who own saltwater aquariums can tell you after they've accidentally touched a bristle worm that hitchhiked into a tank aboard a live rock.
Bristle worms are typically one to six inches long when found in a tank but are capable of growing up to 24 inches long. All polychaetes have a segmented body, with each segment possessing a pair of legs, or parapodia, covered in tiny bristles. ("Polychaete" is Greek for "much hair.") The parapodia and their bristles can shoot outward to snag prey, which is then transferred to a bristle worm's eversible mouth.
The jaws of one bristle worm — Platynereis dumerilii — are super-tough, virtually unbreakable. It turns out, according to a new study from researchers at the Technical University of Vienna, that this strength is due to metal atoms.
Metals, not minerals
Fireworm, a type of bristle worm. Credit: prilfish / Flickr
This is pretty unusual. The study's senior author Christian Hellmich explains: "The materials that vertebrates are made of are well researched. Bones, for example, are very hierarchically structured: There are organic and mineral parts, tiny structures are combined to form larger structures, which in turn form even larger structures."
The bristle worm jaw, by contrast, replaces the minerals from which other creatures' bones are built with atoms of magnesium and zinc arranged in a super-strong structure. It's this structure that is key. "On its own," he says, "the fact that there are metal atoms in the bristle worm jaw does not explain its excellent material properties."
Just deformable enough
What makes conventional metal so strong is not just its atoms but the interactions between the atoms and the ways in which they slide against each other. The sliding allows for a small amount of elastoplastic deformation when pressure is applied, endowing metals with just enough malleability not to break, crack, or shatter.
Co-author Florian Raible of Max Perutz Labs surmises, "The construction principle that has made bristle worm jaws so successful apparently originated about 500 million years ago."
Raible explains, "The metal ions are incorporated directly into the protein chains and then ensure that different protein chains are held together." This leads to the creation of three-dimensional shapes the bristle worm can pack together into a structure that's just malleable enough to withstand a significant amount of force.
"It is precisely this combination," says the study's lead author Luis Zelaya-Lainez, "of high strength and deformability that is normally characteristic of metals.
So the bristle worm jaw is both metal-like and yet not. As Zelaya-Lainez puts it, "Here we are dealing with a completely different material, but interestingly, the metal atoms still provide strength and deformability there, just like in a piece of metal."
Observing the creation of a metal-like material from biological processes is a bit of a surprise and may suggest new approaches to materials development. "Biology could serve as inspiration here," says Hellmich, "for completely new kinds of materials. Perhaps it is even possible to produce high-performance materials in a biological way — much more efficiently and environmentally friendly than we manage today."
Dealing with rudeness can nudge you toward cognitive errors.
- Anchoring is a common bias that makes people fixate on one piece of data.
- A study showed that those who experienced rudeness were more likely to anchor themselves to bad data.
- In some simulations with medical students, this effect led to higher mortality rates.
Cognitive biases are funny little things. Everyone has them, nobody likes to admit it, and they can range from minor to severe depending on the situation. Biases can be influenced by factors as subtle as our mood or various personality traits.
A new study soon to be published in the Journal of Applied Psychology suggests that experiencing rudeness can be added to the list. More disturbingly, the study's findings suggest that it is a strong enough effect to impact how medical professionals diagnose patients.
Life hack: don't be rude to your doctor
The team of researchers behind the project tested to see if participants could be influenced by the common anchoring bias, defined by the researchers as "the tendency to rely too heavily or fixate on one piece of information when making judgments and decisions." Most people have experienced it. One of its more common forms involves being given a particular value, say in negotiations on price, which then becomes the center of reasoning even when reason would suggest that number should be ignored.
It can also pop up in medicine. As co-author Dr. Trevor Foulk explains, "If you go into the doctor and say 'I think I'm having a heart attack,' that can become an anchor and the doctor may get fixated on that diagnosis, even if you're just having indigestion. If doctors don't move off anchors enough, they'll start treating the wrong thing."
Lots of things can make somebody more or less likely to anchor themselves to an idea. The authors of the study, who have several papers on the effects of rudeness, decided to see if that could also cause people to stumble into cognitive errors. Past research suggested that exposure to rudeness can limit people's perspective — perhaps anchoring them.
In the first version of the study, medical students were given a hypothetical patient to treat and access to information on their condition alongside an (incorrect) suggestion on what the condition was. This served as the anchor. In some versions of the tests, the students overheard two doctors arguing rudely before diagnosing the patient. Later variations switched the diagnosis test for business negotiations or workplace tasks while maintaining the exposure to rudeness.
Across all iterations of the test, those exposed to rudeness were more likely to anchor themselves to the initial, incorrect suggestion despite the availability of evidence against it. This was less significant for study participants who scored higher on a measure of perspective-taking. The disposition of these participants, who answered in the affirmative to questions like, "Before criticizing somebody, I try to imagine how I would feel if I were in his/her place," was able to effectively negate the narrowing effects of rudeness.
What this means for you and your healthcare
The effects of anchoring when a medical diagnosis is on the line can be substantial. Dr. Foulk explains that, in some simulations, exposure to rudeness can raise the mortality rate as doctors fixate on the wrong problems.
The authors of the study suggest that managers take a keener interest in ensuring civility in workplaces and giving employees the tools they need to avoid judgment errors after dealing with rudeness. These steps could help prevent anchoring.
Also, you might consider being nicer to people.