
Harvard’s Cass Sunstein: Algorithms can correct human biases

A tool that can slowly build a better world.


Key Takeaways
  • Algorithms help drive the modern world.
  • Algorithms reflect human biases, but some — as Harvard’s Cass Sunstein notes — can be built to help correct our biases.
  • If you build the right algorithm, you might be able to contribute to a better world.

Algorithms are part of the engine that drives the modern world. When you search for something on Google, you’re relying on a search engine defined by a specific algorithm. When you see what you see on your news feed on Facebook, you’re not looking at something that comes to you naturally; you’re looking at something defined by a specific algorithm.

There has been pushback recently against the idea that algorithms simply make our world easier, which is how they are often discussed. Some of that pushback is philosophical. Some of it comes from an immediately practical place.

The practical pushback takes the form of an article from October of this year reporting that Amazon scrapped an AI recruiting tool after finding it showed bias against women. Another article, from ProPublica, found that an algorithm used to determine whether a criminal defendant in the United States was likely to re-offend was racially biased.

Part of the reason some algorithms run into trouble is that there are numerous mathematical ways to define the concept of ‘fair’, and not every system is built with enough flexibility to account for all of those definitions. Consider the way one system assessed the risk of child abuse in the Pittsburgh region, as flagged in an article in Nature: “And, for reasons that are still not clear, white children that the algorithm scored as at highest risk of maltreatment were less likely to be removed from their homes than were black children given the highest risk scores.”
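To make that tension concrete, here is a minimal sketch, in Python, of how two common fairness definitions can pull against each other. The groups, predictions, and outcomes below are invented purely for illustration; they are not drawn from the Pittsburgh system or any of the studies discussed here.

    # Toy illustration (invented data): two fairness definitions can disagree.
    # 1 = "high risk" prediction / actual bad outcome, 0 = otherwise.

    def rates(labels, preds):
        """Return (share flagged as high risk, false-positive rate) for one group."""
        flagged = sum(preds) / len(preds)
        negatives = [p for p, y in zip(preds, labels) if y == 0]
        fpr = sum(negatives) / len(negatives) if negatives else 0.0
        return flagged, fpr

    group_a = {"labels": [1, 1, 0, 0, 0, 0], "preds": [1, 1, 1, 0, 0, 0]}
    group_b = {"labels": [1, 0, 0, 0, 0, 0], "preds": [1, 1, 1, 0, 0, 0]}

    for name, g in [("A", group_a), ("B", group_b)]:
        flagged, fpr = rates(g["labels"], g["preds"])
        print(f"group {name}: flagged {flagged:.0%}, false-positive rate {fpr:.0%}")

    # Both groups are flagged at the same rate (one definition of fairness holds),
    # yet their false-positive rates differ (another definition of fairness fails).

Because the two groups have different underlying rates of bad outcomes, a system that satisfies one definition of fairness can still look biased under the other, which is one reason a single ‘fair’ algorithm is hard to build.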

Video: Cass Sunstein – Nudge: Improving Decisions About Health, Wealth, and Happiness (www.youtube.com)

But that doesn’t mean that there aren’t positive things at work among all this — there are, and Harvard Kennedy School professor Cass Sunstein recently released a paper to testify to that fact, arguing that “algorithms can overcome the harmful effects of cognitive biases.”

That being said, it’s worth noting a strange moment in the paper where Sunstein writes that “… the decisions of human judges, with respect to bail decisions, show neither disparate treatment nor disparate impact. As far as I am aware, there is no proof of either.” There does appear to be such proof, as noted in an article published in the Quarterly Journal of Economics. In the piece, the authors note that “estimates from Miami and Philadelphia show that bail judges are racially biased against black defendants, with substantially more racial bias among both inexperienced and part-time judges.”

The paper from the Quarterly Journal of Economics complicates the assertion that an algorithm merely makes an already race-blind decision-making process (‘no proof of either’) more efficient. It also complicates the notion that an algorithm more or less reproduces what a judge considering bail would produce, only a little more efficiently, with less crime and the like.

But this doesn’t necessarily undercut what Sunstein points out about the particular algorithm studied in a paper published under the auspices of the National Bureau of Economic Research:

1. “Use of the algorithm could maintain the same detention rate now produced by human judges and reduce crime by up to 24.7 percent. Alternatively, use of the algorithm could maintain the current level of crime reduction and reduce jail rates by as much as 41.9 percent … thousands of people could be released, pending trial, without adding to the crime rate.”

2. “… judges release 48.5 percent of the defendants judged by the algorithm to fall in the riskiest 1 percent. Those defendants fail to re-appear in court 56.3 percent of the time. They are rearrested at a rate of 62.7 percent. Judges show leniency to a population that is likely to commit crimes,” treating “high-risk defendants as if they are low-risk when their current charge is relatively minor” while treating “low-risk people as if they are high-risk when their current charge is especially serious.”

3. “If the algorithm is instructed to produce the same crime rate that judges currently achieve, it will jail 40.8 percent fewer African-Americans and 44.6 percent fewer Hispanics. It does this because it detains many fewer people, focused as it is on the riskiest defendants.”

These are seemingly clear results in support of the thesis: that there is a bias that can be corrected to achieve a better result. Even with the complicating findings about inexperienced and part-time judges in Miami and Philadelphia, we can see that some judges interpret noise as a signal. An algorithm can help provide a necessary measure of clarity and, potentially, justice.
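To make the trade-off in those figures concrete, here is a minimal, hypothetical sketch of the kind of exercise the NBER paper describes: rank defendants by a predicted risk score, detain the same share that judges currently detain, and compare expected failure rates. The scores and judges’ decisions below are invented for illustration and are not drawn from the paper’s data.

    # Hypothetical illustration: hold the detention rate fixed, detain by risk rank.
    # (risk_score, detained_by_judge) for ten invented defendants;
    # risk_score is a model's estimated probability of failure if released.
    defendants = [
        (0.90, False), (0.75, True), (0.60, False), (0.55, True), (0.40, False),
        (0.35, True),  (0.20, False), (0.15, False), (0.10, True), (0.05, False),
    ]

    detention_rate = sum(d for _, d in defendants) / len(defendants)  # judges' rate
    k = round(detention_rate * len(defendants))                       # same headcount

    # Judges' policy: expected failures among those they actually released.
    judge_failures = sum(r for r, d in defendants if not d)

    # Algorithm's policy: detain the k highest-risk defendants instead.
    ranked = sorted(defendants, key=lambda x: x[0], reverse=True)
    algo_failures = sum(r for r, _ in ranked[k:])

    print(f"detention rate held fixed at {detention_rate:.0%}")
    print(f"expected failures, judges:    {judge_failures:.2f}")
    print(f"expected failures, algorithm: {algo_failures:.2f}")

In this toy example, detaining the same number of people but choosing them by predicted risk lowers the expected failure rate, which is the basic mechanism behind the “same detention rate, less crime” and “same crime, fewer detentions” results quoted above.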

