Harvard's Cass Sunstein: Algorithms can correct human biases

A tool that can slowly build a better world.

  • Algorithms help drive the modern world.
  • Algorithms reflect human biases, but some — as Harvard's Cass Sunstein notes — can be built to help correct our biases.
  • If you build the right algorithm, you might be able to help contribute to a better world.

Algorithms are part of the engine that drives the modern world. When you search for something on Google, you're relying on a search engine defined by a specific algorithm. When you see what you see on your news feed on Facebook, you're not looking at something that comes to you naturally; you're looking at something defined by a specific algorithm.

There has recently been pushback against the idea that algorithms make our world easier, which is a common framing in discussions of algorithms. Some of the pushback is philosophical; some comes from an immediately practical place.

The practical pushback takes the form of an article from October of this year noting that Amazon scrapped an AI recruiting tool after finding that it penalized women. Another article, from ProPublica, reported that an algorithm used to predict whether a criminal defendant in the United States was liable to re-offend was racially biased.

Part of the reason some algorithms run into trouble is that there are numerous mathematical ways to define 'fairness,' and not every system is built with enough flexibility to account for all of them. Consider one system used to assess the risk of child abuse in the Pittsburgh region, as flagged in an article in Nature: "And, for reasons that are still not clear, white children that the algorithm scored as at highest risk of maltreatment were less likely to be removed from their homes than were black children given the highest risk scores."
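To make the point concrete, here is a minimal sketch, using entirely invented data, of how two common mathematical definitions of fairness can disagree about the same classifier: equal flagging rates across groups (demographic parity) can coexist with unequal error rates (innocent members of one group flagged more often).

```python
# Toy illustration (hypothetical data): a classifier can satisfy one
# fairness definition while violating another.
# Each tuple: (group, true_label, predicted_label).
cases = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 0, 1), ("B", 0, 1), ("B", 0, 0), ("B", 0, 0),
]

def positive_rate(group):
    """Demographic parity: share of the group predicted positive."""
    g = [c for c in cases if c[0] == group]
    return sum(1 for c in g if c[2] == 1) / len(g)

def false_positive_rate(group):
    """Error-rate balance: share of true negatives wrongly flagged."""
    negs = [c for c in cases if c[0] == group and c[1] == 0]
    return sum(1 for c in negs if c[2] == 1) / len(negs)

# Both groups are flagged at the same overall rate (3 of 5 each)...
print(positive_rate("A"), positive_rate("B"))            # 0.6 0.6
# ...yet innocent members of group B are flagged more often (0.5 vs ~0.33).
print(false_positive_rate("A"), false_positive_rate("B"))
```

A system built to equalize one of these rates will, on data like this, necessarily leave the other unequal, which is the flexibility problem the paragraph above describes.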

But that doesn't mean that there aren't positive things at work among all this — there are, and Harvard Kennedy School professor Cass Sunstein recently released a paper to testify to that fact, arguing that "algorithms can overcome the harmful effects of cognitive biases."

That being said, it's worth noting a strange moment in the paper where Sunstein writes that "... the decisions of human judges, with respect to bail decisions, show neither disparate treatment nor disparate impact. As far as I am aware, there is no proof of either." There is apparent proof, as noted in an article published in the Quarterly Journal of Economics. In the piece, the authors note that "estimates from Miami and Philadelphia show that bail judges are racially biased against black defendants, with substantially more racial bias among both inexperienced and part-time judges."

The Quarterly Journal of Economics paper complicates the assertion that an algorithm would simply be making an already race-blind decision-making process ('no proof of either') more efficient on the basis of predicted risk alone. It also complicates the notion that an algorithm would more or less reproduce what a judge considering bail produces, only a bit more efficiently, with less crime and the like.

But this doesn't necessarily undercut what Sunstein points out about the particular algorithm described in a paper put out under the auspices of the National Bureau of Economic Research:

1. "Use of the algorithm could maintain the same detention rate now produced by human judges and reduce crime by up to 24.7 percent. Alternatively, use of the algorithm could maintain the current level of crime reduction and reduce jail rates by as much as 41.9 percent ... thousands of people could be released, pending trial, without adding to the crime rate."

2. " … judges release 48.5 percent of the defendants judged by the algorithm to fall in the riskiest 1 percent. Those defendants fail to re-appear in court 56.3 percent of the time. They are rearrested at a rate of 62.7 percent. Judges show leniency to a population that is likely to commit crimes," treating "high-risk defendants as if they are low-risk when their current charge is relatively minor" while treating "low-risk people as if they are high-risk when their current charge is especially serious."

3. "If the algorithm is instructed to produce the same crime rate that judges currently achieve, it will jail 40.8 percent fewer African-Americans and 44.6 percent fewer Hispanics. It does this because it detains many fewer people, focused as it is on the riskiest defendants."
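The mechanism behind these results can be sketched in a few lines. The following is a simplified illustration with invented risk scores, not the NBER paper's actual model: rank defendants by predicted risk and detain from the top, so that a fixed detention budget (matching the judges' current rate) falls only on the riskiest cases.

```python
# Minimal sketch (invented numbers) of detaining by ranked risk:
# hold the detention rate fixed and fill it with the highest-risk cases.

# Hypothetical predicted risk scores for ten defendants.
risk = [0.92, 0.85, 0.74, 0.40, 0.33, 0.31, 0.22, 0.15, 0.09, 0.05]

def detain_top(scores, detention_rate):
    """Return the indices of the highest-risk fraction of defendants."""
    k = round(len(scores) * detention_rate)
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return set(ranked[:k])

# Matching a judge-like 30% detention rate detains only the top three scores.
print(detain_top(risk, 0.30))  # {0, 1, 2}
```

Under this scheme, anyone below the cutoff is released pending trial, which is how the paper's algorithm can detain "many fewer people, focused as it is on the riskiest defendants."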

These are seemingly clear results in support of the thesis: there is a bias that can be corrected to achieve a better result. Even with the complicating findings among inexperienced and part-time judges in Miami and Philadelphia, we can see that some judges interpret noise as signal. An algorithm can help provide a necessary measure of clarity, and, potentially, justice.
