SDS 484: Algorithm Aversion

Podcast Guest: Jon Krohn

July 1, 2021

Welcome back to the FiveMinuteFriday episode of the SuperDataScience Podcast! 

This week I talk about a costly cognitive bias.

 

Research indicates that algorithms trained on high-quality historical data predict the future better than human forecasters can. Despite this, people are often afflicted by a common cognitive bias called algorithm aversion: a preference for a forecast from a human, despite the higher potential for error in human forecasts. People are even more likely to be averse to algorithms after they’ve seen an algorithm perform, even if the algorithm outperforms a human. 
In 2015, research out of the University of Pennsylvania found that this bias is rooted in people losing confidence in an algorithm more quickly than in a human when the algorithm and the human make the same mistake. My take-home message is to check yourself when you find yourself being wary of an algorithm, especially if you can “show your work”. If you’re working on a team with people who seem to be showing signs of this bias, kindly point out the cognitive fallacy in that line of thinking. 
DID YOU ENJOY THE PODCAST?
  • Have you exhibited algorithm aversion in your work? How can you remind yourself to check this bias?

Podcast Transcript

(00:05):
This is Five-Minute Friday, on Algorithm Aversion. 

(00:19):
Research indicates that in many domains, algorithms trained on high-quality historical data predict the future better than human forecasters can. Despite this, people are susceptible to an unfortunate cognitive bias called algorithm aversion, which is a costly preference for a forecast from a human instead of from a higher accuracy forecast by a statistical model or a machine learning model. 
(00:49):
People are especially averse to relying on forecasts from algorithms after they’ve seen them perform, even in situations where they’ve seen the algorithm outperform a human-forecaster alternative.
(01:03):
In research published in 2015, Berkeley Dietvorst and his colleagues at the University of Pennsylvania observed that this erroneous algorithm aversion is caused by people losing confidence more quickly in an algorithmic forecaster than in a human forecaster when the algorithm and the human make the same mistake.
(01:25):
Now that you are aware of this unfair cognitive bias against machines, my take-home message for you today is to check yourself when you find yourself being wary of an algorithm. If you can demonstrate to yourself, using validation data, that the algorithm performs above human accuracy, and you’re deploying the algorithm in a scenario where the training data are representative of the production data, then you should feel comfortable trusting the model’s predictions. If you’re working with other professionals, perhaps clients, who are skeptical that your model can be trusted, perhaps you can gently let them know that they may be experiencing a commonplace, if nevertheless unfounded, aversion to algorithms.
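The validation check described here can be sketched in a few lines of Python. This is a minimal, illustrative example, not anything discussed on the show: all of the names, predictions, and labels below are made up, and in practice you would compute the two accuracies on a real held-out validation set.

```python
# Hypothetical sketch: before trusting a model over human judgment,
# score both against the same held-out validation labels.

def accuracy(predictions, actuals):
    """Fraction of predictions that match the actual outcomes."""
    matches = sum(p == a for p, a in zip(predictions, actuals))
    return matches / len(actuals)

# Toy binary-outcome validation set (all values are illustrative)
actuals     = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
model_preds = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]  # algorithm's forecasts
human_preds = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]  # human forecaster's forecasts

model_acc = accuracy(model_preds, actuals)  # 0.9
human_acc = accuracy(human_preds, actuals)  # 0.6

if model_acc > human_acc:
    print(f"Model accuracy {model_acc:.0%} beats the human baseline "
          f"{human_acc:.0%}; aversion to this algorithm would be costly.")
```

A side-by-side comparison like this, on data neither forecaster has seen, is the kind of evidence that can help you (or a skeptical client) push back against algorithm aversion.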
(02:13):
If you’d like to learn more about this phenomenon, check out the paper Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err which was published in the Journal of Experimental Psychology. In the show notes, we provide a link to a freely available version of the paper from the Penn Libraries Scholarly Commons.
(02:33):
All right. That’s it for today’s episode. Thanks for listening and I’m looking forward to another round of SuperDataScience with you very soon.  