From “Filter Bubbles”, “Echo Chambers”, and “Rabbit Holes” to “Feedback Loops”

Luke Thorburn is a doctoral researcher in safe and trusted AI at King’s College London; Jonathan Stray is a Senior Scientist at The Center for Human-Compatible Artificial Intelligence (CHAI), Berkeley; and Priyanjana Bengani is a Senior Research Fellow at the Tow Center for Digital Journalism at Columbia University.

While concepts such as filter bubbles, echo chambers, and rabbit holes are part of the popular wisdom about what is wrong with recommender systems, and have received significant attention from academics, the evidence for their existence is mixed. Almost every research paper uses a different definition and reaches a different conclusion, depending on how the concepts are formalized. When researchers do make the question more precise, they tend to formalize it in incompatible ways, so their results are not comparable.

In this post, we recap the history of these concepts, describe the limitations of existing research, and argue that the concepts are ultimately too muddied to serve as useful frameworks for empirical work. Instead, we propose that research should focus on feedback loops between three variables: what is engaged with, what is shown, and what is thought by users (a toy sketch of such a loop is given at the end of this section). This framework can help us understand the strengths and weaknesses of the wide range of previous work asking whether recommenders, on social media in particular, are causing political effects such as polarization and radicalization, or mental health effects such as eating disorders or depression.

Currently, there are studies which show that bots programmed to watch partisan or unhealthy…
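To make the three-variable loop concrete, here is a minimal, illustrative simulation of our own devising (not a model drawn from any particular study). A toy recommender splits a feed between two topics in proportion to the engagement rates it has observed (“what is shown”), the user clicks on items according to their current interests (“what is engaged with”), and exposure slowly nudges those interests (“what is thought”). All names and parameters, such as exposure_effect, are hypothetical.

```python
import random

def simulate_feedback_loop(steps=60, items_per_step=20,
                           exposure_effect=0.02, seed=1):
    """Toy simulation of the shown -> engaged -> thought -> shown loop."""
    rng = random.Random(seed)
    interest = {"A": 0.55, "B": 0.45}  # "what is thought": click propensity per topic
    shows = {"A": 2.0, "B": 2.0}       # exposure counts (smoothed to avoid 0/0)
    clicks = {"A": 1.0, "B": 1.0}      # engagement counts (smoothed)
    for t in range(steps):
        # "What is shown": allocate the feed by observed engagement rate.
        rate = {k: clicks[k] / shows[k] for k in interest}
        share_a = rate["A"] / (rate["A"] + rate["B"])
        for _ in range(items_per_step):
            topic = "A" if rng.random() < share_a else "B"
            shows[topic] += 1
            # "What is engaged with": clicks depend on current interest.
            if rng.random() < interest[topic]:
                clicks[topic] += 1
                # "What is thought": engagement with shown content nudges interest up.
                interest[topic] = min(1.0, interest[topic] + exposure_effect)
        if t % 10 == 0:
            print(f"t={t:2d}  share_A={share_a:.2f}  interest_A={interest['A']:.2f}")

if __name__ == "__main__":
    simulate_feedback_loop()
```

Even with a tiny initial difference in interest (0.55 vs. 0.45), the loop amplifies it: higher engagement leads to more exposure, which raises interest further. Whether anything like this dynamic occurs at scale, and with what effects, is precisely the empirical question the rest of this post takes up.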