Making Amplification Measurable

Luke Thorburn is a doctoral researcher in safe and trusted AI at King's College London; Jonathan Stray is a Senior Scientist at the Center for Human-Compatible Artificial Intelligence (CHAI), Berkeley; and Priyanjana Bengani is a Senior Research Fellow at the Tow Center for Digital Journalism at Columbia University.

The term "amplification" is ubiquitous in discussions of recommender systems, and even in legislation, but is not well-defined. We have previously argued that the term is too ambiguous to be used in law, and that the three most common interpretations — comparison with a counterfactual baseline algorithm, mere distribution, or a lack of user agency — are either difficult to justify or difficult to measure consistently. That said, the way recommender systems are designed does influence how widely different types of content are distributed. Amplification is an evocative term for this reality, and remains a prominent subject of research. For these reasons, it would be useful to have a common understanding of what amplification is and how to measure it.

In this piece, we propose five properties that measures of amplification should have if they are to be useful in discussions of recommender policy. Specifically, they should: (1) define the content of interest, (2) define an appropriate baseline, (3) focus on impressions as the relevant outcome, (4) isolate the effect of the algorithm from that of human behavior, and (5) specify the time horizon over which they apply. The first three properties are relatively easy to satisfy, while the latter two involve causal inference and systemic equilibria,…
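As a rough illustration of the first three properties, an amplification measure might be computed as the share of impressions a defined content category receives under the deployed ranking, divided by its share under a chosen baseline (for example, a reverse-chronological feed), within a fixed time window. The sketch below is hypothetical: the `Impression` schema, the `"deployed"`/`"baseline"` arm labels, and the function itself are illustrative assumptions, and the sketch does not address properties (4) and (5), which require isolating the algorithm's causal effect over a specified horizon.

```python
from dataclasses import dataclass

@dataclass
class Impression:
    timestamp: float   # seconds since epoch
    category: str      # e.g. "political", "sports" (the content of interest)
    ranking: str       # "deployed" or "baseline" (hypothetical A/B arms)

def amplification_ratio(impressions, category, start, end):
    """Share of impressions for `category` under the deployed ranking,
    divided by its share under the baseline, within [start, end).
    Returns None if either arm has no impressions (or the baseline share is 0)."""
    window = [i for i in impressions if start <= i.timestamp < end]

    def share(arm):
        arm_imps = [i for i in window if i.ranking == arm]
        if not arm_imps:
            return None
        return sum(i.category == category for i in arm_imps) / len(arm_imps)

    deployed, baseline = share("deployed"), share("baseline")
    if deployed is None or baseline in (None, 0):
        return None
    return deployed / baseline

# Toy example: "political" content is 1/2 of deployed impressions
# but only 1/4 of baseline impressions, giving a ratio of 2.0.
imps = [
    Impression(1.0, "political", "deployed"),
    Impression(2.0, "sports", "deployed"),
    Impression(3.0, "political", "baseline"),
    Impression(4.0, "sports", "baseline"),
    Impression(5.0, "sports", "baseline"),
    Impression(6.0, "sports", "baseline"),
]
ratio = amplification_ratio(imps, "political", 0.0, 10.0)  # → 2.0
```

Note that even this simplified ratio makes all five choices explicit in its inputs: the category, the baseline arm, impressions as the outcome, and the window endpoints; only the causal isolation of the algorithm's effect lies outside what a descriptive ratio can capture.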