The Algorithmic Management of Misinformation That Protects Liberty

Content moderation algorithms can be designed to reduce the spread of misinformation while protecting the very rights they threaten, says Richard Mackenzie-Gray Scott, a postdoctoral researcher at the University of Oxford.

There is sustained sentiment in many democracies that social media platforms could do more to mitigate misinformation, even if consuming it does not necessarily lead to harm. Yet many regulatory approaches risk jeopardizing free speech. Whether by expanding the grounds for intermediary liability, downgrading or removing content, or deplatforming users, efforts aimed at decreasing the existence and reach of misinformation may compromise free speech. But there are measures with the potential both to reduce misinformation and to protect speech.

What has been overlooked in discussions about counteracting misinformation is the role of another human right: freedom of thought. This freedom helps us form ideas as we interact with the world. It shapes our decision-making and guides our conduct. And our freedom of thought influences our freedom of speech. The connection can be considered an ‘ongoing, cyclic, social process’. Elements of freedom of thought include exposure to and digestion of information, interaction with interlocutors, and reflection on related exchanges, which is why censorship may affect the freedom of thought of actual and potential recipients of information and ideas. Similarly, for speech to be free, the thinking that precedes it requires cognitive liberty. Despite the uncertainties regarding the relationship between belief and behavior, providing opportunities that encourage individuals to think freely may decrease the volume of reactive…