When AI Systems Fail: The Toll on the Vulnerable Amidst Global Crisis

Reactive measures to address biased AI features and the spread of misinformation on social media platforms are not enough, says Nadah Feteih, an Employee Fellow with the Institute for Rebooting Social Media at the Berkman Klein Center and a Tech Policy Fellow with the Goldman School of Public Policy at UC Berkeley.

Image by Jamillah Knowles & Reset.Tech Australia / © https://au.reset.tech/ / Better Images of AI / Detail from Connected People / CC-BY 4.0

Social media is vital in enabling independent journalism, exposing human rights abuses, and facilitating digital activism. These platforms have allowed marginalized communities to reclaim the narrative by sharing their lived realities and documenting crises in real time. However, decisions made by social media companies chiefly prioritize profits; tackling integrity issues and addressing the technical problems that further the spread of harmful content appears to be at odds with their incentives. While there may be tension in reconciling user expectations with features motivated by platform business models, users and tech workers increasingly feel silenced by biased mistakes made during times of crisis. The stakes are even higher when these mistakes exacerbate real-world harm.

Consider two recent examples, both involving technical “errors” in Meta products that resulted in dehumanizing misrepresentations of Palestinians amidst the ongoing situation in the region. The first instance was reported on October 19 by 404 Media: when users had text in their bios that included “Palestinian” and an Arabic phrase meaning “Praise be to God,” Instagram auto-translated the Arabic text to…