The Siren Song of "AI Safety"

Brian J. Chen is the Policy Director of Data & Society.

[Image: US President Joe Biden and Vice President Kamala Harris at the signing of an executive order on artificial intelligence, October 30, 2023.]

On November 1, delivering prepared remarks coinciding with the UK's AI Safety Summit, US Vice President Kamala Harris offered a carefully worded definition of "AI safety." "We must consider and address the full spectrum of AI risk," she said: "threats to humanity as a whole, as well as threats to individuals, communities, to our institutions, and to our most vulnerable populations."

Even as many applauded the Vice President's evocation of current harms, the speech firmly anchored policymaking to "safe AI." Certainly, the maneuver is smart politics. But if history is any guide, the now-ubiquitous call to "AI safety" may limit the spectrum of debate and reconfigure the solutions available to remediate algorithmic harms. US carceral scholarship, in particular, illustrates how calls to "safety" have flattened structural solutions into administrative projects.

Undoubtedly, "AI safety" is having a moment. Two weeks ago, the Biden administration announced its executive order on AI, entitled "Safe, Secure, and Trustworthy AI." Only a few days later, UK Prime Minister Rishi Sunak announced the UK's new AI Safety Institute. The US launched its own safety consortium that same day.

So, "safe AI" is popular. But what is it really? The authors of a forthcoming paper offer a provocation: what if "AI safety" is not a concept, but a community? By locating it as a cohort…