Exploring Global Governance of Artificial Intelligence

Audio of this conversation is available via your favorite podcast service.

Over the past few months, a range of voices has called for the urgent regulation of artificial intelligence. Comparisons to the problem of nuclear proliferation abound, so perhaps it's no surprise that some want a new international body similar to the International Atomic Energy Agency (IAEA). But when it comes to AI and global governance, there's already a lot in play: ethics councils, various schemes for industry governance, activity on standards, international agreements, and legislation with international impact, such as the EU's AI Act.

To get my head around this complicated, evolving ecology of global AI governance, I spoke to two of the three authors of a recent paper in the Annual Review of Law and Social Science that attempts to take stock of the field and explore the tensions between different approaches: Michael Veale, an associate professor in the Faculty of Laws at University College London, where he works on the intersection of computer science, law, and policy; and Robert Gorwa, a postdoctoral researcher at the Berlin Social Science Center, a large publicly funded research institute in Germany.

Justin Hendrix: I'm going to talk to you today about this new paper you have out, "AI and Global Governance: Modalities, Rationalities, and Tensions." There's also a third author on this paper.

Michael Veale: Yeah, that's Kira Matus. Kira's based at the Hong Kong University of Science and Technology. She's a chemist by training initially, but worked…