Regulating Transparency in Audiovisual Generative AI: How Legislators Can Center Human Rights

Raquel Vazquez Llorente is the Head of Law and Policy — Technology Threats and Opportunities at WITNESS. Sam Gregory is the Executive Director of WITNESS.

Image by Alan Warburton / © BBC / Better Images of AI / Virtual Human / CC-BY 4.0

In an era marked by rapid technological advancement, the interplay between innovation and human rights has never been more crucial. While generative AI and synthetic media offer creative and commercial benefits, these tools are also connected to a range of harms that disproportionately impact communities already vulnerable to mis- and disinformation, or already targeted and discriminated against because of their gender, race, ethnicity, or religion. Given the lack of public understanding of AI, the rapidly increasing verisimilitude of audiovisual outputs, and the absence of robust transparency and accountability, generative AI is also deepening distrust of specific items of content as well as of broader ecosystems of media and information.

As policymakers and regulators grapple with the complexities of a media landscape that features both AI-generated and non-synthetic content (and mixes of both), there is much to be learned from the human rights field. Democracy defenders, journalists, and others documenting war crimes and abuses around the world have long faced claims by perpetrators and the powerful dismissing their content as fake or edited. They have also grappled with similar questions about the effectiveness, scalability, and downsides of tracking and sharing how a piece of content is made. In this blog post, we explore…