White House AI Executive Order Takes On Complexity of Content Integrity Issues

Renée DiResta is the technical research manager at the Stanford Internet Observatory. Dave Willner is a Non-Resident Fellow in the Program on Governance of Emerging Technologies at the Stanford Cyber Policy Center.

Image: US President Joe Biden and Vice President Kamala Harris at the signing of an executive order on artificial intelligence, October 30, 2023.

On the morning of October 30th, the Biden White House released its long-awaited Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. There are many promising aspects of the document and the initiatives it outlines. But on the subject of content integrity – including issues related to provenance, authenticity, and synthetic media detection and labeling – the order overemphasizes technical solutions.

Background

The launch and widespread adoption of OpenAI’s ChatGPT and text-to-image tools in late 2022 captivated the public, making hundreds of millions of people worldwide aware of both how useful – and how potentially destabilizing – generative AI technology can be. The shift is profound: just as social media democratized content distribution, shifting the gatekeepers and expanding participation in the public conversation, generative AI has democratized synthetic content creation at scale.

The White House’s Executive Order raises and addresses many of the most prominent concerns the American public has about artificial intelligence, including potential negative impacts on national security, privacy, employment, and information manipulation. But it also highlights the importance of remaining competitive, and the opportunities associated with leveraging a powerful new technology for the benefit of society. The guidance in the Executive Order, it’s important…