Looking Beyond the Black Box: Transparency and Foundation Models

Prithvi Iyer is Program Manager at Tech Policy Press.

Fritzchens Fritz / Better Images of AI / GPU shot etched 5 / CC-BY 4.0

Foundation models like LLaMA and DALL-E 3 have garnered substantial commercial investment and public attention due to their role in a wide range of generative AI applications. However, a significant issue looms over their use: the lack of transparency in how these models are developed and deployed. Transparency is vital for ensuring public accountability, fostering scientific progress, and enabling effective governance of digital technologies. Without sufficient transparency, it is difficult for stakeholders to evaluate foundation models, their societal impact, and their role in our lives.

This lack of transparency around foundation models mirrors the opacity of social media platforms. Past social media scandals, such as the Rohingya crisis in Myanmar and the Cambridge Analytica scandal in the United States, highlight the social costs of insufficient transparency in platforms’ content moderation and data-sharing practices. To prevent similar crises with generative AI, it is crucial that tech companies prioritize transparency. Transparency is especially important for digital technologies like AI because most laypeople do not understand what these systems are or how they actually work.

To address this pressing issue, researchers from Stanford, MIT, and Princeton have published a paper titled “The Foundation Model Transparency Index (FMTI).” The index serves as a resource for governments and civil society organizations to hold AI companies to account, and it gives model developers tangible steps to improve transparency.