Abhishek Gupta is a Fellow, Augmented Collective Intelligence at the BCG Henderson Institute, Senior Responsible AI Leader & Expert at BCG, and Founder & Principal Researcher at the Montreal AI Ethics Institute.

Image: Alexa Steinbrück / Better Images of AI / Explainable AI / CC-BY 4.0

Enthusiasm for generative AI systems has taken the world by storm. Businesses, governments, and nonprofits alike are excited about its applications, while regulators and policymakers show varying levels of appetite to regulate and govern it. Old hands in cybersecurity and in governance, risk & compliance (GRC) functions see a much more practical challenge as organizations move to deploy ChatGPT, DALL-E 2, Midjourney, Stable Diffusion, and dozens of other products and services to accelerate their workflows and gain productivity. An upsurge of unreported and unsanctioned generative AI use has brought forth the next iteration of the classic “Shadow IT” problem: Shadow AI.

What is Shadow IT?

For those unfamiliar with the term, Shadow IT refers to code snippets, libraries, solutions, products, services, and apps on managed devices that operate outside the oversight of corporate, nonprofit, and government IT departments. Shadow IT can threaten an organization’s cybersecurity, privacy, and data confidentiality. For example, it increases the likelihood of data breaches and of ransomware infiltrating the corporate network, incidents that often cost the organization more than $1 million each, according to the Verizon 2023 Data Breach Investigations Report. Well-defined and enforced policies, including strict network monitoring, device usage rules, and other oversight mechanisms, (mostly) work well against it…

Beware the Emergence of Shadow AI
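The network monitoring described above can be pointed at generative AI traffic specifically. As a minimal sketch, assuming a CSV-formatted proxy log with user and destination_host columns (the log schema and the domain watch list below are illustrative assumptions, not a vendor’s actual export format or a complete inventory of services), a GRC team might tally outbound requests to well-known generative AI endpoints to surface unsanctioned use:

```python
import csv
from collections import Counter

# Illustrative watch list of domains associated with popular generative AI
# services. This is an assumption for the sketch, not a definitive inventory.
GENAI_DOMAINS = {
    "api.openai.com",    # ChatGPT / DALL-E 2 API traffic
    "chat.openai.com",   # ChatGPT web client
    "api.stability.ai",  # Stable Diffusion hosted API
    "discord.com",       # Midjourney runs through Discord; this flags all
                         # Discord traffic, so treat hits as a weak signal
}

def flag_shadow_ai(log_path: str) -> Counter:
    """Count requests per (user, domain) for domains on the watch list.

    Assumes a CSV proxy log with 'user' and 'destination_host' columns;
    a real secure web gateway export will use its own schema.
    """
    hits: Counter = Counter()
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            host = row.get("destination_host", "").strip().lower()
            if host in GENAI_DOMAINS:
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    # Hypothetical log file name for demonstration purposes.
    for (user, domain), count in flag_shadow_ai("proxy_log.csv").most_common():
        print(f"{user} -> {domain}: {count} requests")
```

A real deployment would pull from secure web gateway or DNS logs and route alerts into existing GRC workflows rather than printing to stdout; the point is that Shadow AI detection can piggyback on the Shadow IT tooling organizations already run.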