Beyond Red Teaming: Facilitating User-based Data Donation to Study Generative AI

Zeve Sanderson is the Executive Director and Joshua A. Tucker is one of the co-founders and co-directors of NYU's Center for Social Media and Politics (CSMaP).

Academics, technologists, policymakers, and journalists alike are in the midst of trying to ascertain the benefits and harms of generative AI tools. One of the primary and most discussed approaches for identifying potential harms is red teaming, which was included in President Biden's executive order this week. In this approach, someone attempts to get an AI system to do something that it shouldn't, with the goal of then fixing the issue. As a team from Data & Society details, red teaming "often remains a highly technical exercise" that works in some cases while failing in others.

One of the limitations of red teaming is that it doesn't tell us how people actually use a generative AI system. Instead, red teaming focuses on how a system could be used by bad actors. While vital, this approach leaves us flying blind when it comes to understanding how these tools are being used in the wild: by whom, for what purposes, and to what effects.

This limitation is especially stark in the context of consumer-facing generative AI tools. While there are well-established cases of citizen red-teamers figuring out how to get chatbots to break their own rules (and then posting the results on social media), we know little about how most people are using these services. And with ChatGPT reaching an estimated over…