Transparency Won’t Be Enough for AI Accountability

Elizabeth (Bit) Meehan is a political science PhD candidate at George Washington University.

[Pictured: Christina Montgomery (IBM), Gary Marcus (NYU), and Sam Altman (OpenAI).]

On the surface, the Senate Judiciary Subcommittee hearing on oversight of AI went much differently for OpenAI CEO Sam Altman than Meta CEO Mark Zuckerberg’s first Congressional hearing did in 2018. Altman received a more conciliatory welcome and a more substantive discussion of the harms, biases, and future of AI than the tense hearing over the same issues on social media platforms. But like Zuckerberg, Altman used the opportunity to call for more regulation of his industry.

Although the Senators and witnesses suggested a range of regulatory solutions, such as licensing and testing requirements, one regulatory concept seemed to appeal to everyone: transparency. The need for transparency in AI companies and systems was invoked several times throughout the oversight hearing, including by the subcommittee’s chairman, Sen. Richard Blumenthal (D-CT): “We can start with transparency. AI companies ought to be required to test their systems, disclose known risks, and allow independent researcher access. We can establish scorecards and nutrition labels to encourage competition based on safety and trustworthiness, limitations on use.”

NYU Professor Emeritus Gary Marcus echoed Sen. Blumenthal: “Transparency is absolutely critical here to understand the political ramifications, the bias ramifications, and so forth. We need transparency about the data. We need to know more about how the models work. We need to have scientists have access to them.”

Likewise, IBM Chief Privacy & Trust…