Three Reasons the White House AI Commitments Are a Game Changer

Brandie Nonnecke, PhD, is Director of the CITRIS Policy Lab & Associate Research Professor at the Goldman School of Public Policy at UC Berkeley.

Pres. Joe Biden announced his administration has secured voluntary commitments from seven U.S. companies to ensure AI safety, July 21, 2023. Source: White House

It’s easy to criticize the White House AI Commitments as too weak, too vague, and too late. The EU will soon pass comprehensive AI legislation with global ramifications, placing legal requirements on AI developers to build safe, secure, and trustworthy AI: the very goals the White House seeks to achieve. But in the United States, where AI-related legislative and regulatory efforts are often impeded by party politics and where laissez-faire capitalism has a stronghold, these voluntary commitments may well be the strongest possible steps to hold AI companies accountable. Here are three reasons why.

First, the Federal Trade Commission (FTC) has the capacity to hold AI companies accountable for their misdeeds, and the White House AI commitments support this authority. The FTC Act authorizes the FTC to protect consumers from deceptive and unfair practices. The seven companies that joined the White House announcement (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) have committed to conduct rigorous internal and external security testing of their AI systems and to publicly report system capabilities, limitations, domains of appropriate and inappropriate use, safety evaluations, and implications for societal risks such as fairness and bias. These processes will generate extensive documentation of their procedures and findings,…