The inherent paradox of AI regulation 

Nary a day goes by when we don’t learn about a new regulation of artificial intelligence (AI). This is not surprising: AI is widely touted as the most powerful new technology of the 21st century. But because there is no agreed-upon definition of AI, as I’ve previously written in these pages, and because the landscape is constantly growing and changing, many new regulations are steeped in contradiction.

Among such regulatory paradoxes is a tendency to regulate an activity only when it is conducted using AI, even when (from the end user’s perspective) the exact same human activity is unregulated. For example, impersonations and parodies of celebrities and politicians have been around since ancient times, and are often considered acceptable commentary. And yet, we may be moving toward an environment in which a human impersonator and an AI-generated impersonator who appear exactly the same and do exactly the same thing are classified, for regulatory purposes, entirely differently.

The current chair of the Federal Trade Commission (FTC), Lina Khan, is a brilliant attorney who is attempting to address such paradoxes in emerging FTC AI regulations. During a recent Carnegie Endowment program, I asked Khan how the FTC deals with the paradox of regulating some AI activities when the exact same human activities might not be regulated. She replied that the commission’s focus is the opposite: ensuring “that using AI doesn’t give you some type of free pass.”

Even at this early stage, we can see that LLGAI (“large language” because computers use the…