Exploring Legal Approaches to Regulating Nonconsensual Deepfake Pornography

Kaylee Williams is a Ph.D. student in communications at the Columbia Journalism School.

Since the first significant wave of "deepfakes" in 2017, cybersecurity experts, journalists, and even politicians have been sounding the alarm about the potential dangers of digitally generated video. The prevailing narrative surrounding these deceptive clips is that they might someday be weaponized by politically motivated bad actors (such as authoritarian governments, domestic extremists, and hackers) to defame political elites and, in the process, undermine democracy.

For example, in May 2019, when a digitally altered video of House Speaker Nancy Pelosi went viral on Facebook, news outlets were quick to catastrophize. "Fake videos could be the next big problem in the 2020 elections," read one CNBC headline. "The 2020 campaigns aren't ready for deepfakes," Axios echoed. In a House Intelligence Committee hearing held shortly after the video was debunked in the press, Representative Adam Schiff (D-CA) noted that the technology had "the capacity to disrupt entire campaigns, including that for the presidency."

The problem with this popular understanding of the dangers of deepfakes is that it focuses almost exclusively on potential harms to the political sphere, and fails to acknowledge the reality (and severity) of the situation at hand. Contrary to popular belief, the vast majority of deepfake videos available online (approximately 96 percent, according to one 2019 study) are pornographic in nature, not political. More importantly, these explicit videos are almost always created and distributed online without the consent of the women (and it is almost always women) they depict. These videos constitute a…