Political momentum is building to regulate the spread of nonconsensual explicit deepfakes as digitally altered images have moved from a potential threat to a reality.

Several bipartisan bills introduced in Congress aim to mitigate the spread of nonconsensual explicit images made using artificial intelligence (AI), an issue that has plagued not only public figures and celebrities but also everyday people and even kids.

“The past year, it’s really been a new thing where it’s forced itself — where we’ve got a real big problem,” said Anna Olivarius, the founding partner of McAllister Olivarius, a transatlantic law firm specializing in cases of race and gender discrimination.

In January, explicit AI-generated images made to look like Taylor Swift circulated online, bringing mass attention to the issue. The outcry prompted lawmakers and the White House to push platforms to enforce their rules and prevent the spread of such images.

While the spread of the Swift deepfakes put a spotlight on the rise of nonconsensual AI porn, the issue has become more widespread. Schools have been forced to grapple with a new form of cyberbullying and harassment as students create and spread deepfakes of their peers in a largely unregulated space.

“It’s impacting tons of everyday people,” Olivarius said.

Lawmakers have also been victims. Rep. Alexandria Ocasio-Cortez (D-N.Y.), one of the lawmakers spearheading a bill to fight explicit deepfakes, spoke about being targeted by nonconsensual explicit deepfakes herself in an April interview with…