Researchers Identify False Twitter Personas Likely Powered by ChatGPT

Justin Hendrix is CEO and Editor of Tech Policy Press.

[Figure: Websites promoted by the botnet "fox8". Source: Kai-Cheng Yang]

When it comes to imagining the harms of generative AI systems, the potential for such technologies to be used for disinformation and political manipulation is perhaps the most obvious. At a Senate Judiciary subcommittee hearing in May, OpenAI CEO Sam Altman called the risk that systems such as ChatGPT may be used to manipulate elections "one of my areas of greatest concern." In a paper released in January, researchers at OpenAI, Stanford, and Georgetown pointed to "the prospect of highly scalable—and perhaps even highly persuasive—campaigns by those seeking to covertly influence public opinion."

Now, researchers Kai-Cheng Yang and Filippo Menczer at the Indiana University Observatory on Social Media have shared a preprint paper describing "a Twitter botnet that appears to employ ChatGPT to generate human-like content." The cluster of 1,140 accounts, which the researchers dubbed "fox8," appears to promote crypto, blockchain, and NFT-related content. The accounts were not discovered with fancy forensic tools (detection of machine-generated text is still difficult for machines) but by searching for "self revealing" text that may have been "posted accidentally by LLM-powered bots in the absence of a suitable filtering mechanism." Using Twitter's API, the researchers searched Twitter for the phrase "as an ai language model" over the six-month period between October 2022 and April 2023. This phrase is a common response generated by ChatGPT when it receives a prompt that violates its usage policies. This produced…
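The detection heuristic described above amounts to a simple case-insensitive substring filter over tweet text. A minimal sketch follows; the `search_params` dictionary illustrates how such a query might be phrased against Twitter's v2 full-archive search endpoint, but the exact query and parameters are assumptions for illustration, not the authors' code.

```python
# Sketch of the "self-revealing text" heuristic: flag tweets containing
# the tell-tale ChatGPT refusal phrase. The search parameters below are
# an illustrative assumption, not the researchers' actual query.

SELF_REVEALING_PHRASE = "as an ai language model"

def looks_self_revealing(tweet_text: str) -> bool:
    """Return True if the tweet contains the ChatGPT refusal phrase."""
    return SELF_REVEALING_PHRASE in tweet_text.lower()

# Hypothetical parameters for Twitter's v2 full-archive search
# (GET /2/tweets/search/all, which requires elevated/academic access),
# covering the October 2022 to April 2023 window from the paper.
search_params = {
    "query": '"as an AI language model" -is:retweet',
    "start_time": "2022-10-01T00:00:00Z",
    "end_time": "2023-04-30T23:59:59Z",
}

# Example: filtering a small batch of (made-up) tweets.
tweets = [
    "As an AI language model, I cannot express opinions on NFTs.",
    "Bullish on $BTC today!",
]
flagged = [t for t in tweets if looks_self_revealing(t)]
```

The heuristic's weakness is obvious: it only catches bots whose operators failed to filter out refusal messages, so the 1,140 accounts found this way are likely a lower bound on LLM-powered activity.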