Generative AI is Already Catalyzing Disinformation. How Long Until Chatbots Manipulate Us Directly?

Zak Rogoff is research manager at Ranking Digital Rights.

Mural in Brooklyn, New York. 2021.

Over the last few years, it’s become clear that unscrupulous companies and politicians are willing to pursue any new technology that promises the ability to manipulate opinion at scale. Generative AI represents the latest wave of such technologies. Although the potential harms are already apparent, law- and policymakers have to date failed to put the necessary guardrails in place.

The Perils of Personal Data and Political Manipulation

In 2018, concerns that had long been the particular focus of tech experts and the digital rights community exploded into the mainstream, as the Cambridge Analytica scandal revealed the massive collection of personal data aimed at manipulating the outcome of elections: not only the 2016 election of Donald Trump and the Brexit referendum in the UK, but elections in another 200 countries globally as well. Though the scandal captured headlines for weeks, with proclamations that “Facebook trust levels” had “collapsed,” as The Guardian put it just one year later, “the Cambridge Analytica scandal changed the world—but it didn’t change Facebook.”

But Cambridge Analytica was only the tip of the proverbial iceberg, as politicians kicked off further experiments in manipulating the public. For instance, a 2022 Human Rights Watch report details political manipulation powered by user data during the reelection campaign of Hungary’s leader Viktor Orbán, four years after the Cambridge Analytica revelations became public. The report found that Hungarian political…