Autocrats Will Benefit Most from Twitter’s New Approach to State-Affiliated Media

E. Rosalie Li is a researcher and recent interdisciplinary graduate of the Johns Hopkins Bloomberg School of Public Health.

On April 4, Elon Musk’s Twitter added a new label to the account for National Public Radio (NPR). The decades-old independent broadcaster is now classified as “state-affiliated.” The move came even though Twitter’s policy said that funding from a government source alone does not warrant the “state-affiliated media” label, provided the outlet has editorial independence, as in the case of the BBC in the UK. Contrast that with Kremlin-backed outlets such as Russia’s RT and Sputnik, where even employees have highlighted the lack of editorial freedom, bolstering the case for their classification as “state-affiliated.”

Twitter’s (or perhaps more accurately, Musk’s) seemingly arbitrary decision risks undermining the distinction between ethical, state-financed journalism and propaganda arms controlled by autocratic regimes. These corrosive changes are a gift to dictatorial leaders who hide behind claims of persecution, and they come in an increasingly hostile online environment where censorship and suppression of free speech are major concerns. The decision also raises serious questions about whether Musk is a responsible steward of such an important platform.

NPR and RT: Apples and Oranges

As of April 5, 2023, Twitter’s policy on “Government and state-affiliated” outlets still seemed to exclude an outlet such as NPR, which receives only a small portion of its budget from government sources: “State-financed media organizations with editorial independence, like the BBC in the U.K. for example, are not defined as state-affiliated media for the…

NPR is Not RT: Twitter’s New State-Affiliated Media Policy is Misleading

Joseph Bodnar is a research analyst at the Alliance for Securing Democracy at the German Marshall Fund, where he tracks Russian propaganda and disinformation.

On April 4, Twitter placed a state-affiliated media label on NPR’s account. The label is meant to provide users with context when they see a media account that is under a state’s editorial control, like Russia’s RT and China’s People’s Daily, which lack organizational and financial firewalls to insulate their coverage from government interference. NPR doesn’t fit that description. The outlet gets less than 1% of its funding from the federal government; the other 99% comes largely from corporate sponsorships, membership drives, and fees from affiliate radio stations. This arrangement ensures that NPR remains free from state control. Twitter’s move to add a state media label to NPR’s account therefore equates editorially independent media with propaganda outlets that autocratic regimes use to do things like cover up war crimes and cultural genocide.

At the time Twitter labeled NPR, the platform’s own policy explicitly named the public broadcaster as an example of media that receives state funding but maintains its editorial freedom. NPR did nothing to prompt Twitter’s policy change. What changed is the way the platform makes content moderation decisions. A team that understood state-backed media and information campaigns used to oversee those policies. Now, rules are being dictated by a person whose ideas often seem to reflect advice given by trolls.

Regardless, Twitter’s labeling of NPR does not appear to be part of any broader policy change—at…

Evaluating New Technology for Equitable and Secure Voter Verification

Dr. Juan E. Gilbert is the Andrew Banks Family Preeminence Endowed Professor and Chair of the Computer & Information Science & Engineering Department at the University of Florida. He leads the Computing for Social Good Lab, where Jasmine McKenzie, Alaina Smith, and London Thompson are PhD students.

Elections are the bedrock of democracy, so access to voting is essential; yet over the decades, voting access for people of color, those with disabilities, and other marginalized groups in the United States has faced severe challenges. One of those challenges revolves around verifying voter eligibility. New technologies may present solutions to this problem, but substantial research is necessary to verify the efficacy, and address the downsides, of any new tools and techniques that determine who has access to the franchise.

Voter verification, in essence, determines who has access to vote. Voter verification methods vary across the U.S. by state. Each state requires some form of identification to register and vote, and these requirements have often served as tools to disenfranchise communities of color. For example, in Texas, a pistol license granted by the Department of Public Safety is an acceptable form of voter identification (ID), but a student ID from a Texas public university is not. A driver’s license is the primary form of voter identification in most states; however, voters of color and the elderly may rely on public transportation and may not have a state-issued driver’s license. These disparities in state criteria have the effect of disenfranchising…

What Generative AI Reveals About the Limits of Technological Innovation

Dr. Joe Bak-Coleman is an associate research scientist at the Craig Newmark Center for Journalism Ethics and Security at Columbia University and an RSM assembly fellow at the Berkman Klein Center’s Institute for Rebooting Social Media.

Image: March 1940 meeting of scientists developing the atomic bomb in the Radiation Laboratory at Berkeley, California: Ernest O. Lawrence, Arthur H. Compton, Vannevar Bush, James B. Conant, Karl T. Compton, and Alfred L. Loomis. (Wikimedia)

Over the past month, generative AI has ignited a flurry of discussion about the implications of software that can generate everything from photorealistic images to academic papers and functioning code. During that period, mass adoption has begun in earnest, with generative AI integrated into everything from Photoshop and search engines to software development tools. Microsoft’s Bing has integrated a large language model (LLM) into its search feature, complete with hallucinations of basic facts, oddly manipulative expressions of love, and the occasional “Heil Hitler.” Google’s Bard has fared similarly, getting textbook facts about planetary discovery wrong in its demo. A viral image of the pope in “immaculate drip” created by Midjourney even befuddled experts and celebrities alike who, embracing their inner Fox Mulder, just wanted to believe.

Even in the wake of Silicon Valley Bank’s collapse and a slowdown in the tech industry, the funding, adoption, and embrace of these technologies appears to have occurred before their human counterparts could generate, much less agree on, a complete list of things to be concerned about. Academics have raised the alarm about…

Can Piaget Explain Jair Bolsonaro?

Paulo Blikstein is an Associate Professor at Teachers College, Columbia University, an Affiliate Associate Professor in the Department of Computer Science at Columbia University, and Director of the Transformative Learning Technologies Lab and of the Lemann Center for Brazilian Studies. Renato Russo is a doctoral student at Teachers College and a researcher at the Transformative Learning Technologies Lab.

Swiss cognitive scientist Jean Piaget demonstrated that there is nothing more resilient than a theory we create on our own. Narratives and stories are powerful, but by comparison they lack one crucial property: they don’t make us feel as clever and intellectually capable. We propose that this pleasure and feeling of self-efficacy in theorizing, also documented by decades of neuroscience research, is closely related to current political communication and to the democracy-threatening events that took place in Brazil last January. Beginning with the election of Lula in late October, thousands of Bolsonaro supporters spent as much as two months camped in front of military facilities, mobilized around claims of rigged elections, culminating in the siege of the Brazilian capital on January 8, 2023.

“Fake news” explains part of the campaign that elected Brazil’s Jair Bolsonaro four years ago and that has kept part of his constituency mobilized. But it is only part of the story. Research by media studies and communication scholars such as Francesca Tripodi, Alice Marwick, and Ethan Zuckerman has shown how extremists resort to epistemological practices that in some ways resemble those of scientific communities. Drawing on this scholarship,…

Project Demonstrates Potential of New Transparency Standard for Synthetic Media

Justin Hendrix is CEO and Editor of Tech Policy Press. The views expressed here are his own.

With the proliferation of tools to generate synthetic media, including images and video, there is a great deal of interest in how to mark content artifacts to prove their provenance and disclose other information about how they were generated and edited. This week, Truepic, a firm that aims to provide authenticity infrastructure for the Internet, and Revel.ai, a creative studio that bills itself as a leader in the ethical production of synthetic content, released a “deepfake” video “signed” with such a marking to disclose its origin and source. The experiment could signal how standards adopted by content creators, publishers, and platforms might permit more responsible use of synthetic media by providing viewers with signals that demonstrate transparency. The video features a message delivered by a synthetic representation of Nina Schick, the creator of ‘The Era of Generative AI’ online community and author of the book ‘DEEPFAKES.’

The project follows years of effort by a wide variety of actors, including tech and media firms as well as nonprofit organizations and NGOs, to create the conditions for such signals to meet an interoperable standard. The video is compliant with the open content provenance standard developed by the Coalition for Content Provenance and Authenticity (C2PA), an alliance between Adobe, Intel, Microsoft, Truepic, and a British semiconductor and software design company called Arm. A joint development foundation intended to produce such a standard, the C2PA itself emerged…
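The C2PA specification itself is considerably more involved, but the core idea, cryptographically binding a provenance claim to the exact media bytes so that any tampering is detectable, can be sketched in a few lines. The Python example below is a hypothetical illustration of that concept only; it does not use the real C2PA manifest format or SDK, and the field names are invented for the example.

```python
# Minimal sketch of signed provenance metadata, in the spirit of (but not identical to) C2PA.
# Manifest fields and helper names are illustrative assumptions, not the C2PA format.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def sign_manifest(media_bytes: bytes, manifest: dict,
                  key: ed25519.Ed25519PrivateKey) -> bytes:
    """Bind a provenance manifest to specific media bytes by signing both together."""
    payload = hashlib.sha256(media_bytes).digest() + json.dumps(manifest, sort_keys=True).encode()
    return key.sign(payload)


def verify_manifest(media_bytes: bytes, manifest: dict, signature: bytes,
                    public_key: ed25519.Ed25519PublicKey) -> bool:
    """Return True only if the manifest still matches the media and the signature is valid."""
    payload = hashlib.sha256(media_bytes).digest() + json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False


# Example: a creator discloses that a video is synthetic and signs that disclosure.
creator_key = ed25519.Ed25519PrivateKey.generate()
video_bytes = b"...raw video data..."
manifest = {
    "claim": "This video is AI-generated with the subject's consent",  # illustrative field
    "generator": "example-synthetic-video-tool",                       # illustrative field
}
signature = sign_manifest(video_bytes, manifest, creator_key)

# A viewer's client re-checks the binding; any edit to the video or the claim breaks it.
assert verify_manifest(video_bytes, manifest, signature, creator_key.public_key())
assert not verify_manifest(video_bytes + b"tampered", manifest, signature, creator_key.public_key())
```

In the real standard, the signed manifest travels inside the media file and is checked against certificates from trusted issuers, which is what allows viewers, publishers, and platforms to interoperate on the same signal.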

Unpacking the Privacy Implications of Extended Reality

Daniel Berrick, JD, is a Policy Counsel and Jameson Spivack is Senior Policy Analyst, Immersive Technologies, at the Future of Privacy Forum.

It wasn’t long ago that the “metaverse” seemed to be the buzzword of the year. Although the hype cycle has moved on to generative AI and the uses of ChatGPT, major companies, universities, and even fashion brands continue to invest in immersive projects and platforms. But what does that mean for the average consumer?

What people call the “metaverse” today is actually a collection of technologies, including but not limited to extended reality (XR)—an umbrella term for virtual reality (VR), augmented reality (AR), and mixed reality (MR) tools. XR provides new ways for people of all ages to engage with content, not only for gaming but also for education, health, productivity, and socializing. While these applications have big potential to change the way individuals go about their daily lives, before people make big investments in personal XR devices it is important for them to understand what data these devices and applications collect, how they use this data, and what it all means for privacy. In addition to this clarity and transparency, there is a strong case for implementing regulatory safeguards to ensure privacy protections for everyone in the US. The Future of Privacy Forum’s recently published infographic identifies what data is collected and how, where it is used, and the risks it may raise. XR relies on—and even requires—large volumes and varieties of data that are…
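As a rough illustration of the “volumes and varieties” point, here is a hypothetical sketch of the kinds of raw streams a single headset session can produce. The categories below are assumptions drawn from common descriptions of XR sensing, not from the FPF infographic itself.

```python
# Hypothetical illustration of the kinds of data an XR session can generate.
# Categories are assumptions for the example, not taken from the FPF infographic.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class XRSessionData:
    head_pose: List[Tuple[float, ...]]   # 6-DoF position/orientation, sampled many times per second
    hand_tracking: List[Tuple[float, ...]]  # hand-joint positions; can reveal gestures and habits
    eye_tracking: List[Tuple[float, ...]]   # gaze targets; can expose attention and interest
    spatial_map: bytes                    # depth scan or mesh of the user's physical surroundings
    microphone_audio: bytes               # ambient sound, potentially including bystanders
    derived_inferences: Dict[str, str] = field(default_factory=dict)  # e.g., inferred emotion or identity
```

Even this simplified structure makes clear why the privacy questions extend beyond the wearer to the rooms and people around them.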

Learning from the Past to Shape the Future of Digital Trust and Safety

David Sullivan is the founding Executive Director of the Digital Trust & Safety Partnership, which is made up of technology companies committed to developing industry best practices to ensure consumer safety and trust when using digital services.

From “puffer jacket Pope” deepfakes to rapidly proliferating age verification requirements for social media, public interest in online safety is at an all-time high. Across the United States and around the world, not a day goes by without news of a powerful new digital technology, concern about how that technology could be used for abuse, and accompanying calls for regulation.

This surge of interest in safety is a good thing. With 66 percent of the world’s population using the internet, most of the planet has a stake in how digital services manage safety risks. At the same time, with so many new entrants joining this discussion, we risk forgetting the lessons learned from debates that have been raging since the internet’s inception.

The importance of learning from the past was on display recently at the South by Southwest conference in Austin, Texas, where on a panel on the future of content moderation, we spent most of our time talking about the history of trust and safety over several decades. Since that discussion, several lessons have become apparent about the evolution of online trust and safety, mapped across four distinct eras.

1. Community moderation on the pre-commercial internet

In the beginning, there was the primordial pre-commercial internet. This was a world of bulletin boards…

Can Tech Promote Social Cohesion?

Tim Bernard recently completed an MBA at Cornell Tech, focusing on tech policy and trust & safety issues. He previously led the content moderation team at Seeking Alpha and worked in various capacities in the education sector.

There is arguably a broad consensus that social media presents a challenge to democracy and social cohesion, even if the degree and precise mechanics of that challenge are still contested. An emerging community of engineers and thinkers is also invested in the idea that the power of tech platforms to stoke division might instead be used to promote social cohesion, if the design of their systems can be re-engineered with that goal. If platforms such as Facebook and Twitter have contributed to phenomena such as polarization, the thinking goes, then perhaps they or their successors can do the opposite.

A couple of hundred people interested in exploring this hypothesis came together in San Francisco in February for the inaugural Designing Tech for Social Cohesion conference, the first public event of the Council on Technology and Social Cohesion. The Council is convened by a group of organizations—including Search for Common Ground, the Toda Peace Institute, Braver Angels, More in Common, and the Alliance for Peacebuilding—that work in peacebuilding (often known as bridge building in the US), together with the Center for Humane Technology, which advocates for building tech that contributes to a “humane future that supports our well-being, democratic functioning, and shared information environment.” The Council and conference were initially inspired by…

AI Will Break Online Search

AI is going to give superpowers to blackhat SEO, breaking online search as we know it. RIP the magic box at the top of the browser where we all reflexively type “how to [blank]”, “[blank] near me”, “best [blank] Reddit”, and anything else we seek a semi-reliable answer to at any given moment. Online search results are about to get flooded with astroturf and spam content at a level never before possible.

People have discussed Google Search results going downhill for years. A lot of factors explain why Google Search lost some of its magic; I’ve written thousands of words on the topic. There is near-universal consensus among experts on two factors behind the decline in search quality: the volume of things on the internet (quantity) and the lower average caliber of all those things (quality). They’re not the only factors, but everyone from former Google execs to an ad agency demon like myself agrees they have a major impact on Google’s search product. If this were only a deluge of content from real human people posting on Twitter, TikTok, and poorly edited blogs, the companies in the business of sorting and parsing could handle it. The problem is attempted manipulation of ranking within those companies’ indexes of the web. AI is about to scale those blackhat systems.

AI Will Clog The Internet’s Toilets

“Typeface, a startup developing an AI-powered dashboard for drafting marketing copy and images, emerged from stealth this week with $65 million in venture equity backing,” so begins the TechCrunch article about…