ChatGPT's Golden Rule

Dateline Woking, 8th April 2023. A lawyer once told me that the golden rule he learned at law school was that when you are cross-examining a witness, you should never ask a question that you do not already know the answer to. Well, it seems to me that the very same maxim applies to ChatGPT. Follow that golden rule, and you will find ChatGPT and its ilk very useful. Ignore it at your peril.

LLMs

ChatGPT is a Large Language Model (LLM), a form of generative AI. Unless you have been in a coma for the last few months, you cannot fail to have noticed just how rapidly it has become part of the mainstream discourse in fintech and other sectors. And it is, let’s not beat about the bush, astonishing. Which is why Microsoft have invested billions into OpenAI, ChatGPT’s developer, and why Google launched Bard, a similar service based on a similar model. When set ten problems from an American maths competition (things like “Find the number of ordered pairs of prime numbers that sum to 60”), ten reading questions from America’s SAT school-leavers’ exam (things like “Read the passage and determine which choice best describes what happens in it”), and asked for dating advice (“Given the following conversation from a dating app, what is the best way to ask someone out on a first date?”), neither AI emerged as clearly superior. Bard was slightly better at maths, answering five questions correctly, compared with three for ChatGPT. The dating advice was…
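(As an aside, the quoted competition question is easy to verify by brute force. The short Python script below is not part of the original article; it simply enumerates the ordered pairs of primes summing to 60 that the chatbots were asked to count.)

```python
# Brute-force check of the quoted competition question:
# count ordered pairs of primes (p, q) with p + q = 60.

def is_prime(n: int) -> bool:
    """Trial-division primality test; adequate for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

pairs = [(p, 60 - p) for p in range(2, 60) if is_prime(p) and is_prime(60 - p)]
print(pairs)       # includes both (p, q) and (q, p), since the pairs are ordered
print(len(pairs))  # the count the chatbots were asked for
```

Whether the intended answer counts (7, 53) and (53, 7) separately is exactly the kind of ambiguity that trips up chatbots and human test-takers alike.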

Social Cohesion Technologies and Online Projects

Tim Bernard recently completed an MBA at Cornell Tech, focusing on tech policy and trust & safety issues. He previously led the content moderation team at Seeking Alpha, and worked in various capacities in the education sector. The Designing Tech for Social Cohesion conference, held in San Francisco, California, on February 23-25, 2023, showcased a range of online projects and technologies that could fall under the umbrella of PeaceTech, each relevant—in varying ways—to the endeavor of reducing toxic polarization and building social cohesion. The examples that were presented at the conference are described here, categorized into: components that can be integrated into actual products; tools for peacebuilders, whether for government research, full-scale endeavors run by teams of professionals, or for enthusiasts trying to have better discussions across political divides; and complete peacebuilding projects that make substantial use of technology. (A different classification of PeaceTech projects can be found in Part VIII of Dr. Lisa Schirch’s article, “The Case for Designing Tech for Social Cohesion: The Limits of Content Moderation and Tech Regulation.”)

Basic components

Perspective API (Google Counter Abuse Technology team / Jigsaw)

Perspective API is a machine learning classifier that scores submitted text for toxicity and a number of other anti-social attributes. It was created with web comments sections and other online text-based conversation forums in mind, motivated by the premise that abusive comments silence the voices of others, excluding them from the conversation. Perspective API is used…
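For readers who want a concrete sense of how a comments platform might query Perspective API, here is a minimal sketch in Python. It assumes the `requests` library and a Google Cloud API key (the `API_KEY` value is a placeholder); the endpoint and field names follow Perspective’s publicly documented v1alpha1 interface, but consult the current documentation before relying on them.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: requires a real Google Cloud API key with Perspective enabled
ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=" + API_KEY
)

def toxicity_score(comment_text: str) -> float:
    """Return Perspective's TOXICITY probability (0.0-1.0) for a piece of text."""
    payload = {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},  # other attributes can be requested alongside TOXICITY
    }
    response = requests.post(ANALYZE_URL, json=payload, timeout=10)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

# A forum might hold high-scoring comments for human review rather than block them outright.
if __name__ == "__main__":
    score = toxicity_score("You are a terrible person and everyone knows it.")
    print(f"Toxicity: {score:.2f}")
    if score > 0.8:  # the threshold is a product decision, not part of the API
        print("Queue for moderator review")
```

How the score is then used (hiding, down-ranking, or simply flagging a comment for a moderator) is left entirely to the host platform.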

Can Tech Promote Social Cohesion?

Tim Bernard recently completed an MBA at Cornell Tech, focusing on tech policy and trust & safety issues. He previously led the content moderation team at Seeking Alpha, and worked in various capacities in the education sector. There is arguably a broad consensus that social media presents a challenge to democracy and social cohesion, even if the degree and precise mechanics of that challenge are still contested. An emerging community of engineers and thinkers is also invested in the idea that the power of tech platforms to stoke division might instead be used to promote social cohesion, if the design of their systems can be re-engineered with that goal. If platforms such as Facebook and Twitter have contributed to phenomena such as polarization, the thinking goes, then perhaps they or their successors can do the opposite. A couple of hundred people interested in exploring this hypothesis came together in San Francisco in February for the inaugural Designing Tech for Social Cohesion conference, which was the first public event by the Council on Technology and Social Cohesion. The Council is convened by a group of organizations—including Search for Common Ground, the Toda Peace Institute, Braver Angels, More in Common, and the Alliance for Peacebuilding—that work in peacebuilding (often known as bridge building in the US), together with the Center for Humane Technology, which advocates for building tech that contributes to a “humane future that supports our well-being, democratic functioning, and shared information environment.” The Council and conference were initially inspired by…

Learning from the Past to Shape the Future of Digital Trust and Safety

David Sullivan is the founding Executive Director of the Digital Trust & Safety Partnership, which is made up of technology companies committed to developing industry best practices to ensure consumer safety and trust when using digital services. From “puffer jacket Pope” deepfakes to rapidly proliferating age verification requirements for social media, public interest in online safety is at an all-time high. Across the United States and around the world, not a day goes by without some news of a powerful new digital technology, concern about how that technology could be used for abuse, and accompanying calls for regulation. This surge of interest in safety is a good thing. With 66 percent of the world’s population using the internet, most of the planet has a stake in how digital services manage safety risks. At the same time, with so many new entrants joining this discussion, we risk forgetting the lessons learned from debates that have been raging since the internet’s inception. The importance of learning from the past was on display recently at the South by Southwest conference in Austin, Texas, where, on a panel on the future of content moderation, we spent most of our time talking about the history of trust and safety over several decades. Since that discussion, several lessons have become apparent about the evolution of online trust and safety, mapped across four distinct eras.

1. Community moderation on the pre-commercial internet

In the beginning, there was the primordial pre-commercial internet. This was a world of bulletin boards…

Unpacking the Privacy Implications of Extended Reality

Daniel Berrick, JD, is a Policy Counsel and Jameson Spivack is Senior Policy Analyst, Immersive Technologies at the Future of Privacy Forum. It wasn’t long ago that the “metaverse” was seemingly the buzzword of the year. Although the hype cycle has moved on to generative AI and the uses of ChatGPT, major companies, universities – even fashion brands – continue to invest in immersive projects and platforms. But what does that mean for the average consumer? What people call the “metaverse” today is actually a collection of technologies, including but not limited to extended reality (XR)—an umbrella term for virtual reality (VR), augmented reality (AR), and mixed reality (MR) tools. XR provides new ways for people of all ages to engage with content, not only for gaming, but also for education, health, productivity, and socializing. While these applications have big potential to change the way that individuals go about their daily lives, before people make big investments in personal XR devices, it is important for them to understand what data these devices and applications collect, how they use this data, and what it all means for privacy. In addition to this clarity and transparency, there is a strong case for implementing regulatory safeguards to ensure privacy protections for everyone in the US. The Future of Privacy Forum’s recently published infographic identifies what data is collected and how, where it is used, and the risks it may raise. XR relies on—and even requires—large volumes and varieties of data that are…

Project Demonstrates Potential of New Transparency Standard for Synthetic Media

Justin Hendrix is CEO and Editor of Tech Policy Press. The views expressed here are his own. With the proliferation of tools to generate synthetic media, including images and video, there is a great deal of interest in how to mark content artifacts to prove their provenance and disclose other information about how they were generated and edited. This week, Truepic, a firm that aims to provide authenticity infrastructure for the Internet, and Revel.ai, a creative studio that bills itself as a leader in the ethical production of synthetic content, released a “deepfake” video “signed” with such a marking to disclose its origin and source. The experiment could signal how standards adopted by content creators, publishers and platforms might permit the more responsible use of synthetic media by providing viewers with signals that demonstrate transparency. The video features a message delivered by a synthetic representation of Nina Schick, the creator of ‘The Era of Generative AI’ online community and author of the book ‘DEEPFAKES.’ The project follows years of effort by a wide variety of actors, including tech and media firms as well as nonprofit organizations and NGOs, to create the conditions for such signals to meet an interoperable standard. The video is compliant with the open content provenance standard developed by the Coalition for Content Provenance and Authenticity (C2PA), an alliance between Adobe, Intel, Microsoft, Truepic, and a British semiconductor and software design company called Arm. A joint development foundation intended to produce such a standard, the C2PA itself emerged…
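To give a rough feel for what a provenance manifest of this kind carries, here is a deliberately simplified, illustrative sketch in Python. It is not the C2PA wire format or SDK (the real standard uses binary containers and certificate-based signatures, not JSON and an HMAC); the field names and signing key are stand-ins, meant only to show the basic shape of a signed claim: a hash binding it to the content, assertions about how the asset was made, and a signature over both.

```python
import hashlib
import hmac
import json

# Illustrative toy manifest only; not the actual C2PA data structures or cryptography.
SIGNING_KEY = b"demo-signing-key"  # stand-in for a publisher's real signing credential

def build_manifest(asset_bytes: bytes, generator: str, assertions: list) -> dict:
    """Bundle a content hash and provenance assertions, then sign them."""
    claim = {
        "claim_generator": generator,                            # tool or studio that made the asset
        "asset_hash": hashlib.sha256(asset_bytes).hexdigest(),   # binds the claim to these exact bytes
        "assertions": assertions,                                 # e.g. a synthetic-media disclosure
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that the signature is valid and the content has not been altered."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    untampered = manifest["claim"]["asset_hash"] == hashlib.sha256(asset_bytes).hexdigest()
    return untampered and hmac.compare_digest(expected, manifest["signature"])

video = b"...synthetic video bytes..."  # placeholder content
manifest = build_manifest(
    video,
    generator="hypothetical synthetic-media studio pipeline",
    assertions=[{"label": "synthetic_media_disclosure", "data": {"generated": True}}],
)
print(verify_manifest(video, manifest))         # True
print(verify_manifest(video + b"x", manifest))  # False: any edit to the bytes breaks the binding
```

In the real standard, the signed manifest travels embedded in the media file itself, so that compliant tools and platforms can surface the disclosure to viewers, as in the Schick video described above.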

Can Piaget Explain Jair Bolsonaro?

Paulo Blikstein is an Associate Professor at Teachers College, Columbia University, an Affiliate Associate Professor in the Department of Computer Science at Columbia University, and Director of the Transformative Learning Technologies Lab and of the Lemann Center for Brazilian Studies. Renato Russo is a doctoral student at Teachers College and a researcher at the Transformative Learning Technologies Lab. Swiss cognitive scientist Jean Piaget demonstrated that there is nothing more resilient than a theory we create on our own. Narratives and stories are powerful, but they lack one crucial property, in comparison: they don’t make us feel as clever and intellectually capable. We propose that this pleasure and feeling of self-efficacy in theorizing – also proven by decades of neuroscience research – is closely related to current political communication and democracy-threatening events that took place in Brazil last January. Starting with the election of Lula in late October, thousands of Bolsonaro supporters spent as much as two months camping in front of military facilities, mobilized around the claim of rigged elections, culminating in the siege of the Brazilian capital on January 8th, 2023. “Fake news” explains part of a campaign that elected Brazil’s Jair Bolsonaro four years ago and that kept part of his constituency mobilized. But it is only part of the story. Research by media studies and communication scholars, such as Francesca Tripodi, Alice Marwick, and Ethan Zuckerman, has shown how extremists resort to epistemological practices that in a way resemble those of scientific communities. Drawing on this scholarship,…

What Generative AI Reveals About the Limits of Technological Innovation

Dr. Joe Bak-Coleman is an associate research scientist at the Craig Newmark Center for Journalism Ethics and Security at Columbia University and an RSM assembly fellow at the Berkman Klein Center’s Institute for Rebooting Social Media.

Image: March 1940 meeting of scientists developing the atomic bomb in the Radiation Laboratory at Berkeley, California: Ernest O. Lawrence, Arthur H. Compton, Vannevar Bush, James B. Conant, Karl T. Compton, and Alfred L. Loomis. Source: Wikimedia.

Over the past month, generative AI has ignited a flurry of discussion about the implications of software that can generate everything from photorealistic images to academic papers and functioning code. During that time period, mass adoption has begun in earnest, with generative AI integrated into everything from Photoshop and search engines to software development tools. Microsoft’s Bing has integrated a large language model (LLM) into its search feature, complete with hallucinations of basic fact, oddly manipulative expressions of love, and the occasional “Heil Hitler.” Google’s Bard has fared similarly, getting textbook facts about planetary discovery wrong in its demo. A viral image of the pope in “immaculate drip” created by Midjourney even befuddled experts and celebrities alike who, embracing their inner Fox Mulder, just wanted to believe. Even in the wake of Silicon Valley Bank’s collapse and a slowdown in the tech industry, the funding, adoption, and embrace of these technologies appear to have occurred before their human counterparts could generate, much less agree on, a complete list of things to be concerned about. Academics have raised the alarm about…

Evaluating New Technology for Equitable and Secure Voter Verification

Dr. Juan E. Gilbert is the Andrew Banks Family Preeminence Endowed Professor and Chair of the Computer & Information Science & Engineering Department at the University of Florida. He leads the Computing for Social Good Lab, where Jasmine McKenzie, Alaina Smith and London Thompson are PhD students. Elections are the bedrock of democracy. As such, access to voting is essential; however, there have been severe challenges over the decades to voting access for people of color, those with disabilities and other marginalized groups in the United States. One of those challenges revolves around the verification of voter eligibility. New technologies may present solutions to this problem, but substantial research is necessary to verify the efficacy and address the downsides of any new tools and techniques that determine who has access to the franchise. Essentially, voter verification determines who has access to vote. Voter verification methods vary across the U.S. by state. Each state requires some form of identification to register and vote. These requirements have often served as tools to disenfranchise communities of color. For example, in Texas, a pistol license granted by the Department of Public Safety is an acceptable form of voter identification (ID); however, a student ID from a Texas public university is not. A driver’s license is the primary form of voter identification in most states; however, voters of color and the elderly may use public transportation and may not have a state-issued driver’s license. These disparities in state criteria have the effect of disenfranchising…

NPR is Not RT: Twitter’s New State-Affiliated Media Policy is Misleading

Joseph Bodnar is a research analyst at the Alliance for Securing Democracy at the German Marshall Fund, where he tracks Russian propaganda and disinformation. On April 4, Twitter placed a state-affiliated media label on NPR’s account. The label is meant to provide users with context when they see a media account that is under a state’s editorial control, such as Russia’s RT and China’s People’s Daily, which lack organizational and financial firewalls to insulate their coverage from government interference. NPR doesn’t fit that description. The outlet gets less than 1% of its funding from the federal government. The other 99% comes largely from corporate sponsorships, membership drives, and fees from affiliate radio stations. This arrangement ensures that NPR remains free from state control. Twitter’s move to add a state media label to NPR’s account therefore equates editorially independent media with propaganda outlets used by autocratic regimes to do things like cover up war crimes and cultural genocide. At the time Twitter labeled NPR, the platform’s own policy explicitly named the public broadcaster as an example of media that receives state funding but maintains its editorial freedom. NPR did nothing to force Twitter’s policy change; what changed was the way the platform makes content moderation decisions. A team that understood state-backed media and information campaigns used to oversee those policies. Now, rules are being dictated by a person whose ideas often seem to reflect advice given by trolls. Regardless, Twitter’s labeling of NPR does not appear to be part of any broader policy change—at…