An Optimist’s Guide to Reining In Big Tech

Paul M. Barrett is the deputy director and senior research scholar of the Center for Business and Human Rights at New York University’s Stern School of Business, where he writes about technology’s effects on democracy. This book review (co-published with Just Security) discusses Regulating Digital Industries: How Public Oversight Can Encourage Competition, Protect Privacy, and Ensure Free Speech by Mark MacCarthy (Brookings Institution Press, 452 pages). Mark MacCarthy’s many years in Washington, first as a regulatory analyst and Congressional staffer and then in corporate advocacy, have left him an unlikely optimist about regulation. He believes that now is the time to rein in the technology giants whose heft and influence have made them targets for pending antitrust lawsuits, Congressional reform attempts, and harsh rhetoric from President Joe Biden and his predecessor. In his new book, Regulating Digital Industries, MacCarthy presents an ambitious strategy for reviving the pro-regulatory energy of the 1930s. He argues that updated New Deal zeal ought to animate a new stand-alone agency to oversee companies in the fields of social media (for example, Meta, owner of Facebook and Instagram; Google, owner of YouTube; and ByteDance, owner of TikTok); online search (Google and Microsoft); e-commerce (Amazon); advertising technology (Google); and mobile app infrastructure (Apple and Google). MacCarthy’s idealism may strike readers as naive, given the fractiousness and cynicism of today’s Washington. And yet, his cogent, highly detailed volume will be a page-turner for policy wonks…

Confronting the Threat of Deepfakes in Politics

Numa Dhamani is an engineer and researcher working at the intersection of technology and society. Maggie Engler is an engineer and researcher currently working on safety for large language models at Inflection AI. On July 17, 2023, Never Back Down, a political action committee (PAC) supporting Ron DeSantis, created an ad attacking former President Donald Trump. The ad accuses Trump of targeting Iowa Governor Kim Reynolds, but a person familiar with the ad confirmed that the audio of Trump’s voice criticizing Reynolds was AI-generated; the content appears to be based on a post Trump made on Truth Social. The ad ran statewide in Iowa the following day with at least a $1 million ad buy. This is not an isolated incident: deepfakes have been circulating on the internet for several years now, and it is particularly concerning how they are, and will be, weaponized in politics. Last month, AI-generated audio recordings of politicians discussing election fraud were released days before a tight election in Slovakia. Deepfakes (AI-generated images, video, and audio) are certainly a threat to democratic processes worldwide, but just how real of a threat are they? Next year, more than 2 billion voters will head to the polls in a record-breaking number of elections around the world, including in the United States, India, and the European Union. Deepfakes are already emerging in the lead-up to the 2024 U.S. presidential election, from Trump himself circulating a fake image of himself kneeling in prayer on Truth Social to a…

Final Fantasy Singer Susan Calloway Is Banned From Toronto Game Convention For Liking Tweets From Riley Gaines

If you’re tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net. Singer Susan Calloway, known for her contribution to the Final Fantasy XIV video game, has been disinvited from the upcoming Final Fantasy enthusiast gathering, KupoCon, set to take place in Toronto. The decision followed revelations that her account on X had liked posts from celebrated female athlete Riley Gaines, who has continually expressed her reservations about male involvement in women’s sports. KupoCon posted an official statement on its website explaining the sudden withdrawal of Calloway’s much-awaited appearance: “On Monday, it was brought to our attention that a series of offensive posts had been interacted with by Susan Calloway’s X account. This included comments and reactions. These interactions spanned almost a year. Promptly addressing this issue, we reached out to Susan for clarification, recognizing that her account had been previously hacked. Regrettably, the ensuing events triggered a wave of abusive comments directed not only at Susan and her supporters, but KupoCon, attendees and the KupoCon team.” The posts liked by Calloway’s account and deemed “offensive” included a meme about socialist college students shared by Turning Point USA, a post by Riley Gaines arguing against males participating in women’s sports matches, and more. Calloway drew criticism from some vocal Final Fantasy fans over these social media interactions, and KupoCon subsequently announced that she would no longer appear at its upcoming Toronto event. The group fell short of stipulating the cause behind the sudden…

Microsoft and Meta Detail Plans To Combat “Election Disinformation” Which Includes Meme Stamp-Style Watermarks and Reliance on “Fact Checkers”

And so it begins. In fact, it hardly ever stops: another election cycle is well on its way in the US. But what has emerged these last few years, and what continues to crop up the closer election day gets, is the role of the most influential social platforms and tech companies. Pressure on them is sometimes public, but mostly not, as the Twitter Files have taught us; and it is with this in mind that the various announcements about combating “election disinformation” coming from Big Tech should be viewed. Although one can never discount the possibility that some, say, Microsoft, are doing it quite voluntarily. That company has now come out with what it calls “new steps to protect elections,” and is framing this concern for election integrity more broadly than just the goings-on in the US. From the EU to India and many, many places in between, elections will be held over the next year or so, says Microsoft; however, these democratic processes are at peril. “While voters exercise this right, another force is also at work to influence and possibly interfere with the outcomes of these consequential contests,” said a blog post co-authored by Microsoft Vice Chair and President Brad Smith. By “another force,” could Smith possibly mean Big Tech? No. It’s “multiple authoritarian nation states” he’s talking about, and Microsoft’s “Election Protection Commitments” seek to counter that threat in…

EU Parliament Agrees on Digital ID Introduction and Pro-Censorship Chief Suggests CBDC Integration

The European Parliament (EP) and the bloc’s member countries have reached a provisional deal on the digital ID framework, and EU Commissioner for Internal Market Thierry Breton is now reported to be suggesting that CBDC (central bank digital currency) integration should follow. The provisional agreement on what’s known as the eID (European Digital Identity) regulation is being presented by the EU Council (which worked on the agreement together with the EP) as a safe and trusted option, and one that “protects democratic rights and values.” Opponents, like Dutch Member of the European Parliament (MEP) Rob Roos, took to X to announce the news and brand it as “very bad.” The reason, according to Roos, is that in the process of striking a deal the two EU institutions “ignored all the privacy experts and security specialists.” https://video.reclaimthenet.org/articles/Rob_Roos-1722304545676497141.mp4 Commissioner Breton wasted no time, perhaps on purpose, building on a momentum that was no doubt difficult to get going, in saying that now that there is a Digital ID Wallet, “we have to put something in it.” Roos reads these comments as suggesting that Breton is talking about a link between eID and (future) CBDCs. In his own post on X, Breton was in a positively celebratory mood, congratulating those who worked on this outcome and calling it “a giant step and a world premiere.” And one that, according to him, guarantees top levels of security and privacy –…

Transcript: US House Hearing on “Advances in Deepfake Technology”

Gabby Miller is Staff Writer for Tech Policy Press. Witnesses testify on Capitol Hill at a hearing on “Advances in Deepfake Technology,” November 8, 2023. On Wednesday, the US House of Representatives Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation held a hearing on the risks and challenges posed by “Advances in Deepfake Technology.” The session was chaired by Rep. Nancy Mace (R-SC), who opened the discussion by emphasizing the dual nature of AI as both a tool for innovation and a potential weapon for harm, highlighting its use in creating highly realistic synthetic images and video. Rep. Mace pointed out the distribution of AI-generated pornographic images and the exploitation of children, citing a letter from the attorneys general of 54 states and territories urging action against AI’s use in generating child sexual abuse material (CSAM). Witnesses included: Dr. David Doermann, Interim Chair, Computer Science and Engineering, State University of New York at Buffalo (written statement); Sam Gregory, Executive Director, WITNESS (written statement); Mounir Ibrahim, Vice President of Public Affairs and Impact, Truepic (written statement); and Spencer Overton, Professor of Law, George Washington University School of Law (written statement). Witnesses testified about the dangers deepfakes pose, especially their role in non-consensual pornography, cyberbullying, and misinformation. There was a consensus on the urgent need for both public awareness and legislative action to combat the misuse of deepfake technology, with suggestions for incorporating digital literacy into education and developing technologies to detect and mark AI-generated content. Professor Overton focused on how women, people…

Bipartisan Letter Calls on Biden To Drop Charges Against Julian Assange

Marjorie Taylor Greene, a Republican, and Alexandria Ocasio-Cortez, a Democrat, have formed an alliance in their mutual objective of freeing the Australian founder of WikiLeaks, journalist Julian Assange. Together with 14 other US Congress members, this unlikely duo has penned a candid letter to President Joe Biden appealing for an immediate halt to the US’s extradition and prosecution plans against Assange. The collective voice warns of potential harm to US-Australia bilateral relations if Assange’s prosecution continues. The letter has been signed by Reps. Alexandria Ocasio-Cortez (D-N.Y.), Jamaal Bowman (D-N.Y.), Ayanna Pressley (D-Mass.), Greg Casar (D-Texas), Ilhan Omar (D-Minn.), Cori Bush (D-Mo.), Rashida Tlaib (D-Mich.), Eric Burlison (R-Mo.), Marjorie Taylor Greene (R-Ga.), Paul Gosar (R-Ariz.), Jesús “Chuy” García (D-Ill.), Pramila Jayapal (D-Wash.), Matthew Rosendale (R-Mont.), and Sen. Rand Paul (R-Ky.). The open letter to the President emphatically states that “it is the duty of journalists to seek out sources, including documentary evidence, in order to report to the public on the activities of the government.” The group further cautions that such a frivolous prosecution could criminalize standard journalistic practices, thereby suppressing the workings of a free press. The collective plea demands that the case be concluded in the shortest possible time frame. Assange remains confined in Belmarsh prison in London, where he is resisting US extradition efforts intended to try him under the Espionage Act, among other charges. This is in…

Want to Keep Teens Safe Online? Listen to Them

Michal Luria is a Research Fellow at the Center for Democracy & Technology and holds a doctorate in human-computer interaction from Carnegie Mellon University. Aliya Bhatia is a Policy Analyst at the Center for Democracy & Technology’s Free Expression Project. Earlier this week, members of the Senate Judiciary Committee held another hearing on the harms young people face online, including sexual solicitation, misogyny, and links to buy drugs. In response, Senators are proposing draconian restrictions on teens’ access to content or entire online services and mandatory parental surveillance, while some state lawmakers even favor digital curfews. Young people, however, turn out to have a thing or two to say about how to keep themselves safe online. We know because we asked them. In new research by the Center for Democracy & Technology, we spoke with 32 people between the ages of 14 and 21 to understand how they feel about unwanted messages online and how they keep themselves safe. The young people we spoke to define “unwanted, unpleasant, or concerning” messages as unsolicited messages that come from strangers, including sexual content. We asked them to submit a diary entry every time they received an unwanted message and found that these were not equally distributed: participants in the study received as many as seven unwanted interactions over three weeks, six participants received only one message, and seven received none at all. “I feel like people vastly overestimate how many unwanted messages we get on platforms,” said one participant. “The risks and…

Perhaps YouTube Fixed Its Algorithm. It Did Not Fix Its Extremism Problem

Cameron Ballard is Director of Research at Pluro Labs, a non-profit that harnesses AI to deepen and defend democracy in the United States and globally. Recent research appears to suggest that YouTube has substantially addressed the problem of online “rabbit holes” that lead individuals to extreme content and misinformation. The reality is that, whatever improvements have been made to its algorithms, YouTube is still a massive repository of dangerous content that spreads across other social media and messaging apps both organically and through recommendations, particularly in non-English-speaking communities. Too little is known about these phenomena, but what is clear is that YouTube is hardly without fault when it comes to the overall volume of hatred, conspiracy theories, and misinformation on social media.

Algorithms are not the entire story

Algorithms are undeniably influential in modern life. They affect not just the online content we consume, but access to credit, employment, medical treatment, judicial sentences, and more. The companies that make them present them as inscrutable systems, impossible for outsiders to understand. The supposed complexity of an algorithm is used not just for marketing; it also allows tech companies to shirk responsibility for their own policies and development priorities. When something goes wrong, a “bad algorithm” is blamed. However, if you peel back the layers of statistical complexity, at the end of the day an algorithm is just a set of instructions: a recipe. If you go to a restaurant and are told the food is bad just because…