OpenAI's warning shot shows the fragile state of EU regulatory dominance

On May 24, OpenAI, the company that created ChatGPT, announced that it may discontinue business in the European Union (EU). The announcement followed the European Parliament's recent vote to adopt its new AI Act. In response to criticism from EU industry chief Thierry Breton, OpenAI CEO Sam Altman tweeted a day later that OpenAI has no plans to leave Europe. Yet the very threat of such a departure underscores the need for continued dialogue on AI regulation. As competition intensifies, regulatory collisions within this multi-trillion-dollar industry could be disastrous: a U.S.-EU misalignment could generate huge inefficiencies, duplicated effort, and opportunity costs.

Most importantly, the mere possibility that OpenAI would depart could signal the demise, or significant weakening, of European regulatory primacy — something known as the "Brussels Effect" — given the company's widespread influence and applications. The Brussels Effect describes how the EU's market forces alone are enough to incentivize multinational companies to voluntarily abide by its regulations and to encourage other countries to adopt similar laws of their own.

In recent years, however, the EU has implemented various interventionist policies that critics argue hinder innovation both within and outside the region. One of these policies is the General Data Protection Regulation (GDPR). Enacted in 2016 and in effect since May 2018, it prompted many data privacy law copycats worldwide, including within many U.S. states. However, many have argued that its vague provisions and lack of guidance for companies looking to comply have rendered it ineffective. Another example is the EU's Digital Markets Act (DMA), enacted…

Musk Says Twitter Has “No Actual Choice” But Comply With Government Censorship Demands

Following criticism for complying with governments' censorship demands, Twitter owner Elon Musk said that the platform has "no actual choice" when it comes to such requests.

When he announced his interest in buying Twitter last year, Musk claimed to be a "free speech absolutist." He argued that "we have free speech" when "someone you don't like" is "allowed to say something you don't like." At the time, Musk added that under his leadership, Twitter would "be very reluctant to delete things," shy away from permanent bans, and allow all legal speech.

In recent weeks, Musk has faced criticism over the removal of content and accounts at the request of the Turkish government ahead of the country's elections. On Sunday, columnist Matthew Yglesias tweeted that since Musk took over, Twitter has complied with a majority of censorship demands from governments. Musk responded: "Please point out where we had an actual choice and we will reverse it."

Musk had previously said that Twitter would comply with social media laws around the world even when those laws contradict his vision of absolute free speech. In a separate tweet last year, he said, "By 'free speech,' I simply mean that which matches the law. I am against censorship that goes far beyond the law."

In its last removal request report before Musk took over, Twitter said it received over 47,000 removal requests in the second half of 2021 and complied with 51% of them. However, even the previous Twitter complied with censorship requests from…

Telegram Says It Won’t Respond to Political Censorship Requests

Telegram said it will not participate in political censorship. The messaging service has not been cooperating with the Malaysian Communications and Digital Ministry to take down political content.

Speaking to the New Straits Times, a spokesperson for Telegram, Remi Vaughn, said that the platform actively moderates harmful content, including public pornography and the sale of illegal drugs, and removes content that violates its terms of service. Vaughn added that the platform actively monitors public content and also responds to reports from users through the app and an email address. "Telegram will not, however, participate in any form of political censorship," Vaughn added in the statement.

Last week, Communications and Digital Minister Fahmi Fadzil confirmed that Telegram has not been complying with requests from the government since January. He added that the Malaysian Communications and Multimedia Commission (MCMC), which regulates tech platforms, will consider how to respond to the messaging service's non-compliance.

The post Telegram Says It Won't Respond to Political Censorship Requests appeared first on Reclaim The Net.

Adobe Tool To Stamp Out “Misinformation” Is Being Added Directly To Modern Cameras

A transparency report from Adobe on how it plans to cooperate with Australia's Code of Practice on Disinformation and Misinformation has revealed how far its rollout of technology that embeds data in images to curb misinformation has progressed.

The Content Authenticity Initiative (CAI), launched by Adobe in 2019 in collaboration with Twitter and the New York Times, announced a partnership with Nikon and Leica to bring image-marking technology to the Nikon Z9 and Leica M11 cameras. The technology, at least according to the CAI, will increase trust in photographers' digital work by securing provenance information at the point of capture, including location, time, and how the image was taken. Provenance refers to the facts about a piece of digital content, such as its origin.

The CAI aims to restore trust in images people see online by embedding provenance information from the moment an image is first captured. According to its website, the CAI is "a group working together to fight misinformation and add a layer of verifiable trust to all types of digital content, starting with photo and video, through provenance and attribution solutions." The CAI also created the Coalition for Content Provenance and Authenticity (C2PA), a camera-industry standard intended to help establish trustworthy content and the attribution of creators.

"As a leader at the forefront of innovative photography and the first camera manufacturer to join both the CAI and C2PA, Nikon's use of CAI technology will accelerate implementation of the provenance technology for millions…
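The core idea of "securing provenance at the point of capture" can be sketched in a few lines. This is a toy illustration, not the actual C2PA format (which uses certificate-based public-key signatures rather than a shared secret): hash the image bytes, bundle the hash with capture metadata, and sign the bundle so that any later edit to the image is detectable.

```python
# Toy sketch of capture-time provenance signing (not the real C2PA spec).
import hashlib
import hmac
import json

CAMERA_KEY = b"device-private-key"  # hypothetical per-device secret

def make_provenance_record(image_bytes, metadata):
    """Bundle capture metadata with a hash of the image, then sign it."""
    payload = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,  # e.g. location, time, capture settings
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(CAMERA_KEY, blob, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify(record, image_bytes):
    """A viewer re-derives the hash and checks the signature."""
    payload = record["payload"]
    if hashlib.sha256(image_bytes).hexdigest() != payload["image_sha256"]:
        return False  # the image no longer matches its provenance record
    blob = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(CAMERA_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

photo = b"...raw sensor data..."
record = make_provenance_record(photo, {"time": "2023-05-31T10:00Z"})
print(verify(record, photo))         # True: image matches its record
print(verify(record, photo + b"x"))  # False: any edit breaks the link
```

The real standard attaches such records inside the image file and chains them across edits, but the tamper-evidence property is the same: the provenance claim is only as good as the signature binding it to the pixels.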

Financial Services, Financial Health And Banking’s Future

Dateline: Paris, 31st May 2023.

Apple now has a high-yield savings account (at 4.15%) for its Apple Card customers. The account is provided in partnership with Goldman Sachs. It offers a higher rate than Goldman's own offerings, is FDIC insured, and consumers can fund it from their Apple Cash balances or directly from a linked bank account. Why would Goldman Sachs (which has just reported a 19% decline in first-quarter profits on weaker revenue, higher expenses, and a $470 million loss from selling some of its consumer loans) do this? Well, it's because they have no choice: it is the future of banking.

It's Apple, It's News

There's no reason to doubt that Apple will make a success of this. The account is not in the top ten of best interest rates, and higher-yield savings accounts have been around for a while, yet this one resonates in the mainstream. As Ted Rossman from Bankrate says, "The fact that Apple is involved makes it news". Indeed it does.

Emerging AI Governance is an Opportunity for Business Leaders to Accelerate Innovation and Profitability

Abhishek Gupta is the Senior Responsible AI Leader & Expert with the Boston Consulting Group (BCG) and also the Founder & Principal Researcher at the Montreal AI Ethics Institute; Risto Uuk is a Policy Researcher at the Future of Life Institute; Richard Mallah is an AI Safety Researcher at the Future of Life Institute; and Frances Pye is an Associate at the Boston Consulting Group.

As AI capabilities rapidly advance, especially in generative AI, there is a growing need for systems of governance to ensure we develop AI responsibly, in a way that is beneficial for society. Much of the current Responsible AI (RAI) discussion focuses on risk mitigation. Although important, this precautionary narrative overlooks the ways in which regulation and governance can promote innovation.

If companies across industries take a proactive approach to corporate governance, we argue, this could boost innovation (in the spirit of the UK Government's whitepaper on a pro-innovation approach to AI regulation) and profitability, both for individual companies and for the entire industry that designs, develops, and deploys AI. This can be achieved through a variety of mechanisms we outline below, including increased quality of systems, project viability, a safety race to the top, usage feedback, and increased funding and signaling from governments. Organizations that recognize this early can not only boost innovation and profitability sooner but also potentially benefit from a first-mover advantage.

1. Impactful…

Remote Query Execution: A Powerful Way to do Privacy-Protecting Research on Platform Data

Jonathan Stray, Senior Scientist at the Center for Human-Compatible Artificial Intelligence (CHAI) at Berkeley, and Brandie Nonnecke, founding director of the CITRIS Policy Lab at UC Berkeley, provide comment to the European Commission on researcher access to platform data under the Digital Services Act.

The recently passed EU Digital Services Act (DSA) includes a provision for external researchers to request access to internal platform data for the purpose of evaluating certain systemic risks of very large online platforms (including illegal content, threats to elections, effects on mental health, and more). The Act says that user privacy, trade secrets, and data security must be respected, but it doesn't say how. The European Commission invited public comment to determine how best to administer researchers' access.

This comment builds upon our UC Berkeley submission, further detailing an approach to researcher data access that is simple and powerful, yet protects the rights of users and platforms. It is based on a straightforward idea: send the researcher's analysis code to the platform data, rather than sending platform data to researchers. The process would work like this:

1. Platforms publish synthetic data sets — fake data with the same format as the real data.
2. Researchers develop their query and analysis code on this synthetic data, then submit their code to the platform for execution.
3. The query can perform arbitrarily complex analysis, but returns only aggregated results to the researcher.

There is no standard name for this data access strategy, even though it has been used in many contexts. In…
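The three steps above can be sketched in a few lines of Python. This is a hypothetical illustration of the flow, not the Commission's actual interface; the field names and the flag rate are made up.

```python
# Sketch of remote query execution: code travels to the data,
# only aggregates travel back. All names here are illustrative.
import random
from collections import Counter

# Step 1: the platform publishes synthetic data with the real schema.
def make_synthetic_posts(n=1000, seed=0):
    rng = random.Random(seed)
    topics = ["elections", "health", "other"]
    return [{"topic": rng.choice(topics), "flagged": rng.random() < 0.1}
            for _ in range(n)]

# Step 2: the researcher develops analysis code against the synthetic
# data, then submits this function to the platform for execution.
def analysis(posts):
    """Arbitrary analysis, but returns only aggregates, never records."""
    flagged_by_topic = Counter(p["topic"] for p in posts if p["flagged"])
    return {"total": len(posts), "flagged_by_topic": dict(flagged_by_topic)}

# Step 3: the platform runs the submitted code on the real data and
# returns only the aggregate (here we run it on the synthetic stand-in).
synthetic = make_synthetic_posts()
result = analysis(synthetic)
print(result)
```

The key property is visible in the return value: individual posts never leave the platform, so privacy review can focus on whether the aggregate output is safe to release (e.g. minimum group sizes), rather than on the researcher's entire codebase.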

Being Broke Is Changing Ethics In Journalism

Jonathan Snowden wrote an incredibly thoughtful article about ethics in MMA media, and journalism more broadly. What Snowden points out is just how arbitrary the lines of ethics often are in reporting. I have another point: the ethical lines in reporting are changing, because they cannot be sustained.

Look, I'm an ad agency demon who flits into the press corps sometimes, normally with a disclaimer that most of my income is from an ad agency and not writing. But I know as much about the media industry as most working reporters. Most members of the press are just a few steps down the ad-dollar hall from me. Advertisers who pay for ads hire firms like mine to manage that spend. Those firms send money to ad tech companies, which take a cut and in turn share a cut of that money with publishers, who dole out fees to reporters. It's a pyramid scheme of sorts.

Snowden asks: what's the difference between receiving thousands of dollars of free tickets to cover events (ethically fine) and receiving free hotel rooms (ethically compromised)? The answer is obfuscation. It's morally a-okay to have a publication's accounting department pay for hotel rooms with ad dollars from the event the journalists are covering, but skip a step, and it's over. The outlets often have no way to claim that money from event advertising isn't the money used to fund the travel expenses of reporters.

In the modern media economy, a tentpole event, like a large boxing match,…

Twitter is now worth one-third what Elon Musk paid

Ever since Elon Musk acquired Twitter for $44 billion last year, it has been widely agreed that he greatly overpaid for the social media platform. However, the degree to which he overpaid seems to be widening post-acquisition.

According to Fidelity, Twitter is now worth around 33 percent of what the billionaire originally paid for it. That puts Twitter's value at roughly $15 billion. The number comes from the investment firm's valuation of its own stake in Musk's Twitter, an acquisition Fidelity helped finance.

Twitter's valuation from Fidelity follows a pattern since Musk took over in October of last year. Fidelity has consistently downgraded its holdings in the company, knocking the value of its stake down by 56 percent just a month after the acquisition closed. By the end of February, Fidelity had further downgraded its stake by more than 63 percent, before knocking it down by a full two-thirds this month.

Despite Musk's recent claims that Twitter will soon break even or even become profitable, the company's outlook has not been particularly good. Twitter lost around half of its biggest advertisers when Musk took over. Many still had not returned by earlier this year, and those who continued to advertise on the platform were spending significantly less.

Musk turned to subscription-based revenue models like Twitter Blue and Subscriptions to make up for those losses, but even those have proven unsuccessful. Twitter Blue is an $8-per-month subscription service that offers premium features, such as longer tweets and videos, as well as…
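The arithmetic behind the reported figures is easy to check. This back-of-the-envelope sketch assumes each writedown percentage is a cumulative cut to the original $44 billion purchase price, which is how the article's figures read; that interpretation, and the month labels, are assumptions rather than Fidelity's stated methodology.

```python
# Implied Twitter valuations from Fidelity's reported writedowns,
# treating each percentage as a cumulative cut to the original
# $44B purchase price (an assumption, not Fidelity's own method).
PURCHASE_PRICE_B = 44.0

writedowns = [
    ("Nov 2022", 0.56),   # a month after the deal closed
    ("Feb 2023", 0.63),   # "more than 63 percent"
    ("May 2023", 2 / 3),  # "a full two-thirds"
]

for label, cut in writedowns:
    implied_b = PURCHASE_PRICE_B * (1 - cut)
    print(f"{label}: implied value ~${implied_b:.1f}B")

# One-third of $44B is about $15B, matching the article's figure.
one_third_b = PURCHASE_PRICE_B / 3
print(f"one third of purchase price: ~${one_third_b:.1f}B")
```
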

The AI Anti-Utopia, And Other Stories

Software is replacing artists and writers who enjoy their work, while warehouse staffers pee in bottles to make quota. AI is turning into a shitty anti-utopia.

As I write this, the Writers Guild of America (WGA) has been on strike for 28 days. In its negotiation with the Alliance of Motion Picture and Television Producers (AMPTP), the WGA is seeking what amounts to four things:

- Better residuals
- Minimum staffing requirements
- Shorter exclusivity periods
- A commitment that AI will not replace them

Housekeeping

This article was written for broad syndication, along with links to my other recent work. Since I missed a couple of weeks, I have a few more pieces than normal.

Recent Articles

The Advertising Pyramid Scheme

Streaming companies can earn more from an ad-supported account than most consumers are willing to pay for the service, creating a bizarre world where everyone wants in on the ads. Offering ad inventory, big-data targeting, or AI for an ad tech stack is incredibly profitable. A startup called Telly plans on giving away millions of premium TVs with an attached smaller second TV used to serve ads.

The advertising economy is starting to resemble a collection of interconnected multi-level marketing schemes. Some kind of intermediary firm sits atop each pyramid, brokering the serving of ads to eyeballs. Under that are ad agencies and tech firms that work on sales and bid management. At the bottom are ad buyers, perhaps too few to sustain all these pyramids.

Grief tech…