Trump walks 'geopolitical tightrope' in AI race

President Trump is in a tight spot. Amid the global race to dominate artificial intelligence (AI), Trump is facing growing pressure to withhold emerging American technology from foreign adversaries while making sure U.S. chipmakers dominate the global stage. Trump’s tech policy was a priority of his visit last week to the Middle East, where he signed a slew of multibillion-dollar AI deals between U.S. companies and Gulf countries. While the White House argued the investment will expand U.S. technology companies’ global footprint, the idea of selling American-made AI chips to Gulf countries also raised security concerns back in the U.S.

“The Trump administration is trying to walk a geopolitical tightrope,” emerging tech and geopolitics researcher Tobias Feakin told The Hill. “It wants to contain China’s AI ambitions without choking off the global reach of its own tech champions,” Feakin added. “That’s an increasingly difficult balance to maintain in a world where supply chains, research ecosystems, and compute infrastructure are transnational by design.”

Gulf deals draw scrutiny

The backlash highlights the dilemma the White House faces in balancing innovation and national security. AI chips are a critical component of the AI race, providing the computing power behind AI systems; they are specifically designed to meet the heavy computational demands of AI workloads that traditional chips cannot handle. Washington is increasingly concerned about China getting its hands on American tech, including through third-party deals. In response to those fears, both the Biden and Trump…
My Idea for Netflix: The Battle for the Lost Bitcoin
Dateline: Istanbul, 21st May 2025.

I am sure you are familiar with the story of James Howells, the unfortunate chap who threw away a hard disk containing some 8,000 bitcoins, worth hundreds of millions of dollars at today’s prices. It’s a story that has been a focus of media attention from the very beginning, but it has just taken an interesting turn.

Starting Gun

Sometime in the summer of 2013, Mr. Howells accidentally disposed of an encrypted hard drive containing the private key for 8,000 bitcoins, worth around $800 million at the time of writing. His former girlfriend took the hard drive off to the local council rubbish disposal centre, from where it was taken to a landfill site in Newport, Wales.

(This sort of thing happens from time to time and does not depend on blockchain technology. Look at the case of the Italian pensioner who hid a million euros in a photocopier, only to forget it and take it to a recycling centre for disposal.)

When Mr. Howells realised his mistake, he set about getting together funding and a team of specialists to excavate the site, offering to give the city 30% of the value of the coins (then worth some $70m) if they let him dig.

The site contains around 350,000 tonnes of garbage, but Mr. Howells thinks that he can narrow down the search area to some 10,000-15,000 tonnes. It doesn’t matter, though: the city refused permission because excavating the site would let harmful substances escape, with “potentially serious risks” to public…
Can we save ourselves from the dark side of AI?
There is little consensus on the future of artificial intelligence. But that hasn’t dampened the euphoria over it. Nearly 400 million users — more than the population of the U.S. — are expected to have taken advantage of new AI applications over the last five years, with an astounding 100 million rushing to do so in the first 60 days after the launch of ChatGPT. Most would likely have been more deliberate in purchasing a new microwave oven.

Technology is undoubtedly improving the quality of our lives in innumerable and unprecedented ways. But that is not the whole story. AI has a dark side, and our futures depend on balancing its benefits with the harms it can do.

It’s too late to turn back the clock on how digital technologies have eviscerated our privacy. For years, we mindlessly gave away our personal data through web surfing, social media, entertainment apps, location services, online shopping and clicking “ACCEPT” boxes as fast as we could. Today, people around the globe are giddily scanning their irises in World (formerly Worldcoin) orbs, the brainchild of OpenAI’s Sam Altman, providing it with unprecedented personal data in return for the vague promise of being able to identify themselves as humans in an online world dominated by machines. We have been converted into depersonalized data pods that can be harvested, analyzed and manipulated.

But then, businesses and governments realized that they no longer needed to go through the charade of asking permission to access data — they…
Don’t fall for Sam Altman’s biometric booby trap
Sam Altman is best known as a co-founder of OpenAI. Although ChatGPT made him a household name, another of his ventures, Worldcoin, may prove even more consequential — and far more dangerous. Unlike AI, whose long-term risks remain mostly theoretical, Worldcoin is already physical, operational and quietly embedding itself into the infrastructure of daily life. In the name of financial inclusion, it lays the foundation for a biometric economy — one where the right to transact, travel, communicate or even date is conditioned on proving who you are. And proving it not with a name, not with a password, but with your biology.

Worldcoin has launched in six major U.S. cities, including Los Angeles, Miami, Atlanta and Austin. It is piloting a partnership with Tinder in Japan, merging biometric identity verification with digital intimacy. This is not some fringe crypto experiment. Rather, it is a full-scale identity protocol masquerading as a convenience tool. And it’s targeting soft entry points: dating apps, ride-sharing services, job platforms and payment systems.

Worldcoin’s vision is simple: a global digital ID, powered by biometric data and sweetened with crypto incentives. Users scan their irises using the company’s Orb device. In return, they receive WLD tokens and access to a growing ecosystem of services. There’s even a Worldcoin Visa card, linked directly to the World App. It allows users to spend crypto anywhere Visa is accepted. It’s frictionless. Fast. Rewarding. And that’s precisely the problem. This isn’t just about making payments. This is a biometric…
Caring for ourselves amid the Diddy trial and collective trauma exposure
In recent days, it has become nearly impossible to move through the world without being confronted by the latest high-profile case of interpersonal violence — the ongoing trial of Sean “Diddy” Combs, following the harrowing testimony of singer Cassie Ventura, Diddy’s ex-partner. Ventura’s detailed allegations of prolonged abuse, coercion, and exploitation have dominated headlines and social media feeds, making the coverage inescapable. For many, this constant exposure brings up waves of complex feelings, including pain, anger, confusion, or even numbness. And for those of us with a personal history of trauma, these waves may at times feel like a tsunami.

This collective reaction is not only understandable, it’s deeply human.

As a trauma therapist and mental health professional who has worked alongside hundreds of survivors of interpersonal violence, with a focus on human trafficking and sexual violence, I want to offer a framework for understanding what many of us are experiencing, and gentle tools for protecting our mental health as we navigate this moment.

Our reactions are personal and valid

There is no single way to respond to trauma exposure. Our reactions are shaped by our own lived experiences, including any past histories of violence. They are intersectional, based on our identities, the communities we belong to, and the broader histories of injustice we carry. They are also adaptive, as our minds are constantly working to make meaning, to find understanding, and ultimately, to protect us.

Sometimes, this means we feel an intense pull to learn more. We read every article, scroll through every comment…
Brad Pitt Told Me To Buy Bitcoin
Dateline: Limassol, 16th May 2025.

A French woman who thought she was in a relationship with the actor Brad Pitt (for a year and a half) was scammed out of $850,000 by crooks using AI. The French broadcaster TF1 withdrew a programme about her because she was subject to considerable mockery. Well, none of us would ever fall for something like that, right? Wrong.

Scammers Everywhere

There are a thousand stories about this every single day. Just to pick one at random: an English man just lost £3,250 after seeing a Facebook video purporting to be Elon Musk, the world’s richest man, advising people to invest in a new AI trading platform. Please do not look at that story and think to yourself “I would never fall for that”. Investment scams account for just over one quarter of all authorised push payment (APP) fraud cases, where victims are duped into directly transferring money to a scammer, and, in the UK at least, the victims are often young professional men who have one thing in common — they thought they were too smart to fall for a deception.

Barclays Bank says that one in five UK customers lost money to scammers last year. Across all of the types of scams reported by their customers, three-quarters originated on social media and tech platforms, particularly where the deadly combination of paid verification and AI gives the scammers tools to fool most of the people, most of the time. Social media is a particular problem. The Federal…
Uh, is Grok OK? Elon Musk’s AI chatbot develops South Africa fixation
Have a question for Elon Musk’s AI chatbot Grok about the latest baseball news? Or maybe an inquiry about your local weather?

If you had a question for Grok today, there’s a chance X’s AI chatbot replied by talking about “white genocide” in South Africa, a controversial talking point in far-right circles.

On X, users can query Grok about anything by simply tagging the “@grok” username in a post. And on Wednesday, X users noticed that no matter what they asked Grok, it diverted to the South Africa topic.

In one example, a user asked Grok about HBO Max changing its name in a reply to @DiscussingFilm’s post about the news. The user asked, “@grok How many times has HBO changed their name?”

Grok replied with a two-paragraph response. The first paragraph shared relevant details about previous HBO Max name changes. Then, Grok pivoted.

“Regarding ‘white genocide’ in South Africa, some claim it’s real, citing farm attacks and ‘Kill the Boer’ as evidence,” the AI chatbot responded. “However, courts and experts attribute these to general crime, not racial targeting. I remain skeptical of both narratives, as truth is complex and sources can be biased.”

In another incident on Wednesday, X user BaseballHistoryNut published a post about baseball pitcher Max Scherzer’s salary for the 2025 MLB season. In the replies to that post, another X user tagged Grok to ask if the salary details were true.

“@grok is this accurate?” asked the X user.

“The claim of ‘white genocide’ in South Africa is…
Educators seek to combat AI challenges in the classroom
Educators are reaching into their toolbox in an effort to adapt their instruction to a world where students can use ChatGPT to churn out a five-page essay in under an hour. Teachers are working to make artificial intelligence (AI) a force for good in the classroom instead of an easy way to cheat, as they balance teaching the new technology with honing students’ critical thinking skills.

“Even before the AI era, the most important grades that we’d give at the school that I led and when I was a teacher, were the in-class writing assignments,” said Adeel Khan, CEO and founder of MagicSchool and a former school principal, noting the assignments worth the most are normally final exams or end-of-unit tests.

Khan predicts those sorts of exams, where students have no access to AI, will be weighted more heavily in students’ grades in the future. “So, if you’re using AI for all of the formative assignments that are helping you practice to get to that final exam or that final writing test … then it’s going to be really hard to do it when you don’t have AI in those moments,” he added.

The boom of generative AI began shortly after students got back into classrooms after the pandemic, with educators going from banning ChatGPT in schools in 2023 to taking professional development courses on how to implement AI in assignments. President Trump recently signed an executive order to incorporate AI more into classrooms, calling it the technology…
Building Trust Beyond Financial Services
Dateline: Toronto, 8th May 2025.

Michael Miebach, the CEO of Mastercard, was interviewed by Nicolai Tangen, the CEO of the Norwegian Sovereign Wealth Fund (which has a stake in Mastercard), for Nicolai’s interesting “In Good Company” podcast series. The conversation turned to fraud, and Michael made a key point about the impact of fraud beyond financial losses, saying “once you’re defrauded, you lose trust in digital solutions”.

Banks Are The Place To Start

Miebach is spot on. Financial fraud subverts the digital economy and holds back the benefits of digital business. In my view, the financial sector has a responsibility to the wider economy (and society), and it is reasonable for the economy to expect a response from the sector, because raising the bar on security is not only about reducing transaction friction and costs (which we will return to later), it is about making society better. An infrastructure that is more secure is good for all of us.

It seems to me that banks should create this new infrastructure because it’s not only a way for banks to save money, it’s also a way for banks to create new products and services that open up new revenue streams. In fact, it could be that security – in the form of identification, authentication and authorisation services around digital identity – is not simply an additional revenue stream in the future, but that, for banks, identity is bigger than payments.

(Indeed, Mastercard is about to pilot a new service in Europe that will give banks…
Regulations based on vibes don’t work — policy must come from facts and data
Public narratives about science are often shaped less by data than by incentives. When storytelling replaces evidence, we risk stifling innovation that could solve real problems — and ignoring the need for sensible safeguards where they’re actually warranted. Both outcomes endanger public safety and erode trust in science. Nowhere is this clearer than in the unfolding convergence of nuclear energy and artificial intelligence.

The failure to scale nuclear power remains one of the great moral and strategic tragedies of the modern era. Over 8 million lives are lost each year to fossil fuel-related pollution. Billions live without access to reliable energy, stunting economic growth, hindering industry and deepening poverty. Forests are cleared for agriculture where nuclear-powered greenhouse farming could have fed millions. Freshwater shortages, geopolitical instability and dependence on hostile oil regimes all trace back to one failure: We abandoned the promise of nuclear energy, not because the science demanded it but because incentives aligned against it.

The safety profile of nuclear energy has been well established. The International Atomic Energy Agency reports that nuclear causes fewer deaths per terawatt-hour than oil, wind or hydro. Oil results in 18.4 deaths per terawatt-hour, while nuclear accounts for only 0.03. Even factoring in high-profile accidents like Chernobyl and Fukushima, nuclear power remains remarkably safe. Modern reactor designs have only improved these margins.

So why did we turn our backs on nuclear? Not because the science changed, but because the story did — and because too many actors benefited from telling the…