Can Europe’s Laws Keep Up with AI’s New Arms Race?

Ben Lennett, a tech policy researcher and writer focused on understanding the impact of social media and digital platforms on democracy, is an editor at Tech Policy Press.

A full two years before the introduction of ChatGPT jarred the public debate over the harms of big tech, Europe was already working out a legal framework to regulate the use of artificial intelligence (AI) technologies. The AI Act proposes a risk-based approach to regulation, focused on identifying potentially harmful uses of AI systems and placing obligations on companies to minimize the risk to the public.

A presentation from the European Commission visualized the AI Act's regulatory structure as a pyramid, with a small handful of banned uses at the top. These uses, such as social scoring or predictive policing, pose an unacceptable risk to the public and are therefore prohibited. One level down, high-risk uses, including medical devices and AI in essential government services, are permitted, but with requirements to establish and implement risk management processes. Further down, lower-risk uses like consumer-facing services are allowed, subject to transparency obligations such as notifying users that they are interacting with an AI system and labeling deepfakes. Finally, at the bottom, minimal- or no-risk uses are permitted without restrictions.

It is a prudent approach, one that recognizes that AI is a set of different technologies, tools, and methods that can be used to benefit the public or, whether intentionally or not, implemented in a manner that creates significant…