Experts Urge EU to Regard General Purpose AI as Serious Risk

Justin Hendrix is CEO and Editor of Tech Policy Press. Views expressed here are his own.

Fritzchens Fritz / Better Images of AI / GPU shot etched 3 / CC-BY 4.0

One of the points of contention in the drafting of the European Union's AI Act is how to classify risk across the chain of actors involved in developing and deploying systems that incorporate artificial intelligence. Last year, the Council of the European Union introduced new language into the proposed Act taking into account "where AI systems can be used for many different purposes (general purpose AI), and where there may be circumstances where general purpose AI technology gets integrated into another system which may become high-risk."

Now, even as EU policymakers continue to refine versions of the AI Act, a group of international AI experts has published a joint policy brief arguing that general purpose AI (GPAI) systems carry serious risks and must not be exempted under the EU legislation. The experts include Amba Kak and Dr. Sarah Myers West from the AI Now Institute, Dr. Alex Hanna and Dr. Timnit Gebru from the Distributed AI Research Institute, Maximilian Gahntz of the Mozilla Foundation, Irene Solaiman from Hugging Face, Dr. Mehtab Khan from the Yale Law School Information Society Project (ISP), and independent researcher Dr. Zeerak Talat. In total, they are joined by more than 50 institutional and individual signatories.

The brief makes five main points:

GPAI is an expansive category. For the EU AI Act to be future proof, it must apply across…