Investing in AI Safety through Training and Education

Maria Pepelassis is a technology policy analyst based in London.

[Image: nine small schematic representations of differently shaped neural networks. Alexa Steinbrück / Better Images of AI / CC-BY 4.0]

The European Parliament recently adopted a political agreement on the long-awaited Artificial Intelligence Act (AI Act), which now awaits a plenary vote on 14 June. If enacted, this legislation introduces a clear legal framework establishing concrete obligations and restrictions for the development and deployment of AI-powered products and services. While the AI Act's top-down approach may aid Europe in mitigating the most evident harms these novel technologies pose to citizens, the question remains whether it is sufficient to meet the challenge of preparing 447 million people to responsibly use the AI-powered tools that will drive the EU's digital and green transformations.

Hurry Up and Regulate

Efforts to introduce horizontal measures on artificial intelligence have been ongoing since the AI Act was first introduced in April 2021; however, many of the bill's most contentious elements respond to much more recent breakthroughs. Policymakers and regulators across the globe looked on warily as the AI-powered bot ChatGPT drew over 100 million monthly users within two months of its launch. Italy only recently lifted a temporary ban on ChatGPT, imposing requirements on its developer, OpenAI, to introduce meaningful measures to comply with privacy and age-verification rules. In line with such measures, rapporteurs on the AI Act successfully introduced data processing safeguards and limitations…