First, Do No Harm: Algorithms, AI, and Digital Product Liability

Marc Pfeiffer is the Assistant Director of the Bloustein Local Government Research Center at the Rutgers Bloustein School of Planning and Public Policy.

Some lawmakers and policy advocates in the United States have proposed creating a standalone federal agency to conduct safety reviews of artificial intelligence (AI) and other algorithmic systems, which industry would submit for "approval." Such approaches are destined to fail. Technological innovation moves much faster than government policy development, and the time required to establish licensing, regulatory, and permitting procedures would slow innovation, along with its potential economic and societal benefits, to a crawl. A new federal agency also needs time to coalesce; staffing it and establishing its practices would be a speculative, time-consuming undertaking.

Adding such mechanisms to the portfolios of existing agencies is less risky, but it poses similar challenges along with the added risk of inter-agency conflict. Either way, regulation that foresees every possible risk is not attainable, and no government agency will have the answer to every potential harm from a technology poised to change so much of how we live and work.

Instead, we need to develop new procedures to manage risk. One approach would be to reimagine liability laws, updating them for the age of AI. Enhancing current U.S. liability laws to address algorithmic harms would force developers to consider and manage the full range of potential risks engendered by their products.