AI Accountability and the Risks of Social Interfaces

Audio of these conversations is available via your favorite podcast service.

Last week the U.S. National Telecommunications and Information Administration (NTIA) launched an inquiry seeking comment on “what policies will help businesses, government, and the public be able to trust that Artificial Intelligence (AI) systems work as claimed – and without causing harm.” Assistant Secretary of Commerce and NTIA Administrator Alan Davidson announced the request for comment in an appearance at the University of Pittsburgh’s Institute of Cyber Law, Policy, and Security. He was joined by NTIA Senior Advisor for Algorithmic Justice Ellen P. Goodman, who said the goal is to create policy that ensures safe and equitable applications of AI that are transparent, respect civil and human rights, and are compatible with democracy.

In this episode, we’ll hear from Goodman, who is at NTIA on leave from her role as co-director and co-founder of the Rutgers Institute for Information Policy & Law. We’ll also speak with Dr. Michal Luria, a Research Fellow at the Center for Democracy & Technology, whose column in Wired this month ran under the headline “Your ChatGPT Relationship Status Shouldn’t Be Complicated.” Luria says the way people talk to each other is shaped by their social roles, but ChatGPT is blurring the lines of communication.

Transcripts are forthcoming.

The post AI Accountability and the Risks of Social Interfaces appeared first on Tech Policy Press.