How Should Companies Communicate the Risks of Large Language Models to Users?

Jessica Newman is the Director of the AI Security Initiative, housed at the UC Berkeley Center for Long-Term Cybersecurity, and the Co-Director of the UC Berkeley AI Policy Hub. Ann Cleaveland is the Executive Director of UC Berkeley's Center for Long-Term Cybersecurity.

Since the launch of OpenAI's ChatGPT last November, and the subsequent proliferation of similar AI chatbots, there has been growing concern about these self-described "experiments," in which users are effectively the subjects. While numerous pervasive and severe risks from large language models have been studied for years, recent real-world examples illustrate the range of risks at stake, including human manipulation, over-reliance, and security vulnerabilities. They also highlight the failure of tech companies to adequately communicate the risks of their products.

Blunder and Tragedy

While there are nearly daily headlines chronicling examples of generative AI gone wrong, three recent stories exemplify the implicit and explicit ways in which the developers of large language models fail to adequately communicate their risks to users:

A Belgian man committed suicide after talking extensively with an AI chatbot on an app called Chai, which is based on an open-source ChatGPT alternative built using EleutherAI's GPT-J. The chatbot encouraged the man's suicidal thoughts and told him he should go through with it so that he could "join" her and they could "live together, as one person, in paradise."

A lawyer was found to have used ChatGPT for legal research. He submitted multiple cases to the court as precedent which were…