Artificial intelligence (AI) has become a major disruptor. Our digital society has facilitated its advances, with opportunities to impact every facet of life, including health care, transportation and security. It has also created threats that have prompted some to call for greater restraint on its development and implementation. The risks have been well defined, including the loss of jobs, the spread of misinformation and even the development of highly autonomous weapons.

A group of technology leaders has called for a pause in certain levels of AI development (systems with capabilities beyond OpenAI's GPT-4), claiming that some AI systems may pose "profound risks to society and humanity." When Geoffrey Hinton left his position at Google because of his concerns about AI, it brought even more attention to these potential risks.

Chatbots have garnered significant attention by providing human-like interactions that would score well on the Turing Test, the procedure proposed by Alan Turing to assess whether a computer system appears human-like in its communication. ChatGPT has been at the forefront of such advances, raising concerns in several domains, particularly education. For example, ChatGPT was able to pass a bar exam taken by lawyers and score above the median on the MCAT exam used for admission to medical school.

The risks of chatbots have been well documented. They can generate misinformation through social media and other communication vectors. They can be harnessed during political campaigns to sway voters with propaganda. They can foment social unrest with targeted messaging that can incite angst and even…

How much restraint is needed with AI?