Humans Must Control Human-Serving AI

Trooper Sanders is CEO of Benefits Data Trust, a data and technology-powered nonprofit solving America’s $80 billion benefits access challenge.

Image by Alan Warburton / © BBC / Better Images of AI / Medicine / CC-BY 4.0

For leaders of human services agencies delivering food, health care, and other critical assistance to people and families in need, the fizzy talk about artificial intelligence (AI) may seem enticing. Dedicated public servants on the front lines of fighting poverty, illness, and exclusion made it through the unprecedented demands of the COVID-19 pandemic only to find themselves adjusting to a new normal: serving their communities while severely understaffed, overstretched, and reliant upon systems and practices from a bygone era.

The purveyors of AI say their wares will bring an unprecedented boost to productivity, knowledge, and wealth creation. The truth is that AI brings many benefits, but it is wise for governments and nonprofits to proceed with caution. The promise of AI is tempered by well-known risks such as exacerbating bias and discrimination, contributing to a toxic workplace, and menacing privacy. These failings should be of particular concern to the leaders of human services agencies. Indeed, rogue code can upend an agency’s mission to improve and save lives.

An ongoing scandal in Australia tells a cautionary tale. In 2015, the Australian government used AI to turbocharge efforts to track and recover suspected fraudulent social security payments. The cabinet minister in charge of social services at the time said he would be a “strong welfare…