Artificial Intelligence and Your Voice

Audio of this conversation is available via your favorite podcast service. Today’s guest is Wiebke Hutiri, a researcher with particular expertise in design patterns for detecting and mitigating bias in AI systems. Her recent work has focused on voice biometrics, including an open source project called Fair EVA that gathers resources for researchers and developers to audit bias and discrimination in voice technology. I spoke to Hutiri about voice biometrics, voice synthesis, and a range of issues and concerns these technologies present alongside their benefits. What follows is a lightly edited transcript of the discussion.

Wiebke Hutiri: So my name is Wiebke Hutiri. I am an almost finished PhD candidate at the Technical University of Delft in the Netherlands, meaning that my thesis is submitted, but from submission to graduation, it always takes a bit of time.

Justin Hendrix: And can you tell me a bit about your research, about your dissertation, and your interests more broadly?

Wiebke Hutiri: Yeah, so my dissertation has been broadly in the field of responsible AI. And the approach that I’ve taken is one where, for a number of years now, we’ve had a pretty large number of researchers devoting themselves to algorithmic fairness and bias, and to finding ways of detecting and mitigating it. But we’ve got a large number of applications that use, or increasingly use, AI, and their developers and engineers don’t necessarily have a background and expertise in bias and fairness. And so my thesis was really about trying to…