Our Future Inside the Fifth Column, Or, What Chatbots Are Really For

Emily Tucker is the Executive Director of the Center on Privacy & Technology at Georgetown Law, where she is also an adjunct professor of law.

Illustrations drawn from Le mécanisme de la parole, suivi de la description d'une machine parlante (The Mechanism of Speech, Followed by the Description of a Talking Machine), Wolfgang von Kempelen, 1791.

If you were a tech company executive, why might you want to build an algorithm capable of duping people into interacting with it as though it were human?

This is perhaps the most fundamental question one would hope journalists covering the roll-out of a technology, acknowledged by its own purveyors to be dangerous, would ask. But it is a question that is almost entirely missing amid the recent hype over what internet beat writers have giddily dubbed the chatbot "arms race." In place of rudimentary corporate accountability reporting, there are a multitude of hot takes on whether chatbots are yet approaching the "Hollywood dream" of a computer superintelligence, industry gossip about panic mode at companies with underperforming chatbots, and transcripts of chatbot "conversations" presented uncritically, in the same amused/bemused way one might share an uncanny fortune-cookie message at the end of a heady dinner. All of this coverage quotes recklessly from the executives and venture capitalists themselves, who issue vague, grandiose prophecies of the doom that threatens us as a result of the very products they are building. Remarkably little thought is given to how such apocalyptic pronouncements might benefit the makers and purveyors of these technologies.

When…