Gary Marcus, cofounder of the Center for the Advancement of Trustworthy AI, has for years been highly critical of generative artificial intelligence and large language model applications like OpenAI’s ChatGPT. These programs consume vast quantities of data to perform various functions, from creating new cocktail recipes to sharing insights about how proteins fold. Marcus recently wrote that there are “not one, but many, serious, unsolved problems at the core of generative AI.”

He isn’t alone. During an interview earlier this month, theoretical physicist Michio Kaku dismissed AI chatbots as “glorified tape recorders” that are only a “warped mirror of what’s on the internet the last 20 years.” Yet that hasn’t stopped popular culture, business blogs, and tech enthusiasts from contemplating their supposedly revolutionary implications.

There are many unknowns about generative artificial intelligence and its role in American society, but one point is becoming clear: open-source AI tools are turning the internet into an even murkier den of confusion.

One of Marcus’s chief concerns is that these models can create self-amplifying echo chambers of flawed or even fabricated information, whether by design or by accident. AI researchers Maggie Harrison and Jathan Sadowski have each drawn attention to what the latter cleverly termed “Habsburg AI,” which arises when AI-generated output is fed back into another AI program in a loop. What results is a sort of information “inbreeding” that drives the AI mad, causing it to spew abominations of data; the short simulation at the end of this section makes the mechanics concrete.

Yet even absent these conditions, human influence on the information-filtering process creates opportunities for additional forms of distortion. Practices known as search-engine poisoning, keyword stuffing, or spamdexing involve programmers boosting the visibility of…
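To see why such a loop degrades so quickly, consider a minimal sketch in Python. It is purely illustrative: the “model” here is nothing more than a two-parameter Gaussian fit, a deliberately crude stand-in for a real training pipeline, and the sample sizes and generation count are arbitrary assumptions. Each generation is trained only on the previous generation’s synthetic output.

```python
import random
import statistics

def train(data):
    # "Train" the toy model: fit a Gaussian by estimating mean and stdev.
    return statistics.mean(data), statistics.stdev(data)

def generate(model, n):
    # "Generate" synthetic data: draw n samples from the fitted Gaussian.
    mu, sigma = model
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(42)

# Generation 0: "human-made" data from a known distribution.
data = [random.gauss(0.0, 1.0) for _ in range(50)]

for gen in range(1, 21):
    model = train(data)         # fit on whatever data currently exists...
    data = generate(model, 50)  # ...then replace it all with model output
    print(f"gen {gen:2d}: mean={model[0]:+.3f}  stdev={model[1]:.3f}")
```

Because each generation inherits and then resamples the previous generation’s estimation error, the fitted parameters wander away from the original distribution with nothing pulling them back. Real generative models are incomparably more complex, but this compounding of error is the basic dynamic the “Habsburg AI” label describes.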