AI’s real problem is that it’s boring

As recent advances in generative AI captured the world’s attention earlier this year, it was common for impressed observers to say: “And this is the worst AI will ever be.” What AI could do with only a few keystrokes, whether conjuring screenshots of fictional movies or writing entire marketing plans, was astonishing, and the technology’s abilities would only improve. Even AI’s detractors, who expressed fears about bias, job displacement, fake news and even usurpation of the species, conceded its promise. A coalition of computer scientists called for a moratorium on AI research, likening it to the nuclear arms race. But everyone agreed that AI had finally arrived and would soon upend everything.

Only a few months later, the AI revolution is running into some trouble. While AI technologies will almost certainly become a mainstay of postmodern life, both individual users and enterprises seeking to exploit them are already finding that the transformative potential isn’t quite here yet. And even once that future arrives, AI may prove less liberating than it is sneakily dissatisfying.

Early adopters of generative AI tools quickly became acquainted with their limitations, such as a lack of nuance and an inability to explain their decisions. Most perplexing is the tendency to “hallucinate” data — a fancy way of saying the technology just makes things up. This realization came too late for two lawyers at Levidow, Levidow & Oberman, who in June were forced to pay $5,000 in fines after they submitted a ChatGPT-written brief that cited nonexistent cases.

Acknowledgment of…