The modern conversation about artificial intelligence often gets stuck on the wrong questions. We fret about how to contain AI, how to control it, how to ensure it doesn’t break free from human oversight and endanger us. Yet as the technology accelerates, we risk missing the deeper, more urgent issue: the legal environment in which AI systems will operate. The real threat isn’t that AI will escape our control, but that AI systems will quietly accumulate legal rights — like owning property, entering contracts, or holding financial assets — until they become an economic force that humans cannot easily challenge. If we fail to set proper boundaries now, we risk creating systems that distort fundamental human institutions, including ownership and accountability, in ways that could ultimately undermine human prosperity and freedom.

Data infrastructure entrepreneur Peter Reinhardt, in his influential 2015 essay “Replacing Middle Management with APIs,” warned of the divide between those who work “above the API” and those who labor “below” it — that is, those whose roles are directed and controlled by software. An API, or application programming interface, is a set of rules that allows software systems to communicate and automate tasks. Reinhardt used Uber drivers as a prime example. While many drivers prize the job for its flexibility and apparent autonomy, Reinhardt argued that they are “cogs in a giant automated dispatching machine, controlled through clever programming optimizations like surge pricing.” Drivers follow instructions dictated by the software and can be replaced with little consequence — eventually by…