The New York Times’s lawsuit against OpenAI and Microsoft highlights an uncomfortable contradiction in how we view creativity and learning. The Times accuses these companies of copyright infringement for training AI on its content, but that accusation ignores a fundamental truth: AI systems learn exactly as humans do, by absorbing, synthesizing and transforming existing knowledge into something new.

Consider how human creators work. No writer, artist or musician exists in a vacuum. Without ancient Greek mythology, for example, we wouldn’t have DC’s pantheon of superheroes, including cinematic staples such as Superman, Wonder Woman and Aquaman, characters who draw unmistakable inspiration from Zeus, Athena and Poseidon, respectively. Without the gods of Mount Olympus as inspiration, there would be no comic book heroes today to save the world (and the summer box office).

This pattern of learning, absorbing and transforming is precisely how large language models operate. They don’t plagiarize or reproduce; they learn patterns and relationships from vast amounts of information, just as humans do. When a novelist reads thousands of books over a lifetime, those works shape their writing style, vocabulary and narrative instincts. We don’t accuse them of copyright infringement, because we understand that transforming influences into original expression is the essence of creativity.

Critics will argue that AI companies profit from others’ work without compensation. This argument misses a crucial distinction between reference and reproduction. When large language models generate text that bears stylistic similarities to the works they trained on, it’s no different from a human author…

Should AI be treated the same way as people are when it comes to copyright law?