If a powerful technology poses significant risks to business and society, should it ever be freely available? Many argue that AI falls into this category. Some even warn of existential threats. Since the advent of foundation models like those behind ChatGPT, debates among AI experts, executives, and regulators have centered on whether these models should be open-sourced. But this has been the wrong focus all along. The emergence of DeepSeek, and its creators' decision to open-source an AI model nearly on par with frontier models at a significantly lower cost, shifts the debate. The question is no longer "if" but "how" we open-source AI: maximizing its benefits while managing safety and misuse concerns.

Open-source AI extends the idea beyond code to include data, algorithms, and model weights, the parameters learned during training. A fully open-source AI system includes open datasets, open-source code, and open model weights, but many organizations release only the weights, which limits the ability to fully understand or rebuild the system. The picture becomes murkier when the weights are trained on data that is not disclosed, which can raise liability concerns. Openness can encourage innovation, but it also raises questions about accountability and security risks.

The "unexpected" rise of DeepSeek, however, suggests that AI foundation models may be on a one-way path. The shift toward openness for these models, which can power applications across many domains and generate financial value that in turn funds further model improvements, may simply prove inevitable. Just like Linux before it, open-source AI is definitely happening; the only question is how.