Model Release · March 20, 2026

DeepSeek R1 Goes Fully Open Source with MIT License

DeepSeek has taken the AI community by surprise with the full open-source release of DeepSeek R1 under the MIT license. Previously available only through API access, the complete model weights and distilled variants are now freely downloadable, with official GGUF quantizations published on Hugging Face.

The R1 model family

DeepSeek R1 is a reasoning-focused model trained with reinforcement learning to perform chain-of-thought reasoning before answering. The full model is massive: 671B total parameters in a mixture-of-experts configuration, with roughly 37B active per token. More practically useful are the distilled variants: R1-Distill-Qwen-1.5B, R1-Distill-Qwen-7B, R1-Distill-Llama-8B, R1-Distill-Qwen-14B, R1-Distill-Qwen-32B, and R1-Distill-Llama-70B. These distilled models capture much of the full model's reasoning ability in sizes that run on consumer hardware.

Local inference highlights

The 8B distilled variant is particularly impressive. It outperforms many general-purpose 13B models on math and logic tasks while fitting in roughly 6GB of VRAM at Q4_K_M quantization. The reasoning traces are visible in the output, showing step-by-step problem solving that lets users verify the logic. The 32B distill is the sweet spot for users with 24GB GPUs, offering near-frontier reasoning capability.
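As a rough sanity check on those figures, here is a back-of-the-envelope VRAM estimate. The 4.8 bits-per-weight figure for Q4_K_M is an approximation (real llama.cpp files vary slightly by tensor mix), and the 1GB allowance for KV cache and runtime overhead is an assumption, not a measured value:

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float = 4.8,
                     overhead_gb: float = 1.0) -> float:
    """Approximate VRAM in GB for a params_b-billion-parameter model.

    bits_per_weight=4.8 approximates Q4_K_M; overhead_gb covers
    KV cache and runtime buffers (assumed, not measured).
    """
    weight_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    return weight_gb + overhead_gb

# 8B distill at Q4_K_M: weights ~4.8GB + ~1GB overhead
print(round(estimate_vram_gb(8), 1))  # → 5.8
```

The result lands just under the 6GB figure cited above, which is why the 8B distill squeezes onto common 6GB and 8GB consumer GPUs.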

Impact on the industry

The MIT license means R1 can be used commercially without restrictions, fine-tuned, merged with other models, and redistributed. This has already sparked a wave of community fine-tunes and merges. Several providers have built R1-based products within weeks of the release. The move also puts competitive pressure on other labs to open their reasoning models.

Running R1 locally

All distilled variants work with Ollama, LM Studio, and llama.cpp. RunThisModel shows compatibility grades for the 1.5B and 8B distills, which are the most popular choices for local users. The 1.5B variant runs on virtually any hardware, while the 8B variant needs a GPU with at least 6GB of VRAM for comfortable use.
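For scripted use, Ollama exposes a local REST API once a model is pulled. Below is a minimal sketch against the `/api/generate` endpoint on the default port 11434; the model tag `deepseek-r1:8b` is assumed to match the Ollama registry's name for the 8B distill:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_generate_payload(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete response instead of chunks
    }).encode()

def ask(prompt: str, model: str = "deepseek-r1:8b") -> str:
    """Send a prompt to a locally running Ollama server and return the reply."""
    req = request.Request(
        OLLAMA_URL,
        data=build_generate_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With a running Ollama instance, `ask("Why is the sky blue?")` returns the full reply, reasoning trace included, as a single string.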