Best reasoning models
Chain-of-thought / o1-style local thinkers
Models trained to show their work. Ideal for math, code, and multi-step logic puzzles. All run with `<think>` traces enabled.
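A model that emits a `<think>` trace interleaves its reasoning with the final answer, so client code usually separates the two before display. A minimal sketch, assuming the DeepSeek R1-style `<think>…</think>` tag format (the exact delimiters can vary by model):

```python
import re

def split_reasoning(output: str) -> tuple[str, str]:
    """Split a response into (reasoning trace, final answer).

    Assumes the trace is wrapped in <think>...</think>; if no trace
    is found, the whole response is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", output, flags=re.DOTALL)
    if not match:
        return "", output.strip()
    reasoning = match.group(1).strip()
    answer = output[match.end():].strip()
    return reasoning, answer

raw = "<think>2 + 2 is 4.</think>The answer is 4."
trace, answer = split_reasoning(raw)
```

Here `trace` holds the hidden chain-of-thought and `answer` the user-facing text, which is typically all you want to render by default.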
| # | Vendor | Model | Description | Params | Download |
|---|--------|-------|-------------|--------|----------|
| 1 | DeepSeek | DeepSeek R1 Distill 8B | Compact reasoning model distilled from DeepSeek R1. Solid capabilities in a small package. | 8B | ≥ 5.08 GB |
| 2 | DeepSeek | DeepSeek R1 Distill 1.5B | Tiny distill of DeepSeek R1 with strong chain-of-thought. | 1.5B | ≥ 1.54 GB |
| 3 | Microsoft | Phi-4 | Microsoft's 14B model; punches well above its weight on reasoning. | 14B | ≥ 8.93 GB |
| 4 | Alibaba | Qwen 2.5 32B | Premium 32B model with top-tier reasoning. Best on a Mac with 32 GB+ RAM. | 32B | ≥ 18.99 GB |
| 5 | Alibaba | Qwen 2.5 14B | Strong 14B model with excellent coding and reasoning. iPad Pro recommended. | 14B | ≥ 8.87 GB |
| 6 | Google | Gemma 3 27B | Google's flagship open model, near GPT-4 quality. Needs 20 GB+ RAM. | 27B | ≥ 15.91 GB |