AI2
OLMoE 1B-7B
Fully open MoE — 7 B total, only 1.3 B active per token. Tiny footprint, surprisingly capable.
About This Model
OLMoE from AI2 is the most accessible MoE on this list. 7 B total parameters means it fits on a 6 GB GPU at Q4, but only 1.3 B activate per token — so inference is fast even on modest hardware. Fully open: weights, training data, and recipes all released under Apache-2.0.
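To make the total-vs-active distinction concrete, here is a toy sketch of top-k expert routing, the mechanism that lets an MoE keep most of its weights idle for any given token. The dimensions and the top-8-of-64 routing below are illustrative stand-ins, not the model's actual config.

```python
import numpy as np

def moe_layer(x, experts, router_w, k=8):
    """Toy top-k MoE routing for a single token vector x.

    experts:  list of per-expert weight matrices
    router_w: (d_model, n_experts) router projection
    Only k experts run per token, so most weights stay idle.
    """
    logits = x @ router_w                      # score every expert
    top = np.argsort(logits)[-k:]              # indices of the k best experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                       # softmax over the chosen experts
    # Only the selected experts do any work; the rest contribute nothing.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

# Illustrative sizes (not the real OLMoE config): 64 experts, 8 active per token.
d, n_experts = 64, 64
rng = np.random.default_rng(0)
experts = [rng.normal(size=(d, d)) / d**0.5 for _ in range(n_experts)]
router_w = rng.normal(size=(d, n_experts)) / d**0.5
y = moe_layer(rng.normal(size=d), experts, router_w, k=8)
print(y.shape)  # (64,)
```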
Check Your Hardware
See which quantizations of OLMoE 1B-7B your hardware can run.
Quantization Options
| Quantization | Bits | File Size | VRAM Needed | RAM Needed | Quality |
|---|---|---|---|---|---|
| Q4_K_M | 4.5 | 4.2 GB | 5 GB | 8 GB | 85% |
| Q8_0 | 8 | 7.4 GB | 8 GB | 10 GB | 98% |
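If you want to turn the table into a quick recommendation, the sketch below picks the highest-quality quant whose estimated VRAM fits your card. The figures are copied straight from the table; the free-VRAM number and the 0.5 GB headroom are assumptions you should adjust for your own setup.

```python
# Rough quant picker based on the table above; VRAM figures are estimates,
# not guarantees. Pass in your free VRAM in GB (e.g. read from nvidia-smi).
QUANTS = [
    {"name": "Q8_0",   "file_gb": 7.4, "vram_gb": 8, "quality": "98%"},
    {"name": "Q4_K_M", "file_gb": 4.2, "vram_gb": 5, "quality": "85%"},
]

def pick_quant(free_vram_gb: float, headroom_gb: float = 0.5):
    """Return the highest-quality quant that still leaves some headroom, or None."""
    for q in QUANTS:  # ordered best quality first
        if q["vram_gb"] + headroom_gb <= free_vram_gb:
            return q
    return None  # nothing fits: fall back to CPU or partial offload

print(pick_quant(6.0))   # -> Q4_K_M on a 6 GB card
print(pick_quant(12.0))  # -> Q8_0 fits comfortably
```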
Context window & KV cache
Long chats and RAG inputs cost real memory: a longer context window grows the KV cache, which adds to the VRAM figure above (about 0.50 GB at this model's default setting). Drag the slider to see how a larger context shifts your grade.
Model native max: 4K tokens. The KV-cache estimate is approximate (±30%); real usage depends on the attention layout.
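For a sense of where the 0.50 GB figure comes from, here is the back-of-the-envelope KV-cache arithmetic: two tensors per layer (K and V), times layers, times cache width, times tokens, times bytes per value. The layer count and cache width below are assumed config values for OLMoE, and the formula ignores attention-layout tricks, so treat it with the same ±30% caveat.

```python
def kv_cache_gb(ctx_tokens, n_layers=16, kv_dim=2048, bytes_per_val=2):
    """Approximate KV-cache size: 2 (K and V) x layers x kv width x tokens x dtype size.

    n_layers and kv_dim are assumed values for OLMoE 1B-7B; check the model's
    config.json for the real ones. bytes_per_val=2 assumes an fp16 cache.
    """
    return 2 * n_layers * kv_dim * ctx_tokens * bytes_per_val / 1024**3

print(f"{kv_cache_gb(4096):.2f} GB at the 4K native max")  # ~0.50 GB
print(f"{kv_cache_gb(2048):.2f} GB at a 2K context")       # ~0.25 GB
```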
How to run OLMoE 1B-7B
Pick a runtime — copy & paste. Commands are pre-filled with this model’s repo.
LM Studio (GUI). Browse → download → chat. MLX on Apple Silicon.
LM Studio home →

1. Open LM Studio. Go to the 🔍 Search tab.
2. Search for bartowski/OLMoE-1B-7B-0924-Instruct-GGUF.
3. Download. Pick the Q4_K_M quant, the best balance of size vs. quality.
4. Chat. Hit ▶ Load Model and start chatting. Toggle 'Local Server' to expose an OpenAI-compatible API on :1234.
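With 'Local Server' enabled, any OpenAI-compatible client can talk to LM Studio on port 1234. A minimal sketch using the openai Python package; the model id below is a placeholder, so substitute whatever identifier LM Studio shows for the loaded model.

```python
from openai import OpenAI

# LM Studio's local server speaks the OpenAI API; no real key is needed.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="olmoe-1b-7b-0924-instruct",  # placeholder: use the id LM Studio lists
    messages=[{"role": "user", "content": "Explain mixture-of-experts in one paragraph."}],
    max_tokens=200,
)
print(resp.choices[0].message.content)
```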
Community benchmarks
Real tokens/sec reports from people running OLMoE 1B-7B on actual hardware.
No community runs yet for this model. Be the first to submit your numbers.
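If you'd like numbers to submit, one rough way to measure tokens/sec is to time a single generation against the local server from the steps above and divide the completion tokens by wall-clock time. This assumes the LM Studio server is running on :1234 and uses a placeholder model id; the timing includes prompt processing, so treat the result as a lower bound.

```python
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

start = time.perf_counter()
resp = client.chat.completions.create(
    model="olmoe-1b-7b-0924-instruct",  # placeholder id; match your loaded model
    messages=[{"role": "user", "content": "Write a 200-word story about a lighthouse."}],
    max_tokens=256,
)
elapsed = time.perf_counter() - start

out_tokens = resp.usage.completion_tokens
print(f"{out_tokens} tokens in {elapsed:.1f}s -> {out_tokens / elapsed:.1f} tok/s")
```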
See It In Action
Real model outputs generated via RunThisModel.com — watch responses stream in real time.
Generation speed shown is from cloud inference. Local speeds vary by hardware — check your device.
Frequently Asked Questions
How much VRAM do I need to run OLMoE 1B-7B?
OLMoE 1B-7B requires 5 GB of VRAM minimum with Q4_K_M quantization. For the near-lossless Q8_0 quant, you need 8 GB of VRAM.
What is the best quantization for OLMoE 1B-7B?
Q4_K_M offers the best balance of quality and VRAM usage. Q8_0 is near-lossless if you have enough VRAM.