Anthracite
Magnum v4 72B
Qwen2.5-72B fine-tuned on Claude-Opus-style literary data. Highest-quality long-form prose in the 72B class. Apache 2.0.
Check Your Hardware
See which quantizations of Magnum v4 72B your hardware can run.
Quantization Options
| Quantization | Bits | File Size | VRAM Needed | RAM Needed | Quality |
|---|---|---|---|---|---|
| BF16 | 16 | 144 GB | 144.5 GB | 145 GB | 100% |
| Q4_K_M | 4.5 | 43.2 GB | 43.7 GB | 44.2 GB | 85% |
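A rough way to sanity-check these figures is params × bits-per-weight ÷ 8. The sketch below applies that rule of thumb; the ~8.5 bits/weight for Q8_0 (mentioned in the FAQ) and the mixed-precision overhead note are assumptions about the GGUF format, not numbers from this page.

```python
# Rule of thumb: weight memory ≈ params * bits-per-weight / 8.
PARAMS = 72e9  # 72B parameters

def weight_gb(bits_per_weight: float) -> float:
    """Estimated weight size in GB at a given average bit width."""
    return PARAMS * bits_per_weight / 8 / 1e9

# Q8_0's ~8.5 bits/weight (8-bit blocks plus per-block scales) is an
# assumption about the GGUF format, not a figure from this page.
for name, bpw in [("BF16", 16.0), ("Q8_0", 8.5), ("Q4_K_M", 4.5)]:
    print(f"{name:>7}: ~{weight_gb(bpw):6.1f} GB weights")
# BF16 -> 144.0 GB, matching the table; Q4_K_M -> 40.5 GB vs. the 43.2 GB
# actual file, since K-quants keep some tensors at higher precision.
```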
Context window & KV cache
Adds 2.50 GB to VRAM. Long chats and RAG inputs cost real memory. Drag the slider to see how 32K vs 128K context shifts your grade.
Model native max: 128K tokens. The KV-cache estimate is approximate (±30%); real usage depends on attention layout.
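For those who want the arithmetic behind the estimate, here is a minimal sketch of the standard KV-cache formula. The config values assume the Qwen2.5-72B base architecture (80 layers, 8 KV heads under grouped-query attention, head dim 128) and an FP16 cache; verify them against the model's config.json.

```python
# KV-cache bytes = 2 (K and V) * layers * kv_heads * head_dim * dtype_bytes * tokens.
LAYERS, KV_HEADS, HEAD_DIM, FP16_BYTES = 80, 8, 128, 2  # assumed Qwen2.5-72B config

def kv_cache_gib(tokens: int) -> float:
    """Estimated KV-cache size in GiB for a given context length."""
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * FP16_BYTES * tokens / 2**30

for ctx in (8_192, 32_768, 131_072):
    print(f"{ctx:>7} tokens: {kv_cache_gib(ctx):5.2f} GiB")
# 8K -> 2.50 GiB (the figure above); 32K -> 10.00 GiB; 128K (native max) -> 40.00 GiB
```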
How to run Magnum v4 72B
Pick a runtime — copy & paste. Commands are pre-filled with this model’s repo.
GUI. Browse → download → chat. MLX on Apple Silicon.
LM Studio home →
1. Open LM Studio. Go to the 🔍 Search tab.
2. Search for: bartowski/magnum-v4-72b-GGUF
3. Download. Pick the Q4_K_M quant, the best balance of size vs. quality.
4. Chat. Hit ▶ Load Model and start chatting. Toggle 'Local Server' to expose an OpenAI-compatible API on :1234 (see the client sketch below).
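Once the local server is on, any OpenAI-compatible client can talk to it. A minimal sketch with the official openai Python package follows; the base URL matches the :1234 default above, the API key is a dummy (LM Studio doesn't check it), and the model id is a placeholder: copy the exact identifier LM Studio displays for your loaded model.

```python
# Minimal client for LM Studio's OpenAI-compatible local server (:1234).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

stream = client.chat.completions.create(
    model="magnum-v4-72b",  # placeholder: use the exact id LM Studio shows
    messages=[{"role": "user", "content": "Open a noir novel in one sentence."}],
    stream=True,  # stream tokens as they arrive
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```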
Community benchmarks
Real tokens/sec reports from people running Magnum v4 72B on actual hardware.
No community runs yet for this model. Be the first to submit your numbers.
Self-host serving plan
Want to host Magnum v4 72B for many users? Or run it on a card that’s technically too small? Slide the knobs.
Example readout (1 user, 72 B dense):
VRAM needed: 46.3 GB (43.7 GB weights + 2.1 GB KV cache, plus runtime overhead)
Aggregate tok/s: 1 across 1 user
Per-user tok/s: 1
⚠ Will spill 22.3 GB of weights to system RAM (~5× slower per offloaded layer). Use llama.cpp’s --n-gpu-layers to cap how many layers sit on the GPU, or vLLM’s --cpu-offload-gb / --swap-space.
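Where does 22.3 GB come from? It is the plan's total VRAM need minus the card's capacity. The sketch below assumes a single 24 GB card (3090/4090 class) and ~0.5 GB of runtime overhead; both values are assumptions, not figures from this page.

```python
# Reproducing the 22.3 GB spill figure: total need minus card VRAM.
WEIGHTS_GB, KV_GB = 43.7, 2.1  # from the plan above
OVERHEAD_GB = 0.5              # assumed runtime overhead (not from this page)
CARD_VRAM_GB = 24.0            # assumed single 24 GB card (3090/4090 class)

needed = WEIGHTS_GB + KV_GB + OVERHEAD_GB
spill = max(0.0, needed - CARD_VRAM_GB)
print(f"need ~{needed:.1f} GB, have {CARD_VRAM_GB:.0f} GB -> {spill:.1f} GB spills to RAM")
# -> 22.3 GB, matching the warning above
```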
Throughput is a sub-linear estimate: doubling users adds ~70 % of single-user TPS until ~8, then plateaus on memory bandwidth. MoE models scale concurrency much better because each user activates a different subset of experts.
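The scaling rule can be written out directly. This is a sketch of the page's heuristic, not a measured model: the 0.7 factor per doubling and the 8-user knee come straight from the sentence above.

```python
# The page's heuristic, written out: each doubling of users adds ~70%
# of single-user TPS, with aggregate throughput plateauing near 8 users.
import math

def aggregate_tps(single_user_tps: float, users: int) -> float:
    """Sub-linear aggregate throughput under the ~70%-per-doubling rule."""
    doublings = math.log2(min(users, 8))  # memory-bandwidth knee at ~8 users
    return single_user_tps * (1 + 0.7 * doublings)

for n in (1, 2, 4, 8, 16):
    agg = aggregate_tps(1.0, n)  # 1 tok/s single-user, from the plan above
    print(f"{n:>2} users: ~{agg:.1f} tok/s aggregate, ~{agg / n:.2f} tok/s each")
```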
See It In Action
Real model outputs generated via RunThisModel.com; watch responses stream in real time. Generation speed shown is from cloud inference. Local speeds vary by hardware, so check your device.
Frequently Asked Questions
How much VRAM do I need to run Magnum v4 72B?
Magnum v4 72B requires a minimum of 43.7 GB of VRAM with Q4_K_M quantization. For full BF16 precision, you need 144.5 GB of VRAM.
What is the best quantization for Magnum v4 72B?
Q4_K_M offers the best balance of quality and VRAM usage. Q8_0 is near-lossless if you have enough VRAM.
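As a convenience, here is a hypothetical helper that picks the highest-quality quant from this page's table that fits a given VRAM budget. The 2.5 GB KV allowance mirrors the context-window figure above; both quant entries come from the Quantization Options table.

```python
# Hypothetical helper: pick the highest-quality quant from this page's
# table that fits a VRAM budget, reserving room for the KV cache.
QUANTS = [("BF16", 144.5), ("Q4_K_M", 43.7)]  # (name, VRAM GB), best first

def best_quant(free_vram_gb: float, kv_budget_gb: float = 2.5):
    for name, vram_needed in QUANTS:
        if vram_needed + kv_budget_gb <= free_vram_gb:
            return name
    return None  # nothing fits fully on GPU; expect CPU offload

print(best_quant(48.0))   # -> 'Q4_K_M' (e.g. two 24 GB cards)
print(best_quant(160.0))  # -> 'BF16'
```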