Wan-AI
Wan 2.2 TI2V 5B
Open-weights text-to-video and image-to-video model. Generates 5-second 480p clips on a single 24 GB card. The current open-source video sweet spot.
About This Model
Wan 2.2 TI2V 5B is the most accessible open-source text-to-video model in 2026, and the first one that genuinely runs on a single 24 GB consumer card without exotic offloading. Generation speed lands around 30-60 seconds for a 5-second 480p clip on an RTX 4090.
Quantization Options
| Quantization | Bits | File Size | VRAM Needed | RAM Needed | Quality |
|---|---|---|---|---|---|
| FP16 | 16 | 10 GB | 16 GB | 24 GB | 100% |
| Q8 | 8 | 5 GB | 10 GB | 16 GB | 95% |
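The file sizes in the table follow directly from the parameter count: weight bytes ≈ parameters × bits / 8. A minimal sketch of that arithmetic (the helper below is illustrative, not part of any tool on this page; the 5B parameter count comes from the model name):

```python
def weight_file_size_gb(params_billion: float, bits: int) -> float:
    """Approximate on-disk size of the weights: params * bits / 8 bytes."""
    bytes_total = params_billion * 1e9 * bits / 8
    return bytes_total / 1e9  # decimal GB, matching the table

# 5B parameters at FP16 (16-bit) and Q8 (8-bit):
print(weight_file_size_gb(5, 16))  # -> 10.0
print(weight_file_size_gb(5, 8))   # -> 5.0
```

The extra VRAM and RAM headroom in the table beyond raw weight size covers activations, the VAE/text encoder, and framework overhead.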
How to run Wan 2.2 TI2V 5B
Pick a runtime and copy & paste: commands are pre-filled with this model's repo.
LM Studio (GUI): browse, download, generate. Uses MLX on Apple Silicon.
1. Open LM Studio and go to the 🔍 Search tab.
2. Search for `Wan-AI/Wan2.2-TI2V-5B`.
3. Download the FP16 quant, the best balance of size vs. quality.
4. Hit ▶ Load Model and start generating. Toggle 'Local Server' to expose an OpenAI-compatible API on :1234.
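With 'Local Server' enabled, LM Studio listens on port 1234 with an OpenAI-compatible API. A hedged sketch of building a request against it using only the standard library; the payload is the standard chat-completions shape, and the model identifier below is a placeholder assumption, so query `GET /v1/models` on your install for the real name:

```python
import json
from urllib import request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local server port

def chat_request(prompt: str, model: str = "wan2.2-ti2v-5b") -> request.Request:
    # Standard OpenAI chat-completions payload; the model name here is a
    # placeholder -- list {BASE_URL}/models for the identifier your server uses.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("A 5-second clip of waves at sunset")
print(req.full_url)  # -> http://localhost:1234/v1/chat/completions
# To actually send it, call request.urlopen(req) while the Local Server is running.
```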
Community benchmarks
Real generation-speed reports from people running Wan 2.2 TI2V 5B on actual hardware.
No community runs yet for this model. Be the first to submit your numbers.
Frequently Asked Questions
How much VRAM do I need to run Wan 2.2 TI2V 5B?
Wan 2.2 TI2V 5B requires 10 GB of VRAM minimum with the Q8 quantization. For FP16, you need 16 GB of VRAM.
What is the best quantization for Wan 2.2 TI2V 5B?
Q8 offers the best balance of quality and VRAM usage, retaining about 95% of FP16 quality in half the memory. FP16 is the full-quality option if you have 16 GB of VRAM.