HuggingFace

SmolLM2 1.7B

A capable 1.7B-parameter model from HuggingFace, offering a good balance of quality and footprint for mobile devices.

1.7B parameters · smollm · apache-2.0 · 8K context · 1.6GB - 2.3GB VRAM

Check Your Hardware

See which quantizations of SmolLM2 1.7B your hardware can run.
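The sketch below shows the kind of check involved, assuming Python with torch and psutil optionally installed. The VRAM/RAM thresholds are copied from the quantization table below; the detection helpers are illustrative and not the site's actual checker.

```python
# Rough hardware check for SmolLM2 1.7B quantizations.
# Thresholds come from the quantization table below; detection is a sketch.

QUANTS = {
    "Q4_K_M": {"vram_gb": 1.6, "ram_gb": 2.5},
    "Q8_0":   {"vram_gb": 2.3, "ram_gb": 3.5},
}

def detect_vram_gb() -> float:
    """Total VRAM of the first CUDA GPU in GB, or 0.0 if torch/GPU is absent."""
    try:
        import torch
        if torch.cuda.is_available():
            return torch.cuda.get_device_properties(0).total_memory / 1024**3
    except ImportError:
        pass
    return 0.0

def detect_ram_gb() -> float:
    """Total system RAM in GB, or 0.0 if psutil is not installed."""
    try:
        import psutil
        return psutil.virtual_memory().total / 1024**3
    except ImportError:
        return 0.0

if __name__ == "__main__":
    vram, ram = detect_vram_gb(), detect_ram_gb()
    print(f"Detected: {vram:.1f} GB VRAM, {ram:.1f} GB RAM")
    for name, req in QUANTS.items():
        on_gpu = "yes" if vram >= req["vram_gb"] else "no"
        on_cpu = "yes" if ram >= req["ram_gb"] else "no"
        print(f"  {name}: fits in VRAM: {on_gpu} | fits in RAM (CPU-only): {on_cpu}")
```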

Quantization Options

Quantization | Bits | File Size | VRAM Needed | RAM Needed | Quality
Q4_K_M       | 4.5  | 1.1 GB    | 1.6 GB      | 2.5 GB     | 85%
Q8_0         | 8    | 1.8 GB    | 2.3 GB      | 3.5 GB     | 98%
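To try one of these quantizations locally, a common route is llama-cpp-python with a GGUF file. The sketch below assumes a GGUF repository name (HuggingFaceTB/SmolLM2-1.7B-Instruct-GGUF) and a filename pattern; check Hugging Face for the exact repository and quantization filenames before running.

```python
# Minimal sketch: load a quantized SmolLM2 1.7B GGUF with llama-cpp-python.
# Repo id and filename pattern are assumptions -- verify them on Hugging Face.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama.from_pretrained(
    repo_id="HuggingFaceTB/SmolLM2-1.7B-Instruct-GGUF",  # assumed repo id
    filename="*Q4_K_M.gguf",   # pick the Q4_K_M file from the table above
    n_ctx=8192,                # SmolLM2 supports an 8K context window
    n_gpu_layers=-1,           # offload all layers to the GPU if one is present
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what SmolLM2 is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Swap the filename pattern for `*Q8_0.gguf` if your hardware clears the higher VRAM/RAM thresholds in the table.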

See It In Action

Real model outputs generated via RunThisModel.com — watch responses stream in real time.


Outputs generated by real AI models via RunThisModel.com. Generation speed shown is from cloud inference. Local speeds vary by hardware — check your device.

Frequently Asked Questions

How much VRAM do I need to run SmolLM2 1.7B?

SmolLM2 1.7B requires 1.6GB of VRAM at minimum with Q4_K_M quantization. The near-lossless Q8_0 quantization needs 2.3GB of VRAM.
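These figures follow from simple arithmetic: weight memory is roughly parameters × bits per weight / 8 bytes, plus runtime overhead for the KV cache and buffers. The sketch below uses an assumed flat 0.7 GB overhead purely to show how the table's numbers line up; real overhead varies with context length and runtime.

```python
# Back-of-the-envelope VRAM estimate: weights take params * bits / 8 bytes,
# plus runtime overhead (KV cache, buffers). The 0.7 GB overhead is an
# assumed illustrative constant, not a measured value.
def estimate_vram_gb(params_billions: float, bits_per_weight: float,
                     overhead_gb: float = 0.7) -> float:
    weights_gb = params_billions * 1e9 * bits_per_weight / 8 / 1024**3
    return weights_gb + overhead_gb

print(f"Q4_K_M (~4.5 bpw): ~{estimate_vram_gb(1.7, 4.5):.1f} GB")  # about 1.6 GB
print(f"Q8_0   (8 bpw):    ~{estimate_vram_gb(1.7, 8.0):.1f} GB")  # about 2.3 GB
```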

What is the best quantization for SmolLM2 1.7B?

Q4_K_M offers the best balance of quality and VRAM usage. Q8_0 is near-lossless if you have enough VRAM.