IBM

Granite 3.0 1B-A400M

Tiny IBM MoE for edge and CPU inference. 1.3 B total, only 400 M active.

1.3B parameters · granitemoe · apache-2.0 · 4K context · 1.5 GB VRAM

About This Model

Granite 3.0 1B-A400M is IBM's stab at an edge-class MoE. With only 400 M active parameters, it can run usefully on phones, small boards with 4 GB of RAM, or CPU-only setups, while the MoE structure preserves much of the quality of a larger dense model.

Check Your Hardware

See which quantizations of Granite 3.0 1B-A400M your hardware can run.

Quantization Options

Quantization   Bits   File Size   VRAM Needed   RAM Needed   Quality
Q4_K_M         4.5    0.85 GB     1.5 GB        4 GB         85%
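As a back-of-envelope check, the file size above roughly tracks total parameters × bits per weight. A quick sketch in Python; the gap between this estimate and the listed 0.85 GB comes from overhead such as embedding and norm tensors kept at higher precision (an assumption about the quant layout, not a published breakdown):

```python
# Rough quantized-file-size estimate: params * bits_per_weight / 8.
# Numbers come from the table above; the overhead note is an assumption.
total_params = 1.3e9      # total (not active) parameters
bits = 4.5                # Q4_K_M effective bits per weight

file_gb = total_params * bits / 8 / 1e9
print(f"~{file_gb:.2f} GB before overhead")
```

The same arithmetic works for any quant: swap in 8 bits for Q8_0 and the file roughly doubles.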

Context window & KV cache

Long chats and RAG inputs cost real memory. At this model's native maximum of 4K tokens, the KV cache adds roughly 0.09 GB to VRAM on top of the weights.

The KV-cache estimate is approximate (±30%); real usage depends on the attention layout.
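For intuition, KV-cache memory scales linearly with context length: 2 (K and V) × layers × KV heads × head dim × tokens × bytes per element. A sketch with assumed, illustrative dimensions; Granite's actual layer and head counts may differ, which is why this won't reproduce the page's ~0.09 GB figure exactly:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_tokens, bytes_per_elem=2):
    # 2x for the separate K and V tensors; FP16 (2-byte) elements by default.
    return 2 * n_layers * n_kv_heads * head_dim * ctx_tokens * bytes_per_elem

# Assumed dims for illustration only -- not Granite's published architecture.
gb = kv_cache_bytes(n_layers=24, n_kv_heads=8, head_dim=64, ctx_tokens=4096) / 1e9
print(f"~{gb:.2f} GB at 4K context")
```

Doubling the context doubles the cache, which is why long-context runs can cost more than the weights themselves on small models.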

How to run Granite 3.0 1B-A400M

Pick a runtime — copy & paste. Commands are pre-filled with this model’s repo.

GUI workflow: browse → download → chat. Uses MLX builds on Apple Silicon.

LM Studio home →
  1. Open LM Studio

     Go to the 🔍 Search tab.

  2. Search for

     bartowski/granite-3.0-1b-a400m-instruct-GGUF

  3. Download

     Pick the Q4_K_M quant, the best balance of size vs. quality.

  4. Chat

     Hit ▶ Load Model and start chatting. Toggle 'Local Server' to expose an OpenAI-compatible API on :1234.
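Once 'Local Server' is on, any OpenAI-compatible client can talk to it. A minimal Python sketch using only the standard library, assuming the default port 1234; the model name here is the repo id, but LM Studio also accepts whatever name its server tab displays:

```python
import json
import urllib.request

# Chat request payload in OpenAI chat-completions format.
payload = {
    "model": "bartowski/granite-3.0-1b-a400m-instruct-GGUF",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "temperature": 0.7,
}

def ask(url="http://localhost:1234/v1/chat/completions"):
    # POST the payload and return the assistant's reply text.
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Requires LM Studio's Local Server to be running with the model loaded.
    print(ask())
```

Because the endpoint mimics the OpenAI API, official OpenAI SDKs pointed at http://localhost:1234/v1 work the same way.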

Community benchmarks

Real tokens/sec reports from people running Granite 3.0 1B-A400M on actual hardware.

No community runs yet for this model. Be the first to submit your numbers.

See It In Action

Real model outputs generated via RunThisModel.com — watch responses stream in real time.


Outputs generated by real AI models via RunThisModel.com. Generation speed shown is from cloud inference. Local speeds vary by hardware — check your device.

Frequently Asked Questions

How much VRAM do I need to run Granite 3.0 1B-A400M?

Granite 3.0 1B-A400M requires 1.5 GB of VRAM minimum with Q4_K_M quantization. For full precision (FP16), expect roughly 2.6 GB (1.3 B parameters × 2 bytes per weight), plus KV cache.

What is the best quantization for Granite 3.0 1B-A400M?

Q4_K_M offers the best balance of quality and VRAM usage. Q8_0 is near-lossless if you have enough VRAM.