Quality × Hardware

Open-source AI leaderboard

Models are ranked by a composite quality score aggregated from public benchmarks. Pair the ranking with your hardware grade and pick the smartest model that actually fits in your VRAM.
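The "smartest model that fits" rule is a simple filter-then-max. A minimal sketch, using a hand-copied excerpt of the rows from the table below (not the full leaderboard):

```python
# Each entry: (model name, composite score, min VRAM in GB), taken from
# the full-ranking table below.
MODELS = [
    ("Qwen 2.5 Coder 14B", 89.7, 8.87),
    ("Qwen 2.5 32B", 87.8, 18.99),
    ("Llama 3.1 70B Instruct", 85.5, 40.1),
    ("Qwen 2.5 7B Instruct", 79.8, 5.3),
    ("Phi-4 Mini 3.8B", 76.8, 2.82),
]

def best_fit(vram_gb: float):
    """Highest-composite model whose min VRAM fits the budget, or None."""
    fits = [m for m in MODELS if m[2] <= vram_gb]
    return max(fits, key=lambda m: m[1]) if fits else None

print(best_fit(12.0))  # a 12 GB card
print(best_fit(4.0))   # a 4 GB card
```

On a 12 GB card this picks Qwen 2.5 Coder 14B (8.87 GB fits, and nothing smarter does); on a 4 GB card it falls back to Phi-4 Mini 3.8B.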

Quality × VRAM matrix

Each cell lists the strongest models in that quality and VRAM band: top-left holds the smartest models, bottom-right the largest ones that still fit on consumer hardware.
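The banding behind such a matrix is plain bucketing on two axes. A sketch with illustrative band edges (the cut-offs here are assumptions, not the page's actual bands):

```python
# Illustrative quality and VRAM bands; a model's cell is just the pair
# of buckets its composite score and min VRAM fall into.
QUALITY_BANDS = [(80, "top"), (60, "mid"), (0, "entry")]   # (floor, label)
VRAM_BANDS = [(8, "≤8 GB"), (16, "≤16 GB"), (48, "≤48 GB")]  # (cap, label)

def cell(composite: float, min_vram_gb: float):
    """Map a (composite, VRAM) pair to its matrix cell labels."""
    q = next(label for floor, label in QUALITY_BANDS if composite >= floor)
    v = next(label for cap, label in VRAM_BANDS if min_vram_gb <= cap)
    return (q, v)

print(cell(89.7, 8.87))  # Qwen 2.5 Coder 14B lands in ("top", "≤16 GB")
```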

Full ranking (69 models)

| # | Model | Org · Params · Type | Composite | MMLU | HumanEval | GSM8K | Arena ELO | Min VRAM |
|---|-------|---------------------|-----------|------|-----------|-------|-----------|----------|
| 1 | Qwen 2.5 Coder 14B | Alibaba · 14B · code | 89.7 | – | 89.7 | – | – | 8.87 GB |
| 2 | Qwen 2.5 32B | Alibaba · 32B · llm | 87.8 | 83.3 | 88.4 | 95.9 | 1235 | 18.99 GB |
| 3 | Llama 3.1 70B Instruct | Meta · 70B · llm | 85.5 | 79.5 | 80.5 | 95.1 | 1248 | 40.1 GB |
| 4 | Yi Coder 9B | 01.AI · 9B · code | 85.4 | – | 85.4 | – | – | 5.46 GB |
| 5 | Qwen 2.5 Coder 3B | Alibaba · 3B · code | 84.1 | – | 84.1 | – | – | 2.46 GB |
| 6 | Qwen 2.5 14B | Alibaba · 14B · llm | 83.6 | 79.7 | 83.5 | 94.2 | 1208 | 8.87 GB |
| 7 | Gemma 3 27B | Google · 27B · llm | 83.3 | 76.9 | 87.8 | 89.0 | 1218 | 15.91 GB |
| 8 | DeepSeek R1 Distill 8B | DeepSeek · 8B · llm | 80.0 | 70.5 | – | 80.5 | – | 5.08 GB |
| 9 | Qwen 2.5 7B Instruct | Alibaba · 7.6B · llm | 79.8 | 74.2 | 84.8 | 91.6 | 1175 | 5.3 GB |
| 10 | Phi-4 | Microsoft · 14B · llm | 79.7 | 84.8 | 82.6 | 95.4 | – | 8.93 GB |
| 11 | InternLM 2.5 7B | Shanghai AI Lab · 7.7B · llm | 79.4 | 72.8 | – | 86.0 | – | 4.89 GB |
| 12 | DeepSeek Coder 6.7B | DeepSeek · 6.7B · code | 78.6 | – | 78.6 | – | – | 4.3 GB |
| 13 | Qwen 2.5 Coder 7B | Alibaba · 7.6B · code | 78.0 | 67.6 | 88.4 | – | – | 4.86 GB |
| 14 | Phi-4 Mini 3.8B | Microsoft · 3.8B · llm | 76.8 | 67.3 | 74.4 | 88.6 | – | 2.82 GB |
| 15 | DeepSeek R1 Distill 1.5B | DeepSeek · 1.5B · llm | 74.9 | – | – | 65.9 | – | 1.54 GB |
| 16 | Falcon 3 10B | TII · 10B · llm | 73.1 | 73.1 | – | – | – | 6.36 GB |
| 17 | Phi-3.5 Mini 3.8B | Microsoft · 3.8B · llm | 72.6 | 68.9 | 62.8 | 86.2 | – | 2.73 GB |
| 18 | EXAONE 3.5 7.8B | LG AI · 7.8B · llm | 71.1 | 65.4 | – | 76.8 | – | 4.94 GB |
| 19 | Qwen 2.5 Coder 1.5B | Alibaba · 1.5B · code | 70.7 | – | 70.7 | – | – | 1.54 GB |
| 20 | MiniCPM-V 2.6 | OpenBMB · 2B · multimodal | 69.7 | – | – | – | – | 2.1 GB |
| 21 | Yi 1.5 9B Chat | 01.AI · 9B · llm | 69.7 | 69.7 | – | – | – | 5.46 GB |
| 22 | Qwen 2.5 3B | Alibaba · 3B · llm | 68.9 | 65.6 | 74.4 | 86.7 | 1095 | 2.46 GB |
| 23 | Gemma 3 12B | Google · 12B · llm | 68.5 | 68.7 | – | 68.3 | – | 7.3 GB |
| 24 | Mistral Small 22B | Mistral AI · 22B · llm | 68.4 | 73.0 | – | 70.2 | 1148 | 12.93 GB |
| 25 | Granite 3.3 8B | IBM · 8B · llm | 68.1 | 64.7 | – | 71.4 | – | 5.1 GB |
| 26 | Falcon 3 7B | TII · 7B · llm | 67.4 | 67.4 | – | – | – | 5 GB |
| 27 | Llama 3.1 8B Instruct | Meta · 8B · llm | 66.2 | 68.4 | 62.2 | 80.5 | 1115 | 5.08 GB |
| 28 | Qwen 2.5 1.5B | Alibaba · 1.5B · llm | 65.2 | 60.9 | 61.6 | 73.2 | – | 1.54 GB |
| 29 | DeepSeek Coder 1.3B | DeepSeek · 1.3B · code | 65.2 | – | 65.2 | – | – | 1.31 GB |
| 30 | Gemma 2 9B Instruct | Google · 9.2B · llm | 65.2 | 71.3 | 40.2 | 76.7 | 1190 | 5.87 GB |
| 31 | Yi 1.5 6B Chat | 01.AI · 6B · llm | 64.1 | 64.1 | – | – | – | 3.92 GB |
| 32 | Mistral Nemo 12B | Mistral AI · 12B · llm | 62.4 | 68.0 | – | 56.7 | – | 7.46 GB |
| 33 | Qwen 2.5 Coder 0.5B | Alibaba · 0.5B · code | 61.6 | – | 61.6 | – | – | 1.13 GB |
| 34 | OpenChat 3.5 7B | OpenChat · 7B · llm | 60.6 | 65.8 | 55.5 | – | – | 4.57 GB |
| 35 | OLMo 2 7B | Allen AI · 7B · llm | 60.4 | 60.4 | – | – | – | 4.67 GB |
| 36 | Phi-3.5 Vision | Microsoft · 4.2B · multimodal | 60.2 | – | – | – | – | 3.2 GB |
| 37 | Solar 10.7B | Upstage · 10.7B · llm | 60.0 | 65.9 | – | – | 1116 | 6.52 GB |
| 38 | Qwen2-VL 2B | Alibaba · 2.2B · multimodal | 57.5 | – | – | – | – | 1.42 GB |
| 39 | EXAONE 3.5 2.4B | LG AI · 2.4B · llm | 56.4 | 59.7 | – | 53.1 | – | 2.03 GB |
| 40 | CodeGemma 7B | Google · 8.5B · code | 56.1 | – | 56.1 | – | – | 5.46 GB |
| 41 | Nemotron Mini 4B | NVIDIA · 4B · llm | 56.1 | 56.1 | – | – | – | 3.01 GB |
| 42 | Falcon 3 3B | TII · 3B · llm | 55.7 | 55.7 | – | – | – | 2.37 GB |
| 43 | Gemma 3 4B | Google · 4B · llm | 54.7 | 58.1 | – | 51.2 | – | 2.82 GB |
| 44 | LLaVA 1.6 7B | LLaVA · 7B · multimodal | 54.6 | 60.0 | – | – | – | 5 GB |
| 45 | Llama 3.2 3B Instruct | Meta · 3.2B · llm | 54.2 | 63.4 | 35.0 | 77.7 | 1063 | 2.38 GB |
| 46 | Danube 3 4B | H2O.ai · 4B · llm | 53.9 | 53.9 | – | – | – | 2.73 GB |
| 47 | Granite 3.3 2B | IBM · 2B · llm | 52.4 | 52.4 | – | – | – | 1.94 GB |
| 48 | Mistral 7B Instruct v0.3 | Mistral AI · 7.3B · llm | 47.0 | 60.1 | 30.5 | 50.5 | – | 4.57 GB |
| 49 | StableLM Zephyr 3B | Stability AI · 3B · llm | 46.0 | 46.0 | – | – | – | 2.09 GB |
| 50 | PaliGemma 3B | Google · 3B · multimodal | 45.5 | – | – | – | – | 2.5 GB |
| 51 | Falcon 3 1B | TII · 1B · llm | 42.9 | 42.9 | – | – | – | 1.48 GB |
| 52 | Moondream 2 | Moondream · 1.8B · multimodal | 42.8 | – | – | – | – | 1.5 GB |
| 53 | SmolLM2 1.7B | HuggingFace · 1.7B · llm | 41.8 | 51.4 | – | 32.3 | – | 1.48 GB |
| 54 | Yi Coder 1.5B | 01.AI · 1.5B · code | 41.5 | – | 41.5 | – | – | 1.4 GB |
| 55 | Rocket 3B | Pansophic · 3B · llm | 41.0 | 41.0 | – | – | – | 2.09 GB |
| 56 | Qwen 2.5 0.5B | Alibaba · 0.5B · llm | 39.9 | 47.5 | 30.5 | 41.6 | – | 0.96 GB |
| 57 | Gemma 2 2B | Google · 2.6B · llm | 37.6 | 51.3 | 17.7 | 23.9 | 1130 | 2.09 GB |
| 58 | Code Llama 13B Instruct | Meta · 13B · code | 36.0 | – | 36.0 | – | – | 7.83 GB |
| 59 | StarCoder2 7B | BigCode · 7B · code | 35.4 | – | 35.4 | – | – | 4.66 GB |
| 60 | Llama 3.2 1B Instruct | Meta · 1.24B · llm | 33.5 | 49.3 | 18.0 | 44.4 | 989 | 1.25 GB |
| 61 | Stable Code 3B | Stability AI · 3B · code | 32.4 | – | 32.4 | – | – | 2.09 GB |
| 62 | Gemma 3 1B | Google · 1B · llm | 31.7 | 38.8 | – | 24.6 | – | 1.25 GB |
| 63 | StarCoder2 3B | BigCode · 3B · code | 31.7 | – | 31.7 | – | – | 2.26 GB |
| 64 | Code Llama 7B | Meta · 7B · code | 31.7 | – | 31.7 | – | – | 4.3 GB |
| 65 | CodeGemma 2B | Google · 2B · code | 31.1 | – | 31.1 | – | – | 2.02 GB |
| 66 | SmolLM2 135M | HuggingFace · 0.135B · llm | 30.1 | 30.1 | – | – | – | 0.64 GB |
| 67 | Danube 3 500M | H2O.ai · 0.5B · llm | 28.4 | 28.4 | – | – | – | 0.8 GB |
| 68 | SmolLM2 360M | HuggingFace · 0.36B · llm | 24.0 | 35.8 | – | 12.2 | – | 0.75 GB |
| 69 | TinyLlama 1.1B | TinyLlama · 1.1B · llm | 15.5 | 25.5 | – | 5.5 | – | 1.12 GB |
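If a model you care about is missing from the table, its Min VRAM can be roughed out from parameter count. Eyeballing the Min VRAM column, a Q4-class quantization lands near 0.6 GB per billion parameters plus a few hundred MB of overhead; the 0.6 and 0.4 constants below are fitted by eye to this table, not vendor figures, and KV cache / context memory is not included:

```python
def est_vram_gb(params_b: float) -> float:
    """Rough Q4-quantized VRAM estimate from parameter count in billions.

    Constants are eyeballed from the leaderboard's Min VRAM column;
    actual footprint varies with quantization scheme and architecture.
    """
    return round(params_b * 0.6 + 0.4, 2)

for p in (7.6, 14, 32, 70):
    print(f"{p}B -> ~{est_vram_gb(p)} GB")
```

The fit is loosest at the extremes: tiny models carry proportionally larger embedding tables (Qwen 2.5 0.5B needs 0.96 GB, well above the estimate), and the 70B estimate of 42.4 GB overshoots the table's 40.1 GB.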

Source: Public benchmarks aggregated from the HF Open LLM Leaderboard, Chatbot Arena, BigCodeBench, MMLU-Pro, MATH-500, and the models' technical reports. Numbers are best-effort April 2026 snapshots; some are self-reported by the model authors and have not been independently verified. Last updated 2026-04-29.