Site Update · February 1, 2026

RunThisModel Now Covers 109 AI Models with Hardware Compatibility Checks

RunThisModel has reached a significant milestone with 109 AI models in our compatibility database, each with verified hardware requirements based on actual GGUF file sizes from Hugging Face. When we launched, the site covered around 30 popular LLMs. Today the database spans eight categories and includes models for every major local AI use case.

What we cover now

The database includes 60 language models from families like Llama, Qwen, Gemma, Mistral, Phi, DeepSeek, and Falcon. For coding, we track 12 specialized models, including CodeLlama, StarCoder2, Qwen2.5 Coder, and Yi-Coder. Image generation covers 6 models across Stable Diffusion, SDXL, and FLUX, and speech recognition includes 8 Whisper variants from Tiny to Large v3 Turbo. We also cover 11 TTS models spanning 10 languages, 3 multimodal models, 5 embedding models, and MusicGen for audio generation.

Verified file sizes

A key improvement this quarter was verifying every GGUF file size against actual Hugging Face repository data. Our sync scripts pull exact byte counts from HF, replacing the estimated sizes we previously used. This corrected VRAM requirements for 145 model quantizations, many of which were off by 20 to 50 percent. Your compatibility grades are now based on real data.
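To give a feel for how a verified byte count turns into a hardware requirement, here is a minimal sketch. The overhead allowance and safety multiplier are illustrative assumptions for this example, not RunThisModel's actual grading formula, and the file size used is a hypothetical Q4-class quantization:

```python
def estimate_vram_gb(gguf_bytes: int,
                     context_overhead_gb: float = 1.5,
                     headroom: float = 1.1) -> float:
    """Estimate VRAM needed to run a GGUF model.

    gguf_bytes          -- exact file size pulled from the Hugging Face repo
    context_overhead_gb -- illustrative allowance for KV cache and buffers
    headroom            -- illustrative safety multiplier
    """
    weights_gb = gguf_bytes / 1024**3  # bytes -> GiB
    return round((weights_gb + context_overhead_gb) * headroom, 1)

# Hypothetical ~4.7 GB Q4 quantization of a mid-size model
print(estimate_vram_gb(4_700_000_000))  # → 6.5
```

The point of the verification work is the first argument: an estimated size that is off by 20 to 50 percent shifts the result by whole gigabytes, which is enough to flip a compatibility grade on an 8 GB or 12 GB card.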

AI-generated reviews

Every model now has an AI-generated review covering its strengths, weaknesses, best use cases, and how it compares to alternatives in its size class. These reviews are generated by Qwen2.5 72B using detailed prompts that incorporate benchmark data, community feedback, and our own testing notes.

What's next

We are tracking 81 candidate models discovered by our trending model scanner. The most promising additions include new coding models, multilingual specialists, and next-generation reasoning models. We aim to reach 150 models by mid-2026 while maintaining our commitment to verified, accurate hardware requirements for every entry.