
Best AI Models for GTX 1650

An NVIDIA Turing GPU with 4GB of VRAM

VRAM: 4GB
Generation: Turing
MSRP: $149
Vendor: NVIDIA
Runs Perfectly (S/A/B): 74
With Offloading (C/D): 25
Cannot Run (F): 10
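Whether a model lands in the "runs perfectly" or "offloading" bucket above is largely a memory-budget question: quantized weights plus KV cache plus runtime overhead must fit within the card's 4GB. A minimal sketch of that check (all constants are illustrative rules of thumb, not the site's actual grading formula):

```python
# Rough rule-of-thumb VRAM check: a model fits when its quantized weights
# plus KV cache plus a fixed runtime overhead stay under the card's VRAM.
# All constants below are assumptions for illustration, not measurements.

def fits_in_vram(weights_gb: float, ctx_tokens: int = 4096,
                 kv_gb_per_1k_tokens: float = 0.12,
                 overhead_gb: float = 0.5,
                 vram_gb: float = 4.0) -> bool:
    """Return True if the estimated total footprint fits in VRAM."""
    kv_cache_gb = kv_gb_per_1k_tokens * ctx_tokens / 1000
    total = weights_gb + kv_cache_gb + overhead_gb
    return total <= vram_gb

# A ~2.7GB quantization (e.g. a 3B model at Q5_K_M) with a 4k context fits:
print(fits_in_vram(2.7))   # True
# A 5GB quantization cannot fit regardless of context:
print(fits_in_vram(5.0))   # False
```

Note that the context window eats into the same budget: shrinking the context frees VRAM, which is why the S-tier models below have "room for large context windows".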
S: Excellent (65 models)

Runs great with room for large context windows

Model | Vendor | Grade | Params | Category | Quantization | Size
Llama 3.2 3B Instruct | Meta | S | 3.2B | Chat / LLM | Q5_K_M | 2.7GB
Qwen 2.5 3B | Alibaba | S | 3B | Chat / LLM | Q4_K_M | 2.5GB
Qwen 2.5 Coder 3B | Alibaba | S | 3B | Coding | Q4_K_M | 2.5GB
Falcon 3 3B | TII | S | 3B | Chat / LLM | Q4_K_M | 2.4GB
StableLM Zephyr 3B | Stability AI | S | 3B | Chat / LLM | Q4_K_M | 2.1GB
Rocket 3B | Pansophic | S | 3B | Chat / LLM | Q4_K_M | 2.1GB
StarCoder2 3B | BigCode | S | 3B | Coding | Q4_K_M | 2.3GB
Stable Code 3B | Stability AI | S | 3B | Coding | Q4_K_M | 2.1GB
PaliGemma 3B | Google | S | 3B | Multimodal | Q4_K_M | 2.5GB
Gemma 2 2B | Google | S | 2.6B | Chat / LLM | Q4_K_M | 2.1GB
EXAONE 3.5 2.4B | LG AI | S | 2.4B | Chat / LLM | Q4_K_M | 2.0GB
Qwen2-VL 2B | Alibaba | S | 2.2B | Multimodal | Q8_0 | 2.0GB
CodeGemma 2B | Google | S | 2B | Coding | Q4_K_M | 2.0GB
MiniCPM-V 2.6 | OpenBMB | S | 2B | Multimodal | Q4_K_M | 2.1GB
Granite 3.3 2B | IBM | S | 2B | Chat / LLM | Q4_K_M | 1.9GB
Moondream 2 | Moondream | S | 1.8B | Multimodal | Q4_K_M | 1.5GB
SmolLM2 1.7B | HuggingFace | S | 1.7B | Chat / LLM | Q8_0 | 2.2GB
Qwen 2.5 1.5B | Alibaba | S | 1.5B | Chat / LLM | Q8_0 | 2.3GB
DeepSeek R1 Distill 1.5B | DeepSeek | S | 1.5B | Chat / LLM | Q8_0 | 2.3GB
Qwen 2.5 Coder 1.5B | Alibaba | S | 1.5B | Coding | Q8_0 | 2.3GB
Yi Coder 1.5B | 01.AI | S | 1.5B | Coding | Q8_0 | 2.0GB
DeepSeek Coder 1.3B | DeepSeek | S | 1.3B | Coding | Q8_0 | 1.8GB
Llama 3.2 1B Instruct | Meta | S | 1.24B | Chat / LLM | Q8_0 | 1.7GB
TinyLlama 1.1B | TinyLlama | S | 1.1B | Chat / LLM | Q8_0 | 1.6GB
Gemma 3 1B | Google | S | 1B | Chat / LLM | Q8_0 | 1.5GB
Falcon 3 1B | TII | S | 1B | Chat / LLM | Q8_0 | 2.2GB
Stable Diffusion 2.1 Base (CoreML) | Stability AI / Apple | S | 0.86B | Image Generation | CoreML-Palettized | 1.6GB
Stable Diffusion 1.5 (CoreML) | Runway | S | 0.86B | Image Generation | CoreML-Palettized | 2.5GB
Stable Diffusion 1.5 (GGUF) | Runway / GPUStack | S | 0.86B | Image Generation | Q8_0 | 2.3GB
Stable Diffusion 2.1 (GGUF) | Stability AI | S | 0.86B | Image Generation | Q8_0 | 2.7GB
Whisper Large v3 Turbo | OpenAI | S | 0.81B | Speech Recognition | Q8_0 | 2.0GB
Whisper Medium | OpenAI | S | 0.77B | Speech Recognition | Q8_0 | 1.9GB
Distil-Whisper Large v3 | HuggingFace | S | 0.76B | Speech Recognition | Q8_0 | 1.9GB
BGE Reranker v2 M3 | BAAI | S | 0.568B | Reranker | FP16 | 1.6GB
Qwen 2.5 0.5B | Alibaba | S | 0.5B | Chat / LLM | Q8_0 | 1.1GB
Qwen 2.5 Coder 0.5B | Alibaba | S | 0.5B | Coding | Q8_0 | 1.1GB
Danube 3 500M | H2O.ai | S | 0.5B | Chat / LLM | Q8_0 | 1.0GB
SmolLM2 360M | HuggingFace | S | 0.36B | Chat / LLM | Q8_0 | 0.9GB
BGE Large EN v1.5 | BAAI | S | 0.335B | Embedding | FP16 | 1.1GB
MusicGen Small | Meta | S | 0.3B | Audio Generation | ONNX-Q4F16 | 0.8GB
Whisper Small | OpenAI | S | 0.24B | Speech Recognition | Q8_0 | 0.9GB
Nomic Embed Text v1.5 | Nomic AI | S | 0.137B | Embedding | FP16 | 0.8GB
SmolLM2 135M | HuggingFace | S | 0.135B | Chat / LLM | FP16 | 0.8GB
Kokoro 82M TTS | Kokoro | S | 0.082B | Text to Speech | ONNX-Q8F16 | 0.6GB
Whisper Base | OpenAI | S | 0.074B | Speech Recognition | Q8_0 | 0.3GB
Whisper Base English | OpenAI | S | 0.074B | Speech Recognition | Q8_0 | 0.3GB
Whisper Tiny English (Quantized) | OpenAI | S | 0.039B | Speech Recognition | Q5_1 | 0.1GB
Whisper Tiny | OpenAI | S | 0.039B | Speech Recognition | Q8_0 | 0.2GB
BGE Small EN v1.5 | BAAI | S | 0.033B | Embedding | Q8_0 | 0.1GB
Snowflake Arctic Embed S | Snowflake | S | 0.033B | Embedding | Q8_0 | 0.1GB
Jina Reranker Tiny EN | Jina AI | S | 0.033B | Reranker | FP16 | 0.1GB
all-MiniLM-L6-v2 | Sentence Transformers | S | 0.023B | Embedding | Q8_0 | 0.1GB
Piper TTS - Amy (English) | Rhasspy | S | 0.02B | Text to Speech | ONNX | 0.1GB
Piper TTS - Lessac (English) | Rhasspy | S | 0.02B | Text to Speech | ONNX | 0.1GB
Piper TTS - LibriTTS-R (English) | Rhasspy | S | 0.02B | Text to Speech | ONNX | 0.6GB
Piper TTS - Spanish (MLS) | Rhasspy | S | 0.02B | Text to Speech | ONNX | 0.1GB
Piper TTS - French (Siwis) | Rhasspy | S | 0.02B | Text to Speech | ONNX | 0.5GB
Piper TTS - German (Thorsten) | Rhasspy | S | 0.02B | Text to Speech | ONNX | 0.1GB
Piper TTS - Chinese (Huayan) | Rhasspy | S | 0.02B | Text to Speech | ONNX | 0.1GB
Piper TTS - Japanese (Kokoro) | Rhasspy | S | 0.02B | Text to Speech | ONNX | 0.1GB
Piper TTS - Korean | Rhasspy | S | 0.02B | Text to Speech | ONNX | 0.1GB
Piper TTS - Russian (Irina) | Rhasspy | S | 0.02B | Text to Speech | ONNX | 0.1GB
Piper TTS - Portuguese (Faber) | Rhasspy | S | 0.02B | Text to Speech | ONNX | 0.1GB
Piper TTS - Italian (Riccardo) | Rhasspy | S | 0.02B | Text to Speech | ONNX | 0.5GB
Piper TTS - Arabic (Kareem) | Rhasspy | S | 0.02B | Text to Speech | ONNX | 0.1GB
A: Great (6 models)

Runs well with good performance

B: Good (3 models)

Runs, but may be tight on memory

C: Possible (12 models)

Needs partial CPU offloading; slower performance

D: Struggling (13 models)

Heavy offloading required; very slow

F: Cannot Run (10 models)

Insufficient hardware to run this model
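The C and D grades hinge on partial CPU offloading: only as many transformer layers as fit in VRAM are kept on the GPU, and the rest run on the CPU. A rough sketch of the layer-split arithmetic (the layer count and sizes below are hypothetical, not taken from this page):

```python
# Sketch of the partial-offload arithmetic behind the C/D grades.
# Assumes layers are roughly equal in size, which is approximately
# true for the repeated blocks of a transformer.

def gpu_layers(total_layers: int, weights_gb: float,
               vram_budget_gb: float) -> int:
    """Estimate how many transformer layers fit in the VRAM budget."""
    per_layer_gb = weights_gb / total_layers
    return min(total_layers, int(vram_budget_gb / per_layer_gb))

# A hypothetical 32-layer, 5.5GB quantization against a ~3.5GB usable budget:
print(gpu_layers(32, 5.5, 3.5))   # 20 layers on GPU, 12 on CPU
```

In llama.cpp this split is exposed as the `-ngl` / `--n-gpu-layers` option; every layer pushed to the CPU costs throughput, which is why D-grade models end up "very slow".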


Frequently Asked Questions

What AI models can I run on the NVIDIA GeForce GTX 1650?
The NVIDIA GeForce GTX 1650 with 4GB VRAM can run 74 AI models smoothly (grade B or better), including Llama 3.2 3B Instruct, Qwen 2.5 3B, Qwen 2.5 Coder 3B, and 71 more.
How much VRAM does the NVIDIA GeForce GTX 1650 have?
The NVIDIA GeForce GTX 1650 has 4GB of VRAM, enough to run many small local AI models entirely on the GPU.
Is the NVIDIA GeForce GTX 1650 good for AI?
With 4GB of VRAM, the NVIDIA GeForce GTX 1650 runs 74 of the 109 models in our database at grade B or better. For larger models you may need smaller quantizations or partial CPU offloading.
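The "smaller quantizations" advice follows from simple arithmetic: on-disk size is roughly parameter count times bits per weight. A back-of-envelope sketch (the bits-per-weight figures are approximate community rules of thumb, and the footprints listed above may additionally include runtime overhead):

```python
# Back-of-envelope quantized model size: params * bits_per_weight / 8.
# The bits-per-weight values are approximations, not exact GGUF numbers.

BITS_PER_WEIGHT = {"Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q8_0": 8.5, "FP16": 16.0}

def approx_size_gb(params_billion: float, quant: str) -> float:
    """Approximate on-disk size of a quantized model in GB."""
    bits = BITS_PER_WEIGHT[quant]
    return params_billion * bits / 8  # 1B params at 8 bits ~= 1GB

# A 3B model at Q4_K_M leaves headroom on a 4GB card...
print(round(approx_size_gb(3.0, "Q4_K_M"), 2))   # 1.8
# ...while a 7B model at Q4_K_M already exceeds 4GB before KV cache:
print(round(approx_size_gb(7.0, "Q4_K_M"), 2))   # 4.2
```

This is why the S-tier list above tops out around 3B parameters: at 4-5 bits per weight, that is the largest class of LLM that fits in 4GB with context to spare.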