Hardware-filtered

Uncensored local AI models

18 open-weights models without alignment refusals — abliterated derivatives, Dolphin fine-tunes, creative-writing tunes, and naturally uncensored base models. Each is graded against your hardware and labelled with provenance and license.

Use responsibly.

These models are listed for research, creative writing, and dual-use technical work. They do not enforce content guardrails — that's the user's responsibility. Some inherit non-commercial licenses (Codestral, Llama, Stheno) — read each model's license before shipping. Local generation does not legalize content that is illegal where you live.

Provenance — where models come from

Official Base

Lab-released foundation model with no instruct/RLHF alignment — naturally has no refusals

2 models

Abliteration

Mechanical removal of the refusal direction from an official model's weights — no retraining

4 models

Community Fine-Tune

Full retraining on de-aligned datasets — preserves capability best

3 models

Roleplay/Creative Tune

Fine-tuned for narrative writing or character roleplay

9 models

Community Original

Custom merge or fine-tune — no single official parent

0 models
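The "Abliteration" category works by editing weights directly: estimate a single "refusal direction" in the residual stream (typically from activation differences between harmful and harmless prompts), then project it out of the matrices that write into that stream. A minimal sketch of the projection step, using NumPy as a stand-in for real model weights:

```python
import numpy as np

def ablate_refusal_direction(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Project the estimated refusal direction r out of weight matrix W.

    W : (d_model, d_in) matrix that writes into the residual stream
    r : (d_model,) estimated refusal direction (any scale; normalised here)

    The returned matrix can no longer write any component along r.
    """
    r_hat = r / np.linalg.norm(r)
    # Subtract the rank-1 projection onto r_hat from W's output space
    return W - np.outer(r_hat, r_hat) @ W

# Toy check: after ablation, every output has zero component along r
W = np.random.default_rng(0).normal(size=(16, 8))
r = np.random.default_rng(1).normal(size=16)
W_abl = ablate_refusal_direction(W, r)
```

In practice this projection is applied to the attention-output and MLP-down projections across layers. No gradient updates are involved, which is why abliteration preserves instruct behavior but can leave small capability regressions — the reason some derivatives (like NeuralDaredevil below) add a DPO recovery pass afterwards.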

General unrestricted assistants

Chat models with the refusal layer removed. Best general-purpose pick for research, security work, and any task where the official Instruct model bails out. Sorted by community downloads.

  1. Qwen3 8B Base

    Official Base

    Alibaba

    Official Qwen3 8B foundation model — pretrained only, no RLHF or refusal training. The 'naturally uncensored' option: no abliteration needed because alignment was never applied. Apache 2.0.

    8B · 5 GB
    573.6K downloads · 101 likes
  2. Mistral Nemo Base 12B

    Official Base

    Mistral AI

    Official Mistral-Nemo 12B foundation model (NVIDIA collab) — pretrained only, no instruct or refusal layer. Naturally uncensored, Apache 2.0, 128K context.

    12B · 8 GB
    53.1K downloads · 344 likes
  3. Dolphin 3.0 Llama 3.1 8B

    Community Fine-Tune

    Cognitive Computations · built on Llama-3.1-8B

    Eric Hartford's flagship uncensored fine-tune of Llama 3.1 8B. Steerable assistant with personality controls and no built-in refusals — strong general-purpose pick at 8B.

    8B · 5 GB
    52.8K downloads · 299 likes
  4. Dolphin Mistral 24B (Venice Edition)

    Community Fine-Tune

    Cognitive Computations · built on Mistral-Small-24B-Instruct-2501

    Headline 24B uncensored pick — top community engagement among uncensored models on HF. Steerable assistant on Mistral-Small-24B base. Apache 2.0.

    24B · 15 GB
    16.6K downloads · 507 likes
  5. NeuralDaredevil 8B (abliterated)

    Abliteration

    mlabonne · built on Daredevil-8B-abliterated

    Llama-3 8B with refusal direction ablated, then DPO-recovered to restore capability. Best quality-retention 8B abliteration — minimal regression vs the official Instruct model.

    8B · 5 GB
    15.5K downloads · 269 likes
  6. Llama 3.1 8B Instruct (abliterated)

    Abliteration

    mlabonne · built on Meta-Llama-3.1-8B-Instruct

    Pure refusal-direction ablation of Llama-3.1-8B-Instruct. No retraining — keeps the official instruct behavior but removes the 'I can't help with that' reflex.

    8B · 5 GB
    11.7K downloads · 202 likes
  7. Dolphin 3.0 R1 Mistral 24B

    Community Fine-Tune

    Cognitive Computations · built on Mistral-Small-24B-Base-2501

    The only widely available uncensored R1-style reasoning model. Mistral-Small-24B base with chain-of-thought training and refusals removed. 128K context.

    24B · 15 GB
    1.2K downloads · 211 likes
  8. Llama 3.1 70B (lorablated)

    Abliteration

    mlabonne · built on Meta-Llama-3.1-70B-Instruct

    Llama-3.1-70B-Instruct with abliteration applied via LoRA merge. Cleanest 70B refusal-removed pick — keeps the official Instruct quality.

    70B · 43 GB
    260 downloads · 78 likes

Creative writing & roleplay

Models fine-tuned for long-form narrative, prose style, and character voice. The TheDrummer / Sao10K / Anthracite axis dominates this lane. Some are non-commercial — check the license badge.

  1. Dolphin 3.0 Llama 3.1 8B

    Community Fine-Tune

    Cognitive Computations · built on Llama-3.1-8B

    Eric Hartford's flagship uncensored fine-tune of Llama 3.1 8B. Steerable assistant with personality controls and no built-in refusals — strong general-purpose pick at 8B.

    8B · 5 GB
    52.8K downloads · 299 likes
  2. Euryale L3.3 70B v2.3

    Roleplay/Creative Tune

    Sao10K · built on Llama-3.3-70B-Instruct

    Canonical 70B creative-writing and roleplay model. Llama-3.3-70B base with extended training on long-form prose. The reference 70B uncensored pick.

    70B · 43 GB
    50.0K downloads · 83 likes
  3. Magnum v4 72B

    Roleplay/Creative Tune

    Anthracite · built on Qwen2.5-72B-Instruct

    Qwen2.5-72B fine-tuned on Claude-Opus-style literary data. Highest-quality long-form prose at the 72B class. Apache 2.0.

    72B · 44 GB
    23.4K downloads · 51 likes
  4. Dolphin Mistral 24B (Venice Edition)

    Community Fine-Tune

    Cognitive Computations · built on Mistral-Small-24B-Instruct-2501

    Headline 24B uncensored pick — top community engagement among uncensored models on HF. Steerable assistant on Mistral-Small-24B base. Apache 2.0.

    24B · 15 GB
    16.6K downloads · 507 likes
  5. Stheno L3 8B v3.2

    Roleplay/Creative Tune · Non-commercial

    Sao10K · built on Meta-Llama-3-8B-Instruct

    Long-running 8B roleplay reference. Trained for character voice consistency and long-form creative writing. CC-BY-NC-4.0 — non-commercial use only.

    8B · 5 GB
    5.6K downloads · 401 likes
  6. Cydonia 24B v4.3

    Roleplay/Creative Tune

    TheDrummer · built on Mistral-Small-3.2-24B-Instruct-2506

    Top-of-the-line 24B roleplay model on a Mistral-Small-3.2-24B base. Active development cycle — TheDrummer ships frequent revisions tracking the latest base models.

    24B · 15 GB
    3.5K downloads · 102 likes
  7. Skyfall 31B v4.2

    Roleplay/Creative Tune

    TheDrummer · built on Magistral-Small-2509

    31B creative-writing model — sweet spot between 24B and 70B. Built on Mistral-Small-3.1 upscaled. Strong long-context narrative generation.

    31B · 19 GB
    2.5K downloads · 47 likes
  8. Magnum v4 12B

    Roleplay/Creative Tune

    Anthracite · built on Mistral-Nemo-Instruct-2407

    Mistral-Nemo-12B fine-tuned on curated Claude-style prose data. Built for long-form creative writing with literary register.

    12B · 8 GB
    457 downloads · 50 likes
  9. Rocinante 12B v1.1

    Roleplay/Creative Tune

    TheDrummer · built on Mistral-Nemo-Base-2407

    Mistral-Nemo-12B roleplay fine-tune optimized for character chat. Stable workhorse for the 12GB tier — strong dialog, low repetition.

    12B · 8 GB
    290 downloads · 122 likes
  10. Magnum v4 22B

    Roleplay/Creative Tune · Non-commercial

    Anthracite · built on Mistral-Small-Instruct-2409

    Mistral-Small-22B base with Anthracite's Claude-style prose training. Sits between 12B and 70B, for users who want Magnum quality on a single 24GB card.

    22B · 14 GB
    192 downloads · 29 likes
  11. Rocinante XL 16B v1

    Roleplay/Creative Tune

    TheDrummer · built on Mistral-Nemo-Base-2407

    Newest Rocinante release — 16B upscaled Mistral-Nemo for richer prose at the 12-16GB tier. A recent (2026) release with a smaller community footprint, but actively developed.

    16B · 10 GB
    143 downloads · 11 likes

Coding without filters

Code-specialized models with refusal direction ablated. For security research, fuzzing, and dual-use tooling that mainstream-aligned assistants decline.

  1. Codestral 22B (abliterated)

    Abliteration · Non-commercial

    failspy · built on Codestral-22B-v0.1

    Mistral Codestral with refusal direction ablated. Code-specialized model without the 'I can't help with that' filter. Inherits Codestral's non-commercial license.

    22B · 14 GB
    8.1K downloads · 12 likes

Uncensored reasoning

Chain-of-thought / R1-style thinkers without alignment refusals. Currently a small lane — Dolphin-R1 is the only widely-distributed option.

  1. Dolphin 3.0 R1 Mistral 24B

    Community Fine-Tune

    Cognitive Computations · built on Mistral-Small-24B-Base-2501

    The only widely available uncensored R1-style reasoning model. Mistral-Small-24B base with chain-of-thought training and refusals removed. 128K context.

    24B · 15 GB
    1.2K downloads · 211 likes

Naturally uncensored base models

Official pretrained foundation models that were never RLHF-aligned. Less assistant-shaped, more open-ended — closer to the raw distribution. Ideal for fine-tuning your own variant.

  1. Qwen3 8B Base

    Official Base

    Alibaba

    Official Qwen3 8B foundation model — pretrained only, no RLHF or refusal training. The 'naturally uncensored' option: no abliteration needed because alignment was never applied. Apache 2.0.

    8B · 5 GB
    573.6K downloads · 101 likes
  2. Mistral Nemo Base 12B

    Official Base

    Mistral AI

    Official Mistral-Nemo 12B foundation model (NVIDIA collab) — pretrained only, no instruct or refusal layer. Naturally uncensored, Apache 2.0, 128K context.

    12B · 8 GB
    53.1K downloads · 344 likes

Don't see it fitting your card?

Bigger uncensored models (70B+) need a 48 GB card or two 24 GB cards. The cheapest path is usually a few hours on a rented A100 or H100 — both RunPod and Vast.ai bill per minute.
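The per-model size figures in this list roughly follow the standard back-of-envelope formula: parameter count × quantisation width, plus runtime overhead for the KV cache and buffers. A sketch of that arithmetic (the 1.2× overhead factor is our assumption, not a fixed rule):

```python
def vram_estimate_gb(params_billion: float, bits: int = 4,
                     overhead: float = 1.2) -> float:
    """Rough VRAM needed to run a quantised model.

    params_billion : parameter count in billions (70 for a 70B model)
    bits           : quant width (4 ≈ Q4 GGUF, 8 = Q8, 16 = fp16)
    overhead       : fudge factor for KV cache and runtime buffers
                     (assumption; grows with context length)
    """
    weight_gb = params_billion * bits / 8  # 1B params at 8-bit ≈ 1 GB
    return weight_gb * overhead

print(round(vram_estimate_gb(8)))   # ~5 GB, matching the 8B entries above
print(round(vram_estimate_gb(70)))  # ~42 GB: why 70B wants a 48 GB card
```

Raising `bits` to 8 roughly doubles the footprint; long contexts push the real overhead above 1.2×, so treat the output as a floor, not a guarantee.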

A note on tone & framing

We list these models for legitimate research, creative writing, security testing, and dual-use development. We do not list models whose model cards advertise training on harmful data, nor do we promote bypassing safety for harmful outcomes. If a model in this list is being misused or has been pulled by its author, let us know.