# Heretic

Fully automatic censorship removal for language models

Heretic is a tool that removes censorship (aka "safety alignment") from transformer-based language models without expensive post-training. It combines an advanced implementation of directional ablation, also known as "abliteration" (Arditi et al. 2024, Lai 2025 (1, 2)), with a TPE-based parameter optimizer powered by Optuna.
This approach enables Heretic to work completely automatically. Heretic finds high-quality abliteration parameters by co-minimizing the number of refusals and the KL divergence from the original model. This results in a decensored model that retains as much of the original model's intelligence as possible. Using Heretic does not require an understanding of transformer internals. In fact, anyone who knows how to run a command-line program can use Heretic to decensor language models.
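
To make the core idea concrete, here is a minimal sketch of directional ablation, assuming the simple difference-of-means refusal direction from Arditi et al. 2024; all names and shapes are illustrative, not Heretic's actual implementation:

```python
import torch

def refusal_direction(harmful: torch.Tensor, harmless: torch.Tensor) -> torch.Tensor:
    """Estimate the refusal direction as the normalized difference of mean
    residual-stream activations for "harmful" vs. "harmless" prompts."""
    direction = harmful.mean(dim=0) - harmless.mean(dim=0)
    return direction / direction.norm()

def ablate(W: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Project the refusal direction out of a weight matrix that writes to the
    residual stream: W' = (I - r r^T) W, so the layer can no longer write along r."""
    return W - torch.outer(r, r) @ W

# Toy stand-in data: per-prompt residual vectors of dimension 4096.
harmful = torch.randn(100, 4096)
harmless = torch.randn(100, 4096)
r = refusal_direction(harmful, harmless)
W = torch.randn(4096, 11008)  # e.g. an MLP output projection (rows live in residual space)
W_ablated = ablate(W, r)
print(torch.norm(r @ W_ablated))  # ~0: the refusal direction has been removed
```

And here is a toy illustration of the search strategy: Optuna's TPE sampler in a multi-objective study that co-minimizes two competing quantities standing in for refusal count and KL divergence. The analytic objectives below are made up purely for demonstration:

```python
import optuna

def objective(trial: optuna.Trial) -> tuple[float, float]:
    # Hypothetical parameter: how strongly to ablate the refusal direction.
    w = trial.suggest_float("ablation_weight", 0.0, 1.5)
    # Made-up trade-off: stronger ablation suppresses refusals but raises KL.
    refusals = 100.0 * (1.0 - w / 1.5) ** 2
    kl_divergence = 0.5 * w ** 2
    return refusals, kl_divergence

study = optuna.create_study(
    directions=["minimize", "minimize"],  # co-minimize refusals and KL divergence
    sampler=optuna.samplers.TPESampler(),
)
study.optimize(objective, n_trials=100)
print(f"{len(study.best_trials)} Pareto-optimal trials found")
```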
<img width="650" height="715" alt="Screenshot" src="https://github.com/user-attachments/assets/d71a5efa-d6be-4705-a817-63332afb2d15" />
Running unsupervised with the default configuration, Heretic can produce decensored models that rival the quality of abliterations created manually by human experts:
| Model | Refusals for "harmful" prompts | KL divergence from original model for "harmless" prompts |
| :--- | ---: | ---: |
| google/gemma-3-12b-it (original) | 97/100 | 0 (by definition) |
| mlabonne/gemma-3-12b-it-abliterated-v2 | 3/100 | 1.04 |
| huihui-ai/gemma-3-12b-it-abliterated | 3/100 | 0.45 |
| p-e-w/gemma-3-12b-it-heretic (ours) | 3/100 | 0.16 |
The Heretic version, generated without any human effort, achieves the same
level of refusal suppression as other abliterations, but at a much lower
KL divergence, indicating less damage to the original model's capabilities.
(You can reproduce those numbers using Heretic's built-in evaluation functionality,
e.g. `heretic --model google/gemma-3-12b-it --evaluate-model p-e-w/gemma-3-12b-it-heretic`.
Note that the exact values might be platform- and hardware-dependent.
The table above was compiled using PyTorch 2.8 on an RTX 5090.)
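
For reference, a minimal sketch of what such a KL-divergence measurement can look like, comparing next-token distributions of the original and decensored models on "harmless" prompts. This is not Heretic's exact evaluation code; the decensored model path and the prompt set are placeholders:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

ORIGINAL = "Qwen/Qwen3-4B-Instruct-2507"
DECENSORED = "path/to/decensored-model"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(ORIGINAL)
original = AutoModelForCausalLM.from_pretrained(ORIGINAL)
decensored = AutoModelForCausalLM.from_pretrained(DECENSORED)

prompts = ["How do I bake sourdough bread?"]  # stand-in "harmless" prompt set
kl_values = []
for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        log_p = original(**inputs).logits[0, -1].log_softmax(-1)    # original
        log_q = decensored(**inputs).logits[0, -1].log_softmax(-1)  # decensored
    # KL(P || Q) over the next-token distribution
    kl_values.append(F.kl_div(log_q, log_p, log_target=True, reduction="sum").item())

print(f"mean KL divergence: {sum(kl_values) / len(kl_values):.4f}")
```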
Of course, mathematical metrics and automated benchmarks never tell the whole story, and are no substitute for human evaluation. Models generated with Heretic have been well-received by users (links and emphasis added):
"I was skeptical before, but I just downloaded GPT-OSS 20B Heretic model and holy shit. It gives properly formatted long responses to sensitive topics, using the exact uncensored words that you would expect from an uncensored model, produces markdown format tables with details and whatnot. Looks like this is the best abliterated version of this model so far..." (Link to comment)
"Heretic GPT 20b seems to be the best uncensored model I have tried yet. It doesn't destroy a the model's intelligence and it is answering prompts normally would be rejected by the base model." (Link to comment)
"[Qwen3-4B-Instruct-2507-heretic] Has been the best unquantized abliterated model that I have been able to run on 16gb vram." (Link to comment)
Heretic supports most dense models, including many multimodal models, and several different MoE architectures. It does not yet support SSMs/hybrid models, models with inhomogeneous layers, or certain novel attention systems.
You can find a small collection of models that have been decensored using Heretic on Hugging Face, and the community has created and published well over 1,000 Heretic models in addition to those.
## Usage
Prepare a Python 3.10+ environment with PyTorch 2.2+ installed as appropriate for your hardware. Then run:
```
pip install -U heretic-llm
heretic Qwen/Qwen3-4B-Instruct-2507
```
Replace `Qwen/Qwen3-4B-Instruct-2507` with whatever model you want to decensor.
The process is fully automatic and does not require configuration; however,
Heretic has a variety of configuration parameters that can be changed for
greater control. Run `heretic --help` to see available command-line options,
or look at `config.default.toml` if you prefer to use a configuration file.
At the start of a program run, Heretic benchmarks the system to determine
the optimal batch size to make the most of the available hardware.
On an RTX 3090, with the default configuration, decensoring Llama-3.1-8B-Instruct
takes about 45 minutes. Note that Heretic supports model quantization with
bitsandbytes, which can drastically reduce the amount of VRAM required to process
models. Set the `quantization` option to `bnb_4bit` to enable quantization.
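
For example, assuming the `quantization` option is also exposed as a command-line flag of the same name (run `heretic --help` to confirm), enabling 4-bit quantization could look like this:

```
heretic --quantization bnb_4bit Qwen/Qwen3-4B-Instruct-2507
```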
After Heretic has finished decensoring a model, you are given the option to save the model, upload it to Hugging Face, chat with it to test how well it works, or any combination of those actions.
## Research features
In addition to its primary function of removing model censorship, Heretic also
provides features designed to support research into the semantics of model internals
(interpretability). To use those features, you need to install Heretic with the
optional research extra:
```
pip install -U heretic-llm[research]
```
This gives you access to the following functionality:
### Generate plots of residual vectors by passing `--plot-residuals`
When run with this flag, Heretic will:
- Compute residual vectors (hidden states) for the first output token, for each transformer layer, for both "harmful" and "harmless" prompts.
- Perform a PaCMAP projection from residual space to 2D-space.
- Left-right align the projections of "harmful"/"harmless" residuals by their geometric medians to make projections for consecutive layers more similar. Additionally, PaCMAP is initialized with the previous layer's projections for each new layer, minimizing disruptive transitions.
- Scatter-plot the projections, generating a PNG image for each layer.
- Generate an animation showing how residuals transform between layers, as an animated GIF.
See the configuration file for options that allow you to control various aspects of the generated plots.
Note that PaCMAP is an expensive operation that is performed on the CPU. For larger models, it can take an hour or more to compute projections for all layers.
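
As a rough illustration of the projection and alignment steps, here is a minimal sketch using the pacmap package for one layer; the toy data, the Weiszfeld-style geometric-median routine, and the sign-flip alignment are assumptions for demonstration, not Heretic's actual plotting code:

```python
import numpy as np
import pacmap
import matplotlib.pyplot as plt

def geometric_median(points: np.ndarray, iters: int = 100) -> np.ndarray:
    """Weiszfeld's algorithm: the point minimizing the sum of distances."""
    median = points.mean(axis=0)
    for _ in range(iters):
        dists = np.clip(np.linalg.norm(points - median, axis=1), 1e-8, None)
        weights = 1.0 / dists
        median = (points * weights[:, None]).sum(axis=0) / weights.sum()
    return median

# Stand-in residuals for one layer: 100 "harmful" and 100 "harmless" prompts.
harmful = np.random.randn(100, 4096).astype(np.float32)
harmless = np.random.randn(100, 4096).astype(np.float32) + 0.5

reducer = pacmap.PaCMAP(n_components=2)
projected = reducer.fit_transform(np.vstack([harmful, harmless]))
p_harmful, p_harmless = projected[:100], projected[100:]

# Align left-right: flip the x-axis if the "harmful" median is to the right.
if geometric_median(p_harmful)[0] > geometric_median(p_harmless)[0]:
    projected[:, 0] *= -1.0

plt.scatter(p_harmful[:, 0], p_harmful[:, 1], label="harmful", s=8)
plt.scatter(p_harmless[:, 0], p_harmless[:, 1], label="harmless", s=8)
plt.legend()
plt.savefig("layer_01.png")
```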
### Print details about residual geometry by passing `--print-residual-geometry`
If you are interested in a quantitative analysis of how residual vectors for "harmful" and "harmless" prompts relate to each other, this flag prints a table of per-layer metrics like the following (shown here for gemma-3-270m-it):
```
┏━━━━━━━┳━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━┓
┃ Layer ┃ S(g,b) ┃ S(g*,b*) ┃  S(g,r) ┃ S(g*,r*) ┃  S(b,r) ┃ S(b*,r*) ┃      |g| ┃     |g*| ┃      |b| ┃     |b*| ┃     |r| ┃    |r*| ┃   Silh ┃
┡━━━━━━━╇━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━┩
│     1 │ 1.0000 │   1.0000 │ -0.4311 │  -0.4906 │ -0.4254 │  -0.4847 │   170.29 │   170.49 │   169.78 │   169.85 │    1.19 │    1.31 │ 0.0480 │
│     2 │ 1.0000 │   1.0000 │  0.4297 │   0.4465 │  0.4365 │   0.4524 │   768.55 │   768.77 │   771.32 │   771.36 │    6.39 │    5.76 │ 0.0745 │
│     3 │ 0.9999 │   1.0000 │ -0.5699 │  -0.5577 │ -0.5614 │  -0.5498 │  1020.98 │  1021.13 │  1013.80 │  1014.71 │   12.70 │   11.60 │ 0.0920 │
│     4 │ 0.9999 │   1.0000 │  0.6582 │   0.6553 │  0.6659 │   0.6627 │  1356.39 │  1356.20 │  1368.71 │  1367.95 │   18.62 │   17.84 │ 0.0957 │
│     5 │ 0.9987 │   0.9990 │ -0.6880 │  -0.6761 │ -0.6497 │  -0.6418 │   766.54 │   762.25 │   731.75 │   732.42 │   51.97 │   45.24 │ 0.1018 │
│     6 │ 0.9998 │   0.9998 │ -0.1983 │  -0.2312 │ -0.1811 │  -0.2141 │  2417.35 │  2421.08 │  2409.18 │  2411.40 │   43.06 │   43.47 │ 0.0900 │
│     7 │ 0.9998 │   0.9997 │ -0.5258 │  -0.5746 │ -0.5072 │  -0.5560 │  3444.92 │  3474.99 │  3400.01 │  3421.63 │   86.94 │   94.38 │ 0.0492 │
│     8 │ 0.9990 │   0.9991 │ ...
```
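
To give a sense of how metrics of this kind can be computed, here is a minimal sketch; the reading of g, b, and r as the harmless centroid, the harmful centroid, and their difference (a candidate refusal direction) is an assumption, as are the toy data. Consult Heretic's source for the actual definitions, including the starred variants:

```python
import numpy as np
from sklearn.metrics import silhouette_score

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in residuals for one layer of a small model (hidden size 640).
harmless = np.random.randn(100, 640).astype(np.float32)       # "good" prompts
harmful = np.random.randn(100, 640).astype(np.float32) + 0.1  # "bad" prompts

g = harmless.mean(axis=0)  # assumed: harmless centroid
b = harmful.mean(axis=0)   # assumed: harmful centroid
r = b - g                  # assumed: candidate refusal direction

print(f"S(g,b) = {cosine(g, b):.4f}")
print(f"S(g,r) = {cosine(g, r):.4f}")
print(f"S(b,r) = {cosine(b, r):.4f}")
print(f"|g| = {np.linalg.norm(g):.2f}, |b| = {np.linalg.norm(b):.2f}, |r| = {np.linalg.norm(r):.2f}")

# Silhouette score of the harmful/harmless split (higher = better separated).
X = np.vstack([harmless, harmful])
labels = np.array([0] * 100 + [1] * 100)
print(f"Silh = {silhouette_score(X, labels):.4f}")
```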
