Bergson
Mapping out the "memory" of neural nets with data attribution
This library enables you to trace the memory of deep neural nets with gradient-based data attribution techniques. We currently focus on TrackStar, as described in Scalable Influence and Fact Tracing for Large Language Model Pretraining by Chang et al. (2024), and MAGIC, and also include support for several alternative influence functions.
We view attribution as a counterfactual question: If we "unlearned" this training sample, how would the model's behavior change? This formulation ties attribution to some notion of what it means to "unlearn" a training sample. Here we focus on a very simple notion of unlearning: taking a gradient ascent step on the loss with respect to the training sample.
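Concretely, a single gradient ascent step on a training sample's loss changes the query loss, to first order, by an amount proportional to the inner product of the two gradients. A minimal illustrative sketch of that score (Bergson's actual scoring additionally normalizes and preconditions gradients, e.g. as in TrackStar):

```python
import torch

def influence_score(train_grad: torch.Tensor, query_grad: torch.Tensor) -> torch.Tensor:
    """First-order change in the query loss from one gradient ascent
    step on the training sample's loss, up to the learning rate."""
    return query_grad.flatten() @ train_grad.flatten()
```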
Core features
- Gradient store for serial queries. We provide collection-time gradient compression for efficient storage, and integrate with FAISS for fast KNN search over large stores.
- On-the-fly queries. Query gradients without disk I/O overhead via a single pass over a dataset with a set of precomputed query gradients.
  - Experiment with multiple query strategies based on LESS.
  - Ideal for compression-free gradients.
  - Per-token scores.
- Train‑time gradient collection. Capture gradients produced during training with a ~17% performance overhead.
- Scalable. We use FSDP2, BitsAndBytes, and other performance optimizations to support large models, datasets, and clusters.
- Integrated with HuggingFace Transformers and Datasets. We also support on-disk datasets in a variety of formats.
- Structured gradient views and per-attention head gradient collection. Bergson enables mechanistic interpretability via easy access to per‑module or per-attention head gradients.
Announcements
March 2026
- Support MAGIC
February 2026
- Support per-token gradients
January 2026
- Support EK-FAC
- [Experimental] Support distributing preconditioners across nodes and devices for VRAM-efficient computation through the `GradientCollectorWithDistributedPreconditioners`. If you would like this functionality exposed via the CLI, please get in touch! https://github.com/EleutherAI/bergson/pull/100
Installation
```bash
pip install bergson
```
Quickstart
To construct an index of randomly projected gradients:
```bash
bergson build runs/index --model EleutherAI/pythia-14m --dataset NeelNanda/pile-10k --truncation --token_batch_size 4096
```
To collect TrackStar attribution scores:

```bash
bergson trackstar runs/trackstar --model EleutherAI/pythia-14m --query.dataset NeelNanda/pile-10k --data.dataset NeelNanda/pile-10k --data.truncation --token_batch_size 4096 --query.truncation --query.split "train[:20]"
```
To use MAGIC on a GPT-2 WikiText fine-tune:
```bash
bergson magic examples/magic/gpt2_wikitext_tiny.yaml
```
Usage
There are two ways to use Bergson. The first is to write an index of dataset gradients to disk using `build`, then query it programmatically or via the `Attributor` or `query` CLI. The second is to specify your query upfront, then map over the dataset, collecting and processing gradients on the fly. With this second strategy, only influence scores are saved.
You can build an index of gradients for each training sample from the command line, using bergson as a CLI tool:
```bash
bergson build <output_path> --model <model_name> --dataset <dataset_name>
```
This will create a directory at `<output_path>` containing the gradients for each training sample in the specified dataset. The `--model` and `--dataset` arguments should be compatible with the Hugging Face transformers library. By default it assumes that the dataset has a `text` column, but you can specify other columns using `--prompt_column` and optionally `--completion_column`. The `--help` flag will show you all available options.
You can also use the library programmatically to build an index. The `collect_gradients` function is just a bit lower level than the CLI tool, and allows you to specify the model and dataset directly as arguments. The result is a HuggingFace dataset containing a handful of new columns, including `gradients`, which holds the gradients for each training sample. You can then use this dataset to compute attributions.
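The programmatic path might look roughly like this. The argument names follow the attention-head example later in this README, but the `processor` argument in particular is an assumption here (we pass the tokenizer), so check the API for the exact signature:

```python
from bergson import collect_gradients
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-14m")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-14m")
data = load_dataset("NeelNanda/pile-10k", split="train")

# Returns a HuggingFace dataset with new columns, including `gradients`.
index = collect_gradients(
    model=model,
    data=data,
    processor=tokenizer,  # assumption: the preprocessor applied to each example
    path="runs/index",
)
```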
At the lowest level of abstraction, the `GradientCollector` context manager lets you efficiently collect gradients for each individual example in a batch during a backward pass, simultaneously projecting the gradients to a random lower-dimensional subspace to save memory. If you use Adafactor normalization, this is done in a very compute-efficient way that avoids materializing the full gradient for each example before projecting it. There are two main ways to use `GradientCollector`:
- With a `closure` argument, which lets you use the per-example gradients immediately after they are computed, during the backward pass. If you're computing summary statistics or other per-example metrics, this is the most efficient approach (see the sketch after this list).
- Without a `closure` argument, in which case the gradients are collected and returned as a dictionary mapping module names to batches of gradients. This is the simplest and most flexible approach, but somewhat more memory-intensive.
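A minimal sketch of the closure pattern. The constructor arguments and the closure's signature here are assumptions for illustration, not the documented API; check the library's docstrings for the real interface.

```python
import torch
from bergson import GradientCollector

grad_norms = {}

def closure(name: str, grads: torch.Tensor):
    # Assumed callback shape: module name plus a [batch, projection_dim]
    # tensor of projected per-example gradients. Record a cheap summary
    # statistic instead of keeping the gradients around.
    grad_norms[name] = grads.norm(dim=-1)

# `model` and `batch` come from your own setup.
with GradientCollector(model, closure=closure):  # signature is an assumption
    model(batch, labels=batch).loss.backward()
```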
On-the-fly Query
You can score a large dataset against a previously built query index without saving its gradients to disk:
```bash
bergson score <output_path> --model <model_name> --dataset <dataset_name> --query_path <existing_index_path> --score individual --aggregation mean
```
We provide a utility to reduce a dataset into its mean or sum query gradient, for use as a query index:
```bash
bergson reduce <output_path> --model <model_name> --dataset <dataset_name> --aggregation mean --unit_normalize
```
Index Query
We provide an `Attributor` for queries, which supports unit-normalized gradients and KNN search out of the box. Access it via the CLI with

```bash
bergson query --index <index_path> --model <model_name> --unit_norm
```
or programmatically with
```python
from bergson import Attributor, FaissConfig

attr = Attributor(args.index, device="cuda")
...
query_tokens = tokenizer(query, return_tensors="pt").to("cuda:0")["input_ids"]

# Query the index
with attr.trace(model.base_model, 5) as result:
    model(query_tokens, labels=query_tokens).loss.backward()
    model.zero_grad()
```
To efficiently query on-disk indexes, perform ANN searches, and use many other scalability features, add a FAISS config:

```python
attr = Attributor(args.index, device="cuda", faiss_cfg=FaissConfig("IVF1,SQfp16", mmap_index=True))

with attr.trace(model.base_model, 5) as result:
    model(query_tokens, labels=query_tokens).loss.backward()
    model.zero_grad()
```
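For reference, "IVF1,SQfp16" is a standard FAISS index-factory string (an inverted-file index with a single list whose vectors are stored with fp16 scalar quantization), and `mmap_index=True` presumably memory-maps the index from disk rather than loading it fully into RAM.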
Training Gradients
Gradient collection during training is supported via an integration with HuggingFace's Trainer and SFTTrainer classes. Training gradients are saved in the order of their corresponding dataset items, and when the `track_order` flag is set, the training steps associated with each training item are saved separately.
```python
from bergson import GradientCollectorCallback, prepare_for_gradient_collection
from transformers import Trainer

callback = GradientCollectorCallback(
    path="runs/example",
    track_order=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    eval_dataset=dataset,
    callbacks=[callback],
)
trainer = prepare_for_gradient_collection(trainer)
trainer.train()
```
Attention Head Gradients
By default Bergson collects gradients for named parameter matrices, but per-attention-head gradients may be collected by configuring an `AttentionConfig` for each module of interest.
```python
from bergson import AttentionConfig, IndexConfig, DataConfig, collect_gradients
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "RonenEldan/TinyStories-1M", trust_remote_code=True, use_safetensors=True
)

collect_gradients(
    model=model,
    data=data,
    processor=processor,
    path="runs/split_attention",
    attention_cfgs={
        # Head configuration for the TinyStories-1M transformer
        "h.0.attn.attention.out_proj": AttentionConfig(num_heads=16, head_size=4, head_dim=2),
    },
)
```
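Here `num_heads=16` and `head_size=4` match TinyStories-1M's attention layout (its hidden size of 64 is split across 16 heads of size 4), while `head_dim` presumably indicates which tensor axis indexes the heads.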
GRPO
Where a reward signal is available, we compute gradients using a weighted advantage estimate based on Dr. GRPO:

```bash
bergson build <output_path> --model <model_name> --dataset <dataset_name> --reward_column <reward_column_name>
```
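For intuition, Dr. GRPO centers each rollout group's rewards by their mean but, unlike vanilla GRPO, skips the standard-deviation normalization. A rough sketch of per-sample weights in that style (illustrative only, not Bergson's internal code):

```python
import torch

def drgrpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """Dr. GRPO-style advantages for a group of rollouts from one prompt:
    center by the group mean, with no std normalization."""
    return rewards - rewards.mean()

# Each sample's gradient is then weighted by its advantage.
weights = drgrpo_advantages(torch.tensor([1.0, 0.0, 0.5, 1.0]))
```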
Numerical Stability
Some models produce inconsistent per-example gradients when batched together. This is caused by nondeterminism in optimized SDPA attention backends (flash, memory-efficient); the built-in diagnostic tests both padding-induced and equal-length batch divergence to pinpoint the source.
Use the built-in diagnostic to check your model:
```bash
bergson test_model_configuration --model <model_name>
```
This automatically tests escalating configurations and reports exactly which flags (if any) you need:
```bash
# If force_math_sdp alone is sufficient:
bergson build <output_path> --model <model_name> --force_math_sdp

# If fp32 with TF32 matmuls is sufficient (cheaper than full fp32):
bergson build <output_path> --model <model_name> --precision fp32 --use_tf32_matmuls --force_math_sdp

# If full fp32 precision is required:
bergson build <output_path> --model <model_name> --precision fp32 --force_math_sdp
```
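Under the hood, forcing the math SDP backend amounts to restricting PyTorch's scaled dot-product attention to its deterministic reference kernel. The flag handles this for you, but the effect is roughly:

```python
from torch.nn.attention import SDPBackend, sdpa_kernel

# Disable the flash and memory-efficient kernels, leaving only the
# deterministic math implementation of scaled_dot_product_attention.
# `model` and `input_ids` come from your own setup.
with sdpa_kernel(SDPBackend.MATH):
    model(input_ids, labels=input_ids).loss.backward()
```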
Performance impact
Benchmarked on an A100-80GB with 500 documents from pile-10k:
| Model | Settings | Build time | vs bf16 baseline |
|-------|----------|------------|------------------|
| Pythia-160M | bf16 | 31.2s | — |
| Pythia-160M | bf16 + --force_math_sdp | 31.0s | -0.7% |
| Pythia-160M | fp32 + --use_tf32_matmuls | 26.6s | -14.7% |
| Pythia-160M | fp32 + --use_tf32_matmuls + --force_math_sdp | 27.5s | -11.9% |