TRAK
A fast, effective data attribution method for neural networks in PyTorch
TRAK: Attributing Model Behavior at Scale
[docs & tutorials] [blog post] [website]
In our paper, we introduce a new data attribution method called TRAK (Tracing with the
Randomly-Projected After Kernel). Using TRAK, you can make accurate
counterfactual predictions (e.g., answers to questions of the form "what would
happen to this prediction if these examples were removed from the training set?").
Computing data attribution with TRAK is two to three orders of magnitude cheaper than
comparably effective methods; see our paper for a detailed evaluation.

Usage
Check our docs for more detailed examples and
tutorials on how to use TRAK. Below, we provide a brief blueprint of using TRAK's API to compute attribution scores.
Make a TRAKer instance
```python
from trak import TRAKer

model, checkpoints = ...
train_loader = ...

traker = TRAKer(model=model, task='image_classification', train_set_size=...)
```
Compute TRAK features on training data
```python
for model_id, checkpoint in enumerate(checkpoints):
    traker.load_checkpoint(checkpoint, model_id=model_id)
    for batch in train_loader:
        # batch should be a tuple of inputs and labels
        traker.featurize(batch=batch, ...)

traker.finalize_features()
```
Compute TRAK scores for target examples
```python
targets_loader = ...

for model_id, checkpoint in enumerate(checkpoints):
    traker.start_scoring_checkpoint(checkpoint,
                                    model_id=model_id,
                                    exp_name='test',
                                    num_targets=...)
    for batch in targets_loader:
        traker.score(batch=batch, ...)

scores = traker.finalize_scores(exp_name='test')
```
Then, you can use the computed TRAK scores to analyze your model's behavior; for example, our paper visualizes the most (positively and negatively) impactful training examples for a ResNet18 model trained on ImageNet, for three targets from the ImageNet validation set.

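A common first analysis is ranking training examples by attribution score for each target. Below is a minimal NumPy sketch, assuming `scores` is an array of shape `(train_set_size, num_targets)` (the shape and the random stand-in data here are illustrative assumptions, not guaranteed by the TRAK API):

```python
import numpy as np

# Illustrative stand-in for the output of traker.finalize_scores():
# one attribution score per (training example, target) pair.
rng = np.random.default_rng(0)
scores = rng.standard_normal((1000, 3))  # (train_set_size, num_targets)

k = 5
for target_idx in range(scores.shape[1]):
    order = np.argsort(scores[:, target_idx])
    top_positive = order[-k:][::-1]  # training examples with highest scores
    top_negative = order[:k]         # training examples with lowest scores
    print(target_idx, top_positive, top_negative)
```

Examples with large positive scores are those whose presence in the training set most increases the model's output on that target; large negative scores indicate the opposite.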
Check out the
quickstart for a
complete, ready-to-run example notebook. You can also find several end-to-end
examples in the examples/ directory.
Contributing
We welcome contributions to this project! Please see our contributing guidelines for more information.
Citation
If you use this code in your work, please cite using the following BibTeX entry:
```bibtex
@inproceedings{park2023trak,
  title = {TRAK: Attributing Model Behavior at Scale},
  author = {Sung Min Park and Kristian Georgiev and Andrew Ilyas and Guillaume Leclerc and Aleksander Madry},
  booktitle = {International Conference on Machine Learning (ICML)},
  year = {2023}
}
```
Installation
To install the version of our package that contains a fast, custom CUDA
kernel for the JL projection step, use

```bash
pip install traker[fast]
```

You will need compatible versions of gcc and the CUDA toolkit to install it. See
the installation FAQs for tips
regarding this. To install the basic version of our package, which requires no
compilation, use

```bash
pip install traker
```
Questions?
Please send an email to trak@mit.edu.
Maintainers