# dattri: A Library for Efficient Data Attribution
Quick Start | Algorithms | Metrics | Benchmark Settings | Benchmark Results
## What is dattri?

dattri is a PyTorch library for developing, benchmarking, and deploying efficient data attribution algorithms. You may use dattri to:
- Deploy existing data attribution methods to PyTorch models
  - e.g., Influence Function, TracIn, RPS, TRAK, ...
- Develop new data attribution methods with efficient implementations of low-level utility functions
  - e.g., Hessian (HVP/IHVP), Fisher Information Matrix (IFVP), random projection, dropout ensembling, ...
- Benchmark data attribution methods with standard benchmark settings
  - e.g., MNIST-10+LR/MLP, CIFAR-10/2+ResNet-9, MAESTRO + Music Transformer, Shakespeare + nanoGPT, ...
## Key Features
- 🚀 Efficient: Optimized low-level primitives (HVP, Random Projection) for scaling to large models.
- 🧩 Modular: Decouples attribution algorithms, models, and tasks for maximum flexibility.
- 📊 Comprehensive Benchmarks: Ready-to-use benchmarks across Computer Vision, NLP, and more.
- 🔌 Easy Integration: Seamlessly works with existing PyTorch models and workflows.
## Quick Start
### Installation
```shell
pip install dattri
```
If you want to use `sjlt` to accelerate the random projection, you can install the version with `sjlt` support:

```shell
pip install dattri[sjlt]
```
> [!NOTE]
> It's highly recommended to use a CUDA-enabled device to run dattri, especially for large models or datasets.
> [!NOTE]
> CUDA is required if you want to install and use the `sjlt` version (`dattri[sjlt]`) to accelerate the random projection.
### Recommended environment setup
It's not required to follow the exact steps in this section, but this is a verified environment setup flow that may help users avoid most installation issues.
```shell
conda create -n dattri python=3.10
conda activate dattri
conda install -c "nvidia/label/cuda-12.4.0" cuda-toolkit
pip3 install torch==2.6.0 --index-url https://download.pytorch.org/whl/cu124
pip install dattri[sjlt]
```
### Apply data attribution methods on PyTorch models
One can apply different data attribution methods to PyTorch models. One only needs to define:

- the loss function used for model training (will be used as the target function to be attributed if no other target function is provided).
- trained model checkpoints (one or more).
- the data loaders for training samples and test samples (e.g., `train_loader`, `test_loader`).
- (optional) the target function to be attributed if it's not the same as the loss function.
The following is an example of using `IFAttributorCG` and `AttributionTask` to apply data attribution to a PyTorch model.
Please refer to here for the guide on how to properly define train/test data for the `Attributor` and the loss/target function.
More examples can be found here.
```python
import torch
from torch import nn

from dattri.algorithm import IFAttributorCG
from dattri.task import AttributionTask
from dattri.benchmark.datasets.mnist import train_mnist_lr, create_mnist_dataset
from dattri.benchmark.utils import SubsetSampler

dataset_train, dataset_test = create_mnist_dataset("./data")

train_loader = torch.utils.data.DataLoader(
    dataset_train,
    batch_size=1000,
    sampler=SubsetSampler(range(1000)),
)
test_loader = torch.utils.data.DataLoader(
    dataset_test,
    batch_size=100,
    sampler=SubsetSampler(range(100)),
)

model = train_mnist_lr(train_loader)

def f(params, data_target_pair):
    x, y = data_target_pair
    loss = nn.CrossEntropyLoss()
    yhat = torch.func.functional_call(model, params, x)
    return loss(yhat, y)

task = AttributionTask(
    loss_func=f,
    model=model,
    checkpoints=model.state_dict(),
)

attributor = IFAttributorCG(
    task=task,
    max_iter=10,
    regularization=1e-2,
)
attributor.cache(train_loader)
with torch.no_grad():
    score = attributor.attribute(train_loader, test_loader)
```
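The resulting `score` is a training-by-test matrix of attribution scores; with the loaders above it would be 1000 × 100 (an assumption based on the sampler sizes). A minimal, library-free sketch of one way to post-process such a matrix, ranking training samples by their score for a single test sample (`top_influential` and the toy `scores` below are hypothetical helpers, not part of dattri):

```python
# Hypothetical post-processing sketch: rank training samples by their
# attribution score for one test sample. `scores[i][j]` stands in for the
# dattri score matrix (num_train x num_test); a tiny hand-made example is used.

def top_influential(scores, test_idx, k=2):
    """Return indices of the k training samples with the highest
    attribution score for the given test sample, most influential first."""
    column = [(row[test_idx], i) for i, row in enumerate(scores)]
    column.sort(reverse=True)
    return [i for _, i in column[:k]]

# 4 training samples x 2 test samples of made-up scores
scores = [
    [0.1, 0.9],
    [0.7, 0.2],
    [0.3, 0.8],
    [0.5, 0.1],
]
print(top_influential(scores, test_idx=1))  # prints [0, 2]
```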
### Use low-level utility functions to develop new data attribution methods
#### HVP/IHVP
Hessian-vector product (HVP) and inverse-Hessian-vector product (IHVP) are widely used in data attribution methods. dattri provides efficient implementations of these operators via `torch.func`. This example shows how to use the CG implementation of IHVP.
```python
import torch

from dattri.func.hessian import ihvp_cg, ihvp_at_x_cg

def f(x, param):
    return torch.sin(x / param).sum()

x = torch.randn(2)
param = torch.randn(1)
v = torch.randn(5, 2)

# ihvp_cg method
# argnums=0 indicates that the first argument of f(x, param), i.e., x,
# plays the role of the model parameter
ihvp_func = ihvp_cg(f, argnums=0, max_iter=2)
ihvp_result_1 = ihvp_func((x, param), v)  # both (x, param) and v as the inputs

# ihvp_at_x_cg method: (x, param) is cached
ihvp_at_x_func = ihvp_at_x_cg(f, x, param, argnums=0, max_iter=2)
ihvp_result_2 = ihvp_at_x_func(v)  # only v as the input

# the two methods give the same result
assert torch.allclose(ihvp_result_1, ihvp_result_2)
```
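To make the mechanics concrete, here is a minimal, dependency-free sketch of the conjugate gradient idea behind the CG variants: solving H x = v iteratively without ever forming H⁻¹. It uses an explicit 2×2 matrix purely for illustration, whereas dattri's implementation only needs Hessian-vector products:

```python
# Illustrative conjugate gradient: solve H x = v for a symmetric
# positive-definite H without inverting it -- the same idea ihvp_cg
# applies with Hessian-vector products in place of an explicit matrix.

def matvec(H, x):
    return [sum(hij * xj for hij, xj in zip(row, x)) for row in H]

def cg(H, v, max_iter=10, tol=1e-10):
    x = [0.0] * len(v)
    r = list(v)              # residual r = v - Hx (x starts at 0)
    p = list(r)              # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Hp = matvec(H, p)
        alpha = rs / sum(pi * hpi for pi, hpi in zip(p, Hp))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * hpi for ri, hpi in zip(r, Hp)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

H = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive-definite
v = [1.0, 2.0]
x = cg(H, v)                   # converges exactly in 2 steps for a 2x2 system
# check: matvec(H, x) reproduces v, so x approximates H^{-1} v
```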
#### Random Projection
It has been shown that long vectors retain most of their relative information when projected down to a smaller feature dimension. To reduce the computational cost, random projection is widely used in data attribution methods. The following is an example of using `random_project`. The implementation leverages `sjlt`.
```python
from dattri.func.random_projection import random_project

# initialize the projector based on users' needs
# (`tensor` is a 2D tensor whose rows will be projected; see the note below)
project_func = random_project(tensor, tensor.size(0), proj_dim=512)

# obtain projected tensors
projected_tensor = project_func(tensor)
```
Normally, `tensor` is the gradient of the loss/target function and has a large dimension (i.e., the number of parameters).
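The distance-preservation property behind this can be demonstrated without any library: project two long vectors through a shared Gaussian matrix scaled by 1/√proj_dim and compare their distance before and after. This plain Gaussian sketch is only illustrative; dattri's `random_project` uses faster structured projections such as `sjlt`:

```python
# Illustrative Johnson-Lindenstrauss-style projection in pure Python:
# distances between long vectors are approximately preserved after
# projecting with a shared random Gaussian matrix.
import math
import random

random.seed(0)

def project(vec, proj, proj_dim):
    # y_j = (1 / sqrt(proj_dim)) * sum_i proj[j][i] * vec[i]
    scale = 1.0 / math.sqrt(proj_dim)
    return [scale * sum(p * v for p, v in zip(row, vec)) for row in proj]

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

dim, proj_dim = 1000, 128
proj = [[random.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(proj_dim)]

u = [random.gauss(0.0, 1.0) for _ in range(dim)]
w = [random.gauss(0.0, 1.0) for _ in range(dim)]

before = dist(u, w)
after = dist(project(u, proj, proj_dim), project(w, proj, proj_dim))
print(before, after)  # the two distances should be close
```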
#### Dropout Ensemble
Recent studies have found that ensemble methods can significantly improve the performance of data attribution; dropout ensemble is one of these ensemble methods. One may prepare their model with:
```python
from dattri.model_util.dropout import activate_dropout

# initialize a torch.nn.Module model
model = MLP()

# (option 1) activate all dropout layers
model = activate_dropout(model, dropout_prob=0.2)

# (option 2) activate specific dropout layers
# here "dropout1" and "dropout2" are the names of dropout layers within the model
model = activate_dropout(model, ["dropout1", "dropout2"], dropout_prob=0.2)
```
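After activating dropout, the ensemble itself amounts to attributing with several independently dropout-perturbed models and averaging the resulting score matrices. A hypothetical, library-free sketch of that averaging step (`average_scores` and the toy `runs` are illustrative, not dattri APIs):

```python
# Hypothetical dropout-ensemble averaging: attribute with several
# dropout-perturbed models and average the score matrices elementwise.

def average_scores(score_runs):
    """score_runs: list of equally-shaped 2D score lists, one per run."""
    n = len(score_runs)
    rows, cols = len(score_runs[0]), len(score_runs[0][0])
    return [
        [sum(run[i][j] for run in score_runs) / n for j in range(cols)]
        for i in range(rows)
    ]

# three made-up runs of a 2x2 score matrix
runs = [
    [[1.0, 0.0], [0.0, 1.0]],
    [[3.0, 0.0], [0.0, 3.0]],
    [[2.0, 0.0], [0.0, 2.0]],
]
print(average_scores(runs))  # prints [[2.0, 0.0], [0.0, 2.0]]
```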
## Supported Algorithms
We have implemented most of the state-of-the-art methods. The categories and reference papers of the algorithms are listed in the following table.

| Family | Algorithms |
| :----: | :--------: |
| IF | Explicit |
| | CG |
| | LiSSA |
| | Arnoldi |
| | DataInf |
| | EK-FAC |
| | RelatIF |