Adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
Adapters is an add-on library to HuggingFace's Transformers, integrating 10+ adapter methods into 20+ state-of-the-art Transformer models with minimal coding overhead for training and inference.
Adapters provides a unified interface for efficient fine-tuning and modular transfer learning. It supports a wide range of features, such as full-precision or quantized training (e.g. Q-LoRA, Q-Bottleneck Adapters, or Q-PrefixTuning), adapter merging via task arithmetic, and the combination of multiple adapters via composition blocks, enabling advanced research in parameter-efficient transfer learning for NLP tasks.
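As an illustration of the quantized training support, here is a minimal sketch of a 4-bit Q-LoRA setup; the model name and hyperparameters are placeholders, and bitsandbytes must be installed:

```python
import torch
import adapters
from adapters import LoRAConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the base model with 4-bit NF4 quantization (model name is a placeholder).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)

# Add adapter support, attach a LoRA adapter, and freeze everything else.
adapters.init(model)
model.add_adapter("qlora", config=LoRAConfig(r=16, alpha=32))
model.train_adapter("qlora")
```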
Note: The Adapters library has replaced the `adapter-transformers` package. All previously trained adapters are compatible with the new library. For transitioning, please read: https://docs.adapterhub.ml/transitioning.html.
Installation
adapters currently supports Python 3.9+ and PyTorch 2.0+.
After installing PyTorch, you can install adapters from PyPI ...
pip install -U adapters
... or from source by cloning the repository:
git clone https://github.com/adapter-hub/adapters.git
cd adapters
pip install .
Quick Tour
Load pre-trained adapters:
from adapters import AutoAdapterModel
from transformers import AutoTokenizer
model = AutoAdapterModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model.load_adapter("AdapterHub/roberta-base-pf-imdb", source="hf", set_active=True)
print(model(**tokenizer("This works great!", return_tensors="pt")).logits)
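The returned logits can be mapped to a discrete prediction in the usual way (a minimal follow-up to the snippet above; the label order is defined by the classification head shipped with the adapter):

```python
outputs = model(**tokenizer("This works great!", return_tensors="pt"))
# argmax over the label dimension gives the predicted class index
predicted_class = outputs.logits.argmax(dim=-1).item()
print(predicted_class)
```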
Adapt existing model setups:
import adapters
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("t5-base")
adapters.init(model)
model.add_adapter("my_lora_adapter", config="lora")
model.train_adapter("my_lora_adapter")
# Your regular training loop...
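For the loop itself, the library also provides an AdapterTrainer class that mirrors the Transformers Trainer; a minimal sketch, assuming `train_dataset` is an already-tokenized dataset prepared beforehand:

```python
from adapters import AdapterTrainer
from transformers import TrainingArguments

trainer = AdapterTrainer(
    model=model,
    args=TrainingArguments(output_dir="./out", learning_rate=1e-4, num_train_epochs=3),
    train_dataset=train_dataset,  # assumption: a tokenized dataset prepared beforehand
)
trainer.train()
```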
Flexibly configure adapters:
from adapters import ConfigUnion, PrefixTuningConfig, ParBnConfig, AutoAdapterModel
model = AutoAdapterModel.from_pretrained("microsoft/deberta-v3-base")
adapter_config = ConfigUnion(
    PrefixTuningConfig(prefix_length=20),
    ParBnConfig(reduction_factor=4),
)
model.add_adapter("my_adapter", config=adapter_config, set_active=True)
Easily compose adapters in a single model:
from adapters import AdapterSetup, AutoAdapterModel
from transformers import AutoTokenizer
import adapters.composition as ac
model = AutoAdapterModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
qc = model.load_adapter("AdapterHub/roberta-base-pf-trec")
sent = model.load_adapter("AdapterHub/roberta-base-pf-imdb")
with AdapterSetup(ac.Parallel(qc, sent)):
    print(model(**tokenizer("What is AdapterHub?", return_tensors="pt")))
Useful Resources
HuggingFace's great documentation on getting started with Transformers can be found at https://huggingface.co/docs/transformers. adapters is fully compatible with Transformers.
To get started with adapters, refer to these locations:
- Colab notebook tutorials, a series of notebooks providing an introduction to all the main concepts of (adapter-)transformers and AdapterHub
- https://docs.adapterhub.ml, our documentation on training and using adapters with the adapters library
- https://adapterhub.ml to explore available pre-trained adapter modules and share your own adapters
- Examples folder of this repository containing HuggingFace's example training scripts, many adapted for training adapters
Implemented Methods
Currently, adapters integrates all architectures and methods listed below:
| Method | Paper(s) | Quick Links |
| --- | --- | --- |
| Bottleneck adapters | Houlsby et al. (2019)<br> Bapna and Firat (2019)<br> Steitz and Roth (2024) | Quickstart, Notebook |
| AdapterFusion | Pfeiffer et al. (2021) | Docs: Training, Notebook |
| MAD-X,<br> Invertible adapters | Pfeiffer et al. (2020) | Notebook |
| AdapterDrop | Rücklé et al. (2021) | Notebook |
| MAD-X 2.0,<br> Embedding training | Pfeiffer et al. (2021) | Docs: Embeddings, Notebook |
| Prefix Tuning | Li and Liang (2021) | Docs |
| Parallel adapters,<br> Mix-and-Match adapters | He et al. (2021) | Docs |
| Compacter | Mahabadi et al. (2021) | Docs |
| LoRA | Hu et al. (2021) | Docs |
| MTL-LoRA | Yang et al. (2024) | Docs |
| (IA)^3 | Liu et al. (2022) | Docs |
| VeRA | Kopiczko et al. (2024) | Docs |
| DoRA | Liu et al. (2024) | Docs |
| UniPELT | Mao et al. (2022) | Docs |
| Prompt Tuning | Lester et al. (2021) | Docs |
| QLoRA | Dettmers et al. (2023) | Notebook |
| ReFT | Wu et al. (2024) | Docs |
| Adapter Task Arithmetic | Chronopoulou et al. (2023)<br> Zhang et al. (2023) | Docs, Notebook |
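The task-arithmetic entry above, for instance, corresponds to merging several trained adapters into a new one in weight space. A sketch, assuming two architecturally identical adapters "a" and "b" are already loaded (the exact keyword arguments may vary between releases):

```python
# Create a new adapter whose weights are a weighted average of "a" and "b".
model.average_adapter(
    adapter_name="merged",
    adapter_list=["a", "b"],
    weights=[0.7, 0.3],
    combine_strategy="linear",
)
model.set_active_adapters("merged")
```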
Supported Models
We currently support the PyTorch versions of all models listed on the Model Overview page in our documentation.
Developing & Contributing
To get started with developing on Adapters yourself and learn more about ways to contribute, please see https://docs.adapterhub.ml/contributing.html.
Citation
If you use _Adapters_ in your work, please consider citing our library paper, "Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning" (Poth et al., EMNLP 2023: System Demonstrations).
