# LoRA: Low-Rank Adaptation of Large Language Models
This repo contains the source code of the Python package loralib and several examples of how to integrate it with PyTorch models, such as those in Hugging Face.
We only support PyTorch for now.
See our paper for a detailed description of LoRA.
LoRA: Low-Rank Adaptation of Large Language Models <br> Edward J. Hu*, Yelong Shen*, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen <br> Paper: https://arxiv.org/abs/2106.09685 <br> Video explainer: https://www.youtube.com/watch?v=DhRoTONcyZE <br>
Update 2/2023: LoRA is now supported by the State-of-the-art Parameter-Efficient Fine-Tuning (PEFT) library by Hugging Face.
LoRA reduces the number of trainable parameters by learning pairs of rank-decomposition matrices while freezing the original weights. This vastly reduces the storage requirement for large language models adapted to specific tasks and enables efficient task-switching during deployment, all without introducing inference latency. LoRA also outperforms several other adaptation methods, including adapter, prefix-tuning, and fine-tuning.
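Concretely, for a frozen weight matrix W, LoRA learns a low-rank update BA, with B zero-initialized so that adaptation starts exactly from the pre-trained model. A minimal NumPy sketch of the idea (made-up dimensions, not the `loralib` implementation):

```python
import numpy as np

d, k, r = 64, 64, 8  # illustrative dimensions; rank r << d, k
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))  # frozen pre-trained weight
A = rng.standard_normal((r, k))  # trainable, randomly initialized
B = np.zeros((d, r))             # trainable, zero-initialized

x = rng.standard_normal(k)
h_frozen = W @ x
h_lora = (W + B @ A) @ x  # adapted forward pass

# B starts at zero, so the adapted model is initially identical
# to the frozen one, and only r*(d+k) parameters are trained
# per adapted layer instead of d*k.
print(r * (d + k), "vs", d * k)  # 1024 vs 4096
```

At deployment time, W + BA can be merged into a single matrix, which is why LoRA adds no inference latency.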
We obtain results comparable or superior to full finetuning on the GLUE benchmark using RoBERTa (Liu et al., 2019) base and large and DeBERTa (He et al., 2020) XXL 1.5B, while only training and storing a fraction of the parameters. Click the numbers below to download the RoBERTa and DeBERTa LoRA checkpoints.
| | RoBERTa base <br> Fine-tune | RoBERTa base <br> LoRA | DeBERTa XXL <br> Fine-tune | DeBERTa XXL <br> LoRA |
|---|---|---|---|---|
| # of Trainable Params. | 125M | 0.8M | 1.5B | 4.7M |
| MNLI (m-Acc/mm-Acc) | <b>87.6</b> | <b>87.5</b>±.3/86.9±.3 | 91.7/<b>91.9</b> | <b>91.9</b>±.1/<b>91.9</b>±.2 |
| SST2 (Acc) | 94.8 | <b>95.1</b>±.2 | <b>97.2</b> | 96.9±.2 |
| MRPC (Acc) | <b>90.2</b> | <b>89.7</b>±.7 | 92.0 | <b>92.6</b>±.6 |
| CoLA (Matthew's Corr) | <b>63.6</b> | <b>63.4</b>±1.2 | <b>72.0</b> | <b>72.4</b>±1.1 |
| QNLI (Acc) | 92.8 | <b>93.3</b>±.3 | <b>96.0</b> | <b>96.0</b>±.1 |
| QQP (Acc) | <b>91.9</b> | 90.8±.1 | 92.7 | <b>92.9</b>±.1 |
| RTE (Acc) | 78.7 | <b>86.6</b>±.7 | 93.9 | <b>94.9</b>±.4 |
| STSB (Pearson/Spearman Corr) | 91.2 | <b>91.5</b>±.2/<b>91.3</b>±.2 | <b>92.9</b>/92.6 | <b>93.0</b>±.2/<b>92.9</b>±.3 |
| Average | 86.40 | <b>87.24</b> | 91.06 | <b>91.32</b> |
<i>Note: You still need the original pre-trained checkpoint from Hugging Face to use the LoRA checkpoints.</i>
Fine-tuning numbers are taken from Liu et al. (2019) and He et al. (2020). We include confidence intervals on results from our experiments. Please follow the instructions in examples/NLU/ to reproduce our results.
On GPT-2, LoRA compares favorably to both full finetuning and other efficient tuning methods, such as adapter (Houlsby et al., 2019) and prefix tuning (Li and Liang, 2021). We evaluated on E2E NLG Challenge, DART, and WebNLG:
| Method | # of Trainable Params | E2E (BLEU) | DART (BLEU) | WebNLG (BLEU-U/S/A) |
|---|---|---|---|---|
| GPT-2 M (Fine-Tune) | 354.92M | 68.2 | 46.0 | 30.4/<b>63.2</b>/47.6 |
| GPT-2 M (Adapter) | 0.37M | 66.3 | 42.4 | 45.1/54.5/50.2 |
| GPT-2 M (Prefix) | 0.35M | 69.7 | 45.7 | 44.1/63.1/54.4 |
| GPT-2 M (LoRA) | 0.35M | <b>70.4</b>±.1 | <b>47.1</b>±.2 | <b>46.7</b>±.4/62.1±.2/<b>55.3</b>±.2 |
| GPT-2 L (Fine-Tune) | 774.03M | 68.5 | 46.5 | 41.7/<b>64.6</b>/54.2 |
| GPT-2 L (Adapter) | 0.88M | 69.1±.1 | 45.7±.1 | <b>49.8</b>±.0/61.1±.0/56.0±.0 |
| GPT-2 L (Prefix) | 0.77M | 70.3 | 46.5 | 47.0/64.2/56.4 |
| GPT-2 L (LoRA) | 0.77M | <b>70.4</b>±.1 | <b>47.5</b>±.1 | 48.4±.3/<b>64.0</b>±.3/<b>57.0</b>±.1 |
Non-LoRA baselines, except for adapter on GPT-2 large, are taken from Li and Liang (2021). We include confidence intervals on results from our experiments.
Download the GPT-2 LoRA checkpoints:
- GPT-2 Medium E2E (1.5 MB)
- GPT-2 Medium DART (1.5 MB)
- GPT-2 Medium WebNLG (1.5 MB)
- GPT-2 Large E2E (2.3 MB)
- GPT-2 Large DART (2.3 MB)
- GPT-2 Large WebNLG (2.3 MB)
Please follow the instructions in examples/NLG/ to reproduce our results.
## Repository Overview
<i>(The initial release of this repo has been archived in the branch "snapshot-9-15-2021")</i>
There are several directories in this repo:
- `loralib/` contains the source code for the package `loralib`, which needs to be installed to run the examples we provide;
- `examples/NLG/` contains an example implementation of LoRA in GPT-2 using our package, which can be used to reproduce the results in our paper;
- `examples/NLU/` contains an example implementation of LoRA in RoBERTa and DeBERTa using our package, which produces competitive results on the GLUE benchmark;
- See how we use `loralib` in GPT-2, RoBERTa, and DeBERTa v2.
## Quickstart
- Installing `loralib` is simply

  ```bash
  pip install loralib
  # Alternatively
  # pip install git+https://github.com/microsoft/LoRA
  ```
- You can choose to adapt some layers by replacing them with counterparts implemented in `loralib`. We only support `nn.Linear`, `nn.Embedding`, and `nn.Conv2d` for now. We also support a `MergedLinear` for cases where a single `nn.Linear` represents more than one layer, such as in some implementations of the attention `qkv` projection (see Additional Notes for more).

  ```python
  # ===== Before =====
  # layer = nn.Linear(in_features, out_features)

  # ===== After =====
  import loralib as lora
  # Add a pair of low-rank adaptation matrices with rank r=16
  layer = lora.Linear(in_features, out_features, r=16)
  ```
- Before the training loop begins, mark only LoRA parameters as trainable.

  ```python
  import loralib as lora
  model = BigModel()
  # This sets requires_grad to False for all parameters
  # without the string "lora_" in their names
  lora.mark_only_lora_as_trainable(model)
  # Training loop
  for batch in dataloader:
      ...
  ```
- When saving a checkpoint, generate a `state_dict` that only contains LoRA parameters.

  ```python
  # ===== Before =====
  # torch.save(model.state_dict(), checkpoint_path)

  # ===== After =====
  torch.save(lora.lora_state_dict(model), checkpoint_path)
  ```
- When loading a checkpoint using `load_state_dict`, be sure to set `strict=False`.

  ```python
  # Load the pretrained checkpoint first
  model.load_state_dict(torch.load('ckpt_pretrained.pt'), strict=False)
  # Then load the LoRA checkpoint
  model.load_state_dict(torch.load('ckpt_lora.pt'), strict=False)
  ```
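The steps above all hinge on the `"lora_"` name prefix. A toy, pure-Python sketch of the convention (illustrative only, not `loralib`'s actual implementation): freezing keeps only `lora_`-named parameters trainable, saving keeps only `lora_`-named entries, and loading needs `strict=False` because each checkpoint covers only a subset of the model's keys.

```python
# Toy model state: parameter name -> value (a stand-in for tensors).
full_state = {
    "encoder.weight": 1.0,
    "encoder.bias": 0.0,
    "encoder.lora_A": 0.1,
    "encoder.lora_B": 0.0,
}

# mark_only_lora_as_trainable, conceptually: train only "lora_" params.
trainable = {k for k in full_state if "lora_" in k}

# lora_state_dict, conceptually: save only the "lora_" entries.
lora_ckpt = {k: v for k, v in full_state.items() if "lora_" in k}

# Loading: the LoRA checkpoint is missing the frozen keys, so a
# strict load would raise -- hence strict=False.
missing = set(full_state) - set(lora_ckpt)
print(sorted(trainable))  # ['encoder.lora_A', 'encoder.lora_B']
print(sorted(missing))    # ['encoder.bias', 'encoder.weight']
```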
Now training can proceed as usual.
## Additional Notes
- While we focus on a simple yet effective setup in our examples, namely adapting only the `q` and `v` projections in a Transformer, LoRA can be applied to any subset of pre-trained weight matrices.
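The fused-projection case handled by `MergedLinear` can be sketched in NumPy: when q, k, and v share one weight matrix, the low-rank update is applied only to the q and v slices while the k slice stays untouched. Everything below is an illustrative toy with made-up dimensions, not the package's implementation:

```python
import numpy as np

d, r = 32, 4
rng = np.random.default_rng(1)

W_qkv = rng.standard_normal((3 * d, d))  # fused q, k, v weights (frozen)
A_q = rng.standard_normal((r, d))        # trainable low-rank factors for q
B_q = rng.standard_normal((d, r))
A_v = rng.standard_normal((r, d))        # trainable low-rank factors for v
B_v = rng.standard_normal((d, r))

delta = np.zeros_like(W_qkv)
delta[0:d] = B_q @ A_q           # q slice adapted
delta[2 * d:3 * d] = B_v @ A_v   # v slice adapted
# the k slice (rows d:2d) stays zero, i.e. effectively frozen

x = rng.standard_normal(d)
out = (W_qkv + delta) @ x
```

This mirrors passing something like `enable_lora=[True, False, True]` to `MergedLinear` so that only the q and v portions of the fused projection receive low-rank updates.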
