PromptPapers
Must-read papers on prompt-based tuning for pre-trained language models.
We have released an open-source prompt-learning toolkit. Check out OpenPrompt!
We strongly encourage researchers who want to promote their fantastic work to the community to make a pull request updating their paper's information! (See contributing details)
Effective adaptation of pre-trained models can be approached from different perspectives. Prompt-learning focuses more on the organization of the training procedure and the unification of different tasks, while delta tuning (parameter-efficient methods) offers another direction through the targeted optimization of pre-trained models. Check DeltaPapers!
Contents
The paper list is mainly maintained by Ning Ding and Shengding Hu. Watch this repository for the latest updates!
Introduction
This is a paper list about prompt-based tuning for large-scale pre-trained language models. Unlike traditional fine-tuning, which trains an explicit classifier on top of the model, prompt-based tuning directly reuses the pre-trained model's own pre-training task (e.g., masked-token prediction) to perform classification or regression.
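As a minimal illustration of this cloze-style formulation, the sketch below wraps a sentiment example in a hand-written template and lets a masked language model score label words at the mask position. It is only a sketch: the model name, template, and label words are illustrative assumptions (using the Hugging Face transformers library), not code from any paper listed here.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

text = "The movie was full of surprises and I enjoyed every minute."
# Template: turn classification into the pre-training task (masked-token prediction).
template = f"{text} Overall, it was a {tokenizer.mask_token} film."
# Verbalizer: map task labels to label words in the existing vocabulary (assumed choices).
verbalizer = {"positive": "great", "negative": "terrible"}

inputs = tokenizer(template, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]  # vocabulary scores at the mask

# Compare label-word scores instead of training a new classifier head.
scores = {label: logits[tokenizer.convert_tokens_to_ids(word)].item()
          for label, word in verbalizer.items()}
print(max(scores, key=scores.get))  # expected: "positive"
```

Because the label words already live in the pre-trained vocabulary, the model can make zero-shot or few-shot predictions without a newly initialized classification head.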
Keywords Convention
The abbreviation of the work.
The key prompt-learning features used in the work.
The main task explored in the work.
The main property of prompt-learning methods explored in the work.
Papers
Overview
This section contains papers that overview general trends in recent natural language processing with big (pre-trained) models.
-
OpenPrompt: An Open-source Framework for Prompt-learning. Preprint.
Ning Ding, Shengding Hu, Weilin Zhao, Yulin Chen, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun. [pdf] [project], 2021.11
-
Pre-Trained Models: Past, Present and Future. Preprint.
Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan Yao, Ao Zhang, Liang Zhang, Wentao Han, Minlie Huang, Qin Jin, Yanyan Lan, Yang Liu, Zhiyuan Liu, Zhiwu Lu, Xipeng Qiu, Ruihua Song, Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu. [pdf], 2021.6
-
Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. Preprint.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig. [pdf] [project], 2021.7
-
Paradigm Shift in Natural Language Processing. Machine Intelligence Research.
Tianxiang Sun, Xiangyang Liu, Xipeng Qiu, Xuanjing Huang [pdf] [project], 2021.9
Pilot Work
This section contains the pilot works that contributed to the prevalence of the prompt-learning paradigm.
-
Parameter-Efficient Transfer Learning for NLP. ICML 2019.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, Sylvain Gelly. [pdf], [project], 2019.6
-
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. JMLR.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. [pdf], [project]. 2019.10.
-
Language Models as Knowledge Bases? EMNLP 2019.
Fabio Petroni, Tim Rocktaschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel. [pdf], [project] , 2019.9
-
How Can We Know What Language Models Know? TACL 2020.
Zhengbao Jiang, Frank F. Xu, Jun Araki, Graham Neubig. [pdf], [project], 2019.11
-
Language Models are Few-shot Learners. NeurIPS 2020.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei. [pdf], [website], 2020.5
-
AdaPrompt: Adaptive Model Training for Prompt-based NLP. Preprint.
Yulong Chen, Yang Liu, Li Dong, Shuohang Wang, Chenguang Zhu, Michael Zeng, Yue Zhang [pdf], 2022.02
Basics
This section contains the exploration on the basic aspects of prompt tuning, such as template, verbalizer, training paradigms, etc.
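Several papers below study continuous ("soft") prompts rather than hand-written templates. As a rough illustration of that idea, the following PyTorch sketch prepends trainable prompt embeddings to a frozen model's input embeddings so that only the prompt parameters are optimized. The class name, prompt length, and hidden size are illustrative assumptions, not code from any of the listed papers.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Trainable continuous prompt prepended to a frozen model's input embeddings."""

    def __init__(self, n_prompt_tokens: int, hidden_size: int):
        super().__init__()
        # Only these parameters are updated; the pre-trained backbone stays frozen.
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, hidden_size) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden) from the frozen embedding layer.
        batch_size = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Usage sketch: optimize the soft prompt only, leaving the backbone untouched.
soft_prompt = SoftPrompt(n_prompt_tokens=20, hidden_size=768)
optimizer = torch.optim.AdamW(soft_prompt.parameters(), lr=3e-3)
```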
-
Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference. EACL 2021.
Timo Schick, Hinrich Schütze. 2020.1
-
It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners. NAACL 2021.
Timo Schick, Hinrich Schütze. 2020.9
-
AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. Preprint.
Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, Sameer Singh. [pdf], [website], 2020.10
-
Automatically Identifying Words That Can Serve as Labels for Few-Shot Text Classification. COLING 2020.
Timo Schick, Helmut Schmid, Hinrich Schütze. [pdf], [project], 2020.12
-
Making Pre-trained Language Models Better Few-shot Learners. ACL 2021.
Tianyu Gao, Adam Fisch, Danqi Chen. [pdf], [project], 2020.12
-
Prefix-Tuning: Optimizing Continuous Prompts for Generation. ACL 2021.
Xiang Lisa Li, Percy Liang. 2021.1
-
Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm. Preprint.
Laria Reynolds, Kyle McDonell. [pdf], 2021.2
-
Improving and Simplifying Pattern Exploiting Training. Preprint.
Derek Tam, Rakesh R Menon, Mohit Bansal, Shashank Srivastava, Colin Raffel. [pdf], 2021.3
-
GPT understands, too. Preprint.
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang. [pdf], [project], 2021.3
-
The Power of Scale for Parameter-Efficient Prompt Tuning. Preprint.
Brian Lester, Rami Al-Rfou, Noah Constant. 2021.4