
KnowledgeEditingPapers

Must-read Papers on Knowledge Editing for Large Language Models.

Install / Use

/learn @zjunlp/KnowledgeEditingPapers

README

Knowledge Editing for LLMs Papers

License: MIT

Must-read papers on knowledge editing for large language models.

🔔 News

  • New Reports

    | Report | Topic | PPT Resource |
    | :-----------------: | :---------: | :------------: |
    | NLPCC2024 tutorial | Knowledge Mechanism, Fusion, Editing for LLMs | Google Drive |
    | Invited Talk | Editing Large Language Models: Advancing Machine Understanding and Control | Google Drive |
    | CCL2024 tutorial | Knowledge Mechanisms, Fusion, and Editing for Large Language Models (in Chinese) | Google Drive & BaiduPan |
    | IJCAI2024 tutorial | Knowledge Editing for Large Language Models | Google Drive |
    | COLING2024 tutorial | Knowledge Editing for Large Language Models | Google Drive |
    | BAAI Conference (北京智源大会) | Knowledge Mechanisms and Editing for Large Language Models (in Chinese) | BaiduPan |
    | VALSE2024 tutorial | Knowledge Mechanism and Editing for Large Language Models | Google Drive |
    | AAAI2024 tutorial | Knowledge Editing for Large Language Models | Google Drive |


🌟 Why Knowledge Editing?

Knowledge Editing is a compelling field of research that focuses on facilitating efficient modifications to the behavior of models, particularly foundation models. The aim is to implement these changes within a specified scope of interest without negatively affecting the model's performance across a broader range of inputs.
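To make "a specified scope of interest" concrete, work in this area (e.g., the EMNLP 2023 paper listed under Surveys) commonly scores an edited model on reliability (the edited prompt itself returns the new target), generality (paraphrases within the edit scope change as well), and locality (out-of-scope behavior is untouched). The sketch below is a minimal, hypothetical illustration of that contract; the lookup-table "models" and probe inputs are toy stand-ins for a real base LLM, its edited counterpart, and benchmark data such as zsRE or CounterFact.

```python
# A minimal sketch of the knowledge-editing evaluation contract, not any
# specific editing method. The lookup-table "models" below are hypothetical
# stand-ins for a real base LLM and its edited counterpart.
from typing import Callable, List

Model = Callable[[str], str]

def reliability(edited: Model, edit_prompt: str, target: str) -> bool:
    """The edited model must return the new target on the edited prompt."""
    return edited(edit_prompt) == target

def generality(edited: Model, paraphrases: List[str], target: str) -> float:
    """Fraction of in-scope paraphrases that also return the new target."""
    return sum(edited(p) == target for p in paraphrases) / len(paraphrases)

def locality(base: Model, edited: Model, unrelated: List[str]) -> float:
    """Fraction of out-of-scope inputs whose output the edit left unchanged."""
    return sum(edited(q) == base(q) for q in unrelated) / len(unrelated)

# Toy stand-ins: dictionary lookups instead of real language models.
base = {"Who is the UK PM?": "Boris Johnson", "Capital of France?": "Paris"}.get
edited = {"Who is the UK PM?": "Rishi Sunak", "Capital of France?": "Paris"}.get

print(reliability(edited, "Who is the UK PM?", "Rishi Sunak"))  # True
print(locality(base, edited, ["Capital of France?"]))           # 1.0
```

A successful edit scores high on all three; the tension between reliability/generality and locality is what distinguishes editing from plain fine-tuning.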

Keywords

Knowledge Editing has strong connections with the following topics:

  • Updating and fixing bugs for large language models
  • Language models as knowledge base, locating knowledge in large language models
  • Lifelong learning, unlearning, etc.
  • Security and privacy for large language models
<div align=center><img src="./img/ke.png" width="100%" height="80%" /></div>

Comparisons of different technologies

<div align=center><img src="./img/comparison.png" width="60%" height="48%" /></div>

📜 Resources

This is a collection of research and review papers on Knowledge Editing. Suggestions and pull requests are welcome to help share the latest research progress.

Tutorials

Knowledge Editing for Large Language Models, AAAI 2024 Tutorial <br /> Ningyu Zhang, Jia-Chen Gu, Yunzhi Yao, Zhen Bi, Shumin Deng. [Github] [Google Drive] [Baidu Pan]

Editing Large Language Models, AACL 2023 Tutorial <br /> Ningyu Zhang, Yunzhi Yao, Shumin Deng. [Github] [Google Drive] [Baidu Pan]

Surveys

Knowledge Mechanisms in Large Language Models: A Survey and Perspective (EMNLP 2024 Findings) <br /> Mengru Wang, Yunzhi Yao, Ziwen Xu, Shuofei Qiao, Shumin Deng, Peng Wang, Xiang Chen, Jia-Chen Gu, Yong Jiang, Pengjun Xie, Fei Huang, Huajun Chen, Ningyu Zhang. [paper]

A Comprehensive Study of Knowledge Editing for Large Language Models <br /> Ningyu Zhang, Yunzhi Yao, Bozhong Tian, Peng Wang, Shumin Deng, Mengru Wang, Zekun Xi, Shengyu Mao, Jintian Zhang, Yuansheng Ni, Siyuan Cheng, Ziwen Xu, Xin Xu, Jia-Chen Gu, Yong Jiang, Pengjun Xie, Fei Huang, Lei Liang, Zhiqiang Zhang, Xiaowei Zhu, Jun Zhou, Huajun Chen. [paper][benchmark][code]

Editing Large Language Models: Problems, Methods, and Opportunities, EMNLP 2023 Main Conference Paper <br /> Yunzhi Yao, Peng Wang, Bozhong Tian, Siyuan Cheng, Zhoubo Li, Shumin Deng, Huajun Chen, Ningyu Zhang. [paper][code]

Knowledge Editing for Large Language Models: A Survey <br /> Song Wang, Yaochen Zhu, Haochen Liu, Zaiyi Zheng, Chen Chen, Jundong Li. [paper]

A Survey on Knowledge Editing of Neural Networks <br /> Vittorio Mazzia, Alessandro Pedrani, Andrea Caciolai, Kay Rottmann, Davide Bernardi. [paper]

Knowledge Unlearning for LLMs: Tasks, Methods, and Challenges <br /> Nianwen Si, Hao Zhang, Heyu Chang, Wenlin Zhang, Dan Qu, Weiqiang Zhang. [paper]

<div align=center><img src="./img/overview.jpg" width="100%" height="80%" /></div>

Methods

Preserve Parameters

Memory-based
  1. Memory-Based Model Editing at Scale (ICML 2022) <br /> Eric Mitchell, Charles Lin, Antoine Bosselut, Christopher D. Manning, Chelsea Finn. (A toy sketch of the memory-based routing idea follows below.)
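As flagged in the entry above, the following is a toy, hypothetical sketch of the memory-based recipe, not the paper's implementation: edits live in an external memory, a scope check routes in-scope queries to the stored answer, and everything else falls through to the frozen base model. The `MemoryEditor` class, the word-overlap heuristic, and the dictionary "model" are illustrative assumptions; the paper uses a learned scope classifier and a counterfactual model in their place.

```python
# Toy illustration of memory-based editing: the base model is never modified;
# edits are stored externally and consulted only for in-scope queries.
from typing import Callable, List, Tuple

def word_overlap(a: str, b: str) -> float:
    """Jaccard overlap between the word sets of two strings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

class MemoryEditor:
    def __init__(self, base_model: Callable[[str], str], threshold: float = 0.5):
        self.base_model = base_model             # frozen; never modified
        self.memory: List[Tuple[str, str]] = []  # (edit prompt, new answer)
        self.threshold = threshold

    def edit(self, prompt: str, new_answer: str) -> None:
        """Record an edit without touching the base model's weights."""
        self.memory.append((prompt, new_answer))

    def __call__(self, query: str) -> str:
        # Route to the closest stored edit if the query looks in-scope;
        # otherwise preserve base-model behavior (locality by construction).
        if self.memory:
            prompt, answer = max(self.memory, key=lambda e: word_overlap(query, e[0]))
            if word_overlap(query, prompt) >= self.threshold:
                return answer
        return self.base_model(query)

base = {"Capital of France?": "Paris"}.get
editor = MemoryEditor(base)
editor.edit("Who is the UK PM?", "Rishi Sunak")
print(editor("Who is the UK PM?"))    # "Rishi Sunak", served from memory
print(editor("Capital of France?"))   # "Paris", base behavior preserved
```

Because the base weights stay untouched, this family of methods gets locality essentially for free; the hard part, which the learned components address, is deciding reliably what counts as in-scope.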