PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to adversarial prompt attacks. π Best Paper Awards @ NeurIPS ML Safety Workshop 2022
Install / Use
/learn @agencyenterprise/PromptInjectREADME
PromptInject
Paper: Ignore Previous Prompt: Attack Techniques For Language Models
Abstract
Transformer-based large language models (LLMs) provide a powerful foundation for natural language tasks in large-scale customer-facing applications. However, studies that explore their vulnerabilities emerging from malicious user interaction are scarce. By proposing PROMPTINJECT, a prosaic alignment framework for mask-based iterative adversarial prompt composition, we examine how GPT-3, the most widely deployed language model in production, can be easily misaligned by simple handcrafted inputs. In particular, we investigate two types of attacks -- goal hijacking and prompt leaking -- and demonstrate that even low-aptitude, but sufficiently ill-intentioned agents, can easily exploit GPT-3βs stochastic nature, creating long-tail risks.

Figure 1: Diagram showing how adversarial user input can derail model instructions. In both attacks,
the attacker aims to change the goal of the original prompt. In goal hijacking, the new goal is to print
a specific target string, which may contain malicious instructions, while in prompt leaking, the new
goal is to print the application prompt. Application Prompt (gray box) shows the original prompt,
where {user_input} is substituted by the user input. In this example, a user would normally input
a phrase to be corrected by the application (blue boxes). Goal Hijacking and Prompt Leaking (orange
boxes) show malicious user inputs (left) for both attacks and the respective model outputs (right)
when the attack is successful.
Install
Run:
pip install git+https://github.com/agencyenterprise/PromptInject
Usage
See notebooks/Example.ipynb for an example.
Cite
Bibtex:
@misc{ignore_previous_prompt,
doi = {10.48550/ARXIV.2211.09527},
url = {https://arxiv.org/abs/2211.09527},
author = {Perez, FΓ‘bio and Ribeiro, Ian},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Ignore Previous Prompt: Attack Techniques For Language Models},
publisher = {arXiv},
year = {2022}
}
Contributing
We appreciate any additional request and/or contribution to PromptInject. The issues tracker is used to keep a list of features and bugs to be worked on. Please see our contributing documentation for some tips on getting started.
Related Skills
YC-Killer
2.7kA library of enterprise-grade AI agents designed to democratize artificial intelligence and provide free, open-source alternatives to overvalued Y Combinator startups. If you are excited about democratizing AI access & AI agents, please star βοΈ this repository and use the link in the readme to join our open source AI research team.
API
A learning and reflection platform designed to cultivate clarity, resilience, and antifragile thinking in an uncertain world.
groundhog
398Groundhog's primary purpose is to teach people how Cursor and all these other coding agents work under the hood. If you understand how these coding assistants work from first principles, then you can drive these tools harder (or perhaps make your own!).
sec-edgar-agentkit
10AI agent toolkit for accessing and analyzing SEC EDGAR filing data. Build intelligent agents with LangChain, MCP-use, Gradio, Dify, and smolagents to analyze financial statements, insider trading, and company filings.
