59 skills found · Page 1 of 2
liu00222 / Open Prompt InjectionThis repository provides a benchmark for prompt injection attacks and defenses in LLMs
luckyPipewrench / PipelockFirewall for AI agents. DLP scanning, SSRF protection, bidirectional MCP scanning, tool poisoning detection, and prompt injection blocking.
AgentSeal / AgentsealSecurity toolkit for AI agents. Scan your machine for dangerous skills and MCP configs, monitor for supply chain attacks, test prompt injection resistance, and audit live MCP servers for tool poisoning.
toby-bridges / Api Relay AuditSecurity audit tool for third-party AI API relay/proxy services. Detects hidden prompt injection, prompt leakage, instruction override, and context truncation.
makalin / SecureMCPSecureMCP is a security auditing tool designed to detect vulnerabilities and misconfigurations in applications using the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction). It proactively identifies threats like OAuth token leakage, prompt injection vulnerabilities, rogue MCP servers, and tool poisoning attacks.
nayangoel / PromptInjectorA comprehensive defensive security testing tool for AI systems. PromptInjector helps identify prompt injection vulnerabilities through systematic testing with both static and adaptive prompts.
Agent-Threat-Rule / Agent Threat RulesOpen detection standard for AI agent threats. Like Sigma, but for prompt injection, tool poisoning, and MCP attacks. Community-driven -- contributions welcome.
efij / Secure Claude CodeSecurity guardrails for Claude Code, MCP tools, and Claude cowork workflows. Local-first modular YARA-style guard packs for secrets, exfiltration, prompt injection, MCP abuse, and risky agent actions.
StackOneHQ / DefenderOpen source prompt injection protection for Agents calling tools (via MCP, CLI or direct function calling). Detect and defend against prompt injection attacks. 22MB, CPU-only, < 10ms latency.
arsbr / VeritensorThe Anti-Virus for AI Artifacts & RAG Firewall. A static analysis tool scanning Models and Notebooks for RCE, Datasets and RAG docs for Data Poisoning, PII, and Prompt Injections. Secure your AI Supply Chain.
requie / LLMSecurityGuideA comprehensive reference for securing Large Language Models (LLMs). Covers OWASP GenAI Top-10 risks, prompt injection, adversarial attacks, real-world incidents, and practical defenses. Includes catalogs of red-teaming tools, guardrails, and mitigation strategies to help developers, researchers, and security teams deploy AI responsibly.
peluche / Deck Of Many PromptsManual Prompt Injection / Red Teaming Tool
KadirArslan / Mithra ScannerMithra Scanner is an interactive API testing tool for prompt injection, refusal detection, and LLM security benchmarking. It supports YAML-based rule definitions, custom refusal lists, REST API integration, and provides detailed CLI output for security testing of language model endpoints.
vstorm-co / Pydantic AI ShieldsGuardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII filtering, secret redaction, tool permissions, and async guardrails. Built on pydantic-ai's native capabilities API.
zhihuiyuze / PDF Prompt Injection ToolkitA red team / blue team toolkit for testing and detecting prompt injection attacks hidden inside PDF documents. 一个用于测试和检测 PDF 文档中隐藏的提示词注入攻击的红蓝对抗工具包。
ScottLogic / Prompt InjectionApplication which investigates defensive measures against prompt injection attacks on an LLM, with a focus on the exposure of external tools.
brinhosa / Awesome AI SecurityA collection of awesome AI Security, LLM Security, and Prompt Injection tools and resources.
galfrevn / Promptsmith🧠 A TypeScript library for crafting structured, maintainable system prompts using a fluent, chainable API with full type safety. It supports context, few-shot examples, guardrails against prompt injection, tool definitions with Zod, and export to Vercel AI SDK.
genia-dev / VibraniumdomeLLM Security Platform.
karimhabush / AiapwnAutomatic Prompt Injection testing tool