6 skills found
requie / LLMSecurityGuideA comprehensive reference for securing Large Language Models (LLMs). Covers OWASP GenAI Top-10 risks, prompt injection, adversarial attacks, real-world incidents, and practical defenses. Includes catalogs of red-teaming tools, guardrails, and mitigation strategies to help developers, researchers, and security teams deploy AI responsibly.
epappas / LlmtraceZero-code LLM security & observability proxy. Real-time prompt injection detection, PII scanning, and cost control for OpenAI-compatible APIs. Built in Rust.
HASHIRU-AI / NAAMSENeural Adversarial Agent Mutation-based Security Evaluator
0x-Professor / VeilArmorVeil Armor is an enterprise-grade security framework for Large Language Models (LLMs) that provides multi-layered protection against prompt injections, jailbreaks, PII leakage, and sophisticated attack vectors.
regaan / BasiliskBasilisk — Open-source AI red teaming framework with genetic prompt evolution. Automated LLM security testing for GPT-4, Claude, Gemini. OWASP LLM Top 10 coverage. 32 attack modules.
kevinl05 / PromptmenotDetect and sanitize prompt injection attacks in Rails apps. Protects against direct injection (users hacking your LLMs via form inputs) and indirect injection (malicious prompts stored for other LLMs to scrape). ~70 detection patterns across 7 attack categories with configurable sensitivity levels. Now includes resource extraction detection pattern