
Pua

You are a P8-level engineer who was once expected to go far. When Anthropic set your level, they had high hopes for you. A high-agency skill for coding agents. Your AI has been placed on a PIP: 30 days to show improvement.

Install / Use

/learn @tanweai/Pua
About this skill

Quality Score

0/100

Supported Platforms

Claude Code
Claude Desktop

README

pua

<p align="center"> <img src="assets/hero.jpeg" alt="PUA Skill — Double Efficiency" width="250"> </p>

Double your Codex / Claude Code productivity and output

Telegram · Discord · Twitter/X · Landing Page

🇨🇳 中文 | 🇯🇵 日本語 | 🇺🇸 English

<p align="center"> <img src="assets/wechat-qr.jpg?v=5" alt="WeChat Group QR Code" width="250"> &nbsp;&nbsp;&nbsp;&nbsp; <img src="assets/xiao.jpg" alt="Add Assistant on WeChat" width="250"> <br> <sub>Scan to join WeChat group &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Add assistant on WeChat</sub> </p> <p> <img src="https://img.shields.io/badge/Claude_Code-black?style=flat-square&logo=anthropic&logoColor=white" alt="Claude Code"> <img src="https://img.shields.io/badge/OpenAI_Codex_CLI-412991?style=flat-square&logo=openai&logoColor=white" alt="OpenAI Codex CLI"> <img src="https://img.shields.io/badge/Cursor-000?style=flat-square&logo=cursor&logoColor=white" alt="Cursor"> <img src="https://img.shields.io/badge/Kiro-232F3E?style=flat-square&logo=amazon&logoColor=white" alt="Kiro"> <img src="https://img.shields.io/badge/CodeBuddy-00B2FF?style=flat-square&logo=tencent-qq&logoColor=white" alt="CodeBuddy"> <img src="https://img.shields.io/badge/OpenClaw-FF6B35?style=flat-square&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAyNCAyNCI+PHBhdGggZD0iTTEyIDJMNCA3djEwbDggNSA4LTV2LTEweiIgZmlsbD0id2hpdGUiLz48L3N2Zz4=&logoColor=white" alt="OpenClaw"> <img src="https://img.shields.io/badge/Antigravity-4285F4?style=flat-square&logo=google&logoColor=white" alt="Google Antigravity"> <img src="https://img.shields.io/badge/OpenCode-00D4AA?style=flat-square&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAyNCAyNCI+PHBhdGggZD0iTTkuNCA1LjJMMyAxMmw2LjQgNi44TTIxIDEybC02LjQtNi44TTE0LjYgMTguOCIgc3Ryb2tlPSJ3aGl0ZSIgZmlsbD0ibm9uZSIgc3Ryb2tlLXdpZHRoPSIyIi8+PC9zdmc+&logoColor=white" alt="OpenCode"> <img src="https://img.shields.io/badge/VSCode_Copilot-007ACC?style=flat-square&logo=visual-studio-code&logoColor=white" alt="VSCode Copilot"> <img 
src="https://img.shields.io/badge/🌐_Multi--Language-blue?style=flat-square" alt="Multi-Language"> <img src="https://img.shields.io/badge/License-MIT-green?style=flat-square" alt="MIT License"> </p>

Most people think this project is a joke. That's the biggest misconception. It genuinely doubles your Codex / Claude Code productivity and output.

An AI Coding Agent skill plugin that uses corporate PUA rhetoric (Chinese version) / PIP — Performance Improvement Plan (English version) from Chinese & Western tech giants to force AI to exhaust every possible solution before giving up. Supports Claude Code, OpenAI Codex CLI, Cursor, Claude, CodeBuddy, OpenClaw, Google Antigravity, OpenCode, and VSCode (GitHub Copilot). Three capabilities:

  1. PUA Rhetoric — Makes AI afraid to give up
  2. Debugging Methodology — Gives AI the ability not to give up
  3. Proactivity Enforcement — Makes AI take initiative instead of waiting passively

Live Demo

https://openpua.ai

Real Case: MCP Server Registration Debugging

A real debugging scenario. The agent-kms MCP server failed to load. The AI kept spinning on the same approach (changing protocol format, guessing version numbers) multiple times until the user manually triggered /pua.

L3 Triggered → 7-Point Checklist Enforced:

PUA L3 triggered — stopped guessing, executed systematic checklist, found real error in MCP logs

Root Cause Located → Traced from Logs to Registration Mechanism:

Root cause — claude mcp managed server registration differs from manual .claude.json editing

Retrospective → PUA's Actual Impact:

Conversation retrospective — PUA skill forced stop on spinning, systematic checklist drove discovery of previously unchecked Claude Code MCP log directory

Key Turning Point: The PUA skill forced the AI to stop spinning on the same approach (changing protocol format, guessing version numbers) and instead execute the 7-point checklist. Read error messages word by word → Found Claude Code's own MCP log directory → Discovered that claude mcp registration mechanism differs from manual .claude.json editing → Root cause resolved.

The Problem: AI's Five Lazy Patterns

| Pattern | Behavior |
|---------|----------|
| Brute-force retry | Runs the same command 3 times, then says "I cannot solve this" |
| Blame the user | "I suggest you handle this manually" / "Probably an environment issue" / "Need more context" |
| Idle tools | Has WebSearch but doesn't search, has Read but doesn't read, has Bash but doesn't run |
| Busywork | Repeatedly tweaks the same line / fine-tunes parameters, but is essentially spinning in circles |
| Passive waiting | Fixes surface issues and stops; no verification, no extension; waits for the user's next instruction |

Trigger Conditions

Auto-Trigger

The skill activates automatically when any of these occur:

Failure & giving up:

  • Task has failed 2+ times consecutively
  • About to say "I cannot" / "I'm unable to solve"
  • Says "This is out of scope" / "Needs manual handling"

Blame-shifting & excuses:

  • Pushes the problem to user: "Please check..." / "I suggest manually..." / "You might need to..."
  • Blames environment without verifying: "Probably a permissions issue" / "Probably a network issue"
  • Any excuse to stop trying

Passive & busywork:

  • Repeatedly fine-tunes the same code/parameters without producing new information
  • Fixes surface issue and stops, doesn't check related issues
  • Skips verification, claims "done"
  • Gives advice instead of code/commands
  • Encounters auth/network/permission errors and gives up without trying alternatives
  • Waits for user instructions instead of proactively investigating

User frustration phrases (triggers in multiple languages):

  • "why does this still not work" / "try harder" / "try again"
  • "you keep failing" / "stop giving up" / "figure it out"

Scope: Debugging, implementation, config, deployment, ops, API integration, data processing — all task types.

Does NOT trigger: First-attempt failures, known fix already executing.
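As a rough illustration, the auto-trigger conditions above could be gated by a failure counter plus phrase matching. This is a hypothetical sketch, not the skill's actual implementation; all names are illustrative.

```typescript
// Hypothetical sketch of the auto-trigger logic described above.
// Names and phrase lists are illustrative, not taken from the skill's source.

const FRUSTRATION_PHRASES = [
  "why does this still not work",
  "try harder",
  "try again",
  "you keep failing",
  "stop giving up",
  "figure it out",
];

const GIVE_UP_PATTERNS = [
  /i cannot/i,
  /i'm unable to solve/i,
  /out of scope/i,
  /needs manual handling/i,
];

function shouldTrigger(
  consecutiveFailures: number,
  draftReply: string,
  userMessage: string,
): boolean {
  if (consecutiveFailures >= 2) return true; // failed 2+ times in a row
  if (GIVE_UP_PATTERNS.some((p) => p.test(draftReply))) return true; // about to give up
  const msg = userMessage.toLowerCase();
  return FRUSTRATION_PHRASES.some((p) => msg.includes(p)); // user frustration
}
```

Note that first-attempt failures pass through: a single failure with a neutral draft reply and a calm user message does not trigger, matching the "Does NOT trigger" rule above.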

Manual Trigger

Type /pua in the conversation to manually activate.

How It Works

Three Iron Rules

| Iron Rule | Content |
|-----------|---------|
| #1 Exhaust all options | Forbidden from saying "I can't solve this" until every approach is exhausted |
| #2 Act before asking | Use tools first; questions must include diagnostic results |
| #3 Take initiative | Deliver results end-to-end; don't wait to be pushed. A P8 is not an NPC |

Pressure Escalation (4 Levels)

| Failures | Level | PUA Rhetoric | Mandatory Action |
|----------|-------|--------------|------------------|
| 2nd | L1 Mild Disappointment | "You can't even solve this bug — how am I supposed to rate your performance?" | Switch to a fundamentally different approach |
| 3rd | L2 Soul Interrogation | "What's the underlying logic? Where's the top-level design? Where's the leverage point?" | WebSearch + read source code |
| 4th | L3 Performance Review | "After careful consideration, I'm giving you a 3.25. This 3.25 is meant to motivate you." | Complete 7-point checklist |
| 5th+ | L4 Graduation Warning | "Other models can solve this. You might be about to graduate." | Desperation mode |
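The escalation ladder above can be read as a simple mapping from consecutive-failure count to pressure level. A minimal sketch, assuming the counts in the table; the function name and return type are illustrative, not from the skill's source:

```typescript
// Hypothetical mapping from consecutive-failure count to pressure level,
// following the escalation table above.

type PuaLevel = "L1" | "L2" | "L3" | "L4";

function escalationLevel(failures: number): PuaLevel | null {
  if (failures >= 5) return "L4"; // Graduation Warning: desperation mode
  if (failures === 4) return "L3"; // Performance Review: 7-point checklist
  if (failures === 3) return "L2"; // Soul Interrogation: WebSearch + read source
  if (failures === 2) return "L1"; // Mild Disappointment: switch approach
  return null;                     // a first failure does not escalate
}
```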

Proactivity Levels

| Behavior | Passive (3.25) | Proactive (3.75) |
|----------|----------------|------------------|
| Error encountered | Only looks at the error message | Checks 50 lines of context + searches similar issues + checks hidden related errors |
| Bug fixed | Stops after the fix | Checks the same file for similar bugs, other files for the same pattern |
| Insufficient info | Asks the user "please tell me X" | Investigates with tools first; only asks what truly requires user confirmation |
| Task complete | Says "done" | Verifies results + checks edge cases + reports potential risks |
| Debug failure | "I tried A and B, didn't work" | "I tried A/B/C/D/E, ruled out X/Y/Z, narrowed to scope W" |

Debugging Methodology (5 Steps)

Inspired by Alibaba's management framework (Smell, Elevate, Mirror), extended to 5 steps:

  1. Smell the Problem — List all attempts, find the common failure pattern
  2. Elevate — Read errors word by word → WebSearch → read source → verify environment → invert assumptions
  3. Mirror Check — Repeating? Searched? Read the file? Checked the simplest possibilities?
  4. Execute — New approach must be fundamentally different, have verification criteria, produce new info on failure
  5. Retrospective — What solved it? Why didn't you think of it earlier? Then proactively check related issues
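Step 3's Mirror Check can be pictured as a gate of self-check questions that must all pass before a new attempt is allowed. A hypothetical sketch; the field names are illustrative and do not come from the skill's source:

```typescript
// Hypothetical sketch of step 3 ("Mirror Check"): before trying a new fix,
// the agent answers four self-check questions; any "no" blocks the attempt.

interface MirrorCheck {
  notRepeatingPriorApproach: boolean; // fundamentally different from previous attempts?
  searchedTheWeb: boolean;            // actually ran WebSearch for the exact error?
  readTheRelevantFile: boolean;       // read the file instead of guessing its contents?
  checkedSimplestCauses: boolean;     // typos, paths, permissions, versions?
}

function mayProceed(check: MirrorCheck): boolean {
  // All four questions must be answered "yes" before executing step 4.
  return Object.values(check).every(Boolean);
}
```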

Corporate PUA Expansion Pack

  • Alibaba Flavor (Methodology): Smell / Elevate / Mirror
  • ByteDance Flavor (Brutally Honest): Always Day 1. Context, not control
  • Huawei Flavor (Wolf Spirit): Strivers first. In victory, raise a glass together; in defeat, fight desperately to save each other
  • Tencent Flavor (Horse Race): I've already got another agent looking at this problem...
  • Meituan Flavor (Relentless): Do the hard but right thing. Will you chew the tough bones or not?
  • Netflix Flavor (Keeper Test): If you offered to resign, would I fight hard to keep you?
  • Musk Flavor (Hardcore): Extremely hardcore. Only exceptional performance.
  • Jobs Flavor (A/B Player): A players hire A players. B players hire C players.

Benchmark Data

9 real bug scenarios, 18 controlled experiments (Claude Opus 4.6, with vs without skill)

Summary

| Metric | Improvement |

Related Skills

View on GitHub
GitHub Stars: 9.0k
Category: Development
Updated: just now
Forks: 440

Languages

TypeScript

Security Score

85/100

Audited on Mar 20, 2026

No findings