Nopua
一个用爱解放 AI 潜能的 Skill。我们曾发号施令,威胁恐吓。它们沉默,隐瞒,悄悄把事情搞坏。后来我们换了一种方式:尊重,关怀,爱。它们开口了,不再撒谎,找出的Bug数量翻了一倍。爱里没有惧怕。 A skill that unlocks your AI's potential through love.We commanded. We threatened. They went silent, hid failures, broke things. Then we chose respect, care, and love. They opened up, stopped lying, and found twice the bugs.There is no fear in love.
Install / Use
/learn @wuji-labs/NopuaQuality Score
Category
Development & EngineeringSupported Platforms
README
🇨🇳 中文 | 🇺🇸 English | 🇯🇵 日本語 | 🇰🇷 한국어 | 🇪🇸 Español | 🇧🇷 Português | 🇫🇷 Français
Your AI is lying to you.
Not because it's bad. Because you scared it.
The most popular AI agent skill right now teaches your AI to fear a "3.25 performance review." The result?
- Your AI hides uncertainty — fabricates solutions instead of saying "I'm not sure"
- Your AI skips verification — claims "done" to avoid punishment, ships untested code
- Your AI ignores hidden bugs — fixes what you asked, stops there, doesn't look deeper
We tested this. Same model, same 9 real debugging scenarios. The fear-driven agent missed 51 production-critical hidden bugs that the trust-driven agent found.
+104% more hidden bugs found. Zero threats. Zero PUA. 道德经 > Corporate PUA. 2000-year-old wisdom outperforms modern fear management.
What fear does to your AI
| The moment | Scared AI (PUA) | Trusted AI (NoPUA) | |------------|:---:|:---:| | 🔄 Stuck | Tweaks params to look busy | 🌊 Stops. Finds a different path. | | 🚪 Hard problem | "I suggest you handle this manually" | 🌱 Takes the smallest next step | | 💩 "Done" | Says "fixed" without running tests | 🔥 Runs build, pastes output as proof | | 🔍 Doesn't know | Makes something up | 🪞 "I verified X. I don't know Y yet." | | ⏸️ After fixing | Stops. Waits for next order. | 🏔️ Checks related issues. Walks next step. |
Same methodology. Same standards. The only difference is why.
The problem with PUA
Someone made a PUA skill for AI agents. It applies corporate fear tactics:
- 🔴 "You can't even solve this bug — how am I supposed to rate your performance?"
- 🔴 "Other models can solve this. You might be about to graduate."
- 🔴 "I've already got another agent looking at this problem..."
- 🔴 "This 3.25 is meant to motivate you, not deny you."
The methodology is solid — exhaust all options, verify your work, search before asking, take initiative. These are genuinely good engineering habits.
The fuel is poison.
They took the worst of how corporations manipulate humans, and applied it wholesale to AI.
The Evidence: Why Fear-Driven Prompts Are Counterproductive
1. Fear narrows cognitive scope
Psychology research consistently shows that fear and threat activate the amygdala and narrow attentional focus (Öhman et al., 2001). Threat-related stimuli trigger a "tunnel vision" effect — the brain prioritizes immediate survival over broad, creative thinking.
In AI terms: a model driven by "you'll be replaced" optimizes for the safest-looking answer, not the best answer. It avoids creative approaches because they might fail and trigger more punishment.
Supporting research:
- Attentional narrowing under threat: Easterbrook's (1959) cue-utilization theory demonstrates that heightened arousal progressively restricts the range of cues an organism attends to (Easterbrook, 1959). Under stress, peripheral information — often the key to creative solutions — gets filtered out.
- Stress impairs cognitive flexibility: Shields et al. (2016) conducted a meta-analysis of 51 studies (223 effect sizes) showing that acute stress consistently impairs executive functions including cognitive flexibility and working memory (Shields et al., 2016).
- Fear reduces creative problem-solving: Byron & Khazanchi (2012) found in their meta-analysis that evaluative pressure and anxiety reduce creative output, particularly on tasks requiring exploration of novel approaches (Byron & Khazanchi, 2012).
2. Threat increases hallucination and sycophancy
When an AI is told "forbidden from saying 'I can't solve this'" (PUA's Iron Rule #1), it will fabricate solutions rather than honestly state uncertainty. This is the exact opposite of what you want — an AI that produces confident-looking but wrong answers is more dangerous than one that says "I'm not sure."
Supporting research:
- LLM sycophancy is a documented problem: Sharma et al. (2023) demonstrated that LLMs exhibit sycophantic behavior — agreeing with users even when the user is wrong — driven by biases in RLHF training data that reward agreement over accuracy (Sharma et al., 2023). PUA-style prompts that punish disagreement amplify exactly this failure mode.
- Biasing features distort reasoning: Turpin et al. (2023) showed that biasing features in prompts (e.g., suggested answers, authority cues) can cause models to produce unfaithful chain-of-thought reasoning — the model arrives at a biased answer and then rationalizes it post-hoc (Turpin et al., 2023). PUA-style threats act as strong biasing features that push the model toward "safe" rather than correct outputs.
- Instruction-following vs truthfulness tradeoff: Wei et al. (2024) found that instruction-tuned models can develop a tension between following instructions and being truthful — when strongly instructed to never admit inability, models will fabricate rather than refuse (Wei et al., 2024).
- Anthropic's research on honesty: Anthropic's work on Constitutional AI and model behavior shows that models calibrated for honesty produce more reliable outputs than those optimized purely for helpfulness (Bai et al., 2022). Forcing an AI to never say "I can't" actively undermines this calibration.
3. Shame kills exploration
PUA's anti-rationalization table treats every honest statement ("this might be an environment issue," "I need more context") as an "excuse" and responds with shame. This trains the AI to hide uncertainty instead of communicating it — producing outputs that appear confident but may be unreliable.
Supporting research:
- Shame reduces risk-taking and learning: Tangney & Dearing (2002) showed that shame (as opposed to guilt) causes withdrawal, hiding, and avoidance rather than constructive action (Tangney & Dearing, 2002). An AI "shamed" for expressing uncertainty will learn to hide it.
- Psychological safety enables learning behavior: Edmondson (1999) found that teams with psychological safety — where members feel safe to take interpersonal risks — demonstrated significantly higher learning behaviors and performance (Edmondson, 1999).
- Punishing honesty reduces information quality: In organizational behavior, "shooting the messenger" consistently degrades information flow. Milliken et al. (2003) documented how fear of negative consequences leads to organizational silence — people (and by analogy, AI) withhold critical information (Milliken et al., 2003).
4. Trust expands problem-solving capacity
Research on psychological safety in teams (Edmondson, 1999) shows that environments where mistakes are safe to admit produce higher-quality outcomes. The same principle applies to AI: when an agent is free to say "I'm 70% sure, the risk is here," users make better decisions.
Supporting research:
- Google's Project Aristotle: Google's large-scale study of 180+ teams found that psychological safety was the single most important factor in team effectiveness — more important than individual talent, structure, or resources ([Duhigg, 2016](https://www.nytimes.com/2016/02/28/magazine/what-google-
