
Perplex

Analyze how "surprised" LLMs are when reading a piece of text

Install / Use

/learn @Belluxx/Perplex

README

Perplex

This offline tool lets you analyze a given text and see how "surprising" it is to an LLM. It can help indicate whether a text is AI-generated (though that is not its purpose), or simply illustrate how LLMs work under the hood.

This is based on the awesome work by the llama.cpp team, so you will need to provide your own .gguf model file to use it.

<picture> <source media="(prefers-color-scheme: dark)" srcset="./static/dark.png"> <source media="(prefers-color-scheme: light)" srcset="./static/light.png"> <img alt="Shows a black logo in light color mode and a white one in dark color mode." src="./static/light.png"> </picture>

Usage

  1. Build and run the app with cargo run --release
  2. Select a GGUF model file
  3. Paste or write text in the input field
  4. Click "Analyze"
  5. See the results

Green tokens are the ones the LLM predicted almost perfectly, while red tokens are the most surprising to the model.
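The green/red coloring can be understood in terms of surprisal: the less probable the model considered the token that actually appeared, the more "surprising" it is. Below is a minimal Rust sketch of this idea; the thresholds and names are illustrative assumptions, not Perplex's actual code.

```rust
// Illustrative sketch (not Perplex's real implementation): classify a token
// by its surprisal, -ln(p), where p is the probability the model assigned
// to the token that actually appeared, given the preceding context.

#[derive(Debug, PartialEq)]
enum Color {
    Green,  // low surprisal: the model predicted this token almost perfectly
    Yellow, // moderate surprisal
    Red,    // high surprisal: the token was unexpected
}

// Threshold values here are arbitrary examples.
fn classify(prob: f64) -> Color {
    let surprisal = -prob.ln();
    if surprisal < 1.0 {
        Color::Green
    } else if surprisal < 4.0 {
        Color::Yellow
    } else {
        Color::Red
    }
}

fn main() {
    // p = 0.9  -> surprisal ~0.11 -> Green
    // p = 0.005 -> surprisal ~5.3 -> Red
    println!("{:?}", classify(0.9));
    println!("{:?}", classify(0.005));
}
```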

You can hover over a specific token to see how it ranked in the model's predictions, along with a top-5 leaderboard of the highest-probability tokens.
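The leaderboard shown on hover amounts to sorting the model's candidate tokens by probability and reporting the observed token's position. The Rust sketch below shows one way to do that; the function and field names are assumptions for illustration, not Perplex's real API.

```rust
// Hypothetical sketch of the hover leaderboard: given the probability the
// model assigned to each candidate token at one position, rank candidates
// and find where the token that actually appeared in the text placed.

/// Sort candidates by probability, highest first.
fn rank_candidates(mut candidates: Vec<(String, f64)>) -> Vec<(String, f64)> {
    candidates.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    candidates
}

/// 1-based rank of the observed token in the sorted candidate list.
fn rank_of(sorted: &[(String, f64)], token: &str) -> Option<usize> {
    sorted.iter().position(|(t, _)| t == token).map(|i| i + 1)
}

fn main() {
    let ranked = rank_candidates(vec![
        ("cat".into(), 0.05),
        ("the".into(), 0.40),
        ("a".into(), 0.20),
        ("an".into(), 0.10),
        ("this".into(), 0.08),
        ("my".into(), 0.07),
    ]);
    // Top-5 leaderboard: the five highest-probability candidates.
    let top5: Vec<_> = ranked.iter().take(5).collect();
    println!("top-5: {:?}", top5);
    // Where did the observed token land?
    println!("rank of 'an': {:?}", rank_of(&ranked, "an")); // Some(3)
}
```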

View on GitHub
GitHub Stars: 16
Category: Development
Updated: 1d ago
Forks: 1

Languages

Rust

Security Score

80/100

Audited on Mar 23, 2026

No findings