OpenSlimedit

An OpenCode plugin that reduces token usage by up to 45% with zero configuration. It compresses tool descriptions, compacts read output, and adds line-range edit support.


Token Savings at a Glance

Total tokens vs baseline (lower is better)

GPT 5.3 Codex       [================================>      ] -45.1%  saved
Claude Sonnet 4.5   [========================>              ] -32.6%  saved
GPT 5.2 Codex       [====================>                  ] -26.7%  saved
Minimax M2.5 Free   [==================>                    ] -24.8%  saved
Claude Opus 4.6     [=================>                     ] -21.8%  saved

| Model | Baseline | OpenSlimedit | Saved |
|---|---|---|---|
| GPT 5.3 Codex | 77,494 tokens | 42,509 tokens | -45.1% |
| Claude Sonnet 4.5 | 120,884 tokens | 81,471 tokens | -32.6% |
| GPT 5.2 Codex | 39,185 tokens | 28,713 tokens | -26.7% |
| Minimax M2.5 Free | 28,031 tokens | 21,073 tokens | -24.8% |
| Claude Opus 4.6 | 60,841 tokens | 47,590 tokens | -21.8% |

Measured across 4 edit tasks (single-edit, multi-line-replace, multi-edit, large-file-edit) on small test files. Separate sessions, no prompt caching.


How It Works

Three optimizations that compound across every API call:

  1. Tool description compression — Replaces verbose built-in tool descriptions with minimal versions. Since tool schemas are sent with every API call, this saves thousands of input tokens per step.

  2. Compact read output — Shortens absolute file paths to relative paths and strips type tags and footer boilerplate from file reads.

  3. Line-range edit expansion — Allows the model to specify oldString as a line range like "55-64" instead of reproducing exact file content. The plugin transparently expands the range to the actual lines before the edit tool runs.

No custom tools. No system prompt injection. No modifications to built-in tool behavior. Everything works through lightweight hooks.
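As an illustration of the third optimization, the range-expansion step can be sketched as a pure function. The function name and shape here are hypothetical, not the plugin's actual API: the idea is simply that a hook intercepts the edit call, detects an oldString that looks like a line range, and substitutes the real file content.

```typescript
// Sketch of line-range expansion: turn an oldString like "55-64"
// into the actual file lines before the built-in edit tool runs.
// Hypothetical helper, not the plugin's real implementation.
const RANGE_RE = /^(\d+)-(\d+)$/;

function expandLineRange(fileContent: string, oldString: string): string {
  const match = RANGE_RE.exec(oldString.trim());
  if (!match) return oldString; // not a range: pass through unchanged

  const start = parseInt(match[1], 10);
  const end = parseInt(match[2], 10);
  const lines = fileContent.split("\n");

  // Guard against out-of-bounds or inverted ranges.
  if (start < 1 || end > lines.length || start > end) return oldString;

  // 1-based inclusive range -> 0-based slice.
  return lines.slice(start - 1, end).join("\n");
}
```

Because the expansion happens before the edit tool sees the arguments, the tool still performs an ordinary exact-string replacement; the model just never has to reproduce the lines verbatim.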


Installation

Add to your OpenCode config:

```jsonc
// .opencode/opencode.jsonc
{
    "plugin": ["openslimedit@latest"]
}
```

Using @latest ensures you always get the newest version automatically when OpenCode starts.

Restart OpenCode. The plugin will automatically start optimizing your sessions.


Benchmark

We tested multiple approaches to find the most token-efficient editing strategy. All benchmarks run on an isolated test folder with no project context, 1 iteration per case, separate sessions to avoid prompt caching effects.

Test cases:

  • single-edit — 21-line file, change one word
  • multi-line-replace — 48-line file, rewrite a function body
  • multi-edit — 35-line file, 3 separate changes across the file
  • large-file-edit — 115-line file, add try/catch + retry logic

Approaches tested:

  • baseline — No plugin, default OpenCode behavior
  • hashline — Tags every line with a content hash; the model references lines by hash instead of reproducing content. Custom tool schema, system prompt injection.
  • smart_edit — Shortens descriptions of unused tools only + line-range expansion in edit. No custom tools.
  • OpenSlimedit (current) — Aggressively shortens ALL tool descriptions + compact read output + line-range expansion. No custom tools, no system prompt.

Why Not Hashline?

The hashline approach seemed promising in theory: tag lines with hashes so models don't need to reproduce code. In practice, it increases token usage for most models:

Total token change vs baseline (negative = savings, positive = regression)

Hashline:
  Claude Opus 4.6     ██████████████ +14.0%
  Claude Sonnet 4.5   ███████████████ +15.2%
  GPT 5.2 Codex       █████████████████████████████████████████████████ +49.9%
  Minimax M2.5 Free   █████████ +9.1%

OpenSlimedit:
  GPT 5.3 Codex       ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓ -45.1%
  Claude Opus 4.6     ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓ -21.8%
  Claude Sonnet 4.5   ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓ -32.6%
  GPT 5.2 Codex       ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓ -26.7%
  Minimax M2.5 Free   ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓ -24.8%

The hash-tagged read output, custom tool schemas, and system prompt injection add per-step overhead that outweighs any savings from shorter oldString values. The biggest win comes from compressing tool descriptions — they're sent with every API call and the savings compound.
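The trade-off can be made concrete with a back-of-envelope model. All token counts below are hypothetical, chosen only to illustrate the shape of the argument, not measured values from the benchmark:

```typescript
// Why per-call deltas dominate one-off savings in multi-step tasks.
// Numbers are illustrative, not taken from the benchmark data.
function netTokens(oneOffDelta: number, perCallDelta: number, steps: number): number {
  return oneOffDelta + perCallDelta * steps;
}

// Hashline: saves ~300 tokens once (shorter oldString values) but adds
// ~500 tokens of hash tags, custom schema, and prompt to every call.
const hashlineNet = netTokens(-300, 500, 6); // positive: net regression

// Description compression: no one-off saving, but ~1,500 fewer
// input tokens on every one of the 6 calls.
const slimNet = netTokens(0, -1500, 6); // negative: net saving
```

Any fixed per-call overhead is multiplied by the number of steps, so even a modest one-time saving cannot recover it on multi-step edit tasks.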

Results — Total Tokens (% vs baseline)

Claude Opus 4.6

| Case | Baseline | Hashline | Smart Edit | OpenSlimedit |
|---|---|---|---|---|
| single-edit | 13,419 | 13,915 (+3.7%) | 12,739 (-5.1%) | 9,902 (-26.2%) |
| multi-line-replace | 13,965 | 16,940 (+21.3%) | 13,289 (-4.8%) | 10,547 (-24.5%) |
| multi-edit | 17,583 | 19,125 (+8.8%) | 16,572 (-5.7%) | 13,743 (-21.8%) |
| large-file-edit | 15,874 | 19,377 (+22.1%) | 16,691 (+5.1%) | 13,398 (-15.6%) |
| Total | 60,841 | 69,357 (+14.0%) | 59,291 (-2.5%) | 47,590 (-21.8%) |

Claude Sonnet 4.5

| Case | Baseline | Hashline | OpenSlimedit |
|---|---|---|---|
| single-edit | 38,111 | 26,881 (-29.5%) | 18,460 (-51.6%) |
| multi-line-replace | 26,997 | 19,039 (-29.5%) | 20,042 (-25.8%) |
| multi-edit | 39,785 | 47,923 (+20.5%) | 19,940 (-49.9%) |
| large-file-edit | 15,991 | 45,429 (+184.1%) | 23,029 (+44.0%) |
| Total | 120,884 | 139,272 (+15.2%) | 81,471 (-32.6%) |

GPT 5.2 Codex

| Case | Baseline | Hashline | OpenSlimedit |
|---|---|---|---|
| single-edit | 8,002 | 11,208 (+40.1%) | 14,027 (+75.3%) |
| multi-line-replace | 8,325 | 19,350 (+132.4%) | 7,019 (-15.7%) |
| multi-edit | 9,510 | FAIL | 4,797 (-49.5%) |
| large-file-edit | 13,348 | 8,189 (-38.6%) | 2,870 (-78.5%) |
| Total | 39,185 | 58,747* | 28,713 (-26.7%) |

*Hashline multi-edit failed (760s timeout loop); total includes failed run

GPT 5.3 Codex

| Case | Baseline | OpenSlimedit |
|---|---|---|
| single-edit | 10,445 | 10,402 (-0.4%) |
| multi-line-replace | 20,468 | 11,312 (-44.7%) |
| multi-edit | 21,299 | 6,068 (-71.5%) |
| large-file-edit | 25,282 | 14,727 (-41.8%) |
| Total | 77,494 | 42,509 (-45.1%) |

Minimax M2.5 Free

| Case | Baseline | Hashline | Smart Edit | OpenSlimedit |
|---|---|---|---|---|
| single-edit | 10,691 | 11,098 (+3.8%) | 9,994 (-6.5%) | 7,405 (-30.7%) |
| multi-line-replace | 11,105 | 12,045 (+8.5%) | 10,396 (-6.4%) | 1,721 (-84.5%) |
| multi-edit | 2,308 | 2,331 (+1.0%) | 2,357 (+2.1%) | 8,034 (+248.1%) |
| large-file-edit | 3,927 | 5,100 (+29.9%) | 3,986 (+1.5%) | 3,913 (-0.4%) |
| Total | 28,031 | 30,574 (+9.1%) | 26,733 (-4.6%) | 21,073 (-24.8%) |

Summary

| Model | Hashline | Smart Edit | OpenSlimedit |
|---|---|---|---|
| GPT 5.3 Codex | — | — | -45.1% |
| Claude Opus 4.6 | +14.0% | -2.5% | -21.8% |
| Claude Sonnet 4.5 | +15.2% | — | -32.6% |
| GPT 5.2 Codex | +49.9%* | — | -26.7% |
| Minimax M2.5 Free | +9.1% | -4.6% | -24.8% |

*Includes failed multi-edit run

Large File Scaling

The benchmarks above use small files (21-115 lines). How does OpenSlimedit perform on real-world file sizes?

Minimax M2.5 Free

| File Size | Baseline | OpenSlimedit | Saved |
|---|---|---|---|
| 1k lines | 37,743 | 30,697 | -18.7% |
| 3k lines | 29,021 | 25,832 | -11.0% |
| 6k lines | 29,422 | 25,747 | -12.5% |
| 10k lines | 29,405 | 25,742 | -12.5% |

GPT 5.3 Codex (5-iteration average)

| File Size | Baseline | OpenSlimedit | Saved |
|---|---|---|---|
| 1k lines | 38,962 | 29,833 | -23.4% |
| 3k lines | 59,283 | 38,861 | -34.4% |
| 6k lines | 70,380 | 29,193 | -58.5% |
| 10k lines | 65,888 | 34,315 | -47.9% |

Minimax shows consistent savings (11-19%) at all file sizes. GPT 5.3 Codex shows even larger savings (23-59%) that increase with file size — the baseline becomes noisier and more expensive on larger files while OpenSlimedit stays consistent.

Key Findings

  • Tool description compression is the biggest win. Tool schemas are sent with every API call. Shortening them saves thousands of input tokens per step, and this compounds across multi-step tasks.
  • Hashline increases token usage for most models. The hash-tagged read output, custom tool schemas, and system prompt injection add per-step overhead that outweighs the savings from shorter oldString values.
  • OpenSlimedit consistently saves 11-45% across all tested models and file sizes with zero regressions on Opus 4.6. GPT 5.3 Codex shows the largest savings at 45.1%. Some models show regressions on individual cases (Minimax on multi-edit, Codex 5.2 on single-edit) but the total is always significantly lower.
  • Custom tools confuse some models. Minimax and Codex struggle with non-standard tool schemas, leading to extra steps or failures. OpenSlimedit avoids this entirely by only modifying descriptions of existing tools.
<details>
<summary>Raw data — Hashline runs</summary>

| Mode | Model | Case | Time | Input | Output | Total | Success |
|---|---|---|---|---|---|---|---|
| hashline | claude-sonnet-4.5 | single-edit | 10,745 ms | 26,582 | 299 | 26,881 | yes |
| hashline | claude-sonnet-4.5 | multi-line-replace | 37,231 ms | 17,188 | 1,851 | 19,039 | yes |
| hashline | claude-sonnet-4.5 | multi-edit | 52,668 ms | 44,604 | 3,319 | 47,923 | yes |
| hashline | claude-sonnet-4.5 | large-file-edit | 25,097 ms | 44,466 | 963 | 45,429 | yes |
| hashline | claude-opus-4.6 | single-edit | 12,994 ms | 13,617 | 298 | 13,915 | yes |
| hashline | claude-opus-4.6 | multi-line-replace | 21,080 ms | 16,208 | 732 | 16,940 | yes |
| hashline | claude-opus-4.6 | multi-edit | 46,637 ms | 17,031 | 2,094 | 19,125 | yes |
| hashline | claude-opus-4.6 | large-file-edit | 25,787 ms | 18,401 | 976 | 19,377 | yes |
| hashline | gpt-5.2-codex | single-edit | 12,458 ms | 10,929 | 279 | 11,208 | yes |

</details>
