CoolPrompt
Automatic Prompt Optimization Framework
CoolPrompt is a framework for automatic prompt creation and optimization.
Practical cases
- Automatic prompt engineering for solving tasks using LLM
- (Semi-)automatic generation of labeled data for fine-tuning
- Formalization of response quality assessment using LLM
- Prompt tuning for agent systems
Core features
- Optimize prompts with our autoprompting optimizers: HyPE, ReflectivePrompt, DistillPrompt
- LLM-agnostic: work with your custom LLM (open-source or proprietary) through supported LangChain LLMs
- Generate synthetic evaluation data when no input dataset is provided
- Evaluate prompts with multiple metrics for both classification and generation tasks
- Retrieve feedback to interpret prompt optimization results
- Automatic task detection for scenarios without explicit user-defined task specifications
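To make the idea of prompt optimization concrete, here is a toy sketch of the search loop an autoprompting optimizer runs: mutate a seed prompt and keep the variant that scores best under an evaluation metric. This is not CoolPrompt's actual algorithm (HyPE, ReflectivePrompt, and DistillPrompt are far more sophisticated), and the scorer below is a mock stand-in for a real metric such as accuracy on an evaluation set.

```python
import random

# Candidate instruction fragments the optimizer can append to a prompt.
PHRASES = [
    "Be concise.",
    "Explain step by step.",
    "Use vivid, concrete language.",
    "Structure the answer with an introduction and conclusion.",
]

def mock_score(prompt):
    """Stand-in for a real metric (e.g. accuracy on a labeled eval set):
    here we simply count how many useful fragments the prompt contains."""
    return sum(phrase in prompt for phrase in PHRASES)

def optimize(seed, steps=20, rng=None):
    """Greedy random search: try a mutated candidate each step and keep it
    only if it scores strictly better than the current best prompt."""
    rng = rng or random.Random(0)
    best = seed
    for _ in range(steps):
        candidate = best + " " + rng.choice(PHRASES)
        if mock_score(candidate) > mock_score(best):
            best = candidate
    return best

seed = "Write an essay about autumn."
best = optimize(seed)
print(mock_score(best) > mock_score(seed))  # → True
```

Real autoprompting frameworks replace the mock scorer with an LLM-based or dataset-based evaluation and use much richer mutation operators, but the keep-the-best-candidate loop is the same basic shape.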
Quick install
- Install with pip:
pip install coolprompt
- Install with git:
git clone https://github.com/CTLab-ITMO/CoolPrompt.git
cd CoolPrompt
pip install -r requirements.txt
Quick start
Import and initialize PromptTuner using the qwen3-4b-instruct model via HuggingFace:
from coolprompt.assistant import PromptTuner
prompt_tuner = PromptTuner()
prompt_tuner.run('Write an essay about autumn')
print(prompt_tuner.final_prompt)
# You are an expert writer and seasonal observer tasked with composing a rich,
# well-structured, and vividly descriptive essay on the theme of autumn...
Examples
See more examples in the notebooks to get familiar with the framework.
About project
- The framework is developed by Computer Technologies Lab (CT-Lab) of ITMO University.
- <a href="https://github.com/CTLab-ITMO/CoolPrompt/blob/master/docs/API.md">API Reference</a>
Contributing
- We welcome and value contributions and collaborations, so please contact us. For new code, check out <a href="https://github.com/CTLab-ITMO/CoolPrompt/blob/master/docs/CONTRIBUTING.md">CONTRIBUTING.md</a>.
Reference
For technical details and full experimental results, please check our papers.
<a href="https://www.fruct.org/files/publications/volume-38/fruct38/Kul.pdf">CoolPrompt</a>
@INPROCEEDINGS{11239071,
author={Kulin, Nikita and Zhuravlev, Viktor and Khairullin, Artur and Sitkina, Alena and Muravyov, Sergey},
booktitle={2025 38th Conference of Open Innovations Association (FRUCT)},
title={CoolPrompt: Automatic Prompt Optimization Framework for Large Language Models},
year={2025},
pages={158-166},
keywords={Technological innovation;Systematics;Large language models;Pipelines;Manuals;Prediction algorithms;Libraries;Prompt engineering;Optimization;Synthetic data},
doi={10.23919/FRUCT67853.2025.11239071}
}
<a href="https://ntv.ifmo.ru/file/article/23927.pdf">ReflectivePrompt</a>
@misc{zhuravlev2025reflectivepromptreflectiveevolutionautoprompting,
title={ReflectivePrompt: Reflective evolution in autoprompting algorithms},
author={Viktor N. Zhuravlev and Artur R. Khairullin and Ernest A. Dyagin and Alena N. Sitkina and Nikita I. Kulin},
year={2025},
eprint={2508.18870},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.18870},
}
<a href="https://arxiv.org/pdf/2508.18992">DistillPrompt</a>
@misc{dyagin2025automaticpromptoptimizationprompt,
title={Automatic Prompt Optimization with Prompt Distillation},
author={Ernest A. Dyagin and Nikita I. Kulin and Artur R. Khairullin and Viktor N. Zhuravlev and Alena N. Sitkina},
year={2025},
eprint={2508.18992},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.18992},
}