# Promptulate

🚀 Lightweight large language model automation and autonomous language agent development framework. Build your LLM Agent application in a Pythonic way!
## Overview
Promptulate is an AI Agent application development framework crafted by Cogit Lab, which offers developers an extremely concise and efficient way to build Agent applications through a Pythonic development paradigm. The core philosophy of Promptulate is to borrow and integrate the wisdom of the open-source community, incorporating the highlights of various development frameworks to lower the barrier to entry and unify developer consensus. With Promptulate, you can manipulate components like LLM, Agent, Tool, and RAG with the most succinct code, as most tasks can be completed with just a few lines. 🚀
## 💡 Features
- 🐍 Pythonic Code Style: Embraces the habits of Python developers, providing a Pythonic SDK calling approach that puts everything within your grasp with a single `pne.chat` function encapsulating all essential functionalities.
- 🧠 Model Compatibility: Supports nearly all types of large models on the market and allows for easy customization to meet specific needs.
- 🕵️‍♂️ Diverse Agents: Offers various types of Agents, such as WebAgent, ToolAgent, and CodeAgent, capable of planning, reasoning, and acting to handle complex problems. Atomized components such as the Planner simplify the development process.
- 🔗 Low-Cost Integration: Effortlessly integrates tools from different frameworks like LangChain, significantly reducing integration costs.
- 🔨 Functions as Tools: Converts any Python function directly into a tool usable by Agents, simplifying tool creation and usage (see the sketch after this list).
- 🪝 Lifecycle and Hooks: Provides a wealth of Hooks and comprehensive lifecycle management, allowing the insertion of custom code at various stages of Agents, Tools, and LLMs.
- 💻 Terminal Integration: Easily integrates application terminals, with built-in client support, offering rapid debugging capabilities for prompts.
- ⏱️ Prompt Caching: Offers a caching mechanism for LLM prompts to reduce repetitive work and enhance development efficiency.
- 🤖 Powerful OpenAI Wrapper: With pne, you no longer need to use the openai SDK; the core functionality can be replaced with `pne.chat`, which provides enhanced features to simplify development.
- 🧰 Streamlit Component Integration: Quickly build prototypes with many out-of-the-box examples and reusable Streamlit components.
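To make the Functions as Tools idea concrete, here is a minimal sketch. It assumes `pne.ToolAgent` accepts plain Python functions as tools, as the feature list describes; `add_numbers` is a hypothetical tool written for this example, not part of the library.

```python
import promptulate as pne


def add_numbers(a: float, b: float) -> float:
    """Add two numbers and return the sum."""
    # The signature and docstring tell the Agent what this tool does.
    return a + b


# A sketch, not the definitive API: the agent plans, reasons, and calls
# the plain Python function as a tool when the question requires it.
agent = pne.ToolAgent(tools=[add_numbers])
answer = agent.run("What is 17.5 plus 24.3?")
print(answer)
```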
The following diagram shows the core architecture of promptulate:

The core concept of Promptulate is to provide a simple, Pythonic, and efficient way to build AI applications, which means you don't need to spend a lot of time learning the framework. We hope `pne.chat()` can do most of the work, so that you can build any AI application with just a few lines of code.
Below, `pne` stands for Promptulate; it is the project's nickname. The `p` and `e` represent the beginning and end of Promptulate, respectively, and `n` stands for 9, the number of letters between `p` and `e`.
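As a first taste, a minimal `pne.chat()` call looks like this (a sketch assuming an OpenAI key is available via the `OPENAI_API_KEY` environment variable):

```python
import promptulate as pne

# One call covers prompt construction, the model request, and response parsing.
response: str = pne.chat(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a one-line joke."}],
)
print(response)
```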
## Supported Base Models
Promptulate integrates the capabilities of litellm, supporting nearly all types of large models on the market, including but not limited to the following models:
| Provider | Completion | Streaming | Async Completion | Async Streaming | Async Embedding | Async Image Generation |
| --- | --- | --- | --- | --- | --- | --- |
| openai | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| azure | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| aws - sagemaker | ✅ | ✅ | ✅ | ✅ | ✅ | |
| aws - bedrock | ✅ | ✅ | ✅ | ✅ | ✅ | |
| google - vertex_ai [Gemini] | ✅ | ✅ | ✅ | ✅ | | |
| google - palm | ✅ | ✅ | ✅ | ✅ | | |
| google AI Studio - gemini | ✅ | | ✅ | | | |
| mistral ai api | ✅ | ✅ | ✅ | ✅ | ✅ | |
| cloudflare AI Workers | ✅ | ✅ | ✅ | ✅ | | |
| cohere | ✅ | ✅ | ✅ | ✅ | ✅ | |
| anthropic | ✅ | ✅ | ✅ | ✅ | | |
| huggingface | ✅ | ✅ | ✅ | ✅ | ✅ | |
| replicate | ✅ | ✅ | ✅ | ✅ | | |
| together_ai | ✅ | ✅ | ✅ | ✅ | | |
| openrouter | ✅ | ✅ | ✅ | ✅ | | |
| ai21 | ✅ | ✅ | ✅ | ✅ | | |
| baseten | ✅ | ✅ | ✅ | ✅ | | |
| vllm | ✅ | ✅ | ✅ | ✅ | | |
| nlp_cloud | ✅ | ✅ | ✅ | ✅ | | |
| aleph alpha | ✅ | ✅ | ✅ | ✅ | | |
| petals | ✅ | ✅ | ✅ | ✅ | | |
| ollama | ✅ | ✅ | ✅ | ✅ | | |
| deepinfra | ✅ | ✅ | ✅ | ✅ | | |
| perplexity-ai | ✅ | ✅ | ✅ | ✅ | | |
| Groq AI | ✅ | ✅ | ✅ | ✅ | | |
| anyscale | ✅ | ✅ | ✅ | ✅ | | |
| voyage ai | | | | | ✅ | |
| xinference [Xorbits Inference] | | | | | ✅ | |
The powerful model support of pne allows you to easily call any third-party model.

Now let's see how to run a local llama model from ollama with pne:
```python
import promptulate as pne

resp: str = pne.chat(
    model="ollama/llama2",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
)
```
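Note that this assumes an ollama server is running locally with the model available (for example, after `ollama run llama2`); pne routes `ollama/...` model names to the local endpoint via litellm.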
🌟 2024.5.14: OpenAI launched GPT-4o, their newest "omni" model, offering improved speed and pricing compared to GPT-4 Turbo. You can use its multimodal capabilities in any of your promptulate applications!
```python
import promptulate as pne

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {
                "type": "image_url",
                "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
            },
        ],
    }
]
resp = pne.chat(model="gpt-4o", messages=messages)
print(resp)
```
Use the `provider/model_name` format to specify the model, and you can easily call any third-party model.
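For instance, switching providers only changes the model string. The sketch below assumes an Anthropic key is set via `ANTHROPIC_API_KEY` and uses a Claude model name as routed by litellm:

```python
import promptulate as pne

# Same pne.chat interface, different provider: litellm resolves the
# provider/model_name string to the Anthropic API.
resp: str = pne.chat(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=[{"content": "Say hello in French.", "role": "user"}],
)
print(resp)
```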
For more models, please visit the litellm documentation.
You can also see how to use pne.chat() in the Getting Started/Official Documentation.
## 📗 Related Documentation
- Getting Started/Official Documentation
- Current Development Plan
- Contributing/Developer's Manual
- Frequently Asked Questions
- PyPI Repository
## 📝 Examples

- Build a chatbot using pne + Streamlit to chat with a GitHub repo
- Build a math application with an agent [Streamlit, ToolAgent]