
Promptulate

πŸš€Lightweight Large language model automation and Autonomous Language Agents development framework. Build your LLM Agent Application in a pythonic way!


<p align="center"> <img src="./docs/public/banner.png" alt="promptulate" style="border-radius: 15px;"/> </p> <p align="center"> <a target="_blank" href=""> <img src="https://img.shields.io/github/license/Undertone0809/promptulate.svg?style=flat-square" /> </a> <a target="_blank" href=''> <img src="https://img.shields.io/github/release/Undertone0809/promptulate/all.svg?style=flat-square"/> </a> <a href="https://pypi.org/project/promptulate" target="_blank"> <img src="https://img.shields.io/pypi/pyversions/promptulate.svg?color=%2334D058" alt="Supported Python versions"> </a> <a href="https://t.me/zeeland0809" target="_blank"> <img src="https://img.shields.io/badge/Telegram-join%20chat-2CA5E0?logo=telegram&logoColor=white" alt="chat on Telegram"> </a> <a target="_blank" href=''> <img src="https://static.pepy.tech/personalized-badge/promptulate?period=month&units=international_system&left_color=grey&right_color=blue&left_text=Downloads/Week"/> </a> </p>

English δΈ­ζ–‡

Overview

Promptulate is an AI Agent application development framework crafted by Cogit Lab, which offers developers an extremely concise and efficient way to build Agent applications through a Pythonic development paradigm. The core philosophy of Promptulate is to draw on the wisdom of the open-source community, incorporating the highlights of various development frameworks to lower the barrier to entry and build consensus among developers. With Promptulate, you can manipulate components like LLM, Agent, Tool, RAG, etc., with the most succinct code, as most tasks can be completed with just a few lines. πŸš€

πŸ’‘ Features

  • 🐍 Pythonic Code Style: Embraces the habits of Python developers, providing a Pythonic SDK calling approach, putting everything within your grasp with just one pne.chat function to encapsulate all essential functionalities.
  • 🧠 Model Compatibility: Supports nearly all types of large models on the market and allows for easy customization to meet specific needs.
  • πŸ•΅οΈβ€β™‚οΈ Diverse Agents: Offers various types of Agents, such as WebAgent, ToolAgent, CodeAgent, etc., capable of planning, reasoning, and acting to handle complex problems. Atomize the Planner and other components to simplify the development process.
  • πŸ”— Low-Cost Integration: Effortlessly integrates tools from different frameworks like LangChain, significantly reducing integration costs.
  • πŸ”¨ Functions as Tools: Converts any Python function directly into a tool usable by Agents, simplifying the tool creation and usage process.
  • πŸͺ Lifecycle and Hooks: Provides a wealth of Hooks and comprehensive lifecycle management, allowing the insertion of custom code at various stages of Agents, Tools, and LLMs.
  • πŸ’» Terminal Integration: Easily integrates application terminals, with built-in client support, offering rapid debugging capabilities for prompts.
  • ⏱️ Prompt Caching: Offers a caching mechanism for LLM Prompts to reduce repetitive work and enhance development efficiency.
  • πŸ€– Powerful OpenAI Wrapper: With pne, you no longer need to use the openai sdk, the core functions can be replaced with pne.chat, and provides enhanced features to simplify development difficulty.
  • 🧰 Streamlit Component Integration: Quickly prototype and provide many out-of-the-box examples and reusable streamlit components.

The following diagram shows the core architecture of promptulate:

promptulate-architecture

The core concept of Promptulate is to provide a simple, Pythonic, and efficient way to build AI applications, which means you don't need to spend much time learning the framework. We hope pne.chat() can do most of the work, letting you build any AI application with just a few lines of code.

Below, pne stands for Promptulate; it is the project's nickname. The p and e are the first and last letters of Promptulate, and n stands for 9, shorthand for the nine letters between them.

Supported Base Models

Promptulate integrates the capabilities of litellm, supporting nearly all types of large models on the market, including but not limited to the following models:

| Provider | Completion | Streaming | Async Completion | Async Streaming | Async Embedding | Async Image Generation |
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| openai | βœ… | βœ… | βœ… | βœ… | βœ… | βœ… |
| azure | βœ… | βœ… | βœ… | βœ… | βœ… | βœ… |
| aws - sagemaker | βœ… | βœ… | βœ… | βœ… | βœ… | |
| aws - bedrock | βœ… | βœ… | βœ… | βœ… | βœ… | |
| google - vertex_ai [Gemini] | βœ… | βœ… | βœ… | βœ… | | |
| google - palm | βœ… | βœ… | βœ… | βœ… | | |
| google AI Studio - gemini | βœ… | | βœ… | | | |
| mistral ai api | βœ… | βœ… | βœ… | βœ… | βœ… | |
| cloudflare AI Workers | βœ… | βœ… | βœ… | βœ… | | |
| cohere | βœ… | βœ… | βœ… | βœ… | βœ… | |
| anthropic | βœ… | βœ… | βœ… | βœ… | | |
| huggingface | βœ… | βœ… | βœ… | βœ… | βœ… | |
| replicate | βœ… | βœ… | βœ… | βœ… | | |
| together_ai | βœ… | βœ… | βœ… | βœ… | | |
| openrouter | βœ… | βœ… | βœ… | βœ… | | |
| ai21 | βœ… | βœ… | βœ… | βœ… | | |
| baseten | βœ… | βœ… | βœ… | βœ… | | |
| vllm | βœ… | βœ… | βœ… | βœ… | | |
| nlp_cloud | βœ… | βœ… | βœ… | βœ… | | |
| aleph alpha | βœ… | βœ… | βœ… | βœ… | | |
| petals | βœ… | βœ… | βœ… | βœ… | | |
| ollama | βœ… | βœ… | βœ… | βœ… | | |
| deepinfra | βœ… | βœ… | βœ… | βœ… | | |
| perplexity-ai | βœ… | βœ… | βœ… | βœ… | | |
| Groq AI | βœ… | βœ… | βœ… | βœ… | | |
| anyscale | βœ… | βœ… | βœ… | βœ… | | |
| voyage ai | | | | | βœ… | |
| xinference [Xorbits Inference] | | | | | βœ… | |

The powerful model support of pne allows you to easily call any third-party model.

Now let's see how to run a local Llama model served by Ollama with pne:

```python
import promptulate as pne

resp: str = pne.chat(
    model="ollama/llama2",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
)
print(resp)
```

🌟 2024.5.14 OpenAI launched GPT-4o, its newest "omni" model, offering improved speed and pricing compared to GPT-4 Turbo.

You can use its multimodal capabilities in any of your promptulate applications!

```python
import promptulate as pne

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {
                "type": "image_url",
                "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
            },
        ],
    }
]
resp = pne.chat(model="gpt-4o", messages=messages)
print(resp)
```

Use the `provider/model_name` format to call a model; this makes it easy to wire up any third-party model.
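The routing convention is simple enough to sketch in a few lines: the id splits on the first `/` into a provider prefix and a model name. This is an illustrative sketch, not litellm's or promptulate's actual resolver; the `split_model_id` name and the assumption that bare ids (like `gpt-4o`) default to the openai provider are ours.

```python
def split_model_id(model: str) -> tuple:
    """Split a litellm-style model id 'provider/model_name' into its parts.

    Assumption: ids without a slash default to the 'openai' provider,
    mirroring how bare ids like 'gpt-4o' are commonly routed.
    """
    if "/" in model:
        provider, _, name = model.partition("/")
        return provider, name
    return "openai", model
```

So `"ollama/llama2"` resolves to the `ollama` provider with model `llama2`, matching the local example shown earlier.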

For more models, please visit the litellm documentation.

You can also see how to use pne.chat() in the Getting Started/Official Documentation.

πŸ“— Related Documentation

πŸ“ Examples
