Promptflowx
promptflowx is a simple and powerful tool for building prompt-driven workflows.
This is the TypeScript version of promptflow. Join us in making prompt flow better by participating in discussions, opening issues, and submitting PRs.
Prompt flow is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.
With prompt flow, you will be able to:
- Create and iteratively develop flows
  - Create executable flows that link LLMs, prompts, JavaScript code, and more together.
  - Debug and iterate your flows, especially the interaction with LLMs, with ease.
- Evaluate flow quality and performance
  - Evaluate your flow's quality and performance with larger datasets.
- Streamline the development cycle for production
  - Deploy your flow to the serving platform of your choice, or easily integrate it into your app's code base.
Feature comparison
<table style="width: 100%;"> <tr> <th align="center">Feature</th> <th align="center">promptflowx</th> <th align="center">promptflow</th> </tr> <tr> <td align="center">Programming Approach</td> <td align="center">TypeScript</td> <td align="center">Python Code</td> </tr> <tr> <td align="center">IDE</td> <td align="center"><a href="http://github.com/10cl/chatdev">ChatDev</a></td> <td align="center">VS Code</td> </tr> <tr> <td align="center">WorkFlow</td> <td align="center">✅</td> <td align="center">✅</td> </tr> <tr> <td align="center">Supported Context</td> <td align="center">✅</td> <td align="center">❌</td> </tr> <tr> <td align="center">One-click Deployment</td> <td align="center">✅</td> <td align="center">❌</td> </tr> </table>

Installation
To get started quickly, you can use a pre-built development environment. Click one of the buttons below to open promptflowx in the browser extension, then continue with this README!
<a href="https://chrome.google.com/webstore/detail/chatdev-visualize-your-ai/dopllopmmfnghbahgbdejnkebfcmomej?utm_source=github"><img src="https://github.com/10cl/promptflowx/blob/main/screenshots/chrome-logo.png" width="200" alt="Get ChatDev for Chromium"></a> <a href="https://microsoftedge.microsoft.com/addons/detail/ceoneifbmcdiihmgfjeodiholmbpmibm?utm_source=github"><img src="https://github.com/10cl/promptflowx/blob/main/screenshots/edge-logo.png" width="160" alt="Get ChatDev for Microsoft Edge"></a>
More details: https://github.com/10cl/chatdev?tab=readme-ov-file#-installation
If you want to get started in your local environment, first ensure you have a Node.js environment, then install the package:

```shell
npm install promptflowx
```
Quick Start ⚡
Create a chatbot with prompt flow
Create a folder named my_chatbot and initialize a prompt flow (flow.dag.yaml) from a chat template like:
```yaml
desc: "ChatBot Template"
outputs:
  reference: ${ChatBot_Template}
nodes:
  - name: ChatBot_Template
    source:
      code: "{intro}, we are chatting. I say to you: {prompt}. what you might say?"
    inputs:
      prompt: ${inputs.input_text}
      intro: "I want you to play a text-based adventure game. I play a character in this text-based adventure game."
```
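The `{intro}` and `{prompt}` placeholders in the node's `code` are filled from its `inputs` before the node runs. A minimal sketch of the idea, assuming simple single-brace substitution (`fillTemplate` is a hypothetical helper; the real substitution is handled internally by promptflowx):

```typescript
// Hypothetical helper illustrating single-brace placeholder substitution.
function fillTemplate(code: string, inputs: Record<string, string>): string {
  // Replace each {key} with the matching input value (empty string if missing).
  return code.replace(/\{(\w+)\}/g, (_match, key) => inputs[key] ?? "");
}

const rendered = fillTemplate(
  "{intro}, we are chatting. I say to you: {prompt}. what you might say?",
  { intro: "I want you to play a text-based adventure game", prompt: "Hello." }
);
// rendered begins with the intro text, followed by the chat framing.
```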
Set up a connection for your LLM API
For LLM requests, establish a connection by defining your own request function; each node will call it to reach the API. You can adjust the node's Context or other behavior here:
```typescript
import axios from 'axios';
// Node type from the promptflowx package's typings.
import { PromptFlowNode } from 'promptflowx';

export async function nodeRequest(node: PromptFlowNode, prompt: string): Promise<string> {
  try {
    console.log("\n>>>>>>>>>>> Prompt START >>>>>>>>>>>>>>>>\n" + prompt + "\n>>>>>>>>>>> Prompt END >>>>>>>>>>>>>>>>\n");
    // Replace this placeholder endpoint with your LLM provider's API.
    const response = await axios.get('https://api.example.com/data');
    return response.data.data;
  } catch (error) {
    // Handle errors that occur during fetching.
    console.error('Error fetching data from LLM API:', error);
    throw error; // You can choose to throw the error or return a default value.
  }
}
```
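While wiring up a flow, it can help to swap in an offline stub for `nodeRequest` so no real API is called. A hedged sketch (the node shape is simplified here to just a `name` field):

```typescript
// Hypothetical offline stub: echoes the prompt, tagged with the node name,
// so each step of the flow is traceable without network access.
async function mockNodeRequest(node: { name: string }, prompt: string): Promise<string> {
  return `[${node.name}] ${prompt}`;
}
```

Passing it as `request: mockNodeRequest` in the context lets you dry-run the flow's wiring before connecting a real LLM.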
Chat with your flow
In the my_chatbot folder, there's a flow.dag.yaml file that outlines the flow, including inputs/outputs, nodes, connections, and the LLM model.
Interact with your chatbot by executing the following code:
```typescript
import fs from 'fs';
import path from 'path';

const yaml = fs.readFileSync(path.join(__dirname, "flow.dag.yaml"), "utf8");
const context = {
  /* define your own API here */
  promptflowx: {
    libs: await promptflowx.buildLib(yaml, __dirname),
    request: nodeRequest,
  }
} as Context;

await promptflowx.execute(context, yaml, 'Hello.');
```
Next step: continue with the Tutorial 👇 section to delve deeper into prompt flow.
Tutorial 🏃‍♂️
Prompt flow is a tool designed for building high-quality LLM apps. The development process in prompt flow follows these steps: develop a flow, improve its quality, and deploy it to production.
Develop your own LLM apps
Browser Extension
We also offer a browser extension (a flow designer) for an interactive flow-development experience with a UI.

You can install it from the <a href="https://chrome.google.com/webstore/detail/chatdev-visualize-your-ai/dopllopmmfnghbahgbdejnkebfcmomej?utm_source=github">chrome store</a>.
Context
Each node's JavaScript runs within the flow's global Context scope. For example, in ChatDev we set window as the Context scope:
```typescript
await promptflowx.execute(window/*Context*/, yaml, 'Hello.');
```
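Outside the browser there is no `window`, so a plain object can serve as the Context scope instead. A sketch, assuming your nodes only read values you put there (the field names below are illustrative):

```typescript
// Illustrative Context object for a Node.js environment.
const context: Record<string, unknown> = {
  userName: "Alice",        // any value a node's JavaScript may read
  history: [] as string[],  // e.g. accumulated chat turns
};
// await promptflowx.execute(context as any /*Context*/, yaml, 'Hello.');
```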
Templates
A promptflowx template is a string that contains any number of template tags. Tags are indicated by the double mustaches that surround them. {{person}} is a tag, as is {{#person}}. In both examples we refer to person as the tag's key. There are several types of tags available in promptflowx, described below.
Variables
The most basic tag type is a simple variable. A {{name}} tag renders the value of the name key in the current context. If there is no such key, nothing is rendered.
All variables are HTML-escaped by default. If you want to render unescaped HTML, use the triple mustache: {{{name}}}. You can also use & to unescape a variable.
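"HTML-escaped" means characters with special meaning in HTML are replaced with entities. A minimal sketch of the substitution (engines may escape a slightly different character set):

```typescript
// Minimal HTML escaper, for illustration only.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")   // must run first so later entities aren't double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

escapeHtml("<b>GitHub</b>"); // → "&lt;b&gt;GitHub&lt;/b&gt;"
```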
View:

```json
{
  "name": "Chris",
  "company": "<b>GitHub</b>"
}
```

Template:

```
* {{name}}
* {{age}}
* {{company}}
* {{{company}}}
* {{&company}}
{{=<% %>=}}
* {{company}}
<%={{ }}=%>
```

Output:

```
* Chris
*
* &lt;b&gt;GitHub&lt;/b&gt;
* <b>GitHub</b>
* <b>GitHub</b>
* {{company}}
```
JavaScript's dot notation may be used to access keys that are properties of objects in a view.
View:

```json
{
  "name": {
    "first": "Michael",
    "last": "Jackson"
  },
  "age": "RIP"
}
```

Template:

```
* {{name.first}} {{name.last}}
* {{age}}
```

Output:

```
* Michael Jackson
* RIP
```
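Dot-notation lookup walks the view object one key at a time. A sketch of how such engines typically resolve it (promptflowx's internals may differ):

```typescript
// Resolve "a.b.c" against a nested view object; undefined if any step is missing.
function lookup(view: Record<string, any>, path: string): unknown {
  return path.split(".").reduce<any>((obj, key) => (obj == null ? undefined : obj[key]), view);
}

const view = { name: { first: "Michael", last: "Jackson" }, age: "RIP" };
lookup(view, "name.first");  // → "Michael"
lookup(view, "name.middle"); // → undefined
```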
Sections
Sections render blocks of text zero or more times, depending on the value of the key in the current context.
A section begins with a pound and ends with a slash. That is, {{#person}} begins a person section, while {{/person}} ends it. The text between the two tags is referred to as that section's "block".
The behavior of the section is determined by the value of the key.
False Values or Empty Lists
If the person key does not exist, or exists and has a value of null, undefined, false, 0, or NaN, or is an empty string or an empty list, the block will not be rendered.
View:

```json
{
  "person": false
}
```

Template:

```
Shown.
{{#person}}
Never shown!
{{/person}}
```

Output:

```
Shown.
```
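The falsy rule above can be summarized in one predicate. A sketch (promptflowx's exact check may differ):

```typescript
// True when a section value should suppress its block.
function isFalsySection(value: unknown): boolean {
  return value === null
    || value === undefined
    || value === false
    || value === 0
    || (typeof value === "number" && Number.isNaN(value))
    || value === ""
    || (Array.isArray(value) && value.length === 0);
}

isFalsySection(false); // → true
isFalsySection([]);    // → true
isFalsySection([1]);   // → false
```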
Non-Empty Lists
If the person key exists and is not null, undefined, or false, and is not an empty list, the block will be rendered one or more times.
When the value is a list, the block is rendered once for each item in the list. The context of the block is set to the current item in the list for each iteration. In this way we can loop over collections.
View:

```json
{
  "stooges": [
    { "name": "Moe" },
    { "name": "Larry" },
    { "name": "Curly" }
  ]
}
```

Template:

```
{{#stooges}}
<b>{{name}}</b>
{{/stooges}}
```

Output:

```
<b>Moe</b>
<b>Larry</b>
<b>Curly</b>
```
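The iteration above can be sketched in a few lines: the block is rendered once per item, with that item as the local context. A real engine also handles escaping, nesting, and non-list values; this is illustration only:

```typescript
// Render a section block once per list item, substituting {{key}} tags
// from the current item.
function renderListSection(block: string, items: Array<Record<string, string>>): string {
  return items
    .map(item => block.replace(/\{\{(\w+)\}\}/g, (_m, key) => item[key] ?? ""))
    .join("");
}

renderListSection("<b>{{name}}</b>\n", [{ name: "Moe" }, { name: "Larry" }, { name: "Curly" }]);
// → "<b>Moe</b>\n<b>Larry</b>\n<b>Curly</b>\n"
```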