# LangDict

Build complex LLM applications with a Python dictionary.
LangDict is a framework for building agents (compound AI systems) using nothing more than specifications written as Python dictionaries. The framework is simple, intuitive, and ready for production use.

A prompt specification reads like a feature specification, and it is all you need to build an LLM Module. LangDict was created with the design philosophy that building LLM applications should be as simple as possible: build your own LLM application with minimal knowledge of the framework.
<p align="center"> <img src="https://github.com/LangDict/langdict/blob/main/images/module.png" style="inline" width=800> </p>

An Agent can be built by connecting multiple Modules. LangDict follows the intuitive interface, modularity, extensibility, and reusability of PyTorch's `nn.Module`. If you have experience developing neural networks with PyTorch, you will understand how to use LangDict right away.
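To make the `nn.Module` analogy concrete, here is a toy sketch of the composition pattern being described (an illustration only, not LangDict's actual implementation): calling a module runs its `forward()`, and a parent module composes sub-modules inside its own `forward()`.

```python
from typing import Any, Dict


class Module:
    """Toy base class: calling a module dispatches to forward(), nn.Module-style."""

    def __call__(self, inputs: Dict[str, Any]) -> Any:
        return self.forward(inputs)

    def forward(self, inputs: Dict[str, Any]) -> Any:
        raise NotImplementedError


class Upper(Module):
    def forward(self, inputs):
        return {"text": inputs["text"].upper()}


class Exclaim(Module):
    def forward(self, inputs):
        return {"text": inputs["text"] + "!"}


class Pipeline(Module):
    """A parent module built by connecting two sub-modules."""

    def __init__(self):
        self.upper = Upper()      # sub-module
        self.exclaim = Exclaim()  # sub-module

    def forward(self, inputs):
        return self.exclaim(self.upper(inputs))


print(Pipeline()({"text": "hello"})["text"])  # HELLO!
```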
## Modules
| Task | Name | Code |
| ---- | ---- | ---- |
| Ranking | RankGPT | Code |
| Compression | TextCompressor (LLMLingua-2) | Code |
| RAG | SELF-RAG | Code |
## Key Features
<details>
<summary>LLM Application framework for simple, intuitive, specification-based development</summary>

```python
chitchat = LangDict.from_dict({
    "messages": [
        ("system", "You are a helpful AI bot. Your name is {name}."),
        ("human", "Hello, how are you doing?"),
        ("ai", "I'm doing well, thanks!"),
        ("human", "{user_input}"),
    ],
    "llm": {
        "model": "gpt-4o-mini",
        "max_tokens": 200
    },
    "output": {
        "type": "string"
    }
})

# Each format placeholder is a key of the input dictionary
chitchat({
    "name": "LangDict",
    "user_input": "What is your name?"
})
```
</details>
<details>
<summary>Simple interface (invoke / stream / batch)</summary>

```python
rag = RAG(docs)
single_inputs = {
    "conversation": [{"role": "user", "content": "How old is Obama?"}]
}

# invoke
rag(single_inputs)

# stream
rag(single_inputs, stream=True)

# batch
batch_inputs = [{ ... }, { ... }, ...]
rag(batch_inputs, batch=True)
```
</details>
<details>
<summary>Modularity: extensibility, modifiability, reusability</summary>

```python
class RAG(Module):

    def __init__(self, docs: List[str]):
        super().__init__()
        self.query_rewrite = LangDictModule.from_dict({ ... })  # Module
        self.search = Retriever(docs=docs)                      # Module
        self.answer = LangDictModule.from_dict({ ... })         # Module

    def forward(self, inputs: Dict):
        query_rewrite_result = self.query_rewrite({
            "conversation": inputs["conversation"],
        })
        doc = self.search(query_rewrite_result)
        return self.answer({
            "conversation": inputs["conversation"],
            "context": doc,
        })
```
</details>
<details>
<summary>Easy to change trace options (Console, Langfuse, LangSmith)</summary>

```python
# Apply a trace option to all modules
rag = RAG(docs)

# Console
rag.trace(backend="console")

# Langfuse
rag.trace(backend="langfuse")

# LangSmith
rag.trace(backend="langsmith")
```
</details>
<details>
<summary>Easy to change hyper-parameters (prompt, parameters)</summary>

```python
rag = RAG(docs)
rag.save_json("rag.json")
# Modify the "rag.json" file
rag.load_json("rag.json")
```
</details>
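The save/load cycle above means hyper-parameters can be edited as plain JSON between `save_json()` and `load_json()`. As a sketch of that round trip using only the stdlib `json` module (the spec dict below is hypothetical; the schema `save_json()` actually writes may differ):

```python
import json

# Hypothetical module spec; the real schema written by save_json() may differ.
spec = {
    "llm": {"model": "gpt-4o-mini", "max_tokens": 200},
    "output": {"type": "string"},
}

# save_json() / load_json() round trip, sketched with json.dumps / json.loads
serialized = json.dumps(spec, indent=2)
loaded = json.loads(serialized)

# Change a hyper-parameter before loading it back into the module
loaded["llm"]["max_tokens"] = 500
print(loaded["llm"]["max_tokens"])  # 500
```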
## Quick Start

Install LangDict:

```shell
$ pip install langdict
```
## Example

### LangDict

- Build an LLM Module from a specification.
```python
from langdict import LangDict

_SPECIFICATION = {
    "messages": [
        ("system", "You are a helpful AI bot. Your name is {name}."),
        ("human", "Hello, how are you doing?"),
        ("ai", "I'm doing well, thanks!"),
        ("human", "{user_input}"),
    ],
    "llm": {
        "model": "gpt-4o-mini",
        "max_tokens": 200
    },
    "output": {
        "type": "string"
    }
}

chitchat = LangDict.from_dict(_SPECIFICATION)
chitchat({
    "name": "LangDict",
    "user_input": "What is your name?"
})
>>> 'My name is LangDict. How can I assist you today?'
```
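The `{name}` and `{user_input}` placeholders in the message templates are filled from the keys of the input dictionary, much like Python's `str.format` with keyword arguments. A minimal sketch of that mapping (plain `str.format`, not LangDict's internal templating):

```python
# One message template from the specification above
template = "You are a helpful AI bot. Your name is {name}."

# The same input dictionary passed to chitchat(); keys a template
# does not use (here, "user_input") are simply ignored.
inputs = {"name": "LangDict", "user_input": "What is your name?"}

print(template.format(**inputs))
# You are a helpful AI bot. Your name is LangDict.
```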
### Module

- Build an agent by connecting multiple Modules.
```python
from typing import Any, Dict, List

from langdict import Module, LangDictModule

_QUERY_REWRITE_SPECIFICATION = { ... }
_ANSWER_SPECIFICATION = { ... }


class RAG(Module):

    def __init__(self, docs: List[str]):
        super().__init__()
        self.query_rewrite = LangDictModule.from_dict(_QUERY_REWRITE_SPECIFICATION)
        self.search = SimpleRetriever(docs=docs)  # Module
        self.answer = LangDictModule.from_dict(_ANSWER_SPECIFICATION)

    def forward(self, inputs: Dict[str, Any]):
        query_rewrite_result = self.query_rewrite({
            "conversation": inputs["conversation"],
        })
        doc = self.search(query_rewrite_result)
        return self.answer({
            "conversation": inputs["conversation"],
            "context": doc,
        })


rag = RAG(docs)  # docs: List[str] of documents to search over
inputs = {
    "conversation": [{"role": "user", "content": "How old is Obama?"}]
}

rag(inputs)
>>> 'Barack Obama was born on August 4, 1961. As of now, in September 2024, he is 63 years old.'
```
- Streaming

```python
rag = RAG(docs)

# Stream
for token in rag(inputs, stream=True):
    print(f"token > {token}")
>>>
token > Bar
token > ack
token > Obama
token > was
token > born
token > on
token > August
token >
token > 4
...
```
- Get observability with a single line of code.

```python
rag = RAG(docs)

# Trace
rag.trace(backend="console")
```
- Save and load the module as a JSON file.

```python
rag = RAG(docs)
rag.save_json("rag.json")
rag.load_json("rag.json")
```
## Dependencies

LangDict requires the following:

- LangChain - LangDict consists of PromptTemplate + LLM + Output Parser.
  - `langchain`
  - `langchain-core`
- LiteLLM - Call 100+ LLM APIs in OpenAI format.

### Optional

- Langfuse - If you use Langfuse as the trace backend, you need to install it separately.