# L2M2: A Simple Python LLM Manager 💬👍

L2M2 ("LLM Manager" → "LLMM" → "L2M2") is a tiny and very simple LLM manager for Python that exposes lots of models through a unified API.

## Advantages
- Simple: Completely unified interface – just swap out the model name.
- Tiny: Only one external dependency (aiohttp). No BS dependency graph.
- Private: Compatible with self-hosted models on your own infrastructure.
- Fast: Fully asynchronous and non-blocking if concurrent calls are needed.
## Features
- 70+ supported models from popular hosted providers, updated regularly.
- Support for self-hosted models via Ollama.
- Manageable chat memory – even across multiple models or with concurrent memory streams.
- JSON mode
- Prompt loading tools
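As a sketch of how a JSON-mode response might be consumed, here is a small stdlib helper. Note that the `json_mode` keyword shown in the comments is an assumption on my part, not a confirmed L2M2 parameter — check the usage guide for the exact API:

```python
import json


def parse_json_response(text: str):
    """Best-effort parse of a JSON-mode completion; returns None if it isn't valid JSON."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return None


# Hypothetical usage with L2M2 (the `json_mode` keyword is an assumption;
# see the usage guide for the real parameter name):
#
# from l2m2.client import LLMClient
# client = LLMClient()
# raw = client.call(model="gpt-5", prompt="List three colors as a JSON array", json_mode=True)
# colors = parse_json_response(raw)
```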
## Supported API-based Models
L2M2 supports <!--start-model-count-->71<!--end-model-count--> models from <!--start-prov-list-->OpenAI, Google, Anthropic, Cohere, Mistral, Groq, Replicate, Cerebras, and Moonshot AI<!--end-prov-list-->. The full list of supported models can be found here.
## Usage (Full Docs)
### Requirements
- Python >= 3.10
- At least one valid API key for a supported provider, or a working Ollama installation (their docs).
### Installation

```bash
pip install l2m2
```
### Environment Setup

If you plan to use an API-based model, make sure at least one of the following environment variables is set so that L2M2 can automatically activate the corresponding provider.
| Provider | Environment Variable |
| ----------------------- | --------------------- |
| OpenAI | OPENAI_API_KEY |
| Anthropic | ANTHROPIC_API_KEY |
| Cohere | CO_API_KEY |
| Google | GOOGLE_API_KEY |
| Groq | GROQ_API_KEY |
| Replicate | REPLICATE_API_TOKEN |
| Mistral (La Plateforme) | MISTRAL_API_KEY |
| Cerebras | CEREBRAS_API_KEY |
| Moonshot AI | MOONSHOT_API_KEY |
Otherwise, ensure Ollama is running – by default L2M2 looks for it at http://localhost:11434, but this can be configured.
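L2M2 detects these variables on its own; the snippet below is just an illustrative pre-flight check that reports which providers have keys set (the lowercase provider names are my own labels, not L2M2 identifiers):

```python
import os

# Provider -> environment variable, mirroring the table above.
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "cohere": "CO_API_KEY",
    "google": "GOOGLE_API_KEY",
    "groq": "GROQ_API_KEY",
    "replicate": "REPLICATE_API_TOKEN",
    "mistral": "MISTRAL_API_KEY",
    "cerebras": "CEREBRAS_API_KEY",
    "moonshot": "MOONSHOT_API_KEY",
}


def active_providers(env=os.environ):
    """Return the providers whose API keys are present (and non-empty) in the environment."""
    return [provider for provider, var in PROVIDER_ENV_VARS.items() if env.get(var)]


# Example: warn early instead of failing on the first call.
# if not active_providers():
#     raise RuntimeError("No provider API keys found; set one of the variables above.")
```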
### Basic Usage

```python
from l2m2.client import LLMClient

client = LLMClient()
response = client.call(model="gpt-5", prompt="Hello world")
print(response)
```
For the full usage guide, including memory, asynchronous usage, local models, JSON mode, and more, see Usage Guide.
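If you need concurrent calls, one minimal approach (independent of L2M2's own async support, which the usage guide covers) is to fan blocking `client.call` invocations out to threads. The helper below is my own sketch, not part of L2M2:

```python
import asyncio


async def gather_calls(call, model, prompts):
    """Run several blocking call(model=..., prompt=...) invocations concurrently.

    Results are returned in the same order as `prompts`.
    """
    tasks = [asyncio.to_thread(call, model=model, prompt=p) for p in prompts]
    return await asyncio.gather(*tasks)


# Usage with L2M2 (requires a valid API key for the chosen provider):
#
# from l2m2.client import LLMClient
# client = LLMClient()
# answers = asyncio.run(gather_calls(client.call, "gpt-5", ["Hello", "Goodbye"]))
```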
## Planned Features
- Streaming responses
- Support for AWS Bedrock, Azure OpenAI, and Google Vertex APIs.
- Support for structured outputs where available (OpenAI, Google, Cohere, Groq, Mistral, Cerebras)
- Response metadata customization: e.g., token usage, cost, etc.
- Support for other self-hosted providers (vLLM and GPT4All) beyond Ollama
- Support for batch APIs where available (OpenAI, Anthropic, Google, Groq, Mistral)
- Support for embeddings as well as inference
- Port this project over to TypeScript
- ...etc.
## Contributing

Contributions are welcome! Please follow the contribution guide below.

- Requirements
- Setup
  - Clone this repository and create a Python virtual environment.
  - Install dependencies: `make init`.
  - Create a feature branch and an issue with a description of the feature or bug fix.
- Develop
  - Run lint, typecheck, and tests: `make` (`make lint`, `make type`, and `make test` can also be run individually).
  - Generate test coverage: `make coverage`.
  - If you've updated the supported models, run `make update-docs` to reflect those changes in the README.
  - Make sure to run `make tox` regularly to backtest your changes back to Python 3.10 (you'll need to have all versions of Python between 3.10 and 3.14 installed to do this locally; if you don't, this project's CI will still backtest on all of those versions once you push your changes).
- Integration Test
  - Create a `.env` file at the project root with your API keys for all of the supported providers (`OPENAI_API_KEY`, etc.).
  - Integration test your local changes by running `make itl` ("integration test local").
  - Once your changes are ready to build, run `make build` (make sure you uninstall any existing distributions first).
  - Run the integration tests against the distribution with `make itest`.
- Contribute
  - Create a PR and ping me for a review.
  - Merge!
## Contact

If you have requests, suggestions, or any other questions about l2m2, please shoot me a note at pierce@kelaita.com, open an issue on GitHub, or DM me on Slack.