# Meridian - Graph-Powered Conversational AI

A graph-powered AI chat application integrating intelligent parallelization for advanced, aggregated conversational experiences. Built with Nuxt 3 & Python.
## Table of Contents
- ✨ Introduction
- 🌟 Key Features
- 🛠️ Technologies Used
- 🏗️ Production Deployment
- 🧑‍💻 Local Development
- 📄 API Documentation
- 🗺️ Project Structure
- 🤝 Contributing
- 🐛 Issues and Bug Reports
- 📄 License
## ✨ Introduction
Meridian is an open-source, graph-based platform for building, visualizing, and interacting with complex AI workflows. Instead of traditional linear chats, Meridian uses a visual canvas where you can connect different AI models, data sources, and logic blocks to create powerful and dynamic conversational agents.
This graph-based approach allows for sophisticated context management, branching conversations, and advanced execution patterns like parallel model querying and conditional routing. It provides both a powerful visual graph for building workflows and a clean, feature-rich chat interface for interacting with them.
<p align="center"> <img src="docs/imgs/main-canvas-view.png" alt="main-canvas-view"/> </p>
<p align="center"> <img src="docs/imgs/main-chat-view.png" alt="main-chat-view"/> </p>

## 🌟 Key Features
- **Visual Graph Canvas**: At its core, Meridian provides an interactive canvas where you can build, manage, and visualize AI workflows as interconnected nodes.
- **Modular Node System**:
  - **Input Nodes**: Provide context from various sources, including plain text (`Prompt`), local files (`Attachment`), and entire GitHub repositories (`GitHub`).
  - **Generator Nodes**: The processing units of the graph.
    - **Text-to-Text**: A standard Large Language Model (LLM) call.
    - **Parallelization**: Executes a prompt against multiple LLMs simultaneously and uses an aggregator model to synthesize the results into a single, comprehensive answer.
    - **Routing**: Dynamically selects the next node or model based on the input, enabling conditional logic in your workflows.
- **Integrated Chat & Graph Experience**:
  - A feature-rich chat interface that serves as a user-friendly view of the graph's execution.
  - The ability to create complex branching conversations that are naturally represented and manageable in the graph.
- **Rich Content & Tooling**:
  - Full Markdown support for text formatting.
  - LaTeX rendering for mathematical and scientific notation.
  - Syntax highlighting for over 200 languages in code blocks.
  - AI-powered Mermaid.js diagram generation for visualizing data and processes.
  - Deep GitHub integration to use code from repositories as context for the AI.
- **Execution & Orchestration Engine**:
  - Run entire graphs or specific sub-sections (e.g., all nodes upstream or downstream from a selected point).
  - A visual execution plan that shows the sequence of node processing in real time.
- **Flexible Model Backend**:
  - Powered by OpenRouter.ai, providing access to a vast array of proprietary and open-source models (from OpenAI, Anthropic, Google, Mistral, and more).
  - Granular control over model parameters at both global and per-canvas levels.
- **Enterprise-Ready Foundation**:
  - Secure authentication with support for OAuth (GitHub, Google) and standard username/password login.
  - Persistent and robust data storage using PostgreSQL for structured data and Neo4j for the graph engine.
  - Cost and token usage tracking for each model call, providing full transparency.
  - **Monitoring and Error Tracking**: Optional integration with Sentry for real-time performance monitoring and error tracking in both frontend and backend services.
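The Parallelization pattern from the node system above can be sketched in a few lines: the same prompt is fanned out to several models concurrently, then a single aggregator synthesizes the answers. This is an illustrative sketch only — `query_model`, `parallelize`, and the model names are hypothetical stand-ins, not Meridian's actual API:

```python
import asyncio

async def query_model(model: str, prompt: str) -> str:
    # Stand-in for a real LLM call (e.g. via OpenRouter).
    await asyncio.sleep(0)
    return f"{model}: answer to {prompt!r}"

async def parallelize(models: list[str], prompt: str, aggregator) -> str:
    # Fan the same prompt out to every model concurrently...
    answers = await asyncio.gather(*(query_model(m, prompt) for m in models))
    # ...then hand all answers to one aggregator to synthesize a single reply.
    return aggregator(answers)

def naive_aggregator(answers: list[str]) -> str:
    # A real aggregator would be another LLM call; joining suffices here.
    return "\n".join(answers)

result = asyncio.run(
    parallelize(["model-a", "model-b"], "What is Meridian?", naive_aggregator)
)
print(result)
```

In Meridian the aggregation step is itself a model call, so the synthesized answer reads as one coherent response rather than a concatenation.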
> [!TIP]
> A detailed overview of the features is available in the Features.md file.
## 🛠️ Technologies Used

- **Frontend**: Nuxt 3
- **Backend**: Python
## 🏗️ Production Deployment
Meridian offers multiple deployment options to suit different needs and environments. Choose the approach that best fits your infrastructure and requirements.
### Prerequisites

- Docker and Docker Compose installed on your machine
- `yq` (from Mike Farah) for TOML configuration processing
- Git (for cloning the repository)
### Deployment Options

#### Option 1: Quick Start with Pre-built Images (Recommended)

Use pre-built images from the GitHub Container Registry for the fastest deployment.

1. Clone the repository:

   ```bash
   git clone https://github.com/MathisVerstrepen/Meridian.git
   cd Meridian/docker
   ```

2. Create your configuration:

   ```bash
   cp config.example.toml config.toml
   ```

   Edit `config.toml` with your production settings. See the Configuration Guide for details.

3. Deploy with pre-built images:

   ```bash
   chmod +x run.sh
   ./run.sh prod -d
   ```

4. Access the application: open `http://localhost:3000` (or your configured port) in your web browser.
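To confirm the stack actually came up, you can poll the frontend port. The helper below is an illustrative snippet (not shipped with Meridian) that treats any HTTP response — even a redirect or error status — as proof the server answered:

```python
import urllib.error
import urllib.request

def is_up(url: str, timeout: float = 5.0) -> bool:
    # Any HTTP response (including 3xx/4xx statuses) means the server is up;
    # only a connection-level failure counts as "down".
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True
    except (urllib.error.URLError, OSError):
        return False

print(is_up("http://localhost:3000"))
```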
#### Option 2: Build from Source

Build images locally for customization or when pre-built images aren't suitable.

1. Clone and configure:

   ```bash
   git clone https://github.com/MathisVerstrepen/Meridian.git
   cd Meridian/docker
   cp config.example.toml config.toml
   # Edit config.toml with your settings
   ```

2. Deploy with local builds:

   ```bash
   chmod +x run.sh
   ./run.sh build -d
   ```

3. Force a rebuild (if needed):

   ```bash
   ./run.sh build --force-rebuild -d
   ```
### Essential Configuration

Before deploying, you must configure these critical settings in your `config.toml`:

#### Required Settings
```toml
[api]
# Get your API key from https://openrouter.ai/
MASTER_OPEN_ROUTER_API_KEY = "sk-or-v1-your-api-key-here"

# Generate secure secrets with: python -c "import os; print(os.urandom(32).hex())"
BACKEND_SECRET = "your-64-character-hex-secret"
JWT_SECRET_KEY = "your-64-character-hex-secret"

[ui]
NUXT_SESSION_PASSWORD = "your-64-character-hex-secret"

[database]
POSTGRES_PASSWORD = "your-secure-database-password"

[neo4j]
NEO4J_PASSWORD = "your-secure-neo4j-password"
```
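The three 64-character hex secrets can be generated in one go with a short script built on the same `os.urandom` call the comment suggests (a convenience sketch, not part of the repository):

```python
import os

def hex_secret(nbytes: int = 32) -> str:
    # 32 random bytes encode to 64 hex characters, matching the
    # "64-character hex secret" placeholders above.
    return os.urandom(nbytes).hex()

for key in ("BACKEND_SECRET", "JWT_SECRET_KEY", "NUXT_SESSION_PASSWORD"):
    print(f'{key} = "{hex_secret()}"')
```

Paste each printed line into the matching section of your `config.toml`.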
#### Optional: Sentry for Monitoring

To enable performance monitoring and error tracking, provide your Sentry DSN. If left empty, Sentry will be disabled.

```toml
[sentry]
SENTRY_DSN = "your-sentry-dsn-here"
```
> 📚 **Detailed Configuration Guide**: See Config.md for complete configuration options and OAuth setup instructions.
### Management Commands

#### Starting Services

```bash
# Production mode with pre-built images
./run.sh prod -d

# Build mode (compile locally)
./run.sh build -d

# Force rebuild without cache
./run.sh build --force-rebuild -d
```
#### Stopping Services

```bash
# Stop services (preserve data)
./run.sh prod down

# Stop services and remove volumes (⚠️ deletes all data)
./run.sh prod down -v
```
#### Monitoring and Maintenance

```bash
# View logs
docker compose -f docker-compose.prod.yml logs -f

# Check service status
docker compose -f docker-compose.prod.yml ps

# Update to the latest images
docker compose -f docker-compose.prod.yml pull
./run.sh prod down
./run.sh prod -d
```
## 🧑‍💻 Local Development
Set up Meridian for local development with hot reloading, debugging capabilities, and direct access to logs. This setup runs the databases in Docker while keeping the application services local, for an optimal development experience.
### Prerequisites

- Docker and Docker Compose installed on your machine
- `yq` (from Mike Farah) for TOML configuration processing
- Python 3.11 or higher for the backend
- Node.js 18+ and npm/pnpm for the frontend
- Git (for cloning the repository)
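You can sanity-check your local toolchain against this list with a short script (illustrative only; the checks mirror the prerequisites above):

```python
import shutil
import sys

# Map each prerequisite above to a quick local check.
checks = {
    "Python >= 3.11": sys.version_info >= (3, 11),
    "docker on PATH": shutil.which("docker") is not None,
    "node on PATH": shutil.which("node") is not None,
    "yq on PATH": shutil.which("yq") is not None,
    "git on PATH": shutil.which("git") is not None,
}
for name, ok in checks.items():
    print(("OK     " if ok else "MISSING") + " " + name)
```

Note that `shutil.which` only confirms a binary exists; it does not verify the Node.js or `yq` versions, so check those manually if anything misbehaves.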
### Development Setup

#### 1. Clone and Configure

```bash
# Clone the repository
git clone https://github.com/MathisVerstrepen/Meridian.git
cd Meridian/docker

# Create local development configuration
cp config.local.example.toml config.local.toml
```
#### 2. Configure for Development

Edit `config.local.toml` with your development settings:

```toml
[general]
ENV = "dev"
NAME = "meridian_dev"

[ui]
NITRO_PORT = 3000
NUXT_PUBLIC_API_B
```
