🚀 Chatten

RAG application (backend & frontend) with sources retrieval and highlighting, built with Dash, FastAPI, and the Databricks Platform.
💫 Showcase
Here is an example of the app in action:
<div style="text-align: center;"> <img src="assets/showcase.gif" alt="chatten" style="max-width: 800px; height: auto;"> </div>

🛠 Developer Setup
To install the project, ensure you have the following dependencies:
- 📦 uv: for managing the project
- 🚀 Databricks CLI: for deploying the app
- 🌐 Node.js: for building the UI
📥 Installation Steps

1. Clone the repo:

   ```shell
   git clone <repo-url>
   ```

2. Run sync:

   ```shell
   uv sync --all-packages
   ```

3. Configure environment variables in the `.env` file:

   ```shell
   # Name of your Databricks profile
   CHATTEN_PROFILE=...

   # Optionally, you can include any bundle or Chatten variables:
   CHATTEN_CATALOG=...

   # Optionally, check default values in databricks.yml
   BUNDLE_VAR_vsi_endpoint=...
   ```
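The `CHATTEN_*` variables above can be read into a small typed config object at startup. This is a minimal sketch with the standard library only; the class name and fields are illustrative, not the project's actual config class (see `src/chatten` for that):

```python
import os
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ChattenConfig:
    """Illustrative config object reading CHATTEN_* variables from the environment."""
    profile: str = field(default_factory=lambda: os.environ.get("CHATTEN_PROFILE", ""))
    catalog: str = field(default_factory=lambda: os.environ.get("CHATTEN_CATALOG", ""))


# Simulate a populated .env file for demonstration purposes
os.environ["CHATTEN_PROFILE"] = "my-profile"
cfg = ChattenConfig()
print(cfg.profile)  # → my-profile
```

Freezing the dataclass keeps the settings immutable after startup, so a stale or mutated value can't leak into later requests.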
🏗 Development

1. Start the UI watcher in one terminal:

   ```shell
   cd packages/chatten_ui && npm run watch
   ```

2. Run the server in another terminal:

   ```shell
   uvicorn chatten_app.app:app --reload
   ```
🚀 Deployment

1. Authenticate with Databricks:

   ```shell
   databricks auth login -p <profile-name>
   ```

2. Deploy the app:

   ```shell
   # See Makefile for additional variables
   make deploy profile=fe-az-ws catalog=<catalog-name>
   ```

3. Run the RAG workflow:

   ```shell
   # See Makefile for additional variables
   make run-rag profile=fe-az-ws catalog=<catalog-name>
   ```

4. Grant the app's service principal access to the Volume.

5. Run the app:

   ```shell
   make run-app profile=fe-az-ws catalog=<catalog-name>
   ```

6. Open the app from the Databricks Workspace 🎉
🤖 Agent Serving Endpoint Response Parsing
Check the `api_app` source code for details on how the agent serving endpoint response is parsed.
Specifically, the `/chat` API endpoint handles:
- Parsing responses
- Sending messages to the chat interface
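The parsing step above can be sketched as a small pure function. The payload shape below (a `messages` list with role/content pairs plus a `sources` list) is an assumption for illustration; the real serving endpoint schema is defined in the `api_app` source code:

```python
def parse_agent_response(payload: dict) -> tuple[str, list[str]]:
    """Extract the assistant's answer and cited source paths.

    NOTE: the payload schema here is hypothetical — consult the api_app
    source code for the actual serving endpoint response format.
    """
    answer = ""
    for message in payload.get("messages", []):
        # Keep the last assistant message as the final answer
        if message.get("role") == "assistant":
            answer = message.get("content", "")
    sources = [s.get("path", "") for s in payload.get("sources", [])]
    return answer, sources


# Example payload (hypothetical shape)
example = {
    "messages": [{"role": "assistant", "content": "See page 3 of the manual."}],
    "sources": [{"path": "/Volumes/main/chatten/docs/manual.pdf"}],
}
answer, sources = parse_agent_response(example)
```

Keeping parsing separate from the FastAPI route makes it easy to unit-test against recorded endpoint responses.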
🏛 App Implementation Details
The implementation consists of a FastAPI backend with two sub-apps:
- Dash app (`/` route): Uses a custom component to render the chat UI.
- FastAPI app (`/api` route): Provides API endpoints for the chat.
The chat is implemented as a custom Dash component, which is a React-based UI element that communicates with the FastAPI backend.
The /api app interacts with the Databricks Serving Endpoint to handle chat requests and responses. Another route is responsible for serving PDF files from Databricks Volume.
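A file-serving route like the one above should only ever resolve paths inside the Volume. This is a hedged sketch of that check using only the standard library; the Volume root and function name are hypothetical, not taken from the project:

```python
import posixpath

# Hypothetical Volume root — the real path comes from the app's config
VOLUME_ROOT = "/Volumes/main/chatten/docs"


def resolve_pdf(filename: str) -> str:
    """Resolve a requested PDF inside the Volume, rejecting path traversal."""
    candidate = posixpath.normpath(posixpath.join(VOLUME_ROOT, filename))
    # normpath collapses "..", so anything escaping the root is caught here
    if not candidate.startswith(VOLUME_ROOT + "/") or not candidate.endswith(".pdf"):
        raise ValueError(f"refusing to serve {filename!r}")
    return candidate


print(resolve_pdf("report.pdf"))  # → /Volumes/main/chatten/docs/report.pdf
```

Validating the resolved path (rather than the raw filename) defends against `../` sequences that would otherwise escape the Volume.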
📂 Code Structure

```
📦 chatten                    # Monorepo root (FastAPI + Dash app)
 ┣ 📂 packages/chatten_ui     # Dash UI & custom chat component
 ┣ 📂 packages/chatten_rag    # RAG workflow implementation
 ┣ 📂 packages/chatten_app    # FastAPI & Dash apps
 ┣ 📂 src/chatten             # Common config & utilities
 ┗ 📜 src/chatten/app.py      # Main entry point
```
🏗 Technologies Used
🔥 Core Platform
- Databricks
  - Apps - App serving
  - Asset Bundles - Deployment
  - Mosaic AI Model Serving - Model serving
  - Mosaic AI Vector Search - Vector search
🏗 Frameworks & Libraries