omi
A 2nd brain you trust more than your 1st
Omi captures your screen and conversations, transcribes in real-time, generates summaries and action items, and gives you an AI chat that remembers everything you've seen and heard. Works on desktop, phone and wearables. Fully open source.
Trusted by 300,000+ professionals.
Website · Docs · Discord · Twitter · DeepWiki
Quick Start
git clone https://github.com/BasedHardware/omi.git && cd omi/desktop && ./run.sh --yolo
Builds the macOS app, connects to the cloud backend, and launches. No env files, no credentials, and no local backend required.
Requirements: macOS 14+, Xcode (includes Swift & code signing), Node.js
Full Installation
For local development with the full backend stack:
# 1. Install prerequisites
xcode-select --install
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# 2. Clone and configure
git clone https://github.com/BasedHardware/omi.git
cd omi/desktop
cp Backend-Rust/.env.example Backend-Rust/.env
# 3. Build and run (starts Rust backend + auth + Cloudflare tunnel + Swift app)
./run.sh
See desktop/README.md for environment variables and credential setup.
Mobile App
cd app && bash setup.sh ios # or: bash setup.sh android
Download on the App Store · Get it on Google Play · Try in Browser
How It Works
┌─────────────────────────────────────────────────────────┐
│ Your Devices │
│ │
│ ┌──────────┐ ┌──────────────┐ ┌───────────────────┐ │
│ │ Omi │ │ macOS App │ │ Mobile App │ │
│ │ Wearable │ │ (Swift/Rust) │ │ (Flutter) │ │
│ └────┬─────┘ └──────┬───────┘ └────────┬──────────┘ │
│ │ BLE │ HTTPS/WS │ │
└───────┼────────────────┼───────────────────┼─────────────┘
│ │ │
▼ ▼ ▼
┌─────────────────────────────────────────────────────────┐
│ Omi Backend (Python) │
│ │
│ ┌─────────┐ ┌──────────┐ ┌─────────┐ ┌──────────┐ │
│ │ Listen │ │ Pusher │ │ VAD │ │ Diarizer │ │
│ │ (REST) │ │ (WS) │ │ (GPU) │ │ (GPU) │ │
│ └─────────┘ └──────────┘ └─────────┘ └──────────┘ │
│ │
│ ┌─────────┐ ┌──────────┐ ┌─────────┐ ┌──────────┐ │
│ │ Deepgram│ │ Firestore│ │ Redis │ │ LLMs │ │
│ │ (STT) │ │ (DB) │ │ (Cache) │ │ (AI) │ │
│ └─────────┘ └──────────┘ └─────────┘ └──────────┘ │
└─────────────────────────────────────────────────────────┘
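To make the client-to-backend path above concrete, here is a minimal sketch of how a client might frame captured audio for streaming to the Pusher WebSocket. The endpoint path, frame size, and PCM format are assumptions for illustration only; the real protocol lives in backend/ and the desktop/mobile clients.

```python
# Sketch: chunk a PCM byte stream into fixed-size frames for WebSocket sends.
# Frame size and audio format (16 kHz, 16-bit mono) are assumed, not taken
# from the Omi protocol.
from typing import Iterator

FRAME_BYTES = 3200  # ~100 ms of 16 kHz, 16-bit mono PCM (assumed format)

def frame_pcm(pcm: bytes, frame_bytes: int = FRAME_BYTES) -> Iterator[bytes]:
    """Split a PCM byte stream into fixed-size frames; the last may be short."""
    for off in range(0, len(pcm), frame_bytes):
        yield pcm[off:off + frame_bytes]

# The send loop would then look roughly like this (third-party 'websockets'
# package; the URL is hypothetical):
#
# async def stream(ws_url: str, pcm: bytes) -> None:
#     import websockets
#     async with websockets.connect(ws_url) as ws:
#         for frame in frame_pcm(pcm):
#             await ws.send(frame)  # VAD, diarization, and STT run server-side
```

VAD, diarization, and transcription all happen server-side, so the client only needs to deliver raw frames reliably.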
| Component | Path | Stack |
|-----------|------|-------|
| macOS app | desktop/ | Swift, SwiftUI, Rust backend |
| Mobile app | app/ | Flutter (iOS & Android) |
| Backend API | backend/ | Python, FastAPI, Firebase |
| Firmware | omi/ | nRF, Zephyr, C |
| Omi Glass | omiGlass/ | ESP32-S3, C |
| SDKs | sdks/ | React Native, Swift, Python |
| AI Personas | web/personas-open-source/ | Next.js |
Create Your Own App (1 min)
- Download the Omi app and create a webhook at webhook.site
- In the app: Explore → Create an App → Select Capability → Paste Webhook URL → Install
- Start speaking — real-time transcript appears on webhook.site
See the full guide.
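If you prefer a local receiver over webhook.site, the steps above can be sketched with the standard library alone. The payload shape here (a JSON object with a "segments" list carrying "speaker" and "text" fields) is an assumption based on typical transcript webhooks; check the app development guide for the exact schema.

```python
# Minimal local webhook receiver for real-time transcript payloads.
# The "segments"/"speaker"/"text" field names are assumptions, not the
# confirmed Omi schema.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def format_segments(payload: dict) -> list[str]:
    """Turn an assumed transcript payload into printable '[speaker] text' lines."""
    lines = []
    for seg in payload.get("segments", []):
        lines.append(f"[{seg.get('speaker', '?')}] {seg.get('text', '')}")
    return lines

class TranscriptHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        for line in format_segments(payload):
            print(line)
        self.send_response(200)
        self.end_headers()

def run(port: int = 8000) -> None:
    """Serve until interrupted; expose the port via a tunnel, then paste the
    public URL into the app as the webhook target."""
    HTTPServer(("", port), TranscriptHandler).serve_forever()

# run()  # uncomment to start listening
```

Because webhook.site already renders incoming requests, this local variant is only needed when you want to post-process segments yourself.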
Documentation
Getting Started
API & SDKs
- API Reference — REST endpoints for memories, conversations, action items
- Python SDK
- Swift SDK
- React Native SDK
- MCP Server — Model Context Protocol integration
Building Apps
- App Development Guide
- Example Apps — GitHub, Slack, OmiMentor
- Audio Streaming Apps
- Custom Chat Tools
- Submit to App Store
Architecture
Hardware
Omi Hardware
Open-source AI wearables that pair with the mobile app for 24h+ continuous capture.
- Buy Omi Dev Kit — nRF, BLE, coin cell battery
- Buy Omi Glass Dev Kit — ESP32-S3, camera + audio
- Open Source Hardware Designs
License
MIT — see LICENSE
