Coco
coco is an open source conversation collector, or simply a fitness tracker for your conversations. coco is private by default and runs on your hardware.
This repo is no longer under active development.
coco
coco is an open source recording device that's supposed to not forget. Every recorded conversation is sent to the backend, transcribed, stored in a database, and made available to an LLM via a chat interface. If you want to, and have some compute, it can run fully private and local.
So far, it has been developed by a small team, with lots of fun (see our website for more information).
A substantial part of the development was funded by hessian.ai, thank you for making this possible!
Step by Step Guide to run coco on a Mac:
Note: We developed on macOS, so you might run into trouble on other OSes. Feel free to contact us, and we will try to help as much as possible. Skip any steps that are not needed on your machine, most likely parts of the basic setup.
Before you begin:
You need Git, Docker, ffmpeg, pip, and cmake installed. If any of them are missing, follow the steps below:
- Install Homebrew; it makes the following installations a lot easier. Make sure to follow all the instructions shown in your command line during the installation process.
- Install ffmpeg (audio library), git, and cmake via the command line:
  brew install ffmpeg git cmake
- Install Docker Desktop. (Note: The Docker Engine without the Desktop client might work fine as well.) Installing means opening the download and running through the wizard!
- Install pip via:
  python3 -m pip install --upgrade pip
- Optional: Install VS Code. It is needed for the coco firmware. (For convenience, add it to your PATH via the "Shell Command: Install 'code' command in PATH" entry in the VS Code command palette.)
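Once the tools are installed, a quick sanity check (a generic sketch, nothing coco-specific) can confirm they are all on your PATH:

```shell
# Check that each required tool is on the PATH; prints one line per tool.
for tool in git docker ffmpeg cmake python3; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Any line reading MISSING points at a step above that still needs to be done.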
Middleware Setup:
Chat Interface
You can use whatever chat interface you like that supports the MCP protocol, i.e. acts as an MCP client. See here for more information.
If you plan on using Ollama as your inference engine (see below), we suggest LibreChat as the chat interface, since it supports MCP. Otherwise, it's probably easiest to start with Claude Desktop. We added instructions for the setup of both; just continue below.
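For Claude Desktop, MCP servers are registered in its claude_desktop_config.json (on macOS under ~/Library/Application Support/Claude/). A minimal sketch of the general shape, where the server name and command path are placeholders, not coco's actual server command (that comes out of the services setup below):

```json
{
  "mcpServers": {
    "coco": {
      "command": "/path/to/coco-mcp-server",
      "args": []
    }
  }
}
```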
LLM Inference & Embeddings
- Install Ollama
- Download a chat model from Ollama and make sure that it supports tool use or function calling. We strongly suggest testing different models to find one that best suits your hardware.
- Download an embedding model from Ollama as well; we currently suggest bge-m3.
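Both downloads can be done from the command line; llama3.1 here is only an example of a tool-capable chat model, not a recommendation from this guide:

```shell
# Guarded so the pulls only run when ollama is actually installed.
if command -v ollama >/dev/null 2>&1; then
  ollama pull llama3.1   # example chat model; must support tool/function calling
  ollama pull bge-m3     # embedding model suggested above
else
  echo "ollama not found; install it first (see https://ollama.com)"
fi
```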
Final (Backend) Setup:
- Now open a terminal in the directory you want to clone coco to.
- Clone this repo:
  git clone https://github.com/mitralabs/coco.git
- cd into the "services" subdirectory:
  cd coco/services
- Follow this README to install the additional services.
Well done. Lastly, to set up your coco device, follow this README.
Additional Notes:
- Nico Stellwag wrote a paper on the RAG pipeline. The final code before submission can be found on the hack-nico branch in the RAG folder.
- All the code that was developed during the hessian.ai funding period is on the hessian-ai branch.
