This repo is no longer under active development.

coco

coco is an open source recording device that's not supposed to forget. Every recorded conversation is sent to the backend, transcribed, stored in a database, and made available to an LLM via a chat interface. If you want to, and have some compute, it runs fully private/local.

So far, it has been developed by a small team, with lots of fun (see our website for more information).

A substantial part of the development was funded by hessian.ai, thank you for making this possible!

Step-by-step guide to running coco on a Mac:

Note: We developed on macOS, so you might run into trouble on other OSes. Feel free to contact us, and we'll try to help as much as possible. Skip any steps you don't need on your machine (most likely the "Basic Setup").

Before you begin:

You need Git, Docker, ffmpeg, pip, and cmake installed. If any of these are missing, see the steps below:

  • Install Homebrew; it makes things a lot easier. Make sure to follow all the instructions shown in your command line during the installation process.
  • Install ffmpeg (an audio library), git, and cmake via the command line: brew install ffmpeg git cmake
  • Install Docker Desktop. (Note: the Docker Engine without the Desktop client might work fine as well.) Installing means opening the download and running through the wizard!
  • Install pip via python3 -m pip install --upgrade pip
  • Optional: Install VS Code; it is needed for the coco firmware. (For convenience, add it to your PATH via the "Shell Command: Install 'code' command in PATH" entry in VS Code's command palette.)
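Before running the installs, you can check which prerequisites are already present. This small checker script is our own addition (not part of the coco repo); it only probes your PATH:

```shell
#!/usr/bin/env sh
# Check which of the tools required by this guide are already on PATH.
# (Illustrative helper, not part of the coco repo.)
for tool in git docker ffmpeg cmake python3; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING -- install it via the steps above"
  fi
done
```

Anything reported MISSING can be installed with the Homebrew commands above (Docker Desktop is a separate download).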

Middleware Setup:

Chat Interface

You can use whatever chat interface you like, as long as it supports the MCP protocol, i.e. acts as an MCP client. See here for more information.

If you plan on using Ollama as your inference engine (see below), we suggest LibreChat as the chat interface, since it supports MCP. Otherwise, it's probably easiest to start with Claude Desktop. We added instructions for setting up both; just continue below.

LLM Inference & Embeddings

  1. Install Ollama
  2. Download a chat model from Ollama; make sure it supports tool use or function calling. We strongly suggest testing different models to find the one that best suits your hardware.
  3. Download an embedding model from Ollama as well; we currently suggest bge-m3.
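Assuming Ollama is installed and its daemon is running, the two downloads might look like this. Note that llama3.1 is only our example of a tool-capable chat model, not a recommendation from this README; pick whatever suits your hardware:

```shell
# Pull an example chat model with tool-use support -- llama3.1 is our
# assumption here; swap in whichever model best suits your hardware.
ollama pull llama3.1

# Pull the embedding model this README suggests.
ollama pull bge-m3

# Confirm both models are now available locally.
ollama list
```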

Final (Backend) Setup:

  1. Open a terminal in the directory you want to clone coco into.
  2. Clone this repo: git clone https://github.com/mitralabs/coco.git
  3. cd into the "services" subdirectory: cd coco/services
  4. Follow this ReadMe to install the additional services.
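Steps 1-3 above as a single copy-pasteable sequence (assumes git is installed and that your terminal is already in the directory you want coco cloned into):

```shell
# Clone the repo and enter the services directory (commands from the steps above).
git clone https://github.com/mitralabs/coco.git
cd coco/services
# From here, follow the services ReadMe to install the additional services.
```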

Well done! Now, lastly, to set up your coco device, follow this ReadMe

Additional Notes:

  1. Nico Stellwag wrote a paper on the RAG pipeline. The final code before submission can be found on the hack-nico branch, in the RAG folder.
  2. All the code that was developed during the hessian.ai funding period is on the hessian-ai branch.
