
Snapcast

<picture> <source media="(prefers-color-scheme: dark)" srcset="doc/Snapcast_800_dark.png"> <source media="(prefers-color-scheme: light)" srcset="doc/Snapcast_800.png"> <img alt="Snapcast" src="doc/Snapcast_800.png"> </picture>

Synchronous audio player


Snapcast is a multiroom client-server audio player, where all clients are time synchronized with the server to play perfectly synced audio. It's not a standalone player, but an extension that turns your existing audio player into a Sonos-like multiroom solution.
Audio is captured by the server and routed to the connected clients. Several players can feed audio to the server in parallel and clients can be grouped to play the same audio stream.
One of the most generic ways to use Snapcast is in conjunction with the music player daemon (MPD) or Mopidy.

Overview

How does it work

The Snapserver reads PCM chunks from configurable stream sources:

  • Named pipe, e.g. /tmp/snapfifo
  • ALSA to capture line-in, microphone, alsa-loop (to capture audio from other players)
  • TCP
  • stdout of a process
  • Many more

The chunks are encoded and tagged with the local time. Supported codecs are:

  • PCM lossless uncompressed
  • FLAC lossless compressed [default]
  • Vorbis lossy compression
  • Opus lossy low-latency compression

The encoded chunks are sent via a TCP connection to the Snapclients. Each client does continuous time synchronization with the server, so that the client is always aware of the local server time. Every received chunk is first decoded and added to the client's chunk buffer. Knowing the server's time, the chunk is played out at the appropriate time using a system-dependent low-level audio API (e.g. ALSA). Time deviations are corrected by playing faster or slower, which is done by removing or duplicating single samples (a sample at 48kHz has a duration of ~0.02ms).

Typically the deviation is below 0.2ms.
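The correction arithmetic above can be sketched as follows (illustrative only, not Snapcast's actual implementation):

```python
# Sketch: how many single samples must be removed or duplicated to
# correct a given playback drift at 48 kHz.

SAMPLE_RATE = 48_000            # samples per second
SAMPLE_MS = 1000 / SAMPLE_RATE  # duration of one sample in ms (~0.0208 ms)

def samples_to_correct(drift_ms: float) -> int:
    """Samples to remove (client behind: play faster) or
    duplicate (client ahead: play slower)."""
    return round(drift_ms / SAMPLE_MS)

# A 0.2 ms drift, the typical worst case quoted above, needs ~10 samples:
print(samples_to_correct(0.2))  # -> 10
```

Because each adjustment is a single ~0.02ms sample, the correction itself is inaudible.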

For more information on the binary protocol, please see the documentation.

Installation

You can either install Snapcast from a prebuilt package (recommended for new users), or build and install Snapcast from source.

Install Linux packages (recommended for beginners)

Snapcast packages are available for several Linux distributions.

Install using Homebrew

On macOS and Linux, Snapcast can be installed using Homebrew:

brew install snapcast

Installation from source

Please follow this guide to build Snapcast for your platform.

Configuration

After installation, Snapserver and Snapclient are started with the command line arguments that are configured in /etc/default/snapserver and /etc/default/snapclient. Allowed options are listed in the man pages (man snapserver, man snapclient) or shown by invoking snapserver or snapclient with the -h option.

Server

The server is configured in /etc/snapserver.conf. Different audio sources can be configured in the [stream] section with a list of source options, e.g.:

[stream]
source = pipe:///tmp/snapfifo?name=Radio&sampleformat=48000:16:2&codec=flac
source = file:///home/user/Musik/Some%20wave%20file.wav?name=File

Available stream sources are:

  • pipe: read audio from a named pipe
  • alsa: read audio from an alsa device
  • librespot: launches librespot and reads audio from stdout
  • airplay: launches airplay and reads audio from stdout
  • file: read PCM audio from a file
  • process: launches a process and reads audio from stdout
  • tcp: receives audio from a TCP socket, can act as client or server
  • pipewire: direct audio capture from PipeWire
  • jack: receives audio from a Jack server
  • meta: read and mix audio from other stream sources
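For illustration, a [stream] section might combine several of the source types above. The query parameters shown (device, mode) follow the snapserver documentation, but the values are placeholders; check man snapserver for the exact options each source type supports:

```ini
[stream]
# named pipe, as in the example above
source = pipe:///tmp/snapfifo?name=Radio
# capture line-in from an ALSA device (placeholder device name)
source = alsa:///?name=LineIn&device=hw:0,0
# act as a TCP server and wait for a PCM producer to connect
source = tcp://127.0.0.1:4953?name=TcpIn&mode=server
```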

Client

As audio backend, the client uses the system's low-level audio API to get the best possible control and the most precise timing for perfectly synced playback.

Available audio backends are configured using the --player command line parameter:

| Backend | OS | Description | Parameters |
| --------- | ------- | ------------ | ---------- |
| alsa | Linux | ALSA | buffer_time=<total buffer size [ms]> (default 80, min 10)<br>fragments=<number of buffers> (default 4, min 2) |
| pulse | Linux | PulseAudio | buffer_time=<buffer size [ms]> (default 100, min 10)<br>server=<PulseAudio server> - default not-set: use the default server<br>property=<key>=<value> set PA property, can be used multiple times (default media.role=music) |
| oboe | Android | Oboe, using OpenSL ES on Android 4.1 and AAudio on 8.1 | |
| opensl | Android | OpenSL ES | |
| coreaudio | macOS | Core Audio | |
| wasapi | Windows | Windows Audio Session API | |
| sdl2 | All | SDL2 Audio (e.g. for LG webOS TVs) | |
| file | All | Write audio to file | filename=<filename> (<filename> = stdout, stderr, null or a filename)<br>mode=[w\|a] (w: write (discarding the content), a: append (keeping the content)) |

Parameters are appended to the player name, e.g. --player alsa:buffer_time=100. Use --player <name>:? to get a list of available options.
For some audio backends you can configure the PCM device using the -s or --soundcard parameter; the device is chosen by index or name. Available PCM devices can be listed with -l or --list.
If you are running MPD and Shairport Sync into a soundcard that only supports a 48000 Hz sample rate, you can use --sampleformat <arg> and Snapclient will resample the audio, e.g. the 44100 Hz output of Shairport Sync (--sampleformat 48000:16:*).

Test

You can test your installation by copying random data into the server's fifo file:

cat /dev/urandom > /tmp/snapfifo

All connected clients should play random noise now. You might raise the client's volume with "alsamixer". It's also possible to let the server play a WAV file. Simply configure a file stream in /etc/snapserver.conf, and restart the server:

[stream]
source = file:///home/user/Musik/Some%20wave%20file.wav?name=test
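Instead of random noise, you can also pipe a generated test tone into the fifo. A minimal sketch, assuming the default 48000:16:2 sample format and the example pipe path /tmp/snapfifo from above:

```python
# Generate a 440 Hz test tone as raw PCM (48000:16:2: 48 kHz,
# 16-bit signed little-endian, stereo) suitable for the snapfifo pipe.
import math
import struct

def sine_pcm(freq_hz: int = 440, seconds: int = 2, rate: int = 48_000) -> bytes:
    frames = bytearray()
    for n in range(rate * seconds):
        sample = int(20_000 * math.sin(2 * math.pi * freq_hz * n / rate))
        frames += struct.pack("<hh", sample, sample)  # left + right channel
    return bytes(frames)

# Writing blocks until snapserver reads from the pipe:
# with open("/tmp/snapfifo", "wb") as fifo:
#     fifo.write(sine_pcm())
```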

When you are using a Raspberry Pi, you might have to change your audio output to the 3.5mm jack:

# The last number is the audio output with 1 being the 3.5 jack, 2 being HDMI and 0 being auto.
amixer cset numid=3 1

To set up WiFi on a Raspberry Pi, you can follow this guide.

Control

Snapcast can be controlled using a JSON-RPC API over plain TCP, HTTP(S), or Websockets:

  • Set client's volume
  • Mute clients
  • Rename clients
  • Assign a client to a stream
  • Manage groups
  • ...
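As an illustration, here is a minimal sketch of such a request over plain TCP. The method name and parameter shape follow the Snapcast JSON-RPC API documentation, but the control port (1705) and the client id are assumptions to verify against your setup:

```python
# Build a JSON-RPC 2.0 request to set a Snapcast client's volume.
import json
import socket

def make_request(method: str, params: dict, req_id: int = 1) -> bytes:
    """Serialize a newline-terminated JSON-RPC 2.0 request."""
    req = {"id": req_id, "jsonrpc": "2.0", "method": method, "params": params}
    return (json.dumps(req) + "\r\n").encode()

# Set a client's volume to 50% (the client id is a placeholder;
# real ids are reported by Server.GetStatus):
msg = make_request("Client.SetVolume",
                   {"id": "00:11:22:33:44:55",
                    "volume": {"muted": False, "percent": 50}})

# To send it, connect to the server's TCP control port, e.g.:
# with socket.create_connection(("snapserver", 1705)) as s:
#     s.sendall(msg)
#     reply = s.makefile().readline()
```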

WebApp

The server ships with Snapweb. This web app can be reached at http://<snapserver host>:1780 or, if SSL is enabled, at https://<snapserver host>:1788 (see HTTPS configuration).

<picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/snapcast/snapweb/master/snapweb_dark.png"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/snapcast/snapweb/master/snapweb_light.png"> <img alt="Snapweb" src="https://raw.githubusercontent.com/snapcast/snapweb/master/snapweb_light.png"> </picture>

Android client

There is an Android client, Snapdroid, available in Releases and on Google Play.

Snapcast for Android

Contributions

There is also an unofficial WebApp from @atoomic: atoomic/snapcast-volume-ui. This app lists all clients connected to a server and lets you control the volume of each client individually. Once installed, you can use it from any mobile device, laptop, desktop, or browser.

There is also an unofficial FHEM module from @unimatrix27 which integrates a Snapcast controller into the FHEM home automation system.

There is a snapcast component for Home Assistant which integrates a Snapcast controller into the Home Assistant home automation system.
