Sidecar
Gossip-based service discovery. Docker native, but supports static discovery, too.
The main repo for this project is the NinesStack fork
Sidecar is a dynamic service discovery platform requiring no external coordination service. It's a peer-to-peer system that uses a gossip protocol for all communication between hosts. Sidecar health checks local services and announces them to peer systems. It's Docker-native, so your containerized applications work out of the box. It's designed to be Available and Partition tolerant, and eventually consistent, where "eventually" means a very short window, on the order of a few seconds.
Sidecar is part of a small ecosystem of tools. It can stand entirely alone or can also leverage:
- Lyft's Envoy Proxy - In less than a year, Envoy has become a core microservices architecture component. Sidecar implements the Envoy proxy SDS, CDS, and LDS (V1) and gRPC (V2) APIs. These allow a standalone Envoy to be entirely configured by Sidecar. This is best used with NinesStack's Envoy proxy container.
- haproxy-api - A separation layer that allows Sidecar to drive HAproxy in a separate container. It also allows a local HAproxy to be configured against a remote Sidecar instance.
- sidecar-executor - A Mesos executor that integrates with Sidecar, allowing your containers to be health checked by Sidecar for both service health and service discovery. It also supports a number of extra features, including Vault integration for secrets management.
- sidecar-dns - A working, but WIP, project to serve DNS SRV records from Sidecar services state.
Overview in Brief
Services communicate with each other through a proxy (Envoy or HAproxy) instance on each host that is itself managed and configured by Sidecar. This is, in effect, a half service mesh, where outbound connections go through the proxy but inbound requests do not. This has most of the advantages of a service mesh with a lot less complexity to manage. It is inspired by Airbnb's SmartStack, but we believe it has a few advantages over SmartStack:
- Eventually consistent model - a better fit for real world microservices
- Native support for Docker (works without Docker, too!)
- No dependence on Zookeeper or other centralized services
- Peer-to-peer, so it works on your laptop or on a large cluster
- Static binary means it's easy to deploy, and there is no interpreter needed
- Tiny memory usage (under 20MB) and few execution threads mean it's very lightweight
See it in Action: We presented Sidecar at Velocity 2015 and recorded a YouTube video demonstrating Sidecar with Centurion, deploying services in Docker containers, and watching Sidecar discover and health check them. The second video shows the current state of the UI, which has improved since the first video.
Complete Overview and Theory

Sidecar is an eventually consistent service discovery platform where hosts learn about each others' state via a gossip protocol. Hosts exchange messages about which services they are running and which have gone away. All messages are timestamped and the latest timestamp always wins. Each host maintains its own local state and continually merges changes in from others. Messaging is over UDP except when doing anti-entropy transfers.
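The "latest timestamp always wins" merge can be sketched roughly as follows. This is a minimal illustration, not Sidecar's actual code: the struct and function names here are hypothetical.

```go
package main

import "fmt"

// ServiceEvent is a hypothetical stand-in for a gossiped service record.
type ServiceEvent struct {
	ID      string
	Status  string // e.g. "ALIVE" or "TOMBSTONE"
	Updated int64  // timestamp assigned by the origin host
}

// Merge folds a remote record into local state, keeping whichever
// record carries the latest origin timestamp.
func Merge(local map[string]ServiceEvent, remote ServiceEvent) {
	current, seen := local[remote.ID]
	if !seen || remote.Updated > current.Updated {
		local[remote.ID] = remote
	}
}

func main() {
	state := map[string]ServiceEvent{}
	Merge(state, ServiceEvent{ID: "web-1", Status: "ALIVE", Updated: 100})
	// A stale tombstone arriving out of order is ignored:
	Merge(state, ServiceEvent{ID: "web-1", Status: "TOMBSTONE", Updated: 90})
	fmt.Println(state["web-1"].Status) // ALIVE
}
```

Because every host applies the same rule, any two hosts that have seen the same set of messages converge to the same state regardless of delivery order.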
There is an anti-entropy mechanism where full state exchanges take place between peer nodes on an intermittent basis. This allows for any missed messages to propagate, and helps keep state consistent across the cluster.
Sidecar hosts join a cluster by having a set of cluster seed hosts passed to them on the command line at startup. Once in a cluster, the first thing a host does is merge the state directly from another host. This is a big JSON blob that is delivered over a TCP session directly between the hosts.
Now the host starts continuously polling its own services and reviewing the services that it has in its own state, sleeping a couple of seconds in between. It announces its services as UDP gossip messages every couple of seconds, and also announces tombstone records for any services which have gone away. Likewise, when a host leaves the cluster, any peers that were notified send tombstone records for all of its services. These eventually converge and the latest records should propagate everywhere. If the host rejoins the cluster, it will announce new state every few seconds so the services will be picked back up.
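The polling-and-announcing cycle above reduces to a simple loop. This sketch uses illustrative names and a bounded round count; the real loop runs forever and its internals live in Sidecar's own code:

```go
package main

import (
	"fmt"
	"time"
)

// announceLoop sketches the per-host cycle: health check local services,
// gossip ALIVE records, gossip TOMBSTONE records for departed services,
// then sleep briefly before the next round.
func announceLoop(rounds int, check, announceAlive, announceTombstones func()) {
	for i := 0; i < rounds; i++ {
		check()
		announceAlive()
		announceTombstones()
		time.Sleep(10 * time.Millisecond) // ~2s in the description above
	}
}

func main() {
	polls := 0
	announceLoop(3, func() { polls++ }, func() {}, func() {})
	fmt.Println(polls) // 3
}
```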
There are lifespans assigned to both tombstone and alive records so that:
- A service that was not correctly tombstoned will go away in short order
- We do not continually add to the tombstone state we are carrying
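Lifespan-based expiry for both cases amounts to an age check on the record's origin timestamp. A sketch with hypothetical names and unix-second timestamps (the actual lifespan values are Sidecar internals not documented here):

```go
package main

import "fmt"

// isExpired reports whether a record's age has exceeded its lifespan.
// All values are in seconds; `updated` is the origin host's timestamp.
func isExpired(updated, now, lifespan int64) bool {
	return now-updated > lifespan
}

func main() {
	fmt.Println(isExpired(0, 120, 60))   // true: record is dropped
	fmt.Println(isExpired(100, 120, 60)) // false: record is kept
}
```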
Because the gossip mechanism is UDP and a service going away is a higher priority message, each tombstone is sent twice initially, followed by once a second for 10 seconds. This delivers reliable messaging of service death.
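That retransmission schedule, two immediate sends followed by one per second for 10 seconds, can be written out as send-time offsets. A sketch, not the actual implementation:

```go
package main

import "fmt"

// tombstoneSchedule returns the send-time offsets, in seconds, for one
// tombstone: two immediate sends, then one per second for 10 seconds.
func tombstoneSchedule() []int {
	offsets := []int{0, 0} // sent twice initially
	for i := 1; i <= 10; i++ {
		offsets = append(offsets, i)
	}
	return offsets
}

func main() {
	fmt.Println(len(tombstoneSchedule())) // 12 sends per tombstone
}
```

Twelve sends over UDP make it very unlikely that every copy of a death announcement is lost.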
Timestamps are all local to the host that sent them. This is because we can have clock drift on various machines. But if we always look at the origin timestamp they will at least be comparable to each other by all hosts in the cluster. The one exception to this is that if clock drift is more than a second or two, the alive lifespan may be negatively impacted.
Running it
You can download the latest release from the GitHub Releases page.
If you'd rather build it yourself, you should install the latest version of the Go compiler. Sidecar has not been tested with gccgo, only the mainstream Go compiler.
It's a Go application and the dependencies are all vendored into the vendor/
directory so you should be able to build it out of the box.
$ go build
Or you can run it like this:
$ go run *.go --cluster-ip <bootstrap_host>
You always need to supply at least one IP address or hostname with the
--cluster-ip argument (or via the SIDECAR_SEEDS environment variable). If you
are running solo, or are the first member, this can be your own hostname. You
may specify the argument multiple times to have multiple hosts. It is
recommended to use more than one when possible.
Note: --cluster-ip will override any values passed in the SIDECAR_SEEDS
environment variable.
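For example, a hypothetical two-seed startup (the addresses are placeholders) could be written either way:

```shell
$ go run *.go --cluster-ip 10.0.0.5 --cluster-ip 10.0.0.6
# or, equivalently, via the environment:
$ SIDECAR_SEEDS="10.0.0.5,10.0.0.6" go run *.go
```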
Running in a Container
The easiest way to deploy Sidecar to your Docker fleet is to run it in a container itself. Instructions for doing that are provided.
Nitro Software maintains builds of the Docker container image on Docker Hub. Note that the README describes how to configure this container.
Configuration
Sidecar configuration is done through environment variables, with a few options also supported on the command line. Once the configuration has been parsed, Sidecar will use Rubberneck to print out the values that were used. The environment variables are as follows, with defaults in bold at the end of the line:

- SIDECAR_LOGGING_LEVEL: The logging level to use (debug, info, warn, error). **info**
- SIDECAR_LOGGING_FORMAT: Logging format to use (text, json). **text**
- SIDECAR_DISCOVERY: Which discovery backends to use, as a csv array (static, docker, kubernetes_api). **[ docker ]**
- SIDECAR_SEEDS: csv array of IP addresses used to seed the cluster.
- SIDECAR_CLUSTER_NAME: The name of the Sidecar cluster. Restricts membership to hosts with the same cluster name.
- SIDECAR_BIND_PORT: Manually override the Memberlist bind port. **7946**
- SIDECAR_ADVERTISE_IP: Manually override the IP address Sidecar uses for cluster membership.
- SIDECAR_EXCLUDE_IPS: csv array of IPs to exclude from interface selection. **[ 192.168.168.168 ]**
- SIDECAR_STATS_ADDR: An address to send performance stats to. **none**
- SIDECAR_PUSH_PULL_INTERVAL: How long to wait between anti-entropy syncs. **20s**
- SIDECAR_GOSSIP_MESSAGES: How many times to gather messages per round. **15**
- SIDECAR_DEFAULT_CHECK_ENDPOINT: Default endpoint to health check services on. **/version**
- SERVICES_NAMER: Which method to use to extract service names (docker_label, regex). In both cases it will fall back to the image name. **docker_label**
- SERVICES_NAME_MATCH: The regexp to use to extract the service name from the container name.
- SERVICES_NAME_LABEL: The Docker label to use to identify service names. **ServiceName**
- DOCKER_URL: How to connect to Docker if Docker discovery is enabled. **unix:///var/run/docker.sock**
- STATIC_CONFIG_FILE: The config file to use if static discovery is enabled. **static.json**
- LISTENERS_URLS: If we want to statically configure any event listeners, the URLs should go in a csv array here. See the Listeners section below for more on dynamic listeners.
- HAPROXY_DISABLE: Disable management of HAproxy entirely. This is useful if you need to run without a proxy, or are using something like haproxy-api to manage HAproxy based on Sidecar events. You should also use this setting if you are using Envoy as your proxy.
- HAPROXY_RELOAD_COMMAND: The reload command to use for HAproxy.
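As a hypothetical example, a minimal single-host setup might export a few of these variables before starting the binary (the address, cluster name, and binary path are placeholders):

```shell
$ export SIDECAR_CLUSTER_NAME=dev
$ export SIDECAR_DISCOVERY=docker
$ export SIDECAR_SEEDS=10.0.0.5
$ export HAPROXY_DISABLE=true   # e.g. when Envoy or haproxy-api handles the proxy
$ ./sidecar
```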


