dray

An engine for managing the execution of container-based workflows.

Most common Docker use cases involve using containers for hosting long-running services. These are things like a web application, database or message queue -- services that are running continuously, waiting to service requests.

Another interesting use case for Docker is to wrap short-lived, single-purpose tasks. Perhaps it's a Ruby app that needs to be executed periodically or a set of bash scripts that need to be executed in sequence. Much like the services described above, these things can be wrapped in a Docker container to provide an isolated execution environment. The only real difference is that the task containers exit when they've finished their work while the service containers run until they are explicitly stopped.

Once you start using task containers, you may find it useful to execute a set of these containers together in sequence. Maybe you want to string together a set of tasks and have the output of one container feed the input of the next container. Something like unix pipes:

cat customers.txt | sort | uniq | wc -l

This is the service that Dray provides. Dray allows you to define a serial workflow, or job, as a list of Docker containers with each container encapsulating a step in the workflow. Dray will ensure that each step of the workflow (each container) is started in the correct order and handles the work of marshaling data between the different steps.

NOTE

This repo is no longer being maintained. Users are welcome to fork it, but we make no warranty of its functionality.

Overview

Dray is a Go application that provides a RESTful API for managing jobs. A job is simply a list of Docker containers to be executed in sequence that is posted to Dray as a JSON document:

{  
  "name":"Word Job",
  "steps":[  
    {  
      "source":"centurylink/randword"
    },
    {  
      "source":"centurylink/upper"
    },
    {  
      "source":"centurylink/reverse"
    }
  ]
}

The JSON above describes a job named "Word Job" which consists of three steps. Each step references the name of a Docker image to be executed.

When receiving this job description, Dray will immediately return a response containing an ID for the job and then execute the "centurylink/randword" image. As the container executes, Dray will capture any data written to the container's stdout stream so that it can be passed along to the next step in the list (there are other output channels you can use, but stdout is the default).

Once the "randword" container exits, Dray will inspect the exit code for the container. If, and only if, the exit code is zero, Dray will start the "centurylink/upper" container and pass any data captured in the previous step to that container's stdin stream.

Dray will continue executing each of the steps in this manner, marshaling the stdout of one step to the stdin of the next, until all of the steps have completed (or until one of the steps exits with a non-zero exit code).

The status of a running job can be queried at any point by hitting Dray's /jobs/(id) endpoint. Additionally, any output generated by the job can be viewed by querying the /jobs/(id)/log endpoint.

Note that the example above is a working job description that you can execute on your own Dray installation -- each of the referenced images can be found on the Docker Hub.

Running

Dray is packaged as a small Docker image and can easily be started with the docker run command.

Dray relies on Redis for persisting information about jobs, so you'll first need to start a Redis container. In the example below we're simply using the official Redis image:

docker run -d --name redis redis

Once Redis is running, you can start the Dray container with the following:

docker run -d --name dray \
  --link redis:redis \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 3000:3000 \
  centurylink/dray:latest

The Dray container must be linked to the Redis container using the --link flag so that Dray can find the correct Redis endpoint. The Redis container can be named anything you like, but the alias used in the --link flag must be "redis".

Since Dray interacts with the Docker API in order to launch containers, it needs access to the Docker API socket. When starting the container, the -v flag needs to be used to make the Docker socket available inside the container.

In the example above, the -p flag is used to map the Dray API endpoint (listening on port 3000 in the container) to port 3000 on the host machine. In situations where you don't need a mapped port (like when linking another container to the Dray container) the -p flag can be omitted.

If you'd like to use Docker Compose to start Dray, the following docker-compose.yml is equivalent to the steps shown above:

dray:                                                                                                                   
  image: centurylink/dray
  links:
   - redis
  volumes:
   - /var/run/docker.sock:/var/run/docker.sock
  ports:
   - "3000:3000"
redis:
  image: redis
  

With this docker-compose.yml file you can start Redis and Dray by simply issuing a docker-compose up -d command.

Configuration

The Dray service can be configured by injecting environment variables into the container when it is started. At this time, Dray supports the following configuration variables:

  • LOG_LEVEL - Valid values are "panic", "fatal", "error", "warn", "info" and "debug". By default, Dray writes messages at and above the "info" level. To increase the amount of logging, set the log level to "debug".

Environment variables can be passed to the Dray container by using the -e flag as part of the Docker run command:

docker run -d --name dray \
  --link redis:redis \
  -e LOG_LEVEL=debug \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 3000:3000 \
  centurylink/dray:latest
  

Example

Below is an actual Dray job description that is being used as part of the Panamax project. The goal of this job is to provision a cluster of servers on AWS and then install some software on those servers.

{  
  "name":"aws=fleet",
  "environment":[  
    { "variable":"AWS_ACCESS_KEY_ID", "value":"xxxxxx" },
    { "variable":"AWS_SECRET_ACCESS_KEY", "value":"xxxxxxx" },
    { "variable":"REGION", "value":"us-west-2a" },
    { "variable":"NODE_COUNT", "value":"2" },
    { "variable":"VM_SIZE", "value":"t2.small" },
    { "variable":"REMOTE_TARGET_NAME", "value":"AWS - Fleet-CoreOS" }
  ],
  "steps":[  
    {  
      "name":"Step 1",
      "source":"centurylink/cluster-deploy:aws.fleet"
    },
    {  
      "name":"Step 2",
      "source":"centurylink/cluster-deploy:agent"
    },
    {  
      "name":"Step 3",
      "source":"centurylink/remote-agent-install:latest"
    }
  ]
}

This job uses environment variables to pass configuration data into the different steps. Things like the AWS credentials and node count can be passed in at run-time instead of being hard-coded into the images themselves.

This job uses Dray's data marshaling to pass information between the different steps. Step 1 provisions a cluster of virtual servers, and the IP addresses of those servers are needed in step 2. The first step simply writes those IP addresses to the stdout stream, where they are captured by Dray and passed to the stdin stream of the second step.

The way this job is structured, job templates can be created for different cloud providers by simply swapping out the provider-specific steps and changing some environment variables.

API

Dray jobs are created and monitored using the API endpoints described below.

Create Job

POST /jobs

Submits a new job for execution. The job executes asynchronously with respect to the API call -- the API responds immediately while execution happens in the background.

The response body will echo back the submitted job description including the ID assigned to the job. The returned job ID can be used to retrieve information about the job using either the /jobs/(id) or /jobs/(id)/log endpoints.

Input:

job

  • name (string) - Optional. Name of job.
  • environment (array of envVar) - Optional. List of environment variables. Environment variables specified at the job level will be injected into all job steps.
  • steps (array of step) - Required. List of job steps.

envVar

  • variable (string) - Required. Name of the environment variable.
  • value (string) - Required. Value of the environment variable.

step

  • name (string) - Optional. Name of step.
  • environment (array of envVar) - Optional. List of environment variables to be injected into this step's container.
  • source (string) - Required. Name of the Docker image to be executed for this step. If the tag is omitted from the image name, will default to "latest".
  • output (string) - Optional. Output channel from which Dray captures data for the next step (see the job execution description above). Defaults to "stdout".