# Spydra (Beta / Inactive)

**Note:** This project is inactive.

Ephemeral Hadoop clusters using Google Cloud Platform
## Description

Spydra is "Hadoop Cluster as a Service" implemented as a library utilizing Google Cloud Dataproc
and Google Cloud Storage. The intention of Spydra is to enable the use of ephemeral Hadoop clusters
while hiding the complexity of cluster lifecycle management and keeping troubleshooting simple.
Spydra is designed to be integrated as a drop-in `hadoop jar` replacement.
Spydra is part of Spotify's effort to migrate its data infrastructure to Google Cloud Platform and
is being used in production. The principles and design of Spydra are based on our experience scaling
and maintaining our Hadoop cluster to over 2500 nodes and over 100 PB of capacity, running about
20,000 independent jobs per day.
Spydra supports submitting data processing jobs to Dataproc as well as to existing on-premise Hadoop
infrastructure, and is designed to ease the migration to, or dual use of, Google Cloud Platform and
on-premise infrastructure. Spydra is designed to be highly configurable and allows the usage of all
job types and configurations supported by the `gcloud dataproc clusters create` and
`gcloud dataproc jobs submit` commands.
## Development Status

Spydra is the rewrite of a concept that has been developed at Spotify for more than a year. The
current version of Spydra is in beta: it is used in production at Spotify and is actively developed
and supported by our data infrastructure team. Things might still change, but we aim not to break
the currently exposed APIs and configuration.
## Spydra at Spotify

At Spotify, Spydra is being used for our ongoing migration to Google Cloud Platform. It handles the
submission of on-premise Hadoop jobs as well as Dataproc jobs, simplifying the switch from
on-premise Hadoop to Dataproc.
Spydra is packaged in a Docker image that is used to deploy data pipelines. This image includes the
Hadoop tools and configuration needed to submit to our on-premise Hadoop cluster, as well as an
installation of `gcloud` and other basic dependencies required to execute Hadoop jobs in our
environment. Pipelines are scheduled using Styx and orchestrated by Luigi, which invokes Spydra
instead of `hadoop jar`.
## Design

Spydra is built as a wrapper around Google Cloud Dataproc and is designed to have no central
component. It exposes all functionality supported by Dataproc via its own configuration while adding
some defaults. Spydra manages clusters and submits jobs by invoking the `gcloud dataproc` command.
To ensure that clusters are eventually deleted, Spydra updates a heartbeat marker in the cluster's
metadata and uses initialization actions to set up a self-deletion script on the cluster, which
deletes the cluster in the event of client failures.
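The heartbeat idea can be pictured with a small sketch. This is hypothetical, not Spydra's actual
self-deletion script: the timeout value, the function name, and the commented `gcloud` call are all
assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the self-deletion check. A periodic job on the
# cluster compares the client's last heartbeat timestamp (stored in the
# cluster's metadata) against a timeout; if the client has stopped
# updating the heartbeat, the cluster deletes itself.

HEARTBEAT_TIMEOUT_SECONDS=1800  # assumed timeout, not Spydra's real default

# should_self_delete LAST_HEARTBEAT_EPOCH NOW_EPOCH
# Succeeds (exit 0) when the last heartbeat is older than the timeout.
should_self_delete() {
  local last_heartbeat_epoch=$1
  local now_epoch=$2
  [ $(( now_epoch - last_heartbeat_epoch )) -gt "$HEARTBEAT_TIMEOUT_SECONDS" ]
}

# In a real initialization-action script, the timestamp would be read from
# cluster metadata, and deletion would be performed with something like:
#   gcloud dataproc clusters delete "$CLUSTER_NAME" --quiet
```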
For submitting jobs to an existing on-premise Hadoop infrastructure, Spydra utilizes the
`hadoop jar` command, which is required to be installed and configured in the environment.

For Dataproc as well as on-premise submissions, Spydra behaves similarly to `hadoop jar` and prints
the driver output.
### Credentials

Spydra is designed to ease the usage of Google Cloud Platform credentials by utilizing service
accounts. The same credential that is used locally by Spydra to manage the cluster and submit jobs
is, by default, also forwarded to the Hadoop cluster when calling Dataproc. This means that access
rights to resources only need to be granted to a single set of credentials.
### Storing Execution Data and Logs

To make job execution data available after an ephemeral cluster has been shut down, and to provide
functionality similar to the Hadoop MapReduce History Server, Spydra stores execution data and logs
on Google Cloud Storage, grouped by a user-defined client id. Typically, the client id is unique per
job. The execution data and logs are then made available via Spydra commands, which allow spinning
up a local MapReduce History Server to access execution data and logs, as well as dumping them.
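For example, retrieving the stored data for a finished job might look like the following sketch. The
sub-command names are those listed under "Spydra CLI" below, but the `--clientid` flag is an
assumption carried over from `submit`; check each sub-command's `--help` for the real flags.

```shell
# Path as used in the submission example in this README; guarded so the
# sketch is harmless where the jar is not built.
SPYDRA_JAR=spydra/target/spydra-VERSION-jar-with-dependencies.jar

if [ -f "$SPYDRA_JAR" ]; then
  # Dump the logs stored for this client id:
  java -jar "$SPYDRA_JAR" dump-logs --clientid my-pipeline
  # Or browse execution data through a local MapReduce History Server:
  java -jar "$SPYDRA_JAR" run-jhs --clientid my-pipeline
fi
```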
### Autoscaler

Spydra has an experimental autoscaler which can be executed on the cluster. It monitors the current
resource utilization on the cluster and scales the cluster according to a user-defined utilization
factor and maximum worker count by adding preemptible VMs. Note that the use of preemptible VMs
might negatively impact performance, as nodes might be shut down at any time. The autoscaler is
installed on the cluster using a Dataproc initialization action.
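The scaling decision can be illustrated with a small arithmetic sketch. This is hypothetical: the
function name and the proportional formula are assumptions, not the autoscaler's actual algorithm.

```shell
#!/usr/bin/env bash
# Hypothetical sizing sketch: grow the worker count in proportion to the
# observed utilization versus the user-defined target utilization factor,
# capped at the maximum worker count.

# target_worker_count CURRENT_WORKERS UTILIZATION_PCT TARGET_PCT MAX_WORKERS
target_worker_count() {
  local current=$1 utilization=$2 target=$3 max=$4
  # Ceiling of current * utilization / target, in integer arithmetic.
  local desired=$(( (current * utilization + target - 1) / target ))
  if [ "$desired" -gt "$max" ]; then desired=$max; fi
  echo "$desired"
}
```

Under this sketch, 10 workers at 90% utilization with a 70% target factor would grow to 13 workers,
subject to the cap.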
### Cluster Pooling

Spydra has experimental support for cluster pooling within a single Google Cloud Platform project.
Cluster pooling can be used to limit the resources used by job submissions, as well as the cluster
initialization overhead. Both the maximum number of clusters to be used and their maximum lifetime
can be defined. Upon job submission, a random cluster is chosen to submit the job to. When pooled
clusters reach their maximum lifetime, they are deleted by the self-deletion mechanism.
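The selection step above can be sketched as follows. The function name is hypothetical; the only
behavior taken from the text is the uniform random choice among pooled clusters.

```shell
#!/usr/bin/env bash
# Sketch: choose a random cluster from the pool for a job submission.

# pick_pooled_cluster NAME...
pick_pooled_cluster() {
  local clusters=("$@")
  # Uniform random choice; clusters past their maximum lifetime are
  # assumed to have already been removed by the self-deletion mechanism.
  echo "${clusters[RANDOM % ${#clusters[@]}]}"
}
```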
## Usage

### Installation

A pre-built Spydra is available on Maven Central. It is built using the parameters from
`.travis.yml`; the `spydra-init-actions` bucket is provided by Spotify.
### Prerequisites

To be able to use Dataproc and on-premise Hadoop, a few things need to be set up before using
Spydra:

- Java 8
- A Google Cloud Platform project with the required APIs (Google Cloud Dataproc API) enabled
- A service account with project editor rights in your project. The service account can be specified in two ways:
  - A JSON key for the service account, with the environment variable `GOOGLE_APPLICATION_CREDENTIALS` pointing to the location of this JSON key. This cannot be a user credential.
  - If the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is not set, Spydra will attempt to use application default credentials. In a local development environment, application default credentials can be obtained by authenticating with `gcloud auth application-default login`. When running on Google Cloud Platform managed nodes, the application default credentials are provided by the node's default service account.
- `gcloud` needs to be installed and authenticated using the service account
- `hadoop jar` needs to be installed and configured to submit to your cluster
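Putting the credential prerequisites together might look like this. The key path and project name
are placeholders, and the `gcloud` calls are skipped when `gcloud` is not installed.

```shell
# Point Spydra (and Google client libraries) at the service account key:
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/spydra-service-account.json"

# Authenticate gcloud with the same service account:
if command -v gcloud >/dev/null 2>&1; then
  gcloud auth activate-service-account \
      --key-file="$GOOGLE_APPLICATION_CREDENTIALS"
  gcloud config set project my-gcp-project  # placeholder project id
fi
```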
### Spydra CLI

Spydra CLI supports multiple sub-commands:

- `submit` - submitting jobs to on-premise Hadoop and GCP Dataproc
- `run-jhs` - running an embedded history server
- `dump-logs` - viewing logs
- `dump-history` - viewing history
#### Submission

```
$ java -jar spydra/target/spydra-VERSION-jar-with-dependencies.jar submit --help
usage: submit [options] [jobArgs]
 --clientid <arg>      client id, used as identifier in job history output
 --spydra-json <arg>   path to the spydra configuration json
 --jar <arg>           main jar path, overwrites the configured one if set
 --jars <arg>          jar files to be shipped with the job, can occur
                       multiple times, overwrites the configured ones if set
 --job-name <arg>      job name, used as dataproc job id
 -n,--dry-run          Do a dry run without executing anything
```
Only a few basic things can be supplied on the command line: a client id (an arbitrary identifier
for the client running Spydra), the main and additional JAR files for the job, and arguments for the
job. For any use case requiring more detail, the user needs to create a JSON configuration file and
supply its path as a parameter. All command-line options override the corresponding options in the
JSON config. Apart from the command-line options and some general settings, the JSON config can also
transparently pass parameters along to the `gcloud` command for cluster creation or job submission.
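As an illustration, such a configuration file might look roughly like the following sketch. The
field names and values here are assumptions for illustration, not a verified schema; consult the
project's own examples for the real format.

```json
{
  "client_id": "my-pipeline",
  "cluster_type": "dataproc",
  "cluster": {
    "options": {
      "project": "my-gcp-project",
      "region": "europe-west1",
      "worker-machine-type": "n1-standard-4"
    }
  },
  "submit": {
    "job_args": ["pi", "8", "100"],
    "options": {
      "jar": "gs://my-bucket/hadoop-mapreduce-examples.jar"
    }
  }
}
```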
A job name can also be supplied. This will be sanitized and have a unique identifier attached to it, which will then be used as the Dataproc job ID. This is useful in finding the job in the Google Cloud Console.
