Firehawk

The goals of this project are to set up an AWS VPC, storage, a VPN, license servers, and batch workloads for SideFX Houdini.

Firehawk is a work in progress for VFX rendering infrastructure, using multi-cloud capable and open source tooling where possible.

It uses AWS Cloud9 as a seed instance to simplify launching the infrastructure. The scheduler currently implemented is Deadline, which provides Usage Based Licensing for many types of software, giving artists access at low cost; the scheduler itself is free to use on AWS instances. It is possible to build images to support other schedulers.

The primary reason for this project's creation is to provide low-cost, high-powered cloud capability for SideFX Houdini users, and to provide a pathway for artists to roll their own cloud with any software they choose.

Firehawk uses these multi-cloud capable technologies:

  • HashiCorp Vault - for dynamic secrets management and authentication.
  • HashiCorp Terraform - for orchestration.
  • HashiCorp Consul - for DNS / service discovery.
  • HashiCorp Vagrant - for client-side OpenVPN deployment.
  • OpenVPN - for a private gateway between the client network and the cloud.
  • Red Hat Ansible - for consistent provisioning in some Packer templates.
  • Red Hat CentOS and Canonical Ubuntu - as the base operating systems.

The current implementation uses AWS.

Backers

Please see BACKERS.md for a list of generous backers that have made this project possible!

I want to extend my deep gratitude to the support provided by:

  • SideFX for providing licenses enabling this project.
  • AWS for contributing cloud resources.

I also want to take a moment to thank Andrew Paxson who has contributed his knowledge to the project.

And especially to the other companies providing the open source technologies that make this project possible: HashiCorp, OpenVPN, Red Hat, and Canonical.

Firehawk-Main

The Firehawk Main VPC (WIP) deploys HashiCorp Vault into a private VPC with auto-unsealing.

This deployment uses Cloud9 to simplify management of AWS secret keys. You will need to create a custom instance profile to allow the Cloud9 instance permission to create these resources with Terraform.

Policies

  • In CloudFormation, run these templates to initialise policies and defaults:
    • modules/cloudformation-cloud9-vault-iam/cloudformation_devadmin_policies.yaml
    • modules/cloudformation-cloud9-vault-iam/cloudformation_cloud9_policies.yaml
    • modules/cloudformation-cloud9-vault-iam/cloudformation_ssm_parameters_firehawk.yaml
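If you prefer the CLI to the console, the templates above can also be deployed with the AWS CLI. This is a minimal sketch, not a command sequence from the project; the stack names are assumptions, so adjust them to your own conventions:

```shell
# Hypothetical sketch: deploy the three CloudFormation templates from the
# CLI. Stack names below are illustrative assumptions.
aws cloudformation deploy \
  --template-file modules/cloudformation-cloud9-vault-iam/cloudformation_devadmin_policies.yaml \
  --stack-name firehawk-devadmin-policies \
  --capabilities CAPABILITY_NAMED_IAM

aws cloudformation deploy \
  --template-file modules/cloudformation-cloud9-vault-iam/cloudformation_cloud9_policies.yaml \
  --stack-name firehawk-cloud9-policies \
  --capabilities CAPABILITY_NAMED_IAM

aws cloudformation deploy \
  --template-file modules/cloudformation-cloud9-vault-iam/cloudformation_ssm_parameters_firehawk.yaml \
  --stack-name firehawk-ssm-parameters
```

Note that templates creating named IAM resources require the CAPABILITY_NAMED_IAM acknowledgement flag.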

Follow the guide here to create a CodeBuild service role: https://docs.aws.amazon.com/codebuild/latest/userguide/setting-up.html

We will set the name of the policy as: CodeBuildServiceRolePolicyFirehawk

{
  "Version": "2012-10-17",
  "Statement": [
    { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "kms:*", "Resource": "*" },
    { "Effect": "Allow", "Action": [ "ssm:DescribeParameters" ], "Resource": "*" },
    { "Effect": "Allow", "Action": [ "ssm:GetParameters" ], "Resource": "*" },
    { "Sid": "CloudWatchLogsPolicy", "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": "*" },
    { "Sid": "CodeCommitPolicy", "Effect": "Allow", "Action": [ "codecommit:GitPull" ], "Resource": "*" },
    { "Sid": "S3GetObjectPolicy", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": "*" },
    { "Sid": "S3PutObjectPolicy", "Effect": "Allow", "Action": [ "s3:PutObject" ], "Resource": "*" },
    { "Sid": "ECRPullPolicy", "Effect": "Allow", "Action": [ "ecr:BatchCheckLayerAvailability", "ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage" ], "Resource": "*" },
    { "Sid": "ECRAuthPolicy", "Effect": "Allow", "Action": [ "ecr:GetAuthorizationToken" ], "Resource": "*" },
    { "Sid": "S3BucketIdentity", "Effect": "Allow", "Action": [ "s3:GetBucketAcl", "s3:GetBucketLocation" ], "Resource": "*" }
  ]
}

Then create a role and attach the above policy to it. This role will be named: CodeBuildServiceRoleFirehawk

Also attach the managed policies named: IAMFullAccess, AdministratorAccess, AmazonEC2FullAccess, AmazonS3FullAccess.

WARNING: These are overly permissive for development and should be further restricted. (TODO: define restricted policies)
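The role setup above can also be scripted. This is a hedged sketch with the AWS CLI, not the project's own tooling; the file names codebuild-policy.json and codebuild-trust.json are assumptions for illustration (the former holding the JSON policy shown above):

```shell
# Hypothetical sketch: create the CodeBuild policy and role from the CLI.
# Assumes the policy JSON above was saved as codebuild-policy.json.

# A standard CodeBuild trust policy, allowing the service to assume the role.
cat > codebuild-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "codebuild.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-policy \
  --policy-name CodeBuildServiceRolePolicyFirehawk \
  --policy-document file://codebuild-policy.json

aws iam create-role \
  --role-name CodeBuildServiceRoleFirehawk \
  --assume-role-policy-document file://codebuild-trust.json

# Attach the custom policy plus the managed policies listed above.
ACCOUNT_ID="$(aws sts get-caller-identity --query Account --output text)"
for policy in \
  "arn:aws:iam::${ACCOUNT_ID}:policy/CodeBuildServiceRolePolicyFirehawk" \
  arn:aws:iam::aws:policy/IAMFullAccess \
  arn:aws:iam::aws:policy/AdministratorAccess \
  arn:aws:iam::aws:policy/AmazonEC2FullAccess \
  arn:aws:iam::aws:policy/AmazonS3FullAccess; do
  aws iam attach-role-policy \
    --role-name CodeBuildServiceRoleFirehawk \
    --policy-arn "$policy"
done
```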

Creating The Cloud9 Environment

  • In AWS Management Console | Cloud9: Select Create Environment

  • Ensure you have selected: Create a new no-ingress EC2 instance for environment (access via Systems Manager). This will create a Cloud9 instance with no inbound access.

  • Ensure the instance type is at least m5.large (under other instance types)

  • Select Amazon Linux 2 platform.

  • Ensure you add tags:

resourcetier=main

The tag will define the environment in the shell.

  • Once up, in AWS Management Console | EC2 : Select the instance, and change the instance profile to your Cloud9CustomAdminRoleFirehawk

  • Connect to the session through AWS Management Console | Cloud9.

  • When connected, disable "AWS Managed Temporary Credentials" (select the Cloud9 icon in the top left | AWS Settings). Your instance should now have permission to create and destroy any resource with Terraform.
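The resourcetier tag set above typically drives environment selection in shell scripts. As a minimal illustrative sketch (not the project's actual code), a script might validate the tag value before using it; the set of valid tier names shown here is an assumption:

```shell
# Sketch (an assumption, not Firehawk's actual code): validate the
# resourcetier tag value before deriving a shell environment from it.
validate_resourcetier() {
  case "$1" in
    dev|blue|green|main) echo "valid" ;;
    *) echo "invalid" ;;
  esac
}

# In practice the value would come from the instance's EC2 tags (e.g. via
# `aws ec2 describe-tags`); here we demonstrate with literals.
validate_resourcetier main     # prints "valid"
validate_resourcetier staging  # prints "invalid"
```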

Create the Hashicorp Vault deployment

  • Clone the repo, and install required binaries and packages.
git clone --recurse-submodules https://github.com/firehawkvfx/firehawk-main.git
cd firehawk-main; ./install-packages
./deploy/firehawk-main/scripts/resize.sh
  • Initialise the environment variables to spin up the resources.
source ./update_vars.sh
  • Initialise required SSH keys, KMS keys, certificates and S3 buckets. Note: be mindful if you run destroy in init/, as this will destroy the SSL certificates used in images required to establish connections with Vault and Consul.
cd init
terragrunt run-all apply
  • Ensure you reboot the instance after this point, or DNS for Consul will not function properly (dnsmasq requires this).

  • If you have Deadline certificates (required for third-party / Houdini UBL) you should go to the ublcerts bucket just created and ensure the zip file containing the certs exists at ublcertszip/certs.zip in the S3 bucket. The Deadline DB / license forwarder has access to this bucket to install the certificates on deployment.
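Before deploying, you can confirm the archive is in place from the shell. A sketch; the bucket name below is a placeholder assumption, so substitute the ublcerts bucket created for your deployment:

```shell
# Hypothetical sketch: check that the UBL certificate archive exists before
# deploying. The bucket name is a placeholder; use your actual ublcerts bucket.
bucket="ublcerts-main-example"
if aws s3 ls "s3://${bucket}/ublcertszip/certs.zip" > /dev/null; then
  echo "certs.zip found"
else
  echo "certs.zip missing - upload it with:"
  echo "  aws s3 cp certs.zip s3://${bucket}/ublcertszip/certs.zip"
fi
```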

Build Images

For each client instance we build a base AMI to run OS updates (you only need to do this infrequently). Then we build the complete AMI from the base AMI to speed up subsequent builds (building from a fixed base AMI gives more reproducible results despite ever-changing software updates).

  • Build the base AMIs:
source ./update_vars.sh
cd deploy/packer-firehawk-amis/modules/firehawk-base-ami
./build.sh
  • When this is complete you can build the final AMIs, which will use the base AMIs:
cd deploy/packer-firehawk-amis/modules/firehawk-ami
./build.sh
  • Check that you have images for the bastion, Vault client, and VPN server in AWS Management Console | AMIs. If any are missing, you may wish to try running the contents of the script manually.
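You can also check for the images from the CLI. A sketch, assuming your AMI names share a firehawk prefix; adjust the name filter to match the names your build actually produces:

```shell
# Hypothetical sketch: list your account's Firehawk images. The name filter
# "firehawk*" is an assumption about the AMI naming convention.
aws ec2 describe-images \
  --owners self \
  --filters "Name=name,Values=firehawk*" \
  --query 'Images[].{Name:Name,Id:ImageId,Created:CreationDate}' \
  --output table
```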

First time Vault deployment

The first time you launch Vault, it will not have any config stored in the S3 backend yet. Once you have completed these steps you won't have to run them again.

  • Source environment variables to pick up the AMI IDs. They should be listed as they are found:
source ./update_vars.sh
  • Deploy Vault.
cd vault-init
./init
  • After around 10 minutes, we should see this in the log:
Initializing vault...
Recovery Key 1: 23l4jh13l5jh23ltjh25=

Initial Root Token: s.lk235lj235k23j525jn

Success! Vault is initialized

During init, it also created an admin token, and logged in with that token. You can check this with:

vault token lookup
  • Store the root token, admin token, and recovery key in an encrypted password manager. If you have problems with any steps in vault-init, and you wish to start from scratch, you can use the ./destroy script to start over. You may also delete the contents of the S3 bucket storing the vault data for a clean install.

  • Next we can use Terraform to configure Vault. You can use a shell script to aid this:

./configure
  • After this step you should now be using an admin token

  • Store all the above-mentioned sensitive output (the recovery key, root token, and admin token) in an encrypted password manager for later use.

  • Ensure you are joined to the consul cluster:

sudo /opt/consul/bin/run-consul --client --cluster-tag-key "${consul_cluster_tag_key}" --cluster-tag-value "${consul_cluster_tag_value}"
consul catalog services

This should show 2 services: consul and vault.
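If you script this check, you can fail fast instead of eyeballing the output. A small sketch, not part of the repo:

```shell
# Sketch (not from the repo): fail loudly if either expected service is
# missing from the consul catalog output.
check_services() {
  local out="$1"
  local svc
  for svc in consul vault; do
    echo "$out" | grep -qx "$svc" || { echo "missing: $svc"; return 1; }
  done
  echo "ok"
}

# In practice: check_services "$(consul catalog services)"
# Demonstrated here with a literal two-line catalog listing.
check_services "$(printf 'consul\nvault')"  # prints "ok"
```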

  • Now ensure updates to the Vault config will work with your admin token.
TF_VAR_configure_vault=true terragrunt run-all apply

Congratulations! You now have a fully configured vault.

Continue to deploy the rest of the resources from deploy/

cd ../deploy
terragrunt run-all apply

Install the deadline certificate service

If you are running Ubuntu 18 or macOS, it's possible to install a service on your local system to make acquiring certificates for Deadline easier. The service can monitor a message queue for certificate requests.
