Gobuildcache
gobuildcache is an OSS remote build cache for Go, similar in spirit to Depot's remote caching feature.
Overview
gobuildcache implements the gocacheprog interface defined by the Go compiler over a variety of storage backends, the most important of which is S3 Express One Zone (henceforth referred to as S3OZ). Its primary purpose is to accelerate CI (both compilation and tests) for large Go repositories. You can think of it as a self-hostable and OSS version of Depot's remote cache feature.
Effectively, gobuildcache leverages S3OZ as a distributed build cache for concurrent go build or go test processes regardless of whether they're running on a single machine or distributed across a fleet of CI VMs. This dramatically improves CI performance for large Go repositories because each CI process will behave as if running with an almost completely pre-populated build cache, even if the CI process was started on a completely ephemeral VM that has never compiled code or executed tests for the repository before.
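The gocacheprog protocol that gobuildcache implements is a stream of JSON requests and responses exchanged with the go tool over the cache program's stdin/stdout, with three commands: "get", "put", and "close". The sketch below models those commands against an in-memory map standing in for a storage backend. It is an illustration of the protocol's shape, not gobuildcache's actual code: real cache programs also advertise KnownCommands at startup, stream object bodies as base64 on the wire, and return an on-disk path for every hit (see cmd/go/internal/cacheprog in the Go distribution).

```go
package main

import "fmt"

// Request and Response loosely mirror the gocacheprog protocol types,
// trimmed to the fields this sketch needs.
type Request struct {
	ID       int64
	Command  string // "get", "put", or "close"
	ActionID []byte // cache key derived by the go tool
	OutputID []byte // identity of the object being stored (put only)
	Body     []byte // object bytes (base64-encoded on the wire)
}

type Response struct {
	ID       int64
	Miss     bool // true when a get finds nothing
	OutputID []byte
	Size     int64
	// Real cache programs must also return DiskPath, a local file
	// containing the object, for every hit.
}

type entry struct {
	outputID []byte
	body     []byte
}

// memCache stands in for a remote backend such as S3 Express One Zone.
type memCache struct{ objects map[string]entry }

func (c *memCache) handle(req Request) Response {
	switch req.Command {
	case "put":
		// Store the object under its ActionID and acknowledge.
		c.objects[string(req.ActionID)] = entry{req.OutputID, req.Body}
		return Response{ID: req.ID, OutputID: req.OutputID, Size: int64(len(req.Body))}
	case "get":
		e, ok := c.objects[string(req.ActionID)]
		if !ok {
			return Response{ID: req.ID, Miss: true}
		}
		return Response{ID: req.ID, OutputID: e.outputID, Size: int64(len(e.body))}
	default: // "close": flush pending writes and acknowledge
		return Response{ID: req.ID}
	}
}

func main() {
	c := &memCache{objects: map[string]entry{}}
	fmt.Println(c.handle(Request{ID: 1, Command: "get", ActionID: []byte("a1")}).Miss) // true: cold cache
	c.handle(Request{ID: 2, Command: "put", ActionID: []byte("a1"), OutputID: []byte("o1"), Body: []byte("obj")})
	fmt.Println(c.handle(Request{ID: 3, Command: "get", ActionID: []byte("a1")}).Miss) // false: hit
}
```

The point of the shared backend is that a "put" from one CI VM turns into a "get" hit on every other VM, which is what makes ephemeral runners behave as if their local cache were pre-populated.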
gobuildcache is highly sensitive to the latency of the remote storage backend, so it works best when running on self-hosted runners in AWS targeting an S3 Express One Zone bucket in the same region (and ideally same availability zone) as the self-hosted runners. That said, it doesn't have to be used that way. For example, if you're using Github's hosted runners or self-hosted runners outside of AWS, you can use a different storage solution like Tigris or Google Cloud Storage (GCS). For GCP users, enabling GCS Anywhere Cache can provide performance similar to S3OZ for read-heavy workloads. See examples/github_actions_tigris.yml for an example of using gobuildcache with Tigris.
Quick Start
Installation
go install github.com/richardartoul/gobuildcache@latest
Usage
export GOCACHEPROG=gobuildcache
go build ./...
go test ./...
By default, gobuildcache uses an on-disk cache stored in the OS default temporary directory. This is useful for testing and experimentation with gobuildcache, but provides no benefits over the Go compiler's built-in cache, which also stores cached data locally on disk.
For "production" use-cases in CI, you'll want to configure gobuildcache to use S3 Express One Zone, Google Cloud Storage, or a similarly low-latency distributed backend.
Using S3
export GOBUILDCACHE_BACKEND_TYPE=s3
export GOBUILDCACHE_S3_BUCKET=$BUCKET_NAME
You'll also have to provide AWS credentials. gobuildcache embeds the AWS V2 S3 SDK so any method of providing credentials to that library will work, but the simplest is to use environment variables as demonstrated below.
export GOCACHEPROG=gobuildcache
export GOBUILDCACHE_BACKEND_TYPE=s3
export GOBUILDCACHE_S3_BUCKET=$BUCKET_NAME
export GOBUILDCACHE_AWS_REGION=$BUCKET_REGION
export GOBUILDCACHE_AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY
export GOBUILDCACHE_AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
export GOBUILDCACHE_AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN # optional, for temporary credentials
go build ./...
go test ./...
Note: All configuration environment variables support both GOBUILDCACHE_<KEY> and <KEY> forms (e.g., both GOBUILDCACHE_S3_BUCKET and S3_BUCKET work). The prefixed version takes precedence if both are set. The prefixed form is strongly recommended for AWS variables (GOBUILDCACHE_AWS_REGION, GOBUILDCACHE_AWS_ACCESS_KEY_ID, GOBUILDCACHE_AWS_SECRET_ACCESS_KEY, GOBUILDCACHE_AWS_SESSION_TOKEN): by using the prefixed form instead of the standard AWS_* variables, you avoid those values being inherited by other processes in the same environment (e.g., test binaries spawned by go test). If the prefixed variable is set to an empty string, it falls through to the unprefixed version (or default).
Using Google Cloud Storage (GCS)
export GOBUILDCACHE_BACKEND_TYPE=gcs
export GOBUILDCACHE_GCS_BUCKET=$BUCKET_NAME
GCS authentication uses Application Default Credentials. You can provide credentials in one of the following ways:
- Service Account JSON file (recommended for CI):
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
export GOCACHEPROG=gobuildcache
export GOBUILDCACHE_BACKEND_TYPE=gcs
export GOBUILDCACHE_GCS_BUCKET=$BUCKET_NAME
go build ./...
go test ./...
- Metadata service (when running on GCP):
# No credentials file needed - uses metadata service automatically
export GOCACHEPROG=gobuildcache
export GOBUILDCACHE_BACKEND_TYPE=gcs
export GOBUILDCACHE_GCS_BUCKET=$BUCKET_NAME
go build ./...
go test ./...
- gcloud CLI credentials (for local development):
gcloud auth application-default login
export GOCACHEPROG=gobuildcache
export GOBUILDCACHE_BACKEND_TYPE=gcs
export GOBUILDCACHE_GCS_BUCKET=$BUCKET_NAME
go build ./...
go test ./...
GCS Anywhere Cache (Recommended for Performance)
For improved performance, especially in read-heavy workloads, consider enabling GCS Anywhere Cache. Anywhere Cache provides an SSD-backed zonal read cache that can significantly reduce latency for frequently accessed cache objects.
Benefits:
- Lower read latency: Cached reads from the same zone can achieve single-digit millisecond latency, comparable to S3OZ for repeated access
- Reduced costs: Lower data transfer costs, especially for multi-region buckets, and reduced retrieval fees
- Better performance: Especially beneficial when multiple CI jobs access the same cached artifacts
- Automatic scaling: Cache capacity and bandwidth scale automatically based on usage
Requirements:
- Bucket must be in a supported region/zone
- CI runners should be in the same zone as the cache for optimal performance
- Anywhere Cache is most effective for read-heavy workloads with high cache hit ratios
Setup:
- Verify your bucket region/zone supports Anywhere Cache
- Enable Anywhere Cache on your GCS bucket
- Configure the cache in the same zone as your CI runners for best performance
- Set admission policy to "First miss" for faster warm-up (caches on first access)
- Configure TTL based on your needs (1 hour to 7 days, default 24 hours)
# Create an Anywhere Cache using the gcloud CLI
# Replace ZONE_NAME with the zone where your CI runners are located
gcloud storage buckets anywhere-caches create gs://YOUR_BUCKET_NAME ZONE_NAME \
--admission-policy=admit-on-first-miss \
--ttl=7d
Note:
- Anywhere Cache only accelerates reads. Writes still go directly to the bucket, but since gobuildcache performs writes asynchronously, this typically doesn't impact build performance.
- First-time access to an object will still hit the bucket (cache miss), but subsequent reads will be served from the cache.
- For best results, ensure your CI runners and cache are in the same zone.
For more details, including availability by region, see the GCS Anywhere Cache documentation.
AWS Credentials Permissions
Your credentials must have the following permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::$BUCKET_NAME",
"arn:aws:s3:::$BUCKET_NAME/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3express:CreateSession"
],
"Resource": [
"arn:aws:s3express:$REGION:$ACCOUNT_ID:bucket/$BUCKET_NAME"
]
}
]
}
GCS Credentials Permissions
Your GCS service account must have the following IAM roles or permissions:
- storage.objects.create - to upload cache objects
- storage.objects.get - to download cache objects
- storage.objects.delete - to delete cache objects (for clearing)
- storage.objects.list - to list objects (for clearing)
The simplest way is to grant the Storage Object Admin role to your service account:
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
--role="roles/storage.objectAdmin"
Or for more granular control, create a custom role with only the required permissions.
Github Actions Example
See the examples directory for examples of how to use gobuildcache in a Github Actions workflow.
Lifecycle Policies
It's recommended to configure a lifecycle policy on your storage bucket to automatically expire old cache entries and control storage costs. Build cache data is typically only useful for a limited time (e.g., a few days to a week), after which it's likely stale.
S3 Lifecycle Policy
Here's a sample S3 lifecycle policy that expires objects after 7 days and aborts incomplete multipart uploads after 24 hours:
{
"Rules": [
{
"ID": "ExpireOldCacheEntries",
"Status": "Enabled",
"Filter": {},
"Expiration": {
"Days": 7
},
"AbortIncompleteMultipartUpload": {
"DaysAfterInitiation": 1
}
}
]
}
