Igneous
Scalable, Neuroglancer-compatible downsampling, meshing, skeletonizing, contrast normalization, transfers, and more.
# A few examples. You can also script Igneous. Read on!
$ igneous image xfer gs://other-lab/data file://./my-data --queue ./xfer-queue --shape 2048,2048,64
$ igneous image downsample file://./my-data --mip 0 --queue ./ds-queue
$ igneous execute -x ./ds-queue # -x exit when finished
$ igneous mesh forge s3://my-data/seg --mip 2 --queue sqs://mesh-queue
$ igneous --parallel 4 execute sqs://mesh-queue
$ igneous skeleton forge s3://my-data/seg --mip 2 --queue sqs://mesh-queue
$ igneous skeleton merge s3://my-data/seg --queue sqs://mesh-queue
$ igneous execute sqs://mesh-queue
$ igneous --help
Igneous is a TaskQueue and CloudVolume based pipeline for producing and managing visualizable Neuroglancer Precomputed volumes. It uses CloudVolume for accessing data on AWS S3, Google Storage, or the local filesystem. It can operate in the cloud using an SQS task queuing system or run locally on a single machine or cluster (using a file based SQS emulation).
Igneous is useful for downsampling, transferring, deleting, meshing, and skeletonizing large images. There are a few more esoteric functions too. You can watch a video tutorial here.
Originally by Nacho and Will.
Pre-Built Docker Container
You can use this container to scale big jobs horizontally or to experiment with Igneous inside the container.
https://hub.docker.com/r/seunglab/igneous/
Installation
You'll need Python 3, pip, (possibly) a C++ compiler (e.g. g++ or clang), and virtualenv. Igneous is tested under Ubuntu 16.04 and macOS Monterey.
pip install igneous-pipeline
Manual Installation
Sometimes it's useful to tweak tasks for special circumstances, and so you'll want to use a developer installation.
git clone git@github.com:seung-lab/igneous.git
cd igneous
virtualenv venv
source venv/bin/activate
pip install numpy
pip install -r requirements.txt
python setup.py develop
Igneous is intended as a self-contained pipeline system and not as a library. Such uses are possible, but not supported. If specific functionality is needed, please open an issue and we can break that out into a library as has been done with several algorithms such as tinybrain, zmesh, and kimimaro.
Sample Local Use
Below we show three ways to use Igneous on a local workstation or cluster. As an example, we generate meshes for an already-existing Precomputed segmentation volume.
In Memory Queue (Simple Execution)
This procedure is well suited to small jobs: it is very simple and supports parallel execution, but it is brittle. If a job fails, you may have to restart the entire task set.
from taskqueue import LocalTaskQueue
import igneous.task_creation as tc
# Mesh on 8 cores, use True to use all cores
cloudpath = 'gs://bucket/dataset/labels'
tq = LocalTaskQueue(parallel=8)
tasks = tc.create_meshing_tasks(cloudpath, mip=3, shape=(256, 256, 256))
tq.insert(tasks)
tq.execute()
tasks = tc.create_mesh_manifest_tasks(cloudpath)
tq.insert(tasks)
tq.execute()
print("Done!")
Filesystem Queue (Producer-Consumer)
This procedure is more robust because failed tasks can be restarted. The queue is written to the filesystem, so any process that can read and write files in the selected directory can act as a worker, which makes local cluster processing possible. Conceptually, a single producer script populates a filesystem queue ("FileQueue"), and then typically one worker per core consumes tasks. The FileQueue leases each task for a set amount of time; if the task is not completed, it recycles into the available task pool. Task ordering is not guaranteed, but is approximately FIFO if all goes well (each worker selects a random task from the next 100 to avoid conflicts).
This mode is very new, so please report any issues. You can read about the queue design here. In particular, we expect you may see problems with NFS or other filesystems that have problems with networked file locking. However, purely local use should generally be issue free. You can read more tips on using FileQueue here. You can remove a FileQueue by deleting its containing directory.
Note that the command line tool ptq ("Python Task Queue") is co-installed with Igneous and can be used to monitor queue status using e.g. ptq status $QUEUENAME.
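The lease-and-recycle behavior described above can be sketched with a toy, stdlib-only model. This is an illustration of the concept only, not the real FileQueue implementation; the class and task names are hypothetical:

```python
import time
import random

class ToyLeaseQueue:
    """Toy model of lease-based task recycling (not taskqueue's FileQueue)."""
    def __init__(self):
        self.tasks = {}  # task_id -> lease expiry timestamp (0.0 = never leased)

    def insert(self, task_id):
        self.tasks[task_id] = 0.0

    def lease(self, seconds=600.0, now=None):
        now = time.time() if now is None else now
        # A task is available if it was never leased or its lease expired,
        # i.e. a crashed worker's task recycles into the pool.
        available = [t for t, expiry in self.tasks.items() if expiry <= now]
        if not available:
            return None
        task = random.choice(available)  # approximately FIFO in the real FileQueue
        self.tasks[task] = now + seconds
        return task

    def delete(self, task_id):
        # A worker deletes a task only after finishing it successfully.
        self.tasks.pop(task_id, None)

q = ToyLeaseQueue()
q.insert("mesh-chunk-0")
t = q.lease(seconds=600, now=1000.0)          # a worker takes the task
assert t == "mesh-chunk-0"
assert q.lease(now=1100.0) is None            # still leased: no one else gets it
assert q.lease(now=2000.0) == "mesh-chunk-0"  # lease expired: task recycled
```

In the real FileQueue, a worker removes its task only after completing it, so a crashed worker's lease simply expires and another worker picks the task up.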
Producer Script
from taskqueue import TaskQueue
import igneous.task_creation as tc

cloudpath = 'gs://bucket/dataset/labels'

tq = TaskQueue("fq:///path/to/queue/directory")
tasks = tc.create_meshing_tasks(cloudpath, mip=3, shape=(256, 256, 256))
tq.insert(tasks)
print("Meshing tasks created!")

# Manifest tasks must run after all meshing tasks complete,
# so enqueue them in a second pass once the workers drain the queue.
tasks = tc.create_mesh_manifest_tasks(cloudpath)
tq.insert(tasks)
print("Manifest tasks created!")
Consumer Script
from taskqueue import TaskQueue
import igneous.tasks # magic import needed to provide task definitions
tq = TaskQueue("fq:///path/to/queue/directory")
tq.poll(
  verbose=True,      # prints progress
  lease_seconds=600, # lease each task exclusively for 10 minutes
  tally=True         # makes tq.completed work; logs 1 byte per completed task
)
Sample Cloud Use
Igneous is intended to be used with Kubernetes (k8s). A pre-built docker container is located on DockerHub as seunglab/igneous. A sample deployment.yml (used with kubectl create -f deployment.yml) is located in the root of the repository.
As Igneous is based on CloudVolume, you'll need to create a google-secret.json or aws-secret.json to access buckets located on these services.
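CloudVolume looks for these files in `~/.cloudvolume/secrets/`. A `google-secret.json` is a standard GCP service-account key downloaded from the cloud console; all values below are placeholders:

```json
{
  "type": "service_account",
  "project_id": "my-project",
  "private_key_id": "<key id>",
  "private_key": "-----BEGIN PRIVATE KEY-----\n<key>\n-----END PRIVATE KEY-----\n",
  "client_email": "igneous-worker@my-project.iam.gserviceaccount.com",
  "client_id": "<client id>"
}
```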
You'll need to create an Amazon SQS queue to store the tasks you generate. Google's TaskQueue was previously supported, but the API changed; it may be supported again in the future.
Populating the SQS Queue
There's a bit of an art to achieving high performance on SQS. You can read more about it here.
import sys
from taskqueue import TaskQueue
import igneous.task_creation as tc
cloudpath = sys.argv[1]
# The queue URL is shown in the SQS web dashboard when you select the queue.
tq = TaskQueue("sqs://queue-url")
tasks = tc.create_downsampling_tasks(
  cloudpath, mip=0,
  fill_missing=True, preserve_chunk_size=True
)
tq.insert(tasks)
print("Done!")
Executing Tasks in the Cloud
The following instructions are for Google Container Engine, but AWS has similar tools.
# Create a Kubernetes cluster
# e.g.
export PROJECT_NAME=example
export CLUSTER_NAME=example
export NUM_NODES=5 # arbitrary
# Create a Google Container Cluster
gcloud container --project $PROJECT_NAME clusters create $CLUSTER_NAME --zone "us-east1-b" --machine-type "n1-standard-16" --image-type "GCI" --disk-size "50" --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.full_control","https://www.googleapis.com/auth/taskqueue","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/cloud-platform","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" --num-nodes $NUM_NODES --network "default" --enable-cloud-logging --no-enable-cloud-monitoring
# Bind the kubectl command to this cluster
gcloud config set container/cluster $CLUSTER_NAME
# Give the cluster permission to read and write to your bucket
# You only need to include services you'll actually use.
kubectl create secret generic secrets \
--from-file=$HOME/.cloudvolume/secrets/google-secret.json \
--from-file=$HOME/.cloudvolume/secrets/aws-secret.json \
--from-file=$HOME/.cloudvolume/secrets/boss-secret.json
# Create a Kubernetes deployment
kubectl create -f deployment.yml # edit deployment.yml in root of repo
# Resizing the cluster
gcloud container clusters resize $CLUSTER_NAME --num-nodes=20 # arbitrary node count
kubectl scale deployment igneous --replicas=320 # 16 * nodes b/c n1-standard-16 has 16 cores
# Spinning down
# Important: This will leave the kubernetes master running which you
# will be charged for. You can also fully delete the cluster.
gcloud container clusters resize $CLUSTER_NAME --num-nodes=0
kubectl delete deployment igneous
Command Line Interface (CLI)
Igneous also comes with a command line interface for performing routine tasks. It currently supports downsample, xfer, mesh, skeleton, and execute, with more Igneous functions planned. Run igneous --help to see the current menu of commands and their options.
The principle of the CLI is to specify a source layer, a destination layer (if applicable), and a TaskQueue (e.g. sqs:// or fq://). First populate the queue with the desired task type, then execute against it.
The CLI is intended to handle typical tasks; for more complex needs, script Igneous in Python as shown above.