# Runqy
The open-source distributed task queue. From simple scripts to GPU-intensive inference—your workers handle it all. Runqy distributes the tasks.
<p align="center"> <img src="assets/demo.gif" alt="Runqy demo — from zero to task result in 90 seconds" width="800"> </p>
## Why Runqy?
- 🌍 **Workers run anywhere** — Your laptop, on-prem servers, AWS, Azure, Runpod, any machine with an internet connection. Learn more →
- 🚀 **Zero-touch deployment** — Workers pull code from Git, install dependencies, and start processing automatically. No manual setup. Learn more →
- 📄 **Simple YAML config** — Define a queue in a few lines. One YAML file, one queue. Learn more →
- 🔐 **Built-in secrets** — Pass secrets to workers via encrypted env vars. Learn more →
- 🐍 **Go server + Python SDK** — Robust Go server, familiar Python developer experience. Learn more →
- 📊 **Web monitoring UI** — Real-time dashboard with Prometheus metrics. Learn more →
## Feature Comparison
| Feature | Runqy | Celery | Temporal | Modal | BullMQ | Inngest |
|---------|-------|--------|----------|-------|--------|---------|
| Self-hosted | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ |
| Workers anywhere | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Auto-deploy from Git | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ |
| Deployment YAML | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ |
| Built-in secrets | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ |
| Monitoring UI | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ |
## Quick Start
Get Runqy running in under 60 seconds:
```bash
# 1. Start the stack
curl -O https://raw.githubusercontent.com/Publikey/runqy/main/docker-compose.quickstart.yml
docker-compose -f docker-compose.quickstart.yml up -d

# 2. Enqueue a task
pip install runqy-python
python -c "
from runqy_python import RunqyClient
client = RunqyClient('http://localhost:3000', api_key='dev-api-key')
task = client.enqueue('quickstart-oneshot', {'message': 'Hello World!'})
print(f'Task ID: {task.task_id}')
"

# 3. Check results
open http://localhost:3000/monitoring/
```
See the Quickstart Guide for the full walkthrough.
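Beyond eyeballing the dashboard, you will usually want to wait for a task result programmatically. The README does not document a blocking "wait" call, so here is a minimal polling helper that works with any status-fetching callable (for example, one wrapping `runqy task get` or an SDK lookup). The `'completed'`/`'failed'` status names are assumptions; check your Runqy version for the exact terminal states.

```python
import time

def wait_for_result(fetch_status, timeout=60.0, interval=1.0):
    """Poll a zero-argument callable until the task reaches a terminal state.

    `fetch_status` must return a dict with a 'status' key. The terminal
    states 'completed' and 'failed' are assumed here, not taken from the
    Runqy docs; adjust to match your server's actual status values.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        info = fetch_status()
        if info.get("status") in ("completed", "failed"):
            return info
        time.sleep(interval)
    raise TimeoutError(f"task did not finish within {timeout:.0f}s")
```

This keeps the polling loop decoupled from how you fetch status, so the same helper works for the CLI, the REST API, or the SDK.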
## Define a Queue
A queue is a simple YAML file:
```yaml
queues:
  image-resize:
    priority: 5
    deployment:
      # Worker code: https://github.com/acme/image-worker
      git_url: "https://github.com/acme/image-worker.git"
      branch: "main"
      startup_cmd: "python main.py"
      mode: "one_shot"
```
Deploy it:
```bash
runqy config create -f queue.yaml
```
See the Queue Configuration Reference for all options.
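A cheap pre-flight check before running `runqy config create` can catch missing deployment fields early. This sketch validates a parsed config dict; the required key set is inferred from the example above only, so consult the Queue Configuration Reference for the real schema before relying on it.

```python
# Keys taken from the example config above; this is NOT the full schema --
# check the Queue Configuration Reference for required vs optional fields.
REQUIRED_DEPLOYMENT_KEYS = {"git_url", "branch", "startup_cmd", "mode"}

def validate_queue_config(config: dict) -> list:
    """Return a list of human-readable problems found in a parsed queue config."""
    problems = []
    queues = config.get("queues")
    if not isinstance(queues, dict) or not queues:
        return ["top-level 'queues' mapping is missing or empty"]
    for name, spec in queues.items():
        deployment = spec.get("deployment", {})
        missing = REQUIRED_DEPLOYMENT_KEYS - set(deployment)
        if missing:
            problems.append(f"queue '{name}': deployment missing {sorted(missing)}")
    return problems
```

Feed it the output of any YAML parser (e.g. `yaml.safe_load(open("queue.yaml"))`) and fail fast if the list is non-empty.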
## Write a Task
```python
from runqy import task, load

@load
def setup():
    """Load models once when worker starts"""
    import torch
    return torch.load('my_model.pt')

@task
def process_image(image_url: str, model) -> dict:
    """Runs on every task execution"""
    result = model.predict(image_url)
    return {"prediction": result, "confidence": 0.95}
```
See the Python SDK Reference for the full API.
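To make the `@load`/`@task` lifecycle concrete, here is a toy re-implementation of the two decorators. It is not the real SDK (the actual runqy worker manages this inside its own process); it only illustrates the contract: the `@load` function runs once per worker, and its return value is passed as the trailing argument to every `@task` invocation.

```python
# Toy stand-ins for runqy's @load/@task decorators, written only to show
# the lifecycle; do not use these in place of the real SDK.
_loaded = None
_load_fn = None

def load(fn):
    """Register the one-time setup function (mirrors runqy's @load)."""
    global _load_fn
    _load_fn = fn
    return fn

def task(fn):
    """Wrap a task so it receives the setup result as its last argument."""
    def run(*args):
        global _loaded
        if _loaded is None and _load_fn is not None:
            _loaded = _load_fn()       # setup() runs once per worker process
        return fn(*args, _loaded)      # loaded resource passed to every task
    return run

@load
def setup():
    return {"model": "dummy"}          # stands in for torch.load('my_model.pt')

@task
def process(item, model):
    return {"input": item, "model": model["model"]}
```

Calling `process("a.jpg")` triggers `setup()` on first use; subsequent calls reuse the loaded object, which is why expensive model loading belongs in `@load` rather than in the task body.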
## Enqueue Tasks
Three ways to enqueue:
```bash
# CLI
runqy task enqueue -q image-resize -p '{"image":"img001.jpg","size":256}'
```

```bash
# REST API
curl -s -X POST http://localhost:3000/queue/add \
  -H "X-API-Key: dev-api-key" \
  -H "Content-Type: application/json" \
  -d '{"queue":"image-resize","data":{"image":"img002.jpg"}}'
```

```python
# Python SDK
from runqy_python import RunqyClient

client = RunqyClient('http://localhost:3000', api_key='dev-api-key')
task = client.enqueue('image-resize', {'image': 'img003.jpg'})
```
See the API Reference for all endpoints.
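If you want to hit the REST endpoint without installing the SDK, the request can be built with the standard library alone. The endpoint path and payload shape below are copied from the curl example above; verify both against the API Reference for your server version.

```python
import json

def build_enqueue_request(base_url, queue, data, api_key):
    """Assemble url, headers, and body for an enqueue POST.

    Path and JSON shape mirror the curl example in this README
    ({"queue": ..., "data": ...}); treat them as assumptions, not spec.
    """
    url = base_url.rstrip("/") + "/queue/add"
    headers = {"X-API-Key": api_key, "Content-Type": "application/json"}
    body = json.dumps({"queue": queue, "data": data})
    return url, headers, body

# To actually send it with stdlib urllib (no extra dependencies):
# import urllib.request
# req = urllib.request.Request(url, body.encode(), headers, method="POST")
# urllib.request.urlopen(req)
```

Separating request construction from transport also makes the payload easy to unit-test without a running server.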
## Examples
Explore real-world use cases:
- quickstart-oneshot — Simple task execution
- quickstart-longrunning — Long-running worker processes
- data-pipeline — Multi-step data processing (API calls, ETL)
- webhook-processor — Event-driven webhook handling (Stripe, GitHub)
- scheduled-tasks — Cron-like healthchecks, reports, and cleanup
- multi-queue — Priority-based routing (critical, standard, bulk)
- gpu-inference — GPU-accelerated image generation with Stable Diffusion
- star-runqy — Vault secrets management tutorial
## Installation

### Quick Install

**Linux/macOS:**

```bash
curl -fsSL https://raw.githubusercontent.com/publikey/runqy/main/install.sh | sh
```

**Windows (PowerShell):**

```powershell
iwr https://raw.githubusercontent.com/publikey/runqy/main/install.ps1 -useb | iex
```

### Docker

```bash
docker pull ghcr.io/publikey/runqy:latest
```

### From Source

```bash
git clone https://github.com/Publikey/runqy.git
cd runqy
go build -o runqy ./app
```
See the Installation Guide for detailed instructions.
## Requirements
- Redis + PostgreSQL
## Server Configuration
Configure the server via environment variables:
```bash
export REDIS_HOST=localhost:6379
export RUNQY_API_KEY=your-secret-key
```
See the Configuration Reference for all options.
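When writing tooling around a Runqy server, it helps to gather these settings in one place with the same defaults the quickstart uses. Only `REDIS_HOST` and `RUNQY_API_KEY` appear in this README; the Configuration Reference documents the full variable list, so treat anything beyond these two as out of scope for this sketch.

```python
import os

def server_config(env=None):
    """Read the two documented settings from the environment.

    Defaults mirror the quickstart (local Redis, empty key). Other
    variables exist; see the Configuration Reference.
    """
    if env is None:
        env = os.environ
    return {
        "redis_host": env.get("REDIS_HOST", "localhost:6379"),
        "api_key": env.get("RUNQY_API_KEY", ""),
    }
```

Passing `env` explicitly keeps the function testable without mutating the process environment.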
## CLI Reference
Manage your deployment locally or remotely:
```bash
runqy queue list                                   # List all queues
runqy config create -f queue.yaml                  # Deploy a queue
runqy task enqueue -q myqueue -p '{"key":"value"}' # Enqueue task
runqy task list myqueue                            # List tasks
runqy task get myqueue <task_id>                   # Get task result
runqy worker list                                  # List active workers
```
See the CLI Reference for all commands.
## Monitoring
Access the built-in web dashboard at `/monitoring`:
**Queue Overview** — Status, pending/active/completed counts, latency per queue:

<p align="center"> <img src="docs/images/monitoring-queues.png" alt="Runqy Queues" width="700"> </p>

**Workers** — CPU/RAM usage, assigned queues, heartbeat status:

<p align="center"> <img src="docs/images/monitoring-workers.png" alt="Runqy Workers" width="700"> </p>

Runqy also exposes Prometheus metrics at `/metrics`. See the Monitoring Guide for Grafana dashboards and alerting.
## Architecture
Tasks flow from clients → runqy server → queues → workers running anywhere. Workers are stateless and pull code from Git on startup.
<p align="center"> <img src="assets/architecture.png" alt="runqy architecture" width="700"> </p>

**Zero-touch Deployment:** Workers connect to the server, pull your code from Git, install dependencies, and start processing — no manual setup required.

<p align="center"> <img src="assets/code_pull.png" alt="zero-touch deployment" width="700"> </p>

## Links
- 📖 Documentation — Complete guides and API reference
- 🌐 Website — Project homepage
- 🐍 Python SDK — Client library
- 🔧 Worker Runtime — Task processor
- 🤝 Contributing — How to contribute
- 📄 License — MIT License
<p align="center"> <strong>Your workers, your machines, your rules.</strong><br> Built on <a href="https://github.com/hibiken/asynq">asynq</a> • Made with ❤️ for AI developers </p>