srunx

PyPI · Python 3.12+ · License · Actions Status

A modern Python library for SLURM workload manager integration with workflow orchestration capabilities.

Features

  • 🧩 Workflow Orchestration: YAML-based workflow definitions with Prefect integration
  • ⚡ Fine-Grained Parallel Execution: Jobs start as soon as their specific dependencies complete, rather than waiting for an entire workflow phase
  • 🔗 Branched Dependency Control: Independent branches in dependency graphs run simultaneously without false dependencies
  • 📊 Real-Time Monitoring: Track job states and GPU resource availability with automatic state detection
  • 🔔 Notification System: Slack integration and custom callbacks for job state changes
  • 🔌 Remote SSH Integration: Submit and monitor SLURM jobs on remote servers via SSH
  • 📝 Template System: Customizable Jinja2 templates for SLURM scripts
  • 🛡️ Type Safe: Full type hints and mypy compatibility
  • 🖥️ CLI Tools: Command-line interfaces for both job management and workflows
  • 🚀 Simple Job Submission: Easy-to-use API for submitting SLURM jobs
  • ⚙️ Flexible Configuration: Support for various environments (conda, venv, sqsh)
  • 📋 Job Management: Submit, monitor, cancel, and list jobs
  • 🔄 Error Recovery: Graceful handling of SLURM command failures and network issues

Installation

Using uv (Recommended)

uv add srunx

Using pip

pip install srunx

Development Installation

git clone https://github.com/ksterx/srunx.git
cd srunx
uv sync --dev

Quick Start

You can try the workflow example:

cd examples
srunx flow run sample_workflow.yaml

The sample workflow defines the dependency graph below:

graph TD
    A["Job A"]
    B1["Job B1"]
    B2["Job B2"]
    C["Job C"]
    D["Job D"]

    A --> B1
    A --> C
    B1 --> B2
    B2 --> D
    C --> D

Jobs run precisely when they're ready, minimizing wasted compute hours. The workflow engine provides fine-grained dependency control: when Job A completes, B1 and C start immediately in parallel. As soon as B1 finishes, B2 starts regardless of C's status. Job D waits only for both B2 and C to complete, enabling maximum parallelization.
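The scheduling idea can be illustrated with a small standard-library sketch (this is not srunx internals, just the concept): each job blocks only on its own direct dependencies, so independent branches proceed in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

# Dependency graph from the diagram above, in topological order
deps = {"A": [], "B1": ["A"], "C": ["A"], "B2": ["B1"], "D": ["B2", "C"]}

def run_graph(deps, work):
    """Start each job the moment its own dependencies finish."""
    futures = {}

    def runner(name):
        for dep in deps[name]:       # block only on direct dependencies
            futures[dep].result()
        return work(name)

    # Submit in topological order so every dependency's future exists
    with ThreadPoolExecutor(max_workers=len(deps)) as pool:
        for name in deps:
            futures[name] = pool.submit(runner, name)
    return {name: f.result() for name, f in futures.items()}

order = []
run_graph(deps, order.append)
```

Running this records a completion order in which B1 and C both follow A immediately, B2 follows only B1, and D comes last.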

Job and Resource Monitoring

srunx provides comprehensive monitoring capabilities for tracking job states and GPU resource availability on SLURM clusters.

Monitor Commands

The srunx monitor command provides three monitoring modes:

srunx monitor jobs      # Monitor SLURM job state changes
srunx monitor resources # Monitor GPU resource availability
srunx monitor cluster   # Scheduled periodic status reports

Job Monitoring

Monitor SLURM jobs until completion or continuously track state changes:

# Monitor single job until completion
srunx monitor jobs 12345

# Monitor multiple jobs
srunx monitor jobs 12345 12346 12347

# Monitor all your jobs
srunx monitor jobs --all

# Continuous monitoring with state change notifications
srunx monitor jobs 12345 --continuous

# With custom poll interval and timeout
srunx monitor jobs 12345 --interval 30 --timeout 3600

# With Slack notifications
srunx monitor jobs 12345 --continuous --notify $WEBHOOK_URL

Resource Monitoring

Monitor GPU resource availability and wait for sufficient resources:

# Display current resource availability
srunx monitor resources

# Display resources for specific partition
srunx monitor resources --partition gpu

# Wait for 4 GPUs to become available
srunx monitor resources --min-gpus 4

# Continuous resource monitoring with notifications
srunx monitor resources --min-gpus 2 --continuous --notify $WEBHOOK_URL

# Show output in JSON format
srunx monitor resources --format json

Scheduled Cluster Reports

Send periodic SLURM cluster status reports via Slack:

# Send hourly status reports
srunx monitor cluster --schedule 1h --notify $WEBHOOK_URL

# Send reports every 30 minutes
srunx monitor cluster --schedule 30m --notify $WEBHOOK_URL

# Daily reports at 9 AM (cron format)
srunx monitor cluster --schedule "0 9 * * *" --notify $WEBHOOK_URL

# Customize report contents
srunx monitor cluster --schedule 1h --notify $WEBHOOK_URL \
  --include jobs,resources,running --max-jobs 20

Programmatic Monitoring

from srunx import Slurm
from srunx.monitor import JobMonitor, ResourceMonitor
from srunx.monitor.types import MonitorConfig
from srunx.callbacks import SlackCallback

client = Slurm()

# Submit a job (`job` is a Job object constructed beforehand)
job = client.submit(job)

# Monitor until completion
monitor = JobMonitor(
    job_ids=[job.job_id],
    config=MonitorConfig(poll_interval=30, timeout=3600)
)
monitor.watch_until()  # Blocks until job completes or timeout

# Continuous monitoring with callbacks
slack_callback = SlackCallback(webhook_url="your_webhook_url")
monitor = JobMonitor(
    job_ids=[job.job_id],
    config=MonitorConfig(poll_interval=10, notify_on_change=True),
    callbacks=[slack_callback]
)
monitor.watch_continuous()  # Ctrl+C to stop

The scheduled cluster report includes:

  • 📊 Job Queue Status: Pending, running, completed, failed job counts
  • 🎮 GPU Resources: Total, in-use, available GPUs with utilization percentage
  • 🖥️ Node Statistics: Total, idle, down nodes
  • 👤 User Jobs: Your personal job queue status (optional)

Schedule formats:

  • Interval: 1h, 30m, 1d (hours, minutes, days)
  • Cron: "0 9 * * *" (minute hour day month weekday)
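Parsing the interval form is straightforward; a hypothetical helper (illustrative, not part of srunx) might look like this:

```python
import re

def interval_to_seconds(spec: str) -> int:
    """Convert '30m' / '1h' / '1d' style interval specs to seconds."""
    m = re.fullmatch(r"(\d+)([mhd])", spec)
    if not m:
        raise ValueError(f"not an interval spec: {spec!r}")
    value, unit = int(m.group(1)), m.group(2)
    return value * {"m": 60, "h": 3600, "d": 86400}[unit]
```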

Programmatic Resource Monitoring

resource_monitor = ResourceMonitor(
    min_gpus=4,
    partition="gpu",
    config=MonitorConfig(poll_interval=60, timeout=7200),
)
resource_monitor.watch_until()  # Blocks until resources are available


Monitor Multiple Jobs

# Monitor multiple jobs simultaneously
srunx monitor jobs 12345 12346 12347

# Monitor all your jobs by ID
srunx monitor jobs $(srunx list --format json | jq -r '.[].job_id')

Advanced Monitoring Features

  • Automatic State Detection: Monitors detect PENDING → RUNNING → COMPLETED/FAILED transitions
  • Error Recovery: Gracefully handles SLURM command failures and network issues
  • Timeout Support: Configure maximum monitoring duration with automatic cleanup
  • Callback System: Integrate with Slack, email, or custom notification systems
  • Resource Thresholds: Wait for specific GPU counts before proceeding with workflows
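At its core, state detection with timeout support is a polling loop. A minimal generic sketch (the `get_state` and `on_change` callables are placeholders, not srunx API):

```python
import time

TERMINAL = {"COMPLETED", "FAILED", "CANCELLED", "TIMEOUT"}

def watch(job_id, get_state, on_change, poll_interval=30, timeout=3600):
    """Poll a job's state, firing on_change for every transition,
    until a terminal state is reached or the timeout expires."""
    deadline = time.monotonic() + timeout
    last = None
    while time.monotonic() < deadline:
        state = get_state(job_id)
        if state != last:
            on_change(job_id, last, state)
            last = state
        if state in TERMINAL:
            return state
        time.sleep(poll_interval)
    raise TimeoutError(f"job {job_id} still {last} after {timeout}s")
```

A callback plugged into `on_change` is where a Slack or email notification would fire.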

Remote SSH Integration

srunx includes full SSH integration, allowing you to submit and monitor SLURM jobs on remote servers. This functionality was integrated from the ssh-slurm project.

SSH Quick Start

# Submit a script to a remote SLURM server
srunx ssh script.sh --host myserver

# Using SSH config profiles
srunx ssh script.py --profile dgx-server

# Direct connection parameters
srunx ssh script.sh --hostname dgx.example.com --username researcher --key-file ~/.ssh/dgx_key

SSH Profile Management

Create and manage connection profiles for easy access to remote servers:

# Add a profile using SSH config
srunx ssh profile add myserver --ssh-host dgx1 --description "Main DGX server"

# Add a profile with direct connection details
srunx ssh profile add dgx-direct --hostname dgx.example.com --username researcher --key-file ~/.ssh/dgx_key --description "Direct DGX connection"

# List all profiles
srunx ssh profile list

# Set current default profile
srunx ssh profile set myserver

# Show profile details
srunx ssh profile show myserver

# Update profile settings
srunx ssh profile update myserver --description "Updated description"

# Remove a profile
srunx ssh profile remove old-server

SSH Environment Variables

Environment variables can be managed in profiles and passed during job submission:

# Pass environment variables during job submission
srunx ssh train.py --host myserver --env CUDA_VISIBLE_DEVICES=0,1,2,3
srunx ssh script.py --host myserver --env WANDB_PROJECT=my_project --env-local WANDB_API_KEY

# Environment variables in profiles (stored in profile configuration)
# Add profile with environment variables
srunx ssh profile add gpu-server --hostname gpu.example.com --username user --key-file ~/.ssh/key

# Common environment variables are automatically detected and transferred:
# - HF_TOKEN, HUGGING_FACE_HUB_TOKEN
# - WANDB_API_KEY, WANDB_ENTITY, WANDB_PROJECT  
# - OPENAI_API_KEY, ANTHROPIC_API_KEY
# - CUDA_VISIBLE_DEVICES
# - And many more ML/AI related variables
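The auto-detection described above amounts to filtering the local environment against an allow-list. A sketch (the allow-list mirrors the variables named in the comments; the helper itself is illustrative, not srunx's implementation):

```python
import os

# Variables commonly forwarded to the remote job (illustrative subset)
FORWARDED = {
    "HF_TOKEN", "HUGGING_FACE_HUB_TOKEN",
    "WANDB_API_KEY", "WANDB_ENTITY", "WANDB_PROJECT",
    "OPENAI_API_KEY", "ANTHROPIC_API_KEY",
    "CUDA_VISIBLE_DEVICES",
}

def detect_env(environ=os.environ):
    """Return the subset of the environment worth transferring."""
    return {k: v for k, v in environ.items() if k in FORWARDED}
```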

SSH Job Submission Options

# Basic job submission
srunx ssh train.py --host myserver

# Job with custom name and monitoring
srunx ssh experiment.sh --profile dgx-server --job-name "ml-experiment-001"

# Pass environment variables
srunx ssh script.py --host myserver --env CUDA_VISIBLE_DEVICES=0,1 --env-local WANDB_API_KEY

# Custom polling and timeout
srunx ssh long_job.sh --host myserver --poll-interval 30 --timeout 7200

# Submit without monitoring
srunx ssh background_job.sh --host myserver --no-monitor

# Keep uploaded files for debugging
srunx ssh debug_script.py --host myserver --no-cleanup

SSH Connection Methods

srunx supports multiple connection methods (in priority order):

  1. SSH Config Host (--host flag): Uses entries from ~/.ssh/config
  2. Saved Profiles (--profile flag): Uses connection profiles stored in config
  3. Direct Parameters: Specify connection details directly
  4. Current Profile: Falls back to the default profile if set
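That priority order is a first-match resolution; sketched below with placeholder return values (not srunx code):

```python
def resolve_connection(host=None, profile=None, direct=None, current_profile=None):
    """Pick a connection source in the documented priority order."""
    if host is not None:
        return ("ssh_config", host)          # 1. ~/.ssh/config entry
    if profile is not None:
        return ("profile", profile)          # 2. saved profile
    if direct is not None:
        return ("direct", direct)            # 3. explicit parameters
    if current_profile is not None:
        return ("profile", current_profile)  # 4. current default profile
    raise ValueError("no connection method available")
```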

SSH Configuration Files

  • SSH Config: ~/.ssh/config - Standard SSH configuration
  • srunx Profiles: ~/.config/ssh-slurm.json - SSH profile storage with environment variables

SSH Advanced Usage Examples
