BIOMERO — BioImage Analysis in OMERO
🚀 This package is part of BIOMERO 2.0 — For complete deployment and FAIR infrastructure setup, start with the NL-BIOMERO Documentation 📖
The BIOMERO framework, for BioImage analysis in OMERO, allows you to run (FAIR) bioimage analysis workflows directly from OMERO on a high-performance compute (HPC) cluster, remotely through SSH.
BIOMERO 2.0
We have released an enhanced BIOMERO experience!
BIOMERO 2.0 is a complete ecosystem that includes:
- BIOMERO.analyzer (this Python library) - The core analysis engine
- BIOMERO.scripts - OMERO scripts for HPC integration
- BIOMERO.importer - Automated data import service
- OMERO.biomero - Modern web interface plugin
Full workflow tracking is now supported via a database and dashboard. The OMERO.biomero plugin provides an intuitive interface in OMERO.web. Every workflow run is uniquely identifiable, and resulting assets are accessible directly in OMERO.
NL-BIOMERO provides a full containerized deployment stack and documentation:
- Repository: NL-BIOMERO
- Documentation: NL-BIOMERO GitHub Pages <-- start reading here
📊 Highlights
| Feature | BIOMERO 1.x | BIOMERO 2.0 |
|---------|-------------|-------------|
| Workflow Tracking | Logs only | Full database events |
| Interface | Scripts only | Modern web plugin + scripts |
| Progress Monitoring | Manual | Live dashboards |
| Job History | None | Complete execution history |
| Analytics | None | Integrated Metabase |
BIOMERO Python library (BIOMERO.analyzer)
The BIOMERO framework consists of this Python library biomero (also known as BIOMERO.analyzer), together with the BIOMERO.scripts that can be run directly from the OMERO web interface.
The package includes the SlurmClient class, which provides SSH-based connectivity and interaction with a Slurm (high-performance compute) cluster. The package enables users to submit jobs, monitor job status, retrieve job output, and perform other Slurm-related tasks. Additionally, the package offers functionality for configuring and managing paths to Slurm data and Singularity images (think Docker containers...), as well as specific FAIR image analysis workflows and their associated repositories.
Overall, the biomero package simplifies the integration of HPC functionality within the OMERO platform for admins and provides an efficient and end-user-friendly interface towards both the HPC and FAIR workflows.
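To make the SSH pattern concrete, here is a minimal, self-contained sketch of the kind of remote interaction that `SlurmClient` wraps: submitting a job script over SSH and parsing the job ID from `sbatch`'s reply. The helper names below are hypothetical illustrations, not part of the BIOMERO API:

```python
import re
import subprocess

def submit_job_over_ssh(host: str, script_path: str) -> int:
    """Submit a Slurm job script on a remote login node via SSH and
    return the job id parsed from sbatch's reply.
    (Hypothetical helper -- BIOMERO's SlurmClient wraps this pattern.)
    """
    result = subprocess.run(
        ["ssh", host, f"sbatch {script_path}"],
        capture_output=True, text=True, check=True,
    )
    return parse_sbatch_job_id(result.stdout)

def parse_sbatch_job_id(sbatch_output: str) -> int:
    """On success, sbatch prints e.g. 'Submitted batch job 12345'."""
    match = re.search(r"Submitted batch job (\d+)", sbatch_output)
    if match is None:
        raise ValueError(f"unexpected sbatch output: {sbatch_output!r}")
    return int(match.group(1))
```

In practice, `SlurmClient` manages the connection, data paths, and workflow containers for you; this only illustrates the underlying mechanics.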
⚠️ Warning: Default settings are intended for short/medium jobs. For long workflows (>45 min), please change some default settings:
- Slurm jobs will timeout after 45 minutes — see Time Limit on Slurm
- OMERO scripts (including BIOMERO.scripts) will timeout after 60 minutes — adjust OMERO script timeout
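For reference, on the Slurm side the wall-time limit of a job is set with the `--time` directive in the job script. The fragment below is illustrative only; where BIOMERO actually configures this is described in the linked documentation:

```shell
#!/bin/bash
#SBATCH --job-name=biomero-workflow
#SBATCH --time=04:00:00   # wall-clock limit as HH:MM:SS; raises the 45-minute default
# ... rest of the job script (illustrative fragment, not BIOMERO's generated script)
```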
Overview
In the figure below, we show our BIOMERO framework for BioImage analysis in OMERO.
BIOMERO 1.0 consists of this Python library (biomero) and the integrations within OMERO, currently through our BIOMERO.scripts.
For the BIOMERO 2.0 setup, see NL-BIOMERO for deployment; and for details on the design and FAIR features, see our latest preprint: “BIOMERO 2.0: end-to-end FAIR infrastructure for bioimaging data import, analysis, and provenance”
Deploy with NL-BIOMERO
For the easiest deployment and integration with other FAIR infrastructure, use the NL-BIOMERO stack:
- NL-BIOMERO deployment repo: https://github.com/NL-BioImaging/NL-BIOMERO
- OMERO.biomero OMERO.web plugin: https://github.com/NL-BioImaging/OMERO.biomero
- Prebuilt BIOMERO processor container: https://hub.docker.com/r/cellularimagingcf/biomero
Quickstart
For a quick overview of what this library can do for you, we can install an example setup locally with Docker:
- Set up a local OMERO with this library:
  - Follow the Quickstart of https://github.com/NL-BioImaging/NL-BIOMERO
- Set up a local Slurm with SSH access:
  - Follow the Quickstart of https://github.com/NL-BioImaging/NL-BIOMERO-Local-Slurm
- Upload some data with OMERO.insight to the `localhost` server (... we are working on a web importer ... TBC)
- Try out some scripts from https://github.com/NL-BioImaging/biomero-scripts (already installed in step 1!):
  - Run script `biomero>admin>SLURM Init environment...`
    - Get a coffee or something. This will take at least 10 min to download all the workflow images.
  - Select your image / dataset and run script `biomero>workflows>SLURM Run Workflow...`
    - Select at least one of the `Select how to import your results` options, e.g. change the `Import into NEW Dataset` text to `hello world`
    - Select a fun workflow, e.g. `cellpose`
      - Change the `nuc channel` to the channel to segment (note that 0 is for grey, so 1, 2, 3 for RGB)
      - Uncheck `use gpu` (our local Slurm cluster from step 2 doesn't come with GPU support built into the containers)
    - Refresh your OMERO `Explore` tab to see your `hello world` dataset with a mask image when the workflow is done!
Prerequisites & Getting Started with BIOMERO
Installation
Using pip (Cross-platform)
BIOMERO can be installed via pip with different dependency sets:
```shell
# Basic library only (core BIOMERO functionality)
python3 -m pip install biomero

# With BIOMERO.scripts requirements (includes BIOMERO.scripts dependencies)
python3 -m pip install biomero[full]

# Latest development version with BIOMERO.scripts requirements
python3 -m pip install 'git+https://github.com/NL-BioImaging/biomero[full]'

# With test dependencies (for local testing/development)
python3 -m pip install biomero[test]

# With both test and BIOMERO.scripts requirements
python3 -m pip install biomero[test,full]
```
Dependency sets explained:
- Default (no extras): Core BIOMERO library for basic functionality
- `[test]`: Adds pytest and coverage tools for local testing and development
- `[full]`: Adds BIOMERO.scripts requirements (`ezomero`, `tifffile`, `omero-metadata`, `omero-cli-zarr`), which require complex dependencies like `zeroc-ice` and `omero-py`
Note: The `[full]` dependencies require complex packages like `zeroc-ice` and `omero-py` that need system libraries. For OMERO integration, you may need to install these separately or use conda.
Slurm Requirements
Note: This library has only been tested on Slurm versions 21.08.6 and 22.05.09 !
Your Slurm cluster/login node needs to have:
- SSH access w/ public key (headless)
- SCP access (generally comes with SSH)
- 7zip installed
- Singularity/Apptainer installed
- (Optional) Git installed, if you want your own job scripts
- Slurm accounting enabled
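A quick way to verify these prerequisites is to probe the login node for the required binaries. The sketch below is a hypothetical helper (not part of BIOMERO); the command runner is injected so you can plug in your own SSH call:

```python
from typing import Callable

# Binaries the login node must provide, per the requirements above
# (use "apptainer" instead of "singularity" where applicable).
REQUIRED_BINARIES = ["sbatch", "sacct", "scp", "7z", "singularity"]

def check_prerequisites(run_remote: Callable[[str], int],
                        binaries=REQUIRED_BINARIES) -> list[str]:
    """Return the binaries missing on the remote host.

    `run_remote` executes a shell command remotely (e.g. over SSH) and
    returns its exit status; it is injected here so the check is
    testable without a cluster. Hypothetical helper, not BIOMERO API.
    """
    return [b for b in binaries if run_remote(f"command -v {b}") != 0]
```

Against a real cluster, pass a runner such as `lambda cmd: subprocess.run(["ssh", host, cmd]).returncode`.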
OMERO Requirements
Your OMERO processing node needs to have:
- SSH client and access to the Slurm cluster (w/ private key / headless)
- SCP access to the Slurm cluster
- Python3.10+
- This library installed:
  - With BIOMERO.scripts dependencies: `python3 -m pip install biomero[full]`
  - Latest development version: `python3 -m pip install 'git+https://github.com/NL-BioImaging/biomero[full]'`
- Configuration setup at `/etc/slurm-config.ini`
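That configuration file is a standard INI file, so it can be inspected with Python's `configparser`. The section and key names below are illustrative placeholders only; consult the BIOMERO documentation for the actual schema:

```python
import configparser

# Illustrative /etc/slurm-config.ini content -- section and key names
# here are placeholders; see the BIOMERO docs for the real schema.
EXAMPLE_CONFIG = """
[SSH]
host = slurm-login

[SLURM]
slurm_data_path = /data/omero-jobs
"""

config = configparser.ConfigParser()
config.read_string(EXAMPLE_CONFIG)  # in production: config.read("/etc/slurm-config.ini")
print(config["SSH"]["host"])        # prints: slurm-login
```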
Your OMERO server node needs to have:
- Some OMERO example scripts installed to interact with this library:
  - My examples on GitHub: https://github.com/NL-BioImaging/biomero-scripts
  - Install those at `/opt/omero/server/OMERO.server/lib/scripts/slurm/`, e.g. `git clone https://github.com/NL-BioImaging/biomero-scripts.git <pat`