# Imcanalysis

This is a pipeline for analysing high-dimensional tissue data in an HPC environment. It includes tools for analysing IMC data in the Scanpy ecosystem, as well as several image-based analysis tools. Most tools are currently designed for IMC data, but should be adaptable to other modalities.
> [!IMPORTANT]
> **Disclaimer:** I plan to add releases and tests in the future, but for now everything is provided "as is" with no guarantees. I'll do my best to respond to issues (see Reporting issues below), but I'm a single developer who actively uses this code in my own projects, so response times can vary. Feedback and suggestions to improve any aspect of the repo are welcome and encouraged! If anything isn't clear, feel free to let me know.
Toolkit for analysing Imaging Mass Cytometry (IMC) and other spatial-omics data. It combines a Python package (SpatialBiologyToolkit), CLI pipeline stages, SLURM job templates, tutorials, and HPC helper scripts.
## Start here

- **Completely new to the command line / Python tooling?** Start with `README_NEW_USERS.md`, which covers the absolute basics.
- **Recommended workflow:** this project is HPC-first. Most analyses are easiest to run on an HPC cluster via the scripted pipeline (headless SLURM jobs), which takes you from raw data (i.e. a folder of MCD files) through the standard preprocessing and analysis steps with minimal manual intervention. A smaller amount of work is then typically done locally in notebooks for bespoke exploration and figures. Start with `README_IMC_HPC.md` plus `SLURM_scripts/README.md`.
- **Local notebooks (usually after HPC):** use `README_LOCAL.md` and `Tutorials/README.md`.
- **Legacy material:** older or experimental code lives in `External_and_old_code/` (see its `README.md`). It is not tightly maintained and is best suited to advanced users who are comfortable troubleshooting.
## Quick setup (advanced: local workstation, using SpatialBiologyToolkit in your own analysis)

This is a quick start for advanced users who simply want to import and use the `SpatialBiologyToolkit` Python package in their own local scripts/notebooks (i.e. without running the full HPC pipeline). For full details, follow `README_LOCAL.md`. The shortest version is:
- Create the conda env: `conda env create -f Local_envs/sbt_env.yml`
- Activate it: `conda activate sbt`
- Install the package in editable mode (from the repo root): `pip install -e .`
- Copy `Tutorials/` to an analysis folder outside the repo, then run `jupyter lab` from that analysis folder.
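The steps above can be run as one copy-pasteable block. This is a sketch: it assumes conda is installed and initialised for your shell, and `~/imc_analysis` is just a placeholder name for your analysis folder.

```shell
# Quick local setup for SpatialBiologyToolkit (run from the repo root).
# Assumes conda is installed; ~/imc_analysis is a placeholder folder name.
conda env create -f Local_envs/sbt_env.yml
conda activate sbt
pip install -e .

# Work outside the repo: copy the tutorials, then launch JupyterLab there.
mkdir -p ~/imc_analysis
cp -r Tutorials/ ~/imc_analysis/
cd ~/imc_analysis && jupyter lab
```

Keeping your analysis folder outside the repo means `git pull` updates to the toolkit never clobber your notebooks.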
## Components of the repository

- `SpatialBiologyToolkit/`: the core Python package where the analysis logic lives (preprocessing, denoising, clustering, spatial stats, plotting). If you import anything in Python, it usually comes from here.
- `SpatialBiologyToolkit/scripts/`: command-line "pipeline stages" that run the core steps in order. These read `config.yaml` from your dataset folder and are what the SLURM jobs call.
- `SLURM_scripts/`: job templates for running stages on HPC. The `pipeline.conf` file maps short names (like `preprocess`) to these scripts.
- SLURM stage reference (detailed): an in-depth table of each stage's purpose, environment, inputs/outputs, config blocks, run order, and traffic-light status.
- `Bash_scripts/`: small helper commands (`pl`, `pll`, `pls`, `zipqc`, `cds`) that make it easy to submit or inspect the pipeline on HPC.
- `Tutorials/`: Jupyter notebooks for interactive, exploratory analysis when you want to go beyond the scripted pipeline.
- `install/`: install/uninstall helpers used by `make install` (sets PATH, config file, and permissions on HPC).
- `Local_envs/`: a minimal environment for local analysis using SpatialBiologyToolkit.
- `HPC_env_files/`: environment specifications used to create the conda environments for the pipeline; installed automatically with `make envs`.
- `docs/`: documentation sources (Sphinx); the built HTML is in `Documentation/`.
- `External_and_old_code/`: legacy or experimental code and notebooks. Useful for advanced users, but not tightly maintained.
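On the HPC side, the two Makefile targets mentioned above are typically run once from the repo root. A minimal sketch, assuming GNU make and conda are available on the cluster:

```shell
# One-time HPC setup (sketch, run from the repo root on the cluster).
make install   # install/ helpers: sets PATH, config file, and permissions
make envs      # creates the pipeline conda envs from HPC_env_files/
```

After this, the `pl`-family helpers from `Bash_scripts/` should be on your PATH.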
## Reporting issues

Please use GitHub Issues for bugs and questions. Include:

- the pipeline stage or notebook name
- the environment file used (e.g. `Local_envs/sbt_env.yml` or `HPC_env_files/...`)
- any overrides in `config.yaml`
- a short log/traceback snippet, if available
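When reporting config overrides, a trimmed snippet of the relevant `config.yaml` block is usually enough. Illustrative only: the keys below are hypothetical placeholders, not the pipeline's actual schema — paste your real block instead.

```
# Hypothetical example only — these keys are placeholders,
# not the real config.yaml schema. Paste your actual overrides.
preprocess:
  denoise: true
clustering:
  resolution: 1.0
```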
If you’re unsure whether something is a bug or a usage question, open an issue anyway and tag it as a question.
