Omnipose
========

*Omnipose: a high-precision solution for morphology-independent cell segmentation*
.. raw:: html

    <img src="https://github.com/kevinjohncutler/omnipose/blob/main/logo3.png?raw=true" width="200" title="bacteria" alt="bacteria" align="right" vspace="0">
    <img src="https://github.com/kevinjohncutler/omnipose/blob/main/logo.png?raw=true" width="200" title="omnipose" alt="omnipose" align="center" vspace="0">

|Downloads| |PyPI version|
Omnipose is a general image segmentation tool that builds on
`Cellpose <https://github.com/MouseLand/cellpose>`__ in a number of ways
described in our
`paper <https://www.nature.com/articles/s41592-022-01639-4>`__. It works
for both 2D and 3D images and on any imaging modality or cell shape, so
long as you train it on representative images. We have several
pre-trained models for:
- bacterial phase contrast: trained on a diverse range of bacterial species and morphologies.
- bacterial fluorescence: trained on the subset of the phase data that had a membrane or cytosol tag.
- C. elegans: trained on a couple of OpenWorm videos and the
  `BBBC010 <https://bbbc.broadinstitute.org/BBBC010>`__ alive/dead assay. We are working on expanding this significantly with the help of other labs contributing ground-truth data.
- cyto2: trained on user data submitted through the Cellpose GUI. Very diverse data, but not necessarily the best quality. This model can be a good starting point for users making their own ground-truth datasets.
Try out Omnipose online
-----------------------

New users can check out the
`ZeroCostDL4Mic <https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki>`__
`Cellpose notebook on Google Colab <https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/Beta%20notebooks/Cellpose_2D_ZeroCostDL4Mic.ipynb>`__
to try out our original release of Omnipose. We need to make sure this
gets updated to the most recent version of Omnipose with advanced 3D
features and more built-in models.
Use the GUI
-----------

Launch the Omnipose-optimized version of the Cellpose GUI from the
terminal: ``omnipose``. Version 0.4.0 and onward will not install the
GUI dependencies by default. When you first run the GUI command, you
will be prompted to install them. On Ubuntu 22.04 and later (and
possibly earlier), we found it necessary to run the following to install
some missing system packages:

::

    sudo apt install libxcb-cursor0 libxcb-xinerama0
Our version of the GUI gives easy access to the parameters you need to
run Omnipose in large batches via CLI or Jupyter notebooks. The
`ncolor <https://github.com/kevinjohncutler/ncolor>`__ label
representation is now the default and can be toggled off for saving
masks in standard format.
Standalone versions of this GUI for Windows, macOS, and Linux are
available on the `OSF repository <https://osf.io/xmury/>`__.
How to install Omnipose
-----------------------

.. _install_start:

- Install an
  `Anaconda <https://www.anaconda.com/download/>`__ distribution of
  Python. Note that you might need to use an anaconda prompt if you did
  not add anaconda to the path. Alternatives like miniconda also work
  just as well.

- Open an anaconda prompt / command prompt with ``conda`` for python 3
  in the path.

- To create a new environment for CPU only, run

  ::

      conda create -n omnipose 'python==3.10.12' pytorch

  For users with NVIDIA GPUs, add these additional arguments:

  ::

      torchvision pytorch-cuda=11.8 -c pytorch -c nvidia

  See `GPU support <#gpu-support>`__ for more details. Python 3.10 is
  not a strict requirement; see
  `Python compatibility <#python-compatibility>`__ for more about
  choosing your python version.

- To activate this new environment, run

  ::

      conda activate omnipose

- To install the latest PyPI release of Omnipose, run

  ::

      pip install omnipose

  or, for the most up-to-date development version,

  ::

      git clone https://github.com/kevinjohncutler/omnipose.git
      cd omnipose
      pip install -e .
.. _install_stop:
.. warning::
    If you previously installed Omnipose, please run

    .. code-block::

        pip uninstall cellpose_omni && pip cache remove cellpose_omni

    to prevent version conflicts. See :ref:`project structure <project-structure>` for more details.
Python compatibility
--------------------

.. _python_start:
I have tested Omnipose extensively on Python version 3.8.5 and have
encountered issues on some lower versions. Versions up to 3.10.11 have
been confirmed compatible, but I have encountered bugs with the GUI
dependencies on 3.11+. If you have a system or global pyenv python3
installation, check your python version by running ``python -V`` before
making your conda environment, and choose a different version for the
environment. That way, there is no crosstalk between pip-installed
packages inside and outside your environment. So if you have 3.x.y
installed via pyenv etc., install your environment with 3.x.z instead.
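As a sketch of the rule above, the tested range can be expressed in code (the ``compatible`` helper is hypothetical, not part of Omnipose):

```python
import sys

# Hypothetical helper illustrating the tested range described above:
# 3.8.5 through 3.10.x are confirmed; 3.11+ has GUI dependency bugs.
def compatible(version=sys.version_info[:3]):
    """Return True if the interpreter version falls in the tested range."""
    return (3, 8, 5) <= tuple(version) < (3, 11)

print(compatible((3, 10, 12)))  # True: confirmed compatible
print(compatible((3, 11, 0)))   # False: GUI bugs reported on 3.11+
```

Running this before creating your environment is a quick way to spot a clash with a system interpreter of the same minor version.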
.. _python_stop:
Pyenv versus Conda
~~~~~~~~~~~~~~~~~~
.. _pyenv_start:
Pyenv also works great for creating an environment for installing
Omnipose (and it also works a lot better for installing Napari alongside
it, in my experience - use ``pip install "napari[pyqt6]"`` to ensure no Qt conflicts).
Simply set your global version anywhere from
3.8.5-3.10.11 and run ``pip install omnipose``. I've had no problems
with GPU compatibility with this method on Linux, as pip collects all
the required packages. Conda is technically more reproducible, but often
finicky. You can use pyenv on Windows and macOS too, and as of mid 2024,
it works perfectly on Apple Silicon (better than conda!).
.. _pyenv_stop:
GPU support
~~~~~~~~~~~
.. _gpu_start:
Omnipose runs on the CPU on macOS, Windows, and Linux. PyTorch has
historically only supported NVIDIA GPUs, but has more recently begun
supporting Apple Silicon GPUs. It looks like AMD support may be
available these days (ROCm), but I have not tested that. Windows and
Linux installs are straightforward:
Your PyTorch version (>=1.6) needs to be compatible with your NVIDIA
driver. Older cards may not be supported by the latest drivers and thus
not supported by the latest PyTorch version. See the official
documentation on installing both the `most recent <https://pytorch.org/get-started/locally/>`__ and
`previous <https://pytorch.org/get-started/previous-versions/>`__
combinations of CUDA and PyTorch to suit your needs. Accordingly, you
can get started with CUDA 11.8 by making the following environment:
::

    conda create -n omnipose 'python==3.10.12' pytorch torchvision pytorch-cuda=11.8 \
        -c pytorch -c nvidia
Note that the official PyTorch command includes torchaudio, but that is
not needed for Omnipose. (*torchvision appears to be necessary these
days*). If you are on older drivers, you can get started with an older
version of CUDA, *e.g.* 10.2:
::

    conda create -n omnipose pytorch=1.8.2 cudatoolkit=10.2 -c pytorch-lts
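Either way, once the environment is active, you can sanity-check that PyTorch sees your GPU (a quick diagnostic, not an Omnipose command):

```shell
# Prints the PyTorch version and True if a CUDA device is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```

If this prints ``False``, revisit the driver/CUDA/PyTorch combination before troubleshooting Omnipose itself.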
For Apple Silicon, download
`omnipose_mac_environment.yml <omnipose_mac_environment.yml>`__ and
install the environment:
::

    conda env create -f <path_to_environment_file>
    conda activate omnipose
You may edit this yml to change the name or python version etc. For more
notes on Apple Silicon development, see `this
thread <https://github.com/kevinjohncutler/omnipose/issues/14>`__. On
all systems, remember that you may need to use ipykernel to use the
omnipose environment in a notebook.
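For example, the environment can be registered as a Jupyter kernel like so (standard ipykernel usage; the kernel and display names here are arbitrary):

```shell
# From inside the activated omnipose environment:
pip install ipykernel
python -m ipykernel install --user --name omnipose --display-name "Python (omnipose)"
```

The new kernel then appears in the Jupyter kernel picker.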
.. _gpu_stop:
How to use Omnipose
-------------------
I have a few Jupyter notebooks in the `docs/examples <docs/examples/>`__
directory that demonstrate how to use built-in models. You can also find
all the scripts I used for generating our figures in the
`scripts <scripts/>`__ directory. These cover specific settings for all
of the images found in our paper.
To use Omnipose on bacterial cells, use ``model_type=bact_omni``. For
other cell types, try ``model_type=cyto2_omni``. You can also choose
Cellpose models with ``omni=True`` to engage the Omnipose mask
reconstruction algorithm to alleviate over-segmentation.
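As a minimal sketch of notebook usage (assuming an installed Omnipose environment; the input filename is hypothetical, and the example notebooks remain the authoritative reference):

```python
# Sketch only: requires the omnipose package and its cellpose_omni backend.
from cellpose_omni import models
from skimage import io

imgs = [io.imread('my_phase_image.tif')]  # hypothetical input file

# Built-in Omnipose model for bacterial phase contrast
model = models.CellposeModel(gpu=True, model_type='bact_phase_omni')

# omni=True engages the Omnipose mask reconstruction algorithm
masks, flows, styles = model.eval(imgs, channels=[0, 0], omni=True)
```

Swap in ``cyto2_omni`` (or a Cellpose model with ``omni=True``) for other cell types, as described above.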
How to train Omnipose
---------------------
Training is best done on CLI. I trained the ``bact_phase_omni`` model
using the following command, and you can train custom Omnipose models
similarly:
::

    omnipose --train --use_gpu --dir <bacterial dataset directory> --mask_filter _masks \
        --n_epochs 4000 --pretrained_model None --learning_rate 0.1 --diameter 0 \
        --batch_size 16 --RAdam --img_filter _img --nclasses 3
.. note::
    The RAdam optimizer is no longer necessary and may actually be detrimental with the latest
    version of Omnipose, in which I have introduced dynamic loss balancing. Leave this out
    to use standard SGD, which in recent testing converges faster than RAdam with the new loss function.
On bacterial phase contrast data, I found that Cellpose does not benefit
much from more than 500 epochs but Omnipose continues to improve until
around 4000 epochs. Omnipose outperforms Cellpose at 500 epochs but is
significantly better at 4000. You can use ``--save_every <n>`` and
``--save_each`` to store intermediate model training states to explore
this behavior.
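For instance, appending these flags to the training command above checkpoints the model at regular intervals (the interval of 100 is illustrative, not a recommendation):

```shell
# Save a separate snapshot of the model every 100 epochs instead of
# overwriting a single checkpoint, so convergence can be inspected later.
omnipose --train --use_gpu --dir <bacterial dataset directory> --mask_filter _masks \
    --n_epochs 4000 --pretrained_model None --diameter 0 \
    --save_every 100 --save_each
```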
.. _3d-omnipose:
3D Omnipose
-----------
To train a 3D model on image volumes, specify the dimension argument:
``--dim 3``. You may run out of VRAM on your GPU. In that case, you can
specify a smaller crop size, *e.g.*, ``--tyx 50,50,50``. The command I
used in the paper on the *Arabidopsis thaliana* lateral root primordia
dataset was:
::

    omnipose --use_gpu --train --dir <path> --mask_filter _masks \
        --n_epochs 4000 --pretrained_model None --learning_rate 0.1 --save_every 50 \
        --save_each --verbose --look_one_level_down --all_channels --dim 3 \
        --RAdam --batch_size 4 --diameter 0 --nclasses 3
To evaluate Omnipose models on 3D data, see the
`examples <docs/examples/>`__. If you run out of GPU memory, reduce the
batch size or evaluate on smaller crops.
