<a name="readme-top"></a>
<!-- PROJECT LOGO --> <br /> <div align="center"> <a href="https://github.com/github_username/repo_name"> <img src="images/logo.png" alt="Logo" width="160" height="160"> </a> <h3 align="center">CellSegmentationTracker</h3> <p align="center"> A Python module intended to make cell segmentation, tracking and subsequent (mostly biophysical) statistical analysis easy <br /> <br /> <a href="https://github.com/simonguld/CellSegmentationTracker.git">Report Bug</a> · <a href="https://github.com/simonguld/CellSegmentationTracker.git">Request Feature</a> </p> </div>
<!-- TABLE OF CONTENTS --> <details> <summary>Table of Contents</summary> <ol> <li><a href="#about-the-project">About The Project</a></li> <li><a href="#getting-started">Getting Started</a> <ul> <li><a href="#prerequisites">Prerequisites</a></li> <li><a href="#installation">Installation</a></li> </ul> </li> <li><a href="#usage">Usage and Limitations</a> <ul> <li><a href="#usage">Usage</a></li> <li><a href="#limitations">Limitations</a></li> </ul> </li> <li><a href="#pretrained-models">Pretrained Models</a></li> <li><a href="#documentation">Documentation</a> <ul> <li><a href="#parameters">Parameters</a></li> <li><a href="#attributes">Attributes</a></li> <li><a href="#methods">Methods</a></li> </ul> </li> <li><a href="#contributing">Contributing</a></li> <li><a href="#license">License</a></li> <li><a href="#contact">Contact</a></li> <li><a href="#acknowledgments">Acknowledgments</a></li> </ol> </details>
<!-- ABOUT THE PROJECT -->
About The Project
This module is meant to ease, automate and improve the process of biological cell segmentation, tracking and subsequent (mostly biophysical) statistical analysis. It is being developed primarily as a tool for the biophysicists at the Niels Bohr Institute of the University of Copenhagen, but anyone is more than welcome to use it! It is built on the cell segmentation program <a href="https://www.cellpose.org/">Cellpose</a> and the tracking program <a href="https://imagej.net/plugins/trackmate/">TrackMate</a>, without either of which this project would have been impossible. The module serves three purposes, which can be pursued together or separately:
- Integrate the segmentation and tracking steps into an automated pipeline, and extend the Cellpose-TrackMate functionality so as to allow altering the Cellpose parameters 'flow threshold' and 'cell probability threshold' (which are fixed when using TrackMate). Varying these parameters leads to more flexible and, in some cases, more accurate segmentations.
- Extract the relevant information from the XML file produced in step 1 (or from any XML file generated by TrackMate), calculate the vector velocities of each cell (which are not available in TrackMate), and collect and save this information as a dataframe.
- Provide functions for statistical analysis, e.g. for calculating summary statistics and average feature values over time, for estimating, interpolating and visualizing the density (or any other scalar) and velocity fields, and for calculating mean square displacements (MSD) and cage relative mean square displacements (CRMSD). Make it easy to save and plot the results.
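To illustrate the kind of quantities involved, the velocity and MSD calculations mentioned above can be sketched in plain Python. This is not the module's actual API; the track layout, positions and frame interval below are made up for illustration.

```python
# Hypothetical tracked positions for one cell: (frame, x, y) in microns.
track = [(0, 10.0, 5.0), (1, 12.0, 6.0), (2, 15.0, 6.5), (3, 15.5, 8.5)]
dt = 5.0  # minutes between frames (experiment-dependent)

# Vector velocity per frame via forward finite differences --
# the kind of quantity TrackMate itself does not export.
velocities = []
for (f0, x0, y0), (f1, x1, y1) in zip(track, track[1:]):
    velocities.append(((x1 - x0) / dt, (y1 - y0) / dt))

# Mean square displacement (MSD) at a given lag, averaged over start times.
def msd(positions, lag):
    disps = [
        (positions[i + lag][1] - positions[i][1]) ** 2
        + (positions[i + lag][2] - positions[i][2]) ** 2
        for i in range(len(positions) - lag)
    ]
    return sum(disps) / len(disps)

print(velocities[0])   # (0.4, 0.2)
print(msd(track, 1))
```

In practice the module computes these quantities from the TrackMate output for all cells at once; this sketch only shows the underlying definitions for a single trajectory.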
Getting Started
Prerequisites
- Python 3.8 or 3.9
- Java 8
- Jython 2.7
- Fiji 2.9, and the TrackMate-Cellpose extension
- Cellpose 2.0
- Anaconda
Installation
1. Download and unpack the newest version of Fiji. Follow the instructions on https://imagej.net/software/fiji/downloads.
2. Download and install Java 8 here: https://www.oracle.com/java/technologies/downloads/#java8-windows
3. Download and install Jython. Follow the instructions on https://www.jython.org/installation.html
4. Install Anaconda or Miniconda, if you haven't already. Follow the instructions on https://docs.conda.io/en/latest/miniconda.html
5. Create a conda virtual environment using Python 3.9 or 3.8 (it might also work with newer versions) and install Cellpose in it. Follow the instructions on https://pypi.org/project/cellpose/. If you have a GPU available, consider installing the GPU version; it drastically increases the segmentation speed.
6. Install the TrackMate extension TrackMate-Cellpose. To see how, visit https://imagej.net/plugins/trackmate/detectors/trackmate-cellpose. Make sure to update it after installation.
7. From the Cellpose virtual environment, install CellSegmentationTracker using the following command:

   python -m pip install git+https://github.com/simonguld/CellSegmentationTracker.git

Now you should be good to go!
Usage and Limitations
Usage
As mentioned above, this module can be used as a pipeline for cell segmentation and tracking with flexible parameter options, as a tool for generating CSV files that include vector velocity data from an XML file, and as an aid in the subsequent statistical analysis. It can be used for all or any of these purposes.
All functionality is contained in the class CellSegmentationTracker, which can be imported as follows:
from cellsegmentationtracker import CellSegmentationTracker
To read about the parameters, attributes and methods of CellSegmentationTracker, go to <a href="#documentation">Documentation</a>. To see an example of how to use this module and its methods, take a look at the <a href="https://github.com/simonguld/CellSegmentationTracker/blob/main/example_notebook.ipynb">example notebook</a>.
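Once the pipeline has produced a spots CSV, it can be consumed with standard tools for downstream analysis. A minimal sketch using only the standard library, with hypothetical column names (check the headers of the actual generated file before relying on them):

```python
import csv
import io

# Hypothetical excerpt of the spots CSV produced by the pipeline;
# the real column names may differ.
raw = """TRACK_ID,FRAME,POSITION_X,POSITION_Y,VELOCITY_X,VELOCITY_Y
0,0,10.0,5.0,0.4,0.2
0,1,12.0,6.0,0.6,0.1
1,0,40.0,22.0,-0.3,0.5
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Group spots by track ID to recover per-cell trajectories.
tracks = {}
for row in rows:
    tracks.setdefault(row["TRACK_ID"], []).append(
        (int(row["FRAME"]), float(row["POSITION_X"]), float(row["POSITION_Y"]))
    )

print(len(tracks))      # 2
print(tracks["0"][1])   # (1, 12.0, 6.0)
```

With a real file you would pass the path to `open()` instead of wrapping an inline string, or load it directly into a pandas DataFrame.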
Limitations
- As of now, only .tif files are supported as input images
- As of now, only the TrackMate LAP tracker is supported, and it is used in all cases (see https://imagej.net/plugins/trackmate/trackers/lap-trackers for more information)
- As of now, it is not possible to apply tracking filters. Instead, the idea is to use a cell segmentation model that is sufficiently specialized to a given data set that filtering becomes unnecessary. For more details, see <a href="#pretrained-models">Pretrained Models</a> below.
Pretrained Models
The pretrained Cellpose models 'CYTO', 'CYTO2' and 'NUCLEI' are of course available when choosing a segmentation model. The user can choose between an additional three models: 'EPI500', 'EPI2500' and 'EPI6000', which have been created by transfer learning from the Cellpose models, i.e. by training them on specific cell image types (and resolutions) to improve performance on those types of data. The name 'EPI' reflects that all three models have been trained on epithelial cells, and the number indicates the approximate number of cells per image.
If none of the pretrained models suit your needs, you can train your own model using the Cellpose GUI - it is easy and can be done rather quickly.
EPI500:
Example Image <br />
<div align="center"> <a href="https://github.com/github_username/repo_name"> <img src="images/EPI500.png" width="480" height="480"> </a> </div>

- Trained using the Cellpose model 'CYTO2' as starting point
- Trained on images of a monolayer of epithelial cells with roughly 500 cells per image
- Trained so as not to segment cells at the image boundary (to avoid partial segmentations)
- Images created using fluorescence microscopy (5 min. between frames)
- Magnification: 40x
- Image size: 2560x2150 pixels (416x351 microns)
- Default parameters for this model:
  - FLOW_THRESHOLD = 0.4
  - CELLPROB_THRESHOLD = 0.5
  - CELL_DIAMETER = 88.7 pixels
EPI2500:
Example Image <br />
<div align="center"> <a href="https://github.com/github_username/repo_name"> <img src="images/EPI2500.png" width="480" height="480"> </a> </div>

- Trained using the Cellpose model 'CYTO' as starting point
- Trained on images of a monolayer of epithelial cells with roughly 2500 cells per image
- The bright white spots indicate an absence of cells and are not to be segmented
- Trained so as not to segment cells at the image boundary (to avoid partial segmentations)
- Images created using light microscopy (10 min. between frames)
- Magnification: 10x
- Image size: 2005x1567 pixels (1303.25x1018.55 microns)
- Default parameters for this model:
  - FLOW_THRESHOLD = 0.6
  - CELLPROB_THRESHOLD = -1.0
  - CELL_DIAMETER = 37.79 pixels
EPI6000:
Example Image <br />
<div align="center"> <a href="https://github.com/github_username/repo_name"> <img src="images/EPI6000.png" width="480" height="480"> </a> </div>

- Trained using the Cellpose model 'CYTO2' as starting point
- Trained on images of a monolayer of epithelial cells with roughly 6000 cells per image
- Trained so as not to segment cells at the image boundary (to avoid partial segmentations)
- Images created using light microscopy (187 seconds between frames)
- Magnification: 10x
- Image size: 2560x2160 pixels (1664x1404 microns)
- Default parameters for this model:
  - FLOW_THRESHOLD = 0.5
  - CELLPROB_THRESHOLD = 0.0
  - CELL_DIAMETER = 30.58 pixels
Documentation
Class definition
```python
class CellSegmentationTracker.CellSegmentationTracker(cellpose_folder_path = None, imagej_filepath = None,
    cellpose_python_filepath = None, image_folder_path = None, xml_path = None, output_folder_path = None,
    use_model = 'CYTO', custom_model_path = None, show_segmentation = False, cellpose_dict = {},
    tr
```