MONAI Label
MONAI Label is an intelligent open source image labeling and learning tool that enables users to create annotated datasets and build AI annotation models for clinical evaluation. MONAI Label enables application developers to build labeling apps in a serverless way, where custom labeling apps are exposed as a service through the MONAI Label Server.
MONAI Label is a server-client system that facilitates interactive medical image annotation using AI. It is an open-source, easy-to-install ecosystem that can run locally on a machine with one or more GPUs. The server and client can run on the same machine or on different machines. It shares the same principles as MONAI.
Refer to the full MONAI Label documentation for more details, or check out our MONAI Label Deep Dive video series.
Refer to the MONAI Label Tutorial series for application and viewer workflows across different medical imaging tasks. Notebook-style tutorials provide detailed instructions.
Table of Contents
- Overview
- Getting Started with MONAI Label
- MONAI Label Tutorials
- Cite MONAI Label
- Contributing
- Community
- Additional Resources
Overview
MONAI Label reduces the time and effort of annotating new datasets and enables the adaptation of AI to the task at hand by continuously learning from user interactions and data. MONAI Label allows researchers and developers to make continuous improvements to their apps by interacting with them as a user would. End users (clinicians, technologists, and annotators in general) benefit from AI that continuously learns and becomes better at understanding what the end user is trying to annotate.
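The continuous-learning loop described above typically pairs training with active learning: the model's most uncertain predictions are surfaced to the annotator first. The sketch below is purely illustrative, not MONAI Label's actual implementation; the entropy-based scoring and all function names are assumptions.

```python
import math

def prediction_entropy(probs):
    """Shannon entropy of one class-probability vector (higher = more uncertain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def rank_for_annotation(unlabeled):
    """Rank unlabeled images so the most uncertain (highest mean entropy)
    are presented to the annotator first."""
    scored = [
        (sum(prediction_entropy(v) for v in voxels) / len(voxels), name)
        for name, voxels in unlabeled.items()
    ]
    return [name for _, name in sorted(scored, reverse=True)]

# Toy example: image "b" has near-uniform predictions, so it is most uncertain.
unlabeled = {
    "a": [[0.95, 0.05], [0.90, 0.10]],
    "b": [[0.55, 0.45], [0.50, 0.50]],
}
print(rank_for_annotation(unlabeled))  # "b" is ranked first
```

Annotating the highest-ranked image and retraining closes the loop: each correction sharpens the model, which in turn changes which images look uncertain next.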
MONAI Label aims to fill the gap between developers creating new annotation applications and the end users who want to benefit from these innovations.
Highlights and Features
- Framework for developing and deploying MONAI Label Apps to train and infer AI models
- Compositional & portable APIs for ease of integration in existing workflows
- Customizable labeling app design for varying user expertise
- Annotation support via 3DSlicer & OHIF for radiology
- Annotation support via QuPath, Digital Slide Archive, and CVAT for pathology
- Annotation support via CVAT for Endoscopy
- PACS connectivity via DICOMWeb
- Automated Active Learning workflow for endoscopy using CVAT
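Viewers such as 3D Slicer and OHIF talk to the MONAI Label Server over its REST API. The sketch below builds an inference request URL of the form `POST /infer/{model}?image={id}` and posts it to the server; the server address, model name, and image id are placeholder assumptions, and a running MONAI Label Server is required for the request itself to succeed.

```python
import urllib.parse
import urllib.request

def infer_url(server: str, model: str, image_id: str) -> str:
    """Build the URL for the server's inference endpoint:
    POST {server}/infer/{model}?image={image_id}"""
    base = server.rstrip("/")
    model = urllib.parse.quote(model)
    query = urllib.parse.urlencode({"image": image_id})
    return f"{base}/infer/{model}?{query}"

if __name__ == "__main__":
    # Placeholder server/model/image ids; adjust to your deployment.
    url = infer_url("http://127.0.0.1:8000", "deepedit", "spleen_10")
    req = urllib.request.Request(url, method="POST")
    try:
        with urllib.request.urlopen(req) as resp:
            print(resp.status)
    except OSError as exc:
        print(f"server not reachable: {exc}")
```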
Supported Matrix
MONAI Label supports many state-of-the-art (SOTA) models from the Model Zoo and their integration with viewers through the monaibundle app. Please refer to the monaibundle app page for supported models, including whole-body segmentation, whole-brain segmentation, lung nodule detection, tumor segmentation, and many more.
In addition, the table below lists the supported fields, models, viewers, data types, and modalities. These are only the configurations we have explicitly tested; it does not mean that your dataset or file type won't work with MONAI Label. Try MONAI Label on your task, and if you run into issues, reach out through GitHub Issues.
<table>
  <tr>
    <th>Field</th>
    <th>Models</th>
    <th>Viewers</th>
    <th>Data Types</th>
    <th>Image Modalities/Target</th>
  </tr>
  <tr>
    <td>Radiology</td>
    <td><ul><li>Segmentation</li><li>DeepGrow</li><li>DeepEdit</li><li>SAM2 (2D/3D)</li></ul></td>
    <td><ul><li>3DSlicer</li><li>MITK</li><li>OHIF</li></ul></td>
    <td><ul><li>NIfTI</li><li>NRRD</li><li>DICOM</li></ul></td>
    <td><ul><li>CT</li><li>MRI</li></ul></td>
  </tr>
  <tr>
    <td>Pathology</td>
    <td><ul><li>DeepEdit</li><li>NuClick</li><li>Segmentation</li><li>Classification</li><li>SAM2 (2D)</li></ul></td>
    <td><ul><li>Digital Slide Archive</li><li>QuPath</li><li>CVAT</li></ul></td>
    <td><ul><li>TIFF</li><li>SVS</li></ul></td>
    <td><ul><li>Nuclei Segmentation</li><li>Nuclei Classification</li></ul></td>
  </tr>
  <tr>
    <td>Video</td>
    <td><ul><li>DeepEdit</li><li>Tooltracking</li><li>InBody/OutBody</li><li>SAM2 (2D)</li></ul></td>
    <td><ul><li>CVAT</li></ul></td>
    <td><ul><li>JPG</li><li>3-channel Video Frames</li></ul></td>
    <td><ul><li>Endoscopy</li></ul></td>
  </tr>
</table>

Getting Started with MONAI Label
MONAI Label requires a few steps to get started:
- Step 1: Install MONAI Label
- Step 2: Download a MONAI Label sample app or write your own custom app
- Step 3: Install a compatible viewer and supported MONAI Label Plugin
- Step 4: Prepare your Data
- Step 5: Launch MONAI Label Server and start Annotating!
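The steps above map onto the `monailabel` CLI roughly as follows. The app, dataset, and model names below are examples, the downloads require network access, and Step 3 (installing a viewer plugin such as 3D Slicer's) happens in the viewer itself:

```shell
# Step 1: install MONAI Label
pip install -U monailabel

# Step 2: download a sample app (here: the radiology app)
monailabel apps --download --name radiology --output apps

# Step 4: download a sample dataset (here: the MSD spleen CT task)
monailabel datasets --download --name Task09_Spleen --output datasets

# Step 5: launch the server with a DeepEdit model, then start annotating
# from your viewer's MONAI Label plugin
monailabel start_server --app apps/radiology \
    --studies datasets/Task09_Spleen/imagesTr \
    --conf models deepedit
```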
Step 1: Installation
Current Stable Version
<a href="https://pypi.org/project/monailabel/#history"><img alt="GitHub release (latest SemVer)" src="https://img.shields.io/github/v/release/project-monai/monailabel"></a>
<pre>pip install -U monailabel</pre>

MONAI Label supports the following OS with GPU/CUDA enabled. For more detailed instructions, please see the installation guides.
GPU Acceleration (Optional Dependencies)
The following are optional dependencies that can accelerate some GPU-based transforms from MONAI. These dependencies are enabled by default if you are using the projectmonai/monailabel Docker image.
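For example, MONAI's GPU-accelerated transforms can use CuPy and cuCIM. The package names below assume a CUDA 12.x environment; pick the wheel matching your CUDA version, and check the installation guide for the authoritative list.

```shell
# Optional GPU dependencies for accelerated MONAI transforms (CUDA 12.x example)
pip install cupy-cuda12x cucim
```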
Development version
To install the latest features, use one of the following options:
<details>
<summary><strong>Git Checkout (developer mode)</strong></summary>
<a href="https://github.com/Project-MONAI/MONAILabel"><img alt="GitHub tag (latest SemVer)" src="https://img.shields.io/github/v/tag/Project-MONAI/monailabel"></a>
<br>
<pre>
git clone https://github.com/Project-MONAI/MONAILabel
pip install -r MONAILabel/requirements.txt
export PATH=$PATH:`pwd`/MONAILabel/monailabel/scripts
</pre>
<p>If you are using DICOMweb + OHIF, you have to build the OHIF package separately. Please refer to the <a href="https://github.com/Project-MONAI/MONAILabel/tree/main/plugins/ohif#development-setup">OHIF development setup</a>.</p>
</details>
<details>
<summary><strong>Docker</strong></summary>
<img alt="Docker Image Version (latest semver)" src="https://img.shields.io/docker/v/projectmonai/monailabel">
<br>
<pre>docker pull projectmonai/monailabel</pre>
</details>
