MedLSAM: Localize and Segment Anything Model for 3D Medical Images
Key Features
- Foundation Model for 3D Medical Image Localization: MedLSAM introduces MedLAM, a foundation model for localizing anatomical structures in 3D medical images.
- First Fully-Automatic Medical Adaptation of SAM: MedLSAM is the first fully automatic medical adaptation of the Segment Anything Model (SAM), with the primary goal of significantly reducing the annotation workload in medical image segmentation.
- Segment Any Anatomical Target Without Additional Annotation: MedLSAM segments any anatomical target in 3D medical images without further annotation, making the segmentation process more automatic and efficient.
Updates
- 2024.1.9: Released the training code.
- 2023.10.15: Accelerated inference speed; added Sub-Patch Localization (SPL).
- 2023.07.01: Code released.
Details
The Segment Anything Model (SAM) has recently emerged as a groundbreaking model in the field of image segmentation. Nevertheless, both the original SAM and its medical adaptations require slice-by-slice annotations, so the annotation workload grows directly with dataset size. We propose MedLSAM to address this issue, ensuring a constant annotation workload irrespective of dataset size and thereby simplifying the annotation process. Our model introduces a few-shot localization framework capable of localizing any target anatomical part within the body. To achieve this, we develop a Localize Anything Model for 3D Medical Images (MedLAM), trained with two self-supervision tasks, relative distance regression (RDR) and multi-scale similarity (MSS), on a comprehensive dataset of 14,012 CT scans. We then establish a methodology for accurate segmentation by integrating MedLAM with SAM. By annotating only six extreme points across three directions on a few templates, our model can autonomously identify the target anatomical region on all data scheduled for annotation. This allows our framework to generate a 2D bounding box for every slice of the image, which SAM then uses to carry out the segmentation. We conducted experiments on two 3D datasets covering 38 organs and found that MedLSAM matches the performance of SAM and its medical adaptations while requiring only minimal extreme-point annotations for the entire dataset. Furthermore, MedLAM has the potential to be seamlessly integrated with future 3D SAM models, paving the way for enhanced performance.
Fig.1 The overall segmentation pipeline of MedLSAM.
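The pipeline above can be summarized in a structural sketch. The helper functions below are hypothetical placeholders used only to illustrate the data flow, not the repository's actual API:

```python
# Structural sketch of the MedLSAM pipeline described above. The helper
# functions are hypothetical placeholders, not the repository's API.

def localize_with_medlam(support_scans, support_extreme_points, query_scan):
    """MedLAM: map six extreme points on support scans to a 3D box on the query."""
    raise NotImplementedError  # placeholder

def segment_with_sam(sam_model, slice_2d, box_2d):
    """SAM: segment one axial slice given a 2D bounding-box prompt."""
    raise NotImplementedError  # placeholder

def medlsam_segment(support_scans, support_extreme_points, query_scan, sam_model):
    # 1. Few-shot localization: MedLAM infers a 3D bounding box of the
    #    target anatomy on the query scan from the annotated templates.
    z_min, z_max, box_per_slice = localize_with_medlam(
        support_scans, support_extreme_points, query_scan)

    # 2. The 3D box yields one 2D bounding box per axial slice, each used
    #    as a SAM prompt; stacking the per-slice masks gives the 3D result.
    return [segment_with_sam(sam_model, query_scan[z], box_per_slice[z])
            for z in range(z_min, z_max + 1)]
```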
Feedback and Contact
- Email: lyc745307452@gmail.com
Get Started
Main Requirements
```
torch>=1.11.0
tqdm
nibabel
scipy
SimpleITK
monai
```
Installation
- Create a virtual environment and activate it:

```
conda create -n medlsam python=3.10 -y
conda activate medlsam
```

- Install PyTorch.
- Clone the repository and enter the MedLSAM folder:

```
git clone https://github.com/openmedlab/MedLSAM
cd MedLSAM
```

- Install the package:

```
pip install -e .
```
Download Model
Download the MedLAM, SAM, and MedSAM checkpoints and place them at checkpoint/medlam.pth, checkpoint/sam_vit_b_01ec64.pth, and checkpoint/medsam_vit_b.pth, respectively.
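A small convenience check (not part of the repository) can confirm the checkpoints are where the configs expect them:

```python
# Verify the downloaded checkpoints sit at the expected paths.
from pathlib import Path

expected = [
    "checkpoint/medlam.pth",
    "checkpoint/sam_vit_b_01ec64.pth",
    "checkpoint/medsam_vit_b.pth",
]
for path in expected:
    status = "ok" if Path(path).is_file() else "MISSING"
    print(f"{status:>7}  {path}")
```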
Inference
GPU requirement
We recommend using a GPU with 12GB or more memory for inference.
Data preparation
- StructSeg Task1 HaN OAR
- WORD (Request for access is required to download this dataset.)
Note: You can also download other CT datasets and place them anywhere you like. MedLSAM automatically applies its preprocessing during inference, so please do not normalize the original CT images!
After downloading the datasets, you should sort the data into "support" and "query" groups. This does not require moving the actual image files. Rather, you need to create separate lists of file paths for each group.
For each group ("support" and "query"), perform the following steps:
- Create a .txt file listing the paths to the image files.
- Create another .txt file listing the paths to the corresponding label files.
Ensure that the images and labels appear in the same order in both lists; the file names themselves do not matter. These lists direct MedLSAM to the appropriate files during inference. A short script for generating the lists is sketched after the example below.
Example format for the .txt files:

image.txt

```
/path/to/your/dataset/image_1.nii.gz
...
/path/to/your/dataset/image_n.nii.gz
```

label.txt

```
/path/to/your/dataset/label_1.nii.gz
...
/path/to/your/dataset/label_n.nii.gz
```
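If your dataset follows a regular naming scheme, a short script can generate both lists. This sketch is illustrative and not part of the repository; the directory path and glob patterns are assumptions to adapt to your layout:

```python
# Minimal sketch: write image/label path lists for one group
# ("support" or "query"). Adjust data_dir and the patterns as needed.
from pathlib import Path

data_dir = Path("/path/to/your/dataset")          # hypothetical location
images = sorted(data_dir.glob("image_*.nii.gz"))  # sorting keeps both
labels = sorted(data_dir.glob("label_*.nii.gz"))  # lists aligned
assert len(images) == len(labels), "every image needs a matching label"

Path("image.txt").write_text("\n".join(str(p) for p in images) + "\n")
Path("label.txt").write_text("\n".join(str(p) for p in labels) + "\n")
```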
Config preparation
MedLAM and MedLSAM load their configurations from a .txt file. The structure of the file is as follows:
```
[data]
support_image_ls = config/data/StructSeg_HaN/support_image.txt
support_label_ls = config/data/StructSeg_HaN/support_label.txt
query_image_ls = config/data/StructSeg_HaN/query_image.txt
query_label_ls = config/data/StructSeg_HaN/query_label.txt
gt_slice_threshold = 10
bbox_mode = SPL
slice_interval = 2
fg_class = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22]
seg_save_path = result/npz/StructSeg
seg_png_save_path = result/png/StructSeg

[vit]
net_type = vit_b

[weight]
medlam_load_path = checkpoint/medlam.pth
vit_load_path = checkpoint/medsam_20230423_vit_b_0.0.1.pth
```
Each of the parameters is explained as follows:
- support_image_ls: The path to the list of support image files. It is recommended to use between 3 and 10 support images.
- support_label_ls: The path to the list of support label files.
- query_image_ls: The path to the list of query image files.
- query_label_ls: The path to the list of query label files.
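Since the config follows standard INI syntax, Python's configparser can read it for a quick sanity check. The config file path below is a placeholder, and the repository may use its own loader:

```python
# Inspect a MedLSAM config with the standard library.
import configparser

cfg = configparser.ConfigParser()
cfg.read("config/your_config.txt")  # hypothetical path to your config file

print(cfg["data"]["support_image_ls"])   # e.g. config/data/StructSeg_HaN/support_image.txt
print(cfg["vit"]["net_type"])            # e.g. vit_b
print(cfg["weight"]["medlam_load_path"]) # e.g. checkpoint/medlam.pth
```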
