EMCAD

Official PyTorch implementation of the paper EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation, published in CVPR 2024. arXiv | code | video

Md Mostafijur Rahman, Mustafa Munir, Radu Marculescu

<p>The University of Texas at Austin</p>

🔍 Check out our papers: LoMix [NeurIPS 2025], EfficientMedNeXt [MICCAI 2025], EffiDec3D [CVPR 2025], MK-UNet [ICCVW 2025], PP-SAM [CVPRW 2024], G-CASCADE [WACV 2024], MERIT [MIDL 2023], CASCADE [WACV 2023]

Update

🚀 January 12, 2026: Polyp training and inference code released!

➡️ Please follow our CASCADE training and inference code for the ACDC dataset.

🚀 May 6, 2025: Synapse inference code released!

🚀 September 12, 2024: Synapse training code released!

Architecture

<p align="center"> <img src="EMCAD_architecture.jpg" width=100% height=40% class="center"> </p>

Quantitative Results

<p align="center"> <img src="avg_dice_flops.png" width=46.8% height=65% class="center"> <img src="avg_dice_params.png" width=45% height=40% class="center"> </p>

Qualitative Results

<p align="center"> <img src="qualitative_results_synapse.png" width=100% height=40% class="center"> </p> <p align="center"> <img src="qualitative_results_clinicdb.png" width=80% height=25% class="center"> </p>

Usage:

Recommended environment:

Please run the following commands.

conda create -n emcadenv python=3.8
conda activate emcadenv

pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11.0/index.html

pip install -r requirements.txt

Data preparation:

  • Synapse Multi-organ dataset: Sign up on the official Synapse website and download the dataset. Then split the 'RawData' folder into 'TrainSet' (18 scans) and 'TestSet' (12 scans) following TransUNet's lists and put them in the './data/synapse/Abdomen/RawData/' folder. Finally, preprocess with python ./utils/preprocess_synapse_data.py, or download the preprocessed data and save it in the './data/synapse/' folder. Note: if you use the preprocessed data from TransUNet, make the necessary changes in utils/dataset_synapse.py (i.e., remove the code segment at lines 88-94 that converts ground-truth labels from 14 to 9 classes).

  • ACDC dataset: Download the preprocessed ACDC dataset from Google Drive and move it into the './data/ACDC/' folder.

  • Polyp datasets: Download the split polyp datasets from Google Drive and move them into the './data/polyp/' folder.
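If you start from TransUNet's preprocessed Synapse data, the note above about converting ground-truth labels from 14 to 9 classes boils down to a simple index remap: keep the organ classes of interest, send everything else to background. A minimal, framework-free sketch (the `example_keep` mapping below is illustrative only; use whichever organ labels utils/dataset_synapse.py actually keeps):

```python
def remap_labels(label_map, keep):
    """Collapse a multi-class label map to background + the classes in `keep`.

    label_map: 2D nested lists of original class indices (one segmentation slice).
    keep: dict mapping original class index -> new class index (1..N);
          any index not listed becomes background (0).
    """
    return [[keep.get(v, 0) for v in row] for row in label_map]

# Illustrative only: pretend the source data has 14 classes and we keep 8 organs,
# renumbered contiguously as 1..8 (plus background 0, i.e. 9 classes total).
example_keep = {1: 1, 2: 2, 3: 3, 4: 4, 6: 5, 7: 6, 8: 7, 11: 8}
slice_labels = [[0, 1, 13],
                [6, 11, 2]]
print(remap_labels(slice_labels, example_keep))  # [[0, 1, 0], [5, 8, 2]]
```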

Pretrained model:

You should download the pretrained PVTv2 model from Google Drive or PVT GitHub, and then put it in the './pretrained_pth/pvt/' folder for initialization.
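Initializing the encoder from a pretrained checkpoint typically means loading only the parameter names the model also defines, leaving the decoder randomly initialized. A torch-free sketch of that filtering step, with parameters represented by their shape tuples for simplicity (names below are illustrative, not the repository's actual keys):

```python
def matching_keys(ckpt_shapes, model_shapes):
    """Return the parameter names present in both the checkpoint and the
    model with identical shapes; only these are safe to load."""
    return {name for name, shape in ckpt_shapes.items()
            if model_shapes.get(name) == shape}

# Toy example: the backbone embedding matches, the classifier head does not.
ckpt = {"patch_embed.proj.weight": (64, 3, 7, 7), "head.weight": (1000, 512)}
model = {"patch_embed.proj.weight": (64, 3, 7, 7),
         "decoder.conv1.weight": (256, 512, 3, 3)}
print(sorted(matching_keys(ckpt, model)))  # ['patch_embed.proj.weight']
```

In PyTorch the equivalent step is building a filtered state dict and calling `model.load_state_dict(filtered, strict=False)`, so that missing decoder keys do not raise an error.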

Training:

cd into EMCAD
python -W ignore train_synapse.py --root_path /path/to/train/data --volume_path /path/to/test/data --encoder pvt_v2_b2         # replace --root_path and --volume_path with your actual data paths

Trained Weights on Synapse Dataset:

You can download the trained weights on Synapse dataset from Google Drive.

Testing:

cd into EMCAD 
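The test entry point is not shown here; by analogy with the training command above, an invocation would likely look like the following (the script name and flags are assumptions, so check the repository):

```
python -W ignore test_synapse.py --volume_path /path/to/test/data --encoder pvt_v2_b2
```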

Acknowledgement

We are very grateful for these excellent works timm, CASCADE, MERIT, G-CASCADE, PP-SAM, PraNet, Polyp-PVT and TransUNet, which have provided the basis for our framework.

Citations

@inproceedings{rahman2024emcad,
  title={{EMCAD}: Efficient multi-scale convolutional attention decoding for medical image segmentation},
  author={Rahman, Md Mostafijur and Munir, Mustafa and Marculescu, Radu},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={11769--11779},
  year={2024}
}
