
<div align="center">

🧩 RankSEG

Boost Segmentation Performance Instantly via Direct Dice/IoU Post-Optimization

PyPI · License · Python · PyTorch · GitHub Stars · Documentation · Hugging Face Spaces · Open In Colab · 中文文档 (Chinese Docs)

JMLR NeurIPS

Quick Start | Key Features | Benchmarks | Citation

</div>

RankSEG is a plug-and-play post-processing module that boosts segmentation performance (Dice/IoU) during inference. It works with ANY pre-trained probabilistic segmentation model (SAM, DeepLab, SegFormer, etc.) without any retraining or fine-tuning.

Explore RankSEG by reading our documentation.

🌟 Why RankSEG?

Conventional decoding relies on argmax or a fixed threshold, neither of which is optimal for non-decomposable metrics such as Dice or IoU. RankSEG closes this gap by constructing predictions that directly optimize the target metric, yielding "free" performance gains at inference time.

<div align="center"> <p align="center"><b>Demo: RankSEG vs. Argmax on <i>fashn-human-parser</i></b></p> <img src="./fig/fashn-ai-fashn-human-parser.gif" alt="RankSEG vs Argmax Comparison" width="80%"> </div>
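To see why 0.5-thresholding can be suboptimal for Dice, consider a toy brute-force check on three pixels with independent foreground probabilities. This is only an illustration of the gap RankSEG targets, not its actual algorithm (which uses ranking plus the RMA solver rather than enumeration):

```python
from itertools import product

def expected_dice(pred, probs):
    """Expected Dice of a binary prediction `pred`, assuming pixels are independent."""
    total = 0.0
    for y in product([0, 1], repeat=len(probs)):
        # Probability of this ground-truth labeling under independence
        p_y = 1.0
        for yi, pi in zip(y, probs):
            p_y *= pi if yi else (1 - pi)
        inter = sum(a & b for a, b in zip(y, pred))
        denom = sum(y) + sum(pred)
        dice = 1.0 if denom == 0 else 2 * inter / denom  # convention: Dice = 1 if both empty
        total += p_y * dice
    return total

probs = [0.60, 0.55, 0.45]                        # marginal foreground probabilities
threshold = tuple(int(p > 0.5) for p in probs)    # standard 0.5 thresholding -> (1, 1, 0)
best = max(product([0, 1], repeat=3), key=lambda s: expected_dice(s, probs))

print(threshold, expected_dice(threshold, probs))  # (1, 1, 0) -> ~0.59
print(best, expected_dice(best, probs))            # (1, 1, 1) -> ~0.65
```

Including the pixel with probability 0.45 raises the expected Dice, even though thresholding excludes it; a metric-aware decoder can exploit exactly this.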

⚡ Quick Start

RankSEG is designed to be dropped into your existing inference pipeline with just a few lines of code.

1. Installation

pip install -U rankseg

2. Basic Usage (3 Lines of Code)

from rankseg import RankSEG
import torch.nn.functional as F

# 1. Initialize RankSEG (optimizing for Dice)
rankseg = RankSEG(metric='dice')

# 2. Get probability output from YOUR model
# probs: (Batch, Class, H, W)
probs = F.softmax(model_logits, dim=1)

# 3. Get optimized predictions (Instantly!)
preds = rankseg.predict(probs)

💡 Try it now: Open In Colab

✨ Key Features

  • 🚀 Performance Boost: Consistently improves mIoU/mDice scores over standard argmax.
  • 🔌 Zero Effort: Compatible with any PyTorch model. No retraining, no fine-tuning.
  • 🆓 Training-Free: Purely post-processing. Works with frozen weights.
  • ⚡ Real-time Inference: Efficient RMA (Reciprocal Moment Approximation) solver.
  • 🧩 Versatile: Supports semantic (multi-class) and binary (multi-label) tasks.
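For the two supported task types, RankSEG expects probabilities rather than raw logits. A sketch of producing each input form (NumPy used here purely for illustration; the Quick Start uses `torch.nn.functional`, and the exact accepted tensor types are defined in the docs):

```python
import numpy as np

logits = np.random.randn(2, 4, 8, 8)  # dummy model output, shape (Batch, Class, H, W)

# Semantic (multi-class): one label per pixel -> softmax over the class axis,
# so each pixel's class probabilities sum to 1.
shifted = logits - logits.max(axis=1, keepdims=True)  # subtract max for numerical stability
exp = np.exp(shifted)
probs_mc = exp / exp.sum(axis=1, keepdims=True)

# Binary / multi-label: channels are independent -> elementwise sigmoid,
# so each entry lies in (0, 1) with no cross-channel constraint.
probs_ml = 1.0 / (1.0 + np.exp(-logits))
```

Either probability tensor would then be converted to a torch tensor and passed to `rankseg.predict(...)` as in the Quick Start.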

📊 Benchmarks

RankSEG delivers consistent gains across various architectures and datasets without touching a single weight.

| Model | Dataset | mIoU (Argmax) | mIoU (RankSEG) | Gain |
| :--- | :--- | :---: | :---: | :---: |
| DeepLabV3+ | PASCAL VOC | 77.25% | 78.14% | +0.89% |
| SegFormer | PASCAL VOC | 77.57% | 78.59% | +1.02% |
| UPerNet | PASCAL VOC | 79.52% | 80.31% | +0.79% |
| SegFormer | ADE20K | 40.00% | 40.82% | +0.82% |
| UPerNet | ADE20K | 42.86% | 43.84% | +0.98% |

Detailed results available in our NeurIPS 2025 paper.

🛠️ Integrations & Demos

| Framework | Task | Quick Start |
| :--- | :--- | :---: |
| Standard PyTorch | Semantic Segmentation | Colab |
| Segment Anything (SAM) | Zero-shot Segmentation | Colab |
| Hugging Face | Interactive Demo | Spaces |
| PaddleSeg | Docs | Docker |

🔗 Citation

If you use RankSEG in your research, please cite our papers:

  • Dai, B., & Li, C. (2023). RankSEG: A Consistent Ranking-based Framework for Segmentation. Journal of Machine Learning Research, 24(224), 1-50. [link]
  • Wang, Z., & Dai, B. (2025). RankSEG-RMA: An Efficient Segmentation Algorithm via Reciprocal Moment Approximation. Advances in Neural Information Processing Systems (NeurIPS 2025). [link]

@article{dai2023rankseg,
  title={RankSEG: A Consistent Ranking-based Framework for Segmentation},
  author={Dai, Ben and Li, Chunlin},
  journal={Journal of Machine Learning Research},
  volume={24},
  number={224},
  pages={1--50},
  url={https://www.jmlr.org/papers/v24/22-0712.html},
  year={2023}
}

@inproceedings{wang2025rankseg,
  title={RankSEG-RMA: An Efficient Segmentation Algorithm via Reciprocal Moment Approximation},
  author={Wang, Zixun and Dai, Ben},
  booktitle={Advances in Neural Information Processing Systems},
  url={https://arxiv.org/abs/2510.15362},
  year={2025}
}

<div align="center"> <p>Star us on GitHub if RankSEG helps your project! ⭐</p> </div>
