# 🧩 RankSEG
Boost Segmentation Performance Instantly via Direct Dice/IoU Post-Optimization
Quick Start | Key Features | Benchmarks | Citation
RankSEG is a plug-and-play post-processing module that boosts segmentation performance (Dice/IoU) during inference. It works with ANY pre-trained probabilistic segmentation model (SAM, DeepLab, SegFormer, etc.) without any retraining or fine-tuning.
Explore RankSEG by reading our documentation.
## 🚀 Why RankSEG?
Conventional methods rely on argmax or fixed thresholding, which are not theoretically optimal for non-decomposable metrics such as Dice or IoU. RankSEG bridges this gap by directly optimizing the target metric at inference time, yielding "free" performance gains.
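To make this concrete, here is a tiny brute-force illustration with hypothetical per-pixel probabilities (this is not the RankSEG API): modeling labels as independent Bernoulli draws under the model's own probabilities, the prediction set that maximizes expected Dice can include a pixel whose probability is below 0.5, so the 0.5-threshold (binary argmax) prediction is suboptimal for this metric.

```python
from itertools import product

# Hypothetical per-pixel foreground probabilities for a 4-pixel image.
probs = [0.60, 0.55, 0.45, 0.05]

def expected_dice(pred):
    """Expected Dice of a 0/1 prediction tuple, with ground-truth labels
    modeled as independent Bernoulli(probs) draws."""
    total = 0.0
    for labels in product([0, 1], repeat=len(probs)):
        w = 1.0
        for p, y in zip(probs, labels):
            w *= p if y else 1 - p
        inter = sum(s * y for s, y in zip(pred, labels))
        denom = sum(pred) + sum(labels)
        total += w * (2 * inter / denom if denom else 1.0)
    return total

# Binary argmax = threshold at 0.5.
thresh = tuple(int(p > 0.5) for p in probs)
# Brute-force search over all 2^4 candidate prediction sets.
best = max(product([0, 1], repeat=len(probs)), key=expected_dice)

print(thresh, round(expected_dice(thresh), 4))  # (1, 1, 0, 0) 0.5841
print(best, round(expected_dice(best), 4))      # (1, 1, 1, 0) 0.6399
```

Including the 0.45-probability pixel raises expected Dice from about 0.584 to 0.640; RankSEG makes this kind of metric-aware decision practical at full image scale.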
## ⚡ Quick Start
RankSEG is designed to be dropped into your existing inference pipeline with just a few lines of code.
### 1. Installation

```shell
pip install -U rankseg
```
### 2. Basic Usage (3 Lines of Code)

```python
import torch.nn.functional as F
from rankseg import RankSEG

# 1. Initialize RankSEG (optimizing for Dice)
rankseg = RankSEG(metric='dice')

# 2. Get probability output from YOUR model
# probs: (Batch, Class, H, W)
probs = F.softmax(model_logits, dim=1)

# 3. Get optimized predictions (instantly!)
preds = rankseg.predict(probs)
```
## ✨ Key Features
- 📈 Performance Boost: Consistently improves mIoU/mDice scores over standard argmax.
- 🔌 Zero Effort: Compatible with any PyTorch model. No retraining, no fine-tuning.
- 🧊 Training-Free: Purely post-processing. Works with frozen weights.
- ⚡ Real-time Inference: Efficient RMA (Reciprocal Moment Approximation) solver.
- 🧩 Versatile: Supports semantic (multi-class) and binary (multi-label) tasks.
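A note on why this stays tractable: the expected-Dice-optimal prediction is always a top-k set of the pixel-probability ranking, so only n + 1 candidate volumes need scoring rather than 2^n subsets (the RMA solver then approximates that scoring efficiently). Below is a simplified stdlib sketch of the ranking step, using a brute-force expectation and hypothetical probabilities, not the package API:

```python
from itertools import product

def expected_dice(pred_idx, probs):
    """Expected Dice of predicting exactly the pixels in pred_idx,
    with labels modeled as independent Bernoulli(probs) draws."""
    total = 0.0
    for labels in product([0, 1], repeat=len(probs)):
        w = 1.0
        for p, y in zip(probs, labels):
            w *= p if y else 1 - p
        inter = sum(labels[i] for i in pred_idx)
        denom = len(pred_idx) + sum(labels)
        total += w * (2 * inter / denom if denom else 1.0)
    return total

def rank_predict(probs):
    """Score only the n + 1 top-k prefixes of the probability ranking."""
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    prefixes = [order[:k] for k in range(len(probs) + 1)]
    return max(prefixes, key=lambda s: expected_dice(s, probs))

print(sorted(rank_predict([0.60, 0.55, 0.45, 0.05])))  # [0, 1, 2]
```

Note that the chosen volume (k = 3) again includes a pixel below the 0.5 threshold, matching the ranking-based theory.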
## 📊 Benchmarks
RankSEG delivers consistent gains across various architectures and datasets without touching a single weight.
| Model | Dataset | mIoU (Argmax) | mIoU (RankSEG) | Gain |
| :--- | :--- | :---: | :---: | :---: |
| DeepLabV3+ | PASCAL VOC | 77.25% | 78.14% | +0.89% |
| SegFormer | PASCAL VOC | 77.57% | 78.59% | +1.02% |
| UPerNet | PASCAL VOC | 79.52% | 80.31% | +0.79% |
| SegFormer | ADE20K | 40.00% | 40.82% | +0.82% |
| UPerNet | ADE20K | 42.86% | 43.84% | +0.98% |
Detailed results available in our NeurIPS 2025 paper.
## 🛠️ Integrations & Demos
| Framework | Task | Quick Start |
| :--- | :--- | :---: |
| Standard PyTorch | Semantic Segmentation | |
| Segment Anything (SAM) | Zero-shot Segmentation | |
| Hugging Face | Interactive Demo | |
| PaddleSeg | | |
## 📖 Citation
If you use RankSEG in your research, please cite our papers:
- Dai, B., & Li, C. (2023). RankSEG: A Consistent Ranking-based Framework for Segmentation. Journal of Machine Learning Research, 24(224), 1-50. [link]
- Wang, Z., & Dai, B. (2025). RankSEG-RMA: An Efficient Segmentation Algorithm via Reciprocal Moment Approximation. Advances in Neural Information Processing Systems (NeurIPS 2025). [link]
```bibtex
@article{dai2023rankseg,
  title={RankSEG: A Consistent Ranking-based Framework for Segmentation},
  author={Dai, Ben and Li, Chunlin},
  journal={Journal of Machine Learning Research},
  volume={24},
  number={224},
  pages={1--50},
  url={https://www.jmlr.org/papers/v24/22-0712.html},
  year={2023}
}

@inproceedings{wang2025rankseg,
  title={RankSEG-RMA: An Efficient Segmentation Algorithm via Reciprocal Moment Approximation},
  author={Wang, Zixun and Dai, Ben},
  booktitle={Advances in Neural Information Processing Systems},
  url={https://arxiv.org/abs/2510.15362},
  year={2025}
}
```
<div align="center"> <p>Star us on GitHub if RankSEG helps your project! ⭐</p> </div>