# SegmenTron
Supports PointRend, Fast_SCNN, HRNet, Deeplabv3_plus (xception, resnet, mobilenet), ContextNet, FPENet, DABNet, EdaNet, ENet, Espnetv2, RefineNet, UNet, DANet, DFANet, HardNet, LedNet, OCNet, EncNet, DuNet, CGNet, CCNet, BiSeNet, PSPNet, ICNet, FCN, and DeepLab.
PyTorch for Semantic Segmentation

## Introduction
This repository contains models for semantic segmentation, together with a training and evaluation pipeline, implemented in PyTorch.

## Model zoo

| Model | Backbone | Dataset | Eval size | Mean IoU (paper) | Mean IoU (this repo) |
|:-:|:-:|:-:|:-:|:-:|:-:|
| DeepLabv3_plus | xception65 | cityscape(val) | (1025,2049) | 78.8 | 78.93 |
| DeepLabv3_plus | xception65 | coco(val) | 480/520 | - | 70.50 |
| DeepLabv3_plus | xception65 | pascal_aug(val) | 480/520 | - | 89.56 |
| DeepLabv3_plus | xception65 | pascal_voc(val) | 480/520 | - | 88.39 |
| DeepLabv3_plus | resnet101 | cityscape(val) | (1025,2049) | - | 78.27 |
| DANet | resnet101 | cityscape(val) | (1024,2048) | 79.9 | 79.34 |
| PSPNet | resnet101 | cityscape(val) | (1025,2049) | 78.63 | 77.00 |
### Real-time models

| Model | Backbone | Dataset | Eval size | Mean IoU (paper) | Mean IoU (this repo) | FPS |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| ICNet | resnet50(0.5) | cityscape(val) | (1024,2048) | 67.8 | - | 41.39 |
| DeepLabv3_plus | mobilenetV2 | cityscape(val) | (1024,2048) | 70.7 | 70.3 | 46.64 |
| BiSeNet | resnet18 | cityscape(val) | (1024,2048) | - | - | 39.90 |
| LEDNet | - | cityscape(val) | (1024,2048) | - | - | 31.78 |
| CGNet | - | cityscape(val) | (1024,2048) | - | - | 46.11 |
| HardNet | - | cityscape(val) | (1024,2048) | 75.9 | - | 69.06 |
| DFANet | xceptionA | cityscape(val) | (1024,2048) | 70.3 | - | 21.46 |
| HRNet | w18_small_v1 | cityscape(val) | (1024,2048) | 70.3 | 70.5 | 66.01 |
| Fast_SCNN | - | cityscape(val) | (1024,2048) | 68.3 | 68.9 | 145.77 |
FPS was measured on a V100 GPU.
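FPS here means forward passes per second over repeated timed runs. The repo's actual benchmark script is not shown; a minimal, hypothetical timing helper illustrating the idea looks like this — when timing on GPU, pass `sync=torch.cuda.synchronize` so asynchronous CUDA kernels are fully counted:

```python
import time

def measure_fps(forward, warmup=5, iters=20, sync=lambda: None):
    """Return forward passes per second for a zero-argument callable.

    `sync` should be torch.cuda.synchronize when benchmarking on GPU,
    so queued asynchronous kernels are included in the timing window.
    """
    for _ in range(warmup):   # warm up caches / cudnn autotuning
        forward()
    sync()
    start = time.perf_counter()
    for _ in range(iters):
        forward()
    sync()
    return iters / (time.perf_counter() - start)
```

Without the synchronize calls, GPU timings only measure kernel launch overhead and can be wildly optimistic.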
## Environments
- python 3
- torch >= 1.1.0
- torchvision
- pyyaml
- Pillow
- numpy
## Install
```bash
python setup.py develop
```
If you do not want to run CCNet, you can skip this step; just comment out the following line in `segmentron/models/__init__.py`:
```python
from .ccnet import CCNet
```
## Dataset prepare
Cityscapes, COCO, Pascal VOC, and ADE20K are currently supported.
Please refer to DATA_PREPARE.md for dataset preparation.
## Pretrained backbone models
Pretrained backbone models are downloaded automatically to the default PyTorch cache directory (`~/.cache/torch/checkpoints/`).
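If you want the weights stored elsewhere, PyTorch honors the `TORCH_HOME` environment variable. A small sketch (a hypothetical helper assuming standard PyTorch cache behavior, not part of this repo) of where downloads land:

```python
import os

def checkpoint_dir():
    # Mirrors PyTorch's default: $TORCH_HOME/checkpoints, falling back
    # to ~/.cache/torch/checkpoints when TORCH_HOME is unset.
    torch_home = os.environ.get(
        "TORCH_HOME", os.path.join(os.path.expanduser("~"), ".cache", "torch")
    )
    return os.path.join(torch_home, "checkpoints")

print(checkpoint_dir())
```

Set `TORCH_HOME=/path/to/cache` before training to redirect the backbone downloads.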
Code structure
├── configs # yaml config file
├── segmentron # core code
├── tools # train eval code
└── datasets # put datasets here
## Train
### Train with a single GPU
```bash
CUDA_VISIBLE_DEVICES=0 python -u tools/train.py --config-file configs/cityscapes_deeplabv3_plus.yaml
```
### Train with multiple GPUs
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
```
## Eval
### Eval with a single GPU
You can download a trained model from the model zoo tables above, or train one yourself.
```bash
CUDA_VISIBLE_DEVICES=0 python -u ./tools/eval.py --config-file configs/cityscapes_deeplabv3_plus.yaml \
    TEST.TEST_MODEL_PATH your_test_model_path
```
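Trailing `KEY VALUE` pairs such as `TEST.TEST_MODEL_PATH` override entries in the YAML config. The actual merge logic lives in segmentron's config module; a minimal sketch of the dotted-key idea (`apply_override` is a hypothetical helper, not the repo's API):

```python
def apply_override(cfg, dotted_key, value):
    # Walk a key like "TEST.TEST_MODEL_PATH" down the nested config
    # dict, creating intermediate dicts as needed, and set the leaf.
    node = cfg
    keys = dotted_key.split(".")
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value
    return cfg

cfg = {"TEST": {"TEST_MODEL_PATH": ""}}
apply_override(cfg, "TEST.TEST_MODEL_PATH", "runs/best.pth")
print(cfg["TEST"]["TEST_MODEL_PATH"])
```

The same override style works for any key in the YAML file, which is why the commands below pass the pairs after the `--config-file` argument.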
### Eval with multiple GPUs
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 ./tools/dist_test.sh ${CONFIG_FILE} ${GPU_NUM} \
    TEST.TEST_MODEL_PATH your_test_model_path
```