
PaddleSeg

Easy-to-use image segmentation library with an awesome pre-trained model zoo, supporting a wide range of practical tasks in semantic segmentation, interactive segmentation, panoptic segmentation, image matting, 3D segmentation, and more.


简体中文 | English

<div align="center"> <p align="center"> <img src="./docs/images/paddleseg_logo.png" align="middle" width = "500" /> </p>

A high-performance image segmentation development kit based on PaddlePaddle, covering the full image segmentation workflow end to end, from training to deployment.


</div> <div align="center"> <img src="https://github.com/shiyutang/files/blob/9590ea6bfc36139982ce75b00d3b9f26713934dd/teasor.gif" width = "800" /> </div>

<img src="./docs/images/seg_news_icon.png" width="20"/> News

  • 🔥[2024-11-05] Added low-code full-workflow development for semantic segmentation:
    • PaddleX, the PaddlePaddle low-code development tool, now supports low-code full-workflow development for the image segmentation domain:

      • 🎨 Rich models, one-click invocation: consolidates the 19 models covering general semantic segmentation and image anomaly detection into 2 model pipelines, callable through a minimal Python API for a quick look at model results. The same API also supports image classification, object detection, intelligent text-image analysis, general OCR, time-series forecasting, and more, totaling 200+ models organized into 20+ single-function modules that developers can combine as needed
      • 🚀 Higher efficiency, lower barrier to entry: offers both a unified command-line interface and a graphical interface for concise, efficient model use, combination, and customization. Supports multiple deployment modes, including high-performance deployment, service-based deployment, and edge deployment. Model development can also switch seamlessly across mainstream hardware such as NVIDIA GPUs, Kunlunxin, Ascend, Cambricon, and Hygon
    • Added the image anomaly detection algorithm STFPM
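The "one API, many pipelines" design described above can be pictured with a small, self-contained sketch. The code below is a toy stand-in for illustration only; it is not the PaddleX API itself (PaddleX's actual entry points are documented in its own repository), and all names here are hypothetical:

```python
# Toy illustration of a pipeline registry that exposes many models
# behind a single create/predict entry point (not real PaddleX code).
from typing import Callable, Dict

PIPELINES: Dict[str, Callable[[str], dict]] = {}

def register(name: str):
    """Decorator that registers a predict function under a pipeline name."""
    def wrap(fn: Callable[[str], dict]):
        PIPELINES[name] = fn
        return fn
    return wrap

@register("semantic_segmentation")
def _seg(image_path: str) -> dict:
    # A real pipeline would run a segmentation model here.
    return {"task": "semantic_segmentation", "input": image_path}

@register("anomaly_detection")
def _anomaly(image_path: str) -> dict:
    return {"task": "anomaly_detection", "input": image_path}

def create_pipeline(name: str) -> Callable[[str], dict]:
    """Look up a pipeline by name -- the single entry point users call."""
    return PIPELINES[name]

result = create_pipeline("semantic_segmentation")("demo.jpg")
print(result["task"])  # semantic_segmentation
```

The point of the pattern is that adding a new task only adds a registry entry; the user-facing API stays a single `create_pipeline(name)` call.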

  • [2023-10-29] :fire: PaddleSeg 2.9 released! See the Release Note for details
    • Added multi-label segmentation support, with data conversion code and result visualization, enabling multi-label segmentation for a range of semantic segmentation models.
    • Released the lightweight vision foundation model MobileSAM for faster SAM inference.
    • Added quantization-aware distillation training compression for PP-LiteSeg, PP-MobileSeg, OCRNet, and SegFormer-B0 to improve inference speed.
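Multi-label segmentation stores several binary masks per image rather than one mutually exclusive label map. As a rough illustration of the kind of data conversion involved (a generic NumPy sketch, not PaddleSeg's actual conversion code), a class-index mask can be expanded into per-class binary masks like this:

```python
import numpy as np

def label_map_to_binary_masks(label_map: np.ndarray, num_classes: int) -> np.ndarray:
    """Expand an (H, W) class-index mask into a (num_classes, H, W)
    stack of binary masks, one channel per class."""
    return np.stack([(label_map == c).astype(np.uint8) for c in range(num_classes)])

# Tiny 2x2 example with classes 0..2.
label_map = np.array([[0, 1],
                      [2, 1]])
masks = label_map_to_binary_masks(label_map, num_classes=3)
print(masks.shape)        # (3, 2, 2)
print(masks[1].tolist())  # [[0, 1], [0, 1]]
```

In true multi-label data the per-class channels may overlap, which is exactly what a single-channel label map cannot represent.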

<img src="https://user-images.githubusercontent.com/48054808/157795569-9fc77c85-732f-4870-9be0-99a7fe2cff27.png" width="20"/> Introduction

PaddleSeg is an end-to-end image segmentation development kit built on PaddlePaddle. It ships 45+ model algorithms and 140+ pre-trained models, supports both configuration-driven and API-driven development, and connects the full workflow of data annotation, model development, training, compression, and deployment. It provides four segmentation capabilities: semantic segmentation, interactive segmentation, image matting, and panoptic segmentation, helping algorithms reach production in medical, industrial, remote sensing, entertainment, and other scenarios.

<div align="center"> <img src="https://github.com/shiyutang/files/raw/main/teasor_new.gif" width = "800" /> </div>

<img src="./docs/images/feature.png" width="20"/> Features

  • High accuracy: tracks cutting-edge segmentation research and pairs it with accurately pre-trained backbone networks, providing 45+ mainstream segmentation networks and 150+ high-quality pre-trained models whose results outperform other open-source implementations.

  • High performance: uses acceleration strategies such as multi-process asynchronous I/O and multi-GPU parallel training and evaluation, combined with the GPU-memory optimizations of the PaddlePaddle core framework, to substantially cut training cost and let developers train segmentation models faster and more cheaply.

  • Modular: built on a modular design that decouples data preparation, segmentation models, backbone networks, loss functions, and other components, so developers can assemble configurations tailored to their application and meet different accuracy and performance requirements.

  • Full workflow: connects data annotation, model development, model training, model compression, and model deployment into one pipeline, validated in production use, so developers can complete the whole job in one place.
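The modular design above amounts to assembling decoupled components from a declarative config. The snippet below is a generic, illustrative sketch of that pattern in plain Python (PaddleSeg itself drives this from YAML files under `configs/`; the component names here are hypothetical):

```python
# Toy component registries + config-driven assembly (illustrative only).
BACKBONES = {"toy_hrnet": lambda: "hrnet-features"}
LOSSES = {"cross_entropy": lambda pred, target: f"ce({pred},{target})"}

def build_model(cfg: dict) -> dict:
    """Assemble decoupled parts (backbone, loss) named in a config dict,
    mirroring how a YAML config selects interchangeable components."""
    return {
        "backbone": BACKBONES[cfg["backbone"]](),
        "loss": LOSSES[cfg["loss"]],
        "num_classes": cfg["num_classes"],
    }

cfg = {"backbone": "toy_hrnet", "loss": "cross_entropy", "num_classes": 19}
model = build_model(cfg)
print(model["backbone"])  # hrnet-features
```

Swapping the backbone or loss is then a one-line config change, which is what lets one training loop serve many accuracy/performance trade-offs.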

<div align="center"> <img src="https://user-images.githubusercontent.com/14087480/176379006-7f330e00-b6b0-480e-9df8-8fd1090da4cf.png" width = "800" /> </div>

⚡ Quick Start

🔥 Low-code Full-workflow Development

<img src="./docs/images/model.png" width="20"/> Product Matrix

<table align="center"> <tbody> <tr align="center" valign="bottom"> <td> <b>Models</b> </td> <td colspan="2"> <b>Components</b> </td> <td> <b>Featured Cases</b> </td> </tr> <tr valign="top"> <td> <ul> <details><summary><b>Semantic Segmentation Models</b></summary> <ul> <li><a href="./configs/pp_liteseg">PP-LiteSeg</a> </li> <li><a href="./configs/pp_mobileseg">PP-MobileSeg</a> </li> <li><a href="./configs/deeplabv3p">DeepLabV3P</a> </li> <li><a href="./configs/ocrnet">OCRNet</a> </li> <li><a href="./configs/mobileseg">MobileSeg</a> </li> <li><a href="./configs/ann">ANN</a></li> <li><a href="./configs/attention_unet">Att U-Net</a></li> <li><a href="./configs/bisenetv1">BiSeNetV1</a></li> <li><a href="./configs/bisenet">BiSeNetV2</a></li> <li><a href="./configs/ccnet">CCNet</a></li> <li><a href="./configs/danet">DANet</a></li> <li><a href="./configs/ddrnet">DDRNet</a></li> <li><a href="./configs/decoupled_segnet">DecoupledSeg</a></li> <li><a href="./configs/deeplabv3">DeepLabV3</a></li> <li><a href="./configs/dmnet">DMNet</a></li> <li><a href="./configs/dnlnet">DNLNet</a></li> <li><a href="./configs/emanet">EMANet</a></li> <li><a href="./configs/encnet">ENCNet</a></li> <li><a href="./configs/enet">ENet</a></li> <li><a href="./configs/espnetv1">ESPNetV1</a></li> <li><a href="./configs/espnet">ESPNetV2</a></li> <li><a href="./configs/fastfcn">FastFCN</a></li> <li><a href="./configs/fastscnn">Fast-SCNN</a></li> <li><a href="./configs/gcnet">GCNet</a></li> <li><a href="./configs/ginet">GINet</a></li> <li><a href="./configs/glore">GloRe</a></li> <li><a href="./configs/gscnn">GSCNN</a></li> <li><a href="./configs/hardnet">HarDNet</a></li> <li><a href="./configs/fcn">HRNet-FCN</a></li> <li><a href="./configs/hrnet_w48_contrast">HRNet-Contrast</a></li> <li><a href="./configs/isanet">ISANet</a></li> <li><a href="./configs/pfpn">PFPNNet</a></li> <li><a href="./configs/pointrend">PointRend</a></li> <li><a href="./configs/portraitnet">PortraitNet</a></li> <li><a href="./configs/pp_humanseg_lite">PP-HumanSeg-Lite</a></li> 
<li><a href="./configs/pspnet">PSPNet</a></li> <li><a href="./configs/pssl">PSSL</a></li> <li><a href="./configs/segformer">SegFormer</a></li> <li><a href="./configs/segmenter">SegMenter</a></li> <li><a href="./configs/segmne">SegNet</a></li> <li><a href="./configs/setr">SETR</a></li> <li><a href="./configs/sfnet">SFNet</a></li> <li><a href="./configs/stdcseg">STDCSeg</a></li> <li><a href="./configs/u2net">U<sup>2</sup>Net</a></li> <li><a href="./configs/unet">UNet</a></li> <li><a href="./configs/unet_plusplus">UNet++</a></li> <li><a href="./configs/unet_3plus">UNet3+</a></li> <li><a href="./configs/upernet">UperNet</a></li> <li><a href="./configs/rtformer">RTFormer</a></li> <li><a href="./configs/uhrnet">UHRNet</a></li> <li><a href="./configs/topformer">TopFormer</a></li> <li><a href="./configs/mscale_ocrnet">MscaleOCRNet-PSA</a></li> <li><a href="./configs/cae">CAE</a></li> <li><a href="./configs/maskformer">MaskFormer</a></li> <li><a href="./configs/vit_adapter">ViT-Adapter</a></li> <li><a href="./configs/hrformer">HRFormer</a></li> <li><a href="./configs/lpsnet">LPSNet</a></li> <li><a href="./configs/segnext">SegNeXt</a></li> <li><a href="./configs/knet">K-Net</a></li> </ul> </details> <details><summary><b>Interactive Segmentation Models</b></summary> <ul> <li><a href="./EISeg">EISeg</a></li> <li>RITM</li> <li>EdgeFlow</li> </ul> </details> <details><summary><b>Image Matting Models</b></summary> <ul> <li><a href="./Matting/configs/ppmattingv2">PP-MattingV2</a></li> <li><a href="./Matting/configs/ppmatting">PP-MattingV1</a></li> <li><a href="./Matting/configs/dim/dim-vgg16.yml">DIM</a></li> <li><a href="./Matting/configs/modnet/modnet-hrnet_w18.yml">MODNet</a></li> <li><a href="./Matting/configs/human_matting/human_matting-resnet34_vd.yml">PP-HumanMatting</a></li> <li><a href="./Matting/configs/rvm">RVM</a></li> </ul> </details> <details><summary><b>Panoptic Segmentation</b></summary> <ul> <li><a href="./contrib/PanopticSeg/configs/mask2former">Mask2Former</a></li> <li><a 
href="./contrib/PanopticSeg/configs/panoptic_deeplab">Panoptic-DeepLab</a></li> </ul> </details> </td> <td> <details><summary><b>Backbones</b></summary> <ul> <li><a href="./paddleseg/models/backbones/hrnet.py">HRNet</a></li> <li><a href="./paddleseg/models/backbones/resnet_cd.py">ResNet</a></li> <li><a href="./paddleseg/models/backbones/stdcnet.py">STDCNet</a></li> <li><a href="./paddleseg/models/backbones/mobilenetv2.py">MobileNetV2</a></li> <li><a href="./paddleseg/models/backbones/mobilenetv3.py">MobileNetV3</a></li> <li><a href="./paddleseg/models/backbones/shufflenetv2.py">ShuffleNetV2</a></li> <li><a href="./paddleseg/models/backbones/ghostnet.py">GhostNet</a></li> <li><a href="./paddleseg/models/backbones/lite_hrnet.py">LiteHRNet</a></li> <li><a href="./paddleseg/models/backbones/xception_deeplab.py">XCeption</a></li> <li><a href="./paddleseg/models/backbones/vision_transformer.py">VIT</a></li> <li><a href="./paddleseg/models/backbones/mix_transformer.py">MixVIT</a></li> <li><a href="./paddleseg/models/backbones/swin_transformer.py">Swin Transformer</a></li> <li><a href="./paddleseg/models/backbones/top_transformer.py">TopTransformer</a></li> <li><a href="./paddleseg/models/backbones/hrformer.py">HRTransformer</a></li> <li><a href="./paddleseg/models/backbones/mscan.py">MSCAN</a></li> </ul> </details> <details><summary><b>Loss Functions</b></summary> <ul> <li><a href="./paddleseg/models/losses/binary_cross_entropy_loss.py">Binary CE Loss</a></li> <li><a href="./paddleseg/models/losses/bootstrapped_cross_entropy_loss.py">Bootstrapped CE Loss</a></li> <li><a href="./paddleseg/models/losses/cross_entropy_loss.py">Cross Entropy Loss</