CLAN
( TPAMI2022 / CVPR2019 Oral ) Taking A Closer Look at Domain Shift: Category-level Adversaries for Semantics Consistent Domain Adaptation
This is a PyTorch implementation of CLAN.
Oral Presentation Video
Prerequisites
- Python 3.6
- GPU Memory >= 11G
- PyTorch 1.0.0
Getting started
- Download the GTA5 Dataset
- Download the SYNTHIA Dataset
- Download the Cityscapes Dataset
- Download the ImageNet-pretrained model
The data folder is structured as follows:
├── data/
│   ├── Cityscapes/
│   │   ├── gtFine/
│   │   └── leftImg8bit/
│   ├── GTA5/
│   │   ├── images/
│   │   └── labels/
│   └── SYNTHIA/
│       └── RAND_CITYSCAPES/
└── model/
    ├── DeepLab_resnet_pretrained.pth
    └── ...
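Before training, it can help to verify that the datasets and the pretrained model are in place. The helper below is not part of the CLAN repo; it is a minimal sketch whose path list simply mirrors the tree above.

```python
# Hypothetical sanity check (not part of CLAN): report which expected
# data/model paths are missing under a given root directory.
import os

EXPECTED = [
    "data/Cityscapes/gtFine",
    "data/Cityscapes/leftImg8bit",
    "data/GTA5/images",
    "data/GTA5/labels",
    "data/SYNTHIA/RAND_CITYSCAPES",
    "model/DeepLab_resnet_pretrained.pth",
]

def missing_paths(root="."):
    """Return the expected paths that are absent under `root`."""
    return [p for p in EXPECTED if not os.path.exists(os.path.join(root, p))]
```

Running `missing_paths()` from the repo root returns an empty list when the layout matches the tree above.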
Train
CUDA_VISIBLE_DEVICES=0 python CLAN_train.py --snapshot-dir ./snapshots/GTA2Cityscapes
Evaluate
CUDA_VISIBLE_DEVICES=0 python CLAN_evaluate.py --restore-from ./snapshots/GTA2Cityscapes/GTA5_100000.pth --save ./result/GTA2Cityscapes_100000
Our pretrained model is available via Google Drive
Compute IoU
python CLAN_iou.py ./data/Cityscapes/gtFine/val result/GTA2Cityscapes_100000
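For reference, the per-class IoU that `CLAN_iou.py` reports can be sketched from a confusion matrix as below. This standalone version is illustrative, assuming the standard intersection-over-union definition, not the repo's exact implementation.

```python
# Illustrative per-class IoU from a confusion matrix (a sketch, not
# the repo's exact code). hist[i, j] counts pixels of ground-truth
# class i predicted as class j.
import numpy as np

def fast_hist(gt, pred, num_classes):
    """Accumulate a confusion matrix over flattened label arrays."""
    mask = (gt >= 0) & (gt < num_classes)  # ignore out-of-range labels
    return np.bincount(
        num_classes * gt[mask].astype(int) + pred[mask],
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)

def per_class_iou(hist):
    """IoU per class: diagonal over (row sum + column sum - diagonal)."""
    intersection = np.diag(hist)
    union = hist.sum(axis=1) + hist.sum(axis=0) - intersection
    return intersection / np.maximum(union, 1)  # avoid division by zero
```

The mean of the returned vector over valid classes gives the mIoU figure usually quoted for GTA5-to-Cityscapes adaptation.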
Tip: The best-performing model may not be the final one from the last epoch. To evaluate every saved model in bulk, use CLAN_evaluate_bulk.py and CLAN_iou_bulk.py; the results will be saved in an Excel sheet.
CUDA_VISIBLE_DEVICES=0 python CLAN_evaluate_bulk.py
python CLAN_iou_bulk.py
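Listing snapshots in training order is the first step of any bulk evaluation. The sketch below assumes the `GTA5_<iteration>.pth` naming visible in the evaluate command above; the directory path and helper name are illustrative, not part of the repo.

```python
# Hypothetical helper: collect saved checkpoints sorted by training
# iteration, assuming the GTA5_<iteration>.pth naming convention.
import glob
import os

def list_snapshots(snapshot_dir="./snapshots/GTA2Cityscapes"):
    """Return checkpoint paths sorted by the iteration in the filename."""
    paths = glob.glob(os.path.join(snapshot_dir, "GTA5_*.pth"))

    def step(path):
        # "GTA5_100000.pth" -> 100000
        return int(os.path.basename(path).split("_")[1].split(".")[0])

    return sorted(paths, key=step)
```

Iterating over this list and recording mIoU per checkpoint is what lets you pick the best snapshot rather than the last one.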
Visualization Results
<p align="left"> <img src="https://github.com/RoyalVane/CLAN/blob/master/gifs/video_1.gif" width="420" height="210" alt="(a)"/> <img src="https://github.com/RoyalVane/CLAN/blob/master/gifs/video_2.gif" width="420" height="210" alt="(b)"/> </p> <p align="left"> <img src="https://github.com/RoyalVane/CLAN/blob/master/gifs/video_3.gif" width="420" height="210" alt="(c)"/> <img src="https://github.com/RoyalVane/CLAN/blob/master/gifs/video_4.gif" width="420" height="210" alt="(d)"/> </p>

This code is heavily borrowed from the baseline AdaptSegNet.
Citation
If you use this code in your research, please consider citing:
@article{luo2021category,
title={Category-Level Adversarial Adaptation for Semantic Segmentation using Purified Features},
author={Luo, Yawei and Liu, Ping and Zheng, Liang and Guan, Tao and Yu, Junqing and Yang, Yi},
journal={IEEE Transactions on Pattern Analysis \& Machine Intelligence (TPAMI)},
year={2021},
}
@inproceedings{luo2019Taking,
title={Taking A Closer Look at Domain Shift: Category-level Adversaries for Semantics Consistent Domain Adaptation},
author={Luo, Yawei and Zheng, Liang and Guan, Tao and Yu, Junqing and Yang, Yi},
booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2019}
}