CMOM
Code for "Domain Adaptive Video Semantic Segmentation via Cross-Domain Moving Object Mixing" in WACV 2023
[paper] [demo]
Prerequisites
Installation:
- Conda environment
```bash
conda create -n CMOM python=3.6
conda activate CMOM
conda install -c menpo opencv
pip install kornia
pip install importlib-metadata
```
- Clone ADVENT
```bash
git clone https://github.com/valeoai/ADVENT.git
pip install -e ./ADVENT
```
- Clone the repo
```bash
git clone https://github.com/kyusik-cho/CMOM.git
pip install -e ./CMOM
```
Data preparation:
Download Cityscapes, VIPER, SYNTHIA-Seq.
Ensure the file structure is as follows.
- Cityscapes-Seq
```
<data_dir>/Cityscapes/
<data_dir>/Cityscapes/leftImg8bit_sequence
<data_dir>/Cityscapes/gtFine
```
- VIPER
```
<data_dir>/Viper/
<data_dir>/Viper/train/img
<data_dir>/Viper/train/cls
```
- SYNTHIA-Seq
```
<data_dir>/SynthiaSeq/
<data_dir>/SynthiaSeq/SEQS-04-DAWN
```
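A quick way to confirm the layout above is in place before training is a small sanity-check script. The sketch below is not part of this repo; it simply hard-codes the sub-directories listed above, and `/path/to/data` is a placeholder for your `<data_dir>`.

```python
import os

# Sub-directories the README expects under <data_dir>.
EXPECTED_DIRS = [
    "Cityscapes/leftImg8bit_sequence",
    "Cityscapes/gtFine",
    "Viper/train/img",
    "Viper/train/cls",
    "SynthiaSeq/SEQS-04-DAWN",
]

def check_layout(data_dir):
    """Return the expected sub-directories that are missing under data_dir."""
    return [d for d in EXPECTED_DIRS
            if not os.path.isdir(os.path.join(data_dir, d))]

if __name__ == "__main__":
    for d in check_layout("/path/to/data"):
        print("missing:", d)
```

An empty result means all five dataset directories are in place.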
Optical Flow Estimation:
We follow DA-VSN to estimate optical flow.
Please follow their procedure to obtain the estimated optical flow.
Pseudo labels
Download the pseudo labels here and put them under <root_dir>/cmom.
Alternatively, run make_pseudolabel.py with the DA-VSN pre-trained model.
Pre-trained model:
Download the pre-trained models and put them under <root_dir>/pretrained_models.
When training a model, you can start from either the DA-VSN pre-trained model or the ImageNet pre-trained DeepLab model.
Evaluation on Pretrained Models
```bash
python test.py --cfg configs/cmom_viper2city_pretrained.yml
python test.py --cfg configs/cmom_syn2city_pretrained.yml
```
Train
```bash
python train.py --cfg configs/cmom_viper2city.yml --tensorboard
python train.py --cfg configs/cmom_syn2city.yml --tensorboard
```
Test
```bash
python test.py --cfg configs/cmom_viper2city.yml
python test.py --cfg configs/cmom_syn2city.yml
```
Acknowledgement
This code is based on the following open-source projects:
- ADVENT
- DA-VSN