# Sat2Density
The official implementation of the paper: Sat2Density: Faithful Density Learning from Satellite-Ground Image Pairs (ICCV 2023)
Sat2Density: Faithful Density Learning from Satellite-Ground Image Pairs
Ming Qian, Jincheng Xiong, Gui-Song Xia, Nan Xue
IEEE/CVF International Conference on Computer Vision (ICCV), 2023
Project Page | Paper | Data | Install.md
## 💡 Changelog
- [2026/01] 🏆 Sat2Density++ has been accepted by T-PAMI!
- [2025/05] The journal extension "Seeing through Satellite Images at Street Views" (Sat2Density++) is available on arXiv; see the Project Page.
<p align="center" float="left"> <img src="docs/figures/demo/case1.sat.gif" alt="drawing" width="19%"> <img src="docs/figures/demo-density/case1.gif" alt="drawing" width="38%"> <img src="docs/figures/demo/case1.render.gif" alt="drawing" width="38%"> </p>
<p align="center" float="left"> <img src="docs/figures/demo/case2.sat.gif" alt="drawing" width="19%"> <img src="docs/figures/demo-density/case2.gif" alt="drawing" width="38%"> <img src="docs/figures/demo/case2.render.gif" alt="drawing" width="38%"> </p>
<p align="center" float="left"> <img src="docs/figures/demo/case3.sat.gif" alt="drawing" width="19%"> <img src="docs/figures/demo-density/case3.gif" alt="drawing" width="38%"> <img src="docs/figures/demo/case3.render.gif" alt="drawing" width="38%"> </p>
<p align="center" float="left"> <img src="docs/figures/demo/case4.sat.gif" alt="drawing" width="19%"> <img src="docs/figures/demo-density/case4.gif" alt="drawing" width="38%"> <img src="docs/figures/demo/case4.render.gif" alt="drawing" width="38%"> </p>
See the Project Page for more results and a brief video introduction to Sat2Density.
## Downloading Checkpoints

Two checkpoints, for CVACT and CVUSA, can be found at this url. You can also download them with the following command:

```bash
bash scripts/download_weights.sh
```
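After downloading, it can help to sanity-check that both run IDs used by the test commands in this README are actually on disk before launching a job. The sketch below is illustrative only: the run IDs `2u87bj8w` (CVACT) and `2cqv8uh4` (CVUSA) come from this README, but the `checkpoints/` directory name is an assumption about where the download script places the files.

```python
from pathlib import Path

# Run IDs referenced by the test commands in this README.
RUN_IDS = ["2u87bj8w", "2cqv8uh4"]

def missing_checkpoints(root="checkpoints"):
    """Return the run IDs with no matching file or directory under root.

    The default "checkpoints" directory is an assumption about the
    layout produced by scripts/download_weights.sh.
    """
    root = Path(root)
    if not root.exists():
        return list(RUN_IDS)
    return [rid for rid in RUN_IDS if not any(root.rglob(f"*{rid}*"))]

if __name__ == "__main__":
    missing = missing_checkpoints()
    print("all present" if not missing else f"missing: {missing}")
```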
## QuickStart Demo

### Video Synthesis

Example usage:
```bash
python test.py --yaml=sat2density_cvact \
  --test_ckpt_path=2u87bj8w \
  --task=test_vid \
  --demo_img=demo_img/case1/satview-input.png \
  --sty_img=demo_img/case1/groundview.image.png \
  --save_dir=results/case1
```
We visualize our .vtk shape files with ParaView.
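ParaView reads the legacy ASCII VTK format directly, so a density volume can be exported with nothing but the standard library. The sketch below is not the repository's own export code; the flat value ordering and the `STRUCTURED_POINTS` layout are assumptions made for illustration.

```python
def write_vtk_volume(values, dims, path):
    """Write a flat list of density values as a legacy ASCII VTK volume.

    `dims` is (nx, ny, nz); `values` must be in x-fastest order, which
    is what the STRUCTURED_POINTS dataset type expects.
    """
    nx, ny, nz = dims
    if len(values) != nx * ny * nz:
        raise ValueError("number of values does not match dims")
    header = [
        "# vtk DataFile Version 3.0",
        "density volume",
        "ASCII",
        "DATASET STRUCTURED_POINTS",
        f"DIMENSIONS {nx} {ny} {nz}",
        "ORIGIN 0 0 0",
        "SPACING 1 1 1",
        f"POINT_DATA {nx * ny * nz}",
        "SCALARS density float 1",
        "LOOKUP_TABLE default",
    ]
    with open(path, "w") as f:
        f.write("\n".join(header) + "\n")
        f.write("\n".join(f"{v:.6f}" for v in values) + "\n")
```

The resulting file can be opened in ParaView and rendered with its volume representation.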
### Illumination Interpolation
```bash
python test.py --task=test_interpolation \
  --yaml=sat2density_cvact \
  --test_ckpt_path=2u87bj8w \
  --sty_img1=demo_img/case9/groundview.image.png \
  --sty_img2=demo_img/case7/groundview.image.png \
  --demo_img=demo_img/case3/satview-input.png \
  --save_dir=results/case2
```
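Conceptually, illumination interpolation blends the style representations extracted from `--sty_img1` and `--sty_img2`; the actual blending is internal to `test.py`. The sketch below only illustrates the idea with plain linear interpolation on hypothetical style vectors.

```python
def lerp_styles(sty1, sty2, t):
    """Linearly blend two style vectors; t=0 gives sty1, t=1 gives sty2."""
    return [(1.0 - t) * a + t * b for a, b in zip(sty1, sty2)]

# Sweep t over [0, 1] to get a smooth illumination transition,
# e.g. from a daytime style vector to a night-time one
# (the vectors here are made-up placeholders).
day, night = [0.9, 0.8, 0.1], [0.1, 0.1, 0.6]
transition = [lerp_styles(day, night, t / 4) for t in range(5)]
```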
## Train & Inference

- We trained our model on a single V100 32GB GPU; training takes about 20 hours.
- For data preparation, please check out data.md.
### Inference

To test the Center Ground-View Synthesis setting, run the commands below. If you want to save the results, add `--task=vis_test`.
```bash
# CVACT
python offline_train_test.py --yaml=sat2density_cvact --test_ckpt_path=2u87bj8w
# CVUSA
python offline_train_test.py --yaml=sat2density_cvusa --test_ckpt_path=2cqv8uh4
```
To test inference under different illumination conditions:
```bash
# CVACT
bash inference/single_style_test_cvact.sh
# CVUSA
bash inference/single_style_test_cvusa.sh
```
To synthesize ground-view videos:

```bash
bash inference/synthesis_video.sh
```
### Training

Training command (replace `X` with the id of the GPU to use):

```bash
# CVACT
CUDA_VISIBLE_DEVICES=X python train.py --yaml=sat2density_cvact
# CVUSA
CUDA_VISIBLE_DEVICES=X python train.py --yaml=sat2density_cvusa
```
## Citation

If you use this code for your research, please cite:
```bibtex
@InProceedings{Qian_2023_Sat2Density,
  author    = {Qian, Ming and Xiong, Jincheng and Xia, Gui-Song and Xue, Nan},
  title     = {Sat2Density: Faithful Density Learning from Satellite-Ground Image Pairs},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {3683-3692}
}

@ARTICLE{Qian_2026_Sat2Densitypp,
  author  = {Qian, Ming and Tan, Bin and Wang, Qiuyu and Zheng, Xianwei and Xiong, Hanjiang and Xia, Gui-Song and Shen, Yujun and Xue, Nan},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title   = {Seeing through Satellite Images at Street Views},
  year    = {2026},
  volume  = {},
  number  = {},
  pages   = {1-18},
  doi     = {10.1109/TPAMI.2026.3652860}
}
```