VMA
A general map auto annotation framework based on MapTR, with high flexibility in terms of spatial scale and element type
arXiv Preprint: arXiv:2304.09807
News
- Aug. 30th, 2023: We release an initial version of VMA.
- Aug. 9th, 2023: Code will be released in around 3 weeks.

https://github.com/hustvl/VMA/assets/40697001/ec099b41-835a-409d-a007-9766c414a483
TL;DR: VMA is a general map auto annotation framework based on MapTR, with high flexibility in terms of spatial scale and element type.
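VMA annotates driving scenes as vectorized map elements, i.e. typed point sequences. As a rough illustration of that idea only (a hypothetical data structure, not the repository's actual API or output format), an annotated element can be thought of as a class label plus an ordered polyline:

```python
# Hypothetical sketch of a vectorized map element: a class label plus an
# ordered sequence of 2D vertices. The real VMA output format may differ;
# consult the repository for specifics.
from dataclasses import dataclass, field


@dataclass
class MapElement:
    label: str  # e.g. "lane_divider", "crosswalk" (illustrative labels)
    points: list = field(default_factory=list)  # ordered (x, y) vertices

    def length(self) -> float:
        """Total polyline length, summing consecutive segment lengths."""
        return sum(
            ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
            for (x1, y1), (x2, y2) in zip(self.points, self.points[1:])
        )


# A straight three-point divider along the x-axis
divider = MapElement("lane_divider", [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)])
print(divider.length())  # 10.0
```

Because elements are plain point sequences rather than rasterized masks, the same representation covers arbitrary spatial scales and element types, which is where the framework's flexibility comes from.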
Getting Started
- Installation
- Prepare Dataset
- Inference on SD data (we provide only a few SD data samples for inference, since the SD data is owned by Horizon)
- Train and Eval on NYC data
Auto Annotation Results
Remote sensing:
Urban scene:
Highway scene:

Citation
If you find VMA useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.
@article{VMA,
  title={VMA: Divide-and-Conquer Vectorized Map Annotation System for Large-Scale Driving Scene},
  author={Chen, Shaoyu and Zhang, Yunchi and Liao, Bencheng and Xie, Jiafeng and Cheng, Tianheng and Sui, Wei and Zhang, Qian and Liu, Wenyu and Huang, Chang and Wang, Xinggang},
  journal={arXiv preprint arXiv:2304.09807},
  year={2023}
}