# FASTopoWM

This is the official repository for "FASTopoWM: Fast-Slow Lane Segment Topology Reasoning with Latent World Models".
Yiming Yang<sup>1,2</sup>, Hongbin Lin<sup>1,2</sup>, Yueru Luo<sup>1,2</sup>, Suzhong Fu<sup>1,2</sup>, Chao Zheng<sup>3</sup>, Xinrui Yan<sup>3</sup>, Shuqi Mei<sup>3</sup>, Kun Tang<sup>3</sup>, Shuguang Cui<sup>2,1</sup>, Zhen Li<sup>2,1</sup>

<sup>1</sup> FNii-Shenzhen, <sup>2</sup> SSE, CUHK-Shenzhen, <sup>3</sup> T Lab, Tencent
This repository is built upon LaneSegNet.
## Motivation
## Framework
## Visualizations
The visualization results demonstrate that our predictions maintain robust temporal consistency, reflected in the stable alignment of lane segment coordinates and topological structures as the ego vehicle moves.
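Temporal consistency of this kind is typically checked by warping the previous frame's lane segment coordinates into the current ego frame and comparing them with the current predictions. The sketch below is illustrative only (it is not part of this repo's code) and assumes a planar SE(2) ego motion given as a yaw rotation plus a 2D translation:

```python
import numpy as np

def warp_to_current_frame(points_prev, yaw, translation):
    """Map 2D lane points from the previous ego frame into the current
    ego frame, given the ego motion (yaw in radians, translation in the
    previous frame) between the two timestamps.

    points_prev: (N, 2) array of lane point coordinates.
    """
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])  # rotation of current axes w.r.t. previous
    # Subtract the ego translation, then express in the rotated axes.
    return (points_prev - translation) @ R

# Example: ego drove 1 m forward along x; a point 2 m ahead is now 1 m ahead.
warp_to_current_frame(np.array([[2.0, 0.0]]), 0.0, np.array([1.0, 0.0]))
# → [[1.0, 0.0]]
```

If the model is temporally consistent, warped previous-frame segments should overlap closely with the current-frame ones; drift shows up as a growing residual between the two point sets.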

## Prerequisites

- 4 x A100 GPUs (40 GB) or 4 x V100 GPUs (32 GB) (for batch size = 2)
## Prepare Dataset

Follow the OpenLane-V2 repo to download the image data and the Map Element Bucket data. Then run the following scripts to prepare the data for this repo.

```bash
cd FASTopoWM
mkdir data
ln -s {Path to OpenLane-V2 repo}/data/OpenLane-V2 ./data/
python ./tools/data_process.py
./tools/tracking/dist_track.sh
```
After setup, the hierarchy of the data folder is as follows:

```
data/OpenLane-V2
├── train
│   └── ...
├── val
│   └── ...
├── test
│   └── ...
├── data_dict_subset_A_train_lanesegnet.pkl
├── data_dict_subset_A_val_lanesegnet.pkl
├── data_dict_subset_A_train_lanesegnet_gt_tracks.pkl
├── data_dict_subset_A_val_lanesegnet_gt_tracks.pkl
├── ...
```
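Before training, it can save time to confirm that the generated `.pkl` files are actually in place. A minimal check script (not part of the repo; the file list is taken from the tree above, which may be incomplete):

```python
import os

# Expected preprocessed annotation files, per the folder hierarchy above.
EXPECTED = [
    "data_dict_subset_A_train_lanesegnet.pkl",
    "data_dict_subset_A_val_lanesegnet.pkl",
    "data_dict_subset_A_train_lanesegnet_gt_tracks.pkl",
    "data_dict_subset_A_val_lanesegnet_gt_tracks.pkl",
]

def missing_files(root):
    """Return the expected .pkl files that are absent under root."""
    return [f for f in EXPECTED if not os.path.isfile(os.path.join(root, f))]

if __name__ == "__main__":
    print(missing_files("data/OpenLane-V2"))  # [] means the dataset is ready
```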
## Installation

We recommend using conda to run the code.

```bash
conda create -n fastopowm python=3.8 -y
conda activate fastopowm
# (optional) Skip this step if CUDA 11.1 is already installed on your system.
conda install cudatoolkit=11.1.1 -c conda-forge
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
```

Install mm-series packages.

```bash
pip install mmcv-full==1.5.2 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
pip install mmdet==2.26.0
pip install mmsegmentation==0.29.1
pip install mmdet3d==1.0.0rc6
```

Install other required packages.

```bash
pip install -r requirements.txt
```
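A common failure mode with mm-series packages is installing an `mmcv-full` wheel built for a different torch/CUDA combination than the one in the environment. The helper below reconstructs the wheel index URL from a torch version string; the URL pattern is inferred from the single command above, so verify it against the official mmcv installation docs before relying on it:

```python
def cuda_tag(torch_version):
    """Extract the CUDA build tag (e.g. 'cu111') from a torch version
    string like '1.9.0+cu111'; return None for CPU-only builds."""
    return torch_version.split("+")[1] if "+" in torch_version else None

def mmcv_index_url(torch_version):
    """Build the mmcv-full wheel index URL matching a given torch build
    (URL pattern assumed from the install command in this README)."""
    base = torch_version.split("+")[0]
    # mmcv indexes wheels by torch major.minor with a trailing .0
    mm_torch = "torch" + ".".join(base.split(".")[:2]) + ".0"
    tag = cuda_tag(torch_version) or "cpu"
    return f"https://download.openmmlab.com/mmcv/dist/{tag}/{mm_torch}/index.html"

# Example: the torch build pinned above maps to the index URL used above.
print(mmcv_index_url("1.9.0+cu111"))
# → https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
```

You can read the installed torch version from `torch.__version__` and pass it straight to `mmcv_index_url`.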
## Train

We recommend using 4 GPUs for training. The training logs will be saved to work_dirs/stream.

```bash
mkdir -p work_dirs/stream
./tools/dist_train.sh 4 && ./tools/dist_train_stage2.sh 4
```
## Evaluate

```bash
./tools/dist_test.sh 4
```

For per-frame visualization, you can run:

```bash
./tools/dist_test.sh 4 --show
```
## Related resources

We acknowledge all the open-source contributors of the following projects for making this work possible:

- LaneSegNet
- OpenLane-V2
