FASTopoWM

This is the official project repository for "FASTopoWM: Fast-Slow Lane Segment Topology Reasoning with Latent World Models".

<div align="center"> <h2 align="center"> FASTopoWM: Fast-Slow Lane Segment Topology Reasoning with Latent World Models </h2>

Yiming Yang<sup>1,2</sup> , Hongbin Lin<sup>1,2</sup>, Yueru Luo<sup>1,2</sup>, Suzhong Fu<sup>1,2</sup>, Chao Zheng<sup>3</sup>, Xinrui Yan<sup>3</sup>, Shuqi Mei<sup>3</sup>, Kun Tang<sup>3</sup>, Shuguang Cui<sup>2,1</sup>, Zhen Li<sup>2,1</sup>

<sup>1</sup> FNii-Shenzhen <sup>2</sup> SSE, CUHK-Shenzhen, <sup>3</sup> T Lab, Tencent

arXiv

</div>

This repository is built upon LaneSegNet.

Motivation

<div align="center">

(motivation figure)

</div>

Framework

<div align="center">

(framework overview figure)

</div>

Visualizations

The visualization results demonstrate that our predictions maintain robust temporal consistency, reflected in the stable alignment of lane segment coordinates and topological structures as the ego vehicle moves.

<div align="center">

(temporal-consistency visualization examples)

</div>

Prerequisites

  • 4 × A100 GPUs (40 GB) or 4 × V100 GPUs (32 GB), for batch size = 2

Prepare Dataset

Follow the OpenLane-V2 repo to download the Image and Map Element Bucket data, then run the following scripts to prepare the data for this repo.

cd FASTopoWM
mkdir data

ln -s {Path to OpenLane-V2 repo}/data/OpenLane-V2 ./data/
python ./tools/data_process.py
bash ./tools/tracking/dist_track.sh

After setup, the data folder hierarchy should look like this:

data/OpenLane-V2
├── train
|   └── ...
├── val
|   └── ...
├── test
|   └── ...
├── data_dict_subset_A_train_lanesegnet.pkl
├── data_dict_subset_A_val_lanesegnet.pkl
├── data_dict_subset_A_train_lanesegnet_gt_tracks.pkl
├── data_dict_subset_A_val_lanesegnet_gt_tracks.pkl
├── ...
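Before moving on to training, it can help to sanity-check that the processed files are in place. Below is a minimal Python sketch, assuming the default `data/OpenLane-V2` root and the file names shown in the tree above:

```python
from pathlib import Path

# Annotation pickles expected after running the data-processing scripts
# (names taken from the folder listing above).
EXPECTED = [
    "data_dict_subset_A_train_lanesegnet.pkl",
    "data_dict_subset_A_val_lanesegnet.pkl",
    "data_dict_subset_A_train_lanesegnet_gt_tracks.pkl",
    "data_dict_subset_A_val_lanesegnet_gt_tracks.pkl",
]

def check_data_root(root="data/OpenLane-V2"):
    """Return a list of missing entries under the dataset root (empty = OK)."""
    root = Path(root)
    missing = [d for d in ("train", "val", "test") if not (root / d).is_dir()]
    missing += [f for f in EXPECTED if not (root / f).is_file()]
    return missing

if __name__ == "__main__":
    missing = check_data_root()
    print("Missing:", missing if missing else "none")
```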

Installation

We recommend using conda to run the code.

conda create -n fastopowm python=3.8 -y
conda activate fastopowm

# (optional) Skip this step if CUDA 11.1 is already installed on your machine.
conda install cudatoolkit=11.1.1 -c conda-forge

pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html

Install mm-series packages.

pip install mmcv-full==1.5.2 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
pip install mmdet==2.26.0
pip install mmsegmentation==0.29.1
pip install mmdet3d==1.0.0rc6

Install other required packages.

pip install -r requirements.txt
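The mm-series packages are sensitive to version mismatches, so a quick check after installation can save debugging time later. The sketch below compares installed versions against the pins from the commands above; the module names are assumptions based on each package's conventional import name:

```python
import importlib

# Version pins taken from the install commands above.
PINNED = {
    "torch": "1.9.0",
    "mmcv": "1.5.2",
    "mmdet": "2.26.0",
    "mmseg": "0.29.1",
    "mmdet3d": "1.0.0rc6",
}

def check_versions(pins):
    """Return {package: (expected, found)} for missing or mismatched packages."""
    problems = {}
    for name, expected in pins.items():
        try:
            found = getattr(importlib.import_module(name), "__version__", "unknown")
        except ImportError:
            found = None
        if found is None or not found.startswith(expected):
            problems[name] = (expected, found)
    return problems

if __name__ == "__main__":
    print(check_versions(PINNED) or "All pinned versions match.")
```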

Train

We recommend using 4 GPUs for training. The training logs will be saved to work_dirs/stream.

mkdir -p work_dirs/stream
./tools/dist_train.sh 4 && ./tools/dist_train_stage2.sh 4
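The `&&` chaining above runs stage 2 only if stage 1 exits successfully. If you prefer launching from Python (e.g. inside a job scheduler), a hypothetical `run_stages` helper mirroring that behaviour could look like this; the script paths and GPU-count argument follow the commands above:

```python
import subprocess

def run_stages(scripts, num_gpus=4):
    """Run each training script in order, passing the GPU count as an argument.

    Stops at the first failing stage (same behaviour as chaining with `&&`)
    and returns its exit code; returns 0 if all stages succeed.
    """
    for script in scripts:
        code = subprocess.run([script, str(num_gpus)]).returncode
        if code != 0:
            return code
    return 0

if __name__ == "__main__":
    run_stages(["./tools/dist_train.sh", "./tools/dist_train_stage2.sh"])
```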

Evaluate

./tools/dist_test.sh 4 

For per-frame visualization, run:

./tools/dist_test.sh 4 --show

Related resources

We thank the open-source contributors of the following projects, which made this work possible:
