OccProphet
[ICLR 2025] OccProphet: Pushing Efficiency Frontier of Camera-Only 4D Occupancy Forecasting with Observer-Forecaster-Refiner Framework
<p align="center"> <a href="https://jlchen-c.github.io/OccProphet/"> <img src="https://img.shields.io/badge/OccProphet-Project_Page-_?labelColor=F9F2FE&color=yellow"></a> <a href="https://arxiv.org/abs/2502.15180"> <img src="https://img.shields.io/badge/Arxiv-_?label=OccProphet&labelColor=F9F2FE&color=red"></a> <a href="LICENSE"> <img src="https://img.shields.io/github/license/JLChen-C/OccProphet?labelColor=F9F2FE"></a> </p>

🔍 Overview
OccProphet is a camera-only 4D occupancy forecasting framework, offering high efficiency in both training and inference, with excellent forecasting performance.
OccProphet has the following features:
- Flexibility: OccProphet relies on camera inputs only, making it flexible and easily adaptable to different traffic scenarios.
- High Efficiency: OccProphet is both training- and inference-friendly thanks to its lightweight Observer-Forecaster-Refiner pipeline. A single RTX 4090 GPU is sufficient for training and inference.
- High Performance: OccProphet achieves state-of-the-art performance on three real-world 4D occupancy forecasting datasets: nuScenes, Lyft-Level5 and nuScenes-Occupancy.
🔥 Latest News
- [2025/10/01] Code and checkpoints of OccProphet are released.
🔧 Installation
We follow the installation instructions in Cam4DOcc.
- Create and activate a conda environment:
conda create -n occprophet python=3.7 -y
conda activate occprophet
- Install PyTorch
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu113/torch_stable.html
- Install GCC-6
conda install -c omgarcia gcc-6
- Install MMCV, MMDetection, and MMSegmentation
pip install mmcv-full==1.4.0
pip install mmdet==2.14.0
pip install mmsegmentation==0.14.1
pip install yapf==0.40.1
- Install MMDetection3D
git clone https://github.com/open-mmlab/mmdetection3d.git
cd mmdetection3d
git checkout v0.17.1
python setup.py install
- Install other dependencies
pip install timm==0.9.12 huggingface-hub==0.16.4 safetensors==0.4.2
pip install open3d-python==0.7.0.0
pip install PyMCubes==0.1.4
pip install spconv-cu113
pip install fvcore
pip install setuptools==59.5.0
- Install Lyft-Level5 dataset SDK
pip install lyft_dataset_sdk
- Install OccProphet
cd ..
git clone https://github.com/JLChen-C/OccProphet.git
cd OccProphet
export PYTHONPATH="."
python setup.py develop
export OCCPROPHET_DIR="$(pwd)"
Optional: If you encounter issues when training or evaluating on the GMO + GSO tasks, follow the instructions below to fix them
- Install Numba and LLVM-Lite
pip install numba==0.55.0
pip install llvmlite==0.38.0
# Reinstall setuptools if you encounter this issue: AttributeError: module 'distutils' has no attribute 'version'
# pip install setuptools==59.5.0
- Modify the following files:
In line 5 of $PATH_TO_ANACONDA/envs/occprophet/lib/python3.7/site-packages/mmdet3d-0.17.1-py3.7-linux-x86_64.egg/mmdet3d/datasets/pipelines/data_augment_utils.py, replace
"from numba.errors import NumbaPerformanceWarning"
with
"from numba.core.errors import NumbaPerformanceWarning"
In line 30 of $PATH_TO_ANACONDA/envs/occprophet/lib/python3.7/site-packages/nuscenes/eval/detection/data_classes.py, replace
"self.class_names = self.class_range.keys()"
with
"self.class_names = list(self.class_range.keys())"
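The two one-line edits above can also be applied with `sed`. This is a sketch, not part of the repo: `patch_line` is a hypothetical helper, and the `SITE_PKGS` default assumes the `$PATH_TO_ANACONDA` layout used above.

```shell
# Sketch: automate the two one-line patches above with sed.
# SITE_PKGS is an assumption -- point it at your env's site-packages directory.
SITE_PKGS="${SITE_PKGS:-$PATH_TO_ANACONDA/envs/occprophet/lib/python3.7/site-packages}"

# patch_line FILE OLD_REGEX NEW_TEXT: substitute in place, keeping a .bak backup.
patch_line() {
  [ -f "$1" ] || { echo "not found: $1"; return 0; }
  sed -i.bak "s|$2|$3|" "$1"
}

# numba moved NumbaPerformanceWarning under numba.core.errors.
patch_line "$SITE_PKGS/mmdet3d-0.17.1-py3.7-linux-x86_64.egg/mmdet3d/datasets/pipelines/data_augment_utils.py" \
  "from numba\.errors import NumbaPerformanceWarning" \
  "from numba.core.errors import NumbaPerformanceWarning"

# dict_keys is not indexable; wrap it in list() for the nuScenes devkit.
patch_line "$SITE_PKGS/nuscenes/eval/detection/data_classes.py" \
  "self\.class_names = self\.class_range\.keys()" \
  "self.class_names = list(self.class_range.keys())"
```

`patch_line` skips files it cannot find, so rerunning it after a reinstall is harmless.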
- Install dependencies for visualization
sudo apt-get install Xvfb
pip install xvfbwrapper
pip install mayavi
📚 Dataset Preparation
- Create your data folder $DATA and download the datasets below to $DATA:
- nuScenes V1.0 full dataset
- nuScenes-Occupancy dataset, and pickle files nuscenes_occ_infos_train.pkl and nuscenes_occ_infos_val.pkl
- Lyft-Level5 dataset
- Link the datasets to the OccProphet folder
mkdir $OCCPROPHET_DIR/data
ln -s $DATA/nuscenes $OCCPROPHET_DIR/data/nuscenes
ln -s $DATA/nuscenes-occupancy $OCCPROPHET_DIR/data/nuscenes-occupancy
ln -s $DATA/lyft $OCCPROPHET_DIR/data/lyft
- Move the pickle files nuscenes_occ_infos_train.pkl and nuscenes_occ_infos_val.pkl to the nuScenes dataset root:
mv $DATA/nuscenes_occ_infos_train.pkl $DATA/nuscenes/nuscenes_occ_infos_train.pkl
mv $DATA/nuscenes_occ_infos_val.pkl $DATA/nuscenes/nuscenes_occ_infos_val.pkl
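Before training, it may help to verify that the links and pickle files landed where the loader expects them. A minimal sanity-check sketch (`check_path` is a hypothetical helper, not part of the repo):

```shell
# Report whether each expected dataset path exists under $OCCPROPHET_DIR.
check_path() {
  if [ -e "$1" ]; then echo "ok: $1"; else echo "missing: $1"; fi
}

for p in \
  "$OCCPROPHET_DIR/data/nuscenes" \
  "$OCCPROPHET_DIR/data/nuscenes-occupancy" \
  "$OCCPROPHET_DIR/data/lyft" \
  "$OCCPROPHET_DIR/data/nuscenes/nuscenes_occ_infos_train.pkl" \
  "$OCCPROPHET_DIR/data/nuscenes/nuscenes_occ_infos_val.pkl"
do
  check_path "$p"
done
```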
- The dataset structure should be organized as the file tree below:
OccProphet
├── data/
│ ├── nuscenes/
│ │ ├── maps/
│ │ ├── samples/
│ │ ├── sweeps/
│ │ ├── lidarseg/
│ │ ├── v1.0-test/
│ │ ├── v1.0-trainval/
│ │ ├── nuscenes_occ_infos_train.pkl
│ │ ├── nuscenes_occ_infos_val.pkl
│ ├── nuScenes-Occupancy/
│ ├── lyft/
│ │ ├── maps/
│ │ ├── train_data/
│ │ ├── images/ # from train images, containing xxx.jpeg
│ ├── cam4docc
│ │ ├── GMO/
│ │ │ ├── segmentation/
│ │ │ ├── instance/
│ │ │ ├── flow/
│ │ ├── MMO/
│ │ │ ├── segmentation/
│ │ │ ├── instance/
│ │ │ ├── flow/
│ │ ├── GMO_lyft/
│ │ │ ├── ...
│ │ ├── MMO_lyft/
│ │ │ ├── ...
- The data generation pipelines for the GMO, GSO, and other tasks are integrated into the dataloader, so you can directly run the training and evaluation scripts. Generating the data for each task may take several hours during the first epoch; subsequent epochs will be much faster.
- You can also generate the dataset without any training or inference by setting `only_generate_dataset = True` in the config file, or by adding `--cfg-options model.only_generate_dataset=True` after your command.
🧪 Training
- To launch the training, change your working directory to $OCCPROPHET_DIR and run the following command:
CUDA_VISIBLE_DEVICES=$YOUR_GPU_IDS PORT=$PORT bash run.sh $CONFIG $NUM_GPUS
- Argument explanation:
  - `$YOUR_GPU_IDS`: the GPU IDs you want to use
  - `$PORT`: the port for distributed training
  - `$CONFIG`: the config file path
  - `$NUM_GPUS`: the number of available GPUs
For example, you can launch the training on GPUs 0, 1, 2, and 3 with the config file ./projects/configs/occprophet/OccProphet_4x1_inf-GMO_nuscenes.py as follows:
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=26000 bash run.sh ./projects/configs/occprophet/OccProphet_4x1_inf-GMO_nuscenes.py 4
- Optional: By default, `data.workers_per_gpu` is set to 2× `data.samples_per_gpu` for faster data loading. If training stops because it runs out of CPU memory, try setting `data.workers_per_gpu=1` in the config file, or add `--cfg-options data.workers_per_gpu=1` after your command:
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=26000 bash run.sh ./projects/configs/occprophet/OccProphet_4x1_inf-GMO_nuscenes.py 4 --cfg-options data.workers_per_gpu=1
🔬 Evaluation
To launch the evaluation, change your working directory to $OCCPROPHET_DIR and run the following command:
CUDA_VISIBLE_DEVICES=$YOUR_GPU_IDS PORT=$PORT bash run_eval.sh $CONFIG $CHECKPOINT $NUM_GPUS --evaluate
- Argument explanation:
  - `$YOUR_GPU_IDS`: the GPU IDs you want to use
  - `$PORT`: the port for distributed evaluation
  - `$CONFIG`: the config file path
  - `$CHECKPOINT`: the checkpoint path
  - `$NUM_GPUS`: the number of available GPUs
- For example, you can launch the evaluation on GPUs 0, 1, 2, and 3 with the config file `./projects/configs/occprophet/OccProphet_4x1_inf-GMO_nuscenes.py` as follows:
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=26006 bash run_eval.sh ./projects/configs/occprophet/OccProphet_4x1_inf-GMO_nuscenes.py ./work_dirs/occprophet/OccProphet_4x1_inf-GMO_nuscenes/OccProphet_4x1_inf-GMO_nuscenes.pth 4
- By default, evaluation measures the IoU over all future frames. You can change the evaluated time horizon by modifying the following settings in the config file, or by adding them after your command.
- For example, if you want to evaluate the IoU of the present frame, set `model.test_present=True` in the config file, or add `--cfg-options model.test_present=True` after your command:
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=26006 bash run_eval.sh ./projects/configs/occprophet/OccProphet_4x1_inf-GMO_nuscenes.py ./work_dirs/occprophet/OccProphet_4x1_inf-GMO_nuscenes/OccProphet_4x1_inf-GMO_nuscenes.pth 4 --cfg-options model.test_present=True
- Fine-grained Evaluation: you can evaluate the IoU of the X-th frame by setting `model.test_time_indices=X` in the config file, or by adding `--cfg-options model.test_time_indices=X` after your command. For example, to evaluate the IoU of the 5th frame from the last, set `model.test_time_indices=-5` in the config file, or add `--cfg-options model.test_time_indices=-5` after your command:
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=26006 bash run_eval.sh ./projects/configs/occprophet/OccProphet_4x1_inf-GMO_nuscenes.py ./work_dirs/occprophet/OccProphet_4x1_inf-GMO_nuscenes/OccProphet_4x1_inf-GMO_nuscenes.pth 4 --cfg-options model.test_time_indices=-5
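To collect per-frame IoU across the whole horizon, the fine-grained evaluation above can be wrapped in a loop. A sketch under assumptions: `eval_frame` is a hypothetical helper, the 7-frame horizon is a guess (adjust to your config), and `DRY_RUN=1` only prints the commands — set it to 0, plus `CUDA_VISIBLE_DEVICES` and `PORT` as in the example above, to actually run them.

```shell
CONFIG=./projects/configs/occprophet/OccProphet_4x1_inf-GMO_nuscenes.py
CHECKPOINT=./work_dirs/occprophet/OccProphet_4x1_inf-GMO_nuscenes/OccProphet_4x1_inf-GMO_nuscenes.pth
DRY_RUN=1  # set to 0 to actually launch the evaluations

# eval_frame IDX: run (or just print) the run_eval.sh command for one frame index.
eval_frame() {
  cmd="bash run_eval.sh $CONFIG $CHECKPOINT 4 --cfg-options model.test_time_indices=$1"
  if [ "$DRY_RUN" = "1" ]; then echo "$cmd"; else eval "$cmd"; fi
}

# Assumed 7-frame horizon; negative indices count from the last frame, as above.
for idx in -1 -2 -3 -4 -5 -6 -7; do
  eval_frame "$idx"
done
```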
- Additional: If you want to save
