NeRFMatch
This repository contains the code release of our paper accepted at ECCV2024:
The NeRFect Match: Exploring NeRF Features for Visual Localization. [Project Page | Paper | Poster]
<p align="center"> <img src="teaser.png" width="800"> </p>

Installation
Clone this repository and create a conda environment with the following commands:
# Create conda env
conda env create -f configs/conda/nerfmatch_env.yml
conda activate nerfmatch
pip install -r configs/conda/requirements.txt
# Install this repo
pip install -e .
Data Preparation
- Download the 7-Scenes dataset from this link and place the scenes under data/7scenes.
- Download the Cambridge Landmarks scenes (Great Court, Kings College, Old Hospital, Shop Facade, St. Marys Church) and place them under data/cambridge.
- Execute the following commands to download our pre-processed data annotations, image retrieval pairs, and SAM masks on Cambridge Landmarks for NeRF training. The Cambridge Landmarks annotations are converted from the original dataset's nvm file. The 7-Scenes SfM ground-truth json files are converted from pgt/sfm/7scenes.
cd data/
bash download_data.sh
cd ..
- Execute the following commands to download our pretrained NeRF and NeRFMatch models.
cd pretrained/
bash download_pretrained.sh
cd ..
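The converted annotation files (transforms_*_train.json / transforms_*_test.json) can be read with a few lines of standard Python. The sketch below assumes they follow the common NeRF transforms layout, i.e. a top-level "frames" list whose entries carry "file_path" and "transform_matrix" keys; the exact schema of the released jsons may differ.

```python
import json

def load_transforms(json_path):
    """Read a transforms_*_{train,test}.json file and return image paths
    plus 4x4 camera-to-world pose matrices (as nested lists).

    Assumes the common NeRF transforms schema: a top-level "frames" list
    whose entries carry "file_path" and "transform_matrix" keys.
    """
    with open(json_path) as f:
        meta = json.load(f)
    paths = [frame["file_path"] for frame in meta["frames"]]
    poses = [frame["transform_matrix"] for frame in meta["frames"]]
    return paths, poses
```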
After those preparation steps, your data/ directory should look like:
data
├── 7scenes
│ ├── chess
│ └── ...
├── annotations
│ └── 7scenes_jsons/sfm
│ ├── transforms_*_test.json
│ ├── transforms_*_train.json
│ └── ...
├── cambridge
│ ├── GreatCourt
│ └── ...
├── mask_preprocessed
│ └── cambridge
└── pairs
├── 7scenes
└── cambridge
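As a quick sanity check, the layout above can be verified with a short script. This is an illustrative helper, not part of the repository; the directory names are taken directly from the tree above.

```python
from pathlib import Path

# Sub-directories that should exist under data/ after the steps above.
EXPECTED_DIRS = [
    "7scenes/chess",
    "annotations/7scenes_jsons/sfm",
    "cambridge/GreatCourt",
    "mask_preprocessed/cambridge",
    "pairs/7scenes",
    "pairs/cambridge",
]

def missing_dirs(data_root):
    """Return the expected sub-directories that are absent under data_root."""
    root = Path(data_root)
    return [d for d in EXPECTED_DIRS if not (root / d).is_dir()]

if __name__ == "__main__":
    missing = missing_dirs("data")
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("data/ layout looks complete.")
```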
Training and Evaluation
We refer users to model_train/README.md and model_eval/README.md for training and evaluation instructions.
Licenses
The source code is released under NVIDIA Source Code License v1. The pretrained models are released under CC BY-NC-SA 4.0.
Citation
If you use our method, please cite:
@inproceedings{zhou2024nerfmatch,
  title={The NeRFect match: Exploring NeRF features for visual localization},
  author={Zhou, Qunjie and Maximov, Maxim and Litany, Or and Leal-Taix{\'e}, Laura},
  booktitle={European Conference on Computer Vision},
  year={2024}
}