DSAC* for Visual Camera Re-Localization (RGB or RGB-D)
- Introduction
- Installation
- Data Structure
- Supported Datasets
- Training DSAC*
- Testing DSAC*
- Publications
Change Log
- 5 Jan 2022: Added an environment.yml for easier installation of dependencies.
- 10 Jan 2022: Fixed file naming errors of pre-trained models for Cambridge Landmarks.
Introduction
DSAC* is a learning-based visual re-localization method, published in TPAMI 2021. After being trained for a specific scene, DSAC* is able to estimate the camera rotation and translation from a single, new image of the same scene. DSAC* is versatile w.r.t. what data is available at training and test time. It can be trained from RGB images and ground truth poses alone, or additionally utilize depth maps (measured or rendered) or sparse scene reconstructions for training. At test time, it supports pose estimation from RGB as well as RGB-D inputs.
DSAC* is a combination of Scene Coordinate Regression with CNNs and Differentiable RANSAC (DSAC) for end-to-end training. This code extends and improves our previous re-localization pipeline, DSAC++, with support for RGB-D inputs, support for data augmentation, a leaner network architecture, reduced training and test time, as well as other improvements for increased accuracy.


For more details, we kindly refer to the paper. A BibTeX reference of the paper can be found at the end of this readme.
Installation
DSAC* is based on PyTorch, and includes a custom C++ extension which you have to compile and install (but it's easy). The main framework is implemented in Python, including data processing and setting parameters. The C++ extension encapsulates robust pose optimization and the respective gradient calculation for efficiency reasons.
DSAC* requires the following Python packages; we tested it with the package versions given in brackets:
pytorch (1.6.0)
opencv (3.4.2)
scikit-image (0.16.2)
The repository contains an environment.yml for use with Conda:
conda env create -f environment.yml
conda activate dsacstar
You compile and install the C++ extension by executing:
cd dsacstar
python setup.py install
Compilation requires access to OpenCV header files and libraries. If you are using Conda, the setup script will look for the OpenCV package in the current Conda environment. Otherwise (or if that fails), you have to set the OpenCV library directory and include directory yourself by editing the setup.py file.
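The lookup the setup script performs can be approximated as follows. This is a simplified sketch only, not the exact logic of setup.py; the reliance on the CONDA_PREFIX environment variable and the include/lib directory layout are assumptions:

```python
import os

def guess_opencv_dirs():
    """Guess OpenCV include/library directories inside the active Conda env.

    Simplified sketch: the real setup.py logic may differ. Returns
    (include_dir, library_dir), or (None, None) if nothing was found,
    in which case the paths must be set manually in setup.py.
    """
    prefix = os.environ.get("CONDA_PREFIX")
    if prefix is None:
        # No Conda environment active: fall back to manual configuration.
        return None, None
    include_dir = os.path.join(prefix, "include")
    library_dir = os.path.join(prefix, "lib")
    if not os.path.isdir(include_dir) or not os.path.isdir(library_dir):
        return None, None
    return include_dir, library_dir
```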
If compilation succeeds, you can import dsacstar in your Python scripts. The extension provides four functions: dsacstar.forward_rgb(...), dsacstar.backward_rgb(...), dsacstar.forward_rgbd(...) and dsacstar.backward_rgbd(...). Check our Python scripts or the documentation in dsacstar/dsacstar.cpp for a reference on how to use these functions.
Note: The code does not support OpenCV 4.x at the moment, due to legacy function calls in the dsacstar module. The code can be adjusted for use with OpenCV 4.x, but you might still face compiler compatibility issues when installing OpenCV via Conda. Any prebuilt OpenCV binaries must be compatible with the compiler that builds the dsacstar module on your system. Compiling OpenCV from source on your system should ensure compiler compatibility.
Data Structure
The datasets folder is expected to contain one sub-folder per self-contained scene (e.g. one indoor room or one outdoor area).
We do not provide any data with this repository.
However, the datasets folder comes with a selection of Python scripts that will download and set up the datasets used in our paper (Linux only; please adapt the scripts for other operating systems).
In the following, we describe the data format expected in each scene folder, but we advise looking at the provided dataset scripts for reference.
Each sub-folder of datasets should be structured by the following sub-folders that implement the training/test split expected by the code:
datasets/<scene_name>/training/
datasets/<scene_name>/test/
Training and test folders contain the following sub-folders:
rgb/ -- image files
calibration/ -- camera calibration files
poses/ -- camera transformation matrices
init/ -- (optional for training) pre-computed ground truth scene coordinates
depth/ -- (optional for training) can be used to compute ground truth scene coordinates on the fly
eye/ -- (optional for RGB-D inputs) pre-computed camera coordinates (i.e. back-projected depth maps)
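The layout above can be created with a small helper; this is an illustrative sketch (the root path and scene name are placeholders):

```python
import os

SPLITS = ("training", "test")
SUBFOLDERS = ("rgb", "calibration", "poses", "init", "depth", "eye")

def create_scene_skeleton(root, scene_name):
    """Create the empty folder structure DSAC* expects for one scene.

    Includes the optional sub-folders (init/, depth/, eye/); unused ones
    can simply stay empty or be removed.
    """
    for split in SPLITS:
        for sub in SUBFOLDERS:
            os.makedirs(os.path.join(root, scene_name, split, sub), exist_ok=True)
```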
Correspondences of files across the different sub-folders will be established by alphabetical ordering.
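Because correspondence is established purely by sorted order, consistent file naming across sub-folders is essential. A minimal sketch of how this matching works (the sub-folder names follow the layout above; the helper itself is hypothetical):

```python
import os

def paired_files(split_dir):
    """Pair RGB, calibration and pose files by alphabetical (sorted) order.

    Assumes every sub-folder contains the same number of files; a mismatch
    indicates missing or misnamed files.
    """
    rgb = sorted(os.listdir(os.path.join(split_dir, "rgb")))
    calib = sorted(os.listdir(os.path.join(split_dir, "calibration")))
    poses = sorted(os.listdir(os.path.join(split_dir, "poses")))
    assert len(rgb) == len(calib) == len(poses), "sub-folder file counts differ"
    return list(zip(rgb, calib, poses))
```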
Details for image files: Any format supported by scikit-image.
Details for pose files: Text files containing the camera pose h as a 4x4 matrix, following the 7Scenes/12Scenes convention. The pose transforms camera coordinates e to scene coordinates y, i.e. y = he.
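As a small numeric illustration of this convention (numpy is used here for illustration only; the pipeline itself works with PyTorch tensors):

```python
import numpy as np

def camera_to_scene(pose, eye_coords):
    """Map camera coordinates e to scene coordinates y via y = h e.

    pose: 4x4 camera-to-scene transformation matrix h.
    eye_coords: (N, 3) array of camera coordinates.
    Returns (N, 3) scene coordinates.
    """
    n = eye_coords.shape[0]
    homogeneous = np.hstack([eye_coords, np.ones((n, 1))])  # (N, 4)
    scene = (pose @ homogeneous.T).T  # apply h to every point
    return scene[:, :3]
```

For example, a pose that is pure translation maps the camera origin to that translation.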
Details for calibration files: Text file. At the moment we only support the camera focal length (one value shared for x- and y-direction, in px). The principal point is assumed to lie in the image center.
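A calibration file thus reduces to a single focal length; the full 3x3 intrinsic matrix follows from it and the image dimensions. A sketch under the stated convention (principal point at the image center):

```python
import numpy as np

def intrinsics_from_focal(focal, width, height):
    """Build a 3x3 camera matrix from one shared focal length (in px).

    The principal point is placed at the image center, matching the
    calibration file convention described above.
    """
    return np.array([
        [focal, 0.0, width / 2.0],
        [0.0, focal, height / 2.0],
        [0.0, 0.0, 1.0],
    ])
```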
Details for init files: (3xHxW) tensor (standard PyTorch file format via torch.save/torch.load) where H and W are the dimensions of the output of our network. Since we rescale input images to 480px height, and our network predicts an output that is sub-sampled by a factor of 8, our init files are 60px in height. Invalid scene coordinate values should be set to zeros, e.g. when generating scene coordinate ground truth from a sparse SfM reconstruction. For a reference on how to generate these files, see datasets/setup_cambridge.py where they are generated from sparse SfM reconstructions, or dataset.py where they are generated from dense depth maps and ground truth poses.
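A minimal sketch of writing such a file (the tensor shape and the zero-for-invalid convention follow the description above; the function name and arguments are placeholders, not part of the codebase):

```python
import torch

def save_init_file(path, scene_coords, valid_mask):
    """Save a (3, H, W) scene coordinate tensor in the expected format.

    scene_coords: (3, H, W) float tensor of scene coordinates.
    valid_mask: (H, W) bool tensor; positions where it is False are
    zeroed out, marking them as invalid.
    """
    assert scene_coords.dim() == 3 and scene_coords.size(0) == 3
    coords = scene_coords.clone()
    coords[:, ~valid_mask] = 0.0  # zero out invalid scene coordinates
    torch.save(coords, path)
```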
Details for depth files: Any format supported by scikit-image. Should have same size as the corresponding RGB image and contain a depth measurement per pixel in millimeters. Invalid depth values should be set to zero.
Details for eye files: Same format, size and conventions as init files, but containing camera coordinates instead of scene coordinates. For a reference on how to generate these files, see dataset.py where associated scene coordinate tensors are generated from depth maps. Just adapt that code by storing camera coordinates directly, instead of transforming them with the ground truth pose.
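Camera coordinates are obtained from a registered depth map by back-projection. A sketch with numpy, under the conventions above (focal length in px, principal point at the image center, depth in millimeters, zero marking invalid pixels); the metric output unit and the omitted sub-sampling to the network output resolution are assumptions here, so check dataset.py for the exact details:

```python
import numpy as np

def backproject_depth(depth_mm, focal):
    """Back-project a depth map (in millimeters) to camera coordinates.

    Returns a (3, H, W) array. Pixels with zero (invalid) depth map to
    zero coordinates, matching the invalid-value convention. Principal
    point assumed at the image center; sub-sampling to the network
    output resolution is omitted in this sketch.
    """
    h, w = depth_mm.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    z = depth_mm / 1000.0  # convert to meters (assumed unit, see dataset.py)
    x = (xs - w / 2.0) * z / focal
    y = (ys - h / 2.0) * z / focal
    return np.stack([x, y, z])
```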
Supported Datasets
Prior to using these datasets, please check their original licenses (see the website links at the beginning of each section).
7Scenes
7Scenes (MSR) is a small-scale indoor re-localization dataset. The authors provide training/test split information, and a dense 3D scan of each scene, RGB and depth images as well as ground truth poses. We provide the Python script setup_7scenes.py to download the dataset and convert it into our format.
Note that the provided depth images are not yet registered to the RGB images, and using them directly will lead to inferior results. As an alternative, we provide rendered depth maps here (see a note on download stability below). Just extract the archive inside datasets/ and the depth maps should be merged into the respective 7Scenes sub-folders.
For RGB-D experiments we provide pre-computed camera coordinate files (eye/) for all training and test scenes here (see a note on download stability below). We generated them from the original depth maps after doing a custom registration to the RGB images. Just extract the archive inside datasets/ and the coordinate files should be merged into the respective 7Scenes sub-folders.
12Scenes
12Scenes (Stanford) is a small-scale indoor re-localization dataset. The authors provide training/test split information, and a dense 3D scan of each scene, RGB and depth images as well as ground truth poses. We provide the Python script setup_12scenes.py to download the dataset and convert it into our format.
Provided depth images are registered to the RGB images, and can be used directly. However, we provide rendered depth maps here (see a note on download stability below) which we used in our experiments. Just extract the archive inside datasets/ and the depth maps should be merged into the respective 12Scenes sub-folders.
For RGB-D experiments we provide pre-computed camera coordinate files (eye/) for all training and test scenes here (see a note on download stability below). We generated them from the original depth maps after doing a custom registration to the RGB images. Just extract the archive inside datasets/ and the coordinate files should be merged into the respective 12Scenes sub-folders.
Cambridge Landmarks
Cambridge Landmarks is an outdoor re-localization dataset. The dataset comes with a set of RGB images of five landmark buildings in the city of Cambridge (UK). The authors provide training/test split information, and a structure-from-motion (SfM) reconstruction containing a 3D point cloud of each building, and reconstructed camera poses for all images. We provide the Python script setup_cambridge.py to download the dataset and convert it into our format.
