
<div align="center">

MACARONS: Mapping And Coverage Anticipation with RGB ONline Self-supervision

<font size="4"> <a href="https://imagine.enpc.fr/~guedona/">Antoine Guédon</a>&emsp; <a href="https://www.tmonnier.com/">Tom Monnier</a>&emsp; <a href="https://imagine.enpc.fr/~monasse/">Pascal Monasse</a>&emsp; <a href="https://vincentlepetit.github.io/">Vincent Lepetit</a>&emsp; </font> <br> <img src="./media/trajectories/liberty2_macarons.png" alt="liberty_traj.png" width="400"/> <img src="./media/reconstructions/liberty_1_color.png" alt="liberty_reco.png" width="400"/> <br> <img src="./media/trajectories/pantheon_2_macarons.png" alt="pantheon_traj.png" width="400"/> <img src="./media/reconstructions/pantheon_2_color.png" alt="pantheon_reco.png" width="400"/> <br> </div>

Description

Official PyTorch implementation of MACARONS: Mapping And Coverage Anticipation with RGB ONline Self-supervision (CVPR 2023).<br> Also includes an updated and improved implementation of our previous work SCONE: Surface Coverage Optimization in Unknown Environments by Volumetric Integration (NeurIPS 2022, Spotlight), on which this work is built.

We introduce a method that simultaneously learns to explore new large environments and to reconstruct them in 3D from color images in a self-supervised fashion. This is closely related to the Next Best View problem (NBV), where one has to identify where to move the camera next to improve the coverage of an unknown scene.
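As a toy illustration of the NBV setting, a greedy policy scores each candidate camera pose by how many not-yet-covered surface points it would add, and moves to the best one. The sketch below is a simplification, not the MACARONS model, and the visibility sets are made-up placeholders:

```python
# Toy greedy Next Best View selection (illustration only, not the MACARONS model).
# Each candidate camera pose maps to the set of surface-point ids it observes;
# these visibility sets are hypothetical placeholders.
def next_best_view(visibility, covered):
    """Return the pose whose view adds the most not-yet-covered points."""
    return max(visibility, key=lambda pose: len(visibility[pose] - covered))

visibility = {
    "pose_a": {0, 1, 2, 3},
    "pose_b": {2, 3, 4, 5, 6},
    "pose_c": {6, 7},
}
covered = {0, 1, 2}
best = next_best_view(visibility, covered)  # "pose_b": adds {3, 4, 5, 6}
```

Learning to *predict* these coverage gains from partial observations, instead of computing them from a known mesh, is exactly what makes the problem hard.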

<div align="center">

<a href="https://www.youtube.com/watch?v=NlUNFJYuBGs"><img src="./media/thumbnail.PNG" alt="Macarons illustration"></a>

</div>

This repository contains:

  • Scripts to generate ground truth coverage data from 3D meshes
  • Scripts to initialize and train both SCONE and MACARONS models
  • Evaluation pipelines and notebooks to reproduce and visualize results for both MACARONS and SCONE
  • Interactive demos to experiment with the models, built with Gradio
  • Links to download training data from our Google Drive
  • Links to download pretrained weights from our Google Drive
<details> <summary>If you find this code useful, don't forget to <b>star the repo :star:</b> and <b>cite the papers :point_down:</b></summary>
@inproceedings{guedon2023macarons,
  title={MACARONS: Mapping And Coverage Anticipation with RGB Online Self-Supervision},
  author={Gu{\'e}don, Antoine and Monnier, Tom and Monasse, Pascal and Lepetit, Vincent},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={940--951},
  year={2023}
}
@article{guedon2022scone,
  title={SCONE: Surface Coverage Optimization in Unknown Environments by Volumetric Integration},
  author={Gu{\'e}don, Antoine and Monasse, Pascal and Lepetit, Vincent},
  journal={Advances in Neural Information Processing Systems},
  volume={35},
  pages={20731--20743},
  year={2022}
}
</details> <details> <summary><b>Major code updates</b></summary>
  • 07/23: Updated a script to automatically generate settings files for custom scenes
  • 07/23: Added a tutorial notebook to reproduce qualitative results with better quality
  • 05/23: First code release
</details>

Installation

1. Create a conda environment

Run the following commands to create an appropriate conda environment.

conda env create -f environment.yml
conda activate macarons

Depending on your machine configuration and CUDA drivers, you may run into problems creating a working environment. <br> If so, you can manually install the following packages with conda, using versions matching your configuration:

  1. Install numpy
  2. Install matplotlib
  3. Install pytorch
  4. Install pytorch3d
  5. Install gradio

To use jupyter-lab and our rendering functions in notebooks:

  1. Install jupyter-lab
  2. Install plotly
  3. Install nodejs
  4. Install ipywidgets

2. Download Datasets and preprocess data

a) ShapeNetCore.v1

To facilitate the training of SCONE's architecture, we generate training data using 3D meshes from ShapeNetCore v1. In particular, we generate ground truth data on occupancy probability and surface coverage gain from multiple camera viewpoints. <br>

Additionally, because MACARONS incorporates neural modules inspired by SCONE, we suggest using this data to pretrain MACARONS before training it in large-scale, unknown scenes with self-supervision from RGB images. This pretraining step improves performance while reducing overall training time.
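To give an intuition of what "surface coverage" means here, the toy sketch below marks a surface point as covered by a camera when it lies within range and its normal faces the camera. The actual ground-truth generation on ShapeNet meshes is considerably more involved; every number below is illustrative:

```python
# Toy surface-coverage computation: a surface point counts as covered by a camera
# if it lies within the camera's range and its normal faces the camera.
# This is a heavy simplification of the ground-truth generation on ShapeNet meshes.
import math

def is_covered(point, normal, cam, max_range=2.0):
    to_cam = [c - p for c, p in zip(cam, point)]
    dist = math.sqrt(sum(t * t for t in to_cam))
    if dist == 0 or dist > max_range:
        return False
    cos = sum(n * t for n, t in zip(normal, to_cam)) / dist
    return cos > 0.0  # the normal points toward the camera

def coverage(points, normals, cams):
    """Fraction of surface points seen by at least one camera."""
    seen = sum(
        any(is_covered(p, n, c) for c in cams)
        for p, n in zip(points, normals)
    )
    return seen / len(points)

# Two opposite points of a unit sphere, one camera on the +x axis:
points = [(1, 0, 0), (-1, 0, 0)]
normals = [(1, 0, 0), (-1, 0, 0)]
print(coverage(points, normals, [(1.5, 0, 0)]))  # → 0.5
```

The "surface coverage gain" of a new viewpoint is then simply the increase of this fraction after adding the corresponding camera.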

To generate this data, please start by downloading the ShapeNetCore v1 dataset from the source website. <br> Then, select the ShapeNet object categories on which you want to train and test your model.

In our experiments, we selected the following categories from the downloaded ShapeNetCore.v1 dataset folder:

<div align="center">

| Label | Corresponding Directory | Used for... |
| :--------: | :---------------------: | :------------------------: |
| Airplane | 02691156 | Training, Validation, Test |
| Cabinet | 02933112 | Training, Validation, Test |
| Car | 02958343 | Training, Validation, Test |
| Chair | 03001627 | Training, Validation, Test |
| Lamp | 03636649 | Training, Validation, Test |
| Sofa | 04256520 | Training, Validation, Test |
| Table | 04379243 | Training, Validation, Test |
| Watercraft | 04530566 | Training, Validation, Test |
| Bus | 02924116 | Test only |
| Bed | 02818832 | Test only |
| Bookshelf | 02871439 | Test only |
| Bench | 02828884 | Test only |
| Guitar | 03467517 | Test only |
| Motorbike | 03790512 | Test only |
| Skateboard | 04225987 | Test only |
| Pistol | 03948459 | Test only |

</div>

You just have to move the directories 02691156, 02933112, 02958343, 03001627, 03636649, 04256520, 04379243, 04530566 to the path ./data/ShapeNetCore.v1/train_categories/. <br>

Similarly, move the directories 02924116, 02818832, 02871439, 02828884, 03467517, 03790512, 04225987, 03948459 to the path ./data/ShapeNetCore.v1/test_categories/. <br>
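The two moves above can also be scripted, for example with Python's shutil. The source path below assumes the downloaded archive was extracted to ./ShapeNetCore.v1; adjust it to your setup:

```python
# Move the listed ShapeNet category directories into the repository's data layout.
# The source root is an assumption; adapt it to where you extracted the dataset.
import shutil
from pathlib import Path

TRAIN_IDS = ["02691156", "02933112", "02958343", "03001627",
             "03636649", "04256520", "04379243", "04530566"]
TEST_IDS = ["02924116", "02818832", "02871439", "02828884",
            "03467517", "03790512", "04225987", "03948459"]

def move_categories(src_root, dst_root, category_ids):
    """Move each existing category directory under src_root into dst_root."""
    dst = Path(dst_root)
    dst.mkdir(parents=True, exist_ok=True)
    for cid in category_ids:
        src = Path(src_root) / cid
        if src.is_dir():
            shutil.move(str(src), str(dst / cid))

move_categories("./ShapeNetCore.v1", "./data/ShapeNetCore.v1/train_categories", TRAIN_IDS)
move_categories("./ShapeNetCore.v1", "./data/ShapeNetCore.v1/test_categories", TEST_IDS)
```

Missing category directories are silently skipped, so the script can be re-run after a partial download.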

Finally, go to ./data/ShapeNetCore.v1/ and run the following script using python:

python generate_shapenet_data.py

Generating the training data for all meshes takes time, up to around 10 hours. However, it considerably reduces the training time of all subsequent SCONE trainings.

b) Dataset of large-scale 3D scenes

We conducted self-supervised training and experiments in large environments using 3D meshes downloaded from Sketchfab under CC license (3D models courtesy of Brian Trepanier and Andrea Spognetta; we thank them for their awesome work). <br>

All download links for the original .blend files can be found on our project webpage. <br> However, we slightly modified the meshes with rotation and scaling operations, and we extracted for each scene a mesh file (extension .obj), a material file (extension .mtl), and various metadata in order to facilitate 3D data processing on GPU with PyTorch3D.

You can directly download our preprocessed meshes and textures from our Google Drive. For any scene in the dataset, simply download all files in the corresponding Google Drive subdirectory and move them to the matching subdirectory in ./data/scenes/.

c) Building your own custom dataset of 3D scenes

Once you have a .blend file of your own custom 3D scene (or any .blend file downloaded from Sketchfab, for example), you should extract from it the following files in order to facilitate 3D data processing on GPU with PyTorch3D:

  • A mesh file, with extension .obj
  • A material file, with extension .mtl

To this end, you can use Blender and follow the steps below. <br> Let's say we want to use MACARONS to explore and reconstruct the Statue of Liberty.

  1. First, download the .zip containing the mesh of the Statue of Liberty created by Brian Trepanier on Sketchfab. It should include a .blend file in source and a texture image a-StatueOfLiberty.jpg in textures.
  2. Open the .blend file with Blender, and go to File > Export > Wavefront (.obj), as shown in the following image.<br>
<div align="center"> <img src="./media/blender/export_0.png" alt="blender_export_0.png" width="600"/> </div>

  3. Make sure to check `OBJ Objects` and `Material Groups`, and select `Strip Path` as the Path Mode, as shown in the following image. Then, click on `Export OBJ` to output an `.obj` file and a `.mtl` file.<br>

<div align="center"> <img src="./media/blender/export_1.png" alt="blender_export_1.png" width="600"/> </div>

  4. Finally, move the `.obj` file and the `.mtl` file to a dedicated subdirectory in `./data/scenes/`.