Unbounded Neural Radiance Fields in Pytorch

<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->

All Contributors

<!-- ALL-CONTRIBUTORS-BADGE:END -->

1. Introduction

This is still a research project in progress.

This project aims to benchmark several state-of-the-art large-scale radiance field algorithms. We use the terms "unbounded NeRF" and "large-scale NeRF" interchangeably because we find the techniques behind them are closely related.

Instead of pursuing a big and complicated code system, we pursue a simple code repo with SOTA performance for unbounded NeRFs.

You can expect the following results from this repository:

| Benchmark                 | Method       | PSNR  |
|---------------------------|--------------|-------|
| Unbounded Tanks & Temples | NeRF++       | 20.49 |
| Unbounded Tanks & Temples | Plenoxels    | 20.40 |
| Unbounded Tanks & Temples | DVGO         | 20.10 |
| Unbounded Tanks & Temples | Ours         | 20.85 |
| Mip-NeRF-360 Benchmark    | NeRF         | 24.85 |
| Mip-NeRF-360 Benchmark    | NeRF++       | 26.21 |
| Mip-NeRF-360 Benchmark    | Mip-NeRF-360 | 28.94 |
| Mip-NeRF-360 Benchmark    | DVGO         | 25.42 |
| Mip-NeRF-360 Benchmark    | Ours         | 28.98 |
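For reference, the PSNR numbers in the table follow the standard definition, 10 log10(MAX² / MSE). A minimal NumPy sketch of the metric (illustrative only, not code from this repository):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between two images in [0, max_val]."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform error of 0.1 on images in [0, 1] gives an MSE of 0.01,
# i.e. 10 * log10(1 / 0.01) = 20 dB.
a = np.zeros((4, 4, 3))
b = np.full((4, 4, 3), 0.1)
print(round(psnr(a, b), 2))  # -> 20.0
```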

<details> <summary> Expand / collapse qualitative results. </summary>

Tanks and Temples:

  • Playground:

https://user-images.githubusercontent.com/31123348/220946729-d88db335-0618-4b75-9fc2-8de577e1ddb5.mp4

  • Truck:

https://user-images.githubusercontent.com/31123348/220946857-0f4b7239-8be6-4fca-9bba-2f2425e857a5.mp4

  • M60:

https://user-images.githubusercontent.com/31123348/220947063-068b94f6-3afb-421d-8746-43bcf9643a37.mp4

  • Train:

https://user-images.githubusercontent.com/31123348/220947239-6528d542-b2b8-45e3-8e69-6e0eff869720.mp4

Mip-NeRF-360 Benchmark:

  • Bicycle:

https://user-images.githubusercontent.com/31123348/220947385-ab31c646-c671-4522-8e4f-a1982d98c753.mp4

  • Stump:

https://user-images.githubusercontent.com/31123348/220947472-47dc4716-095b-45ec-890b-d6afd97de9e9.mp4

  • Kitchen:

https://user-images.githubusercontent.com/31123348/220947597-68f7ec32-c761-4253-955a-a2acc6a2eb25.mp4

  • Bonsai:

https://user-images.githubusercontent.com/31123348/220947686-d8957a2e-ef52-46cf-b437-28de91f55871.mp4

  • Garden:

https://user-images.githubusercontent.com/31123348/220947771-bbd249c0-3d0b-4d25-9b79-d4de9af17c4a.mp4

  • Counter:

https://user-images.githubusercontent.com/31123348/220947818-e5c6b07f-c930-48b2-8aa7-363182dea6be.mp4

  • Room:

https://user-images.githubusercontent.com/31123348/220948025-25ce5cc1-3c9a-450c-920d-98a8f153a0fa.mp4

San Francisco Mission Bay (dataset released by Block-NeRF):

  • Training splits:

    https://user-images.githubusercontent.com/31123348/200509378-4b9fe63f-4fa4-40b1-83a9-b8950d981a3b.mp4

  • Rotation:

    https://user-images.githubusercontent.com/31123348/200509910-a5d8f820-143a-4e03-8221-b04d0db2d050.mov

</details>

We hope our efforts help your research or projects!

2. News

  • [2023.3.20] This project is renamed to "UnboundedNeRFPytorch" because, rigorously speaking, our work is not truly large-scale (e.g., at the city level).
<details> <summary> Expand / collapse older news. </summary>
  • [2023.2.27] A major update of our repository with better performance and full code release.
  • [2022.12.23] Released several weeks' worth of weekly NeRF updates. Too many papers are popping up these days, so the update pace is slow.
  • [2022.9.12] Training Block-NeRF on the Waymo dataset, reaching PSNR 24.3.
  • [2022.8.31] Training Mega-NeRF on the Waymo dataset; the loss is still NaN.
  • [2022.8.24] Support the full Mega-NeRF pipeline.
  • [2022.8.18] Support all previous papers in weekly classified NeRF.
  • [2022.8.17] Support classification in weekly NeRF.
  • [2022.8.16] Support evaluation scripts and data format standard. Getting some results.
  • [2022.8.13] Add estimated camera pose and release a better dataset.
  • [2022.8.12] Add weekly NeRF functions.
  • [2022.8.8] Add the NeRF reconstruction code and doc for custom purposes.
  • [2022.7.28] The data preprocessing script is finished.
  • [2022.7.20] This project started!
</details>

3. Installation

<details> <summary>Expand / collapse installation steps.</summary>
  1. Clone this repository. Use --depth=1 to avoid downloading the full history.

    git clone --depth=1 git@github.com:sjtuytc/LargeScaleNeRFPytorch.git
    mkdir data
    mkdir logs
    
  2. Create conda environment.

    conda create -n large-scale-nerf python=3.9
    conda activate large-scale-nerf
    
  3. Install PyTorch and other libraries. Make sure your PyTorch version is compatible with your CUDA version.

    pip install --upgrade pip
    conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
    pip install -r requirements.txt
    
  4. Install the grid-based CUDA operators ahead of time so they are not recompiled on every run; the CUDA toolkit is required. (Check via "nvcc -V" that you have a recent CUDA.)

    apt-get install g++ build-essential  # ensure you have g++ and other build essentials, sudo access required.
    cd FourierGrid/cuda
    python setup.py install
    cd ../../
    
  5. Install other libraries used for reconstructing custom scenes. This is only needed when you want to build your own scenes.

    sudo apt-get install colmap
    sudo apt-get install imagemagick  # requires sudo access
    conda install pytorch-scatter -c pyg  # or install via https://github.com/rusty1s/pytorch_scatter
    

    You can also use the desktop version of COLMAP if you do not have sudo access on your server. However, we found that if the COLMAP parameters are not set up properly, you will not get SOTA performance.

</details>

4. Unbounded NeRF on the public datasets

Click the following sub-section titles to expand / collapse steps.

<details> <summary> 4.1 Download processed data.</summary>
  • Disclaimer: users are required to get permission from the original dataset provider. Any usage of the data must obey the license of the dataset owner.

(1) Unbounded Tanks & Temples. Download data from here. Then unzip the data.

cd data
gdown --id 11KRfN91W1AxAW6lOFs4EeYDbeoQZCi87
unzip tanks_and_temples.zip
cd ../

(2) The Mip-NeRF-360 dataset.

cd data
wget http://storage.googleapis.com/gresearch/refraw360/360_v2.zip
mkdir 360_v2
unzip 360_v2.zip -d 360_v2
cd ../

(3) San Francisco Mission Bay. What you should know before downloading the data:

  • Our processed Waymo data is significantly smaller than the original release (19.1 GB vs. 191 GB) because we store camera poses instead of raw ray directions; it is also friendlier to PyTorch dataloaders. Download the data from Google Drive. You may use gdown to download the files from the command line. If you are interested in processing the raw Waymo data yourself, please refer to this doc.
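Storing poses instead of rays loses nothing, because per-pixel rays can be regenerated on the fly from a camera-to-world pose and the intrinsics. A minimal pinhole-camera sketch of that regeneration (illustrative only, not the repository's actual dataloader code):

```python
import numpy as np

def get_rays(H, W, focal, c2w):
    """Generate per-pixel ray origins/directions from a camera-to-world pose.

    H, W: image size; focal: focal length in pixels;
    c2w: 3x4 camera-to-world matrix [R | t].
    """
    i, j = np.meshgrid(np.arange(W), np.arange(H), indexing="xy")
    # Camera-space directions for a pinhole model (y down, looking along -z).
    dirs = np.stack([(i - W * 0.5) / focal,
                     -(j - H * 0.5) / focal,
                     -np.ones_like(i, dtype=np.float64)], axis=-1)
    rays_d = dirs @ c2w[:3, :3].T                       # rotate into world space
    rays_o = np.broadcast_to(c2w[:3, 3], rays_d.shape)  # camera origin per pixel
    return rays_o, rays_d

# Identity pose at the origin: all rays start at (0, 0, 0) and look along -z.
c2w = np.hstack([np.eye(3), np.zeros((3, 1))])
rays_o, rays_d = get_rays(4, 6, 100.0, c2w)
print(rays_o.shape, rays_d.shape)  # -> (4, 6, 3) (4, 6, 3)
```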

The downloaded data would look like this:

data
   |
   |——————360_v2                                    // the root folder for the Mip-NeRF-360 benchmark
   |        └——————bicycle                          // one scene under the Mip-NeRF-360 benchmark
   |        |         └——————images                 // rgb images
   |        |         └——————images_2               // rgb images downscaled by 2
   |        |         └——————sparse                 // camera poses
   |        ...
   |——————tanks_and_temples                         // the root folder for Tanks&Temples
   |        └——————tat_intermediate_M60             // one scene under Tanks&Temples
   |        |         └——————camera_path            // render split camera poses, intrinsics and extrinsics
   |        |         └——————test                   // test split
   |        |         └——————train                  // train split
   |        |         └——————validation             // validation split
   |        ...
   |——————pytorch_waymo_dataset                     // the root folder for San Francisco Mission Bay
   |        └——————cam_info.json                    // extracted cam2img information in a dict
   |        └——————coordinates.pt                   // global camera information used in Mega-NeRF, deprecated
   |        └——————train                            // train data
   |        |         └——————metadata               // metadata per image (camera information, etc.)
   |        |         └——————rgbs                   // rgb images
   |        |         └——————split_block_train.json // split block information
   |        |         └——————train_all_meta.json    // all meta information in the train folder
   |        └——————val                              // val data with the same structure as train
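A small script can sanity-check that a download matches this layout before training. This is purely illustrative; `check_layout` and the expected subfolders below are taken from the tree above, not from the repository:

```python
from pathlib import Path

# Expected top-level dataset roots and one representative subfolder each,
# following the directory tree shown above.
EXPECTED = {
    "360_v2": ["bicycle"],
    "tanks_and_temples": ["tat_intermediate_M60"],
    "pytorch_waymo_dataset": ["train", "val"],
}

def check_layout(data_root="data"):
    """Return the list of expected directories that are missing."""
    root = Path(data_root)
    missing = []
    for top, subdirs in EXPECTED.items():
        for sub in subdirs:
            path = root / top / sub
            if not path.exists():
                missing.append(str(path))
    return missing

print(check_layout())  # prints any missing paths; empty once data is in place
```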
</details> <details> <summary> 4.2 Train models and see the results!</summary>

You only need to run "python run_FourierGrid.py" to finish the train-test-render cycle. Explanations of some arguments:

  • --program: the program to run; normally --program train is all you need.
  • --config: the config pointing to the scene file, e.g., --config FourierGrid/configs/tankstemple_unbounded/truck_single.py.
  • --num_per_block: the number of blocks used in large-scale NeRFs; normally set to -1 unless specifically needed.
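Putting the arguments together, a dry run for the Truck scene (the only config path confirmed above) might look like this; remove the leading `echo` to actually launch training:

```shell
# Hypothetical dry run: print the full train-test-render command for Truck.
echo python run_FourierGrid.py --program train \
  --config FourierGrid/configs/tankstemple_unbounded/truck_single.py \
  --num_per_block -1
```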
