jaxngp

This repository contains [JAX] implementations of:

  • a multiresolution hash encoder (JAX)
  • an accelerated volume renderer for fast training of NeRFs (CUDA + JAX), with
    • occupancy grid pruning during ray marching
    • early stop during ray color integration
  • an inference-time renderer for real-time rendering of NeRFs (CUDA + JAX)
  • a GUI for visualizing, interacting with, and exploring NeRFs [@seimeicyx]

Benchmarks

[NeRF-synthetic]

| | mic | ficus | chair | hotdog | materials | drums | ship | lego | average |
|:--- |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| @33.7k steps (this codebase) | 37.04 | 33.14 | 35.10 | 37.21 | 29.50 | 25.85 | 30.93 | 35.95 | 33.09 |
| @51.2k steps (this codebase) | 37.07 | 33.17 | 35.16 | 37.26 | 29.50 | 25.86 | 30.94 | 36.03 | 33.124 |
| paper ([instant-ngp]) | 36.22 | 33.51 | 35.00 | 37.40 | 29.78 | 26.02 | 31.10 | 36.39 | 33.176 |

<sup> For each scene, the network is trained on 100 training images (800x800 each) with default parameters; the reported PSNR is averaged across 200 test images. </sup>
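For reference, the PSNR values in the table are the standard image-quality metric computed from mean squared error: PSNR = -10 · log10(MSE) for pixel values normalized to [0, 1]. A minimal sketch (illustrative, not this repository's evaluation code):

```python
import numpy as np

def psnr(pred, gt):
    """PSNR in dB for images with pixel values in [0, 1]."""
    mse = np.mean((pred - gt) ** 2)
    return -10.0 * np.log10(mse)

# A prediction off by 0.01 everywhere has MSE = 1e-4, i.e. PSNR = 40 dB.
```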

Environment Setup

jaxngp manages its environments with Nix, but the environment can also be set up with any other package manager (e.g. Conda).

With Nix (recommended)

  1. Install Nix with the official installer or the nix-installer.
  2. With the nix executable available, clone this repository and set up the environment:
    $ git clone https://github.com/blurgyy/jaxngp.git
    $ cd jaxngp/
    $ NIXPKGS_ALLOW_UNFREE=1 nix develop --impure
    
    This will download (or build, if necessary) all the dependencies and open a new shell with everything configured.

    Note: to avoid the built environment being garbage collected when nix gc or nix-collect-garbage is called, append a --profile <PATH> argument:

    $ NIXPKGS_ALLOW_UNFREE=1 nix develop --impure --profile .git/devshell.profile
    

With Conda

TODO
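The Conda route is not documented yet. A hedged sketch of what it might look like, assuming a CUDA-capable JAX wheel matching your driver; the Python version, the `cuda12` extra, and the presence of a `requirements.txt` are assumptions, not something this repository documents:

```shell
conda create -n jaxngp python=3.10
conda activate jaxngp
# CUDA-enabled JAX; consult the official JAX install guide for the right extra
pip install --upgrade "jax[cuda12]"
# remaining Python dependencies (assumption: the repository lists them somewhere)
pip install -r requirements.txt
```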

Running

> **Note**: All the commands below assume the environment has been set up.

The program's entry point is python3 -m app.nerf. It provides three subcommands: train, test, and gui. Pass -h|--help to any subcommand to see its usage, e.g.:

<details> <summary> <code>python3 -m app.nerf train --help</code> </summary>
usage: __main__.py train [-h] --exp-dir PATH [--raymarch.diagonal-n-steps INT]
                         [--raymarch.perturb | --raymarch.no-perturb]
                         [--raymarch.density-grid-res INT] [--render.bg FLOAT FLOAT FLOAT]
                         [--render.random-bg | --render.no-random-bg]
                         [--scene.sharpness-threshold FLOAT] [--scene.world-scale FLOAT]
                         [--scene.resolution-scale FLOAT] [--scene.camera-near FLOAT]
                         [--logging {DEBUG,INFO,WARN,WARNING,ERROR,CRITICAL}] [--seed INT]
                         [--summary | --no-summary] [--frames-val PATH [PATH ...]]
                         [--ckpt {None}|PATH] [--lr FLOAT] [--tv-scale FLOAT] [--bs INT]
                         [--n-epochs INT] [--n-batches INT] [--data-loop INT] [--validate-every INT]
                         [--keep INT] [--keep-every {None}|INT]
                         [--raymarch-eval.diagonal-n-steps INT]
                         [--raymarch-eval.perturb | --raymarch-eval.no-perturb]
                         [--raymarch-eval.density-grid-res INT] [--render-eval.bg FLOAT FLOAT FLOAT]
                         [--render-eval.random-bg | --render-eval.no-random-bg]
                         PATH [PATH ...]

╭─ positional arguments ───────────────────────────────────────────────────────────────────────────╮
│ PATH [PATH ...]         directories or transform.json files containing data for training         │
│                         (required)                                                               │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ arguments ──────────────────────────────────────────────────────────────────────────────────────╮
│ -h, --help              show this help message and exit                                          │
│ --exp-dir PATH          experiment artifacts are saved under this directory (required)           │
│ --frames-val PATH [PATH ...]                                                                     │
│                         directories or transform.json files containing data for validation       │
│                         (default: )                                                              │
│ --ckpt {None}|PATH      if specified, continue training from this checkpoint (default: None)     │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ raymarch arguments ─────────────────────────────────────────────────────────────────────────────╮
│ raymarching/rendering options during training                                                    │
│ ──────────────────────────────────────────────────────────────────────────────────────────────── │
│ --raymarch.diagonal-n-steps INT                                                                  │
│                         for calculating the length of a minimal ray marching step, the NGP paper │
│                         uses 1024 (appendix E.1) (default: 1024)                                 │
│ --raymarch.perturb, --raymarch.no-perturb                                                        │
│                         whether to fluctuate the first sample along the ray with a tiny          │
│                         perturbation (default: True)                                             │
│ --raymarch.density-grid-res INT                                                                  │
│                         resolution for the auxiliary density/occupancy grid, the NGP paper uses  │
│                         128 (appendix E.2) (default: 128)                                        │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ render arguments ───────────────────────────────────────────────────────────────────────────────╮
│ raymarching/rendering options during training                                                    │
│ ──────────────────────────────────────────────────────────────────────────────────────────────── │
│ --render.bg FLOAT FLOAT FLOAT                                                                    │
│                         background color for transparent parts of the image, has no effect if    │
│                         `random_bg` is True (default: 1.0 1.0 1.0)                               │
│ --render.random-bg, --render.no-random-bg                                                        │
│                         ignore `bg` specification and use random color for transparent parts of  │
│                         the image (default: True)                                                │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ scene arguments ────────────────────────────────────────────────────────────────────────────────╮
│ raymarching/rendering options during training                                                    │
│ ──────────────────────────────────────────────────────────────────────────────────────────────── │
│ --scene.sharpness-threshold FLOAT                                                                │
│                         images with sharpness lower than this value will be discarded (default:  │
│                         -1.0)                                                                    │
│ --scene.world-scale FLOAT                                                                        │
│                         scale both the scene's camera positions and bounding box with this       │
│                         factor (default: 1.0)                                                    │
│ --scene.resolution-scale FLOAT                                                                   │
│                         scale input images in case they are too large, camera intrinsics are     │
│                         also scaled to match the updated image resolution. (default: 1.0)        │
│ --scene.camera-near FLOAT                                                                        │
│                         (default: 0.3)                                                           │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ common arguments ───────────────────────────────────────────────────────────────────────────────╮
│ --logging {DEBUG,INFO,WARN,WARNING,ERROR,CRITICAL}                                               │
│                         log level (default: INFO)                                                │
│ --seed INT              random seed (default: 1000000007)                                        │
│ --summary, --no-summary                                                                          │
│                         display model information after model init (default: False)              │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ train arguments ────────────────────────────────────────────────────────────────────────────────╮
│ training hyper parameters                                                                        │
│ ──────────────────────────────────────────────────────────────────────────────────────────────── │
│ --lr FLOAT              learning rate (…)                                                        │
│ …                       (remaining options truncated)                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
</details>
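Tying the flags above together, a hypothetical training invocation on a NeRF-synthetic scene might look like the following; the data and experiment paths are placeholders, not paths shipped with the repository:

```shell
python3 -m app.nerf train \
    --exp-dir logs/lego \
    --frames-val data/nerf_synthetic/lego/transforms_val.json \
    data/nerf_synthetic/lego/transforms_train.json
```

Training artifacts (checkpoints, validation renders) are then written under the --exp-dir directory.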