AdaPT
A physically based mesh renderer named Ada Path Tracer based on Taichi lang
This renderer, though fully operational with different versions of Taichi lang and different system environments, is now archived.

There will be no further development in this repo, aside from bug fixes should any be found. My CUDA-PT is an upgrade in both functionality and performance, though both projects are educational. Please refer to CUDA-PT if you are interested in a pure GPU path tracer with nanobind Python binding support. The archived state may be temporarily lifted when a bug fix is needed.

Issues and PRs will no longer be accepted in this repo, so if you do want to open one, please go to AdaptiveGallery, where I host some of the results from this repo.
Ada Path Tracer is a simple Monte Carlo path tracing renderer based on Taichi Lang, with which you can play easily. The name AdaPT was given by my GF, and I think it is brilliant. Currently, this renderer stops at version 1.6.0, since I think I should focus on something else until we have a better version of the backend (Taichi).

This renderer is implemented based on MY OWN understanding of path tracing and other CG knowledge, and it is presented as a complete package; check the supported features below!
Taichi-lang requirements:
- Tested on Taichi 1.4.x - 1.6.x.
- 1.7.x can be used, too. Yet I noticed a significant compilation slowdown (with strange warnings I could not locate) and only a slight runtime performance boost. Running with Taichi-lang 1.7.x is therefore not recommended.
- Further development (and support for future Taichi-lang versions): currently not on the agenda
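Given the narrow supported range above, a small guard before initialization can save a confusing debugging session. This is a hypothetical helper, not part of AdaPT itself; it only checks that the version string falls in the 1.4.x–1.6.x range recommended above:

```python
# Hypothetical helper: verify the installed Taichi version is in the
# 1.4.x-1.6.x range recommended above, before calling ti.init().
def taichi_version_ok(version: str) -> bool:
    """Return True for versions 1.4.0 <= v < 1.7.0."""
    major, minor, *_ = (int(p) for p in version.split("."))
    return (1, 4) <= (major, minor) < (1, 7)

print(taichi_version_ok("1.6.0"))  # True
print(taichi_version_ok("1.7.2"))  # False
```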
Steady state rendering
For more example scenes, please refer to Enigmatisms/AdaptiveGallery
Sports car scene (~290k primitives, 16 bounces, with many different BxDFs):
<p align="center"><img src="https://github.com/Enigmatisms/AdaPT/assets/46109954/b480b716-f6f2-4163-86d9-3b87591297de"/></p>

Bathroom scene (~400k primitives, 8 bounces, source can be found here):
<p align="center"><img src="https://github.com/Enigmatisms/AdaPT/assets/46109954/69272001-8acf-4196-9451-cfd4830e4067"/></p>

Material orb scene (~500k primitives, 24 bounces, CUDA backend 16 fps):
<p align="center"><img src="https://github.com/Enigmatisms/AdaPT/assets/46109954/79754d30-1ce6-4ab2-a382-42010ed7c5b5"/></p>

Kitchen scene (100k+ primitives):
<p align="center"><img src="https://github.com/Enigmatisms/AdaPT/assets/46109954/4c891d25-70ce-4239-9c48-ddf72c72ad4d"/></p>

Stuff scene:
<p align="center"><img src="https://github.com/Enigmatisms/AdaPT/assets/46109954/d91b93e4-3084-419d-a310-a5dbb11d77ea"/></p>

Storm troopers scene:
<p align="center"><img src="https://github.com/Enigmatisms/AdaPT/assets/46109954/038a7b15-3e88-40e2-82a9-0155ca10ade0"/></p>

Heterogeneous medium rendering:
| Janga Smoke (RGB volume) | Desert Tornado (RGB volume) |
| ------------------------- | --------------- |
Bunny scenes (90k+ primitives) are not uploaded in the repo.
| "Spotlight Foggy Bunnies" | "Three Bunnies" |
| ------------------------- | --------------- |
| "The cornell spheres" | "The cornell boxes" | "Fresnel Blend" |
| :------------------------------------: | :---------------------------------: | :------------------------------------: |
Transient state rendering
Note that the gifs presented here were made from compressed jpeg files and further optimized (compressed gif). The number of frames used for each gif was halved, due to the large size of the resulting gif.
| Transient balls (camera unwarped[^foot]) | Transient cornell box (camera warped[^foot]) |
| :------------------------------------: | :---------------------------------: |
[^foot]: 'Camera unwarped' means the transient profile shows the time at which a position in the scene is hit by an emitter ray. 'Camera warped' means the transient profile shows the total travel time of a position being hit by an emitter ray that finally transmits back to the camera.
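The two conventions differ only in whether the return flight to the camera is counted. A hypothetical numeric sketch (function names are mine, not AdaPT's; the speed of light is normalized to 1, so time equals path length):

```python
# Hypothetical sketch of the two transient time conventions.
# Speed of light normalized to 1: time == path length.
import math

def unwarped_time(emitter, vertices, hit):
    """Time for emitter light to reach `hit` along the path vertices."""
    pts = [emitter, *vertices, hit]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

def warped_time(emitter, vertices, hit, camera):
    """Unwarped time plus the flight time from `hit` back to the camera."""
    return unwarped_time(emitter, vertices, hit) + math.dist(hit, camera)

print(unwarped_time((0, 0, 0), [], (3, 4, 0)))            # 5.0
print(warped_time((0, 0, 0), [], (3, 4, 0), (3, 4, 12)))  # 17.0
```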
Here are the features currently implemented and supported:
- A direct component renderer: an interactive visualizer for direct illumination
- A unidirectional / bidirectional Monte Carlo MIS path tracer: supports as many bounces as you wish, and the rendering process is based on Taichi Lang, so it can be very fast (though not on the first run: the first run of a scene may take a long time due to Taichi function inlining, especially for BDPT). The figures displayed above can be rendered within 15-20 s (with the CUDA backend, GPU supported). The rendering result is displayed incrementally, or a maximum iteration number can be pre-set.
- A volumetric path tracer that supports uni/bidirectional path tracing in both bounded and unbounded media
- A transient renderer with which you can visualize the propagation of the global radiance.
- Texture packing and texture mapping, see `scenes/bunny.xml` for a configuration example. We support bump maps / normal maps / roughness maps (the latter untested) for now.
- Shading normals are supported for a smooth appearance.
- Rendering checkpointing and rich console panel support.
- Ray tracing acceleration structure; for now, only `BVH` is supported. `KD-tree` will be implemented in the future.
- Global / indirect illumination & the ability to handle simple caustics
- BRDFs: `Lambertian`, `Modified Phong` (Lafortune and Willems 1994), `Fresnel Blend` (Ashikhmin and Shirley 2002), `Blinn-Phong`, `Mirror-specular`
- BSDFs (with medium): deterministic refractive (glass-like)
- mitsuba-like XML scene file definition, supporting meshes (from wavefront `.obj` files) and analytical spheres
- Scene visualizer: visualizes the scene you are going to render, helping to set parameters like relative positions and camera pose
- Extremely easy to use and multi-platform / multi-backend (thanks to Taichi), with detailed comments and a passionate maintainer (yes, myself). Therefore you can play with it at almost no cost (no compiling, environment setup, blahblahblah...)
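The MIS combination mentioned in the path tracer bullet above is conventionally done with the balance or power heuristic. A minimal sketch of the power heuristic with β = 2 (an illustration of the standard technique, not AdaPT's actual implementation):

```python
# Standard power heuristic (beta = 2) for combining two sampling
# strategies in multiple importance sampling. Illustrative only.
def power_heuristic(pdf_a: float, pdf_b: float, beta: float = 2.0) -> float:
    """MIS weight for a sample drawn from strategy A, combined with B."""
    a, b = pdf_a ** beta, pdf_b ** beta
    return a / (a + b) if a + b > 0.0 else 0.0

print(power_heuristic(1.0, 1.0))  # 0.5
```

The weights of the two strategies for the same path sum to one, which keeps the combined estimator unbiased.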
BTW, I am just a beginner in CG (ray-tracing stuff) and Taichi Lang, so there WILL BE BUGS or some unreasonable designs inside my code. Also, I haven't reviewed the code or done extensive profiling & optimization, so again --- correctness is not guaranteed (though the steady-state rendering results have been compared against mitsuba(0.6) and pbrt-v3)! But feel free to send an issue / pull request to me if you are interested.
Other branches:
- `ad`: support for inverse rendering (the automatic differentiation feature of Taichi), but I was not able to make it work in this repo: strange exceptions prevent differentiable rendering from being used.
- `more`: BSDF mixture model (mixing different BSDFs). The BSDF management is entirely rewritten, but the code is slow in both runtime and compile-time performance, though it indeed supports more interesting features. I figure that since mixtures are rarely used, we should opt for the faster implementation.
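For context on what a BSDF mixture involves, a hypothetical sketch (not the `more` branch's implementation): the mixture pdf is simply the weighted sum of the component pdfs, with weights summing to one.

```python
# Hypothetical illustration of a BSDF mixture pdf: sum_i w_i * pdf_i(wo, wi).
# `components` is a list of (weight, pdf_fn) pairs; weights sum to 1.
def mixture_pdf(components, wo, wi):
    return sum(w * pdf(wo, wi) for w, pdf in components)

# Two constant toy "pdfs" just to show the weighting:
components = [(0.5, lambda wo, wi: 1.0), (0.5, lambda wo, wi: 3.0)]
print(mixture_pdf(components, None, None))  # 2.0
```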
Rendering Example
To run the rendering, use:

```shell
# For bidirectional path tracer
# Make sure you are in the root folder.
# pip install -r requirements.txt    # dependencies should be satisfied
python3 ./render.py --scene cbox --name cbox.xml --iter_num 8000 --arch cuda --type bdpt
# For volumetric path tracer: --type vpt; for vanilla path tracer: --type pt
```
Useful parameters:
--scene: in the folder `sce
