rayrender
<!-- badges: start --> <!-- badges: end --><img src="man/figures/swordsmall.gif" />
Overview
rayrender is an open source R package for raytracing scenes created in R. It provides a tidy R interface to a fast pathtracer written in C++ that renders scenes built out of an array of primitives and meshes. rayrender builds scenes using a pipeable, iterative interface, and supports diffuse, metallic, dielectric (glass), glossy, microfacet, and light-emitting materials, as well as procedural and user-specified image/roughness/bump/normal textures and HDR environment lighting. rayrender includes multicore support (with progress bars) via RcppThread, random number generation via the PCG RNG, OBJ/PLY support, and denoising support with Intel Open Image Denoise (OIDN).
Browse the documentation and see more examples at the website (if you aren’t already there):
<a href="https://www.rayrender.net">rayrender.net</a>
<img src="man/figures/rayrendersmall.jpg" ></img>
Installation
# To install the latest version from GitHub:
# install.packages("devtools")
devtools::install_github("tylermorganwall/rayrender")
Optional: denoising with Intel Open Image Denoise (OIDN)
rayrender can use Intel Open Image Denoise to denoise rendered images
when OIDN is available on your system. If OIDN is not found, rayrender
will still work, just without denoising support.
To get denoising support, you need to install OIDN. You can download the
official binaries from Intel and set the OIDN_PATH environment variable in
your .Renviron file with the following command-line instructions:
macOS
# Download the appropriate binary for your architecture
curl -LO https://github.com/OpenImageDenoise/oidn/releases/download/v2.3.1/oidn-2.3.1.x86_64.macos.tar.gz
# or for Apple Silicon
curl -LO https://github.com/OpenImageDenoise/oidn/releases/download/v2.3.1/oidn-2.3.1.arm64.macos.tar.gz
# Extract the archive
tar -xvzf oidn-2.3.1.x86_64.macos.tar.gz
# or for Apple Silicon
tar -xvzf oidn-2.3.1.arm64.macos.tar.gz
# Set OIDN_PATH in your .Renviron file to the extracted directory
echo "OIDN_PATH=/path/to/extracted/oidn" >> ~/.Renviron
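Choosing between the two macOS archives above can be scripted by inspecting `uname -m`. This is a sketch; the version (`2.3.1`) and URL pattern are taken from the commands above:

```shell
# Map a CPU architecture (as reported by `uname -m`) to the matching
# OIDN 2.3.1 macOS archive name; prints nothing for unknown values.
oidn_macos_tarball() {
  case "$1" in
    arm64)  echo "oidn-2.3.1.arm64.macos.tar.gz" ;;
    x86_64) echo "oidn-2.3.1.x86_64.macos.tar.gz" ;;
  esac
}

# Example: pick the archive for this machine, then curl -LO it from
# https://github.com/OpenImageDenoise/oidn/releases/download/v2.3.1/
# oidn_macos_tarball "$(uname -m)"
```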
Linux
# Download the binary
curl -LO https://github.com/OpenImageDenoise/oidn/releases/download/v2.3.1/oidn-2.3.1.x86_64.linux.tar.gz
# Extract the archive
tar -xvzf oidn-2.3.1.x86_64.linux.tar.gz
# Set OIDN_PATH in your .Renviron file to the extracted directory
echo "OIDN_PATH=/path/to/extracted/oidn" >> ~/.Renviron
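Note that `echo ... >> ~/.Renviron` appends unconditionally, so re-running it leaves duplicate `OIDN_PATH` entries. A slightly more careful sketch (the path is still a placeholder, as above) replaces any existing `OIDN_PATH=` line instead:

```shell
# Set (or replace) OIDN_PATH in an .Renviron file without duplicating it.
set_oidn_path() {
  renviron="$1"; oidn_dir="$2"
  touch "$renviron"
  # Drop any existing OIDN_PATH line, then append the new one.
  grep -v '^OIDN_PATH=' "$renviron" > "$renviron.tmp" || true
  mv "$renviron.tmp" "$renviron"
  echo "OIDN_PATH=$oidn_dir" >> "$renviron"
}

# Example (path is a placeholder):
# set_oidn_path "$HOME/.Renviron" "/path/to/extracted/oidn"
```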
Windows (Rtools45)
Windows is slightly trickier and requires Rtools45. The steps are:
- Install `make` and `ninja` via Rtools.
- Install ISPC (Intel SPMD Program Compiler).
- Download the OIDN source repository.
- Compile and install OIDN, and point rayrender to it via `OIDN_PATH`.
OIDN_PATH should point to a directory that contains
include/OpenImageDenoise and lib (or lib64) with the OIDN
libraries.
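That layout requirement (an `include/OpenImageDenoise` directory plus `lib` or `lib64`) applies on every platform, so it can be sanity-checked before touching `.Renviron`. A sketch; the directory name is whatever you extracted or installed to:

```shell
# Return success (and print "ok") if a directory looks like a usable
# OIDN installation: headers plus a lib/ or lib64/ directory.
check_oidn_dir() {
  dir="$1"
  if [ -d "$dir/include/OpenImageDenoise" ] && { [ -d "$dir/lib" ] || [ -d "$dir/lib64" ]; }; then
    echo "ok: $dir looks like an OIDN install"
  else
    echo "missing headers or libraries under: $dir" >&2
    return 1
  fi
}

# Example (path is a placeholder):
# check_oidn_dir /path/to/extracted/oidn
```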
Install prerequisites
- Install Rtools45 https://cran.r-project.org/bin/windows/Rtools/
- Open the “Rtools45 MinGW UCRT64” shell (ucrt64) in RTools45.
- Inside that shell, install the build tools (including
`make`, `ninja`, `cmake`, `git`, and `ispc`) via `pacman`:
pacman -Sy --needed \
mingw-w64-ucrt-x86_64-make \
mingw-w64-ucrt-x86_64-ninja \
mingw-w64-ucrt-x86_64-cmake \
mingw-w64-ucrt-x86_64-ispc
- Make sure the Rtools static-posix toolchain and the MinGW binaries are on PATH (this mirrors the setup used to build OIDN):
export PATH="/c/rtools45/x86_64-w64-mingw32.static.posix/bin:/mingw64/bin:${PATH}"
- Verify the toolchain:
gcc --version
g++ --version
make --version
ninja --version
cmake --version
ispc --version
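The individual `--version` checks above can be collapsed into one loop that reports any missing tool up front (the tool names are those required here):

```shell
# Report which of the required build tools are missing from PATH.
check_tools() {
  missing=0
  for tool in "$@"; do
    if command -v "$tool" > /dev/null 2>&1; then
      echo "found: $tool"
    else
      echo "MISSING: $tool" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example: the toolchain needed to build OIDN here.
# check_tools gcc g++ make ninja cmake ispc
```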
Build and install OIDN from source (CPU-only)
All of the following commands are run from the Rtools45 ucrt64 shell:
# Download the OIDN source
git clone --recursive https://github.com/RenderKit/oidn.git
cd oidn
# Create a separate build directory
mkdir build-cpu-static
cd build-cpu-static
# Choose an install prefix; use a simple path without spaces
# This will become C:/local/oidn-static on Windows
OIDN_PREFIX="C:/local/oidn-static"
# Configure OIDN with the Rtools static-posix toolchain, CPU-only, static lib
# Update with your rtools45 path.
cmake \
-G "Ninja" \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_C_COMPILER="/path/to/rtools45/x86_64-w64-mingw32.static.posix/bin/gcc.exe" \
-DCMAKE_CXX_COMPILER="/path/to/rtools45/x86_64-w64-mingw32.static.posix/bin/g++.exe" \
-DOIDN_STATIC_LIB=ON \
-DOIDN_DEVICE_CPU=ON \
-DOIDN_DEVICE_SYCL=OFF \
-DOIDN_DEVICE_CUDA=OFF \
-DOIDN_DEVICE_HIP=OFF \
-DOIDN_DEVICE_METAL=OFF \
-DOIDN_APPS=OFF \
-DISPC_EXECUTABLE="$(command -v ispc)" \
-DTBB_DIR="/path/to/rtools45/x86_64-w64-mingw32.static.posix/lib/cmake/TBB" \
-DCMAKE_INSTALL_PREFIX="${OIDN_PREFIX}" \
..
If CMake cannot find TBB in Rtools automatically, add a hint such as:
-DTBB_ROOT=/path/to/rtools45/x86_64-w64-mingw32.static.posix
(adjust the path for your actual Rtools45 install) to the cmake
command above.
Then build and install:
ninja
ninja install
After installation you should have, for example:
C:/local/oidn-static/include/OpenImageDenoise/oidn.h
C:/local/oidn-static/lib/libOpenImageDenoise.a (and related libs)
Tell R where OIDN lives
In a regular Windows shell or PowerShell, add to your user .Renviron:
echo 'OIDN_PATH=C:/local/oidn-static' >> "$HOME/.Renviron"
or edit the file and add the above manually with
devtools::edit_r_environ().
Restart R (or your IDE), then reinstall rayrender from source:
devtools::install_github("tylermorganwall/rayrender", force = TRUE)
After this, rayrender should detect OIDN during configure and enable
denoising support on Windows.
Usage
We’ll start by rendering a simple scene consisting of the ground,
a sphere, and the included R.obj file. The location of the R.obj
file can be accessed by calling the function r_obj(). First, we add the
ground using the generate_ground() function, which renders an extremely
large sphere that (at our scene’s scale) functions as a flat surface. We
also add a simple blue sphere to the scene.
library(rayrender)
scene = generate_ground(material=diffuse(checkercolor="grey20")) |>
add_object(sphere(y=0.2,material=glossy(color="#2b6eff",reflectance=0.05)))
render_scene(scene, parallel = TRUE, width = 800, height = 800, samples = 64)
<!-- -->
By default, a scene without any lights includes a blue sky. We can turn
this off either by setting ambient_light = FALSE, or by adding a light
to our scene. We will add an emissive sphere above and behind our
camera.
scene = generate_ground(material=diffuse(checkercolor="grey20")) |>
add_object(sphere(y=0.2,material=glossy(color="#2b6eff",reflectance=0.05))) |>
add_object(sphere(y=10,z=1,radius=4,material=light(intensity=4))) |>
add_object(sphere(z=15,material=light(intensity=70)))
render_scene(scene, parallel = TRUE, width = 800, height = 800, samples = 64)
<!-- -->
Now we’ll add the (included) R .obj file into the scene, using the
obj_model() function. We will scale it down slightly using the
scale_obj argument, and then embed it on the surface of the ball.
scene = generate_ground(material=diffuse(checkercolor="grey20")) |>
add_object(sphere(y=0.2,material=glossy(color="#2b6eff",reflectance=0.05))) |>
add_object(obj_model(r_obj(simple_r = TRUE),
z=1,y=-0.05,scale_obj=0.45,material=diffuse())) |>
add_object(sphere(y=10,z=1,radius=4,material=light(intensity=4))) |>
add_object(sphere(z=15,material=light(intensity=70)))
render_scene(scene, parallel = TRUE, width = 800, height = 800, samples = 64)
<!-- -->
Here we’ll render a grid of different viewpoints.
filename = tempfile()
image1 = render_scene(scene, parallel = TRUE, width = 400, height = 400,
lookfrom = c(7,1,7), samples = 64, plot_scene = FALSE)
image2 = render_scene(scene, parallel = TRUE, width = 400, height = 400,
lookfrom = c(0,7,7), samples = 64, plot_scene = FALSE)
image3 = render_scene(scene, parallel = TRUE, width = 400, height = 400,
lookfrom = c(-7,0,-7), samples = 64, plot_scene = FALSE)
image4 = render_scene(scene, parallel = TRUE, width = 400, height = 400,
lookfrom = c(-7,7,7), samples = 64, plot_scene = FALSE)
rayimage::plot_image_grid(list(image1,image2,image3,image4), dim = c(2,2))
<!-- -->
Here’s another example: We start by generating an empty Cornell box and
rendering it with render_scene(). Setting parallel = TRUE will
utilize all available cores on your machine. The lookat, lookfrom,
aperture, and fov arguments control the camera, and the samples
argument controls how many samples to take at each pixel. Higher sample
counts result in a less noisy image.
scene = generate_cornell()
render_scene(scene, lookfrom=c(278,278,-800),lookat = c(278,278,0), aperture=0, fov=40, samples = 64,
ambient_light=FALSE, parallel=TRUE, width=800, height=800)
<!-- -->