<!-- README.md is generated from README.Rmd. Please edit that file -->

anvil <img src="man/figures/logo.png" align="right" width = "120" />

Package website: release | dev

<!-- badges: start -->

Lifecycle: experimental | R-CMD-check | CRAN status | codecov | r-universe

<!-- badges: end -->

Composable code transformation framework for R, allowing you to run numerical programs at the speed of light. It currently implements JIT compilation for very fast execution and reverse-mode automatic differentiation. Programs can run on various hardware backends, including CPU and GPU.

Installation

{anvil} can be installed from GitHub or r-universe. Prebuilt Docker images are also available. See the Installation vignette for detailed instructions.
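A minimal sketch of the two install paths. The GitHub repository path `r-xla/anvil` and the r-universe URL are assumptions inferred from the package's organization name; the Installation vignette has the authoritative commands.

```r
# Install the development version from GitHub
# (repository path assumed; see the Installation vignette)
# install.packages("remotes")
remotes::install_github("r-xla/anvil")

# Or install a prebuilt binary from r-universe
# (repository URL assumed)
install.packages("anvil", repos = "https://r-xla.r-universe.dev")
```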

Quick Start

Below, we create a standard R function. To run it through {anvil}, we first wrap it in a jit() call. When the resulting function is called on AnvilArrays – the primary data type in {anvil} – it is JIT compiled and then executed.

library(anvil)
f <- function(a, b, x) {
  a * x + b
}
f_jit <- jit(f)

a <- nv_scalar(1.0, "f32")
b <- nv_scalar(-2.0, "f32")
x <- nv_scalar(3.0, "f32")

f_jit(a, b, x)
#> AnvilArray
#>  1
#> [ CPUf32{} ]
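As a sanity check, the JIT-compiled result matches what the plain R function returns on ordinary doubles:

```r
# The original function still works on plain R numerics:
# 1.0 * 3.0 + (-2.0)
f(1.0, -2.0, 3.0)
#> [1] 1
```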

Through automatic differentiation, we can also obtain the gradient of the above function.

g_jit <- jit(gradient(f, wrt = c("a", "b")))
g_jit(a, b, x)
#> $a
#> AnvilArray
#>  3
#> [ CPUf32{} ] 
#> 
#> $b
#> AnvilArray
#>  1
#> [ CPUf32{} ]

Main Features

  • Automatic Differentiation:
    • Gradients for functions with scalar outputs are supported.
  • Fast:
    • Code is JIT compiled into a single kernel.
    • Runs on different hardware backends, including CPU and GPU.
    • Asynchronous allocation and execution, allowing the accelerator to do its job while the R interpreter keeps running.
  • Extendable:
    • It is possible to add new primitives, transformations, and (with some effort) new backends.
    • The package is written almost entirely in R.
  • Multi-backend:
    • The backend supports execution via XLA as well as an experimental {quickr}-based Fortran backend (CPU only).

When to use this package?

While {anvil} can run certain types of programs extremely fast, it only applies to a certain category of problems. Specifically, it is suited to numerical algorithms, such as optimizing Bayesian models, training neural networks, or numerical optimization more generally. Another restriction is that {anvil} needs to re-compile the code for each new unique input shape. This has the advantage that the compiler can make memory optimizations, but the compilation overhead can be a problem for short-running programs.
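A sketch of what the per-shape compilation overhead looks like in practice, using only the Quick Start API and assuming (as described above) that the compiled kernel is cached per input shape:

```r
library(anvil)

f <- function(a, b, x) a * x + b
f_jit <- jit(f)

a <- nv_scalar(1.0, "f32")
b <- nv_scalar(-2.0, "f32")
x <- nv_scalar(3.0, "f32")

# First call: pays the JIT compilation cost for this input shape.
system.time(f_jit(a, b, x))

# Subsequent calls with the same input shapes reuse the compiled
# kernel and should be much faster.
system.time(f_jit(a, b, x))
```

For a long-running optimization loop the one-off compilation cost amortizes quickly; for a program that runs once on small inputs, it may dominate.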

Platform Support

  • Linux
    • :white_check_mark: CPU backend is fully supported.
    • :white_check_mark: CUDA (NVIDIA GPU) backend is fully supported.
  • Windows
    • :white_check_mark: CPU backend is fully supported.
    • :warning: GPU is only supported via Windows Subsystem for Linux (WSL2).
  • macOS
    • :white_check_mark: CPU backend is supported.
    • :warning: Metal (Apple GPU) backend is available but not fully functional.

Acknowledgments

  • This work is supported by MaRDI.
  • The design of this package was inspired by and borrows from several existing projects.
  • For JIT compilation, we leverage the OpenXLA project.