# Parallel Reductions Benchmark

## For CPUs and GPUs in C++, CUDA, and Rust

Thrust, CUB, TBB, AVX2, AVX-512, CUDA, OpenCL, OpenMP, Metal, and Rust - all it takes to sum a lot of numbers fast!

One of the canonical examples when designing parallel algorithms is implementing parallel tree-like reductions, a special case of which is accumulating a bunch of numbers located in a contiguous block of memory.
In modern C++, most developers would call `std::accumulate(array.begin(), array.end(), 0)`, and in Python, it's just `sum(array)`.
Implementing those operations with high utilization on many-core systems is surprisingly non-trivial and depends heavily on the hardware architecture.
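As a point of reference, the serial baseline looks like this; a minimal sketch, not the repository's benchmark harness:

```cpp
#include <numeric> // std::accumulate
#include <vector>

int main() {
    std::vector<float> array(1'000'000, 1.0f);
    // Seeding with `0.0` instead of `0` matters: an integer seed would
    // truncate every partial sum to an integer.
    double sum = std::accumulate(array.begin(), array.end(), 0.0);
    return sum == 1'000'000 ? 0 : 1;
}
```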
This repository contains several educational examples showcasing the performance differences between various solutions:

- Single-threaded but SIMD-accelerated code:
  - SSE, AVX, and AVX-512 on x86.
  - NEON and SVE on Arm.
- OpenMP `reduction` clause vs manual `omp parallel` scheduling (see the sketch after this list).
- Thrust with its `thrust::reduce`.
- CUB with its `cub::DeviceReduce::Sum`.
- CUDA kernels with and without warp-primitives.
- CUDA kernels with Tensor-Core acceleration.
- BLAS and cuBLAS strided vector and matrix routines.
- OpenCL kernels, eight of them.
- Parallel STL `<algorithm>` in GCC with Intel oneTBB.
- Reusable thread-pool libraries for C++, like Taskflow.
- Reusable thread-pool libraries for Rust, like Rayon and Tokio.
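To give a flavor of how compact some of these variants are, here is a minimal sketch of the OpenMP `reduction`-clause approach; the repository's kernels layer alignment and NUMA pinning on top, and `sum_openmp` is an illustrative name:

```cpp
#include <cstddef>

// Each thread accumulates a private partial sum; OpenMP combines them
// at the end of the parallel region. Compile with `-fopenmp`.
float sum_openmp(float const *data, std::size_t n) {
    float sum = 0;
#pragma omp parallel for reduction(+ : sum)
    for (std::size_t i = 0; i < n; ++i)
        sum += data[i];
    return sum;
}
```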
Notably:

- On arrays with billions of elements, the default `float` error mounts, and the results become inaccurate unless a Kahan-like compensation scheme is used (see the sketch after this list).
- To minimize the overhead of Translation Lookaside Buffer (TLB) misses, the arrays are aligned to the OS page size and are allocated in huge pages on Linux, if possible.
- To reduce memory access latency on many-core Non-Uniform Memory Access (NUMA) systems, `libnuma` and `pthread` help maximize data affinity.
- To "hide" latency on wide CPU registers (like `ZMM`), expensive Assembly instructions executed on different CPU ports are interleaved.
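Regarding the first point, here is a minimal scalar sketch of Kahan's compensated summation; the `avx2<f32kahan>` entries in the results below refer to a vectorized take on the same idea:

```cpp
#include <cstddef>

// Kahan summation: `compensation` carries the low-order bits that a
// plain `sum += x` would round away. Note that `-ffast-math` would
// let the compiler optimize the compensation out entirely.
float sum_kahan(float const *data, std::size_t n) {
    float sum = 0, compensation = 0;
    for (std::size_t i = 0; i != n; ++i) {
        float y = data[i] - compensation;
        float t = sum + y;            // Low-order bits of `y` are lost here...
        compensation = (t - sum) - y; // ...and recovered here.
        sum = t;
    }
    return sum;
}
```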
The examples in this repository were originally written in the early 2010s and were updated in 2019, 2022, and 2025. Previously, the repository also included ArrayFire and Halide implementations, Vulkan queues for SPIR-V kernels, and SYCL.
- Lecture Slides from 2019.
- CppRussia Talk in Russia in 2019.
- JetBrains Talk in Germany & Russia in 2019.
## Results
Different hardware will yield different results, but the general trends and observations are:

- Accumulating over 100M `float` values generally requires `double` precision or Kahan-like numerical tricks to avoid instability.
- A carefully unrolled `for`-loop is easier for the compiler to vectorize and is faster than `std::accumulate`.
- For `float`, `double`, and even Kahan-like schemes, hand-written AVX2 code is faster than auto-vectorization.
- Parallel `std::reduce` for extensive collections is naturally faster than serial `std::accumulate`, but you may not feel the difference between `std::execution::par` and `std::execution::par_unseq` on CPU (see the sketch after this list).
- CUB is always faster than Thrust, and even for trivial types and large jobs, the difference can reach 50%.
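For reference, the parallel STL comparison above boils down to the execution-policy argument; a minimal sketch, assuming GCC with oneTBB (link with `-ltbb`):

```cpp
#include <execution> // std::execution::par, std::execution::par_unseq
#include <numeric>   // std::reduce
#include <vector>

int main() {
    std::vector<double> array(100'000'000, 1.0);
    // `par` permits multi-threading; `par_unseq` additionally permits
    // vectorization and interleaving within each thread.
    double a = std::reduce(std::execution::par, array.begin(), array.end(), 0.0);
    double b = std::reduce(std::execution::par_unseq, array.begin(), array.end(), 0.0);
    return a == b ? 0 : 1;
}
```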
### Nvidia DGX-H100
On Nvidia DGX-H100 nodes, with GCC 12 and NVCC 12.1, one may expect the following results:
```sh
$ build_release/reduce_bench
You did not feed the size of arrays, so we will use a 1GB array!
Running build_release/reduce_bench
Run on (160 X 2100 MHz CPU s)
CPU Caches:
  L1 Data 32 KiB (x160)
  L1 Instruction 32 KiB (x160)
  L2 Unified 4096 KiB (x80)
  L3 Unified 16384 KiB (x2)
Load Average: 3.23, 19.01, 13.71
--------------------------------------------------------------------------------------
Benchmark                               Time             CPU   Iterations UserCounters...
--------------------------------------------------------------------------------------
unrolled<f32>                   149618549 ns    149615366 ns           95 bytes/s=7.17653G/s error,%=50
unrolled<f64>                   146594731 ns    146593719 ns           95 bytes/s=7.32456G/s error,%=0
std::accumulate<f32>            194089563 ns    194088811 ns           72 bytes/s=5.5322G/s error,%=93.75
std::accumulate<f64>            192657883 ns    192657360 ns           74 bytes/s=5.57331G/s error,%=0
openmp<f32>                       5061544 ns      5043250 ns         2407 bytes/s=212.137G/s error,%=65.5651u
std::reduce<par, f32>             3749938 ns      3727477 ns         2778 bytes/s=286.336G/s error,%=0
std::reduce<par, f64>             3921280 ns      3916897 ns         3722 bytes/s=273.824G/s error,%=100
std::reduce<par_unseq, f32>       3884794 ns      3864061 ns         3644 bytes/s=276.396G/s error,%=0
std::reduce<par_unseq, f64>       3889332 ns      3866968 ns         3585 bytes/s=276.074G/s error,%=100
sse<f32aligned>@threads           5986350 ns      5193690 ns         2343 bytes/s=179.365G/s error,%=1.25021
avx2<f32>                       110796474 ns    110794861 ns          127 bytes/s=9.69112G/s error,%=50
avx2<f32kahan>                  134144762 ns    134137771 ns          105 bytes/s=8.00435G/s error,%=0
avx2<f64>                       115791797 ns    115790878 ns          121 bytes/s=9.27304G/s error,%=0
avx2<f32aligned>@threads          5958283 ns      5041060 ns         2358 bytes/s=180.21G/s error,%=1.25033
avx2<f64>@threads                 5996481 ns      5123440 ns         2337 bytes/s=179.062G/s error,%=1.25001
cub@cuda                           356488 ns       356482 ns        39315 bytes/s=3.012T/s error,%=0
warps@cuda                         486387 ns       486377 ns        28788 bytes/s=2.20759T/s error,%=0
thrust@cuda                        500941 ns       500919 ns        27512 bytes/s=2.14345T/s error,%=0
```
Observations:

- 286 GB/s upper bound on the CPU.
- 2.2 TB/s using vanilla CUDA approaches.
- 3 TB/s using CUB (see the sketch below).
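The sketch below shows roughly what the `cub@cuda` line measures: CUB's two-phase `cub::DeviceReduce::Sum`, where the first call only queries the required temporary-storage size. A minimal sketch compiled with `nvcc`, with error handling omitted:

```cu
#include <cuda_runtime.h>
#include <cub/cub.cuh>

float sum_cub(float const *device_data, int n) {
    float *device_sum = nullptr;
    cudaMalloc(&device_sum, sizeof(float));
    // First call: `workspace == nullptr`, so CUB only reports the
    // required temporary-storage size in `workspace_bytes`.
    void *workspace = nullptr;
    size_t workspace_bytes = 0;
    cub::DeviceReduce::Sum(workspace, workspace_bytes, device_data, device_sum, n);
    cudaMalloc(&workspace, workspace_bytes);
    // Second call: the actual device-wide tree-like reduction.
    cub::DeviceReduce::Sum(workspace, workspace_bytes, device_data, device_sum, n);
    float result = 0;
    cudaMemcpy(&result, device_sum, sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(workspace);
    cudaFree(device_sum);
    return result;
}
```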
On Nvidia H200 GPUs, the numbers are even higher:
```sh
-----------------------------------------------------------------------------------
Benchmark                        Time             CPU   Iterations UserCounters...
-----------------------------------------------------------------------------------
cuda/cub                    254609 ns       254607 ns        54992 bytes/s=4.21723T/s error,%=0
cuda/thrust                 319709 ns       316368 ns        43846 bytes/s=3.3585T/s error,%=0
cuda/thrust/interleaving    318598 ns       314996 ns        43956 bytes/s=3.37021T/s error,%=0
```
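The `cuda/thrust` line above maps to a near one-liner; a minimal sketch, again for `nvcc`:

```cu
#include <thrust/device_vector.h>
#include <thrust/reduce.h>

// `thrust::reduce` dispatches a tree-like reduction on the device.
float sum_thrust(thrust::device_vector<float> const &data) {
    return thrust::reduce(data.begin(), data.end(), 0.0f);
}
```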
### AWS Zen4 m7a.metal-48xl
On AWS Zen4 m7a.metal-48xl instances with GCC 12, one may expect the following results:
```sh
$ build_release/reduce_bench
You did not feed the size of arrays, so we will use a 1GB array!
Running build_release/reduce_bench
Run on (192 X 3701.95 MHz CPU s)
CPU Caches:
  L1 Data 32 KiB (x192)
  L1 Instruction 32 KiB (x192)
  L2 Unified 1024 KiB (x192)
  L3 Unified 32768 KiB (x24)
Load Average: 4.54, 2.78, 4.94
------------------------------------------------------------------------------------------
Benchmark                                   Time             CPU   Iterations UserCounters...
------------------------------------------------------------------------------------------
unrolled<f32>                       30546168 ns     30416147 ns          461 bytes/s=35.1514G/s error,%=50
unrolled<f64>                       31563095 ns     31447017 ns          442 bytes/s=34.0189G/s error,%=0
std::accumulate<f32>               219734340 ns    219326135 ns           64 bytes/s=4.88655G/s error,%=93.75
std::accumulate<f64>               219853985 ns    219429612 ns           64 bytes/s=4.88389G/s error,%=0
openmp<f32>                          5749979 ns      5709315 ns         1996 bytes/s=186.738G/s error,%=149.012u
std::reduce<par, f32>                2913596 ns      2827125 ns         4789 bytes/s=368.528G/s error,%=0
std::reduce<par, f64>                2899901 ns      2831183 ns         4874 bytes/s=370.268G/s error,%=0
std::reduce<par_unseq, f32>          3026168 ns      2940291 ns         4461 bytes/s=354.819G/s error,%=0
std::reduce<par_unseq, f64>          3053703 ns      2936506 ns         4797 bytes/s=351.62G/s error,%=0
sse<f32aligned>@threads             10132563 ns      9734108 ns         1000 bytes/s=105.969G/s error,%=0.520837
avx2<f32>                           32225620 ns     32045487 ns          435 bytes/s=33.3195G/s error,%=50
avx2<f32kahan>                     110283627 ns    110023814 ns          127 bytes/s=9.73619G/s error,%=0
avx2<f64>                           55559986 ns     55422069 ns          247 bytes/s=19.3258G/s error,%=0
avx2<f32aligned>@threads             9612120 ns      9277454 ns         1467 bytes/s=111.707G/s error,%=0.521407
avx2<f64>@threads                   10091882 ns      9708706 ns         1389 bytes/s=106.397G/s error,%=0.520837
avx512<f32streamed>                 55713332 ns     55615555 ns          243 bytes/s=19.2726G/s error,%=50
avx512<f32streamed>@threads          9701513 ns      9383267 ns         1435 bytes/s=110.678G/s error,%=50.2604
avx512<f32unrolled>                 48203352 ns     48085623 ns          228 bytes/s=22.2753G/s error,%=50
avx512<f32unrolled>@threads          9275968 ns      8955543 ns         1508 bytes/s=115.755G/s error,%=50.2604
avx512<f32interleaving>             40012581 ns     39939290 ns
```
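The `avx512<f32interleaving>` rows illustrate the register-interleaving trick mentioned earlier: several independent `ZMM` accumulators keep more additions in flight than a single dependency chain would allow. A minimal sketch of the idea, not the repository's exact kernel (compile with `-mavx512f`):

```cpp
#include <immintrin.h>
#include <cstddef>

float sum_avx512_interleaved(float const *data, std::size_t n) {
    // Four independent accumulators hide the latency of `vaddps`.
    __m512 acc0 = _mm512_setzero_ps(), acc1 = _mm512_setzero_ps();
    __m512 acc2 = _mm512_setzero_ps(), acc3 = _mm512_setzero_ps();
    std::size_t i = 0;
    for (; i + 64 <= n; i += 64) { // 4 registers x 16 floats per iteration
        acc0 = _mm512_add_ps(acc0, _mm512_loadu_ps(data + i));
        acc1 = _mm512_add_ps(acc1, _mm512_loadu_ps(data + i + 16));
        acc2 = _mm512_add_ps(acc2, _mm512_loadu_ps(data + i + 32));
        acc3 = _mm512_add_ps(acc3, _mm512_loadu_ps(data + i + 48));
    }
    float sum = _mm512_reduce_add_ps(
        _mm512_add_ps(_mm512_add_ps(acc0, acc1), _mm512_add_ps(acc2, acc3)));
    for (; i != n; ++i) sum += data[i]; // Scalar tail.
    return sum;
}
```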