# BenchmarksPythonJuliaAndCo

Benchmarks of numerical programs with Python (and SciPy, Pythran, Numba), Julia, and C++.
Numerical benchmarks for Julia and Python.
We try to avoid trivial and meaningless benchmarks (for people doing numerics!) such as Fibonacci, sorting, and so on.
We put ourselves in the shoes of a typical Matlab (or Matlab-like) programmer, writing quite short but numerically intensive programs.
Are Python and Julia easy to use and efficient? We compare them with an optimized C++ implementation (and sometimes with a Fortran one).
The benchmarks:

- :new: Callback: callbacks of small and not-so-small functions.
- Gaussian: Gaussian elimination with partial pivoting.
- FeStiff: computing the stiffness matrix for the Poisson equation, discretized with P2 finite elements on triangles.
- Weno: a classical solver for hyperbolic equations in dimension 1, applied to the Burgers equation and to convection.
- Sparse: building a sparse matrix and computing a sparse matrix x vector product.
- MicroBenchmarks: very simple benchmarks showing the importance of different programming styles.
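To give a flavor of the kind of kernel these benchmarks measure, here is a minimal NumPy sketch of Gaussian elimination with partial pivoting. This is an illustration only, not the repository's implementation:

```python
import numpy as np

def solve_gauss(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    Illustrative sketch only; the benchmark codes in this repository differ.
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 1):
        # Partial pivoting: move the largest |entry| of column k to the diagonal.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        # Eliminate the entries below the pivot.
        m = A[k + 1:, k] / A[k, k]
        A[k + 1:, k:] -= np.outer(m, A[k, k:])
        b[k + 1:] -= m * b[k]
    # Back substitution on the resulting upper-triangular system.
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

Even this short kernel can be written in several styles (vectorized, looped, compiled), which is exactly the kind of variation the benchmarks compare across languages.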
We will add other significant numerical benchmarks in the (near) future.
## Dependencies
What you need to install:
- python3
- pip (pip3)
- g++ (and/or clang++)
- gfortran
- lapack
- openblas
- cmake
- gnuplot
You can install them using your distribution's package manager (apt, ...).
- julia
:exclamation: Julia :exclamation: Since 2018-10-08, the programs need at least version 1.1 (the stable version in 2018-10); note that all programs needed adaptation when moving to this version and will not run with older ones.
Note also that the version packaged with Ubuntu 18.04 is older; install the stable version from here. Since Julia is still evolving, the codes may need some adaptation to run with later versions of the language.
You also need:
- pythran
- scipy
- numpy
- numba
To install them, you can just do:

```shell
pip install pythran
```

and so on for the others.
You can also install them from conda.
Note that for Pythran it seems necessary to create a `.pythranrc` file in your home directory, to describe which BLAS is used:

```ini
[compiler]
blas=openblas
```
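For context, a Pythran kernel is plain Python annotated with an export comment; the file runs unchanged under CPython and can also be compiled to a native module. The function below is a hypothetical example, not one of the repository's kernels:

```python
import numpy as np

# pythran export axpy(float64[], float64[], float64)
def axpy(x, y, a):
    """Return a*x + y. The comment above tells Pythran which
    signature to compile; under plain CPython it is ignored."""
    return a * x + y
```

Running `pythran` on such a file produces a native extension module that can be imported in place of the pure-Python version, which is how the Pythran variants of these benchmarks are built.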
## How to run the benchmarks?
1. First solution: enter one of the directories `CallBack/`, `FeStiff/`, `Gaussian/`, `MicroBenchmarks/`, `Sparse/`, or `Weno/`, then read the `README.md` there, which explains how to run that benchmark.
2. Second solution: in each of these directories you will find a script, `runAllTests.sh`, which runs the benchmark for every language (and variant).
The first solution is certainly the safest.
## Results
Have a look at the wiki, where you can find results obtained on my personal computer, as well as some considerations on the different benchmarks and on optimizations implemented.
