PythonBrainFuck
A very (!) fast BrainFuck interpreter in Python
Install / Use
Here is a BrainFuck example:
+++++ +++++ initialize counter (cell #0) to 10
[ use loop to set the next four cells to 70/100/30/10
> +++++ ++ add 7 to cell #1
> +++++ +++++ add 10 to cell #2
> +++ add 3 to cell #3
> + add 1 to cell #4
<<<< - decrement counter (cell #0)
]
> ++ . print 'H'
> + . print 'e'
+++++ ++ . print 'l'
. print 'l'
+++ . print 'o'
> ++ . print ' '
<< +++++ +++++ +++++ . print 'W'
> . print 'o'
+++ . print 'r'
----- - . print 'l'
----- --- . print 'd'
> + . print '!'
> . print '\n'
How to use the interpreter:
python2 ./bf.py hello.bf
Hello World!
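The core of such an interpreter fits in a few lines. Here is a minimal sketch (an illustration only, not the repo's actual bf.py, which is structured for RPython):

```python
def run(program):
    """Interpret a BrainFuck program and return its output as a string."""
    tape = [0] * 30000          # the memory tape
    pos = 0                     # data pointer
    pc = 0                      # program counter
    out = []
    while pc < len(program):
        c = program[pc]
        if c == '>':
            pos += 1
        elif c == '<':
            pos -= 1
        elif c == '+':
            tape[pos] = (tape[pos] + 1) % 256
        elif c == '-':
            tape[pos] = (tape[pos] - 1) % 256
        elif c == '.':
            out.append(chr(tape[pos]))
        elif c == '[' and tape[pos] == 0:
            # jump forward to the matching ']'
            depth = 1
            while depth:
                pc += 1
                depth += {'[': 1, ']': -1}.get(program[pc], 0)
        elif c == ']' and tape[pos] != 0:
            # jump back to the matching '['
            depth = 1
            while depth:
                pc -= 1
                depth += {']': 1, '[': -1}.get(program[pc], 0)
        pc += 1
    return ''.join(out)

print(run('++++++++[>++++++++<-]>+.'))  # prints 'A'
```

Note that every `[` and `]` triggers a linear scan for its matching bracket, which is one reason a naive interpreter like this is slow on real programs.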
Speeding things up
With Pypy
If you try to run a long BrainFuck program like mandel.b, you will realize our interpreter is pretty slow.
python2 ./bf.py examples/mandel.b
# wait 1h45
A first simple way of speeding things up is to use Pypy instead of CPython.
PYPY_VERSION="pypy2.7-v7.3.9"
wget "https://downloads.python.org/pypy/${PYPY_VERSION}-linux64.tar.bz2"
tar -xjf "${PYPY_VERSION}-linux64.tar.bz2"
mv "${PYPY_VERSION}-linux64" pypy
# Only 1m30 now!
./pypy/bin/pypy ./bf.py ./examples/mandel.b
With a JIT
The interpreter is actually written in RPython, so it can be statically compiled using the Pypy toolchain.
Download the latest source of Pypy and uncompress it in a pypy-src folder. Note that you could also install rpython from PyPI.
wget "https://downloads.python.org/pypy/${PYPY_VERSION}-src.tar.bz2"
tar -xjf "${PYPY_VERSION}-src.tar.bz2"
mv "${PYPY_VERSION}-src" pypy-src
Then you can build an executable binary bf-c from the Python script bf.py:
# The compilation will take about 20s
python2 pypy-src/rpython/bin/rpython bf.py
# Mandelbrot now completes in 32s
./bf-c examples/mandel.b
You can rebuild bf-c with --opt=jit to add a JIT to your BrainFuck interpreter:
# The compilation will take about 7m (you can speed this up by using Pypy)
python2 pypy-src/rpython/bin/rpython --opt=jit bf.py
# Mandelbrot now completes in about 5 seconds(!)
./bf-c examples/mandel.b
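The JIT-specific change at the RPython level is essentially a JitDriver declaration plus a jit_merge_point call at the top of the dispatch loop, in the style of the JIT tutorial this repo credits. Here is a sketch (not the repo's exact bf.py); the dummy class makes the hints no-ops under plain Python, so the same file runs untranslated too:

```python
try:
    from rpython.rlib.jit import JitDriver
except ImportError:
    # Fallback so the file also runs under a regular Python interpreter.
    class JitDriver(object):
        def __init__(self, **kwargs): pass
        def jit_merge_point(self, **kwargs): pass

# greens: values that identify a position in the interpreted program;
# reds: everything else the dispatch loop mutates.
jitdriver = JitDriver(greens=['pc', 'program'], reds=['pos', 'tape', 'out'])

def run(program):
    tape = [0] * 30000
    pos = 0
    pc = 0
    out = []
    while pc < len(program):
        # Mark the top of the interpreter loop for the tracing JIT.
        jitdriver.jit_merge_point(pc=pc, program=program,
                                  pos=pos, tape=tape, out=out)
        c = program[pc]
        if c == '>':
            pos += 1
        elif c == '<':
            pos -= 1
        elif c == '+':
            tape[pos] += 1
        elif c == '-':
            tape[pos] -= 1
        elif c == '.':
            out.append(chr(tape[pos]))
        elif c == '[' and tape[pos] == 0:
            depth = 1
            while depth:
                pc += 1
                depth += {'[': 1, ']': -1}.get(program[pc], 0)
        elif c == ']' and tape[pos] != 0:
            depth = 1
            while depth:
                pc -= 1
                depth += {']': 1, '[': -1}.get(program[pc], 0)
        pc += 1
    return ''.join(out)
```

When translated with --opt=jit, the merge point tells the tracing JIT where interpreted loops start, so hot BrainFuck loops get compiled to machine code.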
Let's compare with a C implementation
I also looked for a fast BrainFuck interpreter written in C. After compiling with gcc -O3 (gcc 6.2), mandel.b takes about 5 seconds to run, which puts it in the same order of magnitude as the JIT version (without -O3, it takes 10 seconds).
gcc -O3 ./resources/bff4.c -o bff4
# About 5s
./bff4 < examples/mandel.b
Let's compile the BrainFuck directly
To round out those numbers, I finally tested a BrainFuck-to-C translator, then compiled the resulting C version of mandel.b. With -O3, the compiled mandel.b runs in a bit less than 1 second (without -O3, it takes 15 seconds).
gcc resources/brainfucc.c -o brainfucc
./brainfucc < examples/mandel.b > mandel.c
gcc -O3 mandel.c -o mandel
# 950ms
./mandel
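The translation itself is mechanical: each BrainFuck instruction maps to one C statement. Here is a minimal illustrative sketch of such a translator (the real brainfucc.c is a C program and its output differs):

```python
# One C statement per BrainFuck instruction, operating on a tape pointer p.
OPS = {
    '>': '++p;',
    '<': '--p;',
    '+': '++*p;',
    '-': '--*p;',
    '.': 'putchar(*p);',
    ',': '*p = getchar();',
    '[': 'while (*p) {',
    ']': '}',
}

def to_c(program):
    """Translate a BrainFuck program into a complete C source file."""
    lines = ['#include <stdio.h>',
             'static char tape[30000];',
             'int main(void) {',
             '    char *p = tape;']
    lines += ['    ' + OPS[c] for c in program if c in OPS]
    lines += ['    return 0;', '}']
    return '\n'.join(lines)
```

Feeding the output of `to_c` to gcc then lets the C optimizer do the heavy lifting, which is why this approach is the fastest of the bunch.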
Summary
Here is a summary of the speed gains I could observe on Ubuntu 16.10 (Core i7, 8 GB of RAM), running mandel.b:
- the initial bf.py with CPython (2.7): about 1h45 (baseline)
- the initial bf.py with Pypy (5.6.0): 1m30s (x70)
- the bf-c without JIT: 32s (x200)
- the bf-c with JIT: 5 seconds (x1250)
- the bff4 C implementation: 5 seconds with -O3, 10 seconds without
- the mandel binary built when compiling mandel.b directly: 1 second with -O3, 15 seconds without
The JIT addition contains code from this amazing tutorial on JITs.
If the BrainFuck interpreter bf.py is a bit hairy to look at, you can check out the step_by_step folder, which goes from the simplest interpreter to a slightly better one, then to pure RPython code, then adds the JIT-specific code, and finally some last optimizations.
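One of those optimizations is easy to illustrate: instead of scanning for the matching bracket on every `[` or `]`, precompute a jump table once before interpretation. This is a classic BrainFuck optimization; a sketch (the repo's actual code may differ):

```python
def precompute_jumps(program):
    """Map each '[' index to its matching ']' index, and vice versa."""
    jumps = {}
    stack = []
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            start = stack.pop()   # index of the matching '['
            jumps[start] = i
            jumps[i] = start
    return jumps

print(precompute_jumps('+[->+<]'))  # prints {1: 6, 6: 1}
```

With this table, every bracket jump becomes a single dictionary lookup instead of a linear scan, which matters a lot for tight inner loops like those in mandel.b.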