# kodr

Random Linear Network Coding
## Motivation
When I started looking into erasure coding techniques, I found many implementations of traditional block codes like Reed-Solomon codes and fountain codes like Raptor codes, but no implementation of Random Linear Network Coding (RLNC) - which motivated me to take up this venture of writing kodr. I think RLNC is a great fit for large-scale decentralized systems where peers talk to each other and store and retrieve data themselves by crawling the Distributed Hash Table (DHT). I have a series of blog posts on why RLNC is great; it begins at https://itzmeanjan.in/pages/understanding-rlnc.html.
There are different variants of RLNC, each suited to a certain application domain, and choosing one comes with tradeoffs. For now only the ✅-marked variants are implemented in this package, though the goal is to eventually implement all of them ⏳.
- Full RLNC ✅
- Systematic RLNC ✅
- On-the-fly RLNC ⏳
- Sparse RLNC ⏳
- Generational RLNC ⏳
- Caterpillar RLNC ⏳
To learn the basics of RLNC, you may want to go through my older blog post at https://itzmeanjan.in/pages/rlnc-in-depth.html. During encoding, recoding and decoding, kodr interprets each byte of data as an element of the finite field $GF(2^8)$. Why?
- From a performance and memory-consumption point of view, $GF(2^8)$ strikes a nice balance.
- Working over a larger finite field does decrease the chance of (randomly) generating linearly dependent pieces (which are useless during decoding), but it makes computation costlier, and if field operations are implemented using lookup tables, memory consumption grows considerably.
- On the other hand, working over the much smaller field $GF(2)$ increases the chance of generating linearly dependent pieces, though with a sophisticated design like Fulcrum codes this can still be turned to an advantage.
- Also, the larger the finite field, the higher the cost of storing the randomly sampled coding vectors.
This library provides an easy-to-use API for encoding, recoding and decoding data of arbitrary length.
## Installation
Assuming you have Go (>= 1.24) installed, add kodr as a dependency to your project:
```bash
go get -u github.com/itzmeanjan/kodr/...
```
## Testing
Run all the tests from all the packages:
```bash
go test -v -cover -count=10 ./...
```
## Benchmarking
To get a picture of kodr's performance, issue the following commands.
> [!CAUTION]
> Ensure that you've disabled CPU frequency scaling when benchmarking, following this guide: https://github.com/google/benchmark/blob/main/docs/reducing_variance.md.
```bash
# Full RLNC
go test -run=xxx -bench=Encoder ./benches/full/
go test -run=xxx -bench=Recoder ./benches/full/
go test -run=xxx -bench=Decoder ./benches/full/

# Systematic RLNC
go test -run=xxx -bench=Encoder ./benches/systematic
go test -run=xxx -bench=Decoder ./benches/systematic
```
> [!NOTE]
> RLNC decoder performance denotes the average time required for full data reconstruction from N coded pieces. The following tables make it clear that, for a fixed message size, decoding cost grows very fast as the number of pieces increases.
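A back-of-the-envelope model helps explain this scaling (a rough estimate, not an exact cost model of kodr's decoder): decoding $n$ pieces of an $S$-byte message by Gaussian elimination over an $n \times (n + S/n)$ augmented matrix takes on the order of

$$
n^2\left(n + \frac{S}{n}\right) = n^3 + nS
$$

field operations. When $S \gg n^2$, the $nS$ term dominates, so decoding time roughly doubles when either the piece count or the message size doubles - consistent with the largest entries in the tables below (e.g. 32M with 128 pieces at ~6.5 s versus 256 pieces at ~13.1 s).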
### Full RLNC
```
goos: linux
goarch: amd64
pkg: github.com/itzmeanjan/kodr/benches/full
cpu: 12th Gen Intel(R) Core(TM) i7-1260P
BenchmarkFullRLNCEncoder/1M/16_Pieces-16 1324 880135 ns/op 1265.86 MB/s 65600 B/op 3 allocs/op
BenchmarkFullRLNCEncoder/1M/32_Pieces-16 1300 880526 ns/op 1228.10 MB/s 32848 B/op 3 allocs/op
BenchmarkFullRLNCEncoder/1M/64_Pieces-16 1320 873052 ns/op 1219.89 MB/s 16496 B/op 3 allocs/op
BenchmarkFullRLNCEncoder/1M/128_Pieces-16 1359 867976 ns/op 1217.66 MB/s 8368 B/op 3 allocs/op
BenchmarkFullRLNCEncoder/1M/256_Pieces-16 1303 868170 ns/op 1212.81 MB/s 4400 B/op 3 allocs/op
BenchmarkFullRLNCEncoder/16M/16_Pieces-16 80 14504921 ns/op 1228.95 MB/s 1048640 B/op 3 allocs/op
BenchmarkFullRLNCEncoder/16M/32_Pieces-16 79 14326616 ns/op 1207.65 MB/s 524368 B/op 3 allocs/op
BenchmarkFullRLNCEncoder/16M/64_Pieces-16 80 14304505 ns/op 1191.19 MB/s 262256 B/op 3 allocs/op
BenchmarkFullRLNCEncoder/16M/128_Pieces-16 82 14443587 ns/op 1170.65 MB/s 131248 B/op 3 allocs/op
BenchmarkFullRLNCEncoder/16M/256_Pieces-16 81 14155676 ns/op 1189.84 MB/s 65840 B/op 3 allocs/op
BenchmarkFullRLNCEncoder/32M/16_Pieces-16 40 28980124 ns/op 1230.21 MB/s 2097216 B/op 3 allocs/op
BenchmarkFullRLNCEncoder/32M/32_Pieces-16 40 28965356 ns/op 1194.64 MB/s 1048656 B/op 3 allocs/op
BenchmarkFullRLNCEncoder/32M/64_Pieces-16 40 29065528 ns/op 1172.48 MB/s 524400 B/op 3 allocs/op
BenchmarkFullRLNCEncoder/32M/128_Pieces-16 40 28956767 ns/op 1167.83 MB/s 262320 B/op 3 allocs/op
BenchmarkFullRLNCEncoder/32M/256_Pieces-16 40 28918111 ns/op 1164.87 MB/s 131376 B/op 3 allocs/op
PASS
ok github.com/itzmeanjan/kodr/benches/full 18.616s
```
```
goos: linux
goarch: amd64
pkg: github.com/itzmeanjan/kodr/benches/full
cpu: 12th Gen Intel(R) Core(TM) i7-1260P
BenchmarkFullRLNCRecoder/1M/16_Pieces-16 1339 888852 ns/op 1253.73 MB/s 65640 B/op 5 allocs/op
BenchmarkFullRLNCRecoder/1M/32_Pieces-16 1358 876789 ns/op 1234.50 MB/s 32904 B/op 5 allocs/op
BenchmarkFullRLNCRecoder/1M/64_Pieces-16 1353 879587 ns/op 1215.48 MB/s 16584 B/op 5 allocs/op
BenchmarkFullRLNCRecoder/1M/128_Pieces-16 1333 896799 ns/op 1196.79 MB/s 8520 B/op 5 allocs/op
BenchmarkFullRLNCRecoder/1M/256_Pieces-16 1178 1020361 ns/op 1096.15 MB/s 4680 B/op 5 allocs/op
BenchmarkFullRLNCRecoder/16M/16_Pieces-16 80 14806643 ns/op 1203.92 MB/s 1048680 B/op 5 allocs/op
BenchmarkFullRLNCRecoder/16M/32_Pieces-16 81 14417283 ns/op 1200.13 MB/s 524424 B/op 5 allocs/op
BenchmarkFullRLNCRecoder/16M/64_Pieces-16 84 14663685 ns/op 1162.29 MB/s 262344 B/op 5 allocs/op
BenchmarkFullRLNCRecoder/16M/128_Pieces-16 78 14569035 ns/op 1161.70 MB/s 131400 B/op 5 allocs/op
BenchmarkFullRLNCRecoder/16M/256_Pieces-16 79 15372783 ns/op 1099.90 MB/s 66120 B/op 5 allocs/op
BenchmarkFullRLNCRecoder/32M/16_Pieces-16 39 31425659 ns/op 1134.48 MB/s 2097256 B/op 5 allocs/op
BenchmarkFullRLNCRecoder/32M/32_Pieces-16 37 31162572 ns/op 1110.44 MB/s 1048712 B/op 5 allocs/op
BenchmarkFullRLNCRecoder/32M/64_Pieces-16 38 31776846 ns/op 1072.57 MB/s 524488 B/op 5 allocs/op
BenchmarkFullRLNCRecoder/32M/128_Pieces-16 37 30497395 ns/op 1109.38 MB/s 262472 B/op 5 allocs/op
BenchmarkFullRLNCRecoder/32M/256_Pieces-16 37 31401920 ns/op 1074.82 MB/s 131656 B/op 5 allocs/op
PASS
ok github.com/itzmeanjan/kodr/benches/full 66.481s
```
Notice how, with a small number of pieces, decoding time stays affordable even for large messages.
```
goos: linux
goarch: amd64
pkg: github.com/itzmeanjan/kodr/benches/full
cpu: 12th Gen Intel(R) Core(TM) i7-1260P
BenchmarkFullRLNCDecoder/1M/16_Pieces-16 168832 0.0000059 second/decode
BenchmarkFullRLNCDecoder/1M/32_Pieces-16 32136 0.0000328 second/decode
BenchmarkFullRLNCDecoder/1M/64_Pieces-16 4744 0.0002385 second/decode
BenchmarkFullRLNCDecoder/1M/128_Pieces-16 471 0.002227 second/decode
BenchmarkFullRLNCDecoder/1M/256_Pieces-16 16 0.06368 second/decode
BenchmarkFullRLNCDecoder/2M/16_Pieces-16 151488 0.0000067 second/decode
BenchmarkFullRLNCDecoder/2M/32_Pieces-16 26246 0.0000411 second/decode
BenchmarkFullRLNCDecoder/2M/64_Pieces-16 2848 0.0003575 second/decode
BenchmarkFullRLNCDecoder/2M/128_Pieces-16 123 0.008466 second/decode
BenchmarkFullRLNCDecoder/2M/256_Pieces-16 2 0.6190 second/decode
BenchmarkFullRLNCDecoder/16M/16_Pieces-16 24030 0.0000427 second/decode
BenchmarkFullRLNCDecoder/16M/32_Pieces-16 2 0.5769 second/decode
BenchmarkFullRLNCDecoder/16M/64_Pieces-16 1 1.594 second/decode
BenchmarkFullRLNCDecoder/16M/128_Pieces-16 1 3.355 second/decode
BenchmarkFullRLNCDecoder/16M/256_Pieces-16 1 6.483 second/decode
BenchmarkFullRLNCDecoder/32M/16_Pieces-16 2 0.5767 second/decode
BenchmarkFullRLNCDecoder/32M/32_Pieces-16 1 1.619 second/decode
BenchmarkFullRLNCDecoder/32M/64_Pieces-16 1 3.233 second/decode
BenchmarkFullRLNCDecoder/32M/128_Pieces-16 1 6.540 second/decode
BenchmarkFullRLNCDecoder/32M/256_Pieces-16 1 13.07 second/decode
PASS
ok github.com/itzmeanjan/kodr/benches/full 214.901s
```
### Systematic RLNC
```
goos: linux
goarch: amd64
pkg: github.com/itzmeanjan/kodr/benches/systematic
cpu: 12th Gen Intel(R) Core(TM) i7-1260P
BenchmarkSystematicRLNCEncoder/1M/16Pieces-16 1558 871583 ns/op 1278.28 MB/s 65600 B/op 3 allocs/op
BenchmarkSystematicRLNCEncoder/1M/32Pieces-16 1872 891196 ns/op 1213.40 MB/s 32848 B/op 3 allocs/op
BenchmarkSystematicRLNCEncoder/1M/64Pieces-16 3626 907845 ns/op 1173.13 MB/s 16496 B/op 3 allocs/op
BenchmarkSystematicRLNCEncoder/1M/128Pieces-16 10000 901206 ns/op 117
```
