K4os.Compression.LZ4
LZ4/LZ4HC compression for .NET Standard 1.6/2.0 (formerly known as lz4net)
Install / Use
| Name | NuGet | Description |
|:-|:-:|:-|
| K4os.Compression.LZ4 | | Block compression only |
| K4os.Compression.LZ4.Streams | | Stream compression |
| K4os.Compression.LZ4.Legacy | | Legacy compatibility |
LZ4
LZ4 is a lossless compression algorithm, sacrificing compression ratio for compression/decompression speed. Its compression speed is ~400 MB/s per core, while decompression speed reaches ~2 GB/s, not far from RAM speed limits.
This library brings LZ4 to .NET Standard compatible platforms: .NET Core, .NET Framework, Mono, Xamarin, and UWP. Well... theoretically... kind of. Currently, it targets .NET Framework 4.6.2+, .NET Standard 2.0+ and .NET 5.0+.
As it targets .NET Standard 2.0+, all these platforms should be supported, although I did not test it on all of them.
LZ4 was written by Yann Collet, and the original C sources can be found here
Build
```powershell
./build.ps1
```
NOTE: technically, it could be built on Linux as well, but the setup process downloads and uses some Windows tools,
like 7z.exe and lz4.exe. It could be adapted, but hasn't been. Feel free to send a PR.
Changes
Change log can be found here.
Support
Maintaining this library is completely outside of my daily job. The company I work for does not even use it, so I do this entirely in my own free time.
So, if you think my work is worth something, you could support me by funding my daily caffeine dose:
(or just use PayPal)
What is 'Fast compression algorithm'?
While the compression algorithms you use day-to-day to archive your data work at around 10MB/s while giving you quite decent compression ratios, 'fast algorithms' are designed to work 'faster than your hard drive', sacrificing compression ratio.
One of the most famous fast compression algorithms is Google's own Snappy, which is advertised as 250MB/s compression, 500MB/s decompression on an i7 in 64-bit mode. Fast compression algorithms help reduce network traffic / hard drive load by compressing data on the fly with no noticeable latency.
I just tried to compress some sample data (Silesia Corpus) receiving:
- zlib (7zip) - 7.5MB/s compression, 110MB/s decompression, 44% compression ratio
- lzma (7zip) - 1.5MB/s compression, 50MB/s decompression, 37% compression ratio
- lz4 - 280MB/s compression, 520MB/s decompression, 57% compression ratio
Note: values above are for illustration only; they are affected by HDD read/write speed (in fact, LZ4 decompression is much faster). The 'real' tests take HDD speed out of the equation. For detailed performance tests see [Performance Testing] and [Comparison to other algorithms].
Other 'Fast compression algorithms'
There are multiple fast compression algorithms; to name a few: LZO, QuickLZ, LZF, Snappy, FastLZ. You can find a comparison of them on the LZ4 webpage or here
Usage
This LZ4 library can be used in two distinct ways: to compress streams and blocks.
Use as blocks
Compression levels
```csharp
enum LZ4Level
{
    L00_FAST,
    L03_HC, L04_HC, L05_HC, L06_HC, L07_HC, L08_HC, L09_HC,
    L10_OPT, L11_OPT, L12_MAX,
}
```
There are multiple compression levels. LZ4 comes in 3 (4?) flavors of compression algorithms. You can notice suffixes
of those levels: FAST, HC, OPT and MAX (while MAX is just OPT with "ultra" settings). Please note that
compression speed drops rapidly when not using FAST mode, while decompression speed stays the same (actually,
it is usually faster for high compression levels as there is less data to process).
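To illustrate the trade-off, here is a minimal sketch that compresses the same buffer at the FAST and HC levels (the sample data is mine, and the exact sizes you get will depend on your input):

```csharp
using K4os.Compression.LZ4;

// Compress the same (here: trivially compressible) buffer at two levels.
var source = new byte[10000];
var target = new byte[LZ4Codec.MaximumOutputSize(source.Length)];

var fastLength = LZ4Codec.Encode(
    source, 0, source.Length, target, 0, target.Length, LZ4Level.L00_FAST);
var hcLength = LZ4Codec.Encode(
    source, 0, source.Length, target, 0, target.Length, LZ4Level.L09_HC);

// HC usually produces a smaller (or equal) output, but compresses much slower.
```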
Utility
```csharp
static class LZ4Codec
{
    static int MaximumOutputSize(int length);
}
```
Returns the maximum size of a block after compression. Of course, most of the time compressed data will take less space than the source data, although in case of incompressible (for example: already compressed) data it may take more.
Example:
```csharp
var source = new byte[1000];
var target = new byte[LZ4Codec.MaximumOutputSize(source.Length)];
//...
```
Compression
A block can be compressed using the Encode(...) method family. These are relatively low-level functions, as it is your job
to allocate all memory.
```csharp
static class LZ4Codec
{
    static int Encode(
        byte* source, int sourceLength,
        byte* target, int targetLength,
        LZ4Level level = LZ4Level.L00_FAST);

    static int Encode(
        ReadOnlySpan<byte> source, Span<byte> target,
        LZ4Level level = LZ4Level.L00_FAST);

    static int Encode(
        byte[] source, int sourceOffset, int sourceLength,
        byte[] target, int targetOffset, int targetLength,
        LZ4Level level = LZ4Level.L00_FAST);
}
```
All of them compress the source buffer into the target buffer and return the number of bytes actually used after compression.
If this value is negative, an error has occurred and compression failed. In most cases it means that the target
buffer is too small.
Please note, it might be tempting to use a target buffer the same size as (or even one byte smaller than) the source buffer,
and use a copy as a fallback. This will work just fine, yet compression into a buffer smaller than MaximumOutputSize(source.Length)
is a little bit slower.
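The copy-as-fallback approach mentioned above could be sketched like this (the helper name CompressOrCopy is mine, not part of the library):

```csharp
using System;
using K4os.Compression.LZ4;

// Hypothetical helper: tries to compress into a buffer no larger than the
// source, and falls back to a plain copy when compression does not help.
static (byte[] data, bool compressed) CompressOrCopy(byte[] source)
{
    var target = new byte[source.Length];
    var encodedLength = LZ4Codec.Encode(
        source, 0, source.Length, target, 0, target.Length);
    if (encodedLength <= 0) // negative result: data did not fit, i.e. incompressible
        return ((byte[])source.Clone(), false);
    Array.Resize(ref target, encodedLength);
    return (target, true);
}
```

Note that with this approach you need to store the `compressed` flag alongside the data, so the reader knows whether to decode or just copy.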
Example:
```csharp
var source = new byte[1000];
var target = new byte[LZ4Codec.MaximumOutputSize(source.Length)];
var encodedLength = LZ4Codec.Encode(
    source, 0, source.Length,
    target, 0, target.Length);
```
Decompression
A previously compressed block can be decompressed with the Decode(...) functions.
```csharp
static class LZ4Codec
{
    static int Decode(
        byte* source, int sourceLength,
        byte* target, int targetLength);

    static int Decode(
        ReadOnlySpan<byte> source, Span<byte> target);

    static int Decode(
        byte[] source, int sourceOffset, int sourceLength,
        byte[] target, int targetOffset, int targetLength);
}
```
You have to know upfront how much memory you need to decompress, as there is almost no way to guess it. I did not investigate the theoretical maximum compression ratio, yet an all-zero buffer gets compressed 245 times, so when decompressing, the output buffer may need to be 245 times bigger than the input buffer. Yet the encoding itself does not store that information anywhere, therefore it is your job.
```csharp
var source = new byte[1000];
var target = new byte[knownOutputLength]; // or source.Length * 255 to be safe
var decoded = LZ4Codec.Decode(
    source, 0, source.Length,
    target, 0, target.Length);
```
NOTE: If I told you that decompression potentially needs 100 times more memory than the original data, you would think
this is insane. And it is not 100 times, it is 255 times more, so it actually is insane. Please don't do it.
This was for demonstration only. What you need is a way to store the original size somehow (I'm not opinionated, do
whatever you think is right) or... you can use LZ4Pickler (see below) or LZ4Stream.
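One simple way to store the original size is to prefix the compressed block with a 4-byte length header. This is a hand-rolled sketch, not part of the library's API:

```csharp
using System;
using K4os.Compression.LZ4;

// Prepend the original length so the decoder knows how much memory to allocate.
static byte[] CompressWithLength(byte[] source)
{
    var target = new byte[4 + LZ4Codec.MaximumOutputSize(source.Length)];
    BitConverter.GetBytes(source.Length).CopyTo(target, 0); // 4-byte header
    var encodedLength = LZ4Codec.Encode(
        source, 0, source.Length, target, 4, target.Length - 4);
    Array.Resize(ref target, 4 + encodedLength);
    return target;
}

static byte[] DecompressWithLength(byte[] source)
{
    var originalLength = BitConverter.ToInt32(source, 0); // read the header back
    var target = new byte[originalLength];
    LZ4Codec.Decode(source, 4, source.Length - 4, target, 0, target.Length);
    return target;
}
```

This is essentially the service LZ4Pickler provides for you (see below), with the added benefit of handling incompressible data.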
Pickler
Sometimes all you need is to quickly compress a small chunk of data, say, a serialized message to send over the
network. You can use LZ4Pickler in such cases. It encodes the original length within the message and handles
incompressible data (by copying).
```csharp
static class LZ4Pickler
{
    static byte[] Pickle(
        byte[] source,
        LZ4Level level = LZ4Level.L00_FAST);

    static byte[] Pickle(
        byte[] source, int sourceOffset, int sourceLength,
        LZ4Level level = LZ4Level.L00_FAST);

    static byte[] Pickle(
        ReadOnlySpan<byte> source,
        LZ4Level level = LZ4Level.L00_FAST);

    static byte[] Pickle(
        byte* source, int sourceLength,
        LZ4Level level = LZ4Level.L00_FAST);

    // matching Unpickle(...) overloads exist as well, e.g.:
    static byte[] Unpickle(byte[] source);
}
```
Example:
```csharp
var source = new byte[1000];
var encoded = LZ4Pickler.Pickle(source);
var decoded = LZ4Pickler.Unpickle(encoded);
```
Please note that this approach is slightly slower (copy after failed compression) and involves one extra memory allocation (as it resizes the buffer after compression).
Streams
The stream implementation is in a different package (K4os.Compression.LZ4.Streams), as it has a dependency on K4os.Hash.xxHash.
It is fully compatible with the LZ4 Frame format, although not all features are supported on compression
(they are "properly" ignored on decompression).
Stream compression settings
There are some things which can be configured when compressing data:
```csharp
class LZ4EncoderSettings
{
    long? ContentLength { get; set; } = null;
    bool ChainBlocks { get; set; } = true;
    int BlockSize { get; set; } = Mem.K64;
    bool ContentChecksum { get; set; } = false;
    bool BlockChecksum { get; set; } = false;
    uint? Dictionary => null;
    LZ4Level CompressionLevel { get; set; } = LZ4Level.L00_FAST;
    int ExtraMemory { get; set; } = 0;
}
```
Default options are good enough, so usually you don't need to change anything.
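As a usage sketch, assuming the LZ4Stream.Encode/Decode factory methods from K4os.Compression.LZ4.Streams (the file names are just examples):

```csharp
using System.IO;
using K4os.Compression.LZ4;
using K4os.Compression.LZ4.Streams;

// Compress a file into the LZ4 Frame format...
using (var source = File.OpenRead("content.txt"))
using (var target = LZ4Stream.Encode(
    File.Create("content.lz4"),
    new LZ4EncoderSettings { CompressionLevel = LZ4Level.L09_HC }))
{
    source.CopyTo(target);
}

// ...and decompress it back.
using (var source = LZ4Stream.Decode(File.OpenRead("content.lz4")))
using (var target = File.Create("content.copy.txt"))
{
    source.CopyTo(target);
}
```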