ProbabilisticDataStructures
A probabilistic data structures library for C#
Probabilistic Data Structures for C#
This is a C# port of Tyler Treat's work in the BoomFilters golang project. His writing on probabilistic data structures and other computing-related activities can be found here: http://bravenewgeek.com/.
If you're on this page, you probably already know a bit about probabilistic data structures and why you might want to use them. To keep this README smaller, I'll remove some of the exposition Tyler does and keep this closer to a "How to Use" document. I would refer you to his project's README if you are trying to get all the information you possibly can.
The descriptions for each filter were lifted directly from the BoomFilters' README.
Included Structures
- Count-Min Sketch
- Counting Bloom filter
- Cuckoo filter
- Deletable Bloom filter
- HyperLogLog
- Inverse Bloom filter
- MinHash
- Partitioned Bloom filter
- Scalable Bloom filter
- Stable Bloom filter
- Top-K
Releases
For now: https://github.com/mattlorimor/ProbabilisticDataStructures/releases
Future: NuGet
Contributions
Pull-requests are welcome, but submitting an issue is probably the best place to start if you have complex critiques or suggestions.
Stable Bloom Filter
This is an implementation of Stable Bloom Filters as described by Deng and Rafiei in Approximately Detecting Duplicates for Streaming Data using Stable Bloom Filters.
A Stable Bloom Filter (SBF) continuously evicts stale information so that it has room for more recent elements. Like traditional Bloom filters, an SBF has a non-zero probability of false positives, which is controlled by several parameters. Unlike the classic Bloom filter, an SBF has a tight upper bound on the rate of false positives while introducing a non-zero rate of false negatives. The false-positive rate of a classic Bloom filter eventually reaches 1, after which all queries result in a false positive. The stable-point property of an SBF means the false-positive rate asymptotically approaches a configurable fixed constant. A classic Bloom filter is actually a special case of SBF where the eviction rate is zero and the cell size is one, so this provides support for them as well (in addition to bitset-based Bloom filters).
Stable Bloom Filters are useful for cases where the size of the data set isn't known a priori and memory is bounded. For example, an SBF can be used to deduplicate events from an unbounded event stream with a specified upper bound on false positives and minimal false negatives.
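The eviction mechanic described above can be sketched in a few lines of Python. This is a toy illustration of the algorithm only, not the library's C# API; all names and parameters here are hypothetical:

```python
import random

class StableBloomSketch:
    """Toy Stable Bloom Filter: m cells, k hash positions per item,
    and p random cell decrements on every add (the eviction step)."""
    def __init__(self, m=1000, k=3, p=10, max_val=1):
        self.m, self.k, self.p, self.max = m, k, p, max_val
        self.cells = [0] * m

    def _indexes(self, item):
        # k pseudo-independent cell positions for the item.
        return [hash((i, item)) % self.m for i in range(self.k)]

    def add(self, item):
        # Evict stale information: decrement p randomly chosen cells.
        for _ in range(self.p):
            i = random.randrange(self.m)
            if self.cells[i] > 0:
                self.cells[i] -= 1
        # Then set the item's cells to the maximum value.
        for i in self._indexes(item):
            self.cells[i] = self.max

    def test(self, item):
        # A zeroed cell means the item was evicted (or never added):
        # this is where the non-zero false-negative rate comes from.
        return all(self.cells[i] > 0 for i in self._indexes(item))
```

Because every `add` decrements random cells, long-unseen items gradually fade out, which is what keeps the false-positive rate at a stable point instead of climbing to 1.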
Usage
using System;
using System.Text;
using ProbabilisticDataStructures;

namespace FilterExample
{
    class Example
    {
        static void Main()
        {
            byte[] A_BYTES = Encoding.ASCII.GetBytes("a");
            byte[] B_BYTES = Encoding.ASCII.GetBytes("b");

            var sbf = StableBloomFilter.NewDefaultStableBloomFilter(10000, 0.01);
            Console.WriteLine(string.Format("stable point: {0}", sbf.StablePoint()));

            sbf.Add(A_BYTES);
            if (sbf.Test(A_BYTES))
            {
                Console.WriteLine("contains a");
            }

            if (!sbf.TestAndAdd(B_BYTES))
            {
                Console.WriteLine("doesn't contain b");
            }

            if (sbf.Test(B_BYTES))
            {
                Console.WriteLine("now it contains b!");
            }
        }
    }
}
Scalable Bloom Filter
This is an implementation of a Scalable Bloom Filter as described by Almeida, Baquero, Preguica, and Hutchison in Scalable Bloom Filters.
A Scalable Bloom Filter (SBF) dynamically adapts to the size of the data set while enforcing a tight upper bound on the rate of false positives and a false-negative probability of zero. This works by adding Bloom filters with geometrically decreasing false-positive rates as filters become full. A tightening ratio, r, controls the filter growth. The compounded probability over the whole series converges to a target value, even accounting for an infinite series.
Scalable Bloom Filters are useful for cases where the size of the data set isn't known a priori and memory constraints aren't of particular concern. For situations where memory is bounded, consider using Inverse or Stable Bloom Filters.
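The convergence claim above is simple arithmetic: if the first filter is allotted false-positive rate p0 and each successive filter tightens by ratio r, the compounded bound is the geometric series p0·(1 + r + r² + …) = p0/(1 − r). A quick sketch (illustrative arithmetic only; the variable names are not the library's API):

```python
target_p = 0.01           # overall false-positive bound we want
r = 0.8                   # tightening ratio between successive filters
p0 = target_p * (1 - r)   # rate allotted to the first filter

# Partial sums over successive filters approach, but never exceed, target_p,
# even if filters keep being added indefinitely.
bound = sum(p0 * r ** i for i in range(100))
print(round(bound, 6))    # approximately 0.01
```

This is why an SBF can grow without bound while still honoring the target false-positive rate.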
Usage
using System;
using System.Text;
using ProbabilisticDataStructures;

namespace FilterExample
{
    class Example
    {
        static void Main()
        {
            byte[] A_BYTES = Encoding.ASCII.GetBytes("a");
            byte[] B_BYTES = Encoding.ASCII.GetBytes("b");

            var sbf = ScalableBloomFilter.NewDefaultScalableBloomFilter(0.01);

            sbf.Add(A_BYTES);
            if (sbf.Test(A_BYTES))
            {
                Console.WriteLine("contains a");
            }

            if (!sbf.TestAndAdd(B_BYTES))
            {
                Console.WriteLine("doesn't contain b");
            }

            if (sbf.Test(B_BYTES))
            {
                Console.WriteLine("now it contains b!");
            }
        }
    }
}
Inverse Bloom Filter
An Inverse Bloom Filter, or "the opposite of a Bloom filter", is a concurrent, probabilistic data structure used to test whether an item has been observed or not. This implementation, originally described and written by Jeff Hodges, replaces the use of MD5 hashing with a non-cryptographic FNV-1 function.
The Inverse Bloom Filter may report a false negative but can never report a false positive. That is, it may report that an item has not been seen when it actually has, but it will never report an item as seen which it hasn't come across. This behaves in a similar manner to a fixed-size hashmap which does not handle conflicts.
This structure is particularly well-suited to streams in which duplicates are relatively close together. It uses a CAS-style approach, which makes it thread-safe.
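The "fixed-size hashmap that does not handle conflicts" behavior can be sketched in Python. This toy version shows the check-then-overwrite semantics only; the library's C# implementation is concurrent (CAS-based), which this sketch does not attempt to model:

```python
class InverseBloomSketch:
    """Toy inverse Bloom filter: a fixed-size slot array that forgets on
    collision. May report false negatives, never false positives."""
    def __init__(self, capacity=1024):
        self.slots = [None] * capacity

    def test_and_add(self, item):
        i = hash(item) % len(self.slots)
        # Only a true positive if this exact item is stored: a colliding
        # item in the slot reads as "not seen" (a false negative).
        seen = self.slots[i] == item
        self.slots[i] = item  # overwrite whatever was there
        return seen
```

A false positive is impossible because the slot stores the item itself, not a lossy signature: `test_and_add` only answers "seen" when the stored value matches exactly.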
Usage
using System;
using System.Text;
using ProbabilisticDataStructures;

namespace FilterExample
{
    class Example
    {
        static void Main()
        {
            byte[] A_BYTES = Encoding.ASCII.GetBytes("a");
            byte[] B_BYTES = Encoding.ASCII.GetBytes("b");

            var ibf = new InverseBloomFilter(10000);

            ibf.Add(A_BYTES);
            if (ibf.Test(A_BYTES))
            {
                Console.WriteLine("contains a");
            }

            if (!ibf.TestAndAdd(B_BYTES))
            {
                Console.WriteLine("doesn't contain b");
            }

            if (ibf.Test(B_BYTES))
            {
                Console.WriteLine("now it contains b!");
            }
        }
    }
}
Counting Bloom Filter
This is an implementation of a Counting Bloom Filter as described by Fan, Cao, Almeida, and Broder in Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol.
A Counting Bloom Filter (CBF) provides a way to remove elements by using an array of n-bit buckets. When an element is added, the respective buckets are incremented. To remove an element, the respective buckets are decremented. A query checks that each of the respective buckets are non-zero. Because CBFs allow elements to be removed, they introduce a non-zero probability of false negatives in addition to the possibility of false positives.
Counting Bloom Filters are useful for cases where elements are both added and removed from the data set. Since they use n-bit buckets, CBFs use roughly n-times more memory than traditional Bloom filters.
See Deletable Bloom Filter for an alternative which avoids false negatives.
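The counter-bucket mechanics described above can be sketched in Python. This is a toy illustration of the increment/decrement scheme, not the library's C# API:

```python
class CountingBloomSketch:
    """Toy Counting Bloom Filter: counter buckets instead of bits,
    so elements can be removed by decrementing."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.buckets = [0] * m

    def _indexes(self, item):
        # k pseudo-independent bucket positions for the item.
        return [hash((i, item)) % self.m for i in range(self.k)]

    def add(self, item):
        for i in self._indexes(item):
            self.buckets[i] += 1

    def test(self, item):
        # Present only if every bucket is non-zero.
        return all(self.buckets[i] > 0 for i in self._indexes(item))

    def remove(self, item):
        # Decrementing buckets shared with other items is what can
        # introduce false negatives.
        if self.test(item):
            for i in self._indexes(item):
                self.buckets[i] -= 1
```

Note that the buckets are counters rather than single bits, which is exactly why a CBF costs roughly n times the memory of a plain Bloom filter with n-bit buckets.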
Usage
using System;
using System.Text;
using ProbabilisticDataStructures;

namespace FilterExample
{
    class Example
    {
        static void Main()
        {
            byte[] A_BYTES = Encoding.ASCII.GetBytes("a");
            byte[] B_BYTES = Encoding.ASCII.GetBytes("b");

            var cbf = CountingBloomFilter.NewDefaultCountingBloomFilter(1000, 0.01);

            cbf.Add(A_BYTES);
            if (cbf.Test(A_BYTES))
            {
                Console.WriteLine("contains a");
            }

            if (!cbf.TestAndAdd(B_BYTES))
            {
                Console.WriteLine("doesn't contain b");
            }

            if (cbf.TestAndRemove(B_BYTES))
            {
                Console.WriteLine("removed b");
            }
        }
    }
}
Cuckoo Filter
This is an implementation of a Cuckoo Filter as described by Fan, Andersen, Kaminsky, and Mitzenmacher in Cuckoo Filter: Practically Better Than Bloom. The Cuckoo Filter is similar to the Counting Bloom Filter in that it supports adding and removing elements, but it does so in a way that doesn't significantly degrade space and performance.
It works by using a cuckoo hashing scheme: rather than storing the elements themselves, it stores short fingerprints of them, which is also what allows elements to be removed.
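A toy Python sketch of fingerprint-based cuckoo hashing: each item has two candidate buckets, and when both are full, a resident fingerprint is evicted and relocated to its own alternate bucket. This illustrates the general technique only; the names, parameters, and hashing details here are hypothetical and not the library's C# API:

```python
import random

class CuckooSketch:
    """Toy cuckoo filter storing byte-sized fingerprints in two
    candidate buckets per item. Bucket count must be a power of two
    so the XOR alternate-bucket trick is an involution."""
    def __init__(self, num_buckets=128, bucket_size=4, max_kicks=500):
        self.buckets = [[] for _ in range(num_buckets)]
        self.bucket_size, self.max_kicks = bucket_size, max_kicks

    def _fingerprint(self, item):
        return hash(("fp", item)) % 256

    def _candidates(self, item):
        fp = self._fingerprint(item)
        i1 = hash(("idx", item)) % len(self.buckets)
        # Alternate bucket derived from the fingerprint alone, so a
        # relocated fingerprint can find its other bucket without the item.
        i2 = (i1 ^ hash(("alt", fp))) % len(self.buckets)
        return fp, i1, i2

    def add(self, item):
        fp, i1, i2 = self._candidates(item)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        # Both buckets full: evict a random resident and relocate it.
        i = random.choice((i1, i2))
        for _ in range(self.max_kicks):
            j = random.randrange(self.bucket_size)
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = (i ^ hash(("alt", fp))) % len(self.buckets)
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        return False  # too many kicks: the filter is considered full

    def test(self, item):
        fp, i1, i2 = self._candidates(item)
        return fp in self.buckets[i1] or fp in self.buckets[i2]

    def remove(self, item):
        fp, i1, i2 = self._candidates(item)
        for i in (i1, i2):
            if fp in self.buckets[i]:
                self.buckets[i].remove(fp)
                return True
        return False
```

Storing fingerprints instead of set bits is what makes removal safe here: deleting a fingerprint from one of its two buckets cannot clear state belonging to an unrelated item, the way decrementing a shared counter can in a Counting Bloom Filter.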
