CUHD - A Massively Parallel Huffman Decoder
A Huffman decoder for processing raw (i.e. unpartitioned) Huffman encoded data on the GPU. It also includes a basic, sequential encoder.
For further information, please refer to our conference paper.
Requirements
- CUDA-enabled GPU with compute capability 3.0 or higher
- GNU/Linux
- GNU compiler version 5.4.0 or higher
- CUDA SDK 8 or higher
- latest proprietary graphics drivers
Compilation process
Configuration
Please edit the Makefile:
- Set `CUDA_INCLUDE` to the include directory of your CUDA installation, e.g.: `CUDA_INCLUDE = /usr/local/cuda-9.1/include`
- Set `CUDA_LIB` to the library directory of your CUDA installation, e.g.: `CUDA_LIB = /usr/local/cuda-9.1/lib64`
- Set `ARCH` to the compute capability of your GPU, i.e. `ARCH = 35` for compute capability 3.5. If you'd like to compile the decoder for multiple generations of GPUs, please edit `NVCC_FLAGS` accordingly.
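A complete configuration might look like the following Makefile fragment (the paths and compute capability are examples only; substitute the values for your own installation and GPU):

```makefile
# Example Makefile configuration (adjust paths and ARCH to your system):
CUDA_INCLUDE = /usr/local/cuda-9.1/include
CUDA_LIB     = /usr/local/cuda-9.1/lib64
ARCH         = 35
```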
Test program
The test program will generate a chunk of random, binomially distributed data, encode it with a specified maximum codeword length, and decode it on the GPU.
Compiling the test program
To compile the test program, configure the Makefile as described above. Run:
make
Running the test program
./bin/demo <compute device index> <size of input in megabytes>
Compiling a static library
To compile a static library, run:
make lib
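An application can then be built against the resulting library with a command along these lines (the library name, output path, and include directories are assumptions; check the Makefile for the actual artifact name):

```shell
# Hypothetical link line -- adjust paths and names to your build:
g++ my_app.cpp -I/usr/local/cuda-9.1/include \
    -L/usr/local/cuda-9.1/lib64 -L./bin \
    -lcuhd -lcudart -o my_app
```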