2 skills found
Tencent-Hunyuan / flex-block-attn: an efficient block sparse attention computation library
kyegomez / SparseAttention: PyTorch implementation of the sparse attention from the paper "Generating Long Sequences with Sparse Transformers"
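Both entries implement variants of block sparse attention: attention is computed only for selected (query-block, key-block) pairs rather than all token pairs. As a rough sketch of that idea, assuming a block-level boolean mask (all names here are illustrative and not taken from either repository):

```python
import numpy as np

def block_sparse_attention(q, k, v, block_size, block_mask):
    """Single-head attention restricted to allowed block pairs.

    block_mask[i, j] == True means query block i may attend to key block j.
    A hypothetical sketch; real libraries fuse this into sparse GPU kernels.
    """
    n, d = q.shape
    nb = n // block_size
    out = np.zeros_like(v)
    for i in range(nb):
        qi = q[i * block_size:(i + 1) * block_size]        # (B, d)
        # Gather only the key/value blocks this query block attends to.
        cols = [j for j in range(nb) if block_mask[i, j]]
        kj = np.concatenate([k[j * block_size:(j + 1) * block_size] for j in cols])
        vj = np.concatenate([v[j * block_size:(j + 1) * block_size] for j in cols])
        scores = qi @ kj.T / np.sqrt(d)                    # (B, len(cols) * B)
        scores -= scores.max(axis=-1, keepdims=True)       # numerical stability
        w = np.exp(scores)
        w /= w.sum(axis=-1, keepdims=True)                 # softmax over kept blocks
        out[i * block_size:(i + 1) * block_size] = w @ vj
    return out
```

With an all-True `block_mask` this reduces to dense attention; the savings come from masks that keep only a few blocks per row (e.g. a local band plus a few global blocks), so compute and memory scale with the number of kept blocks instead of the full sequence length squared.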