
Attention

Several types of attention modules written in PyTorch, for learning purposes.

Install / Use

/learn @knotgrass/Attention

README

This repository implements several types of attention modules in PyTorch, including:

  • Attention: the basic attention module.
  • Multi-head Attention: performs attention in parallel on multiple "heads" of the input sequence, where each head has its own set of Q, K, V.
  • Multi-Query Attention: all query heads attend to the input sequence while sharing a single key and value head.
  • Grouped-Query Attention: query heads are grouped together (each group includes multiple queries and a single key/value pair) and attended to jointly.
<p align="center"> <img src="images/grouped-query-attention.png" /> </p>
  • Linformer: reduces the overall self-attention complexity from O(n<sup>2</sup>) to O(n) in both time and space.
<p align="center"> <img src="images/linformer.png" style="width: 45%; height: 45%"/> </p>
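The basic module in the list above computes scaled dot-product attention, softmax(QKᵀ/√d_k)V. A minimal NumPy sketch of that math (the repository's modules are written in PyTorch; the shapes below are illustrative, not taken from its code):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    # Similarity of every query with every key, scaled by sqrt(d_k).
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)   # (n_q, n_k)
    weights = softmax(scores, axis=-1)               # each row sums to 1
    return weights @ V, weights                      # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))   # 4 queries, dimension 8
K = rng.standard_normal((6, 8))   # 6 keys, same dimension
V = rng.standard_normal((6, 8))   # 6 values
out, weights = scaled_dot_product_attention(Q, K, V)
```

Multi-head attention runs this same computation independently on several projected copies of Q, K, and V, then concatenates the per-head outputs.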

The multi-query attention and grouped-query attention modules are alternatives to multi-head attention with much lower memory-bandwidth requirements, and they have been used in many well-known models.
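The bandwidth savings come from shrinking the K/V state that must be read on every decoding step: multi-head attention keeps one K/V pair per query head, grouped-query keeps one per group, and multi-query keeps a single pair. A back-of-the-envelope sketch (the head counts and head dimension are illustrative assumptions, not values from this repository):

```python
def kv_elements_per_token(n_kv_heads, head_dim):
    # Elements of K plus elements of V cached per token, per layer.
    return 2 * n_kv_heads * head_dim

n_query_heads, head_dim = 32, 128

mha = kv_elements_per_token(n_query_heads, head_dim)  # one K/V pair per head
mqa = kv_elements_per_token(1, head_dim)              # a single shared K/V pair
gqa = kv_elements_per_token(8, head_dim)              # one K/V pair per group of 4 heads
```

With these assumed sizes, grouped-query attention cuts the per-token K/V cache 4x relative to multi-head, and multi-query cuts it 32x, while the query-side computation is unchanged.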

I implemented these modules in a simple way, with the purpose of understanding how attention works, so they are unoptimized. If you are looking for attention with better performance, I suggest using an optimized implementation instead.

For more information, please see the original papers.

GitHub Stars: 53
Forks: 11
Category: Education
Updated: 2 months ago
Languages: Python

Security Score: 85/100
Audited on Jan 27, 2026. No findings.