ShallowFF
Zeta implementation of "Rethinking Attention: Exploring Shallow Feed-Forward Neural Networks as an Alternative to Attention Layers in Transformers"
ALR Transformer
An ALR Transformer that replaces the original Transformer's joint encoder + decoder block with a feed-forward (ALR) block paired with a decoder block.
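The replacement idea, in sketch form: the attention sublayer inside a Transformer block is swapped for a shallow feed-forward network that mixes information across a fixed-length sequence, while the surrounding residual connection is kept. The class below is a hypothetical illustration of that idea; its name, signature, and sizes are assumptions, not the actual alr_transformer API.

import torch
from torch import nn

class ShallowAttentionReplacement(nn.Module):
    # Hypothetical sketch: a shallow feed-forward block standing in for
    # the self-attention sublayer of one Transformer layer.
    def __init__(self, dim, seq_len, hidden=1024):
        super().__init__()
        self.seq_len = seq_len
        # One hidden layer over the flattened sequence, so tokens can still
        # exchange information without attention.
        self.net = nn.Sequential(
            nn.Linear(seq_len * dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, seq_len * dim),
        )

    def forward(self, x):
        # x: (batch, seq_len, dim), with seq_len fixed at construction time
        b, n, d = x.shape
        out = self.net(x.reshape(b, n * d)).reshape(b, n, d)
        return x + out  # keep the residual connection around the sublayer

# Example (small sizes for illustration):
# layer = ShallowAttentionReplacement(dim=64, seq_len=128)
# y = layer(torch.randn(1, 128, 64))   # -> (1, 128, 64)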
Install
pip install alr-transformer
Usage
import torch
from alr_transformer import ALRTransformer

# Random token ids: batch of 1, sequence length 2048, vocabulary of 100000
x = torch.randint(0, 100000, (1, 2048))

model = ALRTransformer(
    dim = 512,            # model (embedding) dimension
    depth = 6,            # number of layers
    num_tokens = 100000,  # vocabulary size
    dim_head = 64,        # dimension per head
    heads = 8,            # number of heads
    ff_mult = 4           # feed-forward expansion factor
)

out = model(x)
print(out)
print(out.shape)
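With the settings above, and assuming the model returns per-position logits over the vocabulary, out should have shape (1, 2048, 100000): one row of 100000 logits for each of the 2048 input positions.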
Train
- First, git clone the repo, then run the following:
python3 train.py
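If you want to adapt training to your own data, a minimal next-token training loop might look like the sketch below. The random-token batches, batch size, learning rate, and loss wiring here are illustrative assumptions, not necessarily what train.py does.

import torch
from torch import nn
from alr_transformer import ALRTransformer

model = ALRTransformer(
    dim = 512, depth = 6, num_tokens = 100000,
    dim_head = 64, heads = 8, ff_mult = 4
)
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
criterion = nn.CrossEntropyLoss()

for step in range(100):
    # Toy batch of random token ids; replace with a real dataloader.
    seq = torch.randint(0, 100000, (2, 2049))
    inp, target = seq[:, :-1], seq[:, 1:]   # next-token prediction
    logits = model(inp)                     # assumed (batch, 2048, 100000)
    loss = criterion(logits.reshape(-1, logits.size(-1)), target.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()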
Citation
@misc{bozic2023rethinking,
      title={Rethinking Attention: Exploring Shallow Feed-Forward Neural Networks as an Alternative to Attention Layers in Transformers},
      author={Vukasin Bozic and Danilo Dordevic and Daniele Coppola and Joseph Thommes},
      year={2023},
      eprint={2311.10642},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}