1,454 skills found · Page 1 of 49
BlinkDL / RWKV LM: RWKV (pronounced RwaKuv) is an RNN with great LLM performance that can also be trained directly like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". It combines the best of RNNs and transformers: great performance, linear time, constant space (no KV cache), fast training, infinite ctx_len, and free sentence embedding.
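The "constant space (no KV cache)" claim above follows from the RNN-style formulation: a fixed-size state is updated once per token instead of a cache that grows with sequence length. The sketch below is a generic linear-attention-style recurrence that illustrates the idea; it is not the actual RWKV-7 update rule, and the decay value is invented for illustration.

```python
# Generic sketch of constant-space, linear-time token processing:
# a d x d state accumulates decayed outer products k (outer) v,
# and a query reads the state out. NOT the actual RWKV formulas.

def step(state, k, v, decay=0.9):
    # state[i][j] <- decay * state[i][j] + k[i] * v[j]
    d = len(k)
    return [[decay * state[i][j] + k[i] * v[j] for j in range(d)] for i in range(d)]

def readout(state, q):
    # out[j] = sum_i q[i] * state[i][j]
    d = len(state)
    return [sum(q[i] * state[i][j] for i in range(d)) for j in range(d)]

d = 2
state = [[0.0] * d for _ in range(d)]
tokens = [([1.0, 0.0], [0.5, 0.5]), ([0.0, 1.0], [1.0, -1.0])]  # (k, v) pairs
for k, v in tokens:
    state = step(state, k, v)
out = readout(state, [1.0, 1.0])
```

However long the token stream, `state` stays d x d, which is what makes inference memory constant rather than proportional to context length.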
xmu-xiaoma666 / External Attention Pytorch: 🍀 PyTorch implementations of various attention mechanisms, MLPs, re-parameterization, and convolution modules, helpful for understanding the corresponding papers. ⭐⭐⭐
brightmart / Text Classification: all kinds of text classification models, and more, with deep learning.
cmhungsteve / Awesome Transformer Attention: a comprehensive paper list on Vision Transformers/attention, including papers, code, and related websites.
benedekrozemberczki / Awesome Graph Classification: a collection of important graph embedding, classification, and representation learning papers with implementations.
Kyubyong / Transformer: a TensorFlow implementation of the Transformer from "Attention Is All You Need".
PetarV- / GAT: Graph Attention Networks (https://arxiv.org/abs/1710.10903).
ruvnet / RuVector: a high-performance, real-time, self-learning vector graph neural network and database built in Rust.
shaoanlu / Faceswap GAN: a denoising autoencoder with adversarial losses and attention mechanisms for face swapping.
zzw922cn / Awesome Speech Recognition Speech Synthesis Papers: Automatic Speech Recognition (ASR), Speaker Verification, Speech Synthesis, Text-to-Speech (TTS), Language Modelling, Singing Voice Synthesis (SVS), Voice Conversion (VC).
diegoantognini / PyGAT: PyTorch implementation of the Graph Attention Network model by Veličković et al. (2017, https://arxiv.org/abs/1710.10903).
sgrvinod / A PyTorch Tutorial To Image Captioning: Show, Attend, and Tell | a PyTorch tutorial to image captioning.
philipperemy / Keras Attention: Keras attention layer (Luong and Bahdanau scores).
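The Luong and Bahdanau scoring functions named in the entry above can be sketched in a few lines. This is a generic pure-Python illustration, not the repo's actual Keras layer: the weight vectors, the diagonal simplification of the Bahdanau projections, and all numbers are invented for the example.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def luong_dot_score(query, key):
    # Luong "dot" score: dot product of decoder state and encoder state
    return sum(q * k for q, k in zip(query, key))

def bahdanau_score(query, key, w_q, w_k, v):
    # Bahdanau additive score: v . tanh(Wq q + Wk k),
    # with diagonal weight matrices here for brevity
    hidden = [math.tanh(wq * q + wk * k)
              for q, k, wq, wk in zip(query, key, w_q, w_k)]
    return sum(vi * h for vi, h in zip(v, hidden))

# attention weights over three encoder states
query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
weights = softmax([luong_dot_score(query, k) for k in keys])

b = bahdanau_score(query, keys[0], [1.0, 1.0], [1.0, 1.0], [0.5, 0.5])
```

Either score feeds the same softmax; the difference is only in how the raw alignment score is computed (multiplicative vs. additive with learned parameters).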
linto-ai / Whisper Timestamped: multilingual automatic speech recognition with word-level timestamps and confidence.
gordicaleksa / Pytorch GAT: my implementation of the original GAT paper (Veličković et al.). Additionally includes a playground.py file for visualizing the Cora dataset, GAT embeddings, the attention mechanism, and entropy histograms. Both the Cora (transductive) and PPI (inductive) examples are supported!
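The attention mechanism at the core of the GAT entries above is a single scoring rule: e_ij = LeakyReLU(a^T [Wh_i || Wh_j]), normalized by a softmax over node i's neighborhood. A minimal single-head sketch (features are assumed to be already multiplied by W; the attention vector and feature values here are invented, not taken from either repo):

```python
import math

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def gat_score(h_i, h_j, a):
    # e_ij = LeakyReLU(a . [h_i || h_j]); W is assumed already applied
    concat = h_i + h_j
    return leaky_relu(sum(ai * ci for ai, ci in zip(a, concat)))

# node 0 attends over its neighborhood {0, 1, 2}
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [0.5, 0.5]}
a = [0.3, -0.1, 0.2, 0.4]  # attention vector for the concatenated features
scores = [gat_score(feats[0], feats[j], a) for j in (0, 1, 2)]
alphas = softmax(scores)

# aggregated feature for node 0: sum_j alpha_j * h_j
agg = [sum(al * feats[j][d] for al, j in zip(alphas, (0, 1, 2)))
       for d in range(2)]
```

Because the softmax runs only over a node's neighbors, the same rule works transductively (Cora) and inductively (PPI): no global graph structure enters the score.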
allenai / Bi Att Flow: the Bi-directional Attention Flow (BiDAF) network is a multi-stage hierarchical process that represents context at different levels of granularity and uses a bi-directional attention flow mechanism to achieve a query-aware context representation without early summarization.
charlesXu86 / Chatbot CN: a chatbot for the finance and legal domains (with casual-chat capability). Its main modules include information extraction, NLU, NLG, and a knowledge graph; the front end is integrated via Django, and RESTful interfaces for the NLP and KG components are already provided.
pprp / Awesome Attention Mechanism In Cv: awesome list of attention modules and plug-and-play modules in computer vision.
awslabs / Sockeye: sequence-to-sequence framework with a focus on neural machine translation, based on PyTorch.
The-AI-Summer / Self Attention Cv: implementations of various self-attention mechanisms focused on computer vision. An ongoing repository.