Audio-WestlakeU / Audiossl: A library for easier audio self-supervised training and downstream task evaluation.
instadeepai / Tunbert: TunBERT is the first release of a pre-trained BERT model for the Tunisian dialect, trained on a Tunisian Common-Crawl-based dataset. TunBERT was applied to three NLP downstream tasks: Sentiment Analysis (SA), Tunisian Dialect Identification (TDI), and Reading Comprehension Question-Answering (RCQA).
ychuest / Awesome LLMs Meet Genomes: A comprehensive collection of basic theories, applications, papers, and best practices for Large Language Models (LLMs) in genomics.
uzaymacar / Comparatively Finetuning Bert: Comparatively fine-tuning pretrained BERT models on downstream text classification tasks with different architectural configurations in PyTorch.
genbio-ai / ModelGenerator: AIDO.ModelGenerator is a software stack powering the development of an AI-driven Digital Organism (AIDO) by enabling researchers to adapt pretrained models and generate fine-tuned models for downstream tasks.
mlfoundations / Scaling: Language models scale reliably with over-training and on downstream tasks.
sovit-123 / Dinov3 Stack: A repository for applying DINOv3 models to different downstream tasks: image classification, semantic segmentation, and object detection.
yan-hao-tian / ConTNet: This repo contains the code for "ConTNet: Why not use convolution and transformer at the same time?"
slczgwh / REDN: Downstream Model Design of Pre-trained Language Model for Relation Extraction Task.
IndoNLP / Indonlg: The first-ever vast natural language generation benchmark for Indonesian, Sundanese, and Javanese. We provide multiple downstream tasks, pre-trained IndoGPT and IndoBART models, and starter code! (EMNLP 2021)
LucaOne / LucaOneTasks: Downstream tasks built on LucaOne's embeddings.
yikunpku / RNA MSM: Nucleic Acids Research 2024: RNA-MSM is an unsupervised RNA language model based on multiple sequences that outputs both embeddings and attention maps to match different types of downstream tasks.
Kenneth-Wong / MMSceneGraph: ICCV 2021: A brand-new hub for Scene Graph Generation methods based on MMDetection (2021). The pipeline from detection and scene graph generation to downstream tasks (e.g., image captioning) is supported. PyTorch implementations of HetH (ECCV 2020) and TopicSG (ICCV 2021) are included.
seopbo / Nlp Tutorials: Performing downstream tasks using Hugging Face.
flexudy-pipe / Sentence Doctor: Many natural language processing tasks rely on sentence boundary detection (SBD). Although amazing libraries like spaCy provide state-of-the-art SBD, they often depend on text extractors (e.g., PDF text extractors or OCR). The quality of these extractors greatly influences the quality of SBD libraries and, as a consequence, the performance of downstream models as well. To help address this problem, we fine-tuned a T5 model from the Hugging Face hub that attempts to reconstruct "broken sentences".
DBC-Lab / Brain MRI Enhancement: BME-X is a foundation model for enhancing magnetic resonance images and for downstream segmentation, registration, and diagnostic tasks.
294coder / Efficient MIF: Train your fusion model and test downstream tasks in one repo.
amazon-science / Probconserv: Datasets and code for the results presented in the ProbConserv paper.
behretj / LostFound: [RA-L] Lost & Found dynamically tracks object poses from egocentric videos while updating a scene graph, enabling richer semantic 3D understanding for downstream robotic tasks.
MarcLafon / Gallop: Adaptation of vision-language models (CLIP) to downstream tasks using local and global prompts.