193 skills found · Page 7 of 7
Lemon-cmd / Diffusion Models And Associative Memory - Memorization to Generalization: Emergence of Diffusion Models from Associative Memory
Edric-Matwiejew / QSW MPI - QSW_MPI is a Python package developed for MPI-parallelised time-series simulation of continuous-time quantum stochastic walks. This model allows the study of Markovian open quantum systems in the Lindblad formalism, including a generalization of the continuous-time random walk and continuous-time quantum walk.
princeton-nlp / Heuristic Core - [ACL 2024] The Heuristic Core: Understanding Subnetwork Generalization in Pretrained Language Models - https://arxiv.org/abs/2403.03942
alceubissoto / Artifact Generalization Skin - Official repository of the paper "Artifact-based Domain Generalization of Skin Lesion Models", accepted at the ISIC Workshop @ ECCV 2022.
roamlab / Reactemg - ReactEMG is a low-latency, high-accuracy model that predicts hand gestures from forearm EMG signals at every timestep. Its masked-segmentation architecture jointly learns EMG features and user intent, enabling zero-shot generalization without subject-specific calibration and making it well-suited for robotic control.
blei-lab / Factorial Network Models - Discussion of Durante et al. for JSM 2017. Includes a factorial network model generalization.
sinzlab / Lurz 2020 Code - Code base for "Generalization in data-driven models of primary visual cortex", Lurz et al. 2020
kah-ve / TrafficSignGAN - Augmenting existing traffic-sign datasets with a Generative Adversarial Network that creates synthetic images, improving the accuracy and generalization ability of classification models.
yikuizh / Edlstm Flood Prediction - [Journal of Hydrology] Generalization of an encoder-decoder LSTM model for flood prediction in ungauged catchments
leopoldwhite / KGQuiz - Official repository of "KGQUIZ: Evaluating the Generalization of Encoded Knowledge in Large Language Models", TheWebConf 2024.
xybFight / VRP Generalization - PyTorch implementation of "Improving Generalization of Neural Vehicle Routing Problem Solvers Through the Lens of Model Architecture"
peterse / Benign Overfitting Quantum - Code to accompany the manuscript "Good generalization with overparameterized quantum machine learning models via benign overfitting" (Peters and Schuld, 2022)
kochlisGit / VIT2 - Implementation of the paper "ViT2: Pre-training Vision Transformers for Visual Time Series Forecasting". ViT2 is a framework designed to address the generalization and transfer-learning limitations of time-series forecasting models by encoding time series as images using Gramian Angular Fields (GAF) and a modified ViT architecture.
liuweitb / Mutual Knowledge Learning Network - Face forgery techniques such as Generative Adversarial Networks (GANs) are widely used for image synthesis in movie production, journalism, and elsewhere. The same generative technology, however, is also abused to impersonate credible people and spread illegal, misleading, and confusing information. Previous fake-face detection methods fail to distinguish between different generation modalities (various GANs), so they do not generalize to unseen forgery scenes and are nearly ineffective against unknown forgery approaches. To address this challenge, the paper first analyzes the weaknesses of GAN-based generators: validation experiments on faces produced by Deepfakes, Face2Face, FaceSwap, and other models show that detectors trained on one generator do not generalize to the others, and that recent GAN-generated faces remain insufficiently robust at the pixel level. Motivated by this finding, the authors design a novel convolutional neural network that uses frequency texture augmentation and knowledge distillation to enhance global texture perception, describe textures at different semantic levels in images, and improve robustness. Two core components are introduced: the Discrete Cosine Transform (DCT), which serves both to compress images and to help distinguish fake faces from real ones, and knowledge distillation (KDL), which extracts features from counterfeit and real image targets so that the model generalizes to multiple fake-face generation methods. Experiments on two datasets, Celeb-DF and FaceForensics++, demonstrate that DCT facilitates deepfake detection in some cases. Knowledge distillation plays a key role in the model, which achieves better and more consistent performance in image processing and cross-domain settings, especially when images are corrupted by Gaussian noise.
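The DCT component described above can be illustrated with a minimal sketch: extract the low-frequency block of a 2-D DCT as a compact frequency feature for a grayscale face crop. This is an assumption-laden illustration, not the repository's actual code; `dct2` and `dct_feature` are hypothetical names, and a pure-NumPy DCT-II is used in place of whatever transform the repository ships.

```python
import numpy as np

def dct2(x):
    """Orthonormal 2-D DCT-II, applied separably along both axes."""
    def dct1(a):  # DCT-II along the last axis
        n = a.shape[-1]
        k = np.arange(n)
        # basis[m, k] = cos(pi * (m + 0.5) * k / n)
        basis = np.cos(np.pi * (np.arange(n)[:, None] + 0.5) * k / n)
        out = a @ basis * np.sqrt(2.0 / n)
        out[..., 0] *= np.sqrt(0.5)  # ortho normalization for k = 0
        return out
    return dct1(dct1(x).swapaxes(-1, -2)).swapaxes(-1, -2)

def dct_feature(img, keep=4):
    """Top-left keep x keep block of DCT coefficients, flattened.
    These low-frequency coefficients are the kind of frequency-domain
    cue the paper pairs with knowledge distillation."""
    return dct2(np.asarray(img, dtype=float))[:keep, :keep].ravel()

# Example: a 16-dimensional frequency feature for a random 64x64 "image"
rng = np.random.default_rng(0)
feat = dct_feature(rng.random((64, 64)))
print(feat.shape)  # (16,)
```

In a detection pipeline, such a feature (or the full coefficient map) would feed the CNN alongside the spatial input; the distillation loss is a separate training-time component not shown here.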
kenjyoung / Model Generalization Code Supplement - Code for "The Benefits of Model-Based Generalization in Reinforcement Learning"
WilliamsToTo / IMO - A model for improving out-of-domain generalization by learning invariant features.
boa2004plaust / SNWPM - Code for "Generalization Ability of a 5G Indoor Positioning Model under Restricted Conditions"
Robot-Zhang / FedTED - Source code of "Improving Generalization and Personalization in Model-Heterogeneous Federated Learning"
helenqu / Multimodal Pretraining Pmi - Impact of Pretraining Word Co-occurrence on Compositional Generalization in Multimodal Models
ZouXinn / OOD Adv - Code for the NeurIPS 2023 paper "On the Adversarial Robustness of Out-of-distribution Generalization Models"