133 skills found · Page 2 of 5
ZhihanLee / PPO Based Eco Driving For Prius: The environment code for the paper 'Learning-based Eco-driving Strategy Design for Connected Power-split Hybrid Electric Vehicles at Signalized Corridors'
facebookresearch / SIE: Code for the paper Self-Supervised Learning of Split Invariant Equivariant Representations
Koukyosyumei / Attack SplitNN: Reveal the vulnerabilities of SplitNN
NITHISHKUMAR-C / CODSOFT CREDIT CARD FRAUD DETECTION: Build a machine learning model to identify fraudulent credit card transactions. Preprocess and normalize the transaction data, handle class imbalance issues, and split the dataset into training and testing sets.
SongJgit / Filternet: A Python library of learning-aided filters. Implements the Kalman filter, Extended Kalman filter, KalmanNet, Split-KalmanNet, and more.
mlpotter / SplitLearning: Split Learning applied in PyTorch with torch.distributed.rpc and torch.distributed.autograd
cuicaihao / Split Raster: An open-source, highly versatile Python package for easily breaking large images down into smaller, more manageable tiles. Particularly useful for deep learning and computer vision tasks, but applicable to a wide range of other uses.
zlijingtao / ResSFL: Official repository for ResSFL (accepted by CVPR '22)
SongJgit / Awesome Learning Aided Filter Papers: A collection of recent learning-aided filtering papers, such as KalmanNet, Split-KalmanNet, and DANSE, covering sensor fusion, target tracking, and more, with links to code and resources.
bigdata-inha / Split And Bridge: Split-and-Bridge: Adaptable Class Incremental Learning within a Single Neural Network (AAAI 2021)
jtirana98 / SFL Workflow Optimization: No description available
XiankeQiang / AdaptiveSplitFederatedLearning: The official code for ASFL.
kyuyeonpooh / Split Learning 1d Cnn: Source code for the paper "Can We Use Split Learning on 1D CNN for Privacy Preserving Training?"
ribesstefano / PROTAC Splitter: PROTAC-Splitter is a machine learning framework designed for automated annotation of PROTAC substructures.
OscarcarLi / Label Protection: Code repository for the paper "Label Leakage and Protection in Two-party Split Learning" (ICLR 2022).
arpit3043 / Extractive Text Summerization: Summarization systems often have additional evidence they can use to identify the most important topics of a document. For example, when summarizing blogs, the discussions and comments that follow a post are good sources of information for determining which parts of the blog are critical and interesting. In scientific paper summarization, a considerable amount of auxiliary information, such as cited papers and conference metadata, can be leveraged to identify important sentences in the original paper.

How text summarization works: in general, there are two types of summarization, abstractive and extractive.

1. Abstractive Summarization: Abstractive methods select words based on semantic understanding, including words that did not appear in the source documents. They aim to present the important material in a new way, interpreting and examining the text with advanced natural language techniques to generate a new, shorter text that conveys the most critical information from the original. This mirrors the way a human reads an article or blog post and then summarizes it in their own words. Input document → understand context → semantics → create own summary.

2. Extractive Summarization: Extractive methods summarize articles by selecting a subset of sentences that retain the most important points. This approach weights the important parts of sentences and uses those weights to form the summary. Different algorithms and techniques are used to assign weights to the sentences and then rank them by importance and by similarity to each other. Input document → sentence similarity → weight sentences → select sentences with higher rank.

Less research is available on abstractive summarization, since it requires a deeper understanding of the text than the extractive approach. Purely extractive summaries often give better results than automatic abstractive summaries, because abstractive methods must cope with problems such as semantic representation, inference, and natural language generation, which are harder than data-driven approaches such as sentence extraction.

Many techniques are available for extractive summarization. To keep it simple, this project uses an unsupervised learning approach to find sentence similarity and rank the sentences. One benefit is that you don't need to train and build a model before using it in your project. It helps to understand cosine similarity to make the best use of the code: cosine similarity is a measure of similarity between two non-zero vectors of an inner product space, defined as the cosine of the angle between them. Since sentences are represented as vectors, it can be used to measure similarity between sentences; the angle is 0 (cosine 1) when the sentences are identical.

The code flow to generate the summary is: Input article → split into sentences → remove stop words → build a similarity matrix → rank sentences based on the matrix → pick the top N sentences for the summary.
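The flow above can be sketched in a few lines of plain Python. This is a minimal illustration, not the repository's actual code: the regex sentence splitter, the tiny stop-word list, and the summed-similarity ranking (TextRank-style graph ranking is the more common choice) are all simplifications for clarity.

```python
import math
import re

# Illustrative stop-word subset; a real implementation would use a full list.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "in", "and", "that"}

def cosine(u, v):
    """Cosine of the angle between two term-count vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def summarize(text, top_n=2):
    # Input article -> split into sentences (naive split after ., !, ?)
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    # Remove stop words from each sentence
    tokenized = [[w for w in re.findall(r"\w+", s.lower()) if w not in STOPWORDS]
                 for s in sentences]
    # Represent each sentence as a bag-of-words count vector over a shared vocabulary
    vocab = sorted({w for toks in tokenized for w in toks})
    vectors = [[toks.count(w) for w in vocab] for toks in tokenized]
    # Build a similarity matrix (diagonal zeroed so a sentence doesn't score itself)
    n = len(sentences)
    sim = [[cosine(vectors[i], vectors[j]) if i != j else 0.0
            for j in range(n)] for i in range(n)]
    # Rank: score each sentence by its total similarity to all other sentences
    scores = [sum(row) for row in sim]
    ranked = sorted(range(n), key=lambda i: scores[i], reverse=True)[:top_n]
    # Pick the top-N sentences, emitted in their original order
    return " ".join(sentences[i] for i in sorted(ranked))
```

Scoring a sentence by its summed similarity to every other sentence is the simplest way to "generate rank based on the matrix"; it favors sentences that overlap with many others, which is the same intuition TextRank formalizes with PageRank over the similarity graph.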
Mr-Ace-1997 / Backdoor Attack Against Split Neural Network Based Vertical Federated Learning: The code of the attack scheme in the paper "Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning"
AhmetSencan / MaskSplit Self Supervised Meta Learning For Few Shot Semantic Segmentation: Code for our method MaskSplit. The paper is available at https://arxiv.org/abs/2110.12207.
splitlearning / Splitlearning.github.io: Split Learning Project Pages: Camera Culture group, MIT Media Lab
faresmalik / FeSViBS: Source code for the MICCAI 2023 paper 'FeSViBS: Federated Split Learning of Vision Transformer with Block Sampling'