133 skills found · Page 5 of 5
yudi-mars / EC SNN: Official code for "EC-SNN: Splitting Deep Spiking Neural Networks on Edge Devices" (IJCAI 2024)
ryang1119 / ATOSS: Repo for "Make Compound Sentences Simple to Analyze: Learning to Split Sentences for Aspect-based Sentiment Analysis" [EMNLP 2024 Findings]
Zi-YuanYang / DC SFL: Code for "Dynamic Corrected Split Federated Learning with Homomorphic Encryption for U-shaped Medical Image Networks" (accepted by IEEE JBHI)
huchukato / Stemify Audio Splitter: 🎵 AI-powered audio separation tool - split any audio file into vocals, drums, bass, and other instruments using advanced machine learning. Built with React, Flask, and Facebook's Demucs v4 model.
Tirth8038 / Multiclass Image Classification: The aim of this project is to scan X-rays of human lungs and classify them into three categories using a Convolutional Neural Network: healthy patients, patients with pre-existing conditions, and serious patients who need immediate attention. The provided grayscale lung X-ray dataset is a NumPy array with dimensions (13260, 64, 64, 1); the corresponding labels have size (13260, 2), with class 0 if the patient is healthy, 1 if the patient has pre-existing conditions, and 2 if the patient has effusion/mass in the lungs. During data exploration I found that the class labels are highly imbalanced, so I used data augmentation techniques (horizontal and vertical flips, rotation, brightness alteration, and height and width shifts) to increase the number of training images and mitigate overfitting. After preprocessing, the dataset has dimensions (31574, 64, 64, 1). For model selection, I built four CNN architectures similar to LeNet-5, VGGNet, and AlexNet, with various Conv2D layers followed by MaxPooling2D layers, and fitted them with different epochs, batch sizes, and optimizer learning rates. I also built a custom architecture with a comparatively less complex structure than the previous models. To further avoid overfitting, I tried regularizing the kernel and dense layers with an absolute-weight (L1) regularizer, and used a bias regularizer in the dense layer to restrict bias in classification. I also tried applying dropout (20% rate) during training as well as early stopping, and found that early stopping gave better results than dropout.
For evaluation, I split the dataset into training, testing, and validation sets with a (60, 20, 20) ratio and calculated the macro F1 score and AUC score on the test data; from the confusion matrix, I computed accuracy by dividing the sum of the diagonal elements by the sum of all elements. I also plotted training vs. validation loss and accuracy curves to visualize model performance. Interestingly, the VGGNet-like CNN with 5 Conv2D layers, 3 MaxPooling layers, and 2 Dense layers performed better than the other architectures, with a macro F1 score of 0.773, an AUC score of 0.911, and an accuracy of 0.777.
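The confusion-matrix accuracy the description mentions (sum of the diagonal divided by the sum of all entries) can be sketched as follows; the matrix values here are hypothetical placeholders, not the project's actual results:

```python
import numpy as np

def confusion_matrix_accuracy(cm: np.ndarray) -> float:
    """Accuracy = correct predictions (diagonal) / all predictions."""
    return np.trace(cm) / cm.sum()

# Hypothetical 3x3 confusion matrix for the three lung X-ray classes
cm = np.array([
    [50,  5,  2],   # true: healthy
    [ 4, 40,  6],   # true: pre-existing conditions
    [ 1,  7, 35],   # true: effusion/mass
])
print(confusion_matrix_accuracy(cm))  # 125/150 ≈ 0.833
```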
nin-ed / Split Learning: No description available
filrg / Split Learning: Split Learning with PyTorch
skyderby / Track Scanner: Using machine learning to split GPS tracks into segments
yoshitomo-matsubara / Bottlefit Split Computing: [IEEE WoWMoM 2022] "BottleFit: Learning Compressed Representations in Deep Neural Networks for Effective and Efficient Split Computing"
abedidev / FedSL: Implementation of FedSL: Federated Split Learning on Distributed Sequential Data in Recurrent Neural Networks
jtirana98 / Hydra CF In SFL: Paper: "Data Heterogeneity and Forgotten Labels in Split Federated Learning" (AAAI 26). Framework for evaluating catastrophic forgetting (CF) in SFL, SL, and FL. Also contains Hydra, an extension of SplitFedV2 to tackle CF.
Jeshima / IndiaFightsCorona Lockdown Covid19 Twitter Sentiment Analysis: I worked on coronavirus tweet streams with the hashtags #covid19, #indiafightscorona, and #lockdown. I generated the dataset from the stream and processed it following a deep learning workflow, reframing the dataset with two parameters (tweet full text and sentiment score) and evaluating four algorithms.
Set 1, deep learning algorithms: 1. CNN (one CSV with the train_test_split method): accuracy 0.73368. 2. LSTM (separate training and testing CSVs): training accuracy 0.9457, loss 0.1605; testing accuracy 0.6557, loss 0.3442. 3. FFNN (separate training and testing CSVs): training accuracy 0.28, loss 622.3; testing accuracy 0.14893, loss 141.82. 4. ANN with a TF-IDF vectorizer (one CSV with train_test_split): across epochs, models, learning rates, and dropout values, training accuracy ranged between 0.4752 and 0.6270, while validation accuracy stayed constant at 0.2353. Comparing these four algorithms, I concluded that sentiment analysis on tweets works best in the order CNN > LSTM > ANN > FFNN.
Set 2, machine learning: a Linear Support Vector Classifier (one CSV, train_test_split) gave training accuracy 0.6666 and testing accuracy (F1 score) 0.59471; a Naive Bayes classifier (one CSV, train_test_split) gave training accuracy 0.64 and test accuracy 0.5486.
Set 3, model classifications: I compared the dataset's efficiency across four models. Accuracies: 1. Baseline model 62.86%; 2. Reduced model 65.71%; 3. Regularized model 66.86%; 4. Dropout model 67.43%. Efficient modeling order for the tweet dataset: Dropout model > Regularized model > Reduced model > Baseline model.
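The Set 2 approach described above (TF-IDF features, a linear SVC, and a single train_test_split) can be sketched as below; the tweets and labels are hypothetical placeholders, not the actual streamed dataset:

```python
# Minimal sketch: TF-IDF vectorization + LinearSVC on tweet text,
# evaluated with an F1 score on a held-out split.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

tweets = [
    "india fights corona together", "lockdown is so hard on everyone",
    "great work by the doctors", "cases rising again, very worried",
    "vaccines bring hope", "stuck at home, feeling low",
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative (placeholder)

X_train, X_test, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.33, random_state=42, stratify=labels)

vec = TfidfVectorizer()
clf = LinearSVC()
clf.fit(vec.fit_transform(X_train), y_train)

pred = clf.predict(vec.transform(X_test))
print(f1_score(y_test, pred))
```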
ozgurkara99 / Video Dataset Preprocessing Meta Learning: The Something-Something-V2 video dataset is split into three meta-sets: meta-training, meta-validation, and meta-test. Overall, the dataset includes 100 classes, divided according to CMU [1]. The code also provides a dataloader that creates episodes for a given n-way k-shot learning task. Videos are converted to frames under the sparse-sampling protocol described in TSN [2].
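The n-way k-shot episode construction mentioned above can be sketched as follows; this is a hypothetical index-sampling helper, not the repo's actual dataloader:

```python
# Sample one episode: pick n classes, then k support and q query
# examples per class, returning index lists into the dataset.
import random
from collections import defaultdict

def sample_episode(labels, n_way=5, k_shot=1, q_query=2, rng=None):
    """Return (support_idx, query_idx) for one n-way k-shot episode."""
    rng = rng or random.Random()
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = rng.sample(sorted(by_class), n_way)
    support, query = [], []
    for c in classes:
        picked = rng.sample(by_class[c], k_shot + q_query)
        support += picked[:k_shot]
        query += picked[k_shot:]
    return support, query

# Example: 10 classes with 5 videos each
labels = [c for c in range(10) for _ in range(5)]
s, q = sample_episode(labels, n_way=5, k_shot=1, q_query=2,
                      rng=random.Random(0))
print(len(s), len(q))  # 5 support and 10 query indices
```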
DevLeonardoCommunity / Billsplit: Split expenses when travelling with friends (we're also learning Qwik here)
danielspg / SplitFed Learning: SplitFed learning using MNIST
daler3 / PyVertical Paper: Vertical Federated Learning with Multi-Headed Split Neural Networks
evanwrm / Split Learning Demo: Simple split learning setup; proof of concept and testbed
khoaguin / Split Learning 1D HE: Privacy-preserving training of a split 1D CNN on homomorphically encrypted ECG data to detect heart disease
jtirana98 / AdHOC SL: No description available
luigicapogrosso / Split Et Impera: Official implementation of the paper "Split-Et-Impera: A Framework for the Design of Distributed Deep Learning Applications", accepted at DDECS 2023.