15 skills found
ViCCo-Group / Frrsa: Python package to conduct feature-reweighted representational similarity analysis.
jwest33 / Abliterator: Orthogonal Projection Abliteration toolkit featuring Norm-Preservation, Null-Space Constraints, Winsorization, and Adaptive Layer Weighting.
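The core operation such toolkits describe can be illustrated with a short, hypothetical sketch: project each weight row orthogonally to a given direction, then optionally rescale rows to preserve their original norms. The function name, signature, and logic here are illustrative assumptions, not code from the repository.

```python
import numpy as np

def ablate_direction(W, r, preserve_norm=True):
    """Hypothetical orthogonal-projection ablation sketch: remove the
    component of each row of W along direction r. With preserve_norm=True,
    rescale each row back to its original L2 norm (rescaling a row does
    not change its direction, so orthogonality to r is kept)."""
    r = r / np.linalg.norm(r)
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    # Subtract the projection of every row onto r.
    W_out = W - (W @ r)[:, None] * r[None, :]
    if preserve_norm:
        new_norms = np.linalg.norm(W_out, axis=1, keepdims=True)
        W_out = W_out * (orig_norms / np.maximum(new_norms, 1e-12))
    return W_out
```

After the call, every row of the result is orthogonal to `r`, and (barring rows that were parallel to `r`) row norms match the input.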
ducminhkhoi / Feature Weighting And Boosting: The unofficially official implementation of the paper "Feature Weighting and Boosting for Few-Shot Segmentation".
24wenjie-li / FDIWN: Official PyTorch implementation of the paper "Feature Distillation Interaction Weighting Network for Lightweight Image Super-Resolution" (AAAI 2022).
MaurizioFD / CFeCBF: Core model, "Collaborative filtering enhanced Content-based Filtering", published in the UMUAI article "Movie Genome: Alleviating New Item Cold Start in Movie Recommendation".
JohnsonZ-microe / Design Of Real Time Figure Recognition Algorithm Based On FPGA: As machine learning techniques are widely deployed in social and industrial fields, higher demands are placed on the accuracy of object feature recognition. Building on the high-speed performance of the FPGA platform, the algorithm proposed in this paper extracts gesture information, mainly skin color and contours. Using ellipse-model color-space division and a Gaussian weighting function, the algorithm combines skin color and contours and effectively filters out irrelevant information.
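The ellipse-model color-space test can be sketched as a boolean mask in the Cb-Cr plane. The ellipse centre, axes, and rotation below are illustrative values in the spirit of the classic elliptical CbCr skin model, not parameters taken from the paper.

```python
import numpy as np

def skin_mask_ycbcr(img_ycbcr, center=(113.0, 155.6), axes=(23.4, 15.2),
                    angle_deg=43.0):
    """Boolean mask of likely skin pixels: a pixel is skin if its (Cb, Cr)
    chroma falls inside a rotated ellipse. Parameters are illustrative."""
    cb = img_ycbcr[..., 1].astype(np.float64) - center[0]
    cr = img_ycbcr[..., 2].astype(np.float64) - center[1]
    theta = np.deg2rad(angle_deg)
    # Rotate chroma coordinates into the ellipse's principal axes.
    x = np.cos(theta) * cb + np.sin(theta) * cr
    y = -np.sin(theta) * cb + np.cos(theta) * cr
    return (x / axes[0]) ** 2 + (y / axes[1]) ** 2 <= 1.0
```

The resulting mask can then be intersected with contour information, as the description suggests, to suppress irrelevant regions.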
julianalucena / Hybrid Optimization Techniques: Combining global optimization algorithms with adaptive distance and prototype selection algorithms for feature selection and weighting.
wangjiaojuan / An Adaptive Weight Method For Image Retrieval Based Multi Feature Fusion: Addressing the low retrieval accuracy of existing content-based image retrieval systems, we propose an adaptive weighting method based on entropy theory and relevance feedback. First, we obtain the trust of each single feature via relevance feedback (supervised) or entropy (unsupervised); next, we construct a transfer matrix based on trust; finally, we derive the weight of each single feature from the transfer matrix through several iterations. Our method makes full use of each feature's information and achieves better performance.
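The entropy-based trust and transfer-matrix iteration can be sketched roughly as follows. The exact trust formula and matrix construction here are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def entropy_trust(scores):
    """Unsupervised trust of one feature: a peaked (low-entropy) retrieval
    score distribution earns high trust. `scores` is 1-D, non-negative."""
    p = scores / scores.sum()
    h = -(p * np.log(p + 1e-12)).sum()
    return 1.0 / (1.0 + h)

def fuse_weights(trusts, n_iter=10):
    """Hypothetical transfer-matrix iteration: each column of T is the
    normalised trust vector (column-stochastic), and repeated
    multiplication yields a stationary weight vector summing to 1."""
    t = np.asarray(trusts, dtype=np.float64)
    T = np.tile((t / t.sum())[:, None], (1, t.size))
    w = np.full(t.size, 1.0 / t.size)
    for _ in range(n_iter):
        w = T @ w
        w /= w.sum()
    return w
```

With this simple rank-one transfer matrix the iteration converges immediately to the normalised trusts; a richer matrix (e.g. incorporating feature correlations) would make the iterations non-trivial.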
musicalka / Fwr Csp: Feature Weighting and Regularization for Common Spatial Patterns in EEG-based Motor Imagery Brain-Computer Interfaces.
dalwindercheema / Featweight: Source code for Feature Weighting with Ant Lion Optimization in MATLAB.
FabianCormier / Cross Domain Transfer Learning From Human Motion To Robot Fault Detection: The code trains an LSTM-based residual model on human motion data and applies transfer learning to detect robotic joint faults. It preprocesses the data, maps robot features to human-like patterns, and fine-tunes the model while freezing the early layers. The optimized model is evaluated with class weighting, callbacks, and feature-importance analysis.
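The class-weighting step mentioned above typically means inverse-frequency weights, so that rare fault classes count more during evaluation. A minimal sketch, equivalent in spirit to scikit-learn's `class_weight='balanced'` (not the repository's exact code):

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency:
    w_c = n_samples / (n_classes * count_c)."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = len(labels) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))
```

For a heavily imbalanced fault-detection set (e.g. three healthy samples per fault sample), the fault class receives a proportionally larger weight.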
Jiajun-Xiang / SMA Net: Pedestrian detection is of great significance due to its wide application in various fields. RGB-T based pedestrian detection has received extensive attention due to the detailed information it provides and the thermal visibility of pedestrians. However, existing RGB-T based methods focus on the fused features while ignoring the robustness and strength of the features extracted from each single modality. In this paper, a single-modal feature augment network (SMA-Net) is proposed to enhance the features extracted from each branch before feature fusion. First, two single-modal branches are trained separately to optimize the feature extraction of each branch, in addition to the training of pedestrian detection on fused features. To further enhance the single-modal features, fake feature maps generated from random noise are combined with the RGB or thermal feature maps. Second, a lightweight ROI pooling multiscale fusion module (PMSF) is proposed to obtain more fine-grained and abundant features, in which pooled features of different scales are integrated by adaptive weighting. Finally, a generative constraint strategy is designed to constrain the fusion by minimizing the loss between the generated fusion image and the RGB-T pairs. Experimental results on the challenging KAIST multispectral pedestrian dataset demonstrate that the proposed SMA-Net outperforms state-of-the-art methods in both accuracy and computational efficiency.
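The adaptive weighting inside a PMSF-style fusion can be sketched as a softmax-gated sum over pooled feature maps. This assumes the maps have already been resized to a common shape, and the gate scores stand in for learned parameters; it is a sketch of the idea, not the paper's module.

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def adaptive_multiscale_fusion(feature_maps, gate_scores):
    """Combine ROI features pooled at several scales (all resized to a
    common shape) with softmax-normalised gate weights."""
    weights = softmax(np.asarray(gate_scores, dtype=np.float64))
    fused = np.zeros_like(feature_maps[0], dtype=np.float64)
    for w, fmap in zip(weights, feature_maps):
        fused += w * fmap
    return fused
```

Equal gate scores reduce this to a plain average of the pooled scales; training the gates lets the network emphasise whichever scale is most informative per ROI.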
daksh26022002 / Manthanproject: Fake News Detection in Python. This project uses various natural language processing techniques and machine learning algorithms to classify fake news articles with the scikit-learn libraries.

Getting Started: These instructions will get a copy of the project up and running on your local machine for development and testing; see the deployment notes for running it on a live system. Prerequisites: Python 3.6, which you can download from https://www.python.org/downloads/. Setting up the PATH variable is optional (you can run the program without it); if you want to run Python directly, see https://www.pythoncentral.io/add-python-to-path-python-is-not-recognized-as-an-internal-or-external-command/. The second and easier option is to download Anaconda (https://www.anaconda.com/download/) and use its Anaconda prompt to run the commands.

After installing either Python or Anaconda, install three packages: scikit-learn, numpy, and scipy. With Python 3.6, run in a command prompt/terminal: pip install -U scikit-learn; pip install numpy; pip install scipy. With Anaconda, run in the Anaconda prompt: conda install -c anaconda scikit-learn; conda install -c anaconda numpy; conda install -c anaconda scipy.

Dataset used: The data source is the LIAR dataset, which contains three .tsv files for test, train, and validation. Reference: William Yang Wang, "Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection, in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), short paper, Vancouver, BC, Canada, July 30-August 4, ACL. The original dataset contains 14 variables/columns for the train, test, and validation sets: Column 1: the ID of the statement ([ID].json). Column 2: the label (True, Mostly-true, Half-true, Barely-true, False, Pants-fire). Column 3: the statement. Column 4: the subject(s). Column 5: the speaker. Column 6: the speaker's job title. Column 7: the state info. Column 8: the party affiliation. Columns 9-13: the total credit history count, including the current statement (9: barely-true counts; 10: false counts; 11: half-true counts; 12: mostly-true counts; 13: pants-on-fire counts). Column 14: the context (venue/location of the speech or statement).

To keep things simple, only two variables were chosen from the original dataset for this classification; the other variables can be added later to increase complexity and enhance the features. The columns used to create the three project datasets are: Column 1: Statement (news headline or text). Column 2: Label (True, False). The newly created dataset therefore has only 2 classes instead of the original 6, reduced as follows: True becomes True; Mostly-true becomes True; Half-true becomes True; Barely-true becomes False; False becomes False; Pants-fire becomes False. The datasets used in this project are in CSV format, named train.csv, test.csv, and valid.csv, and can be found in the repo; the original datasets are in the "liar" folder in TSV format.

File descriptions: DataPrep.py contains all the preprocessing functions needed to process the input documents and texts. It first reads the train, test, and validation data files, then performs preprocessing such as tokenizing and stemming, along with some exploratory data analysis such as the response-variable distribution and data-quality checks for null or missing values. FeatureSelection.py performs feature extraction and selection using the scikit-learn libraries. For feature selection, it uses methods such as simple bag-of-words and n-grams followed by term-frequency weighting such as tf-idf; word2vec and POS tagging are also used to extract the features, though POS
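The binary label mapping and tf-idf feature extraction described above can be sketched with scikit-learn; the example statements are invented placeholders, and the variable names are not from the repository.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Six-way LIAR labels collapsed to the binary scheme described above.
LABEL_MAP = {
    "true": True, "mostly-true": True, "half-true": True,
    "barely-true": False, "false": False, "pants-fire": False,
}

# Placeholder statements standing in for rows of train.csv.
statements = ["Taxes went down last year", "The moon is made of cheese"]
labels = [LABEL_MAP["mostly-true"], LABEL_MAP["pants-fire"]]

# Bag-of-words with unigrams and bigrams, tf-idf weighted.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(statements)
```

`X` is then the document-term matrix a classifier would be fit on, with one row per statement.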
LucasKirsten / CBFW Naive Bayes: Python implementation of "A Correlation-Based Feature Weighting Filter for Naive Bayes".
ThomasWestfechtel / BIWAA: Backprop Induced Feature Weighting for Adversarial Domain Adaptation with Iterative Label Distribution Alignment.