53 skills found · Page 1 of 2
Einsteinish / Artificial Neural Networks With Jupyter: Artificial neural networks - gradient descent, BFGS, and regularization, with Jupyter notebooks
complex-reasoning / RPG: [ICLR 2026] RPG: KL-Regularized Policy Gradient (https://arxiv.org/abs/2505.17508)
ZJULearning / DepthInpainting: Depth Image Inpainting with Low Gradient Regularization
dtak / Adversarial Robustness: Code for the AAAI 2018 accepted paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients"
hytseng0509 / DropGrad: Regularizing Meta-Learning via Gradient Dropout
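The gradient-dropout idea behind DropGrad can be sketched in a few lines. This is a hypothetical stand-alone illustration, not the repository's code: zero each gradient entry with some probability and rescale the survivors, mirroring inverted dropout applied to gradients instead of activations.

```python
import random

def drop_grad(grad, p=0.5, rng=None):
    """Zero each gradient entry with probability p and rescale the
    survivors by 1/(1-p), so the masked gradient is unbiased in
    expectation (inverted dropout applied to a gradient vector)."""
    rng = rng or random.Random(0)
    keep = 1.0 - p
    return [g / keep if rng.random() < keep else 0.0 for g in grad]

# Each entry of the result is either 0.0 or the original value
# scaled by 1/(1-p) = 2.
grad = [0.4, -1.2, 0.9, 0.1]
masked = drop_grad(grad, p=0.5)
```

In a meta-learning loop the mask would be applied to the inner-loop gradients before the update step; the rescaling keeps the expected update magnitude unchanged.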
pb1672 / ML Projects: Andrew Ng's Machine Learning class projects. Ex1 - Gradient Descent, Newton's Method, Linear Regression; Ex2 - Sigmoid Kernels; Ex3 - Logistic Regression Implementation; Ex4 - Neural Network Implementation for Digit Recognition; Ex5 - Regularized Linear Regression, Polynomial Regression; Ex6 - SVM (Kernel Implementation) for Spam Classification; Ex8 - Recommender System (Collaborative Filtering) and Anomaly Detection
JonasGeiping / Fullbatchtraining: Training vision models with full-batch gradient descent and regularization
timnugent / Logistic Regression Sgd: L1-regularized logistic regression using stochastic gradient descent [machine learning]
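As a rough sketch of what an L1-regularized SGD logistic-regression trainer does (an assumed, dependency-free illustration; the function name, data, and hyperparameters are invented and none of it is taken from the repository above):

```python
import math
import random

def sgd_l1_logreg(data, lam=0.01, lr=0.1, epochs=50, seed=0):
    """Train logistic regression by SGD with an L1 subgradient penalty.
    data: list of (feature_list, label) pairs with label in {0, 1}."""
    rng = random.Random(seed)
    n = len(data[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            for i in range(n):
                # log-loss gradient plus the L1 subgradient lam * sign(w_i)
                sign = 1 if w[i] > 0 else -1 if w[i] < 0 else 0
                w[i] -= lr * ((p - y) * x[i] + lam * sign)
    return w

# Toy data: the label follows the first feature; the second is noise.
data = [([1.0, 0.3], 1), ([0.9, -0.2], 1), ([-1.0, 0.1], 0), ([-0.8, 0.4], 0)]
w = sgd_l1_logreg(list(data))
```

The L1 term pushes uninformative weights toward zero, which is why this penalty is the usual choice when sparse coefficients are wanted.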
rlai-lab / Regularized GradientTD: Code repo for the "Gradient Temporal-Difference Learning with Regularized Corrections" paper.
jpzhang1810 / TGR: Official PyTorch implementation of "Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization" (CVPR 2023).
cfinlay / Tulip: Scalable input gradient regularization
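Input gradient regularization, the technique several of these repositories implement, penalizes the norm of the loss gradient taken with respect to the input rather than the weights. For plain logistic regression that gradient has the closed form (p - y) * w, so a minimal per-example sketch is possible without any autodiff. This is hypothetical illustration code, not taken from any of the repositories above:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def regularized_loss(w, x, y, lam=0.1):
    """Per-example log-loss plus lam * ||d loss / d x||^2.
    For logistic regression the input gradient is exactly (p - y) * w,
    so its squared norm is (p - y)^2 * ||w||^2."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    p = sigmoid(z)
    logloss = -(y * math.log(p) + (1 - y) * math.log(1 - p))
    input_grad_sq = (p - y) ** 2 * sum(wi * wi for wi in w)
    return logloss + lam * input_grad_sq
```

A larger weight vector produces a steeper decision surface and hence a larger input gradient, so minimizing this objective trades a little accuracy for a flatter, more robust model; in deep networks the same penalty requires double backpropagation rather than a closed form.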
OHDSI / Bayes Bridge: Bayesian sparse regression with regularized shrinkage and conjugate gradient acceleration
ericyeats / Cvnn Security: Python/PyTorch code for the ICML 2021 paper "Improving Gradient Regularization using Complex-Valued Neural Networks"
CGCL-codes / TransferAttackSurrogates: The official code of the IEEE S&P 2024 paper "Why Does Little Robustness Help? A Further Step Towards Understanding Adversarial Transferability". We study how to train surrogate models for boosting transfer attacks.
manyasrinivas2021 / AI BASED FACIAL EMOTION DETECTION USING DEEP LEARNING: "AI Based Facial Emotion Detection", developed using several machine learning algorithms, including convolutional neural networks (CNNs), for a facial expression recognition task. The goal is to classify each facial image into one of the seven facial emotion categories considered in this study. CNN models of different depths were trained on gray-scale images from the Kaggle website. The models are developed in PyTorch and exploit Graphics Processing Unit (GPU) computation to expedite training. In addition to networks that operate on raw pixel data, a hybrid feature strategy trains a novel CNN model on the combination of raw pixel data and Histogram of Oriented Gradients (HOG) features. To reduce overfitting, several techniques are used, including dropout and batch normalization in addition to L2 regularization. Cross-validation is applied to determine the optimal hyperparameters, and the performance of the developed models is evaluated from their training histories. Visualizations of different layers of a network show what facial features the CNN models learn. Based on the detected emotion, the program recommends music to uplift the user's mood.
reddyprasade / Machine Learning Interview Preparation: The essential technical skills a Machine Learning Engineer needs, as described in the Read Me files. Within each group are topics you should be familiar with. Study tip: copy and paste this list into a document and save it to your computer for easy reference.
Computer Science Fundamentals and Programming: Data structures (lists, stacks, queues, strings, hash maps, vectors, matrices, classes & objects, trees, graphs, etc.); algorithms (recursion, searching, sorting, optimization, dynamic programming, etc.); computability and complexity (P vs. NP, NP-complete problems, big-O notation, approximate algorithms, etc.); computer architecture (memory, cache, bandwidth, threads & processes, deadlocks, etc.).
Probability and Statistics: Basic probability (conditional probability, Bayes rule, likelihood, independence, etc.); probabilistic models (Bayes nets, Markov decision processes, hidden Markov models, etc.); statistical measures (mean, median, mode, variance, population parameters vs. sample statistics, etc.); proximity and error metrics (cosine similarity, mean-squared error, Manhattan and Euclidean distance, log-loss, etc.); distributions and random sampling (uniform, normal, binomial, Poisson, etc.); analysis methods (ANOVA, hypothesis testing, factor analysis, etc.).
Data Modeling and Evaluation: Data preprocessing (munging/wrangling, transforming, aggregating, etc.); pattern recognition (correlations, clusters, trends, outliers & anomalies, etc.); dimensionality reduction (eigenvectors, Principal Component Analysis, etc.); prediction (classification, regression, sequence prediction, etc., with suitable error/accuracy metrics); evaluation (training-testing split, sequential vs. randomized cross-validation, etc.).
Applying Machine Learning Algorithms and Libraries: Models (parametric vs. nonparametric, decision tree, nearest neighbor, neural net, support vector machine, ensemble of multiple models, etc.); learning procedure (linear regression, gradient descent, genetic algorithms, bagging, boosting, and other model-specific methods; regularization, hyperparameter tuning, etc.); tradeoffs and gotchas (relative advantages and disadvantages, bias and variance, overfitting and underfitting, vanishing/exploding gradients, missing data, data leakage, etc.).
Software Engineering and System Design: Software interface (library calls, REST APIs, data collection endpoints, database queries, etc.); user interface (capturing user inputs & application events, displaying results & visualization, etc.); scalability (map-reduce, distributed processing, etc.); deployment (cloud hosting, containers & instances, microservices, etc.).
Move on to the final lesson of this course to find lots of sample practice questions for each topic!
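The "learning procedure" topics in the study list above (gradient descent plus regularization) can be illustrated with a tiny worked example: one-feature ridge regression fit by batch gradient descent. Everything here (the function name, data, and step size) is invented for illustration.

```python
def ridge_gd(xs, ys, lam=0.1, lr=0.05, steps=500):
    """Fit y ~ w*x + b by batch gradient descent on
    mean squared error + lam * w**2 (L2 regularization on the slope)."""
    w = b = 0.0
    n = len(xs)
    for _ in range(steps):
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n + 2 * lam * w
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Noiseless line y = 2x + 1; with lam=0 the fit converges to roughly
# w = 2, b = 1, while lam > 0 shrinks the slope toward zero.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
w, b = ridge_gd(xs, ys, lam=0.0)
```

Comparing `lam=0.0` with a positive `lam` on the same data is a quick way to see the bias that regularization deliberately introduces to reduce variance.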
alanjeffares / TANGOS: Implementation of Tabular Neural Gradient Orthogonalization and Specialization (TANGOS), a regularizer for neural networks described in our ICLR 2023 paper.
YangiD / DefenseIQA NT: Official code for "Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization"
uuujf / SGDNoise: [ICML 2019] The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Sharp Minima and Regularization Effects
drwuHUST / MBGD UR BN: Matlab source code of the paper "Y. Cui, D. Wu* and J. Huang*, "Optimize TSK Fuzzy Systems for Classification Problems: Mini-Batch Gradient Descent with Uniform Regularization and Batch Normalization," IEEE Trans. on Fuzzy Systems, 28(12):3065-3075, 2020." Python source code is available at https://github.com/YuqiCui/TSK_BN_UR