HebbianLearning
Pytorch implementation of Hebbian learning algorithms to train deep convolutional neural networks. A neural network model is trained on various datasets using both Hebbian algorithms and SGD, in order to compare the results. Hybrid models, with some layers trained by Hebbian learning and others trained by SGD, are also studied. In addition, a semi-supervised approach is considered, where unsupervised Hebbian learning is used to pre-train the internal layers of a DNN, followed by SGD fine-tuning of a final linear classifier. This approach outperforms plain end-to-end backprop training in low sample efficiency scenarios, i.e. when few labeled training examples are available.
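To make the semi-supervised recipe concrete, here is a minimal, self-contained sketch. It uses a single linear Hebbian layer and an Oja-style update as a stand-in for the rules implemented in this repo, and random tensors in place of a real dataset, just to keep the snippet runnable; none of the names below are the repository's actual API.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in data: random tensors instead of a real dataset
unlabeled = [torch.randn(32, 784) for _ in range(100)]  # no labels needed
labeled = [(torch.randn(32, 784), torch.randint(0, 10, (32,))) for _ in range(10)]

# Phase 1: unsupervised Hebbian pre-training of a hidden layer.
# Per-neuron Oja-style rule, delta_w ~ y * (x - y * w), as a stand-in
# for the repo's rules (WTA variants, PCA, ICA).
W = torch.randn(200, 784) * 0.01
lr = 1e-3
for x in unlabeled:
    y = x @ W.t()  # hidden activations
    W += lr * (y.t() @ x - (y ** 2).sum(0).unsqueeze(1) * W) / x.shape[0]

# Phase 2: SGD fine-tuning of a final linear classifier
# on top of the frozen Hebbian features.
clf = nn.Linear(200, 10)
opt = torch.optim.SGD(clf.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for x, t in labeled:  # small labeled subset
    with torch.no_grad():
        h = x @ W.t()  # Hebbian features stay frozen
    loss = loss_fn(clf(h), t)
    opt.zero_grad()
    loss.backward()
    opt.step()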
We also introduce a preliminary version of the neurolab package, a simple, extensible, Pytorch-based framework for deep learning experiments, providing functionality for handling experimental configurations, reproducibility, checkpointing/resuming experiment state, hyperparameter search, and more.
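For illustration, an experiment configuration is just a Python dictionary reachable by dotted path. The module name and keys below are hypothetical, not neurolab's actual schema:

# e.g. in <project root>/configs/example.py (hypothetical module and keys)
myconfig = {
    'dataset': 'cifar10',   # dataset to train on
    'model': 'hebbnet',     # model identifier
    'batch_size': 64,
    'epochs': 20,
    'lr': 1e-3,
}

Such a dict would then be referenced on the command line as configs.example.myconfig, as in the examples below.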
This work is a continuation of https://github.com/GabrieleLagani/HebbianLearningThesis and https://github.com/GabrieleLagani/HebbianPCA, containing the latest updates and new Hebbian learning algorithms: Hebbian WTA variants (k-WTA, soft-WTA, ...), Hebbian PCA, and Hebbian ICA.
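For intuition about the WTA variants, here is a minimal sketch of a k-WTA Hebbian update in instar form, delta_w ~ y * (x - w); the function and its details are illustrative assumptions, not the repository's implementation:

import torch

def kwta_hebbian_update(w, x, lr=0.01, k=1):
    # w: (out_features, in_features) weights; x: (batch, in_features) inputs
    y = x @ w.t()  # neuron responses to each input
    # winner-take-all: only the top-k responding neurons learn per input
    topk = y.topk(k, dim=1).indices
    gated = torch.zeros_like(y).scatter_(1, topk, 1.0) * y
    # instar rule: pull each winning neuron's weights toward its inputs,
    # which keeps weights bounded without explicit normalization
    delta = gated.t() @ x - gated.sum(0).unsqueeze(1) * w
    return w + lr * delta / x.shape[0]

w = torch.randn(50, 784) * 0.01
w = kwta_hebbian_update(w, torch.randn(32, 784), k=5)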
IMPORTANT: a new implementation based on FastHebb is available, making Hebbian experiments more than 20x faster!
In order to launch an experiment session, type:
PYTHONPATH=<project root> python <project root>/runexp.py --config <dotted.path.to.config.object> --mode <mode> --device <device> --clearhist --restart
Where:
- <dotted.path.to.config.object> is the path, in dotted notation, to a dictionary (which can be defined anywhere in the code) containing the configuration parameters for your experiment;
- <mode> is either train or test;
- <device> is, for example, cpu or cuda:0;
- the --clearhist flag can be used if you don't need to keep old checkpoint files saved on disk;
- the --restart flag restarts the experiment from scratch, instead of resuming from a checkpoint.
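For example, using the hypothetical configs.example.myconfig dictionary sketched earlier:

PYTHONPATH=<project root> python <project root>/runexp.py --config configs.example.myconfig --mode train --device cuda:0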
You can also use:
PYTHONPATH=<project root> python <project root>/runstack.py --stack <dotted.path.to.stack.object> --mode <mode> --device <device> --clearhist --restart
to run a full stack of experiments, i.e. a list of experiment configurations, which can be defined anywhere in the code.
Example:
python runstack.py --stack stacks.vision.all[cifar10] --mode train --device cuda:0 --clearhist
Author: Gabriele Lagani - gabriele.lagani@gmail.com
