# Tnlearn

A Python package that uses task-based neurons to build neural networks.
Tnlearn is an open-source Python library. It uses a symbolic regression algorithm to generate task-based neurons, and then assembles these diverse neurons into neural networks.
## Quick links

- Motivation
- Features
- Overview
- Benchmarks
- Resources
- Dependencies
- Install
- Quick start
- API documentation
- Citation
- The Team
- License
## Motivation

- **Inspired by neuronal diversity.** In the past decade, successful networks have primarily used a single type of neuron within novel architectures, yet recent deep learning studies have drawn inspiration from the diversity of human brain neurons, leading to new artificial neuron designs.
- **Task-based neuron design.** Given the human brain's reliance on task-specific neurons, can artificial network design shift its focus from task-based architectures to task-based neurons?
- **Enhanced representation.** Since no neuron is universally applicable, task-based neurons can enhance feature representation ability within the same structure, thanks to an intrinsic inductive bias for the task.
## Features

- Vectorized symbolic regression is employed to find an elementary formula that best fits the input data.
- The obtained elementary formula is then parameterized with learnable weights, which serve as the neuron's aggregation function.
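As a rough illustration of the second step, here is a minimal NumPy sketch of turning a discovered formula into a learnable aggregation function. The `QuadraticNeuron` class and its factored quadratic formula are hypothetical, chosen only to show the idea; Tnlearn's actual neurons are generated from the symbolic-regression output, not hard-coded like this.

```python
import numpy as np

class QuadraticNeuron:
    """Hypothetical sketch of a task-based neuron: the coefficients of an
    elementary formula (here a factored quadratic) become learnable
    parameters. This is illustrative, not Tnlearn's internal API."""

    def __init__(self, n_inputs, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        # Each coefficient of the discovered formula is a learnable parameter.
        self.w1 = rng.normal(size=n_inputs)
        self.w2 = rng.normal(size=n_inputs)
        self.b1 = 0.0
        self.b2 = 0.0

    def forward(self, X):
        # Quadratic aggregation (w1·x + b1)(w2·x + b2) instead of the
        # usual linear sum w·x + b.
        return (X @ self.w1 + self.b1) * (X @ self.w2 + self.b2)

neuron = QuadraticNeuron(n_inputs=5)
X = np.ones((3, 5))
out = neuron.forward(X)
print(out.shape)  # (3,)
```

In a real network, `w1`, `w2`, `b1`, `b2` would be trained by gradient descent along with the rest of the model.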
## Overview

A figure describing the structure of Tnlearn will be added here.
## Benchmarks

We select several advanced machine learning methods for comparison.

| Method         | Venue               | Code link               |
| :------------: | :-----------------: | :---------------------: |
| XGBoost        | ACM SIGKDD 2016     | Adopt official code     |
| LightGBM       | NeurIPS 2017        | Implemented by widedeep |
| CatBoost       | Journal of Big Data | Adopt official code     |
| TabNet         | AAAI 2021           | Implemented by widedeep |
| TabTransformer | arXiv               | Adopt official code     |
| FT-Transformer | NeurIPS 2021        | Implemented by widedeep |
| DANETs         | AAAI 2022           | Adopt official code     |
We test these methods on two real-world datasets. The test results (MSE) are shown in the following table:

| Method             | Particle collision         | Asteroid prediction        |
| :----------------: | :------------------------: | :------------------------: |
| XGBoost            | $0.0094\pm0.0006$          | $0.0646\pm0.1031$          |
| LightGBM           | $0.0056\pm0.0004$          | $0.1391\pm0.1676$          |
| CatBoost           | $0.0028\pm0.0002$          | $0.0817\pm0.0846$          |
| TabNet             | $0.0040\pm0.0006$          | $0.0627\pm0.0939$          |
| TabTransformer     | $0.0038\pm0.0008$          | $0.4219\pm0.2776$          |
| FT-Transformer     | $0.0050\pm0.0020$          | $0.2136\pm0.2189$          |
| DANETs             | $0.0076\pm0.0009$          | $0.1709\pm0.1859$          |
| Task-based Network | $\mathbf{0.0016\pm0.0005}$ | $\mathbf{0.0513\pm0.0551}$ |
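Each entry above reports the mean ± standard deviation of the test MSE over repeated runs. A minimal sketch of such an evaluation protocol, using scikit-learn's `LinearRegression` purely as a stand-in for the models in the table and synthetic data in place of the real datasets:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

mses = []
for seed in range(5):  # repeat with different random splits
    X, y = make_regression(n_samples=200, noise=10.0, random_state=seed)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=seed)
    model = LinearRegression().fit(X_train, y_train)
    mses.append(mean_squared_error(y_test, model.predict(X_test)))

# Report mean ± std over the repeated runs, as in the table above.
print(f"MSE: {np.mean(mses):.4f} \u00b1 {np.std(mses):.4f}")
```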
## Resources

Here is a resource summary for neuronal diversity in artificial networks.

| Resource | Type | Description |
| :------: | :--: | :---------- |
| QuadraLib | Library | A library for the efficient optimization and design exploration of quadratic networks. The QuadraLib paper won the MLSys 2022 best paper award. |
| Dr. Fenglei Fan's GitHub page | Code | Summarizes a series of papers and associated code on quadratic networks, including the quadratic autoencoder and the ReLinear training algorithm. |
| Polynomial Network | Code | This repository shows how to build a deep polynomial network and sparsify it with tensor decomposition. |
| Dendrite | Book | A comprehensive book covering all aspects of dendritic computation. |
## Dependencies

Make sure your PyTorch version matches your CUDA version so that GPU acceleration works. Reference versions:

- PyTorch >= 2.1.0
- CUDA >= 12.1

Other major dependencies are installed automatically when you install Tnlearn.
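Before installing, you can check your local PyTorch/CUDA setup with a short script. The helper below is a convenience sketch and only assumes the standard `torch` API; it degrades gracefully if PyTorch is not yet installed.

```python
import importlib.util

def torch_cuda_status():
    """Report the installed PyTorch version and whether CUDA is usable."""
    if importlib.util.find_spec("torch") is None:
        return {"torch": None, "cuda_available": False}
    import torch
    return {"torch": torch.__version__,
            "cuda_available": torch.cuda.is_available()}

status = torch_cuda_status()
print(status)
```

If `cuda_available` is `False` on a GPU machine, your PyTorch build likely does not match the installed CUDA version.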
## Install

Tnlearn and its dependencies can be easily installed with pip:

```shell
pip install tnlearn
```

Alternatively, install it locally from source:

1. Download the package.
2. Create a new virtual environment.
3. Change into the directory containing `setup.py`.
4. Execute:

```shell
pip install -e .
```

or

```shell
pip install -e . --no-deps
pip install -r requirements.txt
```
## Quick start

This is a quick example showing how to use Tnlearn for regression tasks. Note that your data should be tabular.

```python
from tnlearn import VecSymRegressor
from tnlearn import MLPRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Generate data.
X, y = make_regression(n_samples=200, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# A vectorized symbolic regression algorithm generates task-based neurons.
neuron = VecSymRegressor()
neuron.fit(X_train, y_train)

# Build a neural network from task-based neurons and train it.
clf = MLPRegressor(neurons=neuron.neuron,
                   layers_list=[50, 30, 10])  # Structure of the hidden layers in the MLP.
clf.fit(X_train, y_train)

# Predict.
clf.predict(X_test)
```
Another quick example shows how to use the polynomial tensor regressor to build neurons:

```python
from tnlearn import PolyTensorRegression
from tnlearn import MLPRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Generate data.
X, y = make_regression(n_samples=200, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# A polynomial tensor regressor generates task-based neurons.
neuron = PolyTensorRegression()
neuron.fit(X_train, y_train)

# Build a neural network from task-based neurons and train it.
clf = MLPRegressor(neurons=neuron.neuron,
                   layers_list=[50, 30, 10])  # Structure of the hidden layers in the MLP.
clf.fit(X_train, y_train)

# Predict.
clf.predict(X_test)
```
Tnlearn exposes many hyperparameters that can be tuned to further improve network performance. Please see the API documentation for specific usage.
## API documentation

Our official API documentation is available on Read the Docs.
## Citation

If you find Tnlearn useful, please cite it in your publications.

```bibtex
@article{fan2024no,
  title={No One-Size-Fits-All Neurons: Task-based Neurons for Artificial Neural Networks},
  author={Fan, Feng-Lei and Wang, Meng and Dong, Hang-Cheng and Ma, Jianwei and Zeng, Tieyong},
  journal={arXiv preprint arXiv:2405.02369},
  year={2024}
}
```