AdaptiveResonance.jl
A Julia package for Adaptive Resonance Theory (ART) algorithms.
Please read the documentation for detailed usage and tutorials.
Overview
Adaptive Resonance Theory (ART) is a neurocognitive theory of how recurrent cellular networks can learn distributed patterns without supervision. As a theory, it provides coherent and consistent explanations of how real neural networks learn patterns through competition, and it predicts the phenomena of attention and expectation as central to learning. In engineering, the theory has been applied to a myriad of algorithmic models for unsupervised machine learning, though it has been extended to supervised and reinforcement learning frameworks. This package provides implementations of many of these algorithms in Julia for both scientific research and engineering applications. Basic installation is outlined in Installation, while a quickstart is provided in Quickstart. Detailed usage and examples are provided in the documentation.
Usage
Installation
This project is distributed as a Julia package, available on JuliaHub, so you must first install Julia on your system. Its usage follows the usual Julia package installation procedure, interactively:
```julia-repl
julia> ]
(@v1.10) pkg> add AdaptiveResonance
```
or programmatically:
```julia-repl
julia> using Pkg
julia> Pkg.add("AdaptiveResonance")
```
You may also add the package directly from GitHub to get the latest changes between releases:
```julia-repl
julia> ]
(@v1.10) pkg> add https://github.com/AP6YC/AdaptiveResonance.jl
```
Quickstart
Load the module with:
```julia
using AdaptiveResonance
```
The stateful information of ART modules is contained in structs with default constructors such as:
```julia
art = DDVFA()
```
You can pass module-specific options during construction with keyword arguments such as:
```julia
art = DDVFA(rho_ub=0.75, rho_lb=0.4)
```
For more advanced users, options for the modules are contained in Parameters.jl structs.
These options can be constructed with keyword arguments and passed to the model upon instantiation:
```julia
opts = opts_DDVFA(rho_ub=0.75, rho_lb=0.4)
art = DDVFA(opts)
```
Train and test the models with train! and classify:
```julia
# Unsupervised ART module
art = DDVFA()
# Supervised ARTMAP module
artmap = SFAM()

# Load some data
train_x, train_y, test_x, test_y = load_your_data()

# Unsupervised training and testing
train!(art, train_x)
y_hat_art = classify(art, test_x)

# Supervised training and testing
train!(artmap, train_x, train_y)
y_hat_artmap = classify(artmap, test_x)
```
train! and classify can accept incremental or batch data, where rows are features and columns are samples.
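As a minimal sketch of both presentation styles (random placeholder data; the per-sample call assumes the incremental `train!` method accepts a single feature vector, consistent with the column-major layout above):

```julia
using AdaptiveResonance

X = rand(4, 10)  # 4 features, 10 samples (columns are samples)

# Batch: train on the full matrix at once
art_batch = DDVFA()
train!(art_batch, X)

# Incremental: present one sample (one column) at a time
art_inc = DDVFA()
for i in axes(X, 2)
    train!(art_inc, X[:, i])
end
```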
Unsupervised ART modules can also accommodate simple supervised learning where internal categories are mapped to supervised labels with the keyword argument y:
```julia
# Unsupervised ART module
art = DDVFA()
train!(art, train_x, y=train_y)
```
These modules also support retrieving the "best-matching unit" in the case of complete mismatch (i.e., the next-best category if the presented sample is completely unrecognized) with the keyword argument get_bmu:
```julia
# Get the best-matching unit in the case of complete mismatch
y_hat_bmu = classify(art, test_x, get_bmu=true)
```
Implemented Modules
This project has implementations of the following ART (unsupervised) and ARTMAP (supervised) modules:
- ART (e.g., FuzzyART and DDVFA)
- ARTMAP (e.g., SFAM)
Because each of these modules is a framework for many variants in the literature, this project also implements these variants by changing their module options. Variants built upon these modules are:
- ART
  - GammaNormalizedFuzzyART: Gamma-Normalized FuzzyART (variant of FuzzyART).
- ARTMAP
  - DAM: Default ARTMAP (variant of SFAM).
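Since variants share the constructor pattern of their parent modules, instantiating them might look like the following sketch (it assumes `GammaNormalizedFuzzyART` and `DAM` are exported constructors, per the list above):

```julia
using AdaptiveResonance

# Variants are constructed like their parent frameworks
art = GammaNormalizedFuzzyART()   # FuzzyART variant
artmap = DAM()                    # SFAM variant
```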
In addition to these modules, this package contains the following accessory methods:
- ARTSCENE: the ARTSCENE algorithm's multiple-stage filtering process is implemented as `artscene_filter`. Each filter stage is implemented internally if further granularity is required.
- performance: classification accuracy is implemented as `performance`.
- complement_code: complement coding is implemented with `complement_code`. However, training and classification methods complement code their inputs unless they are passed `preprocessed=true`, indicating to the model that this step has already been done.
- linear_normalization: the first step to complement coding, `linear_normalization` normalizes input arrays within `[0, 1]`.
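To make the preprocessing concrete, here is a minimal, package-independent sketch of linear normalization followed by complement coding (the helper names `linear_norm` and `complement` are illustrative, not the package's API):

```julia
# Illustrative reimplementation of the two preprocessing steps; the
# package's own `linear_normalization` and `complement_code` are the
# canonical versions.

# Scale a feature vector into [0, 1]
function linear_norm(x::AbstractVector)
    lo, hi = extrema(x)
    return (x .- lo) ./ (hi - lo)
end

# Complement coding appends 1 .- x, doubling the feature dimension
complement(x::AbstractVector) = vcat(x, 1 .- x)

x = [2.0, 4.0, 6.0]
cc = complement(linear_norm(x))
# cc == [0.0, 0.5, 1.0, 1.0, 0.5, 0.0]; its sum always equals length(x)
```

In practice these rarely need to be called directly, since `train!` and `classify` complement code their inputs automatically unless `preprocessed=true` is passed.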
Contributing
If you have a question or concern, please raise an issue.