
<p align="center"> <a href="https://github.com/SimonBlanke/Hyperactive"> <picture> <source media="(prefers-color-scheme: dark)" srcset="./docs/images/hyperactive_logo_ink_dark.svg"> <source media="(prefers-color-scheme: light)" srcset="./docs/images/hyperactive_logo_ink.svg"> <img src="./docs/images/hyperactive_logo_ink.svg" width="400" alt="Hyperactive Logo"> </picture> </a> </p>
<h3 align="center"> A unified interface for optimization algorithms and experiments in Python. </h3> <p align="center"> <a href="https://github.com/SimonBlanke/Hyperactive/actions"><img src="https://img.shields.io/github/actions/workflow/status/SimonBlanke/Hyperactive/test.yml?style=for-the-badge&logo=githubactions&logoColor=white&label=tests" alt="Tests"></a> <a href="https://codecov.io/gh/SimonBlanke/Hyperactive"><img src="https://img.shields.io/codecov/c/github/SimonBlanke/Hyperactive?style=for-the-badge&logo=codecov&logoColor=white" alt="Coverage"></a> </p> <br> <table align="center"> <tr> <td align="right"><b>Documentation</b></td> <td align="center">&#9656;</td> <td> <a href="https://hyperactive.readthedocs.io/en/latest/">Homepage</a> &#183; <a href="https://hyperactive.readthedocs.io/en/latest/user_guide.html">User Guide</a> &#183; <a href="https://hyperactive.readthedocs.io/en/latest/api_reference.html">API Reference</a> &#183; <a href="https://hyperactive.readthedocs.io/en/latest/examples.html">Examples</a> </td> </tr> <tr> <td align="right"><b>On this page</b></td> <td align="center">&#9656;</td> <td> <a href="#key-features">Features</a> &#183; <a href="#examples">Examples</a> &#183; <a href="#core-concepts">Concepts</a> &#183; <a href="#citation">Citation</a> </td> </tr> </table> <br>
<a href="https://github.com/SimonBlanke/Hyperactive"> <img src="./docs/images/bayes_ackley.gif" width="240" align="right" alt="Bayesian Optimization on Ackley Function"> </a>

Hyperactive provides 31 optimization algorithms across 3 backends (GFO, Optuna, scikit-learn), accessible through a unified experiment-based interface. The library separates optimization problems from algorithms, enabling you to swap optimizers without changing your experiment code.

Designed for hyperparameter tuning, model selection, and black-box optimization. Native integrations with scikit-learn, sktime, skpro, and PyTorch allow tuning ML models with minimal setup. Define your objective, specify a search space, and run.

<p> <a href="https://www.linkedin.com/company/german-center-for-open-source-ai"><img src="https://img.shields.io/badge/LinkedIn-Follow-0A66C2?style=flat-square&logo=linkedin" alt="LinkedIn"></a> <a href="https://discord.gg/7uKdHfdcJG"><img src="https://img.shields.io/badge/Discord-Chat-5865F2?style=flat-square&logo=discord&logoColor=white" alt="Discord"></a> </p> <br>

Installation

pip install hyperactive
<p> <a href="https://pypi.org/project/hyperactive/"><img src="https://img.shields.io/pypi/v/hyperactive?style=flat-square&color=blue" alt="PyPI"></a> <a href="https://pypi.org/project/hyperactive/"><img src="https://img.shields.io/pypi/pyversions/hyperactive?style=flat-square" alt="Python"></a> </p> <details> <summary>Optional dependencies</summary>
pip install hyperactive[sklearn-integration]  # scikit-learn integration
pip install hyperactive[sktime-integration]   # sktime/skpro integration
pip install hyperactive[all_extras]           # Everything including Optuna
</details> <br>

Key Features

<table> <tr> <td width="33%"> <a href="https://hyperactive.readthedocs.io/en/latest/user_guide/optimizers/index.html"><b>31 Optimization Algorithms</b></a><br> <sub>Local, global, population-based, and model-based methods across 3 backends (GFO, Optuna, sklearn).</sub> </td> <td width="33%"> <a href="https://hyperactive.readthedocs.io/en/latest/user_guide/experiments.html"><b>Experiment Abstraction</b></a><br> <sub>Clean separation between what to optimize (experiments) and how to optimize (algorithms).</sub> </td> <td width="33%"> <a href="https://hyperactive.readthedocs.io/en/latest/user_guide/search_spaces.html"><b>Flexible Search Spaces</b></a><br> <sub>Discrete, continuous, and mixed parameter types. Define spaces with NumPy arrays or lists.</sub> </td> </tr> <tr> <td width="33%"> <a href="https://hyperactive.readthedocs.io/en/latest/user_guide/integrations.html"><b>ML Framework Integrations</b></a><br> <sub>Native support for scikit-learn, sktime, skpro, and PyTorch with minimal code changes.</sub> </td> <td width="33%"> <a href="https://hyperactive.readthedocs.io/en/latest/user_guide/optimizers/optuna.html"><b>Multiple Backends</b></a><br> <sub>GFO algorithms, Optuna samplers, and sklearn search methods through one unified API.</sub> </td> <td width="33%"> <a href="https://hyperactive.readthedocs.io/en/latest/api_reference.html"><b>Stable & Tested</b></a><br> <sub>5+ years of development, comprehensive test coverage, and active maintenance since 2019.</sub> </td> </tr> </table> <br>

Quick Start

import numpy as np
from hyperactive.opt.gfo import HillClimbing

# Define objective function (maximize)
def objective(params):
    x, y = params["x"], params["y"]
    return -(x**2 + y**2)  # Negative paraboloid, optimum at (0, 0)

# Define search space
search_space = {
    "x": np.arange(-5, 5, 0.1),
    "y": np.arange(-5, 5, 0.1),
}

# Run optimization
optimizer = HillClimbing(
    search_space=search_space,
    n_iter=100,
    experiment=objective,
)
best_params = optimizer.solve()

print(f"Best params: {best_params}")

Output:

Best params: {'x': 0.0, 'y': 0.0}
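Conceptually, hill climbing starts from a random point in the discrete search space, perturbs one dimension at a time, and keeps any move that improves the score. The following dependency-free sketch (an illustration of the idea, not Hyperactive's actual implementation) runs the same strategy on the paraboloid objective above:

```python
import random

def objective(params):
    # Same negative paraboloid as the Quick Start; maximum at (0, 0)
    x, y = params["x"], params["y"]
    return -(x**2 + y**2)

# Discrete grid standing in for np.arange(-5, 5, 0.1)
grid = [round(-5 + 0.1 * i, 1) for i in range(100)]
search_space = {"x": grid, "y": grid}

def hill_climb(objective, search_space, n_iter=5000, seed=0):
    rng = random.Random(seed)
    # Track the current position as an index into each dimension's grid
    pos = {k: rng.randrange(len(v)) for k, v in search_space.items()}
    best_score = objective({k: v[pos[k]] for k, v in search_space.items()})
    for _ in range(n_iter):
        # Propose a one-step move along a single random dimension
        cand = dict(pos)
        dim = rng.choice(list(search_space))
        step = cand[dim] + rng.choice([-1, 1])
        cand[dim] = min(max(step, 0), len(search_space[dim]) - 1)
        score = objective({k: v[cand[k]] for k, v in search_space.items()})
        if score > best_score:  # keep only strict improvements
            pos, best_score = cand, score
    return {k: v[pos[k]] for k, v in search_space.items()}, best_score

best_params, best_score = hill_climb(objective, search_space)
```

Because the paraboloid is unimodal, every accepted move heads toward the optimum; Hyperactive's GFO backend adds refinements (step-size control, restarts, parallel runs) on top of this basic loop.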
<br>

Core Concepts

Hyperactive separates what you optimize from how you optimize. Define your experiment (objective function) and search space once, then swap optimizers freely without changing your code. The unified interface abstracts away backend differences, letting you focus on your optimization problem.

flowchart TB
    subgraph USER["Your Code"]
        direction LR
        F["def objective(params):<br/>    return score"]
        SP["search_space = {<br/>    'x': np.arange(...),<br/>    'y': [1, 2, 3]<br/>}"]
    end

    subgraph HYPER["Hyperactive"]
        direction TB
        OPT["Optimizer"]

        subgraph BACKENDS["Backends"]
            GFO["GFO<br/>21 algorithms"]
            OPTUNA["Optuna<br/>8 algorithms"]
            SKL["sklearn<br/>2 algorithms"]
            MORE["...<br/>more to come"]
        end

        OPT --> GFO
        OPT --> OPTUNA
        OPT --> SKL
        OPT --> MORE
    end

    subgraph OUT["Output"]
        BEST["best_params"]
    end

    F --> OPT
    SP --> OPT
    HYPER --> OUT

Optimizer: Implements the search strategy (Hill Climbing, Bayesian, Particle Swarm, etc.).

Search Space: Defines valid parameter combinations as NumPy arrays or lists.

Experiment: Your objective function or a built-in experiment (SklearnCvExperiment, etc.).

Best Parameters: The optimizer returns the parameters that maximize the objective.
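The practical payoff of this separation is that the experiment is defined once and can be handed to any search strategy with the same call shape. A dependency-free sketch of the pattern (toy optimizer functions for illustration, not Hyperactive's internals or exact API):

```python
import random

# One experiment: objective and search space, defined once
def experiment(params):
    return -(params["x"] ** 2)  # maximum at x = 0

search_space = {"x": [round(-2 + 0.1 * i, 1) for i in range(41)]}

# Two interchangeable strategies sharing one call signature
def random_search(experiment, search_space, n_iter, seed=0):
    rng = random.Random(seed)
    candidates = ({k: rng.choice(v) for k, v in search_space.items()}
                  for _ in range(n_iter))
    return max(candidates, key=experiment)

def grid_search(experiment, search_space):
    # Exhaustive scan; only sensible for tiny 1-D spaces
    return max(({"x": v} for v in search_space["x"]), key=experiment)

# Swap strategies without touching the experiment definition
best_rs = random_search(experiment, search_space, n_iter=200)
best_gs = grid_search(experiment, search_space)
```

In Hyperactive the same shape holds at a larger scale: the `experiment=` argument accepts your objective (or a built-in experiment object), and changing the search strategy means changing only the optimizer class.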

<br>

Examples

<details open> <summary><b>Scikit-learn Hyperparameter Tuning</b></summary>
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

from hyperactive.integrations.sklearn import OptCV
from hyperactive.opt.gfo import HillClimbing

# Load data
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Define search space and optimizer
search_space = {"kernel": ["linear", "rbf"], "C": [1, 10, 100]}
optimizer = HillClimbing(search_space=search_space, n_iter=20)

# Create tuned estimator
tuned_svc = OptCV(SVC(), optimizer)
tuned_svc.fit(X_train, y_train)

print(f"Best params: {tuned_svc.best_params_}")
print(f"Test accuracy: {tuned_svc.score(X_test, y_test):.3f}")
</details> <details> <summary><b>Bayesian Optimization</b></summary>
import numpy as np
from hyperactive.opt.gfo import BayesianOptimizer

def ackley(params):
    x, y = params["x"], params["y"]
    return -(
        -20 * np.exp(-0.2 * np.sqrt(0.5 * (x**2 + y**2)))
        - np.exp(0.5 * (np.cos(2 * np.pi * x) + np.cos(2 * np.pi * y)))
        + np.e + 20
    )

search_space = {
    "x": np.arange(-5, 5, 0.01),
    "y": np.arange(-5, 5, 0.01),
}

optimizer = BayesianOptimizer(
    search_space=search_space,
    n_iter=50,
    experiment=ackley,
)
best_params = optimizer.solve()
</details> <details> <summary><b>Particle Swarm Optimization</b></summary>
import numpy as np
from hyperactive.opt.gfo import ParticleSwarmOptimizer

def rastrigin(params):
    A = 10
    values = [params[f"x{i}"] for i in range(5)]
    return -sum(v**2 - A * np.cos(2 * np.pi * v) + A for v in values)

search_space = {f"x{i}": np.arange(-5.12, 5.12, 0.1) for i in range(5)}

optimizer = ParticleSwarmOptimizer(
    search_space=search_space,
    n_iter=500,
    experiment=rastrigin,
    population_size=20,
)
best_params = optimizer.solve()
</details> <details> <summary><b>Experiment Abstraction with SklearnCvExperiment</b></summary>
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

from hyperactive.experiment.integrations import SklearnCvExperiment
from hyperactive.opt.gfo import HillClimbing

X, y = load_iris(return_X_y=True)

# Create reusable experiment
sklearn_exp = SklearnCvExperiment(
    estimator=SVC(),
    scoring=accuracy_score,
    cv=KFold(n_splits=3, shuffle=True),
    X=X,
    y=y,
)

search_space = {
    "C": np.logspace(-2, 2, num=10),
    "kernel": ["linear", "rbf"],
}

optimizer = HillClimbing(
    search_space=search_space,
    n_iter=50,
    experiment=sklearn_exp,
)
best_params = optimizer.solve()
</details> <br>