
Holocron

PyTorch implementations of recent Computer Vision tricks (ReXNet, RepVGG, Unet3p, YOLOv4, CIoU loss, AdaBelief, PolyLoss, MobileOne). Other additions: AdEMAMix

Install / Use

/learn @frgfm/Holocron

README

<p align="center">
  <img src="https://github.com/frgfm/Holocron/releases/download/v0.1.3/holocron_logo_text.png" width="40%">
</p>
<p align="center">
  <a href="https://github.com/frgfm/Holocron/actions/workflows/build.yml">
    <img alt="CI Status" src="https://img.shields.io/github/actions/workflow/status/frgfm/holocron/package.yml?branch=main&label=CI&logo=github&style=flat-square">
  </a>
  <a href="https://github.com/astral-sh/ruff">
    <img src="https://img.shields.io/badge/Linter-Ruff-FCC21B?style=flat-square&logo=ruff&logoColor=white" alt="ruff">
  </a>
  <a href="https://github.com/astral-sh/ty">
    <img src="https://img.shields.io/badge/Typecheck-Ty-261230?style=flat-square&logo=astral&logoColor=white" alt="ty">
  </a>
  <a href="https://www.codacy.com/gh/frgfm/Holocron/dashboard?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=frgfm/Holocron&amp;utm_campaign=Badge_Grade">
    <img src="https://app.codacy.com/project/badge/Grade/49fc8908c44b45d3b64131e49558f1e9"/>
  </a>
  <a href="https://codecov.io/gh/frgfm/holocron">
    <img src="https://img.shields.io/codecov/c/github/frgfm/holocron.svg?logo=codecov&style=flat-square" alt="Test coverage percentage">
  </a>
</p>
<p align="center">
  <a href="https://pypi.org/project/pylocron/">
    <img src="https://img.shields.io/pypi/v/pylocron.svg?logo=python&logoColor=fff&style=flat-square&label=PyPI" alt="PyPi Status">
  </a>
  <img alt="GitHub release (latest by date)" src="https://img.shields.io/github/v/release/frgfm/holocron?label=Release&logo=github">
  <img src="https://img.shields.io/pypi/pyversions/pylocron.svg?logo=Python&label=Python&logoColor=fff&style=flat-square" alt="pyversions">
  <a href="https://github.com/frgfm/Holocron/blob/main/LICENSE">
    <img src="https://img.shields.io/github/license/frgfm/Holocron.svg?label=License&logoColor=fff&style=flat-square" alt="License">
  </a>
</p>
<p align="center">
  <a href="https://huggingface.co/spaces/frgfm/holocron">
    <img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue" alt="Huggingface Spaces">
  </a>
  <a href="https://colab.research.google.com/github/frgfm/notebooks/blob/main/holocron/quicktour.ipynb">
    <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab">
  </a>
</p>
<p align="center">
  <a href="https://frgfm.github.io/holocron">
    <img src="https://img.shields.io/github/actions/workflow/status/frgfm/holocron/page-build.yml?branch=main&label=Documentation&logo=read-the-docs&logoColor=white&style=flat-square" alt="Documentation Status">
  </a>
</p>

Implementations of recent Deep Learning tricks in Computer Vision, easily paired up with your favorite framework and model zoo.

> *Holocrons were information-storage datacron devices used by both the Jedi Order and the Sith that contained ancient lessons or valuable information in holographic form.*
>
> Source: Wookieepedia

Quick Tour

[Open In Colab](https://colab.research.google.com/github/frgfm/notebooks/blob/main/holocron/quicktour.ipynb)

This project was created to provide quality implementations, greater developer flexibility, and maximum compatibility with the PyTorch ecosystem. For instance, here is a short snippet showcasing how Holocron models are meant to be used:

```python
import torch
from PIL import Image
from torchvision.transforms import InterpolationMode
from torchvision.transforms.v2 import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize

from holocron.models.classification import repvgg_a0

# Load your model
model = repvgg_a0(pretrained=True).eval()

# Read your image
img = Image.open(path_to_an_image).convert("RGB")

# Preprocessing
config = model.default_cfg
transform = Compose([
    Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
    PILToTensor(),
    ConvertImageDtype(torch.float32),
    Normalize(config['mean'], config['std']),
])

input_tensor = transform(img).unsqueeze(0)

# Inference
with torch.inference_mode():
    output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
print(config['classes'][probs.argmax().item()], probs.max().item())
```
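The last step above turns raw logits into a class label and a confidence score. As a framework-free sketch of that postprocessing (hypothetical logit values and class names, plain Python standing in for the tensor ops):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a flat list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical 4-class logits and labels, standing in for the model output
logits = [0.2, 3.1, -1.0, 0.5]
classes = ["cat", "dog", "car", "tree"]

probs = softmax(logits)
best = max(range(len(probs)), key=probs.__getitem__)
print(classes[best], round(probs[best], 3))  # → dog 0.873
```

Since softmax is monotonic, taking the argmax over logits or over probabilities selects the same class; the softmax is only needed for the confidence value.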

Installation

Prerequisites

Python 3.11 (or higher) and uv/pip are required to install Holocron.

Latest stable release

You can install the latest stable release of the package from PyPI as follows:

```shell
pip install pylocron
```

Developer mode

Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source (install Git first):

```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```

Paper references

- PyTorch layers for every need
- Models for vision tasks
- Vision-related operations
- Trying something else than Adam
- More goodies
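Among the vision-related operations, the CIoU loss mentioned in the intro combines three terms: plain IoU, a normalized center-distance penalty, and an aspect-ratio consistency penalty. A minimal stdlib sketch of the score for axis-aligned `(x1, y1, x2, y2)` boxes (an independent illustration of the published formula, not the library's implementation):

```python
import math

def ciou(box_a, box_b):
    """Complete IoU between two (x1, y1, x2, y2) boxes: IoU minus a
    center-distance penalty and an aspect-ratio consistency penalty."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Plain IoU
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)

    # Squared center distance over squared diagonal of the enclosing box
    rho2 = ((ax1 + ax2) - (bx1 + bx2)) ** 2 / 4 + ((ay1 + ay2) - (by1 + by2)) ** 2 / 4
    c2 = (max(ax2, bx2) - min(ax1, bx1)) ** 2 + (max(ay2, by2) - min(ay1, by1)) ** 2

    # Aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (
        math.atan((bx2 - bx1) / (by2 - by1)) - math.atan((ax2 - ax1) / (ay2 - ay1))
    ) ** 2
    alpha = v / (1 - iou + v) if v > 0 else 0.0

    return iou - rho2 / c2 - alpha * v

print(ciou((0, 0, 2, 2), (0, 0, 2, 2)))  # → 1.0
```

The corresponding loss is `1 - ciou(box_a, box_b)`: identical boxes score 1, while distant, non-overlapping boxes can go negative.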

Documentation

The full package documentation is available [here](https://frgfm.github.io/holocron) for detailed specifications.

Demo app

The project includes a minimal demo app built with Gradio.

You can check out the live demo, hosted on :hugs: [Hugging Face Spaces](https://huggingface.co/spaces/frgfm/holocron) :hugs:

Reference scripts

Reference scripts are provided to train your models using Holocron on well-known public datasets; they currently cover several vision tasks.

Latency benchmark

You crave SOTA performance, but don't know whether it fits your latency budget?

In the table below, you will find a latency benchmark for all supported models:

| Arch | GPU mean (std) | CPU mean (std) |
| ---- | -------------- | -------------- |
| repvgg_a0* | 3.14ms (0.87ms) | 23.28ms (1.21ms) |
| repvgg_a1* | 4.13ms (1.00ms) | 29.61ms (0.46ms) |
| repvgg_a2* | 7.35ms (1.11ms) | 46.87ms (1.27ms) |
| [repvgg_b0](https://frgfm.github.io/Holocron/latest/models.html#holocron.models.repvgg_b
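Mean/std figures like these are typically produced by timing repeated forward passes after an untimed warmup. A minimal stdlib sketch of such a harness, with a dummy workload standing in for `model(input_tensor)` (the repo's actual benchmark script may differ):

```python
import statistics
import time

def benchmark(fn, *, warmup=10, runs=100):
    """Time `fn` over `runs` calls after `warmup` untimed calls,
    returning (mean_ms, std_ms)."""
    for _ in range(warmup):
        fn()
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.mean(timings), statistics.stdev(timings)

# Dummy workload standing in for a model forward pass
mean_ms, std_ms = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"{mean_ms:.2f}ms ({std_ms:.2f}ms)")
```

The warmup matters most on GPU, where the first few calls pay one-off costs (kernel compilation, memory allocation) that would otherwise skew the mean.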
