<p align="center"> <img src="https://raw.githubusercontent.com/xdssio/xetrack/main/docs/images/logo.jpg" alt="logo" width="400" /> </p> <p align="center"> <a href="https://github.com/xdssio/xetrack/actions/workflows/ci.yml"> <img src="https://github.com/xdssio/xetrack/actions/workflows/ci.yml/badge.svg" alt="CI Status" /> </a> <a href="https://pypi.org/project/xetrack/"> <img src="https://img.shields.io/pypi/v/xetrack.svg" alt="PyPI version" /> </a> <a href="https://pypi.org/project/xetrack/"> <img src="https://img.shields.io/pypi/pyversions/xetrack.svg" alt="Python versions" /> </a> <a href="https://github.com/xdssio/xetrack/blob/main/LICENSE"> <img src="https://img.shields.io/badge/license-MIT-blue.svg" alt="License: MIT" /> </a> <a href="https://github.com/xdssio/xetrack/issues"> <img src="https://img.shields.io/github/issues/xdssio/xetrack.svg" alt="GitHub issues" /> </a> <a href="https://github.com/xdssio/xetrack/network/members"> <img src="https://img.shields.io/github/forks/xdssio/xetrack.svg" alt="GitHub forks" /> </a> <a href="https://github.com/xdssio/xetrack/stargazers"> <img src="https://img.shields.io/github/stars/xdssio/xetrack.svg" alt="GitHub stars" /> </a> </p>

xetrack

Lightweight, local-first experiment tracker and benchmark store built on SQLite and DuckDB.

Why xetrack Exists

Most experiment trackers, such as Weights & Biases, rely on cloud servers. xetrack is a lightweight, local package for tracking benchmarks and experiments and for monitoring structured data.
It focuses on simplicity and flexibility: you create a Tracker and let it record benchmark results, model training, and inference monitoring. Later, retrieve the data as a pandas or polars DataFrame, or connect to the underlying database directly.
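
Because the store is a single local database file, any SQL client can open it. A minimal sketch with Python's standard library (the filename here is illustrative; it is whatever path you pass to Tracker):

```python
import sqlite3

# Open the tracker's database file directly (sqlite3 creates it if missing).
con = sqlite3.connect("database.db")

# List the tables xetrack has created so far.
tables = [name for (name,) in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)
con.close()
```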

Features

  • Simple
  • Embedded
  • Fast
  • Pandas & Polars support
  • SQL-like
  • Object store with deduplication
  • CLI for basic functions
  • Multiprocessing reads and writes
  • Loguru logs integration
  • Experiment tracking
  • Model monitoring

Installation

pip install xetrack
pip install "xetrack[duckdb]"  # use duckdb as the engine
pip install "xetrack[assets]"  # assets manager for saving objects
pip install "xetrack[cache]"   # enable function result caching
pip install "xetrack[polars]"  # use polars instead of pandas for DataFrames

Examples

Complete examples for every feature are available in the examples/ directory:

# Run all examples
python examples/run_all.py

# Run individual examples
python examples/01_quickstart.py
python examples/02_track_functions.py
# ... etc

See examples/README.md for full documentation of all 9+ examples.

Quickstart

from xetrack import Tracker

tracker = Tracker('database.db', 
                  params={'model': 'resnet18'}
                  )
tracker.log({"accuracy":0.9, "loss":0.1, "epoch":1}) # All you really need

tracker.latest
{'accuracy': 0.9, 'loss': 0.1, 'epoch': 1, 'model': 'resnet18', 'timestamp': '18-08-2023 11:02:35.162360',
 'track_id': 'cd8afc54-5992-4828-893d-a4cada28dba5'}


tracker.to_df(all=True)  # retrieve all the runs as dataframe
                    timestamp                              track_id     model  loss  epoch  accuracy
0  26-09-2023 12:17:00.342814  398c985a-dc15-42da-88aa-6ac6cbf55794  resnet18   0.1      1       0.9

Multiple experiment types: Use different table names to organize different types of experiments in the same database.

model_tracker = Tracker('experiments.db', table='model_experiments')
data_tracker = Tracker('experiments.db', table='data_experiments')

Params are values that are added to every subsequent row:

tracker.set_params({'model': 'resnet18', 'dataset': 'cifar10'})
tracker.log({"accuracy":0.9, "loss":0.1, "epoch":2})

{'accuracy': 0.9, 'loss': 0.1, 'epoch': 2, 'model': 'resnet18', 'dataset': 'cifar10', 
 'timestamp': '26-09-2023 12:18:40.151756', 'track_id': '398c985a-dc15-42da-88aa-6ac6cbf55794'}

You can also set a value to an entire run with set_value ("back in time"):

tracker.set_value('test_accuracy', 0.9) # Only known at the end of the experiment
tracker.to_df()

                    timestamp                              track_id     model  loss  epoch  accuracy  dataset  test_accuracy
0  26-09-2023 12:17:00.342814  398c985a-dc15-42da-88aa-6ac6cbf55794  resnet18   0.1      1       0.9      NaN            0.9
2  26-09-2023 12:18:40.151756  398c985a-dc15-42da-88aa-6ac6cbf55794  resnet18   0.1      2       0.9  cifar10            0.9

Track functions

You can track any function.

  • The return value is logged before it is returned

tracker = Tracker('database.db',
    log_system_params=True,
    log_network_params=True,
    measurement_interval=0.1)
image = tracker.track(read_image, *args, **kwargs)
tracker.latest
{'result': 571084, 'name': 'read_image', 'time': 0.30797290802001953, 'error': '', 'disk_percent': 0.6,
 'p_memory_percent': 0.496507, 'cpu': 0.0, 'memory_percent': 32.874608, 'bytes_sent': 0.0078125,
 'bytes_recv': 0.583984375}

Or with a wrapper:


@tracker.wrap(params={'name':'foofoo'})
def foo(a: int, b: str):
    return a + len(b)

result = foo(1, 'hello')
tracker.latest
{'function_name': 'foo', 'args': "[1, 'hello']", 'kwargs': '{}', 'error': '', 'function_time': 4.0531158447265625e-06, 
 'function_result': 6, 'name': 'foofoo', 'timestamp': '26-09-2023 12:21:02.200245', 'track_id': '398c985a-dc15-42da-88aa-6ac6cbf55794'}

Automatic Dataclass and Pydantic BaseModel Unpacking

NEW: When tracking functions, xetrack automatically unpacks frozen dataclasses and Pydantic BaseModels into individual tracked fields, prefixed with the parameter name (e.g. config_learning_rate).

This is especially useful for ML experiments where you have complex configuration objects:

from dataclasses import dataclass

@dataclass(frozen=True)
class TrainingConfig:
    learning_rate: float
    batch_size: int
    epochs: int
    optimizer: str = "adam"

@tracker.wrap()
def train_model(config: TrainingConfig):
    # Your training logic here
    return {"accuracy": 0.95, "loss": 0.05}

config = TrainingConfig(learning_rate=0.001, batch_size=32, epochs=10)
result = train_model(config)

# All config fields are automatically unpacked and tracked!
tracker.latest
{
    'function_name': 'train_model',
    'config_learning_rate': 0.001,      # ← Unpacked from dataclass
    'config_batch_size': 32,            # ← Unpacked from dataclass
    'config_epochs': 10,                # ← Unpacked from dataclass
    'config_optimizer': 'adam',         # ← Unpacked from dataclass
    'accuracy': 0.95,
    'loss': 0.05,
    'timestamp': '...',
    'track_id': '...'
}

Works with multiple dataclasses:

@dataclass(frozen=True)
class ModelConfig:
    model_type: str
    num_layers: int

@dataclass(frozen=True)
class DataConfig:
    dataset: str
    batch_size: int

def experiment(model_cfg: ModelConfig, data_cfg: DataConfig):
    return {"score": 0.92}

result = tracker.track(
    experiment,
    args=[
        ModelConfig(model_type="transformer", num_layers=12),
        DataConfig(dataset="cifar10", batch_size=64)
    ]
)

# Result includes: model_cfg_model_type, model_cfg_num_layers, 
#                  data_cfg_dataset, data_cfg_batch_size, score

Also works with Pydantic BaseModel:

from pydantic import BaseModel

class ExperimentConfig(BaseModel):
    experiment_name: str
    seed: int
    use_gpu: bool = True

@tracker.wrap()
def run_experiment(cfg: ExperimentConfig):
    return {"status": "completed"}

config = ExperimentConfig(experiment_name="exp_001", seed=42)
result = run_experiment(config)

# Automatically tracks: cfg_experiment_name, cfg_seed, cfg_use_gpu, status

Benefits:

  • Clean function signatures (one config object instead of many parameters)
  • All config values automatically tracked individually for easy filtering/analysis
  • Works with both tracker.track() and @tracker.wrap() decorator
  • Supports both frozen and non-frozen dataclasses
  • Compatible with Pydantic BaseModel via model_dump()
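
The unpacking behavior can be imitated in plain Python. This is a conceptual sketch, not xetrack's actual code; the helper name `unpack` is ours:

```python
from dataclasses import dataclass, asdict, is_dataclass

def unpack(name, value):
    """Flatten a dataclass argument into prefixed scalar fields,
    mimicking how xetrack prefixes unpacked config values."""
    if is_dataclass(value):
        return {f"{name}_{k}": v for k, v in asdict(value).items()}
    return {name: value}

@dataclass(frozen=True)
class TrainingConfig:
    learning_rate: float
    batch_size: int

row = unpack("config", TrainingConfig(learning_rate=0.001, batch_size=32))
print(row)  # {'config_learning_rate': 0.001, 'config_batch_size': 32}
```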

Track assets (Oriented for ML models)

Requirements: pip install xetrack[assets] (installs sqlitedict)

When you log a non-primitive value that is not a list or a dict, xetrack saves it as an asset with deduplication and logs the object's hash:

  • Tip: If you plan to log the same object many times, log it once and then insert its hash in future rows to save encoding and hashing time.
tracker = Tracker('database.db', params={'model': 'logistic regression'})
lr = LogisticRegression().fit(X_train, y_train)
tracker.log({'accuracy': float(lr.score(X_test, y_test)), 'lr': lr})
{'accuracy': 0.9777777777777777, 'lr': '53425a65a40a49f4',  # <-- this is the model hash
    'dataset': 'iris', 'model': 'logistic regression', 'timestamp': '2023-12-27 12:21:00.727834', 'track_id': 'wisteria-turkey-4392'}

model = tracker.get('53425a65a40a49f4') # retrieve an object
model.score(X_test, y_test)
0.9777777777777777
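
Content-addressed deduplication can be sketched with the standard library. This is only an illustration of the idea; xetrack itself serializes with cloudpickle and stores assets via sqlitedict, and the `put`/`store` names here are ours:

```python
import hashlib
import pickle

def content_hash(obj) -> str:
    # Identical objects serialize to identical bytes, so they share a hash.
    return hashlib.sha256(pickle.dumps(obj)).hexdigest()[:16]

store = {}  # hash -> serialized blob, kept once per unique object

def put(obj) -> str:
    h = content_hash(obj)
    store.setdefault(h, pickle.dumps(obj))  # no-op if already stored
    return h

h1 = put([1, 2, 3])
h2 = put([1, 2, 3])  # same content: same hash, no second copy stored
print(h1 == h2, len(store))  # True 1
```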

You can retrieve the model via the CLI if you only need the model in production and don't want to carry the rest of the file:

# bash
xt assets export database.db 53425a65a40a49f4 model.cloudpickle
# python
import cloudpickle
with open("model.cloudpickle", 'rb') as f:
    model = cloudpickle.loads(f.read())
# LogisticRegression()

Function Result Caching

Xetrack provides transparent disk-based caching for expensive function results using diskcache. When enabled, results are automatically cached based on function name, arguments, and keyword arguments.

Installation

pip install xetrack[cache]

Basic Usage

Simply provide a `cache
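
The caching behavior described above can be sketched in plain Python. This in-memory version only illustrates the keying scheme (function name plus pickled arguments); the real implementation persists results to disk via diskcache, and the `cached` decorator name is ours:

```python
import functools
import hashlib
import pickle

def cached(func):
    """Memoize by function name + arguments, mirroring the keying
    described above (in-memory; xetrack persists via diskcache)."""
    memo = {}

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        key = hashlib.sha256(
            pickle.dumps((func.__name__, args, sorted(kwargs.items())))
        ).hexdigest()
        if key not in memo:
            memo[key] = func(*args, **kwargs)  # run only on a cache miss
        return memo[key]

    return wrapper

calls = []

@cached
def slow_square(x):
    calls.append(x)  # records how often the body actually runs
    return x * x

print(slow_square(4), slow_square(4), calls)  # 16 16 [4]
```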
