GoMLX, an Accelerated ML and Math Framework


📖 About GoMLX

<img align="right" src="docs/gomlx_gopher2.png" alt="GoMLX Gopher" width="220px"/>

GoMLX is an easy-to-use set of Machine Learning and generic math libraries and tools. It can be seen as a PyTorch/Jax/TensorFlow for Go.

It can be used to train, fine-tune, modify, and combine machine learning models. It provides all the tools to make that work easy: from a complete set of differentiable operators all the way to UI tools that plot metrics while training in a notebook.

It runs almost everywhere Go runs, using a pure Go backend. It even runs in the browser with WASM (see demo created with GoMLX), and will likely work on embedded devices as well (see Tamago).

It also supports a highly optimized backend engine based on OpenXLA that uses just-in-time compilation to target CPUs, GPUs (Nvidia, and likely AMD ROCm, Intel, and Macs), and Google's TPUs. It also supports modern distributed execution (new, still being actively improved) across multiple TPUs or GPUs using XLA Shardy, an evolution of GSPMD.

It's the same engine that powers Google's Jax, TensorFlow, and PyTorch/XLA, and it matches their speed in many cases. Use this backend to train large models or with large datasets.
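The key idea behind such a backend is that a computation graph is built once, just-in-time compiled, and then executed many times. As a self-contained toy illustration of that pattern (not GoMLX's actual API; node and function names here are made up for the sketch), a tiny expression graph in pure Go can be "compiled" into a reusable closure:

```go
package main

import "fmt"

// Node is one operation in a tiny computation graph.
type Node struct {
	op   string  // "input", "const", "add", "mul"
	val  float64 // used by "const"
	a, b *Node   // operands
}

// Compile walks the graph once and returns a closure that
// evaluates it for any input value, mimicking how a backend
// compiles a graph up front and then executes it repeatedly.
func Compile(n *Node) func(x float64) float64 {
	switch n.op {
	case "input":
		return func(x float64) float64 { return x }
	case "const":
		v := n.val
		return func(float64) float64 { return v }
	case "add":
		fa, fb := Compile(n.a), Compile(n.b)
		return func(x float64) float64 { return fa(x) + fb(x) }
	case "mul":
		fa, fb := Compile(n.a), Compile(n.b)
		return func(x float64) float64 { return fa(x) * fb(x) }
	}
	panic("unknown op " + n.op)
}

func main() {
	// Graph for f(x) = x*x + 2, built once, evaluated many times.
	x := &Node{op: "input"}
	two := &Node{op: "const", val: 2}
	f := Compile(&Node{op: "add", a: &Node{op: "mul", a: x, b: x}, b: two})

	fmt.Println(f(3)) // 11
	fmt.Println(f(4)) // 18
}
```

A real backend does vastly more (tensors, fusion, hardware codegen), but the "compile once, call many times" shape is the same.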

> [!TIP]
> GoMLX was developed to be a full-featured ML platform for Go, ready for production and easy to experiment with for new ML ideas. See Long-Term Goals below.
>
> It strives to be simple to read and reason about, leading the user to a correct and transparent mental model of what is going on (no surprises), aligned with the Go philosophy, at the cost of being more verbose at times.
>
> It is also very flexible and easy to extend, making it a good fit for non-conventional ideas: use it to experiment with new optimizers, complex regularizers, funky multi-tasking, etc.
>
> Documentation is kept up to date (if it is not well documented, it is as if the code is not there), and error messages are informative (always with a stack trace) and aim to make issues easy to solve.

🗺️ Overview

GoMLX is a full-featured ML framework, supporting many well-known ML components
from the bottom to the top of the stack. But it is still only a slice of what major ML libraries/frameworks like TensorFlow, Jax, or PyTorch provide.

Examples developed using GoMLX:

Highlights:

  • Converting ONNX models to GoMLX with onnx-gomlx: both as an alternative to onnxruntime (leveraging XLA) and to further fine-tune models. See also go-huggingface to easily download ONNX model files from HuggingFace.
  • Docker "gomlx_jupyterlab" with integrated JupyterLab and GoNB (a Go kernel for Jupyter notebooks)
  • Three backends:
    1. xla: OpenXLA backend for CPUs, GPUs, and TPUs. State-of-the-art as these things go. For linux/amd64, linux/arm64 (CPU) and darwin/arm64 (CPU) for now. Using the go-xla Go version of the APIs.
    2. go: a pure Go backend (no C/C++ dependencies): slower but very portable (compiles to WASM, Windows, etc.).
    3. 🚀 NEW 🚀 go-coreml: Go bindings to Apple's CoreML, supporting Metal acceleration.
  • Autodiff: automatic differentiation—only gradients for now, no jacobian.
  • Context: automatic variable management for ML models.
  • ML layers library with some of the most popular machine learning "layers": FFN layers,
    various activation functions, layer and batch normalization, convolutions, pooling, dropout, Multi-Head-Attention (for transformer layers), LSTM, KAN (B-Splines, GR-KAN/KAT networks, Discrete-KAN, PiecewiseLinear KAN), PiecewiseLinear
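To make the autodiff bullet above concrete: reverse-mode automatic differentiation records, for each operation, how to route gradients back to its operands. The following is a self-contained toy sketch of that idea in plain Go (illustrative only, not GoMLX's implementation; it assumes each non-leaf node has a single consumer):

```go
package main

import "fmt"

// Value is a scalar in a dynamically built computation graph,
// carrying its data and a closure that propagates gradients backward.
type Value struct {
	data, grad float64
	backward   func()
}

func NewValue(x float64) *Value { return &Value{data: x} }

// Add returns a+b and records how to route gradients to the operands.
func Add(a, b *Value) *Value {
	out := &Value{data: a.data + b.data}
	out.backward = func() {
		a.grad += out.grad // d(a+b)/da = 1
		b.grad += out.grad // d(a+b)/db = 1
		if a.backward != nil {
			a.backward()
		}
		if b.backward != nil {
			b.backward()
		}
	}
	return out
}

// Mul returns a*b with the product-rule gradient.
func Mul(a, b *Value) *Value {
	out := &Value{data: a.data * b.data}
	out.backward = func() {
		a.grad += b.data * out.grad // d(a*b)/da = b
		b.grad += a.data * out.grad // d(a*b)/db = a
		if a.backward != nil {
			a.backward()
		}
		if b.backward != nil {
			b.backward()
		}
	}
	return out
}

func main() {
	// f(x) = x*x + 3x at x = 2; df/dx = 2x + 3 = 7.
	x := NewValue(2)
	f := Add(Mul(x, x), Mul(NewValue(3), x))
	f.grad = 1 // seed df/df = 1
	f.backward()
	fmt.Println(f.data, x.grad) // 10 7
}
```

GoMLX applies the same chain-rule bookkeeping, but symbolically over the whole graph and for tensors rather than scalars.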