Optimum

🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization tools

<!--- Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->

<h1 align="center"><p>🤗 Optimum</p></h1>

<p align="center">
  <a href="https://pypi.org/project/optimum/"><img alt="PyPI - License" src="https://img.shields.io/pypi/l/optimum"/></a>
  <a href="https://pypi.org/project/optimum/"><img alt="PyPI - Python Version" src="https://img.shields.io/pypi/pyversions/optimum"/></a>
  <a href="https://pypi.org/project/optimum/"><img alt="PyPI - Version" src="https://img.shields.io/pypi/v/optimum"/></a>
  <a href="https://pypi.org/project/optimum/"><img alt="PyPI - Downloads" src="https://img.shields.io/pypi/dm/optimum"/></a>
  <a href="https://huggingface.co/docs/optimum/index"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/optimum/index.svg?down_color=red&down_message=offline&up_message=online"/></a>
</p>

<p align="center">
  Optimum is an extension of Transformers 🤖 Diffusers 🧨 TIMM 🖼️ and Sentence-Transformers 🤗, providing a set of optimization tools and enabling maximum efficiency to train and run models on targeted hardware, while keeping things easy to use.
</p>

Installation

Optimum can be installed using pip as follows:

python -m pip install optimum

If you'd like to use the accelerator-specific features of Optimum, you can check the documentation and install the required dependencies according to the table below:

| Accelerator | Installation |
| :--- | :--- |
| ONNX | pip install --upgrade --upgrade-strategy eager optimum[onnx] |
| ONNX Runtime | pip install --upgrade --upgrade-strategy eager optimum[onnxruntime] |
| ONNX Runtime GPU | pip install --upgrade --upgrade-strategy eager optimum[onnxruntime-gpu] |
| OpenVINO | pip install --upgrade --upgrade-strategy eager optimum[openvino] |
| NVIDIA TensorRT-LLM | docker run -it --gpus all --ipc host huggingface/optimum-nvidia |
| AMD Instinct GPUs and Ryzen AI NPU | pip install --upgrade --upgrade-strategy eager optimum[amd] |
| AWS Trainium & Inferentia | pip install --upgrade --upgrade-strategy eager optimum[neuronx] |
| Intel Gaudi Accelerators (HPU) | pip install --upgrade --upgrade-strategy eager optimum[habana] |
| FuriosaAI | pip install --upgrade --upgrade-strategy eager optimum[furiosa] |

The --upgrade --upgrade-strategy eager option is needed to ensure the different packages are upgraded to the latest possible version.

To install from source:

python -m pip install git+https://github.com/huggingface/optimum.git

For the accelerator-specific features, specify the corresponding extra, optimum[accelerator_type], when installing from source:

python -m pip install optimum[onnxruntime]@git+https://github.com/huggingface/optimum.git

Accelerated Inference

Optimum provides multiple tools to export and run optimized models on various ecosystems:

  • ONNX / ONNX Runtime, one of the most popular open formats for model export, and a high-performance inference engine for deployment.
  • OpenVINO, a toolkit for optimizing, quantizing and deploying deep learning models on Intel hardware.
  • ExecuTorch, PyTorch’s native solution for on-device inference across mobile and edge devices.
  • Intel Gaudi Accelerators enabling optimal performance on first-gen Gaudi, Gaudi2 and Gaudi3.
  • AWS Inferentia for accelerated inference on Inf2 and Inf1 instances.
  • NVIDIA TensorRT-LLM.

The export and optimizations can be done both programmatically and through the command line.

ONNX + ONNX Runtime

🚨🚨🚨 The ONNX integration was moved to optimum-onnx, so make sure to follow its installation instructions 🚨🚨🚨

Before you begin, make sure you have all the necessary libraries installed:

pip install --upgrade --upgrade-strategy eager optimum[onnx]

It is possible to export Transformers, Diffusers, Sentence Transformers and TIMM models to the ONNX format, and to easily perform graph optimization as well as quantization.

For more information on the ONNX export, please check the documentation.

Once the model is exported to the ONNX format, we provide Python classes enabling you to run the exported ONNX model seamlessly, using ONNX Runtime as the backend.

For this, make sure you have ONNX Runtime installed; for more information, check out the installation instructions.

You can find more details on how to run ONNX models with the ORTModelForXXX classes here.

Intel (OpenVINO + NNCF)

Before you begin, make sure you have all the necessary libraries installed.

pip install --upgrade --upgrade-strategy eager optimum[openvino]

You can find more information on the different integrations in our documentation and in the examples of optimum-intel.

ExecuTorch

Before you begin, make sure you have all the necessary libraries installed:

pip install optimum-executorch@git+https://github.com/huggingface/optimum-executorch.git

Users can export Transformers models to ExecuTorch and run inference on edge devices within PyTorch's ecosystem.

For more information about exporting Transformers models to ExecuTorch, please check the documentation for Optimum-ExecuTorch.
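As a rough sketch, the export can also be driven from the command line; the model name, task, and recipe below are illustrative assumptions and may differ depending on your optimum-executorch version:

```shell
# Export a small causal LM to the ExecuTorch format with the XNNPACK recipe
optimum-cli export executorch \
  --model "HuggingFaceTB/SmolLM2-135M" \
  --task text-generation \
  --recipe xnnpack \
  --output_dir smollm2_executorch
```

The resulting .pte artifact can then be deployed to mobile and edge targets through the ExecuTorch runtime.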

Quanto

Quanto is a PyTorch quantization backend which allows you to quantize a model either using the Python API or the optimum-cli.

You can see more details and examples in the Quanto repository.

Accelerated training

Optimum provides wrappers around the original Transformers Trainer to enable training on powerful hardware easily. We support many providers:

Intel Gaudi Accelerators

Before you begin, make sure you have all the necessary libraries installed:

pip install --upgrade --upgrade-strategy eager optimum[habana]

You can find examples in the documentation and in the examples.
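As a rough sketch of the wrapper pattern (runnable only on a machine with Gaudi accelerators and the Habana software stack; the output directory and Gaudi config name are illustrative):

```python
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

# GaudiTrainingArguments mirrors transformers.TrainingArguments,
# adding Gaudi-specific options such as use_habana and gaudi_config_name
training_args = GaudiTrainingArguments(
    output_dir="./gaudi_out",
    use_habana=True,
    use_lazy_mode=True,
    gaudi_config_name="Habana/bert-base-uncased",
)

# GaudiTrainer is then a drop-in replacement for transformers.Trainer:
# trainer = GaudiTrainer(model=model, args=training_args, train_dataset=train_dataset)
# trainer.train()
```

The same drop-in pattern applies to the other hardware wrappers, such as the Trainer provided for AWS Trainium below.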

AWS Trainium

Before you begin, make sure you have all the necessary libraries installed:

pip install --upgrade --upgrade-strategy eager optimum[neuronx]

You can find examples in the documentation and in the tutorials.
