
OLMo

Modeling, training, eval, and inference code for OLMo


<div align="center"> <!-- <img src="https://github.com/allenai/OLMo/assets/8812459/774ac485-a535-4768-8f7c-db7be20f5cc3" width="300"/> --> <br> <br> <h1>OLMo: Open Language Model</h1> </div> <p align="center"> <a href="https://github.com/allenai/OLMo/blob/main/LICENSE"> <img alt="GitHub License" src="https://img.shields.io/github/license/allenai/OLMo"> </a> <a href="https://github.com/allenai/OLMo/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/allenai/OLMo.svg"> </a> <a href="https://arxiv.org/pdf/2501.00656.pdf"> <img alt="Paper URL" src="https://img.shields.io/badge/arxiv-2402.00838-blue"> </a> <a href="https://playground.allenai.org"> <img alt="Playground" src="https://img.shields.io/badge/Ai2-Playground-F0529C"> </a> <a href="https://discord.gg/sZq3jTNVNG"> <img alt="Discord" src="https://img.shields.io/badge/Discord%20-%20blue?style=flat&logo=discord&label=Ai2&color=%235B65E9"> </a> </p>

⚠️ NOTICE ⚠️ This repository is out of date with our more recent releases and is no longer active. For the latest OLMo releases and updates, please visit: https://github.com/allenai/OLMo-core/

OLMo is a repository for training and using AI2's state-of-the-art open language models. It is designed by scientists, for scientists.

Installation

First, install PyTorch following the instructions specific to your operating system.

For training and fine-tuning, we recommend installing from source:

git clone https://github.com/allenai/OLMo.git
cd OLMo
pip install -e .[all]

You can also install from PyPI with:

pip install ai2-olmo
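After installing, a quick sanity check can confirm the relevant packages are importable. This is a minimal sketch, not part of the OLMo repo; it assumes the package installs a module importable as `olmo` alongside PyTorch:

```python
# Quick post-install sanity check (a sketch; module names are assumptions).
import importlib.util


def check_installed(*modules):
    """Return the subset of `modules` that can be imported."""
    return [m for m in modules if importlib.util.find_spec(m) is not None]


# Prints whichever of the two are actually installed in this environment.
print(check_installed("torch", "olmo"))
```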

Pretraining

OLMo pretraining follows a two-stage training procedure. In the first stage, we train on large amounts of mostly web-based data: OLMo-mix-1124. In the second stage, we train on a smaller amount of high-quality, targeted data: Dolmino-mix-1124.

You can find all the checkpoints, saved at minimum every 1000 training steps, in both OLMo and Hugging Face formats:

| Variant | OLMo Format (Stage 1) | OLMo Format (Stage 2) | Hugging Face Format |
|---------|-----------------------|-----------------------|---------------------|
| OLMo-2 1B | OLMo-2 1B | OLMo-2 1B | Hugging Face for the 1B variant |
| OLMo-2 7B | OLMo-2 7B | OLMo-2 7B | Hugging Face for the 7B variant |
| OLMo-2 13B | OLMo-2 13B | OLMo-2 13B | Hugging Face for the 13B variant |
| OLMo-2 32B | OLMo-2 32B | OLMo-2 32B | Hugging Face for the 32B variant |

Note: The 32B variant was trained on our new trainer. To train or fine-tune OLMo-2 32B, visit OLMo-core.

Steps to reproduce

To reproduce any of the training processes described below, run this:

torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config}

For the training config, use any of the configs listed below.

If you want to override any of the settings in the training config without having to write a new config every time, you can do this:

torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
  --setting1=value \
  --setting2=value \
  --setting3.subsetting1=value
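The dotted override syntax can be pictured as updating nested keys in the loaded config. The sketch below is not OLMo's actual parser (the function name and config keys are hypothetical); it only illustrates how `--setting3.subsetting1=value` maps onto a nested dictionary:

```python
def apply_override(config: dict, key: str, value):
    """Set a possibly-nested key like 'optimizer.learning_rate' in a config dict."""
    parts = key.split(".")
    node = config
    for part in parts[:-1]:
        # Descend into (or create) each intermediate mapping.
        node = node.setdefault(part, {})
    node[parts[-1]] = value
    return config


cfg = {"optimizer": {"learning_rate": 3e-4}, "save_overwrite": False}
apply_override(cfg, "optimizer.learning_rate", 1e-4)
apply_override(cfg, "save_overwrite", True)
print(cfg)
```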

The training configs below refer to training data that gets streamed in live over HTTP. To reproduce at large scale, we recommend downloading the files locally and changing the paths to point to your local file system.
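One way to picture the "download locally and repoint" step is a small helper that maps each HTTP(S) data URL onto a path in a local mirror and substitutes it into the config. This is a sketch; the mirror layout, example URL, and function name are all assumptions, not part of the OLMo scripts:

```python
from urllib.parse import urlparse


def localize(url: str, root: str = "/data/olmo") -> str:
    """Map an HTTP(S) data URL onto a local mirror path (hypothetical layout)."""
    parsed = urlparse(url)
    if parsed.scheme in ("http", "https"):
        # Mirror layout: <root>/<host>/<remote path>
        return f"{root}/{parsed.netloc}{parsed.path}"
    return url  # Already a local path; leave unchanged.


print(localize("https://olmo-data.org/part-000.npy"))
```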

To run on Apple silicon devices:

python scripts/train.py {path_to_train_config}

Example:

python scripts/train.py configs/tiny/OLMo-20M.yaml --save_overwrite

Note: You need to upgrade PyTorch to 2.5.x to run.

Stage 1

Stage 1 is the biggest stage, where we train on 4T or 5T tokens of largely web-based data.

| | OLMo2 1B | OLMo2 7B | OLMo2 13B |
|------------------|----------|----------|-----------|
| Number of tokens | 4 Trillion | 4 Trillion | 5 Trillion |
| Checkpoint | stage1-step1907359-tokens4001B | stage1-step928646-tokens3896B | stage1-step596057-tokens5001B |
| Training config | OLMo2-1B-stage1.yaml | OLMo2-7B-stage1.yaml | OLMo2-13B-stage1.yaml |
| WandB | wandb.ai/OLMo2-1B | wandb.ai/OLMo2-7B | wandb.ai/OLMo2-13B |

You can find the .csv.gz files containing the training data here.

Stage 2 for the 1B

For the 1B model, we trained three times with different data orders on 50B high-quality tokens, and used the last checkpoint of the seed-42 run as the final checkpoint.

| | Checkpoint | Training config | WandB |
|------------------------|------------|-----------------|-------|
| random seed 42069 | stage2-ingredient1-step23852-tokens51B | OLMo2-1B-stage2-seed42069.yaml | wandb.ai/OLMo2-1B |
| random seed 666 | stage2-ingredient2-step23852-tokens51B | OLMo2-1B-stage2-seed666.yaml | wandb.ai/OLMo2-1B |
| random seed 42 (main) | stage2-ingredient3-step23852-tokens51B | OLMo2-1B-stage2-seed42.yaml | wandb.ai/OLMo2-1B |

Stage 2 for the 7B

For the 7B model, we train three times with different data order on 50B high quality tokens, and then average ("soup") the models.

| | Checkpoint | Training config | WandB |
|-------------------|------------|-----------------|-------|
| random seed 42 | stage2-ingredient1-step11931-tokens50B | OLMo2-7B-stage2-seed42.yaml | wandb.ai/OLMo2-7B |
| random seed 42069 | [stage2-ingredient2-step11931-tokens50B](https://huggingface.co/allenai/OLMo-2-1124-7B/tree/stage2-ingredient2 |
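"Souping" here means averaging the parameters of the runs elementwise. The sketch below shows the idea over plain Python lists rather than real tensors (the function name and toy weights are hypothetical); an actual soup would average `torch` state dicts key by key in the same way:

```python
def soup(state_dicts):
    """Average several 'state dicts' (name -> list of floats) elementwise."""
    n = len(state_dicts)
    return {
        key: [
            sum(sd[key][i] for sd in state_dicts) / n
            for i in range(len(state_dicts[0][key]))
        ]
        for key in state_dicts[0]
    }


# Three toy "runs" with a single parameter tensor "w".
runs = [{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}, {"w": [5.0, 6.0]}]
print(soup(runs))  # elementwise mean of the three runs
```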
