
Chandra

OCR model that handles complex tables, forms, and handwriting with full layout information.

Install / Use

/learn @datalab-to/Chandra
README

<p align="center"> <img src="assets/datalab-logo.png" alt="Datalab Logo" width="150"/> </p> <h1 align="center">Datalab</h1> <p align="center"> <strong>State of the Art models for Document Intelligence</strong> </p> <p align="center"> <a href="https://opensource.org/licenses/Apache-2.0"><img src="https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg" alt="Code License"></a> <a href="https://www.datalab.to/pricing"><img src="https://img.shields.io/badge/Model%20License-OpenRAIL--M-blue.svg" alt="Model License"></a> <a href="https://discord.gg/KuZwXNGnfH"><img src="https://img.shields.io/badge/Discord-Join%20us-5865F2?logo=discord&logoColor=white" alt="Discord"></a> </p> <hr/>

Chandra OCR 2

Chandra OCR 2 is a state-of-the-art OCR model that converts images and PDFs into structured HTML/Markdown/JSON while preserving layout information.

News

  • 3/2026 - Chandra 2 is here with significant improvements to math, tables, layout, and multilingual OCR
  • 10/2025 - Chandra 1 launched

Features

  • Tops the external olmOCR benchmark, with significant improvements on our internal multilingual benchmarks
  • Convert documents to markdown, html, or json with detailed layout information
  • Support for 90+ languages (benchmark below)
  • Excellent handwriting support
  • Reconstructs forms accurately, including checkboxes
  • Strong performance with tables, math, and complex layouts
  • Extracts images and diagrams, and adds captions and structured data
  • Two inference modes: local (HuggingFace) and remote (vLLM server)
<img src="assets/examples/math/handwritten_math.png" width="600px"/>

Hosted API

  • We have a hosted API for Chandra here, which is more accurate and faster.
  • There is a free playground here if you want to try Chandra without installing.

Quickstart

The easiest way to start is with the CLI tools:

```shell
pip install chandra-ocr

# With vLLM (recommended, lightweight install)
chandra_vllm
chandra input.pdf ./output

# With HuggingFace (requires torch)
pip install chandra-ocr[hf]
chandra input.pdf ./output --method hf

# Interactive streamlit app
pip install chandra-ocr[app]
chandra_app
```
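For scripted batch runs, the same CLI can be driven from Python. A minimal sketch; the `build_chandra_cmd` helper is hypothetical, not part of the package:

```python
import subprocess

def build_chandra_cmd(input_path, output_dir, method="vllm", page_range=None):
    """Assemble a chandra CLI invocation as an argv list."""
    cmd = ["chandra", input_path, output_dir, "--method", method]
    if page_range:
        cmd += ["--page-range", page_range]
    return cmd

# Example: OCR pages 1-5 of a PDF with the vLLM backend.
cmd = build_chandra_cmd("input.pdf", "./output", page_range="1-5")
print(cmd)
# subprocess.run(cmd, check=True)  # uncomment once chandra-ocr is installed
```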

Benchmarks

Multilingual performance was a focus for us with Chandra 2. There isn't a good public multilingual OCR benchmark, so we built our own. It tests tables, math, reading order, layout, and text accuracy.

<img src="assets/benchmarks/multilingual.png" width="600px"/>

See full scores below. We also have a full 90-language benchmark.

We also benchmarked Chandra 2 with the widely accepted olmocr benchmark:

<img src="assets/benchmarks/bench.png" width="600px"/>

See full scores below.

Examples

| Type | Name | Link |
|------|--------------------------|------|
| Math | CS229 Textbook | View |
| Math | Handwritten Math | View |
| Math | Chinese Math | View |
| Tables | Statistical Distribution | View |
| Tables | Financial Table | View |
| Forms | Registration Form | View |
| Forms | Lease Form | View |
| Handwriting | Cursive Writing | View |
| Handwriting | Handwritten Notes | View |
| Languages | Arabic | View |
| Languages | Japanese | View |
| Languages | Hindi | View |
| Languages | Russian | View |
| Other | Charts | View |
| Other | Chemistry | View |

Installation

Package

```shell
# Base install (for vLLM backend)
pip install chandra-ocr

# With HuggingFace backend (includes torch, transformers)
pip install chandra-ocr[hf]

# With all extras
pip install chandra-ocr[all]
```

If you're using the HuggingFace method, we also recommend installing Flash Attention for better performance.

From Source

```shell
git clone https://github.com/datalab-to/chandra.git
cd chandra
uv sync
source .venv/bin/activate
```

Usage

CLI

Process single files or entire directories:

```shell
# Single file, with vLLM server (see below for how to launch vLLM)
chandra input.pdf ./output --method vllm

# Process all files in a directory with the local model
chandra ./documents ./output --method hf
```

CLI Options:

  • --method [hf|vllm]: Inference method (default: vllm)
  • --page-range TEXT: Page range for PDFs (e.g., "1-5,7,9-12")
  • --max-output-tokens INTEGER: Max tokens per page
  • --max-workers INTEGER: Parallel workers for vLLM
  • --include-images/--no-images: Extract and save images (default: include)
  • --include-headers-footers/--no-headers-footers: Include page headers/footers (default: exclude)
  • --batch-size INTEGER: Pages per batch (default: 28 for vllm, 1 for hf)

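The `--page-range` spec above mixes single pages and spans. A sketch of how such a string expands into page numbers; this parser is illustrative, not the package's own implementation:

```python
def parse_page_range(spec: str) -> list[int]:
    """Expand a range spec like "1-5,7,9-12" into a sorted list of pages."""
    pages = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            start, end = part.split("-")
            pages.update(range(int(start), int(end) + 1))
        else:
            pages.add(int(part))
    return sorted(pages)

print(parse_page_range("1-5,7,9-12"))  # → [1, 2, 3, 4, 5, 7, 9, 10, 11, 12]
```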
Output Structure:

Each processed file creates a subdirectory with:

  • <filename>.md - Markdown output
  • <filename>.html - HTML output
  • <filename>_metadata.json - Metadata (page info, token count, etc.)
  • Extracted images are saved directly in the output directory
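The per-file outputs can be picked up programmatically. A sketch, assuming only the file layout listed above (any field names inside the metadata JSON are not guaranteed):

```python
import json
from pathlib import Path

def load_outputs(output_dir: str, stem: str) -> tuple[str, dict]:
    """Read the Markdown text and metadata dict chandra wrote for one file."""
    base = Path(output_dir)
    markdown = (base / f"{stem}.md").read_text(encoding="utf-8")
    metadata = json.loads(
        (base / f"{stem}_metadata.json").read_text(encoding="utf-8")
    )
    return markdown, metadata
```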

Streamlit Web App

Launch the interactive demo for single-page processing:

```shell
chandra_app
```

vLLM Server (Optional)

For production deployments or batch processing, use the vLLM server:

```shell
chandra_vllm
```

This launches a Docker container with optimized inference settings. Configure via environment variables:

  • VLLM_API_BASE: Server URL (default: http://localhost:8000/v1)
  • VLLM_MODEL_NAME: Model name for the server (default: chandra)
  • VLLM_GPUS: GPU device IDs (default: 0)

You can also start your own vLLM server with the datalab-to/chandra-ocr-2 model.
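Since the server speaks the OpenAI-compatible protocol at `VLLM_API_BASE`, you can also call it directly. A sketch of building such a request body for one page image; the prompt text here is an assumption, as chandra's actual prompting is internal to the CLI:

```python
import base64

def build_ocr_request(image_bytes: bytes, model: str = "chandra") -> dict:
    """Build an OpenAI-style chat-completions body carrying one page image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
                {"type": "text", "text": "OCR this page to markdown."},
            ],
        }],
    }

# POST this dict as JSON to f"{VLLM_API_BASE}/chat/completions".
```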

Configuration

Settings can be configured via environment variables or a local.env file:

```shell
# Model settings
MODEL_CHECKPOINT=datalab-to/chandra-ocr-2
MAX_OUTPUT_TOKENS=12384

# vLLM settings
VLLM_API_BASE=http://localhost:8000/v1
VLLM_MODEL_NAME=chandra
VLLM_GPUS=0
```
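A sketch of how these settings could be resolved in code, falling back to the documented defaults; this helper is illustrative, as the package reads them through its own settings module:

```python
import os

# Defaults taken from the configuration section above.
DEFAULTS = {
    "MODEL_CHECKPOINT": "datalab-to/chandra-ocr-2",
    "VLLM_API_BASE": "http://localhost:8000/v1",
    "VLLM_MODEL_NAME": "chandra",
    "VLLM_GPUS": "0",
}

def setting(name: str) -> str:
    """Return the environment variable if set, else the documented default."""
    return os.environ.get(name, DEFAULTS[name])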

Commercial usage

This code is Apache 2.0, and our model weights use a modified OpenRAIL-M license (free for research, personal use, and startups under $2M in funding or revenue; they cannot be used to compete with our API). To remove the OpenRAIL license requirements, or for broader commercial licensing, visit our pricing page here.

Benchmark table

| Model | ArXiv | Old Scans Math | Tables | Old Scans | Headers and Footers | Multi column | Long tiny text | Base | Overall | Source |
|:--------------------------|:--------:|:--------------:|:--------:|:---------:|:-------------------:|:------------:|:--------------:|:----:|:--------------:|:------:|
| Datalab API | 90.4 | 90.2 | 90.7 | 54.6 | 91.6 | 83.7 | 92.3 | 99.9 | 86.7 ± 0.8 | Own benchmarks |
| Chandra 2 | 90.2 | 89.3 | 89.9 | 49.8 | 92.5 | 83.5 | 92.1 | 99.6 | 85.9 ± 0.8 | Own benchmarks |
| dots.ocr 1.5 | 85.9 | 85.5 | 90.7 | 48.2 | 94.0 | 85.3 | 81.6 | 99.7 | 83.9 | dots.ocr repo |
| Chandra 1 | 82.2 | 80.3 | 88.0 | 50.4 | 90.8 | 81.2 | 92.3 | 99.9 | 83.1 ± 0.9 | Own benchmarks |
| olmOCR 2 | 83.0 | 82.3 | 84.9 | 47.7 | 96.1 | 83.7 | 81.9 | 99.6 | 82.4 | olmocr repo |
| dots.ocr | 82.1 | 64.2 | 88.3 | 40.9 | 94.1 | 82.4 | 81.2 | 99.5 | 79.1 ± 1.0 | dots.ocr repo |
| olmOCR v0.3.0 | 78.6 | 79.9 | 72.9 | 43.9 | 95.1 | 77.3 | 81.2 | 98.9 | 78.5 ± 1.1 | olmocr repo |
| Datalab Marker v1.10.0 | 83.8 | 69.7 | 74.8 | 32.3 | 86.6 | 79.4 | 85.7 | 99.6 | 76.5 ± 1.0 | Own benchmarks |
| Deepseek OCR | 75.2 | 72.3 | 79.7 | 33.3 | 96.1 | | | | | |
