SAM 3: Segment Anything with Concepts

Meta Superintelligence Labs

Nicolas Carion*, Laura Gustafson*, Yuan-Ting Hu*, Shoubhik Debnath*, Ronghang Hu*, Didac Suris*, Chaitanya Ryali*, Kalyan Vasudev Alwala*, Haitham Khedr*, Andrew Huang, Jie Lei, Tengyu Ma, Baishan Guo, Arpit Kalla, Markus Marks, Joseph Greer, Meng Wang, Peize Sun, Roman Rädle, Triantafyllos Afouras, Effrosyni Mavroudi, Katherine Xu°, Tsung-Han Wu°, Yu Zhou°, Liliane Momeni°, Rishi Hazra°, Shuangrui Ding°, Sagar Vaze°, Francois Porcher°, Feng Li°, Siyuan Li°, Aishwarya Kamath°, Ho Kei Cheng°, Piotr Dollar†, Nikhila Ravi†, Kate Saenko†, Pengchuan Zhang†, Christoph Feichtenhofer

* core contributor, ° intern, † project lead, order is random within groups

[Paper] [Project] [Demo] [Blog] [BibTeX]

[Figure: SAM 3 architecture]

SAM 3 is a unified foundation model for promptable segmentation in images and videos. It can detect, segment, and track objects using text or visual prompts such as points, boxes, and masks. Compared to its predecessor SAM 2, SAM 3 introduces the ability to exhaustively segment all instances of an open-vocabulary concept specified by a short text phrase or by exemplars. Unlike prior work, SAM 3 can handle a vastly larger set of open-vocabulary prompts. It achieves 75-80% of human performance on our new SA-Co benchmark, which contains 270K unique concepts, over 50 times more than existing benchmarks.

This breakthrough is driven by an innovative data engine that has automatically annotated over 4 million unique concepts, creating the largest high-quality open-vocabulary segmentation dataset to date. In addition, SAM 3 introduces a new model architecture featuring a presence token that improves discrimination between closely related text prompts (e.g., “a player in white” vs. “a player in red”), as well as a decoupled detector–tracker design that minimizes task interference and scales efficiently with data.

<p align="center"> <img src="assets/dog.gif" width=380 /> <img src="assets/player.gif" width=380 /> </p>

Latest updates

03/27/2026 -- SAM 3.1 Object Multiplex is released. It introduces a shared-memory approach for joint multi-object tracking that is significantly faster without sacrificing accuracy.

  • A new suite of improved model checkpoints (denoted SAM 3.1) is released on Hugging Face. See RELEASE_SAM3p1.md for full details.
    • To use the new SAM 3.1 checkpoints, you need the latest model code from this repo. If you have installed an earlier version, pull the latest code (with git pull) and then reinstall following Installation below.

Installation

Prerequisites

  • Python 3.12 or higher
  • PyTorch 2.7 or higher
  • CUDA-compatible GPU with CUDA 12.6 or higher
  1. Create a new Conda environment:
conda create -n sam3 python=3.12
conda deactivate
conda activate sam3
  2. Install PyTorch with CUDA support:
pip install torch==2.10.0 torchvision --index-url https://download.pytorch.org/whl/cu128
  3. Clone the repository and install the package:
git clone https://github.com/facebookresearch/sam3.git
cd sam3
pip install -e .
  4. Install additional dependencies for example notebooks or development:
# For running example notebooks
pip install -e ".[notebooks]"

# For development
pip install -e ".[train,dev]"
  5. Install optional dependencies for faster inference:
pip install einops ninja && pip install flash-attn-3 --no-deps --index-url https://download.pytorch.org/whl/cu128
pip install git+https://github.com/ronghanghu/cc_torch.git
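
After installation, a quick sanity check can confirm that the package imports and that PyTorch sees a CUDA device (a minimal sketch; it only assumes the sam3 package and the PyTorch build installed above):

import torch
import sam3

# SAM 3 expects a CUDA-capable GPU (CUDA 12.6 or higher).
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("sam3 installed at:", sam3.__file__)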

Getting Started

⚠️ Before using SAM 3, please request access to the checkpoints on the SAM 3 Hugging Face repo. Once accepted, you need to authenticate to download the checkpoints, e.g. by generating an access token and running hf auth login.
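
If you prefer to authenticate from Python instead of the CLI, the huggingface_hub library exposes an equivalent login call (a minimal sketch; the token string below is a placeholder for the access token you generate on Hugging Face):

from huggingface_hub import login

# Equivalent to running `hf auth login` on the command line.
login(token="hf_XXXXXXXXXXXXXXXX")  # placeholder token, not a real credential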

Basic Usage

import torch
#################################### For Image ####################################
from PIL import Image
from sam3.model_builder import build_sam3_image_model
from sam3.model.sam3_image_processor import Sam3Processor
# Load the model
model = build_sam3_image_model()
processor = Sam3Processor(model)
# Load an image
image = Image.open("<YOUR_IMAGE_PATH.jpg>")
inference_state = processor.set_image(image)
# Prompt the model with text
output = processor.set_text_prompt(state=inference_state, prompt="<YOUR_TEXT_PROMPT>")

# Get the masks, bounding boxes, and scores
masks, boxes, scores = output["masks"], output["boxes"], output["scores"]

#################################### For Video ####################################

from sam3.model_builder import build_sam3_video_predictor

video_predictor = build_sam3_video_predictor()
video_path = "<YOUR_VIDEO_PATH>" # a JPEG folder or an MP4 video file
# Start a session
response = video_predictor.handle_request(
    request=dict(
        type="start_session",
        resource_path=video_path,
    )
)
response = video_predictor.handle_request(
    request=dict(
        type="add_prompt",
        session_id=response["session_id"],
        frame_index=0, # Arbitrary frame index
        text="<YOUR_TEXT_PROMPT>",
    )
)
output = response["outputs"]
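
Returning to the image example above, the snippet below shows one way to inspect the per-object outputs. The output keys masks, boxes, and scores come from the README; the layout assumed here (masks of shape (N, H, W) with binary values, boxes of shape (N, 4) in xyxy pixel coordinates, scores of shape (N,)) is an assumption and may differ from the actual format:

# Continues from the image example: masks, boxes, scores returned by set_text_prompt.
for i in range(len(scores)):
    mask_area = int(masks[i].sum().item())          # assumed binary mask: foreground pixel count
    box = [round(float(v), 1) for v in boxes[i]]    # assumed xyxy pixel coordinates
    print(f"object {i}: score={float(scores[i]):.3f}, box={box}, mask area={mask_area} px")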

Examples

The examples directory contains notebooks demonstrating how to use SAM 3 with various types of prompts.

There are additional notebooks in the examples directory that demonstrate how to use SAM 3 for interactive instance segmentation in images and videos (SAM 1/2 tasks), or as a tool for an MLLM, and how to run evaluations on the SA-Co dataset.

To run the Jupyter notebook examples:

# Make sure you have the notebooks dependencies installed
pip install -e ".[notebooks]"

# Start Jupyter notebook
jupyter notebook examples/sam3_image_predictor_example.ipynb

Model

SAM 3 consists of a detector and a tracker that share a vision encoder. It has 848M parameters. The detector is a DETR-based model conditioned on text, geometry, and image exemplars. The tracker inherits the SAM 2 transformer encoder-decoder architecture, supporting video segmentation and interactive refinement.
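
As a rough sanity check of the model size, you can count parameters after building the image model with the build_sam3_image_model helper shown in Basic Usage (a minimal sketch; the image model may exclude the tracker, so the total may not match the full 848M figure exactly):

from sam3.model_builder import build_sam3_image_model

model = build_sam3_image_model()
n_params = sum(p.numel() for p in model.parameters())  # counts both trainable and frozen parameters
print(f"Parameters: {n_params / 1e6:.0f}M")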

Image Results

[Table: image results by model, covering instance segmentation (LVIS, SA-Co/Gold) and box detection (LVIS) benchmarks.]