Meta Chameleon
Paper | Blog | Model Checkpoint Download | HuggingFace
This repository contains artifacts for the Meta Chameleon model from FAIR, Meta AI:
- Standalone Inference Code — a fast GPU-based inference implementation for running model checkpoints
- Input-Output Viewing — a harness for richly viewing multimodal model inputs and outputs with a browser-based tool
- Evaluation Prompts — mixed-modal and text-only prompts for human evaluation
System Requirements
Running the inference components and the input-output viewer currently requires a CUDA-capable GPU. If you'd like to run inference on other hardware, other inference implementations, including HuggingFace, are platform agnostic.
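Before launching any of the GPU-backed components, it can save time to confirm that a CUDA-capable GPU is visible to the system. A minimal stdlib sketch (not part of this repository) that checks for the nvidia-smi tool:

```python
import shutil
import subprocess

def cuda_gpu_available() -> bool:
    """Return True if nvidia-smi is on PATH and reports at least one GPU."""
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        result = subprocess.run(
            ["nvidia-smi", "--list-gpus"],
            capture_output=True, text=True, timeout=10,
        )
    except (subprocess.SubprocessError, OSError):
        return False
    return result.returncode == 0 and "GPU" in result.stdout

if __name__ == "__main__":
    if cuda_gpu_available():
        print("CUDA GPU detected")
    else:
        print("No CUDA GPU found; consider a platform-agnostic implementation")
```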
Getting Started
First, pip install this repository:
pip install -U git+https://github.com/facebookresearch/chameleon.git
Alternatively, if you want access to the full visualizer, you'll need to clone this repository (instead of installing), then pip install from the repository root:
git clone https://github.com/facebookresearch/chameleon.git
cd chameleon
pip install -e .
Model checkpoints and configs must be downloaded before running inference or the viewer. After requesting model access, run the following script, supplying the pre-signed download URL you were emailed when prompted:
python -m chameleon.download_data [pre-signed URL]
(you can also paste the command given in the email containing the download link)
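Pre-signed URLs expire, so a failed download is often just a lapsed link. If you want to sanity-check the URL before retrying, the sketch below is a purely illustrative heuristic — the actual query-parameter names depend on the CDN that issued your link, so adjust them to match the URL you received:

```python
from urllib.parse import urlparse, parse_qs

def looks_presigned(url: str) -> bool:
    """Heuristic: a pre-signed URL carries signature-style query parameters."""
    query = parse_qs(urlparse(url).query)
    # Common signature parameter names across CDNs; purely illustrative.
    markers = {"Signature", "X-Amz-Signature", "Policy", "Key-Pair-Id"}
    return any(marker in query for marker in markers)
```

For example, `looks_presigned("https://example.com/ckpt?Signature=abc")` returns True, while a bare URL with no query string returns False.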
Running the Viewer
The viewer visualizes multimodal model input and output. It is most easily run with docker-compose, which requires a clone of the repository rather than just a pip install.
The following runs both the service and viewer interface.
By default, this runs the 7B parameter model. You can change the model_path variable in ./config/model_viewer.yaml to select another model and alter other configuration:
docker-compose up --build
You can open the viewer at http://localhost:7654/
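The containers can take a while to come up after docker-compose up --build. A small stdlib polling sketch (assuming the default port above) that waits until the viewer responds before you open it:

```python
import time
import urllib.error
import urllib.request

def wait_for_service(url: str, timeout: float = 60.0) -> bool:
    """Poll url until the server responds or timeout seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2):
                return True
        except urllib.error.HTTPError:
            return True  # server responded, just with an error status
        except (urllib.error.URLError, OSError):
            time.sleep(1)  # not up yet; retry
    return False

if __name__ == "__main__":
    # Default viewer address from this README.
    if wait_for_service("http://localhost:7654/"):
        print("viewer is up")
    else:
        print("viewer did not respond in time")
```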
Running the MiniViewer
The miniviewer is a lightweight debug visualizer that can be run with:
python -m chameleon.miniviewer
This runs the 7B parameter model. To run the 30B model, use the following command:
python -m chameleon.miniviewer --model-size 30b
You can open the miniviewer at http://localhost:5000/.
License
Use of this repository and related resources is governed by the Chameleon Research License and the LICENSE file.
Citation
To cite the paper, model, or software, please use the following:
@article{Chameleon_Team_Chameleon_Mixed-Modal_Early-Fusion_2024,
author = {Chameleon Team},
doi = {10.48550/arXiv.2405.09818},
journal = {arXiv preprint arXiv:2405.09818},
title = {Chameleon: Mixed-Modal Early-Fusion Foundation Models},
url = {https://github.com/facebookresearch/chameleon},
year = {2024}
}