
OpenCompass

OpenCompass is an LLM evaluation platform supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, Llama2, Qwen, GLM, Claude, etc.) over 100+ datasets.

Install / Use

/learn @open-compass/Opencompass
About this skill

Quality Score

0/100

Supported Platforms

Claude Code
Claude Desktop

README

<div align="center"> <img src="docs/en/_static/image/logo.svg" width="500px"/> <br /> <br />

[![][github-release-shield]][github-release-link] [![][github-releasedate-shield]][github-releasedate-link] [![][github-contributors-shield]][github-contributors-link]<br> [![][github-forks-shield]][github-forks-link] [![][github-stars-shield]][github-stars-link] [![][github-issues-shield]][github-issues-link] [![][github-license-shield]][github-license-link]

<!-- [![PyPI](https://badge.fury.io/py/opencompass.svg)](https://pypi.org/project/opencompass/) -->

🌐Website | 📖CompassHub | 📊CompassRank | 📘Documentation | 🛠️Installation | 🤔Reporting Issues

English | 简体中文

[![][github-trending-shield]][github-trending-url]

</div> <p align="center"> 👋 join us on <a href="https://discord.gg/KKwfEbFj7U" target="_blank">Discord</a> and <a href="https://r.vansin.top/?r=opencompass" target="_blank">WeChat</a> </p>

[!IMPORTANT]

Star us, and you will receive all release notifications from GitHub without any delay! ⭐️

<details> <summary><kbd>Star History</kbd></summary> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=open-compass%2Fopencompass&theme=dark&type=Date"> <img width="100%" src="https://api.star-history.com/svg?repos=open-compass%2Fopencompass&type=Date"> </picture> </details>

🧭 Welcome to OpenCompass!

Just like a compass guides us on our journey, OpenCompass will guide you through the complex landscape of evaluating large language models. With its powerful algorithms and intuitive interface, OpenCompass makes it easy to assess the quality and effectiveness of your NLP models.

🚩🚩🚩 Explore opportunities at OpenCompass! We're currently hiring full-time researchers/engineers and interns. If you're passionate about LLM and OpenCompass, don't hesitate to reach out to us via email. We'd love to hear from you!

🔥🔥🔥 We are delighted to announce that OpenCompass has been recommended by Meta AI. Click Get Started on the Llama page for more information.

Attention<br /> Breaking Change Notice: In version 0.4.0, we are consolidating all AMOTIC configuration files (previously located in ./configs/datasets, ./configs/models, and ./configs/summarizers) into the opencompass package. Users are advised to update their configuration references to reflect this structural change.

🚀 What's New <a><img width="35" height="20" src="https://user-images.githubusercontent.com/12782558/212848161-5e783dd6-11e8-4fe0-bbba-39ffb77730be.png"></a>

  • [2026.02.05] OpenCompass now supports Intern-S1-Pro related general and scientific evaluation benchmarks. Please check Example for Evaluating Intern-S1-Pro and Model Card for more details! 🔥🔥🔥
  • [2025.12.08] OpenCompass now supports evaluation for SciReasoner. Please check Example for Evaluating SciReasoner and Project GitHub Repo for more details! 🔥🔥🔥
  • [2025.07.26] OpenCompass now supports Intern-S1 related general and scientific evaluation benchmarks. Please check Tutorial for Evaluating Intern-S1 for more details! 🔥🔥🔥
  • [2025.04.01] OpenCompass now supports CascadeEvaluator, a flexible evaluation mechanism that allows multiple evaluators to work in sequence. This enables creating customized evaluation pipelines for complex assessment scenarios. Check out the documentation for more details! 🔥🔥🔥
  • [2025.03.11] We now support evaluation for SuperGPQA, a benchmark for measuring LLM knowledge. 🔥🔥🔥
  • [2025.02.28] We have added a tutorial for the DeepSeek-R1 series models; please check Evaluating Reasoning Model for more details! 🔥🔥🔥
  • [2025.02.15] We have added two powerful evaluation tools: GenericLLMEvaluator for LLM-as-judge evaluations and MATHVerifyEvaluator for mathematical reasoning assessments. Check out the documentation for LLM Judge and Math Evaluation for more details! 🔥🔥🔥
  • [2025.01.16] We now support the InternLM3-8B-Instruct model which has enhanced performance on reasoning and knowledge-intensive tasks.
  • [2024.12.17] We have provided the evaluation script for the December CompassAcademic, which allows users to easily reproduce the official evaluation results by configuring it.
  • [2024.11.14] OpenCompass now offers support for a sophisticated benchmark designed to evaluate complex reasoning skills — MuSR. Check out the demo and give it a spin! 🔥🔥🔥
  • [2024.11.14] OpenCompass now supports the brand new long-context language model evaluation benchmark — BABILong. Have a look at the demo and give it a try! 🔥🔥🔥
  • [2024.10.14] We now support the OpenAI multilingual QA dataset MMMLU. Feel free to give it a try! 🔥🔥🔥
  • [2024.09.19] We now support Qwen2.5 (0.5B to 72B) with multiple backends (HuggingFace/vLLM/LMDeploy). Feel free to give them a try! 🔥🔥🔥
  • [2024.09.17] We now support OpenAI o1 (o1-mini-2024-09-12 and o1-preview-2024-09-12). Feel free to give them a try! 🔥🔥🔥
  • [2024.09.05] We now support answer extraction through model post-processing to provide a more accurate representation of the model's capabilities. As part of this update, we have integrated XFinder as our first post-processing model. For more detailed information, please refer to the documentation, and give it a try! 🔥🔥🔥
  • [2024.08.20] OpenCompass now supports SciCode: A Research Coding Benchmark Curated by Scientists. 🔥🔥🔥
  • [2024.08.16] OpenCompass now supports the brand new long-context language model evaluation benchmark — RULER. RULER provides an evaluation of long-context including retrieval, multi-hop tracing, aggregation, and question answering through flexible configurations. Check out the RULER evaluation config now! 🔥🔥🔥
  • [2024.08.09] We have released the example data and configuration for the CompassBench-202408, welcome to CompassBench for more details. 🔥🔥🔥
  • [2024.08.01] We supported the Gemma2 models. Welcome to try! 🔥🔥🔥
  • [2024.07.23] We now support ModelScope datasets, so you can load them on demand without downloading all the data to your local disk. Welcome to try! 🔥🔥🔥
  • [2024.07.17] We are excited to announce the release of NeedleBench's technical report. We invite you to visit our support documentation for detailed evaluation guidelines. 🔥🔥🔥
  • [2024.07.04] OpenCompass now supports InternLM2.5, which has outstanding reasoning capability, a 1M context window, and stronger tool use. You can try the models in OpenCompass Config and InternLM. 🔥🔥🔥
  • [2024.06.20] OpenCompass now supports one-click switching between inference acceleration backends, enhancing the efficiency of the evaluation process. In addition to the default HuggingFace inference backend, it also supports the popular backends LMDeploy and vLLM. This feature is available via a simple command-line switch and through deployment APIs. For detailed usage, see the documentation. 🔥🔥🔥
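The CascadeEvaluator idea mentioned above — a cheap rule-based check first, with an expensive judge handling only the samples the first stage could not decide — can be sketched in plain Python. Everything below (names, signatures, the substring "judge") is an illustrative stand-in, not OpenCompass's actual API:

```python
# Illustrative cascade-evaluation sketch (hypothetical names, not
# OpenCompass's CascadeEvaluator interface): each stage may return
# True/False or None to defer to the next, more expensive stage.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Sample:
    prediction: str
    reference: str

def exact_match(s: Sample) -> Optional[bool]:
    """Fast first-stage check; returns None when it cannot decide."""
    if s.prediction.strip().lower() == s.reference.strip().lower():
        return True
    return None  # defer to the next evaluator

def substring_judge(s: Sample) -> Optional[bool]:
    """Stand-in for an expensive LLM judge: here, a substring heuristic."""
    return s.reference.strip().lower() in s.prediction.strip().lower()

def cascade(sample: Sample,
            stages: list[Callable[[Sample], Optional[bool]]]) -> bool:
    for stage in stages:
        verdict = stage(sample)
        if verdict is not None:
            return verdict
    return False  # no stage could confirm the answer

samples = [
    Sample("42", "42"),                 # resolved by exact match
    Sample("The answer is 42.", "42"),  # falls through to the judge
    Sample("No idea.", "42"),           # judged incorrect
]
results = [cascade(s, [exact_match, substring_judge]) for s in samples]
print(results)  # [True, True, False]
```

The design point is that the judge is only invoked for samples the cheap check leaves unresolved, which is what makes cascaded pipelines economical for large benchmarks.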

More

📊 Leaderboard

We provide OpenCompass Leaderboard for the community to rank all public models and API models. If you would like to join the evaluation, please provide the model repository URL or a standard API interface to the email address opencompass@pjlab.org.cn.

You can also refer to Guide to Reproducing CompassAcademic Leaderboard Results to quickly reproduce the leaderboard results.

<p align="right"><a href="#top">🔝Back to top</a></p>

🛠️ Installation

Below are the steps for quick installation and dataset preparation.

💻 Environment Setup

We highly recommend using conda to manage your Python environment.

  • Create your virtual environment

    conda create --name opencompass python=3.10 -y
    conda activate opencompass
    
  • Install OpenCompass via pip

      pip install -U opencompass

      ## Full installation (with support for more datasets)
      # pip install "opencompass[full]"
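Whatever the installation path, an evaluation run ultimately reduces to comparing model predictions against dataset references and aggregating a metric. A minimal sketch of that final step in plain Python (a generic illustration, not OpenCompass's scoring code):

```python
# Generic accuracy metric over (prediction, reference) pairs; this is a
# plain-Python illustration of what an evaluation harness computes, not
# OpenCompass's actual implementation.
def accuracy(predictions: list[str], references: list[str]) -> float:
    if len(predictions) != len(references):
        raise ValueError("predictions and references must align")
    correct = sum(p.strip() == r.strip()
                  for p, r in zip(predictions, references))
    return 100.0 * correct / len(references)

preds = ["4", "Paris", "blue"]
refs = ["4", "Paris", "red"]
print(f"{accuracy(preds, refs):.1f}")  # 66.7
```

Real harnesses layer answer extraction, normalization, and per-dataset metrics on top of this, but the shape — align predictions with references, score, aggregate — is the same.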


View on GitHub
GitHub Stars: 6.8k
Category: Customer
Updated: 7h ago
Forks: 746

Languages

Python

Security Score

100/100

Audited on Mar 19, 2026

No findings