Evaluate
🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
Tip: For more recent evaluation approaches, for example for evaluating LLMs, we recommend our newer and more actively maintained library LightEval.
🤗 Evaluate is a library that makes evaluating and comparing models and reporting their performance easier and more standardized.
It currently contains:
- implementations of dozens of popular metrics: the existing metrics cover a variety of tasks spanning from NLP to Computer Vision, and include dataset-specific metrics. With a simple command like accuracy = load("accuracy"), get any of these metrics ready to use for evaluating an ML model in any framework (NumPy/Pandas/PyTorch/TensorFlow/JAX); see the example after this list.
- comparisons and measurements: comparisons are used to measure the difference between models, and measurements are tools to evaluate datasets.
- an easy way of adding new evaluation modules to the 🤗 Hub: you can create new evaluation modules and push them to a dedicated Space in the 🤗 Hub with evaluate-cli create [metric name], which allows you to easily compare different metrics and their outputs for the same sets of references and predictions.
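For example, loading a metric and computing a score takes only a couple of lines (a minimal sketch; the output value in the comment is illustrative):

```python
import evaluate

# Download and instantiate the accuracy metric from the Hub.
accuracy = evaluate.load("accuracy")

# Predictions and references can be plain Python lists, NumPy arrays, or framework tensors.
results = accuracy.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])
print(results)  # {'accuracy': 0.75}
```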
🔎 Find a metric, comparison, measurement on the Hub
🤗 Evaluate also has lots of useful features like:
- Type checking: the input types are checked to make sure that you are using the right input formats for each metric.
- Metric cards: each metric comes with a card that describes its values, limitations and ranges, as well as providing examples of its usage and usefulness.
- Community metrics: Metrics live on the Hugging Face Hub and you can easily add your own metrics for your project or to collaborate with others.
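For instance, the information from a module's card is also available programmatically on the loaded object (a minimal sketch; the exact attribute contents depend on the module):

```python
import evaluate

accuracy = evaluate.load("accuracy")

# The loaded module exposes its documentation and expected input features,
# which are also used for input type checking.
print(accuracy.description)
print(accuracy.features)
```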
Installation
With pip
🤗 Evaluate can be installed from PyPI and should be installed in a virtual environment (venv or conda, for instance):
pip install evaluate
Usage
🤗 Evaluate's main methods are:
- evaluate.list_evaluation_modules() to list the available metrics, comparisons and measurements
- evaluate.load(module_name, **kwargs) to instantiate an evaluation module
- results = module.compute(**kwargs) to compute the result of an evaluation module
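Put together, a typical evaluation loop looks roughly like this (a sketch; the module name and batches are illustrative):

```python
import evaluate

# Discover available modules (optionally filtered by type: "metric", "comparison", "measurement").
print(evaluate.list_evaluation_modules(module_type="metric")[:5])

f1 = evaluate.load("f1")

# Scores can be accumulated batch by batch, e.g. inside a model evaluation loop,
# and computed once at the end.
for predictions, references in [([0, 1], [0, 1]), ([1, 1], [0, 1])]:
    f1.add_batch(predictions=predictions, references=references)
print(f1.compute())  # {'f1': 0.8}
```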
Adding a new evaluation module
First install the necessary dependencies to create a new metric with the following command:
pip install evaluate[template]
Then you can get started with the following command which will create a new folder for your metric and display the necessary steps:
evaluate-cli create "Awesome Metric"
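The generated folder contains a Python module that you fill in with your metric's logic. As a rough sketch (the class name, score key and feature types below are illustrative, and the exact generated template may differ), a custom metric subclasses evaluate.Metric and implements _info() and _compute():

```python
import datasets
import evaluate


class AwesomeMetric(evaluate.Metric):
    def _info(self):
        # Describe the module: documentation strings and the expected input types.
        return evaluate.MetricInfo(
            description="Fraction of predictions that exactly match the references.",
            citation="",
            inputs_description="predictions and references are lists of integer labels.",
            features=datasets.Features(
                {
                    "predictions": datasets.Value("int64"),
                    "references": datasets.Value("int64"),
                }
            ),
        )

    def _compute(self, predictions, references):
        # Return a dict mapping score names to values.
        correct = sum(p == r for p, r in zip(predictions, references))
        return {"awesome_score": correct / len(references)}
```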
See this step-by-step guide in the documentation for detailed instructions.
Credits
Thanks to @marella for letting us use the evaluate namespace on PyPI, previously used by his library.
