judgy
A Python library for estimating success rates when using LLM judges for evaluation.
Table of Contents
- Overview
- Installation
- Quick Start
- How It Works
- API Reference
- Real-World Usage Pattern
- Requirements
- Testing
- Contributing
- License
Overview
When using Large Language Models (LLMs) as judges to evaluate other models or systems, the judge's own biases and errors can significantly impact the reliability of the evaluation. judgy provides tools to estimate the true success rate of your system by correcting for LLM judge bias, and uses bootstrap resampling to generate confidence intervals around that estimate.
Installation
Basic Installation
pip install judgy
Development Installation
git clone https://github.com/ai-evals-course/judgy.git
cd judgy
pip install -e ".[dev,plotting]"
Quick Start
import numpy as np
from judgy import estimate_success_rate
# Your data: 1 = Pass, 0 = Fail
test_labels = [1, 1, 0, 0, 1, 0, 1, 0] # Human labels on test set
test_preds = [1, 0, 0, 1, 1, 0, 1, 0] # LLM judge predictions on test set
unlabeled_preds = [1, 1, 0, 1, 0, 1, 0, 1] # LLM judge predictions on unlabeled data
# Estimate true pass rate with 95% confidence interval
theta_hat, lower_bound, upper_bound = estimate_success_rate(
test_labels=test_labels,
test_preds=test_preds,
unlabeled_preds=unlabeled_preds
)
print(f"Estimated true pass rate: {theta_hat:.3f}")
print(f"95% Confidence interval: [{lower_bound:.3f}, {upper_bound:.3f}]")
How It Works
The library implements a bias correction method based on the following steps:
- Judge Accuracy Estimation: Calculate the LLM judge's True Positive Rate (TPR) and True Negative Rate (TNR) using labeled test data
- Correction: Apply the correction formula to account for judge bias:

  θ̂ = (p_obs + TNR - 1) / (TPR + TNR - 1)

  where p_obs is the observed pass rate from the judge
- Bootstrap Confidence Intervals: Use bootstrap resampling to quantify uncertainty in the estimate
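The correction step above (the Rogan-Gladen estimator) can be sketched in a few lines of NumPy. This is a simplified illustration of the formula, not judgy's internal implementation; the function name is ours:

```python
import numpy as np

def rogan_gladen(test_labels, test_preds, unlabeled_preds):
    """Sketch of the bias correction described above."""
    y = np.asarray(test_labels)
    p = np.asarray(test_preds)
    tpr = np.mean(p[y == 1] == 1)   # judge's rate of catching true passes
    tnr = np.mean(p[y == 0] == 0)   # judge's rate of catching true fails
    p_obs = np.mean(unlabeled_preds)  # observed pass rate from the judge
    # Valid only for a better-than-random judge (TPR + TNR > 1)
    theta_hat = (p_obs + tnr - 1) / (tpr + tnr - 1)
    return float(np.clip(theta_hat, 0.0, 1.0))
```

For a perfect judge (TPR = TNR = 1) the formula reduces to θ̂ = p_obs, i.e. no correction is needed.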
API Reference
Core Function
estimate_success_rate(test_labels, test_preds, unlabeled_preds, bootstrap_iterations=20000, confidence_level=0.95)
Estimate true pass rate with bias correction and confidence intervals.
Parameters:
- test_labels: Array-like of 0/1 values (human labels on test set)
- test_preds: Array-like of 0/1 values (judge predictions on test set)
- unlabeled_preds: Array-like of 0/1 values (judge predictions on unlabeled data)
- bootstrap_iterations: Number of bootstrap iterations (default: 20000)
- confidence_level: Confidence level between 0 and 1 (default: 0.95)
Returns:
- theta_hat: Point estimate of true pass rate
- lower_bound: Lower bound of confidence interval
- upper_bound: Upper bound of confidence interval
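The bootstrap step behind the confidence interval can be sketched as resampling both the labeled test set and the unlabeled predictions, recomputing the corrected estimate each time, and taking percentiles. This is a simplified percentile bootstrap under our own assumptions (function name and skip rules are illustrative, not judgy's actual internals):

```python
import numpy as np

def bootstrap_ci(test_labels, test_preds, unlabeled_preds,
                 iterations=2000, confidence_level=0.95, seed=0):
    """Sketch of percentile-bootstrap CIs around the corrected estimate."""
    rng = np.random.default_rng(seed)
    y = np.asarray(test_labels)
    p = np.asarray(test_preds)
    u = np.asarray(unlabeled_preds)
    estimates = []
    for _ in range(iterations):
        i = rng.integers(0, len(y), len(y))  # resample labeled test set
        j = rng.integers(0, len(u), len(u))  # resample unlabeled predictions
        yb, pb, ub = y[i], p[i], u[j]
        if not (yb == 1).any() or not (yb == 0).any():
            continue  # resample lost all positives or all negatives
        tpr = np.mean(pb[yb == 1] == 1)
        tnr = np.mean(pb[yb == 0] == 0)
        denom = tpr + tnr - 1
        if denom <= 0:
            continue  # judge no better than chance in this resample
        estimates.append(np.clip((ub.mean() + tnr - 1) / denom, 0.0, 1.0))
    alpha = 1 - confidence_level
    return (float(np.quantile(estimates, alpha / 2)),
            float(np.quantile(estimates, 1 - alpha / 2)))
```

Resampling the test set captures uncertainty in the judge's TPR/TNR estimates, while resampling the unlabeled predictions captures uncertainty in the observed pass rate.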
Real-World Usage Pattern
from judgy import estimate_success_rate
# Step 1: Collect human labels on a test set
test_labels = [...] # Human evaluation: 1 = good, 0 = bad
# Step 2: Get LLM judge predictions on the same test set
test_preds = [...] # LLM judge predictions: 1 = good, 0 = bad
# Step 3: Get LLM judge predictions on your unlabeled data
unlabeled_preds = [...] # LLM judge predictions on data you want to evaluate
# Step 4: Estimate the true pass rate
true_rate, lower, upper = estimate_success_rate(test_labels, test_preds, unlabeled_preds)
print(f"Your system's estimated true success rate: {true_rate:.1%}")
print(f"95% confidence interval: [{lower:.1%}, {upper:.1%}]")
Requirements
- Python 3.8+
- numpy >= 1.20.0
Testing
Run the test suite:
pytest tests/
Run with coverage:
pytest tests/ --cov=judgy --cov-report=html
Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add some amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- The Rogan-Gladen correction method for bias correction in diagnostic tests
- Bootstrap methodology for confidence interval estimation
- The Python scientific computing ecosystem (NumPy, matplotlib)
Support
If you encounter any issues or have questions, please:
- Check the documentation
- Search existing issues
- Create a new issue with a minimal reproducible example
Note: This library assumes that your LLM judge performs better than random chance (TPR + TNR > 1). If your judge's accuracy is too low, the correction method may not be applicable.
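The better-than-chance condition above is easy to check before applying the correction. A hypothetical pre-flight helper (the function name is ours, not part of judgy's API):

```python
import numpy as np

def judge_beats_chance(test_labels, test_preds):
    """Return True if the judge satisfies TPR + TNR > 1 on the test set."""
    y = np.asarray(test_labels)
    p = np.asarray(test_preds)
    tpr = np.mean(p[y == 1] == 1)
    tnr = np.mean(p[y == 0] == 0)
    return bool(tpr + tnr > 1)
```

If this returns False, collect more labeled data or improve the judge prompt before trusting any corrected estimate: the denominator TPR + TNR - 1 is at or below zero, so the formula is undefined or flips sign.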