NannyML: post-deployment data science in Python


<p align="center"> <img src="https://raw.githubusercontent.com/NannyML/nannyml/main/media/thumbnail-4.png"> </p> <p align="center"> <a href="https://pypi.org/project/nannyml/"> <img src="https://img.shields.io/pypi/v/nannyml.svg" /> </a> <a href="https://anaconda.org/conda-forge/nannyml"> <img src="https://anaconda.org/conda-forge/nannyml/badges/version.svg" /> </a> <a href="https://pypi.org/project/nannyml/"> <img src="https://img.shields.io/pypi/pyversions/nannyml.svg" /> </a> <a href="https://github.com/nannyml/nannyml/actions/workflows/dev.yml"> <img src="https://github.com/NannyML/nannyml/actions/workflows/dev.yml/badge.svg" /> </a> <a href='https://nannyml.readthedocs.io/en/main/?badge=main'> <img src='https://readthedocs.org/projects/nannyml/badge/?version=main' alt='Documentation Status' /> </a> <img alt="PyPI - License" src="https://img.shields.io/pypi/l/nannyml?color=green" /> <br /> <br /> <a href="https://www.producthunt.com/posts/nannyml?utm_source=badge-top-post-badge&utm_medium=badge&utm_souce=badge-nannyml" target="_blank"> <img src="https://api.producthunt.com/widgets/embed-image/v1/top-post-badge.svg?post_id=346412&theme=light&period=daily" alt="NannyML - OSS&#0032;Python&#0032;library&#0032;for&#0032;detecting&#0032;silent&#0032;ML&#0032;model&#0032;failure | Product Hunt" style="width: 250px; height: 54px;" width="250" height="54" /> </a> </p> <p align="center"> <strong> <a href="https://nannyml.com/">Website</a> • <a href="https://nannyml.readthedocs.io/en/stable/">Docs</a> • <a href="https://join.slack.com/t/nannymlbeta/shared_invite/zt-16fvpeddz-HAvTsjNEyC9CE6JXbiM7BQ">Community Slack</a> </strong> </p> <p align="center"> <img src="https://github.com/NannyML/nannyml/blob/main/media/estimate-performance-regression.gif?raw=true" alt="animated"> </p>

💡 What is NannyML?

NannyML is an open-source Python library that lets you estimate post-deployment model performance (without access to targets), detect data drift, and intelligently link data-drift alerts back to changes in model performance. Built for data scientists, NannyML has an easy-to-use interface and interactive visualizations, is completely model-agnostic, and currently supports all tabular use cases: classification and regression.

The core contributors of NannyML have researched and developed multiple novel algorithms for estimating model performance: confidence-based performance estimation (CBPE) and direct loss estimation (DLE). The Nansters also invented a new approach to detecting multivariate data drift using PCA-based data reconstruction.

If you like what we are working on, be sure to become a Nanster yourself: join our community Slack <img src="https://raw.githubusercontent.com/NannyML/nannyml/main/media/slack.png" height='15'> and support us with a GitHub <img src="https://raw.githubusercontent.com/NannyML/nannyml/main/media/github.png" height='15'> star ⭐.

☔ Why use NannyML?

NannyML closes the loop with performance monitoring and post-deployment data science, empowering data scientists to quickly understand and automatically detect silent model failure. With NannyML, data scientists can finally maintain complete visibility and trust in their deployed machine learning models, giving you the following benefits:

  • End sleepless nights caused by not knowing your model performance 😴
  • Analyse data drift and model performance over time
  • Discover the root cause of why your models are not performing as expected
  • No alert fatigue! React only when model performance is actually impacted
  • Painless setup in any environment

🧠 GO DEEP

| NannyML Resources | Description |
| ----------------- | ----------- |
| ☎️ NannyML 101 | New to NannyML? Start here! |
| 🔮 Performance estimation | How the magic works. |
| 🌍 Real world example | Take a look at a real-world example of NannyML. |
| 🔑 Key concepts | Glossary of key concepts we use. |
| 🔬 Technical reference | Monitor the performance of your ML models. |
| 🔎 Blog | Thoughts on post-deployment data science from the NannyML team. |
| 📬 Newsletter | All things post-deployment data science. Subscribe to see the latest papers and blogs. |
| 💎 New in v0.13.1 | New features, bug fixes. |
| 🧑‍💻 Contribute | How to contribute to the NannyML project and codebase. |
| <img src="https://raw.githubusercontent.com/NannyML/nannyml/main/media/slack.png" height='15'> Join Slack | Need help with your specific use case? Say hi on Slack! |

🔱 Features

1. Performance estimation and monitoring

When the actual outcomes of your deployed prediction model are delayed, or even when post-deployment target labels are completely absent, you can use NannyML's CBPE algorithm to estimate model performance for classification, or NannyML's DLE algorithm for regression. These algorithms can estimate any metric you would like, e.g. ROC AUC or RMSE. Rather than forecasting the performance of future predictions, CBPE and DLE estimate the expected performance of the predictions already made at inference time.

<p><img src="https://raw.githubusercontent.com/NannyML/nannyml/main/docs/_static/tutorials/performance_calculation/regression/tutorial-performance-calculation-regression-RMSE.svg"></p>

NannyML can also track the realised performance of your machine learning model once targets are available.
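The intuition behind CBPE can be sketched in a few lines of plain Python. This is a toy illustration, not NannyML's implementation: given well-calibrated predicted probabilities, the probability that each individual prediction is correct is known even without labels, so the expected accuracy is simply the average of those per-prediction probabilities. The function and numbers below are hypothetical.

```python
def estimated_accuracy(probs, threshold=0.5):
    """Toy CBPE-style estimate: expected accuracy from calibrated scores.

    For each sample, if the model predicts the positive class (p >= threshold),
    the probability the prediction is correct is p; otherwise it is 1 - p.
    Averaging these gives the expected accuracy -- no true labels needed.
    """
    per_sample = [p if p >= threshold else 1.0 - p for p in probs]
    return sum(per_sample) / len(per_sample)

# Confident, well-calibrated scores -> high expected accuracy
print(estimated_accuracy([0.95, 0.9, 0.08, 0.05]))  # 0.93
# Scores drifting toward 0.5 -> expected accuracy drops, with no labels needed
print(estimated_accuracy([0.6, 0.55, 0.45, 0.4]))   # 0.575
```

The key assumption, as in CBPE itself, is that the scores are calibrated; NannyML handles calibration internally on the reference data.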

2. Data drift detection

To detect multivariate feature drift, NannyML uses PCA-based data reconstruction. Changes in the resulting reconstruction error are monitored over time, and a data-drift alert is logged when the reconstruction error in a given period exceeds a threshold. This threshold is calculated from the reconstruction error observed in the reference period.

<p><img src="https://raw.githubusercontent.com/NannyML/nannyml/main/docs/_static/how-it-works/butterfly-multivariate-drift-pca.svg"></p>
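The thresholding step described above can be illustrated with a small stdlib-only sketch. This is a hypothetical toy, not NannyML code, and it assumes a mean ± 3 standard deviations band over the reference reconstruction errors as the alert threshold; the error values are made up.

```python
import statistics

def drift_alerts(reference_errors, analysis_errors, n_std=3.0):
    """Flag analysis chunks whose reconstruction error leaves the
    reference band (mean +/- n_std * std). Toy sketch, not NannyML code."""
    mean = statistics.mean(reference_errors)
    std = statistics.stdev(reference_errors)
    lower, upper = mean - n_std * std, mean + n_std * std
    return [not (lower <= e <= upper) for e in analysis_errors]

# Reference period: stable reconstruction error around 1.0
reference = [0.98, 1.01, 1.02, 0.99, 1.00, 1.01, 0.99, 1.00]
# Analysis period: the last chunk drifts sharply upward
analysis = [1.00, 1.02, 1.35]
print(drift_alerts(reference, analysis))  # [False, False, True]
```

Only the final chunk triggers an alert, because its reconstruction error falls well outside the band learned from the reference period.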

NannyML utilises statistical tests to detect univariate feature drift. We have recently added a number of new univariate tests, including Jensen-Shannon distance and L-infinity distance; check out the comprehensive list. The results of these tests are tracked over time, properly corrected to counteract multiplicity, and overlaid on the temporal feature distributions. (It is also possible to visualise the test statistics over time, to get a notion of the drift magnitude.)

<p><img src="https://raw.githubusercontent.com/NannyML/nannyml/main/docs/_static/drift-guide-joyplot-distance_from_office.svg"></p>
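To illustrate one of these univariate measures, here is a minimal stdlib-only Jensen-Shannon distance between two binned feature distributions. This is a hypothetical sketch for intuition; NannyML computes these distances for you, and the histograms below are made up.

```python
from math import log2, sqrt

def jensen_shannon_distance(p, q):
    """JS distance between two discrete distributions (base-2 logs,
    so the result is bounded in [0, 1]). Toy sketch, not NannyML code."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability bins
        return sum(ai * log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

reference_hist = [0.1, 0.4, 0.4, 0.1]  # binned feature distribution (reference)
analysis_hist = [0.3, 0.2, 0.2, 0.3]   # same feature after drift
print(jensen_shannon_distance(reference_hist, reference_hist))  # 0.0
print(jensen_shannon_distance(reference_hist, analysis_hist))   # between 0 and 1
```

Identical distributions yield a distance of 0, and because base-2 logarithms are used, the distance is bounded above by 1, which makes it easy to compare across features.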
