The LinkedIn Fairness Toolkit (LiFT)
📣 We've moved from Bintray to Artifactory!
As of version 0.2.2, we are only publishing versions to LinkedIn's Artifactory instance rather than Bintray, which is approaching end of life.
Introduction
The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness and the mitigation of bias in large-scale machine learning workflows. The measurement module includes measuring biases in training data, evaluating fairness metrics for ML models, and detecting statistically significant differences in their performance across different subgroups. It can also be used for ad hoc fairness analysis. The mitigation module includes a post-processing method for transforming model scores to ensure equality of opportunity for rankings (in the presence or absence of position bias). This method can be applied directly to model-generated scores without changing the existing model training pipeline.
This library was created by Sriram Vasudevan and Krishnaram Kenthapadi (work done while at LinkedIn).
Additional Contributors:
Copyright
Copyright 2020 LinkedIn Corporation All Rights Reserved.
Licensed under the BSD 2-Clause License (the "License"). See License in the project root for license information.
Features
LiFT provides a configuration-driven Spark job for scheduled deployments, with support for custom metrics through User Defined Functions (UDFs). APIs at various levels are also exposed to enable users to build upon the library's capabilities as they see fit. One can thus opt for a plug-and-play approach or deploy a customized job that uses LiFT. As a result, the library can be easily integrated into ML pipelines. It can also be utilized in Jupyter notebooks for more exploratory fairness analyses.
LiFT leverages Apache Spark to load input data into in-memory, fault-tolerant and scalable data structures. It strategically caches datasets and any pre-computation performed. Distributed computation is balanced with single system execution to obtain a good mix of scalability and speed. For example, distance, distribution and divergence related metrics are computed on the entire dataset in a distributed manner, while benefit vectors and permutation tests (for model performance) are computed on scored dataset samples that can be collected to the driver.
The LinkedIn Fairness Toolkit (LiFT) provides the following capabilities:
- Measuring Fairness Metrics on Training Data
- Measuring Fairness Metrics for Model Performance
- Achieving Equality of Opportunity
As part of the model performance metrics, it also contains the implementation of a new permutation testing framework that detects statistically significant differences in model performance (as measured by an arbitrary performance metric) across different subgroups.
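The idea behind such a permutation test can be sketched in plain Scala: the observed metric gap between two subgroups is compared against the gaps obtained after repeatedly shuffling subgroup membership. This is a toy illustration of the general technique (here using a difference in mean scores), not LiFT's actual implementation or API:

```scala
import scala.util.Random

// Toy permutation test: is the difference in mean score between two
// subgroups statistically significant? Illustrates the idea only; the
// metric, method names, and signatures here are NOT LiFT's API.
object PermutationTestSketch {
  def meanDiff(a: Seq[Double], b: Seq[Double]): Double =
    a.sum / a.size - b.sum / b.size

  /** Approximate p-value for the observed mean difference. */
  def permutationPValue(groupA: Seq[Double],
                        groupB: Seq[Double],
                        numPermutations: Int = 1000,
                        seed: Long = 42L): Double = {
    val rng      = new Random(seed)
    val observed = math.abs(meanDiff(groupA, groupB))
    val pooled   = groupA ++ groupB
    // Count how often a random relabeling produces a gap at least as extreme
    val extreme = (1 to numPermutations).count { _ =>
      val shuffled = rng.shuffle(pooled)
      val (pa, pb) = shuffled.splitAt(groupA.size)
      math.abs(meanDiff(pa, pb)) >= observed
    }
    extreme.toDouble / numPermutations
  }
}
```

With clearly separated score distributions the p-value is small; with identical groups it approaches 1.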
High-level details about the parameters, supported metrics, and usage are described below. More details about the metrics themselves are provided in the links above.
A list of automatically downloaded direct dependencies is provided here.
Usage
Building the Library
It is recommended to use Scala 2.11.8 and Spark 2.3.0. To build, run the following:
./gradlew build
This will produce a JAR file in the ./lift/build/libs/ directory.
If you want to use the library with Spark 2.4 (and the Scala 2.11.8 default), you can specify this when running the build command.
./gradlew build -PsparkVersion=2.4.3
You can also build an artifact with Spark 2.4 and Scala 2.12.
./gradlew build -PsparkVersion=2.4.3 -PscalaVersion=2.12.11
Tests are typically run with the test task. If you want to force-run all tests, you can use:
./gradlew cleanTest test --no-build-cache
To force rebuild the library, you can use:
./gradlew clean build --no-build-cache
Add a LiFT Dependency to Your Project
Please check Artifactory for the latest artifact versions.
Gradle Example
The artifacts are available in LinkedIn's Artifactory instance and in Maven Central, so you can specify either repository in the top-level build.gradle file.
repositories {
mavenCentral()
maven {
url "https://linkedin.jfrog.io/artifactory/open-source/"
}
}
Add the LiFT dependency to the module-level build.gradle file. Here are some examples for multiple recent Spark/Scala version combinations:
dependencies {
compile 'com.linkedin.lift:lift_2.3.0_2.11:0.1.4'
}
dependencies {
compile 'com.linkedin.lift:lift_2.4.3_2.11:0.1.4'
}
dependencies {
compile 'com.linkedin.lift:lift_2.4.3_2.12:0.1.4'
}
Using the JAR File
Depending on the mode of usage, the built JAR can be deployed as part of an offline data pipeline, depended upon to build jobs using its APIs, or added to the classpath of a Spark Jupyter notebook or a Spark Shell instance. For example:
$SPARK_HOME/bin/spark-shell --jars target/lift_2.3.0_2.11_0.1.4.jar
Usage Examples
Measuring Dataset Fairness Metrics using the provided Spark job
LiFT provides a Spark job for measuring fairness metrics for training data, as well as for the validation or test dataset:
com.linkedin.fairness.eval.jobs.MeasureDatasetFairnessMetrics
This job can be configured using various parameters to compute fairness metrics on the dataset of interest:
1. datasetPath: Input data path
2. protectedDatasetPath: Input path to the protected dataset (optional). If not provided, the library attempts to use the right dataset based on the protected attribute.
3. dataFormat: Format of the input datasets. This is the parameter passed to the Spark reader's format method. Defaults to avro.
4. dataOptions: A map of options to be used with Spark's reader (optional).
5. uidField: The unique ID field, like a memberId field. It acts as the join key for the primary dataset.
6. labelField: The label field
7. protectedAttributeField: The protected attribute field
8. uidProtectedAttributeField: The uid field (join key) for the protected attribute dataset
9. outputPath: Output data path
10. referenceDistribution: A reference distribution to compare against (optional). The only currently accepted value is UNIFORM.
11. distanceMetrics: Distance and divergence metrics like SKEWS, INF_NORM_DIST, TOTAL_VAR_DIST, JS_DIVERGENCE, KL_DIVERGENCE and DEMOGRAPHIC_PARITY (optional).
12. overallMetrics: Aggregate metrics like GENERALIZED_ENTROPY_INDEX, ATKINSONS_INDEX, THEIL_L_INDEX, THEIL_T_INDEX and COEFFICIENT_OF_VARIATION, along with their corresponding parameters.
13. benefitMetrics: The distance/divergence metrics to use as the benefit vector when computing the overall metrics. Acceptable values are SKEWS and DEMOGRAPHIC_PARITY.
The most up-to-date information on these parameters can always be found here.
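As a concrete illustration of what a DEMOGRAPHIC_PARITY comparison measures, the sketch below computes positive-outcome rates per protected group and the gap between them on toy data. This shows only the underlying idea in plain Scala; the names and signatures are illustrative, not LiFT's API:

```scala
// Toy demographic parity check: compare the positive-label rate across
// protected groups. Illustration only; not LiFT's implementation or API.
object DemographicParitySketch {
  /** Positive rate per group, from (group, label) pairs. */
  def positiveRates(data: Seq[(String, Int)]): Map[String, Double] =
    data.groupBy(_._1).map { case (g, rows) =>
      g -> rows.count(_._2 == 1).toDouble / rows.size
    }

  /** Gap between the highest and lowest positive rate; 0.0 means parity. */
  def parityGap(data: Seq[(String, Int)]): Double = {
    val rates = positiveRates(data).values
    rates.max - rates.min
  }
}
```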
The Spark job performs no preprocessing of the input data, and it assumes, for example, that the unique ID field (the join key) is stored in the same format in both the input data and the protected-attribute data. If that is not the case for your dataset, you can always create your own Spark job modeled on the provided example (described below).
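The aggregate metrics listed above, such as GENERALIZED_ENTROPY_INDEX, are computed over a per-member benefit vector. A minimal sketch of the standard generalized entropy index formula (assumed here from the metric's usual definition, not taken from LiFT's code) looks like this:

```scala
import scala.math.pow

// Sketch of the generalized entropy index over a "benefit" vector
// (one benefit value per member). The index is 0 when all benefits are
// equal; alpha = 2 is a common choice. Toy illustration, not LiFT code.
object EntropyIndexSketch {
  def generalizedEntropyIndex(benefits: Seq[Double], alpha: Double = 2.0): Double = {
    require(alpha != 0.0 && alpha != 1.0, "use the Theil indices for alpha = 0 or 1")
    val n  = benefits.size
    val mu = benefits.sum / n
    benefits.map(b => pow(b / mu, alpha) - 1.0).sum / (n * alpha * (alpha - 1.0))
  }
}
```

For alpha = 2 this equals half the squared coefficient of variation of the benefit vector.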
Measuring Model Fairness Metrics using the provided Spark job
LiFT provides a Spark job for measuring fairness metrics for model performance, based on the labels and scores of the test or validation data:
com.linkedin.fairness.eval.jobs.MeasureModelFairnessMetrics
This job can be configured using various parameters to compute fairness metrics on the dataset of interest:
1. datasetPath: Input data path
2. protectedDatasetPath: Input path to the protected dataset (optional). If not provided, the library attempts to use the right dataset based on the protected attribute.
3. dataFormat: Format of the input datasets. This is the parameter passed to the Spark reader's format method. Defaults to avro.
4. dataOptions: A map of options to be used with Spark's reader (optional).
5. uidField: The unique ID field, like a memberId field. It acts as the join key for the primary dataset.
6. labelField: The label field
7. scoreField: The score field
8. scoreType: Whether the scores are raw scores or probabilities. Accepted values are RAW or PROB.
9. protectedAttributeField: The protected attribute field
10. uidProtectedAttributeField: The uid field (join key) for the protected attribute dataset.
11. groupIdField: An optional field to be used for grouping, in case of ranking metrics
12. outputPath: Output data path
13. referenceDistribution: A reference distribution to compare against (optional). The only currently accepted value is UNIFORM.
14. approxRows: The approximate number of rows to sample from the input
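The kind of per-subgroup model performance comparison this job reports can be illustrated with a small, self-contained sketch that computes AUC separately for each protected group. This is a toy example of the concept (a simple rank-based AUC with no tie correction), not LiFT's implementation or API:

```scala
// Sketch: compare a performance metric (here AUC) across protected groups,
// the kind of per-subgroup comparison the model fairness job reports.
// Toy illustration only; names and signatures are not LiFT's API.
object SubgroupAucSketch {
  /** AUC via the Mann-Whitney U statistic, from (score, label) pairs. */
  def auc(scored: Seq[(Double, Int)]): Double = {
    val sorted = scored.sortBy(_._1)
    val ranks  = sorted.zipWithIndex.map { case ((_, label), i) => (label, i + 1.0) }
    val posRankSum = ranks.collect { case (1, r) => r }.sum
    val nPos = scored.count(_._2 == 1).toDouble
    val nNeg = scored.size - nPos
    (posRankSum - nPos * (nPos + 1) / 2) / (nPos * nNeg)
  }

  /** AUC per protected group, from (group, score, label) rows. */
  def aucByGroup(rows: Seq[(String, Double, Int)]): Map[String, Double] =
    rows.groupBy(_._1).map { case (g, rs) =>
      g -> auc(rs.map(r => (r._2, r._3)))
    }
}
```

A large gap between the per-group AUCs is the kind of disparity that the permutation test described earlier can then check for statistical significance.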