Feathr

Feathr – A scalable, unified data and AI engineering platform for enterprise


<html> <h1 align="center"> <img src="./images/feathr_logo.png" width="256"/> </h1> <h3 align="center"> A scalable, unified data and AI engineering platform for enterprise </h3> <h3 align="center"> Important Links: <a href="https://join.slack.com/t/feathrai/shared_invite/zt-1ffva5u6v-voq0Us7bbKAw873cEzHOSg">Slack</a> & <a href="https://github.com/feathr-ai/feathr/discussions">Discussions</a>. <a href="https://feathr-ai.github.io/feathr/">Docs</a>. </h3> </html>


What is Feathr?

Feathr is a data and AI engineering platform that has been used in production at LinkedIn for many years and was open-sourced in 2022. It is currently a project under the LF AI & Data Foundation.

Read our announcement on Open Sourcing Feathr and Feathr on Azure, as well as the announcement from LF AI & Data Foundation.

Feathr lets you:

  • Define data and feature transformations on raw data sources (batch and streaming) using Pythonic APIs.
  • Register transformations by name and retrieve transformed data (features) for various use cases, including AI modeling, compliance, go-to-market, and more.
  • Share transformations and data (features) across teams and the company.

Feathr is particularly useful in AI modeling where it automatically computes your feature transformations and joins them to your training data, using point-in-time-correct semantics to avoid data leakage, and supports materializing and deploying your features for use online in production.
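
A point-in-time-correct join can be pictured as follows: for each labeled training row, only the latest feature value recorded at or before that row's timestamp is used, so no future information leaks into training data. Below is a minimal plain-Python sketch of these semantics (an illustration only, not Feathr's join engine; `point_in_time_join` is a hypothetical helper):

```python
# Sketch of a point-in-time-correct feature join (illustration only, not
# Feathr's implementation). For each labeled observation we take the latest
# feature value whose timestamp is <= the observation timestamp.

def point_in_time_join(observations, feature_history):
    """observations: list of (entity_key, obs_ts) tuples.
    feature_history: dict entity_key -> time-sorted list of (ts, value)."""
    joined = []
    for key, obs_ts in observations:
        value = None
        for ts, v in feature_history.get(key, []):
            if ts <= obs_ts:
                value = v      # latest value not after obs_ts
            else:
                break          # history is sorted; later values would leak
        joined.append((key, obs_ts, value))
    return joined

history = {"loc_1": [(1, 10.0), (5, 12.5), (9, 14.0)]}
print(point_in_time_join([("loc_1", 6), ("loc_1", 0)], history))
# -> [('loc_1', 6, 12.5), ('loc_1', 0, None)]
```

Note that the observation at timestamp 0 gets no feature value at all, since nothing was known yet at that point; a naive join on key alone would have leaked the later values.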

🌟 Feathr Highlights

  • Native cloud integration with simplified and scalable architecture.
  • Battle-tested in production: LinkedIn has used Feathr in production for over 6 years, backed by a dedicated team.
  • Scalable with built-in optimizations: Feathr can process billions of rows and PB scale data with built-in optimizations such as bloom filters and salted joins.
  • Rich transformation APIs including time-based aggregations, sliding window joins, look-up features, all with point-in-time correctness for AI.
  • Pythonic APIs and highly customizable user-defined functions (UDFs) with native PySpark and Spark SQL support to lower the learning curve for all data scientists.
  • Unified data transformation API works in offline batch, streaming, and online environments.
  • Feathr’s built-in registry makes named transformations and data/feature reuse a breeze.
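
To illustrate the salted-join optimization mentioned in the highlights (a concept sketch only, not Feathr's internals; all function names here are hypothetical): a skewed "hot" key is fanned out into several salted sub-keys so its rows spread across partitions, while the small side of the join is replicated once per salt value:

```python
# Concept sketch of key salting for skewed joins (illustration, not Feathr code).
# A "hot" join key is split into NUM_SALTS sub-keys so its rows no longer all
# land in one partition; the small side is duplicated once per salt value.
import random

NUM_SALTS = 4

def salt_large_side(rows):
    # rows: list of (key, payload); each row gets a random salt suffix
    return [((key, random.randrange(NUM_SALTS)), payload) for key, payload in rows]

def replicate_small_side(rows):
    # every small-side row is duplicated under all possible salted keys
    return [((key, s), payload) for key, payload in rows for s in range(NUM_SALTS)]

def hash_join(left, right):
    index = {}
    for k, v in right:
        index.setdefault(k, []).append(v)
    return [(k[0], lv, rv) for k, lv in left for rv in index.get(k, [])]

large = [("hot_key", i) for i in range(8)]    # skewed: one key dominates
small = [("hot_key", "dim_row")]
result = hash_join(salt_large_side(large), replicate_small_side(small))
print(len(result))  # 8 -- every large-side row still finds its match
```

Each large-side row matches exactly one replicated small-side row, so the join result is unchanged while the per-partition load is roughly balanced across the salt values.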

🏃 Getting Started with Feathr - Feathr Sandbox

The easiest way to try out Feathr is the Feathr Sandbox, a self-contained container that includes most of Feathr's capabilities; you should be productive within 5 minutes. To use it, simply run:

# Ports: 8888 = Jupyter, 8081 -> 80 = Feathr UI, 7080 = Interpret
docker run -it --rm -p 8888:8888 -p 8081:80 -p 7080:7080 -e GRANT_SUDO=yes feathrfeaturestore/feathr-sandbox:releases-v1.0.0

Then open the Feathr quickstart Jupyter notebook:

http://localhost:8888/lab/workspaces/auto-w/tree/local_quickstart_notebook.ipynb

After running the notebook, all the features will be registered in the UI, and you can visit the Feathr UI at:

http://localhost:8081

🛠️ Install Feathr Client Locally

If you want to install the Feathr client in a Python environment, use:

pip install feathr

Or use the latest code from GitHub:

pip install git+https://github.com/feathr-ai/feathr.git#subdirectory=feathr_project

☁️ Running Feathr on Cloud for Production

Feathr has native integrations with Databricks and Azure Synapse:

Follow the Feathr ARM deployment guide to run Feathr on Azure. This allows you to quickly get started with automated deployment using Azure Resource Manager template.

If you want to set up everything manually, check out the Feathr CLI deployment guide to run Feathr on Azure. This lets you understand what is going on and set up one resource at a time.

📓 Documentation

🧪 Samples

| Name | Description | Platform |
| ---- | ----------- | -------- |
| NYC Taxi Demo | Quickstart notebook that showcases how to define, materialize, and register features with NYC taxi-fare prediction sample data. | Azure Synapse, Databricks, Local Spark |
| Databricks Quickstart NYC Taxi Demo | Quickstart Databricks notebook with NYC taxi-fare prediction sample data. | Databricks |
| Feature Embedding | Feathr UDF example showing how to define and use feature embedding with a pre-trained Transformer model and hotel review sample data. | Databricks |
| Fraud Detection Demo | An example demonstrating a Feature Store built on multiple data sources, such as user account and transaction data. | Azure Synapse, Databricks, Local Spark |
| Product Recommendation Demo | Feathr Feature Store example notebook with a product recommendation scenario. | Azure Synapse, Databricks, Local Spark |

🔡 Feathr Highlighted Capabilities

Please read Feathr Full Capabilities for more examples. Below are a few selected ones:

Feathr UI

Feathr provides an intuitive UI so you can search and explore all the available features and their corresponding lineages.

You can use the Feathr UI to search features, identify data sources, track feature lineage, and manage access controls. Check out the latest live demo here to see what the Feathr UI can do for you. Use one of the following accounts when you are prompted to log in:

  • A work or school organization account, which includes Office 365 subscribers.
  • A personal Microsoft account, i.e. one used to access Skype, Outlook.com, OneDrive, or Xbox Live.


For more information on the Feathr UI and the registry behind it, please refer to Feathr Feature Registry.

Rich UDF Support

Feathr has highly customizable UDFs with native PySpark and Spark SQL integration to lower the learning curve for data scientists:

from feathr import HdfsSource
from pyspark.sql import DataFrame
from pyspark.sql.functions import dayofweek

def add_new_dropoff_and_fare_amount_column(df: DataFrame) -> DataFrame:
    # Derive the day of week from the dropoff timestamp
    df = df.withColumn("f_day_of_week", dayofweek("lpep_dropoff_datetime"))
    # Convert the fare amount from dollars to cents
    df = df.withColumn("fare_amount_cents", df.fare_amount.cast('double') * 100)
    return df

batch_source = HdfsSource(name="nycTaxiBatchSource",
                          path="abfss://feathrazuretest3fs@feathrazuretest3storage.dfs.core.windows.net/demo_data/green_tripdata_2020-04.csv",
                          preprocessing=add_new_dropoff_and_fare_amount_column,
                          event_timestamp_column="lpep_dropoff_datetime",  # timestamp column in the source data
                          timestamp_format="yyyy-MM-dd HH:mm:ss")

Defining Window Aggregation Features with Point-in-Time Correctness

agg_features = [Feature(name="f_location_avg_fare",
                        key=location_id,                          # Query/join key of the feature (group)
                        feature_type=FLOAT,
                        transform=WindowAggTransformation(        # Window aggregation transformation
                            agg_expr="cast_float(fare_amount)",   # Expression to aggregate
                            agg_func="AVG",                       # Aggregation function
                            window="90d"))]                       # Over a 90-day window
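
Conceptually, a window aggregation like this averages only the events that fall inside the window ending at each query timestamp. Below is a plain-Python sketch of those semantics (illustration only; `window_avg` is a hypothetical helper, not part of Feathr's API):

```python
# Sketch of point-in-time window aggregation semantics (illustration only):
# an average fare, evaluated "as of" a query timestamp, may only include
# events in the half-open window (query_ts - window, query_ts].

def window_avg(events, query_ts, window):
    """events: list of (ts, value); returns the average of values in the window."""
    in_window = [v for ts, v in events if query_ts - window < ts <= query_ts]
    return sum(in_window) / len(in_window) if in_window else None

fares = [(10, 8.0), (40, 12.0), (95, 20.0)]        # (day, fare) for one location
print(window_avg(fares, query_ts=100, window=90))  # -> 16.0 (days 40 and 95 only)
```

Evaluated at day 100 with a 90-day window, only the trips on days 40 and 95 qualify; the trip on day 10 falls outside the window and, crucially, no trip after day 100 is ever counted.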
