dlt
data load tool (dlt) is an open source Python library that makes data loading easy
Join our thriving community of like-minded developers and build the future together!
Installation
dlt supports Python 3.9 through Python 3.14. Note that some optional extras are not yet available for Python 3.14, so support for this version is considered experimental.
pip install dlt
Quick Start
Load chess game data from chess.com API and save it in DuckDB:
import dlt
from dlt.sources.helpers import requests

# Create a dlt pipeline that will load
# chess player data to the DuckDB destination
pipeline = dlt.pipeline(
    pipeline_name='chess_pipeline',
    destination='duckdb',
    dataset_name='player_data'
)

# Grab some player data from Chess.com API
data = []
for player in ['magnuscarlsen', 'rpragchess']:
    response = requests.get(f'https://api.chess.com/pub/player/{player}')
    response.raise_for_status()
    data.append(response.json())

# Extract, normalize, and load the data
pipeline.run(data, table_name='player')
Try it out in our Colab Demo or directly on our wasm-based playground in our docs.
Features
dlt is an open-source Python library that loads data from various, often messy data sources into well-structured datasets. It provides lightweight Python interfaces to extract, load, inspect, and transform data. dlt and its docs are built from the ground up to be used with LLMs: the LLM-native workflow takes you from pipeline code to data in a notebook for over 5,000 sources.
dlt is designed to be easy to use, flexible, and scalable:
- dlt extracts data from REST APIs, SQL databases, cloud storage, Python data structures, and many more.
- dlt infers schemas and data types, normalizes the data, and handles nested data structures.
- dlt supports a variety of popular destinations and has an interface to add custom destinations to create reverse ETL pipelines.
- dlt automates pipeline maintenance with incremental loading, schema evolution, and schema and data contracts.
- dlt supports Python and SQL data access, transformations, pipeline inspection, and visualizing data in Marimo Notebooks.
- dlt can be deployed anywhere Python runs, be it on Airflow, serverless functions, or any other cloud deployment of your choice.
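To illustrate what schema inference and nested-data normalization mean in practice, here is a toy, pure-Python sketch (not dlt's actual implementation) of flattening a nested record into a flat row with double-underscore-joined column names, similar in spirit to how a normalizer promotes nested fields to columns:

```python
def flatten(record, prefix=""):
    """Toy normalizer: flatten nested dicts into one flat row with
    double-underscore-joined column names (illustrative only --
    not dlt's actual algorithm)."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}__{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, name))
        else:
            flat[name] = value
    return flat

row = flatten({"id": 1, "profile": {"name": "Magnus", "rating": {"blitz": 3200}}})
print(row)
# {'id': 1, 'profile__name': 'Magnus', 'profile__rating__blitz': 3200}
```

In a real pipeline, dlt performs this kind of normalization for you, alongside type inference and schema evolution.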
Documentation
For detailed usage and configuration, please refer to the official documentation.
Examples
You can find examples for various use cases in the examples folder, or in the code examples section of our docs page.
Adding as dependency
dlt follows semantic versioning with the MAJOR.MINOR.PATCH pattern:
- major: breaking changes and removed deprecations
- minor: new features, sometimes automatic migrations
- patch: bug fixes

We suggest that you allow only patch-level updates automatically by using a Compatible Release Specifier. For example, dlt~=1.23.0 allows only versions >=1.23.0 and <1.24.0.
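As a quick illustration of what `~=1.23.0` permits, here is a toy checker (a hypothetical helper written for this example, not a packaging API) implementing the same rule: same major and minor version, patch at or above the pinned one:

```python
def compatible(version, spec="1.23.0"):
    """Toy check for the compatible release rule ~=1.23.0,
    i.e. >=1.23.0 and <1.24.0 (illustration only)."""
    major, minor, patch = (int(p) for p in version.split("."))
    smaj, smin, spat = (int(p) for p in spec.split("."))
    return (major, minor) == (smaj, smin) and patch >= spat

print(compatible("1.23.5"))  # patch update: allowed -> True
print(compatible("1.24.0"))  # minor update: not allowed -> False
```

Real dependency resolvers (pip, uv) apply this rule for you when you write `dlt~=1.23.0` in your requirements.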
Please also see our release notes for notable changes between versions.
Get Involved
The dlt project is quickly growing, and we're excited to have you join our community! Here's how you can get involved:
- Connect with the Community: Join other dlt users and contributors on our Slack
- Report issues and suggest features: Please use the GitHub Issues to report bugs or suggest new features. Before creating a new issue, make sure to search the tracker for possible duplicates and add a comment if you find one.
- Track progress of our work and our plans: Please check out our public GitHub project
- Improve documentation: Help us enhance the dlt documentation.
Contribute code
Please read CONTRIBUTING before you make a PR.
- New destinations are unlikely to be merged due to high maintenance cost (but we are happy to improve the SQLAlchemy destination to handle more dialects)
- Significant changes require tests and docs, and in many cases writing the tests will be more laborious than writing the code
- Bugfixes and improvements are welcome! You'll get help with writing tests and docs + a decent review.
License
dlt is released under the Apache 2.0 License.