# Affinity

Typed, annotated vectors for well-documented datasets.
Affinity makes it easy to create well-annotated datasets from vector data. What your data means should always travel together with the data.
Affinity is a pythonic dialect of Data Definition Language (DDL). Affinity does not replace any dataframe library, but can be used with any package you like.
If you're frustrated that documenting your data models remains an afterthought, check out the ideas here.
## Installation
Install with any flavor of `pip install affinity`, or copy `affinity.py` into your project. It's only one file.
🐼 🦆 Affinity requires Pandas (works with v2 and v3) and DuckDB (1.3 and up). Polars and pyarrow are optional.
## Usage
Now all your data models can be concisely declared as python classes.
```python
import affinity as af

class SensorData(af.Dataset):
    """Experimental data from Top Secret Sensor Tech."""
    t = af.VectorF32("elapsed time (sec)")
    channel = af.VectorI8("channel number (left to right)")
    voltage = af.VectorF64("something we measured (mV)")
    is_laser_on = af.VectorBool("are the lights on?")
    exp_id = af.ScalarI32("FK to `experiment`")
    LOCATION = af.Location(folder="s3://mybucket/affinity", file="raw.parquet", partition_by=["channel"])

# how to use affinity Datasets:
data = SensorData()           # ✅ empty dataset
data = SensorData(**fields)   # ✅ build manually
data = SensorData.build(...)  # ✅ build from a source (dataframes, DuckDB)
data.df  # .pl / .arrow       # ✅ view as dataframe (Pandas/Polars/Arrow)
data.metadata                 # ✅ annotations (data dict with column and dataset comments), origin
data.origin                   # ✅ creation metadata, some data provenance
data.sql(...)                 # ✅ run DuckDB SQL query on the dataset
data.to_parquet(...)          # ✅ data.metadata -> Parquet metadata
data.partition()              # ✅ get formatted paths and partitioned datasets
```
## How it works
The `af.Dataset` is Affinity's `BaseModel`, the base class that defines the behavior of child data classes:

- a concise class declaration sets the expected dtypes and descriptions for each attribute (column)
- class attributes can be represented by any array (defaults to `pd.Series` because it handles nullable integers well)
- class instances can be constructed from scalars, vectors/iterables, or other datasets (see the sketch after this list)
- type hints for scalar and vector data
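For example, here is a minimal sketch of manual construction, using the `SensorData` class above (the values are made up; the scalar `exp_id` is assumed to apply to the whole dataset):

```python
# hypothetical values: vectors are per-row, the scalar covers the whole dataset
data = SensorData(
    t=[0.1, 0.2, 0.3],
    channel=[1, 1, 2],
    voltage=[8.2, 8.4, 7.9],
    is_laser_on=[True, True, False],
    exp_id=42,  # ScalarI32: one value for the dataset
)
data.df  # view as a Pandas dataframe
```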
## Detailed example: Parquet Round-Trip
All you need to create a data class are typed attributes and comments explaining what the fields mean.
### 1. Declare class
```python
import affinity as af

class IsotopeData(af.Dataset):
    """NIST Atomic Weights & Isotopic Compositions.[^1]

    [^1] https://www.nist.gov/pml/atomic-weights-and-isotopic-compositions-relative-atomic-masses
    """
    symbol = af.VectorObject("Element")
    z = af.VectorI8("Atomic Number (Z)")
    mass = af.VectorF64("Isotope Mass (Da)")
    abundance = af.VectorF64("Relative natural abundance")

IsotopeData.z
# DescriptorType Int8 of len 0  # Atomic Number (Z)
# Series([], dtype: Int8)

IsotopeData().pl  # show fields and types
# shape: (0, 4)
# symbol   z    mass   abundance
# str      i8   f64    f64

IsotopeData.LOCATION  # new in v0.4
# Location(folder=PosixPath('.'), file='IsotopeData_export.csv', partition_by=[])
```
The class attributes are instantiated Vector objects of zero length. Using the descriptor pattern, they are replaced with actual data arrays when an instance is built.
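In plain Python, the mechanism looks roughly like this (a simplified sketch of the descriptor pattern, not Affinity's actual implementation):

```python
import pandas as pd

class Vector:
    """Simplified stand-in for Affinity's Vector* descriptors."""
    def __init__(self, dtype, comment=""):
        self.dtype = dtype
        self.comment = comment

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self  # class access: the zero-length descriptor itself
        return instance.__dict__.get(self.name)  # instance access: the data

    def __set__(self, instance, values):
        # building the instance swaps the descriptor for a typed array
        instance.__dict__[self.name] = pd.Series(values, dtype=self.dtype, name=self.name)

class Demo:
    z = Vector("Int8", "Atomic Number (Z)")

d = Demo()
d.z = [1, 2, 3]
print(d.z.dtype)  # Int8
```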
### 2. Build class instance from querying a CSV
To build the dataset, we use the `IsotopeData.build()` method with the `query` argument, written in DuckDB's FROM-first syntax, plus the `rename=True` keyword argument. The fields in the query result will be assigned the names and types provided in the class definition. With `rename=False` (the default), the source columns must be named exactly like the class attributes. When safe type casting is not possible, an error is raised: an element with z=128 would not fit this dataset. Good thing there isn't one (not even as a Wikipedia article)!
```python
url = "https://raw.githubusercontent.com/liquidcarbon/chembiodata/main/isotopes.csv"

data_from_sql = IsotopeData.build(query=f"FROM '{url}'", rename=True)
# data_from_sql = IsotopeData.build(query=f"FROM '{url}'")  # will fail: source columns don't match attribute names

query_without_rename = f"""
SELECT
    Symbol as symbol,
    Number as z,
    Mass as mass,
    Abundance as abundance,
FROM '{url}'
"""
data_from_sql2 = IsotopeData.build(query=query_without_rename)

assert data_from_sql == data_from_sql2

print(data_from_sql)
# Dataset IsotopeData of shape (354, 4)
# symbol = ['H', 'H' ... 'Ts', 'Og']
# z = [1, 1 ... 117, 118]
# mass = [1.007825, 2.014102 ... 292.20746, 294.21392]
# abundance = [0.999885, 0.000115 ... 0.0, 0.0]
```
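To see the failure mode the safe cast guards against, here is a small illustration using pandas directly (Affinity's actual error message may differ):

```python
import pandas as pd

# Int8 holds -128..127, so 128 cannot be represented without loss
try:
    pd.array([128], dtype="Int8")
except (TypeError, ValueError) as e:
    print(e)  # pandas refuses the lossy cast
```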
### 3. Write to Parquet, with metadata
```python
data_from_sql.to_parquet("isotopes.parquet")  # requires PyArrow
```
### 4. Inspect metadata using PyArrow

The schema metadata as shown here is truncated; full-length keys and values are in `pf.schema_arrow.metadata`.
```python
import pyarrow.parquet as pq

pf = pq.ParquetFile("isotopes.parquet")
pf.schema_arrow
# symbol: string
# z: int8
# mass: double
# abundance: double
# -- schema metadata --
# table_comment: 'NIST Atomic Weights & Isotopic Compositions.[^1]
#     [' + 97
# symbol: 'Element'
# z: 'Atomic Number (Z)'
# mass: 'Isotope Mass (Da)'
# abundance: 'Relative natural abundance'
# created_ts: '1724787055721'
# source: 'dataframe, shape (354, 4)
#     query:
#     SELECT
#       Symbol as symbol,
#     ' + 146
```
> [!TIP]
> Though in all examples here the comment field is a string, Arrow allows non-string data in Parquet metadata (some caveats apply). If you're packaging multidimensional vectors, check out `test_objects_as_metadata` in the test file.
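As an illustration of the idea in plain PyArrow, independent of Affinity (the key name and payload here are made up):

```python
import json
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"x": [1, 2, 3]})
# Parquet key-value metadata is bytes, so serialize the payload yourself
payload = json.dumps([[1.0, 0.0], [0.0, 1.0]]).encode()
table = table.replace_schema_metadata({b"basis_vectors": payload})
pq.write_table(table, "with_meta.parquet")

meta = pq.ParquetFile("with_meta.parquet").schema_arrow.metadata
print(json.loads(meta[b"basis_vectors"]))  # [[1.0, 0.0], [0.0, 1.0]]
```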
### 5. Inspect metadata using DuckDB
DuckDB provides several functions for querying Parquet metadata. We're specifically interested in key-value metadata, where both keys and values are of type BLOB. It can be decoded on the fly using `SELECT DECODE(key), DECODE(value) FROM parquet_kv_metadata(...)`, or like so:
```python
import duckdb

source = duckdb.sql("FROM parquet_kv_metadata('isotopes.parquet') WHERE key='source'")
print(source.fetchall()[-1][-1].decode())
# dataframe, shape (354, 4)
# query:
# SELECT
#     Symbol as symbol,
#     Number as z,
#     Mass as mass,
#     Abundance as abundance,
# FROM 'https://raw.githubusercontent.com/liquidcarbon/chembiodata/main/isotopes.csv'
```
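The decode-on-the-fly variant mentioned above looks like this (same file, all keys at once):

```python
import duckdb

duckdb.sql("""
    SELECT DECODE(key) AS key, DECODE(value) AS value
    FROM parquet_kv_metadata('isotopes.parquet')
""").show()
```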
### 6. Read Parquet
```python
data_from_parquet = IsotopeData.build(query="FROM 'isotopes.parquet'")

assert data_from_sql == data_from_parquet
print(data_from_parquet.pl.dtypes)
# [String, Int8, Float64, Float64]
```
### 7. Bonus: Partitions
The special attribute `LOCATION` helps you write the data where you want, how you want it. `LOCATION` does not have to be declared; it is set to sensible (unpartitioned) defaults.

On calling `af.Dataset.partition()`, you'll get the formatted list of Hive-style partitions and the datasets broken up accordingly.

This is en route to `af.Dataset.save()`, which in all likelihood won't be done, since there are far too many ways to handle this.
```python
class PartitionedIsotopeData(af.Dataset):
    symbol = af.VectorObject("Element")
    z = af.VectorI8("Atomic Number (Z)")
    mass = af.VectorF64("Isotope Mass (Da)")
    abundance = af.VectorF64("Relative natural abundance")
    LOCATION = af.Location(folder="s3://myisotopes", file="data.csv", partition_by=["z"])

url = "https://raw.githubusercontent.com/liquidcarbon/chembiodata/main/isotopes.csv"
data_from_sql = PartitionedIsotopeData.build(query=f"FROM '{url}'", rename=True)

# this variety of outputs is helpful when populating cloud warehouses,
# such as Athena/Glue via awswrangler
names, folders, filepaths, datasets = data_from_sql.partition()

names[:3], folders[:3]
# ([['1'], ['2'], ['3']],
#  ['s3://myisotopes/z=1/', 's3://myisotopes/z=2/', 's3://myisotopes/z=3/'])

filepaths[:3], datasets[:3]
# (['s3://myisotopes/z=1/data.csv', 's3://myisotopes/z=2/data.csv', 's3://myisotopes/z=3/data.csv'],
#  [Dataset PartitionedIsotopeData of shape (3, 4)
#   symbol = ['H', 'H', 'H']
#   z = [1, 1, 1]
#   mass = [1.007825, 2.014102, 3.016049]
#   abundance = [0.999885, 0.000115, 0.0],
#   Dataset PartitionedIsotopeData of shape (2, 4)
#   symbol = ['He', 'He']
#   z = [2, 2]
#   mass = [3.016029, 4.002603]
#   abundance = [1e-06, 0.999999],
#   Dataset PartitionedIsotopeData of shape (2, 4)
#   symbol = ['Li', 'Li']
#   z = [3, 3]
#   mass = [6.015123, 7.016003]
#   abundance = [0.0759, 0.9241]])
```
If you work with AWS Athena, also check out the `kwargs_for_create_athena_table` method, available on all Datasets.
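A hedged sketch of how that might feed awswrangler (the database name is made up, and the exact contents of the returned kwargs are defined by Affinity; check the source for the authoritative shape):

```python
import awswrangler as wr

# hypothetical: assumes the returned dict carries the table name, S3 path, and
# column types/comments in the shape awswrangler's catalog functions expect
athena_kwargs = data_from_sql.kwargs_for_create_athena_table()
wr.catalog.create_csv_table(database="my_database", **athena_kwargs)
```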
## Motivation
Once upon a time, relational databases met object-oriented programming, and gave rise to object-relational mapping. Django ORM and SQLAlchemy unlocked the ability to represent database entries as python objects, with attributes for columns and relations, and methods for create-read-update-delete (CRUD) transactions. These scalar data elements (numbers, strings, booleans) carry a lot of meaning: someone's name or email or account balance, amounts of available items, time of important events. They change relatively frequently, one row at a time, and live in active, fast memory (RAM).
*(future blurb about OLAP, columnar formats, and cloud data storage)*
We need something new for vector data.
There are many options for working with dataframes.