pymovements
A Python package for processing eye movement data
pymovements is an open-source Python package for processing eye movement data. It provides a simple interface to download publicly available datasets, preprocess gaze data, detect oculomotor events, and render plots to visually analyze your results.
- Website: https://github.com/pymovements/pymovements
- Documentation: https://pymovements.readthedocs.io
- Source code: https://github.com/pymovements/pymovements
- PyPI package: https://pypi.org/project/pymovements
- Conda package: https://anaconda.org/conda-forge/pymovements
- Bug reports: https://github.com/pymovements/pymovements/issues
- Contributing: https://github.com/pymovements/pymovements/blob/main/CONTRIBUTING.md
- Mailing list: pymovements@python.org (subscribe)
- Discord: https://discord.gg/K2uS2R6PNj
Getting Started
If you are new to pymovements or to eye-tracking data analysis, we recommend starting with the User Guide, which introduces the concepts, data structures, and workflows used throughout the library: 👉 User Guide (user-guide/index)
Quick example
For a minimal example of loading and processing eye-tracking data with pymovements:

```python
import pymovements as pm

dataset = pm.Dataset(
    'JuDo1000',            # choose a public dataset from our dataset library
    path='data/judo1000',  # set up your local dataset path
)
dataset.download()  # download a public dataset from our dataset library
dataset.load()      # load the dataset
```
Transform coordinates and calculate velocities:

```python
dataset.pix2deg()  # transform pixel coordinates to degrees of visual angle
dataset.pos2vel()  # transform positional data to velocity data
```
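For intuition, the geometry behind a pixel-to-degree transform and a finite-difference velocity estimate can be sketched in plain Python. This is an illustrative toy, not pymovements' implementation; the screen size, viewing distance, and sampling rate below are made-up example parameters:

```python
import math

def pix2deg(x_px, screen_px=1280, screen_cm=38.0, distance_cm=68.0):
    """Convert a horizontal pixel coordinate to degrees of visual angle.

    Toy sketch of the underlying geometry with example screen parameters:
    offset from screen center in cm, then the angle subtended at the eye.
    """
    cm_per_px = screen_cm / screen_px
    x_cm = (x_px - screen_px / 2) * cm_per_px  # offset from screen center
    return math.degrees(math.atan2(x_cm, distance_cm))

def pos2vel(positions_deg, sampling_rate=1000.0):
    """Estimate velocity (deg/s) via central differences over positions."""
    return [
        (positions_deg[i + 1] - positions_deg[i - 1]) * sampling_rate / 2
        for i in range(1, len(positions_deg) - 1)
    ]
```

A gaze sample at the screen center maps to 0 degrees, and a linearly moving position yields a constant velocity; pymovements performs the same kind of transform per sample across whole recordings.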
Detect oculomotor events:

```python
dataset.detect('ivt')            # detect fixations using the I-VT algorithm
dataset.detect('microsaccades')  # detect saccades using the microsaccades algorithm
```
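The idea behind velocity-threshold (I-VT) detection can be illustrated in a few lines of plain Python: consecutive samples whose velocity stays below a threshold are grouped into fixation candidates. This is a toy sketch, not pymovements' detector, and the 20 deg/s threshold is an arbitrary example value (microsaccade detectors such as the one above instead derive the threshold from the velocity distribution itself):

```python
def ivt_events(velocities, threshold=20.0):
    """Group consecutive below-threshold samples into fixation events.

    Returns a list of (onset, offset) sample indices, inclusive.
    """
    events, onset = [], None
    for i, v in enumerate(velocities):
        if abs(v) < threshold:
            if onset is None:
                onset = i  # a fixation candidate starts here
        elif onset is not None:
            events.append((onset, i - 1))  # velocity spike ends the fixation
            onset = None
    if onset is not None:  # close a fixation running to the end of the trace
        events.append((onset, len(velocities) - 1))
    return events
```

On a trace like `[5, 5, 50, 5, 5]` this yields two fixation spans separated by the fast (saccadic) sample; real detectors add minimum durations and merge heuristics on top.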
Quick Links
- Installation Options (user-guide/getting-started/installation)
- Tutorials
- API Reference
Contributing
We welcome contributions of all kinds to pymovements!
For a detailed guide, please refer to our CONTRIBUTING.md first.
If you have any questions, please open an issue or write to us at pymovements@python.org.
Citing
If you use pymovements in your research, we would be happy if you cite our work using the following BibTeX entry:
```bibtex
@inproceedings{pymovements,
    author = {Krakowczyk, Daniel G. and Reich, David R. and Chwastek, Jakob
              and Jakobi, Deborah N. and Prasse, Paul and Süss, Assunta
              and Turuta, Oleksii and Kasprowski, Paweł and Jäger, Lena A.},
    title = {pymovements: A Python Package for Processing Eye Movement Data},
    year = {2023},
    isbn = {979-8-4007-0150-4/23/05},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3588015.3590134},
    doi = {10.1145/3588015.3590134},
    booktitle = {2023 Symposium on Eye Tracking Research and Applications},
    location = {Tubingen, Germany},
    series = {ETRA '23},
}
```
A preprint is also available on arXiv.