# Pose2Sim
Markerless kinematics with any cameras — From 2D Pose estimation to 3D OpenSim motion
Install / Use
https://github.com/user-attachments/assets/51a9c5a1-a168-4747-9f99-b0670927df95
Pose2Sim provides a workflow for 3D markerless kinematics (human or animal), as an alternative to traditional marker-based MoCap methods.
Pose2Sim is free and open-source, requiring low-cost hardware but with research-grade accuracy and production-grade robustness. It gives maximum control over clearly explained parameters. Any combination of phones, webcams, or GoPros can be used with fully clothed subjects, so it is particularly adapted to the sports field, the doctor's office, or for outdoor 3D animation capture.
Note: For real-time analysis with a single camera, please consider Sports2D (note that the motion must lie in the sagittal or frontal plane).
Fun fact:
Pose2Sim stands for "OpenPose to OpenSim", as it originally used OpenPose inputs (2D keypoint coordinates) and produced an OpenSim result (full-body 3D joint angles). Pose estimation is now performed with more recent models from RTMPose, and custom models (from DeepLabCut, for example) can also be used.
Pose2Sim releases:
- [x] v0.1 (08/2021): Published paper
- [x] v0.2 (01/2022): Published code
- [x] v0.3 (01/2023): Supported other pose estimation algorithms
- [x] v0.4 (07/2023): New calibration tool based on scene measurements
- [x] v0.5 (12/2023): Automatic batch processing
- [x] v0.6 (02/2024): Marker augmentation, Blender visualizer
- [x] v0.7 (03/2024): Multi-person analysis
- [x] v0.8 (04/2024): New synchronization tool
- [x] v0.9 (07/2024): Integration of pose estimation in the pipeline
- [x] v0.10 (09/2024): Integration of OpenSim in the pipeline
- [ ] v0.11: Integration of Sports2D, monocular 3D pose estimation, and documentation on new website
- [ ] v0.12: Graphical User Interface
- [ ] v0.13: Calibration based on keypoint detection, Handling left/right swaps, Correcting lens distortions
- [ ] v1.0: First full release
N.B.: As always, I am more than happy to welcome contributors (see How to contribute).
N.B.: Please set `undistort_points` and `handle_LR_swap` to false for now, as they currently lead to inaccuracies. I'll try to fix this soon.
## Contents
- Installation and Demonstration
- Use on your own data
- Utilities
- How to cite and how to contribute
## Installation and Demonstration

### Installation
- Create a conda environment:

  Download and install Miniconda.

  Open an Anaconda prompt and create a virtual environment:

  ```bash
  conda create -n Pose2Sim python=3.12 -y
  conda activate Pose2Sim
  ```

- Install OpenSim:

  Install the OpenSim Python API (if you do not want to install via conda, refer to this page):

  ```bash
  conda install -c opensim-org opensim -y
  ```

- Install Pose2Sim:

  - OPTION 1: Quick install. Open a terminal:

    ```bash
    pip install pose2sim
    ```

  - OPTION 2: Build from source to test the latest changes. Open a terminal in the directory of your choice and clone the Pose2Sim repository:

    ```bash
    git clone --depth 1 https://github.com/perfanalytics/pose2sim.git
    cd pose2sim
    pip install .
    ```
- Optional: GPU support.

  For faster inference, you can run on the GPU. Install PyTorch with CUDA and cuDNN support, and ONNX Runtime with GPU support (not available on macOS).

  Be aware that GPU support takes an additional 6 GB on disk; the full installation is then 10.75 GB instead of 4.75 GB.

  Run `nvidia-smi` in a terminal. If this results in an error, your GPU is probably not compatible with CUDA. Otherwise, note the "CUDA version": it is the latest version your driver is compatible with (more information in this post).

  Then go to the ONNX Runtime requirements page and note the latest compatible CUDA and cuDNN versions. Next, go to the PyTorch website and install the latest version that satisfies these requirements (beware that torch 2.4 ships with cuDNN 9, while torch 2.3 installs cuDNN 8). For example:

  ```bash
  pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
  ```

  Finally, install ONNX Runtime with GPU support:

  ```bash
  pip uninstall onnxruntime
  pip install onnxruntime-gpu
  ```

  Check that everything went well within Python:

  ```python
  import torch
  import onnxruntime as ort
  print(torch.cuda.is_available(), ort.get_available_providers())
  # Should print: True ['CUDAExecutionProvider', ...]
  ```
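If you want to script this check on machines where the GPU stack may be absent, a minimal sketch using only the standard library avoids any `ImportError` (the import names `torch` and `onnxruntime` are the usual ones; note that `onnxruntime-gpu` also installs under the `onnxruntime` import name):

```python
# Report which GPU-inference prerequisites are importable, without
# actually importing them, so this never crashes on a CPU-only machine.
import importlib.util

def gpu_stack_status() -> dict[str, bool]:
    """Map each expected package name to whether it can be imported."""
    packages = ("torch", "onnxruntime")
    return {name: importlib.util.find_spec(name) is not None for name in packages}

print(gpu_stack_status())
```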
Note on storage use:
A full installation takes up to 10 GB of storage space. However, GPU support is not mandatory and accounts for about 6 GB of that. A minimal installation with carefully chosen pose models and without GPU support takes less than 3 GB.
<img src="Content/Storage.png" width="760">
### Demonstration Part 1: End-to-end video to 3D joint angle computation
_**This demonstration provides an example experiment of a person balanc
