
Pose2Sim

Markerless kinematics with any cameras — From 2D Pose estimation to 3D OpenSim motion


https://github.com/user-attachments/assets/51a9c5a1-a168-4747-9f99-b0670927df95

Pose2Sim provides a workflow for 3D markerless kinematics (human or animal), as an alternative to traditional marker-based MoCap methods.

Pose2Sim is free and open-source, requiring only low-cost hardware while delivering research-grade accuracy and production-grade robustness. It gives maximum control over clearly explained parameters. Any combination of phones, webcams, or GoPros can be used with fully clothed subjects, making it particularly well suited to the sports field, the doctor's office, or outdoor 3D animation capture.

Note: For real-time analysis with a single camera, please consider Sports2D (note that the motion must lie in the sagittal or frontal plane).

Fun fact:
Pose2Sim stands for "OpenPose to OpenSim", as it originally used OpenPose inputs (2D keypoint coordinates) and led to an OpenSim result (full-body 3D joint angles). Pose estimation is now performed with more recent models from RTMPose, and custom models (from DeepLabCut, for example) can also be used.

<br> <img src="Content/Pose2Sim_workflow.jpg" width="760"> <!-- GitHub Star Button --> <!-- <a class="github-button" href="https://github.com/perfanalytics/pose2sim" data-color-scheme="no-preference: light; light: light; dark: dark;" data-icon="octicon-star" data-show-count="true" aria-label="Star perfanalytics/pose2sim on GitHub">Star</a> <script async defer src="https://buttons.github.io/buttons.js"></script> --> </br>
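The workflow pictured above can also be driven from Python. The sketch below lists the pipeline stages in order and calls them if the package is installed; the step names mirror the stages described in this README, but treat the exact function names as an assumption to check against your installed version:

```python
# Hedged sketch: run the Pose2Sim stages in order (each stage reads its
# parameters from the project's Config.toml). Step names are assumed to
# match the stages documented in this README.
steps = ["calibration", "poseEstimation", "synchronization",
         "personAssociation", "triangulation", "filtering",
         "markerAugmentation", "kinematics"]

try:
    from Pose2Sim import Pose2Sim
    for step in steps:
        getattr(Pose2Sim, step)()  # run each stage sequentially
except ImportError:
    # Degrade gracefully when Pose2Sim is not installed in this environment.
    print("Pose2Sim not installed; pipeline stages would be: " + ", ".join(steps))
```

Each stage can also be run on its own, which is useful when iterating on a single step (for example re-filtering without re-triangulating).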

Pose2Sim releases:

  • [x] v0.1 (08/2021): Published paper
  • [x] v0.2 (01/2022): Published code
  • [x] v0.3 (01/2023): Supported other pose estimation algorithms
  • [x] v0.4 (07/2023): New calibration tool based on scene measurements
  • [x] v0.5 (12/2023): Automatic batch processing
  • [x] v0.6 (02/2024): Marker augmentation, Blender visualizer
  • [x] v0.7 (03/2024): Multi-person analysis
  • [x] v0.8 (04/2024): New synchronization tool
  • [x] v0.9 (07/2024): Integration of pose estimation in the pipeline
  • [x] v0.10 (09/2024): Integration of OpenSim in the pipeline
  • [ ] v0.11: Integration of Sports2D, monocular 3D pose estimation, and documentation on new website
  • [ ] v0.12: Graphical User Interface
  • [ ] v0.13: Calibration based on keypoint detection, Handling left/right swaps, Correcting lens distortions
  • [ ] v1.0: First full release

N.B.: As always, I am more than happy to welcome contributors (see How to contribute).
N.B.: Please set undistort_points and handle_LR_swap to false for now, as these options currently lead to inaccuracies. I'll try to fix them soon.

</br>

Contents

  1. Installation and Demonstration
    1. Installation
    2. Demonstration Part-1: End-to-end video to 3D joint angle computation
    3. Demonstration Part-2: Visualize your results with OpenSim or Blender
    4. Demonstration Part-3: Try multi-person analysis
    5. Demonstration Part-4: Try batch processing
    6. Too slow for you?
  2. Use on your own data
    1. Setting up your project
    2. 2D pose estimation
      1. With RTMPose (default)
      2. With MMPose (coming soon)
      3. With DeepLabCut
      4. With OpenPose (legacy)
      5. With Mediapipe BlazePose (legacy)
      6. With AlphaPose (legacy)
    3. Camera calibration
      1. Convert from Caliscope, AniPose, FreeMocap, Qualisys, Optitrack, Vicon, OpenCap, EasyMocap, or bioCV
      2. Calculate from scratch
    4. Synchronizing, Associating, Triangulating, Filtering
      1. Synchronization
      2. Associate persons across cameras
      3. Triangulating keypoints
      4. Filtering 3D coordinates
      5. Marker augmentation
    5. OpenSim kinematics
      1. Within Pose2Sim
      2. Within OpenSim GUI
      3. Command Line
  3. Utilities
  4. How to cite and how to contribute
    1. How to cite
    2. How to contribute and to-do list
</br>

Installation and Demonstration

Installation

  1. Create a conda environment:
    Download and install Miniconda.
    Open an Anaconda prompt and create a virtual environment:

    conda create -n Pose2Sim python=3.12 -y 
    conda activate Pose2Sim
    
  2. Install OpenSim:
    Install the OpenSim Python API (if you do not want to install via conda, refer to this page):

    conda install -c opensim-org opensim -y
    
  3. Install Pose2Sim:

    • OPTION 1: Quick install: Open a terminal.

      pip install pose2sim
      
    • OPTION 2: Build from source and test the latest changes: Open a terminal in the directory of your choice and clone the Pose2Sim repository.

      git clone --depth 1 https://github.com/perfanalytics/pose2sim.git
      cd pose2sim
      pip install .
      
  4. Optional:
    For faster inference, you can run on the GPU. Install PyTorch with CUDA and cuDNN support, and ONNX Runtime with GPU support (not available on macOS).
    Be aware that GPU support takes an additional 6 GB on disk. The full installation is then 10.75 GB instead of 4.75 GB.

    Run nvidia-smi in a terminal. If this results in an error, your GPU is probably not compatible with CUDA. Otherwise, note the "CUDA version": it is the latest version your driver is compatible with (more information on this post).
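If you want to read that version programmatically rather than by eye, the "CUDA Version" field can be extracted from the nvidia-smi header with a small regex; a minimal stdlib sketch (the sample line below is illustrative output, not from a real run):

```python
import re

def parse_cuda_version(smi_output: str):
    """Extract the 'CUDA Version' field from nvidia-smi header output, or None."""
    m = re.search(r"CUDA Version:\s*([\d.]+)", smi_output)
    return m.group(1) if m else None

# Illustrative nvidia-smi header line (assumed format, check your own output):
sample = "| NVIDIA-SMI 550.54.15    Driver Version: 550.54.15    CUDA Version: 12.4 |"
print(parse_cuda_version(sample))  # -> 12.4
```

In practice you would feed it the output of `subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout`, guarding for the command being absent.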

    Then go to the ONNX Runtime requirements page and note the latest compatible CUDA and cuDNN requirements. Next, go to the PyTorch website and install the latest version that satisfies these requirements (beware that torch 2.4 ships with cuDNN 9, while torch 2.3 installs cuDNN 8). For example:

    pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
    

    Finally, install ONNX Runtime with GPU support:

    pip uninstall onnxruntime
    pip install onnxruntime-gpu
    

    Check that everything went well within Python with these commands:

    import torch; import onnxruntime as ort
    print(torch.cuda.is_available(), ort.get_available_providers())
    # Should print "True ['CUDAExecutionProvider', ...]"
    
<!-- print(f'torch version: {torch.__version__}, cuda version: {torch.version.cuda}, cudnn version: {torch.backends.cudnn.version()}, onnxruntime version: {ort.__version__}') -->

Note on storage use:
A full installation takes up to 10 GB of storage space. However, GPU support is optional and accounts for about 6 GB of that. A minimal installation with carefully chosen pose models and without GPU support takes less than 3 GB.
<img src="Content/Storage.png" width="760">

</br>

Demonstration Part-1: End-to-end video to 3D joint angle computation

_**This demonstration provides an example experiment of a person balancing on a beam.**_
