# ProjFlow

**[CVPR 2026] ProjFlow: Projection Sampling with Flow Matching for Zero-Shot Exact Spatial Motion Control**
Akihisa Watanabe, Qing Yu, Edgar Simo-Serra, Kent Fujiwara
<p align="center"> <img src="images/teaser.png"> </p>

## Overview
Official PyTorch implementation of ProjFlow, a training-free projection sampler for enforcing exact (hard) spatial constraints for human motion generation with flow-matching motion priors.
## ⚙️ Getting Started
<details> <summary><b>Installation, checkpoints, and data</b></summary>

### 0. Clone this repository

```bash
git clone https://github.com/Akihisa-Watanabe/ProjFlow.git
cd ProjFlow
```
### 1. Conda Environment

We provide a conda environment file for a reproducible setup.

```bash
conda env create -f environment.yml
conda activate projflow
```
### 2. Models and Dependencies
ProjFlow is a sampler: it runs on top of a pretrained motion prior (we use an ACMDM Flow backbone in the paper). Depending on what you want to do, you may need different assets:
- Demo / generation: requires the pretrained motion prior checkpoint.
- Evaluation (paper metrics): additionally requires standard HumanML3D evaluator checkpoints + GloVe metadata files.
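As a rough mental model, projection sampling integrates the flow-matching ODE as usual and projects each intermediate state onto the constraint set, so the hard constraint holds exactly at the final step. The following is an illustrative sketch under strong simplifying assumptions (Euler integration, equality constraints on selected coordinates, a linearly interpolated target along the probability path); it is not the paper's exact algorithm, and `velocity_fn` stands in for the pretrained motion prior:

```python
import numpy as np

def euclidean_project(x, mask, target):
    """Project x onto {x : x[mask] == target} (closest point in L2)."""
    x = x.copy()
    x[mask] = target
    return x

def projection_sample(velocity_fn, x0, mask, target, n_steps=50):
    """Euler-integrate a flow-matching ODE from noise x0 (t=0) toward data (t=1),
    projecting each intermediate state so the constraint is exact at t=1."""
    x, dt = x0, 1.0 / n_steps
    for i in range(n_steps):
        t_next = (i + 1) * dt
        x = x + dt * velocity_fn(x, i * dt)
        # Interpolate the hard target along the (assumed linear) path, so the
        # constrained coordinates equal `target` exactly when t_next == 1.
        x = euclidean_project(x, mask, (1.0 - t_next) * x0[mask] + t_next * target)
    return x
```

With a zero velocity field this returns `x0` with the constrained entries set exactly to `target`, which makes the exactness property easy to check.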
#### 2.1 Download pre-trained checkpoints (required)

We host the pretrained checkpoints on Hugging Face; you can download them with the Hugging Face CLI:
```bash
# Install Hugging Face Hub CLI if needed
pip install -U "huggingface_hub[cli]"

# Download checkpoints into the current directory (keeps the repo-style folder layout)
hf download Akihisa-Watanabe/ProjFlow \
    --include "checkpoints/**" \
    --local-dir .
```
Expected default location for the demo configs:

```
./checkpoints/t2m/ACMDM_Raw_Flow_S_PatchSize22/model/latest.tar
```
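After downloading, a quick check that the demo checkpoint is in place. This is a convenience sketch: the path is the default above, and `checkpoint_ready` is our own helper name, not part of the repo.

```python
from pathlib import Path

# Default demo checkpoint location (from the layout above).
DEMO_CKPT = Path("checkpoints/t2m/ACMDM_Raw_Flow_S_PatchSize22/model/latest.tar")

def checkpoint_ready(path: Path = DEMO_CKPT) -> bool:
    """Return True if the pretrained checkpoint is where the demo configs expect it."""
    if path.is_file():
        return True
    print(f"Missing checkpoint: {path} - rerun the `hf download` command above.")
    return False
```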
#### 2.2 Evaluator checkpoints (optional; required only for evaluation scripts)

The scripts under `evaluation_*.py` follow the standard HumanML3D evaluation pipeline used by the ACMDM ecosystem. To avoid duplicating (and potentially drifting from) the canonical setup steps, please follow the evaluator download and placement instructions in the ACMDM README.
ProjFlow expects the same evaluator checkpoint layout as ACMDM. In particular, the common default paths are:

```
./checkpoints/t2m/text_mot_match/model/finest.tar
./checkpoints/t2m/text_mot_match_clip/model/finest.tar
```
#### 2.3 GloVe metadata (optional; required only for evaluation scripts)

For evaluator-side text tokenization, we use the conventional GloVe-based vocabulary metadata files. Again, please follow the GloVe download instructions in the ACMDM README.

ProjFlow expects the following files:

```
./glove/our_vab_data.npy
./glove/our_vab_words.pkl
./glove/our_vab_idx.pkl
```
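A small loader to sanity-check the three files once they are in place. The helper name and the one-vector-per-word consistency check are our assumptions about the conventional HumanML3D evaluator format, not a documented API of this repo:

```python
import pickle
from pathlib import Path

import numpy as np

def load_glove_vocab(glove_dir="glove"):
    """Load the three GloVe vocabulary files listed above and check they agree."""
    d = Path(glove_dir)
    vectors = np.load(d / "our_vab_data.npy")      # assumed (V, D) word vectors
    with open(d / "our_vab_words.pkl", "rb") as f:
        words = pickle.load(f)                     # assumed list of V words
    with open(d / "our_vab_idx.pkl", "rb") as f:
        word2idx = pickle.load(f)                  # assumed word -> row index map
    assert vectors.shape[0] == len(words), "vector/word count mismatch"
    return vectors, words, word2idx
```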
### 3. Obtain Data (optional)

You do not need to download any dataset if you only want to generate motions from text prompts and run the demo constraints. If you want to reproduce and evaluate our method, obtain the HumanML3D dataset.

By default, our scripts expect:

```
./datasets/HumanML3D/
    new_joints/
    texts/
    train.txt
    val.txt
    test.txt
```
</details>
## 🎬 Demo
<details> <summary><b>Demo scripts and Blender visualization</b></summary>

### 1. Demo runner (Relative, Loop, 2D-to-3D lifting)

Run from the repository root:

```bash
# Relative joint offsets
python -m demo.run --config demo/configs/relative_offset.yaml

# Loop-closure constraints
python -m demo.run --config demo/configs/loop_closure.yaml

# 2D projection lifting
python -m demo.run --config demo/configs/lift_heart.yaml
```
### 2. Inpainting demo

Inpainting uses a dedicated script:

```bash
python demo/inpaint/run_inpaint.py \
    --text "a person walks in a circle" \
    --preset circle \
    --joint_id 0 \
    --n_frames 196 \
    --name ACMDM_Raw_Flow_S_PatchSize22 \
    --model ACMDM-Raw-Flow-S-PatchSize22 \
    --out_dir outputs/demo_inpaint_circle \
    --save_mp4
```
Reference for concrete values in this inpaint example:
| Argument | Example value | Description |
| ------------ | ------------------------------ | --------------------------------------------------------------------- |
| --model | ACMDM-Raw-Flow-S-PatchSize22 | Model key in ACMDM_models. |
| --name | ACMDM_Raw_Flow_S_PatchSize22 | Checkpoint experiment folder name. |
| --joint_id | 0 | Controlled joint index (0 = pelvis/root in T2M). |
| --n_frames | 196 | Generated sequence length. |
| --out_dir | outputs/demo_inpaint_circle | Output directory for samples_world.npy, plots, and optional videos. |
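The output `samples_world.npy` holds world-space joint positions. A quick validator for that layout: the (T, J, 3) shape is stated in the Blender table in this README, while the default 22-joint count is the standard HumanML3D skeleton and is our assumption here.

```python
import numpy as np

def check_motion_file(path, expected_joints=22):
    """Load a saved motion and verify it is (T, J, 3) world coordinates."""
    motion = np.load(path)
    assert motion.ndim == 3 and motion.shape[2] == 3, f"unexpected shape {motion.shape}"
    assert motion.shape[1] == expected_joints, f"expected {expected_joints} joints"
    return motion.shape  # (T, J, 3)
```

For example, `check_motion_file("outputs/demo_inpaint_circle/samples_world.npy")` after running the inpainting demo above.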
### 3. Blender visualization (headless)

You can render generated `.npy` motions with Blender:

```bash
python -m demo.projflow_blender_viz.run_headless \
    --motion outputs/demo_inpaint_circle/samples_world.npy \
    --out outputs/demo_inpaint_circle/motion_00_blender.mp4 \
    --config demo/projflow_blender_viz/config_default.json
```
The rendered motion will look something like this:

<p align="center"> <img src="images/blender.gif" alt="Rendered motion example"> </p>

Reference for concrete values in this Blender example:
| Argument | Example value | Description |
| ---------------------- | ----------------------------------------------- | ------------------------------------------------------------ |
| --motion | outputs/demo_inpaint_circle/samples_world.npy | Input motion from demo output, (T, J, 3) world coordinates. |
| --out | outputs/demo_inpaint_circle/motion_00_blender.mp4 | Rendered video output path. |
| --config | demo/projflow_blender_viz/config_default.json | Render settings (engine, fps, camera, lighting, style). |
| --blender (optional) | /path/to/blender | Use when Blender is not found on PATH. |
If Blender is not on PATH, set it with `--blender /path/to/blender` (or the `BLENDER_BIN` environment variable).
</details>

> [!NOTE]
> We tested our code with Blender 5.0.1.
## 📊 Evaluation
<details> <summary><b>Evaluation setup and command examples</b></summary>

Evaluation requires:

1. HumanML3D data under `./datasets/`
2. evaluator checkpoints under `./checkpoints/t2m/text_mot_match*/...`
3. GloVe metadata under `./glove/`

For (2) and (3), please follow the ACMDM README.
Run a single controlled evaluation with:

```bash
python evaluation_ProjFlow.py \
    --name ACMDM_Raw_Flow_S_PatchSize22 \
    --model ACMDM-Raw-Flow-S-PatchSize22 \
    --dataset_name t2m \
    --dataset_dir ./datasets \
    --checkpoints_dir ./checkpoints \
    --cfg 3 \
    --index 0 \
    --intensity 100
```
Reference for concrete values in this evaluation example:
| Argument | Example value | Description |
| ------------------------ | ----------------------------------------------------------------- | ------------------------------------------------------------------------- |
| --name | ACMDM_Raw_Flow_S_PatchSize22 | Checkpoint experiment folder name under ./checkpoints/<dataset_name>/. |
| --model | ACMDM-Raw-Flow-S-PatchSize22 | Model key in ACMDM_models (must match checkpoint architecture). |
| --dataset_name | t2m | Evaluation dataset. |
| --dataset_dir | ./datasets | Root directory containing dataset files. |
| --checkpoints_dir | ./checkpoints | Root directory containing checkpoints and eval logs. |
| --cfg | 3 | Classifier-free guidance scale used during sampling. |
| --index | 0 | Controlled joint index (multiple values allowed, e.g. --index 0 10 11). |
| --intensity | 100 | Constraint intensity (Appendix settings: 1 2 5 25 100). |
`--name` must match your checkpoint folder:

```
./checkpoints/<dataset_name>/<name>/model/latest.tar
```
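To reproduce the Appendix intensity sweep, you can loop the command above over the listed settings. A minimal sketch that just assembles each argument list (run each with `subprocess.run(cmd, check=True)`; the flag values mirror the example invocation above):

```python
# One evaluation command per Appendix intensity setting (1, 2, 5, 25, 100).
INTENSITIES = (1, 2, 5, 25, 100)

def eval_command(intensity, index=(0,), cfg=3):
    """Build the argument list for one evaluation_ProjFlow.py run."""
    return [
        "python", "evaluation_ProjFlow.py",
        "--name", "ACMDM_Raw_Flow_S_PatchSize22",
        "--model", "ACMDM-Raw-Flow-S-PatchSize22",
        "--dataset_name", "t2m",
        "--dataset_dir", "./datasets",
        "--checkpoints_dir", "./checkpoints",
        "--cfg", str(cfg),
        "--index", *[str(i) for i in index],
        "--intensity", str(intensity),
    ]

commands = [eval_command(k) for k in INTENSITIES]
```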
Common joint indices used in our evaluation (standard 22-joint T2M/HumanML3D skeleton):

| Index | Joint |
| ----- | ----- |
| 0 | Pelvis (root) |
| 10 | Left foot |
| 11 | Right foot |

</details>