InferBiomechanics
Training and evaluation scripts for ML models on AddBiomechanics data, to infer missing dynamics data from mocap
Welcome to InferBiomechanics
This is a small repository with example scripts, code and baselines to train and evaluate models that can infer physics information from pure motion, trained and evaluated on AddBiomechanics data.
Getting The Data
If you are running on the Sherlock cluster, the latest data is already loaded at /home/groups/delp/data.
For local training runs, the latest dataset is available in a Google Drive folder, here.
Please download the data, and place it into the data folder in this repository. When completed, there should be a data/train folder, and a data/dev folder, each with many *.b3d files in them.
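Once the download is in place, a quick sanity check can confirm the expected layout. This is a hypothetical helper, not part of the repository:

```python
from pathlib import Path

def count_b3d_files(data_root: str = "data") -> dict:
    """Count *.b3d files in the train and dev splits under data_root."""
    root = Path(data_root)
    return {split: len(list((root / split).glob("*.b3d"))) for split in ("train", "dev")}

if __name__ == "__main__":
    # Both counts should be well above zero after the download completes.
    for split, n in count_b3d_files().items():
        print(f"data/{split}: {n} .b3d files")
```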
Running the Code
First, run pip3 install -r requirements.txt
There are several tasks you might want to run, all of which can be accessed from the command line entrypoint, src/cli/main.py. To run main.py, you'll need to be in the src directory.
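The entrypoint dispatches on a subcommand name. The argparse wiring below is a minimal sketch of that pattern, using the subcommand names this README describes; it is not the actual src/cli/main.py, and the flag handling is assumed:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Build a parser with the three subcommands this README describes."""
    parser = argparse.ArgumentParser(prog="main.py")
    subparsers = parser.add_subparsers(dest="command", required=True)
    for command in ("train", "visualize", "analyze"):
        sub = subparsers.add_parser(command)
        # The README mentions a --model selector (e.g. --model analytical)
        # for visualize/analyze; here it is added to every subcommand for brevity.
        sub.add_argument("--model", default=None)
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args.command, args.model)
```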
Training a Model
To generate model snapshots, run this command:
python3 main.py train ...
We use Weights and Biases to track model training, so you'll need to create an account there, and either:
- set the WANDB_API_KEY environment variable to your API key, or
- run wandb login from the command line, and follow the instructions.
Once that's set up, your runs will automatically log to your account, and you can see them in the web interface. By default, the runs log to a shared academic project, shpd1.
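Either route ends with the API key visible to the training process. A tiny pre-flight check for the environment-variable option (a hypothetical helper, not part of the repository):

```python
import os

def wandb_key_configured() -> bool:
    """True if a Weights & Biases API key is present in the WANDB_API_KEY environment variable."""
    return bool(os.environ.get("WANDB_API_KEY"))

if __name__ == "__main__":
    if not wandb_key_configured():
        print("Set WANDB_API_KEY or run `wandb login` before training.")
```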
Visualizing a Model
It's often helpful to be able to see how a model is screwing up, and what might be strange in the data.
This command will automatically load the selected model type from the latest checkpoint, run the loaded model on the training set, and visualize the results in the browser:
python3 main.py visualize ...
ANALYTICAL BASELINE: To visualize the results of running an analytical baseline, run with the --model analytical flag.
Evaluating a Model
To get performance numbers for a given model on the whole dataset, even if it hasn't finished training, run:
python3 main.py analyze ...
This will automatically pick up the latest model checkpoint file, run it on the whole dataset, and print out the results.
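"Pick up the latest checkpoint" is typically a newest-file lookup. A sketch of that logic, with the directory layout, file extension, and function name all assumed rather than taken from the repository:

```python
from pathlib import Path
from typing import Optional

def latest_checkpoint(checkpoint_dir: str, pattern: str = "*.pt") -> Optional[Path]:
    """Return the most recently modified checkpoint file, or None if there are none."""
    candidates = list(Path(checkpoint_dir).glob(pattern))
    if not candidates:
        return None
    # Newest by filesystem modification time.
    return max(candidates, key=lambda p: p.stat().st_mtime)
```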
ANALYTICAL BASELINE: To evaluate the results of running an analytical baseline on the whole dataset, run with the --model analytical flag.