# Learned Perceptive Forward Dynamics Model
Our novel perceptive Forward Dynamics Model (FDM) enables real-time, learned traversability assessment for safe robot navigation by predicting future states based on environmental geometry and proprioceptive history. Trained in simulation and fine-tuned with real-world data, the model captures the full system dynamics beyond rigid body simulation. Integrated into a zero-shot Model Predictive Path Integral (MPPI) planner, our approach removes the need for tedious cost function tuning, improving safety and generalization. Tested on the ANYmal legged robot, our method significantly boosts navigation success in rough environments, with effective sim-to-real transfer.
⭐ If you find our perceptive FDM useful, star it on GitHub to get notified of new releases! The repository features:
- Implementation of the perceptive FDM training code as an extension for IsaacLab
- Integration of the perceptive FDM into a Model Predictive Path Integral (MPPI) planner
- Real-world deployment of the perceptive FDM on the ANYmal robot
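To make the prediction interface concrete, the sketch below shows the kind of mapping an FDM provides: given a start pose and a candidate SE(2) action sequence, it returns predicted future poses and a failure probability. This is a purely illustrative stand-in — the function name, the unicycle-style placeholder dynamics, and the constant failure probability are assumptions, not the actual learned network, which additionally conditions on terrain geometry and proprioceptive history.

```python
import numpy as np

def rollout_fdm(pose, actions, dt=0.1):
    """Toy stand-in for a learned FDM: integrate SE(2) velocity commands.

    pose:    (x, y, yaw) start pose
    actions: (N, 3) array of body-frame (v_x, v_y, yaw_rate) commands
    Returns the predicted (N, 3) pose sequence and a placeholder
    failure probability (a real FDM predicts this from data).
    """
    poses = []
    x, y, yaw = pose
    for vx, vy, wz in actions:
        # Rotate the body-frame velocity command into the world frame.
        x += dt * (vx * np.cos(yaw) - vy * np.sin(yaw))
        y += dt * (vx * np.sin(yaw) + vy * np.cos(yaw))
        yaw += dt * wz
        poses.append((x, y, yaw))
    return np.array(poses), 0.0
```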
## Paper

A technical introduction to the theory behind our perceptive FDM is provided in our open-access RSS paper, available here. For a quick overview, watch the accompanying 5-minute presentation on YouTube. More information about the work is available in the abstract below.
<details>
<summary>Abstract</summary>
<br>
Ensuring safe navigation in complex environments requires accurate real-time traversability assessment and understanding of environmental interactions relative to the robot's capabilities. Traditional methods, which assume simplified dynamics, often require designing and tuning cost functions to safely guide paths or actions toward the goal. This process is tedious, environment-dependent, and not generalizable. To overcome these issues, we propose a novel learned perceptive Forward Dynamics Model (FDM) that predicts the robot's future state conditioned on the surrounding geometry and history of proprioceptive measurements, proposing a more scalable, safer, and heuristic-free solution. The FDM is trained on multiple years of simulated navigation experience, including high-risk maneuvers, and real-world interactions to incorporate the full system dynamics beyond rigid body simulation. We integrate our perceptive FDM into a zero-shot Model Predictive Path Integral (MPPI) planning framework, leveraging the learned mapping between actions, future states, and failure probability. This allows for optimizing a simplified cost function, eliminating the need for extensive cost-tuning to ensure safety. On the legged robot ANYmal, the proposed perceptive FDM improves the position estimation by on average 41% over competitive baselines, which translates into a 27% higher navigation success rate in rough simulation environments. Moreover, we demonstrate effective sim-to-real transfer and showcase the benefit of training on synthetic and real data.
</details>

## Citation
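The MPPI update used around such a dynamics model is simple to sketch: sample perturbed action sequences, roll each out through the dynamics, score the rollouts with a cost, and combine the samples with softmin weights so that low-cost rollouts dominate. The sketch below is a minimal, generic MPPI step under assumed `dynamics` and `cost` callables — it is not the repository's planner implementation.

```python
import numpy as np

def mppi_step(mean_actions, dynamics, cost, n_samples=64, sigma=0.5, lam=1.0, rng=None):
    """One MPPI update: returns the softmin-weighted mean action sequence.

    mean_actions: (H, A) nominal action sequence
    dynamics:     maps an (H, A) action sequence to a predicted trajectory
    cost:         maps a predicted trajectory to a scalar cost
    lam:          temperature; smaller values concentrate on the best samples
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n_steps, n_act = mean_actions.shape
    noise = rng.normal(0.0, sigma, size=(n_samples, n_steps, n_act))
    samples = mean_actions + noise                      # perturbed action sequences
    costs = np.array([cost(dynamics(s)) for s in samples])
    # Softmin weighting: exponentially favor low-cost rollouts.
    weights = np.exp(-(costs - costs.min()) / lam)
    weights /= weights.sum()
    return mean_actions + (weights[:, None, None] * noise).sum(axis=0)
```

With a learned FDM as `dynamics` and the predicted failure probability folded into `cost`, the cost function itself can stay simple (e.g., distance-to-goal plus a failure penalty), which is the cost-tuning reduction described above.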
```bibtex
@inproceedings{roth2025fdm,
  title={Learned Perceptive Forward Dynamics Model for Safe and Platform-aware Robotic Navigation},
  author={Roth, Pascal and Frey, Jonas and Cadena, Cesar and Hutter, Marco},
  booktitle={Robotics: Science and Systems (RSS 2025)},
  year={2025}
}
```
## Installation

### IsaacLab Extension (Training and Evaluation)
<p style="background-color:#ffdddd; border-left:4px solid #f44336; padding:10px;"> <strong>⚠️ WARNING:</strong> With our code update to the latest IsaacLab version, we observe the robot sinking into the ground, which will affect model learning. We are investigating the issue. </p>

The extension is developed with IsaacLab version 2.1.1 (latest tested commit 19b24c7). Future versions may work but are not tested. IsaacLab runs on Ubuntu 20.04 - 24.04. Our paper results use the locomotion policy by Miki et al. As this policy is not public, the repository's default is the standard IsaacLab ANYmal policy, which is less stable. Furthermore, no new parameter tuning has been performed. Results may therefore differ.
The extension is installed as follows:
1. Install IsaacSim and IsaacLab:

   Follow the IsaacLab installation guide to ensure IsaacSim and IsaacLab are installed.

   NOTE: Please use an IsaacLab version where PR2183 has been merged, which contains changes necessary to run the scripts successfully.

2. Clone this repository and install it as part of IsaacLab:

   ```bash
   git clone git@github.com:leggedrobotics/fdm.git
   cd <path-to-isaaclab-repo>
   ./isaaclab.sh -p -m pip install -e <path-to-your-fdm-repo>/exts/fdm
   ./isaaclab.sh -p -m pip install -e <path-to-your-fdm-repo>/nav-suite/exts/nav_suite
   ./isaaclab.sh -p -m pip install -e <path-to-your-fdm-repo>/nav-suite/exts/nav_tasks
   ```

   Important: Make sure the submodules are correctly initialized and the assets are downloaded from git lfs (install instructions here):

   ```bash
   cd <path-to-your-fdm-repo>
   git submodule update --init
   git lfs pull
   ```

3. Verify the installation by running the training script in debug mode:

   ```bash
   cd <path-to-your-fdm-repo>
   <path-to-isaaclab-repo>/isaaclab.sh -p scripts/train.py --mode debug
   ```

   The IsaacSim GUI should open and data collection should start, meaning the robots are moving. Afterwards, the FDM training should start.
For further details on the IsaacLab extensions, see the README
### ROS Integration (Real-World Deployment)
The integration is done with ROS Noetic on Ubuntu 20.04 for ANYmal D (Release 24.04). For details on the ROS integration, see the README.
1. Create a catkin workspace:

   ```bash
   mkdir -p fdm_ws/src
   cd fdm_ws/src
   ```

2. Clone the repository:

   ```bash
   git clone https://github.com/leggedrobotics/forward_dynamics_model.git
   ```

3. Build the workspace:

   ```bash
   cd ..
   catkin build fdm_navigation_ros waypoint_rviz_plugin
   ```

4. Install the FDM navigation package:

   ```bash
   pip install -e src/forward_dynamics_model/forward_dynamics_model/ros/
   ```

5. Verify the installation by sourcing the workspace and launching the planner:

   ```bash
   source devel/setup.bash
   roslaunch fdm_navigation_ros planner.launch
   ```
To use the WaypointTool to set waypoints for the planner, it has to be added to the ANYbotics RViz GUI. To do so, add the following lines under `Visualization Manager -> Tools`:
```yaml
Visualization Manager:
  ...
  Tools:
    - Class: rviz/WaypointTool
      Topic: waypoint
```
## Usage

### Model Demo
The latest model (`model.zip`), trained with the locomotion policy of Miki et al., is available for download:
IMPORTANT: For evaluations in simulation, it is recommended to use the simulation model, as it is not fitted to a particular real-world platform.
NOTE: The repository's default is the IsaacLab locomotion policy, as the policy of Miki et al. is not public. For comparable results, please retrain, possibly with adjusted parameters.
Follow the instructions above to set up the FDM extension for IsaacLab, then extract the model inside `<path-to-your-fdm-repo>/logs/fdm/fdm_se2_prediction_depth/fdm_latest`:

```bash
mkdir -p <path-to-your-fdm-repo>/logs/fdm/fdm_se2_prediction_depth/fdm_latest
cd <path-to-your-fdm-repo>/logs/fdm/fdm_se2_prediction_depth/fdm_latest
mv <path-to-model-download>/model.zip .
unzip model.zip
```
Then run the test script for the dynamics estimation model:

```bash
cd <path-to-your-fdm-repo>
<path-to-isaaclab-repo>/isaaclab.sh -p scripts/test.py --runs fdm_latest
```
For planning, first set the parameters to planning mode (during training, some bounds are higher to increase stability; they are then adjusted for the actual planning tasks). This can be done in the init, line 36. The planner test script can then be executed as follows:
```bash
cd <path-to-your-fdm-repo>
<path-to-isaaclab-repo>/isaaclab.
```
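When comparing dynamics predictions yourself, a common metric is the average displacement error (ADE) between predicted and ground-truth positions over the horizon; the 41% improvement reported in the paper concerns position estimation error. The helper below is an illustrative sketch of the metric itself (the function name and array shapes are assumptions, not the repository's evaluation code):

```python
import numpy as np

def average_displacement_error(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth
    positions over a horizon. pred, gt: (H, 2) arrays of (x, y)."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())
```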

