# DynoSAM: Dynamic Object Smoothing and Mapping
A Stereo/RGB-D Visual Odometry pipeline for Dynamic SLAM.
DynoSAM estimates camera poses, object motions/poses, as well as static background and temporal dynamic object maps. It provides full-batch, sliding-window, and incremental optimization procedures and is fully integrated with ROS2.
<div align="center">
  <img src="./docs/media/aria_demo_parallel_hybrid.gif"/>
  <p style="font-style: italic; color: gray;">DynoSAM running the Parallel-Hybrid formulation in incremental optimisation mode on an indoor sequence recorded with an Intel RealSense. Playback is at 2x speed.</p>
</div>

<div align="center">
  <img src="./docs/media/omd-demo.gif"/>
  <p style="font-style: italic; color: gray;">Example output running on the Oxford Multimotion Dataset (OMD, 'Swinging 4 Unconstrained'). This visualisation was generated using playback after full-batch optimisation.</p>
</div>

## 📚 Publications
The official code for our paper:
- Jesse Morris, Yiduo Wang, Mikolaj Kliniewski, Viorela Ila. *DynoSAM: Open-Source Smoothing and Mapping Framework for Dynamic SLAM*. Accepted to IEEE Transactions on Robotics (T-RO), Visual SLAM Special Issue, 2025.
**Update December 2025**: Our work has been accepted to IEEE Robotics and Automation Letters (RA-L).

**Update November 2025**: Our work has been accepted to IEEE Transactions on Robotics (T-RO).

**Update September 2025**: This repository now also contains the code for our new work:
- J. Morris, Y. Wang, V. Ila. *Online Dynamic SLAM with Incremental Smoothing and Mapping*. Accepted to Robotics and Automation Letters (RA-L), 2025.
We kindly ask that you cite our papers if you find these works useful:
```bibtex
@article{morris2025dynosam,
  author={Morris, Jesse and Wang, Yiduo and Kliniewski, Mikolaj and Ila, Viorela},
  journal={IEEE Transactions on Robotics},
  title={DynoSAM: Open-Source Smoothing and Mapping Framework for Dynamic SLAM},
  year={2025},
  doi={10.1109/TRO.2025.3641813}
}
```
```bibtex
@article{morris2025online,
  author={Morris, Jesse and Wang, Yiduo and Ila, Viorela},
  journal={IEEE Robotics and Automation Letters},
  title={Online Dynamic SLAM with Incremental Smoothing and Mapping},
  year={2026},
  volume={},
  number={},
  pages={1-8},
  doi={10.1109/LRA.2026.3655286}
}
```
### Related Publications
DynoSAM was built as the culmination of several works:
- J. Morris, Y. Wang, V. Ila. The Importance of Coordinate Frames in Dynamic SLAM. IEEE Intl. Conf. on Robotics and Automation (ICRA), 2024
- J. Zhang, M. Henein, R. Mahony, V. Ila. *VDO-SLAM: A Visual Dynamic Object-aware SLAM System*, arXiv preprint.
- M. Henein, J. Zhang, R. Mahony, V. Ila. *Dynamic SLAM: The Need for Speed*. IEEE International Conference on Robotics and Automation (ICRA), 2020.
## 📖 Overview
### Key Features (Nov 2025 Update)
- 🚀 CUDA Integration: Front-end acceleration.
- 🧠 TensorRT: Integrated object detection and tracking.
- ⚡ Sparse Tracking: Options to move away from dense optical flow.
- 📦 Modular: System broken into packages for faster compile times.
- 🐳 Docker: Updated image with new dependencies.
### Documentation
We auto-generate Doxygen documentation for all classes in DynoSAM. The docs are kept up to date with the main branch.
## 1. ⚙️ Installation
We provide a detailed installation guide, including dependencies and Docker support. See the Installation instructions for details.
### 1.1 Edge Device Support
We currently support building for ARM64 (aarch64) devices, and DynoSAM has been tested on an NVIDIA Orin NX. Docker file support and more details are provided in the install instructions.

NOTE: DynoSAM does not currently run in real time on the Orin NX (it is mostly bottlenecked by the object-detection process). Better performance is expected on a more powerful device.
Also see the Docker README.md and the dynosam_nn README.md for more information on hardware and performance.
## 2. 🚀 Running DynoSAM
### 2.1 Parameters
DynoSAM is configured using a combination of YAML files (pipeline, frontend, datasets) and GFLAGS (overridable command-line parameters). ROS parameters are used only for input file paths.
All `.yaml` and `.flags` files must be placed in a single parameter folder, which defines the `params_path`:

```
params/
  FrontendParams.yaml
  PipelineParams.yaml
  [DatasetParams.yaml]
  [CameraParams.yaml]
  *.flags
```
- YAML files are loaded using config_utilities.
- GFlags provide run-time reconfiguration (important for automated experiments).
- ROS parameters are used sparingly (mainly for file paths).
NOTE: Additional GFlags cannot be passed through `ros2 launch`. To override GFlags, pass them directly with `ros2 run`, or modify the flag files inside the params folder.
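As a sketch, a `.flags` file in the params folder can collect per-run overrides. The flag names below are the ones shown elsewhere in this README; the file name and values are purely illustrative:

```
# example.flags -- illustrative values only
--data_provider_type=2
--output_path=/tmp/dynosam_results
--v=1
```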
To print active parameters:

```shell
ros2 run dynosam_utils eval_launch.py --show_dyno_args=true
```
To see all GFlag options:

```shell
ros2 run dynosam_ros dynosam_node --help
```
### 2.2 Quick Start (Launch File)
Launch the full pipeline with ROS2:
```shell
ros2 launch dynosam_ros dyno_sam_launch.py \
  params_path:=<path-to-params-folder> \
  dataset_path:=<path-to-dataset> \
  v:=<verbosity> \
  --data_provider_type=<data_set_loader_type> \
  --output_path=</path/to/output/folder>
```
The launch file:
- Loads all `.flags` files in the parameter folder,
- Applies dataset provider selection via `--data_provider_type`,
- Logs outputs to `--output_path` (must exist beforehand).
### 2.3 Experiment & Evaluation Launcher
For fine‑grained control and automated experiments, use:
```shell
ros2 run dynosam_utils eval_launch.py \
  --dataset_path <path> \
  --params_path <absolute path> \
  --output_path <path> \
  --name <experiment name> \
  --run_pipeline \
  --run_analysis \
  <extra GFLAG cli arguments>
```
This script:
- Automates running the pipeline and evaluations,
- Forwards all extra CLI arguments to DynoSAM (allowing any GFLAG override),
- Creates result folders `output_path/name/` automatically.
Example:

```shell
ros2 run dynosam_utils eval_launch.py \
  --output_path=/tmp/results \
  --name=test \
  --run_pipeline \
  --data_provider_type=2
```
### 2.4 Programmatic Execution (Python)
All command‑line behaviour can be replicated in Python. See: run_experiments_tro.py for examples.
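As a minimal sketch of one way to do this (not the repository's own helper; see run_experiments_tro.py for that), the launcher can be driven via `subprocess`. The CLI options used here are the ones documented above; the function name is hypothetical:

```python
import subprocess

def build_eval_cmd(output_path: str, name: str, *extra_gflags: str) -> list:
    """Assemble an eval_launch.py invocation (options documented above)."""
    return [
        "ros2", "run", "dynosam_utils", "eval_launch.py",
        f"--output_path={output_path}",
        f"--name={name}",
        "--run_pipeline",
        # Any extra arguments are forwarded to DynoSAM as GFLAG overrides.
        *extra_gflags,
    ]

if __name__ == "__main__":
    # Requires a sourced ROS2 workspace with DynoSAM built.
    subprocess.run(
        build_eval_cmd("/tmp/results", "test", "--data_provider_type=2"),
        check=True,
    )
```

Keeping command assembly separate from execution makes it easy to sweep over datasets or flag values in a loop, as automated experiments typically do.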
## 3. 📂 Datasets
### 3.1 Pre-processing Image Data
DynoSAM requires input image data in the form:
- RGB
- Depth/Stereo
- Dense Optical Flow
- Dense Semantic Instance mask
Each image is expected in the following form:
- The RGB image must be a valid 8-bit image (1-, 3- and 4-channel images are accepted).
- The depth image must be a `CV_64F` image where each pixel value is the metric depth.
- The mask must be `CV_32SC1`, where the static background has value $0$ and every other object is labelled with a tracking label $j$. $j$ is assumed to be globally consistent and is used to map the tracked object $j$ to the ground truth.
- The flow image must be `CV_32FC2`, a standard dense optical-flow representation.
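As a sketch, the expected OpenCV image types map onto NumPy dtypes as follows (`CV_64F` is `float64`, `CV_32SC1` is single-channel `int32`, `CV_32FC2` is two-channel `float32`; the image size and object region are illustrative):

```python
import numpy as np

h, w = 480, 640  # illustrative resolution

# RGB: valid 8-bit image (1-, 3- or 4-channel accepted); 3-channel here.
rgb = np.zeros((h, w, 3), dtype=np.uint8)

# Depth (CV_64F): each pixel holds the metric depth.
depth = np.full((h, w), 5.0, dtype=np.float64)

# Instance mask (CV_32SC1): 0 = static background, j > 0 = tracked object j.
mask = np.zeros((h, w), dtype=np.int32)
mask[100:200, 200:300] = 1  # one tracked object with label j = 1

# Dense optical flow (CV_32FC2): per-pixel (dx, dy) displacement.
flow = np.zeros((h, w, 2), dtype=np.float32)
```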
For dense optical flow (i.e. pre-November 2025) we use RAFT. This pre-processing code is not currently available.
For instance segmentation we use YOLOv8 for both image pre-processing and online processing. Both Python and C++ (TensorRT-accelerated) models can be found in the dynosam_nn package. See its README for more details.
- If `prefer_provided_optical_flow: true` (YAML), the pipeline expects a dense flow image. Otherwise, it falls back to sparse KLT tracking.
- If `prefer_provided_object_detection: true` (YAML), an instance mask must be provided. If false, masks are generated online via YOLOv8.
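A minimal sketch of how these two options might appear in `FrontendParams.yaml` (the option names come from this README; the surrounding file structure and any nesting are assumptions):

```yaml
# FrontendParams.yaml (fragment; layout is illustrative)
prefer_provided_optical_flow: true       # expect a dense flow image per frame
prefer_provided_object_detection: false  # generate instance masks online
```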
### 3.2 Running with Pre-processed Data
DynoSAM provides dataset loaders that parse pre-processed images (i.e. depth, optical flow, masks) and ground truth into a unified format.
All official datasets are hosted at the ACFR-RPG Datasets page. To download a dataset, create a directory for the relevant dataset and, within that directory, run:

```shell
wget -m -np -nH --cut-dirs=4 -R "index.html*" https://data.acfr.usyd.edu.au/rpg/dynosam/[Dataset]/[Subset]
```
For example, for the KITTI dataset with subset 0004, create and enter the directory `kitti-0004`, then download all files:

```shell
wget -m -np -nH --cut-dirs=4 -R "index.html*" https://data.acfr.usyd.edu.au/rpg/dynosam/kitti/0004
```
NOTE: When developing with Docker, download the sequences into the data folder mounted into the Docker container so the program can access them.
The following datasets are officially supported:

| Dataset | Dataset ID | Notes |
|---------|------------|-------|
| KITTI Tracking | 0 | Uses modified version with GT motion, flow, masks. |
| Virtual KITTI 2 | 1 | Raw dataset supported directly. |
| Cluster-SLAM (CARLA) | 2 | Raw dataset available; we recommend our processed version. |
| Oxford