# Neural MP: A Generalist Neural Motion Planner

This repository is the official implementation of [Neural MP: A Generalist Neural Motion Planner](https://mihdalal.github.io/neuralmotionplanner).
Neural MP is a machine learning-based motion planning system for robotic manipulation tasks. It combines neural networks trained on large-scale simulated data with lightweight optimization techniques to generate efficient, collision-free trajectories. Neural MP is designed to generalize across diverse environments and obstacle configurations, making it suitable for both simulated and real-world robotic applications. This repository contains the implementation, data generation tools, and evaluation scripts for Neural MP.
https://github.com/user-attachments/assets/17eeb664-ea4c-4904-b82e-cb5231fc84b9
Authors: Murtaza Dalal*, Jiahui Yang*, Russell Mendonca, Youssef Khaky, Ruslan Salakhutdinov, Deepak Pathak
Website: https://mihdalal.github.io/neuralmotionplanner
Paper: https://mihdalal.github.io/neuralmotionplanner/resources/paper.pdf
Models: https://huggingface.co/mihdalal/NeuralMP
If you find this codebase useful in your research, please cite:
```bibtex
@article{dalal2024neuralmp,
    title={Neural MP: A Generalist Neural Motion Planner},
    author={Murtaza Dalal and Jiahui Yang and Russell Mendonca and Youssef Khaky and Ruslan Salakhutdinov and Deepak Pathak},
    journal={arXiv preprint arXiv:2409.05864},
    year={2024},
}
```
## Table of Contents

- [Prerequisites](#prerequisites)
- [Installation Instructions](#installation-instructions)
- [Real World Deployment](#real-world-deployment)

## Prerequisites

- Conda
- NVIDIA GPU with appropriate drivers (for GPU support)
## Installation Instructions

### 1. Environment Setup

The system has been tested with Python 3.8, CUDA 12.1, and an RTX 3090 GPU with driver version 535.
#### 1.1 Environment Variables

```bash
export PATH=/usr/local/cuda-12.1/bin:$PATH
export CUDA_HOME=/usr/local/cuda-12.1/
export WANDB_API_KEY=your_wandb_api_key_here
export MESA_GLSL_VERSION_OVERRIDE="330"
export MESA_GL_VERSION_OVERRIDE="3.3"
export OMP_NUM_THREADS=1  # Crucial for fast SubprocVecEnv performance!
```
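Forgetting one of these exports tends to fail late (at CUDA compile time or at W&B login), so a small pre-flight check can save a debugging cycle. The helper below is a sketch, not part of the repo; the variable names simply mirror the exports above:

```python
import os

def missing_env_vars(required, env=None):
    """Return the names in `required` that are unset or empty in `env`."""
    env = os.environ if env is None else env
    return [name for name in required if not env.get(name)]

# These names match the exports in the snippet above; adjust as needed.
print(missing_env_vars(["CUDA_HOME", "WANDB_API_KEY", "OMP_NUM_THREADS"]))
```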
#### 1.2 Create Conda Environment

```bash
conda create -n neural_mp python=3.8
conda activate neural_mp
```
### 2. System Dependencies

Install the required system libraries:

```bash
sudo apt-get update && sudo apt-get install -y \
swig cmake libgomp1 libjpeg8-dev zlib1g-dev libpython3.8 \
libxcursor-dev libxrandr-dev libxinerama-dev libxi-dev libegl1 \
libglfw3-dev libglfw3 libgl1-mesa-glx libfdk-aac-dev libass-dev \
libopus-dev libtheora-dev libvorbis-dev libvpx-dev libssl-dev \
libboost-serialization-dev libboost-filesystem-dev libboost-system-dev \
libboost-program-options-dev libboost-test-dev libeigen3-dev libode-dev \
libyaml-cpp-dev libboost-python-dev libboost-numpy-dev libglfw3-dev \
libgles2-mesa-dev patchelf libgl1-mesa-dev libgl1-mesa-glx libglew-dev \
libosmesa6-dev
```
### 3. Clone Repository

To clone the repository with all its submodules, use the following command:

```bash
git clone --recurse-submodules https://github.com/mihdalal/neuralmotionplanner.git
cd neuralmotionplanner
```

If you've already cloned the repository without the `--recurse-submodules` flag, you can initialize and update the submodules like so:

```bash
git submodule update --init --recursive
```

Create directories to store real-world data:

```bash
mkdir real_world_test_set && cd real_world_test_set && \
mkdir collected_configs collected_pcds collected_trajs evals && cd ..
```
### 4. Python Dependencies

#### 4.1 Set up OMPL

We provide two ways to install OMPL:

- Build from source as described in the official installation guide:

  ```bash
  ./install-ompl-ubuntu.sh --python
  ```

- In case the compilation doesn't go through successfully, we provide a pre-compiled zip file as an alternative:

  ```bash
  unzip containers/ompl-1.5.2.zip
  ```

After installation, run:

```bash
echo "<path to neuralmotionplanner folder>/neuralmotionplanner/ompl-1.5.2/py-bindings" >> ~/miniconda3/envs/neural_mp/lib/python3.8/site-packages/ompl.pth
```
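The `~/miniconda3/envs/neural_mp/...` destination above assumes a default Miniconda location. If your conda lives elsewhere, the active interpreter can report its own `site-packages` directory; a minimal sketch (not repo code) for resolving where `ompl.pth` should be written:

```python
import sysconfig
from pathlib import Path

def ompl_pth_path():
    """Return where ompl.pth should live for the currently active interpreter."""
    # "purelib" is this interpreter's site-packages directory.
    return Path(sysconfig.get_paths()["purelib"]) / "ompl.pth"

# Run inside the activated neural_mp environment to get the exact path.
print(ompl_pth_path())
```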
#### 4.2 Install PyTorch and PyTorch3D

```bash
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
pip install "git+https://github.com/facebookresearch/pytorch3d.git"
```

#### 4.3 Install Project Dependencies

```bash
pip install -e pybullet-object-models/
pip install -e robomimic/
pip install -e pointnet2_ops/
pip install -e robofin/
pip install -e ./
pip install -r requirements.txt
```
In practice, we found that the `pointnet2_ops` extension is not easy to build and often raises import errors. Hence, we offer our pre-compiled `_ext.cpython-38-x86_64-linux-gnu.so` file. With Python 3.8, torch 2.1.0, and CUDA 12.1, you can install the package without compiling new extensions. Please note that in order to use the pre-compiled extension, you need to comment out the extension-building section of `setup.py`; otherwise it will rebuild and overwrite the file.
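One quick sanity check (a sketch, not part of the repo) is to compare the shipped file's name against the extension suffix your interpreter expects; the two only match on CPython 3.8 / x86_64 Linux builds:

```python
import sysconfig

def ext_matches_interpreter(so_filename):
    """Check whether a compiled-extension filename targets this interpreter's ABI."""
    # EXT_SUFFIX looks like ".cpython-38-x86_64-linux-gnu.so" on CPython 3.8 / Linux.
    return so_filename.endswith(sysconfig.get_config_var("EXT_SUFFIX"))

# The pre-compiled file shipped with the repo:
print(ext_matches_interpreter("_ext.cpython-38-x86_64-linux-gnu.so"))
```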
#### 4.4 For Contributors

```bash
pre-commit install
```
### 5. Real World Dependencies

If you are interested in running Neural MP in the real world, please install ManiMo.

- Set `MANIMO_PATH` as an environment variable in the `.bashrc` file: `export MANIMO_PATH={FOLDER_PATH_TO_MANIMO}/manimo/manimo`
- Run the setup script on the client computer. Note that `mamba` setup does not work; always use `miniconda`: `source setup_manimo_env_client.sh`
- Run the setup script on the server computer. Note that `mamba` setup does not work; always use `miniconda`: `source setup_manimo_env_server.sh`

To verify that the installation works, run the polymetis server on the NUC by running the following script under the scripts folder:

```bash
python get_current_position.py
```
## Real World Deployment

Commands for real-world deployment on a setup with a single Franka robot, the default Panda gripper, and multiple Intel RealSense cameras.
### Camera Calibration

#### Step 1: Set up camera IDs and intrinsics in the ManiMo camera config at `multi_real_sense_neural_mp`

- Replace `device_id` with the actual camera id shown on its label.
- Execute the script `get_camera_intrinsics.py` and replace `intrinsics` with the terminal output:

  ```bash
  python neural_mp/real_utils/get_camera_intrinsics.py -n <camera serial id>
  ```

  e.g.

  ```bash
  python neural_mp/real_utils/get_camera_intrinsics.py -n 102422076289
  ```

- You may add/delete camera configs to accommodate the actual number of cameras you are using (in our setup we have 4 cameras). You can also adjust other camera parameters according to your needs, but please follow the naming convention in the config file.
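For reference, the intrinsics printed by the script are the usual pinhole parameters. As an illustration (the numeric values below are placeholders, not real calibration output), they assemble into the standard 3x3 camera matrix:

```python
def intrinsics_matrix(fx, fy, cx, cy):
    """Assemble a 3x3 pinhole camera matrix K from focal lengths and principal point."""
    return [
        [fx, 0.0, cx],
        [0.0, fy, cy],
        [0.0, 0.0, 1.0],
    ]

# Placeholder values for illustration only; use the output of
# get_camera_intrinsics.py for your actual cameras.
K = intrinsics_matrix(fx=615.0, fy=615.0, cx=320.0, cy=240.0)
```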
#### Step 2: Calibration with AprilTag

- Print an AprilTag and attach it to the Panda gripper, the larger the better; use an AprilTag generator to produce one (in our case we are using a `50mm` AprilTag from tag family `tagStandard52h13`, ID `17`).
- Update the AprilTag specification in `calibration_apriltag.yaml`. You may set `display_images` to `True` to debug the images captured by the camera.
- Clear any potential obstacles in front of the robot.
- Execute the script `calibration_apriltag.py` for each camera; it will automatically calibrate the camera extrinsics and save them as a `.pkl` file. Specify the index of the camera you want to calibrate and make sure the AprilTag will be in view during calibration. You may activate the `--flip` flag to turn the end effector 180 degrees, so the AprilTag can be captured by the cameras behind the robot.

  ```bash
  # e.g. calibrate camera 1, which is located at the back of the robot
  python neural_mp/real_utils/calibration_apriltag.py -c 1 --flip
  ```
<div align="center">
<img src="media/readme1.png" width="500" height="400" title="readme1">
</div>
#### Step 3: Manual Offset Tuning

The calibration process in Step 2 assumes the center of the AprilTag is located exactly at the end-effector position, which is normally not the case: there will be a small xyz shift between the AprilTag and the actual end-effector position. To mitigate this error, Step 3 updates the `mv_shift` param in `multi_real_sense_neural_mp`.
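Conceptually, this correction is just a constant xyz offset folded into the calibrated pose. A minimal sketch, assuming a row-major 4x4 extrinsic and a hypothetical 1 cm tag-to-gripper offset along x (the function is illustrative, not the repo's actual code):

```python
def apply_xyz_shift(extrinsic, shift):
    """Add a constant xyz offset to the translation column of a 4x4 pose matrix."""
    corrected = [row[:] for row in extrinsic]  # deep-enough copy of the 4x4 matrix
    for i in range(3):
        corrected[i][3] += shift[i]
    return corrected

# Hypothetical calibrated pose (identity rotation) and a 1 cm offset along x
# between the AprilTag center and the true end-effector position.
pose = [[1.0, 0.0, 0.0, 0.5],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.2],
        [0.0, 0.0, 0.0, 1.0]]
corrected = apply_xyz_shift(pose, shift=(0.01, 0.0, 0.0))
```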
- Execute the script `calibration_shift.py`:

  ```bash
  python neural_mp/real_utils/calibration_shift.py
  ```

- Open the printed web link (should be something like `http://127.0.0.1:7000/static/`) for `meshcat` visualization
http://127.0.0.1:7000/static/) formeshcatvisualization - Now you s
