# NeuPAN

[TRO 2025] NeuPAN: Direct Point Robot Navigation with End-to-End Model-based Learning.
<a href="https://github.com/hanruihua/neupan"><img src='https://img.shields.io/github/stars/hanruihua/neupan?=yellow' alt='GitHub Stars'></a> <a href="https://ieeexplore.ieee.org/abstract/document/10938329"><img src='https://img.shields.io/badge/PDF-IEEE-brightgreen' alt='PDF'></a> <a href="https://arxiv.org/pdf/2403.06828.pdf"><img src='https://img.shields.io/badge/PDF-Arxiv-brightgreen' alt='PDF'></a> <a href="https://youtu.be/SdSLWUmZZgQ"><img src='https://img.shields.io/badge/Video-Youtube-blue' alt='youtube'></a> <a href="https://www.bilibili.com/video/BV1Zx421y778/?vd_source=cf6ba629063343717a192a5be9fe8985"><img src='https://img.shields.io/badge/Video-Bilibili-blue' alt='youtube'></a> <a href="https://hanruihua.github.io/neupan_project/"><img src='https://img.shields.io/badge/Website-NeuPAN-orange' alt='website'></a> <a href="https://github.com/hanruihua/neupan_ros"><img src='https://img.shields.io/badge/Wrapper-ROS/ROS2-red' alt='ROS'></a>
## Table of Contents
- News
- Introduction
- Demonstrations
- Why NeuPAN
- Quick Start
- Examples
- Configuration
- Advanced Usage
- Current Limitations
- FAQ
- License
- Citation
## News
- 2025-12-25: Our NeuPAN planner is integrated into the odin-nav-stack project. Feel free to try it out!
- 2025-04-24: We provide a Python 3.8 compatible version of NeuPAN on the py38 branch.
- 2025-03-26: Our paper is now available on IEEE Transactions on Robotics.
- 2025-03-25: Code released!
- 2025-02-04: Our paper is accepted by T-RO 2025!
## Introduction
NeuPAN (Neural Proximal Alternating-minimization Network) is an end-to-end, real-time, map-free, and easy-to-deploy MPC-based robot motion planner. By integrating learning-based and optimization-based techniques, NeuPAN directly maps obstacle point data to control actions in real time by solving an end-to-end mathematical model with numerous point-level collision avoidance constraints. This eliminates intermediate module design, avoids error propagation, and achieves high accuracy, allowing the robot to navigate cluttered and unknown environments efficiently and safely.
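To make "point-level collision avoidance constraints" concrete, here is a minimal, self-contained sketch. It is not NeuPAN's actual DUNE formulation: it treats the robot as a single point and simply checks the raw obstacle point cloud against a distance threshold, whereas NeuPAN handles the full robot geometry inside an optimization.

```python
import math

def min_point_distance(robot_xy, points):
    # Smallest Euclidean distance from the robot position to any obstacle point.
    # NeuPAN constrains such distances for every point, at every MPC step.
    return min(math.dist(robot_xy, p) for p in points)

points = [(2.0, 0.0), (0.0, 3.0), (-1.0, -1.0)]  # raw obstacle points, e.g. from a LiDAR scan
robot = (0.0, 0.0)
collision_threshold = 0.1  # matches the default in the Configuration table below

dist = min_point_distance(robot, points)
safe = dist > collision_threshold  # the constraint a feasible plan must satisfy
```

Working directly on these points, rather than on a fitted map or bounding boxes, is what makes the approach map-free.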

## Demonstrations
More real-world demonstrations are available on the project page.
https://github.com/user-attachments/assets/7e53b88c-aba9-4eea-8708-9bbf0d0305fc
https://github.com/user-attachments/assets/e37c5775-6e80-4cb5-9320-a04b54792e0e
https://github.com/user-attachments/assets/71eef683-a996-488f-b51b-89e149d0cc6e
https://github.com/user-attachments/assets/fb3f2992-da00-4730-872f-8f3bb8b29163
## Why NeuPAN
| Aspect | Traditional Modular Planners (TEB, DWA) | End-to-End Learning-based Planners (RL, IL) | NeuPAN |
| ------ | --------------------------------------- | ------------------------------------------- | ------ |
| Architecture | Modular pipeline with object detection, mapping, and planning | End-to-end policy network | End-to-end framework without middle modules, avoiding error propagation |
| Environment Handling | Limited by map representation and object models | Depends on training environments | Directly processes obstacle points; handles cluttered/unstructured environments with arbitrarily shaped objects |
| Training Data | N/A (rule-based) | Requires a huge amount of real/simulated data | Minimal training data: simply random points within a certain range |
| Training Time | N/A (rule-based) | Hours to days on GPU clusters | Fast training: 1-2 hours on CPU for a new robot geometry |
| Retraining | N/A | Often requires retraining for new environments | Train once for the robot geometry, then apply to various environments without retraining |
| Safety Guarantee | Relies on accurate perception | No formal safety guarantee | Mathematical optimization with collision avoidance constraints for reliable deployment |
| Deployment | Complex integration required | Black-box policy, hard to debug | Easy to deploy and flexible to integrate with global planners (A*, VLN) |
## Quick Start
Requirements: Python >= 3.10

```bash
# Install NeuPAN
git clone https://github.com/hanruihua/NeuPAN
cd NeuPAN
pip install -e .

# Install IR-SIM for running the examples
pip install ir-sim

# Run your first example
cd example
python run_exp.py -e corridor -d diff
```
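For orientation, the loop around a NeuPAN-style planner is a standard receding-horizon cycle: sense obstacle points, replan from the latest state, apply the first action, repeat. The sketch below illustrates that cycle with a hypothetical `dummy_planner` stand-in; the function names and signatures are illustrative assumptions, not the library's API.

```python
import math

def dummy_planner(state, goal, obstacle_points):
    """Stand-in for a NeuPAN-style forward pass: state + raw points -> velocity.

    Here it just steers straight toward the goal; the real planner instead
    solves an MPC problem with point-level collision avoidance constraints.
    """
    dx, dy = goal[0] - state[0], goal[1] - state[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-9:
        return (0.0, 0.0)
    speed = min(1.0, dist)  # slow down near the goal
    return (speed * dx / dist, speed * dy / dist)

# Receding-horizon loop: replan from the current state at every step
state, goal, dt = (0.0, 0.0), (2.0, 0.0), 0.1
for _ in range(100):
    vx, vy = dummy_planner(state, goal, obstacle_points=[])
    state = (state[0] + vx * dt, state[1] + vy * dt)  # apply only the first action
    if math.hypot(goal[0] - state[0], goal[1] - state[1]) < 0.05:
        break
```

Replanning from scratch each step is what lets the planner react to newly sensed points without maintaining a map.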
## Examples
You can run the examples in the example folder to see the navigation performance of diff (differential drive), acker (Ackermann steering), and omni (omnidirectional) robots powered by NeuPAN in IR-SIM. Scenarios include:

- convex_obs (convex obstacles)
- corridor (corridor)
- dyna_non_obs (dynamic nonconvex obstacles)
- dyna_obs (dynamic obstacles)
- non_obs (nonconvex obstacles)
- pf (path following)
- pf_obs (path following with obstacles)
- polygon_robot (polygon robot)
- reverse (car reverse parking)
- DUNE training
- LON training
Some demonstrations produced by run_exp.py are shown below:
| python run_exp.py -e non_obs -d acker <img src="https://github.com/user-attachments/assets/db88e4c6-2605-4bd4-92b9-8898da732832" width="350" /> | python run_exp.py -e dyna_non_obs -d acker <img src="https://github.com/user-attachments/assets/4db0594d-f23f-48f0-bf5b-54805c817ac4" width="350" /> |
| :-------------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------: |
| python run_exp.py -e polygon_robot -d diff <img src="https://github.com/user-attachments/assets/feb91992-4d4c-4392-b78d-3553f554667f" width="350" /> | python run_exp.py -e dyna_obs -d diff -v <img src="https://github.com/user-attachments/assets/43602c29-a0d3-4d06-82e8-500fca8f4aa2" width="350" /> |
| python run_exp.py -e corridor -d acker <img src="https://github.com/user-attachments/assets/fbcf9875-2e33-4e97-ba40-54826c2bc70d" width="350" /> | python run_exp.py -e corridor -d diff <img src="https://github.com/user-attachments/assets/82ccd0c5-9ac9-4fcf-8705-d81753c6b7a8" width="350" /> |
| python run_exp.py -e dyna_non_obs -d omni <img src="https://github.com/user-attachments/assets/4a9b01fa-d14a-4c95-86e0-00b3abe5a2e4" width="350" /> | python run_exp.py -e dyna_obs -d omni <img src="https://github.com/user-attachments/assets/a4534cde-d745-45d2-acee-0f8207d10a6b" width="350" /> |
> [!NOTE]
> The pretrained DUNE model provided in these examples is trained for the given robot size. If you adjust the size of the robot, you must retrain the DUNE model by referring to the DUNE training example.

> [!NOTE]
> Since the optimization solver cvxpy is not supported on GPU, we recommend running the NeuPAN algorithm on the CPU. A hardware platform with a more powerful CPU (e.g., Intel i7) is therefore recommended to achieve a higher control frequency and better performance. However, you can still use a GPU to accelerate training of the DUNE model.
## Configuration
Since the NeuPAN planner has quite a few parameters, we provide a YAML file to initialize them.
| Parameter Name | Type / Default Value | Description |
| --------------------- | -------------------- | ------------------------------------------------------------------------------------ |
| receding | int / 10 | MPC receding steps. |
| step_time | float / 0.1 | MPC time step (s). |
| ref_speed | float / 4.0 | MPC reference speed (m/s). |
| device | str / "cpu" | The device to run the planner on: cpu or cuda. |
| time_print | bool / False | Whether to print the time cost (s) of each forward step. |
| collision_threshold | float / 0.1 | The threshold for collision detection (m). |
| robot | dict / dict() | The parameters for the robot. See 'robot' section. |
| ipath | dict / dict() | The parameters for the naive initial path. See 'ipath' section. |
| pan | dict / dict() | The parameters for the proximal alternating minimization network. See 'pan' section. |
| adjust | dict / dict() | The parameters for the adjust weights. See 'adjust' section. |
| train | dict / dict() | The parameters for the DUNE training. See 'train' section. |
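As an illustration, a minimal configuration file might look like the fragment below. The top-level keys follow the table above; the concrete values shown are the listed defaults, and the nested sections are left empty here because their fields are documented in their own sections.

```yaml
# planner.yaml -- illustrative NeuPAN configuration (values are examples only)
receding: 10              # MPC receding steps
step_time: 0.1            # MPC time step (s)
ref_speed: 4.0            # MPC reference speed (m/s)
device: "cpu"             # cvxpy runs on CPU; cuda is mainly useful for DUNE training
collision_threshold: 0.1  # collision detection threshold (m)
robot: {}                 # robot parameters, see the 'robot' section
ipath: {}                 # initial path parameters, see the 'ipath' section
pan: {}                   # PAN network parameters, see the 'pan' section
adjust: {}                # adjust-weight parameters, see the 'adjust' section
train: {}                 # DUNE training parameters, see the 'train' section
```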
