RouteFinder
[TMLR 2025 + ICML 2024 FM-Wild Oral] RouteFinder: Towards Foundation Models for Vehicle Routing Problems
<a href="https://colab.research.google.com/github/ai4co/routefinder/blob/main/examples/1.quickstart.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
Towards Foundation Models for Vehicle Routing Problems
<div align="center"> <img src="assets/overview.png" alt="RouteFinder Overview" style="width: 100%; height: auto;"> </div>
📰 News
- Sep 2025: A new version (v0.4.0) has been released. We have added better installation instructions, released models and datasets on HuggingFace, and more. We are also delighted to announce that RouteFinder has been accepted at TMLR 2025! See details in the release notes.
- Feb 2025: A new version (v0.3.0) of RouteFinder has been released. We have added several improvements, including increasing the number of VRP variants from 24 to 48! See details in the release notes.
- Oct 2024: A new version (v0.2.0) of RouteFinder has been released! We have added the latest contributions from our preprint and a much-improved codebase.
- Jul 2024: RouteFinder has been accepted as an Oral presentation at the ICML 2024 FM-Wild Workshop!
🚀 Installation
We use uv (a Python package manager) to manage the dependencies:

```bash
uv venv --python 3.12      # create a new virtual environment
source .venv/bin/activate  # activate the virtual environment
uv sync --all-extras       # install all dependencies
```
Note that this project is also compatible with a plain `pip install -e .` in case you use a different package manager.
🏁 Quickstart
Download data and checkpoints
To download the data and checkpoints from HuggingFace automatically, you can use:
```bash
python scripts/download_hf.py
```
Running
We recommend exploring this quickstart notebook to get started with the RouteFinder codebase!
The main runner (shown here for the main baseline) can be called via:

```bash
python run.py experiment=main/rf/rf-transformer-100
```

You may change the experiment with `experiment=YOUR_EXP`, where `YOUR_EXP` is a path under the configs/experiment directory.
Testing
You may use the provided test function to test the model:
```bash
python test.py --checkpoint checkpoints/100/rf-transformer.ckpt
```
or with additional parameters:
```text
usage: test.py [-h] --checkpoint CHECKPOINT [--problem PROBLEM] [--size SIZE] [--datasets DATASETS] [--batch_size BATCH_SIZE]
               [--device DEVICE] [--remove-mixed-backhaul | --no-remove-mixed-backhaul]

options:
  -h, --help            show this help message and exit
  --checkpoint CHECKPOINT
                        Path to the model checkpoint
  --problem PROBLEM     Problem name: cvrp, vrptw, etc. or all
  --size SIZE           Problem size: 50, 100, for automatic loading
  --datasets DATASETS   Filename of the dataset(s) to evaluate. Defaults to all under data/{problem}/ dir
  --batch_size BATCH_SIZE
  --device DEVICE
  --remove-mixed-backhaul, --no-remove-mixed-backhaul
                        Remove mixed backhaul instances. Use --no-remove-mixed-backhaul to keep them. (default: True)
```
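The paired `--remove-mixed-backhaul` / `--no-remove-mixed-backhaul` flags follow argparse's `BooleanOptionalAction` pattern. A minimal sketch of how such an interface is built (flag names and help strings come from the usage text above; the defaults for `--problem` and `--batch_size` are assumptions, not the actual values in test.py):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Simplified sketch of the test.py CLI shown above (subset of options)."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--checkpoint", required=True, help="Path to the model checkpoint")
    parser.add_argument("--problem", default="all", help="Problem name: cvrp, vrptw, etc. or all")
    parser.add_argument("--batch_size", type=int, default=32)  # default is an assumption
    parser.add_argument(
        "--remove-mixed-backhaul",
        action=argparse.BooleanOptionalAction,  # generates the --no-... variant automatically
        default=True,
        help="Remove mixed backhaul instances. Use --no-remove-mixed-backhaul to keep them.",
    )
    return parser

args = build_parser().parse_args(
    ["--checkpoint", "checkpoints/100/rf-transformer.ckpt", "--no-remove-mixed-backhaul"]
)
```

`BooleanOptionalAction` (Python 3.9+) is what produces the `--flag | --no-flag` pair in the help output with a single argument definition.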
We also have a notebook to automatically download and test models on CVRPLIB instances here!
Other scripts
- Data generation: we include scripts to re-generate data manually (reproducible via random seeds) with `python scripts/generate_data.py`.
- Classical baselines (OR-Tools and HGS-PyVRP): we additionally include a script to solve the problems with classical baselines, e.g. `python scripts/run_or_solvers.py --num_procs 20 --solver pyvrp` to run PyVRP with 20 processes on all datasets.
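Reproducibility via random seeds works as in the toy sketch below. This is not the actual logic of `scripts/generate_data.py` (the real distributions and field names may differ); it only illustrates why a fixed seed makes generated instances reproducible:

```python
import random

def generate_instance(seed: int, num_nodes: int = 100) -> dict:
    """Toy CVRP-style instance generator: a seeded RNG makes the
    output fully reproducible (illustrative, not the repo's generator)."""
    rng = random.Random(seed)  # local RNG, independent of global state
    return {
        "locs": [(rng.random(), rng.random()) for _ in range(num_nodes)],
        "demand": [rng.randint(1, 9) for _ in range(num_nodes)],
    }

# The same seed always yields the same instance
assert generate_instance(seed=42) == generate_instance(seed=42)
```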
🔁 Reproducing Experiments
Main Experiments
The main experiments on 100 nodes are (RF = RouteFinder): RF-TE: rf/rf-transformer-100, RF-POMO: rf/rf-100, RF-MoE: rf/rf-moe-100, MTPOMO: mtpomo-100, and MVMoE: mvmoe-100. You may substitute 50 for experiments on 50 nodes. Note that we separate 50 and 100 because we created an automatic validation dataset reporting for all variants at different sizes (i.e., here).
Note that additional Hydra options are available, as described here. For instance, you can add +trainer.devices="[0]" to run on a specific GPU (i.e., GPU 0).
Ablations and more
Other configs are available under configs/experiment directory.
EAL (Efficient Adapter Layers)
To run EAL, you may use the following command:
```bash
python run_eal.py
```
with the following parameters:
```text
usage: run_eal.py [-h] [--model_type MODEL_TYPE] [--experiment EXPERIMENT]
                  [--variants_finetune VARIANTS_FINETUNE]
                  [--checkpoint CHECKPOINT] [--lr LR] [--num_runs NUM_RUNS]

options:
  -h, --help            show this help message and exit
  --model_type MODEL_TYPE
                        Model type: rf, mvmoe, mtpomo
  --experiment EXPERIMENT
                        Experiment type
  --variants_finetune VARIANTS_FINETUNE
                        Variants to finetune on
  --checkpoint CHECKPOINT
  --lr LR
  --num_runs NUM_RUNS
```
with additional parameters that can be found in the `eal.py` file.
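Conceptually, an adapter layer in the EAL style extends a trained weight matrix with zero-initialized entries for new variant features, so outputs on the original features are unchanged at the start of fine-tuning. A minimal sketch in plain Python (illustrative only; the actual implementation in `eal.py` operates on the model's embedding layers):

```python
def extend_weights(w_old, num_new_features):
    """Append zero-initialized columns for new input features.
    Old inputs then produce identical outputs (sketch of the EAL idea)."""
    return [row + [0.0] * num_new_features for row in w_old]

def linear(w, x):
    """Plain matrix-vector product (no bias)."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

w_old = [[0.5, -1.0], [2.0, 0.3]]  # toy 2x2 weight matrix (illustrative values)
w_new = extend_weights(w_old, 1)   # now 2x3: room for one new input feature

x_old = [1.0, 2.0]
# With the new feature inactive, the extended layer reproduces the old output
assert linear(w_new, x_old + [0.0]) == linear(w_old, x_old)
```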
Development
To test automatically if the code works, you can run:
```bash
python -m pytest tests/*
```
🚚 Available Environments
<div align="center"> <img src="assets/vrp.png" alt="VRP Problems" style="width: 100%; height: auto;"> </div>

We consider 48 VRP variants. All variants include the base Capacity (C). The $k=5$ features O, B, L, TW, and MD can be combined into any subset, including the empty set and the full set (i.e., a power set with $2^k = 32$ possible combinations). The Mixed (M) global feature creates new Mixed Backhaul (MB) variants in generalization studies, adding 16 more variants. We have the following environments available:
| VRP Variant | Capacity (C) | Open Route (O) | Backhaul (B) | Mixed (M) | Duration Limit (L) | Time Windows (TW) | Multi-depot (MD) |
|-------------|:------------:|:--------------:|:------------:|:---------:|:------------------:|:-----------------:|:----------------:|
| CVRP        | ✔ |   |   |   |   |   |   |
| OVRP        | ✔ | ✔ |   |   |   |   |   |
| VRPB        | ✔ |   | ✔ |   |   |   |   |
| VRPL        | ✔ |   |   |   | ✔ |   |   |
| VRPTW       | ✔ |   |   |   |   | ✔ |   |
| OVRPTW      | ✔ | ✔ |   |   |   | ✔ |   |
| OVRPB       | ✔ | ✔ | ✔ |   |   |   |   |
| OVRPL       | ✔ | ✔ |   |   | ✔ |   |   |
| VRPBL       | ✔ |   | ✔ |   | ✔ |   |   |
| VRPBTW      | ✔ |   | ✔ |   |   | ✔ |   |
| VRPLTW      | ✔ |   |   |   | ✔ | ✔ |   |
| OVRPBL      | ✔ | ✔ | ✔ |   | ✔ |   |   |
| OVRPBTW     | ✔ | ✔ | ✔ |   |   | ✔ |   |
| OVRPLTW     | ✔ | ✔ |   |   | ✔ | ✔ |   |
| VRPBLTW     | ✔ |   | ✔ |   | ✔ | ✔ |   |
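The variant count described above can be verified directly: the power set of the five attribute features gives $2^5 = 32$ base variants, and adding Mixed (M) to each of the $2^4 = 16$ subsets that contain Backhaul (B) yields the remaining 16, for 48 in total:

```python
from itertools import combinations

features = ["O", "B", "L", "TW", "MD"]

# Power set of the k=5 features: every subset, including the empty set
subsets = [set(c) for r in range(len(features) + 1) for c in combinations(features, r)]
assert len(subsets) == 32  # 2^5 base variants (all include Capacity)

# Mixed (M) creates a Mixed Backhaul variant for each subset containing B
mixed = [s | {"M"} for s in subsets if "B" in s]
assert len(mixed) == 16

print(len(subsets) + len(mixed))  # → 48
```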