MILE
PyTorch code for the paper "Model-Based Imitation Learning for Urban Driving".
This is the PyTorch implementation for inference and training of the world model and driving policy as described in:
<p align="center"> <img src="https://github.com/wayveai/mile/releases/download/v1.0/mile_driving_in_imagination.gif" alt="MILE driving in imagination"> <br/> Our model can drive in the simulator with a driving plan predicted entirely from imagination. <br/> From left to right we visualise: RGB input, ground truth bird's-eye view semantic segmentation, and predicted bird's-eye view segmentation. <br/> When the RGB input becomes sepia-coloured, the model is driving in imagination. </p>

Model-Based Imitation Learning for Urban Driving
Anthony Hu, Gianluca Corrado, Nicolas Griffiths, Zak Murez, Corina Gurau, Hudson Yeo, Alex Kendall, Roberto Cipolla and Jamie Shotton.
NeurIPS 2022<br/> Blog post
If you find our work useful, please consider citing:
@inproceedings{mile2022,
title = {Model-Based Imitation Learning for Urban Driving},
author = {Anthony Hu and Gianluca Corrado and Nicolas Griffiths and Zak Murez and Corina Gurau
and Hudson Yeo and Alex Kendall and Roberto Cipolla and Jamie Shotton},
booktitle = {Advances in Neural Information Processing Systems ({NeurIPS})},
year = {2022}
}
⚙ Setup
- Create the conda environment by running `conda env create`.
- Download CARLA 0.9.11.
- Install the carla package by running `conda activate mile` followed by `easy_install ${CARLA_ROOT}/PythonAPI/carla/dist/carla-0.9.11-py3.7-linux-x86_64.egg`.
- Add `${CARLA_ROOT}/PythonAPI/carla/` to the `PYTHONPATH`. This can be done by creating a file in the conda environment at `~/miniconda3/envs/mile/etc/conda/activate.d/env_vars.sh` containing:
```shell
#!/bin/bash
export CARLA_ROOT="<path_to_carla_root>"
export PYTHONPATH="${CARLA_ROOT}/PythonAPI/carla/"
```
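To sanity-check the variable expansion, here is what the two exports resolve to for a hypothetical CARLA install at `/opt/carla` (the path is illustrative, substitute your own):

```shell
# Hypothetical example: suppose CARLA was extracted to /opt/carla.
export CARLA_ROOT="/opt/carla"
export PYTHONPATH="${CARLA_ROOT}/PythonAPI/carla/"
echo "$PYTHONPATH"   # /opt/carla/PythonAPI/carla/
```

After re-activating the environment, `python -c "import carla"` should succeed if the egg installed correctly.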
🏄 Evaluation
- Download the pre-trained model weights.
- Run `bash run/evaluate.sh ${CARLA_PATH} ${CHECKPOINT_PATH} ${PORT}`, with `${CARLA_PATH}` the path to the CARLA `.sh` executable, `${CHECKPOINT_PATH}` the path to the pre-trained weights, and `${PORT}` the port CARLA runs on (usually `2000`).
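As a concrete sketch (the two paths below are hypothetical placeholders, not files shipped with this repository), a typical invocation might look like:

```shell
# Hypothetical values -- substitute your own CARLA install and checkpoint locations.
bash run/evaluate.sh /opt/carla/CarlaUE4.sh checkpoints/mile.ckpt 2000
```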
📖 Data Collection
- Run `bash run/data_collect.sh ${CARLA_PATH} ${DATASET_ROOT} ${PORT} ${TEST_SUITE}`, with `${CARLA_PATH}` the path to the CARLA `.sh` executable, `${DATASET_ROOT}` the path where the data is saved, `${PORT}` the port CARLA runs on (usually `2000`), and `${TEST_SUITE}` the path to the config specifying which town to collect data from (e.g. `config/test_suites/lb_data.yaml`).
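For example (the CARLA path and dataset root below are hypothetical placeholders; the test-suite config is the one named above):

```shell
# Hypothetical values -- collected data will be written under /data/mile_dataset.
bash run/data_collect.sh /opt/carla/CarlaUE4.sh /data/mile_dataset 2000 config/test_suites/lb_data.yaml
```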
🏊 Training
To train the model from scratch:
- Organise the dataset folder as described in DATASET.md.
- Activate the environment with `conda activate mile`.
- Run `python train.py --config mile/configs/mile.yml DATASET.DATAROOT ${DATAROOT}`, with `${DATAROOT}` the path to the dataset.
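The trailing `DATASET.DATAROOT ${DATAROOT}` pair follows the common yacs-style override convention: dotted `KEY VALUE` pairs given after the config flag replace entries loaded from the YAML file. A minimal sketch of that mechanism (the config dict and keys here are illustrative, not the repo's actual schema):

```python
# Sketch of yacs-style "KEY VALUE" command-line overrides:
# dotted keys are walked into the nested config and the leaf is replaced.
def apply_overrides(config, opts):
    """opts is a flat list: [KEY1, VALUE1, KEY2, VALUE2, ...]."""
    for key, value in zip(opts[0::2], opts[1::2]):
        node = config
        *parents, leaf = key.split(".")
        for parent in parents:
            node = node[parent]      # walk down the dotted path
        node[leaf] = value           # replace the leaf entry
    return config

cfg = {"DATASET": {"DATAROOT": "/default/path", "VERSION": "trainval"}}
apply_overrides(cfg, ["DATASET.DATAROOT", "/data/mile_dataset"])
print(cfg["DATASET"]["DATAROOT"])  # -> /data/mile_dataset
```

Only the overridden leaf changes; sibling entries (here the hypothetical `VERSION`) keep their YAML defaults.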
🙌 Credits
Thanks to the authors of End-to-End Urban Driving by Imitating a Reinforcement Learning Coach for providing a gym wrapper around CARLA that makes it easy to use, as well as an RL expert to collect data.