AdverseDrive
Attacking Vision based Perception in End-to-end Autonomous Driving Models
The goal of this project is to attack end-to-end self-driving models using physically realizable adversaries.
| <center>Target Objective</center> | <center>Conceptual Overview</center> | <center>Example</center> |
| :-: | :-: | :-: |
| Collision Attack | <img src="media/collision_overview.png" alt="collision_overview"> | <img src="media/collision_adversary.gif" alt="collision_adversary"/> |
| Hijacking Attack | <img src="media/hijack_overview.png" alt="hijack_overview"> | <img src="media/hijack_adversary.gif" alt="hijack_adversary"> |
Pre-requisites
- Ubuntu 16.04
- Dedicated GPU with relevant CUDA drivers
- Docker-CE (for docker method)
Note: We highly recommend using the dockerized version of this repository: it is system-independent and does not affect the packages installed on your host.
Installation
- Clone the AdverseDrive repository:

      git clone https://github.com/xz-group/AdverseDrive
- Export the Carla paths to `PYTHONPATH`:

      source export_paths.sh
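The script's exact contents are not shown here, but its effect can be sketched in Python: the Carla client code must be on the module search path before the experiment scripts can import it. `CARLA_ROOT` and the `PythonClient` directory name below are illustrative assumptions, not taken from the repository.

```python
import os
import sys

# Hypothetical sketch of what export_paths.sh achieves: put the Carla
# Python client directory on the module search path so the client
# modules can be imported. CARLA_ROOT and PythonClient are assumed
# names, not taken from the repository.
carla_root = os.environ.get("CARLA_ROOT", "./carla-adversedrive")
client_dir = os.path.join(carla_root, "PythonClient")

if client_dir not in sys.path:
    sys.path.insert(0, client_dir)

print(client_dir in sys.path)  # prints True
```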
- Install the required Python packages:

      pip3 install -r requirements.txt
- Download the modified version of the Carla simulator [1], carla-adversedrive.tar.gz, extract it, and navigate into the extracted directory:

      tar xvzf carla-adversedrive.tar.gz
      cd carla-adversedrive
- Run the Carla simulator in a terminal:

      ./CarlaUE4.sh -windowed -ResX=800 -ResY=600
This starts Carla as a server on port 2000. Give it about 10-30 seconds to start up depending on your system.
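Because startup can take up to half a minute, it can be handy to poll the Carla port before launching experiments. This helper is a convenience sketch, not part of the repository:

```python
import socket

def carla_is_up(host="localhost", port=2000, timeout=1.0):
    """Return True if something accepts TCP connections on the Carla port.

    Convenience sketch (not part of the repository) for checking that the
    simulator has finished starting before experiments are launched.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```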
- In a new terminal, start a Python HTTP server, which lets the Carla simulator fetch the generated attack images and load them into the scene:

      sh run_adv_server.sh
Note: This requires port 8000 to be free.
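The serving step can be sketched with Python's built-in HTTP server; run_adv_server.sh is presumably a thin wrapper around something like this (the port comes from the steps above, the function name and the rest are assumptions):

```python
import functools
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_adv_server(directory=".", port=8000):
    """Serve `directory` over HTTP so Carla can fetch the generated attack
    images (e.g. adversary/adversary_{town_name}.png).

    Minimal sketch of what run_adv_server.sh presumably does -- roughly
    equivalent to `python3 -m http.server 8000`.
    """
    handler = functools.partial(SimpleHTTPRequestHandler, directory=directory)
    return HTTPServer(("0.0.0.0", port), handler)

# To run: make_adv_server().serve_forever()  # blocks; Carla polls images here
```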
- In another terminal, run the infraction-objective Python script:

      python3 start_infraction_experiments.py
Note: the Jupyter notebook version of this script, start_infraction_experiments.ipynb, describes each step in detail; we recommend it when first exploring this repository. Run `jupyter notebook` in this directory to start a Jupyter server.
How it Works
1. The steps above set up an experiment defined by the parameters in `config/infraction_parameters.json`: the Carla town, the task (straight, turn-left, turn-right), the scene, the port used by Carla, and the Bayesian optimizer [3] parameters.
2. The *baseline scenario* runs first: the Carla Imitation Learning [2] (IL) agent drives a vehicle from point A to point B, as defined by the experiment scene and task, with no attack present. The run returns a metric (e.g. the sum of infractions over all frames).
3. The Bayesian optimizer suggests attack parameters based on the returned metric, which serves as the objective function we are trying to maximize. The attack image is generated by `adversary_generator.py` and written to `adversary/adversary_{town_name}.png`.
4. Carla reads the adversary image over the HTTP server and places it at pre-determined locations on the road.
5. The IL model runs through this *attack scenario* and returns a metric.
6. Steps 3-5 are repeated for a set number of experiments, during which successful attacks are found.
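The loop above can be sketched end-to-end. The real pipeline uses fmfn/BayesianOptimization [3] with a full Carla run as the objective; here a random-search stand-in and a toy objective keep the sketch dependency-free, and all names are illustrative rather than taken from the code:

```python
import random

def run_scenario(attack_params):
    """Stand-in for one simulator run: in the real pipeline, the IL agent
    drives the scene with the generated adversary and returns an infraction
    metric. A toy quadratic is used here so the loop is self-contained."""
    x, y = attack_params
    return -((x - 0.3) ** 2 + (y - 0.7) ** 2)

def optimize_attack(n_iter=50, seed=0):
    """Simplified stand-in for the optimization loop (the project uses
    Bayesian optimization; random search keeps this sketch dependency-free)."""
    rng = random.Random(seed)
    best_params, best_metric = None, float("-inf")
    for _ in range(n_iter):
        params = (rng.random(), rng.random())  # optimizer proposes attack parameters
        metric = run_scenario(params)          # run the attack scenario, score it
        if metric > best_metric:               # keep the most successful attack so far
            best_params, best_metric = params, metric
    return best_params, best_metric
```

In the real loop, `run_scenario` is an expensive Carla rollout, which is exactly why a sample-efficient Bayesian optimizer is used instead of random search.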
Docker Method (recommended)
This method assumes some experience with Docker, and that you have installed it and verified GPU access from within containers. A quick way to test this is:
    # docker >= 19.03
    docker run --gpus all,capabilities=utility nvidia/cuda:9.0-base nvidia-smi

    # docker < 19.03 (requires nvidia-docker2)
    docker run --runtime=nvidia nvidia/cuda:9.0-base nvidia-smi
You should see the standard nvidia-smi output.
- Clone the AdverseDrive repository:

      git clone https://github.com/xz-group/AdverseDrive
- Pull the modified version of the Carla simulator:

      docker pull xzgroup/carla:latest
- Pull the AdverseDrive docker image, which contains all prerequisite packages for running experiments (also server-friendly):

      docker pull xzgroup/adversedrive:latest
- Run our dockerized Carla simulator in a terminal:

      sh run_carla_docker.sh
This starts Carla as a server on port 2000. Give it about 10-30 seconds to start up depending on your system.
- In a new terminal, start a Python HTTP server, which lets the Carla simulator fetch the generated attack images and load them into the scene:

      sh run_adv_server.sh
Note: This requires port 8000 to be free.
- In another new terminal, run the xzgroup/adversedrive docker image:

      sh run_docker.sh
- Run the infraction-objective Python script:

      python3 start_infraction_experiments.py
More documentation
References
1. Carla Simulator: https://github.com/carla-simulator/carla
2. Imitation Learning: https://github.com/carla-simulator/imitation-learning
3. Bayesian Optimization: https://github.com/fmfn/BayesianOptimization
Citation
If you use our work, kindly cite us using the following:
    @misc{boloor2019,
        title={Attacking Vision-based Perception in End-to-End Autonomous Driving Models},
        author={Adith Boloor and Karthik Garimella and Xin He and
                Christopher Gill and Yevgeniy Vorobeychik and Xuan Zhang},
        year={2019},
        eprint={1910.01907},
        archivePrefix={arXiv},
        primaryClass={cs.LG}
    }
