# GIBSON ENVIRONMENT for Embodied Active Agents with Real-World Perception
You shouldn't play video games all day, and neither should your AI! We built a virtual environment simulator, Gibson, that offers real-world experience for learning perception.

<img src="misc/ui.gif" width="600">
**Summary**: Perception and being active (i.e. having a certain level of motion freedom) are closely tied. Learning active perception and sensorimotor control in the physical world is cumbersome: existing algorithms are too slow to learn efficiently in real time, and robots are fragile and costly. This has spurred learning in simulation, which in turn raises the question of how to transfer to the real world. We developed the Gibson environment with the following primary characteristics:
I. being from the real-world and reflecting its semantic complexity through virtualizing real spaces,
II. having a baked-in mechanism for transferring to real-world (Goggles function), and
III. embodiment of the agent and making it subject to constraints of space and physics via integrating a physics engine (Bulletphysics).
**Naming**: The Gibson environment is named after James J. Gibson, the author of "Ecological Approach to Visual Perception", 1979. "We must perceive in order to move, but we must also move in order to perceive" – JJ Gibson
Please see the website (http://gibsonenv.stanford.edu/) for more technical details. This repository is intended for distribution of the environment and installation/running instructions.
## Paper
"Gibson Env: Real-World Perception for Embodied Agents", in CVPR 2018 [Spotlight Oral].
## Release
This is the 0.3.1 release. Bug reports, suggestions for improvement, and community developments are encouraged and appreciated. See the change log file for details.
## Database
The full database includes 572 spaces and 1440 floors and can be downloaded here. A diverse set of visualizations of all spaces in Gibson can be seen here. To make the core assets download package lighter for the users, we include a small subset (39) of the spaces. Users can download the rest of the spaces and add them to the assets folder. We also integrated Stanford 2D3DS and Matterport 3D as separate datasets if one wishes to use Gibson's simulator with those datasets (access here).
## Table of contents
- Installation
- Quick Start
- Coding your RL agent
- Environment Configuration
- Goggles: transferring the agent to real-world
- Citation
## Installation
### Installation Method
There are two ways to install Gibson: (A) using our docker image (recommended), or (B) building from source.
### System requirements
The minimum system requirements are the following:
For docker installation (A):
- Ubuntu 16.04
- Nvidia GPU with VRAM > 6.0GB
- Nvidia driver >= 384
- CUDA >= 9.0, CuDNN >= v7
For building from source (B):
- Ubuntu >= 14.04
- Nvidia GPU with VRAM > 6.0GB
- Nvidia driver >= 375
- CUDA >= 8.0, CuDNN >= v5
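The version minimums above can be sanity-checked with a small script. This is only a sketch: the helper names and sample version strings are illustrative, and actually querying the driver (e.g. via `nvidia-smi`) is left out.

```python
# Sketch: check version strings against the docker-install (A) minimums above.
def parse_version(s):
    """Turn a dotted version string like '384.130' into a comparable tuple."""
    return tuple(int(p) for p in s.split("."))

def meets_docker_requirements(driver, cuda, cudnn):
    """True if the versions satisfy driver >= 384, CUDA >= 9.0, CuDNN >= v7."""
    return (parse_version(driver) >= (384,)
            and parse_version(cuda) >= (9, 0)
            and parse_version(cudnn) >= (7,))

print(meets_docker_requirements("384.130", "9.0", "7.1"))  # True
print(meets_docker_requirements("375.66", "8.0", "5.1"))   # False: meets only the source-build (B) minimums
```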
### Download data
First, our environment core assets are available here. You can follow the installation guide below to download and set them up properly. The gibson/assets folder stores the data (agent models, environments, etc.) needed to run the Gibson environment. Users can add more environment files into gibson/assets/dataset to run Gibson on additional spaces. Visit the database readme to download more spaces. Please sign the license agreement before using Gibson's database.
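For reference, a minimal sketch of the folder layout described above. The root path and the space name here are illustrative placeholders, not real downloaded data; the actual assets come from the download script.

```shell
# Sketch of the layout gibson expects after the assets are set up (placeholders only).
GIBSON_ROOT=${GIBSON_ROOT:-/tmp/gibson_sketch}               # stand-in for your GibsonEnv checkout
mkdir -p "$GIBSON_ROOT/gibson/assets/dataset/space_example"  # each additional space gets its own subfolder under dataset/
ls "$GIBSON_ROOT/gibson/assets"                              # core assets (agent models, etc.) live directly under assets/
```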
### A. Quick installation (docker)
We use docker to distribute our software; you need to install docker and nvidia-docker2.0 first.
Run `docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi` to verify your installation.
You can either 1. pull from our docker image (recommended) or 2. build your own docker image.
#### 1. Pull from our docker image (recommended)

```bash
# download the dataset from https://storage.googleapis.com/gibson_scenes/dataset.tar.gz
docker pull xf1280/gibson:0.3.1
xhost +local:root
docker run --runtime=nvidia -ti --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v <host path to dataset folder>:/root/mount/gibson/gibson/assets/dataset xf1280/gibson:0.3.1
```
#### 2. Build your own docker image

```bash
git clone https://github.com/StanfordVL/GibsonEnv.git
cd GibsonEnv
./download.sh # this script downloads assets data file and decompresses it into gibson/assets folder
docker build . -t gibson ### finish building inside docker; note that by default the dataset is not included in the docker image
xhost +local:root ## enable display from docker
```
If the installation is successful, you should be able to run `docker run --runtime=nvidia -ti --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v <host path to dataset folder>:/root/mount/gibson/gibson/assets/dataset gibson` to create a container. Note that we don't include dataset files in the docker image to keep it slim, so you will need to mount the dataset when you start a container.
#### Notes on deployment on a headless server
Gibson Env supports deployment on a headless server and remote access with x11vnc.
You can build your own docker image with the provided Dockerfile as above.
Instructions to run gibson on a headless server (requires an X server running):

1. Install nvidia-docker2 dependencies following the starter guide. Install `x11vnc` with `sudo apt-get install x11vnc`.
2. Have an X server running on your host machine, and run `x11vnc` on DISPLAY `:0`.
3. Start a container with `docker run --runtime=nvidia -ti --rm -e DISPLAY -v /tmp/.X11-unix/X0:/tmp/.X11-unix/X0 -v <host path to dataset folder>:/root/mount/gibson/gibson/assets/dataset <gibson image name>`.
4. Run gibson with `python <gibson example or training>` inside docker.
5. Visit your `host:5900` and you should be able to see the GUI.
If you don't have X server running, you can still run gibson, see this guide for more details.
### B. Building from source
If you don't want to use our docker image, you can also install gibson locally. This will require some dependencies to be installed.
First, make sure you have the Nvidia driver and CUDA installed. If you install from source, CUDA 9 is not necessary; it is only required for nvidia-docker 2.0. Then install the dependencies:
```bash
apt-get update
apt-get install libglew-dev libglm-dev libassimp-dev xorg-dev libglu1-mesa-dev libboost-dev \
		mesa-common-dev freeglut3-dev libopenmpi-dev cmake golang libjpeg-turbo8-dev wmctrl \
		xdotool libzmq3-dev zlib1g-dev
```
Install the required deep learning libraries. Using python 3.5 is recommended; you can create a python 3.5 environment first:
```bash
conda create -n py35 python=3.5 anaconda
source activate py35 # the rest of the steps need to be performed in the conda environment
conda install -c conda-forge opencv
pip install http://download.pytorch.org/whl/cu90/torch-0.3.1-cp35-cp35m-linux_x86_64.whl
pip install torchvision==0.2.0
pip install tensorflow==1.3
```
Clone the repository, download the data, and build:
```bash
git clone https://github.com/StanfordVL/GibsonEnv.git
cd GibsonEnv
./download.sh # this script downloads assets data file and decompresses it into gibson/assets folder
./build.sh build_local ### build C++ and CUDA files
pip install -e . ### install python libraries
```
Install OpenAI baselines if you need to run the training demo:
```bash
git clone https://github.com/fxia22/baselines.git
pip install -e baselines
```
### Uninstalling
Uninstalling Gibson is easy. If you installed with docker, just run `docker images -a | grep "gibson" | awk '{print $3}' | xargs docker rmi` to clean up the image. If you installed from source, uninstall with `pip uninstall gibson`.
## Quick Start
First run `xhost +local:root` on your host machine to enable display. You may need to run `export DISPLAY=:0` first. After getting into the docker container with `docker run --runtime=nvidia -ti --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v <host path to dataset folder>:/root/mount/gibson/gibson/assets/dataset gibson`, you will get an interactive shell. Now you can run a few demos.
If you installed from source, you can run those directly using the following commands without using docker.
```bash
python examples/demo/play_husky_nonviz.py ### Use ASWD keys on your keyboard to control a car to navigate around Gates building
```

<img src="misc/husky_nonviz.png" width="600">
You will be able to use ASWD keys on your keyboard to control a car to navigate around Gates building. A camera output will not be shown in this particular demo.
```bash
python examples/demo/play_husky_camera.py ### Use ASWD keys on your keyboard to control the car; this demo also shows the camera output
```
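The demos above wrap environments that follow the familiar OpenAI Gym `reset`/`step` interface. The sketch below uses a hypothetical stub class in place of Gibson's real environment classes (which require the assets dataset and a GPU), purely to illustrate the shape of the control loop an agent script uses:

```python
# StubHuskyEnv is a hypothetical stand-in, not a Gibson class; it only mimics
# the gym-style interface (reset -> obs, step -> obs/reward/done/info).
class StubHuskyEnv:
    def reset(self):
        return [0.0, 0.0]                 # initial observation (placeholder values)

    def step(self, action):
        obs = [float(action), 0.0]        # next observation (placeholder values)
        reward, done, info = 0.0, True, {} # single-step episode for this stub
        return obs, reward, done, info

env = StubHuskyEnv()
obs = env.reset()
while True:
    obs, reward, done, info = env.step(1)  # a real agent would choose the action here
    if done:
        break
print(done)  # True
```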
