
E2Map: Experience-and-Emotion Map for Self-Reflective Robot Navigation with Language Models

[Project Page] [Paper]

Chan Kim<sup>*1</sup>, Keonwoo Kim<sup>*1</sup>, Mintaek Oh<sup>1</sup>, Hanbi Baek<sup>1</sup>, Jiyang Lee<sup>1</sup>, Donghwi Jung<sup>1</sup>, Soojin Woo<sup>1</sup>, Younkyung Woo<sup>2</sup>, John Tucker<sup>3</sup>, Roya Firoozi<sup>4</sup>, Seung-Woo Seo<sup>1</sup>, Mac Schwager<sup>3</sup>, Seong-Woo Kim<sup>1</sup>

(*Indicates equal contribution)

<sup>1</sup>Seoul National University, <sup>2</sup>Carnegie Mellon University, <sup>3</sup>Stanford University, <sup>4</sup>University of Waterloo

<p align="center"> <img src="./images/concept_w_full_background.png" width="85%" height="85%"> </p>

We present E2Map (Experience-and-Emotion Map), a spatial map that captures the agent's emotional responses to its experiences. Our method enables one-shot behavior adjustment in stochastic environments by updating the E2Map through the diverse capabilities of LLMs and LMMs.

System Overview

<p align="center"> <img src="./images/system_architecture_w_background.png"> </p>

Requirements

  • Anaconda or Miniconda
  • CUDA 11.8 or 12.1
  • ROS (Only tested in ROS Noetic, Ubuntu 20.04)

Setup

Cloning the Repository

git clone https://github.com/knwoo/e2map

Setting Conda Environment

cd <path to repository>
conda env create -f env.yml
conda activate e2map

Setting ROS Workspace

To execute the related ROS commands, build and source the catkin workspace:

cd <path to repository>
catkin_make
source devel/setup.bash

Installing Pre-trained LSeg Model

Download the LSeg checkpoint from this link,

cd <path to repository>/src/models/lseg/
mkdir checkpoints

and put demo_e200.ckpt under checkpoints/.

Prerequisite Materials

There are a few steps to follow to reproduce our simulated environment.

Downloading Simulated Testbed Files

We created a simulation environment that mirrors a real-world conference room using the ROS Gazebo simulator.

<p align="center"> <img src="./images/209_gazebo.png" width="100%" height="100%"> </p>

1. Download pre-built maps

If you want to build a visual-language feature map for your own Gazebo indoor testbed, please refer to the official VLMaps repository.

Pre-built maps are used for planning and for grounding landmarks. Download the maps of our testbed from this link, then run the commands below:

cd <path to repository>/src/e2map
mkdir -p data/maps

Unzip room_209.zip and put the maps under <path to repository>/src/e2map/data/maps.

You can do this in one step via unzip <path to home directory>/Downloads/room_209.zip -d <path to repository>/src/e2map/data/maps.
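To verify the layout before moving on, a small Python sketch (paths assume the default repository structure) can check that the maps were extracted into place:

```python
from pathlib import Path

def maps_ready(repo_root):
    """Return True once the pre-built maps exist under src/e2map/data/maps."""
    maps_dir = Path(repo_root) / "src" / "e2map" / "data" / "maps"
    return maps_dir.is_dir() and any(maps_dir.iterdir())
```

Calling maps_ready("<path to repository>") should return True once room_209.zip has been unzipped into place.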

2. Download mesh

Place 209.dae under <path to repository>/src/environment/models/209/meshes/.

3. Download texture file

Place 209_texture.png under <path to repository>/src/environment/models/209/materials/textures/.

4. Install dependencies and set environmental variables

To interact with the quadruped robot, you should install the following additional Debian packages.

sudo apt update
sudo apt install liblcm-dev
sudo apt install ros-noetic-controller-interface ros-noetic-gazebo-ros-pkgs ros-noetic-gazebo-ros-control ros-noetic-joint-state-controller ros-noetic-effort-controllers ros-noetic-joint-trajectory-controller

Then, export the following environment variable so that the Gazebo simulator can load the downloaded testbed model.

export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:<path to repository>/src/environment/models

# ex) export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:/home/knwoo/e2map/src/environment/models
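If you start Gazebo from a Python script rather than a shell, the same variable can be extended programmatically. A minimal sketch (the helper name and repository path are illustrative, not part of the repo):

```python
import os

def extend_gazebo_model_path(repo_root, env=None):
    """Return an environment dict with the testbed models directory
    appended to GAZEBO_MODEL_PATH, mirroring the export command above."""
    env = dict(os.environ if env is None else env)
    models = os.path.join(repo_root, "src", "environment", "models")
    current = env.get("GAZEBO_MODEL_PATH", "")
    env["GAZEBO_MODEL_PATH"] = f"{current}:{models}" if current else models
    return env
```

The resulting dict can be passed as env= to subprocess.Popen when invoking roslaunch from Python.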

Running Gazebo World

After completing the previous steps, run the following launch file to load the simulated environment.

roslaunch unitree_gazebo go1_209.launch

In another terminal, you can spawn a quadruped robot in the simulator by running the following command.

roslaunch unitree_guide go1_spawner.launch
  • If Go1 flips after spawning, increase /stand_wait_count in go1_spawner.launch.

  • If Go1 sinks after spawning, increase /move_base_wait_count in go1_spawner.launch.

To control the quadruped robot, run the following command in another terminal.

roslaunch test_zone teleop_key.launch

To remove the quadruped robot, run the following command in another terminal.

rosservice call gazebo/delete_model '{model_name: go1_gazebo}'

To respawn the robot, kill the terminal that launched go1_spawner.launch and run roslaunch unitree_guide go1_spawner.launch again.

E2Map: Experience-and-Emotion Map

In this section, we provide two guidelines:

  1. Adjusting Behavior with E2Map: how to set up E2Map to implement a self-reflective navigation agent.
  2. Navigating with Custom User Instruction: how to run language navigation with a custom user instruction.

Adjusting Behavior with E2Map

If you want to run language navigation without E2Map, skip directly to Navigating with Custom User Instruction.

1. Select workstation

Copy the whole <path to repository>/src/foundations/e2map_update directory to your server and create the same conda environment by referring to Setting Conda Environment.

Files related to updating the E2Map are transferred via SFTP. Therefore, put the 1) hostname (IP), 2) username, and 3) password of your remote workstation in publish_map.launch.

<!-- line: 15 -->
<arg name="hostname" default="<put your hostname>"/>
<arg name="username" default="<put your username>"/>
<arg name="password" default="<put your password>"/>

As illustrated in the paper, we use a server with 4 x NVIDIA GeForce RTX 4090 as a workstation.

If you have enough GPU resources on your machine, you can simply run it locally.
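For illustration only, here is a hedged sketch of what such an SFTP transfer might look like using paramiko (a common third-party SFTP library; the function names, file name, and remote layout below are hypothetical, not the repository's actual implementation). The credentials correspond to the arguments in publish_map.launch:

```python
import posixpath

def remote_map_path(remote_root, filename):
    """Build the destination path for an E2Map file on the workstation."""
    return posixpath.join(remote_root, filename)

def push_e2map(hostname, username, password, local_file, remote_root):
    """Upload an updated map file to the remote workstation over SFTP."""
    import paramiko  # assumption: installed separately (not stdlib)
    transport = paramiko.Transport((hostname, 22))
    transport.connect(username=username, password=password)
    sftp = paramiko.SFTPClient.from_transport(transport)
    try:
        sftp.put(local_file, remote_map_path(remote_root, "e2map.npy"))
    finally:
        sftp.close()
        transport.close()
```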

2. Prepare LLM & LMM

To reflect emotions and update the E2Map, you should first set up the related foundation models. In <path to repository>/src/foundations/e2map_update/, there are 1) two Python scripts that run the event descriptor and emotion evaluator, and 2) one bash script that keeps the self-reflection loop running.

  • Event descriptor (LMM): Set up the model by following the GPT installation guideline. Then put your personal API key in os.environ["OPENAI_API_KEY"] in event_descriptor.py, as below.

    # line: 25
    os.environ["OPENAI_API_KEY"] = ""
    
  • Emotion evaluator (LLM): Download the model by following the Ollama installation guideline. Then put the name of the model in the model keyword argument of load_llm() in emotion_evaluator.py, as below.

    # line: 94
    def load_llm():
        llm = Ollama(
            model="",  # ex) model="llama3.1:70b"
            verbose=True,
            callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
            temperature=0.7,
            top_k=40,
            top_p=0.9,
            system=system_prompt,
            num_predict=350,
        )
        return llm

We use GPT-4o for the event descriptor and llama3.1:70b for the emotion evaluator in the paper.
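As a reference for how the event descriptor's GPT-4o query is typically structured, here is a hypothetical sketch (the function names and prompt are illustrative; the actual logic lives in event_descriptor.py):

```python
import base64

def build_messages(image_bytes, prompt):
    """Package an event image and a text prompt into the chat format
    that GPT-4o's vision input expects."""
    b64 = base64.b64encode(image_bytes).decode()
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }]

def describe_event(image_bytes, prompt):
    """Send the event image to GPT-4o and return its description."""
    from openai import OpenAI  # reads OPENAI_API_KEY from the environment
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=build_messages(image_bytes, prompt),
    )
    return resp.choices[0].message.content
```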

3. Make directories to cache input/output files

Create directories to cache input images of the event descriptor and the results of the event descriptor and emotion evaluator.

cd <path to e2map_update>
mkdir images
mkdir texts
mkdir previous_images

These directories are referenced as global variables in event_descriptor.py.

# line: 20
IMAGE_PATH = "images/" # to cache event images
TEXT_PATH = "texts/" # to cache emotion evaluation result 
PREV_PA