# MapEx

**MapEx: Indoor Structure Exploration with Probabilistic Information Gain from Global Map Predictions**
## Install / Use
### Preliminary Setup
Clone the repository and make sure that you are on the main branch.
```bash
git clone --recurse-submodules git@github.com:castacks/MapEx.git
cd ~/MapEx
git checkout main
git submodule update --init --recursive
```
### Set up Mamba environment (Recommended)
Mamba is a package manager for managing Python environments and dependencies, known for being faster and more efficient than conda. For more information, please refer to this <a href="https://mamba.readthedocs.io/en/latest/user_guide/mamba.html">link</a>.
```bash
wget https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-Linux-x86_64.sh
bash Mambaforge-Linux-x86_64.sh
```
Go to the `lama` submodule folder and create the `lama` environment.
```bash
cd ~/MapEx/lama
mamba env create -f conda_env.yml
mamba activate lama
```
<!-- Install `torch` and relevant packages
mamba install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch --y
mamba install wandb --yes
pip install pytorch-lightning==1.2.9 -->
### Download pretrained prediction models (KTH dataset)
You can download the pretrained models from this <a href="https://drive.google.com/drive/u/0/folders/1u9WZ9ftwaMbP-RVySuNSVEdUDV_x4Dw6">link</a>. Place the zip file under the `pretrained_models` directory and unzip it.
```bash
mv ~/Downloads/weights.zip ~/MapEx/pretrained_models/
cd ~/MapEx/pretrained_models/
unzip weights.zip
```
The `pretrained_models` directory and its subdirectories should be organized as below:
```
MapEx
└── pretrained_models
    └── weights
        ├── big_lama
        │   └── models
        │       └── best.ckpt
        └── lama_ensemble
            ├── train_1
            │   └── models
            │       └── best.ckpt
            ├── train_2
            │   └── models
            │       └── best.ckpt
            └── train_3
                └── models
                    └── best.ckpt
```
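As a quick sanity check after unzipping, a short script like the one below can verify that every checkpoint landed where expected. This is just a convenience sketch (not part of the repository); the expected paths are taken directly from the tree above.

```python
from pathlib import Path

# Checkpoint files expected under pretrained_models/, per the tree above.
EXPECTED = [
    "weights/big_lama/models/best.ckpt",
    "weights/lama_ensemble/train_1/models/best.ckpt",
    "weights/lama_ensemble/train_2/models/best.ckpt",
    "weights/lama_ensemble/train_3/models/best.ckpt",
]

def missing_checkpoints(pretrained_root):
    """Return the expected checkpoint paths that are absent under the root."""
    root = Path(pretrained_root)
    return [rel for rel in EXPECTED if not (root / rel).is_file()]

if __name__ == "__main__":
    missing = missing_checkpoints(Path.home() / "MapEx" / "pretrained_models")
    print("All checkpoints found." if not missing else f"Missing: {missing}")
```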
### Install range_libc
Go to the `range_libc` directory and install it as follows:
```bash
cd ~/MapEx/range_libc/pywrapper
pip3 install Cython
python3 setup.py install
```

Note that `Cython` is a build dependency of the `range_libc` Python wrapper, so it must be installed before running `setup.py`.
### Install KTH toolbox dependencies for raycasting and observation model
```bash
python3 -m pip install pyastar2d
mamba install numba --yes
```
## Experiments
### Run MapEx planner with baselines
To test MapEx, run the `explore.py` script.
```bash
cd scripts/
python3 explore.py
```
This will automatically load `base.yaml`, which contains the default parameters and specifies file paths, the environment, and the starting conditions. If you want to customize parameters, create your own YAML file and save it in the `configs` directory.
You can also specify the environment and starting position as arguments to the script. For example:
```bash
python3 explore.py --collect_world_list 50010535_PLAN1 --start_pose 768 551
```
The list of environments is in the `kth_test_maps` directory. The `start_pose` values for each environment can be found in `clusters/run_explore.job`.
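To sweep several environments and start poses in one go, a small driver script is one option. The sketch below only assembles and prints the `explore.py` invocations; the `(world, x, y)` tuples are illustrative placeholders taken from the example above, not a verified run list.

```python
import subprocess

# (environment, start x, start y) -- illustrative values only; consult
# kth_test_maps/ and clusters/run_explore.job for the real lists.
RUNS = [
    ("50010535_PLAN1", 768, 551),
]

def build_command(world, x, y):
    """Assemble the explore.py command line for one run."""
    return ["python3", "explore.py",
            "--collect_world_list", world,
            "--start_pose", str(x), str(y)]

for world, x, y in RUNS:
    cmd = build_command(world, x, y)
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually launch
```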
### Details on methods
`modes_to_test` in the YAML file specifies the methods to test. Specifically:
```yaml
modes_to_test: ['visvarprob']  # if you just want to test MapEx
modes_to_test: ['nearest', 'upen', 'hectoraug', 'visvarprob']  # if you want to compare MapEx and baselines
modes_to_test: ['obsunk', 'onlyvar', 'visunk', 'visvar', 'visvarprob']  # if you want ablation experiments
modes_to_test: ['nearest', 'obsunk', 'onlyvar', 'visunk', 'visvar', 'visvarprob', 'upen', 'hector', 'hectoraug']  # if you want to test all methods
```
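For example, to run only nearest-frontier and MapEx, a custom config in `configs/` could look like the fragment below. The filename is hypothetical, and any parameters other than `modes_to_test` should be copied from `base.yaml` rather than invented:

```yaml
# configs/my_experiment.yaml  (hypothetical filename)
# Copy base.yaml, then override only the methods to run:
modes_to_test: ['nearest', 'visvarprob']
```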
`visvarprob` corresponds to the MapEx method, i.e., the combination of a visibility mask, variance, and probabilistic raycasting. `nearest` is nearest-frontier-based exploration, `upen` is our implementation of the uncertainty-driven planner proposed by <a href="https://arxiv.org/abs/2202.11907">Georgakis et al. ICRA 2022</a>, and `hectoraug` is our implementation of the IG-hector method proposed by <a href="https://ieeexplore.ieee.org/document/8793769">Shrestha et al. ICRA 2019</a>.
<strong>Ablated methods</strong>: `visvar` (visibility mask + variance + deterministic raycast), `visunk` (visibility mask + counting the number of pixels in the area), `obsunk` (visibility mask on the observed occupancy grid + counting the number of pixels in the area), and `onlyvar` (no visibility mask, only summing variances) correspond to the Deterministic, No Variance, Observed Map, and No Visibility methods in the ablation studies section of our original paper.
## Evaluations, Metrics, and Data Processing
### Trajectory Visualization and Data Postprocessing (Generating predictions for metrics)
After you run `explore.py`, the results are saved in the `experiments` folder. The script automatically creates a subdirectory there, named with the current year, month, and date. Under this folder, the results for each map, `start_pose`, and method combination are saved. For example:
```
MapEx
└── experiments
    └── 20250131_test
        ├── 20250131_172221_50052750_513_880_visvarprob
        │   ├── global_obs
        │   ├── run_viz
        │   └── odom.npy
        ├── 20250131_172221_50052750_513_880_upen
        └── ...
```
`global_obs` contains the 2D top-down observed occupancy grid maps in `.png` format over all timesteps. The `run_viz` folder contains visualizations of the exploration over timesteps, as below.
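For instance, coverage over time can be estimated directly from the observed grids. The sketch below assumes each map is a 2D array whose unobserved cells share a single "unknown" value; the actual pixel encoding used in `global_obs` should be checked against the code before relying on this.

```python
import numpy as np

def coverage(grid, unknown_val=0.5):
    """Fraction of cells observed as free or occupied (i.e., not unknown)."""
    return float(np.mean(grid != unknown_val))

# Toy example: a 4x4 map where 12 of the 16 cells have been observed.
grid = np.full((4, 4), 0.5)
grid[:3, :] = 1.0  # mark three rows as observed occupied
print(coverage(grid))  # -> 0.75
```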

Now, for metrics and evaluation, run `simple_lama_pred.py` to generate predictions for these observations.
```bash
python3 simple_lama_pred.py
```
Make sure to customize `modelalltrain_path` and `input_experiment_root_folder` as needed. In particular, `input_experiment_root_folder` should be modified if the directory containing the exploration results changes. `simple_lama_pred.py` generates predictions and saves them in a `global_pred` directory under each folder, as below:
```
MapEx
└── experiments
    └── 20250131_test
        ├── 20250131_172221_50052750_513_880_visvarprob
        │   ├── global_obs
        │   ├── global_pred
        │   ├── run_viz
        │   └── odom.npy
        ├── 20250131_172221_50052750_513_880_upen
        └── ...
```
### Coverage and Predicted IoU
After you have generated predictions for your observations, run `calc_metrics_subdirectory.py`. Make sure to customize `root_path` and `exp_parent_dir` in the code.
```bash
python3 calc_metrics_subdirectory.py
```
This script computes Coverage and Predicted IoU, and visualizes plots like the ones below.
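As a reference for what Predicted IoU measures, here is a minimal sketch (not the repository's implementation) of intersection-over-union between a predicted and a ground-truth occupancy mask:

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-union of two boolean occupancy masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union > 0 else 1.0

# Toy example: masks sharing 1 occupied cell out of 3 in their union.
pred = np.array([[True, False], [True, False]])
gt   = np.array([[True, True],  [False, False]])
print(iou(pred, gt))  # ~ 0.333
```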

## Code Management
This repository will be maintained and improved by <a href="https://seungchan-kim.github.io" target="_blank"><strong>Seungchan Kim</strong></a>. If you have any questions regarding the code, please email seungch2@andrew.cmu.edu.
## Citation
If you find our paper or code useful, please cite us:
```bibtex
@INPROCEEDINGS{ho2025mapex,
  author={Ho, Cherie and Kim, Seungchan and Moon, Brady and Parandekar, Aditya and Harutyunyan, Narek and Wang, Chen and Sycara, Katia and Best, Graeme and Scherer, Sebastian},
  booktitle={2025 IEEE International Conference on Robotics and Automation (ICRA)},
  title={MapEx: Indoor Structure Exploration with Probabilistic Information Gain from Global Map Predictions},
  year={2025},
  pages={13074-13080},
  doi={10.1109/ICRA55743.2025.11128862}
}
```