XMem2
A tool for efficient semi-supervised video object segmentation (great results with minimal manual labor) and a dataset for benchmarking
XMem++
Production-level Video Segmentation From Few Annotated Frames
Maksym Bekuzarov$^\dagger$, Ariana Michelle Bermudez Venegas$^\dagger$, Joon-Young Lee, Hao Li
Metaverse Lab @ MBZUAI (Mohamed bin Zayed University of Artificial Intelligence)
$^\dagger$ These authors contributed equally to this work.
Table of contents
- Demo
- Overview
- Getting started
- Use the GUI
- Use XMem++ command-line and Python interface
- Importing existing projects
- Docker support
- Data format
- Training
- Methodology
- Frame annotation candidate selector
- PUMaVOS Dataset
- Citation
Demo
Inspired by movie-industry use cases, XMem++ is an interactive video segmentation tool that takes a few user-provided segmentation masks and handles very challenging scenarios with minimal human supervision, such as:
- parts of the objects (only 6 annotated frames provided):
https://github.com/max810/XMem2/assets/29955120/3d3761e2-2e73-484a-a1ed-ec717d8fed05
- fluid objects like hair (only 5 annotated frames provided):
https://github.com/max810/XMem2/assets/29955120/ba746a2a-6333-4654-b39c-b93b9eb1ae0c
- deformable objects like clothes (5 and 11 annotated frames used, respectively):
https://github.com/max810/XMem2/assets/29955120/3a8750e0-44ca-4cce-9b16-8f7154dbb217
https://github.com/max810/XMem2/assets/29955120/689512a6-f60a-4258-b282-4b799f12b0c9
[LIMITATIONS]
Overview
*XMem++ updated GUI*
XMem++ is built on top of XMem by Ho Kei Cheng and Alexander Schwing, and improves upon it by adding the following:
- Permanent memory module that greatly improves the model's accuracy with just a few manual annotations provided (see results)
- Annotation candidate selection algorithm that selects $k$ next best frames for the user to provide annotations for.
- We used XMem++ to collect and annotate PUMaVOS, a dataset of 23 videos with unusual and challenging annotation scenarios at 480p, 30 FPS. See Dataset
In addition, it offers the following features:
- Improved GUI: a References tab to view and edit which frames are in the permanent memory, a Candidates tab that shows frames the algorithm suggests annotating next, and more.
- Negligible speed and memory-usage overhead compared to XMem (when only a few manually provided annotations are used)
- Easy-to-use Python interface - you can use XMem++ both as a GUI application and as a Python library.
- 30+ FPS on 480p footage on RTX 3090
- Comes with a GUI (modified from MiVOS).
Getting started
Environment setup
First, install the required Python packages:
- Python 3.8+
- PyTorch 1.11+ (See PyTorch for installation instructions)
- torchvision corresponding to the PyTorch version
- OpenCV (try `pip install opencv-python`)
- Others: `pip install -r requirements.txt`
- To use the GUI: `pip install -r requirements_demo.txt`
Download weights
Download the pretrained models either with `./scripts/download_models.sh` or manually, and put them in `./saves` (create the folder if it doesn't exist). You can download them from [XMem GitHub] or [XMem Google Drive]. For inference you only need `XMem.pth`; for the GUI, also download `fbrs.pth` and `s2m.pth`.
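A quick, optional way to check that the checkpoints ended up in the right place (assuming the default `./saves` location and the file names above):

```python
from pathlib import Path

# XMem.pth is required for inference; fbrs.pth and s2m.pth are only needed for the GUI.
SAVES_DIR = Path("./saves")
SAVES_DIR.mkdir(exist_ok=True)

for name in ["XMem.pth", "fbrs.pth", "s2m.pth"]:
    path = SAVES_DIR / name
    print(f"{path}: {'found' if path.exists() else 'MISSING'}")
```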
Use the GUI
To run the GUI on a new video:
python interactive_demo.py --video example_videos/chair/chair.mp4
To run on a list of images:
python interactive_demo.py --images example_videos/chair/JPEGImages
Both of these commands will create a folder for the current video in the workspace folder (default is .workspace) and save all the masks and predictions there.
To keep editing an existing project in a workspace, run the following command:
python interactive_demo.py --workspace ./workspace/<video_name>
If you have more than one object, make sure to add --num-objects <num_objects> to the commands above the first time you create a project. It will be saved in the project file after that for your convenience =)
Like this:
python interactive_demo.py --images example_videos/caps/JPEGImages --num-objects 2
For more information visit DEMO.md
Use XMem++ command-line and Python interface
We provide a simple command-line interface in process_video.py which you can use like this:
python process_video.py \
--video <path to video file/extracted .jpg frames> \
--masks <path to directory with existing .png masks> \
--output <path to save results>
The script takes an existing video and ground-truth masks (all masks in the given directory will be used) and runs segmentation once.
Short-form arguments -v -m -o are also supported.
See Python API or main.py for more complex use-cases and explanations.
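If you want to drive the command-line interface from Python (for example, to batch-process several clips), a minimal sketch using only the documented `--video`/`--masks`/`--output` flags could look like this; the clip list and output directory are placeholders:

```python
import subprocess
from pathlib import Path

# Each clip directory is assumed to contain extracted .jpg frames in JPEGImages
# and a few ground-truth .png masks in Annotations (as in example_videos/caps).
clips = [Path("example_videos/caps"), Path("example_videos/chair")]

for clip in clips:
    subprocess.run(
        [
            "python", "process_video.py",
            "--video", str(clip / "JPEGImages"),
            "--masks", str(clip / "Annotations"),
            "--output", str(Path("output") / clip.name),
        ],
        check=True,  # raise if segmentation fails for this clip
    )
```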
Importing existing projects
If you already have existing frames and/or masks from other tools, you can import them into the workspace with the following command:
python import_existing.py --name <name_of_the_project_to_create> [--images <path_to_folder_with_frames>] [--masks <path_to_folder_with_masks>]
One of --images, --masks (or both) should be specified.
You can also specify --size <int> to resize the frames on the fly (to the smaller side, preserving the aspect ratio; see the sketch after the list below).
This will do the following:
- Create a project directory inside your workspace with the name from the `--name` argument.
- Copy your given images/masks inside.
- Convert RGB masks to the necessary color palette (XMem++ uses the DAVIS color palette, where each new object = new color).
- Resize the frames if specified with the `--size` argument.
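For reference, the resize-to-smaller-side behaviour can be approximated with a few lines of Pillow; this is an illustrative sketch, not the project's actual implementation:

```python
from PIL import Image

def resize_smaller_side(img: Image.Image, size: int) -> Image.Image:
    """Resize so the smaller side equals `size`, preserving the aspect ratio."""
    w, h = img.size
    scale = size / min(w, h)
    return img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)

# Example: frame = resize_smaller_side(Image.open("frame_000000.jpg"), 480)
```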
Docker support
We provide 2 images at DockerHub:
- `max810/xmem2:base-inference` - smaller and lighter - for running inference from the command line as in the Command line section.
- `max810/xmem2:gui` - for running the graphical interface interactively.
To use them, just run ./run_inference_in_docker.sh or ./run_gui_in_docker.sh with the corresponding command-line/GUI arguments (see the respective sections: [Inference], [GUI]). They supply the proper arguments to the docker run command and create the corresponding volumes for the input/output directories automatically.
Examples:
# Inference
./run_inference_in_docker.sh -v example_videos/caps/JPEGImages -m example_videos/caps/Annotations -o directory/that/does/not/exist/yet
# Interactive GUI
./run_gui_in_docker.sh --video example_videos/chair/chair.mp4 --num_objects 2
For the GUI you can change variables $LOCAL_WORKSPACE_DIR and $DISPLAY_TO_USE in run_gui_in_docker.sh if necessary.
Note that the interactive import buttons will not work (they open paths within the container filesystem, not the host one).
Building your own images
For command-line inference:
docker build . -t <your-repo/your-image-name[:your-tag]> --target xmem2-base-inference
For GUI:
docker build . -t <your-repo/your-image-name[:your-tag]> --target xmem2-gui
Data format
- Images are expected to use .jpg format.
- Masks are RGB .png files that use the DAVIS color palette, saved as a palette image (`Image.convert('P')` in the Pillow Image module). If your masks don't follow this color palette, just run `python import_existing.py` to automatically convert them (see Importing existing projects).
- When using `run_on_video.py` with a video file, masks should be named `frame_%06d.<ext>` starting at 0: `frame_000000.jpg, frame_000001.jpg, ...`. This is the preferred naming scheme for any use case.
More information and convenience commands are provided in Data format help
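As an illustration of the palette requirement, a mask with per-pixel object ids can be saved as a palette .png by borrowing the palette from an existing DAVIS-style mask (the paths below are placeholders, and the reference file is assumed to already be a palette image):

```python
import numpy as np
from PIL import Image

# Borrow the DAVIS palette from an existing palette-mode mask.
reference = Image.open("path/to/existing_davis_mask.png")
palette = reference.getpalette()

# Pixel values are object ids: 0 = background, 1..N = objects.
labels = np.zeros((480, 854), dtype=np.uint8)
labels[100:200, 300:400] = 1  # e.g. object 1 occupies a box

mask = Image.fromarray(labels, mode="P")
mask.putpalette(palette)
mask.save("frame_000000.png")
```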
Training
For training, refer to the original XMem repo.
We use the original weights provided by XMem; the model has not been retrained or fine-tuned in any way.
Feel free to fine-tune XMem and replace the weights in this project.
Methodology
*XMem++ architecture overview with comments*
XMem++ is a memory-based interactive segmentation model: it keeps a set of reference frames/feature maps together with their corresponding masks (either predicted, or given as ground truth when available) and predicts masks for new frames based on how similar they are to already processed frames with known segmentation.
Just like XMem, we use two types of memory inspired by the Atkinson-Shiffrin human memory model - working memory and long-term memory. The first stores features from recent frames at high temporal resolution, while the second compactly consolidates older features so that memory usage stays bounded on long videos.
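To give a rough intuition of memory-based readout (this is a toy sketch, not the actual XMem++ implementation, which uses learned key/value projections and a more sophisticated affinity), a similarity-weighted readout could look like this:

```python
import torch
import torch.nn.functional as F

def memory_readout(query_keys, memory_keys, memory_values):
    """Toy readout: for each query location, aggregate memory values
    weighted by key similarity (softmax over all memory locations).

    query_keys:    (C, HW_q) features of the new frame
    memory_keys:   (C, HW_m) features of stored reference frames
    memory_values: (D, HW_m) mask/feature information tied to those keys
    """
    affinity = memory_keys.transpose(0, 1) @ query_keys          # (HW_m, HW_q)
    weights = F.softmax(affinity / query_keys.shape[0] ** 0.5, dim=0)
    return memory_values @ weights                               # (D, HW_q)
```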
