MoPoE-VAE
This is the official code for the ICLR 2021 paper "Generalized Multimodal ELBO". The paper's OpenReview page: https://openreview.net/forum?id=5Y21V0RDBV
If you have any questions about the code or the paper, we are happy to help!
Preliminaries
This code was developed and tested with:
- Python version 3.5.6
- PyTorch version 1.4.0
- CUDA version 11.0
- The conda environment defined in environment.yml
First, set up the conda environment as follows:
conda env create -f environment.yml # create conda env
conda activate mopoe # activate conda env
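To sanity-check the setup, you can optionally verify that the interpreter and framework versions match the ones listed above (these are plain Python/PyTorch commands, not repo-specific):
python --version  # expect Python 3.5.x
python -c "import torch; print(torch.__version__)"  # expect 1.4.0
python -c "import torch; print(torch.cuda.is_available())"  # True if CUDA is usable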
Second, download the data, inception network, and pretrained classifiers:
curl -L -o tmp.zip https://drive.google.com/drive/folders/1lr-laYwjDq3AzalaIe9jN4shpt1wBsYM?usp=sharing
unzip tmp.zip
unzip celeba_data.zip -d data/
unzip data_mnistsvhntext.zip -d data/
unzip PolyMNIST.zip -d data/
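Note that curl cannot always fetch a shared Google Drive folder link directly. If the download above fails, one possible workaround (our suggestion, not part of the original instructions) is the third-party gdown utility, which supports Drive folder links:
pip install gdown  # third-party Google Drive downloader
gdown --folder https://drive.google.com/drive/folders/1lr-laYwjDq3AzalaIe9jN4shpt1wBsYM  # fetch folder contents
After downloading, unzip the archives into data/ as shown above.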
Experiments
Experiments can be started by running the respective job_* script.
To choose between MVAE, MMVAE, and MoPoE-VAE, set the script's METHOD variable to "poe", "moe", or "joint_elbo", respectively. By default, each experiment uses METHOD="joint_elbo".
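For example, the relevant line in a job script would look like this (variable name and values are taken from the description above; the exact position of the line in each script may differ):
# choose exactly one of the following:
METHOD="poe"         # MVAE (product of experts)
METHOD="moe"         # MMVAE (mixture of experts)
METHOD="joint_elbo"  # MoPoE-VAE (default)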
Running MNIST-SVHN-Text
./job_mnistsvhntext
Running PolyMNIST
./job_polymnist
Running Bimodal CelebA
./job_celeba