WYTIWYR: A User Intent-Aware Framework with Multi-modal Inputs for Visualization Retrieval
This repository contains the code for WYTIWYR, a user intent-aware framework with multi-modal inputs for visualization retrieval. The framework consists of two stages: the Annotation stage disentangles the visual attributes of the query chart, and the Retrieval stage embeds the user's intent, expressed as a customized text prompt together with the bitmap query chart, to recall the targeted retrieval results.


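The two-stage flow described above can be sketched as follows. Note that `annotate_chart` and `retrieve_charts` are hypothetical names used only for illustration, not the actual API of the modules in `annotation_and_retrieval/`:

```python
# Minimal sketch of the two-stage WYTIWYR pipeline.
# Function names and return shapes are illustrative placeholders.

def annotate_chart(query_chart):
    """Annotation stage: disentangle visual attributes of the query chart."""
    # The real framework runs pretrained attribute classifiers on the bitmap.
    return {"Type": "bar", "Trend": "increasing", "Color": "single-hue"}

def retrieve_charts(query_chart, attributes, intent_prompt, corpus):
    """Retrieval stage: embed the query chart, its attributes, and the
    user's intent prompt, then rank the corpus by combined similarity."""
    # Placeholder ranking: prefer corpus entries sharing the chart type.
    return sorted(corpus, key=lambda c: c["Type"] != attributes["Type"])

corpus = [{"Type": "line"}, {"Type": "bar"}]
attrs = annotate_chart("query.png")
results = retrieve_charts("query.png", attrs, "show sales trend", corpus)
```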
For details of the framework, please refer to the original paper; this section will be updated after the paper is published.
Dependencies and Installation
git clone https://github.com/SerendipitysX/WYTIWYR.git
cd WYTIWYR
conda create --name <environment_name> --file requirements.txt
Dataset Preparation
Our dataset includes synthetic charts from the Beagle dataset and real-world charts collected from the Internet, covering 18 chart types and 33,260 images in total. We also provide additional attribute information and extracted features; a detailed description can be found in the description.md file (click here).
To use the dataset, please follow the steps below:
- Download the dataset from here.
- Unzip the folder and save it to data/.
Pretrained Model
In this work, we benefit from some excellent pretrained models, including CLIP for aligning multi-modal features and DIS for background removal. To inform future work, we also provide pretrained attribute classifiers as well as training code.
To use these models, please follow the steps below:
- Download the background removal model from here and the CLIP model from here.
- Download the pretrained classification models from here.
- Unzip the folder and save it to models/.
Backend Setup
Following the WYTIWYR framework, there are two stages, namely Annotation and Retrieval. Before setting up these two files, make sure the IP address and port are correct. You can also specify the parameters $\nu$ and $\mu$ to adjust the weights of the attribute and the user prompt, as described in the paper.
$$ \mathcal{S} = S_{\mathcal{Q}} \cdot e^{\nu S_{\mathcal{I}_A} + \mu S_{\mathcal{M}}} $$
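In code, the combined score follows directly from the similarity terms. Here `s_q`, `s_a`, and `s_m` stand for the query-chart, attribute, and prompt similarities $S_{\mathcal{Q}}$, $S_{\mathcal{I}_A}$, and $S_{\mathcal{M}}$; this is a sketch of the formula, not the repository's actual implementation:

```python
import math

def combined_score(s_q, s_a, s_m, nu=1.0, mu=5.0):
    """Combined retrieval score: S = S_Q * exp(nu * S_A + mu * S_M).

    nu weights the attribute similarity and mu weights the user-prompt
    similarity; the defaults match the retrieval command's --nu 1 --mu 5.
    """
    return s_q * math.exp(nu * s_a + mu * s_m)
```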
To run annotation,
python annotation_and_retrieval/annotation.py --ip 'localhost' --port1 7779
To run retrieval,
python annotation_and_retrieval/retrieval.py --ip 'localhost' --port2 7780 --mu 5 --nu 1
Frontend Setup
Environment Setup
- Set up the Node.js environment. Please refer to here
- Set up Vue.js environment:
npm install vue
Host Setting
In the file WYTIWYR/frontend/retrieval/src/store/index.ts, set the host that runs your backend:
import Vue from "vue";
import Vuex from "vuex";
Vue.use(Vuex);
export default new Vuex.Store({
state: {
watchlst: ["all_attributes.Type"],
annotation_host: "http://10.30.11.33:7779/", // change to your own hosts
retrieval_host: "http://10.30.11.33:7780/",
...
Project Setup
- Go to WYTIWYR/frontend/retrieval
- Install all the needed packages through npm:
npm install
- Compile and hot-reload for development:
npm run serve
- Compile and minify for production:
npm run build
Cases
To show that users can customize retrieval inputs based on the disentangled attributes of the query chart and the intent prompt, we explore two main usage scenarios: design space extension by explicit goals and fuzzy retrieval by user intent. For design space extension by explicit goals, we investigated 4 cases, namely (a) original attribute change, (b) new attribute addition, (c) existing attribute deletion, and (d) attribute transfer; see the figure below for details.

For fuzzy retrieval by user intent, we investigated 3 cases, namely (a) text information seeking, (b) relevant topic finding, and (c) abstract description searching; see the figure below for details.

Contact
We are glad to hear from you. If you have any questions, please feel free to contact xrakexss@gmail.com or open issues on this repository.
License
This project is open-sourced under the GNU Affero General Public License v3.0.
