UCMFH
Source code for the paper "When CLIP Meets Cross-modal Hashing Retrieval: A New Strong Baseline"
When CLIP Meets Cross-modal Hashing Retrieval: A New Strong Baseline
Datasets
We release the three experimental datasets as follows:
Demo
Taking MIR Flickr as an example, the model can be trained and evaluated with the following command:
bash test-flickr.sh
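Cross-modal hashing methods such as UCMFH map continuous (e.g. CLIP-derived) features into binary hash codes and rank database items by Hamming distance. The following is a minimal, hypothetical sketch of that binarization and retrieval step using NumPy; it is illustrative only and is not the repository's actual implementation (function names and the 64-bit code length are assumptions):

```python
import numpy as np

def binarize(features):
    """Map continuous features to {-1, +1} hash codes via the sign function."""
    return np.where(features >= 0, 1, -1)

def hamming_distance(a, b):
    """Hamming distance between {-1, +1} codes of length k: (k - a.b) / 2."""
    k = a.shape[-1]
    return (k - a @ b.T) / 2

# Toy example: 64-bit codes for a 3-item database and a single query.
rng = np.random.default_rng(0)
db = binarize(rng.standard_normal((3, 64)))
query = binarize(rng.standard_normal((1, 64)))

# Retrieval: rank database items by ascending Hamming distance to the query.
ranking = np.argsort(hamming_distance(query, db), axis=1)
```

In practice the image and text codes come from learned hashing heads on top of CLIP features, so that semantically matching image-text pairs land close in Hamming space.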
Citation
If you find this code useful, please cite our paper:
@article{xia2023clip,
  title={When CLIP meets cross-modal hashing retrieval: A new strong baseline},
  author={Xia, Xinyu and Dong, Guohua and Li, Fengling and Zhu, Lei and Ying, Xiaomin},
  journal={Information Fusion},
  pages={101968},
  year={2023},
  publisher={Elsevier}
}