Efficient and Discriminative Image Feature Extraction for Universal Image Retrieval
This repository contains the code associated with the publication "Efficient and Discriminative Image Feature Extraction for Universal Image Retrieval", which was accepted for presentation at this year's DAGM German Conference on Pattern Recognition (GCPR).
Abstract
Current image retrieval systems often face domain specificity and generalization issues. This study aims to overcome these limitations by developing a computationally efficient training framework for a universal feature extractor that provides strong semantic image representations across various domains. To this end, we curated a multi-domain training dataset, called M4D-35k, which allows for resource-efficient training. Additionally, we conduct an extensive evaluation and comparison of various state-of-the-art visual-semantic foundation models and margin-based metric learning loss functions regarding their suitability for efficient universal feature extraction. Despite constrained computational resources, we achieve near state-of-the-art results on the Google Universal Image Embedding Challenge (GUIEC), with an mMP@5 of 0.721. This ranks our method second on the leaderboard, just 0.7 percentage points behind the best-performing method. However, our model has 32% fewer overall parameters and 289 times fewer trainable parameters. Compared to methods with similar computational requirements, we outperform the previous state of the art by 3.3 percentage points.
Figure 1: Results on the GUIEC test set. Comparing our approach to the GUIEC leaderboard by plotting the evaluation
metric mMP@5 over the number of total model parameters. The bubble’s area is proportional to the number of trainable
model parameters.
Table of Contents
- I. Setup
- II. Data Preparation
- III. Embedding Model
I. Setup
Here, we provide a step-by-step guide to set up and install dependencies on a UNIX-based system, such as Ubuntu, using
conda as the package manager. If conda is not available, an alternative environment manager such as venv can be used.
1. Create a virtual environment
conda create -n env_unifex python=3.8
conda activate env_unifex
2. Clone the repository
git clone git@github.com:morrisfl/UniFEx.git
3. Install PyTorch
Depending on your system and compute requirements, you may need to change the command below. See pytorch.org for more details. In order to submit the embedding models to the 2022 GUIEC, PyTorch 1.11.0 is required.
conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch
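As a quick sanity check (not part of the repository), the installed versions and GPU availability can be verified from Python:

```python
# Quick sanity check of the installation (not part of the repository).
import torch
import torchvision

print("PyTorch:", torch.__version__)             # expected: 1.11.0
print("torchvision:", torchvision.__version__)   # expected: 0.12.0
print("CUDA available:", torch.cuda.is_available())
```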
4. Install the repository with all dependencies
cd UniFEx
python -m pip install .
If you want to make changes to the code, you can install the repository in editable mode:
python -m pip install -e .
5. Setup Google Drive access (optional)
In order to automatically upload checkpoints to Google Drive, you need to create a Google Drive API key.
Setup instructions can be found here. If you don't want to upload checkpoints to Google Drive,
please set the MODEL.cloud_upload parameter in the configuration file to False.
II. Data Preparation
In the process of fine-tuning/linear probing the embedding models, different datasets and dataset combinations can be used. The list of available datasets, along with information about pre-processing, downloading, and how to use them for training, can be found here.
M4D-35k
The M4D-35k dataset is a custom curated multi-domain training dataset. It was created for resource-efficient training of multi-domain image embeddings. The curation process involved dataset selection and data sampling (to optimize the data size), maximizing performance on the GUIEC evaluation dataset. M4D-35k consists of 35k classes and 328k images sourced from four different datasets:
| Domain | Dataset | # classes | # images |
|-----------------------|--------------------------------|:---------:|:--------:|
| Packaged goods | Products-10k | 9.5k | 141.5k |
| Landmarks | Google Landmarks v2 (subset) | 10.0k | 79.2k |
| Apparel & Accessories | DeepFashion (Consumer to Shop) | 14.3k | 100.4k |
| Cars | Stanford Cars (refined) | 1.0k | 7.3k |
| Multi-Domain | M4D-35k | 34.8k | 328.4k |
Notably, the Stanford Cars dataset was refined by enhancing the class granularity. Instead of classifying cars only by their model, the class labels were extended to include the car color. More information about the refinement process can be found here.
The corresponding annotations of the M4D-35k dataset can be found in data/m4d-35k_train.csv. Make sure to download the
corresponding datasets included in the M4D-35k dataset and place them in a <data_dir> of your choice. More information
about the dataset and directory structure can be found here.
To use M4D-35k for training, add m4d_35k to the DATASET.names parameter in the configuration file in configs/.
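For a quick look at the annotation file, a minimal pandas sketch such as the following can be used; the column name `label` is an assumption for illustration and may differ from the actual CSV schema:

```python
# Minimal sketch for inspecting the M4D-35k annotation file.
# The column name "label" is an assumption and may differ in the actual CSV.
import pandas as pd

annotations = pd.read_csv("data/m4d-35k_train.csv")
print(annotations.head())
print("images:", len(annotations))
print("classes:", annotations["label"].nunique())  # hypothetical column name
```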
III. Embedding Model
Figure 2: Overview of the model architecture. The image embedding model consists of a visual-semantic foundation model
as backbone, followed by a projection head. During training the model is optimized using a margin-based metric learning
loss function.
Different foundation models can be used, as shown in the table below.
| Foundation Model | Encoder architecture | type | model_name | weights |
|:------------------------------------------------------------------:|:--------------------:|:---------------:|:------------------------------------------------------------:|:----------------------------------------------------------:|
| OpenCLIP | ViT | clip | see OpenCLIP | see OpenCLIP |
| OpenCLIP | ConvNeXt | clip_convnext | see OpenCLIP | see OpenCLIP |
| CLIPA | ViT | clipav2 | see OpenCLIP | see OpenCLIP |
| EVA-CLIP | ViT | eva02 | see timm | - |
| MetaCLIP | ViT | meta-clip | see OpenCLIP | see OpenCLIP |
| SigLIP | ViT | siglip | see timm | - |
| DINOv2 | ViT | dinov2 | see timm | - |
| SAM | ViT | sam | see timm | - |
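For illustration only (this is not the repository's own model-building code), backbones of the kinds listed above can be loaded directly via the open_clip and timm libraries; the specific model and weight names below are examples rather than the configurations used in the paper:

```python
# Illustrative only: loading example backbones via open_clip and timm.
# The model/weight names are examples, not necessarily the paper's configuration.
import open_clip
import timm

# OpenCLIP ViT image encoder (type "clip" in the table above)
clip_model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-16", pretrained="laion2b_s34b_b88k"
)
image_encoder = clip_model.visual

# DINOv2 ViT via timm (type "dinov2"); num_classes=0 removes the classifier head
dinov2 = timm.create_model(
    "vit_base_patch14_dinov2.lvd142m", pretrained=True, num_classes=0
)
```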
In order to adjust the model architecture of the image embedding model, the following main parameters can be changed in the configuration file:
- `MODEL.embedding_dim`: the dimension of the image embedding.
- `MODEL.BACKBONE.type`: the type of the visual-semantic foundation model; supported types are those listed in the table above.
- `MODEL.BACKBONE.model_name`: the name of the visual-semantic foundation model, as specified by OpenCLIP or timm.
- `MODEL.BACKBONE.weights`: the weights of the visual-semantic foundation model, only required for OpenCLIP models (corresponds to the `pretrained` parameter in OpenCLIP).
- `MODEL.NECK.type`: the type of neck used to reduce the embedding dimension to the specified `MODEL.embedding_dim`; supported types are `proj_layer` and `pooling`.
- `MODEL.HEAD.name`: the name of the margin-based metric learning loss; supported names are `ArcFace`, `DynM-ArcFace`, `AdaCos`, `LiArcFace`, and `CurricularFace`.
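As a minimal sketch, assuming a yacs-style configuration (the exact schema and default values of the files in configs/ may differ), these options could be combined as follows:

```python
# Sketch of the main configuration options, assuming a yacs-style CfgNode.
# The exact schema and defaults of the files in configs/ may differ.
from yacs.config import CfgNode as CN

cfg = CN()
cfg.MODEL = CN()
cfg.MODEL.embedding_dim = 64                # example embedding dimension

cfg.MODEL.BACKBONE = CN()
cfg.MODEL.BACKBONE.type = "clip"            # one of the types in the table above
cfg.MODEL.BACKBONE.model_name = "ViT-B-16"  # example OpenCLIP model name
cfg.MODEL.BACKBONE.weights = "laion2b_s34b_b88k"  # example OpenCLIP pretrained tag

cfg.MODEL.NECK = CN()
cfg.MODEL.NECK.type = "proj_layer"

cfg.MODEL.HEAD = CN()
cfg.MODEL.HEAD.name = "ArcFace"
```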