
LPE

Codes for the WACV 2023 paper: "Semantic Guided Latent Parts Embedding for Few-Shot Learning"

<div align="center">
  <h1>Semantic Guided Latent Parts Embedding for Few-Shot Learning <br> (WACV 2023)</h1>
</div>
<div align="center">
  <h3><a href=https://martayang.github.io/>Fengyuan Yang</a>, <a href=https://vipl.ict.ac.cn/homepage/rpwang/index.htm>Ruiping Wang</a>, <a href=http://people.ucas.ac.cn/~xlchen?language=en>Xilin Chen</a></h3>
</div>
<div align="center">
  <h4><a href=https://openaccess.thecvf.com/content/WACV2023/papers/Yang_Semantic_Guided_Latent_Parts_Embedding_for_Few-Shot_Learning_WACV_2023_paper.pdf>[Paper link]</a>, <a href=https://openaccess.thecvf.com/content/WACV2023/supplemental/Yang_Semantic_Guided_Latent_WACV_2023_supplemental.pdf>[Supp link]</a></h4>
</div>

1. Requirements

  • Python 3.7
  • PyTorch 1.9.0
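Before running the scripts, it can help to confirm the interpreter meets the stated requirement. A small convenience check (not part of the repo):

```python
import sys

# Convenience sketch (not part of the original codebase): confirm the
# interpreter satisfies the stated Python 3.7 requirement.
MIN_PYTHON = (3, 7)
ok = sys.version_info >= MIN_PYTHON
print("Python version OK:", ok)
```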

2. Datasets

  • Original datasets

    • All 4 datasets are the same as those used in previous works (e.g., DeepEMD, renet), and can be downloaded from their links: miniImagenet, tieredImageNet, CIFAR-FS, CUB-FS.
    • Download and extract them into a folder of your choice, say /data/FSLDatasets/LPE_dataset, and remember to set args.data_dir to this folder when running the code later.
  • Semantic embeddings

    • Additional semantic embeddings of these 4 datasets leveraged by our method can be downloaded here.
    • Download and put them in the corresponding dataset folder (e.g., put miniimagenet/wnid2CLIPemb_zscore.npy to /data/FSLDatasets/LPE_dataset/miniimagenet/wnid2CLIPemb_zscore.npy), then remember to set args.semantic_path to the location of this file and args.sem_dim accordingly when running the code later.
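The filename wnid2CLIPemb_zscore.npy suggests a pickled dict mapping WordNet IDs to z-scored CLIP text embeddings stored in an .npy file. A minimal sketch of writing and reading such a file (the demo key, embedding size, and filename here are illustrative assumptions, not the repo's actual data):

```python
import numpy as np

# Hypothetical sketch: a wnid -> embedding dict saved as a pickled
# object inside an .npy file. The real file's keys and dimensionality
# may differ; set args.sem_dim to match the stored embedding size.
emb_dim = 512  # assumed CLIP text-embedding size
demo = {"n02119789": np.random.randn(emb_dim).astype(np.float32)}
np.save("wnid2CLIPemb_zscore_demo.npy", demo)

# Pickled-object .npy files require allow_pickle=True and .item()
# to recover the original dict.
loaded = np.load("wnid2CLIPemb_zscore_demo.npy", allow_pickle=True).item()
print(loaded["n02119789"].shape)
```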

3. Usage

Our training and testing scripts are all in scripts/train.sh, and the corresponding output logs can also be found in that folder.
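For context, evaluation in few-shot learning follows the standard N-way K-shot episode protocol. A self-contained sketch of sampling one such episode (generic illustration, not code from this repo):

```python
import numpy as np

def sample_episode(labels, n_way=5, k_shot=1, n_query=15, rng=None):
    """Sample a standard N-way K-shot episode.

    Returns index arrays for the support set (n_way * k_shot) and the
    query set (n_way * n_query), with no overlap between the two.
    """
    rng = rng or np.random.default_rng(0)
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.where(labels == c)[0])
        support.extend(idx[:k_shot])
        query.extend(idx[k_shot:k_shot + n_query])
    return np.array(support), np.array(query)

# Toy labels: 20 classes x 600 images, mirroring miniImagenet's test split.
labels = np.repeat(np.arange(20), 600)
s, q = sample_episode(labels, n_way=5, k_shot=1)
print(s.shape, q.shape)  # (5,) (75,)
```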

4. Results

The 1-shot and 5-shot classification results can be found in the corresponding output logs.
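Few-shot results are conventionally reported as the mean episode accuracy with a 95% confidence interval over many test episodes. A generic sketch of that computation (not the repo's exact logging code; the episode accuracies below are simulated):

```python
import numpy as np

def mean_ci95(episode_accs):
    """Mean accuracy and 95% confidence interval over test episodes."""
    accs = np.asarray(episode_accs, dtype=np.float64)
    mean = accs.mean()
    # 1.96 is the normal-approximation z-value for a 95% interval.
    ci95 = 1.96 * accs.std(ddof=1) / np.sqrt(len(accs))
    return mean, ci95

# Toy example: 2000 simulated episode accuracies around 70%.
rng = np.random.default_rng(0)
accs = rng.normal(0.70, 0.08, size=2000).clip(0.0, 1.0)
m, ci = mean_ci95(accs)
print(f"{m * 100:.2f} +- {ci * 100:.2f}")
```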

Citation

If you find our paper or codes useful, please consider citing our paper:

@InProceedings{Yang_2023_WACV,
    author    = {Yang, Fengyuan and Wang, Ruiping and Chen, Xilin},
    title     = {Semantic Guided Latent Parts Embedding for Few-Shot Learning},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {5447-5457}
}

Acknowledgments

Our codes are based on renet and DeepEMD, and we really appreciate their work.

Further

If you have any questions, feel free to contact me. My email is fengyuan.yang@vipl.ict.ac.cn.
