
MoMA

Medical Image Analysis (MEDIA 2024) paper: MoMA: Momentum Contrastive Learning with Multi-head Attention-based Knowledge Distillation for Histopathology Image Analysis

Install / Use

/learn @trinhvg/MoMA
README

MoMA: Momentum Contrastive Learning with Multi-head Attention-based Knowledge Distillation for Histopathology Image Analysis

Trinh Thi Le Vuong and Jin Tae Kwak. Medical Image Analysis (MEDIA) 2024.

Implementation of the paper [arXiv].

Release note: The CNN version has been released. We will release the ViT and SwinViT soon.

<p align="center"> <img src="figures/overview.png" width="600"> </p>

Overview of distillation flow across different tasks and datasets: 1) the supervised task is always conducted, 2) feature distillation is applied if a well-trained teacher model is available, and 3) vanilla $L_{KD}$ is employed if the teacher and student models conduct the same task.
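As the "Momentum Contrastive" part of the name suggests, the teacher/key encoder in this family of methods is typically updated as an exponential moving average of the student. The repository's exact update is not shown on this page; a minimal, hypothetical MoCo-style sketch:

```python
def momentum_update(teacher_params, student_params, m=0.999):
    """MoCo-style EMA update: teacher <- m * teacher + (1 - m) * student.

    Parameters are flat lists of floats standing in for model weights;
    a real implementation would iterate over the models' tensors.
    """
    return [m * t + (1.0 - m) * s
            for t, s in zip(teacher_params, student_params)]

# With m = 0.75 the teacher moves a quarter of the way toward the student.
teacher = [0.0, 1.0]
student = [1.0, 1.0]
print(momentum_update(teacher, student, m=0.75))  # -> [0.25, 1.0]
```

A large momentum (e.g. 0.999) keeps the teacher's representations slowly evolving and consistent, which is the usual motivation for this update.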

<p align="center"> <img src="figures/KD_dataset.png" width="600"> </p>

Overview of distillation flow across different tasks and datasets: 1) the supervised task is always conducted, 2) feature distillation is applied if a well-trained teacher model is available, and 3) vanilla $L_{KD}$ is employed if the teacher and student models conduct the same task. SSL stands for self-supervised learning.
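The vanilla $L_{KD}$ referred to above is, in the standard formulation (Hinton et al., 2015), a temperature-scaled KL divergence between the teacher's and student's class distributions. A pure-Python sketch for illustration (the repository's actual loss lives in its training code and may differ):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def vanilla_kd_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 to keep gradient magnitudes comparable across T."""
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)  # soft student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (T ** 2) * kl

# Identical logits give zero loss; divergent logits give a positive loss.
print(vanilla_kd_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))  # -> 0.0
```

In practice this term is weighted against the supervised cross-entropy loss, which is why the figure notes that the supervised task is always conducted.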

Train the teacher network (optional) or a vanilla student network

./scripts/run_vanilla.sh

Train the MoMA student network

If the student and teacher datasets differ in the number of categories, you may need to use `--std_strict` and `--tec_strict`.

./scripts/run_moma.sh
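The `--std_strict`/`--tec_strict` flags presumably control strict checkpoint loading: when the class counts differ, the classifier head's weights cannot be copied, so only shape-compatible parameters should transfer. A hypothetical sketch of what non-strict loading amounts to (in PyTorch this is usually `model.load_state_dict(state_dict, strict=False)`; here emulated with plain dicts of shapes):

```python
def filter_compatible(checkpoint, model_shapes):
    """Keep only checkpoint entries whose name and shape match the model.

    `checkpoint` and `model_shapes` map parameter names to shape tuples;
    a real implementation would compare tensor shapes and then call
    model.load_state_dict(filtered, strict=False).
    """
    return {name: shape for name, shape in checkpoint.items()
            if model_shapes.get(name) == shape}

# Teacher trained on 5 classes, student has 3: the mismatched classifier
# is skipped, while the backbone weights transfer.
teacher_ckpt = {"backbone.conv1": (64, 3, 7, 7), "classifier": (5, 512)}
student_shapes = {"backbone.conv1": (64, 3, 7, 7), "classifier": (3, 512)}
print(sorted(filter_compatible(teacher_ckpt, student_shapes)))  # -> ['backbone.conv1']
```

The flag names and their exact semantics are the scripts' own; consult `scripts/run_moma.sh` for how they are passed to the training code.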

Train the student network using other KD methods

./scripts/run_comparison.sh
