[ICCV'23 Main Track, WECIA'23 Oral] Official repository of paper titled "Self-regulating Prompts: Foundational Model Adaptation without Forgetting".

Self-regulating Prompts: Foundational Model Adaptation without Forgetting [ICCV 2023]

Self-regulating Prompts: Foundational Model Adaptation without Forgetting<br> Muhammad Uzair Khattak*, Syed Talal Wasim*, Muzammal Naseer, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan

*Joint first authors


Official implementation of the paper "Self-regulating Prompts: Foundational Model Adaptation without Forgetting".

<hr />


<hr />

:rocket: News

  • (July 14, 2023)
    • Our work is accepted to ICCV 2023! :tada:
  • (July 12, 2023)
<hr />

Highlights

main figure

<p align="justify"> <b> <span style="color: blue;">Left</span></b>: Existing prompt learning approaches for foundational Vision-Language models like CLIP rely on task-specific objectives that restrict prompt learning to learn a feature space suitable only for downstream tasks and consequently lose the generalized knowledge of CLIP (shown in <span style="color: purple;">purple</span>). Our self-regulating framework explicitly guides the training trajectory of prompts towards the closest point between two optimal solution manifolds (solid line) to learn task-specific representations while also retaining generalized CLIP knowledge (shown in <span style="color: green;">green</span>). <b><span style="color: blue;">Middle</span></b>: Averaged across 11 image recognition datasets, PromptSRC surpasses existing methods on the base-to-novel generalization setting. <b><span style="color: blue;">Right</span></b>: We evaluate our approach on four diverse image recognition benchmarks for CLIP and show consistent gains over previous state-of-the-art approaches. </p>

<p align="justify"> Abstract: Prompt learning has emerged as an efficient alternative for fine-tuning foundational models, such as CLIP, for various downstream tasks. Conventionally trained using the task-specific objective, i.e., cross-entropy loss, prompts tend to overfit downstream data distributions and find it challenging to capture task-agnostic general features from the frozen CLIP. This leads to the loss of the model's original generalization capability. To address this issue, our work introduces a self-regularization framework for prompting called PromptSRC (Prompting with Self-regulating Constraints). PromptSRC guides the prompts to optimize for both task-specific and task-agnostic general representations using a three-pronged approach by: (a) regulating {prompted} representations via mutual agreement maximization with the frozen model, (b) regulating with self-ensemble of prompts over the training trajectory to encode their complementary strengths, and (c) regulating with textual diversity to mitigate sample diversity imbalance with the visual branch. To the best of our knowledge, this is the first regularization framework for prompt learning that avoids overfitting by jointly attending to pre-trained model features, the training trajectory during prompting, and the textual diversity. PromptSRC explicitly steers the prompts to learn a representation space that maximizes performance on downstream tasks without compromising CLIP generalization. We perform experiments on 4 benchmarks where PromptSRC performs favorably well compared to the existing methods. Our code and pre-trained models are publicly available. </p>

Regularization Framework for Prompt Learning

We propose PromptSRC (Prompting with Self-regulating Constraints) which steers the prompts to learn a representation space that maximizes performance on downstream tasks without compromising CLIP generalization.

Key components of PromptSRC:

  1. Mutual agreement maximization: PromptSRC explicitly guides the prompts to jointly acquire both <i>task-specific knowledge</i> and <i>task-agnostic generalized knowledge</i> by maximizing the mutual agreement between prompted features and the features of the frozen VL model.
  2. Gaussian weighted prompt aggregation: We propose a weighted self-ensembling strategy for prompts over the training trajectory that captures complementary features and enhances their generalization abilities.
  3. Textual diversity: PromptSRC regulates prompts with textual diversity to mitigate sample diversity imbalance compared to the visual branch during training.
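The three components above can be sketched in a simplified, self-contained form. This is an illustrative sketch only: plain-Python lists stand in for feature tensors, and the function names and exact loss/weighting formulas are our assumptions, not the repository's PyTorch implementation.

```python
import math

def mutual_agreement_loss(prompted_feats, frozen_feats):
    """Mean L1 distance between prompted features and the frozen CLIP
    features, penalizing drift away from the pre-trained representation."""
    n = len(prompted_feats)
    return sum(abs(p - f) for p, f in zip(prompted_feats, frozen_feats)) / n

def gaussian_epoch_weights(num_epochs, mu, sigma):
    """Gaussian weights over the training trajectory, used to
    self-ensemble the prompts saved at each epoch (weights sum to 1)."""
    raw = [math.exp(-((t - mu) ** 2) / (2 * sigma ** 2))
           for t in range(num_epochs)]
    total = sum(raw)
    return [w / total for w in raw]

def aggregate_prompts(prompt_history, weights):
    """Weighted average of per-epoch prompt vectors -> final ensembled prompt."""
    dim = len(prompt_history[0])
    return [sum(w * p[d] for w, p in zip(weights, prompt_history))
            for d in range(dim)]

def diverse_text_features(encode_text, templates, classname):
    """Textual diversity: average frozen text features over several
    prompt templates for one class, to balance the visual branch."""
    feats = [encode_text(t.format(classname)) for t in templates]
    dim = len(feats[0])
    return [sum(f[d] for f in feats) / len(feats) for d in range(dim)]
```

In this sketch the Gaussian weighting concentrates the ensemble on mid-to-late epochs, so early under-trained prompts and late overfitted prompts contribute less to the final aggregated prompt.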

:ballot_box_with_check: Supported Methods

| Method                    | Paper     | Configs | Training Scripts |
|---------------------------|:---------:|:-------:|:----------------:|
| PromptSRC                 | arXiv     | link    | link             |
| Independent V-L Prompting | -         | link    | link             |
| MaPLe                     | CVPR 2023 | link    | link             |
| CoOp                      | IJCV 2022 | link    | link             |
| Co-CoOp                   | CVPR 2022 | link    | link             |

<hr />

Results

Results reported below show base- and novel-class accuracy across 11 recognition datasets, averaged over 3 seeds.

Effectiveness of PromptSRC in comparison with baseline Independent V-L Prompting

PromptSRC effectively maximizes supervised task performance (base classes) without compromising on CLIP's original generalization towards new unseen tasks (novel classes).

| Name                      | Base Acc. | Novel Acc. |    HM     | Epochs |
|---------------------------|:---------:|:----------:|:---------:|:------:|
| CLIP                      |   69.34   |   74.22    |   71.70   |   -    |
| Independent V-L Prompting |   84.21   |   71.79    |   77.51   |   20   |
| PromptSRC (ours)          |   84.26   |   76.10    |   79.97   |   20   |
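The HM column is the harmonic mean of base and novel accuracy, which rewards methods that do well on both rather than trading one for the other. A small helper (the function name is ours) reproduces the column:

```python
def harmonic_mean(base_acc, novel_acc):
    """Harmonic mean of base- and novel-class accuracy (the HM column)."""
    return 2 * base_acc * novel_acc / (base_acc + novel_acc)

# e.g. PromptSRC: harmonic_mean(84.26, 76.10) -> 79.97 (rounded to 2 dp)
```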

PromptSRC in comparison with existing state-of-the-art

| Name             | Base Acc. | Novel Acc. |    HM     | Epochs |
|------------------|:---------:|:----------:|:---------:|:------:|
| CLIP             |   69.34   |   74.22    |   71.70   |   -    |
| CoOp             |   82.69   |   63.22    |   71.66   |  200   |
| CoCoOp           |   80.47   |   71.69    |   75.83   |   10   |
| ProDA            |   81.56   |   72.30    |   76.65   |  100   |
| MaPLe            |   82.28   |   75.14    |   78.55   |   5    |
| PromptSRC (ours) |   84.26   |   76.10    |   79.97   |   20   |

Installation

For installation and other package requirements, please follow the installation instructions provided in the repository.
