This repository contains the PyTorch implementation of the ICLR 2023 spotlight paper PLOT: Prompt Learning with Optimal Transport for Vision-Language Models [arXiv].
PLOT is a method that jointly learns multiple comprehensive prompts to describe diverse characteristics of each category, such as intrinsic attributes or extrinsic contexts. To avoid the degradation problem that arises when learning multiple prompts, we introduce optimal transport to match the local patterns of the vision and text modalities. Specifically, we first model images and categories as visual and textual feature sets. Then we apply a two-stage optimization strategy to learn the prompts: in the inner loop, we compute the optimal transport distance between visual features and prompts with the Sinkhorn algorithm, while in the outer loop, we learn the prompts by minimizing this distance on the supervised data.
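The inner-loop computation above can be illustrated with a minimal Sinkhorn iteration. This is a simplified sketch, not the repository's code: it assumes uniform marginals over `M` visual patch features and `N` prompt features, and takes a precomputed cost matrix `C` (e.g., one minus cosine similarity) as input.

```python
import numpy as np

def sinkhorn(C, eps=0.1, n_iters=100):
    """Entropy-regularized OT plan for cost matrix C (M x N), uniform marginals.

    Illustrative sketch only; the actual repository may differ in details
    such as marginal weights, stopping criteria, and log-domain stabilization.
    """
    M, N = C.shape
    mu = np.full(M, 1.0 / M)   # marginal over visual patch features
    nu = np.full(N, 1.0 / N)   # marginal over prompt features
    K = np.exp(-C / eps)       # Gibbs kernel
    u = np.ones(M)
    for _ in range(n_iters):
        # alternate scaling updates: v = nu / (K^T u), then u = mu / (K v)
        u = mu / (K @ (nu / (K.T @ u)))
    v = nu / (K.T @ u)
    return u[:, None] * K * v[None, :]  # transport plan T

# The OT distance used in the outer loop is the inner product <T, C>.
rng = np.random.default_rng(0)
C = rng.random((4, 3))        # toy cost matrix: 4 patches, 3 prompts
T = sinkhorn(C)
dist = float((T * C).sum())
```

In the outer loop, this distance replaces the single-prompt similarity score as the (negative) logit for each class, and gradients flow back into the prompt embeddings.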
Updates
May 2023: We release a brief script for the visualization.
April 2023: PLOT can further benefit from initialization using ChatGPT, reaching an average 1-shot performance of 71.7 across 11 datasets! Code will come soon!
March 2023: PLOT now supports the ViT-B/16 backbone, reaching an average 1-shot performance of 70.6 across 11 datasets! Please refer to PLOT++ for details.
If you find our work useful in your research, please consider citing:
@inproceedings{chen2023plot,
title={Prompt Learning with Optimal Transport for Vision-Language Models},
author={Chen, Guangyi and Yao, Weiran and Song, Xiangchen and Li, Xinyue and Rao, Yongming and Zhang, Kun},
booktitle={ICLR},
year={2023}
}