Download any of the three datasets (FashionIQ, Shoes, or Fashion200k) to run the code.
We temporarily provide the three datasets on Google Drive because some of the original image links are no longer available. Please follow the original authors' requirements: the datasets may be used for academic purposes only.
Copy the dataset folders into the data folder. The data folder is structured as follows:
We use Weights & Biases to log our experiments. Register an account (or use an existing one), then open *config.json and fill in your wandb_account_name. You can also change the defaults in options/command_line.py.
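As a minimal sketch, the relevant entry in *config.json might look like the following. Only the `wandb_account_name` key is mentioned in the text; the placeholder value and any surrounding fields are assumptions, so check the actual config file shipped with the repository:

```json
{
  "wandb_account_name": "your-wandb-username"
}
```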
@inproceedings{chen2024composed,
author = "Chen, Yiyang and Zheng, Zhedong and Ji, Wei and Qu, Leigang and Chua, Tat-Seng",
title = "Composed Image Retrieval with Text Feedback via Multi-grained Uncertainty Regularization",
booktitle = "International Conference on Learning Representations (ICLR)",
code = "https://github.com/Monoxide-Chen/uncertainty\_retrieval",
year = "2024"
}
ICLR'24 official implementation of "Composed Image Retrieval with Text Feedback via Multi-grained Uncertainty Regularization".