(NeurIPS 2023 spotlight) Large-scale Dataset Distillation/Condensation: with 50 IPC (Images Per Class), it achieves the highest 60.8% top-1 accuracy on the original ImageNet-1K validation set.
This is a collection of our work targeted at large-scale dataset distillation.
SCDD: Self-supervised Compression Method for Dataset Distillation.
CDA (@TMLR'24): Dataset Distillation via Curriculum Data Synthesis in Large Data Era.
SRe2L (@NeurIPS'23 spotlight): Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective.
Citation
@article{yin2023dataset,
  title={Dataset Distillation via Curriculum Data Synthesis in Large Data Era},
  author={Yin, Zeyuan and Shen, Zhiqiang},
  journal={Transactions on Machine Learning Research},
  year={2024}
}

@inproceedings{yin2023squeeze,
  title={Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective},
  author={Yin, Zeyuan and Xing, Eric and Shen, Zhiqiang},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023}
}