A curated list of resources (papers, datasets, and relevant links) on spatial transformation for image composition, which aims to adjust the view/pose of the inserted foreground object in a composite image via a simple spatial transformation (e.g., thin-plate spline (TPS) or perspective transformation). For more complete resources on general image composition (object insertion), please refer to Awesome-Image-Composition.
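For orientation, below is a minimal sketch of the warp-and-composite idea using OpenCV and NumPy (both assumed installed). In the papers listed below, the warp parameters are predicted by a network; here they are hand-specified point correspondences, and the function name composite_with_homography is purely illustrative.

```python
import cv2
import numpy as np

def composite_with_homography(fg, fg_mask, bg, src_pts, dst_pts):
    # Estimate a 3x3 homography from four (or more) point correspondences.
    H, _ = cv2.findHomography(src_pts, dst_pts)
    h, w = bg.shape[:2]
    # Warp the foreground and its mask into the background's coordinate frame.
    warped_fg = cv2.warpPerspective(fg, H, (w, h))
    warped_mask = cv2.warpPerspective(fg_mask, H, (w, h))
    # Paste: keep warped foreground pixels where the mask is on, background elsewhere.
    return np.where(warped_mask[..., None] > 0, warped_fg, bg)

# Hypothetical usage: map the corners of a 200x100 foreground onto a
# quadrilateral in the background (coordinates chosen for illustration only).
# src = np.float32([[0, 0], [199, 0], [199, 99], [0, 99]])
# dst = np.float32([[40, 60], [230, 50], [240, 150], [30, 170]])
# result = composite_with_homography(fg, fg_mask, bg, src, dst)
```

A TPS warp follows the same pattern but uses a denser set of control points, allowing non-rigid deformation of the foreground instead of a single planar mapping.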
Contributing
Contributions are welcome; feel free to send a pull request. If you have suggestions for new sections, please open an issue to discuss them before sending a pull request.
Papers
Junhong Gou, Bo Zhang, Li Niu, Jianfu Zhang, Jianlou Si, Chen Qian, Liqing Zhang: "Virtual Accessory Try-On via Keypoint Hallucination." arXiv preprint arXiv:2310.17131 (2023) [arXiv]
Bo Zhang, Yue Liu, Kaixin Lu, Li Niu, Liqing Zhang: "Spatial Transformation for Image Composition via Correspondence Learning." arXiv preprint arXiv:2207.02398 (2022) [arXiv]
Fangneng Zhan, Hongyuan Zhu, Shijian Lu: "Spatial Fusion GAN for Image Synthesis." CVPR (2019) [pdf]
Chen-Hsuan Lin, Ersin Yumer, Oliver Wang, Eli Shechtman, Simon Lucey: "ST-GAN: Spatial Transformer Generative Adversarial Networks for Image Compositing." CVPR (2018) [pdf][code]
Datasets
STRAT: contains three subdatasets, STRAT-glasses, STRAT-hat, and STRAT-tie, corresponding to glasses try-on, hat try-on, and tie try-on, respectively. The accessory image is treated as the foreground and the human face or portrait image as the background. In each subdataset, the training set contains 2000 foreground-background pairs and the test set contains 1000 pairs. [pdf][link]