MyTimeMachine : Personalized Facial Age Transformation
SIGGRAPH 2025 (TOG Journal)
Abstract
Video
Motivation: Why Age Transformation?
Method
Given an input face of $\textit{Oprah Winfrey}$ at 70 years old, our adapter re-ages her face to resemble her appearance at 30 while preserving the style of the input image. To achieve personalized re-aging, we collect $\sim$50 images of an individual across different ages and train an adapter network that updates the latent code produced by the global age encoder SAM. The adapter preserves identity when the target age falls within the range of ages seen in the training data (interpolation) and also extrapolates well to unseen ages.
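To make the adapter design concrete, below is a minimal PyTorch sketch of one plausible realization: an MLP that predicts a residual on top of SAM's latent code, conditioned on the target age. The class name, layer widths, and the (18, 512) W+ latent shape are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PersonalizedAdapter(nn.Module):
    """Hypothetical adapter that refines a global age encoder's latent code.

    Assumes SAM outputs a StyleGAN2 W+ code of shape (18, 512); names and
    layer sizes are illustrative, not the paper's implementation.
    """

    def __init__(self, n_styles: int = 18, style_dim: int = 512):
        super().__init__()
        self.n_styles, self.style_dim = n_styles, style_dim
        # Condition on the flattened latent code plus the target age.
        self.mlp = nn.Sequential(
            nn.Linear(n_styles * style_dim + 1, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, n_styles * style_dim),
        )

    def forward(self, w_plus: torch.Tensor, target_age: torch.Tensor) -> torch.Tensor:
        # w_plus: (B, 18, 512) latent from the global age encoder (SAM).
        # target_age: (B, 1), e.g. normalized to [0, 1].
        b = w_plus.shape[0]
        x = torch.cat([w_plus.reshape(b, -1), target_age], dim=1)
        delta = self.mlp(x).reshape(b, self.n_styles, self.style_dim)
        # Predicting a residual keeps the output close to SAM's code, which
        # helps preserve the input image's style while personalizing age.
        return w_plus + delta
```

A residual formulation like this is one natural way to keep the personalized code anchored to the global prior; training it on $\sim$50 personal photos would then specialize the age trajectory to that individual.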
Animation
Age Regression (De-aging from ~70 years old)
Select different celebrities by clicking on the thumbnails, and move the slider to adjust the target age.
Age Progression (Aging from ~40 years old)
Select different celebrities by clicking on the thumbnails, and move the slider to adjust the target age.
Q&A
Why GAN instead of Diffusion?
At the time of this research, GANs handled the inversion-editing trade-off better than diffusion models, thanks to StyleGAN's well-trained latent space.
Empirically, we find that GANs can maintain the input image's style (pose, lighting, expression, etc.) while still providing strong editing power for aging, whereas diffusion models struggle to do both.
More specifically, we find that diffusion inversion/editing methods (rf-inversion, rf-solver-edit, etc.) overfit to the input image when editing age. See the appendix for more details.
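As a rough illustration of the inversion-then-edit workflow this answer refers to, the sketch below inverts an image with a pretrained encoder, edits only the latent code, and decodes with the generator. `E`, `G`, and `age_edit` are placeholder callables (e.g., an e4e/pSp-style encoder, a StyleGAN generator, and an age edit such as the adapter sketched above), not a specific released API.

```python
import torch

@torch.no_grad()
def reage_image(image: torch.Tensor, target_age: torch.Tensor,
                E, G, age_edit) -> torch.Tensor:
    # 1) Inversion: project the input image into the GAN's latent space.
    #    A good encoder captures pose, lighting, and expression in the code.
    w_plus = E(image)                     # e.g. (B, 18, 512)

    # 2) Editing: move the latent along an age-conditioned edit. Because the
    #    edit happens purely in latent space, non-age attributes of the
    #    inversion are largely untouched.
    w_edited = age_edit(w_plus, target_age)

    # 3) Synthesis: decode the edited latent back to an image.
    return G(w_edited)
```

This separation is what creates the trade-off mentioned above: the encoder controls reconstruction fidelity, while the latent edit controls aging strength.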
Why face-swapping instead of direct video-based re-aging?
At the time of this research, face-swapping was empirically better than direct video-based re-aging methods (e.g., STIT and VideoEditGAN) for aging.
'Better' here means less flickering, faster inference (no per-frame pivotal tuning), and higher visual quality. See the paper for more details.
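At a high level, the face-swapping route can be sketched as below: re-age a single reference face once, then transfer it onto every frame with a swapping network. `detect_face` and `swap_face` are hypothetical helpers standing in for a face detector and a pretrained face-swapping model; this is a sketch of the pipeline shape, not our released code.

```python
def reage_video(frames, target_age, E, G, age_edit, detect_face, swap_face):
    # Re-age one reference face a single time, instead of inverting and
    # editing every frame (which requires per-frame pivotal tuning and
    # tends to flicker).
    reference_face = detect_face(frames[0])
    reaged_face = reage_image(reference_face, target_age, E, G, age_edit)

    # Transfer the re-aged identity onto each frame with a face-swapping
    # network; the swap is fast and temporally stable because the target
    # frames themselves provide the motion and lighting.
    return [swap_face(source=reaged_face, target=frame) for frame in frames]
```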
Acknowledgements
We thank Noah Frahm for reviewing early drafts and suggesting helpful improvements. We thank Yiran Xu for thoughtful discussions and feedback during the course of this research. This research was partially funded by Lenovo Research (Morrisville, NC). We are grateful to the members of the Mobile Technology Innovations Lab for their support and assistance.
BibTeX
@article{qiMyTimeMachinePersonalizedFacial2025,
title = {{{MyTimeMachine}}: {{Personalized Facial Age Transformation}}},
shorttitle = {{{MyTimeMachine}}},
author = {Qi, Luchao and Wu, Jiaye and Gong, Bang and Wang, Annie N. and Jacobs, David W. and Sengupta, Roni},
date = {2025-07-27},
journaltitle = {ACM Trans. Graph.},
volume = {44},
number = {4},
pages = {140:1--140:16},
issn = {0730-0301},
doi = {10.1145/3731172},
url = {https://dl.acm.org/doi/10.1145/3731172},
}