The Aging Multiverse: Generating Condition-Aware Facial Aging Tree via Training-Free Diffusion
Bang Gong1*,
Luchao Qi1*,
Jiaye Wu2,
Zhicheng Fu3,
Chunbo Song3,
David W. Jacobs2,
John Nicholson3,
Roni Sengupta1
1UNC Chapel Hill
2University of Maryland
3Lenovo
* Equal contribution
Abstract
We introduce the Aging Multiverse, a framework for generating multiple plausible facial aging trajectories from a single image, each conditioned on external factors such as environment, health, and lifestyle. Unlike prior methods that model aging as a single deterministic path, our approach creates an aging tree that visualizes diverse futures. To enable this, we propose a training-free diffusion-based method that balances identity preservation, age accuracy, and condition control.
Our key contributions include attention mixing to modulate editing strength and a Simulated Aging Regularization strategy to stabilize edits. Extensive experiments and user studies demonstrate state-of-the-art performance across identity preservation, aging realism, and conditional alignment, outperforming existing editing and age-progression models, which typically fail to satisfy at least one of these criteria. By transforming aging into a multi-dimensional, controllable, and interpretable process, our approach opens new creative and practical avenues in digital storytelling, health education, and personalized visualization.
Video
Method
Overview of our training-free conditional-age progression framework. Given an input image and a textual description of external aging factors, our method leverages flow matching techniques to perform editing. Our approach balances three competing objectives—identity preservation, age accuracy, and condition alignment—enabling conditional age transformation without retraining. Our key innovations are: (i) attention mixing of Key and Value tensors between inversion and editing, and (ii) attention regularization with simulated unconditional aging to achieve the best inversion-editability trade-off.
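The attention-mixing idea described above can be sketched as a simple per-layer blend of the Key and Value tensors computed during inversion with those computed during editing. The function below is a minimal, hypothetical illustration (the paper's actual mixing rule, layer selection, and schedule may differ); `alpha` is an assumed scalar controlling editing strength.

```python
def mix_kv(k_inv, v_inv, k_edit, v_edit, alpha):
    """Blend inversion-pass and editing-pass Key/Value tensors.

    alpha = 1.0 reuses the inversion attention (strong identity
    preservation); alpha = 0.0 keeps the pure editing attention
    (strong condition control). Intermediate values trade off the two.
    Tensors are represented here as flat lists of floats for clarity.
    """
    k = [alpha * ki + (1.0 - alpha) * ke for ki, ke in zip(k_inv, k_edit)]
    v = [alpha * vi + (1.0 - alpha) * ve for vi, ve in zip(v_inv, v_edit)]
    return k, v


# Example: halfway blend between inversion and editing attention.
k, v = mix_kv([1.0], [2.0], [3.0], [4.0], alpha=0.5)
print(k, v)  # [2.0] [3.0]
```

In a real diffusion pipeline this blend would be applied inside the attention processors of selected transformer blocks during the editing denoising pass, leaving the Query from the edit path untouched.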
Results
Celebrity
[Interactive comparison viewer: a source image at a selectable target age, showing Ours alongside RF-Solver-Edit, FlowEdit, FireFlow, and FADING (some baselines shown with only the aging effect). Select different examples by clicking the thumbnails.]
Non-celebrity
[Interactive comparison viewer: a source image at a selectable target age, showing Ours alongside RF-Solver-Edit, FlowEdit, FireFlow, and FADING (some baselines shown with only the aging effect). Select different examples by clicking the thumbnails.]
Acknowledgements
This research was supported in part by Lenovo Research (Morrisville, NC). We gratefully acknowledge the invaluable support and assistance of the members of the Mobile Technology Innovations Lab. This work was also supported in part by the National Science Foundation under Grant No. 2213335.
BibTeX
@misc{gong2025agingmultiversegeneratingconditionaware,
title={The Aging Multiverse: Generating Condition-Aware Facial Aging Tree via Training-Free Diffusion},
author={Bang Gong and Luchao Qi and Jiaye Wu and Zhicheng Fu and Chunbo Song and David W. Jacobs and John Nicholson and Roni Sengupta},
year={2025},
eprint={2506.21008},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2506.21008},
}