INRetouch: Context Aware Implicit Neural Representation for Photography Retouching
Computer Vision Lab, University of Würzburg
INRetouch transfers edits from an image to a video without any visual or temporal artifacts.
We train our model on image pairs before and after editing.
Designed to be lightweight and efficient, our model enables affordable inference, which motivated us to extend its application to video editing. As shown in the video, our method effectively learns edits from images and applies them to videos, producing visually pleasing results with excellent temporal consistency and no noticeable artifacts. This can be attributed to the clarity provided by using before-and-after image pairs as a reference, and to the design of our method, which focuses on color modification with local awareness.
Unlike existing methods, such as style transfer and generative-based models, which often struggle with temporal consistency and introduce significant noise, our approach overcomes these limitations. This demonstrates both the effectiveness of our network and the controllability of the learned edits.
We propose INRetouch, a novel implicit neural representation method for one-shot image retouching transfer.
Abstract
Professional photo editing remains challenging, requiring extensive knowledge of imaging
pipelines and significant expertise. While recent deep learning approaches, particularly
style transfer methods, have attempted to automate this process, they often struggle with
output fidelity, editing control, and complex retouching capabilities. We propose a novel
retouch transfer approach that learns from professional edits through before-after image
pairs, enabling precise replication of complex editing operations. We develop a context-aware
Implicit Neural Representation that learns to apply edits adaptively based on image content
and context, and is capable of learning from a single example. Our method extracts implicit
transformations from reference edits and adaptively applies them to new images. To facilitate
this research direction, we introduce a comprehensive Photo Retouching Dataset comprising 100,000
high-quality images edited using over 170 professional Adobe Lightroom presets. Through extensive
evaluation, we demonstrate that our approach not only surpasses existing methods in photo retouching
but also enhances performance in related image reconstruction tasks like Gamut Mapping and Raw
Reconstruction. By bridging the gap between professional editing capabilities and automated solutions,
our work presents a significant step toward making sophisticated photo editing more accessible
while maintaining high-fidelity results.
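The one-shot transfer idea in the abstract can be illustrated with a deliberately simplified sketch. The code below is not the paper's context-aware implicit neural representation: it fits a linear per-pixel color transform (with normalized pixel coordinates appended as a crude stand-in for spatial context) from a single before/after pair via least squares, then applies the learned edit to an unseen image. All function and variable names here are illustrative assumptions, not taken from the authors' code.

```python
import numpy as np

def _features(img):
    """Per-pixel features: RGB, normalized (x, y) coordinates, and a bias term.

    The coordinates are a crude stand-in for the spatial context that the
    paper's implicit neural representation conditions on; the real method
    uses a learned MLP over richer local context features.
    """
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs / max(w - 1, 1), ys / max(h - 1, 1)], axis=-1)
    feats = np.concatenate([img, coords, np.ones((h, w, 1))], axis=-1)
    return feats.reshape(-1, 6)

def fit_retouch(before, after):
    """Least-squares fit of a linear map from features to edited colors."""
    X = _features(before)
    Y = after.reshape(-1, 3)
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return A  # shape (6, 3): one output column per RGB channel

def apply_retouch(img, A):
    """Apply the learned edit to a new image, clamping to valid range."""
    out = _features(img) @ A
    return np.clip(out.reshape(img.shape), 0.0, 1.0)

# One-shot transfer: learn from a single synthetic before/after pair...
rng = np.random.default_rng(0)
before = rng.random((8, 8, 3))
after = np.clip(0.8 * before + 0.1, 0.0, 1.0)  # a simple synthetic "edit"
A = fit_retouch(before, after)

# ...then apply the learned edit to an unseen image.
new_img = rng.random((8, 8, 3))
edited = apply_retouch(new_img, A)
```

A linear fit obviously cannot capture the non-linear, content-adaptive retouches the paper targets; replacing the least-squares solve with a small MLP over such per-pixel features recovers the basic shape of an implicit neural representation fitted to one editing example.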
Qualitative Results
Transfer retouches from professionally edited images collected online.
Comparison between different methods on retouching transfer Benchmark.
Local Modification Capabilities
Importance of context awareness for local and region specific modifications.
Dataset Style Variety
Visualization of the variety of edits in the used presets applied to a natural image (highlighted top-left).
Contacts
Omar Elezabi: omar.elezabi@uni-wuerzburg.de
Marcos V. Conde: marcos.conde@uni-wuerzburg.de
BibTeX
@article{elezabi2024inretouch,
title={INRetouch: Context Aware Implicit Neural Representation for Photography Retouching},
author={Elezabi, Omar and Conde, Marcos V and Wu, Zongwei and Timofte, Radu},
journal={arXiv preprint arXiv:2412.03848},
year={2024}
}