Content-Aware GAN Compression
¹Princeton University   ²Adobe Research
In CVPR 2021

Examples showing the generative ability of our 11×-accelerated generator versus the full-size one. In particular, our model generates the content of interest with visual quality comparable to that of the full-size model.
Abstract
Generative adversarial networks (GANs), e.g., StyleGAN2, play a vital role in various image generation and synthesis tasks, yet their notoriously high computational cost hinders their efficient deployment on edge devices. Directly applying generic compression approaches yields poor results on GANs, which motivates a number of recent GAN compression works. While prior works mainly accelerate conditional GANs, e.g., pix2pix and CycleGAN, compressing state-of-the-art unconditional GANs has rarely been explored and is more challenging. In this paper, we propose novel approaches for unconditional GAN compression. We first introduce effective channel pruning and knowledge distillation schemes specialized for unconditional GANs. We then propose a novel content-aware method to guide the processes of both pruning and distillation. With content-awareness, we can effectively prune channels that are unimportant to the contents of interest, e.g., human faces, and focus our distillation on these regions, which significantly enhances the distillation quality. On StyleGAN2 and SN-GAN, we achieve a substantial improvement over the state-of-the-art compression method. Notably, we reduce the FLOPs of StyleGAN2 by 11x with visually negligible image quality loss compared to the full-size model. More interestingly, when applied to various image manipulation tasks, our compressed model forms a smoother and better disentangled latent manifold, making it more effective for image editing.
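The core idea of content-awareness is to restrict both the pruning criterion and the distillation loss to the regions of interest (e.g., faces) indicated by a content mask. The paper's exact formulations differ, but the mechanism can be illustrated with a minimal numpy sketch, assuming a binary content mask and treating channel importance as masked mean activation magnitude (both function names and this particular scoring rule are illustrative, not the authors' implementation):

```python
import numpy as np

def content_aware_distill_loss(teacher_out, student_out, content_mask):
    """Masked MSE: only pixels inside the content region (mask == 1)
    contribute to the distillation loss between teacher and student."""
    diff = (teacher_out - student_out) ** 2        # (C, H, W)
    masked = diff * content_mask                   # broadcast over channels
    return masked.sum() / np.maximum(content_mask.sum(), 1e-8)

def channel_importance(activations, content_mask):
    """Score each channel by its mean absolute activation inside the
    content region; low-scoring channels become pruning candidates."""
    masked = np.abs(activations) * content_mask    # (C, H, W) * (H, W)
    return masked.sum(axis=(1, 2)) / np.maximum(content_mask.sum(), 1e-8)
```

In practice the mask would come from an off-the-shelf segmentation or parsing network applied to the teacher's output, so channels that only affect background texture score low and can be removed with little perceptual cost in the region that matters.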
Full Paper with Supplementary
Yuchen Liu, Zhixin Shu, Yijun Li, Zhe Lin, Federico Perazzi, S.Y. Kung
Content-Aware GAN Compression
CVPR, 2021 (Paper)