
Model Editing

Given an identity parameterized by weights, we can manipulate attributes by traversing semantic directions in the w2w weight subspace. The edited weights result in a new model in which the subject exhibits different attributes while preserving as much of the prior identity as possible. These edits are not image-specific, and they persist across different generation seeds and prompts. Additionally, because we operate on an identity weight manifold, minimal changes are made to other concepts, such as scene layout or other people. Try out the sliders below to see edits in w2w space.
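Mechanically, an edit of this kind reduces to moving a flattened weight vector along a (unit-normalized) attribute direction, scaled by the slider value. The sketch below is a minimal illustration of that idea, not the released implementation; the names `edit_weights`, `theta`, and `direction` are hypothetical, and the random vectors stand in for real identity-encoding weights and a learned semantic direction.

```python
import numpy as np

def edit_weights(theta_flat, direction, alpha):
    """Traverse a semantic direction in weight space.

    theta_flat: flattened identity-encoding weights (assumed to already
                lie in the w2w subspace).
    direction:  attribute edit direction (e.g. from a linear classifier
                separating models with and without the attribute).
    alpha:      edit strength -- the slider value.
    """
    direction = direction / np.linalg.norm(direction)  # unit-normalize
    return theta_flat + alpha * direction

# Toy stand-ins for real weights and a learned direction.
rng = np.random.default_rng(0)
theta = rng.standard_normal(1000)
d = rng.standard_normal(1000)
edited = edit_weights(theta, d, alpha=2.0)
```

Because the direction is normalized, `alpha` directly controls how far the model moves in weight space, which is what makes a continuous slider meaningful.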

Slide the bars to edit the identity.


Inversion

By constraining a diffusion model's weights to lie in w2w space while optimizing the standard diffusion loss, we can invert the subject (i.e., identity) from a single image into the model without overfitting. This results in a new model that encodes the subject. Typical inversion into a generative latent space projects the input onto the data (e.g., image) manifold; similarly, we project onto the manifold of identity-encoding model weights. Projection into w2w space generalizes to unrealistic or non-human identities, distilling a realistic subject from an out-of-distribution input. We provide examples of inversion below with a variety of input types.
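Geometrically, constraining the weights to w2w space means parameterizing them as a mean plus a linear combination of basis directions, so any candidate weights can be projected onto that affine subspace. The sketch below shows only this projection step with a random orthonormal basis; `project_to_w2w`, `P`, and `mu` are hypothetical names, and in the actual method the basis would come from a population of identity-encoding models while the coefficients are optimized against the diffusion loss.

```python
import numpy as np

rng = np.random.default_rng(0)
D, k = 512, 16  # toy weight dimension and subspace rank

# Hypothetical w2w subspace: orthonormal basis P (k x D) and mean weights mu.
P = np.linalg.qr(rng.standard_normal((D, k)))[0].T
mu = rng.standard_normal(D)

def project_to_w2w(theta):
    """Project arbitrary weights onto the identity subspace:
    the closest point in the affine set mu + span(P)."""
    coeffs = P @ (theta - mu)          # coordinates in the subspace
    return mu + P.T @ coeffs, coeffs   # reconstructed weights, coordinates

# Stand-in for weights fine-tuned on a single image (prone to overfitting).
theta_overfit = rng.standard_normal(D)
theta_proj, c = project_to_w2w(theta_overfit)
```

Restricting optimization to the `k` coefficients (rather than all `D` weights) is what prevents a single training image from overfitting: the result is forced to stay on the manifold of plausible identity-encoding models.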

Click on an image to invert its subject into a model.


Sampling New Models

Modeling the underlying manifold of subject-encoding weights allows us to sample new models that lie on it. Each sampled model generates a novel identity that is consistent across generations. We provide examples of sampling models from w2w space below, demonstrating a variety of facial attributes, hairstyles, and contexts.
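One simple way to sample from such a manifold is to fit a distribution over the subspace coordinates of the training models and draw new coordinate vectors from it. The sketch below uses an independent Gaussian per coordinate purely for illustration; `sample_model`, the basis `P`, and the training coefficients are all hypothetical stand-ins, not the released sampling procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
D, k, n = 512, 16, 100  # toy weight dim, subspace rank, number of models

# Hypothetical w2w basis and mean, as in the inversion setting.
P = np.linalg.qr(rng.standard_normal((D, k)))[0].T
mu = rng.standard_normal(D)

# Stand-in for the subspace coordinates of n identity-encoding models.
coeffs_train = 2.0 * rng.standard_normal((n, k))

# Fit a per-coordinate Gaussian to the population of models.
c_mean = coeffs_train.mean(axis=0)
c_std = coeffs_train.std(axis=0)

def sample_model():
    """Draw subspace coordinates from the fitted Gaussian and map
    them back to full weights: a new identity-encoding model."""
    c = c_mean + c_std * rng.standard_normal(k)
    return mu + P.T @ c

theta_new = sample_model()
```

Because every sample is a full set of model weights rather than a single image, the sampled identity stays consistent across seeds and prompts, exactly as with inverted or edited models.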

Click to sample an identity-encoding model.


Connection to Generative Latent Spaces

[Figure: weights2weights compared with a GAN latent space]

As seen in the interactive examples above, weights2weights space enables applications analogous to those of a traditional generative latent space (inversion, editing, and sampling), but it produces model weights rather than images. With generative models such as GANs, an instance is a latent code mapping to an image, whereas with weights2weights an instance is a set of identity-encoding weights.

Extending to Other Domains

[Figure: weights2weights applied to models encoding dog breeds]

We find that similar subspaces can be created for other visual concepts beyond human identities. For instance, we apply the weights2weights framework to models encoding different dog breeds or cars. We encourage further efforts in exploring the generality of weights2weights.

More Results


Composing Edits in Weight Space

[Figure: composing edits in weight space]


Continuous Control over Identity Edits

[Figure: continuous control over identity edits]


Identity Inversion + Editing

[Figure: identity inversion followed by editing]


Out-of-Distribution Identity Projection

[Figure: out-of-distribution identity projection]


w2w Sampling and Nearest Neighbor Models

[Figure: w2w sampling and nearest-neighbor models]

Acknowledgments

The authors would like to thank Grace Luo, Lisa Dunlap, Konpat Preechakul, Sheng-Yu Wang, Stephanie Fu, Or Patashnik, Daniel Cohen-Or, and Sergey Tulyakov for helpful discussions. AD is supported by the US Department of Energy Computational Science Graduate Fellowship. Part of the work was completed by AD as an intern with Snap Inc. YG is funded by the Google Fellowship. Additional funding came from ONR MURI.


BibTeX

@inproceedings{dravidinterpreting,
  title={Interpreting the Weight Space of Customized Diffusion Models},
  author={Dravid, Amil and Gandelsman, Yossi and Wang, Kuan-Chieh and Abdal, Rameen and Wetzstein, Gordon and Efros, Alexei A and Aberman, Kfir},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}
}
 