This repository was archived by the owner on Jan 4, 2020. It is now read-only.
Alternatively, you can download the latest release. It contains all weights, code, and examples.
How to evaluate
To test the code, edit the following lines in Eval.py to specify the paths to your content and style images, then save the file and run it.
parser.add_argument('--content', type=str, default='input/chicago.jpg',
                    help='File path to the content image')
parser.add_argument('--style', type=str, default='style/style11.jpg',
                    help='File path to the style image, or multiple style \
                          images separated by commas if you want to do style \
                          interpolation or spatial control')
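Because these options are ordinary argparse arguments, you can also pass the paths on the command line instead of editing the defaults. A minimal, self-contained sketch of the parsing (the file paths are just examples) is:

```python
import argparse

# Sketch of the argument parsing used by Eval.py; paths are illustrative.
parser = argparse.ArgumentParser()
parser.add_argument('--content', type=str, default='input/chicago.jpg',
                    help='File path to the content image')
parser.add_argument('--style', type=str, default='style/style11.jpg',
                    help='File path to the style image, or multiple style '
                         'images separated by commas for style interpolation '
                         'or spatial control')

# Equivalent to: python Eval.py --content input/chicago.jpg \
#                               --style style/style11.jpg,style/udnie.jpg
args = parser.parse_args(['--content', 'input/chicago.jpg',
                          '--style', 'style/style11.jpg,style/udnie.jpg'])
print(args.style.split(','))  # two comma-separated styles -> interpolation
```

Note that multiple styles are a single comma-separated string, which the script splits before loading the images.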
How to train
You can train your own SANet using Train.ipynb.
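The core of the model is the style-attentional module, in which content features attend over style feature positions. The following NumPy sketch illustrates only the shape of that computation; the paper's learned 1×1 convolutions are replaced here by simple mean-variance normalization, so this is an assumption-laden illustration, not the trained network.

```python
import numpy as np

def sanet_attention(Fc, Fs):
    """Illustrative style-attention step: content features Fc (C x Nc)
    attend over style features Fs (C x Ns)."""
    def norm(x):
        # mean-variance normalization, a stand-in for the learned
        # embedding convolutions in the actual network
        return (x - x.mean(axis=1, keepdims=True)) / (x.std(axis=1, keepdims=True) + 1e-5)

    f = norm(Fc)                                   # queries from content
    g = norm(Fs)                                   # keys from style
    h = Fs                                         # values from style
    logits = f.T @ g                               # Nc x Ns affinity map
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)        # softmax over style positions
    out = h @ attn.T                               # style features per content position
    return Fc + out                                # residual combination

C, Nc, Ns = 8, 16, 20
rng = np.random.default_rng(0)
out = sanet_attention(rng.normal(size=(C, Nc)), rng.normal(size=(C, Ns)))
print(out.shape)  # (8, 16)
```

Each content position receives a softmax-weighted mixture of style features, which is why the method adapts the style semantically rather than applying it uniformly.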
Examples
Two example content images are shown, each stylized under the following styles: 1.jpg, Composition-VII.jpg, Starry.jpg, candy.jpg, la_muse.jpg, rain_princess.jpg, seated_nude.jpg, style11.jpg, udnie.jpg, wave.jpg, wreck.jpg. (Example images not reproduced here.)
About
Arbitrary Style Transfer with Style-Attentional Networks