This native app shows how to integrate Apple's Core ML Stable Diffusion implementation in a SwiftUI application. The Core ML port is a simplification of the Stable Diffusion implementation from the diffusers library. This application can be used for faster iteration, or as sample code for other use cases.
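For orientation, here is a minimal sketch of what driving the Core ML pipeline looks like with the StableDiffusionPipeline type from Apple's ml-stable-diffusion Swift package. This is illustrative only, not the app's actual code, and the exact initializer and parameter labels vary between versions of the package:

```swift
import CoreGraphics
import CoreML
import StableDiffusion  // Swift package from apple/ml-stable-diffusion

// Minimal sketch of generating one image from a downloaded Core ML model.
// Initializer and parameter labels may differ between package versions.
func generateImage(resourcesAt modelURL: URL, prompt: String) throws -> CGImage? {
    let mlConfig = MLModelConfiguration()
    mlConfig.computeUnits = .cpuAndNeuralEngine  // or .cpuAndGPU / .all

    // `modelURL` points at the folder produced by unzipping the downloaded model.
    let pipeline = try StableDiffusionPipeline(
        resourcesAt: modelURL,
        configuration: mlConfig,
        reduceMemory: true  // helps on memory-constrained iPhones/iPads
    )
    try pipeline.loadResources()

    var config = StableDiffusionPipeline.Configuration(prompt: prompt)
    config.stepCount = 25
    config.seed = 42

    // One CGImage? per requested image; nil if the safety checker rejected it.
    let images = try pipeline.generateImages(configuration: config) { _ in true }
    return images.first ?? nil
}
```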
This is what the app looks like on macOS:
On first launch, the application downloads a zipped archive with a Core ML version of Stability AI's Stable Diffusion v2 base, from this location in the Hugging Face Hub. This process takes a while, as several GB of data have to be downloaded and unarchived.
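The app manages this download itself, but a stripped-down version of the first-launch step could look like the sketch below. It is macOS-only because it shells out to /usr/bin/unzip, and both the function and its URL parameter are placeholders rather than the app's actual code:

```swift
import Foundation

// Sketch of the first-launch download step (the real app also reports progress in the UI).
// `modelZipURL` is a placeholder for the archive's location on the Hugging Face Hub.
func downloadAndUnzipModel(from modelZipURL: URL, to destination: URL) async throws -> URL {
    // Download the zipped Core ML model to a temporary file.
    let (tempFile, _) = try await URLSession.shared.download(from: modelZipURL)

    // Unarchive it. This uses /usr/bin/unzip, so it only works on macOS;
    // on iOS you would use an unarchiving library instead.
    try FileManager.default.createDirectory(at: destination, withIntermediateDirectories: true)
    let unzip = Process()
    unzip.executableURL = URL(fileURLWithPath: "/usr/bin/unzip")
    unzip.arguments = ["-o", tempFile.path, "-d", destination.path]
    try unzip.run()
    unzip.waitUntilExit()
    return destination
}
```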
Quantized models run faster, but they require macOS 14 (Sonoma) or iOS/iPadOS 17.
The application will try to guess the best hardware to run models on. You can override this setting using the Advanced section in the controls sidebar.
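As a rough illustration of that logic, the compute-unit override and the OS gate for quantized models could look like the helpers below. These are hypothetical and not part of the app's API:

```swift
import CoreML

// Hypothetical helpers illustrating the hardware selection described above.

// Honor an explicit choice from the Advanced controls; otherwise let Core ML decide.
func computeUnits(userOverride: MLComputeUnits?) -> MLComputeUnits {
    userOverride ?? .all
}

// Quantized models need the newer OS releases mentioned above.
func supportsQuantizedModels() -> Bool {
    if #available(macOS 14.0, iOS 17.0, *) {
        return true
    }
    return false
}
```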
You need Xcode to build the app. When you clone the repo, please update common.xcconfig with your development team identifier. Code signing is required to run on iOS, but it's currently disabled for macOS.
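For reference, the team identifier in common.xcconfig is expressed with Xcode's standard DEVELOPMENT_TEAM build setting (assuming the file follows Xcode's usual naming); the value below is a placeholder:

```xcconfig
// Placeholder value; replace with your own ten-character Apple Developer team ID.
DEVELOPMENT_TEAM = ABCDE12345
```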
Known Issues
Performance on iPhone is somewhat erratic: sometimes it's ~20x slower and the phone heats up. This happens when the model cannot be scheduled to run on the Neural Engine and everything runs on the CPU instead. We have not been able to determine the cause of this problem. If you observe the same behavior, here are some recommendations:
Detach from Xcode.
Kill apps you are not using.
Let the iPhone cool down before repeating the test.
Reboot your device.
Next Steps
Allow additional models to be downloaded from the Hub.