🍁 maple diffusion

Maple Diffusion runs Stable Diffusion models locally on macOS / iOS devices, in Swift, using the MPSGraph framework (not Python).

(Screenshot: Analog Diffusion running on a base M1 MacBook Air.)

  • ~1.4s / step on an M1 MacBook Air (base)
  • ~2.3s / step on an iPhone 13 Pro

To attain usable performance without tripping over iOS's 4GB memory limit, Maple Diffusion relies internally on FP16 (NHWC) tensors, operator fusion from MPSGraph, and a truly pitiable degree of swapping models to device storage.

On macOS, Maple Diffusion uses slightly more (~6GB) memory.
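
For a rough sense of what the MPSGraph approach looks like, here is a minimal, self-contained Swift sketch (illustrative only, not code from this repo): it builds a tiny graph over an FP16, NHWC-shaped tensor and runs it on the default Metal device, leaving operator fusion to MPSGraph. The full Stable Diffusion graphs are built from the same pieces, just with far more operators and weights.

import Foundation
import Metal
import MetalPerformanceShadersGraph

// Illustrative sketch only (not code from this repo): a tiny FP16 MPSGraph.
// Note: the Float16 type requires Apple Silicon / a recent SDK.
let device = MTLCreateSystemDefaultDevice()!
let graphDevice = MPSGraphDevice(mtlDevice: device)
let graph = MPSGraph()

// An FP16 placeholder with an NHWC-style shape, as described above.
let x = graph.placeholder(shape: [1, 8, 8, 4], dataType: .float16, name: "x")

// Two elementwise ops; MPSGraph is free to fuse them into a single kernel.
let scaled = graph.multiplication(x, graph.constant(2.0, dataType: .float16), name: nil)
let shifted = graph.addition(scaled, graph.constant(1.0, dataType: .float16), name: nil)

// Feed FP16 input data (2 bytes per element) and run synchronously.
let input = [Float16](repeating: 0.5, count: 1 * 8 * 8 * 4)
let feed = MPSGraphTensorData(device: graphDevice,
                              data: input.withUnsafeBufferPointer { Data(buffer: $0) },
                              shape: [1, 8, 8, 4], dataType: .float16)
let results = graph.run(feeds: [x: feed], targetTensors: [shifted], targetOperations: nil)

// Read the result back; every element should be 0.5 * 2 + 1 = 2.0.
var output = [Float16](repeating: 0, count: input.count)
output.withUnsafeMutableBufferPointer {
    results[shifted]!.mpsndarray().readBytes($0.baseAddress!, strideBytes: nil)
}
print(output[0])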

device requirements

Maple Diffusion should run on any Apple Silicon Mac (M1, M2, etc.). Intel Macs should also work now thanks to this pull request.

Maple Diffusion should run on any iOS device with sufficient RAM (6144MB definitely works; 4096MB doesn't). That means recent iPads should work out of the box, and recent iPhones should work if you can get the Increased Memory Limit capability working (to unlock 4GB of app-usable RAM). iPhone 14 variants reportedly didn't work until iOS 16.1 stable.

Maple Diffusion doesn't require Xcode for macOS builds; swiftc is enough. However, Xcode 14 and iOS 16 are required for iOS builds. Other versions may require changing build settings or may simply not work. iOS 16.1 (beta) was reportedly broken (it always generated a gray image), but I think that's fixed.

usage

To build and run Maple Diffusion:

  1. Download a Stable Diffusion PyTorch model checkpoint (sd-v1-4.ckpt, or some derivation thereof)
  2. Download this repo
git clone https://github.com/0ihsan/maple-diffusion.git && cd maple-diffusion
  3. Set up Python with PyTorch, if you haven't already, so the checkpoint can be converted to bin files (WARNING: the commands below delete any existing conda env named maple-diffusion).
# may need to install conda first: https://github.com/conda-forge/miniforge#homebrew
conda deactivate
conda remove -n maple-diffusion --all
conda create -n maple-diffusion python=3.10
conda activate maple-diffusion
pip install torch typing_extensions numpy Pillow requests pytorch_lightning
  4. Convert the PyTorch model checkpoint into a bunch of fp16 binary blobs.
./maple-convert.py ~/Downloads/sd-v1-4.ckpt
  5. Run (for macOS)
make

This will create a directory called sd.app, a simple app which you can copy to the /Applications folder. The build copies bins/ (the converted model) into the app's Resources directory, so the app is standalone.
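
As a rough illustration of why bins/ ends up inside the bundle: at runtime the app can look a converted fp16 blob up in its Resources and hand the raw bytes to MPSGraph. The sketch below is an assumption, not the repo's actual loader - the function name, file name, and subdirectory layout are made up for illustration.

import Foundation
import MetalPerformanceShadersGraph

// Hedged sketch: load one converted fp16 weight blob from the app bundle.
// "bins" and the .bin naming are illustrative; see the repo for the real layout.
func loadWeights(named name: String, shape: [NSNumber],
                 device: MPSGraphDevice) throws -> MPSGraphTensorData {
    guard let url = Bundle.main.url(forResource: name, withExtension: "bin",
                                    subdirectory: "bins") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let data = try Data(contentsOf: url)  // raw fp16 values, 2 bytes per element
    return MPSGraphTensorData(device: device, data: data,
                              shape: shape, dataType: .float16)
}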

or build with xcode

  5. Open the maple-diffusion Xcode project. Select the device you want to run on from the Product > Destination menu.
  6. Manually add the Increased Memory Limit capability to the maple-diffusion target (this step might not be needed on iPads, but it's definitely needed on iPhones - the default limit is 3GB); the resulting entitlement is sketched below.
  7. Build & run the project on your device with the Product > Run menu.
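
For reference, adding the Increased Memory Limit capability through Xcode's Signing & Capabilities tab writes an entitlement like the one below into the target's .entitlements file (a sketch of the expected result; your generated file may also contain other entries):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.developer.kernel.increased-memory-limit</key>
    <true/>
</dict>
</plist>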

related projects

  • Native Diffusion (repo) is a Swift Package-ified version of this codebase with several improvements (including image-to-image)
  • Waifu Art AI (announcement, App Store link) is an iOS / macOS app for (anime-style) Stable Diffusion based on this codebase
  • Draw Things (announcement, App Store link) is an iOS app for Stable Diffusion (using an independent codebase with similar MPSGraph-based approach)
