August 23, 2022

Deploy Stable Diffusion for AI Image Generation

Stable Diffusion WebUI is my current go-to UI - the feature showcase page is worth going over.

Setup ComfyUI

git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
python -m venv venv
source venv/bin/activate
# adjust the cu117 index URL to match your CUDA version
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117
pip install xformers
pip install -r requirements.txt

Put your SD checkpoints (the huge ckpt/safetensors files) in: models/checkpoints

Then, run python main.py and visit http://127.0.0.1:8188 to use the UI.
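If you need to reach ComfyUI from another machine, a minimal sketch using ComfyUI's --listen and --port flags (8188 is the default port):

python main.py --listen 0.0.0.0 --port 8188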

Setup Stable Diffusion WebUI

Ubuntu WebUI Setup

Here are my notes for setting up A1111 on my server:

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui a1111
cd a1111

Install extensions:

cd a1111/extensions
git clone
git clone
git clone
git clone

Change the startup script webui-user.sh and add:

export COMMANDLINE_ARGS="--xformers --cloudflared --gradio-auth username:password"

Start A1111 with ./webui.sh and find the Cloudflare link to log in.

Use the Batchlinks Downloader extension to download models.


Mac WebUI Setup

I ran into so many issues trying to set it up on my MacBook Pro M1 and finally made it work (Ubuntu setup is actually much easier - see below).

The most important lesson learned: the Python version matters!

I figured this out from the inline comment here after running into many issues with Python 3.8.0 and 3.9.7 - it would be helpful if the author could highlight this in the README file.

I use pyenv to manage my Python versions and used the following commands to install Python 3.10.6 first:

pyenv versions
pyenv install 3.10.6
pyenv global 3.10.6

It should be as simple as the few steps above if the Python version is correct.
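A quick sanity check before launching (webui.sh builds its own venv with whichever python it finds first):

python --version   # should print Python 3.10.6
./webui.sh         # creates its venv with this interpreter on first run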

“Stable Diffusion 2.0 and 2.1 require both a model and a configuration file, and image width & height will need to be set to 768 or higher when generating images”

To use Stable Diffusion v2.0, follow the instructions to download the checkpoint and YAML files.

My /stable-diffusion-webui/models/Stable-diffusion/ folder looks like the following:
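Roughly, it holds the v2.0 checkpoint alongside a config file with a matching basename (A1111 picks up a .yaml named after the checkpoint); the file names below are the standard v2.0 ones:

models/Stable-diffusion/
├── 768-v-ema.ckpt
└── 768-v-ema.yaml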

Note: for v2.0, you may need to run ./webui.sh --no-half or restart to make it work.

For the SD v2.0 768-v-ema.ckpt model, you have to use 768x768 or higher, e.g., 768x1024, to generate images; otherwise you get garbage images like the ones shown below:

I want to record the issues I ran into below in case I need to refer to them later.

Ubuntu WebUI Setup

Tested on Ubuntu 20.04.5 LTS, it’s as simple as the following two lines:

sudo apt install wget git python3 python3-venv
bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)

Install xformers by editing webui-user.sh (see the discussions), then start the WebUI using ./webui.sh and xformers will be installed (for my new 4090 this did not work; adding --xformers did):

# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
export COMMANDLINE_ARGS="--reinstall-xformers"

Next, change webui-user.sh to remove the installation argument above (you only need to install it once) and enable xformers:

export COMMANDLINE_ARGS="--xformers"

To enable a public Gradio link with authentication, change webui-user.sh to use these arguments:

export COMMANDLINE_ARGS="--xformers --share --gradio-auth your-user-name:your-password"

To enable extension installation with --share, change webui-user.sh to use these arguments (otherwise you will hit an error):

export COMMANDLINE_ARGS="--xformers --share --gradio-auth your-user-name:your-password --enable-insecure-extension-access"

If you have a server with a fixed IP address, say x.x.x.x, then you can use --listen to run WebUI at x.x.x.x:7860.
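A minimal webui-user.sh sketch for that case (assuming the default port 7860 is open in the firewall):

export COMMANDLINE_ARGS="--xformers --listen --gradio-auth your-user-name:your-password"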

You can also install the WebUI tunnels extension to get a Cloudflare URL by running:

./webui.sh --gradio-auth username:password --cloudflared

To download a model:
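For example, with wget straight into the models folder (the URL below is a placeholder; substitute the actual Hugging Face download link of the model you want):

cd stable-diffusion-webui/models/Stable-diffusion
wget "https://huggingface.co/<org>/<repo>/resolve/main/<model>.safetensors"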

Install Dreambooth Extension

A note on installing it:

NOTE: “Once installed, you must restart the Stable-Diffusion WebUI completely. Reloading the UI will not install the necessary requirements.”

Install the extension to use the LoRA weights.

Check out my tutorial on how to use this extension.

WebUI Colab

You can use a WebUI Colab if you just want to run WebUI via Colab - just run the chosen Colab notebook (pick the one with the model you want to use) and you will get a URL to use WebUI - the speed is OK.

Configure WebUI Server Behind Firewall

Then start the WebUI using the following command:

./webui.sh --medvram --xformers --ngrok 2Xu0wxxx --ngrok-options '{"domain":""}' --api --gradio-auth username:pwd --api-auth username:pwd --cors-allow-origins-regex '(.*?)'

Then the server can be used as a backend.
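As a quick check that the backend is reachable, a hypothetical call to A1111's /sdapi/v1/txt2img endpoint (--api-auth above means HTTP basic auth; the prompt and steps are just examples):

curl -u username:pwd -X POST "http://127.0.0.1:7860/sdapi/v1/txt2img" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a photo of a cat", "steps": 20}'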

Setup InvokeAI

These are my notes from following the InvokeAI installation instructions on a MacBook Pro M1. Tested with Python 3.10.6.

git clone https://github.com/invoke-ai/InvokeAI
cd InvokeAI
python -m venv venv
source venv/bin/activate
pip install --use-pep517 .
invokeai --web

Visit http://localhost:9090 to use the UI.

InvokeAI seems to take more resources than the AUTOMATIC1111 Stable Diffusion WebUI described below.

M1 Stable Diffusion Deployment

I just followed the instructions in Apple's ml-stable-diffusion repo.

Tested on my 2020 MacBook Pro M1 with 16GB RAM and Torch 1.13.0.

Run the following to generate the models in the coreml-sd folder:

git clone https://github.com/apple/ml-stable-diffusion
cd ml-stable-diffusion
conda create -n coreml_stable_diffusion python=3.8 -y
conda activate coreml_stable_diffusion
pip install -e .
huggingface-cli login
python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-text-encoder --convert-vae-decoder --convert-safety-checker -o coreml-sd

Generate an image with Python and output it to the image-outputs folder:

python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i coreml-sd -o image-outputs --compute-unit ALL --seed 93

The method above loads the model every time, which is quite slow (2-3 minutes). Use Swift to speed up model loading by setting up the Resources:

python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-text-encoder --convert-vae-decoder --convert-safety-checker --bundle-resources-for-swift-cli -o coreml-sd 

Then, generate an image with Swift and output it to the image-outputs folder:

swift run StableDiffusionSample "a photo of an astronaut riding a horse on mars" --resource-path coreml-sd/Resources/ --seed 93 --output-path image-outputs

Ubuntu Deployment

In the past few months, I have tried almost all popular text-to-image AI generation models/products, such as DALL-E 2, Midjourney, Disco Diffusion, Stable Diffusion, etc. The Stable Diffusion checkpoint was just released a few days ago. I deployed it on my old GPU server and recorded my notes here for people who may also want to try. Machine creativity is quite an interesting research area for IS scholars, and I jotted down some potential research topics at the end of this post as well.

I first spent a few hours trying to set up Stable Diffusion on a Mac M1 and failed - I could not install the packages properly (e.g., version not found, dependency issues). I found some successful attempts here but have not had time to try them yet.

I ended up setting up Stable Diffusion on my old GPU server running Ubuntu and here are my notes.

lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.6 LTS
Release:	18.04
Codename:	bionic

| NVIDIA-SMI 440.100      Driver Version: 440.100      CUDA Version: 10.2     |
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|   0  GeForce RTX 208...  Off  | 00000000:1A:00.0 Off |                  N/A |
| 30%   27C    P8    20W / 250W |      1MiB / 11019MiB |      0%      Default |
|   1  GeForce RTX 208...  Off  | 00000000:68:00.0 Off |                  N/A |
| 30%   26C    P8    19W / 250W |     73MiB / 11016MiB |      0%      Default |
Install the links text browser (handy for downloading the checkpoint from the terminal):

sudo apt install links

Rename the checkpoint file to model.ckpt and put it in the following folder (create it first):

mkdir -p models/ldm/stable-diffusion-v1/
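For example (assuming the v1.4 checkpoint, whose standard file name is sd-v1-4.ckpt):

mv sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt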

A side note on estimated training cost based on the reported GPU usage and the related AWS price I found:

Price of a p4d.24xlarge instance with 8 A100 GPUs (40GB VRAM each):

The training would cost between 225,000 USD and 600,000 USD.
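A back-of-envelope check, assuming the roughly 150,000 A100-hours reported in the SD v1 model card and p4d.24xlarge rates around that time (on-demand ~$32.77/hour, spot roughly a third of that - both figures are assumptions here):

# 150,000 A100-hours / 8 GPUs per instance ≈ 18,750 instance-hours
# on-demand: 18,750 × $32.77/hr ≈ $614,000
# spot:      18,750 × $12/hr    ≈ $225,000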

Back to the setup: edit environment.yaml to match the server's CUDA 10.2 (the repo default is cudatoolkit=11.3):

name: ldm
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.8.5
  - pip=20.3
  - cudatoolkit=10.2
  - pytorch=1.11.0
conda env create -f environment.yaml
conda activate ldm

Now Stable Diffusion is ready to go; let's see what the AI creates from the following prompt:

A car in 2050 designed by Antoni Gaudi

python optimizedSD/optimized_txt2img.py --prompt "a car in 2050 designed by Antoni Gaudi" --H 512 --W 512 --seed 27 --n_iter 2 --n_samples 10 --ddim_steps 50

This whole area is relatively new and there are many potentially interesting research topics.

Anyway, out of the 20 generated images (2 iterations x 10 samples) from the prompt above, the following are my top 3:

PS. The featured image for this post is generated using Stable Diffusion, whose full parameters with model link can be found at Takin.AI.