Stable Diffusion 3 (SD3) is close to being released, I can sense, as Stability AI has just released APIs that you can access. The APIs themselves are free to access, but you need credits to generate images. You can use the SD3 and SD3 Turbo models, which cost 6.5 and 4 credits per image respectively.
And of course, upon its release, my favourite ComfyUI creator Zho got to work and produced, on the same day, ComfyUI nodes that let you access the APIs. In this post we will explore these further. The instructions are in Chinese, so use the English translation if you have trouble understanding them.
Use ComfyUI Manager to install these custom nodes via the Install via Git URL option and provide this URL for installation. Once installed, restart ComfyUI using ComfyUI Manager and refresh the ComfyUI page in your browser.
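If you prefer installing manually instead, you can clone the repository straight into ComfyUI's `custom_nodes` folder and restart; judging by the folder path referenced later in this post, the repository should be `ZHO-ZHO-ZHO/ComfyUI-StableDiffusion3-API` on GitHub, though do double-check against the link above.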
The workflow is also available, but you can equally create a new two-node workflow yourself: the Stable Diffusion 3 API node and a Save Image node. If you do use the provided workflow, note that Zho uses a Preview Image node, which means the generated image is not saved to the Output folder. This caught me off guard, and I lost some of the initial images I generated because they were never saved to my Output folder.
The node features the following inputs (a sketch of how they map onto the underlying API follows the list):
- positive: positive prompt text
- negative: negative prompt text (not supported by the Turbo model)
- aspect_ratio: aspect ratio of the image, 9 options in total: “21:9”, “16:9”, “5:4”, “3:2”, “1:1”, “2:3”, “4:5”, “9:16”, “9:21” (not applicable to image-to-image)
- mode: text to image or image to image
- model: SD3 or SD3 Turbo
- seed: seed
- image: optional, only used for image-to-image
- strength: optional, only used for image-to-image
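For the curious, here is a minimal sketch of how these node inputs plausibly map onto Stability AI's v2beta Stable Image endpoint. This is my own illustration based on Stability's public REST API, not Zho's actual node code, so treat the details as assumptions:

```python
import requests

STABILITY_KEY = "sk-..."  # your API key from the Stability AI developer site

def generate_sd3(prompt, negative="", aspect_ratio="1:1", model="sd3",
                 seed=0, image_path=None, strength=0.5):
    """Rough equivalent of the node's inputs; image_path and strength
    only apply in image-to-image mode."""
    data = {
        "prompt": prompt,
        "model": model,          # "sd3" (6.5 credits) or "sd3-turbo" (4 credits)
        "seed": seed,
        "output_format": "png",
    }
    if model == "sd3":           # the Turbo model ignores negative prompts
        data["negative_prompt"] = negative
    if image_path:               # image-to-image: strength applies, aspect_ratio does not
        data["mode"] = "image-to-image"
        data["strength"] = strength
        files = {"image": open(image_path, "rb")}
    else:
        data["mode"] = "text-to-image"
        data["aspect_ratio"] = aspect_ratio
        files = {"none": ""}     # forces multipart/form-data with no file attached
    resp = requests.post(
        "https://api.stability.ai/v2beta/stable-image/generate/sd3",
        headers={"authorization": f"Bearer {STABILITY_KEY}",
                 "accept": "image/*"},
        files=files,
        data=data,
    )
    resp.raise_for_status()
    return resp.content          # raw image bytes

# Example: save a text-to-image result
with open("cats.png", "wb") as f:
    f.write(generate_sd3("two cats sitting on a sofa", model="sd3-turbo"))
```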
Before you can start using it, you need an API key from the Stability AI developer site: navigate to API Keys and create a new one or copy an existing one. You can also top up credits via the Billing page if you need to; $10 gets you 1,000 credits, which is enough to generate around 153 images with SD3 or 250 with SD3 Turbo.
Once you have the API key, edit the config.json file of the custom node, found in the “custom_nodes/ComfyUI-StableDiffusion3-API/” folder, and enter the API key you copied into the “STABILITY_KEY” field as shown below.
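Assuming the file keeps the key name mentioned above, it should end up looking something like this (the key shown is a placeholder):

```json
{
    "STABILITY_KEY": "sk-your-api-key-here"
}
```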
Save and close the file.
Now when you run the workflow you should get images coming back from the API into the ComfyUI preview/save node. The custom node also supports image-to-image, where an input image guides the generation along with your prompt.
In this example you can see image-to-image generation, where the colours of the result are somewhat influenced by the input image.
Overall I see big improvements in text handling, but in terms of image quality it is quite similar to SDXL. SD3 is also more coherent when it comes to counts: specify two cats and you will actually get two cats in your image. I ran out of credits before I could experiment further, so I will wait for the full release of the model, which should be runnable locally.
For the time being, if you are curious and cannot wait, it's best to use the ComfyUI workflow for the SD3 API to generate images and experiment.
If you'd like to support our site, please consider buying us a Ko-fi, grabbing a product, or subscribing. Need a faster GPU? Get access to the fastest GPUs for less than $1 per hour with RunPod.io.