Yet another week, and new tools have come out, so one must play and experiment with them. I discovered these through an X (formerly Twitter) post shared by makeitrad and was keen to explore what was available.

StabilityAI has released Control-LoRA for SDXL: low-rank parameter fine-tuned ControlNets for SDXL designed to run on consumer-level GPUs. In practice this means your GPU memory is used more efficiently, so the dreaded CUDA out-of-memory error is much less likely to happen.
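To get an intuition for why low-rank weights are smaller, here is a tiny, illustrative NumPy sketch (not the actual Control-LoRA code, and the layer size is hypothetical): instead of storing a full weight update for a layer, a LoRA stores two narrow matrices whose product approximates that update.

```python
import numpy as np

# Hypothetical layer size; the rank matches the Rank 128 files.
d_out, d_in, rank = 1280, 1280, 128

full_update = np.zeros((d_out, d_in))   # full fine-tune: one big matrix

A = np.zeros((d_out, rank))             # low-rank factor (up-projection)
B = np.zeros((rank, d_in))              # low-rank factor (down-projection)
low_rank_update = A @ B                 # reconstructed on the fly at load time

print(full_update.size)                 # 1638400
print(A.size + B.size)                  # 327680 -> 5x fewer parameters to store
```

A higher rank (256 vs 128) stores more parameters and captures more detail, which is why the Rank 256 files are roughly twice the size.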

There are two ranks of files: Rank 256 (740 MB) and Rank 128 (380 MB).

I downloaded the 740 MB Rank 256 versions from their Hugging Face page.
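If you prefer to script the downloads, a minimal sketch using the huggingface_hub library is below. The repo id and file paths are what I saw on their Hugging Face page at the time of writing, so treat them as assumptions and verify them before running.

```python
from huggingface_hub import hf_hub_download

# Repo id and file layout as seen on the Hugging Face page -- verify first.
for name in ["canny", "depth", "recolor", "sketch"]:
    hf_hub_download(
        repo_id="stabilityai/control-lora",
        filename=f"control-LoRAs-rank256/control-lora-{name}-rank256.safetensors",
        local_dir="downloads",
    )
```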

There are Depth models (with depth maps created using MiDaS and ClipDrop), Canny edge detection, Photograph and Sketch Colorizers, and Revision.

As I shared in one of my earlier posts about ComfyUI, its creator is now at StabilityAI, which of course means that ComfyUI workflows were released alongside the models. These are the .json files on the same page.

What their page lacks is documentation, so I am documenting more details here, assuming you are downloading all the models and ComfyUI workflows to use on your own computer.

Installation

Download the control-lora-*.safetensors files and place them in ComfyUI's ControlNet models folder, e.g. ComfyUI/models/controlnet. These are the files (rank 256 or rank 128 versions):

  • control-lora-canny-rank256.safetensors
  • control-lora-depth-rank256.safetensors
  • control-lora-recolor-rank256.safetensors
  • control-lora-sketch-rank256.safetensors

Download clip_vision_g.safetensors and place it in ComfyUI's CLIP Vision folder, e.g. ComfyUI/models/clip_vision.
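If you want to double-check the placement, here is a quick sanity-check script, assuming a standard ComfyUI folder layout and the rank 256 files:

```python
from pathlib import Path

# Adjust this to wherever your ComfyUI checkout lives.
COMFYUI = Path("ComfyUI")

expected = [
    *(COMFYUI / "models" / "controlnet" / f"control-lora-{name}-rank256.safetensors"
      for name in ["canny", "depth", "recolor", "sketch"]),
    COMFYUI / "models" / "clip_vision" / "clip_vision_g.safetensors",
]

for path in expected:
    print(("OK     " if path.exists() else "MISSING"), path)
```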

Now you can fire up ComfyUI and start experimenting with the various workflows provided. However, if like me you get errors about missing custom nodes, make sure you have the following installed.

You'll see errors like "When loading the graph, the following node types were not found":

  • CannyEdgePreprocessor
  • ImageScaleToTotalPixels
  • etc

Make sure you are using the latest version of ComfyUI: run git pull inside your ComfyUI folder (cloned from https://github.com/comfyanonymous/ComfyUI) before starting it up.

Install the various custom node packs: Stability-ComfyUI-nodes, ComfyUI-post-processing, ComfyUI's ControlNet preprocessor auxiliary models (still a work in progress; make sure you remove the previous comfyui_controlnet_preprocessors package if you had it installed), and MTB Nodes. Restart ComfyUI at this point.
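If you would rather install these by hand, a minimal sketch is below; it clones each pack into ComfyUI/custom_nodes. The repository URLs are the ones I believe these projects live at, so verify them before running.

```python
import subprocess
from pathlib import Path

custom_nodes = Path("ComfyUI") / "custom_nodes"

# Repo URLs as I know them at the time of writing -- verify before running.
repos = [
    "https://github.com/Stability-AI/stability-ComfyUI-nodes",
    "https://github.com/EllangoK/ComfyUI-post-processing-nodes",
    "https://github.com/Fannovel16/comfyui_controlnet_aux",
    "https://github.com/melMass/comfy_mtb",
]

for url in repos:
    subprocess.run(["git", "clone", url], cwd=custom_nodes, check=True)
```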

To make life easier I am using ComfyUI Manager, a UI that lets me easily search, select, and install/uninstall these without having to mess around with git clones and install scripts.

Control LoRA Recolor

I took an old B&W photo of my mom that was on my PC and tested out the workflow file provided for Recolor. The initial result was not optimal and had extra colours (yes, that's how I spell colour), which was easy to counter by adding positive and negative prompts. As positives I described the dress, a saree, and that she's wearing gold earrings; a few negatives like eye makeup, face colouring, and bindi resulted in a pretty decent output.

I will experiment more with black and white images and share them on my socials, so make sure you are following.

Control LoRA Revision

There are two workflows provided in this case. The first takes an input image, feeds it to SDXL, and spits out similar images, so you don't have to provide SDXL text prompts. The results were okay in my tests, but this was just an initial play. I tried a variety of different kinds of images, and the resulting images mostly shared a similar style and composition.

The second workflow file provided lets you load up two images and blends them together. I took one landscape image and one more abstract image to see what the results would look like. I have to say it was pretty cool to see the two images blended into a new creation.

Control LoRA Depth

Here is an example of the Depth model in action, producing images with similar depth; in this case I used the default prompt.

Control LoRA Canny

I took one of my earlier images created using SDXL and fed it in as the input to get similarly composed results. In the edge-detection preview you can see the outlines detected from the input image. SDXL then does a pretty good job of producing a new image with a similar shape.

Conclusion

In conclusion, these new tools allow us to achieve similar and even superior results while getting the best of both worlds: ControlNet and SDXL. Since these workflows only include the SDXL base model, you can add the refiner and go as far as upscaling, so you can achieve higher-quality outputs at higher resolutions as well.

Hope you found this information useful; let me know if you have any questions or comments.

If you'd like to support our site, please consider buying us a Ko-fi, grabbing a product, or subscribing. Need a faster GPU? Get access to the fastest GPUs for less than $1 per hour with RunPod.io.