AP Workflow 4.0.2 for ComfyUI

The artists who use generative AI models to create the amazing images you see shared on social media do much more than push a button. Behind the scenes, the best ones program a complex sequence of actions that requires a full understanding of the inner workings of these models.

A great tool to learn about those inner workings is ComfyUI.

To study and experiment with ComfyUI, Alessandro created the AP Workflow, which is used every week to generate the two covers of Synthetic Work (Alessandro’s newsletter for business leaders and smart people on the impact of AI on jobs, productivity, and operations).


What’s New in 4.0.2

  • Except for the Seed and the Image Dimensions, the generation parameters are now controlled by a single node which also supports checkpoint configurations.
  • Even fewer cables thanks to the implementation of per-group @rgthree Repeater nodes.
  • A better way to choose between the two available FreeU nodes.
  • Better organized debug information printed in the console.

How to Download It

The entire workflow is embedded in the workflow picture itself. Click on it and the full version will open in a new tab. Right click on the full version image and download it. Drag it inside ComfyUI, and you’ll have the same workflow you see above.

Before you download the workflow, make sure the image reads “4.0.2”. If it doesn’t, you are looking at a cached version of the image.
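Since the workflow travels inside the PNG itself, you can check a downloaded copy before dragging it into ComfyUI. The sketch below assumes the usual ComfyUI convention of storing the graph as JSON in the image’s “workflow” text chunk, and it uses the Pillow library; the file name is a placeholder.

```python
import json
from PIL import Image

def read_embedded_workflow(path):
    """Return the workflow graph embedded in a ComfyUI PNG, or None.

    ComfyUI writes the graph as JSON into a PNG text chunk named
    "workflow"; Pillow exposes text chunks through Image.info.
    """
    img = Image.open(path)
    raw = img.info.get("workflow")  # tEXt chunk written by ComfyUI
    return json.loads(raw) if raw else None
```

If `read_embedded_workflow("ap_workflow.png")` returns `None`, the image was stripped of its metadata (for example by a social network or an image proxy) and won’t load as a workflow.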

Basic Functions

AP Workflow 4.0.2 includes the following basic functions:

SDXL Base+Refiner

All images are generated using both the SDXL Base model and the Refiner model, each automatically configured to perform a certain amount of diffusion steps according to the “Step Ratio” formula defined in the dedicated widget.

To use the Refiner, you must enable it in the “Functions” section and you must set the “End at Step / Start at Step” switch to 2 in the “Parameters” section.
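The Base/Refiner split can be pictured as follows. This is a minimal sketch of the idea behind the “Step Ratio” formula, not the workflow’s actual node logic; the 0.8 ratio and the 30-step total are illustrative assumptions.

```python
# Sketch: the Base model denoises the first portion of the sampling
# schedule and the Refiner finishes it on the same latent. The Base
# sampler's "End at Step" equals the Refiner sampler's "Start at Step".

def split_steps(total_steps, step_ratio):
    """Split a schedule of total_steps between Base and Refiner.

    Returns (start, end) step ranges for the Base and the Refiner.
    """
    switch = round(total_steps * step_ratio)
    base = (0, switch)               # Base runs steps [0, switch)
    refiner = (switch, total_steps)  # Refiner runs steps [switch, total)
    return base, refiner

base, refiner = split_steps(30, 0.8)
# base == (0, 24), refiner == (24, 30): Base does 24 steps, Refiner 6.
```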

Fine-tuned SDXL (or just the SDXL Base)

All images are generated with just the SDXL Base model, or with a fine-tuned SDXL model that requires no Refiner.

If you don’t want to use the Refiner, you must disable it in the “Functions” section, and set the “End at Step / Start at Step” switch to 1 in the “Parameters” section.

XY Plot

The “XY Plot” sub-function will generate images with the SDXL Base+Refiner models, or just the Base/Fine-Tuned SDXL model, according to your configuration.

To activate it, follow the instructions in the “Base/Fine-Tuned SDXL + XY Plot” section of the green area.

Notice that the XY Plot function can work in conjunction with ControlNet, the Detailers (Hands and Faces), and the Upscalers.

LoRAs, ControlNet and Control-LoRAs

You can condition your images with the ControlNet preprocessors, including the new OpenPose preprocessor compatible with SDXL, and LoRAs.

Currently, up to six ControlNet preprocessors can be configured to work concurrently, but you can add additional ControlNet stack nodes if you wish.

Activate each ControlNet model with its dedicated switch in the “ControlNet XL + Control-LoRAs” section of the workflow.

Notice that the ControlNet conditioning can work in conjunction with the XY Plot function, the Refiner, the Detailers (Hands and Faces), and the Upscalers.

You can also activate dozens of concurrent LoRAs in the “Base/Fine-Tuned SDXL + XY Plot” function.

Prompt Builder

You can use a (very simple) prompt builder to quickly switch between your three most used types and styles of image for the positive prompt, and your three most used negative prompts. Or you can just use the old-fashioned single prompt box.
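To make the idea concrete, here is a toy sketch of what such a prompt builder does: it selects one of a few stored templates by index, much like the switch nodes in the workflow. The template texts are placeholders, not the workflow’s own.

```python
# Hypothetical prompt templates; in the workflow these live in text
# nodes and a switch node picks one by input number.
POSITIVE_TEMPLATES = {
    1: "photo of {subject}, natural light, 35mm",
    2: "oil painting of {subject}, impressionist",
    3: "3d render of {subject}, studio lighting",
}

def build_prompt(template_id, subject):
    """Return the selected positive-prompt template filled in."""
    return POSITIVE_TEMPLATES[template_id].format(subject=subject)

build_prompt(2, "a lighthouse")
# -> "oil painting of a lighthouse, impressionist"
```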

Advanced Functions

AP Workflow 4.0.2 includes the following advanced functions:


ReVision

As an alternative to the SDXL Base+Refiner models, or the Base/Fine-Tuned SDXL model, you can generate images with the ReVision method.

To use ReVision, you must enable it in the “Functions” section. You must also disable the Base+Refiner SDXL option and Base/Fine-Tuned SDXL option in the “Functions” section.

Notice that ReVision can work in conjunction with the Detailers (Hands and Faces) and the Upscalers.

Also notice that the ReVision model does NOT take into account the positive prompt defined in the Prompt Builder section, but it considers the negative prompt.

Hand Detailer

The Hand Detailer will identify hands in the image and attempt to improve their anatomy through two consecutive passes, generating an image after processing.

The resulting image will be then passed to the Face Detailer (if enabled) and/or to the Upscalers (if enabled).

The Hand Detailer uses a dedicated ControlNet and Checkpoint based on SD 1.5. It works even if your base model is SDXL or Fine-Tuned SDXL.

To use the Hand Detailer you must enable it in the “Functions” section.

Face Detailer

The Face Detailer will identify small and large faces, regenerate both at higher resolution, and attempt to improve their aesthetics according to two independent configurations (large faces require a different treatment than small faces).

The Face Detailer will generate an image after processing small faces and another after processing both small and large faces. The resulting image will then be passed to the Upscalers (if enabled).

To use the Face Detailer, you must enable it in the “Functions” section.


Upscalers

You can upscale your image, generated by the SDXL Base+Refiner models, the Base/Fine-Tuned SDXL model, or the ReVision model, with one or two upscalers in sequence.

If you have enabled the Detailers (Hands and/or Faces), the upscalers will only upscale those images.

To use just one or both upscalers, you must enable them in the “Functions” section.
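The priority rule described above can be sketched as follows: the Upscalers pick the most-processed image available. This is an illustration of the behavior, not the actual node logic, and the stage names are hypothetical.

```python
def pick_upscale_source(images):
    """Choose which image the Upscalers should work on.

    images: dict mapping stage name -> image, with None (or a missing
    key) for disabled stages. Later pipeline stages take priority.
    """
    for stage in ("face_detailer", "hand_detailer", "refiner", "base"):
        if images.get(stage) is not None:
            return stage, images[stage]
    raise ValueError("no image available to upscale")

stage, _ = pick_upscale_source({"base": "img0", "hand_detailer": "img1"})
# -> "hand_detailer": the Detailer output wins over the raw Base image.
```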

Experimental Functions

AP Workflow 4.0.2 includes the following experimental functions:

Universal Negative Prompt

The user /u/AI_Characters has identified a “Universal Negative Prompt”, which tends to improve the quality of most images in most situations. For more information, read:

This technique works well in most situations. However, it can make it very difficult to generate images with multiple different subjects. Hence, it’s not enabled by default.

To enable it, you must:

1. Change input to 2 in the “Negative Prompt” switch in the “Universal Negative Prompt” section
2. Change input to 4 in the “Negative Prompt” switch in the “Prompt Builder” section

Free Lunch

AI researchers have discovered an optimization for Stable Diffusion models that improves the quality of the generated images. For more information, read:

The Free Lunch technique requires the FreeU experimental node, which is not optimized for MPS and DirectML devices. On these systems, the node must run on the CPU rather than the GPU, slowing down the image generation process.

Hence, it’s muted by default. To enable it, un-mute it with CTRL+M.

Two alternative FreeU nodes are provided, each configured with a different set of parameters recommended by members of the AI community. They are located just above the XY Plot sub-function.

Choose Image to Proceed

A new experimental node, developed by u/Old_System7203, allows users to generate a batch of images with the Base/Fine-Tuned SDXL model and then pick the favorite image to continue the workflow with the Refiner, the Detailers (Hands and Faces), and the Upscalers.

The node is bypassed by default. To enable it, un-bypass it with CTRL+B.

The node is located just above the “SDXL Refiner” section.

NOTICE: All experimental/temporary nodes are in blue.

Debug Functions

AP Workflow 4.0.2 includes the following debug functions:

Parameters print

The following parameters are printed in the terminal to provide more information about the ongoing image generation: checkpoint/model, sampler, scheduler, steps, CFG scale, image dimensions, seed, positive and negative prompt.

You can also write some personal notes about the next generation that will be printed in the terminal in the next run.

An additional string is printed in the terminal to inform you if the Universal Negative Prompt is used or not.

This output can be saved to a file by adding the appropriate node, if you so desire.
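A minimal sketch of this kind of printout, assuming a simple dictionary of parameters; the parameter names and values are illustrative, not necessarily the exact set the workflow prints.

```python
def format_generation_params(params, notes=""):
    """Build the terminal printout for the next image generation."""
    keys = ("checkpoint", "sampler", "scheduler", "steps", "cfg_scale",
            "width", "height", "seed", "positive_prompt", "negative_prompt")
    lines = [f"{key:>16}: {params.get(key, '-')}" for key in keys]
    if notes:  # optional personal note about the queued generation
        lines.append(f"{'notes':>16}: {notes}")
    return "\n".join(lines)

print(format_generation_params(
    {"checkpoint": "sd_xl_base_1.0", "sampler": "euler", "steps": 30,
     "cfg_scale": 7.0, "seed": 123456789},
    notes="testing the FreeU settings",
))
```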

Images metadata

All images are saved with details about the prompts and the generation parameters that should be compatible with A1111 WebUI / Vladmandic SD.Next / SD Prompt Reader.
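For reference, the convention these readers expect is a single “parameters” text chunk in the PNG containing an A1111-style infotext string. The sketch below, using Pillow, mirrors that format with illustrative field values; it is not the workflow’s own saving node.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def a1111_parameters(positive, negative, steps, sampler, cfg, seed, width, height):
    """Build an A1111-style infotext string (illustrative fields)."""
    return (f"{positive}\n"
            f"Negative prompt: {negative}\n"
            f"Steps: {steps}, Sampler: {sampler}, CFG scale: {cfg}, "
            f"Seed: {seed}, Size: {width}x{height}")

def save_with_metadata(image, path, infotext):
    """Save a PIL image with the infotext in the 'parameters' chunk."""
    meta = PngInfo()
    meta.add_text("parameters", infotext)  # key read by A1111-style tools
    image.save(path, pnginfo=meta)
```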

Required Custom Nodes

If, after loading the workflow, you see a lot of red boxes, you must install some custom node suites.

This workflow depends on a few custom nodes that you might not have installed. You should download and install ComfyUI Manager, and then install the following custom node suites to be sure you can replicate this workflow:


Why is this workflow so sparse? You wasted a lot of space!

The workflow is designed to be easy to follow, not to be space-efficient. A tighter arrangement of the nodes, or the collapse of some of them, would make it hard to understand the flow of information through the pipeline.

Given the size of the workflow, it’s highly recommended that you install the ComfyUI extension called graphNavigator and save views for the areas that you want to jump to quickly.

Here’s a recommended configuration:

Can’t you consolidate the configuration parameters and switches a little bit more?

This workflow could be massively simplified by using the new Efficiency Loader SDXL and Efficiency KSampler SDXL custom nodes. However, doing so would hide a lot of the SDXL architecture, which is kept visible for educational purposes.

What else is missing? / Can you add XYZ?

Features under evaluation:

  • Prompt enrichment via LLMs (GPT-4, LLaMA, etc.)
  • Face Swapping
  • Inpainting/Outpainting
  • Support for Stable Diffusion 1.5 and fine-tuned variants.


The AP Workflow can generate images like the following:

I Need Help!

The AP Workflow is provided as is, and it’s intended for individuals interested in learning how ComfyUI and Stable Diffusion work.

However, if your company wants to build commercial solutions on top of ComfyUI and you need help with this workflow, you could work with Alessandro on your specific challenge.

Special Thanks

The AP Workflow wouldn’t exist without the dozens of custom nodes created by very generous members of the AI community.

In particular, special thanks to:

@LucianoCirino: His XY Plot nodes are the very reason why Alessandro started working on this workflow.

@rgthree: This workflow is so clean thanks to his:

  • Reroute nodes, the best you can find among custom node suites.
  • Big Context and Context Switch nodes, the best custom nodes available today to branch out an expansive workflow.
  • Mute/Bypass Repeater nodes, critical to shut down entire groups of nodes and reduce wasted computation cycles.

@receyuki: His/her SD Parameter Generator and SD Type Converter nodes allow you to manipulate the checkpoint/model information better than any other nodes.

Thanks to them and to all the other custom node creators for their help in debugging and enhancing their great nodes.

Full Changelog


  • Except for the Seed and the Image Dimensions, the generation parameters are now controlled by a single node which also supports checkpoint configurations.
  • Even fewer cables thanks to the implementation of per-group @rgthree Repeater nodes.
  • A better way to choose between the two available FreeU nodes.
  • Better organized debug information printed in the console.


  • Cleaner layout without flying cables.
  • Even more debug information printed in the console (it could be saved in a log file if desired).
  • Two different Free Lunch nodes with settings recommended by the AI community.


  • The layout has been partially revamped. Now the Functions switch, the Prompt Builder, and Parameters selector are closer to each other and more compact. The debug nodes are in their own group.
  • More debug information printed in the console (it could be saved in a log file if desired)
  • Images are now saved with metadata readable in A1111 WebUI, Vladmandic SD.Next, and SD Prompt Reader
  • There’s a new Hands Refiner function.
  • The experimental Free Lunch optimization has been implemented
  • There’s a new optional feature to select the best image of a batch before executing the entire workflow.
  • The Universal Negative Prompt is no longer enabled by default. I found situations where it constrains the image generation too much.


  • The Prompt Builder now offers the possibility to print in the terminal the seed and a note about the queued generation.
  • Experimental support for the Universal Negative Prompt theory of /u/AI_Characters, as described here.
  • Context Switch nodes have been rationalized.
  • The ReVision model now correctly works with the Detailer.
  • If you enable the second Upscaler, it now saves the picture correctly.


  • Support for Fine-Tuned SDXL models that don’t require the Refiner.
  • A second Upscaler has been added.
  • The Upscaler now previews what image is being upscaled, to avoid confusion.
  • An automatic mechanism to choose which image to upscale based on priorities has been added.
  • A (simple) function to print in the terminal the positive and negative prompt before any generation has been added.
  • Now the workflow doesn’t generate unnecessary images when you don’t use certain functions.
  • The wiring has been partially simplified (I hope I don’t have to regret this later on).


Now you can choose between the SDXL Base+Refiner models or the ReVision model to generate the initial image. You can use either in conjunction with the Detailer, the Upscaler, or both. You can also bypass entire portions of the workflow to speed up image generation.


A very simple prompt builder inspired by the style selector of A1111 WebUI / Vladmandic SD.Next has been introduced. While the XY Plot is meant for systematic comparisons of different prompts, this prompt builder is meant to quickly switch between prompt templates that you use often.


An upscaling function that can upscale the images generated by the SDXL Refiner, the FaceDetailer, or the ReVision functions has been added.


Now you can use the new ReVision section to generate images inspired by a source image or to blend together two source images into a new image.


A LoRA Stack node to load up to three LoRAs has been added. If necessary, it can be further chained with other LoRA Stack nodes.


The ControlNet XL section has been expanded to include the new Control-LoRAs released by Stability AI: Canny, Depth, Recolor, and Sketch.


The FaceDetailer section has been completely revamped. Now you can refine small faces and big faces in separate ways. It needs more testing, especially for the big faces.


The XY Plot section has been completely revamped to offer maximum flexibility. It’s a bit harder to use, but you are no longer limited to only four comparison terms.


The workflow is now organized with a much cleaner layout.


Some changes in the way KSampler Efficiency Advanced node displays image previews required a modification of the configuration of that node. If your node has a red border and you don’t know why, re-download the workflow.


The workflow has been completely redesigned to use the SDXL models as they were meant to be used.