Jumpstart

How to Use APW

APW is currently available in the following versions:

  • APW 12.0. Focus: Stability. Local / Cloud: Yes / Yes. Access: Free for everyone. Support: No. New features: see “What’s new in APW 12.0” below. Next release: In 2-3 months.
  • APW 13.0 EA. Focus: Support for the newest AI models and bleeding-edge new features. Local / Cloud: Yes / No. Access: Early Access Program members. Support: Not guaranteed, on Discord. New features: see “What’s New in APW 13.0 EA1” below. Next release: In 1-2 weeks.


You can run APW on your own computer, or you can run APW in the cloud.

How to Run APW 12.0 in the Cloud

Alessandro partnered with RunComfy to offer a cloud-based version of APW 12.0.

This means that you don’t have to worry about installing ComfyUI, resolving conflicts between Python packages, installing ComfyUI custom node suites, downloading all the AI models, checking if you have enough VRAM to use every function, or any other technical aspect.

You can just focus on generating images and videos.

To run APW 12.0 in the cloud:

Visit the RunComfy page dedicated to APW and click Run Workflow.

Then, choose your instance size:

And finally, wait for the machine to be ready:

How To Run APW 12.0 Locally

If you generate images and videos that depend on closely-guarded intellectual property, you might prefer to download and use APW on your own computer.

To do that, assuming you have already installed ComfyUI and ComfyUI Manager, follow these steps:

  1. Download the AP Workflows Nodes Snapshot for ComfyUI Manager.
  2. Download the APW 12.0 json file.
  3. Follow the installation steps.
  4. Open APW 12.0 from ComfyUI Manager with CTRL/CMD+O.

  5. [optional] Download the basic Web Front End for APW.
  6. [optional] Download the advanced Web Front End for APW, reserved for Patreon/Ko-fi supporters who joined the Early Access program.

What’s new in APW 12.0

New Features
  • A dedicated L4 pipeline for text-to-video (T2V), image-to-video (I2V), and video-to-video (V2V).
  • Support for video generation with Hunyuan Video (T2V and V2V up to 1280x720px and 129 frames), and CogVideoX 1.0/1.5 (T2V and I2V up to 1360x768px and 81 frames).
  • Support for Hunyuan Video LoRAs and CogVideo LoRAs.

    You can choose between LoRAs for CogVideoX 1.5 and CogVideoX 1.0. For example, you can use the DimensionX Orbit LoRAs for CogVideoX 1.0 or the Prompt Camera Motion LoRA by NimVideo for CogVideoX 1.5.

  • A dedicated T2V/I2V Trajectory Editor function to control the motion of movies generated with CogVideoX.
  • A Video Flipper function. You can use it to generate a camera movement opposite to the one provided by the motion LoRA you are using.
  • A Video Acceleration function which allows you to activate Torch.Compile and Sage Attention and speed up the generation of videos with both CogVideoX and Hunyuan Video.

  • Support for Stable Diffusion 3.5 Large.
  • Support for the new Advanced ControlNet nodes and the new SD3.5 ControlNet Canny, Depth, and Blur models.
Design Changes
  • The Inpainter function now uses the new FLUX 1 Dev Fill model for both inpainting and outpainting.
  • The Image/Video Uploader function has been redesigned to allow the uploading of a source video, too.

    Additionally, now you can specify a list of images instead of a batch as Source Image. Previously, this feature was only available for the Reference Images.

  • APW now features three separate FLUX Redux functions. You can use them in two ways:
    1. To create variants (style, composition, and subject) of one or two reference images defined in the Image/Video Uploader function.
    2. To capture only the style of the reference image/s and use it to condition the generation of a completely different subject (similar to what IPAdapter does).
  • In the SD1.5/XL Configurator function, it’s much easier to switch between Stable Diffusion 1.5 and SDXL.
  • The Face Detailer function now allows you to manually choose which faces from the source image should be detailed. Notice that the feature is disabled by default and the function continues to automatically detail all identified faces as usual.
  • The Image Comparer function has been moved to the Auxiliary Function group.
  • The Image Saver function is now split in two: Final Image Saver and Intermediate Images Saver.
    The former is always on, and continues to save two versions of the same image: one with metadata and one without. The latter function is muted by default and you must activate it manually if you want to save all the intermediate images generated by the various APW functions.
  • Now only the images saved by the Final Image Saver function generate notifications (sound, browser, and/or Discord).
  • APW now serves the web front end on port 80 by default (if you prefer, you can still change it back to 8000, or any other).
  • The Prompt Tagger function is now turned off by default to save system resources. When it’s on, it equally tags user prompts, user prompts enhanced by LLMs (generated by the Prompt Enricher function), or image captions (generated by the Caption Generator function).
  • The Prompt Enricher function has been slightly redesigned.
  • The XYZ Plot function has been moved into the L3 pipeline.
  • The Controller function has been redesigned to group its toggles and offer more clarity.
  • The Repainter (img2img) function has been simplified. Its current state is transitory, until we have better nodes for the new FLUX.1 Dev ControlNets.
  • The L3 Pipeline is more compact and the Image Manipulators functions now are executed after the Upscalers functions.
  • All notes scattered throughout APW have been converted to Markdown syntax for increased legibility and interactivity. To render them correctly, be sure your ComfyUI Front End is updated to version 1.6.9 or higher.
  • The entire workflow is now perfectly aligned to the canvas grid with standardized distances between groups. Yes, Alessandro is the only one who cares about this.
Bug Fixes
  • The DetailerDebug nodes in the Face Detailer function have been fixed.
  • Support for the updated Advanced Prompt Enhancer node.
  • All saved image filenames start with the seed number again.
Removed Features
  • The Dall-E Image Generation function has been removed.
  • The DynamiCrafter video generation model has been removed.

To Download APW 13.0 EA1

  1. Join the AP Workflows Early Access program.
  2. Join the Discord server for Early Access program members (you'll receive an email with an invite).
  3. Download the AP Workflows Nodes Snapshot for ComfyUI Manager from the Discord server.
  4. Download the APW 13.0 EA1 json file from the Discord server.
  5. Follow the installation steps.
  6. Open APW 13.0 EA1 from ComfyUI Manager with CTRL/CMD+O.

What's New in APW 13.0 EA1

APW 13.0 early access features available now:
  • [EA1] APW now supports the WanVideo 2.1 model for I2V, V2V, and T2V generation.
  • [EA1] APW now supports Hunyuan Video for I2V generation.
  • [EA1] APW now supports CogVideoX for T2V generation.
  • [EA1] APW 13.0 introduces an almost complete redesign:
    • The Image Generation Pipeline now features separate functions for SDXL and SD1.5. No more need to change dozens of settings to switch between SDXL and SD1.5.
    • Each subsection in the Image Generation Pipeline now has ample space to configure and organize your LoRAs. LoRAs for SDXL and SD1.5 are no longer second-class citizens.
    • Each Video Generation Pipeline subsection has been redesigned to be consistent across video models.
    • The Front End section now lives just below the Control Panel section, for easier access.
    • The Intermediate Images Saver function has been greatly expanded, and it's now moved to the Aux Functions section of APW.
    • A new Video Spec function has been added to the Control Panel section, so you can centrally set the size and the FPS of the videos to be generated.
    • Most of the other functions have been aggregated in a new Image Enhancement Pipeline.
  • [EA1] APW features a new always-on Attention Mode function which centralizes the configuration of the attention mode for all video models. The default configuration is set to Scaled Dot Product Attention (sdpa), but you can change it to Sage Attention (sageattn) or Flash Attention (flash_attn), depending on how your machine is configured (see the sketch after this list).
  • [EA1] Every image generation pipeline (SD1.5, SDXL, SD3.5, FLUX.1), the (in/re)painter functions, the Object Swapper function, and the Face and Hand Refiner functions now use new nodes to load their diffusion models and their VAE models. These new nodes offer granular capability to define weight and compute dtypes, allowing you to use APW with resource-constrained computers in a much easier way.

    These new nodes are also configured to automatically take advantage of Sage Attention if your machine is configured to use it.
  • [EA1] The Latent function has been renamed to Image Spec.
  • [EA1] The Training Helper function has been moved back next to the Caption Generator function.
  • [EA1] The XYZ Plot function has been removed as the custom node suite it depends on has not been updated for months. The absence of this function greatly speeds up the loading of APW.

    Please use AP Plot for ComfyUI for your comparison and evaluation needs.
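
Since the sageattn and flash_attn modes depend on optional Python packages, it can help to check what your environment actually provides before changing the Attention Mode default. Here is a minimal sketch, not part of APW, that probes for the common PyPI package names behind those modes; sdpa ships with PyTorch 2.x itself:

    import importlib.util

    def available_attention_modes() -> list[str]:
        # sdpa (Scaled Dot Product Attention) is built into PyTorch >= 2.0.
        modes = ["sdpa"]
        # Sage Attention and Flash Attention are optional, separately
        # installed packages; probe for them without importing CUDA kernels.
        if importlib.util.find_spec("sageattention") is not None:
            modes.append("sageattn")
        if importlib.util.find_spec("flash_attn") is not None:
            modes.append("flash_attn")
        return modes

    print(available_attention_modes())

If a mode is missing from the printed list, leave the Attention Mode function set to sdpa.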

Early Access Program

Get an Early Access membership via Patreon or Ko-fi, and you'll have access to a dedicated Discord server where Alessandro shares the pre-release versions of all AP Workflows as he progresses in their development.

Pros

  • You'll gain a competitive edge at work!
  • You'll be able to provide early feedback on AP Workflows' design and potentially influence its development.
  • You'll support the future development of the AP Workflows.
Cons

  • There will be no documentation.
  • Things might change without notice.
  • There is no guarantee you'll receive support.

Stay In The Loop
Sign up to receive notifications when a new Early Access pre-release is available.

Work with Alessandro

AP Workflows are provided as they are, for research and education purposes only.

However, Alessandro can assist your company on a wide range of activities:

  • Workflow Design Workshop
  • LoRA Training & Model Fine Tuning Workshop [coming soon]
  • Customized Workflow Design
  • Remote Consulting
Learn more here.

Functions

Where to Start?

APW is a large, moderately complex workflow. It can be difficult to navigate if you are new to ComfyUI.

APW is pre-configured to generate images with the FLUX.1 model. After you download the workflow, you don't have to do anything but queue a generation with the prompt already set for you in the Prompt Builder function.

Check the outcome of your image generation/manipulation in the Final Image Saver section on the bottom-left of the workflow.

WARNING: If ComfyUI doesn’t generate anything, you might have a problem with your installation or an incompatibility between the custom node suites you have already installed and the ones necessary to run APW. Carefully check the Required Custom Nodes section of this document.

Once you have established that the workflow is generating images correctly, you can use the following functions to customize your image/video generation pipeline:

  • Controller
  • Seed and Latent
  • Image/Video Uploader and/or Prompt Builder

The Controller function allows you to enable or disable every other function in APW.

By default, the APW 12.0 Controller is configured to generate images with the FLUX.1 Dev model.
But, of course, APW can do so much more than merely generate images:

Notice: You should NOT manually enable or disable any function in APW by clicking on the group icons. You should only use the toggles in the Controller function.

Once you have activated the functions that you want to use, if you need to, you can modify the settings of each image and video generation pipeline, to:

  • Reconfigure models, shifts, samplers, schedulers, steps, or CFG (where applicable).
  • Apply LoRAs.
  • Reconfigure Image Conditioners and Image Optimizers.

You should not change any setting in any area of the workflow unless you have to tweak how a certain function behaves.

Navigation

APW is a large ComfyUI workflow and moving across its functions can be time-consuming. To speed up your navigation, a number of bright yellow Bookmark nodes have been placed in strategic locations.

Pressing the letter or number associated with each Bookmark node will take you to the corresponding section of the workflow.

The Bookmark nodes work even when they are muted or bypassed.

You can move them around the workflow to better suit your navigation style.
You can also change the letter/number associated with them as well as their zoom level (useful if you have a large monitor).

The following shortcuts have been configured for you:

  • § : Image Comparer
  • 1 : Controller and Uploader
  • 2 : Seed, Latent, and XYZ Plot
  • 3 : Positive Prompt Builder
  • 4 : Negative Prompt Builder
  • 5 : Prompt Tagger
  • 6 : Prompt Enricher
  • 7 : Face Detailer
  • 8 : Hand Detailer
  • 9 : Upscaler (SUPIR)
  • 0 : Image Saver

  • ALT + 1 : SD1.5/XL Configurator
  • CTRL + 1 : SD1.5/XL LoRAs
  • ALT + 2 : FLUX Configurator
  • CTRL + 2 : FLUX LoRAs
  • ALT + 3 : SD3 Configurator
  • CTRL + 3 : SD3 LoRAs
  • ALT + 4 : Dall-E 3 Image Generator
  • ALT + 5 : Painters Configurator
  • CTRL + 5 : Painters LoRAs

To see the full list of bookmarks deployed in APW at any given time, right-click on the ComfyUI canvas, select the rgthree-comfyui menu, and then look at the Bookmarks section.


To further help you navigate APW, one of the required custom nodes enables a minimap of the workflow. Extremely useful to move around quickly.

Prompt Providers

APW is designed to greatly improve the quality of your generated images in a number of ways. One of the most effective ways to do so is by modifying or rewriting the prompt you use to generate images.

Prompt Providers can automatically invoke large language models (LLMs) and visual language models (VLMs) to generate prompts and captions.

APW 12.0 features three prompt providers:

  • Prompt Builder
  • Prompt Enricher
  • Caption Generator

Prompt Builder

APW features a visual and flexible prompt builder.

You can use it to quickly switch between frequently used types and styles of image for the positive prompt, and frequently used negative prompts.

If you don’t need any special modifier for your positive prompt, just use the bright green node in the middle of the Prompt Builder function.

Prompt Enricher

The Prompt Enricher function enriches your positive prompt with additional text generated by a large language model (LLM).

APW allows you to use either centralized proprietary models (OpenAI o1 and GPT-4, Anthropic Claude) or open access models (e.g., LLaMA, Mistral, etc.) installed locally.

The use of centralized LLMs requires an API key. To set up your API key, follow these instructions.

If you use centralized LLMs, you will be charged every time the Prompt Enricher function is enabled, and a new queue is processed.

The use of local open access models requires the separate installation of an AI system like LM Studio, Ollama, or Oobabooga WebUI.

Alessandro highly recommends the use of LM Studio and APW is configured to use it by default.
Additional details are provided in the Setup for Prompt Enrichment with LM Studio section of this document.

If you don’t want to rely on an advanced AI system like LM Studio, but you still want the flexibility to serve any open access LLM you like, Alessandro recommends the use of llamafile.

The Prompt Enricher function features three example prompts that you can use to enrich your ComfyUI positive prompts: a generic one, one focused on film still generation (AI Cinema), and one focused on collage art images.

Multiple switches are in place to choose the preferred system prompt and the preferred AI system to process the system prompt and the user prompt.

Prompt Enricher - Before
Prompt Enricher - After

Caption Generator

APW offers a wide range of options to automatically caption any image uploaded via the Image/Video Uploader function with a Visual Language Model (VLM).

You can use Florence 2, OpenAI GPT-4V, or any other VLM you have installed locally (e.g., Meta LLaMA 3.2) and served via LM Studio or an alternative AI system.

The Caption Generator function replaces any positive prompt you have written with the generated caption. To avoid losing LoRA tags in the process, you can define the LoRA invocation tags in the Prompt Tagger function.

This approach is designed to improve the quality of the images generated by the Repainter (img2img) function and others.

Notice that, just like for the Prompt Enricher function, the use of OpenAI models requires an OpenAI API key. To set up your OpenAI API key, follow these instructions.

Image Providers

APW allows you to generate images from text instructions written in natural language (text-to-image or T2I), or to upload source images for further manipulation (image-to-image or I2I).

Image Generators

APW features three independent image generation pipelines, each with its own configuration:

  • FLUX.1
  • Stable Diffusion 3
  • Stable Diffusion 1.5 & SDXL

These pipelines can be activated or deactivated independently, and each one can be configured with its own set of parameters.

This means that, given a prompt, you could configure APW to always generate both a FLUX and SDXL image.

It also means that, without any manual reconfiguration, you could configure APW to generate an image with SD 1.5 and immediately repaint it with FLUX.

Image Uploaders

You can use the Image/Video Uploader function to upload individual images as well as entire folders of images, processed as a list.

Each image loading node in the Image/Video Uploader function supports reading and previewing images in subfolders of the ComfyUI input folder. This feature is particularly useful when you have a large number of images in the input folder and you want to organize them in subfolders.

The Image/Video Uploader function allows you to load up to three different types of images:

  • Source Image/s: used by most functions across the workflow.
  • 1st Reference Image: used by some Image Conditioners like the SD1.5/XL IPAdapter 1 or the FLUX Redux 1 functions.
  • 2nd Reference Image: used by the SD1.5/XL IPAdapter 2 and the FLUX Redux 2 functions.

The 1st Reference Image and the 2nd Reference Image nodes can be reconfigured to process all the images in a folder as a list.

This is a very powerful feature that comes in handy in advanced use cases where you need automation. For example, you might want to use this feature to repaint the same source image multiple times, using a different reference image for the IPAdapter or the Redux function each time.

Video Providers

APW allows you to generate videos from a text prompt (text-to-video or T2V), from a source image (image-to-video or I2V), or from a source video (video-to-video or V2V) thanks to a dedicated video generation pipeline.

Video Generators

APW features independent video generation models:

  • Hunyuan Video for T2V and V2V generations, up to 1280x720px and 129 frames.
  • CogVideoX (both 1.0 and 1.5 versions) for T2V and I2V generations, up to 1360x768px and 81 frames.

As future iterations of these models expand support to more modalities, APW will expand the video generation pipeline accordingly.

Video Uploader

If you want to use the Hunyuan Video model for V2V generations, you must enable the Image/Video Uploader function to upload a source video.

Image Painters

APW offers the capability to inpaint, outpaint, or repaint a source image loaded via the Image/Video Uploader function, or an image generated with any of the image generators.

These three functions leverage the FLUX.1 Dev model and can be further conditioned by its LoRAs.

Notice that this type of inpainting/outpainting/repainting is different from the one automatically performed by image manipulator functions like Hand Detailer, Face Detailer, Object Swapper, and Face Swapper.

Repainter (img2img)

This function will inpaint the entire source image, performing an operation known as img2img or I2I.

This approach is useful to reuse the same pose and setting of the source image while changing the subject and environment completely. To achieve that goal, you should set the value of the denoise parameter in the Inpainter node quite high (for example: 0.85).

Superman

The Repainter (img2img) function can also be used to add details to a source image without altering its subject and setting. If that’s your goal, you should set the value of the denoise parameter in the Inpainter node to a very low value (for example: 0.20).

Inpainter

This function allows you to define a mask to only inpaint a specific area of the source image.

The value of the denoise parameter in the Inpainter node should be set low (for example: 0.20) if you want to keep the inpainted area as close as possible to the original.

The inpainting mask must be defined manually in the Image/Video Uploader function, via the ComfyUI Mask Editor.

Superman

Image Expander

This auxiliary function allows you to define an outer mask to be used by the Inpainter function. This approach is useful when you want to extend the source image in one or more directions.

When the Image Expander function is active, the value of the denoise parameter in the Inpainter function must be set to 1.0.

The outpainting mask must be defined manually in the Image Expander function, by configuring the number of pixels to add to the image in every direction.

Superman

Image Optimizers

Images generated with the SD 1.5 & XL Pipeline can be further optimized via a number of advanced and experimental functions.

Perturbed-Attention Guidance (PAG)

Perturbed-Attention Guidance (PAG) helps you generate images that follow your prompt more closely without increasing the CFG value and risking burned images.

For more information, review the research project: https://ku-cvlab.github.io/Perturbed-Attention-Guidance/

Kohya Deep Shrink

Deep Shrink is an optimization technique, alternative to HighRes Fix and developed by @kohya, that promises more consistent and faster results when the target image resolution is outside the training dataset of the chosen diffusion model.

Free Lunch (v1 and v2)

AI researchers have discovered an optimization technique for Stable Diffusion models that improves the quality of the generated images. The technique has been named “Free Lunch”. Further refinements of this technique have led to the availability of a FreeUv2 node.

For more information, read: https://arxiv.org/abs/2309.11497

You can enable either the FreeUv1 (default) or the FreeUv2 node in the Free Lunch function. Both have been set up following the guidance of @seb_caway, who did extensive testing to establish the best possible configuration.

Notice that the FreeU nodes are not optimized for MPS and DirectML devices. On these systems, the nodes force the image generation to use the CPU rather than the MPS or DirectML devices, considerably slowing down the process.

Image Conditioners

APW supports the following types of image conditioners:

LoRAs

You can condition the images generated with the SD 1.5 & XL Pipeline, the SD 3 Pipeline, the FLUX Pipeline, or the Painter Pipeline thanks to a number of LoRAs.

Superman

Each LoRA must be activated in the LoRAs function of the relevant pipeline.

LoRAs can be organized by categories to more easily remember what each one does. For example, APW organizes FLUX LoRAs in the following way:

Recommended settings for each LoRA in APW, and how to combine them to obtain more intentional generations, can be found here.

When a LoRA (or an Embedding) requires a specific invocation token in the positive prompt, but you need to submit many different prompts in an automated way, you can use the Prompt Tagger function.

ControlNet

You can further condition the image generation performed with the SD 1.5 & XL Pipeline, the SD 3 Pipeline, or the FLUX Pipeline thanks to a number of ControlNet models.

APW supports the configuration of up to four concurrent ControlNet models. The efficacy of each one can be further increased by activating a dedicated ControlNet preprocessor.

Each ControlNet model is trained to work with a specific version of Stable Diffusion. So, if you want to reconfigure the default ControlNet functions, you must be careful in choosing the appropriate ControlNet model for each pipeline: SD1.5/XL, 3.0, or FLUX.

Each ControlNet model and the optional preprocessor must be defined and manually activated in its ControlNet function.

By default, APW is configured to use the following ControlNet models:

  • Tile (with no preprocessor)
  • Canny (with the AnyLine preprocessor)
  • Depth (with Metric3D preprocessor)
  • Pose (with DWPose preprocessor)

However, you can switch each preprocessor to the one you prefer via an AIO Aux Preprocessor node, or you can completely reconfigure each ControlNet function to do what you want:

If you want to see how each ControlNet preprocessor captures the details of the source image, you can use the ControlNet Previews function to visualize up to twelve previews. The ControlNet Previews function can be activated from the Controller function.

IPAdapter (for SD 1.5 & XL only)

The SD 1.5 & XL Pipeline features two independent IPAdapter functions:

These functions enable the use of the IPAdapter technique to generate variants of the reference image uploaded via the Image/Video Uploader function.

People use this technique to generate consistent characters in different poses, or to apply a certain style to new subjects.

For more information on how to use this technique, Alessandro recommends reading the documentation of @cubiq’s IPAdapter Plus custom node suite: https://github.com/cubiq/ComfyUI_IPAdapter_plus.

Superman
Superman Variant

APW allows you to specify an attention mask that the IPAdapter should focus on.

The attention mask must be defined in the Image/Video Uploader function, via the ComfyUI Mask Editor, for the reference image (not the source image).

To force the IPAdapter to consider the attention mask, you must change the switch in the Activate Attention Mask node, inside the IPAdapter function, from False to True.

APW does not include IPAdapter models for SD3.5 and FLUX.1 as the quality of the output generated by these models is still unsatisfactory.

Redux (for FLUX.1 only)

The FLUX.1 image generation pipeline features three independent Redux functions.

Similarly to the IPAdapter functions, the Redux functions enable you to generate variants of the reference image uploaded via the Image/Video Uploader function.

Redux can do more than just generate variants of the reference image. It can also merge different concepts coming from separate source images into a new generated image.

The Redux functions also support FLUX.1 LoRAs, enabling unprecedented creativity:

Image Manipulators

After you generate or upload an image, you can pass that image through a series of Image Manipulators. Each can be activated/deactivated in the Controller function.

Notice that you can activate multiple Image Manipulators in sequence, creating an image enhancement pipeline.

If every Image Manipulator is activated, the image will be passed through the following functions, in the specified order:

  1. Object Swapper
  2. Face Swapper
  3. Face Detailer
  4. Hand Detailer

Object Swapper

The Object Swapper function is capable of identifying a wide range of objects and features in the source image thanks to the GroundingDINO technique.

You can describe the feature/s to be found in the source image with natural language.

Once an object/feature has been identified, it will be modified according to the prompt you defined in the Object Swapper function.

Notice that the Object Swapper function uses a dedicated ControlNet XL Tile model and a dedicated SDXL diffusion model. They work even if your source image has been generated with an SD 1.5, fine-tuned SDXL, or SD3 model.

The reason for this design choice is that ControlNet XL Tile performs better than both ControlNet 1.5 Tile and ControlNet 3.0 Tile models.

Superman
Superman with a Christmas jumper

Notice that the Object Swapper function can also be used to modify the physical appearance of the subjects in the source image.

Superman
Superman with Blond Hair

One of the most requested use cases for the Object Swapper function is eyes inpainting.

Superman

Face Swapper

The Face Swapper function identifies the face of one or more subjects in the source image, and swaps them with a face of choice. If your source image has multiple faces, you can target the desired one via an index value.

You must upload an image of the face to be swapped via the 1st Reference Image node in the Image/Video Uploader function.

Superman in TV

Face Detailer

The Face Detailer function identifies small and large faces in the source image, and attempts to improve their aesthetics according to two independent configurations: large faces require a different treatment than small faces.

The Face Detailer function will generate an image after processing small faces and another after also processing large faces.

Notice that the Face Detailer function uses a dedicated ControlNet XL Tile model and a dedicated SDXL diffusion model. They work even if your source image has been generated with an SD 1.5, SD3, or FLUX model.

The reason for this design choice is that ControlNet XL Tile performs better than both ControlNet 1.5 Tile and ControlNet 3.0 Tile models.

Hand Detailer

The Hand Detailer function identifies hands in the source image, and attempts to improve their anatomy through two consecutive passes, generating an image after each pass.

The Hand Detailer function uses a dedicated Mesh Graphormer Depth preprocessor node and a SD1.5 Inpaint Depth Hand ControlNet model.

They work even if your source image has been generated with an SDXL, SD3, or FLUX model.

The reason for this design choice is that no Inpaint Depth Hand model exists in the SDXL, SD3, or FLUX variant.

However, since the Mesh Graphormer Depth preprocessor node occasionally struggles to identify hands in non-photographic images, you have the option to revert to the old DW preprocessor node.

Image Upscalers

APW features two independent upscaling models: CCSR (based on SD 2.1) and SUPIR (based on SDXL).

While these upscaling models internally use specific versions of Stable Diffusion, they can be used to upscale images generated with any image generator, including FLUX.1, or images uploaded via the Image/Video Uploader function.

Upscaler (CCSR)

CCSR is easier to configure than SUPIR, and it generates exceptional upscaling results, on par with or superior to the ones you can obtain with Magnific AI or Topaz Gigapixel.

In some edge cases, for example where the image to upscale features an intricate pattern that must be preserved, CCSR can perform better than SUPIR.

Upscaler (SUPIR)

Just like the Upscaler (CCSR) function, the Upscaler (SUPIR) function generates exceptional upscaling results, on par with or superior to the ones you can obtain with Magnific AI or Topaz Gigapixel.

Differently from the Upscaler (CCSR) function, the Upscaler (SUPIR) function allows you to condition the image upscaling process with one or more LoRAs, and it allows you to perform “creative upscaling” similar to the one offered by Magnific AI, by lowering the strength of the control_scale_end parameter.

Image Finishers

The very last steps of the image enhancement pipeline are performed by a series of image finishers.

If every Image Finisher is activated, the image will be passed through the following functions, in the specified order:

  1. Colorizer
  2. Color Corrector
  3. LUT Applier
  4. Grain Maker
  5. Watermarker

Colorizer

APW includes a Colorizer function, able to colorize a monochrome image uploaded via the Image/Video Uploader function.

While APW allows you to colorize an image in other ways, for example via the Repainter (img2img) function, the Colorizer function is more accurate and significantly faster.

Color Corrector

The Color Corrector function automatically adjusts the gamma, contrast, exposure, offset, hue, saturation, etc. of the source image/s.

Superman

LUT Applier

The LUT Applier function allows you to apply a LUT file to the source image/s.

You can add your LUT files (in .cube extension) to the /ComfyUI/custom_nodes/ComfyUI_essentials/luts/ folder.

Superman

Grain Maker

The Grain Maker function allows you to add grain and/or a vignette effect to the source image/s.

The grain can be controlled in terms of intensity, scale, and temperature.

Watermarker

This function generates a copy of the image/s you are processing with APW, with text of your choice in the position of your choice.

You can add your fonts to the /ComfyUI/custom_nodes/ComfyUI_Comfyroll_CustomNodes/fonts folder.

Notice that the text of the watermark is not the same text you define in the Add a Note to Generation node in the Seed function.

Also notice that the Watermarker function is not yet capable of watermarking generated videos.

Image Evaluators

APW is capable of generating images at an industrial scale. To help you choose the best images among hundreds or thousands, you can count on up to four Image Evaluators.

Each Image Evaluator can be activated/deactivated in the Controller section of the workflow.

Image Comparer

The Image Comparer function allows you to manually compare three pairs of images.

For example, you could compare a source image uploaded via the Image/Video Uploader function with the same image after it has been manipulated and enhanced in the L2 and L3 pipelines.

Face Analyzer

The Face Analyzer function allows you to evaluate a batch of generated images and automatically choose the ones that present facial landmarks very similar to the ones in a Reference Image you upload via the Image/Video Uploader function.

Aesthetic Score Predictor

APW features an Aesthetic Score Predictor function, capable of rearranging a batch of generated images based on their aesthetic score.

The aesthetic score is calculated starting from the prompt you have defined in the Prompt Builder function.

The Aesthetic Score Predictor function can be reconfigured to automatically exclude images below a certain aesthetic score. This approach is particularly useful in conjunction with the Image Chooser function to automatically filter a large number of generated images.

The Aesthetic Score Predictor function is not perfect. You might disagree with the score assigned to the images. In his tests, Alessandro found that the AI model used by this function does a reasonably good job at identifying the top two images in a batch.

Image Chooser

When you generate a batch of images with the Image Generator (SD) function, or you upload a series of images via the Image/Video Uploader function, you might want to pause APW execution and choose a specific image from the batch to further process it with the Image Manipulators.

This is possible thanks to the Image Chooser node. By default, it’s configured in Pass through mode, which doesn’t pause the execution of the workflow.

Change the node mode from Pass through to Always pause or Only pause if batch to enforce a pause and choose the image.

Video Conditioners

APW features the following video conditioners:

LoRAs

APW allows you to condition the videos generated with the Hunyuan Video or CogVideoX pipelines with one or more LoRAs per model.

By default, the LoRAs are disabled and must be manually configured and then activated via the Controller function.

Also by default, each model is configured to use only one LoRA at a time, which you can select via an input switch node, but you can reconfigure these functions to use multiple LoRAs in a chain.

For example, you could configure the CogVideo LoRAs function to use two motion LoRAs in a chain:

Trajectory Editor

APW includes a Trajectory Editor function where you can draw the point of origin and the point of destination for the motion you want your video to follow.

At the moment, this feature is only compatible with CogVideoX 1.0.

Notice that the trajectory preview node in this function automatically resizes to match the size of the frame you defined in the CogVideo Frames function.

This is a limitation of the node used by APW and might cause some misalignment in your layout.

Video Manipulators

APW features the following video manipulators:

Video Flipper

The Video Flipper function allows you to create a mirrored version of your video. This is useful if you only have a motion LoRA in one direction, and you want to generate a video with the motion in the opposite direction.

Auxiliary Functions

APW includes the following auxiliary functions:

Prompt Tagger

The Prompt Tagger function allows you to automatically tag both your positive prompts and your negative prompts with specific triggering words to activate LoRAs or Embeddings.

This function automatically applies the tags to user prompts, user prompts enhanced by the Prompt Enricher function, or image captions generated by the Caption Generator function.

To use this function, be sure to not include any tag in either the positive or negative prompt you define in the Prompt Builder function.

Training Helper for Caption Generator

APW allows you to automatically caption all images in a folder and save the captions in text files. This capability is useful to create a training dataset that you can use with AP Trainer for ComfyUI, any other front end for kohya_ss, or any other third-party training solution (OneTrainer, SimpleTuner, etc.).

To use this capability, you need to activate the Caption Generator toggle and the Training Helper toggle in the Controller function and modify the Single Image or List? node in the Image/Video Uploader function to choose list instead of single images.

Once that is done, crucially, you’ll have to queue as many generations as the number of images in the folder you want to caption. To do so, check the Extra Options box in the Queue menu.

The Training Helper function will generate a caption for each image in the folder you specified, and save the caption in a file named after the image, but with the .txt extension.

By default, these caption files are saved in the same folder where the images are located, but you can specify a different folder.
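
If you want to sanity-check the resulting dataset, the naming convention is easy to verify with a short script. This is a minimal sketch, not part of APW; the dataset folder path is a hypothetical example:

    from pathlib import Path

    # Hypothetical dataset folder; replace with the folder you captioned.
    dataset = Path("~/datasets/superman").expanduser()

    # Adjust the glob pattern if your images are .jpg or another format.
    for image in sorted(dataset.glob("*.png")):
        caption = image.with_suffix(".txt")  # same name, .txt extension
        status = "ok" if caption.exists() else "missing caption"
        print(f"{image.name}: {status}")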

Intermediate Image Saver

The Intermediate Image Saver function allows you to save the images generated by each one of the functions of APW.

This is particularly useful if you have set up a complex image enhancement pipeline and you want to retain the intermediate images generated before the final one.

For example, if you activated the Face Detailer and the Hand Detailer functions, you might want to save the image generated after only the small faces in your source image have been fixed, or the image generated after the hands have been fixed, before additional manipulations are applied.

XYZ Plot

The XYZ Plot function generates a series of images, permuting any parameter across any node in the workflow, according to the configuration you define.

ControlNet Previews

This function allows you to visualize how twelve different ControlNet models capture the details of the image you uploaded via the Image/Video Uploader function.

Each preprocessor can be configured to load a different ControlNet model, so you are not constrained by the twelve models selected as defaults for APW.

The ControlNet Previews function is useful to decide which models to use in the ControlNet + Control-LoRAs function before you commit to a long image generation run. It’s recommended that you activate this function only once, to see the previews, and then deactivate it.

Video Acceleration

This optional function allows you to speed up any video generation performed by Hunyuan Video or CogVideoX if you have the supported hardware and you configured your OS appropriately.

At the moment, the Video Acceleration function supports Torch.Compile and Sage Attention.
Both are very complex to install. How to install them is beyond the scope of this documentation.

Debug Functions

Scattered across APW, you’ll find several nodes in dark blue. These nodes are designed to help you debug the workflow and understand the impact of each node on the final image. They are completely optional and can be muted or removed without impacting the functionality of the workflow.

Additionally, APW includes the following debug capabilities:

Master Log

AP Workflow features a highly granular logging system, able to print to the terminal any parameter of any node in the workflow.

By default, the Master Log function prints the following information:

  1. Positive Prompt
  2. Negative Prompt
  3. Seed
  4. Generation Notes (defined by you in the Seed function)
  5. Log data from each image generation pipeline

In turn, each image generation pipeline prints the following information:

  1. Model
  2. Sampler
  3. Scheduler
  4. Steps
  5. CFG
  6. CLIP Skip (where applicable)
  7. Guidance (where applicable)
  8. Base Shift (where applicable)
  9. Max Shift (where applicable)

This information can be completely reconfigured to include only the parameters you are interested in.

This information can also be saved to a file by adding the appropriate node, if you wish.

Front End Log

The Front End Log function prints the same information as the Master Log function, but only in the Discord channel or Telegram chat used by the Discord bot or the Telegram bot front ends.

Compared to the Master Log function, the Front End Log function prints one extra piece of information: the LoRA applied to the image generation.

Image Metadata

All images generated or manipulated with APW include metadata about the prompts and the generation parameters used by ComfyUI. That metadata should be readable by A1111 WebUI, Vladmandic SD.Next, SD Prompt Reader, and other applications.

Alessandro recommends XnView MP to read the metadata of the images generated with APW.
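
If you prefer to inspect the metadata programmatically, a few lines of Python are enough. This is a minimal sketch, assuming a PNG generated by APW sits in the working directory; ComfyUI embeds the prompt and the workflow as PNG text chunks:

    from PIL import Image  # pip install pillow

    img = Image.open("final_image.png")  # hypothetical APW output file
    # ComfyUI stores its data as PNG text chunks, exposed via img.info;
    # the "prompt" and "workflow" entries contain JSON payloads.
    for key, value in img.info.items():
        print(key, str(value)[:120])  # truncate the long JSON payloads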

Notice that the following APW outputs don’t embed metadata:

  • The copy of the final generated image in JPEG format.
  • The generated images served by Discord and Telegram bots.
  • Generated videos.

Notifications

APW alerts you when a job in the queue is completed with both a sound and a browser notification. You can disable either by simply muting the corresponding nodes in the Image Saver function.

You can also configure an extra node to send notifications to your personal Discord server via webhooks, or use a webhook provided by automation services like Zapier to send notifications to your inbox, phone, etc.
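
As a reference for what such a webhook call involves, here is a minimal sketch, assuming you have created a webhook in your Discord server settings; the URL below is a placeholder you must replace:

    import requests

    # Placeholder: paste the webhook URL from your Discord server settings.
    WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"

    def notify(message: str) -> None:
        # Discord webhooks accept a JSON payload with a "content" field.
        response = requests.post(WEBHOOK_URL, json={"content": message}, timeout=10)
        response.raise_for_status()

    notify("APW: your queued job is complete.")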

Notice that the Discord notification of a job completion has nothing to do with the Discord bot front end. The two are completely separate features.

Front Ends

If using the default ComfyUI interface is too complex for your users, APW allows you to enable three alternative simplified front ends:

  • a Web front end
  • a Discord bot
  • a Telegram bot

The Web front end is a very simple web interface that allows your users to type their prompt on a webpage served by the ComfyUI machine on a port of your choice.

As long as you have expertise in web design and you can write HTML, CSS, and JS, you can customize the Web Front End to your liking.
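
For context, any front end, including a customized one, ultimately submits a workflow to ComfyUI’s HTTP API. The following is a minimal sketch, assuming a local ComfyUI instance on the default port 8188 and a workflow previously exported in API format as workflow_api.json (a hypothetical filename):

    import json
    import requests

    COMFYUI_URL = "http://127.0.0.1:8188"

    # Load a workflow exported from ComfyUI in API format.
    with open("workflow_api.json") as f:
        workflow = json.load(f)

    # Queue it; ComfyUI returns a prompt_id you can later poll via /history.
    response = requests.post(f"{COMFYUI_URL}/prompt", json={"prompt": workflow}, timeout=10)
    response.raise_for_status()
    print(response.json()["prompt_id"])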

Patreon and Ko-fi members who support the APW project by joining the Early Access program have access to a special version of the Web Front End that includes a number of additional features, like the ability to upload images and videos, and to download the generated content:

The Discord bot and the Telegram bot are more advanced front ends compared to the default web interface. They can:

  • Accept text prompts from users and return both the generated image and the full list of parameters used to generate it.
  • Accept user requests for architecture, advertising, fashion, or game asset image types.
  • Accept user requests for photographic, artistic, or experimental image style.
  • Accept user requests for landscape, cinema, portrait, or square image format, as well as custom sizes.
  • Send a helpful message to the user with the list of commands that can be used.

You can reconfigure APW to customize these bots in a number of ways:

  • You can customize a series of sentences that the bots return to the user to acknowledge the generation request.
  • You can customize the prompt append associated with each image type.
  • You can customize the LoRAs associated with each image style.
  • You can customize the dimensions associated with each image format.
  • For the Telegram bot, you can specify a list of Chat IDs where the bot is allowed to serve images.

The creation of the Discord bot and the Telegram bot is a complex process that goes beyond the purposes of this documentation. However, the APW Discord server, accessible via the Early Access program, has a number of guides to help you set up these bots.

Installation

Required Custom Nodes

AP Workflows depend on multiple custom nodes to work. You must download and install ComfyUI Manager, and then install the required custom node suites to be sure you can run this workflow.

Instructions:

  1. Install ComfyUI in a new folder to create a clean, new environment (a Python 3.10 venv is recommended).
  2. Install ComfyUI Manager.
  3. Shut down ComfyUI.
  4. Download the snapshot.
  5. Move/copy the snapshot to the /ComfyUI/custom_nodes/ComfyUI-Manager/snapshots folder.
  6. Restart ComfyUI.
  7. Open ComfyUI Manager and then the new Snapshot Manager.
  8. Restore the AP Workflows Nodes Snapshot.
  9. Restart ComfyUI.
WARNING: After installing the required custom nodes, you might see the following error:

AttributeError: module 'cv2.gapi.wip.draw' has no attribute 'Text'

after which ComfyUI will fail to import multiple custom node suites at startup.

If so, you must perform the following steps:

  1. Terminate ComfyUI.
  2. Manually enter its virtual environment from the ComfyUI installation folder by typing: source venv/bin/activate
  3. Type: pip uninstall -y opencv-python opencv-contrib-python opencv-python-headless
  4. Type: pip install opencv-python==4.7.0.72
  5. Restart ComfyUI.

Required AI Models

Many nodes used throughout APW require specific AI models to perform their task. While some nodes automatically download the required models, others require you to download them manually.

As of today, there’s no easy way to export the full list of models Alessandro is using in his ComfyUI environment.

The best way to know which models you need to download is by opening ComfyUI Manager and proceeding to the Install Models section. There you’ll find a list of all the models each node requires or recommends downloading.

WARNING: Even if you already have all the AI models necessary to run APW installed in your system, you still need to remap them in each node of the workflow.

Alessandro’s paths don’t necessarily match your paths, and ComfyUI doesn’t automatically remap the AI models referenced by APW to the AI models in your paths.

Additionally, in some cases, if ComfyUI cannot find the AI model that a node requires, it might automatically reassign another model to a certain node. For example, this happens with ControlNet-related nodes.

Most errors encountered by APW users can be solved by carefully reviewing the image of the workflow on this page, and the manual remapping of the AI models in the nodes.

ComfyUI Front End Compatibility

The ComfyUI Front End (not to be confused with the APW Front Ends) is going through a phase of intense development, transforming in major ways to offer all workflow designers an exceptional user experience.

Because of this, if you load the nightly builds of the ComfyUI Front End (via the --front-end-version Comfy-Org/ComfyUI_frontend@latest parameter), you might experience frustrating and time-consuming bugs.

This is especially true with APW, which depends on dozens of custom node suites. Support for the latest and greatest features of the ComfyUI Front End is still uneven across the board.

The latest version of the ComfyUI Front End known to work in a stable way with APW 12.0 is 1.9.17. You can load it with the following parameter in your ComfyUI launch script:

--front-end-version Comfy-Org/ComfyUI_frontend@1.9.17

If you have experienced issues with broken node links, strange empty spaces, or misaligned nodes, do the following:

  1. Terminate your ComfyUI instance.
  2. Update your ComfyUI launch script (if you have one) to add the parameter indicated above.
  3. Relaunch ComfyUI.
  4. Ignore any error message you might still see.
  5. Load the default ComfyUI workflow.
  6. Load APW 12.0.

Node XYZ Failed To Install Or Import

Some custom node suites needed by APW can be challenging to install for the least technical users.

If you can’t install or import a custom node suite necessary to run APW, you might be trying to use APW in a pre-existing ComfyUI environment that you installed a loooooooooong time ago.

If you have a similar problem, be sure to:

  1. Have all your packages up to date in your Python virtual environment for ComfyUI.

    To do so:

    1. Terminate ComfyUI.
    2. Manually activate its Python virtual environment with source venv/bin/activate.
    3. Run pip install -U pip to upgrade pip.
    4. Run pip install -U setuptools to upgrade setuptools.
    5. Run pip install -U wheel to upgrade wheel.
    6. Run pip install -U -r requirements.txt to upgrade all the packages in the virtual environment.
  2. Check the installation instructions of the custom node suite you are having trouble with.

It’s also possible that you didn’t follow all the instructions provided by the custom node authors.

If you have installed ComfyUI in a new environment and you still fail to install or import a custom node suite, open an issue on the GitHub repository of the author.

Setup For Prompt Enrichment With LM Studio

APW allows you to enrich your positive prompt with additional text generated by a locally-installed open access large language model like Meta LLaMA.

APW supports this feature through the integration with LM Studio and other AI systems like Ollama.

Any model supported by LM Studio can be used by APW, including all models at the top of the HuggingFace LLM Leaderboard.

Guidance on how to install and configure LM Studio is beyond the scope of this document and you should refer to the product documentation for more information.

Once LM Studio is installed and configured, you must load the LLM of choice, assign to it the appropriate preset, and activate the Local Inference Server.

Alessandro usually works with LLaMA 3 fine-tuned models and the LLaMA 3 preset (which LM Studio automatically downloads).

LM Studio Local Inference Server

WARNING: Assigning the wrong preset to an LLM will result in the Prompt Enrichment function not working correctly.
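
To confirm that the Local Inference Server is reachable before enabling the Prompt Enricher function, you can send it a test request. This is a minimal sketch, assuming LM Studio’s default port 1234 and its OpenAI-compatible chat completions endpoint; the system prompt below is a simplified stand-in for the ones shipped with APW:

    import requests

    response = requests.post(
        "http://localhost:1234/v1/chat/completions",
        json={
            "messages": [
                {"role": "system", "content": "Enrich the user's image prompt with vivid, concrete details."},
                {"role": "user", "content": "a portrait of Superman at dawn"},
            ],
            "temperature": 0.7,
        },
        timeout=120,
    )
    # Print the enriched prompt returned by the locally served LLM.
    print(response.json()["choices"][0]["message"]["content"])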

Secure ComfyUI Connection With SSL

If you need to secure the connection to your ComfyUI instance with SSL, you have multiple options.

In production environments, you typically use signed SSL certificates served by a reverse proxy like Nginx or Traefik.

In small development environments, you might want to serve SSL certificates directly from the machine where ComfyUI is installed.
If so, you have multiple possibilities, described below.

If you already have signed SSL certificates:

  1. Use the following flags to start your ComfyUI instance:

    --tls-keyfile "path_to_your_folder_of_choice\comfyui_key.pem" --tls-certfile "path_to_your_folder_of_choice\comfyui_cert.pem"

If you don’t have signed SSL certificates, you want to test ComfyUI with a self-signed certificate, and you want to connect to it from the same machine where it’s running.

  1. Download the latest mkcert binary for your operating system, save it in an appropriate folder, and rename it as mkcert (purely for convenience).
  2. Open the terminal app you prefer and go to the folder where you stored the mkcert binary.
  3. Install the certificate authority by executing the following command:
    mkcert -install
  4. Generate a new certificate for your ComfyUI machine by executing the following command:
    mkcert localhost
  5. (purely for convenience) Rename the generated files from localhost.pem to comfyui_cert.pem and from localhost-key.pem to comfyui_key.pem
  6. Move comfyui_cert.pem and comfyui_key.pem to a folder where you want to store the certificate in a permanent way. For example: C:\Certificates\
  7. Use the following flags to start your ComfyUI instance:

    --tls-keyfile "C:\Certificates\comfyui_key.pem" --tls-certfile "C:\Certificates\comfyui_cert.pem"

If you don’t have signed SSL certificates, you want to test ComfyUI with a self-signed certificate, and you want to connect to it from another machine in the local network.

On the Windows/Linux/macOS machine where ComfyUI is installed:

  1. Download the latest mkcert binary for your operating system, save it in an appropriate folder, and rename it as mkcert (purely for convenience).
  2. Open the terminal app you prefer and go to the folder where you stored the mkcert binary.
  3. Install the certificate authority by executing the following command:
    mkcert -install
  4. Generate a new certificate for your ComfyUI machine by executing the following command:

    mkcert ip_address_of_your_comfyui_machine
    (for example, for 192.168.1.1, run mkcert 192.168.1.1)
  5. (purely for convenience) Rename the generated files from 192.168.1.1.pem to comfyui_cert.pem and from 192.168.1.1-key.pem to comfyui_key.pem
  6. Move comfyui_cert.pem and comfyui_key.pem to a folder where you want to store the certificate in a permanent way. For example: C:\Certificates\
  7. Use the following flags to start your ComfyUI instance:

    --tls-keyfile "C:\Certificates\comfyui_key.pem" --tls-certfile "C:\Certificates\comfyui_cert.pem"
  8. Find the mkcert rootCA.pem file created at step #3.

    For example, in Windows 11, the file is located in C:\Users\your_username\AppData\Local\mkcert. You can print the exact location by running mkcert -CAROOT.
  9. Copy rootCA.pem onto a USB key and transfer it to the machine that you want to use to connect to ComfyUI.

On the macOS machine that you want to use to connect to ComfyUI:

  1. Open Keychain Access
  2. Drag and drop the rootCA.pem file from the USB key into the System keychain.
  3. Enter your administrator password if prompted.
  4. Find the rootCA.pem certificate in the System keychain, double-click it, and expand the Trust section.
  5. Set When using this certificate to Always Trust.
  6. Close the certificate window and enter your administrator password again if prompted.
  7. If you had a ComfyUI tab already open in your browser, close it and connect to ComfyUI again.

On the Windows machine that you want to use to connect to ComfyUI:

  1. Open the Microsoft Management Console by pressing Win + R and typing mmc.
  2. In the Microsoft Management Console, go to File > Add/Remove Snap-in.
  3. In the Add or Remove Snap-ins window, select Certificates from the list and click Add.
  4. Choose Computer account when prompted for which type of account to manage.
  5. Select Local computer (the computer this console is running on).
  6. Click Finish, then OK to close the Add or Remove Snap-ins window.
  7. In the Microsoft Management Console, expand Certificates (Local Computer) in the left-hand pane.
  8. Expand Trusted Root Certification Authorities.
  9. Right-click on Certificates under Trusted Root Certification Authorities and select All Tasks > Import.
  10. In the Certificate Import Wizard, click Next.
  11. Click Browse and navigate to the USB key where you saved the rootCA.pem file.
  12. Change the file type to All Files (*.*) to see the rootCA.pem file.
  13. Select the rootCA.pem file and click Open, then Next
  14. Ensure Place all certificates in the following store is selected and Trusted Root Certification Authorities is chosen as the store.
  15. Click Next, then Finish to complete the import process.
  16. To verify the certificate is installed, go back to the Microsoft Management Console. Expand Trusted Root Certification Authorities and click on Certificates. Look for your rootCA certificate in the list. It should now be trusted by the system.
  17. Restart the Windows machine if necessary.

On the Ubuntu Linux machine that you want to use to connect to ComfyUI:

  1. Copy the rootCA.pem file from the USB key to the /usr/local/share/ca-certificates directory.
  2. Update the CA store by running the following command in a terminal window:
    sudo update-ca-certificates
  3. Verify the installation by running the following command in a terminal window:
    sudo ls /etc/ssl/certs/ | grep rootCA.pem
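
Whichever client OS you use, you can verify the trust chain from the client machine with a short script. This is a minimal sketch, assuming ComfyUI serves HTTPS at 192.168.1.1 on its default port 8188 and that a copy of the mkcert rootCA.pem sits in the working directory:

    import requests

    # verify= accepts a path to a CA bundle, so the self-signed chain can
    # be validated even without installing the CA system-wide.
    response = requests.get("https://192.168.1.1:8188", verify="rootCA.pem", timeout=10)
    # Reaching this line means the TLS handshake succeeded; an untrusted
    # certificate would have raised requests.exceptions.SSLError instead.
    print(response.status_code)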

FAQ

Are you trying to replicate A1111 WebUI, Vladmandic SD.Next, or Invoke AI?

Alessandro never intended to recreate those UIs in ComfyUI and has no plan to do so in the future.
While APW enables some of the capabilities offered by those UIs, its philosophy and goals are very different. Read below.

Why are you using ComfyUI instead of easier-to-maintain solutions like A1111 WebUI, Vladmandic SD.Next, or Invoke AI?

  1. Alessandro wanted to learn, and help others learn, the SDXL architecture, understanding what goes where and how different building blocks influence the generation. A1111 WebUI and similar tools make it harder. With ComfyUI, you know exactly what’s happening. So APW is, first and foremost, a learning tool.
  2. While Alessandro considers A1111 WebUI and similar tools invaluable, and he’s grateful for their gift to the community, he finds their interfaces chaotic. He wanted to explore alternative design layouts. Many people might argue that ComfyUI is even more chaotic than A1111 WebUI or that APW, in particular, is more chaotic than A1111 WebUI.
    Ultimately, different brains process information in different ways, and some people prefer one approach over the other. Some people find node systems easier to work with than more standard UIs.
  3. Alessandro served in the enterprise IT industry for over two decades. ComfyUI allowed him to demonstrate how AI models paired with automation can be used to create complex image generation pipelines useful in certain industrial applications. This is not currently possible with A1111 WebUI and similar solutions.
  4. The most sophisticated AI systems we have today (Midjourney, ChatGPT, etc.) don’t generate images or text by simply processing the user prompt. They depend on complex pipelines and/or Mixture of Experts (MoE) which enrich the user prompt and process it in many different ways. Alessandro’s long-term goal is to use ComfyUI to create multi-modal pipelines that can produce digital outputs as good as the ones from the AI systems mentioned above, without human intervention. APW 5.0 was the first step in that direction. The goal is not attainable with A1111 WebUI and similar solutions as they are implemented today.

How can I support or sponsor this project?

Show your support!

If you are a company interested in sponsoring APW, reach out.

Special Thanks

ComfyUI Node Developers

APW wouldn’t exist without the dozens of custom nodes created by very generous members of the AI community. Please consider funding these talented developers.

In particular, special thanks to:

@rgthree:

  • His Reroute nodes are the most flexible reroute nodes you can find among custom node suites.
  • His Fast Groups Muter/Bypasser nodes offer the ultimate flexibility in creating customized control switches.
  • His Power LoRA Loader nodes are critical to navigate the ever-growing catalog of LoRAs for SD and FLUX.
  • His Image Comparer node is an exceptional help in inspecting the image generation.

@kijai:

Kijai’s nodes used in APW are too many to be listed. Among them:

  • His Set and Get nodes allow the removal of most wires in the workflow without sacrificing the capability to understand the flow of information across the workflow.
  • His CCSR and SUPIR nodes make exceptional upscaling possible.
  • His Florence2Run node powers a key part of the Caption Generator function of the workflow.
  • His Hunyuan Video and CogVideoX nodes are the main engines of the Video Generator pipeline of the workflow.

@Kosinkadink:

  • His Advanced ControlNet nodes are the must-have implementation of ControlNet for all APW image generation models.

@matan1905:

  • His ComfyUI Serving Toolkit node suite powers all APW front ends.

@cubiq:

  • His implementation of the IP Adapter technique allows all of us to do exceptional things with Stable Diffusion.

@ltdrdata:

  • The nodes in his Impact Pack power many Image Manipulators functions in the workflow.
  • His ComfyUI Manager is critical to manage the myriad of package and model dependencies in the workflow.

@glibsonoran:

  • He evolved his Plush custom node suite to support the many needs of APW and now his nodes power the Prompt Enricher and the Caption Generator functions.

@crystool:

  • His Load Image with Metadata node provides the critical feature in the Image/Video Uploader function which helps us all stay sane when dealing with a large set of images.

@huchenlei:

  • His improvements to the ComfyUI user interface make APW 100x easier and more fun to use.

Hall of Fame

@receyuki:

  • His SD Parameter Generator node supported the many needs of APW for a long time, and he worked above and beyond to deliver the ultimate control panel for complex ComfyUI workflows.

@jags111:

  • His fork of LucianoCirino’s nodes allowed APW to offer a great XY Plot function for a long time.

@LucianoCirino:

  • His XY Plot node is the very reason why Alessandro started working on this workflow.

Thanks to all of you, and to all other custom node creators for their help in debugging and enhancing their great nodes.