
Ultimate Guide to ComfyUI for Beginners (2025)

Complete beginner's guide to ComfyUI. Installation, first workflows, essential nodes, and everything you need to start creating AI images and videos.


ComfyUI is the most powerful interface for AI image and video generation. It's also intimidating for beginners. Those node graphs look like engineering diagrams. This guide will take you from complete beginner to confidently building your own workflows.

Quick Answer: ComfyUI is a node-based interface for running AI image and video generation models. Unlike simple prompt boxes, ComfyUI shows you exactly how generation works by connecting nodes in a visual workflow. While the learning curve is steeper than alternatives, the control and flexibility are unmatched. Most users become productive within a few hours.

Key Takeaways:
  • ComfyUI is a visual, node-based workflow editor for AI generation
  • It supports SDXL, Flux, LTX-2, Wan, and nearly every major model
  • Custom nodes extend functionality massively
  • Learning curve is 2-4 hours for basics, weeks for mastery
  • Free and open-source with active community support

What is ComfyUI?

ComfyUI is a visual programming interface for AI generation. Instead of a simple text box where you type prompts, ComfyUI shows you the entire generation process as connected nodes.

Why Nodes?

Each node does one thing:

  • Load Model node: Loads your AI model
  • Text node: Holds your prompt
  • Sampler node: Does the actual generation
  • Save node: Saves your image

By connecting these nodes, you build a workflow. This gives you complete control over every step of generation.

ComfyUI vs Alternatives

| Feature        | ComfyUI    | AUTOMATIC1111 | Midjourney   |
|----------------|------------|---------------|--------------|
| Interface      | Node-based | Tab-based     | Chat/Discord |
| Control        | Maximum    | High          | Limited      |
| Learning curve | Steepest   | Medium        | Easiest      |
| Flexibility    | Highest    | High          | Low          |
| Video support  | Excellent  | Limited       | None         |
| Cost           | Free       | Free          | Subscription |

For a detailed comparison, see our ComfyUI vs AUTOMATIC1111 guide.

Installation

Requirements

Minimum:

  • NVIDIA GPU with 6GB+ VRAM
  • 16GB RAM
  • 50GB free storage
  • Windows 10/11, Linux, or macOS

Recommended:

  • NVIDIA GPU with 12GB+ VRAM
  • 32GB RAM
  • 200GB+ SSD storage

Windows Installation (Easiest)

Step 1: Download the portable package

  1. Go to the ComfyUI GitHub releases
  2. Download the Windows portable package
  3. Extract to a folder (avoid spaces in path)

Step 2: Run ComfyUI

  1. Double-click run_nvidia_gpu.bat
  2. Wait for the server to start
  3. Open browser to http://127.0.0.1:8188

That's it. ComfyUI is running.

Step 3: Download a model

  1. Download an SDXL model from Civitai or Hugging Face
  2. Place in ComfyUI/models/checkpoints/
  3. Refresh ComfyUI (press F5)

Linux/Mac Installation

Using git:

git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
pip install -r requirements.txt
python main.py

Using conda:

git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
conda create -n comfyui python=3.10
conda activate comfyui
pip install torch torchvision torchaudio
pip install -r requirements.txt
python main.py
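Before launching, a quick preflight check of the Python version and free disk space can save a failed first run. A minimal sketch using only the standard library; the thresholds mirror the requirements listed above and should be treated as rough assumptions, not hard limits:

```python
import shutil
import sys

def preflight(min_python=(3, 9), min_free_gb=50, path="."):
    """Return a list of warnings; an empty list means the basics look OK."""
    warnings = []
    if sys.version_info < min_python:
        warnings.append(
            f"Python {sys.version_info.major}.{sys.version_info.minor} "
            f"is older than the assumed minimum {min_python}")
    # Models are large; check there is room for at least a few checkpoints.
    free_gb = shutil.disk_usage(path).free / 1e9
    if free_gb < min_free_gb:
        warnings.append(
            f"Only {free_gb:.0f} GB free; plan for roughly {min_free_gb} GB")
    return warnings

for w in preflight():
    print("WARNING:", w)
```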

Model File Locations

Place your model files in these directories:

| Model Type                | Location               |
|---------------------------|------------------------|
| Checkpoints (SDXL, SD1.5) | models/checkpoints/    |
| LoRAs                     | models/loras/          |
| VAE                       | models/vae/            |
| ControlNet                | models/controlnet/     |
| Embeddings                | models/embeddings/     |
| CLIP                      | models/clip/           |
| Upscalers                 | models/upscale_models/ |
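The table above maps to a simple directory tree under your ComfyUI install. A small sketch that creates the standard subfolders (harmless if they already exist; the root path is whatever you extracted or cloned to):

```python
from pathlib import Path

MODEL_DIRS = [
    "checkpoints", "loras", "vae", "controlnet",
    "embeddings", "clip", "upscale_models",
]

def ensure_model_dirs(comfy_root="ComfyUI"):
    """Create the standard model subfolders; returns the verified paths."""
    root = Path(comfy_root) / "models"
    paths = []
    for name in MODEL_DIRS:
        p = root / name
        p.mkdir(parents=True, exist_ok=True)  # no-op if it already exists
        paths.append(p)
    return paths
```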

Understanding the Interface

Main Areas

  • Canvas (center): Where your node workflow lives
  • Menu (right-click): Access all available nodes
  • Queue (bottom): Shows pending and completed generations
  • Settings (gear icon): Configuration options

Navigation:

  • Pan: Middle-mouse drag or spacebar + drag
  • Zoom: Scroll wheel
  • Select: Left-click on nodes
  • Multi-select: Shift + click or box selection
  • Delete: Select node(s), press Delete
  • Undo: Ctrl+Z

Node Basics

Anatomy of a node:

  • Title bar: Node name
  • Inputs (left): Colored dots for connections
  • Outputs (right): Colored dots for outputs
  • Parameters: Adjustable settings within the node

Connection colors (the defaults; exact shades vary by theme):

  • Purple: Model
  • Yellow: CLIP
  • Orange: Conditioning/prompt
  • Pink: Latent
  • Red: VAE
  • Blue: Image
  • Green: Mask

Your First Workflow

Loading the Default Workflow

When you open ComfyUI, you see a default workflow. Let's understand it:

Nodes present:

  1. Load Checkpoint: Loads your AI model
  2. CLIP Text Encode (x2): Positive and negative prompts
  3. Empty Latent Image: Sets output resolution
  4. KSampler: Does the generation
  5. VAE Decode: Converts to viewable image
  6. Save Image: Saves the result
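Under the hood, ComfyUI serializes this graph to JSON: each node gets an id, a class_type, and inputs that either hold literal values or reference another node's output as a [node_id, output_index] pair. A trimmed sketch of the default text-to-image graph in that API format (the ids, prompt text, and checkpoint filename are illustrative):

```python
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sdxl_base.safetensors"}},
    # The checkpoint loader's outputs are MODEL (0), CLIP (1), VAE (2).
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "a beautiful sunset over the ocean"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "blurry, low quality"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}

# Every [node_id, output_index] reference must point at an existing node.
links = [v for node in workflow.values()
         for v in node["inputs"].values() if isinstance(v, list)]
assert all(src in workflow for src, _ in links)
```

You can see the exact format for your own graphs via the "Save (API Format)" export in the ComfyUI menu.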

Running Your First Generation

Step 1: In Load Checkpoint, select your downloaded model

Step 2: In the positive CLIP Text Encode, type your prompt:

a beautiful sunset over the ocean, vibrant colors, photorealistic

Step 3: In the negative CLIP Text Encode, type:

blurry, low quality, distorted

Step 4: In Empty Latent Image, set:

  • Width: 1024
  • Height: 1024

Step 5: Click "Queue Prompt" (or press Ctrl+Enter)

Step 6: Watch the progress bar. When done, your image appears in the Save Image node.

Congratulations. You just generated your first ComfyUI image.
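The "Queue Prompt" button wraps ComfyUI's local HTTP API: the browser POSTs the graph to the /prompt endpoint on port 8188. If you later want to script generations, a minimal standard-library sketch (this assumes the server is running locally and that the graph dict follows the API format from the "Save (API Format)" export):

```python
import json
import urllib.request

def build_request(graph, server="http://127.0.0.1:8188"):
    """Wrap a workflow graph as a queue-prompt request (not yet sent)."""
    payload = json.dumps({"prompt": graph}).encode("utf-8")
    return urllib.request.Request(
        f"{server}/prompt", data=payload,
        headers={"Content-Type": "application/json"})

def queue_prompt(graph):
    """Send the graph to a running ComfyUI server; the response carries a prompt_id."""
    with urllib.request.urlopen(build_request(graph)) as resp:
        return json.load(resp)

# Build (but don't send) a request for a trivial one-node graph:
req = build_request({"9": {"class_type": "SaveImage", "inputs": {}}})
print(req.full_url)  # http://127.0.0.1:8188/prompt
```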

Essential Nodes Explained

Model Loading Nodes

Load Checkpoint:

  • Loads SDXL, SD1.5, or compatible models
  • Select from dropdown of files in checkpoints folder

Load LoRA:

  • Adds style/subject LoRAs to your model
  • Connect between checkpoint and CLIP encoders
  • strength_model: How much LoRA affects generation
  • strength_clip: How much LoRA affects text understanding

Text and Prompt Nodes

CLIP Text Encode:

  • Converts text to format the model understands
  • Connect to CLIP output from checkpoint
  • Positive: What you want
  • Negative: What you don't want

CLIP Text Encode (SDXL):

  • SDXL-specific with two text inputs
  • First: Main description
  • Second: Style/quality terms

Image Dimension Nodes

Empty Latent Image:

  • Sets output size
  • Common sizes: 1024x1024, 1152x896, 896x1152
  • Batch size: Generate multiple images at once
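The listed sizes all keep roughly the same pixel count as 1024x1024 while staying divisible by 64, which SDXL expects. A small helper that snaps an arbitrary aspect ratio to a valid size (the one-megapixel target and the 64-pixel granularity are the usual SDXL conventions, stated here as assumptions):

```python
def sdxl_size(aspect_w, aspect_h, target_pixels=1024 * 1024, multiple=64):
    """Snap an aspect ratio to a width/height near target_pixels, divisible by `multiple`."""
    ratio = aspect_w / aspect_h
    height = (target_pixels / ratio) ** 0.5
    width = height * ratio
    snap = lambda x: max(multiple, round(x / multiple) * multiple)
    return snap(width), snap(height)

print(sdxl_size(1, 1))  # (1024, 1024)
print(sdxl_size(9, 7))  # (1152, 896)
```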

Load Image:

  • Brings in existing images
  • Used for img2img, ControlNet, reference

Sampling Nodes

KSampler: The core generation node.

Parameters:

  • seed: Random seed (same seed = same image)
  • steps: More = better quality, slower
  • cfg: How closely to follow prompt (7-8 typical)
  • sampler_name: Algorithm (euler, dpm++, etc.)
  • scheduler: Step scheduling (normal, karras, etc.)
  • denoise: 1.0 for text-to-image, lower for img2img

Output Nodes

VAE Decode:

  • Converts latent to viewable image
  • Usually uses checkpoint's built-in VAE

Save Image:

  • Saves to output folder
  • Customize filename prefix

Building Custom Workflows

Adding Nodes

Method 1: Right-click menu

  1. Right-click on canvas
  2. Navigate categories
  3. Click to add node

Method 2: Search

  1. Double-click on canvas
  2. Type node name
  3. Select from results

Connecting Nodes

  1. Click and drag from output (right side) to input (left side)
  2. Colors must match (or be compatible)
  3. Release to connect

Reroute and Organization

Reroute node:

  • Adds control points to wires
  • Double-click wire to add reroute
  • Helps organize complex workflows

Groups:

  • Select nodes, right-click, "Group"
  • Add labels to workflow sections
  • Color-code for clarity

For organization tips, see our workflow organization guide.

Installing Custom Nodes

ComfyUI Manager (Essential)

ComfyUI Manager lets you install nodes with one click.


Installation:

cd ComfyUI/custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager.git

Restart ComfyUI.

Usage:

  1. Click "Manager" in menu
  2. Search for nodes
  3. Click Install
  4. Restart ComfyUI

Essential Custom Nodes to Install

Start with these:

  1. Impact Pack - Face fixing, segmentation
  2. WAS Node Suite - Utilities
  3. ControlNet Preprocessors - For ControlNet workflows
  4. IPAdapter Plus - Face/style consistency
  5. VideoHelperSuite - Video support

For the complete list, see our best custom nodes guide.

Common Workflows

Text-to-Image (Basic)

Already covered above. The default workflow.

Image-to-Image

Modifications to basic workflow:

  1. Add "Load Image" node
  2. Add "VAE Encode" node
  3. Connect: Load Image → VAE Encode → KSampler (latent input)
  4. Set KSampler denoise to 0.5-0.8
  5. Lower denoise = more like original image
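Denoise controls how much of the original image survives because the sampler effectively skips the early noising steps. A rough sketch of that relationship (this is a simplification of what KSampler actually does, not its code):

```python
def img2img_schedule(steps, denoise):
    """Approximate which sampling steps actually run for a given denoise value."""
    skipped = round(steps * (1 - denoise))
    return {"skipped_steps": skipped, "active_steps": steps - skipped}

print(img2img_schedule(25, 1.0))  # text-to-image: all 25 steps run
print(img2img_schedule(25, 0.5))  # img2img: roughly half the steps run
```

This is why denoise 0.5-0.8 preserves composition while still changing details: only the later, finer-grained steps are re-run.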

With LoRA

Modifications:

  1. Add "Load LoRA" node
  2. Connect: Checkpoint → LoRA → CLIP Text Encode
  3. Also connect model output through LoRA
  4. Adjust strength (0.5-1.0 typical)

With ControlNet

Additional nodes needed:

  1. Load ControlNet Model
  2. Preprocessor (Canny, Depth, OpenPose, etc.)
  3. Apply ControlNet

Connection flow:

  1. Load reference image
  2. Preprocess (Canny edges, depth map, etc.)
  3. Apply ControlNet to conditioning
  4. Continue to KSampler

For ControlNet details, see our ControlNet guide.

Upscaling Workflow

Simple upscale:

  1. Load generated or external image
  2. Add "Upscale Image" node
  3. Select upscale model
  4. Save result

AI-enhanced upscale:

  1. Generate at normal size
  2. Upscale image 2x
  3. Run through KSampler at low denoise (0.3-0.5)
  4. Result: Larger image with added detail
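The two-pass approach above can be sketched as a parameter plan: generate at full denoise, upscale, then resample gently. The numbers mirror the ranges given in the steps and are starting points, not fixed rules:

```python
def hires_plan(width, height, upscale=2.0, refine_denoise=0.4, refine_steps=20):
    """Settings for a simple two-pass (generate -> upscale -> refine) run."""
    return {
        "pass1": {"width": width, "height": height, "denoise": 1.0},
        "upscale": {"width": int(width * upscale), "height": int(height * upscale)},
        "pass2": {"denoise": refine_denoise, "steps": refine_steps},
    }

plan = hires_plan(1024, 1024)
print(plan["upscale"])  # {'width': 2048, 'height': 2048}
```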

Video Generation in ComfyUI

Setting Up for Video

Required nodes:

  • ComfyUI-VideoHelperSuite
  • ComfyUI-LTXVideo (for LTX-2)
  • ComfyUI-WanVideoWrapper (for Wan)

Required models:

  • Download LTX-2 or Wan model files
  • Place in appropriate folders

Basic Video Workflow

  1. Load video model
  2. Add text prompt
  3. Set frame count and resolution
  4. Generate
  5. Use Video Combine node to create video file
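Frame count and frame rate together determine clip length, and some video models (LTX among them) expect frame counts of the form 8n+1. A hedged helper for picking a valid count (the 8n+1 constraint is model-specific; check your model's documentation):

```python
def video_frames(seconds, fps=24, stride=8):
    """Nearest valid frame count (stride*n + 1) for the requested duration."""
    raw = seconds * fps
    n = max(1, round((raw - 1) / stride))
    frames = stride * n + 1
    return frames, frames / fps  # (frame count, actual duration in seconds)

print(video_frames(4))  # 97 frames, about 4.04 s at 24 fps
```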

For detailed video workflows, see our LTX-2 guide and Wan guide.

Troubleshooting Common Issues

"Out of Memory" (OOM)

Causes:

  • Model too large for VRAM
  • Resolution too high
  • Too many nodes/models loaded

Solutions:

  • Reduce resolution
  • Enable model offloading in settings
  • Use GGUF quantized models
  • Close other GPU applications

Slow Generation

Causes:

  • High step count
  • Large resolution
  • Slow sampler choice

Solutions:

  • Reduce steps (25-30 usually sufficient)
  • Use faster samplers (euler_ancestral, dpm++)
  • Generate at lower resolution, upscale after

Black or Corrupted Output

Causes:

  • VAE mismatch
  • Broken connections
  • Incompatible model/settings

Solutions:

  • Try explicit VAE (not baked-in)
  • Check all connections
  • Reset to default workflow and rebuild

Missing Nodes Error

When loading workflows:

  1. Note which nodes are missing
  2. Install via ComfyUI Manager
  3. Restart and reload workflow

Node Not Found in Menu

Causes:

  • Node not installed
  • Installation error
  • Conflicting nodes

Solutions:

  • Check Manager for installation status
  • Look at terminal for errors
  • Try reinstalling node pack

Tips for Success

Workflow Management

  1. Save frequently: Ctrl+S to save workflows
  2. Name descriptively: "SDXL_portrait_workflow_v2.json"
  3. Create templates: Save common starting points
  4. Version your workflows: Keep old versions
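Versioning can be as simple as keeping timestamped copies next to the working file. A standard-library sketch (the folder name and filename pattern are illustrative):

```python
import shutil
import time
from pathlib import Path

def version_workflow(path, backup_dir="workflow_versions"):
    """Copy a workflow JSON to a timestamped backup, e.g. portrait_20250101-120000.json."""
    src = Path(path)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"{src.stem}_{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 preserves the file's timestamps
    return dest
```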

Performance Optimization

  1. Use appropriate resolution: Don't go bigger than needed
  2. Batch strategically: Generate multiple seeds at once
  3. Cache models: Keep frequently used models loaded
  4. SSD storage: Fast storage speeds workflow

Learning Progression

  • Week 1: Basic text-to-image, understand core nodes
  • Week 2: LoRAs, img2img, simple modifications
  • Week 3: ControlNet basics, custom node exploration
  • Month 1: Video generation, complex workflows
  • Ongoing: Advanced techniques, workflow optimization

Resources for Learning

Official Resources

  • ComfyUI GitHub: Documentation and updates
  • ComfyUI Examples: Built-in workflow examples

Community Resources

  • r/comfyui: Reddit community
  • ComfyUI Discord: Real-time help
  • OpenArt: Workflow sharing platform
  • Civitai: Models and workflows

Video Tutorials

YouTube is excellent for ComfyUI learning:

  • Search "ComfyUI beginner tutorial"
  • Follow along with hands-on examples
  • Watch workflow walkthroughs

For curated learning resources, see our AI learning resources guide.

Frequently Asked Questions

Is ComfyUI hard to learn?

The basics take 2-4 hours. Proficiency takes weeks. It's more complex than alternatives but the investment pays off in capability.

Do I need a powerful computer?

You need a dedicated GPU. Minimum 6GB VRAM, recommended 12GB+. AMD support exists but NVIDIA is more reliable.

Can I use ComfyUI commercially?

Yes. ComfyUI itself is open source. Check the license of specific models you use.

How is ComfyUI different from AUTOMATIC1111?

ComfyUI uses visual nodes, A1111 uses tabs and settings. ComfyUI is more flexible but has a steeper learning curve.

Does ComfyUI support video?

Yes, with excellent support for LTX-2, Wan, and other video models through custom nodes.

Where do I find workflows?

OpenArt, Civitai, Reddit, and Discord all have downloadable workflows.

Can I create my own custom nodes?

Yes, if you know Python. See the developer documentation.

Why choose ComfyUI over easier tools?

Maximum control, flexibility, and the ability to do things not possible elsewhere. Professional users typically prefer ComfyUI.

Wrapping Up

ComfyUI transforms AI generation from a black box into a visible, controllable process. Yes, the learning curve exists. But once you understand nodes and connections, you have capabilities that simpler tools can't match.

Getting started checklist:

  1. Install ComfyUI (use portable package on Windows)
  2. Download a model (SDXL recommended)
  3. Run the default workflow
  4. Install ComfyUI Manager
  5. Add essential custom nodes
  6. Start modifying and building

The ComfyUI community is helpful. When stuck, search Reddit, Discord, or YouTube. Chances are someone had the same question.

For those preferring cloud-based generation without local setup, Apatero.com offers professional tools. But for maximum control and capability, learning ComfyUI is worth the investment.

Quick Reference

Keyboard Shortcuts

| Shortcut     | Action          |
|--------------|-----------------|
| Ctrl+Enter   | Queue prompt    |
| Ctrl+S       | Save workflow   |
| Ctrl+O       | Open workflow   |
| Ctrl+Z       | Undo            |
| Delete       | Remove selected |
| Ctrl+G       | Group selected  |
| Double-click | Search nodes    |
| Space+drag   | Pan canvas      |

Common Sampler Settings

| Use Case     | Sampler         | Scheduler | Steps |
|--------------|-----------------|-----------|-------|
| Fast preview | euler_ancestral | normal    | 15    |
| Balanced     | dpm++ 2m        | karras    | 25    |
| High quality | dpm++ sde       | karras    | 35    |
| Consistency  | euler           | normal    | 30    |

For SDXL:

  • Resolution: 1024x1024
  • Steps: 25-30
  • CFG: 7
  • Sampler: dpm++ 2m
  • Scheduler: karras

Start here, then experiment based on results.
