Complete ComfyUI Workflow for Flux 2 Klein
Step-by-step guide to setting up and using Flux 2 Klein in ComfyUI. Official workflows, node configurations, and tips for optimal image generation.
ComfyUI has become the standard interface for running Flux models, and Black Forest Labs has embraced this by releasing official workflow templates for Flux 2 Klein. This guide walks you through everything from initial setup to advanced configurations that will have you generating images in minutes.
ComfyUI's node-based approach might seem intimidating at first, but it offers flexibility that simpler interfaces can't match. Once you understand the basic flow, you can customize Klein's behavior precisely to your needs.
Prerequisites
Before we start, ensure you have:
- ComfyUI installed (latest version recommended)
- Python 3.10+ and a CUDA-enabled PyTorch install
- 12GB+ VRAM GPU for Klein 4B (20GB+ for 9B)
- Sufficient storage (~15GB for model files)
If you don't have ComfyUI installed, clone the repository and follow the installation instructions on their GitHub page.
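To confirm your environment meets these requirements before downloading anything, a quick check from the Python environment ComfyUI runs in is enough. A minimal sketch, assuming PyTorch is already installed:

```python
# Quick environment check before downloading models.
# Assumes PyTorch is installed in the same environment ComfyUI uses.
import sys
import torch

print(f"Python: {sys.version.split()[0]}")              # want 3.10+
print(f"CUDA available: {torch.cuda.is_available()}")   # should be True, not CPU fallback

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / (1024 ** 3)
    print(f"GPU: {props.name}, VRAM: {vram_gb:.1f} GB")  # want 12 GB+ for Klein 4B, 20 GB+ for 9B
```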
Downloading Model Files
Flux 2 Klein requires three main components:
1. Main Model Weights
Download from Hugging Face:
For 4B model:
black-forest-labs/FLUX.2-klein-4B
For 9B model:
black-forest-labs/FLUX.2-klein-9B
Place the .safetensors file in:
ComfyUI/models/diffusion_models/
2. Text Encoder (T5)
Klein uses the T5 text encoder. Download:
google/t5-v1_1-xxl
Place in:
ComfyUI/models/text_encoders/
3. VAE
The VAE handles image encoding/decoding:
black-forest-labs/FLUX.1-dev (ae.safetensors)
Place in:
ComfyUI/models/vae/
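If you prefer scripting the downloads, huggingface_hub can place the files directly into those folders. A sketch, using the repository IDs listed above; the exact file names inside each repo may differ (adjust the patterns as needed), and some repos are gated, so you may need to run huggingface-cli login first:

```python
# Sketch: fetch the three components and drop them into the ComfyUI model folders.
# Repo IDs come from this guide; file-name patterns are assumptions.
from huggingface_hub import snapshot_download

COMFYUI = "ComfyUI"  # path to your ComfyUI checkout

snapshot_download(
    "black-forest-labs/FLUX.2-klein-4B",            # or FLUX.2-klein-9B
    local_dir=f"{COMFYUI}/models/diffusion_models",
    allow_patterns=["*.safetensors"],
)
snapshot_download(
    "google/t5-v1_1-xxl",
    local_dir=f"{COMFYUI}/models/text_encoders",
    allow_patterns=["*.safetensors"],
)
snapshot_download(
    "black-forest-labs/FLUX.1-dev",
    local_dir=f"{COMFYUI}/models/vae",
    allow_patterns=["ae.safetensors"],
)
```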
Basic Workflow Setup
Here's the minimal node setup for text-to-image generation:
Basic ComfyUI workflow for Flux 2 Klein text-to-image generation
Essential Nodes
1. Load Diffusion Model
- Select your Klein 4B or 9B model
- This is the core generation model
2. CLIP Loader
- Load the T5 text encoder
- Handles prompt interpretation
3. VAE Loader
- Load the Flux VAE
- Handles latent-to-image conversion
4. CLIP Text Encode
- Connect your prompt here
- Outputs conditioning for the sampler
5. Empty Latent Image
- Set your desired resolution (1024x1024 recommended)
- Batch size for multiple images
6. KSampler
- Steps: 4 (Klein is distilled for few-step generation)
- Sampler: euler
- Scheduler: normal
- CFG: 1.0-2.0 (Klein works with low CFG)
- Denoise: 1.0
7. VAE Decode
- Converts latent to viewable image
8. Save Image
- Outputs your final result
Connecting the Nodes
The basic flow is:
Prompt → CLIP Text Encode → KSampler
Model Loader → KSampler
Empty Latent → KSampler
KSampler → VAE Decode → Save Image
VAE Loader → VAE Decode
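If you prefer driving ComfyUI from a script, the same graph can be written in ComfyUI's API-format JSON and posted to the local /prompt endpoint. The sketch below is an assumption-heavy translation of the node list above: the node class names follow ComfyUI's standard Flux-style loaders (UNETLoader, CLIPLoader, VAELoader), and the model, encoder, and output file names are placeholders for whatever you actually downloaded. Defer to the official BFL workflow for the exact values.

```python
# Minimal Klein text-to-image graph in ComfyUI's API-format JSON, queued over HTTP.
# Node class names and file names are assumptions; the official workflow is the ground truth.
import requests

graph = {
    "1": {"class_type": "UNETLoader",                  # Load Diffusion Model
          "inputs": {"unet_name": "flux2-klein-4b.safetensors",   # placeholder file name
                     "weight_dtype": "default"}},
    "2": {"class_type": "CLIPLoader",                  # T5 text encoder
          "inputs": {"clip_name": "t5xxl.safetensors",            # placeholder file name
                     "type": "flux"}},                 # use the type value the official workflow specifies
    "3": {"class_type": "VAELoader",
          "inputs": {"vae_name": "ae.safetensors"}},
    "4": {"class_type": "CLIPTextEncode",              # positive prompt
          "inputs": {"clip": ["2", 0],
                     "text": "a lighthouse on a cliff at sunrise, soft fog"}},
    "5": {"class_type": "CLIPTextEncode",              # empty negative (CFG ~1 effectively ignores it)
          "inputs": {"clip": ["2", 0], "text": ""}},
    "6": {"class_type": "EmptyLatentImage",            # Empty Latent Image
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "7": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["4", 0], "negative": ["5", 0],
                     "latent_image": ["6", 0],
                     "seed": 42, "steps": 4, "cfg": 1.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["7", 0], "vae": ["3", 0]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "klein"}},
}

# ComfyUI listens on port 8188 by default.
requests.post("http://127.0.0.1:8188/prompt", json={"prompt": graph}).raise_for_status()
```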
Optimal Settings for Klein
Klein's distilled nature means different settings than typical Stable Diffusion workflows.
Sampler Settings
| Setting | Recommended Value | Notes |
|---|---|---|
| Steps | 4 | Klein is optimized for 4 steps |
| Sampler | euler | Best balance for Klein |
| Scheduler | normal | Simple works best |
| CFG Scale | 1.0-2.0 | Lower than typical SD |
| Denoise | 1.0 | Full denoise for t2i |
Resolution Settings
Klein handles various resolutions well:
- 1024x1024 - Default, fastest
- 1152x896 - Landscape
- 896x1152 - Portrait
- 1536x1024 - Wide landscape
- 1024x1536 - Tall portrait
Higher resolutions increase VRAM usage and generation time.
Image-to-Image Workflow
Klein supports image-to-image generation for editing and variations.
Additional Nodes Needed
1. Load Image
- Brings in your source image
2. VAE Encode
- Converts image to latent space
Modified Settings
- Denoise: 0.4-0.8 (lower values keep more of the original image)
- Connect VAE Encode output to KSampler's latent input instead of Empty Latent
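Continuing the API-format sketch from the text-to-image section, the change is small: swap the Empty Latent Image for a LoadImage plus VAEEncode chain and lower the denoise. The source file name is a placeholder:

```python
# Image-to-image delta, reusing the `graph` dict from the text-to-image sketch above.
# "source.png" must already exist in ComfyUI/input/.
graph["10"] = {"class_type": "LoadImage",
               "inputs": {"image": "source.png"}}          # placeholder file name
graph["11"] = {"class_type": "VAEEncode",
               "inputs": {"pixels": ["10", 0], "vae": ["3", 0]}}

del graph["6"]                                             # Empty Latent Image no longer needed
graph["7"]["inputs"]["latent_image"] = ["11", 0]           # feed the encoded image instead
graph["7"]["inputs"]["denoise"] = 0.6                      # 0.4-0.8: lower keeps more of the original
```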
Multi-Image Reference Workflow
One of Klein's advanced features is accepting multiple reference images for generation.
Setup
- Load multiple images
- VAE Encode each
- Use a LatentBatch node to combine them (see the sketch below)
- Connect batched latents to KSampler
This allows Klein to blend concepts from multiple sources into coherent outputs.
Advanced workflow supporting multiple reference images
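Expressed against the API-format graph from the text-to-image sketch, the setup described above looks roughly like this. The reference file names are placeholders, and the official multi-reference workflow remains the authoritative wiring:

```python
# Multi-reference sketch: encode two source images and combine them with a
# LatentBatch node before sampling. Continues the `graph` dict from the
# text-to-image example; file names are placeholders.
graph["20"] = {"class_type": "LoadImage", "inputs": {"image": "ref_a.png"}}
graph["21"] = {"class_type": "LoadImage", "inputs": {"image": "ref_b.png"}}
graph["22"] = {"class_type": "VAEEncode", "inputs": {"pixels": ["20", 0], "vae": ["3", 0]}}
graph["23"] = {"class_type": "VAEEncode", "inputs": {"pixels": ["21", 0], "vae": ["3", 0]}}
graph["24"] = {"class_type": "LatentBatch",
               "inputs": {"samples1": ["22", 0], "samples2": ["23", 0]}}
graph["7"]["inputs"]["latent_image"] = ["24", 0]   # KSampler now samples from the batched latents
```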
Using Official Workflows
Black Forest Labs provides official ComfyUI workflows optimized for Klein.
Importing Official Workflows
- Download workflow JSON from BFL's GitHub
- In ComfyUI, click "Load" or drag the file
- All nodes configure automatically
- Ensure model paths match your installation
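You can also queue a downloaded workflow from a script rather than the UI. The sketch below assumes the JSON was exported in ComfyUI's API format (the "Save (API Format)" option available with dev mode enabled); the UI-format JSON you drag into the browser uses a different schema and should be loaded through the interface as described above. The file name is a placeholder:

```python
# Queue an official workflow from a script (a sketch; assumes API-format JSON).
import json
import requests

with open("flux2_klein_text_to_image_api.json") as f:   # placeholder file name
    workflow = json.load(f)

resp = requests.post("http://127.0.0.1:8188/prompt", json={"prompt": workflow})
resp.raise_for_status()
print(resp.json())   # includes the prompt_id of the queued job
```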
Available Official Workflows
- Text-to-Image Basic - Simple prompt to image
- Image-to-Image - Edit existing images
- Multi-Reference - Combine multiple source images
- Batch Generation - Generate multiple variations
Troubleshooting Common Issues
Out of Memory (OOM) Errors
Solutions:
- Reduce resolution to 1024x1024
- Enable attention slicing in ComfyUI settings
- Use an FP8 quantized model (see the sketch after this list)
- Close other VRAM-consuming applications
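If the full-precision weights don't fit, ComfyUI's Load Diffusion Model node exposes a weight_dtype option; an FP8 setting loads the model in roughly half the memory of 16-bit weights. A one-line change against the text-to-image sketch above; the exact option string may differ between ComfyUI versions:

```python
# Load Klein with FP8 weights to cut model VRAM use.
# Option name comes from ComfyUI's UNETLoader node and may vary by version.
graph["1"]["inputs"]["weight_dtype"] = "fp8_e4m3fn"
```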
Slow Generation
Check:
- CUDA is properly installed
- GPU is being utilized (not CPU fallback)
- No background processes consuming GPU
- Latest ComfyUI version installed
Poor Quality Output
Verify:
- Using correct step count (4 for Klein)
- CFG scale is low (1.0-2.0)
- Proper model files loaded (not corrupted)
- Prompt is descriptive and clear
Model Not Loading
Ensure:
- File placed in correct directory
- File extension is .safetensors
- Sufficient VRAM for model size
- ComfyUI restarted after adding models
Performance Optimization
Maximize Klein's speed in ComfyUI:
Enable GPU Acceleration
In ComfyUI settings, ensure:
- CUDA device selected (not CPU fallback)
- FP16 precision enabled
- Memory efficient attention enabled (for lower VRAM)
Workflow Optimization
- Avoid unnecessary nodes
- Don't chain multiple KSamplers unless needed
- Use batch generation instead of repeated single runs (see the sketch after this list)
- Cache frequently used latents
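The batch-generation tip maps to a one-line change against the text-to-image sketch: raise batch_size on the Empty Latent Image node (node "6" in that sketch) instead of queuing several single runs.

```python
# Generate four variations in one run (against the text-to-image graph above).
graph["6"]["inputs"]["batch_size"] = 4
```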
Saving and Sharing Workflows
Export Workflow
Click "Save" to export your workflow as JSON. This preserves all node connections and settings.
Share with Others
Workflows can be shared as:
- JSON files
- Workflow images (metadata embedded)
- Node configurations in documentation
Key Takeaways
- Download three components: model weights, the T5 text encoder, and the VAE
- Use 4 steps with euler sampler for optimal Klein results
- Keep CFG low (1.0-2.0) unlike typical Stable Diffusion
- Official workflows available from Black Forest Labs for quick start
- 1024x1024 is the sweet spot for speed and quality balance
- Enable attention slicing if experiencing memory issues
Frequently Asked Questions
What's the minimum ComfyUI version for Klein?
The latest version is recommended. Older versions may lack required nodes or optimizations for Flux models.
Can I use Klein with ControlNet in ComfyUI?
ControlNet support for Klein is developing. Some community implementations exist but aren't as mature as SDXL ControlNet.
Why is my generation taking longer than expected?
Klein should generate in 1-3 seconds on capable hardware. If slower, check that CUDA is being used and no memory offloading is occurring.
How do I add LoRAs to the Klein workflow?
Add a "Load LoRA" node between the model loader and KSampler. Connect model and CLIP outputs through the LoRA node.
Can I queue multiple generations in ComfyUI?
Yes, use the queue feature to batch prompts. Each will generate sequentially using your workflow.
What's the difference between the official workflow and custom setups?
Official workflows are optimized and tested by Black Forest Labs. Custom setups offer flexibility but may need tuning.
Does ComfyUI support Klein's image editing features?
Yes, through image-to-image workflows with appropriate denoise settings.
How do I update Klein models in ComfyUI?
Replace the model file in the appropriate directory and restart ComfyUI.
Can I run Klein and SDXL in the same ComfyUI instance?
Yes, simply load the appropriate model for each workflow. Both can coexist.
Where can I find community workflows for Klein?
Check OpenArt, Civitai, and the ComfyUI subreddit for community-created workflows.
ComfyUI provides the flexibility and power to get the most out of Flux 2 Klein. Start with the basic workflow, understand how the pieces connect, and gradually explore advanced features as your needs grow.
For users who prefer simpler interfaces or lack local hardware, Apatero offers browser-based generation with multiple models including video generation and LoRA training on Pro plans.