Fix GGUF Models Not Working with LoRA and ControlNet
Solve compatibility issues when using GGUF quantized models with LoRAs and ControlNet in ComfyUI
GGUF quantized models offer tremendous VRAM savings, allowing large models like SDXL or Flux to run on consumer GPUs that would otherwise struggle with full-precision weights. However, when you try to apply your favorite LoRAs or use ControlNet for precise composition control, you may hit cryptic errors, silent failures, or outputs that completely ignore your adapters. This frustrating experience stems from fundamental differences in how GGUF stores model weights compared to the standard safetensors format, and understanding these differences is the key to resolving the compatibility issues.
The good news is that GGUF models absolutely work with LoRAs and ControlNet when configured correctly. The solution involves using specialized loader nodes that understand GGUF's quantized format and can properly dequantize weights before applying modifications. This guide walks through the technical reasons LoRAs fail with GGUF models and provides concrete workflows for combining GGUF's memory efficiency with the creative flexibility of LoRAs and the precise control of ControlNet.
Understanding Why GGUF Creates Compatibility Challenges
GGUF (GPT-Generated Unified Format) was originally developed for large language models and later adapted for Stable Diffusion models. Its primary purpose is aggressive compression through quantization, reducing model sizes by 50-75% while maintaining acceptable quality. However, this compression creates several technical challenges when applying model modifications like LoRAs.
How Quantization Affects Weight Storage
Standard safetensors models store weights as floating-point numbers, typically in FP16 (16-bit) or FP32 (32-bit) format. Each weight value is stored precisely, allowing direct mathematical operations. When you apply a LoRA, the LoRA weights are multiplied by a strength factor and added directly to these base weights through simple matrix arithmetic.
GGUF quantization fundamentally changes this storage format. Instead of storing individual floating-point values, GGUF groups weights into blocks and represents them using fewer bits. A Q4_K_M quantization, for example, stores weights using only 4 bits per value on average, with some additional metadata for scaling. This block-based representation means the actual weight values must be reconstructed (dequantized) before any mathematical operations can occur.
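As a rough illustration of that reconstruction step, the sketch below shows a simplified 4-bit block scheme: a group of weights shares one scale and one minimum, and dequantization rebuilds approximate floating-point values. This is a conceptual sketch only, not the actual GGUF Q4_K_M layout, which uses super-blocks and additional metadata.
import numpy as np
# Simplified illustration: 32 weights share one scale and one minimum (not the real GGUF layout)
def quantize_block(weights):
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 15.0                      # 4 bits -> 16 levels
    quants = np.round((weights - w_min) / scale).astype(np.uint8)
    return quants, scale, w_min
def dequantize_block(quants, scale, w_min):
    # Reconstructed weights are approximations of the originals
    return quants.astype(np.float32) * scale + w_min
block = np.random.randn(32).astype(np.float32)
q, s, m = quantize_block(block)
restored = dequantize_block(q, s, m)
print("max reconstruction error:", float(np.abs(block - restored).max()))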
When a standard LoRA loader attempts to apply modifications to a GGUF model, it tries to perform matrix operations on quantized blocks rather than individual weights. This mismatch produces shape errors, type mismatches, or numerical instabilities, depending on exactly how the operation fails.
The Dequantization Requirement
For LoRA application to work correctly with GGUF models, the workflow must:
- Load the GGUF model and identify which layers will receive LoRA modifications
- Dequantize those specific layers back to FP16 or FP32 format
- Apply the LoRA weights through standard matrix operations
- Optionally re-quantize for memory efficiency during inference
Standard ComfyUI LoRA loaders know nothing about GGUF's block-based format. They expect tensors that can be modified directly and passed back to the model. GGUF-aware loaders handle the dequantization step internally, presenting a compatible interface to the LoRA application logic.
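Conceptually, once a layer has been dequantized the LoRA merge is the same low-rank update used with full-precision models. The sketch below uses plain PyTorch tensors with made-up shapes to show the math; it is not the ComfyUI-GGUF implementation.
import torch
def apply_lora(base, lora_down, lora_up, strength):
    # Add the scaled low-rank update to an already-dequantized weight matrix
    delta = lora_up @ lora_down                     # (out, rank) @ (rank, in) -> (out, in)
    return base + strength * delta.to(base.dtype)
# Toy example: a 512x512 layer with a rank-16 LoRA (shapes are illustrative only)
base = torch.randn(512, 512)    # pretend this came from dequantization (FP16 on GPU in practice)
down = torch.randn(16, 512)
up = torch.randn(512, 16)
patched = apply_lora(base, down, up, strength=0.8)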
Node Pack Architecture Differences
Different GGUF node packs take different approaches to this problem. The ComfyUI-GGUF node pack, for example, provides specialized loader nodes that accept LoRA files directly and handle the dequantization internally. This approach ensures the LoRA application occurs at the right point in the loading pipeline.
Other approaches involve loading the GGUF model, then applying patches that convert specific layers to full precision before standard LoRA nodes process them. Understanding which approach your node pack uses is essential for building working workflows.
Configuring LoRA Application with GGUF Models
Successfully using LoRAs with GGUF models requires understanding your specific node pack's approach and following its expected workflow pattern.
Installing GGUF-Compatible Nodes
The most widely used solution is the ComfyUI-GGUF node pack. Install it through ComfyUI Manager or manually clone the repository:
cd ComfyUI/custom_nodes
git clone https://github.com/city96/ComfyUI-GGUF
cd ComfyUI-GGUF
pip install -r requirements.txt
After installation, restart ComfyUI to load the new nodes. You'll find several new loader nodes in the node menu under the "GGUF" category.
Understanding the GGUF Loader Nodes
The ComfyUI-GGUF pack provides several loader variations:
UnetLoaderGGUF: Loads just the UNet from a GGUF file. This is the core generation model and where LoRAs typically apply their modifications.
DualCLIPLoaderGGUF: Loads CLIP text encoders from GGUF format. Important for SDXL which uses two CLIP encoders.
UnetLoaderGGUFAdvanced: Extended loader with additional options including LoRA inputs.
For LoRA support, you need the advanced loader that includes LoRA input connections. The basic loader doesn't provide LoRA integration.
Building a GGUF LoRA Workflow
Here's a complete workflow structure for using LoRAs with GGUF models:
[UnetLoaderGGUFAdvanced]
- unet_name: your_model.gguf
- loras: (connect LoRA loader output)
[LoRALoaderModelOnly] or [LoraStackLoader]
- lora_name: your_lora.safetensors
- strength_model: 0.8
-> Output connects to UnetLoaderGGUFAdvanced loras input
[CLIPLoader] (standard or GGUF version)
- clip_name: clip_model
[VAELoader]
- vae_name: your_vae
[KSampler]
- model: from UnetLoaderGGUFAdvanced
- positive/negative: from CLIP encoding
- latent: from EmptyLatentImage
The critical difference from standard workflows is that the LoRA connects to the GGUF loader rather than being applied after loading. This allows the loader to handle dequantization before LoRA application.
Multiple LoRA Application
To use multiple LoRAs with GGUF models, chain them through a LoRA stack node:
# Conceptual workflow structure
lora_stack = [
("style_lora.safetensors", 0.7, 0.7),
("character_lora.safetensors", 0.9, 0.9),
("detail_lora.safetensors", 0.5, 0.5)
]
# Connect stack output to GGUF loader's LoRA input
The GGUF loader processes each LoRA in sequence, dequantizing relevant layers, applying the modification, then moving to the next LoRA. This sequential processing means order can matter for overlapping modifications.
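As a conceptual sketch of that sequential processing (hypothetical tensors, not the node pack's actual code), each LoRA in the stack contributes its own scaled low-rank delta to the same dequantized layer:
import torch
base = torch.randn(512, 512)                              # one dequantized layer (illustrative shapes)
stack = [
    (torch.randn(16, 512), torch.randn(512, 16), 0.7),    # (down, up, model strength)
    (torch.randn(8, 512), torch.randn(512, 8), 0.9),
]
weights = base.clone()
for down, up, strength in stack:                          # applied one after another, as in a LoRA stack node
    weights = weights + strength * (up @ down)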
Troubleshooting LoRA Application
If your LoRA has no visible effect:
Check connection routing: Incorrect routing is the most common cause. Ensure the LoRA connects to the GGUF loader's LoRA input, not to a separate LoRA loader node. A separate loader fails because it can't access the dequantized weights.
Verify LoRA compatibility: LoRAs must match the base model architecture. An SD 1.5 LoRA won't work with an SDXL GGUF model regardless of loader configuration.
Test strength values: Start with strength 1.0 to ensure the LoRA is actually being applied. GGUF quantization can slightly reduce LoRA responsiveness, so you may need higher strengths than with full-precision models.
Check console output: ComfyUI's console shows loading messages. Look for confirmation that the LoRA was loaded and applied to the model.
Configuring ControlNet with GGUF Models
ControlNet integration with GGUF models is generally simpler than LoRA because ControlNet operates as a separate guidance network rather than modifying the base model's weights directly.
Why ControlNet Usually Works
ControlNet models are separate neural networks that provide spatial conditioning to the main UNet. They don't modify the base model's weights; instead, they inject guidance signals at specific points in the diffusion process. This architectural separation means ControlNet doesn't care whether the base UNet weights are quantized or full precision.
The ControlNet model itself remains in standard format (typically safetensors), and its outputs are added to the base model's hidden states during inference. As long as the precision of these outputs matches what the base model expects, the combination works correctly.
Precision Matching Requirements
While ControlNet doesn't directly interact with quantized weights, precision mismatches between the ControlNet output and the base model's computation can cause errors. GGUF models typically compute in FP16 for efficiency (dequantizing from lower precision to FP16 for actual operations). Your ControlNet should also operate in FP16 for compatibility.
Most ControlNet models default to FP16, so this rarely causes issues. However, if you're using FP32 ControlNet models or have forced FP32 computation elsewhere in your workflow, you may encounter type mismatch errors.
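The sketch below (plain PyTorch with hypothetical tensors, not ComfyUI internals) shows both ideas at once: the ControlNet residual is added to the UNet's hidden states rather than its weights, and casting it to the base model's compute dtype is exactly what prevents the Float/Half mismatch described above.
import torch
def inject_control(unet_hidden, control_residual, strength):
    # ControlNet output modifies hidden states, not the (quantized) base weights;
    # matching dtypes here avoids "expected Float but found Half" errors
    residual = control_residual.to(unet_hidden.dtype)
    return unet_hidden + strength * residual
hidden = torch.randn(2, 320, 64, 64, dtype=torch.float16)    # FP16 UNet activations
control = torch.randn(2, 320, 64, 64, dtype=torch.float32)   # FP32 ControlNet output
guided = inject_control(hidden, control, strength=0.8)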
ControlNet Workflow with GGUF
Here's the workflow structure for using ControlNet with GGUF models:
[UnetLoaderGGUFAdvanced]
- unet_name: your_model.gguf
-> model output
[ControlNetLoader]
- control_net_name: your_controlnet.safetensors
-> control_net output
[Preprocessor node (Canny, Depth, etc.)]
- image: your input image
-> processed image output
[ApplyControlNet]
- conditioning: from CLIP encoding
- control_net: from ControlNetLoader
- image: from preprocessor
- strength: 0.8
-> conditioning output to KSampler
[KSampler]
- model: from UnetLoaderGGUFAdvanced
- positive: from ApplyControlNet
- negative: from CLIP encoding
Note that ControlNet applies to the conditioning, not directly to the model. This is why it works regardless of the base model's format.
Using Multiple ControlNets
Multiple ControlNets can be chained by sequential ApplyControlNet nodes:
[ApplyControlNet] (Canny edges)
- conditioning: base positive conditioning
- strength: 0.6
-> conditioning output
[ApplyControlNet] (Depth map)
- conditioning: output from previous ControlNet
- strength: 0.4
-> final conditioning to KSampler
Each ControlNet adds its guidance to the accumulated conditioning. The GGUF base model receives this combined guidance identically to how a full-precision model would.
ControlNet Union and GGUF
ControlNet Union models, which combine multiple control types into a single model, work with GGUF base models just like individual ControlNets. The Union model is still a separate network providing conditioning guidance.
[ControlNetLoader]
- control_net_name: controlnet_union.safetensors
[ApplyControlNet]
- control_hint_type: select the control type (canny, depth, etc.)
- strength: 0.7
IP-Adapter Considerations
IP-Adapter presents a more complex case because it modifies the attention mechanisms of the base model. With GGUF models, IP-Adapter may or may not work depending on:
- Whether the GGUF loader exposes the necessary hooks for attention modification
- Whether the IP-Adapter implementation can work with dequantized attention weights
Test IP-Adapter with your specific GGUF node pack. Some implementations work fine; others require patches or alternative approaches.
Diagnosing and Fixing Common Errors
When GGUF compatibility issues occur, the error messages can be cryptic. Here's how to interpret and resolve the most common problems.
Shape Mismatch Errors
Error message: RuntimeError: shape '[X, Y]' is invalid for input of size Z
Cause: The code is trying to reshape quantized block data as if it were individual weights.
Solution: You're likely using a standard LoRA loader instead of connecting through the GGUF loader. Rewire your workflow to apply LoRAs through the GGUF loader's LoRA input.
Type Mismatch Errors
Error message: RuntimeError: expected scalar type Float but found Half (or vice versa)
Cause: Different parts of the workflow are operating at different precisions.
Solution: Check precision settings throughout your workflow. Ensure ControlNet models match the base model's computation precision (typically FP16 for GGUF). Some nodes have explicit precision settings; standardize them.
Attribute Errors
Error message: AttributeError: 'X' object has no attribute 'Y'
Cause: Using nodes that expect standard model objects with GGUF model objects that have different interfaces.
Solution: Use nodes designed for GGUF, or ensure you're using the correct GGUF loader that provides the expected interface. The ComfyUI-GGUF pack's advanced loader exposes standard model interfaces that most nodes expect.
NaN or Inf Values
Symptom: RuntimeError messages mentioning NaN/Inf values, or completely black outputs with no error.
Cause: Numerical instability from precision mismatches during computation.
Solution:
- Try a different GGUF quantization level (Q8 is more stable than Q4)
- Reduce LoRA strength values
- Check for conflicting LoRAs that push weights to extreme values
- Ensure all precision-sensitive nodes are using consistent settings
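When tracking down where the instability starts, a small finiteness check dropped into custom node code or a debugging script can identify the first tensor that goes bad. This is plain PyTorch, not an existing ComfyUI node:
import torch
def check_finite(name, tensor):
    # Report NaN/Inf as soon as they appear so you know which stage introduced them
    if not torch.isfinite(tensor).all():
        bad = (~torch.isfinite(tensor)).sum().item()
        print(f"{name}: {bad} NaN/Inf values detected")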
Silent Failures (No Effect)
Symptom: LoRA or ControlNet appears to load but has no visible effect on output.
Causes and solutions:
- LoRA connected to wrong node (rewire through GGUF loader)
- Wrong architecture LoRA (verify SDXL vs SD1.5)
- Strength set to 0 or very low (increase strength)
- Conditioning not connected properly (trace all connections)
Optimizing Performance and Quality
Once you have working GGUF + LoRA + ControlNet workflows, you can optimize for better performance and output quality.
Quantization Level Selection
Different GGUF quantization levels affect LoRA compatibility and quality:
Q8_0: 8-bit quantization with highest quality and best LoRA compatibility. Only ~50% size reduction but very close to full precision behavior.
Q6_K: Good balance of size reduction and quality. LoRAs work well with minor quality differences.
Q5_K_M: More aggressive compression. LoRAs still work but may need strength adjustments.
Q4_K_M: Maximum compression for VRAM savings. LoRA application may show more quality degradation.
For critical LoRA work, prefer Q8 or Q6 quantization. Use Q4 when VRAM constraints are severe and you can accept some quality reduction.
Memory Management
GGUF's main benefit is memory efficiency. To maximize this:
# Launch ComfyUI with appropriate memory settings
python main.py --lowvram # Aggressive offloading for limited VRAM
python main.py --normalvram # Balanced approach
python main.py --highvram # Keep more in VRAM for speed
GGUF models with LoRAs typically need less VRAM than full-precision equivalents even with the dequantization overhead, because only the modified layers need full precision during LoRA application.
LoRA Strength Calibration
GGUF models may respond slightly differently to LoRAs than full-precision models. The quantization affects fine weight details that LoRAs interact with. Generally:
- Start with your normal strength values
- If the effect seems weak, increase by 10-20%
- If you see artifacts, the strength may be too high
- Different quantization levels may need different strengths for the same visual effect
Caching Strategies
For workflows using the same GGUF model with different LoRAs:
Some GGUF loaders support caching the dequantized base model and applying different LoRAs without re-loading. This significantly speeds up iteration when testing multiple LoRAs. Check your node pack's documentation for caching options.
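A conceptual sketch of that caching pattern is shown below. The helper names are hypothetical; how (and whether) your node pack caches dequantized layers depends on its implementation.
import torch
_dequant_cache = {}
def get_dequantized_layer(layer_name, dequantize_fn):
    # Dequantize each layer once, then reuse it while swapping LoRAs
    if layer_name not in _dequant_cache:
        _dequant_cache[layer_name] = dequantize_fn(layer_name)
    return _dequant_cache[layer_name]
def patch_with_lora(layer_name, dequantize_fn, down, up, strength):
    # dequantize_fn is a stand-in for whatever your loader uses internally
    base = get_dequantized_layer(layer_name, dequantize_fn)
    return base + strength * (up @ down).to(base.dtype)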
Advanced Configurations
For complex workflows combining multiple techniques with GGUF models.
LoRA + ControlNet + IP-Adapter
When combining all three with GGUF:
- Load GGUF model with LoRA through advanced loader
- Apply IP-Adapter if supported (test with your node pack)
- Apply ControlNet to conditioning
- Generate with combined modifications
The order matters: LoRA modifies the base model, IP-Adapter modifies attention, ControlNet modifies conditioning. Each operates at a different level.
Regional Prompting with GGUF
Regional prompting nodes that split the generation into areas work with GGUF models because they operate on conditioning and latents rather than model weights. Configure them as you would with standard models.
Inpainting with GGUF
GGUF inpainting models (if available) work with LoRAs through the same mechanisms. Load the inpainting GGUF with its LoRAs, provide your mask, and generate. ControlNet inpainting guidance also works normally.
Future Compatibility
The GGUF ecosystem for Stable Diffusion models is actively developing. Current limitations are being addressed:
More quantization formats: New formats like IQ (importance-weighted quantization) provide better quality at similar sizes.
Better node support: More nodes are being updated to handle GGUF models natively without requiring special loaders.
Automatic dequantization: Future implementations may handle the dequantization transparently, allowing standard LoRA nodes to work directly.
Keep your GGUF node packs updated to benefit from improvements. New releases often fix compatibility issues and add support for additional features.
Conclusion
GGUF models provide valuable VRAM savings that enable running large models on consumer hardware, and they absolutely work with LoRAs and ControlNet when properly configured. The key is understanding that GGUF's quantized weight storage requires specialized loader nodes that handle dequantization before modifications can be applied.
For LoRAs, always connect through your GGUF loader's LoRA input rather than using separate LoRA loader nodes. For ControlNet, the standard approach works because ControlNet provides conditioning guidance rather than weight modifications. Match precision settings throughout your workflow to avoid type mismatches.
When troubleshooting, focus on connection routing (LoRAs through the GGUF loader), architecture matching (SDXL LoRA with SDXL GGUF), and precision consistency. Most compatibility problems stem from these three areas.
The combination of GGUF compression with LoRA customization and ControlNet guidance provides excellent creative flexibility with manageable VRAM requirements. With proper workflow configuration, you can achieve results nearly identical to full-precision models while using a fraction of the memory.
Advanced GGUF Workflow Configurations
Beyond basic compatibility fixes, advanced configurations maximize GGUF's potential in complex workflows.
Multi-LoRA Workflows with GGUF
When using multiple LoRAs with GGUF models, order and strength calibration become critical:
Workflow Structure:
GGUF Model Loader
↓
LoRA Stack Node (combining multiple LoRAs)
- Character LoRA (strength 0.8)
- Style LoRA (strength 0.5)
- Detail LoRA (strength 0.3)
↓
KSampler
Calibration Tips:
- Reduce individual strengths compared to full-precision models
- Total combined strength should rarely exceed 1.5
- Character/subject LoRAs should be strongest
- Style and detail LoRAs work as refinement
GGUF with ControlNet Union
ControlNet Union combines multiple control types efficiently with GGUF:
GGUF Model Loader → model
ControlNet Union Loader → control_net
Image → Preprocessor → control_image
Apply ControlNet Union
- Select control type (depth, canny, pose, etc.)
- Set strength for each type
→ conditioning
KSampler (model, conditioning)
Union models are particularly efficient with GGUF since you only load one ControlNet model instead of multiple.
Regional Control with GGUF
Regional prompting works with GGUF models for precise spatial control:
Use Case: Different LoRAs for different image regions
Implementation:
- Create regional masks for different areas
- Apply different LoRAs to each region through GGUF loader
- Blend regions in final output
This technique enables complex compositions like a character with one LoRA in a scene with a style LoRA background.
Performance Optimization Strategies
Maximize GGUF performance in production workflows.
Caching and Preloading
Model Caching: GGUF models benefit from keeping the dequantized version cached. Configure your workflow to keep the model in memory between generations.
LoRA Precomputation: If using the same LoRA across many generations, precompute the merged weights once rather than reapplying each generation. Some node packs support this optimization.
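The precomputation idea can be sketched as merging once and persisting the result, so the matrix math isn't repeated every run. The file name and the decision to cache to disk are illustrative assumptions, not a feature of any specific node pack.
import torch
# Merge a LoRA into a dequantized layer once, save it, and reuse it across generations
base = torch.randn(512, 512)                       # dequantized layer (illustrative)
down, up, strength = torch.randn(16, 512), torch.randn(512, 16), 0.8
merged = base + strength * (up @ down)
torch.save({"layer.weight": merged}, "merged_lora_layer.pt")   # hypothetical cache file
# Later runs load the merged weights instead of re-applying the LoRA
reloaded = torch.load("merged_lora_layer.pt")["layer.weight"]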
Batch Processing Optimization
Batch Configuration:
# Optimal batch processing settings for GGUF
batch_size = 4 # May need to be lower than with full-precision models
enable_attention_slicing = True # Memory efficiency
For extensive batch processing with GGUF, see our batch processing guide.
Memory Budget Allocation
Plan memory allocation across workflow components:
| Component | Q8 GGUF | Q4 GGUF |
|---|---|---|
| Base model | ~12GB | ~6GB |
| LoRAs (3x) | ~0.5GB | ~0.5GB |
| ControlNet | ~2.5GB | ~2.5GB |
| VAE | ~0.3GB | ~0.3GB |
| Working | ~4GB | ~4GB |
| Total | ~19GB | ~13GB |
This helps you plan which quantization level fits your VRAM.
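If you want to script this check, a small calculation with the rough numbers from the table (estimates, not measured values) tells you whether a given quantization level fits your card:
# Rough VRAM budget check using the Q4 estimates from the table above (GB)
components = {"base_model": 6.0, "loras_x3": 0.5, "controlnet": 2.5, "vae": 0.3, "working": 4.0}
vram_gb = 16.0
total = sum(components.values())
status = "fits" if total <= vram_gb else "does not fit"
print(f"estimated usage: {total:.1f} GB of {vram_gb:.1f} GB ({status})")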
Integration with Optimization Tools
Combine GGUF with other optimization techniques for maximum efficiency.
GGUF with TeaCache
TeaCache works normally with GGUF models:
GGUF Model Loader → TeaCache → KSampler
The caching operates on the sampling level, independent of model quantization. You get both memory savings from GGUF and speed improvements from TeaCache.
For TeaCache configuration, see our optimization guide.
GGUF with SageAttention
SageAttention accelerates the attention computation within GGUF models:
- Dequantized attention layers use SageAttention
- Speed improvement stacks with GGUF memory savings
- No special configuration needed
Combined Optimization Stack
Maximum optimization combines multiple techniques:
GGUF Model (Q4_K_M for memory)
+ TeaCache (for speed)
+ SageAttention (for attention speed)
+ Memory-efficient attention (for further memory savings)
This can enable running large models like Flux on 12GB cards while maintaining reasonable generation speeds.
Troubleshooting Complex Workflows
Advanced troubleshooting for sophisticated GGUF workflows.
Debugging Connection Issues
When complex workflows fail with GGUF models:
- Isolate the problem: Remove components until workflow works
- Add components back: One at a time to find the issue
- Check node versions: Some nodes have GGUF-specific updates
- Verify precision matching: Throughout the workflow
Memory Fragmentation
Symptom: Out of memory errors despite sufficient total VRAM.
Cause: Memory fragmentation from repeated loading/unloading.
Solutions:
- Restart ComfyUI to defragment
- Enable CUDA memory allocation caching
- Process in smaller batches with restarts between
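If restarting between every batch is impractical, you can also release PyTorch's cached allocations from a custom script or node. This is a minimal sketch using standard PyTorch calls; it frees cached blocks but cannot defragment memory that is still in use.
import gc
import torch
def release_cached_vram():
    # Drop Python references first, then return cached CUDA blocks to the driver
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        torch.cuda.ipc_collect()
# Launching with PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True can also reduce fragmentation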
Quality Debugging
When quality doesn't match expectations:
- Generate same prompt with full-precision model
- Compare outputs systematically
- If GGUF is significantly worse, try higher quantization (Q8 instead of Q4)
- Check if specific LoRAs interact poorly with quantization
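For the systematic comparison, a simple pixel-level metric between a GGUF render and a full-precision render of the same prompt and seed makes the difference measurable instead of subjective. The file names below are placeholders for your own saved outputs:
import numpy as np
from PIL import Image
def image_mse(path_a, path_b):
    # Mean squared pixel difference between two same-sized renders
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float32)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float32)
    return float(np.mean((a - b) ** 2))
print(image_mse("full_precision.png", "gguf_q4.png"))   # placeholder file names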
For general VRAM optimization techniques, see our VRAM flags guide.
Future GGUF Development
The GGUF ecosystem continues evolving with active development.
Emerging Improvements
Better Quantization Methods: New quantization schemes like IQ (importance-weighted quantization) provide better quality at the same file sizes.
Native Node Support: More ComfyUI nodes are adding native GGUF support, reducing need for special loaders.
Automatic Dequantization: Future implementations may handle dequantization transparently, making GGUF models work with any node.
Staying Updated
Keep GGUF node packs updated for latest improvements:
cd ComfyUI/custom_nodes/ComfyUI-GGUF
git pull
pip install -r requirements.txt
New releases often fix compatibility issues and add support for additional features.
Getting Started with GGUF Models
For users new to GGUF quantized models, understanding the fundamentals before encountering compatibility issues saves significant troubleshooting time.
Understanding When to Use GGUF
GGUF Is Ideal For:
- Systems with 6-12GB VRAM running SDXL or larger models
- Users prioritizing model variety over maximum quality
- Testing multiple models without VRAM constraints
- Situations where 50-75% smaller file sizes matter
GGUF May Not Be Ideal For:
- Production workflows requiring maximum quality
- Heavy LoRA usage where compatibility matters
- Users with ample VRAM (24GB+) who can use full precision
- Workflows requiring features GGUF node packs don't support
Recommended Learning Path
Step 1 - Understand ComfyUI Basics: Before adding GGUF complexity, ensure you understand standard ComfyUI workflows. Our essential nodes guide covers foundational concepts.
Step 2 - Test Standard Models First: Verify your LoRAs and ControlNets work correctly with standard safetensors models before testing with GGUF. This establishes a working baseline for comparison.
Step 3 - Install GGUF Node Pack: Add ComfyUI-GGUF node pack and verify basic GGUF model loading works before adding LoRAs or ControlNet.
Step 4 - Add LoRAs Through GGUF Loader: Use the advanced GGUF loader's LoRA input rather than separate LoRA loader nodes. This is the most common mistake that causes compatibility issues.
Step 5 - Add ControlNet: ControlNet typically works without modification since it operates on conditioning rather than model weights.
First GGUF Workflow Recommendations
Start Simple: Create a basic text-to-image workflow with just GGUF model loading. Verify generation quality matches your expectations before adding complexity.
Add One Feature at a Time: When building complex workflows, add LoRA, then test. Add ControlNet, then test. This isolation identifies which component causes issues.
Compare to Standard Models: Generate identical prompts with both GGUF and standard versions of the same model. Quality differences should be subtle; major differences indicate configuration issues.
Common Beginner Mistakes
Mistake: Using a standard LoRA loader with GGUF. This is the most common error. Standard LoRA loaders can't handle quantized weights. Always connect LoRAs through the GGUF loader's LoRA input.
Mistake: Expecting identical quality. GGUF quantization trades some quality for memory savings, and Q4 quantization produces visible quality reduction. Use Q8 if quality is critical.
Mistake: Mixing SDXL LoRAs with SD 1.5. LoRAs must match the base model architecture regardless of format; SDXL LoRAs work only with SDXL models.
For complete beginners to AI image generation concepts, our beginner's guide provides foundational knowledge that makes GGUF optimization easier to understand.