Qwen 2.5 VL for Image Understanding - Complete Guide
Master Qwen 2.5 VL vision-language model for image analysis, document understanding, and visual question answering with local deployment
Vision-language models have transformed how we interact with visual content, enabling machines to understand, describe, and reason about images in ways that were impossible just a few years ago. Qwen 2.5 VL represents Alibaba's latest contribution to this field, offering powerful multimodal capabilities that you can run locally without relying on cloud APIs. This comprehensive guide covers everything from understanding what Qwen 2.5 VL can do to deploying it on your own hardware for practical applications in image analysis, document processing, and visual question answering.
Understanding Vision-Language Models
Before diving into Qwen 2.5 VL specifically, it helps to understand what vision-language models are and why they matter. These models bridge two modalities that humans process effortlessly but machines have historically kept separate: visual perception and language understanding.
Traditional computer vision models could classify images or detect objects, but they couldn't explain what they saw or answer arbitrary questions about visual content. Language models could process text beautifully but were blind to images. Vision-language models combine these capabilities, taking images as input alongside text prompts and generating text responses that demonstrate understanding of the visual content.
This capability enables applications that require both seeing and reasoning: describing images for accessibility, extracting data from documents, answering questions about charts and diagrams, analyzing screenshots, and much more. The practical applications span from personal productivity to enterprise automation.
Qwen 2.5 VL Capabilities
Qwen 2.5 VL excels at several categories of visual understanding tasks. Each represents a different way of extracting value from images through language.
Image Description and Captioning
The most fundamental capability is describing what's in an image. Qwen 2.5 VL can generate descriptions at multiple levels of detail depending on your needs. Ask for a brief caption and you get a sentence. Ask for a detailed description and you get paragraphs covering composition, colors, objects, people, actions, text, and mood.
The model handles diverse image types well, from photographs to illustrations to screenshots. It identifies objects accurately, describes spatial relationships between elements, and picks up on stylistic and atmospheric qualities. For images containing people, it describes poses, expressions, clothing, and activities while being appropriately careful about identity claims.
This capability is valuable for automated image cataloging, accessibility descriptions, content moderation pipelines, and anywhere you need textual representations of visual content.
Document OCR and Understanding
Qwen 2.5 VL's document understanding goes significantly beyond basic OCR. While it can extract text from images of documents, it also understands document structure, relationships between elements, and the meaning of what it's reading.
Show it a receipt and it extracts not just the text but organizes it into merchant name, items, prices, tax, and total. Show it a form and it identifies fields and their values. Show it a table and it understands rows, columns, and headers. This structural understanding transforms raw text extraction into usable data.
The model handles various document types: printed documents, handwritten notes (with varying accuracy based on legibility), signs, labels, business cards, invoices, and more. It works across languages, with particularly strong performance in Chinese and English.
For practical document processing, you can ask Qwen 2.5 VL to extract specific information ("What is the invoice number?"), summarize content ("Summarize this contract clause"), or convert to structured formats ("Extract this table as JSON").
Visual Question Answering
Perhaps the most flexible capability is visual question answering (VQA), where you ask arbitrary questions about an image and get accurate answers. This goes beyond description into reasoning about visual content.
You can ask factual questions: "How many people are in this photo?" "What color is the car?" "What time does the sign say the store closes?" You can ask inferential questions: "What season does this appear to be?" "What is the person likely doing next?" "What emotion does this scene convey?" You can ask analytical questions: "Is this chart showing growth or decline?" "What's the relationship between these two elements?" "What's wrong with this image?"
VQA makes Qwen 2.5 VL a general-purpose visual assistant that can answer whatever you need to know about an image. The accuracy depends on image quality and question complexity, but the capability is remarkably solid across diverse queries.
Chart and Graph Analysis
A specific strength worth highlighting is chart and graph understanding. Qwen 2.5 VL can interpret data visualizations and answer questions about them, extracting both visual features and underlying data.
Show it a bar chart and it can tell you which category has the highest value, estimate the actual numbers, describe trends, and compare elements. It works with line graphs, pie charts, scatter plots, and other common visualization types. This capability is valuable for analyzing screenshots from reports, extracting data from images when the source data isn't available, and automating visual data review.
The model reasons about what charts show rather than just describing their visual appearance. It understands that charts represent data and interprets them accordingly.
Multi-Image Analysis
Qwen 2.5 VL can process multiple images together, enabling comparison, sequencing, and relationship analysis. This capability supports use cases that single-image analysis can't handle.
You can ask it to compare two product images and list differences, analyze a sequence of images to understand what happened, find common elements across a set of images, or identify which image from a group best matches a description. The model maintains context across images and reasons about their relationships.
Multi-image analysis is particularly valuable for quality control (comparing to reference), temporal analysis (understanding change), and selection tasks (choosing from options).
Model Sizes and Hardware Requirements
Qwen 2.5 VL comes in multiple sizes to fit different hardware capabilities and performance needs. Choosing the right size balances capability against resource requirements.
Qwen 2.5 VL 3B (Smallest)
The 3-billion parameter model runs on modest hardware, requiring only 4-6GB of VRAM for inference. This size works on many consumer GPUs and even some integrated graphics with sufficient system memory.
The 3B model handles basic tasks well: simple descriptions, text extraction from clear documents, and straightforward VQA. It struggles with complex reasoning, dense documents, or subtle questions. Think of it as appropriate for simple, high-volume tasks where speed and accessibility matter more than maximum capability.
Use the 3B model when hardware is limited, when tasks are simple, or when you need to process many images quickly and can tolerate occasional errors.
Qwen 2.5 VL 7B (Balanced)
The 7-billion parameter model represents the sweet spot for most users. It requires 12-16GB of VRAM, putting it in range of enthusiast-class consumer GPUs like the RTX 3090, 4080, or better.
This size handles the full range of capabilities well: detailed descriptions, complex documents, multi-step reasoning, and subtle questions. The quality jump from 2B is substantial, and it handles edge cases much better. For most practical applications, the 7B model delivers excellent results.
This is the recommended starting point if your hardware supports it. It balances capability, speed, and resource usage effectively for diverse use cases.
Qwen 2.5 VL 72B (Maximum)
The 72-billion parameter model provides the highest capability but needs roughly 140GB+ of VRAM at 16-bit precision (or around 40-48GB with 4-bit quantization), limiting it to multi-GPU setups or data-center cards like the A100 and H100. This is overkill for most users but valuable for demanding applications.
The 72B model excels at complex reasoning, handles ambiguous queries better, makes fewer mistakes on difficult content, and provides more detailed and subtle responses. For critical applications where accuracy matters more than cost, or for research requiring maximum capability, the 72B model delivers.
Most users should only consider this size for specific demanding tasks or production deployments where the cost of errors justifies the hardware investment.
Local Deployment Guide
Running Qwen 2.5 VL locally gives you privacy, eliminates per-query costs, and enables integration into your own applications. Here's how to set it up.
Environment Preparation
Start with a Python environment (3.10+ recommended) with PyTorch installed for your hardware. For NVIDIA GPUs, install the CUDA-enabled version of PyTorch. For Apple Silicon, the MPS backend works with recent PyTorch versions.
# Create a virtual environment
python -m venv qwen-vl-env
source qwen-vl-env/bin/activate # On Windows: qwen-vl-env\Scripts\activate
# Install PyTorch (adjust for your CUDA version or use MPS)
pip install torch torchvision
# Install transformers and dependencies
pip install transformers accelerate pillow
You'll also need the qwen-vl-utils package for image processing:
pip install qwen-vl-utils
Loading the Model
Load Qwen 2.5 VL using the transformers library. The model downloads automatically from HuggingFace on first use:
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

# Choose your model size
model_name = "Qwen/Qwen2.5-VL-7B-Instruct"

# Load model (adjust device_map for your setup)
# Qwen 2.5 VL uses the Qwen2_5_VL* classes; a recent transformers release is required
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

# Load processor
processor = AutoProcessor.from_pretrained(model_name)
For lower VRAM usage, load in 8-bit or 4-bit quantization:
import torch
from transformers import BitsAndBytesConfig

# 4-bit quantization via bitsandbytes (pip install bitsandbytes)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16
)

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_name,
    quantization_config=quantization_config,
    device_map="auto",
)
Running Inference
Process images by combining them with text prompts in a conversation format:
from PIL import Image
from qwen_vl_utils import process_vision_info
# Prepare your image
image_path = "path/to/your/image.jpg"
# Create the conversation
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": image_path},
{"type": "text", "text": "Describe this image in detail."}
]
}
]
# Process inputs
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt"
)
inputs = inputs.to(model.device)
# Generate response
output_ids = model.generate(**inputs, max_new_tokens=512)
output_text = processor.batch_decode(
output_ids[:, inputs.input_ids.shape[1]:],
skip_special_tokens=True
)[0]
print(output_text)
Structured Data Extraction
For extracting structured data, prompt for specific output formats:
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "receipt.jpg"},
{"type": "text", "text": """Extract the following information from this receipt as JSON:
- merchant_name
- date
- items (list with name and price)
- subtotal
- tax
- total
Return only valid JSON."""}
]
}
]
The model generally follows formatting instructions well, though you may need to post-process the output for strict validation.
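A minimal post-processing sketch for that validation step; the helper name, fence-stripping heuristic, and required field set are illustrative rather than part of the Qwen tooling:
import json

def parse_receipt_json(raw_output):
    cleaned = raw_output.strip()
    # The model sometimes wraps JSON in a markdown fence; strip it before parsing
    if cleaned.startswith("```"):
        cleaned = cleaned.split("\n", 1)[-1]   # drop the opening ``` / ```json line
        cleaned = cleaned.rstrip("`").strip()  # drop the closing fence
    try:
        data = json.loads(cleaned)
    except json.JSONDecodeError:
        return None  # route to retry or manual review
    # Check that the fields the prompt asked for actually came back
    required = {"merchant_name", "date", "items", "subtotal", "tax", "total"}
    return data if required.issubset(data) else None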
Multi-Image Processing
Process multiple images in a single query:
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "image1.jpg"},
{"type": "image", "image": "image2.jpg"},
{"type": "text", "text": "Compare these two images. What are the main differences?"}
]
}
]
Batch Processing for Efficiency
For processing many images, batch them together:
def batch_process_images(image_paths, prompt, batch_size=4):
    results = []
    for i in range(0, len(image_paths), batch_size):
        batch_paths = image_paths[i:i+batch_size]
        batch_messages = [
            [{"role": "user", "content": [
                {"type": "image", "image": path},
                {"type": "text", "text": prompt}
            ]}]
            for path in batch_paths
        ]
        # Build one text prompt per conversation and collect all images for the batch
        texts = [
            processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
            for msg in batch_messages
        ]
        image_inputs, video_inputs = process_vision_info(batch_messages)
        inputs = processor(text=texts, images=image_inputs, videos=video_inputs,
                           padding=True, return_tensors="pt").to(model.device)
        output_ids = model.generate(**inputs, max_new_tokens=512)
        # Trim each prompt from its generated sequence before decoding
        trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
        results.extend(processor.batch_decode(trimmed, skip_special_tokens=True))
    return results
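Calling it might look like this (paths and prompt are placeholders):
captions = batch_process_images(["img1.jpg", "img2.jpg", "img3.jpg"], "Describe this image in one sentence.")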
Practical Applications
Here are concrete ways to use Qwen 2.5 VL in real workflows.
Automated Image Captioning for Datasets
Training image generation models requires captioned datasets. Use Qwen 2.5 VL to generate detailed captions for images:
caption_prompt = """Provide a detailed caption for this image suitable for training
an image generation model. Include: subjects, actions, setting, style, lighting,
colors, and composition. Be specific and descriptive."""
Generate captions at scale for your image collections, creating training data for LoRA training or model fine-tuning.
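A minimal sketch of that loop; describe_image() is a placeholder for a helper that wraps the single-image inference code shown earlier, and the folder layout is illustrative:
from pathlib import Path

def caption_folder(image_dir):
    # Write one .txt sidecar per image, the layout most LoRA trainers expect
    for image_path in sorted(Path(image_dir).glob("*.jpg")):
        # describe_image() wraps the inference steps from the Running Inference section
        caption = describe_image(str(image_path), caption_prompt)
        image_path.with_suffix(".txt").write_text(caption, encoding="utf-8")

caption_folder("training_images/")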
Screenshot Analysis for Automation
Analyze UI screenshots for automation or testing:
ui_prompt = """Analyze this screenshot and identify:
1. All clickable buttons and their labels
2. Text input fields
3. The main content or purpose of this screen
4. Any error messages or alerts
Format as a structured list."""
This enables building automation tools that understand what's on screen.
Document Processing Pipeline
Build document processing workflows that extract structured data:
invoice_prompt = """Extract all information from this invoice into a structured format:
- Invoice number and date
- Vendor details (name, address, contact)
- Client details
- Line items (description, quantity, unit price, total)
- Subtotal, tax, discounts, total
- Payment terms
Output as JSON."""
Process scanned invoices, receipts, or forms into database-ready formats.
Code Screenshot Analysis
For images of code, extract and explain:
code_prompt = """This is a screenshot of code. Please:
1. Transcribe the code exactly
2. Identify the programming language
3. Explain what this code does
4. Note any potential issues or improvements"""
Useful for extracting code from tutorial screenshots, analyzing error screenshots, or recovering code that exists only as images in documentation.
Chart Data Extraction
Extract data from chart images when source data isn't available:
chart_prompt = """Analyze this chart and extract:
1. Chart type
2. Title and axis labels
3. All data points or values (estimate if exact values aren't labeled)
4. Key trends or insights
Format the data as a table."""
ComfyUI Integration
Qwen 2.5 VL integrates into ComfyUI workflows through custom nodes, enabling vision-language capabilities within image generation pipelines.
Several community node packs provide Qwen 2.5 VL integration. Search the ComfyUI Manager for "Qwen" or "VLM" nodes. Once installed, you can use the model for:
- Automatic captioning: Generate prompts from reference images
- Image analysis: Analyze generated images for quality control
- Conditional logic: Make workflow decisions based on image content
- Iterative refinement: Use analysis to adjust generation parameters
A typical workflow might analyze a generated image, determine if it matches the intended prompt, and automatically re-generate with adjusted parameters if not.
Best Practices for Optimal Results
Getting the best results from Qwen 2.5 VL requires attention to prompting, image quality, and task matching.
Prompt Engineering
Write clear, specific prompts that define exactly what you want. Vague prompts get vague responses. Compare:
- Poor: "What's in this image?"
- Better: "Identify all the products visible in this store shelf image, listing brand names where readable."
Specify the output format when you need structured data. The model follows formatting instructions well when they're explicit.
For complex tasks, break them into steps. Instead of asking for everything at once, chain multiple queries that build on each other.
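A sketch of that chaining, reusing the messages format from earlier; ask() is illustrative shorthand for the full inference code:
# Step 1: inventory what is on the page
messages = [{"role": "user", "content": [
    {"type": "image", "image": "report_page.jpg"},
    {"type": "text", "text": "List every chart, table, and heading on this page."}
]}]
elements = ask(messages)  # ask() wraps the apply_chat_template / generate steps above

# Step 2: feed the answer back and drill into one element
messages.append({"role": "assistant", "content": elements})
messages.append({"role": "user", "content": [
    {"type": "text", "text": "For the first table you listed, extract its rows as JSON."}
]})
table_rows = ask(messages)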
Image Quality Considerations
Image quality significantly affects results. Higher resolution images with clear content produce better analysis. Blurry, compressed, or poorly lit images degrade output quality.
For documents, ensure text is legible at the resolution provided. For scenes, ensure relevant details are visible. If analysis is failing, try providing a higher quality image before assuming model limitations.
The model handles diverse image types but works best with images that a human could also interpret clearly.
Matching Model Size to Task
Don't use the 72B model for simple captions, and don't expect the 3B model to handle complex reasoning. Match model size to task complexity:
- 3B: Simple descriptions, basic OCR, straightforward questions
- 7B: Detailed analysis, document understanding, moderate reasoning
- 72B: Complex reasoning, critical applications, maximum accuracy
Using an appropriately sized model balances quality against resource usage and speed.
Handling Failures
The model occasionally fails or produces incorrect results, especially for edge cases. Build error handling into production workflows:
- Validate extracted data against expected formats
- Implement confidence thresholds where possible
- Have fallback logic for failed extractions
- Log failures for review and prompt refinement
No vision-language model is perfect. Design systems that handle imperfection gracefully.
Comparison with Alternatives
Qwen 2.5 VL exists in a space of vision-language models with different tradeoffs.
vs. GPT-4V / GPT-4o
OpenAI's multimodal models are likely the quality leaders for complex reasoning tasks. However, they require API access with per-query costs and send your images to OpenAI's servers. Qwen 2.5 VL runs locally for free with full privacy. For most tasks, the quality difference is minimal, making Qwen an excellent choice when local deployment matters.
vs. LLaVA
LLaVA is another popular open-source VLM. Qwen 2.5 VL generally outperforms LLaVA on benchmarks, particularly for document understanding and OCR. However, LLaVA has a larger ecosystem of fine-tuned variants for specific tasks.
vs. Claude with Vision
Anthropic's Claude models with vision capability are strong competitors but again require API access. For users already using Claude, the multimodal capability is convenient. For local deployment needs, Qwen 2.5 VL is the better choice.
The right choice depends on your priorities: maximum quality (GPT-4V/Claude), local deployment (Qwen 2.5 VL), or specific fine-tuned capabilities (LLaVA variants).
Advanced Usage
Beyond basic inference, several advanced techniques extend Qwen 2.5 VL's capabilities.
Fine-Tuning for Specific Tasks
You can fine-tune Qwen 2.5 VL on your own data to improve performance for specific tasks. LoRA fine-tuning requires fewer resources than full fine-tuning and works well for specialization. This is valuable when you have domain-specific image types or need particular output formats that prompting alone doesn't achieve.
High-Throughput Deployment
For production deployments requiring high throughput, consider vLLM for optimized inference:
pip install vllm
vLLM provides continuous batching and other optimizations that significantly increase throughput compared to standard transformers inference.
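As a rough sketch (assuming your vLLM version includes Qwen 2.5 VL multimodal support), you can expose an OpenAI-compatible endpoint with a single command:
vllm serve Qwen/Qwen2.5-VL-7B-Instruct --max-model-len 8192
Clients then send OpenAI-style chat completion requests, with images passed as URLs or base64, to the served endpoint.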
API Service Deployment
Wrap Qwen 2.5 VL in an API service for application integration:
import io
from fastapi import FastAPI, UploadFile, Form
from PIL import Image

# ... model and processor loading code from earlier ...

app = FastAPI()

@app.post("/analyze")
async def analyze_image(image: UploadFile, prompt: str = Form(...)):
    # Decode the upload into a PIL image the processor can consume
    pil_image = Image.open(io.BytesIO(await image.read())).convert("RGB")
    # run_inference() is a helper you define that wraps the steps from the Running Inference section
    result_text = run_inference(pil_image, prompt)
    return {"result": result_text}
This enables any application to access your locally-deployed vision-language capabilities.
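A quick way to exercise the endpoint from the command line (assuming the service runs locally on port 8000 and the prompt is sent as a form field, as in the sketch above):
curl -X POST http://localhost:8000/analyze -F "image=@photo.jpg" -F "prompt=Describe this image."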
Integration with Image Generation Workflows
Qwen 2.5 VL's image understanding capabilities complement image generation tools, enabling sophisticated workflows that use both analysis and creation. Understanding how to combine these capabilities opens powerful automation possibilities.
Automated Captioning for Training Datasets
One of the most valuable integrations is using Qwen 2.5 VL to generate captions for image generation training datasets. Training LoRAs or fine-tuning models requires accurately captioned images, and manual captioning becomes impractical for large datasets.
Create a captioning pipeline that processes your training images through Qwen 2.5 VL with prompts designed for your training needs. For character LoRAs, prompt for descriptions of poses, expressions, clothing, and distinguishing features. For style LoRAs, prompt for artistic elements like brushwork, color palette, composition, and mood.
The generated captions provide a starting point that you then refine. Qwen 2.5 VL captures details you might miss while captioning hundreds of images manually, and you can edit the results to add trigger words or adjust emphasis. This hybrid approach combines AI efficiency with human judgment for optimal training dataset quality.
For complete guidance on preparing training datasets, see our Flux LoRA training guide, which covers captioning strategies that apply across different base models.
Generation Quality Control
Use Qwen 2.5 VL to analyze generated images and assess whether they match your intentions. After generating batches of images, run them through analysis to check for prompt adherence, identify artifacts or quality issues, and verify that specific elements you requested actually appear.
This automated quality control catches issues before they enter your workflow. If you're generating product images, verify the product appears correctly. If you're generating portraits, verify facial features match your requirements. If you're generating specific compositions, verify elements are positioned as intended.
Implement conditional logic based on analysis results: automatically regenerate images that fail quality checks, flag images for manual review when analysis is uncertain, or sort images into quality tiers based on analysis scores. This transforms random generation attempts into a systematic quality process.
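A minimal sketch of such a check; check_image() and generate_image() are placeholders for your VQA inference and image generation code:
def passes_quality_check(image_path, expected_content):
    question = (f"Does this image clearly show: {expected_content}? "
                "Answer only YES or NO.")
    # check_image() wraps the VQA inference shown earlier
    answer = check_image(image_path, question)
    return answer.strip().upper().startswith("YES")

prompt = "a red ceramic mug on a wooden desk"
for attempt in range(3):
    image_path = generate_image(prompt)  # your generation pipeline
    if passes_quality_check(image_path, prompt):
        break  # keep this image; otherwise regenerate on the next attempt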
Reference-Based Generation
Qwen 2.5 VL can analyze reference images and generate prompts that capture their essential elements for use with image generation. Show it a photograph and ask it to describe elements you want to recreate in a different style. The model identifies composition, lighting, color relationships, and subject positioning that you can then request in your generation prompt.
This workflow bridges the gap between visual references and text prompts. Instead of trying to translate what you see into words yourself, let Qwen 2.5 VL extract the structural and stylistic elements, then use those descriptions to guide generation.
Advanced Configuration and Optimization
Beyond basic deployment, several configurations and optimizations improve Qwen 2.5 VL's performance for specific use cases.
Context Length and Memory Management
Qwen 2.5 VL supports long context windows, which matters for multi-turn conversations and documents with many pages. However, longer contexts consume more memory and slow inference.
Configure context length based on your use case. For simple image analysis questions, short contexts suffice. For multi-page document processing or conversations that build on previous exchanges, you need longer contexts. Cap output length with max_new_tokens when generating, and trim older conversation turns so the prompt context doesn't grow unchecked.
Monitor memory usage during inference. If you're running into OOM errors with the 7B model, check whether your context has grown unexpectedly long. Clear conversation history between unrelated queries to avoid accumulating context that slows future responses.
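A simple way to keep context bounded, assuming you maintain the messages list across turns (the turn limit is arbitrary):
def trim_history(messages, max_turns=6):
    # Keep the first message (which usually carries the image) plus the most recent turns
    if len(messages) <= max_turns + 1:
        return messages
    return [messages[0]] + messages[-max_turns:]

messages = trim_history(messages)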
Temperature and Sampling Configuration
Generation parameters affect how Qwen 2.5 VL responds. Lower temperature (0.1-0.3) produces more deterministic, factual responses suitable for document extraction and specific questions. Higher temperature (0.7-1.0) produces more varied, creative responses suitable for description and interpretation tasks.
For structured data extraction, use low temperature and consider setting top_k to a small value. You want the most likely correct answer, not creative variation. For descriptive tasks where multiple valid responses exist, moderate temperature produces more natural and detailed descriptions.
Experiment with these parameters for your specific tasks. The optimal settings depend on whether you prioritize consistency or expressiveness in outputs.
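In the transformers API these map onto standard generate() arguments; the values below are illustrative starting points:
# Deterministic (greedy) settings for extraction tasks
extraction_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)

# Sampled settings for descriptive tasks
description_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.9,
)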
Concurrent Processing Architecture
For high-throughput applications, design your deployment architecture to handle concurrent requests efficiently. Running multiple Qwen 2.5 VL instances requires sufficient GPU memory, but you can process multiple requests in parallel on a single instance using batching.
The transformers library supports batched inference. Prepare multiple queries with their images, process them together, and receive results for all. This amortizes model loading and memory overhead across multiple requests, improving throughput compared to sequential processing.
For even higher throughput, deploy multiple model instances behind a load balancer. This scales horizontally as request volume grows. Kubernetes or similar orchestration systems can manage instance scaling automatically based on queue depth or response latency.
If you're considering cloud deployment options, our guide on getting started with RunPod covers setting up GPU instances for AI workloads.
Real-World Implementation Examples
Concrete implementation examples demonstrate how to apply Qwen 2.5 VL capabilities to actual problems.
E-commerce Product Analysis
E-commerce platforms need to extract structured data from product images: identify the product category, extract visible text (brand names, model numbers), describe key features, and generate SEO-friendly descriptions.
Build a pipeline that processes product images through Qwen 2.5 VL with prompts designed for each extraction task. Parse the outputs into structured database fields. Use confidence thresholds to route uncertain extractions to human review rather than entering bad data.
This automation scales product catalog management. Instead of manually reviewing and describing each product image, operators review only the exceptions that need attention.
Content Moderation Pipeline
Platforms hosting user-generated images need content moderation. Qwen 2.5 VL can analyze images for policy violations, identifying inappropriate content, copyright concerns, or misleading information.
Design moderation prompts that check for specific policy categories. The model returns assessments for each category with explanations. Route flagged content to human reviewers with the model's reasoning provided as context.
This hybrid moderation catches obvious violations automatically while surfacing edge cases for human judgment. Reviewers work more efficiently because the model has already done initial analysis and provided relevant context.
Technical Documentation Processing
Technical documentation often mixes text, diagrams, code snippets, and screenshots. Qwen 2.5 VL can process these documents holistically, understanding both textual and visual elements.
Extract information from technical diagrams and correlate it with surrounding text. Transcribe code from screenshots and explain what it does. Generate summaries that incorporate information from both text and figures rather than treating them separately.
This capability supports documentation conversion, search indexing, and knowledge base population. Instead of treating images as opaque blobs, your systems understand their content and can answer questions that require comprehending both text and visuals together.
Accessibility Image Description
Making visual content accessible requires detailed image descriptions for users relying on screen readers. Qwen 2.5 VL generates the detailed, contextual descriptions that accessibility standards require.
Configure prompts to produce descriptions appropriate for accessibility contexts: concrete and specific, describing relevant visual information, avoiding subjective interpretation, and providing sufficient detail for someone who cannot see the image to understand its content and purpose.
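A prompt along these lines works as a starting point (adjust the length target to your own alt-text guidelines):
alt_text_prompt = """Write an image description for a screen reader user.
Describe the subject, setting, any visible text, and the image's apparent purpose.
Be concrete and objective, avoid speculation, and keep it under 125 words."""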
Automate description generation for image libraries, websites, or documents that need accessibility compliance. Human review ensures quality for critical content, but automation handles the volume that would otherwise remain undescribed.
Conclusion
Qwen 2.5 VL brings powerful vision-language capabilities to local deployment, enabling image understanding, document processing, and visual question answering without cloud API dependencies. The multiple model sizes accommodate hardware from consumer laptops to workstations, making it accessible across different resource levels.
The model excels at document OCR with structural understanding, detailed image description, flexible visual question answering, and chart analysis. For users who need these capabilities with privacy, without per-query costs, or with integration into local workflows, Qwen 2.5 VL is an excellent choice.
Deployment is straightforward with the transformers library, and the model integrates into various workflows from simple scripts to ComfyUI pipelines. By matching model size to task complexity and following prompting best practices, you can achieve results that approach or match cloud API alternatives while maintaining full control over your data and infrastructure.
Vision-language models represent a significant advancement in AI capabilities, and Qwen 2.5 VL makes these capabilities practical for local deployment. Whether you're building document processing pipelines, adding image understanding to applications, or exploring multimodal AI capabilities, Qwen 2.5 VL provides a capable and accessible foundation.