🤖 image-text-to-text

Qwen2.5-VL-3B-Instruct

Qwen/Qwen2.5-VL-3B-Instruct

6.1M Downloads · ❤️ 638 Likes · 🏷️ 15 Tags · 📦 transformers Library
Model Details
Full Model ID: Qwen/Qwen2.5-VL-3B-Instruct
Pipeline / Task: image-text-to-text
Library: transformers
Downloads (all-time): 6.1M
Likes: 638
Last Modified: 4/6/2025
Author / Org: Qwen
Private: No (public)
⚡ Quick Usage (Python)

This example uses the 🤗 Transformers library; install it with pip install transformers. The image URL in the snippet below is a placeholder, so point it at your own image.

from transformers import pipeline

# Load the image-text-to-text pipeline for Qwen2.5-VL
pipe = pipeline("image-text-to-text", model="Qwen/Qwen2.5-VL-3B-Instruct")

# Chat-style input: an image (placeholder URL) paired with a text prompt
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/your-image.jpg"},
        {"type": "text", "text": "Describe this image."},
    ],
}]

result = pipe(text=messages, max_new_tokens=128)  # Run inference
print(result)
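
If you want finer control over dtype, device placement, and decoding, you can load the processor and model directly instead of using the pipeline. The sketch below is illustrative rather than canonical: the image URL is a placeholder, device_map="auto" assumes the accelerate package is installed, and the official model card also documents a variant that uses the qwen_vl_utils helper.

import requests
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "Qwen/Qwen2.5-VL-3B-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Placeholder image URL; replace with your own image
url = "https://example.com/your-image.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Build the chat prompt; {"type": "image"} marks where the image is inserted
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

# Generate, then drop the prompt tokens before decoding the reply
output_ids = model.generate(**inputs, max_new_tokens=128)
reply_ids = output_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(reply_ids, skip_special_tokens=True)[0])

On a CPU-only machine, drop device_map="auto" and the fp16 dtype and expect slower generation.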
🏷️ Tags
transformers · safetensors · qwen2_5_vl · image-text-to-text · multimodal · conversational · en · arxiv:2309.00071 · arxiv:2409.12191 · arxiv:2308.12966 · eval-results · text-generation-inference · endpoints_compatible · deploy:azure · region:us
More image-text-to-text Models
Qwen/Qwen3-VL-2B-Instruct: 64.9M downloads · ❤️ 368 likes
Qwen/Qwen2.5-VL-7B-Instruct: 8.8M downloads · ❤️ 1.5K likes
zai-org/GLM-OCR: 7.3M downloads · ❤️ 1.6K likes
🚀 Use This Model

Access model files, inference API, and full documentation on Hugging Face.

Open on Hugging Face → · Browse Model Files ↗
🤖 Task: image-text-to-text

This model is designed for the image-text-to-text task. Explore more models for this use case.

All image-text-to-text Models →
📊 Popularity
Downloads: 6.1M
❤️ Community Likes: 638
🛠️ Requirements
  • Install: pip install transformers
  • Python 3.8+ recommended for Transformers.
  • GPU (CUDA) speeds up inference significantly.
  • On limited VRAM, load the weights in fp16 (call model.half() or pass torch_dtype=torch.float16); see the sketch below this list.
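
As a concrete version of the GPU and fp16 points above, here is a minimal setup sketch. It assumes torch is installed and simply passes standard torch_dtype / device arguments to pipeline(); nothing in it is specific to this model beyond the model ID.

import torch
from transformers import pipeline

# Use the first CUDA GPU if available, otherwise fall back to CPU
use_gpu = torch.cuda.is_available()
pipe = pipeline(
    "image-text-to-text",
    model="Qwen/Qwen2.5-VL-3B-Instruct",
    torch_dtype=torch.float16 if use_gpu else torch.float32,  # fp16 roughly halves VRAM use
    device=0 if use_gpu else -1,
)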