🤖 any-to-any

Qwen3-Omni-30B-A3B-Instruct-AWQ-4bit

cyankiwi/Qwen3-Omni-30B-A3B-Instruct-AWQ-4bit

108.2K Downloads · ❤️ 49 Likes · 🏷️ 13 Tags · 📦 Library: transformers
Model Details
Full Model ID: cyankiwi/Qwen3-Omni-30B-A3B-Instruct-AWQ-4bit
Pipeline / Task: any-to-any
Library: transformers
Downloads (all-time): 108.2K
Likes: 49
Last Modified: 9/28/2025
Author / Org: cyankiwi
Private: No (public)
⚡ Quick Usage (Python)

Uses the 🤗 Transformers library. Install it with: pip install transformers

# "any-to-any" is a Hub task tag, not a task string that transformers'
# pipeline() accepts, so load the model through its dedicated classes.
# Class names follow the upstream Qwen3-Omni model card and require a
# recent transformers release.
from transformers import Qwen3OmniMoeForConditionalGeneration, Qwen3OmniMoeProcessor

model_id = "cyankiwi/Qwen3-Omni-30B-A3B-Instruct-AWQ-4bit"
model = Qwen3OmniMoeForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's dtype; AWQ weights stay 4-bit
    device_map="auto",    # place layers on the available GPU(s)
)
processor = Qwen3OmniMoeProcessor.from_pretrained(model_id)
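
For a first generation call, here is a minimal text-only sketch. The chat-template usage and the (text_ids, audio) return pair are assumptions taken from the upstream Qwen/Qwen3-Omni-30B-A3B-Instruct model card, not from this quantized repo; audio, image, and video inputs use the same message format, so check that card before relying on this.

conversation = [
    {"role": "user", "content": [{"type": "text", "text": "Introduce yourself briefly."}]},
]
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
inputs = processor(text=text, return_tensors="pt").to(model.device)
# Assumption: Qwen3-Omni's generate() returns generated audio alongside text token ids.
text_ids, audio = model.generate(**inputs, max_new_tokens=128)
reply = processor.batch_decode(
    text_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(reply)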
🏷️ Tags
transformers · safetensors · qwen3_omni_moe · text-to-audio · multimodal · any-to-any · en · base_model:Qwen/Qwen3-Omni-30B-A3B-Instruct · base_model:quantized:Qwen/Qwen3-Omni-30B-A3B-Instruct · license:other · endpoints_compatible · compressed-tensors · region:us
More any-to-any Models
  • OneThink/OneThinker-SFT-Qwen3-8B (2.9M downloads, ❤️ 4)
  • google/gemma-3n-E4B-it (2.4M downloads, ❤️ 769)
  • google/gemma-3n-E2B-it (1.8M downloads, ❤️ 499)
🚀 Use This Model

Access model files, inference API, and full documentation on Hugging Face.

🤖 Task: any-to-any

This model is designed for the any-to-any task. Explore more models for this use case.

📊 Popularity
Downloads: 108.2K
❤️ Community Likes: 49
🛠️ Requirements
  • Install: pip install transformers
  • Python 3.9+ (recent Transformers releases no longer support 3.8).
  • GPU (CUDA) speeds up inference significantly.
  • On limited VRAM, load in fp16 instead of the default fp32; see the sketch after this list.
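
As a sketch of the half-precision tip above: prefer requesting fp16 at load time over calling model.half() afterwards, so weights never materialize in fp32. The class name is the same assumption as in the Quick Usage snippet; the AWQ 4-bit weights are already compressed, so the cast mainly affects the non-quantized layers and activations.

import torch
from transformers import Qwen3OmniMoeForConditionalGeneration

model_id = "cyankiwi/Qwen3-Omni-30B-A3B-Instruct-AWQ-4bit"
model = Qwen3OmniMoeForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 for non-quantized layers and activations
    device_map="auto",          # spread layers across GPU/CPU as VRAM allows
)
# Equivalent after-the-fact cast (briefly holds fp32 weights, so less preferable):
# model = model.half()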