🤖 any-to-any

gemma-4-E2B-it-ONNX

onnx-community/gemma-4-E2B-it-ONNX

77.1K Downloads · ❤️ 15 Likes · 🏷️ 10 Tags · 📦 Library: transformers.js
Model Details
Full Model ID: onnx-community/gemma-4-E2B-it-ONNX
Pipeline / Task: any-to-any
Library: transformers.js
Downloads (all-time): 77.1K
Likes: 15
Last Modified: 4/10/2026
Author / Org: onnx-community
Private: No (public)
⚡ Quick Usage (Python)

Using the 🤗 Transformers library. Install with pip install transformers. Note that this repository ships ONNX weights intended for Transformers.js, and "any-to-any" is a Hub task tag rather than a built-in transformers pipeline task, so the snippet below is illustrative; a Transformers.js example follows after it.

from transformers import pipeline

# Load the model. "any-to-any" is the Hub task tag; if your transformers
# version does not accept it, use a supported task such as "image-text-to-text".
pipe = pipeline("any-to-any", model="onnx-community/gemma-4-E2B-it-ONNX")

# Run inference on a text prompt
result = pipe("Your input here")
print(result)
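⚡ Quick Usage (JavaScript / Transformers.js)

Since the library listed for this model is transformers.js, a Node/browser snippet may be more useful. Install with npm install @huggingface/transformers. The following is a minimal sketch assuming Transformers.js v3; the "text-generation" task, the dtype value, and the generation options are assumptions, not details taken from the model card.

// Minimal sketch, assuming Transformers.js v3 and that this export works
// with the text-generation pipeline.
import { pipeline } from "@huggingface/transformers";

// Load the ONNX model; "q4" requests a quantized variant if one is published.
const generator = await pipeline(
  "text-generation",
  "onnx-community/gemma-4-E2B-it-ONNX",
  { dtype: "q4" },
);

// Run inference on a chat-style prompt.
const messages = [{ role: "user", content: "Your input here" }];
const output = await generator(messages, { max_new_tokens: 128 });
console.log(output);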
🏷️ Tags
transformers.js · onnx · gemma4 · image-text-to-text · conversational · any-to-any · base_model:google/gemma-4-E2B-it · base_model:quantized:google/gemma-4-E2B-it · license:apache-2.0 · region:us
More any-to-any Models
  • OneThinker-SFT-Qwen3-8B (OneThink/OneThinker-SFT-Qwen3-8B): 2.9M downloads · ❤️ 4 likes
  • gemma-4-E4B-it (google/gemma-4-E4B-it): 2.4M downloads · ❤️ 769 likes
  • gemma-4-E2B-it (google/gemma-4-E2B-it): 1.8M downloads · ❤️ 499 likes
🚀 Use This Model

Access model files, inference API, and full documentation on Hugging Face.

🤖 Task: any-to-any

This model is designed for the any-to-any task. Explore more models for this use case.

📊 Popularity
Downloads: 77.1K
❤️ Community Likes: 15
🛠️ Requirements
  • Transformers.js (this export's listed library): install with npm install @huggingface/transformers
  • Python route: pip install transformers; Python 3.8+ recommended.
  • GPU (CUDA) speeds up PyTorch inference significantly; Transformers.js v3 can run on WebGPU in the browser.
  • With the Python transformers library, use model.half() for fp16 on limited VRAM. A resource-options sketch for Transformers.js is shown below.
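The following is a minimal sketch of the resource-related loading options, assuming Transformers.js v3; the device and dtype values are illustrative and depend on which backends and quantized variants are actually available for this repository.

// Minimal sketch: choosing a backend and a quantized dtype in Transformers.js v3.
import { pipeline } from "@huggingface/transformers";

const generator = await pipeline(
  "text-generation",                      // task name is an assumption; the Hub tag is "any-to-any"
  "onnx-community/gemma-4-E2B-it-ONNX",
  {
    device: "webgpu",                     // use WebGPU in the browser; omit for the default backend
    dtype: "q4",                          // quantized weights to reduce memory use
  },
);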