Model Details
Full Model ID: mlx-community/gemma-4-e2b-it-4bit
Pipeline / Task: any-to-any
Library: mlx
Downloads (all-time): 218.3K
Likes: 9
Last Modified: 4/13/2026
Author / Org: mlx-community
Private: No (public)
⚡ Quick Usage (Python)
Using the 🤗 Transformers library. Install with pip install transformers
from transformers import pipeline

# Load the model; "any-to-any" is the Hub pipeline tag, not a named
# Transformers task, so let pipeline() infer the task from the model config
pipe = pipeline(model="mlx-community/gemma-4-e2b-it-4bit")

# Run inference
result = pipe("Your input here")
print(result)
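Since this is an mlx-community 4-bit checkpoint, a common alternative is the mlx-lm package on Apple silicon (pip install mlx-lm). The lines below are a minimal sketch, not the page's documented path; they assume the repo is compatible with mlx_lm.load.

from mlx_lm import load, generate

# Download the 4-bit weights from the Hub and build the model and tokenizer
model, tokenizer = load("mlx-community/gemma-4-e2b-it-4bit")

# Generate a completion for a simple prompt
text = generate(model, tokenizer, prompt="Your input here", max_tokens=100)
print(text)

🏷️ Tags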
mlx · safetensors · gemma4 · any-to-any · license:apache-2.0 · 4-bit · region:us
🚀 Use This Model
Access model files, inference API, and full documentation on Hugging Face.
🤖 Task: any-to-any
This model is designed for the any-to-any task. Explore more models for this use case.
📊 Popularity
⬇ Downloads: 218.3K
❤️ Community Likes: 9
🛠️ Requirements
- Install: pip install mlx (MLX itself runs on Apple silicon).
- Python 3.8+ recommended for Transformers.
- A CUDA GPU speeds up inference significantly on the Transformers path.
- Use model.half() for fp16 on limited VRAM; a short sketch follows this list.
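A minimal fp16 loading sketch for the Transformers path, for illustration only. It assumes the checkpoint can be loaded with AutoModelForCausalLM and that a CUDA GPU is available, which may not hold for an MLX-quantized repo; treat the model ID below as a placeholder.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder ID; an MLX 4-bit export may need an MLX-specific loader instead
model_id = "mlx-community/gemma-4-e2b-it-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# torch_dtype=torch.float16 loads the weights directly in fp16,
# equivalent to calling model.half() after a full-precision load
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

inputs = tokenizer("Your input here", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))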