🌐 Translation

t5-base (google-t5/t5-base)

1.4M Downloads · ❤️ 773 Likes · 🏷️ 28 Tags · 📦 transformers Library
Model Details
Full Model ID: google-t5/t5-base
Pipeline / Task: translation
Library: transformers
Downloads (all-time): 1.4M
Likes: 773
Last Modified: 2/14/2024
Author / Org: google-t5
Private: No — public
⚡ Quick Usage (Python)

Uses the 🤗 Transformers library. Install it with: pip install transformers

from transformers import pipeline

# Load the model (t5-base expects a language pair, e.g. en→fr)
pipe = pipeline("translation_en_to_fr", model="google-t5/t5-base")

# Run inference
result = pipe("Your input here")
print(result)  # [{'translation_text': '...'}]
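
t5-base is a multi-task text2text model: the same checkpoint handles its English→German, English→French, and English→Romanian pairs, plus summarization, selected by a prefix on the input text. A minimal sketch using the generic text2text-generation pipeline; the example sentences are illustrative only:

from transformers import pipeline

# One generic text2text pipeline; T5 selects the task from the input prefix
pipe = pipeline("text2text-generation", model="google-t5/t5-base")

# Same checkpoint, different task prefixes
print(pipe("translate English to German: How old are you?")[0]["generated_text"])
print(pipe("translate English to French: How old are you?")[0]["generated_text"])
print(pipe("summarize: T5 casts every NLP problem as text-to-text, so one "
           "model can translate, summarize, and classify.")[0]["generated_text"])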
🏷️ Tags
transformers · pytorch · tf · jax · rust · safetensors · t5 · text2text-generation · summarization · translation · en · fr · ro · de · dataset:c4 · arxiv:1805.12471 · arxiv:1708.00055 · arxiv:1704.05426 · arxiv:1606.05250 · arxiv:1808.09121 · arxiv:1810.12885 · arxiv:1905.10044 · arxiv:1910.09700 · license:apache-2.0 · text-generation-inference · endpoints_compatible · deploy:azure · region:us
More Translation Models
See all →
  • t5-small (google-t5/t5-small): 2.2M downloads · ❤️ 542
  • vntl-llama3-8b-v2-gguf (lmg-anon/vntl-llama3-8b-v2-gguf): 1.8M downloads · ❤️ 13
  • nllb-200-distilled-600M (facebook/nllb-200-distilled-600M): 1.2M downloads · ❤️ 885
🚀 Use This Model

Access model files, inference API, and full documentation on Hugging Face.

Open on Hugging Face → · Browse Model Files ↗
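
The inference API mentioned above can also be called from Python. A rough sketch using the huggingface_hub client library (this assumes t5-base is currently served by Hugging Face's hosted inference API; the token string is a placeholder for your own access token):

from huggingface_hub import InferenceClient

# Placeholder token; supply your own Hugging Face access token
client = InferenceClient(token="hf_xxx")

# Send the text to the hosted google-t5/t5-base endpoint
result = client.translation("The house is wonderful.", model="google-t5/t5-base")
print(result.translation_text)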
🌐 Task: Translation

This model is designed for the Translation task. Explore more models for this use case.

All Translation Models →
📊 Popularity
Downloads: 1.4M
❤️ Community Likes: 773
🛠️ Requirements
  • Install: pip install transformers
  • Python 3.8+ recommended for Transformers.
  • GPU (CUDA) speeds up inference significantly.
  • Use model.half() for fp16 on limited VRAM (see the sketch below).
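
A minimal sketch of the last two tips combined (assumes PyTorch is installed; passing torch_dtype=torch.float16 at load time has the same effect as calling model.half() afterwards):

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-base")
# fp16 roughly halves VRAM use; fall back to fp32 on CPU
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google-t5/t5-base",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

inputs = tokenizer("translate English to French: Good morning!",
                   return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))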