🤖 audio-classification

ast-finetuned-audioset-10-10-0.4593

MIT/ast-finetuned-audioset-10-10-0.4593

Model Details
  • Full Model ID: MIT/ast-finetuned-audioset-10-10-0.4593
  • Pipeline / Task: audio-classification
  • Library: transformers
  • Downloads (all-time): 416.1K
  • Likes: 352
  • Last Modified: 9/6/2023
  • Author / Org: MIT
  • Private: No (public)
⚡ Quick Usage (Python)

This model is used through the 🤗 Transformers library. Install it with: pip install transformers

from transformers import pipeline

# Load the audio-classification pipeline
pipe = pipeline("audio-classification", model="MIT/ast-finetuned-audioset-10-10-0.4593")

# Run inference on an audio file (decoding requires ffmpeg; 16 kHz mono works best)
result = pipe("path/to/audio.wav")
print(result)  # list of {"label": ..., "score": ...} dicts, highest score first
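For more control over preprocessing, the model can also be loaded directly with its feature extractor instead of the pipeline. A minimal sketch, assuming a 16 kHz mono waveform (the one second of silence below is a stand-in for real audio):

```python
import torch
from transformers import AutoFeatureExtractor, ASTForAudioClassification

model_id = "MIT/ast-finetuned-audioset-10-10-0.4593"
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = ASTForAudioClassification.from_pretrained(model_id)
model.eval()

# Stand-in input: 1 second of silence at 16 kHz (replace with a real waveform)
waveform = torch.zeros(16000).numpy()
inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels) over AudioSet classes

predicted_id = int(logits.argmax(-1))
print(model.config.id2label[predicted_id])
```

The feature extractor converts the raw waveform into the log-mel spectrogram patches the Audio Spectrogram Transformer expects, so no manual spectrogram code is needed.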
🏷️ Tags
transformers, pytorch, safetensors, audio-spectrogram-transformer, audio-classification, arxiv:2104.01778, license:bsd-3-clause, endpoints_compatible, deploy:azure, region:us
More audio-classification Models
  • laion/clap-htsat-fused: 17.0M downloads, ❤️ 77
  • jakeBland/wav2vec-vm-finetune: 905.0K downloads, ❤️ 11
  • audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim: 821.5K downloads, ❤️ 158
🚀 Use This Model

Access model files, inference API, and full documentation on Hugging Face.

🤖 Task: audio-classification

This model is designed for the audio-classification task: it assigns one or more class labels (here, AudioSet sound-event categories) to an input audio clip.

📊 Popularity
  • Downloads: 416.1K
  • ❤️ Community Likes: 352
🛠️ Requirements
  • Install: pip install transformers
  • Python 3.8+ recommended for Transformers.
  • GPU (CUDA) speeds up inference significantly.
  • Use model.half() for fp16 on limited VRAM.
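The GPU and fp16 tips above can be combined when loading the pipeline. A sketch, assuming the device/dtype choice should follow whatever hardware is present (the pipeline accepts torch_dtype directly, so an explicit model.half() call is only needed if you load the model yourself):

```python
import torch
from transformers import pipeline

# Pick GPU + fp16 when CUDA is available, otherwise fall back to CPU + fp32
use_cuda = torch.cuda.is_available()
pipe = pipeline(
    "audio-classification",
    model="MIT/ast-finetuned-audioset-10-10-0.4593",
    device=0 if use_cuda else -1,
    torch_dtype=torch.float16 if use_cuda else torch.float32,
)
```

Half precision roughly halves VRAM use at a small cost in numeric precision, which is usually acceptable for classification scores.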