Model Details
Full Model ID: audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim
Pipeline / Task: audio-classification
Library: transformers
Downloads (all-time): 821.5K
Likes: 158
Last Modified: 2024-09-19
Author / Org: audeering
Private: No (public)
⚡ Quick Usage (Python)
Uses the 🤗 Transformers library. Install it with: pip install transformers
from transformers import pipeline

# Load the model
pipe = pipeline("audio-classification", model="audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim")

# Run inference on an audio file
result = pipe("path/to/audio.wav")
print(result)

🏷️ Tags
transformers, pytorch, safetensors, wav2vec2, speech, audio, audio-classification, emotion-recognition, en, dataset:msp-podcast, arxiv:2203.07378, license:cc-by-nc-sa-4.0, endpoints_compatible, deploy:azure, region:us
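Wav2vec2-based checkpoints are trained on 16 kHz mono audio, and the audio-classification pipeline also accepts a raw NumPy waveform instead of a file path. Below is a minimal sketch of preparing a waveform at the expected rate; the linear-interpolation resampler and the synthetic stereo signal are illustrative assumptions, not part of the model card (for real audio you would typically load and resample with a library such as librosa or torchaudio):

```python
import numpy as np

TARGET_SR = 16_000  # wav2vec2 models expect 16 kHz mono input

def to_16k_mono(waveform: np.ndarray, source_sr: int) -> np.ndarray:
    """Downmix to mono and resample to 16 kHz via linear interpolation (illustrative)."""
    if waveform.ndim == 2:  # (channels, samples) -> mono
        waveform = waveform.mean(axis=0)
    if source_sr != TARGET_SR:
        duration = waveform.shape[0] / source_sr
        n_target = int(round(duration * TARGET_SR))
        old_t = np.linspace(0.0, duration, num=waveform.shape[0], endpoint=False)
        new_t = np.linspace(0.0, duration, num=n_target, endpoint=False)
        waveform = np.interp(new_t, old_t, waveform)
    return waveform.astype(np.float32)

# Example: one second of 44.1 kHz stereo noise becomes 16 000 mono samples
stereo = np.random.randn(2, 44_100)
mono_16k = to_16k_mono(stereo, 44_100)
print(mono_16k.shape)  # (16000,)

# The prepared array can then be passed to the pipeline directly, e.g.:
# result = pipe({"raw": mono_16k, "sampling_rate": 16_000})
```

Passing a pre-resampled array avoids relying on the pipeline's own decoding path when your audio does not come from a file on disk.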
🚀 Use This Model
Access model files, inference API, and full documentation on Hugging Face.
🤖 Task: audio-classification
This model is designed for the audio-classification task.
🛠️ Requirements
- Install: pip install transformers
- Python 3.8+ recommended for Transformers.
- A GPU (CUDA) speeds up inference significantly.
- Use model.half() for fp16 inference on limited VRAM.
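The last tip works because fp16 halves the memory footprint of every weight tensor: PyTorch's model.half() casts parameters from 32-bit to 16-bit floats. A quick NumPy sketch of the size effect (the array below stands in for a single weight matrix and is an illustration, not the Transformers API):

```python
import numpy as np

# A stand-in "weight matrix" the size of a large linear layer (1024 x 1024)
weights_fp32 = np.ones((1024, 1024), dtype=np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4194304 bytes (4 MiB)
print(weights_fp16.nbytes)  # 2097152 bytes (2 MiB) -- exactly half

# With Transformers + PyTorch the equivalent is model.half(), or passing
# torch_dtype=torch.float16 when constructing the pipeline.
```

Note that fp16 can slightly change numerical results; for classification-style outputs the effect is usually negligible, but it is worth spot-checking against fp32 on a few inputs.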