🤖 audio-classification
MERT-v1-330M (m-a-p/MERT-v1-330M)

Model Details
  • Full Model ID: m-a-p/MERT-v1-330M
  • Pipeline / Task: audio-classification
  • Library: transformers
  • Downloads (all-time): 45.4K
  • Likes: 83
  • Last Modified: 5/25/2025
  • Author / Org: m-a-p
  • Private: No (public)
⚡ Quick Usage (Python)

Using the 🤗 Transformers library. Install with pip install transformers

from transformers import pipeline

# Load the model (MERT ships custom modeling code, so trust_remote_code is required)
pipe = pipeline(
    "audio-classification",
    model="m-a-p/MERT-v1-330M",
    trust_remote_code=True,
)

# Run inference on an audio file (path, URL, or raw waveform array)
result = pipe("path/to/audio.wav")
print(result)
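Besides a file path, the pipeline also accepts a raw waveform. As a minimal, download-free sketch, here is one way to prepare a mono float32 waveform at 24 kHz (the sample rate the MERT-v1 models were trained on) using NumPy; the final pipeline call is commented out because it requires fetching the checkpoint:

```python
import numpy as np

SAMPLE_RATE = 24_000  # MERT-v1 models expect 24 kHz audio

def make_test_tone(freq_hz=440.0, seconds=1.0, sr=SAMPLE_RATE):
    """Generate a mono sine wave as a float32 array (the format pipelines accept)."""
    t = np.linspace(0.0, seconds, int(sr * seconds), endpoint=False)
    return np.sin(2 * np.pi * freq_hz * t).astype(np.float32)

wave = make_test_tone()
print(wave.shape)  # (24000,)

# result = pipe({"raw": wave, "sampling_rate": SAMPLE_RATE})  # requires the checkpoint
```

Real audio loaded at a different rate should be resampled to 24 kHz (e.g. with librosa or torchaudio) before being passed in.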
🏷️ Tags
transformers, pytorch, mert_model, feature-extraction, music, audio-classification, custom_code, arxiv:2306.00107, license:cc-by-nc-4.0, region:us
More audio-classification Models
  • laion/clap-htsat-fused: 17.0M downloads, ❤️ 77
  • jakeBland/wav2vec-vm-finetune: 905.0K downloads, ❤️ 11
  • audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim: 821.5K downloads, ❤️ 158
🚀 Use This Model

Access model files, inference API, and full documentation on Hugging Face.

🤖 Task: audio-classification

This model is designed for the audio-classification task. Explore more models for this use case.

📊 Popularity
  • Downloads: 45.4K
  • Community Likes: 83
🛠️ Requirements
  • Install: pip install transformers
  • Python 3.8+ recommended for Transformers.
  • GPU (CUDA) speeds up inference significantly.
  • Use model.half() for fp16 on limited VRAM.