🤖 audio-classification

clap-htsat-fused

laion/clap-htsat-fused

17.0M Downloads · ❤️ 77 Likes · 🏷️ 13 Tags · 📦 transformers Library
Model Details
Full Model ID: laion/clap-htsat-fused
Pipeline / Task: audio-classification
Library: transformers
Downloads (all-time): 17.0M
Likes: 77
Last Modified: 1/12/2026
Author / Org: laion
Private: No (public)
⚡ Quick Usage (Python)

Using the 🤗 Transformers library. Install with pip install transformers

from transformers import pipeline

# Load the model
pipe = pipeline("audio-classification", model="laion/clap-htsat-fused")

# Run inference on an audio file (a path, URL, or raw waveform array)
result = pipe("path/to/audio.wav")
print(result)
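The pipeline returns a list of {"label", "score"} dictionaries. A small post-processing sketch for filtering predictions by confidence; the scores below are hypothetical placeholders standing in for real pipeline output:

```python
# Hypothetical pipeline output for illustration; real values come from pipe(...)
result = [
    {"label": "dog barking", "score": 0.82},
    {"label": "rain", "score": 0.11},
    {"label": "speech", "score": 0.07},
]

def top_labels(preds, threshold=0.1):
    """Return labels whose score meets the threshold, highest score first."""
    ranked = sorted(preds, key=lambda p: p["score"], reverse=True)
    return [p["label"] for p in ranked if p["score"] >= threshold]

print(top_labels(result))                 # ['dog barking', 'rain']
print(top_labels(result, threshold=0.5))  # ['dog barking']
```

Raising the threshold trades recall for precision; a sensible default depends on how many candidate classes the model distinguishes.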
🏷️ Tags
transformers, pytorch, safetensors, clap, feature-extraction, zero-shot audio classification, zero-shot audio retrieval, audio-classification, en, arxiv:2211.06687, license:apache-2.0, endpoints_compatible, region:us
More audio-classification Models
  • jakeBland/wav2vec-vm-finetune (905.0K downloads, ❤️ 11)
  • audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim (821.5K downloads, ❤️ 158)
  • speechbrain/emotion-recognition-wav2vec2-IEMOCAP (530.2K downloads, ❤️ 184)
🚀 Use This Model

Access model files, inference API, and full documentation on Hugging Face.

🤖 Task: audio-classification

This model is designed for the audio-classification task. Explore more models for this use case.

📊 Popularity
Downloads: 17.0M
❤️ Community Likes: 77
🛠️ Requirements
  • Install: pip install transformers
  • Python 3.8+ recommended for Transformers.
  • GPU (CUDA) speeds up inference significantly.
  • Use model.half() for fp16 on limited VRAM.
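The fp16 tip above can be sketched as follows, assuming a PyTorch backend. A tiny nn.Linear stands in for the full checkpoint; with the real model you would call .half() on the loaded module the same way:

```python
import torch

# Tiny stand-in module; a real CLAP checkpoint converts the same way
layer = torch.nn.Linear(4, 2)
layer = layer.half()  # cast all parameters to float16, roughly halving VRAM use

print(next(layer.parameters()).dtype)  # torch.float16
```

Inputs must be cast to float16 as well, and fp16 inference is best done on a GPU; some CPU kernels do not support half precision.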