ResearchTrend.AI — Papers › 2006.00751 › Cited By
Evaluation of CNN-based Automatic Music Tagging Models
Minz Won, Andrés Ferraro, Dmitry Bogdanov, Xavier Serra
1 June 2020 · [VLM] · ArXiv (abs) · PDF · HTML

Papers citing "Evaluation of CNN-based Automatic Music Tagging Models" (14 of 64 shown)
1. A Benchmarking Initiative for Audio-Domain Music Generation Using the Freesound Loop Dataset
   Tun-Min Hung, Bo-Yu Chen, Yen-Tung Yeh, Yi-Hsuan Yang — 03 Aug 2021

2. EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation
   Hsiao-Tzu Hung, Joann Ching, Seungheon Doh, Nabin Kim, Juhan Nam, Yi-Hsuan Yang — 03 Aug 2021

3. Codified audio language modeling learns useful representations for music information retrieval
   Rodrigo Castellon, Chris Donahue, Percy Liang — 12 Jul 2021

4. Improving Sound Event Classification by Increasing Shift Invariance in Convolutional Neural Networks
   Eduardo Fonseca, Andrés Ferraro, Xavier Serra — 01 Jul 2021 · [AI4TS]

5. A Modulation Front-End for Music Audio Tagging
   Cyrus Vahidi, C. Saitis, Gyorgy Fazekas — 25 May 2021

6. MusCaps: Generating Captions for Music Audio
   Ilaria Manco, Emmanouil Benetos, Elio Quinton, Gyorgy Fazekas — 24 Apr 2021

7. MuSLCAT: Multi-Scale Multi-Level Convolutional Attention Transformer for Discriminative Music Modeling on Raw Waveforms
   Kai Middlebrook, Shyam Sudhakaran, David Guy Brizan — 06 Apr 2021

8. Enriched Music Representations with Multiple Cross-modal Contrastive Learning
   Andrés Ferraro, Xavier Favory, Konstantinos Drossos, Yuntae Kim, Dmitry Bogdanov — 01 Apr 2021

9. Listen, Read, and Identify: Multimodal Singing Language Identification of Music
   Keunwoo Choi, Yuxuan Wang — 02 Mar 2021

10. TräumerAI: Dreaming Music with StyleGAN
    Dasaem Jeong, Seungheon Doh, Taegyun Kwon — 09 Feb 2021 · [GAN]

11. Multimodal Metric Learning for Tag-based Music Retrieval
    Minz Won, Sergio Oramas, Oriol Nieto, F. Gouyon, Xavier Serra — 30 Oct 2020

12. Mood Classification Using Listening Data
    Filip Korzeniowski, Oriol Nieto, Matthew C. McCallum, Minz Won, Sergio Oramas, Erik M. Schmidt — 22 Oct 2020

13. FSD50K: An Open Dataset of Human-Labeled Sound Events
    Eduardo Fonseca, Xavier Favory, Jordi Pons, F. Font, Xavier Serra — 01 Oct 2020

14. audioLIME: Listenable Explanations Using Source Separation
    Verena Haunschmid, Ethan Manilow, Gerhard Widmer — 02 Aug 2020 · [FAtt]