BEAT2AASIST model with layer fusion for ESDD 2026 Challenge

Sanghyeok Chung
Eujin Kim
Donggun Kim
Gaeun Heo
Jeongbin You
Nahyun Lee
Sunmook Choi
Soyul Han
Seungsang Oh
Il-Youp Kwak
Main: 2 pages, 1 figure, 1 table; bibliography: 1 page
Abstract

Recent advances in audio generation have increased the risk of realistic environmental sound manipulation, motivating the ESDD 2026 Challenge as the first large-scale benchmark for Environmental Sound Deepfake Detection (ESDD). We propose BEAT2AASIST, which extends BEATs-AASIST by splitting BEATs-derived representations along the frequency or channel dimension and processing them with dual AASIST branches. To enrich feature representations, we incorporate top-k transformer layer fusion using concatenation, CNN-gated, and SE-gated strategies. In addition, vocoder-based data augmentation is applied to improve robustness against unseen spoofing methods. Experimental results on the official test sets demonstrate that the proposed approach achieves competitive performance across the challenge tracks.
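The abstract names SE-gated top-k layer fusion as one of the fusion strategies. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the shapes, reduction ratio, and weight names (`w1`, `w2`) are illustrative assumptions, with k selected BEATs layer outputs gated per channel in squeeze-and-excitation style and summed.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_gated_fusion(layer_feats, w1, w2):
    """Fuse top-k transformer layer outputs with an SE-style gate (sketch).

    layer_feats: (k, T, C) -- k selected layer outputs (hypothetical shapes)
    w1: (C, C // r), w2: (C // r, C) -- assumed SE bottleneck weights
    Returns: (T, C) gated sum over the k layers.
    """
    # Squeeze: global average over the time axis -> (k, C)
    z = layer_feats.mean(axis=1)
    # Excitation: bottleneck MLP (ReLU) + sigmoid -> per-layer channel gates
    g = sigmoid(np.maximum(z @ w1, 0.0) @ w2)         # (k, C)
    # Scale each layer's features by its gate and sum across layers
    return (layer_feats * g[:, None, :]).sum(axis=0)  # (T, C)

# Toy shapes: k=3 layers, T=10 frames, C=8 channels, reduction r=2
rng = np.random.default_rng(0)
k, T, C, r = 3, 10, 8, 2
feats = rng.standard_normal((k, T, C))
fused = se_gated_fusion(feats,
                        rng.standard_normal((C, C // r)),
                        rng.standard_normal((C // r, C)))
print(fused.shape)  # (10, 8)
```

Concatenation fusion would instead stack the k layer outputs along the channel axis, and CNN-gated fusion would replace the bottleneck MLP with a convolution over the stacked layers; the SE variant shown here keeps the output dimensionality equal to a single layer's.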
