arXiv: 2309.17189
RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation
Samuel Pegg, Kai Li, Xiaolin Hu
29 September 2023
Papers citing "RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation" (8 of 8 papers shown):
- Music Source Separation with Band-split RNN (Yi Luo, Jianwei Yu; 30 Sep 2022)
- TF-GridNet: Making Time-Frequency Domain Models Great Again for Monaural Speaker Separation (Zhong-Qiu Wang, Samuele Cornell, Shukjae Choi, Younglo Lee, Byeonghak Kim, Shinji Watanabe; 08 Sep 2022)
- Speech Separation Using an Asynchronous Fully Recurrent Convolutional Neural Network (Xiaolin Hu, Kai Li, Weiyi Zhang, Yi Luo, Jean-Marie Lemercier, Timo Gerkmann; 04 Dec 2021)
- The Right to Talk: An Audio-Visual Transformer Approach (Thanh-Dat Truong, C. Duong, T. D. Vu, H. Pham, Bhiksha Raj, Ngan Le, Khoa Luu; 06 Aug 2021)
- VisualVoice: Audio-Visual Speech Separation with Cross-Modal Consistency (Ruohan Gao, Kristen Grauman; 08 Jan 2021)
- Dual-Path Transformer Network: Direct Context-Aware Modeling for End-to-End Monaural Speech Separation (Jing-jing Chen, Qi-rong Mao, Dong Liu; 28 Jul 2020)
- VoxCeleb2: Deep Speaker Recognition (Joon Son Chung, Arsha Nagrani, Andrew Zisserman; 14 Jun 2018)
- Lip Reading Sentences in the Wild (Joon Son Chung, A. Senior, Oriol Vinyals, Andrew Zisserman; 16 Nov 2016)