arXiv: 2010.12283
ST-BERT: Cross-modal Language Model Pre-training For End-to-end Spoken Language Understanding
23 October 2020
Minjeong Kim, Gyuwan Kim, Sang-Woo Lee, Jung-Woo Ha
Papers citing
"ST-BERT: Cross-modal Language Model Pre-training For End-to-end Spoken Language Understanding"
A Data-Efficient Visual-Audio Representation with Intuitive Fine-tuning for Voice-Controlled Robots
Peixin Chang, Shuijing Liu, Tianchen Ji, Neeloy Chakraborty, Kaiwen Hong, Katherine Driggs-Campbell
23 Jan 2023
TESSP: Text-Enhanced Self-Supervised Speech Pre-training
Zhuoyuan Yao, Shuo Ren, Sanyuan Chen, Ziyang Ma, Pengcheng Guo, Lei Xie
24 Nov 2022
Tokenwise Contrastive Pretraining for Finer Speech-to-BERT Alignment in End-to-End Speech-to-Intent Systems
Vishal Sunder, Eric Fosler-Lussier, Samuel Thomas, H. Kuo, Brian Kingsbury
11 Apr 2022
Building Robust Spoken Language Understanding by Cross Attention between Phoneme Sequence and ASR Hypothesis
Zexun Wang, Yuquan Le, Yi Zhu, Yuming Zhao, M.-W. Feng, Meng Chen, Xiaodong He
22 Mar 2022
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
26 Sep 2016