Exploring the limits of decoder-only models trained on public speech recognition corpora

31 January 2024
Ankit Gupta, G. Saon, Brian Kingsbury

Papers citing "Exploring the limits of decoder-only models trained on public speech recognition corpora"

Prepending or Cross-Attention for Speech-to-Text? An Empirical Comparison
North American Chapter of the Association for Computational Linguistics (NAACL), 2025
Tsz Kin Lam, Marco Gaido, Sara Papi, L. Bentivogli, Barry Haddow
04 Jan 2025

OWSM-CTC: An Open Encoder-Only Speech Foundation Model for Speech Recognition, Translation, and Language Identification
Yifan Peng, Yui Sudo, Muhammad Shakeel, Shinji Watanabe
20 Feb 2024