ResearchTrend.AI

An Effective Non-Autoregressive Model for Spoken Language Understanding
arXiv:2108.07005 · 16 August 2021
Lizhi Cheng, Weijia Jia, Wenmian Yang
Topics: OffRL

Papers citing "An Effective Non-Autoregressive Model for Spoken Language Understanding"

8 / 8 papers shown
Towards Spoken Language Understanding via Multi-level Multi-grained Contrastive Learning
Xuxin Cheng, Wanshi Xu, Zhihong Zhu, Hongxiang Li, Yuexian Zou
31 May 2024
A Two-Stage Prediction-Aware Contrastive Learning Framework for Multi-Intent NLU
Guanhua Chen, Yutong Yao, Derek F. Wong, Lidia S. Chao
05 May 2024
Capture Salient Historical Information: A Fast and Accurate Non-Autoregressive Model for Multi-turn Spoken Language Understanding
Lizhi Cheng, Weijia Jia, Wenmian Yang
Topics: OffRL
24 Jun 2022
Label-aware Multi-level Contrastive Learning for Cross-lingual Spoken Language Understanding
Shining Liang, Linjun Shou, Jian Pei, Ming Gong, Wanli Zuo, Xianglin Zuo, Daxin Jiang
07 May 2022
A Survey on Non-Autoregressive Generation for Neural Machine Translation and Beyond
Yisheng Xiao, Lijun Wu, Junliang Guo, Juntao Li, M. Zhang, Tao Qin, Tie-Yan Liu
Topics: 3DV, MedIm, AI4CE
20 Apr 2022
Learning Light-Weight Translation Models from Deep Transformer
Bei Li, Ziyang Wang, Hui Liu, Quan Du, Tong Xiao, Chunliang Zhang, Jingbo Zhu
Topics: VLM
27 Dec 2020
Regularized Attentive Capsule Network for Overlapped Relation Extraction
Tianyi Liu, Xiangyu Lin, Weijia Jia, Mingliang Zhou, Wei Zhao
18 Dec 2020
A Stack-Propagation Framework with Token-Level Intent Detection for Spoken Language Understanding
Libo Qin, Wanxiang Che, Yangming Li, Haoyang Wen, Ting Liu
05 Sep 2019