ResearchTrend.AI
arXiv:2005.00850
ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation
2 May 2020
Lifu Tu, Richard Yuanzhe Pang, Sam Wiseman, Kevin Gimpel
Papers citing "ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation" (11 of 11 papers shown):

  1. Energy-Based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Probabilistic Interpretations
     Xin-Chao Xu, Yi Qin, Lu Mi, Hao Wang, Xiaomeng Li. 03 Jan 2025.
  2. Multi-Level Optimal Transport for Universal Cross-Tokenizer Knowledge Distillation on Language Models
     Xiao Cui, Mo Zhu, Yulei Qin, Liang Xie, Wengang Zhou, Yiming Li. 19 Dec 2024.
  3. f-Divergence Minimization for Sequence-Level Knowledge Distillation
     Yuqiao Wen, Zichao Li, Wenyu Du, Lili Mou. 27 Jul 2023.
  4. Energy-Based Models for Cross-Modal Localization using Convolutional Transformers
     Alan Wu, Michael S. Ryoo. 06 Jun 2023.
  5. On Reinforcement Learning and Distribution Matching for Fine-Tuning Language Models with no Catastrophic Forgetting
     Tomasz Korbak, Hady ElSahar, Germán Kruszewski, Marc Dymetman. 01 Jun 2022.
  6. Token Dropping for Efficient BERT Pretraining
     Le Hou, Richard Yuanzhe Pang, Dinesh Manocha, Yuexin Wu, Xinying Song, Xiaodan Song, Denny Zhou. 24 Mar 2022.
  7. A Survey of Controllable Text Generation using Transformer-based Pre-trained Language Models
     Hanqing Zhang, Haolin Song, Shaoyu Li, Ming Zhou, Dawei Song. 14 Jan 2022.
  8. Fully Non-autoregressive Neural Machine Translation: Tricks of the Trade
     Jiatao Gu, X. Kong. 31 Dec 2020.
  9. Energy-Based Models for Continual Learning
     Shuang Li, Yilun Du, Gido M. van de Ven, Igor Mordatch. 24 Nov 2020.
 10. Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Translation
     Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, Noah A. Smith. 18 Jun 2020.
 11. Effective Approaches to Attention-based Neural Machine Translation
     Thang Luong, Hieu H. Pham, Christopher D. Manning. 17 Aug 2015.