Decoding Methods for Neural Narrative Generation

14 October 2020
Alexandra DeLucia, Aaron Mueller, Xiang Lisa Li, João Sedoc
arXiv: 2010.07375

Papers citing "Decoding Methods for Neural Narrative Generation"

18 papers
Multi-Hypothesis Distillation of Multilingual Neural Translation Models for Low-Resource Languages
Aarón Galiano-Jiménez, Juan Antonio Pérez-Ortiz, F. Sánchez-Martínez, Víctor M. Sánchez-Cartagena
29 Jul 2025

Standardize: Aligning Language Models with Expert-Defined Standards for Content Generation
Joseph Marvin Imperial, Gail Forey, Harish Tayyar Madabushi
19 Feb 2024

Anti-LM Decoding for Zero-shot In-context Machine Translation
Suzanna Sia, Alexandra DeLucia, Kevin Duh
14 Nov 2023

Are NLP Models Good at Tracing Thoughts: An Overview of Narrative Understanding
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Lixing Zhu, Runcong Zhao, Lin Gui, Yulan He
28 Oct 2023

Probing the Creativity of Large Language Models: Can models produce divergent semantic association?
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Honghua Chen, Nai Ding
17 Oct 2023

Closing the Curious Case of Neural Text Degeneration
International Conference on Learning Representations (ICLR), 2023
Matthew Finlayson, John Hewitt, Alexander Koller, Swabha Swayamdipta, Ashish Sabharwal
02 Oct 2023

Flesch or Fumble? Evaluating Readability Standard Alignment of Instruction-Tuned Language Models
IEEE Games Entertainment Media Conference (IEEE GEM), 2023
Joseph Marvin Imperial, Harish Tayyar Madabushi
11 Sep 2023

Tachikuma: Understading Complex Interactions with Multi-Character and Novel Objects by Large Language Models
Yuanzhi Liang, Linchao Zhu, Yezhou Yang
24 Jul 2023

FIREBALL: A Dataset of Dungeons and Dragons Actual-Play with Structured Game State Information
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Andrew Zhu, Karmanya Aggarwal, Alexander H. Feng, Lara J. Martin, Chris Callison-Burch
02 May 2023

I Cast Detect Thoughts: Learning to Converse and Guide with Intents and Theory-of-Mind in Dungeons and Dragons
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Pei Zhou, Andrew Zhu, Jennifer Hu, Jay Pujara, Xiang Ren, Chris Callison-Burch, Yejin Choi, Prithviraj Ammanabrolu
20 Dec 2022

Contrastive Decoding: Open-ended Text Generation as Optimization
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Xiang Lisa Li, Ari Holtzman, Daniel Fried, Abigail Z. Jacobs, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, M. Lewis
27 Oct 2022

A Continuum of Generation Tasks for Investigating Length Bias and Degenerate Repetition
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2022
Darcey Riley, David Chiang
19 Oct 2022

Uniform Complexity for Text Generation
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Joseph Marvin Imperial, Harish Tayyar Madabushi
11 Apr 2022

On the probability-quality paradox in language generation
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Clara Meister, Gian Wiher, Tiago Pimentel, Robert Bamler
31 Mar 2022

On Decoding Strategies for Neural Text Generators
Transactions of the Association for Computational Linguistics (TACL), 2022
Gian Wiher, Clara Meister, Robert Bamler
29 Mar 2022

Do Language Models Plagiarize?
The Web Conference (WWW), 2022
Jooyoung Lee, Thai Le, Jinghui Chen, Dongwon Lee
15 Mar 2022

Locally Typical Sampling
Transactions of the Association for Computational Linguistics (TACL), 2022
Clara Meister, Tiago Pimentel, Gian Wiher, Robert Bamler
01 Feb 2022

Inspiration through Observation: Demonstrating the Influence of Automatically Generated Text on Creative Writing
International Conference on Innovative Computing and Cloud Computing (ICCC), 2021
Melissa Roemmele
08 Jul 2021