
Reverse Engineering Configurations of Neural Text Generation Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2020
arXiv:2004.06201
13 April 2020
Yi Tay, Dara Bahri, Che Zheng, Clifford Brunk, Donald Metzler, Andrew Tomkins

Papers citing "Reverse Engineering Configurations of Neural Text Generation Models"

14 papers
Detection Avoidance Techniques for Large Language Models
Data & Policy (DP), 2025
Sinclair Schneider, Florian Steuber, João A. G. Schneider, Gabi Dreo Rodosek
10 Mar 2025

CodeIP: A Grammar-Guided Multi-Bit Watermark for Large Language Models of Code
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
Batu Guan, Yao Wan, Zhangqian Bi, Zheng Wang, Hongyu Zhang, Yulei Sui, Pan Zhou
31 Dec 2024

Distortion-free Watermarks are not Truly Distortion-free under Watermark Key Collisions
Yihan Wu, Ruibo Chen, Zhengmian Hu, Yanshuo Chen, Junfeng Guo, Hongyang R. Zhang, Heng-Chiao Huang
02 Jun 2024

How well can machine-generated texts be identified and can language models be trained to avoid identification?
Sinclair Schneider, Florian Steuber, João A. G. Schneider, Gabi Dreo Rodosek
25 Oct 2023

A Resilient and Accessible Distribution-Preserving Watermark for Large Language Models
International Conference on Machine Learning (ICML), 2023
Yihan Wu, Zhengmian Hu, Junfeng Guo, Hongyang R. Zhang, Heng-Chiao Huang
11 Oct 2023

Unbiased Watermark for Large Language Models
International Conference on Learning Representations (ICLR), 2023
Zhengmian Hu, Lichang Chen, Xidong Wu, Yihan Wu, Hongyang R. Zhang, Heng-Chiao Huang
22 Sep 2023

Detecting ChatGPT: A Survey of the State of Detecting ChatGPT-Generated Text
Recent Advances in Natural Language Processing (RANLP), 2023
Mahdi Dhaini, Wessel Poelman, Ege Erdogan
14 Sep 2023

Reverse-Engineering Decoding Strategies Given Blackbox Access to a Language Generation System
International Conference on Natural Language Generation (INLG), 2023
Daphne Ippolito, Nicholas Carlini, Katherine Lee, Milad Nasr, Yun William Yu
09 Sep 2023

CHEAT: A Large-scale Dataset for Detecting ChatGPT-writtEn AbsTracts
IEEE Transactions on Big Data, 2023
Peipeng Yu, Jiahan Chen, Xuan Feng, Zhihua Xia
24 Apr 2023

A Watermark for Large Language Models
International Conference on Machine Learning (ICML), 2023
John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein
24 Jan 2023

Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods
IEEE Access, 2022
Evan Crothers, Nathalie Japkowicz, H. Viktor
13 Oct 2022

Automatic Detection of Machine Generated Text: A Critical Survey
International Conference on Computational Linguistics (COLING), 2020
Ganesh Jawahar, Muhammad Abdul-Mageed, L. Lakshmanan
02 Nov 2020

Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
Reuben Tan, Bryan A. Plummer, Kate Saenko
16 Sep 2020

Generative Models are Unsupervised Predictors of Page Quality: A Colossal-Scale Study
Dara Bahri, Yi Tay, Che Zheng, Donald Metzler, Clifford Brunk, Andrew Tomkins
17 Aug 2020