
arXiv:2109.04867
Studying word order through iterative shuffling
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
10 September 2021
Nikolay Malkin, Sameera Lanka, Pranav Goel, Nebojsa Jojic

Papers citing "Studying word order through iterative shuffling"

8 papers listed
Black Big Boxes: Tracing Adjective Order Preferences in Large Language Models
Jaap Jumelet, Lisa Bylinina, Willem H. Zuidema, Jakub Szymanik
02 Jul 2024
Amortizing intractable inference in large language models
International Conference on Learning Representations (ICLR), 2023
Marvin Schmitt, Moksh Jain, Daniel Habermann, Younesse Kaddar, Ullrich Kothe, Stefan T. Radev, Nikolay Malkin
06 Oct 2023
Human-imperceptible, Machine-recognizable Images
International Joint Conference on Artificial Intelligence (IJCAI), 2023
Fusheng Hao, Fengxiang He, Yikai Wang, Fuxiang Wu, Jing Zhang, Jun Cheng, Dacheng Tao
06 Jun 2023
Language Model Behavior: A Comprehensive Survey
International Conference on Computational Logic (ICCL), 2023
Tyler A. Chang, Benjamin Bergen
20 Mar 2023
Order-sensitive Shapley Values for Evaluating Conceptual Soundness of NLP Models
Kaiji Lu, Anupam Datta
01 Jun 2022
Recovering Private Text in Federated Learning of Language Models
Neural Information Processing Systems (NeurIPS), 2022
Samyak Gupta, Yangsibo Huang, Zexuan Zhong, Tianyu Gao, Kai Li, Danqi Chen
17 May 2022
Grammatical cues to subjecthood are redundant in a majority of simple clauses across languages
Kyle Mahowald, Evgeniia Diachek, E. Gibson, Evelina Fedorenko, Richard Futrell
30 Jan 2022
Coherence boosting: When your pretrained language model is not paying enough attention
Nikolay Malkin, Zhen Wang, Nebojsa Jojic
15 Oct 2021