Analyzing Wrap-Up Effects through an Information-Theoretic Lens
Clara Meister, Tiago Pimentel, T. H. Clark, Robert Bamler, R. Levy
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
arXiv:2203.17213 (v2, latest) · 31 March 2022

Papers citing "Analyzing Wrap-Up Effects through an Information-Theoretic Lens" (5 of 5 papers shown)
1. Large Language Models Are Human-Like Internally
   Tatsuki Kuribayashi, Yohei Oseki, Souhaib Ben Taieb, Kentaro Inui, Timothy Baldwin
   03 Feb 2025

2. Psychometric Predictive Power of Large Language Models
   Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin
   13 Nov 2023

3. Measuring Information in Text Explanations
   Zining Zhu, Frank Rudzicz
   06 Oct 2023

4. On the Effect of Anticipation on Reading Times
   Tiago Pimentel, Clara Meister, Ethan Gotlieb Wilcox, R. Levy, Robert Bamler
   Transactions of the Association for Computational Linguistics (TACL), 2022 · 25 Nov 2022

5. MedJEx: A Medical Jargon Extraction Model with Wiki's Hyperlink Span and Contextualized Masked Language Model Score
   Sunjae Kwon, Zonghai Yao, H. Jordan, David Levy, Brian Corner, Hong-ye Yu
   Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022 · 12 Oct 2022