Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution
Jaap Jumelet, Willem H. Zuidema
23 October 2023 · arXiv: 2310.14840
Papers citing "Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution" (7 papers)
What Languages are Easy to Language-Model? A Perspective from Learning Probabilistic Regular Languages
Nadav Borenstein, Anej Svete, R. Chan, Josef Valvoda, Franz Nowak, Isabelle Augenstein, Eleanor Chodroff, Ryan Cotterell
06 Jun 2024
Do Transformers Parse while Predicting the Masked Word?
Haoyu Zhao, A. Panigrahi, Rong Ge, Sanjeev Arora
14 Mar 2023
"Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification
Jasmijn Bastings
Sebastian Ebert
Polina Zablotskaia
Anders Sandholm
Katja Filippova
102
75
0
14 Nov 2021
The paradox of the compositionality of natural language: a neural machine translation case study
Verna Dankers, Elia Bruni, Dieuwke Hupkes
12 Aug 2021
The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, ..., Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy
31 Dec 2020
Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020
What you can cram into a single vector: Probing sentence embeddings for linguistic properties
Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni
03 May 2018