Can neural networks acquire a structural bias from raw linguistic data?
Alex Warstadt, Samuel R. Bowman
14 July 2020 (arXiv:2007.06761)
Papers citing "Can neural networks acquire a structural bias from raw linguistic data?" (34 papers shown)
Tree Transformers are an Ineffective Model of Syntactic Constituency
Michael Ginn (25 Nov 2024)

Kallini et al. (2024) do not compare impossible languages with constituency-based ones
Tim Hunter (16 Oct 2024)

A Review of the Applications of Deep Learning-Based Emergent Communication
Brendon Boldt, David R. Mortensen (03 Jul 2024)

Probing the Category of Verbal Aspect in Transformer Language Models
Anisia Katinskaia, R. Yangarber (04 Jun 2024)

Filtered Corpus Training (FiCT) Shows that Language Models can Generalize from Indirect Evidence
Abhinav Patil, Jaap Jumelet, Yu Ying Chiu, Andy Lapastora, Peter Shen, Lexie Wang, Clevis Willrich, Shane Steinert-Threlkeld (24 May 2024)

Semantic Sensitivities and Inconsistent Predictions: Measuring the Fragility of NLI Models
Erik Arakelyan, Zhaoqi Liu, Isabelle Augenstein (25 Jan 2024)

In-context Learning Generalizes, But Not Always Robustly: The Case of Syntax
Aaron Mueller, Albert Webson, Jackson Petty, Tal Linzen (13 Nov 2023)

Second Language Acquisition of Neural Language Models
Miyu Oba, Tatsuki Kuribayashi, Hiroki Ouchi, Taro Watanabe (05 Jun 2023)

How to Plant Trees in Language Models: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases
Aaron Mueller, Tal Linzen (31 May 2023)

Measuring Inductive Biases of In-Context Learning with Underspecified Demonstrations
Chenglei Si, Dan Friedman, Nitish Joshi, Shi Feng, Danqi Chen, He He (22 May 2023)

Does Vision Accelerate Hierarchical Generalization of Neural Language Learners?
Tatsuki Kuribayashi (01 Feb 2023)

A Discerning Several Thousand Judgments: GPT-3 Rates the Article + Adjective + Numeral + Noun Construction
Kyle Mahowald (29 Jan 2023)

How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech
Aditya Yedetore, Tal Linzen, Robert Frank, R. Thomas McCoy (26 Jan 2023)

Probing for Incremental Parse States in Autoregressive Language Models
Tiwalayo Eisape, Vineet Gangireddy, R. Levy, Yoon Kim (17 Nov 2022)

Causal Analysis of Syntactic Agreement Neurons in Multilingual Language Models
Aaron Mueller, Yudi Xia, Tal Linzen (25 Oct 2022)

Can Language Representation Models Think in Bets?
Zhi-Bin Tang, M. Kejriwal (14 Oct 2022)

Vision Transformers provably learn spatial structure
Samy Jelassi, Michael E. Sander, Yuan-Fang Li (13 Oct 2022)

Understanding Prior Bias and Choice Paralysis in Transformer-based Language Representation Models through Four Experimental Probes
Ke Shen, M. Kejriwal (03 Oct 2022)

OOD-Probe: A Neural Interpretation of Out-of-Domain Generalization
Zining Zhu, Soroosh Shahtalebi, Frank Rudzicz (25 Aug 2022)

What Artificial Neural Networks Can Tell Us About Human Language Acquisition
Alex Warstadt, Samuel R. Bowman (17 Aug 2022)

Probing for the Usage of Grammatical Number
Karim Lasri, Tiago Pimentel, Alessandro Lenci, Thierry Poibeau, Ryan Cotterell (19 Apr 2022)

Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models
Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, Sebastian Schuster (17 Mar 2022)

Interpreting the Robustness of Neural NLP Models to Textual Perturbations
Yunxiang Zhang, Liangming Pan, Samson Tan, Min-Yen Kan (14 Oct 2021)

Transformers Generalize Linearly
Jackson Petty, Robert Frank (24 Sep 2021)

Awakening Latent Grounding from Pretrained Language Models for Semantic Parsing
Qian Liu, Dejian Yang, Jiahui Zhang, Jiaqi Guo, Bin Zhou, Jian-Guang Lou (22 Sep 2021)

Pre-Trained Models: Past, Present and Future
Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, ..., Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu (14 Jun 2021)

Language Models Use Monotonicity to Assess NPI Licensing
Jaap Jumelet, Milica Denić, Jakub Szymanik, Dieuwke Hupkes, Shane Steinert-Threlkeld (28 May 2021)

Does injecting linguistic structure into language models lead to better alignment with brain recordings?
Mostafa Abdou, Ana Valeria González, Mariya Toneva, Daniel Hershcovich, Anders Søgaard (29 Jan 2021)

Language Modelling as a Multi-Task Problem
Leon Weber, Jaap Jumelet, Elia Bruni, Dieuwke Hupkes (27 Jan 2021)

LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning
Yuhuai Wu, M. Rabe, Wenda Li, Jimmy Ba, Roger C. Grosse, Christian Szegedy (15 Jan 2021)

Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually)
Alex Warstadt, Yian Zhang, Haau-Sing Li, Haokun Liu, Samuel R. Bowman (11 Oct 2020)

A Primer in BERTology: What we know about how BERT works
Anna Rogers, Olga Kovaleva, Anna Rumshisky (27 Feb 2020)

BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance
R. Thomas McCoy, Junghyun Min, Tal Linzen (07 Nov 2019)

What you can cram into a single vector: Probing sentence embeddings for linguistic properties
Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni (03 May 2018)