ResearchTrend.AI
Pareto Probing: Trading Off Accuracy for Complexity
arXiv:2010.02180 · 5 October 2020
Tiago Pimentel, Naomi Saphra, Adina Williams, Ryan Cotterell

Papers citing "Pareto Probing: Trading Off Accuracy for Complexity" (12 papers)
Trustworthy Social Bias Measurement
Rishi Bommasani, Percy Liang · 20 Dec 2022
Probing via Prompting
Jiaoda Li, Ryan Cotterell, Mrinmaya Sachan · 4 Jul 2022
On the Usefulness of Embeddings, Clusters and Strings for Text Generator Evaluation
Tiago Pimentel, Clara Meister, Ryan Cotterell · 31 May 2022
Visualizing the Relationship Between Encoded Linguistic Information and Task Performance
Jiannan Xiang, Huayang Li, Defu Lian, Guoping Huang, Taro Watanabe, Lemao Liu · 29 Mar 2022
Conditional Probing: Measuring Usable Information Beyond a Baseline
John Hewitt, Kawin Ethayarajh, Percy Liang, Christopher D. Manning · 19 Sep 2021
How Does Adversarial Fine-Tuning Benefit BERT? [AAML]
J. Ebrahimi, Hao Yang, Wei Zhang · 31 Aug 2021
A Multilabel Approach to Morphosyntactic Probing
Naomi Tachikawa Shapiro, Amandalynne Paullada, Shane Steinert-Threlkeld · 17 Apr 2021
Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little
Koustuv Sinha, Robin Jia, Dieuwke Hupkes, J. Pineau, Adina Williams, Douwe Kiela · 14 Apr 2021
DirectProbe: Studying Representations without Classifiers
Yichu Zhou, Vivek Srikumar · 13 Apr 2021
The Rediscovery Hypothesis: Language Models Need to Meet Linguistics
Vassilina Nikoulina, Maxat Tezekbayev, Nuradil Kozhakhmet, Madina Babazhanova, Matthias Gallé, Z. Assylbekov · 2 Mar 2021
When Do You Need Billions of Words of Pretraining Data?
Yian Zhang, Alex Warstadt, Haau-Sing Li, Samuel R. Bowman · 10 Nov 2020
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation [AIMat]
Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, …, Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean · 26 Sep 2016