Empirical Evaluation of Pre-trained Transformers for Human-Level NLP: The Role of Sample Size and Dimensionality (arXiv:2105.03484)

North American Chapter of the Association for Computational Linguistics (NAACL), 2021
7 May 2021
Adithya Ganesan
Matthew Matero
Aravind Reddy Ravula
Huy-Hien Vu
H. Andrew Schwartz

Papers citing "Empirical Evaluation of Pre-trained Transformers for Human-Level NLP: The Role of Sample Size and Dimensionality"

11 citing papers listed.
Idiosyncratic Versus Normative Modeling of Atypical Speech Recognition: Dysarthric Case Studies
Vishnu Raja
Adithya Ganesan
Anand Syamkumar
Ritwik Banerjee
H. Andrew Schwartz
20 Sep 2025
SOCIALITE-LLAMA: An Instruction-Tuned Model for Social Scientific Tasks
Gourab Dey
Adithya Ganesan
Yash Kumar Lal
Manal Shah
Shreyashee Sinha
Matthew Matero
Salvatore Giorgi
Vivek Kulkarni
H. Andrew Schwartz
03 Feb 2024
ALBA: Adaptive Language-based Assessments for Mental Health
North American Chapter of the Association for Computational Linguistics (NAACL), 2023
Vasudha Varadarajan
S. Sikström
Oscar Kjell
H. Andrew Schwartz
11 Nov 2023
Systematic Evaluation of GPT-3 for Zero-Shot Personality Estimation
Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA), 2023
Adithya Ganesan
Yash Kumar Lal
August Håkan Nilsson
H. Andrew Schwartz
01 Jun 2023
ChatGPT: Jack of all trades, master of none
Information Fusion (Inf. Fusion), 2023
Jan Kocoń
Igor Cichecki
Oliwier Kaszyca
Mateusz Kochanek
Dominika Szydło
...
Maciej Piasecki
Łukasz Radliński
Konrad Wojtasik
Stanisław Woźniak
Przemysław Kazienko
21 Feb 2023
Bike Frames: Understanding the Implicit Portrayal of Cyclists in the News
Xingmeng Zhao
Dan Schumacher
Sashank Nalluri
Xavier Walton
Suhana Shrestha
Anthony Rios
15 Jan 2023
On Text-based Personality Computing: Challenges and Future Directions
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Qixiang Fang
Anastasia Giachanou
A. Bagheri
L. Boeschoten
E. V. Kesteren
Mahdi Shafiee Kamalabad
Daniel L. Oberski
13 Dec 2022
Human Language Modeling
Findings of the Association for Computational Linguistics (Findings), 2022
Nikita Soni
Matthew Matero
Niranjan Balasubramanian
H. Andrew Schwartz
10 May 2022
Different Affordances on Facebook and SMS Text Messaging Do Not Impede Generalization of Language-Based Predictive Models
International Conference on Web and Social Media (ICWSM), 2022
Tingting Liu
Salvatore Giorgi
Xiangyu Tao
Sharath Chandra Guntuku
Douglas Bellew
Brenda L. Curtis
Pallavi V. Kulkarni
03 Feb 2022
Evaluating Contextual Embeddings and their Extraction Layers for Depression Assessment
Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA), 2021
Matthew Matero
Albert Y. C. Hung
H. Andrew Schwartz
27 Dec 2021
MeLT: Message-Level Transformer with Masked Document Representations as Pre-Training for Stance Detection
Matthew Matero
Nikita Soni
Niranjan Balasubramanian
H. Andrew Schwartz
16 Sep 2021