Do LLMs Understand Social Knowledge? Evaluating the Sociability of Large Language Models with SocKET Benchmark

24 May 2023 · arXiv:2305.14938
Minje Choi, Jiaxin Pei, Sagar Kumar, Chang Shu, David Jurgens
ALM · LLMAG

Papers citing "Do LLMs Understand Social Knowledge? Evaluating the Sociability of Large Language Models with SocKET Benchmark"

13 / 13 papers shown
 1. Assessing how hyperparameters impact Large Language Models' sarcasm detection performance
    Montgomery Gole, Andriy Miranskyy · AI4MH · 08 Apr 2025

 2. The Call for Socially Aware Language Technologies
    Diyi Yang, Dirk Hovy, David Jurgens, Barbara Plank · VLM · 24 Feb 2025

 3. Knowledge Planning in Large Language Models for Domain-Aligned Counseling Summarization
    Aseem Srivastava, Smriti Joshi, Tanmoy Chakraborty, Md. Shad Akhtar · 23 Sep 2024

 4. Language Evolution for Evading Social Media Regulation via LLM-based Multi-agent Simulation
    Jinyu Cai, Jialong Li, Mingyue Zhang, Munan Li, Chen-Shu Wang, Kenji Tei · LLMAG · 05 May 2024

 5. Fine-Grained Detection of Solidarity for Women and Migrants in 155 Years of German Parliamentary Debates
    Aida Kostikova, Benjamin Paassen, Dominik Beese, Ole Putz, Gregor Wiedemann, Steffen Eger · 09 Oct 2022

 6. Measure and Improve Robustness in NLP Models: A Survey
    Xuezhi Wang, Haohan Wang, Diyi Yang · 15 Dec 2021

 7. Multitask Prompted Training Enables Zero-Shot Task Generalization
    Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush · LRM · 15 Oct 2021

 8. Can Machines Learn Morality? The Delphi Experiment
    Liwei Jiang, Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jenny T Liang, ..., Yulia Tsvetkov, Oren Etzioni, Maarten Sap, Regina A. Rini, Yejin Choi · FaML · 14 Oct 2021

 9. Measuring Sentence-Level and Aspect-Level (Un)certainty in Science Communications
    Jiaxin Pei, David Jurgens · 30 Sep 2021

10. Latent Hatred: A Benchmark for Understanding Implicit Hate Speech
    Mai Elsherief, Caleb Ziems, D. Muchlinski, Vaishnavi Anupindi, Jordyn Seybolt, M. D. Choudhury, Diyi Yang · 11 Sep 2021

11. The Power of Scale for Parameter-Efficient Prompt Tuning
    Brian Lester, Rami Al-Rfou, Noah Constant · VPVLM · 18 Apr 2021

12. Making Pre-trained Language Models Better Few-shot Learners
    Tianyu Gao, Adam Fisch, Danqi Chen · 31 Dec 2020

13. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
    Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman · ELM · 20 Apr 2018