ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2407.05977
Exploring Human-LLM Conversations: Mental Models and the Originator of Toxicity

8 July 2024
Johannes Schneider
Arianna Casanova Flores
Anne-Catherine Kranz

Papers citing "Exploring Human-LLM Conversations: Mental Models and the Originator of Toxicity"

4 papers shown.
1. Consistency of Responses and Continuations Generated by Large Language Models on Social Media
   Wenlu Fan, Y. X. Zhu, Chenyang Wang, Bin Wang, Wentao Xu. 14 Jan 2025.
2. Acceptable Use Policies for Foundation Models
   Kevin Klyman. 29 Aug 2024.
3. WildChat: 1M ChatGPT Interaction Logs in the Wild
   Wenting Zhao, Xiang Ren, Jack Hessel, Claire Cardie, Yejin Choi, Yuntian Deng. 02 May 2024.
4. Training language models to follow instructions with human feedback
   Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe. 04 Mar 2022. Tags: OSLM, ALM.