The Dark Side of Human Feedback: Poisoning Large Language Models via User Inputs

1 September 2024
Bocheng Chen
Hanqing Guo
Guangjing Wang
Yuanda Wang
Qiben Yan
    AAML

Papers citing "The Dark Side of Human Feedback: Poisoning Large Language Models via User Inputs"

LLM in the Middle: A Systematic Review of Threats and Mitigations to Real-World LLM-based Systems
Vitor Hugo Galhardo Moia
Igor Jochem Sanz
Gabriel Antonio Fontes Rebello
Rodrigo Duarte de Meneses
Briland Hitaj
Ulf Lindqvist
12 Sep 2025
Towards Autonomous Reinforcement Learning for Real-World Robotic Manipulation with Large Language Models
IEEE Robotics and Automation Letters (IEEE RA-L), 2025
Niccolò Turcato
Matteo Iovino
Aris Synodinos
Alberto Dalla Libera
R. Carli
Pietro Falco
LM&Ro
06 Mar 2025