PoisonBench: Assessing Large Language Model Vulnerability to Data Poisoning

11 October 2024
Authors: Tingchen Fu, Mrinank Sharma, Philip H. S. Torr, Shay B. Cohen, David M. Krueger, Fazl Barez
Topic tag: AAML

Papers citing "PoisonBench: Assessing Large Language Model Vulnerability to Data Poisoning"

1 / 1 papers shown
Title: BoT: Breaking Long Thought Processes of o1-like Large Language Models through Backdoor Attack
Authors: Zihao Zhu, Hongbao Zhang, Mingda Zhang, Ruotong Wang, Guanzong Wu, Ke Xu, Baoyuan Wu
Topic tags: AAML, LRM
Date: 16 February 2025