
Seeing Is Not Always Believing: Invisible Collision Attack and Defence on Pre-Trained Models

24 September 2023
Minghan Deng
Zhong Zhang
Junming Shao
    AAML
ArXiv · PDF · HTML

Papers citing "Seeing Is Not Always Believing: Invisible Collision Attack and Defence on Pre-Trained Models"

1 of 1 citing papers shown:

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM · ALM
313 · 11,915 · 0
04 Mar 2022