Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models!
arXiv:2402.04699 · 7 February 2024
Shashank Kotyan, Poyuan Mao, Pin-Yu Chen, Danilo Vasconcellos Vargas
Communities: AAML, DiffM

Papers citing "Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models!"

4 / 4 papers shown

1. Adversarial Diffusion Distillation
   Axel Sauer, Dominik Lorenz, A. Blattmann, Robin Rombach
   28 Nov 2023

2. Diffusion-LM Improves Controllable Text Generation
   Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, Tatsunori B. Hashimoto
   Communities: AI4CE
   27 May 2022

3. RobustBench: a standardized adversarial robustness benchmark
   Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein
   Communities: VLM
   19 Oct 2020

4. Constructing Unrestricted Adversarial Examples with Generative Models
   Yang Song, Rui Shu, Nate Kushman, Stefano Ermon
   Communities: GAN, AAML
   21 May 2018