ResearchTrend.AI

© 2026 ResearchTrend.AI, All rights reserved.

Sampling-aware Adversarial Attacks Against Large Language Models

6 July 2025
Tim Beyer, Yan Scholten, Leo Schwinn, Stephan Günnemann
AAML
arXiv: 2507.04446 (abs · PDF · HTML · GitHub)

Papers citing "Sampling-aware Adversarial Attacks Against Large Language Models"

2 papers
AdversariaLLM: A Unified and Modular Toolbox for LLM Robustness Research
Tim Beyer, Jonas Dornbusch, Jakob Steimle, Moritz Ladenburger, Leo Schwinn, Stephan Günnemann
AAML · 06 Nov 2025
Diffusion LLMs are Natural Adversaries for any LLM
David Lüdke, Tom Wollschlager, Paul Ungermann, Stephan Günnemann, Leo Schwinn
DiffM · 31 Oct 2025