ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models

14 October 2023
Alex Mei, Sharon Levy, William Yang Wang
AAML

Papers citing "ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models" (3 of 3 papers shown)

Be a Multitude to Itself: A Prompt Evolution Framework for Red Teaming
Rui Li, Peiyi Wang, Jingyuan Ma, Di Zhang, Lei Sha, Zhifang Sui
LLMAG · 22 Feb 2025

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM · 04 Mar 2022

Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp
AILaw, LRM · 18 Apr 2021