
arXiv:2402.09346
LLMAuditor: A Framework for Auditing Large Language Models Using Human-in-the-Loop

14 February 2024
Maryam Amirizaniani
Jihan Yao
Adrian Lavergne
Elizabeth Snell Okada
Aman Chadha
Tanya Roosta
Chirag Shah
Topics: HILM

Papers citing "LLMAuditor: A Framework for Auditing Large Language Models Using Human-in-the-Loop"

3 papers shown

1. From Prompt Engineering to Prompt Science With Human in the Loop
   Chirag Shah
   26 / 9 / 0
   01 Jan 2024

2. Multiple-Choice Question Generation: Towards an Automated Assessment Framework
   Vatsal Raina, Mark J. F. Gales
   Topics: AI4Ed, ELM
   23 / 30 / 0
   23 Sep 2022

3. Language Models as Knowledge Bases?
   Fabio Petroni, Tim Rocktaschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
   Topics: KELM, AI4MH
   396 / 2,576 / 0
   03 Sep 2019