DxHF: Providing High-Quality Human Feedback for LLM Alignment via Interactive Decomposition

ACM Symposium on User Interface Software and Technology (UIST), 2025
24 July 2025
Danqing Shi
Furui Cheng
Tino Weinkauf
Antti Oulasvirta
Mennatallah El-Assady
arXiv: 2507.18802 (abs | PDF | HTML)

Papers citing "DxHF: Providing High-Quality Human Feedback for LLM Alignment via Interactive Decomposition"

1 citing paper shown
Interactive Groupwise Comparison for Reinforcement Learning from Human Feedback
Jan Kompatscher
Danqing Shi
Giovanna Varni
Tino Weinkauf
Antti Oulasvirta
VLM
06 Jul 2025