What is the Alignment Objective of GRPO?

Milan Vojnovic, Se-Young Yun
25 February 2025 · arXiv:2502.18548

Papers citing "What is the Alignment Objective of GRPO?" (2 papers)
MultiClear: Multimodal Soft Exoskeleton Glove for Transparent Object Grasping Assistance
Chen Hu, Timothy Neate, Shan Luo, Letizia Gionfrida
04 Apr 2025
Robust Reinforcement Learning from Human Feedback for Large Language Models Fine-Tuning
Kai Ye, Hongyi Zhou, Jin Zhu, Francesco Quinzan, C. Shi
03 Apr 2025