Proximal Policy Optimization Actual Combat: Manipulating Output Tokenizer Length

10 August 2023
Miao Fan, Chen Hu, Shuchang Zhou
    AAML

Papers citing "Proximal Policy Optimization Actual Combat: Manipulating Output Tokenizer Length"

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM
04 Mar 2022