ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Don't Fine-Tune, Decode: Syntax Error-Free Tool Use via Constrained Decoding

10 October 2023
Authors: Kexun Zhang, Hongqiao Chen, Lei Li, W. Wang

Papers citing "Don't Fine-Tune, Decode: Syntax Error-Free Tool Use via Constrained Decoding"

3 / 3 papers shown
LLMs in the Imaginarium: Tool Learning through Simulated Trial and Error
Authors: Boshi Wang, Hao Fang, Jason Eisner, Benjamin Van Durme, Yu-Chuan Su
Topics: CLL
07 Mar 2024
Sketch-Guided Constrained Decoding for Boosting Blackbox Large Language Models without Logit Access
Authors: Saibo Geng, Berkay Döner, Chris Wendler, Martin Josifoski, Robert West
18 Jan 2024
Training language models to follow instructions with human feedback
Authors: Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Topics: OSLM, ALM
04 Mar 2022