Generating Data for Symbolic Language with Large Language Models
arXiv:2305.13917 · 23 May 2023
Jiacheng Ye, Chengzu Li, Lingpeng Kong, Tao Yu

Papers citing "Generating Data for Symbolic Language with Large Language Models"
13 / 13 papers shown

ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback
Jiacheng Ye, Jiahui Gao, Jiangtao Feng, Zhiyong Wu, Tao Yu, Lingpeng Kong
SyDa, VLM · 71 · 69 · 0 · 22 Oct 2022

Addressing Resource and Privacy Constraints in Semantic Parsing Through Data Augmentation
Kevin Yang, Olivia Deng, Charles C. Chen, Richard Shin, Subhro Roy, Benjamin Van Durme
33 · 10 · 0 · 18 May 2022

Self-Consistency Improves Chain of Thought Reasoning in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
ReLM, BDL, LRM, AI4CE · 297 · 3,163 · 0 · 21 Mar 2022

Controllable Semantic Parsing via Retrieval Augmentation
Panupong Pasupat, Yuan Zhang, Kelvin Guu
123 · 47 · 0 · 16 Oct 2021

PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models
Torsten Scholak, Nathan Schucher, Dzmitry Bahdanau
146 · 373 · 0 · 10 Sep 2021

CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation
Yue Wang, Weishi Wang, Shafiq R. Joty, S. Hoi
204 · 1,451 · 0 · 02 Sep 2021

Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
Ofir Press, Noah A. Smith, M. Lewis
242 · 690 · 0 · 27 Aug 2021

Constrained Language Models Yield Few-Shot Semantic Parsers
Richard Shin, C. H. Lin, Sam Thomson, Charles C. Chen, Subhro Roy, Emmanouil Antonios Platanios, Adam Pauls, Dan Klein, J. Eisner, Benjamin Van Durme
290 · 196 · 0 · 18 Apr 2021

What Makes Good In-Context Examples for GPT-3?
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen
AAML, RALM · 275 · 1,296 · 0 · 17 Jan 2021

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
241 · 1,898 · 0 · 31 Dec 2020

Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU, SILM · 267 · 1,798 · 0 · 14 Dec 2020

Revisiting Iterative Back-Translation from the Perspective of Compositional Generalization
Yinuo Guo, Hualei Zhu, Zeqi Lin, Bei Chen, Jian-Guang Lou, Dongmei Zhang
BDL · 179 · 26 · 0 · 08 Dec 2020

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
226 · 4,424 · 0 · 23 Jan 2020