Language Models in the Loop: Incorporating Prompting into Weak Supervision

4 May 2022
Ryan Smith, Jason Alan Fries, Braden Hancock, Stephen H. Bach

Papers citing "Language Models in the Loop: Incorporating Prompting into Weak Supervision"

20 papers shown

ScriptoriumWS: A Code Generation Assistant for Weak Supervision
Tzu-Heng Huang, Catherine Cao, Spencer Schoenberg, Harit Vishwakarma, Nicholas Roberts, Frederic Sala
17 Feb 2025

ARISE: Iterative Rule Induction and Synthetic Data Generation for Text Classification
Y. Meena, Vaibhav Singh, Ayush Maheshwari, Amrith Krishna, Ganesh Ramakrishnan
09 Feb 2025

WeShap: Weak Supervision Source Evaluation with Shapley Values
Naiqing Guan, Nick Koudas
16 Jun 2024

Mixed Distillation Helps Smaller Language Model Better Reasoning
Chenglin Li, Qianglong Chen, Liangyue Li, Wang Caiyu, Yicheng Li, Yin Zhang
17 Dec 2023

Let's Synthesize Step by Step: Iterative Dataset Synthesis with Large Language Models by Extrapolating Errors from Small Models
Ruida Wang, Wangchunshu Zhou, Mrinmaya Sachan
20 Oct 2023

UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition
Wenxuan Zhou, Sheng Zhang, Yu Gu, Muhao Chen, Hoifung Poon
07 Aug 2023

An Adaptive Method for Weak Supervision with Drifting Data
Alessio Mazzetto, Reza Esfandiarpoor, E. Upfal, Stephen H. Bach
02 Jun 2023

Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes
Simran Arora, Brandon Yang, Sabri Eyuboglu, A. Narayan, Andrew Hojel, Immanuel Trummer, Christopher Ré
19 Apr 2023

Regularized Data Programming with Automated Bayesian Prior Selection
Jacqueline R. M. A. Maasch, Hao Zhang, Qian Yang, Fei Wang, Volodymyr Kuleshov
17 Oct 2022

Honest Students from Untrusted Teachers: Learning an Interpretable Question-Answering Pipeline from a Pretrained Language Model
Jacob Eisenstein, D. Andor, Bernd Bohnet, Michael Collins, David M. Mimno
05 Oct 2022

Ask Me Anything: A simple strategy for prompting language models
Simran Arora, A. Narayan, Mayee F. Chen, Laurel J. Orr, Neel Guha, Kush S. Bhatia, Ines Chami, Frederic Sala, Christopher Ré
05 Oct 2022

AutoWS-Bench-101: Benchmarking Automated Weak Supervision with 100 Labels
Nicholas Roberts, Xintong Li, Tzu-Heng Huang, Dyah Adila, Spencer Schoenberg, Chengao Liu, Lauren Pick, Haotian Ma, Aws Albarghouthi, Frederic Sala
30 Aug 2022

Self-Consistency Improves Chain of Thought Reasoning in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
21 Mar 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022

Co-training Improves Prompt-based Learning for Large Language Models
Hunter Lang, Monica Agrawal, Yoon Kim, David Sontag
02 Feb 2022

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
28 Jan 2022

Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
15 Oct 2021

Creating Training Sets via Weak Indirect Supervision
Jieyu Zhang, Bohan Wang, Xiangchen Song, Yujing Wang, Yaming Yang, Jing Bai, Alexander Ratner
07 Oct 2021

Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg
01 Feb 2021

The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, ..., Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy
31 Dec 2020