ResearchTrend.AI
Weaker Than You Think: A Critical Look at Weakly Supervised Learning
arXiv:2305.17442 | 27 May 2023
D. Zhu, Xiaoyu Shen, Marius Mosbach, Andreas Stephan, Dietrich Klakow
NoLa

Papers citing "Weaker Than You Think: A Critical Look at Weakly Supervised Learning"

7 / 7 papers shown

FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling
Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, T. Shinozaki
AAML | 215 | 861 | 0 | 15 Oct 2021

RAFT: A Real-World Few-Shot Text Classification Benchmark
Neel Alex, Eli Lifland, Lewis Tunstall, A. Thakur, Pegah Maham, ..., Carolyn Ashurst, Paul Sedille, A. Carlier, M. Noetel, Andreas Stuhlmuller
RALM | 173 | 56 | 0 | 28 Sep 2021

FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding
Yanan Zheng, Jing Zhou, Yujie Qian, Ming Ding, Chonghua Liao, Jian Li, Ruslan Salakhutdinov, Jie Tang, Sebastian Ruder, Zhilin Yang
ELM | 204 | 29 | 0 | 27 Sep 2021

FLEX: Unifying Evaluation for Few-Shot NLP
Jonathan Bragg, Arman Cohan, Kyle Lo, Iz Beltagy
195 | 104 | 0 | 15 Jul 2021

Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp
AILaw, LRM | 277 | 1,114 | 0 | 18 Apr 2021

Memorisation versus Generalisation in Pre-trained Language Models
Michael Tänzer, Sebastian Ruder, Marek Rei
84 | 50 | 0 | 16 Apr 2021

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
241 | 1,913 | 0 | 31 Dec 2020